id | title | text | formulas | url
---|---|---|---|---|
7960957 | Form factor (electronics) | In electronics and electrical engineering, the form factor of an alternating current waveform (signal) is the ratio of the RMS (root mean square) value to the average value (mathematical mean of absolute values of all points on the waveform). It identifies the ratio of the direct current of equal power relative to the given alternating current. The former can also be defined as the direct current that will produce equivalent heat.
Calculating the form factor.
For an ideal, continuous wave function over time T, the RMS can be calculated in integral form:
formula_0
The rectified average is then the mean of the integral of the function's absolute value:
formula_1
The quotient of these two values is the form factor, formula_2, or in unambiguous situations, formula_3.
formula_4
formula_5 reflects the variation in the function's distance from the average, and is disproportionately impacted by large deviations from the unrectified average value.
It will always be at least as large as formula_6, which only measures the absolute distance from said average. The form factor thus cannot be smaller than 1, the value attained by a square wave, in which all momentary values are equally far above or below the average value (see below); it has no theoretical upper limit for functions with sufficient deviation.
formula_7
can be used for combining signals of different frequencies (for example, for harmonics), while for the same frequency, formula_8.
As ARVs on the same domain can be summed as
formula_9,
the form factor of a complex wave composed of multiple waves of the same frequency can sometimes be calculated as
formula_10.
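As a minimal numerical sketch of these definitions (in Python, assuming NumPy; the waveforms and grid size are illustrative choices only), the RMS and rectified average can be approximated by discretizing the integrals over one period:

```python
import numpy as np

# Discrete approximations of formula_0 (RMS) and formula_1 (ARV) over one period.
T = 2 * np.pi
t = np.linspace(0.0, T, 200_001)

def form_factor(x):
    rms = np.sqrt(np.trapz(x**2, t) / T)   # root mean square
    arv = np.trapz(np.abs(x), t) / T       # average rectified value
    return rms / arv                       # the form factor, formula_4

print(form_factor(np.sin(t)))           # ~1.1107 = pi / (2*sqrt(2)), the sine wave
print(form_factor(np.sign(np.sin(t))))  # ~1.0, the square-wave lower bound
```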
Application.
AC measuring instruments are often built with specific waveforms in mind. For example, many multimeters on their AC ranges are specifically scaled to display the RMS value of a sine wave. Since the RMS calculation can be difficult to achieve digitally, the absolute average is calculated instead and the result multiplied by the form factor of a sinusoid. This method will give less accurate readings for waveforms other than a sinewave, and the instruction plate on the rear of an Avometer states this explicitly.
The squaring in RMS and the absolute value in ARV mean that both the values and the form factor are independent of the wave function's sign (and thus, the electrical signal's direction) at any point. For this reason, the form factor is the same for a direction-changing wave with a regular average of 0 and its fully rectified version.
The form factor, formula_2, is the smallest of the three wave factors, the other two being crest factor formula_11 and the lesser-known averaging factor formula_12.
formula_13
Due to their definitions (all relying on the Root Mean Square, Average rectified value and maximum amplitude of the waveform), the three factors are related by formula_14, so the form factor can be calculated with formula_15.
Specific form factors.
formula_16 represents the amplitude of the function, and any other coefficients applied in the vertical dimension. For example, formula_17 can be analyzed as formula_18. As both RMS and ARV are directly proportional to it, it has no effect on the form factor, and can be replaced with a normalized 1 for calculating that value.
formula_19 is the duty cycle, the ratio of the "pulse" time formula_20 (when the function's value is not zero) to the full wave period formula_21. Most basic wave functions only achieve 0 for infinitely short instants, and can thus be considered as having formula_22. However, the form factor of any of the non-pulsing functions below can be multiplied by formula_23 to obtain the form factor of the corresponding pulsed wave. This is illustrated with the half-rectified sine wave, which can be considered a pulsed full-rectified sine wave with formula_24, and has formula_25.
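A numerical check of this duty-cycle rule (Python with NumPy; the grid size is an illustrative choice) compares the full-rectified and half-rectified sine waves:

```python
import numpy as np

# The half-rectified sine is a full-rectified sine pulsed with D = 1/2, so its
# form factor should be sqrt(2) times larger, as stated in formula_25.
T = 2 * np.pi
t = np.linspace(0.0, T, 400_001)

def form_factor(x):
    return np.sqrt(np.trapz(x**2, t) / T) / (np.trapz(np.abs(x), t) / T)

k_full = form_factor(np.abs(np.sin(t)))                        # D = 1
k_half = form_factor(np.where(np.sin(t) > 0, np.sin(t), 0.0))  # D = 1/2
print(k_full, k_half, k_half / k_full)   # ~1.1107, ~1.5708, ~1.4142 = sqrt(2)
```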
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nX_\\mathrm{rms} = \\sqrt {{1 \\over {T}} {\\int_{t_0}^{t_0+T} {[x(t)]}^2\\, dt}}\n"
},
{
"math_id": 1,
"text": "\nX_\\mathrm{arv} = {1 \\over {T}} {\\int_{t_0}^{t_0+T} {|x(t)|\\, dt}}\n"
},
{
"math_id": 2,
"text": "k_\\mathrm{f}"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "\nk_\\mathrm{f} = \\frac \\mathrm{RMS} \\mathrm{ARV} = \\frac{\\sqrt {{1 \\over {T}} {\\int_{t_0}^{t_0+T} {[x(t)]}^2\\, dt}}}{{1 \\over {T}} {\\int_{t_0}^{t_0+T} {|x(t)|\\, dt}}} = \\frac{\\sqrt{T\\int_{t_0}^{t_0+T}{{[x(t)]}^2\\, dt}}}{\\int_{t_0}^{t_0+T} {|x(t)|\\, dt}}\n"
},
{
"math_id": 5,
"text": "X_\\mathrm{rms}"
},
{
"math_id": 6,
"text": "X_\\mathrm{arv}"
},
{
"math_id": 7,
"text": "\\mathrm{RMS}_\\mathrm{total} = \\sqrt{{{\\mathrm{RMS}_1}^2} + {{\\mathrm{RMS}_2}^2} + ... + {{\\mathrm{RMS}_n}^2}}"
},
{
"math_id": 8,
"text": "\\mathrm{RMS}_\\mathrm{total} = \\mathrm{RMS}_1 + \\mathrm{RMS}_2 + ... + \\mathrm{RMS}_n"
},
{
"math_id": 9,
"text": "\\mathrm{ARV}_\\mathrm{total} = \\mathrm{ARV}_1 + \\mathrm{ARV}_2 + ... + \\mathrm{ARV}_n"
},
{
"math_id": 10,
"text": "k_{\\mathrm{f}_\\mathrm{tot}} = \\frac{\\mathrm{RMS}_\\mathrm{tot}}{\\mathrm{ARV}_\\mathrm{tot}} = \\frac{\\mathrm{RMS}_1 + ... + \\mathrm{RMS}_n}{\\mathrm{ARV}_1 + ... + \\mathrm{ARV}_n}"
},
{
"math_id": 11,
"text": "k_\\mathrm{a} = \\frac{X_\\mathrm{max}}{X_\\mathrm{rms}}"
},
{
"math_id": 12,
"text": "k_\\mathrm{av} = \\frac{X_\\mathrm{max}}{X_\\mathrm{arv}}"
},
{
"math_id": 13,
"text": "k_\\mathrm{av} \\ge k_\\mathrm{a} \\ge k_\\mathrm{f}"
},
{
"math_id": 14,
"text": "k_\\mathrm{av} = k_\\mathrm{a} k_\\mathrm{f}"
},
{
"math_id": 15,
"text": "k_\\mathrm{f} = \\frac{k_\\mathrm{av}}{k_\\mathrm{a}}"
},
{
"math_id": 16,
"text": "a"
},
{
"math_id": 17,
"text": "8 \\sin(t)"
},
{
"math_id": 18,
"text": "f(t) = a \\sin(t),\\ a = 8"
},
{
"math_id": 19,
"text": "D = {\\tau\\over T}"
},
{
"math_id": 20,
"text": "\\tau"
},
{
"math_id": 21,
"text": "T"
},
{
"math_id": 22,
"text": "\\tau = T, D = 1"
},
{
"math_id": 23,
"text": "{\\sqrt{D}\\over D} = {1\\over\\sqrt{D}} = \\sqrt{{T\\over\\tau}}"
},
{
"math_id": 24,
"text": "D = {1\\over2}"
},
{
"math_id": 25,
"text": "k_\\mathrm{f} = k_{\\mathrm{f}_\\mathrm{frs}}\\sqrt{2}"
}
] | https://en.wikipedia.org/wiki?curid=7960957 |
7962 | Logical disjunction | Logical connective OR
In logic, disjunction (also known as logical disjunction, logical "or", logical addition, or inclusive disjunction) is a logical connective typically notated as formula_0 and read aloud as "or". For instance, the English language sentence "it is sunny or it is warm" can be represented in logic using the disjunctive formula formula_1, assuming that formula_2 abbreviates "it is sunny" and formula_3 abbreviates "it is warm".
In classical logic, disjunction is given a truth functional semantics according to which a formula formula_4 is true unless both formula_5 and formula_6 are false. Because this semantics allows a disjunctive formula to be true when both of its disjuncts are true, it is an "inclusive" interpretation of disjunction, in contrast with exclusive disjunction. Classical proof theoretical treatments are often given in terms of rules such as disjunction introduction and disjunction elimination. Disjunction has also been given numerous non-classical treatments, motivated by problems including Aristotle's sea battle argument, Heisenberg's uncertainty principle, as well as the numerous mismatches between classical disjunction and its nearest equivalents in natural languages.
An operand of a disjunction is a disjunct.
Inclusive and exclusive disjunction.
Because the logical "or" means a disjunction formula is true when either one or both of its parts are true, it is referred to as an "inclusive" disjunction. This is in contrast with an exclusive disjunction, which is true when one or the other of the arguments are true, but not both (referred to as "exclusive or", or "XOR").
When it is necessary to clarify whether inclusive or exclusive "or" is intended, English speakers sometimes uses the phrase "and/or". In terms of logic, this phrase is identical to "or", but makes the inclusion of both being true explicit.
Notation.
In logic and related fields, disjunction is customarily notated with an infix operator formula_7 (Unicode U+2228 ∨ LOGICAL OR). Alternative notations include formula_8, used mainly in electronics, as well as formula_9 and formula_10 in many programming languages. The English word "or" is sometimes used as well, often in capital letters. In Jan Łukasiewicz's prefix notation for logic, the operator is formula_11, short for Polish "alternatywa" (English: alternative).
Classical disjunction.
Semantics.
In the semantics of logic, classical disjunction is a truth functional operation which returns the truth value "true" unless both of its arguments are "false". Its semantic entry is standardly given as follows:
formula_12 if formula_13 or formula_14 or both
This semantics corresponds to the following truth table:
Defined by other operators.
In classical logic systems where logical disjunction is not a primitive, it can be defined in terms of the primitive "and" (formula_15) and "not" (formula_16) as:
formula_17.
Alternatively, it may be defined in terms of "implies" (formula_18) and "not" as:
formula_19.
The latter can be checked by the following truth table:
It may also be defined solely in terms of formula_18:
formula_20.
It can be checked by the following truth table:
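The referenced truth tables are not reproduced here, but the three definitions can be cross-checked mechanically; the following Python sketch enumerates every truth assignment and confirms that each definition agrees with inclusive "or":

```python
from itertools import product

def implies(a, b):          # material conditional
    return (not a) or b

for A, B in product([False, True], repeat=2):
    by_and_not  = not ((not A) and (not B))   # A or B = not(not A and not B)
    by_imp_not  = implies(not A, B)           # A or B = (not A) implies B
    by_imp_only = implies(implies(A, B), B)   # A or B = (A implies B) implies B
    assert by_and_not == by_imp_not == by_imp_only == (A or B)
```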
Properties.
The following properties apply to disjunction:
formula_24
formula_25
formula_26
formula_29
Applications in computer science.
Operators corresponding to logical disjunction exist in most programming languages.
Bitwise operation.
Disjunction is often used for bitwise operations. Examples:
The codice_0 operator can be used to set bits in a bit field to 1, by codice_0-ing the field with a constant field with the relevant bits set to 1. For example, codice_2 will force the final bit to 1, while leaving other bits unchanged.
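A hypothetical concrete version of this bit-setting idiom, written with Python's bitwise-or operator (the particular masks and names are illustrative only):

```python
flags = 0b10100000
print(bin(flags | 0b00000001))   # 0b10100001: the final bit is forced to 1

# OR-ing with a mask sets every bit that is 1 in the mask, leaving the rest unchanged.
ENABLE_LOGGING = 0b0001
ENABLE_CACHE   = 0b0100
options = 0
options |= ENABLE_LOGGING | ENABLE_CACHE
print(bin(options))              # 0b101
```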
Logical operation.
Many languages distinguish between bitwise and logical disjunction by providing two distinct operators; in languages following C, bitwise disjunction is performed with the single pipe operator (codice_3), and logical disjunction with the double pipe (codice_4) operator.
Logical disjunction is usually short-circuited; that is, if the first (left) operand evaluates to codice_5, then the second (right) operand is not evaluated. The logical disjunction operator thus usually constitutes a sequence point.
In a parallel (concurrent) language, it is possible to short-circuit both sides: they are evaluated in parallel, and if one terminates with value true, the other is interrupted. This operator is thus called the "parallel or".
Although the type of a logical disjunction expression is Boolean in most languages (and thus can only have the value codice_5 or codice_7), in some languages (such as Python and JavaScript), the logical disjunction operator returns one of its operands: the first operand if it evaluates to a true value, and the second operand otherwise. This allows it to fulfill the role of the Elvis operator.
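A short Python illustration of both behaviours (the function and variable names are illustrative): the right operand is skipped when the left operand already decides the result, and the operator returns an operand rather than a strict Boolean, which is what enables the default-value idiom.

```python
def noisy(value):
    print("evaluated", value)
    return value

result = noisy(True) or noisy(False)   # prints only "evaluated True" (short circuit)

user_supplied_name = ""                # falsy, so the default is chosen
name = user_supplied_name or "anonymous"
print(name)                            # anonymous
```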
Constructive disjunction.
The Curry–Howard correspondence relates a constructivist form of disjunction to tagged union types.
Set theory.
The membership of an element of a union set in set theory is defined in terms of a logical disjunction: formula_30. Because of this, logical disjunction satisfies many of the same identities as set-theoretic union, such as associativity, commutativity, distributivity, and de Morgan's laws, identifying logical conjunction with set intersection and logical negation with set complement.
Natural language.
Disjunction in natural languages does not precisely match the interpretation of formula_7 in classical logic. Notably, classical disjunction is inclusive while natural language disjunction is often understood exclusively, as the following English example typically would be.
* Mary is eating an apple or a pear.
formula_31 Mary is not eating both an apple and a pear.
This inference has sometimes been understood as an entailment, for instance by Alfred Tarski, who suggested that natural language disjunction is ambiguous between a classical and a nonclassical interpretation. More recent work in pragmatics has shown that this inference can be derived as a conversational implicature on the basis of a semantic denotation which behaves classically. However, disjunctive constructions including Hungarian "vagy... vagy" and French "soit... soit" have been argued to be inherently exclusive, rendering ungrammaticality in contexts where an inclusive reading would otherwise be forced.
Similar deviations from classical logic have been noted in cases such as free choice disjunction and simplification of disjunctive antecedents, where certain modal operators trigger a conjunction-like interpretation of disjunction. As with exclusivity, these inferences have been analyzed both as implicatures and as entailments arising from a nonclassical interpretation of disjunction.
* You can have an apple or a pear.
formula_31 You can have an apple and you can have a pear (but you can't have both)
In many languages, disjunctive expressions play a role in question formation.
* Is Mary a philosopher or a linguist?
For instance, while the above English example can be interpreted as a polar question asking whether it's true that Mary is either a philosopher or a linguist, it can also be interpreted as an alternative question asking which of the two professions is hers. The role of disjunction in these cases has been analyzed using nonclassical logics such as alternative semantics and inquisitive semantics, which have also been adopted to explain the free choice and simplification inferences.
In English, as in many other languages, disjunction is expressed by a coordinating conjunction. Other languages express disjunctive meanings in a variety of ways, though it is unknown whether disjunction itself is a linguistic universal. In many languages such as Dyirbal and Maricopa, disjunction is marked using a verb suffix. For instance, in the Maricopa example below, disjunction is marked by the suffix "šaa".
<templatestyles src="Interlinear/styles.css" />
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\lor "
},
{
"math_id": 1,
"text": " S \\lor W "
},
{
"math_id": 2,
"text": "S"
},
{
"math_id": 3,
"text": "W"
},
{
"math_id": 4,
"text": "\\phi \\lor \\psi"
},
{
"math_id": 5,
"text": "\\phi"
},
{
"math_id": 6,
"text": "\\psi"
},
{
"math_id": 7,
"text": "\\lor"
},
{
"math_id": 8,
"text": "+"
},
{
"math_id": 9,
"text": "\\vert"
},
{
"math_id": 10,
"text": "\\vert\\!\\vert"
},
{
"math_id": 11,
"text": "A"
},
{
"math_id": 12,
"text": " \\models \\phi \\lor \\psi"
},
{
"math_id": 13,
"text": " \\models \\phi"
},
{
"math_id": 14,
"text": "\\models \\psi"
},
{
"math_id": 15,
"text": "\\land"
},
{
"math_id": 16,
"text": "\\lnot"
},
{
"math_id": 17,
"text": "A \\lor B = \\neg ((\\neg A) \\land (\\neg B))"
},
{
"math_id": 18,
"text": "\\to"
},
{
"math_id": 19,
"text": "A \\lor B = (\\lnot A) \\to B"
},
{
"math_id": 20,
"text": "A \\lor B = (A \\to B) \\to B"
},
{
"math_id": 21,
"text": "a \\lor (b \\lor c) \\equiv (a \\lor b) \\lor c "
},
{
"math_id": 22,
"text": "a \\lor b \\equiv b \\lor a "
},
{
"math_id": 23,
"text": "(a \\land (b \\lor c)) \\equiv ((a \\land b) \\lor (a \\land c))"
},
{
"math_id": 24,
"text": "(a \\lor (b \\land c)) \\equiv ((a \\lor b) \\land (a \\lor c))"
},
{
"math_id": 25,
"text": "(a \\lor (b \\lor c)) \\equiv ((a \\lor b) \\lor (a \\lor c))"
},
{
"math_id": 26,
"text": "(a \\lor (b \\equiv c)) \\equiv ((a \\lor b) \\equiv (a \\lor c))"
},
{
"math_id": 27,
"text": "a \\lor a \\equiv a "
},
{
"math_id": 28,
"text": "(a \\rightarrow b) \\rightarrow ((c \\lor a) \\rightarrow (c \\lor b))"
},
{
"math_id": 29,
"text": "(a \\rightarrow b) \\rightarrow ((a \\lor c) \\rightarrow (b \\lor c))"
},
{
"math_id": 30,
"text": "x\\in A\\cup B\\Leftrightarrow (x\\in A)\\vee(x\\in B)"
},
{
"math_id": 31,
"text": "\\rightsquigarrow"
}
] | https://en.wikipedia.org/wiki?curid=7962 |
796233 | Spaceship (cellular automaton) | Type of pattern that periodically changes position
style="text-align:right"|
In a cellular automaton, a finite pattern is called a spaceship if it reappears after a certain number of generations in the same orientation but in a different position. The smallest such number of generations is called the period of the spaceship.
Description.
The speed of a spaceship is often expressed in terms of "c", the metaphorical speed of light (one cell per generation) which in many cellular automata is the fastest that an effect can spread. For example, a glider in Conway's Game of Life is said to have a speed of formula_0, as it takes four generations for a given state to be translated by one cell. Similarly, the "lightweight spaceship" is said to have a speed of formula_1, as it takes four generations for a given state to be translated by two cells. More generally, if a spaceship in a 2D automaton with the Moore neighborhood is translated by formula_2 after formula_3 generations, then the speed formula_4 is defined as:
formula_5
This notation can be readily generalised to cellular automata with dimensionality other than two.
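A small sketch of this definition (in Python; exact rational arithmetic via the standard library), using the glider and lightweight spaceship figures quoted above:

```python
from fractions import Fraction

def spaceship_speed(x, y, n):
    """Speed in units of c for a displacement (x, y) after n generations."""
    return Fraction(max(abs(x), abs(y)), n)

print(spaceship_speed(1, 1, 4))   # 1/4: the glider travels at c/4
print(spaceship_speed(2, 0, 4))   # 1/2: the lightweight spaceship travels at c/2
```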
A pullalong is a pattern that is not a spaceship in itself but that can be attached to the back of a spaceship to form a larger spaceship. Similarly, a pushalong is placed at the front. The term tagalong can refer to either of these patterns or a pattern that can be placed at the side of a spaceship to form a larger spaceship.
A pattern that, when a spaceship is input, outputs a copy of the spaceship travelling in a different direction is called a reflector. If the output is instead a different spaceship, the pattern is known as a converter.
Spaceships are important because they can sometimes be modified to produce puffers. Spaceships can also be used to transmit information. For example, in Conway's Game of Life, the ability of the glider (Life's simplest spaceship) to transmit information is part of a proof that Life is Turing-complete.
In March 2016, the unexpected discovery of a small but high-period spaceship enthused the Game of Life community. It was named "copperhead". A similar example, called "loafer", was found a few years earlier.
In March 2018, the first elementary spaceship with displacement (2,1) (knightwise) was discovered and named Sir Robin.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c/4"
},
{
"math_id": 1,
"text": "c/2"
},
{
"math_id": 2,
"text": "(x, y)"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "v=\\frac{\\max\\left(|x|,|y|\\right)}{n}\\,c"
}
] | https://en.wikipedia.org/wiki?curid=796233 |
7962589 | Deterministic context-free language | In formal language theory, deterministic context-free languages (DCFL) are a proper subset of context-free languages. They are the context-free languages that can be accepted by a deterministic pushdown automaton. DCFLs are always unambiguous, meaning that they admit an unambiguous grammar. There are non-deterministic unambiguous CFLs, so DCFLs form a proper subset of unambiguous CFLs.
DCFLs are of great practical interest, as they can be parsed in linear time, and various restricted forms of DCFGs admit simple practical parsers. They are thus widely used throughout computer science.
Description.
The notion of the DCFL is closely related to the deterministic pushdown automaton (DPDA). DCFLs are the languages to which the power of pushdown automata is reduced when the automata are made deterministic: a deterministic pushdown automaton cannot choose between different state-transition alternatives and as a consequence cannot recognize all context-free languages. Unambiguous grammars do not always generate a DCFL. For example, the language of even-length palindromes on the alphabet of 0 and 1 has the unambiguous context-free grammar S → 0S0 | 1S1 | ε. An arbitrary string of this language cannot be parsed without reading all its letters first, which means that a pushdown automaton has to try alternative state transitions to accommodate for the different possible lengths of a semi-parsed string.
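For contrast, a minimal Python sketch of a deterministic pushdown recognizer for the DCFL {aⁿbⁿ : n ≥ 0} is shown below; every input symbol determines a unique move, so no alternative transitions ever need to be explored, unlike the even-length palindrome language just discussed.

```python
def accepts_anbn(w: str) -> bool:
    """Deterministic pushdown recognizer for { a^n b^n : n >= 0 }."""
    stack = []
    seen_b = False
    for ch in w:
        if ch == "a":
            if seen_b:           # an 'a' after a 'b' can never lead to acceptance
                return False
            stack.append("A")    # push one marker per 'a'
        elif ch == "b":
            seen_b = True
            if not stack:        # more b's than a's
                return False
            stack.pop()          # match one 'a' per 'b'
        else:
            return False
    return not stack             # accept iff every 'a' was matched

print(accepts_anbn("aabb"), accepts_anbn("aab"))   # True False
```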
Properties.
Deterministic context-free languages can be recognized by a deterministic Turing machine in polynomial time and O(log² "n") space; as a corollary, DCFL is a subset of the complexity class SC.
The set of deterministic context-free languages is closed under the following operations:
The set of deterministic context-free languages is "not" closed under the following operations:
Importance.
The languages of this class have great practical importance in computer science as they can be parsed much more efficiently than nondeterministic context-free languages. The complexity of the program and execution time of a deterministic pushdown automaton is vastly less than that of a nondeterministic one. In the naive implementation, the latter must make copies of the stack every time a nondeterministic step occurs. The best known algorithm to test membership in any context-free language is Valiant's algorithm, taking O("n"^2.378) time, where "n" is the length of the string. On the other hand, deterministic context-free languages can be accepted in O("n") time by an LR(k) parser. This is very important for computer language translation because many computer languages belong to this class of languages.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " L "
}
] | https://en.wikipedia.org/wiki?curid=7962589 |
7963 | Disjunctive syllogism | Logical rule of inference
In classical logic, disjunctive syllogism (historically known as modus tollendo ponens (MTP), Latin for "mode that affirms by denying") is a valid argument form which is a syllogism having a disjunctive statement for one of its premises.
An example in English:
* I will choose soup or I will choose salad.
* I will not choose soup.
* Therefore, I will choose salad.
Propositional logic.
In propositional logic, disjunctive syllogism (also known as disjunction elimination and or elimination, or abbreviated ∨E), is a valid rule of inference. If it is known that at least one of two statements is true, and that it is not the former that is true; we can infer that it has to be the latter that is true. Equivalently, if "P" is true or "Q" is true and "P" is false, then "Q" is true. The name "disjunctive syllogism" derives from its being a syllogism, a three-step argument, and the use of a logical disjunction (any "or" statement.) For example, "P or Q" is a disjunction, where P and Q are called the statement's "disjuncts". The rule makes it possible to eliminate a disjunction from a logical proof. It is the rule that
formula_2
where the rule is that whenever instances of "formula_3", and "formula_4" appear on lines of a proof, "formula_1" can be placed on a subsequent line.
Disjunctive syllogism is closely related and similar to hypothetical syllogism, which is another rule of inference involving a syllogism. It is also related to the law of noncontradiction, one of the three traditional laws of thought.
Formal notation.
For a logical system that validates it, the "disjunctive syllogism" may be written in sequent notation as
formula_5
where formula_6 is a metalogical symbol meaning that formula_1 is a syntactic consequence of formula_3, and formula_7.
It may be expressed as a truth-functional tautology or theorem in the object language of propositional logic as
formula_8
where formula_0, and formula_1 are propositions expressed in some formal system.
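As a quick mechanical check (a minimal Python sketch), the object-language form of the rule can be confirmed to be a tautology by enumerating all truth assignments:

```python
from itertools import product

def implies(a, b):
    return (not a) or b

# ((P or Q) and not P) -> Q holds under every assignment.
assert all(implies((P or Q) and (not P), Q)
           for P, Q in product([False, True], repeat=2))
```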
Natural language examples.
Here is an example:
* It is either red or blue.
* It is not blue.
* Therefore, it is red.
Here is another example:
* The cake is either in the kitchen or in the dining room.
* The cake is not in the kitchen.
* Therefore, the cake is in the dining room.
Strong form.
"Modus tollendo ponens" can be made stronger by using exclusive disjunction instead of inclusive disjunction as a premise:
formula_9
Related argument forms.
Unlike "modus ponens" and "modus ponendo tollens", with which it should not be confused, disjunctive syllogism is often not made an explicit rule or axiom of logical systems, as the above arguments can be proven with a combination of reductio ad absurdum and disjunction elimination.
Other forms of syllogism include:
Disjunctive syllogism holds in classical propositional logic and intuitionistic logic, but not in some paraconsistent logics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\frac{P \\lor Q, \\neg P}{\\therefore Q}"
},
{
"math_id": 3,
"text": "P \\lor Q"
},
{
"math_id": 4,
"text": "\\neg P"
},
{
"math_id": 5,
"text": " P \\lor Q, \\lnot P \\vdash Q "
},
{
"math_id": 6,
"text": "\\vdash"
},
{
"math_id": 7,
"text": "\\lnot P"
},
{
"math_id": 8,
"text": " ((P \\lor Q) \\land \\neg P) \\to Q"
},
{
"math_id": 9,
"text": "\\frac{P \\underline\\lor Q, \\neg P}{\\therefore Q}"
}
] | https://en.wikipedia.org/wiki?curid=7963 |
7965029 | Excitation table | In electronics design, an excitation table shows the minimum inputs that are necessary to generate a particular next state (in other words, to "excite" it to the next state) when the current state is known. They are similar to truth tables and state tables, but rearrange the data so that the current state and next state are next to each other on the left-hand side of the table, and the inputs needed to make that state change happen are shown on the right side of the table.
Flip-flop excitation tables.
In order to complete the excitation table of a flip-flop, one needs to write out Q(t) and Q(t + 1) for all possible cases (i.e., 00, 01, 10, and 11), and then determine, for each transition, the flip-flop input value that produces the desired Q(t + 1).
T flip-flop.
The characteristic equation of a T flip-flop is formula_0.
SR flip-flop.
The characteristic equation of a SR flip-flop is formula_1.
JK flip-flop.
The characteristic equation of a JK flip-flop is formula_2.
D flip-flop.
The characteristic equation of a D flip-flop is formula_3
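The excitation tables themselves can be derived mechanically from these characteristic equations. The following Python sketch (illustrative only) does this for the JK and T flip-flops: for each transition Q(t) → Q(t + 1) it lists every input combination that produces it, and transitions admitting several combinations are the source of the usual "don't care" entries.

```python
from itertools import product

def jk_next(q, j, k):
    return int((j and not q) or ((not k) and q))   # Q(next) = JQ' + K'Q

def t_next(q, t):
    return int(q != t)                             # Q(next) = T xor Q

for q, q_next in product([0, 1], repeat=2):
    jk_options = [(j, k) for j, k in product([0, 1], repeat=2)
                  if jk_next(q, j, k) == q_next]
    t_options = [t for t in (0, 1) if t_next(q, t) == q_next]
    print(f"Q(t)={q} -> Q(t+1)={q_next}: JK {jk_options}, T {t_options}")
```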
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q(\\text{next}) = TQ' + T'Q = T \\oplus Q"
},
{
"math_id": 1,
"text": "Q(\\text{next}) = S + QR'"
},
{
"math_id": 2,
"text": "Q(\\text{next}) = JQ' + K'Q"
},
{
"math_id": 3,
"text": "Q(\\text{next}) = D"
}
] | https://en.wikipedia.org/wiki?curid=7965029 |
7965327 | Nested sampling algorithm | The nested sampling algorithm is a computational approach to the Bayesian statistics problems of comparing models and generating samples from posterior distributions. It was developed in 2004 by physicist John Skilling.
Background.
Bayes' theorem can be applied to a pair of competing models formula_0 and formula_1 for data formula_2, one of which may be true (though which one is unknown) but which cannot both be true simultaneously. The posterior probability for formula_0 may be calculated as:
formula_3
The prior probabilities formula_0 and formula_1 are already known, as they are chosen by the researcher ahead of time. However, the remaining Bayes factor formula_4 is not so easy to evaluate, since in general it requires marginalizing nuisance parameters. Generally, formula_0 has a set of parameters that can be grouped together and called formula_5, and formula_1 has its own vector of parameters that may be of different dimensionality, but is still termed formula_5. The marginalization for formula_0 is
formula_6
and likewise for formula_1. This integral is often analytically intractable, and in these cases it is necessary to employ a numerical algorithm to find an approximation. The nested sampling algorithm was developed by John Skilling specifically to approximate these marginalization integrals, and it has the added benefit of generating samples from the posterior distribution formula_7. It is an alternative to methods from the Bayesian literature such as bridge sampling and defensive importance sampling.
Here is a simple version of the nested sampling algorithm, followed by a description of how it computes the marginal probability density formula_8 where formula_9 is formula_0 or formula_1:
Start with formula_10 points formula_11 sampled from prior.
for formula_12 to formula_13 do % The number of iterations j is chosen by guesswork.
formula_14current likelihood values of the pointsformula_15;
formula_16
formula_17
formula_18
Save the point with least likelihood as a sample point with weight formula_19.
Update the point with least likelihood with some Markov chain Monte Carlo steps according to the prior, accepting only steps that
keep the likelihood above formula_20.
end
return formula_21;
At each iteration, formula_22 is an estimate of the amount of prior mass covered by the hypervolume in parameter space of all points with likelihood greater than that of formula_23. The weight factor formula_19 is an estimate of the amount of prior mass that lies between two nested hypersurfaces formula_24 and formula_25. The update step formula_26 computes the sum over formula_27 of formula_28 to numerically approximate the integral
formula_29
In the limit formula_30, this estimator has a positive bias of order formula_31 which can be removed by using formula_32 instead of the formula_33 in the above algorithm.
The idea is to subdivide the range of formula_34 and estimate, for each interval formula_35, how likely it is a priori that a randomly chosen formula_5 would map to this interval. This can be thought of as a Bayesian's way to numerically implement Lebesgue integration.
Implementations.
Example implementations demonstrating the nested sampling algorithm are publicly available for download, written in several programming languages.
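A self-contained toy sketch in Python (assuming NumPy) that follows the pseudocode above is given below; the one-dimensional test problem, number of live points, iteration count and random-walk settings are arbitrary illustrative choices, and the constrained sampling step is a deliberately crude random walk rather than a production-quality scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: uniform prior on [-5, 5], standard normal likelihood, so the
# evidence is Z = (1/10) * integral of N(0,1) over [-5, 5], i.e. about 0.1.
LO, HI = -5.0, 5.0

def log_likelihood(theta):
    return -0.5 * theta**2 - 0.5 * np.log(2.0 * np.pi)

N = 400                      # number of live points
steps = 3000                 # number of iterations j, chosen by guesswork
live = rng.uniform(LO, HI, N)
live_logL = log_likelihood(live)

Z = 0.0
X_prev = 1.0
for i in range(1, steps + 1):
    worst = np.argmin(live_logL)
    L_i = np.exp(live_logL[worst])
    X_i = np.exp(-i / N)     # estimated remaining prior mass
    Z += L_i * (X_prev - X_i)
    X_prev = X_i

    # Replace the worst point: random-walk steps over the prior, accepting
    # only moves that keep the likelihood above the current threshold.
    threshold = live_logL[worst]
    idx = rng.integers(N)
    if idx == worst:
        idx = (idx + 1) % N
    theta = live[idx]        # start from another surviving live point
    for _ in range(30):
        proposal = theta + rng.normal(0.0, 0.5)
        if LO <= proposal <= HI and log_likelihood(proposal) > threshold:
            theta = proposal
    live[worst] = theta
    live_logL[worst] = log_likelihood(theta)

# Add the contribution of the remaining live points (a common refinement).
Z += X_prev * np.mean(np.exp(live_logL))
print(Z)                     # roughly 0.1 for this toy problem
```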
Applications.
Since nested sampling was proposed in 2004, it has been used in many aspects of the field of astronomy. One paper suggested using nested sampling for cosmological model selection and object detection, as it "uniquely combines accuracy, general applicability and computational feasibility." A refinement of the algorithm to handle multimodal posteriors has been suggested as a means to detect astronomical objects in extant datasets. Other applications of nested sampling are in the field of finite element updating where the algorithm is used to choose an optimal finite element model, and this was applied to structural dynamics. This sampling method has also been used in the field of materials modeling. It can be used to learn the partition function from statistical mechanics and derive thermodynamic properties.
Dynamic nested sampling.
Dynamic nested sampling is a generalisation of the nested sampling algorithm in which the number of samples taken in different regions of the parameter space is dynamically adjusted to maximise calculation accuracy. This can lead to large improvements in accuracy and computational efficiency when compared to the original nested sampling algorithm, in which the allocation of samples cannot be changed and often many samples are taken in regions which have little effect on calculation accuracy.
Publicly available dynamic nested sampling software packages include:
* dynesty - a Python implementation of dynamic nested sampling methods.
* dyPolyChord - a dynamic nested sampling package built on the PolyChord sampler.
Dynamic nested sampling has been applied to a variety of scientific problems, including analysis of gravitational waves, mapping distances in space and exoplanet detection.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_1"
},
{
"math_id": 1,
"text": "M_2"
},
{
"math_id": 2,
"text": "D"
},
{
"math_id": 3,
"text": "\n\\begin{align}\n P(M_1\\mid D) & = \\frac{P(D\\mid M_1) P(M_1)}{P(D)} \\\\\n & = \\frac{P(D\\mid M_1) P(M_1)}{P(D\\mid M_1) P(M_1) + P(D\\mid M_2) P(M_2)} \\\\\n & = \\frac{1}{1 + \\frac{P(D\\mid M_2)}{P(D\\mid M_1)} \\frac{P(M_2)}{P(M_1)} }\n\\end{align}\n"
},
{
"math_id": 4,
"text": "P(D\\mid M_2)/P(D\\mid M_1)"
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "P(D\\mid M_1) = \\int d \\theta \\, P(D\\mid \\theta,M_1) P(\\theta\\mid M_1)"
},
{
"math_id": 7,
"text": "P(\\theta\\mid D,M_1)"
},
{
"math_id": 8,
"text": "Z=P(D\\mid M)"
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "\\theta_1,\\ldots,\\theta_N"
},
{
"math_id": 12,
"text": "i=1"
},
{
"math_id": 13,
"text": "j"
},
{
"math_id": 14,
"text": "L_i := \\min("
},
{
"math_id": 15,
"text": ")"
},
{
"math_id": 16,
"text": "X_i := \\exp(-i/N);"
},
{
"math_id": 17,
"text": "w_i := X_{i-1} - X_i"
},
{
"math_id": 18,
"text": "Z := Z + L_i\\cdot w_i;"
},
{
"math_id": 19,
"text": "w_i"
},
{
"math_id": 20,
"text": "L_i"
},
{
"math_id": 21,
"text": "Z"
},
{
"math_id": 22,
"text": "X_i"
},
{
"math_id": 23,
"text": "\\theta_i"
},
{
"math_id": 24,
"text": "\\{ \\theta \\mid P(D\\mid\\theta,M) = P(D\\mid\\theta_{i-1},M) \\}"
},
{
"math_id": 25,
"text": "\\{ \\theta \\mid P(D\\mid\\theta,M) = P(D\\mid\\theta_i,M) \\}"
},
{
"math_id": 26,
"text": "Z := Z+L_i w_i"
},
{
"math_id": 27,
"text": "i"
},
{
"math_id": 28,
"text": "L_i w_i"
},
{
"math_id": 29,
"text": "\n \\begin{align}\n P(D\\mid M) &= \\int P(D\\mid \\theta,M) P(\\theta\\mid M) \\,d \\theta \\\\\n &= \\int P(D\\mid \\theta,M) \\,dP(\\theta\\mid M)\n \\end{align}\n "
},
{
"math_id": 30,
"text": "j \\to \\infty"
},
{
"math_id": 31,
"text": " 1 / N"
},
{
"math_id": 32,
"text": "(1 - 1/N)"
},
{
"math_id": 33,
"text": "\\exp (-1/N)"
},
{
"math_id": 34,
"text": "f(\\theta) = P(D\\mid\\theta,M)"
},
{
"math_id": 35,
"text": "[f(\\theta_{i-1}), f(\\theta_i)]"
}
] | https://en.wikipedia.org/wiki?curid=7965327 |
796652 | Partition coefficient | Ratio of concentrations in a mixture at equilibrium
In the physical sciences, a partition coefficient (P) or distribution coefficient (D) is the ratio of concentrations of a compound in a mixture of two immiscible solvents at equilibrium. This ratio is therefore a comparison of the solubilities of the solute in these two liquids. The partition coefficient generally refers to the concentration ratio of un-ionized species of compound, whereas the distribution coefficient refers to the concentration ratio of all species of the compound (ionized plus un-ionized).
In the chemical and pharmaceutical sciences, both phases usually are solvents. Most commonly, one of the solvents is water, while the second is hydrophobic, such as 1-octanol. Hence the partition coefficient measures how hydrophilic ("water-loving") or hydrophobic ("water-fearing") a chemical substance is. Partition coefficients are useful in estimating the distribution of drugs within the body. Hydrophobic drugs with high octanol-water partition coefficients are mainly distributed to hydrophobic areas such as lipid bilayers of cells. Conversely, hydrophilic drugs (low octanol/water partition coefficients) are found primarily in aqueous regions such as blood serum.
If one of the solvents is a gas and the other a liquid, a gas/liquid partition coefficient can be determined. For example, the blood/gas partition coefficient of a general anesthetic measures how easily the anesthetic passes from gas to blood. Partition coefficients can also be defined when one of the phases is solid, for instance, when one phase is a molten metal and the second is a solid metal, or when both phases are solids. The partitioning of a substance into a solid results in a solid solution.
Partition coefficients can be measured experimentally in various ways (by shake-flask, HPLC, etc.) or estimated by calculation based on a variety of methods (fragment-based, atom-based, etc.).
If a substance is present as several chemical species in the partition system due to association or dissociation, each species is assigned its own "K"ow value. A related value, D, does not distinguish between different species, only indicating the concentration ratio of the substance between the two phases.
Nomenclature.
Despite formal recommendation to the contrary, the term "partition coefficient" remains the predominantly used term in the scientific literature.
In contrast, the IUPAC recommends that the title term no longer be used, rather, that it be replaced with more specific terms. For example, "partition constant", defined as
"K"D = [A]org / [A]aq,
where "K"D is the process equilibrium constant, [A] represents the concentration of solute A being tested, and "org" and "aq" refer to the organic and aqueous phases respectively. The IUPAC further recommends "partition ratio" for cases where transfer activity coefficients can be determined, and "distribution ratio" for the ratio of total analytical concentrations of a solute between phases, regardless of chemical form.
Partition coefficient and log "P".
The partition coefficient, abbreviated P, is defined as a particular ratio of the concentrations of a solute between the two solvents (a biphase of liquid phases), specifically for un-ionized solutes, and the logarithm of the ratio is thus log "P". When one of the solvents is water and the other is a non-polar solvent, then the log "P" value is a measure of lipophilicity or hydrophobicity. The defined precedent is for the lipophilic and hydrophilic phase types to always be in the numerator and denominator respectively; for example, in a biphasic system of "n"-octanol (hereafter simply "octanol") and water:
formula_0
To a first approximation, the non-polar phase in such experiments is usually dominated by the un-ionized form of the solute, which is electrically neutral, though this may not be true for the aqueous phase. To measure the "partition coefficient of ionizable solutes", the pH of the aqueous phase is adjusted such that the predominant form of the compound in solution is the un-ionized, or its measurement at another pH of interest requires consideration of all species, un-ionized and ionized (see following).
A corresponding partition coefficient for ionizable compounds, abbreviated log "P" "I", is derived for cases where there are dominant ionized forms of the molecule, such that one must consider partition of all forms, ionized and un-ionized, between the two phases (as well as the interaction of the two equilibria, partition and ionization). "M" is used to indicate the number of ionized forms; for the I-th form ("I" = 1, 2, ... , "M") the logarithm of the corresponding partition coefficient, formula_1, is defined in the same manner as for the un-ionized form. For instance, for an octanol–water partition, it is
formula_2
To distinguish between this and the standard, un-ionized, partition coefficient, the un-ionized is often assigned the symbol log "P"0, such that the indexed formula_1 expression for ionized solutes becomes simply an extension of this, into the range of values "I" > 0.
Distribution coefficient and log "D".
The distribution coefficient, log "D, is the ratio of the sum of the concentrations of all forms of the compound (ionized plus un-ionized) in each of the two phases, one essentially always aqueous; as such, it depends on the pH of the aqueous phase, and log "D" = log "P" for non-ionizable compounds at any pH. For measurements of distribution coefficients, the pH of the aqueous phase is buffered to a specific value such that the pH is not significantly perturbed by the introduction of the compound. The value of each log "D is then determined as the logarithm of a ratio—of the sum of the experimentally measured concentrations of the solute's various forms in one solvent, to the sum of such concentrations of its forms in the other solvent; it can be expressed as
formula_3
In the above formula, the superscripts "ionized" each indicate the sum of concentrations of all ionized species in their respective phases. In addition, since log "D" is pH-dependent, the pH at which the log "D" was measured must be specified. In areas such as drug discovery—areas involving partition phenomena in biological systems such as the human body—the log "D" at the physiologic pH = 7.4 is of particular interest.
It is often convenient to express the log "D" in terms of "P"I, defined above (which includes "P"0 as state "I" = 0), thus covering both un-ionized and ionized species. For example, in octanol–water:
formula_4
which sums the individual partition coefficients (not their logarithms), and where formula_5 indicates the pH-dependent mole fraction of the I-th form (of the solute) in the aqueous phase, and other variables are defined as previously.
Example partition coefficient data.
The values for the octanol-water system in the following table are from the Dortmund Data Bank. They are sorted by the partition coefficient, smallest to largest (acetamide being hydrophilic, and 2,2',4,4',5-pentachlorobiphenyl lipophilic), and are presented with the temperature at which they were measured (which impacts the values).
Values for other compounds may be found in a variety of available reviews and monographs. Critical discussions of the challenges of measurement of log "P" and related computation of its estimated values (see below) appear in several reviews.
Applications.
Pharmacology.
A drug's distribution coefficient strongly affects how easily the drug can reach its intended target in the body, how strong an effect it will have once it reaches its target, and how long it will remain in the body in an active form. Hence, the log "P" of a molecule is one criterion used in decision-making by medicinal chemists in pre-clinical drug discovery, for example, in the assessment of druglikeness of drug candidates. Likewise, it is used to calculate lipophilic efficiency in evaluating the quality of research compounds, where the efficiency for a compound is defined as its potency, via measured values of pIC50 or pEC50, minus its value of log "P".
Pharmacokinetics.
In the context of pharmacokinetics (how the body absorbs, metabolizes, and excretes a drug), the distribution coefficient has a strong influence on ADME properties of the drug. Hence the hydrophobicity of a compound (as measured by its distribution coefficient) is a major determinant of how drug-like it is. More specifically, for a drug to be orally absorbed, it normally must first pass through lipid bilayers in the intestinal epithelium (a process known as transcellular transport). For efficient transport, the drug must be hydrophobic enough to partition into the lipid bilayer, but not so hydrophobic, that once it is in the bilayer, it will not partition out again. Likewise, hydrophobicity plays a major role in determining where drugs are distributed within the body after absorption and, as a consequence, in how rapidly they are metabolized and excreted.
Pharmacodynamics.
In the context of pharmacodynamics (how the drug affects the body), the hydrophobic effect is the major driving force for the binding of drugs to their receptor targets. On the other hand, hydrophobic drugs tend to be more toxic because they, in general, are retained longer, have a wider distribution within the body (e.g., intracellular), are somewhat less selective in their binding to proteins, and finally are often extensively metabolized. In some cases the metabolites may be chemically reactive. Hence it is advisable to make the drug as hydrophilic as possible while it still retains adequate binding affinity to the therapeutic protein target. For cases where a drug reaches its target locations through passive mechanisms (i.e., diffusion through membranes), the ideal distribution coefficient for the drug is typically intermediate in value (neither too lipophilic, nor too hydrophilic); in cases where molecules reach their targets otherwise, no such generalization applies.
Environmental science.
The hydrophobicity of a compound can give scientists an indication of how easily a compound might be taken up in groundwater to pollute waterways, and its toxicity to animals and aquatic life. Partition coefficient can also be used to predict the mobility of radionuclides in groundwater. In the field of hydrogeology, the octanol–water partition coefficient "K"ow is used to predict and model the migration of dissolved hydrophobic organic compounds in soil and groundwater.
Agrochemical research.
Hydrophobic insecticides and herbicides tend to be more active. Hydrophobic agrochemicals in general have longer half-lives and therefore display increased risk of adverse environmental impact.
Metallurgy.
In metallurgy, the partition coefficient is an important factor in determining how different impurities are distributed between molten and solidified metal. It is a critical parameter for purification using zone melting, and determines how effectively an impurity can be removed using directional solidification, described by the Scheil equation.
Consumer product development.
Many other industries take into account distribution coefficients, for example in the formulation of make-up, topical ointments, dyes, hair colors and many other consumer products.
Measurement.
A number of methods of measuring distribution coefficients have been developed, including the shake-flask, separating funnel method, reverse-phase HPLC, and pH-metric techniques.
Separating-funnel method.
In this method, the compound of interest is shaken with the two immiscible liquids in a separating funnel; once the phases have settled and separated, each layer is drawn off individually so that the concentration of the compound in each phase can be determined.
Shake flask-type.
The classical and most reliable method of log "P" determination is the "shake-flask method", which consists of dissolving some of the solute in question in a volume of octanol and water, then measuring the concentration of the solute in each solvent. The most common method of measuring the distribution of the solute is by UV/VIS spectroscopy.
HPLC-based.
A faster method of log "P" determination makes use of high-performance liquid chromatography. The log "P" of a solute can be determined by correlating its retention time with similar compounds with known log "P" values.
An advantage of this method is that it is fast (5–20 minutes per sample). However, since the value of log "P" is determined by linear regression, several compounds with similar structures must have known log "P" values, and extrapolation from one chemical class to another (applying a regression equation derived from one chemical class to a second one) may not be reliable, since each chemical class has its own characteristic regression parameters.
pH-metric.
The pH-metric set of techniques determine lipophilicity pH profiles directly from a single acid-base titration in a two-phase water–organic-solvent system. Hence, a single experiment can be used to measure the logarithms of the partition coefficient (log "P") giving the distribution of molecules that are primarily neutral in charge, as well as the distribution coefficient (log "D") of all forms of the molecule over a pH range, e.g., between 2 and 12. The method does, however, require the separate determination of the pKa value(s) of the substance.
Electrochemical.
Polarized liquid interfaces have been used to examine the thermodynamics and kinetics of the transfer of charged species from one phase to another. Two main methods exist. The first is ITIES, "interfaces between two immiscible electrolyte solutions". The second is droplet experiments. Here a reaction at a triple interface between a conductive solid, droplets of a redox active liquid phase and an electrolyte solution have been used to determine the energy required to transfer a charged species across the interface.
Single-cell approach.
There are attempts to provide partition coefficients for drugs at a single-cell level. This strategy requires methods for the determination of concentrations in individual cells, i.e., with fluorescence correlation spectroscopy or quantitative image analysis. Partition coefficient at a single-cell level provides information on the cellular uptake mechanism.
Prediction.
There are many situations where prediction of partition coefficients prior to experimental measurement is useful. For example, tens of thousands of industrially manufactured chemicals are in common use, but only a small fraction have undergone rigorous toxicological evaluation. Hence there is a need to prioritize the remainder for testing. QSAR equations, which in turn are based on calculated partition coefficients, can be used to provide toxicity estimates. Calculated partition coefficients are also widely used in drug discovery to optimize screening libraries and to predict druglikeness of designed drug candidates before they are synthesized. As discussed in more detail below, estimates of partition coefficients can be made using a variety of methods, including fragment-based, atom-based, and knowledge-based that rely solely on knowledge of the structure of the chemical. Other prediction methods rely on other experimental measurements such as solubility. The methods also differ in accuracy and whether they can be applied to all molecules, or only ones similar to molecules already studied.
Atom-based.
Standard approaches of this type, using atomic contributions, have been named by those formulating them with a prefix letter: AlogP, XlogP, MlogP, etc. A conventional method for predicting log "P" through this type of method is to parameterize the distribution coefficient contributions of various atoms to the overall molecular partition coefficient, which produces a parametric model. This parametric model can be estimated using constrained least-squares estimation, using a training set of compounds with experimentally measured partition coefficients. In order to get reasonable correlations, the most common elements contained in drugs (hydrogen, carbon, oxygen, sulfur, nitrogen, and halogens) are divided into several different atom types depending on the environment of the atom within the molecule. While this method is generally the least accurate, the advantage is that it is the most general, being able to provide at least a rough estimate for a wide variety of molecules.
Fragment-based.
The most common of these uses a group contribution method and is termed cLogP. It has been shown that the log "P" of a compound can be determined by the sum of its non-overlapping molecular fragments (defined as one or more atoms covalently bound to each other within the molecule). Fragmentary log "P" values have been determined in a statistical method analogous to the atomic methods (least-squares fitting to a training set). In addition, Hammett-type corrections are included to account for electronic and steric effects. This method in general gives better results than atomic-based methods, but cannot be used to predict partition coefficients for molecules containing unusual functional groups for which the method has not yet been parameterized (most likely because of the lack of experimental data for molecules containing such functional groups).
Knowledge-based.
A typical data-mining-based prediction uses support-vector machines, decision trees, or neural networks. This method is usually very successful for calculating log "P" values when used with compounds that have similar chemical structures and known log "P" values. Molecule mining approaches apply a similarity-matrix-based prediction or an automatic fragmentation scheme into molecular substructures. Furthermore, there are also approaches using maximum common subgraph searches or molecule kernels.
Log "D" from log "P" and p"K"a.
For cases where the molecule is un-ionized:
formula_6
For other cases, estimation of log "D" at a given pH, from log "P" and the known mole fraction of the un-ionized form, formula_7, in the case where partition of ionized forms into non-polar phase can be neglected, can be formulated as
formula_8
The following approximate expressions are valid only for monoprotic acids and bases:
formula_9
Further approximations for when the compound is largely ionized:
* for acids with formula_10, formula_11;
* for bases with formula_12, formula_13.
For prediction of p"K"a, which in turn can be used to estimate log "D", Hammett type equations have frequently been applied.
Log "P" from log "S".
If the solubility, "S", of an organic compound is known or predicted in both water and 1-octanol, then log "P" can be estimated as
formula_14
There are a variety of approaches to predict solubilities, and so log "S".
Octanol-water partition coefficient.
The partition coefficient between "n"-Octanol and water is known as the n"-octanol-water partition coefficient, or "K"ow. It is also frequently referred to by the symbol P, especially in the English literature. It is also known as n"-octanol-water partition ratio.
"K"ow, being a type of partition coefficient, serves as a measure of the relationship between lipophilicity (fat solubility) and hydrophilicity (water solubility) of a substance. The value is greater than one if a substance is more soluble in fat-like solvents such as n-octanol, and less than one if it is more soluble in water.
Example values.
Values for log "K"ow typically range between -3 (very hydrophilic) and +10 (extremely lipophilic/hydrophobic).
The values listed here are sorted by the partition coefficient. Acetamide is hydrophilic, and 2,2′,4,4′,5-Pentachlorobiphenyl is lipophilic.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\log P_\\text{oct/wat} = \\log_{10}\\left(\\frac{\\big[\\text{solute}\\big]_\\text{octanol}^\\text{un-ionized}}{\\big[\\text{solute}\\big]_\\text{water}^\\text{un-ionized}}\\right)."
},
{
"math_id": 1,
"text": "\\log P_\\text{oct/wat}^I"
},
{
"math_id": 2,
"text": "\\log\\ P_\\text{oct/wat}^\\mathrm{I} = \\log_{10}\\left(\\frac{\\big[\\text{solute}\\big]_\\text{octanol}^I}{\\big[\\text{solute}\\big]_\\text{water}^I}\\right)."
},
{
"math_id": 3,
"text": "\\log D_\\text{oct/wat} = \\log_{10}\\left(\\frac{\\big[\\text{solute}\\big]_\\text{octanol}^\\text{ionized} + \\big[\\text{solute}\\big]_\\text{octanol}^\\text{un-ionized}}{\\big[\\text{solute}\\big]_\\text{water}^\\text{ionized} + \\big[\\text{solute}\\big]_\\text{water}^\\text{un-ionized}}\\right)."
},
{
"math_id": 4,
"text": "\\log D_\\text{oct/wat} = \\log_{10}\\left(\\sum_{I=0}^M f^I P_\\text{oct/wat}^I \\right),"
},
{
"math_id": 5,
"text": "f^I"
},
{
"math_id": 6,
"text": "\\log D \\cong \\log P."
},
{
"math_id": 7,
"text": "f^0"
},
{
"math_id": 8,
"text": "\\log D \\cong \\log P + \\log \\left(f^0\\right)."
},
{
"math_id": 9,
"text": "\\begin{align}\n \\log D_\\text{acids} &\\cong \\log P + \\log\\left[\\frac{1}{1 + 10^{\\mathrm{p}H - \\mathrm{p}K_a}}\\right], \\\\\n \\log D_\\text{bases} &\\cong \\log P + \\log\\left[\\frac{1}{1 + 10^{\\mathrm{p}K_a - \\mathrm{pH}}}\\right].\n\\end{align}"
},
{
"math_id": 10,
"text": "\\mathrm{pH} - \\mathrm{p}K_a > 1"
},
{
"math_id": 11,
"text": "\\log D_\\text{acids} \\cong \\log P + \\mathrm{p}K_a - \\mathrm{pH}"
},
{
"math_id": 12,
"text": "\\mathrm{p}K_a - \\mathrm{pH} > 1"
},
{
"math_id": 13,
"text": "\\log D_\\text{bases} \\cong \\log P - \\mathrm{p}K_a + \\mathrm{pH}"
},
{
"math_id": 14,
"text": "\\log P = \\log S_\\text{o} - \\log S_\\text{w}."
}
] | https://en.wikipedia.org/wiki?curid=796652 |
796760 | Projective Hilbert space | In mathematics and the foundations of quantum mechanics, the projective Hilbert space or ray space formula_0 of a complex Hilbert space formula_1 is the set of equivalence classes formula_2 of non-zero vectors formula_3, for the equivalence relation formula_4 on formula_1 given by
formula_5 if and only if formula_6 for some non-zero complex number formula_7.
This is the usual construction of projectivization, applied to a complex Hilbert space. In quantum mechanics, the equivalence classes formula_2 are also referred to as rays or projective rays.
Overview.
The physical significance of the projective Hilbert space is that in quantum theory, the wave functions formula_8 and formula_9 represent the same "physical state", for any formula_10. The Born rule demands that if the system is physical and measurable, its wave function has unit norm, formula_11, in which case it is called a normalized wave function. The unit norm constraint does not completely determine formula_8 within the ray, since formula_8 could be multiplied by any formula_7 with absolute value 1 (the circle group formula_12 action) and retain its normalization. Such a formula_7 can be written as formula_13 with formula_14 called the global phase.
Rays that differ by such a formula_7 correspond to the same state (cf. quantum state (algebraic definition), given a C*-algebra of observables and a representation on formula_1). No measurement can recover the phase of a ray; it is not observable. One says that formula_12 is a gauge group of the first kind.
If formula_1 is an irreducible representation of the algebra of observables then the rays induce pure states. Convex linear combinations of rays naturally give rise to density matrices, which (still in the case of an irreducible representation) correspond to mixed states.
In the case formula_1 is finite-dimensional, i.e., formula_15, the Hilbert space reduces to a finite-dimensional inner product space and the set of projective rays may be treated as a complex projective space; it is a homogeneous space for a unitary group formula_16. That is,
formula_17,
which carries a Kähler metric, called the Fubini–Study metric, derived from the Hilbert space's norm.
As such, the projectivization of, e.g., two-dimensional complex Hilbert space (the space describing one qubit) is the complex projective line formula_18. This is known as the Bloch sphere or, equivalently, the Riemann sphere. See Hopf fibration for details of the projectivization construction in this case.
Product.
The Cartesian product of projective Hilbert spaces is not a projective space. The Segre mapping is an embedding of the Cartesian product of two projective spaces into the projective space associated to the tensor product of the two Hilbert spaces, given by formula_19. In quantum theory, it describes how to make states of the composite system from states of its constituents. It is only an embedding, not a surjection; most points of the projectivized tensor product space do not lie in its range, and these represent "entangled states".
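A small numerical sketch (an added illustration, not part of the article): a vector of the two-qubit space lies in the image of the Segre map exactly when its 2×2 coefficient matrix has rank 1, which fails for an entangled state such as a Bell state.

import numpy as np

x = np.array([1.0, 2.0])
y = np.array([0.5, -1.0])
product_state = np.kron(x, y)                              # image of ([x], [y]) under the Segre map
bell_state = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)   # (|00> + |11>) / sqrt(2)

for v in (product_state, bell_state):
    # Rank 1 means the vector factors as x tensor y; higher rank signals entanglement.
    print(np.linalg.matrix_rank(v.reshape(2, 2)))          # prints 1, then 2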
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{P}(H)"
},
{
"math_id": 1,
"text": "H"
},
{
"math_id": 2,
"text": "[v]"
},
{
"math_id": 3,
"text": "v \\in H"
},
{
"math_id": 4,
"text": "\\sim"
},
{
"math_id": 5,
"text": "w \\sim v"
},
{
"math_id": 6,
"text": "v = \\lambda w"
},
{
"math_id": 7,
"text": "\\lambda"
},
{
"math_id": 8,
"text": "\\psi"
},
{
"math_id": 9,
"text": "\\lambda \\psi"
},
{
"math_id": 10,
"text": "\\lambda \\ne 0"
},
{
"math_id": 11,
"text": "\\langle\\psi|\\psi\\rangle = 1"
},
{
"math_id": 12,
"text": "U(1)"
},
{
"math_id": 13,
"text": "\\lambda = e^{i\\phi}"
},
{
"math_id": 14,
"text": "\\phi"
},
{
"math_id": 15,
"text": "H=H_n"
},
{
"math_id": 16,
"text": "\\mathrm{U}(n)"
},
{
"math_id": 17,
"text": "\\mathbf{P}(H_{n})=\\mathbb{C}\\mathbf{P}^{n-1}"
},
{
"math_id": 18,
"text": "\\mathbb{C}\\mathbf{P}^{1}"
},
{
"math_id": 19,
"text": "\\mathbf{P}(H) \\times \\mathbf{P}(H') \\to \\mathbf{P}(H \\otimes H'), ([x], [y]) \\mapsto [x \\otimes y]"
}
] | https://en.wikipedia.org/wiki?curid=796760 |
796893 | Haag's theorem | Theorem which describes the interaction picture as incompatible with relativistic quantum fields
While working on the mathematical physics of an interacting, relativistic, quantum field theory, Rudolf Haag developed an argument against the existence of the interaction picture, a result now commonly known as Haag’s theorem. Haag’s original proof relied on the specific form of then-common field theories, but it was subsequently generalized by a number of authors, notably Hall & Wightman, who concluded that no single, universal Hilbert space representation can describe both free and interacting fields. A generalization due to Reed & Simon shows that a similar result applies to free neutral scalar fields of different masses, which implies that the interaction picture is always inconsistent, even in the case of a free field.
Introduction.
Traditionally, describing a quantum field theory requires describing a set of operators satisfying the canonical (anti)commutation relations, and a Hilbert space on which those operators act. Equivalently, one should give a representation of the free algebra on those operators, modulo the canonical commutation relations (the CCR/CAR algebra); in the latter perspective, the underlying algebra of operators is the same, but different field theories correspond to different (i.e., unitarily inequivalent) representations.
Philosophically, the action of the CCR algebra should be irreducible, for otherwise the theory can be written as the combined effects of two separate fields. That principle implies the existence of a cyclic vacuum state. Importantly, a vacuum uniquely determines the algebra representation, because it is cyclic.
Two different specifications of the vacuum are common: the minimum-energy eigenvector of the field Hamiltonian, or the state annihilated by the number operator "a"†"a". When these specifications describe different vectors, the vacuum is said to polarize, after the physical interpretation in the case of quantum electrodynamics.
Haag's result explains that the same quantum field theory must treat the vacuum very differently when interacting vs. free.
Formal description.
In its modern form, the Haag theorem has two parts:
This state of affairs is in stark contrast to ordinary non-relativistic quantum mechanics, where there is always a unitary equivalence between the free and interacting representations. That fact is used in constructing the interaction picture, where operators are evolved using a free field representation, while states evolve using the interacting field representation. Within the formalism of quantum field theory (QFT) such a picture generally does not exist, because these two representations are unitarily inequivalent. Thus the quantum field theorist is confronted with the so-called "choice problem": One must choose the ‘right’ representation among an uncountably-infinite set of representations which are not equivalent.
Physical / heuristic point of view.
As was already noticed by Haag in his original work, it is the vacuum polarization that lies at the core of Haag’s theorem. Any interacting quantum field (including non-interacting fields of different masses) polarizes the vacuum, and as a consequence its vacuum state lies inside a renormalized Hilbert space formula_0 that differs from the Hilbert space formula_1 of the free field. Although an isomorphism could always be found that maps one Hilbert space into the other, Haag’s theorem implies that no such mapping could deliver unitarily equivalent representations of the corresponding canonical commutation relations, i.e. unambiguous physical results.
Work-arounds.
Among the assumptions that lead to Haag’s theorem is translation invariance of the system. Consequently, systems that can be set up inside a box with periodic boundary conditions or that interact with suitable external potentials escape the conclusions of the theorem.
Haag (1958) and Ruelle (1962) have presented the Haag–Ruelle scattering theory, which deals with asymptotic free states and thereby serves to formalize some of the assumptions needed for the LSZ reduction formula. These techniques, however, cannot be applied to massless particles and have unsolved issues with bound states.
Quantum field theorists’ conflicting reactions.
While some physicists and philosophers of physics have repeatedly emphasized how seriously Haag’s theorem is shaking the foundations of QFT, the majority of practicing quantum field theorists simply dismiss the issue. Most quantum field theory texts geared to practical appreciation of the Standard Model of elementary particle interactions do not even mention it, implicitly assuming that some rigorous set of definitions and procedures may be found to firm up the powerful and well-confirmed heuristic results they report on.
For example, asymptotic structure (cf. QCD jets) is a specific calculation in strong agreement with experiment, but nevertheless should fail by dint of Haag’s theorem. The general feeling is that this is not some calculation that was merely stumbled upon, but rather that it embodies a physical truth. The practical calculations and tools are motivated and justified by an appeal to a grand mathematical formalism called QFT. Haag’s theorem suggests that the formalism is not well-founded, yet the practical calculations are sufficiently distant from the generalized formalism that any weaknesses there do not affect (or invalidate) practical results.
As was pointed out by Teller (1997): Everyone must agree that as a piece of mathematics Haag’s theorem is a valid result that at least appears to call into question the mathematical foundation of interacting quantum field theory, and agree that at the same time the theory has proved astonishingly successful in application to experimental results. Lupher (2005) suggested that the wide range of conflicting reactions to Haag’s theorem may partly be caused by the fact that the theorem exists in different formulations, which in turn were proved within different formulations of QFT such as Wightman’s axiomatic approach or the LSZ formula. According to Lupher: The few who mention it tend to regard it as something important that someone (else) should investigate thoroughly.
Sklar (2000) further pointed out:There may be a presence within a theory of conceptual problems that appear to be the result of mathematical artifacts. These seem to the theoretician to be not fundamental problems rooted in some deep physical mistake in the theory, but, rather, the consequence of some misfortune in the way in which the theory has been expressed. Haag’s theorem is, perhaps, a difficulty of this kind.
Wallace (2011) has compared the merits of conventional QFT with those of algebraic quantum field theory (AQFT) and observed that... algebraic quantum field theory has unitarily inequivalent representations even on spatially finite regions, but this lack of unitary equivalence only manifests itself with respect to expectation values on arbitrary small spacetime regions, and these are exactly those expectation values which don’t convey real information about the world. He justifies the latter claim with the insights gained from modern renormalization group theory, namely the fact that... we can absorb all our ignorance of how the cutoff [i.e., the short-range cutoff required to carry out the renormalization procedure] is implemented, into the values of finitely many coefficients which can be measured empirically.
Concerning the consequences of Haag’s theorem, Wallace’s observation implies that since QFT does not attempt to predict fundamental parameters, such as particle masses or coupling constants, potentially harmful effects arising from unitarily non-equivalent representations remain absorbed inside the empirical values that stem from measurements of these parameters (at a given length scale) and that are readily imported into QFT. Thus they remain invisible to quantum field theorists, in practice.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\;H_\\text{renorm}\\;"
},
{
"math_id": 1,
"text": "\\;H_\\text{free}\\;"
}
] | https://en.wikipedia.org/wiki?curid=796893 |
796928 | Consistent histories | Interpretation of quantum mechanics
In quantum mechanics, the consistent histories or simply "consistent quantum theory" interpretation generalizes the complementarity aspect of the conventional Copenhagen interpretation. The approach is sometimes called decoherent histories, although in some work "decoherent histories" refers to a more specialized approach.
First proposed by Robert Griffiths in 1984, this interpretation of quantum mechanics is based on a consistency criterion that then allows probabilities to be assigned to various alternative histories of a system such that the probabilities for each history obey the rules of classical probability while being consistent with the Schrödinger equation. In contrast to some interpretations of quantum mechanics, the framework does not include "wavefunction collapse" as a relevant description of any physical process, and emphasizes that measurement theory is not a fundamental ingredient of quantum mechanics. Consistent Histories allows predictions related to the state of the universe needed for quantum cosmology.
Key assumptions.
The interpretation rests on three assumptions:
The third assumption generalizes complementarity, and it is this assumption that separates consistent histories from other interpretations of quantum theory.
Formalism.
Histories.
A "homogeneous history" formula_0 (here formula_1 labels different histories) is a sequence of Propositions formula_2 specified at different moments of time formula_3 (here formula_4 labels the times). We write this as:
formula_5
and read it as "the proposition formula_6 is true at time formula_7 "and then" the proposition formula_8 is true at time formula_9 "and then" formula_10". The times formula_11 are strictly ordered and called the "temporal support" of the history.
"Inhomogeneous histories" are multiple-time propositions which cannot be represented by a homogeneous history. An example is the logical OR of two homogeneous histories: formula_12.
These propositions can correspond to any set of questions that include all possibilities.
Examples might be the three propositions meaning "the electron went through the left slit", "the electron went through the right slit" and "the electron didn't go through either slit". One of the aims of the approach is to show that classical questions such as, "where are my keys?" are consistent. In this case one might use a large number of propositions each one specifying the location of the keys in some small region of space.
Each single-time proposition formula_2 can be represented by a projection operator formula_13 acting on the system's Hilbert space (we use "hats" to denote operators). It is then useful to represent homogeneous histories by the time-ordered product of their single-time projection operators. This is the history projection operator (HPO) formalism developed by Christopher Isham, and it naturally encodes the logical structure of the history propositions.
Consistency.
An important construction in the consistent histories approach is the class operator for a homogeneous history:
formula_14
The symbol formula_15 indicates that the factors in the product are ordered chronologically according to their values of formula_3: the "past" operators with smaller values of formula_16 appear on the right side, and the "future" operators with greater values of formula_16 appear on the left side.
This definition can be extended to inhomogeneous histories as well.
Central to the consistent histories approach is the notion of consistency. A set of histories formula_17 is consistent (or strongly consistent) if
formula_18
for all formula_19. Here formula_20 represents the initial density matrix, and the operators are expressed in the Heisenberg picture.
The set of histories is weakly consistent if
formula_21
for all formula_19.
Probabilities.
If a set of histories is consistent then probabilities can be assigned to them in a consistent way. We postulate that the probability of history formula_0 is simply
formula_22
which obeys the axioms of probability if the histories formula_0 come from the same (strongly) consistent set.
As an example, this means the probability of "formula_0 OR formula_23" equals the probability of "formula_0" plus the probability of "formula_23" minus the probability of "formula_0 AND formula_23", and so forth.
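A toy numerical check (an illustrative sketch under simplifying assumptions: a spin-1/2 system with trivial dynamics, so the Heisenberg-picture projectors coincide with the Schrödinger ones) shows a consistent two-time set of histories and its probabilities:

import numpy as np
from itertools import product

ket0, ket1 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
ketp, ketm = np.array([1.0, 1.0]) / np.sqrt(2), np.array([1.0, -1.0]) / np.sqrt(2)
proj = lambda v: np.outer(v, v.conj())

Pz = [proj(ket0), proj(ket1)]     # propositions "S_z is up/down" at time t1
Px = [proj(ketp), proj(ketm)]     # propositions "S_x is plus/minus" at time t2
rho = proj(ket0)                  # initial density matrix |0><0|

# Class operators: later-time projector to the left of the earlier-time projector.
class_ops = [Px[b] @ Pz[a] for a, b in product(range(2), repeat=2)]

# Decoherence functional D_ij = Tr(C_i rho C_j^dagger); consistency means vanishing off-diagonals.
D = np.array([[np.trace(Ci @ rho @ Cj.conj().T) for Cj in class_ops] for Ci in class_ops])
print(np.allclose(D - np.diag(np.diag(D)), 0))    # True: this set of histories is consistent
print(np.real(np.diag(D)))                        # probabilities [0.5, 0.5, 0, 0], summing to 1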
Interpretation.
The interpretation based on consistent histories is used in combination with the insights about quantum decoherence.
Quantum decoherence implies that irreversible macroscopic phenomena (hence, all classical measurements) render histories automatically consistent, which allows one to recover classical reasoning and "common sense" when applied to the outcomes of these measurements. More precise analysis of decoherence allows (in principle) a quantitative calculation of the boundary between the classical domain and the quantum domain. According to Roland Omnès,
<templatestyles src="Template:Blockquote/styles.css" />[the] history approach, although it was initially independent of the Copenhagen approach, is in some sense a more elaborate version of it. It has, of course, the advantage of being more precise, of including classical physics, and of providing an explicit logical framework for indisputable proofs. But, when the Copenhagen interpretation is completed by the modern results about correspondence and decoherence, it essentially amounts to the same physics.
[... There are] three main differences:
1. The logical equivalence between an empirical datum, which is a macroscopic phenomenon, and the result of a measurement, which is a quantum property, becomes clearer in the new approach, whereas it remained mostly tacit and questionable in the Copenhagen formulation.
2. There are two apparently distinct notions of probability in the new approach. One is abstract and directed toward logic, whereas the other is empirical and expresses the randomness of measurements. We need to understand their relation and why they coincide with the empirical notion entering into the Copenhagen rules.
3. The main difference lies in the meaning of the reduction rule for 'wave packet collapse'. In the new approach, the rule is valid but no specific effect on the measured object can be held responsible for it. Decoherence in the measuring device is enough.
In order to obtain a complete theory, the formal rules above must be supplemented with a particular Hilbert space and rules that govern dynamics, for example a Hamiltonian.
In the opinion of others this still does not make a complete theory as no predictions are possible about which set of consistent histories will actually occur. In other words, the rules of consistent histories, the Hilbert space, and the Hamiltonian must be supplemented by a set selection rule. However, Robert B. Griffiths holds the opinion that asking the question of which set of histories will "actually occur" is a misinterpretation of the theory; histories are a tool for description of reality, not separate alternate realities.
Proponents of this consistent histories interpretation—such as Murray Gell-Mann, James Hartle, Roland Omnès and Robert B. Griffiths—argue that their interpretation clarifies the fundamental disadvantages of the old Copenhagen interpretation, and can be used as a complete interpretational framework for quantum mechanics.
In "Quantum Philosophy", Roland Omnès provides a less mathematical way of understanding this same formalism.
The consistent histories approach can be interpreted as a way of understanding which sets of classical questions can be consistently asked of a single quantum system, and which sets of questions are fundamentally inconsistent, and thus meaningless when asked together. It thus becomes possible to demonstrate formally why it is that the questions which Einstein, Podolsky and Rosen assumed could be asked together, of a single quantum system, simply cannot be asked together. On the other hand, it also becomes possible to demonstrate that classical, logical reasoning often does apply, even to quantum experiments – but we can now be mathematically exact about the limits of classical logic.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_i"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "P_{i,j}"
},
{
"math_id": 3,
"text": "t_{i,j}"
},
{
"math_id": 4,
"text": "j"
},
{
"math_id": 5,
"text": " H_i = (P_{i,1}, P_{i,2},\\ldots,P_{i,n_i}) "
},
{
"math_id": 6,
"text": "P_{i,1}"
},
{
"math_id": 7,
"text": "t_{i,1}"
},
{
"math_id": 8,
"text": "P_{i,2}"
},
{
"math_id": 9,
"text": "t_{i,2}"
},
{
"math_id": 10,
"text": "\\ldots"
},
{
"math_id": 11,
"text": "t_{i,1} < t_{i,2} < \\ldots < t_{i,n_i}"
},
{
"math_id": 12,
"text": "H_i \\lor H_j"
},
{
"math_id": 13,
"text": "\\hat{P}_{i,j}"
},
{
"math_id": 14,
"text": "\\hat{C}_{H_i} := T \\prod_{j=1}^{n_i} \\hat{P}_{i,j}(t_{i,j}) = \\hat{P}_{i,n_i} \\cdots \\hat{P}_{i,2} \\hat{P}_{i,1}"
},
{
"math_id": 15,
"text": "T"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": "\\{ H_i\\}"
},
{
"math_id": 18,
"text": "\\operatorname{Tr}(\\hat{C}_{H_i} \\rho \\hat{C}^\\dagger_{H_j}) = 0"
},
{
"math_id": 19,
"text": "i \\neq j"
},
{
"math_id": 20,
"text": "\\rho"
},
{
"math_id": 21,
"text": "\\operatorname{Tr}(\\hat{C}_{H_i} \\rho \\hat{C}^\\dagger_{H_j}) \\approx 0"
},
{
"math_id": 22,
"text": "\\operatorname{Pr}(H_i) = \\operatorname{Tr}(\\hat{C}_{H_i} \\rho \\hat{C}^\\dagger_{H_i})"
},
{
"math_id": 23,
"text": "H_j"
}
] | https://en.wikipedia.org/wiki?curid=796928 |
7970283 | Sequential probability ratio test | The sequential probability ratio test (SPRT) is a specific sequential hypothesis test, developed by Abraham Wald and later proven to be optimal by Wald and Jacob Wolfowitz. Neyman and Pearson's 1933 result inspired Wald to reformulate it as a sequential analysis problem. The Neyman-Pearson lemma, by contrast, offers a rule of thumb for when all the data is collected (and its likelihood ratio known).
While originally developed for use in quality control studies in the realm of manufacturing, SPRT has been formulated for use in the computerized testing of human examinees as a termination criterion.
Theory.
As in classical hypothesis testing, SPRT starts with a pair of hypotheses, say formula_0 and formula_1 for the null hypothesis and alternative hypothesis respectively. They must be specified as follows:
formula_2
formula_3
The next step is to calculate the cumulative sum of the log-likelihood ratio, formula_4, as new data arrive: starting with formula_5, then, for formula_6 = 1, 2, ...,
formula_7
The stopping rule is a simple thresholding scheme:
formula_8: continue monitoring ("critical inequality"),
formula_9: accept formula_1,
formula_10: accept formula_0,
where formula_11 and formula_12 (formula_13) depend on the desired type I and type II errors, formula_14 and formula_15. They may be chosen as follows:
formula_16 and formula_17
In other words, formula_14 and formula_15 must be decided beforehand in order to set the thresholds appropriately. The numerical value will depend on the application. The reason for being only an approximation is that, in the discrete case, the signal may cross the threshold between samples. Thus, depending on the penalty of making an error and the sampling frequency, one might set the thresholds more aggressively. The exact bounds are correct in the continuous case.
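A minimal sketch of the procedure in Python (the Bernoulli example, the chosen error rates, and the variable names are illustrative assumptions, not taken from the text):

import math

def sprt(observations, log_lr, alpha=0.1, beta=0.1):
    """log_lr(x) must return log[ f1(x) / f0(x) ] for a single observation x."""
    a = math.log(beta / (1.0 - alpha))     # lower threshold: accept H0 on crossing
    b = math.log((1.0 - beta) / alpha)     # upper threshold: accept H1 on crossing
    s = 0.0
    for n, x in enumerate(observations, start=1):
        s += log_lr(x)                     # cumulative log-likelihood ratio S_n
        if s >= b:
            return "accept H1", n
        if s <= a:
            return "accept H0", n
    return "continue sampling", len(observations)

# Bernoulli data with H0: p = 0.5 versus H1: p = 0.7.
llr = lambda x: math.log(0.7 / 0.5) if x == 1 else math.log(0.3 / 0.5)
print(sprt([1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1], llr))   # ('accept H1', 12)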
Example.
A textbook example is parameter estimation of a probability distribution function. Consider the exponential distribution:
formula_18
The hypotheses are
formula_19
Then the log-likelihood ratio for one sample is
formula_20
The cumulative sum of these log-likelihood ratios over all samples x is
formula_21
Accordingly, the stopping rule is:
formula_22
After re-arranging we finally find
formula_23
The thresholds are simply two parallel lines with slope formula_24. Sampling should stop when the sum of the samples makes an excursion outside the "continue-sampling region".
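The worked example can also be simulated directly; the following sketch (with illustrative parameter values) draws exponential samples and compares their running sum with the two parallel boundary lines derived above:

import math
import random

theta0, theta1 = 1.0, 2.0                          # H0 and H1 means, theta1 > theta0
alpha = beta = 0.05
a = math.log(beta / (1 - alpha))                   # lower log-threshold
b = math.log((1 - beta) / alpha)                   # upper log-threshold
slope = math.log(theta1 / theta0)
scale = (theta1 - theta0) / (theta0 * theta1)

random.seed(1)
total, n = 0.0, 0
while True:
    n += 1
    total += random.expovariate(1.0 / theta1)      # draw data under H1
    lower = (a + n * slope) / scale                # boundaries rewritten in terms of sum(x)
    upper = (b + n * slope) / scale
    if total >= upper:
        print("accept H1 after", n, "samples")
        break
    if total <= lower:
        print("accept H0 after", n, "samples")
        break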
Applications.
Manufacturing.
The test is done on the proportion metric, and tests that a variable "p" is equal to one of two desired points, "p1" or "p2". The region between these two points is known as the "indifference region" (IR). For example, suppose you are performing a quality control study on a factory lot of widgets. Management would like the lot to have 3% or less defective widgets, but 1% or less is the ideal lot that would pass with flying colors. In this example, "p1 = 0.01" and "p2 = 0.03" and the region between them is the IR because management considers these lots to be marginal and is OK with them being classified either way. Widgets would be sampled one at a time from the lot (sequential analysis) until the test determines, within an acceptable error level, that the lot is ideal or should be rejected.
Testing of human examinees.
The SPRT is currently the predominant method of classifying examinees in a variable-length computerized classification test (CCT). The two parameters "p1" and "p2" are specified by determining a cutscore (threshold) for examinees on the proportion correct metric, and selecting a point above and below that cutscore. For instance, suppose the cutscore is set at 70% for a test. We could select "p1 = 0.65" and "p2 = 0.75". The test then evaluates the likelihood that an examinee's true score on that metric is equal to one of those two points. If the examinee is determined to be at 75%, they pass, and they fail if they are determined to be at 65%.
These points are not specified completely arbitrarily. A cutscore should always be set with a legally defensible method, such as a modified Angoff procedure. Again, the indifference region represents the region of scores that the test designer is OK with going either way (pass or fail). The upper parameter "p2" is conceptually the highest level that the test designer is willing to accept for a Fail (because everyone below it has a good chance of failing), and the lower parameter "p1" is the lowest level that the test designer is willing to accept for a pass (because everyone above it has a decent chance of passing). While this definition may seem to be a relatively small burden, consider the high-stakes case of a licensing test for medical doctors: at just what point should we consider somebody to be at one of these two levels?
While the SPRT was first applied to testing in the days of classical test theory, as is applied in the previous paragraph, Reckase (1983) suggested that item response theory be used to determine the "p1" and "p2" parameters. The cutscore and indifference region are defined on the latent ability (theta) metric, and translated onto the proportion metric for computation. Research on CCT since then has applied this methodology for several reasons:
Detection of anomalous medical outcomes.
Spiegelhalter et al. have shown that SPRT can be used to monitor the performance of doctors, surgeons and other medical practitioners in such a way as to give early warning of potentially anomalous results. In their 2003 paper, they showed how it could have helped identify Harold Shipman as a murderer well before he was actually identified.
Extensions.
MaxSPRT.
More recently, in 2011, an extension of the SPRT method called Maximized Sequential Probability Ratio Test (MaxSPRT) was introduced. The salient feature of MaxSPRT is the allowance of a composite, one-sided alternative hypothesis, and the introduction of an upper stopping boundary. The method has been used in several medical research studies. | [
{
"math_id": 0,
"text": "H_0"
},
{
"math_id": 1,
"text": "H_1"
},
{
"math_id": 2,
"text": "H_0: p=p_0"
},
{
"math_id": 3,
"text": "H_1: p=p_1"
},
{
"math_id": 4,
"text": "\\log \\Lambda_i"
},
{
"math_id": 5,
"text": "S_0 = 0"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "S_i=S_{i-1}+ \\log \\Lambda_i "
},
{
"math_id": 8,
"text": "a < S_i < b"
},
{
"math_id": 9,
"text": "S_i \\geq b"
},
{
"math_id": 10,
"text": "S_i \\leq a"
},
{
"math_id": 11,
"text": " a "
},
{
"math_id": 12,
"text": " b "
},
{
"math_id": 13,
"text": "a<0<b<\\infty"
},
{
"math_id": 14,
"text": "\\alpha"
},
{
"math_id": 15,
"text": "\\beta"
},
{
"math_id": 16,
"text": "a \\approx \\log \\frac{ \\beta }{1-\\alpha}"
},
{
"math_id": 17,
"text": "b \\approx \\log \\frac{1-\\beta}{\\alpha}"
},
{
"math_id": 18,
"text": "f_\\theta(x)= \\theta^{-1} e^{-\\frac{x}{\\theta}}, \\qquad x,\\theta>0"
},
{
"math_id": 19,
"text": "\\begin{cases} H_0: \\theta=\\theta_0 \\\\ H_1: \\theta=\\theta_1\\end{cases} \\qquad \\theta_1>\\theta_0."
},
{
"math_id": 20,
"text": "\\begin{align}\n\\log \\Lambda(x)&= \\log \\left ( \\frac{\\theta_1^{-1} e^{-\\frac{x}{\\theta_1}}}{\\theta_0^{-1} e^{-\\frac{x}{\\theta_0}}} \\right) \\\\\n&= \\log \\left ( \\frac{\\theta_0}{\\theta_1} e^{\\frac{x}{\\theta_0} - \\frac{x}{\\theta_1}} \\right) \\\\\n&= \\log \\left ( \\frac{\\theta_0}{\\theta_1} \\right) + \\log \\left (e^{\\frac{x}{\\theta_0} - \\frac{x}{\\theta_1}} \\right) \\\\\n&= -\\log \\left ( \\frac{\\theta_1}{\\theta_0} \\right ) + \\left (\\frac{x}{\\theta_0} - \\frac{x}{\\theta_1} \\right ) \\\\\n&= -\\log \\left ( \\frac{\\theta_1}{\\theta_0} \\right ) + \\left ( \\frac{\\theta_1-\\theta_0}{\\theta_0 \\theta_1}\\right ) x \n\\end{align}"
},
{
"math_id": 21,
"text": "S_n=\\sum_{i=1}^n \\log \\Lambda(x_i)= - n \\log \\left ( \\frac{\\theta_1}{\\theta_0} \\right ) + \\left (\\frac{\\theta_1-\\theta_0}{\\theta_0 \\theta_1} \\right)\\sum_{i=1}^n x_i "
},
{
"math_id": 22,
"text": "a<- n \\log \\left ( \\frac{\\theta_1}{\\theta_0} \\right ) + \\left (\\frac{\\theta_1-\\theta_0}{\\theta_0 \\theta_1} \\right ) \\sum_{i=1}^n x_i <b"
},
{
"math_id": 23,
"text": "a+n \\log \\left ( \\frac{\\theta_1}{\\theta_0} \\right ) < \\left ( \\frac{\\theta_1-\\theta_0}{\\theta_0 \\theta_1} \\right ) \\sum_{i=1}^n x_i < b+n \\log \\left ( \\frac{\\theta_1}{\\theta_0} \\right )"
},
{
"math_id": 24,
"text": "\\log ( \\theta_1/\\theta_0 )"
}
] | https://en.wikipedia.org/wiki?curid=7970283 |
7971569 | K-index | Geomagnetic storm indicator, 0–9
The "K"-index quantifies disturbances in the horizontal component of Earth's magnetic field with an integer in the range 0–9, with 1 being calm and 5 or more indicating a geomagnetic storm. It is derived from the maximum fluctuations of horizontal components observed on a magnetometer during a three-hour interval. The label "K" comes from the German word "Kennziffer" meaning "characteristic digit". The "K"-index was introduced by Julius Bartels in 1939.
Definition.
The "K"-scale is a quasi-logarithmic scale derived from the maximum fluctuation "R" (in units of nanoteslas, nT) in the horizontal component of Earth's magnetic field observed on a magnetometer relative to a quiet day during a three-hour interval. The conversion table from maximum fluctuation to "K"-index varies from observatory to observatory in such a way that the historical rate of occurrence of certain levels of "K" are about the same at all observatories. In practice this means that observatories at higher geomagnetic latitude require higher levels of fluctuation for a given "K"-index. For example, the corresponding "R" value for K = 9 is
in Godhavn, Greenland,
in Honolulu, Hawaii and
in Kiel, Germany.
The real-time "K"-index is determined after the end of prescribed intervals of 3 hours each: 00:00–03:00, 03:00–06:00, ..., 21:00–24:00. The maximum positive and negative deviations during the 3 hour period are added together to determine the total maximum fluctuation. These maximum deviations may occur any time during the 3 hour period.
The planetary "K"p-index.
The official planetary "K"p-index is derived by calculating a weighted average of "K"-indices from a network of 13 geomagnetic observatories at mid-latitude locations. Since these observatories do not report their data in real-time, various operations centers around the globe estimate the index based on data available from their local network of observatories. The "K"p-index was introduced by Bartels in 1939.
The daily average amplitude "A"-index.
The "a"-index is the three hourly equivalent amplitude for geomagnetic activity at a specific magnetometer station derived from the station-specific "K"-index. Because of the quasi-logarithmic relationship of the "K"-scale to magnetometer fluctuations, it is not meaningful to take the average of a set of "K"-indices directly. Instead each "K" is converted back into a linear scale.
The "A"-index is the daily average of amplitude for geomagnetic activity at a specific magnetometer station, derived from the eight (three hourly) "a"-indices.
The "A"p-index is the averaged planetary "A"-index based on data from a set of specific "K"p stations.
Example.
If the "K"-indices for the day were 3, 4, 6, 5, 3, 2, 2 and 1, the daily "A"-index is the average of the equivalent amplitudes:
formula_0
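The arithmetic can be reproduced in a few lines; the conversion values used below are exactly those implied by the example (values for other "K" levels must be taken from the observatory's conversion table):

k_to_amp = {1: 4, 2: 7, 3: 15, 4: 27, 5: 48, 6: 80}   # equivalent amplitudes implied by the example

k_indices = [3, 4, 6, 5, 3, 2, 2, 1]
amplitudes = [k_to_amp[k] for k in k_indices]
A = sum(amplitudes) / len(amplitudes)
print(A)   # 25.375, matching the value in the text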
G-scale.
The NOAA G-scale describes the significance of effects of a geomagnetic storm to the public and those affected by the space environment. It is directly derived from the "K"p-scale, where G1 is the weakest storm classification (corresponding to a "K"p value of 5) and G5 is the strongest (corresponding to a "K"p value of 9).
Use in radio propagation studies.
The "K"p-index is used for the study and prediction of ionospheric propagation of high frequency radio signals. Geomagnetic storms, indicated by a "K"p = 5 or higher, have no direct effect on propagation. However they disturb the F-layer of the ionosphere, especially at middle and high geographical latitudes, causing a so-called "ionospheric storm" which degrades radio propagation. The degradation mainly consists of a reduction of the maximum usable frequency (MUF) by as much as 50%. Sometimes the E-layer may be affected as well. In contrast with sudden ionospheric disturbances (SID), which affect high frequency radio paths mostly at mid and low latitudes, the effects of ionospheric storms are more intense at high latitudes and the polar regions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A = (15 + 27 + 80 + 48 + 15 + 7 + 7 + 4)/8 = 25.375"
}
] | https://en.wikipedia.org/wiki?curid=7971569 |
7974227 | Lehmer matrix | In mathematics, particularly matrix theory, the "n×n" Lehmer matrix (named after Derrick Henry Lehmer) is the constant symmetric matrix defined by
formula_0
Alternatively, this may be written as
formula_1
Properties.
As can be seen in the examples section, if "A" is an "n×n" Lehmer matrix and "B" is an "m×m" Lehmer matrix, then "A" is a submatrix of "B" whenever "m">"n". The values of elements diminish toward zero away from the diagonal, where all elements have value 1.
The inverse of a Lehmer matrix is a tridiagonal matrix, where the superdiagonal and subdiagonal have strictly negative entries. Consider again the "n×n" "A" and "m×m" "B" Lehmer matrices, where "m">"n". A rather peculiar property of their inverses is that "A"−1 is "nearly" a submatrix of "B"−1, except for its element in position ("n","n"), which is not equal to the corresponding element of "B"−1.
A Lehmer matrix of order "n" has trace "n".
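A quick numerical sketch (assuming NumPy; added for illustration) builds the matrix from the definition and checks the properties stated above:

import numpy as np

def lehmer(n):
    i, j = np.indices((n, n)) + 1                 # 1-based row and column indices
    return np.minimum(i, j) / np.maximum(i, j)    # A[i, j] = min(i, j) / max(i, j)

A3, A4 = lehmer(3), lehmer(4)
inv3, inv4 = np.linalg.inv(A3), np.linalg.inv(A4)

print(np.allclose(A3, A4[:3, :3]))          # True: the 3x3 matrix is a submatrix of the 4x4 one
print(np.isclose(np.trace(A4), 4))          # True: the trace of the order-n matrix is n
print(np.allclose(np.triu(inv4, 2), 0))     # True: the inverse is tridiagonal
print(np.allclose(inv3[:2, :2], inv4[:2, :2]), np.isclose(inv3[2, 2], inv4[2, 2]))
# True False: the inverse of A3 is "nearly" a submatrix of the inverse of A4,
# except for its last diagonal entry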
Examples.
The 2×2, 3×3 and 4×4 Lehmer matrices and their inverses are shown below.
formula_2 | [
{
"math_id": 0,
"text": "A_{ij} =\n\\begin{cases}\ni/j, & j\\ge i \\\\\nj/i, & j<i.\n\\end{cases}\n"
},
{
"math_id": 1,
"text": "A_{ij} = \\frac{\\mbox{min}(i,j)}{\\mbox{max}(i,j)}."
},
{
"math_id": 2,
"text": "\n\\begin{array}{lllll}\nA_2=\\begin{pmatrix}\n 1 & 1/2 \\\\\n 1/2 & 1 \n\\end{pmatrix};\n&\nA_2^{-1}=\\begin{pmatrix}\n 4/3 & -2/3 \\\\\n -2/3 & {\\color{Brown}{\\mathbf{4/3}}}\n\\end{pmatrix};\n\n\\\\\n\\\\\n\nA_3=\\begin{pmatrix}\n 1 & 1/2 & 1/3 \\\\\n 1/2 & 1 & 2/3 \\\\\n 1/3 & 2/3 & 1 \n\\end{pmatrix};\n&\nA_3^{-1}=\\begin{pmatrix}\n 4/3 & -2/3 & \\\\\n -2/3 & 32/15 & -6/5 \\\\\n & -6/5 & {\\color{Brown}{\\mathbf{9/5}}}\n\\end{pmatrix};\n\n\\\\\n\\\\\n\nA_4=\\begin{pmatrix}\n 1 & 1/2 & 1/3 & 1/4 \\\\\n 1/2 & 1 & 2/3 & 1/2 \\\\\n 1/3 & 2/3 & 1 & 3/4 \\\\\n 1/4 & 1/2 & 3/4 & 1 \n\\end{pmatrix};\n&\nA_4^{-1}=\\begin{pmatrix}\n 4/3 & -2/3 & & \\\\\n -2/3 & 32/15 & -6/5 & \\\\\n & -6/5 & 108/35 & -12/7 \\\\\n & & -12/7 & {\\color{Brown}{\\mathbf{16/7}}}\n\\end{pmatrix}.\n\\\\\n\\end{array}\n"
}
] | https://en.wikipedia.org/wiki?curid=7974227 |
7974982 | Inversion transformation | Type of transformations applicable to coordinate space-time
In mathematical physics, inversion transformations are a natural extension of Poincaré transformations to include all conformal, one-to-one transformations on coordinate space-time. They are less studied in physics because, unlike the rotations and translations of Poincaré symmetry, an object cannot be physically transformed by the inversion symmetry. Some physical theories are invariant under this symmetry, in these cases it is what is known as a 'hidden symmetry'. Other hidden symmetries of physics include gauge symmetry and general covariance.
Early use.
In 1831 the mathematician Ludwig Immanuel Magnus began to publish on transformations of the plane generated by inversion in a circle of radius "R". His work initiated a large body of publications, now called inversive geometry. The most prominently named mathematician became August Ferdinand Möbius once he reduced the planar transformations to complex number arithmetic. In the company of physicists employing the inversion transformation early on was Lord Kelvin, and the association with him leads it to be called the Kelvin transform.
Transformation on coordinates.
In the following we shall use imaginary time (formula_0) so that space-time is Euclidean and the equations are simpler. The Poincaré transformations are given by the coordinate transformation on space-time parametrized by the 4-vectors "V"
formula_1
where formula_2 is an orthogonal matrix and formula_3 is a 4-vector. Applying this transformation twice on a 4-vector gives a third transformation of the same form. The basic invariant under this transformation is the space-time length given by the distance between two space-time points given by 4-vectors "x" and "y":
formula_4
These transformations are subgroups of general 1-1 conformal transformations on space-time. It is possible to extend these transformations to include all 1-1 conformal transformations on space-time
formula_5
We must also have an equivalent condition to the orthogonality condition of the Poincaré transformations:
formula_6
Because one can divide the top and bottom of the transformation by formula_7 we lose no generality by setting formula_8 to the unit matrix. We end up with
formula_9
Applying this transformation twice on a 4-vector gives a transformation of the same form. The new symmetry of 'inversion' is given by the 3-tensor formula_10 This symmetry becomes Poincaré symmetry if we set formula_11 When formula_12 the second condition requires that formula_2 is an orthogonal matrix. This transformation is 1-1 meaning that each point is mapped to a unique point only if we theoretically include the points at infinity.
Invariants.
The invariants for this symmetry in 4 dimensions are unknown; however, it is known that any invariant requires a minimum of 4 space-time points. In one dimension, the invariant is the well-known cross-ratio from Möbius transformations:
formula_13
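A quick numerical check (an added sketch with arbitrary sample points and coefficients) confirms that the cross-ratio is unchanged by a fractional-linear transformation of the line:

def cross_ratio(x, X, y, Y):
    return ((x - X) * (y - Y)) / ((x - Y) * (y - X))

def fractional_linear(v, a, b, c, d):
    return (a * v + b) / (c * v + d)

pts = (0.3, 1.7, -2.0, 4.5)                     # four arbitrary points on the line
a, b, c, d = 2.0, -1.0, 0.5, 3.0                # arbitrary coefficients with a*d - b*c != 0
image = tuple(fractional_linear(v, a, b, c, d) for v in pts)

print(cross_ratio(*pts))                        # same value before ...
print(cross_ratio(*image))                      # ... and after the transformation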
Because the only invariants under this symmetry involve a minimum of 4 points, this symmetry cannot be a symmetry of point particle theory. Point particle theory relies on knowing the lengths of paths of particles through space-time (e.g., from formula_14 to formula_15). The symmetry can be a symmetry of a string theory in which the strings are uniquely determined by their endpoints. The propagator for this theory for a string starting at the endpoints formula_16 and ending at the endpoints formula_17 is a conformal function of the 4-dimensional invariant. A string field in endpoint-string theory is a function over the endpoints.
formula_18
Physical evidence.
Although it is natural to generalize the Poincaré transformations in order to find hidden symmetries in physics and thus narrow down the number of possible theories of high-energy physics, it is difficult to experimentally examine this symmetry as it is not possible to transform an object under this symmetry. The indirect evidence of this symmetry is given by how accurately fundamental theories of physics that are invariant under this symmetry make predictions. Other indirect evidence is whether theories that are invariant under this symmetry lead to contradictions such as giving probabilities greater than 1. So far there has been no direct evidence that the fundamental constituents of the Universe are strings. The symmetry could also be a broken symmetry meaning that although it is a symmetry of physics, the Universe has 'frozen out' in one particular direction so this symmetry is no longer evident.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "t'=it"
},
{
"math_id": 1,
"text": "V_\\mu ^\\prime = O_\\mu ^\\nu V_\\nu +P_\\mu \\, "
},
{
"math_id": 2,
"text": "O"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "r = |x - y |. \\, "
},
{
"math_id": 5,
"text": "V_\\mu ^\\prime =\\left( A_\\tau ^\\nu V_\\nu +B_\\tau \\right) \\left( C_{\\tau \\mu\n}^\\nu V_\\nu +D_{\\tau \\mu }\\right) ^{-1}."
},
{
"math_id": 6,
"text": "AA^T+BC=DD^T+CB \\, "
},
{
"math_id": 7,
"text": "D,"
},
{
"math_id": 8,
"text": "D"
},
{
"math_id": 9,
"text": "V_\\mu ^\\prime =\\left( O_\\mu ^\\nu V_\\nu +P_\\tau \\right) \\left( \\delta _{\\tau\n\\mu} + Q_{\\tau \\mu }^\\nu V_\\nu \\right) ^{-1}. \\, "
},
{
"math_id": 10,
"text": "Q."
},
{
"math_id": 11,
"text": "Q=0."
},
{
"math_id": 12,
"text": "Q=0"
},
{
"math_id": 13,
"text": "\\frac{(x-X)(y-Y)}{(x-Y)(y-X)}."
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": "y"
},
{
"math_id": 16,
"text": "(x,X)"
},
{
"math_id": 17,
"text": "(y,Y)"
},
{
"math_id": 18,
"text": "\\phi(x,X). \\, "
}
] | https://en.wikipedia.org/wiki?curid=7974982 |
797527 | Homomorphic filtering | Homomorphic filtering is a generalized technique for signal and image processing, involving a nonlinear mapping to a different domain in which linear filter techniques are applied, followed by mapping back to the original domain. This concept was developed in the 1960s by Thomas Stockham, Alan V. Oppenheim, and Ronald W. Schafer at MIT and independently by Bogert, Healy, and Tukey in their study of time series.
Image enhancement.
Homomorphic filtering is sometimes used for image enhancement. It simultaneously normalizes the brightness across an image and increases contrast. Here homomorphic filtering is used to remove multiplicative noise. Illumination and reflectance are not separable, but their approximate locations in the frequency domain may be located. Since illumination and reflectance combine multiplicatively, the components are made additive by taking the logarithm of the image intensity, so that these multiplicative components of the image can be separated linearly in the frequency domain. Illumination variations can be thought of as a multiplicative noise, and can be reduced by filtering in the log domain.
To make the illumination of an image more even, the high-frequency components are increased and low-frequency components are decreased, because the high-frequency components are assumed to represent mostly the reflectance in the scene (the amount of light reflected off the object in the scene), whereas the low-frequency components are assumed to represent mostly the illumination in the scene. That is, high-pass filtering is used to suppress low frequencies and amplify high frequencies, in the log-intensity domain.
Operation.
Homomorphic filtering can be used for improving the appearance of a grayscale image by simultaneous intensity range compression (illumination) and contrast enhancement (reflection).
formula_0
Where,
m = image,
i = illumination,
r = reflectance
We have to transform the equation into the frequency domain in order to apply a high-pass filter. However, taking the Fourier transform of the product directly does not help, because the transform of a product is a convolution rather than a sum of separable terms. Therefore, we use the logarithm first to turn the product into a sum.
formula_1
Then, applying the Fourier transform,
formula_2
Or formula_3
Next, apply a high-pass filter to the image. To make the illumination of the image more even, the high-frequency components are increased and the low-frequency components are decreased.
formula_4
Where
H = any high-pass filter
N = filtered image in frequency domain
Afterward, return from the frequency domain back to the spatial domain by using the inverse Fourier transform.
formula_5
Finally, use the exponential function to undo the logarithm applied at the beginning and obtain the enhanced image
formula_6
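A minimal sketch of the whole pipeline in Python/NumPy follows (the Gaussian-shaped high-pass emphasis filter and its parameter values are illustrative choices, not prescribed by the text):

import numpy as np

def homomorphic_filter(image, sigma=30.0, low_gain=0.5, high_gain=2.0):
    rows, cols = image.shape
    log_img = np.log1p(image.astype(np.float64))           # step 1: take the log of the intensity
    M = np.fft.fftshift(np.fft.fft2(log_img))              # step 2: go to the frequency domain

    u = np.arange(rows) - rows / 2                         # step 3: build a high-pass emphasis
    v = np.arange(cols) - cols / 2                         #         filter H(u, v)
    D2 = u[:, None] ** 2 + v[None, :] ** 2
    H = (high_gain - low_gain) * (1 - np.exp(-D2 / (2 * sigma ** 2))) + low_gain

    N = H * M                                              # step 4: apply the filter
    n = np.real(np.fft.ifft2(np.fft.ifftshift(N)))         # step 5: inverse Fourier transform
    return np.expm1(n)                                     # step 6: undo the logarithm

# Usage: enhanced = homomorphic_filter(grayscale_image_as_2d_array)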
The following figures show the results of applying the homomorphic filter, high-pass filter, and the both homomorphic and high-pass filter. All figures were produced using Matlab.
According to figures one to four, we can see how homomorphic filtering is used for correcting non-uniform illumination in the image, and the image becomes clearer than the original. On the other hand, if we apply the high-pass filter to the homomorphic-filtered image, the edges of the image become sharper and the other areas become dimmer. This result is similar to applying only a high-pass filter to the original image.
Anti-homomorphic filtering.
It has been suggested that many cameras already have an approximately logarithmic response function (or more generally, a response function which tends to compress dynamic range), and display media such as television displays, photographic print media, etc., have an approximately anti-logarithmic response, or an otherwise dynamic range expansive response. Thus homomorphic filtering happens accidentally (unintentionally) whenever we process pixel values f(q) on the true quantigraphic unit of light q. Therefore it has been proposed that another useful kind of filtering is anti-homomorphic filtering in which images f(q) are first dynamic-range expanded to recover the true light q, upon which linear filtering is performed, followed by dynamic range compression back into image space for display.
Audio and speech analysis.
Homomorphic filtering is used in the log-spectral domain to separate filter effects from excitation effects, for example in the computation of the cepstrum as a sound representation; enhancements in the log spectral domain can improve sound intelligibility, for example in hearing aids.
Surface electromyography signals (sEMG).
Homomorphic filtering has been used to remove the effect of the stochastic impulse train, which gives rise to the sEMG signal, from the power spectrum of the sEMG signal itself. In this way, only information about motor unit action potential (MUAP) shape and amplitude was maintained; this was then used to estimate the parameters of a time-domain model of the MUAP itself.
Neural decoding.
How individual neurons or networks encode information is the subject of numerous studies and research. In the central nervous system this mainly happens by altering the spike firing rate (frequency encoding) or the relative spike timing (time encoding).
Time encoding consists of altering the random inter-spike intervals (ISI) of the stochastic impulse train in output from a neuron. Homomorphic filtering was used in this latter case to obtain ISI variations from the power spectrum of the spike train in output from a neuron, with or without the use of neuronal spontaneous activity. The ISI variations were caused by an input sinusoidal signal of unknown frequency and small amplitude, i.e. not sufficient, in the absence of noise, to excite the firing state. The frequency of the sinusoidal signal was recovered by using homomorphic-filtering-based procedures.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
A.V. Oppenheim, R.W. Schafer, T.G. Stockham "Nonlinear Filtering of Multiplied and Convolved Signals" Proceedings of the IEEE Volume 56 No. 8 August 1968 pages 1264-1291 | [
{
"math_id": 0,
"text": "m(x,y) = i(x,y)\\bullet r(x,y)"
},
{
"math_id": 1,
"text": "\\ln(m(x,y)) = \\ln(i(x,y)) + \\ln(r(x,y))"
},
{
"math_id": 2,
"text": "\\digamma(ln(m(x,y))) = \\digamma(ln(i(x,y))) + \\digamma(ln(r(x,y)))"
},
{
"math_id": 3,
"text": "M(u,v) = I(u,v) + R(u,v)"
},
{
"math_id": 4,
"text": "N(u,v) = H(u,v) \\bullet M(u,v)"
},
{
"math_id": 5,
"text": "n(x,y) = invF(N(u,v))"
},
{
"math_id": 6,
"text": "newImage(x,y) = exp(n(x,y))"
}
] | https://en.wikipedia.org/wiki?curid=797527 |
7977203 | Engineering economics | Subset of economics
Engineering economics, previously known as engineering economy, is a subset of economics concerned with the use and "...application of economic principles" in the analysis of engineering decisions. As a discipline, it is focused on the branch of economics known as microeconomics in that it studies the behavior of individuals and firms in making decisions regarding the allocation of limited resources. Thus, it focuses on the decision making process, its context and environment. It is pragmatic by nature, integrating economic theory with engineering practice. But, it is also a simplified application of microeconomic theory in that it assumes elements such as price determination, competition and demand/supply to be fixed inputs from other sources. As a discipline though, it is closely related to others such as statistics, mathematics and cost accounting. It draws upon the logical framework of economics but adds to that the analytical power of mathematics and statistics.
Engineers seek solutions to problems, and along with the technical aspects, the economic viability of each potential solution is normally considered from a specific viewpoint that reflects its economic utility to a constituency.
Fundamentally, engineering economics involves formulating, estimating, and evaluating the economic outcomes when alternatives to accomplish a defined purpose are available.
In some U.S. undergraduate civil engineering curricula, engineering economics is a required course. It is a topic on the Fundamentals of Engineering examination, and questions might also be asked on the Principles and Practice of Engineering examination; both are part of the Professional Engineering registration process.
Considering the time value of money is central to most engineering economic analyses. Cash flows are "discounted" using an interest rate, except in the most basic economic studies.
For each problem, there are usually many possible "alternatives". One option that must be considered in each analysis, and is often the "choice", is the "do nothing alternative". The "opportunity cost" of making one choice over another must also be considered. There are also non-economic factors to be considered, like color, style, public image, etc.; such factors are termed "attributes".
"Costs" as well as "revenues" are considered, for each alternative, for an "analysis period" that is either a fixed number of years or the estimated life of the project. The "salvage value" is often forgotten, but is important, and is either the net cost or revenue for decommissioning the project.
Some other topics that may be addressed in engineering economics are inflation, uncertainty, replacements, depreciation, resource depletion, taxes, tax credits, accounting, cost estimations, or capital financing. All these topics are primary skills and knowledge areas in the field of cost engineering.
Since engineering is an important part of the manufacturing sector of the economy, engineering industrial economics is an important part of industrial or business economics. Major topics in engineering industrial economics are:
Examples of usage.
Some examples of engineering economic problems range from value analysis to economic studies. Each of these is relevant in different situations, and most often used by engineers or project managers. For example, engineering economic analysis helps a company not only determine the difference between fixed and incremental costs of certain operations, but also calculates that cost, depending upon a number of variables. Further uses of engineering economics include:
Each of the previous components of engineering economics is critical at certain junctures, depending on the situation, scale, and objective of the project at hand. Critical path economy, as an example, is necessary in most situations as it is the coordination and planning of material, labor, and capital movements in a specific project. The most critical of these "paths" are determined to be those that have an effect upon the outcome both in time and cost. Therefore, the critical paths must be determined and closely monitored by engineers and managers alike. Engineering economics helps provide the Gantt charts and activity-event networks to ascertain the correct use of time and resources.
Value Analysis.
Proper value analysis finds its roots in the need for industrial engineers and managers not only to simplify and improve processes and systems, but also to logically simplify the designs of those products and systems. Though not directly related to engineering economy, value analysis is nonetheless important, and allows engineers to properly manage new and existing systems/processes to make them simpler and save money and time. Further, value analysis helps combat common "roadblock excuses" that may trip up managers or engineers. Sayings such as "The customer wants it this way" are countered by questions such as: has the customer been told of cheaper alternatives or methods? "If the product is changed, machines will be idle for lack of work" can be countered by asking: can management not find new and profitable uses for these machines? Questions like these are part of engineering economy, as they preface any real studies or analyses.
Linear Programming.
Linear programming is the use of mathematical methods to find optimized solutions, whether they be minimized or maximized in nature. This method uses a set of linear constraints to bound a feasible region (a polygon in two dimensions) and then locates the corner point of that region at which the objective is largest or smallest. Manufacturing operations often use linear programming to help mitigate costs and maximize profits or production.
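As a small illustration (assuming SciPy is available; the objective and constraints are made-up numbers), the following sketch maximizes a linear profit function over a polygonal feasible region:

from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x >= 0, y >= 0.
# linprog minimizes, so the objective coefficients are negated.
result = linprog(c=[-3.0, -2.0],
                 A_ub=[[1.0, 1.0], [1.0, 3.0]],
                 b_ub=[4.0, 6.0],
                 bounds=[(0, None), (0, None)])

print(result.x)      # optimal corner point of the feasible polygon, here (4, 0)
print(-result.fun)   # maximum profit, here 12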
Interest and Money – Time Relationships.
Considering the prevalence of capital to be lent for a certain period of time, with the understanding that it will be returned to the investor, money-time relationships analyze the costs associated with these types of actions. Capital itself must be divided into two different categories, "equity capital" and "debt capital". Equity capital is money already at the disposal of the business, and mainly derived from profit, and therefore is not of much concern, as it has no owners that demand its return with interest. Debt capital does indeed have owners, and they require that its usage be returned with "profit", otherwise known as interest. The interest to be paid by the business is going to be an expense, while the capital lenders will take interest as a profit, which may confuse the situation. To add to this, each will change the income tax position of the participants.
Interest and money time relationships come into play when the capital required to complete a project must be either borrowed or derived from reserves. To borrow brings about the question of interest and value created by the completion of the project. While taking capital from reserves also denies its usage on other projects that may yield more results. Interest in the simplest terms is defined by the multiplication of the principle, the units of time, and the interest rate. The complexity of interest calculations, however, becomes much higher as factors such as compounding interest or annuities come into play.
Engineers often utilize compound interest tables to determine the future or present value of capital. These tables can also be used to determine the effect annuities have on loans, operations, or other situations. All one needs to utilize a compound interest table is three things: the time period of the analysis, the minimum attractive rate of return (MARR), and the capital value itself. The table will yield a multiplication factor to be used with the capital value; this will then give the user the proper future or present value.
Examples of Present, Future, and Annuity Analysis.
Using the compound interest tables mentioned above, an engineer or manager can quickly determine the value of capital over a certain time period. For example, a company wishes to borrow $5,000.00 to finance a new machine, and will need to repay that loan in 5 years at 7%. Using the table, 5 years and 7% gives the factor of 1.403, which will be multiplied by $5,000.00. This will result in $7,015.00. This is of course under the assumption that the company will make a lump payment at the conclusion of the five years, not making any payments prior.
A much more applicable example is one with a certain piece of equipment that will yield benefit for a manufacturing operation over a certain period of time. For instance, the machine benefits the company $2,500.00 every year, and has a useful life of 8 years. The MARR is determined to be roughly 5%. The compound interest tables yield a different factor for different types of analysis in this scenario. If the company wishes to know the Net Present Benefit (NPB) of these benefits, then the factor is the P/A of 8 yrs at 5%, which is 6.463. If the company wishes to know the future worth of these benefits, then the factor is the F/A of 8 yrs at 5%, which is 9.549. The former gives an NPB of $16,157.50, while the latter gives a future value of $23,872.50.
These scenarios are extremely simple in nature, and do not reflect the reality of most industrial situations. Thus, an engineer must begin to factor in costs and benefits, then find the worth of the proposed machine, expansion, or facility.
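The three figures above follow directly from the standard factor formulas, without a printed table; a short sketch, rounding the factors to three decimals as the text does:

i = 0.07
F_over_P = round((1 + i) ** 5, 3)                 # single-payment compound-amount factor -> 1.403
print(f"{5000 * F_over_P:.2f}")                   # 7015.00

i, n = 0.05, 8
P_over_A = round((1 - (1 + i) ** -n) / i, 3)      # present-worth factor of a uniform series -> 6.463
F_over_A = round(((1 + i) ** n - 1) / i, 3)       # future-worth factor of a uniform series -> 9.549
print(f"{2500 * P_over_A:.2f}")                   # 16157.50
print(f"{2500 * F_over_A:.2f}")                   # 23872.50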
Depreciation and Valuation.
The fact that assets and material in the real world eventually wear down, and thence break, is a situation that must be accounted for. Depreciation itself is defined by the decreasing of value of any given asset, though some exceptions do exist. Valuation can be considered the basis for depreciation in a basic sense, as any decrease in "value" would be based on an "original value". The idea and existence of depreciation becomes especially relevant to engineering and project management is the fact that capital equipment and assets used in operations will slowly decrease in worth, which will also coincide with an increase in the likelihood of machine failure. Hence the recording and calculation of depreciation is important for two major reasons.
Both of these reasons, however, cannot make up for the "fleeting" nature of depreciation, which makes direct analysis somewhat difficult. To further add to the issues associated with depreciation, it must be broken down into three separate types, each having intricate calculations and implications.
Calculation of depreciation also comes in a number of forms: "straight line", "declining balance", "sum-of-the-years' digits", and "service output". The first method is perhaps the easiest to calculate, while the remainder have varying levels of difficulty and utility. Most situations faced by managers with regard to depreciation can be solved using any of these methods, although company policy or individual preference may affect the choice of model.
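A minimal sketch of the first three methods named above, using a hypothetical asset; the first cost, salvage value, and life below are illustrative numbers, not values from the text.

```python
def straight_line(cost, salvage, life):
    """Equal depreciation charge in every year of the asset's life."""
    return [(cost - salvage) / life] * life

def sum_of_years_digits(cost, salvage, life):
    """Accelerated method: year y receives the fraction (life - y + 1) / SYD of the depreciable base."""
    syd = life * (life + 1) / 2
    return [(cost - salvage) * (life - y + 1) / syd for y in range(1, life + 1)]

def declining_balance(cost, rate, life):
    """A fixed percentage of the remaining book value is charged each year."""
    charges, book = [], cost
    for _ in range(life):
        charge = book * rate
        charges.append(charge)
        book -= charge
    return charges

# Hypothetical asset: $10,000 first cost, $1,000 salvage value, 5-year life
print(straight_line(10000, 1000, 5))        # [1800.0, 1800.0, 1800.0, 1800.0, 1800.0]
print(sum_of_years_digits(10000, 1000, 5))  # [3000.0, 2400.0, 1800.0, 1200.0, 600.0]
print(declining_balance(10000, 0.4, 5))     # 40% ("double declining") of the remaining book value each year
```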
The main form of depreciation used inside the U.S. is the Modified Accelerated Cost Recovery System (MACRS), which is based on a set of tables that assign each class of asset a recovery life. These class lives determine how much of an asset's value can be depreciated each year. This does not necessarily mean that an asset must be discarded after its MACRS life is fulfilled, only that it can no longer be used for tax deductions.
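As an illustration of how such a table is applied, the sketch below uses the widely published half-year-convention percentages for 5-year MACRS property together with a hypothetical asset cost; the current IRS tables should be consulted for authoritative figures.

```python
# Published MACRS percentages for 5-year property under the half-year convention
MACRS_5_YEAR = [0.20, 0.32, 0.192, 0.1152, 0.1152, 0.0576]

cost = 50_000  # hypothetical asset cost
for year, pct in enumerate(MACRS_5_YEAR, start=1):
    print(f"Year {year}: ${cost * pct:,.2f}")
print(f"Total: ${cost * sum(MACRS_5_YEAR):,.2f}")  # the percentages sum to 1, so the full cost is recovered
```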
Capital Budgeting.
Capital budgeting, in relation to engineering economics, is the proper usage and utilization of capital to achieve project objectives. It can be fully defined by the statement: "... as the series of decisions by individuals and firms concerning how much and where resources will be obtained and expended to meet future objectives." This definition almost perfectly explains capital and its general relation to engineering, though some special cases may not lend themselves to such a concise explanation. The actual acquisition of that capital has many different routes, from equity to bonds to retained profits, each having unique strengths and weaknesses, especially in relation to income taxation. Factors such as risk of capital loss, along with possible or expected returns, must also be considered when capital budgeting is underway. For example, if a company has $20,000 to invest in a number of high-, moderate-, and low-risk projects, the decision would depend upon how much risk the company is willing to take on, and whether the returns offered by each category offset this perceived risk. Continuing with this example, if the high-risk project offered only a 20% return while the moderate-risk project offered a 19% return, engineers and managers would most likely choose the moderate-risk project, as its return is far more favorable for its category; the high-risk project fails to offer returns large enough to warrant its risk status. A more difficult decision might be between a moderate-risk project offering 15% and a low-risk project offering 11%; the decision there would be much more subject to factors such as company policy, extra available capital, and possible investors. "In general, the firm should estimate the project opportunities, including investment requirements and prospective rates of return for each, expected to be available for the coming period. Then the available capital should be tentatively allocated to the most favorable projects. The lowest prospective rate of return within the capital available then becomes the minimum acceptable rate of return for analyses of any projects during that period."
Minimum Cost Formulas.
One of the most important and integral operations in the engineering economics field is the minimization of cost in systems and processes. Time, resources, labor, and capital must all be minimized when placed into any system, so that revenue, product, and profit can be maximized. Hence, the general equation:
formula_0
where "C" is total cost, "a b" and "k" are constants, and "x" is the variable number of units produced.
There are a great number of cost analysis formulas, each suited to a particular situation and warranted by the policies of the company in question or the preferences of the engineer at hand.
Economic Studies, both Private and Public in Nature.
Economic studies, which are much more common outside of engineering economics, are still used from time to time to determine feasibility and utility of certain projects. They do not, however, truly reflect the "common notion" of economic studies, which is fixated upon macroeconomics, something engineers have little interaction with. Therefore, the studies conducted in engineering economics are for specific companies and limited projects inside those companies. At most one may expect to find some feasibility studies done by private firms for the government or another business, but these again are in stark contrast to the overarching nature of true economic studies. Studies have a number of major steps that can be applied to almost every type of situation, those being as follows; | [
{
"math_id": 0,
"text": "C=ax +b/x+k"
}
] | https://en.wikipedia.org/wiki?curid=7977203 |
7978 | Data Encryption Standard | Early unclassified symmetric-key block cipher
The Data Encryption Standard (DES) is a symmetric-key algorithm for the encryption of digital data. Although its short key length of 56 bits makes it too insecure for modern applications, it has been highly influential in the advancement of cryptography.
Developed in the early 1970s at IBM and based on an earlier design by Horst Feistel, the algorithm was submitted to the National Bureau of Standards (NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with the National Security Agency (NSA), the NBS selected a slightly modified version (strengthened against differential cryptanalysis, but weakened against brute-force attacks), which was published as an official Federal Information Processing Standard (FIPS) for the United States in 1977.
The publication of an NSA-approved encryption standard led to its quick international adoption and widespread academic scrutiny. Controversies arose from classified design elements, a relatively short key length of the symmetric-key block cipher design, and the involvement of the NSA, raising suspicions about a backdoor. The S-boxes that had prompted those suspicions were designed by the NSA to address a vulnerability they secretly knew (differential cryptanalysis). However, the NSA also ensured that the key size was drastically reduced so that they could break the cipher by brute force attack. The intense academic scrutiny the algorithm received over time led to the modern understanding of block ciphers and their cryptanalysis.
DES is insecure due to the relatively short 56-bit key size. In January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes. There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. This cipher has been superseded by the Advanced Encryption Standard (AES). DES has been withdrawn as a standard by the National Institute of Standards and Technology.
Some documents distinguish between the DES standard and its algorithm, referring to the algorithm as the DEA (Data Encryption Algorithm).
History.
The origins of DES date to 1972, when a National Bureau of Standards study of US government computer security identified a need for a government-wide standard for encrypting unclassified, sensitive information.
Around the same time, engineer Mohamed Atalla in 1972 founded Atalla Corporation and developed the first hardware security module (HSM), the so-called "Atalla Box" which was commercialized in 1973. It protected offline devices with a secure PIN generating key, and was a commercial success. Banks and credit card companies were fearful that Atalla would dominate the market, which spurred the development of an international encryption standard. Atalla was an early competitor to IBM in the banking market, and was cited as an influence by IBM employees who worked on the DES standard. The IBM 3624 later adopted a similar PIN verification system to the earlier Atalla system.
On 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions was suitable. A second request was issued on 27 August 1974. This time, IBM submitted a candidate which was deemed acceptable—a cipher developed during the period 1973–1974 based on an earlier algorithm, Horst Feistel's Lucifer cipher. The team at IBM involved in cipher design and analysis included Feistel, Walter Tuchman, Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas, Roy Adler, Edna Grossman, Bill Notz, Lynn Smith, and Bryant Tuckerman.
NSA's involvement in the design.
On 17 March 1975, the proposed DES was published in the "Federal Register". Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. Criticism came from public-key cryptography pioneers Martin Hellman and Whitfield Diffie, who cited the shortened key length and the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that they—but no one else—could easily read encrypted messages. Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different." The United States Senate Select Committee on Intelligence reviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of their findings, published in 1978, the Committee wrote:
In the development of DES, NSA convinced IBM that a reduced key size was sufficient; indirectly assisted in the development of the S-box structures; and certified that the final DES algorithm was, to the best of their knowledge, free from any statistical or mathematical weakness.
However, it also found that
NSA did not tamper with the design of the algorithm in any way. IBM invented and designed the algorithm, made all pertinent decisions regarding it, and concurred that the agreed upon key size was more than adequate for all commercial applications for which the DES was intended.
Another member of the DES team, Walter Tuchman, stated "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!"
In contrast, a declassified NSA book on cryptologic history states:
NSA worked closely with IBM to strengthen the algorithm against all except brute-force attacks and to strengthen substitution tables, called S-boxes. Conversely, NSA tried to convince IBM to reduce the length of the key from 64 to 48 bits. Ultimately they compromised on a 56-bit key.
Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes. According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret. Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it". Bruce Schneier observed that "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES."
The algorithm as a standard.
Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), following a public competition. On 19 May 2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES through the year 2030 for sensitive government information.
The algorithm is also specified in ANSI X3.92 (Today X3 is known as INCITS and ANSI X3.92 as ANSI INCITS 92), NIST SP 800-67 and ISO/IEC 18033-3 (as a component of TDEA).
Another theoretical attack, linear cryptanalysis, was published in 1994, but it was the Electronic Frontier Foundation's DES cracker in 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods of cryptanalysis are discussed in more detail later in this article.
The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. According to a NIST retrospective about DES,
The DES can be said to have "jump-started" the nonmilitary study and development of encryption algorithms. In the 1970s there were very few cryptographers, except for those in military or intelligence organizations, and little academic study of cryptography. There are now many active academic cryptologists, mathematics departments with strong programs in cryptography, and commercial information security companies and consultants. A generation of cryptanalysts has cut its teeth analyzing (that is, trying to "crack") the DES algorithm. In the words of cryptographer Bruce Schneier, "DES did more to galvanize the field of cryptanalysis than anything else. Now there was an algorithm to study." An astonishing share of the open literature in cryptography in the 1970s and 1980s dealt with the DES, and the DES is the standard against which every symmetric key algorithm since has been compared.
Description.
DES is the archetypal block cipher—an algorithm that takes a fixed-length string of plaintext bits and transforms it through a series of complicated operations into another ciphertext bitstring of the same length. In the case of DES, the block size is 64 bits. DES also uses a key to customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity, and are thereafter discarded. Hence the effective key length is 56 bits.
The key is nominally stored or transmitted as 8 bytes, each with odd parity. According to ANSI X3.92-1981 (now known as ANSI INCITS 92–1981), section 3.5:
One bit in each 8-bit byte of the "KEY" may be utilized for error detection in key generation, distribution, and storage. Bits 8, 16..., 64 are for use in ensuring that each byte is of odd parity.
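A small illustrative sketch of that parity convention, in which each key byte carries seven key bits plus one parity bit chosen so that the byte has an odd number of 1 bits (plain Python, not part of the standard):

```python
def has_odd_parity(byte: int) -> bool:
    """True if the byte contains an odd number of 1 bits."""
    return bin(byte).count("1") % 2 == 1

def fix_key_parity(key: bytes) -> bytes:
    """Force each of the 8 key bytes to odd parity by adjusting its last bit (bits 8, 16, ..., 64)."""
    assert len(key) == 8
    return bytes(b if has_odd_parity(b) else b ^ 0x01 for b in key)

key = bytes.fromhex("133457799BBCDFF0")  # example 64-bit key
print(all(has_odd_parity(b) for b in fix_key_parity(key)))  # True
```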
Like other block ciphers, DES by itself is not a secure means of encryption, but must instead be used in a mode of operation. FIPS-81 specifies several modes for use with DES. Further comments on the usage of DES are contained in FIPS-74.
Decryption uses the same structure as encryption, but with the keys used in reverse order. (This has the advantage that the same hardware or software can be used in both directions.)
Overall structure.
The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termed "rounds". There is also an initial and final permutation, termed "IP" and "FP", which are inverses (IP "undoes" the action of FP, and vice versa). IP and FP have no cryptographic significance, but were included in order to facilitate loading blocks in and out of mid-1970s 8-bit based hardware.
Before the main rounds, the block is divided into two 32-bit halves and processed alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes—the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical. This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms.
The ⊕ symbol denotes the exclusive-OR (XOR) operation. The "F-function" scrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round. After the final round, the halves are swapped; this is a feature of the Feistel structure which makes encryption and decryption similar processes.
The Feistel (F) function.
The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages: expansion of the 32-bit half-block to 48 bits, key mixing (an XOR with the 48-bit round subkey), substitution through the S-boxes, and permutation by the P-box.
The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion provides so-called "confusion and diffusion" respectively, a concept identified by Claude Shannon in the 1940s as a necessary condition for a secure yet practical cipher.
Key schedule.
Figure 3 illustrates the "key schedule" for encryption—the algorithm which generates the subkeys. Initially, 56 bits of the key are selected from the initial 64 by "Permuted Choice 1" ("PC-1")—the remaining eight bits are either discarded or used as parity check bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected by "Permuted Choice 2" ("PC-2")—24 bits from the left half, and 24 from the right. The rotations (denoted by "«<" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys.
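A sketch of just the rotation part of this schedule; the PC-1 and PC-2 selection tables defined in the standard are omitted, and the per-round shift amounts below are the standard values of one or two positions:

```python
# Left-rotation amounts applied to both 28-bit key halves in each of the 16 rounds (they sum to 28)
SHIFTS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

def rotate_left_28(half: int, n: int) -> int:
    """Rotate a 28-bit value left by n bits."""
    mask = (1 << 28) - 1
    return ((half << n) | (half >> (28 - n))) & mask

def rotated_halves(c0: int, d0: int):
    """Yield the (C_i, D_i) pairs from which PC-2 (not shown) would select each 48-bit subkey."""
    c, d = c0, d0
    for s in SHIFTS:
        c, d = rotate_left_28(c, s), rotate_left_28(d, s)
        yield c, d

# Because the shifts total 28, the halves return to their starting values after the 16th round
c0, d0 = 0x0F0CCAA, 0x556678F  # hypothetical 28-bit halves
print(list(rotated_halves(c0, d0))[-1] == (c0, d0))  # True
```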
The key schedule for decryption is similar—the subkeys are in reverse order compared to encryption. Apart from that change, the process is the same as for encryption. The same 28 bits are passed to all rotation boxes.
Pseudocode.
Pseudocode for the DES algorithm follows.
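What follows is a high-level sketch, in Python, of the structure described above: initial permutation, 16 Feistel rounds, a final swap of the halves, and the final permutation. The permutation tables, S-boxes, and key schedule are replaced with stand-in helpers, so this illustrates only the round structure, not real DES.

```python
# Placeholder helpers so the sketch runs; real DES uses the tabulated IP, FP, E, P and S-boxes.
def initial_permutation(x): return x  # stand-in for the 64-bit IP table
def final_permutation(x):   return x  # stand-in for FP, the inverse of IP
def f_function(r, k):       return (r * 0x9E3779B1 ^ k) & 0xFFFFFFFF  # toy stand-in, not the DES F

def feistel_encrypt_block(block64, subkeys):
    """Structural sketch of DES encryption: IP, 16 Feistel rounds, half swap, FP."""
    block = initial_permutation(block64)
    left, right = block >> 32, block & 0xFFFFFFFF
    for k in subkeys:                                 # one subkey per round
        left, right = right, left ^ f_function(right, k)
    return final_permutation((right << 32) | left)    # note the final swap of the halves

def feistel_decrypt_block(block64, subkeys):
    """Identical structure with the subkeys applied in reverse order."""
    return feistel_encrypt_block(block64, list(reversed(subkeys)))

subkeys = list(range(1, 17))   # hypothetical round subkeys
message = 0x0123456789ABCDEF
assert feistel_decrypt_block(feistel_encrypt_block(message, subkeys), subkeys) == message
```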
Security and cryptanalysis.
Although more information has been published on the cryptanalysis of DES than any other block cipher, the most practical attack to date is still a brute-force approach. Various minor cryptanalytic properties are known, and three theoretical attacks are possible which, while having a theoretical complexity less than a brute-force attack, require an unrealistic number of known or chosen plaintexts to carry out, and are not a concern in practice.
Brute-force attack.
For any cipher, the most basic method of attack is brute force—trying every possible key in turn. The length of the key determines the number of possible keys, and hence the feasibility of this approach. For DES, questions were raised about the adequacy of its key size early on, even before it was adopted as a standard, and it was the small key size, rather than theoretical cryptanalysis, which dictated a need for a replacement algorithm. As a result of discussions involving external consultants including the NSA, the key size was reduced from 128 bits to 56 bits to fit on a single chip.
In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day. By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented—or, at least, no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s. In 1997, RSA Security sponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by the DESCHALL Project, led by Rocke Verser, Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by the Electronic Frontier Foundation (EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (see EFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes. Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days' worth of searching.
The next confirmed DES cracker was the COPACOBANA machine built in 2006 by teams of the Universities of Bochum and Kiel, both in Germany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits. 120 of these field-programmable gate arrays (FPGAs) of type XILINX Spartan-3 1000 run in parallel. They are grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code breaking tasks as well. One of the more interesting aspects of COPACOBANA is its cost factor. One machine can be built for approximately $10,000. The cost decrease by roughly a factor of 25 over the EFF machine is an example of the continuous improvement of digital hardware—see Moore's law. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. Since 2007, SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA has enhanced and developed successors of COPACOBANA. In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than one day, using 128 Spartan-3 5000's. SciEngines RIVYERA held the record in brute-force breaking DES, having utilized 128 Spartan-3 5000 FPGAs. Their 256 Spartan-6 LX150 model has further lowered this time.
In 2012, David Hulton and Moxie Marlinspike announced a system with 48 Xilinx Virtex-6 LX240T FPGAs, each FPGA containing 40 fully pipelined DES cores running at 400 MHz, for a total capacity of 768 gigakeys/sec. The system can exhaustively search the entire 56-bit DES key space in about 26 hours and this service is offered for a fee online.
Attacks faster than brute force.
There are three attacks known that can break the full 16 rounds of DES with less complexity than a brute-force search: differential cryptanalysis (DC), linear cryptanalysis (LC), and Davies' attack. However, the attacks are theoretical and are generally considered infeasible to mount in practice; these types of attack are sometimes termed certificational weaknesses.
There have also been attacks proposed against reduced-round versions of the cipher, that is, versions of DES with fewer than 16 rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains.
Differential-linear cryptanalysis was proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack. An enhanced version of the attack can break 9-round DES with 2^15.8 chosen plaintexts and has a 2^29.2 time complexity (Biham and others, 2002).
Minor cryptanalytic properties.
DES exhibits the complementation property, namely that
formula_0
where formula_1 is the bitwise complement of formula_2 formula_3 denotes encryption with key formula_4 formula_5 and formula_6 denote plaintext and ciphertext blocks respectively. The complementation property means that the work for a brute-force attack could be reduced by a factor of 2 (or a single bit) under a chosen-plaintext assumption. By definition, this property also applies to TDES cipher.
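A quick check of the complementation property, assuming the PyCryptodome package is available (its Crypto.Cipher.DES module provides single-block DES in ECB mode); the key and plaintext below are arbitrary example values.

```python
from Crypto.Cipher import DES  # assumes the PyCryptodome package is installed

def complement(data: bytes) -> bytes:
    """Bitwise complement of a byte string."""
    return bytes(b ^ 0xFF for b in data)

key = bytes.fromhex("133457799BBCDFF0")  # arbitrary 8-byte key
pt  = bytes.fromhex("0123456789ABCDEF")  # arbitrary 8-byte plaintext block

ct = DES.new(key, DES.MODE_ECB).encrypt(pt)
ct_from_complements = DES.new(complement(key), DES.MODE_ECB).encrypt(complement(pt))

# Encrypting the complemented plaintext under the complemented key yields the complemented ciphertext
print(ct_from_complements == complement(ct))  # True
```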
DES also has four so-called "weak keys". Encryption ("E") and decryption ("D") under a weak key have the same effect (see involution):
formula_7 or equivalently, formula_8
There are also six pairs of "semi-weak keys". Encryption with one of the pair of semiweak keys, formula_9, operates identically to decryption with the other, formula_10:
formula_11 or equivalently, formula_12
It is easy enough to avoid the weak and semiweak keys in an implementation, either by testing for them explicitly, or simply by choosing keys randomly; the odds of picking a weak or semiweak key by chance are negligible. The keys are not really any weaker than any other keys anyway, as they do not give an attack any advantage.
DES has also been proved not to be a group, or more precisely, the set formula_13 (for all possible keys formula_14) under functional composition is not a group, nor "close" to being a group. This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such as Triple DES would not increase the security, because repeated encryption (and decryptions) under different keys would be equivalent to encryption under another, single key.
Simplified DES.
Simplified DES (SDES) was designed for educational purposes only, to help students learn about modern cryptanalytic techniques.
SDES has similar structure and properties to DES, but has been simplified to make it much easier to perform encryption and decryption by hand with pencil and paper.
Some people feel that learning SDES gives insight into DES and other block ciphers, and insight into various cryptanalytic attacks against them.
Replacement algorithms.
Concerns about security and the relatively slow operation of DES in software motivated researchers to propose a variety of alternative block cipher designs, which started to appear in the late 1980s and early 1990s: examples include RC5, Blowfish, IDEA, NewDES, SAFER, CAST5 and FEAL. Most of these designs kept the 64-bit block size of DES, and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In the Soviet Union the GOST 28147-89 algorithm was introduced, with a 64-bit block size and a 256-bit key, which was also used in Russia later.
DES itself can be adapted and reused in a more secure scheme. Many former DES users now use Triple DES (TDES) which was described and analysed by one of DES's patentees (see FIPS Pub 46–3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative is DES-X, which increases the key size by XORing extra key material before and after DES. GDES was a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis.
On January 2, 1997, NIST announced that they wished to choose a successor to DES. In 2001, after an international competition, NIST selected a new cipher, the Advanced Encryption Standard (AES), as a replacement. The algorithm which was selected as the AES was submitted by its designers under the name Rijndael. Other finalists in the NIST AES competition included RC6, Serpent, MARS, and Twofish.
Notes.
References.
| [
{
"math_id": 0,
"text": "E_K(P)=C \\iff E_{\\overline{K}}(\\overline{P})=\\overline{C}"
},
{
"math_id": 1,
"text": "\\overline{x}"
},
{
"math_id": 2,
"text": "x."
},
{
"math_id": 3,
"text": "E_K"
},
{
"math_id": 4,
"text": "K."
},
{
"math_id": 5,
"text": "P"
},
{
"math_id": 6,
"text": "C"
},
{
"math_id": 7,
"text": "E_K(E_K(P)) = P"
},
{
"math_id": 8,
"text": "E_K = D_K."
},
{
"math_id": 9,
"text": "K_1"
},
{
"math_id": 10,
"text": "K_2"
},
{
"math_id": 11,
"text": "E_{K_1}(E_{K_2}(P)) = P"
},
{
"math_id": 12,
"text": "E_{K_2} = D_{K_1}."
},
{
"math_id": 13,
"text": "\\{E_K\\}"
},
{
"math_id": 14,
"text": "K"
}
] | https://en.wikipedia.org/wiki?curid=7978 |
7981806 | Code rate | Non-redundant proportion of an error correction code data stream
In telecommunication and information theory, the code rate (or information rate) of a forward error correction code is the proportion of the data-stream that is useful (non-redundant). That is, if the code rate is formula_0 for every k bits of useful information, the coder generates a total of n bits of data, of which formula_1 are redundant.
If R is the gross bit rate or data signalling rate (inclusive of redundant error coding), the net bit rate (the useful bit rate exclusive of error correction codes) is formula_2.
For example: The code rate of a convolutional code will typically be <templatestyles src="Fraction/styles.css" />1⁄2, <templatestyles src="Fraction/styles.css" />2⁄3, <templatestyles src="Fraction/styles.css" />3⁄4, <templatestyles src="Fraction/styles.css" />5⁄6, <templatestyles src="Fraction/styles.css" />7⁄8, etc., corresponding to one redundant bit inserted after every single, second, third, etc., bit. The code rate of the octet oriented Reed Solomon block code denoted RS(204,188) is 188/204, meaning that 204 − 188 = 16 redundant octets (or bytes) are added to each block of 188 octets of useful information.
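A small worked sketch of these numbers in Python; the 10 Mbit/s gross bit rate is chosen arbitrarily for illustration.

```python
from fractions import Fraction

def code_rate(k: int, n: int) -> Fraction:
    """k useful bits out of every n transmitted bits."""
    return Fraction(k, n)

# Reed-Solomon RS(204,188): 16 redundant octets per 204-octet block
rs = code_rate(188, 204)
print(rs, float(rs))            # 47/51, approximately 0.9216

# Net bit rate for a hypothetical 10 Mbit/s gross rate with a rate-3/4 convolutional code
R = 10_000_000                  # gross bit rate in bit/s (illustrative)
print(float(R * code_rate(3, 4)))  # 7500000.0 useful bit/s
```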
A few error correction codes do not have a fixed code rate—rateless erasure codes.
Note that bit/s is a more widespread unit of measurement for the information rate, implying that it is synonymous with "net bit rate" or "useful bit rate" exclusive of error-correction codes.
References.
| [
{
"math_id": 0,
"text": "k/n"
},
{
"math_id": 1,
"text": "n-k"
},
{
"math_id": 2,
"text": "\\leq R \\cdot k/n"
}
] | https://en.wikipedia.org/wiki?curid=7981806 |
798370 | Computer cooling | The process of removing waste heat from a computer
Computer cooling is required to remove the waste heat produced by computer components, to keep components within permissible operating temperature limits. Components that are susceptible to temporary malfunction or permanent failure if overheated include integrated circuits such as central processing units (CPUs), chipsets, graphics cards, hard disk drives, and solid state drives.
Components are often designed to generate as little heat as possible, and computers and operating systems may be designed to reduce power consumption and consequent heating according to workload, but more heat may still be produced than can be removed without attention to cooling. Use of heatsinks cooled by airflow reduces the temperature rise produced by a given amount of heat. Attention to patterns of airflow can prevent the development of hotspots. Computer fans are widely used along with heatsink fans to reduce temperature by actively exhausting hot air. There are also other cooling techniques, such as liquid cooling. All modern day processors are designed to cut out or reduce their voltage or clock speed if the internal temperature of the processor exceeds a specified limit. This is generally known as Thermal Throttling in the case of reduction of clock speeds, or Thermal Shutdown in the case of a complete shutdown of the device or system.
Cooling may be designed to reduce the ambient temperature within the case of a computer, such as by exhausting hot air, or to cool a single component or small area (spot cooling). Components commonly individually cooled include the CPU, graphics processing unit (GPU) and the northbridge.
Generators of unwanted heat.
Integrated circuits (e.g. CPU and GPU) are the main generators of heat in modern computers. Heat generation can be reduced by efficient design and selection of operating parameters such as voltage and frequency, but ultimately, acceptable performance can often only be achieved by managing significant heat generation.
In operation, the temperature of a computer's components will rise until the heat transferred to the surroundings is equal to the heat produced by the component, that is, when thermal equilibrium is reached. For reliable operation, the temperature must never exceed a specified maximum permissible value unique to each component. For semiconductors, instantaneous junction temperature, rather than component case, heatsink, or ambient temperature is critical.
Cooling can be impaired by:
Damage prevention.
Because high temperatures can significantly reduce life span or cause permanent damage to components, and the heat output of components can sometimes exceed the computer's cooling capacity, manufacturers often take additional precautions to ensure that temperatures remain within safe limits. A computer with thermal sensors integrated in the CPU, motherboard, chipset, or GPU can shut itself down when high temperatures are detected to prevent permanent damage, although this may not completely guarantee long-term safe operation. Before an overheating component reaches this point, it may be "throttled" until temperatures fall below a safe point using dynamic frequency scaling technology. Throttling reduces the operating frequency and voltage of an integrated circuit or disables non-essential features of the chip to reduce heat output, often at the cost of slightly or significantly reduced performance. For desktop and notebook computers, throttling is often controlled at the BIOS level. Throttling is also commonly used to manage temperatures in smartphones and tablets, where components are packed tightly together with little to no active cooling, and with additional heat transferred from the hand of the user.
The user can also perform several tasks in order to preemptively prevent damage from happening. They can perform a visual inspection of the cooler and case fans. If any of them are not spinning correctly, it is likely that they will need to be replaced. The user should also clean the fans thoroughly, since dust and debris can increase the ambient case temperature and impact fan performance. The best way to do so is with compressed air in an open space. Another preemptive technique to prevent damage is to replace the thermal paste regularly.
Mainframes and supercomputers.
As electronic computers became larger and more complex, cooling of the active components became a critical factor for reliable operation. Early vacuum-tube computers, with relatively large cabinets, could rely on natural or forced air circulation for cooling. However, solid-state devices were packed much more densely and had lower allowable operating temperatures.
Starting in 1965, IBM and other manufacturers of mainframe computers sponsored intensive research into the physics of cooling densely packed integrated circuits. Many air and liquid cooling systems were devised and investigated, using methods such as natural and forced convection, direct air impingement, direct liquid immersion and forced convection, pool boiling, falling films, flow boiling, and liquid jet impingement. Mathematical analysis was used to predict temperature rises of components for each possible cooling system geometry.
IBM developed three generations of the Thermal Conduction Module (TCM) which used a water-cooled cold plate in direct thermal contact with integrated circuit packages. Each package had a thermally conductive pin pressed onto it, and helium gas surrounded chips and heat-conducting pins. The design could remove up to 27 watts from a chip and up to 2000 watts per module, while maintaining chip package temperatures of around . Systems using TCMs were the 3081 family (1980), ES/3090 (1984) and some models of the ES/9000 (1990). In the IBM 3081 processor, TCMs allowed up to 2700 watts on a single printed circuit board while maintaining chip temperature at . Thermal conduction modules using water cooling were also used in mainframe systems manufactured by other companies including Mitsubishi and Fujitsu.
The Cray-1 supercomputer designed in 1976 had a distinctive cooling system. The machine was only in height and in diameter, and consumed up to 115 kilowatts; this is comparable to the average power consumption of a few dozen Western homes or a medium-sized car. The integrated circuits used in the machine were the fastest available at the time, using emitter-coupled logic; however, the speed was accompanied by high power consumption compared to later CMOS devices.
Heat removal was critical. Refrigerant was circulated through piping embedded in vertical cooling bars in twelve columnar sections of the machine. Each of the 1662 printed circuit modules of the machine had a copper core and was clamped to the cooling bar. The system was designed to maintain the cases of integrated circuits at no more than , with refrigerant circulating at . Final heat rejection was through a water-cooled condenser. Piping, heat exchangers, and pumps for the cooling system were arranged in an upholstered bench seat around the outside of the base of the computer. About 20 percent of the machine's weight in operation was refrigerant.
In the later Cray-2, with its more densely packed modules, Seymour Cray had trouble effectively cooling the machine using the metal conduction technique with mechanical refrigeration, so he switched to 'liquid immersion' cooling. This method involved filling the chassis of the Cray-2 with a liquid called Fluorinert. Fluorinert, as its name implies, is an inert liquid that does not interfere with the operation of electronic components. As the components came to operating temperature, the heat would dissipate into the Fluorinert, which was pumped out of the machine to a chilled water heat exchanger.
Performance per watt of modern systems has greatly improved; many more computations can be carried out with a given power consumption than was possible with the integrated circuits of the 1980s and 1990s. Recent supercomputer projects such as Blue Gene rely on air cooling, which reduces cost, complexity, and size of systems compared to liquid cooling.
Air cooling.
Fans.
Fans are used when natural convection is insufficient to remove heat. Fans may be fitted to the computer case or attached to CPUs, GPUs, chipsets, power supply units (PSUs), hard drives, or as cards plugged into an expansion slot. Common fan sizes include 40, 60, 80, 92, 120, and 140 mm. 200, 230, 250 and 300 mm fans are sometimes used in high-performance personal computers.
Performance of fans in chassis.
A computer has a certain resistance to air flowing through the chassis and components. This is the sum of all the smaller impediments to air flow, such as the inlet and outlet openings, air filters, internal chassis, and electronic components. Fans are simple air pumps that provide pressure to the air of the inlet side relative to the output side. That pressure difference moves air through the chassis, with air flowing to areas of lower pressure.
Fans generally have two published specifications: free air flow and maximum differential pressure. Free air flow is the amount of air a fan will move with zero back-pressure. Maximum differential pressure is the amount of pressure a fan can generate when completely blocked. In between these two extremes are a series of corresponding measurements of flow versus pressure which is usually presented as a graph. Each fan model will have a unique curve, like the dashed curves in the adjacent illustration.
Parallel vis-à-vis series installation.
Fans can be installed parallel to each other, in series, or a combination of both. Parallel installation would be fans mounted side by side. Series installation would be a second fan in line with another fan such as an inlet fan and an exhaust fan. To simplify the discussion, it is assumed the fans are the same model.
Parallel fans will provide double the free air flow but no additional driving pressure. Series installation, on the other hand, will double the available static pressure but not increase the free air flow rate. The adjacent illustration shows a single fan versus two fans in parallel with a maximum pressure of of water and a doubled flow rate of about .
Note that air flow changes as the square root of the pressure. Thus, doubling the pressure will only increase the flow 1.41 (√2) times, not twice as might be assumed. Another way of looking at this is that the pressure must go up by a factor of four to double the flow rate.
To determine flow rate through a chassis, the chassis impedance curve can be measured by imposing an arbitrary pressure at the inlet to the chassis and measuring the flow through the chassis. This requires fairly sophisticated equipment. With the chassis impedance curve (represented by the solid red and black lines on the adjacent curve) determined, the actual flow through the chassis as generated by a particular fan configuration is graphically shown where the chassis impedance curve crosses the fan curve. The slope of the chassis impedance curve is a square root function, where doubling the flow rate required four times the differential pressure.
In this particular example, adding a second fan provided marginal improvement with the flow for both configurations being approximately . While not shown on the plot, a second fan in series would provide slightly better performance than the parallel installation.
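A numeric sketch of finding that operating point, using a hypothetical linear fan curve and a square-law chassis impedance curve; all numbers below are illustrative and are not taken from the figures.

```python
def fan_pressure(flow, p_max=0.20, flow_free=80.0):
    """Hypothetical fan curve: pressure falls linearly from p_max (fully blocked) to zero at free-air flow."""
    return max(p_max * (1.0 - flow / flow_free), 0.0)

def chassis_pressure(flow, k=4.0e-4):
    """Chassis impedance curve: the required pressure rises with the square of the flow."""
    return k * flow * flow

def operating_point(fan):
    """Step up the flow until the fan can no longer supply the pressure the chassis demands."""
    flow = 0.0
    while fan(flow) > chassis_pressure(flow):
        flow += 0.01
    return flow

print(round(operating_point(fan_pressure), 1))  # about 19.5 (arbitrary flow units, e.g. CFM)

# Two identical fans in parallel double the free-air flow but not the pressure; because the
# required pressure grows with the square of the flow, the delivered flow barely improves.
parallel = lambda flow: fan_pressure(flow, p_max=0.20, flow_free=160.0)
print(round(operating_point(parallel), 1))      # about 20.9, only a marginal gain
```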
Temperature vis-à-vis flow rate.
The equation for required airflow through a chassis is
formula_0
where
A simple conservative rule of thumb for cooling flow requirements, discounting such effects as heat loss through the chassis walls and laminar versus turbulent flow, and accounting for the constants for specific heat and density at sea level is:
formula_6
formula_7
For example, a typical chassis with 500 watts of load, maximum internal temperature in a environment, i.e. a difference of :
formula_8
This would be actual flow through the chassis and not the free air rating of the fan. It should also be noted that "Q", the heat transferred, is a function of the heat transfer efficiency of a CPU or GPU cooler to the airflow.
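A sketch of that calculation from first principles, using sea-level air properties; the 500 W load matches the example above, while the 20 °C allowable temperature rise is an assumption made only for illustration.

```python
def required_airflow_m3_per_h(heat_w, delta_t_c, rho=1.20, cp=1005.0):
    """Volumetric airflow needed to carry away heat_w watts with a delta_t_c temperature rise,
    using sea-level air density rho (kg/m^3) and specific heat cp (J/(kg*K))."""
    mass_flow = heat_w / (cp * delta_t_c)  # kg/s, from Q = m_dot * cp * delta_T
    return mass_flow / rho * 3600.0        # convert m^3/s to m^3/h

def m3h_to_cfm(m3h):
    return m3h / 1.699                     # 1 CFM is roughly 1.699 m^3/h

flow = required_airflow_m3_per_h(500, 20)  # 500 W load, assumed 20 degC rise
print(round(flow, 1), "m^3/h")             # about 74.6 m^3/h
print(round(m3h_to_cfm(flow), 1), "CFM")   # about 43.9 CFM, close to the 1.76 * W / delta_T rule of thumb
```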
Piezoelectric pump.
A "dual piezo cooling jet", patented by GE, uses vibrations to pump air through the device. The initial device is three millimetres thick and consists of two nickel discs that are connected on either side to a sliver of piezoelectric ceramics. An alternating current passed through the ceramic component causes it to expand and contract at up to 150 times per second so that the nickel discs act like a bellows. Contracted, the edges of the discs are pushed apart and suck in hot air. Expanding brings the nickel discs together, expelling the air at high velocity.
The device has no bearings and does not require a motor. It is thinner and consumes less energy than typical fans. The jet can move the same amount of air as a cooling fan twice its size while consuming half as much electricity and at lower cost.
Passive cooling.
Passive heatsink cooling involves attaching a block of machined or extruded metal to the part that needs cooling. A thermal adhesive may be used. More commonly for a personal computer CPU, a clamp holds the heatsink directly over the chip, with a thermal grease or thermal pad spread between. This block has fins and ridges to increase its surface area. The heat conductivity of metal is much better than that of air, and it radiates heat better than the component that it is protecting (usually an integrated circuit or CPU). Fan-cooled aluminium heatsinks were originally the norm for desktop computers, but nowadays many heatsinks feature copper base-plates or are entirely made of copper.
Dust buildup between the metal fins of a heatsink gradually reduces efficiency, but can be countered with a gas duster by blowing away the dust along with any other unwanted excess material.
Passive heatsinks are commonly found on older CPUs, parts that do not get very hot (such as the chipset), low-power computers, and embedded devices.
Usually a heatsink is attached to the integrated heat spreader (IHS), essentially a large, flat plate attached to the CPU, with conduction paste layered between. This dissipates or spreads the heat locally. Unlike a heatsink, a spreader is meant to redistribute heat, not to remove it. In addition, the IHS protects the fragile CPU.
Passive cooling involves no fan noise, as convection forces move air over the heatsink.
Other techniques.
Liquid immersion cooling.
Another growing trend due to the increasing heat density of computers, GPUs, FPGAs, and ASICs is to immerse the entire computer or select components in a thermally, but not electrically, conductive liquid. Although rarely used for the cooling of personal computers, liquid immersion is a routine method of cooling large power distribution components such as transformers. It is also becoming popular with data centers. Personal computers cooled in this manner may not require either fans or pumps, and may be cooled exclusively by passive heat exchange between the computer hardware and the enclosure it is placed in. A heat exchanger (i.e. heater core or radiator) might still be needed though, and the piping also needs to be placed correctly.
The coolant used must have sufficiently low electrical conductivity not to interfere with the normal operation of the computer. If the liquid is somewhat electrically conductive, it may cause electrical shorts between components or traces and permanently damage them. For these reasons, it is preferred that the liquid be an insulator (dielectric) and not conduct electricity.
A wide variety of liquids exist for this purpose, including transformer oils, synthetic single-phase and dual phase dielectric coolants such as 3M Fluorinert or 3M Novec. Non-purpose oils, including cooking, motor and silicone oils, have been successfully used for cooling personal computers.
Some fluids used in immersion cooling, especially hydrocarbon-based materials such as mineral oils, cooking oils, and organic esters, may degrade some common materials used in computers such as rubbers, polyvinyl chloride (PVC), and thermal greases. Therefore, it is critical to review the material compatibility of such fluids prior to use. Mineral oil in particular has been found to have negative effects on PVC and rubber-based wire insulation. Thermal pastes used to transfer heat from processors and graphics cards to heatsinks have been reported to dissolve in some liquids, though with negligible impact on cooling unless the components were removed and operated in air.
Evaporation, especially with two-phase coolants, can pose a problem, and the liquid may need to be either regularly refilled or sealed inside the computer's enclosure. Immersion cooling can allow for extremely low PUE values of 1.05, versus air cooling's 1.35, and allows for up to 100 kW of computing power (heat dissipation, TDP) per 19-inch rack, as opposed to air cooling, which usually handles up to 23 kW.
Waste heat reduction.
Where powerful computers with many features are not required, less powerful computers or ones with fewer features can be used. As of 2011, a VIA EPIA motherboard with CPU typically dissipates approximately 25 watts of heat, whereas a more capable Pentium 4 motherboard and CPU typically dissipates around 140 watts. Computers can be powered with direct current from an external power supply unit which does not generate heat inside the computer case. The replacement of cathode ray tube (CRT) displays by more efficient thin-screen liquid crystal display (LCD) ones in the early twenty-first century has reduced power consumption significantly.
Heat-sinks.
A component may be fitted in good thermal contact with a heatsink, a passive device with large thermal capacity and with a large surface area relative to its volume. Heatsinks are usually made of a metal with high thermal conductivity such as aluminium or copper, and incorporate fins to increase surface area. Heat from a relatively small component is transferred to the larger heatsink; the equilibrium temperature of the component plus heatsink is much lower than the component's alone would be. Heat is carried away from the heatsink by convective or fan-forced airflow. Fan cooling is often used to cool processors and graphics cards that consume significant amounts of electrical energy. In a computer, a typical heat-generating component may be manufactured with a flat surface. A block of metal with a corresponding flat surface and finned construction, sometimes with an attached fan, is clamped to the component. To fill poorly conducting air gaps due to imperfectly flat and smooth surfaces, a thin layer of thermal grease, a thermal pad, or thermal adhesive may be placed between the component and heatsink.
Heat is removed from the heatsink by convection, to some extent by radiation, and possibly by conduction if the heatsink is in thermal contact with, say, the metal case. Inexpensive fan-cooled aluminium heatsinks are often used on standard desktop computers. Heatsinks with copper base-plates, or made of copper, have better thermal characteristics than those made of aluminium. A copper heatsink is more effective than an aluminium unit of the same size, which is relevant with regard to the high-power-consumption components used in high-performance computers.
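The usual way to quantify this is as a chain of thermal resistances from the chip junction through the case and interface material to the heatsink and on to the ambient air. The sketch below uses hypothetical resistance values, not figures for any particular part.

```python
def junction_temperature(power_w, t_ambient_c, resistances_c_per_w):
    """Junction temperature when power_w flows through a series chain of thermal resistances."""
    return t_ambient_c + power_w * sum(resistances_c_per_w)

# Hypothetical chain: junction-to-case 0.3, thermal interface 0.1, heatsink-to-air 0.4 (degC per watt)
print(junction_temperature(95, 25, [0.3, 0.1, 0.4]))   # 101.0 degC

# A lower-resistance heatsink, such as a copper unit with forced airflow at 0.25 degC/W:
print(junction_temperature(95, 25, [0.3, 0.1, 0.25]))  # 86.75 degC
```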
Passive heatsinks are commonly found on older CPUs, parts that do not dissipate much power (such as the chipset), computers with low-power processors, and equipment where silent operation is critical and fan noise unacceptable.
Usually a heatsink is clamped to the integrated heat spreader (IHS), a flat metal plate the size of the CPU package which is part of the CPU assembly and spreads the heat locally. A thin layer of thermal compound is placed between them to compensate for surface imperfections. The spreader's primary purpose is to redistribute heat. The heatsink fins improve its efficiency.
Several brands of DDR2, DDR3, DDR4 and DDR5 DRAM memory modules are fitted with a finned heatsink clipped onto the top edge of the module. The same technique is used for video cards that use a finned passive heatsink on the GPU.
Higher-end M.2 SSDs can be prone to significant heat generation, and as a result these may be sold with a heatsink included, or alternatively a third-party heatsink may be attached by the user during installation.
Dust tends to build up in the crevices of finned heatsinks, particularly with the high airflow produced by fans. This keeps the air away from the hot component, reducing cooling effectiveness; however, removing the dust restores effectiveness.
Peltier (thermoelectric) cooling.
Peltier junctions are generally only around 10–15% as efficient as the ideal refrigerator (Carnot cycle), compared with 40–60% achieved by conventional compression cycle systems (reverse Rankine systems using compression/expansion). Due to this lower efficiency, thermoelectric cooling is generally only used in environments where the solid state nature (no moving parts, low maintenance, compact size, and orientation insensitivity) outweighs pure efficiency.
Modern TECs use several stacked units each composed of dozens or hundreds of thermocouples laid out next to each other, which allows for a substantial amount of heat transfer. A combination of bismuth and tellurium is most commonly used for the thermocouples.
As active heat pumps which consume power, TECs can produce temperatures below ambient, impossible with passive heatsinks, radiator-cooled liquid cooling, and heatpipe HSFs. However, while pumping heat, a Peltier module will typically consume more electric power than the heat amount being pumped.
It is also possible to use a Peltier element together with a high pressure refrigerant (two phase cooling) to cool the CPU.
Liquid cooling.
Liquid cooling is a highly effective method of removing excess heat, with the most common heat transfer fluid in desktop PCs being (distilled) water. The advantages of water cooling over air cooling include water's higher specific heat capacity and thermal conductivity.
The principle used in a typical (active) liquid cooling system for computers is identical to that used in an automobile's internal combustion engine, with the water being circulated by a water pump through a water block mounted on the CPU (and sometimes additional components as GPU and northbridge) and out to a heat exchanger, typically a radiator. The radiator is itself usually cooled additionally by means of a fan. Besides a fan, it could possibly also be cooled by other means, such as a Peltier cooler (although Peltier elements are most commonly placed directly on top of the hardware to be cooled, and the coolant is used to conduct the heat away from the hot side of the Peltier element). A coolant reservoir is often also connected to the system.
Besides active liquid cooling systems, passive liquid cooling systems are also sometimes used. These systems often leave out a fan or a water pump, theoretically increasing their reliability and making them quieter than active systems. The downsides of these systems are that they are much less efficient in discarding the heat and thus also need to have much more coolant – and thus a much bigger coolant reservoir – giving the coolant more time to cool down.
Liquids allow the transfer of more heat from the parts being cooled than air, making liquid cooling suitable for overclocking and high performance computer applications. Compared to air cooling, liquid cooling is also influenced less by the ambient temperature. Liquid cooling's comparatively low noise level compares favorably to that of air cooling, which can become quite noisy.
Disadvantages of liquid cooling include complexity and the potential for a coolant leak. Leaking water (and any additives in the water) can damage electronic components with which it comes into contact, and the need to test for and repair leaks makes for more complex and less reliable installations. (The first major foray into the field of liquid-cooled personal computers for general use, the high-end versions of Apple's Power Mac G5, was ultimately doomed by a propensity for coolant leaks.) An air-cooled heatsink is generally much simpler to build, install, and maintain than a water cooling solution, although CPU-specific water cooling kits can also be found, which may be just as easy to install as an air cooler. These are not limited to CPUs, and liquid cooling of GPU cards is also possible.
While originally limited to mainframe computers, liquid cooling has become a practice largely associated with overclocking in the form of either manufactured all-in-one (AIO) kits or do-it-yourself setups assembled from individually gathered parts. The past few years have seen an increase in the popularity of liquid cooling in pre-assembled, moderate to high performance, desktop computers. Sealed ("closed-loop") systems incorporating a small pre-filled radiator, fan, and waterblock simplify the installation and maintenance of water cooling at a slight cost in cooling effectiveness relative to larger and more complex setups. Liquid cooling is typically combined with air cooling, using liquid cooling for the hottest components, such as CPUs or GPUs, while retaining the simpler and cheaper air cooling for less demanding components.
The IBM Aquasar system uses "hot water cooling" to achieve energy efficiency, the water being used to heat buildings as well.
Since 2011, the effectiveness of water cooling has prompted a series of all-in-one (AIO) water cooling solutions. AIO solutions result in a unit that is much simpler to install, and most units have been reviewed positively by review sites.
Heat pipes and vapor chambers.
A heat pipe is a hollow tube containing a heat transfer liquid. The liquid absorbs heat and evaporates at one end of the pipe. The vapor travels to the other (cooler) end of the tube, where it condenses, giving up its latent heat. The liquid returns to the hot end of the tube by gravity or capillary action and repeats the cycle. Heat pipes have a much higher effective thermal conductivity than solid materials. For use in computers, the heatsink on the CPU is attached to a larger radiator heatsink. Both heatsinks are hollow, as is the attachment between them, creating one large heat pipe that transfers heat from the CPU to the radiator, which is then cooled using some conventional method. This method is usually used when space is tight, as in small form-factor PCs and laptops, or where no fan noise can be tolerated, as in audio production. Because of the efficiency of this method of cooling, many desktop CPUs and GPUs, as well as high end chipsets, use heat pipes or vapor chambers in addition to active fan-based cooling and passive heatsinks to remain within safe operating temperatures. A vapor chamber operates on the same principles as a heat pipe but takes on the form of a slab or sheet instead of a pipe. Heat pipes may be placed vertically on top and form part of vapor chambers. Vapor chambers may also be used on high-end smartphones.
Electrostatic air movement and corona discharge effect cooling.
The cooling technology under development by Kronos and Thorn Micro Technologies employs a device called an ionic wind pump (also known as an electrostatic fluid accelerator). The basic operating principle of an ionic wind pump is corona discharge, an electrical discharge near a charged conductor caused by the ionization of the surrounding air.
The corona discharge cooler developed by Kronos works in the following manner: A high electric field is created at the tip of the cathode, which is placed on one side of the CPU. The high energy potential causes the oxygen and nitrogen molecules in the air to become ionized (positively charged) and create a corona (a halo of charged particles). Placing a grounded anode at the opposite end of the CPU causes the charged ions in the corona to accelerate towards the anode, colliding with neutral air molecules on the way. During these collisions, momentum is transferred from the ionized gas to the neutral air molecules, resulting in movement of gas towards the anode.
The advantages of the corona-based cooler are its lack of moving parts, thereby eliminating certain reliability issues and operating with a near-zero noise level and moderate energy consumption.
Soft cooling.
Soft cooling is the practice of utilizing software to take advantage of CPU power saving technologies to minimize energy use. This is done using halt instructions to turn off or put in standby state CPU subparts that aren't being used or by underclocking the CPU. While resulting in lower total speeds, this can be very useful if overclocking a CPU to improve user experience rather than increase raw processing power, since it can prevent the need for noisier cooling. Contrary to what the term suggests, it is not a form of cooling but of reducing heat creation.
Undervolting.
Undervolting is a practice of running the CPU or any other component with voltages below the device specifications. An undervolted component draws less power and thus produces less heat. The ability to do this varies by manufacturer, product line, and even different production runs of the same product (as well as that of other components in the system), but processors are often specified to use voltages higher than strictly necessary. This tolerance ensures that the processor will have a higher chance of performing correctly under sub-optimal conditions, such as a lower-quality motherboard or low power supply voltages. Below a certain limit, the processor will not function correctly, although undervolting too far does not typically lead to permanent hardware damage (unlike overvolting).
Undervolting is used for quiet systems, as less cooling is needed because of the reduction of heat production, allowing noisy fans to be omitted. It is also used when battery charge life must be maximized.
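The heat reduction from undervolting can be illustrated with the usual approximation that dynamic switching power in CMOS logic scales with the square of the supply voltage (P ≈ C·V²·f). The sketch below is illustrative only; the capacitance, frequency, and voltage values are hypothetical round numbers, not figures for any particular processor.

```python
# Rough illustration of why undervolting reduces heat: dynamic CMOS power
# scales approximately as P = C * V^2 * f (switched capacitance, voltage, frequency).
def dynamic_power(c_switched, voltage, frequency):
    """Approximate dynamic switching power in watts."""
    return c_switched * voltage ** 2 * frequency

# Hypothetical example values (not from any real CPU datasheet):
C = 2.0e-8      # effective switched capacitance in farads
f = 3.0e9       # clock frequency in hertz
p_stock = dynamic_power(C, 1.20, f)        # stock voltage
p_undervolted = dynamic_power(C, 1.05, f)  # undervolted

print(f"stock: {p_stock:.1f} W, undervolted: {p_undervolted:.1f} W "
      f"({100 * (1 - p_undervolted / p_stock):.0f}% less dynamic power)")
```

Even a modest voltage reduction gives a disproportionate drop in dynamic power, which is why undervolting is attractive for quiet and battery-powered systems.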
Chip-integrated.
Conventional cooling techniques all attach their "cooling" component to the outside of the computer chip package. This "attaching" technique will always exhibit some thermal resistance, reducing its effectiveness. The heat can be more efficiently and quickly removed by directly cooling the local hot spots of the chip, within the package. At these locations, power dissipation of over 300 W/cm2 (a typical CPU is less than 100 W/cm2) can occur, although future systems are expected to exceed 1000 W/cm2. This form of local cooling is essential to developing high power density chips. This approach has led to the investigation of integrating cooling elements into the computer chip. Currently there are two techniques: micro-channel heatsinks, and jet impingement cooling.
In micro-channel heatsinks, channels are fabricated into the silicon chip (CPU), and coolant is pumped through them. The channels are designed with very large surface area which results in large heat transfers. Heat dissipation of 3000 W/cm2 has been reported with this technique. The heat dissipation can be further increased if two-phase flow cooling is applied. Unfortunately, the system requires large pressure drops, due to the small channels, and the heat flux is lower with dielectric coolants used in electronic cooling.
Another local chip cooling technique is jet impingement cooling. In this technique, a coolant is flowed through a small orifice to form a jet. The jet is directed toward the surface of the CPU chip, and can effectively remove large heat fluxes. Heat dissipation of over 1000 W/cm2 has been reported. The system can be operated at lower pressure in comparison to the micro-channel method. The heat transfer can be further increased using two-phase flow cooling and by integrating return flow channels (hybrid between micro-channel heatsinks and jet impingement cooling).
Phase-change cooling.
Phase-change cooling is an extremely effective way to cool the processor. A vapor compression phase-change cooler is a unit that usually sits underneath the PC, with a tube leading to the processor. Inside the unit is a compressor of the same type as in an air conditioner. The compressor compresses a gas (or mixture of gases) which comes from the evaporator (the CPU cooler discussed below). Then, the very hot high-pressure vapor is pushed into the condenser (heat dissipation device), where it condenses from a hot gas into a liquid, typically subcooled at the exit of the condenser. The liquid is then fed to an expansion device (a restriction in the system) to cause a drop in pressure and vaporize the fluid (cause it to reach a pressure where it can boil at the desired temperature); the expansion device used can range from a simple capillary tube to a more elaborate thermal expansion valve. The liquid evaporates (changing phase), absorbing the heat from the processor as it draws extra energy from its environment to accommodate this change (see latent heat). The evaporation can produce temperatures reaching around . The liquid flows into the evaporator cooling the CPU, turning into a vapor at low pressure. At the end of the evaporator this gas flows down to the compressor and the cycle begins again. This way, the processor can be cooled to temperatures ranging from , depending on the load, wattage of the processor, the refrigeration system (see refrigeration) and the gas mixture used. This type of system suffers from a number of issues (cost, weight, size, vibration, maintenance, cost of electricity, noise, need for a specialized computer tower), but the main concern is the dew point and the proper insulation of all sub-ambient surfaces (the pipes will sweat, dripping water on sensitive electronics).
Alternately, a new breed of cooling system is being developed, inserting a pump into the thermosiphon loop. This adds another degree of flexibility for the design engineer, as the heat can now be effectively transported away from the heat source and either reclaimed or dissipated to ambient. Junction temperature can be tuned by adjusting the system pressure; higher pressure equals higher fluid saturation temperatures. This allows for smaller condensers, smaller fans, and/or the effective dissipation of heat in a high ambient temperature environment. These systems are, in essence, the next generation fluid cooling paradigm, as they are approximately 10 times more efficient than single-phase water. Since the system uses a dielectric as the heat transport medium, leaks do not cause a catastrophic failure of the electric system.
This type of cooling is seen as a more extreme way to cool components since the units are relatively expensive compared to the average desktop. They also generate a significant amount of noise, since they are essentially refrigerators; however, the compressor choice and air cooling system is the main determinant of this, allowing for flexibility for noise reduction based on the parts chosen.
A "thermosiphon" traditionally refers to a closed system consisting of several pipes and/or chambers, with a larger chamber containing a small reservoir of liquid (often having a boiling point just above ambient temperature, but not necessarily). The larger chamber is as close to the heat source and designed to conduct as much heat from it into the liquid as possible, for example, a CPU cold plate with the chamber inside it filled with the liquid. One or more pipes extend upward into some sort of radiator or similar heat dissipation area, and this is all set up such that the CPU heats the reservoir and liquid it contains, which begins boiling, and the vapor travels up the tube(s) into the radiator/heat dissipation area, and then after condensing, drips back down into the reservoir, or runs down the sides of the tube. This requires no moving parts, and is somewhat similar to a heat pump, except that capillary action is not used, making it potentially better in some sense (perhaps most importantly, better in that it is much easier to build, and much more customizable for specific use cases and the flow of coolant/vapor can be arranged in a much wider variety of positions and distances, and have far greater thermal mass and maximum capacity compared to heat pipes which are limited by the amount of coolant present and the speed and flow rate of coolant that capillary action can achieve with the wicking used, often sintered copper powder on the walls of the tube, which have a limited flow rate and capacity.)
Liquid nitrogen.
As liquid nitrogen boils at , far below the freezing point of water, it is valuable as an extreme coolant for short overclocking sessions.
In a typical installation of liquid nitrogen cooling, a copper or aluminium pipe is mounted on top of the processor or graphics card. After the system has been heavily insulated against condensation, the liquid nitrogen is poured into the pipe, resulting in temperatures well below .
Evaporation devices ranging from cut out heatsinks with pipes attached to custom milled copper containers are used to hold the nitrogen as well as to prevent large temperature changes. However, after the nitrogen evaporates, it has to be refilled. In the realm of personal computers, this method of cooling is seldom used in contexts other than overclocking trial-runs and record-setting attempts, as the CPU will usually expire within a relatively short period of time due to temperature stress caused by changes in internal temperature.
Although liquid nitrogen is non-flammable, it can condense oxygen directly from air. Mixtures of liquid oxygen and flammable materials can be dangerously explosive.
Liquid nitrogen cooling is, generally, only used for processor benchmarking, due to the fact that continuous usage may cause permanent damage to one or more parts of the computer and, if handled in a careless way, can even harm the user, causing frostbite.
Liquid helium.
Liquid helium, colder than liquid nitrogen, has also been used for cooling. Liquid helium boils at , and temperatures ranging from have been measured from the heatsink. However, liquid helium is more expensive and more difficult to store and use than liquid nitrogen. Also, extremely low temperatures can cause integrated circuits to stop functioning. Silicon-based semiconductors, for example, will freeze out at around .
Optimization.
Cooling can be improved by several techniques which may involve additional expense or effort. These techniques are often used, in particular, by those who run parts of their computer (such as the CPU and GPU) at higher voltages and frequencies than specified by manufacturer (overclocking), which increases heat generation.
The installation of higher performance, non-stock cooling may also be considered modding. Many overclockers simply buy more efficient, and often, more expensive fan and heatsink combinations, while others resort to more exotic ways of computer cooling, such as liquid cooling, Peltier effect heatpumps, heat pipe or phase change cooling.
There are also some related practices that have a positive impact in reducing system temperatures:
Thermally conductive compounds.
Often called Thermal Interface Material (TIM).
Perfectly flat surfaces in contact give optimal cooling, but perfect flatness and absence of microscopic air gaps is not practically possible, particularly in mass-produced equipment. A very thin skim of thermal compound, which is much more thermally conductive than air, though much less so than metal, can improve thermal contact and cooling by filling in the air gaps. If only a small amount of compound just sufficient to fill the gaps is used, the best temperature reduction will be obtained.
There is much debate about the merits of compounds, and overclockers often consider some compounds to be superior to others. The main consideration is to use the minimal amount of thermal compound required to even out surfaces, as the thermal conductivity of compound is typically 1/3 to 1/400 that of metal, though much better than air. The conductivity of heatsink compound ranges from about 0.5 to 80 W/(m·K); that of aluminium is about 200 W/(m·K), and that of air about 0.02 W/(m·K). Heat-conductive pads are also used, often fitted by manufacturers to heatsinks. They are less effective than properly applied thermal compound, but simpler to apply and, if fixed to the heatsink, cannot be omitted by users unaware of the importance of good thermal contact, or replaced by a thick and ineffective layer of compound.
Unlike some techniques discussed here, the use of thermal compound or padding is almost universal when dissipating significant amounts of heat.
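To put the conductivity figures above in perspective, a one-dimensional estimate R = t/(k·A) gives the thermal resistance of a thin layer between the heat spreader and the heatsink base. The layer thickness and contact area in this sketch are hypothetical round numbers chosen only for illustration; the conductivities are the ones quoted above.

```python
# Thermal resistance of a thin planar layer: R = t / (k * A), in kelvin per watt.
def layer_resistance(thickness_m, conductivity_w_per_mk, area_m2):
    return thickness_m / (conductivity_w_per_mk * area_m2)

area = 0.03 * 0.03       # 30 mm x 30 mm contact patch (hypothetical)
thickness = 50e-6        # 50 micrometre layer (hypothetical)

for name, k in [("air gap", 0.02), ("typical compound", 5.0), ("aluminium", 200.0)]:
    r = layer_resistance(thickness, k, area)
    # Temperature rise across the layer for a 100 W heat flow:
    print(f"{name:16s} k={k:6.2f} W/(m*K)  R={r:8.4f} K/W  dT@100W={100*r:7.2f} K")
```

The contrast between the air-gap row and the compound row is why even a mediocre compound, applied thinly, is far better than trapped air.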
Heat sink lapping.
Mass-produced CPU heat spreaders and heatsink bases are never perfectly flat or smooth; if these surfaces are placed in the best contact possible, there will be air gaps which reduce heat conduction. This can easily be mitigated by the use of thermal compound, but for the best possible results surfaces must be as flat as possible. This can be achieved by a laborious process known as lapping, which can reduce CPU temperature by typically .
Rounded cables.
Most older PCs use flat ribbon cables to connect storage drives (IDE or SCSI). These large flat cables greatly impede airflow by causing drag and turbulence. Overclockers and modders often replace these with rounded cables, with the conductive wires bunched together tightly to reduce surface area. Theoretically, the parallel strands of conductors in a ribbon cable serve to reduce crosstalk (signal carrying conductors inducing signals in nearby conductors), but there is no empirical evidence of rounding cables reducing performance. This may be because the length of the cable is short enough so that the effect of crosstalk is negligible. Problems usually arise when the cable is not electromagnetically protected and the length is considerable, a more frequent occurrence with older network cables.
These computer cables can then be cable tied to the chassis or other cables to further increase airflow.
This is less of a problem with new computers that use serial ATA which has a much narrower cable.
Airflow.
The colder the cooling medium (the air), the more effective the cooling. Cooling can be improved with the following guidelines:
Fewer fans but strategically placed will improve the airflow internally within the PC and thus lower the overall internal case temperature in relation to ambient conditions. The use of larger fans also improves efficiency and lowers the amount of waste heat along with the amount of noise generated by the fans while in operation.
There is little agreement on the effectiveness of different fan placement configurations, and little in the way of systematic testing has been done. For a rectangular PC (ATX) case, a fan in the front with a fan in the rear and one in the top has been found to be a suitable configuration. However, AMD's (somewhat outdated) system cooling guidelines note that "A front cooling fan does not seem to be essential. In fact, in some extreme situations, testing showed these fans to be recirculating hot air rather than introducing cool air." It may be that fans in the side panels could have a similar detrimental effect, possibly through disrupting the normal air flow through the case. However, this is unconfirmed and probably varies with the configuration.
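A common rule of thumb relating heat load to required airflow, which also appears in the formula list at the end of this article, is CFM ≈ 3.16 × P / (allowed temperature rise in °F). The sketch below simply applies it; the 500 W / 30 °F case mirrors the worked value in that list, and the second call uses arbitrary illustrative numbers.

```python
# Rule-of-thumb airflow requirement: CFM = 3.16 * P / (allowed temperature rise in deg F),
# matching the worked example in the formula list (500 W, 30 deg F rise -> ~53 CFM).
def required_cfm(power_watts, allowed_rise_f):
    return 3.16 * power_watts / allowed_rise_f

print(required_cfm(500, 30))   # ~52.7 CFM
print(required_cfm(300, 20))   # a smaller, cooler-running build (illustrative)
```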
Air pressure.
Loosely speaking, positive pressure means intake into the case is stronger than exhaust from the case. This configuration results in pressure inside of the case being higher than in its environment. Negative pressure means exhaust is stronger than intake. This results in internal air pressure being lower than in the environment. Both configurations have benefits and drawbacks, with positive pressure being the more popular of the two configurations. Negative pressure results in the case pulling air through holes and vents separate from the fans, as the internal gases will attempt to reach an equilibrium pressure with the environment. Consequently, this results in dust entering the computer in all locations. Positive pressure in combination with filtered intake solves this issue, as air will only incline to be exhausted through these holes and vents in order to reach an equilibrium with its environment. Dust is then unable to enter the case except through the intake fans, which need to possess dust filters.
Computer types.
Desktops.
Desktop computers typically use one or more fans for cooling. While almost all desktop power supplies have at least one built-in fan, power supplies should never draw heated air from within the case, as this results in higher PSU operating temperatures which decrease the PSU's energy efficiency, reliability and overall ability to provide a steady supply of power to the computer's internal components. For this reason, all modern ATX cases (with some exceptions found in ultra-low-budget cases) feature a power supply mount in the bottom, with a dedicated PSU air intake (often with its own filter) beneath the mounting location, allowing the PSU to draw cool air from beneath the case.
Most manufacturers recommend bringing cool, fresh air in at the bottom front of the case, and exhausting warm air from the top rear. If fans are fitted to force air into the case more effectively than it is removed, the pressure inside becomes higher than outside, referred to as a "positive" airflow (the opposite case is called "negative" airflow). Worth noting is that positive internal pressure only prevents dust accumulating in the case if the air intakes are equipped with dust filters. A case with negative internal pressure will suffer a higher rate of dust accumulation even if the intakes are filtered, as the negative pressure will draw dust in through any available opening in the case.
The air flow inside the typical desktop case is usually not strong enough for a passive CPU heatsink. Most desktop heatsinks are active including one or even multiple directly attached fans or blowers.
Servers.
Server coolers.
Each server can have an independent internal cooling system; server cooling fans in 1 U enclosures are usually located in the middle of the enclosure, between the hard drives at the front and the passive CPU heatsinks at the rear. Larger (taller) enclosures also have exhaust fans, and from approximately 4U they may have active heatsinks. Power supplies generally have their own rear-facing exhaust fans.
Rack-mounted coolers.
A rack cabinet is a typical enclosure for horizontally mounted servers. Air is typically drawn in at the front of the rack and exhausted at the rear. Each cabinet can have additional cooling options; for example, a close-coupled cooling module can be attached to, or integrated with, cabinet elements (like the cooling doors in an iDataPlex server rack).
Another way of accommodating large numbers of systems in a small space is to use blade chassis, oriented vertically rather than horizontally, to facilitate convection. Air heated by the hot components tends to rise, creating a natural air flow along the boards (stack effect), cooling them. Some manufacturers take advantage of this effect.
Data center cooling.
Because data centers typically contain large numbers of computers and other power-dissipating devices, they risk equipment overheating; extensive HVAC systems are used to prevent this. Often a raised floor is used so the area under the floor may be used as a large plenum for cooled air from a CRAC air conditioner and power cabling. A plenum made with a false ceiling can also be present. Hot Aisle containment or cold aisle containment are also used in datacenters to improve cooling efficiency. Alternatively slab floors can be used which are similar to conventional floors, and overhead ducts can be used for cooling.
Direct Contact Liquid Cooling has emerged as more efficient than air cooling options, resulting in a smaller footprint, lower capital requirements, and lower operational costs than air cooling. It uses warm liquid instead of air to move heat away from the hottest components. Energy efficiency gains from liquid cooling are also driving its adoption. Single- and two-phase immersion (open tub) cooling, single- and two-phase direct-to-chip cooling, and immersion cooling confined to individual server blades have also been proposed for use in data centers. In-row cooling, rack cooling, rear door heat exchangers, racktop cooling (which places heat exchangers above the rack), overhead cooling above aisles, and fan walls/thermal walls in a data center can also be used. Direct Liquid Cooling (DLC) with cold plates for cooling chips in servers can be used due to the higher heat removal capacities of these systems. These systems can either cool some or all components on a server, using rubber or copper tubing respectively. Rear door heat exchangers were traditionally used for cooling high heat densities in data centers, but these did not see widespread adoption.
Laptops.
Laptops present a difficult mechanical airflow design, power dissipation, and cooling challenge. Constraints specific to laptops include: the device as a whole has to be as light as possible; the form factor has to be built around the standard keyboard layout; users are very close, so noise must be kept to a minimum, and the case exterior temperature must be kept low enough to be used on a lap. Cooling generally uses forced air cooling but heat pipes and the use of the metal chassis or case as a passive heatsink are also common. Solutions to reduce heat include using lower power-consumption ARM or Intel Atom processors.
Mobile devices.
Mobile devices usually have no discrete cooling systems, as mobile CPU and GPU chips are designed for maximum power efficiency due to the constraints of the device's battery. Some higher performance devices may include a heat spreader that aids in transferring heat to the external case of a phone or tablet.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{CFM} = \\frac{P}{Cp \\times r \\times dT}"
},
{
"math_id": 1,
"text": "\\text{CFM}"
},
{
"math_id": 2,
"text": "P"
},
{
"math_id": 3,
"text": "Cp"
},
{
"math_id": 4,
"text": "r"
},
{
"math_id": 5,
"text": "dT"
},
{
"math_id": 6,
"text": "\\text{CFM} = \\frac{3.16 \\times P}{\\text{allowed temperature rise in} ^\\circ F}"
},
{
"math_id": 7,
"text": "\\text{CFM} = \\frac{1.76 \\times P}{\\text{allowed temperature rise in} ^\\circ C}"
},
{
"math_id": 8,
"text": "\\text{CFM} = \\frac{3.16 \\times 500\\ \\text{W}}{(130 - 100)} = 53"
}
] | https://en.wikipedia.org/wiki?curid=798370 |
7984007 | Traced monoidal category | In category theory, a traced monoidal category is a category with some extra structure which gives a reasonable notion of feedback.
A traced symmetric monoidal category is a symmetric monoidal category C together with a family of functions
formula_0
called a "trace", satisfying the following conditions:
Naturality in formula_1: for every formula_2 and formula_3, formula_4
Naturality in formula_5: for every formula_2 and formula_6, formula_7
Dinaturality in formula_8: for every formula_9 and formula_10, formula_11
Vanishing I: for every formula_12, with formula_13 the unit isomorphism, formula_14
Vanishing II: for every formula_15, formula_16
Superposing: for every formula_2 and formula_17, formula_18
Yanking: formula_19
(where formula_20 is the symmetry of the monoidal category). | [
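A concrete instance may help fix ideas: in the category of finite-dimensional vector spaces with the usual tensor product, a trace in the above sense is given by the partial trace over the traced object. The NumPy sketch below is only illustrative; the index conventions and the helper partial_trace are choices made here, and the checks cover the yanking axiom and naturality in formula_1.

```python
import numpy as np

def partial_trace(f, dim_x, dim_u, dim_y):
    """Trace out the second tensor factor U of a linear map f: X (x) U -> Y (x) U.

    f is a (dim_y*dim_u, dim_x*dim_u) matrix in Kronecker-product ordering
    (first factor is the major index, as produced by np.kron)."""
    F = f.reshape(dim_y, dim_u, dim_x, dim_u)
    return np.einsum('yuxu->yx', F)   # sum over the diagonal of the U indices

rng = np.random.default_rng(0)
n, m, k = 3, 2, 4   # dimensions of X, U, Y (arbitrary small choices)

# Yanking: Tr^X_{X,X}(gamma_{X,X}) = id_X, where gamma swaps the two factors.
swap = np.zeros((n * n, n * n))
for a in range(n):
    for b in range(n):
        swap[b * n + a, a * n + b] = 1.0   # gamma(e_a (x) e_b) = e_b (x) e_a
assert np.allclose(partial_trace(swap, n, n, n), np.eye(n))

# Naturality in X: Tr(f o (g (x) id_U)) = Tr(f) o g.
f = rng.normal(size=(k * m, n * m))   # f: X (x) U -> Y (x) U
g = rng.normal(size=(n, n))           # g: X' -> X (here X' = X)
lhs = partial_trace(f @ np.kron(g, np.eye(m)), n, m, k)
rhs = partial_trace(f, n, m, k) @ g
assert np.allclose(lhs, rhs)
```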
{
"math_id": 0,
"text": "\\mathrm{Tr}^U_{X,Y}:\\mathbf{C}(X\\otimes U,Y\\otimes U)\\to\\mathbf{C}(X,Y)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "f:X\\otimes U\\to Y\\otimes U"
},
{
"math_id": 3,
"text": "g:X'\\to X"
},
{
"math_id": 4,
"text": "\\mathrm{Tr}^U_{X',Y}(f \\circ (g\\otimes \\mathrm{id}_U)) = \\mathrm{Tr}^U_{X,Y}(f) \\circ g"
},
{
"math_id": 5,
"text": "Y"
},
{
"math_id": 6,
"text": "g:Y\\to Y'"
},
{
"math_id": 7,
"text": "\\mathrm{Tr}^U_{X,Y'}((g\\otimes \\mathrm{id}_U) \\circ f) = g \\circ \\mathrm{Tr}^U_{X,Y}(f)"
},
{
"math_id": 8,
"text": "U"
},
{
"math_id": 9,
"text": "f:X\\otimes U\\to Y\\otimes U'"
},
{
"math_id": 10,
"text": "g:U'\\to U"
},
{
"math_id": 11,
"text": "\\mathrm{Tr}^U_{X,Y}((\\mathrm{id}_Y\\otimes g) \\circ f)=\\mathrm{Tr}^{U'}_{X,Y}(f \\circ (\\mathrm{id}_X\\otimes g))"
},
{
"math_id": 12,
"text": "f:X \\otimes I \\to Y \\otimes I"
},
{
"math_id": 13,
"text": "\\rho_X \\colon X\\otimes I\\cong X"
},
{
"math_id": 14,
"text": "\\mathrm{Tr}^I_{X,Y}(f)=\\rho_Y \\circ f \\circ \\rho_X^{-1}"
},
{
"math_id": 15,
"text": "f:X\\otimes U\\otimes V\\to Y\\otimes U\\otimes V"
},
{
"math_id": 16,
"text": "\\mathrm{Tr}^U_{X,Y}(\\mathrm{Tr}^V_{X\\otimes U,Y\\otimes U}(f)) = \\mathrm{Tr}^{U\\otimes V}_{X,Y}(f)"
},
{
"math_id": 17,
"text": "g:W\\to Z"
},
{
"math_id": 18,
"text": "g\\otimes \\mathrm{Tr}^U_{X,Y}(f)=\\mathrm{Tr}^U_{W\\otimes X,Z\\otimes Y}(g\\otimes f)"
},
{
"math_id": 19,
"text": "\\mathrm{Tr}^X_{X,X}(\\gamma_{X,X})=\\mathrm{id}_X"
},
{
"math_id": 20,
"text": "\\gamma"
}
] | https://en.wikipedia.org/wiki?curid=7984007 |
7984768 | Multivariate t-distribution | Multivariable generalization of the Student's t-distribution
In statistics, the multivariate "t"-distribution (or multivariate Student distribution) is a multivariate probability distribution. It is a generalization to random vectors of the Student's "t"-distribution, which is a distribution applicable to univariate random variables. While the case of a random matrix could be treated within this structure, the matrix "t"-distribution is distinct and makes particular use of the matrix structure.
Definition.
One common method of construction of a multivariate "t"-distribution, for the case of formula_0 dimensions, is based on the observation that if formula_1 and formula_2 are independent and distributed as formula_3 and formula_4 (i.e. multivariate normal and chi-squared distributions) respectively, the matrix formula_5 is a "p" × "p" matrix, and formula_6 is a constant vector then the random variable formula_7 has the density
formula_8
and is said to be distributed as a multivariate "t"-distribution with parameters formula_9. Note that formula_10 is not the covariance matrix since the covariance is given by formula_11 (for formula_12).
The constructive definition of a multivariate "t"-distribution simultaneously serves as a sampling algorithm: draw formula_13 and formula_14 independently, then compute formula_15.
This formulation gives rise to the hierarchical representation of a multivariate "t"-distribution as a scale-mixture of normals: formula_16 where formula_17 indicates a gamma distribution with density proportional to formula_18, and formula_19 conditionally follows formula_20.
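The construction above translates directly into code. The following NumPy sketch (the function name and the particular parameter values are choices made here for illustration) draws samples via the chi-squared/normal construction and checks that the sample covariance is close to formula_11 rather than formula_10.

```python
import numpy as np

def rmvt(n_samples, mu, sigma, nu, rng):
    """Draw samples x = y / sqrt(u/nu) + mu with y ~ N(0, Sigma) and u ~ chi^2_nu."""
    p = len(mu)
    y = rng.multivariate_normal(np.zeros(p), sigma, size=n_samples)
    u = rng.chisquare(nu, size=n_samples)
    return y / np.sqrt(u / nu)[:, None] + mu

rng = np.random.default_rng(1)
mu = np.array([1.0, -2.0])
sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
nu = 7.0

x = rmvt(200_000, mu, sigma, nu, rng)
# The covariance is (nu / (nu - 2)) * Sigma, not Sigma itself.
print(np.cov(x, rowvar=False))
print(nu / (nu - 2) * sigma)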
In the special case formula_21, the distribution is a multivariate Cauchy distribution.
Derivation.
There are in fact many candidates for the multivariate generalization of Student's "t"-distribution. An extensive survey of the field has been given by Kotz and Nadarajah (2004). The essential issue is to define a probability density function of several variables that is the appropriate generalization of the formula for the univariate case. In one dimension (formula_22), with formula_23 and formula_24, we have the probability density function
formula_25
and one approach is to use a corresponding function of several variables. This is the basic idea of elliptical distribution theory, where one writes down a corresponding function of formula_0 variables formula_26 that replaces formula_27 by a quadratic function of all the formula_26. It is clear that this only makes sense when all the marginal distributions have the same degrees of freedom formula_28. With formula_29, one has a simple choice of multivariate density function
formula_30
which is the standard but not the only choice.
An important special case is the standard bivariate "t"-distribution, "p" = 2:
formula_31
Note that formula_32.
Now, if formula_33 is the identity matrix, the density is
formula_34
The difficulty with the standard representation is revealed by this formula, which does not factorize into the product of the marginal one-dimensional distributions. When formula_35 is diagonal the standard representation can be shown to have zero correlation but the marginal distributions are not statistically independent.
A notable spontaneous occurrence of the elliptical multivariate distribution is its formal mathematical appearance when least squares methods are applied to multivariate normal data such as the classical Markowitz minimum variance econometric solution for asset portfolios.
Cumulative distribution function.
The definition of the cumulative distribution function (cdf) in one dimension can be extended to multiple dimensions by defining the following probability (here formula_36 is a real vector):
formula_37
There is no simple formula for formula_38, but it can be approximated numerically via Monte Carlo integration.
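A minimal Monte Carlo sketch of this approximation follows; the function name, sample size, and parameter values are illustrative choices, not a library routine.

```python
import numpy as np

def mvt_cdf_mc(x, mu, sigma, nu, n_samples=500_000, seed=0):
    """Monte Carlo estimate of P(X <= x) (componentwise) for X ~ t_nu(mu, Sigma)."""
    rng = np.random.default_rng(seed)
    p = len(mu)
    y = rng.multivariate_normal(np.zeros(p), sigma, size=n_samples)
    u = rng.chisquare(nu, size=n_samples)
    samples = y / np.sqrt(u / nu)[:, None] + mu   # the construction from the Definition section
    return np.mean(np.all(samples <= x, axis=1))

mu = np.array([0.0, 0.0])
sigma = np.array([[1.0, 0.3], [0.3, 1.0]])
print(mvt_cdf_mc(np.array([0.5, 0.5]), mu, sigma, nu=5.0))
```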
Conditional Distribution.
This was developed by Muirhead and Cornish, but was later derived, using the simpler chi-squared ratio representation above, by Roth and Ding. Let vector formula_39 follow a multivariate "t" distribution and partition it into two subvectors of formula_40 elements:
formula_41
where formula_42, the known mean vectors are formula_43 and the scale matrix is formula_44.
Roth and Ding find the conditional distribution formula_45 to be a new "t"-distribution with modified parameters.
formula_46
An equivalent expression in Kotz et al. is somewhat less concise.
Thus the conditional distribution is most easily represented as a two-step procedure. Form first the intermediate distribution formula_47 above then, using the parameters below, the explicit conditional distribution becomes
formula_48
where
formula_49 is the effective degrees of freedom: formula_50 augmented by formula_51, the number of variables conditioned on.
formula_52 is the conditional mean of formula_53.
formula_54 is the Schur complement of formula_55; the conditional covariance.
formula_56 is the squared Mahalanobis distance of formula_57 from formula_58 with scale matrix formula_59.
formula_60 is the resulting conditional scale matrix.
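The two-step recipe above translates into a short routine. The sketch below computes the conditional parameters for a given partition; the function name and the numerical values are illustrative assumptions, not part of the referenced derivation.

```python
import numpy as np

def conditional_t_params(x2, mu, sigma, nu, p1):
    """Parameters of X1 | X2 = x2 when (X1, X2) ~ t_p(mu, Sigma, nu).

    Returns (mu_1_2, Psi, nu_tilde) following the summary above."""
    mu1, mu2 = mu[:p1], mu[p1:]
    s11 = sigma[:p1, :p1]
    s12 = sigma[:p1, p1:]
    s22 = sigma[p1:, p1:]
    p2 = len(mu2)

    s22_inv = np.linalg.inv(s22)
    dev = x2 - mu2
    mu_1_2 = mu1 + s12 @ s22_inv @ dev        # conditional mean
    s11_2 = s11 - s12 @ s22_inv @ s12.T       # Schur complement
    d2 = dev @ s22_inv @ dev                  # squared Mahalanobis distance
    nu_tilde = nu + p2                        # effective degrees of freedom
    psi = (nu + d2) / (nu + p2) * s11_2       # conditional scale matrix
    return mu_1_2, psi, nu_tilde

mu = np.array([0.0, 1.0, -1.0])
sigma = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 1.5]])
print(conditional_t_params(x2=np.array([1.5, -0.5]), mu=mu, sigma=sigma, nu=6.0, p1=1))
```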
Copulas based on the multivariate "t".
The use of such distributions is enjoying renewed interest due to applications in mathematical finance, especially through the use of the Student's "t" copula.
Elliptical Representation.
Constructed as an elliptical distribution, take the simplest centralised case with spherical symmetry and no scaling, formula_61, then the multivariate "t"-PDF takes the form
formula_62
where formula_63 and formula_64 = degrees of freedom as defined in Muirhead section 1.5. The covariance of formula_65 is
formula_66
The aim is to convert the Cartesian PDF to a radial one. Kibria and Joarder define the radial measure formula_67 and, noting that the density depends only on r2, we get formula_68 which is equivalent to the variance of the formula_69-element vector formula_65 treated as a univariate heavy-tail zero-mean random sequence with uncorrelated, yet statistically dependent, elements.
Radial Distribution.
formula_70 follows the Fisher-Snedecor or formula_71 distribution:
formula_72
having mean value formula_73. formula_71-distributions arise naturally in tests of sums of squares of sampled data after normalization by the sample standard deviation.
By a change of random variable to formula_74 in the equation above, retaining formula_69-vector formula_39, we have formula_75 and probability distribution
formula_76
which is a regular Beta-prime distribution formula_77 having mean value formula_78.
Cumulative Radial Distribution.
Given the Beta-prime distribution, the radial cumulative distribution function of formula_79 is known:
formula_80
where formula_81 is the incomplete Beta function and applies with a spherical formula_82 assumption.
In the scalar case, formula_83, the distribution is equivalent to Student-"t" with the equivalence formula_84, the variable "t" having double-sided tails for CDF purposes, i.e. the "two-tail-t-test".
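As a numerical check of the radial cdf, SciPy's regularized incomplete beta function can be compared with an empirical estimate obtained by sampling the standard multivariate "t" (zero mean, formula_61 with unit scaling); the parameter values and sample size below are arbitrary choices.

```python
import numpy as np
from scipy.special import betainc   # regularized incomplete beta function

def radial_cdf(r, p, nu):
    """P(X'X <= r^2) for the standard multivariate t (Sigma = I, mu = 0)."""
    y = r**2 / nu
    return betainc(p / 2, nu / 2, y / (1 + y))

p, nu, r = 3, 8.0, 2.0
rng = np.random.default_rng(2)
n = 400_000
x = rng.standard_normal((n, p)) / np.sqrt(rng.chisquare(nu, n) / nu)[:, None]
empirical = np.mean(np.sum(x**2, axis=1) <= r**2)
print(radial_cdf(r, p, nu), empirical)   # should agree to within Monte Carlo error
```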
The radial distribution can also be derived via a straightforward coordinate transformation from Cartesian to spherical. A constant radius surface at formula_85 with PDF formula_86 is an iso-density surface. Given this density value, the quantum of probability on a shell of surface area formula_87 and thickness formula_88 at formula_89 is formula_90.
The enclosed formula_69-sphere of radius formula_89 has surface area formula_91. Substitution into formula_92 shows that the shell has element of probability formula_93 which is equivalent to radial density function
formula_94
which further simplifies to formula_95 where formula_96 is the Beta function.
Changing the radial variable to formula_97 returns the previous Beta Prime distribution
formula_98
To scale the radial variables without changing the radial shape function, define the scale matrix formula_99, yielding a 3-parameter Cartesian density function, i.e., the probability formula_100 in volume element formula_101 is
formula_102
or, in terms of scalar radial variable formula_89,
formula_103
Radial Moments.
The moments of all the radial variables, under the spherical distribution assumption, can be derived from the Beta Prime distribution. If formula_104 then formula_105, a known result. Thus, for variable formula_106 we have
formula_107
The moments of formula_108 are
formula_109
while introducing the scale matrix formula_110 yields
formula_111
Moments relating to radial variable formula_89 are found by setting formula_112 and formula_113 whereupon
formula_114
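The moment expression for formula_106 can be cross-checked against SciPy's Beta-prime distribution, whose raw moments are available directly; the particular p, ν, and orders below are arbitrary, with m < ν/2 as required.

```python
from scipy.special import beta
from scipy.stats import betaprime

p, nu = 4, 10.0
for m in (1, 2, 3):                  # requires m < nu/2
    closed_form = beta(p / 2 + m, nu / 2 - m) / beta(p / 2, nu / 2)
    print(m, closed_form, betaprime(p / 2, nu / 2).moment(m))
```

For m = 1 both values reduce to p/(ν − 2), matching the mean quoted earlier.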
Linear Combinations and Affine Transformation.
Full Rank Transform.
This closely relates to the multivariate normal method and is described in Kotz and Nadarajah, Kibria and Joarder, Roth, and Cornish. Starting from a somewhat simplified version of the central MV-t pdf: formula_115, where formula_116 is a constant and formula_117 is arbitrary but fixed, let formula_118 be a full-rank matrix and form vector formula_119. Then, by straightforward change of variables
formula_120
The matrix of partial derivatives is formula_121 and the Jacobian becomes formula_122. Thus
formula_123
The denominator reduces to
formula_124
In full:
formula_125
which is a regular MV-"t" distribution.
In general if formula_126 and formula_127 has full rank formula_69 then
formula_128
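This closure under full-rank affine maps can be checked empirically by sampling. In the sketch below (all matrices, the seed, and the sample size are arbitrary test values), the sample covariance of the transformed vector is compared with ν/(ν − 2) · ΘΣΘ^T, the covariance implied by the transformed scale matrix.

```python
import numpy as np

rng = np.random.default_rng(3)
p, nu = 3, 9.0
sigma = np.array([[1.0, 0.2, 0.0],
                  [0.2, 2.0, 0.4],
                  [0.0, 0.4, 1.5]])
theta = np.array([[1.0, 0.5, 0.0],
                  [0.0, 1.0, -1.0],
                  [2.0, 0.0, 2.0]])     # full rank (nonzero determinant)
c = np.array([1.0, 2.0, 3.0])

n = 300_000
y0 = rng.multivariate_normal(np.zeros(p), sigma, size=n)
u = rng.chisquare(nu, size=n)
x = y0 / np.sqrt(u / nu)[:, None]       # X ~ t_p(0, Sigma, nu)
y = x @ theta.T + c                     # Y = Theta X + c

print(np.cov(y, rowvar=False))
print(nu / (nu - 2) * theta @ sigma @ theta.T)   # covariance of t_p(Theta mu + c, Theta Sigma Theta^T, nu)
```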
Marginal Distributions.
This is a special case of the rank-reducing linear transform below. Kotz defines marginal distributions as follows. Partition formula_129 into two subvectors of formula_40 elements:
formula_130
with formula_42, means formula_43, scale matrix formula_44
then formula_131, formula_132 such that
formula_133
formula_134
If a transformation is constructed in the form
formula_135
then vector formula_119, as discussed below, has the same distribution as the marginal distribution of formula_136.
Rank-Reducing Linear Transform.
In the linear transform case, if formula_137 is a rectangular matrix formula_138, of rank formula_139 the result is dimensionality reduction. Here, Jacobian formula_140 is seemingly rectangular but the value formula_141 in the denominator pdf is nevertheless correct. There is a discussion of rectangular matrix product determinants in Aitken. In general if formula_142 and formula_143 has full rank formula_139 then
formula_144
formula_145
"In extremis", if "m" = 1 and formula_137 becomes a row vector, then scalar "Y" follows a univariate double-sided Student-t distribution defined by formula_146 with the same formula_50 degrees of freedom. Kibria et. al. use the affine transformation to find the marginal distributions which are also MV-"t".
References.
<templatestyles src="Reflist/styles.css" />
Literature.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "\\mathbf y"
},
{
"math_id": 2,
"text": "u"
},
{
"math_id": 3,
"text": "N({\\mathbf 0},{\\boldsymbol\\Sigma})"
},
{
"math_id": 4,
"text": "\\chi^2_\\nu"
},
{
"math_id": 5,
"text": "\\mathbf{\\Sigma}\\,"
},
{
"math_id": 6,
"text": "{\\boldsymbol\\mu}"
},
{
"math_id": 7,
"text": "{\\mathbf x}={\\mathbf y}/\\sqrt{u/\\nu} +{\\boldsymbol\\mu}"
},
{
"math_id": 8,
"text": "\n\\frac{\\Gamma\\left[(\\nu+p)/2\\right]}{\\Gamma(\\nu/2)\\nu^{p/2}\\pi^{p/2}\\left|{\\boldsymbol\\Sigma}\\right|^{1/2}}\\left[1+\\frac{1}{\\nu}({\\mathbf x}-{\\boldsymbol\\mu})^T{\\boldsymbol\\Sigma}^{-1}({\\mathbf x}-{\\boldsymbol\\mu})\\right]^{-(\\nu+p)/2}"
},
{
"math_id": 9,
"text": "{\\boldsymbol\\Sigma},{\\boldsymbol\\mu},\\nu"
},
{
"math_id": 10,
"text": "\\mathbf\\Sigma"
},
{
"math_id": 11,
"text": "\\nu/(\\nu-2)\\mathbf\\Sigma"
},
{
"math_id": 12,
"text": "\\nu>2"
},
{
"math_id": 13,
"text": "u \\sim \\chi^2_\\nu"
},
{
"math_id": 14,
"text": "\\mathbf{y} \\sim N(\\mathbf{0}, \\boldsymbol{\\Sigma})"
},
{
"math_id": 15,
"text": "\\mathbf{x} \\gets \\sqrt{\\nu/u}\\mathbf{y}+ \\boldsymbol{\\mu}"
},
{
"math_id": 16,
"text": "u \\sim \\mathrm{Ga}(\\nu/2,\\nu/2)"
},
{
"math_id": 17,
"text": "\\mathrm{Ga}(a,b)"
},
{
"math_id": 18,
"text": "x^{a-1}e^{-bx}"
},
{
"math_id": 19,
"text": "\\mathbf{x}\\mid u"
},
{
"math_id": 20,
"text": "N(\\boldsymbol{\\mu},u^{-1}\\boldsymbol{\\Sigma})"
},
{
"math_id": 21,
"text": "\\nu=1"
},
{
"math_id": 22,
"text": "p=1"
},
{
"math_id": 23,
"text": "t=x-\\mu"
},
{
"math_id": 24,
"text": "\\Sigma=1"
},
{
"math_id": 25,
"text": "f(t) = \\frac{\\Gamma[(\\nu+1)/2]}{\\sqrt{\\nu\\pi\\,}\\,\\Gamma[\\nu/2]} (1+t^2/\\nu)^{-(\\nu+1)/2}"
},
{
"math_id": 26,
"text": "t_i"
},
{
"math_id": 27,
"text": "t^2"
},
{
"math_id": 28,
"text": "\\nu"
},
{
"math_id": 29,
"text": " \\mathbf{A} = \\boldsymbol\\Sigma^{-1}"
},
{
"math_id": 30,
"text": "f(\\mathbf t) = \\frac{\\Gamma((\\nu+p)/2)\\left|\\mathbf{A}\\right|^{1/2}}{\\sqrt{\\nu^p\\pi^p\\,}\\,\\Gamma(\\nu/2)} \\left(1+\\sum_{i,j=1}^{p,p} A_{ij} t_i t_j/\\nu\\right)^{-(\\nu+p)/2}"
},
{
"math_id": 31,
"text": "f(t_1,t_2) = \\frac{\\left|\\mathbf{A}\\right|^{1/2}}{2\\pi} \\left(1+\\sum_{i,j=1}^{2,2} A_{ij} t_i t_j/\\nu\\right)^{-(\\nu+2)/2}"
},
{
"math_id": 32,
"text": "\\frac{\\Gamma \\left(\\frac{\\nu +2}{2}\\right)}{\\pi \\ \\nu \\Gamma \\left(\\frac{\\nu }{2}\\right)}= \\frac {1} {2\\pi}"
},
{
"math_id": 33,
"text": "\\mathbf{A}"
},
{
"math_id": 34,
"text": "f(t_1,t_2) = \\frac{1}{2\\pi} \\left(1+(t_1^2 + t_2^2)/\\nu\\right)^{-(\\nu+2)/2}."
},
{
"math_id": 35,
"text": " \\Sigma"
},
{
"math_id": 36,
"text": "\\mathbf{x}"
},
{
"math_id": 37,
"text": " F(\\mathbf{x}) = \\mathbb{P}(\\mathbf{X}\\leq \\mathbf{x}), \\quad \\textrm{where}\\;\\; \\mathbf{X}\\sim t_\\nu(\\boldsymbol\\mu,\\boldsymbol\\Sigma)."
},
{
"math_id": 38,
"text": "F(\\mathbf{x})"
},
{
"math_id": 39,
"text": " X "
},
{
"math_id": 40,
"text": " p_1, p_2 "
},
{
"math_id": 41,
"text": " X_p = \\begin{bmatrix}\n X_1 \\\\\n X_2 \\end{bmatrix} \\sim t_p \\left (\\mu_p, \\Sigma_{p \\times p}, \\nu \\right ) "
},
{
"math_id": 42,
"text": " p_1 + p_2 = p "
},
{
"math_id": 43,
"text": " \\mu_p = \\begin{bmatrix}\n \\mu_1 \\\\\n \\mu_2 \\end{bmatrix}"
},
{
"math_id": 44,
"text": " \\Sigma_{p \\times p} = \\begin{bmatrix}\n \\Sigma_{11} & \\Sigma_{12} \\\\\n \\Sigma_{21} & \\Sigma_{22} \\end{bmatrix} "
},
{
"math_id": 45,
"text": " p(X_1|X_2) "
},
{
"math_id": 46,
"text": " X_1|X_2 \\sim t_{ p_1 }\\left( \\mu_{1|2},\\frac{\\nu + d_2}{\\nu + p_2} \\Sigma_{11|2}, \\nu + p_2 \\right)"
},
{
"math_id": 47,
"text": " X_1|X_2 \\sim t_{ p_1 }\\left( \\mu_{1|2}, \\Psi ,\\tilde{ \\nu } \\right)"
},
{
"math_id": 48,
"text": "\nf(X_1|X_2) =\\frac{\\Gamma\\left[(\\tilde \\nu +p_1)/2\\right]}{\\Gamma(\\tilde \\nu /2) ( \\pi \\,\\tilde \\nu )^{p_1/2}\\left|{\\boldsymbol\\Psi}\\right|^{1/2}}\\left[1+\\frac{1}{\\tilde \\nu}( X_1 - \\mu_{1|2} )^T{\\boldsymbol\\Psi}^{-1}(X_1- \\mu_{1|2} )\\right]^{-(\\tilde \\nu + p_1)/2}"
},
{
"math_id": 49,
"text": " \\tilde \\nu = \\nu + p_2 "
},
{
"math_id": 50,
"text": " \\nu "
},
{
"math_id": 51,
"text": " p_2 "
},
{
"math_id": 52,
"text": " \\mu_{1|2} = \\mu_1 + \\Sigma_{12} \\Sigma_{22}^{-1} \\left(X_2 - \\mu_2 \\right ) "
},
{
"math_id": 53,
"text": "x_1 "
},
{
"math_id": 54,
"text": " \\Sigma_{11|2} = \\Sigma_{11} - \\Sigma_{12} \\Sigma_{22} ^{-1} \\Sigma_{21} "
},
{
"math_id": 55,
"text": " \\Sigma_{22} \\text{ in } \\Sigma "
},
{
"math_id": 56,
"text": " d_2 = (X_2 - \\mu_2)^T \\Sigma_{22}^{-1} (X_2 - \\mu_2) "
},
{
"math_id": 57,
"text": " X_2 "
},
{
"math_id": 58,
"text": "\\mu_2 "
},
{
"math_id": 59,
"text": " \\Sigma_{22} "
},
{
"math_id": 60,
"text": " \\Psi = \\frac{\\nu + d_2}{\\nu + p_2} \\Sigma_{11|2} "
},
{
"math_id": 61,
"text": " \\Sigma = \\operatorname{I} \\, "
},
{
"math_id": 62,
"text": " f_X(X)= g(X^T X) = \\frac{\\Gamma \\big ( \\frac{1}{2} (\\nu + p ) \\, \\big )}{ ( \\nu \\pi)^{\\,p/2} \\Gamma \\big( \\frac{1}{2} \\nu \\big)} \\bigg( 1 + \\nu^{-1} X^T X \\bigg)^{-( \\nu + p )/2 } "
},
{
"math_id": 63,
"text": " X =(x_1, \\cdots ,x_p )^T\\text { is a } p\\text{-vector} "
},
{
"math_id": 64,
"text": " \\nu "
},
{
"math_id": 65,
"text": "X"
},
{
"math_id": 66,
"text": " \\operatorname{E} \\left( XX^T \\right) = \\int_{-\\infty}^\\infty \\cdots \\int_{-\\infty}^\\infty f_X(x_1,\\dots, x_p) XX^T \\, dx_1 \\dots dx_p = \\frac{ \\nu }{ \\nu - 2 } \\operatorname{I} "
},
{
"math_id": 67,
"text": " r_2 = R^2 = \\frac{X^TX}{p} "
},
{
"math_id": 68,
"text": " \\operatorname{E} [ r_2 ] = \\int_{-\\infty}^\\infty \\cdots \\int_{-\\infty}^\\infty f_X(x_1,\\dots, x_p) \\frac {X^TX}{p}\\, dx_1 \\dots dx_p = \\frac{\\nu}{ \\nu -2} "
},
{
"math_id": 69,
"text": " p "
},
{
"math_id": 70,
"text": "r_2 = \\frac{X^TX}{p}"
},
{
"math_id": 71,
"text": " F "
},
{
"math_id": 72,
"text": " r_2 \\sim f_{F}( p,\\nu) = B \\bigg( \\frac {p}{2}, \\frac {\\nu}{2} \\bigg ) ^{-1} \\bigg (\\frac{p}{\\nu} \\bigg )^{ p/2 } r_2^ { p/2 -1 } \n \\bigg( 1 + \\frac{p}{\\nu} r_2 \\bigg) ^{-(p + \\nu)/2 }"
},
{
"math_id": 73,
"text": " \\operatorname{E} [ r_2 ] = \\frac { \\nu }{ \\nu - 2 } "
},
{
"math_id": 74,
"text": " y = \\frac{p}{\\nu} r_2 = \\frac {X^T X}{\\nu} "
},
{
"math_id": 75,
"text": " \\operatorname{E} [ y ] = \\int_{-\\infty}^\\infty \\cdots \\int_{-\\infty}^\\infty f_X(X) \\frac {X^TX}{ \\nu}\\, dx_1 \\dots dx_p = \\frac { p }{ \\nu - 2 }"
},
{
"math_id": 76,
"text": " \\begin{align} f_Y(y| \\,p,\\nu) & = \\left | \\frac {p}{\\nu} \\right|^{-1} B \\bigg( \\frac {p}{2}, \\frac {\\nu}{2} \\bigg )^{-1} \\big (\\frac{p}{\\nu} \\big )^{ \\,p/2 } \\big (\\frac{p}{\\nu} \\big )^{ -p/2 -1} y^ {\\, p/2 -1 } \\big( 1 + y \\big) ^{-(p + \\nu)/2 } \\\\ \\\\\n & = B \\bigg ( \\frac {p}{2}, \\frac {\\nu}{2} \\bigg )^{-1} y^{ \\,p/2 -1 }(1+ y )^{-(\\nu + p)/2} \\end{align} "
},
{
"math_id": 77,
"text": " y \\sim \\beta \\, ' \\bigg(y; \\frac {p}{2}, \\frac {\\nu}{2} \\bigg ) "
},
{
"math_id": 78,
"text": " \\frac { \\frac{1}{2} p }{ \\frac{1}{2}\\nu - 1 } = \\frac { p }{ \\nu - 2 }"
},
{
"math_id": 79,
"text": " y"
},
{
"math_id": 80,
"text": " F_Y(y) \\sim I \\, \\bigg(\\frac {y}{1+y}; \\, \\frac {p}{2}, \\frac {\\nu}{2} \\bigg ) B\\bigg( \\frac {p}{2}, \\frac {\\nu}{2} \\bigg )^{-1} "
},
{
"math_id": 81,
"text": " I"
},
{
"math_id": 82,
"text": " \\Sigma "
},
{
"math_id": 83,
"text": " p = 1"
},
{
"math_id": 84,
"text": " t^2 = y^2 \\sigma^{-1} "
},
{
"math_id": 85,
"text": " R = (X^TX)^{1/2} "
},
{
"math_id": 86,
"text": " p_X(X) \\propto \\bigg( 1 + \\nu^{-1} R^2 \\bigg)^{-(\\nu+p)/2} "
},
{
"math_id": 87,
"text": " A_R "
},
{
"math_id": 88,
"text": " \\delta R "
},
{
"math_id": 89,
"text": " R "
},
{
"math_id": 90,
"text": " \\delta P = p_X(R) \\, A_R \\delta R "
},
{
"math_id": 91,
"text": " A_R = \\frac { 2\\pi^{p/2 } R^{ \\, p-1 } }{ \\Gamma (p/2)} "
},
{
"math_id": 92,
"text": " \\delta P "
},
{
"math_id": 93,
"text": " \\delta P = p_X(R) \\frac { 2\\pi^{p/2 } R^{ p-1 } }{ \\Gamma (p/2)} \\delta R "
},
{
"math_id": 94,
"text": " f_R(R) = \\frac{\\Gamma \\big ( \\frac{1}{2} (\\nu + p ) \\, \\big )}{\\nu^{\\,p/2} \\pi^{\\,p/2} \\Gamma \\big( \\frac{1}{2} \\nu \\big)} \\frac { 2 \\pi^{p/2 } R^{ p-1 } }{ \\Gamma (p/2)} \\bigg( 1 + \\frac{ R^2 }{\\nu} \\bigg)^{-( \\nu + p )/2 } "
},
{
"math_id": 95,
"text": " f_R(R) = \\frac { 2}{ \\nu ^{1/2} B \\big( \\frac{1}{2} p, \\frac{1}{2} \\nu \\big)} \\bigg( \\frac {R^2}{ \\nu } \\bigg)^{ (p-1)/2 } \\bigg( 1 + \\frac{ R^2 }{\\nu} \\bigg)^{-( \\nu + p )/2 } "
},
{
"math_id": 96,
"text": " B(*,*) "
},
{
"math_id": 97,
"text": " y=R^2 / \\nu "
},
{
"math_id": 98,
"text": " f_Y(y) = \\frac { 1}{ B \\big( \\frac{1}{2} p, \\frac{1}{2} \\nu \\big)} y^{\\, p/2 - 1 } \\bigg( 1 + y \\bigg)^{-( \\nu + p )/2 } "
},
{
"math_id": 99,
"text": " \\Sigma = \\alpha \\operatorname{I} "
},
{
"math_id": 100,
"text": " \\Delta_P "
},
{
"math_id": 101,
"text": " dx_1 \\dots dx_p "
},
{
"math_id": 102,
"text": " \\Delta_P \\big (f_X(X \\,|\\alpha, p, \\nu) \\big ) = \\frac{\\Gamma \\big ( \\frac{1}{2} (\\nu + p ) \\, \\big )}{ ( \\nu \\pi)^{\\,p/2} \\alpha^{\\,p/2} \\Gamma \\big( \\frac{1}{2} \\nu \\big)} \\bigg( 1 + \\frac{X^T X }{ \\alpha \\nu} \\bigg)^{-( \\nu + p )/2 } \\; dx_1 \\dots dx_p "
},
{
"math_id": 103,
"text": " f_R(R \\,|\\alpha, p, \\nu) = \\frac { 2}{\\alpha^{1/2} \\; \\nu ^{1/2} B \\big( \\frac{1}{2} p, \\frac{1}{2} \\nu \\big)} \\bigg( \\frac {R^2}{ \\alpha \\, \\nu } \\bigg)^{ (p-1)/2 } \\bigg( 1 + \\frac{ R^2 }{ \\alpha \\, \\nu} \\bigg)^{-( \\nu + p )/2 } "
},
{
"math_id": 104,
"text": " Z \\sim \\beta'(a,b) "
},
{
"math_id": 105,
"text": " \\operatorname{E} (Z^m) = {\\frac {B(a + m, b - m)}{B( a ,b )}} "
},
{
"math_id": 106,
"text": " y = \\frac {p}{\\nu} R^2"
},
{
"math_id": 107,
"text": " \\operatorname{E} (y^m) = {\\frac {B(\\frac{1}{2}p + m, \\frac{1}{2} \\nu - m)}{B( \\frac{1}{2} p ,\\frac{1}{2} \\nu )}} = \\frac{\\Gamma \\big(\\frac{1}{2} p + m \\big)\\; \\Gamma \\big(\\frac{1}{2} \\nu - m \\big) }{ \\Gamma \\big( \\frac{1}{2} p \\big) \\; \\Gamma \\big( \\frac{1}{2} \\nu \\big) }, \\; \\nu/2 > m "
},
{
"math_id": 108,
"text": " r_2 = \\nu \\, y "
},
{
"math_id": 109,
"text": " \\operatorname{E} (r_2^m) = \\nu^m\\operatorname{E} (y^m) "
},
{
"math_id": 110,
"text": " \\alpha \\operatorname{I} "
},
{
"math_id": 111,
"text": " \\operatorname{E} (r_2^m | \\alpha) = \\alpha^m \\nu^m \\operatorname{E} (y^m) "
},
{
"math_id": 112,
"text": " R =(\\alpha\\nu y)^{1/2} "
},
{
"math_id": 113,
"text": " M=2m "
},
{
"math_id": 114,
"text": " \\operatorname{E} (R^M ) =\\operatorname{E} \\big((\\alpha \\nu y)^{1/2} \\big)^{2 m } = (\\alpha \\nu )^{M/2} \\operatorname{E} (y^{M/2})= (\\alpha \\nu )^{M/2} {\\frac {B \\big(\\frac{1}{2} (p + M), \\frac{1}{2} (\\nu - M) \\big )}{B( \\frac{1}{2} p ,\\frac{1}{2} \\nu )}} "
},
{
"math_id": 115,
"text": " f_X(X) = \\frac {\\Kappa }{ \\left|\\Sigma \\right|^{1/2} } \\left( 1+ \\nu^{-1} X^T \\Sigma^{-1} X \\right) ^ { -\\left(\\nu + p \\right)/2} "
},
{
"math_id": 116,
"text": " \\Kappa "
},
{
"math_id": 117,
"text": " \\nu "
},
{
"math_id": 118,
"text": " \\Theta \n \\in \\mathbb{R}^{p \\times p}"
},
{
"math_id": 119,
"text": " Y = \\Theta X "
},
{
"math_id": 120,
"text": " f_Y(Y) = \\frac {\\Kappa }{ \\left|\\Sigma \\right|^{1/2} } \\left( 1+ \\nu^{-1}Y^T \\Theta^{-T} \\Sigma^{-1} \\Theta^{-1} Y \\right) ^ { -\\left(\\nu + p \\right)/2} \\left| \\frac{\\partial Y }{\\partial X} \\right| ^{-1} "
},
{
"math_id": 121,
"text": " \\frac{\\partial Y_i }{\\partial X_j} = \\Theta_{i,j} "
},
{
"math_id": 122,
"text": " \\left| \\frac{\\partial Y }{\\partial X} \\right| = \\left| \\Theta \\right| "
},
{
"math_id": 123,
"text": " f_Y(Y) = \\frac {\\Kappa }{ \\left|\\Sigma \\right|^{1/2} \\left| \\Theta \\right| } \\left( 1 + \\nu^{-1} Y^T \\Theta^{-T} \\Sigma^{-1} \\Theta^{-1} Y \\right) ^ { -\\left(\\nu + p \\right)/2} "
},
{
"math_id": 124,
"text": " \\left|\\Sigma \\right|^{1/2} \\left| \\Theta \\right| = \\left|\\Sigma \\right|^{1/2} \\left| \\Theta \\right|^{1/2} \\left|\\Theta^T \\right|^{1/2} = \\left| \\Theta \\Sigma \\Theta^T \\right|^{1/2} "
},
{
"math_id": 125,
"text": " f_Y(Y) = \\frac { \\Gamma\\left[(\\nu+p) / 2\\right] }{ \\Gamma(\\nu/2) \\, (\\nu \\, \\pi)^{\\, p /2}\\left| \\Theta \\Sigma \\Theta^T \\right|^{1/2} } \\left( 1 + \\nu^{-1} Y^T \\left( \\Theta \\Sigma \\Theta^T \\right) ^{-1} Y \\right) ^ { -\\left(\\nu + p \\right)/2} "
},
{
"math_id": 126,
"text": " X \\sim t_p ( \\mu, \\Sigma, \\nu ) "
},
{
"math_id": 127,
"text": " \\Theta^{p \\times p } "
},
{
"math_id": 128,
"text": " \\Theta X + c \\sim t_p( \\Theta \\mu +c, \\Theta \\Sigma \\Theta^T, \\nu ) "
},
{
"math_id": 129,
"text": " X \\sim t (p, \\mu, \\Sigma, \\nu ) "
},
{
"math_id": 130,
"text": " X_p = \\begin{bmatrix}\n X_1 \\\\\n X_2 \\end{bmatrix} \\sim t \\left ( p_1 + p_2, \\mu_p, \\Sigma_{p \\times p}, \\nu \\right ) "
},
{
"math_id": 131,
"text": " X_1 \\sim t \\left ( p_1, \\mu_1, \\Sigma_{11}, \\nu \\right ) "
},
{
"math_id": 132,
"text": " X_2 \\sim t \\left ( p_2, \\mu_2, \\Sigma_{ 22}, \\nu \\right ) "
},
{
"math_id": 133,
"text": " f(X_1) = \n\\frac{\\Gamma\\left[(\\nu+p_1)/2\\right]}{\\Gamma(\\nu/2) \\, (\\nu \\,\\pi)^ {\\, p_1/2}\\left|{\\boldsymbol\\Sigma_{11}}\\right|^{1/2}}\\left[1+\\frac{1}{\\nu}({\\mathbf X_1}-{\\boldsymbol\\mu_1})^T{\\boldsymbol\\Sigma}_{11}^{-1}({\\mathbf X_1}-{\\boldsymbol\\mu_1})\\right]^{-(\\nu \\,+ \\, p_1)/2}"
},
{
"math_id": 134,
"text": " f(X_2) = \n\\frac{\\Gamma\\left[(\\nu+p_2)/2\\right]}{\\Gamma(\\nu/2) \\, (\\nu \\, \\pi)^{\\, p_2 /2}\\left|{\\boldsymbol\\Sigma_{22}}\\right|^{1/2}}\\left[1+\\frac{1}{\\nu}({\\mathbf X_2} - {\\boldsymbol\\mu_2})^T{\\boldsymbol\\Sigma}_{22}^{-1}({\\mathbf X_2}-{\\boldsymbol\\mu_2})\\right]^{-(\\nu \\,+ \\, p_2)/2}"
},
{
"math_id": 135,
"text": "\n \\Theta_{p_1 \\times \\, p} = \\begin{bmatrix}\n 1 & \\cdots & 0 & \\cdots & 0 \\\\\n 0 & \\ddots & 0 & \\cdots & 0 \\\\\n 0 & \\cdots & 1 & \\cdots & 0 \\end{bmatrix} "
},
{
"math_id": 136,
"text": " X_1 "
},
{
"math_id": 137,
"text": " \\Theta "
},
{
"math_id": 138,
"text": " \\Theta \\in \\mathbb{R}^{m \\times p}, m < p "
},
{
"math_id": 139,
"text": " m "
},
{
"math_id": 140,
"text": " \\left| \\Theta \\right| "
},
{
"math_id": 141,
"text": " \\left| \\Theta \\Sigma \\Theta^T \\right|^{1/2} "
},
{
"math_id": 142,
"text": " X \\sim t (p, \\mu, \\Sigma, \\nu ) "
},
{
"math_id": 143,
"text": " \\Theta^{m \\times p } "
},
{
"math_id": 144,
"text": " Y = \\Theta X + c \\sim t ( m, \\Theta \\mu + c, \\Theta \\Sigma \\Theta^T, \\nu ) "
},
{
"math_id": 145,
"text": " f_Y(Y) = \\frac{\\Gamma\\left[(\\nu + m)/2\\right]}{\\Gamma(\\nu/2) \\, (\\nu \\,\\pi)^{\\, m / 2} \\left| \\Theta \\Sigma \\Theta^T \\right|^{1/2}}\\left[1+\\frac{1}{\\nu}( Y - c_1 )^T ( \\Theta \\Sigma \\Theta^T )^{-1} (Y-c_1) \\right]^{-(\\nu \\,+ \\, m)/2}, \\; c_1 = \\Theta \\mu + c"
},
{
"math_id": 146,
"text": " t^2 = Y^2 / \\sigma^2 "
},
{
"math_id": 147,
"text": " Z "
},
{
"math_id": 148,
"text": "{1}/\\sqrt{u_1/\\nu_1}, \\; \\; {1}/\\sqrt{u_2/\\nu_2}"
},
{
"math_id": 149,
"text": "\\nu\\uparrow\\infty"
}
] | https://en.wikipedia.org/wiki?curid=7984768 |
7984781 | Three-phase traffic theory | Theory of traffic flow
Three-phase traffic theory is a theory of traffic flow developed by Boris Kerner between 1996 and 2002. It focuses mainly on the explanation of the physics of traffic breakdown and resulting congested traffic on highways. Kerner describes three phases of traffic, while the classical theories based on the fundamental diagram of traffic flow have two phases: "free flow" and "congested traffic". Kerner’s theory divides congested traffic into two distinct phases, "synchronized flow" and "wide moving jam", bringing the total number of phases to three: free flow ("F"), synchronized flow ("S"), and wide moving jam ("J").
The word "wide" is used even though it is the length of the traffic jam that is being referred to.
A phase is defined as a "state in space and time."
Free flow ("F").
In free traffic flow, empirical data show a positive correlation between the flow rate formula_0 (in vehicles per unit time) and vehicle density formula_1 (in vehicles per unit distance). This relationship stops at the maximum free flow formula_2 with a corresponding critical density formula_3. (See Figure 1.)
Congested traffic.
Data show a weaker relationship between flow and density in congested conditions. Therefore, Kerner argues that the fundamental diagram, as used in classical traffic theory, cannot adequately describe the complex dynamics of vehicular traffic. He instead divides congestion into "synchronized flow" and "wide moving jams".
In congested traffic, the vehicle speed is lower than the lowest vehicle speed formula_4 encountered in free flow, i.e., the line with the slope of the minimal speed formula_5 in free flow (dotted line in Figure 2) divides the empirical data on the flow-density plane into two regions: on the left side data points of free flow and on the right side data points corresponding to congested traffic.
Definitions ["J"] and ["S"] of the phases "J" and "S" in congested traffic.
In Kerner's theory, the phases "J" and "S" in congested traffic are observed outcomes in universal spatial-temporal features of real traffic data. The phases "J" and "S" are defined through the definitions ["J"] and ["S"] as follows:
The "wide moving jam" phase ["J"].
A so-called "wide moving jam" moves upstream through any highway bottlenecks. While doing so, the mean velocity of the downstream front formula_6 is maintained. This is the characteristic feature of the wide moving jam that defines the phase "J".
The term "wide moving jam" is meant to reflect the characteristic feature of the jam to propagate through any other state of traffic flow and through any bottleneck while maintaining the velocity of the downstream jam front. The phrase "moving jam" reflects the jam propagation as a whole localized structure on a road. To distinguish wide moving jams from other moving jams, which do not characteristically maintain the mean velocity of the downstream jam front, Kerner used the term "wide". The term "wide" reflects the fact that if a moving jam has a width (in the longitudinal road direction) considerably greater than the widths of the jam fronts, and if the vehicle speed inside the jam is zero, the jam always exhibits the characteristic feature of maintaining the velocity of the downstream jam front (see Sec. 7.6.5 of the book).
Thus the term "wide" has nothing to do with the width across the jam, but actually refers to its length being considerably more than the transition zones at its head and tail. Historically, Kerner used the term "wide" from a qualitative analogy of a wide moving jam in traffic flow with "wide autosolitons" occurring in many systems of natural science (like gas plasma, electron-hole plasma in semiconductors, biological systems, and chemical reactions): Both the wide moving jam and a wide autosoliton exhibit some characteristic features, which do not depend on initial conditions at which these localized patterns have occurred.
The "synchronized flow" phase ["S"].
In "synchronized flow," the downstream front, where the vehicles accelerate to free flow, does not show this characteristic feature of the wide moving jam. Specifically, the downstream front of the synchronized flow is often fixed at a bottleneck.
The term "synchronized flow" is meant to reflect the following features of this traffic phase: (i) It is a continuous traffic flow with no significant stoppage, as often occurs inside a wide moving jam. The term "flow" reflects this feature. (ii) There is a tendency towards synchronization of vehicle speeds across different lanes on a multilane road in this flow. In addition, there is a tendency towards synchronization of vehicle speeds in each of the road lanes (bunching of vehicles) in synchronized flow. This is due to a relatively low probability of passing. The term "synchronized" reflects this speed synchronization effect.
Explanation of the traffic phase definitions based on measured traffic data.
Measured data of averaged vehicle speeds (Figure 3 (a)) illustrate the phase definitions ["J"] and ["S"]. There are two spatial-temporal patterns of congested traffic with low vehicle speeds in Figure 3 (a). One pattern propagates upstream with an almost constant velocity of the downstream front, moving straight through the freeway bottleneck. According to the definition ["J"], this pattern of congestion belongs to the "wide moving jam" phase. In contrast, the downstream front of the other pattern is fixed at a bottleneck. According to the definition ["S"], this pattern belongs to the "synchronized flow" phase (Figure 3 (a) and (b)). Other empirical examples validating the traffic phase definitions ["J"] and ["S"] can be found in the books and articles cited, as well as in an empirical study of floating car data (floating car data is also called "probe vehicle data").
Traffic phase definition based on empirical single-vehicle data.
In Sec. 6.1 of the book it has been shown that the traffic phase definitions ["S"] and ["J"] are the origin of most hypotheses of three-phase theory and of the related three-phase microscopic traffic flow models. The traffic phase definitions ["J"] and ["S"] are non-local macroscopic ones, and they are applicable only after macroscopic data has been measured in space and time, i.e., in an "off-line" study. This is because the definitive distinction of the phases J and S through the definitions ["J"] and ["S"] requires a study of the propagation of traffic congestion through a bottleneck. This is often considered a drawback of the traffic phase definitions ["S"] and ["J"]. However, there are local microscopic criteria for the distinction between the phases "J" and "S" that do not require a study of the propagation of congested traffic through a bottleneck. The microscopic criteria are as follows (see Sec. 2.6 in the book): if a "flow-interruption interval" is observed in single-vehicle ("microscopic") data related to congested traffic, i.e., a time headway between two vehicles following each other that is much longer than the mean time delay in vehicle acceleration from a wide moving jam (the latter is about 1.3–2.1 s), then this flow-interruption interval corresponds to the wide moving jam phase. After all wide moving jams have been found through this criterion in congested traffic, all remaining congested states are related to the synchronized flow phase.
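This criterion translates into a simple classification rule on measured time headways. The sketch below is a toy illustration in Python, not the procedure from the book; in particular, the multiplicative factor defining how much longer than the mean acceleration delay a headway must be to count as a flow-interruption interval is an assumption chosen here only for illustration.

```python
def classify_flow_interruptions(headways_s, mean_accel_delay_s=1.7, factor=5.0):
    """Toy classifier for congested single-vehicle data.

    headways_s         : time headways (seconds) between consecutive vehicles.
    mean_accel_delay_s : mean time delay in acceleration from a wide moving
                         jam; the text gives roughly 1.3-2.1 s.
    factor             : assumed multiplier defining "much longer" than that delay.

    Returns "J" for headways treated as wide-moving-jam flow interruptions
    and "S" for the remaining (synchronized flow) intervals.
    """
    threshold = factor * mean_accel_delay_s
    return ["J" if h > threshold else "S" for h in headways_s]

# Example: one very long headway marks a wide moving jam passing the detector.
print(classify_flow_interruptions([2.0, 1.8, 25.0, 2.2]))   # ['S', 'S', 'J', 'S']
```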
Kerner’s hypothesis about two-dimensional (2D) states of traffic flow.
Steady states of synchronized flow.
Homogeneous synchronized flow is a "hypothetical" state of synchronized flow of identical vehicles and drivers in which all vehicles move with the same time-independent speed and have the same space gaps (a space gap is the distance between one vehicle and the one behind it), i.e., this synchronized flow is homogeneous in time and space.
Kerner’s hypothesis is that homogeneous synchronized flow can occur anywhere in a two-dimensional region (2D) of the flow-density plane (2D-region S in Figure 4(a)). The set of possible free flow states (F) overlaps in vehicle density with the set of possible states of homogeneous synchronized flow. The free flow states on a multi-lane road and states of homogeneous synchronized flow are separated by a gap in the flow rate and, therefore, by a gap in the speed at a given density: at each given density the synchronized flow speed is lower than the free flow speed.
In accordance with this hypothesis of Kerner’s three-phase theory, at a given speed in synchronized flow, the driver can make an "arbitrary choice" as to the space gap to the preceding vehicle, within the range associated with the 2D region of homogeneous synchronized flow (Figure 4(b)): the driver accepts different space gaps at different times and does not use one unique gap.
The hypothesis of Kerner’s three-phase traffic theory about the 2D region of steady states of synchronized flow is contrary to the hypothesis of earlier traffic flow theories involving the fundamental diagram of traffic flow, which supposes a one-dimensional relationship between vehicle density and flow rate.
Car following in three-phase traffic theory.
In Kerner’s three-phase theory, a vehicle accelerates when the space gap formula_9 to the preceding vehicle is greater than a synchronization space gap formula_7, i.e., at formula_10 (labelled by "acceleration" in Figure 5); the vehicle decelerates when the gap "g" is smaller than a safe space gap formula_8, i.e., at formula_11 (labelled by "deceleration" in Figure 5).
If the gap is less than "G", the driver tends to adapt his speed to the speed of the preceding vehicle without caring what the precise gap is, so long as this gap is not smaller than the safe space gap formula_13 (labelled by "speed adaptation" in Figure 5). Thus the space gap formula_9 in car following in the framework of Kerner’s three-phase theory can be any space gap within the space gap range formula_12.
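These qualitative rules can be summarised in a small sketch. The code below is not the Kerner–Klenov model, only an illustrative acceleration rule assuming given values of the synchronization gap "G" and the safe gap (written g_safe in the code); the acceleration and deceleration magnitudes are placeholders.

```python
def three_phase_acceleration(g, G, g_safe, v, v_lead, a_max=1.0, b_max=1.0):
    """Illustrative car-following rule in the spirit of three-phase theory.

    g       : space gap to the preceding vehicle
    G       : synchronization space gap
    g_safe  : safe space gap
    v       : speed of the vehicle; v_lead: speed of the preceding vehicle
    Returns an acceleration value; the magnitudes a_max and b_max are placeholders.
    """
    if g > G:                      # large gap: acceleration
        return a_max
    if g < g_safe:                 # too small a gap: deceleration
        return -b_max
    # g_safe <= g <= G: speed adaptation to the preceding vehicle,
    # largely independent of the precise gap.
    if v < v_lead:
        return min(a_max, v_lead - v)
    if v > v_lead:
        return max(-b_max, v_lead - v)
    return 0.0
```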
Autonomous driving in the framework of three-phase traffic theory.
In the framework of the three-phase theory, the hypothesis about 2D regions of states of synchronized flow has also been applied to the development of a model of an autonomous driving vehicle (also called an automated-driving, self-driving, or autonomous vehicle).
Traffic breakdown – a "F" → "S" phase transition.
In measured data, congested traffic most often occurs in the vicinity of highway bottlenecks, e.g., on-ramps, off-ramps, or roadwork. A transition from free flow to congested traffic is known as traffic breakdown.
In Kerner’s three-phase traffic theory, traffic breakdown is explained by a phase transition from free flow to synchronized flow (called an F → S phase transition). This explanation is supported by available measurements, because in measured traffic data the downstream front of the congested traffic that results after a traffic breakdown at a bottleneck is fixed at the bottleneck. Therefore, the resulting congested traffic after a traffic breakdown satisfies the definition ["S"] of the "synchronized flow" phase.
Empirical spontaneous and induced "F" → "S" transitions.
Kerner notes using empirical data that synchronized flow can form in free flow spontaneously (spontaneous F →S phase transition) or can be externally induced (induced F → S phase transition).
A spontaneous F →S phase transition means that the breakdown occurs when there has previously been free flow at the bottleneck as well as both up- and downstream of the bottleneck. This implies that a spontaneous F → S phase transition occurs through the growth of an internal disturbance in free flow in a neighbourhood of a bottleneck.
In contrast, an induced F → S phase transition occurs through a region of congested traffic that initially emerged at a different road location downstream from the bottleneck location. Normally, this is in connection with the upstream propagation of a synchronized flow region or a wide moving jam. An empirical example of an induced breakdown at a bottleneck leading to synchronized flow can be seen in Figure 3: synchronized flow emerges through the upstream propagation of a wide moving jam.
The existence of empirical induced traffic breakdown (i.e., empirical induced F →S phase transition) means that an F → S phase transition occurs in a metastable state of free flow at a highway bottleneck. The term metastable free flow means that when small perturbations occur in free flow, the state of free flow is still stable, i.e., free flow persists at the bottleneck. However, when larger perturbations occur in free flow in a neighborhood of the bottleneck, the free flow is unstable and synchronized flow will emerge at the bottleneck.
Physical explanation of traffic breakdown in three-phase theory.
Kerner explains the nature of the F → S phase transitions as a competition between "speed adaptation" and "over-acceleration". Speed adaptation is defined as the vehicle's deceleration to the speed of a slower moving preceding vehicle. Over-acceleration is defined as the vehicle acceleration occurring even if the preceding vehicle does not drive faster than the vehicle and the preceding vehicle additionally does not accelerate. In Kerner’s theory, the probability of over-acceleration is a discontinuous function of the vehicle speed: At the same vehicle density, the probability of over-acceleration in free flow is greater than in synchronized flow. When within a local speed disturbance speed adaptation is stronger than over-acceleration, an F → S phase transition occurs. Otherwise, when over-acceleration is stronger than speed adaptation the initial disturbance decays over time. Within a region of synchronized flow, a strong over-acceleration is responsible for a return transition from synchronized flow to free flow (S → F transition).
There can be several mechanisms of vehicle over-acceleration. It can be assumed that on a multi-lane road the most probable mechanism of over-acceleration is lane changing to a faster lane. In this case, the F → S phase transitions are explained by an interplay of acceleration while overtaking a slower vehicle (over-acceleration) and deceleration to the speed of a slower-moving vehicle ahead (speed adaptation). Overtaking supports the maintenance of free flow. "Speed adaptation" on the other hand leads to synchronized flow. Speed adaptation will occur if overtaking is not possible. Kerner states that the probability of overtaking is an "interrupted function of the vehicle density" (Figure 6): at a given vehicle density, the probability of overtaking in free flow is much higher than in synchronized flow.
Discussion of Kerner’s explanation of traffic breakdown.
Kerner’s explanation of traffic breakdown at a highway bottleneck by the F → S phase transition in a metastable free flow is associated with the following fundamental empirical features of traffic breakdown at the bottleneck found in real measured data: (i) Spontaneous traffic breakdown in an initial free flow at the bottleneck leads to the emergence of congested traffic whose downstream front is fixed at the bottleneck (at least during some time interval), i.e., this congested traffic satisfies the definition ["S"] for the synchronized flow phase. In other words, spontaneous traffic breakdown is always an F → S phase transition. (ii) Probability of this spontaneous traffic breakdown is an increasing function of the flow rates at the bottleneck. (iii) At the same bottleneck, traffic breakdown can be either spontaneous or induced (see empirical examples for these fundamental features of traffic breakdown in Secs. 2.2.3 and 3.1 of the book); for this reason, the F → S phase transition occurs in a metastable free flow at a highway bottleneck.
As explained above, the sense of the term metastable free flow is as follows. Small enough disturbances in metastable free flow decay. However, when a large enough disturbance occurs at the bottleneck, an F → S phase transition does occur. Such a disturbance that initiates the F → S phase transition in metastable free flow at the bottleneck can be called a nucleus for traffic breakdown. In other words, real traffic breakdown (F → S phase transition) at a highway bottleneck exhibits the nucleation nature. Kerner considers the empirical nucleation nature of traffic breakdown (F → S phase transition) at a road bottleneck as the empirical fundamental of traffic and transportation science.
The reason for Kerner’s theory and his criticism of classical traffic flow theories.
The empirical nucleation nature of traffic breakdown at highway bottlenecks cannot be explained by classical traffic theories and models. The search for an explanation of the empirical nucleation nature of traffic breakdown (F → S phase transition) at a highway bottleneck has been the motivation for the development of Kerner’s three-phase theory.
In particular, in two-phase traffic flow models in which traffic breakdown is associated with free flow instability, this model instability leads to an F → J phase transition, i.e., in these traffic flow models traffic breakdown is governed by the spontaneous emergence of a wide moving jam (or jams) in an initial free flow (see Kerner’s criticism of such two-phase models, as well as of other classical traffic flow models and theories, in Chapter 10 of the book and in critical reviews).
The main prediction of Kerner’s three-phase theory.
Kerner developed the three-phase theory as an explanation of the empirical nature of traffic breakdown at highway bottlenecks: a random (probabilistic) F → S phase transition that occurs in the metastable state of free flow.
With this, Kerner explained the main prediction of the theory: the metastability of free flow with respect to the F → S phase transition is governed by the nucleation nature of an instability of synchronized flow. The underlying mechanism is a large enough local increase in speed in synchronized flow (called an S → F instability), which is a growing wave of locally increased speed in synchronized flow at the bottleneck. The development of the S → F instability leads to a local phase transition from synchronized flow to free flow at the bottleneck (an S → F transition). To explain this phenomenon, Kerner developed a microscopic theory of the S → F instability.
None of the classical traffic flow theories and models incorporate the S → F instability of the three-phase theory.
Initially developed for highway traffic, the three-phase theory was expanded by Kerner to describe city traffic in 2011–2014.
Range of highway capacities.
In three-phase traffic theory, traffic breakdown is explained by the F → S transition occurring in a metastable free flow. Probably the most important consequence of that is the existence of a range of highway capacities between some maximum and minimum capacities.
Maximum and minimum highway capacities.
Spontaneous traffic breakdown, i.e., a spontaneous F → S phase transition, may occur in a wide range of flow rates in free flow. Kerner states, based on empirical data, that because of the possibility of spontaneous or induced traffic breakdowns at the same freeway bottleneck "at any time instant" there is a range of highway capacities at a bottleneck. This range of freeway capacities is between a minimum capacity formula_14 and a maximum capacity formula_15 of free flow (Figure 7).
Highway capacities and metastability of free flow.
There is a maximum highway capacity formula_15: If the flow rate is close to the maximum capacity formula_15, then even small disturbances in free flow at a bottleneck will lead to a spontaneous F → S phase transition. On the other hand, only very large disturbances in free flow at the bottleneck will lead to a spontaneous F → S phase transition, if the flow rate is close to a minimum capacity formula_14 (see, for example, Sec. 17.2.2 of the book). The probability of a smaller disturbance in free flow is much higher than that of a larger disturbance. Therefore, the higher the flow rate in free flow at a bottleneck, the higher the probability of the spontaneous F → S phase transition. If the flow rate in free flow is lower than the minimum capacity formula_14, there will be no traffic breakdown (no F →S phase transition) at the bottleneck.
The infinite number of highway capacities at a bottleneck can be illustrated by the meta-stability of free flow at flow rates formula_0 with
formula_16
Metastability of free flow means that for small disturbances free flow remains stable (free flow persists), but with larger disturbances the flow becomes unstable and an F → S phase transition to synchronized flow occurs.
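The picture of a capacity range can be illustrated with a toy breakdown-probability function. The functional form below is not Kerner's; it is only an assumed increasing function that is zero below the minimum capacity and approaches one near the maximum capacity, and the flow-rate numbers are made up for the example.

```python
import math

def breakdown_probability(q, c_min, c_max, steepness=6.0):
    """Illustrative probability of a spontaneous F -> S transition during a
    fixed observation interval, as an increasing function of the flow rate q
    between the minimum capacity c_min and the maximum capacity c_max
    (assumed shape, not an empirical formula)."""
    if q < c_min:
        return 0.0
    if q >= c_max:
        return 1.0
    x = (q - c_min) / (c_max - c_min)
    return (math.exp(steepness * x) - 1.0) / (math.exp(steepness) - 1.0)

for q in (1600, 1800, 2000, 2200, 2400):     # vehicles per hour, made-up values
    print(q, round(breakdown_probability(q, c_min=1700, c_max=2400), 3))
```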
Discussion of capacity definitions.
Thus the basic theoretical result of three-phase theory about the understanding of the stochastic capacity of free flow at a bottleneck is as follows:
"At any time instant", there is an infinite number of highway capacities of free flow at the bottleneck. The infinite number of flow rates, at which traffic breakdown can be induced at the bottleneck and the infinite number of highway capacities. These capacities are within the flow rate range between a minimum capacity and a maximum capacity (Figure 7).
The range of highway capacities at a bottleneck in Kerner’s three-phase traffic theory contradicts fundamentally the classical understanding of stochastic highway capacity as well as traffic theories and methods for traffic management and traffic control which at any time assume the existence of a "particular" highway capacity. In contrast, in Kerner’s three-phase traffic theory "at any time" there is a range of highway capacities, which are between the minimum capacity formula_14 and maximum capacity formula_15. The values formula_14 and formula_15 can depend considerably on traffic parameters (the percentage of long vehicles in traffic flow, weather, bottleneck characteristics, etc.).
The existence "at any time instant" of a range of highway capacities in Kerner’s theory changes crucially methodologies for traffic control, dynamic traffic assignment, and traffic management. In particular, to satisfy the nucleation nature of traffic breakdown, Kerner introduced breakdown minimization principle (BM principle) for the optimization and control of vehicular traffic networks.
Wide moving jams ("J").
A moving jam will be called "wide" if its length (in direction of the flow) clearly exceeds the lengths of the jam fronts. The average vehicle speed within wide moving jams is much lower than the average speed in free flow. At the downstream front, the vehicles accelerate to the free flow speed. At the upstream jam front, the vehicles come from free flow or synchronized flow and must reduce their speed. According to the definition ["J"] the wide moving jam always has the same mean velocity of the downstream front formula_6, even if the jam propagates through other traffic phases or bottlenecks. The flow rate is sharply reduced within a wide moving jam.
Characteristic parameters of wide moving jams.
Kerner’s empirical results show that some characteristic features of wide moving jams are independent of the traffic volume and bottleneck features (e.g., where and when the jam formed). However, these characteristic features are dependent on weather conditions, road conditions, vehicle technology, the percentage of long vehicles, etc. The velocity of the downstream front of a wide moving jam formula_6 (in the upstream direction) is a characteristic parameter, as is the flow rate just downstream of the jam formula_17 (with free flow at this location, see Figure 8). This means that many wide moving jams have similar features under similar conditions, and these parameters are relatively predictable. The movement of the downstream jam front can be illustrated in the flow-density plane by a line, which is called "Line J" (Line J in Figure 8). The slope of Line J is the velocity of the downstream jam front formula_6.
Minimum highway capacity and outflow from wide moving jam.
Kerner emphasizes that the minimum capacity formula_14 and the outflow of a wide moving jam formula_17 describe two "qualitatively different features" of free flow: the minimum capacity formula_14 characterizes an F → S phase transition at a bottleneck, i.e., a traffic breakdown. In contrast, the outflow of a wide moving jam formula_17 determines a condition for the existence of the wide moving jam, i.e., of the traffic phase "J", while the jam propagates in free flow: if the jam propagates through free flow (i.e., free flow occurs both upstream and downstream of the jam), then the wide moving jam can persist only when the jam inflow formula_18 is equal to or larger than the jam outflow formula_17; otherwise, the jam dissolves over time. Depending on traffic parameters like weather, the percentage of long vehicles, et cetera, and on characteristics of the bottleneck where the F → S phase transition can occur, the minimum capacity formula_14 might be smaller (as in Figure 8) or greater than the jam’s outflow formula_17.
Synchronized flow phase ("S").
In contrast to wide moving jams, both the flow rate and vehicle speed may vary significantly in the synchronized flow phase. The downstream front of synchronized flow is often spatially fixed (see definition ["S"]), normally at a bottleneck at a certain road location. The flow rate in this phase could remain similar to the one in free flow, even if the vehicle speeds are sharply reduced.
Because the synchronized flow phase does not have the characteristic features of the wide moving jam phase "J", Kerner’s three-phase traffic theory assumes that the hypothetical homogeneous states of synchronized flow cover a two-dimensional region in the flow-density plane (dashed regions in Figure 8).
"S" → "J" phase transition.
Wide moving jams do not emerge spontaneously in free flow, but they can emerge in regions of synchronized flow. This phase transition is called an S → J phase transition.
"Jam without obvious reason" – F → S → J phase transitions.
In 1998, Kerner found that in real field traffic data the emergence of a wide moving jam in free flow is observed as a cascade of F → S → J phase transitions (Figure 9): first, a region of synchronized flow emerges within a region of free flow. As explained above, such an F → S phase transition occurs mostly at a bottleneck. Within the synchronized flow phase a further "self-compression" occurs: vehicle density increases while vehicle speed decreases. This self-compression is called the "pinch effect". In "pinch" regions of synchronized flow, narrow moving jams emerge. If these narrow moving jams grow, wide moving jams emerge (labeled by S → J in Figure 9). Thus, wide moving jams emerge later than traffic breakdown (the F → S transition) and at another road location upstream of the bottleneck. Therefore, when Kerner’s F → S → J phase transitions occurring in real traffic (Figure 9 (a)) are presented in the speed-density plane (Figure 9 (b)) (or in the speed-flow or flow-density planes), one should remember that states of synchronized flow and the low-speed states within a wide moving jam are measured at different road locations. Kerner notes that the frequency of the emergence of wide moving jams increases if the density in synchronized flow increases. The wide moving jams propagate further upstream, even while propagating through regions of synchronized flow or through bottlenecks. Obviously, any combination of return phase transitions (the S → F, J → S, and J → F transitions shown in Figure 9) is also possible.
The physics of "S" → "J" transition.
To further illustrate S → J phase transitions: in Kerner’s three-phase traffic theory Line J divides the homogeneous states of synchronized flow in two (Figure 8). States of homogeneous synchronized flow above Line J are meta-stable. States of homogeneous synchronized flow below Line J are stable states in which no S → J phase transition can occur. Metastable homogeneous synchronized flow means that for small disturbances, the traffic state remains stable. However, when larger disturbances occur, synchronized flow becomes unstable, and an S → J phase transition occurs.
Traffic patterns of "S" and "J".
Very complex congested patterns can be observed, caused by F → S and S → J phase transitions.
Classification of synchronized flow traffic patterns (SP).
A congestion pattern of synchronized flow (Synchronized Flow Pattern (SP)) with a fixed downstream front and an upstream front that does not propagate continuously upstream is called a Localised Synchronized Flow Pattern (LSP).
Frequently the upstream front of an SP propagates upstream. If only the upstream front propagates upstream, the related SP is called a Widening Synchronised Flow Pattern (WSP). The downstream front remains at the bottleneck location and the width of the SP increases.
It is also possible that both the upstream and the downstream front propagate upstream; in that case, the downstream front is no longer located at the bottleneck. This pattern has been called a Moving Synchronised Flow Pattern (MSP).
Catch effect of synchronized flow at a highway bottleneck.
The difference between an SP and a wide moving jam becomes visible when a WSP or MSP reaches an upstream bottleneck: the so-called "catch effect" can occur. The SP will be caught at the bottleneck, and as a result a new congested pattern emerges. A wide moving jam, in contrast, will not be caught at a bottleneck and moves further upstream. Also in contrast to wide moving jams, synchronized flow, even if it moves as an MSP, has no characteristic parameters. For example, the velocity of the downstream front of an MSP might vary significantly and can be different for different MSPs. These features of SPs and wide moving jams are consequences of the phase definitions [S] and [J].
General congested traffic pattern (GP).
An often occurring congestion pattern is one that contains both congested phases, [S] and [J]. Such a pattern with [S] and [J] is called General Pattern (GP). An empirical example of GP is shown in Figure 9 (a).
In many freeway infrastructures, bottlenecks are very close to each other. A congestion pattern whose synchronized flow covers two or more bottlenecks is called an Expanded Pattern (EP). An EP could contain synchronized flow only (called an ESP: Expanded Synchronized Flow Pattern), but normally wide moving jams form in the synchronized flow. In those cases, the EP is called an EGP (Expanded General Pattern) (see Figure 10).
Applications of three-phase traffic theory in transportation engineering.
One of the applications of Kerner’s three-phase traffic theory is the pair of methods called ASDA ("Automatische StauDynamikAnalyse": automatic tracking of wide moving jams) and FOTO (Forecasting Of Traffic Objects). ASDA/FOTO is a software tool able to process large traffic data volumes quickly and efficiently on freeway networks (see examples from three countries, Figure 11). ASDA/FOTO works in an online traffic management system based on measured traffic data. Recognition, tracking, and prediction of [S] and [J] are performed using the features of Kerner’s three-phase traffic theory.
Further applications of the theory are seen in the development of traffic simulation models, a ramp metering system (ANCONA), collective traffic control, traffic assistance, autonomous driving, and traffic state detection, as described in the books by Kerner.
Mathematical models of traffic flow in the framework of Kerner’s three-phase traffic theory.
Rather than a mathematical model of traffic flow, Kerner’s three-phase theory is a qualitative traffic flow theory that consists of several hypotheses. The hypotheses of Kerner’s three-phase theory should qualitatively explain spatiotemporal traffic phenomena in traffic networks found in real field traffic data, which was measured over years on a variety of highways in different countries. Some of the hypotheses of Kerner’s theory have been considered above. It can be expected that a diverse variety of different mathematical models of traffic flow can be developed in the framework of Kerner’s three-phase theory.
The first mathematical model of traffic flow in the framework of Kerner’s three-phase theory with which mathematical simulations could show and explain traffic breakdown by an F → S phase transition in metastable free flow at a bottleneck was the Kerner–Klenov model, introduced in 2002. The Kerner–Klenov model is a microscopic stochastic model in the framework of Kerner’s three-phase traffic theory. In the Kerner–Klenov model, vehicles move in accordance with stochastic rules of vehicle motion that can be chosen individually for each vehicle. Some months later, Kerner, Klenov, and Wolf developed a cellular automaton (CA) traffic flow model in the framework of Kerner’s three-phase theory.
The Kerner–Klenov stochastic three-phase traffic flow model has further been developed for a range of applications. In particular, it has been used to simulate on-ramp metering, speed limit control, dynamic traffic assignment in traffic and transportation networks, traffic at heavy bottlenecks and at moving bottlenecks, features of heterogeneous traffic flow consisting of different vehicles and drivers, jam warning methods, vehicle-to-vehicle (V2V) communication for cooperative driving, the performance of self-driving vehicles in mixed traffic flow, traffic breakdown at signals in city traffic, over-saturated city traffic, and vehicle fuel consumption in traffic networks (see references in Sec. 1.7 of a review).
Over time several scientific groups have developed new mathematical models in the framework of Kerner’s three-phase theory. In particular, new mathematical models in the framework of Kerner’s three-phase theory have been introduced in the works by Jiang, Wu, Gao, et al., Davis, Lee, Barlovich, Schreckenberg, and Kim (see other references to mathematical models in the framework of Kerner’s three-phase traffic theory and results of their investigations in Sec. 1.7 of a review).
Criticism of the theory.
The theory has been criticized for two primary reasons. First, the theory is almost completely based on measurements on the Bundesautobahn 5 in Germany. It may be that this road exhibits this pattern while other roads in other countries have other characteristics. Future research must show the validity of the theory on other roads in other countries around the world. Second, it is not clear how the data was interpolated. Kerner uses fixed-point measurements (loop detectors), but draws his conclusions about vehicle trajectories, which span the whole length of the road under investigation. These trajectories can only be measured directly if floating car data is used, but, as said, only loop detector measurements are used. How the data in between was gathered or interpolated is not clear.
The above criticism has been responded to in a recent study of data measured in the US and the United Kingdom, which confirms conclusions made based on measurements on the Bundesautobahn 5 in Germany. Moreover, there is a recent validation of the theory based on floating car data. In this article one can also find methods for spatial-temporal interpolations of data measured at road detectors (see article’s appendixes).
Other criticisms have been made, such as that the notion of phases has not been well defined and that so-called two-phase models also succeed in simulating the essential features described by Kerner.
This criticism has been responded to in a review as follows. The most important feature of Kerner’s theory is the explanation of the empirical nucleation nature of traffic breakdown at a road bottleneck by the F → S transition. The empirical nucleation nature of traffic breakdown "cannot" be explained with earlier traffic flow theories including two-phase traffic flow models studied in. | [
{
"math_id": 0,
"text": "q"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "q_{\\max}"
},
{
"math_id": 3,
"text": "k_\\text{crit}"
},
{
"math_id": 4,
"text": "v^{\\min}_\\text{free}"
},
{
"math_id": 5,
"text": "v^\\min_\\text{free} = \\frac{q_\\max}{k_\\text{crit}}"
},
{
"math_id": 6,
"text": "v_g"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "g_\\text{safe}"
},
{
"math_id": 9,
"text": "g"
},
{
"math_id": 10,
"text": "g>G"
},
{
"math_id": 11,
"text": "g<g_{\\rm safe}"
},
{
"math_id": 12,
"text": " g_\\text{safe} \\leq g \\leq G "
},
{
"math_id": 13,
"text": " g_\\text{safe}"
},
{
"math_id": 14,
"text": "C_{\\min}"
},
{
"math_id": 15,
"text": "C_{\\max}"
},
{
"math_id": 16,
"text": "C_{\\min} \\leq q < C_{\\max}. "
},
{
"math_id": 17,
"text": "q_\\text{out}"
},
{
"math_id": 18,
"text": "q_\\text{in}"
}
] | https://en.wikipedia.org/wiki?curid=7984781 |
7984958 | Bhatnagar–Gross–Krook operator | Collision operator used in a computational fluid dynamics technique
The term Bhatnagar–Gross–Krook operator (abbreviated BGK operator) refers to a collision operator used in the Boltzmann equation and in the lattice Boltzmann method, a computational fluid dynamics technique. It is given by the formula
formula_0
where formula_1 is a local equilibrium value for the population of particles in the direction of link formula_2. The term formula_3 is a relaxation time and related to the viscosity.
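As a minimal illustration of how the operator is applied in a lattice Boltzmann collision step, the sketch below relaxes the populations toward their local equilibrium values; the array shapes and numbers are only an example and do not correspond to any particular published implementation.

```python
import numpy as np

def bgk_collision(n, n_eq, tau):
    """Apply the BGK collision operator Omega_i = -(n_i - n_i^eq) / tau
    and return the post-collision populations n_i + Omega_i."""
    return n - (n - n_eq) / tau

# Toy example: three link populations relaxing toward their equilibrium values.
n = np.array([0.80, 1.10, 0.95])
n_eq = np.array([1.00, 1.00, 1.00])
print(bgk_collision(n, n_eq, tau=0.8))
```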
The operator is named after Prabhu L. Bhatnagar, Eugene P. Gross, and Max Krook, the three scientists who introduced it in an article in "Physical Review" in 1954.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega_i = -\\tau^{-1}(n_i - n_i^\\text{eq}),"
},
{
"math_id": 1,
"text": "n_i^\\text{eq}"
},
{
"math_id": 2,
"text": "\\mathbf{e}_i"
},
{
"math_id": 3,
"text": "\\tau"
}
] | https://en.wikipedia.org/wiki?curid=7984958 |
798571 | Rule of succession | Formula in probability theory
In probability theory, the rule of succession is a formula introduced in the 18th century by Pierre-Simon Laplace in the course of treating the sunrise problem. The formula is still used, particularly to estimate underlying probabilities when there are few observations or events that have not been observed to occur at all in (finite) sample data.
Statement of the rule of succession.
If we repeat an experiment that we know can result in a success or failure, "n" times independently, and get "s" successes, and "n − s" failures, then what is the probability that the next repetition will succeed?
More abstractly: If "X"1, ..., "X""n"+1 are conditionally independent random variables that each can assume the value 0 or 1, then, if we know nothing more about them,
formula_0
Interpretation.
Since we have the prior knowledge that we are looking at an experiment for which both success and failure are possible, our estimate is as if we had observed one success and one failure for sure before we even started the experiments. In a sense we made "n" + 2 observations (known as pseudocounts) with "s" + 1 successes. Although this may seem the simplest and most reasonable assumption, which also happens to be true, it still requires a proof. Indeed, assuming a pseudocount of one per possibility is one way to generalise the binary result, but this has unexpected consequences (see Generalization to any number of possibilities, below).
Nevertheless, if we had not known from the start that both success and failure are possible, then we would have had to assign
formula_1
But see Mathematical details, below, for an analysis of its validity. In particular it is not valid when formula_2, or formula_3.
If the number of observations increases, formula_4 and formula_5 get more and more similar, which is intuitively clear: the more data we have, the less importance should be assigned to our prior information.
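The two estimates and their convergence are easy to check numerically. The following is only a small illustrative sketch in Python, not part of the original argument:

```python
from fractions import Fraction

def rule_of_succession(s, n):
    """Laplace's estimate: P(next trial succeeds) = (s + 1) / (n + 2)."""
    return Fraction(s + 1, n + 2)

def observed_frequency(s, n):
    """Naive estimate s / n; undefined for n = 0 and degenerate at s = 0 or s = n."""
    return Fraction(s, n)

# With few observations the estimates differ noticeably ...
print(rule_of_succession(3, 4), observed_frequency(3, 4))          # 2/3 vs 3/4
# ... and they approach each other as the number of observations grows.
print(rule_of_succession(300, 400), observed_frequency(300, 400))  # 301/402 vs 3/4
```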
Historical application to the sunrise problem.
Laplace used the rule of succession to calculate the probability that the Sun will rise tomorrow, given that it has risen every day for the past 5000 years. One obtains a very large factor of approximately 5000 × 365.25, which gives odds of about 1,826,200 to 1 in favour of the Sun rising tomorrow.
However, as the mathematical details below show, the basic assumption for using the rule of succession would be that we have no prior knowledge about the question whether the Sun will or will not rise tomorrow, except that it can do either. This is not the case for sunrises.
Laplace knew this well, and he wrote to conclude the sunrise example: "But this number is far greater for him who, seeing in the totality of phenomena the principle regulating the days and seasons, realizes that nothing at the present moment can arrest the course of it." Yet Laplace was ridiculed for this calculation; his opponents gave no heed to that sentence, or failed to understand its importance.
In the 1940s, Rudolf Carnap investigated a probability-based theory of inductive reasoning, and developed measures of degree of confirmation, which he considered as alternatives to Laplace's rule of succession. See also New riddle of induction#Carnap.
Intuition.
The rule of succession can be interpreted in an intuitive manner by considering points randomly distributed on a circle rather than counting the number "success"/"failures" in an experiment. To mimic the behavior of the proportion "p" on the circle, we will color the circle in two colors and the fraction of the circle colored in the "success" color will be equal to "p". To express the uncertainty about the value of "p", we need to select a fraction of the circle.
A fraction is chosen by selecting two uniformly random points on the circle. The first point "Z" corresponds to the zero in the [0, 1] interval and the second point "P" corresponds to "p" within [0, 1]. In terms of the circle the fraction of the circle from "Z" to "P" moving clockwise will be equal to "p". The "n" trials can be interpreted as "n" points uniformly distributed on the circle; any point in the "success" fraction is a success and a failure otherwise. This provides an exact mapping from success/failure experiments with probability of success "p" to uniformly random points on the circle. In the figure the success fraction is colored blue to differentiate it from the rest of the circle and the points "P" and "Z" are highlighted in red.
Given this circle, the estimate of "p" is the fraction colored blue. Let us divide the circle into "n+2" arcs corresponding to the "n+2" points such that the portion from a point on the circle to the next point (moving clockwise) is one arc associated with the first point. Thus, "Z" defines the first blue arc while "P" defines the first non-blue/failure arc. Since the next point is a uniformly random point, if it falls in any of the blue arcs then the trial succeeds while if it falls in any of the other arcs, then it fails. So the probability of success "p" is formula_6 where "b" is the number of blue arcs and "t" is the total number of arcs. Note that there is one more blue arc (that of "Z") than success point and two more arcs (those of "P" and "Z") than "n" points. Substituting the values with number of successes gives the rule of succession.
"Note: The actual probability needs to use the length of blue arcs divided by the length of all arcs. However, when k points are uniformly randomly distributed on a circle, the distance from a point to the next point is 1/k. So on average each arc is of the same length and ratio of lengths becomes ratio of counts."
Mathematical details.
The proportion "p" is assigned a uniform distribution to describe the uncertainty about its true value. (This proportion is not random, but uncertain. We assign a probability distribution to "p" to express our uncertainty, not to attribute randomness to "p". But this amounts, mathematically, to the same thing as treating "p as if" it were random).
Let "X""i" be 1 if we observe a "success" on the "i"th trial, otherwise 0, with probability "p" of success on each trial. Thus each "X" is 0 or 1; each "X" has a Bernoulli distribution. Suppose these "X"s are conditionally independent given "p".
We can use Bayes' theorem to find the conditional probability distribution of "p" given the data "X""i", "i" = 1, ..., "n." For the "prior" (i.e., marginal) probability measure of "p" we assigned a uniform distribution over the open interval "(0,1)"
formula_7
For the likelihood of a given "p" under our observations, we use the likelihood function
formula_8
where "s" = "x"1 + ... + "x""n" is the number of "successes" and "n" is the number of trials (we are using capital "X" to denote a random variable and lower-case "x" as the data actually observed). Putting it all together, we can calculate the posterior:
formula_9
To get the normalizing constant, we find
formula_10
(see beta function for more on integrals of this form).
The posterior probability density function is therefore
formula_11
This is a beta distribution with expected value
formula_12
Since "p" tells us the probability of success in any experiment, and each experiment is conditionally independent, the conditional probability for success in the next experiment is just "p". As "p" is being treated as if it is a random variable, the law of total probability tells us that the expected probability of success in the next experiment is just the expected value of "p". Since "p" is conditional on the observed data "X""i" for "i" = 1, ..., "n", we have
formula_13
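In other words, the posterior is the Beta("s" + 1, "n" − "s" + 1) distribution, whose mean reproduces the rule. A short check, assuming scipy is available:

```python
from scipy.stats import beta

s, n = 7, 10
posterior = beta(s + 1, n - s + 1)   # uniform prior combined with a binomial likelihood
print(posterior.mean())              # 8/12 = (s + 1) / (n + 2)
```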
The same calculation can be performed with the (improper) prior that expresses total ignorance of "p", including ignorance with regard to the question whether the experiment can succeed, or can fail. This improper prior is 1/("p"(1 − "p")) for 0 ≤ "p" ≤ 1 and 0 otherwise. If the calculation above is repeated with this prior, we get
formula_14
Thus, with the prior specifying total ignorance, the probability of success is governed by the observed frequency of success. However, the posterior distribution that led to this result is the Beta("s","n" − "s") distribution, which is not proper when "s" = "n" or "s" = 0 (i.e. the normalisation constant is infinite when "s" = 0 or "s" = "n"). This means that we cannot use this form of the posterior distribution to calculate the probability of the next observation succeeding when "s" = 0 or "s" = "n". This puts the information contained in the rule of succession in greater light: it can be thought of as expressing the prior assumption that if sampling was continued indefinitely, we would eventually observe at least one success, and at least one failure in the sample. The prior expressing total ignorance does not assume this knowledge.
To evaluate the "complete ignorance" case when "s" = 0 or "s" = "n" can be dealt with, we first go back to the hypergeometric distribution, denoted by formula_15. This is the approach taken in Jaynes (2003). The binomial formula_16 can be derived as a limiting form, where formula_17 in such a way that their ratio formula_18 remains fixed. One can think of formula_19 as the number of successes in the total population, of size formula_20.
The equivalent prior to formula_21 is formula_22, with a domain of formula_23. Working conditional to formula_20 means that estimating formula_24 is equivalent to estimating formula_19, and then dividing this estimate by formula_20. The posterior for formula_19 can be given as:
formula_25
And it can be seen that, if "s" = "n" or "s" = 0, then one of the factorials in the numerator cancels exactly with one in the denominator. Taking the "s" = 0 case, we have:
formula_26
Adding in the normalising constant, which is always finite (because there are no singularities in the range of the posterior, and there are a finite number of terms) gives:
formula_27
So the posterior expectation for formula_18 is:
formula_28
An approximate analytical expression for large "N" is given by first making the approximation to the product term:
formula_29
and then replacing the summation in the numerator with an integral
formula_30
The same procedure is followed for the denominator, but the process is a bit more tricky, as the integral is harder to evaluate
formula_31
where ln is the natural logarithm. Plugging these approximations into the expectation gives
formula_32
where the base-10 logarithm has been used in the final answer for ease of calculation. For instance, if the population is of size "10""k" (that is, 10 to the power "k"), then the probability of success on the next sample is given by:
formula_33
So, for example, if the population is on the order of tens of billions, so that "k" = 10, and we observe "n" = 10 results without success, then the expected proportion in the population is approximately 0.43%. If the population is smaller, so that "n" = 10, "k" = 5 (tens of thousands), the expected proportion rises to approximately 0.86%, and so on. Similarly, if the number of observations is smaller, so that "n" = 5, "k" = 10, the proportion rises to approximately 0.86% again.
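The large-"N" approximation can be compared with a direct evaluation of the exact expression for a population small enough to sum over. The helper below is only an illustrative check (floating-point arithmetic, made-up sizes), not part of the original derivation.

```python
import math

def expected_proportion_no_successes(N, n):
    """Directly evaluate E(S/N | n trials, s = 0, population size N)
    from the exact sums in the posterior expectation above."""
    def core(R):
        out = 1.0
        for j in range(1, n):
            out *= (N - R - j)
        return out
    numer = sum(core(S) for S in range(1, N - n + 1))
    denom = sum(core(R) / R for R in range(1, N - n + 1))
    return numer / denom / N

N, n = 10**5, 10
print(expected_proportion_no_successes(N, n))   # direct evaluation
print(0.434294 / (n * math.log10(N)))           # large-N approximation, k = log10(N)
```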
This probability has no positive lower bound, and can be made arbitrarily small for larger and larger choices of "N", or "k". This means that the probability depends on the size of the population from which one is sampling. In passing to the limit of infinite "N" (for the simpler analytic properties) we are "throwing away" a piece of very important information. Note that this ignorance relationship only holds as long as only no successes are observed. It is correspondingly revised back to the observed frequency rule formula_34 as soon as one success is observed. The corresponding results are found for the "s=n" case by switching labels, and then subtracting the probability from 1.
Generalization to any number of possibilities.
This section gives a heuristic derivation similar to that in "Probability Theory: The Logic of Science".
The rule of succession has many different intuitive interpretations, and depending on which intuition one uses, the generalisation may be different. Thus, the way to proceed from here is to proceed very carefully and to re-derive the results from first principles, rather than to introduce an intuitively sensible generalisation. The full derivation can be found in Jaynes' book, but it admits an easier-to-understand alternative derivation once the solution is known. Another point to emphasise is that the prior state of knowledge described by the rule of succession is given as an enumeration of the possibilities, with the additional information that it is possible to observe each category. This can be equivalently stated as observing each category once prior to gathering the data. To denote that this is the knowledge used, an "I""m" is put as part of the conditions in the probability assignments.
The rule of succession comes from setting a binomial likelihood, and a uniform prior distribution. Thus a straightforward generalisation is just the multivariate extensions of these two distributions: 1) Setting a uniform prior over the initial m categories, and 2) using the multinomial distribution as the likelihood function (which is the multivariate generalisation of the binomial distribution). It can be shown that the uniform distribution is a special case of the Dirichlet distribution with all of its parameters equal to 1 (just as the uniform is Beta(1,1) in the binary case). The Dirichlet distribution is the conjugate prior for the multinomial distribution, which means that the posterior distribution is also a Dirichlet distribution with different parameters. Let "p""i" denote the probability that category "i" will be observed, and let "n""i" denote the number of times category "i" ("i" = 1, ..., "m") actually was observed. Then the joint posterior distribution of the probabilities "p"1, ..., "p""m" is given by:
formula_35
To get the generalised rule of succession, note that the probability of observing category "i" on the next observation, conditional on the "p""i" is just "p""i", we simply require its expectation. Letting "A""i" denote the event that the next observation is in category "i" ("i" = 1, ..., "m"), and let "n" = "n"1 + ... + "n""m" be the total number of observations made. The result, using the properties of the Dirichlet distribution is:
formula_36
This solution reduces to the probability that would be assigned using the principle of indifference before any observations made (i.e. "n" = 0), consistent with the original rule of succession. It also contains the rule of succession as a special case, when "m" = 2, as a generalisation should.
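A small sketch of the generalised rule, illustrative only:

```python
from fractions import Fraction

def generalized_rule_of_succession(counts):
    """P(next observation is category i) = (n_i + 1) / (n + m),
    for m categories known in advance to be possible."""
    m = len(counts)
    n = sum(counts)
    return [Fraction(n_i + 1, n + m) for n_i in counts]

print(generalized_rule_of_succession([0, 0, 0]))   # no data yet: 1/3 each (indifference)
print(generalized_rule_of_succession([6, 3, 1]))   # [7/13, 4/13, 2/13]
```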
Because the propositions or events "A""i" are mutually exclusive, it is possible to collapse the "m" categories into 2. Simply add up the "A""i" probabilities that correspond to "success" to get the probability of success. Suppose that this aggregates "c" categories as "success" and "m" − "c" categories as "failure". Let "s" denote the sum of the relevant "n""i" values that have been termed "success". The probability of "success" at the next trial is then:
formula_37
which is different from the original rule of succession. But note that the original rule of succession is based on "I"2, whereas the generalisation is based on "I""m". This means that the information contained in "I""m" is different from that contained in "I"2. This indicates that mere knowledge of more than two outcomes we know are possible is relevant information when collapsing these categories down to just two. This illustrates the subtlety in describing the prior information, and why it is important to specify which prior information one is using.
Further analysis.
A good model is essential (i.e., a good compromise between accuracy and practicality). To paraphrase Laplace on the sunrise problem: Although we have a huge number of samples of the sun rising, there are far better models of the sun than assuming it has a certain probability of rising each day, e.g., simply having a half-life.
Given a good model, it is best to make as many observations as practicable, depending on the expected reliability of prior knowledge, cost of observations, time and resources available, and accuracy required.
One of the most difficult aspects of the rule of succession is not the mathematical formulas, but answering the question: When does the rule of succession apply? In the generalisation section, it was noted very explicitly by adding the prior information "I"m into the calculations. Thus, when all that is known about a phenomenon is that there are "m" known possible outcomes prior to observing any data, only then does the rule of succession apply. If the rule of succession is applied in problems where this does not accurately describe the prior state of knowledge, then it may give counter-intuitive results. This is not because the rule of succession is defective, but that it is effectively answering a different question, based on different prior information.
In principle (see Cromwell's rule), no possibility should have its probability (or its pseudocount) set to zero, since nothing in the physical world should be assumed strictly impossible (though it may be)—even if contrary to all observations and current theories. Indeed, Bayes rule takes "absolutely" no account of an observation previously believed to have zero probability—it is still declared impossible. However, only considering a fixed set of the possibilities is an acceptable route, one just needs to remember that the results are conditional on (or restricted to) the set being considered, and not some "universal" set. In fact Larry Bretthorst shows that including the possibility of "something else" into the hypothesis space makes no difference to the relative probabilities of the other hypothesis—it simply renormalises them to add up to a value less than 1. Until "something else" is specified, the likelihood function conditional on this "something else" is indeterminate, for how is one to determine formula_38? Thus no updating of the prior probability for "something else" can occur until it is more accurately defined.
However, it is sometimes debatable whether prior knowledge should affect the relative probabilities, or also the total weight of the prior knowledge compared to actual observations. This does not have a clear cut answer, for it depends on what prior knowledge one is considering. In fact, an alternative prior state of knowledge could be of the form "I have specified "m" potential categories, but I am sure that only one of them is possible prior to observing the data. However, I do not know which particular category this is." A mathematical way to describe this prior is the Dirichlet distribution with all parameters equal to "m"−1, which then gives a pseudocount of "1" to the denominator instead of "m", and adds a pseudocount of "m"−1 to each category. This gives a slightly different probability in the binary case of formula_39.
Prior probabilities are only worth spending significant effort estimating when likely to have significant effect. They may be important when there are few observations — especially when so few that there have been few, if any, observations of some possibilities – such as a rare animal, in a given region. Also important when there are many observations, where it is believed that the expectation should be heavily weighted towards the prior estimates, in spite of many observations to the contrary, such as for a roulette wheel in a well-respected casino. In the latter case, at least some of the pseudocounts may need to be very large. They are not always small, and thereby soon outweighed by actual observations, as is often assumed. However, although a last resort, for everyday purposes, prior knowledge is usually vital. So most decisions must be subjective to some extent (dependent upon the analyst and analysis used). | [
{
"math_id": 0,
"text": "P(X_{n+1}=1 \\mid X_1+\\cdots+X_n=s)={s+1 \\over n+2}."
},
{
"math_id": 1,
"text": "P'(X_{n+1}=1 \\mid X_1+\\cdots+X_n=s)={s \\over n}."
},
{
"math_id": 2,
"text": "s=0"
},
{
"math_id": 3,
"text": "s=n"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "P'"
},
{
"math_id": 6,
"text": "\\frac{b}{t}"
},
{
"math_id": 7,
"text": "f(p) = \\begin{cases}\n0 & \\text{for }p \\le 0 \\\\\n1 & \\text{for }0 < p < 1 \\\\\n0 & \\text{for }p \\ge 1\n\\end{cases}"
},
{
"math_id": 8,
"text": "L(p)=P(X_1=x_1, \\ldots, X_n=x_n \\mid p)=\\prod_{i=1}^n p^{x_i}(1-p)^{1-x_i}=p^s (1-p)^{n-s}"
},
{
"math_id": 9,
"text": "f(p \\mid X_1=x_1, \\ldots, X_n=x_n) = {L(p)f(p) \\over \\int_0^1 L(r)f(r)\\,dr} = {p^s (1-p)^{n-s} \\over \\int_0^1 r^s(1-r)^{n-s}\\,dr}"
},
{
"math_id": 10,
"text": "\\int_0^1 r^s(1-r)^{n-s}\\,dr={s!(n-s)! \\over (n+1)!}"
},
{
"math_id": 11,
"text": "f(p \\mid X_1=x_1, \\ldots, X_n=x_n)={(n+1)! \\over s!(n-s)!}p^s(1-p)^{n-s}."
},
{
"math_id": 12,
"text": "\\operatorname{E}(p \\mid X_i=x_i\\text{ for }i=1,\\dots,n) = \\int_0^1 p f(p \\mid X_1=x_1, \\ldots, X_n=x_n)\\,dp = {s+1 \\over n+2}."
},
{
"math_id": 13,
"text": "P(X_{n+1}=1 \\mid X_i=x_i\\text{ for }i=1,\\dots,n)=\\operatorname{E}(p \\mid X_i=x_i\\text{ for }i=1,\\dots,n)={s+1 \\over n+2}."
},
{
"math_id": 14,
"text": "P'(X_{n+1}=1 \\mid X_i=x_i\\text{ for }i=1,\\dots,n)={s \\over n}."
},
{
"math_id": 15,
"text": "\\mathrm{Hyp}(s|N,n,S)"
},
{
"math_id": 16,
"text": "\\mathrm{Bin}(r|n,p)"
},
{
"math_id": 17,
"text": "N,S \\rightarrow \\infty"
},
{
"math_id": 18,
"text": "p={S \\over N}"
},
{
"math_id": 19,
"text": "S"
},
{
"math_id": 20,
"text": "N"
},
{
"math_id": 21,
"text": "{1 \\over p(1-p)}"
},
{
"math_id": 22,
"text": "{1 \\over S(N-S)}"
},
{
"math_id": 23,
"text": "1\\leq S \\leq N-1"
},
{
"math_id": 24,
"text": "p"
},
{
"math_id": 25,
"text": "P(S|N,n,s) \\propto {1 \\over S(N-S)} {S \\choose s}{N-S \\choose n-s}\n\\propto {S!(N-S)! \\over S(N-S)(S-s)!(N-S-[n-s])!}\n"
},
{
"math_id": 26,
"text": "P(S|N,n,s=0) \\propto {(N-S-1)! \\over S(N-S-n)!} = {\\prod_{j=1}^{n-1}(N-S-j) \\over S}\n"
},
{
"math_id": 27,
"text": "P(S|N,n,s=0) = {\\prod_{j=1}^{n-1}(N-S-j) \\over S \\sum_{R=1}^{N-n}{\\prod_{j=1}^{n-1}(N-R-j) \\over R}}\n"
},
{
"math_id": 28,
"text": "E\\left({S \\over N}|n,s=0,N\\right)={1 \\over N}\\sum_{S=1}^{N-n}S P(S|N,n=1,s=0)={1 \\over N}{\\sum_{S=1}^{N-n}\\prod_{j=1}^{n-1}(N-S-j) \\over \\sum_{R=1}^{N-n}{\\prod_{j=1}^{n-1}(N-R-j) \\over R}}\n"
},
{
"math_id": 29,
"text": "\\prod_{j=1}^{n-1}(N-R-j)\\approx (N-R)^{n-1}"
},
{
"math_id": 30,
"text": "\\sum_{S=1}^{N-n}\\prod_{j=1}^{n-1}(N-S-j)\\approx \\int_1^{N-n}(N-S)^{n-1} \\, dS = {(N-1)^n-n^n \\over n}\\approx {N^n \\over n}"
},
{
"math_id": 31,
"text": "\n\\begin{align}\n\\sum_{R=1}^{N-n}{\\prod_{j=1}^{n-1}(N-R-j) \\over R} & \\approx \\int_1^{N-n}{(N-R)^{n-1}\\over R} \\, dR \\\\\n& = N\\int_1^{N-n} {(N-R)^{n-2}\\over R} \\, dR - \\int_1^{N-n}(N-R)^{n-2} \\, dR \\\\\n& = N^{n-1}\\left[\\int_1^{N-n}{dR\\over R}-{1\\over n-1} + O\\left({1\\over N}\\right)\\right]\n\\approx N^{n-1}\\ln(N)\n\\end{align}\n"
},
{
"math_id": 32,
"text": "E\\left({S \\over N}|n,s=0,N\\right)\\approx {1 \\over N}{{N^n \\over n}\\over N^{n-1}\\ln(N)}={1 \\over n [\\ln(N)]}={\\log_{10}(e) \\over n [\\log_{10}(N)]}={0.434294 \\over n [\\log_{10}(N)]}\n"
},
{
"math_id": 33,
"text": "E\\left({S \\over N} \\mid n,s=0,N=10^k \\right)\\approx {0.434294 \\over nk}"
},
{
"math_id": 34,
"text": "p={s \\over n}"
},
{
"math_id": 35,
"text": "\nf(p_1,\\ldots,p_m \\mid n_1,\\ldots,n_m,I) = \n \\begin{cases} { \\displaystyle \n \\frac{\\Gamma\\left( \\sum_{i=1}^m (n_i+1) \\right)}{\\prod_{i=1}^m \\Gamma(n_i+1)}\np_1^{n_1}\\cdots p_m^{n_m}\n}, \\quad &\n \\sum_{i=1}^m p_i=1 \\\\ \\\\\n0 & \\text{otherwise.} \\end{cases}\n"
},
{
"math_id": 36,
"text": "P(A_i | n_1,\\ldots,n_m, I_m)={n_i + 1 \\over n + m}. "
},
{
"math_id": 37,
"text": "P(\\text{success}| n_1,\\ldots,n_m, I_m)={s + c \\over n + m}, "
},
{
"math_id": 38,
"text": " Pr(\\text{data} | \\text{something else},I) "
},
{
"math_id": 39,
"text": "\\frac{s+0.5}{n+1}"
}
] | https://en.wikipedia.org/wiki?curid=798571 |
7986007 | Q-Vandermonde identity | A q-analogue of the Chu–Vandermonde identity.
In mathematics, in the field of combinatorics, the "q"-Vandermonde identity is a "q"-analogue of the Chu–Vandermonde identity. Using standard notation for "q"-binomial coefficients, the identity states that
formula_0
The nonzero contributions to this sum come from values of "j" such that the "q"-binomial coefficients on the right side are nonzero, that is, max(0, "k" − "m") ≤ "j" ≤ min("n", "k").
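The identity can be verified symbolically for small parameters. The following check uses sympy (an assumption about the available tooling, not part of the article) together with the product formula for the Gaussian binomial coefficient:

```python
import sympy as sp

q = sp.symbols('q')

def q_binomial(n, k):
    """Gaussian binomial coefficient [n choose k]_q as a polynomial in q."""
    if k < 0 or k > n:
        return sp.Integer(0)
    num, den = sp.Integer(1), sp.Integer(1)
    for i in range(1, k + 1):
        num *= 1 - q**(n - k + i)
        den *= 1 - q**i
    return sp.cancel(num / den)

def q_vandermonde_holds(m, n, k):
    lhs = q_binomial(m + n, k)
    rhs = sum(q_binomial(m, k - j) * q_binomial(n, j) * q**(j * (m - k + j))
              for j in range(0, k + 1))
    return sp.expand(lhs - rhs) == 0

# Check the identity for all small m, n, k.
assert all(q_vandermonde_holds(m, n, k)
           for m in range(5) for n in range(5) for k in range(m + n + 1))
```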
Other conventions.
As is typical for "q"-analogues, the "q"-Vandermonde identity can be rewritten in a number of ways. In the conventions common in applications to quantum groups, a different "q"-binomial coefficient is used. This "q"-binomial coefficient, which we denote here by formula_1, is defined by
formula_2
In particular, it is the unique shift of the "usual" "q"-binomial coefficient by a power of "q" such that the result is symmetric in "q" and formula_3. Using this "q"-binomial coefficient, the "q"-Vandermonde identity can be written in the form
formula_4
Proof.
As with the (non-"q") Chu–Vandermonde identity, there are several possible proofs of the "q"-Vandermonde identity. The following proof uses the "q"-binomial theorem.
One standard proof of the Chu–Vandermonde identity is to expand the product formula_5 in two different ways. Following Stanley, we can tweak this proof to prove the "q"-Vandermonde identity, as well. First, observe that the product
formula_6
can be expanded by the "q"-binomial theorem as
formula_7
Less obviously, we can write
formula_8
and we may expand both subproducts separately using the "q"-binomial theorem. This yields
formula_9
Multiplying this latter product out and combining like terms gives
formula_10
Finally, equating powers of formula_11 between the two expressions yields the desired result.
This argument may also be phrased in terms of expanding the product formula_12 in two different ways, where "A" and "B" are operators (for example, a pair of matrices) that ""q"-commute," that is, that satisfy "BA" = "qAB".
Notes.
<templatestyles src="Reflist/styles.css" />
References.
| [
{
"math_id": 0,
"text": "\\binom{m + n}{k}_{\\!\\!q} =\\sum_{j} \\binom{m}{k - j}_{\\!\\!q} \\binom{n}{j}_{\\!\\!q} q^{j(m-k+j)}."
},
{
"math_id": 1,
"text": "B_q(n,k)"
},
{
"math_id": 2,
"text": " B_q(n, k) = q^{-k(n-k)} \\binom{n}{k}_{\\!\\!q^2}."
},
{
"math_id": 3,
"text": "q^{-1}"
},
{
"math_id": 4,
"text": "B_q(m + n,k)=q^{n k}\\sum_{j}q^{-(m+n)j} B_q(m,k - j) B_q(n,j)."
},
{
"math_id": 5,
"text": "(1 + x)^m (1 + x)^n"
},
{
"math_id": 6,
"text": "(1 + x)(1 + qx) \\cdots \\left (1 + q^{m + n - 1}x \\right )"
},
{
"math_id": 7,
"text": "(1 + x)(1 + qx) \\cdots \\left (1 + q^{m + n - 1}x \\right ) = \\sum_k q^{\\frac{k(k-1)}{2}} \\binom{m + n}{k}_{\\!\\!q} x^k."
},
{
"math_id": 8,
"text": "(1 + x)(1 + qx) \\cdots \\left (1 + q^{m + n - 1}x \\right ) = \\left((1 + x)\\cdots (1 + q^{m - 1}x)\\right) \\left( \\left(1 + (q^m x) \\right) \\left (1 + q(q^m x) \\right ) \\cdots \\left (1 + q^{n - 1}(q^m x) \\right )\\right)"
},
{
"math_id": 9,
"text": "(1 + x)(1 + qx) \\cdots \\left (1 + q^{m + n - 1}x \\right ) = \\left(\\sum_i q^{\\frac{i(i - 1)}{2}} \\binom{m}{i}_{\\!\\!q} x^i \\right) \\cdot \\left(\\sum_i q^{mi + \\frac{i(i - 1)}{2}} \\binom{n}{i}_{\\!\\!q} x^i \\right)."
},
{
"math_id": 10,
"text": " \\sum_k \\sum_j \\left(q^{j(m - k + j) + \\frac{k(k - 1)}{2}} \\binom{m}{k - j}_{\\!\\!q} \\binom{n}{j}_{\\!\\!q}\\right)x^k."
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "(A + B)^m(A + B)^n"
}
] | https://en.wikipedia.org/wiki?curid=7986007 |
79861 | Constant term | Term in an algebraic expression which does not contain any variables
In mathematics, a constant term (sometimes referred to as a free term) is a term in an algebraic expression that does not contain any variables and therefore is constant. For example, in the quadratic polynomial,
formula_0
The number 3 is a constant term.
After like terms are combined, an algebraic expression will have at most one constant term. Thus, it is common to speak of the quadratic polynomial
formula_1
where formula_2 is the variable, as having a constant term of formula_3 If the constant term is 0, then it will conventionally be omitted when the quadratic is written out.
Any polynomial written in standard form has a unique constant term, which can be considered a coefficient of formula_4 In particular, the constant term will always be the lowest degree term of the polynomial. This also applies to multivariate polynomials. For example, the polynomial
formula_5
has a constant term of −4, which can be considered to be the coefficient of formula_6 where the variables are eliminated by being exponentiated to 0 (any non-zero number exponentiated to 0 becomes 1). For any polynomial, the constant term can be obtained by substituting in 0 instead of each variable; thus, eliminating each variable. The concept of exponentiation to 0 can be applied to power series and other types of series, for example in this power series:
formula_7
formula_8 is the constant term.
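As a small illustration of the substitution rule just described (using SymPy purely as an example tool, not as part of the definition), setting every variable of the multivariate polynomial above to 0 returns its constant term:
<syntaxhighlight lang="python">
from sympy import symbols

x, y = symbols('x y')

# Constant term of x^2 + 2xy + y^2 - 2x + 2y - 4: substitute 0 for every variable.
p = x**2 + 2*x*y + y**2 - 2*x + 2*y - 4
print(p.subs({x: 0, y: 0}))   # -4
</syntaxhighlight>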
Constant of integration.
The derivative of a constant term is 0, so when a function containing a constant term is differentiated, the constant term vanishes, regardless of its value. Therefore, the antiderivative is only determined up to an unknown constant term, which is called "the constant of integration" and is added in symbolic form.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x^2 + 2x + 3,\\ "
},
{
"math_id": 1,
"text": "ax^2+bx+c,\\ "
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "c."
},
{
"math_id": 4,
"text": "x^0."
},
{
"math_id": 5,
"text": "x^2+2xy+y^2-2x+2y-4\\ "
},
{
"math_id": 6,
"text": "x^0y^0,"
},
{
"math_id": 7,
"text": "a_0 + a_1 x + a_2 x^2 + a_3 x^3 + \\cdots,"
},
{
"math_id": 8,
"text": "a_0"
}
] | https://en.wikipedia.org/wiki?curid=79861 |
7987653 | Koszul algebra | Graded algebra over which the ground field has a linear minimal free resolution
In abstract algebra, a Koszul algebra formula_0 is a graded formula_1-algebra over which the ground field formula_1 has a linear minimal graded free resolution, "i.e.", there exists an exact sequence:
formula_2
for some nonnegative integers formula_3. Here formula_4 is the graded algebra formula_0 with grading shifted up by formula_5, "i.e." formula_6, and the exponent formula_3 refers to the formula_3-fold direct sum. Choosing bases for the free modules in the resolution, the chain maps are given by matrices, and the definition requires the matrix entries to be zero or linear forms.
An example of a Koszul algebra is a polynomial ring over a field, for which the Koszul complex is the minimal graded free resolution of the ground field. There are Koszul algebras whose ground fields have infinite minimal graded free resolutions, "e.g.", formula_7.
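For the last example one can write out the beginning of the resolution explicitly. The display below is an illustrative sketch with one particular choice of bases: the differentials alternate between two diagonal matrices of linear forms, so the resolution of formula_1 over formula_7 is linear even though it never terminates, and every Betti number formula_3 beyond the zeroth equals 2.
<math display="block">
\cdots \longrightarrow R(-3)^{2} \xrightarrow{\begin{pmatrix} x & 0 \\ 0 & y \end{pmatrix}} R(-2)^{2} \xrightarrow{\begin{pmatrix} y & 0 \\ 0 & x \end{pmatrix}} R(-1)^{2} \xrightarrow{\begin{pmatrix} x & y \end{pmatrix}} R \longrightarrow k \longrightarrow 0.
</math>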
The concept is named after the French mathematician Jean-Louis Koszul. | [
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "k"
},
{
"math_id": 2,
"text": "\\cdots \\rightarrow (R(-i))^{b_i} \\rightarrow \\cdots \\rightarrow (R(-2))^{b_2} \\rightarrow (R(-1))^{b_1} \\rightarrow R \\rightarrow k \\rightarrow 0."
},
{
"math_id": 3,
"text": "b_i"
},
{
"math_id": 4,
"text": "R(-j)"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "R(-j)_i = R_{i-j}"
},
{
"math_id": 7,
"text": "R = k[x,y]/(xy) "
}
] | https://en.wikipedia.org/wiki?curid=7987653 |
7987684 | Plutonium | Chemical element with atomic number 94 (Pu)
Plutonium is a chemical element; it has symbol Pu and atomic number 94. It is a silvery-gray actinide metal that tarnishes when exposed to air, and forms a dull coating when oxidized. The element normally exhibits six allotropes and four oxidation states. It reacts with carbon, halogens, nitrogen, silicon, and hydrogen. When exposed to moist air, it forms oxides and hydrides that can expand the sample up to 70% in volume, which in turn flake off as a powder that is pyrophoric. It is radioactive and can accumulate in bones, which makes the handling of plutonium dangerous.
Plutonium was first synthesized and isolated in late 1940 and early 1941, by deuteron bombardment of uranium-238 in the cyclotron at the University of California, Berkeley. First, neptunium-238 (half-life 2.1 days) was synthesized, which then beta-decayed to form the new element with atomic number 94 and atomic weight 238 (half-life 88 years). Since uranium had been named after the planet Uranus and neptunium after the planet Neptune, element 94 was named after Pluto, which at the time was also considered a planet. Wartime secrecy prevented the University of California team from publishing its discovery until 1948.
Plutonium is the element with the highest atomic number known to occur in nature. Trace quantities arise in natural uranium deposits when uranium-238 captures neutrons emitted by decay of other uranium-238 atoms. The heavy isotope plutonium-244 has a half-life long enough that extreme trace quantities should have survived primordially (from the Earth's formation) to the present, but so far experiments have not yet been sensitive enough to detect it.
Both plutonium-239 and plutonium-241 are fissile, meaning they can sustain a nuclear chain reaction, leading to applications in nuclear weapons and nuclear reactors. Plutonium-240 has a high rate of spontaneous fission, raising the neutron flux of any sample containing it. The presence of plutonium-240 limits a plutonium sample's usability for weapons or its quality as reactor fuel, and the percentage of plutonium-240 determines its grade (weapons-grade, fuel-grade, or reactor-grade). Plutonium-238 has a half-life of 87.7 years and emits alpha particles. It is a heat source in radioisotope thermoelectric generators, which are used to power some spacecraft. Plutonium isotopes are expensive and inconvenient to separate, so particular isotopes are usually manufactured in specialized reactors.
Producing plutonium in useful quantities for the first time was a major part of the Manhattan Project during World War II that developed the first atomic bombs. The Fat Man bombs used in the Trinity nuclear test in July 1945, and in the bombing of Nagasaki in August 1945, had plutonium cores. Human radiation experiments studying plutonium were conducted without informed consent, and several criticality accidents, some lethal, occurred after the war. Disposal of plutonium waste from nuclear power plants and dismantled nuclear weapons built during the Cold War is a nuclear-proliferation and environmental concern. Other sources of plutonium in the environment are fallout from many above-ground nuclear tests, which are now banned.
Characteristics.
Physical properties.
Plutonium, like most metals, has a bright silvery appearance at first, much like nickel, but it oxidizes very quickly to a dull gray, though yellow and olive green are also reported. At room temperature plutonium is in its α ("alpha") form. This, the most common structural form of the element (allotrope), is about as hard and brittle as gray cast iron unless it is alloyed with other metals to make it soft and ductile. Unlike most metals, it is not a good conductor of heat or electricity. It has a low melting point () and an unusually high boiling point (). This gives a large range of temperatures (over 2,500 kelvin wide) at which plutonium is liquid, but this range is neither the greatest among all actinides nor among all metals, with neptunium theorized to have the greatest range in both instances. The low melting point as well as the reactivity of the native metal compared to the oxide leads to plutonium oxides being a preferred form for applications such as nuclear fission reactor fuel (MOX-fuel).
Alpha decay, the release of a high-energy helium nucleus, is the most common form of radioactive decay for plutonium. A 5 kg mass of 239Pu contains about atoms. With a half-life of 24,100 years, about of its atoms decay each second by emitting a 5.157 MeV alpha particle. This amounts to 9.68 watts of power. Heat produced by the deceleration of these alpha particles makes it warm to the touch. 238Pu, due to its much shorter half-life, heats up to much higher temperatures and glows red hot with blackbody radiation if left without external heating or cooling. This heat has been used in radioisotope thermoelectric generators (see below).
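The decay-heat figures quoted above follow directly from the half-life. The short sketch below redoes the arithmetic for a 5 kg piece of 239Pu; the Avogadro constant, the molar mass of about 239 g/mol and a total decay energy of roughly 5.24 MeV per disintegration (alpha particle plus recoiling daughter nucleus) are assumed reference values, not numbers taken from this article.
<syntaxhighlight lang="python">
import math

N_A = 6.022e23                               # atoms per mole (assumed constant)
HALF_LIFE_S = 24_100 * 365.25 * 24 * 3600    # 239Pu half-life, in seconds
Q_J = 5.24 * 1.602e-13                       # ~5.24 MeV per decay, in joules

atoms = 5000 / 239.05 * N_A                  # atoms in a 5 kg piece
decay_constant = math.log(2) / HALF_LIFE_S   # per second
activity = decay_constant * atoms            # decays per second
power = activity * Q_J                       # watts of decay heat

print(f"{atoms:.2e} atoms, {activity:.2e} decays/s, {power:.2f} W")
# roughly 1.3e25 atoms, ~1.1e13 decays/s, ~9.7 W
</syntaxhighlight>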
The resistivity of plutonium at room temperature is very high for a metal, and it gets even higher with lower temperatures, which is unusual for metals. This trend continues down to 100 K, below which resistivity rapidly decreases for fresh samples. Resistivity then begins to increase with time at around 20 K due to radiation damage, with the rate dictated by the isotopic composition of the sample.
Because of self-irradiation, a sample of plutonium fatigues throughout its crystal structure, meaning the ordered arrangement of its atoms becomes disrupted by radiation with time. Self-irradiation can also lead to annealing which counteracts some of the fatigue effects as temperature increases above 100 K.
Unlike most materials, plutonium increases in density when it melts, by 2.5%, but the liquid metal exhibits a linear decrease in density with temperature. Near the melting point, the liquid plutonium has very high viscosity and surface tension compared to other metals.
Allotropes.
Plutonium normally has six allotropes and forms a seventh (zeta, ζ) at high temperature within a limited pressure range. These allotropes, which are different structural modifications or forms of an element, have very similar internal energies but significantly varying densities and crystal structures. This makes plutonium very sensitive to changes in temperature, pressure, or chemistry, and allows for dramatic volume changes following phase transitions from one allotropic form to another. The densities of the different allotropes vary from 16.00 g/cm3 to 19.86 g/cm3.
The presence of these many allotropes makes machining plutonium very difficult, as it changes state very readily. For example, the α form exists at room temperature in unalloyed plutonium. It has machining characteristics similar to cast iron but changes to the plastic and malleable β ("beta") form at slightly higher temperatures. The reasons for the complicated phase diagram are not entirely understood. The α form has a low-symmetry monoclinic structure, hence its brittleness, strength, compressibility, and poor thermal conductivity.
Plutonium in the δ ("delta") form normally exists in the 310 °C to 452 °C range but is stable at room temperature when alloyed with a small percentage of gallium, aluminium, or cerium, enhancing workability and allowing it to be welded. The δ form has more typical metallic character, and is roughly as strong and malleable as aluminium. In fission weapons, the explosive shock waves used to compress a plutonium core will also cause a transition from the usual δ phase plutonium to the denser α form, significantly helping to achieve supercriticality. The ε phase, the highest temperature solid allotrope, exhibits anomalously high atomic self-diffusion compared to other elements.
Nuclear fission.
Plutonium is a radioactive actinide metal whose isotope, plutonium-239, is one of the three primary fissile isotopes (uranium-233 and uranium-235 are the other two); plutonium-241 is also highly fissile. To be considered fissile, an isotope's atomic nucleus must be able to break apart or fission when struck by a slow moving neutron and to release enough additional neutrons to sustain the nuclear chain reaction by splitting further nuclei.
Pure plutonium-239 may have a multiplication factor (keff) larger than one, which means that if the metal is present in sufficient quantity and with an appropriate geometry (e.g., a sphere of sufficient size), it can form a critical mass. During fission, a fraction of the nuclear binding energy, which holds a nucleus together, is released as a large amount of electromagnetic and kinetic energy (much of the latter being quickly converted to thermal energy). Fission of a kilogram of plutonium-239 can produce an explosion equivalent to . It is this energy that makes plutonium-239 useful in nuclear weapons and reactors.
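An order-of-magnitude version of that energy estimate can be computed directly, assuming the standard textbook figure of roughly 200 MeV released per fission (an assumed value, not one given in this article):
<syntaxhighlight lang="python">
N_A = 6.022e23                                 # atoms per mole (assumed constant)
atoms_per_kg = 1000 / 239.05 * N_A             # atoms in 1 kg of 239Pu
energy_per_fission_j = 200 * 1.602e-13         # ~200 MeV in joules
total_j = atoms_per_kg * energy_per_fission_j  # complete fission of the kilogram
kt_tnt = total_j / 4.184e12                    # 1 kiloton of TNT = 4.184e12 J

print(f"{total_j:.1e} J, about {kt_tnt:.0f} kt of TNT equivalent")
# ~8.1e13 J, about 19 kt
</syntaxhighlight>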
The presence of the isotope plutonium-240 in a sample limits its nuclear bomb potential, as 240Pu has a relatively high spontaneous fission rate (~440 fissions per second per gram; over 1,000 neutrons per second per gram), raising the background neutron levels and thus increasing the risk of predetonation. Plutonium is identified as either weapons-grade, fuel-grade, or reactor-grade based on the percentage of 240Pu that it contains. Weapons-grade plutonium contains less than 7% 240Pu. Fuel-grade plutonium contains 7%–19%, and power reactor-grade contains 19% or more 240Pu. Supergrade plutonium, with less than 4% of 240Pu, is used in U.S. Navy weapons stored near ship and submarine crews, due to its lower radioactivity. Plutonium-238 is not fissile but can undergo nuclear fission easily with fast neutrons as well as alpha decay. All plutonium isotopes can be "bred" into fissile material with one or more neutron absorptions, whether followed by beta decay or not. This makes non-fissile isotopes of plutonium a fertile material.
Isotopes and nucleosynthesis.
Twenty radioisotopes of plutonium have been characterized. The longest-lived are 244Pu, with a half-life of 80.8 million years; 242Pu, with a half-life of 373,300 years; and 239Pu, with a half-life of 24,110 years. All other isotopes have half-lives of less than 7,000 years. This element also has eight metastable states, though all have half-lives less than a second. 244Pu has been found in interstellar space and it has the longest half-life of any non-primordial radioisotope.
The known isotopes of plutonium range in mass number from 228 to 247. The main decay modes of isotopes with mass numbers lower than the most stable isotope, 244Pu, are spontaneous fission and alpha emission, mostly forming uranium (92 protons) and neptunium (93 protons) isotopes as decay products (neglecting the wide range of daughter nuclei created by fission processes). The main decay mode for isotopes heavier than 244Pu, along with 241Pu and 243Pu, is beta emission, forming americium isotopes (95 protons). Plutonium-241 is the parent isotope of the neptunium series, decaying to americium-241 via beta emission.
Plutonium-238 and 239 are the most widely synthesized isotopes. 239Pu is synthesized via the following reaction using uranium (U) and neutrons (n) via beta decay (β−) with neptunium (Np) as an intermediate:
<chem>
^{238}_{92}U + ^{1}_{0}n -> ^{239}_{92}U ->[\beta^-][23.5 \ \ce{min}] ^{239}_{93}Np ->[\beta^-][2.36 \ \ce{d}] ^{239}_{94}Pu
</chem>
Neutrons from the fission of uranium-235 are captured by uranium-238 nuclei to form uranium-239; a beta decay converts a neutron into a proton to form neptunium-239 (half-life 2.36 days) and another beta decay forms plutonium-239. Egon Bretscher working on the British Tube Alloys project predicted this reaction theoretically in 1940.
Plutonium-238 is synthesized by bombarding uranium-238 with deuterons (D or 2H, the nuclei of heavy hydrogen) in the following reaction:
formula_0
where a deuteron hitting uranium-238 produces two neutrons and neptunium-238, which decays by emitting negative beta particles to form plutonium-238. Plutonium-238 can also be produced by neutron irradiation of neptunium-237.
Decay heat and fission properties.
Plutonium isotopes undergo radioactive decay, which produces decay heat. Different isotopes produce different amounts of heat per mass. The decay heat is usually listed as watt/kilogram, or milliwatt/gram. In larger pieces of plutonium (e.g. a weapon pit) with inadequate heat removal, the resulting self-heating may be significant.
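A rough way to see how the per-mass decay heat differs between isotopes is to combine half-life, molar mass and decay energy, as in the sketch below. The half-lives and Q-values are approximate reference figures assumed for the example, and the simple formula only suits alpha emitters; beta emitters such as 241Pu need a separate treatment because the antineutrino carries away part of the decay energy.
<syntaxhighlight lang="python">
import math

N_A = 6.022e23
YEAR_S = 365.25 * 24 * 3600

def decay_heat_w_per_kg(half_life_years, molar_mass_g, q_mev):
    # Decay heat per kilogram: (ln 2 / half-life) * atoms per kg * energy per decay.
    decay_constant = math.log(2) / (half_life_years * YEAR_S)
    atoms_per_kg = 1000 / molar_mass_g * N_A
    return decay_constant * atoms_per_kg * q_mev * 1.602e-13   # MeV -> J

for name, t_half, mass, q in [("238Pu", 87.7, 238.05, 5.59),
                              ("239Pu", 24_100, 239.05, 5.24),
                              ("240Pu", 6_560, 240.05, 5.26)]:
    print(name, round(decay_heat_w_per_kg(t_half, mass, q), 1), "W/kg")
# roughly: 238Pu ~570 W/kg, 239Pu ~1.9 W/kg, 240Pu ~7.1 W/kg
</syntaxhighlight>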
Compounds and chemistry.
At room temperature, pure plutonium is silvery in color but gains a tarnish when oxidized. The element displays four common ionic oxidation states in aqueous solution and one rare one: Pu(III), Pu(IV), Pu(V) and Pu(VI) are common, while Pu(VII) is rare.
The color shown by plutonium solutions depends on both the oxidation state and the nature of the acid anion. It is the acid anion that influences the degree of complexing—how atoms connect to a central atom—of the plutonium species. Additionally, the formal +2 oxidation state of plutonium is known in the complex [K(2.2.2-cryptand)] [PuIICp″3], Cp″ = C5H3(SiMe3)2.
A +8 oxidation state is possible as well in the volatile tetroxide PuO4. Though it readily decomposes via a reduction mechanism similar to FeO4, PuO4 can be stabilized in alkaline solutions and chloroform.
Metallic plutonium is produced by reacting plutonium tetrafluoride with barium, calcium or lithium at 1200 °C. Metallic plutonium is attacked by acids, oxygen, and steam but not by alkalis and dissolves easily in concentrated hydrochloric, hydroiodic and perchloric acids. Molten metal must be kept in a vacuum or an inert atmosphere to avoid reaction with air. At 135 °C the metal will ignite in air and will explode if placed in carbon tetrachloride.
Plutonium is a reactive metal. In moist air or moist argon, the metal oxidizes rapidly, producing a mixture of oxides and hydrides. If the metal is exposed long enough to a limited amount of water vapor, a powdery surface coating of PuO2 is formed. Also formed is plutonium hydride but an excess of water vapor forms only PuO2.
Plutonium shows enormous, and reversible, reaction rates with pure hydrogen, forming plutonium hydride. It also reacts readily with oxygen, forming PuO and PuO2 as well as intermediate oxides; plutonium oxide fills 40% more volume than plutonium metal. The metal reacts with the halogens, giving rise to compounds with the general formula PuX3 where X can be F, Cl, Br or I and PuF4 is also seen. The following oxyhalides are observed: PuOCl, PuOBr and PuOI. It will react with carbon to form PuC, nitrogen to form PuN and silicon to form PuSi2.
The organometallic chemistry of plutonium complexes is typical for organoactinide species; a characteristic example of an organoplutonium compound is plutonocene. Computational chemistry methods indicate an enhanced covalent character in the plutonium-ligand bonding.
Powders of plutonium, its hydrides and certain oxides like Pu2O3 are pyrophoric, meaning they can ignite spontaneously at ambient temperature and are therefore handled in an inert, dry atmosphere of nitrogen or argon. Bulk plutonium ignites only when heated above 400 °C. Pu2O3 spontaneously heats up and transforms into PuO2, which is stable in dry air, but reacts with water vapor when heated.
Crucibles used to contain plutonium need to be able to withstand its strongly reducing properties. Refractory metals such as tantalum and tungsten along with the more stable oxides, borides, carbides, nitrides and silicides can tolerate this. Melting in an electric arc furnace can be used to produce small ingots of the metal without the need for a crucible.
Cerium is used as a chemical simulant of plutonium for development of containment, extraction, and other technologies.
Electronic structure.
Plutonium is an element in which the 5f electrons are the transition border between delocalized and localized; it is therefore considered one of the most complex elements. The anomalous behavior of plutonium is caused by its electronic structure. The energy difference between the 6d and 5f subshells is very low. The size of the 5f shell is just enough to allow the electrons to form bonds within the lattice, on the very boundary between localized and bonding behavior. The proximity of energy levels leads to multiple low-energy electron configurations with near equal energy levels. This leads to competing 5fn7s2 and 5fn−16d17s2 configurations, which causes the complexity of its chemical behavior. The highly directional nature of 5f orbitals is responsible for directional covalent bonds in molecules and complexes of plutonium.
Alloys.
Plutonium can form alloys and intermediate compounds with most other metals. Exceptions include lithium, sodium, potassium, rubidium and caesium of the alkali metals; and magnesium, calcium, strontium, and barium of the alkaline earth metals; and europium and ytterbium of the rare earth metals. Partial exceptions include the refractory metals chromium, molybdenum, niobium, tantalum, and tungsten, which are soluble in liquid plutonium, but insoluble or only slightly soluble in solid plutonium. Gallium, aluminium, americium, scandium and cerium can stabilize δ-phase plutonium at room temperature. Silicon, indium, zinc and zirconium allow formation of a metastable δ state when rapidly cooled. High amounts of hafnium, holmium and thallium also allow some retention of the δ phase at room temperature. Neptunium is the only element that can stabilize the α phase at higher temperatures.
Plutonium alloys can be produced by adding a metal to molten plutonium. If the alloying metal is reductive enough, plutonium can be added in the form of oxides or halides. The δ phase plutonium–gallium alloy (PGA) and plutonium–aluminium alloy are produced by adding Pu(III) fluoride to molten gallium or aluminium, which has the advantage of avoiding dealing directly with the highly reactive plutonium metal.
Occurrence.
Trace amounts of plutonium-238, plutonium-239, plutonium-240, and plutonium-244 can be found in nature. Small traces of plutonium-239, a few parts per trillion, and its decay products are naturally found in some concentrated ores of uranium, such as the natural nuclear fission reactor in Oklo, Gabon. The ratio of plutonium-239 to uranium at the Cigar Lake Mine uranium deposit ranges from to . These trace amounts of 239Pu originate in the following fashion: on rare occasions, 238U undergoes spontaneous fission, and in the process, the nucleus emits one or two free neutrons with some kinetic energy. When one of these neutrons strikes the nucleus of another 238U atom, it is absorbed by the atom, which becomes 239U. With a relatively short half-life, 239U decays to 239Np, which decays into 239Pu. Finally, exceedingly small amounts of plutonium-238, attributed to the extremely rare double beta decay of uranium-238, have been found in natural uranium samples.
Due to its relatively long half-life of about 80 million years, it was suggested that plutonium-244 occurs naturally as a primordial nuclide, but early reports of its detection could not be confirmed. Based on its likely initial abundance in the Solar System, present experiments as of 2022 are likely about an order of magnitude away from detecting live primordial 244Pu. However, its long half-life ensured its circulation across the solar system before its extinction, and indeed, evidence of the spontaneous fission of extinct 244Pu has been found in meteorites. The former presence of 244Pu in the early Solar System has been confirmed, since it manifests itself today as an excess of its daughters, either 232Th (from the alpha decay pathway) or xenon isotopes (from its spontaneous fission). The latter are generally more useful, because the chemistries of thorium and plutonium are rather similar (both are predominantly tetravalent) and hence an excess of thorium would not be strong evidence that some of it was formed as a plutonium daughter. 244Pu has the longest half-life of all transuranic nuclides and is produced only in the r-process in supernovae and colliding neutron stars; when nuclei are ejected from these events at high speed to reach Earth, 244Pu alone among transuranic nuclides has a long enough half-life to survive the journey, and hence tiny traces of live interstellar 244Pu have been found in the deep sea floor. Because 240Pu also occurs in the decay chain of 244Pu, it must thus also be present in secular equilibrium, albeit in even tinier quantities.
Minute traces of plutonium are usually found in the human body due to the 550 atmospheric and underwater nuclear tests that have been carried out, and to a small number of major nuclear accidents. Most atmospheric and underwater nuclear testing was stopped by the Limited Test Ban Treaty in 1963, which, of the nuclear powers, was signed and ratified by the United States, the United Kingdom and the Soviet Union. France would continue atmospheric nuclear testing until 1974, and China until 1980. All subsequent nuclear testing was conducted underground.
History.
Discovery.
Enrico Fermi and a team of scientists at the University of Rome reported that they had discovered element 94 in 1934. Fermi called the element "hesperium" and mentioned it in his Nobel Lecture in 1938. The sample actually contained products of nuclear fission, primarily barium and krypton. Nuclear fission, discovered in Germany in 1938 by Otto Hahn and Fritz Strassmann, was unknown at the time.
Plutonium (specifically, plutonium-238) was first produced, isolated and then chemically identified between December 1940 and February 1941 by Glenn T. Seaborg, Edwin McMillan, Emilio Segrè, Joseph W. Kennedy, and Arthur Wahl by deuteron bombardment of uranium in the cyclotron at the Berkeley Radiation Laboratory at the University of California, Berkeley.
Neptunium-238 was created directly by the bombardment but decayed by beta emission with a half-life of a little over two days, which indicated the formation of element 94. The first bombardment took place on December 14, 1940, and the new element was first identified through oxidation on the night of February 23–24, 1941.
A paper documenting the discovery was prepared by the team and sent to the journal "Physical Review" in March 1941, but publication was delayed until a year after the end of World War II due to security concerns. At the Cavendish Laboratory in Cambridge, Egon Bretscher and Norman Feather realized that a slow neutron reactor fuelled with uranium would theoretically produce substantial amounts of plutonium-239 as a by-product. They calculated that element 94 would be fissile, and had the added advantage of being chemically different from uranium, and could easily be separated from it.
McMillan had recently named the first transuranic element neptunium after the planet Neptune, and suggested that element 94, being the next element in the series, be named for what was then considered the next planet, Pluto. Nicholas Kemmer of the Cambridge team independently proposed the same name, based on the same reasoning as the Berkeley team. Seaborg originally considered the name "plutium", but later thought that it did not sound as good as "plutonium". He chose the letters "Pu" as a joke, in reference to the interjection "P U" to indicate an especially disgusting smell, which passed without notice into the periodic table. Alternative names considered by Seaborg and others were "ultimium" or "extremium" because of the erroneous belief that they had found the last possible element on the periodic table.
Hahn and Strassmann, and independently Kurt Starke, were at this point also working on transuranic elements in Berlin. It is likely that Hahn and Strassmann were aware that plutonium-239 should be fissile. However, they did not have a strong neutron source. Element 93 was reported by Hahn and Strassmann, as well as Starke, in 1942. Hahn's group did not pursue element 94, likely because they were discouraged by McMillan and Abelson's lack of success in isolating it when they had first found element 93. However, since Hahn's group had access to the stronger cyclotron at Paris at this point, they would likely have been able to detect plutonium had they tried, albeit in tiny quantities (a few becquerels).
Early research.
The chemistry of plutonium was found to resemble uranium after a few months of initial study. Early research was continued at the secret Metallurgical Laboratory of the University of Chicago. On August 20, 1942, a trace quantity of this element was isolated and measured for the first time. About 50 micrograms of plutonium-239 combined with uranium and fission products was produced and only about 1 microgram was isolated. This procedure enabled chemists to determine the new element's atomic weight. On December 2, 1942, on a racket court under the west grandstand at the University of Chicago's Stagg Field, researchers headed by Enrico Fermi achieved the first self-sustaining chain reaction in a graphite and uranium pile known as CP-1. Using theoretical information garnered from the operation of CP-1, DuPont constructed an air-cooled experimental production reactor, known as X-10, and a pilot chemical separation facility at Oak Ridge. The separation facility, using methods developed by Glenn T. Seaborg and a team of researchers at the Met Lab, removed plutonium from uranium irradiated in the X-10 reactor. Information from CP-1 was also useful to Met Lab scientists designing the water-cooled plutonium production reactors for Hanford. Construction at the site began in mid-1943.
In November 1943 some plutonium trifluoride was reduced to create the first sample of plutonium metal: a few micrograms of metallic beads. Enough plutonium was produced to make it the first synthetically made element to be visible with the unaided eye.
The nuclear properties of plutonium-239 were also studied; researchers found that when it is hit by a neutron it breaks apart (fissions) by releasing more neutrons and energy. These neutrons can hit other atoms of plutonium-239 and so on in an exponentially fast chain reaction. This can result in an explosion large enough to destroy a city if enough of the isotope is concentrated to form a critical mass.
During the early stages of research, animals were used to study the effects of radioactive substances on health. These studies began in 1944 at the University of California at Berkeley's Radiation Laboratory and were conducted by Joseph G. Hamilton. Hamilton was looking to answer questions about how plutonium would vary in the body depending on exposure mode (oral ingestion, inhalation, absorption through skin), retention rates, and how plutonium would be fixed in tissues and distributed among the various organs. Hamilton started administering soluble microgram portions of plutonium-239 compounds to rats using different valence states and different methods of introducing the plutonium (oral, intravenous, etc.). Eventually, the lab at Chicago also conducted its own plutonium injection experiments using different animals such as mice, rabbits, fish, and even dogs. The results of the studies at Berkeley and Chicago showed that plutonium's physiological behavior differed significantly from that of radium. The most alarming result was that there was significant deposition of plutonium in the liver and in the "actively metabolizing" portion of bone. Furthermore, the rate of plutonium elimination in the excreta differed between species of animals by as much as a factor of five. Such variation made it extremely difficult to estimate what the rate would be for human beings.
Production during the Manhattan Project.
During World War II the U.S. government established the Manhattan Project, for developing an atomic bomb. The three primary research and production sites of the project were the plutonium production facility at what is now the Hanford Site; the uranium enrichment facilities at Oak Ridge, Tennessee; and the weapons research and design lab, now known as Los Alamos National Laboratory, LANL.
The first production reactor that made 239Pu was the X-10 Graphite Reactor. It went online in 1943 and was built at a facility in Oak Ridge that later became the Oak Ridge National Laboratory.
In January 1944, workers laid the foundations for the first chemical separation building, T Plant located in 200-West. Both the T Plant and its sister facility in 200-West, the U Plant, were completed by October. (U Plant was used only for training during the Manhattan Project.) The separation building in 200-East, B Plant, was completed in February 1945. The second facility planned for 200-East was canceled. Nicknamed Queen Marys by the workers who built them, the separation buildings were awesome canyon-like structures 800 feet long, 65 feet wide, and 80 feet high containing forty process pools. The interior had an eerie quality as operators behind seven feet of concrete shielding manipulated remote control equipment by looking through television monitors and periscopes from an upper gallery. Even with massive concrete lids on the process pools, precautions against radiation exposure were necessary and influenced all aspects of plant design.
On April 5, 1944, Emilio Segrè at Los Alamos received the first sample of reactor-produced plutonium from Oak Ridge. Within ten days, he discovered that reactor-bred plutonium had a higher concentration of 240Pu than cyclotron-produced plutonium. 240Pu has a high spontaneous fission rate, raising the overall background neutron level of the plutonium sample. The original gun-type plutonium weapon, code-named "Thin Man", had to be abandoned as a result—the increased number of spontaneous neutrons meant that nuclear pre-detonation (fizzle) was likely.
The entire plutonium weapon design effort at Los Alamos was soon changed to the more complicated implosion device, code-named "Fat Man". In an implosion bomb, plutonium is compressed to high density with explosive lenses—a technically more daunting task than the simple gun-type bomb, but necessary for a plutonium bomb. Uranium, by contrast, can be used with either method.
Construction of the Hanford B Reactor, the first industrial-sized nuclear reactor for the purposes of material production, was completed in March 1945. B Reactor produced the fissile material for the plutonium weapons used during World War II. B, D and F were the initial reactors built at Hanford, and six additional plutonium-producing reactors were built later at the site.
By the end of January 1945, the highly purified plutonium underwent further concentration in the completed chemical isolation building, where remaining impurities were removed successfully. Los Alamos received its first plutonium from Hanford on February 2. While it was still by no means clear that enough plutonium could be produced for use in bombs by the war's end, Hanford was by early 1945 in operation. Only two years had passed since Col. Franklin Matthias first set up his temporary headquarters on the banks of the Columbia River.
According to Kate Brown, the plutonium production plants at Hanford and Mayak in Russia, over a period of four decades, "both released more than 200 million curies of radioactive isotopes into the surrounding environment—twice the amount expelled in the Chernobyl disaster in each instance". Most of this radioactive contamination over the years was part of normal operations, but unforeseen accidents did occur, and plant management kept this secret as the pollution continued unabated.
In 2004, a safe was discovered during excavations of a burial trench at the Hanford nuclear site. Inside the safe were various items, including a large glass bottle containing a whitish slurry which was subsequently identified as the oldest sample of weapons-grade plutonium known to exist. Isotope analysis by Pacific Northwest National Laboratory indicated that the plutonium in the bottle was manufactured in the X-10 Graphite Reactor at Oak Ridge during 1944.
Trinity and Fat Man atomic bombs.
The first atomic bomb test, codenamed "Trinity" and detonated on July 16, 1945, near Alamogordo, New Mexico, used plutonium as its fissile material. The implosion design of "Gadget", as the Trinity device was codenamed, used conventional explosive lenses to compress a sphere of plutonium into a supercritical mass, which was simultaneously showered with neutrons from "Urchin", an initiator made of polonium and beryllium (neutron source: (α, n) reaction). Together, these ensured a runaway chain reaction and explosion. The weapon weighed over 4 tonnes, though it had just 6 kg of plutonium. About 20% of the plutonium in the Trinity weapon fissioned, releasing an energy equivalent to about 20,000 tons of TNT.
An identical design was used in "Fat Man", dropped on Nagasaki, Japan, on August 9, 1945, killing 35,000–40,000 people and destroying 68%–80% of war production at Nagasaki. Only after the announcement of the first atomic bombs was the existence and name of plutonium made known to the public by the Manhattan Project's Smyth Report.
Cold War use and waste.
Large stockpiles of weapons-grade plutonium were built up by both the Soviet Union and the United States during the Cold War. The U.S. reactors at Hanford and the Savannah River Site in South Carolina produced 103 tonnes, and an estimated 170 tonnes of military-grade plutonium was produced in the USSR. Each year about 20 tonnes of the element is still produced as a by-product of the nuclear power industry. As much as 1000 tonnes of plutonium may be in storage with more than 200 tonnes of that either inside or extracted from nuclear weapons.
SIPRI estimated the world plutonium stockpile in 2007 as about 500 tonnes, divided equally between weapon and civilian stocks.
Radioactive contamination at the Rocky Flats Plant primarily resulted from two major plutonium fires in 1957 and 1969. Much lower concentrations of radioactive isotopes were released throughout the operational life of the plant from 1952 to 1992. Prevailing winds from the plant carried airborne contamination south and east, into populated areas northwest of Denver. The contamination of the Denver area by plutonium from the fires and other sources was not publicly reported until the 1970s. According to a 1972 study coauthored by Edward Martell, "In the more densely populated areas of Denver, the Pu contamination level in surface soils is several times fallout", and the plutonium contamination "just east of the Rocky Flats plant ranges up to hundreds of times that from nuclear tests". As noted by Carl Johnson in Ambio, "Exposures of a large population in the Denver area to plutonium and other radionuclides in the exhaust plumes from the plant date back to 1953." Weapons production at the Rocky Flats plant was halted after a combined FBI and EPA raid in 1989 and years of protests. The plant has since been shut down, with its buildings demolished and completely removed from the site.
In the U.S., some plutonium extracted from dismantled nuclear weapons is melted to form glass logs of plutonium oxide that weigh two tonnes. The glass is made of borosilicates mixed with cadmium and gadolinium. These logs are planned to be encased in stainless steel and stored as much as underground in bore holes that will be back-filled with concrete. The U.S. planned to store plutonium in this way at the Yucca Mountain nuclear waste repository, which is about north-east of Las Vegas, Nevada.
On March 5, 2009, Energy Secretary Steven Chu told a Senate hearing "the Yucca Mountain site no longer was viewed as an option for storing reactor waste". Starting in 1999, military-generated nuclear waste is being entombed at the Waste Isolation Pilot Plant in New Mexico.
In a Presidential Memorandum dated January 29, 2010, President Obama established the Blue Ribbon Commission on America's Nuclear Future. In their final report the Commission put forth recommendations for developing a comprehensive strategy to pursue, including:
"Recommendation #1: The United States should undertake an integrated nuclear waste management program that leads to the timely development of one or more permanent deep geological facilities for the safe disposal of spent fuel and high-level nuclear waste".
Medical experimentation.
During and after the end of World War II, scientists working on the Manhattan Project and other nuclear weapons research projects conducted studies of the effects of plutonium on laboratory animals and human subjects. Animal studies found that a few milligrams of plutonium per kg of tissue is a lethal dose.
For human subjects, this involved injecting solutions typically containing 5 micrograms (μg) of plutonium into hospital patients thought to be either terminally ill, or to have a life expectancy of less than ten years either due to age or chronic disease. This was reduced to 1 μg in July 1945 after animal studies found that the way plutonium distributes itself in bones is more dangerous than radium. Most of the subjects, Eileen Welsome says, were poor, powerless, and sick.
In 1945–47, eighteen human test subjects were injected with plutonium without informed consent. The tests were used to create diagnostic tools to determine the uptake of plutonium in the body in order to develop safety standards for working with plutonium. Ebb Cade was an unwilling participant in medical experiments that involved injection of 4.7 μg of plutonium on 10 April 1945 at Oak Ridge, Tennessee. This experiment was under the supervision of Harold Hodge. Other experiments directed by the United States Atomic Energy Commission and the Manhattan Project continued into the 1970s. "The Plutonium Files" chronicles the lives of the subjects of the secret program by naming each person involved and discussing the ethical and medical research conducted in secret by the scientists and doctors. The episode is now considered to be a serious breach of medical ethics and of the Hippocratic Oath.
The government covered up most of these actions until 1993, when President Bill Clinton ordered a change of policy and federal agencies then made available relevant records. The resulting investigation was undertaken by the president's Advisory Committee on Human Radiation Experiments, and it uncovered much of the material about plutonium research on humans. The committee issued a controversial 1995 report which said that "wrongs were committed" but it did not condemn those who perpetrated them.
Applications.
Explosives.
239Pu is a key fissile component in nuclear weapons, due to its ease of fission and availability. Encasing the bomb's plutonium pit in a tamper (a layer of dense material) decreases the critical mass by reflecting escaping neutrons back into the plutonium core. This reduces the critical mass from 16 kg to 10 kg, which is a sphere with a diameter of about . This critical mass is about a third of that for uranium-235.
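The size of such a core can be estimated from mass and density alone. The sketch below assumes the alpha-phase density of 19.86 g/cm3 mentioned earlier; it is only a geometry check, not a criticality calculation.
<syntaxhighlight lang="python">
import math

def sphere_diameter_cm(mass_kg, density_g_cm3=19.86):
    # Diameter of a solid sphere of the given mass at alpha-phase density.
    volume_cm3 = mass_kg * 1000 / density_g_cm3
    radius_cm = (3 * volume_cm3 / (4 * math.pi)) ** (1 / 3)
    return 2 * radius_cm

print(round(sphere_diameter_cm(10), 1), "cm")   # ~9.9 cm for 10 kg
print(round(sphere_diameter_cm(16), 1), "cm")   # ~11.5 cm for 16 kg
</syntaxhighlight>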
The Fat Man plutonium bombs used explosive compression of plutonium to obtain significantly higher density than normal, combined with a central neutron source to begin the reaction and increase efficiency. Thus only 6 kg of plutonium was needed for an explosive yield equivalent to 20 kilotons of TNT. Hypothetically, as little as 4 kg of plutonium—and maybe even less—could be used to make a single atomic bomb using very sophisticated assembly designs.
Mixed oxide fuel.
Spent nuclear fuel from normal light water reactors contains plutonium, but it is a mixture of plutonium-242, 240, 239 and 238. The mixture is not sufficiently enriched for efficient nuclear weapons, but can be used once as MOX fuel. Accidental neutron capture causes the amount of plutonium-242 and 240 to grow each time the plutonium is irradiated in a reactor with low-speed "thermal" neutrons, so that after the second cycle, the plutonium can only be consumed by fast neutron reactors. If fast neutron reactors are not available (the normal case), excess plutonium is usually discarded, and forms one of the longest-lived components of nuclear waste. The desire to consume this plutonium and other transuranic fuels and reduce the radiotoxicity of the waste is the usual reason nuclear engineers give to make fast neutron reactors.
The most common chemical process, PUREX ("P"lutonium–"UR"anium "EX"traction), reprocesses spent nuclear fuel to extract plutonium and uranium which can be used to form a mixed oxide (MOX) fuel for reuse in nuclear reactors. Weapons-grade plutonium can be added to the fuel mix. MOX fuel is used in light water reactors and consists of 60 kg of plutonium per tonne of fuel; after four years, three-quarters of the plutonium is burned (turned into other elements). MOX fuel has been in use since the 1980s, and is widely used in Europe. Breeder reactors are specifically designed to create more fissionable material than they consume.
MOX fuel improves total burnup. A fuel rod is reprocessed after three years of use to remove waste products, which by then account for 3% of the total weight of the rods. Any uranium or plutonium isotopes produced during those three years are left and the rod goes back into production. The presence of up to 1% gallium per mass in weapons-grade plutonium alloy has the potential to interfere with long-term operation of a light water reactor.
Plutonium recovered from spent reactor fuel poses little proliferation hazard, because of excessive contamination with non-fissile plutonium-240 and plutonium-242. Separation of the isotopes is not feasible. A dedicated reactor operating on very low burnup (hence minimal exposure of newly formed plutonium-239 to additional neutrons which causes it to be transformed to heavier isotopes of plutonium) is generally required to produce material suitable for use in efficient nuclear weapons. While "weapons-grade" plutonium is defined to contain at least 92% plutonium-239 (of the total plutonium), the United States has managed to detonate an under-20 kt device using plutonium believed to contain only about 85% plutonium-239, so-called "fuel-grade" plutonium. The "reactor-grade" plutonium produced by a regular LWR burnup cycle typically contains less than 60% Pu-239, with up to 30% parasitic Pu-240/Pu-242, and 10–15% fissile Pu-241. It is unknown if a device using plutonium obtained from reprocessed civil nuclear waste can be detonated; however, such a device could hypothetically fizzle and spread radioactive materials over a large urban area. The IAEA conservatively classifies plutonium of all isotopic vectors as "direct-use" material, that is, "nuclear material that can be used for the manufacture of nuclear explosives components without transmutation or further enrichment".
Power and heat source.
Plutonium-238 has a half-life of 87.74 years. It emits a large amount of thermal energy with low levels of both gamma rays/photons and neutrons. Being an alpha emitter, it combines high energy radiation with low penetration and thereby requires minimal shielding. A sheet of paper can be used to shield against the alpha particles from 238Pu. One kilogram of the isotope generates about 570 watts of heat.
These characteristics make it well-suited for electrical power generation for devices that must function without direct maintenance for timescales approximating a human lifetime. It is therefore used in radioisotope thermoelectric generators and radioisotope heater units such as those in the "Cassini", "Voyager", "Galileo" and "New Horizons" space probes, and the "Curiosity" and "Perseverance" (Mars 2020) Mars rovers.
The twin "Voyager" spacecraft were launched in 1977, each containing a 500 watt plutonium power source. Over 30 years later, each source still produces about 300 watts which allows limited operation of each spacecraft. An earlier version of the same technology powered five Apollo Lunar Surface Experiment Packages, starting with Apollo 12 in 1969.
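Part of that decline is simple radioactive decay of the 238Pu fuel, which the sketch below models with an assumed half-life of 87.7 years. Decay alone would leave roughly 79% of the original heat output after 30 years, so the steeper drop in usable electrical power also reflects gradual degradation of the thermoelectric converters, which this simple model ignores.
<syntaxhighlight lang="python">
HALF_LIFE_Y = 87.7   # assumed 238Pu half-life in years

def remaining_heat_fraction(years):
    # Fraction of the original decay heat left after the given time.
    return 0.5 ** (years / HALF_LIFE_Y)

print(f"{remaining_heat_fraction(30):.2f}")    # ~0.79 after 30 years
print(f"{remaining_heat_fraction(87.7):.2f}")  # 0.50 after one half-life
</syntaxhighlight>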
238Pu has also been used successfully to power artificial heart pacemakers, to reduce the risk of repeated surgery. It has been largely replaced by lithium-based primary cells, but as of 2003 there were somewhere between 50 and 100 plutonium-powered pacemakers still implanted and functioning in living patients in the United States. By the end of 2007, the number of plutonium-powered pacemakers was reported to be down to just nine. 238Pu was studied as a way to provide supplemental heat to scuba diving. 238Pu mixed with beryllium is used to generate neutrons for research purposes.
Precautions.
Toxicity.
There are two aspects to the harmful effects of plutonium: radioactivity and heavy metal poisoning. Plutonium compounds are radioactive and accumulate in bone marrow. Contamination by plutonium oxide has resulted from nuclear disasters and radioactive incidents, including military nuclear accidents where nuclear weapons have burned. Studies of the effects of these smaller releases, as well as of the widespread radiation poisoning sickness and death following the atomic bombings of Hiroshima and Nagasaki, have provided considerable information regarding the dangers, symptoms and prognosis of radiation poisoning, which in the case of the Japanese survivors was largely unrelated to direct plutonium exposure.
The decay of plutonium releases three types of ionizing radiation: alpha (α), beta (β), and gamma (γ). Either acute or longer-term exposure carries a danger of serious health outcomes including radiation sickness, genetic damage, cancer, and death. The danger increases with the amount of exposure. α-radiation can travel only a short distance and cannot travel through the outer, dead layer of human skin. β-radiation can penetrate human skin, but cannot go all the way through the body. γ-radiation can go all the way through the body.
Even though α radiation cannot penetrate the skin, ingested or inhaled plutonium does irradiate internal organs. α-particles generated by inhaled plutonium have been found to cause lung cancer in a cohort of European nuclear workers. The skeleton, where plutonium accumulates, and the liver, where it collects and becomes concentrated, are at risk. Plutonium is not absorbed into the body efficiently when ingested; only 0.04% of plutonium oxide is absorbed after ingestion. Plutonium absorbed by the body is excreted very slowly, with a biological half-life of 200 years. Plutonium passes only slowly through cell membranes and intestinal boundaries, so absorption by ingestion and incorporation into bone structure proceeds very slowly. Donald Mastick accidentally swallowed a small amount of plutonium(III) chloride; it was detectable for the next thirty years of his life, but he appeared to suffer no ill effects.
Plutonium is more dangerous if inhaled than if ingested. The risk of lung cancer increases once the total radiation dose equivalent of inhaled plutonium exceeds 400 mSv. The U.S. Department of Energy estimates that the lifetime cancer risk from inhaling 5,000 plutonium particles, each about 3 μm wide, is 1% over the background U.S. average. Ingestion or inhalation of large amounts may cause acute radiation poisoning and possibly death. However, no human being is known to have died because of inhaling or ingesting plutonium, and many people have measurable amounts of plutonium in their bodies.
The "hot particle" theory in which a particle of plutonium dust irradiates a localized spot of lung tissue is not supported by mainstream research—such particles are more mobile than originally thought and toxicity is not measurably increased due to particulate form. When inhaled, plutonium can pass into the bloodstream. Once in the bloodstream, plutonium moves throughout the body and into the bones, liver, or other body organs. Plutonium that reaches body organs generally stays in the body for decades and continues to expose the surrounding tissue to radiation and thus may cause cancer.
A commonly cited quote by Ralph Nader states that a pound of plutonium dust spread into the atmosphere would be enough to kill 8 billion people. This was disputed by Bernard Cohen, an opponent of the generally accepted linear no-threshold model of radiation toxicity. Cohen estimated that one pound of plutonium could kill no more than 2 million people by inhalation, so that the toxicity of plutonium is roughly equivalent to that of nerve gas.
Several populations of people who have been exposed to plutonium dust (e.g. people living down-wind of Nevada test sites, Nagasaki survivors, nuclear facility workers, and "terminally ill" patients injected with Pu in 1945–46 to study Pu metabolism) have been carefully followed and analyzed. Cohen found these studies inconsistent with high estimates of plutonium toxicity, citing cases such as Albert Stevens who survived into old age after being injected with plutonium. "There were about 25 workers from Los Alamos National Laboratory who inhaled a considerable amount of plutonium dust during 1940s; according to the hot-particle theory, each of them has a 99.5% chance of being dead from lung cancer by now, but there has not been a single lung cancer among them."
Marine toxicity.
Plutonium is known to enter the marine environment by dumping of waste or accidental leakage from nuclear plants. Though the highest concentrations of plutonium in marine environments are found in sediments, the complex biogeochemical cycle of plutonium means it is also found in all other compartments. For example, various zooplankton species that aid in the nutrient cycle will consume the element on a daily basis. The complete excretion of ingested plutonium by zooplankton makes their defecation an extremely important mechanism in the scavenging of plutonium from surface waters. However, those zooplankton that succumb to predation by larger organisms may become a transmission vehicle of plutonium to fish.
In addition to consumption, fish can also be exposed to plutonium by their distribution around the globe. One study investigated the effects of transuranium elements (plutonium-238, plutonium-239, plutonium-240) on various fish living in the Chernobyl Exclusion Zone (CEZ). Results showed that a proportion of female perch in the CEZ displayed either a failure or delay in maturation of the gonads. Similar studies found large accumulations of plutonium in the respiratory and digestive organs of cod, flounder and herring.
Plutonium toxicity is just as detrimental to fish larvae in nuclear waste areas. Undeveloped eggs are at higher risk than developed adult fish exposed to the element in these waste areas. Oak Ridge National Laboratory showed that carp and minnow embryos raised in solutions containing plutonium did not hatch; eggs that did hatch displayed significant abnormalities when compared to control embryos. The study indicated that higher concentrations of plutonium cause harm to marine fauna exposed to the element.
Criticality potential.
Care must be taken to avoid the accumulation of amounts of plutonium which approach critical mass, particularly because plutonium's critical mass is only a third of that of uranium-235. A critical mass of plutonium emits lethal amounts of neutrons and gamma rays. Plutonium in solution is more likely to form a critical mass than the solid form due to moderation by the hydrogen in water.
Criticality accidents have occurred, sometimes killing people. Careless handling of tungsten carbide bricks around a 6.2 kg plutonium sphere resulted in a fatal dose of radiation at Los Alamos on August 21, 1945, when scientist Harry Daghlian received a dose estimated at 5.1 sievert (510 rem) and died 25 days later. Nine months later, another Los Alamos scientist, Louis Slotin, died from a similar accident involving a beryllium reflector and the same plutonium core (the "demon core") that had previously killed Daghlian.
In December 1958, during a process of purifying plutonium at Los Alamos, a critical mass formed in a mixing vessel, which killed chemical operator Cecil Kelley. Other nuclear accidents have occurred in the Soviet Union, Japan, the United States, and many other countries.
Flammability.
Metallic plutonium is a fire hazard, especially if finely divided. In a moist environment, plutonium forms hydrides on its surface, which are pyrophoric and may ignite in air at room temperature. Plutonium expands up to 70% in volume as it oxidizes and thus may break its container. The radioactivity of the burning material is another hazard. Magnesium oxide sand is probably the most effective material for extinguishing a plutonium fire. It cools the burning material, acting as a heat sink, and also blocks off oxygen. Special precautions are necessary to store or handle plutonium in any form; generally a dry inert gas atmosphere is required.
Transportation.
Land and sea.
The usual transport of plutonium is through the more stable plutonium oxide in a sealed package. A typical transport consists of one truck carrying one protected shipping container, holding a number of packages with a total weight varying from 80 to 200 kg of plutonium oxide. A sea shipment may consist of several containers, each holding a sealed package. The U.S. Nuclear Regulatory Commission dictates that it must be solid instead of powder if the contents surpass 0.74 TBq (20 curies) of radioactivity. In 2016, the ships "Pacific Egret" and "Pacific Heron" of Pacific Nuclear Transport Ltd. transported 331 kg (730 lbs) of plutonium to a United States government facility in Savannah River, South Carolina.
Air.
U.S. Government air transport regulations permit the transport of plutonium by air, subject to restrictions on other dangerous materials carried on the same flight, packaging requirements, and stowage in the rearmost part of the aircraft.
In 2012, media revealed that plutonium has been flown out of Norway on commercial passenger airlines roughly every other year, including once in 2011. Regulations permit a plane to transport 15 grams of fissionable material. According to a senior advisor ("seniorrådgiver") at Statens strålevern, such plutonium transport poses no problems.
Notes.
Footnotes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{align}\n\\ce{ {^{238}_{92}U} + {^{2}_{1}H} ->} &\\ce{ {^{238}_{93}Np} + 2^{1}_{0}n} \\\\\n&\\ce{^{238}_{93}Np ->[\\beta^-] [2.117 \\ \\ce d] {^{238}_{94}Pu} }\n\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=7987684 |
7988 | Dual space | In mathematics, vector space of linear forms
In mathematics, any vector space "formula_0" has a corresponding dual vector space (or just dual space for short) consisting of all linear forms on "formula_1" together with the vector space structure of pointwise addition and scalar multiplication by constants.
The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the algebraic dual space.
When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the continuous dual space.
Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces.
When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis.
Early terms for "dual" include "polarer Raum" [Hahn 1927], "espace conjugué", "adjoint space" [Alaoglu 1940], and "transponierter Raum" [Schauder 1930] and [Banach 1932]. The term "dual" is due to Bourbaki 1938.
Algebraic dual space.
Given any vector space formula_0 over a field formula_2, the (algebraic) dual space formula_3 (alternatively denoted by formula_4 or formula_5) is defined as the set of all linear maps "formula_7" (linear functionals). Since linear maps are vector space homomorphisms, the dual space may be denoted formula_8.
The dual space formula_6 itself becomes a vector space over "formula_2" when equipped with an addition and scalar multiplication satisfying:
formula_9
for all formula_10, "formula_11", and formula_12.
Elements of the algebraic dual space formula_6 are sometimes called covectors, one-forms, or linear forms.
The pairing of a functional "formula_13" in the dual space formula_6 and an element "formula_14" of "formula_0" is sometimes denoted by a bracket: "formula_15"
or "formula_16". This pairing defines a nondegenerate bilinear mapping formula_17 called the natural pairing.
Finite-dimensional case.
If formula_0 is finite-dimensional, then formula_6 has the same dimension as formula_0. Given a basis formula_18 in formula_0, it is possible to construct a specific basis in formula_6, called the dual basis. This dual basis is a set formula_19 of linear functionals on formula_0, defined by the relation
formula_20
for any choice of coefficients formula_21. In particular, letting in turn each one of those coefficients be equal to one and the other coefficients zero, gives the system of equations
formula_22
where formula_23 is the Kronecker delta symbol. This property is referred to as the "bi-orthogonality property".
For example, if formula_0 is formula_24, let its basis be chosen as formula_25. The basis vectors are not orthogonal to each other. Then, formula_26 and formula_27 are one-forms (functions that map a vector to a scalar) such that formula_28, formula_29, formula_30, and formula_31. (Note: The superscript here is the index, not an exponent.) This system of equations can be expressed using matrix notation as
formula_32
Solving for the unknown values in the first matrix shows the dual basis to be formula_33. Because formula_26 and formula_27 are functionals, they can be rewritten as formula_34 and formula_35.
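This computation can be checked numerically. The following is a minimal Python sketch, assuming the NumPy library is available (the variable names are purely illustrative): it stores the basis vectors of the example as the columns of a matrix and recovers the dual basis as the rows of that matrix's inverse, which is exactly the bi-orthogonality condition above.

```python
import numpy as np

# Basis of R^2 from the example above, stored as the columns of E:
# e_1 = (1/2, 1/2), e_2 = (0, 1).
E = np.array([[0.5, 0.0],
              [0.5, 1.0]])

# Bi-orthogonality e^i(e_j) = delta^i_j means the dual-basis covectors
# are the rows of the inverse of E.
dual_rows = np.linalg.inv(E)
print(dual_rows)                              # [[ 2.  0.]
                                              #  [-1.  1.]]  i.e. e^1 = (2, 0), e^2 = (-1, 1)

# Check the defining relation: each e^i applied to each e_j gives the identity.
print(np.allclose(dual_rows @ E, np.eye(2)))  # True
```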
In general, when formula_0 is formula_36, if formula_37 is a matrix whose columns are the basis vectors and formula_38 is a matrix whose columns are the dual basis vectors, then
formula_39
where formula_40 is the identity matrix of order formula_41. The biorthogonality property of these two basis sets allows any point formula_42 to be represented as
formula_43
even when the basis vectors are not orthogonal to each other. Strictly speaking, the above statement only makes sense once the inner product formula_44 and the corresponding duality pairing are introduced, as described below.
In particular, formula_36 can be interpreted as the space of columns of formula_41 real numbers; its dual space is typically written as the space of "rows" of formula_41 real numbers. Such a row acts on formula_36 as a linear functional by ordinary matrix multiplication. This is because a functional maps every formula_41-vector formula_14 into a real number formula_45. Then, seeing this functional as a matrix formula_46, and formula_14 as an formula_47 matrix, and formula_45 a formula_48 matrix (trivially, a real number) respectively, if formula_49 then, for dimensional reasons, formula_46 must be a formula_50 matrix; that is, formula_46 must be a row vector.
If formula_0 consists of the space of geometrical vectors in the plane, then the level curves of an element of formula_6 form a family of parallel lines in formula_0, because the range is 1-dimensional, so that every point in the range is a multiple of any one nonzero element.
So an element of formula_6 can be intuitively thought of as a particular family of parallel lines covering the plane. To compute the value of a functional on a given vector, it suffices to determine which of the lines the vector lies on. Informally, this "counts" how many lines the vector crosses.
More generally, if formula_0 is a vector space of any dimension, then the level sets of a linear functional in formula_6 are parallel hyperplanes in formula_0, and the action of a linear functional on a vector can be visualized in terms of these hyperplanes.
Infinite-dimensional case.
If formula_0 is not finite-dimensional but has a basis formula_52 indexed by an infinite set formula_53, then the same construction as in the finite-dimensional case yields linearly independent elements formula_54 (formula_55) of the dual space, but they will not form a basis.
For instance, consider the space formula_56, whose elements are those sequences of real numbers that contain only finitely many non-zero entries, which has a basis indexed by the natural numbers formula_57. For formula_58, formula_59 is the sequence consisting of all zeroes except in the formula_60-th position, which is 1.
The dual space of formula_56 is (isomorphic to) formula_51, the space of "all" sequences of real numbers: each real sequence formula_61 defines a function where the element formula_62 of formula_56 is sent to the number
formula_63
which is a finite sum because there are only finitely many nonzero formula_64. The dimension of formula_56 is countably infinite, whereas formula_51 does not have a countable basis.
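Concretely, evaluating such a functional is a finite sum over the support of the finitely supported sequence. A minimal Python sketch, in which the particular functional and sequence are arbitrary illustrations:

```python
# A functional on the space of finitely supported sequences is given by an
# arbitrary sequence (a_n); here a_n = 1/(n+1) serves purely as an illustration.
def a(n):
    return 1.0 / (n + 1)

# A finitely supported sequence x, stored as {index: value}; all other entries are 0.
x = {0: 2.0, 3: -1.0, 10: 5.0}

# The pairing sum_n a_n * x_n is a finite sum over the support of x.
value = sum(a(n) * x_n for n, x_n in x.items())
print(value)   # 2*1 - 1*(1/4) + 5*(1/11) = 2.2045...
```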
This observation generalizes to any infinite-dimensional vector space formula_0 over any field formula_2: a choice of basis formula_65 identifies formula_0 with the space formula_66 of functions formula_67 such that formula_68 is nonzero for only finitely many formula_55, where such a function formula_69 is identified with the vector
formula_70
in formula_0 (the sum is finite by the assumption on formula_69, and any formula_71 may be written uniquely in this way by the definition of the basis).
The dual space of formula_0 may then be identified with the space formula_72 of "all" functions from formula_53 to formula_2: a linear functional formula_73 on formula_0 is uniquely determined by the values formula_74 it takes on the basis of formula_0, and any function formula_75 (with formula_76) defines a linear functional formula_73 on formula_0 by
formula_77
Again, the sum is finite because formula_78 is nonzero for only finitely many formula_79.
The set formula_66 may be identified (essentially by definition) with the direct sum of infinitely many copies of formula_2 (viewed as a 1-dimensional vector space over itself) indexed by formula_53, i.e. there are linear isomorphisms
formula_80
On the other hand, formula_72 is (again by definition), the direct product of infinitely many copies of formula_2 indexed by formula_53, and so the identification
formula_81
is a special case of a general result relating direct sums (of modules) to direct products.
If a vector space is not finite-dimensional, then its (algebraic) dual space is "always" of larger dimension (as a cardinal number) than the original vector space. This is in contrast to the case of the continuous dual space, discussed below, which may be isomorphic to the original vector space even if the latter is infinite-dimensional.
The proof of this inequality between dimensions results from the following.
If formula_0 is an infinite-dimensional formula_2-vector space, the arithmetical properties of cardinal numbers imply that
formula_82
where cardinalities are denoted as absolute values. To prove that formula_83 it suffices to prove that formula_84 which can be done with an argument similar to Cantor's diagonal argument. The exact dimension of the dual is given by the Erdős–Kaplansky theorem.
Bilinear products and dual spaces.
If "V" is finite-dimensional, then "V" is isomorphic to "V"∗. But there is in general no natural isomorphism between these two spaces. Any bilinear form ⟨·,·⟩ on "V" gives a mapping of "V" into its dual space via
formula_85
where the right hand side is defined as the functional on "V" taking each "w" ∈ "V" to ⟨"v", "w"⟩. In other words, the bilinear form determines a linear mapping
formula_86
defined by
formula_87
If the bilinear form is nondegenerate, then this is an isomorphism onto a subspace of "V"∗.
If "V" is finite-dimensional, then this is an isomorphism onto all of "V"∗. Conversely, any isomorphism formula_88 from "V" to a subspace of "V"∗ (resp., all of "V"∗ if "V" is finite dimensional) defines a unique nondegenerate bilinear form formula_89 on "V" by
formula_90
Thus there is a one-to-one correspondence between isomorphisms of "V" to a subspace of (resp., all of) "V"∗ and nondegenerate bilinear forms on "V".
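In coordinates, a bilinear form on a finite-dimensional space is described by a Gram matrix, and the induced map into the dual sends a column vector to a row vector; the map is an isomorphism onto the dual exactly when the Gram matrix is invertible. A minimal sketch assuming NumPy, with an arbitrary illustrative Gram matrix:

```python
import numpy as np

# Gram matrix of a bilinear form on R^3:  <v, w> = v^T G w.
G = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 0.0],
              [0.0, 0.0, 1.0]])

def Phi(v):
    # The induced map V -> V*: v goes to the functional w |-> <v, w>,
    # represented as the row vector v^T G.
    return v @ G

v = np.array([1.0, -2.0, 0.5])
w = np.array([0.0, 1.0, 4.0])

print(np.isclose(Phi(v) @ w, v @ G @ w))       # True: [Phi(v)](w) = <v, w>
print(np.linalg.matrix_rank(G) == G.shape[0])  # True: G invertible, so Phi is onto V*
```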
If the vector space "V" is over the complex field, then sometimes it is more natural to consider sesquilinear forms instead of bilinear forms.
In that case, a given sesquilinear form ⟨·,·⟩ determines an isomorphism of "V" with the complex conjugate of the dual space
formula_91
The conjugate of the dual space formula_92 can be identified with the set of all additive complex-valued functionals "f" : "V" → C such that
formula_93
Injection into the double-dual.
There is a natural homomorphism formula_94 from formula_0 into the double dual formula_95, defined by formula_96 for all formula_97. In other words, if formula_98 is the evaluation map defined by formula_99, then formula_100 is defined as the map formula_101. This map formula_94 is always injective; it is an isomorphism if and only if formula_0 is finite-dimensional.
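In the finite-dimensional picture where vectors are columns and functionals are rows, the evaluation map can be written down directly. A minimal Python sketch assuming NumPy (the particular vectors are illustrative only):

```python
import numpy as np

v = np.array([3.0, -1.0, 2.0])        # a vector in R^3

def Psi(v):
    # Psi(v) is an element of the double dual: it takes a functional
    # (represented here as a row vector phi) and returns the scalar phi(v).
    return lambda phi: phi @ v

ev_v = Psi(v)

phi = np.array([1.0, 0.0, 4.0])       # a functional on R^3, as a row vector
print(ev_v(phi))                      # 1*3 + 0*(-1) + 4*2 = 11.0

# Applying ev_v to the dual basis (the rows of the identity matrix) recovers
# the coordinates of v, illustrating why Psi is injective.
print([float(ev_v(e)) for e in np.eye(3)])   # [3.0, -1.0, 2.0]
```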
Indeed, the isomorphism of a finite-dimensional vector space with its double dual is an archetypal example of a natural isomorphism.
Infinite-dimensional Hilbert spaces are not isomorphic to their algebraic double duals, but instead to their continuous double duals.
Transpose of a linear map.
If "f" : "V" → "W" is a linear map, then the "transpose" (or "dual") "f"∗ : "W"∗ → "V"∗ is defined by
formula_102
for every "formula_103". The resulting functional "formula_104" in "formula_6" is called the "pullback" of "formula_13" along "formula_69".
The following identity holds for all "formula_103" and "formula_105":
formula_106
where the bracket [·,·] on the left is the natural pairing of "V" with its dual space, and that on the right is the natural pairing of "W" with its dual. This identity characterizes the transpose, and is formally similar to the definition of the adjoint.
The assignment "f" ↦ "f"∗ produces an injective linear map between the space of linear operators from "V" to "W" and the space of linear operators from "W"∗ to "V"∗; this homomorphism is an isomorphism if and only if "W" is finite-dimensional.
If "V" = "W" then the space of linear maps is actually an algebra under composition of maps, and the assignment is then an antihomomorphism of algebras, meaning that ("fg")∗ = "g"∗"f"∗.
In the language of category theory, taking the dual of vector spaces and the transpose of linear maps is therefore a contravariant functor from the category of vector spaces over "F" to itself.
It is possible to identify ("f"∗)∗ with "f" using the natural injection into the double dual.
If the linear map "f" is represented by the matrix "A" with respect to two bases of "V" and "W", then "f"∗ is represented by the transpose matrix "A"T with respect to the dual bases of "W"∗ and "V"∗, hence the name.
Alternatively, as "f" is represented by "A" acting on the left on column vectors, "f"∗ is represented by the same matrix acting on the right on row vectors.
These points of view are related by the canonical inner product on R"n", which identifies the space of column vectors with the dual space of row vectors.
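The defining identity of the transpose and its description by the transposed matrix can be verified numerically. A minimal sketch assuming NumPy; the matrix and vectors below are arbitrary illustrations:

```python
import numpy as np

# f : R^3 -> R^2 represented in the standard bases by a 2x3 matrix A.
A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])

phi = np.array([4.0, -1.0])       # a functional on R^2, as a row vector
v   = np.array([1.0, 1.0, 2.0])   # a vector in R^3, as a column vector

# The pullback f*(phi) = phi o f is the row vector phi A; equivalently, its
# coordinates in the dual basis are obtained by applying A^T to those of phi.
pullback = phi @ A

print(np.isclose(pullback @ v, phi @ (A @ v)))   # True: [f*(phi)](v) = phi(f(v))
print(np.allclose(pullback, A.T @ phi))          # True: same coordinates via A^T
```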
Quotient spaces and annihilators.
Let formula_107 be a subset of formula_0.
The annihilator of formula_107 in formula_6, denoted here formula_108, is the collection of linear functionals formula_109 such that formula_110 for all formula_111.
That is, formula_108 consists of all linear functionals formula_112 such that the restriction to formula_107 vanishes: formula_113.
In a finite-dimensional inner product space, the annihilator of a subspace corresponds to its orthogonal complement under the isomorphism with the dual space induced by the inner product.
The annihilator of a subset is itself a vector space.
The annihilator of the zero vector is the whole dual space: formula_114, and the annihilator of the whole space is just the zero covector: formula_115.
Furthermore, the assignment of an annihilator to a subset of formula_0 reverses inclusions, so that if formula_116, then
formula_117
If formula_53 and formula_118 are two subsets of formula_0 then
formula_119
If formula_120 is any family of subsets of formula_0 indexed by formula_60 belonging to some index set formula_121, then
formula_122
In particular if formula_53 and formula_118 are subspaces of formula_0 then
formula_123
and
formula_124
If formula_0 is finite-dimensional and formula_125 is a vector subspace, then
formula_126
after identifying formula_125 with its image in the second dual space under the double duality isomorphism formula_127. In particular, forming the annihilator is a Galois connection on the lattice of subsets of a finite-dimensional vector space.
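In coordinates, the annihilator of the span of given vectors is the null space of the matrix having those vectors as rows; this also exhibits the fact that the dimensions of a subspace and of its annihilator add up to the dimension of the whole space. A minimal sketch assuming NumPy (the subspace chosen below is an arbitrary illustration):

```python
import numpy as np

# W = span{w1, w2} inside V = R^4; the rows of the matrix are w1 and w2.
W = np.array([[1.0, 0.0, 2.0, 0.0],
              [0.0, 1.0, 1.0, 1.0]])

# A functional (row vector) r lies in the annihilator of W iff r.w = 0 for
# every w in W, i.e. iff W @ r = 0.  A basis of that null space via the SVD:
_, s, Vt = np.linalg.svd(W)
rank = int(np.sum(s > 1e-12))
annihilator_basis = Vt[rank:]              # rows spanning the annihilator

print(np.allclose(annihilator_basis @ W.T, 0.0))        # True: kills all of W
print(rank + annihilator_basis.shape[0] == W.shape[1])  # True: dims add up to dim V
```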
If formula_125 is a subspace of formula_0 then the quotient space formula_128 is a vector space in its own right, and so has a dual. By the first isomorphism theorem, a functional formula_112 factors through formula_128 if and only if formula_125 is in the kernel of formula_69. There is thus an isomorphism
formula_129
As a particular consequence, if formula_0 is a direct sum of two subspaces formula_53 and formula_118, then formula_6 is a direct sum of formula_130 and formula_131.
Dimensional analysis.
The dual space is analogous to a "negative"-dimensional space. Most simply, since a vector formula_105 can be paired with a covector formula_132 by the natural pairing
formula_133 to obtain a scalar, a covector can "cancel" the dimension of a vector, similar to reducing a fraction. Thus while the direct sum formula_134 is a 2"n"-dimensional space (if "V" is "n"-dimensional), "V"∗ behaves as an (−"n")-dimensional space, in the sense that its dimensions can be canceled against the dimensions of "V". This is formalized by tensor contraction.
This arises in physics via dimensional analysis, where the dual space has inverse units. Under the natural pairing, these units cancel, and the resulting scalar value is dimensionless, as expected. For example, in (continuous) Fourier analysis, or more broadly time–frequency analysis: given a one-dimensional vector space with a unit of time "t", the dual space has units of frequency: occurrences "per" unit of time (units of 1/"t"). For example, if time is measured in seconds, the corresponding dual unit is the inverse second: over the course of 3 seconds, an event that occurs 2 times per second occurs a total of 6 times, corresponding to formula_135. Similarly, if the primal space measures length, the dual space measures inverse length.
Continuous dual space.
When dealing with topological vector spaces, the continuous linear functionals from the space into the base field formula_136 (or formula_137) are particularly important.
This gives rise to the notion of the "continuous dual space" or "topological dual" which is a linear subspace of the algebraic dual space formula_6, denoted by formula_5.
For any "finite-dimensional" normed vector space or topological vector space, such as Euclidean "n-"space, the continuous dual and the algebraic dual coincide.
This is however false for any infinite-dimensional normed space, as shown by the example of discontinuous linear maps.
Nevertheless, in the theory of topological vector spaces the terms "continuous dual space" and "topological dual space" are often replaced by "dual space".
For a topological vector space formula_0 its "continuous dual space", or "topological dual space", or just "dual space" (in the sense of the theory of topological vector spaces) formula_5 is defined as the space of all continuous linear functionals formula_138.
Important examples for continuous dual spaces are the space of compactly supported test functions formula_139 and its dual formula_140 the space of arbitrary distributions (generalized functions); the space of arbitrary test functions formula_141 and its dual formula_142 the space of compactly supported distributions; and the space of rapidly decreasing test functions formula_143 the Schwartz space, and its dual formula_144 the space of tempered distributions (slowly growing distributions) in the theory of generalized functions.
Properties.
If X is a Hausdorff topological vector space (TVS), then the continuous dual space of X is identical to the continuous dual space of the completion of X.
Topologies on the dual.
There is a standard construction for introducing a topology on the continuous dual formula_5 of a topological vector space formula_0. Fix a collection formula_145 of bounded subsets of formula_0.
This gives the topology on formula_5 of uniform convergence on sets from formula_146 or what is the same thing, the topology generated by seminorms of the form
formula_147
where formula_13 is a continuous linear functional on formula_0, and formula_53 runs over the class formula_148
This means that a net of functionals formula_149 tends to a functional formula_13 in formula_5 if and only if
formula_150
Usually (but not necessarily) the class formula_145 is supposed to satisfy the following conditions: each point of formula_0 belongs to some set formula_151, that is, formula_152; any two sets formula_153 and formula_154 are contained in some set formula_155, that is, formula_156; and formula_145 is closed under multiplication by scalars, that is, formula_157.
If these requirements are fulfilled then the corresponding topology on formula_5 is Hausdorff and the sets
formula_158
form its local base.
Here are the three most important special cases.
If formula_0 is a normed vector space (for example, a Banach space or a Hilbert space) then the strong topology on formula_5 is normed (in fact a Banach space if the field of scalars is complete), with the norm
formula_159
Each of these three choices of topology on formula_5 leads to a variant of the reflexivity property for topological vector spaces:
Examples.
Let 1 < "p" < ∞ be a real number and consider the Banach space "ℓ p" of all sequences a = ("a""n") for which
formula_160
Define the number "q" by 1/"p" + 1/"q" = 1. Then the continuous dual of "ℓ" "p" is naturally identified with "ℓ" "q": given an element formula_161, the corresponding element of "ℓ" "q" is the sequence formula_162 where formula_163 denotes the sequence whose n-th term is 1 and all others are zero. Conversely, given an element a = ("a""n") ∈ "ℓ" "q", the corresponding continuous linear functional "formula_13" on "ℓ" "p" is defined by
formula_164
for all b = ("bn") ∈ "ℓ" "p" (see Hölder's inequality).
In a similar manner, the continuous dual of "ℓ" 1 is naturally identified with "ℓ" ∞ (the space of bounded sequences).
Furthermore, the continuous duals of the Banach spaces "c" (consisting of all convergent sequences, with the supremum norm) and "c"0 (the sequences converging to zero) are both naturally identified with "ℓ" 1.
By the Riesz representation theorem, the continuous dual of a Hilbert space is again a Hilbert space which is anti-isomorphic to the original space.
This gives rise to the bra–ket notation used by physicists in the mathematical formulation of quantum mechanics.
By the Riesz–Markov–Kakutani representation theorem, the continuous dual of certain spaces of continuous functions can be described using measures.
Transpose of a continuous linear map.
If "T" : "V → W" is a continuous linear map between two topological vector spaces, then the (continuous) transpose "T′" : "W′ → V′" is defined by the same formula as before:
formula_165
The resulting functional "T′"("φ") is in "V′". The assignment "T → T′" produces a linear map between the space of continuous linear maps from "V" to "W" and the space of linear maps from "W′" to "V′".
When "T" and "U" are composable continuous linear maps, then
formula_166
When "V" and "W" are normed spaces, the norm of the transpose in"L"("W′", "V′") is equal to that of "T" in "L"("V", "W").
Several properties of transposition depend upon the Hahn–Banach theorem.
For example, the bounded linear map "T" has dense range if and only if the transpose "T′" is injective.
When "T" is a compact linear map between two Banach spaces "V" and "W", then the transpose "T′" is compact.
This can be proved using the Arzelà–Ascoli theorem.
When "V" is a Hilbert space, there is an antilinear isomorphism "iV" from "V" onto its continuous dual "V′".
For every bounded linear map "T" on "V", the transpose and the adjoint operators are linked by
formula_167
When "T" is a continuous linear map between two topological vector spaces "V" and "W", then the transpose "T′" is continuous when "W′" and "V′" are equipped with "compatible" topologies: for example, when for "X" = "V" and "X" = "W", both duals "X′" have the strong topology "β"("X′", "X") of uniform convergence on bounded sets of "X", or both have the weak-∗ topology "σ"("X′", "X") of pointwise convergence on "X".
The transpose "T′" is continuous from "β"("W′", "W") to "β"("V′", "V"), or from "σ"("W′", "W") to "σ"("V′", "V").
Annihilators.
Assume that "W" is a closed linear subspace of a normed space "V", and consider the annihilator of "W" in "V′",
formula_168
Then, the dual of the quotient "V" / "W" can be identified with "W"⊥, and the dual of "W" can be identified with the quotient "V′" / "W"⊥.
Indeed, let "P" denote the canonical surjection from "V" onto the quotient "V" / "W" ; then, the transpose "P′" is an isometric isomorphism from ("V" / "W" )′ into "V′", with range equal to "W"⊥.
If "j" denotes the injection map from "W" into "V", then the kernel of the transpose "j′" is the annihilator of "W":
formula_169
and it follows from the Hahn–Banach theorem that "j′" induces an isometric isomorphism
"V′" / "W"⊥ → "W′".
Further properties.
If the dual of a normed space V is separable, then so is the space V itself.
The converse is not true: for example, the space "ℓ" 1 is separable, but its dual "ℓ" ∞ is not.
Double dual.
In analogy with the case of the algebraic double dual, there is always a naturally defined continuous linear operator Ψ : "V" → "V′′" from a normed space "V" into its continuous double dual "V′′", defined by
formula_170
As a consequence of the Hahn–Banach theorem, this map is in fact an isometry, meaning ‖ Ψ("x") ‖ = ‖ "x" ‖ for all "x" ∈ "V".
Normed spaces for which the map Ψ is a bijection are called reflexive.
When "V" is a topological vector space then Ψ("x") can still be defined by the same formula, for every "x" ∈ "V", however several difficulties arise.
First, when "V" is not locally convex, the continuous dual may be equal to { 0 } and the map Ψ trivial.
However, if "V" is Hausdorff and locally convex, the map Ψ is injective from "V" to the algebraic dual "V′"∗ of the continuous dual, again as a consequence of the Hahn–Banach theorem.
Second, even in the locally convex setting, several natural vector space topologies can be defined on the continuous dual "V′", so that the continuous double dual "V′′" is not uniquely defined as a set. Saying that Ψ maps from "V" to "V′′", or in other words, that Ψ("x") is continuous on "V′" for every "x" ∈ "V", is a reasonable minimal requirement on the topology of "V′", namely that the evaluation mappings
formula_171
be continuous for the chosen topology on "V′". Further, there is still a choice of a topology on "V′′", and continuity of Ψ depends upon this choice.
As a consequence, defining reflexivity in this framework is more involved than in the normed case.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "V,"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "V^{*}"
},
{
"math_id": 4,
"text": "V^{\\lor}"
},
{
"math_id": 5,
"text": "V'"
},
{
"math_id": 6,
"text": "V^*"
},
{
"math_id": 7,
"text": "\\varphi: V \\to F"
},
{
"math_id": 8,
"text": "\\hom (V, F)"
},
{
"math_id": 9,
"text": "\n\\begin{align}\n (\\varphi + \\psi)(x) &= \\varphi(x) + \\psi(x) \\\\\n (a \\varphi)(x) &= a \\left(\\varphi(x)\\right)\n\\end{align}"
},
{
"math_id": 10,
"text": "\\varphi, \\psi \\in V^*"
},
{
"math_id": 11,
"text": "x \\in V"
},
{
"math_id": 12,
"text": "a \\in F"
},
{
"math_id": 13,
"text": "\\varphi"
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": "\\varphi (x) = [x, \\varphi]"
},
{
"math_id": 16,
"text": "\\varphi (x) = \\langle x, \\varphi \\rangle"
},
{
"math_id": 17,
"text": "\\langle \\cdot, \\cdot \\rangle : V \\times V^* \\to F"
},
{
"math_id": 18,
"text": "\\{\\mathbf{e}_1,\\dots,\\mathbf{e}_n\\}"
},
{
"math_id": 19,
"text": "\\{\\mathbf{e}^1,\\dots,\\mathbf{e}^n\\}"
},
{
"math_id": 20,
"text": " \\mathbf{e}^i(c^1 \\mathbf{e}_1+\\cdots+c^n\\mathbf{e}_n) = c^i, \\quad i=1,\\ldots,n "
},
{
"math_id": 21,
"text": "c^i\\in F"
},
{
"math_id": 22,
"text": " \\mathbf{e}^i(\\mathbf{e}_j) = \\delta^{i}_{j} "
},
{
"math_id": 23,
"text": "\\delta^{i}_{j}"
},
{
"math_id": 24,
"text": "\\R^2"
},
{
"math_id": 25,
"text": "\\{\\mathbf{e}_1=(1/2,1/2),\\mathbf{e}_2=(0,1)\\}"
},
{
"math_id": 26,
"text": "\\mathbf{e}^1"
},
{
"math_id": 27,
"text": "\\mathbf{e}^2"
},
{
"math_id": 28,
"text": "\\mathbf{e}^1(\\mathbf{e}_1)=1"
},
{
"math_id": 29,
"text": "\\mathbf{e}^1(\\mathbf{e}_2)=0"
},
{
"math_id": 30,
"text": "\\mathbf{e}^2(\\mathbf{e}_1)=0"
},
{
"math_id": 31,
"text": "\\mathbf{e}^2(\\mathbf{e}_2)=1"
},
{
"math_id": 32,
"text": "\n\\begin{bmatrix}\ne^{11} & e^{12} \\\\\ne^{21} & e^{22}\n\\end{bmatrix}\n\\begin{bmatrix}\ne_{11} & e_{21} \\\\\ne_{12} & e_{22}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1 & 0 \\\\\n0 & 1\n\\end{bmatrix}.\n"
},
{
"math_id": 33,
"text": "\\{\\mathbf{e}^1=(2,0),\\mathbf{e}^2=(-1,1)\\}"
},
{
"math_id": 34,
"text": "\\mathbf{e}^1(x,y)=2x"
},
{
"math_id": 35,
"text": "\\mathbf{e}^2(x,y)=-x+y"
},
{
"math_id": 36,
"text": "\\R^n"
},
{
"math_id": 37,
"text": "E=[\\mathbf{e}_1|\\cdots|\\mathbf{e}_n]"
},
{
"math_id": 38,
"text": "\\hat{E}=[\\mathbf{e}^1|\\cdots|\\mathbf{e}^n]"
},
{
"math_id": 39,
"text": "\\hat{E}^\\textrm{T}\\cdot E = I_n,"
},
{
"math_id": 40,
"text": "I_n"
},
{
"math_id": 41,
"text": "n"
},
{
"math_id": 42,
"text": "\\mathbf{x}\\in V"
},
{
"math_id": 43,
"text": "\\mathbf{x} = \\sum_i \\langle\\mathbf{x},\\mathbf{e}^i \\rangle \\mathbf{e}_i = \\sum_i \\langle \\mathbf{x}, \\mathbf{e}_i \\rangle \\mathbf{e}^i,"
},
{
"math_id": 44,
"text": "\\langle \\cdot, \\cdot \\rangle"
},
{
"math_id": 45,
"text": "y"
},
{
"math_id": 46,
"text": "M"
},
{
"math_id": 47,
"text": "n\\times 1"
},
{
"math_id": 48,
"text": "1\\times 1"
},
{
"math_id": 49,
"text": "Mx=y"
},
{
"math_id": 50,
"text": "1\\times n"
},
{
"math_id": 51,
"text": "\\R^\\N"
},
{
"math_id": 52,
"text": "\\mathbf{e}_\\alpha"
},
{
"math_id": 53,
"text": "A"
},
{
"math_id": 54,
"text": "\\mathbf{e}^\\alpha"
},
{
"math_id": 55,
"text": "\\alpha\\in A"
},
{
"math_id": 56,
"text": "\\R^\\infty"
},
{
"math_id": 57,
"text": "\\N"
},
{
"math_id": 58,
"text": "i \\in \\N"
},
{
"math_id": 59,
"text": "\\mathbf{e}_i"
},
{
"math_id": 60,
"text": "i"
},
{
"math_id": 61,
"text": "(a_n)"
},
{
"math_id": 62,
"text": "(x_n)"
},
{
"math_id": 63,
"text": "\\sum_n a_nx_n,"
},
{
"math_id": 64,
"text": "x_n"
},
{
"math_id": 65,
"text": "\\{\\mathbf{e}_\\alpha:\\alpha\\in A\\}"
},
{
"math_id": 66,
"text": "(F^A)_0"
},
{
"math_id": 67,
"text": "f:A\\to F"
},
{
"math_id": 68,
"text": "f_\\alpha=f(\\alpha)"
},
{
"math_id": 69,
"text": "f"
},
{
"math_id": 70,
"text": "\\sum_{\\alpha\\in A} f_\\alpha\\mathbf{e}_\\alpha"
},
{
"math_id": 71,
"text": "v\\in V"
},
{
"math_id": 72,
"text": "F^A"
},
{
"math_id": 73,
"text": "T"
},
{
"math_id": 74,
"text": "\\theta_\\alpha=T(\\mathbf{e}_\\alpha)"
},
{
"math_id": 75,
"text": "\\theta:A\\to F"
},
{
"math_id": 76,
"text": "\\theta(\\alpha)=\\theta_\\alpha"
},
{
"math_id": 77,
"text": "T\\left (\\sum_{\\alpha\\in A} f_\\alpha \\mathbf{e}_\\alpha\\right) = \\sum_{\\alpha \\in A} f_\\alpha T(e_\\alpha) = \\sum_{\\alpha\\in A} f_\\alpha \\theta_\\alpha."
},
{
"math_id": 78,
"text": "f_\\alpha"
},
{
"math_id": 79,
"text": "\\alpha"
},
{
"math_id": 80,
"text": " V\\cong (F^A)_0\\cong\\bigoplus_{\\alpha\\in A} F."
},
{
"math_id": 81,
"text": "V^* \\cong \\left (\\bigoplus_{\\alpha\\in A}F\\right )^* \\cong \\prod_{\\alpha\\in A}F^* \\cong \\prod_{\\alpha\\in A}F \\cong F^A"
},
{
"math_id": 82,
"text": "\\mathrm{dim}(V)=|A|<|F|^{|A|}=|V^\\ast|=\\mathrm{max}(|\\mathrm{dim}(V^\\ast)|, |F|),"
},
{
"math_id": 83,
"text": "\\mathrm{dim}(V)< \\mathrm{dim}(V^*),"
},
{
"math_id": 84,
"text": "|F|\\le |\\mathrm{dim}(V^\\ast)|,"
},
{
"math_id": 85,
"text": "v\\mapsto \\langle v, \\cdot\\rangle"
},
{
"math_id": 86,
"text": "\\Phi_{\\langle\\cdot,\\cdot\\rangle} : V\\to V^*"
},
{
"math_id": 87,
"text": "\\left[\\Phi_{\\langle\\cdot,\\cdot\\rangle}(v), w\\right] = \\langle v, w\\rangle."
},
{
"math_id": 88,
"text": "\\Phi"
},
{
"math_id": 89,
"text": " \\langle \\cdot, \\cdot \\rangle_{\\Phi} "
},
{
"math_id": 90,
"text": " \\langle v, w \\rangle_\\Phi = (\\Phi (v))(w) = [\\Phi (v), w].\\,"
},
{
"math_id": 91,
"text": "\n \\Phi_{\\langle \\cdot, \\cdot \\rangle} : V\\to \\overline{V^*}.\n "
},
{
"math_id": 92,
"text": "\\overline{V^*}"
},
{
"math_id": 93,
"text": "\n f(\\alpha v) = \\overline{\\alpha}f(v).\n "
},
{
"math_id": 94,
"text": "\\Psi"
},
{
"math_id": 95,
"text": "V^{**}=\\{\\Phi:V^*\\to F:\\Phi\\ \\mathrm{linear}\\}"
},
{
"math_id": 96,
"text": "(\\Psi(v))(\\varphi)=\\varphi(v)"
},
{
"math_id": 97,
"text": "v\\in V, \\varphi\\in V^*"
},
{
"math_id": 98,
"text": "\\mathrm{ev}_v:V^*\\to F"
},
{
"math_id": 99,
"text": "\\varphi \\mapsto \\varphi(v)"
},
{
"math_id": 100,
"text": "\\Psi: V \\to V^{**}"
},
{
"math_id": 101,
"text": "v\\mapsto\\mathrm{ev}_v"
},
{
"math_id": 102,
"text": "\n f^*(\\varphi) = \\varphi \\circ f \\,\n "
},
{
"math_id": 103,
"text": "\\varphi \\in W^*"
},
{
"math_id": 104,
"text": "f^* (\\varphi)"
},
{
"math_id": 105,
"text": "v \\in V"
},
{
"math_id": 106,
"text": "\n [f^*(\\varphi),\\, v] = [\\varphi,\\, f(v)],\n "
},
{
"math_id": 107,
"text": "S"
},
{
"math_id": 108,
"text": "S^0"
},
{
"math_id": 109,
"text": "f\\in V^*"
},
{
"math_id": 110,
"text": "[f,s]=0"
},
{
"math_id": 111,
"text": "s\\in S"
},
{
"math_id": 112,
"text": "f:V\\to F"
},
{
"math_id": 113,
"text": "f|_S = 0"
},
{
"math_id": 114,
"text": "\\{ 0 \\}^0 = V^*"
},
{
"math_id": 115,
"text": "V^0 = \\{ 0 \\} \\subseteq V^*"
},
{
"math_id": 116,
"text": "\\{ 0 \\} \\subseteq S\\subseteq T\\subseteq V"
},
{
"math_id": 117,
"text": "\n \\{ 0 \\} \\subseteq T^0 \\subseteq S^0 \\subseteq V^* .\n "
},
{
"math_id": 118,
"text": "B"
},
{
"math_id": 119,
"text": "\n A^0 + B^0 \\subseteq (A \\cap B)^0 .\n "
},
{
"math_id": 120,
"text": "(A_i)_{i\\in I}"
},
{
"math_id": 121,
"text": "I"
},
{
"math_id": 122,
"text": "\n \\left( \\bigcup_{i\\in I} A_i \\right)^0 = \\bigcap_{i\\in I} A_i^0 .\n "
},
{
"math_id": 123,
"text": "\n (A + B)^0 = A^0 \\cap B^0\n "
},
{
"math_id": 124,
"text": "\n (A \\cap B)^0 = A^0 + B^0 .\n "
},
{
"math_id": 125,
"text": "W"
},
{
"math_id": 126,
"text": "\n W^{00} = W\n "
},
{
"math_id": 127,
"text": "V\\approx V^{**}"
},
{
"math_id": 128,
"text": "V/W"
},
{
"math_id": 129,
"text": " (V/W)^* \\cong W^0 ."
},
{
"math_id": 130,
"text": "A^0"
},
{
"math_id": 131,
"text": "B^0"
},
{
"math_id": 132,
"text": "\\varphi \\in V^*"
},
{
"math_id": 133,
"text": "\\langle x, \\varphi \\rangle := \\varphi (x) \\in F"
},
{
"math_id": 134,
"text": "V \\oplus V^*"
},
{
"math_id": 135,
"text": "3s \\cdot 2s^{-1} = 6"
},
{
"math_id": 136,
"text": "\\mathbb{F} = \\Complex"
},
{
"math_id": 137,
"text": "\\R"
},
{
"math_id": 138,
"text": "\\varphi:V\\to{\\mathbb F}"
},
{
"math_id": 139,
"text": "\\mathcal{D}"
},
{
"math_id": 140,
"text": "\\mathcal{D}',"
},
{
"math_id": 141,
"text": "\\mathcal{E}"
},
{
"math_id": 142,
"text": "\\mathcal{E}',"
},
{
"math_id": 143,
"text": "\\mathcal{S},"
},
{
"math_id": 144,
"text": "\\mathcal{S}',"
},
{
"math_id": 145,
"text": "\\mathcal{A}"
},
{
"math_id": 146,
"text": "\\mathcal{A},"
},
{
"math_id": 147,
"text": "\\|\\varphi\\|_A = \\sup_{x\\in A} |\\varphi(x)|,"
},
{
"math_id": 148,
"text": "\\mathcal{A}."
},
{
"math_id": 149,
"text": "\\varphi_i"
},
{
"math_id": 150,
"text": "\\text{ for all } A\\in\\mathcal{A}\\qquad \\|\\varphi_i-\\varphi\\|_A = \\sup_{x\\in A} |\\varphi_i(x)-\\varphi(x)|\\underset{i\\to\\infty}{\\longrightarrow} 0. "
},
{
"math_id": 151,
"text": "A\\in\\mathcal{A}"
},
{
"math_id": 152,
"text": "\\text{ for all } x \\in V\\quad \\text{ there exists some } A \\in \\mathcal{A}\\quad \\text{ such that } x \\in A."
},
{
"math_id": 153,
"text": "A \\in \\mathcal{A}"
},
{
"math_id": 154,
"text": "B \\in \\mathcal{A}"
},
{
"math_id": 155,
"text": "C \\in \\mathcal{A}"
},
{
"math_id": 156,
"text": "\\text{ for all } A, B \\in \\mathcal{A}\\quad \\text{ there exists some } C \\in \\mathcal{A}\\quad \\text{ such that } A \\cup B \\subseteq C."
},
{
"math_id": 157,
"text": "\\text{ for all } A \\in \\mathcal{A}\\quad \\text{ and all } \\lambda \\in {\\mathbb F}\\quad \\text{ such that } \\lambda \\cdot A \\in \\mathcal{A}."
},
{
"math_id": 158,
"text": "U_A ~=~ \\left \\{ \\varphi \\in V' ~:~ \\quad \\|\\varphi\\|_A < 1 \\right \\},\\qquad \\text{ for } A \\in \\mathcal{A}"
},
{
"math_id": 159,
"text": "\\|\\varphi\\| = \\sup_{\\|x\\| \\le 1 } |\\varphi(x)|."
},
{
"math_id": 160,
"text": "\\|\\mathbf{a}\\|_p = \\left ( \\sum_{n=0}^\\infty |a_n|^p \\right) ^{\\frac{1}{p}} < \\infty."
},
{
"math_id": 161,
"text": "\\varphi \\in (\\ell^p)'"
},
{
"math_id": 162,
"text": "(\\varphi(\\mathbf {e}_n))"
},
{
"math_id": 163,
"text": "\\mathbf {e}_n"
},
{
"math_id": 164,
"text": "\\varphi (\\mathbf{b}) = \\sum_n a_n b_n"
},
{
"math_id": 165,
"text": "T'(\\varphi) = \\varphi \\circ T, \\quad \\varphi \\in W'."
},
{
"math_id": 166,
"text": "(U \\circ T)' = T' \\circ U'."
},
{
"math_id": 167,
"text": "i_V \\circ T^* = T' \\circ i_V."
},
{
"math_id": 168,
"text": "W^\\perp = \\{ \\varphi \\in V' : W \\subseteq \\ker \\varphi\\}."
},
{
"math_id": 169,
"text": "\\ker (j') = W^\\perp"
},
{
"math_id": 170,
"text": " \\Psi(x)(\\varphi) = \\varphi(x), \\quad x \\in V, \\ \\varphi \\in V' ."
},
{
"math_id": 171,
"text": " \\varphi \\in V' \\mapsto \\varphi(x), \\quad x \\in V , "
}
] | https://en.wikipedia.org/wiki?curid=7988 |
7989113 | No Wit, No Help Like a Woman's | No Wit, No Help Like a Woman's is a Jacobean tragicomic play by Thomas Middleton.
Title.
On the title page of the first published edition (1657), the play's title is rendered as follows:
formula_0
This title is difficult to translate into conventional prose; most subsequent editions have called it "No Wit, No Help Like a Woman's", but the 2007 Middleton complete works for Oxford University Press renders it as "No Wit/Help Like a Woman's".
Date.
External evidence on the play's date is lacking. In the text of the play, the character Weatherwise repeatedly refers to almanacs; he quotes sixteen almanac proverbs or catchphrases – fifteen of which derive from the 1611 edition of Thomas Bretnor's almanac, a fact that points to 1611 as the likely date of composition. This procedure was not atypical of Middleton's practice; when he composed his "Inner Temple Masque" in 1618, he used eleven proverbs from Bretnor's 1618 almanac.
The play was revived in 1638. James Shirley staged it at the Werburgh Street Theatre in Dublin, and wrote a Prologue for the work that was published with Middleton's text in the first edition.
Publication.
The play was entered into the Stationers' Register on 9 September 1653 by the bookseller Humphrey Moseley. Moseley issued the first edition four years later, in 1657, an octavo printed for him by Thomas Newcomb. Nineteen copies of the 1657 octavo survive, an unusually large total for a play of its era. Middleton's original was not reprinted prior to the 19th century.
Authorship.
Both the Stationers' Register entry and the title page of the first edition attribute the play to "Tho. Middleton." Given the play's strong similarities with Middleton's other works, the accuracy of this attribution has never been disputed. 19th-century critic Charles W. Stork suggested William Rowley as a possible collaborator, though his hypothesis has been rejected due to lack of supporting evidence.
Source and influences.
Middleton borrowed the main plot of "No Wit" from a 1589 play, a comedy by Giambattista Della Porta titled "La Sorella" ("The Sister"). In turn, Middleton's play served as a source for a contemporaneous Latin scholastic play by Samuel Brooke called "Adelphe". The extant manuscript of "Adelphe" states that it was first performed in 1611, which supports the dating of Middleton's play.
The play's subplot, which centers on Master Low-Water – and which was original with Middleton rather than borrowed from another source – had a long theatrical life over the coming centuries. It was adapted for the 1677 play "The Counterfeit Bridegroom, or The Defeated Widow", a work variously ascribed to either Aphra Behn or Thomas Betterton. "The Counterfeit Bridegroom" in turn was adapted into William Taverner's "The Artful Husband" (1717), which then became George Colman the Elder's "The Female Chevalier" (1778), which became both Alicia Sheridan's "The Ambiguous Lover" (1781) and William Macready the Elder's "The Bank Note, or Lesson for Ladies" (1795).
Synopsis.
Prologue
The Prologue notes that it will be difficult to please everyone in the audience because everyone has come for different reasons: some for the wit, some for the costumes, some for comedy, some for passion, and some to arrange a lascivious meeting. But, despite this, the Prologue is confident that, as long as everyone can pay attention and understand the play, they will all be satisfactorily entertained.
Act I.
Scene 1: A street near Sir Oliver Twilight's house
Master Sandfield is ready to kill his friend, Philip Twilight, because he believes Philip has been courting the woman he loves, Jane Sunset. Philip's servant, Savourwit, prevents Sandfield from killing Philip by explaining that the match between Philip and Jane is all Philip's father's doing and Philip is already married! A complicated story unfolds:
Ten years ago, Sir Oliver Twilight's wife and daughter, Lady Twilight and Grace, were kidnapped by 'Dunkirks' (French marauders) as they were crossing the English Channel on their way to Jersey; they were consequently sold and separated. Twilight didn't hear anything from either of them until a few months ago, when he received a letter saying that his wife's freedom could be bought for six hundred crowns. Twilight sent Philip and Savourwit to Jersey with the ransom money. On the way there, Philip and Savourwit stopped in a small town, where they wasted most of the money on women and partying. Philip also met and married a young woman, Grace. With the money gone and nothing to show for it, Philip and Savourwit returned home and told Sir Oliver that Lady Twilight was dead; they also told him that Philip's new wife Grace was his long-lost daughter and the ransom money had been spent to buy her freedom. This entire scheme was the brainchild of Savourwit, who prides himself on his knack for 'invention'.
Sandfield congratulates Savourwit on his 'invention' and makes up with Philip. Savourwit tells the friends that he has a new scheme in mind:
Because Sir Oliver Twilight believes that his 'long-lost daughter' Grace is still a maiden, he has made arrangements to marry her off to Weatherwise, a silly old fool. To get rid of Weatherwise, Savourwit will tell Sir Oliver that Sandfield wants to marry Grace: he is certain that Sir Oliver will prefer Sandfield as a suitor, especially because Sandfield does not require a dowry. Thus, Sandfield can pretend to marry Grace (a marriage that will not be legal because Grace is already married to Philip) and Philip can pretend to marry Jane (a marriage that will not be legal because Philip is already married to Grace). The couples can all live together under Sir Oliver Twilight's roof, maintaining appearances during the day, and swapping spouses at night. This way, Philip can still be with his wife (even though he has to pretend that she is his sister during the day) and Sandfield can be with Jane (even though he will have to pretend that she is his sister-in-law during the day).
Lady Goldenfleece, a rich widow, enters, followed by her suitors: Sir Gilbert Lambston, Master Pepperton, and Master Overdone. They are followed by the two old men, Sir Oliver Twilight and Master Sunset, and their daughters, Grace Twilight and Jane Sunset. Savourwit notes that Lady Goldenfleece's recently deceased husband was a notorious usurer; he nearly doubled his wealth shortly before he died by seizing the property of a gentleman named Master Low-water.
Lady Goldenfleece greets Jane and Grace and alludes to a 'secret' regarding the girls. They beg her to tell them what the secret is, but she refuses. In an aside, Jane says that she wishes Lady Goldenfleece would be more kind to her relatives, the Low-waters. Everyone exits except Sir Oliver Twilight and Savourwit. Savourwit tells Sir Oliver that Sandfield is desperate to marry Grace and will take her without a dowry. Sir Oliver is pleased and promises to get rid of Weatherwise immediately. Savourwit exits; Weatherwise enters. Sir Oliver tells Weatherwise that he has changed his mind and does not want him to marry Grace. Weatherwise says that he will become a suitor to Lady Goldenfleece.
Scene 2: A room in Low-water's house
Mistress Low-water mourns her family's ruin and curses Lady Goldenfleece. Jane enters to ask Mistress Low-water if she knows anything about the mysterious secret that Lady Goldenfleece alluded to earlier. Mistress Low-water says it might have something to do with "some piece of money or land" that was bequeathed to Grace and Jane "by some departing friend on their deathbed". Jane thanks Mistress Low-water and exits.
A footman enters with a letter for Mistress Low-water from Sir Gilbert Lambston, a wealthy knight. Sir Gilbert wants to make Mistress Low-water his mistress and, in the letter, promises to double his previous offer of money if she agrees. He expects to marry Lady Goldenfleece soon, and will thus be able to pay Mistress Low-water for her sexual services. Mistress Low-water says that Sir Gilbert's villainy almost makes her feel sorry for her 'enemy' Lady Goldenfleece. She swears that she will never become Sir Gilbert's mistress, but says that the letter is "welcome," hinting that she has a plan in mind.
Sir Gilbert enters. Mistress Low-water asks him to give her one day to consider his offer. He agrees and exits. Master Low-water enters. Mistress Low-water tells him that she thinks she has a way to restore them to their former wealth. Master Low-water agrees to go along with her plan.
Scene 3: A room in Sir Oliver Twilight's house
A Dutch merchant enters with news from Sir Oliver's wife: she wonders why Sir Oliver hasn't sent a ransom for her yet. Sir Oliver is shocked to hear that his wife is still alive. He tells the merchant that he sent his son and servant with the ransom ten weeks ago; they brought his daughter back, but said that his wife had died. The Dutch Merchant vows that he has seen Lady Twilight alive within the past month and asks if he can see the 'daughter' that Philip and Savourwit brought back with them.
Grace enters. The Dutch Merchant says that he has seen her before, at an inn in Antwerp. Grace worries that she has been found out and tells the merchant that he must be mistaken. The merchant sees through Savourwit's scheme; he tells Sir Oliver that he has been deceived by his son and servant: his wife is not dead, and this is not his real daughter. Sir Oliver is not sure what he should believe. He sends Grace away. The Dutch merchant says that he must leave for a moment to attend to a business matter. He leaves his little son in Sir Oliver's care and promises to return soon.
Savourwit enters and Sir Oliver confronts him with the merchant's allegations. Savourwit denies all. To cover himself, Savourwit pretends to have a conversation in Dutch with the merchant's little son, who speaks in a kind of pidgin English. Savourwit's 'Dutch' is almost pure gibberish (words such as "pisse" surface occasionally). Savourwit tells Sir Oliver that the boy has told him that the Dutch Merchant is crazy, and prone to telling wild tales. Sir Oliver is still uncertain what he should believe.
The Dutch merchant re-enters. Sir Oliver tells him that Savourwit has spoken with his son, who claims that the Dutch Merchant is crazy. After conferring with his son, the Dutch Merchant says that Savourwit is lying. Sir Oliver realizes he has been deceived by Savourwit. He invites the Dutch merchant to stay at his home.
Act II.
Scene 1: A room in Weatherwise's house
Lady Goldenfleece is dining at Weatherwise's home with her suitors, Weatherwise, Sir Gilbert Lambston, Master Pepperton and Master Overdone. The suitors compete for Lady Goldenfleece's attention. Weatherwise is obsessed with almanacs, calendars, phases of the moon, the zodiac, etc. Lambston is the most aggressive suitor: he gives Lady Goldenfleece a big kiss in front of everyone. Pepperton and Overdone secretly agree to work together against Lambston.
Mistress Low-water enters, disguised as a 'Gallant Gentleman'; her husband, Master Low-water, poses as her servant. Lady Goldenfleece is immediately attracted to the 'Gallant Gentleman'. The 'Gallant Gentleman' calls Sir Gilbert a villain and gives Lady Goldenfleece the letter in which Sir Gilbert offered to pay Mistress Low-water to become his mistress. Lady Goldenfleece is shocked, and throws Sir Gilbert out. The other suitors are pleased to see Sir Gilbert eliminated.
Scene 2: The street outside Sir Oliver Twilight's house
Sandfield, Philip and Savourwit worry about what might happen to them now that their scheme has been discovered. Philip attempts to commit suicide, but Savourwit and Sandfield stop him.
Lady Twilight, Philip's mother, enters. She has recently been rescued from captivity by the scholar, Beveril, who accompanies her. Philip greets his mother warmly and thanks Beveril for rescuing his mother. Beveril, who happens to be Mistress Low-water's brother, asks Philip how his sister is doing. Philip regretfully informs him of the Low-waters' recent misfortune.
On Savourwit's advice, Philip takes his mother aside, and confesses how he faked her death and spent the ransom. After Lady Twilight agrees to forgive his transgressions, he begs her to pretend that Grace is actually his sister. Lady Twilight agrees to protect her son.
Scene 3: A room in Lady Goldenfleece's house
Mistress Low-water and her husband, still disguised as a Gallant Gentleman and his servant, are at Lady Goldenfleece's house, waiting for an interview with her. Lady Goldenfleece's suitors, Weatherwise, Pepperton and Overdone arrive, and the 'Gallant Gentleman' scolds 'his' 'Servant' loudly, acting as though 'he' is now the lord of the house. The suitors assume that the 'Gallant Gentleman' has married Lady Goldenfleece under their noses. They exit, disappointed.
Lady Goldenfleece enters and apologizes for making the 'Gallant Gentleman' wait. After a good deal of heated flirting, the 'Gallant Gentleman' promises Lady Goldenfleece that he has never slept with or courted any other woman (these oaths are, of course, ironically true). Lady Goldenfleece is swept off her feet. They kiss.
Weatherwise, Pepperton and Overdone enter with renewed hopes: a servant has told them that Lady Goldenfleece and the Gentleman haven't married yet. Lady Goldenfleece is not happy to see the suitors. To get rid of them, she kisses the 'Gallant Gentleman', announces her intention to marry him, and exits. The suitors exit.
The 'Gallant Gentleman' asks Lady Goldenfleece's clown, Pickadillie, if he knows of anyone who can write an entertainment for the wedding. Pickadillie suggests the scholar Beveril. Mistress Low-water is overjoyed to see her brother, but remains in character as the 'Gallant Gentleman.' Beveril says he cannot compose an entertainment for the wedding because Lady Goldenfleece has wronged his sister. The 'Gallant Gentleman' encourages Beveril to withhold judgment until he has made Lady Goldenfleece's acquaintance. Lady Goldenfleece enters. Lady Goldenfleece and Beveril are immediately attracted to each other.
Act III.
Scene 1: A street near Lady Goldenfleece's house
Weatherwise, Pepperton and Overdone meet Sir Gilbert Lambston on the street and tell him about Lady Goldenfleece's engagement. They all vow to find some way to disgrace Lady Goldenfleece. Pickadillie enters and tells the suitors that Beveril has been contracted to compose an entertainment for Lady Goldenfleece's wedding.
Beveril enters. Speaking to himself, he says is having trouble composing the entertainment because he has fallen madly in love with Lady Goldenfleece. The suitors introduce themselves to Beveril and volunteer to perform in the wedding entertainment. Beveril suggests that they play Earth, Air, Fire and Water. The suitors exit, making plans to 'poison' the entertainment and disgrace Lady Goldenfleece.
Mistress Low-water and her husband enter, still disguised as the 'Gallant Gentleman' and 'Servant', and overhear Beveril declaring his secret love for Lady Goldenfleece. Mistress Low-water is quite pleased. She says that she will work to bring her brother and Lady Goldenfleece together.
Act IV.
Scene 1: A room in Sir Oliver Twilight's house
Sir Oliver says he is very happy to have his wife back. He thanks Beveril for rescuing her and scolds Philip and Savourwit for faking her death. Lady Twilight says that rumors of her death were circulating during Philip and Savourwit's time in Jersey (a lie) and begs Sir Oliver to excuse them because the whole mix-up was obviously a misunderstanding, not a scheme.
Sir Oliver raises the matter of the 'minion' (Grace) that Philip and Savourwit brought home to pass off as the Twilights' daughter. Lady Twilight says that she will be able to recognize her true daughter immediately because she has seen her several times since the kidnapping (another lie). Grace is brought in. Feigning jubilation, Lady Twilight says that Grace is definitely her daughter. Sir Oliver apologizes to Philip and Savourwit and admits them back into his good graces. With all the problems out of the way, he continues to make plans to marry Grace to Sandfield and Philip to Jane. Everyone exits except Lady Twilight, Grace, Philip and Savourwit.
Lady Twilight notes that there is something peculiarly familiar about Grace's face, and asks who Grace's mother was. Grace says she doesn't know: she and her mother were kidnapped by Dunkirks and separated ten years ago; all she knows is that her mother was an English gentlewoman. Lady Twilight realizes that Grace is her true daughter; in fact, she is wearing an earring that Lady Twilight gave her when she was a baby. Philip is horrified to learn that he has been sleeping with his sister. Lady Twilight tells Philip that his sin of incest was committed in ignorance, so it can be forgiven. Lady Twilight and Grace exit.
Lamenting his ill-fortune, Philip says good-bye to Savourwit and says he is leaving home forever. Savourwit convinces him to stick around long enough to see Lady Goldenfleece's wedding.
Scene 2: A room in Lady Goldenfleece's house
Pickadillie observes as servants make preparations for Lady Goldenfleece's wedding.
Scene 3: A room in Lady Goldenfleece's house
Music plays. Lady Goldenfleece and the 'Gallant Gentleman,' now married, enter arm-in-arm. Sir Oliver Twilight, Master Sunset, the Dutch Merchant, Lady Twilight, Grace, Jane Sunset, Philip, Savourwit, Master Sandfield and Master Low-water (disguised as a servant) follow. Master Low-water tells his wife that Beveril is still deeply infatuated with Lady Goldenfleece. Mistress Low-water asks him if he has prepared 'the letter'. Master Low-water says he has.
Beveril enters to introduce the entertainment. The entertainment begins. Lady Goldenfleece's former suitors enter, dressed as 'Fire' (Sir Gilbert Lambston), 'Air' (Weatherwise), 'Water' (Overdone) and 'Earth' (Pepperton). Rather than reciting the parts Beveril has composed, each suitor recites a speech insulting Lady Goldenfleece. When the entertainment is finished, the suitors reveal their true identities and exit, satisfied that they have fulfilled their vow to see Lady Goldenfleece disgraced. Beveril apologizes for the fiasco. Lady Goldenfleece forgives him. Lady Goldenfleece says good night to the wedding guests, who will spend the night at her house.
Act V.
Scene 1: A room in Lady Goldenfleece's house
Lady Goldenfleece urges her new 'husband' to come to bed. 'He' resists her, claiming that 'he' cannot enjoy his marriage because 'he' knows that Lady Goldenfleece's fortune was wrongfully acquired. Lady Goldenfleece begs 'him' to reconsider, but 'he' will not relent. She exits with a heavy heart.
Master Low-water enters and tells his wife that he has delivered the letter she wrote to Beveril. Beveril enters on the balcony, reading the letter aloud. The letter is supposedly from Lady Goldenfleece, but was actually written by Mistress Low-water. In the letter, 'Lady Goldenfleece' tells Beveril that she is already disappointed in her marriage because the 'Gallant Gentleman' has forsaken her bed without cause. She begs Beveril to come to her chamber to counsel her. Delighted, Beveril exits on his way to Lady Goldenfleece's chamber.
Sir Oliver Twilight, Lady Twilight, Master Sunset, Grace, Philip, Sandfield, Jane, the Dutch Merchant and Savourwit enter. The 'Gallant Gentleman' tells them that he has heard strange noises coming from Lady Goldenfleece's bedroom and suspects that she is cheating on him on the night of their wedding. The door to Lady Goldenfleece's chamber is forced open. Lady Goldenfleece is discovered inside with Beveril. Feigning rage, the 'Gallant Gentleman' swears that he will never admit Lady Goldenfleece to his bed for as long as he lives, banishes her from the house and claims all of her wealth as his own.
Following pleas for leniency from Sir Oliver, the 'Gallant Gentleman' eventually agrees to leave with a casket containing half of all Lady Goldenfleece's wealth. As he prepares to leave, the 'Gallant Gentleman' says that, if 'he' wanted to, 'he' could release Lady Goldenfleece from her marriage contract with a few words, thus leaving her free to remarry. Everyone present wonders how this could be possible. Lady Goldenfleece begs the 'Gallant Gentleman' to tell her how she can regain her freedom. The 'Gallant Gentleman' says 'he' will only tell her if she promises to remarry immediately. Lady Goldenfleece promises.
The 'Gallant Gentleman' reveals that Goldenfleece's marriage contract is void because 'he' is already married! Lady Goldenfleece is shocked. In order to honor her promise to remarry immediately, and to spite the 'Gallant Gentleman', she announces that her next husband will be Beveril. The 'Gallant Gentleman' feigns torment. Sir Gilbert, Sir Oliver and Beveril note that the 'Gallant Gentleman's' days are numbered; the punishment for having two wives is hanging. The 'Gallant Gentleman' says that, although 'he' is already married, 'he' does not have two wives. To provide the solution to this riddle, 'he' reveals 'his' true identity as Mistress Low-water.
Sir Oliver decides that it is now time to draw up the marriage contracts for Sandfield & Grace and Philip & Jane. Lady Goldenfleece protests that Jane cannot marry Philip because she is, in fact, his sister! While everyone wonders what is going on, Lady Goldenfleece explains that Lady Sunset switched the girls when they were infants because she was worried that her husband would go bankrupt. Philip is overjoyed: the woman he is married to is Sunset's daughter, and not a relation of his. Sandfield is also overjoyed: he gets to marry Jane, the woman he really wanted to marry all along.
Epilogue
Weatherwise ends the play with an epilogue filled with characteristic references to almanacs and the phases of the moon. Consulting his almanac, he predicts that the play will end with great applause.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "No \\begin{Bmatrix}\n Wit \\\\\n Help\n\\end{Bmatrix} Like~a~Woman's"
}
] | https://en.wikipedia.org/wiki?curid=7989113 |
7989945 | Householder operator | In linear algebra, the Householder operator is defined as follows. Let formula_0 be a finite-dimensional inner product space with inner product formula_1 and unit vector formula_2. Then
formula_3
is defined by
formula_4
This operator reflects the vector formula_5 across the hyperplane through the origin that has normal vector formula_6.
It is also common to choose a non-unit vector formula_7, and normalize it directly in the Householder operator's expression:
formula_8
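As an illustration (an editorial sketch, not part of the original article), the defining formula can be evaluated numerically with NumPy; the vectors below are arbitrary, and the checks simply confirm the reflection properties described here.

```python
import numpy as np

def householder(u, x):
    """Apply H_u(x) = x - 2 <x, u> u for a unit vector u (real inner product)."""
    u = np.asarray(u, dtype=float)
    x = np.asarray(x, dtype=float)
    return x - 2.0 * np.dot(x, u) * u

u = np.array([1.0, 1.0, 0.0])
u /= np.linalg.norm(u)           # H_u is defined for a unit vector
x = np.array([3.0, -1.0, 2.0])

y = householder(u, x)
print(y)                                       # the reflected vector
print(np.linalg.norm(x), np.linalg.norm(y))    # norms agree: a reflection is an isometry
print(householder(u, u))                       # u itself is mapped to -u
print(householder(u, y))                       # applying H_u twice returns x (involution)
```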
Properties.
The Householder operator satisfies the following properties:
It is linear: if formula_9 is a vector space over a field formula_10, then
formula_11
It is self-adjoint, and it is an involution (applying it twice gives the identity).
If formula_12, it is orthogonal; if formula_13, it is unitary.
Special cases.
Over a real or complex vector space, the Householder operator is also known as the Householder transformation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " V\\, "
},
{
"math_id": 1,
"text": " \\langle \\cdot, \\cdot \\rangle "
},
{
"math_id": 2,
"text": " u\\in V"
},
{
"math_id": 3,
"text": " H_u : V \\to V\\,"
},
{
"math_id": 4,
"text": " H_u(x) = x - 2\\,\\langle x,u \\rangle\\,u\\,."
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "u"
},
{
"math_id": 7,
"text": "q \\in V"
},
{
"math_id": 8,
"text": "H_q \\left ( x \\right ) = x - 2\\, \\frac{\\langle x, q \\rangle}{\\langle q, q \\rangle}\\, q \\,."
},
{
"math_id": 9,
"text": "V"
},
{
"math_id": 10,
"text": "K"
},
{
"math_id": 11,
"text": "\\forall \\left ( \\lambda, \\mu \\right ) \\in K^2, \\, \\forall \\left ( x, y \\right ) \\in V^2, \\, H_q \\left ( \\lambda x + \\mu y \\right ) = \\lambda \\ H_q \\left ( x \\right ) + \\mu \\ H_q \\left ( y \\right )."
},
{
"math_id": 12,
"text": "K = \\mathbb{R}"
},
{
"math_id": 13,
"text": "K = \\mathbb{C}"
}
] | https://en.wikipedia.org/wiki?curid=7989945 |
7991 | Disperser | A disperser is a one-sided extractor. Where an extractor requires that every event gets the same probability under the uniform distribution and the extracted distribution, only the latter is required for a disperser. So for a disperser, given an event formula_0 we have:
formula_1
Definition (Disperser): "A" formula_2"-disperser is a function"
formula_3
"such that for every distribution" formula_4 "on" formula_5 "with" formula_6 "the support of the distribution" formula_7 "is of size at least" formula_8.
Graph theory.
An ("N", "M", "D", "K", "e")-disperser is a bipartite graph with "N" vertices on the left side, each with degree "D", and "M" vertices on the right side, such that every subset of "K" vertices on the left side is connected to more than (1 − "e")"M" vertices on the right.
An extractor is a related type of graph that guarantees an even stronger property; every ("N", "M", "D", "K", "e")-extractor is also an ("N", "M", "D", "K", "e")-disperser.
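As an illustration (an editorial sketch, not a construction from the literature), the graph-theoretic definition above can be checked by brute force on a small made-up graph; the adjacency lists and the parameters N, M, D, K and e below are arbitrary assumptions.

```python
from itertools import combinations

def is_disperser(adj, M, K, e):
    """Check the (N, M, D, K, e)-disperser property by brute force:
    every subset of K left vertices must reach more than (1 - e) * M
    right vertices.  adj[i] is the neighbour set of left vertex i."""
    threshold = (1 - e) * M
    for subset in combinations(range(len(adj)), K):
        reached = set()
        for v in subset:
            reached |= adj[v]
        if len(reached) <= threshold:
            return False
    return True

# Toy example: N = 4 left vertices of degree D = 2, M = 4 right vertices,
# K = 2, e = 0.5, so every pair must reach more than 2 right vertices.
adj = [{0, 1}, {1, 2}, {2, 3}, {3, 0}]
print(is_disperser(adj, M=4, K=2, e=0.5))
```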
Other meanings.
A disperser is a high-speed mixing device used to disperse or dissolve pigments and other solids into a liquid.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A \\subseteq \\{0,1\\}^{m}"
},
{
"math_id": 1,
"text": "Pr_{U_{m}}[A] > 1 - \\epsilon"
},
{
"math_id": 2,
"text": "(k, \\epsilon)"
},
{
"math_id": 3,
"text": "Dis: \\{0,1\\}^{n}\\times \\{0,1\\}^{d}\\rightarrow \\{0,1\\}^{m}"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "\\{0,1\\}^{n}"
},
{
"math_id": 6,
"text": "H_{\\infty}(X) \\geq k"
},
{
"math_id": 7,
"text": "Dis(X,U_{d})"
},
{
"math_id": 8,
"text": "(1-\\epsilon)2^{m}"
}
] | https://en.wikipedia.org/wiki?curid=7991 |
7992036 | Integrodifference equation | Recurrence equation on a function space that involves integration
In mathematics, an integrodifference equation is a recurrence relation on a function space, of the following form:
formula_0
where formula_1 is a sequence in the function space and formula_2 is the domain of those functions. In most applications, for any formula_3, formula_4 is a probability density function on formula_2. Note that in the definition above, formula_5 can be vector valued, in which case each element of formula_6 has a scalar valued integrodifference equation associated with it. Integrodifference equations are widely used in mathematical biology, especially theoretical ecology, to model the dispersal and growth of populations. In this case, formula_7 is the population size or density at location formula_8 at time formula_9, formula_10 describes the local population growth at location formula_8, and formula_11 is the probability of moving from point formula_12 to point formula_8, often referred to as the dispersal kernel. Integrodifference equations are most commonly used to describe univoltine populations, including, but not limited to, many arthropod and annual plant species. However, multivoltine populations can also be modeled with integrodifference equations, as long as the organism has non-overlapping generations. In this case, formula_9 is not measured in years, but rather in the time increment between broods.
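As an illustration of how the recurrence is iterated in practice (an editorial sketch, not a model from the literature), the following code discretizes the integral on a grid, using a Gaussian dispersal kernel and a Ricker-type growth function; the grid, parameter values and initial condition are arbitrary assumptions.

```python
import numpy as np

# Spatial grid on which n_t(x) is stored.
L, m = 50.0, 1001
x = np.linspace(-L, L, m)
dx = x[1] - x[0]

def kernel(x_to, x_from, sigma=1.0):
    # Gaussian dispersal kernel k(x, y), here a function of the distance only.
    return np.exp(-(x_to - x_from) ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

def growth(n, r=1.5):
    # Local growth f(n): a Ricker-type map (one common choice for univoltine populations).
    return n * np.exp(r * (1.0 - n))

# k(x, y) sampled on the grid; the integral becomes a matrix-vector product.
K = kernel(x[:, None], x[None, :]) * dx

# Compact initial condition: a small founding population near x = 0.
n = np.where(np.abs(x) < 1.0, 0.5, 0.0)

for t in range(20):          # n_{t+1}(x) = integral of k(x, y) f(n_t(y)) dy
    n = K @ growth(n)

print(round(n.max(), 3), round(x[n > 0.01].max(), 1))   # population level and rough front position
```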
Convolution kernels and invasion speeds.
In one spatial dimension, the dispersal kernel often depends only on the distance between the source and the destination, and can be
written as formula_13. In this case, some natural conditions on f and k imply that there is a well-defined
spreading speed for waves of invasion generated from compact initial conditions. The wave speed is often calculated
by studying the linearized equation
formula_14
where formula_15.
This can be written as the convolution
formula_16
Using a moment-generating-function transformation
formula_17
it has been shown that the critical wave speed
formula_18
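As an illustration (not from the cited sources), the wave-speed formula above can be evaluated for a Gaussian kernel with standard deviation σ, for which the integral of k(s)e^{ws} equals exp(σ²w²/2); the values of R and σ are arbitrary assumptions, and the numerical minimum is compared with the resulting closed form σ√(2 ln R).

```python
import numpy as np

R, sigma = 1.5, 1.0   # assumed growth rate f'(0) and Gaussian kernel standard deviation

# For a Gaussian kernel the quantity to minimise is (ln R + sigma^2 w^2 / 2) / w.
w = np.linspace(1e-3, 10.0, 100_000)
c = (np.log(R) + 0.5 * sigma**2 * w**2) / w

print(c.min())                             # numerically minimised spreading speed
print(sigma * np.sqrt(2.0 * np.log(R)))    # closed form sigma * sqrt(2 ln R) for comparison
```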
Other types of equations used to model population dynamics through space include reaction–diffusion equations and metapopulation equations. However, diffusion equations do not as easily allow for the inclusion of explicit dispersal patterns and are only biologically accurate for populations with overlapping generations. Metapopulation equations are different from integrodifference equations in the fact that they break the population down into discrete patches rather than a continuous landscape.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " n_{t+1}(x) = \\int_{\\Omega} k(x, y)\\, f(n_t(y))\\, dy,"
},
{
"math_id": 1,
"text": "\\{n_t\\}\\,"
},
{
"math_id": 2,
"text": "\\Omega\\,"
},
{
"math_id": 3,
"text": "y\\in\\Omega\\,"
},
{
"math_id": 4,
"text": "k(x,y)\\,"
},
{
"math_id": 5,
"text": "n_t"
},
{
"math_id": 6,
"text": "\\{n_t\\}"
},
{
"math_id": 7,
"text": "n_t(x)"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "f(n_t(x))"
},
{
"math_id": 11,
"text": "k(x,y)"
},
{
"math_id": 12,
"text": "y"
},
{
"math_id": 13,
"text": "k(x-y)"
},
{
"math_id": 14,
"text": " n_{t+1} = \\int_{-\\infty}^{\\infty} k(x-y) R n_t(y) dy "
},
{
"math_id": 15,
"text": " R = \\left.\\dfrac{df}{dn}\\right|_{n=0}"
},
{
"math_id": 16,
"text": " n_{t+1} = f'(0) k * n_t "
},
{
"math_id": 17,
"text": " M(s) = \\int_{-\\infty}^{\\infty} e^{sx} n(x) dx "
},
{
"math_id": 18,
"text": " c^* = \\min_{ w > 0 } \\left[\\frac{1}{w} \\ln \\left( R \\int_{-\\infty}^{\\infty} k(s) e^{w s} ds \\right) \\right] "
}
] | https://en.wikipedia.org/wiki?curid=7992036 |
7992717 | Fréchet surface | In mathematics, a Fréchet surface is an equivalence class of parametrized surfaces in a metric space. In other words, a Fréchet surface is a way of thinking about surfaces independently of how they are "written down" (parametrized). The concept is named after the French mathematician Maurice Fréchet.
Definitions.
Let formula_0 be a compact 2-dimensional manifold, either closed or with boundary, and let formula_1 be a metric space. A parametrized surface in formula_2 is a map
formula_3
that is continuous with respect to the topology on formula_0 and the metric topology on formula_4 Let
formula_5
where the infimum is taken over all homeomorphisms formula_6 of formula_0 to itself. Call two parametrized surfaces formula_7 and "g" in formula_2 equivalent if and only if
formula_8
An equivalence class formula_9 of parametrized surfaces under this notion of equivalence is called a Fréchet surface; each of the parametrized surfaces in this equivalence class is called a parametrization of the Fréchet surface formula_10
Properties.
Many properties of parametrized surfaces are actually properties of the Fréchet surface, that is, of the whole equivalence class, and not of any particular parametrization.
For example, given two Fréchet surfaces, the value of formula_11 is independent of the choice of the parametrizations formula_7 and formula_12 and is called the Fréchet distance between the Fréchet surfaces.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "(X, d)"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "f : M \\to X"
},
{
"math_id": 4,
"text": "X."
},
{
"math_id": 5,
"text": "\\rho(f, g) = \\inf_{\\sigma} \\max_{x \\in M} d(f(x), g(\\sigma(x))),"
},
{
"math_id": 6,
"text": "\\sigma"
},
{
"math_id": 7,
"text": "f"
},
{
"math_id": 8,
"text": "\\rho(f, g) = 0."
},
{
"math_id": 9,
"text": "[f]"
},
{
"math_id": 10,
"text": "[f]."
},
{
"math_id": 11,
"text": "\\rho(f, g)"
},
{
"math_id": 12,
"text": "g,"
}
] | https://en.wikipedia.org/wiki?curid=7992717 |
799405 | Shear mapping | Type of geometric transformation
In plane geometry, a shear mapping is an affine transformation that displaces each point in a fixed direction by an amount proportional to its signed distance from a given line parallel to that direction. This type of mapping is also called shear transformation, transvection, or just shearing. The transformations can be applied with a shear matrix or transvection, an elementary matrix that represents the addition of a multiple of one row or column to another. Such a matrix may be derived by taking the identity matrix and replacing one of the zero elements with a non-zero value.
An example is the linear map that takes any point with coordinates formula_0 to the point formula_1. In this case, the displacement is horizontal by a factor of 2 where the fixed line is the x-axis, and the signed distance is the y-coordinate. Note that points on opposite sides of the reference line are displaced in opposite directions.
Shear mappings must not be confused with rotations. Applying a shear map to a set of points of the plane will change all angles between them (except straight angles), and the length of any line segment that is not parallel to the direction of displacement. Therefore, it will usually distort the shape of a geometric figure, for example turning squares into parallelograms, and circles into ellipses. However a shearing does preserve the area of geometric figures and the alignment and relative distances of collinear points. A shear mapping is the main difference between the upright and slanted (or italic) styles of letters.
The same definition is used in three-dimensional geometry, except that the distance is measured from a fixed plane. A three-dimensional shearing transformation preserves the volume of solid figures, but changes areas of plane figures (except those that are parallel to the displacement).
This transformation is used to describe laminar flow of a fluid between plates, one moving in a plane above and parallel to the first.
In the general n-dimensional Cartesian space R^n, the distance is measured from a fixed hyperplane parallel to the direction of displacement. This geometric transformation is a linear transformation of R^n that preserves the n-dimensional measure (hypervolume) of any set.
Definition.
Horizontal and vertical shear of the plane.
In the plane formula_2, a horizontal shear (or shear parallel to the x-axis) is a function that takes a generic point with coordinates formula_0 to the point formula_3; where m is a fixed parameter, called the shear factor.
The effect of this mapping is to displace every point horizontally by an amount proportionally to its y-coordinate. Any point above the x-axis is displaced to the right (increasing x) if "m" > 0, and to the left if "m" < 0. Points below the x-axis move in the opposite direction, while points on the axis stay fixed.
Straight lines parallel to the x-axis remain where they are, while all other lines are turned (by various angles) about the point where they cross the x-axis. Vertical lines, in particular, become oblique lines with slope formula_4 Therefore, the shear factor m is the cotangent of the shear angle formula_5 between the former verticals and the x-axis. (In the example on the right the square is tilted by 30°, so the shear angle is 60°.)
If the coordinates of a point are written as a column vector (a 2×1 matrix), the shear mapping can be written as multiplication by a 2×2 matrix:
formula_6
A vertical shear (or shear parallel to the y-axis) of lines is similar, except that the roles of x and y are swapped. It corresponds to multiplying the coordinate vector by the transposed matrix:
formula_7
The vertical shear displaces points to the right of the y-axis up or down, depending on the sign of m. It leaves vertical lines invariant, but tilts all other lines about the point where they meet the y-axis. Horizontal lines, in particular, get tilted by the shear angle formula_5 to become lines with slope m.
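As an illustration (an editorial sketch), the two matrices above can be applied to points with NumPy; the shear factor 2 matches the example in the introduction.

```python
import numpy as np

m = 2.0                                    # shear factor, as in the introductory example
H = np.array([[1.0, m], [0.0, 1.0]])       # horizontal shear
V = H.T                                    # vertical shear with the same factor

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float).T   # columns are points
print(H @ square)                 # the unit square becomes a parallelogram
print(np.linalg.det(H))           # determinant 1: the area is preserved
print(V @ np.array([0.0, 3.0]))   # a point on the y-axis is fixed by the vertical shear
```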
Composition.
Two or more shear transformations can be combined.
If two shear matrices are formula_8 and formula_9
then their composition matrix is
formula_10
which also has determinant 1, so that area is preserved.
In particular, if formula_11, we have
formula_12
which is a positive definite matrix.
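As a quick numerical check of the composition formula (an editorial sketch with arbitrary shear factors):

```python
import numpy as np

lam, mu = 0.5, 2.0
A = np.array([[1.0, lam], [0.0, 1.0]])   # horizontal shear
B = np.array([[1.0, 0.0], [mu, 1.0]])    # vertical shear

C = A @ B
print(C)                  # equals [[1 + lam*mu, lam], [mu, 1]], as stated above
print(np.linalg.det(C))   # still 1, so area is preserved by the composition
```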
Higher dimensions.
A typical shear matrix is of the form
formula_13
This matrix shears parallel to the x axis in the direction of the fourth dimension of the underlying vector space.
A shear parallel to the x axis results in formula_14 and formula_15. In matrix form:
formula_16
Similarly, a shear parallel to the y axis has formula_17 and formula_18. In matrix form:
formula_19
In 3D space this matrix shears the YZ plane into the diagonal plane passing through these 3 points: formula_20 formula_21 formula_22
formula_23
The determinant will always be 1, as no matter where the shear element is placed, it will be a member of a skew-diagonal that also contains zero elements (as all skew-diagonals have length at least two) hence its product will remain zero and will not contribute to the determinant. Thus every shear matrix has an inverse, and the inverse is simply a shear matrix with the shear element negated, representing a shear transformation in the opposite direction. In fact, this is part of an easily derived more general result: if S is a shear matrix with shear element λ, then Sn is a shear matrix whose shear element is simply "n"λ. Hence, raising a shear matrix to a power n multiplies its shear factor by n.
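As a numerical check of the statements about the inverse and powers of a shear matrix (an editorial sketch with an arbitrary shear element):

```python
import numpy as np

lam = 0.7
S = np.array([[1.0, lam], [0.0, 1.0]])

print(np.linalg.inv(S))                 # the shear with the element negated
print(np.linalg.matrix_power(S, 5))     # shear element multiplied by 5
print(np.linalg.det(S))                 # determinant 1
```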
Properties.
If S is an "n" × "n" shear matrix, then S has determinant 1 (and is therefore invertible), its inverse is the shear matrix with the shear element negated, and every eigenvalue of S is equal to 1.
General shear mappings.
For a vector space V and subspace W, a shear fixing W translates all vectors in a direction parallel to W.
To be more precise, if V is the direct sum of W and W′, and we write vectors as
formula_24
correspondingly, the typical shear L fixing W is
formula_25
where M is a linear mapping from W′ into W. Therefore in block matrix terms L can be represented as
formula_26
Applications.
The following applications of shear mapping were noted by William Kingdon Clifford:
"A succession of shears will enable us to reduce any figure bounded by straight lines to a triangle of equal area."
"... we may shear any triangle into a right-angled triangle, and this will not alter its area. Thus the area of any triangle is half the area of the rectangle on the same base and with height equal to the perpendicular on the base from the opposite angle."
The area-preserving property of a shear mapping can be used for results involving area. For instance, the Pythagorean theorem has been illustrated with shear mapping as well as the related geometric mean theorem.
Shear matrices are often used in computer graphics.
An algorithm due to Alan W. Paeth uses a sequence of three shear mappings (horizontal, vertical, then horizontal again) to rotate a digital image by an arbitrary angle. The algorithm is very simple to implement, and very efficient, since each step processes only one column or one row of pixels at a time.
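The decomposition behind the algorithm can be checked on the 2×2 matrices themselves. The sketch below (an editorial illustration, not code from Paeth's paper) uses the shear factors −tan(θ/2) for the two horizontal shears and sin θ for the vertical shear, which reproduce the rotation matrix; in an image pipeline each shear would instead be applied one row or column of pixels at a time.

```python
import numpy as np

theta = np.radians(30)
a = -np.tan(theta / 2)           # factor of the first and third (horizontal) shears
b = np.sin(theta)                # factor of the middle (vertical) shear

X = np.array([[1.0, a], [0.0, 1.0]])   # horizontal shear
Y = np.array([[1.0, 0.0], [b, 1.0]])   # vertical shear

print(X @ Y @ X)                 # product of the three shears
print(np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]]))   # rotation matrix for comparison
```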
In typography, normal text transformed by a shear mapping results in oblique type.
In pre-Einsteinian Galilean relativity, transformations between frames of reference are shear mappings called Galilean transformations. These are also sometimes seen when describing moving reference frames relative to a "preferred" frame, sometimes referred to as absolute time and space.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(x,y)"
},
{
"math_id": 1,
"text": "(x + 2y,y)"
},
{
"math_id": 2,
"text": "\\R^2 = \\R\\times\\R"
},
{
"math_id": 3,
"text": "(x + m y,y)"
},
{
"math_id": 4,
"text": "\\tfrac 1 m."
},
{
"math_id": 5,
"text": "\\varphi"
},
{
"math_id": 6,
"text": "\n \\begin{pmatrix}x^\\prime \\\\y^\\prime \\end{pmatrix} =\n \\begin{pmatrix}x + m y \\\\y \\end{pmatrix} =\n \\begin{pmatrix}1 & m\\\\0 & 1\\end{pmatrix} \n \\begin{pmatrix}x \\\\y \\end{pmatrix}.\n"
},
{
"math_id": 7,
"text": "\n \\begin{pmatrix}x^\\prime \\\\y^\\prime \\end{pmatrix} = \n \\begin{pmatrix}x \\\\ m x + y \\end{pmatrix} = \n \\begin{pmatrix}1 & 0\\\\m & 1\\end{pmatrix} \n \\begin{pmatrix}x \\\\y \\end{pmatrix}.\n"
},
{
"math_id": 8,
"text": "\\begin{pmatrix} 1 & \\lambda \\\\ 0 & 1 \\end{pmatrix}"
},
{
"math_id": 9,
"text": "\\begin{pmatrix} 1 & 0 \\\\ \\mu & 1 \\end{pmatrix}"
},
{
"math_id": 10,
"text": "\\begin{pmatrix} 1 & \\lambda \\\\ 0 & 1 \\end{pmatrix}\\begin{pmatrix} 1 & 0 \\\\ \\mu & 1\\end{pmatrix} = \\begin{pmatrix} 1 + \\lambda\\mu & \\lambda \\\\ \\mu & 1 \\end{pmatrix},"
},
{
"math_id": 11,
"text": "\\lambda=\\mu"
},
{
"math_id": 12,
"text": "\\begin{pmatrix} 1 + \\lambda^2 & \\lambda \\\\ \\lambda & 1 \\end{pmatrix},"
},
{
"math_id": 13,
"text": "S = \\begin{pmatrix}\n1 & 0 & 0 & \\lambda & 0 \\\\\n0 & 1 & 0 & 0 & 0 \\\\\n0 & 0 & 1 & 0 & 0 \\\\\n0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 0 & 0 & 1\n\\end{pmatrix}."
},
{
"math_id": 14,
"text": "x' = x + \\lambda y"
},
{
"math_id": 15,
"text": "y' = y"
},
{
"math_id": 16,
"text": "\\begin{pmatrix} x' \\\\ y' \\end{pmatrix} = \n\\begin{pmatrix}\n1 & \\lambda \\\\\n0 & 1\n\\end{pmatrix}\n\\begin{pmatrix} x \\\\ y \\end{pmatrix}."
},
{
"math_id": 17,
"text": "x' = x"
},
{
"math_id": 18,
"text": "y' = y + \\lambda x"
},
{
"math_id": 19,
"text": "\\begin{pmatrix}x' \\\\ y' \\end{pmatrix} =\n\\begin{pmatrix}\n1 & 0 \\\\\n\\lambda & 1\n\\end{pmatrix}\n\\begin{pmatrix} x \\\\ y \\end{pmatrix}."
},
{
"math_id": 20,
"text": "(0, 0, 0)"
},
{
"math_id": 21,
"text": "(\\lambda, 1, 0)"
},
{
"math_id": 22,
"text": "(\\mu, 0, 1)"
},
{
"math_id": 23,
"text": "S = \\begin{pmatrix}\n1 & \\lambda & \\mu \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1\n\\end{pmatrix}."
},
{
"math_id": 24,
"text": "v=w+w'"
},
{
"math_id": 25,
"text": "L(v) = (Lw+Lw') = (w+Mw') + w',"
},
{
"math_id": 26,
"text": "\\begin{pmatrix} I & M \\\\ 0 & I \\end{pmatrix}. "
}
] | https://en.wikipedia.org/wiki?curid=799405 |
799521 | Voltage multiplier | Electrical circuit power converter
A voltage multiplier is an electrical circuit that converts AC electrical power from a lower voltage to a higher DC voltage, typically using a network of capacitors and diodes.
Voltage multipliers can be used to generate a few volts for electronic appliances, to millions of volts for purposes such as high-energy physics experiments and lightning safety testing. The most common type of voltage multiplier is the half-wave series multiplier, also called the Villard cascade (but actually invented by Heinrich Greinacher).
Operation.
Assuming that the peak voltage of the AC source is +Us, and that the C values are sufficiently high to allow, when charged, that a current flows with no significant change in voltage, then the (simplified) working of the cascade is as follows:
Adding an additional stage will increase the output voltage by twice the peak AC source voltage (minus losses due to the diodes ‒ see the next paragraph).
In reality, more cycles are required for C4 to reach the full voltage, and the voltage of each capacitor is lowered by the forward voltage drop (Uf) of each diode on the path to that capacitor. For example, the voltage of C4 in the example would be at most 2Us - 4Uf since there are 4 diodes between its positive terminal and the source. The total output voltage would be U(C2) + U(C4) = (2Us - 2Uf) + (2Us - 4Uf) = 4Us - 6Uf. In a cascade with n stages of two diodes and two capacitors, the output voltage is equal to 2n Us - n(n+1) Uf. The term n(n+1) Uf represents the sum of voltage losses caused by diodes, over all capacitors on the output side (i.e. on the right side in the example ‒ C2 and C4). For example, if we have 2 stages as in the example, the total loss is 2+4 = 2*(2+1) = 6 times Uf. An additional stage will increase the output voltage by twice the source voltage, minus the forward voltage drop over 2n+2 diodes: 2Us - (2n+2)Uf.
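As an illustration of the output-voltage formula above (an editorial sketch; the 0.7 V diode drop and the 325 V peak, roughly corresponding to 230 V mains, are assumed example values):

```python
def cw_output(n_stages, us, uf=0.7):
    # 2*n*Us minus the diode losses n*(n+1)*Uf, as derived above.
    return 2 * n_stages * us - n_stages * (n_stages + 1) * uf

for n in (1, 2, 3, 4):
    print(n, cw_output(n, us=325.0))   # assumed peak of roughly 325 V (about 230 V RMS mains)
```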
Voltage doubler and tripler.
A voltage doubler uses two stages to approximately double the DC voltage that would have been obtained from a single-stage rectifier. An example of a voltage doubler is found in the input stage of switch mode power supplies containing a SPDT switch to select either 120 V or 240 V supply. In the 120 V position the input is typically configured as a full-wave voltage doubler by opening one AC connection point of a bridge rectifier, and connecting the input to the junction of two series-connected filter capacitors. For 240 V operation, the switch configures the system as a full-wave bridge, re-connecting the capacitor center-tap wire to the open AC terminal of a bridge rectifier system. This allows 120 or 240 V operation with the addition of a simple SPDT switch.
A voltage tripler is a popular three-stage voltage multiplier. The output voltage of a tripler is in practice below three times the peak input voltage due to its high impedance, caused in part by the fact that as each capacitor in the chain supplies power to the next, it partially discharges, losing voltage as it does so.
Triplers were commonly used in color television receivers to provide the high voltage for the cathode ray tube (CRT, picture tube).
Triplers are still used in high voltage supplies such as copiers, laser printers, bug zappers and electroshock weapons.
Breakdown voltage.
While the multiplier can be used to produce thousands of volts of output, the individual components do not need to be rated to withstand the entire voltage range. Each component only needs to be concerned with the relative voltage differences directly across its own terminals and of the components immediately adjacent to it.
Typically a voltage multiplier will be physically arranged like a ladder, so that the progressively increasing voltage potential is not given the opportunity to arc across to the much lower potential sections of the circuit.
Note that some safety margin is needed across the relative range of voltage differences in the multiplier, so that the ladder can survive the shorted failure of at least one diode or capacitor component. Otherwise a single-point shorting failure could successively over-voltage and destroy each next component in the multiplier, potentially destroying the entire multiplier chain.
Other circuit topologies.
An even number of diode-capacitor cells is used in any column so that the cascade ends on a smoothing cell. If it were odd and ended on a clamping cell the ripple voltage would be very large. Larger capacitors in the connecting column also reduce ripple but at the expense of charging time and increased diode current.
Dickson charge pump.
The Dickson charge pump, or Dickson multiplier, is a modification of the Greinacher/Cockcroft–Walton multiplier. There are, however, several important differences:
To describe the ideal operation of the circuit, number the diodes D1, D2 etc. from left to right and the capacitors C1, C2 etc. When the clock formula_0 is low, D1 will charge C1 to "V"in. When formula_0 goes high the top plate of C1 is pushed up to 2"V"in. D1 is then turned off and D2 turned on and C2 begins to charge to 2"V"in. On the next clock cycle formula_0 again goes low and now formula_1 goes high pushing the top plate of C2 to 3"V"in. D2 switches off and D3 switches on, charging C3 to 3"V"in and so on with charge passing up the chain, hence the name charge pump. The final diode-capacitor cell in the cascade is connected to ground rather than a clock phase and hence is not a multiplier; it is a peak detector which merely provides smoothing.
There are a number of factors which reduce the output from the ideal case of "nV"in. One of these is the threshold voltage, "V"T of the switching device, that is, the voltage required to turn it on. The output will be reduced by at least "nV"T due to the volt drops across the switches. Schottky diodes are commonly used in Dickson multipliers for their low forward voltage drop, amongst other reasons. Another difficulty is that there are parasitic capacitances to ground at each node. These parasitic capacitances act as voltage dividers with the circuit's storage capacitors reducing the output voltage still further. Up to a point, a higher clock frequency is beneficial: the ripple is reduced and the high frequency makes the remaining ripple easier to filter. Also the size of capacitors needed is reduced since less charge needs to be stored per cycle. However, losses through stray capacitance increase with increasing clock frequency and a practical limit is around a few hundred kilohertz.
Dickson multipliers are frequently found in integrated circuits (ICs) where they are used to increase a low-voltage battery supply to the voltage needed by the IC. It is advantageous to the IC designer and manufacturer to be able to use the same technology and the same basic device throughout the IC. For this reason, in the popular CMOS technology ICs the transistor which forms the basic building block of circuits is the MOSFET. Consequently, the diodes in the Dickson multiplier are often replaced with MOSFETs wired to behave as diodes.
The diode-wired MOSFET version of the Dickson multiplier does not work very well at very low voltages because of the large drain-source volt drops of the MOSFETs. Frequently, a more complex circuit is used to overcome this problem. One solution is to connect in parallel with the switching MOSFET another MOSFET biased into its linear region. This second MOSFET has a lower drain-source voltage than the switching MOSFET would have on its own (because the switching MOSFET is driven hard on) and consequently the output voltage is increased. The gate of the linear biased MOSFET is connected to the output of the next stage so that it is turned off while the next stage is charging from the previous stage's capacitor. That is, the linear-biased transistor is turned off at the same time as the switching transistor.
An ideal 4-stage Dickson multiplier (5× multiplier) with an input of 1.5 V would have an output of 7.5 V. However, a diode-wired MOSFET 4-stage multiplier might only have an output of 2 V. Adding parallel MOSFETs in the linear region improves this to around 4 V. More complex circuits still can achieve an output much closer to the ideal case.
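As a rough numerical illustration of the figures quoted above (an editorial sketch; the 0.6 V threshold is an assumed value, and the estimate ignores body effect and parasitic capacitance, so it is only an optimistic bound, consistent with the "at least nVT" reduction described earlier):

```python
def dickson_ideal(n_stages, vin):
    # Ideal n-stage Dickson multiplier: (n + 1) * Vin.
    return (n_stages + 1) * vin

def dickson_threshold_estimate(n_stages, vin, vt):
    # Optimistic estimate subtracting only the n*Vt switch drops; parasitic
    # capacitance and body effect reduce the real output further.
    return dickson_ideal(n_stages, vin) - n_stages * vt

print(dickson_ideal(4, 1.5))                     # 7.5 V, the ideal figure quoted above
print(dickson_threshold_estimate(4, 1.5, 0.6))   # 5.1 V with an assumed Vt of 0.6 V
```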
Many other variations and improvements to the basic Dickson circuit exist. Some attempt to reduce the switching threshold voltage such as the Mandal-Sarpeshkar multiplier or the Wu multiplier. Other circuits cancel out the threshold voltage: the Umeda multiplier does it with an externally provided voltage and the Nakamoto multiplier does it with internally generated voltage. The Bergeret multiplier concentrates on maximising power efficiency.
Modification for RF power.
In CMOS integrated circuits clock signals are readily available, or else easily generated. This is not always the case in RF integrated circuits, but often a source of RF power will be available. The standard Dickson multiplier circuit can be modified to meet this requirement by simply grounding the normal input and one of the clock inputs. RF power is injected into the other clock input, which then becomes the circuit input. The RF signal is effectively the clock as well as the source of power. However, since the clock is injected only into every other node the circuit only achieves a stage of multiplication for every second diode-capacitor cell. The other diode-capacitor cells are merely acting as peak detectors and smoothing the ripple without increasing the multiplication.
Cross-coupled switched capacitor.
A voltage multiplier may be formed of a cascade of voltage doublers of the cross-coupled switched capacitor type. This type of circuit is typically used instead of a Dickson multiplier when the source voltage is 1.2 V or less. Dickson multipliers have increasingly poor power conversion efficiency as the input voltage drops because the voltage drop across the diode-wired transistors becomes much more significant compared to the output voltage. Since the transistors in the cross-coupled circuit are not diode-wired the volt-drop problem is not so serious.
The circuit works by alternately switching the output of each stage between a voltage doubler driven by formula_0 and one driven by formula_1. This behaviour leads to another advantage over the Dickson multiplier: reduced ripple voltage at double the frequency. The increase in ripple frequency is advantageous because it is easier to remove by filtering. Each stage (in an ideal circuit) raises the output voltage by the peak clock voltage. Assuming that this is the same level as the DC input voltage then an "n" stage multiplier will (ideally) output "nV"in. The chief cause of losses in the cross-coupled circuit is parasitic capacitance rather than switching threshold voltage. The losses occur because some of the energy has to go into charging up the parasitic capacitances on each cycle.
Applications.
The high-voltage supplies for cathode-ray tubes (CRTs) in TVs often use voltage multipliers with the final-stage smoothing capacitor formed by the interior and exterior aquadag coatings on the CRT itself. CRTs were formerly a common component in television sets. Voltage multipliers can still be found in modern TVs, photocopiers, and bug zappers.
High voltage multipliers are used in spray painting equipment, most commonly found in automotive manufacturing facilities. A voltage multiplier with an output of about 100kV is used in the nozzle of the paint sprayer to electrically charge the atomized paint particles which then get attracted to the oppositely charged metal surfaces to be painted. This helps reduce the volume of paint used and helps in spreading an even coat of paint.
A common type of voltage multiplier used in high-energy physics is the Cockcroft–Walton generator (which was designed by John Douglas Cockcroft and Ernest Thomas Sinton Walton for a particle accelerator for use in research that won them the Nobel Prize in Physics in 1951).
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi_1"
},
{
"math_id": 1,
"text": "\\phi_2"
}
] | https://en.wikipedia.org/wiki?curid=799521 |
7996261 | Convergence in measure | Convergence in measure is either of two distinct mathematical concepts both of which generalize
the concept of convergence in probability.
Definitions.
Let formula_0 be measurable functions on a measure space formula_1 The sequence formula_2 is said to <templatestyles src="Template:Visible anchor/styles.css" />converge globally in measure to formula_3 if for every formula_4
formula_5
and to <templatestyles src="Template:Visible anchor/styles.css" />converge locally in measure to formula_3 if for every formula_6 and every formula_7 with
formula_8
formula_9
On a finite measure space, both notions are equivalent. Otherwise, convergence in measure can refer to either global convergence in measure or local convergence in measure, depending on the author.
Properties.
Throughout, "f" and "f""n" ("n" formula_10 N) are measurable functions "X" → R.
Counterexamples.
Let formula_12, "μ" be Lebesgue measure, and "f" the constant function with value zero.
The sequence formula_13 converges to "f" locally in measure, but does not converge to "f" globally in measure.
The sequence formula_14 where formula_15 and formula_16 (whose first five terms are formula_17) converges to "f" globally in measure, but it does not converge to "f" at any point of [0,1], and hence does not converge almost everywhere.
The sequence formula_18 converges to "f" almost everywhere and globally in measure, but not in the "p"-norm for any formula_19.
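As an illustration (an editorial sketch), the second sequence above can be checked numerically: the measure of the set where it exceeds any fixed ε in (0, 1] shrinks like 1/2^k, even though the indicator intervals keep sweeping over every point of [0, 1].

```python
import numpy as np

def interval(n):
    # The n-th "typewriter" interval [j / 2^k, (j + 1) / 2^k] on [0, 1].
    k = int(np.floor(np.log2(n)))
    j = n - 2**k
    return j / 2**k, (j + 1) / 2**k

for n in (1, 10, 100, 1000, 10000):
    a, b = interval(n)
    # mu({x : |f_n(x)| >= eps}) for any eps in (0, 1] is just the interval length.
    print(n, b - a)

# The measures tend to 0 (convergence in measure), yet every point of [0, 1]
# lies in infinitely many of the intervals, so f_n does not converge pointwise.
```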
Topology.
There is a topology, called the topology of (local) convergence in measure, on the collection of measurable functions from "X" such that local convergence in measure corresponds to convergence on that topology.
This topology is defined by the family of pseudometrics
formula_20
where
formula_21
In general, one may restrict oneself to some subfamily of sets "F" (instead of all possible subsets of finite measure). It suffices that for each formula_22 of finite measure and formula_23 there exists "F" in the family such that formula_24 When formula_25, we may consider only one metric formula_26, so the topology of convergence in finite measure is metrizable. If formula_27 is an arbitrary measure finite or not, then
formula_28
still defines a metric that generates the global convergence in measure.
Because this topology is generated by a family of pseudometrics, it is uniformizable.
Working with uniform structures instead of topologies allows us to formulate uniform properties such as
Cauchyness.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f, f_n\\ (n \\in \\mathbb N): X \\to \\mathbb R"
},
{
"math_id": 1,
"text": "(X, \\Sigma, \\mu)."
},
{
"math_id": 2,
"text": "f_n"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "\\varepsilon > 0,"
},
{
"math_id": 5,
"text": "\\lim_{n\\to\\infty} \\mu(\\{x \\in X: |f(x)-f_n(x)|\\geq \\varepsilon\\}) = 0,"
},
{
"math_id": 6,
"text": "\\varepsilon>0"
},
{
"math_id": 7,
"text": "F \\in \\Sigma"
},
{
"math_id": 8,
"text": "\\mu (F) < \\infty,"
},
{
"math_id": 9,
"text": "\\lim_{n\\to\\infty} \\mu(\\{x \\in F: |f(x)-f_n(x)|\\geq \\varepsilon\\}) = 0."
},
{
"math_id": 10,
"text": "\\in"
},
{
"math_id": 11,
"text": "\\mu (X)<\\infty"
},
{
"math_id": 12,
"text": "X = \\Reals"
},
{
"math_id": 13,
"text": "f_n = \\chi_{[n,\\infty)}"
},
{
"math_id": 14,
"text": "f_n = \\chi_{\\left[\\frac{j}{2^k},\\frac{j+1}{2^k}\\right]}"
},
{
"math_id": 15,
"text": "k = \\lfloor \\log_2 n\\rfloor"
},
{
"math_id": 16,
"text": "j=n-2^k"
},
{
"math_id": 17,
"text": "\\chi_{\\left[0,1\\right]},\\;\\chi_{\\left[0,\\frac12\\right]},\\;\\chi_{\\left[\\frac12,1\\right]},\\;\\chi_{\\left[0,\\frac14\\right]},\\;\\chi_{\\left[\\frac14,\\frac12\\right]}"
},
{
"math_id": 18,
"text": "f_n = n\\chi_{\\left[0,\\frac1n\\right]}"
},
{
"math_id": 19,
"text": "p \\geq 1"
},
{
"math_id": 20,
"text": "\\{\\rho_F : F \\in \\Sigma,\\ \\mu (F) < \\infty\\},"
},
{
"math_id": 21,
"text": "\\rho_F(f,g) = \\int_F \\min\\{|f-g|,1\\}\\, d\\mu."
},
{
"math_id": 22,
"text": "G\\subset X"
},
{
"math_id": 23,
"text": " \\varepsilon > 0 "
},
{
"math_id": 24,
"text": "\\mu(G\\setminus F)<\\varepsilon."
},
{
"math_id": 25,
"text": " \\mu(X) < \\infty "
},
{
"math_id": 26,
"text": "\\rho_X"
},
{
"math_id": 27,
"text": "\\mu"
},
{
"math_id": 28,
"text": "d(f,g) := \\inf\\limits_{\\delta>0} \\mu(\\{|f-g|\\geq\\delta\\}) + \\delta"
}
] | https://en.wikipedia.org/wiki?curid=7996261 |
799649 | Set of uniqueness | In mathematics, a set of uniqueness is a concept relevant to trigonometric expansions which are not necessarily Fourier series. Their study is a relatively pure branch of harmonic analysis.
Definition.
A subset "E" of the circle is called a set of uniqueness, or a "U"-set, if any trigonometric expansion
formula_0
which converges to zero for formula_1 is identically zero; that is, such that
"c"("n") = 0 for all "n".
Otherwise, "E" is a set of multiplicity (sometimes called an "M"-set or a Menshov set). Analogous definitions apply on the real line, and in higher dimensions. In the latter case, one needs to specify the order of summation, e.g. "a set of uniqueness with respect to summing over balls".
To understand the importance of the definition, it is important to get out of the Fourier mind-set. In Fourier analysis there is no question of uniqueness, since the coefficients "c"("n") are derived by integrating the function. Hence, in Fourier analysis the order of actions is
formula_2
In the theory of uniqueness, the order is different:
In effect, it is usually sufficiently interesting (as in the definition above) to assume that the sum converges to zero and ask if that means that all the "c"("n") must be zero. As is usual in analysis, the most interesting questions arise when one discusses pointwise convergence. Hence the definition above, which arose when it became clear that neither "convergence everywhere" nor "convergence almost everywhere" give a satisfactory answer.
Early research.
The empty set is a set of uniqueness. This simply means that if a trigonometric series converges to zero "everywhere" then it is trivial. This was proved by Riemann, using a delicate technique of double formal integration; and showing that the resulting sum has some generalized kind of second derivative using Toeplitz operators. Later on, Georg Cantor generalized Riemann's techniques to show that any countable, closed set is a set of uniqueness, a discovery which led him to the development of set theory. Paul Cohen, another innovator in set theory, started his career with a thesis on sets of uniqueness.
As the theory of Lebesgue integration developed, it was assumed that any set of zero measure would be a set of uniqueness — in one dimension the locality principle for Fourier series shows that any set of positive measure is a set of multiplicity (in higher dimensions this is still an open question). This was disproved by Dimitrii E. Menshov who in 1916 constructed an example of a set of multiplicity which has measure zero.
Transformations.
A translation and dilation of a set of uniqueness is a set of uniqueness. A union of a countable family of "closed" sets of uniqueness is a set of uniqueness. There exists an example of two sets of uniqueness whose union is not a set of uniqueness, but the sets in this example are not Borel. It is an open problem whether the union of any two Borel sets of uniqueness is a set of uniqueness.
Singular distributions.
A closed set is a set of uniqueness if and only if there exists a distribution "S" supported on the set (so in particular it must be singular) such that
formula_3
(formula_4 here are the Fourier coefficients). In all early examples of sets of uniqueness, the distribution in question was in fact a measure. In 1954, though, Ilya Piatetski-Shapiro constructed an example of a set of uniqueness which does not support any measure with Fourier coefficients tending to zero. In other words, the generalization of distribution is necessary.
Complexity of structure.
The first evidence that sets of uniqueness have complex structure came from the study of Cantor-like sets. Raphaël Salem and Zygmund showed that a Cantor-like set with dissection ratio ξ is a set of uniqueness if and only if 1/ξ is a Pisot number, that is an algebraic integer with the property that all its conjugates (if any) are smaller than 1. This was the first demonstration that the property of being a set of uniqueness has to do with "arithmetic" properties and not just some concept of size (Nina Bari had proved the case of ξ rational -- the Cantor-like set is a set of uniqueness if and only if 1/ξ is an integer -- a few years earlier).
Since the 50s, much work has gone into formalizing this complexity. The family of sets of uniqueness, considered as a set inside the space of compact sets (see Hausdorff distance), was located inside the analytical hierarchy. A crucial part in this research is played by the "index" of the set, which is an ordinal between 1 and ω1, first defined by Pyatetskii-Shapiro. Nowadays the research of sets of uniqueness is just as much a branch of descriptive set theory as it is of harmonic analysis. | [
{
"math_id": 0,
"text": "\\sum_{n=-\\infty}^{\\infty}c(n)e^{int}"
},
{
"math_id": 1,
"text": " t\\notin E"
},
{
"math_id": 2,
"text": "c(n)=\\int_0^{2\\pi}f(t)e^{-int}\\,dt"
},
{
"math_id": 3,
"text": "\\lim_{n\\to \\infty}\\widehat{S}(n)=0"
},
{
"math_id": 4,
"text": "\\hat S(n)"
}
] | https://en.wikipedia.org/wiki?curid=799649 |
7996832 | Umbrella antenna | An umbrella antenna is a capacitively top-loaded wire monopole antenna, consisting in most cases of a mast fed at the ground end, to which a number of radial wires are connected at the top, sloping downwards. One side of the feedline supplying power from the transmitter is connected to the mast, and the other side to a ground (Earthing) system of radial wires buried in the earth under the antenna. They are used as transmitting antennas below 1 MHz, in the MF, LF and particularly the VLF bands, at frequencies sufficiently low that it is impractical or infeasible to build a full size quarter-wave monopole antenna. The outer end of each radial wire, sloping down from the top of the antenna, is connected by an insulator to a supporting rope or cable anchored to the ground; the radial wires can also support the mast as guy wires. The radial wires make the antenna look like the wire frame of a giant umbrella (without the cloth) hence the name.
Design.
The antenna is supported by a central steel tubular or lattice mast. The top of the mast is attached to a ring of equally spaced radial wires extending diagonally to near the ground, where each is attached with a strain insulator to a length of non-radiating wire or rope which is anchored to the ground. The umbrella wires may also serve structurally as guy lines to support the mast. There are several different methods of feeding power from the transmitter to the antenna:
In base feed, the mast is supported on a thick ceramic insulator which keeps it insulated from the ground, and the feedline from the transmitter is attached to the base of the mast. The conductive steel mast serves as the monopole radiator. Alternately, in high power antennas, the mast is grounded, the umbrella wires are insulated where they connect to the central mast, and are attached to vertical radiator wires that hang down parallel to the mast which are fed at the bottom. This construction is used in high power antennas in which the very high voltage on the antenna would make it difficult to insulate the mast from the ground.
Under the antenna is a large ground (Earthing) system connected to the opposite side of the feedline, consisting of wires buried in the Earth extending radially from the base of the mast out to the edge of the umbrella wires.
Alternatively, in radial feed, the antenna can be fed power by applying the transmitter current to the ends of one or more of the radial wires instead of the mast. In this case the central mast is grounded. As with wire feeders, this avoids the need for a mast support insulator, and also does not require an isolator in the power cables for the mast's aircraft warning lights. This construction was used in three large umbrella antennas for the obsolete Omega navigation system which operated at 10–14 kHz, to eliminate the very difficult problem of insulating the mast base against the 200 kV antenna potential.
Since the antenna is shorter than the resonant length of one-quarter wavelength, it has capacitance. In order to cancel the capacitive reactance and make it resonant so it can be fed power efficiently, an impedance matching inductor called a loading coil is connected in series with the feedline at the base of the antenna.
Operation.
The vertical mast, isolated from the ground, or the vertical radiator wires, functions as a resonant monopole antenna. At the low frequencies used, the height of the mast is much less than its resonant length, one quarter wavelength (formula_0), so it makes a very electrically short antenna; it has very low radiation resistance and without the topload wires would be a very inefficient radiator. The oscillating current from the transmitter travels up the mast and divides approximately equally among the topload wires. It is reflected from the ends of the wires and travels back down the mast. The outgoing and reflected current superpose, forming a standing wave consisting of the tail part of a sine wave.
Due to ground reflections and the symmetrical placement of the topload wires, measured far from the antenna the radio waves radiated from the umbrella-like spoke wires largely cancel each other out, so the spoke wires themselves radiate almost no radio power. Instead the umbrella-wires function as a capacitive "top load" replacing some or all of the capacitance that would be provided by the top of a full-length quarter-wave mast. The ground wires buried or laid on the earth under the antenna function as the corresponding bottom plate of the giant 'capacitor'. The added capacitance increases the current in the vertical mast due to the extra charge required to charge and discharge the top load each half of the RF cycle. In the best case, this can double the total current, and quadruple the radiated power, increasing the signal up to 6 dB from the level it would be with no top loading.
To tune out the large capacitive reactance of the antenna and make it resonant at the operating frequency so it can be fed power efficiently, a large inductor (loading coil) is placed in the feedline in series with the antenna, at its base. The other side of the feedline from the transmitter is connected to the ground system. The antenna and coil form a tuned circuit. Their large reactance and low resistance usually give the antenna a high Q factor, so it has a narrow bandwidth over which it can work. In large umbrella antennas used in the very low frequency band, the bandwidth of the antenna can be less than 100 hertz.
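As a rough numerical illustration of the series-resonant tuning described above (an editorial sketch using the standard series RLC relations; the antenna capacitance, frequency and resistance are assumed example values, not data for any particular station), the resulting bandwidth comes out on the order of tens of hertz, of the same order as the sub-100 Hz figures mentioned above.

```python
import math

f = 20e3     # operating frequency in Hz (VLF example)
C = 50e-9    # assumed effective antenna capacitance in farads
R = 0.5      # assumed total series resistance (radiation plus losses) in ohms

L = 1.0 / ((2 * math.pi * f) ** 2 * C)   # coil inductance for series resonance
X = 2 * math.pi * f * L                  # coil reactance at the operating frequency
Q = X / R                                # quality factor of the tuned circuit
bw = f / Q                               # approximate bandwidth

print(round(L * 1e3, 2), "mH", round(Q), round(bw, 1), "Hz")
```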
Below are several grounded mast umbrella antenna variations developed by the US military in the 1970s for use on the low frequency band.
Radiation pattern.
The umbrella antenna radiates vertically polarised radio waves in an omnidirectional radiation pattern, with equal power emitted in all horizontal directions, with maximum signal strength radiated in horizontal directions, falling monotonically with elevation angle to zero at the zenith. Due to the large top load they are usually more efficient than the other common top loaded antennas, the “flattop” or ‘T’ antenna, at low frequencies, and are widely used in the VLF band.
Ground waves are vertically polarized waves which travel away from the antenna horizontally just above the ground. Umbrella antennas are good ground wave antennas, and are used as radio broadcasting antennas in the MF and LF bands.
The gain of an umbrella antenna over perfectly conducting ground, like other electrically short monopole antennas, is approximately 3.52 dBi if it is significantly shorter than formula_1
Since the diagonal wires are sloped down, the current in them has a vertical component. This current is in a direction opposite to the current in the mast, so, far from the mast, the radio waves radiated by it are 180° out of phase with the radio waves from the mast and partially cancel them. Thus the umbrella wires partially shield the mast, reducing the power radiated. With enough umbrella wires, all the radio waves emitted by the portion of the mast above the bottom of the umbrella are blocked, and the only radiation is from the portion of the mast below the umbrella.
Applications.
Due to their large capacitive topload, umbrella antennas are some of the most efficient antenna designs at low frequencies, and are used for transmitters in the LF and VLF bands for navigational aids and military communication. They are in common use for commercial medium-wave and longwave AM broadcasting stations. Umbrella antennas with heights of 15–460 metres are in service. The largest umbrella antennas are the trideco antennas "(below)" built for VLF naval transmitting stations which communicate with submerged submarines. Eight umbrella antennas 350 metres high are in use in an array at the German VLF communications facility, operating at about 20 kHz with high radiation efficiency even though they are only a small fraction of a wavelength high.
With the progressing world-wide adoption of two new amateur radio bands at 630 metres and 2200 metres, amateurs with adequate real estate have resumed use of this design.
Trideco antenna.
The trideco antenna is a huge specialized umbrella antenna used in a few high power military transmitters at very low frequency (VLF). In a conventional umbrella antenna, the use of the sloping guy wires as the capacitive top load has some disadvantages: First, since the umbrella wires must be anchored to the ground, their length is limited. At low frequencies the length of topload wires required is far longer than can be used for guy wires, without additional supporting masts the wires would sag to the ground. Second, since the wires are sloping, the current in them has a vertical component. This vertical current is in the opposite direction to the current in the mast, so the radio waves radiated by it are 180° out of phase with the mast radiation, and partially cancels it.
In the trideco design the top load wires extend horizontally from the top of the central mast, supported by a ring of 12 masts surrounding the central mast, to create a radial wire "capacitor plate" parallel with the Earth, driven at the center. The topload wires are in the form of six rhomboidal (diamond) shaped panels extending symmetrically from the central mast at angles of 60°, giving the antenna the form of a six-pointed star when seen from above. Instead of using the central mast itself as a radiator, each panel is connected to a vertical radiator wire next to the central mast, and the six radiator wires are fed in phase at the base. This eliminates the difficult problem of insulating the mast from the ground at the extremely high voltages used. It also allows the possibility of shutting down power to one of the panels, and lowering it to the ground for maintenance while the rest of the antenna is operating. Buried in the ground under the antenna is an enormous radial ground system, which forms the bottom 'plate' of the capacitor with the overhead top load. The antenna must be very large at the VLF frequencies used; the supporting masts are high, and the topload is about in diameter.
The trideco antenna was developed for high power naval transmitters, which transmit on frequencies between 15 and 30 kHz at powers up to 2 megawatts, to communicate with submerged submarines worldwide. It is the most efficient antenna design found so far for this frequency range, achieving efficiencies of 70-80% where other VLF antenna designs have efficiency of 15-30% due to the low radiation resistance of the very electrically short monopole. The antenna was invented by Boynton Hagaman of Development Engineering Co. (DECO) and first installed at Cutler, Maine in 1961. The inspiration for the design was the umbrella antenna of the 1 megawatt Goliath transmitter built by Nazi Germany's navy in 1943 at Kalbe, Saxony-Anhalt, Germany. Today trideco antennas are located at a few military bases around the world, such as Cutler naval radio station in Maine, U.S.; Harold E. Holt Naval Communication Station, Exmouth, Australia; and Anthorn Radio Station, Anthorn, UK. A modified 3 panel antenna was located at NSS Annapolis, Annapolis, Maryland, but was decommissioned in 1990.
History.
Umbrella antennas were invented during the wireless telegraphy era, about 1900 to 1920, and used with spark-gap transmitters on longwave bands to transmit information by Morse code. Low frequencies were used for long distance transcontinental communication, and antennas were electrically short, so capacitively toploaded antennas were used. Umbrella antennas developed from large multi-wire capacitive antennas used by Guglielmo Marconi during his efforts to achieve reliable transatlantic communication.
One of the first antennas that used this design was the tubular mast erected in 1905 by Reginald Fessenden for his experimental spark gap transmitter at Brant Rock, Massachusetts with which he made the first two-way transatlantic transmission, communicating with an identical antenna in Machrihanish, Scotland. The wires attached to the top (either 4 or 8, depending on source) were electrically connected to the mast and stretched diagonally down to the surface, where they were insulated from the ground. Another early example is the umbrella antenna built in 1906 by Adolf Slaby at Nauen Transmitter Station, Germany's first long range radio station, consisting of a steel lattice tower radiator with 162 umbrella cables attached to the top, anchored by hemp ropes to the ground 200 m from the tower. Small umbrella antennas were widely used with portable transmitters by military signal corps during World War I, since there was no possibility of setting up full-sized quarter-wave antennas.
Umbrella antennas were used at most OMEGA Navigation System transmitters, operating around 10 kHz, at Decca Navigator stations and at LORAN-C stations, operating at 100 kHz with central masts approximately 200 metres tall, before those systems were shut down.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ \\tfrac{1}{4}\\lambda\\ "
},
{
"math_id": 1,
"text": "\\tfrac{1}{4}\\lambda~."
}
] | https://en.wikipedia.org/wiki?curid=7996832 |
799760 | Mahalanobis distance | Statistical distance measure
The Mahalanobis distance is a measure of the distance between a point formula_0 and a distribution formula_1, introduced by P. C. Mahalanobis in 1936. The mathematical details of the Mahalanobis distance first appeared in the Journal of The Asiatic Society of Bengal. Mahalanobis's definition was prompted by the problem of identifying the similarities of skulls based on measurements (the earliest work related to similarities of skulls is from 1922, and a later work is from 1927). The sampling distribution of the Mahalanobis distance was obtained by Professor R. C. Bose under the assumption of equal dispersion.
It is a multivariate generalization of the square of the standard score formula_2: how many standard deviations away formula_0 is from the mean of formula_1. This distance is zero for formula_0 at the mean of formula_1 and grows as formula_0 moves away from the mean along each principal component axis. If each of these axes is re-scaled to have unit variance, then the Mahalanobis distance corresponds to standard Euclidean distance in the transformed space. The Mahalanobis distance is thus unitless, scale-invariant, and takes into account the correlations of the data set.
Definition.
Given a probability distribution formula_3 on formula_4, with mean formula_5 and positive semi-definite covariance matrix formula_6, the Mahalanobis distance of a point formula_7 from formula_3 is formula_8 Given two points formula_9 and formula_10 in formula_4, the Mahalanobis distance between them with respect to formula_3 is formula_11 which means that formula_12.
Since formula_6 is positive semi-definite, so is formula_13, thus the square roots are always defined.
We can find useful decompositions of the squared Mahalanobis distance that help to explain some reasons for the outlyingness of multivariate observations and also provide a graphical tool for identifying outliers.
By the spectral theorem, formula_14 can be decomposed as formula_15 for some real formula_16 matrix, which gives us the equivalent definitionformula_17where formula_18 is the Euclidean norm. That is, the Mahalanobis distance is the Euclidean distance after a whitening transformation.
The existence of formula_19 is guaranteed by the spectral theorem, but it is not unique. Different choices have different theoretical and practical advantages.
In practice, the distribution formula_3 is usually the sample distribution from a set of IID samples from an underlying unknown distribution, so formula_20 is the sample mean, and formula_6 is the covariance matrix of the samples.
When the affine span of the samples is not the entire formula_4, the covariance matrix would not be positive-definite, which means the above definition would not work. However, in general, the Mahalanobis distance is preserved under any full-rank affine transformation of the affine span of the samples. So in case the affine span is not the entire formula_4, the samples can be first orthogonally projected to formula_21, where formula_22 is the dimension of the affine span of the samples, then the Mahalanobis distance can be computed as usual.
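As a concrete illustration of the sample-based definition above, the following minimal Python sketch (using NumPy; the data values and the test point are arbitrary) estimates the mean and covariance from a sample and evaluates the Mahalanobis distance of a test point:

```python
import numpy as np

# Arbitrary example data: 200 correlated 2-D samples
rng = np.random.default_rng(42)
X = rng.multivariate_normal(mean=[0.0, 0.0],
                            cov=[[2.0, 0.8], [0.8, 1.0]], size=200)

mu = X.mean(axis=0)                 # sample mean
S = np.cov(X, rowvar=False)         # sample covariance matrix
S_inv = np.linalg.inv(S)

x = np.array([1.5, -0.5])           # test point
diff = x - mu
d_M = np.sqrt(diff @ S_inv @ diff)  # Mahalanobis distance of x from the sample
print(d_M)
```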
Intuitive explanation.
Consider the problem of estimating the probability that a test point in "N"-dimensional Euclidean space belongs to a set, where we are given sample points that definitely belong to that set. Our first step would be to find the centroid or center of mass of the sample points. Intuitively, the closer the point in question is to this center of mass, the more likely it is to belong to the set.
However, we also need to know if the set is spread out over a large range or a small range, so that we can decide whether a given distance from the center is noteworthy or not. The simplistic approach is to estimate the standard deviation of the distances of the sample points from the center of mass. If the distance between the test point and the center of mass is less than one standard deviation, then we might conclude that it is highly probable that the test point belongs to the set. The further away it is, the more likely that the test point should not be classified as belonging to the set.
This intuitive approach can be made quantitative by defining the normalized distance between the test point and the set to be formula_23, which reads: formula_24. By plugging this into the normal distribution, we can derive the probability of the test point belonging to the set.
The drawback of the above approach was that we assumed that the sample points are distributed about the center of mass in a spherical manner. Were the distribution to be decidedly non-spherical, for instance ellipsoidal, then we would expect the probability of the test point belonging to the set to depend not only on the distance from the center of mass, but also on the direction. In those directions where the ellipsoid has a short axis the test point must be closer, while in those where the axis is long the test point can be further away from the center.
Putting this on a mathematical basis, the ellipsoid that best represents the set's probability distribution can be estimated by building the covariance matrix of the samples. The Mahalanobis distance is the distance of the test point from the center of mass divided by the width of the ellipsoid in the direction of the test point.
Normal distributions.
For a normal distribution in any number of dimensions, the probability density of an observation formula_9 is uniquely determined by the Mahalanobis distance formula_25:
formula_26
Specifically, formula_27 follows the chi-squared distribution with formula_22 degrees of freedom, where formula_22 is the number of dimensions of the normal distribution. If the number of dimensions is 2, for example, the probability of a particular calculated formula_25 being less than some threshold formula_28 is formula_29. To determine a threshold to achieve a particular probability, formula_30, use formula_31, for 2 dimensions. For number of dimensions other than 2, the cumulative chi-squared distribution should be consulted.
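The threshold calculation above can be checked numerically; the short sketch below uses SciPy's chi-squared quantile function for the general case and the closed form quoted above for two dimensions:

```python
import numpy as np
from scipy.stats import chi2

p, n = 0.95, 2                        # desired probability and number of dimensions

# Two-dimensional closed form quoted above: t = sqrt(-2 ln(1 - p))
t_2d = np.sqrt(-2.0 * np.log(1.0 - p))

# General case: d^2 follows a chi-squared distribution with n degrees of freedom
t_general = np.sqrt(chi2.ppf(p, df=n))

print(t_2d, t_general)                # both are about 2.448 for p = 0.95, n = 2
```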
In a normal distribution, the region where the Mahalanobis distance is less than one (i.e. the region inside the ellipsoid at distance one) is exactly the region where the probability distribution is concave.
The Mahalanobis distance is proportional, for a normal distribution, to the square root of the negative log-likelihood (after adding a constant so the minimum is at zero).
Other forms of multivariate location and scatter.
The sample mean and covariance matrix can be quite sensitive to outliers, therefore other approaches for calculating the multivariate location and scatter of data are also commonly used when calculating the Mahalanobis distance. The Minimum Covariance Determinant approach estimates multivariate location and scatter from a subset numbering formula_32 data points that has the smallest variance-covariance matrix determinant. The Minimum Volume Ellipsoid approach is similar to the Minimum Covariance Determinant approach in that it works with a subset of size formula_32 data points, but the Minimum Volume Ellipsoid estimates multivariate location and scatter from the ellipsoid of minimal volume that encapsulates the formula_32 data points. Each method varies in its definition of the distribution of the data, and therefore produces different Mahalanobis distances. The Minimum Covariance Determinant and Minimum Volume Ellipsoid approaches are more robust to samples that contain outliers, while the sample mean and covariance matrix tends to be more reliable with small and biased data sets.
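A minimal sketch of the robust approach, assuming scikit-learn is available; MinCovDet is scikit-learn's Minimum Covariance Determinant estimator, and the data (including the injected outliers) are invented for illustration:

```python
import numpy as np
from sklearn.covariance import MinCovDet

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
X[:5] += 8.0                              # a few gross outliers

mcd = MinCovDet(random_state=0).fit(X)    # robust location and scatter
d2_robust = mcd.mahalanobis(X)            # squared Mahalanobis distances
print(np.sqrt(d2_robust[:5]))             # the outliers stand out clearly
```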
Relationship to normal random variables.
In general, given a normal (Gaussian) random variable formula_33 with variance formula_34 and mean formula_35, any other normal random variable formula_36 (with mean formula_37 and variance formula_38) can be defined in terms of formula_33 by the equation formula_39 Conversely, to recover a normalized random variable from any normal random variable, one can typically solve for formula_40. If we square both sides, and take the square-root, we will get an equation for a metric that looks a lot like the Mahalanobis distance:
formula_41
The resulting magnitude is always non-negative and varies with the distance of the data from the mean, attributes that are convenient when trying to define a model for the data.
Relationship to leverage.
Mahalanobis distance is closely related to the leverage statistic, formula_32, but has a different scale:
formula_42
Applications.
Mahalanobis distance is widely used in cluster analysis and classification techniques. It is closely related to Hotelling's T-square distribution used for multivariate statistical testing and Fisher's linear discriminant analysis that is used for supervised classification.
In order to use the Mahalanobis distance to classify a test point as belonging to one of "N" classes, one first estimates the covariance matrix of each class, usually based on samples known to belong to each class. Then, given a test sample, one computes the Mahalanobis distance to each class, and classifies the test point as belonging to that class for which the Mahalanobis distance is minimal.
Mahalanobis distance and leverage are often used to detect outliers, especially in the development of linear regression models. A point that has a greater Mahalanobis distance from the rest of the sample population of points is said to have higher leverage since it has a greater influence on the slope or coefficients of the regression equation. Mahalanobis distance is also used to determine multivariate outliers. Regression techniques can be used to determine if a specific case within a sample population is an outlier via the combination of two or more variable scores. Even for normal distributions, a point can be a multivariate outlier even if it is not a univariate outlier for any variable (consider a probability density concentrated along the line formula_43, for example), making Mahalanobis distance a more sensitive measure than checking dimensions individually.
Mahalanobis distance has also been used in ecological niche modelling, as the convex elliptical shape of the distances relates well to the concept of the fundamental niche.
Another example of usage is in finance, where Mahalanobis distance has been used to compute an indicator called the "turbulence index", which is a statistical measure of financial markets abnormal behaviour. An implementation as a Web API of this indicator is available online.
Software implementations.
Many programming languages and statistical packages, such as R, Python, etc., include implementations of Mahalanobis distance.
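For example, SciPy exposes the pairwise form directly; the covariance matrix and points below are made up for illustration:

```python
import numpy as np
from scipy.spatial.distance import mahalanobis

S = np.array([[2.0, 0.3], [0.3, 1.0]])   # an assumed covariance matrix
VI = np.linalg.inv(S)                    # SciPy expects the *inverse* covariance

u, v = [1.0, 2.0], [0.0, 0.0]
print(mahalanobis(u, v, VI))             # Mahalanobis distance between the two points
```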
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "z=(x- \\mu)/\\sigma"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "\\R^N"
},
{
"math_id": 5,
"text": "\\vec{\\mu} = (\\mu_1, \\mu_2, \\mu_3, \\dots , \\mu_N)^\\mathsf{T}"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "\\vec{x} = (x_1, x_2, x_3, \\dots, x_N )^\\mathsf{T}"
},
{
"math_id": 8,
"text": "d_M(\\vec{x}, Q) = \\sqrt{(\\vec{x} - \\vec{\\mu})^\\mathsf{T} S^{-1} (\\vec{x} - \\vec{\\mu})}."
},
{
"math_id": 9,
"text": "\\vec{x}"
},
{
"math_id": 10,
"text": "\\vec{y}"
},
{
"math_id": 11,
"text": " d_M(\\vec{x} ,\\vec{y}; Q) = \\sqrt{(\\vec{x} - \\vec{y})^\\mathsf{T} S^{-1} (\\vec{x} - \\vec{y})}."
},
{
"math_id": 12,
"text": "d_M(\\vec{x}, Q) = d_M(\\vec{x},\\vec{\\mu}; Q)"
},
{
"math_id": 13,
"text": "S^{-1}"
},
{
"math_id": 14,
"text": " S^{-1}"
},
{
"math_id": 15,
"text": " S^{-1} = W^T W"
},
{
"math_id": 16,
"text": " N\\times N"
},
{
"math_id": 17,
"text": "d_M(\\vec{x}, \\vec{y}; Q) = \\|W(\\vec{x} - \\vec{y})\\|"
},
{
"math_id": 18,
"text": "\\|\\cdot\\|"
},
{
"math_id": 19,
"text": "W"
},
{
"math_id": 20,
"text": "\\mu"
},
{
"math_id": 21,
"text": "\\R^n"
},
{
"math_id": 22,
"text": "n"
},
{
"math_id": 23,
"text": "\\frac{\\lVert x - \\mu\\rVert_2}{\\sigma}"
},
{
"math_id": 24,
"text": "\\frac{\\text{testpoint} - \\text{sample mean}}{\\text{standard deviation}}"
},
{
"math_id": 25,
"text": "d"
},
{
"math_id": 26,
"text": "\n\\begin{align}\n\\Pr[\\vec x] \\,d\\vec x & = \\frac 1 {\\sqrt{\\det(2\\pi \\mathbf{S})}} \\exp \\left(-\\frac{(\\vec x - \\vec \\mu)^\\mathsf{T} \\mathbf{S}^{-1} (\\vec x - \\vec \\mu)} 2 \\right) \\,d\\vec{x} \\\\[6pt]\n& = \\frac{1}{\\sqrt{\\det(2\\pi \\mathbf{S})}} \\exp\\left( -\\frac{d^2} 2 \\right) \\,d\\vec x.\n\\end{align}\n"
},
{
"math_id": 27,
"text": "d^2"
},
{
"math_id": 28,
"text": "t"
},
{
"math_id": 29,
"text": "1 - e^{-t^2/2}"
},
{
"math_id": 30,
"text": "p"
},
{
"math_id": 31,
"text": "t = \\sqrt{-2\\ln(1 - p)}"
},
{
"math_id": 32,
"text": "h"
},
{
"math_id": 33,
"text": "X"
},
{
"math_id": 34,
"text": "S=1"
},
{
"math_id": 35,
"text": "\\mu = 0"
},
{
"math_id": 36,
"text": "R"
},
{
"math_id": 37,
"text": "\\mu_1"
},
{
"math_id": 38,
"text": "S_1"
},
{
"math_id": 39,
"text": "R = \\mu_1 + \\sqrt{S_1}X."
},
{
"math_id": 40,
"text": "X = (R - \\mu_1)/\\sqrt{S_1} "
},
{
"math_id": 41,
"text": "D = \\sqrt{X^2} = \\sqrt{(R - \\mu_1)^2/S_1} = \\sqrt{(R - \\mu_1) S_1^{-1} (R - \\mu_1) }."
},
{
"math_id": 42,
"text": "D^2 = (N - 1) \\left(h - \\tfrac 1 N \\right)."
},
{
"math_id": 43,
"text": "x_1 = x_2"
}
] | https://en.wikipedia.org/wiki?curid=799760 |
799876 | Electric susceptibility | Degree of polarization
In electricity (electromagnetism), the electric susceptibility (formula_0; Latin: "susceptibilis" "receptive") is a dimensionless proportionality constant that indicates the degree of polarization of a dielectric material in response to an applied electric field. The greater the electric susceptibility, the greater the ability of a material to polarize in response to the field, and thereby reduce the total electric field inside the material (and store energy). It is in this way that the electric susceptibility influences the electric permittivity of the material and thus influences many other phenomena in that medium, from the capacitance of capacitors to the speed of light.
Definition for linear dielectrics.
If a dielectric material is a linear dielectric, then electric susceptibility is defined as the constant of proportionality (which may be a tensor) relating an electric field E to the induced dielectric polarization density P such that
formula_1
where formula_2 is the polarization density, formula_3 is the electric permittivity of free space (the electric constant), and formula_4 is the electric field.
In materials where susceptibility is anisotropic (different depending on direction), susceptibility is represented as a tensor known as the susceptibility tensor. Many linear dielectrics are isotropic, but it is possible nevertheless for a material to display behavior that is both linear and anisotropic, or for a material to be non-linear but isotropic. Anisotropic but linear susceptibility is common in many crystals.
The susceptibility is related to its relative permittivity (dielectric constant) formula_5 by
formula_6
so in the case of a vacuum,
formula_7
At the same time, the electric displacement D is related to the polarization density P by the following relation:
formula_8
where formula_9 is the permittivity of the medium and formula_10 its relative permittivity.
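The linear relations above are easy to check numerically; the sketch below uses illustrative values for the relative permittivity and the applied field and verifies that the two expressions for the displacement agree:

```python
EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m

eps_r = 4.5                 # assumed relative permittivity of the dielectric
chi_e = eps_r - 1.0         # electric susceptibility
E = 1.0e5                   # applied field magnitude, V/m (assumed)

P = EPS0 * chi_e * E        # polarization density, C/m^2
D = EPS0 * E + P            # electric displacement, C/m^2

assert abs(D - eps_r * EPS0 * E) < 1e-18   # D = eps_r * eps0 * E, as above
print(P, D)
```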
Molecular polarizability.
A similar parameter exists to relate the magnitude of the induced dipole moment p of an individual molecule to the local electric field E that induced the dipole. This parameter is the "molecular polarizability" ("α"), and the dipole moment resulting from the local electric field Elocal is given by:
formula_11
This introduces a complication however, as locally the field can differ significantly from the overall applied field. We have:
formula_12
where P is the polarization per unit volume, and "N" is the number of molecules per unit volume contributing to the polarization. Thus, if the local electric field is parallel to the ambient electric field, we have:
formula_13
Thus only if the local field equals the ambient field can we write:
formula_14
Otherwise, one should find a relation between the local and the macroscopic field. In some materials, the Clausius–Mossotti relation holds and reads
formula_15
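Where the Clausius–Mossotti relation applies, the bulk susceptibility can be recovered from the molecular polarizability; a short sketch with invented values for the number density and polarizability (first SI convention above, so the polarizability has units of volume):

```python
def susceptibility_from_polarizability(N, alpha):
    """Solve chi_e/(3 + chi_e) = N*alpha/3 for chi_e (Clausius-Mossotti)."""
    x = N * alpha
    return 3.0 * x / (3.0 - x)      # equivalently x / (1 - x/3)

N = 3.0e28        # molecules per m^3 (assumed)
alpha = 2.0e-29   # molecular polarizability, m^3 (assumed)
print(susceptibility_from_polarizability(N, alpha))
```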
Ambiguity in the definition.
The definition of the molecular polarizability depends on the author. In the above definition,
formula_16
formula_17 and formula_18 are in SI units and the molecular polarizability formula_19 has the dimension of a volume (m3). Another definition would be to keep SI units and to integrate formula_3 into formula_19:
formula_20
In this second definition, the polarizability would have the SI unit of C.m2/V. Yet another definition exists where formula_17 and formula_18 are expressed in the cgs system and formula_19 is still defined as
formula_20
Using the cgs units gives formula_19 the dimension of a volume, as in the first definition, but with a value that is formula_21 lower.
Nonlinear susceptibility.
In many materials the polarizability starts to saturate at high values of electric field. This saturation can be modelled by a nonlinear susceptibility. These susceptibilities are important in nonlinear optics and lead to effects such as second-harmonic generation (such as used to convert infrared light into visible light, in green laser pointers).
The standard definition of nonlinear susceptibilities in SI units is via a Taylor expansion of the polarization's reaction to electric field:
formula_22
The first susceptibility term, formula_24, corresponds to the linear susceptibility described above. While this first term is dimensionless, the subsequent nonlinear susceptibilities formula_25 have units of (m/V)"n"−1.
The nonlinear susceptibilities can be generalized to anisotropic materials in which the susceptibility is not uniform in every direction. In these materials, each susceptibility formula_25 becomes an ("n" + 1)-degree tensor.
Dispersion and causality.
In general, a material cannot polarize instantaneously in response to an applied field, and so the more general formulation as a function of time is
formula_26
That is, the polarization is a convolution of the electric field at previous times with time-dependent susceptibility given by formula_27. The upper limit of this integral can be extended to infinity as well if one defines formula_28 for formula_29. An instantaneous response corresponds to Dirac delta function susceptibility formula_30.
It is more convenient in a linear system to take the Fourier transform and write this relationship as a function of frequency. Due to the convolution theorem, the integral becomes a product,
formula_31
This has a similar form to the Clausius–Mossotti relation:
formula_32
This frequency dependence of the susceptibility leads to frequency dependence of the permittivity. The shape of the susceptibility with respect to frequency characterizes the dispersion properties of the material.
Moreover, the fact that the polarization can only depend on the electric field at previous times (i.e. formula_28 for formula_29), a consequence of causality, imposes Kramers–Kronig constraints on the susceptibility formula_33.
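To make the convolution picture concrete, the sketch below assumes a simple Debye-type relaxation kernel (an assumption for illustration, not taken from the text above) and evaluates both the time-domain polarization and the corresponding frequency-domain susceptibility:

```python
import numpy as np

EPS0 = 8.8541878128e-12     # vacuum permittivity, F/m
chi0, tau = 2.0, 1.0e-12    # assumed static susceptibility and relaxation time

dt = tau / 100.0
t = np.arange(0.0, 20.0 * tau, dt)
chi_t = (chi0 / tau) * np.exp(-t / tau)        # causal kernel: zero for t < 0
E = np.sin(2.0 * np.pi * t / (5.0 * tau))      # an applied field E(t), V/m

# P(t) = eps0 * integral of chi(t - t') E(t') dt', approximated by a discrete convolution
P = EPS0 * np.convolve(chi_t, E)[: t.size] * dt

# In the frequency domain the same kernel gives chi(omega) = chi0 / (1 + i*omega*tau)
omega = 2.0 * np.pi / (5.0 * tau)
chi_w = chi0 / (1.0 + 1j * omega * tau)
print(P[-1], abs(chi_w))    # the response is reduced and phase-shifted at this frequency
```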
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\chi_{\\text{e}}"
},
{
"math_id": 1,
"text": "\\mathbf P =\\varepsilon_0 \\chi_{\\text{e}}{\\mathbf E},"
},
{
"math_id": 2,
"text": "\\mathbf{P}"
},
{
"math_id": 3,
"text": "\\varepsilon_0"
},
{
"math_id": 4,
"text": "\\mathbf{E}"
},
{
"math_id": 5,
"text": "\\varepsilon_{\\textrm{r}}"
},
{
"math_id": 6,
"text": "\\chi_{\\text{e}}\\ = \\varepsilon_{\\text{r}} - 1"
},
{
"math_id": 7,
"text": "\\chi_{\\text{e}}\\ = 0."
},
{
"math_id": 8,
"text": "\\mathbf{D} \\ = \\ \\varepsilon_0\\mathbf{E} + \\mathbf{P} \\ = \\ \\varepsilon_0 (1+\\chi_{\\text{e}}) \\mathbf{E} \\ = \\ \\varepsilon_{\\text{r}} \\varepsilon_0 \\mathbf{E} \\ = \\ \\varepsilon\\mathbf{E} "
},
{
"math_id": 9,
"text": "\\varepsilon \\ = \\ \\varepsilon_{\\text{r}} \\varepsilon_0"
},
{
"math_id": 10,
"text": "\\varepsilon_{\\text{r}} \\ = \\ 1+\\chi_{\\text{e}}"
},
{
"math_id": 11,
"text": "\\mathbf{p} = \\varepsilon_0\\alpha \\mathbf{E_{\\text{local}}}"
},
{
"math_id": 12,
"text": "\\mathbf{P} = N \\mathbf{p} = N \\varepsilon_0 \\alpha \\mathbf{E}_\\text{local},"
},
{
"math_id": 13,
"text": "\\chi_{\\text{e}} \\mathbf{E} = N \\alpha \\mathbf{E}_{\\text{local}}"
},
{
"math_id": 14,
"text": "\\chi_{\\text{e}} = N \\alpha."
},
{
"math_id": 15,
"text": "\\frac{\\chi_{\\text{e}}}{3+\\chi_{\\text{e}}} = \\frac{N \\alpha}{3}."
},
{
"math_id": 16,
"text": "\\mathbf{p}=\\varepsilon_0\\alpha \\mathbf{E_{\\text{local}}},"
},
{
"math_id": 17,
"text": "p"
},
{
"math_id": 18,
"text": "E"
},
{
"math_id": 19,
"text": "\\alpha"
},
{
"math_id": 20,
"text": "\\mathbf{p}=\\alpha \\mathbf{E_{\\text{local}}}."
},
{
"math_id": 21,
"text": "4\\pi"
},
{
"math_id": 22,
"text": " P = P_0 + \\varepsilon_0 \\chi^{(1)} E + \\varepsilon_0 \\chi^{(2)} E^2 + \\varepsilon_0 \\chi^{(3)} E^3 + \\cdots. "
},
{
"math_id": 23,
"text": "P_0 = 0"
},
{
"math_id": 24,
"text": "\\chi^{(1)}"
},
{
"math_id": 25,
"text": "\\chi^{(n)}"
},
{
"math_id": 26,
"text": "\\mathbf{P}(t) = \\varepsilon_0 \\int_{-\\infty}^t \\chi_{\\text{e}}(t-t') \\mathbf{E}(t')\\, \\mathrm dt'."
},
{
"math_id": 27,
"text": "\\chi_{\\text{e}}(\\Delta t)"
},
{
"math_id": 28,
"text": "\\chi_{\\text{e}}(\\Delta t) = 0"
},
{
"math_id": 29,
"text": "\\Delta t < 0"
},
{
"math_id": 30,
"text": "\\chi_{\\text{e}}(\\Delta t) = \\chi_{\\text{e}}\\delta(\\Delta t)"
},
{
"math_id": 31,
"text": "\\mathbf{P}(\\omega) = \\varepsilon_0 \\chi_{\\text{e}}(\\omega) \\mathbf{E}(\\omega)."
},
{
"math_id": 32,
"text": "\\mathbf{P}(\\mathbf{r}) = \\varepsilon_0\\frac{N\\alpha(\\mathbf{r})}{1-\\frac{1}{3}N(\\mathbf{r})\\alpha(\\mathbf{r})}\\mathbf{E}(\\mathbf{r}) = \\varepsilon_0\\chi_\\text{e}(\\mathbf{r})\\mathbf{E}(\\mathbf{r})"
},
{
"math_id": 33,
"text": "\\chi_{\\text{e}}(0)"
}
] | https://en.wikipedia.org/wiki?curid=799876 |
7999138 | Golden–Thompson inequality | In physics and mathematics, the Golden–Thompson inequality is a trace inequality between exponentials of symmetric and Hermitian matrices, proved independently by Sidney Golden and Colin J. Thompson in 1965. It has been developed in the context of statistical mechanics, where it has come to have a particular significance.
Statement.
The Golden–Thompson inequality states that for (real) symmetric or (complex) Hermitian matrices "A" and "B", the following trace inequality holds:
formula_0
This inequality is well defined, since the quantities on either side are real numbers. For the expression on the right hand side of the inequality, this can be seen by rewriting it as formula_1 using the cyclic property of the trace.
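The inequality is easy to spot-check numerically for random Hermitian matrices; a minimal sketch using NumPy and SciPy:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
n = 4
M1 = rng.standard_normal((n, n))
M2 = rng.standard_normal((n, n))
A = (M1 + M1.T) / 2.0      # random real symmetric matrices
B = (M2 + M2.T) / 2.0

lhs = np.trace(expm(A + B))
rhs = np.trace(expm(A) @ expm(B))   # real, since it equals tr(e^{A/2} e^B e^{A/2})
print(lhs, rhs, lhs <= rhs + 1e-9)  # Golden-Thompson: lhs never exceeds rhs
```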
Motivation.
The Golden–Thompson inequality can be viewed as a generalization of a stronger statement for real numbers. If "a" and "b" are two real numbers, then the exponential of "a+b" is the product of the exponential of "a" with the exponential of "b":
formula_2
If we replace "a" and "b" with commuting matrices "A" and "B", then the same identity formula_3 holds.
This relationship is not true if "A" and "B" do not commute. In fact, it has been proved that if "A" and "B" are two Hermitian matrices for which the Golden–Thompson inequality holds with equality, then the two matrices commute. The Golden–Thompson inequality shows that, even though formula_4 and formula_5 are not equal, they are still related by an inequality.
Generalizations.
The Golden–Thompson inequality generalizes to any unitarily invariant norm. If "A" and "B" are Hermitian matrices and formula_6 is a unitarily invariant norm, then
formula_7
The standard Golden–Thompson inequality is a special case of the above inequality, where the norm is the Schatten norm with formula_8. Since formula_4 and formula_9 are both positive semidefinite matrices, formula_10 and formula_11.
The inequality has been generalized to three matrices by Lieb, and furthermore to an arbitrary number of Hermitian matrices by Sutter, Berta, and Tomamichel. A naive attempt at generalization does not work: the inequality
formula_12
is false. For three matrices, the correct generalization takes the following form:
formula_13
where the operator formula_14 is the derivative of the matrix logarithm given by formula_15.
Note that, if formula_16 and formula_17 commute, then formula_18, and the inequality for three matrices reduces to the original from Golden and Thompson.
Bertram Kostant (1973) used the Kostant convexity theorem to generalize the Golden–Thompson inequality to all compact Lie groups. | [
{
"math_id": 0,
"text": " \\operatorname{tr}\\, e^{A+B} \\le \\operatorname{tr} \\left(e^A e^B\\right)."
},
{
"math_id": 1,
"text": "\\operatorname{tr}(e^{A/2}e^B e^{A/2})"
},
{
"math_id": 2,
"text": " e^{a+b} = e^a e^b ."
},
{
"math_id": 3,
"text": " e^{A+B} = e^A e^B"
},
{
"math_id": 4,
"text": "e^{A+B}"
},
{
"math_id": 5,
"text": "e^Ae^B"
},
{
"math_id": 6,
"text": "\\|\\cdot\\|"
},
{
"math_id": 7,
"text": "\\|e^{A+B}\\| \\leq \\|e^{A/2}e^Be^{A/2}\\| ."
},
{
"math_id": 8,
"text": "p=1"
},
{
"math_id": 9,
"text": "e^{A/2}e^Be^{A/2}"
},
{
"math_id": 10,
"text": "\\operatorname{tr}(e^{A+B}) = \\|e^{A+B}\\|_1"
},
{
"math_id": 11,
"text": "\\operatorname{tr}(e^{A/2}e^Be^{A/2}) = \\|e^{A/2}e^Be^{A/2}\\|_1"
},
{
"math_id": 12,
"text": "\\operatorname{tr}(e^{A+B+C}) \\leq |\\operatorname{tr}(e^Ae^Be^C)|"
},
{
"math_id": 13,
"text": " \\operatorname{tr}\\, e^{A+B+C} \\le \\operatorname{tr} \\left(e^A \\mathcal{T}_{e^{-B}} e^C\\right),\n"
},
{
"math_id": 14,
"text": "\\mathcal{T}_f"
},
{
"math_id": 15,
"text": " \\mathcal{T}_f(g) = \\int_0^\\infty \\operatorname{d}t \\, (f+t)^{-1} g (f+t)^{-1} "
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "g"
},
{
"math_id": 18,
"text": " \\mathcal{T}_f(g) = gf^{-1}"
}
] | https://en.wikipedia.org/wiki?curid=7999138 |
7999492 | Sediment transport | Movement of solid particles, typically by gravity and fluid entrainment
Sediment transport is the movement of solid particles (sediment), typically due to a combination of gravity acting on the sediment, and the movement of the fluid in which the sediment is entrained. Sediment transport occurs in natural systems where the particles are clastic rocks (sand, gravel, boulders, etc.), mud, or clay; the fluid is air, water, or ice; and the force of gravity acts to move the particles along the sloping surface on which they are resting. Sediment transport due to fluid motion occurs in rivers, oceans, lakes, seas, and other bodies of water due to currents and tides. Transport is also caused by glaciers as they flow, and on terrestrial surfaces under the influence of wind. Sediment transport due only to gravity can occur on sloping surfaces in general, including hillslopes, scarps, cliffs, and the continental shelf—continental slope boundary.
Sediment transport is important in the fields of sedimentary geology, geomorphology, civil engineering, hydraulic engineering and environmental engineering (see applications, below). Knowledge of sediment transport is most often used to determine whether erosion or deposition will occur, the magnitude of this erosion or deposition, and the time and distance over which it will occur.
Environments.
Aeolian.
"Aeolian" or "eolian" (depending on the parsing of æ) is the term for sediment transport by wind. This process results in the formation of ripples and sand dunes. Typically, the size of the transported sediment is fine sand (<1 mm) and smaller, because air is a fluid with low density and viscosity, and can therefore not exert very much shear on its bed.
Bedforms are generated by aeolian sediment transport in the terrestrial near-surface environment. Ripples and dunes form as a natural self-organizing response to sediment transport.
Aeolian sediment transport is common on beaches and in the arid regions of the world, because it is in these environments that vegetation does not prevent the presence and motion of fields of sand.
Wind-blown very fine-grained dust is capable of entering the upper atmosphere and moving across the globe. Dust from the Sahara deposits on the Canary Islands and islands in the Caribbean, and dust from the Gobi desert has deposited on the western United States. This sediment is important to the soil budget and ecology of several islands.
Deposits of fine-grained wind-blown glacial sediment are called loess.
Coastal.
Coastal sediment transport takes place in near-shore environments due to the motions of waves and currents. At the mouths of rivers, coastal sediment and fluvial sediment transport processes mesh to create river deltas.
Coastal sediment transport results in the formation of characteristic coastal landforms such as beaches, barrier islands, and capes.
Glacial.
As glaciers move over their beds, they entrain and move material of all sizes. Glaciers can carry the largest sediment, and areas of glacial deposition often contain a large number of glacial erratics, many of which are several metres in diameter. Glaciers also pulverize rock into "glacial flour", which is so fine that it is often carried away by winds to create loess deposits thousands of kilometres afield. Sediment entrained in glaciers often moves approximately along the glacial flowlines, causing it to appear at the surface in the ablation zone.
Hillslope.
In hillslope sediment transport, a variety of processes move regolith downslope. These include soil creep, tree throw, the movement of soil by burrowing animals, and slumping and landsliding of the hillslope.
These processes generally combine to give the hillslope a profile that looks like a solution to the diffusion equation, where the diffusivity is a parameter that relates to the ease of sediment transport on the particular hillslope. For this reason, the tops of hills generally have a parabolic concave-up profile, which grades into a convex-up profile around valleys.
As hillslopes steepen, however, they become more prone to episodic landslides and other mass wasting events. Therefore, hillslope processes are better described by a nonlinear diffusion equation in which classic diffusion dominates for shallow slopes and erosion rates go to infinity as the hillslope reaches a critical angle of repose.
Debris flow.
Large masses of material are moved in debris flows, hyperconcentrated mixtures of mud, clasts that range up to boulder-size, and water. Debris flows move as granular flows down steep mountain valleys and washes. Because they transport sediment as a granular mixture, their transport mechanisms and capacities scale differently from those of fluvial systems.
Applications.
Sediment transport is applied to solve many environmental, geotechnical, and geological problems. Measuring or quantifying sediment transport or erosion is therefore important for coastal engineering. Several sediment erosion devices have been designed in order to quantify sediment erosion (e.g., Particle Erosion Simulator (PES)). One such device, also referred to as the BEAST (Benthic Environmental Assessment Sediment Tool) has been calibrated in order to quantify rates of sediment erosion.
Movement of sediment is important in providing habitat for fish and other organisms in rivers. Therefore, managers of highly regulated rivers, which are often sediment-starved due to dams, are often advised to stage short floods to refresh the bed material and rebuild bars. This is also important, for example, in the Grand Canyon of the Colorado River, to rebuild shoreline habitats also used as campsites.
Sediment discharge into a reservoir formed by a dam forms a reservoir delta. This delta will fill the basin, and eventually, either the reservoir will need to be dredged or the dam will need to be removed. Knowledge of sediment transport can be used to properly plan to extend the life of a dam.
Geologists can use inverse solutions of transport relationships to understand flow depth, velocity, and direction, from sedimentary rocks and young deposits of alluvial materials.
Flow in culverts, over dams, and around bridge piers can cause erosion of the bed. This erosion can damage the environment and expose or unsettle the foundations of the structure. Therefore, good knowledge of the mechanics of sediment transport in a built environment are important for civil and hydraulic engineers.
When suspended sediment transport is increased due to human activities, causing environmental problems including the filling of channels, it is called siltation after the grain-size fraction dominating the process.
Initiation of motion.
Stress balance.
For a fluid to begin transporting sediment that is currently at rest on a surface, the boundary (or bed) shear stress formula_0 exerted by the fluid must exceed the critical shear stress formula_1 for the initiation of motion of grains at the bed. This basic criterion for the initiation of motion can be written as:
formula_2.
This is typically represented by a comparison between a dimensionless shear stress formula_3 and a dimensionless critical shear stress formula_4. The nondimensionalization is in order to compare the driving forces of particle motion (shear stress) to the resisting forces that would make it stationary (particle density and size). This dimensionless shear stress, formula_5, is called the Shields parameter and is defined as:
formula_6.
And the new equation to solve becomes:
formula_7.
The equations included here describe sediment transport for clastic, or granular, sediment. They do not work for clays and muds because these types of flocculent sediments do not fit the geometric simplifications in these equations, and also interact through electrostatic forces. The equations were also designed for fluvial sediment transport of particles carried along in a liquid flow, such as that in a river, canal, or other open channel.
Only one size of particle is considered in this equation. However, river beds are often formed by a mixture of sediment of various sizes. In the case of partial motion, where only part of the sediment mixture moves, the river bed becomes enriched in large gravel as the smaller sediments are washed away. The smaller sediments present under this layer of large gravel have a lower probability of movement, and total sediment transport decreases. This is called the armouring effect. Other forms of armouring of sediment, or decreasing rates of sediment erosion, can be caused by carpets of microbial mats under conditions of high organic loading.
Critical shear stress.
The Shields diagram empirically shows how the dimensionless critical shear stress (i.e. the dimensionless shear stress required for the initiation of motion) is a function of a particular form of the particle Reynolds number, formula_8 or Reynolds number related to the particle. This allows the criterion for the initiation of motion to be rewritten in terms of a solution for a specific version of the particle Reynolds number, called formula_9.
formula_10
This can then be solved by using the empirically derived Shields curve to find formula_4 as a function of a specific form of the particle Reynolds number called the boundary Reynolds number. The mathematical solution of the equation was given by Dey.
Particle Reynolds number.
In general, a particle Reynolds number has the form:
formula_11
Where formula_12 is a characteristic particle velocity, formula_13 is the grain diameter (a characteristic particle size), and formula_14 is the kinematic viscosity, which is given by the dynamic viscosity, formula_15, divided by the fluid density, formula_16.
formula_17
The specific particle Reynolds number of interest is called the boundary Reynolds number, and it is formed by replacing the velocity term in the particle Reynolds number by the shear velocity, formula_18, which is a way of rewriting shear stress in terms of velocity.
formula_19
where formula_0 is the bed shear stress (described below), and formula_20 is the von Kármán constant, where
formula_21.
The particle Reynolds number is therefore given by:
formula_22
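Putting these pieces together, a short sketch (with assumed values for a gravel grain in water) evaluates the shear velocity, the boundary Reynolds number, and the Shields parameter:

```python
import numpy as np

rho_f, rho_s = 1000.0, 2650.0   # densities of water and quartz sediment, kg/m^3
g, nu = 9.81, 1.0e-6            # gravity (m/s^2) and kinematic viscosity of water (m^2/s)
D = 0.02                        # grain diameter, m (assumed gravel)
tau_b = 15.0                    # bed shear stress, Pa (assumed)

u_star = np.sqrt(tau_b / rho_f)                  # shear velocity
Re_star = u_star * D / nu                        # boundary Reynolds number
tau_star = tau_b / ((rho_s - rho_f) * g * D)     # Shields parameter (dimensionless shear stress)
print(u_star, Re_star, tau_star)
```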
Bed shear stress.
The boundary Reynolds number can be used with the Shields diagram to empirically solve the equation
formula_23,
which solves the right-hand side of the equation
formula_7.
In order to solve the left-hand side, expanded as
formula_24,
the bed shear stress formula_25 needs to be found. There are several ways to solve for the bed shear stress. The simplest approach is to assume the flow is steady and uniform, using the reach-averaged depth and slope. Because it is difficult to measure shear stress "in situ", this method is also one of the most commonly used. The method is known as the depth-slope product.
Depth-slope product.
For a river undergoing approximately steady, uniform equilibrium flow, of approximately constant depth "h" and slope angle θ over the reach of interest, and whose width is much greater than its depth, the bed shear stress is given by some momentum considerations stating that the gravity force component in the flow direction equals exactly the friction force. For a wide channel, it yields:
formula_26
For shallow slope angles, which are found in almost all natural lowland streams, the small-angle formula shows that formula_27 is approximately equal to formula_28, which is given by formula_29, the slope. Rewritten with this:
formula_30
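A minimal sketch of the depth-slope product, using invented reach-averaged values, which also shows how small the error of the small-angle approximation is:

```python
import numpy as np

rho, g = 1000.0, 9.81    # water density (kg/m^3) and gravity (m/s^2)
h, S = 0.8, 0.002        # reach-averaged depth (m) and slope (assumed values)

tau_b_approx = rho * g * h * S                      # small-angle form: sin(theta) ~ tan(theta) = S
tau_b_exact = rho * g * h * np.sin(np.arctan(S))    # using the actual slope angle
print(tau_b_approx, tau_b_exact)                    # essentially identical for gentle slopes
```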
Shear velocity, velocity, and friction factor.
For the steady case, by extrapolating the depth-slope product and the equation for shear velocity:
formula_30
formula_31,
The depth-slope product can be rewritten as:
formula_32.
formula_33 is related to the mean flow velocity, formula_34, through the generalized Darcy–Weisbach friction factor, formula_35, which is equal to the Darcy-Weisbach friction factor divided by 8 (for mathematical convenience). Inserting this friction factor,
formula_36.
Unsteady flow.
For all flows that cannot be simplified as a single-slope infinite channel (as in the depth-slope product, above), the bed shear stress can be locally found by applying the Saint-Venant equations for continuity, which consider accelerations within the flow.
Example.
Set-up.
The criterion for the initiation of motion, established earlier, states that
formula_7.
In this equation,
formula_37, and therefore
formula_38.
formula_4 is a function of boundary Reynolds number, a specific type of particle Reynolds number.
formula_39.
For a particular particle Reynolds number, formula_4 will be an empirical constant given by the Shields Curve or by another set of empirical data (depending on whether or not the grain size is uniform).
Therefore, the final equation to solve is:
formula_40.
Solution.
Some assumptions allow the solution of the above equation.
The first assumption is that a good approximation of reach-averaged shear stress is given by the depth-slope product. The equation then can be rewritten as:
formula_41.
Moving and re-combining the terms produces:
formula_42
where R is the submerged specific gravity of the sediment.
The second assumption is that the particle Reynolds number is high. This typically applies to particles of gravel-size or larger in a stream, and means the critical shear stress is constant. The Shields curve shows that for a bed with a uniform grain size,
formula_43.
Later researchers have shown this value is closer to
formula_44
for more uniformly sorted beds. Therefore the replacement
formula_39
is used to insert both values at the end.
The equation now reads:
formula_45
This final expression shows that the product of the channel depth and slope is equal to the Shields criterion times the submerged specific gravity of the particles times the particle diameter.
For a typical situation, such as quartz-rich sediment formula_46 in water formula_47, the submerged specific gravity is equal to 1.65.
formula_48
Plugging this into the equation above,
formula_49.
For the Shields criterion of formula_43, 0.06 * 1.65 = 0.099, which is well within the standard margin of error of 0.1. Therefore, for a uniform bed,
formula_50.
For these situations, the product of the depth and slope of the flow should be 10% of the diameter of the median grain diameter.
The mixed-grain-size bed value is formula_44, which is supported by more recent research as being more broadly applicable because most natural streams have mixed grain sizes. If this value is used, and D is changed to D_50 ("50" for the 50th percentile, or the median grain size, as an appropriate value for a mixed-grain-size bed), the equation becomes:
formula_51
Which means that the depth times the slope should be about 5% of the median grain diameter in the case of a mixed-grain-size bed.
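The two rules of thumb derived above translate directly into a quick mobility check; the depth, slope, and grain size below are assumed values for illustration:

```python
D50 = 0.02          # median grain diameter, m (assumed gravel bed)
h, S = 1.0, 0.0015  # flow depth (m) and slope (assumed)

hS = h * S
print(hS >= 0.10 * D50)   # uniform-bed criterion (Shields value of 0.06)
print(hS >= 0.05 * D50)   # mixed-grain-size criterion (value of 0.03)
```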
Modes of entrainment.
The sediments entrained in a flow can be transported along the bed as bed load in the form of sliding and rolling grains, or in suspension as suspended load advected by the main flow. Some sediment materials may also come from the upstream reaches and be carried downstream in the form of wash load.
Rouse number.
The location in the flow in which a particle is entrained is determined by the Rouse number: the density "ρ"s and diameter "d" of the sediment particle, together with the density "ρ" and kinematic viscosity "ν" of the fluid, determine in which part of the flow the sediment particle will be carried.
formula_52
Here, the Rouse number is given by "P". The term in the numerator is the (downwards) sediment settling velocity "w"s, which is discussed below. The upwards velocity on the grain is given as the product of the von Kármán constant, "κ" = 0.4, and the shear velocity, "u"∗.
Approximately, transport occurs as bed load for Rouse numbers greater than about 2.5, as suspended load for Rouse numbers between about 0.8 and 2.5, and as wash load for Rouse numbers below about 0.8.
Settling velocity.
The settling velocity (also called the "fall velocity" or "terminal velocity") is a function of the particle Reynolds number. Generally, for small particles (laminar approximation), it can be calculated with Stokes' Law. For larger particles (turbulent particle Reynolds numbers), fall velocity is calculated with the turbulent drag law. Dietrich (1982) compiled a large amount of published data to which he empirically fit settling velocity curves. Ferguson and Church (2006) analytically combined the expressions for Stokes flow and a turbulent drag law into a single equation that works for all sizes of sediment, and successfully tested it against the data of Dietrich. Their equation is
formula_53.
In this equation "ws" is the sediment settling velocity, "g" is acceleration due to gravity, and "D" is mean sediment diameter. formula_14 is the kinematic viscosity of water, which is approximately 1.0 x 10−6 m2/s for water at 20 °C.
formula_54 and formula_55 are constants related to the shape and smoothness of the grains.
The expression for fall velocity can be simplified so that it can be solved only in terms of "D". We use the sieve diameters for natural grains, formula_56, and values given above for formula_14 and formula_57. From these parameters, the fall velocity is given by the expression:
formula_58
Alternatively, settling velocity for a particle of sediment can be derived using Stokes Law assuming quiescent (or still) fluid in steady state. The resulting formulation for settling velocity is,
formula_59
where formula_60 is the gravitational constant; formula_61 is the density of the sediment; formula_62 is the density of water; formula_63 is the sediment particle diameter (commonly assumed to be the median particle diameter, often referred to as formula_64 in field studies); and formula_14 is the molecular viscosity of water. The Stokes settling velocity can be thought of as the terminal velocity resulting from balancing a particle's buoyant force (proportional to the cross-sectional area) with the gravitational force (proportional to the mass). Small particles will have a slower settling velocity than heavier particles, as seen in the figure. This has implications for many aspects of sediment transport, for example, how far downstream a particle might be advected in a river.
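A short sketch of the Stokes settling velocity described above; it applies only to small grains at low particle Reynolds numbers, and coarser grains require a turbulent-drag or combined law such as that of Ferguson and Church:

```python
def stokes_settling_velocity(D, rho_s=2650.0, rho_w=1000.0, g=9.81, nu=1.0e-6):
    """Stokes settling velocity (m/s) of a small particle in still water.

    D is the particle diameter in metres; the default densities and viscosity
    correspond to quartz sediment in water at about 20 degrees C.
    """
    return (rho_s / rho_w - 1.0) * g * D**2 / (18.0 * nu)

print(stokes_settling_velocity(50e-6))   # a ~50-micron silt grain settles at a few mm/s
```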
Hjulström–Sundborg diagram.
In 1935, Filip Hjulström created the Hjulström curve, a graph which shows the relationship between the size of sediment and the velocity required to erode (lift it), transport it, or deposit it. The graph is logarithmic.
Åke Sundborg later modified the Hjulström curve to show separate curves for the movement threshold corresponding to several water depths, as is necessary if the flow velocity rather than the boundary shear stress (as in the Shields diagram) is used for the flow strength.
This curve is now mainly of historical value, although its simplicity is still attractive. Among its drawbacks are that it does not take the water depth into account and, more importantly, that it does not show that sedimentation is caused by flow velocity "deceleration" and erosion is caused by flow "acceleration". The dimensionless Shields diagram is now widely accepted for the initiation of sediment motion in rivers.
Transport rate.
Formulas to calculate sediment transport rate exist for sediment moving in several different parts of the flow. These formulas are often segregated into bed load, suspended load, and wash load. They may sometimes also be segregated into bed material load and wash load.
Bed load.
Bed load moves by rolling, sliding, and hopping (or saltating) over the bed, and moves at a small fraction of the fluid flow velocity. Bed load is generally thought to constitute 5–10% of the total sediment load in a stream, making it less important in terms of mass balance. However, the bed material load (the bed load plus the portion of the suspended load which comprises material derived from the bed) is often dominated by bed load, especially in gravel-bed rivers. This bed material load is the only part of the sediment load that actively interacts with the bed. As the bed load is an important component of that, it plays a major role in controlling the morphology of the channel.
Bed load transport rates are usually expressed as being related to excess dimensionless shear stress raised to some power. Excess dimensionless shear stress is a nondimensional measure of bed shear stress about the threshold for motion.
formula_65,
Bed load transport rates may also be given by a ratio of bed shear stress to critical shear stress, which is equivalent in both the dimensional and nondimensional cases. This ratio is called the "transport stage" formula_66 and is important in that it expresses the bed shear stress as a multiple of the value of the criterion for the initiation of motion.
formula_67
When used for sediment transport formulae, this ratio is typically raised to a power.
The majority of the published relations for bedload transport are given in dry sediment weight per unit channel width, formula_68 ("breadth"):
formula_69.
Due to the difficulty of estimating bed load transport rates, these equations are typically only suitable for the situations for which they were designed.
Notable bed load transport formulae.
Meyer-Peter Müller and derivatives.
The transport formula of Meyer-Peter and Müller, originally developed in 1948, was designed for well-sorted fine gravel at a transport stage of about 8. The formula uses the above nondimensionalization for shear stress,
formula_37,
and Hans Einstein's nondimensionalization for sediment volumetric discharge per unit width
formula_70.
Their formula reads:
formula_71.
Their experimentally determined value for formula_72 is 0.047, and is the third commonly used value for this (in addition to Parker's 0.03 and Shields' 0.06).
Because of its broad use, some revisions to the formula have taken place over the years that show that the coefficient on the left ("8" above) is a function of the transport stage:
formula_73
formula_74
The variations in the coefficient were later generalized as a function of dimensionless shear stress:
formula_75
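A hedged sketch of the classic form of this relation, q* = 8(τ* − 0.047)^(3/2), dimensionalized with the Einstein scaling mentioned above; the grain size and Shields stress are invented, and the revised coefficients discussed above could be substituted for the 8 and 0.047:

```python
import numpy as np

def mpm_bedload(tau_star, D, R=1.65, g=9.81, tau_star_c=0.047, coeff=8.0):
    """Meyer-Peter & Mueller-type bed load estimate.

    Returns the volumetric transport rate per unit width (m^2/s) obtained by
    undoing the Einstein nondimensionalization q* = q_b / (D * sqrt(R * g * D)).
    """
    excess = max(tau_star - tau_star_c, 0.0)
    q_star = coeff * excess**1.5
    return q_star * D * np.sqrt(R * g * D)

print(mpm_bedload(tau_star=0.08, D=0.01))   # assumed Shields stress and 1 cm gravel
```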
Wilcock and Crowe.
In 2003, Peter Wilcock and Joanna Crowe (now Joanna Curran) published a sediment transport formula that works with multiple grain sizes across the sand and gravel range. Their formula works with surface grain size distributions, as opposed to older models which use subsurface grain size distributions (and thereby implicitly infer a surface grain sorting).
Their expression is more complicated than the basic sediment transport rules (such as that of Meyer-Peter and Müller) because it takes into account multiple grain sizes: this requires consideration of reference shear stresses for each grain size, the fraction of the total sediment supply that falls into each grain size class, and a "hiding function".
The "hiding function" takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.
Their model is based on the transport stage, or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with several grain sizes simultaneously, they define the critical shear stress for each grain size class, formula_76, to be equal to a "reference shear stress", formula_77.
They express their equations in terms of a dimensionless transport parameter, formula_78 (with the "formula_79" indicating nondimensionality and the "formula_80" indicating that it is a function of grain size):
formula_81
formula_82 is the volumetric bed load transport rate of size class formula_83 per unit channel width formula_68. formula_84 is the proportion of size class formula_83 that is present on the bed.
They came up with two equations, depending on the transport stage, formula_85. For formula_86:
formula_87
and for formula_88:
formula_89.
This equation asymptotically reaches a constant value of formula_78 as formula_85 becomes large.
Wilcock and Kenworthy.
In 2002, Peter Wilcock and T. A. Kenworthy, following Peter Wilcock (1998), published a sediment bed-load transport formula that works with only two sediment fractions, i.e. sand and gravel. A mixed-size sediment bed-load transport model using only two fractions offers practical advantages for both computational and conceptual modeling by taking into account the nonlinear effects of the presence of sand in gravel beds on the bed-load transport rate of both fractions. In fact, compared with the Meyer-Peter and Müller formula, the two-fraction bed load formula contains a new ingredient: the proportion formula_84 of fraction formula_90 on the bed surface, where the subscript formula_91 represents either the sand (s) or gravel (g) fraction. The proportion formula_84, as a function of sand content formula_92, physically represents the relative influence of the mechanisms controlling sand and gravel transport, associated with the change from a clast-supported to a matrix-supported gravel bed. Moreover, since formula_92 spans between 0 and 1, phenomena that vary with formula_92 include the relative size effects producing "hiding" of fine grains and "exposure" of coarse grains.
The "hiding" effect takes into account the fact that, while small grains are inherently more mobile than large grains, on a mixed-grain-size bed, they may be trapped in deep pockets between large grains. Likewise, a large grain on a bed of small particles will be stuck in a much smaller pocket than if it were on a bed of grains of the same size, which the Meyer-Peter and Müller formula refers to. In gravel-bed rivers, this can cause "equal mobility", in which small grains can move just as easily as large ones. As sand is added to the system, it moves away from the "equal mobility" portion of the hiding function to one in which grain size again matters.
Their model is based on the transport stage, "i.e." formula_85, or ratio of bed shear stress to critical shear stress for the initiation of grain motion. Because their formula works with only two fractions simultaneously, they define the critical shear stress for each of the two grain size classes, formula_77, where formula_91 represents either the sand (s) or gravel (g) fraction. The critical shear stress that represents the incipient motion for each of the two fractions is consistent with established values in the limit of pure sand and gravel beds and shows a sharp change with increasing sand content over the transition from a clast- to matrix-supported bed.
They express their equations in terms of a dimensionless transport parameter, formula_78 (with the "formula_79" indicating nondimensionality and the "formula_80" indicating that it is a function of grain size):
formula_81
formula_82 is the volumetric bed load transport rate of size class formula_83 per unit channel width formula_68. formula_84 is the proportion of size class formula_83 that is present on the bed.
They came up with two equations, depending on the transport stage, formula_85. For formula_93:
formula_87
and for formula_94:
formula_95.
This equation asymptotically reaches a constant value of formula_78 as formula_85 becomes large and the symbols formula_96 have the following values:
formula_97
formula_98
In order to apply the above formulation, it is necessary to specify the characteristic grain sizes formula_99 for the sand portion and formula_100 for the gravel portion of the surface layer, the fractions formula_101 and formula_102 of sand and gravel, respectively in the surface layer, the submerged specific gravity of the sediment R and shear velocity associated with skin friction formula_103 .
Kuhnle "et al.".
For the case in which the sand fraction is transported by the current over and through an immobile gravel bed, Kuhnle "et al." (2013), following the theoretical analysis done by Pellachini (2011), provide a new relationship for the bed load transport of the sand fraction when gravel particles remain at rest. It is worth mentioning that Kuhnle "et al." (2013) applied the Wilcock and Kenworthy (2002) formula to their experimental data and found that the predicted bed load rates of the sand fraction were about 10 times greater than those measured, with the ratio approaching 1 as the sand elevation neared the top of the gravel layer. They also hypothesized that the mismatch between predicted and measured sand bed load rates is due to the fact that the bed shear stress used for the Wilcock and Kenworthy (2002) formula was larger than that available for transport within the gravel bed, because of the sheltering effect of the gravel particles.
To overcome this mismatch, following Pellachini (2011), they assumed that the variability of the bed shear stress available for the sand to be transported by the current would be some function of the so-called "Roughness Geometry Function" (RGF), which represents the gravel bed elevations distribution. Therefore, the sand bed load formula follows as:
formula_104
where
formula_105
the subscript formula_106 refers to the sand fraction, s represents the ratio formula_107 where formula_108 is the sand fraction density, formula_109 is the RGF as a function of the sand level formula_110 within the gravel bed, formula_111 is the bed shear stress available for sand transport, and formula_112 is the critical shear stress for incipient motion of the sand fraction, which was calculated graphically using the updated Shields-type relation of Miller "et al." (1977), formula_113.
Suspended load.
Suspended load is carried in the lower to middle parts of the flow, and moves at a large fraction of the mean flow velocity in the stream.
A common characterization of suspended sediment concentration in a flow is given by the Rouse Profile. This characterization works for the situation in which sediment concentration formula_114 at one particular elevation above the bed formula_115 can be quantified. It is given by the expression:
formula_116
Here, formula_117 is the elevation above the bed, formula_118 is the concentration of suspended sediment at that elevation, formula_119 is the flow depth, formula_120 is the Rouse number, and formula_121 relates the eddy viscosity for momentum formula_122 to the eddy diffusivity for sediment, which is approximately equal to one.
formula_123
Experimental work has shown that formula_121 ranges from 0.93 to 1.10 for sands and silts.
The Rouse profile characterizes sediment concentrations because the Rouse number includes both turbulent mixing and settling under the weight of the particles. Turbulent mixing results in the net motion of particles from regions of high concentrations to low concentrations. Because particles settle downward, for all cases where the particles are not neutrally buoyant or sufficiently light that this settling velocity is negligible, there is a net negative concentration gradient as one goes upward in the flow. The Rouse Profile therefore gives the concentration profile that provides a balance between turbulent mixing (net upwards) of sediment and the downwards settling velocity of each particle.
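A hedged sketch of the standard form of the Rouse profile, written with a reference concentration c_a measured at an elevation a above the bed; the variable names and numerical values here are illustrative, not taken from the text above:

```python
import numpy as np

def rouse_concentration(z, h, a, c_a, P):
    """Suspended-sediment concentration from the Rouse profile.

    c(z)/c_a = [((h - z)/z) * (a/(h - a))]^P, where c_a is the concentration
    measured at reference elevation a above the bed and P is the Rouse number.
    """
    return c_a * (((h - z) / z) * (a / (h - a)))**P

h, a, c_a, P = 2.0, 0.1, 0.05, 1.0            # assumed depth, reference level, concentration, Rouse number
z = np.linspace(a, 0.95 * h, 5)               # elevations between the reference level and near the surface
print(rouse_concentration(z, h, a, c_a, P))   # concentration decreases upward, as described above
```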
Bed material load.
Bed material load comprises the bed load and the portion of the suspended load that is sourced from the bed.
Three common bed material transport relations are the "Ackers–White", "Engelund–Hansen", and "Yang" formulae. The first is for sand to granule-size gravel, and the second and third are for sand, though Yang later expanded his formula to include fine gravel. All of these formulae cover the sand-size range, and two of them are exclusively for sand, because the sediment in sand-bed rivers is commonly moved simultaneously as bed and suspended load.
Engelund–Hansen.
The bed material load formula of Engelund and Hansen is the only one of these that does not include some kind of critical value for the initiation of sediment transport. It reads:
formula_124
where formula_125 is the Einstein nondimensionalization for sediment volumetric discharge per unit width, formula_126 is a friction factor, and formula_5 is the Shields stress. The Engelund–Hansen formula is one of the few sediment transport formulae in which a threshold "critical shear stress" is absent.
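For illustration, the relation can be converted back to a dimensional rate with the Einstein non-dimensionalization formula_70; the input values below are arbitrary.

```python
import math

# Illustrative sketch (not from the source): Engelund-Hansen,
# q_s* = (0.05 / c_f) * tau***2.5, dimensionalized via
# q_s* = q_s / (D * sqrt(R g D)).

def engelund_hansen_qs(tau_star, c_f, D, R=1.65, g=9.81):
    q_star = 0.05 / c_f * tau_star**2.5
    return q_star * D * math.sqrt(R * g * D)   # volumetric rate per unit width [m^2/s]

# Example: 1 mm sand, Shields stress 0.3, friction factor 0.005.
print(engelund_hansen_qs(tau_star=0.3, c_f=0.005, D=0.001))
```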
Wash load.
Wash load is carried within the water column as part of the flow, and therefore moves with the mean velocity of the main stream. Wash load concentrations are approximately uniform in the water column. This is described by the endmember case in which the Rouse number is equal to 0 (i.e. the settling velocity is far less than the turbulent mixing velocity), which leads to a prediction of a perfectly uniform vertical concentration profile of material.
Total load.
Some authors have attempted formulations for the total sediment load carried in water. These formulas are designed largely for sand, as (depending on flow conditions) sand often can be carried as both bed load and suspended load in the same stream or shoreface.
Bed load sediment mitigation at intake structures.
Riverside intake structures used in water supply, canal diversions, and water cooling can experience entrainment of bed load (sand-size) sediments. These entrained sediments produce multiple deleterious effects, such as reduction or blockage of intake capacity, damage to or vibration of feedwater pump impellers, and sediment deposition in downstream pipelines and canals. Structures that modify local near-field secondary currents are useful for mitigating these effects and limiting or preventing bed load sediment entry.
| [
{
"math_id": 0,
"text": "\\tau_b"
},
{
"math_id": 1,
"text": "\\tau_c"
},
{
"math_id": 2,
"text": "\\tau_b=\\tau_c"
},
{
"math_id": 3,
"text": "\\tau_b*"
},
{
"math_id": 4,
"text": "\\tau_c*"
},
{
"math_id": 5,
"text": "\\tau*"
},
{
"math_id": 6,
"text": "\\tau*=\\frac{\\tau}{(\\rho_s-\\rho_f)(g)(D)}"
},
{
"math_id": 7,
"text": "\\tau_b*=\\tau_c*"
},
{
"math_id": 8,
"text": "\\mathrm{Re}_p"
},
{
"math_id": 9,
"text": "\\mathrm{Re}_p*"
},
{
"math_id": 10,
"text": "\\tau_b*=f\\left(\\mathrm{Re}_p*\\right)"
},
{
"math_id": 11,
"text": "\\mathrm{Re}_p=\\frac{U_p D}{\\nu}"
},
{
"math_id": 12,
"text": "U_p"
},
{
"math_id": 13,
"text": "D"
},
{
"math_id": 14,
"text": "\\nu"
},
{
"math_id": 15,
"text": "\\mu"
},
{
"math_id": 16,
"text": "{\\rho_f}"
},
{
"math_id": 17,
"text": "\\nu=\\frac{\\mu}{\\rho_f}"
},
{
"math_id": 18,
"text": "u_*"
},
{
"math_id": 19,
"text": "u_*=\\sqrt{\\frac{\\tau_b}{\\rho_f}}=\\kappa z \\frac{\\partial u}{\\partial z}"
},
{
"math_id": 20,
"text": " \\kappa "
},
{
"math_id": 21,
"text": " \\kappa = {0.407}"
},
{
"math_id": 22,
"text": "\\mathrm{Re}_p*=\\frac{u_* D}{\\nu}"
},
{
"math_id": 23,
"text": "\\tau_c*=f\\left(\\mathrm{Re}_p*\\right)"
},
{
"math_id": 24,
"text": "\\tau_b*=\\frac{\\tau_b}{(\\rho_s-\\rho_f)(g)(D)}"
},
{
"math_id": 25,
"text": "{\\tau_b}"
},
{
"math_id": 26,
"text": "\\tau_b=\\rho g h \\sin(\\theta)"
},
{
"math_id": 27,
"text": "\\sin(\\theta)"
},
{
"math_id": 28,
"text": "\\tan(\\theta)"
},
{
"math_id": 29,
"text": "S"
},
{
"math_id": 30,
"text": "\\tau_b=\\rho g h S"
},
{
"math_id": 31,
"text": "u_*=\\sqrt{\\left(\\frac{\\tau_b}{\\rho}\\right)}"
},
{
"math_id": 32,
"text": "\\tau_b=\\rho u_*^2"
},
{
"math_id": 33,
"text": "u*"
},
{
"math_id": 34,
"text": "\\bar{u}"
},
{
"math_id": 35,
"text": "C_f"
},
{
"math_id": 36,
"text": "\\tau_b=\\rho C_f \\left(\\bar{u} \\right)^2"
},
{
"math_id": 37,
"text": "\\tau*=\\frac{\\tau}{(\\rho_s-\\rho)(g)(D)}"
},
{
"math_id": 38,
"text": "\\frac{\\tau_b}{(\\rho_s-\\rho)(g)(D)}=\\frac{\\tau_{c}}{(\\rho_s-\\rho)(g)(D)}"
},
{
"math_id": 39,
"text": "\\tau_c*=f \\left(\\mathrm{Re}_p* \\right)"
},
{
"math_id": 40,
"text": "\\frac{\\tau_b}{(\\rho_s-\\rho)(g)(D)}=f \\left(\\mathrm{Re}_p* \\right)"
},
{
"math_id": 41,
"text": "{\\rho g h S}=f\\left(\\mathrm{Re}_p* \\right){(\\rho_s-\\rho)(g)(D)}"
},
{
"math_id": 42,
"text": "{h S}={\\frac{(\\rho_s-\\rho)}{\\rho}(D)}\\left(f \\left(\\mathrm{Re}_p* \\right) \\right)=R D \\left(f \\left(\\mathrm{Re}_p* \\right) \\right)"
},
{
"math_id": 43,
"text": "\\tau_c*=0.06"
},
{
"math_id": 44,
"text": "\\tau_c*=0.03"
},
{
"math_id": 45,
"text": "{h S}=R D \\tau_c*"
},
{
"math_id": 46,
"text": "\\left(\\rho_s=2650 \\frac{kg}{m^3} \\right)"
},
{
"math_id": 47,
"text": "\\left(\\rho=1000 \\frac{kg}{m^3} \\right)"
},
{
"math_id": 48,
"text": "R=\\frac{(\\rho_s-\\rho)}{\\rho}=1.65"
},
{
"math_id": 49,
"text": "{h S}=1.65(D)\\tau_c*"
},
{
"math_id": 50,
"text": "{h S}={0.1(D)}"
},
{
"math_id": 51,
"text": "{h S}={0.05(D_{50})}"
},
{
"math_id": 52,
"text": "P=\\frac{w_s}{\\kappa u_\\ast}"
},
{
"math_id": 53,
"text": "w_s=\\frac{RgD^2}{C_1 \\nu + (0.75 C_2 R g D^3)^{(0.5)}}"
},
{
"math_id": 54,
"text": "C_1"
},
{
"math_id": 55,
"text": "C_2"
},
{
"math_id": 56,
"text": "g=9.8"
},
{
"math_id": 57,
"text": "R"
},
{
"math_id": 58,
"text": "w_s=\\frac{16.17D^2}{1.8\\cdot10^{-5} + (12.1275D^3)^{(0.5)}}"
},
{
"math_id": 59,
"text": "{\\displaystyle w_{s}={\\frac {g~({\\frac {\\rho _{s}-\\rho }{\\rho }})~d_{sed}^{2}}{18\\nu }}},"
},
{
"math_id": 60,
"text": "g"
},
{
"math_id": 61,
"text": "\\rho_s"
},
{
"math_id": 62,
"text": "\\rho"
},
{
"math_id": 63,
"text": "d_{sed}"
},
{
"math_id": 64,
"text": "d_{50}"
},
{
"math_id": 65,
"text": "(\\tau^*_b-\\tau^*_c)"
},
{
"math_id": 66,
"text": "(T_s \\text{ or } \\phi)"
},
{
"math_id": 67,
"text": "T_s=\\phi=\\frac{\\tau_b}{\\tau_c}"
},
{
"math_id": 68,
"text": "b"
},
{
"math_id": 69,
"text": "q_s=\\frac{Q_s}{b}"
},
{
"math_id": 70,
"text": "q_s* = \\frac{q_s}{D \\sqrt{\\frac{\\rho_s-\\rho}{\\rho} g D}} = \\frac{q_s}{Re_p \\nu}"
},
{
"math_id": 71,
"text": "q_s* = 8\\left(\\tau*-\\tau*_c \\right)^{3/2}"
},
{
"math_id": 72,
"text": "\\tau*_c"
},
{
"math_id": 73,
"text": "T_s \\approx 2 \\rightarrow q_s* = 5.7\\left(\\tau*-0.047 \\right)^{3/2}"
},
{
"math_id": 74,
"text": "T_s \\approx 100 \\rightarrow q_s* = 12.1\\left(\\tau*-0.047 \\right)^{3/2}"
},
{
"math_id": 75,
"text": "\\begin{cases} q_s* = \\alpha_s \\left(\\tau*-\\tau_c* \\right)^n \\\\ n = \\frac{3}{2} \\\\ \\alpha_s = 1.6 \\ln\\left(\\tau*\\right) + 9.8 \\approx 9.64 \\tau*^{0.166} \\end{cases}"
},
{
"math_id": 76,
"text": "\\tau_{c,D_i}"
},
{
"math_id": 77,
"text": "\\tau_{ri}"
},
{
"math_id": 78,
"text": "W_i^*"
},
{
"math_id": 79,
"text": "*"
},
{
"math_id": 80,
"text": "_i"
},
{
"math_id": 81,
"text": "W_i^* = \\frac{R g q_{bi}}{F_i u*^3}"
},
{
"math_id": 82,
"text": "q_{bi}"
},
{
"math_id": 83,
"text": "i"
},
{
"math_id": 84,
"text": "F_i"
},
{
"math_id": 85,
"text": "\\phi"
},
{
"math_id": 86,
"text": "\\phi < 1.35"
},
{
"math_id": 87,
"text": "W_i^* = 0.002 \\phi^{7.5}"
},
{
"math_id": 88,
"text": "\\phi \\geq 1.35"
},
{
"math_id": 89,
"text": "W_i^* = 14 \\left(1 - \\frac{0.894}{\\phi^{0.5}}\\right)^{4.5}"
},
{
"math_id": 90,
"text": "i "
},
{
"math_id": 91,
"text": "_i "
},
{
"math_id": 92,
"text": "f_s"
},
{
"math_id": 93,
"text": "\\phi < \\phi^'"
},
{
"math_id": 94,
"text": "\\phi \\geq \\phi^'"
},
{
"math_id": 95,
"text": "W_i^* = A \\left(1 - \\frac{\\chi}{\\phi^{0.5}}\\right)^{4.5}"
},
{
"math_id": 96,
"text": "A,\\phi^',\\chi"
},
{
"math_id": 97,
"text": "A = 70 ,\\phi^'=1.19 ,\\chi=0.908, \\text{laboratory}"
},
{
"math_id": 98,
"text": "A = 115, \\phi^'=1.27 ,\\chi=0.923, \\text{field}"
},
{
"math_id": 99,
"text": "D_s"
},
{
"math_id": 100,
"text": "D_g"
},
{
"math_id": 101,
"text": " F_s"
},
{
"math_id": 102,
"text": " F_g"
},
{
"math_id": 103,
"text": " u_*"
},
{
"math_id": 104,
"text": "q^*_s = 2.29*10^{-5} A(z_s)^{2.14}\\left(\\frac{\\tau_b}{\\tau_{cs} } \\right)^{3.49}"
},
{
"math_id": 105,
"text": "q^*_s = \\frac{q_s}{[(s-1)gD_s]^{0.5}\\rho_sD_s}"
},
{
"math_id": 106,
"text": "_s"
},
{
"math_id": 107,
"text": "\\rho_s/\\rho_w"
},
{
"math_id": 108,
"text": " \\rho_s"
},
{
"math_id": 109,
"text": " A(z_s)"
},
{
"math_id": 110,
"text": " z_s"
},
{
"math_id": 111,
"text": " \\tau_b"
},
{
"math_id": 112,
"text": " \\tau_{cs}"
},
{
"math_id": 113,
"text": ""
},
{
"math_id": 114,
"text": "c_0"
},
{
"math_id": 115,
"text": "z_0"
},
{
"math_id": 116,
"text": "\\frac{c_s}{c_0} = \\left[\\frac{z \\left(h-z_0\\right)}{z_0\\left(h-z\\right)}\\right]^{-P/\\alpha}"
},
{
"math_id": 117,
"text": "z"
},
{
"math_id": 118,
"text": "c_s"
},
{
"math_id": 119,
"text": "h"
},
{
"math_id": 120,
"text": "P"
},
{
"math_id": 121,
"text": "\\alpha"
},
{
"math_id": 122,
"text": "K_m"
},
{
"math_id": 123,
"text": "\\alpha = \\frac{K_s}{K_m} \\approx 1"
},
{
"math_id": 124,
"text": "q_s* = \\frac{0.05}{c_f} \\tau*^{2.5} "
},
{
"math_id": 125,
"text": "q_s*"
},
{
"math_id": 126,
"text": "c_f"
}
] | https://en.wikipedia.org/wiki?curid=7999492 |
7999626 | Gårding's inequality | In mathematics, Gårding's inequality is a result that gives a lower bound for the bilinear form induced by a real linear elliptic partial differential operator. The inequality is named after Lars Gårding.
Statement of the inequality.
Let formula_0 be a bounded, open domain in formula_1-dimensional Euclidean space and let formula_2 denote the Sobolev space of formula_3-times weakly differentiable functions formula_4 with weak derivatives in formula_5. Assume that formula_0 satisfies the formula_3-extension property, i.e., that there exists a bounded linear operator formula_6 such that formula_7 for all formula_8.
Let "L" be a linear partial differential operator of even order "2k", written in divergence form
formula_9
and suppose that "L" is uniformly elliptic, i.e., there exists a constant "θ" > 0 such that
formula_10
Finally, suppose that the coefficients "Aαβ" are bounded, continuous functions on the closure of Ω for |"α"| = |"β"| = "k" and that
formula_11
Then Gårding's inequality holds: there exist constants "C" > 0 and "G" ≥ 0 such that
formula_12
where
formula_13
is the bilinear form associated to the operator "L".
Application: the Laplace operator and the Poisson problem.
Note that in this simple application the full strength of Gårding's inequality is not needed: the ellipticity estimate below also follows directly from the Poincaré inequality (Friedrichs inequality).
As a simple example, consider the Laplace operator Δ. More specifically, suppose that one wishes to solve, for "f" ∈ "L"2(Ω) the Poisson equation
formula_14
where Ω is a bounded Lipschitz domain in R"n". The corresponding weak form of the problem is to find "u" in the Sobolev space "H"01(Ω) such that
formula_15
where
formula_16
formula_17
The Lax–Milgram lemma ensures that if the bilinear form "B" is both continuous and elliptic with respect to the norm on "H"01(Ω), then, for each "f" ∈ "L"2(Ω), a unique solution "u" must exist in "H"01(Ω). The hypotheses of Gårding's inequality are easy to verify for the Laplace operator Δ, so there exist constants "C" > 0 and "G" ≥ 0 such that
formula_18
Applying the Poincaré inequality allows the two terms on the right-hand side to be combined, yielding a new constant "K" > 0 with
formula_19
which is precisely the statement that "B" is elliptic. The continuity of "B" is even easier to see: simply apply the Cauchy–Schwarz inequality and the fact that the Sobolev norm is controlled by the "L"2 norm of the gradient. | [
{
"math_id": 0,
"text": "\\Omega"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "H^k(\\Omega)"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "u\\colon\\Omega\\rightarrow\\mathbb{R}"
},
{
"math_id": 5,
"text": "L^2(\\Omega)"
},
{
"math_id": 6,
"text": "E\\colon H^k(\\Omega)\\rightarrow H^k(\\mathbb{R}^n)"
},
{
"math_id": 7,
"text": "Eu\\vert_\\Omega=u"
},
{
"math_id": 8,
"text": "u\\in H^k(\\Omega)"
},
{
"math_id": 9,
"text": "(L u)(x) = \\sum_{0 \\leq | \\alpha |, | \\beta | \\leq k} (-1)^{| \\alpha |} \\mathrm{D}^{\\alpha} \\left( A_{\\alpha \\beta} (x) \\mathrm{D}^{\\beta} u(x) \\right),"
},
{
"math_id": 10,
"text": "\\sum_{| \\alpha |, | \\beta | = k} \\xi^{\\alpha} A_{\\alpha \\beta} (x) \\xi^{\\beta} > \\theta | \\xi |^{2 k} \\mbox{ for all } x \\in \\Omega, \\xi \\in \\mathbb{R}^{n} \\setminus \\{ 0 \\}."
},
{
"math_id": 11,
"text": "A_{\\alpha \\beta} \\in L^{\\infty} (\\Omega) \\mbox{ for all } | \\alpha |, | \\beta | \\leq k."
},
{
"math_id": 12,
"text": "B[u, u] + G \\| u \\|_{L^{2} (\\Omega)}^{2} \\geq C \\| u \\|_{H^{k} (\\Omega)}^{2} \\mbox{ for all } u \\in H_{0}^{k} (\\Omega),"
},
{
"math_id": 13,
"text": "B[v, u] = \\sum_{0 \\leq | \\alpha |, | \\beta | \\leq k} \\int_{\\Omega} A_{\\alpha \\beta} (x) \\mathrm{D}^{\\alpha} u(x) \\mathrm{D}^{\\beta} v(x) \\, \\mathrm{d} x"
},
{
"math_id": 14,
"text": "\\begin{cases} - \\Delta u(x) = f(x), & x \\in \\Omega; \\\\ u(x) = 0, & x \\in \\partial \\Omega; \\end{cases}"
},
{
"math_id": 15,
"text": "B[u, v] = \\langle f, v \\rangle \\mbox{ for all } v \\in H_{0}^{1} (\\Omega),"
},
{
"math_id": 16,
"text": "B[u, v] = \\int_{\\Omega} \\nabla u(x) \\cdot \\nabla v(x) \\, \\mathrm{d} x,"
},
{
"math_id": 17,
"text": "\\langle f, v \\rangle = \\int_{\\Omega} f(x) v(x) \\, \\mathrm{d} x."
},
{
"math_id": 18,
"text": "B[u, u] \\geq C \\| u \\|_{H^{1} (\\Omega)}^{2} - G \\| u \\|_{L^{2} (\\Omega)}^{2} \\mbox{ for all } u \\in H_{0}^{1} (\\Omega)."
},
{
"math_id": 19,
"text": "B[u, u] \\geq K \\| u \\|_{H^{1} (\\Omega)}^{2} \\mbox{ for all } u \\in H_{0}^{1} (\\Omega),"
}
] | https://en.wikipedia.org/wiki?curid=7999626 |
799986 | Fraunhofer diffraction | Far-field diffraction
In optics, the Fraunhofer diffraction equation is used to model the diffraction of waves when plane waves are incident on a diffracting object and the diffraction pattern is viewed at a sufficiently long distance (a distance satisfying the Fraunhofer condition) from the object (in the far-field region), and also when it is viewed at the focal plane of an imaging lens. In contrast, the diffraction pattern created near the diffracting object (in the near-field region) is given by the Fresnel diffraction equation.
The equation was named in honor of Joseph von Fraunhofer although he was not actually involved in the development of the theory.
This article explains where the Fraunhofer equation can be applied, and shows Fraunhofer diffraction patterns for various apertures. A detailed mathematical treatment of Fraunhofer diffraction is given in Fraunhofer diffraction equation.
Equation.
When a beam of light is partly blocked by an obstacle, some of the light is scattered around the object, and light and dark bands are often seen at the edge of the shadow – this effect is known as diffraction. These effects can be modelled using the Huygens–Fresnel principle; Huygens postulated that every point on a wavefront acts as a source of spherical secondary wavelets and that the sum of these secondary wavelets determines the form of the proceeding wave at any subsequent time, while Fresnel developed an equation using the Huygens wavelets together with the principle of superposition of waves, which models these diffraction effects quite well.
It is generally not straightforward to calculate the wave amplitude given by the sum of the secondary wavelets (the sum is itself a wave), each of which has its own amplitude, phase, and oscillation direction (polarization), since this involves the addition of many waves of varying amplitude, phase, and polarization. When two light waves as electromagnetic fields are added together (vector sum), the amplitude of the sum depends on the amplitudes, the phases, and even the polarizations of the individual waves. Along a direction onto which the electromagnetic fields are projected (or when the two waves have the same polarization), two waves of equal (projected) amplitude which are in phase give a resultant amplitude that is double the individual wave amplitudes, while two waves of equal amplitude which are in opposite phases cancel each other, giving zero resultant amplitude. Generally, a two-dimensional integral over complex variables has to be solved and in many cases, an analytic solution is not available.
The Fraunhofer diffraction equation is a simplified version of Kirchhoff's diffraction formula, and it can be used to model light diffraction when both the light source and the viewing plane (the plane of observation where the diffracted wave is observed) are effectively infinitely distant from the diffracting aperture. With a sufficiently distant light source, the light incident on the aperture is effectively a plane wave, so that the phase of the light at each point on the aperture is the same. At a sufficiently distant plane of observation, the phase of the wave coming from each point on the aperture varies linearly with the point's position on the aperture, making the calculation of the sum of the waves at an observation point relatively straightforward in many cases. Even the amplitudes of the secondary waves coming from the aperture can be treated as equal or constant at the observation point in such a simple diffraction calculation. Diffraction under such a geometrical condition is called "Fraunhofer diffraction", and the condition under which Fraunhofer diffraction is valid is called the "Fraunhofer condition", as shown in the right box. A diffracted wave is often called a "far field" wave if it at least partially satisfies the Fraunhofer condition, i.e. the distance formula_0 between the aperture and the observation plane satisfies formula_1.
For example, if a 0.5 mm diameter circular hole is illuminated by a laser light with 0.6 μm wavelength, then Fraunhofer diffraction occurs if the viewing distance is greater than 1000 mm.
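The quoted numbers can be checked with a couple of lines of Python (illustrative only):

```python
# Quick numerical check of the Fraunhofer condition L >> W**2 / wavelength
# for the example above (0.5 mm aperture, 0.6 um light).
W, wavelength = 0.5e-3, 0.6e-6          # metres
print(W**2 / wavelength)                # ~0.42 m, so L of order 1000 mm satisfies the condition
```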
Derivation of Fraunhofer condition.
The derivation of Fraunhofer condition here is based on the geometry described in the right box. The diffracted wave path "r"2 can be expressed in terms of another diffracted wave path "r"1 and the distance "b" between two diffracting points by using the law of cosines;
formula_2
This can be expanded by calculating the expression's Taylor series to second order with respect to formula_3,
formula_4
The phase difference between waves propagating along the paths "r"2 and "r"1 are, with the wavenumber where λ is the light wavelength,
formula_5
If formula_6, so that formula_7, then the phase difference is approximately formula_8. The geometrical implication of this expression is that the paths "r"2 and "r"1 are approximately parallel to each other. Since there can be a path between the diffracting plane and the observation plane whose angle with respect to a straight line parallel to the optical axis is close to 0, this approximation condition can be further simplified as formula_9, where "L" is the distance between the two planes along the optical axis. Because an incident wave on the diffracting plane is effectively a plane wave if formula_9 is satisfied with "L" the distance between the diffracting plane and the point wave source, the Fraunhofer condition is formula_9, where "L" is the smaller of the two distances: between the diffracting plane and the plane of observation, and between the diffracting plane and the point wave source.
Focal plane of a positive lens as the far field plane.
In the far field, propagation paths for wavelets from every point on an aperture to a point of observation are approximately parallel, and a positive lens (focusing lens) focuses parallel rays toward a point on its focal plane (the position of the focus on the focal plane depends on the angle of the parallel rays with respect to the optical axis). So, if a positive lens with a sufficiently long focal length (so that differences between electric field orientations for wavelets can be ignored at the focus) is placed after an aperture, then the lens effectively forms the Fraunhofer diffraction pattern of the aperture on its focal plane, since the parallel rays meet each other at the focus.
Examples.
In each of these examples, the aperture is illuminated by a monochromatic plane wave at normal incidence.
Diffraction by a narrow rectangular slit.
The width of the slit is W. The Fraunhofer diffraction pattern is shown in the image together with a plot of the intensity vs. angle θ. The pattern has maximum intensity at "θ" = 0, and a series of peaks of decreasing intensity. Most of the diffracted light falls between the first minima. The angle, α, subtended by these two minima is given by:
formula_10
Thus, the smaller the aperture, the larger the angle α subtended by the diffraction bands. The size of the central band at a distance "z" is given by
formula_11
For example, when a slit of width 0.5 mm is illuminated by light of wavelength 0.6 μm, and viewed at a distance of 1000 mm, the width of the central band in the diffraction pattern is 2.4 mm.
The fringes extend to infinity in the "y" direction since the slit and illumination also extend to infinity.
If W < λ, the intensity of the diffracted light does not fall to zero, and if W ≪ λ, the diffracted wave is cylindrical.
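For illustration, the numbers in the example above can be reproduced as follows:

```python
# Reproducing the single-slit numbers quoted above
# (0.5 mm slit, 0.6 um light, screen at 1000 mm).
wavelength, W, z = 0.6e-6, 0.5e-3, 1.0
alpha = 2 * wavelength / W              # angle between the first minima [rad]
d_f = 2 * wavelength * z / W            # width of the central band [m]
print(alpha, d_f)                       # 2.4e-3 rad and 2.4 mm
```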
Semi-quantitative analysis of single-slit diffraction.
We can find the angle at which a first minimum is obtained in the diffracted light by the following reasoning. Consider the light diffracted at an angle θ where the distance "CD" is equal to the wavelength of the illuminating light. The width of the slit is the distance "AC". The component of the wavelet emitted from the point A which is travelling in the θ direction is in anti-phase with the wave from the point "B" at the middle of the slit, so that the net contribution at the angle θ from these two waves is zero. The same applies to the points just below "A" and "B", and so on. Therefore, the amplitude of the total wave travelling in the direction θ is zero. We have:
formula_12
The angle subtended by the first minima on either side of the centre is then, as above:
formula_13
There is no such simple argument to enable us to find the maxima of the diffraction pattern.
Single-slit diffraction using Huygens' principle.
We can develop an expression for the far field of a continuous array of point sources of uniform amplitude and of the same phase. Let the array of length "a" be parallel to the y axis with its center at the origin as indicated in the figure to the right. Then the differential field is:
formula_14
where formula_15. However, formula_16, and integrating from formula_17 to formula_18,
formula_19
where formula_20.
Integrating we then get
formula_21
Letting formula_22, where the array length in radians is formula_23, we have
formula_24
Diffraction by a rectangular aperture.
The form of the diffraction pattern given by a rectangular aperture is shown in the figure on the right (or above, in tablet format). There is a central semi-rectangular peak, with a series of horizontal and vertical fringes. The dimensions of the central band are related to the dimensions of the slit by the same relationship as for a single slit so that the larger dimension in the diffracted image corresponds to the smaller dimension in the slit. The spacing of the fringes is also inversely proportional to the slit dimension.
If the illuminating beam does not illuminate the whole vertical length of the slit, the spacing of the vertical fringes is determined by the dimensions of the illuminating beam. Close examination of the double-slit diffraction pattern below shows that there are very fine horizontal diffraction fringes above and below the main spot, as well as the more obvious horizontal fringes.
Diffraction by a circular aperture.
The diffraction pattern given by a circular aperture is shown in the figure on the right. This is known as the Airy diffraction pattern. It can be seen that most of the light is in the central disk. The angle subtended by this disk, known as the Airy disk, is
formula_25
where "W" is the diameter of the aperture.
The Airy disk can be an important parameter in limiting the ability of an imaging system to resolve closely located objects.
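An illustrative calculation (values assumed, not from the text) of the Airy disk angle for a 0.5 mm circular aperture at 0.6 μm:

```python
# Angular diameter of the Airy disk, alpha ~ 1.22 * lambda / W.
wavelength, W = 0.6e-6, 0.5e-3
print(1.22 * wavelength / W)            # ~1.5e-3 rad
```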
Diffraction by an aperture with a Gaussian profile.
The diffraction pattern given by an aperture with a Gaussian profile, for example a photographic slide whose transmissivity has a Gaussian variation, is also a Gaussian function. The form of the function is plotted on the right (above, for a tablet), and it can be seen that, unlike the diffraction patterns produced by rectangular or circular apertures, it has no secondary rings. This technique can be used in a process called apodization: the aperture is covered by a Gaussian filter, giving a diffraction pattern with no secondary rings.
The output profile of a single mode laser beam may have a Gaussian intensity profile and the diffraction equation can be used to show that it maintains that profile however far away it propagates from the source.
Diffraction by a double slit.
In the double-slit experiment, the two slits are illuminated by a single light beam. If the width of the slits is small enough (less than the wavelength of the light), the slits diffract the light into cylindrical waves. These two cylindrical wavefronts are superimposed, and the amplitude, and therefore the intensity, at any point in the combined wavefronts depends on both the magnitude and the phase of the two wavefronts. The resulting bright and dark bands are often known as Young's fringes.
The angular spacing of the fringes is given by
formula_26
The spacing of the fringes at a distance z from the slits is given by
formula_27
where d is the separation of the slits.
The fringes in the picture were obtained using the yellow light from a sodium lamp (wavelength = 589 nm), with slits separated by 0.25 mm, and projected directly onto the image plane of a digital camera.
Double-slit interference fringes can be observed by cutting two slits in a piece of card, illuminating with a laser pointer, and observing the diffracted light at a distance of 1 m. If the slit separation is 0.5 mm, and the wavelength of the laser is 600 nm, then the spacing of the fringes viewed at a distance of 1 m would be 1.2 mm.
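Both fringe spacings can be reproduced with the formula above; the 1 m distance used below for the sodium-lamp case is an assumption for illustration, since the text does not state it.

```python
# Fringe spacing w = z * lambda / d.
def fringe_spacing(z, wavelength, d):
    return z * wavelength / d

print(fringe_spacing(1.0, 600e-9, 0.5e-3))    # laser-pointer example: 1.2 mm
print(fringe_spacing(1.0, 589e-9, 0.25e-3))   # sodium example at an assumed 1 m: ~2.4 mm
```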
Semi-quantitative explanation of double-slit fringes.
The difference in phase between the two waves is determined by the difference in the distance travelled by the two waves.
If the viewing distance is large compared with the separation of the slits (the far field), the phase difference can be found using the geometry shown in the figure. The path difference between two waves travelling at an angle θ is given by
formula_28
When the two waves are in phase, i.e. the path difference is equal to an integral number of wavelengths, the summed amplitude, and therefore the summed intensity is maximal, and when they are in anti-phase, i.e. the path difference is equal to half a wavelength, one and a half wavelengths, etc., then the two waves cancel, and the summed intensity is zero. This effect is known as interference.
The interference fringe maxima occur at angles
formula_29
where λ is the wavelength of the light. The angular spacing of the fringes is given by
formula_30
When the distance between the slits and the viewing plane is "z", the spacing of the fringes is equal to "zθ" and is the same as above:
formula_31
Diffraction by a grating.
A grating is defined in Born and Wolf as "any arrangement which imposes on an incident wave a periodic variation of amplitude or phase, or both".
A grating whose elements are separated by "S" diffracts a normally incident beam of light into a set of beams, at angles "θ""n" given by:
formula_32
This is known as the grating equation. The finer the grating spacing, the greater the angular separation of the diffracted beams.
If the light is incident at an angle θ0, the grating equation is:
formula_33
The detailed structure of the repeating pattern determines the form of the individual diffracted beams, as well as their relative intensity while the grating spacing always determines the angles of the diffracted beams.
The image on the right shows a laser beam diffracted by a grating into "n" = 0, and ±1 beams. The angles of the first order beams are about 20°; if we assume the wavelength of the laser beam is 600 nm, we can infer that the grating spacing is about 1.8 μm.
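The grating-spacing estimate quoted above can be reproduced as follows (the 600 nm wavelength is the assumed value mentioned in the text):

```python
import math

# From sin(theta_n) = n * lambda / S with n = 1 and theta_1 ~ 20 degrees.
wavelength, theta1 = 600e-9, math.radians(20)
S = wavelength / math.sin(theta1)
print(S)                                # ~1.75e-6 m, i.e. about 1.8 um
```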
Semi-quantitative explanation.
A simple grating consists of a series of slits in a screen. If the light travelling at an angle θ from each slit has a path difference of one wavelength with respect to the adjacent slit, all these waves will add together, so that the maximum intensity of the diffracted light is obtained when:
formula_34
This is the same relationship that is given above. | [
{
"math_id": 0,
"text": "L"
},
{
"math_id": 1,
"text": "L\\gg \\frac{W^2}{\\lambda}"
},
{
"math_id": 2,
"text": "{r_2} = {\\left( r_1^2 + b^2 - 2b{r_1}\\cos \\left( \\frac{\\pi }{2} - \\theta \\right) \\right)}^{\\frac{1}{2}} = {r_1}{\\left( 1+\\frac{b^2}{r_1^2} - 2\\frac{b}{r_1} \\sin \\theta \\right)}^{\\frac{1}{2}}."
},
{
"math_id": 3,
"text": "\\frac{b}{r_1}"
},
{
"math_id": 4,
"text": "{r_2}={r_1}\\left( 1-\\frac{b}{r_1}\\sin \\theta +\\frac{b^2}{2 r_1^2} \\cos^2 \\theta + \\cdots \\right) = {r_1} - b\\sin \\theta +\\frac{b^2}{2 r_1} \\cos^2 \\theta + \\cdots ~."
},
{
"math_id": 5,
"text": "k{r_2}-k{r_1} = -kb\\sin \\theta +k\\frac{b^2}{2r_1} \\cos^2 \\theta + \\cdots ."
},
{
"math_id": 6,
"text": "k\\frac{b^2}{2{r_1}} \\cos^2 \\theta = \\pi \\frac{b^2}{\\lambda r_1} \\cos^2 \\theta \\ll \\pi "
},
{
"math_id": 7,
"text": "\\frac{b^2}{\\lambda r_1} \\cos^2 \\theta \\ll 1"
},
{
"math_id": 8,
"text": "k r_2 - k r_1 \\approx -kb\\sin \\theta "
},
{
"math_id": 9,
"text": "\\frac{b^2}{\\lambda }\\ll L"
},
{
"math_id": 10,
"text": " \\alpha \\approx {\\frac{2 \\lambda}{W}} "
},
{
"math_id": 11,
"text": "d_f = \\frac {2 \\lambda z}{W}"
},
{
"math_id": 12,
"text": "\\theta_\\text{min} \\approx \\frac {CD} {AC} = \\frac{\\lambda}{W}."
},
{
"math_id": 13,
"text": "\\alpha = 2 \\theta_\\text{min} = \\frac{2\\lambda}{W}."
},
{
"math_id": 14,
"text": "dE=\\frac{A}{r_1}e^{i \\omega [t-(r_1/c)]}dy=\\frac{A}{r_1}e^{i(\\omega t-\\beta r_1)}dy"
},
{
"math_id": 15,
"text": "\\beta=\\omega/c=2\\pi /\\lambda"
},
{
"math_id": 16,
"text": "r_1=r-y\\sin\\theta"
},
{
"math_id": 17,
"text": "-a/2"
},
{
"math_id": 18,
"text": "a/2"
},
{
"math_id": 19,
"text": "E \\simeq A' \\int_{-a/2}^{a/2} e^{i\\beta y \\sin\\theta} dy"
},
{
"math_id": 20,
"text": "A' = \\frac{Ae^{i(\\omega t-\\beta r)}}{r}"
},
{
"math_id": 21,
"text": "E = \\frac{2A'}{\\beta \\sin \\theta} \\sin\\left(\\frac{\\beta a}{2} \\sin \\theta\\right)"
},
{
"math_id": 22,
"text": "\\psi^'=\\beta a \\sin \\theta = \\alpha_r \\sin \\theta"
},
{
"math_id": 23,
"text": "a_r=\\beta a=2\\pi a/\\lambda"
},
{
"math_id": 24,
"text": "E= A' a \\frac{\\sin(\\psi^'/2)}{\\psi^'/2}"
},
{
"math_id": 25,
"text": " \\alpha \\approx \\frac {1.22 \\lambda} {W}"
},
{
"math_id": 26,
"text": "\\theta_\\text{f} = \\lambda/d."
},
{
"math_id": 27,
"text": "w_\\text{f} = z \\theta_f = z \\lambda/d,"
},
{
"math_id": 28,
"text": "d \\sin \\theta \\approx d \\theta."
},
{
"math_id": 29,
"text": "d \\theta_n = n \\lambda,\\quad n = 0, \\pm 1, \\pm 2, \\ldots"
},
{
"math_id": 30,
"text": "\\theta_\\text{f} \\approx \\lambda/d."
},
{
"math_id": 31,
"text": "w = z\\lambda / d."
},
{
"math_id": 32,
"text": "~ \\sin \\theta_n = \\frac{n \\lambda} {S}, \\quad n = 0, \\pm 1, \\pm 2, \\ldots "
},
{
"math_id": 33,
"text": "\\sin \\theta_n = \\frac {n \\lambda} {S} + \\sin \\theta_0, \\quad n=0, \\pm 1, \\pm 2, \\ldots "
},
{
"math_id": 34,
"text": "W \\sin \\theta = n \\lambda, \\quad n=0, \\pm 1, \\pm 2, \\ldots "
}
] | https://en.wikipedia.org/wiki?curid=799986 |
800010 | Belief propagation | Algorithm for statistical inference on graphical models
Belief propagation, also known as sum–product message passing, is a message-passing algorithm for performing inference on graphical models, such as Bayesian networks and Markov random fields. It calculates the marginal distribution for each unobserved node (or variable), conditional on any observed nodes (or variables). Belief propagation is commonly used in artificial intelligence and information theory, and has demonstrated empirical success in numerous applications, including low-density parity-check codes, turbo codes, free energy approximation, and satisfiability.
The algorithm was first proposed by Judea Pearl in 1982, who formulated it as an exact inference algorithm on trees; it was later extended to polytrees. While the algorithm is not exact on general graphs, it has been shown to be a useful approximate algorithm.
Motivation.
Given a finite set of discrete random variables formula_0 with joint probability mass function formula_1, a common task is to compute the marginal distributions of the formula_2. The marginal of a single formula_2 is defined to be
formula_3
where formula_4 is a vector of possible values for the formula_2, and the notation formula_5 means that the sum is taken over those formula_6 whose formula_7th coordinate is equal to formula_8.
Computing marginal distributions using this formula quickly becomes computationally prohibitive as the number of variables grows. For example, given 100 binary variables formula_9, computing a single marginal formula_2 using formula_1 and the above formula would involve summing over formula_10 possible values for formula_6. If it is known that the probability mass function formula_1 factors in a convenient way, belief propagation allows the marginals to be computed much more efficiently.
Description of the sum-product algorithm.
Variants of the belief propagation algorithm exist for several types of graphical models (Bayesian networks and Markov random fields in particular). We describe here the variant that operates on a factor graph. A factor graph is a bipartite graph containing nodes corresponding to variables formula_11 and factors formula_12, with edges between variables and the factors in which they appear. We can write the joint mass function:
formula_13
where formula_14 is the vector of neighboring variable nodes to the factor node formula_15. Any Bayesian network or Markov random field can be represented as a factor graph by using a factor for each node with its parents or a factor for each node with its neighborhood respectively.
The algorithm works by passing real valued functions called "messages" along the edges between the hidden nodes. More precisely, if formula_16 is a variable node and formula_15 is a factor node connected to formula_16 in the factor graph, then the messages formula_17 from formula_16 to formula_15 and the messages formula_18 from formula_15 to formula_16 are real-valued functions formula_19, whose domain is the set of values that can be taken by the random variable associated with formula_16, denoted formula_20. These messages contain the "influence" that one variable exerts on another. The messages are computed differently depending on whether the node receiving the message is a variable node or a factor node. Keeping the same notation:
A message from a variable node formula_16 to a factor node formula_15 is a function formula_21 defined as the product of the messages from all other neighboring factor nodes: formula_22 for every formula_23, where formula_24 is the set of factor nodes neighboring formula_16. If formula_25 is empty, then formula_26 is set to the uniform distribution over formula_20.
A message from a factor node formula_15 to a variable node formula_16 is a function formula_27 defined as the product of the factor with the messages from all other variable nodes, marginalized over all variables except the one associated with formula_16: formula_28 where formula_29 is the set of variable nodes neighboring formula_15. If formula_30 is empty, then formula_31, since in this case formula_32.
As shown by the previous formula, the complete marginalization is reduced to a sum of products of simpler terms than the ones appearing in the full joint distribution. This is the reason that belief propagation is sometimes called "sum-product message passing", or the "sum-product algorithm".
In a typical run, each message will be updated iteratively from the previous value of the neighboring messages. Different scheduling can be used for updating the messages. In the case where the graphical model is a tree, an optimal scheduling converges after computing each message exactly once (see next sub-section). When the factor graph has cycles, such an optimal scheduling does not exist, and a typical choice is to update all messages simultaneously at each iteration.
Upon convergence (if convergence occurs), the estimated marginal distribution of each node is proportional to the product of all messages from adjoining factors (up to the normalization constant):
formula_33
Likewise, the estimated joint marginal distribution of the set of variables belonging to one factor is proportional to the product of the factor and the messages from the variables:
formula_34
In the case where the factor graph is acyclic (i.e. is a tree or a forest), these estimated marginals actually converge to the true marginals in a finite number of iterations. This can be shown by mathematical induction.
Exact algorithm for trees.
In the case when the factor graph is a tree, the belief propagation algorithm will compute the exact marginals. Furthermore, with proper scheduling of the message updates, it will terminate after two full passes through the tree. This optimal scheduling can be described as follows:
Before starting, the graph is oriented by designating one node as the "root"; any non-root node which is connected to only one other node is called a "leaf".
In the first step, messages are passed inwards: starting at the leaves, each node passes a message along the (unique) edge towards the root node. The tree structure guarantees that it is possible to obtain messages from all other adjoining nodes before passing the message on. This continues until the root has obtained messages from all of its adjoining nodes.
The second step involves passing the messages back out: starting at the root, messages are passed in the reverse direction. The algorithm is completed when all leaves have received their messages.
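The following Python sketch is illustrative only and not taken from a reference: it applies the message updates above to a small, made-up tree-shaped factor graph over binary variables, using a few synchronous sweeps (which suffice on a tree), and compares the resulting marginals with brute-force enumeration; on a tree the two agree.

```python
import itertools, math

domain = (0, 1)
factors = {                                   # factor name -> (variables, table)
    "a": (("x1", "x2"), {(0, 0): 1.0, (0, 1): 0.5, (1, 0): 0.5, (1, 1): 2.0}),
    "b": (("x2", "x3"), {(0, 0): 1.0, (0, 1): 3.0, (1, 0): 1.0, (1, 1): 1.0}),
    "c": (("x1",),      {(0,): 2.0, (1,): 1.0}),
}
variables = ("x1", "x2", "x3")
nbrs = {v: [a for a, (vs, _) in factors.items() if v in vs] for v in variables}

# messages, initialized to 1 (uninformative)
m_vf = {(v, a): {x: 1.0 for x in domain} for v in variables for a in nbrs[v]}
m_fv = {(a, v): {x: 1.0 for x in domain} for a, (vs, _) in factors.items() for v in vs}

for _ in range(5):
    # variable -> factor: product of messages from the other neighboring factors
    m_vf = {(v, a): {x: math.prod(m_fv[(b, v)][x] for b in nbrs[v] if b != a)
                     for x in domain} for (v, a) in m_vf}
    # factor -> variable: factor times incoming messages, marginalized over the rest
    new_fv = {}
    for a, (vs, table) in factors.items():
        for v in vs:
            msg = {x: 0.0 for x in domain}
            for assignment in itertools.product(domain, repeat=len(vs)):
                vals = dict(zip(vs, assignment))
                msg[vals[v]] += table[assignment] * math.prod(
                    m_vf[(u, a)][vals[u]] for u in vs if u != v)
            new_fv[(a, v)] = msg
    m_fv = new_fv

def bp_marginal(v):                           # belief: product of incoming factor messages
    unnorm = {x: math.prod(m_fv[(a, v)][x] for a in nbrs[v]) for x in domain}
    z = sum(unnorm.values())
    return {x: p / z for x, p in unnorm.items()}

def brute_marginal(v):                        # exact marginal by enumeration
    unnorm = {x: 0.0 for x in domain}
    for assignment in itertools.product(domain, repeat=len(variables)):
        vals = dict(zip(variables, assignment))
        p = math.prod(t[tuple(vals[u] for u in vs)] for vs, t in factors.values())
        unnorm[vals[v]] += p
    z = sum(unnorm.values())
    return {x: q / z for x, q in unnorm.items()}

for v in variables:
    print(v, bp_marginal(v), brute_marginal(v))
```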
Approximate algorithm for general graphs.
Although it was originally designed for acyclic graphical models, the Belief Propagation algorithm can be used in general graphs. The algorithm is then sometimes called loopy belief propagation, because graphs typically contain cycles, or loops. The initialization and scheduling of message updates must be adjusted slightly (compared with the previously described schedule for acyclic graphs) because graphs might not contain any leaves. Instead, one initializes all variable messages to 1 and uses the same message definitions above, updating all messages at every iteration (although messages coming from known leaves or tree-structured subgraphs may no longer need updating after sufficient iterations). It is easy to show that in a tree, the message definitions of this modified procedure will converge to the set of message definitions given above within a number of iterations equal to the diameter of the tree.
The precise conditions under which loopy belief propagation will converge are still not well understood; it is known that on graphs containing a single loop it converges in most cases, but the probabilities obtained might be incorrect. Several sufficient (but not necessary) conditions for convergence of loopy belief propagation to a unique fixed point exist. There exist graphs which will fail to converge, or which will oscillate between multiple states over repeated iterations. Techniques like EXIT charts can provide an approximate visualization of the progress of belief propagation and an approximate test for convergence.
There are other approximate methods for marginalization including variational methods and Monte Carlo methods.
One method of exact marginalization in general graphs is called the junction tree algorithm, which is simply belief propagation on a modified graph guaranteed to be a tree. The basic premise is to eliminate cycles by clustering them into single nodes.
Related algorithm and complexity issues.
A similar algorithm is commonly referred to as the Viterbi algorithm, but it is also known as a special case of the max-product or min-sum algorithm, which solves the related problem of maximization, or most probable explanation. Instead of attempting to solve the marginal, the goal here is to find the values formula_35 that maximize the global function (i.e. the most probable values in a probabilistic setting), and it can be defined using the arg max:
formula_36
An algorithm that solves this problem is nearly identical to belief propagation, with the sums replaced by maxima in the definitions.
It is worth noting that inference problems like marginalization and maximization are NP-hard to solve exactly and approximately (at least for relative error) in a graphical model. More precisely, the marginalization problem defined above is #P-complete and maximization is NP-complete.
The memory usage of belief propagation can be reduced through the use of the Island algorithm (at a small cost in time complexity).
Relation to free energy.
The sum-product algorithm is related to the calculation of free energy in thermodynamics. Let "Z" be the partition function. A probability distribution
formula_37
(as per the factor graph representation) can be viewed as a measure of the internal energy present in a system, computed as
formula_38
The free energy of the system is then
formula_39
It can then be shown that the points of convergence of the sum-product algorithm represent the points where the free energy in such a system is minimized. Similarly, it can be shown that a fixed point of the iterative belief propagation algorithm in graphs with cycles is a stationary point of a free energy approximation.
Generalized belief propagation (GBP).
Belief propagation algorithms are normally presented as message update equations on a factor graph, involving messages between variable nodes and their neighboring factor nodes and vice versa. Considering messages between "regions" in a graph is one way of generalizing the belief propagation algorithm. There are several ways of defining the set of regions in a graph that can exchange messages. One method uses ideas introduced by Kikuchi in the physics literature, and is known as Kikuchi's cluster variation method.
Improvements in the performance of belief propagation algorithms are also achievable by breaking the replica symmetry in the distributions of the fields (messages). This generalization leads to a new kind of algorithm called survey propagation (SP), which has proved to be very efficient in NP-complete problems like satisfiability and graph coloring.
The cluster variational method and the survey propagation algorithms are two different improvements to belief propagation. The name generalized survey propagation (GSP) awaits assignment to an algorithm that merges both generalizations.
Gaussian belief propagation (GaBP).
Gaussian belief propagation is a variant of the belief propagation algorithm when the underlying distributions are Gaussian. The first work analyzing this special model was the seminal work of Weiss and Freeman.
The GaBP algorithm solves the following marginalization problem:
formula_40
where Z is a normalization constant, "A" is a symmetric positive definite matrix (inverse covariance matrix a.k.a. precision matrix) and "b" is the shift vector.
Equivalently, it can be shown that using the Gaussian model, the solution of the marginalization problem is equivalent to the MAP assignment problem:
formula_41
This problem is also equivalent to the following minimization problem of the quadratic form:
formula_42
Which is also equivalent to the linear system of equations
formula_43
Convergence of the GaBP algorithm is easier to analyze (relative to the general BP case) and there are two known sufficient convergence conditions. The first one was formulated by Weiss et al. in the year 2000, and holds when the information matrix "A" is diagonally dominant. The second convergence condition was formulated by Johnson et al. in 2006, and requires that
formula_44
where "D" = diag("A"). Later, Su and Wu established the necessary and sufficient convergence conditions for synchronous GaBP and damped GaBP, as well as another sufficient convergence condition for asynchronous GaBP. For each case, the convergence condition involves verifying 1) a set (determined by A) being non-empty, 2) the spectral radius of a certain matrix being smaller than one, and 3) the singularity issue (when converting BP message into belief) does not occur.
The GaBP algorithm was linked to the linear algebra domain, and it was shown that the GaBP algorithm can be viewed as an iterative algorithm for solving the linear system of equations "Ax" = "b" where "A" is the information matrix and "b" is the shift vector. Empirically, the GaBP algorithm is shown to converge faster than classical iterative methods like the Jacobi method, the Gauss–Seidel method, successive over-relaxation, and others. Additionally, the GaBP algorithm is shown to be immune to numerical problems of the preconditioned conjugate gradient method.
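The following Python sketch is an illustration only, not taken from the works cited above: it implements one common precision/potential parameterization of GaBP message updates and uses it to solve a small, diagonally dominant system "Ax" = "b" (so that the sufficient condition of Weiss et al. quoted above holds). The matrix, vector, and iteration count are arbitrary.

```python
import numpy as np

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 4.0, 1.0],
              [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
n = len(b)
edges = [(i, j) for i in range(n) for j in range(n) if i != j and A[i, j] != 0]
P = {e: 0.0 for e in edges}      # precision part of message i -> j
h = {e: 0.0 for e in edges}      # potential part (precision * mean) of message i -> j

for _ in range(30):              # synchronous updates; 30 sweeps are ample here
    newP, newh = dict(P), dict(h)
    for (i, j) in edges:
        others = [k for k in range(n) if k not in (i, j) and A[k, i] != 0]
        Pi = A[i, i] + sum(P[(k, i)] for k in others)   # cavity precision at node i
        hi = b[i] + sum(h[(k, i)] for k in others)      # cavity potential at node i
        newP[(i, j)] = -A[i, j] ** 2 / Pi
        newh[(i, j)] = -A[i, j] * hi / Pi
    P, h = newP, newh

# beliefs: x_i = (b_i + sum of incoming potentials) / (A_ii + sum of incoming precisions)
x = np.array([(b[i] + sum(h[(k, i)] for k in range(n) if k != i and A[k, i] != 0))
              / (A[i, i] + sum(P[(k, i)] for k in range(n) if k != i and A[k, i] != 0))
              for i in range(n)])
print(x, np.linalg.solve(A, b))  # the two agree for this example
```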
Syndrome-based BP decoding.
The previous description of the BP algorithm is called codeword-based decoding, which calculates the approximate marginal probability formula_45, given the received codeword formula_46. There is an equivalent form, which calculates formula_47, where formula_48 is the syndrome of the received codeword formula_46 and formula_49 is the decoded error. The decoded input vector is formula_50. This variation only changes the interpretation of the mass function formula_51. Explicitly, the messages are
formula_52
where formula_53 is the prior error probability on variable formula_16, and formula_54
This syndrome-based decoder doesn't require information on the received bits, and can thus be adapted to quantum codes, where the only information is the measurement syndrome.
In the binary case, formula_55, those messages can be simplified to give an exponential reduction of formula_56 in the complexity.
Define the log-likelihood ratios formula_57 and formula_58; then
formula_59
formula_60
where formula_61
The posterior log-likelihood ratio can be estimated as formula_62
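For illustration, the two LLR-domain updates above can be written directly in Python; the function names and sample values are made up.

```python
import math

def variable_update(l0, incoming_L, exclude):
    """l_v = l_v^(0) + sum of check-to-variable messages L_a from all
    neighboring checks except the one being sent to (index `exclude`)."""
    return l0 + sum(L for k, L in enumerate(incoming_L) if k != exclude)

def check_update(syndrome_bit, incoming_l):
    """L_a = (-1)**s_a * 2 * atanh( prod tanh(l_v / 2) ), taken over the
    other neighboring variables (pass only those messages in incoming_l)."""
    prod = math.prod(math.tanh(l / 2.0) for l in incoming_l)
    return (-1) ** syndrome_bit * 2.0 * math.atanh(prod)

print(variable_update(0.5, [1.0, -0.3, 0.8], exclude=1))
print(check_update(0, [1.2, -0.7, 2.0]))   # example with made-up LLR values
```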
| [
{
"math_id": 0,
"text": "X_1, \\ldots, X_n"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "X_i"
},
{
"math_id": 3,
"text": "p_{X_i}(x_i) = \\sum_{\\mathbf{x}': x'_i = x_i} p(\\mathbf{x}')"
},
{
"math_id": 4,
"text": "\\mathbf x' = (x'_1, \\ldots, x'_n)"
},
{
"math_id": 5,
"text": "\\mathbf x' : x'_i = x_i"
},
{
"math_id": 6,
"text": "\\mathbf x'"
},
{
"math_id": 7,
"text": "i"
},
{
"math_id": 8,
"text": "x_i"
},
{
"math_id": 9,
"text": "X_1, \\ldots, X_{100}"
},
{
"math_id": 10,
"text": "2^{99} \\approx 6.34 \\times 10^{29}"
},
{
"math_id": 11,
"text": "V"
},
{
"math_id": 12,
"text": "F"
},
{
"math_id": 13,
"text": "p(\\mathbf{x}) = \\prod_{a \\in F} f_a (\\mathbf{x}_a)"
},
{
"math_id": 14,
"text": "\\mathbf x_a"
},
{
"math_id": 15,
"text": "a"
},
{
"math_id": 16,
"text": "v"
},
{
"math_id": 17,
"text": "\\mu_{v \\to a}"
},
{
"math_id": 18,
"text": "\\mu_{a \\to v}"
},
{
"math_id": 19,
"text": "\\mu_{v \\to a}, \\mu_{a \\to v} : \\operatorname{Dom}(v) \\to \\mathbb R"
},
{
"math_id": 20,
"text": "\\operatorname{Dom}(v)"
},
{
"math_id": 21,
"text": "\\mu_{v \\to a}: \\operatorname{Dom}(v) \\to \\mathbb R"
},
{
"math_id": 22,
"text": "\\mu_{v \\to a} (x_v) = \\prod_{a^* \\in N(v)\\setminus\\{a\\} } \\mu_{a^* \\to v} (x_v)"
},
{
"math_id": 23,
"text": "x_v \\in \\operatorname{Dom}(v)"
},
{
"math_id": 24,
"text": "N(v)"
},
{
"math_id": 25,
"text": "N(v)\\setminus\\{a\\}"
},
{
"math_id": 26,
"text": "\\mu_{v \\to a}(x_v)"
},
{
"math_id": 27,
"text": "\\mu_{a \\to v}: \\operatorname{Dom}(v) \\to \\mathbb R"
},
{
"math_id": 28,
"text": "\\mu_{a \\to v} (x_v) = \\sum_{\\mathbf{x}'_a: x'_v = x_v } \\left( f_a (\\mathbf{x}'_a) \\prod_{v^* \\in N(a) \\setminus \\{v\\}} \\mu_{v^* \\to a} (x'_{v^*}) \\right) "
},
{
"math_id": 29,
"text": "N(a)"
},
{
"math_id": 30,
"text": "N(a) \\setminus \\{v\\}"
},
{
"math_id": 31,
"text": "\\mu_{a \\to v} (x_v) = f_a(x_v)"
},
{
"math_id": 32,
"text": " x_v = x_a "
},
{
"math_id": 33,
"text": " p_{X_v} (x_v) \\propto \\prod_{a \\in N(v)} \\mu_{a \\to v} (x_v). "
},
{
"math_id": 34,
"text": " p_{X_a} (\\mathbf{x}_a) \\propto f_a(\\mathbf{x}_a) \\prod_{v \\in N(a)} \\mu_{v \\to a} (x_v). "
},
{
"math_id": 35,
"text": "\\mathbf{x}"
},
{
"math_id": 36,
"text": "\\operatorname*{\\arg\\max}_{\\mathbf{x}} g(\\mathbf{x})."
},
{
"math_id": 37,
"text": "P(\\mathbf{X}) = \\frac{1}{Z} \\prod_{f_j} f_j(x_j)"
},
{
"math_id": 38,
"text": "E(\\mathbf{X}) = -\\log \\prod_{f_j} f_j(x_j)."
},
{
"math_id": 39,
"text": "F = U - H = \\sum_{\\mathbf{X}} P(\\mathbf{X}) E(\\mathbf{X}) + \\sum_{\\mathbf{X}} P(\\mathbf{X}) \\log P(\\mathbf{X})."
},
{
"math_id": 40,
"text": " P(x_i) = \\frac{1}{Z} \\int_{j \\ne i} \\exp(-\\tfrac 1 2 x^TAx + b^Tx)\\,dx_j"
},
{
"math_id": 41,
"text": "\\underset{x}{\\operatorname{argmax}}\\ P(x) = \\frac{1}{Z} \\exp(-\\tfrac 1 2 x^TAx + b^Tx)."
},
{
"math_id": 42,
"text": " \\underset{x}{\\operatorname{min}}\\ 1/2x^TAx - b^Tx."
},
{
"math_id": 43,
"text": " Ax = b."
},
{
"math_id": 44,
"text": "\\rho (I - |D^{-1/2}AD^{-1/2}|) < 1 \\, "
},
{
"math_id": 45,
"text": "P(x|X)"
},
{
"math_id": 46,
"text": "X"
},
{
"math_id": 47,
"text": "P(e|s)"
},
{
"math_id": 48,
"text": "s"
},
{
"math_id": 49,
"text": "e"
},
{
"math_id": 50,
"text": "x=X+e"
},
{
"math_id": 51,
"text": "f_a(X_a)"
},
{
"math_id": 52,
"text": "\\forall x_v\\in Dom(v),\\; \\mu_{v \\to a} (x_v) = P(X_v)\\prod_{a^* \\in N(v)\\setminus\\{a\\} } \\mu_{a^* \\to v} (x_v)."
},
{
"math_id": 53,
"text": "P(X_v)"
},
{
"math_id": 54,
"text": "\\forall x_v\\in Dom(v),\\; \\mu_{a \\to v} (x_v) = \\sum_{\\mathbf{x}'_a: x'_v = x_v } \\delta(\\text{syndrome}({\\mathbf x}'_v)={\\mathbf s}) \\prod_{v^* \\in N(a) \\setminus \\{v\\}} \\mu_{v^* \\to a} (x'_{v^*})."
},
{
"math_id": 55,
"text": "x_i \\in \\{0,1\\}"
},
{
"math_id": 56,
"text": "2^{|\\{v\\}|+|N(v)|}"
},
{
"math_id": 57,
"text": "l_v=\\log \\frac{u_{v \\to a}(x_v=0)}{u_{v \\to a} (x_v=1)}"
},
{
"math_id": 58,
"text": "L_a=\\log \\frac{u_{a \\to v}(x_v=0)}{u_{a \\to v} (x_v=1)}"
},
{
"math_id": 59,
"text": "v \\to a: l_v=l_v^{(0)}+\\sum_{a^* \\in N(v)\\setminus\\{a\\}} (L_{a^*})"
},
{
"math_id": 60,
"text": "a \\to v: L_a = (-1)^{s_a} 2 \\tanh^{-1} \\prod_{v^* \\in N(a)\\setminus\\{v\\}} \\tanh (l_{v^*}/2)"
},
{
"math_id": 61,
"text": "l_v^{(0)}=\\log (P(x_v=0)/P(x_v=1)) = \\text{const}"
},
{
"math_id": 62,
"text": "l_v=l_v^{(0)}+\\sum_{a \\in N(v)} (L_{a})"
}
] | https://en.wikipedia.org/wiki?curid=800010 |
8000600 | Hubbard–Stratonovich transformation | The Hubbard–Stratonovich (HS) transformation is an exact mathematical transformation invented by Russian physicist Ruslan L. Stratonovich and popularized by British physicist John Hubbard. It is used to convert a particle theory into its respective field theory by linearizing the density operator in the many-body interaction term of the Hamiltonian and introducing an auxiliary scalar field. It is defined via the integral identity
formula_0
where the real constant formula_1. The basic idea of the HS transformation is to reformulate a system of particles interacting through two-body potentials into a system of independent particles interacting with a fluctuating field. The procedure is widely used in polymer physics, classical particle physics, spin glass theory, and electronic structure theory.
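The identity can be checked numerically; the following Python snippet is only an illustration, with arbitrary values of a and x.

```python
import numpy as np

a, x = 2.0, 0.7
y = np.linspace(-40.0, 40.0, 200001)                 # wide enough that the Gaussian tail is negligible
integrand = np.exp(-y**2 / (2 * a) - 1j * x * y)
lhs = np.exp(-a * x**2 / 2)
rhs = np.sqrt(1 / (2 * np.pi * a)) * np.sum(integrand) * (y[1] - y[0])
print(lhs, rhs.real)                                 # both ~0.6126
```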
Calculation of resulting field theories.
The resulting field theories are well-suited for the application of effective approximation techniques, like the mean field approximation. A major difficulty arising in the simulation of such field theories is their highly oscillatory nature in the case of strong interactions, which leads to the well-known numerical sign problem. The problem originates from the repulsive part of the interaction potential, which necessitates the introduction of a complex factor via the HS transformation.
| [
{
"math_id": 0,
"text": "\n\\exp \\left\\{ - \\frac{a}{2} x^2 \\right\\} =\n\\sqrt{\\frac{1}{2 \\pi a}} \\; \\int_{-\\infty}^\\infty\n\\exp \\left[ - \\frac{y^2}{2 a} - i x y \\right] \\, dy,\n"
},
{
"math_id": 1,
"text": "a > 0"
}
] | https://en.wikipedia.org/wiki?curid=8000600 |
8000781 | Displacement field (mechanics) | Assignment of displacement vectors for all points in a region
In mechanics, a displacement field is the assignment of displacement vectors for all points in a region or body that are displaced from one state to another. A displacement vector specifies the position of a point or a particle in reference to an origin or to a previous position. For example, a displacement field may be used to describe the effects of deformation on a solid body.
Formulation.
Before considering displacement, the state before deformation must be defined. It is a state in which the coordinates of all points are known and described by the function:
formula_0
where formula_1 is a placement vector, formula_2 are all the points of the body, and formula_3 are all the points in the space in which the body is situated.
Most often it is a state of the body in which no forces are applied.
Then given any other state of this body in which coordinates of all its points are described as formula_4 the displacement field is the difference between two body states:
formula_5
where formula_6 is a displacement field, which for each point of the body specifies a displacement vector.
Decomposition.
The displacement of a body has two components: a rigid-body displacement and a deformation.
A change in the configuration of a continuum body can be described by a displacement field. A "displacement field" is a vector field of all displacement vectors for all particles in the body, which relates the deformed configuration formula_8 with the undeformed configuration formula_7. The distance between any two particles changes if and only if deformation has occurred. If displacement occurs without deformation, then it is a rigid-body displacement.
Displacement gradient tensor.
Two types of displacement gradient tensor may be defined, following the Lagrangian and Eulerian specifications.
The displacement of particles indexed by variable i may be expressed as follows. The vector joining the positions of a particle in the undeformed configuration formula_9 and deformed configuration formula_10 is called the displacement vector, formula_11, denoted formula_12 or formula_13 below.
Material coordinates (Lagrangian description).
Using formula_14 in place of formula_9 and formula_15 in place of formula_16, both of which are vectors from the origin of the coordinate system to each respective point, we have the Lagrangian description of the displacement vector:
formula_17
where formula_18 are the orthonormal unit vectors that define the basis of the spatial (lab frame) coordinate system.
Expressed in terms of the material coordinates, i.e. formula_19 as a function of formula_20, the displacement field is:
formula_21
where formula_22 is the displacement vector representing rigid-body translation.
The partial derivative of the displacement vector with respect to the material coordinates yields the material displacement gradient tensor formula_23. Thus we have,
formula_24
where formula_25 is the "material deformation gradient tensor" and formula_26 is a rotation.
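As a concrete illustration (not from the text), consider a homogeneous simple shear with the material and spatial frames superimposed, so that the rotation reduces to the identity; the material displacement gradient formula_23 is then constant. The deformation below is a made-up example.

```python
import numpy as np

gamma = 0.1
F = np.array([[1.0, gamma, 0.0],      # deformation gradient F = dx/dX for x = X + gamma * X2 * e1
              [0.0, 1.0,   0.0],
              [0.0, 0.0,   1.0]])
I = np.eye(3)                         # with coincident frames the rotation is the identity
grad_u = F - I                        # material displacement gradient, F - R with R = I
X = np.array([2.0, 1.0, 0.0])
u = F @ X - X                         # displacement u = x - X of a sample particle
print(grad_u)
print(u)                              # [0.1, 0.0, 0.0]
```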
Spatial coordinates (Eulerian description).
In the Eulerian description, the vector extending from a particle formula_3 in the undeformed configuration to its location in the deformed configuration is called the displacement vector:
formula_27
where formula_28 are the unit vectors that define the basis of the material (body-frame) coordinate system.
Expressed in terms of spatial coordinates, i.e. formula_29 as a function of formula_30, the displacement field is:
formula_31
The spatial derivative, i.e., the partial derivative of the displacement vector with respect to the spatial coordinates, yields the spatial displacement gradient tensor formula_32. Thus we have,
formula_33
where formula_34 is the "spatial deformation gradient tensor".
Relationship between the material and spatial coordinate systems.
formula_35 are the direction cosines between the material and spatial coordinate systems with unit vectors formula_36 and formula_37, respectively. Thus
formula_38
The relationship between formula_12 and formula_39 is then given by
formula_40
Knowing that
formula_41
then
formula_42
Combining the coordinate systems of deformed and undeformed configurations.
It is common to superimpose the coordinate systems for the deformed and undeformed configurations, which results in formula_43, and the direction cosines become Kronecker deltas, i.e.,
formula_44
Thus in material (undeformed) coordinates, the displacement may be expressed as:
formula_45
And in spatial (deformed) coordinates, the displacement may be expressed as:
formula_46
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\vec{R}_0: \\Omega \\to P"
},
{
"math_id": 1,
"text": "\\vec{R}_0"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "\\vec{R}_1"
},
{
"math_id": 5,
"text": "\\vec{u} = \\vec{R}_1 - \\vec{R}_0"
},
{
"math_id": 6,
"text": "\\vec{u}"
},
{
"math_id": 7,
"text": "\\kappa_0(\\mathcal B)"
},
{
"math_id": 8,
"text": "\\kappa_t(\\mathcal B)"
},
{
"math_id": 9,
"text": "P_i"
},
{
"math_id": 10,
"text": "p_i"
},
{
"math_id": 11,
"text": "p_i - P_i"
},
{
"math_id": 12,
"text": "u_i"
},
{
"math_id": 13,
"text": "U_i"
},
{
"math_id": 14,
"text": "\\mathbf{X}"
},
{
"math_id": 15,
"text": "\\mathbf{x}"
},
{
"math_id": 16,
"text": "p_i\\,\\!"
},
{
"math_id": 17,
"text": "\\mathbf u(\\mathbf X,t) = u_i \\mathbf e_i"
},
{
"math_id": 18,
"text": "\\mathbf e_i"
},
{
"math_id": 19,
"text": "\\mathbf u"
},
{
"math_id": 20,
"text": "\\mathbf X"
},
{
"math_id": 21,
"text": "\\mathbf u(\\mathbf X, t) = \\mathbf b(t)+\\mathbf x(\\mathbf X,t) - \\mathbf X \\qquad \\text{or}\\qquad u_i = \\alpha_{iJ} b_J + x_i - \\alpha_{iJ} X_J"
},
{
"math_id": 22,
"text": "\\mathbf b(t)"
},
{
"math_id": 23,
"text": "\\nabla_{\\mathbf X} \\mathbf u\\,\\!"
},
{
"math_id": 24,
"text": "\n\\nabla_{\\mathbf X}\\mathbf u = \\nabla_{\\mathbf X}\\mathbf x - \\mathbf R = \\mathbf F - \\mathbf R \\qquad \\text{or} \\qquad \\frac{\\partial u_i}{\\partial X_K} = \\frac{\\partial x_i}{\\partial X_K} - \\alpha_{iK} = F_{iK} - \\alpha_{iK}"
},
{
"math_id": 25,
"text": "\\mathbf F"
},
{
"math_id": 26,
"text": "\\mathbf{R}"
},
{
"math_id": 27,
"text": "\\mathbf U(\\mathbf x,t) = U_J\\mathbf E_J"
},
{
"math_id": 28,
"text": "\\mathbf E_i"
},
{
"math_id": 29,
"text": "\\mathbf U"
},
{
"math_id": 30,
"text": "\\mathbf x"
},
{
"math_id": 31,
"text": "\\mathbf U(\\mathbf x, t) = \\mathbf b(t) + \\mathbf x - \\mathbf X(\\mathbf x,t) \\qquad \\text{or}\\qquad U_J = b_J + \\alpha_{Ji} x_i - X_J"
},
{
"math_id": 32,
"text": "\\nabla_{\\mathbf x} \\mathbf U\\,\\!"
},
{
"math_id": 33,
"text": "\n\\nabla_{\\mathbf x}\\mathbf U = \\mathbf R^{T} - \\nabla_{\\mathbf x}\\mathbf X = \\mathbf R^{T} -\\mathbf F^{-1} \\qquad \\text{or} \\qquad \\frac{\\partial U_J}{\\partial x_k} = \\alpha_{Jk} - \\frac{\\partial X_J}{\\partial x_k} = \\alpha_{Jk} - F^{-1}_{Jk} \\,,"
},
{
"math_id": 34,
"text": "\\mathbf F^{-1} = \\mathbf H"
},
{
"math_id": 35,
"text": "\\alpha_{Ji}"
},
{
"math_id": 36,
"text": "\\mathbf E_J"
},
{
"math_id": 37,
"text": "\\mathbf e_i\\,\\!"
},
{
"math_id": 38,
"text": "\\mathbf E_J \\cdot \\mathbf e_i = \\alpha_{Ji} = \\alpha_{iJ} "
},
{
"math_id": 39,
"text": "U_J"
},
{
"math_id": 40,
"text": "u_i=\\alpha_{iJ}U_J \\qquad \\text{or} \\qquad U_J=\\alpha_{Ji} u_i"
},
{
"math_id": 41,
"text": "\\mathbf e_i = \\alpha_{iJ} \\mathbf E_J"
},
{
"math_id": 42,
"text": "\\mathbf u(\\mathbf X, t) = u_i\\mathbf e_i = u_i(\\alpha_{iJ}\\mathbf E_J) = U_J \\mathbf E_J = \\mathbf U(\\mathbf x, t)"
},
{
"math_id": 43,
"text": "\\mathbf b = 0\\,\\!"
},
{
"math_id": 44,
"text": "\\mathbf E_J \\cdot \\mathbf e_i = \\delta_{Ji} = \\delta_{iJ}"
},
{
"math_id": 45,
"text": "\\mathbf u(\\mathbf X, t) = \\mathbf x(\\mathbf X,t) - \\mathbf X \\qquad \\text{or}\\qquad u_i = x_i - \\delta_{iJ} X_J"
},
{
"math_id": 46,
"text": "\\mathbf U(\\mathbf x, t) = \\mathbf x - \\mathbf X(\\mathbf x,t) \\qquad \\text{or}\\qquad U_J = \\delta_{Ji} x_i - X_J "
}
] | https://en.wikipedia.org/wiki?curid=8000781 |
800092 | Category of groups | In mathematics, the category Grp (or Gp) has the class of all groups for objects and group homomorphisms for morphisms. As such, it is a concrete category. The study of this category is known as group theory.
Relation to other categories.
There are two forgetful functors from Grp, M: Grp → Mon from groups to monoids and U: Grp → Set from groups to sets. M has two adjoints: one right, I: Mon→Grp, and one left, K: Mon→Grp. I: Mon→Grp is the functor sending every monoid to the submonoid of invertible elements and K: Mon→Grp the functor sending every monoid to the Grothendieck group of that monoid. The forgetful functor U: Grp → Set has a left adjoint given by the composite KF: Set→Mon→Grp, where F is the free functor; this functor assigns to every set "S" the free group on "S."
Categorical properties.
The monomorphisms in Grp are precisely the injective homomorphisms, the epimorphisms are precisely the surjective homomorphisms, and the isomorphisms are precisely the bijective homomorphisms.
The category Grp is both complete and co-complete. The category-theoretical product in Grp is just the direct product of groups while the category-theoretical coproduct in Grp is the free product of groups. The zero objects in Grp are the trivial groups (consisting of just an identity element).
Every morphism "f" : "G" → "H" in Grp has a category-theoretic kernel (given by the ordinary kernel of algebra ker f = {"x" in "G" | "f"("x") = "e"}), and also a category-theoretic cokernel (given by the factor group of "H" by the normal closure of "f"("G") in "H"). Unlike in abelian categories, it is not true that every monomorphism in Grp is the kernel of its cokernel.
Not additive and therefore not abelian.
The category of abelian groups, Ab, is a full subcategory of Grp. Ab is an abelian category, but Grp is not. Indeed, Grp isn't even an additive category, because there is no natural way to define the "sum" of two group homomorphisms. A proof of this is as follows: The set of morphisms from the symmetric group "S"3 of degree three to itself, formula_0, has ten elements: an element "z" whose product on either side with every element of "E" is "z" (the homomorphism sending every element to the identity), three elements such that their product on one fixed side is always itself (the projections onto the three subgroups of order two), and six automorphisms. If Grp were an additive category, then this set "E" of ten elements would be a ring. In any ring, the zero element is singled out by the property that 0"x"="x"0=0 for all "x" in the ring, and so "z" would have to be the zero of "E". However, there are no two nonzero elements of "E" whose product is "z", so this finite ring would have no zero divisors. A finite ring with no zero divisors is a field by Wedderburn's little theorem, but there is no field with ten elements because every finite field has a prime power as its order.
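The count of ten used in this argument can be verified by brute force. The sketch below is illustrative only; it assumes "S"3 is represented as the six permutations of three points, with composition as the group operation, and enumerates all maps from "S"3 to itself that respect the multiplication.

```python
from itertools import permutations, product

S3 = list(permutations(range(3)))          # the six elements of S3

def mul(p, q):
    """Group operation: (p*q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(3))

homs = []
for images in product(S3, repeat=len(S3)):            # all 6**6 candidate maps
    f = dict(zip(S3, images))
    if all(f[mul(a, b)] == mul(f[a], f[b]) for a in S3 for b in S3):
        homs.append(f)

identity = tuple(range(3))
autos = [f for f in homs if sorted(f.values()) == sorted(S3)]
trivial = [f for f in homs if all(v == identity for v in f.values())]
print(len(homs))                   # 10 endomorphisms in total
print(len(autos), len(trivial))    # 6 automorphisms, 1 trivial homomorphism
```

The remaining three homomorphisms are the maps onto the three subgroups of order two mentioned above.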
Exact sequences.
The notion of exact sequence is meaningful in Grp, and some results from the theory of abelian categories, such as the nine lemma, the five lemma, and their consequences hold true in Grp.
Grp is a regular category.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E=\\operatorname{Hom}(S_3,S_3)"
}
] | https://en.wikipedia.org/wiki?curid=800092 |
8000987 | Incremental capital-output ratio | The Incremental Capital-Output Ratio (ICOR) is the ratio of investment to growth, which is equal to the reciprocal of the marginal product of capital. The higher the ICOR, the lower the productivity of capital or the marginal efficiency of capital. The ICOR can be thought of as a measure of the inefficiency with which capital is used. In most countries the ICOR is in the neighborhood of 3. It is a topic discussed in the study of economic growth. It can be expressed in the following formula, where "K" is the capital stock, "Y" is output (GDP), and "I" is net investment.
formula_0
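A one-line evaluation of this formula (an illustrative sketch; the investment share and growth rate are the assumed figures used in the worked example that follows):

```python
# ICOR = (I/Y) / (change in Y / Y): investment share divided by the GDP growth rate.
investment_share = 0.20   # I / Y, assumed example value
gdp_growth_rate = 0.05    # change in Y / Y, assumed example value

icor = investment_share / gdp_growth_rate
print(icor)               # 4.0
```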
According to this formula the incremental capital output ratio can be computed by dividing the investment share in GDP by the rate of growth of GDP. As an example, if the level of investment (as a share of GDP) in a developing country had been (approximately) 20% over a particular period, and if the growth rate of GDP had been (approximately) 5% per year during the same period, then the ICOR would be 20/5 = 4. | [
{
"math_id": 0,
"text": "\\text{Incremental Capital Output Ratio (ICOR)} = \\frac{\\Delta K}{\\Delta Y} = \\frac{\\frac{\\Delta K}{Y}}{\\frac{\\Delta Y}{Y}}= \\frac{\\frac{I}{Y}}{\\frac{\\Delta Y}{Y}}\n"
}
] | https://en.wikipedia.org/wiki?curid=8000987 |
8001517 | Yield surface | A yield surface is a five-dimensional surface in the six-dimensional space of stresses. The yield surface is usually convex, and the state of stress "inside" the yield surface is elastic. When the stress state lies on the surface, the material is said to have reached its yield point and to have become plastic. Further deformation of the material causes the stress state to remain on the yield surface, even though the shape and size of the surface may change as the plastic deformation evolves. This is because stress states that lie outside the yield surface are non-permissible in rate-independent plasticity, though not in some models of viscoplasticity.
The yield surface is usually expressed in terms of (and visualized in) a three-dimensional principal stress space (formula_3), a two- or three-dimensional space spanned by stress invariants (formula_4) or a version of the three-dimensional Haigh–Westergaard stress space. Thus we may write the equation of the yield surface (that is, the yield function) in the forms:
formula_5 where formula_6 are the principal stresses.
formula_7 where formula_0 is the first principal invariant of the Cauchy stress and formula_8 are the second and third principal invariants of the deviatoric part of the Cauchy stress.
formula_9 where formula_10 are scaled versions of formula_0 and formula_1, and formula_11 is a scaled version of formula_2.
formula_12 where formula_13 are scaled versions of formula_0 and formula_1, and formula_14 is the stress angle or Lode angle.
Invariants used to describe yield surfaces.
The first principal invariant (formula_0) of the Cauchy stress (formula_15), and the second and third principal invariants (formula_8) of the "deviatoric" part (formula_16) of the Cauchy stress are defined as:
formula_17
where (formula_3) are the principal values of formula_15, (formula_18) are the principal values of formula_16, and
formula_19
where formula_20 is the identity matrix.
A related set of quantities, (formula_21), are usually used to describe yield surfaces for cohesive frictional materials such as rocks, soils, and ceramics. These are defined as
formula_22
where formula_23 is the equivalent stress. However, the possibility of negative values of formula_2 and the resulting imaginary formula_11 makes the use of these quantities problematic in practice.
Another related set of widely used invariants is (formula_24) which describe a cylindrical coordinate system (the Haigh–Westergaard coordinates). These are defined as:
formula_25
The formula_26 plane is also called the Rendulic plane. The angle formula_14 is called the stress angle, the value formula_27 is sometimes called the Lode parameter, and the relation between formula_14 and formula_28 was first given by Novozhilov V.V. in 1951.
The principal stresses and the Haigh–Westergaard coordinates are related by
formula_29
A different definition of the Lode angle can also be found in the literature:
formula_30
in which case the ordered principal stresses (where formula_31) are related by
formula_32
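The definitions above translate directly into code. The following sketch is an illustrative helper, not from the article; it assumes a non-hydrostatic stress state so that formula_1 is positive, and computes formula_4 and the Haigh–Westergaard coordinates from a symmetric Cauchy stress matrix.

```python
import numpy as np

def haigh_westergaard(sigma):
    """Return I1, J2, J3 and (xi, rho, theta) for a symmetric 3x3 stress matrix."""
    I1 = np.trace(sigma)
    s = sigma - (I1 / 3.0) * np.eye(3)          # deviatoric part
    J2 = 0.5 * np.tensordot(s, s)               # (1/2) s : s
    J3 = np.linalg.det(s)
    xi = I1 / np.sqrt(3.0)
    rho = np.sqrt(2.0 * J2)
    cos3theta = np.clip(1.5 * np.sqrt(3.0) * J3 / J2 ** 1.5, -1.0, 1.0)
    theta = np.arccos(cos3theta) / 3.0          # stress (Lode) angle in [0, pi/3]
    return I1, J2, J3, (xi, rho, theta)

# Uniaxial tension of magnitude 100 lies on the tensile meridian:
print(haigh_westergaard(np.diag([100.0, 0.0, 0.0])))
```

For this uniaxial state the printed stress angle is zero, i.e. the point sits on the tensile meridian.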
Examples of yield surfaces.
There are several different yield surfaces known in engineering, and those most popular are listed below.
Tresca yield surface.
The Tresca yield criterion is taken to be the work of Henri Tresca. It is also known as the "maximum shear stress theory" (MSST) and the Tresca–Guest (TG) criterion. In terms of the principal stresses the Tresca criterion is expressed as
formula_33
where formula_34 is the yield strength in shear, and formula_35 is the tensile yield strength.
Figure 1 shows the Tresca–Guest yield surface in the three-dimensional space of principal stresses. It is a six-sided prism of infinite length. This means that the material remains elastic when all three principal stresses are roughly equivalent (a hydrostatic pressure), no matter how much it is compressed or stretched. However, when one of the principal stresses becomes smaller (or larger) than the others the material is subject to shearing. In such situations, if the shear stress reaches the yield limit then the material enters the plastic domain. Figure 2 shows the Tresca–Guest yield surface in two-dimensional stress space; it is a cross section of the prism along the formula_36 plane.
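A direct coding of the maximum-shear-stress condition above (an illustrative helper; the stress values and yield strength are assumed):

```python
def tresca_yields(s1, s2, s3, S_y):
    """True if the maximum shear stress reaches S_y / 2 (Tresca criterion)."""
    max_shear = 0.5 * max(abs(s1 - s2), abs(s2 - s3), abs(s3 - s1))
    return max_shear >= 0.5 * S_y

print(tresca_yields(250.0, 0.0, 0.0, S_y=250.0))      # True: uniaxial stress at the yield strength
print(tresca_yields(100.0, 100.0, 100.0, S_y=250.0))  # False: purely hydrostatic state stays elastic
```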
von Mises yield surface.
The von Mises yield criterion is expressed in the principal stresses as
formula_37
where formula_35 is the yield strength in uniaxial tension.
Figure 3 shows the von Mises yield surface in the three-dimensional space of principal stresses. It is a circular cylinder of infinite length with its axis inclined at equal angles to the three principal stresses. Figure 4 shows the von Mises yield surface in two-dimensional space compared with Tresca–Guest criterion. A cross section of the von Mises cylinder on the plane of formula_36 produces the elliptical shape of the yield surface.
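The criterion is equivalent to requiring that the von Mises equivalent stress reach formula_35. A minimal sketch (illustrative helper with assumed values):

```python
def von_mises_stress(s1, s2, s3):
    """Equivalent (von Mises) stress computed from the principal stresses."""
    return (0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2)) ** 0.5

S_y = 250.0                                        # assumed uniaxial yield strength
print(von_mises_stress(250.0, 0.0, 0.0) >= S_y)    # True: yields in uniaxial tension
print(von_mises_stress(150.0, 150.0, 0.0) >= S_y)  # False: equibiaxial state of 150 stays elastic
```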
Burzyński-Yagn criterion.
This criterion
formula_38
represents the general equation of a second order surface of revolution about the hydrostatic axis. Some special cases are:
the cylinder of von Mises with formula_39,
the cone of Drucker–Prager with formula_40,
the paraboloid of Burzyński–Torre with formula_41,
the ellipsoid of Beltrami centered in the plane formula_42 with formula_43,
the ellipsoid of Schleicher centered in the plane formula_44 with formula_45,
a hyperboloid of two sheets with formula_46, and
a hyperboloid of one sheet centered in the plane formula_42 with formula_47 (where formula_48), or more generally with formula_49.
The relations compression-tension and torsion-tension can be computed as
formula_50
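For given parameters these relations are easy to evaluate; the sketch below (an illustrative helper with assumed parameter values) returns the compression-tension ratio and the scaled torsion-tension ratio:

```python
import math

def burzynski_yagn_ratios(gamma1, gamma2):
    """Return sigma_-/sigma_+ and sqrt(3)*tau_*/sigma_+ for the criterion above."""
    d = 1.0 / (1.0 - gamma1 - gamma2)
    k = 1.0 / math.sqrt((1.0 - gamma1) * (1.0 - gamma2))
    return d, k

print(burzynski_yagn_ratios(0.0, 0.0))   # (1.0, 1.0): the von Mises cylinder
print(burzynski_yagn_ratios(0.2, 0.0))   # (1.25, ...): a paraboloid of revolution
```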
The Poisson's ratios at tension and compression are obtained using
formula_51
formula_52
For ductile materials the restriction
formula_53
is important. The application of rotationally symmetric criteria for brittle failure with
formula_54
has not been studied sufficiently.
The Burzyński-Yagn criterion is well suited for academic purposes. For practical applications, the third invariant of the deviator should be introduced in the equation in odd and even powers, e.g.:
formula_55
Huber criterion.
The Huber criterion consists of the Beltrami ellipsoid and a scaled von Mises cylinder in the principal stress space:
formula_56
with formula_57. The transition between the surfaces in the cross section formula_58 is continuously differentiable.
The criterion represents the "classical view" with respect to inelastic material behavior:
pressure-sensitive material behavior for formula_59 with formula_60, and
pressure-insensitive material behavior for formula_61 with formula_62.
The Huber criterion can be used as a yield surface with an empirical restriction for Poisson's ratio at tension formula_63, which leads to formula_64.
The modified Huber criterion
formula_65
consists of the Schleicher ellipsoid with the restriction of Poisson's ratio at compression
formula_66
and a cylinder with the formula_67-transition in the cross section formula_68.
The second setting for the parameters formula_57 and formula_69 follows with the compression / tension relation
formula_70
The modified Huber criterion can be fitted to the measured data better than the Huber criterion. For the setting formula_71, it follows that formula_72 and formula_73.
The Huber criterion and the modified Huber criterion should be preferred to the von Mises criterion since one obtains safer results in the region formula_74.
For practical applications the third invariant of the deviator formula_75 should be considered in these criteria.
Mohr–Coulomb yield surface.
The Mohr–Coulomb yield (failure) criterion is similar to the Tresca criterion, with additional provisions for materials with different tensile and compressive yield strengths. This model is often used to model concrete, soil or granular materials. The Mohr–Coulomb yield criterion may be expressed as:
formula_76
where
formula_77
and the parameters formula_78 and formula_79 are the yield (failure) stresses of the material in uniaxial compression and tension, respectively. The formula reduces to the Tresca criterion if formula_80.
Figure 5 shows the Mohr–Coulomb yield surface in the three-dimensional space of principal stresses. It is a conical prism and formula_81 determines the inclination angle of the conical surface. Figure 6 shows the Mohr–Coulomb yield surface in two-dimensional stress space; it is a cross section of this conical prism on the plane of formula_36. In Figure 6, formula_82 and formula_83 are used for formula_79 and formula_78, respectively, in the formula.
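A direct evaluation of the expression above (an illustrative helper, not from the article; the strength values are assumed) can be used to check whether a stress state lies on the yield surface:

```python
def mohr_coulomb_lhs(s1, s2, s3, S_yc, S_yt):
    """Left-hand side of the Mohr-Coulomb expression; yielding when it reaches S_yc."""
    m = S_yc / S_yt
    K = (m - 1.0) / (m + 1.0)
    pairs = ((s1, s2), (s1, s3), (s2, s3))
    return 0.5 * (m + 1.0) * max(abs(a - b) + K * (a + b) for a, b in pairs)

S_yc, S_yt = 300.0, 100.0                              # assumed strengths
print(mohr_coulomb_lhs(100.0, 0.0, 0.0, S_yc, S_yt))   # 300.0: at yield in uniaxial tension
print(mohr_coulomb_lhs(-300.0, 0.0, 0.0, S_yc, S_yt))  # 300.0: at yield in uniaxial compression
```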
Drucker–Prager yield surface.
The Drucker–Prager yield criterion is similar to the von Mises yield criterion, with provisions for handling materials with differing tensile and compressive yield strengths. This criterion is most often used for concrete where both normal and shear stresses can determine failure. The Drucker–Prager yield criterion may be expressed as
formula_84
where
formula_85
and formula_78, formula_79 are the uniaxial yield stresses in compression and tension respectively. The formula reduces to the von Mises equation if formula_80.
Figure 7 shows the Drucker–Prager yield surface in the three-dimensional space of principal stresses. It is a regular cone. Figure 8 shows the Drucker–Prager yield surface in two-dimensional space. The elliptical elastic domain is a cross section of the cone on the plane of formula_36; it can be chosen to intersect the Mohr–Coulomb yield surface at a different number of vertices. One choice is to intersect the Mohr–Coulomb yield surface at three vertices on either side of the formula_86 line, usually selected by convention to be those in the compression regime. Another choice is to intersect the Mohr–Coulomb yield surface at four vertices on both axes (uniaxial fit) or at two vertices on the diagonal formula_87 (biaxial fit). The Drucker–Prager yield criterion is also commonly expressed in terms of the material cohesion and friction angle.
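The same kind of check applies to the Drucker–Prager expression (an illustrative helper; the strength values are assumed as before):

```python
def drucker_prager_lhs(s1, s2, s3, S_yc, S_yt):
    """Left-hand side of the Drucker-Prager expression; yielding when it reaches S_yc."""
    m = S_yc / S_yt
    I1 = s1 + s2 + s3
    q = (0.5 * ((s1 - s2) ** 2 + (s2 - s3) ** 2 + (s3 - s1) ** 2)) ** 0.5
    return 0.5 * (m - 1.0) * I1 + 0.5 * (m + 1.0) * q

S_yc, S_yt = 300.0, 100.0
print(drucker_prager_lhs(100.0, 0.0, 0.0, S_yc, S_yt))   # 300.0: yield in uniaxial tension
print(drucker_prager_lhs(-300.0, 0.0, 0.0, S_yc, S_yt))  # 300.0: yield in uniaxial compression
```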
Bresler–Pister yield surface.
The Bresler–Pister yield criterion is an extension of the Drucker–Prager yield criterion that uses three parameters, and has additional terms for materials that yield under hydrostatic compression.
In terms of the principal stresses, this yield criterion may be expressed as
formula_88
where formula_89 are material constants. The additional parameter formula_90 gives the yield surface an ellipsoidal cross section when viewed from a direction perpendicular to its axis. If formula_91 is the yield stress in uniaxial compression, formula_92 is the yield stress in uniaxial tension, and formula_93 is the yield stress in biaxial compression, the parameters can be expressed as
formula_94
Willam–Warnke yield surface.
The Willam–Warnke yield criterion is a three-parameter smoothed version of the Mohr–Coulomb yield criterion that has similarities in form to the Drucker–Prager and Bresler–Pister yield criteria.
The yield criterion has the functional form
formula_95
However, it is more commonly expressed in Haigh–Westergaard coordinates as
formula_96
The cross-section of the surface when viewed along its axis is a smoothed triangle (unlike Mohr–Coulomb). The Willam–Warnke yield surface is convex and has unique and well defined first and second derivatives on every point of its surface. Therefore, the Willam–Warnke model is computationally robust and has been used for a variety of cohesive-frictional materials.
Podgórski and Rosendahl trigonometric yield surfaces.
Normalized with respect to the uniaxial tensile stress formula_98, the Podgórski criterion as a function of the stress angle formula_14 reads
formula_99
with the shape function of trigonal symmetry in the formula_97-plane
formula_100
It contains the criteria of von Mises (circle in the formula_97-plane, formula_101, formula_102), Tresca (regular hexagon, formula_103, formula_104), Mariotte (regular triangle, formula_105, formula_104), Ivlev (regular triangle, formula_106, formula_104) and also the cubic criterion of Sayir (the Ottosen criterion) with formula_107 and the isotoxal (equilateral) hexagons of the Capurso criterion with formula_104. The von Mises–Tresca transition follows with formula_103, formula_108. The isogonal (equiangular) hexagons of the Haythornthwaite criterion containing the Schmidt-Ishlinsky criterion (regular hexagon) cannot be described with the Podgórski criterion.
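The shape function is straightforward to evaluate numerically. The sketch below (illustrative helpers with assumed parameter values) reproduces two of the limiting cases mentioned above: formula_102 gives a formula_14-independent (von Mises) cross section, while formula_103 with formula_104 gives the Tresca cross section.

```python
import numpy as np

def omega3(theta, beta3, chi3):
    """Podgorski shape function Omega_3(theta, beta_3, chi_3)."""
    inner = np.arccos(np.clip(np.sin(chi3 * np.pi / 2.0) * np.cos(3.0 * theta), -1.0, 1.0))
    return np.cos((np.pi * beta3 - inner) / 3.0)

def podgorski_sigma_eq(J2, theta, beta3, chi3):
    """Equivalent stress sqrt(3*J2) * Omega_3(theta) / Omega_3(0)."""
    return np.sqrt(3.0 * J2) * omega3(theta, beta3, chi3) / omega3(0.0, beta3, chi3)

thetas = np.linspace(0.0, np.pi / 3.0, 5)
print(podgorski_sigma_eq(1.0, thetas, beta3=0.4, chi3=0.0))  # constant: von Mises circle
print(podgorski_sigma_eq(1.0, thetas, beta3=0.5, chi3=1.0))  # 2*sqrt(J2)*cos(pi/6 - theta): Tresca
```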
The Rosendahl criterion reads
formula_109
with the shape function of hexagonal symmetry in the formula_97-plane
formula_110
It contains the criteria of von Mises (circle, formula_111, formula_112), Tresca (regular hexagon, formula_113, formula_114), Schmidt—Ishlinsky (regular hexagon, formula_115, formula_114), Sokolovsky (regular dodecagon, formula_116, formula_114), and also the bicubic criterion with formula_117 or equally with formula_118 and the isotoxal dodecagons of the unified yield criterion of Yu with formula_114. The isogonal dodecagons of the multiplicative ansatz criterion of hexagonal symmetry containing the Ishlinsky-Ivlev criterion (regular dodecagon) cannot be described by the Rosendahl criterion.
The criteria of Podgórski and Rosendahl describe single surfaces in principal stress space without any additional outer contours and plane intersections. Note that in order to avoid numerical issues the real part function formula_119 can be introduced to the shape function: formula_120 and formula_121. The generalization in the form formula_122 is relevant for theoretical investigations.
A pressure-sensitive extension of the criteria can be obtained with the linear formula_0-substitution
formula_123
which is sufficient for many applications, e.g. metals, cast iron, alloys, concrete, unreinforced polymers, etc.
Bigoni–Piccolroaz yield surface.
The Bigoni–Piccolroaz yield criterion is a seven-parameter surface defined by
formula_124
where formula_125 is the "meridian" function
formula_126
formula_127
describing the pressure-sensitivity and formula_128 is the "deviatoric" function
formula_129
describing the Lode-dependence of yielding. The seven non-negative material parameters:
formula_130
define the shape of the meridian and deviatoric sections.
This criterion represents a smooth and convex surface, which is closed both in hydrostatic tension and compression and has a
drop-like shape, particularly suited to describe frictional and granular materials. This criterion has also been generalized to the case of surfaces with corners.
Cosine Ansatz (Altenbach-Bolchoun-Kolupaev).
For the formulation of the strength criteria the stress angle
formula_131
can be used.
The following criterion of isotropic material behavior
formula_132
contains a number of other well-known less general criteria, provided suitable parameter values are chosen.
Parameters formula_133 and formula_134 describe the geometry of the surface in the formula_97-plane. They are subject to the constraints
formula_135
which follow from the convexity condition. A more precise formulation of the third constraint has also been proposed in the literature.
Parameters formula_136 and formula_137 describe the position of the intersection points of the yield surface with hydrostatic axis (space diagonal in the principal stress space). These intersections points are called hydrostatic nodes.
In the case of materials which do not fail at hydrostatic pressure (steel, brass, etc.), one gets formula_138. Otherwise, for materials which fail at hydrostatic pressure (hard foams, ceramics, sintered materials, etc.), it follows that formula_69.
The integer powers formula_139 and formula_140, formula_141 describe the curvature of the meridian. The meridian with formula_142 is a straight line and with formula_143 – a parabola.
Barlat's Yield Surface.
For anisotropic materials, the mechanical properties vary with the direction of the applied process (e.g., rolling), and therefore using an anisotropic yield function is crucial. Since 1989 Frederic Barlat has developed a family of yield functions for the constitutive modelling of plastic anisotropy. Among them, the Yld2000-2D yield criterion has been applied to a wide range of sheet metals (e.g., aluminum alloys and advanced high-strength steels). The Yld2000-2D model is a non-quadratic yield function based on two linear transformations of the stress tensor:
formula_144
where formula_145 is the effective stress, and formula_146 and formula_147 are the transformed stress matrices (obtained by the linear transformations C or L):
formula_148
where s is the deviatoric stress tensor.
For the principal values of X' and X'', the model can be expressed as:
formula_149
and:
formula_150
where formula_151 are the eight parameters of Barlat's Yld2000-2D model, which are identified from a set of experiments.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I_1"
},
{
"math_id": 1,
"text": "J_2"
},
{
"math_id": 2,
"text": "J_3"
},
{
"math_id": 3,
"text": " \\sigma_1, \\sigma_2 , \\sigma_3"
},
{
"math_id": 4,
"text": " I_1, J_2, J_3"
},
{
"math_id": 5,
"text": " f(\\sigma_1,\\sigma_2,\\sigma_3) = 0 \\,"
},
{
"math_id": 6,
"text": "\\sigma_i"
},
{
"math_id": 7,
"text": " f(I_1, J_2, J_3) = 0 \\,"
},
{
"math_id": 8,
"text": "J_2, J_3"
},
{
"math_id": 9,
"text": " f(p, q, r) = 0 \\,"
},
{
"math_id": 10,
"text": "p, q"
},
{
"math_id": 11,
"text": "r"
},
{
"math_id": 12,
"text": "f(\\xi,\\rho,\\theta) = 0 \\,"
},
{
"math_id": 13,
"text": "\\xi,\\rho"
},
{
"math_id": 14,
"text": "\\theta"
},
{
"math_id": 15,
"text": "\\boldsymbol{\\sigma}"
},
{
"math_id": 16,
"text": "\\boldsymbol{s}"
},
{
"math_id": 17,
"text": "\n \\begin{align}\n I_1 & = \\text{Tr}(\\boldsymbol{\\sigma}) = \\sigma_1 + \\sigma_2 + \\sigma_3 \\\\\n J_2 & = \\tfrac{1}{2} \\boldsymbol{s}:\\boldsymbol{s} = \n \\tfrac{1}{6}\\left[(\\sigma_1-\\sigma_2)^2+(\\sigma_2-\\sigma_3)^2+(\\sigma_3-\\sigma_1)^2\\right] \\\\\n J_3 & = \\det(\\boldsymbol{s}) = \\tfrac{1}{3} (\\boldsymbol{s}\\cdot\\boldsymbol{s}):\\boldsymbol{s}\n = s_1 s_2 s_3\n \\end{align}\n "
},
{
"math_id": 18,
"text": "s_1, s_2, s_3"
},
{
"math_id": 19,
"text": "\n \\boldsymbol{s} = \\boldsymbol{\\sigma}-\\tfrac{I_1}{3}\\,\\boldsymbol{I}\n"
},
{
"math_id": 20,
"text": "\\boldsymbol{I}"
},
{
"math_id": 21,
"text": "p, q, r\\,"
},
{
"math_id": 22,
"text": "\n p = \\tfrac{1}{3}~I_1 ~:~~\n q = \\sqrt{3~J_2} = \\sigma_\\mathrm{eq} ~;~~\n r = 3\\left(\\tfrac{1}{2}\\,J_3\\right)^{1/3} \n "
},
{
"math_id": 23,
"text": "\\sigma_\\mathrm{eq}"
},
{
"math_id": 24,
"text": "\\xi, \\rho, \\theta\\,"
},
{
"math_id": 25,
"text": "\n \\xi = \\tfrac{1}{\\sqrt{3}}~I_1 = \\sqrt{3}~p ~;~~\n \\rho = \\sqrt{2 J_2} = \\sqrt{\\tfrac{2}{3}}~q ~;~~\n \\cos(3\\theta) = \\left(\\tfrac{r}{q}\\right)^3 = \\tfrac{3\\sqrt{3}}{2}~\\cfrac{J_3}{J_2^{3/2}}\n "
},
{
"math_id": 26,
"text": "\\xi-\\rho\\,"
},
{
"math_id": 27,
"text": "\\cos(3\\theta)"
},
{
"math_id": 28,
"text": "J_2,J_3"
},
{
"math_id": 29,
"text": "\n \\begin{bmatrix} \\sigma_1 \\\\ \\sigma_2 \\\\ \\sigma_3 \\end{bmatrix} = \n \\tfrac{1}{\\sqrt{3}} \\begin{bmatrix} \\xi \\\\ \\xi \\\\ \\xi \\end{bmatrix} + \n \\sqrt{\\tfrac{2}{3}}~\\rho~\\begin{bmatrix} \\cos\\theta \\\\ \\cos\\left(\\theta-\\tfrac{2\\pi}{3}\\right) \\\\ \\cos\\left(\\theta+\\tfrac{2\\pi}{3}\\right) \\end{bmatrix}\n = \\tfrac{1}{\\sqrt{3}} \\begin{bmatrix} \\xi \\\\ \\xi \\\\ \\xi \\end{bmatrix} + \n \\sqrt{\\tfrac{2}{3}}~\\rho~\\begin{bmatrix} \\cos\\theta \\\\ -\\sin\\left(\\tfrac{\\pi}{6}-\\theta\\right) \\\\ -\\sin\\left(\\tfrac{\\pi}{6}+\\theta\\right) \\end{bmatrix} \\,.\n "
},
{
"math_id": 30,
"text": "\n \\sin(3\\theta) = ~\\tfrac{3\\sqrt{3}}{2}~\\cfrac{J_3}{J_2^{3/2}}\n "
},
{
"math_id": 31,
"text": "\\sigma_1 \\geq \\sigma_2 \\geq \\sigma_3"
},
{
"math_id": 32,
"text": "\n \\begin{bmatrix} \\sigma_1 \\\\ \\sigma_2 \\\\ \\sigma_3 \\end{bmatrix} = \n \\tfrac{1}{\\sqrt{3}} \\begin{bmatrix} \\xi \\\\ \\xi \\\\ \\xi \\end{bmatrix}\n +\n \\tfrac{\\rho}{\\sqrt{2}}~\\begin{bmatrix} \\cos\\theta - \\tfrac{\\sin\\theta}{\\sqrt{3}} \\\\ \\tfrac{2\\sin\\theta}{\\sqrt{3}} \\\\ -\\tfrac{\\sin\\theta}{\\sqrt{3}} - \\cos\\theta \\end{bmatrix}\n \\,.\n "
},
{
"math_id": 33,
"text": "\\tfrac{1}{2}{\\max(|\\sigma_1 - \\sigma_2| , |\\sigma_2 - \\sigma_3| , |\\sigma_3 - \\sigma_1| ) = S_{sy} = \\tfrac{1}{2}S_y}\\!"
},
{
"math_id": 34,
"text": "S_{sy}"
},
{
"math_id": 35,
"text": "S_y"
},
{
"math_id": 36,
"text": " \\sigma_1, \\sigma_2"
},
{
"math_id": 37,
"text": " {(\\sigma_1 - \\sigma_2)^2 + (\\sigma_2 - \\sigma_3)^2 + (\\sigma_3 - \\sigma_1)^2 = 2 {S_y}^2 }\\!"
},
{
"math_id": 38,
"text": " 3I_2' =\n \\frac{\\sigma_\\mathrm{eq}-\\gamma_1I_1}{1-\\gamma_1}\n \\frac{\\sigma_\\mathrm{eq}-\\gamma_2I_1}{1-\\gamma_2} "
},
{
"math_id": 39,
"text": " \\gamma_1 = \\gamma_2 = 0 "
},
{
"math_id": 40,
"text": " \\gamma_1 = \\gamma_2 \\in ]0,1[ "
},
{
"math_id": 41,
"text": "\\gamma_1 \\in ]0,1[, \\gamma_2 = 0 "
},
{
"math_id": 42,
"text": "I_1 = 0 "
},
{
"math_id": 43,
"text": "\\gamma_1 = - \\gamma_2 \\in ]0,1[ "
},
{
"math_id": 44,
"text": "I_1 = \\frac{1}{2}\\,\\bigg(\\frac{1}{\\gamma_1}+\\frac{1}{\\gamma_2} \\bigg) "
},
{
"math_id": 45,
"text": "\\gamma_1 \\in ]0,1[, \\gamma_2<0 "
},
{
"math_id": 46,
"text": "\\gamma_1 \\in ]0,1[, \\gamma_2 \\in ]0,\\gamma_1[ "
},
{
"math_id": 47,
"text": "\\gamma_1=-\\gamma_2 =a\\,i "
},
{
"math_id": 48,
"text": " i =\\sqrt{-1} "
},
{
"math_id": 49,
"text": "\\gamma_{1,2}= b \\pm a\\,i "
},
{
"math_id": 50,
"text": " \\frac{\\sigma_-}{\\sigma_+} =\\frac{1}{1-\\gamma_1-\\gamma_2}, \\qquad \\bigg(\\sqrt{3}\\,\\frac{\\tau_*}{\\sigma_+}\\bigg)^2 = \\frac{1}{(1-\\gamma_1)(1-\\gamma_2)} "
},
{
"math_id": 51,
"text": " \\nu_+^\\mathrm{in} =\n \\frac{-1+2(\\gamma_1+\\gamma_2)-3\\gamma_1\\gamma_2}{-2+\\gamma_1+\\gamma_2} "
},
{
"math_id": 52,
"text": " \\nu_-^\\mathrm{in} = - \\frac{-1+\n \\gamma_1^2+\\gamma_2^2-\\gamma_1\\,\\gamma_2}\n {(-2+\\gamma_1+\\gamma_2)\\,(-1+\\gamma_1+\\gamma_2)} "
},
{
"math_id": 53,
"text": "\\nu_+^\\mathrm{in}\\in \\bigg[\\,0.48,\\,\\frac{1}{2}\\,\\bigg] "
},
{
"math_id": 54,
"text": "\\nu_+^\\mathrm{in}\\in ]-1,~\\nu_+^\\mathrm{el}\\,] "
},
{
"math_id": 55,
"text": " 3I_2' \\frac{1+c_3 \\cos 3\\theta+c_6 \\cos^2 3\\theta}{1+c_3+\n c_6} =\n \\frac{\\sigma_\\mathrm{eq}-\\gamma_1I_1}{1-\\gamma_1}\n \\frac{\\sigma_\\mathrm{eq}-\\gamma_2I_1}{1-\\gamma_2} "
},
{
"math_id": 56,
"text": "\n3\\,I_2' = \n\\left\\{\n\\begin{array}{ll}\n \\displaystyle\\frac{\\sigma_\\mathrm{eq}-\\gamma_1 \\,I_1}{1-\\gamma_1} \\, \\frac{\\sigma_\\mathrm{eq}+\\gamma_1 \\,I_1}{1+\\gamma_1}, & I_1>0 \\\\[1em]\n \\displaystyle\\frac{\\sigma_\\mathrm{eq}}{1-\\gamma_1}\\, \\frac{\\sigma_\\mathrm{eq}}{1+\\gamma_1}, & I_1\\leq 0\n\\end{array}\n\\right.\n"
},
{
"math_id": 57,
"text": "\\gamma_1\\in[0, 1["
},
{
"math_id": 58,
"text": "I_1=0"
},
{
"math_id": 59,
"text": "I_1>0"
},
{
"math_id": 60,
"text": "\\nu_+^\\mathrm{in}\\in\\left]-1,\\,1/2\\right]"
},
{
"math_id": 61,
"text": "I_1<0"
},
{
"math_id": 62,
"text": "\\nu_-^\\mathrm{in}=1/2"
},
{
"math_id": 63,
"text": "\\nu_+^\\mathrm{in}\\in[0.48, 1/2]"
},
{
"math_id": 64,
"text": "\\gamma_1\\in[0, 0.1155]"
},
{
"math_id": 65,
"text": "\n 3\\,I_2' = \n\\left\\{\n\\begin{array}{ll}\n \\displaystyle\\frac{\\sigma_\\mathrm{eq}-\\gamma_1 \\,I_1}{1-\\gamma_1} \\, \\frac{\\sigma_\\mathrm{eq}-\\gamma_2 \\,I_1}{1-\\gamma_2}, & I_1>-d\\,\\sigma_\\mathrm{+} \\\\[1em]\n \\displaystyle\\frac{\\sigma_\\mathrm{eq}^2}{(1-\\gamma_1-\\gamma_2)^2}, & I_1\\leq -d\\,\\sigma_\\mathrm{+}\n\\end{array}\n\\right.\n"
},
{
"math_id": 66,
"text": " \\nu_-^\\mathrm{in} = - \\frac{-1+\n \\gamma_1^2+\\gamma_2^2-\\gamma_1\\,\\gamma_2}\n {(-2+\\gamma_1+\\gamma_2)\\,(-1+\\gamma_1+\\gamma_2)}=\\frac{1}{2} "
},
{
"math_id": 67,
"text": "C^1"
},
{
"math_id": 68,
"text": "I_1=-d\\,\\sigma_\\mathrm{+}"
},
{
"math_id": 69,
"text": "\\gamma_2<0"
},
{
"math_id": 70,
"text": " d=\\frac{\\sigma_-}{\\sigma_+} =\\frac{1}{1-\\gamma_1-\\gamma_2} \\geq1 "
},
{
"math_id": 71,
"text": "\\nu_+^\\mathrm{in}=0.48"
},
{
"math_id": 72,
"text": "\\gamma_1=0.0880"
},
{
"math_id": 73,
"text": "\\gamma_2=-0.0747"
},
{
"math_id": 74,
"text": "I_1>\\sigma_\\mathrm{+} "
},
{
"math_id": 75,
"text": "I_3'"
},
{
"math_id": 76,
"text": "\n\\frac{m+1}{2}\\max \\Big(|\\sigma_1 - \\sigma_2|+K(\\sigma_1 + \\sigma_2) ~,~~\n |\\sigma_1 - \\sigma_3|+K(\\sigma_1 + \\sigma_3) ~,~~\n |\\sigma_2 - \\sigma_3|+K(\\sigma_2 + \\sigma_3) \\Big) = S_{yc}\n"
},
{
"math_id": 77,
"text": " m = \\frac {S_{yc}}{S_{yt}}; K = \\frac {m-1}{m+1}"
},
{
"math_id": 78,
"text": "S_{yc}"
},
{
"math_id": 79,
"text": "S_{yt}"
},
{
"math_id": 80,
"text": "S_{yc}=S_{yt}"
},
{
"math_id": 81,
"text": "K"
},
{
"math_id": 82,
"text": "R_{r}"
},
{
"math_id": 83,
"text": "R_{c}"
},
{
"math_id": 84,
"text": " \\bigg(\\frac {m-1}{2}\\bigg) ( \\sigma_1 + \\sigma_2 + \\sigma_3 ) + \\bigg(\\frac{m+1}{2}\\bigg)\\sqrt{\\frac{(\\sigma_1 - \\sigma_2)^2 + (\\sigma_2 - \\sigma_3)^2 + (\\sigma_3 - \\sigma_1)^2}{2}} = S_{yc} "
},
{
"math_id": 85,
"text": " m = \\frac{S_{yc}}{S_{yt}} "
},
{
"math_id": 86,
"text": " \\sigma_1 = -\\sigma_2 "
},
{
"math_id": 87,
"text": " \\sigma_1 = \\sigma_2 "
},
{
"math_id": 88,
"text": "\n S_{yc} = \\tfrac{1}{\\sqrt{2}}\\left[(\\sigma_1-\\sigma_2)^2+(\\sigma_2-\\sigma_3)^2+(\\sigma_3-\\sigma_1)^2\\right]^{1/2} - c_0 - c_1~(\\sigma_1+\\sigma_2+\\sigma_3) - c_2~(\\sigma_1+\\sigma_2+\\sigma_3)^2\n "
},
{
"math_id": 89,
"text": "c_0, c_1, c_2 "
},
{
"math_id": 90,
"text": "c_2"
},
{
"math_id": 91,
"text": "\\sigma_c"
},
{
"math_id": 92,
"text": "\\sigma_t"
},
{
"math_id": 93,
"text": "\\sigma_b"
},
{
"math_id": 94,
"text": "\n \\begin{align}\n c_1 = & \\left(\\cfrac{\\sigma_t-\\sigma_c}{(\\sigma_t+\\sigma_c)}\\right)\n \\left(\\cfrac{4\\sigma_b^2 - \\sigma_b(\\sigma_c+\\sigma_t) + \\sigma_c\\sigma_t}{4\\sigma_b^2 + 2\\sigma_b(\\sigma_t-\\sigma_c) - \\sigma_c\\sigma_t} \\right) \\\\\n c_2 = & \\left(\\cfrac{1}{(\\sigma_t+\\sigma_c)}\\right)\n \\left(\\cfrac{\\sigma_b(3\\sigma_t-\\sigma_c) -2\\sigma_c\\sigma_t}{4\\sigma_b^2 + 2\\sigma_b(\\sigma_t-\\sigma_c) - \\sigma_c\\sigma_t} \\right) \\\\\n c_0 = & c_1\\sigma_c -c_2\\sigma_c^2\n \\end{align}\n "
},
{
"math_id": 95,
"text": "\n f(I_1, J_2, J_3) = 0 ~.\n "
},
{
"math_id": 96,
"text": "\n f(\\xi, \\rho, \\theta) = 0 ~.\n "
},
{
"math_id": 97,
"text": "\\pi"
},
{
"math_id": 98,
"text": "\\sigma_\\mathrm{eq}=\\sigma_+"
},
{
"math_id": 99,
"text": "\n \\sigma_\\mathrm{eq}=\\sqrt{3\\,I_2'}\\,\\frac{\\Omega_3(\\theta, \\beta_3, \\chi_3)}{\\Omega_3(0, \\beta_3, \\chi_3)},\n"
},
{
"math_id": 100,
"text": "\n \\Omega_3(\\theta, \\beta_3, \\chi_3)=\\cos\\left[\\displaystyle\\frac{1}{3}\\left(\\pi \\beta_3 -\\arccos [\\,\\sin (\\chi_3\\,\\frac{\\pi}{2}) \\,\\!\\cos 3\\,\\theta\\,]\\right)\\right], \\qquad \\beta_3\\in[0,\\,1], \\quad \\chi_3\\in[-1,\\,1].\n"
},
{
"math_id": 101,
"text": "\\beta_3=[0,\\,1]"
},
{
"math_id": 102,
"text": "\\chi_3=0"
},
{
"math_id": 103,
"text": "\\beta_3=1/2"
},
{
"math_id": 104,
"text": "\\chi_3=\\{1, -1\\}"
},
{
"math_id": 105,
"text": "\\beta_3=\\{0, 1\\}"
},
{
"math_id": 106,
"text": "\\beta_3=\\{1, 0\\}"
},
{
"math_id": 107,
"text": "\n\\beta_3=\\{0, 1\\}"
},
{
"math_id": 108,
"text": "\\chi_3=[0, 1]"
},
{
"math_id": 109,
"text": "\n \\sigma_\\mathrm{eq}=\\sqrt{3\\,I_2'}\\,\\frac{\\Omega_6(\\theta, \\beta_6, \\chi_6)}{\\Omega_6(0, \\beta_6, \\chi_6)},\n"
},
{
"math_id": 110,
"text": "\n \\Omega_6(\\theta, \\beta_6, \\chi_6)=\\cos\\left[\\displaystyle\\frac{1}{6}\\left(\\pi \\beta_6 -\\arccos [\\,\\sin (\\chi_6\\,\\frac{\\pi}{2})\\,\\!\\cos 6\\,\\theta\\,]\\right)\\right], \\qquad \\beta_6\\in[0,\\,1], \\quad \\chi_6\\in[-1,\\,1].\n"
},
{
"math_id": 111,
"text": "\\beta_6=[0,\\,1]"
},
{
"math_id": 112,
"text": "\\chi_6=0"
},
{
"math_id": 113,
"text": "\\beta_6=\\{1, 0\\}"
},
{
"math_id": 114,
"text": "\\chi_6=\\{1, -1\\}"
},
{
"math_id": 115,
"text": "\\beta_6=\\{0, 1\\}"
},
{
"math_id": 116,
"text": "\\beta_6=1/2"
},
{
"math_id": 117,
"text": "\\beta_6=0"
},
{
"math_id": 118,
"text": "\\beta_6=1"
},
{
"math_id": 119,
"text": "Re"
},
{
"math_id": 120,
"text": "Re(\\Omega_{3})"
},
{
"math_id": 121,
"text": "Re(\\Omega_{6})"
},
{
"math_id": 122,
"text": "\\Omega_{3n}"
},
{
"math_id": 123,
"text": "\n\\sigma_\\mathrm{eq}\\rightarrow\n \\frac{\\sigma_\\mathrm{eq}-\\gamma_1\\,I_1}{1-\\gamma_1} \\qquad\\mbox{with}\\qquad \\gamma_1\\in[0,\\,1[,\n"
},
{
"math_id": 124,
"text": "\n f(p,q,\\theta) = F(p) + \\frac{q}{g(\\theta)} = 0,\n "
},
{
"math_id": 125,
"text": "F(p)"
},
{
"math_id": 126,
"text": "\nF(p) = \n\\left\\{\n\\begin{array}{ll}\n-M p_c \\sqrt{(\\phi - \\phi^m)[2(1 - \\alpha)\\phi + \\alpha]}, & \\phi \\in [0,1], \\\\\n+\\infty, & \\phi \\notin [0,1],\n\\end{array}\n\\right.\n"
},
{
"math_id": 127,
"text": "\n\\phi = \\frac{p + c}{p_c + c},\n"
},
{
"math_id": 128,
"text": "g(\\theta)"
},
{
"math_id": 129,
"text": "\ng(\\theta) = \\frac{1}{\\cos[\\beta \\frac{\\pi}{6} - \\frac{1}{3} \\cos^{-1}(\\gamma \\cos 3\\theta)]},\n"
},
{
"math_id": 130,
"text": "\n\\underbrace{M > 0,~ p_c > 0,~ c \\geq 0,~ 0 < \\alpha < 2,~ m > 1}_{\\mbox{defining}~\\displaystyle{F(p)}},~~~ \n\\underbrace{0\\leq \\beta \\leq 2,~ 0 \\leq \\gamma < 1}_{\\mbox{defining}~\\displaystyle{g(\\theta)}}, \n"
},
{
"math_id": 131,
"text": "\\cos 3\\theta = \\frac{3\\sqrt{3}}{2}\\frac{I_3'}{I_2'^{\\frac{3}{2}}}"
},
{
"math_id": 132,
"text": " (3I_2')^3 \\frac{1+c_3 \\cos 3\\theta+c_6 \\cos^2 3\\theta}{1+c_3+\n c_6}= \\displaystyle\n\\left(\\frac{\\sigma_\\mathrm{eq}-\\gamma_1\\,I_1}{1-\\gamma_1}\\right)^{6-l-m}\\,\n \\left(\\frac{\\sigma_\\mathrm{eq}-\\gamma_2\\,I_1}{1-\\gamma_2}\\right)^l \\, \\sigma_\\mathrm{eq}^m "
},
{
"math_id": 133,
"text": "c_3"
},
{
"math_id": 134,
"text": "c_6"
},
{
"math_id": 135,
"text": " c_6=\\frac{1}{4}(2+c_3), \\qquad c_6=\\frac{1}{4}(2-c_3), \\qquad c_6\\ge \\frac{5}{12}\\,c_3^2-\\frac{1}{3}, "
},
{
"math_id": 136,
"text": "\\gamma_1\\in[0,\\,1["
},
{
"math_id": 137,
"text": "\\gamma_2"
},
{
"math_id": 138,
"text": "\\gamma_2\\in[0,\\,\\gamma_1["
},
{
"math_id": 139,
"text": "l\\geq0"
},
{
"math_id": 140,
"text": "m\\geq0"
},
{
"math_id": 141,
"text": "l+m< 6"
},
{
"math_id": 142,
"text": "l=m=0"
},
{
"math_id": 143,
"text": "l=0"
},
{
"math_id": 144,
"text": " \\Phi = \\Phi '(X') + \\Phi ''(X'') = 2{\\bar \\sigma ^a} "
},
{
"math_id": 145,
"text": " \\bar \\sigma "
},
{
"math_id": 146,
"text": " X' "
},
{
"math_id": 147,
"text": " X'' "
},
{
"math_id": 148,
"text": " \\begin{array}{l}\nX' = C'.s = L'.\\sigma \\\\\nX'' = C''.s = L''.\\sigma \n\\end{array} "
},
{
"math_id": 149,
"text": " \\begin{array}{l}\n\\Phi ' = {\\left| {{{X'}_1} + {{X'}_2}} \\right|^a}\\\\\n\\Phi '' = {\\left| {2{{X''}_2} + {{X''}_1}} \\right|^a} + {\\left| {2{{X''}_1} + {{X''}_2}} \\right|^a}\n\\end{array}\\ "
},
{
"math_id": 150,
"text": " \\left[ {\\begin{array}{*{20}{c}}\n{{{L'}_{11}}}\\\\\n{{{L'}_{12}}}\\\\\n{{{L'}_{21}}}\\\\\n{{{L'}_{22}}}\\\\\n{{{L'}_{66}}}\n\\end{array}} \\right] = \\left[ {\\begin{array}{*{20}{c}}\n{2/3}&0&0\\\\\n{ - 1/3}&0&0\\\\\n0&{ - 1/3}&0\\\\\n0&{ - 2/3}&0\\\\\n0&0&1\n\\end{array}} \\right]\\left[ {\\begin{array}{*{20}{c}}\n{{\\alpha _1}}\\\\\n{{\\alpha _2}}\\\\\n{{\\alpha _7}}\n\\end{array}} \\right], \\left[ {\\begin{array}{*{20}{c}}\n{{{L''}_{11}}}\\\\\n{{{L''}_{12}}}\\\\\n{{{L''}_{21}}}\\\\\n{{{L''}_{22}}}\\\\\n{{{L''}_{66}}}\n\\end{array}} \\right] = \\left[ {\\begin{array}{*{20}{c}}\n{ - 2}&2&8&{ - 2}&0\\\\\n1&{ - 4}&{ - 4}&4&0\\\\\n4&{ - 4}&{ - 4}&4&0\\\\\n{ - 2}&8&2&{ - 2}&0\\\\\n0&0&0&0&1\n\\end{array}} \\right]\\left[ {\\begin{array}{*{20}{c}}\n{{\\alpha _3}}\\\\\n{{\\alpha _4}}\\\\\n{{\\alpha _5}}\\\\\n{{\\alpha _6}}\\\\\n{{\\alpha _8}}\n\\end{array}} \\right] "
},
{
"math_id": 151,
"text": " \\alpha _1 ... \\alpha _8 "
}
] | https://en.wikipedia.org/wiki?curid=8001517 |
800155 | Lambertian reflectance | Model for determining radiant energy reflected off diffuse surfaces
Lambertian reflectance is the property that defines an ideal "matte" or diffusely reflecting surface. The apparent brightness of a Lambertian surface to an observer is the same regardless of the observer's angle of view. More precisely, the reflected radiant intensity obeys Lambert's cosine law, which makes the reflected radiance the same in all directions. Lambertian reflectance is named after Johann Heinrich Lambert, who introduced the concept of perfect diffusion in his 1760 book "Photometria".
Examples.
Unfinished wood exhibits roughly Lambertian reflectance, but wood finished with a glossy coat of polyurethane does not, since the glossy coating creates specular highlights. Though not all rough surfaces are Lambertian, this is often a good approximation, and is frequently used when the characteristics of the surface are unknown.
Spectralon is a material which is designed to exhibit an almost perfect Lambertian reflectance.
Use in computer graphics.
In computer graphics, Lambertian reflection is often used as a model for diffuse reflection. This technique causes all closed polygons (such as a triangle within a 3D mesh) to reflect light equally in all directions when rendered. The reflection decreases when the surface is tilted away from being perpendicular to the light source, however, because the area is illuminated by a smaller fraction of the incident radiation.
The reflection is calculated by taking the dot product of the surface's unit normal vector, formula_0, and a normalized light-direction vector, formula_1, pointing from the surface to the light source. This number is then multiplied by the color of the surface and the intensity of the light hitting the surface:
formula_2,
where formula_3 is the brightness of the diffusely reflected light, formula_4 is the color and formula_5 is the intensity of the incoming light. Because
formula_6,
where formula_7 is the angle between the directions of the two vectors, the brightness will be highest if the surface is perpendicular to the light vector, and lowest if the light vector intersects the surface at a grazing angle.
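A minimal shading sketch (illustrative only; the vectors, surface color and light intensity are assumed values, and the clamp at zero for surfaces facing away from the light is a common rendering convention rather than part of the formula above):

```python
import numpy as np

def lambert_diffuse(normal, to_light, color, intensity):
    """B_D = max(L.N, 0) * C * I_L with normalized direction vectors."""
    n = normal / np.linalg.norm(normal)
    l = to_light / np.linalg.norm(to_light)
    return max(float(np.dot(n, l)), 0.0) * color * intensity

color = np.array([0.8, 0.2, 0.2])                       # assumed reddish surface color
print(lambert_diffuse(np.array([0.0, 0.0, 1.0]),        # surface normal along +z
                      np.array([0.0, 1.0, 1.0]),        # direction toward the light
                      color, intensity=1.0))
```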
Lambertian reflection from polished surfaces is typically accompanied by specular reflection (gloss), where the surface luminance is highest when the observer is situated at the perfect reflection direction (i.e. where the direction of the reflected light is a reflection of the direction of the incident light in the surface), and falls off sharply.
Other waves.
While Lambertian reflectance usually refers to the reflection of light by an object, it can be used to refer to the reflection of any wave. For example, in ultrasound imaging, "rough" tissues are said to exhibit Lambertian reflectance.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{N}"
},
{
"math_id": 1,
"text": "\\mathbf{L}"
},
{
"math_id": 2,
"text": "B_{D}=\\mathbf{L}\\cdot\\mathbf{N} C I_\\text{L}"
},
{
"math_id": 3,
"text": "B_{D}"
},
{
"math_id": 4,
"text": "C"
},
{
"math_id": 5,
"text": "I_\\text{L}"
},
{
"math_id": 6,
"text": "\\mathbf{L}\\cdot\\mathbf{N}=|N||L|\\cos{\\alpha}=\\cos{\\alpha}"
},
{
"math_id": 7,
"text": "\\alpha"
}
] | https://en.wikipedia.org/wiki?curid=800155 |
8002251 | Meyer–Schuster rearrangement | The Meyer–Schuster rearrangement is the chemical reaction described as an acid-catalyzed rearrangement of secondary and tertiary propargyl alcohols to α,β-unsaturated ketones if the alkyne group is internal and α,β-unsaturated aldehydes if the alkyne group is terminal. Reviews have been published by Swaminathan and Narayan, Vartanyan and Banbanyan, and Engel and Dudley, the last of which describes ways to promote the Meyer–Schuster rearrangement over other reactions available to propargyl alcohols.
Mechanism.
The reaction mechanism begins with the protonation of the alcohol which leaves in an E1 reaction to form the allene from the alkyne. Attack of a water molecule on the carbocation and deprotonation is followed by tautomerization to give the α,β-unsaturated carbonyl compound.
Edens "et al." have investigated the reaction mechanism. They found it was characterized by three major steps: (1) the rapid protonation of oxygen, (2) the slow, rate-determining step comprising the 1,3-shift of the protonated hydroxy group, and (3) the keto-enol tautomerism followed by rapid deprotonation.
In a study of the rate-limiting step of the Meyer–Schuster reaction, Andres "et al." showed that the driving force of the reaction is the irreversible formation of unsaturated carbonyl compounds through carbonium ions. They also found the reaction to be assisted by the solvent. This was further investigated by Tapia "et al." who showed solvent caging stabilizes the transition state.
Rupe rearrangement.
The reaction of tertiary alcohols containing an α-acetylenic group does not produce the expected aldehydes, but rather α,β-unsaturated methyl ketones via an enyne intermediate. This alternate reaction is called the Rupe reaction, and competes with the Meyer–Schuster rearrangement in the case of tertiary alcohols.
Use of catalysts.
While the traditional Meyer–Schuster rearrangement uses harsh conditions with a strong acid as the catalyst, this introduces competition with the Rupe reaction if the alcohol is tertiary. Milder conditions have been used successfully with transition metal-based and Lewis acid catalysts (for example, Ru- and Ag-based catalysts). Cadierno "et al." report the use of microwave radiation with InClformula_0 as a catalyst to give excellent yields with short reaction times and remarkable stereoselectivity. An example from their paper is given below:
Applications.
The Meyer–Schuster rearrangement has been used in a variety of applications, from the conversion of ω-alkynyl-ω-carbinol lactams into enamides using catalytic PTSA to the synthesis of α,β-unsaturated thioesters from γ-sulfur substituted propargyl alcohols to the rearrangement of 3-alkynyl-3-hydroxyl-1"H"-isoindoles in mildly acidic conditions to give the α,β-unsaturated carbonyl compounds. One of the most interesting applications, however, is the synthesis of a part of paclitaxel in a diastereomerically-selective way that leads only to the "E"-alkene.
The step shown above had a 70% yield (91% when the byproduct was converted to the Meyer-Schuster product in another step). The authors used the Meyer–Schuster rearrangement because they wanted to convert a hindered ketone to an alkene without destroying the rest of their molecule.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "_3"
}
] | https://en.wikipedia.org/wiki?curid=8002251 |
8003764 | Bra size | Measure (usually 2 factors) to determine proper bra fit
Bra size (also known as brassiere measurement or bust size) indicates the size characteristics of a bra. While there are a number of bra sizing systems in use around the world, bra sizes usually consist of a number, indicating the size of the band around the woman's torso, and one or more letters that indicate the breast cup size. Bra cup sizes were invented in 1932, while band sizes became popular in the 1940s. For convenience, because of the impracticality of determining the size dimensions of each breast, the volume of the bra cup, or "cup size", is based on the difference between band length and the over-the-bust measurement.
Manufacturers try to design and manufacture bras that correctly fit the majority of women, while individual women try to identify correctly fitting bras among different styles and sizing systems.
The shape, size, position, symmetry, spacing, firmness, and sag of individual women's breasts vary considerably. Manufacturers' bra size labelling systems vary from country to country because no international standards exist. Even within a country, one study found that the bra size label was consistently different from the measured size. As a result of all these factors, about 25% of women have a difficult time finding a properly fitted bra, and some women choose to buy custom-made bras due to the unique shape of their breasts.
Measurement method origins.
On 21 November 1911, Parisienne Madeleine Gabeau received a United States patent for a brassiere with soft cups and a metal band that supported and separated the breasts. To avoid the prevailing fashion that created a single "monobosom", her design provided: "...that the edges of the material "d" may be carried close along the inner and under contours of the breasts, so as to preserve their form, I employ an outlining band of metal "b" which is bent to conform to the lower curves of the breast."
Cup design origins.
The term "cup" was not used to describe bras until 1916 when two patents were filed.
In October 1932, S.H. Camp and Company was the first to use letters of the alphabet (A, B, C and D) to indicate cup size, although the letters represented how pendulous the breasts were and not their volume. Camp's advertising in the February 1933 issue of "Corset and Underwear Review" featured letter-labeled profiles of breasts. Cup sizes A to D were not intended to be used for larger-breasted women.
In 1935, Warner's introduced its Alphabet Bra with cup sizes from size A to size D. Their bras incorporated breast volume into its sizing, and continues to be the system in use today. Before long, these cup sizes got nicknames: egg cup, tea cup, coffee cup and challenge cup, respectively. Two other companies, Model and Fay-Miss (renamed in 1935 as the Bali Brassiere Company), followed, offering A, B, C and D cup sizes in the late 1930s. Catalogue companies continued to use the designations Small, Medium and Large through the 1940s. Britain did not adopt the American cups in 1933, and resisted using cup sizes for its products until 1948. The Sears Company finally applied cup sizes to bras in its catalogue in the 1950s.
However, though various manufacturers used the same descriptions of bra sizes (e.g., A to D, small large, etc.), there was no standardisation of what these descriptions actually measured, so that each company had its own standards.
Band measurement origins.
Multiple hook and eye closures were introduced in the 1930s that enabled adjustment of bands. Prior to the widespread use of bras, the undergarment of choice for Western women was a corset. To help women meet the perceived ideal female body shape, corset and girdle manufacturers used a calculation called "hip spring", the difference between waist and hip measurement.
The band measurement system was created by U.S. bra manufacturers just after World War II.
Other innovations.
The underwire was first added to a strapless bra in 1937 by André, a custom-bra firm. Patents for underwire-type devices in bras were issued in 1931 and 1932, but were not widely adopted by manufacturers until after World War II when metal shortages eased.
In the 1930s, Dunlop chemists were able to reliably transform rubber latex into elastic thread. After 1940, "whirlpool", or concentric stitching, was used to shape the cup structure of some designs. The synthetic fibres were quickly adopted by the industry because of their easy-care properties. Since a brassiere must be laundered frequently, easy-care fabric was in great demand.
Consumer fitting.
For best results, the breasts should be measured twice: once when standing upright, once bending over at the waist with the breasts hanging down. If the difference between these two measurements is more than 10 cm, then the average is chosen for calculating the cup size. A number of reports, surveys and studies in different countries have found that between 80% and 85% of women wear incorrectly fitted bras.
In November 2005, Oprah Winfrey produced a show devoted to bras and bra sizes, during which she talked about research that eight out of ten women wear the wrong size bra.
Larger breasts and bra fit.
Studies have revealed that the most common mistake made by women when selecting a bra was to choose too large a back band and too small a cup, for example, 38C instead of 34E, or 34B instead of 30D.
The heavier a person's build, the more difficult it is to obtain accurate measurements, as measuring tape sinks into the flesh more easily.
In a study conducted in the United Kingdom of 103 women seeking mammoplasty, researchers found a strong link between obesity and inaccurate back measurement. They concluded that "obesity, breast hypertrophy, fashion and bra-fitting practices combine to make those women who most need supportive bras the least likely to get accurately fitted bras."
One issue that complicates finding a correctly fitting bra is that band and cup sizes are not standardized, but vary considerably from one manufacturer to another, resulting in sizes that only provide an approximate fit. Women cannot rely on labeled bra sizes to identify a bra that fits properly. Scientific studies show that the current system of bra sizing may be inaccurate.
Manufacturers cut their bras differently, so, for example, two 34B bras from two companies may not fit the same person. Customers should pay attention to which sizing system is used by the manufacturer. The main difference is in how cup sizes increase, by 2 cm or 1 inch (= 2.54 cm, see below). Some French manufacturers also increase cup sizes by 3 cm. Unlike dress sizes, manufacturers do not agree on a single standard.
British bras currently range from A to LL cup size (with Rigby&Peller recently introducing bras by Elila which go up to US-N-Cup), while most Americans can find bras with cup sizes ranging from A to G. Some brands (Goddess, Elila) go as high as N, a size roughly equal to a British JJ-Cup. In continental Europe, Milena Lingerie from Poland produces up to cup R.
Larger sizes are usually harder to find in retail outlets. As the cup size increases, the labeled cup size of different manufacturers' bras tend to vary more widely in actual volume. One study found that the label size was consistently different from the measured size.
Even medical studies have attested to the difficulty of getting a correct fit. Research by plastic surgeons has suggested that bra size is imprecise because breast volume is not calculated accurately:
<templatestyles src="Template:Blockquote/styles.css" />The current popular system of determining bra size is inaccurate so often as to be useless. Add to this the many different styles of bras and the lack of standardization between brands, and one can see why finding a comfortable, well-fitting bra is more a matter of educated guesswork, trial, and error than of precise measurements.
The use of the cup sizing and band measurement systems has evolved over time and continues to change. Experts recommend that women get fitted by an experienced person at a retailer offering the widest possible selection of bra sizes and brands.
Bad bra-fit symptoms.
If the straps dig into the shoulder, leaving red marks or causing shoulder or neck pain, the bra band is not offering enough support. If breast tissue overflows the bottom of the bra, under the armpit, or over the top edge of the bra cup, the cup size is too small. Loose fabric in the bra cup indicates the cup size is too big. If the underwires poke the breast under the armpit or if the bra's center panel does not lie flat against the sternum, the cup size is too small. If the band rides up the torso at the back, the band size is too big. If it digs into the flesh, causing the flesh to spill over the edges of the band, the band is too small. If the band feels tight, this may be due to the cups being too small; instead of going up in band size, the wearer should try going up in cup size. Similarly, a band might feel too loose if the cup is too big. It is possible to test whether a bra band is too tight or too loose by reversing the bra on the torso so that the cups are at the back and then checking for fit and comfort. Generally, if the wearer must continually adjust the bra or experiences general discomfort, the bra is a poor fit and she should get a new fitting.
Obtaining best fit.
Bra experts recommend that women, especially those whose cup sizes are D or larger, get a professional bra fitting from the lingerie department of a clothing store or a specialty lingerie store. However, even professional bra fitters in different countries including New Zealand and the United Kingdom produce inconsistent measurements of the same person. There is significant heterogeneity in breast shape, density, and volume. As such, current methods of bra fitting may be insufficient for this range of chest morphology.
A 2004 study by Consumers Reports in New Zealand found that 80% of department store bra fittings resulted in a poor fit. However, because manufacturers' standards vary widely, women cannot rely on their own measurements to obtain a satisfactory fit. Some bra manufacturers and distributors state that trying on and learning to recognize a properly fitting bra is the best way to determine a correct bra size, much like shoes.
A correctly fitting bra should meet the following criteria:
Confirming bra fit.
One method to confirm that the bra is the best fit has been nicknamed the "Swoop and Scoop". After identifying a well-fitting bra, the woman bends forward (the "swoop"), allowing her breasts to fall into the bra, filling the cup naturally, and then fastening the bra on the outermost set of hooks. When the woman stands up, she uses the opposite hand to place each breast gently into the cup (the "scoop"), and she then runs her index finger along the inside top edge of the bra cup to make sure her breast tissue does not spill over the edges.
Experts suggest that women choose a bra band that fits well on the outermost hooks. This allows the wearer to use the tighter hooks on the bra strap as it stretches during its lifetime of about eight months. The band should be tight enough to support the bust, but the straps should not provide the primary support.
Consumer measurement difficulties.
A bra is one of the most complicated articles of clothing to make. A typical bra design has between 20 and 48 parts, including the band, hooks, cups, lining, and straps. Major retailers place orders from manufacturers in batches of 10,000. Orders of this size require a large-scale operation to manage the cutting, sewing and packing required.
Constructing a properly fitting brassiere is difficult. Adelle Kirk, formerly a manager at the global Kurt Salmon management consulting firm that specializes in the apparel and retail businesses, said that making bras is complex:
<templatestyles src="Template:Blockquote/styles.css" />Bras are one of the most complex pieces of apparel. There are lots of different styles, and each style has a dozen different sizes, and within that there are a lot of colors. Furthermore, there is a lot of product engineering. You've got hooks, you've got straps, there are usually two parts to every cup, and each requires a heavy amount of sewing. It is very component intensive.
Asymmetric breasts.
Obtaining the correct size is complicated by the fact that up to 25% of women's breasts display a persistent, visible breast asymmetry, which is defined as differing in size by at least one cup size. For about 5% to 10% of women, their breasts are severely different, with the left breast being larger in 62% of cases. Minor asymmetry may be resolved by wearing a padded bra, but severe cases of developmental breast deformity — commonly called "Amazon's Syndrome" by physicians — may require corrective surgery due to morphological alterations caused by variations in shape, volume, position of the breasts relative to the inframammary fold, the position of the nipple-areola complex on the chest, or both.
Breast volume variation.
Obtaining the correct size is further complicated by the fact that the size and shape of women's breasts change during the menstrual cycle, for those who experience one, and can undergo unusual or unexpectedly rapid growth due to pregnancy, weight gain or loss, or medical conditions. Even breathing can substantially alter the measurements.
Some women's breasts can change shape by as much as 20% per month:
<templatestyles src="Template:Blockquote/styles.css" />"Breasts change shape quite consistently on a month-to-month basis, but they will individually change their volume by a different amount ... Some girls will change less than 10% and other girls can change by as much as 20%." Would it be better not to wear a bra at all then? "... In fact there are very few advantages in wearing existing bras. Having a bra that's generally supportive would have significant improvement particularly in terms of stopping them going south ...The skin is what gives the breasts their support."
Increases in average bra size.
In 2010, the most common bra size sold in the UK was 36D. In 2004, market research company Mintel reported that bust sizes in the United Kingdom had increased from 1998 to 2004 in younger as well as older consumers, while a more recent study showed that the most often sold bra size in the US in 2008 was 36D.
Researchers ruled out increases in population weight as the explanation and suggested it was instead likely due to more women wearing the correct, larger size.
Consumer measurement methods.
Bra retailers recommend several methods for measuring band and cup size. These are based on two primary measurements, taken either under the bust or over the bust, and sometimes both. Calculating the correct bra band size is complicated by a variety of factors. The American National Standards Institute states that while a voluntary consensus of sizes exists, there is much confusion about the 'true' size of clothing. As a result, bra measurement can be considered an art as much as a science. Online and in-person bra shopping experiences may differ because online recommendations are based on averages, whereas in-person shopping can be completely personalized, so the shopper may easily try on band sizes above and below her measured band size. A woman with a large cup size and an in-between band size may find that her cup size is not available in local stores and may have to shop online, where most large cup sizes are readily available on certain sites. Others recommend rounding to the nearest whole number.
Band measurement methods.
There are several possible methods for measuring the bust.
Underbust +0.
A measuring tape is pulled around the torso at the inframammary fold. The tape is then pulled tight while remaining horizontal and parallel to the floor. The measurement in inches is then rounded to the nearest even number for the band size. As of 2018, Kohl's uses this method for its online fitting guide.
Underbust +4.
This method begins the same way as the underbust +0 method, where a measuring tape is pulled tight around the torso under the bust while remaining horizontal. If the measurement is even, 4 is added to calculate the band size. If it is odd, 5 is added. Kohl's used this method in 2013. The "war on plus four" was a name given to a campaign (circa 2011) against this method, with underbust +0 supporters claiming that the then-ubiquitous +4 method fails to fit a majority of women. The underbust +4 method generally applies only to US and UK sizes.
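Both band rules amount to simple arithmetic on the underbust measurement. The Python sketch below shows them side by side; the function names are illustrative rather than taken from any retailer's calculator, and fractional measurements are rounded to the nearest whole inch for the +4 rule, which is an assumption not spelled out above.

```python
def band_underbust_plus_0(underbust_in: float) -> int:
    """Underbust +0: round the underbust measurement (in inches) to the nearest even number."""
    return 2 * round(underbust_in / 2)

def band_underbust_plus_4(underbust_in: float) -> int:
    """Underbust +4: add 4 to an even whole-inch measurement, 5 to an odd one."""
    whole = round(underbust_in)        # fractional measurements rounded first (an assumption)
    return whole + 4 if whole % 2 == 0 else whole + 5

for underbust in (29.8, 31.3, 32.6, 34.1):
    print(underbust, band_underbust_plus_0(underbust), band_underbust_plus_4(underbust))
# 29.8 -> 30 and 34, 31.3 -> 32 and 36, 32.6 -> 32 and 38, 34.1 -> 34 and 38
```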
Sizing chart.
Currently, many large U.S. department stores determine band size by starting with the measurement taken underneath the bust similar to the aforementioned underbust +0 and underbust +4 methods. A sizing chart or calculator then uses this measurement to determine the band size. Band sizes calculated using this method vary between manufacturers.
Underarm/upper bust.
A measuring tape is pulled around the torso under the armpit and above the bust. Because band sizes are most commonly manufactured in even numbers, the wearer must round to the closest even number.
Cup measurement methods.
Bra-wearers can calculate their cup size by finding the difference between their bust size and their band size. The bust size, bust line measure, or over-bust measure is the measurement around the torso over the fullest part of the breasts, with the crest of the breast halfway between the elbow and shoulder, usually over the nipples, ideally while standing straight with arms to the side and wearing a properly fitted bra, because this practice assumes the current bra fits correctly. The measurements are made in the same units as the band size, either inches or centimetres. The cup size is calculated by subtracting the band size from the over-the-bust measurement.
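As a rough illustration of this arithmetic, the Python sketch below computes the bust-minus-band difference in inches and maps it onto a plain one-letter-per-inch ladder. Real labelling schemes use doubled letters and differ between brands and countries, as the following sections explain, so the ladder here is only a placeholder assumption.

```python
import string

def cup_letter(bust_in: float, band_in: int) -> str:
    """One cup letter per inch of bust-band difference, on a plain A-B-C-... ladder."""
    diff = round(bust_in - band_in)
    if diff < 1:
        return "AA or smaller"
    return string.ascii_uppercase[min(diff - 1, 25)]

print(cup_letter(37, 34))   # a 3-inch difference gives "C" on this ladder
```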
The meaning of cup sizes varies.
Cup sizes vary from one country to another. For example, a U.S. H-cup does not have the same size as an Australian H-cup, even though both are based on measurements in inches. The larger the cup size, the bigger the variation.
Surveys of bra sizes tend to be very dependent on the population studied and how it was obtained. For instance, one U.S. study reported that the most common size was 34B, followed by 34C, that 63% were size 34 and 39% cup size B. However, the survey sample was drawn from 103 Caucasian student volunteers at a Midwest U.S. university aged 18–25, and excluded pregnant and nursing women.
Plastic surgeon measuring system.
Bra-wearers who have difficulty calculating a correct cup size may be able to find a correct fit using a method adopted by plastic surgeons. Using a flexible tape measure, position the tape at the outside of the chest, under the arm, where the breast tissue begins. Measure across the fullest part of the breast, usually across the nipple, to where the breast tissue stops at the breast bone.
Conversion of the measurement to cup size is shown in the "Measuring cup size" table.
Note that, in general, countries that employ metric cup sizing (like in § Continental Europe) have their own system of increments that result in cup sizes which differ from those using inches, since a 2 cm step does not equal a 1 inch (2.54 cm) step.
These cup measurements are only correct for converting cup sizes for a band to cm using this particular method, because cup size is relative to band size. This principle means that bras of differing band size can have the same volume. For example, the cup volume is the same for 30D, 32C, 34B, and 36A. These related bra sizes of the same cup volume are called "sister sizes". For a list of such sizes, refer to § Calculating cup volume and breast weight.
Consumer fit research.
A 2012 study by White and Scurr of the University of Portsmouth compared the traditional method used in many United Kingdom lingerie shops, which adds 4 to the band size and uses an over-the-bust measurement, with measurements obtained using a professional method. The study relied on the professional bra-fitting method described by McGhee and Steele (2010), which uses a five-step approach to obtain the best-fitting bra size for an individual. The study measured 45 women using the traditional selection method that adds 4 to the band size with an over-the-bust measurement. Women then tried bras on until they obtained the best fit based on professional bra-fitting criteria. The researchers found that 76% of women overestimated their band size and 84% underestimated their cup size. When women wear bras with too big a band, breast support is reduced, while too small a cup size may cause skin irritation. They noted that "ill-fitting bras and insufficient breast support can lead to the development of musculoskeletal pain and inhibit women participating in physical activity". The study recommended that women should be educated about the criteria for finding a well-fitting bra, and that they measure under the bust to determine band size rather than using the traditional over-the-bust measurement.
Manufacturer design standards.
Bra-labeling systems used around the world are at times misleading and confusing. Cup and band sizes vary around the world. In countries that have adopted the European EN 13402 dress-size standard, the torso is measured in centimetres and rounded to the nearest multiple of 5 cm. Bra-fitting experts in the United Kingdom state that many women who buy off the rack without professional assistance wear up to two sizes too small.
Manufacturer Fruit of the Loom attempted to solve the problem of finding a well-fitting bra for asymmetrical breasts by introducing Pick Your Perfect Bra, which allows women to choose a bra with two different cup sizes, although it is only available in A through D cup sizes.
One very prominent discrepancy between the sizing systems is the fact that the US band sizes, based on inches, do not correspond to their centimetre-based EU counterparts.
There are several sizing systems in different countries.
Cup size is determined by one of two methods: in the US and UK, the cup size increases by one for every inch of difference; in all other systems, it increases for every two centimeters. Since one inch equals 2.54 centimeters, there is considerable discrepancy between the systems, which becomes more exaggerated as cup sizes increase. Many bras are available in only 36 sizes.
UK.
The UK and US use the inch system. The difference in chest circumference between the cup sizes is always one inch, or 2.54 cm. The difference between 2 band sizes is 2 inches or 5.08 cm.
Leading brands and manufacturers, including Panache, Bestform, Gossard, Freya, Curvy Kate, Bravissimo and Fantasie, use the British standard band sizes (where underbust measurement equals band size) 28-30-32-34-36-38-40-42-44, and so on. Cup sizes are designated by AA-A-B-C-D-DD-E-F-FF-G-GG-H-HH-J-JJ-K-KK-L.
However, some clothing retailers and mail order companies have their own house brands and use a custom sizing system. Marks and Spencers uses AA-A-B-C-D-DD-E-F-G-GG-H-J, leaving out FF and HH, in addition to following the US band sizing convention. As a result, their J-Cup is equal to a British standard H-cup. Evans and ASDA sell bras (ASDA as part of their George clothing range) whose sizing runs A-B-C-D-DD-E-F-G-H. Their H-Cup is roughly equal to a British standard G-cup.
Some retailers reserve AA for young teens, and use AAA for women.
Australia/New Zealand.
Australia and New Zealand cup and band sizes are in metric increments of 2 cm per cup, similar to many European brands. Cup labelling methods and sizing schemes are inconsistent and there is great variability between brands. In general, cup sizes AA-DD follow UK labels but thereafter split off from this system and employ European labels (no double letters, with cups progressing F-G-H etc. for every 2 cm increase). However, a great many local manufacturers employ unique labelling systems. Australia and New Zealand bra band sizes are labelled in dress size, although they are obtained by underbust measurement, whilst dress sizes utilise bust-waist-hip measurements. In practice very few of the leading Australian manufacturers produce sizes F+ and many disseminate sizing misinformation. The Australian demand for DD+ is largely met by various UK, US and European major brands. This has introduced further sizing scheme confusion that is poorly understood even by specialist retailers.
United States.
Bra sizing in the United States is very similar to that in the United Kingdom. Band sizes use the same designation in inches and the cups also increase in 1-inch steps. However, some manufacturers use conflicting sizing methods. Some label bras beyond a C cup as D-DD-DDD-DDDD-E-EE-EEE-EEEE-F..., some use the variation D1, D2, D3, D4, D5..., but many use the following system: A, B, C, D, DD, DDD, G, H, I, J, K, L, M, N, O, and others label them like the British system D-DD-E-F-FF... Comparing the larger cup sizes between different manufacturers can be difficult.
In 2013, underwear maker Jockey International offered a new way to measure bra and cup size. It introduced a system with ten cup sizes per band size that are numbered and not lettered, designated as 1–36, 2–36 etc. The company developed the system over eight years, during which they scanned and measured the breasts and torsos of 800 women. Researchers also tracked the women's use of their bras at home. To implement the system, women must purchase a set of plastic cups from the company to find their Jockey cup size. Some analysts were critical of the requirement to buy the measurement kit, since women must pay about US$20 to adopt Jockey's proprietary system, in addition to the cost of the bras themselves.
Europe / International.
European bra sizes are based on centimeters. They are also known as "International." Abbreviations such as "EU", "Intl" and "Int" are all referring to the same European bra size convention. These sizes are used in most of Europe and large parts of the world.
The underbust measurement is rounded to the nearest multiple of 5 cm. Band sizes run 65, 70, 75, 80 etc., increasing in steps of 5 cm, similar to the English double inch. A person with a measured underbust circumference of 78–82 cm should wear a band size 80. How tightly the tape measure is pulled depends on the softness of the adipose tissue: softer tissue requires a tighter measurement to ensure that the bra band will fit snugly on the body and stay in place. A loose measurement can, and often does, differ from the tighter measurement. This causes some confusion, as a person with a loose measurement of 84 cm might assume a band size of 85, but due to a lot of soft tissue the same person might have a snugger, tighter measurement of 79 cm and should choose the more appropriate band size of 80, or an even smaller band size.
The cup labels normally begin with "A" for a 13±1 cm difference between the bust circumference and the underbust circumference measured loosely (i.e. not tightly, as is done for the bra band size); the difference is thus not taken between the bust circumference and the band size (which normally requires some tightening when measured). To clarify the important difference in measuring: underbust measuring for the bra band is done snugly and tightly, while measuring the underbust for determining the bra cup is done loosely. For people with much soft adipose tissue these two measurements will not be identical. In this sense the method for determining European sizes differs from the English systems, where the cup sizes are determined by comparing the bust measurement to the bra band size. European cups increase for every additional 2 cm of difference between bust and underbust measurement, instead of 2.5 cm or 1 inch, and except for the initial cup sizes the letters are neither doubled nor skipped. In very large cup sizes this results in smaller cups than their English counterparts.
This system has been standardized in the European dress size standard EN 13402 introduced in 2006, but was in use in many European countries before that date.
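A minimal Python sketch of the European convention just described, assuming a snug underbust measurement for the band (rounded to the nearest multiple of 5 cm) and a loose underbust together with a bust measurement for the cup (one cup step per additional 2 cm of difference, with roughly 13 cm corresponding to an "A" cup). The letter ladder and the clamping at its ends are simplifications of this sketch, not part of the standard.

```python
EU_CUPS = ["AA", "A", "B", "C", "D", "E", "F", "G", "H"]   # simplified single-letter ladder

def eu_band(snug_underbust_cm: float) -> int:
    """Band size: the snug underbust measurement rounded to the nearest multiple of 5 cm."""
    return int(5 * round(snug_underbust_cm / 5))

def eu_cup(bust_cm: float, loose_underbust_cm: float) -> str:
    """Cup letter: one step per 2 cm of (bust minus loose underbust), with about 13 cm for 'A'."""
    diff = bust_cm - loose_underbust_cm
    index = int(round((diff - 13) / 2)) + 1        # 13 +/- 1 cm corresponds to 'A'
    return EU_CUPS[max(0, min(index, len(EU_CUPS) - 1))]

print(eu_band(79), eu_cup(97, 82))   # 80 B under these assumptions
```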
South Korea/Japan.
In South Korea and Japan the torso is measured in centimetres and rounded to the nearest multiple of 5 cm. Band sizes run 65-70-75-80..., increasing in steps of 5 cm, similar to the English double inch. A person with a loosely measured underbust circumference of 78–82 cm should wear a band size 80.
The cup labels begin with "AAA" for a 5±1.25 cm difference between bust and underbust circumference, i.e. similar bust circumference and band size as in the English systems. They increase in steps of 2.5 cm, and except for the initial cup size letters are neither doubled nor skipped.
Japanese sizes are the same as Korean ones, but the cup labels begin with "AA" for a 7.5±1.25 cm difference, and the cup letter usually precedes the band designation, i.e. "B75" instead of "75B".
This system has been standardized in the Korea dress size standard KS K9404 introduced in 1999 and in Japan dress size standard JIS L4006 introduced in 1998.
France/Belgium/Spain.
The French and Spanish system is a permutation of the Continental European sizing system. While cup sizes are the same, band sizes are exactly 15 cm larger than the European band size.
Italy.
The Italian band size uses small consecutive integers instead of the underbust circumference rounded to the nearest multiple of 5 cm. Since it starts with size 0 for European size 60, the conversion consists of a division by 5 and then a subtraction of 12. The size designations are often given in Roman numerals.
Cup sizes have traditionally used a step size of 2.5 cm, which is close to the English inch of 2.54 cm, and featured some double letters for large cups, but in recent years some Italian manufacturers have switched over to the European 2-cm system.
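The band conversions described in the last two subsections are simple affine transformations of the European band number; a small Python sketch:

```python
def eu_to_french(eu_band: int) -> int:
    """French/Spanish band size: the European band number plus 15."""
    return eu_band + 15

def eu_to_italian(eu_band: int) -> int:
    """Italian band size: divide the European band by 5, then subtract 12 (size 0 <-> EU 60)."""
    return eu_band // 5 - 12

for eu in (60, 65, 70, 75, 80, 85):
    print(eu, eu_to_french(eu), eu_to_italian(eu))
# EU 60 -> FR/ES 75, IT 0; EU 80 -> FR/ES 95, IT 4; and so on.
```

For example, a European 80 band corresponds to a French or Spanish 95 and an Italian size 4 under these rules.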
Here is a conversion table for bra sizes in Italy with respect to other countries:
Advertising and retail influence.
Manufacturers' marketing and advertising often appeals to fashion and image over fit, comfort, and function. Since about 1994, manufacturers have re-focused their advertising, moving from advertising functional brassieres that emphasize support and foundation, to selling lingerie that emphasize fashion while sacrificing basic fit and function, like linings under scratchy lace.
Engineered alternative to traditional bras.
English mechanical engineer and professor John Tyrer from Loughborough University has devised a solution to problematic bra fit by re-engineering bra design. He started investigating the problem of bra design while on an assignment from the British government, after his wife returned disheartened from an unsuccessful shopping trip. His initial research into the extent of fitting problems soon revealed that 80% of women wear the wrong size of bra. He theorised that this widespread practice of purchasing the wrong size was due to the measurement system recommended by bra manufacturers, which employs a combination of maximum chest diameter (under bust) and maximum bust diameter (bust) rather than the actual breast volume which is to be accommodated by the bra. According to Tyrer, "to get the most supportive and fitted bra it's infinitely better if you know the volume of the breast and the size of the back." He says the A, B, C, D cup measurement system is flawed: "It's like measuring a motor car by the diameter of the gas cap." "The whole design is fundamentally flawed. It's an instrument of torture." Tyrer has developed a bra design with crossed straps in the back, which use the weight of one breast to lift the other by counterbalance; standard designs constrict chest movement during breathing. One of the tools used in the development of Tyrer's design has been a projective differential shape body analyzer costing 40,000 GBP.
Breasts can weigh up to about 1 kg, not 0.2–0.3 kg. Tyrer said, "By measuring the diameter of the chest and breasts current measurements are supposed to tell you something about the size and volume of each breast, but in fact it doesn't". Bra companies remain reluctant to manufacture Tyrer's prototype, which is a front-closing bra with a more vertical orientation and adjustable cups.
Calculating cup volume and breast weight.
The average breast weighs about . Each breast contributes to about 4–5% of the body fat. The density of fatty tissue is more or less equal to
If a cup is a hemisphere, its volume "V" is given by the following formula:
formula_0
where "r" is the radius of the cup, and "D" is its diameter.
If the cup is a hemi-ellipsoid, its volume is given by the formula:
formula_1
where "a", "b" and "c" are the three semi-axes of the hemi-ellipsoid, and "cw", "cd" and "wl" are respectively the cup width, the cup depth and the length of the wire.
Cups give a hemi-spherical shape to breasts and underwires give shape to cups. So the curvature radius of the underwire is the key parameter to determine volume and weight of the breast. The same underwires are used for the cups of sizes 36A, 34B, 32C, 30D etc. ... so those cups have the same volume. The reference numbers of underwire sizes are based on a B cup bra, for example underwire size 32 is for 32B cup (and 34A, 30C...). An underwire size 30 width has a curvature diameter of and this diameter increases by by size. The table below shows volume calculations for some cups that can be found in a ready-to-wear large size shop.
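Treating the cup as a hemisphere whose diameter equals the curvature diameter of the underwire, the first volume formula above reduces to V = πD³/12. The Python sketch below evaluates it for a few illustrative wire diameters (placeholder values, since the concrete figures are not reproduced here) and adds a rough mass estimate that assumes a tissue density close to that of water, which is an assumption of this sketch.

```python
import math

def hemisphere_cup_volume_ml(wire_diameter_cm: float) -> float:
    """Volume of a hemispherical cup, V = pi * D**3 / 12, in millilitres (1 cm^3 = 1 ml)."""
    return math.pi * wire_diameter_cm ** 3 / 12

def approximate_mass_kg(volume_ml: float, density_kg_per_litre: float = 1.0) -> float:
    """Rough mass estimate; the default density (that of water) is an assumption."""
    return volume_ml / 1000 * density_kg_per_litre

for d in (10.0, 11.0, 12.0):   # illustrative wire curvature diameters in cm (placeholders)
    v = hemisphere_cup_volume_ml(d)
    print(f"D = {d} cm -> V = {v:.0f} ml, mass = {approximate_mass_kg(v):.2f} kg")
```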
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "V=\\frac{2 \\pi r^3}{3}=\\frac{\\pi D^3}{12}"
},
{
"math_id": 1,
"text": "V =\\frac{2 \\pi a b c}{3} \\approx \\frac{\\pi \\times cw \\times cd \\times wl}{12}"
}
] | https://en.wikipedia.org/wiki?curid=8003764 |
8006138 | Divided power structure | In mathematics, specifically commutative algebra, a divided power structure is a way of introducing items with similar properties as expressions of the form formula_0 have, also when it is not possible to actually divide by formula_1.
Definition.
Let "A" be a commutative ring with an ideal "I". A divided power structure (or PD-structure, after the French "puissances divisées") on "I" is a collection of maps formula_2 for "n" = 0, 1, 2, ... such that:
For convenience of notation, formula_16 is often written as formula_17 when it is clear what divided power structure is meant.
The term "divided power ideal" refers to an ideal with a given divided power structure, and "divided power ring" refers to a ring with a given ideal with divided power structure.
Homomorphisms of divided power algebras are ring homomorphisms that respects the divided power structure on its source and target.
Examples.
The free divided power algebra over formula_18 on one generator is
formula_19
If "A" is an algebra over formula_20, then every ideal "I" has a unique divided power structure, given by formula_21.
If "M" is an "A"-module, the dual formula_23 of the symmetric algebra formula_22 carries a canonical divided power structure; it is closely related to the divided power algebra formula_24.
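As a small worked instance of the product rule from the definition, written in the bracket notation for divided powers, one has for example

```latex
x^{[2]}\,x^{[3]} \;=\; \frac{(2+3)!}{2!\,3!}\,x^{[5]} \;=\; 10\,x^{[5]},
```

which mirrors the identity (x^2/2!)(x^3/3!) = 10 · x^5/5! over the rationals.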
Constructions.
If "A" is any ring, there exists a divided power ring
formula_25
consisting of "divided power polynomials" in the variables
formula_26
that is sums of "divided power monomials" of the form
formula_27
with formula_28. Here the divided power ideal is the set of divided power polynomials with constant coefficient 0.
More generally, if "M" is an "A"-module, there is a universal "A"-algebra, called
formula_29
with PD ideal
formula_30
and an "A"-linear map
formula_31
If "I" is any ideal of a ring "A", there is a universal construction which extends "A" with divided powers of elements of "I" to get a divided power envelope of "I" in "A".
Applications.
The divided power envelope is a fundamental tool in the theory of PD differential operators and crystalline cohomology, where it is used to overcome technical difficulties which arise in positive characteristic.
The divided power functor is used in the construction of co-Schur functors.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x^n / n!"
},
{
"math_id": 1,
"text": "n!"
},
{
"math_id": 2,
"text": "\\gamma_n : I \\to A"
},
{
"math_id": 3,
"text": "\\gamma_0(x) = 1"
},
{
"math_id": 4,
"text": "\\gamma_1(x) = x"
},
{
"math_id": 5,
"text": "x \\in I"
},
{
"math_id": 6,
"text": "\\gamma_n(x) \\in I"
},
{
"math_id": 7,
"text": "\\gamma_n(x + y) = \\sum_{i=0}^n \\gamma_{n-i}(x) \\gamma_i(y)"
},
{
"math_id": 8,
"text": "x, y \\in I"
},
{
"math_id": 9,
"text": "\\gamma_n(\\lambda x) = \\lambda^n \\gamma_n(x)"
},
{
"math_id": 10,
"text": "\\lambda \\in A, x \\in I"
},
{
"math_id": 11,
"text": "\\gamma_m(x) \\gamma_n(x) = ((m, n)) \\gamma_{m+n}(x)"
},
{
"math_id": 12,
"text": "((m, n)) = \\frac{(m+n)!}{m! n!}"
},
{
"math_id": 13,
"text": "\\gamma_n(\\gamma_m(x)) = C_{n, m} \\gamma_{mn}(x)"
},
{
"math_id": 14,
"text": "m > 0"
},
{
"math_id": 15,
"text": "C_{n, m} = \\frac{(mn)!}{(m!)^n n!}"
},
{
"math_id": 16,
"text": "\\gamma_n(x)"
},
{
"math_id": 17,
"text": "x^{[n]}"
},
{
"math_id": 18,
"text": "\\Z"
},
{
"math_id": 19,
"text": "\\Z\\langle{x}\\rangle:=\\Z\\left [x,\\tfrac{x^2}{2},\\ldots,\\tfrac{x^n}{n!},\\ldots \\right]\\subset \\Q[x]."
},
{
"math_id": 20,
"text": "\\Q,"
},
{
"math_id": 21,
"text": "\\gamma_n(x) = \\tfrac{1}{n!} \\cdot x^n."
},
{
"math_id": 22,
"text": "S^\\bullet M"
},
{
"math_id": 23,
"text": "(S^\\bullet M)^\\vee = \\text{Hom}_A(S^\\bullet M, A)"
},
{
"math_id": 24,
"text": "\\Gamma_A(\\check{M})"
},
{
"math_id": 25,
"text": "A \\langle x_1, x_2, \\ldots, x_n \\rangle"
},
{
"math_id": 26,
"text": "x_1, x_2, \\ldots, x_n,"
},
{
"math_id": 27,
"text": "c x_1^{[i_1]} x_2^{[i_2]} \\cdots x_n^{[i_n]}"
},
{
"math_id": 28,
"text": "c \\in A"
},
{
"math_id": 29,
"text": "\\Gamma_A(M),"
},
{
"math_id": 30,
"text": "\\Gamma_+(M)"
},
{
"math_id": 31,
"text": "M \\to \\Gamma_+(M)."
}
] | https://en.wikipedia.org/wiki?curid=8006138 |
8006956 | Generalizations of Fibonacci numbers | Mathematical sequences
In mathematics, the Fibonacci numbers form a sequence defined recursively by:
formula_0
That is, after two starting values, each number is the sum of the two preceding numbers.
The Fibonacci sequence has been studied extensively and generalized in many ways, for example, by starting with other numbers than 0 and 1, by adding more than two numbers to generate the next number, or by adding objects other than numbers.
Extension to negative integers.
Using formula_1, one can extend the Fibonacci numbers to negative integers. So we get:
... −8, 5, −3, 2, −1, 1, 0, 1, 1, 2, 3, 5, 8, ...
and formula_2.
See also Negafibonacci coding.
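A short Python sketch (the language choice is only for illustration) of this extension, using the identity above for negative indices:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n: int) -> int:
    """Fibonacci numbers extended to negative integers via F(-n) = (-1)**(n+1) * F(n)."""
    if n < 0:
        return (-1) ** (-n + 1) * fib(-n)
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print([fib(n) for n in range(-8, 9)])
# [-21, 13, -8, 5, -3, 2, -1, 1, 0, 1, 1, 2, 3, 5, 8, 13, 21]
```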
Extension to all real or complex numbers.
There are a number of possible generalizations of the Fibonacci numbers which include the real numbers (and sometimes the complex numbers) in their domain. These each involve the golden ratio φ, and are based on Binet's formula
formula_3
The analytic function
formula_4
has the property that formula_5 for even integers formula_6. Similarly, the analytic function:
formula_7
satisfies formula_8 for odd integers formula_6.
Finally, putting these together, the analytic function
formula_9
satisfies formula_10 for all integers formula_6.
Since formula_11 for all complex numbers formula_12, this function also provides an extension of the Fibonacci sequence to the entire complex plane. Hence we can calculate the generalized Fibonacci function of a complex variable, for example,
formula_13
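A quick numerical check of this analytic continuation, sketched with Python's cmath; the printed digits are approximate.

```python
import cmath

SQRT5 = 5 ** 0.5
PHI = (1 + SQRT5) / 2

def fib_analytic(z: complex) -> complex:
    """Fib(z) = (phi**z - cos(pi*z) * phi**(-z)) / sqrt(5), the continuation described above."""
    phi_z = cmath.exp(z * cmath.log(PHI))        # principal branch of phi**z
    return (phi_z - cmath.cos(cmath.pi * z) / phi_z) / SQRT5

print([round(abs(fib_analytic(n))) for n in range(10)])   # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34
print(fib_analytic(3 + 4j))                               # roughly -5248.5 - 14195.9j
```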
Vector space.
The term "Fibonacci sequence" is also applied more generally to any function formula_14 from the integers to a field for which formula_15. These functions are precisely those of the form formula_16, so the Fibonacci sequences form a vector space with the functions formula_17 and formula_18 as a basis.
More generally, the range of formula_14 may be taken to be any abelian group (regarded as a Z-module). Then the Fibonacci sequences form a 2-dimensional Z-module in the same way.
Similar integer sequences.
Fibonacci integer sequences.
The 2-dimensional formula_19-module of Fibonacci integer sequences consists of all integer sequences satisfying formula_15. Expressed in terms of two initial values we have:
formula_20
where formula_21 is the golden ratio.
The ratio between two consecutive elements converges to the golden ratio, except in the case of the sequence which is constantly zero and the sequences where the ratio of the two first terms is formula_22.
The sequence can be written in the form
formula_23
in which formula_24 if and only if formula_25. In this form the simplest non-trivial example has formula_26, which is the sequence of Lucas numbers:
formula_27
We have formula_28 and formula_29. The properties include:
formula_30
Every nontrivial Fibonacci integer sequence appears (possibly after a shift by a finite number of positions) as one of the rows of the Wythoff array. The Fibonacci sequence itself is the first row, and a shift of the Lucas sequence is the second row.
See also Fibonacci integer sequences modulo "n".
Lucas sequences.
A different generalization of the Fibonacci sequence is the Lucas sequences of the kind defined as follows:
formula_31
where the normal Fibonacci sequence is the special case of formula_32 and formula_33. Another kind of Lucas sequence begins with formula_34, formula_35. Such sequences have applications in number theory and primality proving.
When formula_33, this sequence is called P-Fibonacci sequence, for example, Pell sequence is also called 2-Fibonacci sequence.
The 3-Fibonacci sequence is
0, 1, 3, 10, 33, 109, 360, 1189, 3927, 12970, 42837, 141481, 467280, 1543321, 5097243, 16835050, 55602393, 183642229, 606529080, ... (sequence in the OEIS)
The 4-Fibonacci sequence is
0, 1, 4, 17, 72, 305, 1292, 5473, 23184, 98209, 416020, 1762289, 7465176, 31622993, 133957148, 567451585, 2403763488, ... (sequence in the OEIS)
The 5-Fibonacci sequence is
0, 1, 5, 26, 135, 701, 3640, 18901, 98145, 509626, 2646275, 13741001, 71351280, 370497401, 1923838285, 9989688826, ... (sequence in the OEIS)
The 6-Fibonacci sequence is
0, 1, 6, 37, 228, 1405, 8658, 53353, 328776, 2026009, 12484830, 76934989, 474094764, 2921503573, 18003116202, ... (sequence in the OEIS)
The n-Fibonacci constant is the ratio toward which adjacent formula_6-Fibonacci numbers tend; it is also called the nth metallic mean, and it is the only positive root of formula_36. For example, the case of formula_37 is formula_38, or the golden ratio, and the case of formula_39 is formula_40, or the silver ratio. Generally, the case of formula_6 is formula_41.
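A Python sketch of the recurrence defining U(n): with P = n and Q = −1 it reproduces the n-Fibonacci sequences listed above, and the ratio of successive terms approaches the corresponding metallic mean.

```python
def lucas_u(p: int, q: int, length: int) -> list[int]:
    """U(0) = 0, U(1) = 1, U(n + 2) = P*U(n + 1) - Q*U(n)."""
    seq = [0, 1]
    while len(seq) < length:
        seq.append(p * seq[-1] - q * seq[-2])
    return seq[:length]

print(lucas_u(1, -1, 10))    # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34   (Fibonacci)
print(lucas_u(2, -1, 8))     # 0, 1, 2, 5, 12, 29, 70, 169       (Pell / 2-Fibonacci)
print(lucas_u(3, -1, 8))     # 0, 1, 3, 10, 33, 109, 360, 1189   (3-Fibonacci)

seq = lucas_u(3, -1, 30)
print(seq[-1] / seq[-2])     # approaches (3 + 13**0.5) / 2, the third metallic mean
```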
Generally, formula_42 can be called ("P",−"Q")-Fibonacci sequence, and "V"("n") can be called ("P",−"Q")-Lucas sequence.
The (1,2)-Fibonacci sequence is
0, 1, 1, 3, 5, 11, 21, 43, 85, 171, 341, 683, 1365, 2731, 5461, 10923, 21845, 43691, 87381, 174763, 349525, 699051, 1398101, 2796203, 5592405, 11184811, 22369621, 44739243, 89478485, ... (sequence in the OEIS)
The (1,3)-Fibonacci sequence is
1, 1, 4, 7, 19, 40, 97, 217, 508, 1159, 2683, 6160, 14209, 32689, 75316, 173383, 399331, 919480, 2117473, 4875913, 11228332, 25856071, 59541067, ... (sequence in the OEIS)
The (2,2)-Fibonacci sequence is
0, 1, 2, 6, 16, 44, 120, 328, 896, 2448, 6688, 18272, 49920, 136384, 372608, 1017984, 2781184, 7598336, 20759040, 56714752, ... (sequence in the OEIS)
The (3,3)-Fibonacci sequence is
0, 1, 3, 12, 45, 171, 648, 2457, 9315, 35316, 133893, 507627, 1924560, 7296561, 27663363, 104879772, 397629405, 1507527531, 5715470808, ... (sequence in the OEIS)
Fibonacci numbers of higher order.
A Fibonacci sequence of order n is an integer sequence in which each sequence element is the sum of the previous formula_6 elements (with the exception of the first formula_6 elements in the sequence). The usual Fibonacci numbers are a Fibonacci sequence of order 2. The cases formula_43 and formula_44 have been thoroughly investigated. The number of compositions of nonnegative integers into parts that are at most formula_6 is a Fibonacci sequence of order formula_6. The sequence of the number of strings of 0s and 1s of length formula_45 that contain at most formula_6 consecutive 0s is also a Fibonacci sequence of order formula_6.
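A minimal Python generator for order-n Fibonacci sequences as just defined, starting (as in the listings below) with n − 1 zeros followed by a one:

```python
def n_nacci(order: int, length: int) -> list[int]:
    """Order-n sequence: n - 1 leading zeros, then 1, then sums of the previous n terms."""
    seq = [0] * (order - 1) + [1]
    while len(seq) < length:
        seq.append(sum(seq[-order:]))
    return seq[:length]

print(n_nacci(2, 10))   # 0, 1, 1, 2, 3, 5, 8, 13, 21, 34    (Fibonacci)
print(n_nacci(3, 10))   # 0, 0, 1, 1, 2, 4, 7, 13, 24, 44    (tribonacci)
print(n_nacci(4, 10))   # 0, 0, 0, 1, 1, 2, 4, 8, 15, 29     (tetranacci)
```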
These sequences, their limiting ratios, and the limit of these limiting ratios, were investigated by Mark Barr in 1913.
Tribonacci numbers.
The tribonacci numbers are like the Fibonacci numbers, but instead of starting with two predetermined terms, the sequence starts with three predetermined terms and each term afterwards is the sum of the preceding three terms. The first few tribonacci numbers are:
0, 0, 1, 1, 2, 4, 7, 13, 24, 44, 81, 149, 274, 504, 927, 1705, 3136, 5768, 10609, 19513, 35890, 66012, … (sequence in the OEIS)
The series was first described formally by Agronomof in 1914, but its first unintentional use is in the "Origin of Species" by Charles R. Darwin: in the example illustrating the growth of an elephant population, he relied on the calculations made by his son, George H. Darwin. The term tribonacci was suggested by Feinberg in 1963.
The tribonacci constant
formula_46 (sequence in the OEIS)
is the ratio toward which adjacent tribonacci numbers tend. It is a root of the polynomial formula_47, and also satisfies the equation formula_48. It is important in the study of the snub cube.
The reciprocal of the tribonacci constant, expressed by the relation formula_49, can be written as:
formula_50 (sequence in the OEIS)
The tribonacci numbers are also given by
formula_51
where formula_52 denotes the nearest integer function and
formula_53
Tetranacci numbers.
The tetranacci numbers start with four predetermined terms, each term afterwards being the sum of the preceding four terms. The first few tetranacci numbers are:
0, 0, 0, 1, 1, 2, 4, 8, 15, 29, 56, 108, 208, 401, 773, 1490, 2872, 5536, 10671, 20569, 39648, 76424, 147312, 283953, 547337, … (sequence in the OEIS)
The tetranacci constant is the ratio toward which adjacent tetranacci numbers tend. It is a root of the polynomial formula_54, approximately 1.927561975482925 (sequence in the OEIS), and also satisfies the equation formula_55.
The tetranacci constant can be expressed in terms of radicals by the following expression:
formula_56
where,
formula_57
and formula_58 is the real root of the cubic equation formula_59
Higher orders.
Pentanacci, hexanacci, heptanacci, octanacci and enneanacci numbers have been computed.
Pentanacci numbers:
0, 0, 0, 0, 1, 1, 2, 4, 8, 16, 31, 61, 120, 236, 464, 912, 1793, 3525, 6930, 13624, … (sequence in the OEIS)
The pentanacci constant is the ratio toward which adjacent pentanacci numbers tend.
It is a root of the polynomial formula_60, approximately 1.965948236645485 (sequence in the OEIS), and also satisfies the equation formula_61.
Hexanacci numbers:
0, 0, 0, 0, 0, 1, 1, 2, 4, 8, 16, 32, 63, 125, 248, 492, 976, 1936, 3840, 7617, 15109, … (sequence in the OEIS)
The hexanacci constant is the ratio toward which adjacent hexanacci numbers tend.
It is a root of the polynomial formula_62, approximately 1.98358284342 (sequence in the OEIS), and also satisfies the equation formula_63.
Heptanacci numbers:
0, 0, 0, 0, 0, 0, 1, 1, 2, 4, 8, 16, 32, 64, 127, 253, 504, 1004, 2000, 3984, 7936, 15808, … (sequence in the OEIS)
The heptanacci constant is the ratio toward which adjacent heptanacci numbers tend.
It is a root of the polynomial formula_64, approximately 1.99196419660 (sequence in the OEIS), and also satisfies the equation formula_65.
Octanacci numbers:
0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 4, 8, 16, 32, 64, 128, 255, 509, 1016, 2028, 4048, 8080, 16128, ... (sequence in the OEIS)
Enneanacci numbers:
0, 0, 0, 0, 0, 0, 0, 0, 1, 1, 2, 4, 8, 16, 32, 64, 128, 256, 511, 1021, 2040, 4076, 8144, 16272, ... (sequence in the OEIS)
The limit of the ratio of successive terms of an formula_6-nacci series tends to a root of the equation formula_66 (OEIS: , OEIS: , OEIS: ).
An alternate recursive formula for the limit of ratio formula_67 of two consecutive formula_6-nacci numbers can be expressed as
formula_68.
The special case formula_39 is the traditional Fibonacci series yielding the golden section formula_69.
The above formulas for the ratio hold even for formula_6-nacci series generated from arbitrary numbers. The limit of this ratio is 2 as formula_6 increases. An "infinacci" sequence, if one could be described, would after an infinite number of zeroes yield the sequence
[..., 0, 0, 1,] 1, 2, 4, 8, 16, 32, …
which are simply the powers of two.
The limit of the ratio for any formula_70 is the positive root formula_67 of the characteristic equation
formula_71
The root formula_67 is in the interval formula_72. The negative root of the characteristic equation is in the interval (−1, 0) when formula_6 is even. This root and each complex root of the characteristic equation has modulus formula_73.
A series for the positive root formula_67 for any formula_70 is
formula_74
There is no solution of the characteristic equation in terms of radicals when 5 ≤ "n" ≤ 11.
The kth element of the n-nacci sequence is given by
formula_75
where formula_52 denotes the nearest integer function and formula_67 is the formula_6-nacci constant, which is the root of formula_66 nearest to 2.
A coin-tossing problem is related to the formula_6-nacci sequence. The probability that no formula_6 consecutive tails will occur in formula_45 tosses of an idealized coin is formula_76.
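This probability statement can be checked directly by enumerating all coin sequences for small m. The brute-force Python sketch below uses the indexing of the nearest-integer formula above, in which the first two n-nacci numbers are both 1.

```python
from itertools import product

def f_n(order: int, k: int) -> int:
    """k-th order-n Fibonacci number in the convention F_1 = F_2 = 1 (leading zeros dropped)."""
    seq = [0] * (order - 1) + [1]
    while len(seq) < k + order:
        seq.append(sum(seq[-order:]))
    return seq[k + order - 2]

def no_run_probability(n: int, m: int) -> float:
    """Fraction of the 2**m head/tail sequences containing no run of n consecutive tails."""
    good = sum(1 for t in product("HT", repeat=m) if "T" * n not in "".join(t))
    return good / 2 ** m

for n in (2, 3):
    m = 10
    print(no_run_probability(n, m), f_n(n, m + 2) / 2 ** m)   # the two values agree
```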
Fibonacci word.
In analogy to its numerical counterpart, the Fibonacci word is defined by:
formula_77
where formula_78 denotes the concatenation of two strings. The sequence of Fibonacci strings starts:
b, a, ab, aba, abaab, abaababa, abaababaabaab, … (sequence in the OEIS)
The length of each Fibonacci string is a Fibonacci number, and similarly there exists a corresponding Fibonacci string for each Fibonacci number.
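A direct Python transcription of this definition:

```python
def fibonacci_word(n: int) -> str:
    """F(0) = 'b', F(1) = 'a', F(n) = F(n-1) + F(n-2) (string concatenation)."""
    previous, current = "b", "a"        # F(0), F(1)
    if n == 0:
        return previous
    for _ in range(n - 1):
        previous, current = current, current + previous
    return current

print([fibonacci_word(n) for n in range(6)])        # ['b', 'a', 'ab', 'aba', 'abaab', 'abaababa']
print([len(fibonacci_word(n)) for n in range(8)])   # 1, 1, 2, 3, 5, 8, 13, 21 (Fibonacci lengths)
```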
Fibonacci strings appear as inputs for the worst case in some computer algorithms.
If "a" and "b" represent two different materials or atomic bond lengths, the structure corresponding to a Fibonacci string is a Fibonacci quasicrystal, an aperiodic quasicrystal structure with unusual spectral properties.
Convolved Fibonacci sequences.
A convolved Fibonacci sequence is obtained by applying a convolution operation to the Fibonacci sequence one or more times. Specifically, define
formula_79
and
formula_80
The first few sequences are
formula_81: 0, 0, 1, 2, 5, 10, 20, 38, 71, … (sequence in the OEIS).
formula_82: 0, 0, 0, 1, 3, 9, 22, 51, 111, … (sequence in the OEIS).
formula_83: 0, 0, 0, 0, 1, 4, 14, 40, 105, … (sequence in the OEIS).
The sequences can be calculated using the recurrence
formula_84
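A Python sketch that builds the convolved sequences directly from the defining convolution:

```python
def fibonacci(length: int) -> list[int]:
    seq = [0, 1]
    while len(seq) < length:
        seq.append(seq[-1] + seq[-2])
    return seq[:length]

def convolved(r: int, length: int) -> list[int]:
    """F^(0) is the Fibonacci sequence; F^(r+1)_n = sum_i F_i * F^(r)_(n-i)."""
    fib = fibonacci(length)
    seq = fib
    for _ in range(r):
        seq = [sum(fib[i] * seq[n - i] for i in range(n + 1)) for n in range(length)]
    return seq

print(convolved(1, 9))   # 0, 0, 1, 2, 5, 10, 20, 38, 71
print(convolved(2, 9))   # 0, 0, 0, 1, 3, 9, 22, 51, 111
```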
The generating function of the formula_67th convolution is
formula_85
The sequences are related to the sequence of Fibonacci polynomials by the relation
formula_86
where formula_87 is the formula_67th derivative of formula_88. Equivalently, formula_89 is the coefficient of formula_90 when formula_91 is expanded in powers of formula_92.
The first convolution, formula_93 can be written in terms of the Fibonacci and Lucas numbers as
formula_94
and follows the recurrence
formula_95
Similar expressions can be found for formula_96 with increasing complexity as formula_67 increases. The numbers formula_93 are the row sums of Hosoya's triangle.
As with Fibonacci numbers, there are several combinatorial interpretations of these sequences. For example formula_93 is the number of ways formula_97 can be written as an ordered sum involving only 0, 1, and 2 with 0 used exactly once. In particular formula_98 and 2 can be written 0 + 1 + 1, 0 + 2, 1 + 0 + 1, 1 + 1 + 0, 2 + 0.
Other generalizations.
The Fibonacci polynomials are another generalization of Fibonacci numbers.
The Padovan sequence is generated by the recurrence formula_99.
The Narayana's cows sequence is generated by the recurrence formula_100.
A random Fibonacci sequence can be defined by tossing a coin for each position formula_6 of the sequence and taking formula_101 if it lands heads and formula_102 if it lands tails. Work by Furstenberg and Kesten guarantees that this sequence almost surely grows exponentially at a constant rate: the constant is independent of the coin tosses and was computed in 1999 by Divakar Viswanath. It is now known as Viswanath's constant.
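A quick Monte Carlo sketch of this construction in Python; for large n, the nth root of |F(n)| should come out close to Viswanath's constant 1.1319882... (the run length and seed below are arbitrary choices).

```python
import math
import random

def random_fibonacci_growth(steps: int, seed: int = 12345) -> float:
    """Estimate |F(n)|**(1/n) for the random recurrence F(n) = F(n-1) +/- F(n-2)."""
    rng = random.Random(seed)
    a, b = 1, 1
    for _ in range(steps):
        a, b = b, b + a if rng.random() < 0.5 else b - a
    return math.exp(math.log(abs(b)) / steps) if b else float("nan")

print(random_fibonacci_growth(100_000))   # should land close to Viswanath's constant
```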
A repfigit, or Keith number, is an integer such that, when its digits start a Fibonacci sequence with that number of digits, the original number is eventually reached. An example is 47, because the Fibonacci sequence starting with 4 and 7 (4, 7, 11, 18, 29, 47) reaches 47. A repfigit can be a tribonacci sequence if there are 3 digits in the number, a tetranacci number if the number has four digits, etc. The first few repfigits are:
14, 19, 28, 47, 61, 75, 197, 742, 1104, 1537, 2208, 2580, 3684, 4788, 7385, 7647, 7909, … (sequence in the OEIS)
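A brute-force repfigit test following this description: seed the sequence with the digits of the candidate and iterate until the running term reaches or passes it (a Python sketch).

```python
def is_repfigit(n: int) -> bool:
    """A Keith number reappears in the sequence seeded by its own digits."""
    digits = [int(d) for d in str(n)]
    if len(digits) < 2:
        return False
    terms = digits[:]
    while terms[-1] < n:
        terms.append(sum(terms[-len(digits):]))
    return terms[-1] == n

print([k for k in range(10, 1000) if is_repfigit(k)])   # 14, 19, 28, 47, 61, 75, 197, 742
```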
Since the set of sequences satisfying the relation formula_103 is closed under termwise addition and under termwise multiplication by a constant, it can be viewed as a vector space. Any such sequence is uniquely determined by a choice of two elements, so the vector space is two-dimensional. If we abbreviate such a sequence as formula_104, the Fibonacci sequence formula_105 and the shifted Fibonacci sequence formula_106 are seen to form a canonical basis for this space, yielding the identity:
formula_107
for all such sequences S. For example, if S is the Lucas sequence 2, 1, 3, 4, 7, 11, ..., then we obtain
formula_108.
N-generated Fibonacci sequence.
We can define the N-generated Fibonacci sequence (where N is a positive rational number): if
formula_109
where pr is the rth prime, then we define
formula_110
If formula_111, then formula_112, and if formula_113, then formula_114.
Semi-Fibonacci sequence.
The semi-Fibonacci sequence (sequence in the OEIS) is defined via the same recursion for odd-indexed terms formula_115 and formula_116, but for even indices formula_117, formula_118. The bisection of odd-indexed terms formula_119 therefore satisfies formula_120 and is strictly increasing. It yields the set of the "semi-Fibonacci numbers"
1, 2, 3, 5, 6, 9, 11, 16, 17, 23, 26, 35, 37, 48, 53, 69, 70, 87, 93, 116, 119, 145, 154, ... (sequence in the OEIS)
which occur as formula_121
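A direct Python implementation of this recursion:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def semi_fib(n: int) -> int:
    """a(1) = 1; a(2k) = a(k); a(2k + 1) = a(2k) + a(2k - 1)."""
    if n == 1:
        return 1
    if n % 2 == 0:
        return semi_fib(n // 2)
    return semi_fib(n - 1) + semi_fib(n - 2)

print([semi_fib(n) for n in range(1, 17)])
# 1, 1, 2, 1, 3, 2, 5, 1, 6, 3, 9, 2, 11, 5, 16, 1
print([semi_fib(2 * n - 1) for n in range(1, 13)])
# odd-indexed bisection: 1, 2, 3, 5, 6, 9, 11, 16, 17, 23, 26, 35
```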
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_n =\n\\begin{cases}\n0 & n = 0 \\\\\n1 & n = 1 \\\\\nF_{n - 1} + F_{n - 2} & n > 1\n\\end{cases}"
},
{
"math_id": 1,
"text": "F_{n-2} = F_n - F_{n-1}"
},
{
"math_id": 2,
"text": "F_{-n} = (-1)^{n + 1} F_n"
},
{
"math_id": 3,
"text": "F_n = \\frac{\\varphi^n - (-\\varphi)^{-n}}{\\sqrt{5}}."
},
{
"math_id": 4,
"text": "\\operatorname{Fe}(x) = \\frac{\\varphi^x - \\varphi^{-x}}{\\sqrt{5}}"
},
{
"math_id": 5,
"text": "\\operatorname{Fe}(n) = F_n"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "\\operatorname{Fo}(x) = \\frac{\\varphi^x + \\varphi^{-x}}{\\sqrt{5}}"
},
{
"math_id": 8,
"text": "\\operatorname{Fo}(n) = F_n"
},
{
"math_id": 9,
"text": "\\operatorname{Fib}(x) = \\frac{\\varphi^x - \\cos(x \\pi)\\varphi^{-x}}{\\sqrt{5}}"
},
{
"math_id": 10,
"text": "\\operatorname{Fib}(n) = F_n"
},
{
"math_id": 11,
"text": "\\operatorname{Fib}(z + 2) = \\operatorname{Fib}(z + 1) + \\operatorname{Fib}(z)"
},
{
"math_id": 12,
"text": "z"
},
{
"math_id": 13,
"text": "\\operatorname{Fib}(3+4i) \\approx -5248.5 - 14195.9 i"
},
{
"math_id": 14,
"text": "g"
},
{
"math_id": 15,
"text": "g(n + 2) = g(n) + g(n + 1)"
},
{
"math_id": 16,
"text": "g(n) = F(n) g(1) + F(n - 1) g(0)"
},
{
"math_id": 17,
"text": "F(n)"
},
{
"math_id": 18,
"text": "F(n - 1)"
},
{
"math_id": 19,
"text": "\\mathbb{Z}"
},
{
"math_id": 20,
"text": "g(n) = F(n)g(1) + F(n-1)g(0) = g(1)\\frac{\\varphi^n-(-\\varphi)^{-n}}{\\sqrt 5}+g(0)\\frac{\\varphi^{n-1}-(-\\varphi)^{1-n}}{\\sqrt 5},"
},
{
"math_id": 21,
"text": "\\varphi"
},
{
"math_id": 22,
"text": "(-\\varphi)^{-1}"
},
{
"math_id": 23,
"text": "a\\varphi^n+b(-\\varphi)^{-n},"
},
{
"math_id": 24,
"text": "a = 0"
},
{
"math_id": 25,
"text": "b = 0"
},
{
"math_id": 26,
"text": "a = b = 1"
},
{
"math_id": 27,
"text": "L_n = \\varphi^n + (-\\varphi)^{- n}."
},
{
"math_id": 28,
"text": "L_1 = 1"
},
{
"math_id": 29,
"text": "L_2 = 3"
},
{
"math_id": 30,
"text": "\\begin{align}\n\\varphi^n &= \\left(\\frac{1+\\sqrt{5}}{2}\\right)^{\\!n} = \\frac{L(n)+F(n)\\sqrt{5}}{2}, \\\\\nL(n) &= F(n-1) + F(n+1).\n\\end{align}"
},
{
"math_id": 31,
"text": "\\begin{align}\nU(0) &= 0 \\\\\nU(1) &= 1 \\\\\nU(n + 2) &= P U(n + 1) - Q U(n),\n\\end{align}"
},
{
"math_id": 32,
"text": "P = 1"
},
{
"math_id": 33,
"text": "Q = -1"
},
{
"math_id": 34,
"text": "V(0) = 2"
},
{
"math_id": 35,
"text": "V(1) = P"
},
{
"math_id": 36,
"text": "x^2 - nx - 1 = 0"
},
{
"math_id": 37,
"text": "n = 1"
},
{
"math_id": 38,
"text": "\\frac{1 + \\sqrt{5}}{2}"
},
{
"math_id": 39,
"text": "n = 2"
},
{
"math_id": 40,
"text": "1 + \\sqrt{2}"
},
{
"math_id": 41,
"text": "\\frac{n + \\sqrt{n^2 + 4}}{2}"
},
{
"math_id": 42,
"text": "U(n)"
},
{
"math_id": 43,
"text": "n = 3"
},
{
"math_id": 44,
"text": "n = 4"
},
{
"math_id": 45,
"text": "m"
},
{
"math_id": 46,
"text": " \\frac{1+\\sqrt[3]{19+3\\sqrt{33}}+\\sqrt[3]{19-3\\sqrt{33}}}{3} = \\frac{1+4\\cosh\\left(\\frac{1}{3}\\cosh^{-1}\\left(2+\\frac{3}{8}\\right)\\right)}{3} \\approx 1.839286755214161,"
},
{
"math_id": 47,
"text": "x^3 - x^2 - x - 1 = 0"
},
{
"math_id": 48,
"text": "x + x^{-3} = 2"
},
{
"math_id": 49,
"text": "\\xi^3 + \\xi^2 + \\xi = 1"
},
{
"math_id": 50,
"text": "\\xi = \\frac{\\sqrt[3]{17+3\\sqrt{33}} - \\sqrt[3]{-17+3\\sqrt{33}} - 1}{3} = \\frac{3}{1 + \\sqrt[3]{19 + 3\\sqrt{33}} + \\sqrt[3]{19-3\\sqrt{33}}} \\approx 0.543689012."
},
{
"math_id": 51,
"text": "T(n) = \\left\\lfloor 3b\\, \\frac{\\left(\\frac{1}{3} \\left( a_{+} + a_{-} + 1\\right)\\right)^n}{b^2-2b+4} \\right\\rceil"
},
{
"math_id": 52,
"text": "\\lfloor \\cdot \\rceil"
},
{
"math_id": 53,
"text": "\\begin{align}\na_{\\pm} &= \\sqrt[3]{19 \\pm 3 \\sqrt{33}}\\,, \\\\\nb &= \\sqrt[3]{586 + 102 \\sqrt{33}}\\,.\n\\end{align}"
},
{
"math_id": 54,
"text": "x^4 - x^3 - x^2 - x - 1 = 0"
},
{
"math_id": 55,
"text": "x + x^{-4} = 2"
},
{
"math_id": 56,
"text": "x = \\frac{1}{4}\\!\\left(1+\\sqrt{u}+\\sqrt{11-u+\\frac{26}{\\sqrt{u}}}\\,\\right)"
},
{
"math_id": 57,
"text": "u = \\frac{1}{3}\\left(11-56\\sqrt[3]{\\frac{2}{-65+3\\sqrt{1689}}}+2\\cdot2^{\\frac{2}{3}}\\sqrt[3]{-65+3\\sqrt{1689}}\\right) "
},
{
"math_id": 58,
"text": "u"
},
{
"math_id": 59,
"text": "u^3-11u^2+115u-169."
},
{
"math_id": 60,
"text": "x^5 - x^4 - x^3 - x^2 - x - 1 = 0"
},
{
"math_id": 61,
"text": "x + x^{-5} = 2"
},
{
"math_id": 62,
"text": "x^6 - x^5 - x^4 - x^3 - x^2 - x - 1 = 0"
},
{
"math_id": 63,
"text": "x + x^{-6} = 2"
},
{
"math_id": 64,
"text": "x^7 - x^6 - x^5 - x^4 - x^3 - x^2 - x - 1 = 0"
},
{
"math_id": 65,
"text": "x + x^{-7} = 2"
},
{
"math_id": 66,
"text": "x + x^{-n} = 2"
},
{
"math_id": 67,
"text": "r"
},
{
"math_id": 68,
"text": "r=\\sum_{k=0}^{n-1}r^{-k}"
},
{
"math_id": 69,
"text": "\\varphi = 1 + \\frac{1}{\\varphi}"
},
{
"math_id": 70,
"text": "n > 0"
},
{
"math_id": 71,
"text": "x^n - \\sum_{i = 0}^{n-1} x^i = 0."
},
{
"math_id": 72,
"text": "2(1 - 2^{-n}) < r < 2"
},
{
"math_id": 73,
"text": "3^{-n} < |r| < 1"
},
{
"math_id": 74,
"text": "2 - 2\\sum_{i > 0} \\frac{1}{i}\\binom{(n+1)i -2}{i-1}\\frac{1}{2^{(n+1)i}}."
},
{
"math_id": 75,
"text": "F_k^{(n)} = \\left\\lfloor \\frac{r^{k-1} (r-1)}{(n+1)r-2n}\\right\\rceil\\!,"
},
{
"math_id": 76,
"text": "\\frac{1}{2^m}F^{(n)}_{m + 2}"
},
{
"math_id": 77,
"text": " F_n := F(n):= \\begin{cases}\n\\text{b} & n = 0; \\\\\n\\text{a} & n = 1; \\\\\nF(n-1)+F(n-2) & n > 1. \\\\\n\\end{cases}"
},
{
"math_id": 78,
"text": "+"
},
{
"math_id": 79,
"text": "F_n^{(0)}=F_n"
},
{
"math_id": 80,
"text": "F_n^{(r+1)}=\\sum_{i=0}^n F_i F_{n-i}^{(r)}"
},
{
"math_id": 81,
"text": "r = 1"
},
{
"math_id": 82,
"text": "r = 2"
},
{
"math_id": 83,
"text": "r = 3"
},
{
"math_id": 84,
"text": "F_{n+1}^{(r+1)}=F_n^{(r+1)}+F_{n-1}^{(r+1)}+F_n^{(r)}"
},
{
"math_id": 85,
"text": "s^{(r)}(x)=\\sum_{k=0}^{\\infty} F^{(r)}_n x^n=\\left(\\frac{x}{1-x-x^2}\\right)^r."
},
{
"math_id": 86,
"text": "F_n^{(r)}=r! F_n^{(r)}(1)"
},
{
"math_id": 87,
"text": "F^{(r)}_n(x)"
},
{
"math_id": 88,
"text": "F_n(x)"
},
{
"math_id": 89,
"text": "F^{(r)}_n"
},
{
"math_id": 90,
"text": "(x - 1)^r"
},
{
"math_id": 91,
"text": "F_x(x)"
},
{
"math_id": 92,
"text": "(x - 1)"
},
{
"math_id": 93,
"text": "F^{(1)}_n"
},
{
"math_id": 94,
"text": "F_n^{(1)}=\\frac{nL_n-F_n}{5}"
},
{
"math_id": 95,
"text": "F_{n+1}^{(1)}=2F_n^{(1)}+F_{n-1}^{(1)}-2F_{n-2}^{(1)}-F_{n-3}^{(1)}."
},
{
"math_id": 96,
"text": "r > 1"
},
{
"math_id": 97,
"text": "n - 2"
},
{
"math_id": 98,
"text": "F^{(1)}_4 = 5"
},
{
"math_id": 99,
"text": "P(n) = P(n - 2) + P(n - 3)"
},
{
"math_id": 100,
"text": "N(k) = N(k - 1) + N(k - 3)"
},
{
"math_id": 101,
"text": "F(n) = F(n - 1) + F(n - 2)"
},
{
"math_id": 102,
"text": "F(n) = F(n - 1) - F(n - 2)"
},
{
"math_id": 103,
"text": "S(n) = S(n - 1) + S(n - 2)"
},
{
"math_id": 104,
"text": "(S(0), S(1))"
},
{
"math_id": 105,
"text": "F(n) = (0, 1)"
},
{
"math_id": 106,
"text": "F(n - 1) = (1, 0)"
},
{
"math_id": 107,
"text": "S(n) = S(0) F(n-1) + S(1) F(n)"
},
{
"math_id": 108,
"text": "L(n) = 2F(n-1) + F(n)"
},
{
"math_id": 109,
"text": "N = 2^{a_1}\\cdot 3^{a_2}\\cdot 5^{a_3}\\cdot 7^{a_4}\\cdot 11^{a_5}\\cdot 13^{a_6}\\cdot \\ldots \\cdot p_r^{a_r},"
},
{
"math_id": 110,
"text": "F_N(n) = a_1F_N(n-1) + a_2F_N(n-2) + a_3F_N(n-3) + a_4F_N(n-4) + a_5F_N(n-5) + ..."
},
{
"math_id": 111,
"text": "n = r - 1"
},
{
"math_id": 112,
"text": "F_N(n) = 1"
},
{
"math_id": 113,
"text": "n < r - 1"
},
{
"math_id": 114,
"text": "F_N(n) = 0"
},
{
"math_id": 115,
"text": " a(2n+1) = a(2n) + a(2n-1)"
},
{
"math_id": 116,
"text": "a(1) = 1"
},
{
"math_id": 117,
"text": " a(2n) = a(n)"
},
{
"math_id": 118,
"text": "n \\ge 1"
},
{
"math_id": 119,
"text": "s(n) = a(2n-1)"
},
{
"math_id": 120,
"text": "s(n+1) = s(n) + a(n)"
},
{
"math_id": 121,
"text": "s(n) = a(2^k(2n-1)), k=0,1,...\\, ."
}
] | https://en.wikipedia.org/wiki?curid=8006956 |
8007 | Diameter | Straight line segment that passes through the centre of a circle
In geometry, a diameter of a circle is any straight line segment that passes through the centre of the circle and whose endpoints lie on the circle. It can also be defined as the longest chord of the circle. Both definitions are also valid for the diameter of a sphere.
In more modern usage, the length formula_0 of a diameter is also called the diameter. In this sense one speaks of the diameter rather than a diameter (which refers to the line segment itself), because all diameters of a circle or sphere have the same length, this being twice the radius formula_1
formula_2
For a convex shape in the plane, the diameter is defined to be the largest distance that can be formed between two opposite parallel lines tangent to its boundary, and the width is often defined to be the smallest such distance. Both quantities can be calculated efficiently using rotating calipers. For a curve of constant width such as the Reuleaux triangle, the width and diameter are the same because all such pairs of parallel tangent lines have the same distance.
For an ellipse, the standard terminology is different. A diameter of an ellipse is any chord passing through the centre of the ellipse. For example, conjugate diameters have the property that a tangent line to the ellipse at the endpoint of one diameter is parallel to the conjugate diameter. The longest diameter is called the major axis.
The word "diameter" is derived from (), "diameter of a circle", from (), "across, through" and (), "measure". It is often abbreviated formula_3 or formula_4
Generalizations.
The definitions given above are only valid for circles, spheres and convex shapes. However, they are special cases of a more general definition that is valid for any kind of formula_5-dimensional (convex or non-convex) object, such as a hypercube or a set of scattered points. The diameter of a subset of a metric space is the least upper bound of the set of all distances between pairs of points in the subset. Explicitly, if formula_6 is the subset and if formula_7 is the metric, the diameter is
formula_8
If the metric formula_7 is viewed here as having codomain formula_9 (the set of all real numbers), this implies that the diameter of the empty set (the case formula_10) equals formula_11 (negative infinity). Some authors prefer to treat the empty set as a special case, assigning it a diameter of formula_12 which corresponds to taking the codomain of formula_7 to be the set of nonnegative reals.
For any solid object or set of scattered points in formula_5-dimensional Euclidean space, the diameter of the object or set is the same as the diameter of its convex hull. In medical terminology concerning a lesion or in geology concerning a rock, the diameter of an object is the least upper bound of the set of all distances between pairs of points in the object.
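For a finite set of scattered points the supremum in this definition is simply a maximum over all pairs, so it can be computed by brute force; the rotating-calipers method mentioned above is the faster alternative for convex shapes in the plane. A Python sketch:

```python
from itertools import combinations
from math import dist

def diameter(points) -> float:
    """Largest Euclidean distance between any two points of a finite set (0 if fewer than 2 points)."""
    return max((dist(p, q) for p, q in combinations(points, 2)), default=0.0)

unit_square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(diameter(unit_square))   # sqrt(2), realised by a diagonal of the square
```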
In differential geometry, the diameter is an important global Riemannian invariant.
In planar geometry, a diameter of a conic section is typically defined as any chord which passes through the conic's centre; such diameters are not necessarily of uniform length, except in the case of the circle, which has eccentricity formula_13
Symbol.
The symbol or variable for diameter, ⌀, is sometimes used in technical drawings or specifications as a prefix or suffix for a number (e.g. "⌀ 55 mm"), indicating that it represents diameter. Photographic filter thread sizes are often denoted in this way.
The symbol has a Unicode code point at , in the Miscellaneous Technical set, and should not be confused with several other Unicode characters that resemble it but have unrelated meanings. It has the compose sequence .
Diameter vs. radius.
The diameter of a circle is exactly twice its radius. However, this is true only for a circle, and only in the Euclidean metric. Jung's theorem provides more general inequalities relating the diameter to the radius.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "r."
},
{
"math_id": 2,
"text": "d = 2r \\qquad\\text{or equivalently}\\qquad r = \\frac{d}{2}."
},
{
"math_id": 3,
"text": "\\text{DIA}, \\text{dia}, d,"
},
{
"math_id": 4,
"text": "\\varnothing."
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "S"
},
{
"math_id": 7,
"text": "\\rho"
},
{
"math_id": 8,
"text": "\\operatorname{diam}(S) = \\sup_{x, y \\in S} \\rho(x, y)."
},
{
"math_id": 9,
"text": "\\R"
},
{
"math_id": 10,
"text": "S = \\varnothing"
},
{
"math_id": 11,
"text": "- \\infty"
},
{
"math_id": 12,
"text": "0,"
},
{
"math_id": 13,
"text": "e = 0."
}
] | https://en.wikipedia.org/wiki?curid=8007 |
8008294 | Random phase approximation | The random phase approximation (RPA) is an approximation method in condensed matter physics and in nuclear physics. It was first introduced by David Bohm and David Pines as an important result in a series of seminal papers of 1952 and 1953. For decades physicists had been trying to incorporate the effect of microscopic quantum mechanical interactions between electrons in the theory of matter. Bohm and Pines' RPA accounts for the weak screened Coulomb interaction and is commonly used for describing the dynamic linear electronic response of electron systems. It was further developed to the relativistic form (RRPA) by solving the Dirac equation.
In the RPA, electrons are assumed to respond only to the total electric potential "V"(r) which is the sum of the external perturbing potential "V"ext(r) and a screening potential "V"sc(r). The external perturbing potential is assumed to oscillate at a single frequency "ω", so that the model yields via a self-consistent field (SCF) method a dynamic dielectric function denoted by εRPA(k, "ω").
The contribution to the dielectric function from the total electric potential is assumed to "average out", so that only the potential at wave vector k contributes. This is what is meant by the random phase approximation. The resulting dielectric function, also called the Lindhard dielectric function, correctly predicts a number of properties of the electron gas, including plasmons.
The RPA was criticized in the late 1950s for overcounting the degrees of freedom and the call for justification led to intense work among theoretical physicists. In a seminal paper Murray Gell-Mann and Keith Brueckner showed that the RPA can be derived from a summation of leading-order chain Feynman diagrams in a dense electron gas.
The consistency in these results became an important justification and motivated a very strong growth in theoretical physics in the late 50s and 60s.
Applications.
Ground state of an interacting bosonic system.
The RPA vacuum formula_0 for a bosonic system can be expressed in terms of non-correlated bosonic vacuum formula_1 and original boson excitations formula_2
formula_3
where "Z" is a symmetric matrix with formula_4 and
formula_5
The normalization can be calculated by
formula_6
where formula_7 is the singular value decomposition of formula_8.
formula_9
formula_10
formula_11
formula_12
the connection between new and old excitations is given by
formula_13.
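The matrix relations above can be checked numerically. The following sketch (an illustration added here, not part of the original treatment; it assumes NumPy and a real symmetric "Z" with spectral norm below one) builds the two coefficient matrices appearing in the last formula and verifies that the new excitations preserve the bosonic commutation relations, i.e. that the transformation satisfies the Bogoliubov condition UUᵀ − VVᵀ = 1.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random real symmetric Z with spectral norm < 1 (the condition |Z| <= 1 above).
A = rng.standard_normal((4, 4))
S = A + A.T
Z = 0.3 * S / np.linalg.norm(S, 2)

# U = (1 - Z^2)^(-1/2) and V = U Z, the two coefficient matrices in the last formula.
w, Q = np.linalg.eigh(np.eye(4) - Z @ Z)
U = Q @ np.diag(w ** -0.5) @ Q.T
V = U @ Z

# Bogoliubov condition: the new operators obey the same commutators as the old ones.
print(np.allclose(U @ U.T - V @ V.T, np.eye(4)))      # True

# The singular values of Z give det(1 - |Z|^2), which enters the normalization.
print(np.prod(1 - np.linalg.svd(Z, compute_uv=False) ** 2),
      np.linalg.det(np.eye(4) - Z @ Z))               # equal
```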
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left|\\mathrm{RPA}\\right\\rangle"
},
{
"math_id": 1,
"text": "\\left|\\mathrm{MFT}\\right\\rangle"
},
{
"math_id": 2,
"text": "\\mathbf{a}_{i}^{\\dagger}"
},
{
"math_id": 3,
"text": "\\left|\\mathrm{RPA}\\right\\rangle=\\mathcal{N}\\mathbf{e}^{Z_{ij}\\mathbf{a}_{i}^{\\dagger}\\mathbf{a}_{j}^{\\dagger}/2}\\left|\\mathrm{MFT}\\right\\rangle"
},
{
"math_id": 4,
"text": "|Z|\\leq 1"
},
{
"math_id": 5,
"text": "\\mathcal{N}= \\frac{\\left\\langle \\mathrm{MFT}\\right|\\left.\\mathrm{RPA}\\right\\rangle}{\\left\\langle \\mathrm{MFT}\\right|\\left.\\mathrm{MFT}\\right\\rangle}"
},
{
"math_id": 6,
"text": "\\langle \n\\mathrm{RPA}|\\mathrm{RPA}\\rangle=\n\\mathcal{N}^2 \\langle \\mathrm{MFT}|\n\\mathbf{e}^{z_{i}(\\tilde{\\mathbf{q}}_{i})^2/2}\n\\mathbf{e}^{z_{j}(\\tilde{\\mathbf{q}}^{\\dagger}_{j})^2/2}\n| \\mathrm{MFT}\\rangle=1\n"
},
{
"math_id": 7,
"text": "Z_{ij}=(X^{\\mathrm{t}})_{i}^{k} z_{k} X^{k}_{j}"
},
{
"math_id": 8,
"text": "Z_{ij}"
},
{
"math_id": 9,
"text": "\\tilde{\\mathbf{q}}^{i}=(X^{\\dagger})^{i}_{j}\\mathbf{a}^{j}"
},
{
"math_id": 10,
"text": "\\mathcal{N}^{-2}=\n\\sum_{m_{i}}\\sum_{n_{j}} \\frac{(z_{i}/2)^{m_{i}}(z_{j}/2)^{n_{j}}}{m!n!}\n\\langle \\mathrm{MFT}|\n\\prod_{i\\,j}\n(\\tilde{\\mathbf{q}}_{i})^{2 m_{i}}\n(\\tilde{\\mathbf{q}}^{\\dagger}_{j})^{2 n_{j}}\n| \\mathrm{MFT}\\rangle\n"
},
{
"math_id": 11,
"text": "=\\prod_{i}\n\\sum_{m_{i}} (z_{i}/2)^{2 m_{i}} \\frac{(2 m_{i})!}{m_{i}!^2}=\n"
},
{
"math_id": 12,
"text": "\n\\prod_{i}\\sum_{m_{i}} (z_{i})^{2 m_{i}} {1/2 \\choose m_{i}}=\\sqrt{\\det(1-|Z|^2)}\n"
},
{
"math_id": 13,
"text": "\\tilde{\\mathbf{a}}_{i}=\\left(\\frac{1}{\\sqrt{1-Z^2}}\\right)_{ij}\\mathbf{a}_{j}+\n\\left(\\frac{1}{\\sqrt{1-Z^2}}Z\\right)_{ij}\\mathbf{a}^{\\dagger}_{j}"
}
] | https://en.wikipedia.org/wiki?curid=8008294 |
8008662 | Tafel equation | Equation relating the rate of an electrochemical reaction to the overpotential
The Tafel equation is an equation in electrochemical kinetics relating the rate of an electrochemical reaction to the overpotential. The Tafel equation was first deduced experimentally and was later shown to have a theoretical justification. The equation is named after Swiss chemist Julius Tafel. It describes how the electrical current through an electrode depends on the voltage difference between the electrode and the bulk electrolyte for a simple, unimolecular redox reaction.
formula_0
Where an electrochemical reaction occurs in two half reactions on separate electrodes, the Tafel equation is applied to each electrode separately. On a single electrode the Tafel equation can be stated as:
formula_13
where formula_1 is the overpotential, formula_2 is the "Tafel slope", formula_3 is the current density, and formula_4 is the exchange current density.
A verification and further explanation of this equation can be found here. The Tafel equation is an approximation of the Butler–Volmer equation in the case of formula_5. "[The Tafel equation] assumes that the concentrations at the electrode are practically equal to the concentrations in the bulk electrolyte, allowing the current to be expressed as a function of potential only. In other words, it assumes that the electrode mass transfer rate is much greater than the reaction rate, and that the reaction is dominated by the slower chemical reaction rate". Also, at a given electrode the Tafel equation assumes that the reverse half reaction rate is negligible compared to the forward reaction rate.
Overview of the terms.
The exchange current is the current at equilibrium, i.e. the rate at which oxidized and reduced species transfer electrons with the electrode. In other words, the exchange current density is the rate of reaction at the reversible potential (when the overpotential is zero by definition). At the reversible potential, the reaction is in equilibrium meaning that the forward and reverse reactions progress at the same rates. This rate is the exchange current density.
The Tafel slope is measured experimentally. It can, however, be shown theoretically that, when the dominant reaction mechanism involves the transfer of a single electron,
formula_6
where A is defined as
where formula_7 is the natural-to-decadic logarithm conversion factor, formula_8 is the Boltzmann constant, formula_9 is the absolute temperature, formula_10 is the elementary charge, formula_11 is the thermal voltage, and formula_12 is the charge transfer coefficient.
Equation in case of non-negligible electrode mass transfer.
In a more general case, following the treatments of Bard and Faulkner and of Newman and Thomas-Alyea, the current is expressed as a function not only of potential (as in the simple version), but of the given concentrations as well. The mass-transfer rate may be relatively small, but its only effect on the chemical reaction is through the altered (given) concentrations. In effect, the concentrations are a function of the potential as well. The Tafel equation can also be written as:
where
Demonstration.
As seen in equation (1),
formula_13
formula_14 so:
formula_15
formula_16 as seen in equation (2) and because formula_17.
formula_18 because formula_19
due to the electrode mass transfer formula_20, which finally yields equation (3).
Equation in case of low values of polarization.
Another equation is applicable at low values of polarization formula_21. In such a case, the dependence of current on polarization is usually linear (not logarithmic):
formula_22
This linear region is called "polarization resistance" due to its formal similarity to Ohm's law.
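As a numerical illustration of how the logarithmic and linear regimes fit together (a rough sketch only; the transfer coefficients, exchange current density and temperature below are arbitrary assumed values, and concentration effects are ignored), the script compares the full Butler–Volmer expression with the one-branch Tafel form and with the low-polarization linear form.

```python
import numpy as np

F, R, T = 96485.0, 8.314, 298.15   # Faraday constant, gas constant, temperature (K)
alpha_a = alpha_c = 0.5            # anodic / cathodic transfer coefficients (assumed)
i0 = 1e-3                          # exchange current density, A/m^2 (assumed)

def butler_volmer(eta):
    """Net current density vs. overpotential, concentrations held at bulk values."""
    return i0 * (np.exp(alpha_a * F * eta / (R * T)) - np.exp(-alpha_c * F * eta / (R * T)))

def tafel(eta):
    """Single-branch Tafel form, a good approximation at large positive overpotential."""
    return i0 * np.exp(alpha_a * F * eta / (R * T))

def linear(eta, n=1):
    """Low-polarization form i = i0*n*F*eta/(R*T); here n = alpha_a + alpha_c = 1."""
    return i0 * n * F * eta / (R * T)

for eta in (0.002, 0.01, 0.05, 0.2):
    print(f"eta = {eta:5.3f} V   BV = {butler_volmer(eta):.3e}"
          f"   Tafel = {tafel(eta):.3e}   linear = {linear(eta):.3e}")
```

For small overpotentials the linear form tracks the Butler–Volmer current, while beyond roughly 0.1 V the single-branch Tafel form does.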
Kinetics of corrosion.
The pace at which corrosion develops is determined by the kinetics of the reactions involved, hence the electrical double layer is critical.
Applying an overpotential to an electrode causes the reaction to move in one direction, away from equilibrium. Tafel's law determines the new rate, and as long as the reaction kinetics are under control, the overpotential is proportional to the log of the corrosion current.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "Ox + n e^- \\leftrightarrows Red"
},
{
"math_id": 1,
"text": "\\eta"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "i_0"
},
{
"math_id": 5,
"text": "| \\eta | > 0.1V"
},
{
"math_id": 6,
"text": " \\frac{\\lambda k_\\text{B} T}{e} < A "
},
{
"math_id": 7,
"text": "\\lambda=\\ln(10)=2.302\\ 585..."
},
{
"math_id": 8,
"text": "k_\\text{B}"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "e"
},
{
"math_id": 11,
"text": "V_T = k_\\text{B} T / e"
},
{
"math_id": 12,
"text": "\\alpha"
},
{
"math_id": 13,
"text": "\\eta=\\pm A\\cdot \\log_{10}\\left(\\frac{i}{i_0}\\right)"
},
{
"math_id": 14,
"text": "\\eta=\\pm A\\cdot \\frac{ \\ln \\left(\\frac{i}{i_0}\\right) }{\\ln(10)},"
},
{
"math_id": 15,
"text": "i=i_0 \\exp \\left( \\pm \\frac {\\ln(10) \\eta} {A} \\right)"
},
{
"math_id": 16,
"text": "i=i_0 \\exp \\left( \\pm \\alpha e \\frac {\\eta} {kT} \\right),"
},
{
"math_id": 17,
"text": "\\lambda = \\ln (10)"
},
{
"math_id": 18,
"text": "i=i_0 \\exp \\left( \\pm \\alpha F \\frac {\\eta} {RT} \\right)"
},
{
"math_id": 19,
"text": "\\frac {e}{k}=\\frac {e/Na}{k/Na} = \\frac {F}{R}"
},
{
"math_id": 20,
"text": "i_0=nkFC"
},
{
"math_id": 21,
"text": "\\vert \\eta \\vert \\simeq 0\\,\\mathrm{V}"
},
{
"math_id": 22,
"text": "i=i_0 \\frac {nF} {RT} \\Delta E"
}
] | https://en.wikipedia.org/wiki?curid=8008662 |
8009032 | Modal analysis using FEM | The goal of modal analysis in structural mechanics is to determine the natural mode shapes and frequencies of an object or structure during free vibration. It is common to use the finite element method (FEM) to perform this analysis because, like other calculations using the FEM, the object being analyzed can have arbitrary shape and the results of the
calculations are acceptable. The types of equations which arise from modal analysis are those seen in eigensystems. The physical interpretation of the eigenvalues and eigenvectors which come from solving the system are that
they represent the frequencies and corresponding mode shapes. Sometimes, the only desired modes are the lowest frequencies because they can be the most prominent modes at which the object will vibrate, dominating all the higher frequency
modes.
It is also possible to test a physical object to determine its natural frequencies and mode shapes. This is called an Experimental Modal Analysis. The results of the physical test can be used to calibrate a finite element model to determine if the underlying assumptions made were correct (for example, correct material properties and boundary conditions were used).
FEA eigensystems.
For the most basic problem involving a linear elastic material which obeys Hooke's Law,
the matrix equations take the form of a dynamic three-dimensional spring mass system.
The generalized equation of motion is given as:
formula_0
where formula_1 is the mass matrix,
formula_2 is the 2nd time derivative of the displacement
formula_3 (i.e., the acceleration), formula_4
is the velocity, formula_5 is a damping matrix,
formula_6 is the stiffness matrix, and formula_7
is the force vector. The general problem, with nonzero damping, is a quadratic eigenvalue problem. However, for vibrational modal analysis, the damping is generally ignored, leaving only the 1st and 3rd terms on the left hand side:
formula_8
This is the general form of the eigensystem encountered in structural
engineering using the FEM. To represent the free-vibration solutions of the structure, harmonic motion is assumed. This assumption means that formula_9
is taken to equal formula_10,
where formula_11 is an eigenvalue (with units of reciprocal time squared, e.g., formula_12).
Using this, the equation reduces to:
formula_13
In contrast, the equation for static problems is:
formula_14
which is expected when all terms having a time derivative are set to zero.
Comparison to linear algebra.
In linear algebra, it is more common to see the standard form of an eigensystem which is
expressed as:
formula_15
Both equations can be seen as the same because if the general equation is
multiplied through by the inverse of the mass,
formula_16,
it will take the form of the latter.
Because the lower modes are desired, solving the system typically involves the equivalent of multiplying through by the inverse of the stiffness, formula_17, a process called inverse iteration.
When this is done, the resulting eigenvalues, formula_18, relate to that of the original by:
formula_19
but the eigenvectors are the same.
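A minimal numerical sketch of the undamped eigenproblem (not tied to any particular finite element package; the three-mass, four-spring chain and its property values are assumptions made for illustration) can be written with SciPy, which solves the generalized symmetric eigenproblem directly. The eigenvalues it returns are the squared angular frequencies (equal to −λ in the sign convention used above), and the columns of the eigenvector matrix are the mode shapes.

```python
import numpy as np
from scipy.linalg import eigh

# Three equal masses in a chain, coupled by identical springs and fixed at both ends.
m, k = 2.0, 1000.0                      # kg and N/m (assumed values)
M = m * np.eye(3)
K = k * np.array([[ 2., -1.,  0.],
                  [-1.,  2., -1.],
                  [ 0., -1.,  2.]])

# Generalized symmetric eigenproblem  [K][U] = omega^2 [M][U]  (damping neglected).
omega_sq, modes = eigh(K, M)            # eigenvalues come back sorted: lowest modes first

print("natural frequencies (Hz):", np.round(np.sqrt(omega_sq) / (2 * np.pi), 3))
print("mode shapes (one per column):")
print(np.round(modes, 3))
```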
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n[M] [\\ddot U] +\n[C] [\\dot U] +\n[K] [U] =\n[F]\n"
},
{
"math_id": 1,
"text": " [M] "
},
{
"math_id": 2,
"text": " [\\ddot U] "
},
{
"math_id": 3,
"text": " [U] "
},
{
"math_id": 4,
"text": " [\\dot U] "
},
{
"math_id": 5,
"text": " [C] "
},
{
"math_id": 6,
"text": " [K] "
},
{
"math_id": 7,
"text": " [F] "
},
{
"math_id": 8,
"text": "\n[M] [\\ddot U] + [K] [U] = [0]\n"
},
{
"math_id": 9,
"text": "[\\ddot U]"
},
{
"math_id": 10,
"text": "\\lambda [U]"
},
{
"math_id": 11,
"text": "\\lambda"
},
{
"math_id": 12,
"text": "\\mathrm{s}^{-2}"
},
{
"math_id": 13,
"text": "[M][U] \\lambda + [K][U] = [0]"
},
{
"math_id": 14,
"text": " [K][U] = [F] "
},
{
"math_id": 15,
"text": "[A][x] = [x]\\lambda"
},
{
"math_id": 16,
"text": " [M]^{-1} "
},
{
"math_id": 17,
"text": " [K]^{-1} "
},
{
"math_id": 18,
"text": " \\mu "
},
{
"math_id": 19,
"text": "\n\\mu = \\frac{1}{\\lambda}\n"
}
] | https://en.wikipedia.org/wiki?curid=8009032 |
801135 | Conditional independence | Probability theory concept
In probability theory, conditional independence describes situations wherein an observation is irrelevant or redundant when evaluating the certainty of a hypothesis. Conditional independence is usually formulated in terms of conditional probability, as a special case where the probability of the hypothesis given the uninformative observation is equal to the probability without. If formula_0 is the hypothesis, and formula_1 and formula_2 are observations, conditional independence can be stated as an equality:
formula_3
where formula_4 is the probability of formula_0 given both formula_1 and formula_2. Since the probability of formula_0 given formula_2 is the same as the probability of formula_0 given both formula_1 and formula_2, this equality expresses that formula_1 contributes nothing to the certainty of formula_0. In this case, formula_0 and formula_1 are said to be conditionally independent given formula_2, written symbolically as: formula_5. In the language of causal equality notation, two functions formula_6 and formula_7 which both depend on a common variable formula_8 are described as conditionally independent using the notation formula_9, which is equivalent to the notation formula_10.
The concept of conditional independence is essential to graph-based theories of statistical inference, as it establishes a mathematical relation between a collection of conditional statements and a graphoid.
Conditional independence of events.
Let formula_0, formula_1, and formula_2 be events. formula_0 and formula_1 are said to be conditionally independent given formula_2 if and only if formula_11 and:
formula_12
This property is often written: formula_13, which should be read formula_14.
Equivalently, conditional independence may be stated as:
formula_15
where formula_16 is the joint probability of formula_0 and formula_1 given formula_2. This alternate formulation states that formula_0 and formula_1 are independent events, given formula_2.
It demonstrates that formula_13 is equivalent to formula_17.
formula_18
iff formula_19 (definition of conditional probability)
iff formula_20 (multiply both sides by formula_21)
iff formula_22 (divide both sides by formula_23)
iff formula_12 (definition of conditional probability) formula_24
Examples.
Coloured boxes.
Each cell represents a possible outcome. The events formula_25, formula_26 and formula_27 are represented by the areas shaded red, blue and yellow respectively. The overlap between the events formula_25 and formula_26 is shaded purple.
The probabilities of these events are shaded areas with respect to the total area. In both examples formula_25 and formula_26 are conditionally independent given formula_27 because:
formula_28
but not conditionally independent given formula_29 because:
formula_30
Proximity and delays.
Let A and B be the events that person A and person B, respectively, will be home in time for dinner, where both people are randomly sampled from the entire world. Events A and B can be assumed to be independent, i.e. knowledge that A is late has minimal to no effect on the probability that B will be late. However, if a third event is introduced, namely that person A and person B live in the same neighborhood, the two events are not conditionally independent given that third event: traffic conditions and weather-related events that might delay person A might delay person B as well. Given the third event and knowledge that person A was late, the probability that person B will be late does meaningfully change.
Dice rolling.
Conditional independence depends on the nature of the third event. If you roll two dice, one may assume that the two dice behave independently of each other. Looking at the result of one die will not tell you about the result of the second die. (That is, the two dice are independent.) If, however, the first die's result is a 3, and someone tells you about a third event - that the sum of the two results is even - then this extra unit of information restricts the options for the second result to an odd number. In other words, two events can be independent, but NOT conditionally independent.
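This can be verified by enumerating all 36 equally likely outcomes (a small illustrative script, not part of the original example): the distribution of the second die is unaffected by the first die alone, but changes once the parity of the sum is also known.

```python
from fractions import Fraction
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))            # the 36 equally likely rolls

def prob(event, given=lambda o: True):
    sample = [o for o in outcomes if given(o)]
    return Fraction(sum(1 for o in sample if event(o)), len(sample))

even_sum = lambda o: (o[0] + o[1]) % 2 == 0

print(prob(lambda o: o[1] == 5))                                        # 1/6
print(prob(lambda o: o[1] == 5, lambda o: o[0] == 3))                   # 1/6: dice independent
print(prob(lambda o: o[1] == 5, even_sum))                              # 1/6
print(prob(lambda o: o[1] == 5, lambda o: o[0] == 3 and even_sum(o)))   # 1/3: not independent
                                                                        # given "sum is even"
```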
Height and vocabulary.
Height and vocabulary are dependent since very small people tend to be children, known for their more basic vocabularies. But knowing that two people are 19 years old (i.e., conditional on age) there is no reason to think that one person's vocabulary is larger if we are told that they are taller.
Conditional independence of random variables.
Two discrete random variables formula_31 and formula_32 are conditionally independent given a third discrete random variable formula_33 if and only if they are independent in their conditional probability distribution given formula_33. That is, formula_31 and formula_32 are conditionally independent given formula_33 if and only if, given any value of formula_33, the probability distribution of formula_31 is the same for all values of formula_32 and the probability distribution of formula_32 is the same for all values of formula_31. Formally:
formula_34 = Pr(X ≤ x | Z = z) · Pr(Y ≤ y | Z = z) for all x, y and z, where formula_34 is the conditional cumulative distribution function of formula_31 and formula_32 given formula_33.
Two events formula_35 and formula_1 are conditionally independent given a σ-algebra formula_36 if
formula_37
where formula_38 denotes the conditional expectation of the indicator function of the event formula_0, formula_39, given the sigma algebra formula_36. That is,
formula_40
Two random variables formula_31 and formula_32 are conditionally independent given a σ-algebra formula_36 if the above equation holds for all formula_35 in formula_41 and formula_1 in formula_42.
Two random variables formula_31 and formula_32 are conditionally independent given a random variable formula_43 if they are independent given "σ"("W"): the σ-algebra generated by formula_43. This is commonly written:
formula_44 or
formula_45
This is read "formula_31 is independent of formula_32, given formula_43"; the conditioning applies to the whole statement: "(formula_31 is independent of formula_32) given formula_43".
formula_46
This notation extends formula_47 for "formula_31 is independent of formula_32."
If formula_43 assumes a countable set of values, this is equivalent to the conditional independence of "X" and "Y" for the events of the form formula_48.
Conditional independence of more than two events, or of more than two random variables, is defined analogously.
The following two examples show that formula_47 "neither implies nor is implied by" formula_46.
First, suppose formula_43 is 0 with probability 0.5 and 1 otherwise. When "W" = 0 take formula_31 and formula_32 to be independent, each having the value 0 with probability 0.99 and the value 1 otherwise. When formula_49, formula_31 and formula_32 are again independent, but this time they take the value 1 with probability 0.99. Then formula_46. But formula_31 and formula_32 are dependent, because Pr("X" = 0) < Pr("X" = 0|"Y" = 0). This is because Pr("X" = 0) = 0.5, but if "Y" = 0 then it's very likely that "W" = 0 and thus that "X" = 0 as well, so Pr("X" = 0|"Y" = 0) > 0.5.
For the second example, suppose formula_47, each taking the values 0 and 1 with probability 0.5. Let formula_43 be the product formula_50. Then when formula_51, Pr("X" = 0) = 2/3, but Pr("X" = 0|"Y" = 0) = 1/2, so formula_46 is false.
This is also an example of Explaining Away. See Kevin Murphy's tutorial where formula_31 and formula_32 take the values "brainy" and "sporty".
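Both counterexamples can be checked exactly by enumeration. The sketch below (illustrative code only; the joint distributions are hard-coded from the descriptions above) stores each joint distribution of ("W", "X", "Y") in a dictionary and evaluates the relevant conditional probabilities.

```python
from itertools import product

def pr(dist, event, given=lambda w, x, y: True):
    num = sum(p for (w, x, y), p in dist.items() if given(w, x, y) and event(w, x, y))
    den = sum(p for (w, x, y), p in dist.items() if given(w, x, y))
    return num / den

# First example: W is a fair coin; given W, X and Y are independent with P(X = W) = P(Y = W) = 0.99.
dist1 = {(w, x, y): 0.5 * (0.99 if x == w else 0.01) * (0.99 if y == w else 0.01)
         for w, x, y in product([0, 1], repeat=3)}

print(pr(dist1, lambda w, x, y: x == 0))                                      # 0.5
print(pr(dist1, lambda w, x, y: x == 0, lambda w, x, y: y == 0))              # 0.9802: X, Y dependent
print(pr(dist1, lambda w, x, y: x == 0, lambda w, x, y: w == 0))              # 0.99
print(pr(dist1, lambda w, x, y: x == 0, lambda w, x, y: w == 0 and y == 0))   # 0.99: independent given W

# Second example: X, Y are independent fair bits and W = X * Y.
dist2 = {(x * y, x, y): 0.25 for x, y in product([0, 1], repeat=2)}

print(pr(dist2, lambda w, x, y: x == 0, lambda w, x, y: w == 0))              # 2/3
print(pr(dist2, lambda w, x, y: x == 0, lambda w, x, y: w == 0 and y == 0))   # 1/2: not independent given W
```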
Conditional independence of random vectors.
Two random vectors formula_52 and formula_53 are conditionally independent given a third random vector formula_54 if and only if they are independent in their conditional cumulative distribution given formula_55. Formally:
F_{X,Y|Z=z}(x,y) = F_{X|Z=z}(x) · F_{Y|Z=z}(y) for all formula_56, formula_57 and formula_58, where the conditional cumulative distributions are defined as follows.
formula_59
Uses in Bayesian inference.
Let "p" be the proportion of voters who will vote "yes" in an upcoming referendum. In taking an opinion poll, one chooses "n" voters randomly from the population. For "i" = 1, ..., "n", let "X""i" = 1 or 0 corresponding, respectively, to whether or not the "i"th chosen voter will or will not vote "yes".
In a frequentist approach to statistical inference one would not attribute any probability distribution to "p" (unless the probabilities could be somehow interpreted as relative frequencies of occurrence of some event or as proportions of some population) and one would say that "X"1, ..., "X""n" are independent random variables.
By contrast, in a Bayesian approach to statistical inference, one would assign a probability distribution to "p" regardless of the non-existence of any such "frequency" interpretation, and one would construe the probabilities as degrees of belief that "p" is in any interval to which a probability is assigned. In that model, the random variables "X"1, ..., "X""n" are "not" independent, but they are conditionally independent given the value of "p". In particular, if a large number of the "X"s are observed to be equal to 1, that would imply a high conditional probability, given that observation, that "p" is near 1, and thus a high conditional probability, given that observation, that the "next" "X" to be observed will be equal to 1.
Rules of conditional independence.
A set of rules governing statements of conditional independence have been derived from the basic definition.
These rules were termed "Graphoid Axioms"
by Pearl and Paz, because they hold in graphs, where formula_60 is interpreted to mean: "All paths from "X" to "A" are intercepted by the set "B"".
formula_61
Symmetry.
Proof:
Note that we are required to prove if formula_62 then formula_63. Note that if formula_62 then it can be shown formula_64. Therefore formula_65 as required.
formula_66
Decomposition.
Proof
Since formula_68, the joint density factors as formula_67. Integrating both sides over the values of the third variable gives formula_69, and hence formula_70, which is the statement that "X" and "A" are independent.
A similar proof shows the independence of "X" and "B".
formula_71
Weak union.
Proof
By assumption, formula_72. Also, by decomposition, formula_73 and hence formula_74. Combining the two equalities gives formula_75, which establishes formula_76.
The second condition can be proved similarly.
formula_77
Contraction.
Proof
This property can be proved by noticing formula_78, each equality of which is asserted by formula_76 and formula_73, respectively.
Intersection.
For strictly positive probability distributions, the following also holds:
formula_79
Proof
By assumption:
formula_80
Using this equality, together with the Law of total probability applied to formula_81:
formula_82
Since formula_83 and formula_84, it follows that formula_85.
Technical note: since these implications hold for any probability space, they will still hold if one considers a sub-universe by conditioning everything on another variable, say "K". For example, formula_86 would also mean that formula_87.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "P(A\\mid B,C) = P(A \\mid C)"
},
{
"math_id": 4,
"text": "P(A \\mid B, C)"
},
{
"math_id": 5,
"text": "(A \\perp\\!\\!\\!\\perp B \\mid C)"
},
{
"math_id": 6,
"text": "f(y)"
},
{
"math_id": 7,
"text": "g(y)"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": "f\\left(y\\right) ~\\overset{\\curvearrowleft \\curvearrowright }{=}~ g\\left(y\\right)"
},
{
"math_id": 10,
"text": "P(f\\mid g,y) = P(f \\mid y)"
},
{
"math_id": 11,
"text": "P(C) > 0"
},
{
"math_id": 12,
"text": "P(A \\mid B, C) = P(A \\mid C)"
},
{
"math_id": 13,
"text": "(A \\perp\\!\\!\\!\\perp B \\mid C)"
},
{
"math_id": 14,
"text": "((A \\perp\\!\\!\\!\\perp B) \\vert C)"
},
{
"math_id": 15,
"text": "P(A,B|C) = P(A|C)P(B|C)"
},
{
"math_id": 16,
"text": "P(A,B|C)"
},
{
"math_id": 17,
"text": "(B \\perp\\!\\!\\!\\perp A \\mid C)"
},
{
"math_id": 18,
"text": "P(A, B \\mid C) = P(A\\mid C)P(B\\mid C)"
},
{
"math_id": 19,
"text": "\\frac{P(A, B, C)}{P(C)} = \\left(\\frac{P(A, C)}{P(C)}\\right) \\left(\\frac{P(B, C)}{P(C)} \\right)"
},
{
"math_id": 20,
"text": "P(A, B, C) = \\frac{P(A, C) P(B, C)}{P(C)}"
},
{
"math_id": 21,
"text": "P(C)"
},
{
"math_id": 22,
"text": "\\frac{P(A, B, C)}{P(B, C)}= \\frac{P(A, C)}{P(C)}"
},
{
"math_id": 23,
"text": "P(B, C)"
},
{
"math_id": 24,
"text": "\\therefore"
},
{
"math_id": 25,
"text": "\\color{red}R"
},
{
"math_id": 26,
"text": "\\color{blue}B"
},
{
"math_id": 27,
"text": "\\color{gold}Y"
},
{
"math_id": 28,
"text": "\\Pr({\\color{red}R}, {\\color{blue}B} \\mid {\\color{gold}Y}) = \\Pr({\\color{red}R} \\mid {\\color{gold}Y})\\Pr({\\color{blue}B} \\mid {\\color{gold}Y})"
},
{
"math_id": 29,
"text": "\\left[ \\text{not }{\\color{gold}Y}\\right]"
},
{
"math_id": 30,
"text": "\\Pr({\\color{red}R}, {\\color{blue}B} \\mid \\text{not } {\\color{gold}Y}) \\not= \\Pr({\\color{red}R} \\mid \\text{not } {\\color{gold}Y})\\Pr({\\color{blue}B} \\mid \\text{not } {\\color{gold}Y})"
},
{
"math_id": 31,
"text": "X"
},
{
"math_id": 32,
"text": "Y"
},
{
"math_id": 33,
"text": "Z"
},
{
"math_id": 34,
"text": "F_{X,Y\\,\\mid\\,Z\\,=\\,z}(x,y)=\\Pr(X \\leq x, Y \\leq y \\mid Z=z)"
},
{
"math_id": 35,
"text": "R"
},
{
"math_id": 36,
"text": "\\Sigma"
},
{
"math_id": 37,
"text": "\\Pr(R, B \\mid \\Sigma) = \\Pr(R \\mid \\Sigma)\\Pr(B \\mid \\Sigma) \\text{ a.s.}"
},
{
"math_id": 38,
"text": "\\Pr(A \\mid \\Sigma) "
},
{
"math_id": 39,
"text": "\\chi_A"
},
{
"math_id": 40,
"text": "\\Pr(A \\mid \\Sigma) := \\operatorname{E}[\\chi_A\\mid\\Sigma]."
},
{
"math_id": 41,
"text": "\\sigma(X)"
},
{
"math_id": 42,
"text": "\\sigma(Y)"
},
{
"math_id": 43,
"text": "W"
},
{
"math_id": 44,
"text": "X \\perp\\!\\!\\!\\perp Y \\mid W "
},
{
"math_id": 45,
"text": "X \\perp Y \\mid W"
},
{
"math_id": 46,
"text": "(X \\perp\\!\\!\\!\\perp Y) \\mid W"
},
{
"math_id": 47,
"text": "X \\perp\\!\\!\\!\\perp Y"
},
{
"math_id": 48,
"text": "[W=w]"
},
{
"math_id": 49,
"text": "W=1"
},
{
"math_id": 50,
"text": "X \\cdot Y"
},
{
"math_id": 51,
"text": "W=0"
},
{
"math_id": 52,
"text": "\\mathbf{X}=(X_1,\\ldots,X_l)^{\\mathrm T}"
},
{
"math_id": 53,
"text": "\\mathbf{Y}=(Y_1,\\ldots,Y_m)^{\\mathrm T}"
},
{
"math_id": 54,
"text": "\\mathbf{Z}=(Z_1,\\ldots,Z_n)^{\\mathrm T}"
},
{
"math_id": 55,
"text": "\\mathbf{Z}"
},
{
"math_id": 56,
"text": "\\mathbf{x}=(x_1,\\ldots,x_l)^{\\mathrm T}"
},
{
"math_id": 57,
"text": "\\mathbf{y}=(y_1,\\ldots,y_m)^{\\mathrm T}"
},
{
"math_id": 58,
"text": "\\mathbf{z}=(z_1,\\ldots,z_n)^{\\mathrm T}"
},
{
"math_id": 59,
"text": "\\begin{align}\nF_{\\mathbf{X},\\mathbf{Y}\\,\\mid\\,\\mathbf{Z}\\,=\\,\\mathbf{z}}(\\mathbf{x},\\mathbf{y}) &= \\Pr(X_1 \\leq x_1,\\ldots,X_l \\leq x_l, Y_1 \\leq y_1,\\ldots,Y_m \\leq y_m \\mid Z_1=z_1,\\ldots,Z_n=z_n) \\\\[6pt]\nF_{\\mathbf{X}\\,\\mid\\,\\mathbf{Z}\\,=\\,\\mathbf{z}}(\\mathbf{x}) &= \\Pr(X_1 \\leq x_1,\\ldots,X_l \\leq x_l \\mid Z_1=z_1,\\ldots,Z_n=z_n) \\\\[6pt]\nF_{\\mathbf{Y}\\,\\mid\\,\\mathbf{Z}\\,=\\,\\mathbf{z}}(\\mathbf{y}) &= \\Pr(Y_1 \\leq y_1,\\ldots,Y_m \\leq y_m \\mid Z_1=z_1,\\ldots,Z_n=z_n)\n\\end{align}"
},
{
"math_id": 60,
"text": "X \\perp\\!\\!\\!\\perp A\\mid B"
},
{
"math_id": 61,
"text": "\nX \\perp\\!\\!\\!\\perp Y\n\\quad \\Rightarrow \\quad\nY \\perp\\!\\!\\!\\perp X\n"
},
{
"math_id": 62,
"text": "P(X|Y) = P(X)"
},
{
"math_id": 63,
"text": "P(Y|X)=P(Y)"
},
{
"math_id": 64,
"text": "P(X, Y) = P(X)P(Y)"
},
{
"math_id": 65,
"text": "P(Y|X) = P(X, Y)/P(X) = P(X)P(Y)/P(X) = P(Y)"
},
{
"math_id": 66,
"text": "\nX \\perp\\!\\!\\!\\perp A,B\n\\quad \\Rightarrow \\quad\n\\text{ and }\n\\begin{cases}\n X \\perp\\!\\!\\!\\perp A \\\\\n X \\perp\\!\\!\\!\\perp B\n\\end{cases}\n"
},
{
"math_id": 67,
"text": "\np_{X,A,B}(x,a,b) = p_X(x) p_{A,B}(a,b)\n"
},
{
"math_id": 68,
"text": "X \\perp\\!\\!\\!\\perp A,B"
},
{
"math_id": 69,
"text": "\n\\int_B p_{X,A,B}(x,a,b)\\,db = \\int_B p_X(x) p_{A,B}(a,b)\\,db\n"
},
{
"math_id": 70,
"text": "\np_{X,A}(x,a) = p_X(x) p_A(a)\n"
},
{
"math_id": 71,
"text": "\nX \\perp\\!\\!\\!\\perp A,B\n\\quad \\Rightarrow \\quad\n\\text{ and }\n\\begin{cases}\n X \\perp\\!\\!\\!\\perp A \\mid B\\\\\n X \\perp\\!\\!\\!\\perp B \\mid A\n\\end{cases}\n"
},
{
"math_id": 72,
"text": "\\Pr(X) = \\Pr(X \\mid A, B) "
},
{
"math_id": 73,
"text": "X \\perp\\!\\!\\!\\perp B"
},
{
"math_id": 74,
"text": "\\Pr(X) = \\Pr(X \\mid B)"
},
{
"math_id": 75,
"text": "\\Pr(X \\mid B) = \\Pr(X \\mid A, B)"
},
{
"math_id": 76,
"text": "X \\perp\\!\\!\\!\\perp A \\mid B"
},
{
"math_id": 77,
"text": "\n\\left.\\begin{align}\n X \\perp\\!\\!\\!\\perp A \\mid B \\\\\n X \\perp\\!\\!\\!\\perp B\n\\end{align}\\right\\}\\text{ and }\n\\quad \\Rightarrow \\quad\nX \\perp\\!\\!\\!\\perp A,B\n"
},
{
"math_id": 78,
"text": "\\Pr(X\\mid A,B) = \\Pr(X\\mid B) = \\Pr(X)"
},
{
"math_id": 79,
"text": "\n\\left.\\begin{align}\n X \\perp\\!\\!\\!\\perp Y \\mid Z, W\\\\\n X \\perp\\!\\!\\!\\perp W \\mid Z, Y\n\\end{align}\\right\\}\\text{ and }\n\\quad \\Rightarrow \\quad\nX \\perp\\!\\!\\!\\perp W, Y \\mid Z\n"
},
{
"math_id": 80,
"text": "P(X|Z, W, Y) = P(X|Z, W) \\land P(X|Z, W, Y) = P(X|Z, Y) \\implies P(X|Z, Y) = P(X|Z, W)"
},
{
"math_id": 81,
"text": "P(X|Z)"
},
{
"math_id": 82,
"text": "\\begin{align}\nP(X|Z) &= \\sum_{w \\in W} P(X|Z, W=w)P(W=w|Z) \\\\[4pt]\n&= \\sum_{w \\in W} P(X|Y, Z)P(W=w|Z) \\\\[4pt]\n&= P(X|Z, Y) \\sum_{w \\in W} P(W=w|Z) \\\\[4pt]\n&= P(X|Z, Y)\n\\end{align}"
},
{
"math_id": 83,
"text": "P(X|Z, W, Y) = P(X|Z, Y)"
},
{
"math_id": 84,
"text": "P(X|Z, Y) = P(X|Z)"
},
{
"math_id": 85,
"text": "P(X|Z, W, Y) = P(X|Z) \\iff X \\perp\\!\\!\\!\\perp Y,W | Z"
},
{
"math_id": 86,
"text": "X \\perp\\!\\!\\!\\perp Y \\Rightarrow Y \\perp\\!\\!\\!\\perp X"
},
{
"math_id": 87,
"text": "X \\perp\\!\\!\\!\\perp Y \\mid K \\Rightarrow Y \\perp\\!\\!\\!\\perp X \\mid K"
}
] | https://en.wikipedia.org/wiki?curid=801135 |
8011882 | Aleph One (disambiguation) | Aleph One (formula_0) is the second aleph number.
Aleph One may also refer to:
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title "Aleph One".
{
"math_id": 0,
"text": "\\aleph_1"
}
] | https://en.wikipedia.org/wiki?curid=8011882 |
8012046 | Convex preferences | Concept in economics
In economics, convex preferences are an individual's ordering of various outcomes, typically with regard to the amounts of various goods consumed, with the property that, roughly speaking, "averages are better than the extremes". The concept roughly corresponds to the concept of diminishing marginal utility without requiring utility functions.
Notation.
Comparable to the greater-than-or-equal-to ordering relation formula_0 for real numbers, the notation formula_1 below can be translated as: 'is at least as good as' (in preference satisfaction).
Similarly, formula_2 can be translated as 'is strictly better than' (in preference satisfaction), and formula_3 can be translated as 'is equivalent to' (in preference satisfaction).
Definition.
Use "x", "y", and "z" to denote three consumption bundles (combinations of various quantities of various goods). Formally, a preference relation formula_1 on the consumption set "X" is called convex if whenever
formula_4 where formula_5 and formula_6,
then for every formula_7:
formula_8.
i.e., for any two bundles that are each viewed as being at least as good as a third bundle, a weighted average of the two bundles is viewed as being at least as good as the third bundle.
A preference relation formula_1 is called strictly convex if whenever
formula_4 where formula_5, formula_6, and formula_9,
then for every formula_10:
formula_11
i.e., for any two distinct bundles that are each viewed as being at least as good as a third bundle, a weighted average of the two bundles (including a positive amount of each bundle) is viewed as being strictly better than the third bundle.
Alternative definition.
Use "x" and "y" to denote two consumption bundles. A preference relation formula_1 is called convex if for any
formula_12 where formula_5
then for every formula_7:
formula_13.
That is, if a bundle "y" is preferred over a bundle "x", then any mix of "y" with "x" is still preferred over "x".
A preference relation is called strictly convex if whenever
formula_12 where formula_14, and formula_15,
then for every formula_10:
formula_16.
formula_17.
That is, for any two bundles that are viewed as being equivalent, a weighted average of the two bundles is better than each of these bundles.
Examples.
1. If there is only a single commodity type, then any weakly-monotonically increasing preference relation is convex. This is because, if formula_18, then every weighted average of "y" and "x" is also formula_19.
2. Consider an economy with two commodity types, 1 and 2. Consider a preference relation represented by the following Leontief utility function:
formula_20
This preference relation is convex. Proof: suppose "x" and "y" are two equivalent bundles, i.e. formula_21. If the minimum-quantity commodity in both bundles is the same (e.g. commodity 1), then this implies formula_22. Then, any weighted average also has the same amount of commodity 1, so any weighted average is equivalent to formula_23 and formula_24. If the minimum commodity in each bundle is different (e.g. formula_25 but formula_26), then this implies formula_27. Then formula_28 and formula_29, so formula_30. This preference relation is convex, but not strictly-convex.
3. A preference relation represented by linear utility functions is convex, but not strictly convex. Whenever formula_31, every convex combination of formula_32 is equivalent to any of them.
4. Consider a preference relation represented by:
formula_33
This preference relation is not convex. Proof: let formula_34 and formula_35. Then formula_31 since both have utility 5. However, the convex combination formula_36 is worse than both of them since its utility is 4.
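The difference between examples 2 and 4 can be illustrated numerically (a sketch for illustration only; the sampling ranges and tolerance are arbitrary choices). A utility function represents convex preferences exactly when it is quasi-concave, i.e. when u(θx + (1−θ)y) ≥ min(u(x), u(y)); the script tests this condition on random bundle pairs for the Leontief utility and for the max utility, and reproduces the explicit counterexample with the bundles (3,5) and (5,3).

```python
import numpy as np

rng = np.random.default_rng(1)

def quasi_concave_on_sample(u, trials=10_000):
    """Sample bundle pairs and mixing weights; test u(mix) >= min(u(x), u(y))."""
    for _ in range(trials):
        x, y = rng.uniform(0, 10, size=(2, 2))
        theta = rng.uniform()
        if u(theta * x + (1 - theta) * y) < min(u(x), u(y)) - 1e-12:
            return False
    return True

leontief = lambda b: min(b[0], b[1])       # example 2: convex preferences
maximum  = lambda b: max(b[0], b[1])       # example 4: not convex

print(quasi_concave_on_sample(leontief))   # True
print(quasi_concave_on_sample(maximum))    # False (a violation is found quickly)

# The explicit counterexample from example 4: x = (3,5), y = (5,3).
x, y = np.array([3.0, 5.0]), np.array([5.0, 3.0])
print(maximum(0.5 * x + 0.5 * y), maximum(x), maximum(y))   # 4.0 5.0 5.0
```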
Relation to indifference curves and utility functions.
A set of convex-shaped indifference curves displays convex preferences: Given a convex indifference curve containing the set of all bundles (of two or more goods) that are all viewed as equally desired, the set of all goods bundles that are viewed as being at least as desired as those on the indifference curve is a convex set.
Convex preferences with their associated convex indifference mapping arise from quasi-concave utility functions, although these are not necessary for the analysis of preferences. For example, Constant Elasticity of Substitution (CES) utility functions describe convex, homothetic preferences. CES preferences are self-dual and both primal and dual CES preferences yield systems of indifference curves that may exhibit any degree of convexity.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\geq"
},
{
"math_id": 1,
"text": "\\succeq"
},
{
"math_id": 2,
"text": "\\succ"
},
{
"math_id": 3,
"text": "\\sim"
},
{
"math_id": 4,
"text": "x, y, z \\in X"
},
{
"math_id": 5,
"text": "y \\succeq x "
},
{
"math_id": 6,
"text": "z \\succeq x "
},
{
"math_id": 7,
"text": "\\theta\\in[0,1]"
},
{
"math_id": 8,
"text": "\\theta y + (1-\\theta) z \\succeq x "
},
{
"math_id": 9,
"text": " y \\neq z"
},
{
"math_id": 10,
"text": "\\theta\\in(0,1)"
},
{
"math_id": 11,
"text": "\\theta y + (1-\\theta) z \\succ x "
},
{
"math_id": 12,
"text": "x, y \\in X"
},
{
"math_id": 13,
"text": "\\theta y + (1-\\theta) x \\succeq x "
},
{
"math_id": 14,
"text": "y \\sim x "
},
{
"math_id": 15,
"text": " x \\neq y"
},
{
"math_id": 16,
"text": "\\theta y + (1-\\theta) x \\succ x "
},
{
"math_id": 17,
"text": "\\theta y + (1-\\theta) x \\succ y "
},
{
"math_id": 18,
"text": "y \\geq x "
},
{
"math_id": 19,
"text": "\\geq x "
},
{
"math_id": 20,
"text": "u(x_1,x_2) = \\min(x_1,x_2)"
},
{
"math_id": 21,
"text": "\\min(x_1,x_2) = \\min(y_1,y_2)"
},
{
"math_id": 22,
"text": "x_1=y_1 \\leq x_2,y_2"
},
{
"math_id": 23,
"text": "x"
},
{
"math_id": 24,
"text": "y"
},
{
"math_id": 25,
"text": "x_1\\leq x_2"
},
{
"math_id": 26,
"text": "y_1\\geq y_2"
},
{
"math_id": 27,
"text": "x_1=y_2 \\leq x_2,y_1"
},
{
"math_id": 28,
"text": "\\theta x_1 + (1-\\theta) y_1 \\geq x_1"
},
{
"math_id": 29,
"text": "\\theta x_2 + (1-\\theta) y_2 \\geq y_2"
},
{
"math_id": 30,
"text": "\\theta x + (1-\\theta) y \\succeq x,y"
},
{
"math_id": 31,
"text": "x\\sim y"
},
{
"math_id": 32,
"text": "x,y"
},
{
"math_id": 33,
"text": "u(x_1,x_2) = \\max(x_1,x_2)"
},
{
"math_id": 34,
"text": "x=(3,5)"
},
{
"math_id": 35,
"text": "y=(5,3)"
},
{
"math_id": 36,
"text": "0.5 x + 0.5 y = (4,4)"
}
] | https://en.wikipedia.org/wiki?curid=8012046 |
801246 | Cocountable topology | The cocountable topology or countable complement topology on any set "X" consists of the empty set and all cocountable subsets of "X", that is all sets whose complement in "X" is countable. It follows that the only closed subsets are "X" and the countable subsets of "X". Symbolically, one writes the topology as
formula_0
Every set "X" with the cocountable topology is Lindelöf, since every nonempty open set omits only countably many points of "X". It is also T1, as all singletons are closed.
If "X" is an uncountable set then any two nonempty open sets intersect, hence the space is not Hausdorff. However, in the cocountable topology all convergent sequences are eventually constant, so limits are unique. Since compact sets in "X" are finite subsets, all compact subsets are closed, another condition usually related to the Hausdorff separation axiom.
The cocountable topology on a countable set is the discrete topology. The cocountable topology on an uncountable set is hyperconnected, thus connected, locally connected and pseudocompact, but neither weakly countably compact nor countably metacompact, hence not compact. | [
{
"math_id": 0,
"text": "\\mathcal{T} = \\{A \\subseteq X : A = \\varnothing \\mbox{ or } X \\setminus A \\mbox{ is countable} \\}."
}
] | https://en.wikipedia.org/wiki?curid=801246 |
8014489 | Mixmaster universe | The Mixmaster universe (named after Sunbeam Mixmaster, a brand of Sunbeam Products electric kitchen mixer) is a solution to Einstein field equations of general relativity studied by Charles Misner in an effort to better understand the dynamics of the early universe. He hoped to solve the horizon problem in a natural way by showing that the early universe underwent an oscillatory, chaotic epoch.
Discussion.
The model is similar to the closed Friedmann–Lemaître–Robertson–Walker universe, in that spatial slices are positively curved and are topologically three-spheres formula_0. However, in the FRW universe, the formula_0 can only expand or contract: the only dynamical parameter is overall size of the formula_0, parameterized by the scale factor formula_1. In the Mixmaster universe, the formula_0 can expand or contract, but also distort anisotropically. Its evolution is described by a scale factor formula_1 as well as by two shape parameters formula_2. Values of the shape parameters describe distortions of the formula_0 that preserve its volume and also maintain a constant Ricci curvature scalar. Therefore, as the three parameters formula_3 assume different values, homogeneity but not isotropy is preserved.
The model has a rich dynamical structure. Misner showed that the shape parameters formula_2 act like the coordinates of a point mass moving in a triangular potential with steeply rising walls with friction. By studying the motion of this point, Misner showed that the physical universe would expand in some directions and contract in others, with the directions of expansion and contraction changing repeatedly. Because the potential is roughly triangular, Misner suggested that the evolution is chaotic.
Metric.
The metric studied by Misner (very slightly modified from his notation) is given by,
formula_4
where
formula_5
and the formula_6, considered as differential forms, are defined by
formula_7
formula_8
formula_9
in terms of the coordinates formula_10. These forms satisfy
formula_11
where formula_12 is the exterior derivative and formula_13 the wedge product of differential forms. The 1-forms formula_14 form a left-invariant co-frame on the Lie group SU(2), which is diffeomorphic to the 3-sphere formula_0, so the spatial metric in Misner's model can concisely be described as just a left-invariant metric on the 3-sphere; indeed, up to the adjoint action of SU(2), this is actually the general left-invariant metric. As the metric evolves via Einstein's equation, the geometry of this formula_0 typically distorts anisotropically. Misner defines parameters formula_15 and formula_16 which measure the volume of spatial slices, as well as "shape parameters" formula_17, by
formula_18.
Since there is one condition on the three formula_17, there should only be two free functions, which Misner chooses to be formula_19, defined as
formula_20
The evolution of the universe is then described by finding formula_19 as functions of formula_21.
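The definitions of the volume and shape parameters can be made concrete with a few lines of code (a plain numerical illustration of the formulas above; the sample values of Ω and β± are arbitrary). Inverting the definitions of formula_19 gives the individual β_k, and the product of the three scale factors reproduces the volume factor R³ independently of the shape.

```python
import numpy as np

def scale_factors(Omega, beta_plus, beta_minus):
    """Return (L1, L2, L3) from the volume parameter and the two shape parameters."""
    beta1 = 0.5 * (beta_plus + np.sqrt(3.0) * beta_minus)
    beta2 = 0.5 * (beta_plus - np.sqrt(3.0) * beta_minus)
    beta3 = -beta_plus                      # so that beta1 + beta2 + beta3 = 0
    R = np.exp(-Omega)
    return R * np.exp([beta1, beta2, beta3])

L = scale_factors(Omega=0.2, beta_plus=0.1, beta_minus=-0.05)
R = np.exp(-0.2)
print(L)
print(np.isclose(np.prod(L), R ** 3))       # volume factor R^3 is independent of the shape
```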
Applications to cosmology.
Misner hoped that the chaos would churn up and smooth out the early universe. Also, during periods in which one direction was static (e.g., going from expansion to contraction) formally the Hubble horizon formula_22 in that direction is infinite, which he suggested meant that the horizon problem could be solved. Since the directions of expansion and contraction varied, presumably given enough time the horizon problem would get solved in every direction.
While an interesting example of gravitational chaos, it is widely recognized that the cosmological problems the Mixmaster universe attempts to solve are more elegantly tackled by cosmic inflation. The metric Misner studied is also known as the Bianchi type IX metric. | [
{
"math_id": 0,
"text": "S^3"
},
{
"math_id": 1,
"text": "a(t)"
},
{
"math_id": 2,
"text": "\\beta_\\pm(t)"
},
{
"math_id": 3,
"text": "a,\\beta_\\pm"
},
{
"math_id": 4,
"text": "\\text{d}s^2 = -\\text{d}t^2 + \\sum_{k=1}^3 {L_k^2(t)} \\sigma_k \\otimes \\sigma_k"
},
{
"math_id": 5,
"text": " L_k = R(t)e^{\\beta_k} "
},
{
"math_id": 6,
"text": "\\sigma_k"
},
{
"math_id": 7,
"text": "\\sigma_1 = \\sin \\psi \\text{d}\\theta - \\cos\\psi \\sin\\theta\\text{d}\\phi"
},
{
"math_id": 8,
"text": "\\sigma_2 = \\cos \\psi \\text{d}\\theta + \\sin\\psi \\sin\\theta\\text{d}\\phi"
},
{
"math_id": 9,
"text": "\\sigma_3 = -\\text{d}\\psi - \\cos\\theta\\text{d}\\phi"
},
{
"math_id": 10,
"text": "(\\theta,\\psi,\\phi)"
},
{
"math_id": 11,
"text": " \\text{d}\\sigma_i = \\frac{1}{2}\\epsilon_{ijk} \\sigma_j \\wedge \\sigma_k"
},
{
"math_id": 12,
"text": "\\text{d}"
},
{
"math_id": 13,
"text": "\\wedge"
},
{
"math_id": 14,
"text": "\\sigma_i"
},
{
"math_id": 15,
"text": "\\Omega(t)"
},
{
"math_id": 16,
"text": "R(t)"
},
{
"math_id": 17,
"text": "\\beta_k"
},
{
"math_id": 18,
"text": "R(t) = e^{-\\Omega(t)} = (L_1(t) L_2(t) L_3(t))^{1/3}, \\quad \\sum_{k=1}^3 \\beta_k(t) = 0"
},
{
"math_id": 19,
"text": "\\beta_\\pm"
},
{
"math_id": 20,
"text": "\\beta_+ = \\beta_1 + \\beta_2 = -\\beta_3, \\quad \\beta_- = \\frac{\\beta_1 - \\beta_2}{\\sqrt{3}}"
},
{
"math_id": 21,
"text": "\\Omega"
},
{
"math_id": 22,
"text": "H^{-1}"
}
] | https://en.wikipedia.org/wiki?curid=8014489 |
8014919 | Lindelöf's theorem | In mathematics, Lindelöf's theorem is a result in complex analysis named after the Finnish mathematician Ernst Leonard Lindelöf. It states that a holomorphic function on a half-strip in the complex plane that is bounded on the boundary of the strip and does not grow "too fast" in the unbounded direction of the strip must remain bounded on the whole strip. The result is useful in the study of the Riemann zeta function, and is a special case of the Phragmén–Lindelöf principle. Also, see Hadamard three-lines theorem.
Statement of the theorem.
Let formula_0 be a half-strip in the complex plane:
formula_1
Suppose that formula_2 is holomorphic (i.e. analytic) on formula_0 and that there are constants
formula_3, formula_4, and formula_5 such that
formula_6
and
formula_7
Then formula_2 is bounded by formula_3 on all of formula_0:
formula_8
Proof.
Fix a point formula_9 inside formula_0. Choose formula_10, an integer formula_11 and formula_12 large enough such that
formula_13. Applying the maximum modulus principle to the function formula_14 and
the rectangular area formula_15, we obtain formula_16, that is, formula_17. Letting formula_18 yields
formula_19 as required. | [
{
"math_id": 0,
"text": "\\Omega"
},
{
"math_id": 1,
"text": "\\Omega = \\{ z \\in \\mathbb{C} | x_1 \\leq \\mathrm{Re} (z) \\leq x_2\\ \\text{and}\\ \\mathrm{Im} (z) \\geq y_0 \\} \\subsetneq \\mathbb{C}. "
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "B"
},
{
"math_id": 6,
"text": "| f(z) | \\leq M \\ \\text{for all}\\ z \\in \\partial \\Omega"
},
{
"math_id": 7,
"text": "| f (x + i y) | \\leq B y^A\\ \\text{for all}\\ x + i y \\in \\Omega."
},
{
"math_id": 8,
"text": "| f(z) | \\leq M\\ \\text{for all}\\ z \\in \\Omega."
},
{
"math_id": 9,
"text": "\\xi=\\sigma+i\\tau"
},
{
"math_id": 10,
"text": "\\lambda>-y_0"
},
{
"math_id": 11,
"text": "N>A"
},
{
"math_id": 12,
"text": "y_1>\\tau"
},
{
"math_id": 13,
"text": "\\frac{By_1^A}{(y_1 + \\lambda)^N}\\le \\frac {M}{(y_0+\\lambda)^N}"
},
{
"math_id": 14,
"text": "g(z)=\\frac {f(z)}{(z+i\\lambda)^N}"
},
{
"math_id": 15,
"text": "\\{z \\in \\mathbb{C} \\mid x_1 \\leq \\mathrm{Re} (z) \\leq x_2\\ \\text{and}\\ y_0 \\leq \\mathrm{Im} (z) \\leq y_1 \\}"
},
{
"math_id": 16,
"text": "|g(\\xi)|\\le \\frac{M}{(y_0+\\lambda)^N}"
},
{
"math_id": 17,
"text": "|f(\\xi)|\\le M\\left(\\frac{|\\xi + \\lambda|}{y_0+\\lambda}\\right)^N"
},
{
"math_id": 18,
"text": "\\lambda \\to +\\infty"
},
{
"math_id": 19,
"text": "|f(\\xi)| \\le M"
}
] | https://en.wikipedia.org/wiki?curid=8014919 |
8015680 | Monadic predicate calculus | In logic, the monadic predicate calculus (also called monadic first-order logic) is the fragment of first-order logic in which all relation symbols in the signature are monadic (that is, they take only one argument), and there are no function symbols. All atomic formulas are thus of the form formula_0, where formula_1 is a relation symbol and formula_2 is a variable.
Monadic predicate calculus can be contrasted with polyadic predicate calculus, which allows relation symbols that take two or more arguments.
Expressiveness.
The absence of polyadic relation symbols severely restricts what can be expressed in the monadic predicate calculus. It is so weak that, unlike the full predicate calculus, it is decidable—there is a decision procedure that determines whether a given formula of monadic predicate calculus is logically valid (true for all nonempty domains). Adding a single binary relation symbol to monadic logic, however, results in an undecidable logic.
Relationship with term logic.
The need to go beyond monadic logic was not appreciated until the work on the logic of relations by Augustus De Morgan and Charles Sanders Peirce in the nineteenth century, and by Frege in his 1879 "Begriffsschrift". Prior to the work of these three men, term logic (syllogistic logic) was widely considered adequate for formal deductive reasoning.
Inferences in term logic can all be represented in the monadic predicate calculus. For example the argument
All dogs are mammals.
No mammal is a bird.
Thus, no dog is a bird.
can be notated in the language of monadic predicate calculus as
formula_3
where formula_4, formula_5 and formula_6 denote the predicates of being, respectively, a dog, a mammal, and a bird.
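Because a monadic formula without equality cannot distinguish elements that satisfy exactly the same predicates, its truth value depends only on which of the 2^k combinations of its k predicates are inhabited, so validity can be checked by a finite brute-force search. The sketch below (an illustrative checker written for this particular syllogism, not an optimized decision procedure) confirms the argument above by trying every possible set of inhabited combinations of "D", "M" and "B".

```python
from itertools import product

# The eight "types" an element can have: which of D, M and B it satisfies.
TYPES = list(product([False, True], repeat=3))

def syllogism_is_valid():
    """Check that (all D are M) and (no M is B) together imply (no D is B).

    Truth of a monadic sentence without equality depends only on which of the
    2^3 = 8 predicate combinations are inhabited, so trying every nonempty set
    of inhabited combinations as a domain is an exhaustive check.
    """
    for inhabited in product([False, True], repeat=len(TYPES)):
        domain = [t for t, used in zip(TYPES, inhabited) if used]
        if not domain:
            continue
        all_d_are_m = all(m for d, m, b in domain if d)
        no_m_is_b = not any(m and b for d, m, b in domain)
        no_d_is_b = not any(d and b for d, m, b in domain)
        if all_d_are_m and no_m_is_b and not no_d_is_b:
            return False      # countermodel found
    return True

print(syllogism_is_valid())   # True
```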
Conversely, monadic predicate calculus is not significantly more expressive than term logic. Each formula in the monadic predicate calculus is equivalent to a formula in which quantifiers appear only in closed subformulas of the form
formula_7
or
formula_8
These formulas slightly generalize the basic judgements considered in term logic. For example, this form allows statements such as "Every mammal is either a herbivore or a carnivore (or both)", formula_9. Reasoning about such statements can, however, still be handled within the framework of term logic, although not by the 19 classical Aristotelian syllogisms alone.
Taking propositional logic as given, every formula in the monadic predicate calculus expresses something that can likewise be formulated in term logic. On the other hand, a modern view of the problem of multiple generality in traditional logic concludes that quantifiers cannot nest usefully if there are no polyadic predicates to relate the bound variables.
Variants.
The formal system described above is sometimes called the pure monadic predicate calculus, where "pure" signifies the absence of function letters. Allowing monadic function letters changes the logic only superficially, whereas admitting even a single binary function letter results in an undecidable logic.
Monadic second-order logic allows predicates of higher arity in formulas, but restricts second-order quantification to unary predicates, i.e. the only second-order variables allowed are subset variables. | [
{
"math_id": 0,
"text": "P(x)"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "[(\\forall x\\,D(x)\\Rightarrow M(x))\\land \\neg(\\exists y\\,M(y)\\land B(y))] \\Rightarrow \\neg(\\exists z\\,D(z)\\land B(z))"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "M"
},
{
"math_id": 6,
"text": "B"
},
{
"math_id": 7,
"text": "\\forall x\\,P_1(x)\\lor\\cdots\\lor P_n(x)\\lor\\neg P'_1(x)\\lor\\cdots\\lor \\neg P'_m(x)"
},
{
"math_id": 8,
"text": "\\exists x\\,\\neg P_1(x)\\land\\cdots\\land\\neg P_n(x)\\land P'_1(x)\\land\\cdots\\land P'_m(x),"
},
{
"math_id": 9,
"text": "(\\forall x\\,\\neg M(x)\\lor H(x)\\lor C(x))"
}
] | https://en.wikipedia.org/wiki?curid=8015680 |
8016170 | Hedgehog space | Topological space made of a set of spines joined at a point
In mathematics, a hedgehog space is a topological space consisting of a set of spines joined at a point.
For any cardinal number formula_0, the formula_0-hedgehog space is formed by taking the disjoint union of formula_0 real unit intervals identified at the origin (though its topology is not the quotient topology, but that defined by the metric below). Each unit interval is referred to as one of the hedgehog's "spines." A formula_0-hedgehog space is sometimes called a hedgehog space of spininess formula_0.
The hedgehog space is a metric space, when endowed with the hedgehog metric formula_1 if formula_2 and formula_3 lie in the same spine, and by formula_4 if formula_2 and formula_3 lie in different spines. Although their disjoint union makes the origins of the intervals distinct, the metric makes them equivalent by assigning them 0 distance.
Hedgehog spaces are examples of real trees.
Paris metric.
The metric on the plane in which the distance between any two points is their Euclidean distance when the two points belong to a ray through the origin, and is otherwise the sum of the distances of the two points from the origin, is sometimes called the Paris metric because navigation in this metric resembles that in the radial street plan of Paris: for almost all pairs of points, the shortest path passes through the center. The Paris metric, restricted to the unit disk, is a hedgehog space whose spininess is the cardinality of the continuum.
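Both metrics are easy to compute directly (an illustrative sketch; the spine labels, the random sample and the tolerances are arbitrary choices). The first function implements the two-case hedgehog distance above for a hedgehog with finitely many spines, the second the planar Paris metric, and the final check spot-tests the triangle inequality on random points.

```python
import math
import random

def hedgehog_distance(p, q):
    """p and q are (spine_label, t) with 0 <= t <= 1; the origin is t = 0 on any spine."""
    (sp, x), (sq, y) = p, q
    return abs(x - y) if sp == sq else x + y

def paris_distance(p, q):
    """Euclidean if p and q lie on a common ray through the origin, else via the origin."""
    norm = lambda v: math.hypot(*v)
    cross = p[0] * q[1] - p[1] * q[0]
    same_ray = math.isclose(cross, 0.0, abs_tol=1e-12) and p[0] * q[0] + p[1] * q[1] >= 0
    return math.dist(p, q) if same_ray else norm(p) + norm(q)

# Spot-check the triangle inequality for the hedgehog metric on random points.
random.seed(0)
pts = [(random.randrange(5), random.random()) for _ in range(50)]
ok = all(hedgehog_distance(a, c) <= hedgehog_distance(a, b) + hedgehog_distance(b, c) + 1e-12
         for a in pts for b in pts for c in pts)
print(ok)   # True
```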
Kowalsky's theorem.
Kowalsky's theorem, named after Hans-Joachim Kowalsky, states that any metrizable space of weight formula_0 can be represented as a topological subspace of the product of countably many formula_0-hedgehog spaces.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\kappa"
},
{
"math_id": 1,
"text": "d(x,y)=\\left| x - y \\right|"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "y"
},
{
"math_id": 4,
"text": "d(x,y)=\\left|x\\right| + \\left|y\\right|"
}
] | https://en.wikipedia.org/wiki?curid=8016170 |
8016525 | Moore space (topology) | In mathematics, more specifically point-set topology, a Moore space is a developable regular Hausdorff space. That is, a topological space "X" is a Moore space if the following conditions hold:
Moore spaces are of general interest in mathematics because they may be applied to prove interesting metrization theorems. The concept of a Moore space was formulated by R. L. Moore in the earlier part of the 20th century.
Normal Moore space conjecture.
For a long time, topologists were trying to prove the so-called normal Moore space conjecture: every normal Moore space is metrizable. This was inspired by the fact that all known Moore spaces that were not metrizable were also not normal. This would have been a nice metrization theorem. There were some nice partial results at first; namely properties 7, 8 and 9 as given in the previous section.
With property 9, we see that we can drop metacompactness from Traylor's theorem, but at the cost of a set-theoretic assumption. Another example of this is Fleissner's theorem that the axiom of constructibility implies that locally compact, normal Moore spaces are metrizable.
On the other hand, under the continuum hypothesis (CH) and also under Martin's axiom and not CH, there are several examples of non-metrizable normal Moore spaces. Nyikos proved that, under the so-called PMEA (Product Measure Extension Axiom), which needs a large cardinal, all normal Moore spaces are metrizable. Finally, it was shown later that any model of ZFC in which the conjecture holds, implies the existence of a model with a large cardinal. So large cardinals are needed essentially.
An example has been given of a pseudonormal Moore space that is not metrizable, so the conjecture cannot be strengthened in this way.
Moore himself proved the theorem that a collectionwise normal Moore space is metrizable, so strengthening normality is another way to settle the matter.
References.
<templatestyles src="Reflist/styles.css" />
MR (27 #709) Moore, R. L. "Foundations of point set theory". Revised edition. American Mathematical Society Colloquium Publications, Vol. XIII American Mathematical Society, Providence, R.I. 1962 xi+419 pp. (Reviewer: F. Burton Jones)
MR (33 #7980) Jones, F. Burton "Metrization". "American Mathematical Monthly" 73 1966 571–576. (Reviewer: R. W. Bagley)
MR (34 #3510) Bing, R. H. "Challenging conjectures". "American Mathematical Monthly" 74 1967 no. 1, part II, 56–64;
MR (1,317f) Vickery, C. W. "Axioms for Moore spaces and metric spaces". "Bulletin of the American Mathematical Society" 46, (1940). 560–564 | [
{
"math_id": 0,
"text": "2^{\\aleph_0}<2^{\\aleph_1}"
}
] | https://en.wikipedia.org/wiki?curid=8016525 |
801679 | Ultimatum game | Game in economic experiments
The ultimatum game is a game that has become a popular instrument of economic experiments. An early description is by Nobel laureate John Harsanyi in 1961. One player, the proposer, is endowed with a sum of money. The proposer is tasked with splitting it with another player, the responder (who knows what the total sum is). Once the proposer communicates their decision, the responder may accept it or reject it. If the responder accepts, the money is split per the proposal; if the responder rejects, both players receive nothing. Both players know in advance the consequences of the responder accepting or rejecting the offer.
Equilibrium analysis.
For ease of exposition, the simple example illustrated above can be considered, where the proposer has two options: a fair split, or an unfair split. The argument given in this section can be extended to the more general case where the proposer can choose from many different splits.
A Nash equilibrium is a set of strategies (one for the proposer and one for the responder in this case), where no individual party can improve their reward by changing strategy. If the proposer always makes an unfair offer, the responder will do best by always accepting the offer, and the proposer will maximize their reward. Although it always benefits the responder to accept even unfair offers, the responder can adopt a strategy that rejects unfair splits often enough to induce the proposer to always make a fair offer. Thus, there are two sets of Nash equilibria for this game: first, equilibria in which the proposer makes an unfair offer and the responder always accepts; and second, equilibria in which the proposer makes a fair offer and the responder accepts fair offers but rejects unfair offers.
However, only the first set of Nash equilibria satisfies a more restrictive equilibrium concept, subgame perfection. The game can be viewed as having two subgames: the subgame where the proposer makes a fair offer, and the subgame where the proposer makes an unfair offer. A subgame-perfect equilibrium occurs when the chosen strategies form a Nash equilibrium in every subgame, so that no player has an incentive to deviate within any subgame. In both subgames, it benefits the responder to accept the offer. So, the second set of Nash equilibria above is not subgame perfect: the responder can choose a better strategy for one of the subgames.
Multi-valued or continuous strategies.
The simplest version of the ultimatum game has two possible strategies for the proposer, Fair and Unfair. A more realistic version would allow for many possible offers. For example, the item being shared might be a dollar bill, worth 100 cents, in which case the proposer's strategy set would be all integers between 0 and 100, inclusive for their choice of offer, "S". This would have two subgame perfect equilibria: (Proposer: "S"=0, Accepter: Accept), which is a weak equilibrium because the acceptor would be indifferent between their two possible strategies; and the strong (Proposer: "S"=1, Accepter: Accept if "S">=1 and Reject if "S"=0).
The ultimatum game is also often modelled using a continuous strategy set. Suppose the proposer chooses a share "S" of a pie to offer the receiver, where "S" can be any real number between 0 and 1, inclusive. If the receiver accepts the offer, the proposer's payoff is (1-S) and the receiver's is "S". If the receiver rejects the offer, both players get zero. The unique subgame perfect equilibrium is ("S"=0, Accept). It is weak because the receiver's payoff is 0 whether they accept or reject. No share with "S" > 0 is subgame perfect, because the proposer would deviate to "S' = S" - formula_0 for some small number formula_0 and the receiver's best response would still be to accept. The weak equilibrium is an artifact of the strategy space being continuous.
Experimental results.
The first experimental analysis of the ultimatum game was by Werner Güth, Rolf Schmittberger, and Bernd Schwarze. Their experiments were widely imitated in a variety of settings. When carried out between members of a shared social group (e.g., a village, a tribe, a nation, humanity), people offer "fair" (i.e., 50:50) splits, and offers of less than 30% are often rejected.
One limited study of monozygotic and dizygotic twins claims that genetic variation can have an effect on reactions to unfair offers, though the study failed to employ actual controls for environmental differences. It has also been found that delaying the responder's decision leads to people accepting "unfair" offers more often. Common chimpanzees behaved similarly to humans by proposing fair offers in one version of the ultimatum game involving direct interaction between the chimpanzees. However, another study also published in November 2012 showed that both kinds of chimpanzees (common chimpanzees and bonobos) did not reject unfair offers, using a mechanical apparatus.
Cross-cultural differences.
Some studies have found significant differences between cultures in the offers most likely to be accepted and most likely to maximize the proposer's income. In one study of 15 small-scale societies, proposers in gift-giving cultures were more likely to make high offers and responders were more likely to reject high offers despite anonymity, while low offers were expected and accepted in other societies, which the authors suggested were related to the ways that giving and receiving were connected to social status in each group. Proposers and responders from WEIRD (Western, educated, industrialized, rich, democratic) societies are most likely to settle on equal splits.
Framing effects.
Some studies have found significant effects of framing on game outcomes. Outcomes have been found to change based on characterizing the proposer's role as giving versus splitting versus taking, or characterizing the game as a windfall game versus a routine transaction game.
Explanations.
The highly mixed results, along with similar results in the dictator game, have been taken as both evidence for and against the Homo economicus assumptions of rational, utility-maximizing, individual decisions. Since an individual who rejects a positive offer is choosing to get nothing rather than something, that individual must not be acting solely to maximize their economic gain, unless one incorporates economic applications of social, psychological, and methodological factors (such as the observer effect). Several attempts have been made to explain this behavior. Some suggest that individuals are maximizing their expected utility, but money does not translate directly into expected utility. Perhaps individuals get some psychological benefit from engaging in punishment or receive some psychological harm from accepting a low offer. It could also be the case that the second player, by having the power to reject the offer, uses such power as leverage against the first player, thus motivating them to be fair.
The classical explanation of the ultimatum game as a well-formed experiment approximating general behaviour often leads to the conclusion that the assumption of rational behavior is accurate to a degree, but must encompass additional vectors of decision making. Behavioral economic and psychological accounts suggest that second players who reject offers less than 50% of the amount at stake do so for one of two reasons. An altruistic punishment account suggests that rejections occur out of altruism: people reject unfair offers to teach the first player a lesson and thereby reduce the likelihood that the player will make an unfair offer in the future. Thus, rejections are made to benefit the second player in the future, or other people in the future. By contrast, a self-control account suggests that rejections constitute a failure to inhibit a desire to punish the first player for making an unfair offer. Morewedge, Krishnamurti, and Ariely (2014) found that intoxicated participants were more likely to reject unfair offers than sober participants. As intoxication tends to exacerbate decision makers' prepotent response, this result provides support for the self-control account, rather than the altruistic punishment account. Other research from social cognitive neuroscience supports this finding.
However, several competing models suggest ways to bring the cultural preferences of the players within the optimized utility function of the players in such a way as to preserve the utility maximizing agent as a feature of microeconomics. For example, researchers have found that Mongolian proposers tend to offer even splits despite knowing that very unequal splits are almost always accepted. Similar results from other small-scale societies players have led some researchers to conclude that "reputation" is seen as more important than any economic reward. Others have proposed the social status of the responder may be part of the payoff. Another way of integrating the conclusion with utility maximization is some form of inequity aversion model (preference for fairness). Even in anonymous one-shot settings, the economic-theory suggested outcome of minimum money transfer and acceptance is rejected by over 80% of the players.
An explanation which was originally quite popular was the "learning" model, in which it was hypothesized that proposers' offers would decay towards the subgame perfect Nash equilibrium (almost zero) as they mastered the strategy of the game; this decay tends to be seen in other iterated games. However, this explanation (bounded rationality) is less commonly offered now, in light of subsequent empirical evidence.
It has been hypothesized (e.g. by James Surowiecki) that very unequal allocations are rejected only because the absolute amount of the offer is low. The concept here is that if the amount to be split were 10 million dollars, a 9:1 split would probably be accepted rather than rejecting a 1 million-dollar offer. Essentially, this explanation says that the absolute amount of the endowment is not significant enough to produce strategically optimal behaviour. However, many experiments have been performed where the amount offered was substantial: studies by Cameron and Hoffman et al. have found that higher stakes cause offers to approach "closer" to an even split, even in a US$100 game played in Indonesia, where average per-capita income is much lower than in the United States. Rejections are reportedly independent of the stakes at this level, with US$30 offers being turned down in Indonesia, as in the United States, even though this equates to two weeks' wages in Indonesia. However, 2011 research with stakes of up to 40 weeks' wages in India showed that "as stakes increase, rejection rates approach zero". It is worth noting that the instructions offered to proposers in this study explicitly state, "if the responder's goal is to earn as much money as possible from the experiment, they should accept any offer that gives them positive earnings, no matter how low," thus framing the game in purely monetary terms.
Neurological explanations.
Generous offers in the ultimatum game (offers exceeding the minimum acceptable offer) are commonly made. Zak, Stanton & Ahmadi (2007) showed that two factors can explain generous offers: empathy and perspective taking. They varied empathy by infusing participants with intranasal oxytocin or placebo (blinded). They affected perspective-taking by asking participants to make choices as both player 1 and player 2 in the ultimatum game, with later random assignment to one of these. Oxytocin increased generous offers by 80% relative to placebo. Oxytocin did not affect the minimum acceptance threshold or offers in the dictator game (meant to measure altruism). This indicates that emotions drive generosity.
Rejections in the ultimatum game have been shown to be caused by adverse physiologic reactions to stingy offers. In a brain imaging experiment by Sanfey et al., stingy offers (relative to fair and hyperfair offers) differentially activated several brain areas, especially the anterior insular cortex, a region associated with visceral disgust. If Player 1 in the ultimatum game anticipates this response to a stingy offer, they may be more generous.
An increase in rational decisions in the game has been found among experienced Buddhist meditators. fMRI data show that meditators recruit the posterior insular cortex (associated with interoception) during unfair offers and show reduced activity in the anterior insular cortex compared to controls.
People whose serotonin levels have been artificially lowered will reject unfair offers more often than players with normal serotonin levels.
People who have ventromedial frontal cortex lesions were found to be more likely to reject unfair offers. This was suggested to be due to the abstractness and delay of the reward, rather than an increased emotional response to the unfairness of the offer.
Evolutionary game theory.
Other authors have used evolutionary game theory to explain behavior in the ultimatum game. Simple evolutionary models, e.g. the replicator dynamics, cannot account for the evolution of fair proposals or for rejections. These authors have attempted to provide increasingly complex models to explain fair behavior.
Sociological applications.
The ultimatum game is important from a sociological perspective, because it illustrates the human unwillingness to accept injustice. The tendency to refuse small offers may also be seen as relevant to the concept of honour.
The extent to which people are willing to tolerate different distributions of the reward from "cooperative" ventures results in inequality that is, measurably, exponential across the strata of management within large corporations. See also: Inequity aversion within companies.
History.
An early description of the ultimatum game is by Nobel laureate John Harsanyi in 1961, who footnotes Thomas Schelling's 1960 book, "The Strategy of Conflict" on its solution by dominance methods. Harsanyi says,
"An important application of this principle is to ultimatum games, i.e., to bargaining games where one of the players can firmly commit himself in advance under a heavy penalty that he will insist under all conditions upon a certain specified demand (which is called his ultimatum)... Consequently, it will be rational for the first player to commit himself to his maximum demand, i.e., to the most extreme admissible demand he can make."
Josh Clark attributes modern interest in the game to Ariel Rubinstein, but the best-known article is the 1982 experimental analysis of Güth, Schmittberger, and Schwarze. Results from testing the ultimatum game challenged the traditional economic principle that consumers are rational and utility-maximising. This started a variety of research into the psychology of humans. Since the ultimatum game's development, it has become a popular economic experiment, and was said to be "quickly catching up with the Prisoner's Dilemma as a prime showpiece of apparently irrational behavior" in a paper by Martin Nowak, Karen M. Page, and Karl Sigmund.
Variants.
In the "competitive ultimatum game" there are many proposers and the responder can accept at most one of their offers: With more than three (naïve) proposers the responder is usually offered almost the entire endowment (which would be the Nash Equilibrium assuming no collusion among proposers).
In the "ultimatum game with tipping", a tip is allowed from responder back to proposer, a feature of the trust game, and net splits tend to be more equitable.
The "reverse ultimatum game" gives more power to the responder by giving the proposer the right to offer as many divisions of the endowment as they like. Now the game only ends when the responder accepts an offer or abandons the game, and therefore the proposer tends to receive slightly less than half of the initial endowment.
Incomplete information ultimatum games: Some authors have studied variants of the ultimatum game in which either the proposer or the responder has private information about the size of the pie to be divided. These experiments connect the ultimatum game to principal-agent problems studied in contract theory.
The pirate game illustrates a variant with more than two participants with voting power, as illustrated in Ian Stewart's "A Puzzle for Pirates".
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\epsilon "
}
] | https://en.wikipedia.org/wiki?curid=801679 |
8017307 | Development (topology) | In the mathematical field of topology, a development is a countable collection of open covers of a topological space that satisfies certain separation axioms.
Let formula_0 be a topological space. A development for formula_0 is a countable collection formula_1 of open coverings of formula_0, such that for any closed subset formula_2 and any point formula_3 in the complement of formula_4, there exists a cover formula_5 such that no element of formula_5 which contains formula_3 intersects formula_4. A space with a development is called developable.
A development formula_6 such that formula_7 for all formula_8 is called a nested development. A theorem from Vickery states that every developable space in fact has a nested development. If formula_9 is a refinement of formula_10, for all formula_8, then the development is called a refined development.
Vickery's theorem implies that a topological space is a Moore space if and only if it is regular and developable. | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "F_1, F_2, \\ldots"
},
{
"math_id": 2,
"text": "C \\subset X"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "C"
},
{
"math_id": 5,
"text": "F_j"
},
{
"math_id": 6,
"text": "F_1, F_2,\\ldots"
},
{
"math_id": 7,
"text": "F_{i+1}\\subset F_i"
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "F_{i+1}"
},
{
"math_id": 10,
"text": "F_i"
}
] | https://en.wikipedia.org/wiki?curid=8017307 |
8017444 | Continuous knapsack problem | In theoretical computer science, the continuous knapsack problem (also known as the fractional knapsack problem) is an algorithmic problem in combinatorial optimization in which the goal is to fill a container (the "knapsack") with fractional amounts of different materials chosen to maximize the value of the selected materials. It resembles the classic knapsack problem, in which the items to be placed in the container are indivisible; however, the continuous knapsack problem may be solved in polynomial time whereas the classic knapsack problem is NP-hard. It is a classic example of how a seemingly small change in the formulation of a problem can have a large impact on its computational complexity.
Problem definition.
An instance of either the continuous or classic knapsack problems may be specified by the numerical capacity W of the knapsack, together with a collection of materials, each of which has two numbers associated with it: the weight "wi" of material that is available to be selected and the total value "vi" of that material. The goal is to choose an amount "xi" ≤ "wi" of each material, subject to the capacity constraint
formula_0
and maximizing the total benefit
formula_1
In the classic knapsack problem, each of the amounts "xi" must be either zero or "wi"; the continuous knapsack problem differs by allowing "xi" to range continuously from zero to "wi".
Some formulations of this problem rescale the variables "xi" to be in the range from 0 to 1. In this case the capacity constraint becomes
formula_2
and the goal is to maximize the total benefit
formula_1
Solution technique.
The continuous knapsack problem may be solved by a greedy algorithm, first published in 1957 by George Dantzig, that considers the materials in sorted order by their values per unit weight. For each material, the amount "xi" is chosen to be as large as possible: all of the material is included if the remaining capacity allows it; otherwise only the fraction that exactly fills the remaining capacity is included, and no further materials are added.
Because of the need to sort the materials, this algorithm takes time "O"("n" log "n") on inputs with "n" materials. However, by adapting an algorithm for finding weighted medians, it is possible to solve the problem in time "O"("n").
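A minimal sketch of this greedy algorithm in Python is given below (using the simple "O"("n" log "n") sorting approach rather than the weighted-median refinement); the item list and function name are illustrative. Each item is a pair ("vi", "wi"), where "vi" is the total value of the available amount "wi", and the function returns the chosen amounts "xi" together with the total benefit.
def fractional_knapsack(items, capacity):
    # Consider materials in decreasing order of value per unit weight v_i / w_i.
    order = sorted(range(len(items)), key=lambda i: items[i][0] / items[i][1], reverse=True)
    amounts = [0.0] * len(items)
    benefit = 0.0
    remaining = capacity
    for i in order:
        value, weight = items[i]
        x = min(weight, remaining)            # take as much of this material as fits
        amounts[i] = x
        benefit += value * (x / weight)       # value earned is proportional to the fraction taken
        remaining -= x
        if remaining == 0:
            break
    return amounts, benefit

items = [(60, 10), (100, 20), (120, 30)]      # (total value v_i, available weight w_i)
print(fractional_knapsack(items, capacity=50))   # ([10, 20, 20], 240.0)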
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_i x_i \\le W"
},
{
"math_id": 1,
"text": "\\sum_i x_i v_i."
},
{
"math_id": 2,
"text": "\\sum_i x_i w_i \\leq W,"
}
] | https://en.wikipedia.org/wiki?curid=8017444 |
8018271 | Fuzzy subalgebra | Fuzzy subalgebras theory is a chapter of fuzzy set theory. It is obtained from an interpretation in a multi-valued logic of axioms usually expressing the notion of subalgebra of a given algebraic structure.
Definition.
Consider a first order language for algebraic structures with a monadic predicate symbol S. Then a "fuzzy subalgebra" is a fuzzy model of a theory containing, for any "n"-ary operation h, the axioms
formula_0
and, for any constant c, S(c).
The first axiom expresses the closure of S with respect to the operation h, and the second expresses the fact that c is an element in S. As an example, assume that the valuation structure is defined in [0,1] and denote by formula_1 the operation in [0,1] used to interpret the conjunction. Then a fuzzy subalgebra of an algebraic structure whose domain is D is defined by a fuzzy subset s : D → [0,1] of D such that, for every d1...,dn in D, if h is the interpretation of the n-ary operation symbol h, then formula_2.
Moreover, if c is the interpretation of a constant c, then s(c) = 1.
A widely studied class of fuzzy subalgebras is the one in which the operation formula_1 coincides with the minimum. In such a case it is immediate to prove the following proposition.
Proposition. A fuzzy subset s of an algebraic structure defines a fuzzy subalgebra if and only if for every λ in [0,1], the closed cut {x ∈ D : s(x)≥ λ} of s is a subalgebra.
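A small numerical check of this proposition, for the additive group of the integers modulo 6 with formula_1 equal to the minimum, is sketched below in Python; the membership values are an arbitrary illustration built from the chain of subgroups {0} ⊂ {0,2,4} ⊂ the whole group.
n = 6
s = {0: 1.0, 1: 0.2, 2: 0.7, 3: 0.2, 4: 0.7, 5: 0.2}   # fuzzy subset of the integers mod 6

# Direct check of the closure condition min(s(x), s(y)) <= s(x + y).
fuzzy_closed = all(min(s[x], s[y]) <= s[(x + y) % n] for x in s for y in s)

# Equivalent check via closed cuts; it suffices to test the membership values
# that are actually attained, since the cuts only change at those values.
def cut(lam):
    return {x for x in s if s[x] >= lam}

cuts_closed = all(
    all((x + y) % n in cut(lam) for x in cut(lam) for y in cut(lam))
    for lam in set(s.values())
)

print(fuzzy_closed, cuts_closed)   # True True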
Fuzzy subgroups and submonoids.
The fuzzy subgroups and the fuzzy submonoids are particularly interesting classes of fuzzy subalgebras. In such a case a fuzzy subset "s" of a monoid (M,•,u) is a fuzzy submonoid if and only if formula_4 and formula_3,
where u is the neutral element in M.
Given a group G, a fuzzy subgroup of G is a fuzzy submonoid s of G such that s(x⁻¹) ≥ s(x) for every x in G.
It is possible to prove that the notion of fuzzy subgroup is strictly related with the notions of fuzzy equivalence. In fact, assume that S is a set, G a group of transformations in S and (G,s) a fuzzy subgroup of G. Then, by setting
we obtain a fuzzy equivalence. Conversely, let e be a fuzzy equivalence in S and, for every transformation h of S, set
Then s defines a fuzzy subgroup of transformation in S. In a similar way we can relate the fuzzy submonoids with the fuzzy orders. | [
{
"math_id": 0,
"text": "\\forall x_1, ..., \\forall x_n (S(x_1) \\land ..... \\land S(x_n) \\rightarrow S(h(x_1, ..., x_n))"
},
{
"math_id": 1,
"text": "\\odot"
},
{
"math_id": 2,
"text": "s(d_1) \\odot... \\odot s(d_n) \\leq s(\\mathbf{h}(d_1,...,d_n))"
},
{
"math_id": 3,
"text": "s(\\mathbf{u})=1"
},
{
"math_id": 4,
"text": "s(x) \\odot s(y) \\leq s(x \\cdot y)"
}
] | https://en.wikipedia.org/wiki?curid=8018271 |
8019200 | Extendible hashing | Extendible hashing is a type of hash system which treats a hash as a bit string and uses a trie for bucket lookup. Because of the hierarchical nature of the system, re-hashing is an incremental operation (done one bucket at a time, as needed). This means that time-sensitive applications are less affected by table growth than by standard full-table rehashes.
Extendible hashing was described by Ronald Fagin in 1979. Practically all modern filesystems use either extendible hashing or B-trees. In particular, the Global File System, ZFS, and the SpadFS filesystem use extendible hashing.
Example.
Assume that the hash function formula_0 returns a string of bits. The first formula_1 bits of each string will be used as indices to figure out where they will go in the "directory" (hash table), where formula_1 is the smallest number such that the index of every item in the table is unique.
Keys to be used:
formula_2
Let's assume that for this particular example, the bucket size is 1. The first two keys to be inserted, k1 and k2, can be distinguished by the most significant bit, and would be inserted into the table as follows:
Now, if k3 were to be hashed to the table, it wouldn't be enough to distinguish all three keys by one bit (because both k3 and k1 have 1 as their leftmost bit). Also, because the bucket size is one, the table would overflow. Because comparing the first two most significant bits would give each key a unique location, the directory size is doubled as follows:
And so now k1 and k3 have a unique location, being distinguished by the first two leftmost bits. Because k2 is in the top half of the table, both 00 and 01 point to it because there is no other key to compare to that begins with a 0.
formula_3
Further detail.
Now, k4 needs to be inserted, and it has the first two bits as 01..(1110), and using a 2 bit depth in the directory, this maps from 01 to Bucket A. Bucket A is full (max size 1), so it must be split; because there is more than one pointer to Bucket A, there is no need to increase the directory size.
What is needed is information about: the key size that maps the directory (the global depth), and the key size that has previously mapped the bucket (the local depth).
In order to distinguish the two action cases: if the local depth is equal to the global depth, then there is only one directory entry pointing to the bucket, and no other directory entry could point to it, so the directory must be doubled before the bucket is split; if the local depth is less than the global depth, then more than one directory entry points to the bucket, and the bucket can be split without doubling the directory.
Examining the initial case of an extendible hash structure, if each directory entry points to one bucket, then the local depth should be equal to the global depth.
The number of directory entries is equal to 2^(global depth), and the initial number of buckets is equal to 2^(local depth).
Thus if global depth = local depth = 0, then 2^0 = 1, so an initial directory of one pointer to one bucket.
Back to the two action cases; if the bucket is full:
Key 01 points to Bucket A, and Bucket A's local depth of 1 is less than the directory's global depth of 2, which means keys hashed to Bucket A have only used a 1 bit prefix (i.e. 0), and the bucket needs to have its contents split using keys 1 + 1 = 2 bits in length; in general, for any local depth d where d is less than D, the global depth, then d must be incremented after a bucket split, and the new d used as the number of bits of each entry's key to redistribute the entries of the former bucket into the new buckets.
Now,
formula_3
is tried again, with 2 bits 01.., and now key 01 points to a new bucket but there is still k2 in it (formula_4, which also begins with 01).
If h(k2) had been 000110, with key 00, there would have been no problem, because k2 would have remained in the new bucket A' and bucket D would have been empty.
So Bucket D needs to be split, but a check of its local depth, which is 2, is the same as the global depth, which is 2, so the directory must be split again, in order to hold keys of sufficient detail, e.g. 3 bits.
Now, formula_4 is in D and formula_3 is tried again, with 3 bits 011.., and it points to bucket D which already contains k2 so is full; D's local depth is 2 but now the global depth is 3 after the directory doubling, so now D can be split into buckets D' and E; the contents of D, k2, has its formula_5 retried with a new global depth bitmask of 3 and k2 ends up in D'; then the new entry k4 is retried with formula_6 bitmasked using the new global depth bit count of 3, and this gives 011, which now points to a new bucket E which is empty. So k4 goes in Bucket E.
Example implementation.
Below is the extendible hashing algorithm in Python, with the disc block / memory page association, caching and consistency issues removed. Note a problem exists if the depth exceeds the bit size of an integer, because then doubling of the directory or splitting of a bucket won't allow entries to be rehashed to different buckets.
The code uses the "least significant bits", which makes it more efficient to expand the table, as the entire directory can be copied as one block.
Python example.
PAGE_SZ = 10

class Page:
    def __init__(self) -> None:
        self.map = []
        self.local_depth = 0

    def full(self) -> bool:
        return len(self.map) >= PAGE_SZ

    def put(self, k, v) -> None:
        # Replace an existing entry for k, if any, then append the new pair.
        for i, (key, value) in enumerate(self.map):
            if key == k:
                del self.map[i]
                break
        self.map.append((k, v))

    def get(self, k):
        for key, value in self.map:
            if key == k:
                return value

    def get_local_high_bit(self):
        return 1 << self.local_depth


class ExtendibleHashing:
    def __init__(self) -> None:
        self.global_depth = 0
        self.directory = [Page()]

    def get_page(self, k):
        # The lowest global_depth bits of the hash index the directory.
        h = hash(k)
        return self.directory[h & ((1 << self.global_depth) - 1)]

    def put(self, k, v) -> None:
        p = self.get_page(k)
        full = p.full()
        p.put(k, v)
        if full:
            if p.local_depth == self.global_depth:
                # Only one directory entry points to this page: double the directory.
                self.directory *= 2
                self.global_depth += 1

            # Split the overflowing page into two pages of greater local depth.
            p0 = Page()
            p1 = Page()
            p0.local_depth = p1.local_depth = p.local_depth + 1
            high_bit = p.get_local_high_bit()
            for k2, v2 in p.map:
                h = hash(k2)
                new_p = p1 if h & high_bit else p0
                new_p.put(k2, v2)

            # Update every directory entry that pointed to the old page.
            for i in range(hash(k) & (high_bit - 1), len(self.directory), high_bit):
                self.directory[i] = p1 if i & high_bit else p0

    def get(self, k):
        return self.get_page(k).get(k)


if __name__ == "__main__":
    eh = ExtendibleHashing()
    N = 10088
    l = list(range(N))
    import random

    random.shuffle(l)
    for x in l:
        eh.put(x, x)
    print(l)
    for i in range(N):
        print(eh.get(i))
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "h(k)"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "\\begin{align}\nh(k_1) = 100100\\\\\nh(k_2) = 010110\\\\\nh(k_3) = 110110\n\\end{align}"
},
{
"math_id": 3,
"text": "h(k_4) = 011110"
},
{
"math_id": 4,
"text": "h(k_2) = 010110"
},
{
"math_id": 5,
"text": "h(k_2)"
},
{
"math_id": 6,
"text": "h(k_4)"
}
] | https://en.wikipedia.org/wiki?curid=8019200 |
8019439 | Functional response | Ecological concept; intake rate of a consumer as a function of food density
A functional response in ecology is the intake rate of a consumer as a function of food density (the amount of food available in a given ecotope). It is associated with the numerical response, which is the reproduction rate of a consumer as a function of food density. Following C. S. Holling, functional responses are generally classified into three types, which are called Holling's type I, II, and III.
Type I.
The type I functional response assumes a linear increase in intake rate with food density, either for all food densities, or only for food densities up to a maximum, beyond which the intake rate is constant. The linear increase assumes that the time needed by the consumer to process a food item is negligible, or that consuming food does not interfere with searching for food. A functional response of type I is used in the Lotka–Volterra predator–prey model. It was the first kind of functional response described and is also the simplest of the three functional responses currently detailed.
Type II.
The type II functional response is characterized by a decelerating intake rate, which follows from the assumption that the consumer is limited by its capacity to process food. Type II functional response is often modeled by a rectangular hyperbola, for instance as by Holling's disc equation, which assumes that processing of food and searching for food are mutually exclusive behaviors. The equation is
formula_0
where "f" denotes intake rate and "R" denotes food (or resource) density. The rate at which the consumer encounters food items per unit of food density is called the attack rate, "a". The average time spent on processing a food item is called the handling time, "h". Similar equations are the Monod equation for the growth of microorganisms and the Michaelis–Menten equation for the rate of enzymatic reactions.
In an example with wolves and caribou, as the number of caribou increases while holding wolves constant, the number of caribou kills increases and then levels off. This is because the proportion of caribou killed per wolf decreases as caribou density increases. The higher the density of caribou, the smaller the proportion of caribou killed per wolf. Explained slightly differently, at very high caribou densities, wolves need very little time to find prey and spend almost all their time handling prey and very little time searching. Wolves are then satiated and the total number of caribou kills reaches a plateau.
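As a small numerical illustration of the disc equation, the Python sketch below evaluates "f"("R") for increasing food densities; the parameter values "a" = 0.5 and "h" = 2.0 are arbitrary choices for the example, not empirical estimates.
def type_ii(R, a=0.5, h=2.0):
    # Holling's disc equation: intake rate as a function of food density R.
    return a * R / (1 + a * h * R)

for R in (1, 10, 100, 1000):
    print(R, round(type_ii(R), 3))
# Output: 1 0.25, 10 0.455, 100 0.495, 1000 0.5; the intake rate
# saturates at 1/h = 0.5 as food density grows, as described above.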
Type III.
The type III functional response is similar to type II in that at high levels of prey density, saturation occurs. At low prey density levels, however, the number of prey consumed is a superlinearly increasing function of prey density: formula_1
This accelerating function was originally formulated, for "k" = 2, in analogy with the kinetics of an enzyme with two binding sites. More generally, if a prey type is accepted only after every "k" encounters and rejected the "k" - 1 times in between, which mimics learning, the general form above is found.
Learning time is defined as the natural improvement of a predator's searching and attacking efficiency or the natural improvement in their handling efficiency as prey density increases. Imagine a prey density so small that the chance of a predator encountering that prey is extremely low. Because the predator finds prey so infrequently, it has not had enough experience to develop the best ways to capture and subdue that species of prey. Holling identified this mechanism in shrews and deer mice feeding on sawflies. At low numbers of sawfly cocoons per acre, deer mice especially experienced exponential growth in terms of the number of cocoons consumed per individual as the density of cocoons increased. The characteristic saturation point of the type III functional response was also observed in the deer mice. At a certain density of cocoons per acre, the consumption rate of the deer mice reached a saturation amount as the cocoon density continued to increase.
Prey switching involves two or more prey species and one predator species. When all prey species are at equal densities, the predator will indiscriminately select between prey species. However, if the density of one of the prey species decreases, then the predator will start selecting the other, more common prey species with a higher frequency, because it can increase, through learning, the efficiency with which it captures the more abundant prey. Murdoch demonstrated this effect with guppies preying on tubificids and fruit flies. As fruit fly numbers decreased, guppies switched from feeding on the fruit flies on the water's surface to feeding on the more abundant tubificids along the bed.
If predators learn while foraging, but do not reject prey before they accept one, the functional response becomes a function of the density of all prey types. This describes predators that feed on multiple prey and dynamically switch from one prey type to another. This behaviour can lead to either a type II or a type III functional response. If the density of one prey type is approximately constant, as is often the case in experiments, a type III functional response is found. When the prey densities change in approximate proportion to each other, as is the case in most natural situations, a type II functional response is typically found. This explains why the type III functional response has been found in many experiments in which prey densities are artificially manipulated, but is rare in nature.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \n\\begin{align} \nf(R) &= \\frac{a R}{1 + a h R}\n\\end{align}\n"
},
{
"math_id": 1,
"text": " \n\\begin{align} \nf(R) &= \\frac{a R^k}{1 + a h R^k}, \\;\\;\\;\\;\\;\\;\\; k > 1\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=8019439 |
8026273 | Factor base | In computational number theory, a factor base is a small set of prime numbers commonly used as a mathematical tool in algorithms involving extensive sieving for potential factors of a given integer.
Usage in factoring algorithms.
A factor base is a relatively small set of distinct prime numbers "P", sometimes together with -1. Say we want to factorize an integer "n". We generate, in some way, a large number of integer pairs ("x", "y") for which formula_0, formula_1, and formula_2 can be completely factorized over the chosen factor base—that is, all their prime factors are in "P".
In practice, several integers "x" are found such that formula_3 has all of its prime factors in the pre-chosen factor base. We represent each formula_3 expression as a row of a matrix whose integer entries are the exponents of the factors in the factor base. Linear combinations of the rows correspond to multiplication of these expressions. A linear dependence relation mod 2 among the rows leads to a desired congruence formula_4. This essentially reformulates the problem into a system of linear equations, which can be solved using numerous methods such as Gaussian elimination; in practice advanced methods like the block Lanczos algorithm are used, which take advantage of certain properties of the system.
This congruence may generate the trivial formula_5; in this case we try to find another suitable congruence. If repeated attempts to factor fail we can try again using a different factor base.
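The following Python sketch illustrates the idea in the style of Dixon's factorization: relations are collected by trial division over a fixed factor base, a dependence mod 2 is found by Gaussian elimination, and the resulting congruence of squares is used to extract a factor. The base of primes up to 29, the helper names, and the example number are illustrative, and the code is meant as an outline rather than a practical factoring routine.
from itertools import count
from math import gcd, isqrt

def smooth_exponents(m, base):
    # Exponent vector of m over the factor base, or None if m is not base-smooth.
    if m <= 0:
        return None
    exps = []
    for p in base:
        e = 0
        while m % p == 0:
            m //= p
            e += 1
        exps.append(e)
    return exps if m == 1 else None

def find_dependency(vectors):
    # Gaussian elimination over GF(2); returns indices of rows summing to an even vector.
    pivots = {}                                   # pivot column -> (reduced row, row indices)
    for i, v in enumerate(vectors):
        vec, members = [e % 2 for e in v], {i}
        for col, (pvec, pmembers) in pivots.items():
            if vec[col]:
                vec = [a ^ b for a, b in zip(vec, pvec)]
                members = members ^ pmembers
        nz = next((c for c, bit in enumerate(vec) if bit), None)
        if nz is None:
            return members                        # an all-even combination was found
        pivots[nz] = (vec, members)
    return None

def dixon(n, base=(2, 3, 5, 7, 11, 13, 17, 19, 23, 29)):
    xs, vectors = [], []
    for x in count(isqrt(n) + 1):
        exps = smooth_exponents(x * x % n, base)
        if exps is None:
            continue
        xs.append(x)
        vectors.append(exps)
        deps = find_dependency(vectors)
        if deps is None:
            continue
        # Build x^2 = y^2 (mod n) from the dependent relations.
        X, combined = 1, [0] * len(base)
        for i in deps:
            X = X * xs[i] % n
            combined = [a + b for a, b in zip(combined, vectors[i])]
        Y = 1
        for p, e in zip(base, combined):
            Y = Y * pow(p, e // 2, n) % n
        g = gcd(abs(X - Y), n)
        if 1 < g < n:
            return g
        # Trivial congruence: discard the newest relation and keep searching.
        xs.pop()
        vectors.pop()

print(dixon(84923))   # prints 163 or 521, since 84923 = 163 * 521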
Algorithms.
Factor bases are used in, for example, Dixon's factorization, the quadratic sieve, and the number field sieve. The difference between these algorithms is essentially the methods used to generate ("x", "y") candidates. Factor bases are also used in the Index calculus algorithm for computing discrete logarithms.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x \\neq \\pm y"
},
{
"math_id": 1,
"text": " x^2 \\equiv y^2 \\pmod{n}"
},
{
"math_id": 2,
"text": "x^2 \\pmod{n} \\text{ and }y^2 \\pmod{n}"
},
{
"math_id": 3,
"text": "x^2 \\pmod{n}"
},
{
"math_id": 4,
"text": "x^2 \\equiv y^2 \\pmod{n}"
},
{
"math_id": 5,
"text": "\\textstyle n = 1 \\cdot n"
}
] | https://en.wikipedia.org/wiki?curid=8026273 |
8030626 | Lindström quantifier | In mathematical logic, a Lindström quantifier is a generalized polyadic quantifier. Lindström quantifiers generalize first-order quantifiers, such as the existential quantifier, the universal quantifier, and the counting quantifiers. They were introduced by Per Lindström in 1966. They were later studied for their applications in logic in computer science and database query languages.
Generalization of first-order quantifiers.
In order to facilitate discussion, some notational conventions need explaining. We will use the expression
formula_0
for "A" an "L"-structure (or "L"-model) in a language "L", "φ" an "L"-formula, and formula_1 a tuple of elements of the domain dom("A") of "A". In other words, formula_2 denotes a (monadic) property defined on dom(A). In general, where "x" is replaced by an "n"-tuple formula_3 of free variables, formula_4 denotes an "n"-ary relation defined on dom("A"). Each quantifier formula_5 is relativized to a structure, since each quantifier is viewed as a family of relations (between relations) on that structure. For a concrete example, take the universal and existential quantifiers ∀ and ∃, respectively. Their truth conditions can be specified as
formula_6
formula_7
where formula_8 is the singleton whose sole member is dom("A"), and formula_9 is the set of all non-empty subsets of dom("A") (i.e. the power set of dom("A") minus the empty set). In other words, each quantifier is a family of properties on dom("A"), so each is called a "monadic" quantifier. Any quantifier defined as an "n" > 0-ary relation between properties on dom("A") is called "monadic". Lindström introduced polyadic ones that are "n" > 0-ary relations between relations on domains of structures.
Before we go on to Lindström's generalization, notice that any family of properties on dom("A") can be regarded as a monadic generalized quantifier. For example, the quantifier "there are exactly "n" things such that..." is a family of subsets of the domain of a structure, each of which has a cardinality of size "n". Then, "there are exactly 2 things such that φ" is true in A iff the set of things that are such that φ is a member of the set of all subsets of dom("A") of size 2.
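As a toy illustration of this view, the Python sketch below treats the quantifier "there are exactly 2 things such that φ" over a finite domain literally as a family of subsets; the domain and predicates are arbitrary examples, and in practice one would simply compare cardinalities rather than enumerate the family.
from itertools import chain, combinations

def powerset(domain):
    items = list(domain)
    return chain.from_iterable(combinations(items, r) for r in range(len(items) + 1))

def exactly_n(n, domain, phi):
    extension = frozenset(x for x in domain if phi(x))                   # the set defined by phi
    family = {frozenset(s) for s in powerset(domain) if len(s) == n}     # the quantifier as a family of subsets
    return extension in family

domain = {0, 1, 2, 3, 4}
print(exactly_n(2, domain, lambda x: x % 2 == 1))   # True: the extension is {1, 3}
print(exactly_n(2, domain, lambda x: x > 1))        # False: the extension is {2, 3, 4}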
A Lindström quantifier is a polyadic generalized quantifier, so instead being a relation between subsets of the domain, it is a relation between relations defined on the domain. For example, the quantifier formula_10 is defined semantically as
formula_11
where
formula_12
for an "n"-tuple formula_3 of variables.
Lindström quantifiers are classified according to the number structure of their parameters. For example formula_13 is a type (1,1) quantifier, whereas formula_14 is a type (2) quantifier. An example of type (1,1) quantifier is Hartig's quantifier testing equicardinality, i.e. the extension of {A, B ⊆ M: |A| = |B|}. An example of a type (4) quantifier is the Henkin quantifier.
Expressiveness hierarchy.
The first result in this direction was obtained by Lindström (1966) who showed that a type (1,1) quantifier was not definable in terms of a type (1) quantifier. After Lauri Hella (1989) developed a general technique for proving the relative expressiveness of quantifiers, the resulting hierarchy turned out to be lexicographically ordered by quantifier type:
(1) < (1, 1) < . . . < (2) < (2, 1) < (2, 1, 1) < . . . < (2, 2) < . . . (3) < . . .
For every type "t", there is a quantifier of that type that is not definable in first-order logic extended with quantifiers that are of types less than "t".
As precursors to Lindström's theorem.
Although Lindström had only partially developed the hierarchy of quantifiers which now bear his name, it was enough for him to observe that some nice properties of first-order logic are lost when it is extended with certain generalized quantifiers. For example, adding a "there exist finitely many" quantifier results in a loss of compactness, whereas adding a "there exist uncountably many" quantifier to first-order logic results in a logic no longer satisfying the Löwenheim–Skolem theorem. In 1969 Lindström proved a much stronger result now known as Lindström's theorem, which intuitively states that first-order logic is the "strongest" logic having both properties. | [
{
"math_id": 0,
"text": "\\phi^{A,x,\\bar{a}}=\\{x\\in A\\colon A\\models\\phi[x,\\bar{a}]\\}"
},
{
"math_id": 1,
"text": "\\bar{a}"
},
{
"math_id": 2,
"text": "\\phi^{A,x,\\bar{a}}"
},
{
"math_id": 3,
"text": "\\bar{x}"
},
{
"math_id": 4,
"text": "\\phi^{A,\\bar{x},\\bar{a}}"
},
{
"math_id": 5,
"text": "Q_A"
},
{
"math_id": 6,
"text": "A\\models\\forall x\\phi[x,\\bar{a}] \\iff \\phi^{A,x,\\bar{a}}\\in\\forall_A"
},
{
"math_id": 7,
"text": "A\\models\\exists x\\phi[x,\\bar{a}] \\iff \\phi^{A,x,\\bar{a}}\\in\\exists_A,"
},
{
"math_id": 8,
"text": "\\forall_A"
},
{
"math_id": 9,
"text": "\\exists_A"
},
{
"math_id": 10,
"text": "Q_A x_1 x_2 y_1 z_1 z_2 z_3(\\phi(x_1 x_2),\\psi(y_1),\\theta(z_1 z_2 z_3))"
},
{
"math_id": 11,
"text": "A\\models Q_Ax_1x_2y_1z_1z_2z_3(\\phi,\\psi,\\theta)[a] \\iff (\\phi^{A,x_1x_2,\\bar{a}},\\psi^{A,y_1,\\bar{a}},\\theta^{A,z_1z_2z_3,\\bar{a}})\\in Q_A"
},
{
"math_id": 12,
"text": "\\phi^{A,\\bar{x},\\bar{a}}=\\{(x_1,\\dots,x_n)\\in A^n\\colon A\\models\\phi[\\bar{x},\\bar{a}]\\}"
},
{
"math_id": 13,
"text": "Qxy\\phi(x)\\psi(y)"
},
{
"math_id": 14,
"text": "Qxy\\phi(x,y)"
}
] | https://en.wikipedia.org/wiki?curid=8030626 |
8031241 | Language equation | Language equations are mathematical statements that resemble numerical equations, but the variables assume values of formal languages rather than numbers. Instead of arithmetic operations in numerical equations, the variables are joined by language operations. Among the most common operations on two languages "A" and "B" are the set union "A" ∪ "B", the set intersection "A" ∩ "B", and the concatenation "A"⋅"B". Finally, as an operation taking a single operand, the set "A"* denotes the Kleene star of the language "A". Therefore, language equations can be used to represent formal grammars, since the languages generated by the grammar must be the solution of a system of language equations.
Language equations and context-free grammars.
Ginsburg and Rice
gave an alternative definition of context-free grammars by language equations. To every context-free grammar formula_0 is associated a system of equations in variables formula_1. Each variable formula_2 is an unknown language over formula_3 and is defined by the equation formula_4 where formula_5, ..., formula_6 are all productions for formula_7. Ginsburg and Rice used a fixed-point iteration argument to show that a solution always exists, and proved that the languages generated by the grammar form the least solution of this system, i.e. any other solution must be a superset of this one.
For example, the grammar
formula_8
corresponds to the equation system
formula_9
which has as solution every superset of formula_10.
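The fixed-point iteration behind this result can be sketched concretely for the example equation above. The Python fragment below (with illustrative names) starts from the empty language and repeatedly applies the right-hand side, truncated to words of bounded length so that each iterate is a finite set and the iteration terminates.
MAX_LEN = 7

def concat(A, B):
    # Concatenation of two languages, truncated to MAX_LEN.
    return {u + v for u in A for v in B if len(u + v) <= MAX_LEN}

def step(S):
    # Right-hand side of S = {a}.S.{c} U {b} U S.
    return concat({"a"}, concat(S, {"c"})) | {"b"} | S

S = set()                 # the iteration starts from the empty language
while True:
    T = step(S)
    if T == S:            # least fixed point (up to the length bound) reached
        break
    S = T

print(sorted(S, key=len))   # ['b', 'abc', 'aabcc', 'aaabccc']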
Language equations with added intersection analogously correspond to conjunctive grammars.
Language equations and finite automata.
Brzozowski and Leiss studied "left language equations" where every concatenation is with a singleton constant language on the left, e.g. formula_11 with variable formula_7, but not formula_12 nor formula_13. Each equation is of the form formula_14 with one variable on the right-hand side. Every nondeterministic finite automaton has such corresponding equation using left-concatenation and union, see Fig. 1. If intersection operation is allowed, equations correspond to alternating finite automata.
Baader and Narendran studied equations formula_15 using left-concatenation and union and proved that their satisfiability problem is EXPTIME-complete.
Conway's problem.
Conway proposed the following problem: given a constant finite language formula_16, is the greatest solution of the equation formula_17 always regular? This problem was studied by Karhumäki and Petre who gave an affirmative answer in a special case. A strongly negative answer to Conway's problem was given by Kunc who constructed a finite language formula_16 such that the greatest solution of this equation is not recursively enumerable.
Kunc also proved that the greatest solution of inequality formula_18 is always regular.
Language equations with Boolean operations.
Language equations with concatenation and Boolean operations were first studied by Parikh, Chandra, Halpern and Meyer
who proved that the satisfiability problem for a given equation is undecidable, and that if a system of language equations has a unique solution, then that solution is recursive. Later, Okhotin proved that the unsatisfiability problem is RE-complete and that every recursive language is a unique solution of some equation.
Language equations over a unary alphabet.
For a one-letter alphabet, Leiss discovered the first language equation with a nonregular solution, using complementation and concatenation operations. Later, Jeż showed that non-regular unary languages can be defined by language equations with union, intersection and concatenation, equivalent to conjunctive grammars. By this method Jeż and Okhotin proved that every recursive unary language is a unique solution of some equation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G = (V, \\Sigma, R, S)"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "X \\in V"
},
{
"math_id": 3,
"text": "\\Sigma"
},
{
"math_id": 4,
"text": "X=\\alpha_1 \\cup \\ldots \\cup \\alpha_m"
},
{
"math_id": 5,
"text": "X \\to \\alpha_1"
},
{
"math_id": 6,
"text": "X \\to \\alpha_m"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "S \\to a S c \\mid b \\mid S"
},
{
"math_id": 9,
"text": "S = ( \\{ a \\} \\cdot S \\cdot \\{ c \\} ) \\cup \\{ b \\} \\cup S"
},
{
"math_id": 10,
"text": "\\{ a^n b c^n \\mid n \\in \\mathcal{N} \\}"
},
{
"math_id": 11,
"text": "\\{a\\} \\cdot X"
},
{
"math_id": 12,
"text": "X \\cdot Y"
},
{
"math_id": 13,
"text": "X \\cdot \\{a\\}"
},
{
"math_id": 14,
"text": "X_i=F(X_1, ..., X_k)"
},
{
"math_id": 15,
"text": "F(X_1, \\ldots, X_k)=G(X_1, \\ldots, X_k)"
},
{
"math_id": 16,
"text": "L"
},
{
"math_id": 17,
"text": "LX=XL"
},
{
"math_id": 18,
"text": "LX \\subseteq XL"
}
] | https://en.wikipedia.org/wiki?curid=8031241 |
8031773 | Analytic space | An analytic space is a generalization of an analytic manifold that allows singularities. An analytic space is a space that is locally the same as an analytic variety. They are prominent in the study of several complex variables, but they also appear in other contexts.
Definition.
Fix a field "k" with a valuation. Assume that the field is complete and not discrete with respect to this valuation. For example, this includes R and C with respect to their usual absolute values, as well as fields of Puiseux series with respect to their natural valuations.
Let "U" be an open subset of "k""n", and let "f"1, ..., "f""k" be a collection of analytic functions on "U". Denote by "Z" the common vanishing locus of "f"1, ..., "f""k", that is, let "Z" = { "x" | "f"1("x") = ... = "f""k"("x") = 0 }. "Z" is an analytic variety.
Suppose that the structure sheaf of "U" is formula_0. Then "Z" has a structure sheaf formula_1, where formula_2 is the ideal generated by "f"1, ..., "f""k". In other words, the structure sheaf of "Z" consists of all functions on "U" modulo the possible ways they can differ outside of "Z".
An analytic space is a locally ringed space formula_3 such that around every point "x" of "X", there exists an open neighborhood "U" such that formula_4 is isomorphic (as locally ringed spaces) to an analytic variety with its structure sheaf. Such an isomorphism is called a local model for "X" at "x".
An analytic mapping or morphism of analytic spaces is a morphism of locally ringed spaces.
This definition is similar to the definition of a scheme. The only difference is that for a scheme, the local models are spectra of rings, whereas for an analytic space, the local models are analytic varieties. Because of this, the basic theories of analytic spaces and of schemes are very similar. Furthermore, analytic varieties have much simpler behavior than arbitrary commutative rings (for example, analytic varieties are defined over fields and are always finite-dimensional), so analytic spaces behave very similarly to finite-type schemes over a field.
Basic results.
Every point in an analytic space has a local dimension. The dimension at "x" is found by choosing a local model at "x" and determining the local dimension of the analytic variety at the point corresponding to "x".
Every point in an analytic space has a tangent space. If "x" is a point of "X" and "mx" is ideal sheaf of all functions vanishing at "x", then the cotangent space at "x" is "mx" / "mx"2. The tangent space is ("mx" / "mx"2)*, the dual vector space to the cotangent space. Analytic mappings induce pushforward maps on tangent spaces and pullback maps on cotangent spaces.
The dimension of the tangent space at "x" is called the embedding dimension at "x". By looking at a local model it is easy to see that the dimension is always less than or equal to the embedding dimension.
Smoothness.
An analytic space is called smooth at "x" if it has a local model at "x" which is an open subset of "k""n" for some "n". The analytic space is called smooth if it is smooth at every point, and in this case it is an analytic manifold. The subset of points at which an analytic space is not smooth is a closed analytic subset.
An analytic space is reduced if every local model for the space is defined by a radical sheaf of ideals. An analytic space "X" which isn't reduced has a reduction "X"red, a reduced analytic space with the same underlying topological space. There is a canonical morphism "r" : "X"red → "X". Every morphism from "X" to a reduced analytic space factors through "r".
An analytic space is normal if every stalk of the structure sheaf is a normal ring (meaning an integrally closed integral domain). In a normal analytic space, the singular locus has codimension at least two. When "X" is a local complete intersection at "x", then "X" is normal at "x".
Non-normal analytic spaces can be smoothed out into normal spaces in a canonical way. This construction is called the normalization. The normalization "N"("X") of an analytic space "X" comes with a canonical map ν : "N"("X") → "X". Every dominant morphism from a normal analytic space to "X" factors through ν.
Coherent sheaves.
An analytic space is coherent if its structure sheaf formula_5 is a coherent sheaf. A coherent sheaf of formula_5-modules is called a coherent analytic sheaf. For example, on a coherent space, locally free sheaves and sheaves of ideals are coherent analytic sheaves.
Analytic spaces over algebraically closed fields are coherent. In the complex case, this is known as the Oka coherence theorem. This is not true over non-algebraically closed fields; there are examples of real analytic spaces that are not coherent.
Generalizations.
In some situations, the concept of an analytic space is too restrictive. This is often because the ground field has additional structure that is not captured by analytic sets. In these situations, there are generalizations of analytic spaces which allow more flexibility in the local model spaces.
For example, over the real numbers, consider the circle "x"2 + "y"2 = 1. The circle is an analytic subset of the analytic space R2. But its projection onto the "x"-axis is the closed interval [−1, 1], which is not an analytic set. Therefore the image of an analytic set under an analytic map is not necessarily an analytic set. This can be avoided by working with subanalytic sets, which are much less rigid than analytic sets but which are not defined over arbitrary fields. The corresponding generalization of an analytic space is a subanalytic space. (However, under mild point-set topology hypotheses, it turns out that subanalytic spaces are essentially equivalent to subanalytic sets.)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{O}_U"
},
{
"math_id": 1,
"text": "\\mathcal{O}_Z = \\mathcal{O}_U / \\mathcal{I}_Z"
},
{
"math_id": 2,
"text": "\\mathcal{I}_Z"
},
{
"math_id": 3,
"text": "(X, \\mathcal{O}_X)"
},
{
"math_id": 4,
"text": "(U, \\mathcal{O}_U)"
},
{
"math_id": 5,
"text": "\\mathcal{O}"
}
] | https://en.wikipedia.org/wiki?curid=8031773 |
80327 | New Keynesian economics | School of macroeconomics
New Keynesian economics is a school of macroeconomics that strives to provide microeconomic foundations for Keynesian economics. It developed partly as a response to criticisms of Keynesian macroeconomics by adherents of new classical macroeconomics.
Two main assumptions define the New Keynesian approach to macroeconomics. Like the New Classical approach, New Keynesian macroeconomic analysis usually assumes that households and firms have rational expectations. However, the two schools differ in that New Keynesian analysis usually assumes a variety of market failures. In particular, New Keynesians assume that there is imperfect competition in price and wage setting to help explain why prices and wages can become "sticky", which means they do not adjust instantaneously to changes in economic conditions.
Wage and price stickiness, and the other market failures present in New Keynesian models, imply that the economy may fail to attain full employment. Therefore, New Keynesians argue that macroeconomic stabilization by the government (using fiscal policy) and the central bank (using monetary policy) can lead to a more efficient macroeconomic outcome than a "laissez faire" policy would.
New Keynesianism became part of the new neoclassical synthesis that incorporated parts of both it and new classical macroeconomics, and forms the theoretical basis of mainstream macroeconomics today.
Development of New Keynesian economics.
1970s.
The first wave of New Keynesian economics developed in the late 1970s. The first model of "Sticky information" was developed by Stanley Fischer in his 1977 article, "Long-Term Contracts, Rational Expectations, and the Optimal Money Supply Rule". He adopted a "staggered" or "overlapping" contract model. Suppose that there are two unions in the economy, who take turns to choose wages. When it is a union's turn, it chooses the wages it will set for the next two periods. This contrasts with John B. Taylor's model where the nominal wage is constant over the contract life, as was subsequently developed in his two articles: one in 1979, "Staggered wage setting in a macro model", and one in 1980, "Aggregate Dynamics and Staggered Contracts". Both Taylor and Fischer contracts share the feature that only the unions setting the wage in the current period are using the latest information: wages in half of the economy still reflect old information. The Taylor model had sticky nominal wages in addition to the sticky information: nominal wages had to be constant over the length of the contract (two periods). These early new Keynesian theories were based on the basic idea that, given fixed nominal wages, a monetary authority (central bank) can control the employment rate. Since wages are fixed at a nominal rate, the monetary authority can control the real wage (wage values adjusted for inflation) by changing the money supply and thus affect the employment rate.
1980s.
Menu costs and imperfect competition.
In the 1980s the key concept of using menu costs in a framework of imperfect competition to explain price stickiness was developed. The concept of a lump-sum cost (menu cost) to changing the price was originally introduced by Sheshinski and Weiss (1977) in their paper looking at the effect of inflation on the frequency of price-changes. The idea of applying it as a general theory of nominal price rigidity was simultaneously put forward by several economists in 1985–86. George Akerlof and Janet Yellen put forward the idea that due to bounded rationality firms will not want to change their price unless the benefit is more than a small amount. This bounded rationality leads to inertia in nominal prices and wages which can lead to output fluctuating at constant nominal prices and wages. Gregory Mankiw took the menu-cost idea and focused on the welfare effects of changes in output resulting from sticky prices. Michael Parkin also put forward the idea. Although the approach initially focused mainly on the rigidity of nominal prices, it was extended to wages and prices by Olivier Blanchard and Nobuhiro Kiyotaki in their influential article "Monopolistic Competition and the Effects of Aggregate Demand". Huw Dixon and Claus Hansen showed that even if menu costs applied to a small sector of the economy, this would influence the rest of the economy and lead to prices in the rest of the economy becoming less responsive to changes in demand.
While some studies suggested that menu costs are too small to have much of an aggregate impact, Laurence M. Ball and David Romer showed in 1990 that real rigidities could interact with nominal rigidities to create significant disequilibrium. Real rigidities occur whenever a firm is slow to adjust its real prices in response to a changing economic environment. For example, a firm can face real rigidities if it has market power or if its costs for inputs and wages are locked-in by a contract. Ball and Romer argued that real rigidities in the labor market keep a firm's costs high, which makes firms hesitant to cut prices and lose revenue. The expense created by real rigidities combined with the menu cost of changing prices makes it less likely that a firm will cut prices to a market clearing level.
Even if prices are perfectly flexible, imperfect competition can affect the influence of fiscal policy in terms of the multiplier. Huw Dixon and Gregory Mankiw developed independently simple general equilibrium models showing that the fiscal multiplier could be increasing with the degree of imperfect competition in the output market. The reason for this is that imperfect competition in the output market tends to reduce the real wage, leading to the household substituting away from consumption towards leisure. When government spending is increased, the corresponding increase in lump-sum taxation causes both leisure and consumption to decrease (assuming that they are both a normal good). The greater the degree of imperfect competition in the output market, the lower the real wage and hence the more the reduction falls on leisure (i.e. households work more) and less on consumption. Hence the fiscal multiplier is less than one, but increasing in the degree of imperfect competition in the output market.
The Calvo staggered contracts model.
In 1983 Guillermo Calvo wrote "Staggered Prices in a Utility-Maximizing Framework". The original article was written in a continuous time mathematical framework, but nowadays is mostly used in its discrete time version. The Calvo model has become the most common way to model nominal rigidity in new Keynesian models. There is a probability h that the firm can reset its price in any one period (the hazard rate), or equivalently the probability (1 − h) that the price will remain unchanged in that period (the survival rate). The probability h is sometimes called the "Calvo probability" in this context. In the Calvo model the crucial feature is that the price-setter does not know how long the nominal price will remain in place, in contrast to the Taylor model where the length of contract is known "ex ante".
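As a rough numerical sketch of what the Calvo probability implies, the duration of a price spell is geometrically distributed with mean 1/h. In the short Python sketch below the value h = 0.25 is purely illustrative, not a calibration from the article.

import numpy as np

h = 0.25                                   # illustrative Calvo hazard rate per period
rng = np.random.default_rng(0)

# Completed price spells: a spell ends in the first period in which a reset occurs,
# so spell lengths follow a geometric distribution with mean 1/h.
spells = rng.geometric(h, size=100_000)
print("simulated mean spell length  :", spells.mean())   # close to 1/h = 4 periods
print("theoretical mean spell length:", 1 / h)

# Probability that a given price is still unchanged after k periods: (1 - h)^k.
for k in (1, 4, 8):
    print(f"P(price still unchanged after {k} periods) = {(1 - h) ** k:.3f}")

The point that matches the text is that the price-setter only knows this distribution, not the actual lifetime of the price it sets.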
Coordination failure.
Coordination failure was another important new Keynesian concept developed as another potential explanation for recessions and unemployment. In recessions a factory can go idle even though there are people willing to work in it, and people willing to buy its production if they had jobs. In such a scenario, economic downturns appear to be the result of coordination failure: The invisible hand fails to coordinate the usual, optimal, flow of production and consumption. Russell Cooper and Andrew John's 1988 paper "Coordinating Coordination Failures in Keynesian Models" expressed a general form of coordination as models with multiple equilibria where agents could coordinate to improve (or at least not harm) each of their respective situations. Cooper and John based their work on earlier models including Peter Diamond's 1982 coconut model, which demonstrated a case of coordination failure involving search and matching theory. In Diamond's model producers are more likely to produce if they see others producing. The increase in possible trading partners increases the likelihood of a given producer finding someone to trade with. As in other cases of coordination failure, Diamond's model has multiple equilibria, and the welfare of one agent is dependent on the decisions of others. Diamond's model is an example of a "thick-market externality" that causes markets to function better when more people and firms participate in them. Other potential sources of coordination failure include self-fulfilling prophecies. If a firm anticipates a fall in demand, they might cut back on hiring. A lack of job vacancies might worry workers who then cut back on their consumption. This fall in demand meets the firm's expectations, but it is entirely due to the firm's own actions.
Labor market failures: Efficiency wages.
New Keynesians offered explanations for the failure of the labor market to clear. In a Walrasian market, unemployed workers bid down wages until the demand for workers meets the supply. If markets are Walrasian, the ranks of the unemployed would be limited to workers transitioning between jobs and workers who choose not to work because wages are too low to attract them. They developed several theories explaining why markets might leave willing workers unemployed. The most important of these theories was the efficiency wage theory used to explain long-term effects of previous unemployment, where short-term increases in unemployment become permanent and lead to higher levels of unemployment in the long-run.
In efficiency wage models, workers are paid at levels that maximize productivity instead of clearing the market. For example, in developing countries, firms might pay more than a market rate to ensure their workers can afford enough nutrition to be productive. Firms might also pay higher wages to increase loyalty and morale, possibly leading to better productivity. Firms can also pay higher than market wages to forestall shirking. Shirking models were particularly influential. Carl Shapiro and Joseph Stiglitz's 1984 paper "Equilibrium Unemployment as a Worker Discipline Device" created a model where employees tend to avoid work unless firms can monitor worker effort and threaten slacking employees with unemployment. If the economy is at full employment, a fired shirker simply moves to a new job. Individual firms pay their workers a premium over the market rate to ensure their workers would rather work and keep their current job instead of shirking and risk having to move to a new job. Since each firm pays more than market clearing wages, the aggregated labor market fails to clear. This creates a pool of unemployed laborers and adds to the expense of getting fired. Workers not only risk a lower wage, they risk being stuck in the pool of unemployed. Keeping wages above market clearing levels creates a serious disincentive to shirk that makes workers more efficient even though it leaves some willing workers unemployed.
1990s.
The new neoclassical synthesis.
In the early 1990s, economists began to combine the elements of new Keynesian economics developed in the 1980s and earlier with Real Business Cycle Theory. RBC models were dynamic but assumed perfect competition; new Keynesian models were primarily static but based on imperfect competition. The new neoclassical synthesis essentially combined the dynamic aspects of RBC with the imperfect competition and nominal rigidities of new Keynesian models. Tack Yun was one of the first to do this, in a model that used the Calvo pricing model. Goodfriend and King proposed a list of four elements that are central to the new synthesis: intertemporal optimization, rational expectations, imperfect competition, and costly price adjustment (menu costs). Goodfriend and King also find that the consensus models produce certain policy implications: monetary policy can affect real output in the short run, but there is no long-run trade-off, since money is not neutral in the short run but is neutral in the long run. Inflation has negative welfare effects. It is important for central banks to maintain credibility through rules-based policy such as inflation targeting.
Taylor Rule.
In 1993, John B. Taylor formulated the idea of a Taylor rule, which is a reduced-form approximation of the responsiveness of the nominal interest rate, as set by the central bank, to changes in inflation, output, or other economic conditions. In particular, the rule describes how, for each one-percent increase in inflation, the central bank tends to raise the nominal interest rate by more than one percentage point. This aspect of the rule is often called the Taylor principle. Although such rules provide concise, descriptive proxies for central bank policy, they are not, in practice, followed as explicit prescriptions by central banks when setting nominal rates.
Taylor's original version of the rule describes how the nominal interest rate responds to divergences of actual inflation rates from "target" inflation rates and of actual gross domestic product (GDP) from "potential" GDP:
formula_0
In this equation, formula_1 is the target short-term nominal interest rate (e.g. the federal funds rate in the US, the Bank of England base rate in the UK), formula_2 is the rate of inflation as measured by the GDP deflator, formula_3 is the desired rate of inflation, formula_4 is the assumed equilibrium real interest rate, formula_5 is the logarithm of real GDP, and formula_6 is the logarithm of potential output, as determined by a linear trend.
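A minimal sketch of how the rule is evaluated follows. The 0.5 coefficients and the 2% values for the inflation target and the equilibrium real rate are the commonly cited numbers from Taylor's 1993 illustration, but they should be read here as illustrative choices; for simplicity all quantities, including the output gap, are expressed in percentage points.

def taylor_rate(inflation, target_inflation=2.0, eq_real_rate=2.0,
                output_gap=0.0, a_pi=0.5, a_y=0.5):
    """Nominal interest rate implied by the Taylor rule; all inputs in percentage points."""
    return (inflation + eq_real_rate
            + a_pi * (inflation - target_inflation)
            + a_y * output_gap)

print(taylor_rate(inflation=2.0))                    # 4.0: inflation at target, closed output gap
print(taylor_rate(inflation=3.0))                    # 5.5: one point more inflation
print(taylor_rate(inflation=3.0, output_gap=1.0))    # 6.0: plus a positive output gap
# The rate rises by 1 + a_pi = 1.5 points per extra point of inflation: the Taylor principle.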
The New Keynesian Phillips curve.
The New Keynesian Phillips curve was originally derived by Roberts in 1995, and has since been used in most state-of-the-art New Keynesian DSGE models. The new Keynesian Phillips curve says that this period's inflation depends on current output and the expectations of next period's inflation. The curve is derived from the dynamic Calvo model of pricing and in mathematical terms is:
formula_7
The current period t expectations of next period's inflation are incorporated as formula_8, where formula_9 is the discount factor. The constant formula_10 captures the response of inflation to output, and is largely determined by the probability of changing price in any period, which is formula_11:
formula_12.
The less rigid nominal prices are (the higher is formula_11), the greater the effect of output on current inflation.
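The dependence of the slope on the Calvo probability can be checked directly. In the sketch below, the values β = 0.99 (a standard quarterly discount factor) and γ = 1 are assumptions chosen only for illustration.

def kappa(h, beta=0.99, gamma=1.0):
    """Slope of the New Keynesian Phillips curve implied by the reset probability h."""
    return h * (1.0 - (1.0 - h) * beta) / (1.0 - h) * gamma

for h in (0.10, 0.25, 0.50, 0.75):
    print(f"h = {h:.2f}  ->  kappa = {kappa(h):.3f}")
# kappa increases with h: the less rigid prices are, the more output moves current inflation.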
The science of monetary policy.
The ideas developed in the 1990s were put together to develop the new Keynesian dynamic stochastic general equilibrium (DSGE) model used to analyze monetary policy. This culminated in the three-equation new Keynesian model found in the survey by Richard Clarida, Jordi Gali, and Mark Gertler in the "Journal of Economic Literature". It combines the two equations of the new Keynesian Phillips curve and the Taylor rule with the "dynamic IS curve" derived from the optimal dynamic consumption equation (household's Euler equation).
formula_13
These three equations formed a relatively simple model which could be used for the theoretical analysis of policy issues. However, the model was oversimplified in some respects (for example, there is no capital or investment). Also, it does not perform well empirically.
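A very stripped-down sketch of how the three equations interact is given below. It writes the Taylor rule in deviations from steady state, picks illustrative parameter values, feeds in a one-off negative demand shock, and solves backwards under perfect foresight from the assumption that the economy is back at steady state once the shock has passed. It is meant only to show the mechanics, not to reproduce any published calibration.

import numpy as np

beta, kappa, sigma = 0.99, 0.10, 1.0      # discount factor, Phillips-curve slope, IS-curve parameter (all assumed)
phi_pi, phi_y = 1.5, 0.125                # Taylor-rule responses to inflation and output (assumed)
T = 40
v = np.zeros(T + 1); v[0] = -0.01         # one-off negative demand shock in the IS curve

y = np.zeros(T + 2)                       # y[T+1] = pi[T+1] = 0: back at steady state after the shock
pi = np.zeros(T + 2)
for t in range(T, -1, -1):
    # Substitute pi_t = beta*pi_{t+1} + kappa*y_t and i_t = phi_pi*pi_t + phi_y*y_t
    # into the IS curve y_t = y_{t+1} - (i_t - pi_{t+1})/sigma + v_t, then solve for y_t.
    denom = 1.0 + (phi_pi * kappa + phi_y) / sigma
    y[t] = (y[t + 1] + (1.0 - phi_pi * beta) / sigma * pi[t + 1] + v[t]) / denom
    pi[t] = beta * pi[t + 1] + kappa * y[t]

i = phi_pi * pi[: T + 1] + phi_y * y[: T + 1]
print(f"impact responses: y = {y[0]:.4f}, pi = {pi[0]:.4f}, i = {i[0]:.4f}")
# The negative demand shock lowers output and inflation, and the rule cuts the nominal rate in response.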
2000s.
In the new millennium there have been several advances in new Keynesian economics.
The introduction of imperfectly competitive labor markets.
Whilst the models of the 1990s focused on sticky prices in the output market, in 2000 Christopher Erceg, Dale Henderson and Andrew Levin adopted the Blanchard and Kiyotaki model of unionized labor markets by combining it with the Calvo pricing approach and introduced it into a new Keynesian DSGE model.
The development of complex DSGE models.
To have models that worked well with the data and could be used for policy simulations, quite complicated new Keynesian models were developed with several features. Seminal papers were published by Frank Smets and Rafael Wouters and also Lawrence J. Christiano, Martin Eichenbaum and Charles Evans. The common features of these models included:
Sticky information.
The idea of sticky information found in Fischer's model was later developed by Gregory Mankiw and Ricardo Reis. This added a new feature to Fischer's model: there is a fixed probability that a worker can replan their wages or prices each period. Using quarterly data, they assumed a value of 25%: that is, each quarter 25% of randomly chosen firms/unions can plan a trajectory of current and future prices based on current information. Thus if we consider the current period: 25% of prices will be based on the latest information available; the rest on information that was available when they last were able to replan their price trajectory. Mankiw and Reis found that the model of sticky information provided a good way of explaining inflation persistence.
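With a 25% updating probability per quarter, the cross-section of price setters is spread over information "vintages" of different ages, as the short sketch below illustrates; the 0.25 is the value quoted above, and the rest is simple arithmetic.

lam = 0.25                                 # quarterly probability of updating the plan
shares = [lam * (1 - lam) ** k for k in range(8)]
for k, s in enumerate(shares):
    print(f"share of prices based on information {k} quarters old: {s:.3f}")
print("share based on information more than 7 quarters old:", round((1 - lam) ** 8, 3))
# The average age of the information in use is (1 - lam) / lam = 3 quarters.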
Sticky information models do not have nominal rigidity: firms or unions are free to choose different prices or wages for each period. It is the information that is sticky, not the prices. Thus when a firm gets lucky and can re-plan its current and future prices, it will choose a trajectory of what it believes will be the optimal prices now and in the future. In general, this will involve setting a different price every period covered by the plan. This is at odds with the empirical evidence on prices. There are now many studies of price rigidity in different countries: the United States, the Eurozone, the United Kingdom and others. These studies all show that whilst there are some sectors where prices change frequently, there are also other sectors where prices remain fixed over time. The lack of sticky prices in the sticky information model is inconsistent with the behavior of prices in most of the economy. This has led to attempts to formulate a "dual stickiness" model that combines sticky information with sticky prices.
2010s.
The 2010s saw the development of models incorporating household heterogeneity into the standard New Keynesian framework, commonly referred to as 'HANK' models (Heterogeneous Agent New Keynesian). In addition to sticky prices, a typical HANK model features uninsurable idiosyncratic labor income risk, which gives rise to a non-degenerate wealth distribution. The earliest models with these two features include Oh and Reis (2012), McKay and Reis (2016) and Guerrieri and Lorenzoni (2017).
The name "HANK model" was coined by Greg Kaplan, Benjamin Moll and Gianluca Violante in a 2018 paper that additionally models households as accumulating two types of assets, one liquid and the other illiquid. This translates into rich heterogeneity in portfolio composition across households. In particular, the model fits empirical evidence by featuring a large share of households holding little liquid wealth: the 'hand-to-mouth' households. Consistent with empirical evidence, about two-thirds of these households hold non-trivial amounts of illiquid wealth, despite holding little liquid wealth. These households are known as wealthy hand-to-mouth households, a term introduced in a 2014 study of fiscal stimulus policies by Kaplan and Violante.
The existence of wealthy hand-to-mouth households in New Keynesian models matters for the effects of monetary policy, because the consumption behavior of those households is strongly sensitive to changes in disposable income, rather than variations in the interest rate (i.e. the price of future consumption relative to current consumption). The direct corollary is that monetary policy is mostly transmitted via general equilibrium effects that work through the household labor income, rather than through intertemporal substitution, which is the main transmission channel in Representative Agent New Keynesian (RANK) models.
There are two main implications for monetary policy. First, monetary policy interacts strongly with fiscal policy, because of the failure of Ricardian Equivalence due to the presence of hand-to-mouth households. In particular, changes in the interest rate shift the government's budget constraint, and the fiscal response to this shift affects households' disposable income. Second, aggregate monetary shocks are not distributionally neutral since they affect the return on capital, which affects households with different levels of wealth and assets differently.
Policy implications.
New Keynesian economists agree with New Classical economists that in the long run, the classical dichotomy holds: changes in the money supply are neutral. However, because prices are sticky in the New Keynesian model, an increase in the money supply (or equivalently, a decrease in the interest rate) does increase output and lower unemployment in the short run. Furthermore, some New Keynesian models confirm the non-neutrality of money under several conditions.
Nonetheless, New Keynesian economists do not advocate using expansive monetary policy for short run gains in output and employment, as it would raise inflationary expectations and thus store up problems for the future. Instead, they advocate using monetary policy for stabilization. That is, suddenly increasing the money supply just to produce a temporary economic boom is not recommended as eliminating the increased inflationary expectations will be impossible without producing a recession.
However, when the economy is hit by some unexpected external shock, it may be a good idea to offset the macroeconomic effects of the shock with monetary policy. This is especially true if the unexpected shock is one (like a fall in consumer confidence) which tends to lower both output and inflation; in that case, expanding the money supply (lowering interest rates) helps by increasing output while stabilizing inflation and inflationary expectations.
Studies of optimal monetary policy in New Keynesian DSGE models have focused on interest rate rules (especially 'Taylor rules'), specifying how the central bank should adjust the nominal interest rate in response to changes in inflation and output. (More precisely, optimal rules usually react to changes in the output gap, rather than changes in output "per se".) In some simple New Keynesian DSGE models, it turns out that stabilizing inflation suffices, because maintaining perfectly stable inflation also stabilizes output and employment to the maximum degree desirable. Blanchard and Galí have called this property the 'divine coincidence'.
However, they also show that in models with more than one market imperfection (for example, frictions in adjusting the employment level, as well as sticky prices), there is no longer a 'divine coincidence', and instead there is a tradeoff between stabilizing inflation and stabilizing employment. Further, while some macroeconomists believe that New Keynesian models are on the verge of being useful for quarter-to-quarter quantitative policy advice, disagreement exists.
Alves (2014) showed that the divine coincidence does not necessarily hold in the non-linear form of the standard New Keynesian model. This property would only hold if the monetary authority keeps the inflation rate at exactly 0%. At any other desired target for the inflation rate, there is an endogenous trade-off, even in the absence of real imperfections such as sticky wages, and the divine coincidence no longer holds.
Relation to other macroeconomic schools.
Over the years, a sequence of 'new' macroeconomic theories related to or opposed to Keynesianism have been influential. After World War II, Paul Samuelson used the term "neoclassical synthesis" to refer to the integration of Keynesian economics with neoclassical economics. The idea was that the government and the central bank would maintain rough full employment, so that neoclassical notions—centered on the axiom of the universality of scarcity—would apply. John Hicks' IS/LM model was central to the neoclassical synthesis.
Later work by economists such as James Tobin and Franco Modigliani involving more emphasis on the microfoundations of consumption and investment was sometimes called neo-Keynesianism. It is often contrasted with the post-Keynesianism of Paul Davidson, which emphasizes the role of fundamental uncertainty in economic life, especially concerning issues of private fixed investment.
New Keynesianism was a response to Robert Lucas and the new classical school. That school criticized the inconsistencies of Keynesianism in the light of the concept of "rational expectations". The new classicals combined a unique market-clearing equilibrium (at full employment) with rational expectations. The New Keynesians used "microfoundations" to demonstrate that price stickiness hinders markets from clearing. Thus, the rational expectations-based equilibrium need not be unique.
Whereas the neoclassical synthesis hoped that fiscal and monetary policy would maintain full employment, the new classicals assumed that price and wage adjustment would automatically attain this situation in the short run. The new Keynesians, on the other hand, saw full employment as being automatically achieved only in the long run, since prices are "sticky" in the short run. Government and central-bank policies are needed because the "long run" may be very long.
Ultimately, the differences between new classical macroeconomics and New Keynesian economics were resolved in the new neoclassical synthesis of the 1990s, which forms the basis of mainstream economics today. The Keynesian stress on the importance of centralized coordination of macroeconomic policies (e.g., monetary and fiscal stimulus), of international economic institutions such as the World Bank and International Monetary Fund (IMF), and of the maintenance of a controlled trading system was highlighted during the 2008 global financial and economic crisis. This has been reflected in the work of IMF economists and of Donald Markwell.
References.
| [
{
"math_id": 0,
"text": "i_t = \\pi_t + r_t^* + a_\\pi ( \\pi_t - \\pi_t^* ) + a_y ( y_t - y_t^* )."
},
{
"math_id": 1,
"text": "\\,i_t\\,"
},
{
"math_id": 2,
"text": "\\,\\pi_t\\,"
},
{
"math_id": 3,
"text": "\\pi^*_t"
},
{
"math_id": 4,
"text": "r_t^*"
},
{
"math_id": 5,
"text": "\\,y_t\\,"
},
{
"math_id": 6,
"text": "y_t^*"
},
{
"math_id": 7,
"text": "\\pi_{t} = \\beta E_{t}[\\pi_{t+1}] + \\kappa y_{t}"
},
{
"math_id": 8,
"text": "\\beta E_{t}[\\pi_{t+1}]"
},
{
"math_id": 9,
"text": "\\beta"
},
{
"math_id": 10,
"text": "\\kappa"
},
{
"math_id": 11,
"text": "h"
},
{
"math_id": 12,
"text": "\\kappa = \\frac{h[1-(1-h)\\beta]}{1-h}\\gamma"
},
{
"math_id": 13,
"text": "y_{t}=E_{t} y_{t+1} - \\frac{1}{\\sigma}(i_{t} - E_{t}\\pi_{t+1})+v_{t}"
}
] | https://en.wikipedia.org/wiki?curid=80327 |
8035060 | Ecosystem model | A typically mathematical representation of an ecological system
An ecosystem model is an abstract, usually mathematical, representation of an ecological system (ranging in scale from an individual population, to an ecological community, or even an entire biome), which is studied to better understand the real system.
Using data gathered from the field, ecological relationships—such as the relation of sunlight and water availability to photosynthetic rate, or that between predator and prey populations—are derived, and these are combined to form ecosystem models. These model systems are then studied in order to make predictions about the dynamics of the real system. Often, the study of inaccuracies in the model (when compared to empirical observations) will lead to the generation of hypotheses about possible ecological relations that are not yet known or well understood. Models enable researchers to simulate large-scale experiments that would be too costly or unethical to perform on a real ecosystem. They also enable the simulation of ecological processes over very long periods of time (i.e. simulating a process that takes centuries in reality, can be done in a matter of minutes in a computer model).
Ecosystem models have applications in a wide variety of disciplines, such as natural resource management, ecotoxicology and environmental health, agriculture, and wildlife conservation. Ecological modelling has even been applied to archaeology with varying degrees of success, for example, combining with archaeological models to explain the diversity and mobility of stone tools.
Types of models.
There are two major types of ecological models, which are generally applied to different types of problems: (1) "analytic" models and (2) "simulation" / "computational" models. Analytic models are typically relatively simple (often linear) systems, that can be accurately described by a set of mathematical equations whose behavior is well-known. Simulation models on the other hand, use numerical techniques to solve problems for which analytic solutions are impractical or impossible. Simulation models tend to be more widely used, and are generally considered more ecologically realistic, while analytic models are valued for their mathematical elegance and explanatory power. Ecopath is a powerful software system which uses simulation and computational methods to model marine ecosystems. It is widely used by marine and fisheries scientists as a tool for modelling and visualising the complex relationships that exist in real world marine ecosystems.
Model design.
The process of model design begins with a specification of the problem to be solved, and the objectives for the model.
Ecological systems are composed of an enormous number of biotic and abiotic factors that interact with each other in ways that are often unpredictable, or so complex as to be impossible to incorporate into a computable model. Because of this complexity, ecosystem models typically simplify the systems they are studying to a limited number of components that are well understood, and deemed relevant to the problem that the model is intended to solve.
The process of simplification typically reduces an ecosystem to a small number of state variables and mathematical functions that describe the nature of the relationships between them. The number of ecosystem components that are incorporated into the model is limited by aggregating similar processes and entities into functional groups that are treated as a unit.
After establishing the components to be modeled and the relationships between them, another important factor in ecosystem model structure is the representation of space used. Historically, models have often ignored the confounding issue of space. However, for many ecological problems spatial dynamics are an important part of the problem, with different spatial environments leading to very different outcomes. "Spatially explicit models" (also called "spatially distributed" or "landscape" models) attempt to incorporate a heterogeneous spatial environment into the model. A spatial model is one that has one or more state variables that are a function of space, or can be related to other spatial variables.
Validation.
After construction, models are "validated" to ensure that the results are acceptably accurate or realistic. One method is to test the model with multiple sets of data that are independent of the actual system being studied. This is important since certain inputs can cause a faulty model to output correct results. Another method of validation is to compare the model's output with data collected from field observations. Researchers frequently specify beforehand how much of a disparity they are willing to accept between parameters output by a model and those computed from field data.
Examples.
The Lotka–Volterra equations.
One of the earliest, and most well-known, ecological models is the predator-prey model of Alfred J. Lotka (1925) and Vito Volterra (1926). This model takes the form of a pair of ordinary differential equations, one representing a prey species, the other its predator.
formula_0
formula_1
where X is the number (or density) of prey, Y is the number of predators, α is the intrinsic growth rate of the prey, β is the predation rate coefficient, γ is the efficiency with which consumed prey are converted into new predators, and δ is the predator mortality rate.
Volterra originally devised the model to explain fluctuations in fish and shark populations observed in the Adriatic Sea after the First World War (when fishing was curtailed). However, the equations have subsequently been applied more generally. Although simple, they illustrate some of the salient features of ecological models: modelled biological populations experience growth, interact with other populations (as either predators, prey or competitors) and suffer mortality.
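A short numerical integration of the two equations is sketched below; the parameter values and initial populations are arbitrary illustrative choices rather than data from any particular study.

import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, delta = 1.0, 0.1, 0.5, 0.8     # prey growth, predation, conversion, predator mortality (assumed)

def lotka_volterra(t, state):
    X, Y = state                                    # prey and predator numbers
    dX = alpha * X - beta * X * Y
    dY = gamma * beta * X * Y - delta * Y
    return [dX, dY]

sol = solve_ivp(lotka_volterra, (0.0, 50.0), [10.0, 5.0], dense_output=True)
t = np.linspace(0.0, 50.0, 6)
print(np.round(sol.sol(t), 2))   # both populations oscillate, with the predator peaks lagging the prey peaks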
A credible, simple alternative to the Lotka-Volterra predator-prey model and its common prey dependent generalizations is the ratio dependent or Arditi-Ginzburg model. The two are the extremes of the spectrum of predator interference models. According to the authors of the alternative view, the data show that true interactions in nature are so far from the Lotka-Volterra extreme on the interference spectrum that the model can simply be discounted as wrong. They are much closer to the ratio dependent extreme, so if a simple model is needed one can use the Arditi-Ginzburg model as the first approximation.
Others.
The theoretical ecologist Robert Ulanowicz has used information theory tools to describe the structure of ecosystems, emphasizing mutual information (correlations) in studied systems. Drawing on this methodology and prior observations of complex ecosystems, Ulanowicz depicts approaches to determining the stress levels on ecosystems and predicting system reactions to defined types of alteration in their settings (such as increased or reduced energy flow, and eutrophication).
Conway's Game of Life and its variations model ecosystems where proximity of the members of a population are factors in population growth.
See also.
References.
| [
{
"math_id": 0,
"text": "\\frac{dX}{dt} = \\alpha . X - \\beta . X . Y"
},
{
"math_id": 1,
"text": "\\frac{dY}{dt} = \\gamma . \\beta . X . Y - \\delta . Y"
}
] | https://en.wikipedia.org/wiki?curid=8035060 |
803716 | Photoemission electron microscopy | Photoemission electron microscopy (PEEM, also called photoelectron microscopy, PEM) is a type of electron microscopy that utilizes local variations in electron emission to generate image contrast. The excitation is usually produced by ultraviolet light, synchrotron radiation or X-ray sources. PEEM measures the local X-ray absorption coefficient indirectly by collecting the emitted secondary electrons generated in the electron cascade that follows the creation of the primary core hole in the absorption process. PEEM is a surface-sensitive technique because the emitted electrons originate from a shallow layer. In physics, this technique is referred to as PEEM, which goes together naturally with low-energy electron diffraction (LEED), and low-energy electron microscopy (LEEM). In biology, it is called photoelectron microscopy (PEM), which fits with photoelectron spectroscopy (PES), transmission electron microscopy (TEM), and scanning electron microscopy (SEM).
History.
Initial development.
In 1933, Ernst Brüche reported images of cathodes illuminated by UV light. This work was extended by two of his colleagues, H. Mahl and J. Pohl. Brüche made a sketch of his photoelectron emission microscope in his 1933 paper (Figure 1). This is evidently the first photoelectron emission microscope (PEEM).
Improved techniques.
In 1963, Gertrude F. Rempfer designed the electron optics for an early ultrahigh-vacuum (UHV) PEEM. In 1965, G. Burroughs at the Night Vision Laboratory, Fort Belvoir, Virginia built the bakeable electrostatic lenses and metal-sealed valves for PEEM. During the 1960s, in the PEEM as well as the TEM, the specimens were grounded and could be transferred in the UHV environment to several positions for photocathode formation, processing and observation. These electron microscopes were used for only a brief period of time, but the components live on. The first commercially available PEEM was designed and tested by Engel during the 1960s as part of his thesis work under E. Ruska; he developed it into a marketable product, the "Metioskop KE3", which was marketed by Balzers in 1971. The electron lenses and voltage divider of the PEEM were incorporated into one version of a PEEM for biological studies in Eugene, Oregon around 1970.
Further research.
During the 1970s and 1980s the second generation (PEEM-2) and third generation (PEEM-3) microscopes were constructed. PEEM-2 is a conventional, non-aberration-corrected instrument employing electrostatic lenses. It uses a cooled charge-coupled device (CCD) fiber-coupled to a phosphor to detect the electron-optical image. The aberration-corrected microscope PEEM-3 employs a curved electron mirror to counter the lowest-order aberrations of the electron lenses and the accelerating field.
Background.
Photoelectric effect.
The photoemission or photoelectric effect is a quantum electronic phenomenon in which electrons (photoelectrons) are emitted from matter after the absorption of energy from electromagnetic radiation such as UV light or X-ray.
When UV light or X-ray is absorbed by matter, electrons are excited from core levels into unoccupied states, leaving empty core states. Secondary electrons are generated by the decay of the core hole. Auger processes and inelastic electron scattering create a cascade of low-energy electrons. Some electrons penetrate the sample surface and escape into vacuum. A wide spectrum of electrons is emitted with energies between the energy of the illumination and the work function of the sample. This wide electron distribution is the principal source of image aberration in the microscope.
Quantitative analysis.
Using Einstein's method, the following equations are used:
Energy of photon = Energy needed to remove an electron + Kinetic energy of the emitted electron
formula_0
"h" is Planck's constant;
"f" is the frequency of the incident photon;
formula_1 is the work function;
formula_2 is the maximum kinetic energy of ejected electrons;
"f"0 is the threshold frequency for the photoelectric effect to occur;
"m" is the rest mass of the ejected electron;
"v"m is the speed of the ejected electron.
Electron emission microscopy.
Electron emission microscopy is a type of electron microscopy in which the information carrying beam of electrons originates from the specimen. The source of energy causing the electron emission can be heat (thermionic emission), light (photoelectron emission), ions, or neutral particles, but normally excludes field emission and other methods involving a point source or tip microscopy.
Photoelectron imaging.
Photoelectron imaging includes any form of imaging in which the source of information is the distribution of points from which electrons are ejected from the specimen by the action of photons. The technique with the highest resolution photoelectron imaging is presently photoelectron emission microscopy using UV light.
Photoemission electron microscope.
A photoemission electron microscope is a parallel imaging instrument. It creates at any given moment a complete picture of the photoelectron distribution emitted from the imaged surface region.
Light sources.
The viewed area of the specimen must be illuminated homogeneously with appropriate radiation (ranging from UV to hard X-rays). UV light is the most common radiation used in PEEM because very bright sources are available, such as mercury lamps. However, other wavelengths (such as soft X-rays) are preferred where analytical information is required.
Electron optical column and resolution.
The electron optical column contains two or more electrostatic or magnetic electron lenses, corrector elements such as a stigmator and deflector, and an angle-limiting aperture in the back focal plane of one of the lenses.
As in any emission electron microscope, the objective or cathode lens determines the resolution. The latter is dependent on the electron-optical qualities, such as spherical aberrations, and the energy spread of the photoemitted electrons. The electrons are emitted into the vacuum with an angular distribution close to a cosine square function. A significant velocity component parallel to the surface will decrease the lateral resolution. The faster electrons, leaving the surface exactly along the center line of the PEEM, will also negatively influence the resolution due to the chromatic aberration of the cathode lens. The resolution is inversely proportional to the accelerating field strength at the surface but proportional to the energy spread of the electrons. So resolution r is approximately:
formula_3
In the equation, d is the distance between the specimen and the objective, ΔE is the distribution width of the initial electron energies and U is the accelerating voltage.
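For an order-of-magnitude feel for this estimate, the short sketch below evaluates it for assumed values of the gap, the energy spread and the accelerating voltage; none of these numbers is taken from the text.

d = 2e-3            # specimen-to-objective distance, m (assumed)
delta_E = 0.5       # energy spread of the photoemitted electrons, eV (assumed)
U = 20e3            # accelerating voltage, V (assumed)

r = d * delta_E / U                              # the charge e cancels when delta_E is in eV and U in volts
print(f"estimated resolution: {r * 1e9:.0f} nm")  # 50 nm for these values

Halving the energy spread or increasing the accelerating field improves the estimate proportionally, which is the trade-off described above.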
Besides the cathode or objective lens, situated on the left-hand side of Figure 4, two more lenses are utilized to create an image of the specimen: an intermediate three-electrode lens is used to vary the total magnification between 100× if the lens is deactivated, and up to 1000× when needed. On the right-hand side of Figure 4 is the projector, a three-electrode lens combined with a two-element deceleration lens. The main task of this lens combination is the deceleration of the fast 20 keV electrons to energies for which the channelplate has its highest sensitivity. Such an image intensifier has its best performance for impinging electrons with kinetic energies of roughly 1 keV.
Energy filter.
An energy filter can be added to the instrument in order to select the electrons that will contribute to the image. This option is particularly used for analytical applications of the PEEM. By using an energy filter, a PEEM microscope can be seen as an imaging form of ultraviolet photoelectron spectroscopy (UPS) or X-ray photoelectron spectroscopy (XPS). By using this method, spatially resolved photoemission spectra can be acquired with spatial resolution on the 100 nm scale and with sub-eV energy resolution. Using such an instrument, one can acquire elemental images with chemical-state sensitivity, or work-function maps. Also, since the photoelectrons are emitted only at the very surface of the material, surface termination maps can be acquired.
Detector.
A detector is placed at the end of the electron optical column. Usually, a phosphor screen is used to convert the electron image to a photon image. The choice of phosphor type is governed by resolution considerations. A multichannel plate detector imaged by a CCD camera can be used in place of the phosphor screen.
Time-resolved PEEM.
Compared to many other electron microscopy techniques, time-resolved PEEM offers a very high temporal resolution of only a few femtoseconds with prospects of advancing it to the attosecond regime. The reason is that temporal electron pulse broadening does not deteriorate the temporal resolution because electrons are only used to achieve a high spatial resolution. The temporal resolution is reached by using very short light pulses in a pump-probe setup. A first pulse optically excites dynamics like surface plasmons on a sample surface and a second pulse probes the dynamics after a certain waiting time by photoemitting electrons. The photoemission rate is influenced by the local excitation level of the sample. Hence, spatial information about the dynamics on the sample can be gained. By repeating this experiment with a series of waiting times between pump and probe pulse, a movie of the dynamics on a sample can be recorded.
Laser pulses in the visible spectral range are typically used in combination with a PEEM. They offer a temporal resolution of a few to 100 fs. In recent years, pulses with shorter wavelengths have been used to achieve a more direct access to the instantaneous electron excitation in the material. Here, a first pulse in the visible excites dynamics near the sample surface and a second pulse with a photon energy significantly above the work function of the material emits the electrons. By employing additional time-of-flight or high-pass energy recording in the PEEM, information about the instantaneous electronic distribution in a nanostructure can be extracted with high spatial and temporal resolution.
Efforts to achieve attosecond temporal resolution, and with it to directly record optical fields around nanostructures with so far unreached spatio-temporal resolution, are still ongoing.
Limitations.
The general limitation of PEEM, which it shares with most surface science methods, is that PEEM operates only under fairly restricted vacuum conditions. Whenever electrons are used to excite a specimen or to carry information from its surface, there has to be a vacuum with an appropriate mean free path for the electrons. With "in-situ" PEEM techniques, water and aqueous solutions can nevertheless be observed by PEEM.
The resolution of PEEM is limited to about 10 nm, which results from a spread of the photoelectron emission angle. Angle resolved photoemission spectroscopy (ARPES) is a powerful tool for structure analysis. However, it may be difficult to make angle-resolved and energy-selective PEEM measurements because of a lack of intensity. The availability of synchrotron-radiation light sources can offer exciting possibilities in this regard.
Comparison from other techniques.
Transmission electron microscopy (TEM) and scanning electron microscopy (SEM): PEEM differs from these two microscopies by using an electric accelerating field at the surface of the specimen. The specimen is part of the electron-optical system.
Low-energy electron microscopy (LEEM) and mirror electron microscopy (MEM): these two electron emission microscopies use electron guns to supply beams which are directed toward the specimen, decelerated and backscattered from the specimen or reflected just before reaching the specimen. In photoemission electron microscopy (PEEM) the same specimen geometry and immersion lens are used, but the electron guns are omitted.
New PEEM technologies.
Time-resolved photoemission electron microscopy (TR-PEEM) is well suited to the real-time observation of fast processes on surfaces when the instrument is equipped with pulsed synchrotron radiation for illumination.
Notes.
| [
{
"math_id": 0,
"text": "hf=\\phi + E_{k_{max}} \\,"
},
{
"math_id": 1,
"text": "\\phi=h f_0 \\ "
},
{
"math_id": 2,
"text": "E_{k_{max}}=\\frac{1}{2} m v_m^2 "
},
{
"math_id": 3,
"text": "r\\approx \\frac{d\\,\\Delta\\,E}{e\\,U}"
}
] | https://en.wikipedia.org/wiki?curid=803716 |
8038396 | West number | The West number is an empirical parameter used to characterize the performance of Stirling engines and other Stirling systems. It is very similar to the Beale number, where a larger number indicates higher performance; however, the West number includes temperature compensation. The West number is often used to approximate the power output of a Stirling engine. The average value is about 0.25 for a wide variety of engines, although it may range up to 0.35, particularly for engines operating with a high temperature differential.
The West number may be defined as:
formula_0
where: "W"n is the West number, "W"o is the power output of the engine (watts), "P" is the mean average gas pressure (Pa), "V" is the swept volume of the expansion space (m3), "f" is the engine cycle frequency (Hz), "T"H is the absolute temperature of the expansion space or heater, "T"K is the absolute temperature of the compression space or cooler, and "B"n is the Beale number for an engine operating between the same temperatures.
When the Beale number is known, but the West number is not known, it is possible to calculate it. First calculate the West number at the temperatures "T"H and "T"K for which the Beale number is known, and then use the resulting West number to calculate output power for other temperatures.
To estimate the power output of a new engine design, nominal values are assumed for the West number, pressure, swept volume and frequency, and the power is calculated as follows:
formula_1
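The sketch below evaluates this relation for a hypothetical engine. The West number of 0.25 is the typical value quoted above; the pressure, swept volume, frequency and temperatures are illustrative assumptions, not measured figures.

def stirling_power(Wn, P, V, f, T_hot, T_cold):
    """Estimated power output from the West-number relation."""
    return Wn * P * V * f * (T_hot - T_cold) / (T_hot + T_cold)

Wo = stirling_power(Wn=0.25,      # typical West number
                    P=5e5,        # mean gas pressure, Pa (assumed)
                    V=100e-6,     # swept volume of the expansion space, m^3 (assumed)
                    f=25,         # cycle frequency, Hz (assumed)
                    T_hot=900,    # expansion-space temperature, K (assumed)
                    T_cold=300)   # compression-space temperature, K (assumed)
print(f"estimated power output: {Wo:.0f} W")   # about 156 W for these values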
For example, with an absolute temperature ratio of 2, the portion of the equation representing temperature correction equals 1/3. With a temperature ratio of 3, the temperature term is 1/2. This factor accounts for the difference between the West equation, and the Beale equation in which this temperature term is taken as a constant. Thus, the Beale number is typically in the range of 0.10 to 0.15, which is about 1/3 to 1/2 the value of the West number. | [
{
"math_id": 0,
"text": "W_n = \\frac{Wo}{PVf} \\frac{T_\\text{H} + T_\\text{K}}{T_\\text{H} - T_\\text{K}} = B_n \\frac{T_\\text{H} + T_\\text{K}}{T_\\text{H} - T_\\text{K}}"
},
{
"math_id": 1,
"text": "W_o = W_n PVf \\frac{T_\\text{H} - T_\\text{K}}{T_\\text{H} + T_\\text{K}}"
}
] | https://en.wikipedia.org/wiki?curid=8038396 |
803894 | Spectral sequence | Tool in homological algebra
In homological algebra and algebraic topology, a spectral sequence is a means of computing homology groups by taking successive approximations. Spectral sequences are a generalization of exact sequences, and since their introduction by Jean Leray (1946a, 1946b), they have become important computational tools, particularly in algebraic topology, algebraic geometry and homological algebra.
Discovery and motivation.
Motivated by problems in algebraic topology, Jean Leray introduced the notion of a sheaf and found himself faced with the problem of computing sheaf cohomology. To compute sheaf cohomology, Leray introduced a computational technique now known as the Leray spectral sequence. This gave a relation between cohomology groups of a sheaf and cohomology groups of the pushforward of the sheaf. The relation involved an infinite process. Leray found that the cohomology groups of the pushforward formed a natural chain complex, so that he could take the cohomology of the cohomology. This was still not the cohomology of the original sheaf, but it was one step closer in a sense. The cohomology of the cohomology again formed a chain complex, and its cohomology formed a chain complex, and so on. The limit of this infinite process was essentially the same as the cohomology groups of the original sheaf.
It was soon realized that Leray's computational technique was an example of a more general phenomenon. Spectral sequences were found in diverse situations, and they gave intricate relationships among homology and cohomology groups coming from geometric situations such as fibrations and from algebraic situations involving derived functors. While their theoretical importance has decreased since the introduction of derived categories, they are still the most effective computational tool available. This is true even when many of the terms of the spectral sequence are incalculable.
Unfortunately, because of the large amount of information carried in spectral sequences, they are difficult to grasp. This information is usually contained in a rank three lattice of abelian groups or modules. The easiest cases to deal with are those in which the spectral sequence eventually collapses, meaning that going out further in the sequence produces no new information. Even when this does not happen, it is often possible to get useful information from a spectral sequence by various tricks.
Formal definition.
Cohomological spectral sequence.
Fix an abelian category, such as a category of modules over a ring, and a nonnegative integer formula_0. A cohomological spectral sequence is a sequence formula_1 of objects formula_2 and endomorphisms formula_3, such that for every formula_4 the endomorphism formula_3 is a differential (its composite with itself is zero) and the next sheet is isomorphic to the homology of the current sheet, that is, to the kernel of formula_3 modulo its image.
Usually the isomorphisms are suppressed and we write formula_9 instead. An object formula_2 is called "sheet" (as in a sheet of paper), or sometimes a "page" or a "term"; an endomorphism formula_10 is called "boundary map" or "differential". Sometimes formula_11 is called the "derived object" of formula_7.
Bigraded spectral sequence.
In reality spectral sequences mostly occur in the category of doubly graded modules over a ring "R" (or doubly graded sheaves of modules over a sheaf of rings), i.e. every sheet is a bigraded R-module formula_12
So in this case a cohomological spectral sequence is a sequence formula_1 of bigraded R-modules formula_13 and, for every module, the direct sum of endomorphisms formula_14 of bidegree formula_15, such that for every formula_4 each differential squares to zero and the modules of the next sheet are the homology of the current sheet with respect to its differential.
The notation used here is called "complementary degree". Some authors write formula_17 instead, where formula_18 is the "total degree". Depending upon the spectral sequence, the boundary map on the first sheet can have a degree which corresponds to "r" = 0, "r" = 1, or "r" = 2. For example, for the spectral sequence of a filtered complex, described below, "r"0 = 0, but for the Grothendieck spectral sequence, "r"0 = 2. Usually "r"0 is zero, one, or two. In the ungraded situation described above, "r"0 is irrelevant.
Homological spectral sequence.
Mostly the objects we are talking about are chain complexes, that occur with descending (like above) or ascending order. In the latter case, by replacing formula_19 with formula_20 and formula_21 with formula_22 (bidegree formula_23), one receives the definition of a homological spectral sequence analogously to the cohomological case.
Spectral sequence from a chain complex.
The most elementary example in the ungraded situation is a chain complex "C•". An object "C•" in an abelian category of chain complexes naturally comes with a differential "d". Let "r"0 = 0, and let "E"0 be "C•". This forces "E"1 to be the complex "H"("C•"): At the "i"'th location this is the "i"'th homology group of "C•". The only natural differential on this new complex is the zero map, so we let "d"1 = 0. This forces formula_24 to equal formula_25, and again our only natural differential is the zero map. Putting the zero differential on all the rest of our sheets gives a spectral sequence whose terms are "E"0 = "C•" and "E""r" = "H"("C•") for all "r" ≥ 1.
The terms of this spectral sequence stabilize at the first sheet because its only nontrivial differential was on the zeroth sheet. Consequently, we can get no more information at later steps. Usually, to get useful information from later sheets, we need extra structure on the formula_7.
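As a minimal worked instance of this construction (the example is chosen here purely for illustration), take the two-term complex of abelian groups, concentrated in degrees 1 and 0, whose differential is multiplication by 2:

\[
C_\bullet:\qquad \cdots \longrightarrow 0 \longrightarrow \mathbb{Z}
\xrightarrow{\ \times 2\ } \mathbb{Z} \longrightarrow 0 \longrightarrow \cdots
\]
% The zeroth sheet is the complex itself; the first sheet is its homology:
\[
E_0 = C_\bullet, \qquad
E_1 = H_*(C_\bullet) \quad\text{with}\quad H_1 = \ker(\times 2) = 0,
\quad H_0 = \mathbb{Z}/2\mathbb{Z},
\]
% and since every later differential is zero, the sequence is constant from there on:
\[
E_r \cong E_1 \quad\text{for all } r \ge 1 .
\]

All the information of this spectral sequence is therefore already visible on the first sheet, exactly as described above.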
Visualization.
A doubly graded spectral sequence has a tremendous amount of data to keep track of, but there is a common visualization technique which makes the structure of the spectral sequence clearer. We have three indices, "r", "p", and "q". An object formula_7 can be seen as the formula_26 checkered page of a book. On these sheets, we will take "p" to be the horizontal direction and "q" to be the vertical direction. At each lattice point we have the object formula_27. Now turning to the next page means taking homology, that is the formula_28 page is a subquotient of the formula_26 page. The total degree "n" = "p" + "q" runs diagonally, northwest to southeast, across each sheet. In the homological case, the differentials have bidegree (−"r", "r" − 1), so they decrease "n" by one. In the cohomological case, "n" is increased by one. The differentials change their direction with each turn with respect to r.
The red arrows demonstrate the case of a first quadrant sequence (see example below), where only the objects of the first quadrant are non-zero. While turning pages, either the domain or the codomain of all the differentials become zero.
Properties.
Categorical properties.
The set of cohomological spectral sequences forms a category: a morphism of spectral sequences formula_29 is by definition a collection of maps formula_30 which are compatible with the differentials, i.e. formula_31, and with the given isomorphisms between the cohomology of the "r"th step and the "(r+1)"th sheets of "E" and "E' ", respectively: formula_32. In the bigraded case, they should also respect the grading: formula_33
Multiplicative structure.
A cup product gives a ring structure to a cohomology group, turning it into a cohomology ring. Thus, it is natural to consider a spectral sequence with a ring structure as well. Let formula_34 be a spectral sequence of cohomological type. We say it has multiplicative structure if (i) formula_7 are (doubly graded) differential graded algebras and (ii) the multiplication on formula_11 is induced by that on formula_7 via passage to cohomology.
A typical example is the cohomological Serre spectral sequence for a fibration formula_35, when the coefficient group is a ring "R". It has the multiplicative structure induced by the cup products of fibre and base on the formula_36-page. However, in general the limiting term formula_37 is not isomorphic as a graded algebra to H("E"; "R"). The multiplicative structure can be very useful for calculating differentials on the sequence.
Constructions of spectral sequences.
Spectral sequences can be constructed by various ways. In algebraic topology, an exact couple is perhaps the most common tool for the construction. In algebraic geometry, spectral sequences are usually constructed from filtrations of cochain complexes.
Spectral sequence of an exact couple.
Another technique for constructing spectral sequences is William Massey's method of exact couples. Exact couples are particularly common in algebraic topology. Despite this they are unpopular in abstract algebra, where most spectral sequences come from filtered complexes.
To define exact couples, we begin again with an abelian category. As before, in practice this is usually the category of doubly graded modules over a ring. An exact couple is a pair of objects ("A", "C"), together with three homomorphisms between these objects: "f" : "A" → "A", "g" : "A" → "C" and "h" : "C" → "A" subject to the exactness conditions: image "f" = kernel "g", image "g" = kernel "h", and image "h" = kernel "f".
We will abbreviate this data by ("A", "C", "f", "g", "h"). Exact couples are usually depicted as triangles. We will see that "C" corresponds to the "E"0 term of the spectral sequence and that "A" is some auxiliary data.
To pass to the next sheet of the spectral sequence, we will form the derived couple. We set: "A'" = "f"("A"); "d" = "g" ∘ "h", which is a differential on "C" since "h" ∘ "g" = 0; "C'" = ker "d" / im "d"; "f '" the restriction of "f" to "A'"; "g'" : "A'" → "C'" given by "g'"("f"("a")) = "g"("a") mod im "d"; and "h'" : "C'" → "A'" induced by "h". (The exactness conditions ensure that "g'" and "h'" are well defined.)
From here it is straightforward to check that ("A'", "C'", "f '", "g'", "h'") is an exact couple. "C'" corresponds to the "E1" term of the spectral sequence. We can iterate this procedure to get exact couples ("A"("n"), "C"("n"), "f"("n"), "g"("n"), "h"("n")).
In order to construct a spectral sequence, let "En" be "C"("n") and "dn" be "g"("n") "h"("n").
The spectral sequence of a filtered complex.
A very common type of spectral sequence comes from a filtered cochain complex, as it naturally induces a bigraded object. Consider a cochain complex formula_38 together with a descending filtration, formula_39 . We require that the boundary map is compatible with the filtration, i.e. formula_40, and that the filtration is "exhaustive", that is, the union of the set of all formula_41 is the entire chain complex formula_42. Then there exists a spectral sequence with formula_43 and formula_44. Later, we will also assume that the filtration is "Hausdorff" or "separated", that is, the intersection of the set of all formula_41 is zero.
The filtration is useful because it gives a measure of nearness to zero: As "p" increases, formula_41 gets closer and closer to zero. We will construct a spectral sequence from this filtration where coboundaries and cocycles in later sheets get closer and closer to coboundaries and cocycles in the original complex. This spectral sequence is doubly graded by the filtration degree "p" and the complementary degree "q" = "n" − "p".
Construction.
formula_45 has only a single grading and a filtration, so we first construct a doubly graded object for the first page of the spectral sequence. To get the second grading, we will take the associated graded object with respect to the filtration. We will write it in an unusual way which will be justified at the formula_46 step:
formula_47
formula_48
formula_49
formula_50
Since we assumed that the boundary map was compatible with the filtration, formula_51 is a doubly graded object and there is a natural doubly graded boundary map formula_52 on formula_51. To get formula_46, we take the homology of formula_51.
formula_53
formula_54
formula_55
formula_56
Notice that formula_57 and formula_58 can be written as the images in formula_59 of
formula_60
formula_61
and that we then have
formula_62
formula_63 are exactly the elements which the differential pushes up one level in the filtration, and formula_64 are exactly the image of the elements which the differential pushes up zero levels in the filtration. This suggests that we should choose formula_65 to be the elements which the differential pushes up "r" levels in the filtration and formula_66 to be image of the elements which the differential pushes up "r-1" levels in the filtration. In other words, the spectral sequence should satisfy
formula_67
formula_68
formula_69
and we should have the relationship
formula_70
For this to make sense, we must find a differential formula_10 on each formula_2 and verify that it leads to homology isomorphic to formula_71. The differential
formula_72
is defined by restricting the original differential formula_73 defined on formula_74 to the subobject formula_65. It is straightforward to check that the homology of formula_2 with respect to this differential is formula_71, so this gives a spectral sequence. Unfortunately, the differential is not very explicit. Determining differentials or finding ways to work around them is one of the main challenges to successfully applying a spectral sequence.
The spectral sequence of a double complex.
Another common spectral sequence is the spectral sequence of a double complex. A double complex is a collection of objects "Ci,j" for all integers "i" and "j" together with two differentials, "d I" and "d II". "d I" is assumed to decrease "i", and "d II" is assumed to decrease "j". Furthermore, we assume that the differentials "anticommute", so that "d I d II" + "d II d I" = 0. Our goal is to compare the iterated homologies formula_75 and formula_76. We will do this by filtering our double complex in two different ways. Here are our filtrations:
formula_77
formula_78
To get a spectral sequence, we will reduce to the previous example. We define the "total complex" "T"("C"•,•) to be the complex whose "n"'th term is formula_79 and whose differential is "d I" + "d II". This is a complex because "d I" and "d II" are anticommuting differentials. The two filtrations on "Ci,j" give two filtrations on the total complex:
formula_80
formula_81
To show that these spectral sequences give information about the iterated homologies, we will work out the "E"0, "E"1, and "E"2 terms of the "I" filtration on "T"("C"•,•). The "E"0 term is clear:
formula_82
where "n"
"p" + "q".
To find the "E"1 term, we need to determine "d I" + "d II" on "E"0. Notice that the differential must have degree −1 with respect to "n", so we get a map
formula_83
Consequently, the differential on "E0" is the map "C""p","q" → "C""p","q"−1 induced by "d I" + "d II". But "d I" has the wrong degree to induce such a map, so "d I" must be zero on "E"0. That means the differential is exactly "d II", so we get
formula_84
To find "E2", we need to determine
formula_85
Because "E"1 was exactly the homology with respect to "d II", "d II" is zero on "E"1. Consequently, we get
formula_86
Using the other filtration gives us a different spectral sequence with a similar "E"2 term:
formula_87
What remains is to find a relationship between these two spectral sequences. It will turn out that as "r" increases, the two sequences will become similar enough to allow useful comparisons.
Convergence, degeneration, and abutment.
Interpretation as a filtration of cycles and boundaries.
Let "E""r" be a spectral sequence, starting with say "r" = 1. Then there is a sequence of subobjects
formula_88
such that formula_89; indeed, recursively we let formula_90 and let formula_91 be so that formula_92 are the kernel and the image of formula_93
We then let formula_94 and
formula_95;
it is called the limiting term. (Of course, such formula_37 need not exist in the category, but this is usually a non-issue since for example in the category of modules such limits exist or since in practice a spectral sequence one works with tends to degenerate; there are only finitely many inclusions in the sequence above.)
Terms of convergence.
We say a spectral sequence converges weakly if there is a graded object formula_96 with a filtration formula_97 for every formula_98, and for every formula_99 there exists an isomorphism formula_100. It converges to formula_96 if the filtration formula_97 is Hausdorff, i.e. formula_101. We write
formula_102
to mean that whenever "p" + "q" = "n", formula_27 converges to formula_103.
We say that a spectral sequence formula_27 abuts to formula_103 if for every formula_104 there is formula_105 such that for all formula_106, formula_107. Then formula_108 is the limiting term. The spectral sequence is regular or degenerates at formula_109 if the differentials formula_110 are zero for all formula_111. If in particular there is formula_112, such that the formula_113 sheet is concentrated on a single row or a single column, then we say it collapses. In symbols, we write:
formula_114
The "p" indicates the filtration index. It is very common to write the formula_115 term on the left-hand side of the abutment, because this is the most useful term of most spectral sequences. The spectral sequence of an unfiltered chain complex degenerates at the first sheet (see first example): since nothing happens after the zeroth sheet, the limiting sheet formula_116 is the same as formula_46.
The five-term exact sequence of a spectral sequence relates certain low-degree terms and "E"∞ terms.
Examples of degeneration.
The spectral sequence of a filtered complex, continued.
Notice that we have a chain of inclusions:
formula_117
We can ask what happens if we define
formula_118
formula_119
formula_120
formula_103 is a natural candidate for the abutment of this spectral sequence. Convergence is not automatic, but happens in many cases. In particular, if the filtration is finite and consists of exactly "r" nontrivial steps, then the spectral sequence degenerates after the "r"th sheet. Convergence also occurs if the complex and the filtration are both bounded below or both bounded above.
To describe the abutment of our spectral sequence in more detail, notice that we have the formulas:
formula_121
formula_122
To see what this implies for formula_123, recall that we assumed that the filtration was separated. This implies that as "r" increases, the kernels shrink, until we are left with formula_124. For formula_125, recall that we assumed that the filtration was exhaustive. This implies that as "r" increases, the images grow until we reach formula_126. We conclude
formula_127,
that is, the abutment of the spectral sequence is the "p"th graded part of the "(p+q)"th homology of "C". If our spectral sequence converges, then we conclude that:
formula_128
Long exact sequences.
Using the spectral sequence of a filtered complex, we can derive the existence of long exact sequences. Choose a short exact sequence of cochain complexes 0 → "A•" → "B•" → "C•" → 0, and call the first map "f•" : "A•" → "B•". We get natural maps of homology objects "Hn"("A•") → "Hn"("B•") → "Hn"("C•"), and we know that this is exact in the middle. We will use the spectral sequence of a filtered complex to find the connecting homomorphism and to prove that the resulting sequence is exact. To start, we filter "B•":
formula_129
formula_130
formula_131
This gives:
formula_132
formula_133
The differential has bidegree (1, 0), so "d0,q" : "Hq"("C•") → "H""q"+1("A•"). These are the connecting homomorphisms from the snake lemma, and together with the maps "A•" → "B•" → "C•", they give a sequence:
formula_134
It remains to show that this sequence is exact at the "A" and "C" spots. Notice that this spectral sequence degenerates at the "E"2 term: the differentials there have bidegree (2, −1), and since the page is concentrated in the columns "p" = 0, 1, every such differential has zero domain or zero codomain. Consequently, the "E"2 term is the same as the "E"∞ term:
formula_135
But we also have a direct description of the "E"2 term as the homology of the "E"1 term. These two descriptions must be isomorphic:
formula_136
formula_137
The former gives exactness at the "C" spot, and the latter gives exactness at the "A" spot.
The spectral sequence of a double complex, continued.
Using the abutment for a filtered complex, we find that:
formula_138
formula_139
In general, "the two gradings on Hp+q(T(C•,•)) are distinct". Despite this, it is still possible to gain useful information from these two spectral sequences.
Commutativity of Tor.
Let "R" be a ring, let "M" be a right "R"-module and "N" a left "R"-module. Recall that the derived functors of the tensor product are denoted Tor. Tor is defined using a projective resolution of its first argument. However, it turns out that formula_140. While this can be verified without a spectral sequence, it is very easy with spectral sequences.
Choose projective resolutions formula_141 and formula_142 of "M" and "N", respectively. Consider these as complexes which vanish in negative degree having differentials "d" and "e", respectively. We can construct a double complex whose terms are formula_143 and whose differentials are formula_144 and formula_145. (The factor of −1 is so that the differentials anticommute.) Since projective modules are flat, taking the tensor product with a projective module commutes with taking homology, so we get:
formula_146
formula_147
Since the two complexes are resolutions, their homology vanishes outside of degree zero. In degree zero, we are left with
formula_148
formula_149
In particular, the formula_150 terms vanish except along the lines "q" = 0 (for the "I" spectral sequence) and "p" = 0 (for the "II" spectral sequence). This implies that the spectral sequence degenerates at the second sheet, so the "E"∞ terms are isomorphic to the "E"2 terms:
formula_151
formula_152
Finally, when "p" and "q" are equal, the two right-hand sides are equal, and the commutativity of Tor follows.
Worked-out examples.
First-quadrant sheet.
Consider a spectral sequence where formula_27 vanishes for all formula_99 less than some formula_153 and for all formula_154 less than some formula_155. If formula_153 and formula_155 can be chosen to be zero, this is called a first-quadrant spectral sequence.
The sequence abuts because formula_156 holds for all formula_157 if formula_158 and formula_159. To see this, note that either the domain or the codomain of the differential is zero for the considered cases. In visual terms, the sheets stabilize in a growing rectangle (see picture above). The spectral sequence need not degenerate, however, because the differential maps might not all be zero at once. Similarly, the spectral sequence also converges if formula_27 vanishes for all formula_99 greater than some formula_153 and for all formula_154 greater than some formula_155.
2 non-zero adjacent columns.
Let formula_160 be a homological spectral sequence such that formula_161 for all "p" other than 0, 1. Visually, this is the spectral sequence with formula_162-page
formula_163
The differentials on the second page have degree (-2, 1), so they are of the form
formula_164
These maps are all zero since they are
formula_165, formula_166
hence the spectral sequence degenerates: formula_167. Say, it converges to formula_168 with a filtration
formula_169
such that formula_170. Then formula_171, formula_172, formula_173, formula_174, etc. Thus, there is the exact sequence:
formula_175.
Next, let formula_160 be a spectral sequence whose second page consists only of two lines "q" = 0, 1. This need not degenerate at the second page but it still degenerates at the third page as the differentials there have degree (-3, 2). Note formula_176, as the denominator is zero. Similarly, formula_177. Thus,
formula_178.
Now, say, the spectral sequence converges to "H" with a filtration "F" as in the previous example. Since formula_179, formula_180, etc., we have: formula_181. Putting everything together, one gets:
formula_182
Wang sequence.
The computation in the previous section generalizes in a straightforward way. Consider a fibration over a sphere:
formula_183
with "n" at least 2. There is the Serre spectral sequence:
formula_184;
that is to say, formula_185 with some filtration formula_186.
Since formula_187 is nonzero only when "p" is zero or "n" and equal to Z in that case, we see formula_188 consists of only two lines formula_189, hence the formula_162-page is given by
formula_190
Moreover, since
formula_191
for formula_189 by the universal coefficient theorem, the formula_162 page looks like
formula_192
Since the only non-zero differentials are on the formula_193-page, given by
formula_194
which is
formula_195
the spectral sequence degenerates beyond the formula_193-page; that is, formula_196. By computing formula_197 we get an exact sequence
formula_198
and written out using the homology groups, this is
formula_199
To establish what the two formula_200-terms are, write formula_201, and since formula_202, etc., we have: formula_203 and thus, since formula_204,
formula_205
This is the exact sequence
formula_206
Putting all calculations together, one gets:
formula_207
Low-degree terms.
With an obvious notational change, the type of the computations in the previous examples can also be carried out for cohomological spectral sequences. Let formula_208 be a first-quadrant spectral sequence converging to "H" with the decreasing filtration
formula_209
so that formula_210
Since formula_211 is zero if "p" or "q" is negative, we have:
formula_212
Since formula_213 for the same reason and since formula_214
formula_215.
Since formula_216, formula_217. Stacking the sequences together, we get the so-called five-term exact sequence:
formula_218
Edge maps and transgressions.
Homological spectral sequences.
Let formula_160 be a spectral sequence. If formula_219 for every "q" < 0, then it must be that: for "r" ≥ 2,
formula_220
as the denominator is zero. Hence, there is a sequence of monomorphisms:
formula_221.
They are called the edge maps. Similarly, if formula_219 for every "p" < 0, then there is a sequence of epimorphisms (also called the edge maps):
formula_222.
The transgression is a partially-defined map (more precisely, a map from a subobject to a quotient)
formula_223
given as a composition formula_224, the first and last maps being the inverses of the edge maps.
Cohomological spectral sequences.
For a spectral sequence formula_208 of cohomological type, the analogous statements hold. If formula_225 for every "q" < 0, then there is a sequence of epimorphisms
formula_226.
And if formula_225 for every "p" < 0, then there is a sequence of monomorphisms:
formula_227.
The transgression is a not necessarily well-defined map:
formula_228
induced by formula_229.
Application.
Determining these maps is fundamental for computing many differentials in the Serre spectral sequence. For instance, the transgression map determines the differential
formula_230
for the homological spectral sequence; hence, on the Serre spectral sequence for a fibration formula_35, it gives the map
formula_231.
Further examples.
Some notable spectral sequences are:
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r_0"
},
{
"math_id": 1,
"text": " \\{E_r, d_r\\}_{r\\geq r_0} "
},
{
"math_id": 2,
"text": " E_r "
},
{
"math_id": 3,
"text": " d_r : E_r \\to E_r "
},
{
"math_id": 4,
"text": " r\\geq r_0 "
},
{
"math_id": 5,
"text": " d_r \\circ d_r = 0 "
},
{
"math_id": 6,
"text": " E_{r+1} \\cong H_{*}(E_r, d_r) "
},
{
"math_id": 7,
"text": "E_r"
},
{
"math_id": 8,
"text": "d_r"
},
{
"math_id": 9,
"text": " E_{r+1} = H_{*}(E_r, d_r) "
},
{
"math_id": 10,
"text": " d_r "
},
{
"math_id": 11,
"text": "E_{r+1}"
},
{
"math_id": 12,
"text": " E_r = \\bigoplus_{p,q \\in \\mathbb{Z}^2} E_r^{p,q}. "
},
{
"math_id": 13,
"text": " \\{E_r^{p,q}\\}_{p,q} "
},
{
"math_id": 14,
"text": " d_r = (d_r^{p,q} : E_r^{p,q} \\to E_r^{p+r,q-r+1})_{p,q \\in \\mathbb{Z}^2} "
},
{
"math_id": 15,
"text": " (r,1-r) "
},
{
"math_id": 16,
"text": " d_r^{p+r,q-r+1} \\circ d_r^{p,q} = 0 "
},
{
"math_id": 17,
"text": " E_r^{d,q} "
},
{
"math_id": 18,
"text": " d = p + q "
},
{
"math_id": 19,
"text": " E_r^{p,q} "
},
{
"math_id": 20,
"text": " E^r_{p,q} "
},
{
"math_id": 21,
"text": " d_r^{p,q} : E_r^{p,q} \\to E_r^{p+r,q-r+1} "
},
{
"math_id": 22,
"text": " d^r_{p,q} : E^r_{p,q} \\to E^r_{p-r,q+r-1} "
},
{
"math_id": 23,
"text": " (-r,r-1) "
},
{
"math_id": 24,
"text": "E_2"
},
{
"math_id": 25,
"text": "E_1"
},
{
"math_id": 26,
"text": "r^{th}"
},
{
"math_id": 27,
"text": "E_r^{p,q}"
},
{
"math_id": 28,
"text": "(r+1)^{th}"
},
{
"math_id": 29,
"text": " f : E \\to E' "
},
{
"math_id": 30,
"text": " f_r : E_r \\to E'_r "
},
{
"math_id": 31,
"text": " f_r \\circ d_r = d'_r \\circ f_r "
},
{
"math_id": 32,
"text": " f_{r+1}(E_{r+1}) \\,=\\, f_{r+1}(H(E_r)) \\,=\\, H(f_r(E_r)) "
},
{
"math_id": 33,
"text": " f_r(E_r^{p,q}) \\subset {E'_r}^{p,q}. "
},
{
"math_id": 34,
"text": "E^{p, q}_r"
},
{
"math_id": 35,
"text": "F \\to E \\to B"
},
{
"math_id": 36,
"text": "E_{2}"
},
{
"math_id": 37,
"text": "E_{\\infty}"
},
{
"math_id": 38,
"text": " (C^{\\bullet}, d) "
},
{
"math_id": 39,
"text": " ... \\supset\\, F^{-2}C^{\\bullet} \\,\\supset\\, F^{-1}C^{\\bullet} \\supset F^{0}C^{\\bullet} \\,\\supset\\, F^{1}C^{\\bullet} \\,\\supset\\, F^{2}C^{\\bullet} \\,\\supset\\, F^{3}C^{\\bullet} \\,\\supset... \\, "
},
{
"math_id": 40,
"text": " d(F^pC^n) \\subset F^pC^{n+1}"
},
{
"math_id": 41,
"text": "F^pC^{\\bullet}"
},
{
"math_id": 42,
"text": "C^{\\bullet}"
},
{
"math_id": 43,
"text": " E_0^{p,q} = F^{p}C^{p+q}/F^{p+1}C^{p+q} "
},
{
"math_id": 44,
"text": " E_1^{p,q} = H^{p+q}(F^{p}C^{\\bullet}/F^{p+1}C^{\\bullet}) "
},
{
"math_id": 45,
"text": " C^{\\bullet} "
},
{
"math_id": 46,
"text": " E_1 "
},
{
"math_id": 47,
"text": "Z_{-1}^{p,q} = Z_0^{p,q} = F^p C^{p+q}"
},
{
"math_id": 48,
"text": "B_0^{p,q} = 0"
},
{
"math_id": 49,
"text": "E_0^{p,q} = \\frac{Z_0^{p,q}}{B_0^{p,q} + Z_{-1}^{p+1,q-1}} = \\frac{F^p C^{p+q}}{F^{p+1} C^{p+q}}"
},
{
"math_id": 50,
"text": "E_0 = \\bigoplus_{p,q\\in\\mathbf{Z}} E_0^{p,q}"
},
{
"math_id": 51,
"text": " E_0 "
},
{
"math_id": 52,
"text": " d_0 "
},
{
"math_id": 53,
"text": "\\bar{Z}_1^{p,q} = \\ker d_0^{p,q} : E_0^{p,q} \\rightarrow E_0^{p,q+1} = \\ker d_0^{p,q} : F^p C^{p+q}/F^{p+1} C^{p+q} \\rightarrow F^p C^{p+q+1}/F^{p+1} C^{p+q+1}"
},
{
"math_id": 54,
"text": "\\bar{B}_1^{p,q} = \\mbox{im } d_0^{p,q-1} : E_0^{p,q-1} \\rightarrow E_0^{p,q} = \\mbox{im } d_0^{p,q-1} : F^p C^{p+q-1}/F^{p+1} C^{p+q-1} \\rightarrow F^p C^{p+q}/F^{p+1} C^{p+q}"
},
{
"math_id": 55,
"text": "E_1^{p,q} = \\frac{\\bar{Z}_1^{p,q}}{\\bar{B}_1^{p,q}} = \\frac{\\ker d_0^{p,q} : E_0^{p,q} \\rightarrow E_0^{p,q+1}}{\\mbox{im } d_0^{p,q-1} : E_0^{p,q-1} \\rightarrow E_0^{p,q}}"
},
{
"math_id": 56,
"text": "E_1 = \\bigoplus_{p,q\\in\\mathbf{Z}} E_1^{p,q} = \\bigoplus_{p,q\\in\\mathbf{Z}} \\frac{\\bar{Z}_1^{p,q}}{\\bar{B}_1^{p,q}}"
},
{
"math_id": 57,
"text": "\\bar{Z}_1^{p,q}"
},
{
"math_id": 58,
"text": "\\bar{B}_1^{p,q}"
},
{
"math_id": 59,
"text": "E_0^{p,q}"
},
{
"math_id": 60,
"text": "Z_1^{p,q} = \\ker d_0^{p,q} : F^p C^{p+q} \\rightarrow C^{p+q+1}/F^{p+1} C^{p+q+1}"
},
{
"math_id": 61,
"text": "B_1^{p,q} = (\\mbox{im } d_0^{p,q-1} : F^p C^{p+q-1} \\rightarrow C^{p+q}) \\cap F^p C^{p+q}"
},
{
"math_id": 62,
"text": "E_1^{p,q} = \\frac{Z_1^{p,q}}{B_1^{p,q} + Z_0^{p+1,q-1}}."
},
{
"math_id": 63,
"text": "Z_1^{p,q}"
},
{
"math_id": 64,
"text": "B_1^{p,q}"
},
{
"math_id": 65,
"text": "Z_r^{p,q}"
},
{
"math_id": 66,
"text": "B_r^{p,q}"
},
{
"math_id": 67,
"text": "Z_r^{p,q} = \\ker d_0^{p,q} : F^p C^{p+q} \\rightarrow C^{p+q+1}/F^{p+r} C^{p+q+1}"
},
{
"math_id": 68,
"text": "B_r^{p,q} = (\\mbox{im } d_0^{p-r+1,q+r-2} : F^{p-r+1} C^{p+q-1} \\rightarrow C^{p+q}) \\cap F^p C^{p+q}"
},
{
"math_id": 69,
"text": "E_r^{p,q} = \\frac{Z_r^{p,q}}{B_r^{p,q} + Z_{r-1}^{p+1,q-1}}"
},
{
"math_id": 70,
"text": "B_r^{p,q} = d_0^{p,q}(Z_{r-1}^{p-r+1,q+r-2})."
},
{
"math_id": 71,
"text": " E_{r+1} "
},
{
"math_id": 72,
"text": "d_r^{p,q} : E_r^{p,q} \\rightarrow E_r^{p+r,q-r+1}"
},
{
"math_id": 73,
"text": " d "
},
{
"math_id": 74,
"text": "C^{p+q}"
},
{
"math_id": 75,
"text": "H^I_i(H^{II}_j(C_{\\bullet,\\bullet}))"
},
{
"math_id": 76,
"text": "H^{II}_j(H^I_i(C_{\\bullet,\\bullet}))"
},
{
"math_id": 77,
"text": "(C_{i,j}^I)_p = \\begin{cases}\n0 & \\text{if } i < p \\\\\nC_{i,j} & \\text{if } i \\ge p \\end{cases}"
},
{
"math_id": 78,
"text": "(C_{i,j}^{II})_p = \\begin{cases}\n0 & \\text{if } j < p \\\\\nC_{i,j} & \\text{if } j \\ge p \\end{cases}"
},
{
"math_id": 79,
"text": "\\bigoplus_{i+j=n} C_{i,j}"
},
{
"math_id": 80,
"text": "T_n(C_{\\bullet,\\bullet})^I_p = \\bigoplus_{i+j=n \\atop i > p-1} C_{i,j}"
},
{
"math_id": 81,
"text": "T_n(C_{\\bullet,\\bullet})^{II}_p = \\bigoplus_{i+j=n \\atop j > p-1} C_{i,j}"
},
{
"math_id": 82,
"text": "{}^IE^0_{p,q} =\nT_n(C_{\\bullet,\\bullet})^I_p / T_n(C_{\\bullet,\\bullet})^I_{p+1} =\n\\bigoplus_{i+j=n \\atop i > p-1} C_{i,j} \\Big/\n\\bigoplus_{i+j=n \\atop i > p} C_{i,j} =\nC_{p,q},"
},
{
"math_id": 83,
"text": "d^I_{p,q} + d^{II}_{p,q} :\nT_n(C_{\\bullet,\\bullet})^I_p / T_n(C_{\\bullet,\\bullet})^I_{p+1} =\nC_{p,q} \\rightarrow\nT_{n-1}(C_{\\bullet,\\bullet})^I_p / T_{n-1}(C_{\\bullet,\\bullet})^I_{p+1} =\nC_{p,q-1}"
},
{
"math_id": 84,
"text": "{}^IE^1_{p,q} = H^{II}_q(C_{p,\\bullet})."
},
{
"math_id": 85,
"text": "d^I_{p,q} + d^{II}_{p,q} :\nH^{II}_q(C_{p,\\bullet}) \\rightarrow\nH^{II}_q(C_{p+1,\\bullet})"
},
{
"math_id": 86,
"text": "{}^IE^2_{p,q} = H^I_p(H^{II}_q(C_{\\bullet,\\bullet}))."
},
{
"math_id": 87,
"text": "{}^{II}E^2_{p,q} = H^{II}_q(H^{I}_p(C_{\\bullet,\\bullet}))."
},
{
"math_id": 88,
"text": "0 = B_0 \\subset B_1 \\subset B_{2} \\subset \\dots \\subset B_r \\subset \\dots \\subset Z_r \\subset \\dots \\subset Z_2 \\subset Z_1 \\subset Z_0 = E_1"
},
{
"math_id": 89,
"text": "E_r \\simeq Z_{r-1}/B_{r-1}"
},
{
"math_id": 90,
"text": "Z_0 = E_1, B_0 = 0"
},
{
"math_id": 91,
"text": "Z_r, B_r"
},
{
"math_id": 92,
"text": "Z_r/B_{r-1}, B_r/B_{r-1}"
},
{
"math_id": 93,
"text": "E_r \\overset{d_r}\\to E_r."
},
{
"math_id": 94,
"text": "Z_{\\infty} = \\cap_r Z_r, B_{\\infty} = \\cup_r B_r"
},
{
"math_id": 95,
"text": "E_{\\infty} = Z_{\\infty}/B_{\\infty}"
},
{
"math_id": 96,
"text": " H^{\\bullet} "
},
{
"math_id": 97,
"text": " F^{\\bullet} H^{n} "
},
{
"math_id": 98,
"text": " n "
},
{
"math_id": 99,
"text": " p "
},
{
"math_id": 100,
"text": " E_{\\infty}^{p,q} \\cong F^pH^{p+q}/F^{p+1}H^{p+q} "
},
{
"math_id": 101,
"text": " \\cap_{p}F^pH^{\\bullet}=0 "
},
{
"math_id": 102,
"text": "E_r^{p,q} \\Rightarrow_p E_\\infty^n"
},
{
"math_id": 103,
"text": "E_\\infty^{p,q}"
},
{
"math_id": 104,
"text": " p,q "
},
{
"math_id": 105,
"text": " r(p,q) "
},
{
"math_id": 106,
"text": "r \\geq r(p,q)"
},
{
"math_id": 107,
"text": "E_r^{p,q} = E_{r(p,q)}^{p,q}"
},
{
"math_id": 108,
"text": "E_{r(p,q)}^{p,q} = E_\\infty^{p,q}"
},
{
"math_id": 109,
"text": " r_0 "
},
{
"math_id": 110,
"text": "d_r^{p,q}"
},
{
"math_id": 111,
"text": " r \\geq r_0 "
},
{
"math_id": 112,
"text": " r_0 \\geq 2 "
},
{
"math_id": 113,
"text": " r_0^{th} "
},
{
"math_id": 114,
"text": "E_r^{p,q} \\Rightarrow_p E_\\infty^{p,q}"
},
{
"math_id": 115,
"text": "E_2^{p,q}"
},
{
"math_id": 116,
"text": " E_{\\infty} "
},
{
"math_id": 117,
"text": "Z_0^{p,q} \\supe Z_1^{p,q} \\supe Z_2^{p,q}\\supe\\cdots\\supe B_2^{p,q} \\supe B_1^{p,q} \\supe B_0^{p,q}"
},
{
"math_id": 118,
"text": "Z_\\infty^{p,q} = \\bigcap_{r=0}^\\infty Z_r^{p,q},"
},
{
"math_id": 119,
"text": "B_\\infty^{p,q} = \\bigcup_{r=0}^\\infty B_r^{p,q},"
},
{
"math_id": 120,
"text": "E_\\infty^{p,q} = \\frac{Z_\\infty^{p,q}}{B_\\infty^{p,q}+Z_\\infty^{p+1,q-1}}."
},
{
"math_id": 121,
"text": "Z_\\infty^{p,q} = \\bigcap_{r=0}^\\infty Z_r^{p,q} = \\bigcap_{r=0}^\\infty \\ker(F^p C^{p+q} \\rightarrow C^{p+q+1}/F^{p+r} C^{p+q+1})"
},
{
"math_id": 122,
"text": "B_\\infty^{p,q} = \\bigcup_{r=0}^\\infty B_r^{p,q} = \\bigcup_{r=0}^\\infty (\\mbox{im } d^{p,q-r} : F^{p-r} C^{p+q-1} \\rightarrow C^{p+q}) \\cap F^p C^{p+q}"
},
{
"math_id": 123,
"text": "Z_\\infty^{p,q}"
},
{
"math_id": 124,
"text": "Z_\\infty^{p,q} = \\ker(F^p C^{p+q} \\rightarrow C^{p+q+1})"
},
{
"math_id": 125,
"text": "B_\\infty^{p,q}"
},
{
"math_id": 126,
"text": "B_\\infty^{p,q} = \\text{im }(C^{p+q-1} \\rightarrow C^{p+q}) \\cap F^p C^{p+q}"
},
{
"math_id": 127,
"text": "E_\\infty^{p,q} = \\mbox{gr}_p H^{p+q}(C^\\bull)"
},
{
"math_id": 128,
"text": "E_r^{p,q} \\Rightarrow_p H^{p+q}(C^\\bull)"
},
{
"math_id": 129,
"text": "F^0 B^n = B^n"
},
{
"math_id": 130,
"text": "F^1 B^n = A^n"
},
{
"math_id": 131,
"text": "F^2 B^n = 0"
},
{
"math_id": 132,
"text": "E^{p,q}_0\n= \\frac{F^p B^{p+q}}{F^{p+1} B^{p+q}} = \\begin{cases}\n0 & \\text{if } p < 0 \\text{ or } p > 1 \\\\\nC^q & \\text{if } p = 0 \\\\\nA^{q+1} & \\text{if } p = 1 \\end{cases}"
},
{
"math_id": 133,
"text": "E^{p,q}_1\n= \\begin{cases}\n0 & \\text{if } p < 0 \\text{ or } p > 1 \\\\\nH^q(C^\\bull) & \\text{if } p = 0 \\\\\nH^{q+1}(A^\\bull) & \\text{if } p = 1 \\end{cases}"
},
{
"math_id": 134,
"text": "\\cdots\\rightarrow H^q(B^\\bull) \\rightarrow H^q(C^\\bull) \\rightarrow H^{q+1}(A^\\bull) \\rightarrow H^{q+1}(B^\\bull) \\rightarrow\\cdots"
},
{
"math_id": 135,
"text": "E^{p,q}_2\n\\cong \\text{gr}_p H^{p+q}(B^\\bull)\n= \\begin{cases}\n0 & \\text{if } p < 0 \\text{ or } p > 1 \\\\\nH^q(B^\\bull)/H^q(A^\\bull) & \\text{if } p = 0 \\\\\n\\text{im } H^{q+1}f^\\bull : H^{q+1}(A^\\bull) \\rightarrow H^{q+1}(B^\\bull) &\\text{if } p = 1 \\end{cases}"
},
{
"math_id": 136,
"text": " H^q(B^\\bull)/H^q(A^\\bull) \\cong \\ker d^1_{0,q} : H^q(C^\\bull) \\rightarrow H^{q+1}(A^\\bull)"
},
{
"math_id": 137,
"text": " \\text{im } H^{q+1}f^\\bull : H^{q+1}(A^\\bull) \\rightarrow H^{q+1}(B^\\bull) \\cong H^{q+1}(A^\\bull) / (\\mbox{im } d^1_{0,q} : H^q(C^\\bull) \\rightarrow H^{q+1}(A^\\bull))"
},
{
"math_id": 138,
"text": "H^I_p(H^{II}_q(C_{\\bull,\\bull})) \\Rightarrow_p H^{p+q}(T(C_{\\bull,\\bull}))"
},
{
"math_id": 139,
"text": "H^{II}_q(H^I_p(C_{\\bull,\\bull})) \\Rightarrow_q H^{p+q}(T(C_{\\bull,\\bull}))"
},
{
"math_id": 140,
"text": "\\operatorname{Tor}_i(M,N) =\\operatorname{Tor}_i(N,M)"
},
{
"math_id": 141,
"text": "P_\\bull"
},
{
"math_id": 142,
"text": "Q_\\bull"
},
{
"math_id": 143,
"text": "C_{i,j} = P_i \\otimes Q_j"
},
{
"math_id": 144,
"text": "d \\otimes 1"
},
{
"math_id": 145,
"text": "(-1)^i(1 \\otimes e)"
},
{
"math_id": 146,
"text": "H^I_p(H^{II}_q(P_\\bull \\otimes Q_\\bull)) = H^I_p(P_\\bull \\otimes H^{II}_q(Q_\\bull))"
},
{
"math_id": 147,
"text": "H^{II}_q(H^I_p(P_\\bull \\otimes Q_\\bull)) = H^{II}_q(H^I_p(P_\\bull) \\otimes Q_\\bull)"
},
{
"math_id": 148,
"text": "H^I_p(P_\\bull \\otimes N) = \\operatorname{Tor}_p(M,N)"
},
{
"math_id": 149,
"text": "H^{II}_q(M \\otimes Q_\\bull) = \\operatorname{Tor}_q(N,M)"
},
{
"math_id": 150,
"text": "E^2_{p,q}"
},
{
"math_id": 151,
"text": "\\operatorname{Tor}_p(M,N) \\cong E^\\infty_p = H_p(T(C_{\\bull,\\bull}))"
},
{
"math_id": 152,
"text": "\\operatorname{Tor}_q(N,M) \\cong E^\\infty_q = H_q(T(C_{\\bull,\\bull}))"
},
{
"math_id": 153,
"text": " p_0 "
},
{
"math_id": 154,
"text": " q "
},
{
"math_id": 155,
"text": " q_0 "
},
{
"math_id": 156,
"text": " E_{r+i}^{p,q} = E_r^{p,q} "
},
{
"math_id": 157,
"text": " i\\geq 0 "
},
{
"math_id": 158,
"text": " r>p "
},
{
"math_id": 159,
"text": " r>q+1 "
},
{
"math_id": 160,
"text": "E^r_{p, q}"
},
{
"math_id": 161,
"text": "E^2_{p, q} = 0"
},
{
"math_id": 162,
"text": "E^2"
},
{
"math_id": 163,
"text": "\\begin{matrix}\n& \\vdots & \\vdots & \\vdots & \\vdots & \\\\\n\\cdots & 0 & E^2_{0,2} & E^2_{1,2} & 0 & \\cdots \\\\\n\\cdots & 0 & E^2_{0,1} & E^2_{1,1} & 0 & \\cdots \\\\\n\\cdots & 0 & E^2_{0,0} & E^2_{1,0} & 0 & \\cdots \\\\\n\\cdots & 0 & E^2_{0,-1} & E^2_{1,-1} & 0 & \\cdots \\\\\n& \\vdots & \\vdots & \\vdots & \\vdots & \n\\end{matrix}"
},
{
"math_id": 164,
"text": "d^2_{p,q}:E^2_{p,q} \\to E^2_{p-2,q+1}"
},
{
"math_id": 165,
"text": "d^2_{0,q}:E^2_{0,q} \\to 0"
},
{
"math_id": 166,
"text": "d^2_{1,q}:E^2_{1,q} \\to 0"
},
{
"math_id": 167,
"text": "E^{\\infty} = E^2"
},
{
"math_id": 168,
"text": "H_*"
},
{
"math_id": 169,
"text": "0 = F_{-1} H_n \\subset F_0 H_n \\subset \\dots \\subset F_n H_n = H_n"
},
{
"math_id": 170,
"text": "E^{\\infty}_{p, q} = F_p H_{p+q}/F_{p-1} H_{p+q}"
},
{
"math_id": 171,
"text": "F_0 H_n = E^2_{0, n}"
},
{
"math_id": 172,
"text": "F_1 H_n / F_0 H_n = E^2_{1, n -1}"
},
{
"math_id": 173,
"text": "F_2 H_n / F_1 H_n = 0"
},
{
"math_id": 174,
"text": "F_3 H_n / F_2 H_n = 0"
},
{
"math_id": 175,
"text": "0 \\to E^2_{0, n} \\to H_n \\to E^2_{1, n - 1} \\to 0"
},
{
"math_id": 176,
"text": "E^3_{p, 0} = \\operatorname{ker} (d: E^2_{p, 0} \\to E^2_{p - 2, 1})"
},
{
"math_id": 177,
"text": "E^3_{p, 1} = \\operatorname{coker}(d: E^2_{p+2, 0} \\to E^2_{p, 1})"
},
{
"math_id": 178,
"text": "0 \\to E^{\\infty}_{p, 0} \\to E^2_{p, 0} \\overset{d}\\to E^2_{p-2, 1} \\to E^{\\infty}_{p-2, 1} \\to 0"
},
{
"math_id": 179,
"text": "F_{p-2} H_{p} / F_{p-3} H_{p} = E^{\\infty}_{p-2, 2} = 0"
},
{
"math_id": 180,
"text": "F_{p-3} H_p / F_{p-4} H_p = 0"
},
{
"math_id": 181,
"text": "0 \\to E^{\\infty}_{p - 1, 1} \\to H_p \\to E^{\\infty}_{p, 0} \\to 0"
},
{
"math_id": 182,
"text": "\\cdots \\to H_{p+1} \\to E^2_{p + 1, 0} \\overset{d}\\to E^2_{p - 1, 1} \\to H_p \\to E^2_{p, 0} \\overset{d}\\to E^2_{p - 2, 1} \\to H_{p-1} \\to \\dots."
},
{
"math_id": 183,
"text": "F \\overset{i}\\to E \\overset{p}\\to S^n"
},
{
"math_id": 184,
"text": "E^2_{p, q} = H_p(S^n; H_q(F)) \\Rightarrow H_{p+q}(E)"
},
{
"math_id": 185,
"text": "E^{\\infty}_{p, q} = F_p H_{p+q}(E)/F_{p-1} H_{p+q}(E)"
},
{
"math_id": 186,
"text": "F_\\bullet"
},
{
"math_id": 187,
"text": "H_p(S^n)"
},
{
"math_id": 188,
"text": "E^2_{p, q}"
},
{
"math_id": 189,
"text": "p = 0,n"
},
{
"math_id": 190,
"text": "\\begin{matrix}\n& \\vdots & \\vdots & \\vdots & & \\vdots & \\vdots & \\vdots & \\\\\n\\cdots & 0 & E^2_{0,2} & 0 & \\cdots & 0 & E^2_{n,2} & 0 & \\cdots \\\\\n\\cdots & 0 & E^2_{0,1} & 0 & \\cdots & 0 & E^2_{n,1} & 0 & \\cdots \\\\\n\\cdots & 0 & E^2_{0,0} & 0 & \\cdots & 0 & E^2_{n,0} & 0 & \\cdots \\\\\n\\end{matrix}"
},
{
"math_id": 191,
"text": "E^2_{p, q} = H_p(S^n;H_q(F)) = H_q(F)"
},
{
"math_id": 192,
"text": "\\begin{matrix}\n& \\vdots & \\vdots & \\vdots & & \\vdots & \\vdots & \\vdots & \\\\\n\\cdots & 0 & H_2(F) & 0 & \\cdots & 0 & H_2(F) & 0 & \\cdots \\\\\n\\cdots & 0 & H_1(F) & 0 & \\cdots & 0 & H_1(F) & 0 & \\cdots \\\\\n\\cdots & 0 & H_0(F) & 0 & \\cdots & 0 & H_0(F) & 0 & \\cdots \\\\\n\\end{matrix}"
},
{
"math_id": 193,
"text": "E^n"
},
{
"math_id": 194,
"text": "d^n_{n,q}:E^n_{n,q} \\to E^n_{0,q+n-1}"
},
{
"math_id": 195,
"text": "d^n_{n,q}:H_q(F) \\to H_{q+n-1}(F)"
},
{
"math_id": 196,
"text": "E^{n+1} = E^{\\infty}"
},
{
"math_id": 197,
"text": "E^{n+1}"
},
{
"math_id": 198,
"text": "0 \\to E^{\\infty}_{n, q-n} \\to E^n_{n, q-n} \\overset{d}\\to E^n_{0, q-1} \\to E^{\\infty}_{0, q-1} \\to 0."
},
{
"math_id": 199,
"text": "0 \\to E^{\\infty}_{n, q-n} \\to H_{q-n}(F) \\overset{d}\\to H_{q-1}(F) \\to E^{\\infty}_{0, q-1} \\to 0."
},
{
"math_id": 200,
"text": "E^\\infty"
},
{
"math_id": 201,
"text": "H = H(E)"
},
{
"math_id": 202,
"text": "F_1 H_q/F_0 H_q = E^{\\infty}_{1, q - 1} = 0"
},
{
"math_id": 203,
"text": "E^{\\infty}_{n, q-n} = F_n H_q / F_0 H_q"
},
{
"math_id": 204,
"text": "F_n H_q = H_q"
},
{
"math_id": 205,
"text": "0 \\to E^{\\infty}_{0, q} \\to H_q \\to E^{\\infty}_{n, q - n} \\to 0."
},
{
"math_id": 206,
"text": "0 \\to H_q(F) \\to H_q(E) \\to H_{q-n}(F)\\to 0."
},
{
"math_id": 207,
"text": "\\dots \\to H_q(F) \\overset{i_*}\\to H_q(E) \\to H_{q-n}(F) \\overset{d}\\to H_{q-1}(F) \\overset{i_*}\\to H_{q-1}(E) \\to H_{q-n -1}(F) \\to \\dots"
},
{
"math_id": 208,
"text": "E_r^{p, q}"
},
{
"math_id": 209,
"text": "0 = F^{n+1} H^n \\subset F^n H^n \\subset \\dots \\subset F^0 H^n = H^n"
},
{
"math_id": 210,
"text": "E_{\\infty}^{p,q} = F^p H^{p+q}/F^{p+1} H^{p+q}."
},
{
"math_id": 211,
"text": "E_2^{p, q}"
},
{
"math_id": 212,
"text": "0 \\to E^{0, 1}_{\\infty} \\to E^{0, 1}_2 \\overset{d}\\to E^{2, 0}_2 \\to E^{2, 0}_{\\infty} \\to 0."
},
{
"math_id": 213,
"text": "E_{\\infty}^{1, 0} = E_2^{1, 0}"
},
{
"math_id": 214,
"text": "F^2 H^1 = 0,"
},
{
"math_id": 215,
"text": "0 \\to E_2^{1, 0} \\to H^1 \\to E^{0, 1}_{\\infty} \\to 0"
},
{
"math_id": 216,
"text": "F^3 H^2 = 0"
},
{
"math_id": 217,
"text": "E^{2, 0}_{\\infty} \\subset H^2"
},
{
"math_id": 218,
"text": "0 \\to E^{1, 0}_2 \\to H^1 \\to E^{0, 1}_2 \\overset{d}\\to E^{2, 0}_2 \\to H^2."
},
{
"math_id": 219,
"text": "E^r_{p, q} = 0"
},
{
"math_id": 220,
"text": "E^{r+1}_{p, 0} = \\operatorname{ker}(d: E^r_{p, 0} \\to E^r_{p-r, r-1})"
},
{
"math_id": 221,
"text": "E^{r}_{p, 0} \\to E^{r-1}_{p, 0} \\to \\dots \\to E^3_{p, 0} \\to E^2_{p, 0}"
},
{
"math_id": 222,
"text": "E^2_{0, q} \\to E^3_{0, q} \\to \\dots \\to E^{r-1}_{0, q} \\to E^r_{0, q}"
},
{
"math_id": 223,
"text": "\\tau: E^2_{p, 0} \\to E^2_{0, p - 1}"
},
{
"math_id": 224,
"text": "E^2_{p, 0} \\to E^p_{p, 0} \\overset{d}\\to E^p_{0, p-1} \\to E^2_{0, p - 1}"
},
{
"math_id": 225,
"text": "E_r^{p, q} = 0"
},
{
"math_id": 226,
"text": "E_{2}^{p, 0} \\to E_{3}^{p, 0} \\to \\dots \\to E_{r-1}^{p, 0} \\to E_r^{p, 0}"
},
{
"math_id": 227,
"text": "E_{r}^{0, q} \\to E_{r-1}^{0, q} \\to \\dots \\to E_{3}^{0, q} \\to E_2^{0, q}"
},
{
"math_id": 228,
"text": "\\tau: E_2^{0, q-1} \\to E_2^{q, 0}"
},
{
"math_id": 229,
"text": "d: E_q^{0, q-1} \\to E_q^{q, 0}"
},
{
"math_id": 230,
"text": "d_n:E_{n,0}^n \\to E_{0,n-1}^n"
},
{
"math_id": 231,
"text": "d_n:H_n(B) \\to H_{n-1}(F)"
}
] | https://en.wikipedia.org/wiki?curid=803894 |
804039 | Dynamical friction | Gravitational loss of momentum and energy by bodies moving through surrounding matter
In astrophysics, dynamical friction or Chandrasekhar friction, sometimes called gravitational drag, is loss of momentum and kinetic energy of moving bodies through gravitational interactions with surrounding matter in space. It was first discussed in detail by Subrahmanyan Chandrasekhar in 1943.
Intuitive account.
An intuition for the effect can be obtained by thinking of a massive object moving through a cloud of smaller lighter bodies. The effect of gravity causes the light bodies to accelerate and gain momentum and kinetic energy (see slingshot effect). By conservation of energy and momentum, we may conclude that the heavier body will be slowed by an amount to compensate. Since there is a loss of momentum and kinetic energy for the body under consideration, the effect is called "dynamical friction".
Another equivalent way of thinking about this process is that as a large object moves through a cloud of smaller objects, the gravitational effect of the larger object pulls the smaller objects towards it. There then exists a concentration of smaller objects behind the larger body (a "gravitational wake"), as it has already moved past its previous position. This concentration of small objects behind the larger body exerts a collective gravitational force on the large object, slowing it down.
Of course, the mechanism works the same for all masses of interacting bodies and for any relative velocities between them. However, while the most probable outcome for an object moving through a cloud is a loss of momentum and energy, as described intuitively above, in the general case it might be either loss or gain. When the body under consideration is gaining momentum and energy the same physical mechanism is called slingshot effect, or "gravity assist". This technique is sometimes used by interplanetary probes to obtain a boost in velocity by passing close by a planet.
Chandrasekhar dynamical friction formula.
The full Chandrasekhar dynamical friction formula for the change in velocity of the object involves integrating over the phase space density of the field of matter and is far from transparent. The Chandrasekhar dynamical friction formula reads as
formula_0
where
The result of the equation is the gravitational acceleration produced on the object under consideration by the surrounding stars or celestial bodies; the left-hand side of the formula is the rate of change of the object's velocity with time.
Maxwell's distribution.
A commonly used special case is where there is a uniform density in the field of matter, with matter particles significantly lighter than the major particle under consideration, i.e. formula_7, and with a Maxwellian distribution for the velocity of the matter particles, i.e.,
formula_8
where formula_9 is the total number of stars and formula_10 is the dispersion. In this case, the dynamical friction formula is as follows:
formula_11
where
In general, a simplified equation for the force from dynamical friction has the form
formula_15
where the dimensionless numerical factor formula_16 depends on how formula_17 compares to the velocity dispersion of the surrounding matter.
But note that this simplified expression diverges when formula_18; caution should therefore be exercised when using it.
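A short numerical sketch of the Maxwellian special case above, in Python. The helper function and all input values are illustrative assumptions (SI units), and the error function is taken from scipy.special:
```python
import numpy as np
from scipy.special import erf

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def chandrasekhar_deceleration(M, v_M, rho, sigma, ln_Lambda):
    """Magnitude of d(v_M)/dt for a body of mass M moving at speed v_M through
    a uniform Maxwellian background of mass density rho and velocity
    dispersion sigma, following the formula above (SI units)."""
    X = v_M / (np.sqrt(2.0) * sigma)
    bracket = erf(X) - 2.0 * X / np.sqrt(np.pi) * np.exp(-X * X)
    return 4.0 * np.pi * ln_Lambda * G**2 * rho * M * bracket / v_M**2

# Illustrative numbers only: a 10^6 solar-mass object moving at 200 km/s
# through a stellar background with sigma = 150 km/s and ln(Lambda) = 10.
M_sun = 1.989e30
a = chandrasekhar_deceleration(M=1e6 * M_sun, v_M=2.0e5,
                               rho=1e-20, sigma=1.5e5, ln_Lambda=10.0)
print(f"deceleration ~ {a:.2e} m/s^2")
```
Note that the bracketed factor makes the computed drag vanish as formula_17 approaches zero, in contrast to the divergence of the simplified expression noted above.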
Density of the surrounding medium.
The greater the density of the surrounding medium, the stronger the force from dynamical friction. Similarly, the force is proportional to the square of the mass of the object: one factor of the mass comes from the gravitational force between the object and the wake, and the second arises because the more massive the object, the more matter is pulled into the wake. The force is also proportional to the inverse square of the velocity, so the fractional rate of energy loss drops rapidly at high velocities. Dynamical friction is therefore unimportant for objects that move relativistically, such as photons. This can be rationalized by noting that the faster the object moves through the medium, the less time there is for a wake to build up behind it.
Applications.
Dynamical friction is particularly important in the formation of planetary systems and interactions between galaxies.
Protoplanets.
During the formation of planetary systems, dynamical friction between the protoplanet and the protoplanetary disk causes energy to be transferred from the protoplanet to the disk. This results in the inward migration of the protoplanet.
Galaxies.
When galaxies interact through collisions, dynamical friction between stars causes matter to sink toward the center of the galaxy and the orbits of stars to become randomized. This process is called violent relaxation and can change two spiral galaxies into one larger elliptical galaxy.
Galaxy clusters.
The effect of dynamical friction explains why the brightest (most massive) galaxy tends to be found near the center of a galaxy cluster. The cumulative effect of two-body encounters slows down the galaxy, and the drag effect is greater the larger the galaxy's mass. When the galaxy loses kinetic energy, it moves towards the center of the cluster.
However, the observed velocity dispersion of galaxies within a galaxy cluster does not depend on the mass of the galaxies. The explanation is that a galaxy cluster relaxes by violent relaxation, which sets the velocity dispersion to a value independent of each galaxy's mass.
Star clusters.
The effect of dynamical friction explains why the most massive stars of a star cluster tend to be found near its center. This concentration of more massive stars in cluster cores tends to favor collisions between stars, which may trigger the runaway collision mechanism that forms intermediate-mass black holes. Globular clusters orbiting through the stellar field of a galaxy experience dynamical friction. This drag force causes the cluster to lose energy and spiral in toward the galactic center.
Photons.
Fritz Zwicky proposed in 1929 that a gravitational drag effect on photons could be used to explain cosmological redshift as a form of tired light. However, his analysis had a mathematical error, and his approximation to the magnitude of the effect should actually have been zero, as pointed out in the same year by Arthur Stanley Eddington. Zwicky promptly acknowledged the correction, although he continued to hope that a full treatment would be able to show the effect.
It is now known that the effect of dynamical friction on photons or other particles moving at relativistic speeds is negligible, since the magnitude of the drag is inversely proportional to the square of velocity. Cosmological redshift is conventionally understood to be a consequence of the expansion of the universe.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{d\\mathbf{v}_M}{dt} = -16 \\pi^2 (\\ln \\Lambda) G^2 m (M+m) \\frac{1}{v_M^3}\\int_0^{v_M}v^2 f(v) d v \\mathbf{v}_M"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "m"
},
{
"math_id": 4,
"text": " {v_M} "
},
{
"math_id": 5,
"text": " \\mbox{ln}(\\Lambda) "
},
{
"math_id": 6,
"text": "f(v)"
},
{
"math_id": 7,
"text": "M\\gg m"
},
{
"math_id": 8,
"text": "f(v) = \\frac{N}{(2\\pi\\sigma^2)^{3/2}}e^{-\\frac{v^2}{2\\sigma^2}}"
},
{
"math_id": 9,
"text": "N"
},
{
"math_id": 10,
"text": "\\sigma"
},
{
"math_id": 11,
"text": " \\frac{d\\mathbf{v}_M}{dt} = -\\frac{4\\pi \\ln (\\Lambda) G^2 \\rho M}{v_M^3}\\left[\\mathrm{erf}(X)-\\frac{2X}{\\sqrt{\\pi}}e^{-X^2}\\right]\\mathbf{v}_M"
},
{
"math_id": 12,
"text": " X = v_M/(\\sqrt{2} \\sigma) "
},
{
"math_id": 13,
"text": " \\mathrm{erf}(X) "
},
{
"math_id": 14,
"text": " \\rho= mN "
},
{
"math_id": 15,
"text": "F_\\text{dyn} \\approx C \\frac{G^2 M^2 \\rho}{v^2_M}"
},
{
"math_id": 16,
"text": " C "
},
{
"math_id": 17,
"text": "v_M"
},
{
"math_id": 18,
"text": " v_M \\to 0 "
}
] | https://en.wikipedia.org/wiki?curid=804039 |
804056 | C2 | C2 or a derivative (C-2, C2, etc.) may refer to:
<templatestyles src="Template:TOC_right/styles.css" />
Codes or abbreviations.
C2 may be a code or abbreviation for:
See also.
<templatestyles src="Dmbox/styles.css" />
Topics referred to by the same term
This page lists articles associated with the same title formed as a letter–number combination. | [
{
"math_id": 0,
"text": "\\Complex^2"
}
] | https://en.wikipedia.org/wiki?curid=804056 |
804155 | Pairwise independence | Set of random variables of which any two are independent
In probability theory, a pairwise independent collection of random variables is a set of random variables any two of which are independent. Any collection of mutually independent random variables is pairwise independent, but some pairwise independent collections are not mutually independent. Pairwise independent random variables with finite variance are uncorrelated.
A pair of random variables "X" and "Y" are independent if and only if the random vector ("X", "Y") with joint cumulative distribution function (CDF) formula_0 satisfies
formula_1
or equivalently, their joint density formula_2 satisfies
formula_3
That is, the joint distribution is equal to the product of the marginal distributions.
In practice, unless clarity demands otherwise, the modifier "mutual" is usually dropped, so that "independence" means mutual independence. A statement such as " "X", "Y", "Z" are independent random variables" means that "X", "Y", "Z" are mutually independent.
Example.
Pairwise independence does not imply mutual independence, as shown by the following example attributed to S. Bernstein.
Suppose "X" and "Y" are two independent tosses of a fair coin, where we designate 1 for heads and 0 for tails. Let the third random variable "Z" be equal to 1 if exactly one of those coin tosses resulted in "heads", and 0 otherwise (i.e., formula_4). Then jointly the triple ("X", "Y", "Z") has the following probability distribution:
formula_5
Here the marginal probability distributions are identical: formula_6 and
formula_7 The bivariate distributions also agree: formula_8 where formula_9
Since each of the pairwise joint distributions equals the product of their respective marginal distributions, the variables are pairwise independent.
However, "X", "Y", and "Z" are not mutually independent, since formula_10 with the left side equal to 1/4 at ("x", "y", "z") = (0, 0, 0), for example, while the right side equals 1/8 there. In fact, any of formula_11 is completely determined by the other two (any of "X", "Y", "Z" is the sum, modulo 2, of the others). That is as far from independence as random variables can get.
Probability of the union of pairwise independent events.
Bounds on the probability that the sum of Bernoulli random variables is at least one, commonly known as the union bound, are provided by the Boole–Fréchet inequalities. While these bounds assume only univariate information, several bounds that use knowledge of general bivariate probabilities have been proposed too. Denote by formula_12 a set of formula_13 Bernoulli events with probability of occurrence formula_14 for each formula_15. Suppose the bivariate probabilities are given by formula_16 for every pair of indices formula_17. Kounias derived the following upper bound:
formula_18
which subtracts the maximum weight of a star spanning tree on a complete graph with formula_13 nodes (where the edge weights are given by formula_19) from the sum of the marginal probabilities formula_20.
Hunter–Worsley tightened this upper bound by optimizing over formula_21 as follows:
formula_22
where formula_23 is the set of all spanning trees on the graph. These bounds are not the tightest possible with general bivariates formula_19, even when feasibility is guaranteed, as shown in Boros et al. However, when the variables are pairwise independent (formula_24), Ramachandra–Natarajan showed that the Kounias–Hunter–Worsley bound is tight by proving that the maximum probability of the union of events admits a closed-form expression given as:
where the probabilities are sorted in increasing order as formula_25. The tight bound in Eq. 1 depends only on the sum of the smallest formula_26 probabilities formula_27 and the largest probability formula_28. Thus, while ordering of the probabilities plays a role in the derivation of the bound, the ordering among the smallest formula_26 probabilities formula_29 is inconsequential since only their sum is used.
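As a numerical illustration, the Python sketch below evaluates the Kounias bound for four pairwise-independent events with illustrative marginal probabilities and compares it with the simple Boole–Fréchet union bound; the Hunter–Worsley refinement would subtract the weight of the heaviest spanning tree rather than of the heaviest star.
```python
import numpy as np

p = np.array([0.10, 0.15, 0.20, 0.25])   # illustrative marginals P(A_i)
pij = np.outer(p, p)                     # pairwise independence: P(A_i and A_j) = p_i * p_j
n = len(p)

boole = min(p.sum(), 1.0)                # Boole–Fréchet union bound

# Kounias: subtract the heaviest "star" of bivariate terms centred at one index j.
heaviest_star = max(sum(pij[i, j] for i in range(n) if i != j) for j in range(n))
kounias = p.sum() - heaviest_star

print(f"Boole bound   = {boole:.4f}")    # 0.7000
print(f"Kounias bound = {kounias:.4f}")  # 0.7000 - 0.1125 = 0.5875
```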
Comparison with the Boole–Fréchet union bound.
It is useful to compare the smallest bounds on the probability of the union with arbitrary dependence and with pairwise independence, respectively. The tightest Boole–Fréchet upper union bound (assuming only univariate information) is given as:
As shown in Ramachandra–Natarajan, it can be easily verified that the ratio of the two tight bounds in Eq. 2 and Eq. 1 is upper bounded by formula_30, where the maximum value of formula_30 is attained when
formula_31, formula_32
where the probabilities are sorted in increasing order as formula_25. In other words, in the best-case scenario, the pairwise independence bound in Eq. 1 provides an improvement of formula_33 over the univariate bound in Eq. 2.
Generalization.
More generally, we can talk about "k"-wise independence, for any "k" ≥ 2. The idea is similar: a set of random variables is "k"-wise independent if every subset of size "k" of those variables is independent. "k"-wise independence has been used in theoretical computer science, where it was used to prove a theorem about the problem MAXEkSAT.
"k"-wise independence is used in the proof that k-independent hashing functions are secure unforgeable message authentication codes.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F_{X,Y}(x,y)"
},
{
"math_id": 1,
"text": "F_{X,Y}(x,y) = F_X(x) F_Y(y),"
},
{
"math_id": 2,
"text": "f_{X,Y}(x,y)"
},
{
"math_id": 3,
"text": "f_{X,Y}(x,y) = f_X(x) f_Y(y)."
},
{
"math_id": 4,
"text": "Z = X \\oplus Y"
},
{
"math_id": 5,
"text": "(X,Y,Z)=\\left\\{\\begin{matrix}\n(0,0,0) & \\text{with probability}\\ 1/4, \\\\\n(0,1,1) & \\text{with probability}\\ 1/4, \\\\\n(1,0,1) & \\text{with probability}\\ 1/4, \\\\\n(1,1,0) & \\text{with probability}\\ 1/4.\n\\end{matrix}\\right."
},
{
"math_id": 6,
"text": "f_X(0)=f_Y(0)=f_Z(0)=1/2,"
},
{
"math_id": 7,
"text": "f_X(1)=f_Y(1)=f_Z(1)=1/2."
},
{
"math_id": 8,
"text": " f_{X,Y}=f_{X,Z}=f_{Y,Z}, "
},
{
"math_id": 9,
"text": "f_{X,Y}(0,0)=f_{X,Y}(0,1)=f_{X,Y}(1,0)=f_{X,Y}(1,1)=1/4."
},
{
"math_id": 10,
"text": "f_{X,Y,Z}(x,y,z) \\neq f_X(x)f_Y(y)f_Z(z),"
},
{
"math_id": 11,
"text": "\\{X,Y,Z\\}"
},
{
"math_id": 12,
"text": "\\{{A}_i, i \\in \\{1,2,...,n\\}\\}"
},
{
"math_id": 13,
"text": "n"
},
{
"math_id": 14,
"text": "\\mathbb{P}(A_{i})=p_i"
},
{
"math_id": 15,
"text": "i"
},
{
"math_id": 16,
"text": "\\mathbb{P}(A_{i} \\cap A_{j})=p_{ij}"
},
{
"math_id": 17,
"text": "(i,j)"
},
{
"math_id": 18,
"text": "\n \\mathbb{P}(\\displaystyle {\\cup}_iA_{i}) \\leq \\displaystyle \\sum_{i=1}^n p_{i}-\\underset {j\\in \\{1,2,..,n\\}}{\\max} \\sum_{i\\neq j} p_{ij},\n"
},
{
"math_id": 19,
"text": "p_{ij}"
},
{
"math_id": 20,
"text": "\\sum_i p_i"
},
{
"math_id": 21,
"text": "\\tau \\in T"
},
{
"math_id": 22,
"text": "\n\\mathbb{P}(\\displaystyle {\\cup}_i A_{i}) \\leq \\displaystyle \\sum_{i=1}^n p_{i}-\\underset {\\tau \\in T}{\\max}\\sum_{(i,j) \\in \\tau} p_{ij},\n"
},
{
"math_id": 23,
"text": "T"
},
{
"math_id": 24,
"text": "p_{ij}=p_ip_j"
},
{
"math_id": 25,
"text": " 0 \\leq p_{1} \\leq p_{2} \\leq \\ldots \\leq p_{n} \\leq 1"
},
{
"math_id": 26,
"text": "n-1"
},
{
"math_id": 27,
"text": "\\sum_{i=1}^{n-1} p_{i}"
},
{
"math_id": 28,
"text": "p_n"
},
{
"math_id": 29,
"text": "\\{p_1,p_2,...,p_{n-1}\\}"
},
{
"math_id": 30,
"text": "4/3"
},
{
"math_id": 31,
"text": "\\sum_{i=1}^{n-1} p_{i}=1/2"
},
{
"math_id": 32,
"text": "p_n=1/2"
},
{
"math_id": 33,
"text": "25\\%"
}
] | https://en.wikipedia.org/wiki?curid=804155 |
8042940 | List of fallacies | List of faulty argument types
A fallacy is the use of invalid or otherwise faulty reasoning in the construction of an argument. All forms of human communication can contain fallacies.
Because of their variety, fallacies are challenging to classify. They can be classified by their structure (formal fallacies) or content (informal fallacies). Informal fallacies, the larger group, may then be subdivided into categories such as improper presumption, faulty generalization, error in assigning causation, and relevance, among others.
The use of fallacies is common when the speaker's goal of achieving common agreement is more important to them than utilizing sound reasoning. When fallacies are used, the premise should be recognized as not well-grounded, the conclusion as unproven (but not necessarily false), and the argument as unsound.
Formal fallacies.
A formal fallacy is an error in the argument's form. All formal fallacies are types of non sequitur.
Propositional fallacies.
A propositional fallacy is an error that concerns compound propositions. For a compound proposition to be true, the truth values of its constituent parts must satisfy the relevant logical connectives that occur in it (most commonly: [and], [or], [not], [only if], [if and only if]). The following fallacies involve relations whose truth values are not guaranteed and therefore not guaranteed to yield true conclusions.
Types of propositional fallacies:
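One standard fallacy of this kind is affirming the consequent: from ""P" implies "Q"" together with "Q", one infers "P". The brute-force truth table below (an illustrative Python aside, not part of the original list) exhibits the case in which both premises are true while the conclusion is false:
```python
from itertools import product

# Affirming the consequent: premises (P -> Q) and Q, conclusion P.
# Material implication P -> Q is encoded as (not P) or Q.
counterexamples = [(P, Q)
                   for P, Q in product((False, True), repeat=2)
                   if ((not P) or Q) and Q and not P]
print(counterexamples)   # [(False, True)]: premises hold, yet P is false
```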
Quantification fallacies.
A quantification fallacy is an error in logic where the quantifiers of the premises are in contradiction to the quantifier of the conclusion.
Types of quantification fallacies:
Formal syllogistic fallacies.
Syllogistic fallacies – logical fallacies that occur in syllogisms.
Informal fallacies.
Informal fallacies – arguments that are logically unsound for lack of well-grounded premises.
Faulty generalizations.
Faulty generalization – reaching a conclusion from weak premises.
Questionable cause.
Questionable cause is a general type of error with many variants. Its primary basis is the confusion of association with causation, either by inappropriately deducing (or rejecting) causation or a broader failure to properly investigate the cause of an observed effect.
Relevance fallacies.
Red herring fallacies.
A red herring fallacy, one of the main subtypes of fallacies of relevance, is an error in logic where a proposition is, or is intended to be, misleading in order to make irrelevant or false inferences. This includes any logical inference based on fake arguments, intended to replace the lack of real arguments or to replace implicitly the subject of the discussion.
Red herring – introducing a second argument in response to the first argument that is irrelevant and draws attention away from the original topic (e.g.: saying "If you want to complain about the dishes I leave in the sink, what about the dirty clothes you leave in the bathroom?"). In jury trial, it is known as a Chewbacca defense. In political strategy, it is called a dead cat strategy.
See also.
<templatestyles src="Div col/styles.css"/>
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
"The following is a sample of books for further reading, selected for a combination of content, ease of access via the internet, and to provide an indication of published sources that interested readers may review. The titles of some books are self-explanatory. Good books on critical thinking commonly contain sections on fallacies, and some may be listed below." | [
{
"math_id": 0,
"text": "P \\lor \\neg P"
},
{
"math_id": 1,
"text": "P"
}
] | https://en.wikipedia.org/wiki?curid=8042940 |
8045333 | Ambulatory blood pressure | Technique for measuring blood pressure over regular intervals
Ambulatory blood pressure, as opposed to office blood pressure and home blood pressure, is the blood pressure over the course of the full 24-hour sleep-wake cycle. Ambulatory blood pressure monitoring (ABPM) measures blood pressure at regular intervals throughout the day and night. It avoids the white coat hypertension effect in which a patient's blood pressure is elevated during the examination process due to nervousness and anxiety caused by being in a clinical setting. ABPM can also detect the reverse condition, masked hypertension, where the patient has normal blood pressure during the examination but uncontrolled blood pressure outside the clinical setting, masking a high 24-hour average blood pressure. Out-of-office measurements are highly recommended as an adjunct to office measurements by almost all hypertension organizations.
Blood pressure variability.
24-hour, non-invasive ambulatory blood pressure (BP) monitoring allows estimates of cardiac risk factors including excessive BP variability or patterns of circadian variability known to increase risks of a cardiovascular event.
Nocturnal hypertension.
Ambulatory blood pressure monitoring allows blood pressure to be intermittently monitored during sleep and is useful to determine whether the patient is a "dipper" or "non-dipper"—that is to say, whether or not blood pressure falls at night compared to daytime values. A nighttime fall is normal and desirable. It correlates with relationship depth, and also other factors such as sleep quality, age, hypertensive status, marital status, and social network support. Absence of a nighttime dip is associated with poorer health outcomes; a 2011 study found increased mortality. Nocturnal hypertension is also associated with end organ damage, and is a much better indicator than the daytime blood pressure reading.
Target organ damage.
Readings revealing possible hypertension-related end organ damage, such as left ventricular hypertrophy or narrowing of the retinal arteries, are more likely to be obtained through ambulatory blood pressure monitoring than through clinical blood pressure measurement. Isolated clinical BP measurements are more subject to the general marked variability of BP measurements. Clinical measurements may be affected by the "white coat effect", a rise in the blood pressure of many patients due to the stress of being in the medical situation.
Overnight reduction or surge in blood pressure.
Optimal blood pressure fluctuates over a 24-hour sleep-wake cycle, with values rising in the daytime and falling after midnight. The reduction in early morning blood pressure compared with average daytime pressure is referred to as the night-time dip. Ambulatory blood pressure monitoring may reveal a blunted or abolished overnight dip in blood pressure. This is clinically useful information because non-dipping blood pressure is associated with a higher risk of left ventricular hypertrophy and cardiovascular mortality. By comparing the early morning pressures with average daytime pressures, a ratio can be calculated which is of value in assessing relative risk. Dipping patterns are classified by the percent of drop in pressure, and based on the resulting ratios a person may be clinically classified for treatment as a "non-dipper" (with a blood pressure drop of less than 10%), a "dipper", an "extreme dipper", or a "reverse dipper", as detailed in the chart below. Additionally, ambulatory monitoring may reveal an excessive morning blood pressure surge, which is associated with increased risk of stroke in elderly hypertensive people.
Classification of dipping in blood pressure is based on the American Heart Association's calculation, using systolic blood pressure (SBP) as follows:
formula_0
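A small Python sketch of this calculation; the cut-off values in the helper are the commonly quoted ones (roughly, reverse dipper below 0%, non-dipper 0–10%, dipper 10–20%, extreme dipper above 20%) and should be read as illustrative assumptions rather than as the chart's exact figures:
```python
def dip_percent(sbp_waking, sbp_sleeping):
    """Night-time dip as defined above: (1 - SBP_sleeping / SBP_waking) * 100%."""
    return (1.0 - sbp_sleeping / sbp_waking) * 100.0

def classify(dip):
    # Commonly quoted thresholds, used here for illustration only.
    if dip < 0:
        return "reverse dipper"
    if dip < 10:
        return "non-dipper"
    if dip <= 20:
        return "dipper"
    return "extreme dipper"

d = dip_percent(sbp_waking=135.0, sbp_sleeping=120.0)
print(round(d, 1), classify(d))   # 11.1 dipper
```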
Dippers have significantly lower all-cause mortality than non-dippers or reverse dippers; "... ambulatory blood pressure predicts mortality significantly better than clinic blood pressure."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Dip} = \\left(1 - \\frac{\\text{SBP}_\\text{Sleeping}}{\\text{SBP}_\\text{Waking}}\\right) \\times 100\\% "
}
] | https://en.wikipedia.org/wiki?curid=8045333 |
804755 | Antecedent (logic) | First half of an hypothetic statement (in logic)
An antecedent is the first half of a hypothetical proposition, whenever the if-clause precedes the then-clause. In some contexts the antecedent is called the protasis.
Examples:
This is a nonlogical formulation of a hypothetical proposition. In this case, the antecedent is P, and the consequent is Q. In the implication "formula_2 implies formula_3", formula_2 is called the antecedent and formula_3 is called the consequent. The antecedent and the consequent are connected via a logical connective to form a proposition.
"formula_4 is a man" is the antecedent for this proposition while "formula_4 is mortal" is the consequent of the proposition.
Here, "men have walked on the Moon" is the antecedent and "I am the king of France" is the consequent.
Let formula_5.
"formula_6" is the antecedent and "formula_7" is the consequent of this hypothetical proposition.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "\\psi"
},
{
"math_id": 4,
"text": "X"
},
{
"math_id": 5,
"text": "y=x+1"
},
{
"math_id": 6,
"text": "x=1"
},
{
"math_id": 7,
"text": "y=2"
}
] | https://en.wikipedia.org/wiki?curid=804755 |
8047696 | Sol-air temperature | Sol-air temperature ("T"sol-air) is a variable used to calculate cooling load of a building and determine the total heat gain through exterior surfaces. It is an improvement over:
formula_0
Where:
The above equation only takes into account the temperature differences and ignores two important parameters, being 1) solar radiative flux; and 2) infrared exchanges from the sky. The concept of "T"sol-air was thus introduced to enable these parameters to be included within an improved calculation. The following formula results:
formula_6
Where:
The value formula_11 just found can now be used to calculate the amount of heat transfer per unit area, as below:
formula_12
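A brief numerical sketch of the two equations above, in Python; every input value is an illustrative assumption rather than data from the text:
```python
def sol_air_temperature(t_out, absorptivity, irradiance, dq_ir, h_out):
    """T_sol-air = T_o + (a*I - dQ_ir) / h_o, as defined above."""
    return t_out + (absorptivity * irradiance - dq_ir) / h_out

# Illustrative values: 30 C outdoor air, a dark wall (a = 0.9) under 600 W/m^2
# of solar irradiance, 60 W/m^2 net infrared loss to the sky, h_o = 17 W/(m^2 K).
t_sol_air = sol_air_temperature(30.0, 0.9, 600.0, 60.0, 17.0)

# Heat flux into the surface, q/A = h_o * (T_sol-air - T_s), with T_s = 32 C.
q_per_area = 17.0 * (t_sol_air - 32.0)
print(round(t_sol_air, 1), round(q_per_area, 1))   # 58.2, 446.0
```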
An equivalent, and more useful equation for the net heat loss across the whole construction is:
formula_13
Where:
By expanding the above equation through substituting formula_19 the following heat loss equation is derived:
formula_20
The above equation is used for opaque facades, and renders the intermediate calculation of formula_19 unnecessary. The main advantage of this latter approach is that it avoids the need for a different outdoor temperature node for each facade. Thus, the solution scheme is kept simple, and the solar and sky radiation terms from all facades can be aggregated and distributed to internal temperature nodes as gains/losses. | [
{
"math_id": 0,
"text": "\\frac{q}{A} = h_o(T_o - T_s)"
},
{
"math_id": 1,
"text": "q"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "h_o"
},
{
"math_id": 4,
"text": "T_o"
},
{
"math_id": 5,
"text": "T_s"
},
{
"math_id": 6,
"text": "T_\\mathrm{sol-air} = T_o + \\frac{ (a \\cdot I - \\Delta Q_{ir})}{h_o}"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "I"
},
{
"math_id": 9,
"text": "\\Delta Q_{ir}"
},
{
"math_id": 10,
"text": "\\Delta Q_{ir} = F_r * h_r * \\Delta T_{o-sky}"
},
{
"math_id": 11,
"text": " T_\\mathrm{sol-air} "
},
{
"math_id": 12,
"text": "\\frac{q}{A} = h_o(T_\\mathrm{sol-air} - T_s)"
},
{
"math_id": 13,
"text": "\\frac{q}{A} = U_c(T_i - T_\\mathrm{sol-air})"
},
{
"math_id": 14,
"text": "U_c"
},
{
"math_id": 15,
"text": "T_i"
},
{
"math_id": 16,
"text": "\\Delta T_{o-sky}"
},
{
"math_id": 17,
"text": "F_r"
},
{
"math_id": 18,
"text": "h_r"
},
{
"math_id": 19,
"text": "T_\\mathrm{sol-air}"
},
{
"math_id": 20,
"text": "\\frac{q}{A} = U_c(T_i - T_o) - \\frac{U_c}{h_o} {[a \\cdot I - F_r \\cdot h_r \\cdot \\Delta T_{o-sky}]}"
}
] | https://en.wikipedia.org/wiki?curid=8047696 |
804778 | Consequent | Hypothetical proposition component
A consequent is the second half of a hypothetical proposition. In the standard form of such a proposition, it is the part that follows "then". In an implication, if "P" implies "Q", then "P" is called the antecedent and "Q" is called the consequent. In some contexts, the consequent is called the apodosis.
Examples:
If formula_0, then formula_1. formula_1 is the consequent of this hypothetical proposition.
Here, "formula_2 is an animal" is the consequent.
"They are alive" is the consequent.
The consequent in a hypothetical proposition is not necessarily a consequence of the antecedent.
"Fish speak Klingon" is the consequent here, but intuitively is not a consequence of (nor does it have anything to do with) the claim made in the antecedent that "monkeys are purple.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "Q"
},
{
"math_id": 2,
"text": "X"
}
] | https://en.wikipedia.org/wiki?curid=804778 |
8049071 | J (disambiguation) | J, or j, is the tenth letter of the English alphabet.
J may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title J.
{
"math_id": 0,
"text": "\\sqrt{-1}"
},
{
"math_id": 1,
"text": "\\bar{j}"
}
] | https://en.wikipedia.org/wiki?curid=8049071 |
8050547 | Aldose reductase | Enzyme
In enzymology, aldose reductase (or aldehyde reductase) (EC 1.1.1.21) is an enzyme in humans encoded by the gene AKR1B1. It is a cytosolic NADPH-dependent oxidoreductase that catalyzes the reduction of a variety of aldehydes and carbonyls, including monosaccharides, and is primarily known for catalyzing the reduction of glucose to sorbitol, the first step in the polyol pathway of glucose metabolism.
Reactions.
Aldose reductase catalyzes the NADPH-dependent conversion of glucose to sorbitol, the first step in the polyol pathway of glucose metabolism. The second and last step in the pathway is catalyzed by sorbitol dehydrogenase, which catalyzes the NAD-linked oxidation of sorbitol to fructose. Thus, the polyol pathway results in conversion of glucose to fructose with stoichiometric utilization of NADPH and production of NADH.
Galactose is also a substrate for the polyol pathway, but the corresponding keto sugar is not produced because sorbitol dehydrogenase is incapable of oxidizing galactitol. Nevertheless, aldose reductase can catalyze the reduction of galactose to galactitol.
Function.
The aldose reductase reaction, in particular the sorbitol produced, is important for the function of various organs in the body. For example, it is generally used as the first step in a synthesis of fructose from glucose; the second step is the oxidation of sorbitol to fructose catalyzed by sorbitol dehydrogenase. The main pathway from glucose to fructose (glycolysis) involves phosphorylation of glucose by hexokinase to form glucose 6-phosphate, followed by isomerization to fructose 6-phosphate and hydrolysis of the phosphate, but the sorbitol pathway is useful because it does not require the input of energy in the form of ATP:
Aldose reductase is also present in the lens, retina, Schwann cells of peripheral nerves, placenta and red blood cells.
In "Drosophila", CG6084 encoded a highly conserved protein of human Aldo-keto reductase 1B. dAKR1B in hemocytes, is necessary and sufficient for the increasement of plasma sugar alcohols after gut infection. Increased sorbitol subsequently activated Metalloprotease 2, which cleaves PGRP-LC to activate systemic immune response in fat bodies. Thus, aldose reductase provides a critical metabolic checkpoint in the global inflammatory response.
Enzyme structure.
Aldose reductase may be considered a prototypical enzyme of the aldo-keto reductase enzyme superfamily. The enzyme comprises 315 amino acid residues and folds into a β/α-barrel structural motif composed of eight parallel β strands. Adjacent strands are connected by eight peripheral α-helical segments running anti-parallel to the β sheet. The catalytic active site is situated in the barrel core. The NADPH cofactor is situated at the top of the β/α barrel, with the nicotinamide ring projecting down into the center of the barrel and the pyrophosphate straddling the barrel lip.
Enzyme mechanism.
The reaction mechanism of aldose reductase in the direction of aldehyde reduction follows a sequential ordered path where NADPH binds, followed by the substrate. Binding of NADPH induces a conformational change (Enzyme•NADPH → Enzyme*•NADPH) that involves hinge-like movement of a surface loop (residues 213–217) so as to cover a portion of the NADPH in a manner similar to that of a safety belt. The alcohol product is formed via a transfer of the pro-R hydride of NADPH to the re face of the substrate's carbonyl carbon. Following release of the alcohol product, another conformational change occurs (E*•NADP+ → E•NADP+) in order to release NADP+. Kinetic studies have shown that reorientation of this loop to permit release of NADP+ appears to represent the rate-limiting step in the direction of aldehyde reduction. As the rate of coenzyme release limits the catalytic rate, it can be seen that perturbation of interactions that stabilize coenzyme binding can have dramatic effects on the maximum velocity (Vmax).
The hydride that is transferred from NADPH to glucose comes from C-4 of the nicotinamide ring at the base of the hydrophobic cavity. Thus, the position of this carbon defines the enzyme's active site. There exist three residues in the enzyme within a suitable distance of the C-4 that could be potential proton donors: Tyr-48, His-110 and Cys-298. Evolutionary, thermodynamic and molecular modeling evidence predicted Tyr-48 as the proton donor. This prediction was confirmed by the results of mutagenesis studies. Thus, a hydrogen-bonding interaction between the phenolic hydroxyl group of Tyr-48 and the ammonium side chain of Lys-77 is thought to facilitate hydride transfer.
Role in diabetes.
Diabetes mellitus is recognized as a leading cause of new cases of blindness, and is associated with increased risk for painful neuropathy, heart disease and kidney failure. Many theories have been advanced to explain mechanisms leading to diabetic complications, including stimulation of glucose metabolism by the polyol pathway. Additionally, the enzyme is located in the eye (cornea, retina, lens), kidney, and the myelin sheath–tissues that are often involved in diabetic complications. Under normal glycemic conditions, only a small fraction of glucose is metabolized through the polyol pathway, as the majority is phosphorylated by hexokinase, and the resulting product, glucose-6-phosphate, is utilized as a substrate for glycolysis or pentose phosphate metabolism. However, in response to the chronic hyperglycemia found in diabetics, glucose flux through the polyol pathway is significantly increased. Up to 33% of total glucose utilization in some tissues can be through the polyol pathway.
Glucose concentrations are often elevated in diabetics, and aldose reductase has long been believed to be responsible for diabetic complications involving a number of organs. Many aldose reductase inhibitors have been developed as drug candidates, but virtually all have failed, although some, such as Epalrestat, are commercially available in several countries. Additional aldose reductase inhibitors such as Alrestatin, Exisulind, Imirestat, Zopolrestat, Tolrestat, Zenarestat, Caficrestat, Fidarestat, Govorestat, Ranirestat, Ponalrestat, Risarestat, Sorbinil, Berberine, Poliumoside, and Ganoderic acid are currently in clinical trials.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=8050547 |
8056148 | Differential ideal | In the theory of differential forms, a differential ideal "I" is an "algebraic ideal" in the ring of smooth differential forms on a smooth manifold, in other words a graded ideal in the sense of ring theory, that is further closed under exterior differentiation "d", meaning that for any form α in "I", the exterior derivative "d"α is also in "I".
In the theory of differential algebra, a differential ideal "I" in a differential ring "R" is an ideal which is mapped to itself by each differential operator.
Exterior differential systems and partial differential equations.
An exterior differential system consists of a smooth manifold formula_0 and a differential ideal
formula_1.
An integral manifold of an exterior differential system formula_2 consists of a submanifold formula_3 having the property that the pullback to formula_4 of all differential forms contained in formula_5 vanishes identically.
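A simple example: on R3 with coordinates ("x", "y", "z"), the differential ideal generated by the contact form "θ" = "dz" − "y" "dx" also contains "dθ" = −"dy" ∧ "dx". The curve "x" ↦ ("x", "f"′("x"), "f"("x")) obtained from the 1-jet of a smooth function "f" pulls "θ" back to "df" − "f"′ "dx" = 0, and pulls every 2-form back to zero, so it is an integral manifold of this exterior differential system; this is the one-dimensional case of the jet-space construction described below.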
One can express any partial differential equation system as an exterior differential system with independence condition. Suppose that we have a "k"th order partial differential equation system for maps formula_6, given by
formula_7.
The graph of the formula_8-jet formula_9 of any solution of this partial differential equation system is a submanifold formula_4 of the jet space, and is an integral manifold of the contact system formula_10 on the formula_8-jet bundle.
This idea allows one to analyze the properties of partial differential equations with methods of differential geometry. For instance, we can apply the Cartan–Kähler theorem to a system of partial differential equations by writing down the associated exterior differential system. We can frequently apply Cartan's equivalence method to exterior differential systems to study their symmetries and their diffeomorphism invariants.
Perfect differential ideals.
A differential ideal formula_11 is perfect if it has the property that if it contains an element formula_12 then it contains any element formula_13 such that formula_14 for some formula_15. | [
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": " I\\subset \\Omega^*(M) "
},
{
"math_id": 2,
"text": "(M,I)"
},
{
"math_id": 3,
"text": "N\\subset M"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "I"
},
{
"math_id": 6,
"text": " u: \\mathbb{R}^m \\rightarrow \\mathbb{R}^n"
},
{
"math_id": 7,
"text": " F^r\\left(x, u, \\frac{\\partial^{|I|}u }{\\partial x^I}\\right)=0, \\quad 1\\le |I|\\le k "
},
{
"math_id": 8,
"text": "k"
},
{
"math_id": 9,
"text": "(u^a,p^a_i,\\dots,p^a_I)=(u^a(x),\\frac{\\partial u^a}{\\partial x^i},\\dots,\\frac{\\partial^{|I|}u }{\\partial x^I})_{1\\le |I|\\le k}"
},
{
"math_id": 10,
"text": "du^a-p^a_i dx^i,\\dots,dp^a_I-p^p_{Ij} dx^j{}_{1\\le |I|\\le k-1}"
},
{
"math_id": 11,
"text": "I \\, "
},
{
"math_id": 12,
"text": " a \\in I "
},
{
"math_id": 13,
"text": " b \\in I "
},
{
"math_id": 14,
"text": " b^n = a "
},
{
"math_id": 15,
"text": " n > 0 \\, "
}
] | https://en.wikipedia.org/wiki?curid=8056148 |
805700 | Green's identities | Vector calculus formulas relating the bulk with the boundary of a region
In mathematics, Green's identities are a set of three identities in vector calculus relating the bulk with the boundary of a region on which differential operators act. They are named after the mathematician George Green, who discovered Green's theorem.
Green's first identity.
This identity is derived from the divergence theorem applied to the vector field F = "ψ" ∇"φ" while using an extension of the product rule that ∇ ⋅ ("ψ" X ) = ∇"ψ" ⋅X + "ψ" ∇⋅X: Let φ and ψ be scalar functions defined on some region "U" ⊂ R"d", and suppose that φ is twice continuously differentiable, and ψ is once continuously differentiable. Using the product rule above, but letting X = ∇"φ", integrate ∇⋅("ψ"∇"φ") over U. Then
formula_0
where ∆ ≡ ∇2 is the Laplace operator, ∂"U" is the boundary of region U, n is the outward pointing unit normal to the surface element "dS" and "dS = ndS" is the oriented surface element.
This theorem is a special case of the divergence theorem, and is essentially the higher dimensional equivalent of integration by parts with ψ and the gradient of φ replacing u and v.
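As a concrete check, the following SymPy sketch verifies Green's first identity symbolically for two polynomial functions on the unit square in R2, splitting the boundary integral over the four edges with their outward normals; the functions chosen are arbitrary illustrative examples.
import sympy as sp
x, y = sp.symbols('x y')
phi = x**2*y + y**3        # twice continuously differentiable (arbitrary choice)
psi = x*y**2               # once continuously differentiable (arbitrary choice)
lap_phi = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
grad_dot = sp.diff(psi, x)*sp.diff(phi, x) + sp.diff(psi, y)*sp.diff(phi, y)
volume = sp.integrate(psi*lap_phi + grad_dot, (x, 0, 1), (y, 0, 1))
dphix, dphiy = sp.diff(phi, x), sp.diff(phi, y)
# boundary term: psi * (grad phi . n) over the four edges of the unit square
bottom = sp.integrate((psi*(-dphiy)).subs(y, 0), (x, 0, 1))   # outward normal (0, -1)
top    = sp.integrate((psi*(+dphiy)).subs(y, 1), (x, 0, 1))   # outward normal (0, +1)
left   = sp.integrate((psi*(-dphix)).subs(x, 0), (y, 0, 1))   # outward normal (-1, 0)
right  = sp.integrate((psi*(+dphix)).subs(x, 1), (y, 0, 1))   # outward normal (+1, 0)
surface = bottom + top + left + right
print(sp.simplify(volume - surface))   # 0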
Note that Green's first identity above is a special case of the more general identity derived from the divergence theorem by substituting F = "ψ"Γ,
formula_1
Green's second identity.
If φ and ψ are both twice continuously differentiable on "U" ⊂ R3, and ε is once continuously differentiable, one may choose F = "ψε" ∇"φ" − "φε" ∇"ψ" to obtain
formula_2
For the special case of "ε" = 1 all across "U" ⊂ R3, then,
formula_3
In the equation above, ∂"φ"/∂n is the directional derivative of φ in the direction of the outward pointing surface normal n of the surface element "dS",
formula_4
Explicitly incorporating this definition in the Green's second identity with "ε" = 1 results in
formula_5
In particular, this demonstrates that the Laplacian is a self-adjoint operator in the "L"2 inner product for functions vanishing on the boundary so that the right hand side of the above identity is zero.
Green's third identity.
Green's third identity derives from the second identity by choosing "φ" = "G", where the Green's function G is taken to be a fundamental solution of the Laplace operator, ∆. This means that:
formula_6
For example, in R3, a solution has the form
formula_7
Green's third identity states that if ψ is a function that is twice continuously differentiable on U, then
formula_8
A simplification arises if ψ is itself a harmonic function, i.e. a solution to the Laplace equation. Then ∇2"ψ" = 0 and the identity simplifies to
formula_9
The second term in the integral above can be eliminated if G is chosen to be the Green's function that vanishes on the boundary of U (Dirichlet boundary condition),
formula_10
This form is used to construct solutions to Dirichlet boundary condition problems. Solutions for Neumann boundary condition problems may also be simplified, though the divergence theorem applied to the differential equation defining Green's functions shows that the Green's function cannot integrate to zero on the boundary, and hence cannot vanish on the boundary. See Green's functions for the Laplacian for a detailed argument, with an alternative.
It can be further verified that the above identity also applies when ψ is a solution to the Helmholtz equation or wave equation and G is the appropriate Green's function. In such a context, this identity is the mathematical expression of the Huygens principle, and leads to Kirchhoff's diffraction formula and other approximations.
On manifolds.
Green's identities hold on a Riemannian manifold. In this setting, the first two are
formula_11
where u and v are smooth real-valued functions on M, dV is the volume form compatible with the metric, formula_12 is the induced volume form on the boundary of M, N is the outward oriented unit vector field normal to the boundary, and Δ"u" = div(grad "u") is the Laplacian.
Green's vector identity.
Green's second identity establishes a relationship between second and (the divergence of) first order derivatives of two scalar functions. In differential form
formula_13
where "pm" and "qm" are two arbitrary twice continuously differentiable scalar fields. This identity is of great importance in physics because continuity equations can thus be established for scalar fields such as mass or energy.
In vector diffraction theory, two versions of Green's second identity are introduced.
One variant invokes the divergence of a cross product and states a relationship in terms of the curl-curl of the field
formula_14
This equation can be written in terms of the Laplacians,
formula_15
However, the terms
formula_16
could not be readily written in terms of a divergence.
The other approach introduces bi-vectors; this formulation requires a dyadic Green function. The derivation presented here avoids these problems.
Consider that the scalar fields in Green's second identity are the Cartesian components of vector fields, i.e.,
formula_17
Summing up the equation for each component, we obtain
formula_18
The LHS according to the definition of the dot product may be written in vector form as
formula_19
The RHS is a bit more awkward to express in terms of vector operators. Due to the distributivity of the divergence operator over addition, the sum of the divergences is equal to the divergence of the sum, i.e.,
formula_20
Recall the vector identity for the gradient of a dot product,
formula_21
which, written out in vector components is given by
formula_22
This result is similar to what we wish to evince in vector terms 'except' for the minus sign. Since the differential operators in each term act either over one vector (say formula_23’s) or the other (formula_24’s), the contribution to each term must be
formula_25
formula_26
These results can be rigorously proven to be correct through evaluation of the vector components. Therefore, the RHS can be written in vector form as
formula_27
Putting together these two results, a result analogous to Green's theorem for scalar fields is obtained,
Theorem for vector fields: formula_28
The curl of a cross product can be written as
formula_29
Green's vector identity can then be rewritten as
formula_30
Since the divergence of a curl is zero, the third term vanishes to yield Green's vector identity:
formula_31
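The identity just obtained can be checked symbolically; the short SymPy sketch below does this for two arbitrarily chosen polynomial vector fields, computing the vector Laplacian component-wise (which is valid for Cartesian components).
import sympy as sp
from sympy.vector import CoordSys3D, divergence, curl
N = CoordSys3D('N')
x, y, z = N.x, N.y, N.z
def vector_laplacian(V):
    # component-wise scalar Laplacian of a Cartesian vector field
    comps = [V.dot(e) for e in (N.i, N.j, N.k)]
    laps = [sum(sp.diff(c, v, 2) for v in (x, y, z)) for c in comps]
    return laps[0]*N.i + laps[1]*N.j + laps[2]*N.k
P = x*y*N.i + z**2*N.j + x*z*N.k      # arbitrary polynomial vector fields
Q = y*z*N.i + x**3*N.j + y**2*N.k
lhs = P.dot(vector_laplacian(Q)) - Q.dot(vector_laplacian(P))
rhs = divergence(P*divergence(Q) - Q*divergence(P)
                 + P.cross(curl(Q)) - Q.cross(curl(P)))
print(sp.simplify(lhs - rhs))   # 0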
With a similar procedure, the Laplacian of the dot product can be expressed in terms of the Laplacians of the factors
formula_32
As a corollary, the awkward terms can now be written in terms of a divergence by comparison with the vector Green equation,
formula_33
This result can be verified by expanding the divergence of a scalar times a vector on the RHS.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\int_U \\left( \\psi \\, \\Delta \\varphi + \\nabla \\psi \\cdot \\nabla \\varphi \\right)\\, dV = \\oint_{\\partial U} \\psi \\left( \\nabla \\varphi \\cdot \\mathbf{n} \\right)\\, dS=\\oint_{\\partial U}\\psi\\,\\nabla\\varphi\\cdot d\\mathbf{S} "
},
{
"math_id": 1,
"text": "\\int_U \\left( \\psi \\, \\nabla \\cdot \\mathbf{\\Gamma} + \\mathbf{\\Gamma} \\cdot \\nabla \\psi\\right)\\, dV = \\oint_{\\partial U} \\psi \\left( \\mathbf{\\Gamma} \\cdot \\mathbf{n} \\right)\\, dS=\\oint_{\\partial U}\\psi\\mathbf{\\Gamma}\\cdot d\\mathbf{S} ~. "
},
{
"math_id": 2,
"text": " \\int_U \\left[ \\psi \\, \\nabla \\cdot \\left( \\varepsilon \\, \\nabla \\varphi \\right) - \\varphi \\, \\nabla \\cdot \\left( \\varepsilon \\, \\nabla \\psi \\right) \\right]\\, dV = \\oint_{\\partial U} \\varepsilon \\left( \\psi {\\partial \\varphi \\over \\partial \\mathbf{n}} - \\varphi {\\partial \\psi \\over \\partial \\mathbf{n}}\\right)\\, dS. "
},
{
"math_id": 3,
"text": " \\int_U \\left( \\psi \\, \\nabla^2 \\varphi - \\varphi \\, \\nabla^2 \\psi\\right)\\, dV = \\oint_{\\partial U} \\left( \\psi {\\partial \\varphi \\over \\partial \\mathbf{n}} - \\varphi {\\partial \\psi \\over \\partial \\mathbf{n}}\\right)\\, dS. "
},
{
"math_id": 4,
"text": " {\\partial \\varphi \\over \\partial \\mathbf{n}} = \\nabla \\varphi \\cdot \\mathbf{n} = \\nabla_\\mathbf{n}\\varphi."
},
{
"math_id": 5,
"text": " \\int_U \\left( \\psi \\, \\nabla^2 \\varphi - \\varphi \\, \\nabla^2 \\psi\\right)\\, dV = \\oint_{\\partial U} \\left( \\psi \\nabla \\varphi - \\varphi \\nabla \\psi\\right)\\cdot d\\mathbf{S}. "
},
{
"math_id": 6,
"text": " \\Delta G(\\mathbf{x},\\boldsymbol{\\eta}) = \\delta(\\mathbf{x} - \\boldsymbol{\\eta}) ~."
},
{
"math_id": 7,
"text": "G(\\mathbf{x},\\boldsymbol{\\eta})= \\frac{-1}{4 \\pi \\|\\mathbf{x} - \\boldsymbol{\\eta} \\|} ~."
},
{
"math_id": 8,
"text": " \\int_U \\left[ G(\\mathbf{y},\\boldsymbol{\\eta}) \\, \\Delta \\psi(\\mathbf{y}) \\right] \\, dV_\\mathbf{y} - \\psi(\\boldsymbol{\\eta})= \\oint_{\\partial U} \\left[ G(\\mathbf{y},\\boldsymbol{\\eta}) {\\partial \\psi \\over \\partial \\mathbf{n}} (\\mathbf{y}) - \\psi(\\mathbf{y}) {\\partial G(\\mathbf{y},\\boldsymbol{\\eta}) \\over \\partial \\mathbf{n}} \\right]\\, dS_\\mathbf{y}."
},
{
"math_id": 9,
"text": "\\psi(\\boldsymbol{\\eta})= \\oint_{\\partial U} \\left[\\psi(\\mathbf{y}) \\frac{\\partial G(\\mathbf{y},\\boldsymbol{\\eta})}{\\partial \\mathbf{n}} - G(\\mathbf{y},\\boldsymbol{\\eta}) \\frac{\\partial \\psi}{\\partial \\mathbf{n}} (\\mathbf{y}) \\right]\\, dS_\\mathbf{y}."
},
{
"math_id": 10,
"text": "\\psi(\\boldsymbol{\\eta}) = \\oint_{\\partial U} \\psi(\\mathbf{y}) \\frac{\\partial G(\\mathbf{y},\\boldsymbol{\\eta})}{\\partial \\mathbf{n}} \\, dS_\\mathbf{y} ~."
},
{
"math_id": 11,
"text": "\\begin{align}\n\\int_M u \\,\\Delta v\\, dV + \\int_M \\langle\\nabla u, \\nabla v\\rangle\\, dV &= \\int_{\\partial M} u N v \\, d\\widetilde{V} \\\\\n\\int_M \\left (u \\, \\Delta v - v \\, \\Delta u \\right )\\, dV &= \\int_{\\partial M}(u N v - v N u) \\, d \\widetilde{V}\n\\end{align}"
},
{
"math_id": 12,
"text": "d\\widetilde{V}"
},
{
"math_id": 13,
"text": "p_m \\, \\Delta q_m-q_m \\, \\Delta p_m = \\nabla\\cdot\\left(p_m\\nabla q_m-q_m \\, \\nabla p_m\\right),"
},
{
"math_id": 14,
"text": "\\mathbf{P}\\cdot\\left(\\nabla\\times\\nabla\\times\\mathbf{Q}\\right)-\\mathbf{Q}\\cdot\\left(\\nabla\\times\\nabla\\times \\mathbf{P}\\right) = \\nabla\\cdot\\left(\\mathbf{Q}\\times\\left(\\nabla\\times\\mathbf{P}\\right)-\\mathbf{P}\\times\\left(\\nabla\\times\\mathbf{Q}\\right)\\right)."
},
{
"math_id": 15,
"text": "\\mathbf{P}\\cdot\\Delta \\mathbf{Q}-\\mathbf{Q}\\cdot\\Delta \\mathbf{P} + \\mathbf{Q} \\cdot \\left[\\nabla\\left(\\nabla\\cdot\\mathbf{P}\\right)\\right]-\\mathbf{P} \\cdot \\left[ \\nabla \\left(\\nabla \\cdot \\mathbf{Q}\\right)\\right] = \\nabla \\cdot \\left( \\mathbf{P}\\times \\left(\\nabla\\times\\mathbf{Q}\\right) - \\mathbf{Q}\\times\\left(\\nabla\\times\\mathbf{P}\\right)\\right)."
},
{
"math_id": 16,
"text": "\\mathbf{Q}\\cdot\\left[\\nabla\\left(\\nabla\\cdot\\mathbf{P}\\right)\\right]-\\mathbf{P} \\cdot \\left[\\nabla\\left(\\nabla\\cdot\\mathbf{Q}\\right)\\right],"
},
{
"math_id": 17,
"text": "\\mathbf{P}=\\sum_m p_{m}\\hat{\\mathbf{e}}_m, \\qquad \\mathbf{Q}=\\sum_m q_m \\hat{\\mathbf{e}}_m."
},
{
"math_id": 18,
"text": "\\sum_m \\left[p_m\\Delta q_m - q_m\\Delta p_m\\right]=\\sum_m \\nabla \\cdot \\left( p_m \\nabla q_m-q_m\\nabla p_m \\right)."
},
{
"math_id": 19,
"text": "\\sum_m \\left[p_m \\, \\Delta q_m-q_m \\, \\Delta p_m\\right] = \\mathbf{P} \\cdot \\Delta\\mathbf{Q}-\\mathbf{Q}\\cdot\\Delta\\mathbf{P}."
},
{
"math_id": 20,
"text": "\\sum_m \\nabla\\cdot\\left(p_m \\nabla q_m-q_m\\nabla p_m\\right)= \\nabla \\cdot \\left(\\sum_m p_m \\nabla q_m - \\sum_m q_m \\nabla p_m \\right)."
},
{
"math_id": 21,
"text": "\\nabla \\left(\\mathbf{P} \\cdot \\mathbf{Q} \\right) = \\left( \\mathbf{P} \\cdot \\nabla \\right) \\mathbf{Q} + \\left( \\mathbf{Q} \\cdot \\nabla \\right) \\mathbf{P} + \\mathbf{P}\\times \\left(\\nabla\\times\\mathbf{Q}\\right)+\\mathbf{Q}\\times \\left(\\nabla\\times\\mathbf{P}\\right),"
},
{
"math_id": 22,
"text": "\\nabla\\left(\\mathbf{P}\\cdot\\mathbf{Q}\\right)=\\nabla\\sum_m p_m q_m = \\sum_m p_m \\nabla q_m + \\sum_m q_m \\nabla p_m."
},
{
"math_id": 23,
"text": "p_m"
},
{
"math_id": 24,
"text": "q_m"
},
{
"math_id": 25,
"text": "\\sum_m p_m \\nabla q_m = \\left(\\mathbf{P} \\cdot \\nabla\\right) \\mathbf{Q} + \\mathbf{P} \\times \\left(\\nabla \\times \\mathbf{Q}\\right),"
},
{
"math_id": 26,
"text": "\\sum_m q_m \\nabla p_m = \\left(\\mathbf{Q} \\cdot \\nabla\\right) \\mathbf{P} + \\mathbf{Q} \\times \\left(\\nabla \\times \\mathbf{P}\\right)."
},
{
"math_id": 27,
"text": " \\sum_m p_m \\nabla q_m - \\sum_m q_m \\nabla p_m = \\left(\\mathbf{P} \\cdot \\nabla\\right) \\mathbf{Q} + \\mathbf{P}\\times \\left(\\nabla\\times\\mathbf{Q}\\right)-\\left( \\mathbf{Q} \\cdot \\nabla\\right) \\mathbf{P} - \\mathbf{Q}\\times \\left(\\nabla\\times\\mathbf{P}\\right)."
},
{
"math_id": 28,
"text": "\\color{OliveGreen}\\mathbf{P} \\cdot \\Delta \\mathbf{Q} - \\mathbf{Q} \\cdot \\Delta \\mathbf{P} = \\left[ \\left(\\mathbf{P} \\cdot \\nabla\\right) \\mathbf{Q} + \\mathbf{P}\\times \\left(\\nabla\\times\\mathbf{Q}\\right)-\\left( \\mathbf{Q} \\cdot \\nabla\\right) \\mathbf{P} - \\mathbf{Q}\\times \\left(\\nabla\\times\\mathbf{P}\\right)\\right]."
},
{
"math_id": 29,
"text": "\\nabla\\times\\left(\\mathbf{P}\\times\\mathbf{Q}\\right)=\\left(\\mathbf{Q}\\cdot\\nabla\\right)\\mathbf{P}-\\left(\\mathbf{P}\\cdot\\nabla\\right)\\mathbf{Q}+\\mathbf{P}\\left(\\nabla\\cdot\\mathbf{Q}\\right)-\\mathbf{Q}\\left(\\nabla\\cdot\\mathbf{P}\\right);"
},
{
"math_id": 30,
"text": "\\mathbf{P}\\cdot\\Delta \\mathbf{Q}-\\mathbf{Q}\\cdot\\Delta \\mathbf{P}= \\nabla \\cdot \\left[\\mathbf{P} \\left(\\nabla\\cdot\\mathbf{Q}\\right)-\\mathbf{Q} \\left( \\nabla \\cdot \\mathbf{P}\\right)-\\nabla \\times \\left( \\mathbf{P} \\times \\mathbf{Q} \\right) +\\mathbf{P}\\times\\left(\\nabla\\times\\mathbf{Q}\\right) - \\mathbf{Q}\\times \\left(\\nabla\\times\\mathbf{P}\\right)\\right]."
},
{
"math_id": 31,
"text": "\\color{OliveGreen}\\mathbf{P}\\cdot\\Delta\\mathbf{Q}-\\mathbf{Q} \\cdot \\Delta \\mathbf{P} =\\nabla\\cdot\\left[\\mathbf{P}\\left(\\nabla\\cdot\\mathbf{Q}\\right)-\\mathbf{Q} \\left( \\nabla \\cdot \\mathbf{P} \\right) + \\mathbf{P}\\times \\left(\\nabla\\times\\mathbf{Q}\\right) - \\mathbf{Q}\\times\\left(\\nabla\\times\\mathbf{P}\\right)\\right]."
},
{
"math_id": 32,
"text": " \\Delta \\left( \\mathbf{P} \\cdot \\mathbf{Q} \\right) = \\mathbf{P} \\cdot \\Delta \\mathbf{Q}-\\mathbf{Q}\\cdot\\Delta \\mathbf{P} + 2\\nabla \\cdot \\left[ \\left( \\mathbf{Q} \\cdot \\nabla \\right) \\mathbf{P} + \\mathbf{Q} \\times \\nabla \\times \\mathbf{P} \\right]."
},
{
"math_id": 33,
"text": "\\mathbf{P}\\cdot \\left[ \\nabla \\left(\\nabla \\cdot \\mathbf{Q} \\right) \\right] - \\mathbf{Q} \\cdot \\left[ \\nabla \\left( \\nabla \\cdot \\mathbf{P} \\right) \\right] = \\nabla \\cdot\\left[\\mathbf{P}\\left(\\nabla\\cdot\\mathbf{Q}\\right)-\\mathbf{Q} \\left( \\nabla \\cdot \\mathbf{P} \\right) \\right]."
}
] | https://en.wikipedia.org/wiki?curid=805700 |
8057418 | Quantum potential | Quantum mechanical statistic
The quantum potential or quantum potentiality is a central concept of the de Broglie–Bohm formulation of quantum mechanics, introduced by David Bohm in 1952.
Initially presented under the name "quantum-mechanical potential", subsequently "quantum potential", it was later elaborated upon by Bohm and Basil Hiley in its interpretation as an information potential which acts on a quantum particle. It is also referred to as "quantum potential energy", "Bohm potential", "quantum Bohm potential" or "Bohm quantum potential".
In the framework of the de Broglie–Bohm theory, the quantum potential is a term within the Schrödinger equation which acts to guide the movement of quantum particles. The quantum potential approach introduced by Bohm provides a physically less fundamental exposition of the idea presented by Louis de Broglie: de Broglie had postulated in 1925 that the relativistic wave function defined on spacetime represents a pilot wave which guides a quantum particle, represented as an oscillating peak in the wave field, but he had subsequently abandoned his approach because he was unable to derive the guidance equation for the particle from a non-linear wave equation. The seminal articles of Bohm in 1952 introduced the quantum potential and included answers to the objections which had been raised against the pilot wave theory.
The Bohm quantum potential is closely linked with the results of other approaches, in particular relating to work by Erwin Madelung of 1927 and to work by Carl Friedrich von Weizsäcker of 1935.
Building on the interpretation of the quantum theory introduced by Bohm in 1952, David Bohm and Basil Hiley in 1975 presented how the concept of a "quantum potential" leads to the notion of an "unbroken wholeness of the entire universe", proposing that the fundamental new quality introduced by quantum physics is nonlocality.
Quantum potential as part of the Schrödinger equation.
The Schrödinger equation
formula_0
is re-written using the polar form for the wave function formula_1 with real-valued functions formula_2 and formula_3, where formula_2 is the amplitude (absolute value) of the wave function formula_4, and formula_5 its phase. This yields two equations: from the imaginary and real part of the Schrödinger equation follow the continuity equation and the quantum Hamilton–Jacobi equation respectively.
Continuity equation.
The imaginary part of the Schrödinger equation in polar form yields
formula_6
which, provided formula_7, can be interpreted as the continuity equation formula_8 for the probability density formula_9 and the velocity field formula_10.
Quantum Hamilton–Jacobi equation.
The real part of the Schrödinger equation in polar form yields a modified Hamilton–Jacobi equation
formula_11
also referred to as "quantum Hamilton–Jacobi equation". It differs from the classical Hamilton–Jacobi equation only by the term
formula_12
This term formula_13, called "quantum potential", thus depends on the curvature of the amplitude of the wave function.
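As a small numerical illustration of this curvature dependence, the Python sketch below evaluates formula_13 by finite differences for a one-dimensional Gaussian amplitude (an illustrative choice) and compares it with the corresponding closed-form expression for that special case.
import numpy as np
hbar, m, sigma = 1.0, 1.0, 1.0               # illustrative units
x = np.linspace(-5, 5, 2001)
dx = x[1] - x[0]
R = np.exp(-x**2 / (4*sigma**2))             # Gaussian amplitude; normalization cancels in Q
d2R = np.gradient(np.gradient(R, dx), dx)    # second derivative by repeated central differences
Q_numeric = -hbar**2 / (2*m) * d2R / R
# for this amplitude, R''/R = x^2/(4 sigma^4) - 1/(2 sigma^2), hence:
Q_exact = hbar**2/(4*m*sigma**2) - hbar**2 * x**2 / (8*m*sigma**4)
inner = slice(100, -100)                     # keep away from the grid edges
print(np.max(np.abs(Q_numeric[inner] - Q_exact[inner])))   # small (well below 1e-3 on this grid)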
In the limit formula_14, the function formula_3 is a solution of the (classical) Hamilton–Jacobi equation; therefore, the function formula_3 is also called the Hamilton–Jacobi function, or action, extended to quantum physics.
Properties.
Hiley emphasised several aspects that regard the quantum potential of a quantum particle:
In 1979, Hiley and his co-workers Philippidis and Dewdney presented a full calculation on the explanation of the two-slit experiment in terms of Bohmian trajectories that arise for each particle moving under the influence of the quantum potential, resulting in the well-known interference patterns.
Also the shift of the interference pattern which occurs in presence of a magnetic field in the Aharonov–Bohm effect could be explained as arising from the quantum potential.
Relation to the measurement process.
The collapse of the wave function of the Copenhagen interpretation of quantum theory is explained in the quantum potential approach by the demonstration that, after a measurement, "all the packets of the multi-dimensional wave function that do not correspond to the actual result of measurement have no effect on the particle" from then on. Bohm and Hiley pointed out that
<templatestyles src="Template:Blockquote/styles.css" />...the quantum potential can develop unstable bifurcation points, which separate classes of particle trajectories according to the "channels" into which they eventually enter and within which they stay. This explains how measurement is possible without "collapse" of the wave function, and how all sorts of quantum processes, such as transitions between states, fusion of two states into one and fission of one system into two, are able to take place without the need for a human observer.
Measurement then "involves a participatory transformation in which both the system under observation and the observing apparatus undergo a mutual participation so that the trajectories behave in a correlated manner, becoming correlated and separated into different, non-overlapping sets (which we call 'channels')".
Quantum potential of an n-particle system.
The Schrödinger wave function of a many-particle quantum system cannot be represented in ordinary three-dimensional space. Rather, it is represented in configuration space, with three dimensions per particle. A single point in configuration space thus represents the configuration of the entire n-particle system as a whole.
A two-particle wave function formula_15 of identical particles of mass formula_16 has the quantum potential
formula_17
where formula_18 and formula_19 refer to particle 1 and particle 2 respectively. This expression generalizes in a straightforward manner to formula_20 particles:
formula_21
If the wave function of two or more particles is separable, the system's total quantum potential becomes the sum of the quantum potentials of the individual particles. Exact separability is extremely unphysical given that interactions between the system and its environment destroy the factorization; however, a wave function that is a superposition of several wave functions of approximately disjoint support will factorize approximately.
Derivation for a separable quantum system.
That the wave function is separable means that formula_4 factorizes in the form formula_22. Then it follows that also formula_2 factorizes, and the system's total quantum potential becomes the sum of the quantum potentials of the two particles.
formula_23
If the wave function is separable, that is, if formula_4 factorizes in the form formula_22, the two one-particle systems behave independently. More generally, the quantum potential of an formula_20-particle system with a separable wave function is the sum of formula_20 quantum potentials, separating the system into formula_20 independent one-particle systems.
Formulation in terms of probability density.
Quantum potential in terms of the probability density function.
Bohm, as well as other physicists after him, have sought to provide evidence that the Born rule linking formula_2 to the probability density function
formula_24
can be understood, in a pilot wave formulation, as not representing a basic law, but rather a "theorem" (called quantum equilibrium hypothesis) which applies when a "quantum equilibrium" is reached during the course of the time development under the Schrödinger equation. With Born's rule, and straightforward application of the chain and product rules
formula_25
the quantum potential, expressed in terms of the probability density function, becomes:
formula_26
Quantum force.
The quantum force formula_27, expressed in terms of the probability distribution, amounts to:
formula_28
Formulation in configuration space and in momentum space, as the result of projections.
M. R. Brown and B. Hiley showed that, as an alternative to its formulation in terms of configuration space (formula_29-space), the quantum potential can also be formulated in terms of momentum space (formula_30-space).
In line with David Bohm's approach, Basil Hiley and mathematician Maurice de Gosson showed that the quantum potential can be seen as a consequence of a projection of an underlying structure, more specifically of a non-commutative algebraic structure, onto a subspace such as ordinary space (formula_29-space). In algebraic terms, the quantum potential can be seen as arising from the relation between implicate and explicate orders: if a non-commutative algebra is employed to describe the non-commutative structure of the quantum formalism, it turns out that it is impossible to define an underlying space, but that rather "shadow spaces" (homomorphic spaces) can be constructed and that in so doing the quantum potential appears. The quantum potential approach can be seen as a way to construct the shadow spaces. The quantum potential thus results as a distortion due to the projection of the underlying space into formula_29-space, in a similar manner as a Mercator projection inevitably results in a distortion in a geographical map. There exists complete symmetry between the formula_29-representation and the formula_30-representation, and the quantum potential as it appears in configuration space can be seen as arising from the dispersion of the momentum formula_30-representation.
The approach has been applied to extended phase space, also in terms of a Duffin–Kemmer–Petiau algebra approach.
Relation to other quantities and theories.
Relation to the Fisher information.
It can be shown that the mean value of the quantum potential formula_31 is proportional to the probability density's Fisher information about the observable formula_32
formula_33
Using this definition for the Fisher information, we can write:
formula_34
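This relation can be checked numerically; the sketch below does so for a one-dimensional Gaussian probability density (an illustrative choice), for which the two printed pairs should agree closely.
import numpy as np
hbar, m, sigma = 1.0, 1.0, 1.5
x = np.linspace(-10*sigma, 10*sigma, 20001)
dx = x[1] - x[0]
rho = np.exp(-x**2 / (2*sigma**2)) / (sigma * np.sqrt(2*np.pi))   # Gaussian density
# Fisher information: integral of rho * (d ln(rho)/dx)^2
fisher = (rho * np.gradient(np.log(rho), dx)**2).sum() * dx
# quantum potential Q = -(hbar^2/2m) (sqrt(rho))''/sqrt(rho), and its mean value <Q>
sqrt_rho = np.sqrt(rho)
Q = -hbar**2 / (2*m) * np.gradient(np.gradient(sqrt_rho, dx), dx) / sqrt_rho
mean_Q = (rho * Q).sum() * dx
print(fisher, 1/sigma**2)                    # the Fisher information of a Gaussian is 1/sigma^2
print(mean_Q, hbar**2/(8*m) * fisher)        # <Q> = (hbar^2/8m) * Fisher information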
Relation to the Madelung pressure tensor.
In the Madelung equations presented by Erwin Madelung in 1927, the non-local quantum pressure tensor has the same mathematical form as the quantum potential. The underlying theory is different in that the Bohm approach describes particle trajectories whereas the equations of Madelung quantum hydrodynamics are the Euler equations of a fluid that describe its averaged statistical characteristics.
Relation to the von Weizsäcker correction.
In 1935, Carl Friedrich von Weizsäcker proposed the addition of an inhomogeneity term (sometimes referred to as a "von Weizsäcker correction") to the kinetic energy of the Thomas–Fermi (TF) theory of atoms.
The von Weizsäcker correction term is
formula_35
The correction term has also been derived as the first-order correction to the TF kinetic energy in a semi-classical correction to the Hartree–Fock theory.
It has been pointed out that the von Weizsäcker correction term at low density takes on the same form as the quantum potential.
Quantum potential as energy of internal motion associated with spin.
Giovanni Salesi, Erasmo Recami and co-workers showed in 1998 that, in agreement with König's theorem, the quantum potential can be identified with the kinetic energy of the internal motion ("zitterbewegung") associated with the spin of a spin-1/2 particle observed in a center-of-mass frame. More specifically, they showed that the internal "zitterbewegung" velocity for a spinning, non-relativistic particle of constant spin with no precession, and in the absence of an external field, has the squared value:
formula_36
from which the second term is shown to be of negligible size; then with formula_37 it follows that
formula_38
Salesi gave further details on this work in 2009.
In 1999, Salvatore Esposito generalized their result from spin-1/2 particles to particles of arbitrary spin, confirming the interpretation of the quantum potential as a kinetic energy for an internal motion. Esposito showed that (using the notation formula_39=1) the quantum potential can be written as:
formula_40
and that the causal interpretation of quantum mechanics can be reformulated in terms of a particle velocity
formula_41
where the "drift velocity" is
formula_42
and the "relative velocity" is formula_43, with
formula_44
and formula_45 representing the spin direction of the particle. In this formulation, according to Esposito, quantum mechanics must necessarily be interpreted in probabilistic terms, for the reason that a system's initial motion condition cannot be exactly determined. Esposito explained that "the quantum effects present in the Schrödinger equation are due to the presence of a peculiar spatial direction associated with the particle that, assuming the isotropy of space, can be identified with the spin of the particle itself". Esposito generalized it from matter particles to gauge particles, in particular photons, for which he showed that, if modelled as formula_46, with probability function formula_47, they can be understood in a quantum potential approach.
James R. Bogan, in 2002, published the derivation of a reciprocal transformation from the Hamilton-Jacobi equation of classical mechanics to the time-dependent Schrödinger equation of quantum mechanics which arises from a gauge transformation representing spin, under the simple requirement of conservation of probability. This spin-dependent transformation is a function of the quantum potential.
EP quantum mechanics with quantum potential as Schwarzian derivative.
In a different approach, the EP quantum mechanics formulated on the basis of an Equivalence Principle (EP), a quantum potential is written as:
formula_48
where formula_49 is the Schwarzian derivative, that is, formula_50. However, even in cases where this may equal
formula_51
it is stressed by E. Faraggi and M. Matone that this does not correspond with the usual quantum potential, as in their approach formula_52 is a solution to the Schrödinger equation but does "not" correspond to the wave function. This has been investigated further by E.R. Floyd for the classical limit formula_14, as well as by Robert Carroll.
Re-interpretation in terms of Clifford algebras.
B. Hiley and R. E. Callaghan re-interpret the role of the Bohm model and its notion of quantum potential in the framework of Clifford algebra, taking account of recent advances that include the work of David Hestenes on spacetime algebra. They show how, within a nested hierarchy of Clifford algebras formula_53, for each Clifford algebra an element of a minimal left ideal formula_54 and an element of a right ideal representing its Clifford conjugation formula_55 can be constructed, and from it the "Clifford density element" (CDE) formula_56, an element of the Clifford algebra which is isomorphic to the standard density matrix but independent of any specific representation. On this basis, bilinear invariants can be formed which represent properties of the system. Hiley and Callaghan distinguish bilinear invariants of a first kind, of which each stands for the expectation value of an element formula_57 of the algebra which can be formed as formula_58, and bilinear invariants of a second kind which are constructed with derivatives and represent momentum and energy. Using these terms, they reconstruct the results of quantum mechanics without depending on a particular representation in terms of a wave function nor requiring reference to an external Hilbert space. Consistent with earlier results, the quantum potential of a non-relativistic particle with spin (Pauli particle) is shown to have an additional spin-dependent term, and the momentum of a relativistic particle with spin (Dirac particle) is shown to consist in a linear motion and a rotational part. The two dynamical equations governing the time evolution are re-interpreted as conservation equations. One of them stands for the conservation of energy; the other stands for the conservation of probability and of spin. The quantum potential plays the role of an internal energy which ensures the conservation of total energy.
Relativistic and field-theoretic extensions.
Quantum potential and relativity.
Bohm and Hiley demonstrated that the non-locality of quantum theory can be understood as the limit case of a purely local theory, provided the transmission of "active information" is allowed to be greater than the speed of light, and that this limit case yields approximations to both quantum theory and relativity.
The quantum potential approach was extended by Hiley and co-workers to quantum field theory in Minkowski spacetime and to curved spacetime.
Carlo Castro and Jorge Mahecha derived the Schrödinger equation from the Hamilton-Jacobi equation in conjunction with the continuity equation, and showed that the properties of the relativistic Bohm quantum potential in terms of the ensemble density can be described by the Weyl properties of space. In Riemann flat space, the Bohm potential is shown to equal the Weyl curvature. According to Castro and Mahecha, in the relativistic case, the quantum potential (using the d'Alembert operator formula_59 and in the notation formula_60) takes the form
formula_61
and the quantum force exerted by the relativistic quantum potential is shown to depend on the Weyl gauge potential and its derivatives. Furthermore, the relationship between Bohm's potential and the Weyl curvature in flat spacetime corresponds to a similar relationship between Fisher information and Weyl geometry after introduction of a complex momentum.
Diego L. Rapoport, on the other hand, associates the relativistic quantum potential with the metric scalar curvature (Riemann curvature).
In relation to the Klein–Gordon equation for a particle with mass and charge, Peter R. Holland spoke in his book of 1993 of a "quantum potential-like term" that is proportional to formula_62. He emphasized however that to give the Klein–Gordon theory a single-particle interpretation in terms of trajectories, as can be done for nonrelativistic Schrödinger quantum mechanics, would lead to unacceptable inconsistencies. For instance, wave functions formula_63 that are solutions to the Klein–Gordon or the Dirac equation cannot be interpreted as the probability amplitude for a particle to "be found in" a given volume formula_64 at time formula_65 in accordance with the usual axioms of quantum mechanics, and similarly in the causal interpretation it cannot be interpreted as the probability for the particle to "be in" that volume at that time. Holland pointed out that, while efforts have been made to determine a Hermitian position operator that would allow an interpretation of configuration space quantum field theory, in particular using the Newton–Wigner localization approach, no connection with possibilities for an empirical determination of position in terms of a relativistic measurement theory or for a trajectory interpretation has so far been established. Yet according to Holland this does not mean that the trajectory concept is to be discarded from considerations of relativistic quantum mechanics.
Hrvoje Nikolić derived formula_66 as an expression for the quantum potential, and he proposed a Lorentz-covariant formulation of the Bohmian interpretation of many-particle wave functions. He also developed a generalized relativistic-invariant probabilistic interpretation of quantum theory, in which formula_67 is no longer a probability density in space but a probability density in space-time.
Quantum potential in quantum field theory.
Starting from the space representation of the field coordinate, a causal interpretation of the Schrödinger picture of relativistic quantum theory has been constructed. The Schrödinger picture for a neutral, spin 0, massless field formula_68, with formula_69 real-valued functionals, can be shown to lead to
formula_70
This has been called the superquantum potential by Bohm and his co-workers.
Basil Hiley showed that the energy–momentum relations in the Bohm model can be obtained directly from the energy–momentum tensor of quantum field theory and that the quantum potential is an energy term that is required for local energy–momentum conservation. He has also hinted that for particles with energies equal to or higher than the pair creation threshold, Bohm's model constitutes a many-particle theory that also describes pair creation and annihilation processes.
Interpretation and naming of the quantum potential.
In his article of 1952, providing an alternative interpretation of quantum mechanics, Bohm already spoke of a "quantum-mechanical" potential.
Bohm and Basil Hiley also called the quantum potential an "information potential", given that it influences the form of processes and is itself shaped by the environment. Bohm indicated "The ship or aeroplane (with its automatic Pilot) is a "self-active" system, i.e. it has its own energy. But the form of its activity is determined by the "information content" concerning its environment that is carried by the radar waves. This is independent of the intensity of the waves. We can similarly regard the quantum potential as containing "active information". It is potentially active everywhere, but actually active only where and when there is a particle." (italics in original).
Hiley refers to the quantum potential as internal energy and as "a new quality of energy only playing a role in quantum processes". He explains that the quantum potential is a further energy term aside the well-known kinetic energy and the (classical) potential energy and that it is a nonlocal energy term that arises necessarily in view of the requirement of energy conservation; he added that much of the physics community's resistance against the notion of the quantum potential may have been due to scientists' expectations that energy should be local.
Hiley has emphasized that the quantum potential, for Bohm, was "a key element in gaining insights into what could underlie the quantum formalism. Bohm was convinced by his deeper analysis of this aspect of the approach that the theory could not be mechanical. Rather, it is organic in the sense of Whitehead. Namely, that it was the whole that determined the properties of the individual particles and their relationship, not the other way round."
Peter R. Holland, in his comprehensive textbook, also refers to it as "quantum potential energy". The quantum potential is also referred to in association with Bohm's name as "Bohm potential", "quantum Bohm potential" or "Bohm quantum potential".
Applications.
The quantum potential approach can be used to model quantum effects without requiring the Schrödinger equation to be explicitly solved, and it can be integrated into simulations, such as Monte Carlo simulations using the hydrodynamic and drift diffusion equations. This is done in the form of a "hydrodynamic" calculation of trajectories: starting from the density at each "fluid element", the acceleration of each "fluid element" is computed from the gradient of formula_71 and formula_13, and the resulting divergence of the velocity field determines the change to the density.
The approach using Bohmian trajectories and the quantum potential is used for calculating properties of quantum systems which cannot be solved exactly, which are often approximated using semi-classical approaches. Whereas in mean field approaches the potential for the classical motion results from an average over wave functions, this approach does not require the computation of an integral over wave functions.
The expression for the quantum force has been used, together with Bayesian statistical analysis and Expectation-maximisation methods, for computing ensembles of trajectories that arise under the influence of classical and quantum forces.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\ni \\hbar \\frac{\\partial \\psi}{\\partial t} = \\left( - \\frac{\\hbar^2}{2m} \\nabla^2 +V \\right)\\psi \\quad\n"
},
{
"math_id": 1,
"text": "\\psi = R \\exp(i S / \\hbar)"
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "\\psi"
},
{
"math_id": 5,
"text": "S/\\hbar"
},
{
"math_id": 6,
"text": "\n\\frac{\\partial R}{\\partial t} = -\\frac{1}{2m} \\left[ R \\nabla^2 S + 2 \\nabla R \\cdot \\nabla S \\right],\n"
},
{
"math_id": 7,
"text": "\\rho = R^2"
},
{
"math_id": 8,
"text": "\n\\partial \\rho / \\partial t + \\nabla \\cdot( \\rho v) =0"
},
{
"math_id": 9,
"text": "\\rho"
},
{
"math_id": 10,
"text": " v = \\frac{1}{m}\\nabla S "
},
{
"math_id": 11,
"text": "-\\frac{\\partial S}{\\partial t} = \\frac{\\|\\nabla S\\|^2}{2m} + V + Q"
},
{
"math_id": 12,
"text": "Q = - \\frac{\\hbar^2}{2m} \\frac{\\nabla^2 R}{R}."
},
{
"math_id": 13,
"text": "Q"
},
{
"math_id": 14,
"text": "\\hbar \\to 0"
},
{
"math_id": 15,
"text": "\\psi(\\mathbf{r_1},\\mathbf{r_2},\\,t)"
},
{
"math_id": 16,
"text": "m"
},
{
"math_id": 17,
"text": " Q(\\mathbf{r_1},\\mathbf{r_2},\\,t) = - \\frac{\\hbar^2}{2m} \\frac{(\\nabla_1^2 + \\nabla_2^2) R(\\mathbf{r_1},\\mathbf{r_2},\\,t)}{R(\\mathbf{r_1},\\mathbf{r_2},\\,t)} "
},
{
"math_id": 18,
"text": "\\nabla_1^2"
},
{
"math_id": 19,
"text": "\\nabla_2^2"
},
{
"math_id": 20,
"text": "n"
},
{
"math_id": 21,
"text": "\nQ(\\mathbf{r_1},...,\\mathbf{r_n},\\,t) = -\\frac{\\hbar^2}{2 R(\\mathbf{r_1},...,\\mathbf{r_n},\\,t) } \\sum_{i=1}^{n} \\frac{\\nabla_i^2}{m_i} R(\\mathbf{r_1},...,\\mathbf{r_n},\\,t) \n"
},
{
"math_id": 22,
"text": "\\psi(\\mathbf{r_1},\\mathbf{r_2},\\,t) = \\psi_A(\\mathbf{r_1},\\,t) \\psi_B(\\mathbf{r_2},\\,t) "
},
{
"math_id": 23,
"text": "\nQ(\\mathbf{r_1},\\mathbf{r_2},\\,t) = - \\frac{\\hbar^2}{2m} (\\frac{\\nabla_1^2 R_A(\\mathbf{r_1},\\,t)}{R_A(\\mathbf{r_1},\\,t)} + \\frac{\\nabla_2^2 R_B(\\mathbf{r_2},\\,t)}{R_B(\\mathbf{r_2},\\,t)}) = Q_A(\\mathbf{r_1},\\,t) + Q_B(\\mathbf{r_2},\\,t)\n"
},
{
"math_id": 24,
"text": "\\rho = R^2 \\quad"
},
{
"math_id": 25,
"text": "\\nabla^2 \\sqrt \\rho = \\nabla \\nabla \\rho^{1/2} = \\nabla \\left(\\frac{1}{2} \\rho^{-1/2} \\nabla \\rho\\right) = \\frac{1}{2} \\left[ \\left(\\nabla \\rho^{-1/2}\\right) \\nabla \\rho + \\rho^{-1/2} \\nabla^2 \\rho \\right]"
},
{
"math_id": 26,
"text": " Q = - \\frac{\\hbar^2}{2m} \\frac{\\nabla^2 \\sqrt{\\rho}}{\\sqrt{\\rho}} = - \\frac{\\hbar^2}{4m} \\left[ \\frac{\\nabla^2 \\rho}{\\rho} - \\frac{1}{2} \\frac{(\\nabla \\rho)^2}{\\rho^2} \\right]"
},
{
"math_id": 27,
"text": "F_Q = - \\nabla Q"
},
{
"math_id": 28,
"text": "F_Q = \\frac{\\hbar^2}{4m} \\left[ \\frac{\\nabla (\\nabla^2\\rho)}{\\rho} - \\frac{ \\nabla (\\nabla \\rho \\cdot \\nabla \\rho) }{ 2\\rho^2 } - \\left( \\frac{\\nabla^2 \\rho}{\\rho} - \\frac{ \\nabla \\rho \\cdot \\nabla \\rho }{ \\rho^2 } \\right) \\frac{\\nabla\\rho}{\\rho} \\right]"
},
{
"math_id": 29,
"text": "x"
},
{
"math_id": 30,
"text": "p"
},
{
"math_id": 31,
"text": "Q = - \\hbar^2 \\nabla^2 \\sqrt{\\rho} / (2m \\sqrt{\\rho})"
},
{
"math_id": 32,
"text": "\\hat{x}"
},
{
"math_id": 33,
"text": " \\mathcal{I} = \\int \\rho \\cdot (\\nabla \\ln \\rho)^2 \\, d^3x = - \\int \\rho \\nabla^2 (\\ln \\rho) \\, d^3x."
},
{
"math_id": 34,
"text": " \\langle Q \\rangle = \\int \\psi^* Q \\psi \\, d^3x = \\int \\rho Q \\, d^3x = \\frac{\\hbar^2}{8m} \\mathcal{I}."
},
{
"math_id": 35,
"text": "\nE_\\mathrm{W}[\\rho] = \\int dr\\, \\frac{\\rho \\hbar^2 (\\nabla \\ln \\rho)^2}{8m} = \\frac{\\hbar^2}{8m} \\int dr\\, \\frac{(\\nabla \\rho)^2}{\\rho} = \\int dr\\, \\rho\\,Q.\n"
},
{
"math_id": 36,
"text": "\\mathbf V^2 = \\frac{(\\nabla \\rho \\land \\mathbf s)^2} {(m \\rho)^2} = \\frac{(\\nabla \\rho)^2 \\mathbf s^2 - (\\nabla \\rho \\cdot \\mathbf s)^2}{(m \\rho)^2}"
},
{
"math_id": 37,
"text": "| \\mathbf s | = \\hbar/2"
},
{
"math_id": 38,
"text": "| \\mathbf V | = \\frac{\\hbar}{2} \\frac{\\nabla \\rho}{m \\rho}"
},
{
"math_id": 39,
"text": "\\hbar"
},
{
"math_id": 40,
"text": "Q = - \\frac{1}{2} m \\mathbf v_S^2 - \\frac{1}{2} \\nabla \\cdot \\mathbf v_S"
},
{
"math_id": 41,
"text": "\\mathbf v = \\mathbf v_B + \\mathbf v_S \\times \\mathbf s"
},
{
"math_id": 42,
"text": "\\mathbf v_B = \\frac {\\nabla S}{m}"
},
{
"math_id": 43,
"text": "\\mathbf v_S \\times \\mathbf s"
},
{
"math_id": 44,
"text": "\\mathbf v_S = \\frac {\\nabla R^2}{2m R^2}"
},
{
"math_id": 45,
"text": "\\mathbf s"
},
{
"math_id": 46,
"text": "\\psi = (\\mathbf E - i \\mathbf B) / \\sqrt 2"
},
{
"math_id": 47,
"text": "\\psi^* \\cdot \\psi = (\\mathbf E^2 + \\mathbf B^2)/2"
},
{
"math_id": 48,
"text": "Q (q) = \\frac{\\hbar^2}{4m} \\{ S ; q \\}"
},
{
"math_id": 49,
"text": "\\{ \\cdot \\, ; \\cdot \\}"
},
{
"math_id": 50,
"text": " \\{ S ; q \\} = (S''' / S') - (3/2) (S''/S')^2"
},
{
"math_id": 51,
"text": "Q (q) = - \\frac {\\hbar^2}{2m} \\frac {\\Delta R}{R}"
},
{
"math_id": 52,
"text": "R \\exp (i S /\\hbar)"
},
{
"math_id": 53,
"text": "C\\ell_{i,j}"
},
{
"math_id": 54,
"text": "\\Phi_L(\\mathbf r, t)"
},
{
"math_id": 55,
"text": "\\Phi_R(\\mathbf r, t) = \\tilde{\\Phi}_L(\\mathbf r, t)"
},
{
"math_id": 56,
"text": "\\rho_c(\\mathbf r, t) = \\Phi_L(\\mathbf r, t) \\tilde{\\Phi}_L(\\mathbf r, t)"
},
{
"math_id": 57,
"text": "B"
},
{
"math_id": 58,
"text": "{\\rm Tr} B \\rho_c"
},
{
"math_id": 59,
"text": "\\scriptstyle\\Box"
},
{
"math_id": 60,
"text": "\\hbar=1"
},
{
"math_id": 61,
"text": "Q = - \\frac {1}{2m} \\frac {\\quad \\Box \\sqrt \\rho}{\\sqrt \\rho}"
},
{
"math_id": 62,
"text": "\\Box R/R"
},
{
"math_id": 63,
"text": "\\psi(\\mathbf{x},t)"
},
{
"math_id": 64,
"text": "d^3 x"
},
{
"math_id": 65,
"text": "t"
},
{
"math_id": 66,
"text": "Q = - (1/2m) \\, \\Box R/R"
},
{
"math_id": 67,
"text": "|\\psi|^2"
},
{
"math_id": 68,
"text": "\\Psi \\left[ \\psi(\\mathbf{x},t) \\right] = R \\left[ \\psi(\\mathbf{x},t) \\right] e^{S \\left[ \\psi(\\mathbf{x},t) \\right]}"
},
{
"math_id": 69,
"text": "R \\left[ \\psi(\\mathbf{x},t) \\right], S \\left[ \\psi(\\mathbf{x},t) \\right]"
},
{
"math_id": 70,
"text": "Q \\left[ \\psi(\\mathbf{x},t) \\right] = - (1/2R) \\int d^3 x \\, \\delta^2 R / \\delta \\psi^2 "
},
{
"math_id": 71,
"text": "V"
}
] | https://en.wikipedia.org/wiki?curid=8057418 |
805814 | Code 93 | Code 93 is a barcode symbology designed in 1982 by Intermec to provide a higher density and data security enhancement to Code 39. It is an alphanumeric, variable length symbology. Code 93 is used primarily by Canada Post to encode supplementary delivery information. Every symbol includes two check characters.
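The article does not describe how the two check characters are derived; the sketch below implements the weighted modulo-47 scheme commonly documented for Code 93, where check character "C" uses weights 1–20 cycling from the right and check character "K" uses weights 1–15 and also covers "C". The character-value table and the sample message are illustrative assumptions, not taken from this article.

```python
# Hedged sketch: the commonly documented Code 93 check characters "C" and "K".
# The value table below covers only the 43 basic symbols; the four shift
# symbols (values 43-46) are omitted for brevity.
CHARSET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ-. $/+%"
VALUES = {ch: i for i, ch in enumerate(CHARSET)}          # values 0..42

def weighted_check(values, max_weight):
    """Weighted sum mod 47; weights 1..max_weight cycle from the right."""
    total = 0
    for pos, v in enumerate(reversed(values)):
        weight = (pos % max_weight) + 1
        total += weight * v
    return total % 47

def append_checks(message):
    vals = [VALUES[c] for c in message]
    c = weighted_check(vals, 20)          # check character "C"
    k = weighted_check(vals + [c], 15)    # check character "K" also covers "C"
    # Note: check values 43-46 would map to the shift symbols, not handled here.
    return message + CHARSET[c] + CHARSET[k]

if __name__ == "__main__":
    print(append_checks("TEST93"))  # "TEST93+6" under this scheme
```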
Each Code 93 character is nine modules wide, and always has three bars and three spaces, thus the name. Each bar and space is from 1 to 4 modules wide. (For comparison, a Code 39 character consists of five bars and four spaces, three of which are wide, for a total width of 13–16 modules.)
Code 93 is designed to encode the same 26 upper case letters, 10 digits and 7 special characters as code 39:
codice_0
codice_1
codice_2
In addition to 43 characters, Code 93 defines 5 special characters (including a start/stop character), which can be combined with other characters to unambiguously represent all 128 ASCII characters.
In an open system, the minimum value of X dimension is 7.5 mils (0.19 mm). The minimum bar height is 15 percent of the symbol length or , whichever is greater. The starting and trailing quiet zone should be at least .
Structure of a code 93 barcode.
A typical code 93 barcode has the following structure: a leading quiet zone, a start character, the encoded message, the two modulo-47 check characters "C" and "K", a stop character, a termination bar, and a trailing quiet zone.
Detailed outline.
The 48 possible code-93 symbols are as follows. There are actually formula_0 = 56 combinations that satisfy the coding rules, but one would be confused with the stop symbol in reverse, and the other 7 are unused. Codes 43–46 can be prefixed to alphanumeric values to produce all 128 possible ASCII codes. This is done in exactly the same way as Full ASCII Code 39, but uses reserved codes rather than re-using codes 39–42.
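As a quick sanity check on that count (an illustrative sketch, not part of the article), one can enumerate every nine-module pattern built from three bars and three spaces, each 1 to 4 modules wide:

```python
from itertools import product

# Enumerate 9-module-wide Code 93-style symbols: 3 bars and 3 spaces in
# alternation, each element 1 to 4 modules wide, total width exactly 9.
patterns = [
    widths
    for widths in product(range(1, 5), repeat=6)   # (b1, s1, b2, s2, b3, s3)
    if sum(widths) == 9
]

print(len(patterns))  # 56, matching the binomial coefficient C(8, 3)
```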
Full ASCII Code 93.
Code 93 is restricted to 43 characters and 5 special characters. In Full ASCII Code 93, the 43 basic symbols (0–9, A-Z, "-", ".", "$", "/", "+" and "%") are the same as their representations in Code 93. Lower case letters, additional punctuation characters and control characters are represented by sequences of two characters of Code 93.
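A minimal sketch of how such two-character sequences behave, modelled on the Full ASCII Code 39 convention of a shift symbol followed by an upper-case letter; the "(+)" notation and the exact pairing used here are assumptions for illustration, not taken from this article.

```python
# Hedged sketch: lower-case letters as two-character sequences, by analogy
# with Full ASCII Code 39. "(+)" stands for one of Code 93's four reserved
# shift symbols; the pairing shown is an assumption.
def encode_lowercase(ch: str) -> list[str]:
    if "a" <= ch <= "z":
        return ["(+)", ch.upper()]     # e.g. 'a' -> shift symbol followed by 'A'
    return [ch]                        # basic symbols pass through unchanged

print(encode_lowercase("a"))  # ['(+)', 'A']
print(encode_lowercase("7"))  # ['7']
```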
This encoding is the same as Full ASCII Code 39, except that four special-purpose symbols are used, rather than reassigning $, /, + and %: | [
{
"math_id": 0,
"text": "\\tbinom 83"
}
] | https://en.wikipedia.org/wiki?curid=805814 |
8062003 | Hypercone | 4-dimensional figure
In geometry, a hypercone (or spherical cone) is the figure in the 4-dimensional Euclidean space represented by the equation
formula_0
It is a quadric surface, and is one of the possible 3-manifolds which are 4-dimensional equivalents of the conical surface in 3 dimensions. It is also named "spherical cone" because its intersections with hyperplanes perpendicular to the "w"-axis are spheres. A four-dimensional right hypercone can be thought of as a sphere which expands with time, starting its expansion from a single point source, such that the center of the expanding sphere remains fixed. An oblique hypercone would be a sphere which expands with time, again starting its expansion from a point source, but such that the center of the expanding sphere moves with a uniform velocity.
Parametric form.
A right spherical hypercone can be described by the function
formula_1
with vertex at the origin and expansion speed "s".
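As a quick numerical illustration (not part of the article), points produced by this parametrization satisfy "x"2 + "y"2 + "z"2 = ("sw")2, which reduces to the defining equation above when "s" = 1:

```python
import math

def right_hypercone_point(phi, theta, t, s=1.0):
    """Point on the right spherical hypercone with expansion speed s."""
    x = t * s * math.cos(theta) * math.cos(phi)
    y = t * s * math.cos(theta) * math.sin(phi)
    z = t * s * math.sin(theta)
    w = t
    return x, y, z, w

# Every sampled point should satisfy x^2 + y^2 + z^2 - (s*w)^2 = 0.
for phi, theta, t in [(0.3, 1.1, 2.0), (2.0, -0.4, 0.5), (1.0, 0.0, 3.0)]:
    x, y, z, w = right_hypercone_point(phi, theta, t, s=2.5)
    assert abs(x*x + y*y + z*z - (2.5 * w) ** 2) < 1e-9
print("parametrization lies on the hypercone")
```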
A right spherical hypercone with radius "r" and height "h" can be described by the function
formula_2
An oblique spherical hypercone could then be described by the function
formula_3
where formula_4 is the 3-velocity of the center of the expanding sphere.
An example of such a cone would be an expanding sound wave as seen from the point of view of a moving reference frame: e.g. the sound wave of a jet aircraft as seen from the jet's own reference frame.
Note that the 3D-surfaces above enclose 4D-hypervolumes, which are the 4-cones proper.
Geometrical interpretation.
The spherical cone consists of two unbounded "nappes", which meet at the origin and are the analogues of the nappes of the 3-dimensional conical surface. The "upper nappe" corresponds with the half with positive "w"-coordinates, and the "lower nappe" corresponds with the half with negative "w"-coordinates.
If it is restricted between the hyperplanes "w" = 0 and "w" = "r" for some nonzero "r", then it may be closed by a 3-ball of radius "r", centered at (0,0,0,"r"), so that it bounds a finite 4-dimensional volume. This volume is given by the formula π"r"4/3 (one third of π times the fourth power of the radius), and is the 4-dimensional equivalent of the solid cone. The ball may be thought of as the 'lid' at the base of the 4-dimensional cone's nappe, and the origin becomes its 'apex'.
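The value π"r"4/3 can be checked (an illustrative derivation, not part of the article) by slicing the solid hypercone perpendicular to the "w"-axis, since the cross-section at height "w" is a 3-ball of radius "w":

```latex
% Slice the solid hypercone 0 <= w <= r perpendicular to the w-axis;
% the cross-section at height w is a 3-ball of radius w.
H = \int_0^{r} \frac{4}{3}\pi w^{3}\, dw
  = \frac{1}{3}\pi r^{4}
  = \frac{1}{4}\left(\frac{4}{3}\pi r^{3}\right) r
```

This also agrees with the pyramid formula "H" = "Vh"/4 given in the Measurements section below, with "V" = 4π"r"3/3 and "h" = "r".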
This shape may be projected into 3-dimensional space in various ways. If projected onto the "xyz" hyperplane, its image is a ball. If projected onto the "xyw", "xzw", or "yzw" hyperplanes, its image is a solid cone. If projected onto an oblique hyperplane, its image is either an ellipsoid or a solid cone with an ellipsoidal base (resembling an ice cream cone). These images are the analogues of the possible images of the solid cone projected to 2 dimensions.
Construction.
The (half) hypercone may be constructed in a manner analogous to the construction of a 3D cone. A 3D cone may be thought of as the result of stacking progressively smaller discs on top of each other until they taper to a point. Alternatively, a 3D cone may be regarded as the volume swept out by an upright isosceles triangle as it rotates about its base.
A 4D hypercone may be constructed analogously: by stacking progressively smaller balls on top of each other in the 4th direction until they taper to a point, or taking the hypervolume swept out by a tetrahedron standing upright in the 4th direction as it rotates freely about its base in the 3D hyperplane on which it rests.
Measurements.
Hypervolume.
The hypervolume of a four-dimensional pyramid and cone is
formula_5
where "V" is the volume of the base and "h" is the height (the distance between the centre of the base and the apex). For a spherical cone with a base volume of formula_6, the hypervolume is
formula_7
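A rough Monte Carlo check of this result (an illustrative sketch, not from the article): sample points uniformly in a box enclosing a cone of base radius "r" and height "h" and count those falling inside.

```python
import random, math

def hypercone_volume_mc(r, h, n=200_000, seed=1):
    """Monte Carlo estimate of the hypervolume of a spherical cone whose
    cross-section at height w (0 <= w <= h) is a ball of radius r*w/h."""
    random.seed(seed)
    box = (2 * r) ** 3 * h                   # hypervolume of the bounding box
    hits = 0
    for _ in range(n):
        x = random.uniform(-r, r)
        y = random.uniform(-r, r)
        z = random.uniform(-r, r)
        w = random.uniform(0, h)
        if x*x + y*y + z*z <= (r * w / h) ** 2:
            hits += 1
    return box * hits / n

r, h = 1.0, 2.0
print(hypercone_volume_mc(r, h))      # roughly 2.09
print(math.pi * r**3 * h / 3)         # exact (1/3)*pi*r^3*h, about 2.094
```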
Surface volume.
The lateral surface volume of a right spherical cone is formula_8 where formula_9 is the radius of the spherical base and formula_10 is the slant height of the cone (the distance between the 2D surface of the sphere and the apex). The surface volume of the spherical base is the same as for any sphere, formula_11. Therefore, the total surface volume of a right spherical cone can be expressed in the following ways:
formula_12
formula_14
where formula_9 is the radius, formula_15 is the height, and the slant height formula_10 has been written out explicitly as formula_13.
formula_16
formula_17
where formula_9 is the radius and formula_10 is the slant height.
formula_18
formula_19
where formula_20 is the base surface area, formula_9 is the radius, and formula_10 is the slant height.
Temporal interpretation.
If the "w"-coordinate of the equation of the spherical cone is interpreted as the distance "ct", where "t" is coordinate time and "c" is the speed of light (a constant), then it is the shape of the light cone in special relativity. In this case, the equation is usually written as:
formula_21
which is also the equation for spherical wave fronts of light. The upper nappe is then the "future light cone" and the lower nappe is the "past light cone".
| [
{
"math_id": 0,
"text": "x^2 + y^2 + z^2 - w^2 = 0."
},
{
"math_id": 1,
"text": " \\vec \\sigma (\\phi, \\theta, t) = (t s \\cos \\theta \\cos \\phi, t s \\cos \\theta \\sin \\phi, t s \\sin \\theta, t) "
},
{
"math_id": 2,
"text": " \\vec \\sigma (\\phi, \\theta, t) = \\left(t \\cos \\phi \\sin \\theta, t \\sin \\phi \\sin \\theta, t \\cos \\theta, \\frac{h}{r}t\\right) "
},
{
"math_id": 3,
"text": " \\vec \\sigma (\\phi, \\theta, t) = (v_x t + t s \\cos \\theta \\cos \\phi, v_y t + t s \\cos \\theta \\sin \\phi, v_z t + t s \\sin \\theta, t) "
},
{
"math_id": 4,
"text": " (v_x, v_y, v_z) "
},
{
"math_id": 5,
"text": "H=\\frac{1}{4}Vh"
},
{
"math_id": 6,
"text": "V=\\frac{4}{3}\\pi r^3"
},
{
"math_id": 7,
"text": "H=\\frac{1}{4}Vh=\\frac{1}{4}\\left(\\frac{4}{3}\\pi r^3\\right)h=\\frac{1}{3}\\pi r^3h"
},
{
"math_id": 8,
"text": "LSV = \\frac{4}{3}\\pi r^2 l"
},
{
"math_id": 9,
"text": "r"
},
{
"math_id": 10,
"text": "l"
},
{
"math_id": 11,
"text": "\\frac{4}{3}\\pi r^3"
},
{
"math_id": 12,
"text": "\\frac{4}{3}\\pi r^3 + \\frac{4}{3}\\pi r^2 \\sqrt{r^2+h^2}"
},
{
"math_id": 13,
"text": "\\sqrt{r^2+h^2}"
},
{
"math_id": 14,
"text": "\\frac{4}{3}\\pi r^2 \\left(r + \\sqrt{r^2+h^2}\\right)"
},
{
"math_id": 15,
"text": "h"
},
{
"math_id": 16,
"text": "\\frac{4}{3}\\pi r^3 + \\frac{4}{3}\\pi r^2 l"
},
{
"math_id": 17,
"text": "\\frac{4}{3}\\pi r^2 \\left(r + l\\right)"
},
{
"math_id": 18,
"text": "\\frac{1}{3}Ar + \\frac{1}{3}Al"
},
{
"math_id": 19,
"text": "\\frac{1}{3}A\\left(r + l\\right)"
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "x^2 + y^2 + z^2 - (ct)^2 = 0,"
}
] | https://en.wikipedia.org/wiki?curid=8062003 |