id stringlengths 2–8 | title stringlengths 1–130 | text stringlengths 0–252k | formulas listlengths 1–823 | url stringlengths 38–44 |
---|---|---|---|---|
1361141 | One- and two-tailed tests | Alternative ways of computing the statistical significance of a parameter inferred from a data set
In statistical significance testing, a one-tailed test and a two-tailed test are alternative ways of computing the statistical significance of a parameter inferred from a data set, in terms of a test statistic. A two-tailed test is appropriate if the estimated value is greater or less than a certain range of values, for example, whether a test taker may score above or below a specific range of scores. This method is used for null hypothesis testing and if the estimated value exists in the critical areas, the alternative hypothesis is accepted over the null hypothesis.
A one-tailed test is appropriate if the estimated value may depart from the reference value in only one direction, left or right, but not both. An example can be whether a machine produces more than one-percent defective products. In this situation, if the estimated value exists in one of the one-sided critical areas, depending on the direction of interest (greater than or less than), the alternative hypothesis is accepted over the null hypothesis. Alternative names are one-sided and two-sided tests; the terminology "tail" is used because the extreme portions of distributions, where observations lead to rejection of the null hypothesis, are small and often "tail off" toward zero as in the normal distribution or "bell curve".
Applications.
One-tailed tests are used for asymmetric distributions that have a single tail, such as the chi-squared distribution, which are common in measuring goodness-of-fit, or for one side of a distribution that has two tails, such as the normal distribution, which is common in estimating location; this corresponds to specifying a direction. Two-tailed tests are only applicable when there are two tails, such as in the normal distribution, and correspond to considering either direction significant.
In the approach of Ronald Fisher, the null hypothesis H0 will be rejected when the "p"-value of the test statistic is sufficiently extreme (vis-à-vis the test statistic's sampling distribution) and thus judged unlikely to be the result of chance. This is usually done by comparing the resulting p-value with the specified significance level, denoted by formula_0, when computing the statistical significance of a parameter. In a one-tailed test, "extreme" is decided beforehand as either meaning "sufficiently small" "or" meaning "sufficiently large" – values in the other direction are considered not significant. One may report the left or right tail probability as the one-tailed p-value, which ultimately corresponds to the direction in which the test statistic deviates from H0. In a two-tailed test, "extreme" means "either sufficiently small or sufficiently large", and values in either direction are considered significant. For a given test statistic, there is a single two-tailed test, and two one-tailed tests, one each for either direction. When provided a significance level formula_0, the critical regions would exist on the two tail ends of the distribution with an area of formula_1 each for a two-tailed test. Alternatively, the critical region would solely exist on the single tail end with an area of formula_0 for a one-tailed test. For a given significance level in a two-tailed test for a test statistic, the corresponding one-tailed tests for the same test statistic will be considered either twice as significant (half the "p"-value) if the data is in the direction specified by the test, or not significant at all ("p"-value above formula_0) if the data is in the direction opposite of the critical region specified by the test.
For example, if flipping a coin, testing whether it is biased "towards" heads is a one-tailed test, and getting data of "all heads" would be seen as highly significant, while getting data of "all tails" would be not significant at all ("p" = 1). By contrast, testing whether it is biased in "either" direction is a two-tailed test, and either "all heads" or "all tails" would both be seen as highly significant data. In medical testing, while one is generally interested in whether a treatment results in outcomes that are "better" than chance, thus suggesting a one-tailed test; a "worse" outcome is also interesting for the scientific field, therefore one should use a two-tailed test that corresponds instead to testing whether the treatment results in outcomes that are "different" from chance, either better or worse. In the archetypal lady tasting tea experiment, Fisher tested whether the lady in question was "better" than chance at distinguishing two types of tea preparation, not whether her ability was "different" from chance, and thus he used a one-tailed test.
Coin flipping example.
In coin flipping, the null hypothesis is a sequence of Bernoulli trials with probability 0.5, yielding a random variable "X" which is 1 for heads and 0 for tails, and a common test statistic is the sample mean (of the number of heads) formula_2 If testing for whether the coin is biased towards heads, a one-tailed test would be used – only large numbers of heads would be significant. In that case a data set of five heads (HHHHH), with sample mean of 1, has a formula_3 chance of occurring (five consecutive flips with two equally likely outcomes each: (1/2)^5 = 1/32). This would have formula_4 and would be significant (rejecting the null hypothesis) if the test was analyzed at a significance level of formula_5 (the significance level corresponding to the cutoff bound). However, if testing for whether the coin is biased towards heads or tails, a two-tailed test would be used, and a data set of five heads (sample mean 1) is as extreme as a data set of five tails (sample mean 0). As a result, the "p"-value would be formula_6 and this would not be significant (not rejecting the null hypothesis) if the test was analyzed at a significance level of formula_5.
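A minimal Python sketch (an illustration, assuming SciPy is available; not part of the original text) reproduces these p-values:
from scipy.stats import binom

n, k, p0 = 5, 5, 0.5

# One-tailed (bias towards heads): P(X >= 5) under the null hypothesis.
p_one_tailed = binom.sf(k - 1, n, p0)      # 1/32 = 0.03125

# Two-tailed (bias in either direction): five tails is equally extreme,
# so the tail probability is doubled for this symmetric null distribution.
p_two_tailed = 2 * p_one_tailed            # 2/32 = 0.0625

print(p_one_tailed, p_two_tailed)          # 0.03125 0.0625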
History.
The "p"-value was introduced by Karl Pearson in the Pearson's chi-squared test, where he defined P (original notation) as the probability that the statistic would be at or above a given level. This is a one-tailed definition, and the chi-squared distribution is asymmetric, only assuming positive or zero values, and has only one tail, the upper one. It measures goodness of fit of data with a theoretical distribution, with zero corresponding to exact agreement with the theoretical distribution; the "p"-value thus measures how likely the fit would be this bad or worse.
The distinction between one-tailed and two-tailed tests was popularized by Ronald Fisher in the influential book Statistical Methods for Research Workers, where he applied it especially to the normal distribution, which is a symmetric distribution with two equal tails. The normal distribution is a common measure of location, rather than goodness-of-fit, and has two tails, corresponding to the estimate of location being above or below the theoretical location (e.g., sample mean compared with theoretical mean). In the case of a symmetric distribution such as the normal distribution, the one-tailed "p"-value is exactly half the two-tailed "p"-value:
Some confusion is sometimes introduced by the fact that in some cases we wish to know the probability that the deviation, known to be positive, shall exceed an observed value, whereas in other cases the probability required is that a deviation, which is equally frequently positive and negative, shall exceed an observed value; the latter probability is always half the former.
Fisher emphasized the importance of measuring the tail – the observed value of the test statistic and all more extreme values – rather than simply the probability of the specific outcome itself, in his "The Design of Experiments" (1935). He explains that this is because a "specific" set of data may be unlikely (under the null hypothesis), yet more extreme outcomes may still be likely; seen in this light, data that are unlikely but not extreme should not be considered significant.
Specific tests.
If the test statistic follows a Student's "t"-distribution under the null hypothesis – which is common where the underlying variable follows a normal distribution with unknown scaling factor – then the test is referred to as a one-tailed or two-tailed "t"-test. If the test is performed using the actual population mean and variance, rather than an estimate from a sample, it would be called a one-tailed or two-tailed "Z"-test.
The statistical tables for "t" and for "Z" provide critical values for both one- and two-tailed tests. That is, they provide the critical values that cut off an entire region at one or the other end of the sampling distribution as well as the critical values that cut off the regions (of half the size) at both ends of the sampling distribution.
References.
| [
{
"math_id": 0,
"text": "\\alpha"
},
{
"math_id": 1,
"text": "\\alpha/2"
},
{
"math_id": 2,
"text": "\\bar X."
},
{
"math_id": 3,
"text": "1/32 = 0.03125 \\approx 0.03"
},
{
"math_id": 4,
"text": "p \\approx 0.03"
},
{
"math_id": 5,
"text": "\\alpha = 0.05"
},
{
"math_id": 6,
"text": "2/32 = 0.0625 \\approx 0.06"
}
]
| https://en.wikipedia.org/wiki?curid=1361141 |
13612447 | Repeating decimal | Decimal representation of a number whose digits are periodic
A repeating decimal or recurring decimal is a decimal representation of a number whose digits are eventually periodic (that is, after some place, the same sequence of digits is repeated forever); if this sequence consists only of zeros (that is if there is only a finite number of nonzero digits), the decimal is said to be "terminating", and is not considered as repeating.
It can be shown that a number is rational if and only if its decimal representation is repeating or terminating. For example, the decimal representation of 1/3 becomes periodic just after the decimal point, repeating the single digit "3" forever, i.e. 0.333... A more complicated example is 3227/555, whose decimal becomes periodic at the "second" digit following the decimal point and then repeats the sequence "144" forever, i.e. 5.8144144144... Another example of this is 593/53, which becomes periodic after the decimal point, repeating the 13-digit pattern "1886792452830" forever, i.e. 11.18867924528301886792452830...
The infinitely repeated digit sequence is called the repetend or reptend. If the repetend is a zero, this decimal representation is called a terminating decimal rather than a repeating decimal, since the zeros can be omitted and the decimal terminates before these zeros. Every terminating decimal representation can be written as a decimal fraction, a fraction whose denominator is a power of 10 (e.g. 1.585 = 1585/1000); it may also be written as a ratio of the form "k"/(2"n"·5"m") (e.g. 1.585 = 317/200). However, "every" number with a terminating decimal representation also trivially has a second, alternative representation as a repeating decimal whose repetend is the digit 9. This is obtained by decreasing the final (rightmost) non-zero digit by one and appending a repetend of 9. Two examples of this are 1.000... = 0.999... and 1.585000... = 1.584999... (This type of repeating decimal can be obtained by long division if one uses a modified form of the usual division algorithm.)
Any number that cannot be expressed as a ratio of two integers is said to be irrational. Their decimal representation neither terminates nor infinitely repeats, but extends forever without repetition. Examples of such irrational numbers are the square root of 2 and π.
Background.
Notation.
There are several notational conventions for representing repeating decimals. None of them are accepted universally.
In English, there are various ways to read repeating decimals aloud. For example, 1.234 may be read "one point two repeating three four", "one point two repeated three four", "one point two recurring three four", "one point two repetend three four" or "one point two into infinity three four". Likewise, 11.1886792452830 may be read "eleven point repeating one double eight six seven nine two four five two eight three zero", "eleven point repeated one double eight six seven nine two four five two eight three zero", "eleven point recurring one double eight six seven nine two four five two eight three zero" "eleven point repetend one double eight six seven nine two four five two eight three zero" or "eleven point into infinity one double eight six seven nine two four five two eight three zero".
Decimal expansion and recurrence sequence.
In order to convert a rational number represented as a fraction into decimal form, one may use long division. For example, consider the rational number 5/74:
        0.0675
   74 ) 5.00000
        4.44
          560
          518
           420
           370
            500
etc. Observe that at each step we have a remainder; the successive remainders displayed above are 56, 42, 50. When we arrive at 50 as the remainder, and bring down the "0", we find ourselves dividing 500 by 74, which is the same problem we began with. Therefore, the decimal repeats: 5/74 = 0.0675675675...
For any integer fraction "A"/"B", the remainder at step k, for any positive integer "k", is "A" × 10"k" (modulo "B").
Every rational number is either a terminating or repeating decimal.
For any given divisor, only finitely many different remainders can occur. In the example above, the 74 possible remainders are 0, 1, 2, ..., 73. If at any point in the division the remainder is 0, the expansion terminates at that point. Then the length of the repetend, also called "period", is defined to be 0.
If 0 never occurs as a remainder, then the division process continues forever, and eventually, a remainder must occur that has occurred before. The next step in the division will yield the same new digit in the quotient, and the same new remainder, as the previous time the remainder was the same. Therefore, the following division will repeat the same results. The repeating sequence of digits is called "repetend" which has a certain length greater than 0, also called "period".
In base 10, a fraction has a repeating decimal if and only if in lowest terms, its denominator has any prime factors besides 2 or 5, or in other words, cannot be expressed as 2"m" · 5"n", where "m" and "n" are non-negative integers.
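As a quick illustrative check of this criterion (a Python sketch, not part of the original text), one can strip all factors of 2 and 5 from the reduced denominator:
from math import gcd

def terminates_in_base_10(p, q):
    # p/q terminates iff, in lowest terms, q has no prime factor other than 2 or 5
    q //= gcd(p, q)
    for f in (2, 5):
        while q % f == 0:
            q //= f
    return q == 1

print(terminates_in_base_10(1, 4))    # True  (0.25)
print(terminates_in_base_10(5, 74))   # False (0.0675675675...)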
Every repeating or terminating decimal is a rational number.
Each repeating decimal number satisfies a linear equation with integer coefficients, and its unique solution is a rational number. In the example above, "α" = 5.8144144144... satisfies the equation 10000"α" − 10"α" = 58144.144144... − 58.144144..., i.e. 9990"α" = 58086, whose solution is "α" = 58086/9990 = 3227/555.
The process of how to find these integer coefficients is described below.
Formal proof.
Given a repeating decimal formula_0 where formula_1, formula_2, and formula_3 are groups of digits, let formula_4, the number of digits of formula_2. Multiplying by formula_5 separates the repeating and terminating groups:
formula_6
If the decimals terminate (formula_7), the proof is complete. For formula_8 with formula_9 digits, let formula_10 where formula_11 is a terminating group of digits. Then,
formula_12
where formula_13 denotes the "i-"th "digit", and
formula_14
Since formula_15,
formula_16
Since formula_17 is the sum of an integer (formula_18) and a rational number (formula_19), formula_17 is also rational.
Table of values.
Thereby "fraction" is the unit fraction and "ℓ"10 is the length of the (decimal) repetend.
The lengths "ℓ"10("n") of the decimal repetends of , "n" = 1, 2, 3, ..., are:
0, 0, 1, 0, 0, 1, 6, 0, 1, 0, 2, 1, 6, 6, 1, 0, 16, 1, 18, 0, 6, 2, 22, 1, 0, 6, 3, 6, 28, 1, 15, 0, 2, 16, 6, 1, 3, 18, 6, 0, 5, 6, 21, 2, 1, 22, 46, 1, 42, 0, 16, 6, 13, 3, 2, 6, 18, 28, 58, 1, 60, 15, 6, 0, 6, 2, 33, 16, 22, 6, 35, 1, 8, 3, 1, 18, 6, 6, 13, 0, 9, 5, 41, 6, 16, 21, 28, 2, 44, 1, 6, 22, 15, 46, 18, 1, 96, 42, 2, 0... (sequence in the OEIS).
For comparison, the lengths "ℓ"2("n") of the binary repetends of the fractions 1/"n", "n" = 1, 2, 3, ..., are:
0, 0, 2, 0, 4, 2, 3, 0, 6, 4, 10, 2, 12, 3, 4, 0, 8, 6, 18, 4, 6, 10, 11, 2, 20, 12, 18, 3, 28, 4, 5, 0, 10, 8, 12, 6, 36, 18, 12, 4, 20, 6, 14, 10, 12, 11, ... (=["n"], if "n" not a power of 2 else =0).
The decimal repetends of 1/"n", "n" = 1, 2, 3, ..., are:
0, 0, 3, 0, 0, 6, 142857, 0, 1, 0, 09, 3, 076923, 714285, 6, 0, 0588235294117647, 5, 052631578947368421, 0, 047619, 45, 0434782608695652173913, 6, 0, 384615, 037, 571428, 0344827586206896551724137931, 3, 032258064516129, 0, 03, 2941176470588235, 285714... (sequence in the OEIS).
The decimal repetend lengths of 1/"p", "p" = 2, 3, 5, ... ("n"th prime), are:
0, 1, 0, 6, 2, 6, 16, 18, 22, 28, 15, 3, 5, 21, 46, 13, 58, 60, 33, 35, 8, 13, 41, 44, 96, 4, 34, 53, 108, 112, 42, 130, 8, 46, 148, 75, 78, 81, 166, 43, 178, 180, 95, 192, 98, 99, 30, 222, 113, 228, 232, 7, 30, 50, 256, 262, 268, 5, 69, 28, 141, 146, 153, 155, 312, 79... (sequence in the OEIS).
The least primes "p" for which 1/"p" has decimal repetend length "n", "n" = 1, 2, 3, ..., are:
3, 11, 37, 101, 41, 7, 239, 73, 333667, 9091, 21649, 9901, 53, 909091, 31, 17, 2071723, 19, 1111111111111111111, 3541, 43, 23, 11111111111111111111111, 99990001, 21401, 859, 757, 29, 3191, 211, 2791, 353, 67, 103, 71, 999999000001, 2028119, 909090909090909091, 900900900900990990990991, 1676321, 83, 127, 173... (sequence in the OEIS).
The least primes "p" for which "k"/"p" has "n" different cycles (1 ≤ "k" ≤ "p"−1), "n" = 1, 2, 3, ..., are:
7, 3, 103, 53, 11, 79, 211, 41, 73, 281, 353, 37, 2393, 449, 3061, 1889, 137, 2467, 16189, 641, 3109, 4973, 11087, 1321, 101, 7151, 7669, 757, 38629, 1231, 49663, 12289, 859, 239, 27581, 9613, 18131, 13757, 33931... (sequence in the OEIS).
Fractions with prime denominators.
A fraction in lowest terms with a prime denominator other than 2 or 5 (i.e. coprime to 10) always produces a repeating decimal. The length of the repetend (period of the repeating decimal segment) of 1/"p" is equal to the order of 10 modulo "p". If 10 is a primitive root modulo "p", then the repetend length is equal to "p" − 1; if not, then the repetend length is a factor of "p" − 1. This result can be deduced from Fermat's little theorem, which states that 10"p"−1 ≡ 1 (mod "p").
The base-10 digital root of the repetend of the reciprocal of any prime number greater than 5 is 9.
If the repetend length of 1/"p" for prime "p" is equal to "p" − 1 then the repetend, expressed as an integer, is called a cyclic number.
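The repetend length can be computed directly as the multiplicative order of 10 modulo "p"; the following Python sketch (illustrative, assuming "p" is a prime other than 2 or 5) also flags the full-repetend primes whose repetends are cyclic numbers:
def repetend_length(p):
    # multiplicative order of 10 modulo p (p must be coprime to 10)
    order, power = 1, 10 % p
    while power != 1:
        power = (power * 10) % p
        order += 1
    return order

for p in (7, 11, 13, 17, 19):
    L = repetend_length(p)
    print(p, L, "cyclic number" if L == p - 1 else "")
# 7, 17 and 19 give L = p - 1, so their repetends (142857, ...) are cyclic numbers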
Cyclic numbers.
Examples of fractions belonging to this group are:
The list can go on to include the fractions , , , , , , , , , , etc. (sequence in the OEIS).
Every "proper" multiple of a cyclic number (that is, a multiple having the same number of digits) is a rotation:
The reason for the cyclic behavior is apparent from an arithmetic exercise of long division of 1/7: the sequential remainders are the cyclic sequence {1, 3, 2, 6, 4, 5}. See also the article 142,857 for more properties of this cyclic number.
A fraction which is cyclic thus has a recurring decimal of even length that divides into two sequences in nines' complement form. For example, 1/7 starts '142' and is followed by '857' while 6/7 (by rotation) starts '857' followed by "its" nines' complement '142'.
The rotation of the repetend of a cyclic number always happens in such a way that each successive repetend is a bigger number than the previous one. In the succession above, for instance, we see that 0.142857... < 0.285714... < 0.428571... < 0.571428... < 0.714285... < 0.857142... This, for cyclic fractions with long repetends, allows us to easily predict what the result of multiplying the fraction by any natural number n will be, as long as the repetend is known.
A "proper prime" is a prime "p" which ends in the digit 1 in base 10 and whose reciprocal in base 10 has a repetend with length "p" − 1. In such primes, each digit 0, 1..., 9 appears in the repeating sequence the same number of times as does each other digit (namely, times). They are:
61, 131, 181, 461, 491, 541, 571, 701, 811, 821, 941, 971, 1021, 1051, 1091, 1171, 1181, 1291, 1301, 1349, 1381, 1531, 1571, 1621, 1741, 1811, 1829, 1861... (sequence in the OEIS).
A prime is a proper prime if and only if it is a full reptend prime and congruent to 1 mod 10.
If a prime "p" is both full reptend prime and safe prime, then will produce a stream of "p" − 1 pseudo-random digits. Those primes are
7, 23, 47, 59, 167, 179, 263, 383, 503, 863, 887, 983, 1019, 1367, 1487, 1619, 1823, 2063... (sequence in the OEIS).
Other reciprocals of primes.
Some reciprocals of primes that do not generate cyclic numbers are:
The reason is that 3 is a divisor of 9, 11 is a divisor of 99, 41 is a divisor of 99999, etc.
To find the period of 1/"p", we can check whether the prime "p" divides some number 999...999 in which the number of digits divides "p" − 1. Since the period is never greater than "p" − 1, we can obtain this by calculating (10"p"−1 − 1)/"p". For example, for 11 we get
formula_20
and then by inspection find the repetend 09 and period of 2.
Those reciprocals of primes can be associated with several sequences of repeating decimals. For example, the multiples of 1/13 can be divided into two sets, with different repetends. The first set is:
where the repetend of each fraction is a cyclic re-arrangement of 076923. The second set is:
where the repetend of each fraction is a cyclic re-arrangement of 153846.
In general, the set of proper multiples of reciprocals of a prime "p" consists of "n" subsets, each with repetend length "k", where "nk" = "p" − 1.
Totient rule.
For an arbitrary integer "n", the length "L"("n") of the decimal repetend of 1/"n" divides "φ"("n"), where "φ" is the totient function. The length is equal to "φ"("n") if and only if 10 is a primitive root modulo "n".
In particular, it follows that "L"("p") = "p" − 1 if and only if "p" is a prime and 10 is a primitive root modulo "p". Then, the decimal expansions of "n"/"p" for "n" = 1, 2, ..., "p" − 1, all have period "p" − 1 and differ only by a cyclic permutation. Such numbers "p" are called full repetend primes.
Reciprocals of composite integers coprime to 10.
If "p" is a prime other than 2 or 5, the decimal representation of the fraction repeats:
= 0.020408163265306122448979591836734693877551.
The period (repetend length) "L"(49) must be a factor of "λ"(49) = 42, where "λ"("n") is known as the Carmichael function. This follows from Carmichael's theorem which states that if "n" is a positive integer then "λ"("n") is the smallest integer "m" such that
formula_21
for every integer "a" that is coprime to "n".
The period of 1/"p"2 is usually "pT""p", where "T""p" is the period of 1/"p". There are three known primes for which this is not true, and for those the period of 1/"p"2 is the same as the period of 1/"p" because "p"2 divides 10"p"−1−1. These three primes are 3, 487, and 56598313 (sequence in the OEIS).
Similarly, the period of 1/"p""k" is usually "p""k"–1"T""p".
If "p" and "q" are primes other than 2 or 5, the decimal representation of the fraction repeats. An example is :
119 = 7 × 17
"λ"(7 × 17) = LCM("λ"(7), "λ"(17)) = LCM(6, 16) = 48,
where LCM denotes the least common multiple.
The period "T" of is a factor of "λ"("pq") and it happens to be 48 in this case:
= 0.008403361344537815126050420168067226890756302521.
The period "T" of is LCM("T""p", "T""q"), where "T""p" is the period of and "T""q" is the period of .
If "p", "q", "r", etc. are primes other than 2 or 5, and "k", "ℓ", "m", etc. are positive integers, then
formula_22
is a repeating decimal with a period of
formula_23
where "Tpk", "Tqℓ", "Trm"... are respectively the period of the repeating decimals , , ... as defined above.
Reciprocals of integers not coprime to 10.
An integer that is not coprime to 10 but has a prime factor other than 2 or 5 has a reciprocal that is eventually periodic, but with a non-repeating sequence of digits that precede the repeating part. The reciprocal can be expressed as:
formula_24
where "a" and "b" are not both zero.
This fraction can also be expressed as:
formula_25
if "a" > "b", or as
formula_26
if "b" > "a", or as
formula_27
if "a" = "b".
The decimal has:
For example, 1/28 = 0.03571428571428... (preperiod "03", repetend "571428"):
Converting repeating decimals to fractions.
Given a repeating decimal, it is possible to calculate the fraction that produces it. For example:
Another example:
A shortcut.
The procedure below can be applied in particular if the repetend has "n" digits, all of which are 0 except the final one which is 1. For instance for "n" = 7:
formula_28
So this particular repeating decimal corresponds to the fraction 1/(10"n" − 1), where the denominator is the number written as "n" 9s. Knowing just that, a general repeating decimal can be expressed as a fraction without having to solve an equation. For example, one could reason:
formula_29
or
formula_30
It is possible to get a general formula expressing a repeating decimal with an "n"-digit period (repetend length), beginning right after the decimal point, as a fraction:
formula_31
More explicitly, one gets the following cases:
If the repeating decimal is between 0 and 1, and the repeating block is "n" digits long, first occurring right after the decimal point, then the fraction (not necessarily reduced) will be the integer number represented by the "n"-digit block divided by the one represented by "n" 9s. For example,
If the repeating decimal is as above, except that there are "k" (extra) digits 0 between the decimal point and the repeating "n"-digit block, then one can simply add "k" digits 0 after the "n" digits 9 of the denominator (and, as before, the fraction may subsequently be simplified). For example,
Any repeating decimal not of the form described above can be written as a sum of a terminating decimal and a repeating decimal of one of the two above types (actually the first type suffices, but that could require the terminating decimal to be negative). For example,
An even faster method is to ignore the decimal point completely and go like this
It follows that any repeating decimal with period "n", and "k" digits after the decimal point that do not belong to the repeating part, can be written as a (not necessarily reduced) fraction whose denominator is (10"n" − 1)10"k".
Conversely the period of the repeating decimal of a fraction "c"/"d" will be (at most) the smallest number "n" such that 10"n" − 1 is divisible by "d".
For example, for a fraction with "d" = 7, the smallest "n" that makes 10"n" − 1 divisible by 7 is "n" = 6, because 999999 = 7 × 142857. The period of such a fraction is therefore 6.
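For a general denominator, the factors of 2 and 5 determine the preperiod and the remaining factor determines the period; a short Python sketch (illustrative, assuming SymPy for the multiplicative order, not part of the original text):
from sympy import n_order   # multiplicative order of 10 modulo d

def preperiod_and_period(d):
    # remove the factors 2 and 5; their largest exponent is the preperiod length k
    e2 = e5 = 0
    while d % 2 == 0:
        d //= 2
        e2 += 1
    while d % 5 == 0:
        d //= 5
        e5 += 1
    k = max(e2, e5)
    n = 0 if d == 1 else n_order(10, d)   # 0 means the decimal terminates
    return k, n

print(preperiod_and_period(7))    # (0, 6)
print(preperiod_and_period(28))   # (2, 6): 1/28 = 0.03571428571428...
print(preperiod_and_period(8))    # (3, 0): 1/8 = 0.125 terminates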
In compressed form.
The following scheme suggests a kind of compression of the above shortcut.
Thereby formula_32 represents the digits of the integer part of the decimal number (to the left of the decimal point), formula_33 makes up the string of digits of the preperiod and formula_34 its length, and formula_35 is the string of repeated digits (the period) with length formula_36, which is nonzero.
In the generated fraction, the digit formula_37 will be repeated formula_36 times, and the digit formula_38 will be repeated formula_34 times.
Note that in the absence of an integer part in the decimal, formula_32 will be represented by zero, which, being to the left of the other digits, will not affect the final result, and may be omitted in the calculation of the generated fraction.
Examples:
formula_39
The symbol formula_40 in the examples above denotes the absence of digits of part formula_33 in the decimal, and therefore formula_41 and a corresponding absence in the generated fraction.
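The compressed rule lends itself directly to code; the following Python sketch (illustrative, with the digit strings I, A and P as in the scheme above) returns the generated fraction:
from fractions import Fraction

def repeating_to_fraction(I, A, P):
    # I = integer part, A = preperiod digits, P = repetend digits (P nonempty)
    numerator   = int(I + A + P) - int((I + A) or "0")
    denominator = 10 ** (len(A) + len(P)) - 10 ** len(A)
    return Fraction(numerator, denominator)

print(repeating_to_fraction("3", "25", "4"))    # 2929/900
print(repeating_to_fraction("0", "", "512"))    # 512/999
print(repeating_to_fraction("1", "0", "91"))    # 1081/990
print(repeating_to_fraction("0", "3", "789"))   # 631/1665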
Repeating decimals as infinite series.
A repeating decimal can also be expressed as an infinite series. That is, a repeating decimal can be regarded as the sum of an infinite number of rational numbers. To take the simplest example,
formula_42
The above series is a geometric series with first term 1/10 and common ratio 1/10. Because the absolute value of the common ratio is less than 1, we can say that the geometric series converges and find the exact value in the form of a fraction by using the following formula, where "a" is the first term of the series and "r" is the common ratio.
formula_43
Similarly,
formula_44
Multiplication and cyclic permutation.
The cyclic behavior of repeating decimals in multiplication also leads to the construction of integers which are cyclically permuted when multiplied by certain numbers. For example, 102564 × 4 = 410256. 102564 is the repetend of 4/39 and 410256 the repetend of 16/39.
Other properties of repetend lengths.
Various properties of repetend lengths (periods) are given by Mitchell and Dickson. For example, if
formula_45
for some "m", but
formula_46
then for "c" ≥ 0 we have
formula_47
For some other properties of repetends, see the references.
Extension to other bases.
Various features of repeating decimals extend to the representation of numbers in all other integer bases, not just base 10:
formula_48
combined with a consecutive set of digits
formula_49
with "r" :
|b|, "dr" :
d1 + "r" − 1 and 0 ∈ "D", then a terminating sequence is obviously equivalent to the same sequence with "non-terminating" repeating part consisting of the digit 0. If the base is positive, then there exists an order homomorphism from the lexicographical order of the right-sided infinite strings over the alphabet "D" into some closed interval of the reals, which maps the strings 0."A"1"A"2..."A""n""db" and 0."A"1"A"2...("An"+1)"d"1 with "Ai" ∈ "D" and "An" ≠ "db" to the same real number – and there are no other duplicate images. In the decimal system, for example, there is 0.9 = 1.0 = 1; in the balanced ternary system there is 0.1 = 1.T = .
formula_50
represents the fraction
formula_51
For example, in duodecimal, 1/2 = 0.6, 1/3 = 0.4, 1/4 = 0.3 and 1/6 = 0.2 all terminate; 1/5 = 0.2497 repeats with period length 4, in contrast with the equivalent decimal expansion of 0.2; 1/7 = 0.186A35 has period 6 in duodecimal, just as it does in decimal.
If b is an integer base and k is an integer, then
formula_52
For example 1/7 in duodecimal:
formula_53
which is 0.186A35 in base 12. Here 10 in base 12 equals 12 in base 10, the square of 10 in base 12 equals 144 in base 10, 21 in base 12 equals 25 in base 10, and A5 in base 12 equals 125 in base 10.
Algorithm for positive bases.
For a rational 0 < "p"/"q" < 1 (and base "b" ∈ N>1) there is the following algorithm producing the repetend together with its length:
function b_adic(b, p, q) // b ≥ 2; 0 < p < q
    digits = "0123...";   // up to the digit with value b–1
begin
    s = "";               // the string of digits
    pos = 0;              // all places are right of the radix point
    while not defined(occurs[p]) do
        occurs[p] = pos;  // the position of the place with remainder p
        bp = b*p;
        z = floor(bp/q);  // index z of digit within: 0 ≤ z ≤ b–1
        p = b*p − z*q;    // 0 ≤ p < q
        if p = 0 then
            L = 0;
            if not z = 0 then
                s = s . substring(digits, z, 1)
            end if
            return (s);
        end if
        s = s . substring(digits, z, 1); // append the character of the digit
        pos += 1;
    end while
    L = pos - occurs[p];  // the length of the repetend (being < q)
    // mark the digits of the repetend by a vinculum:
    for i from occurs[p] to pos-1 do
        substring(s, i, 1) = overline(substring(s, i, 1));
    end for
    return (s);
end function
The line "z = floor(bp/q)" calculates the digit z.
The subsequent line calculates the new remainder p′ of the division modulo the denominator q. As a consequence of the floor function we have
formula_54
thus
formula_55
and
formula_56
Because all these remainders p are non-negative integers less than q, there can be only a finite number of them, with the consequence that they must recur in the while loop. Such a recurrence is detected by the associative array occurs. The new digit z is formed in the line "z = floor(bp/q)", where p is the only non-constant. The length L of the repetend equals the number of the remainders (see also section Every rational number is either a terminating or repeating decimal).
Applications to cryptography.
Repeating decimals (also called decimal sequences) have found cryptographic and error-correction coding applications. In these applications repeating decimals to base 2 are generally used, which gives rise to binary sequences. The maximum length binary sequence for 1/"p" (when 2 is a primitive root of "p") is given by:
formula_57
These sequences of period "p" − 1 have an autocorrelation function that has a negative peak of −1 for shift of . The randomness of these sequences has been examined by diehard tests.
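For illustration (a Python sketch, not part of the original text), one period of such a sequence for the small prime "p" = 11, for which 2 is a primitive root:
p = 11                                       # 2 is a primitive root modulo 11
seq = [pow(2, i, p) % 2 for i in range(p - 1)]
print(seq)                                   # [1, 0, 0, 0, 1, 0, 1, 1, 1, 0]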
Notes.
| [
{
"math_id": 0,
"text": "x=a.b\\overline{c}"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "n=\\lceil{\\log_{10}b}\\rceil"
},
{
"math_id": 5,
"text": "10^n"
},
{
"math_id": 6,
"text": "10^nx=ab.\\bar{c} ."
},
{
"math_id": 7,
"text": "c=0"
},
{
"math_id": 8,
"text": "c\\neq0"
},
{
"math_id": 9,
"text": "k\\in\\mathbb{N}"
},
{
"math_id": 10,
"text": "x=y. \\bar{c}"
},
{
"math_id": 11,
"text": "y\\in\\mathbb{Z}"
},
{
"math_id": 12,
"text": "c=d_1 d_2\\,...d_k"
},
{
"math_id": 13,
"text": "d_i"
},
{
"math_id": 14,
"text": "x=y+\\sum_{n=1}^\\infty \\frac{c}{{(10^k)}^n}= y +\\left(c\\sum_{n=0}^\\infty \\frac{1}{{(10^k)}^n}\\right)-c ."
},
{
"math_id": 15,
"text": "\\textstyle \\sum_{n=0}^\\infty \\frac{1}{{(10^k)}^n} = \\frac{1}{1-10^{-k}} "
},
{
"math_id": 16,
"text": "x=y -c+\\frac{10^k c }{10^k-1} ."
},
{
"math_id": 17,
"text": "x"
},
{
"math_id": 18,
"text": "y -c"
},
{
"math_id": 19,
"text": "\\frac{10^kc}{10^k-1}"
},
{
"math_id": 20,
"text": "\\frac{10^{11-1}-1}{11}= 909090909"
},
{
"math_id": 21,
"text": "a^m \\equiv 1 \\pmod n"
},
{
"math_id": 22,
"text": "\\frac{1}{p^k q^\\ell r^m \\cdots}"
},
{
"math_id": 23,
"text": "\\operatorname{LCM}(T_{p^k}, T_{q^\\ell}, T_{r^m}, \\ldots)"
},
{
"math_id": 24,
"text": "\\frac{1}{2^a \\cdot 5^b p^k q^\\ell \\cdots}\\, ,"
},
{
"math_id": 25,
"text": "\\frac{5^{a-b}}{10^a p^k q^\\ell \\cdots}\\, ,"
},
{
"math_id": 26,
"text": "\\frac{2^{b-a}}{10^b p^k q^\\ell \\cdots}\\, ,"
},
{
"math_id": 27,
"text": "\\frac{1}{10^a p^k q^\\ell \\cdots}\\, ,"
},
{
"math_id": 28,
"text": "\\begin{align}\n x &= 0.000000100000010000001\\ldots \\\\\n10^7x &= 1.000000100000010000001\\ldots \\\\\n\\left(10^7-1\\right)x=9999999x &= 1 \\\\\n x &= \\frac{1}{10^7-1} = \\frac{1}{9999999}\n\\end{align}"
},
{
"math_id": 29,
"text": "\n\\begin{align}\n7.48181818\\ldots & = 7.3 + 0.18181818\\ldots \\\\[8pt]\n& = \\frac{73}{10}+\\frac{18}{99} = \\frac{73}{10} + \\frac{9\\cdot2}{9\\cdot 11}\n= \\frac{73}{10} + \\frac{2}{11} \\\\[12pt]\n& = \\frac{11\\cdot73 + 10\\cdot2}{10\\cdot 11} = \\frac{823}{110}\n\\end{align}\n"
},
{
"math_id": 30,
"text": "\n\\begin{align}\n11.18867924528301886792452830\\ldots & = 11 + 0.18867924528301886792452830\\ldots \\\\[8pt]\n& = 11 + \\frac{10}{53} = \\frac{11\\cdot53 + 10}{53} = \\frac{593}{53}\n\\end{align}\n"
},
{
"math_id": 31,
"text": "\\begin{align}\nx &= 0.\\overline{a_1 a_2 \\cdots a_n} \\\\\n10^n x &= a_1 a_2 \\cdots a_n.\\overline{a_1 a_2 \\cdots a_n} \\\\[5pt]\n\\left(10^n - 1\\right)x = 99 \\cdots 99x &= a_1 a_2 \\cdots a_n \\\\[5pt]\nx &= \\frac{a_1 a_2 \\cdots a_n}{10^n - 1} = \\frac{a_1 a_2 \\cdots a_n}{99 \\cdots 99}\n\\end{align}"
},
{
"math_id": 32,
"text": "\\mathbf{I}"
},
{
"math_id": 33,
"text": "\\mathbf{A}"
},
{
"math_id": 34,
"text": "\\#\\mathbf{A}"
},
{
"math_id": 35,
"text": "\\mathbf{P}"
},
{
"math_id": 36,
"text": "\\#\\mathbf{P}"
},
{
"math_id": 37,
"text": "9"
},
{
"math_id": 38,
"text": "0"
},
{
"math_id": 39,
"text": "\\begin{array}{lllll}\n3.254444\\ldots &=3.25\\overline{4} &= \\begin{Bmatrix}\n\\mathbf{I}=3&\\mathbf{A}=25&\\mathbf{P}=4\\\\\n&\\#\\mathbf{A}=2&\\#\\mathbf{P}=1\n\\end{Bmatrix}\n&=\\dfrac{3254-325}{900}&=\\dfrac{2929}{900}\n\\\\\n\\\\0.512512\\ldots &=0.\\overline{512} &= \\begin{Bmatrix}\n\\mathbf{I}=0&\\mathbf{A}=\\emptyset&\\mathbf{P}=512\\\\\n&\\#\\mathbf{A}=0&\\#\\mathbf{P}=3\n\\end{Bmatrix}\n&=\\dfrac{512-0}{999}&=\\dfrac{512}{999}\n\\\\\n\\\\1.09191\\ldots &=1.0\\overline{91} &= \\begin{Bmatrix}\n\\mathbf{I}=1&\\mathbf{A}=0&\\mathbf{P}=91\\\\\n&\\#\\mathbf{A}=1&\\#\\mathbf{P}=2\n\\end{Bmatrix}\n&=\\dfrac{1091-10}{990}&=\\dfrac{1081}{990}\n\\\\\n\\\\1.333\\ldots &=1.\\overline{3} &= \\begin{Bmatrix}\n\\mathbf{I}=1&\\mathbf{A}=\\emptyset&\\mathbf{P}=3\\\\\n&\\#\\mathbf{A}=0&\\#\\mathbf{P}=1\n\\end{Bmatrix}\n&=\\dfrac{13-1}{9}&=\\dfrac{12}{9}&=\\dfrac{4}{3}\n\\\\\n\\\\0.3789789\\ldots &=0.3\\overline{789} &= \\begin{Bmatrix}\n\\mathbf{I}=0&\\mathbf{A}=3&\\mathbf{P}=789\\\\\n&\\#\\mathbf{A}=1&\\#\\mathbf{P}=3\n\\end{Bmatrix}\n&=\\dfrac{3789-3}{9990}&=\\dfrac{3786}{9990}&=\\dfrac{631}{1665}\n\\end{array}\n"
},
{
"math_id": 40,
"text": "\\emptyset"
},
{
"math_id": 41,
"text": "\\#\\mathbf{A}=0"
},
{
"math_id": 42,
"text": "0.\\overline{1} = \\frac{1}{10} + \\frac{1}{100} + \\frac{1}{1000} + \\cdots = \\sum_{n=1}^\\infty \\frac{1}{10^n}"
},
{
"math_id": 43,
"text": "\\frac{a}{1-r} = \\frac{\\frac{1}{10}}{1-\\frac{1}{10}} = \\frac{1}{10-1} = \\frac{1}{9}"
},
{
"math_id": 44,
"text": "\\begin{align}\n0.\\overline{142857} &= \\frac{142857}{10^6} + \\frac{142857}{10^{12}} + \\frac{142857}{10^{18}} + \\cdots = \\sum_{n=1}^\\infty \\frac{142857}{10^{6n}} \\\\[6px]\n\\implies &\\quad \\frac{a}{1-r} = \\frac{\\frac{142857}{10^6}}{1-\\frac{1}{10^6}} = \\frac{142857}{10^6-1} = \\frac{142857}{999999} = \\frac17\n\\end{align}"
},
{
"math_id": 45,
"text": "\\text{period}\\left(\\frac{1}{p}\\right)= \\text{period}\\left(\\frac{1}{p^2}\\right)= \\cdots = \\text{period}\\left(\\frac{1}{p^m}\\right)"
},
{
"math_id": 46,
"text": "\\text{period}\\left(\\frac{1}{p^m}\\right) \\ne \\text{period}\\left(\\frac {1}{p^{m+1}}\\right),"
},
{
"math_id": 47,
"text": "\\text{period}\\left(\\frac{1}{p^{m+c}}\\right) = p^c \\cdot \\text{period}\\left(\\frac{1}{p}\\right)."
},
{
"math_id": 48,
"text": "b\\in\\Z\\smallsetminus\\{-1,0,1\\}"
},
{
"math_id": 49,
"text": "D:=\\{d_1, d_1+1, \\dots, d_r\\}"
},
{
"math_id": 50,
"text": "\\left(0.\\overline{A_1A_2\\ldots A_\\ell}\\right)_b"
},
{
"math_id": 51,
"text": "\\frac{( A_1A_2\\ldots A_\\ell)_b}{b^\\ell-1}."
},
{
"math_id": 52,
"text": "\\frac{1}{k} = \\frac{1}{b} + \\frac{(b-k)^1}{b^2} + \\frac{(b-k)^2}{b^3} + \\frac{(b-k)^3}{b^4} + \\cdots + \\frac{(b-k)^{N-1}}{b^N} + \\cdots = \\frac1b \\frac1{1-\\frac{b-k}b}."
},
{
"math_id": 53,
"text": "\n\\frac17 = \\left(\\frac1{10^{\\phantom1}} + \\frac5{10^2} + \\frac{21}{10^3} + \\frac{A5}{10^4} + \\frac{441}{10^5} + \\frac{1985}{10^6} + \\cdots \\right)_\\text{base 12}\n"
},
{
"math_id": 54,
"text": "\\frac{b p}{q} - 1 \\; \\; < \\; \\; z = \\left\\lfloor \\frac{b p}{q} \\right\\rfloor \\; \\; \\le \\; \\; \\frac{b p}{q} , "
},
{
"math_id": 55,
"text": "b p - q < z q \\quad \\implies \\quad p' := b p - z q < q "
},
{
"math_id": 56,
"text": "z q \\le b p\\quad \\implies \\quad 0 \\le b p - z q =: p' \\,."
},
{
"math_id": 57,
"text": "a(i) = 2^i \\bmod p \\bmod 2"
}
]
| https://en.wikipedia.org/wiki?curid=13612447 |
1361454 | Stochastic differential equation | Differential equations involving stochastic processes
A stochastic differential equation (SDE) is a differential equation in which one or more of the terms is a stochastic process, resulting in a solution which is also a stochastic process. SDEs have many applications throughout pure mathematics and are used to model various behaviours of stochastic models such as stock prices, random growth models or physical systems that are subjected to thermal fluctuations.
SDEs have a random differential that is in the most basic case random white noise calculated as the derivative of a Brownian motion or more generally a semimartingale. However, other types of random behaviour are possible, such as jump processes like Lévy processes or semimartingales with jumps. Random differential equations are conjugate to stochastic differential equations. Stochastic differential equations can also be extended to differential manifolds.
Background.
Stochastic differential equations originated in the theory of Brownian motion, in the work of Albert Einstein and Marian Smoluchowski in 1905, although Louis Bachelier was the first person credited with modeling Brownian motion in 1900, giving a very early example of a stochastic differential equation now known as Bachelier model. Some of these early examples were linear stochastic differential equations, also called Langevin equations after French physicist Langevin, describing the motion of a harmonic oscillator subject to a random force.
The mathematical theory of stochastic differential equations was developed in the 1940s through the groundbreaking work of Japanese mathematician Kiyosi Itô, who introduced the concept of stochastic integral and initiated the study of nonlinear stochastic differential equations. Another approach was later proposed by Russian physicist Stratonovich, leading to a calculus similar to ordinary calculus.
Terminology.
The most common form of SDEs in the literature is an ordinary differential equation with the right hand side perturbed by a term dependent on a white noise variable. In most cases, SDEs are understood as continuous time limit of the corresponding stochastic difference equations. This understanding of SDEs is ambiguous and must be complemented by a proper mathematical definition of the corresponding integral. Such a mathematical definition was first proposed by Kiyosi Itô in the 1940s, leading to what is known today as the Itô calculus.
Another construction was later proposed by Russian physicist Stratonovich,
leading to what is known as the Stratonovich integral.
The Itô integral and Stratonovich integral are related, but different, objects and the choice between them depends on the application considered. The Itô calculus is based on the concept of non-anticipativeness or causality, which is natural in applications where the variable is time.
The Stratonovich calculus, on the other hand, has rules which resemble ordinary calculus and has intrinsic geometric properties which render it more natural when dealing with geometric problems such as random motion on manifolds, although it is possible and in some cases preferable to model random motion on manifolds through Itô SDEs, for example when trying to optimally approximate SDEs on submanifolds.
An alternative view on SDEs is the stochastic flow of diffeomorphisms. This understanding is unambiguous and corresponds to the Stratonovich version of the continuous time limit of stochastic difference equations. Associated with SDEs is the Smoluchowski equation or the Fokker–Planck equation, an equation describing the time evolution of probability distribution functions. The generalization of the Fokker–Planck evolution to temporal evolution of differential forms is provided by the concept of stochastic evolution operator.
In physical science, there is an ambiguity in the usage of the term "Langevin SDEs". While Langevin SDEs can be of a more general form, this term typically refers to a narrow class of SDEs with gradient flow vector fields. This class of SDEs is particularly popular because it is a starting point of the Parisi–Sourlas stochastic quantization procedure, leading to a N=2 supersymmetric model closely related to supersymmetric quantum mechanics. From the physical point of view, however, this class of SDEs is not very interesting because it never exhibits spontaneous breakdown of topological supersymmetry, i.e., (overdamped) Langevin SDEs are never chaotic.
Stochastic calculus.
Brownian motion or the Wiener process was discovered to be exceptionally complex mathematically. The Wiener process is almost surely nowhere differentiable; thus, it requires its own rules of calculus. There are two dominating versions of stochastic calculus, the Itô stochastic calculus and the Stratonovich stochastic calculus. Each of the two has advantages and disadvantages, and newcomers are often confused whether the one is more appropriate than the other in a given situation. Guidelines exist (e.g. Øksendal, 2003) and conveniently, one can readily convert an Itô SDE to an equivalent Stratonovich SDE and back again. Still, one must be careful which calculus to use when the SDE is initially written down.
Numerical solutions.
Numerical methods for solving stochastic differential equations include the Euler–Maruyama method, Milstein method, Runge–Kutta method (SDE), Rosenbrock method, and methods based on different representations of iterated stochastic integrals.
Use in physics.
In physics, SDEs have wide applicability ranging from molecular dynamics to neurodynamics and to the dynamics of astrophysical objects. More specifically, SDEs describe all dynamical systems, in which quantum effects are either unimportant or can be taken into account as perturbations. SDEs can be viewed as a generalization of the dynamical systems theory to models with noise. This is an important generalization because real systems cannot be completely isolated from their environments and for this reason always experience external stochastic influence.
There are standard techniques for transforming higher-order equations into several coupled first-order equations by introducing new unknowns. Therefore, the following is the most general class of SDEs:
formula_0
where formula_1 is the position in the system in its phase (or state) space, formula_2, assumed to be a differentiable manifold, the formula_3 is a flow vector field representing deterministic law of evolution, and formula_4 is a set of vector fields that define the coupling of the system to Gaussian white noise, formula_5. If formula_6 is a linear space and formula_7 are constants, the system is said to be subject to additive noise, otherwise it is said to be subject to multiplicative noise. This term is somewhat misleading as it has come to mean the general case even though it appears to imply the limited case in which formula_8.
For a fixed configuration of noise, SDE has a unique solution differentiable with respect to the initial condition. Nontriviality of stochastic case shows up when one tries to average various objects of interest over noise configurations. In this sense, an SDE is not a uniquely defined entity when noise is multiplicative and when the SDE is understood as a continuous time limit of a stochastic difference equation. In this case, SDE must be complemented by what is known as "interpretations of SDE" such as Itô or a Stratonovich interpretations of SDEs. Nevertheless, when SDE is viewed as a continuous-time stochastic flow of diffeomorphisms, it is a uniquely defined mathematical object that corresponds to Stratonovich approach to a continuous time limit of a stochastic difference equation.
In physics, the main method of solution is to find the probability distribution function as a function of time using the equivalent Fokker–Planck equation (FPE). The Fokker–Planck equation is a deterministic partial differential equation. It tells how the probability distribution function evolves in time similarly to how the Schrödinger equation gives the time evolution of the quantum wave function or the diffusion equation gives the time evolution of chemical concentration. Alternatively, numerical solutions can be obtained by Monte Carlo simulation. Other techniques include the path integration that draws on the analogy between statistical physics and quantum mechanics (for example, the Fokker-Planck equation can be transformed into the Schrödinger equation by rescaling a few variables) or by writing down ordinary differential equations for the statistical moments of the probability distribution function.
Use in probability and mathematical finance.
The notation used in probability theory (and in many applications of probability theory, for instance in signal processing with the filtering problem and in mathematical finance) is slightly different. It is also the notation used in publications on numerical methods for solving stochastic differential equations. This notation makes the exotic nature of the random function of time formula_5 in the physics formulation more explicit. In strict mathematical terms, formula_5 cannot be chosen as an ordinary function, but only as a generalized function. The mathematical formulation treats this complication with less ambiguity than the physics formulation.
A typical equation is of the form
formula_9
where formula_10 denotes a Wiener process (standard Brownian motion).
This equation should be interpreted as an informal way of expressing the corresponding integral equation
formula_11
The equation above characterizes the behavior of the continuous time stochastic process "X""t" as the sum of an ordinary Lebesgue integral and an Itô integral. A heuristic (but very helpful) interpretation of the stochastic differential equation is that in a small time interval of length "δ" the stochastic process "X""t" changes its value by an amount that is normally distributed with expectation "μ"("X""t", "t") "δ" and variance "σ"("X""t", "t")2 "δ" and is independent of the past behavior of the process. This is so because the increments of a Wiener process are independent and normally distributed. The function "μ" is referred to as the drift coefficient, while "σ" is called the diffusion coefficient. The stochastic process "X""t" is called a diffusion process, and satisfies the Markov property.
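That heuristic is exactly the Euler–Maruyama discretisation mentioned earlier; a minimal Python sketch (illustrative, with placeholder drift and diffusion functions, not part of the original text):
import numpy as np

def euler_maruyama(mu, sigma, x0, T=1.0, n_steps=1000, seed=0):
    # simulate dX_t = mu(X_t, t) dt + sigma(X_t, t) dB_t on [0, T]
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    t = 0.0
    for i in range(n_steps):
        dB = rng.normal(0.0, np.sqrt(dt))                  # Wiener increment
        x[i + 1] = x[i] + mu(x[i], t) * dt + sigma(x[i], t) * dB
        t += dt
    return x

# mean-reverting example with arbitrary illustrative parameters
path = euler_maruyama(mu=lambda x, t: -0.7 * x, sigma=lambda x, t: 0.3, x0=1.0)
print(path[-1])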
The formal interpretation of an SDE is given in terms of what constitutes a solution to the SDE. There are two main definitions of a solution to an SDE, a strong solution and a weak solution. Both require the existence of a process "X""t" that solves the integral equation version of the SDE. The difference between the two lies in the underlying probability space (formula_12). A weak solution consists of a probability space and a process that satisfies the integral equation, while a strong solution is a process that satisfies the equation and is defined on a given probability space. The Yamada–Watanabe theorem makes a connection between the two.
An important example is the equation for geometric Brownian motion
formula_13
which is the equation for the dynamics of the price of a stock in the Black–Scholes options pricing model of financial mathematics.
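Geometric Brownian motion has the closed-form solution "X""t" = "X"0 exp(("μ" − "σ"2/2)"t" + "σB""t"), which an Euler–Maruyama path built on the same Brownian increments should approximate; a Python sketch with illustrative parameters (not part of the original text):
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, x0, T, n = 0.05, 0.2, 100.0, 1.0, 1000
dt = T / n
dB = rng.normal(0.0, np.sqrt(dt), size=n)     # Brownian increments
B = np.concatenate(([0.0], np.cumsum(dB)))    # Brownian path
t = np.linspace(0.0, T, n + 1)

exact = x0 * np.exp((mu - 0.5 * sigma**2) * t + sigma * B)

approx = np.empty(n + 1)
approx[0] = x0
for i in range(n):
    approx[i + 1] = approx[i] + mu * approx[i] * dt + sigma * approx[i] * dB[i]

print(exact[-1], approx[-1])                  # close for small dt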
Generalizing the geometric Brownian motion, it is also possible to define SDEs admitting strong solutions and whose distribution is a convex combination of densities coming from different geometric Brownian motions or Black–Scholes models, obtaining a single SDE whose solution is distributed as a mixture dynamics of lognormal distributions of different Black–Scholes models. This leads to models that can deal with the volatility smile in financial mathematics.
The simpler SDE called arithmetic Brownian motion
formula_14
was used by Louis Bachelier as the first model for stock prices in 1900, known today as Bachelier model.
There are also more general stochastic differential equations where the coefficients "μ" and "σ" depend not only on the present value of the process "X""t", but also on previous values of the process and possibly on present or previous values of other processes too. In that case the solution process, "X", is not a Markov process, and it is called an Itô process and not a diffusion process. When the coefficients depend only on present and past values of "X", the defining equation is called a stochastic delay differential equation.
A generalization of stochastic differential equations with the Fisk–Stratonovich integral to semimartingales with jumps is provided by the SDEs of "Marcus type". The Marcus integral is an extension of McShane's stochastic calculus.
An innovative application in stochastic finance derives from the usage of the equation for Ornstein–Uhlenbeck process
formula_15
which is the equation for the dynamics of the return of the price of a stock under the hypothesis that returns display a Log-normal distribution.
Under this hypothesis, the methodologies developed by Marcello Minenna determine a prediction interval able to identify abnormal returns that could hide market abuse phenomena.
SDEs on manifolds.
More generally one can extend the theory of stochastic calculus onto differential manifolds and for this purpose one uses the Fisk-Stratonovich integral. Consider a manifold formula_16, some finite-dimensional vector space formula_17, a filtered probability space formula_18 with formula_19 satisfying the usual conditions and let formula_20 be the one-point compactification and formula_21 be formula_22-measurable. A "stochastic differential equation on formula_16" written
formula_23
is a pair formula_24, such that
For each formula_27 the map formula_28 is linear and formula_29 for each formula_30.
A solution to the SDE on formula_16 with initial condition formula_31 is a continuous formula_32-adapted formula_16-valued process formula_33 up to life time formula_34, s.t. for each test function formula_35 the process formula_36 is a real-valued semimartingale and for each stopping time formula_37 with formula_38 the equation
formula_39
holds formula_40-almost surely, where formula_41 is the differential at formula_2. It is a "maximal solution" if the life time is maximal, i.e.,
formula_42
formula_40-almost surely. It follows from the fact that formula_36 for each test function formula_35 is a semimartingale, that formula_2 is a "semimartingale on formula_16". Given a maximal solution we can extend the time of formula_2 onto full formula_43 and after a continuation of formula_44 on formula_45 we get
formula_46
up to indistinguishable processes.
Although Stratonovich SDEs are the natural choice for SDEs on manifolds, given that they satisfy the chain rule and that their drift and diffusion coefficients behave as vector fields under changes of coordinates, there are cases where Itô calculus on manifolds is preferable. A theory of Itô calculus on manifolds was first developed by Laurent Schwartz through the concept of Schwartz morphism, see also the related 2-jet interpretation of Itô SDEs on manifolds based on the jet bundle. This interpretation is helpful when trying to optimally approximate the solution of an SDE given on a large space with the solutions of an SDE given on a submanifold of that space, in that a Stratonovich based projection does not turn out to be optimal. This has been applied to the filtering problem, leading to optimal projection filters.
As rough paths.
Usually the solution of an SDE requires a probabilistic setting, as the integral implicit in the solution is a stochastic integral. If it were possible to deal with the differential equation path by path, one would not need to define a stochastic integral and one could develop a theory independently of probability theory.
This points to considering the SDE
formula_47
as a single deterministic differential equation for every formula_48, where formula_49 is the sample space in the given probability space (formula_12). However, a direct path-wise interpretation of the SDE is not possible, as the Brownian motion paths have unbounded variation and are nowhere differentiable with probability one, so that there is no naive way to give meaning to terms like formula_50, precluding also a naive path-wise definition of the stochastic integral as an integral against every single formula_50. However, motivated by the Wong-Zakai result for limits of solutions of SDEs with regular noise and using rough paths theory, while adding a chosen definition of iterated integrals of Brownian motion, it is possible to define a deterministic rough integral for every single formula_48 that coincides for example with the Ito integral with probability one for a particular choice of the iterated Brownian integral. Other definitions of the iterated integral lead to deterministic pathwise equivalents of different stochastic integrals, like the Stratonovich integral. This has been used for example in financial mathematics to price options without probability.
Existence and uniqueness of solutions.
As with deterministic ordinary and partial differential equations, it is important to know whether a given SDE has a solution, and whether or not it is unique. The following is a typical existence and uniqueness theorem for Itô SDEs taking values in "n"-dimensional Euclidean space R"n" and driven by an "m"-dimensional Brownian motion "B"; the proof may be found in Øksendal (2003, §5.2).
Let "T" > 0, and let
formula_51
formula_52
be measurable functions for which there exist constants "C" and "D" such that
formula_53
formula_54
for all "t" ∈ [0, "T"] and all "x" and "y" ∈ R"n", where
formula_55
Let "Z" be a random variable that is independent of the "σ"-algebra generated by "B""s", "s" ≥ 0, and with finite second moment:
formula_56
Then the stochastic differential equation/initial value problem
formula_57
formula_58
has a P-almost surely unique "t"-continuous solution ("t", "ω") ↦ "X""t"("ω") such that "X" is adapted to the filtration "F""t""Z" generated by "Z" and "B""s", "s" ≤ "t", and
formula_59
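Under these hypotheses the solution can also be approximated numerically; the Euler–Maruyama scheme is the simplest discretization of the integral form above. A minimal one-dimensional sketch in Python (the drift and diffusion functions below are illustrative choices satisfying the Lipschitz and linear-growth conditions, not taken from the text):
```python
import numpy as np

def euler_maruyama(mu, sigma, x0, T=1.0, n_steps=1000, seed=0):
    """Simulate one path of dX_t = mu(X_t, t) dt + sigma(X_t, t) dB_t (scalar case)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        t = k * dt
        dB = rng.normal(0.0, np.sqrt(dt))          # Brownian increment over one step
        x[k + 1] = x[k] + mu(x[k], t) * dt + sigma(x[k], t) * dB
    return x

# Globally Lipschitz coefficients with linear growth (an Ornstein-Uhlenbeck process)
mu = lambda x, t: -2.0 * x
sigma = lambda x, t: 0.5
print(euler_maruyama(mu, sigma, x0=1.0)[-1])
```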
General case: local Lipschitz condition and maximal solutions.
The stochastic differential equation above is only a special case of a more general form
formula_60
where the driving process formula_2 is an formula_61-valued semimartingale, the solution formula_62 takes values in an open set formula_65, and formula_64 maps into the space formula_66 of linear maps from formula_67 to formula_68.
More generally one can also look at stochastic differential equations on manifolds.
Whether the solution of this equation explodes depends on the choice of formula_69. Suppose formula_69 satisfies a local Lipschitz condition, i.e., for formula_70 and each compact set formula_71 there is a constant formula_72 such that
formula_73
holds, where formula_74 denotes the Euclidean norm. This condition guarantees the existence and uniqueness of a so-called "maximal solution".
Suppose formula_69 is continuous and satisfies the above local Lipschitz condition and let formula_75 be some initial condition, meaning it is a measurable function with respect to the initial σ-algebra. Let formula_76 be a predictable stopping time with formula_77 almost surely. A formula_78-valued semimartingale formula_79 is called a "maximal solution" of
formula_80
with "life time" formula_34 if
formula_83
formula_34 is also a so-called "explosion time".
Some explicitly solvable examples.
Explicitly solvable SDEs include:
Linear SDE: General case.
The linear SDE
formula_87
has the general solution
formula_88
where
formula_89
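In the constant-coefficient homogeneous special case (formula_87 with c = d = 0), the solution reduces to geometric Brownian motion, X_t = X_0 exp((a - b^2/2)t + b W_t), which makes the closed form easy to check against a direct discretization. A minimal Monte Carlo sketch in Python (all parameter values are illustrative):
```python
import numpy as np

a, b, x0, T, n, paths = 0.7, 0.4, 1.0, 1.0, 2000, 20000
rng = np.random.default_rng(1)
dt = T / n
dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))   # Brownian increments
W_T = dW.sum(axis=1)

# Closed-form solution X_T = X_0 * exp((a - b^2/2) T + b W_T)
exact = x0 * np.exp((a - 0.5 * b**2) * T + b * W_T)

# Euler-Maruyama on the same Brownian increments
x = np.full(paths, x0)
for k in range(n):
    x = x + a * x * dt + b * x * dW[:, k]

print(np.mean(np.abs(exact - x)))   # small pathwise error, shrinking as n grows
```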
Reducible SDEs: Case 1.
The SDE
formula_90
for a given differentiable function formula_44 is equivalent to the Stratonovich SDE
formula_91
which has a general solution
formula_92
where
formula_93
Reducible SDEs: Case 2.
The SDE
formula_94
for a given differentiable function formula_44 is equivalent to the Stratonovich SDE
formula_95
which is reducible to
formula_96
where formula_97, with formula_98 defined as before.
Its general solution is
formula_99
SDEs and supersymmetry.
In the supersymmetric theory of SDEs, stochastic dynamics is defined via a stochastic evolution operator acting on the differential forms on the phase space of the model. In this exact formulation of stochastic dynamics, all SDEs possess topological supersymmetry, which represents the preservation of the continuity of the phase space by continuous time flow. The spontaneous breakdown of this supersymmetry is the mathematical essence of the ubiquitous dynamical phenomenon known across disciplines as chaos, turbulence, self-organized criticality, etc., and the Goldstone theorem explains the associated long-range dynamical behavior, i.e., the butterfly effect, 1/f and crackling noises, and the scale-free statistics of earthquakes, neuroavalanches, solar flares, etc.
References.
| [
{
"math_id": 0,
"text": "\\frac{\\mathrm{d}x(t)}{\\mathrm{d}t} = F(x(t)) + \\sum_{\\alpha=1}^ng_\\alpha(x(t))\\xi^\\alpha(t),\\,"
},
{
"math_id": 1,
"text": "x\\in X "
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "F\\in TX"
},
{
"math_id": 4,
"text": "g_\\alpha\\in TX "
},
{
"math_id": 5,
"text": "\\xi^\\alpha"
},
{
"math_id": 6,
"text": " X "
},
{
"math_id": 7,
"text": "g"
},
{
"math_id": 8,
"text": " g(x) \\propto x"
},
{
"math_id": 9,
"text": " \\mathrm{d} X_t = \\mu(X_t,t)\\, \\mathrm{d} t + \\sigma(X_t,t)\\, \\mathrm{d} B_t , "
},
{
"math_id": 10,
"text": "B"
},
{
"math_id": 11,
"text": " X_{t+s} - X_{t} = \\int_t^{t+s} \\mu(X_u,u) \\mathrm{d} u + \\int_t^{t+s} \\sigma(X_u,u)\\, \\mathrm{d} B_u . "
},
{
"math_id": 12,
"text": "\\Omega,\\, \\mathcal{F},\\, P"
},
{
"math_id": 13,
"text": "\\mathrm{d} X_t = \\mu X_t \\, \\mathrm{d} t + \\sigma X_t \\, \\mathrm{d} B_t."
},
{
"math_id": 14,
"text": "\\mathrm{d} X_t = \\mu \\, \\mathrm{d} t + \\sigma \\, \\mathrm{d} B_t"
},
{
"math_id": 15,
"text": "\\mathrm{d} R_t = \\mu R_t \\, \\mathrm{d} t + \\sigma_t \\, \\mathrm{d} B_t."
},
{
"math_id": 16,
"text": "M"
},
{
"math_id": 17,
"text": "E"
},
{
"math_id": 18,
"text": "(\\Omega,\\mathcal{F},(\\mathcal{F}_t)_{t\\in \\R_{+}},P)"
},
{
"math_id": 19,
"text": "(\\mathcal{F}_t)_{t\\in \\R_{+}}"
},
{
"math_id": 20,
"text": "\\widehat{M}=M\\cup \\{\\infty\\}"
},
{
"math_id": 21,
"text": "x_0"
},
{
"math_id": 22,
"text": "\\mathcal{F}_0"
},
{
"math_id": 23,
"text": "\\mathrm{d}X=A(X)\\circ dZ"
},
{
"math_id": 24,
"text": "(A,Z)"
},
{
"math_id": 25,
"text": "Z"
},
{
"math_id": 26,
"text": "A:M\\times E\\to TM, (x,e)\\mapsto A(x)e"
},
{
"math_id": 27,
"text": "x\\in M"
},
{
"math_id": 28,
"text": "A(x):E\\to T_{x}M"
},
{
"math_id": 29,
"text": "A(\\cdot)e\\in \\Gamma(TM)"
},
{
"math_id": 30,
"text": "e\\in E"
},
{
"math_id": 31,
"text": "X_0=x_0"
},
{
"math_id": 32,
"text": "\\{\\mathcal{F}_t\\}"
},
{
"math_id": 33,
"text": "(X_t)_{t<\\zeta}"
},
{
"math_id": 34,
"text": "\\zeta"
},
{
"math_id": 35,
"text": "f\\in C_c^{\\infty}(M)"
},
{
"math_id": 36,
"text": "f(X)"
},
{
"math_id": 37,
"text": "\\tau"
},
{
"math_id": 38,
"text": "0\\leq \\tau < \\zeta"
},
{
"math_id": 39,
"text": "f(X_{\\tau})=f(x_0)+\\int_0^\\tau (\\mathrm{d}f)_X A(X)\\circ \\mathrm{d}Z"
},
{
"math_id": 40,
"text": "P"
},
{
"math_id": 41,
"text": "(df)_X:T_xM\\to T_{f(x)}M"
},
{
"math_id": 42,
"text": "\\{\\zeta <\\infty\\}\\subset\\left\\{\\lim\\limits_{t\\nearrow \\zeta}X_t=\\infty \\text{ in }\\widehat{M}\\right\\}"
},
{
"math_id": 43,
"text": "\\R_+"
},
{
"math_id": 44,
"text": "f"
},
{
"math_id": 45,
"text": "\\widehat{M}"
},
{
"math_id": 46,
"text": "f(X_{t})=f(X_0)+\\int_0^t (\\mathrm{d}f)_X A(X)\\circ \\mathrm{d}Z, \\quad t\\geq 0"
},
{
"math_id": 47,
"text": " \\mathrm{d} X_t(\\omega) = \\mu(X_t(\\omega),t)\\, \\mathrm{d} t + \\sigma(X_t(\\omega),t)\\, \\mathrm{d} B_t(\\omega) "
},
{
"math_id": 48,
"text": "\\omega \\in \\Omega"
},
{
"math_id": 49,
"text": "\\Omega"
},
{
"math_id": 50,
"text": "\\mathrm{d} B_t(\\omega)"
},
{
"math_id": 51,
"text": "\\mu : \\mathbb{R}^{n} \\times [0, T] \\to \\mathbb{R}^{n};"
},
{
"math_id": 52,
"text": "\\sigma : \\mathbb{R}^{n} \\times [0, T] \\to \\mathbb{R}^{n \\times m};"
},
{
"math_id": 53,
"text": "\\big| \\mu (x, t) \\big| + \\big| \\sigma (x, t) \\big| \\leq C \\big( 1 + | x | \\big);"
},
{
"math_id": 54,
"text": "\\big| \\mu (x, t) - \\mu (y, t) \\big| + \\big| \\sigma (x, t) - \\sigma (y, t) \\big| \\leq D | x - y |;"
},
{
"math_id": 55,
"text": "| \\sigma |^{2} = \\sum_{i, j = 1}^{n} | \\sigma_{ij} |^{2}."
},
{
"math_id": 56,
"text": "\\mathbb{E} \\big[ | Z |^{2} \\big] < + \\infty."
},
{
"math_id": 57,
"text": "\\mathrm{d} X_{t} = \\mu (X_{t}, t) \\, \\mathrm{d} t + \\sigma (X_{t}, t) \\, \\mathrm{d} B_{t} \\mbox{ for } t \\in [0, T];"
},
{
"math_id": 58,
"text": "X_{0} = Z;"
},
{
"math_id": 59,
"text": "\\mathbb{E} \\left[ \\int_{0}^{T} | X_{t} |^{2} \\, \\mathrm{d} t \\right] < + \\infty."
},
{
"math_id": 60,
"text": "\\mathrm{d}Y_t=\\alpha(t,Y_t)\\mathrm{d}X_t"
},
{
"math_id": 61,
"text": "\\R^n"
},
{
"math_id": 62,
"text": "Y"
},
{
"math_id": 63,
"text": "\\R^d"
},
{
"math_id": 64,
"text": "\\alpha:\\R_{+}\\times U \\to \\operatorname{Lin}(\\R^{n};\\R^{d})"
},
{
"math_id": 65,
"text": "U\\subset \\R^d"
},
{
"math_id": 66,
"text": "\\operatorname{Lin}(\\R^{n};\\R^{d})"
},
{
"math_id": 67,
"text": "\\R^{n}"
},
{
"math_id": 68,
"text": "\\R^{d}"
},
{
"math_id": 69,
"text": "\\alpha"
},
{
"math_id": 70,
"text": "t\\geq 0"
},
{
"math_id": 71,
"text": "K\\subset U"
},
{
"math_id": 72,
"text": "L(t,K)"
},
{
"math_id": 73,
"text": "|\\alpha(s,y)-\\alpha(s,x)|\\leq L(t,K)|y-x|,\\quad x,y\\in K,\\;0\\leq s\\leq t,"
},
{
"math_id": 74,
"text": "|\\cdot|"
},
{
"math_id": 75,
"text": "F:\\Omega\\to U"
},
{
"math_id": 76,
"text": "\\zeta:\\Omega\\to \\overline{\\R}_{+}"
},
{
"math_id": 77,
"text": "\\zeta>0"
},
{
"math_id": 78,
"text": "U"
},
{
"math_id": 79,
"text": "(Y_t)_{t<\\zeta}"
},
{
"math_id": 80,
"text": "dY_t=\\alpha(t,Y_t)dX_t,\\quad Y_0=F"
},
{
"math_id": 81,
"text": "\\zeta_n\\nearrow\\zeta"
},
{
"math_id": 82,
"text": "Y^{\\zeta_n}"
},
{
"math_id": 83,
"text": "\\mathrm{d}Y=\\alpha(t,Y)\\mathrm{d}X^{\\zeta_n}"
},
{
"math_id": 84,
"text": "\\{\\zeta <\\infty\\}"
},
{
"math_id": 85,
"text": "Y_{t}\\to\\partial U"
},
{
"math_id": 86,
"text": "t\\to \\zeta"
},
{
"math_id": 87,
"text": "\\mathrm{d}X_t=(a(t)X_t+c(t))\\mathrm{d}t+(b(t)X_t+d(t))\\mathrm{d}W_t"
},
{
"math_id": 88,
"text": "X_t=\\Phi_{t,t_0}\\left(X_{t_0}+\\int_{t_0}^t\\Phi^{-1}_{s,t_0}(c(s)-b(s)\\mathrm{d}(s))\\mathrm{d}s+\\int_{t_0}^t\\Phi^{-1}_{s,t_0}\\mathrm{d}(s)\\mathrm{d}W_s\\right)"
},
{
"math_id": 89,
"text": "\\Phi_{t,t_0}=\\exp\\left(\\int_{t_0}^t\\left(a(s)-\\frac{b^2(s)}{2}\\right)\\mathrm{d}s+\\int_{t_0}^tb(s)\\mathrm{d}W_s\\right)"
},
{
"math_id": 90,
"text": "\\mathrm{d}X_t=\\frac12f(X_t)f'(X_t)\\mathrm{d}t+f(X_t)\\mathrm{d}W_t"
},
{
"math_id": 91,
"text": "\\mathrm{d}X_t=f(X_t)\\circ W_t"
},
{
"math_id": 92,
"text": "X_t=h^{-1}(W_t+h(X_0))"
},
{
"math_id": 93,
"text": "h(x)=\\int^{x}\\frac{\\mathrm{d}s}{f(s)}"
},
{
"math_id": 94,
"text": "\\mathrm{d}X_t=\\left(\\alpha f(X_t)+\\frac12 f(X_t)f'(X_t)\\right)\\mathrm{d}t+f(X_t)\\mathrm{d}W_t"
},
{
"math_id": 95,
"text": "\\mathrm{d}X_t=\\alpha f(X_t)\\mathrm{d}t + f(X_t)\\circ W_t"
},
{
"math_id": 96,
"text": "\\mathrm{d}Y_t=\\alpha \\mathrm{d}t+\\mathrm{d}W_t"
},
{
"math_id": 97,
"text": "Y_t=h(X_t)"
},
{
"math_id": 98,
"text": "h"
},
{
"math_id": 99,
"text": "X_t=h^{-1}(\\alpha t+W_t+h(X_0))"
}
]
| https://en.wikipedia.org/wiki?curid=1361454 |
13616862 | Ward Leonard control | Ward Leonard control, also known as the Ward Leonard drive system, was a widely used DC motor speed control system introduced by Harry Ward Leonard in 1891. In the early 1900s, the control system of Ward Leonard was adopted by the U.S. Navy and also used in passenger lifts of large mines. It also provided a solution to a moving sidewalk at the Paris Exposition of 1900, where many others had failed to operate properly. It was applied to railway locomotives used in World War I, and was used in anti-aircraft radars in World War II. Connected to automatic anti-aircraft gun directors, the tracking motion in two dimensions had to be extremely smooth and precise. The MIT Radiation Laboratory selected Ward-Leonard to equip the famous radar SCR-584 in 1942. The Ward Leonard control system was widely used for elevators until thyristor drives became available in the 1980s, because it offered smooth speed control and consistent torque. Many Ward Leonard control systems and variations on them remain in use.
Basic concept.
The key feature of the Ward Leonard control system is the ability to smoothly vary the speed of a DC motor, including reversing it, by controlling the field and hence the output voltage of a DC generator, as well as the field of the motor itself. As the speed of a DC motor is dictated by the supplied voltage, this gives simple speed control. The DC generator could be driven by any means. This 'prime mover' could be an AC motor, or it could be an internal combustion engine (its application to vehicles was patented by H.W. Leonard in 1903).
A Ward Leonard drive can be viewed as a high-power amplifier in the multi-kilowatt range, built from rotating electrical machinery. Where the 'prime mover' is electrical, a Ward Leonard drive unit consists of a motor and generator with shafts coupled together. The prime mover, which turns at a constant speed, may be AC or DC powered. The generator is a DC generator, with field windings and armature windings. The input to the amplifier is applied to the field windings, and the higher power output comes from the armature windings.
(See Excitation (magnetic)#Amplifier principle for how a generator can act as an amplifier.) The amplifier output is usually connected to a second motor, which moves the load, such as an elevator. With this arrangement, small changes in current applied to the input, and thus the generator field, result in large changes in the output, allowing smooth speed control.
A flywheel may be used to reduce voltage fluctuations during sudden load changes. The Ward Leonard system with this modification is known as "Ward Leonard Ilgner Control". In that configuration, the synchronous motor, normally used for Ward Leonard control, is replaced by a wound-rotor induction motor. The combination of an induction motor, flywheel, and generator(s) is known as an "Ilgner set". It effectively decouples intermittent short-term high loading of the generator from the AC supply.
A more technical description.
The speed of the DC motor is controlled by varying the voltage fed to the generator field windings, Vgf, which varies the output voltage of the generator. The varied output voltage appears directly across the motor armature, since the generator and motor armatures are connected, and so changes the motor voltage. Consequently, changing Vgf controls the speed of the motor. The picture on the right shows the Ward Leonard control system, with Vgf feeding the generator field and Vmf feeding the motor field.
Transfer function.
The subscripts 'g' and 'm' represent the generator and the motor, respectively. The superscripts 'f', 'r', and 'a' correspond to field, rotor, and armature.
Eq. 1: The generator field equation
formula_7
Eq. 2: The equation of electrical equilibrium in the armature circuit
formula_8
Eq. 3: Motor torque equation
formula_9
With the total armature inductance, formula_10, neglected, the transfer function can be obtained by solving eq. 3 with formula_11.
Eq. 4: Transfer function
formula_12
with the constants defined as below:
formula_13
formula_14
formula_15
formula_16
formula_17
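As a numerical illustration, the constants and the resulting transfer function can be evaluated for a given set of machine parameters. A minimal Python sketch (all parameter values below are hypothetical, chosen only to show the calculation):
```python
import numpy as np

# Hypothetical machine parameters (illustrative only)
R_gf, L_gf = 30.0, 15.0      # generator field resistance [ohm] and inductance [H]
R_ga, R_ma = 0.2, 0.2        # generator and motor armature resistances [ohm]
G_gfa, G_mfa = 1.2, 1.2      # speed-voltage (rotational inductance) coefficients
W_gr = 150.0                 # prime-mover (generator rotor) speed [rad/s]
V_mf, R_mf = 120.0, 60.0     # motor field voltage [V] and resistance [ohm]
J_m, D_m = 2.0, 0.05         # motor inertia [kg m^2] and viscous friction [N m s]

# Constants as defined above
K_B = G_mfa * V_mf / (R_mf * (R_ga + R_ma))
K_v = G_gfa * W_gr / R_gf
t_m = J_m / D_m
t_gf = L_gf / R_gf
K_m = D_m + K_B**2 * (R_ga + R_ma)

# W_m^r(s)/V_g^f(s) = (K_B K_v / D_m) / ((t_gf s + 1)(t_m s + K_m/D_m))
dc_gain = (K_B * K_v / D_m) / (K_m / D_m)            # steady-state speed per field volt
poles = np.array([-1.0 / t_gf, -(K_m / D_m) / t_m])  # both real and negative: stable
print(dc_gain, poles)
```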
See also.
References.
| [
{
"math_id": 0,
"text": "W_i"
},
{
"math_id": 1,
"text": "K"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "J"
},
{
"math_id": 4,
"text": "D"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "s"
},
{
"math_id": 7,
"text": "V_g^f = R_g^f I_g^f + L_g^f I_g^f"
},
{
"math_id": 8,
"text": "-G_g^fa I_g^f W_g^r + (R_g^a + R_m^a) I^a + (L_g^a + L_m^a) I^a + G_m^fa I_m^f W_m^r = 0"
},
{
"math_id": 9,
"text": "-T_L = J_m W_m^r + D_mW_m^r"
},
{
"math_id": 10,
"text": "L_g^a + L_m^a"
},
{
"math_id": 11,
"text": "T_L = 0"
},
{
"math_id": 12,
"text": "\\frac{W_m^r(S)}{V_g^f(S)} = \\cfrac{K_BK_v/D_m}{\\left(t_g^fs + 1\\right)\\left(t_ms + \\frac{K_m}{D_m}\\right)}"
},
{
"math_id": 13,
"text": "K_B = \\frac{G_m^fa V_m^f}{R_m^f(R_g^a + R_m^a)}"
},
{
"math_id": 14,
"text": "K_v = \\frac{G_g^fa W_g^r}{R_g^f}"
},
{
"math_id": 15,
"text": "t_m = \\frac{J_m}{D_m}"
},
{
"math_id": 16,
"text": "t_g^f = \\frac{L_g^f}{R_g^f}"
},
{
"math_id": 17,
"text": "K_m = D_m + K_B^2(R_g^a + R_m^a)"
}
]
| https://en.wikipedia.org/wiki?curid=13616862 |
13618652 | Hybrid-pi model | Model of electronic circuits involving transistors
Hybrid-pi is a popular circuit model used for analyzing the small-signal behavior of bipolar junction and field effect transistors. Sometimes it is also called the Giacoletto model because it was introduced by L.J. Giacoletto in 1969. The model can be quite accurate for low-frequency circuits and can easily be adapted for higher-frequency circuits with the addition of appropriate inter-electrode capacitances and other parasitic elements.
BJT parameters.
The hybrid-pi model is a linearized two-port network approximation to the BJT using the small-signal base-emitter voltage, formula_0, and collector-emitter voltage, formula_1, as independent variables, and the small-signal base current, formula_2, and collector current, formula_3, as dependent variables.
A basic, low-frequency hybrid-pi model for the bipolar transistor is shown in figure 1. The various parameters are as follows.
formula_4
is the transconductance, evaluated in a simple model, where:
where:
Related terms.
The "output conductance", "g"ce, is the reciprocal of the output resistance, "r"o:
formula_16.
The "transresistance", "r"m, is the reciprocal of the transconductance:
formula_17.
Full model.
The full model introduces the virtual terminal, B', so that the base spreading resistance, "r"bb, (the bulk resistance between the base contact and the active region of the base under the emitter) and "r"b′e (representing the base current required to make up for recombination of minority carriers in the base region) can be represented separately. "C"e is the diffusion capacitance representing minority carrier storage in the base. The feedback components, "r"b′c and "C"c, are introduced to represent the Early effect and Miller effect, respectively.
MOSFET parameters.
A basic, low-frequency hybrid-pi model for the MOSFET is shown in figure 2. The various parameters are as follows.
formula_18
is the transconductance, evaluated in the Shichman–Hodges model in terms of the Q-point drain current, formula_19:
formula_20,
where:
The combination:
formula_24
is often called "overdrive voltage".
formula_25
is the output resistance due to channel length modulation, calculated using the Shichman–Hodges model as
formula_26
using the approximation for the "channel length modulation" parameter, λ:
formula_27.
Here "VE" is a technology-related parameter (about 4 V/μm for the 65 nm technology node) and "L" is the length of the source-to-drain separation.
The "drain conductance" is the reciprocal of the output resistance:
formula_28.
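As a numerical illustration, the small-signal parameters above can be evaluated at a chosen operating point. A short Python sketch (the bias values and device parameters are illustrative assumptions, not from the text):
```python
k_B, q = 1.380649e-23, 1.602176634e-19

# BJT at I_C = 1 mA, beta_0 = 100, V_A = 80 V, T = 300 K
T, I_C, beta0, V_A = 300.0, 1e-3, 100.0, 80.0
V_T = k_B * T / q            # thermal voltage, about 25.9 mV
g_m = I_C / V_T              # transconductance
r_pi = beta0 / g_m           # small-signal input resistance
r_o = V_A / I_C              # output resistance (Early effect)
print(f"BJT: g_m = {g_m*1e3:.1f} mS, r_pi = {r_pi/1e3:.2f} kohm, r_o = {r_o/1e3:.0f} kohm")

# MOSFET at I_D = 0.5 mA, overdrive V_ov = 0.2 V, V_E = 4 V/um, L = 0.18 um
I_D, V_ov, V_E, L = 0.5e-3, 0.2, 4.0e6, 0.18e-6
g_m_fet = 2.0 * I_D / V_ov   # Shichman-Hodges transconductance
r_o_fet = V_E * L / I_D      # output resistance from channel-length modulation
print(f"MOSFET: g_m = {g_m_fet*1e3:.1f} mS, r_o = {r_o_fet/1e3:.2f} kohm")
```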
References and notes.
| [
{
"math_id": 0,
"text": "\\textstyle v_\\text{be}"
},
{
"math_id": 1,
"text": "\\textstyle v_\\text{ce}"
},
{
"math_id": 2,
"text": "\\textstyle i_\\text{b}"
},
{
"math_id": 3,
"text": "\\textstyle i_\\text{c}"
},
{
"math_id": 4,
"text": "g_\\text{m} = \\left.\\frac{i_\\text{c}}{v_\\text{be}}\\right\\vert_{v_\\text{ce} = 0} = \\frac{I_\\text{C}}{V_\\text{T}}"
},
{
"math_id": 5,
"text": "\\textstyle I_\\text{C} \\,"
},
{
"math_id": 6,
"text": "\\textstyle V_\\text{T} = \\frac{kT}{e}"
},
{
"math_id": 7,
"text": "\\textstyle k"
},
{
"math_id": 8,
"text": "\\textstyle e"
},
{
"math_id": 9,
"text": "\\textstyle T"
},
{
"math_id": 10,
"text": "\\textstyle V_\\text{T}"
},
{
"math_id": 11,
"text": "r_\\pi = \\left.\\frac{v_\\text{be}}{i_\\text{b}}\\right\\vert_{v_\\text{ce} = 0} = \\frac{V_\\text{T}}{I_\\text{B}} = \\frac{\\beta_0}{g_\\text{m}}"
},
{
"math_id": 12,
"text": "\\textstyle I_\\text{B}"
},
{
"math_id": 13,
"text": "\\textstyle \\beta_0 = \\frac{I_\\text{C}}{I_\\text{B}}"
},
{
"math_id": 14,
"text": "\\textstyle r_\\text{o} = \\left.\\frac{v_\\text{ce}}{i_\\text{c}}\\right\\vert_{v_\\text{be} = 0} ~=~ \\frac{1}{I_\\text{C}}\\left(V_\\text{A} + V_\\text{CE}\\right) ~\\approx~ \\frac{V_\\text{A}}{I_\\text{C}}"
},
{
"math_id": 15,
"text": "\\textstyle V_\\text{A}"
},
{
"math_id": 16,
"text": "g_\\text{ce} = \\frac{1}{r_\\text{o}}"
},
{
"math_id": 17,
"text": "r_\\text{m} = \\frac{1}{g_\\text{m}}"
},
{
"math_id": 18,
"text": "g_\\text{m} = \\left.\\frac{i_\\text{d}}{v_\\text{gs}}\\right\\vert_{v_\\text{ds} = 0}"
},
{
"math_id": 19,
"text": "\\scriptstyle I_\\text{D}"
},
{
"math_id": 20,
"text": "g_\\text{m} = \\frac{2I_\\text{D}}{V_{\\text{GS}} - V_\\text{th}}"
},
{
"math_id": 21,
"text": "\\scriptstyle I_\\text{D} "
},
{
"math_id": 22,
"text": "\\scriptstyle V_\\text{th}"
},
{
"math_id": 23,
"text": "\\scriptstyle V_\\text{GS}"
},
{
"math_id": 24,
"text": "V_\\text{ov} = V_\\text{GS} - V_\\text{th}"
},
{
"math_id": 25,
"text": "r_\\text{o} = \\left.\\frac{v_\\text{ds}}{i_\\text{d}}\\right\\vert_{v_\\text{gs} = 0}"
},
{
"math_id": 26,
"text": "\\begin{align}\n r_\\text{o} &= \\frac{1}{I_\\text{D}}\\left(\\frac{1}{\\lambda} + V_\\text{DS}\\right) \\\\\n &= \\frac{1}{I_\\text{D}}\\left(V_E L + V_\\text{DS}\\right) \\approx \\frac{V_E L}{I_\\text{D}}\n\\end{align}"
},
{
"math_id": 27,
"text": " \\lambda = \\frac{1}{V_E L} "
},
{
"math_id": 28,
"text": "g_\\text{ds} = \\frac{1}{r_\\text{o}} "
}
]
| https://en.wikipedia.org/wiki?curid=13618652 |
1361980 | Integrally closed | In mathematics, more specifically in abstract algebra, the concept of integrally closed has three meanings:
Topics referred to by the same term. This page lists mathematics articles associated with the same title.
{
"math_id": 0,
"text": "R"
},
{
"math_id": 1,
"text": "S"
}
]
| https://en.wikipedia.org/wiki?curid=1361980 |
13620523 | Cauchy–Hadamard theorem | A theorem that determines the radius of convergence of a power series.
In mathematics, the Cauchy–Hadamard theorem is a result in complex analysis named after the French mathematicians Augustin Louis Cauchy and Jacques Hadamard, describing the radius of convergence of a power series. It was published in 1821 by Cauchy, but remained relatively unknown until Hadamard rediscovered it. Hadamard's first publication of this result was in 1888; he also included it as part of his 1892 Ph.D. thesis.
Theorem for one complex variable.
Consider the formal power series in one complex variable "z" of the form
formula_0
where formula_1
Then the radius of convergence formula_2 of "f" at the point "a" is given by
formula_3
where lim sup denotes the limit superior, the limit as n approaches infinity of the supremum of the sequence values after the "n"th position. If the sequence values are unbounded so that the lim sup is ∞, then the power series does not converge near "a", while if the lim sup is 0 then the radius of convergence is ∞, meaning that the series converges on the entire plane.
Proof.
Without loss of generality assume that formula_4. We will show first that the power series formula_5 converges for formula_6, and then that it diverges for formula_7.
First suppose formula_6. Let formula_8, assumed to be neither formula_9 nor formula_10
For any formula_11, there exist only finitely many formula_12 such that formula_13.
Hence formula_14 for all but finitely many formula_12, so the series formula_5 converges whenever formula_16. Since formula_11 was arbitrary, the series converges for every formula_6. This proves the first part.
Conversely, for formula_11, formula_17 holds for infinitely many formula_12, so if formula_18, the series cannot converge because its "n"th term does not tend to 0.
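The formula can be illustrated numerically by estimating the lim sup from a long but finite run of coefficients. A minimal sketch in Python, using the illustrative coefficients c_n = 2^n/(n+1), whose radius of convergence is R = 1/2:
```python
import numpy as np

# Estimate 1/R = limsup |c_n|^(1/n) for c_n = 2^n / (n+1); here R = 1/2 exactly.
N = 5000
n = np.arange(1, N + 1)
log_c = n * np.log(2.0) - np.log(n + 1.0)   # log|c_n|, computed in logs to avoid overflow
roots = np.exp(log_c / n)                   # |c_n|^(1/n)
limsup_estimate = roots[-100:].max()        # tail supremum approximates the lim sup
print(1.0 / limsup_estimate)                # close to 0.5
```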
Theorem for several complex variables.
Let formula_19 be an "n"-dimensional vector of natural numbers (formula_20) with formula_21, and consider the multidimensional power series
formula_26
Then formula_22 converges with radius of convergence formula_23 (where formula_24) if and only if
formula_25
Proof.
Set formula_27 formula_28; then
formula_29
This is a power series in one variable formula_30 which converges for formula_31 and diverges for formula_32. Therefore, by the Cauchy-Hadamard theorem for one variable
formula_33
Setting formula_34 gives us an estimate
formula_35
Because formula_36 as formula_37
formula_38
Therefore
formula_39
Notes.
| [
{
"math_id": 0,
"text": "f(z) = \\sum_{n = 0}^{\\infty} c_{n} (z-a)^{n}"
},
{
"math_id": 1,
"text": "a, c_n \\in \\Complex."
},
{
"math_id": 2,
"text": "R"
},
{
"math_id": 3,
"text": "\\frac{1}{R} = \\limsup_{n \\to \\infty} \\left( | c_{n} |^{1/n} \\right)"
},
{
"math_id": 4,
"text": "a=0"
},
{
"math_id": 5,
"text": "\\sum_n c_n z^n"
},
{
"math_id": 6,
"text": "|z|<R"
},
{
"math_id": 7,
"text": "|z|>R"
},
{
"math_id": 8,
"text": "t=1/R"
},
{
"math_id": 9,
"text": "0"
},
{
"math_id": 10,
"text": "\\pm\\infty."
},
{
"math_id": 11,
"text": "\\varepsilon > 0"
},
{
"math_id": 12,
"text": "n"
},
{
"math_id": 13,
"text": "\\sqrt[n]{|c_n|} \\geq t+\\varepsilon"
},
{
"math_id": 14,
"text": "|c_n| \\leq (t+\\varepsilon)^n"
},
{
"math_id": 15,
"text": "c_n"
},
{
"math_id": 16,
"text": "|z| < 1/(t+\\varepsilon)"
},
{
"math_id": 17,
"text": "|c_n|\\geq (t-\\varepsilon)^n"
},
{
"math_id": 18,
"text": "|z|=1/(t-\\varepsilon) > R"
},
{
"math_id": 19,
"text": "\\alpha"
},
{
"math_id": 20,
"text": "\\alpha = (\\alpha_1, \\cdots, \\alpha_n) \\in \\N^n"
},
{
"math_id": 21,
"text": "||\\alpha|| = \\alpha_1 + \\cdots + \\alpha_n"
},
{
"math_id": 22,
"text": "f(x)"
},
{
"math_id": 23,
"text": "\\rho = (\\rho_1, \\cdots, \\rho_n) \\in \\R^n"
},
{
"math_id": 24,
"text": "\\rho^\\alpha = \\rho_1^{\\alpha_1} \\cdots \\rho_n^{\\alpha_n}"
},
{
"math_id": 25,
"text": "\\limsup_{||\\alpha||\\to\\infty} \\sqrt[||\\alpha||]{|c_\\alpha|\\rho^\\alpha}=1"
},
{
"math_id": 26,
"text": "\\sum_{\\alpha\\geq0}c_\\alpha(z-a)^\\alpha := \\sum_{\\alpha_1\\geq0,\\ldots,\\alpha_n\\geq0}c_{\\alpha_1,\\ldots,\\alpha_n}(z_1-a_1)^{\\alpha_1}\\cdots(z_n-a_n)^{\\alpha_n}"
},
{
"math_id": 27,
"text": "z = a + t\\rho"
},
{
"math_id": 28,
"text": "(z_i = a_i + t\\rho_i)"
},
{
"math_id": 29,
"text": "\\sum_{\\alpha \\geq 0} c_\\alpha (z - a)^\\alpha = \\sum_{\\alpha \\geq 0} c_\\alpha \\rho^\\alpha t^{||\\alpha||} = \\sum_{\\mu \\geq 0} \\left( \\sum_{||\\alpha|| = \\mu} |c_\\alpha| \\rho^\\alpha \\right) t^\\mu"
},
{
"math_id": 30,
"text": "t"
},
{
"math_id": 31,
"text": "|t| < 1"
},
{
"math_id": 32,
"text": "|t| > 1"
},
{
"math_id": 33,
"text": "\\limsup_{\\mu \\to \\infty} \\sqrt[\\mu]{\\sum_{||\\alpha|| = \\mu} |c_\\alpha| \\rho^\\alpha} = 1"
},
{
"math_id": 34,
"text": "|c_m| \\rho^m = \\max_{||\\alpha|| = \\mu} |c_\\alpha| \\rho^\\alpha"
},
{
"math_id": 35,
"text": "|c_m| \\rho^m \\leq \\sum_{||\\alpha|| = \\mu} |c_\\alpha| \\rho^\\alpha \\leq (\\mu + 1)^n |c_m| \\rho^m"
},
{
"math_id": 36,
"text": "\\sqrt[\\mu]{(\\mu + 1)^n} \\to 1"
},
{
"math_id": 37,
"text": "\\mu \\to \\infty"
},
{
"math_id": 38,
"text": "\\sqrt[\\mu]{|c_m| \\rho^m} \\leq \\sqrt[\\mu]{\\sum_{||\\alpha|| = \\mu} |c_\\alpha| \\rho^\\alpha} \\leq \\sqrt[\\mu]{|c_m| \\rho^m} \\implies \\sqrt[\\mu]{\\sum_{||\\alpha|| = \\mu} |c_\\alpha| \\rho^\\alpha} = \\sqrt[\\mu]{|c_m| \\rho^m} \\qquad (\\mu \\to \\infty)"
},
{
"math_id": 39,
"text": "\\limsup_{||\\alpha||\\to\\infty} \\sqrt[||\\alpha||]{|c_\\alpha|\\rho^\\alpha} = \\limsup_{\\mu \\to \\infty} \\sqrt[\\mu]{|c_m| \\rho^m} = 1"
}
]
| https://en.wikipedia.org/wiki?curid=13620523 |
13622736 | First Hurwitz triplet | In the mathematical theory of Riemann surfaces, the first Hurwitz triplet is a triple of distinct Hurwitz surfaces with the identical automorphism group of the lowest possible genus, namely 14 (genera 3 and 7 each admit a unique Hurwitz surface, respectively the Klein quartic and the Macbeath surface). The explanation for this phenomenon is arithmetic. Namely, in the ring of integers of the appropriate number field, the rational prime 13 splits as a product of three distinct prime ideals. The principal congruence subgroups defined by the triplet of primes produce Fuchsian groups corresponding to the triplet of Riemann surfaces.
Arithmetic construction.
Let formula_0 be the real subfield of formula_1 where formula_2 is a primitive 7th root of unity.
The ring of integers of "K" is formula_3, where formula_4. Let formula_5 be the quaternion algebra, or symbol algebra formula_6. Also Let formula_7 and formula_8. Let formula_9. Then formula_10 is a maximal order of formula_5 (see Hurwitz quaternion order), described explicitly by Noam Elkies [1].
In order to construct the first Hurwitz triplet, consider the prime decomposition of 13 in formula_3, namely
formula_11
where formula_12 is invertible. Also consider the prime ideals generated by the non-invertible factors. The principal congruence subgroup defined by such a prime ideal "I" is by definition the group
formula_13
namely, the group of elements of reduced norm 1 in formula_10 congruent to 1 modulo the ideal formula_14. The corresponding Fuchsian group is obtained as the image of the principal congruence subgroup under a representation to PSL(2,R).
Each of the three Riemann surfaces in the first Hurwitz triplet can be formed as a Fuchsian model, the quotient of the hyperbolic plane by one of these three Fuchsian groups.
Bound for systolic length and the systolic ratio.
The Gauss–Bonnet theorem states that
formula_15
where formula_16 is the Euler characteristic of the surface and formula_17 is the Gaussian curvature. In the case formula_18 we have
formula_19 and formula_20
thus we obtain that the area of these surfaces is
formula_21.
The lower bound on the systole as specified in [2], namely
formula_22
is 3.5187.
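Both numbers quoted here follow from one-line computations; a minimal check in Python:
```python
import math

g = 14
chi = 2 - 2 * g                            # Euler characteristic: -26
area = -2 * math.pi * chi                  # Gauss-Bonnet with K = -1: area = 52*pi
systole_bound = (4.0 / 3.0) * math.log(g)  # lower bound (4/3) log g
print(chi, area / math.pi, systole_bound)  # -26, 52.0, 3.5187...
```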
Some specific details about each of the surfaces are presented in the following tables (the number of systolic loops is taken from [3]). The term Systolic Trace refers to the least reduced trace of an element in the corresponding subgroup formula_23. The systolic ratio is the ratio of the square of the systole to the area. | [
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "\\mathbb{Q}[\\rho]"
},
{
"math_id": 2,
"text": "\\rho"
},
{
"math_id": 3,
"text": "\\mathbb{Z}[\\eta]"
},
{
"math_id": 4,
"text": "\\eta=2\\cos(\\tfrac{2\\pi}{7})"
},
{
"math_id": 5,
"text": "D"
},
{
"math_id": 6,
"text": "(\\eta,\\eta)_{K}"
},
{
"math_id": 7,
"text": "\\tau=1+\\eta+\\eta^2"
},
{
"math_id": 8,
"text": "j'=\\tfrac{1}{2}(1+\\eta i + \\tau j)"
},
{
"math_id": 9,
"text": "\\mathcal{Q}_\\mathrm{Hur}=\\mathbb{Z}[\\eta][i,j,j']"
},
{
"math_id": 10,
"text": "\\mathcal{Q}_\\mathrm{Hur}"
},
{
"math_id": 11,
"text": "13=\\eta (\\eta +2)(2\\eta-1)(3-2\\eta)(\\eta+3),"
},
{
"math_id": 12,
"text": "\\eta (\\eta+2)"
},
{
"math_id": 13,
"text": "\\mathcal{Q}^1_\\mathrm{Hur}(I) = \\{x \\in \\mathcal{Q}_\\mathrm{Hur}^1 : x \\equiv 1 \\pmod{I\\mathcal{Q}_\\mathrm{Hur}}\\},"
},
{
"math_id": 14,
"text": "I\\mathcal{Q}_{\\mathrm Hur}"
},
{
"math_id": 15,
"text": "\\chi(\\Sigma)=\\frac{1}{2\\pi} \\int_{\\Sigma} K(u)\\,dA,"
},
{
"math_id": 16,
"text": "\\chi(\\Sigma)"
},
{
"math_id": 17,
"text": "K(u)"
},
{
"math_id": 18,
"text": "g=14"
},
{
"math_id": 19,
"text": "\\chi(\\Sigma)=-26"
},
{
"math_id": 20,
"text": "K(u)=-1,"
},
{
"math_id": 21,
"text": "52\\pi"
},
{
"math_id": 22,
"text": "\\frac{4}{3} \\log(g(\\Sigma)),"
},
{
"math_id": 23,
"text": "\\mathcal{Q}^1_{Hur}(I)"
}
]
| https://en.wikipedia.org/wiki?curid=13622736 |
13623676 | Mohsen Hashtroodi | Iranian mathematician (1908–1976)
Mohsen Hashtroodi (Hachtroudi) (; also romanized as Mohsen Hashtrūdi; December 13, 1908, Tabriz – September 4, 1976, Tehran) was a prominent Iranian mathematician, known as "Professor Hashtroodi (Hashtroudi)". His father, Shaikh Esmāeel Mojtahed was an advisor to Shaikh Mohammad Khiābāni, who played a significant role in the establishment of the parliamentary democracy in Iran during and after the Iranian Constitutional Revolution.
Mohsen Hashtroodi attended "Sirus" and "Aghdasieh" primary schools in Tehran and subsequently studied at the élite school of "Dar ol-Fonoon", also in Tehran, from where he graduated in 1925. He obtained his doctoral degree in mathematics in 1936 as a student of Élie Cartan in France. His doctoral dissertation ("Sur les espaces d'éléments à connexion projective normale") was on differential geometry. By significantly generalizing the work of Cartan to the case of hypersurfaces in formula_0, he constructed a projective connection, known as the "Hachtroudi Connection", used in studying systems of differential equations. His subsequent research involved using intrinsically defined affine and Weylian connections to study the invariants of differential systems relative to different groups of transformations. He was a Distinguished Professor at the University of Tabriz and the University of Tehran. One of the prizes of the Iranian Mathematical Society is named after Professor Hashtroodi.
Mohsen Hashtroodi married Robāb Modiri in 1944. They had two daughters, Farānak and Faribā, and one son, Rāmin.
Professor Hashtroodi is buried in the Behesht-e Zahra cemetery in Tehran.
Notes and references.
Hachtroudi, Mohsen (1937) Les espaces d'éléments à connexion projective normale.
Thèse de doctorat, Université de Paris. | [
{
"math_id": 0,
"text": "\\mathbb{R}^n"
}
]
| https://en.wikipedia.org/wiki?curid=13623676 |
1362378 | Bubble ring | Toroidal vortex ring of air in water
A bubble ring, or toroidal bubble, is an underwater vortex ring where an air bubble occupies the core of the vortex, forming a ring shape. The ring of air as well as the nearby water spins poloidally as it travels through the water, much like a flexible bracelet might spin when it is rolled on to a person's arm. The faster the bubble ring spins, the more stable it becomes. The physics of vortex rings are still under active study in fluid dynamics. Devices have been invented which generate bubble vortex rings.
Physics.
As the bubble ring rises, a lift force pointing downward that is generated by the vorticity acts on the bubble in order to counteract the buoyancy force. This reduces the bubble's velocity and increases its diameter. The ring becomes thinner, despite the total volume inside the bubble increasing as the external water pressure decreases. Bubble rings fragment into rings of spherical bubbles when the ring becomes thinner than a few millimetres. This is due to Plateau–Rayleigh instability. When the bubble reaches a certain thickness, surface tension effects distort the bubble's surface pulling it apart into separate bubbles. Circulation of the fluid around the bubble helps to stabilize the bubble for a longer duration, counteracting the effects of Plateau–Rayleigh instability. Below is the equation for Plateau–Rayleigh instability with circulation as a stabilizing term:
formula_0
where formula_1 is the growth rate, formula_2 is the wave number, formula_3 is the radius of the bubble cylinder, formula_4 is the surface tension, formula_5 is the circulation, and formula_6 is the modified Bessel function of the second kind of order formula_7. When formula_1 is positive, the bubble is stable due to circulation and when formula_1 is negative, surface tension effects destabilize it and break it up. Circulation also has an effect on the velocity and radial expansion of the bubble. Circulation increases the velocity while reducing the rate of radial expansion. Radial expansion however is what diffuses energy by stretching the vortex. Instability happens more quickly in turbulent water, but in calm water, divers can achieve an external diameter of a meter or more before the bubble fragments.
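The dispersion relation above can be evaluated directly with SciPy's modified Bessel functions. A minimal sketch in Python (the symbol written as p in the equation is taken to be the liquid density; the numerical values for the core radius, surface tension, density and circulation are illustrative, not from the text):
```python
import numpy as np
from scipy.special import kn   # modified Bessel function of the second kind, integer order

def omega_squared(k, a, T, Gamma, rho):
    """Right-hand side of the dispersion relation; positive values mean a stable, oscillating core."""
    ka = k * a
    prefactor = -ka * kn(1, ka) / kn(0, ka)
    return prefactor * ((1.0 - ka**2) * T / (rho * a**3)
                        - Gamma**2 / (4.0 * np.pi**2 * a**4))

a, T, rho = 2e-3, 0.072, 1000.0           # core radius [m], surface tension [N/m], water density [kg/m^3]
k = np.array([100.0, 300.0, 600.0])       # wave numbers [1/m], so ka = 0.2, 0.6, 1.2
print(omega_squared(k, a, T, Gamma=0.0, rho=rho))    # negative for ka < 1: Plateau-Rayleigh instability
print(omega_squared(k, a, T, Gamma=5e-3, rho=rho))   # circulation pushes omega^2 positive (stabilizing)
```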
Buoyancy induced toroidal bubbles.
As an air bubble rises, there is a difference in pressure between the top and bottom of the bubble. The higher pressure at the bottom of the bubble pushes the bubble's bottom surface up faster than the top surface rises. This creates a fluid jet that moves up through the center of the bubble. If the fluid jet has enough energy, it will puncture the top of the bubble and create a bubble ring. Because of the motion of the fluid moving through the center of the bubble, the bubble begins to rotate. This rotation moves the fluid around the bubble creating a toroidal vortex. If the surface tension of the fluid interface or the viscosity of the liquid is too high, then the liquid jet will be more broad and will not penetrate the top of the bubble. This results in a spherical cap bubble. Air bubbles with a diameter greater than about two centimeters become toroidal in shape due to the pressure differences.
Cavitation bubbles.
Cavitation bubbles, when near a solid surface, can also become a torus. The area away from the surface has an increased static pressure causing a high pressure jet to develop. This jet is directed towards the solid surface and breaks through the bubble to form a torus shaped bubble for a short period of time. This generates multiple shock waves that can damage the surface.
Cetaceans.
Cetaceans, such as beluga whales, dolphins and humpback whales, blow bubble rings. Dolphins sometimes engage in complex play behaviours, creating bubble rings on purpose, seemingly for amusement. There are two main methods of bubble ring production: rapid puffing of a burst of air into the water and allowing it to rise to the surface, forming a ring; or creating a toroidal vortex with their flukes and injecting a bubble into the helical vortex currents thus formed. The dolphin will often then examine its creation visually and with sonar. They will sometimes play with the bubbles, distorting the bubble rings, breaking smaller bubble rings off of the original or splitting the original ring into two separate rings using their beak. They also appear to enjoy biting the vortex-rings they have created, so that they burst into many separate normal bubbles and then rise quickly to the surface. Dolphins also have the ability to form bubble rings with their flukes by using the reservoir of air at the surface.
Humpback whales use another type of bubble ring when they forage for fish. They surround a school of forage fish with a circular bubble net and herd them into a bait ball.
Human divers.
Some scuba divers and freedivers can create bubble rings by blowing air out of their mouth in a particular manner. Long bubble rings also can form spontaneously in turbulent water such as heavy surf.
Other uses of the term.
The term "bubble ring" is also used in other contexts. A common children's toy for blowing soap bubbles is called a bubble ring, and replaces the bubble pipe toy that was traditionally used for many years because the bubble pipe can be perceived as too reminiscent of smoking and therefore a bad example for children. Soapsuds are suspended on a ring connected by a stem to the screwcap of a bottle containing soapsuds.
References.
| [
{
"math_id": 0,
"text": " \\omega^2= \\left ( \\frac{-ka \\, K_1(ka)}{K_0(ka)} \\right ) \\left [ (1-k^2 a^2) \\frac{T}{pa^3} - \\frac{\\Gamma^2}{4\\pi^2 a^4} \\right ] "
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "\\Gamma"
},
{
"math_id": 6,
"text": "K_n(x)"
},
{
"math_id": 7,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=1362378 |
13624160 | Churchill–Bernstein equation | In convective heat transfer, the Churchill–Bernstein equation is used to estimate the surface averaged Nusselt number for a cylinder in cross flow at various velocities. The need for the equation arises from the inability to solve the Navier–Stokes equations in the turbulent flow regime, even for a Newtonian fluid. When the concentration and temperature profiles are independent of one another, the mass-heat transfer analogy can be employed. In the mass-heat transfer analogy, heat transfer dimensionless quantities are replaced with analogous mass transfer dimensionless quantities.
This equation is named after Stuart W. Churchill and M. Bernstein, who introduced it in 1977. This equation is also called the Churchill–Bernstein correlation.
formula_0
Heat transfer definition.
where:
The Churchill–Bernstein equation is valid for a wide range of Reynolds numbers and Prandtl numbers, as long as the product of the two is greater than or equal to 0.2, as defined above. The Churchill–Bernstein equation can be used for any object of cylindrical geometry in which boundary layers develop freely, without constraints imposed by other surfaces. Properties of the external free stream fluid are to be evaluated at the film temperature in order to account for the variation of the fluid properties at different temperatures. One should not expect much more than 20% accuracy from the above equation due to the wide range of flow conditions that the equation encompasses. The Churchill–Bernstein equation is a correlation and cannot be derived from principles of fluid dynamics. The equation yields the surface averaged Nusselt number, which is used to determine the average convective heat transfer coefficient. Newton's law of cooling (in the form of heat loss per surface area being equal to the heat transfer coefficient multiplied by the temperature difference between the surface and the fluid) can then be invoked to determine the heat loss or gain from the object, fluid and/or surface temperatures, and the area of the object, depending on what information is known.
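A direct implementation of the correlation is straightforward; a minimal Python sketch (the Reynolds and Prandtl numbers used in the example are illustrative):
```python
def churchill_bernstein(Re, Pr):
    """Surface-averaged Nusselt number for a cylinder in cross flow; requires Re*Pr >= 0.2."""
    if Re * Pr < 0.2:
        raise ValueError("correlation requires Re*Pr >= 0.2")
    return 0.3 + (0.62 * Re**0.5 * Pr**(1.0 / 3.0)
                  / (1.0 + (0.4 / Pr)**(2.0 / 3.0))**0.25
                  * (1.0 + (Re / 282000.0)**(5.0 / 8.0))**0.8)

# Example: air (Pr ~ 0.71) at Re = 4000; h then follows from Nu = h*D/k once k and D are known
Nu = churchill_bernstein(4000.0, 0.71)
print(Nu)
```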
formula_4
Mass transfer definition.
where:
Using the mass-heat transfer analogy, the Nusselt number is replaced by the Sherwood number, and the Prandtl number is replaced by the Schmidt number. The same restrictions described in the heat transfer definition are applied to the mass transfer definition. The Sherwood number can be used to find an overall mass transfer coefficient and applied to Fick's law of diffusion to find concentration profiles and mass transfer fluxes. | [
{
"math_id": 0,
"text": "\\overline{\\mathrm{Nu}}_D \\ = 0.3 + \\frac{0.62\\mathrm{Re}_D^{1/2}\\Pr^{1/3}}{\\left[1 + (0.4/\\Pr)^{2/3} \\, \\right]^{1/4} \\,}\\bigg[1 + \\bigg(\\frac{\\mathrm{Re}_D}{282000} \\bigg)^{5/8}\\bigg]^{4/5} \\quad\n\\Pr\\mathrm{Re}_D \\ge 0.2 "
},
{
"math_id": 1,
"text": "\\overline{\\mathrm{Nu}}_D"
},
{
"math_id": 2,
"text": "\\mathrm{Re}_D\\,\\!"
},
{
"math_id": 3,
"text": "\\Pr"
},
{
"math_id": 4,
"text": "\\mathrm{Sh}_D = 0.3 + \\frac{0.62\\mathrm{Re}_D^{1/2}\\mathrm{Sc}^{1/3}}{\\left[1 + (0.4/\\mathrm{Sc})^{2/3} \\, \\right]^{1/4} \\,}\\bigg[1 + \\bigg(\\frac{\\mathrm{Re}_D}{282000} \\bigg)^{5/8}\\bigg]^{4/5} \\quad\n\\mathrm{Sc}\\,\\mathrm{Re}_D \\ge 0.2 "
},
{
"math_id": 5,
"text": "\\mathrm{Sh}_D"
},
{
"math_id": 6,
"text": "\\mathrm{Sc}"
}
]
| https://en.wikipedia.org/wiki?curid=13624160 |
1362465 | Classical electron radius | Physical constant providing length scale to interatomic interactions
The classical electron radius is a combination of fundamental physical quantities that define a length scale for problems involving an electron interacting with electromagnetic radiation. It links the classical electrostatic self-interaction energy of a homogeneous charge distribution to the electron's relativistic mass-energy. According to modern understanding, the electron is a point particle with a point charge and no spatial extent. Nevertheless, it is useful to define a length that characterizes electron interactions in atomic-scale problems. The classical electron radius is given as
formula_0
where formula_1 is the elementary charge, formula_2 is the electron mass, formula_3 is the speed of light, and formula_4 is the permittivity of free space. This numerical value is several times larger than the radius of the proton.
In cgs units, the permittivity factor and formula_5 do not enter, but the classical electron radius has the same value.
The classical electron radius is sometimes known as the Lorentz radius or the Thomson scattering length. It is one of a trio of related scales of length, the other two being the Bohr radius formula_6 and the reduced Compton wavelength of the electron "ƛ"e. Any one of these three length scales can be written in terms of any other using the fine-structure constant formula_7:
formula_8 "ƛ"e formula_9
Derivation.
The classical electron radius length scale can be motivated by considering the energy necessary to assemble an amount of charge formula_10 into a sphere of a given radius formula_11. The electrostatic potential at a distance formula_11 from a charge formula_10 is
formula_12.
To bring an additional amount of charge formula_13 from infinity necessitates putting energy into the system, formula_14, by an amount
formula_15.
If the sphere is "assumed" to have constant charge density, formula_16, then
formula_17 and formula_18.
Integrating for formula_11 from zero to the final radius formula_11 yields the expression for the total energy formula_19, necessary to assemble the total charge formula_10 into a uniform sphere of radius formula_11:
formula_20.
This is called the electrostatic self-energy of the object. The charge formula_10 is now interpreted as the electron charge, formula_1, and the energy formula_19 is set equal to the relativistic mass–energy of the electron, formula_21, and the numerical factor 3/5 is ignored as being specific to the special case of a uniform charge density. The radius formula_11 is then "defined" to be the classical electron radius, formula_22, and one arrives at the expression given above.
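A quick numerical check of the resulting expression, with the constants typed in by hand (a sketch for illustration only):
```python
import math

e = 1.602176634e-19       # elementary charge [C]
m_e = 9.1093837015e-31    # electron mass [kg]
c = 2.99792458e8          # speed of light [m/s]
eps0 = 8.8541878128e-12   # vacuum permittivity [F/m]

r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)
print(r_e)                # about 2.818e-15 m

# Keeping the 3/5 factor of the uniform-sphere model instead would give (3/5) r_e
print(0.6 * r_e)
```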
Note that this derivation does not say that formula_22 is the actual radius of an electron. It only establishes a dimensional link between electrostatic self energy and the mass–energy scale of the electron.
Discussion.
The classical electron radius appears in the classical limit of modern theories as well, including non-relativistic Thomson scattering and the relativistic Klein–Nishina formula. Also, formula_22 is roughly the length scale at which renormalization becomes important in quantum electrodynamics. That is, at short-enough distances, quantum fluctuations within the vacuum of space surrounding an electron begin to have calculable effects that have measurable consequences in atomic and particle physics.
Attempts to model the electron as a non-point particle, based on the assumption of a simple mechanical model, have been described by some as ill-conceived and counter-pedagogic.
{
"math_id": 0,
"text": "r_\\text{e} = \\frac{1}{4\\pi\\varepsilon_0}\\frac{e^2}{m_{\\text{e}} c^2} = 2.817 940 3227(19) \\times 10^{-15} \\text{ m} = 2.817 940 3227(19) \\text{ fm} ,"
},
{
"math_id": 1,
"text": "e"
},
{
"math_id": 2,
"text": "m_{\\text{e}}"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "\\varepsilon_0"
},
{
"math_id": 5,
"text": " \\frac{1}{4\\pi} "
},
{
"math_id": 6,
"text": "a_0"
},
{
"math_id": 7,
"text": "\\alpha"
},
{
"math_id": 8,
"text": "r_\\text{e} = "
},
{
"math_id": 9,
"text": "\\alpha}={a_0 \\alpha^2."
},
{
"math_id": 10,
"text": "q"
},
{
"math_id": 11,
"text": "r"
},
{
"math_id": 12,
"text": "V(r) = \\frac{1}{4\\pi\\varepsilon_0}\\frac{q}{r}"
},
{
"math_id": 13,
"text": "dq"
},
{
"math_id": 14,
"text": "dU"
},
{
"math_id": 15,
"text": "dU = V(r) dq "
},
{
"math_id": 16,
"text": "\\rho"
},
{
"math_id": 17,
"text": "q = \\rho \\frac{4}{3} \\pi r^3"
},
{
"math_id": 18,
"text": "dq = \\rho 4 \\pi r^2 dr"
},
{
"math_id": 19,
"text": "U"
},
{
"math_id": 20,
"text": "U = \\frac{1}{4\\pi\\varepsilon_0} \\frac{3}{5} \\frac{q^2}{r}"
},
{
"math_id": 21,
"text": "m c^2"
},
{
"math_id": 22,
"text": "r_\\text{e}"
}
]
| https://en.wikipedia.org/wiki?curid=1362465 |
13625345 | Schrödinger field | Physical fields obeying the Schrödinger equation
In quantum mechanics and quantum field theory, a Schrödinger field, named after Erwin Schrödinger, is a quantum field which obeys the Schrödinger equation. While any situation described by a Schrödinger field can also be described by a many-body Schrödinger equation for identical particles, the field theory is more suitable for situations where the particle number changes.
A Schrödinger field is also the classical limit of a quantum Schrödinger field, a classical wave which satisfies the Schrödinger equation. Unlike the quantum mechanical wavefunction, if there are interactions between the particles the equation will be nonlinear. These nonlinear equations describe the classical wave limit of a system of interacting identical particles.
The path integral of a Schrödinger field is also known as a coherent state path integral, because the field itself is an annihilation operator whose eigenstates can be thought of as coherent states of the harmonic oscillations of the field modes.
Schrödinger fields are useful for describing Bose–Einstein condensation, the Bogolyubov–de Gennes equation of superconductivity, superfluidity, and many-body theory in general. They are also a useful alternative formalism for nonrelativistic quantum mechanics.
A Schrödinger field is the nonrelativistic limit of a Klein–Gordon field.
Summary.
A Schrödinger field is a quantum field whose quanta obey the Schrödinger equation. In the classical limit, it can be understood as the classical wave equation of a Bose–Einstein condensate or a superfluid.
Free field.
A Schrödinger field has the free field Lagrangian
formula_0
When formula_1 is a complex valued field in a path integral, or equivalently an operator with canonical commutation relations, it describes a collection of identical non-relativistic bosons. When formula_1 is a Grassmann valued field, or equivalently an operator with canonical anti-commutation relations, the field describes identical fermions.
External potential.
If the particles interact with an external potential formula_2, the interaction makes a local contribution to the action:
formula_3
The field operators obey the Euler–Lagrange equations of motion, corresponding to the Schrödinger field Lagrangian density:
formula_4
Yielding the Schrödinger equations of motion:
formula_5
formula_6
If the ordinary Schrödinger equation for "V" has known energy eigenstates formula_7 with energies formula_8, then the field in the action can be rotated into a diagonal basis by a mode expansion:
formula_9
The action becomes:
formula_10
which is the position-momentum path integral for a collection of independent Harmonic oscillators.
To see the equivalence, note that decomposed into real and imaginary parts the action is:
formula_11
after an integration by parts. Integrating over formula_12 gives the action
formula_13
which, rescaling formula_14, is a harmonic oscillator action with frequency formula_8.
Pair potential.
When the particles interact with a pair potential formula_15, the interaction is a nonlocal contribution to the action:
formula_16
A pair-potential is the non-relativistic limit of a relativistic field coupled to electrodynamics. Ignoring the propagating degrees of freedom, the interaction between nonrelativistic electrons is the Coulomb repulsion. In 3+1 dimensions, this is:
formula_17
When coupled to an external potential to model classical positions of nuclei, a Schrödinger field with this pair potential describes nearly all of condensed matter physics. The exceptions are effects like superfluidity, where the quantum mechanical interference of nuclei is important, and inner shell electrons where the electron motion can be relativistic.
Nonlinear Schrödinger equation.
A special case of a delta-function interaction formula_18 is widely studied, and is known as the nonlinear Schrödinger equation. Because the interactions always happen when two particles occupy the same point, the action for the nonlinear Schrödinger equation is local:
formula_19
The interaction strength formula_20 requires renormalization in dimensions higher than 2; in two dimensions the divergence is logarithmic. In any dimension, and even with a power-law divergence, the theory is well defined. If the particles are fermions, the interaction vanishes.
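In the classical limit the corresponding field equation is the cubic nonlinear Schrödinger equation for a c-number field, which can be integrated numerically by the split-step Fourier method. A minimal one-dimensional sketch in Python (the normalization i dpsi/dt = -(1/2) d2psi/dx2 + g|psi|^2 psi, the sign and size of g, and all grid parameters are illustrative conventions, not taken from the action above):
```python
import numpy as np

L, n_x, dt, steps, g = 40.0, 512, 1e-3, 2000, -1.0
x = np.linspace(-L / 2, L / 2, n_x, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(n_x, d=L / n_x)

psi = 1.0 / np.cosh(x)                  # bright-soliton initial data for g = -1
kinetic = np.exp(-0.5j * k**2 * dt)     # exact propagator of i dpsi/dt = -(1/2) d2psi/dx2
for _ in range(steps):
    psi = np.fft.ifft(kinetic * np.fft.fft(psi))
    psi *= np.exp(-1j * g * np.abs(psi)**2 * dt)   # exact pointwise nonlinear step

print(np.sum(np.abs(psi)**2) * (L / n_x))   # conserved "particle number", ~2 for sech initial data
```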
Many-body potentials.
The potentials can include many-body contributions. The interacting Lagrangian is then:
formula_21
These types of potentials are important in some effective descriptions of close-packed atoms. Higher order interactions are less and less important.
Canonical formalism.
The canonical momentum association with the field formula_1 is
formula_22
The canonical commutation relations are like an independent harmonic oscillator at each point:
formula_23
The field Hamiltonian is
formula_24
and the field equation for any interaction is a nonlinear and nonlocal version of the Schrödinger equation. For pairwise interactions:
formula_25
Perturbation theory.
The expansion in Feynman diagrams is called many-body perturbation theory. The propagator is
formula_26
The interaction vertex is the Fourier transform of the pair-potential. In all the interactions, the number of incoming and outgoing lines is equal.
Exposition.
Identical particles.
The many body Schrödinger equation for identical particles describes the time evolution of the many-body wavefunction "ψ"("x"1, "x"2..."xN") which is the probability amplitude for "N" particles to have the listed positions. The Schrödinger equation for "ψ" is:
"formula_27"
with Hamiltonian
formula_28
Since the particles are indistinguishable, the wavefunction has a definite symmetry under switching positions: either it is completely symmetric, unchanged when any two particle positions are exchanged (bosons), or it is completely antisymmetric, changing sign under every exchange (fermions).
Since the particles are indistinguishable, the potential V must be unchanged under permutations.
If
formula_31
then it must be the case that formula_32. If
formula_33
then formula_34 and so on.
In the Schrödinger equation formalism, the restrictions on the potential are ad-hoc, and the classical wave limit is hard to reach. It also has limited usefulness if a system is open to the environment, because particles might coherently enter and leave.
Nonrelativistic Fock space.
A Schrödinger field is defined by extending the Hilbert space of states to include configurations with arbitrary particle number. A nearly complete basis for this set of states is the collection:
formula_35
labeled by the total number of particles and their position. An arbitrary state with particles at separated positions is described by a superposition of states of this form.
formula_36
In this formalism, keep in mind that any two states whose positions can be permuted into each other are really the same, so the integration domains need to avoid double counting. Also keep in mind that the states with more than one particle at the same point have not yet been defined. The quantity formula_37 is the amplitude that no particles are present, and its absolute square is the probability that the system is in the vacuum.
In order to reproduce the Schrödinger description, the inner product on the basis states should be
formula_38
formula_39
and so on. Since the discussion is nearly formally identical for bosons and fermions, although the physical properties are different, from here on the particles will be bosons.
There are natural operators in this Hilbert space. One operator, called formula_40, is the operator which introduces an extra particle at x. It is defined on each basis state:
formula_41
with slight ambiguity when a particle is already at x.
Another operator removes a particle at x, and is called formula_1. This operator is the conjugate of the operator formula_42. Because formula_43 has no matrix elements which connect to states with no particle at x, formula_1 must give zero when acting on such a state.
formula_44
The position basis is an inconvenient way to understand coincident particles because states with a particle localized at one point have infinite energy, so intuition is difficult. In order to see what happens when two particles are at exactly the same point, it is mathematically simplest either to make space into a discrete lattice, or to Fourier transform the field in a finite volume.
The operator
formula_45
creates a superposition of one particle states in a plane wave state with momentum "k", in other words, it produces a new particle with momentum "k". The operator
formula_46
annihilates a particle with momentum "k".
If the potential energy for interaction of infinitely distant particles vanishes, the Fourier transformed operators in infinite volume create states which are noninteracting. The states are infinitely spread out, and the chance that the particles are nearby is zero.
The matrix elements for the operators between non-coincident points reconstruct the matrix elements of the Fourier transform between all modes:
where the delta function is either the Dirac delta function or the Kronecker delta, depending on whether the volume is infinite or finite.
The commutation relations now determine the operators completely, and when the spatial volume is finite, there is no conceptual hurdle to understanding coinciding momenta because momenta are discrete. In a discrete momentum basis, the basis states are:
formula_50
where the "n"'s are the number of particles at each momentum. For fermions and anyons, the number of particles at any momentum is always either zero or one. The operators formula_51 have harmonic-oscillator like matrix elements between states, independent of the interaction:
formula_52
formula_53
So that the operator
formula_54
counts the total number of particles.
Now it is easy to see that the matrix elements of formula_55 and formula_56 obey harmonic oscillator commutation relations too.
So there really is no difficulty with coincident particles in position space.
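The harmonic-oscillator structure of these matrix elements can be checked concretely in a truncated single-mode number basis. A minimal Python sketch (the truncation level is an arbitrary illustrative choice):
```python
import numpy as np

n_max = 6
n = np.arange(n_max + 1)
a = np.diag(np.sqrt(n[1:]), k=1)   # annihilation: a|n> = sqrt(n) |n-1>
a_dag = a.T                        # creation:     a†|n> = sqrt(n+1) |n+1>
N = a_dag @ a                      # number operator, diagonal 0, 1, ..., n_max

comm = a @ a_dag - a_dag @ a       # identity except in the topmost (truncated) level
print(np.allclose(np.diag(N), n))                   # True
print(np.allclose(comm[:-1, :-1], np.eye(n_max)))   # True: canonical commutation relation
```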
The operator formula_59, which removes and replaces a particle, acts as a sensor to detect whether a particle is present at "x". The operator formula_60 acts to multiply the state by the gradient of the many-body wavefunction. The operator
formula_61
acts to reproduce the right hand side of the Schrödinger equation when acting on any basis state, so that
formula_62
holds as an operator equation. Since this is true for an arbitrary state, it is also true without the formula_43.
formula_63
To add interactions, add nonlinear terms in the field equations. The field form automatically ensures that the potentials obey the restrictions from symmetry.
Field Hamiltonian.
The field Hamiltonian which reproduces the equations of motion is
formula_64
The Heisenberg equations of motion for this operator reproduce the equation of motion for the field.
To find the classical field Lagrangian, apply a Legendre transform to the classical limit of the Hamiltonian.
formula_65
Although this is correct classically, the quantum mechanical transformation is not completely conceptually straightforward because the path integral is over eigenvalues of operators ψ which are not hermitian and whose eigenvectors are not orthogonal. The path integral over field states therefore seems naively to be overcounting. This is not the case, because the time derivative term in L includes the overlap between the different field states.
Relation to Klein–Gordon field.
The non-relativistic limit as formula_66 of any Klein–Gordon field is two Schrödinger fields, representing the particle and anti-particle. For clarity, all units and constants are preserved in this derivation. From the momentum space annihilation operators formula_67 of the relativistic field, one defines
formula_68,
such that formula_69. Defining two "non-relativistic" fields formula_70 and formula_71,
formula_72,
which factor out a rapidly oscillating phase due to the rest mass plus a vestige of the relativistic measure, the Lagrangian density formula_73 becomes
formula_74
where terms proportional to formula_75 are represented with ellipses and disappear in the non-relativistic limit. When the four-gradient is expanded, the total divergence is ignored and terms proportional to formula_76 also disappear in the non-relativistic limit. After an integration by parts,
formula_77
The final Lagrangian takes the form
formula_78
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nL = \\psi^\\dagger \\left(i{\\partial\\over \\partial t} + {\\nabla^2 \\over 2m}\\right)\\psi.\n"
},
{
"math_id": 1,
"text": "\\psi"
},
{
"math_id": 2,
"text": "V(x)"
},
{
"math_id": 3,
"text": "\nS = \\int_{xt} \\psi^\\dagger \\left(i{\\partial \\over \\partial t} + {\\nabla^2\\over 2m}\\right)\\psi - \\psi^\\dagger(x) \\psi(x) V(x).\n"
},
{
"math_id": 4,
"text": "\\mathcal{L}=i\\psi^{\\dagger}\\partial_{o}\\psi -\\frac{1}{2m}(\\partial_i \\psi^{\\dagger} \\partial^{i} \\psi)-V\\psi^{\\dagger}\\psi\n"
},
{
"math_id": 5,
"text": "\n\\,\\,i\\partial_{o}\\psi(x^{\\mu}) = \\left(\\frac{-\\Delta}{2m}+V(\\vec{x})\\right)\\psi(x^{\\mu})\n"
},
{
"math_id": 6,
"text": "\n-i\\partial_{o}\\psi^{\\dagger}(x^{\\mu}) = \\left(\\frac{-\\Delta}{2m}+V(\\vec{x})\\right)\\psi^{\\dagger}(x^{\\mu})\n"
},
{
"math_id": 7,
"text": "\\phi_i(x)"
},
{
"math_id": 8,
"text": "E_i"
},
{
"math_id": 9,
"text": "\n\\psi(x) = \\sum_i \\psi_i \\phi_i(x).\n\\,"
},
{
"math_id": 10,
"text": "\nS= \\int_t \\sum_i \\psi_i^\\dagger\\left( i{\\partial \\over \\partial t} - E_i\\right) \\psi_i\n\\,"
},
{
"math_id": 11,
"text": "\nS= \\int_t \\sum_i 2\\psi_r{d\\psi_i\\over dt} - E_i(\\psi_r^2 + \\psi_i^2)\n"
},
{
"math_id": 12,
"text": "\\psi_r"
},
{
"math_id": 13,
"text": "\nS= \\int_t \\sum_i {1 \\over E_i} \\left(\\frac{d\\psi_i}{dt}\\right)^2 - E_i \\psi_i^2\n"
},
{
"math_id": 14,
"text": " \\psi_i"
},
{
"math_id": 15,
"text": "V(x_1,x_2)"
},
{
"math_id": 16,
"text": "\nS = \\int_{xt} \\psi^\\dagger \\left(i \\frac{\\partial}{\\partial t} + {\\nabla^2 \\over 2m}\\right)\\psi - \\int_{xy} \\psi^\\dagger(y) \\psi^\\dagger(x)V(x,y) \\psi(x)\\psi(y).\n"
},
{
"math_id": 17,
"text": "\nV(x,y)= {j^2\\over |y-x|}.\n"
},
{
"math_id": 18,
"text": "V(x_1,x_2) = \\lambda \\delta(x_1 - x_2)"
},
{
"math_id": 19,
"text": "\nS = \\int_x \\psi^\\dagger \\left(i{\\partial \\over \\partial t} + {\\nabla^2 \\over 2m}\\right)\\psi + \\lambda \\int_x \\psi^\\dagger\\psi^\\dagger \\psi\\psi \n"
},
{
"math_id": 20,
"text": "\\lambda"
},
{
"math_id": 21,
"text": "\nL_i = \\int_x \\psi^\\dagger(x_1)\\psi^\\dagger(x_2)\\cdots\\psi^\\dagger(x_n) V(x_1,x_2,\\dots,x_n)\\psi(x_1)\\psi(x_2)\\cdots\\psi(x_n).\\,"
},
{
"math_id": 22,
"text": "\n\\Pi(x) = i \\psi^\\dagger.\n\\,"
},
{
"math_id": 23,
"text": "\n[\\psi(x), \\psi^\\dagger (y)] = \\delta(x-y).\n"
},
{
"math_id": 24,
"text": "\nH = S - \\int \\Pi(x) {d\\over dt}\\psi = \\int {|\\nabla \\psi|^2 \\over 2m} + \\int_{xy} \\psi^\\dagger(x)\\psi^\\dagger(y)V(x,y)\\psi(x)\\psi(y)\n\\,"
},
{
"math_id": 25,
"text": "\ni{\\partial \\over \\partial t} \\psi = -{\\nabla^2\\over 2m} \\psi + \\left(\\int_y V(x,y)\\psi^\\dagger(y)\\psi(y)\\right) \\psi(x).\n\\,"
},
{
"math_id": 26,
"text": "\nG(k) = {1 \\over i\\omega - {k^2\\over 2m} }.\n\\,"
},
{
"math_id": 27,
"text": "\ni\\frac{\\partial}{\\partial t} \\psi = \\left(\\frac{\\nabla_1^2}{2m} + \\frac{\\nabla_2^2}{2m} + \\cdots\n+ \\frac{\\nabla_N^2}{2m} + V(x_1,x_2,\\dots,x_N) \\right)\\psi \n\\,"
},
{
"math_id": 28,
"text": "\nH = \\frac{p_1^2}{2m} + \\frac{p_2^2}{2m} + \\cdots + \\frac{p_N^2}{2m} + V(x_1,\\dots,x_N).\n\\,"
},
{
"math_id": 29,
"text": "\\psi(x_1, x_2, \\dots) = \\psi(x_2, x_1, \\dots) \\qquad\\quad \\text{for bosons} "
},
{
"math_id": 30,
"text": "\\psi(x_1, x_2,\\dots) = -\\psi(x_2,x_1, \\dots) \\qquad \\text{for fermions} "
},
{
"math_id": 31,
"text": "\nV(x_1,\\dots,x_N) = V_1(x_1)+ V_2(x_2) + \\cdots + V_N(x_N)\n\\,"
},
{
"math_id": 32,
"text": " V_1=V_2=\\cdots=V_N "
},
{
"math_id": 33,
"text": "\nV(x_1 ... ,x_N) = V_{1,2}(x_1,x_2) + V_{1,3}(x_2,x_3) + V_{2,3}(x_1,x_2)\n\\,"
},
{
"math_id": 34,
"text": "V_{1,2} = V_{1,3} = V_{2,3}"
},
{
"math_id": 35,
"text": "\n|N;x_1,\\ldots,x_N\\rangle\n\\,"
},
{
"math_id": 36,
"text": "\n\\psi_0 |0\\rangle + \\int_x \\psi_1(x) |1;x\\rangle + \\int_{x_1x_2} \\psi_2(x_1,x_2)|2;x_1 x_2\\rangle + \\ldots\n\\,"
},
{
"math_id": 37,
"text": "\\psi_0"
},
{
"math_id": 38,
"text": "\n\\langle 1;x_1|1;y_1\\rangle = \\delta(x_1-y_1)\n\\,"
},
{
"math_id": 39,
"text": "\n\\langle 2;x_1 x_2 | 2;y_1 y_2\\rangle = \\delta(x_1-y_1)\\delta(x_2-y_2) \\pm \\delta(x_1 -y_2)\\delta(x_2-y_1)\n\\,"
},
{
"math_id": 40,
"text": " \\psi^\\dagger(x)"
},
{
"math_id": 41,
"text": "\n\\psi^\\dagger(x) \\left|N;x_1 ,\\dots, x_n\\right\\rangle = \\left|N+1; x_1, \\dots,x_n, x\\right\\rangle\n"
},
{
"math_id": 42,
"text": "\\psi^\\dagger"
},
{
"math_id": 43,
"text": "\\psi^\\dagger"
},
{
"math_id": 44,
"text": "\n\\psi(x) \\left|N; x_1, \\dots ,x_N \\right\\rangle = \\delta(x-x_1) \\left|N-1;x_2,\\dots,x_N\\right\\rangle + \\delta(x-x_2)\\left|N-1;x_1,x_3,\\dots,x_N \\right\\rangle + \\cdots "
},
{
"math_id": 45,
"text": "\n\\psi^\\dagger(k)= \\int_x e^{-ikx} \\psi^\\dagger(x)\n\\,"
},
{
"math_id": 46,
"text": "\n\\psi(k) = \\int_x e^{ikx} \\psi(x)\n\\,"
},
{
"math_id": 47,
"text": "\n\\psi^\\dagger(k) \\psi^\\dagger(k') - \\psi^\\dagger(k')\\psi^\\dagger(k) =0\n\\,"
},
{
"math_id": 48,
"text": "\n\\psi(k)\\psi(k') - \\psi(k')\\psi(k) =0\n\\,"
},
{
"math_id": 49,
"text": "\n\\psi(k)\\psi^\\dagger(k') - \\psi(k')\\psi^\\dagger(k) = \\delta(k-k')\n\\,"
},
{
"math_id": 50,
"text": "\n|n_1, n_2, ... n_k \\rangle\n\\,"
},
{
"math_id": 51,
"text": "\\psi_k"
},
{
"math_id": 52,
"text": "\n\\psi^\\dagger(k)|\\dots,n_k,\\ldots\\rangle = \\sqrt{n_k+1}\\, |\\dots,n_k+1,\\ldots\\rangle\n"
},
{
"math_id": 53,
"text": "\n\\psi(k) \\left| \\dots,n_k, \\ldots \\right\\rangle = \\sqrt{n_k} \\left|\\dots,n_k-1,\\ldots \\right\\rangle\n"
},
{
"math_id": 54,
"text": "\n\\sum_k \\psi^\\dagger(k)\\psi(k) = \\int_x \\psi^\\dagger(x)\\psi(x)\n"
},
{
"math_id": 55,
"text": "\\psi(x)"
},
{
"math_id": 56,
"text": "\\psi^\\dagger(x)"
},
{
"math_id": 57,
"text": " [\\psi(x),\\psi(y)] = [\\psi^\\dagger(x),\\psi^\\dagger(y)] = 0 "
},
{
"math_id": 58,
"text": " [\\psi(x),\\psi^\\dagger(y)] = \\delta(x-y) "
},
{
"math_id": 59,
"text": " \\psi^\\dagger(x) \\psi(x)"
},
{
"math_id": 60,
"text": " \\psi^\\dagger \\nabla\\psi"
},
{
"math_id": 61,
"text": "\nH= - \\int_x \\psi^\\dagger(x) {\\nabla^2 \\over 2m } \\psi(x)\n\\,"
},
{
"math_id": 62,
"text": "\n\\psi^\\dagger i{d\\over dt} \\psi = \\psi^\\dagger {-\\nabla^2 \\over 2m} \\psi\n\\,"
},
{
"math_id": 63,
"text": "\ni {\\partial \\over \\partial t} \\psi = {-\\nabla^2 \\over 2m} \\psi\n\\,"
},
{
"math_id": 64,
"text": "\nH= {\\nabla \\psi^\\dagger \\nabla\\psi \\over 2m} \n"
},
{
"math_id": 65,
"text": "\nL = \\psi^\\dagger \\left(i {\\partial \\over \\partial t} + {\\nabla^2 \\over 2m} \\right)\\psi\n\\,"
},
{
"math_id": 66,
"text": "c\\to\\infty"
},
{
"math_id": 67,
"text": "\\hat{a}_\\mathbf{p},\\hat{b}_\\mathbf{p}"
},
{
"math_id": 68,
"text": "\\hat{a}(x)=\\int d\\Omega_\\mathbf{p}\\hat{a}_\\mathbf{p}e^{-i p\\cdot x},\\quad \\hat{b}(x)=\\int d\\Omega_\\mathbf{p}\\hat{b}_\\mathbf{p}e^{-i p\\cdot x}"
},
{
"math_id": 69,
"text": "\\hat\\phi(x)=\\hat a(x)+\\hat b^\\dagger(x)"
},
{
"math_id": 70,
"text": "\\hat{A}(x)"
},
{
"math_id": 71,
"text": "\\hat{B}(x)"
},
{
"math_id": 72,
"text": "\\hat{a}(x)=\\frac{e^{-i m c^2 t/\\hbar}}{\\sqrt{2mc^2}}\\hat{A}(x),\\quad \\hat{b}(x)=\\frac{e^{-i m c^2 t/\\hbar}}{\\sqrt{2mc^2}}\\hat{B}(x)"
},
{
"math_id": 73,
"text": "L = (\\hbar c)^2\\partial_\\mu\\phi\\partial^\\mu\\phi^\\dagger - (mc^2)^2\\phi\\phi^\\dagger"
},
{
"math_id": 74,
"text": "\\begin{align}\nL\n&= \\left(\\hbar c\\right)^2 \\left(\\partial_\\mu\\hat{a}\\partial^\\mu\\hat{a}^\\dagger + \\partial_\\mu\\hat{b}\\partial^\\mu\\hat{b}^\\dagger + \\cdots\\right) - \\left(mc^2\\right)^2\\left(\\hat{a}\\hat{a}^\\dagger + \\hat{b}\\hat{b}^\\dagger + \\cdots\\right) \\\\\n&= \\frac{1}{2mc^2}\\left[\\left(\\hbar c\\right)^2\\left(\\frac{-imc}{\\hbar}\\hat{A} + \\partial_0\\hat{A}\\right) \\left(\\frac{imc}{\\hbar}\\hat{A}^\\dagger + \\partial^0\\hat{A}^\\dagger\\right) - \\left(\\hbar c\\right)^2\\partial_x\\hat{A}\\partial^x\\hat{A}^\\dagger + (A\\Rightarrow B) + \\cdots - \\left(mc^2\\right)^2\\left(\\hat{A}\\hat{A}^\\dagger + \\hat{B}\\hat{B}^\\dagger + \\cdots\\right)\\right] \\\\\n&= \\frac{\\hbar^2}{2m}\\left[ \\frac{imc}{\\hbar} \\left(\\partial_0\\hat{A}\\hat{A}^\\dagger-\\hat{A}\\partial^0\\hat{A}^\\dagger\\right) +\n\\partial_\\mu\\hat{A} \\partial^\\mu\\hat{A}^\\dagger + (A\\Rightarrow B) + \\cdots\n\\right]\n\\end{align}"
},
{
"math_id": 75,
"text": "e^{\\pm 2 i m c^2 t/\\hbar}"
},
{
"math_id": 76,
"text": "{1}/{c}"
},
{
"math_id": 77,
"text": "\\begin{align}\nL_A \n&= i\\hbar\\hat{A}^\\dagger\\hat{A}' + \\frac{\\hbar^2}{2m}\\left[\\frac{1}{c^2}\\hat{A}'{\\hat{A}'}^\\dagger - \\partial_x\\hat{A}\\partial^x\\hat{A}^\\dagger \\right] \\\\\n&= i\\hbar\\hat{A}^\\dagger\\hat{A}' + \\frac{\\hbar^2}{2m} \\left[ -\\left(\\partial_x\\left(\\hat{A}\\,\\partial^x\\hat{A}^\\dagger\\right) - \\hat{A}\\,\\partial_x\\partial^x\\hat{A}^\\dagger\\right) \\right] \\\\\n&= i\\hbar\\hat{A}^\\dagger\\hat{A}' + \\frac{\\hbar^2}{2m} \\hat{A}\\,\\partial_x\\partial^x\\hat{A}^\\dagger .\n\\end{align}"
},
{
"math_id": 78,
"text": "\nL = \\frac{1}{2}\\left[\n\\hat{A}^\\dagger \\left(i\\hbar\\frac{\\partial}{\\partial t} + \\frac{\\hbar^2\\nabla^2}{2m}\\right)\\hat{A}\n+ \\hat{B}^\\dagger \\left(i\\hbar\\frac{\\partial}{\\partial t}+ \\frac{\\hbar^2\\nabla^2}{2m}\\right)\\hat{B}\n+ \\text{h.c.}\n\\right].\n"
}
]
| https://en.wikipedia.org/wiki?curid=13625345 |
1362652 | Kleinian group | Discrete group of Möbius transformations
In mathematics, a Kleinian group is a discrete subgroup of the group of orientation-preserving isometries of hyperbolic 3-space H3. The latter, identifiable with PSL(2, C), is the quotient group of the 2 by 2 complex matrices of determinant 1 by their center, which consists of the identity matrix and its product by −1. PSL(2, C) has a natural representation as orientation-preserving conformal transformations of the Riemann sphere, and as orientation-preserving conformal transformations of the open unit ball "B"3 in R3. Adjoining the orientation-reversing conformal maps (such as the one induced by complex conjugation) gives the full isometry group of H3. So, a Kleinian group can be regarded as a discrete subgroup acting on one of these spaces.
History.
The theory of general Kleinian groups was founded by Felix Klein (1883) and Henri Poincaré (1883), who named them after Felix Klein. The special case of Schottky groups had been studied a few years earlier, in 1877, by Schottky.
Definitions.
One modern definition of Kleinian group is as a group which acts on the 3-ball formula_0 as a discrete group of hyperbolic isometries. Hyperbolic 3-space has a natural boundary; in the ball model, this can be identified with the 2-sphere. We call it the sphere at infinity, and denote it by formula_1. A hyperbolic isometry extends to a conformal homeomorphism of the sphere at infinity (and conversely, every conformal homeomorphism on the sphere at infinity extends uniquely to a hyperbolic isometry on the ball by Poincaré extension). It is a standard result from complex analysis that conformal homeomorphisms on the Riemann sphere are exactly the Möbius transformations, which can further be identified as elements of the projective linear group PGL(2,C). Thus, a Kleinian group can also be defined as a subgroup Γ of PGL(2,C). Classically, a Kleinian group was required to act properly discontinuously on a non-empty open subset of the Riemann sphere, but modern usage allows any discrete subgroup.
When Γ is isomorphic to the fundamental group formula_2 of a hyperbolic 3-manifold, then the quotient space H3/Γ becomes a Kleinian model of the manifold. Many authors use the terms "Kleinian model" and "Kleinian group" interchangeably, letting the one stand for the other.
Discreteness implies points in the interior of hyperbolic 3-space have finite stabilizers, and discrete orbits under the group Γ. On the other hand, the orbit Γ"p" of a point "p" will typically accumulate on the boundary of the closed ball formula_3.
The set of accumulation points of Γ"p" in formula_1 is called the limit set of Γ, and usually denoted formula_4. The complement formula_5 is called the domain of discontinuity or the ordinary set or the regular set. Ahlfors' finiteness theorem implies that if the group is finitely generated then formula_6 is a Riemann surface orbifold of finite type.
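As a concrete illustration of orbits accumulating on the limit set, the following sketch (Python with NumPy) takes the modular group PSL(2, Z), a Fuchsian and hence Kleinian group generated by z ↦ z + 1 and z ↦ −1/z, and applies random words in the generators to the base point i in the upper half-plane model; the orbit points crowd toward the extended real line, which is the limit set of this group. The word length and sample size are arbitrary illustrative choices.

```python
import numpy as np

# Generators of the modular group PSL(2, Z), a Fuchsian (hence Kleinian) group:
# T: z -> z + 1, its inverse, and S: z -> -1/z, acting by Moebius transformations.
T = np.array([[1, 1], [0, 1]])
Tinv = np.array([[1, -1], [0, 1]])
S = np.array([[0, -1], [1, 0]])
gens = [T, Tinv, S]

def moebius(m, z):
    a, b, c, d = m.ravel()
    return (a * z + b) / (c * z + d)

rng = np.random.default_rng(0)
z0 = 1j                        # base point in the upper half-plane model
heights = []
for _ in range(1000):
    g = np.eye(2, dtype=int)
    for _ in range(40):        # a random word of length 40 in the generators
        g = g @ gens[rng.integers(len(gens))]
    heights.append(moebius(g, z0).imag)

# Im(g.i) = 1/(c^2 + d^2) is typically tiny for long words, reflecting the
# accumulation of the orbit on the boundary circle (the limit set).
print(np.median(heights))
```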
The unit ball "B"3 with its conformal structure is the Poincaré model of hyperbolic 3-space. When we think of it metrically, with metric
formula_7
it is a model of 3-dimensional hyperbolic space H3. The set of conformal self-maps of "B"3 becomes the set of isometries (i.e. distance-preserving maps) of H3 under this identification. Such maps restrict to conformal self-maps of formula_1, which are Möbius transformations. There are isomorphisms
formula_8
The subgroups of these groups consisting of orientation-preserving transformations are all isomorphic to the projective matrix group: PSL(2,C) via the usual identification of the unit sphere with the complex projective line P1(C).
Variations.
There are some variations of the definition of a Kleinian group: sometimes
Kleinian groups are allowed to be subgroups of PSL(2, C).2 (that is, of PSL(2, C) extended by complex conjugation), in other words to have orientation-reversing elements; sometimes they are assumed to be finitely generated; and sometimes they are required to act properly discontinuously on a non-empty open subset of the Riemann sphere.
Examples.
Bianchi groups.
A Bianchi group is a Kleinian group of the form PSL(2, "O""d"), where formula_9 is the ring of integers of the imaginary quadratic field formula_10 for d a positive square-free integer.
Elementary and reducible Kleinian groups.
A Kleinian group is called elementary if its limit set is finite, in which case the limit set has 0, 1, or 2 points.
Examples of elementary Kleinian groups include finite Kleinian groups (with empty limit set) and infinite cyclic Kleinian groups.
A Kleinian group is called reducible if all elements have a common fixed point on the Riemann sphere. Reducible Kleinian groups are elementary, but some elementary finite Kleinian groups are not reducible.
Fuchsian groups.
Any Fuchsian group (a discrete subgroup of PSL(2, R)) is a Kleinian group, and conversely any Kleinian group preserving the real line (in its action on the Riemann sphere) is a Fuchsian group. More generally, every Kleinian group preserving a circle or straight line in the Riemann sphere is conjugate to a Fuchsian group.
Quasi-Fuchsian groups.
A Kleinian group that preserves a Jordan curve is called a quasi-Fuchsian group. When the Jordan curve is a circle or a straight line these are just conjugate to Fuchsian groups under conformal transformations. Finitely generated quasi-Fuchsian groups are conjugate to Fuchsian groups under quasi-conformal transformations. The limit set is contained in the invariant Jordan curve, and if it is equal to the Jordan curve the group is said to be of the first kind, and otherwise it is said to be of the second kind.
Schottky groups.
Let "C"i be the boundary circles of a finite collection of disjoint closed disks. The group generated by inversion in each circle has limit set a Cantor set, and the quotient H3/"G" is a mirror orbifold with underlying space a ball. It is double covered by a handlebody; the corresponding index 2 subgroup is a Kleinian group called a Schottky group.
Crystallographic groups.
Let "T" be a periodic tessellation of hyperbolic 3-space. The group of symmetries of the tessellation is a Kleinian group.
Fundamental groups of hyperbolic 3-manifolds.
The fundamental group of any oriented hyperbolic 3-manifold is a Kleinian group. There are many examples of these, such as the complement of a figure 8 knot or the Seifert–Weber space. Conversely if a Kleinian group has no nontrivial torsion elements then it is the fundamental group of a hyperbolic 3-manifold.
Degenerate Kleinian groups.
A Kleinian group is called degenerate if it is not elementary and its limit set is simply connected. Such groups can be constructed by taking a suitable limit of quasi-Fuchsian groups such that one of the two components of the regular points contracts down to the empty set; these groups are called singly degenerate. If both components of the regular set contract down to the empty set, then the limit set becomes a space-filling curve and the group is called doubly degenerate.
The existence of degenerate Kleinian groups was first shown indirectly, and the first explicit example was found by Jørgensen. Examples of doubly degenerate groups and space-filling curves associated to pseudo-Anosov maps were given later.
{
"math_id": 0,
"text": "B^3"
},
{
"math_id": 1,
"text": "S^2_\\infty"
},
{
"math_id": 2,
"text": "\\pi_1"
},
{
"math_id": 3,
"text": "\\bar{B}^3"
},
{
"math_id": 4,
"text": "\\Lambda(\\Gamma)"
},
{
"math_id": 5,
"text": "\\Omega(\\Gamma)=S^2_\\infty - \\Lambda(\\Gamma)"
},
{
"math_id": 6,
"text": "\\Omega(\\Gamma)/\\Gamma"
},
{
"math_id": 7,
"text": "ds^2= \\frac{4 \\, \\left| dx \\right|^2 }{\\left( 1-|x|^2 \\right)^2}"
},
{
"math_id": 8,
"text": " \\operatorname{Mob}(S^2_\\infty) \\cong \\operatorname{Conf}(B^3) \\cong \\operatorname{Isom}(\\mathbf{H}^3)."
},
{
"math_id": 9,
"text": "\\mathcal{O}_d"
},
{
"math_id": 10,
"text": "\\mathbb{Q}(\\sqrt{-d})"
}
]
| https://en.wikipedia.org/wiki?curid=1362652 |
1362724 | Kazhdan's property (T) | Mathematics term
In mathematics, a locally compact topological group "G" has property (T) if the trivial representation is an isolated point in its unitary dual equipped with the Fell topology. Informally, this means that if "G" acts unitarily on a Hilbert space and has "almost invariant vectors", then it has a nonzero invariant vector. The formal definition, introduced by David Kazhdan (1967), gives this a precise, quantitative meaning.
Although originally defined in terms of irreducible representations, property (T) can often be checked even when there is little or no explicit knowledge of the unitary dual. Property (T) has important applications to group representation theory, lattices in algebraic groups over local fields, ergodic theory, geometric group theory, expanders, operator algebras and the theory of networks.
Definitions.
Let "G" be a σ-compact, locally compact topological group and π : "G" → "U"("H") a unitary representation of "G" on a (complex) Hilbert space "H". If ε > 0 and "K" is a compact subset of "G", then a unit vector ξ in "H" is called an (ε, "K")-invariant vector if
formula_0
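As a concrete illustration of this definition, and of how almost-invariant vectors can exist without any invariant vector for a group lacking property (T), the following sketch (Python with NumPy) considers the group Z acting on ℓ2(Z) by shifts: the normalized indicator of a long interval is an (ε, "K")-invariant unit vector for any finite "K" once the interval is long enough, even though this representation has no nonzero invariant vector. The particular set "K" and interval lengths are arbitrary illustrative choices.

```python
import numpy as np

def almost_invariant_defect(N, shifts):
    """Max of ||pi(g) xi - xi|| over g in K, where xi is the normalized
    indicator of {-N, ..., N} in l^2(Z) and pi(g) is translation by g."""
    pad = max(abs(g) for g in shifts)          # window large enough to hold all shifts
    xi = np.zeros(2 * (N + pad) + 1)
    xi[pad:pad + 2 * N + 1] = 1.0
    xi /= np.linalg.norm(xi)
    return max(np.linalg.norm(np.roll(xi, g) - xi) for g in shifts)

K = [-3, -1, 1, 3]                             # a finite (compact) subset of Z
for N in (10, 100, 1000, 10000):
    # defect equals sqrt(2|g|/(2N+1)) for the worst g, so it tends to 0 as N grows
    print(N, almost_invariant_defect(N, K))
```

Since such almost-invariant vectors exist for every ε and "K" while the shift representation has no nonzero invariant vector, condition (3) below fails for Z, which is one way to see that Z does not have property (T).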
The following conditions on "G" are all equivalent to "G" having property (T) of Kazhdan, and any of them can be used as the definition of property (T).
(1) The trivial representation is an isolated point of the unitary dual of "G" with Fell topology.
(2) Any sequence of continuous positive definite functions on "G" converging to 1 uniformly on compact subsets, converges to 1 uniformly on "G".
(3) Every unitary representation of "G" that has an (ε, "K")-invariant unit vector for any ε > 0 and any compact subset "K", has a non-zero invariant vector.
(4) There exists an ε > 0 and a compact subset "K" of "G" such that every unitary representation of "G" that has an (ε, "K")-invariant unit vector, has a nonzero invariant vector.
(5) Every continuous affine isometric action of "G" on a "real" Hilbert space has a fixed point (property (FH)).
If "H" is a closed subgroup of "G", the pair ("G","H") is said to have relative property (T) of Margulis if there exists an ε > 0 and a compact subset "K" of "G" such that whenever a unitary representation of "G" has an (ε, "K")-invariant unit vector, then it has a non-zero vector fixed by "H".
Discussion.
Definition (4) evidently implies definition (3). To show the converse, let "G" be a locally compact group satisfying (3), and assume by contradiction that for every "K" and ε there is a unitary representation that has a ("K", ε)-invariant unit vector but does not have an invariant vector. The direct sum of all such representations then has a ("K", ε)-invariant unit vector for every "K" and ε, yet no nonzero invariant vector, which contradicts (3).
The equivalence of (4) and (5) (Property (FH)) is the Delorme-Guichardet theorem. The fact that (5) implies (4) requires the assumption that "G" is σ-compact (and locally compact) (Bekka et al., Theorem 2.12.4).
Examples.
Examples of groups that "do not" have property (T) include
Discrete groups.
Historically property (T) was established for discrete groups Γ by embedding them as lattices in real or p-adic Lie groups with property (T). There are now several direct methods available.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\forall g \\in K \\ : \\ \\left \\|\\pi(g) \\xi - \\xi \\right \\| < \\varepsilon."
},
{
"math_id": 1,
"text": "\\mathbb{R}^{+}"
}
]
| https://en.wikipedia.org/wiki?curid=1362724 |
1362795 | 4-manifold | Mathematical space
In mathematics, a 4-manifold is a 4-dimensional topological manifold. A smooth 4-manifold is a 4-manifold with a smooth structure. In dimension four, in marked contrast with lower dimensions, topological and smooth manifolds are quite different. There exist some topological 4-manifolds which admit no smooth structure, and even if there exists a smooth structure, it need not be unique (i.e. there are smooth 4-manifolds which are homeomorphic but not diffeomorphic).
4-manifolds are important in physics because in General Relativity, spacetime is modeled as a pseudo-Riemannian 4-manifold.
Topological 4-manifolds.
The homotopy type of a simply connected compact 4-manifold only depends on the intersection form on the middle dimensional homology. A famous theorem of Michael Freedman (1982) implies that the homeomorphism type of the manifold only depends on this intersection form, and on a formula_0 invariant called the Kirby–Siebenmann invariant, and moreover that every combination of unimodular form and Kirby–Siebenmann invariant can arise, except that if the form is even, then the Kirby–Siebenmann invariant must be the signature/8 (mod 2).
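The invariants entering this classification can be computed directly from a Gram matrix of the intersection form. The sketch below (Python with NumPy) uses the E8 form, built here as the Cartan matrix of the E8 Dynkin diagram with one particular, arbitrary node labelling, and checks that it is unimodular, even, and of signature 8; by the statement above, any topological 4-manifold realizing it must have Kirby–Siebenmann invariant 8/8 = 1 (mod 2), and by the smoothability criterion mentioned below it admits no smooth structure.

```python
import numpy as np

# Gram matrix of the E8 form: Cartan matrix of the E8 Dynkin diagram.
# Node 0 is the branch node; its three arms have 1, 2 and 4 further nodes
# (this labelling is one arbitrary but valid choice of the tree).
edges = [(0, 1), (0, 2), (2, 3), (0, 4), (4, 5), (5, 6), (6, 7)]
E8 = 2 * np.eye(8, dtype=int)
for i, j in edges:
    E8[i, j] = E8[j, i] = -1

eigs = np.linalg.eigvalsh(E8.astype(float))
signature = int(np.sum(eigs > 0) - np.sum(eigs < 0))

print(int(round(np.linalg.det(E8.astype(float)))))   # 1    -> unimodular
print(all(E8[i, i] % 2 == 0 for i in range(8)))      # True -> even form
print(signature)                                      # 8
print((signature // 8) % 2)                           # Kirby-Siebenmann invariant forced to be 1
```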
Examples:
Freedman's classification can be extended to some cases when the fundamental group is not too complicated; for example, when it is formula_1, there is a classification similar to the one above using Hermitian forms over the group ring of formula_1. If the fundamental group is too large (for example, a free group on 2 generators), then Freedman's techniques seem to fail and very little is known about such manifolds.
For any finitely presented group it is easy to construct a (smooth) compact 4-manifold with it as its fundamental group. (More specifically, for any finitely presented group, one constructs a manifold with the given fundamental group, such that two manifolds in this family are homeomorphic if and only if the fundamental groups are isomorphic.) As there can be no algorithm to tell whether two finitely presented groups are isomorphic (even if one is known to be trivial), there can be no algorithm to tell if two 4-manifolds have the same fundamental group. This is one reason why much of the work on 4-manifolds just considers the simply connected case: the general case of many problems is already known to be intractable.
Smooth 4-manifolds.
For manifolds of dimension at most 6, any piecewise linear (PL) structure can be smoothed in an essentially unique way, so in particular the theory of 4 dimensional PL manifolds is much the same as the theory of 4 dimensional smooth manifolds.
A major open problem in the theory of smooth 4-manifolds is to classify the simply connected compact ones.
As the topological ones are known, this breaks up into two parts: determining which topological 4-manifolds admit a smooth structure, and classifying the different smooth structures on a smoothable 4-manifold.
There is an almost complete answer to the first problem asking which simply connected compact 4-manifolds have smooth structures.
First, the Kirby–Siebenmann class must vanish.
In contrast, very little is known about the second question of classifying the smooth structures on a smoothable 4-manifold; in fact, there is not a single smoothable 4-manifold where the answer is fully known. Donaldson showed that there are some simply connected compact 4-manifolds, such as Dolgachev surfaces, with a countably infinite number of different smooth structures. There are an uncountable number of different smooth structures on R4; see exotic R4.
Fintushel and Stern showed how to use surgery to construct large numbers of different smooth structures (indexed by arbitrary integral polynomials) on many different manifolds, using Seiberg–Witten invariants to show that the smooth structures are different. Their results suggest that any classification of simply connected smooth 4-manifolds will be very complicated. There are currently no plausible conjectures about what this classification might look like. (Some early conjectures that all simply connected smooth 4-manifolds might be connected sums of algebraic surfaces, or symplectic manifolds, possibly with orientations reversed, have been disproved.)
Special phenomena in 4 dimensions.
There are several fundamental theorems about manifolds that can be proved by low-dimensional methods in dimensions at most 3, and by completely different high-dimensional methods in dimension at least 5, but which are false in dimension 4. Here are some examples:
Failure of the Whitney trick in dimension 4.
According to Frank Quinn, "Two "n"-dimensional submanifolds of a manifold of dimension 2"n" will usually intersect themselves and each other in isolated points. The "Whitney trick" uses an isotopy across an embedded 2-disk to simplify these intersections. Roughly speaking this reduces the study of "n"-dimensional embeddings to embeddings of 2-disks. But this is not a reduction when the dimension is 4: the 2-disks themselves are middle-dimensional, so trying to embed them encounters exactly the same problems they are supposed to solve. This is the phenomenon that separates dimension 4 from others."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\Z/2\\Z"
},
{
"math_id": 1,
"text": "\\Z"
}
]
| https://en.wikipedia.org/wiki?curid=1362795 |
13629602 | Allegory (mathematics) | In the mathematical field of category theory, an allegory is a category that has some of the structure of the category Rel of sets and binary relations between them. Allegories can be used as an abstraction of categories of relations, and in this sense the theory of allegories is a generalization of relation algebra to relations between different sorts. Allegories are also useful in defining and investigating certain constructions in category theory, such as exact completions.
In this article we adopt the convention that morphisms compose from right to left, so "RS" means "first do S, then do R".
Definition.
An allegory is a category in which
every morphism formula_0 is associated with an anti-involution, i.e. a morphism formula_1 with formula_2 and formula_3 and
every pair of morphisms formula_4 with common domain and codomain is associated with an intersection, i.e. a morphism formula_5
all such that
intersections are idempotent: formula_6 commutative: formula_7 and associative: formula_8
anti-involution distributes over intersection: formula_9
composition is semi-distributive over intersection: formula_10 and formula_11
and the modularity law holds: formula_12
Here, we are abbreviating using the order defined by the intersection: formula_13 means formula_14
A first example of an allegory is the category of sets and relations. The objects of this allegory are sets, and a morphism formula_15 is a binary relation between X and Y. Composition of morphisms is composition of relations, and the anti-involution of formula_16 is the converse relation formula_17: formula_18 if and only if formula_19. Intersection of morphisms is (set-theoretic) intersection of relations.
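These operations are easy to experiment with directly. The following sketch (Python; the underlying set and sampling density are arbitrary illustrative choices) encodes morphisms of this allegory as sets of ordered pairs, implements composition in the right-to-left convention used here, together with converse and intersection, and spot-checks the modularity law formula_12 on random relations.

```python
import random

U = range(4)  # a small common underlying set for all objects

def compose(R, S):
    # "RS" in the convention used here: first do S, then do R.
    return {(x, z) for (x, y1) in S for (y2, z) in R if y1 == y2}

def converse(R):
    return {(y, x) for (x, y) in R}

def random_relation():
    return {(x, y) for x in U for y in U if random.random() < 0.4}

random.seed(1)
for _ in range(1000):
    R, S, T = random_relation(), random_relation(), random_relation()
    lhs = compose(R, S) & T                              # RS ∩ T
    rhs = compose(R & compose(T, converse(S)), S)        # (R ∩ TS°)S
    assert lhs <= rhs                                    # modularity law holds in Rel
print("modularity law holds on all samples")
```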
Regular categories and allegories.
Allegories of relations in regular categories.
In a category C, a relation between objects X and Y is a span of morphisms formula_20 that is jointly monic. Two such spans formula_21 and formula_22 are considered equivalent when there is an isomorphism between S and T that make everything commute; strictly speaking, relations are only defined up to equivalence (one may formalise this either by using equivalence classes or by using bicategories). If the category C has products, a relation between X and Y is the same thing as a monomorphism into "X" × "Y" (or an equivalence class of such). In the presence of pullbacks and a proper factorization system, one can define the composition of relations. The composition formula_23 is found by first pulling back the cospan formula_24 and then taking the jointly-monic image of the resulting span formula_25
Composition of relations will be associative if the factorization system is appropriately stable. In this case, one can consider a category Rel("C"), with the same objects as C, but where morphisms are relations between the objects. The identity relations are the diagonals formula_26
A regular category (a category with finite limits and images in which covers are stable under pullback) has a stable regular epi/mono factorization system. The category of relations for a regular category is always an allegory. Anti-involution is defined by turning the source/target of the relation around, and intersections are intersections of subobjects, computed by pullback.
Maps in allegories, and tabulations.
A morphism R in an allegory A is called a map if it is entire formula_27 and deterministic formula_28 Another way of saying this is that a map is a morphism that has a right adjoint in A when "A" is considered, using the local order structure, as a 2-category. Maps in an allegory are closed under identity and composition. Thus, there is a subcategory Map("A") of A with the same objects but only the maps as morphisms. For a regular category C, there is an isomorphism of categories formula_29 In particular, a morphism in Map(Rel(Set)) is just an ordinary set function.
In an allegory, a morphism formula_0 is tabulated by a pair of maps formula_30 and formula_31 if formula_32 and formula_33 An allegory is called tabular if every morphism has a tabulation. For a regular category C, the allegory Rel("C") is always tabular. On the other hand, for any tabular allegory A, the category Map("A") of maps is a locally regular category: it has pullbacks, equalizers, and images that are stable under pullback. This is enough to study relations in Map("A"), and in this setting, formula_34
Unital allegories and regular categories of maps.
A unit in an allegory is an object U for which the identity is the largest morphism formula_35 and such that from every other object, there is an entire relation to U. An allegory with a unit is called unital. Given a tabular allegory A, the category Map("A") is a regular category (it has a terminal object) if and only if A is unital.
More sophisticated kinds of allegory.
Additional properties of allegories can be axiomatized. Distributive allegories have a union-like operation that is suitably well-behaved, and division allegories have a generalization of the division operation of relation algebra. Power allegories are distributive division allegories with additional powerset-like structure. The connection between allegories and regular categories can be developed into a connection between power allegories and toposes.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "R\\colon X\\to Y"
},
{
"math_id": 1,
"text": "R^\\circ\\colon Y\\to X"
},
{
"math_id": 2,
"text": "R^{\\circ\\circ} = R"
},
{
"math_id": 3,
"text": "(RS)^\\circ = S^\\circ R^\\circ\\text{;}"
},
{
"math_id": 4,
"text": "R,S \\colon X\\to Y"
},
{
"math_id": 5,
"text": "R \\cap S\\colon X\\to Y"
},
{
"math_id": 6,
"text": "R\\cap R = R,"
},
{
"math_id": 7,
"text": "R\\cap S = S\\cap R,"
},
{
"math_id": 8,
"text": "(R\\cap S)\\cap T = R\\cap (S\\cap T);"
},
{
"math_id": 9,
"text": "(R\\cap S)^\\circ = R^\\circ \\cap S^\\circ;"
},
{
"math_id": 10,
"text": "R(S\\cap T) \\subseteq RS\\cap RT"
},
{
"math_id": 11,
"text": "(R\\cap S)T \\subseteq RT\\cap ST;"
},
{
"math_id": 12,
"text": "RS \\cap T \\subseteq (R\\cap TS^\\circ)S."
},
{
"math_id": 13,
"text": "R \\subseteq S"
},
{
"math_id": 14,
"text": "R = R\\cap S."
},
{
"math_id": 15,
"text": "X \\to Y"
},
{
"math_id": 16,
"text": "R"
},
{
"math_id": 17,
"text": "R^\\circ"
},
{
"math_id": 18,
"text": "y R^\\circ x"
},
{
"math_id": 19,
"text": "xRy"
},
{
"math_id": 20,
"text": "X\\gets R\\to Y"
},
{
"math_id": 21,
"text": "X\\gets S\\to Y"
},
{
"math_id": 22,
"text": "X\\gets T\\to Y"
},
{
"math_id": 23,
"text": "X\\gets R\\to Y\\gets S\\to Z"
},
{
"math_id": 24,
"text": "R\\to Y\\gets S"
},
{
"math_id": 25,
"text": "X\\gets R\\gets\\bullet\\to S\\to Z."
},
{
"math_id": 26,
"text": "X \\to X\\times X."
},
{
"math_id": 27,
"text": "(1\\subseteq R^\\circ R)"
},
{
"math_id": 28,
"text": "(RR^\\circ \\subseteq 1)."
},
{
"math_id": 29,
"text": "C \\cong \\operatorname{Map}(\\operatorname{Rel}(C))."
},
{
"math_id": 30,
"text": "f\\colon Z\\to X"
},
{
"math_id": 31,
"text": "g\\colon Z\\to Y"
},
{
"math_id": 32,
"text": "gf^\\circ = R"
},
{
"math_id": 33,
"text": "f^\\circ f \\cap g^\\circ g = 1."
},
{
"math_id": 34,
"text": "A\\cong \\operatorname{Rel}(\\operatorname{Map}(A))."
},
{
"math_id": 35,
"text": "U\\to U,"
}
]
| https://en.wikipedia.org/wiki?curid=13629602 |
13630730 | Tidewater glacier cycle | Behavior of glaciers that terminate at the sea
The tidewater glacier cycle is the typically centuries-long behavior of tidewater glaciers that consists of recurring periods of advance alternating with rapid retreat and punctuated by periods of stability. During portions of its cycle, a tidewater glacier is relatively insensitive to climate change.
Calving rate of tidewater glaciers.
While climate is the main factor affecting the behavior of all glaciers, additional factors affect calving (iceberg-producing) tidewater glaciers. These glaciers terminate abruptly at the ocean interface, with large pieces of the glacier fracturing and separating, or calving, from the ice front as icebergs.
Climate change causes a shift in the equilibrium line altitude (ELA) of a glacier. This is the imaginary line on a glacier, above which snow accumulates faster than it ablates, and below which, the reverse is the case. This altitude shift, in turn, prompts a retreat or advance of the terminus toward a new steady-state position. However, this change in terminus behavior for calving glaciers is also a function of resulting changes in fjord geometry, and calving rate at the glacier terminus as it changes position.
Calving glaciers are different from land terminating glaciers in the variation in velocity along their length. Land terminating glacier velocities decline as the terminus is approached. Calving glaciers accelerate at the terminus. A declining velocity near the terminus slows the glacier's response to climate. An accelerating velocity at the front enhances the speed of the glacier's response to climate or glacier dynamic changes. This is observed in Svalbard, Patagonia and Alaska. A calving glacier requires more accumulation area than a land terminating glacier to offset this higher loss from calving.
The calving rate is largely controlled by the depth of the water and the glacier velocity at the calving front. The process of calving provides an imbalance in forces at the front of the glacier that raises velocity. The depth of the water at the glacier front is a simple measure that allows estimation of the calving rate, but it is the amount of flotation of the glacier at the front that is the specific physical characteristic that is important.
Water depth at the glacier terminus is the key variable in predicting calving of a tidewater glacier. Debris flux and sediment recycling at the glacier grounding-line, particularly rapid in the temperate glaciers of Alaska, can alter this depth, acting as a second-order control on terminus fluctuations. This effect contributes to the insensitivity of a glacier to climate when its terminus is either retreating or advancing in deep water.
Austin Post was one of the first to propose that water depth at the calving margin strongly affects the rate of iceberg calving. Glaciers that terminate on a morainal shoal are generally stable, but once a glacier retreats into water that deepens as the ice front recedes, calving rate increases rapidly and results in drastic retreat of the terminus. Using data collected from 13 Alaskan tidewater calving glaciers, Brown et al. (1982) derived the following relationship between calving speed and water depth: formula_0, where formula_1 is the mean calving speed (m⋅a−1), formula_2 is a calving coefficient (27.1±2 a−1), formula_3 is the mean water depth at glacier front (m) and formula_4 is a constant (0 m⋅a−1). Pelto and Warren (1991) found a similar calving relationship for tidewater glaciers observed over longer time periods, with a slightly reduced calving rate compared to the mainly summer rates noted by Brown et al. (1982).
Calving is also an important form of ablation for glaciers that terminate in freshwater. Funk and Röthlisberger determined a relationship between calving speed and water depth based on analysis of six glaciers that calve into lakes. They found that the same basic calving relationship developed for tidewater calving glaciers holds for freshwater calving glaciers, but the calving coefficients lead to calving rates only about 10% of those for tidewater glaciers.
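The linear relation above translates directly into code. The following sketch (Python) evaluates the Brown et al. (1982) tidewater relation with formula_2 = 27.1 a−1 and formula_4 = 0, and, for comparison, a freshwater case with the coefficient scaled to roughly 10% as described; the chosen depths are arbitrary illustrative values.

```python
def calving_speed(water_depth_m, coefficient_per_a=27.1, constant_m_per_a=0.0):
    """Mean calving speed V_C = C * H_w + D (m/a), after Brown et al. (1982)."""
    return coefficient_per_a * water_depth_m + constant_m_per_a

for depth in (50, 100, 200, 300):
    tidewater = calving_speed(depth)
    freshwater = calving_speed(depth, coefficient_per_a=0.1 * 27.1)  # ~10% of the tidewater coefficient
    print(f"H_w = {depth:3d} m: tidewater ~ {tidewater:7.0f} m/a, freshwater ~ {freshwater:6.0f} m/a")
```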
Tidewater glacier phases.
Observations of Alaskan tidewater calving glaciers prompted Austin Post to describe the tidewater calving glacier advance/retreat cycle: (1) advancing, (2) stable-extended, (3) drastically retreating, or (4) stable-retracted. The following is a detailed review of the tidewater glacier cycle derived by Post, with numerous cited examples; the cycle is based on observations of temperate tidewater glaciers in Alaska, not outlet glaciers from large ice sheets or polar glaciers.
The "accumulation area ratio" of a glacier, AAR, is the percentage of a glacier that is a snow-covered accumulation zone at the end of the summer melt season. This percentage for large Alaskan glaciers is between 60 and 70 for non-calving glaciers, 70-80 for moderately calving glaciers and up to 90 for very high calving rate glaciers. By using accumulation area ratio (AAR) data for Alaskan tidewater calving glaciers, Pelto (1987) and Viens (1995) produced models showing that climate acts as a first-order control on the advance/retreat cycle of calving glaciers during most of the advance retreat cycle, but there are climate insensitive periods as well. Pelto (1987) examined the terminus behavior of 90 Alaskan glaciers and found that the terminus behavior of all 90 were correctly predicted based on the AAR and calving rate.
Advancing.
If we begin at the stable retracted position at the end of a tidewater glacier cycle, the glacier will have a moderate calving rate and a high AAR, above 70. The glacier will build a terminus shoal of sediment, further reducing the calving rate. This will improve the glacier mass balance and the glacier can begin to advance due to this change or due to an increase in ice flux to the terminus from increasing snowfall or reduced snow melt. As the advance proceeds, the terminus shoal will be pushed in front of the glacier and continue to build, keeping the calving rate low. In the case of most glaciers, such as the Taku Glacier, the glacier will eventually build a terminus shoal that is above water and calving will essentially cease. This will eliminate this loss of ice from the glacier and the glacier can continue to advance. Taku Glacier and Hubbard Glacier have been in this phase of the cycle. Taku Glacier, which has been advancing for 120 years, no longer calves. Hubbard Glacier still has a calving front. The glacier will then expand until the AAR is between 60 and 70 and equilibrium of the non-calving glacier is achieved. The glacier is not very sensitive to climate during the advance as its AAR is quite high when the terminus shoal is limiting calving.
Stable-extended.
At the maximum extended position the glacier is once again sensitive to changing climate. Brady Glacier and Baird Glacier are examples of glaciers currently at this point. Brady Glacier has been thinning during the last two decades due to the higher equilibrium line altitudes accompanying warmer conditions in the region, and its secondary termini have begun to retreat. A glacier can remain at this position for some time, a century at least in the case of Brady Glacier. Usually substantial thinning occurs before retreat from the shoal commences. This allowed the prediction in 1980, by the United States Geological Survey (USGS), of the retreat of the Columbia Glacier from its terminus shoal. The glacier had remained on this shoal throughout the entire 20th century. The USGS was monitoring the glacier due to its proximity to Valdez, Alaska, the port for crude oil export from the Alaskan Pipeline. At some point a decline in mass balance will trigger a retreat from the shoal into deeper water, at which point calving will ensue. Based on the recent thinning, it is suggested that Brady Glacier is poised to begin retreat.
Drastically retreating.
The calving rate will increase as the glacier retreats from the shoal into the deeper fjord just cleared by the glacier during advance. The water depth initially increases as the glacier retreats from the shoal, causing ever more rapid glacier flow, calving and retreat. A glacier is comparatively insensitive to climate during this calving retreat. However, in the case of San Rafael Glacier, Chile, a switch from retreat (1945–1990) to advance (1990–1997) was noted. Current examples of this retreat are Columbia Glacier and Guyot Glacier. The most famous recent example of this is the large retreat of Glacier Bay and Icy Bay glaciers in Alaska that occurred rapidly via this process.
Muir Glacier retreated 33 km from 1886 to 1968, featuring extensive calving the entire time. It briefly reversed its retreat from 1890 to 1892. In 1968, Muir Glacier was still 27 km long, less than half of its length in 1886. The retreat continued an additional 6.5 km by 2001. Today, the glacier is near the head of its fjord and, with minimal calving, the glacier may be stable at this retracted position.
The best current example is illustrated by the United States Geological Survey study of Columbia Glacier. They noted that the average calving rate from Columbia Glacier increased from 3 km3⋅a−1 in the second half of 1983 to 4 km3⋅a−1 during the first nine months of 1984. This rate was four times greater than that measured at the end of 1977 and increased again in 1985. The glacier flow, i.e., the movement of the ice toward the sea, also increased, but it was inadequate to keep pace with the break-up and expulsion of icebergs. The increase in speed instead seemed to just feed the ever faster conveyor to the terminus for iceberg production. This prompted the USGS to predict that the glacier would retreat 32 km before stabilizing. By 2006, it had retreated 16 km. The water remains deep and the calving rate and glacier velocity very high, indicating retreat will continue. At this point, just like having a balloon payment in an adjustable rate mortgage, the glacier has to pay a whole new portion of its balance via icebergs. The glacier accelerates as flow is enhanced by the calving process; this increases the export of icebergs from the glacier. Large calving retreats are initiated by warming conditions causing ice thinning. The resulting retreat to new equilibrium conditions can be far more extensive than will be regained during the next advance stage. A good example of this is Muir Glacier.
Next to Glacier Bay, Icy Bay has had the most extensive retreat. At the beginning of the 20th century, the coastline was nearly straight and the bay non-existent. The entrance of the bay was filled by a tidewater glacier face that calved icebergs directly into the Gulf of Alaska. A century later glacier retreat has opened a multi-armed bay more than 30 miles long. The tidewater glacier has divided into three independent glaciers, Yahtse, Tsaa and Guyot Glacier. Other examples of glaciers currently in the retreat phase are South Sawyer and Sawyer Glaciers in Alaska, retreating 2.1 and 2.3 km respectively from 1961 to 2005.
In Patagonia an example of a rapidly retreating glacier is the Jorge Montt Glacier, which drains into Baja Jorge Montt in the Pacific Ocean. The glacier's ice thinning from 1975 to 2000 reached 18 m⋅a−1 at the lowest elevations. The glacier calving front experienced a major retreat of 8.5 km in those 25 years as a result of rapid thinning.
Stable-retracted.
At some point the glacier reaches a pinning point where calving is reduced due to a fjord narrowing or shoaling and the glacier's AAR is near 100. This is occurring with LeConte Glacier and Yahtse Glacier. LeConte Glacier currently has an AAR of 90, is at a retracted position and seems likely to be set to advance after building a terminus shoal. The drop in calving rate allows the glacier to reestablish equilibrium.
Examples of tidewater glacier behavior.
Taku Glacier.
The Taku Glacier provides a good example of this cycle. It was at its maximum extent near 1750. At this point it had closed off Taku Inlet. Subsequently, calving retreat commenced. By the time John Muir saw the glacier in 1890, it was near its minimum extent, at a location where the fjord narrowed, with deep water in front. About 1900, its AAR of 90 led to the onset of the Taku Glacier's advance, at the same time that the remaining Juneau Icefield glaciers continued receding. This advance continued at a rate of 88 m⋅a−1, advancing 5.3 km from the 1900 minimum until 1948, all the while building and then riding up on a substantial outwash plain beneath its calving face. After 1948, the now non-calving Taku Glacier possessed an only slightly reduced AAR (86 and 63). This drove 1.5 km of further advance at a reduced rate of 37 m⋅a−1. In 1990, the Taku Glacier's AAR was 82, high enough to prompt Pelto and Miller to conclude that the Taku Glacier would continue to advance for the remaining decade of the 20th century. From 1986 to 2005, the equilibrium line altitude on the glacier rose without a significant terminus shift, causing the AAR to decline to about 72. Pelto and Miller concluded that the reduction in the rate of advance since 1970 is attributable to the laterally expanding terminal lobe as opposed to declining mass balance, and that the primary force behind the Taku Glacier's advance since about 1900 is positive mass balance. The recent lack of positive mass balance will eventually slow the advance if it persists.
Effects of climate change.
The size of tidewater glaciers is such that the tidewater glacier cycle is several hundred years in length. A tidewater glacier is not sensitive to climate during the advancing and drastically retreating phases of its cycle. In the same region, disparate terminus responses are observed amongst tidewater calving glaciers, but not land terminating glaciers. This is exemplified by the 17 major glaciers of the Juneau Icefield: 5 have retreated more than 500 m since 1948, 11 more than 1000 m, and one glacier, the Taku, has advanced. This difference highlights the unique impacts on terminus behavior of the tidewater glacier cycle, which has caused the Taku Glacier to be insensitive to climate change in the last 60 years.
Concurrently, in both Patagonia and Alaska, there are tidewater glaciers that have advanced for a considerable period, tidewater glaciers undergoing rapid retreat and stable tidewater glaciers.
References.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_C = CH_w + D"
},
{
"math_id": 1,
"text": "V_C"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "H_w"
},
{
"math_id": 4,
"text": "D"
}
]
| https://en.wikipedia.org/wiki?curid=13630730 |
13631685 | Hosaka–Cohen transformation | Hosaka–Cohen transformation (also called H–C transformation) is a mathematical method of converting a particular two-dimensional scalar magnetic field map to a particular two-dimensional vector map. The scalar field map is of the component of magnetic field which is normal to a two-dimensional surface of a volume conductor; this volume conductor contains the currents producing the magnetic field. The resulting vector map, sometimes called "an arrowmap" roughly mimics those currents under the surface which are parallel to the surface, which produced the field. Therefore, the purpose in performing the transformation is to allow a rough visualization of the underlying, parallel currents.
The transformation was proposed by Cohen and Hosaka of the biomagnetism group at MIT, then was used by Hosaka and Cohen to visualize the current sources of the magnetocardiogram.
Each arrow is defined as:
formula_0
where formula_1 of the local formula_2 coordinate system is normal to the volume conductor surface, formula_3 and formula_4 are unit vectors, and formula_5 is the normal component of magnetic field. This is a form of two-dimensional gradient of the scalar quantity formula_5 and is rotated by 90° from the conventional gradient.
Almost any scalar field, magnetic or otherwise, can be displayed in this way, if desired, as an aid to the eye, to help see the underlying sources of the field.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\vec{a} = {\\partial Bz\\over\\partial y}\\hat{x} - {\\partial Bz\\over\\partial x}\\hat{y}\n"
},
{
"math_id": 1,
"text": "x"
},
{
"math_id": 2,
"text": "x, y, z"
},
{
"math_id": 3,
"text": "\\hat{x}"
},
{
"math_id": 4,
"text": "\\hat{y}"
},
{
"math_id": 5,
"text": "Bz"
}
]
| https://en.wikipedia.org/wiki?curid=13631685 |
13632049 | Gamma-ray burst emission mechanisms | Gamma-ray burst emission mechanisms are theories that explain how the energy from a gamma-ray burst progenitor (regardless of the actual nature of the progenitor) is turned into radiation. These mechanisms are a major topic of research as of 2007. Neither the light curves nor the early-time spectra of GRBs show resemblance to the radiation emitted by any familiar physical process.
Compactness problem.
It has been known for many years that ejection of matter at relativistic velocities (velocities very close to the speed of light) is a necessary requirement for producing the emission in a gamma-ray burst. GRBs vary on such short timescales (as short as milliseconds) that the size of the emitting region must be very small, or else the time delay due to the finite speed of light would "smear" the emission out in time, wiping out any short-timescale behavior. At the energies involved in a typical GRB, so much energy crammed into such a small space would make the system opaque to photon-photon pair production, making the burst far less luminous and also giving it a very different spectrum from what is observed. However, if the emitting system is moving towards Earth at relativistic velocities, the burst is compressed in time (as seen by an Earth observer, due to the relativistic Doppler effect) and the emitting region inferred from the finite speed of light becomes much smaller than the true size of the GRB (see relativistic beaming).
GRBs and internal shocks.
A related constraint is imposed by the "relative" timescales seen in some bursts between the short-timescale variability and the total length of the GRB. Often this variability timescale is far shorter than the total burst length. For example, in bursts as long as 100 seconds, the majority of the energy can be released in short episodes less than 1 second long. If the GRB were due to matter moving towards Earth (as the relativistic motion argument enforces), it is hard to understand why it would release its energy in such brief interludes. The generally accepted explanation for this is that these bursts involve the "collision" of multiple shells traveling at slightly different velocities; so-called "internal shocks". The collision of two thin shells flash-heats the matter, converting enormous amounts of kinetic energy into the
random motion of particles, greatly amplifying the energy release due to all emission mechanisms. Which physical mechanisms are at play in producing the observed photons is still an area of debate, but the most likely candidates appear to be synchrotron radiation and inverse Compton scattering.
As of 2007 there is no theory that has successfully described the spectrum of "all" gamma-ray bursts (though some theories work for a subset). However, the so-called Band function (named after David Band) has been fairly successful at fitting, empirically, the spectra of most gamma-ray bursts:
formula_0
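For reference, the Band function is usually quoted as a low-energy power law with an exponential cutoff, joined smoothly to a high-energy power law. The following sketch (Python with NumPy) implements that standard parametrization; the spectral indices, break energy and normalization used here are arbitrary illustrative values, and the exact form should be checked against the expression above rather than taken from this sketch.

```python
import numpy as np

def band_function(E_keV, alpha=-1.0, beta=-2.3, E0_keV=300.0, A=1.0):
    """Empirical Band photon spectrum N(E); standard parametrization (assumed here)."""
    E = np.asarray(E_keV, dtype=float)
    Ebreak = (alpha - beta) * E0_keV
    low = A * (E / 100.0) ** alpha * np.exp(-E / E0_keV)
    high = (A * ((alpha - beta) * E0_keV / 100.0) ** (alpha - beta)
            * np.exp(beta - alpha) * (E / 100.0) ** beta)
    return np.where(E < Ebreak, low, high)

energies = np.logspace(1, 4, 7)   # 10 keV to 10 MeV
print(band_function(energies))
```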
A few gamma-ray bursts have shown evidence for an additional, delayed emission component at very high energies (GeV and higher). One theory for this emission invokes inverse Compton scattering. If a GRB progenitor, such as a Wolf-Rayet star, were to explode within a stellar cluster, the resulting shock wave could generate gamma-rays by scattering photons from neighboring stars. About 30% of known galactic Wolf-Rayet stars are located in dense clusters of O stars with intense ultraviolet radiation fields, and the collapsar model suggests that WR stars are likely GRB progenitors. Therefore, a substantial fraction of GRBs are expected to occur in such clusters. As the relativistic matter ejected from an explosion slows and interacts with ultraviolet-wavelength photons, some photons gain energy, generating gamma-rays.
Afterglows and external shocks.
The GRB itself is very rapid, lasting from less than a second up to a few minutes at most. Once it disappears, it leaves behind a counterpart at longer wavelengths (X-ray, UV, optical, infrared, and radio) known as the afterglow that generally remains detectable for days or longer.
In contrast to the GRB emission, the afterglow emission is not believed to be dominated by internal shocks. In general, all the ejected matter has by this time coalesced into a single shell traveling outward into the interstellar medium (or possibly the stellar wind) around the star. At the front of this shell of matter is a shock wave referred to as the "external shock" as the still relativistically moving matter ploughs into the tenuous interstellar gas or the gas surrounding the star.
As the interstellar matter moves across the shock, it is immediately heated to extreme temperatures. (How this happens is still poorly understood as of 2007, since the particle density across the shock wave is too low to create a shock wave comparable to those familiar in dense terrestrial environments – the topic of "collisionless shocks" is still largely hypothesis but seems to accurately describe a number of astrophysical situations. Magnetic fields are probably critically involved.) These particles, now relativistically moving, encounter a strong local magnetic field and are accelerated perpendicular to the
magnetic field, causing them to radiate their energy via synchrotron radiation.
Synchrotron radiation is well understood, and the afterglow spectrum has been modeled fairly successfully using this template. It is generally dominated by electrons (which move and therefore radiate much faster than protons and other particles) so radiation from other particles is generally ignored.
In general, the afterglow spectrum assumes the form of a power-law with three break points (and therefore four different power-law segments.) The lowest break point, formula_1, corresponds to the frequency below which the afterglow is opaque to radiation and so the spectrum attains the form of the Rayleigh–Jeans tail of blackbody radiation. The two other break points, formula_2 and formula_3, are related to the minimum energy acquired by an electron after it crosses the shock wave and to the time it takes an electron to radiate most of its energy, respectively. Depending on which of these two frequencies is higher, two different regimes are possible:
formula_5
formula_7
The afterglow changes with time. It must fade, obviously, but the spectrum changes as well. For the simplest case of adiabatic expansion into a uniform-density medium, the critical parameters evolve as:
formula_8
formula_9
formula_10
Here formula_11 is the flux at the current peak frequency of the GRB spectrum. (During fast-cooling this is at formula_3; during slow-cooling it is at formula_2.) Note that because formula_2 drops faster than formula_3, the system eventually switches from fast-cooling to slow-cooling.
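The switch from fast cooling to slow cooling can be illustrated with the standard scalings for this adiabatic, uniform-density case, in which formula_2 falls as t^(-3/2) and formula_3 as t^(-1/2). These exponents and the normalizations in the sketch below (Python) are assumptions taken from the usual synchrotron afterglow model rather than read off the formulas above, and are chosen only for illustration.

```python
import numpy as np

def break_frequencies(t_days, nu_m_1d=1e14, nu_c_1d=1e12):
    """Assumed standard scalings: nu_m ~ t^-3/2, nu_c ~ t^-1/2 (adiabatic, uniform medium).
    The normalizations at 1 day are arbitrary illustrative values."""
    nu_m = nu_m_1d * t_days ** -1.5
    nu_c = nu_c_1d * t_days ** -0.5
    return nu_m, nu_c

for t in (0.01, 0.1, 1.0, 10.0, 100.0):
    nu_m, nu_c = break_frequencies(t)
    regime = "fast cooling" if nu_c < nu_m else "slow cooling"
    print(f"t = {t:6.2f} d: nu_m = {nu_m:.2e} Hz, nu_c = {nu_c:.2e} Hz -> {regime}")
```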
Different scalings are derived for radiative evolution and for a non-constant-density environment (such as a stellar wind), but they share the general power-law behavior observed in this case.
Several other known effects can modify the evolution of the afterglow:
Reverse shocks and the optical flash.
There can be "reverse shocks", which propagate "back" into the shocked matter once it begins to encounter the interstellar medium. The twice-shocked material can produce a bright optical/UV flash, which has been seen in a few GRBs, though it appears not to be a common phenomenon.
Refreshed shocks and late-time flares.
There can be "refreshed" shocks if the central engine continues to release fast-moving matter in small amounts even out to late times, these new shocks will catch up with the external shock to produce something like a late-time internal shock. This explanation has been invoked to explain the frequent flares seen in X-rays and at other wavelengths in many bursts, though some theorists are uncomfortable with the apparent demand that the progenitor (which one would think would be destroyed by the GRB) remains active for very long.
Jet effects.
Gamma-ray burst emission is believed to be released in jets, not spherical shells. Initially the two scenarios are equivalent: the center of the jet is not "aware" of the jet edge, and due to relativistic beaming we only see a small fraction of the jet. However, as the jet slows down, two things eventually occur (each at about the same time): First, information from the edge of the jet that there is no pressure to the side propagates to its center, and the jet matter can spread laterally. Second, relativistic beaming effects subside, and once Earth observers see the entire jet the widening of the relativistic beam is no longer compensated by the fact that we see a larger emitting region. Once these effects appear the jet fades very rapidly, an effect that is visible as a power-law "break" in the afterglow light curve. This is the so-called "jet break" that has been seen in some events and is often cited as evidence for the consensus view of GRBs as jets. Many GRB afterglows do not display jet breaks, especially in the X-ray, but they are more common in the optical light curves; since jet breaks generally occur at very late times (~1 day or more), when the afterglow is quite faint and often undetectable, this is not necessarily surprising.
Dust extinction and hydrogen absorption.
There may be dust along the line of sight from the GRB to Earth, both in the host galaxy and in the Milky Way. If so, the light will be attenuated and reddened and an afterglow spectrum may look very different from that modeled.
At very high frequencies (far-ultraviolet and X-ray) interstellar hydrogen gas becomes a significant absorber. In particular, a photon with a wavelength of less than 91 nanometers is energetic enough to completely ionize neutral hydrogen and is absorbed with almost 100% probability even through relatively thin gas clouds. (At much shorter wavelengths the probability of absorption begins to drop again, which is why X-ray afterglows are still detectable.) As a result, observed spectra of very high-redshift GRBs often drop to zero at wavelengths shorter than where this hydrogen ionization threshold (known as the Lyman break) falls in the GRB host's reference frame. Other, less dramatic hydrogen absorption features are also commonly seen in high-z GRBs, such as the Lyman alpha forest.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N(E)= \\begin{cases} {E^\\alpha \\exp \\left( { - \\frac{E}{{E_0 }}} \\right)}, & \\mbox{if }E \\le (\\alpha - \\beta) E_0\\mbox{ } \\\\ {\\left[{\\left( {\\alpha - \\beta } \\right)E_0 } \\right]^{\\left( {\\alpha - \\beta } \\right)} E^\\beta \\exp \\left( {\\beta - \\alpha } \\right)}, & \\mbox{if }E > (\\alpha - \\beta) E_0\\mbox{ } \\end{cases}"
},
{
"math_id": 1,
"text": "\\nu_a"
},
{
"math_id": 2,
"text": "\\nu_m"
},
{
"math_id": 3,
"text": "\\nu_c"
},
{
"math_id": 4,
"text": "\\nu_m > \\nu_c"
},
{
"math_id": 5,
"text": "F_\\nu \\propto \\begin{cases} {\\nu^{2}}, & \\nu<\\nu_a \\\\\n {\\nu^{1/3}}, & \\nu_a<\\nu<\\nu_c \\\\\n {\\nu^{-1/2}}, & \\nu_c<\\nu<\\nu_m \\\\\n {\\nu^{-p/2}}, & \\nu_m<\\nu\n\\end{cases}"
},
{
"math_id": 6,
"text": "\\nu_m < \\nu_c"
},
{
"math_id": 7,
"text": "F_\\nu \\propto \\begin{cases} {\\nu^{2}}, & \\nu<\\nu_a \\\\\n {\\nu^{1/3}}, & \\nu_a<\\nu<\\nu_m \\\\\n {\\nu^{-(p-1)/2}}, & \\nu_m<\\nu<\\nu_c \\\\\n {\\nu^{-p/2}}, & \\nu_c<\\nu\n\\end{cases}"
},
{
"math_id": 8,
"text": "\\nu_c \\propto t^{1/2}"
},
{
"math_id": 9,
"text": "\\nu_m \\propto t^{-3/2}"
},
{
"math_id": 10,
"text": "F_{\\nu,max} = const"
},
{
"math_id": 11,
"text": "F_{\\nu,max}"
}
]
| https://en.wikipedia.org/wiki?curid=13632049 |
13633477 | Direct integration of a beam | Direct integration is a structural analysis method for measuring internal shear, internal moment, rotation, and deflection of a beam.
For a beam with an applied weight formula_0, taking downward to be positive, the internal shear force is given by taking the negative integral of the weight:
formula_1
The internal moment formula_2 is the integral of the internal shear:
formula_3 = formula_4
The angle of rotation from the horizontal, formula_5, is the integral of the internal moment divided by the product of the Young's modulus and the area moment of inertia:
formula_6
Integrating the angle of rotation obtains the vertical displacement formula_7:
formula_8
Integrating.
Each time an integration is carried out, a constant of integration needs to be obtained. These constants are determined by using either the forces at supports, or at free ends.
For internal shear and moment, the constants can be found by analyzing the beam's free body diagram.
For rotation and displacement, the constants are found using conditions dependent on the type of supports. For a cantilever beam, the fixed support has zero rotation and zero displacement. For a beam supported by a pin and roller, both the supports have zero displacement.
Sample calculations.
Take the 15 m beam shown at right, supported by a fixed pin at the left and a roller at the right. There are no applied moments, the distributed load is a constant 10 kN/m, and - due to symmetry - each support applies a 75 kN vertical force to the beam. Taking x as the distance from the pin,
formula_9
Integrating,
formula_10
where formula_11 represents the applied loads. For these calculations, the only load having an effect on the beam is the 75 kN load applied by the pin, applied at x=0, giving
formula_12
Integrating the internal shear,
formula_13 where, because there is no applied moment, formula_14.
Assuming an EI value of 1 kN·m² (for simplicity; real EI values for structural members such as steel are normally greater by powers of ten),
formula_16 and
formula_17
Because of the vertical supports at each end of the beam, the displacement (formula_18) at x = 0 and x = 15m is zero. Substituting (x = 0, ν(0) = 0) and (x = 15m, ν(15m) = 0), we can solve for constants formula_19=-1406.25 and formula_20=0, yielding
formula_21 and
formula_22
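These results can be verified symbolically, for example with SymPy; the following sketch (an illustration added here, not part of the original article) simply repeats the integrations and boundary conditions of the sample calculation, with the load, span and pin reaction taken from the text above:

```python
# Verify the sample calculation with SymPy: integrate the load to get shear,
# moment, slope and deflection, then apply the zero-deflection boundary conditions.
import sympy as sp

x = sp.symbols('x')
w = sp.Integer(10)                      # distributed load, kN/m (downward positive)
EI = sp.Integer(1)                      # flexural rigidity, kN*m^2 (illustrative value)
L = 15                                  # span, m

V = -sp.integrate(w, x) + 75            # internal shear: -∫w dx plus the 75 kN pin reaction
M = sp.integrate(V, x)                  # internal moment (no applied moment, so C2 = 0)

C3, C4 = sp.symbols('C3 C4')
theta = sp.integrate(M / EI, x) + C3    # slope (rotation)
nu = sp.integrate(theta, x) + C4        # vertical deflection

# Boundary conditions: zero deflection at both supports
sol = sp.solve([nu.subs(x, 0), nu.subs(x, L)], [C3, C4])
nu = nu.subs(sol)

print(sol)                              # {C3: -5625/4 (= -1406.25), C4: 0}
print(nu.subs(x, sp.Rational(15, 2)))   # midspan deflection, about -6592 m for EI = 1 kN*m^2
```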
For the given EI value, the maximum displacement, at x=7.5m, is approximately 440 times the length of the beam. For a more realistic situation, such as a uniform load of 1 kN/m and an EI value of 5,000 kN·m², the displacement would be approximately 13 cm. | [
{
"math_id": 0,
"text": "w(x) "
},
{
"math_id": 1,
"text": "V(x) = -\\int w(x)\\, dx"
},
{
"math_id": 2,
"text": "M(x)"
},
{
"math_id": 3,
"text": "M(x) = \\int V(x)\\, dx"
},
{
"math_id": 4,
"text": " -\\int \\left[\\int w(x)\\, dx \\right] dx "
},
{
"math_id": 5,
"text": " \\theta"
},
{
"math_id": 6,
"text": "\\theta (x) = \\frac{1}{EI} \\int M(x)\\, dx "
},
{
"math_id": 7,
"text": " \\nu "
},
{
"math_id": 8,
"text": "\\nu (x) = \\int \\theta (x)\\, dx "
},
{
"math_id": 9,
"text": " w(x)= 10~\\textrm{kN}/\\textrm{m}"
},
{
"math_id": 10,
"text": " V(x)= -\\int w(x)\\, dx=-10x+C_1 (\\textrm{kN})"
},
{
"math_id": 11,
"text": "C_1"
},
{
"math_id": 12,
"text": " V(x)=-10x+75 (\\textrm{kN}) "
},
{
"math_id": 13,
"text": " M(x)= \\int V(x)\\, dx=-5x^2 + 75x (\\textrm{kN} \\cdot \\textrm{m}) "
},
{
"math_id": 14,
"text": "C_2 =0"
},
{
"math_id": 15,
"text": "\\cdot"
},
{
"math_id": 16,
"text": " \\theta (x)= \\int \\frac{M(x)}{EI}\\, dx= -\\frac{5}{3} x^3 + \\frac{75}{2} x^2 + C_3(\\textrm{m}/\\textrm{m})"
},
{
"math_id": 17,
"text": " \\nu (x) = \\int \\theta (x)\\, dx = -\\frac{5}{12} x^4 + \\frac{75}{6} x^3 + C_3 x + C_4 (\\textrm{m})"
},
{
"math_id": 18,
"text": "\\nu"
},
{
"math_id": 19,
"text": "C_3"
},
{
"math_id": 20,
"text": "C_4"
},
{
"math_id": 21,
"text": " \\theta (x)= \\int \\frac{M(x)}{EI}\\, dx= -\\frac{5}{3} x^3 + \\frac{75}{2} x^2 -1406.25(\\textrm{m}/\\textrm{m})"
},
{
"math_id": 22,
"text": " \\nu (x) = \\int \\theta (x)\\, dx = -\\frac{5}{12} x^4 + \\frac{75}{6} x^3 -1406.25x (\\textrm{m})"
},
{
"math_id": 23,
"text": "\\theta"
}
]
| https://en.wikipedia.org/wiki?curid=13633477 |
1363559 | IBEX 35 | Spanish stock market index
The IBEX 35 (IBerian IndEX) is the benchmark stock market index of the Bolsa de Madrid, Spain's principal stock exchange. Initiated in 1992, the index is administered and calculated by Sociedad de Bolsas, a subsidiary of Bolsas y Mercados Españoles (BME), the company which runs Spain's securities markets (including the Bolsa de Madrid). It is a market capitalization weighted index comprising the 35 most liquid Spanish stocks traded in the Madrid Stock Exchange General Index and is reviewed twice annually. Trading on options and futures contracts on the IBEX 35 is provided by MEFF (Mercado Español de Futuros Financieros), another subsidiary of BME.
History.
The IBEX 35 was inaugurated on January 14, 1992, although there are calculated values for the index back to December 29, 1989, where the base value of 3,000 points lies.
Between 2000 and 2007, the index outperformed many of its Western peers, driven by relatively strong domestic economic growth which particularly helped construction and real estate stocks. Consequently, while the record highs to date of the FTSE 100, CAC 40 and AEX, for example, were set during the dot-com bubble in 1999 and 2000, the IBEX 35's all-time maximum of 15,945.70 was reached on November 8, 2007.
The financial crisis of 2007–2008 included extreme volatility in the markets, and saw both the biggest one day percentage fall and rise in the IBEX 35's history. The index closed 7.5% down on January 21, 2008, the second biggest fall in the Spanish equity market since 1987, and rose a record 6.95% three days later.
Rules.
Selection.
The composition of the IBEX 35 is reviewed twice per year (in June and December) by the so-called Technical Advisory Committee, which consists of "representatives of the stock exchanges and derivatives markets, as well as... renowned experts from the academic and financial fields". If any changes are made, they come into effect on the first trading day after the third Friday of the rebalance month. In general, at each review, the 35 companies with the highest trading volume in Euros over the previous six months are chosen for inclusion in the index, provided that the average free float market capitalization of the stock is at least 0.3% of the total market cap of the index. Any candidate stock must also have either been traded on at least a third of all trading days in the previous six months, or rank in the top twenty overall in market cap (thus allowing large recently IPOed companies to be included).
Weighting.
The IBEX 35 is a capitalization-weighted index. The market cap used to calculate the weighting of each constituent is multiplied by a free float factor (ranging from 0.1 to 1) depending on the fraction of shares not subject to block ownership. Any company with 50% or more of its shares considered free float is given a free float factor of 1. Unlike many other European benchmark indices, the weightings of companies in the IBEX 35 are not capped.
As of 2015, international funds based abroad (chiefly in Norway, the United States, the United Kingdom and Qatar) owned 43% of the index, vs. 16% in 1992. Such rate of foreign investment was about 5% above the EU average.
Calculation.
The index value (given here as "I") of the IBEX 35 index is calculated using the following formula:
formula_0
with "t" the moment of calculation; "Cap" the free float market cap of a specific listing and "J" a coefficient used to adjust the index on the back of capital increases or other corporate actions so as to ensure continuity. The formula can be adjusted to accommodate changes in index structure, such as the temporary suspension of companies pending news.
Specification.
IBEX Mini futures contracts are traded on the MEFF Renta Variable (MEFF-RV) exchange under the ticker symbol BIBX. The full contract specifications for IBEX Mini futures are listed below.
Components.
As of 2023, the following 35 companies make up the index:
Record values.
The index reached the following record values:
Annual returns.
The following table shows the annual development of the IBEX 35 since 1992.
Past components.
All changes are due to market capitalisation unless stated otherwise.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " I(t) = I(t-1)\\times\\frac{\\sum_{i=1}^{35} {\\rm Cap}_{i}(t)\\,}{[\\,\\sum_{i=1}^{35} {\\rm Cap}_{i}(t-1)\\,\\pm J\\,]\\,} "
}
]
| https://en.wikipedia.org/wiki?curid=1363559 |
13636040 | Lemaître coordinates | Lemaître coordinates are a particular set of coordinates for the Schwarzschild metric—a spherically symmetric solution to the Einstein field equations in vacuum—introduced by Georges Lemaître in 1932. Changing from Schwarzschild to Lemaître coordinates removes the coordinate singularity at the Schwarzschild radius.
Metric.
The original Schwarzschild coordinate expression of the Schwarzschild metric, in natural units ("c" = "G" = 1), is given as
formula_0
where
formula_1 is the invariant interval;
formula_2 is the Schwarzschild radius;
formula_3 is the mass of the central body;
formula_4 are the Schwarzschild coordinates (which asymptotically turn into the flat spherical coordinates);
formula_5 is the speed of light;
and formula_6 is the gravitational constant.
This metric has a coordinate singularity at the Schwarzschild radius formula_7.
Georges Lemaître was the first to show that this is not a real physical singularity but simply a manifestation of the fact that the static Schwarzschild coordinates cannot be realized with material bodies inside the Schwarzschild radius. Indeed, inside the Schwarzschild radius everything falls towards the centre and it is impossible for a physical body to keep a constant radius.
A transformation of the Schwarzschild coordinate system from formula_8 to the new coordinates formula_9
formula_10
(the numerator and denominator are switched inside the square-roots), leads to the Lemaître coordinate expression of the metric,
formula_11
where
formula_12
The metric in Lemaître coordinates is non-singular at the Schwarzschild radius formula_7. This corresponds to the point formula_13. There remains a genuine gravitational singularity at the center, where formula_14, which cannot be removed by a coordinate change.
The time coordinate used in the Lemaître coordinates is identical to the "raindrop" time coordinate used in the Gullstrand–Painlevé coordinates. The other three coordinates, the radial and angular coordinates formula_15, are identical in the Gullstrand–Painlevé and Schwarzschild charts. That is, Gullstrand–Painlevé applies one coordinate transform to go from the Schwarzschild time formula_16 to the raindrop coordinate formula_17. Then Lemaître applies a second coordinate transform to the radial component, so as to get rid of the off-diagonal entry in the Gullstrand–Painlevé chart.
The notation formula_18 used in this article for the time coordinate should not be confused with the proper time. It is true that formula_18 gives the proper time for radially infalling observers; it does not give the proper time for observers traveling along other geodesics.
Geodesics.
The trajectories with "ρ" constant are timelike geodesics with "τ" the proper time along these geodesics. They represent the motion of freely falling particles which start out with zero velocity at infinity. At any point their speed is just equal to the escape velocity from that point.
The Lemaître coordinate system is synchronous, that is, the global time coordinate of the metric defines the proper time of co-moving observers. The radially falling bodies reach the Schwarzschild radius and the centre within finite proper time.
Radial null geodesics correspond to formula_19, which have solutions formula_20. Here, formula_21 is just a short-hand for
formula_22
The two signs correspond to outward-moving and inward-moving light rays, respectively. Re-expressing this in terms of the coordinate formula_23 gives
formula_24
Note that formula_25 when formula_26. This is interpreted as saying that no signal can escape from inside the Schwarzschild radius: light rays emitted radially inwards or outwards alike end up at the origin as the proper time formula_18 increases.
The Lemaître coordinate chart is not geodesically complete. This can be seen by tracing outward-moving radial null geodesics backwards in time. The outward-moving geodesics correspond to the plus sign in the above. Selecting a starting point formula_27 at formula_28, the above equation integrates to formula_29 as formula_30. Going backwards in proper time, one has formula_31 as formula_32. Starting at formula_26 and integrating forward, one arrives at formula_33 in finite proper time. Going backwards, one has, once again that formula_31 as formula_32. Thus, one concludes that, although the metric is non-singular at formula_7, all outward-traveling geodesics extend to formula_7 as formula_32.
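A simple numerical check of this behaviour (an illustration added here, not part of the original derivation; the choice of units with the Schwarzschild radius set to 1 and of a basic Euler integrator are arbitrary) is to integrate the radial null-ray equation formula_24 directly:

```python
# Numerical sketch of the radial null geodesic equation dr/dtau = +/-1 - sqrt(r_s/r),
# in units where r_s = 1; a crude Euler stepper is enough to see the qualitative behaviour.
import math

def integrate_null_ray(r0, outward=True, dtau=1e-4, tau_max=10.0):
    """Step the ray forward in tau until it reaches the centre or tau_max is exceeded."""
    r, tau, sign = r0, 0.0, (1.0 if outward else -1.0)
    while r > 1e-6 and tau < tau_max:
        r += (sign - math.sqrt(1.0 / r)) * dtau
        tau += dtau
    return r, tau

# Outside the horizon an outward ray escapes; inside, both rays reach r ~ 0 in finite tau.
print(integrate_null_ray(2.0, outward=True))    # r keeps growing until tau_max
print(integrate_null_ray(0.5, outward=True))    # reaches r ~ 0 in finite proper time
print(integrate_null_ray(0.5, outward=False))   # reaches r ~ 0 even faster
```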
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "ds^2=\\left(1-{r_s\\over r}\\right)dt^2-{dr^2\\over 1-{r_s\\over r}} - r^2\\left(d\\theta^2+\\sin^2\\theta d\\phi^2\\right) \\;,"
},
{
"math_id": 1,
"text": "ds^2"
},
{
"math_id": 2,
"text": "r_s=\\frac{2GM}{c^2}"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "t, r, \\theta, \\phi"
},
{
"math_id": 5,
"text": "c"
},
{
"math_id": 6,
"text": "G"
},
{
"math_id": 7,
"text": "r=r_s"
},
{
"math_id": 8,
"text": "\\{t,r\\}"
},
{
"math_id": 9,
"text": "\\{\\tau,\\rho\\},"
},
{
"math_id": 10,
"text": "\n\\begin{align}\nd\\tau = dt + \\sqrt{\\frac{r_{s}}{r}}\\,\\left(1-\\frac{r_{s}}{r}\\right)^{-1}dr~\\\\\nd\\rho = dt + \\sqrt{\\frac{r}{r_{s}}}\\,\\left(1-\\frac{r_{s}}{r}\\right)^{-1}dr~\n\\end{align}\n"
},
{
"math_id": 11,
"text": "\nds^{2} = d\\tau^{2} - \\frac{r_{s}}{r} d\\rho^{2}\n- r^{2}(d\\theta^{2} +\\sin^{2}\\theta\nd\\phi^{2})\n"
},
{
"math_id": 12,
"text": "\nr=\\left[\\frac{3}{2}(\\rho-\\tau)\\right]^{2/3}r_{s}^{1/3} \\;.\n"
},
{
"math_id": 13,
"text": "\\frac{3}{2}(\\rho-\\tau)=r_s"
},
{
"math_id": 14,
"text": "\\rho-\\tau=0"
},
{
"math_id": 15,
"text": "r,\\theta,\\phi"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": "t_r=\\tau"
},
{
"math_id": 18,
"text": "\\tau"
},
{
"math_id": 19,
"text": "ds^2=0"
},
{
"math_id": 20,
"text": "d\\tau=\\pm \\beta d\\rho"
},
{
"math_id": 21,
"text": "\\beta"
},
{
"math_id": 22,
"text": "\\beta \\equiv \\beta(r)=\\sqrt{r_s\\over r}"
},
{
"math_id": 23,
"text": "r"
},
{
"math_id": 24,
"text": "\ndr=\\left(\\pm 1 - \\sqrt{r_s\\over r}\\right)d\\tau \n"
},
{
"math_id": 25,
"text": "dr<0"
},
{
"math_id": 26,
"text": "r<r_s"
},
{
"math_id": 27,
"text": "r>r_s"
},
{
"math_id": 28,
"text": "\\tau=0"
},
{
"math_id": 29,
"text": "r\\to +\\infty"
},
{
"math_id": 30,
"text": "\\tau\\to +\\infty"
},
{
"math_id": 31,
"text": "r\\to r_s"
},
{
"math_id": 32,
"text": "\\tau\\to -\\infty"
},
{
"math_id": 33,
"text": "r=0"
}
]
| https://en.wikipedia.org/wiki?curid=13636040 |
13636654 | Beltrami's theorem | Geodesic maps preserve the property of having constant curvature
In the mathematical field of differential geometry, any (pseudo-)Riemannian metric determines a certain class of paths known as geodesics. Beltrami's theorem, named for Italian mathematician Eugenio Beltrami, is a result on the inverse problem of determining a (pseudo-)Riemannian metric from its geodesics.
It is nontrivial to see that, on any Riemannian manifold of constant curvature, there are smooth coordinates relative to which all nonconstant geodesics appear as straight lines. In the "negative curvature" case of hyperbolic geometry, this is justified by the Beltrami–Klein model. In the "positive curvature" case of spherical geometry, it is justified by the gnomonic projection. In the language of projective differential geometry, these charts show that any Riemannian manifold of constant curvature is "locally projectively flat." More generally, any pseudo-Riemannian manifold of constant curvature is locally projectively flat.
Beltrami's theorem asserts the converse: any connected pseudo-Riemannian manifold which is locally projectively flat must have constant curvature. With the use of tensor calculus, the proof is straightforward. Hermann Weyl described Beltrami's original proof (done in the two-dimensional Riemannian case) as being much more complicated. Relative to a projectively flat chart, there are functions "ρ""i" such that the Christoffel symbols take the form
formula_0
Direct calculation then shows that the Riemann curvature tensor is given by
formula_1
The curvature symmetry "R""ijkl" + "R""jikl"
0 implies that ∂"i" "ρ""j"
∂"j" "ρ""i". The other curvature symmetry "R""ijkl"
"R""klij", traced over i and l, then says that
formula_2
where n is the dimension of the manifold. It is direct to verify that the left-hand side is a (locally defined) Codazzi tensor, using only the given form of the Christoffel symbols. It follows from Schur's lemma that "g""il"(∂"i" "ρ""l" − "ρ""i" "ρ""l") is constant. Substituting the above identity into the Riemann tensor as given above, it follows that the chart domain has constant sectional curvature −"g""il"(∂"i" "ρ""l" − "ρ""i" "ρ""l"). By connectedness of the manifold, this local constancy implies global constancy.
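For concreteness, the substitution can be written out as follows (a routine verification added here, not part of the original argument; write λ for the quantity "g""il"(∂"i" "ρ""l" − "ρ""i" "ρ""l")/"n"):

```latex
% With \partial_i\rho_j = \partial_j\rho_i the first term of the curvature formula vanishes,
% and the traced identity reads \partial_j\rho_k - \rho_j\rho_k = \lambda\, g_{jk}, so that
R_{ijkl} = g_{jl}\,\lambda\, g_{ik} - g_{il}\,\lambda\, g_{jk}
         = \lambda\left(g_{ik}g_{jl} - g_{il}g_{jk}\right),
```

which is the characteristic algebraic form of the curvature tensor of a space of constant sectional curvature.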
Beltrami's theorem may be phrased in the language of geodesic maps: if given a geodesic map between pseudo-Riemannian manifolds, one manifold has constant curvature if and only if the other does.
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\Gamma_{ij}^k=\\rho_i\\delta_j^k+\\rho_j\\delta_i^k."
},
{
"math_id": 1,
"text": "R_{ijkl}=(\\partial_i\\rho_j-\\partial_j\\rho_i)g_{kl}+g_{jl}(\\partial_i\\rho_k-\\rho_i\\rho_k)-g_{il}(\\partial_j\\rho_k-\\rho_j\\rho_k)."
},
{
"math_id": 2,
"text": "\\partial_j\\rho_k-\\rho_j\\rho_k=g_{jk}\\frac{g^{il}(\\partial_i\\rho_l-\\rho_i\\rho_l)}{n}"
}
]
| https://en.wikipedia.org/wiki?curid=13636654 |
13637 | Hausdorff space | Type of topological space
In topology and related branches of mathematics, a Hausdorff space ( , ), T2 space or separated space, is a topological space where distinct points have disjoint neighbourhoods. Of the many separation axioms that can be imposed on a topological space, the "Hausdorff condition" (T2) is the most frequently used and discussed. It implies the uniqueness of limits of sequences, nets, and filters.
Hausdorff spaces are named after Felix Hausdorff, one of the founders of topology. Hausdorff's original definition of a topological space (in 1914) included the Hausdorff condition as an axiom.
Definitions.
Points formula_0 and formula_1 in a topological space formula_2 can be "separated by neighbourhoods" if there exists a neighbourhood formula_3 of formula_0 and a neighbourhood formula_4 of formula_1 such that formula_3 and formula_4 are disjoint formula_5. formula_2 is a Hausdorff space if any two distinct points in formula_2 are separated by neighbourhoods. This condition is the third separation axiom (after T0 and T1), which is why Hausdorff spaces are also called T2 spaces. The name "separated space" is also used.
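For finite spaces the separation condition can be checked directly by brute force; the following small Python sketch (an illustration added here, with arbitrary helper names) tests whether a finite topology is Hausdorff, and shows that the Sierpiński space is not Hausdorff while the discrete topology is.

```python
# Brute-force check of the Hausdorff condition for a finite topological space,
# given as a collection of open sets (frozensets) on a finite point set.
from itertools import combinations

def is_hausdorff(points, opens):
    """Return True if every pair of distinct points has disjoint open neighbourhoods."""
    for x, y in combinations(points, 2):
        if not any(x in U and y in V and not (U & V)
                   for U in opens for V in opens):
            return False
    return True

# Sierpinski space: opens are {}, {a}, {a, b}; the points a and b cannot be separated.
sierpinski = [frozenset(), frozenset({'a'}), frozenset({'a', 'b'})]
print(is_hausdorff({'a', 'b'}, sierpinski))        # False

# The discrete topology on {a, b} is Hausdorff.
discrete = [frozenset(), frozenset({'a'}), frozenset({'b'}), frozenset({'a', 'b'})]
print(is_hausdorff({'a', 'b'}, discrete))          # True
```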
A related, but weaker, notion is that of a preregular space. formula_2 is a preregular space if any two topologically distinguishable points can be separated by disjoint neighbourhoods. A preregular space is also called an R1 space.
The relationship between these two conditions is as follows. A topological space is Hausdorff if and only if it is both preregular (i.e. topologically distinguishable points are separated by neighbourhoods) and Kolmogorov (i.e. distinct points are topologically distinguishable). A topological space is preregular if and only if its Kolmogorov quotient is Hausdorff.
Equivalences.
For a topological space "formula_2", the following are equivalent:
Examples of Hausdorff and non-Hausdorff spaces.
Almost all spaces encountered in analysis are Hausdorff; most importantly, the real numbers (under the standard metric topology on real numbers) are a Hausdorff space. More generally, all metric spaces are Hausdorff. In fact, many spaces of use in analysis, such as topological groups and topological manifolds, have the Hausdorff condition explicitly stated in their definitions.
A simple example of a topology that is T1 but is not Hausdorff is the cofinite topology defined on an infinite set, as is the cocountable topology defined on an uncountable set.
Pseudometric spaces typically are not Hausdorff, but they are preregular, and their use in analysis is usually only in the construction of Hausdorff gauge spaces. Indeed, when analysts run across a non-Hausdorff space, it is still probably at least preregular, and then they simply replace it with its Kolmogorov quotient, which is Hausdorff.
In contrast, non-preregular spaces are encountered much more frequently in abstract algebra and algebraic geometry, in particular as the Zariski topology on an algebraic variety or the spectrum of a ring. They also arise in the model theory of intuitionistic logic: every complete Heyting algebra is the algebra of open sets of some topological space, but this space need not be preregular, much less Hausdorff, and in fact usually is neither. The related concept of Scott domain also consists of non-preregular spaces.
While the existence of unique limits for convergent nets and filters implies that a space is Hausdorff, there are non-Hausdorff T1 spaces in which every convergent sequence has a unique limit. Such spaces are called "US spaces". For sequential spaces, this notion is equivalent to being weakly Hausdorff.
Properties.
Subspaces and products of Hausdorff spaces are Hausdorff, but quotient spaces of Hausdorff spaces need not be Hausdorff. In fact, "every" topological space can be realized as the quotient of some Hausdorff space.
Hausdorff spaces are T1, meaning that each singleton is a closed set. Similarly, preregular spaces are R0. Every Hausdorff space is a sober space, although the converse is in general not true.
Another property of Hausdorff spaces is that each compact set is a closed set. For non-Hausdorff spaces, it can be that each compact set is a closed set (for example, the cocountable topology on an uncountable set) or not (for example, the cofinite topology on an infinite set and the Sierpiński space).
The definition of a Hausdorff space says that points can be separated by neighborhoods. It turns out that this implies something which is seemingly stronger: in a Hausdorff space every pair of disjoint compact sets can also be separated by neighborhoods, in other words there is a neighborhood of one set and a neighborhood of the other, such that the two neighborhoods are disjoint. This is an example of the general rule that compact sets often behave like points.
Compactness conditions together with preregularity often imply stronger separation axioms. For example, any locally compact preregular space is completely regular. Compact preregular spaces are normal, meaning that they satisfy Urysohn's lemma and the Tietze extension theorem and have partitions of unity subordinate to locally finite open covers. The Hausdorff versions of these statements are: every locally compact Hausdorff space is Tychonoff, and every compact Hausdorff space is normal Hausdorff.
The following results are some technical properties regarding maps (continuous and otherwise) to and from Hausdorff spaces.
Let "formula_9" be a continuous function and suppose formula_10 is Hausdorff. Then the graph of "formula_11", formula_12, is a closed subset of "formula_13".
Let "formula_9" be a function and let formula_14 be its kernel regarded as a subspace of "formula_8".
If "formula_16" are continuous maps and "formula_10" is Hausdorff then the equalizer formula_17 is a closed set in "formula_2". It follows that if "formula_10" is Hausdorff and "formula_11" and "formula_18" agree on a dense subset of "formula_2" then "formula_19". In other words, continuous functions into Hausdorff spaces are determined by their values on dense subsets.
Let "formula_9" be a closed surjection such that "formula_20" is compact for all "formula_21". Then if "formula_2" is Hausdorff so is "formula_10".
Let "formula_9" be a quotient map with "formula_2" a compact Hausdorff space. Then the following are equivalent:
Preregularity versus regularity.
All regular spaces are preregular, as are all Hausdorff spaces. There are many results for topological spaces that hold for both regular and Hausdorff spaces.
Most of the time, these results hold for all preregular spaces; they were listed for regular and Hausdorff spaces separately because the idea of preregular spaces came later.
On the other hand, those results that are truly about regularity generally do not also apply to nonregular Hausdorff spaces.
There are many situations where another condition of topological spaces (such as paracompactness or local compactness) will imply regularity if preregularity is satisfied. Such conditions often come in two versions: a regular version and a Hausdorff version. Although Hausdorff spaces are not, in general, regular, a Hausdorff space that is also (say) locally compact will be regular, because any Hausdorff space is preregular. Thus from a certain point of view, it is really preregularity, rather than regularity, that matters in these situations. However, definitions are usually still phrased in terms of regularity, since this condition is better known than preregularity.
See History of the separation axioms for more on this issue.
Variants.
The terms "Hausdorff", "separated", and "preregular" can also be applied to such variants on topological spaces as uniform spaces, Cauchy spaces, and convergence spaces. The characteristic that unites the concept in all of these examples is that limits of nets and filters (when they exist) are unique (for separated spaces) or unique up to topological indistinguishability (for preregular spaces).
As it turns out, uniform spaces, and more generally Cauchy spaces, are always preregular, so the Hausdorff condition in these cases reduces to the T0 condition. These are also the spaces in which completeness makes sense, and Hausdorffness is a natural companion to completeness in these cases. Specifically, a space is complete if and only if every Cauchy net has at "least" one limit, while a space is Hausdorff if and only if every Cauchy net has at "most" one limit (since only Cauchy nets can have limits in the first place).
Algebra of functions.
The algebra of continuous (real or complex) functions on a compact Hausdorff space is a commutative C*-algebra, and conversely by the Banach–Stone theorem one can recover the topology of the space from the algebraic properties of its algebra of continuous functions. This leads to noncommutative geometry, where one considers noncommutative C*-algebras as representing algebras of functions on a noncommutative space.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "X"
},
{
"math_id": 3,
"text": "U"
},
{
"math_id": 4,
"text": "V"
},
{
"math_id": 5,
"text": "(U\\cap V=\\varnothing)"
},
{
"math_id": 6,
"text": "\\{ x \\} \\subset X"
},
{
"math_id": 7,
"text": "\\Delta = \\{ (x, x) \\mid x \\in X \\}"
},
{
"math_id": 8,
"text": "X \\times X"
},
{
"math_id": 9,
"text": "f\\colon X \\to Y"
},
{
"math_id": 10,
"text": "Y"
},
{
"math_id": 11,
"text": "f"
},
{
"math_id": 12,
"text": "\\{(x,f(x)) \\mid x\\in X\\}"
},
{
"math_id": 13,
"text": "X \\times Y"
},
{
"math_id": 14,
"text": "\\ker(f) \\triangleq \\{(x,x') \\mid f(x) = f(x')\\}"
},
{
"math_id": 15,
"text": "\\ker(f)"
},
{
"math_id": 16,
"text": "f, g \\colon X \\to Y"
},
{
"math_id": 17,
"text": "\\mbox{eq}(f,g) = \\{x \\mid f(x) = g(x)\\}"
},
{
"math_id": 18,
"text": "g"
},
{
"math_id": 19,
"text": "f = g"
},
{
"math_id": 20,
"text": "f^{-1} (y)"
},
{
"math_id": 21,
"text": "y \\in Y"
}
]
| https://en.wikipedia.org/wiki?curid=13637 |
1363880 | Random forest | Tree-based ensemble machine learning method
<templatestyles src="Machine learning/styles.css"/>
Random forests or random decision forests is an ensemble learning method for classification, regression and other tasks that operates by constructing a multitude of decision trees at training time. For classification tasks, the output of the random forest is the class selected by most trees. For regression tasks, the mean or average prediction of the individual trees is returned. Random decision forests correct for decision trees' habit of overfitting to their training set.
The first algorithm for random decision forests was created in 1995 by Tin Kam Ho using the random subspace method, which, in Ho's formulation, is a way to implement the "stochastic discrimination" approach to classification proposed by Eugene Kleinberg.
An extension of the algorithm was developed by Leo Breiman and Adele Cutler, who registered "Random Forests" as a trademark in 2006 (as of 2019, owned by Minitab, Inc.). The extension combines Breiman's "bagging" idea and random selection of features, introduced first by Ho and later independently by Amit and Geman in order to construct a collection of decision trees with controlled variance.
History.
The general method of random decision forests was first proposed by Salzberg and Heath in 1993, with a method that used a randomized decision tree algorithm to generate multiple different trees and then combine them using majority voting. This idea was developed further by Ho in 1995. Ho established that forests of trees splitting with oblique hyperplanes can gain accuracy as they grow without suffering from overtraining, as long as the forests are randomly restricted to be sensitive to only selected feature dimensions. A subsequent work along the same lines concluded that other splitting methods behave similarly, as long as they are randomly forced to be insensitive to some feature dimensions. Note that this observation of a more complex classifier (a larger forest) getting more accurate nearly monotonically is in sharp contrast to the common belief that the complexity of a classifier can only grow to a certain level of accuracy before being hurt by overfitting. The explanation of the forest method's resistance to overtraining can be found in Kleinberg's theory of stochastic discrimination.
The early development of Breiman's notion of random forests was influenced by the work of Amit and Geman who introduced the idea of searching over a random subset of the available decisions when splitting a node, in the context of growing a single tree. The idea of random subspace selection from Ho was also influential in the design of random forests. In this method a forest of trees is grown, and variation among the trees is introduced by projecting the training data into a randomly chosen subspace before fitting each tree or each node. Finally, the idea of randomized node optimization, where the decision at each node is selected by a randomized procedure, rather than a deterministic optimization was first introduced by Thomas G. Dietterich.
The proper introduction of random forests was made in a paper by Leo Breiman. This paper describes a method of building a forest of uncorrelated trees using a CART like procedure, combined with randomized node optimization and bagging. In addition, this paper combines several ingredients, some previously known and some novel, which form the basis of the modern practice of random forests, in particular:
The report also offers the first theoretical result for random forests in the form of a bound on the generalization error which depends on the strength of the trees in the forest and their correlation.
Algorithm.
Preliminaries: decision tree learning.
Decision trees are a popular method for various machine learning tasks. Tree learning "come[s] closest to meeting the requirements for serving as an off-the-shelf procedure for data mining", say Hastie "et al.", "because it is invariant under scaling and various other transformations of feature values, is robust to inclusion of irrelevant features, and produces inspectable models. However, they are seldom accurate".
In particular, trees that are grown very deep tend to learn highly irregular patterns: they overfit their training sets, i.e. have low bias, but very high variance. Random forests are a way of averaging multiple deep decision trees, trained on different parts of the same training set, with the goal of reducing the variance. This comes at the expense of a small increase in the bias and some loss of interpretability, but generally greatly boosts the performance in the final model.
Bagging.
The training algorithm for random forests applies the general technique of bootstrap aggregating, or bagging, to tree learners. Given a training set X = x1, ..., xn with responses Y = y1, ..., yn, bagging repeatedly ("B" times) selects a random sample with replacement of the training set and fits trees to these samples:
<templatestyles src="Block indent/styles.css"/>For b = 1, ..., B:
After training, predictions for unseen samples x' can be made by averaging the predictions from all the individual regression trees on x':
formula_0
or by taking the plurality vote in the case of classification trees.
This bootstrapping procedure leads to better model performance because it decreases the variance of the model, without increasing the bias. This means that while the predictions of a single tree are highly sensitive to noise in its training set, the average of many trees is not, as long as the trees are not correlated. Simply training many trees on a single training set would give strongly correlated trees (or even the same tree many times, if the training algorithm is deterministic); bootstrap sampling is a way of de-correlating the trees by showing them different training sets.
Additionally, an estimate of the uncertainty of the prediction can be made as the standard deviation of the predictions from all the individual regression trees on x′:
formula_1
The number of samples/trees, B, is a free parameter. Typically, a few hundred to several thousand trees are used, depending on the size and nature of the training set. An optimal number of trees B can be found using cross-validation, or by observing the "out-of-bag error": the mean prediction error on each training sample xi, using only the trees that did not have xi in their bootstrap sample.
The training and test error tend to level off after some number of trees have been fit.
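The bagging procedure above can be sketched in a few lines of Python; this is a simplified toy implementation (the helper names and the use of scikit-learn's DecisionTreeRegressor as the base learner are choices made here, not part of the original algorithm description):

```python
# Toy sketch of bagging regression trees: sample with replacement, fit one tree per
# bootstrap sample, then average the trees' predictions (and report their standard
# deviation as a rough uncertainty estimate).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def fit_bagged_trees(X, y, n_trees=100, rng=None):
    rng = rng or np.random.default_rng(0)
    trees = []
    for _ in range(n_trees):
        idx = rng.integers(0, len(X), size=len(X))        # bootstrap sample indices
        trees.append(DecisionTreeRegressor().fit(X[idx], y[idx]))
    return trees

def predict_bagged(trees, X_new):
    preds = np.stack([t.predict(X_new) for t in trees])    # shape (B, n_points)
    return preds.mean(axis=0), preds.std(axis=0, ddof=1)

# Example usage on synthetic data
X = np.linspace(0, 10, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.3 * np.random.default_rng(1).normal(size=200)
trees = fit_bagged_trees(X, y)
mean, sd = predict_bagged(trees, X[:5])
```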
From bagging to random forests.
The above procedure describes the original bagging algorithm for trees. Random forests also include another type of bagging scheme: they use a modified tree learning algorithm that selects, at each candidate split in the learning process, a random subset of the features. This process is sometimes called "feature bagging". The reason for doing this is the correlation of the trees in an ordinary bootstrap sample: if one or a few features are very strong predictors for the response variable (target output), these features will be selected in many of the B trees, causing them to become correlated. An analysis of how bagging and random subspace projection contribute to accuracy gains under different conditions is given by Ho.
Typically, for a classification problem with p features, √"p" (rounded down) features are used in each split. For regression problems the inventors recommend "p"/3 (rounded down) with a minimum node size of 5 as the default. In practice, the best values for these parameters should be tuned on a case-to-case basis for every problem.
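In scikit-learn, for example, this per-split feature subsampling is controlled by the max_features parameter; the snippet below (illustrative only, on a synthetic dataset) requests the √"p" rule explicitly:

```python
# Feature bagging in scikit-learn: max_features="sqrt" makes each split consider
# only roughly sqrt(p) randomly chosen candidate features.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=20, random_state=0)
clf = RandomForestClassifier(n_estimators=200, max_features="sqrt", random_state=0)
clf.fit(X, y)
print(clf.score(X, y))
```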
ExtraTrees.
Adding one further step of randomization yields "extremely randomized trees", or ExtraTrees. While similar to ordinary random forests in that they are an ensemble of individual trees, there are two main differences: first, each tree is trained using the whole learning sample (rather than a bootstrap sample), and second, the top-down splitting in the tree learner is randomized. Instead of computing the locally "optimal" cut-point for each feature under consideration (based on, e.g., information gain or the Gini impurity), a "random" cut-point is selected. This value is selected from a uniform distribution within the feature's empirical range (in the tree's training set). Then, of all the randomly generated splits, the split that yields the highest score is chosen to split the node. Similar to ordinary random forests, the number of randomly selected features to be considered at each node can be specified. Default values for this parameter are formula_2 for classification and formula_3 for regression, where formula_3 is the number of features in the model.
Random forests for high-dimensional data.
The basic random forest procedure may not work well in situations where there are a large number of features but only a small proportion of these features are informative with respect to sample classification. This can be addressed by encouraging the procedure to focus mainly on features and trees that are informative, for example by prefiltering out features that are mostly noise or by giving greater weight to the more accurate trees.
Properties.
Variable importance.
Random forests can be used to rank the importance of variables in a regression or classification problem in a natural way. The following technique was described in Breiman's original paper and is implemented in the R package "randomForest".
Permutation Importance.
The first step in measuring the variable importance in a data set formula_4 is to fit a random forest to the data. During the fitting process the out-of-bag error for each data point is recorded and averaged over the forest (errors on an independent test set can be substituted if bagging is not used during training).
To measure the importance of the formula_5-th feature after training, the values of the formula_5-th feature are permuted in the out-of-bag samples and the out-of-bag error is again computed on this perturbed data set. The importance score for the formula_5-th feature is computed by averaging the difference in out-of-bag error before and after the permutation over all trees. The score is normalized by the standard deviation of these differences.
Features which produce large values for this score are ranked as more important than features which produce small values. The statistical definition of the variable importance measure was given and analyzed by Zhu "et al."
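A sketch of this procedure using scikit-learn's permutation_importance helper (which, note, permutes features on a supplied evaluation set rather than on the out-of-bag samples described above; the dataset below is synthetic):

```python
# Permutation importance: permute one feature at a time and measure how much
# the model's score degrades; larger degradation means a more important feature.
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance
from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=500, n_features=8, n_informative=3, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
forest = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
result = permutation_importance(forest, X_te, y_te, n_repeats=10, random_state=0)
print(result.importances_mean)   # larger values indicate more important features
```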
This method of determining variable importance has some drawbacks.
Mean Decrease in Impurity Feature Importance.
This feature importance for random forests is the default implementation in scikit-learn and R. It is described in the book "Classification and Regression Trees" by Leo Breiman.
Variables whose splits produce large decreases in impurity are considered important:
formula_6
where formula_7 indicates a feature, formula_8 is the number of trees in the forest, formula_9 indicates tree formula_10, formula_11 is the fraction of samples reaching node formula_5, and formula_12 is the change in impurity in tree formula_13 at node formula_5. Typical impurity measures for the samples falling in a node are the Gini impurity and the Shannon entropy for classification, and the mean squared error for regression.
The normalized importance is then obtained by normalizing over all features, so that the sum of normalized feature importances is 1.
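In scikit-learn the resulting normalized values are exposed as the fitted model's feature_importances_ attribute; a minimal illustration:

```python
# Mean-decrease-in-impurity importances as exposed by scikit-learn; the values
# are already normalized so that they sum to 1.
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import load_iris

X, y = load_iris(return_X_y=True)
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
print(forest.feature_importances_)        # one value per feature
print(forest.feature_importances_.sum())  # ~1.0
```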
The scikit-learn default implementation of mean decrease in impurity feature importance is susceptible to misleading feature importances: the importances are biased towards features with many distinct values (high cardinality), and they are computed from training-set statistics and therefore do not necessarily reflect a feature's usefulness for predictions on unseen data.
Relationship to nearest neighbors.
A relationship between random forests and the k-nearest neighbor algorithm (k-NN) was pointed out by Lin and Jeon in 2002. It turns out that both can be viewed as so-called "weighted neighborhoods schemes". These are models built from a training set formula_14 that make predictions formula_15 for new points x' by looking at the "neighborhood" of the point, formalized by a weight function W:
formula_16
Here, formula_17 is the non-negative weight of the i'th training point relative to the new point x' in the same tree. For any particular x', the weights for points formula_18 must sum to one. Weight functions are given as follows: in k-NN, the weight is formula_19 if formula_18 is one of the k points closest to x', and zero otherwise; in a tree, the weight is formula_20 if formula_18 is one of the k' points in the same leaf as x', and zero otherwise.
Since a forest averages the predictions of a set of m trees with individual weight functions formula_21, its predictions are
formula_22
This shows that the whole forest is again a weighted neighborhood scheme, with weights that average those of the individual trees. The neighbors of x' in this interpretation are the points formula_18 sharing the same leaf in any tree formula_5. In this way, the neighborhood of x' depends in a complex way on the structure of the trees, and thus on the structure of the training set. Lin and Jeon show that the shape of the neighborhood used by a random forest adapts to the local importance of each feature.
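The forest weights formula_17 can be recovered from a fitted scikit-learn forest by checking leaf co-membership with the apply() method; the following sketch (illustrative only, with arbitrary synthetic data and helper names) makes the weighted-neighborhood interpretation concrete:

```python
# Recover the "weighted neighbourhood" weights of a fitted forest: in each tree the
# training points sharing the query's leaf receive weight 1/k', and the per-tree
# weights are then averaged over the forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def forest_weights(forest, X_train, x_query):
    train_leaves = forest.apply(X_train)           # shape (n_train, n_trees)
    query_leaves = forest.apply(x_query.reshape(1, -1))[0]
    weights = np.zeros(len(X_train))
    for t in range(train_leaves.shape[1]):
        same_leaf = train_leaves[:, t] == query_leaves[t]
        weights[same_leaf] += 1.0 / same_leaf.sum()
    return weights / train_leaves.shape[1]          # weights sum to 1

X = np.random.default_rng(0).normal(size=(200, 3))
y = X[:, 0] + 0.1 * np.random.default_rng(1).normal(size=200)
rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
w = forest_weights(rf, X, X[0])
print(w.sum(), np.dot(w, y))   # ~1.0, and approximately rf.predict(X[:1])[0]
```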
Unsupervised learning with random forests.
As part of their construction, random forest predictors naturally lead to a dissimilarity measure among the observations. One can also define a random forest dissimilarity measure between unlabeled data: the idea is to construct a random forest predictor that distinguishes the "observed" data from suitably generated synthetic data.
The observed data are the original unlabeled data and the synthetic data are drawn from a reference distribution. A random forest dissimilarity can be attractive because it handles mixed variable types very well, is invariant to monotonic transformations of the input variables, and is robust to outlying observations. The random forest dissimilarity easily deals with a large number of semi-continuous variables due to its intrinsic variable selection; for example, the "Addcl 1" random forest dissimilarity weighs the contribution of each variable according to how dependent it is on other variables. The random forest dissimilarity has been used in a variety of applications, e.g. to find clusters of patients based on tissue marker data.
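A minimal sketch of this construction (one common choice, assumed here, is to generate the synthetic data by independently permuting each column of the observed data; the helper names are arbitrary):

```python
# Unsupervised random forest dissimilarity: train a forest to separate observed rows
# from column-permuted synthetic rows, then define proximity between observed rows
# as the fraction of trees in which they share a leaf, and dissimilarity = 1 - proximity.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_dissimilarity(X, n_trees=200, seed=0):
    rng = np.random.default_rng(seed)
    X_synth = np.column_stack([rng.permutation(col) for col in X.T])
    X_all = np.vstack([X, X_synth])
    y_all = np.r_[np.ones(len(X)), np.zeros(len(X_synth))]
    forest = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X_all, y_all)
    leaves = forest.apply(X)                        # (n_obs, n_trees) leaf indices
    proximity = np.zeros((len(X), len(X)))
    for t in range(leaves.shape[1]):
        proximity += leaves[:, t][:, None] == leaves[:, t][None, :]
    return 1.0 - proximity / leaves.shape[1]

X = np.random.default_rng(1).normal(size=(100, 5))
D = rf_dissimilarity(X)
print(D.shape, D.diagonal().max())   # (100, 100), 0.0 on the diagonal
```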
Variants.
Instead of decision trees, linear models have been proposed and evaluated as base estimators in random forests, in particular multinomial logistic regression and naive Bayes classifiers. In cases that the relationship between the predictors and the target variable is linear, the base learners may have an equally high accuracy as the ensemble learner.
Kernel random forest.
In machine learning, kernel random forests (KeRF) establish the connection between random forests and kernel methods. By slightly modifying their definition, random forests can be rewritten as kernel methods, which are more interpretable and easier to analyze.
History.
Leo Breiman was the first person to notice the link between random forest and kernel methods. He pointed out that random forests which are grown using i.i.d. random vectors in the tree construction are equivalent to a kernel acting on the true margin. Lin and Jeon established the connection between random forests and adaptive nearest neighbor, implying that random forests can be seen as adaptive kernel estimates. Davies and Ghahramani proposed a Random Forest Kernel and showed that it can empirically outperform state-of-the-art kernel methods. Scornet first defined KeRF estimates and gave the explicit link between KeRF estimates and random forest. He also gave explicit expressions for kernels based on centered random forest and uniform random forest, two simplified models of random forest. He named these two KeRFs Centered KeRF and Uniform KeRF, and proved upper bounds on their rates of consistency.
Notations and definitions.
Preliminaries: Centered forests.
Centered forest is a simplified model for Breiman's original random forest, which uniformly selects an attribute among all attributes and performs splits at the center of the cell along the pre-chosen attribute. The algorithm stops when a fully binary tree of level formula_23 is built, where formula_24 is a parameter of the algorithm.
Uniform forest.
Uniform forest is another simplified model for Breiman's original random forest, which uniformly selects a feature among all features and performs splits at a point uniformly drawn on the side of the cell, along the preselected feature.
From random forest to KeRF.
Given a training sample formula_25 of formula_26-valued independent random variables distributed as the independent prototype pair formula_27, where formula_28. We aim at predicting the response formula_29, associated with the random variable formula_30, by estimating the regression function formula_31. A random regression forest is an ensemble of formula_32 randomized regression trees. Denote formula_33 the predicted value at point formula_34 by the formula_5-th tree, where formula_35 are independent random variables, distributed as a generic random variable formula_36, independent of the sample formula_37. This random variable can be used to describe the randomness induced by node splitting and the sampling procedure for tree construction. The trees are combined to form the finite forest estimate formula_38.
For regression trees, we have formula_39, where formula_40 is the cell containing formula_34, designed with randomness formula_41 and dataset formula_37, and formula_42.
Thus random forest estimates satisfy, for all formula_43, formula_44. Random regression forest has two levels of averaging, first over the samples in the target cell of a tree, then over all trees. Thus the contributions of observations that are in cells with a high density of data points are smaller than those of observations which belong to less populated cells. In order to improve the random forest methods and compensate for the misestimation, Scornet defined KeRF by
formula_45
which is equal to the mean of the formula_46's falling in the cells containing formula_34 in the forest. If we define the connection function of the formula_32 finite forest as formula_47, i.e. the proportion of cells shared between formula_34 and formula_48, then almost surely we have formula_49, which defines the KeRF.
Centered KeRF.
The construction of Centered KeRF of level formula_23 is the same as for the centered forest, except that predictions are made by formula_50; the corresponding kernel function, or connection function, is
formula_51
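For small "k" and dimension "d" the connection function above can be evaluated directly by enumerating the compositions of "k"; a brute-force Python sketch (illustrative only, not an efficient implementation):

```python
# Direct evaluation of the centered-forest connection function: sum over all ways of
# splitting the k levels among the d coordinates, weighting each by the multinomial
# coefficient and checking whether x and z fall in the same dyadic cell.
from itertools import product
from math import factorial, ceil

def centered_kernel(x, z, k):
    d = len(x)
    total = 0.0
    for parts in product(range(k + 1), repeat=d):
        if sum(parts) != k:
            continue
        coeff = factorial(k)
        for kj in parts:
            coeff //= factorial(kj)
        agree = all(ceil(2 ** kj * xj) == ceil(2 ** kj * zj)
                    for kj, xj, zj in zip(parts, x, z))
        total += coeff * (1.0 / d) ** k * agree
    return total

print(centered_kernel([0.3, 0.7], [0.3, 0.7], 3))   # 1.0: identical points always connect
print(centered_kernel([0.1, 0.9], [0.9, 0.1], 3))   # 0.0: these points never share a cell
```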
Uniform KeRF.
Uniform KeRF is built in the same way as the uniform forest, except that predictions are made by formula_50; the corresponding kernel function, or connection function, is
formula_52
Properties.
Relation between KeRF and random forest.
Predictions given by KeRF and random forests are close if the number of points in each cell is controlled:
Assume that there exist sequences formula_53 such that, almost surely,
formula_54
Then almost surely,
formula_55
Relation between infinite KeRF and infinite random forest.
When the number of trees formula_32 goes to infinity, then we have infinite random forest and infinite KeRF. Their estimates are close if the number of observations in each cell is bounded:
Assume that there exist sequences formula_56 such that, almost surely,
formula_57
formula_58
formula_59
Then almost surely,
formula_60
Consistency results.
Assume that formula_61, where formula_62 is a centered Gaussian noise, independent of formula_30, with finite variance formula_63. Moreover, formula_30 is uniformly distributed on formula_64 and formula_65 is Lipschitz. Scornet proved upper bounds on the rates of consistency for centered KeRF and uniform KeRF.
Consistency of centered KeRF.
Provided formula_66 and formula_67, there exists a constant formula_68 such that, for all formula_69,
formula_70.
Consistency of uniform KeRF.
Provided formula_66 and formula_67, there exists a constant formula_71 such that,
formula_72.
Disadvantages.
While random forests often achieve higher accuracy than a single decision tree, they sacrifice the intrinsic interpretability present in decision trees. Decision trees are among a fairly small family of machine learning models that are easily interpretable along with linear models, rule-based models, and attention-based models. This interpretability is one of the most desirable qualities of decision trees. It allows developers to confirm that the model has learned realistic information from the data and allows end-users to have trust and confidence in the decisions made by the model. For example, following the path that a decision tree takes to make its decision is quite trivial, but following the paths of tens or hundreds of trees is much harder. To achieve both performance and interpretability, some model compression techniques allow transforming a random forest into a minimal "born-again" decision tree that faithfully reproduces the same decision function. If it is established that the predictive attributes are linearly correlated with the target variable, using random forest may not enhance the accuracy of the base learner. Furthermore, in problems with multiple categorical variables, random forest may not be able to increase the accuracy of the base learner.
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{f} = \\frac{1}{B} \\sum_{b=1}^Bf_b (x')"
},
{
"math_id": 1,
"text": "\\sigma = \\sqrt{\\frac{\\sum_{b=1}^B (f_b(x') - \\hat{f})^2}{B-1} }."
},
{
"math_id": 2,
"text": "\\sqrt{p}"
},
{
"math_id": 3,
"text": "p"
},
{
"math_id": 4,
"text": "\\mathcal{D}_n =\\{(X_i, Y_i)\\}_{i=1}^n"
},
{
"math_id": 5,
"text": "j"
},
{
"math_id": 6,
"text": "\\text{unormalized average importance}(x)=\\frac{1}{n_T} \\sum_{i=1}^{n_T} \\sum_{\\text{node }j \\in T_i | \\text{split variable}(j) = x} p_{T_i}(j)\\Delta i_{T_i}(j),"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "n_T"
},
{
"math_id": 9,
"text": "T_i"
},
{
"math_id": 10,
"text": "i"
},
{
"math_id": 11,
"text": "p_{T_i}(j)=\\frac{n_j}{n}"
},
{
"math_id": 12,
"text": "\\Delta i_{T_i}(j)"
},
{
"math_id": 13,
"text": "t"
},
{
"math_id": 14,
"text": "\\{(x_i, y_i)\\}_{i=1}^n"
},
{
"math_id": 15,
"text": "\\hat{y}"
},
{
"math_id": 16,
"text": "\\hat{y} = \\sum_{i=1}^n W(x_i, x') \\, y_i."
},
{
"math_id": 17,
"text": "W(x_i, x')"
},
{
"math_id": 18,
"text": "x_i"
},
{
"math_id": 19,
"text": "W(x_i, x') = \\frac{1}{k}"
},
{
"math_id": 20,
"text": "W(x_i, x') = \\frac{1}{k'}"
},
{
"math_id": 21,
"text": "W_j"
},
{
"math_id": 22,
"text": "\\hat{y} = \\frac{1}{m}\\sum_{j=1}^m\\sum_{i=1}^n W_{j}(x_i, x') \\, y_i = \\sum_{i=1}^n\\left(\\frac{1}{m}\\sum_{j=1}^m W_{j}(x_i, x')\\right) \\, y_i."
},
{
"math_id": 23,
"text": "k"
},
{
"math_id": 24,
"text": "k \\in\\mathbb{N} "
},
{
"math_id": 25,
"text": "\\mathcal{D}_n =\\{(\\mathbf{X}_i, Y_i)\\}_{i=1}^n"
},
{
"math_id": 26,
"text": "[0,1]^p\\times\\mathbb{R}"
},
{
"math_id": 27,
"text": "(\\mathbf{X}, Y)"
},
{
"math_id": 28,
"text": "\\operatorname{E}[Y^2]<\\infty"
},
{
"math_id": 29,
"text": "Y"
},
{
"math_id": 30,
"text": "\\mathbf{X}"
},
{
"math_id": 31,
"text": "m(\\mathbf{x})=\\operatorname{E}[Y \\mid \\mathbf{X} = \\mathbf{x}]"
},
{
"math_id": 32,
"text": "M"
},
{
"math_id": 33,
"text": "m_n(\\mathbf{x},\\mathbf{\\Theta}_j)"
},
{
"math_id": 34,
"text": "\\mathbf{x}"
},
{
"math_id": 35,
"text": "\\mathbf{\\Theta}_1,\\ldots,\\mathbf{\\Theta}_M "
},
{
"math_id": 36,
"text": "\\mathbf{\\Theta}"
},
{
"math_id": 37,
"text": "\\mathcal{D}_n"
},
{
"math_id": 38,
"text": "m_{M, n}(\\mathbf{x},\\Theta_1,\\ldots,\\Theta_M) = \\frac{1}{M}\\sum_{j=1}^M m_n(\\mathbf{x},\\Theta_j)"
},
{
"math_id": 39,
"text": "m_n = \\sum_{i=1}^n\\frac{Y_i\\mathbf{1}_{\\mathbf{X}_i\\in A_n(\\mathbf{x},\\Theta_j)}}{N_n(\\mathbf{x}, \\Theta_j)}"
},
{
"math_id": 40,
"text": "A_n(\\mathbf{x},\\Theta_j)"
},
{
"math_id": 41,
"text": "\\Theta_j"
},
{
"math_id": 42,
"text": " N_n(\\mathbf{x}, \\Theta_j) = \\sum_{i=1}^n \\mathbf{1}_{\\mathbf{X}_i\\in A_n(\\mathbf{x}, \\Theta_j)}"
},
{
"math_id": 43,
"text": "\\mathbf{x}\\in[0,1]^d"
},
{
"math_id": 44,
"text": " m_{M,n}(\\mathbf{x}, \\Theta_1,\\ldots,\\Theta_M) =\\frac{1}{M}\\sum_{j=1}^M \\left(\\sum_{i=1}^n\\frac{Y_i\\mathbf{1}_{\\mathbf{X}_i\\in A_n(\\mathbf{x},\\Theta_j)}}{N_n(\\mathbf{x}, \\Theta_j)}\\right)"
},
{
"math_id": 45,
"text": " \\tilde{m}_{M,n}(\\mathbf{x}, \\Theta_1,\\ldots,\\Theta_M) = \\frac{1}{\\sum_{j=1}^M N_n(\\mathbf{x}, \\Theta_j)}\\sum_{j=1}^M\\sum_{i=1}^n Y_i\\mathbf{1}_{\\mathbf{X}_i\\in A_n(\\mathbf{x}, \\Theta_j)},"
},
{
"math_id": 46,
"text": "Y_i"
},
{
"math_id": 47,
"text": "K_{M,n}(\\mathbf{x}, \\mathbf{z}) = \\frac{1}{M} \\sum_{j=1}^M \\mathbf{1}_{\\mathbf{z} \\in A_n (\\mathbf{x}, \\Theta_j)}"
},
{
"math_id": 48,
"text": "\\mathbf{z}"
},
{
"math_id": 49,
"text": "\\tilde{m}_{M,n}(\\mathbf{x}, \\Theta_1,\\ldots,\\Theta_M) =\n\\frac{\\sum_{i=1}^n Y_i K_{M,n}(\\mathbf{x}, \\mathbf{x}_i)}{\\sum_{\\ell=1}^n K_{M,n}(\\mathbf{x}, \\mathbf{x}_{\\ell})}"
},
{
"math_id": 50,
"text": "\\tilde{m}_{M,n}(\\mathbf{x}, \\Theta_1,\\ldots,\\Theta_M) "
},
{
"math_id": 51,
"text": "\nK_k^{cc}(\\mathbf{x},\\mathbf{z}) = \\sum_{k_1,\\ldots,k_d, \\sum_{j=1}^d k_j=k}\n\\frac{k!}{k_1!\\cdots k_d!} \\left(\\frac 1 d \\right)^k\n\\prod_{j=1}^d\\mathbf{1}_{\\lceil2^{k_j}x_j\\rceil=\\lceil2^{k_j}z_j\\rceil},\n\\qquad\n\\text{ for all } \\mathbf{x},\\mathbf{z}\\in[0,1]^d.\n"
},
{
"math_id": 52,
"text": "K_k^{uf}(\\mathbf{0},\\mathbf{x}) =\n\\sum_{k_1,\\ldots,k_d, \\sum_{j=1}^d k_j=k}\n\\frac{k!}{k_1!\\ldots k_d!}\\left(\\frac{1}{d}\\right)^k\n\\prod_{m=1}^d\\left(1-|x_m|\\sum_{j=0}^{k_m-1}\\frac{\\left(-\\ln|x_m|\\right)^j}{j!}\\right) \\text{ for all } \\mathbf{x}\\in[0,1]^d."
},
{
"math_id": 53,
"text": " (a_n),(b_n) "
},
{
"math_id": 54,
"text": " a_n\\leq N_n(\\mathbf{x},\\Theta)\\leq b_n \\text{ and } a_n\\leq \\frac 1 M \\sum_{m=1}^M N_n {\\mathbf{x},\\Theta_m}\\leq b_n.\n"
},
{
"math_id": 55,
"text": "|m_{M,n}(\\mathbf{x}) - \\tilde{m}_{M,n}(\\mathbf{x})| \\le\\frac{b_n-a_n}{a_n} \\tilde{m}_{M,n}(\\mathbf{x}).\n"
},
{
"math_id": 56,
"text": "(\\varepsilon_n), (a_n),(b_n)"
},
{
"math_id": 57,
"text": "\\operatorname{E}[N_n(\\mathbf{x},\\Theta)] \\ge 1,"
},
{
"math_id": 58,
"text": "\\operatorname{P}[a_n\\le N_n(\\mathbf{x},\\Theta) \\le b_n\\mid \\mathcal{D}_n] \\ge 1-\\varepsilon_n/2,"
},
{
"math_id": 59,
"text": "\\operatorname{P}[a_n\\le \\operatorname{E}_\\Theta [N_n(\\mathbf{x},\\Theta)] \\le b_n\\mid \\mathcal{D}_n] \\ge 1-\\varepsilon_n/2,"
},
{
"math_id": 60,
"text": " |m_{\\infty,n}(\\mathbf{x})-\\tilde{m}_{\\infty,n}(\\mathbf{x})| \\le\n\\frac{b_n-a_n}{a_n}\\tilde{m}_{\\infty,n}(\\mathbf{x}) + n \\varepsilon_n \\left( \\max_{1\\le i\\le n} Y_i \\right)."
},
{
"math_id": 61,
"text": "Y = m(\\mathbf{X}) + \\varepsilon"
},
{
"math_id": 62,
"text": "\\varepsilon"
},
{
"math_id": 63,
"text": "\\sigma^2<\\infty"
},
{
"math_id": 64,
"text": "[0,1]^d"
},
{
"math_id": 65,
"text": "m"
},
{
"math_id": 66,
"text": "k\\rightarrow\\infty"
},
{
"math_id": 67,
"text": "n/2^k\\rightarrow\\infty"
},
{
"math_id": 68,
"text": "C_1>0"
},
{
"math_id": 69,
"text": "n"
},
{
"math_id": 70,
"text": " \\mathbb{E}[\\tilde{m}_n^{cc}(\\mathbf{X}) - m(\\mathbf{X})]^2 \\le C_1 n^{-1/(3+d\\log 2)}(\\log n)^2"
},
{
"math_id": 71,
"text": "C>0"
},
{
"math_id": 72,
"text": "\\mathbb{E}[\\tilde{m}_n^{uf}(\\mathbf{X})-m(\\mathbf{X})]^2\\le Cn^{-2/(6+3d\\log2)}(\\log n)^2"
}
]
| https://en.wikipedia.org/wiki?curid=1363880 |
1363985 | Brian Goodwin | Canadian mathematician and biologist (1931–2009)
Brian Carey Goodwin (25 March 1931 – 15 July 2009) (Sainte-Anne-de-Bellevue, Quebec, Canada - Dartington, Totnes, Devon, UK) was a Canadian mathematician and biologist, a Professor Emeritus at the Open University and a founder of theoretical biology and biomathematics. He introduced the use of complex systems and generative models in developmental biology. He suggested that a reductionist view of nature fails to explain complex features, controversially proposing the structuralist theory that morphogenetic fields might substitute for natural selection in driving evolution. He was also a visible member of the Third Culture movement.
Biography.
Brian Goodwin was born in Montreal, Quebec, Canada in 1931. He studied biology at McGill University and then emigrated to the UK on a Rhodes Scholarship to study mathematics at Oxford. He received his PhD from the University of Edinburgh, presenting the thesis "Studies in the general theory of development and evolution" under the supervision of Conrad Hal Waddington. He then worked at Sussex University until 1983, when he became a full professor at the Open University in Milton Keynes, where he remained until his retirement in 1992. He became a major figure in the early development of mathematical biology, along with other researchers. He was one of the attendees at the famous meetings that took place between 1965 and 1968 in Villa Serbelloni, hosted by the Rockefeller Foundation, under the topic "Towards a Theoretical Biology".
Thereafter, he taught at Schumacher College in Devon, UK, where he was instrumental in starting the college's MSc in Holistic Science. He was made a Founding Fellow of Schumacher College shortly before his death. Goodwin also held a research position at MIT and was a long-time visitor at several institutions, including UNAM in Mexico City. He was a founding member of the Santa Fe Institute in New Mexico, where he also served as a member of the science board for several years.
Brian Goodwin died in hospital in 2009, after surgery resulting from a fall from his bicycle. Goodwin is survived by his third wife, Christel, and his daughter, Lynn.
Gene networks and development.
Shortly after François Jacob and Jacques Monod developed their first model of gene regulation, Goodwin proposed the first model of a genetic oscillator, showing that regulatory interactions among genes allow periodic fluctuations to occur. Shortly after this model was published, he also formulated a general theory of complex gene regulatory networks using statistical mechanics.
In its simplest form, Goodwin's oscillator involves a single gene that represses itself. Goodwin's equations were originally formulated in terms of conservative (Hamiltonian) systems, thus not taking into account the dissipative effects that are required in a realistic approach to regulatory phenomena in biology. Many versions have been developed since then. The simplest (but realistic) formulation considers three variables, X, Y and Z, indicating the concentrations of RNA, protein, and the end product that closes the negative feedback loop. The equations are
formula_0
formula_1
formula_2
and closed oscillations can occur for n > 8, behaving as limit cycles: after a perturbation of the system's state, it returns to its previous attractor. A simple modification of this model, adding terms that introduce additional steps in the transcription machinery, allows oscillations to be found for smaller values of n. Goodwin's model and its extensions have been widely used over the years as the basic skeleton for other models of oscillatory behavior, including circadian clocks, cell division and physiological control systems.
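The behaviour described above can be reproduced numerically. The following Python sketch integrates the three-variable system with SciPy; the rate constants and the Hill coefficient n = 12 are illustrative choices (not Goodwin's original values), selected only so that the steady state is unstable and a limit cycle appears.

```python
# Minimal numerical sketch of the Goodwin oscillator (illustrative parameters).
import numpy as np
from scipy.integrate import solve_ivp

k1, K1, k2, k3, k4, k5, k6, n = 1.0, 1.0, 0.1, 1.0, 0.1, 1.0, 0.1, 12

def goodwin(t, u):
    X, Y, Z = u
    dX = k1 / (K1 + Z**n) - k2 * X   # transcription, repressed by the end product Z
    dY = k3 * X - k4 * Y             # translation
    dZ = k5 * Y - k6 * Z             # end-product synthesis
    return [dX, dY, dZ]

sol = solve_ivp(goodwin, (0.0, 600.0), [0.1, 0.2, 2.5], max_step=0.5)

# After the transient, X, Y and Z oscillate with a fixed amplitude and period:
# perturbing the state only shifts the phase; the trajectory returns to the cycle.
print(sol.y[:, -5:])
```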
Developmental biology.
In the field of developmental biology, Goodwin explored self-organization in pattern formation, using case studies from single-cell (as "Acetabularia") to multicellular organisms, including early development in "Drosophila". He proposed that morphogenetic fields, defined in terms of spatial distributions of chemical signals (morphogenes), could pattern and shape the embryo. In this way, geometry and development were linked through a mathematical formalism. Along with his colleague Lynn Trainor, Goodwin developed a set of mathematical equations describing the changes of both physical boundaries in the organism and chemical gradients.
By considering the mechanochemical behaviour of the cortical cytoplasm (or cytogel) of plant cells, a viscoelastic material mainly composed of actin microfilaments and reinforced by a microtubule network, Goodwin & Trainor (1985) showed how to couple calcium and the mechanical properties of the cytoplasm. The cytogel is treated as a continuous viscoelastic medium in which calcium ions can diffuse and interact with the cytoskeleton. The model consists of two non-linear partial differential equations which describe the evolution of the mechanical strain field and of the calcium distribution in the cytogel.
It has been shown (Trainor & Goodwin, 1986) that, in a range of parameter values, instabilities may occur and develop in this system, leading to intracellular patterns of strain and calcium concentration. The equations read, in their general form:
formula_3
formula_4
These equations describe the spatiotemporal dynamics of the displacement from the reference state and of the calcium concentration, where x and t are the space and time coordinates. They can be applied to many different scenarios, and the functions "P"1, "P"2 and "P"3, which depend on the calcium concentration, encode the specific mechanical properties of the medium. These equations can generate a rich variety of static and dynamic patterns, from complex geometrical motifs to oscillations and chaos (Briere 1994).
Structuralism.
He was also a strong advocate of the view that genes cannot fully explain the complexity of biological systems. In that sense, he became one of the strongest defenders of the systems view against reductionism. He suggested that nonlinear phenomena and the fundamental laws defining their behavior were essential to understand biology and its evolutionary paths. His position within evolutionary biology can be defined as a structuralist one. To Goodwin, many patterns in nature are a byproduct of constraints imposed by complexity. The limited repertoire of motifs observed in the spatial organization of plants and animals (at some scales) would be, in Goodwin's opinion, a fingerprint of the role played by such constraints. The role of natural selection would be secondary.
These opinions were highly controversial, and they brought Goodwin into conflict with many prominent Darwinian evolutionists, whereas some physicists found some of his views natural. Physicist Murray Gell-Mann for example acknowledged that "when biological evolution — based on largely random variation in genetic material and on natural selection — operates on the structure of actual organisms, it does so subject to the laws of physical science, which place crucial limitations on how living things can be constructed." Richard Dawkins, the former professor for public understanding of science at Oxford University and a well known Darwinian evolutionist, conceded: "I don't think there's much good evidence to support [his thesis], but it's important that somebody like Brian Goodwin is saying that kind of thing, because it provides the other extreme, and the truth probably lies somewhere between." Dawkins also agreed that "It's a genuinely interesting possibility that the underlying laws of morphology allow only a certain limited range of shapes.". For his part, Goodwin did not reject basic Darwinism, only its excesses.
Reception.
Biologist Gert Korthof has praised the research of Goodwin commenting he tried to "improve Darwinism in a scientific way." David B. Wake has also positively reviewed Goodwin's research describing him as a "thoughtful scientist, one of the great dissenters from the orthodoxies of modern evolutionary, genetic and developmental biology".
Goodwin had argued that natural selection was "too weak [a] force" to explain evolution and only operated as a filter mechanism. He claimed that modern evolutionary biology failed to provide an explanation for the theory of biological form and had ignored the importance of morphogenesis in evolution. He claimed to provide a new evolutionary theory to replace neo-Darwinism. In a critical review, biologist Catherine S. C. Price noted that although he had succeeded in providing an alternative to mutation as the only source of variation, he failed to provide an alternative to natural selection as a mechanism of adaptation. Price claimed Goodwin's "discussion of evolution is biased, insufficiently developed and poorly informed", and that he misrepresented Darwinism, used straw man arguments and ignored research from population genetics.
The evolutionary biologist Günter P. Wagner described Goodwin's structuralism as "a fringe movement in evolutionary biology".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\frac{dX}{dt}= {k_1 \\over K_1 + Z^n}-k_2X\n"
},
{
"math_id": 1,
"text": "\n\\frac{dY}{dt}= k_3 X - k_4 Y\n"
},
{
"math_id": 2,
"text": "\n\\frac{dZ}{dt}= k_5 Y - k_6 Z\n"
},
{
"math_id": 3,
"text": "\n\\rho {\\partial^2 \\xi \\over \\partial t^2} = {\\partial \\over \\partial x} \\left ( P_1(\\chi){\\partial \\xi \\over \\partial x} \\right ) + \n{\\partial \\over \\partial x} \\left ( P_2(\\chi){\\partial^2 \\xi \\over \\partial x \\partial t} \\right ) - P_3(\\chi) - F_0 {\\partial \\chi \\over \\partial x} \n"
},
{
"math_id": 4,
"text": "\n{\\partial \\chi \\over \\partial t} = \\left ( a_0 + a {\\partial \\xi \\over \\partial x} \\right ) (K-\\chi) - k_1(\\beta+\\chi)\\chi^n + \nD {\\partial^2 \\xi \\over \\partial x^2}\n"
}
]
| https://en.wikipedia.org/wiki?curid=1363985 |
13640867 | Gamma-ray burst progenitors | Types of celestial objects that can emit gamma-ray bursts
Gamma-ray burst progenitors are the types of celestial objects that can emit gamma-ray bursts (GRBs). GRBs show an extraordinary degree of diversity. They can last anywhere from a fraction of a second to many minutes. Bursts could have a single profile or oscillate wildly up and down in intensity, and their spectra are highly variable unlike other objects in space. The near complete lack of observational constraint led to a profusion of theories, including evaporating black holes, magnetic flares on white dwarfs, accretion of matter onto neutron stars, antimatter accretion, supernovae, hypernovae, and rapid extraction of rotational energy from supermassive black holes, among others.
There are at least two different types of progenitors (sources) of GRBs: one responsible for the long-duration, soft-spectrum bursts and one (or possibly more) responsible for short-duration, hard-spectrum bursts. The progenitors of long GRBs are believed to be massive, low-metallicity stars exploding due to the collapse of their cores. The progenitors of short GRBs are thought to be mergers of compact binary systems, such as pairs of neutron stars, as confirmed by the GW170817 observation of a neutron star merger and a kilonova.
Long GRBs: massive stars.
Collapsar model.
As of 2007, there is almost universal agreement in the astrophysics community that the long-duration bursts are associated with the deaths of massive stars in a specific kind of supernova-like event commonly referred to as a collapsar or hypernova. Very massive stars are able to fuse material in their centers all the way to iron, at which point a star cannot continue to generate energy by fusion and collapses, in this case, immediately forming a black hole. Matter from the star around the core rains down towards the center and (for rapidly rotating stars) swirls into a high-density accretion disk. The infall of this material into the black hole drives a pair of jets out along the rotational axis, where the matter density is much lower than in the accretion disk, towards the poles of the star at velocities approaching the speed of light, creating a relativistic shock wave at the front. If the star is not surrounded by a thick, diffuse hydrogen envelope, the jets' material can pummel all the way to the stellar surface. The leading shock actually accelerates as the density of the stellar matter it travels through decreases, and by the time it reaches the surface of the star it may be traveling with a Lorentz factor of 100 or higher (that is, a velocity of 0.9999 times the speed of light). Once it reaches the surface, the shock wave breaks out into space, with much of its energy released in the form of gamma-rays.
Three very special conditions are required for a star to evolve all the way to a gamma-ray burst under this theory: the star must be very massive (probably at least 40 Solar masses on the main sequence) to form a central black hole in the first place, the star must be rapidly rotating to develop an accretion torus capable of launching jets, and the star must have low metallicity in order to strip off its hydrogen envelope so the jets can reach the surface. As a result, gamma-ray bursts are far rarer than ordinary core-collapse supernovae, which "only" require that the star be massive enough to fuse all the way to iron.
Evidence for the collapsar view.
This consensus is based largely on two lines of evidence. First, long gamma-ray bursts are found without exception in systems with abundant recent star formation, such as in irregular galaxies and in the arms of spiral galaxies. This is strong evidence of a link to massive stars, which evolve and die within a few hundred million years and are never found in regions where star formation has long ceased. This does not necessarily prove the collapsar model (other models also predict an association with star formation) but does provide significant support.
Second, there are now several observed cases where a supernova has immediately followed a gamma-ray burst. While most GRBs occur too far away for current instruments to have any chance of detecting the relatively faint emission from a supernova at that distance, for lower-redshift systems there are several well-documented cases where a GRB was followed within a few days by the appearance of a supernova. These supernovae that have been successfully classified are type Ib/c, a rare class of supernova caused by core collapse. Type Ib and Ic supernovae lack hydrogen absorption lines, consistent with the theoretical prediction of stars that have lost their hydrogen envelope. The GRBs with the most obvious supernova signatures include GRB 060218 (SN 2006aj), GRB 030329 (SN 2003dh), and GRB 980425 (SN 1998bw), and a handful of more distant GRBs show supernova "bumps" in their afterglow light curves at late times.
Possible challenges to this theory emerged recently, with the discovery of two nearby long gamma-ray bursts that lacked the signature of any type of supernova: both GRB060614 and GRB 060505 defied predictions that a supernova would emerge despite intense scrutiny from ground-based telescopes. Both events were, however, associated with actively star-forming stellar populations. One possible explanation is that during the core collapse of a very massive star a black hole can form, which then 'swallows' the entire star before the supernova blast can reach the surface.
Short GRBs: degenerate binary systems.
Short gamma-ray bursts appear to be an exception. Until 2007, only a handful of these events have been localized to a definite galactic host. However, those that have been localized appear to show significant differences from the long-burst population. While at least one short burst has been found in the star-forming central region of a galaxy, several others have been associated with the outer regions and even the outer halo of large elliptical galaxies in which star formation has nearly ceased. All the hosts identified so far have also been at low redshift. Furthermore, despite the relatively nearby distances and detailed follow-up study for these events, no supernova has been associated with any short GRB.
Neutron star and neutron star/black hole mergers.
While the astrophysical community has yet to settle on a single, universally favored model for the progenitors of short GRBs, the generally preferred model is the merger of two compact objects as a result of gravitational inspiral: two neutron stars, or a neutron star and a black hole. While thought to be rare in the Universe, a small number of cases of close neutron star - neutron star binaries are known in our Galaxy, and neutron star - black hole binaries are believed to exist as well. According to Einstein's theory of general relativity, systems of this nature will slowly lose energy due to gravitational radiation and the two degenerate objects will spiral closer and closer together, until in the last few moments, tidal forces rip the neutron star (or stars) apart and an immense amount of energy is liberated before the matter plunges into a single black hole. The whole process is believed to occur extremely quickly and be completely over within a few seconds, accounting for the short nature of these bursts. Unlike long-duration bursts, there is no conventional star to explode and therefore no supernova.
This model has been well-supported so far by the distribution of short GRB host galaxies, which have been observed in old galaxies with no star formation (for example, GRB050509B, the first short burst to be localized to a probable host) as well as in galaxies with star formation still occurring (such as GRB050709, the second), as even younger-looking galaxies can have significant populations of old stars. However, the picture is clouded somewhat by the observation of X-ray flaring in short GRBs out to very late times (up to many days), long after the merger should have been completed, and the failure to find nearby hosts of any sort for some short GRBs.
Magnetar giant flares.
One final possible model that may describe a small subset of short GRBs are the so-called magnetar giant flares (also called megaflares or hyperflares). Early high-energy satellites discovered a small population of objects in the Galactic plane that frequently produced repeated bursts of soft gamma-rays and hard X-rays. Because these sources repeat and because the explosions have very soft (generally thermal) high-energy spectra, they were quickly realized to be a separate class of object from normal gamma-ray bursts and excluded from subsequent GRB studies. However, on rare occasions these objects, now believed to be extremely magnetized neutron stars and sometimes termed magnetars, are capable of producing extremely luminous outbursts. The most powerful such event observed to date, the giant flare of 27 December 2004, originated from the magnetar SGR 1806-20 and was bright enough to saturate the detectors of every gamma-ray satellite in orbit and significantly disrupted Earth's ionosphere. While still significantly less luminous than "normal" gamma-ray bursts (short or long), such an event would be detectable to current spacecraft from galaxies as far as the Virgo cluster and, at this distance, would be difficult to distinguish from other types of short gamma-ray burst on the basis of the light curve alone. To date, three gamma-ray bursts have been associated with SGR flares in galaxies beyond the Milky Way: GRB 790305b in the Large Magellanic Cloud, GRB 051103 from M81 and GRB 070201 from M31.
Diversity in the origin of long GRBs.
HETE II and Swift observations reveal that long gamma-ray bursts come with and without supernovae, and with and without pronounced X-ray afterglows. This points to a diversity in the origin of long GRBs, possibly inside and outside of star-forming regions, with an otherwise common inner engine. The timescale of tens of seconds in long GRBs thereby appears to be intrinsic to their inner engine, for example associated with a viscous or dissipative process.
The most powerful stellar mass transient sources are the above-mentioned progenitors (collapsars and mergers of compact objects), all producing rotating black holes surrounded by debris in the form of an accretion disk or torus. A rotating black hole carries spin energy in its angular momentum, as does a spinning top:
formula_0
where formula_1 and formula_2 denote the moment of inertia and the angular velocity of the black hole in the trigonometric expression formula_3 for the specific angular momentum formula_4 of a Kerr black hole of mass formula_5. With no small parameter present, it has been well-recognized that the spin energy of a Kerr black hole can reach a substantial fraction (29%) of its total mass-energy formula_5, thus holding promise to power the most remarkable transient sources in the sky.
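A quick numerical check of these expressions is sketched below, in geometrized units with G = c = 1 and the black-hole mass set to M = 1; the spin values are illustrative.

```python
# Spin energy of a Kerr black hole from E_spin = (1/2) I Omega_H^2,
# with I = 4 M^3 (cos(l/2)/cos(l/4))^2, Omega_H = tan(l/2)/(2M), sin(l) = a/M.
import numpy as np

def spin_energy_fraction(a_over_M, M=1.0):
    lam = np.arcsin(a_over_M)
    I = 4 * M**3 * (np.cos(lam / 2) / np.cos(lam / 4))**2
    Omega_H = np.tan(lam / 2) / (2 * M)
    return 0.5 * I * Omega_H**2 / M

for a in (0.5, 0.9, 1.0):
    print(f"a/M = {a}: E_spin/M = {spin_energy_fraction(a):.3f}")
# For an extreme Kerr hole (a = M) this gives about 0.29, i.e. the 29% quoted above.
```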
Of particular interest are mechanisms for producing "non-thermal" radiation by the gravitational field of rotating black holes, in the process of spin-down against their surroundings in aforementioned scenarios.
By Mach's principle, spacetime is dragged along with mass-energy, with the distant stars on cosmological scales or with a black hole in close proximity. Thus, matter tends to spin-up around rotating black holes, for the same reason that pulsars spin down by shedding angular momentum in radiation to infinity. A major amount of spin-energy of rapidly spinning black holes can thereby be released in a process of viscous spin-down against an inner disk or torus—into various emission channels.
Spin-down of rapidly spinning stellar mass black holes in their lowest energy state takes tens of seconds against an inner disk, representing the remnant debris of the merger of two neutron stars, the break-up of a neutron star around a companion black hole, or the core-collapse of a massive star. Forced turbulence in the inner disk stimulates the creation of magnetic fields and multipole mass-moments, thereby opening radiation channels in radio, neutrinos and, mostly, in gravitational waves with distinctive chirps, accompanied by the creation of astronomical amounts of Bekenstein-Hawking entropy.
Transparency of matter to gravitational waves offers a new probe to the inner-most workings of supernovae and GRBs. The gravitational-wave observatories LIGO and Virgo are designed to probe stellar mass transients in a frequency range of tens to about fifteen hundred Hz. The above-mentioned gravitational-wave emissions fall well within the LIGO-Virgo bandwidth of sensitivity; for long GRBs powered by "naked inner engines" produced in the binary merger of a neutron star with another neutron star or companion black hole, the above-mentioned magnetic disk winds dissipate into long-duration radio-bursts, that may be observed by the novel Low Frequency Array (LOFAR).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nE_{spin} = \\frac{1}{2} I \\Omega_H^2\n"
},
{
"math_id": 1,
"text": "I=4M^3(\\cos(\\lambda/2)/\\cos(\\lambda/4))^2"
},
{
"math_id": 2,
"text": "\\Omega_H=(1/2M)\\tan(\\lambda/2)"
},
{
"math_id": 3,
"text": "\\sin\\lambda=a/M"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "M"
}
]
| https://en.wikipedia.org/wiki?curid=13640867 |
13644054 | Integer relation algorithm | Mathematical procedure
An integer relation between a set of real numbers "x"1, "x"2, ..., "x""n" is a set of integers "a"1, "a"2, ..., "a""n", not all 0, such that
formula_0
An integer relation algorithm is an algorithm for finding integer relations. Specifically, given a set of real numbers known to a given precision, an integer relation algorithm will either find an integer relation between them, or will determine that no integer relation exists with coefficients whose magnitudes are less than a certain upper bound.
History.
For the case "n" = 2, an extension of the Euclidean algorithm can find any integer relation that exists between any two real numbers "x"1 and "x"2. The algorithm generates successive terms of the continued fraction expansion of "x"1/"x"2; if there is an integer relation between the numbers, then their ratio is rational and the algorithm eventually terminates.
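A minimal sketch of this two-number case is shown below; the helper function and the tolerance are illustrative, not part of any standard library.

```python
from fractions import Fraction

def integer_relation_2(x1, x2, max_terms=50, tol=1e-12):
    """Search for integers (a1, a2), not both 0, with a1*x1 + a2*x2 ~ 0,
    by expanding x1/x2 as a continued fraction and testing its convergents."""
    terms, frac = [], x1 / x2
    for _ in range(max_terms):
        terms.append(int(frac // 1))
        conv = Fraction(terms[-1])              # convergent p/q of [a0; a1, ...]
        for a in reversed(terms[:-1]):
            conv = a + 1 / conv
        p, q = conv.numerator, conv.denominator
        if abs(q * x1 - p * x2) < tol * abs(x2):
            return (q, -p)                      # q*x1 - p*x2 ~ 0
        rem = frac - terms[-1]
        if rem == 0:
            break                               # the ratio is exactly rational
        frac = 1 / rem
    return None                                 # no relation found at this precision

print(integer_relation_2(3.75, 1.25))           # (1, -3): 1*3.75 - 3*1.25 = 0
```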
Applications.
Integer relation algorithms have numerous applications. The first application is to determine whether a given real number "x" is likely to be algebraic, by searching for an integer relation between a set of powers of "x" {1, "x", "x"2, ..., "x""n"}. The second application is to search for an integer relation between a real number "x" and a set of mathematical constants such as "e", π and ln(2), which will lead to an expression for "x" as a linear combination of these constants.
A typical approach in experimental mathematics is to use numerical methods and arbitrary precision arithmetic to find an approximate value for an infinite series, infinite product or an integral to a high degree of precision (usually at least 100 significant figures), and then use an integer relation algorithm to search for an integer relation between this value and a set of mathematical constants. If an integer relation is found, this suggests a possible closed-form expression for the original series, product or integral. This conjecture can then be validated by formal algebraic methods. The higher the precision to which the inputs to the algorithm are known, the greater the level of confidence that any integer relation that is found is not just a numerical artifact.
A notable success of this approach was the use of the PSLQ algorithm to find the integer relation that led to the Bailey–Borwein–Plouffe formula for the value of π. PSLQ has also helped find new identities involving multiple zeta functions and their appearance in quantum field theory, and has been used in identifying bifurcation points of the logistic map. For example, where B4 is the logistic map's fourth bifurcation point, the constant α = −"B"4("B"4 − 2) is a root of a 120th-degree polynomial whose largest coefficient is 257^30. Integer relation algorithms are combined with tables of high precision mathematical constants and heuristic search methods in applications such as the Inverse Symbolic Calculator or Plouffe's Inverter.
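As an illustration of this workflow, the sketch below uses the PSLQ implementation in the mpmath Python library to recover, from a 50-digit numerical value, a relation with π and ln 2; the constructed value of "x" and the precision settings are illustrative.

```python
from mpmath import mp, mpf, pslq, pi, log

mp.dps = 50                      # work with 50 significant digits

# Pretend x is the numerical value of some series or integral;
# here it is constructed as 3*pi - 2*ln(2), so the answer is known in advance.
x = 3 * pi - 2 * log(2)

# Look for small integers (a, b, c) with a*x + b*pi + c*ln(2) = 0.
relation = pslq([x, pi, log(2)], tol=mpf(10) ** -40)
print(relation)                  # e.g. [1, -3, 2], i.e. x = 3*pi - 2*ln(2)
```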
Integer relation finding can be used to factor polynomials of high degree.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "a_1x_1 + a_2x_2 + \\cdots + a_nx_n = 0.\\,"
}
]
| https://en.wikipedia.org/wiki?curid=13644054 |
1364502 | Belt (mechanical) | Method of connecting two rotating shafts or pulleys
A belt is a loop of flexible material used to link two or more rotating shafts mechanically, most often parallel. Belts may be used as a source of motion, to transmit power efficiently or to track relative movement. Belts are looped over pulleys and may have a twist between the pulleys, and the shafts need not be parallel.
In a two pulley system, the belt can either drive the pulleys normally in one direction (the same if on parallel shafts), or the belt may be crossed, so that the direction of the driven shaft is reversed (the opposite direction to the driver if on parallel shafts). The belt drive can also be used to change the speed of rotation, either up or down, by using different sized pulleys.
As a source of motion, a conveyor belt is one application where the belt is adapted to carry a load continuously between two points.
History.
The mechanical belt drive, using a pulley machine, was first mentioned in the text of the "Dictionary of Local Expressions" by the Han Dynasty philosopher, poet, and politician Yang Xiong (53–18 BC) in 15 BC, used for a quilling machine that wound silk fibres onto bobbins for weavers' shuttles. The belt drive is an essential component of the invention of the spinning wheel. The belt drive was not only used in textile technologies, it was also applied to hydraulic-powered bellows dated from the 1st century AD.
Power transmission.
Belts are the cheapest utility for power transmission between shafts that may not be axially aligned. Power transmission is achieved by purposely designed belts and pulleys. The variety of power transmission needs that can be met by a belt-drive transmission system are numerous, and this has led to many variations on the theme. Belt drives run smoothly and with little noise, and provide shock absorption for motors, loads, and bearings when the force and power needed changes. A drawback to belt drives is that they transmit less power than gears or chain drives. However, improvements in belt engineering allow use of belts in systems that formerly only allowed chain drives or gears.
Power transmitted between a belt and a pulley is expressed as the product of difference of tension and belt velocity:
formula_0
where formula_1 and formula_2 are tensions in the tight side and slack side of the belt respectively. They are related as
formula_3
where formula_4 is the coefficient of friction, and formula_5 is the angle (in radians) subtended by contact surface at the centre of the pulley.
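A short numerical sketch of these two relations follows; the tension, friction coefficient, wrap angle and belt speed are illustrative figures, not design values for any particular belt.

```python
import math

def belt_power(T1, mu, alpha_deg, v):
    """Power transmitted for a tight-side tension T1 [N], friction coefficient mu,
    wrap angle alpha [degrees] and belt speed v [m/s]."""
    alpha = math.radians(alpha_deg)
    T2 = T1 / math.exp(mu * alpha)      # slack-side tension, from T1/T2 = e^(mu*alpha)
    return (T1 - T2) * v                # P = (T1 - T2) * v, in watts

# e.g. 1.2 kN tight-side tension, mu = 0.3, 170 degrees of wrap, 15 m/s belt speed
print(round(belt_power(1200.0, 0.3, 170.0, 15.0) / 1000, 1), "kW")   # about 10.6 kW
```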
Pros and cons.
Belt drives are simple, inexpensive, and do not require axially aligned shafts. They help protect machinery from overload and jam, and damp and isolate noise and vibration. Load fluctuations are shock-absorbed (cushioned). They need no lubrication and minimal maintenance. They have high efficiency (90–98%, usually 95%), high tolerance for misalignment, and are of relatively low cost if the shafts are far apart. Clutch action can be achieved by shifting the belt to a free turning pulley or by releasing belt tension. Different speeds can be obtained by stepped or tapered pulleys.
The angular-velocity ratio may not be exactly constant or equal to that of the pulley diameters, due to slip and stretch. However, this problem can be largely solved by the use of toothed belts. Working temperatures range from . Adjustment of centre distance or addition of an idler pulley is crucial to compensate for wear and stretch.
Flat belts.
Flat belts were widely used in the 19th and early 20th centuries in line shafting to transmit power in factories. They were also used in countless farming, mining, and logging applications, such as bucksaws, sawmills, threshers, silo blowers, conveyors for filling corn cribs or haylofts, balers, water pumps (for wells, mines, or swampy farm fields), and electrical generators. Flat belts are still used today, although not nearly as much as in the line-shaft era. The flat belt is a simple system of power transmission that was well suited for its day. It can deliver high power at high speeds (373 kW at 51 m/s; 115 mph), in cases of wide belts and large pulleys. Wide-belt-large-pulley drives are bulky, consuming much space while requiring high tension, leading to high loads, and are poorly suited to close-centers applications. V-belts have mainly replaced flat belts for short-distance power transmission; and longer-distance power transmission is typically no longer done with belts at all. For example, factory machines now tend to have individual electric motors.
Because flat belts tend to climb towards the higher side of the pulley, pulleys were made with a slightly convex or "crowned" surface (rather than flat) to allow the belt to self-center as it runs. Flat belts also tend to slip on the pulley face when heavy loads are applied, and many proprietary belt dressings were available that could be applied to the belts to increase friction, and so power transmission.
Flat belts were traditionally made of leather or fabric. Early flour mills in Ukraine had leather belt drives. After World War I, there was such a shortage of shoe leather that people cut up the belt drives to make shoes. Selling shoes was more profitable than selling flour for a time. Flour milling soon came to a standstill and bread prices rose, contributing to famine conditions. Leather drive belts were put to another use during the Rhodesian Bush War (1964–1979): To protect riders of cars and busses from land mines, layers of leather belt drives were placed on the floors of vehicles in danger zones. Today most belt drives are made of rubber or synthetic polymers. Grip of leather belts is often better if they are assembled with the hair side (outer side) of the leather against the pulley, although some belts are instead given a half-twist before joining the ends (forming a Möbius strip), so that wear can be evenly distributed on both sides of the belt. Belts ends are joined by lacing the ends together with leather thonging (the oldest of the methods), steel comb fasteners and/or lacing, or by gluing or welding (in the case of polyurethane or polyester). Flat belts were traditionally jointed, and still usually are, but they can also be made with endless construction.
Rope drives.
In the mid 19th century, British millwrights discovered that multi-grooved pulleys connected by ropes outperformed flat pulleys connected by leather belts. Wire ropes were occasionally used, but cotton, hemp, manila hemp and flax rope saw the widest use. Typically, the rope connecting two pulleys with multiple V-grooves was spliced into a single loop that traveled along a helical path before being returned to its starting position by an idler pulley that also served to maintain the tension on the rope. Sometimes, a single rope was used to transfer power from one multiple-groove drive pulley to several single- or multiple-groove driven pulleys in this way.
In general, as with flat belts, rope drives were used for connections from stationary engines to the jack shafts and line shafts of mills, and sometimes from line shafts to driven machinery. Unlike leather belts, however, rope drives were sometimes used to transmit power over relatively long distances. Over long distances, intermediate sheaves were used to support the "flying rope", and in the late 19th century, this was considered quite efficient.
Round belts.
Round belts are a circular cross section belt designed to run in a pulley with a 60 degree V-groove. Round grooves are only suitable for idler pulleys that guide the belt, or when (soft) O-ring type belts are used. The V-groove transmits torque through a wedging action, thus increasing friction. Nevertheless, round belts are for use in relatively low torque situations only and may be purchased in various lengths or cut to length and joined, either by a staple, a metallic connector (in the case of hollow plastic), gluing or welding (in the case of polyurethane). Early sewing machines utilized a leather belt, joined either by a metal staple or glued, to great effect.
Spring belts.
Spring belts are similar to rope or round belts but consist of a long steel helical spring. They are commonly found on toy or small model engines, typically steam engines driving other toys or models or providing a transmission between the crankshaft and other parts of a vehicle. The main advantage over rubber or other elastic belts is that they last much longer under poorly controlled operating conditions. The distance between the pulleys is also less critical. Their main disadvantage is that slippage is more likely due to the lower coefficient of friction. The ends of a spring belt can be joined either by bending the last turn of the helix at each end by 90 degrees to form hooks, or by reducing the diameter of the last few turns at one end so that it "screws" into the other end.
<templatestyles src="Template:Visible anchor/styles.css" />V belts.
V belts (also style V-belts, vee belts, or, less commonly, wedge rope) solved the slippage and alignment problem. It is now the basic belt for power transmission. They provide the best combination of traction, speed of movement, load of the bearings, and long service life. They are generally endless, and their general cross-section shape is roughly trapezoidal (hence the name "V"). The "V" shape of the belt tracks in a mating groove in the pulley (or sheave), with the result that the belt cannot slip off. The belt also tends to wedge into the groove as the load increases—the greater the load, the greater the wedging action—improving torque transmission and making the V-belt an effective solution, needing less width and tension than flat belts. V-belts trump flat belts with their small center distances and high reduction ratios. The preferred center distance is larger than the largest pulley diameter, but less than three times the sum of both pulleys. Optimal speed range is . V-belts need larger pulleys for their thicker cross-section than flat belts.
For high-power requirements, two or more V-belts can be joined side-by-side in an arrangement called a multi-V, running on matching multi-groove sheaves. This is known as a multiple-V-belt drive (or sometimes a "classical V-belt drive").
V-belts may be homogeneously rubber or polymer throughout, or there may be fibers embedded in the rubber or polymer for strength and reinforcement. The fibers may be of textile materials such as cotton, polyamide (such as nylon) or polyester or, for greatest strength, of steel or aramid (such as Technora, Twaron or Kevlar).
When an endless belt does not fit the need, jointed and link V-belts may be employed. Most models offer the same power and speed ratings as equivalently-sized endless belts and do not require special pulleys to operate. A link v-belt is a number of polyurethane/polyester composite links held together, either by themselves, such as Fenner Drives' PowerTwist, or Nu-T-Link (with metal studs). These provide easy installation and superior environmental resistance compared to rubber belts and are length-adjustable by disassembling and removing links when needed.
History of V-belts.
Trade journal coverage of V-belts in automobiles from 1916 mentioned leather as the belt material, and mentioned that the V angle was not yet well standardized. The endless rubber V-belt was developed in 1917 by Charles C. Gates of the Gates Rubber Company. Multiple-V-belt drive was first arranged a few years later by Walter Geist of the Allis-Chalmers corporation, who was inspired to replace the single rope of multi-groove-sheave rope drives with multiple V-belts running parallel. Geist filed for a patent in 1925, and Allis-Chalmers began marketing the drive under the "Texrope" brand; the patent was granted in 1928 (U.S. patent 1662511). The "Texrope" brand still exists, although it has changed ownership and no longer refers to multiple-V-belt drive alone.
Multi-groove belts.
A multi-groove, V-ribbed, or polygroove belt is made up of usually between 3 and 24 V-shaped sections alongside each other. This gives a thinner belt for the same drive surface, thus it is more flexible, although often wider. The added flexibility offers an improved efficiency, as less energy is wasted in the internal friction of continually bending the belt. In practice this gain of efficiency causes a reduced heating effect on the belt, and a cooler-running belt lasts longer in service. Belts are commercially available in several sizes, with usually a 'P' (sometimes omitted) and a single letter identifying the pitch between grooves. The 'PK' section with a pitch of 3.56 mm is commonly used for automotive applications.
A further advantage of the polygroove belt that makes them popular is that they can run over pulleys on the ungrooved back of the belt. Though this is sometimes done with V-belts with a single idler pulley for tensioning, a polygroove belt may be wrapped around a pulley on its back tightly enough to change its direction, or even to provide a light driving force.
Any V-belt's ability to drive pulleys depends on wrapping the belt around a sufficient angle of the pulley to provide grip. Where a single-V-belt is limited to a simple convex shape, it can adequately wrap at most three or possibly four pulleys, so can drive at most three accessories. Where more must be driven, such as for modern cars with power steering and air conditioning, multiple belts are required. As the polygroove belt can be bent into concave paths by external idlers, it can wrap any number of driven pulleys, limited only by the power capacity of the belt.
This ability to bend the belt at the designer's whim allows it to take a complex or "serpentine" path. This can assist the design of a compact engine layout, where the accessories are mounted more closely to the engine block and without the need to provide movable tensioning adjustments. The entire belt may be tensioned by a single idler pulley.
The nomenclature used for belt sizes varies by region and trade. An automotive belt with the number "740K6" or "6K740" indicates a belt in length, 6 ribs wide, with a rib pitch of (a standard thickness for a K series automotive belt would be 4.5mm). A metric equivalent would be usually indicated by "6PK1880" whereby 6 refers to the number of ribs, PK refers to the metric PK thickness and pitch standard, and 1880 is the length of the belt in millimeters.
Ribbed belt.
A ribbed belt is a power transmission belt featuring lengthwise grooves. It operates from contact between the ribs of the belt and the grooves in the pulley. Its single-piece structure is reported to offer an even distribution of tension across the width of the pulley where the belt is in contact, a power range up to 600 kW, a high speed ratio, serpentine drives (possibility to drive off the back of the belt), long life, stability and homogeneity of the drive tension, and reduced vibration. The ribbed belt may be fitted on various applications: compressors, fitness bikes, agricultural machinery, food mixers, washing machines, lawn mowers, etc.
Film belts.
Though often grouped with flat belts, they are actually a different kind. They consist of a very thin strip (0.5–15 millimeters or 100–4000 micrometres) of plastic and occasionally rubber. They are generally intended for low-power (less than 10 watts), high-speed uses, allowing high efficiency (up to 98%) and long life. These are seen in business machines, printers, tape recorders, and other light-duty operations.
Timing belts.
Timing belts (also known as toothed, notch, cog, or synchronous belts) are a "positive" transfer belt and can track relative movement. These belts have teeth that fit into a matching toothed pulley. When correctly tensioned, they have no slippage, run at constant speed, and are often used to transfer direct motion for indexing or timing purposes (hence their name). They are often used instead of chains or gears, so there is less noise and a lubrication bath is not necessary. Camshafts of automobiles, miniature timing systems, and stepper motors often utilize these belts. Timing belts need the least tension of all belts and are among the most efficient. They can bear up to at speeds of .
Timing belts with a helical offset tooth design are available. The helical offset tooth design forms a chevron pattern and causes the teeth to engage progressively. The chevron pattern design is self-aligning and does not make the noise that some timing belts make at certain speeds, and is more efficient at transferring power (up to 98%).
The advantages of timing belts include clean operation, energy efficiency, low maintenance, low noise, non slip performance, versatile load and speed capabilities.
Disadvantages include a relatively high purchase cost, the need for specially fabricated toothed pulleys, less protection from overloading, jamming, and vibration due to their continuous tension cords, the lack of clutch action (only possible with friction-drive belts), and the fixed lengths, which do not allow length adjustment (unlike link V-belts or chains).
Specialty belts.
Belts normally transmit power on the tension side of the loop. However, designs for continuously variable transmissions exist that use belts that are a series of solid metal blocks, linked together as in a chain, transmitting power on the compression side of the loop.
Rolling roads.
Belts used for rolling roads for wind tunnels can be capable of .
Standards for use.
The open belt drive has parallel shafts rotating in the same direction, whereas the cross-belt drive also bears parallel shafts but rotate in opposite direction. The former is far more common, and the latter not appropriate for timing and standard V-belts unless there is a twist between each pulley so that the pulleys only contact the same belt surface. Nonparallel shafts can be connected if the belt's center line is aligned with the center plane of the pulley. Industrial belts are usually reinforced rubber but sometimes leather types. Non-leather, non-reinforced belts can only be used in light applications.
The pitch line is the line between the inner and outer surfaces that is neither subject to tension (like the outer surface) nor compression (like the inner). It is midway through the surfaces in film and flat belts and dependent on cross-sectional shape and size in timing and V-belts. The standard reference pitch diameter can be estimated by taking the average of the gear teeth tip diameter and the gear teeth base diameter. The angular speed is inversely proportional to size, so the larger the wheel, the lower its angular velocity, and vice versa. Actual pulley speeds tend to be 0.5–1% less than generally calculated because of belt slip and stretch. In timing belts, the inverse ratio of the pulley teeth counts gives the exact speed ratio.
The speed of the belt is:
<templatestyles src="Block indent/styles.css"/>Speed = Circumference based on pitch diameter × angular speed in rpm
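As a small worked example (with illustrative pulley sizes and motor speed), the pulley speed ratio and the pitch-line belt speed follow directly from the relations above:

```python
import math

d_drive, d_driven = 0.125, 0.400   # pitch diameters in metres (illustrative)
n_drive = 1450.0                   # driver speed in rev/min

n_driven = n_drive * d_drive / d_driven            # speed inversely proportional to diameter
belt_speed = math.pi * d_drive * n_drive / 60.0    # circumference x rev/s, in m/s

print(round(n_driven, 1), "rpm", round(belt_speed, 2), "m/s")   # 453.1 rpm, 9.49 m/s
# The actual driven speed will be roughly 0.5-1% lower because of belt slip and stretch.
```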
International use standards.
Standards include:
Selection criteria.
Belt drives are built under the following required conditions: speeds of and power transmitted between drive and driven unit; suitable distance between shafts; and appropriate operating conditions. The equation for power is
<templatestyles src="Block indent/styles.css"/>power [kW] = (torque [N·m]) × (rotational speed [rev/min]) × (2π radians) / (60 s × 1000 W).
Factors of power adjustment include speed ratio; shaft distance (long or short); type of drive unit (electric motor, internal combustion engine); service environment (oily, wet, dusty); driven unit loads (jerky, shock, reversed); and pulley-belt arrangement (open, crossed, turned). These are found in engineering handbooks and manufacturer's literature. When corrected, the power is compared to rated powers of the standard belt cross-sections at particular belt speeds to find a number of arrays that perform best. Now the pulley diameters are chosen. It is generally either large diameters or large cross-section that are chosen, since, as stated earlier, larger belts transmit this same power at low belt speeds as smaller belts do at high speeds. To keep the driving part at its smallest, minimal-diameter pulleys are desired. Minimum pulley diameters are limited by the elongation of the belt's outer fibers as the belt wraps around the pulleys. Small pulleys increase this elongation, greatly reducing belt life. Minimal pulley diameters are often listed with each cross-section and speed, or listed separately by belt cross-section. After the cheapest diameters and belt section are chosen, the belt length is computed. If endless belts are used, the desired shaft spacing may need adjusting to accommodate standard-length belts. It is often more economical to use two or more juxtaposed V-belts, rather than one larger belt.
In large speed ratios or small central distances, the angle of contact between the belt and pulley may be less than 180°. If this is the case, the drive power must be further increased, according to manufacturer's tables, and the selection process repeated. This is because power capacities are based on the standard of a 180° contact angle. Smaller contact angles mean less area for the belt to obtain traction, and thus the belt carries less power.
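The first steps of this selection procedure can be sketched as below; the service factor and the wrap-angle correction used here are placeholder values standing in for the manufacturer's tables, not catalogue data.

```python
import math

def design_power_kw(torque_nm, speed_rpm, service_factor=1.2, wrap_correction=1.0):
    """Corrected design power to compare against the rated power of each belt section."""
    basic_kw = torque_nm * speed_rpm * 2 * math.pi / (60 * 1000)  # power [kW]
    # The service factor covers drive type, load character and environment;
    # wrap_correction > 1 compensates for a contact angle below 180 degrees.
    return basic_kw * service_factor * wrap_correction

# e.g. 36 N*m at 1450 rev/min, electric motor, light shock load, 160-degree wrap
print(round(design_power_kw(36.0, 1450.0, service_factor=1.2, wrap_correction=1.05), 2), "kW")
```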
Belt friction.
Belt drives depend on friction to operate, but excessive friction wastes energy and rapidly wears the belt. Factors that affect belt friction include belt tension, contact angle, and the materials used to make the belt and pulleys.
Belt tension.
Power transmission is a function of belt tension. However, also increasing with tension is stress (load) on the belt and bearings. The ideal belt is that of the lowest tension that does not slip in high loads. Belt tensions should also be adjusted to belt type, size, speed, and pulley diameters. Belt tension is determined by measuring the force to deflect the belt a given distance per inch (or mm) of pulley. Timing belts need only adequate tension to keep the belt in contact with the pulley.
Belt wear.
Fatigue, more so than abrasion, is the culprit for most belt problems. This wear is caused by stress from rolling around the pulleys. High belt tension; excessive slippage; adverse environmental conditions; and belt overloads caused by shock, vibration, or belt slapping all contribute to belt fatigue.
Belt vibration.
Vibration signatures are widely used for studying belt drive malfunctions. Common malfunctions or faults include the effects of belt tension, speed, sheave eccentricity, and misalignment. The effect of sheave eccentricity on the vibration signature of a belt drive is quite significant: although it does not necessarily increase the vibration magnitude, it creates strong amplitude modulation. When the top section of a belt is in resonance, the vibration of the machine increases; however, the increase is not significant when only the bottom section of the belt is in resonance. The vibration spectrum tends to move to higher frequencies as the tension force of the belt is increased.
Belt dressing.
Belt slippage can be addressed in several ways. Belt replacement is an obvious solution, and eventually the mandatory one (because no belt lasts forever). Often, though, before the replacement option is executed, retensioning (via pulley centerline adjustment) or dressing (with any of various coatings) may be successful to extend the belt's lifespan and postpone replacement. Belt dressings are typically liquids that are poured, brushed, dripped, or sprayed onto the belt surface and allowed to spread around; they are meant to recondition the belt's driving surfaces and increase friction between the belt and the pulleys. Some belt dressings are dark and sticky, resembling tar or syrup; some are thin and clear, resembling mineral spirits. Some are sold to the public in aerosol cans at auto parts stores; others are sold in drums only to industrial users.
Specifications.
To fully specify a belt, the material, length, and cross-section size and shape are required. Timing belts, in addition, require that the size of the teeth be given. The length of the belt is approximately the sum of twice the central distance, half the circumference of each pulley, and a correction term equal to the square of the difference (if open) or the sum (if crossed) of the radii divided by the central distance. The correction term adds length in a manner similar to the Pythagorean theorem. One important point is that, in an open drive, as formula_6 gets closer to formula_7 the correction term (and therefore the added length) approaches zero.
On the other hand, in a crossed belt drive the "sum" rather than the difference of the radii is the basis for the length computation, so as the smaller pulley is made larger, the belt length increases.
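A compact form of these length relations, using the common first-order approximation with centre distance "C" and pulley diameters "D"1 ≤ "D"2, is sketched below; the numerical values are illustrative.

```python
import math

def open_belt_length(C, D1, D2):
    """Approximate length of an open belt: twice the centre distance, half of each
    pulley circumference, plus a correction growing with the diameter difference."""
    return 2 * C + math.pi * (D1 + D2) / 2 + (D2 - D1) ** 2 / (4 * C)

def crossed_belt_length(C, D1, D2):
    """Same, but for a crossed belt the correction uses the diameter sum."""
    return 2 * C + math.pi * (D1 + D2) / 2 + (D2 + D1) ** 2 / (4 * C)

print(round(open_belt_length(1.0, 0.125, 0.400), 3))     # ~2.844 m
print(round(crossed_belt_length(1.0, 0.125, 0.400), 3))  # ~2.894 m
```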
V-belt profiles.
Metric v-belt profiles (note pulley angles are reduced for small radius pulleys):
E.g. the pitch line for SPZ could be 8.5 mm from the bottom of the "V". In other words, 0–8.5 mm is 35° and 45° from 8.5 and above.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P = (T_1 - T_2)v,"
},
{
"math_id": 1,
"text": "T_1"
},
{
"math_id": 2,
"text": "T_2"
},
{
"math_id": 3,
"text": "\\frac{T_1}{T_2} = e^{\\mu\\alpha},"
},
{
"math_id": 4,
"text": "\\mu"
},
{
"math_id": 5,
"text": "\\alpha"
},
{
"math_id": 6,
"text": "D_1"
},
{
"math_id": 7,
"text": "D_2"
}
]
| https://en.wikipedia.org/wiki?curid=1364502 |
1364506 | Binary data | Data whose unit can take on only two possible states
Binary data is data whose unit can take on only two possible states. These are often labelled as 0 and 1 in accordance with the binary numeral system and Boolean algebra.
Binary data occurs in many different technical and scientific fields, where it can be called by different names including "bit" (binary digit) in computer science, "truth value" in mathematical logic and related domains and "binary variable" in statistics.
Mathematical and combinatoric foundations.
A discrete variable that can take only one state contains zero information, and 2 is the next natural number after 1. That is why the bit, a variable with only two possible values, is a standard primary unit of information.
A collection of n bits may have 2"n" states: see binary number for details. The number of states of a collection of discrete variables depends exponentially on the number of variables, and only as a power law on the number of states of each variable. Ten bits have more (1024) states than three decimal digits (1000). 10"k" bits are more than sufficient to represent any information (a number or anything else) that requires 3"k" decimal digits, so information contained in discrete variables with 3, 4, 5, 6, 7, 8, 9, 10... states can always be superseded by allocating two, three, or four times more bits. So, the use of any small number other than 2 does not provide an advantage.
Moreover, Boolean algebra provides a convenient mathematical structure for collection of bits, with a semantic of a collection of propositional variables. Boolean algebra operations are known as "bitwise operations" in computer science. Boolean functions are also well-studied theoretically and easily implementable, either with computer programs or by so-named logic gates in digital electronics. This contributes to the use of bits to represent different data, even those originally not binary.
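A few lines of Python illustrate both points: a collection of n bits spans 2"n" states, and Boolean operations on such collections are ordinary bitwise operations.

```python
n = 10
print(2 ** n)          # 1024 states -- more than three decimal digits can encode (1000)

a, b = 0b1010, 0b0110  # two 4-bit collections of propositional variables
print(bin(a & b))      # AND -> 0b10
print(bin(a | b))      # OR  -> 0b1110
print(bin(a ^ b))      # XOR -> 0b1100
```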
In statistics.
In statistics, binary data is a statistical data type consisting of categorical data that can take exactly two possible values, such as "A" and "B", or "heads" and "tails". It is also called dichotomous data, and an older term is quantal data. The two values are often referred to generically as "success" and "failure". As a form of categorical data, binary data is nominal data, meaning the values are qualitatively different and cannot be compared numerically. However, the values are frequently represented as 1 or 0, which corresponds to counting the number of successes in a single trial: 1 (success) or 0 (failure).
Often, binary data is used to represent one of two conceptually opposed values, e.g.:
However, it can also be used for data that is assumed to have only two possible values, even if they are not conceptually opposed or conceptually represent all possible values in the space. For example, binary data is often used to represent the party choices of voters in elections in the United States, i.e. Republican or Democratic. In this case, there is no inherent reason why only two political parties should exist, and indeed, other parties do exist in the U.S., but they are so minor that they are generally simply ignored. Modeling continuous data (or categorical data of more than 2 categories) as a binary variable for analysis purposes is called dichotomization (creating a dichotomy). Like all discretization, it involves discretization error, but the goal is to learn something valuable despite the error: treating it as negligible for the purpose at hand, but remembering that it cannot be assumed to be negligible in general.
Binary variables.
A binary variable is a random variable of binary type, meaning with two possible values. Independent and identically distributed (i.i.d.) binary variables follow a Bernoulli distribution, but in general binary data need not come from i.i.d. variables. Total counts of i.i.d. binary variables (equivalently, sums of i.i.d. binary variables coded as 1 or 0) follow a binomial distribution, but when binary variables are not i.i.d., the distribution need not be binomial.
Counting.
Like categorical data, binary data can be converted to a vector of count data by writing one coordinate for each possible value, and counting 1 for the value that occurs, and 0 for the value that does not occur. For example, if the values are A and B, then the data set A, A, B can be represented in counts as (1, 0), (1, 0), (0, 1). Once converted to counts, binary data can be grouped and the counts added. For instance, if the set A, A, B is grouped, the total counts are (2, 1): 2 A's and 1 B (out of 3 trials).
Since there are only two possible values, this can be simplified to a single count (a scalar value) by considering one value as "success" and the other as "failure", coding a success as 1 and a failure as 0 (using only the coordinate for the "success" value, not the coordinate for the "failure" value). For example, if the value A is considered "success" (and thus B is considered "failure"), the data set A, A, B would be represented as 1, 1, 0. When this is grouped, the values are added, while the number of trials is generally tracked implicitly. For example, A, A, B would be grouped as 1 + 1 + 0 = 2 successes (out of formula_0 trials). Going the other way, count data with formula_1 is binary data, with the two classes being 0 (failure) or 1 (success).
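A minimal sketch of both codings (the variable names are illustrative):

```python
data = ["A", "A", "B"]

# One count coordinate per possible value.
counts = [(1, 0) if x == "A" else (0, 1) for x in data]
grouped = tuple(sum(col) for col in zip(*counts))
print(counts)    # [(1, 0), (1, 0), (0, 1)]
print(grouped)   # (2, 1): 2 A's and 1 B out of 3 trials

# Single 0/1 coding, treating "A" as success.
successes = [1 if x == "A" else 0 for x in data]
print(sum(successes), len(successes))   # 2 successes out of n = 3 trials
```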
Counts of i.i.d. binary variables follow a binomial distribution, with "n" the total number of trials (points in the grouped data).
Regression.
Regression analysis on predicted outcomes that are binary variables is known as binary regression; when binary data is converted to count data and modeled as i.i.d. variables (so they have a binomial distribution), binomial regression can be used. The most common regression methods for binary data are logistic regression, probit regression, or related types of binary choice models.
Similarly, counts of i.i.d. categorical variables with more than two categories can be modeled with a multinomial regression. Counts of non-i.i.d. binary data can be modeled by more complicated distributions, such as the beta-binomial distribution (a compound distribution). Alternatively, the "relationship" can be modeled without needing to explicitly model the distribution of the output variable using techniques from generalized linear models, such as quasi-likelihood and a quasibinomial model.
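As a rough sketch of binary (logistic) regression on 0/1 outcomes, the model can be fitted by plain gradient ascent on the log-likelihood; the simulated data, step size and iteration count below are illustrative choices, not taken from any particular source:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=200)                       # single predictor
p_true = 1 / (1 + np.exp(-(0.5 + 2.0 * x)))    # true success probability
y = rng.binomial(1, p_true)                    # binary (0/1) outcomes

X = np.column_stack([np.ones_like(x), x])      # intercept and slope columns
beta = np.zeros(2)
for _ in range(3000):                          # gradient ascent on the log-likelihood
    p = 1 / (1 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (y - p) / len(y)

print(beta)   # estimated intercept and slope, roughly near (0.5, 2.0)
```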
In computer science.
In modern computers, binary data refers to any data represented in binary form rather than interpreted on a higher level or converted into some other form. At the lowest level, bits are stored in a bistable device such as a flip-flop. While most binary data has symbolic meaning (except for don't cares), not all binary data is numeric. Some binary data corresponds to computer instructions, such as the data within processor registers decoded by the control unit along the fetch-decode-execute cycle. Computers rarely modify individual bits for performance reasons. Instead, data is aligned in groups of a fixed number of bits, usually 1 byte (8 bits). Hence, "binary data" in computers is actually a sequence of bytes. On a higher level, data is accessed in groups of 1 word (4 bytes) for 32-bit systems and 2 words for 64-bit systems.
In applied computer science and in the information technology field, the term "binary data" is often specifically opposed to "text-based data", referring to any sort of data that cannot be interpreted as text. The "text" vs. "binary" distinction can sometimes refer to the semantic content of a file (e.g. a written document vs. a digital image). However, it often refers specifically to whether the individual bytes of a file are interpretable as text (see character encoding) or cannot so be interpreted. When this last meaning is intended, the more specific terms "binary format" and "text(ual) format" are sometimes used. Semantically textual data can be represented in binary format (e.g. when compressed or in certain formats that intermix various sorts of formatting codes, as in the doc format used by Microsoft Word); contrarily, image data is sometimes represented in textual format (e.g. the X PixMap image format used in the X Window System).
At the physical level, 1 and 0 are commonly represented by two different voltage levels: a computer can be designed to treat the higher voltage as 1 and the lower voltage as 0. There are many other ways of storing two distinguishable states. A floppy disk, for example, carries a coating of ferromagnetic material whose magnetic domains can be aligned in a particular direction and retain a remnant magnetic field even after the magnetising current or field is removed. When data is written, the field is applied in one direction and the saved domain orientation is read as 1; when the field is applied in the other direction, the saved orientation is read as 0. In this way binary data is commonly stored.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n = 3"
},
{
"math_id": 1,
"text": "n = 1"
}
]
| https://en.wikipedia.org/wiki?curid=1364506 |
1364622 | Four-dimensional space | Geometric space with four dimensions
Four-dimensional space (4D) is the mathematical extension of the concept of three-dimensional space (3D). Three-dimensional space is the simplest possible abstraction of the observation that one needs only three numbers, called "dimensions", to describe the sizes or locations of objects in the everyday world. For example, the volume of a rectangular box is found by measuring and multiplying its length, width, and height (often labeled x, y, and z). This concept of ordinary space is called Euclidean space because it corresponds to Euclid's geometry, which was originally abstracted from the spatial experiences of everyday life.
The idea of adding a fourth dimension appears in Jean le Rond d'Alembert's "Dimensions", published in 1754, but the mathematics of more than three dimensions only emerged in the 19th century. The general concept of Euclidean space with any number of dimensions was fully developed by the Swiss mathematician Ludwig Schläfli before 1853. Schläfli's work received little attention during his lifetime and was published only posthumously, in 1901, but meanwhile the fourth Euclidean dimension was rediscovered by others. In 1880 Charles Howard Hinton popularized it in an essay, "", in which he explained the concept of a "four-dimensional cube" with a step-by-step generalization of the properties of lines, squares, and cubes. The simplest form of Hinton's method is to draw two ordinary 3D cubes in 2D space, one encompassing the other, separated by an "unseen" distance, and then draw lines between their equivalent vertices. This can be seen in the accompanying animation whenever it shows a smaller inner cube inside a larger outer cube. The eight lines connecting the vertices of the two cubes in this case represent a "single direction" in the "unseen" fourth dimension.
Higher-dimensional spaces (greater than three) have since become one of the foundations for formally expressing modern mathematics and physics. Large parts of these topics could not exist in their current forms without using such spaces. Einstein's theory of relativity is formulated in 4D space, although not in a Euclidean 4D space. Einstein's concept of spacetime has a Minkowski structure based on a non-Euclidean geometry with three spatial dimensions and one temporal dimension, rather than the four symmetric spatial dimensions of Schläfli's Euclidean 4D space.
Single locations in Euclidean 4D space can be given as vectors or "4-tuples", i.e., as ordered lists of numbers such as ("x", "y", "z", "w"). It is only when such locations are linked together into more complicated shapes that the full richness and geometric complexity of higher-dimensional spaces emerge. A hint of that complexity can be seen in the accompanying 2D animation of one of the simplest possible regular 4D objects, the tesseract, which is analogous to the 3D cube.
History.
Lagrange wrote in his (published 1788, based on work done around 1755) that mechanics can be viewed as operating in a four-dimensional space— three dimensions of space, and one of time. As early as 1827, Möbius realized that a fourth "spatial" dimension would allow a three-dimensional form to be rotated onto its mirror-image. The general concept of Euclidean space with any number of dimensions was fully developed by the Swiss mathematician Ludwig Schläfli in the mid-19th century, at a time when Cayley, Grassman and Möbius were the only other people who had ever conceived the possibility of geometry in more than three dimensions. By 1853 Schläfli had discovered all the regular polytopes that exist in higher dimensions, including the four-dimensional analogs of the Platonic solids.
An arithmetic of four spatial dimensions, called quaternions, was defined by William Rowan Hamilton in 1843. This associative algebra was the source of the science of vector analysis in three dimensions as recounted by Michael J. Crowe in "A History of Vector Analysis". Soon after, tessarines and coquaternions were introduced as other four-dimensional algebras over R. In 1886, Victor Schlegel described his method of visualizing four-dimensional objects with Schlegel diagrams.
One of the first popular expositors of the fourth dimension was Charles Howard Hinton, starting in 1880 with his essay "What is the Fourth Dimension?", published in the Dublin University magazine. He coined the terms "tesseract", "ana" and "kata" in his book "A New Era of Thought" and introduced a method for visualizing the fourth dimension using cubes in the book "Fourth Dimension". Hinton's ideas inspired a fantasy about a "Church of the Fourth Dimension" featured by Martin Gardner in his January 1962 "Mathematical Games column" in "Scientific American".
Higher dimensional non-Euclidean spaces were put on a firm footing by Bernhard Riemann's 1854 thesis, , in which he considered a "point" to be any sequence of coordinates ("x"1, ..., "xn"). In 1908, Hermann Minkowski presented a paper consolidating the role of time as the fourth dimension of spacetime, the basis for Einstein's theories of special and general relativity. But the geometry of spacetime, being non-Euclidean, is profoundly different from that explored by Schläfli and popularised by Hinton. The study of Minkowski space required Riemann's mathematics which is quite different from that of four-dimensional Euclidean space, and so developed along quite different lines. This separation was less clear in the popular imagination, with works of fiction and philosophy blurring the distinction, so in 1973 H. S. M. Coxeter felt compelled to write:
<templatestyles src="Template:Blockquote/styles.css" />Little, if anything, is gained by representing the fourth Euclidean dimension as "time". In fact, this idea, so attractively developed by H. G. Wells in "The Time Machine", has led such authors as John William Dunne ("An Experiment with Time") into a serious misconception of the theory of Relativity. Minkowski's geometry of space-time is "not" Euclidean, and consequently has no connection with the present investigation.
Vectors.
Mathematically, a four-dimensional space is a space that needs four parameters to specify a point in it. For example, a general point might have position vector a, equal to
formula_0
This can be written in terms of the four standard basis vectors (e1, e2, e3, e4), given by
formula_1
so the general vector a is
formula_2
Vectors add, subtract and scale as in three dimensions.
The dot product of Euclidean three-dimensional space generalizes to four dimensions as
formula_3
It can be used to calculate the norm or length of a vector,
formula_4
and calculate or define the angle between two non-zero vectors as
formula_5
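A minimal sketch of these three operations on 4-tuples (the sample vectors are arbitrary):

```python
import math

def dot(a, b):
    """Four-dimensional dot product."""
    return sum(ai * bi for ai, bi in zip(a, b))

def norm(a):
    """Euclidean length of a 4-vector."""
    return math.sqrt(dot(a, a))

def angle(a, b):
    """Angle between two non-zero 4-vectors, in radians."""
    return math.acos(dot(a, b) / (norm(a) * norm(b)))

a = (1, 2, 3, 4)
b = (4, 3, 2, 1)
print(dot(a, b))                   # 1*4 + 2*3 + 3*2 + 4*1 = 20
print(norm(a))                     # sqrt(30)
print(math.degrees(angle(a, b)))   # about 48.2 degrees
```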
Minkowski spacetime is four-dimensional space with geometry defined by a non-degenerate pairing different from the dot product:
formula_6
As an example, the distance squared between the points (0,0,0,0) and (1,1,1,0) is 3 in both the Euclidean and Minkowskian 4-spaces, while the distance squared between (0,0,0,0) and (1,1,1,1) is 4 in Euclidean space and 2 in Minkowski space; increasing "b"4 decreases the metric distance. This leads to many of the well-known apparent "paradoxes" of relativity.
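The two quadratic forms can be compared on the points just mentioned; a small sketch (the helper names are illustrative):

```python
def euclid_sq(a, b):
    """Euclidean squared distance in four dimensions."""
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

def minkowski_sq(a, b):
    """Squared interval with signature (+, +, +, -)."""
    d = [ai - bi for ai, bi in zip(a, b)]
    return d[0] ** 2 + d[1] ** 2 + d[2] ** 2 - d[3] ** 2

p, q, r = (0, 0, 0, 0), (1, 1, 1, 0), (1, 1, 1, 1)
print(euclid_sq(p, q), minkowski_sq(p, q))   # 3 and 3
print(euclid_sq(p, r), minkowski_sq(p, r))   # 4 and 2
```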
The cross product is not defined in four dimensions. Instead, the exterior product is used for some applications, and is defined as follows:
formula_7
This is bivector valued, with bivectors in four dimensions forming a six-dimensional linear space with basis (e12, e13, e14, e23, e24, e34). They can be used to generate rotations in four dimensions.
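A minimal sketch that computes the six bivector components of the exterior product (the indexing convention is chosen only for illustration):

```python
from itertools import combinations

def wedge(a, b):
    """Components of a ^ b on the basis e_ij (i < j) of 4D bivectors."""
    return {(i + 1, j + 1): a[i] * b[j] - a[j] * b[i]
            for i, j in combinations(range(4), 2)}

a = (1, 0, 0, 0)   # e_1
b = (0, 1, 0, 0)   # e_2
print(wedge(a, b)) # only the (1, 2) component, i.e. e_12, is non-zero
```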
Orthogonality and vocabulary.
In the familiar three-dimensional space of daily life, there are three coordinate axes—usually labeled x, y, and z—with each axis orthogonal (i.e. perpendicular) to the other two. The six cardinal directions in this space can be called "up", "down", "east", "west", "north", and "south". Positions along these axes can be called "altitude", "longitude", and "latitude". Lengths measured along these axes can be called "height", "width", and "depth".
Comparatively, four-dimensional space has an extra coordinate axis, orthogonal to the other three, which is usually labeled w. To describe the two additional cardinal directions, Charles Howard Hinton coined the terms "ana" and "kata", from the Greek words meaning "up toward" and "down from", respectively.
As mentioned above, Hermann Minkowski exploited the idea of four dimensions to discuss cosmology including the finite velocity of light. In appending a time dimension to three-dimensional space, he specified an alternative perpendicularity, hyperbolic orthogonality. This notion provides his four-dimensional space with a modified simultaneity appropriate to electromagnetic relations in his cosmos. Minkowski's world overcame problems associated with the traditional absolute space and time cosmology previously used in a universe of three space dimensions and one time dimension.
Geometry.
The geometry of four-dimensional space is much more complex than that of three-dimensional space, due to the extra degree of freedom.
Just as in three dimensions there are polyhedra made of two dimensional polygons, in four dimensions there are polychora made of polyhedra. In three dimensions, there are 5 regular polyhedra known as the Platonic solids. In four dimensions, there are 6 convex regular 4-polytopes, the analogs of the Platonic solids. Relaxing the conditions for regularity generates a further 58 convex uniform 4-polytopes, analogous to the 13 semi-regular Archimedean solids in three dimensions. Relaxing the conditions for convexity generates a further 10 nonconvex regular 4-polytopes.
In three dimensions, a circle may be extruded to form a cylinder. In four dimensions, there are several different cylinder-like objects. A sphere may be extruded to obtain a spherical cylinder (a cylinder with spherical "caps", known as a spherinder), and a cylinder may be extruded to obtain a cylindrical prism (a cubinder). The Cartesian product of two circles may be taken to obtain a duocylinder. All three can "roll" in four-dimensional space, each with its own properties.
In three dimensions, curves can form knots but surfaces cannot (unless they are self-intersecting). In four dimensions, however, knots made using curves can be trivially untied by displacing them in the fourth direction—but 2D surfaces can form non-trivial, non-self-intersecting knots in 4D space. Because these surfaces are two-dimensional, they can form much more complex knots than strings in 3D space can. The Klein bottle is an example of such a knotted surface. Another such surface is the real projective plane.
Hypersphere.
The set of points in Euclidean 4-space having the same distance R from a fixed point "P"0 forms a hypersurface known as a 3-sphere. The hyper-volume of the enclosed space is:
formula_8
This is part of the Friedmann–Lemaître–Robertson–Walker metric in general relativity, where R is replaced by the function "R"("t"), with t meaning the cosmological age of the universe. Growing or shrinking R with time corresponds to an expanding or collapsing universe, depending on the mass density inside.
Four-dimensional perception in humans.
Research using virtual reality finds that humans, despite living in a three-dimensional world, can, without special practice, make spatial judgments about line segments embedded in four-dimensional space, based on their length (one-dimensional) and the angle (two-dimensional) between them. The researchers noted that "the participants in our study had minimal practice in these tasks, and it remains an open question whether it is possible to obtain more sustainable, definitive, and richer 4D representations with increased perceptual experience in 4D virtual environments". In another study, the ability of humans to orient themselves in 2D, 3D, and 4D mazes has been tested. Each maze consisted of four path segments of random length and connected with orthogonal random bends, but without branches or loops (i.e. actually labyrinths). The graphical interface was based on John McIntosh's free 4D Maze game. The participating persons had to navigate through the path and finally estimate the linear direction back to the starting point. The researchers found that some of the participants were able to mentally integrate their path after some practice in 4D (the lower-dimensional cases were for comparison and for the participants to learn the method).
However, a 2020 review underlined how these studies are composed of a small subject sample and mainly of college students. It also pointed out other issues that future research has to resolve: elimination of artifacts (these could be caused, for example, by strategies to resolve the required task that don't use 4D representation/4D reasoning and feedback given by researchers to speed up the adaptation process) and analysis on inter-subject variability (if 4D perception is possible, its acquisition could be limited to a subset of humans, to a specific critical period, or to people's attention or motivation). Furthermore, it is undetermined if there is a more appropriate way to project the 4-dimension (because there are no restrictions on how the 4-dimension can be projected). Researchers also hypothesized that human acquisition of 4D perception could result in the activation of brain visual areas and entorhinal cortex. If so they suggest that it could be used as a strong indicator of 4D space perception acquisition. Authors also suggested using a variety of different neural network architectures (with different "a priori" assumptions) to understand which ones are or are not able to learn.
Dimensional analogy.
To understand the nature of four-dimensional space, a device called "dimensional analogy" is commonly employed. Dimensional analogy is the study of how ("n" − 1) dimensions relate to n dimensions, and then inferring how n dimensions would relate to ("n" + 1) dimensions.
The dimensional analogy was used by Edwin Abbott Abbott in the book "Flatland", which narrates a story about a square that lives in a two-dimensional world, like the surface of a piece of paper. From the perspective of this square, a three-dimensional being has seemingly god-like powers, such as the ability to remove objects from a safe without breaking it open (by moving them across the third dimension), to see everything that from the two-dimensional perspective is enclosed behind walls, and to remain completely invisible by standing a few inches away in the third dimension.
By applying dimensional analogy, one can infer that a four-dimensional being would be capable of similar feats from the three-dimensional perspective. Rudy Rucker illustrates this in his novel "Spaceland", in which the protagonist encounters four-dimensional beings who demonstrate such powers.
Cross-sections.
As a three-dimensional object passes through a two-dimensional plane, two-dimensional beings in this plane would only observe a cross-section of the three-dimensional object within this plane. For example, if a sphere passed through a sheet of paper, beings in the paper would first see a single point, then a circle that gradually grows larger until it reaches the diameter of the sphere, and then shrinks again until it becomes a point and disappears. The 2D beings would not see a circle in the same way as three-dimensional beings do; rather, they only see a one-dimensional projection of the circle on their 1D "retina". Similarly, if a four-dimensional object passed through a three-dimensional (hyper) surface, one could observe a three-dimensional cross-section of the four-dimensional object. For example, a hypersphere would appear first as a point, then as a growing sphere (until it reaches the "hyperdiameter" of the hypersphere), with the sphere then shrinking to a single point and then disappearing. This means of visualizing aspects of the fourth dimension was used in the novel "Flatland" and also in several works of Charles Howard Hinton. And just as three-dimensional beings (such as humans with a 2D retina) can see all the sides and the insides of a 2D shape simultaneously, a 4D being could see all faces and the inside of a 3D shape at once with its 3D retina.
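The analogy can be made quantitative: slicing a sphere of radius R at signed distance w from its centre gives a circle of radius sqrt(R^2 − w^2), and the same expression gives the radius of the spherical slice of a hypersphere passing through three-dimensional space. A small sketch with arbitrary values:

```python
import math

def slice_radius(R, w):
    """Radius of the cross-section at signed distance w from the centre (|w| <= R)."""
    return math.sqrt(R * R - w * w)

R = 1.0
for w in (-1.0, -0.5, 0.0, 0.5, 1.0):
    print(w, round(slice_radius(R, w), 3))
# The slice grows from a point to the full radius and shrinks back to a point,
# whether it is a circle (sphere through a plane) or a sphere (hypersphere
# passing through 3D space).
```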
Projections.
A useful application of dimensional analogy in visualizing higher dimensions is in projection. A projection is a way of representing an "n"-dimensional object in "n" − 1 dimensions. For instance, computer screens are two-dimensional, and all the photographs of three-dimensional people, places, and things are represented in two dimensions by projecting the objects onto a flat surface. By doing this, the dimension orthogonal to the screen ("depth") is removed and replaced with indirect information. The retina of the eye is also a two-dimensional array of receptors but the brain can perceive the nature of three-dimensional objects by inference from indirect information (such as shading, foreshortening, binocular vision, etc.). Artists often use perspective to give an illusion of three-dimensional depth to two-dimensional pictures. The "shadow", cast by a fictitious grid model of a rotating tesseract on a plane surface, as shown in the figures, is also the result of projections.
Similarly, objects in the fourth dimension can be mathematically projected to the familiar three dimensions, where they can be more conveniently examined. In this case, the 'retina' of the four-dimensional eye is a three-dimensional array of receptors. A hypothetical being with such an eye would perceive the nature of four-dimensional objects by inferring four-dimensional depth from indirect information in the three-dimensional images in its retina.
The perspective projection of three-dimensional objects into the retina of the eye introduces artifacts such as foreshortening, which the brain interprets as depth in the third dimension. In the same way, perspective projection from four dimensions produces similar foreshortening effects. By applying dimensional analogy, one may infer four-dimensional "depth" from these effects.
As an illustration of this principle, the following sequence of images compares various views of the three-dimensional cube with analogous projections of the four-dimensional tesseract into three-dimensional space.
Shadows.
A concept closely related to projection is the casting of shadows.
If a light is shone on a three-dimensional object, a two-dimensional shadow is cast. By dimensional analogy, light shone on a two-dimensional object in a two-dimensional world would cast a one-dimensional shadow, and light on a one-dimensional object in a one-dimensional world would cast a zero-dimensional shadow, that is, a point of non-light. Going the other way, one may infer that light shining on a four-dimensional object in a four-dimensional world would cast a three-dimensional shadow.
If the wireframe of a cube is lit from above, the resulting shadow on a flat two-dimensional surface is a square within a square with the corresponding corners connected. Similarly, if the wireframe of a tesseract were lit from "above" (in the fourth dimension), its shadow would be that of a three-dimensional cube within another three-dimensional cube suspended in midair (a "flat" surface from a four-dimensional perspective). (Note that, technically, the visual representation shown here is a two-dimensional image of the three-dimensional shadow of the four-dimensional wireframe figure.)
Bounding regions.
The dimensional analogy also helps in inferring basic properties of objects in higher dimensions, such as the bounding region. For example, two-dimensional objects are bounded by one-dimensional boundaries: a square is bounded by four edges. Three-dimensional objects are bounded by two-dimensional surfaces: a cube is bounded by 6 square faces.
By applying dimensional analogy, one may infer that a four-dimensional cube, known as a "tesseract", is bounded by three-dimensional volumes. And indeed, this is the case: mathematics shows that the tesseract is bounded by 8 cubes. Knowing this is key to understanding how to interpret a three-dimensional projection of the tesseract. The boundaries of the tesseract project to "volumes" in the image, not merely two-dimensional surfaces.
Hypervolume.
The 4-volume or hypervolume in 4D can be calculated in closed form for simple geometrical figures, such as the tesseract ("s"4, for side length "s") and the 4-ball (formula_9 for radius "r").
Reasoning by analogy from familiar lower dimensions can be an excellent intuitive guide, but care must be exercised not to accept results that have not been tested more rigorously. For example, consider the formulas for the area enclosed by a circle in two dimensions (formula_10) and the volume enclosed by a sphere in three dimensions (formula_11). One might guess that the volume enclosed by the sphere in four-dimensional space is a rational multiple of formula_12, but the correct volume is formula_13. The volume of an "n"-ball in an arbitrary dimension "n" is computable from a recurrence relation connecting dimension "n" to dimension "n" − 2.
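The recurrence can be written as V_n(r) = (2πr^2/n) · V_{n−2}(r), with V_0 = 1 and V_1(r) = 2r; a minimal sketch:

```python
import math

def ball_volume(n, r=1.0):
    """Volume of an n-ball via the recurrence V_n = (2*pi*r**2 / n) * V_{n-2}."""
    if n == 0:
        return 1.0
    if n == 1:
        return 2.0 * r
    return (2.0 * math.pi * r * r / n) * ball_volume(n - 2, r)

print(ball_volume(2))   # pi * r^2          ~ 3.1416
print(ball_volume(3))   # (4/3) * pi * r^3  ~ 4.1888
print(ball_volume(4))   # (pi^2 / 2) * r^4  ~ 4.9348, matching formula_13
```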
In culture.
In literature.
Science fiction texts often mention the concept of "dimension" when referring to parallel or alternate universes or other imagined planes of existence. This usage is derived from the idea that to travel to parallel/alternate universes/planes of existence one must travel in a direction/dimension besides the standard ones. In effect, the other universes/planes are just a small distance away from our own, but the distance is in a fourth (or higher) spatial (or non-spatial) dimension, not the standard ones.
One of the most heralded science fiction stories regarding true geometric dimensionality, and often recommended as a starting point for those just starting to investigate such matters, is the 1884 novella "Flatland" by Edwin A. Abbott. Isaac Asimov, in his foreword to the Signet Classics 1984 edition, described "Flatland" as "The best introduction one can find into the manner of perceiving dimensions."
The idea of other dimensions was incorporated into many early science fiction stories, appearing prominently, for example, in Miles J. Breuer's "The Appendix and the Spectacles" (1928) and Murray Leinster's "The Fifth-Dimension Catapult" (1931); and appeared irregularly in science fiction by the 1940s. Classic stories involving other dimensions include Robert A. Heinlein's "—And He Built a Crooked House" (1941), in which a California architect designs a house based on a three-dimensional projection of a tesseract; Alan E. Nourse's "Tiger by the Tail" and "The Universe Between" (both 1951); and "The Ifth of Oofth" (1957) by Walter Tevis. Another reference is Madeleine L'Engle's novel "A Wrinkle In Time" (1962), which uses the fifth dimension as a way of "tesseracting the universe" or "folding" space to move across it quickly. The fourth and fifth dimensions are also key components of the book "The Boy Who Reversed Himself" by William Sleator.
In philosophy.
Immanuel Kant wrote in 1783: "That everywhere space (which is not itself the boundary of another space) has three dimensions and that space, in general, cannot have more dimensions is based on the proposition that not more than three lines can intersect at right angles in one point. This proposition cannot at all be shown from concepts, but rests immediately on intuition and indeed on pure intuition "a priori" because it is apodictically (demonstrably) certain."
"Space has Four Dimensions" is a short story published in 1846 by German philosopher and experimental psychologist Gustav Fechner under the pseudonym "Dr. Mises". The protagonist in the tale is a shadow who is aware of and able to communicate with other shadows, but who is trapped on a two-dimensional surface. According to Fechner, this "shadow-man" would conceive of the third dimension as being one of time. The story bears a strong similarity to the "Allegory of the Cave" presented in Plato's "The Republic" (c. 380 BC).
Simon Newcomb wrote an article for the "Bulletin of the American Mathematical Society" in 1898 entitled "The Philosophy of Hyperspace". Linda Dalrymple Henderson coined the term "hyperspace philosophy", used to describe writing that uses higher dimensions to explore metaphysical themes, in her 1983 thesis about the fourth dimension in early-twentieth-century art. Examples of "hyperspace philosophers" include Charles Howard Hinton, the first writer, in 1888, to use the word "tesseract"; and the Russian esotericist P. D. Ouspensky.
See also.
<templatestyles src="Div col/styles.css"/>
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{a} = \\begin{pmatrix} a_1 \\\\ a_2 \\\\ a_3 \\\\ a_4 \\end{pmatrix}."
},
{
"math_id": 1,
"text": "\\mathbf{e}_1 = \\begin{pmatrix} 1 \\\\ 0 \\\\ 0 \\\\ 0 \\end{pmatrix}; \\mathbf{e}_2 = \\begin{pmatrix} 0 \\\\ 1 \\\\ 0 \\\\ 0 \\end{pmatrix}; \\mathbf{e}_3 = \\begin{pmatrix} 0 \\\\ 0 \\\\ 1 \\\\ 0 \\end{pmatrix}; \\mathbf{e}_4 = \\begin{pmatrix} 0 \\\\ 0 \\\\ 0 \\\\ 1 \\end{pmatrix}, "
},
{
"math_id": 2,
"text": " \\mathbf{a} = a_1\\mathbf{e}_1 + a_2\\mathbf{e}_2 + a_3\\mathbf{e}_3 + a_4\\mathbf{e}_4."
},
{
"math_id": 3,
"text": "\\mathbf{a} \\cdot \\mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 + a_4 b_4."
},
{
"math_id": 4,
"text": " \\left| \\mathbf{a} \\right| = \\sqrt{\\mathbf{a} \\cdot \\mathbf{a} } = \\sqrt{a_1^2 + a_2^2 + a_3^2 + a_4^2},"
},
{
"math_id": 5,
"text": " \\theta = \\arccos{\\frac{\\mathbf{a} \\cdot \\mathbf{b}}{\\left|\\mathbf{a}\\right| \\left|\\mathbf{b}\\right|}}."
},
{
"math_id": 6,
"text": "\\mathbf{a} \\cdot \\mathbf{b} = a_1 b_1 + a_2 b_2 + a_3 b_3 - a_4 b_4."
},
{
"math_id": 7,
"text": " \\begin{align}\n\\mathbf{a} \\wedge \\mathbf{b} = (a_1b_2 - a_2b_1)\\mathbf{e}_{12} + (a_1b_3 - a_3b_1)\\mathbf{e}_{13} + (a_1b_4 - a_4b_1)\\mathbf{e}_{14} + (a_2b_3 - a_3b_2)\\mathbf{e}_{23} \\\\\n+ (a_2b_4 - a_4b_2)\\mathbf{e}_{24} + (a_3b_4 - a_4b_3)\\mathbf{e}_{34}. \\end{align}"
},
{
"math_id": 8,
"text": " \\mathbf V = \\begin{matrix} \\frac{1}{2} \\end{matrix} \\pi^2 R^4"
},
{
"math_id": 9,
"text": "\\pi^2 r^4 /2"
},
{
"math_id": 10,
"text": "A = \\pi r^2"
},
{
"math_id": 11,
"text": "V = \\frac{4}{3} \\pi r^3"
},
{
"math_id": 12,
"text": "\\pi r^4"
},
{
"math_id": 13,
"text": "\\frac{\\pi^2}{2} r^4"
}
]
| https://en.wikipedia.org/wiki?curid=1364622 |
1365 | Ammonia | Chemical compound
<templatestyles src="Chembox/styles.css"/>
Chemical compound
Ammonia is an inorganic chemical compound of nitrogen and hydrogen with the formula . A stable binary hydride and the simplest pnictogen hydride, ammonia is a colourless gas with a distinctive pungent smell. Biologically, it is a common nitrogenous waste, and it contributes significantly to the nutritional needs of terrestrial organisms by serving as a precursor to fertilisers. Around 70% of ammonia produced industrially is used to make fertilisers in various forms and composition, such as urea and diammonium phosphate. Ammonia in pure form is also applied directly into the soil.
Ammonia, either directly or indirectly, is also a building block for the synthesis of many chemicals.
Ammonia occurs in nature and has been detected in the interstellar medium. In many countries it is classified as an extremely hazardous substance.
Ammonia is produced biologically in a process called nitrogen fixation, but even more is generated industrially by the Haber process. The process helped revolutionize agriculture by providing cheap fertilizers. The global industrial production of ammonia in 2021 was 235 million tonnes. Industrial ammonia is transported in tank cars or cylinders.
boils at at a pressure of one atmosphere, but the liquid can often be handled in the laboratory without external cooling. Household ammonia or ammonium hydroxide is a solution of in water.
Etymology.
Pliny, in Book XXXI of his Natural History, refers to a salt named "hammoniacum", so called because of the proximity of its source to the Temple of Jupiter Amun (Greek Ἄμμων "Ammon") in the Roman province of Cyrenaica. However, the description Pliny gives of the salt does not conform to the properties of ammonium chloride. According to Herbert Hoover's commentary in his English translation of Georgius Agricola's "De re metallica", it is likely to have been common sea salt. In any case, that salt ultimately gave ammonia and ammonium compounds their name.
Natural occurrence (abiological).
Traces of ammonia/ammonium are found in rainwater. Ammonium chloride (sal ammoniac), and ammonium sulfate are found in volcanic districts. Crystals of ammonium bicarbonate have been found in Patagonia guano.
Ammonia is found throughout the Solar System on Mars, Jupiter, Saturn, Uranus, Neptune, and Pluto, among other places: on smaller, icy bodies such as Pluto, ammonia can act as a geologically important antifreeze, as a mixture of water and ammonia can have a melting point as low as if the ammonia concentration is high enough and thus allow such bodies to retain internal oceans and active geology at a far lower temperature than would be possible with water alone. Substances containing ammonia, or those that are similar to it, are called "ammoniacal".
Properties.
Ammonia is a colourless gas with a characteristically pungent smell. It is lighter than air, its density being 0.589 times that of air. It is easily liquefied due to the strong hydrogen bonding between molecules. Gaseous ammonia turns to a colourless liquid, which boils at , and freezes to colourless crystals at . Little data is available at very high temperatures and pressures, but the liquid-vapor critical point occurs at 405 K and 11.35 MPa.
Solid.
The crystal symmetry is cubic, Pearson symbol cP16, space group P213 No.198, lattice constant 0.5125 nm.
Liquid.
Liquid ammonia possesses strong ionising powers reflecting its high "ε" of 22 at . Liquid ammonia has a very high standard enthalpy change of vapourization (23.5 kJ/mol; for comparison, water's is 40.65 kJ/mol, methane 8.19 kJ/mol and phosphine 14.6 kJ/mol) and can be transported in pressurized or refrigerated vessels; however, at standard temperature and pressure liquid anhydrous ammonia will vaporize.
Solvent properties.
Ammonia readily dissolves in water. In an aqueous solution, it can be expelled by boiling. The aqueous solution of ammonia is basic, and may be described as aqueous ammonia or ammonium hydroxide. The maximum concentration of ammonia in water (a saturated solution) has a specific gravity of 0.880 and is often known as '.880 ammonia'.
Liquid ammonia is a widely studied nonaqueous ionising solvent. Its most conspicuous property is its ability to dissolve alkali metals to form highly coloured, electrically conductive solutions containing solvated electrons. Apart from these remarkable solutions, much of the chemistry in liquid ammonia can be classified by analogy with related reactions in aqueous solutions. Comparison of the physical properties of with those of water shows has the lower melting point, boiling point, density, viscosity, dielectric constant and electrical conductivity. These differences are attributed at least in part to the weaker hydrogen bonding in . The ionic self-dissociation constant of liquid at −50 °C is about 10−33.
Liquid ammonia is an ionising solvent, although less so than water, and dissolves a range of ionic compounds, including many nitrates, nitrites, cyanides, thiocyanates, metal cyclopentadienyl complexes and metal bis(trimethylsilyl)amides. Most ammonium salts are soluble and act as acids in liquid ammonia solutions. The solubility of halide salts increases from fluoride to iodide. A saturated solution of ammonium nitrate (Divers' solution, named after Edward Divers) contains 0.83 mol solute per mole of ammonia and has a vapour pressure of less than 1 bar even at . However, few oxyanion salts with other cations dissolve.
Liquid ammonia will dissolve all of the alkali metals and other electropositive metals such as Ca, Sr, Ba, Eu and Yb (also Mg using an electrolytic process). At low concentrations (<0.06 mol/L), deep blue solutions are formed: these contain metal cations and solvated electrons, free electrons that are surrounded by a cage of ammonia molecules.
These solutions are strong reducing agents. At higher concentrations, the solutions are metallic in appearance and in electrical conductivity. At low temperatures, the two types of solution can coexist as phases.
Redox properties of liquid ammonia.
The range of thermodynamic stability of liquid ammonia solutions is very narrow, as the potential for oxidation to dinitrogen, "E"° (), is only +0.04 V. In practice, both oxidation to dinitrogen and reduction to dihydrogen are slow. This is particularly true of reducing solutions: the solutions of the alkali metals mentioned above are stable for several days, slowly decomposing to the metal amide and dihydrogen. Most studies involving liquid ammonia solutions are done in reducing conditions; although oxidation of liquid ammonia is usually slow, there is still a risk of explosion, particularly if transition metal ions are present as possible catalysts.
Structure.
The ammonia molecule has a trigonal pyramidal shape, as predicted by the valence shell electron pair repulsion theory (VSEPR theory) with an experimentally determined bond angle of 106.7°. The central nitrogen atom has five outer electrons with an additional electron from each hydrogen atom. This gives a total of eight electrons, or four electron pairs that are arranged tetrahedrally. Three of these electron pairs are used as bond pairs, which leaves one lone pair of electrons. The lone pair repels more strongly than bond pairs; therefore, the bond angle is not 109.5°, as expected for a regular tetrahedral arrangement, but 106.7°. This shape gives the molecule a dipole moment and makes it polar. The molecule's polarity, and especially its ability to form hydrogen bonds, makes ammonia highly miscible with water. The lone pair makes ammonia a base, a proton acceptor. Ammonia is moderately basic; a 1.0 M aqueous solution has a pH of 11.6, and if a strong acid is added to such a solution until the solution is neutral (pH = 7), 99.4% of the ammonia molecules are protonated. Temperature and salinity also affect the proportion of ammonium . The latter has the shape of a regular tetrahedron and is isoelectronic with methane.
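These figures can be reproduced with a back-of-the-envelope calculation, assuming a textbook base-dissociation constant Kb ≈ 1.8 × 10−5 for ammonia at 25 °C (a value assumed here, not given in the article):

```python
import math

Kb = 1.8e-5   # assumed base-dissociation constant of ammonia at 25 C
C = 1.0       # concentration in mol/L

# [OH-] from Kb = x**2 / (C - x), solved with the quadratic formula.
x = (-Kb + math.sqrt(Kb * Kb + 4 * Kb * C)) / 2
pOH = -math.log10(x)
print(14 - pOH)   # pH of a 1.0 M solution, roughly 11.6

# Fraction protonated (ammonium) at pH 7, using pKa of the conjugate acid.
pKa = 14 + math.log10(Kb)
fraction = 1 / (1 + 10 ** (7 - pKa))
print(fraction)   # about 0.994, i.e. roughly 99.4% protonated
```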
The ammonia molecule readily undergoes nitrogen inversion at room temperature; a useful analogy is an umbrella turning itself inside out in a strong wind. The energy barrier to this inversion is 24.7 kJ/mol, and the resonance frequency is 23.79 GHz, corresponding to microwave radiation of a wavelength of 1.260 cm. The absorption at this frequency was the first microwave spectrum to be observed and was used in the first maser.
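A quick consistency check of the quoted frequency and wavelength, using c = fλ:

```python
c = 2.998e8           # speed of light in m/s
f = 23.79e9           # inversion resonance frequency in Hz
print(c / f * 100)    # wavelength in cm, about 1.26
```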
Amphotericity.
One of the most characteristic properties of ammonia is its basicity. Ammonia is considered to be a weak base. It combines with acids to form ammonium salts; thus, with hydrochloric acid it forms ammonium chloride (sal ammoniac); with nitric acid, ammonium nitrate, etc. Perfectly dry ammonia gas will not combine with perfectly dry hydrogen chloride gas; moisture is necessary to bring about the reaction.
As a demonstration experiment under air with ambient moisture, opened bottles of concentrated ammonia and hydrochloric acid solutions produce a cloud of ammonium chloride, which seems to appear 'out of nothing' as the salt aerosol forms where the two diffusing clouds of reagents meet between the two bottles.
The salts produced by the action of ammonia on acids are known as the and all contain the ammonium ion ().
Although ammonia is well known as a weak base, it can also act as an extremely weak acid. It is a protic substance and is capable of formation of amides (which contain the ion). For example, lithium dissolves in liquid ammonia to give a blue solution (solvated electron) of lithium amide:
Self-dissociation.
Like water, liquid ammonia undergoes molecular autoionisation to form its acid and base conjugates:
Ammonia often functions as a weak base, so it has some buffering ability. Shifts in pH will cause more or fewer ammonium cations () and amide anions () to be present in solution. At standard pressure and temperature,
K = [NH4+][NH2−] = 10−30.
Combustion.
Ammonia does not burn readily or sustain combustion, except under narrow fuel-to-air mixtures of 15–28% ammonia by volume in air. When mixed with oxygen, it burns with a pale yellowish-green flame. Ignition occurs when chlorine is passed into ammonia, forming nitrogen and hydrogen chloride; if chlorine is present in excess, then the highly explosive nitrogen trichloride () is also formed.
The combustion of ammonia to form nitrogen and water is exothermic:
, Δ"H"°r = −1267.20 kJ (or −316.8 kJ/mol if expressed per mol of )
The standard enthalpy change of combustion, Δ"H"°c, expressed per mole of ammonia and with condensation of the water formed, is −382.81 kJ/mol. Dinitrogen is the thermodynamic product of combustion: all nitrogen oxides are unstable with respect to and , which is the principle behind the catalytic converter. Nitrogen oxides can be formed as kinetic products in the presence of appropriate catalysts, a reaction of great industrial importance in the production of nitric acid:
A subsequent reaction leads to :
The combustion of ammonia in air is very difficult in the absence of a catalyst (such as platinum gauze or warm chromium(III) oxide), due to the relatively low heat of combustion, a lower laminar burning velocity, high auto-ignition temperature, high heat of vapourization, and a narrow flammability range. However, recent studies have shown that efficient and stable combustion of ammonia can be achieved using swirl combustors, thereby rekindling research interest in ammonia as a fuel for thermal power production. The flammable range of ammonia in dry air is 15.15–27.35% and in 100% relative humidity air is 15.95–26.55%. For studying the kinetics of ammonia combustion, knowledge of a detailed reliable reaction mechanism is required, but this has been challenging to obtain.
Precursor to organonitrogen compounds.
Ammonia is a direct or indirect precursor to most manufactured nitrogen-containing compounds. It is the precursor to nitric acid, which is the source for most N-substituted aromatic compounds.
Amines can be formed by the reaction of ammonia with alkyl halides or, more commonly, with alcohols:
Its ring-opening reaction with ethylene oxide gives ethanolamine, diethanolamine, and triethanolamine.
Amides can be prepared by the reaction of ammonia with carboxylic acid and their derivatives. For example, ammonia reacts with formic acid (HCOOH) to yield formamide () when heated. Acyl chlorides are the most reactive, but the ammonia must be present in at least a twofold excess to neutralise the hydrogen chloride formed. Esters and anhydrides also react with ammonia to form amides. Ammonium salts of carboxylic acids can be dehydrated to amides by heating to 150–200 °C as long as no thermally sensitive groups are present.
Other organonitrogen compounds include alprazolam, ethanolamine, ethyl carbamate and hexamethylenetetramine.
Precursor to inorganic nitrogenous compounds.
Nitric acid is generated via the Ostwald process by oxidation of ammonia with air over a platinum catalyst at , ≈9 atm. Nitric oxide and nitrogen dioxide are intermediates in this conversion:
Nitric acid is used for the production of fertilisers, explosives, and many organonitrogen compounds.
The hydrogen in ammonia is susceptible to replacement by a myriad of substituents.
Ammonia gas reacts with metallic sodium to give sodamide, .
With chlorine, monochloramine is formed.
Pentavalent ammonia, known as λ5-amine or nitrogen pentahydride, decomposes spontaneously into trivalent ammonia (λ3-amine) and hydrogen gas at normal conditions. It was once investigated, in 1966, as a possible solid rocket fuel.
Ammonia is also used to make the following compounds:
Ammonia is a ligand forming metal ammine complexes. For historical reasons, ammonia is named ammine in the nomenclature of coordination compounds. One notable ammine complex is cisplatin, a widely used anticancer drug. Ammine complexes of chromium(III) formed the basis of Alfred Werner's revolutionary theory on the structure of coordination compounds. Werner noted only two isomers ("fac"- and "mer"-) of the complex could be formed, and concluded the ligands must be arranged around the metal ion at the vertices of an octahedron.
Ammonia forms 1:1 adducts with a variety of Lewis acids such as , phenol, and . Ammonia is a hard base (HSAB theory) and its E & C parameters are EB = 2.31 and CB = 2.04. Its relative donor strength toward a series of acids, versus other Lewis bases, can be illustrated by C-B plots.
Detection and determination.
Ammonia in solution.
Ammonia and ammonium salts can be readily detected, in very minute traces, by the addition of Nessler's solution, which gives a distinct yellow colouration in the presence of the slightest trace of ammonia or ammonium salts. The amount of ammonia in ammonium salts can be estimated quantitatively by distillation of the salts with sodium hydroxide (NaOH) or potassium hydroxide (KOH), the ammonia evolved being absorbed in a known volume of standard sulfuric acid and the excess of acid then determined volumetrically; or the ammonia may be absorbed in hydrochloric acid and the ammonium chloride so formed precipitated as ammonium hexachloroplatinate, .
Gaseous ammonia.
Sulfur sticks are burnt to detect small leaks in industrial ammonia refrigeration systems. Larger quantities can be detected by warming the salts with a caustic alkali or with quicklime, when the characteristic smell of ammonia will be at once apparent. Ammonia is an irritant and irritation increases with concentration; the permissible exposure limit is 25 ppm, and concentrations above 500 ppm by volume can be lethal. Higher concentrations are hardly detected by conventional detectors; the type of detector is chosen according to the sensitivity required (e.g. semiconductor, catalytic, electrochemical). Holographic sensors have been proposed for detecting concentrations up to 12.5% in volume.
In a laboratorial setting, gaseous ammonia can be detected by using concentrated hydrochloric acid or gaseous hydrogen chloride. A dense white fume (which is ammonium chloride vapor) arises from the reaction between ammonia and HCl(g).
Ammoniacal nitrogen (NH3–N).
Ammoniacal nitrogen (NH3–N) is a measure commonly used for testing the quantity of ammonium ions, derived naturally from ammonia, and returned to ammonia via organic processes, in water or waste liquids. It is a measure used mainly for quantifying values in waste treatment and water purification systems, as well as a measure of the health of natural and man-made water reserves. It is measured in units of mg/L (milligram per litre).
History.
The ancient Greek historian Herodotus mentioned that there were outcrops of salt in an area of Libya that was inhabited by a people called the 'Ammonians' (now the Siwa oasis in northwestern Egypt, where salt lakes still exist). The Greek geographer Strabo also mentioned the salt from this region. However, the ancient authors Dioscorides, Apicius, Arrian, Synesius, and Aëtius of Amida described this salt as forming clear crystals that could be used for cooking and that were essentially rock salt. "Hammoniacus sal" appears in the writings of Pliny, although it is not known whether the term is equivalent to the more modern sal ammoniac (ammonium chloride).
The fermentation of urine by bacteria produces a solution of ammonia; hence fermented urine was used in Classical Antiquity to wash cloth and clothing, to remove hair from hides in preparation for tanning, to serve as a mordant in dyeing cloth, and to remove rust from iron. It was also used by ancient dentists to wash teeth.
In the form of sal ammoniac (نشادر, "nushadir"), ammonia was important to the Muslim alchemists. It was mentioned in the "Book of Stones", likely written in the 9th century and attributed to Jābir ibn Hayyān. It was also important to the European alchemists of the 13th century, being mentioned by Albertus Magnus. It was also used by dyers in the Middle Ages in the form of fermented urine to alter the colour of vegetable dyes. In the 15th century, Basilius Valentinus showed that ammonia could be obtained by the action of alkalis on sal ammoniac. At a later period, when sal ammoniac was obtained by distilling the hooves and horns of oxen and neutralizing the resulting carbonate with hydrochloric acid, the name 'spirit of hartshorn' was applied to ammonia.
Gaseous ammonia was first isolated by Joseph Black in 1756 by reacting "sal ammoniac" (ammonium chloride) with "calcined magnesia" (magnesium oxide). It was isolated again by Peter Woulfe in 1767, by Carl Wilhelm Scheele in 1770 and by Joseph Priestley in 1773 and was termed by him 'alkaline air'. Eleven years later in 1785, Claude Louis Berthollet ascertained its composition.
The production of ammonia from nitrogen in the air (and hydrogen) was invented by Fritz Haber and Robert LeRossignol. The patent application was filed in 1909 (USPTO Nr 1,202,995) and the patent was awarded in 1916. Later, Carl Bosch developed the industrial method for ammonia production (Haber–Bosch process). It was first used on an industrial scale in Germany during World War I, following the allied blockade that cut off the supply of nitrates from Chile. The ammonia was used to produce explosives to sustain war efforts. The Nobel Prize in Chemistry 1918 was awarded to Fritz Haber "for the synthesis of ammonia from its elements".
Before the availability of natural gas, hydrogen as a precursor to ammonia production was produced via the electrolysis of water or using the chloralkali process.
With the advent of the steel industry in the 20th century, ammonia became a byproduct of the production of coking coal.
Applications.
Fertiliser.
In the US as of 2019, approximately 88% of ammonia was used as fertilisers either as its salts, solutions or anhydrously. When applied to soil, it helps provide increased yields of crops such as maize and wheat. 30% of agricultural nitrogen applied in the US is in the form of anhydrous ammonia, and worldwide, 110 million tonnes are applied each year.
Solutions of ammonia ranging from 16% to 25% are used in the fermentation industry as a source of nitrogen for microorganisms and to adjust pH during fermentation.
Refrigeration–R717.
Because of ammonia's vapourization properties, it is a useful refrigerant. It was commonly used before the popularisation of chlorofluorocarbons (Freons). Anhydrous ammonia is widely used in industrial refrigeration applications and hockey rinks because of its high energy efficiency and low cost. It suffers from the disadvantages of toxicity and of requiring corrosion-resistant components, which restrict its domestic and small-scale use. Along with its use in modern vapour-compression refrigeration, it is used in a mixture with hydrogen and water in absorption refrigerators. The Kalina cycle, which is of growing importance to geothermal power plants, depends on the wide boiling range of the ammonia–water mixture.
Ammonia coolant is also used in the radiators aboard the International Space Station in loops that are used to regulate the internal temperature and enable temperature-dependent experiments. The ammonia is under sufficient pressure to remain liquid throughout the process. Single-phase ammonia cooling systems also serve the power electronics in each pair of solar arrays.
The potential importance of ammonia as a refrigerant has increased with the discovery that vented CFCs and HFCs are potent and stable greenhouse gases.
Antimicrobial agent for food products.
As early as 1895, it was known that ammonia was 'strongly antiseptic ... it requires 1.4 grams per litre to preserve beef tea (broth).' In one study, anhydrous ammonia destroyed 99.999% of zoonotic bacteria in three types of animal feed, but not silage. Anhydrous ammonia is currently used commercially to reduce or eliminate microbial contamination of beef.
Lean finely textured beef (popularly known as 'pink slime') in the beef industry is made from fatty beef trimmings (c. 50–70% fat) by removing the fat using heat and centrifugation, then treating it with ammonia to kill "E. coli". The process was deemed effective and safe by the US Department of Agriculture based on a study that found that the treatment reduces "E. coli" to undetectable levels. There have been safety concerns about the process as well as consumer complaints about the taste and smell of ammonia-treated beef.
Fuel.
Ammonia has been used as fuel, and is a proposed alternative to fossil fuels and hydrogen. Being liquid at ambient temperature under its own vapour pressure and having high volumetric and gravimetric energy density, ammonia is considered a suitable carrier for hydrogen, and may be cheaper than direct transport of liquid hydrogen.
Compared to hydrogen, ammonia is easier to store. Compared to hydrogen as a fuel, ammonia is much more energy efficient, and could be produced, stored and delivered at a much lower cost than hydrogen, which must be kept compressed or as a cryogenic liquid. The raw energy density of liquid ammonia is 11.5 MJ/L, which is about a third that of diesel.
Ammonia can be converted back to hydrogen to be used to power hydrogen fuel cells, or it may be used directly within high-temperature solid oxide direct ammonia fuel cells to provide efficient power sources that do not emit greenhouse gases. Ammonia to hydrogen conversion can be achieved through the sodium amide process or the catalytic decomposition of ammonia using solid catalysts.
Ammonia engines or ammonia motors, using ammonia as a working fluid, have been proposed and occasionally used. The principle is similar to that used in a fireless locomotive, but with ammonia as the working fluid, instead of steam or compressed air. Ammonia engines were used experimentally in the 19th century by Goldsworthy Gurney in the UK and the St. Charles Avenue Streetcar line in New Orleans in the 1870s and 1880s, and during World War II ammonia was used to power buses in Belgium.
Ammonia is sometimes proposed as a practical alternative to fossil fuel for internal combustion engines. However, ammonia cannot be easily used in existing Otto cycle engines because of its very narrow flammability range. Despite this, several tests have been run. Its high octane rating of 120 and low flame temperature allow the use of high compression ratios without a penalty of high NOx production. Since ammonia contains no carbon, its combustion cannot produce carbon dioxide, carbon monoxide, hydrocarbons, or soot.
Ammonia production currently creates 1.8% of global CO2 emissions. 'Green ammonia' is ammonia produced by using green hydrogen (hydrogen produced by electrolysis), whereas 'blue ammonia' is ammonia produced using blue hydrogen (hydrogen produced by steam methane reforming where the carbon dioxide has been captured and stored).
Rocket engines have also been fueled by ammonia. The Reaction Motors XLR99 rocket engine that powered the X-15 hypersonic research aircraft used liquid ammonia. Although not as powerful as other fuels, it left no soot in the reusable rocket engine, and its density approximately matches the density of the oxidiser, liquid oxygen, which simplified the aircraft's design.
In 2020, Saudi Arabia shipped 40 metric tons of liquid 'blue ammonia' to Japan for use as a fuel. It was produced as a by-product by petrochemical industries, and can be burned without giving off greenhouse gases. Its energy density by volume is nearly double that of liquid hydrogen. If the process of creating it can be scaled up via purely renewable resources, producing green ammonia, it could make a major difference in avoiding climate change. The company ACWA Power and the city of Neom have announced the construction of a green hydrogen and ammonia plant in 2020.
Green ammonia is considered as a potential fuel for future container ships. In 2020, the companies DSME and MAN Energy Solutions announced the construction of an ammonia-based ship; DSME plans to commercialise it by 2025. The use of ammonia as a potential alternative fuel for aircraft jet engines is also being explored.
Japan intends to implement a plan to develop ammonia co-firing technology that can increase the use of ammonia in power generation, as part of efforts to assist domestic and other Asian utilities to accelerate their transition to carbon neutrality.
In October 2021, the first International Conference on Fuel Ammonia (ICFA2021) was held.
In June 2022, IHI Corporation succeeded in reducing greenhouse gases by over 99% during combustion of liquid ammonia in a 2,000-kilowatt-class gas turbine achieving truly CO2-free power generation.
In July 2022, the Quad nations of Japan, the U.S., Australia and India agreed to promote technological development for clean-burning hydrogen and ammonia as fuels at the security grouping's first energy meeting. As of 2022, however, significant amounts of NOx are produced. Nitrous oxide may also be a problem, as it is a "greenhouse gas that is known to possess up to 300 times the Global Warming Potential (GWP) of carbon dioxide".
At high temperature and in the presence of a suitable catalyst, ammonia decomposes into its constituent elements. Decomposition of ammonia is a slightly endothermic process requiring 23 kJ/mol (5.5 kcal/mol) of ammonia, and yields hydrogen and nitrogen gas.
Other.
Remediation of gaseous emissions.
Ammonia is used to scrub sulfur dioxide from the burning of fossil fuels, and the resulting product is converted to ammonium sulfate for use as fertiliser. Ammonia neutralises the nitrogen oxide (NOx) pollutants emitted by diesel engines. This technology, called SCR (selective catalytic reduction), relies on a vanadia-based catalyst.
Ammonia may be used to mitigate gaseous spills of phosgene.
Stimulant.
Ammonia, as the vapour released by smelling salts, has found significant use as a respiratory stimulant. Ammonia is commonly used in the illegal manufacture of methamphetamine through a Birch reduction. The Birch method of making methamphetamine is dangerous because the alkali metal and liquid ammonia are both extremely reactive, and the temperature of liquid ammonia makes it susceptible to explosive boiling when reactants are added.
Textile.
Liquid ammonia is used for the treatment of cotton materials, giving properties similar to those obtained by mercerisation with alkalis. In particular, it is used for prewashing of wool.
Lifting gas.
At standard temperature and pressure, ammonia is less dense than air and has approximately 45–48% of the lifting power of hydrogen or helium. Ammonia has sometimes been used to fill balloons as a lifting gas. Because of its relatively high boiling point (compared to helium and hydrogen), ammonia could potentially be refrigerated and liquefied aboard an airship to reduce lift and add ballast (and returned to a gas to add lift and reduce ballast).
Fuming.
Ammonia has been used to darken quartersawn white oak in Arts & Crafts and Mission-style furniture. Ammonia fumes react with the natural tannins in the wood and cause it to change colour.
Safety.
The US Occupational Safety and Health Administration (OSHA) has set a 15-minute exposure limit for gaseous ammonia of 35 ppm by volume in the environmental air and an 8-hour exposure limit of 25 ppm by volume. The National Institute for Occupational Safety and Health (NIOSH) recently reduced the IDLH (Immediately Dangerous to Life and Health, the level to which a healthy worker can be exposed for 30 minutes without suffering irreversible health effects) from 500 ppm to 300 ppm based on recent, more conservative interpretations of original research from 1943. Other organisations have varying exposure levels. US Navy Standards [U.S. Bureau of Ships 1962] maximum allowable concentrations (MACs): for continuous exposure (60 days), 25 ppm; for exposure of 1 hour, 400 ppm.
Ammonia vapour has a sharp, irritating, pungent odor that acts as a warning of potentially dangerous exposure. The average odor threshold is 5 ppm, well below any danger or damage. Exposure to very high concentrations of gaseous ammonia can result in lung damage and death. Ammonia is regulated in the US as a non-flammable gas, but it meets the definition of a material that is toxic by inhalation and requires a hazardous safety permit when transported in quantities greater than .
Liquid ammonia is dangerous because it is hygroscopic and because it can cause caustic burns.
Toxicity.
The toxicity of ammonia solutions does not usually cause problems for humans and other mammals, as a specific mechanism exists to prevent its build-up in the bloodstream. Ammonia is converted to carbamoyl phosphate by the enzyme carbamoyl phosphate synthetase, and then enters the urea cycle to be either incorporated into amino acids or excreted in the urine. Fish and amphibians lack this mechanism, as they can usually eliminate ammonia from their bodies by direct excretion. Ammonia even at dilute concentrations is highly toxic to aquatic animals, and for this reason it is classified as "dangerous for the environment". Atmospheric ammonia plays a key role in the formation of fine particulate matter.
Ammonia is a constituent of tobacco smoke.
Coking wastewater.
Ammonia is present in coking wastewater streams, as a liquid by-product of the production of coke from coal. In some cases, the ammonia is discharged to the marine environment where it acts as a pollutant. The Whyalla Steelworks in South Australia is one example of a coke-producing facility that discharges ammonia into marine waters.
Aquaculture.
Ammonia toxicity is believed to be a cause of otherwise unexplained losses in fish hatcheries. Excess ammonia may accumulate and cause alteration of metabolism or increases in the body pH of the exposed organism. Tolerance varies among fish species. At lower concentrations, around 0.05 mg/L, un-ionised ammonia is harmful to fish species and can result in poor growth and feed conversion rates, reduced fecundity and fertility, and increased stress and susceptibility to bacterial infections and diseases. Exposed to excess ammonia, fish may suffer loss of equilibrium, hyper-excitability, increased respiratory activity and oxygen uptake, and increased heart rate. At concentrations exceeding 2.0 mg/L, ammonia causes gill and tissue damage, extreme lethargy, convulsions, coma, and death. Experiments have shown that the lethal concentration for a variety of fish species ranges from 0.2 to 2.0 mg/L.
During winter, when reduced feeds are administered to aquaculture stock, ammonia levels can be higher. Lower ambient temperatures reduce the rate of algal photosynthesis so less ammonia is removed by any algae present. Within an aquaculture environment, especially at large scale, there is no fast-acting remedy to elevated ammonia levels. Prevention rather than correction is recommended to reduce harm to farmed fish and in open water systems, the surrounding environment.
Storage information.
Similar to propane, anhydrous ammonia boils below room temperature at atmospheric pressure. A storage vessel capable of withstanding the vapour pressure of the liquid at ambient temperature is suitable to contain it. Ammonia is used in numerous different industrial applications requiring carbon or stainless steel storage vessels. Ammonia with at least 0.2% by weight water content is not corrosive to carbon steel. Carbon steel storage tanks with 0.2% by weight or more of water could last more than 50 years in service. Experts warn that ammonium compounds not be allowed to come in contact with bases (unless in an intended and contained reaction), as dangerous quantities of ammonia gas could be released.
Laboratory.
The hazards of ammonia solutions depend on the concentration: 'dilute' ammonia solutions are usually 5–10% by weight (< 5.62 mol/L); 'concentrated' solutions are usually prepared at >25% by weight. A 25% (by weight) solution has a density of 0.907 g/cm3, and a solution that has a lower density will be more concentrated. The European Union classification of ammonia solutions is given in the table.
The ammonia vapour from concentrated ammonia solutions is severely irritating to the eyes and the respiratory tract, and experts warn that these solutions only be handled in a fume hood. Saturated ('0.880') solutions can develop a significant pressure inside a closed bottle in warm weather, and experts also warn that the bottle be opened with care. This is not usually a problem for 25% ('0.900') solutions.
Experts warn that ammonia solutions not be mixed with halogens, as toxic and/or explosive products are formed. Experts also warn that prolonged contact of ammonia solutions with silver, mercury or iodide salts can lead to explosive products. Such mixtures are often formed during qualitative inorganic analysis, and need to be lightly acidified but not concentrated (<6% w/v) before disposal once the test is completed.
Laboratory use of anhydrous ammonia (gas or liquid).
Anhydrous ammonia is classified as toxic (T) and dangerous for the environment (N). The gas is flammable (autoignition temperature: 651 °C) and can form explosive mixtures with air (16–25%). The permissible exposure limit (PEL) in the United States is 50 ppm (35 mg/m3), while the IDLH concentration is estimated at 300 ppm. Repeated exposure to ammonia lowers the sensitivity to the smell of the gas: normally the odour is detectable at concentrations of less than 50 ppm, but desensitised individuals may not detect it even at concentrations of 100 ppm. Anhydrous ammonia corrodes copper- and zinc-containing alloys, which makes brass fittings not appropriate for handling the gas. Liquid ammonia can also attack rubber and certain plastics.
Ammonia reacts violently with the halogens. Nitrogen triiodide, a primary high explosive, is formed when ammonia comes in contact with iodine. Ammonia causes the explosive polymerisation of ethylene oxide. It also forms explosive fulminating compounds with compounds of gold, silver, mercury, germanium or tellurium, and with stibine. Violent reactions have also been reported with acetaldehyde, hypochlorite solutions, potassium ferricyanide and peroxides.
Production.
Global ammonia production 1950–2020 (expressed as fixed nitrogen in U.S. tons)
Ammonia has one of the highest rates of production of any inorganic chemical. Production is sometimes expressed in terms of 'fixed nitrogen'. Global production was estimated as being 160 million tonnes in 2020 (147 million tonnes of fixed nitrogen). China accounted for 26.5% of that, followed by Russia at 11.0%, the United States at 9.5%, and India at 8.3%.
Before the start of World War I, most ammonia was obtained by the dry distillation of nitrogenous vegetable and animal waste products, including camel dung, where it was distilled by the reduction of nitrous acid and nitrites with hydrogen; in addition, it was produced by the distillation of coal, and also by the decomposition of ammonium salts by alkaline hydroxides such as quicklime:
For small scale laboratory synthesis, one can heat urea and calcium hydroxide or sodium hydroxide:
Electrochemical.
Ammonia can be synthesized electrochemically. The only required inputs are sources of nitrogen (potentially atmospheric) and hydrogen (water), allowing generation at the point of use. The availability of renewable energy creates the possibility of zero emission production.
'Green Ammonia' is a name for ammonia produced from hydrogen that is in turn produced from carbon-free sources such as electrolysis of water. Ammonia from this source can be used as a liquid fuel with zero contribution to global climate change.
Another electrochemical synthesis mode involves the reductive formation of lithium nitride, which can be protonated to ammonia, given a proton source, which can be hydrogen. In the early years of the development of this process, ethanol was used as such a source. The first use of this chemistry was reported in 1930, when lithium solutions in ethanol were used to produce ammonia at pressures of up to 1000 bar. In 1994, Tsuneto et al. used lithium electrodeposition in tetrahydrofuran to synthesize ammonia at more moderate pressures with reasonable Faradaic efficiency. Other studies have since used the ethanol–tetrahydrofuran system for electrochemical ammonia synthesis. In 2019, Lazouski et al. proposed a mechanism to explain the observed ammonia formation kinetics.
In 2020, Lazouski et al. developed a solvent-agnostic gas diffusion electrode to improve nitrogen transport to the reactive lithium. The study observed production rates of up to 30 ± 5 nmol/s/cm2 and Faradaic efficiencies of up to 47.5 ± 4% at ambient temperature and 1 bar pressure.
In 2021, Suryanto et al. replaced ethanol with a tetraalkyl phosphonium salt. This cation can stably undergo deprotonation–reprotonation cycles, while it enhances the medium's ionic conductivity. The study observed production rates of 53 ± 1 nmol/s/cm2 at 69 ± 1% Faradaic efficiency in experiments under 0.5 bar hydrogen and 19.5 bar nitrogen partial pressure at ambient temperature.
In 2022, Fu et al. reported the production of ammonia via the lithium-mediated process in a continuous-flow electrolyzer, also demonstrating hydrogen gas as the proton source. The study synthesized ammonia at 61 ± 1% Faradaic efficiency at a current density of −6 mA/cm2 at 1 bar and room temperature.
Biochemistry and medicine.
Ammonia is essential for life. For example, it is required for the formation of amino acids and nucleic acids, fundamental building blocks of life. Ammonia is however quite toxic. Nature thus uses carriers for ammonia. Within a cell, glutamate serves this role. In the bloodstream, glutamine is a source of ammonia.
Ethanolamine, required for cell membranes, is the substrate for ethanolamine ammonia-lyase, which produces ammonia:
Ammonia is both a metabolic waste and a metabolic input throughout the biosphere. It is an important source of nitrogen for living systems. Although atmospheric nitrogen abounds (more than 75%), few living creatures are capable of using atmospheric nitrogen in its diatomic form, N2 gas. Therefore, nitrogen fixation is required for the synthesis of amino acids, which are the building blocks of protein. Some plants rely on ammonia and other nitrogenous wastes incorporated into the soil by decaying matter. Others, such as nitrogen-fixing legumes, benefit from symbiotic relationships with rhizobia bacteria that create ammonia from atmospheric nitrogen.
In humans, inhaling ammonia in high concentrations can be fatal. Exposure to ammonia can cause headaches, edema, impaired memory, seizures and coma as it is neurotoxic in nature.
Biosynthesis.
In certain organisms, ammonia is produced from atmospheric nitrogen by enzymes called nitrogenases. The overall process is called nitrogen fixation. Intense effort has been directed toward understanding the mechanism of biological nitrogen fixation. The scientific interest in this problem is motivated by the unusual structure of the active site of the enzyme, which consists of an iron–molybdenum–sulfur cluster ensemble (the FeMo cofactor).
Ammonia is also a metabolic product of amino acid deamination catalyzed by enzymes such as glutamate dehydrogenase 1. Ammonia excretion is common in aquatic animals. In humans, it is quickly converted to urea (by the liver), which is much less toxic, particularly less basic. This urea is a major component of the dry weight of urine. Most reptiles, birds, insects, and snails excrete uric acid as their sole nitrogenous waste.
Physiology.
Ammonia plays a role in both normal and abnormal animal physiology. It is biosynthesised through normal amino acid metabolism and is toxic in high concentrations. The liver converts ammonia to urea through a series of reactions known as the urea cycle. Liver dysfunction, such as that seen in cirrhosis, may lead to elevated amounts of ammonia in the blood (hyperammonemia). Likewise, defects in the enzymes responsible for the urea cycle, such as ornithine transcarbamylase, lead to hyperammonemia. Hyperammonemia contributes to the confusion and coma of hepatic encephalopathy, as well as the neurological disease common in people with urea cycle defects and organic acidurias.
Ammonia is important for normal animal acid/base balance. After formation of ammonium from glutamine, α-ketoglutarate may be degraded to produce two bicarbonate ions, which are then available as buffers for dietary acids. Ammonium is excreted in the urine, resulting in net acid loss. Ammonia may itself diffuse across the renal tubules, combine with a hydrogen ion, and thus allow for further acid excretion.
Excretion.
Ammonium ions are a toxic waste product of metabolism in animals. In fish and aquatic invertebrates, it is excreted directly into the water. In mammals, sharks, and amphibians, it is converted in the urea cycle to urea, which is less toxic and can be stored more efficiently. In birds, reptiles, and terrestrial snails, metabolic ammonium is converted into uric acid, which is solid and can therefore be excreted with minimal water loss.
Extraterrestrial occurrence.
Ammonia has been detected in the atmospheres of the giant planets Jupiter, Saturn, Uranus and Neptune, along with other gases such as methane, hydrogen, and helium. The interior of Saturn may include frozen ammonia crystals. It is found on Deimos and Phobos–the two moons of Mars.
Interstellar space.
Ammonia was first detected in interstellar space in 1968, based on microwave emissions from the direction of the galactic core. This was the first polyatomic molecule to be so detected.
The sensitivity of the molecule to a broad range of excitations and the ease with which it can be observed in a number of regions has made ammonia one of the most important molecules for studies of molecular clouds. The relative intensity of the ammonia lines can be used to measure the temperature of the emitting medium.
Several isotopic species of ammonia, including deuterated forms, have been detected in the interstellar medium. The detection of triply deuterated ammonia was considered a surprise, as deuterium is relatively scarce. It is thought that the low-temperature conditions allow this molecule to survive and accumulate.
Since its interstellar discovery, ammonia has proved to be an invaluable spectroscopic tool in the study of the interstellar medium. With a large number of transitions sensitive to a wide range of excitation conditions, ammonia has been widely detected astronomically–its detection has been reported in hundreds of journal articles. Listed below is a sample of journal articles that highlights the range of detectors that have been used to identify ammonia.
The study of interstellar ammonia has been important to a number of areas of research in the last few decades. Some of these are delineated below and primarily involve using ammonia as an interstellar thermometer.
Interstellar formation mechanisms.
The interstellar abundance of ammonia has been measured for a variety of environments. The [NH3]/[H2] ratio has been estimated to range from 10−7 in small dark clouds up to 10−5 in the dense core of the Orion molecular cloud complex. Although a total of 18 production routes have been proposed, the principal formation mechanism for interstellar ammonia is the reaction:
The rate constant, "k", of this reaction depends on the temperature of the environment, with a value of formula_0 at 10 K. The rate constant was calculated from the formula &NoBreak;&NoBreak;. For the primary formation reaction, "a" = and "B" = −0.47. Assuming an abundance of formula_1and an electron abundance of 10−7 typical of molecular clouds, the formation will proceed at a rate of in a molecular cloud of total density .
All other proposed formation reactions have rate constants of between two and 13 orders of magnitude smaller, making their contribution to the abundance of ammonia relatively insignificant. As an example of the minor contribution other formation reactions play, the reaction:
has a rate constant of 2.2×10−15. Assuming a total density of 10^5 cm−3 and a reactant abundance ratio of 10−7 relative to H2, this reaction proceeds at a rate of 2.2×10−12, more than three orders of magnitude slower than the primary reaction above.
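A minimal numerical sketch of this rate arithmetic is given below. It only reproduces the order-of-magnitude comparison quoted above; the prefactor "a" of the primary reaction is not given in the text, so the value used for it is a hypothetical placeholder, and the species are left generic.

```python
# Illustrative sketch of the rate arithmetic quoted above; `a_primary` is a
# hypothetical placeholder, not a value from the cited sources.

def rate_constant(a, B, T):
    # Modified-Arrhenius form k = a * (T/300)**B, the form quoted in the text
    return a * (T / 300.0) ** B

# Minor formation route
k_minor = 2.2e-15        # cm^3 s^-1 (rate constant quoted above)
n_total = 1e5            # total density, cm^-3
abundance_ratio = 1e-7   # reactant abundance relative to H2 (assumed in the text)
n_reactant = abundance_ratio * n_total

rate_minor = k_minor * n_total * n_reactant   # cm^-3 s^-1
print(f"minor-route rate ~ {rate_minor:.1e} cm^-3 s^-1")   # ~2.2e-12, as quoted

# Temperature dependence of the primary route (B = -0.47); `a_primary` is hypothetical
a_primary = 1.0e-9       # cm^3 s^-1, placeholder only
print(rate_constant(a_primary, -0.47, 10.0))  # larger than at 300 K, since B is negative
```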
Some of the other possible formation reactions are:
Interstellar destruction mechanisms.
There are 113 total proposed reactions leading to the destruction of ammonia. Of these, 39 were tabulated in extensive tables of the chemistry among C, N and O compounds. A review of interstellar ammonia cites the following reactions as the principal dissociation mechanisms:
with rate constants of 4.39×10−9 and 2.2×10−9, respectively. The above equations (1, 2) run at rates of 8.8×10−9 and 4.4×10−13, respectively. These calculations assumed the given rate constants and abundances of []/[] = 10−5, []/[] = 2×10−5, []/[] = 2×10−9, and total densities of "n" = 10^5, typical of cold, dense, molecular clouds. Clearly, between these two primary reactions, equation (1) is the dominant destruction reaction, with a rate ≈10,000 times faster than equation (2). This is due to the relatively high abundance of the reactant in equation (1).
Single antenna detections.
Radio observations of ammonia from the Effelsberg 100-m Radio Telescope reveal that the ammonia line is separated into two components–a background ridge and an unresolved core. The background corresponds well with the locations of previously detected CO. The 25 m Chilbolton telescope in England detected radio signatures of ammonia in H II regions, H2O masers, Herbig–Haro (H–H) objects, and other objects associated with star formation. A comparison of emission line widths indicates that turbulent or systematic velocities do not increase in the central cores of molecular clouds.
Microwave radiation from ammonia was observed in several galactic objects including W3(OH), Orion A, W43, W51, and five sources in the galactic centre. The high detection rate indicates that this is a common molecule in the interstellar medium and that high-density regions are common in the galaxy.
Interferometric studies.
VLA observations of ammonia in seven regions with high-velocity gaseous outflows revealed condensations of less than 0.1 pc in L1551, S140, and Cepheus A. Three individual condensations were detected in Cepheus A, one of them with a highly elongated shape. They may play an important role in creating the bipolar outflow in the region.
Extragalactic ammonia was imaged using the VLA in IC 342. The hot gas has temperatures above 70 K, which was inferred from ammonia line ratios and appears to be closely associated with the innermost portions of the nuclear bar seen in CO. Ammonia was also monitored by the VLA toward a sample of four galactic ultracompact HII regions: G9.62+0.19, G10.47+0.03, G29.96-0.02, and G31.41+0.31. Based upon temperature and density diagnostics, it is concluded that in general such clumps are probably the sites of massive star formation in an early evolutionary phase, prior to the development of an ultracompact HII region.
Infrared detections.
Absorption at 2.97 micrometres due to solid ammonia was recorded from interstellar grains in the Becklin–Neugebauer Object and probably in NGC 2264-IR as well. This detection helped explain the physical shape of previously poorly understood and related ice absorption lines.
A spectrum of the disk of Jupiter was obtained from the Kuiper Airborne Observatory, covering the 100 to 300 cm−1 spectral range. Analysis of the spectrum provides information on global mean properties of ammonia gas and an ammonia ice haze.
A total of 149 dark cloud positions were surveyed for evidence of 'dense cores' by using the (J,K) = (1,1) rotating inversion line of NH3. In general, the cores are not spherically shaped, with aspect ratios ranging from 1.1 to 4.4. It is also found that cores with stars have broader lines than cores without stars.
Ammonia has been detected in the Draco Nebula and in one or possibly two molecular clouds, which are associated with the high-latitude galactic infrared cirrus. The finding is significant because they may represent the birthplaces for the Population I metallicity B-type stars in the galactic halo that could have been born in the galactic disk.
Observations of nearby dark clouds.
By balancing absorption and stimulated emission with spontaneous emission, it is possible to construct a relation between excitation temperature and density. Moreover, since the transitional levels of ammonia can be approximated by a 2-level system at low temperatures, this calculation is fairly simple. This premise can be applied to dark clouds, regions suspected of having extremely low temperatures and possible sites for future star formation. Detections of ammonia in dark clouds show very narrow lines – indicative not only of low temperatures, but also of a low level of inner-cloud turbulence. Line ratio calculations provide a measurement of cloud temperature that is independent of previous CO observations. The ammonia observations were consistent with CO measurements of rotation temperatures of ≈10 K. With this, densities can be determined, and have been calculated to range between 10^4 and 10^5 cm−3 in dark clouds. Mapping of ammonia emission gives typical cloud sizes of 0.1 pc and masses near 1 solar mass. These cold, dense cores are the sites of future star formation.
UC HII regions.
Ultra-compact HII regions are among the best tracers of high-mass star formation. The dense material surrounding UCHII regions is likely primarily molecular. Since a complete study of massive star formation necessarily involves the cloud from which the star formed, ammonia is an invaluable tool in understanding this surrounding molecular material. Since this molecular material can be spatially resolved, it is possible to constrain the heating/ionising sources, temperatures, masses, and sizes of the regions. Doppler-shifted velocity components allow for the separation of distinct regions of molecular gas that can trace outflows and hot cores originating from forming stars.
Extragalactic detection.
Ammonia has been detected in external galaxies, and by simultaneously measuring several lines, it is possible to directly measure the gas temperature in these galaxies. Line ratios imply that gas temperatures are warm (≈50 K), originating from dense clouds with sizes of tens of parsecs. This picture is consistent with the picture within our Milky Way galaxy – hot dense molecular cores form around newly forming stars embedded in larger clouds of molecular material on the scale of several hundred parsecs (giant molecular clouds; GMCs).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "5.2\\times 10^{-6}"
},
{
"math_id": 1,
"text": "3\\times 10^{-7}"
}
]
| https://en.wikipedia.org/wiki?curid=1365 |
1365053 | Heaviside condition | Optimal condition for a hypothetical transmission line
A transmission line which meets the Heaviside condition, named for Oliver Heaviside (1850–1925), and certain other conditions can transmit signals without dispersion and without distortion. The importance of the Heaviside condition is that it showed the possibility of dispersionless transmission of telegraph signals. In some cases, the performance of a transmission line can be improved by adding inductive loading to the cable.
The condition.
A transmission line can be represented as a distributed-element model of its primary line constants as shown in the figure. The primary constants are the electrical properties of the cable per unit length and are: capacitance "C" (in farads per meter), inductance "L" (in henries per meter), series resistance "R" (in ohms per meter), and shunt conductance "G" (in siemens per meter).
The Heaviside condition is satisfied when: formula_0
The series resistance and shunt conductivity cause losses in the line; for an ideal transmission line, formula_1. An ideal line trivially meets the Heaviside condition.
Background.
A signal on a transmission line can become distorted even if the line constants, and the resulting transmission function, are all perfectly linear. There are two mechanisms: firstly, the attenuation of the line can vary with frequency, which results in a change to the shape of a pulse transmitted down the line. Secondly, and usually more problematically, distortion is caused by a frequency dependence of the phase velocity of the transmitted signal's frequency components. If different frequency components of the signal are transmitted at different velocities the signal becomes "smeared out" in space and time, a form of distortion called dispersion.
A transmission line is "dispersionless", if the velocity of signals is independent of frequency. Mathematically formula_2.
A transmission line is "distortionless" if it is dispersionless and the attenuation coefficient is independent of frequency. Mathematically formula_3.
This was a major problem on the first transatlantic telegraph cable and led to the theory of the causes of dispersion being investigated first by Lord Kelvin and then by Heaviside who discovered in 1876 how it could be countered. Dispersion of telegraph pulses, if severe enough, will cause them to overlap with adjacent pulses, causing what is now called intersymbol interference. To prevent intersymbol interference it was necessary to reduce the transmission speed of the transatlantic telegraph cable to the equivalent of 1⁄15 baud. This is an exceptionally slow data transmission rate, even for human operators who had great difficulty operating a morse key that slowly.
For voice circuits (telephone) the frequency response distortion is usually more important than dispersion whereas digital signals are highly susceptible to dispersion distortion. For any kind of analogue image transmission such as video or facsimile both kinds of distortion need to be mitigated.
An analogous Heaviside condition for dispersionless propagation in left-handed transmission line metamaterials cannot be derived, since no combination of reactive and resistive elements would yield a constant group velocity.
Derivation.
The transmission function of a transmission line is defined in terms of its input and output voltages when correctly terminated (that is, with no reflections) as
formula_4
where formula_5 represents distance from the transmitter in meters and
formula_6.
are the secondary line constants, "α" being the attenuation constant in nepers per metre and "β" being the phase constant in radians per metre. For no distortion, "α" is required to be independent of the angular frequency "ω", while "β" must be proportional to "ω". This requirement for proportionality to frequency is due to the relationship between the velocity, "v", and phase constant, "β" being given by,
formula_7
and the requirement that phase velocity, "v", be constant at all frequencies.
The relationship between the primary and secondary line constants is given by
formula_8
If the Heaviside condition holds, then the square root function can be carried out explicitly as:
formula_9
where
formula_10.
Hence
formula_11.
formula_12.
formula_13.
Velocity is independent of frequency if the product formula_14 is independent of frequency. Attenuation is independent of frequency if the product formula_15 is independent of frequency.
Characteristic impedance.
The characteristic impedance of a lossy transmission line is given by
formula_16
In general, it is not possible to impedance-match this transmission line at all frequencies with any finite network of discrete elements, because the impedances of such networks are rational functions of jω, whereas the general expression for the characteristic impedance is irrational due to the square root term. However, for a line which meets the Heaviside condition, there is a common factor in the fraction which cancels out the frequency dependent terms, leaving,
formula_17
which is a real number, and independent of frequency if L/C is independent of frequency. The line can therefore be impedance-matched with just a resistor at either end. This expression for formula_18 is the same as for a lossless line (formula_19) with the same "L" and "C", although the attenuation (due to "R" and "G") is of course still present.
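The derivation above can be checked numerically. The following sketch uses assumed, illustrative primary constants (not measured values for any particular cable), sets "G" so that the Heaviside condition holds, and confirms that the attenuation and velocity then come out the same at every frequency while the characteristic impedance is real and equal to √(L/C).

```python
# Numerical check of the Heaviside condition with assumed line constants.
import numpy as np

R = 0.1      # ohm/m  (assumed)
L = 0.5e-6   # H/m    (assumed)
C = 100e-12  # F/m    (assumed)
G = R * C / L            # chosen so that G/C = R/L (Heaviside condition)

for f in (1e3, 1e5, 1e7):                      # Hz
    w = 2 * np.pi * f
    gamma = np.sqrt((R + 1j * w * L) * (G + 1j * w * C))
    alpha, beta = gamma.real, gamma.imag       # Np/m, rad/m
    Z0 = np.sqrt((R + 1j * w * L) / (G + 1j * w * C))
    print(f"f={f:.0e} Hz  alpha={alpha:.3e} Np/m  "
          f"v={w / beta:.3e} m/s  |Z0|={abs(Z0):.1f} ohm  Im(Z0)={Z0.imag:.1e}")

# alpha = sqrt(R*G) and v = 1/sqrt(L*C) are identical at every frequency,
# and Z0 is (numerically) real and equal to sqrt(L/C), about 70.7 ohm here.
```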
Practical use.
A real line will have a "G" that is very low and will usually not come anywhere close to meeting the Heaviside condition. The normal situation is that
formula_20 by several orders of magnitude.
To make a line meet the Heaviside condition one of the four primary constants needs to be adjusted, and the question is which one. "G" could be increased, but this is highly undesirable since increasing "G" will increase the loss. Decreasing "R" sends the loss in the right direction, but this is still not usually a satisfactory solution. "R" must be decreased by a large factor, and to do this the conductor cross-sections must be increased dramatically. This not only makes the cable much bulkier, but also adds significantly to the amount of copper (or other metal) being used and hence the cost and weight. Decreasing the capacitance is difficult because it requires using a different dielectric with a lower permittivity. Gutta-percha insulation used in the early trans-Atlantic cables has a dielectric constant of about 3, hence "C" could be decreased by a factor of no more than 3. This leaves increasing "L", which is the usual solution adopted.
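To get a feel for the size of the mismatch, the back-of-the-envelope sketch below computes the inductance that would be needed to satisfy the condition, "L" = "RC"/"G", for assumed, order-of-magnitude constants of an unloaded pair; the numbers are illustrative only, not data for any real cable.

```python
# Back-of-the-envelope sketch: the inductance needed for G/C = R/L is L = R*C/G.
# All constants below are assumed, order-of-magnitude values, not measured data.
R = 0.17          # ohm/m  (assumed)
C = 50e-12        # F/m    (assumed)
G = 1e-11         # S/m    (assumed; very small for a good dielectric)
L_existing = 0.6e-6                     # H/m (assumed)

L_required = R * C / G                  # H/m needed to meet the condition exactly
print(f"L required = {L_required:.2f} H/m "
      f"({L_required / L_existing:.1e} times the existing inductance)")
```

With "G" this small, the required inductance comes out near 0.85 H/m, roughly six orders of magnitude above the unloaded value, which is why practical loading only moves a line towards, rather than exactly onto, the Heaviside condition.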
"L" is increased by loading the cable with a metal with high magnetic permeability. It is also possible to load a cable of conventional construction by adding discrete loading coils at regular intervals. This is not identical to a distributed loading, the difference being that with loading coils there is distortionless transmission up to a definite cut-off frequency beyond which the attenuation increases rapidly.
Loading cables is no longer a common practice. Instead, regularly spaced digital repeaters are now placed in long lines to maintain the desired shape and duration of pulses for long-distance transmission.
Frequency-dependent line parameters.
When the line parameters are frequency dependent, there are additional considerations. Achieving the Heaviside condition is more difficult when some or all of the line parameters depend on frequency. Typically, R (due to skin effect) and G (due to dielectric loss) are strong functions of frequency. If magnetic material is added to increase L, then L also becomes frequency dependent.
The chart on the left plots the ratios formula_21 for typical transmission lines made from non-magnetic materials. The Heaviside condition is satisfied where the blue curve touches or crosses a red curve.
The knee of the blue curve occurs at the frequency where formula_22.
There are three red curves indicating typical low, medium, and high-quality dielectrics. Pulp insulation (used for telephone lines in the early 20th century), gutta-percha, and modern foamed plastics are examples of low, medium, and high-quality dielectrics. The knee of each curve occurs at the frequency where formula_23. The reciprocal of this frequency is known as the dielectric relaxation time of the dielectric. Above this frequency, the value of G/(ωC) is the same as the loss tangent of the dielectric material. The curve is depicted as flat on the figure, but loss tangent shows some frequency dependence. The value of G/(ωC) at all frequencies is determined entirely by properties of the dielectric and is independent of the transmission line cross-section.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{G}{C} = \\frac{R}{L}."
},
{
"math_id": 1,
"text": "\\scriptstyle R=G=0"
},
{
"math_id": 2,
"text": " \\frac {d} {d \\omega} v = 0 "
},
{
"math_id": 3,
"text": " \\frac {d} {d \\omega} \\alpha = 0 "
},
{
"math_id": 4,
"text": "\\frac{V_\\mathrm{out}}{V_\\mathrm{in}} = e^{- \\gamma x}"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": " \\gamma = \\alpha + j \\beta = \\sqrt{(R + j \\omega L)(G + j \\omega C)} "
},
{
"math_id": 7,
"text": "v = \\frac{\\omega}{\\beta}"
},
{
"math_id": 8,
"text": "\\gamma^2 = (\\alpha +j \\beta)^2 = (R+j \\omega L)(G + j \\omega C) = \\omega^2 LC (j+\\frac R {\\omega L} )(j+\\frac G {\\omega C} ) "
},
{
"math_id": 9,
"text": "\\gamma = \\omega \\sqrt { LC }(\\frac R {\\omega L} +j) = \\frac R {Z_0} +j\\omega \\sqrt { LC }"
},
{
"math_id": 10,
"text": " Z_0 = \\sqrt{ \\frac L C}"
},
{
"math_id": 11,
"text": " \\alpha = \\frac R {Z_0} = R \\sqrt{ \\frac C L} = R \\sqrt{ \\frac {LG/R} L} = \\sqrt{RG}"
},
{
"math_id": 12,
"text": " \\beta = \\omega \\sqrt { LC } "
},
{
"math_id": 13,
"text": " v = \\frac 1 {\\sqrt { LC }} "
},
{
"math_id": 14,
"text": "LC"
},
{
"math_id": 15,
"text": "RG"
},
{
"math_id": 16,
"text": "Z_0=\\sqrt{\\frac{R+j\\omega L}{G+j\\omega C}}"
},
{
"math_id": 17,
"text": "Z_0=\\sqrt{\\frac{L}{C}},"
},
{
"math_id": 18,
"text": "\\scriptstyle Z_0 = \\sqrt{L/C}"
},
{
"math_id": 19,
"text": "\\scriptstyle R = 0,\\ G = 0"
},
{
"math_id": 20,
"text": "\\frac{G}{C} \\ll \\frac{R}{L}"
},
{
"math_id": 21,
"text": " \\tfrac {R_{\\omega}} {\\omega L_{\\omega}} ({\\color{blue}\\text{blue} }) \\text{ and } \\tfrac {G_{\\omega}} {\\omega C_{\\omega}} ({\\color{red}\\text{red} })"
},
{
"math_id": 22,
"text": " R_{\\omega} = \\omega L_{\\omega} "
},
{
"math_id": 23,
"text": " G_{\\omega} = \\omega C_{\\omega} "
}
]
| https://en.wikipedia.org/wiki?curid=1365053 |
13650583 | Malliavin's absolute continuity lemma | Result in measure theory
In mathematics — specifically, in measure theory — Malliavin's absolute continuity lemma is a result due to the French mathematician Paul Malliavin that plays a foundational rôle in the regularity (smoothness) theorems of the Malliavin calculus. Malliavin's lemma gives a sufficient condition for a finite Borel measure to be absolutely continuous with respect to Lebesgue measure.
Statement of the lemma.
Let "μ" be a finite Borel measure on "n"-dimensional Euclidean space R"n". Suppose that, for every "x" ∈ R"n", there exists a constant "C" = "C"("x") such that
formula_0
for every "C"∞ function "φ" : R"n" → R with compact support. Then "μ" is absolutely continuous with respect to "n"-dimensional Lebesgue measure "λ""n" on R"n". In the above, D"φ"("y") denotes the Fréchet derivative of "φ" at "y" and ||"φ"||∞ denotes the supremum norm of "φ". | [
{
"math_id": 0,
"text": "\\left| \\int_{\\mathbf{R}^{n}} \\mathrm{D} \\varphi (y) (x) \\, \\mathrm{d} \\mu(y) \\right| \\leq C(x) \\| \\varphi \\|_{\\infty}"
}
]
| https://en.wikipedia.org/wiki?curid=13650583 |
13651046 | Double layer (plasma physics) | A double layer is a structure in a plasma consisting of two parallel layers of opposite electrical charge. The sheets of charge, which are not necessarily planar, produce localised excursions of electric potential, resulting in a relatively strong electric field between the layers and weaker but more extensive compensating fields outside, which restore the global potential. Ions and electrons within the double layer are accelerated, decelerated, or deflected by the electric field, depending on their direction of motion.
Double layers can be created in discharge tubes, where sustained energy is provided within the layer for electron acceleration by an external power source. Double layers are claimed to have been observed in the aurora and are invoked in astrophysical applications. Similarly, a double layer in the auroral region requires some external driver to produce electron acceleration.
Electrostatic double layers are especially common in current-carrying plasmas, and are very thin (typically tens of Debye lengths), compared to the sizes of the plasmas that contain them. Other names for a double layer are electrostatic double layer, electric double layer, and plasma double layer. The term ‘electrostatic shock’ in the magnetosphere has been applied to electric fields oriented at an oblique angle to the magnetic field in such a way that the perpendicular electric field is much stronger than the parallel electric field. In laser physics, a double layer is sometimes called an ambipolar electric field.
Double layers are conceptually related to the concept of a 'sheath' ("see" Debye sheath). An early review of double layers from laboratory experiment and simulations is provided by Torvén.
Classification.
Double layers may be classified in the following ways:
Potential imbalance will be neutralised by electron (1&3) and ion (2&4) migration, unless the potential gradients are sustained by an external energy source. Under most laboratory situations, unlike outer space conditions, charged particles may effectively originate within the double layer, by ionization at the anode or cathode, and be sustained.
The figure shows the localised perturbation of potential produced by an idealised double layer consisting of two oppositely charged discs. The perturbation is zero at a distance from the double layer in every direction.
If an incident charged particle, such as a precipitating auroral electron, encounters such a static or quasistatic structure in the magnetosphere, provided that the particle energy exceeds half the electric potential difference within the double layer, it will pass through without any net change in energy. Incident particles with less energy than this will also experience no net change in energy but will undergo more overall deflection.
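As a rough illustration of the localised potential described above, the sketch below evaluates the on-axis potential of two parallel, uniformly charged discs of opposite sign, the idealised geometry of the figure. The disc radius, spacing and charge density are arbitrary assumed values, and the model is purely electrostatic.

```python
# Illustrative electrostatic model only: on-axis potential of an idealised
# double layer made of two oppositely charged discs.  All parameters are assumed.
import numpy as np

eps0 = 8.854e-12   # F/m
R = 1.0            # disc radius, m (assumed)
d = 0.1            # disc separation, m (assumed)
sigma = 1e-9       # surface charge density, C/m^2 (assumed)

def disc_potential(z, z0, sigma):
    # Exact on-axis potential of a uniformly charged disc centred at z = z0
    s = abs(z - z0)
    return sigma / (2 * eps0) * (np.sqrt(s**2 + R**2) - s)

z = np.linspace(-5, 5, 11)
V = disc_potential(z, +d / 2, +sigma) + disc_potential(z, -d / 2, -sigma)
print(np.round(V, 3))   # the excursion is largest near the layer and decays far from it
```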
Four distinct regions of a double layer can be identified, which affect charged particles passing through it, or within it:
Double layers will tend to be transient in the magnetosphere, as any charge imbalance will become neutralised, unless there is a sustained external source of energy to maintain them as there is under laboratory conditions.
Formation mechanisms.
The details of the formation mechanism depend on the environment of the plasma (e.g. double layers in the laboratory, ionosphere, solar wind, nuclear fusion, etc.). Proposed mechanisms for their formation have included:
History.
<templatestyles src="Template:Blockquote/styles.css" />In a low density plasma, localized space charge regions may build up large potential
drops over distances of the order of some tens of the Debye lengths. Such regions have been called "electric double layers". An electric double layer is the simplest space charge distribution that gives a potential drop in the layer and a vanishing electric field on each side of the layer. In the laboratory, double layers have been studied for half a century, but their importance in cosmic plasmas has not been generally recognized.
It was already known in the 1920s that a plasma has a limited capacity for current maintenance. Irving Langmuir characterized double layers in the laboratory and called these structures double-sheaths. In the 1950s a thorough study of double layers started in the laboratory. Many groups are still working on this topic theoretically, experimentally and numerically. It was first proposed by Hannes Alfvén (the developer of magnetohydrodynamics from laboratory experiments) that the polar lights or Aurora Borealis are created by electrons accelerated in the magnetosphere of the Earth. He supposed that the electrons were accelerated electrostatically by an electric field localized in a small volume bounded by two charged regions, and that this so-called double layer would accelerate electrons earthwards. Since then other mechanisms involving wave-particle interactions have been proposed as being feasible, from extensive spatial and temporal in situ studies of auroral particle characteristics.
Many investigations of the magnetosphere and auroral regions have been made using rockets and satellites. McIlwain discovered from a rocket flight in 1960 that the energy spectrum of auroral electrons exhibited a peak that was thought then to be too sharp to be produced by a random process and which suggested, therefore, that an ordered process was responsible. It was reported in 1977 that satellites had detected the signature of double layers as electrostatic shocks in the magnetosphere. Indications of electric fields parallel to the geomagnetic field lines were obtained by the Viking satellite, which measured the differential potential structures in the magnetosphere with probes mounted on 40 m long booms. These probes measured the local particle density and the potential difference between two points 80 m apart. Asymmetric potential excursions with respect to 0 V were measured, and interpreted as a double layer with a net potential within the region. Magnetospheric double layers typically have a strength formula_0 (where the electron temperature is assumed to lie in the range formula_1) and are therefore weak. A series of such double layers would tend to merge, much like a string of bar magnets, and dissipate, even within a rarefied plasma. It has yet to be explained how any overall localised charge distribution in the form of double layers might provide a source of energy for auroral electrons precipitated into the atmosphere.
Interpretation of the FAST spacecraft data proposed strong double layers in the auroral acceleration region. Strong double layers have also been reported in the downward current region by Andersson et al. Parallel electric fields with amplitudes reaching nearly 1 V/m were inferred to be confined to a thin layer of approximately 10 Debye lengths. It is stated that the structures moved ‘at roughly the ion acoustic speed in the direction of the accelerated electrons, i.e., anti-earthward.’ That raises a question of what role, if any, double layers might play in accelerating auroral electrons that are precipitated downwards into the atmosphere from the magnetosphere. Double layers have also been found in the Earth's magnetosphere by the space missions Cluster and MMS.
The possible role of precipitating electrons from 1-10keV themselves generating such observed double layers or electric fields has seldom been considered or analysed. Equally, the general question of how such double layers might be generated from an alternative source of energy, or what the spatial distribution of electric charge might be to produce net energy changes, is seldom addressed. Under laboratory conditions an external power supply is available.
In the laboratory, double layers can be created in different devices. They are investigated in double plasma machines, triple plasma machines, and Q-machines. The stationary potential structures that can be measured in these machines agree very well with what one would expect theoretically. An example of a laboratory double layer can be seen in the figure below, taken from Torvén and Lindberg (1980), where we can see how well-defined and confined is the potential drop of a double layer in a double plasma machine.
One of the interesting aspects of the experiment by Torvén and Lindberg (1980) is that not only did they measure the potential structure in the double plasma machine but they also found high-frequency fluctuating electric fields at the high-potential side of the double layer (also shown in the figure). These fluctuations are probably due to a beam-plasma interaction outside the double layer, which excites plasma turbulence. Their observations are consistent with experiments on electromagnetic radiation emitted by double layers in a double plasma machine by Volwerk (1993), who, however, also observed radiation from the double layer itself.
The power of these fluctuations has a maximum around the plasma frequency of the ambient plasma. It was later reported that the electrostatic high-frequency fluctuations near the double layer can be concentrated in a narrow region, sometimes called the hf-spike. Subsequently, both radio emissions, near the plasma frequency, and whistler waves at much lower frequencies were seen to emerge from this region. Similar whistler wave structures were observed together with electron beams near Saturn's moon Enceladus, suggesting the possible presence of a double layer at lower altitude.
A recent development in double layer experiments in the laboratory is the investigation of so-called stairstep double layers. It has been observed that a potential drop in a plasma column can be divided into different parts. Transitions from a single double layer into two-, three-, or greater-step double layers are strongly sensitive to the boundary conditions of the plasma.
Unlike experiments in the laboratory, the concept of such double layers in the magnetosphere, and any role in creating the aurora, suffers from there so far being no identified steady source of energy. The electric potential characteristic of double layers might, however, indicate that those observed in the auroral zone are a secondary product of precipitating electrons that have been energized in other ways, such as by electrostatic waves.
Some scientists have suggested a role of double layers in solar flares. Establishing such a role indirectly is even harder to verify than postulating double layers as accelerators of auroral electrons within the Earth's magnetosphere. Serious questions have been raised on their role even there.
Footnotes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "e\\phi_{DL}/k_B T_e \\approx 0.1"
},
{
"math_id": 1,
"text": "2 eV \\leq k_B T_e \\leq 20 eV"
}
]
| https://en.wikipedia.org/wiki?curid=13651046 |
13651683 | Spectral clustering | Clustering methods
In multivariate statistics, spectral clustering techniques make use of the spectrum (eigenvalues) of the similarity matrix of the data to perform dimensionality reduction before clustering in fewer dimensions. The similarity matrix is provided as an input and consists of a quantitative assessment of the relative similarity of each pair of points in the dataset.
In application to image segmentation, spectral clustering is known as segmentation-based object categorization.
Definitions.
Given an enumerated set of data points, the similarity matrix may be defined as a symmetric matrix formula_0, where formula_1 represents a measure of the similarity between data points with indices formula_2 and formula_3. The general approach to spectral clustering is to use a standard clustering method (there are many such methods, "k"-means is discussed below) on relevant eigenvectors of a Laplacian matrix of formula_0. There are many different ways to define a Laplacian which have different mathematical interpretations, and so the clustering will also have different interpretations. The relevant eigenvectors are the ones that correspond to the several smallest eigenvalues of the Laplacian, except for the smallest eigenvalue, which has a value of 0. For computational efficiency, these eigenvectors are often computed as the eigenvectors corresponding to the largest several eigenvalues of a function of the Laplacian.
Laplacian matrix.
Spectral clustering is well known to relate to partitioning of a mass-spring system, where each mass is associated with a data point and each spring stiffness corresponds to a weight of an edge describing a similarity of the two related data points, as in the spring system. Specifically, the classical reference explains that the eigenvalue problem describing transversal vibration modes of a mass-spring system is exactly the same as the eigenvalue problem for the graph Laplacian matrix defined as
formula_4,
where formula_5 is the diagonal matrix
formula_6
and A is the adjacency matrix.
The masses that are tightly connected by the springs in the mass-spring system evidently move together from the equilibrium position in low-frequency vibration modes, so that the components of the eigenvectors corresponding to the smallest eigenvalues of the graph Laplacian can be used for meaningful clustering of the masses. For example, assuming that all the springs and the masses are identical in the 2-dimensional spring system pictured, one would intuitively expect that the loosest connected masses on the right-hand side of the system would move with the largest amplitude and in the opposite direction to the rest of the masses when the system is shaken — and this expectation will be confirmed by analyzing components of the eigenvectors of the graph Laplacian corresponding to the smallest eigenvalues, i.e., the smallest vibration frequencies.
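A minimal sketch of this construction in Python (using only NumPy) is shown below. The small similarity matrix is invented for illustration: vertices {0, 1, 2} are tightly connected, vertices {3, 4} are tightly connected, and the two groups are joined only by a weak edge, mimicking the loosely coupled masses described above.

```python
# Minimal sketch: unnormalised graph Laplacian L = D - A and its Fiedler vector.
import numpy as np

# Invented similarity matrix: two tight groups {0,1,2} and {3,4} joined by a weak edge.
A = np.array([[0.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.1, 0.0],
              [0.0, 0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0]])

D = np.diag(A.sum(axis=1))     # degree matrix
L = D - A                      # graph Laplacian

eigvals, eigvecs = np.linalg.eigh(L)   # eigh, since L is symmetric
print(eigvals)                         # the smallest eigenvalue is (numerically) 0
print(eigvecs[:, 1])                   # Fiedler vector: its signs separate {0,1,2} from {3,4}
```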
Laplacian matrix normalization.
The goal of normalization is to make the diagonal entries of the Laplacian matrix all equal to one, scaling the off-diagonal entries correspondingly. In a weighted graph, a vertex may have a large degree because of a small number of connected edges with large weights, just as well as because of a large number of connected edges with unit weights.
A popular normalized spectral clustering technique is the normalized cuts algorithm or "Shi–Malik algorithm" introduced by Jianbo Shi and Jitendra Malik, commonly used for image segmentation. It partitions points into two sets formula_7 based on the eigenvector formula_8 corresponding to the second-smallest eigenvalue of the symmetric normalized Laplacian defined as
formula_9
The vector formula_8 is also the eigenvector corresponding to the second-largest eigenvalue of the symmetrically normalized adjacency matrix formula_10
The random walk (or left) normalized Laplacian is defined as
formula_11
and can also be used for spectral clustering. A mathematically equivalent algorithm takes the eigenvector formula_12 corresponding to the largest eigenvalue of the random walk normalized adjacency matrix formula_13.
The eigenvector formula_8 of the symmetrically normalized Laplacian and the eigenvector formula_12 of the left normalized Laplacian are related by the identity formula_14
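The sketch below builds both normalised Laplacians for the same small example graph and checks numerically that dividing the eigenvector of the symmetric normalised Laplacian elementwise by the square roots of the degrees gives an eigenvector of the left-normalised Laplacian with the same eigenvalue, which is one way of stating the identity mentioned above (assumed here to be the standard relation between the two eigenvectors).

```python
# Sketch: symmetric vs. random-walk normalised Laplacians and the relation
# between their eigenvectors (v_rw = D^{-1/2} v_sym), on the same toy graph.
import numpy as np

A = np.array([[0.0, 1.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0, 0.0, 0.0],
              [1.0, 1.0, 0.0, 0.1, 0.0],
              [0.0, 0.0, 0.1, 0.0, 1.0],
              [0.0, 0.0, 0.0, 1.0, 0.0]])
d = A.sum(axis=1)
D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
D_inv = np.diag(1.0 / d)
I = np.eye(len(d))

L_sym = I - D_inv_sqrt @ A @ D_inv_sqrt   # symmetric normalised Laplacian
L_rw = I - D_inv @ A                      # random-walk (left) normalised Laplacian

w, V = np.linalg.eigh(L_sym)
v_sym = V[:, 1]                           # second-smallest eigenvector of L_sym
v_rw = D_inv_sqrt @ v_sym                 # candidate eigenvector of L_rw

print(np.allclose(L_rw @ v_rw, w[1] * v_rw))   # True: same eigenvalue for L_rw
```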
Cluster analysis via Spectral Embedding.
Knowing the formula_15-by-formula_16 matrix formula_17 of selected eigenvectors, mapping — called spectral embedding — of the original formula_15 data points is performed to a formula_16-dimensional vector space using the rows of formula_17. Now the analysis is reduced to clustering vectors with formula_16 components, which may be done in various ways.
In the simplest case formula_18, the selected single eigenvector formula_8, called the Fiedler vector, corresponds to the second smallest eigenvalue. Using the components of formula_19 one can place all points whose component in formula_8 is positive in the set formula_20 and the rest in formula_21, thus bi-partitioning the graph and labeling the data points with two labels. This sign-based approach follows the intuitive explanation of spectral clustering via the mass-spring model – in the low frequency vibration mode that the Fiedler vector formula_8 represents, the data points in one cluster, identified with mutually strongly connected masses, would move together in one direction, while the data points in the complementary cluster, identified with the remaining masses, would move together in the opposite direction. The algorithm can be used for hierarchical clustering by repeatedly partitioning the subsets in the same fashion.
In the general case formula_22, any vector clustering technique can be used, e.g., DBSCAN.
Algorithms.
If the similarity matrix formula_0 has not already been explicitly constructed, the efficiency of spectral clustering may be improved if the solution to the corresponding eigenvalue problem is performed in a matrix-free fashion (without explicitly manipulating or even computing the similarity matrix), as in the Lanczos algorithm.
For large-sized graphs, the second eigenvalue of the (normalized) graph Laplacian matrix is often ill-conditioned, leading to slow convergence of iterative eigenvalue solvers. Preconditioning is a key technology accelerating the convergence, e.g., in the matrix-free LOBPCG method. Spectral clustering has been successfully applied on large graphs by first identifying their community structure, and then clustering communities.
Spectral clustering is closely related to nonlinear dimensionality reduction, and dimension reduction techniques such as locally-linear embedding can be used to reduce errors from noise or outliers.
Costs.
Denoting the number of the data points by formula_15, it is important to estimate the memory footprint and compute time, or number of arithmetic operations (AO) performed, as a function of formula_15. Regardless of the spectral clustering algorithm, the two main costly items are the construction of the graph Laplacian and determining its formula_16 eigenvectors for the spectral embedding. The last step, determining the labels from the formula_15-by-formula_16 matrix of eigenvectors, is typically the least expensive, requiring only formula_25 AO and creating just a formula_15-by-formula_26 vector of the labels in memory.
The need to construct the graph Laplacian is common for all distance- or correlation-based clustering methods. Computing the eigenvectors is specific to spectral clustering only.
Constructing graph Laplacian.
The graph Laplacian can be and commonly is constructed from the adjacency matrix. The construction can be performed matrix-free, i.e., without explicitly forming the matrix of the graph Laplacian and without any AO. It can also be performed in-place of the adjacency matrix without increasing the memory footprint. Either way, the cost of constructing the graph Laplacian is essentially determined by the cost of constructing the formula_15-by-formula_15 graph adjacency matrix.
Moreover, a normalized Laplacian has exactly the same eigenvectors as the normalized adjacency matrix, but with the order of the eigenvalues reversed. Thus, instead of computing the eigenvectors corresponding to the smallest eigenvalues of the normalized Laplacian, one can equivalently compute the eigenvectors corresponding to the largest eigenvalues of the normalized adjacency matrix, without ever forming the Laplacian matrix.
Naive constructions of the graph adjacency matrix, e.g., using the RBF kernel, make it dense, thus requiring formula_27 memory and formula_27 AO to determine each of the formula_27 entries of the matrix. The Nyström method can be used to approximate the similarity matrix, but the approximate matrix is not elementwise positive, i.e., it cannot be interpreted as a distance-based similarity.
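A dense construction of this kind can be sketched as follows (the Gaussian bandwidth, the random data, and the function name are assumptions made for illustration); both memory use and arithmetic scale quadratically with the number of points:

```python
import numpy as np

def rbf_adjacency(X, sigma=1.0):
    # Dense n-by-n similarity matrix A_ij = exp(-||x_i - x_j||^2 / (2 sigma^2)).
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.exp(-sq_dists / (2.0 * sigma ** 2))
    np.fill_diagonal(A, 0.0)             # no self-loops
    return A

X = np.random.default_rng(0).normal(size=(100, 2))   # 100 points in the plane
A = rbf_adjacency(X, sigma=0.5)                       # requires O(n^2) memory and O(n^2) AO
```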
Algorithms to construct the graph adjacency matrix as a sparse matrix are typically based on a nearest neighbor search, which estimates or samples a neighborhood of a given data point for nearest neighbors, and computes non-zero entries of the adjacency matrix by comparing only pairs of the neighbors. The number of the selected nearest neighbors thus determines the number of non-zero entries, and is often fixed so that the memory footprint of the formula_15-by-formula_15 graph adjacency matrix is only formula_28, only formula_28 sequential arithmetic operations are needed to compute the formula_28 non-zero entries, and the calculations can be trivially run in parallel.
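A sparse construction along these lines might use scikit-learn's nearest-neighbor utilities, as in the following sketch (the number of neighbors and the other parameter values are arbitrary illustrative choices):

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph
from scipy.sparse.csgraph import laplacian

X = np.random.default_rng(0).normal(size=(1000, 2))

# k-nearest-neighbor adjacency stored as a sparse matrix with O(n) non-zero entries.
A = kneighbors_graph(X, n_neighbors=10, mode='connectivity', include_self=False)
A = 0.5 * (A + A.T)                      # symmetrize, since the kNN relation is not symmetric

# Sparse normalized graph Laplacian computed directly from the sparse adjacency matrix.
L_norm = laplacian(A, normed=True)
```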
Computing eigenvectors.
The cost of computing the formula_15-by-formula_16 (with formula_29) matrix of selected eigenvectors of the graph Laplacian is normally proportional to the cost of multiplication of the formula_15-by-formula_15 graph Laplacian matrix by a vector, which varies greatly depending on whether the graph Laplacian matrix is dense or sparse. For the dense case the cost thus is formula_30. The cost formula_31 very commonly cited in the literature comes from choosing formula_32 and is clearly misleading, since, e.g., in a hierarchical spectral clustering formula_18 as determined by the Fiedler vector.
In the sparse case of the formula_15-by-formula_15 graph Laplacian matrix with formula_28 non-zero entries, the cost of the matrix-vector product and thus of computing the formula_15-by-formula_16 with formula_29 matrix of selected eigenvectors is formula_28, with the memory footprint also only formula_28; both are optimal lower bounds on the complexity of clustering formula_15 data points. Moreover, matrix-free eigenvalue solvers such as LOBPCG can efficiently run in parallel, e.g., on multiple GPUs with distributed memory, resulting not only in high quality clusters, which spectral clustering is famous for, but also in top performance.
Software.
Free software implementing spectral clustering is available in large open source projects like scikit-learn using LOBPCG with multigrid preconditioning or ARPACK, MLlib for pseudo-eigenvector clustering using the power iteration method, and R.
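For instance, a scikit-learn run combining a sparse nearest-neighbor affinity with the LOBPCG eigensolver might look like the sketch below; exact parameter names and defaults depend on the library version, and the data and settings here are purely illustrative:

```python
import numpy as np
from sklearn.cluster import SpectralClustering

X = np.random.default_rng(0).normal(size=(500, 2))
X[250:] += 5.0                               # shift half the points to create a second blob

model = SpectralClustering(
    n_clusters=2,
    affinity='nearest_neighbors',            # sparse kNN similarity graph
    n_neighbors=10,
    eigen_solver='lobpcg',                   # matrix-free, preconditioned eigensolver
    assign_labels='kmeans',
    random_state=0,
)
labels = model.fit_predict(X)
```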
Relationship with other clustering methods.
The ideas behind spectral clustering may not be immediately obvious. It may be useful to highlight relationships with other methods. In particular, it can be described in the context of kernel clustering methods, which reveals several similarities with other approaches.
Relationship with "k"-means.
The weighted kernel "k"-means problem
shares the objective function with the spectral clustering problem, which can be optimized directly by multi-level methods.
Relationship to DBSCAN.
In the trivial case of determining connected graph components — the optimal clusters with no edges cut — spectral clustering is also related to a spectral version of DBSCAN clustering that finds density-connected components.
Measures to compare clusterings.
Ravi Kannan, Santosh Vempala and Adrian Vetta proposed a bicriteria measure to define the quality of a given clustering. They said that a clustering was an (α, ε)-clustering if the conductance of each cluster (in the clustering) was at least α and the weight of the inter-cluster edges was at most an ε fraction of the total weight of all the edges in the graph. In the same paper they also considered two approximation algorithms.
History and related literature.
Spectral clustering has a long history. Spectral clustering as a machine learning method was popularized by Shi & Malik and Ng, Jordan, & Weiss.
Ideas and network measures related to spectral clustering also play an important role in a number of applications apparently different from clustering problems. For instance, networks with stronger spectral partitions take longer to converge in opinion-updating models used in sociology and economics. | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "A_{ij}\\geq 0"
},
{
"math_id": 2,
"text": "i"
},
{
"math_id": 3,
"text": "j"
},
{
"math_id": 4,
"text": "L:=D-A"
},
{
"math_id": 5,
"text": "D"
},
{
"math_id": 6,
"text": "D_{ii} = \\sum_j A_{ij},"
},
{
"math_id": 7,
"text": "(B_1,B_2)"
},
{
"math_id": 8,
"text": "v"
},
{
"math_id": 9,
"text": "L^\\text{norm}:=I-D^{-1/2}AD^{-1/2}."
},
{
"math_id": 10,
"text": "D^{-1/2}AD^{-1/2}."
},
{
"math_id": 11,
"text": "L^\\text{rw} := D^{-1} L = I - D^{-1} A"
},
{
"math_id": 12,
"text": "u"
},
{
"math_id": 13,
"text": "P = D^{-1}A"
},
{
"math_id": 14,
"text": "D^{-1/2} v = u."
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "k"
},
{
"math_id": 17,
"text": "V"
},
{
"math_id": 18,
"text": "k=1"
},
{
"math_id": 19,
"text": "v,"
},
{
"math_id": 20,
"text": "B_+"
},
{
"math_id": 21,
"text": "B_-"
},
{
"math_id": 22,
"text": "k>1"
},
{
"math_id": 23,
"text": "L"
},
{
"math_id": 24,
"text": "l"
},
{
"math_id": 25,
"text": "kn"
},
{
"math_id": 26,
"text": "1"
},
{
"math_id": 27,
"text": "n^2"
},
{
"math_id": 28,
"text": "O(n)"
},
{
"math_id": 29,
"text": "k\\ll n"
},
{
"math_id": 30,
"text": "O(n^2)"
},
{
"math_id": 31,
"text": "O(n^3)"
},
{
"math_id": 32,
"text": "k=n"
}
]
| https://en.wikipedia.org/wiki?curid=13651683 |
13653300 | Kinetic proofreading | Kinetic proofreading (or kinetic amplification) is a mechanism for error correction in biochemical reactions, proposed independently by John Hopfield (1974) and Jacques Ninio (1975). Kinetic proofreading allows enzymes to discriminate between two possible reaction pathways leading to correct or incorrect products with an accuracy higher than what one would predict based on the difference in the activation energy between these two pathways.
Increased specificity is obtained by introducing an irreversible step exiting the pathway, with reaction intermediates leading to incorrect products more likely to prematurely exit the pathway than reaction intermediates leading to the correct product. If the exit step is fast relative to the next step in the pathway, the specificity can be increased by a factor of up to the ratio between the two exit rate constants. (If the next step is fast relative to the exit step, specificity will not be increased because there will not be enough time for exit to occur.) This can be repeated more than once to increase specificity further.
As an analogy, if we have a medicine assembly line that sometimes produces empty boxes, and we are unable to upgrade the assembly line, then we can increase the ratio of full boxes to empty boxes (specificity) by placing a giant fan at the end. Empty boxes are more likely to be blown off the line (a higher exit rate) than full boxes, even though both kinds' production rates are lowered. By lengthening the final section and adding more giant fans (multistep proofreading), the specificity can be increased arbitrarily, at the cost of decreasing the production rate.
Specificity paradox.
In protein synthesis, the error rate is on the order of formula_0. This means that when a ribosome is matching anticodons of tRNA to the codons of mRNA, it matches complementary sequences correctly nearly all the time. Hopfield noted that because of how similar the substrates are (the difference between a wrong codon and a right codon can be as small as a difference in a single base), an error rate that small is unachievable with a one-step mechanism. Both wrong and right tRNA can bind to the ribosome, and if the ribosome can only discriminate between them by complementary matching of the anticodon, it must rely on the small free energy difference between binding three matched complementary bases or only two.
A one-shot machine which tests whether the codons match or not by examining whether the codon and anticodon are bound will not be able to tell the difference between wrong and right codons with an error rate less than formula_0 unless the free energy difference is at least 9.2 kT, which is much larger than the free energy difference for single codon binding. This is a thermodynamic bound, so it cannot be evaded by building a different machine. However, this can be overcome by kinetic proofreading, which introduces an irreversible step through the input of energy.
Another molecular recognition mechanism, which does "not" require expenditure of free energy, is that of conformational proofreading. The incorrect product may also be formed but hydrolyzed at a greater rate than the correct product, giving the possibility of theoretically infinite specificity the longer the reaction is allowed to run, but at the cost of hydrolyzing large amounts of the correct product as well. (Thus there is a tradeoff between product production and its efficiency.) The hydrolytic activity may be on the same enzyme, as in DNA polymerases with editing functions, or on different enzymes.
Multistep ratchet.
Hopfield suggested a simple way to achieve smaller error rates using a molecular ratchet which takes many irreversible steps, each testing to see if the sequences match. At each step, energy is expended and specificity (the ratio of correct substrate to incorrect substrate at that point in the pathway) increases.
The requirement for energy in each step of the ratchet is due to the need for the steps to be irreversible; for specificity to increase, entry of substrate and analogue must occur largely through the entry pathway, and exit largely through the exit pathway. If entry were an equilibrium, the earlier steps would form a pre-equilibrium and the specificity benefits of entry into the pathway (less likely for the substrate analogue) would be lost; if the exit step were an equilibrium, then the substrate analogue would be able to re-enter the pathway through the exit step, bypassing the specificity of earlier steps altogether.
Although one test will fail to discriminate between mismatched and matched sequences a fraction formula_1 of the time, two tests will both fail only formula_2 of the time, and N tests will fail formula_3 of the time. In terms of free energy, the discrimination power of N successive tests for two states with a free energy formula_4 is the same as one test between two states with a free energy formula_5.
To achieve an error rate of formula_6 requires several comparison steps. Hopfield predicted on the basis of this theory that there is a multistage ratchet in the ribosome which tests the match several times before incorporating the next amino acid into the protein.
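A short numerical sketch of this compounding of discrimination factors follows; the single-step free-energy value used here is an assumed illustrative number, not a measured one:

```python
import math

def error_after_steps(delta_F_over_kT, N):
    # Error probability after N independent proofreading tests: p^N = exp(-N * dF / kT).
    return math.exp(-N * delta_F_over_kT)

def steps_needed(delta_F_over_kT, target_error):
    # Smallest number of tests N such that p^N <= target_error.
    return math.ceil(-math.log(target_error) / delta_F_over_kT)

dF = 4.6                                    # assumed single-step discrimination, in units of kT
print(error_after_steps(dF, 1))             # ~1e-2 with a single test
print(error_after_steps(dF, 2))             # ~1e-4 with two tests
print(steps_needed(dF, math.exp(-10)))      # tests needed for the e^-10 error rate cited above
```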
Theoretical considerations.
Universal first passage time.
Biochemical processes that use kinetic proofreading to improve specificity implement the delay-inducing multistep ratchet by a variety of distinct biochemical networks. Nonetheless, many such networks result in times to completion of the molecular assembly and the proofreading steps (also known as first passage times) that approach a near-universal, exponential shape for high proofreading rates and large network sizes. Since exponential completion times are characteristic of a two-state Markov process, this observation makes kinetic proofreading one of only a few examples of biochemical processes where structural complexity results in a much simpler large-scale, phenomenological dynamics.
Topology.
The increase in specificity, or the overall amplification factor of a kinetic proofreading network that may include multiple pathways and especially loops is intimately related to the topology of the network: the specificity grows exponentially with the number of loops in the network. An example is homologous recombination in which the number of loops scales like the square of DNA length. The universal completion time emerges precisely in this regime of large number of loops and high amplification.
References.
Further reading.
| [
{
"math_id": 0,
"text": "10^{-4} = e^{-9.2}"
},
{
"math_id": 1,
"text": "p=e^{-\\Delta F/kT}"
},
{
"math_id": 2,
"text": "p^2"
},
{
"math_id": 3,
"text": "p^N"
},
{
"math_id": 4,
"text": "\\Delta F"
},
{
"math_id": 5,
"text": "N\\Delta F"
},
{
"math_id": 6,
"text": "e^{-10}"
}
]
| https://en.wikipedia.org/wiki?curid=13653300 |
13653437 | Integration by parts operator | Linear operator used to formulate integration by parts formulae
In mathematics, an integration by parts operator is a linear operator used to formulate integration by parts formulae; the most interesting examples of integration by parts operators occur in infinite-dimensional settings and find uses in stochastic analysis and its applications.
Definition.
Let "E" be a Banach space such that both "E" and its continuous dual space "E"∗ are separable spaces; let "μ" be a Borel measure on "E". Let "S" be any (fixed) subset of the class of functions defined on "E". A linear operator "A" : "S" → "L"2("E", "μ"; R) is said to be an integration by parts operator for "μ" if
formula_0
for every "C"1 function "φ" : "E" → R and all "h" ∈ "S" for which either side of the above equality makes sense. In the above, D"φ"("x") denotes the Fréchet derivative of "φ" at "x".
Examples.
Consider an abstract Wiener space "i" : "H" → "E" with abstract Wiener measure "γ". Take "S" to be the set of all "C"1 functions from "E" into "E"∗, where "E"∗ is viewed as a subspace of "E" in view of the inclusions
formula_1
For "h" ∈ "S", define "Ah" by
formula_2
This operator "A" is an integration by parts operator, also known as the divergence operator; a proof can be found in Elworthy (1974).
Another example arises on the classical Wiener space "C"0 of continuous paths starting at zero, equipped with classical Wiener measure "γ". Take "S" to be
formula_3
i.e., all bounded, adapted processes with absolutely continuous sample paths. Let "φ" : "C"0 → R be any "C"1 function such that both "φ" and D"φ" are bounded. For "h" ∈ "S" and "λ" ∈ R, the Girsanov theorem implies that
formula_4
Differentiating with respect to "λ" and setting "λ" = 0 gives
formula_5
where ("Ah")("x") is the Itō integral
formula_6
The same relation holds for more general "φ" by an approximation argument; thus, the Itō integral is an integration by parts operator and can be seen as an infinite-dimensional divergence operator. This is the same result as the integration by parts formula derived from the Clark-Ocone theorem. | [
{
"math_id": 0,
"text": "\\int_{E} \\mathrm{D} \\varphi(x) h(x) \\, \\mathrm{d} \\mu(x) = \\int_{E} \\varphi(x) (A h)(x) \\, \\mathrm{d} \\mu(x)"
},
{
"math_id": 1,
"text": "E^{*} \\xrightarrow{i^{*}} H^{*} \\cong H \\xrightarrow{i} E."
},
{
"math_id": 2,
"text": "(A h)(x) = h(x) x - \\mathrm{trace}_{H} \\mathrm{D} h(x)."
},
{
"math_id": 3,
"text": "S = \\left\\{ \\left. h \\colon C_{0} \\to L_{0}^{2, 1} \\right| h \\mbox{ is bounded and non-anticipating} \\right\\},"
},
{
"math_id": 4,
"text": "\\int_{C_{0}} \\varphi (x + \\lambda h(x)) \\, \\mathrm{d} \\gamma(x) = \\int_{C_{0}} \\varphi(x) \\exp \\left( \\lambda \\int_{0}^{1} \\dot{h}_{s} \\cdot \\mathrm{d} x_{s} - \\frac{\\lambda^{2}}{2} \\int_{0}^{1} | \\dot{h}_{s} |^{2} \\, \\mathrm{d} s \\right) \\, \\mathrm{d} \\gamma(x)."
},
{
"math_id": 5,
"text": "\\int_{C_{0}} \\mathrm{D} \\varphi(x) h(x) \\, \\mathrm{d} \\gamma(x) = \\int_{C_{0}} \\varphi(x) (A h) (x) \\, \\mathrm{d} \\gamma(x),"
},
{
"math_id": 6,
"text": "\\int_{0}^{1} \\dot{h}_{s} \\cdot \\mathrm{d} x_{s}."
}
]
| https://en.wikipedia.org/wiki?curid=13653437 |
13654 | Heat engine | System that converts heat or thermal energy to mechanical work
A heat engine is a system that converts heat to usable energy, particularly mechanical energy, which can then be used to do mechanical work. While originally conceived in the context of mechanical energy, the concept of the heat engine has been applied to various other kinds of energy, particularly electrical, since at least the late 19th century. The heat engine does this by bringing a working substance from a higher temperature state to a lower temperature state. A heat source generates thermal energy that brings the working substance to the higher temperature state. The working substance generates work in the working body of the engine while transferring heat to the colder sink until it reaches a lower temperature state. During this process some of the thermal energy is converted into work by exploiting the properties of the working substance. The working substance can be any system with a non-zero heat capacity, but it usually is a gas or liquid. During this process, some heat is normally lost to the surroundings and is not converted to work. Also, some energy is unusable because of friction and drag.
In general, an engine is any machine that converts energy to mechanical work. Heat engines distinguish themselves from other types of engines by the fact that their efficiency is fundamentally limited by Carnot's theorem of thermodynamics. Although this efficiency limitation can be a drawback, an advantage of heat engines is that most forms of energy can be easily converted to heat by processes like exothermic reactions (such as combustion), nuclear fission, absorption of light or energetic particles, friction, dissipation and resistance. Since the heat source that supplies thermal energy to the engine can thus be powered by virtually any kind of energy, heat engines cover a wide range of applications.
Heat engines are often confused with the cycles they attempt to implement. Typically, the term "engine" is used for a physical device and "cycle" for the models.
Overview.
In thermodynamics, heat engines are often modeled using a standard engineering model such as the Otto cycle. The theoretical model can be refined and augmented with actual data from an operating engine, using tools such as an indicator diagram. Since very few actual implementations of heat engines exactly match their underlying thermodynamic cycles, one could say that a thermodynamic cycle is an ideal case of a mechanical engine. In any case, fully understanding an engine and its efficiency requires a good understanding of the (possibly simplified or idealised) theoretical model, the practical nuances of an actual mechanical engine and the discrepancies between the two.
In general terms, the larger the difference in temperature between the hot source and the cold sink, the larger is the potential thermal efficiency of the cycle. On Earth, the cold side of any heat engine is limited to being close to the ambient temperature of the environment, or not much lower than 300 kelvin, so most efforts to improve the thermodynamic efficiencies of various heat engines focus on increasing the temperature of the source, within material limits. The maximum theoretical efficiency of a heat engine (which no engine ever attains) is equal to the temperature difference between the hot and cold ends divided by the temperature at the hot end, each expressed in absolute temperature.
The efficiency of various heat engines proposed or used today has a large range:
The efficiency of these processes is roughly proportional to the temperature drop across them. Significant energy may be consumed by auxiliary equipment, such as pumps, which effectively reduces efficiency.
Examples.
Although some cycles have a typical combustion location (internal or external), they can often be implemented with the other. For example, John Ericsson developed an externally heated engine running on a cycle very much like the earlier Diesel cycle. In addition, externally heated engines can often be implemented in open or closed cycles. In a closed cycle the working fluid is retained within the engine at the completion of the cycle, whereas in an open cycle the working fluid is either exchanged with the environment together with the products of combustion, in the case of the internal combustion engine, or simply vented to the environment, in the case of external combustion engines like steam engines and turbines.
Everyday examples.
Everyday examples of heat engines include the thermal power station, internal combustion engine, firearms, refrigerators and heat pumps. Power stations are examples of heat engines run in a forward direction in which heat flows from a hot reservoir and flows into a cool reservoir to produce work as the desired product. Refrigerators, air conditioners and heat pumps are examples of heat engines that are run in reverse, i.e. they use work to take heat energy at a low temperature and raise its temperature in a more efficient way than the simple conversion of work into heat (either through friction or electrical resistance). Refrigerators remove heat from within a thermally sealed chamber at low temperature and vent waste heat at a higher temperature to the environment and heat pumps take heat from the low temperature environment and 'vent' it into a thermally sealed chamber (a house) at higher temperature.
In general heat engines exploit the thermal properties associated with the expansion and compression of gases according to the gas laws or the properties associated with phase changes between gas and liquid states.
Earth's heat engine.
Earth's atmosphere and hydrosphere—Earth's heat engine—are coupled processes that constantly even out solar heating imbalances through evaporation of surface water, convection, rainfall, winds and ocean circulation, when distributing heat around the globe.
A Hadley cell is an example of a heat engine. It involves the rising of warm and moist air in the earth's equatorial region and the descent of colder air in the subtropics creating a thermally driven direct circulation, with consequent net production of kinetic energy.
Phase-change cycles.
In phase change cycles and engines, the working fluids are gases and liquids. The engine converts the working fluid from a gas to a liquid, from liquid to gas, or both, generating work from the fluid expansion or compression.
Gas-only cycles.
In these cycles and engines the working fluid is always a gas (i.e., there is no phase change):
Liquid-only cycles.
In these cycles and engines the working fluid is always a liquid:
Cycles used for refrigeration.
A domestic refrigerator is an example of a heat pump: a heat engine in reverse. Work is used to create a heat differential. Many cycles can run in reverse to move heat from the cold side to the hot side, making the cold side cooler and the hot side hotter. Internal combustion engine versions of these cycles are, by their nature, not reversible.
Refrigeration cycles include:
Evaporative heat engines.
The Barton evaporation engine is a heat engine based on a cycle producing power and cooled moist air from the evaporation of water into hot dry air.
Mesoscopic heat engines.
Mesoscopic heat engines are nanoscale devices that may serve the goal of processing heat fluxes and perform useful work at small scales. Potential applications include e.g. electric cooling devices. In such mesoscopic heat engines, work per cycle of operation fluctuates due to thermal noise. There is an exact equality that relates the average of exponents of the work performed by any heat engine to the heat transfer from the hotter heat bath. This relation transforms Carnot's inequality into an exact equality, and is also known as the Carnot cycle equality.
Efficiency.
The efficiency of a heat engine relates how much useful work is output for a given amount of heat energy input.
From the laws of thermodynamics, after a completed cycle:
formula_0
and therefore
formula_1
where
formula_2 is the net work extracted from the engine in one cycle. (It is negative, in the IUPAC convention, since work is "done by" the engine.)
formula_3 is the heat energy taken from the high temperature heat source in the surroundings in one cycle. (It is positive since heat energy is "added" to the engine.)
formula_4 is the waste heat given off by the engine to the cold temperature heat sink. (It is negative since heat is "lost" by the engine to the sink.)
In other words, a heat engine absorbs heat energy from the high temperature heat source, converting part of it to useful work and giving off the rest as waste heat to the cold temperature heat sink.
In general, the efficiency of a given heat transfer process is defined by the ratio of "what is taken out" to "what is put in". (For a refrigerator or heat pump, which can be considered as a heat engine run in reverse, this is the coefficient of performance and it is ≥ 1.) In the case of an engine, one desires to extract work and has to put in heat formula_5, for instance from combustion of a fuel, so the engine efficiency is reasonably defined as
formula_6
The efficiency is less than 100% because of the waste heat formula_7 unavoidably lost to the cold sink (and corresponding compression work put in) during the required recompression at the cold temperature before the power stroke of the engine can occur again.
The "theoretical" maximum efficiency of any heat engine depends only on the temperatures it operates between. This efficiency is usually derived using an ideal imaginary heat engine such as the Carnot heat engine, although other engines using different cycles can also attain maximum efficiency. Mathematically, after a full cycle, the overall change of entropy is zero:
formula_8
Note that formula_9 is positive because isothermal expansion in the power stroke increases the multiplicity of the working fluid while formula_10 is negative since recompression decreases the multiplicity. If the engine is ideal and runs reversibly, formula_11 and formula_12, and thus
formula_13,
which gives formula_14 and thus the Carnot limit for heat-engine efficiency,
formula_15
where formula_16 is the absolute temperature of the hot source and formula_17 that of the cold sink, usually measured in kelvins.
The reasoning behind this being the maximal efficiency goes as follows. It is first assumed that if a more efficient heat engine than a Carnot engine is possible, then it could be driven in reverse as a heat pump. Mathematical analysis can be used to show that this assumed combination would result in a net decrease in entropy. Since, by the second law of thermodynamics, this is statistically improbable to the point of exclusion, the Carnot efficiency is a theoretical upper bound on the reliable efficiency of "any" thermodynamic cycle.
Empirically, no heat engine has ever been shown to run at a greater efficiency than a Carnot cycle heat engine.
Figure 2 and Figure 3 show variations on Carnot cycle efficiency with temperature. Figure 2 indicates how efficiency changes with an increase in the heat addition temperature for a constant compressor inlet temperature. Figure 3 indicates how the efficiency changes with an increase in the heat rejection temperature for a constant turbine inlet temperature.
Endo-reversible heat-engines.
By its nature, any maximally efficient Carnot cycle must operate at an infinitesimal temperature gradient; this is because any transfer of heat between two bodies of differing temperatures is irreversible, therefore the Carnot efficiency expression applies only to the infinitesimal limit. The major problem is that the objective of most heat-engines is to output power, and infinitesimal power is seldom desired.
A different measure of ideal heat-engine efficiency is given by considerations of endoreversible thermodynamics, where the system is broken into reversible subsystems, but with non reversible interactions between them. A classical example is the Curzon–Ahlborn engine, very similar to a Carnot engine, but where the thermal reservoirs at temperature formula_16 and formula_17 are allowed to be different from the temperatures of the substance going through the reversible Carnot cycle: formula_18 and formula_19. The heat transfers between the reservoirs and the substance are considered as conductive (and irreversible) in the form formula_20. In this case, a tradeoff has to be made between power output and efficiency. If the engine is operated very slowly, the heat flux is low, formula_21 and the classical Carnot result is found
formula_22,
but at the price of a vanishing power output. If instead one chooses to operate the engine at its maximum output power, the efficiency becomes
formula_23 (Note: "T" in units of K or °R)
This model does a better job of predicting how well real-world heat-engines can do (Callen 1985, see also endoreversible thermodynamics). In practice, the Curzon–Ahlborn efficiency models the efficiencies observed in real power plants much more closely than the Carnot limit does.
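As a quick numerical illustration of the difference between the two formulas, the sketch below evaluates both for an assumed pair of reservoir temperatures roughly representative of a fossil-fuel power station; the numbers are examples only:

```python
import math

def carnot_efficiency(T_hot, T_cold):
    # Reversible Carnot limit; temperatures in kelvin.
    return 1.0 - T_cold / T_hot

def curzon_ahlborn_efficiency(T_hot, T_cold):
    # Efficiency at maximum power output in the endoreversible model; temperatures in kelvin.
    return 1.0 - math.sqrt(T_cold / T_hot)

T_hot, T_cold = 838.0, 298.0                 # assumed hot-source and cold-sink temperatures (K)
print(f"Carnot limit:           {carnot_efficiency(T_hot, T_cold):.1%}")          # ~64%
print(f"Curzon-Ahlborn (max P): {curzon_ahlborn_efficiency(T_hot, T_cold):.1%}")  # ~40%
```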
History.
Heat engines have been known since antiquity but were only made into useful devices at the time of the industrial revolution in the 18th century. They continue to be developed today.
Enhancements.
Engineers have studied the various heat-engine cycles to improve the amount of usable work they could extract from a given power source. The Carnot cycle limit cannot be reached with any gas-based cycle, but engineers have found at least two ways to bypass that limit and one way to get better efficiency without bending any rules:
Heat engine processes.
Each process is one of the following:
References.
| [
{
"math_id": 0,
"text": " W + Q = \\Delta_{cycle}U = 0 "
},
{
"math_id": 1,
"text": " W = -Q = - (Q_c + Q_h) "
},
{
"math_id": 2,
"text": " W = -\\oint PdV "
},
{
"math_id": 3,
"text": " Q_h > 0 "
},
{
"math_id": 4,
"text": " Q_c = -|Q_c|<0 "
},
{
"math_id": 5,
"text": " Q_h "
},
{
"math_id": 6,
"text": "\\eta = \\frac{|W|}{Q_h} = \\frac{Q_h + Q_c}{Q_h} = 1 + \\frac{Q_c}{Q_h} = 1 - \\frac{|Q_c|}{Q_h}"
},
{
"math_id": 7,
"text": " Q_c<0 "
},
{
"math_id": 8,
"text": "\\ \\ \\ \\Delta S_h + \\Delta S_c = \\Delta_{cycle} S = 0"
},
{
"math_id": 9,
"text": "\\Delta S_h"
},
{
"math_id": 10,
"text": "\\Delta S_c"
},
{
"math_id": 11,
"text": " Q_h = T_h\\Delta S_h "
},
{
"math_id": 12,
"text": " Q_c = T_c\\Delta S_c "
},
{
"math_id": 13,
"text": " Q_h / T_h + Q_c / T_c = 0 "
},
{
"math_id": 14,
"text": " Q_c /Q_h = -T_c / T_h "
},
{
"math_id": 15,
"text": "\\eta_\\text{max} = 1 - \\frac{T_c}{T_h}"
},
{
"math_id": 16,
"text": "T_h"
},
{
"math_id": 17,
"text": "T_c"
},
{
"math_id": 18,
"text": "T'_h"
},
{
"math_id": 19,
"text": "T'_c"
},
{
"math_id": 20,
"text": "dQ_{h,c}/dt = \\alpha (T_{h,c}-T'_{h,c})"
},
{
"math_id": 21,
"text": "T\\approx T'"
},
{
"math_id": 22,
"text": "\\eta = 1 - \\frac{T_c}{T_h}"
},
{
"math_id": 23,
"text": "\\eta = 1 - \\sqrt{\\frac{T_c}{T_h}}"
}
]
| https://en.wikipedia.org/wiki?curid=13654 |
13655834 | Nash functions | In real algebraic geometry, a Nash function on an open semialgebraic subset "U" ⊂ R"n" is an analytic function
"f": "U" → R satisfying a nontrivial polynomial equation "P"("x","f"("x")) = 0 for all "x" in "U" (A semialgebraic subset of R"n" is a subset obtained from subsets of the form {"x" in R"n" : "P"("x")=0} or {"x" in R"n" : "P"("x") > 0}, where "P" is a polynomial, by taking finite unions, finite intersections and complements). Some examples of Nash functions: polynomial and regular rational functions are Nash functions, and so is formula_0 on R.
Nash functions are those functions needed in order to have an implicit function theorem in real algebraic geometry.
Nash manifolds.
Along with Nash functions one defines Nash manifolds, which are semialgebraic analytic submanifolds of some R"n". A Nash mapping
between Nash manifolds is then an analytic mapping with semialgebraic graph. Nash functions and manifolds are named after John Forbes Nash, Jr., who proved (1952) that any compact smooth manifold admits a Nash manifold structure, i.e., is diffeomorphic to some Nash manifold. More generally, a smooth manifold admits a Nash manifold structure if and only if it is diffeomorphic to the interior of some compact smooth manifold possibly with boundary. Nash's result was later (1973) completed by Alberto Tognoli who proved that any compact smooth manifold is diffeomorphic to some affine real algebraic manifold; actually, any Nash manifold is Nash diffeomorphic to an affine real algebraic manifold. These results exemplify the fact that the Nash category is somewhat intermediate between the smooth and the algebraic categories.
Local properties.
The local properties of Nash functions are well understood. The ring of germs of Nash functions at a point of a Nash manifold of dimension "n" is isomorphic to the ring of algebraic power series in "n" variables (i.e., those series satisfying a nontrivial polynomial equation), which is the henselization of the ring of germs of rational functions. In particular, it is a regular local ring of dimension "n".
Global properties.
The global properties are more difficult to obtain. The fact that the ring of Nash functions on a Nash manifold (even noncompact) is noetherian was proved independently (1973) by Jean-Jacques Risler and Gustave Efroymson. Nash manifolds have properties similar to but weaker than Cartan's theorems A and B on Stein manifolds. Let formula_1 denote the sheaf of Nash function germs on
a Nash manifold "M", and formula_2 be a coherent sheaf of formula_1-ideals. Assume formula_2 is finite, i.e., there exists a finite open semialgebraic covering formula_3 of "M" such that, for each "i", formula_4 is generated by Nash functions on formula_5. Then formula_2 is globally generated by Nash functions on "M", and the natural map
formula_6
is surjective. However
formula_7
contrarily to the case of Stein manifolds.
Generalizations.
Nash functions and manifolds can be defined over any real closed field instead of the field of real numbers, and the above statements still hold. Abstract Nash functions can also be defined on the real spectrum of any commutative ring. | [
{
"math_id": 0,
"text": "x\\mapsto \\sqrt{1+x^2}"
},
{
"math_id": 1,
"text": "\\mathcal{N}"
},
{
"math_id": 2,
"text": "\\mathcal{I}"
},
{
"math_id": 3,
"text": "\\{U_i\\}"
},
{
"math_id": 4,
"text": "\\mathcal{I}|_{U_i}"
},
{
"math_id": 5,
"text": "U_i"
},
{
"math_id": 6,
"text": "H^0(M,\\mathcal{N}) \\to H^0(M,\\mathcal{N}/\\mathcal{I})"
},
{
"math_id": 7,
"text": "H^1(M,\\mathcal{N})\\neq 0, \\ \\text{if} \\ \\dim(M) > 0,"
}
]
| https://en.wikipedia.org/wiki?curid=13655834 |
13656257 | Pacific–North American teleconnection pattern | Large-scale weather pattern with two modes
The Pacific–North American teleconnection pattern (PNA) is a large-scale weather pattern with two modes, denoted positive and negative, and which relates the atmospheric circulation pattern over the North Pacific Ocean with the one over the North American continent. It is the second leading mode of natural climate variability in the higher latitudes of the Northern Hemisphere (behind the Arctic Oscillation or North Atlantic Oscillation) and can be diagnosed using the arrangement of anomalous geopotential heights or air pressures over the North Pacific and North America.
On average, the troposphere over North America features a ridge on the western part of the continent and a trough over the eastern part of the continent. The "positive phase" of the PNA teleconnection is identified by anomalously low geopotential heights south of the Aleutian Islands and over Southeastern U.S. straddling high geopotential heights over the North Pacific from Hawaii to the U.S. Intermountain West. This represents an amplification of the long-term average conditions. The "negative phase" features the opposite pattern over the same regions, with above-average geopotential heights straddling below-average heights. This represents a damping of the long-term average conditions.
Indices.
The PNA is typically quantified using an index using geopotential height anomalies at the 500-hPa pressure level, with positive and negative PNA phases based on the sign of the index. Wallace and Gutzler (1981) expressed the PNA index as the average of normalized height anomalies at the four centers of action most relevant to the PNA,
formula_0
where formula_1 describes the normalized 500-hPa height anomaly as a function of location. The subtropical center at (20°N, 160°W) can be excluded, though the difference between the resulting formula_2 index and the formula_3 index is small.
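A minimal sketch of evaluating the four-point index from a gridded field of normalized 500-hPa height anomalies follows; the grid layout, the nearest-gridpoint lookup, and all variable names are assumptions made only for illustration:

```python
import numpy as np

def pna4_index(z_star, lats, lons):
    # z_star[i, j] holds the normalized 500-hPa height anomaly at (lats[i], lons[j]);
    # lats in degrees north, lons in degrees east (0..360); the nearest grid point is used.
    def at(lat, lon):
        return z_star[np.abs(lats - lat).argmin(), np.abs(lons - lon).argmin()]

    # Centers of action: (20N, 160W), (45N, 165W), (55N, 115W), (30N, 85W).
    return 0.25 * (at(20, 200) - at(45, 195) + at(55, 245) - at(30, 275))

# Example on a dummy 2.5-degree grid filled with random anomalies.
lats = np.arange(0, 90.1, 2.5)
lons = np.arange(0, 360, 2.5)
z_star = np.random.default_rng(0).normal(size=(lats.size, lons.size))
print(pna4_index(z_star, lats, lons))
```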
Applying rotated principal component analysis to the 500-hPa geopotential height anomaly field in the Northern Hemisphere can also provide a quantification of the PNA (formula_4), with the canonical PNA pattern emerging as the second-leading principal component. This methodology is used by the U.S. Climate Prediction Center to compute its PNA index.
Dynamics.
Although the PNA is usually defined based on anomalies relative to monthly or seasonal averages, the PNA often varies at weekly timescales. However, as a pattern of internal climate variability, the state of the PNA occasionally changes without a clear and identifiable cause. This reduces the predictability of the PNA and can complicate long-range seasonal weather forecasts. Predictability of the PNA is limited to roughly within 10 days. The PNA is associated with changes in the intensity and positioning of the East Asian jet stream. During the positive phase of the PNA, the East Asian jet intensifies and extends eastward across the North Pacific towards the western U.S. During the negative phase, the jet stream is retracted over East Asia, producing a blocking weather pattern over the North Pacific. Some of the energy that drives the PNA originates from the barotropic instability produced by the jet, potentially exciting Rossby waves. Shifts in the jet stream can induce changes in air pressure distributions both near and downstream of the jet.
Storms over the tropical Pacific and Indian oceans may play a role in exciting the positive and negative phases of the PNA by influencing the East Asian jet. Tropical convection can induce a low-amplitude PNA pattern that amplifies to its peak strength after 8–12 days. Atmospheric eddies and Rossby waves can further intensify the PNA pattern. Positive PNA is correlated with increased convective activity over western tropical Pacific and reduced convective activity over the tropical Indian Ocean, while negative PNA is correlated with the opposite convective anomalies. The Rossby waves associated with positive PNA tend to track eastward and undergo cyclonic wavebreaking, while those associated with negative PNA tend to track equatorward towards the subtropics and break anticyclonically; the wavebreaking behavior of the Rossby waves is determined by the meridional gradient of potential vorticity and the magnitude and orientation of wind shear, which in turn are modulated by variations in the East Asian jet stream. In either case, positive feedbacks associated with the wavebreaking sustain amplified PNA patterns.
Other teleconnections can modulate the PNA by modifying the jet stream. The El Niño–Southern Oscillation (ENSO) impacts the behavior of PNA, with the positive phase of the PNA more commonly associated with El Niño and the negative phase more commonly associated with La Niña. This relationship is most evident at seasonal timescales, making the seasonal PNA more predictable than the monthly PNA. The negative phase is also favored when the Madden–Julian oscillation (MJO) enhances convection over the Indian Ocean and Maritime Continent; the positive phase is favored when the MJO enhances convection closer to the central Pacific. The MJO's influence on the PNA arises from the interaction between the enhanced convection and the Pacific jet stream.
Effects on weather.
The regional variations in weather associated with the PNA are generally the result of the PNA's influence on the East Asian jet. The temperature pattern associated with the PNA follows the pattern of anomalous ridging and troughing. The positive phase of the PNA is correlated with above-average temperatures over the U.S. Pacific Coast and Western Canada. During the positive phase, an anomalously strong ridge of high pressure over Canada reduces the frequency of cold air outbreaks over western Northern America. Below-average temperatures over the South-Central U.S., Southeastern U.S., and U.S. East Coast are associated with the positive phase due to the presence of anomalously low pressure. The influence of the PNA on surface temperatures over North America is reduced during the summer.
Correlations between precipitation patterns and the PNA are weaker than temperature patterns, but are nonetheless evident. Anomalously high precipitation over the Gulf of Alaska and Pacific Northwest accompany the positive phase, along with below-average precipitation totals over the Pacific Northwest, Northern Rocky Mountains, and Ohio and Tennessee river valleys. The negative PNA phase exhibits the opposite departures from average.
References.
Sources.
| [
{
"math_id": 0,
"text": "\\text{PNA}_4 = \\frac{1}{4}(z^*(20^{\\circ}\\text{N}, 160^{\\circ}\\text{W}) - z^*(45^{\\circ}\\text{N}, 165^{\\circ}\\text{W}) + z^*(55^{\\circ}\\text{N}, 115^{\\circ}\\text{W}) - z^*(30^{\\circ}\\text{N}, 85^{\\circ}\\text{W}))"
},
{
"math_id": 1,
"text": "z^*"
},
{
"math_id": 2,
"text": "\\text{PNA}_3"
},
{
"math_id": 3,
"text": "\\text{PNA}_4"
},
{
"math_id": 4,
"text": "\\text{PNA}_\\text{RPCA}"
}
]
| https://en.wikipedia.org/wiki?curid=13656257 |
13657747 | Dirac bracket | Quantization method for constrained Hamiltonian systems with second-class constraints
The Dirac bracket is a generalization of the Poisson bracket developed by Paul Dirac to treat classical systems with second class constraints in Hamiltonian mechanics, and to thus allow them to undergo canonical quantization. It is an important part of Dirac's development of Hamiltonian mechanics to elegantly handle more general Lagrangians; specifically, when constraints are at hand, so that the number of apparent variables exceeds that of dynamical ones. More abstractly, the two-form implied from the Dirac bracket is the restriction of the symplectic form to the constraint surface in phase space.
This article assumes familiarity with the standard Lagrangian and Hamiltonian formalisms, and their connection to canonical quantization. Details of Dirac's modified Hamiltonian formalism are also summarized to put the Dirac bracket in context.
Inadequacy of the standard Hamiltonian procedure.
The standard development of Hamiltonian mechanics is inadequate in several specific situations:
Example of a Lagrangian linear in velocity.
An example in classical mechanics is a particle with charge q and mass m confined to the x - y plane with a strong constant, homogeneous perpendicular magnetic field, which therefore points in the z-direction with strength B.
The Lagrangian for this system with an appropriate choice of parameters is
formula_0
where A is the vector potential for the magnetic field B; c is the speed of light in vacuum; and V(x,y) is an arbitrary external scalar potential; one could easily take it to be quadratic in x and y, without loss of generality. We use
formula_1
as our vector potential; this corresponds to a uniform and constant magnetic field "B" in the "z" direction. Here, the hats indicate unit vectors. Later in the article, however, they are used to distinguish quantum mechanical operators from their classical analogs. The usage should be clear from the context.
Explicitly, the Lagrangian amounts to just
formula_2
which leads to the equations of motion
formula_3
formula_4
For a harmonic potential, the gradient of "V" amounts to just the coordinates, −("x","y").
Now, in the limit of a very large magnetic field, "qB"/"mc" ≫ 1. One may then drop the kinetic term to produce a simple approximate Lagrangian,
formula_5
with first-order equations of motion
formula_6
formula_7
Note that this approximate Lagrangian is "linear in the velocities", which is one of the conditions under which the standard Hamiltonian procedure breaks down. While this example has been motivated as an approximation, the Lagrangian under consideration is legitimate and leads to consistent equations of motion in the Lagrangian formalism.
Following the Hamiltonian procedure, however, the canonical momenta associated with the coordinates are now
formula_8
formula_9
which are unusual in that they are not invertible to the velocities; instead, they are constrained to be functions of the coordinates: the four phase-space variables are linearly dependent, so the variable basis is overcomplete.
A Legendre transformation then produces the Hamiltonian
formula_10
Note that this "naive" Hamiltonian has "no dependence on the momenta", which means that equations of motion (Hamilton's equations) are inconsistent.
The Hamiltonian procedure has broken down. One might try to fix the problem by eliminating two of the components of the 4-dimensional phase space, say y and "p""y", down to a reduced phase space of 2 dimensions, that is sometimes expressing the coordinates as momenta and sometimes as coordinates. However, this is neither a general nor rigorous solution. This gets to the heart of the matter: that the definition of the canonical momenta implies a "constraint on phase space" (between momenta and coordinates) that was never taken into account.
Generalized Hamiltonian procedure.
In Lagrangian mechanics, if the system has holonomic constraints, then one generally adds Lagrange multipliers to the Lagrangian to account for them. The extra terms vanish when the constraints are satisfied, thereby forcing the path of stationary action to be on the constraint surface. In this case, going to the Hamiltonian formalism introduces a constraint on "phase space" in Hamiltonian mechanics, but the solution is similar.
Before proceeding, it is useful to understand the notions of weak equality and strong equality. Two functions on phase space, f and g, are weakly equal if they are equal "when the constraints are satisfied, but not throughout the phase space", denoted "f ≈ g". If f and g are equal independently of the constraints being satisfied, they are called strongly equal, written "f" = "g". It is important to note that, in order to get the right answer, "no weak equations may be used before evaluating derivatives or Poisson brackets".
The new procedure works as follows, start with a Lagrangian and define the canonical momenta in the usual way. Some of those definitions may not be invertible and instead give a constraint in phase space (as above). Constraints derived in this way or imposed from the beginning of the problem are called primary constraints. The constraints, labeled "φ""j", must weakly vanish, "φ""j" ("p,q") ≈ 0.
Next, one finds the naive Hamiltonian, H, in the usual way via a Legendre transformation, exactly as in the above example. Note that the Hamiltonian can always be written as a function of "q"s and "p"s only, even if the velocities cannot be inverted into functions of the momenta.
Generalizing the Hamiltonian.
Dirac argues that we should generalize the Hamiltonian (somewhat analogously to the method of Lagrange multipliers) to
formula_11
where the "c""j" are not constants but functions of the coordinates and momenta. Since this new Hamiltonian is the most general function of coordinates and momenta weakly equal to the naive Hamiltonian, "H"* is the broadest generalization of the Hamiltonian possible
so that "δH" * ≈ "δH" when "δϕj" ≈ 0.
To further illuminate the "c""j", consider how one gets the equations of motion from the naive Hamiltonian in the standard procedure. One expands the variation of the Hamiltonian out in two ways and sets them equal (using a somewhat abbreviated notation with suppressed indices and sums):
formula_12
where the second equality holds after simplifying with the Euler-Lagrange equations of motion and the definition of canonical momentum. From this equality, one deduces the equations of motion in the Hamiltonian formalism from
formula_13
where the weak equality symbol is no longer displayed explicitly, since by definition the equations of motion only hold weakly. In the present context, one cannot simply set the coefficients of "δq" and "δp" separately to zero, since the variations are somewhat restricted by the constraints. In particular, the variations must be tangent to the constraint surface.
One can demonstrate that the solution to
formula_14
for the variations "δq""n" and "δp""n" restricted by the constraints "Φ""j" ≈ 0 (assuming the constraints satisfy some regularity conditions) is generally
formula_15
formula_16
where the "u""m" are arbitrary functions.
Using this result, the equations of motion become
formula_17
formula_18
formula_19
where the "uk" are functions of coordinates and velocities that can be determined, in principle, from the second equation of motion above.
The Legendre transform between the Lagrangian formalism and the Hamiltonian formalism has been saved at the cost of adding new variables.
Consistency conditions.
The equations of motion become more compact when using the Poisson bracket, since if f is some function of the coordinates and momenta then
formula_20
if one assumes that the Poisson bracket with the "u""k" (functions of the velocity) exist; this causes no problems since the contribution weakly vanishes. Now, there are some consistency conditions which must be satisfied in order for this formalism to make sense. If the constraints are going to be satisfied, then their equations of motion must weakly vanish, that is, we require
formula_21
There are four different types of conditions that can result from the above:
The first case indicates that the starting Lagrangian gives inconsistent equations of motion, such as "L" = "q". The second case does not contribute anything new.
The third case gives new constraints in phase space. A constraint derived in this manner is called a secondary constraint. Upon finding the secondary constraint one should add it to the extended Hamiltonian and check the new consistency conditions, which may result in still more constraints. Iterate this process until there are no more constraints. The distinction between primary and secondary constraints is largely an artificial one (i.e. a constraint for the same system can be primary or secondary depending on the Lagrangian), so this article does not distinguish between them from here on. Assuming the consistency condition has been iterated until all of the constraints have been found, then "ϕ""j" will index all of them. Note this article uses secondary constraint to mean any constraint that was not initially in the problem or derived from the definition of canonical momenta; some authors distinguish between secondary constraints, tertiary constraints, et cetera.
Finally, the last case helps fix the "u""k". If, at the end of this process, the "u""k" are not completely determined, then that means there are unphysical (gauge) degrees of freedom in the system. Once all of the constraints (primary and secondary) are added to the naive Hamiltonian and the solutions to the consistency conditions for the "uk" are plugged in, the result is called "the total Hamiltonian".
Determination of the "u""k".
The "u"k must solve a set of inhomogeneous linear equations of the form
formula_22
The above equation must possess at least one solution, since otherwise the initial Lagrangian is inconsistent; however, in systems with gauge degrees of freedom, the solution will not be unique. The most general solution is of the form
formula_23
where "U""k" is a particular solution and "V""k" is the most general solution to the homogeneous equation
formula_24
The most general solution will be a linear combination of linearly independent solutions to the above homogeneous equation. The number of linearly independent solutions equals the number of "u""k" (which is the same as the number of constraints) minus the number of consistency conditions of the fourth type (in previous subsection). This is the number of unphysical degrees of freedom in the system. Labeling the linearly independent solutions "V""k""a" where the index a runs from 1 to the number of unphysical degrees of freedom, the general solution to the consistency conditions is of the form
formula_25
where the formula_26 are completely arbitrary functions of time. A different choice of the formula_26 corresponds to a gauge transformation, and should leave the physical state of the system unchanged.
The total Hamiltonian.
At this point, it is natural to introduce the total Hamiltonian
formula_27
and what is denoted
formula_28
The time evolution of a function on the phase space, f , is governed by
formula_29
Later, the extended Hamiltonian is introduced. For gauge-invariant (physically measurable quantities) quantities, all of the Hamiltonians should give the same time evolution, since they are all weakly equivalent. It is only for non gauge-invariant quantities that the distinction becomes important.
The Dirac bracket.
Above is everything needed to find the equations of motion in Dirac's modified Hamiltonian procedure. Having the equations of motion, however, is not the endpoint for theoretical considerations. If one wants to canonically quantize a general system, then one needs the Dirac brackets. Before defining Dirac brackets, first-class and second-class constraints need to be introduced.
We call a function "f(q, p)" of coordinates and momenta first class if its Poisson bracket with all of the constraints weakly vanishes, that is,
formula_30
for all j. Note that the only quantities that weakly vanish are the constraints "ϕ""j", and therefore anything that weakly vanishes must be strongly equal to a linear combination of the constraints. One can demonstrate that the Poisson bracket of two first-class quantities must also be first class. The first-class constraints are intimately connected with the unphysical degrees of freedom mentioned earlier. Namely, the number of independent first-class constraints is equal to the number of unphysical degrees of freedom, and furthermore, the primary first-class constraints generate gauge transformations. Dirac further postulated that all secondary first-class constraints are generators of gauge transformations, which turns out to be false; however, typically one operates under the assumption that all first-class constraints generate gauge transformations when using this treatment.
When the first-class secondary constraints are added into the Hamiltonian with arbitrary formula_26 as the first-class primary constraints are added to arrive at the total Hamiltonian, then one obtains the extended Hamiltonian. The extended Hamiltonian gives the most general possible time evolution for any gauge-dependent quantities, and may actually generalize the equations of motion from those of the Lagrangian formalism.
For the purposes of introducing the Dirac bracket, of more immediate interest are the second class constraints. Second class constraints are constraints that have a nonvanishing Poisson bracket with at least one other constraint.
For instance, consider second-class constraints "ϕ"1 and "ϕ"2 whose Poisson bracket is simply a constant, c,
formula_31
Now, suppose one wishes to employ canonical quantization; then the phase-space coordinates become operators whose commutators become "iħ" times their classical Poisson brackets. Assuming there are no ordering issues that give rise to new quantum corrections, this implies that
formula_32
where the hats emphasize the fact that the constraints are now operators.
On one hand, canonical quantization gives the above commutation relation, but on the other hand ϕ1 and "ϕ"2 are constraints that must vanish on physical states, whereas the right-hand side cannot vanish. This example illustrates the need for some generalization of the Poisson bracket which respects the system's constraints, and which leads to a consistent quantization procedure. This new bracket should be bilinear, antisymmetric, satisfy the Jacobi identity as does the Poisson bracket, reduce to the Poisson bracket for unconstrained systems, and, additionally, "the bracket of any second-class constraint with any other quantity must vanish".
At this point, the second class constraints will be labeled formula_33. Define a matrix with entries
formula_34
In this case, the Dirac bracket of two functions on phase space, f and g, is defined as
formula_35
where "M"−1"ab" denotes the "ab" entry of M 's inverse matrix. Dirac proved that M "will always be invertible".
It is straightforward to check that the above definition of the Dirac bracket satisfies all of the desired properties, and especially the last one, of vanishing for an argument which is a second-class constraint.
When applying canonical quantization on a constrained Hamiltonian system, the commutator of the operators is supplanted by "iħ" times their classical "Dirac bracket". Since the Dirac bracket respects the constraints, one need not be careful about evaluating all brackets before using any weak equations, as is the case with the Poisson bracket.
Note that while the Poisson bracket of a bosonic (Grassmann-even) variable with itself must vanish, the Poisson bracket of a fermion, represented as a Grassmann variable, with itself need not vanish. This means that in the fermionic case it "is" possible for there to be an odd number of second class constraints.
Illustration on the example provided.
Returning to the above example, the naive Hamiltonian and the two primary constraints are
formula_36
formula_37
Therefore, the extended Hamiltonian can be written
formula_38
The next step is to apply the consistency conditions {"Φ""j", "H"*}"PB" ≈ 0, which in this case become
formula_39
formula_40
These are "not" secondary constraints, but conditions that fix "u"1 and "u"2. Therefore, there are no secondary constraints and the arbitrary coefficients are completely determined, indicating that there are no unphysical degrees of freedom.
If one plugs in the values of "u"1 and "u"2, then one can see that the equations of motion are
formula_41
formula_42
formula_43
formula_44
which are self-consistent and coincide with the Lagrangian equations of motion.
A simple calculation confirms that "ϕ"1 and "ϕ"2 are second class constraints since
formula_45
hence the matrix looks like
formula_46
which is easily inverted to
formula_47
where "ε""ab" is the Levi-Civita symbol. Thus, the Dirac brackets are defined to be
formula_48
If one always uses the Dirac bracket instead of the Poisson bracket, then there is no issue about the order of applying constraints and evaluating expressions, since the Dirac bracket of anything weakly zero is strongly equal to zero. This means that one can instead just use the naive Hamiltonian with Dirac brackets to obtain the correct equations of motion, as one can easily confirm against those given above.
To quantize the system, the Dirac brackets between all of the phase space variables are needed. The nonvanishing Dirac brackets for this system are
formula_49
formula_50
while the cross-terms vanish, and
formula_51
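These brackets can be checked mechanically. The following is a minimal SymPy sketch (not part of the original article; the function names are illustrative) that builds the Poisson bracket, the constraint matrix M, and the Dirac bracket for this example, and reproduces the nonvanishing brackets quoted above.

```python
# A minimal SymPy sketch (not part of the article; function names are illustrative)
# reproducing the Dirac brackets above from the definition
# {f, g}_DB = {f, g}_PB - {f, phi_a} (M^-1)_ab {phi_b, g}.
import sympy as sp

x, y, px, py, q, B, c = sp.symbols('x y p_x p_y q B c', real=True)
coords, momenta = [x, y], [px, py]

def pb(f, g):
    """Canonical Poisson bracket on the (x, y, p_x, p_y) phase space."""
    return sum(sp.diff(f, qi) * sp.diff(g, pi) - sp.diff(f, pi) * sp.diff(g, qi)
               for qi, pi in zip(coords, momenta))

phi = [px + q*B/(2*c) * y, py - q*B/(2*c) * x]          # the two second-class constraints
M = sp.Matrix(2, 2, lambda a, b: pb(phi[a], phi[b]))     # M_ab = {phi_a, phi_b}_PB
Minv = M.inv()

def db(f, g):
    """Dirac bracket built from the Poisson bracket and the inverse constraint matrix."""
    correction = sum(pb(f, phi[a]) * Minv[a, b] * pb(phi[b], g)
                     for a in range(2) for b in range(2))
    return sp.simplify(pb(f, g) - correction)

print(db(x, y), db(x, px), db(y, py), db(px, py))   # -c/(B*q), 1/2, 1/2, -B*q/(4*c)
```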
Therefore, the correct implementation of canonical quantization dictates the commutation relations,
formula_52
formula_53
with the cross terms vanishing, and
formula_54
This example has a nonvanishing commutator between the x and y position operators, which means this structure specifies a noncommutative geometry. (Since the two coordinates do not commute, there will be an uncertainty principle for the x and y positions.)
Further Illustration for a hypersphere.
Similarly, for free motion on a hypersphere "S""n", the n + 1 coordinates are constrained, "xi xi" = 1. From a plain kinetic Lagrangian, it is evident that their momenta are perpendicular to them, "xi pi" = 0. Thus the corresponding Dirac brackets are likewise simple to work out,
formula_55
formula_56
formula_57
The (2"n" + 1) constrained phase-space variables ("xi, pi") obey much "simpler Dirac brackets" than the 2"n" unconstrained variables, had one eliminated one of the xs and one of the ps through the two constraints ab initio, which would obey plain Poisson brackets. The Dirac brackets add simplicity and elegance, at the cost of excessive (constrained) phase-space variables.
For example, for free motion on a circle, "n" = 1, setting "x"1 ≡ z and eliminating "x"2 through the circle constraint yields the unconstrained Lagrangian
formula_58
with equations of motion
formula_59
an oscillation; whereas the equivalent constrained system with "H" = "p"2/2 = "E" yields
formula_60
formula_61
whence, instantly, virtually by inspection, oscillation for both variables,
formula_62
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " L = \\tfrac{1}{2}m\\vec{v}^2 + \\frac{q}{c}\\vec{A}\\cdot\\vec{v} - V(\\vec{r}),"
},
{
"math_id": 1,
"text": " \\vec{A} = \\frac{B}{2}(x\\hat{y} - y\\hat{x})"
},
{
"math_id": 2,
"text": "\nL = \\frac{m}{2}(\\dot{x}^2 + \\dot{y}^2) + \\frac{qB}{2c}(x\\dot{y} - y\\dot{x}) - V(x, y) ~,\n"
},
{
"math_id": 3,
"text": "\nm\\ddot{x} = - \\frac{\\partial V}{\\partial x} + \\frac{q B}{c}\\dot{y}\n"
},
{
"math_id": 4,
"text": "\nm\\ddot{y} = - \\frac{\\partial V}{\\partial y} - \\frac{q B}{c}\\dot{x}.\n"
},
{
"math_id": 5,
"text": "\nL = \\frac{qB}{2c}(x\\dot{y} - y\\dot{x}) - V(x, y)~,\n"
},
{
"math_id": 6,
"text": "\n\\dot{y} = \\frac{c}{q B}\\frac{\\partial V}{\\partial x}\n"
},
{
"math_id": 7,
"text": "\n\\dot{x} = -\\frac{c}{q B}\\frac{\\partial V}{\\partial y}~.\n"
},
{
"math_id": 8,
"text": "\np_x = \\frac{\\partial L}{\\partial \\dot{x}} = -\\frac{q B}{2c}y\n"
},
{
"math_id": 9,
"text": "\np_y = \\frac{\\partial L}{\\partial \\dot{y}} = \\frac{q B}{2c}x ~,\n"
},
{
"math_id": 10,
"text": "\nH(x,y, p_x, p_y) = \\dot{x}p_x + \\dot{y} p_y - L = V(x, y).\n"
},
{
"math_id": 11,
"text": "\nH^* = H + \\sum_j c_j\\phi_j \\approx H,\n"
},
{
"math_id": 12,
"text": "\n\\delta H = \\frac{\\partial H}{\\partial q}\\delta q + \\frac{\\partial H}{\\partial p}\\delta p\n \\approx \\dot{q}\\delta p - \\dot{p}\\delta q ~,\n"
},
{
"math_id": 13,
"text": "\n\\left(\\frac{\\partial H}{\\partial q} + \\dot{p}\\right)\\delta q + \\left(\\frac{\\partial H}{\\partial p} - \\dot{q}\\right)\\delta p = 0 ~,\n"
},
{
"math_id": 14,
"text": "\n\\sum_n A_n\\delta q_n + \\sum_n B_n\\delta p_n = 0,\n"
},
{
"math_id": 15,
"text": "\nA_n = \\sum_m u_m \\frac{\\partial \\phi_m}{\\partial q_n}\n"
},
{
"math_id": 16,
"text": "\nB_n = \\sum_m u_m \\frac{\\partial \\phi_m}{\\partial p_n},\n"
},
{
"math_id": 17,
"text": "\n\\dot{p}_j = -\\frac{\\partial H}{\\partial q_j} - \\sum_k u_k \\frac{\\partial \\phi_k}{\\partial q_j}\n"
},
{
"math_id": 18,
"text": "\n\\dot{q}_j = \\frac{\\partial H}{\\partial p_j} + \\sum_k u_k \\frac{\\partial \\phi_k}{\\partial p_j}\n"
},
{
"math_id": 19,
"text": "\n\\phi_j(q, p) = 0,\n"
},
{
"math_id": 20,
"text": "\n\\dot{f} \\approx \\{f, H^*\\}_{PB} \\approx \\{f, H\\}_{PB} + \\sum_k u_k\\{f, \\phi_k\\}_{PB},\n"
},
{
"math_id": 21,
"text": "\n\\dot{\\phi_j} \\approx \\{\\phi_j, H\\}_{PB} + \\sum_k u_k\\{\\phi_j,\\phi_k\\}_{PB} \\approx 0.\n"
},
{
"math_id": 22,
"text": "\n\\{\\phi_j, H\\}_{PB} + \\sum_k u_k\\{\\phi_j,\\phi_k\\}_{PB} \\approx 0.\n"
},
{
"math_id": 23,
"text": "\nu_k = U_k + V_k,\n"
},
{
"math_id": 24,
"text": "\n\\sum_k V_k\\{\\phi_j,\\phi_k\\}_{PB}\\approx 0.\n"
},
{
"math_id": 25,
"text": "\nu_k \\approx U_k + \\sum_a v_a V^a_k,\n"
},
{
"math_id": 26,
"text": "\nv_a\n"
},
{
"math_id": 27,
"text": "\nH_T = H + \\sum_k U_k\\phi_k + \\sum_{a, k} v_a V^a_k \\phi_k\n"
},
{
"math_id": 28,
"text": "\nH' = H + \\sum_k U_k \\phi_k.\n"
},
{
"math_id": 29,
"text": "\n\\dot{f} \\approx \\{f, H_T\\}_{PB}.\n"
},
{
"math_id": 30,
"text": "\n\\{f, \\phi_j\\}_{PB} \\approx 0,\n"
},
{
"math_id": 31,
"text": "\n\\{\\phi_1,\\phi_2\\}_{PB} = c ~.\n"
},
{
"math_id": 32,
"text": "\n[\\hat{\\phi}_1, \\hat{\\phi}_2] = i\\hbar ~c,\n"
},
{
"math_id": 33,
"text": "\\tilde{\\phi}_a"
},
{
"math_id": 34,
"text": "\nM_{ab} = \\{\\tilde{\\phi}_a,\\tilde{\\phi}_b\\}_{PB}.\n"
},
{
"math_id": 35,
"text": "\n\\{f, g\\}_{DB} = \\{f, g\\}_{PB} - \\sum_{a, b}\\{f,\\tilde{\\phi}_a\\}_{PB} M^{-1}_{ab}\\{\\tilde{\\phi}_b,g\\}_{PB} ~,\n"
},
{
"math_id": 36,
"text": "\nH = V(x, y)\n"
},
{
"math_id": 37,
"text": "\n\\phi_1 = p_x + \\tfrac{q B}{2c} y,\\qquad \\phi_2 = p_y - \\tfrac{q B}{2 c} x.\n"
},
{
"math_id": 38,
"text": "\nH^* = V(x, y) + u_1 \\left(p_x + \\tfrac{q B}{2c}y\\right) + u_2 \\left(p_y - \\tfrac{q B}{2c}x\\right).\n"
},
{
"math_id": 39,
"text": "\n\\{\\phi_1, H\\}_{PB}+\\sum_j u_j\\{\\phi_1, \\phi_j\\}_{PB} = -\\frac{\\partial V}{\\partial x} + u_2 \\frac{q B}{c} \\approx 0\n"
},
{
"math_id": 40,
"text": "\n\\{\\phi_2, H\\}_{PB}+\\sum_j u_j\\{\\phi_2, \\phi_j\\}_{PB} = -\\frac{\\partial V}{\\partial y} - u_1 \\frac{q B}{c} \\approx 0.\n"
},
{
"math_id": 41,
"text": "\n\\dot{x} = \\{x, H\\}_{PB} + u_1\\{x, \\phi_1\\}_{PB} + u_2 \\{x, \\phi_2\\} = -\\frac{c}{q B} \\frac{\\partial V}{\\partial y}\n"
},
{
"math_id": 42,
"text": "\n\\dot{y} = \\frac{c}{q B} \\frac{\\partial V}{\\partial x}\n"
},
{
"math_id": 43,
"text": "\n\\dot{p}_x = -\\frac{1}{2}\\frac{\\partial V}{\\partial x}\n"
},
{
"math_id": 44,
"text": "\n\\dot{p}_y = -\\frac{1}{2}\\frac{\\partial V}{\\partial y},\n"
},
{
"math_id": 45,
"text": "\n\\{\\phi_1, \\phi_2\\}_{PB} = - \\{\\phi_2, \\phi_1\\}_{PB} = \\frac{q B}{c},\n"
},
{
"math_id": 46,
"text": "\nM = \\frac{q B}{c} \n\\left(\\begin{matrix}\n 0 & 1\\\\\n-1 & 0\n\\end{matrix}\\right),\n"
},
{
"math_id": 47,
"text": "\nM^{-1} = \\frac{c}{q B}\n\\left(\\begin{matrix}\n 0 & -1\\\\\n 1 & 0\n\\end{matrix}\\right) \\quad\\Rightarrow\\quad M^{-1}_{ab} = -\\frac{c}{q B_0} \\varepsilon_{ab},\n"
},
{
"math_id": 48,
"text": "\n\\{f, g\\}_{DB} = \\{f, g\\}_{PB} + \\frac{c\\varepsilon_{ab}}{q B} \\{f, \\phi_a\\}_{PB}\\{\\phi_b, g\\}_{PB}.\n"
},
{
"math_id": 49,
"text": "\n\\{x, y\\}_{DB} = -\\frac{c}{q B}\n"
},
{
"math_id": 50,
"text": "\n\\{x, p_x\\}_{DB} = \\{y, p_y\\}_{DB} = \\tfrac{1}{2}\n"
},
{
"math_id": 51,
"text": "\n\\{p_x, p_y\\}_{DB} = - \\frac{q B}{4c}.\n"
},
{
"math_id": 52,
"text": "\n[\\hat{x}, \\hat{y}] = -i\\frac{\\hbar c}{q B}\n"
},
{
"math_id": 53,
"text": "\n[\\hat{x}, \\hat{p}_x] = [\\hat{y}, \\hat{p}_y] = i\\frac{\\hbar}{2}\n"
},
{
"math_id": 54,
"text": "\n[\\hat{p}_x, \\hat{p}_y] = -i\\frac{\\hbar q B}{4c}~.\n"
},
{
"math_id": 55,
"text": "\n\\{x_i, x_j\\}_{DB} = 0,\n"
},
{
"math_id": 56,
"text": "\n\\{x_i, p_j\\}_{DB} = \\delta_{ij} -x_i x_j ,"
},
{
"math_id": 57,
"text": "\n\\{p_i, p_j\\}_{DB} = x_j p_i - x_i p_j ~.\n"
},
{
"math_id": 58,
"text": "L=\\frac{1}{2} \\frac {{\\dot z}^2}{1-z^2} ~,"
},
{
"math_id": 59,
"text": "{\\ddot z} =-z \\frac {{\\dot z}^2}{1-z^2} =-z 2E ~,"
},
{
"math_id": 60,
"text": "{\\dot x}^i =\\{x^i,H\\}_{DB} = p^i~, "
},
{
"math_id": 61,
"text": "{\\dot p}^i =\\{p^i,H\\}_{DB} = - x^i ~ p^2~, "
},
{
"math_id": 62,
"text": "{\\ddot x}^i = - x^i 2E ~. "
}
]
| https://en.wikipedia.org/wiki?curid=13657747 |
13659915 | Hörmander's condition | In mathematics, Hörmander's condition is a property of vector fields that, if satisfied, has many useful consequences in the theory of partial and stochastic differential equations. The condition is named after the Swedish mathematician Lars Hörmander.
Definition.
Given two "C"1 vector fields "V" and "W" on "d"-dimensional Euclidean space R"d", let ["V", "W"] denote their Lie bracket, another vector field defined by
formula_0
where D"V"("x") denotes the Fréchet derivative of "V" at "x" ∈ R"d", which can be thought of as a matrix that is applied to the vector "W"("x"), and "vice versa".
Let "A"0, "A"1, ... "A""n" be vector fields on R"d". They are said to satisfy Hörmander's condition if, for every point "x" ∈ R"d", the vectors
formula_1
span R"d". They are said to satisfy the parabolic Hörmander condition if the same holds true, but with the index formula_2 taking only values in 1...,"n".
Application to stochastic differential equations.
Consider the stochastic differential equation (SDE)
formula_3
where the vector fields formula_4 are assumed to have bounded derivatives, formula_5 is the normalized "n"-dimensional Brownian motion, and formula_6 stands for the Stratonovich interpretation of the SDE.
Hörmander's theorem asserts that if the SDE above satisfies the parabolic Hörmander condition, then its solutions admit a smooth density with respect to Lebesgue measure.
Application to the Cauchy problem.
With the same notation as above, define a second-order differential operator "F" by
formula_7
An important problem in the theory of partial differential equations is to determine sufficient conditions on the vector fields "A""i" for the Cauchy problem
formula_8
to have a smooth fundamental solution, i.e. a real-valued function "p" : (0, +∞) × R2"d" → R such that "p"("t", ·, ·) is smooth on R2"d" for each "t" and
formula_9
satisfies the Cauchy problem above. It had been known for some time that a smooth solution exists in the elliptic case, in which
formula_10
and the matrix "A" = ("a""ji"), 1 ≤ "j" ≤ "d", 1 ≤ "i" ≤ "n" is such that "AA"∗ is everywhere an invertible matrix.
The great achievement of Hörmander's 1967 paper was to show that a smooth fundamental solution exists under a considerably weaker assumption: the parabolic version of the condition that now bears his name.
Application to control systems.
Let "M" be a smooth manifold and formula_4 be smooth vector fields on "M". Assuming that these vector fields satisfy Hörmander's condition, then the control system
formula_11
is locally controllable in any time at every point of "M". This is known as the Chow–Rashevskii theorem. See Orbit (control theory). | [
{
"math_id": 0,
"text": "[V, W] (x) = \\mathrm{D} V(x) W(x) - \\mathrm{D} W(x) V(x),"
},
{
"math_id": 1,
"text": "\\begin{align}\n&A_{j_0} (x)~,\\\\\n&[A_{j_{0}} (x), A_{j_{1}} (x)]~,\\\\\n&[[A_{j_{0}} (x), A_{j_{1}} (x)], A_{j_{2}} (x)]~,\\\\\n&\\quad\\vdots\\quad\n\\end{align}\n\\qquad 0 \\leq j_{0}, j_{1}, \\ldots, j_{n} \\leq n\n"
},
{
"math_id": 2,
"text": "j_0"
},
{
"math_id": 3,
"text": "\\operatorname dx = A_0(x) \\operatorname dt + \\sum_{i=1}^n A_i(x) \\circ \\operatorname dW_i"
},
{
"math_id": 4,
"text": "A_0,\\dotsc,A_n"
},
{
"math_id": 5,
"text": "(W_1,\\dotsc,W_n)"
},
{
"math_id": 6,
"text": "\\circ\\operatorname d"
},
{
"math_id": 7,
"text": "F = \\frac1{2} \\sum_{i = 1}^n A_i^2 + A_0."
},
{
"math_id": 8,
"text": "\\begin{cases} \\dfrac{\\partial u}{\\partial t} (t, x) = F u(t, x), & t > 0, x \\in \\mathbf{R}^{d}; \\\\ u(t, \\cdot) \\to f, & \\text{as } t \\to 0; \\end{cases}"
},
{
"math_id": 9,
"text": "u(t, x) = \\int_{\\mathbf{R}^{d}} p(t, x, y) f(y) \\, \\mathrm{d} y"
},
{
"math_id": 10,
"text": "A_{i} = \\sum_{j = 1}^{d} a_{ji} \\frac{\\partial}{\\partial x_{j}},"
},
{
"math_id": 11,
"text": "\\dot{x} = \\sum_{i=0}^{n} u_{i} A_{i}(x)"
}
]
| https://en.wikipedia.org/wiki?curid=13659915 |
13660 | Homeomorphism | Mapping which preserves all topological properties of a given space
In mathematics and more specifically in topology, a homeomorphism (from Greek roots meaning "similar shape", named by Henri Poincaré), also called topological isomorphism, or bicontinuous function, is a bijective and continuous function between topological spaces that has a continuous inverse function. Homeomorphisms are the isomorphisms in the category of topological spaces—that is, they are the mappings that preserve all the topological properties of a given space. Two spaces with a homeomorphism between them are called homeomorphic, and from a topological viewpoint they are the same.
Very roughly speaking, a topological space is a geometric object, and a homeomorphism results from a continuous deformation of the object into a new shape. Thus, a square and a circle are homeomorphic to each other, but a sphere and a torus are not. However, this description can be misleading. Some continuous deformations do not result in homeomorphisms, such as the deformation of a line into a point. Some homeomorphisms do not result from continuous deformations, such as the homeomorphism between a trefoil knot and a circle. Homotopy and isotopy are precise definitions for the informal concept of "continuous deformation".
Definition.
A function formula_0 between two topological spaces is a homeomorphism if it has the following properties: formula_1 is a bijection (one-to-one and onto), formula_1 is continuous, and the inverse function formula_2 is continuous.
A homeomorphism is sometimes called a "bicontinuous" function. If such a function exists, formula_3 and formula_4 are homeomorphic. A self-homeomorphism is a homeomorphism from a topological space onto itself. Being "homeomorphic" is an equivalence relation on topological spaces. Its equivalence classes are called homeomorphism classes.
The third requirement, that formula_5 be continuous, is essential. Consider for instance the function formula_6 (where formula_8 is the unit circle in the plane) defined by formula_7 This function is bijective and continuous, but not a homeomorphism (formula_8 is compact but formula_9 is not). The function formula_5 is not continuous at the point formula_10 because although formula_5 maps formula_11 to formula_12 any neighbourhood of this point also includes points that the function maps close to formula_13 but the points it maps to numbers in between lie outside the neighbourhood.
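A quick numerical illustration (not part of the article; the helper names are made up) of this failure of continuity: two points of the circle that are very close to each other can have preimages near 0 and near 2π respectively.

```python
# A quick numerical check (not from the article; helper names are made up) that the
# inverse map is discontinuous at (1, 0): nearby points on the circle can have
# preimages near 0 and near 2*pi.
import math

def f(phi):
    return (math.cos(phi), math.sin(phi))

def f_inverse(point):
    x, y = point
    return math.atan2(y, x) % (2 * math.pi)   # value in [0, 2*pi)

for phi in (0.01, 2 * math.pi - 0.01):
    px, py = f(phi)
    print((round(px, 5), round(py, 5)), round(f_inverse((px, py)), 5))
# Both image points lie within 0.01 of (1, 0), yet their preimages 0.01 and
# 2*pi - 0.01 are far apart, so f_inverse cannot be continuous at (1, 0).
```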
Homeomorphisms are the isomorphisms in the category of topological spaces. As such, the composition of two homeomorphisms is again a homeomorphism, and the set of all self-homeomorphisms formula_14 forms a group, called the homeomorphism group of "X", often denoted formula_15 This group can be given a topology, such as the compact-open topology, which under certain assumptions makes it a topological group.
In some contexts, there are homeomorphic objects that cannot be continuously deformed from one to the other. Homotopy and isotopy are equivalence relations that have been introduced for dealing with such situations.
Similarly, as usual in category theory, given two spaces that are homeomorphic, the space of homeomorphisms between them, formula_16 is a torsor for the homeomorphism groups formula_17 and formula_18 and, given a specific homeomorphism between formula_3 and formula_19 all three sets are identified.
Informal discussion.
The intuitive criterion of stretching, bending, cutting and gluing back together takes a certain amount of practice to apply correctly—it may not be obvious from the description above that deforming a line segment to a point is impermissible, for instance. It is thus important to realize that it is the formal definition given above that counts. In this case, for example, the line segment possesses infinitely many points, and therefore cannot be put into a bijection with a set containing only a finite number of points, including a single point.
This characterization of a homeomorphism often leads to a confusion with the concept of homotopy, which is actually "defined" as a continuous deformation, but from one "function" to another, rather than one space to another. In the case of a homeomorphism, envisioning a continuous deformation is a mental tool for keeping track of which points on space "X" correspond to which points on "Y"—one just follows them as "X" deforms. In the case of homotopy, the continuous deformation from one map to the other is of the essence, and it is also less restrictive, since none of the maps involved need to be one-to-one or onto. Homotopy does lead to a relation on spaces: homotopy equivalence.
There is a name for the kind of deformation involved in visualizing a homeomorphism. It is (except when cutting and regluing are required) an isotopy between the identity map on "X" and the homeomorphism from "X" to "Y".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f : X \\to Y"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "f^{-1}"
},
{
"math_id": 3,
"text": "X"
},
{
"math_id": 4,
"text": "Y"
},
{
"math_id": 5,
"text": "f^{-1}"
},
{
"math_id": 6,
"text": "f : [0,2\\pi) \\to S^1"
},
{
"math_id": 7,
"text": "f(\\varphi) = (\\cos\\varphi,\\sin\\varphi)."
},
{
"math_id": 8,
"text": "S^1"
},
{
"math_id": 9,
"text": "[0,2\\pi)"
},
{
"math_id": 10,
"text": "(1,0),"
},
{
"math_id": 11,
"text": "(1,0)"
},
{
"math_id": 12,
"text": "0,"
},
{
"math_id": 13,
"text": "2\\pi,"
},
{
"math_id": 14,
"text": "X \\to X"
},
{
"math_id": 15,
"text": "\\text{Homeo}(X)."
},
{
"math_id": 16,
"text": "\\text{Homeo}(X,Y),"
},
{
"math_id": 17,
"text": "\\text{Homeo}(X)"
},
{
"math_id": 18,
"text": "\\text{Homeo}(Y),"
},
{
"math_id": 19,
"text": "Y,"
},
{
"math_id": 20,
"text": "(a,b)"
},
{
"math_id": 21,
"text": "a < b."
},
{
"math_id": 22,
"text": "f(x) = \\frac{1}{a-x} + \\frac{1}{b-x} "
},
{
"math_id": 23,
"text": "D^2"
},
{
"math_id": 24,
"text": "(\\rho, \\theta) \\mapsto \\left( \\tfrac{\\rho}{ \\max(|\\cos \\theta|, |\\sin \\theta|)}, \\theta\\right)."
},
{
"math_id": 25,
"text": "G"
},
{
"math_id": 26,
"text": "x \\mapsto x^{-1}"
},
{
"math_id": 27,
"text": "x \\in G,"
},
{
"math_id": 28,
"text": "y \\mapsto xy,"
},
{
"math_id": 29,
"text": "y \\mapsto yx,"
},
{
"math_id": 30,
"text": "y \\mapsto xyx^{-1}"
},
{
"math_id": 31,
"text": "[0,1]"
},
{
"math_id": 32,
"text": "(0,1)"
},
{
"math_id": 33,
"text": "S^1"
},
{
"math_id": 34,
"text": "D^2"
}
]
| https://en.wikipedia.org/wiki?curid=13660 |
13662732 | Rydberg matter | An exotic phase of matter formed by Rydberg atoms
Rydberg matter is an exotic phase of matter formed by Rydberg atoms; it was predicted around 1980 by É. A. Manykin, M. I. Ozhovan and P. P. Poluéktov. It has been formed from various elements like caesium, potassium, hydrogen and nitrogen; studies have been conducted on theoretical possibilities like sodium, beryllium, magnesium and calcium. It has been suggested to be a material that diffuse interstellar bands may arise from. Circular Rydberg states, where the outermost electron is found in a planar circular orbit, are the most long-lived, with lifetimes of up to several hours, and are the most common.
Physical.
Rydberg matter usually consists of hexagonal planar clusters; these cannot be very big because of the retardation effect caused by the finite speed of light. Hence, they are not gases or plasmas; nor are they solids or liquids; they are most similar to dusty plasmas with small clusters in a gas. Though Rydberg matter can be studied in the laboratory by laser probing, the largest cluster reported consists of only 91 atoms, but it has been shown to be behind extended clouds in space and the upper atmosphere of planets. Bonding in Rydberg matter is caused by delocalisation of the high-energy electrons to form an overall lower energy state. The way in which the electrons delocalise is to form standing waves on loops surrounding nuclei, creating quantised angular momentum and the defining characteristics of Rydberg matter. It is a generalised metal by way of the quantum numbers influencing loop size but restricted by the bonding requirement for strong electron correlation; it shows exchange-correlation properties similar to covalent bonding. Electronic excitation and vibrational motion of these bonds can be studied by Raman spectroscopy.
Lifetime.
For reasons still debated by the physics community, partly because of the lack of methods to observe clusters, Rydberg matter is highly stable against disintegration by emission of radiation; the characteristic lifetime of a cluster at "n" = 12 is 25 seconds. Reasons given include the lack of overlap between excited and ground states, the forbidding of transitions between them, and exchange-correlation effects hindering emission through necessitating tunnelling that causes a long delay in excitation decay. Excitation plays a role in determining lifetimes, with a higher excitation giving a longer lifetime; "n" = 80 gives a lifetime comparable to the age of the Universe.
Excitations.
In ordinary metals, interatomic distances are nearly constant through a wide range of temperatures and pressures; this is not the case with Rydberg matter, whose distances and thus properties vary greatly with excitations. A key variable in determining these properties is the principal quantum number "n" that can be any integer greater than 1; the highest values reported for it are around 100. Bond distance "d" in Rydberg matter is given by
formula_0
where "a"0 is the Bohr radius. The approximate factor 2.9 was first experimentally determined, then measured with rotational spectroscopy in different clusters. Examples of "d" calculated this way, along with selected values of the density "D", are given in the adjacent table.
Condensation.
Like bosons that can be condensed to form Bose–Einstein condensates, Rydberg matter can be condensed, but not in the same way as bosons. The reason for this is that Rydberg matter behaves similarly to a gas, meaning that it cannot be condensed without removing the condensation energy; ionisation occurs if this is not done. All solutions to this problem so far involve using an adjacent surface in some way, the best being evaporating the atoms of which the Rydberg matter is to be formed from and leaving the condensation energy on the surface. Using caesium atoms, graphite-covered surfaces and thermionic converters as containment, the work function of the surface has been measured to be 0.5 eV, indicating that the cluster is between the ninth and fourteenth excitation levels.
See also.
The overview provides information on Rydberg matter and possible applications in developing clean energy, catalysts, researching space phenomena, and usage in sensors.
Disputed.
The research claiming to create ultradense hydrogen Rydberg matter (with interatomic spacing of ~2.3pm: many orders of magnitude less than in most solid matter) is disputed:
″The paper of Holmlid and Zeiner-Gundersen makes claims that would be truly revolutionary if they were true. We have shown that they violate some fundamental and very well established laws in a rather direct manner. We believe we share this scepticism with most of the scientific community. The response to the theories of Holmlid is perhaps most clearly reflected in the reference list of their article. Out of 114 references, 36 are not coauthored by Holmlid. And of these 36, none address the claims made by him and his co-authors. This is so much more remarkable because the claims, if correct, would revolutionize quantum science, add at least two new forms of hydrogen, of which one is supposedly the ground state of the element, discover an extremely dense form of matter, discover processes that violate baryon number conservation, in addition to solving humanity’s need for energy practically in perpetuity.″
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d = 2.9 n^2 a_0,"
}
]
| https://en.wikipedia.org/wiki?curid=13662732 |
13664406 | Cut locus | In differential geometry, the cut locus of a point p on a manifold is the closure of the set of all other points on the manifold that are connected to p by two or more distinct shortest geodesics. More generally, the cut locus of a closed set X on the manifold is the closure of the set of all other points on the manifold connected to X by two or more distinct shortest geodesics.
Examples.
In the Euclidean plane, a point "p" has an empty cut locus, because every other point is connected to "p" by a unique geodesic (the line segment between the points).
On the sphere, the cut locus of a point consists of the single antipodal point diametrically opposite to it.
On an infinitely long cylinder, the cut locus of a point consists of the line opposite the point.
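For the cylinder this is easy to check numerically: unrolling a unit cylinder, a point can be reached from the base point by geodesics winding either way around, and the two candidate lengths tie exactly on the opposite line. The following minimal Python sketch (not part of the article; a unit cylinder and illustrative names are assumed) shows this.

```python
# A minimal numerical sketch (not from the article), assuming a unit cylinder:
# from the base point (theta = 0, z = 0), a point (theta, z) is reached by unrolled
# geodesics winding either way around; they tie exactly when theta = pi, so that
# opposite line is the cut locus of the base point.
import math

def geodesic_lengths(theta, z):
    one_way = math.hypot(theta, z)                    # winding in one direction
    other_way = math.hypot(2 * math.pi - theta, z)    # winding the other way
    return one_way, other_way

for theta in (math.pi - 0.1, math.pi, math.pi + 0.1):
    a, b = geodesic_lengths(theta, 1.0)
    print(round(a, 4), round(b, 4), "tie" if abs(a - b) < 1e-12 else "unique minimizer")
```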
Let "X" be the boundary of a simple polygon in the Euclidean plane. Then the cut locus of "X" in the interior of the polygon is the polygon's medial axis. Points on the medial axis are centers of disks that touch the polygon boundary at two or more points, corresponding to two or more shortest paths to the disk center.
Let "x" be a point on the surface of a convex polyhedron "P". Then the cut locus of "x" on the polyhedron's surface is known as the ridge tree of "P" with respect to "x". This ridge tree has the property that cutting the surface along its edges unfolds "P" to a simple planar polygon. This polygon can be viewed as a net for the polyhedron.
Formal definition.
Fix a point formula_0 in a complete Riemannian manifold formula_1, and consider the tangent space formula_2. It is a standard result that for sufficiently small formula_3 in formula_4, the curve defined by the Riemannian exponential map, formula_5 for formula_6 belonging to the interval formula_7 is a minimizing geodesic, and is the unique minimizing geodesic connecting the two endpoints. Here formula_8 denotes the exponential map from formula_0. The cut locus of formula_0 in the tangent space is defined to be the set of all vectors formula_3 in formula_2 such that formula_9 is a minimizing geodesic for formula_10 but fails to be minimizing for formula_11 for every formula_12. Thus the cut locus in the tangent space is the boundary of the set
formula_13
where formula_14 denotes the length metric of formula_15, and formula_16 is the Euclidean norm of formula_2. The cut locus of formula_0 in formula_15 is defined to be the image of the cut locus of formula_0 in the tangent space under the exponential map at formula_0. Thus, we may interpret the cut locus of formula_0 in formula_15 as the points in the manifold where the geodesics starting at formula_0 stop being minimizing.
The least distance from "p" to the cut locus is the injectivity radius at "p". On the open ball of this radius, the exponential map at "p" is a diffeomorphism from the tangent space to the manifold, and this is the largest such radius. The global injectivity radius is defined to be the infimum of the injectivity radius at "p", over all points of the manifold.
Characterization.
Suppose formula_17 is in the cut locus of formula_0 in formula_15. A standard result is that either (1) there is more than one minimizing geodesic joining formula_0 to formula_17, or (2) formula_0 and formula_17 are conjugate along some geodesic
which joins them. It is possible for both (1) and (2) to hold.
Applications.
The significance of the cut locus is that the distance function from a point formula_0 is smooth, except on the cut locus of formula_0 and formula_0 itself. In particular, it makes sense to take the gradient and Hessian of the distance function away from the cut locus and formula_0. This idea is used in the local Laplacian comparison theorem and the local Hessian comparison theorem. These are used in the proof of the local version of the Toponogov theorem, and many other important theorems in Riemannian geometry.
For the metric space of surface distances on a convex polyhedron, cutting the polyhedron along the cut locus produces a shape that can be unfolded flat into a plane, the source unfolding. The unfolding process can be performed continuously, as a blooming of the polyhedron. Analogous methods of cutting along the cut locus can be used to unfold higher-dimensional convex polyhedra as well.
Cut locus of a subset.
One can similarly define the cut locus of a submanifold of the Riemannian manifold, in terms of its normal exponential map.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "(M,g)"
},
{
"math_id": 2,
"text": "T_pM"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "T_p M"
},
{
"math_id": 5,
"text": "\\gamma(t) = \\exp_p(tv)"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "[0,1]"
},
{
"math_id": 8,
"text": "\\exp_p"
},
{
"math_id": 9,
"text": "\\gamma(t)=\\exp_p(tv)"
},
{
"math_id": 10,
"text": "t \\in [0,1]"
},
{
"math_id": 11,
"text": "t = 1 + \\varepsilon"
},
{
"math_id": 12,
"text": "\\varepsilon > 0"
},
{
"math_id": 13,
"text": "\\{ v\\in T_pM | d(\\exp_pv,p) = \\|v\\|\\}"
},
{
"math_id": 14,
"text": "d"
},
{
"math_id": 15,
"text": "M"
},
{
"math_id": 16,
"text": "\\|\\cdot\\|"
},
{
"math_id": 17,
"text": "q"
}
]
| https://en.wikipedia.org/wiki?curid=13664406 |
13665 | Hausdorff maximal principle | Mathematical result or axiom on order relations
In mathematics, the Hausdorff maximal principle is an alternate and earlier formulation of Zorn's lemma proved by Felix Hausdorff in 1914 (Moore 1982:168). It states that in any partially ordered set, every totally ordered subset is contained in a maximal totally ordered subset, where "maximal" is with respect to set inclusion.
In a partially ordered set, a totally ordered subset is also called a chain. Thus, the maximal principle says every chain in the set extends to a maximal chain.
The Hausdorff maximal principle is one of many statements equivalent to the axiom of choice over ZF (Zermelo–Fraenkel set theory without the axiom of choice). The principle is also called the Hausdorff maximality theorem or the Kuratowski lemma (Kelley 1955:33).
Statement.
The Hausdorff maximal principle states that, in any partially ordered set formula_0, every chain formula_1 (i.e., a totally ordered subset) is contained in a maximal chain formula_2 (i.e., a chain that is not contained in a strictly larger chain in formula_0). In general, there may be several maximal chains containing a given chain.
An equivalent form of the Hausdorff maximal principle is that in every partially ordered set, there exists a maximal chain. (Note if the set is empty, the empty subset is a maximal chain.)
This form follows from the original form since the empty set is a chain. Conversely, to deduce the original form from this form, consider the set formula_3 of all chains in formula_0 containing a given chain formula_1 in formula_0. Then formula_3 is partially ordered by set inclusion. Thus, by the maximal principle in the above form, formula_3 contains a maximal chain formula_4. Let formula_2 be the union of formula_4, which is a chain in formula_0 since a union of a totally ordered set of chains is a chain. Since formula_2 contains formula_1, it is an element of formula_3. Also, since any chain containing formula_2 is contained in formula_2 as formula_2 is a union, formula_2 is in fact a maximal element of formula_3; i.e., a maximal chain in formula_0.
The proof that the Hausdorff maximal principle is equivalent to Zorn's lemma is somewhat similar to this proof. Indeed, first assume Zorn's lemma. Since a union of a totally ordered set of chains is a chain, the hypothesis of Zorn's lemma (every chain has an upper bound) is satisfied for formula_3, and thus formula_3 contains a maximal element, that is, a maximal chain in formula_0.
Conversely, if the maximal principle holds, then formula_0 contains a maximal chain formula_2. By the hypothesis of Zorn's lemma, formula_2 has an upper bound formula_5 in formula_0. If formula_6, then formula_7 is a chain containing formula_2 and so by maximality, formula_8; i.e., formula_9 and so formula_10. formula_11
Examples.
If "A" is any collection of sets, the relation "is a proper subset of" is a strict partial order on "A". Suppose that "A" is the collection of all circular regions (interiors of circles) in the plane. One maximal totally ordered sub-collection of "A" consists of all circular regions with centers at the origin. Another maximal totally ordered sub-collection consists of all circular regions bounded by circles tangent from the right to the y-axis at the origin.
If (x0, y0) and (x1, y1) are two points of the plane formula_12, define (x0, y0) < (x1, y1) if y0 = y1 and x0 < x1. This is a partial ordering of formula_12 under which two points are comparable only if they lie on the same horizontal line. The maximal totally ordered sets are horizontal lines in formula_12.
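In a finite poset no choice principle is needed, and a maximal chain can be found by greedy extension. The following minimal Python sketch (not part of the article; names are illustrative) does this for the subsets of {1, 2, 3} ordered by inclusion.

```python
# A minimal sketch (not part of the article): a maximal chain in a *finite* poset
# found by greedily adjoining any element comparable with everything chosen so far.
from itertools import chain, combinations

universe = {1, 2, 3}
elements = [frozenset(s) for s in chain.from_iterable(
    combinations(sorted(universe), r) for r in range(len(universe) + 1))]

def comparable_with_all(x, current_chain):
    """True if x is comparable (by inclusion) with every member of the chain."""
    return all(x <= c or c <= x for c in current_chain)

maximal_chain = []
extended = True
while extended:                      # stop once no element can be adjoined
    extended = False
    for x in elements:
        if x not in maximal_chain and comparable_with_all(x, maximal_chain):
            maximal_chain.append(x)
            extended = True
            break

print(sorted(maximal_chain, key=len))   # a maximal chain: {} < {1} < {1,2} < {1,2,3}
```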
Application.
By the Hausdorff maximal principle, we can show every Hilbert space formula_13 contains a maximal orthonormal subset formula_14 as follows. (This fact can be stated as saying that formula_15 as Hilbert spaces.)
Let formula_0 be the set of all orthonormal subsets of the given Hilbert space formula_13, which is partially ordered by set inclusion. It is nonempty as it contains the empty set and thus by the maximal principle, it contains a maximal chain formula_16. Let formula_14 be the union of formula_16. We shall show it is a maximal orthonormal subset. First, if formula_17 are in formula_16, then either formula_18 or formula_19. That is, any given two distinct elements in formula_14 are contained in some formula_20 in formula_16 and so they are orthogonal to each other (and of course, formula_14 is a subset of the unit sphere in formula_13). Second, if formula_21 for some formula_22 in formula_0, then formula_22 cannot be in formula_16 and so formula_23 is a chain strictly larger than formula_16, a contradiction. formula_11
For the purpose of comparison, here is a proof of the same fact by Zorn's lemma. As above, let formula_0 be the set of all orthonormal subsets of formula_13. If formula_16 is a chain in formula_0, then the union of formula_16 is also orthonormal by the same argument as above and so is an upper bound of formula_16. Thus, by Zorn's lemma, formula_0 contains a maximal element formula_14. (So, the difference is that the maximal principle gives a maximal chain while Zorn's lemma gives a maximal element directly.)
Proof.
The idea of the proof is essentially due to Zermelo and is to prove the following weak form of Zorn's lemma, from the axiom of choice.
Let formula_24 be a nonempty set of subsets of some fixed set, ordered by set inclusion, such that (1) the union of each totally ordered subset of formula_24 is in formula_24 and (2) each subset of a set in formula_24 is in formula_24. Then formula_24 has a maximal element.
(Zorn's lemma itself also follows from this weak form.) The maximal principle follows from the above since the set of all chains in formula_0 satisfies the above conditions.
By the axiom of choice, we have a function formula_25 such that formula_26 for the power set formula_27 of formula_0.
For each formula_28, let formula_29 be the set of all formula_30 such that formula_31 is in formula_24. If formula_32, then let formula_8. Otherwise, let
formula_33
Note formula_2 is a maximal element if and only if formula_8. Thus, we are done if we can find a formula_2 such that formula_8.
Fix a formula_1 in formula_24. We call a subset formula_34 a "tower (over formula_1)" if (1) formula_1 is in formula_35; (2) the union of each totally ordered subset formula_36 is in formula_35, where "totally ordered" is with respect to set inclusion; and (3) for each formula_2 in formula_35, formula_37 is in formula_35.
There exists at least one tower; indeed, the set of all sets in formula_24 containing formula_1 is a tower. Let formula_38 be the intersection of all towers, which is again a tower.
Now, we shall show formula_38 is totally ordered. We say a set formula_2 is "comparable in formula_38" if for each formula_14 in formula_38, either formula_39 or formula_40. Let formula_41 be the set of all sets in formula_38 that are comparable in formula_38. We claim formula_41 is a tower. The conditions 1. and 2. are straightforward to check. For 3., let formula_2 in formula_41 be given and then let formula_42 be the set of all formula_14 in formula_38 such that either formula_39 or formula_43.
We claim formula_42 is a tower. The conditions 1. and 2. are again straightforward to check. For 3., let formula_14 be in formula_42. If formula_39, then since formula_2 is comparable in formula_38, either formula_44 or formula_45. In the first case, formula_46 is in formula_42. In the second case, we have formula_47, which implies either formula_48 or formula_49. (This is the moment we needed to collapse a set to an element by the axiom of choice to define formula_46.) Either way, we have formula_46 is in formula_42. Similarly, if formula_40, we see formula_46 is in formula_42. Hence, formula_42 is a tower. Now, since formula_50 and formula_38 is the intersection of all towers, formula_51, which implies formula_37 is comparable in formula_38; i.e., is in formula_41. This completes the proof of the claim that formula_41 is a tower.
Finally, since formula_41 is a tower contained in formula_38, we have formula_52, which means formula_38 is totally ordered.
Let formula_2 be the union of formula_38. By 2., formula_2 is in formula_38 and then by 3., formula_53 is in formula_38. Since formula_2 is the union of formula_38, formula_54 and thus formula_55. formula_11
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "C_0"
},
{
"math_id": 2,
"text": "C"
},
{
"math_id": 3,
"text": "P'"
},
{
"math_id": 4,
"text": "C'"
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "y \\ge x"
},
{
"math_id": 7,
"text": "\\widetilde{C} = C \\cup \\{ y \\}"
},
{
"math_id": 8,
"text": "\\widetilde{C} = C"
},
{
"math_id": 9,
"text": "y \\in C"
},
{
"math_id": 10,
"text": "y = x"
},
{
"math_id": 11,
"text": "\\square"
},
{
"math_id": 12,
"text": "\\mathbb{R}^{2}"
},
{
"math_id": 13,
"text": "H"
},
{
"math_id": 14,
"text": "A"
},
{
"math_id": 15,
"text": "H \\simeq \\ell^2(A)"
},
{
"math_id": 16,
"text": "Q"
},
{
"math_id": 17,
"text": "S, T"
},
{
"math_id": 18,
"text": "S \\subset T"
},
{
"math_id": 19,
"text": "T \\subset S"
},
{
"math_id": 20,
"text": "S"
},
{
"math_id": 21,
"text": "B \\supsetneq A"
},
{
"math_id": 22,
"text": "B"
},
{
"math_id": 23,
"text": "Q \\cup \\{ B \\}"
},
{
"math_id": 24,
"text": "F"
},
{
"math_id": 25,
"text": "f : \\mathfrak{P}(P) - \\{ \\emptyset \\} \\to P"
},
{
"math_id": 26,
"text": "f(S) \\in S"
},
{
"math_id": 27,
"text": "\\mathfrak{P}(P)"
},
{
"math_id": 28,
"text": "C \\in F"
},
{
"math_id": 29,
"text": "C^*"
},
{
"math_id": 30,
"text": "x \\in P - C"
},
{
"math_id": 31,
"text": "C \\cup \\{ x \\}"
},
{
"math_id": 32,
"text": "C^* = \\emptyset"
},
{
"math_id": 33,
"text": "\\widetilde{C} = C \\cup \\{ f(C^*) \\}."
},
{
"math_id": 34,
"text": "T \\subset F"
},
{
"math_id": 35,
"text": "T"
},
{
"math_id": 36,
"text": "T' \\subset T"
},
{
"math_id": 37,
"text": "\\widetilde{C}"
},
{
"math_id": 38,
"text": "T_0"
},
{
"math_id": 39,
"text": "A \\subset C"
},
{
"math_id": 40,
"text": "C \\subset A"
},
{
"math_id": 41,
"text": "\\Gamma"
},
{
"math_id": 42,
"text": "U"
},
{
"math_id": 43,
"text": "\\widetilde{C} \\subset A"
},
{
"math_id": 44,
"text": "\\widetilde{A} \\subset C"
},
{
"math_id": 45,
"text": "C \\subset \\widetilde{A} "
},
{
"math_id": 46,
"text": "\\widetilde{A}"
},
{
"math_id": 47,
"text": "A \\subset C \\subset \\widetilde{A}"
},
{
"math_id": 48,
"text": "A = C"
},
{
"math_id": 49,
"text": "C = \\widetilde{A}"
},
{
"math_id": 50,
"text": "U \\subset T_0"
},
{
"math_id": 51,
"text": "U = T_0"
},
{
"math_id": 52,
"text": "T_0 = \\Gamma"
},
{
"math_id": 53,
"text": "\\widetilde C"
},
{
"math_id": 54,
"text": "\\widetilde C \\subset C"
},
{
"math_id": 55,
"text": "\\widetilde C = C"
}
]
| https://en.wikipedia.org/wiki?curid=13665 |
13666570 | Epstein frame | Device to measure magnetic properties
An Epstein frame or Epstein square is a standardised measurement device for measuring the magnetic properties of soft magnetic materials, especially used for testing of electrical steels.
The International Standard for the measurement configuration and conditions are defined by the standard IEC 60404-2:2008 Magnetic materials - Part 2: Methods of measurement of the magnetic properties of electrical steel sheet and strip by means of an Epstein frame published by International Electrotechnical Commission.
An Epstein frame comprises a primary and a secondary winding. The sample under test should be prepared as a set of strips (the number always a multiple of four) cut from electrical steel sheet or ribbon. Each layer of the sample is double-lapped in the corners and weighted down with a force of 1 N.
The power losses are measured by means of a wattmeter method in which the primary current and secondary voltage are used. During the measurement, the Epstein frame behaves as an unloaded transformer.
Power loss, "Pc", is calculated as:
formula_0
where:
formula_1 is the number of turns of the primary winding,
formula_2 is the number of turns of the secondary winding,
formula_3 is the reading of the wattmeter in watts,
formula_4 is the total resistance of the instruments in the secondary circuit in ohms, and
formula_5 is the average secondary voltage in volts.
Specific power loss, Ps, is calculated as:
formula_6
where:
formula_7 is the length of the sample in metres,
formula_8 is the average magnetic path length = 0.94 m (a constant value), and
formula_9 is the mass of the sample in kilograms.
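Both formulas are straightforward to evaluate; the following minimal Python sketch (not part of the standard; all numerical values are made-up but plausible readings) implements them.

```python
# A minimal sketch (not part of the standard; the numbers are invented) evaluating
# the two formulas above.
def core_loss(n1, n2, p_m, u2_avg, r_i):
    """Total power loss P_c in watts, from the wattmeter and secondary-voltage readings."""
    return (n1 / n2) * p_m - (1.111 * abs(u2_avg)) ** 2 / r_i

def specific_loss(p_c, length, mass, l_m=0.94):
    """Specific power loss P_s in W/kg for strips of given length (m) and mass (kg)."""
    return p_c * 4 * length / (mass * l_m)

p_c = core_loss(n1=700, n2=700, p_m=1.25, u2_avg=50.0, r_i=10e6)   # hypothetical readings
print(round(p_c, 4), round(specific_loss(p_c, length=0.28, mass=0.5), 3))   # W, W/kg
```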
If all conditions are as defined in the standard, the standard deviation of the reproducibility of the values is not greater than 1.5% up to 1.5 T for non-oriented electrical steel and up to 1.7 T for grain-oriented electrical steel.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "P_c = \\frac {N_1}{N_2} \\cdot P_m - \\frac {\\left( 1,111 \\cdot |\\bar{U_2}| \\right)^2}{R_i} "
},
{
"math_id": 1,
"text": "N_1~"
},
{
"math_id": 2,
"text": "N_2~"
},
{
"math_id": 3,
"text": "P_m~"
},
{
"math_id": 4,
"text": "R_i~"
},
{
"math_id": 5,
"text": "|\\bar{U_2}|"
},
{
"math_id": 6,
"text": "P_s = \\frac {P_c \\cdot 4 \\cdot l}{m \\cdot l_m}"
},
{
"math_id": 7,
"text": "l~"
},
{
"math_id": 8,
"text": "l_m~"
},
{
"math_id": 9,
"text": "m~"
}
]
| https://en.wikipedia.org/wiki?curid=13666570 |
13666685 | Partial current | In electrochemistry, partial current is defined as the electric current associated with (anodic or cathodic) half of the electrode reaction.
Depending on the electrode half-reaction, one can distinguish two types of partial current: the cathodic partial current, associated with the reduction (cathodic) half-reaction, and the anodic partial current, associated with the oxidation (anodic) half-reaction.
The cathodic and anodic partial currents are defined by IUPAC.
The partial current densities ("ic" and "ia") are the ratios of partial currents respect to the electrode areas ("Ac" and "Aa"):
"ic = Ic/Ac"
"ia = Ia/Aa"
The sum of the cathodic partial current density "ic" (positive) and the anodic partial current density "ia" (negative) gives the net current density "i":
"i = ic + ia"
In the case of the cathodic partial current density being equal in magnitude (and opposite in sign) to the anodic partial current density (for example, in a corrosion process), the net current density on the electrode is zero:
"ieq = ic,eq + ia,eq = 0"
When more than one reaction occurs on an electrode simultaneously, the total electrode current can be expressed as:
formula_0
where the index "formula_1" refers to the particular reactions. | [
{
"math_id": 0,
"text": "I = \\Sigma I_{a,j} + \\Sigma I_{c,j}"
},
{
"math_id": 1,
"text": "j"
}
]
| https://en.wikipedia.org/wiki?curid=13666685 |
13667880 | Synchronizing word | In computer science, more precisely, in the theory of deterministic finite automata (DFA), a synchronizing word or reset sequence is a word in the input alphabet of the DFA that sends any state of the DFA to one and the same state. That is, if an ensemble of copies of the DFA are each started in different states, and all of the copies process the synchronizing word, they will all end up in the same state. Not every DFA has a synchronizing word; for instance, a DFA with two states, one for words of even length and one for words of odd length, can never be synchronized.
Existence.
Given a DFA, the problem of determining if it has a synchronizing word can be solved in polynomial time using a theorem due to Ján Černý. A simple approach considers the power set of states of the DFA, and builds a directed graph where nodes belong to the power set, and a directed edge describes the action of the transition function. A path from the node of all states to a singleton state shows the existence of a synchronizing word. This algorithm is exponential in the number of states. A polynomial algorithm results however, due to a theorem of Černý that exploits the substructure of the problem, and shows that a synchronizing word exists if and only if every pair of states has a synchronizing word.
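A minimal Python sketch (not part of the article; names are illustrative) of the pairwise criterion: the DFA has a synchronizing word if and only if every pair of states can be driven to a single state by some word, which can be checked by a breadth-first search over pairs of states.

```python
# A minimal sketch (not from the article) of the pairwise criterion: a DFA has a
# synchronizing word iff every pair of states can be collapsed to one state.
from itertools import combinations

def has_synchronizing_word(states, alphabet, delta):
    """delta maps (state, letter) -> state for a complete DFA."""
    for p, q in combinations(states, 2):
        seen, frontier, merged = {(p, q)}, [(p, q)], False
        while frontier and not merged:
            new_frontier = []
            for a, b in frontier:
                for letter in alphabet:
                    a2, b2 = delta[a, letter], delta[b, letter]
                    if a2 == b2:
                        merged = True
                        break
                    if (a2, b2) not in seen and (b2, a2) not in seen:
                        seen.add((a2, b2))
                        new_frontier.append((a2, b2))
                if merged:
                    break
            frontier = new_frontier
        if not merged:
            return False
    return True

# Černý's four-state automaton: 'a' rotates the cycle 0 -> 1 -> 2 -> 3 -> 0, while
# 'b' sends state 3 to 0 and fixes the rest; its shortest reset word has length (4-1)^2 = 9.
states = [0, 1, 2, 3]
delta = {(i, 'a'): (i + 1) % 4 for i in states}
delta.update({(i, 'b'): 0 if i == 3 else i for i in states})
print(has_synchronizing_word(states, 'ab', delta))   # True
```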
Length.
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in computer science:
If a DFA with formula_0 states has a synchronizing word, must it have one of length at most formula_1?
The problem of estimating the length of synchronizing words has a long history and was posed independently by several authors, but it is commonly known as the Černý conjecture. In 1969, Ján Černý conjectured that ("n" − 1)2 is the upper bound for the length of the shortest synchronizing word for any "n"-state complete DFA (a DFA with complete state transition graph). If this is true, it would be tight: in his 1964 paper, Černý exhibited a class of automata (indexed by the number "n" of states) for which the shortest reset words have this length. The best upper bound known is 0.1654"n"3, far from the lower bound.
For "n"-state DFAs over a "k"-letter input alphabet, an algorithm by David Eppstein finds a synchronizing word of length at most 11"n"3/48 + O("n"2), and runs in time complexity O("n"3+"kn"2). This algorithm does not always find the shortest possible synchronizing word for a given automaton; as Eppstein also shows, the problem of finding the shortest synchronizing word is NP-complete. However, for a special class of automata in which all state transitions preserve the cyclic order of the states, he describes a different algorithm with time O("kn"2) that always finds the shortest synchronizing word, proves that these automata always have a synchronizing word of length at most ("n" − 1)2 (the bound given in Černý's conjecture), and exhibits examples of automata with this special form whose shortest synchronizing word has length exactly ("n" − 1)2.
Road coloring.
The road coloring problem is the problem of labeling the edges of a regular directed graph with the symbols of a "k"-letter input alphabet (where "k" is the outdegree of each vertex) in order to form a synchronizable DFA. It was conjectured in 1970 by Benjamin Weiss and Roy Adler that any strongly connected and aperiodic regular digraph can be labeled in this way; their conjecture was proven in 2007 by Avraham Trahtman.
Related: transformation semigroups.
A transformation semigroup is "synchronizing" if it contains an element of rank 1, that is, an element whose image is of cardinality 1. A DFA corresponds to a transformation semigroup with a distinguished generator set.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "(n-1)^2"
}
]
| https://en.wikipedia.org/wiki?curid=13667880 |
1366807 | Colors of noise | Power spectrum of a noise signal
In audio engineering, electronics, physics, and many other fields, the color of noise or noise spectrum refers to the power spectrum of a noise signal (a signal produced by a stochastic process). Different colors of noise have significantly different properties. For example, as audio signals they will sound different to human ears, and as images they will have a visibly different texture. Therefore, each application typically requires noise of a specific color. This sense of 'color' for noise signals is similar to the concept of timbre in music (which is also called "tone color"; however, the latter is almost always used for sound, and may consider detailed features of the spectrum).
The practice of naming kinds of noise after colors started with white noise, a signal whose spectrum has equal power within any equal interval of frequencies. That name was given by analogy with white light, which was (incorrectly) assumed to have such a flat power spectrum over the visible range. Other color names, such as "pink", "red", and "blue" were then given to noise with other spectral profiles, often (but not always) in reference to the color of light with similar spectra. Some of those names have standard definitions in certain disciplines, while others are informal and poorly defined. Many of these definitions assume a signal with components at all frequencies, with a power spectral density per unit of bandwidth proportional to 1/"f" "β" and hence they are examples of "power-law noise". For instance, the spectral density of white noise is flat ("β" = 0), while flicker or pink noise has "β" = 1, and Brownian noise has "β" = 2. Blue noise has "β" = -1.
Technical definitions.
Various noise models are employed in analysis, many of which fall under the above categories. AR noise or "autoregressive noise" is such a model, and generates simple examples of the above noise types, and more. The Federal Standard 1037C Telecommunications Glossary defines white, pink, blue, and black noise.
The color names for these different types of sounds are derived from a loose analogy between the spectrum of frequencies of sound wave present in the sound (as shown in the blue diagrams) and the equivalent spectrum of light wave frequencies. That is, if the sound wave pattern of "blue noise" were translated into light waves, the resulting light would be blue, and so on.
White noise.
White noise is a signal (or process), named by analogy to white light, with a flat frequency spectrum when plotted as a linear function of frequency (e.g., in Hz). In other words, the signal has equal power in any band of a given bandwidth (power spectral density) when the bandwidth is measured in Hz. For example, with a white noise audio signal, the range of frequencies between 40 Hz and 60 Hz contains the same amount of sound power as the range between 400 Hz and 420 Hz, since both intervals are 20 Hz wide. Note that spectra are often plotted with a logarithmic frequency axis rather than a linear one, in which case equal physical widths on the printed or displayed plot do not all have the same bandwidth, with the same physical width covering more Hz at higher frequencies than at lower frequencies. In this case a white noise spectrum that is equally sampled in the logarithm of frequency (i.e., equally sampled on the X axis) will slope upwards at higher frequencies rather than being flat. However it is not unusual in practice for spectra to be calculated using linearly-spaced frequency samples but plotted on a logarithmic frequency axis, potentially leading to misunderstandings and confusion if the distinction between equally spaced linear frequency samples and equally spaced logarithmic frequency samples is not kept in mind.
Pink noise.
The frequency spectrum of pink noise is linear in logarithmic scale; it has equal power in bands that are proportionally wide. This means that pink noise would have equal power in the frequency range from 40 to 60 Hz as in the band from 4000 to 6000 Hz. Since humans hear in such a proportional space, where a doubling of frequency (an octave) is perceived the same regardless of actual frequency (40–60 Hz is heard as the same interval and distance as 4000–6000 Hz), every octave contains the same amount of energy and thus pink noise is often used as a reference signal in audio engineering. The spectral power density, compared with white noise, decreases by 3.01 dB per octave (density proportional to 1/"f" ). For this reason, pink noise is often called "1/"f" noise".
Since there are an infinite number of logarithmic bands at both the low frequency (DC) and high frequency ends of the spectrum, any finite energy spectrum must have less energy than pink noise at both ends. Pink noise is the only power-law spectral density that has this property: all steeper power-law spectra are finite if integrated to the high-frequency end, and all flatter power-law spectra are finite if integrated to the DC, low-frequency limit.
Brownian noise.
Brownian noise, also called Brown noise, is noise with a power density which decreases 6.02 dB per octave with increasing frequency (frequency density proportional to 1/"f"2) over a frequency range excluding zero (DC). It is also called "red noise", with pink being between red and white.
Brownian noise can be generated with temporal integration of white noise. "Brown" noise is not named for a power spectrum that suggests the color brown; rather, the name derives from Brownian motion, also known as "random walk" or "drunkard's walk".
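A minimal NumPy sketch (not part of the article) of this integration recipe:

```python
# A minimal NumPy sketch (not from the article): Brownian (red) noise obtained by
# cumulatively summing, i.e. discretely integrating, white Gaussian noise.
import numpy as np

rng = np.random.default_rng(0)
white = rng.standard_normal(2**16)
brown = np.cumsum(white)          # power spectral density falls off as 1/f**2
```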
Blue noise.
Blue noise is also called azure noise. Blue noise's power density increases formula_0 3.01 dB per octave with increasing frequency (density proportional to "f" ) over a finite frequency range. In computer graphics, the term "blue noise" is sometimes used more loosely as any noise with minimal low frequency components and no concentrated spikes in energy. This can be good noise for dithering. Retinal cells are arranged in a blue-noise-like pattern which yields good visual resolution.
Cherenkov radiation is a naturally occurring example of almost perfect blue noise, with the power density growing linearly with frequency over spectrum regions where the permeability and index of refraction of the medium are approximately constant. The exact density spectrum is given by the Frank–Tamm formula. In this case, the finiteness of the frequency range comes from the finiteness of the range over which a material can have a refractive index greater than unity. Cherenkov radiation also appears as a bright blue color, for these reasons.
Violet noise.
Violet noise is also called purple noise. Violet noise's power density increases 6.02 dB per octave with increasing frequency (density proportional to "f" 2) over a finite frequency range. It is also known as differentiated white noise, due to its being the result of the differentiation of a white noise signal. GPS acceleration errors are one measured example: "The spectral analysis shows that GPS acceleration errors seem to be violet noise processes. They are dominated by high-frequency noise."
Due to the diminished sensitivity of the human ear to high-frequency hiss and the ease with which white noise can be electronically differentiated (high-pass filtered at first order), many early adaptations of dither to digital audio used violet noise as the dither signal.
Acoustic thermal noise of water has a violet spectrum, causing it to dominate hydrophone measurements at high frequencies. "Predictions of the thermal noise spectrum, derived from classical statistical mechanics, suggest increasing noise with frequency with a positive slope of 6.02 dB octave−1." "Note that thermal noise increases at the rate of 20 dB decade−1"
Grey noise.
Grey noise is random white noise subjected to a psychoacoustic equal loudness curve (such as an inverted A-weighting curve) over a given range of frequencies, giving the listener the perception that it is equally loud at all frequencies. This is in contrast to standard white noise which has equal strength over a linear scale of frequencies but is not perceived as being equally loud due to biases in the human equal-loudness contour.
Velvet noise.
Velvet noise is a sparse sequence of random positive and negative impulses. Velvet noise is typically characterised by its density in taps/second. At high densities it sounds similar to white noise; however, it is perceptually "smoother". The sparse nature of velvet noise allows for efficient time-domain convolution, making velvet noise particularly useful for applications where computational resources are limited, like real-time reverberation algorithms. Velvet noise is also frequently used in decorrelation filters.
Informal definitions.
There are also many colors used without precise definitions (or as synonyms for formally defined colors), sometimes with multiple definitions.
Noisy white.
In telecommunication, the term noisy white has the following meanings:
Noisy black.
In telecommunication, the term noisy black has the following meanings:
Generation.
Colored noise can be computer-generated by first generating a white noise signal, Fourier-transforming it, then multiplying the amplitudes of the different frequency components with a frequency-dependent function. Matlab programs are available to generate power-law colored noise in one or any number of dimensions.
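As a rough illustration of that procedure, the following Python sketch (the function name, normalisation and exponent convention are choices made here for the example, not a reference to any particular published program) shapes white Gaussian noise into power-law noise by rescaling its Fourier amplitudes:
import numpy as np

def powerlaw_noise(n_samples, exponent, rng=None):
    # Generate noise whose power spectral density behaves like 1/f**exponent:
    # exponent 0 gives white, 1 pink, 2 Brownian, -1 blue and -2 violet noise.
    rng = np.random.default_rng() if rng is None else rng
    white = rng.standard_normal(n_samples)      # flat-spectrum starting signal
    spectrum = np.fft.rfft(white)               # one-sided Fourier transform
    freqs = np.fft.rfftfreq(n_samples)          # 0 ... 0.5 cycles per sample
    freqs[0] = freqs[1]                         # avoid dividing by zero at DC
    spectrum *= freqs ** (-exponent / 2.0)      # amplitude ~ f**(-exponent/2)
    colored = np.fft.irfft(spectrum, n=n_samples)
    return colored / colored.std()              # normalise to unit variance

pink = powerlaw_noise(2**16, exponent=1.0)      # 1/f noise
violet = powerlaw_noise(2**16, exponent=-2.0)   # f**2 noise
The exponent is halved when applied to the amplitudes because the power spectral density is proportional to the squared amplitude.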
Identification of Power Law Frequency Noise.
Identifying the dominant noise type in a time series has many applications including clock stability analysis and market forecasting. There are two algorithms based on autocorrelation functions that can identify the dominant noise type in a data set provided the noise type has a power law spectral density.
Lag(1) Autocorrelation Method (Non-Overlapped).
The first method for noise identification is based on a paper by W. J. Riley and C. A. Greenhall. First, the lag(1) autocorrelation function is computed and checked to see if it is less than one third (the threshold for a stationary process):
formula_2
where formula_3 is the number of data points in the time series, formula_4 are the phase or frequency values, and formula_5 is the average value of the time series. If used for clock stability analysis, the formula_4 values are the non-overlapped (or binned) averages of the original frequency or phase array for some averaging time and factor. Now discrete-time fractionally integrated noises have power spectral densities of the form formula_6 which are stationary for formula_7. The value of formula_8 is calculated using formula_9:
formula_10
where formula_9 is the lag(1) autocorrelation function defined above. If formula_11 then the first differences of the adjacent time series data are taken formula_12 times until formula_7. The power law for the stationary noise process is calculated from the calculated formula_8 and the number of times the data has been differenced to achieve formula_7 as follows:
formula_13
where formula_14 is the power of the frequency noise which can be rounded to identify the dominant noise type (for frequency data formula_14 is the power of the frequency noise but for phase data the power of the frequency noise is formula_15).
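A minimal Python sketch of this method (the function names, the limit on the number of differencing passes, and the test signals below are illustrative assumptions) might look as follows:
import numpy as np

def lag1_autocorrelation(z):
    d = np.asarray(z, dtype=float)
    d = d - d.mean()
    return np.sum(d[:-1] * d[1:]) / np.sum(d * d)

def identify_noise_power(z, max_diffs=4):
    # Returns the power p of the dominant power-law noise in frequency data,
    # following the lag(1) autocorrelation method sketched above.
    z = np.asarray(z, dtype=float)
    d = 0                                   # number of first-difference passes
    while d <= max_diffs:
        r1 = lag1_autocorrelation(z)
        delta = r1 / (1.0 + r1)
        if delta < 0.25:                    # stationary: stop differencing
            return -2.0 * (delta + d)
        z = np.diff(z)                      # difference the data and try again
        d += 1
    raise ValueError("series did not become stationary")

rng = np.random.default_rng(0)
white = rng.standard_normal(100_000)
print(round(identify_noise_power(white)))             # close to 0 (white noise)
print(round(identify_noise_power(np.cumsum(white))))  # close to -2 (random walk)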
Lag(m) Autocorrelation Method (Overlapped).
This method improves on the accuracy of the previous method and was introduced by Z. Chunlei, Z. Qi, and Y. Shuhuana. Instead of the lag(1) autocorrelation function, the lag(m) autocorrelation function is computed:
formula_16
where formula_17 is the "lag" or shift between the time series and the delayed version of itself. A major difference is that formula_4 are now the averaged values of the original time series computed with a moving window average and averaging factor also equal to formula_17. The value of formula_8 is computed the same way as in the previous method and formula_7 is again the criterion for a stationary process. The other major difference between this and the previous method is that the differencing used to make the time series stationary (formula_7) is done between values that are spaced a distance formula_17 apart:
formula_18
The value of the power is calculated the same as the previous method as well.
References.
| [
{
"math_id": 0,
"text": "10\\log_{10}2 = "
},
{
"math_id": 1,
"text": "f_{\\text{max}} \\approx 3627 \\times {\\text{M}_\\odot \\over \\text{M}}"
},
{
"math_id": 2,
"text": "R_1 = \\frac{\\frac{1}{N}\\sum_{t=1}^{N-1}(z_t - \\bar z)*(z_{t+1} - \\bar z)}\n{\\frac{1}{N}\\sum_{t=1}^{N}(z_t - \\bar z)}"
},
{
"math_id": 3,
"text": "N "
},
{
"math_id": 4,
"text": "z_t "
},
{
"math_id": 5,
"text": "\\bar z "
},
{
"math_id": 6,
"text": "(2sin(\\pi f))^{-2\\delta} "
},
{
"math_id": 7,
"text": "\\delta < .25 "
},
{
"math_id": 8,
"text": "\\delta "
},
{
"math_id": 9,
"text": "R_1 "
},
{
"math_id": 10,
"text": "\\delta = \\frac{R_1}{1+R_1} "
},
{
"math_id": 11,
"text": "\\delta > .25 "
},
{
"math_id": 12,
"text": "d "
},
{
"math_id": 13,
"text": "p = -2(\\delta + d) "
},
{
"math_id": 14,
"text": "p "
},
{
"math_id": 15,
"text": "p+2 "
},
{
"math_id": 16,
"text": "R_m = \\frac{\\frac{1}{N}\\sum_{t=1}^{N-m}(z_t - \\bar z)*(z_{t+m} - \\bar z)}\n{\\frac{1}{N}\\sum_{t=1}^{N}(z_t - \\bar z)}"
},
{
"math_id": 17,
"text": "m"
},
{
"math_id": 18,
"text": "z_1 = z_{1+m}-z_1, z_2 = z_{2+m} - z_2..., z_{N-m} = z_N - z_{N-m} "
}
]
| https://en.wikipedia.org/wiki?curid=1366807 |
13668903 | Jensen hierarchy | Concept in mathematics
In set theory, a mathematical discipline, the Jensen hierarchy or J-hierarchy is a modification of Gödel's constructible hierarchy, L, that circumvents certain technical difficulties that exist in the constructible hierarchy. The J-Hierarchy figures prominently in fine structure theory, a field pioneered by Ronald Jensen, for whom the Jensen hierarchy is named. Rudimentary functions describe a method for iterating through the Jensen hierarchy.
Definition.
As in the definition of "L", let Def("X") be the collection of sets definable with parameters over "X":
formula_0
The constructible hierarchy, formula_1 is defined by transfinite recursion. In particular, at successor ordinals, formula_2.
The difficulty with this construction is that each of the levels is not closed under the formation of unordered pairs; for a given formula_3, the set formula_4 will not be an element of formula_5, since it is not a subset of formula_6.
However, formula_6 does have the desirable property of being closed under Σ0 separation.
Jensen's modification of the L hierarchy retains this property and the slightly weaker condition that formula_7, but is also closed under pairing. The key technique is to encode hereditarily definable sets over formula_8 by codes; then formula_9 will contain all sets whose codes are in formula_8.
Like formula_6, formula_8 is defined recursively. For each ordinal formula_10, we define formula_11 to be a universal formula_12 predicate for formula_8. We encode hereditarily definable sets as formula_13, with formula_14. Then set formula_15 and finally, formula_16.
Properties.
Each sublevel "J""α", "n" is transitive and contains all ordinals less than or equal to "αω" + "n". The sequence of sublevels is strictly ⊆-increasing in "n", since a Σ"m" predicate is also Σ"n" for any "n" > "m". The levels "J""α" will thus be transitive and strictly ⊆-increasing as well, and are also closed under pairing, formula_17-comprehension and transitive closure. Moreover, they have the property that
formula_18
as desired. (Or a bit more generally, formula_19.)
The levels and sublevels are themselves Σ1 uniformly definable (i.e. the definition of "J""α", "n" in "J""β" does not depend on "β"), and have a uniform Σ1 well-ordering. Also, the levels of the Jensen hierarchy satisfy a condensation lemma much like the levels of Gödel's original hierarchy.
For any formula_8, considering any formula_12 relation on formula_8, there is a Skolem function for that relation that is itself definable by a formula_12 formula.
Rudimentary functions.
A rudimentary function is a Vn→V function (i.e. a finitary function accepting sets as arguments) that can be obtained from the following operations:
For any set "M" let rud("M") be the smallest set containing "M"∪{"M"} closed under the rudimentary functions. Then the Jensen hierarchy satisfies "J"α+1 = rud("J"α).
Projecta.
Jensen defines formula_20, the formula_12 projectum of formula_10, as the largest formula_21 such that formula_22 is amenable for all formula_23, and the formula_24 projectum of formula_10 is defined similarly. One of the main results of fine structure theory is that formula_20 is also the largest formula_25 such that not every formula_26 subset of formula_27 is (in the terminology of α-recursion theory) formula_10-finite.
Lerman defines the formula_28 projectum of formula_10 to be the largest formula_25 such that not every formula_28 subset of formula_29 is formula_10-finite, where a set is formula_28 if it is the image of a function formula_30 expressible as formula_31 where formula_32 is formula_10-recursive. In a Jensen-style characterization, formula_33 projectum of formula_10 is the largest formula_21 such that there is an formula_33 epimorphism from formula_29 onto formula_10. There exists an ordinal formula_10 whose formula_34 projectum is formula_35, but whose formula_28 projectum is formula_10 for all natural formula_36.
References.
| [
{
"math_id": 0,
"text": "\\textrm{Def}(X) := \\{ \\{y \\in X \\mid \\Phi(y,z_1,...,z_n) \\text{ is true in } (X,\\in)\\} \\mid \\Phi \\text{ is a first order formula}, z_1, ..., z_n\\in X\\}"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "L_{\\alpha+1} = \\textrm{Def}(L_\\alpha)"
},
{
"math_id": 3,
"text": "x, y \\in\nL_{\\alpha+1} \\setminus L_\\alpha"
},
{
"math_id": 4,
"text": "\\{x,y\\}"
},
{
"math_id": 5,
"text": "L_{\\alpha+1}"
},
{
"math_id": 6,
"text": "L_\\alpha"
},
{
"math_id": 7,
"text": "J_{\\alpha+1} \\cap \\mathcal P(J_{\\alpha}) = \\textrm{Def}(J_{\\alpha})"
},
{
"math_id": 8,
"text": "J_\\alpha"
},
{
"math_id": 9,
"text": "J_{\\alpha+1}"
},
{
"math_id": 10,
"text": "\\alpha"
},
{
"math_id": 11,
"text": "W^{\\alpha}_n"
},
{
"math_id": 12,
"text": "\\Sigma_n"
},
{
"math_id": 13,
"text": "X_{\\alpha}(n+1, e) = \\{X_\\alpha(n, f) \\mid W^{\\alpha}_{n+1}(e, f)\\}"
},
{
"math_id": 14,
"text": "X_{\\alpha}(0, e) = e"
},
{
"math_id": 15,
"text": "J_{\\alpha,n} := \\{X_\\alpha(n, e) \\mid e \\in J_\\alpha\\}"
},
{
"math_id": 16,
"text": "J_{\\alpha+1} := \\bigcup_{n \\in \\omega} J_{\\alpha, n}"
},
{
"math_id": 17,
"text": "\\Delta_0"
},
{
"math_id": 18,
"text": "J_{\\alpha+1} \\cap \\mathcal P(J_\\alpha) = \\text{Def}(J_\\alpha),"
},
{
"math_id": 19,
"text": "L_{\\omega+\\alpha}=J_{1+\\alpha}\\cap V_{\\omega+\\alpha}"
},
{
"math_id": 20,
"text": "\\rho_\\alpha^n"
},
{
"math_id": 21,
"text": "\\beta\\leq\\alpha"
},
{
"math_id": 22,
"text": "(J_\\beta,A)"
},
{
"math_id": 23,
"text": "A\\in\\Sigma_n(J_\\alpha)\\cap\\mathcal P(J_\\beta)"
},
{
"math_id": 24,
"text": "\\Delta_n"
},
{
"math_id": 25,
"text": "\\gamma"
},
{
"math_id": 26,
"text": "\\Sigma_n(J_\\alpha)"
},
{
"math_id": 27,
"text": "\\omega\\gamma"
},
{
"math_id": 28,
"text": "S_n"
},
{
"math_id": 29,
"text": "\\beta"
},
{
"math_id": 30,
"text": "f(x)"
},
{
"math_id": 31,
"text": "\\lim_{y_1}\\lim_{y_2}\\ldots\\lim_{y_n}g(x,y_1,y_2,\\ldots,y_n)"
},
{
"math_id": 32,
"text": "g"
},
{
"math_id": 33,
"text": "S_3"
},
{
"math_id": 34,
"text": "\\Delta_3"
},
{
"math_id": 35,
"text": "\\omega"
},
{
"math_id": 36,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=13668903 |
1367040 | Lambert series | Mathematical term
In mathematics, a Lambert series, named for Johann Heinrich Lambert, is a series taking the form
formula_0
It can be resummed formally by expanding the denominator:
formula_1
where the coefficients of the new series are given by the Dirichlet convolution of "a""n" with the constant function 1("n") = 1:
formula_2
This series may be inverted by means of the Möbius inversion formula, and is an example of a Möbius transform.
Examples.
Since this last sum is a typical number-theoretic sum, almost any natural multiplicative function will be exactly summable when used in a Lambert series. Thus, for example, one has
formula_3
where formula_4 is the number of positive divisors of the number "n".
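As a quick numerical sanity check of this identity (a throwaway sketch; the truncation order and the helper names are arbitrary), the coefficients of the Lambert series can be compared against the divisor counts directly:
def divisor_count(n):
    return sum(1 for d in range(1, n + 1) if n % d == 0)

def lambert_coefficients(a, N):
    # Coefficients b_1..b_N of sum_n a(n) q^n/(1 - q^n) = sum_m b_m q^m,
    # i.e. b_m = sum over divisors n of m of a(n) (the Dirichlet convolution a * 1).
    b = [0] * (N + 1)
    for n in range(1, N + 1):
        for m in range(n, N + 1, n):        # q^n/(1 - q^n) = q^n + q^(2n) + ...
            b[m] += a(n)
    return b[1:]

N = 20
lhs = lambert_coefficients(lambda n: 1, N)            # a_n = 1 for every n
rhs = [divisor_count(m) for m in range(1, N + 1)]     # sigma_0(m)
assert lhs == rhs
print(lhs[:10])   # [1, 2, 2, 3, 2, 4, 2, 4, 3, 4]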
For the higher order sum-of-divisor functions, one has
formula_5
where formula_6 is any complex number and
formula_7
is the divisor function. In particular, for formula_8, the Lambert series one gets is
formula_9
which is (up to the factor of formula_10) the logarithmic derivative of the usual generating function for partition numbers
formula_11
Additional Lambert series related to the previous identity include those for variants of the Möbius function formula_12 given below:
formula_13
Related Lambert series over the Möbius function include the following identities for any
prime formula_14:
formula_15
formula_16
The proof of the first identity above follows from a multi-section (or bisection) identity of these
Lambert series generating functions in the following form where we denote
formula_17 to be the Lambert series generating function of the arithmetic function "f":
formula_18
For Euler's totient function formula_19:
formula_20
For Von Mangoldt function formula_21:
formula_22
For Liouville's function formula_23:
formula_24
with the sum on the right similar to the Ramanujan theta function, or Jacobi theta function formula_25. Note that Lambert series in which the "a""n" are trigonometric functions, for example, "a""n" = sin(2"n" "x"), can be evaluated by various combinations of the logarithmic derivatives of Jacobi theta functions.
Generally speaking, we can extend the previous generating function expansion by letting formula_26 denote the characteristic function of the formula_27 powers, formula_28, for positive natural numbers formula_29 and defining the generalized "m"-Liouville lambda function to be the arithmetic function satisfying formula_30. This definition of formula_31 clearly implies that formula_32, which in turn shows that
formula_33
We also have a slightly more generalized Lambert series expansion generating the sum of squares function formula_34 in the form of
formula_35
In general, if we write the Lambert series over formula_36 which generates the arithmetic functions formula_37, the next pairs of functions correspond to other well-known convolutions expressed by their Lambert series generating functions in the forms of
formula_38
where formula_39 is the multiplicative identity for Dirichlet convolutions, formula_40 is the identity function for formula_41 powers, formula_42 denotes the characteristic function for the squares, formula_43 which counts the number of distinct prime factors of formula_44 (see prime omega function), formula_45 is Jordan's totient function, and formula_46 is the divisor function (see Dirichlet convolutions).
The conventional use of the letter "q" in the summations is a historical usage, referring to its origins in the theory of elliptic curves and theta functions, as the nome.
Alternate form.
Substituting formula_47 one obtains another common form for the series, as
formula_48
where
formula_49
as before. Examples of Lambert series in this form, with formula_50, occur in expressions for the Riemann zeta function for odd integer values; see Zeta constants for details.
Current usage.
In the literature we find "Lambert series" applied to a wide variety of sums. For example, since formula_51 is a polylogarithm function, we may refer to any sum of the form
formula_52
as a Lambert series, assuming that the parameters are suitably restricted. Thus
formula_53
which holds for all complex "q" not on the unit circle, would be considered a Lambert series identity. This identity follows in a straightforward fashion from some identities published by the Indian mathematician S. Ramanujan. A very thorough exploration of Ramanujan's works can be found in the works by Bruce Berndt.
Factorization theorems.
A somewhat newer construction recently published over 2017–2018 relates to so-termed "Lambert series factorization theorems" of the form
formula_54
where formula_55 is the respective sum or difference of the
restricted partition functions formula_56 which denote the number of formula_57's in all partitions of formula_44 into an "even" (respectively, "odd") number of distinct parts. Let formula_58 denote the invertible lower triangular sequence whose first few values are shown in the table below.
Another characteristic form of the Lambert series factorization theorem expansions is given by
formula_59
where formula_60 is the (infinite) q-Pochhammer symbol. The invertible matrix products on the right-hand-side of the previous equation correspond to inverse matrix products whose lower triangular entries are given in terms of the partition function and the Möbius function by the divisor sums
formula_61
The next table lists the first several rows of these corresponding inverse matrices.
We let formula_62 denote the sequence of interleaved pentagonal numbers, i.e., so that the pentagonal number theorem is expanded in the form of
formula_63
Then for any Lambert series formula_64 generating the sequence of formula_65, we have the corresponding inversion relation of the factorization theorem expanded above given by
formula_66
This work on Lambert series factorization theorems is extended to more general expansions of the form
formula_67
where formula_68 is any (partition-related) reciprocal generating function, formula_69 is any arithmetic function, and where the
modified coefficients are expanded by
formula_70
The corresponding inverse matrices in the above expansion satisfy
formula_71
so that as in the first variant of the Lambert factorization theorem above we obtain an inversion relation for the right-hand-side coefficients of the form
formula_72
Recurrence relations.
Within this section we define the following functions for natural numbers formula_73:
formula_74
formula_75
We also adopt the notation from the previous section that
formula_76
where formula_60 is the infinite q-Pochhammer symbol. Then we have the following recurrence relations involving these functions and the pentagonal numbers:
formula_77
formula_78
Derivatives.
Derivatives of a Lambert series can be obtained by differentiation of the series termwise with respect to formula_10. We have the following identities for the termwise formula_79 derivatives of a Lambert series for any formula_80
formula_81
formula_82
where the bracketed triangular coefficients in the previous equations denote the Stirling numbers of the first and second kinds.
We also have the next identity for extracting the individual coefficients of the terms implicit to the previous expansions given in the form of
formula_83
Now if we define the functions formula_84 for any formula_85 by
formula_86
where formula_87 denotes Iverson's convention, then we have the coefficients for the formula_88 derivatives of a Lambert series
given by
formula_89
Of course, by a typical argument purely by operations on formal power series we also have that
formula_90
References.
| [
{
"math_id": 0,
"text": "S(q)=\\sum_{n=1}^\\infty a_n \\frac {q^n}{1-q^n}."
},
{
"math_id": 1,
"text": "S(q)=\\sum_{n=1}^\\infty a_n \\sum_{k=1}^\\infty q^{nk} = \\sum_{m=1}^\\infty b_m q^m "
},
{
"math_id": 2,
"text": "b_m = (a*1)(m) = \\sum_{n\\mid m} a_n. \\,"
},
{
"math_id": 3,
"text": "\\sum_{n=1}^\\infty q^n \\sigma_0(n) = \\sum_{n=1}^\\infty \\frac{q^n}{1-q^n}"
},
{
"math_id": 4,
"text": "\\sigma_0(n)=d(n)"
},
{
"math_id": 5,
"text": "\\sum_{n=1}^\\infty q^n \\sigma_\\alpha(n) = \\sum_{n=1}^\\infty \\frac{n^\\alpha q^n}{1-q^n}"
},
{
"math_id": 6,
"text": "\\alpha"
},
{
"math_id": 7,
"text": "\\sigma_\\alpha(n) = (\\textrm{Id}_\\alpha*1)(n) = \\sum_{d\\mid n} d^\\alpha \\,"
},
{
"math_id": 8,
"text": "\\alpha = 1"
},
{
"math_id": 9,
"text": "q \\frac{F'(q)}{F(q)}"
},
{
"math_id": 10,
"text": "q"
},
{
"math_id": 11,
"text": "F(q) := \\frac{1}{\\phi(q)} = \\sum_{k=0}^\\infty p(k) q^k = \\prod_{n=1}^\\infty \\frac{1}{1-q^n}."
},
{
"math_id": 12,
"text": "\\mu(n)"
},
{
"math_id": 13,
"text": "\\sum_{n=1}^\\infty \\mu(n)\\,\\frac{q^n}{1-q^n} = q."
},
{
"math_id": 14,
"text": "\\alpha \\in \\mathbb{Z}^{+}"
},
{
"math_id": 15,
"text": "\n\\sum_{n \\geq 1} \\frac{\\mu(n) q^n}{1+q^n} = q-2q^2\n"
},
{
"math_id": 16,
"text": "\n\\sum_{n \\geq 1} \\frac{\\mu(\\alpha n) q^n}{1-q^n} = -\\sum_{n \\geq 0} q^{\\alpha^n}. \n"
},
{
"math_id": 17,
"text": "L_{f}(q) := q"
},
{
"math_id": 18,
"text": "\n\n\\begin{align}\n\\sum_{n \\geq 1} \\frac{f(n) q^n}{1+q^n} & = \\sum_{n \\geq 1} \\frac{f(n)q^{n}}{1-q^{n}} - \n \\sum_{n \\geq 1} \\frac{2 f(n) q^{2n}}{1-q^{2n}} \\\\ \n & = \n L_f(q) - 2 \\cdot L_f(q^2). \n\\end{align} \n"
},
{
"math_id": 19,
"text": "\\varphi(n)"
},
{
"math_id": 20,
"text": "\\sum_{n=1}^\\infty \\varphi(n)\\,\\frac{q^n}{1-q^n} = \\frac{q}{(1-q)^2}."
},
{
"math_id": 21,
"text": "\\Lambda(n)"
},
{
"math_id": 22,
"text": "\\sum_{n=1}^\\infty \\Lambda(n)\\,\\frac{q^n}{1-q^n} = \\sum_{n=1}^{\\infty} \\log(n)q^n"
},
{
"math_id": 23,
"text": "\\lambda(n)"
},
{
"math_id": 24,
"text": "\\sum_{n=1}^\\infty \\lambda(n)\\,\\frac{q^n}{1-q^n} = \n\\sum_{n=1}^\\infty q^{n^2}"
},
{
"math_id": 25,
"text": "\\vartheta_3(q)"
},
{
"math_id": 26,
"text": "\\chi_m(n)"
},
{
"math_id": 27,
"text": "m^{th}"
},
{
"math_id": 28,
"text": "n = k^m \\in \\mathbb{Z}^{+}"
},
{
"math_id": 29,
"text": "m > 2"
},
{
"math_id": 30,
"text": "\\chi_m(n) := (1 \\ast \\lambda_m)(n)"
},
{
"math_id": 31,
"text": "\\lambda_m(n)"
},
{
"math_id": 32,
"text": "\\lambda_m(n) = \\sum_{d^m|n} \\mu\\left(\\frac{n}{d^m}\\right)"
},
{
"math_id": 33,
"text": "\\sum_{n \\geq 1} \\frac{\\lambda_m(n) q^n}{1-q^n} = \\sum_{n \\geq 1} q^{n^m},\\ \\text{ for } m \\geq 2."
},
{
"math_id": 34,
"text": "r_2(n)"
},
{
"math_id": 35,
"text": "\\sum_{n=1}^{\\infty} \\frac{4 \\cdot (-1)^{n+1} q^{2n+1}}{1-q^{2n+1}} = \\sum_{m=1}^{\\infty} r_2(m) q^m."
},
{
"math_id": 36,
"text": "f(n)"
},
{
"math_id": 37,
"text": "g(m) = (f \\ast 1)(m)"
},
{
"math_id": 38,
"text": "(f, g) = (\\mu, \\varepsilon), (\\varphi, \\operatorname{Id}_1), (\\lambda, \\chi_{\\operatorname{sq}}), (\\Lambda, \\log), \n (|\\mu|, 2^{\\omega}), (J_t, \\operatorname{Id}_t), (d^3, (d \\ast 1)^2), "
},
{
"math_id": 39,
"text": "\\varepsilon(n) = \\delta_{n,1}"
},
{
"math_id": 40,
"text": "\\operatorname{Id}_k(n) = n^k"
},
{
"math_id": 41,
"text": "k^{th}"
},
{
"math_id": 42,
"text": "\\chi_{\\operatorname{sq}}"
},
{
"math_id": 43,
"text": "\\omega(n)"
},
{
"math_id": 44,
"text": "n"
},
{
"math_id": 45,
"text": "J_t"
},
{
"math_id": 46,
"text": "d(n) = \\sigma_0(n)"
},
{
"math_id": 47,
"text": "q=e^{-z}"
},
{
"math_id": 48,
"text": "\\sum_{n=1}^\\infty \\frac {a_n}{e^{zn}-1}= \\sum_{m=1}^\\infty b_m e^{-mz}"
},
{
"math_id": 49,
"text": "b_m = (a*1)(m) = \\sum_{d\\mid m} a_d\\,"
},
{
"math_id": 50,
"text": "z=2\\pi"
},
{
"math_id": 51,
"text": "q^n/(1 - q^n ) = \\mathrm{Li}_0(q^{n})"
},
{
"math_id": 52,
"text": "\\sum_{n=1}^{\\infty} \\frac{\\xi^n \\,\\mathrm{Li}_u (\\alpha q^n)}{n^s} = \\sum_{n=1}^{\\infty} \\frac{\\alpha^n \\,\\mathrm{Li}_s(\\xi q^n)}{n^u}"
},
{
"math_id": 53,
"text": "12\\left(\\sum_{n=1}^{\\infty} n^2 \\, \\mathrm{Li}_{-1}(q^n)\\right)^{\\!2} = \\sum_{n=1}^{\\infty} \nn^2 \\,\\mathrm{Li}_{-5}(q^n) -\n\\sum_{n=1}^{\\infty} n^4 \\, \\mathrm{Li}_{-3}(q^n),"
},
{
"math_id": 54,
"text": "\\sum_{n \\geq 1} \\frac{a_n q^n}{1\\pm q^n} = \\frac{1}{(\\mp q; q)_{\\infty}} \\sum_{n \\geq 1} \\left((s_o(n, k) \\pm s_e(n, k)) a_k\\right) q^n, "
},
{
"math_id": 55,
"text": "s_o(n, k) \\pm s_e(n, k) = [q^n] (\\mp q; q)_{\\infty} \\frac{q^k}{1 \\pm q^k}"
},
{
"math_id": 56,
"text": "s_{e/o}(n, k)"
},
{
"math_id": 57,
"text": "k"
},
{
"math_id": 58,
"text": "s_{n,k} := s_e(n, k) - s_o(n, k) = [q^n] (q; q)_{\\infty} \\frac{q^k}{1-q^k}"
},
{
"math_id": 59,
"text": "L_f(q) := \\sum_{n \\geq 1} \\frac{f(n) q^n}{1-q^n} = \\frac{1}{(q; q)_{\\infty}} \\sum_{n \\geq 1} \\left(s_{n,k} f(k)\\right) q^n, "
},
{
"math_id": 60,
"text": "(q; q)_{\\infty}"
},
{
"math_id": 61,
"text": "s_{n,k}^{(-1)} = \\sum_{d|n} p(d-k) \\mu\\left(\\frac{n}{d}\\right)"
},
{
"math_id": 62,
"text": "G_j := \\frac{1}{2} \\left\\lceil \\frac{j}{2} \\right\\rceil \\left\\lceil \\frac{3j+1}{2} \\right\\rceil"
},
{
"math_id": 63,
"text": "(q; q)_{\\infty} = \\sum_{n \\geq 0} (-1)^{\\left\\lceil \\frac{n}{2} \\right\\rceil} q^{G_n}. "
},
{
"math_id": 64,
"text": "L_f(q)"
},
{
"math_id": 65,
"text": "g(n) = (f \\ast 1)(n)"
},
{
"math_id": 66,
"text": "f(n) = \\sum_{k=1}^n \\sum_{d|n} p(d-k) \\mu(n/d) \\times \\sum_{j: k-G_j > 0} (-1)^{\\left\\lceil \\frac{j}{2} \\right\\rceil} b(k-G_j)."
},
{
"math_id": 67,
"text": "\\sum_{n \\geq 1} \\frac{a_n q^n}{1-q^n} = \\frac{1}{C(q)} \\sum_{n \\geq 1} \\left(\\sum_{k=1}^n s_{n,k}(\\gamma) \\widetilde{a}_k(\\gamma)\\right) q^n, "
},
{
"math_id": 68,
"text": "C(q)"
},
{
"math_id": 69,
"text": "\\gamma(n)"
},
{
"math_id": 70,
"text": "\\widetilde{a}_k(\\gamma) = \\sum_{d|k} \\sum_{r| \\frac{k}{d}} a_d \\gamma(r). "
},
{
"math_id": 71,
"text": "s_{n,k}^{(-1)}(\\gamma) = \\sum_{d|n} [q^{d-k}] \\frac{1}{C(q)} \\gamma\\left(\\frac{n}{d}\\right), "
},
{
"math_id": 72,
"text": "\\widetilde{a}_k(\\gamma) = \\sum_{k=1}^{n} s_{n,k}^{(-1)}(\\gamma) \\times [q^k]\\left(\\sum_{d=1}^k \\frac{a_d q^d}{1-q^d} C(q)\\right)."
},
{
"math_id": 73,
"text": "n,x \\geq 1"
},
{
"math_id": 74,
"text": "g_f(n) := (f \\ast 1)(n), "
},
{
"math_id": 75,
"text": "\\Sigma_f(x) := \\sum_{1 \\leq n \\leq x} g_f(n). "
},
{
"math_id": 76,
"text": "s_{n,k} = [q^n] (q; q)_{\\infty} \\frac{q^k}{1-q^k}, "
},
{
"math_id": 77,
"text": "g_f(n+1) = \\sum_{b = \\pm 1} \\sum_{k=1}^{\\left\\lfloor \\frac{\\sqrt{24n+1}-b}{6}\\right\\rfloor} \n(-1)^{k+1} g_f\\left(n+1-\\frac{k(3k+b)}{2}\\right) + \n\\sum_{k=1}^{n+1} s_{n+1,k} f(k), "
},
{
"math_id": 78,
"text": "\\Sigma_f(x+1) = \\sum_{b = \\pm 1} \\sum_{k=1}^{\\left\\lfloor \\frac{\\sqrt{24x+1}-b}{6}\\right\\rfloor} \n(-1)^{k+1} \\Sigma_f\\left(n+1-\\frac{k(3k+b)}{2}\\right) + \n\\sum_{n=0}^x \\sum_{k=1}^{n+1} s_{n+1,k} f(k). "
},
{
"math_id": 79,
"text": "s^{th}"
},
{
"math_id": 80,
"text": "s \\geq 1"
},
{
"math_id": 81,
"text": "q^s \\cdot D^{(s)}\\left[\\frac{q^i}{1-q^i}\\right] = \\sum_{m=0}^s \\sum_{k=0}^m \\left[\\begin{matrix} s \\\\ m\\end{matrix}\\right] \n \\left\\{\\begin{matrix} m \\\\ k\\end{matrix}\\right\\} \\frac{(-1)^{s-k} k! i^m}{(1-q^i)^{k+1}}"
},
{
"math_id": 82,
"text": "q^s \\cdot D^{(s)}\\left[\\frac{q^i}{1-q^i}\\right] = \\sum_{r=0}^s\\left[\\sum_{m=0}^s \\sum_{k=0}^m \\left[\\begin{matrix} s \\\\ m\\end{matrix}\\right] \n \\left\\{\\begin{matrix} m \\\\ k\\end{matrix}\\right\\} \\binom{s-k}{r} \\frac{(-1)^{s-k-r} k! i^m}{(1-q^i)^{k+1}}\\right] q^{(r+1)i},"
},
{
"math_id": 83,
"text": "[q^n]\\left(\\sum_{i \\geq t} \\frac{a_i q^{mi}}{(1-q^i)^{k+1}}\\right) = \\sum_{\\begin{matrix} d|n \\\\ t \\leq d \\leq \\left\\lfloor \\frac{n}{m} \\right\\rfloor\\end{matrix}} \n \\binom{\\frac{n}{d}-m+k}{k} a_d. "
},
{
"math_id": 84,
"text": "A_t(n)"
},
{
"math_id": 85,
"text": "n,t \\geq 1"
},
{
"math_id": 86,
"text": "A_t(n) := \\sum_{\\begin{matrix} 0 \\leq k \\leq m \\leq t \\\\ 0 \\leq r \\leq t\\end{matrix}} \\sum_{d|n} \\left[\\begin{matrix} t \\\\ m\\end{matrix}\\right] \n \\left\\{\\begin{matrix} m \\\\ k\\end{matrix}\\right\\} \\binom{t-k}{r} \\binom{\\frac{n}{d}-1-r+k}{k} (-1)^{t-k-r} k! d^m \\cdot a_d \\cdot \n \\left[t \\leq d \\leq \\left\\lfloor \\frac{n}{r+1} \\right\\rfloor\\right]_{\\delta}, "
},
{
"math_id": 87,
"text": "[\\cdot]_{\\delta}"
},
{
"math_id": 88,
"text": "t^{th}"
},
{
"math_id": 89,
"text": "\\begin{align}\nA_t(n) & = [q^n]\\left(q^t \\cdot D^{(t)}\\left[\\sum_{i \\geq t} \\frac{a_i q^i}{1-q^i}\\right]\\right) \\\\ \n & = [q^n]\\left(\\sum_{n \\geq 1} \\frac{(A_t \\ast \\mu)(n) q^n}{1-q^n}\\right). \n\\end{align} \n"
},
{
"math_id": 90,
"text": "[q^n]\\left(q^t \\cdot D^{(t)}\\left[\\sum_{i \\geq 1} \\frac{f(i) q^i}{1-q^i}\\right]\\right) = \\frac{n!}{(n-t)!} \\cdot (f \\ast 1)(n). "
}
]
| https://en.wikipedia.org/wiki?curid=1367040 |
13671081 | Marginal product of capital | Additional production per extra unit of capital
In economics, the marginal product of capital (MPK) is the additional output that a firm obtains when it adds an extra unit of capital, holding other inputs constant. It is a feature of the production function, alongside the labour input.
Definition.
The marginal product of capital (MPK) is the additional output resulting, ceteris paribus ("all things being equal"), from the use of an additional unit of physical capital, such as machines or buildings used by businesses.
The marginal product of capital (MPK) is the amount of extra output the firm gets from an extra unit of capital, holding the amount of labor constant:
formula_0
Thus, the marginal product of capital is the difference between the amount of output produced with K + 1 units of capital and that produced with only K units of capital.
Determining the marginal product of capital is essential when a firm is deciding whether or not to invest in an additional unit of capital. Increasing production is only beneficial if the MPK is higher than the cost of each additional unit of capital; otherwise, if the cost of capital is higher, the firm loses profit by adding extra units of physical capital. This concept equals the reciprocal of the incremental capital-output ratio. Mathematically, it is the partial derivative of the production function with respect to capital. If production output is formula_1, then
formula_2
Diminishing marginal returns.
One of the key assumptions in economics is diminishing returns; that is, the marginal product of capital is positive but decreasing in the level of the capital stock, or mathematically
formula_3
Graphically, this can be observed in the curve shown in the figure, which represents the effect of capital, K, on the output, Y. If the quantity of labor input, L, is held fixed, the slope of the curve at any point gives the marginal product of capital. At a low quantity of capital, such as point A, the slope is steeper than at point B, due to diminishing returns to capital. In other words, an additional unit of capital has diminishing productivity, since the increase in production becomes less and less significant as K rises.
Example.
Consider a furniture firm, in which labour input, that is, the number of employees, is given as fixed, and capital input is translated into the number of machines in one of its factories. If the firm has no machines, it produces zero pieces of furniture. With one machine in the factory, sixteen pieces of furniture are produced. With two machines, twenty-eight are built. However, as the number of machines available increases, the change in output becomes less and less significant compared to the previous increment. That fact can be observed in the marginal product, which begins to decrease: diminishing marginal returns. This is explained by the fact that there are not enough employees to work with the extra machines, so the value that these additional units bring to the company, in terms of output generated, starts to decrease.
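To put numbers on these definitions, the following sketch uses an illustrative Cobb-Douglas production function; the functional form, the parameter values and the fixed labour level are assumptions chosen only for this example.
def output(K, L, A=10.0, alpha=0.5):
    # Illustrative Cobb-Douglas production function Y = A * K**alpha * L**(1 - alpha).
    return A * K**alpha * L**(1 - alpha)

def mpk_discrete(K, L):
    # MPK as defined above: F(K + 1, L) - F(K, L).
    return output(K + 1, L) - output(K, L)

def mpk_derivative(K, L, A=10.0, alpha=0.5):
    # MPK as the partial derivative dY/dK = alpha * A * K**(alpha - 1) * L**(1 - alpha).
    return alpha * A * K**(alpha - 1) * L**(1 - alpha)

L_fixed = 25
for K in (1, 4, 9, 16):
    print(K, round(mpk_discrete(K, L_fixed), 2), round(mpk_derivative(K, L_fixed), 2))
Both columns of marginal products fall as K grows, which is the diminishing-returns behaviour described in the previous section.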
Rental rate of capital.
In a perfectly competitive market, a firm will continue to add capital until the point where the MPK is equal to the rental rate of capital, which is called the equilibrium point. This is why, in perfectly competitive capital markets, the price of capital can be seen as the rental rate. The price of capital is determined in the capital market by the respective capital demand and supply.
The marginal product of capital determines the real rental price of capital. The real interest rate, the depreciation rate, and the relative price of capital goods determine the cost of capital. According to the neoclassical model, firms invest if the rental price is greater than the cost of capital, and they disinvest if the rental price is less than the cost of capital.
MRPK, MCK and profit maximization.
It is only profitable for a firm to keep adding capital when the marginal revenue product of capital, MRPK (the change in total revenue, when there is a unit change of capital input, ∆TR/∆K) is higher than the marginal cost of capital, MCK (marginal cost of obtaining and utilizing a machine, for example). Thus, the profit of the firm will reach its maximum point when MRPK = MCK.
References.
| [
{
"math_id": 0,
"text": "MP_K = F(K + 1, L) - F(K, L)"
},
{
"math_id": 1,
"text": "Y = f(K,L)"
},
{
"math_id": 2,
"text": "MP_K = \\frac{\\text{change in }Y}{\\text{change in }K} = \\frac{\\partial f(K, L)}{\\partial K}"
},
{
"math_id": 3,
"text": "\\frac{\\partial}{\\partial K} MP_K = \\frac{\\partial^2 f(K, L)}{\\partial K^2} < 0 "
}
]
| https://en.wikipedia.org/wiki?curid=13671081 |
13672415 | Shrikhande graph | Undirected graph named after S. S. Shrikhande
In the mathematical field of graph theory, the Shrikhande graph is a graph discovered by S. S. Shrikhande in 1959. It is a strongly regular graph with 16 vertices and 48 edges, with each vertex having degree 6. Every pair of nodes has exactly two other neighbors in common, whether or not the pair of nodes is connected.
Construction.
The Shrikhande graph can be constructed as a Cayley graph. The vertex set is formula_0. Two vertices are adjacent if and only if the difference is in formula_1.
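This construction is easy to verify computationally; the following plain-Python sketch (variable names are arbitrary) builds the Cayley graph on formula_0 and confirms the strongly regular parameters discussed in the next section:
from itertools import product, combinations

# Vertices are the elements of Z_4 x Z_4; the connection set is
# {+-(1,0), +-(0,1), +-(1,1)}, with all arithmetic taken modulo 4.
vertices = list(product(range(4), repeat=2))
connection = {(1, 0), (3, 0), (0, 1), (0, 3), (1, 1), (3, 3)}

def adjacent(u, v):
    return ((u[0] - v[0]) % 4, (u[1] - v[1]) % 4) in connection

neighbours = {v: {w for w in vertices if adjacent(v, w)} for v in vertices}

# 16 vertices, each of degree 6, and every pair of distinct vertices
# (adjacent or not) has exactly 2 common neighbours.
assert len(vertices) == 16
assert all(len(nb) == 6 for nb in neighbours.values())
assert all(len(neighbours[u] & neighbours[v]) == 2
           for u, v in combinations(vertices, 2))
print("srg(16, 6, 2, 2) confirmed")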
Properties.
In the Shrikhande graph, any two vertices "I" and "J" have two distinct neighbors in common (excluding the two vertices "I" and "J" themselves), which holds true whether or not "I" is adjacent to "J". In other words, it is strongly regular and its parameters are: {16,6,2,2}, i.e., formula_2. This equality implies that the graph is associated with a symmetric BIBD. The Shrikhande graph shares these parameters with exactly one other graph, the 4×4 rook's graph, i.e., the line graph "L"("K"4,4) of the complete bipartite graph "K"4,4. The latter graph is the only line graph "L"("Kn,n") for which the strong regularity parameters do not determine that graph uniquely but are shared with a different graph, namely the Shrikhande graph (which is not a rook's graph).
The Shrikhande graph is locally hexagonal; that is, the neighbors of each vertex form a cycle of six vertices. As with any locally cyclic graph, the Shrikhande graph is the 1-skeleton of a Whitney triangulation of some surface; in the case of the Shrikhande graph, this surface is a torus in which each vertex is surrounded by six triangles. Thus, the Shrikhande graph is a toroidal graph. The embedding forms a regular map in the torus, with 32 triangular faces. The skeleton of the dual of this map (as embedded in the torus) is the Dyck graph, a cubic symmetric graph.
The Shrikhande graph is not a distance-transitive graph. It is the smallest distance-regular graph that is not distance-transitive.
The automorphism group of the Shrikhande graph is of order 192. It acts transitively on the vertices, on the edges and on the arcs of the graph. Therefore, the Shrikhande graph is a symmetric graph.
The characteristic polynomial of the Shrikhande graph is formula_3. Therefore, the Shrikhande graph is an integral graph: its spectrum consists entirely of integers.
It has book thickness 4 and queue number 3.
Notes.
| [
{
"math_id": 0,
"text": "\\mathbb{Z}_4 \\times \\mathbb{Z}_4"
},
{
"math_id": 1,
"text": "\\{\\pm( 1,0),\\pm(0,1),\\pm (1,1)\\}"
},
{
"math_id": 2,
"text": "\\lambda = \\mu = 2"
},
{
"math_id": 3,
"text": "(x-6)(x-2)^6(x+2)^9"
}
]
| https://en.wikipedia.org/wiki?curid=13672415 |
13673459 | Jet damping | Jet damping or thrust damping is the effect of rocket exhaust removing energy from the transverse angular motion of a rocket. If a rocket has pitch or yaw motion then the exhaust must be accelerated laterally as it flows down the exhaust tube and nozzle. Once the exhaust leaves the nozzle this lateral momentum is lost to the vehicle and thus serves to damp the lateral oscillations. The jet damping is stabilizing as long as the distance from the instantaneous spacecraft center of mass to the nozzle exit plane exceeds the instantaneous transverse radius of gyration. Most rocket or missile configurations meet this criterion and the jet damping has a dynamic stabilizing effect. The jet damping torque rotates at nutation frequency in the spacecraft frame.
The jet damping contributes to the pitch and yaw damping coefficients, formula_0 and formula_1, where formula_0 is the rate of change of pitching moment with respect to pitch rate and formula_1 is the rate of change of the yawing moment with respect to yaw rate. For jet airplanes in cruise, the contribution of jet damping is usually negligible because the external aerodynamic damping is large relative to the jet damping. Rockets at lift-off, however, have practically zero external aerodynamic damping and the jet damping becomes significant.
References.
| [
{
"math_id": 0,
"text": "Cm_q"
},
{
"math_id": 1,
"text": "Cn_r"
}
]
| https://en.wikipedia.org/wiki?curid=13673459 |
1367647 | Band sum | Method of connecting knots
In geometric topology, a band sum of two "n"-dimensional knots "K"1 and "K"2 along an ("n" + 1)-dimensional 1-handle "h" called a "band" is an "n"-dimensional knot "K" such that:
"K" is the "n"-dimensional knot obtained by this surgery.
A band sum is thus a generalization of the usual connected sum of knots. | [
{
"math_id": 0,
"text": "p_1\\in K_1"
},
{
"math_id": 1,
"text": "p_2\\in K_2"
},
{
"math_id": 2,
"text": "h"
},
{
"math_id": 3,
"text": "K_1\\sqcup K_2"
},
{
"math_id": 4,
"text": "p_1\\sqcup p_2"
}
]
| https://en.wikipedia.org/wiki?curid=1367647 |
13676913 | Orthostochastic matrix | Doubly stochastic matrix
In mathematics, an orthostochastic matrix is a doubly stochastic matrix whose entries are the squares of
the absolute values of the entries of some orthogonal matrix.
The detailed definition is as follows. A square matrix "B" of size "n" is doubly stochastic (or "bistochastic") if all its rows and columns sum to 1 and all its entries are nonnegative real numbers. It is orthostochastic if there exists an orthogonal matrix "O" such that
formula_0
All 2-by-2 doubly stochastic matrices are orthostochastic (and also unistochastic)
since for any
formula_1
we find the corresponding orthogonal matrix
formula_2
with
formula_3 such that
formula_4
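This can be checked numerically; the short NumPy sketch below (with the value of "a" picked arbitrarily) verifies that the entrywise squares of the rotation matrix reproduce the doubly stochastic matrix:
import numpy as np

a = 0.3                                    # any value in [0, 1]
B = np.array([[a, 1 - a],
              [1 - a, a]])                 # doubly stochastic

phi = np.arccos(np.sqrt(a))                # choose phi with cos(phi)**2 = a
O = np.array([[np.cos(phi),  np.sin(phi)],
              [-np.sin(phi), np.cos(phi)]])

assert np.allclose(O @ O.T, np.eye(2))     # O is orthogonal
assert np.allclose(O**2, B)                # entrywise squares give back B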
For larger "n" the set of bistochastic matrices includes the set of unistochastic matrices, which in turn includes the set of orthostochastic matrices, and these inclusion relations are proper.
{
"math_id": 0,
"text": " B_{ij}=O_{ij}^2 \\text{ for } i,j=1,\\dots,n. \\, "
},
{
"math_id": 1,
"text": "\nB= \\begin{bmatrix}\na & 1-a \\\\\n1-a & a \\end{bmatrix}\n"
},
{
"math_id": 2,
"text": "\nO = \\begin{bmatrix}\n\\cos \\phi & \\sin \\phi \\\\\n- \\sin \\phi & \\cos \\phi \\end{bmatrix},\n"
},
{
"math_id": 3,
"text": " \\cos^2 \\phi =a, "
},
{
"math_id": 4,
"text": " B_{ij}=O_{ij}^2 ."
}
]
| https://en.wikipedia.org/wiki?curid=13676913 |
1367789 | Monitor (synchronization) | Object or module in concurrent programming
In concurrent programming, a monitor is a synchronization construct that prevents threads from concurrently accessing a shared object's state and allows them to wait for the state to change. They provide a mechanism for threads to temporarily give up exclusive access in order to wait for some condition to be met, before regaining exclusive access and resuming their task. A monitor consists of a mutex (lock) and at least one condition variable. A condition variable is explicitly 'signalled' when the object's state is modified, temporarily passing the mutex to another thread 'waiting' on the conditional variable.
Another definition of monitor is a thread-safe class, object, or module that wraps around a mutex in order to safely allow access to a method or variable by more than one thread. The defining characteristic of a monitor is that its methods are executed with mutual exclusion: At each point in time, at most one thread may be executing any of its methods. By using one or more condition variables it can also provide the ability for threads to wait on a certain condition (thus using the above definition of a "monitor"). For the rest of this article, this sense of "monitor" will be referred to as a "thread-safe object/class/module".
Monitors were invented by Per Brinch Hansen and C. A. R. Hoare, and were first implemented in Brinch Hansen's Concurrent Pascal language.
Mutual exclusion.
While a thread is executing a method of a thread-safe object, it is said to "occupy" the object, by holding its mutex (lock). Thread-safe objects are implemented to enforce that "at each point in time, at most one thread may occupy the object". The lock, which is initially unlocked, is locked at the start of each public method, and is unlocked at each return from each public method.
Upon calling one of the methods, a thread must wait until no other thread is executing any of the thread-safe object's methods before starting execution of its method. Consider, for example, a shared bank-account object whose withdraw method checks the balance and, if it is sufficient, subtracts the requested amount and reports success. Without this mutual exclusion, two threads could cause money to be lost or gained for no reason. For example, two threads withdrawing 1000 from the account could both return true, while causing the balance to drop by only 1000, as follows: first, both threads fetch the current balance, find it greater than 1000, and subtract 1000 from it; then, both threads store the balance and return.
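A minimal sketch of such a thread-safe account object (written here in Python purely for illustration; the class and method names are assumptions, not a listing from any particular library) shows the pattern of taking the lock on entry to every public method:
import threading

class Account:
    # Illustrative monitor-style object: every public method holds the same
    # mutex for the whole of its execution.
    def __init__(self, balance=0):
        self._lock = threading.Lock()
        self._balance = balance

    def withdraw(self, amount):
        with self._lock:                   # lock on entry, unlock on return
            if self._balance >= amount:
                self._balance -= amount
                return True
            return False

    def deposit(self, amount):
        with self._lock:
            self._balance += amount
Because the balance is read and updated while the lock is held, two concurrent withdrawals can no longer interleave between the balance check and the update.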
Condition variables.
Problem statement.
For many applications, mutual exclusion is not enough. Threads attempting an operation may need to wait until some condition P holds true. A busy waiting loop
while not ( P ) do skip
will not work, as mutual exclusion will prevent any other thread from entering the monitor to make the condition true. Other "solutions" exist such as having a loop that unlocks the monitor, waits a certain amount of time, locks the monitor and checks for the condition P. Theoretically, it works and will not deadlock, but issues arise. It is hard to decide an appropriate amount of waiting time: too small and the thread will hog the CPU, too big and it will be apparently unresponsive. What is needed is a way to signal the thread when the condition P is true (or "could" be true).
Case study: classic bounded producer/consumer problem.
A classic concurrency problem is that of the bounded producer/consumer, in which there is a queue or ring buffer of tasks with a maximum size, with one or more threads being "producer" threads that add tasks to the queue, and one or more other threads being "consumer" threads that take tasks out of the queue. The queue is assumed to be non–thread-safe itself, and it can be empty, full, or between empty and full. Whenever the queue is full of tasks, then we need the producer threads to block until there is room from consumer threads dequeueing tasks. On the other hand, whenever the queue is empty, then we need the consumer threads to block until more tasks are available due to producer threads adding them.
As the queue is a concurrent object shared between threads, accesses to it must be made atomic, because the queue can be put into an inconsistent state during the course of the queue access that should never be exposed between threads. Thus, any code that accesses the queue constitutes a critical section that must be synchronized by mutual exclusion. If code and processor instructions in critical sections of code that access the queue could be interleaved by arbitrary context switches between threads on the same processor or by simultaneously-running threads on multiple processors, then there is a risk of exposing inconsistent state and causing race conditions.
Incorrect without synchronization.
A naïve approach is to design the code with busy-waiting and no synchronization, making the code subject to race conditions:
global RingBuffer queue; // A thread-unsafe ring-buffer of tasks.

// Method representing each producer thread's behavior:
public method producer() {
    while (true) {
        task myTask = ...; // Producer makes some new task to be added.
        while (queue.isFull()) {} // Busy-wait until the queue is non-full.
        queue.enqueue(myTask); // Add the task to the queue.
    }
}

// Method representing each consumer thread's behavior:
public method consumer() {
    while (true) {
        while (queue.isEmpty()) {} // Busy-wait until the queue is non-empty.
        myTask = queue.dequeue(); // Take a task off of the queue.
        doStuff(myTask); // Go off and do something with the task.
    }
}
This code has a serious problem in that accesses to the queue can be interrupted and interleaved with other threads' accesses to the queue. The "queue.enqueue" and "queue.dequeue" methods likely have instructions to update the queue's member variables such as its size, beginning and ending positions, assignment and allocation of queue elements, etc. In addition, the "queue.isEmpty()" and "queue.isFull()" methods read this shared state as well. If producer/consumer threads are allowed to be interleaved during the calls to enqueue/dequeue, then inconsistent state of the queue can be exposed leading to race conditions. In addition, if one consumer makes the queue empty in-between another consumer's exiting the busy-wait and calling "dequeue", then the second consumer will attempt to dequeue from an empty queue leading to an error. Likewise, if a producer makes the queue full in-between another producer's exiting the busy-wait and calling "enqueue", then the second producer will attempt to add to a full queue leading to an error.
Spin-waiting.
One naive approach to achieve synchronization, as alluded to above, is to use "spin-waiting", in which a mutex is used to protect the critical sections of code and busy-waiting is still used, with the lock being acquired and released in between each busy-wait check.
global RingBuffer queue; // A thread-unsafe ring-buffer of tasks.
global Lock queueLock; // A mutex for the ring-buffer of tasks.

// Method representing each producer thread's behavior:
public method producer() {
    while (true) {
        task myTask = ...; // Producer makes some new task to be added.
        queueLock.acquire(); // Acquire lock for initial busy-wait check.
        while (queue.isFull()) { // Busy-wait until the queue is non-full.
            queueLock.release();
            // Drop the lock temporarily to allow a chance for other threads
            // needing queueLock to run so that a consumer might take a task.
            queueLock.acquire(); // Re-acquire the lock for the next call to "queue.isFull()".
        }
        queue.enqueue(myTask); // Add the task to the queue.
        queueLock.release(); // Drop the queue lock until we need it again to add the next task.
    }
}

// Method representing each consumer thread's behavior:
public method consumer() {
    while (true) {
        queueLock.acquire(); // Acquire lock for initial busy-wait check.
        while (queue.isEmpty()) { // Busy-wait until the queue is non-empty.
            queueLock.release();
            // Drop the lock temporarily to allow a chance for other threads
            // needing queueLock to run so that a producer might add a task.
            queueLock.acquire(); // Re-acquire the lock for the next call to "queue.isEmpty()".
        }
        myTask = queue.dequeue(); // Take a task off of the queue.
        queueLock.release(); // Drop the queue lock until we need it again to take off the next task.
        doStuff(myTask); // Go off and do something with the task.
    }
}
This method assures that an inconsistent state does not occur, but wastes CPU resources due to the unnecessary busy-waiting. Even if the queue is empty and producer threads have nothing to add for a long time, consumer threads are always busy-waiting unnecessarily. Likewise, even if consumers are blocked for a long time on processing their current tasks and the queue is full, producers are always busy-waiting. This is a wasteful mechanism. What is needed is a way to make producer threads block until the queue is non-full, and a way to make consumer threads block until the queue is non-empty.
Condition variables.
The solution is to use condition variables. Conceptually a condition variable is a queue of threads, associated with a mutex, on which a thread may wait for some condition to become true. Thus each condition variable c is associated with an assertion Pc. While a thread is waiting on a condition variable, that thread is not considered to occupy the monitor, and so other threads may enter the monitor to change the monitor's state. In most types of monitors, these other threads may signal the condition variable c to indicate that assertion Pc is true in the current state.
Thus there are three main operations on condition variables: "wait" c, m, where c is a condition variable and m is its associated mutex, which atomically releases m, suspends the calling thread on c's wait queue, and re-acquires m when the thread is later woken; "signal" c (also called "notify"), which wakes one of the threads waiting on c, if any; and "broadcast" c (also called "notify all"), which wakes all threads waiting on c.
As a design rule, multiple condition variables can be associated with the same mutex, but not vice versa. (This is a one-to-many correspondence.) This is because the predicate Pc is the same for all threads using the monitor and must be protected with mutual exclusion from all other threads that might cause the condition to be changed or that might read it while the thread in question causes it to be changed, but there may be different threads that want to wait for a different condition on the same variable requiring the same mutex to be used. In the producer-consumer example described above, the queue must be protected by a unique mutex object, "queueLock". The "producer" threads will want to wait on a monitor using lock "queueLock" and a condition variable formula_0 which blocks until the queue is non-full. The "consumer" threads will want to wait on a different monitor using the same mutex "queueLock" but a different condition variable formula_1 which blocks until the queue is non-empty. It would (usually) never make sense to have different mutexes for the same condition variable, but this classic example shows why it certainly makes sense to have multiple condition variables using the same mutex. A mutex used by one or more condition variables (one or more monitors) may also be shared with code that does "not" use condition variables (and which simply acquires/releases it without any wait/signal operations), if those critical sections do not happen to require waiting for a certain condition on the concurrent data.
Monitor usage.
The proper basic usage of a monitor is:
acquire(m); // Acquire this monitor's lock.
while (!p) { // While the condition/predicate/assertion that we are waiting for is not true...
    wait(m, cv); // Wait on this monitor's lock and condition variable.
}
// ... Critical section of code goes here ...
signal(cv2); // Or: broadcast(cv2);
// cv2 might be the same as cv or different.
release(m); // Release this monitor's lock.
The following is the same pseudocode but with more verbose comments to better explain what is going on:
// ... (previous code)
// About to enter the monitor.
// Acquire the advisory mutex (lock) associated with the concurrent
// data that is shared between threads,
// to ensure that no two threads can be preemptively interleaved or
// run simultaneously on different cores while executing in critical
// sections that read or write this same concurrent data. If another
// thread is holding this mutex, then this thread will be put to sleep
// (blocked) and placed on m's sleep queue. (Mutex "m" shall not be
// a spin-lock.)
acquire(m);
// Now, we are holding the lock and can check the condition for the
// first time.
// The first time we execute the while loop condition after the above
// "acquire", we are asking, "Does the condition/predicate/assertion
// we are waiting for happen to already be true?"
while (!p()) // "p" is any expression (e.g. variable or
// function-call) that checks the condition and
// evaluates to boolean. This itself is a critical
// section, so you *MUST* be holding the lock when
// executing this "while" loop condition!
// If this is not the first time the "while" condition is being checked,
// then we are asking the question, "Now that another thread using this
// monitor has notified me and woken me up and I have been context-switched
// back to, did the condition/predicate/assertion we are waiting on stay
// true between the time that I was woken up and the time that I re-acquired
// the lock inside the "wait" call in the last iteration of this loop, or
// did some other thread cause the condition to become false again in the
// meantime thus making this a spurious wakeup?
// If this is the first iteration of the loop, then the answer is
// "no" -- the condition is not ready yet. Otherwise, the answer is:
// the latter. This was a spurious wakeup, some other thread occurred
// first and caused the condition to become false again, and we must
// wait again.
wait(m, cv);
// Temporarily prevent any other thread on any core from doing
// operations on m or cv.
// release(m) // Atomically release lock "m" so other
// // code using this concurrent data
// // can operate, move this thread to cv's
// // wait-queue so that it will be notified
// // sometime when the condition becomes
// // true, and sleep this thread. Re-enable
// // other threads and cores to do
// // operations on m and cv.
// Context switch occurs on this core.
// At some future time, the condition we are waiting for becomes
// true, and another thread using this monitor (m, cv) does either
// a signal that happens to wake this thread up, or a
// broadcast that wakes us up, meaning that we have been taken out
// of cv's wait-queue.
// During this time, other threads may cause the condition to
// become false again, or the condition may toggle one or more
// times, or it may happen to stay true.
// This thread is switched back to on some core.
// acquire(m) // Lock "m" is re-acquired.
// End this loop iteration and re-check the "while" loop condition to make
// sure the predicate is still true.
// The condition we are waiting for is true!
// We are still holding the lock, either from before entering the monitor or from
// the last execution of "wait".
// Critical section of code goes here, which has a precondition that our predicate
// must be true.
// This code might make cv's condition false, and/or make other condition variables'
// predicates true.
// Call signal or broadcast, depending on which condition variables'
// predicates (who share mutex m) have been made true or may have been made true,
// and the monitor semantic type being used.
for (cv_x in cvs_to_signal) {
    signal(cv_x); // Or: broadcast(cv_x);
    // One or more threads have been woken up but will block as soon as they try
    // to acquire m.
}
// Release the mutex so that notified thread(s) and others can enter their critical
// sections.
release(m);
Solving the bounded producer/consumer problem.
Having introduced the usage of condition variables, let us use them to revisit and solve the classic bounded producer/consumer problem. The classic solution is to use two monitors, comprising two condition variables sharing one lock on the queue:
global volatile RingBuffer queue; // A thread-unsafe ring-buffer of tasks.
global Lock queueLock; // A mutex for the ring-buffer of tasks. (Not a spin-lock.)
global CV queueEmptyCV; // A condition variable for consumer threads waiting for the queue to
// become non-empty. Its associated lock is "queueLock".
global CV queueFullCV; // A condition variable for producer threads waiting for the queue to
// become non-full. Its associated lock is also "queueLock".
// Method representing each producer thread's behavior:
public method producer() {
while (true) {
// Producer makes some new task to be added.
task myTask = ...;
// Acquire "queueLock" for the initial predicate check.
queueLock.acquire();
// Critical section that checks if the queue is non-full.
while (queue.isFull()) {
// Release "queueLock", enqueue this thread onto "queueFullCV" and sleep this thread.
wait(queueLock, queueFullCV);
// When this thread is awoken, re-acquire "queueLock" for the next predicate check.
}
// Critical section that adds the task to the queue (note that we are holding "queueLock").
queue.enqueue(myTask);
// Wake up one or all consumer threads that are waiting for the queue to be non-empty
// now that it is guaranteed, so that a consumer thread will take the task.
signal(queueEmptyCV); // Or: broadcast(queueEmptyCV);
// End of critical sections.
// Release "queueLock" until we need it again to add the next task.
queueLock.release();
}
}
// Method representing each consumer thread's behavior:
public method consumer() {
while (true) {
// Acquire "queueLock" for the initial predicate check.
queueLock.acquire();
// Critical section that checks if the queue is non-empty.
while (queue.isEmpty()) {
// Release "queueLock", enqueue this thread onto "queueEmptyCV" and sleep this thread.
wait(queueLock, queueEmptyCV);
// When this thread is awoken, re-acquire "queueLock" for the next predicate check.
}
// Critical section that takes a task off of the queue (note that we are holding "queueLock").
myTask = queue.dequeue();
// Wake up one or all producer threads that are waiting for the queue to be non-full
// now that it is guaranteed, so that a producer thread will add a task.
signal(queueFullCV); // Or: broadcast(queueFullCV);
// End of critical sections.
// Release "queueLock" until we need it again to take the next task.
queueLock.release();
// Go off and do something with the task.
doStuff(myTask);
}
}
This ensures concurrency between the producer and consumer threads sharing the task queue, and blocks the threads that have nothing to do rather than busy-waiting as shown in the aforementioned approach using spin-locks.
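For readers who want to see the same pattern in a real language, here is a minimal sketch using Python's threading module; it is an illustration rather than a definitive implementation, and the names CAPACITY, make_task and do_stuff are placeholders that do not appear in the pseudocode above. Two threading.Condition objects built over the same Lock play the roles of "queueFullCV" and "queueEmptyCV":
import threading
from collections import deque

CAPACITY = 10                                     # illustrative bound on the ring buffer
queue = deque()                                   # plain, thread-unsafe container
queue_lock = threading.Lock()                     # the single shared mutex
queue_full_cv = threading.Condition(queue_lock)   # producers wait here for "non-full"
queue_empty_cv = threading.Condition(queue_lock)  # consumers wait here for "non-empty"

def producer(make_task):
    while True:
        my_task = make_task()
        with queue_lock:                          # acquire for the predicate check
            while len(queue) >= CAPACITY:         # re-check after every wakeup
                queue_full_cv.wait()              # releases the lock while sleeping, re-acquires on wakeup
            queue.append(my_task)
            queue_empty_cv.notify()               # the queue is now guaranteed non-empty
        # lock released here, at the end of the "with" block

def consumer(do_stuff):
    while True:
        with queue_lock:
            while not queue:
                queue_empty_cv.wait()
            my_task = queue.popleft()
            queue_full_cv.notify()                # the queue is now guaranteed non-full
        do_stuff(my_task)                         # work outside the critical section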
A variant of this solution could use a single condition variable for both producers and consumers, perhaps named "queueFullOrEmptyCV" or "queueSizeChangedCV". In this case, more than one condition is associated with the condition variable, such that the condition variable represents a weaker condition than the conditions being checked by individual threads. The condition variable represents threads that are waiting for the queue to be non-full "and" ones waiting for it to be non-empty. However, doing this would require using "broadcast" in all the threads using the condition variable and cannot use a regular "signal". This is because the regular "signal" might wake up a thread of the wrong type whose condition has not yet been met, and that thread would go back to sleep without a thread of the correct type getting signaled. For example, a producer might make the queue full and wake up another producer instead of a consumer, and the woken producer would go back to sleep. In the complementary case, a consumer might make the queue empty and wake up another consumer instead of a producer, and the consumer would go back to sleep. Using "broadcast" ensures that some thread of the right type will proceed as expected by the problem statement.
Here is the variant using only one condition variable and broadcast:
global volatile RingBuffer queue; // A thread-unsafe ring-buffer of tasks.
global Lock queueLock; // A mutex for the ring-buffer of tasks. (Not a spin-lock.)
global CV queueFullOrEmptyCV; // A single condition variable for when the queue is not ready for any thread
// i.e. for producer threads waiting for the queue to become non-full
// and consumer threads waiting for the queue to become non-empty.
// Its associated lock is "queueLock".
// Not safe to use regular "signal" because it is associated with
// multiple predicate conditions (assertions).
// Method representing each producer thread's behavior:
public method producer() {
while (true) {
// Producer makes some new task to be added.
task myTask = ...;
// Acquire "queueLock" for the initial predicate check.
queueLock.acquire();
// Critical section that checks if the queue is non-full.
while (queue.isFull()) {
// Release "queueLock", enqueue this thread onto "queueFullOrEmptyCV" and sleep this thread.
wait(queueLock, queueFullOrEmptyCV);
// When this thread is awoken, re-acquire "queueLock" for the next predicate check.
}
// Critical section that adds the task to the queue (note that we are holding "queueLock").
queue.enqueue(myTask);
// Wake up all producer and consumer threads that are waiting for the queue to be respectively
// non-full and non-empty now that the latter is guaranteed, so that a consumer thread will take the task.
broadcast(queueFullOrEmptyCV); // Do not use "signal" (as it might wake up another producer thread only).
// End of critical sections.
// Release "queueLock" until we need it again to add the next task.
queueLock.release();
}
}
// Method representing each consumer thread's behavior:
public method consumer() {
while (true) {
// Acquire "queueLock" for the initial predicate check.
queueLock.acquire();
// Critical section that checks if the queue is non-empty.
while (queue.isEmpty()) {
// Release "queueLock", enqueue this thread onto "queueFullOrEmptyCV" and sleep this thread.
wait(queueLock, queueFullOrEmptyCV);
// When this thread is awoken, re-acquire "queueLock" for the next predicate check.
}
// Critical section that takes a task off of the queue (note that we are holding "queueLock").
myTask = queue.dequeue();
// Wake up all producer and consumer threads that are waiting for the queue to be respectively
// non-full and non-empty now that the former is guaranteed, so that a producer thread will add a task.
broadcast(queueFullOrEmptyCV); // Do not use "signal" (as it might wake up another consumer thread only).
// End of critical sections.
// Release "queueLock" until we need it again to take the next task.
queueLock.release();
// Go off and do something with the task.
doStuff(myTask);
}
}
Synchronization primitives.
Monitors are implemented using an atomic read-modify-write primitive and a waiting primitive. The read-modify-write primitive (usually test-and-set or compare-and-swap) is usually in the form of a memory-locking instruction provided by the ISA, but can also be composed of non-locking instructions on single-processor devices when interrupts are disabled. The waiting primitive can be a busy-wait loop or an OS-provided primitive that prevents the thread from being scheduled until it is ready to proceed.
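As a rough illustration of these primitives, the following Python sketch uses a non-blocking Lock.acquire call as a stand-in for an atomic test-and-set instruction and builds a busy-waiting spin-lock on top of it. The class names are invented for this example; a real implementation would rely on the hardware instruction and, as noted above, an OS-provided waiting primitive rather than spinning:
import threading
import time

class TestAndSetWord:
    """Toy stand-in for a hardware test-and-set word (illustrative only)."""
    def __init__(self):
        self._flag = threading.Lock()             # Lock.acquire(blocking=False) is an atomic "try-set"
    def test_and_set(self):
        # Returns the *old* value: False if the flag was clear (and is now set), True if it was already set.
        return not self._flag.acquire(blocking=False)
    def clear(self):
        self._flag.release()

class SpinLock:
    """Busy-waiting lock built on the read-modify-write primitive above."""
    def __init__(self):
        self._word = TestAndSetWord()
    def acquire(self):
        while self._word.test_and_set():          # spin until the old value was "clear"
            time.sleep(0)                         # yield the CPU instead of burning it hard
    def release(self):
        self._word.clear()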
Here is an example pseudocode implementation of parts of a threading system and mutexes and Mesa-style condition variables, using test-and-set and a first-come, first-served policy:
Sample Mesa-monitor implementation with Test-and-Set.
// Basic parts of threading system:
// Assume "ThreadQueue" supports random access.
public volatile ThreadQueue readyQueue; // Thread-unsafe queue of ready threads. Elements are (Thread*).
public volatile global Thread* currentThread; // Assume this variable is per-core. (Others are shared.)
// Implements a spin-lock on just the synchronized state of the threading system itself.
// This is used with test-and-set as the synchronization primitive.
public volatile global bool threadingSystemBusy = false;
// Context-switch interrupt service routine (ISR):
// On the current CPU core, preemptively switch to another thread.
public method contextSwitchISR() {
if (testAndSet(threadingSystemBusy)) {
return; // Can't switch context right now.
}
// Ensure this interrupt can't happen again which would foul up the context switch:
systemCall_disableInterrupts();
// Get all of the registers of the currently-running process.
// For Program Counter (PC), we will need the instruction location of
// the "resume" label below. Getting the register values is platform-dependent and may involve
// reading the current stack frame, JMP/CALL instructions, etc. (The details are beyond this scope.)
currentThread->registers = getAllRegisters(); // Store the registers in the "currentThread" object in memory.
currentThread->registers.PC = resume; // Set the next PC to the "resume" label below in this method.
readyQueue.enqueue(currentThread); // Put this thread back onto the ready queue for later execution.
Thread* otherThread = readyQueue.dequeue(); // Remove and get the next thread to run from the ready queue.
currentThread = otherThread; // Replace the global current-thread pointer value so it is ready for the next thread.
// Restore the registers from currentThread/otherThread, including a jump to the stored PC of the other thread
// (at "resume" below). Again, the details of how this is done are beyond this scope.
restoreRegisters(otherThread.registers);
// *** Now running "otherThread" (which is now "currentThread")! The original thread is now "sleeping". ***
resume: // This is where another contextSwitch() call needs to set PC to when switching context back here.
// Return to where otherThread left off.
threadingSystemBusy = false; // Must be an atomic assignment.
systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core.
// Thread sleep method:
// On current CPU core, a synchronous context switch to another thread without putting
// the current thread on the ready queue.
// Must be holding "threadingSystemBusy" and disabled interrupts so that this method
// doesn't get interrupted by the thread-switching timer which would call contextSwitchISR().
// After returning from this method, must clear "threadingSystemBusy".
public method threadSleep() {
// Get all of the registers of the currently-running process.
// For Program Counter (PC), we will need the instruction location of
// the "resume" label below. Getting the register values is platform-dependent and may involve
// reading the current stack frame, JMP/CALL instructions, etc. (The details are beyond this scope.)
currentThread->registers = getAllRegisters(); // Store the registers in the "currentThread" object in memory.
currentThread->registers.PC = resume; // Set the next PC to the "resume" label below in this method.
// Unlike contextSwitchISR(), we will not place currentThread back into readyQueue.
// Instead, it has already been placed onto a mutex's or condition variable's queue.
Thread* otherThread = readyQueue.dequeue(); // Remove and get the next thread to run from the ready queue.
currentThread = otherThread; // Replace the global current-thread pointer value so it is ready for the next thread.
// Restore the registers from currentThread/otherThread, including a jump to the stored PC of the other thread
// (at "resume" below). Again, the details of how this is done are beyond this scope.
restoreRegisters(otherThread.registers);
// *** Now running "otherThread" (which is now "currentThread")! The original thread is now "sleeping". ***
resume: // This is where another contextSwitch() call needs to set PC to when switching context back here.
// Return to where otherThread left off.
public method wait(Mutex m, ConditionVariable c) {
// Internal spin-lock while other threads on any core are accessing this object's
// "held" and "threadQueue", or "readyQueue".
while (testAndSet(threadingSystemBusy)) {}
// N.B.: "threadingSystemBusy" is now true.
// System call to disable interrupts on this core so that threadSleep() doesn't get interrupted by
// the thread-switching timer on this core which would call contextSwitchISR().
// Done outside threadSleep() for more efficiency so that this thread will be put to sleep
// right after going on the condition-variable queue.
systemCall_disableInterrupts();
assert m.held; // (Specifically, this thread must be the one holding it.)
m.release();
c.waitingThreads.enqueue(currentThread);
threadSleep();
// Thread sleeps ... Thread gets woken up from a signal/broadcast.
threadingSystemBusy = false; // Must be an atomic assignment.
systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core.
// Mesa style:
// Context switches may now occur here, making the client caller's predicate false.
m.acquire();
public method signal(ConditionVariable c) {
// Internal spin-lock while other threads on any core are accessing this object's
// "held" and "threadQueue", or "readyQueue".
while (testAndSet(threadingSystemBusy)) {}
// N.B.: "threadingSystemBusy" is now true.
// System call to disable interrupts on this core so that threadSleep() doesn't get interrupted by
// the thread-switching timer on this core which would call contextSwitchISR().
// Done outside threadSleep() for more efficiency so that this thread will be put to sleep
// right after going on the condition-variable queue.
systemCall_disableInterrupts();
if (!c.waitingThreads.isEmpty()) {
wokenThread = c.waitingThreads.dequeue();
readyQueue.enqueue(wokenThread);
}
threadingSystemBusy = false; // Must be an atomic assignment.
systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core.
// Mesa style:
// The woken thread is not given any priority.
public method broadcast(ConditionVariable c) {
// Internal spin-lock while other threads on any core are accessing this object's
// "held" and "threadQueue", or "readyQueue".
while (testAndSet(threadingSystemBusy)) {}
// N.B.: "threadingSystemBusy" is now true.
// System call to disable interrupts on this core so that threadSleep() doesn't get interrupted by
// the thread-switching timer on this core which would call contextSwitchISR().
// Done outside threadSleep() for more efficiency so that this thread will be put to sleep
// right after going on the condition-variable queue.
systemCall_disableInterrupts();
while (!c.waitingThreads.isEmpty()) {
wokenThread = c.waitingThreads.dequeue();
readyQueue.enqueue(wokenThread);
}
threadingSystemBusy = false; // Must be an atomic assignment.
systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core.
// Mesa style:
// The woken threads are not given any priority.
class Mutex {
protected volatile bool held = false;
private volatile ThreadQueue blockingThreads; // Thread-unsafe queue of blocked threads. Elements are (Thread*).
public method acquire() {
// Internal spin-lock while other threads on any core are accessing this object's
// "held" and "threadQueue", or "readyQueue".
while (testAndSet(threadingSystemBusy)) {}
// N.B.: "threadingSystemBusy" is now true.
// System call to disable interrupts on this core so that threadSleep() doesn't get interrupted by
// the thread-switching timer on this core which would call contextSwitchISR().
// Done outside threadSleep() for more efficiency so that this thread will be put to sleep
// right after going on the lock queue.
systemCall_disableInterrupts();
assert !blockingThreads.contains(currentThread);
if (held) {
// Put "currentThread" on this lock's queue so that it will be
// considered "sleeping" on this lock.
// Note that "currentThread" still needs to be handled by threadSleep().
readyQueue.remove(currentThread);
blockingThreads.enqueue(currentThread);
threadSleep();
// Now we are woken up, which must be because "held" became false.
assert !held;
assert !blockingThreads.contains(currentThread);
}
held = true;
threadingSystemBusy = false; // Must be an atomic assignment.
systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core.
public method release() {
// Internal spin-lock while other threads on any core are accessing this object's
// "held" and "threadQueue", or "readyQueue".
while (testAndSet(threadingSystemBusy)) {}
// N.B.: "threadingSystemBusy" is now true.
// System call to disable interrupts on this core for efficiency.
systemCall_disableInterrupts();
assert held; // (Release should only be performed while the lock is held.)
held = false;
if (!blockingThreads.isEmpty()) {
Thread* unblockedThread = blockingThreads.dequeue();
readyQueue.enqueue(unblockedThread);
}
threadingSystemBusy = false; // Must be an atomic assignment.
systemCall_enableInterrupts(); // Turn pre-emptive switching back on on this core.
struct ConditionVariable {
volatile ThreadQueue waitingThreads;
Blocking condition variables.
The original proposals by C. A. R. Hoare and Per Brinch Hansen were for "blocking condition variables". With a blocking condition variable, the signaling thread must wait outside the monitor (at least) until the signaled thread relinquishes occupancy of the monitor by either returning or by again waiting on a condition variable. Monitors using blocking condition variables are often called "Hoare-style" monitors or "signal-and-urgent-wait" monitors.
We assume there are two queues of threads associated with each monitor object: e, the entrance queue of threads waiting to enter the monitor, and s, a queue of threads that have signaled and are waiting to re-enter the monitor.
In addition we assume that for each condition variable c, there is a queue c.q of threads waiting on condition variable c.
All queues are typically guaranteed to be fair and, in some implementations, may be guaranteed to be first in first out.
The implementation of each operation is as follows. (We assume that each operation runs in mutual exclusion to the others; thus restarted threads do not begin executing until the operation is complete.)
enter the monitor:
enter the method
if the monitor is locked
add this thread to e
block this thread
else
lock the monitor
leave the monitor:
schedule
return from the method
wait c:
add this thread to c.q
schedule
block this thread
signal c:
if there is a thread waiting on c.q
select and remove one such thread t from c.q
(t is called "the signaled thread")
add this thread to s
restart t
(so t will occupy the monitor next)
block this thread
schedule:
if there is a thread on s
select and remove one thread from s and restart it
(this thread will occupy the monitor next)
else if there is a thread on e
select and remove one thread from e and restart it
(this thread will occupy the monitor next)
else
unlock the monitor
(the monitor will become unoccupied)
The "schedule" routine selects the next thread to occupy the monitor
or, in the absence of any candidate threads, unlocks the monitor.
The resulting signaling discipline is known as "signal and urgent wait," as the signaler must wait, but is given priority over threads on the entrance queue. An alternative is "signal and wait," in which there is no s queue and the signaler waits on the e queue instead.
Some implementations provide a signal and return operation that combines signaling with returning from a procedure.
signal c and return:
if there is a thread waiting on c.q
select and remove one such thread t from c.q
(t is called "the signaled thread")
restart t
(so t will occupy the monitor next)
else
schedule
return from the method
In either case ("signal and urgent wait" or "signal and wait"), when a condition variable is signaled and there is at least one thread waiting on the condition variable, the signaling thread hands occupancy over to the signaled thread seamlessly, so that no other thread can gain occupancy in between. If Pc is true at the start of each signal c operation, it will be true at the end of each wait c operation. This is summarized by the following contracts. In these contracts, I is the monitor's invariant.
enter the monitor:
postcondition I
leave the monitor:
precondition I
wait c:
precondition I
modifies the state of the monitor
postcondition Pc and I
signal c:
precondition Pc and I
modifies the state of the monitor
postcondition I
signal c and return:
precondition Pc and I
In these contracts, it is assumed that I and Pc do not depend on the
contents or lengths of any queues.
(When the condition variable can be queried as to the number of threads waiting on its queue, more sophisticated contracts can be given. For example, a useful pair of contracts, allowing occupancy to be passed without establishing the invariant, is:
wait c:
precondition I
modifies the state of the monitor
postcondition Pc
signal c:
precondition (not empty(c) and Pc) or (empty(c) and I)
modifies the state of the monitor
postcondition I
It is important to note here that the assertion Pc is entirely up to the programmer; he or she simply needs to be consistent about what it is.
We conclude this section with an example of a thread-safe class using a blocking monitor that implements a bounded, thread-safe stack.
monitor class "SharedStack" {
private const capacity := 10
private "int"[capacity] A
private "int" size := 0
invariant 0 <= size and size <= capacity
private "BlockingCondition" theStackIsNotEmpty /* associated with 0 < size and size <= capacity */
private "BlockingCondition" theStackIsNotFull /* associated with 0 <= size and size < capacity */
public method push("int" value)
if size = capacity then wait theStackIsNotFull
assert 0 <= size and size < capacity
A[size] := value ; size := size + 1
assert 0 < size and size <= capacity
signal theStackIsNotEmpty and return
public method "int" pop()
if size = 0 then wait theStackIsNotEmpty
assert 0 < size and size <= capacity
size := size - 1 ;
assert 0 <= size and size < capacity
signal theStackIsNotFull and return A[size]
Note that, in this example, the thread-safe stack is internally providing a mutex, which, as in the earlier producer/consumer example, is shared by both condition variables, which are checking different conditions on the same concurrent data. The only difference is that the producer/consumer example assumed a regular non-thread-safe queue and was using a standalone mutex and condition variables, without these details of the monitor abstracted away as is the case here. In this example, when the "wait" operation is called, it must somehow be supplied with the thread-safe stack's mutex, such as if the "wait" operation is an integrated part of the "monitor class". Aside from this kind of abstracted functionality, when a "raw" monitor is used, it will "always" have to include a mutex and a condition variable, with a unique mutex for each condition variable.
Nonblocking condition variables.
With "nonblocking condition variables" (also called "Mesa style" condition variables or "signal and continue" condition variables), signaling does not cause the signaling thread to lose occupancy of the monitor. Instead the signaled threads are moved to the codice_19 queue. There is no need for the codice_20 queue.
With nonblocking condition variables, the signal operation is often called notify — a terminology we will follow here. It is also common to provide a notify all operation that moves all threads waiting on a condition variable to the codice_19 queue.
The meaning of various operations are given here. (We assume that each operation runs in mutual exclusion to the others; thus restarted threads do not begin executing until the operation is complete.)
enter the monitor:
enter the method
if the monitor is locked
add this thread to e
block this thread
else
lock the monitor
leave the monitor:
schedule
return from the method
wait c:
add this thread to c.q
schedule
block this thread
notify c:
if there is a thread waiting on c.q
select and remove one thread t from c.q
(t is called "the notified thread")
move t to e
notify all c:
move all threads waiting on c.q to e
schedule:
if there is a thread on e
select and remove one thread from e and restart it
else
unlock the monitor
As a variation on this scheme, the notified thread may be moved to a separate queue that has priority over e. See Howard and Buhr "et al." for further discussion.
It is possible to associate an assertion Pc with each condition variable c such that Pc is sure to be true upon return from wait c. However, one must
ensure that Pc is preserved from the time the notifying thread gives up occupancy until the notified thread is selected to re-enter the monitor. Between these times there could be activity by other occupants. Thus it is common for Pc to simply be "true".
For this reason, it is usually necessary to enclose each wait operation in a loop like this
while not ( P ) do
wait c
where P is some condition stronger than Pc. The operations notify c and notify all c are treated as "hints" that P may be true for some waiting thread.
Every iteration of such a loop past the first represents a lost notification; thus with nonblocking monitors, one must be careful to ensure that too many notifications cannot be lost.
As an example of "hinting," consider a bank account in which a withdrawing thread will wait until the account has sufficient funds before proceeding
monitor class "Account" {
private "int" balance := 0
invariant balance >= 0
private "NonblockingCondition" balanceMayBeBigEnough
public method withdraw("int" amount)
precondition amount >= 0
while balance < amount do wait balanceMayBeBigEnough
assert balance >= amount
balance := balance - amount
public method deposit("int" amount)
precondition amount >= 0
balance := balance + amount
notify all balanceMayBeBigEnough
In this example, the condition being waited for is a function of the amount to be withdrawn, so it is impossible for a depositing thread to "know" that it made such a condition true. It makes sense in this case to allow each waiting thread into the monitor (one at a time) to check if its assertion is true.
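A sketch of the same bank-account monitor in Python, again using Mesa-style ("signal and continue") semantics, might look as follows; the notification is only a hint, so the predicate is re-checked in a loop exactly as described above:
import threading

class Account:
    def __init__(self):
        self._balance = 0
        self._lock = threading.Lock()
        self._balance_may_be_big_enough = threading.Condition(self._lock)

    def withdraw(self, amount):
        assert amount >= 0
        with self._lock:
            while self._balance < amount:                 # re-check: the notify is only a hint
                self._balance_may_be_big_enough.wait()
            self._balance -= amount

    def deposit(self, amount):
        assert amount >= 0
        with self._lock:
            self._balance += amount
            self._balance_may_be_big_enough.notify_all()  # "notify all": let every waiter re-check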
Implicit condition variable monitors.
In the Java language, each object may be used as a monitor. Methods requiring mutual exclusion must be explicitly marked with the synchronized keyword. Blocks of code may also be marked by synchronized.
Rather than having explicit condition variables, each monitor (i.e., object) is equipped with a single wait queue in addition to its entrance queue. All waiting is done on this single wait queue and all notify and notifyAll operations apply to this queue. This approach has been adopted in other languages, for example C#.
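The effect is similar to the following Python sketch (an analogy only, since Python has no synchronized keyword and the class and names here are invented for illustration): every method of the object takes one internal lock, and all waiting threads share a single wait queue regardless of which predicate they are actually waiting for:
import threading

class BoundedCounter:
    """Illustrative analogue of an implicit-condition-variable monitor."""
    def __init__(self, limit):
        self._limit = limit
        self._value = 0
        self._monitor = threading.Condition()     # one lock plus one shared wait queue

    def increment(self):
        with self._monitor:                       # plays the role of a synchronized method
            while self._value >= self._limit:     # every waiter re-checks its own predicate
                self._monitor.wait()
            self._value += 1
            self._monitor.notify_all()            # like notifyAll(): wake the whole queue

    def decrement(self):
        with self._monitor:
            while self._value <= 0:
                self._monitor.wait()
            self._value -= 1
            self._monitor.notify_all()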
Implicit signaling.
Another approach to signaling is to omit the signal operation. Whenever a thread leaves the monitor (by returning or waiting), the assertions of all waiting threads are evaluated until one is found to be true. In such a system, condition variables are not needed, but the assertions must be explicitly coded. The contract for wait is
wait P:
precondition I
modifies the state of the monitor
postcondition P and I
History.
Brinch Hansen and Hoare developed the monitor concept in the early 1970s, based on earlier ideas of their own and of Edsger Dijkstra. Brinch Hansen published the first monitor notation, adopting the class concept of Simula 67, and invented a queueing mechanism. Hoare refined the rules of process resumption. Brinch Hansen created the first implementation of monitors, in Concurrent Pascal. Hoare demonstrated their equivalence to semaphores.
Monitors (and Concurrent Pascal) were soon used to structure process synchronization in the Solo operating system.
Programming languages that have supported monitors include Concurrent Pascal, Java, and C#, among others.
A number of libraries have been written that allow monitors to be constructed in languages that do not support them natively. When library calls are used, it is up to the programmer to explicitly mark the start and end of code executed with mutual exclusion. Pthreads is one such library.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "c_{full}"
},
{
"math_id": 1,
"text": "c_{empty}"
}
]
| https://en.wikipedia.org/wiki?curid=1367789 |
13678647 | Node (autonomous system) | The behaviour of a linear autonomous system around a critical point is a node if the following conditions are satisfied:
Each path converges toward or away from the critical point (depending on the underlying equation) as formula_0 (or as formula_1). Furthermore, each path approaches the point asymptotically along a line.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " t \\rightarrow \\infty "
},
{
"math_id": 1,
"text": " t \\rightarrow - \\infty "
}
]
| https://en.wikipedia.org/wiki?curid=13678647 |
1368010 | Noncentral F-distribution | In probability theory and statistics, the noncentral "F"-distribution is a continuous probability distribution that is a noncentral generalization of the (ordinary) "F"-distribution. It describes the distribution of the quotient ("X"/"n"1)/("Y"/"n"2), where the numerator "X" has a noncentral chi-squared distribution with "n"1 degrees of freedom and the denominator "Y" has a central chi-squared distribution with "n"2 degrees of freedom. It is also required that "X" and "Y" are statistically independent of each other.
It is the distribution of the test statistic in analysis of variance problems when the null hypothesis is false. The noncentral "F"-distribution is used to find the power function of such a test.
Occurrence and specification.
If formula_0 is a noncentral chi-squared random variable with noncentrality parameter formula_1 and formula_2 degrees of freedom, and formula_3 is a chi-squared random variable with formula_4 degrees of freedom that is statistically independent of formula_0, then
formula_5
is a noncentral "F"-distributed random variable.
The probability density function (pdf) for the noncentral "F"-distribution is
formula_6
when formula_7 and zero otherwise.
The degrees of freedom formula_2 and formula_4 are positive.
The term formula_8 is the beta function, where
formula_9
The cumulative distribution function for the noncentral "F"-distribution is
formula_10
where formula_11 is the regularized incomplete beta function.
The mean and variance of the noncentral "F"-distribution are
formula_12
and
formula_13
Special cases.
When "λ" = 0, the noncentral "F"-distribution becomes the
"F"-distribution.
Related distributions.
"Z" has a noncentral chi-squared distribution if
formula_14
where "F" has a noncentral "F"-distribution.
See also noncentral t-distribution.
Implementations.
The noncentral "F"-distribution is implemented in the R language (e.g., pf function), in MATLAB (ncfcdf, ncfinv, ncfpdf, ncfrnd and ncfstat functions in the statistics toolbox) in Mathematica (NoncentralFRatioDistribution function), in NumPy (random.noncentral_f), and in Boost C++ Libraries.
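As a brief illustration of these implementations, the following Python sketch evaluates the density and distribution function, computes the power of an ANOVA "F"-test, and checks the closed-form mean against simulation; the numeric values are illustrative only:
import numpy as np
from scipy import stats

df1, df2, lam = 5, 40, 3.0                      # degrees of freedom and noncentrality (illustrative)

x = 2.0
pdf_at_x = stats.ncf.pdf(x, df1, df2, lam)      # density of the noncentral F-distribution
cdf_at_x = stats.ncf.cdf(x, df1, df2, lam)      # cumulative distribution function

# Power of an F-test at level alpha: P(F > critical value) under the noncentral distribution.
alpha = 0.05
f_crit = stats.f.ppf(1 - alpha, df1, df2)       # critical value from the central F-distribution
power = 1 - stats.ncf.cdf(f_crit, df1, df2, lam)

# Monte-Carlo check of the closed-form mean (valid for df2 > 2).
samples = np.random.noncentral_f(df1, df2, lam, size=200_000)
theoretical_mean = df2 * (df1 + lam) / (df1 * (df2 - 2))
print(power, samples.mean(), theoretical_mean)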
A collaborative wiki page implements an interactive online calculator, programmed in the R language, for the noncentral t, chi-squared, and F distributions, at the Institute of Statistics and Econometrics, School of Business and Economics, Humboldt-Universität zu Berlin. | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "\\nu_1"
},
{
"math_id": 3,
"text": "Y"
},
{
"math_id": 4,
"text": "\\nu_2"
},
{
"math_id": 5,
"text": "\nF=\\frac{X/\\nu_1}{Y/\\nu_2}\n"
},
{
"math_id": 6,
"text": "\np(f)\n=\\sum\\limits_{k=0}^\\infty\\frac{e^{-\\lambda/2}(\\lambda/2)^k}{ B\\left(\\frac{\\nu_2}{2},\\frac{\\nu_1}{2}+k\\right) k!}\n\\left(\\frac{\\nu_1}{\\nu_2}\\right)^{\\frac{\\nu_1}{2}+k}\n\\left(\\frac{\\nu_2}{\\nu_2+\\nu_1f}\\right)^{\\frac{\\nu_1+\\nu_2}{2}+k}f^{\\nu_1/2-1+k}\n "
},
{
"math_id": 7,
"text": "f\\ge0"
},
{
"math_id": 8,
"text": "B(x,y)"
},
{
"math_id": 9,
"text": "\nB(x,y)=\\frac{\\Gamma(x)\\Gamma(y)}{\\Gamma(x+y)}.\n"
},
{
"math_id": 10,
"text": "\nF(x\\mid d_1,d_2,\\lambda)=\\sum\\limits_{j=0}^\\infty\\left(\\frac{\\left(\\frac{1}{2}\\lambda\\right)^j}{j!}e^{-\\lambda/2} \\right)I\\left(\\frac{d_1x}{d_2 + d_1x}\\bigg|\\frac{d_1}{2}+j,\\frac{d_2}{2}\\right)\n"
},
{
"math_id": 11,
"text": "I"
},
{
"math_id": 12,
"text": "\n\\operatorname{E}[F] \\quad\n\\begin{cases}\n= \\frac{\\nu_2(\\nu_1+\\lambda)}{\\nu_1(\\nu_2-2)} & \\text{if } \\nu_2>2\\\\\n\\text{does not exist} & \\text{if } \\nu_2\\le2\\\\\n\\end{cases}\n"
},
{
"math_id": 13,
"text": "\n\\operatorname{Var}[F] \\quad\n\\begin{cases}\n= 2\\frac{(\\nu_1+\\lambda)^2+(\\nu_1+2\\lambda)(\\nu_2-2)}{(\\nu_2-2)^2(\\nu_2-4)}\\left(\\frac{\\nu_2}{\\nu_1}\\right)^2\n& \\text{if } \\nu_2>4\\\\\n\\text{does not exist}\n& \\text{if } \\nu_2\\le4.\\\\\n\\end{cases}\n"
},
{
"math_id": 14,
"text": " Z=\\lim_{\\nu_2\\to\\infty}\\nu_1 F "
}
]
| https://en.wikipedia.org/wiki?curid=1368010 |
13680698 | Mass attenuation coefficient | Property of materials
The mass attenuation coefficient, or mass narrow beam attenuation coefficient of a material is the attenuation coefficient normalized by the density of the material; that is, the attenuation per unit mass (rather than per unit of distance). Thus, it characterizes how easily a mass of material can be penetrated by a beam of light, sound, particles, or other energy or matter. In addition to visible light, mass attenuation coefficients can be defined for other electromagnetic radiation (such as X-rays), sound, or any other beam that can be attenuated. The SI unit of mass attenuation coefficient is the square metre per kilogram (m2/kg). Other common units include cm2/g (the most common unit for X-ray mass attenuation coefficients) and L⋅g−1⋅cm−1 (sometimes used in solution chemistry). Mass extinction coefficient is an old term for this quantity.
The mass attenuation coefficient can be thought of as a variant of absorption cross section where the effective area is defined per unit mass instead of per particle.
Mathematical definitions.
Mass attenuation coefficient is defined as
formula_0
where "μ" is the attenuation coefficient and "ρ"m is the mass density of the material.
When using the mass attenuation coefficient, the Beer–Lambert law is written in alternative form as
formula_1
where
formula_2 is the area density, also known as mass thickness, formula_3 is the length over which the attenuation takes place, "I"0 is the incident intensity, and "I" is the transmitted intensity.
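A short numerical sketch of this form of the Beer–Lambert law in Python (with placeholder values, not data for any real material) is:
import numpy as np

mu_over_rho = 0.15                      # mass attenuation coefficient, cm^2/g (placeholder value)
rho = 2.7                               # material density, g/cm^3 (placeholder value)
length = 2.0                            # path length, cm

mass_thickness = rho * length                          # the area density (mass thickness), g/cm^2
transmission = np.exp(-mu_over_rho * mass_thickness)   # I / I_0 from the formula above
print(transmission)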
Mass absorption and scattering coefficients.
When a narrow (collimated) beam passes through a volume, the beam will lose intensity to two processes: absorption and scattering.
Mass absorption coefficient, and mass scattering coefficient are defined as
formula_4
where "μ"a is the absorption coefficient and "μ"s is the scattering coefficient.
In solutions.
In chemistry, mass attenuation coefficients are often used for a chemical species dissolved in a solution. In that case, the mass attenuation coefficient is defined by the same equation, except that the "density" is the density of only that one chemical species, and the "attenuation" is the attenuation due to only that one chemical species. The "actual" attenuation coefficient is computed by
formula_5
where each term in the sum is the mass attenuation coefficient and density of a different component of the solution (the solvent must also be included). This is a convenient concept because the mass attenuation coefficient of a species is approximately independent of its concentration (as long as certain assumptions are fulfilled).
A closely related concept is molar absorptivity. They are quantitatively related by
(mass attenuation coefficient) × (molar mass) = (molar absorptivity).
X-rays.
Tables of photon mass attenuation coefficients are essential in radiological physics, radiography (for medical and security purposes), dosimetry, diffraction, interferometry, crystallography, and other branches of physics. The photons can be in form of X-rays, gamma rays, and bremsstrahlung.
The values of mass attenuation coefficients, based on proper values of photon cross section, are dependent upon the absorption and scattering of the incident radiation caused by several different mechanisms, such as the photoelectric effect, coherent (Rayleigh) scattering, incoherent (Compton) scattering, and pair production.
The actual values have been thoroughly examined and are available to the general public through three databases run by the National Institute of Standards and Technology (NIST): XAAMDI, XCOM, and FFAST.
Calculating the composition of a solution.
If several known chemicals are dissolved in a single solution, the concentrations of each can be calculated using a light absorption analysis. First, the mass attenuation coefficients of each individual solute or solvent, ideally across a broad spectrum of wavelengths, must be measured or looked up. Second, the attenuation coefficient of the actual solution must be measured. Finally, using the formula
formula_5
the spectrum can be fitted using "ρ"1, "ρ"2, … as adjustable parameters, since "μ" and each "μ"/"ρ""i" are functions of wavelength. If there are "N" solutes or solvents, this procedure requires "at least" "N" measured wavelengths to create a solvable system of simultaneous equations, although using more wavelengths gives more reliable data.
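A minimal sketch of this fitting procedure in Python, with placeholder numbers, sets up one equation per measured wavelength and solves the (possibly over-determined) linear system for the partial densities by least squares:
import numpy as np

# mu_over_rho[i, j]: mass attenuation coefficient of component j at wavelength i (placeholder values).
mu_over_rho = np.array([[0.12, 0.45],
                        [0.30, 0.20],
                        [0.55, 0.10]])          # 3 wavelengths, N = 2 components

# Measured attenuation coefficient of the solution at each wavelength (placeholder values).
mu_measured = np.array([0.219, 0.270, 0.350])

# Solve mu = (mu/rho)_1 * rho_1 + (mu/rho)_2 * rho_2 for the densities rho_1, rho_2.
rho, residuals, rank, _ = np.linalg.lstsq(mu_over_rho, mu_measured, rcond=None)
print(rho)                                      # estimated density of each component in the solution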
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\mu}{\\rho_m},"
},
{
"math_id": 1,
"text": "I = I_0 \\, e^{-(\\mu/\\rho_m)\\lambda}"
},
{
"math_id": 2,
"text": "\\lambda=\\rho_m \\ell"
},
{
"math_id": 3,
"text": "\\ell"
},
{
"math_id": 4,
"text": "\\frac{\\mu_\\mathrm{a}}{\\rho_m},\\quad \\frac{\\mu_\\mathrm{s}}{\\rho_m},"
},
{
"math_id": 5,
"text": "\\mu = (\\mu/\\rho)_1 \\rho_1 + (\\mu/\\rho)_2 \\rho_2 + \\ldots,"
}
]
| https://en.wikipedia.org/wiki?curid=13680698 |
13680969 | Mauchly's sphericity test | Statistical test
Mauchly's sphericity test or Mauchly's "W" is a statistical test used to validate a repeated measures analysis of variance (ANOVA). It was developed in 1940 by John Mauchly.
Sphericity.
Sphericity is an important assumption of a repeated-measures ANOVA. It is the condition of equal variances among the differences between all possible pairs of within-subject conditions (i.e., levels of the independent variable). If sphericity is violated (i.e., if the variances of the differences between all combinations of the conditions are not equal), then the variance calculations may be distorted, which would result in an inflated F-ratio. Sphericity can be evaluated when there are three or more levels of a repeated measure factor and, with each additional repeated measures factor, the risk for violating sphericity increases. If sphericity is violated, a decision must be made as to whether a univariate or multivariate analysis is selected. If a univariate method is selected, the repeated-measures ANOVA must be appropriately corrected depending on the degree to which sphericity has been violated.
Measurement of sphericity.
To further illustrate the concept of sphericity, consider a matrix representing data from patients who receive three different types of drug treatments in Figure 1. Their outcomes are represented on the left-hand side of the matrix, while differences between the outcomes for each treatment are represented on the right-hand side. After obtaining the difference scores for all possible pairs of groups, the variances of each group difference can be contrasted. From the example in Figure 1, the variance of the differences between Treatment A and B (17) appear to be much greater than the variance of the differences between Treatment A and C (10.3) and between Treatment B and C (10.3). This suggests that the data may violate the assumption of sphericity. To determine whether statistically significant differences exist between the variances of the differences, Mauchly's test of sphericity can be performed.
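In code, the quantities being compared are simply the sample variances of the pairwise difference scores, as in this Python sketch (the numbers are placeholders and will not reproduce the figures quoted above):
import numpy as np

# One row per patient, one column per treatment (A, B, C); placeholder data.
outcomes = np.array([[30., 27., 25.],
                     [35., 30., 28.],
                     [25., 30., 20.],
                     [40., 27., 31.],
                     [27., 29., 24.]])

# Sphericity requires the variances of all pairwise difference scores to be (roughly) equal.
pairs = {"A-B": outcomes[:, 0] - outcomes[:, 1],
         "A-C": outcomes[:, 0] - outcomes[:, 2],
         "B-C": outcomes[:, 1] - outcomes[:, 2]}
for name, d in pairs.items():
    print(name, d.var(ddof=1))                  # sample variance of each difference score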
Interpretation.
Developed in 1940 by John W. Mauchly, Mauchly's test of sphericity is a popular test to evaluate whether the sphericity assumption has been violated. The null hypothesis of sphericity and alternative hypothesis of non-sphericity in the above example can be mathematically written in terms of difference scores.
formula_0
formula_1
Interpreting Mauchly's test is fairly straightforward. When the probability of Mauchly's test statistic is greater than formula_2 (i.e., "p" > formula_2, with formula_2 commonly being set to .05), we fail to reject the null hypothesis that the variances are equal. Therefore, we could conclude that the assumption has not been violated. However, when the probability of Mauchly's test statistic is less than or equal to formula_2 (i.e., "p" ≤ formula_2), sphericity cannot be assumed and we would therefore conclude that there are significant differences between the variances of the differences. Sphericity is always met for two levels of a repeated measures factor and is, therefore, unnecessary to evaluate.
Statistical software should not provide output for a test of sphericity for two levels of a repeated measure factor; however, some versions of SPSS produce an output table with degrees of freedom equal to 0, and a period in place of a numeric "p" value.
Violations of sphericity.
When sphericity has been established, the F-ratio is valid and therefore interpretable. However, if Mauchly's test is significant then the F-ratios produced must be interpreted with caution as the violations of this assumption can result in an increase in the Type I error rate, and influence the conclusions drawn from your analysis. In instances where Mauchly's test is significant, modifications need to be made to the degrees of freedom so that a valid F-ratio can be obtained.
In SPSS, three corrections are generated: the Greenhouse–Geisser correction (1959), the Huynh–Feldt correction (1976), and the lower-bound. Each of these corrections have been developed to alter the degrees of freedom and produce an F-ratio where the Type I error rate is reduced. The actual F-ratio does not change as a result of applying the corrections; only the degrees of freedom.
The test statistic for these estimates is denoted by epsilon ("ε") and can be found on Mauchly's test output in SPSS. Epsilon provides a measure of departure from sphericity. By evaluating epsilon, we can determine the degree to which sphericity has been violated. If the variances of differences between all possible pairs of groups are equal and sphericity is exactly met, then epsilon will be exactly 1, indicating no departure from sphericity. If the variances of differences between all possible pairs of groups are unequal and sphericity is violated, epsilon will be below 1. The further epsilon is from 1, the worse the violation.
Of the three corrections, Huynh-Feldt is considered the least conservative, while Greenhouse–Geisser is considered more conservative and the lower-bound correction is the most conservative. When epsilon is > .75, the Greenhouse–Geisser correction is believed to be too conservative, and would result in incorrectly rejecting the null hypothesis that sphericity holds. Collier and colleagues showed this was true when epsilon was extended to as high as .90. The Huynh–Feldt correction, however, is believed to be too liberal and overestimates sphericity. This would result in incorrectly rejecting the alternative hypothesis that sphericity does not hold, when it does. Girden recommended a solution to this problem: when epsilon is > .75, the Huynh–Feldt correction should be applied and when epsilon is < .75 or nothing is known about sphericity, the Greenhouse–Geisser correction should be applied.
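Girden's rule of thumb can be written directly as a small helper, shown here as an illustrative Python sketch:
def choose_correction(epsilon):
    """Pick a sphericity correction following Girden's recommendation described above."""
    if epsilon is None:
        return "Greenhouse-Geisser"       # nothing is known about sphericity
    if epsilon > 0.75:
        return "Huynh-Feldt"              # Greenhouse-Geisser would be too conservative here
    return "Greenhouse-Geisser"           # epsilon < .75: use the more conservative correction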
Another alternative procedure is using the multivariate test statistics (MANOVA) since they do not require the assumption of sphericity. However, this procedure can be less powerful than using a repeated measures ANOVA, especially when sphericity violation is not large or sample sizes are small. O’Brien and Kaiser suggested that when you have a large violation of sphericity (i.e., epsilon < .70) and your sample size is greater than "k" + 10 (i.e., the number of levels of the repeated measures factor + 10), then a MANOVA is more powerful; in other cases, repeated measures design should be selected. Additionally, the power of MANOVA is contingent upon the correlations between the dependent variables, so the relationship between the different conditions must also be considered.
SPSS provides an F-ratio from four different methods: Pillai's trace, Wilks’ lambda, Hotelling's trace, and Roy's largest root. In general, Wilks’ lambda has been recommended as the most appropriate multivariate test statistic to use.
Criticisms.
While Mauchly's test is one of the most commonly used to evaluate sphericity, the test fails to detect departures from sphericity in small samples and over-detects departures from sphericity in large samples. Consequently, the sample size has an influence on the interpretation of the results. In practice, the assumption of sphericity is extremely unlikely to be exactly met so it is prudent to correct for a possible violation without actually testing for a violation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_0 : \\sigma_{\\text{Tx A}-\\text{Tx B} }^2 = \\sigma_{\\text{Tx A}-\\text{Tx C} }^2 = \\sigma_{\\text{Tx B}-\\text{Tx C} }^2"
},
{
"math_id": 1,
"text": "H_1 : \\text{The variances are not all equal}."
},
{
"math_id": 2,
"text": "\\alpha"
}
]
| https://en.wikipedia.org/wiki?curid=13680969 |
13681 | Hyperinflation | Rapidly accelerating inflation
In economics, hyperinflation is a very high and typically accelerating inflation. It quickly erodes the real value of the local currency, as the prices of all goods increase. This causes people to minimize their holdings in that currency as they usually switch to more stable foreign currencies. Effective capital controls and currency substitution ("dollarization") are the orthodox solutions to ending short-term hyperinflation; however there are significant social and economic costs to these policies. Ineffective implementations of these solutions often exacerbate the situation. Many governments choose to attempt to solve structural issues without resorting to those solutions, with the goal of bringing inflation down slowly while minimizing social costs of further economic shocks.
Unlike low inflation, where the process of rising prices is protracted and not generally noticeable except by studying past market prices, hyperinflation sees a rapid and continuing increase in nominal prices, the nominal cost of goods, and in the supply of currency. Typically, however, the general price level rises even more rapidly than the money supply as people try ridding themselves of the devaluing currency as quickly as possible. As this happens, the real stock of money (i.e., the amount of circulating money divided by the price level) decreases considerably.
Hyperinflation is often associated with some stress to the government budget, such as wars or their aftermath, sociopolitical upheavals, a collapse in aggregate supply or one in export prices, or other crises that make it difficult for the government to collect tax revenue. A sharp decrease in real tax revenue coupled with a strong need to maintain government spending, together with an inability or unwillingness to borrow, can lead a country into hyperinflation.
Definition.
In 1956, Phillip Cagan wrote "The Monetary Dynamics of Hyperinflation", the book often regarded as the first serious study of hyperinflation and its effects (though "The Economics of Inflation" by C. Bresciani-Turroni on the German hyperinflation was published in Italian in 1931). In his book, Cagan defined a hyperinflationary episode as starting in the month that the monthly inflation rate exceeds 50%, and as ending when the monthly inflation rate drops below 50% and stays that way for at least a year. Economists usually follow Cagan's description that hyperinflation occurs when the monthly inflation rate exceeds 50% (this is equivalent to a yearly rate of 12,874.63%, so that the amount becomes 129.7463 times as high).
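The equivalence between the monthly and yearly figures is just compounding, as this small Python check shows:
monthly_rate = 0.50
yearly_factor = (1 + monthly_rate) ** 12          # ~129.7463 times the starting price level
yearly_rate_percent = (yearly_factor - 1) * 100   # ~12,874.63 % per year
print(yearly_factor, yearly_rate_percent)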
The International Accounting Standards Board has issued guidance on accounting rules in a hyperinflationary environment. It does not establish an absolute rule on when hyperinflation arises, but instead lists factors that indicate the existence of hyperinflation, such as a cumulative inflation rate over three years approaching or exceeding 100%, a general population that prefers to keep its wealth in non-monetary assets or in a relatively stable foreign currency, and prices, wages and interest rates that are linked to a price index.
Causes.
While there can be a number of causes of high inflation, almost all hyperinflations have been caused by government budget deficits financed by currency creation. Peter Bernholz analysed 29 hyperinflations (following Cagan's definition) and concludes that at least 25 of them have been caused in this way. A necessary condition for hyperinflation is the use of paper money instead of gold or silver coins. Most hyperinflations in history, with some exceptions, such as the French hyperinflation of 1789–1796, occurred after the use of fiat currency became widespread in the late 19th century. The French hyperinflation took place after the introduction of a non-convertible paper currency, the assignat.
Money supply.
Monetarist theories hold that hyperinflation occurs when there is a continuing (and often accelerating) rapid increase in the amount of money that is not supported by a corresponding growth in the output of goods and services.
The increases in price that can result from rapid money creation can create a vicious circle, requiring ever growing amounts of new money creation to fund government deficits. Hence both monetary inflation and price inflation proceed at a rapid pace. Such rapidly increasing prices cause widespread unwillingness of the local population to hold the local currency as it rapidly loses its buying power. Instead, they quickly spend any money they receive, which increases the velocity of money flow; this in turn causes further acceleration in prices. This means that the increase in the price level is greater than that of the money supply.
This results in an imbalance between the supply and demand for the money (including currency and bank deposits), causing rapid inflation. Very high inflation rates can result in a loss of confidence in the currency, similar to a bank run. The excessive money supply growth can result from speculating by private borrowers, or may result from the government being either unable or unwilling to fully finance the government budget through taxation or borrowing. The government may instead finance a government deficit through the creation of money.
Governments have sometimes resorted to excessively loose monetary policy, as it allows a government to devalue its debts and reduce (or avoid) a tax increase. Monetary inflation is effectively a flat tax on creditors that also redistributes proportionally to private debtors. Distributional effects of monetary inflation are complex and vary based on the situation, with some models finding regressive effects and other empirical studies finding progressive effects. As a form of tax, it is less overt than levied taxes and is therefore harder for ordinary citizens to understand. Inflation can obscure quantitative assessments of the true cost of living, as published price indices only look at data in retrospect, so may increase only months later. Monetary inflation can become hyperinflation if monetary authorities fail to fund increasing government expenses from taxes, government debt, cost cutting, or by other means, because either the real value of tax revenue collapses by the time it is collected, or the government's debt can no longer be sold except at deep discounts, or both.
Theories of hyperinflation generally look for a relationship between seigniorage and the inflation tax. In both Cagan's model and the neo-classical models, a tipping point occurs when the increase in money supply or the drop in the monetary base makes it impossible for a government to improve its financial position. Thus when fiat money is printed, government obligations that are not denominated in money increase in cost by more than the value of the money created.
From this, it might be wondered why any rational government would engage in actions that cause or continue hyperinflation. One reason for such actions is that often the alternative to hyperinflation is either depression or military defeat. The root cause is a matter of more dispute. In both classical economics and monetarism, it is always the result of the monetary authority irresponsibly borrowing money to pay all its expenses. These models focus on the unrestrained seigniorage of the monetary authority, and the gains from the inflation tax.
In neo-classical economic theory, hyperinflation is rooted in a deterioration of the monetary base, that is the confidence that there is a store of value that the currency will be able to command later. In this model, the perceived risk of holding currency rises dramatically, and sellers demand increasingly high premiums to accept the currency. This in turn leads to a greater fear that the currency will collapse, causing even higher premiums. One example of this is during periods of warfare, civil war, or intense internal conflict of other kinds: governments need to do whatever is necessary to continue fighting, since the alternative is defeat. Expenses cannot be cut significantly since the main outlay is armaments. Further, a civil war may make it difficult to raise taxes or to collect existing taxes. While in peacetime the deficit is financed by selling bonds, during a war it is typically difficult and expensive to borrow, especially if the war is going poorly for the government in question. The banking authorities, whether central or not, "monetize" the deficit, printing money to pay for the government's efforts to survive. The hyperinflation under the Chinese Nationalists from 1939 to 1945 is a classic example of a government printing money to pay civil war costs. By the end, currency was flown in over the Himalayas, and then old currency was flown out to be destroyed.
Hyperinflation is a complex phenomenon and one explanation may not be applicable to all cases. In both of these models, however, whether loss of confidence comes first, or central bank seigniorage, the other phase is ignited. In the case of rapid expansion of the money supply, prices rise rapidly in response to the increased supply of money relative to the supply of goods and services, and in the case of loss of confidence, the monetary authority responds to the risk premiums it has to pay by "running the printing presses".
Supply shocks.
A number of hyperinflations were caused by some sort of extreme negative supply shock, sometimes but not always associated with wars or natural disasters.
Effects.
Hyperinflation increases stock market prices, wipes out the purchasing power of private and public savings, distorts the economy in favor of the hoarding of real assets, causes the monetary base (whether specie or hard currency) to flee the country, and makes the afflicted area anathema to investment.
One of the most important characteristics of hyperinflation is the accelerating substitution of the inflating money by stable money—gold and silver in former times, then relatively stable foreign currencies after the breakdown of the gold or silver standards (Thiers' law). If inflation is high enough, government regulations like heavy penalties and fines, often combined with exchange controls, cannot prevent this currency substitution. As a consequence, the inflating currency is usually heavily undervalued compared to stable foreign money in terms of purchasing power parity. So foreigners can live cheaply and buy at low prices in the countries hit by high inflation. It follows that governments that do not succeed in engineering a successful currency reform in time must finally legalize the stable foreign currencies (or, formerly, gold and silver) that threaten to fully substitute the inflating money. Otherwise, their tax revenues, including the inflation tax, will approach zero. The last episode of hyperinflation in which this process could be observed was in Zimbabwe in the first decade of the 21st century. In this case, the local money was mainly driven out by the US dollar and the South African rand.
Enactment of price controls to prevent discounting the value of paper money relative to gold, silver, hard currency, or other commodities fail to force acceptance of a paper money that lacks intrinsic value. If the entity responsible for printing a currency promotes excessive money printing, with other factors contributing a reinforcing effect, hyperinflation usually continues. Hyperinflation is generally associated with paper money, which can easily be used to increase the money supply: add more zeros to the plates and print, or even stamp old notes with new numbers. Historically, there have been numerous episodes of hyperinflation in various countries followed by a return to "hard money". Older economies would revert to hard currency and barter when the circulating medium became excessively devalued, generally following a "run" on the store of value.
Much attention on hyperinflation centers on the effect on savers whose investments become worthless. Interest rate changes often cannot keep up with hyperinflation or even high inflation, certainly with contractually fixed interest rates. For example, in the 1970s in the United Kingdom inflation reached 25% per annum, yet interest rates did not rise above 15%—and then only briefly—and many fixed interest rate loans existed. Contractually, there is often no bar to a debtor clearing his long term debt with "hyperinflated cash", nor could a lender simply somehow suspend the loan. Contractual "early redemption penalties" were (and still are) often based on a penalty of "n" months of interest/payment; again no real bar to paying off what had been a large loan. In interwar Germany, for example, much private and corporate debt was effectively wiped out—certainly for those holding fixed interest rate loans.
As more and more money is provided, interest rates decline towards zero. Realizing that fiat money is losing value, investors will try to place money in assets such as real estate, stocks, even art, as these appear to represent "real" value. Asset prices thus become inflated. This potentially spiraling process will ultimately lead to the collapse of the monetary system. The Cantillon effect says that those institutions that receive the new money first are the beneficiaries of the policy.
Aftermath.
Hyperinflation is ended by drastic remedies, such as imposing the shock therapy of slashing government expenditures or altering the currency basis. One form this may take is dollarization, the use of a foreign currency (not necessarily the U.S. dollar) as a national unit of currency. An example was dollarization in Ecuador, initiated in September 2000 in response to a 75% loss of value of the Ecuadorian sucre in early 2000. Usually the "dollarization" takes place in spite of all efforts of the government to prevent it by exchange controls, heavy fines and penalties. The government has thus to try to engineer a successful currency reform stabilizing the value of the money. If this reform does not succeed, the substitution of the inflating money by stable money goes on. Thus it is not surprising that there have been at least seven historical cases in which the good (foreign) money did fully drive out the use of the inflating currency. In the end, the government had to legalize the former, for otherwise its revenues would have fallen to zero.
Hyperinflation has always been a traumatic experience for the people who suffer it, and the next political regime almost always enacts policies to try to prevent its recurrence. Often this means making the central bank very aggressive about maintaining price stability, as was the case with the German Bundesbank, or moving to some hard basis of currency, such as a currency board. Many governments have enacted extremely stiff wage and price controls in the wake of hyperinflation, but this does not prevent further inflation of the money supply by the central bank, and always leads to widespread shortages of consumer goods if the controls are rigidly enforced.
Currency.
In countries experiencing hyperinflation, the central bank often prints money in larger and larger denominations as the smaller denomination notes become worthless. This can result in the production of unusually large denominations of banknotes, including those denominated in amounts of 1,000,000,000 (10⁹, 1 billion) or more.
One way to avoid the use of large numbers is by declaring a new unit of currency. (As an example, instead of 10,000,000,000 dollars, a central bank might set 1 new dollar = 1,000,000,000 old dollars, so the new note would read "10 new dollars".) One example of this is Turkey's revaluation of the lira on 1 January 2005, when the old Turkish lira (TRL) was converted to the new Turkish lira (TRY) at a rate of 1,000,000 old to 1 new lira. While this does not lessen the actual value of a currency, it is called redenomination or revaluation and also occasionally happens in countries with lower inflation rates. During hyperinflation, currency inflation happens so quickly that bills reach large numbers before revaluation.
Governments may try to disguise the true rate of inflation through a variety of techniques. If these actions do not address the root causes of inflation they may undermine trust in the currency, causing further increases in inflation. Price controls will generally result in shortages and hoarding and extremely high demand for the controlled goods, causing disruptions of supply chains. Products available to consumers may diminish or disappear as businesses no longer find it economic to continue producing and/or distributing such goods at the legal prices, further exacerbating the shortages.
There are also issues with computerized money-handling systems. In Zimbabwe, during the hyperinflation of the Zimbabwe dollar, many automated teller machines and payment card machines struggled with arithmetic overflow errors as customers required many billions and trillions of dollars at one time.
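A rough, hypothetical illustration of the arithmetic involved, assuming amounts were held in 32-bit integer fields (an assumption about those systems' internals rather than a documented detail), using the Z$100 trillion note mentioned later in this article:

```python
INT32_MAX = 2**31 - 1                 # 2,147,483,647
largest_note = 100_000_000_000_000    # Z$100 trillion, the largest banknote issued
print(largest_note > INT32_MAX)       # True: the amount cannot fit in a signed 32-bit field
print(largest_note > 2**63 - 1)       # False: a signed 64-bit field would still have coped
```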
Notable hyperinflationary periods.
Austria.
In 1922, inflation in Austria reached 1,426%, and from 1914 to January 1923, the consumer price index rose by a factor of 11,836, with the highest banknote in denominations of 500,000 Kronen. After World War I, essentially all State enterprises ran at a loss, and the number of state employees in the capital, Vienna, was greater than in the earlier monarchy, even though the new republic was nearly one-eighth of the size.
Observing the Austrian response to developing hyperinflation, which included the hoarding of food and the speculation in foreign currencies, Owen S. Phillpotts, the Commercial Secretary at the British Legation in Vienna wrote: "The Austrians are like men on a ship who cannot manage it, and are continually signalling for help. While waiting, however, most of them begin to cut rafts, each for himself, out of the sides and decks. The ship has not yet sunk despite the leaks so caused, and those who have acquired stores of wood in this way may use them to cook their food, while the more seamanlike look on cold and hungry. The population lack courage and energy as well as patriotism."
Bolivia.
Increasing hyperinflation in Bolivia has plagued, and at times crippled, its economy and currency since the 1970s. At one time in 1985, the country experienced an annual inflation rate of more than 20,000%. Fiscal and monetary reform reduced the inflation rate to single digits by the 1990s, and in 2004 Bolivia experienced a manageable 4.9% rate of inflation.
In 1987, the peso boliviano was replaced by the new boliviano at a rate of one million to one (when 1 US dollar was worth 1.8–1.9 million pesos bolivianos). At that time, 1 new boliviano was roughly equivalent to 52 U.S. cents.
Brazil.
Brazilian hyperinflation lasted from 1985 (the year when the military dictatorship ended) to 1994, with prices rising by 184,901,570,954.39% (roughly 185 billion percent; equivalent on average to a tenfold increase per year) in that time due to the uncontrolled printing of money. There were many economic plans that tried to contain hyperinflation, including cutting zeros from the currency, price freezes and even confiscation of bank accounts.
The highest value was in March 1990, when the government inflation index reached 82.39%. Hyperinflation ended in July 1994 with the Real Plan during the government of Itamar Franco. During the period of inflation Brazil adopted a total of six different currencies, as the government constantly changed currencies due to rapid devaluation and the increase in the number of zeros.
China.
Hyperinflation was a major factor in the collapse of the Nationalist government of Chiang Kai-shek.
After a brief decrease following the defeat of Japan in the Second Sino-Japanese War, hyperinflation resumed in October 1945. From 1948 to 1949, near the end of the Chinese Civil War, the Republic of China went through a period of hyperinflation. In 1947, the highest denomination bill was 50,000 yuan. By mid-1948, the highest denomination was 180,000,000 yuan.
In October 1948, the Nationalist government replaced its fabi currency with the gold yuan. The gold yuan deteriorated even faster than the fabi had.
The Communists gained significant legitimacy by defeating hyperinflation in the late 1940s and early 1950s. Their development of state trading agencies reintegrated markets and trading networks, ultimately stabilizing prices.
France.
During the French Revolution and first Republic, the National Assembly issued bonds, some backed by seized church property, called assignats. Napoleon replaced them with the franc in 1803, at which time the assignats were basically worthless. Stephen D. Dillaye pointed out that one of the reasons for the failure was massive counterfeiting of the paper currency, largely through London. According to Dillaye: "Seventeen manufacturing establishments were in full operation in London, with a force of four hundred men devoted to the production of false and forged Assignats."
Germany (Weimar Republic).
By November 1922, the value in gold of money in circulation had fallen from £300 million before World War I to £20 million. The Reichsbank responded with the unlimited printing of notes, thereby accelerating the devaluation of the mark. In his report to London, Lord D'Abernon wrote: "In the whole course of history, no dog has ever run after its own tail with the speed of the Reichsbank." Germany went through its worst inflation in 1923. In 1922, the highest denomination was 50,000ℳ. By 1923, the highest denomination was 100,000,000,000,000ℳ (100 trillion marks). In December 1923 the exchange rate was 4,200,000,000,000ℳ (4.2 trillion marks) to 1 US dollar. In 1923, the rate of inflation hit percent per month (prices double every two days). Beginning on 20 November 1923, 1,000,000,000,000ℳ (10¹²ℳ, 1 trillion marks) were exchanged for 1 Rentenmark, so that RM 4.2 was worth 1 US dollar, exactly the same rate the mark had in 1914.
Greece (German–Italian occupation).
With the German invasion in April 1941, there was an abrupt increase in prices. This was due to psychological factors related to the fear of shortages and to the hoarding of goods. During the German and Italian Axis occupation of Greece (1941–1944), the agricultural, mineral, industrial etc. production of Greece was used to sustain the occupation forces, but also to secure provisions for the Afrika Korps. One part of these "sales" of provisions was settled with bilateral clearing through the German DEGRIGES and the Italian Sagic companies at very low prices. As the value of Greek exports in drachmas fell, the demand for drachmas followed suit and so did its foreign-exchange rate. While shortages started due to naval blockades and hoarding, the prices of commodities soared. The other part of the "purchases" was settled with drachmas secured from the Bank of Greece and printed for this purpose by private printing presses. As prices soared, the Germans and Italians started requesting more and more drachmas from the Bank of Greece to offset price increases; each time prices increased, the note circulation followed suit soon afterwards. For the year starting November 1943, the inflation rate was %, the circulation was drachmae and one gold sovereign cost 43,167 billion drachmas. The hyperinflation started subsiding immediately after the departure of the German occupation forces, but inflation rates took several years to fall below 50%.
Hungary.
The Treaty of Trianon and political instability between 1919 and 1924 led to a major inflation of Hungary's currency. In 1921, in an attempt to stop this inflation, the national assembly of Hungary passed the Hegedüs reforms, including a 20% levy on bank deposits, but this precipitated a mistrust of banks by the public, especially the peasants, and resulted in a reduction in savings, and thus an increase in the amount of currency in circulation. Due to the reduced tax base, the government resorted to printing money, and in 1923 inflation in Hungary reached 98% per month.
Between the end of 1945 and July 1946, Hungary went through the highest inflation ever recorded. In 1944, the highest banknote value was 1,000 P. By the end of 1945, it was 10,000,000 P, and the highest value in mid-1946 was 100,000,000,000,000,000,000 P (10²⁰ pengő). A special currency, the adópengő (or "tax pengő") was created for tax and postal payments. The inflation was such that the value of the adópengő was adjusted each day by radio announcement. On 1 January 1946, one adópengő equaled one pengő, but by late July, one adópengő equaled 2,000,000,000,000,000,000,000 P or 2×10²¹ P (2 sextillion pengő).
When the pengő was replaced in August 1946 by the forint, the total value of all Hungarian banknotes in circulation amounted to 1⁄1,000 of one US cent. Inflation had peaked at % per month (i.e. prices doubled every 15.6 hours). On 18 August 1946, 400,000,000,000,000,000,000,000,000,000 P (4×10²⁹ pengő, four hundred quadrilliard on the long scale used in Hungary, or four hundred octillion on short scale) became 1 Ft.
Malaya and Singapore (Japanese occupation).
Malaya and Singapore were under Japanese occupation from 1942 until 1945. The Japanese issued "banana notes" as the official currency to replace the Straits currency issued by the British. During that time, the cost of basic necessities increased drastically. As the occupation proceeded, the Japanese authorities printed more money to fund their wartime activities, which resulted in hyperinflation and a severe depreciation in value of the banana note.
From February to December 1942, $100 of Straits currency was worth $100 in Japanese scrip, after which the value of Japanese scrip began to erode, reaching $385 in December 1943 and $1,850 one year later. By 1 August 1945, this had inflated to $10,500, and 11 days later it had reached $95,000. After 13 August 1945, Japanese scrip had become valueless.
North Korea.
North Korea most likely experienced hyperinflation from December 2009 to mid-January 2011. Based on the price of rice, North Korea's hyperinflation peaked in mid-January 2010, but according to black market exchange-rate data, and calculations based on purchasing power parity, North Korea experienced its peak month of inflation in early March 2010. These data points are unofficial, however, and therefore must be treated with a degree of caution.
Peru.
In modern history, Peru underwent a period of hyperinflation from the 1980s to the early 1990s, starting with President Fernando Belaúnde's second administration, heightened during Alan García's first administration, and lasting to the beginning of Alberto Fujimori's term. 1 US dollar was worth over S/3,210,000,000. García's term introduced the inti, which worsened inflation into hyperinflation. Peru's currency and economy were stabilized under Fujimori's Nuevo Sol program; the nuevo sol has remained Peru's currency since 1991.
Poland.
Poland has gone through two episodes of hyperinflation since the country regained independence following the end of World War I, the first in 1923, the second in 1989–1990. Both events resulted in the introduction of new currencies. In 1924, the złoty replaced the original currency of post-war Poland, the mark. This currency was subsequently replaced by another of the same name in 1950. As a result of the second hyperinflation crisis, the current "new złoty" was introduced in 1995 (ISO code: PLN).
The newly independent Poland had been struggling with a large budget deficit since its inception in 1918, but it was in 1923 when inflation reached its peak. The exchange rate of the Polish mark (Mp) to the US dollar dropped from Mp 9.— per dollar in 1918 to Mp 6,375,000.— per dollar at the end of 1923. A new personal 'inflation tax' was introduced. The resolution of the crisis is attributed to Władysław Grabski, who became prime minister of Poland in December 1923. Having nominated an all-new government and having been granted extraordinary lawmaking powers by the Sejm for a period of six months, he introduced a new currency, the "złoty" ("golden" in Polish), established a new national bank and scrapped the inflation tax; these reforms took place throughout 1924.
The economic crisis in Poland in the 1980s was accompanied by rising inflation when new money was printed to cover a budget deficit. Although inflation was not as acute as in the 1920s, it is estimated that its annual rate reached around 600% in a period of over a year spanning parts of 1989 and 1990. The economy was stabilised by the adoption of the Balcerowicz Plan in 1989, named after the main author of the reforms, minister of finance Leszek Balcerowicz. The plan was largely inspired by Grabski's earlier reforms.
Philippines.
The Japanese government occupying the Philippines during World War II issued fiat currencies for general circulation. The Japanese-sponsored Second Philippine Republic government led by Jose P. Laurel at the same time outlawed possession of other currencies, most especially "guerrilla money". The fiat money's lack of value earned it the derisive nickname "Mickey Mouse money". Survivors of the war often tell tales of bringing suitcases or "bayong" (native bags made of woven coconut or buri leaf strips) overflowing with Japanese-issued notes. Early on, 75 JIM pesos could buy one duck egg. In 1944, a box of matches cost more than 100 JIM pesos.
In 1942, the highest denomination available was ₱10. Before the end of the war, because of inflation, the Japanese government was forced to issue ₱100, ₱500, and ₱1,000 notes.
Soviet Union.
A seven-year period of uncontrollable spiralling inflation occurred in the early Soviet Union, running from the earliest days of the Bolshevik Revolution in November 1917 to the reestablishment of the gold standard with the introduction of the chervonets as part of the New Economic Policy. The inflationary crisis effectively ended in March 1924 with the introduction of the so-called "gold ruble" as the country's standard currency.
The early Soviet hyperinflationary period was marked by three successive redenominations of its currency, in which "new rubles" replaced old at the rates of 10,000:1 (1 January 1922), 100:1 (1 January 1923), and 50,000:1 (7 March 1924), respectively.
Between 1921 and 1922, inflation in the Soviet Union reached 213%.
Turkey.
Since the end of 2017 Turkey has had high inflation rates. It has been speculated that the elections were brought forward in order to forestall the impending crisis. In October 2017, inflation was at 11.9%, the highest rate since July 2008. The lira fell from TL 1.503 = US$1 in 2010 to TL 23.1446 = US$1 in June 2023.
In February 2022 inflation rose to 54.4%. In March 2022, inflation was above 60%.
Venezuela.
Venezuela's hyperinflation began in November 2016. Inflation of Venezuela's bolivar fuerte (VEF) in 2014 reached 69% and was the highest in the world. In 2015, inflation was 181%, the highest in the world and the highest in the country's history at that time, 800% in 2016, over 4,000% in 2017, and 1,698,488% in 2018, with Venezuela spiraling into hyperinflation. While the Venezuelan government "has essentially stopped" producing official inflation estimates as of early 2018, one estimate of the rate at that time was 5,220%, according to inflation economist Steve Hanke of Johns Hopkins University.
Inflation has affected Venezuelans so much that in 2017, some people became video game gold farmers and could be seen playing games such as "RuneScape" to sell in-game currency or characters for real currency. In many cases, these gamers made more money than salaried workers in Venezuela even though they were earning just a few dollars per day. During the Christmas season of 2017, some shops would no longer use price tags since prices would inflate so quickly, so customers were required to ask staff at stores, known as ("talkers"), how much each item was. Some then further cut costs by replacing the "talkers" with computer screens.
The International Monetary Fund estimated in 2018 that Venezuela's inflation rate would reach 1,000,000% by the end of the year. This forecast was criticized by Steve H. Hanke, professor of applied economics at The Johns Hopkins University and senior fellow at the Cato Institute. According to Hanke, the IMF had released a "bogus forecast" because "no one has ever been able to accurately forecast the course or the duration of an episode of hyperinflation. But that has not stopped the IMF from offering inflation forecasts for Venezuela that have proven to be wildly inaccurate".
In July 2018, hyperinflation in Venezuela was sitting at 33,151%, "the 23rd most severe episode of hyperinflation in history".
In April 2019, the International Monetary Fund estimated that inflation would reach 10,000,000% by the end of 2019.
In May 2019, the Central Bank of Venezuela released economic data for the first time since 2015. According to this release, the inflation of Venezuela was 274% in 2016, 863% in 2017 and 130,060% in 2018. The annualised inflation rate as of April 2019 was estimated to be 282,972.8%, and cumulative inflation from 2016 to April 2019 was estimated at 53,798,500%.
The new reports imply a contraction of more than half of the economy in five years, according to the "Financial Times" "one of the biggest contractions in Latin American history". According to undisclosed sources from Reuters, the release of these numbers was due to pressure from China, a Maduro ally. One of these sources claims that the disclosure of economic numbers may bring Venezuela into compliance with the IMF, making it harder to support Juan Guaidó during the presidential crisis. At the time, the IMF was not able to support the validity of the data as they had not been able to contact the authorities.
Vietnam.
Vietnam went through a period of chaos and high inflation in the late 1980s, with inflation peaking at 774% in 1988, after the country's "price-wage-currency" reform package, led by then-Deputy Prime Minister Trần Phương, had failed. High inflation also occurred in the early stages of the socialist-oriented market economic reforms commonly referred to as the Đổi Mới.
Yugoslavia.
Hyperinflation in the Socialist Federal Republic of Yugoslavia happened before and during the period of breakup of Yugoslavia, from 1989 to 1991. In April 1992, one of its successor states, the Federal Republic of Yugoslavia, entered a period of hyperinflation that lasted until 1994. One of several regional conflicts accompanying the dissolution of Yugoslavia was the Bosnian War (1992–1995). The Belgrade government of Slobodan Milošević backed ethnic Serbian forces in the conflict, resulting in a United Nations boycott of Yugoslavia. The UN boycott collapsed an economy already weakened by regional war, with the projected monthly inflation rate accelerating to one million percent by December 1993 (prices double every 2.3 days).
The highest denomination in 1988 was 50,000 dinars. By 1989, it was 2,000,000 dinars. In the 1990 currency reform, 1 new dinar was exchanged for 10,000 old dinars. After socialist Yugoslavia broke up, the 1992 currency reform in FR Yugoslavia led to 1 new dinar being exchanged for 10 old dinars. The highest denomination in 1992 was 50,000 dinars. By 1993, it was 10,000,000,000 dinars. In the 1993 currency reform, 1 new dinar was exchanged for 1,000,000 old dinars. Before the year was over, however, the highest denomination was 500,000,000,000 dinars. In the 1994 currency reform, 1 new dinar was exchanged for 1,000,000,000 old dinars. In another currency reform a month later, 1 novi dinar was exchanged for 13 million dinars (1 novi dinar = 1 Deutschmark at the time of exchange). The overall impact of hyperinflation was that 1 novi dinar was equal to – pre-1990 dinars. Yugoslavia's rate of inflation hit % cumulative inflation between 1 October 1993 and 24 January 1994.
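Multiplying the successive redenomination factors listed above gives a rough sense of the cumulative scale; the short sketch below is only a back-of-the-envelope check and ignores the devaluation that occurred between reforms:

```python
# Redenomination factors taken from the paragraph above.
factors = [10_000,          # 1990 reform
           10,              # 1992 reform
           1_000_000,       # 1993 reform
           1_000_000_000,   # 1994 reform
           13_000_000]      # novi dinar, one month later

total = 1
for f in factors:
    total *= f
print(f"{total:.1e}")        # 1.3e+27 pre-1990 dinars per novi dinar, from the reforms alone
```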
Zimbabwe.
Hyperinflation in Zimbabwe was one of the few instances that resulted in the abandonment of the local currency. At independence in 1980, the Zimbabwe dollar (ZWD) was worth about US$1.49 (or 67 Zimbabwean cents per U.S. dollar). Afterwards, however, rampant inflation and the collapse of the economy severely devalued the currency. Inflation was relatively steady until the early 1990s when economic disruption caused by failed land reform agreements and rampant government corruption resulted in reductions in food production and the decline of foreign investment. Several multinational companies began hoarding retail goods in warehouses in Zimbabwe and just south of the border, preventing commodities from becoming available on the market. The result was that to pay its expenditures Mugabe's government and Gideon Gono's Reserve Bank printed more and more notes with higher face values.
Hyperinflation began early in the 21st century, reaching 624% in 2004. It fell back to low triple digits before surging to a new high of 1,730% in 2006. The Reserve Bank of Zimbabwe revalued on 1 August 2006 at a ratio of 1,000 ZWD to each second dollar (ZWN), but year-to-year inflation rose by June 2007 to 11,000% (versus an earlier estimate of 9,000%). Larger denominations were progressively issued throughout 2008.
Inflation by 16 July officially surged to 2,200,000% with some analysts estimating figures surpassing 9,000,000%. As of 22 July 2008 the value of the Zimbabwe dollar fell to approximately Z$688 billion per US$1, or Z$688 trillion in pre-August 2006 Zimbabwean dollars.
On 1 August 2008, the Zimbabwe dollar was redenominated at the ratio of 10¹⁰ ZWN to each third dollar (ZWR). On 19 August 2008, official figures announced for June estimated the inflation at over 11,250,000%. Zimbabwe's annual inflation was 231,000,000% in July (prices doubling every 17.3 days). By October 2008 Zimbabwe was mired in hyperinflation with wages falling far behind inflation. In this dysfunctional economy hospitals and schools had chronic staffing problems, because many nurses and teachers could not afford bus fare to work. Most of the capital, Harare, was without water because the authorities had stopped paying the bills to buy and transport the treatment chemicals. Desperate for foreign currency to keep the government functioning, Zimbabwe's central bank governor, Gideon Gono, sent runners into the streets with suitcases of Zimbabwean dollars to buy up American dollars and South African rand.
For periods after July 2008, no official inflation statistics were released. Prof. Steve H. Hanke overcame the problem by estimating inflation rates after July 2008 and publishing the Hanke Hyperinflation Index for Zimbabwe. Prof. Hanke's HHIZ measure indicated that inflation peaked at an annual rate of 89.7 sextillion percent (89,700,000,000,000,000,000,000%, or about 8.97×10²²%) in mid-November 2008. The peak monthly rate was 79.6 billion percent, which is equivalent to a 98% daily rate. At that rate, prices were doubling every 24.7 hours. Note that many of these figures should be considered mostly theoretical since hyperinflation did not proceed at this rate over a whole year.
At its November 2008 peak, Zimbabwe's rate of inflation approached, but failed to surpass, Hungary's July 1946 world record. On 2 February 2009, the dollar was redenominated for the third time at the ratio of 10¹² ZWR to 1 ZWL, only three weeks after the Z$100 trillion banknote was issued on 16 January, but hyperinflation waned by then as official inflation rates in USD were announced and foreign transactions were legalised, and on 12 April the Zimbabwe dollar was abandoned in favour of using only foreign currencies. The overall impact of hyperinflation was US$1 = Z$10²⁵.
Ironically, following the abandonment of the ZWR and subsequent use of reserve currencies, banknotes from the hyperinflation period of the old Zimbabwe dollar began attracting international attention as collectors' items, having accrued numismatic value, selling for prices many orders of magnitude higher than their old purchasing power.
Units of inflation.
Inflation rate is usually measured in percent per year. It can also be measured in percent per month or in price doubling time.
formula_0
formula_1
formula_2
formula_3
Often, at redenominations, three zeros are cut from the face values of denominations. It follows from these formulas that if the (annual) inflation is for example 100%, it takes about 3.32 years for prices to increase by an order of magnitude (e.g., to produce one more zero on the price tags), or 9.97 years to produce three zeros. Thus one can expect a redenomination to take place about ten years after the currency was introduced.
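As a quick check, the following short Python sketch evaluates these formulas for the 100% example above (the function names are illustrative, not from any particular library):

```python
import math

def monthly_inflation(annual_pct):
    return 100 * ((1 + annual_pct / 100) ** (1 / 12) - 1)

def price_doubling_time_years(annual_pct):
    return 1 / math.log2(1 + annual_pct / 100)

def years_per_added_zero(annual_pct):
    return 1 / math.log10(1 + annual_pct / 100)

print(round(monthly_inflation(100), 2))            # 5.95  (% per month)
print(round(price_doubling_time_years(100), 2))    # 1.0   (prices double every year)
print(round(years_per_added_zero(100), 2))         # 3.32  (one more zero on the price tags)
print(round(3 * years_per_added_zero(100), 2))     # 9.97  (three more zeros, roughly a decade)
```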
Notes.
References.
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\hbox{New price } y \\hbox{ years later} = \\hbox{old price} \\times \\left(1+\\frac{\\hbox{inflation}}{100}\\right)^{y}"
},
{
"math_id": 1,
"text": "\\hbox{Monthly inflation} = 100 \\times \\left(\\left(1+\\frac{\\hbox{inflation}}{100}\\right)^{\\frac{1}{12}} -1\\right)"
},
{
"math_id": 2,
"text": "\\hbox{Price doubling time} = \\frac{1}{\\log_2 \\left(1+ \\frac{\\hbox{inflation}}{100}\\right)}"
},
{
"math_id": 3,
"text": "\\hbox{Years per added zero of the price} = \\frac{1}{\\log_{10} \\left(1+ \\frac{\\hbox{inflation}}{100}\\right)}"
}
]
| https://en.wikipedia.org/wiki?curid=13681 |
13682464 | Segment tree | Computer science data structure
In computer science, the segment tree is a data structure used for storing information about intervals or segments. It allows querying which of the stored segments contain a given point. A similar data structure is the interval tree.
A segment tree for a set I of "n" intervals uses "O"("n" log "n") storage and can be built in "O"("n" log "n") time. Segment trees support searching for all the intervals that contain a query point in time "O"(log "n" + "k"), "k" being the number of retrieved intervals or segments.
Applications of the segment tree are in the areas of computational geometry, geographic information systems and machine learning.
The segment tree can be generalized to higher dimension spaces.
Definition.
Description.
Let I be a set of intervals, or segments. Let "p"1, "p"2, ..., "pm" be the list of distinct interval endpoints, sorted from left to right. Consider the partitioning of the real line induced by those points. The regions of this partitioning are called "elementary intervals". Thus, the elementary intervals are, from left to right:
formula_0
That is, the list of elementary intervals consists of open intervals between two consecutive endpoints "pi" and "p""i"+1, alternated with closed intervals consisting of a single endpoint. Single points are treated themselves as intervals because the answer to a query is not necessarily the same at the interior of an elementary interval and its endpoints.
Given a set I of intervals, or segments, a segment tree "T" for I is structured as follows: "T" is a balanced binary tree whose leaves correspond to the elementary intervals, ordered from left to right. The internal nodes of "T" correspond to intervals that are the union of elementary intervals: the interval Int("v") corresponding to a node "v" is the union of the elementary intervals of the leaves in the subtree rooted at "v". Each node or leaf "v" in "T" stores the interval Int("v") and a set of intervals, called the "canonical subset" of "v", containing the intervals ["x", "x′"] from I such that ["x", "x′"] contains Int("v") but does not contain Int(parent("v")).
Construction.
A segment tree from the set of segments I can be built as follows. First, the endpoints of the intervals in I are sorted. The elementary intervals are obtained from that. Then, a balanced binary tree is built on the elementary intervals, and for each node "v" the interval Int("v") it represents is determined. It remains to compute the canonical subsets for the nodes. To achieve this, the intervals in I are inserted one by one into the segment tree. An interval "X" = ["x", "x′"] can be inserted in a subtree rooted at "T", using the following procedure: if Int("T") is contained in "X", then store "X" at "T" and finish; otherwise, if "X" intersects the interval of the left child of "T", then recursively insert "X" into that child, and if "X" intersects the interval of the right child of "T", then recursively insert "X" into that child.
The complete construction operation takes "O"("n" log "n") time, "n" being the number of segments in I.
<templatestyles src="Math_proof/styles.css" />Proof
Sorting the endpoints takes "O"("n" log "n") time. Building a balanced binary tree from the sorted endpoints takes time linear in "n".
The insertion of an interval "X" = ["x", "x′"] into the tree, costs O(log "n").
Proof:
Visiting every node takes constant time (assuming that canonical subsets are stored in a simple data structure like a linked list). When we visit node "v", we either store "X" at "v", or Int("v") contains an endpoint of "X". As proved above, an interval is stored at most twice at each level of the tree. There is also at most one node at every level whose corresponding interval contains "x", and one node whose interval contains "x′". So, at most four nodes per level are visited. Since there are "O"(log "n") levels, the total cost of the insertion is "O"(log "n").
Query.
A query for a segment tree receives a point "qx" (which should correspond to one of the leaves of the tree), and retrieves a list of all the stored segments which contain the point "qx".
Formally stated: given a node (subtree) "v" and a query point "qx", the query can be done using the following algorithm: report all the intervals in the canonical subset of "v"; then, if "v" is not a leaf, perform the query recursively in the child of "v" whose interval contains "qx".
In a segment tree that contains "n" intervals, those containing a given query point can be reported in "O"(log "n" + "k") time, where "k" is the number of reported intervals.
Proof:
The query algorithm visits one node per level of the tree, so "O"(log "n") nodes in total. On the other hand, at a node "v", the segments in I are reported in "O"(1 + "kv") time, where "kv" is the number of intervals at node "v", reported. The sum of all the "kv" for all nodes "v" visited, is "k", the number of reported segments.
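To make the construction and query concrete, here is a minimal Python sketch (illustrative only, not from any particular library). It encodes the elementary intervals as integer coordinates, with odd coordinates for the endpoints and even coordinates for the open gaps between them, and assumes closed input intervals:

```python
import bisect

class SegmentTree:
    """Stores a set of closed intervals; query(q) reports all stored intervals containing q."""

    def __init__(self, intervals):
        self.intervals = intervals
        self.pts = sorted({e for iv in intervals for e in iv})
        size = 2 * len(self.pts)                    # integer coordinates 0 .. 2*len(pts)
        self.root = self._build(0, size)
        for idx, (x, xp) in enumerate(intervals):
            lo = 2 * bisect.bisect_left(self.pts, x) + 1
            hi = 2 * bisect.bisect_left(self.pts, xp) + 1
            self._insert(self.root, lo, hi, idx)

    def _build(self, lo, hi):                       # balanced tree over coordinates [lo, hi]
        node = {'lo': lo, 'hi': hi, 'canon': [], 'left': None, 'right': None}
        if lo < hi:
            mid = (lo + hi) // 2
            node['left'] = self._build(lo, mid)
            node['right'] = self._build(mid + 1, hi)
        return node

    def _insert(self, node, lo, hi, idx):
        if lo <= node['lo'] and node['hi'] <= hi:   # Int(v) is contained in X:
            node['canon'].append(idx)               # store X in the canonical subset of v
            return
        if node['left'] and lo <= node['left']['hi']:    # X intersects the left child
            self._insert(node['left'], lo, hi, idx)
        if node['right'] and hi >= node['right']['lo']:  # X intersects the right child
            self._insert(node['right'], lo, hi, idx)

    def query(self, q):
        i = bisect.bisect_left(self.pts, q)
        coord = 2 * i + 1 if i < len(self.pts) and self.pts[i] == q else 2 * i
        reported, node = [], self.root
        while node is not None:                     # walk a single root-to-leaf path
            reported.extend(self.intervals[k] for k in node['canon'])
            if node['left'] and coord <= node['left']['hi']:
                node = node['left']
            else:
                node = node['right']
        return reported

t = SegmentTree([(1, 4), (2, 6), (5, 7)])
print(t.query(3))   # [(2, 6), (1, 4)]
print(t.query(5))   # [(2, 6), (5, 7)]
print(t.query(8))   # []
```

Each stored interval ends up in the canonical subsets of "O"(log "n") nodes, and a query simply walks one root-to-leaf path, matching the bounds discussed above.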
Storage requirements.
A segment tree "T" on a set I of "n" intervals uses "O"("n" log "n") storage.
<templatestyles src="Math_theorem/styles.css" />
"Lemma" — Any interval ["x", "x′"] of I is stored in the canonical set for at most two nodes at the same depth.
Proof:
Let "v"1, "v"2, "v"3 be the three nodes at the same depth, numbered from left to right; and let p("v") be the parent node of any given node "v". Suppose ["x", "x′"] is stored at "v"1 and "v"3. This means that ["x", "x′"] spans the whole interval from the left endpoint of Int("v"1) to the right endpoint of Int("v"3). Note that all segments at a particular level are non-overlapping and ordered from left to right: this is true by construction for the level containing the leaves, and the property is not lost when moving from any level to the one above it by combining pairs of adjacent segments. Now either parent("v"2) = parent("v"1), or the former is to the right of the latter (edges in the tree do not cross). In the first case, Int(parent("v"2))'s leftmost point is the same as Int("v"1)'s leftmost point; in the second case, Int(parent("v"2))'s leftmost point is to the right of Int(parent("v"1))'s rightmost point, and therefore also to the right of Int("v"1)'s rightmost point. In both cases, Int(parent("v"2)) begins at or to the right of Int("v"1)'s leftmost point. Similar reasoning shows that Int(parent("v"2)) ends at or to the left of Int("v"3)'s rightmost point. Int(parent("v"2)) must therefore be contained in ["x", "x′"]; hence, ["x", "x′"] will not be stored at "v"2.
The set I has at most 4"n" + 1 elementary intervals. Because "T" is a binary balanced tree with at most 4"n" + 1 leaves, its height is O(log "n"). Since any interval is stored at most twice at a given depth of the tree, the total amount of storage is "O"("n" log "n").
Generalization for higher dimensions.
The segment tree can be generalized to higher dimension spaces, in the form of multi-level segment trees. In higher dimensional versions, the segment tree stores a collection of axis-parallel (hyper-)rectangles, and can retrieve the rectangles that contain a given query point. The structure uses "O"("n" log"d" "n") storage, and answers queries in "O"(log"d" "n") time.
The use of fractional cascading lowers the query time bound by a logarithmic factor. The use of the interval tree on the deepest level of associated structures lowers the storage bound by a logarithmic factor.
Notes.
A query that asks for all the intervals containing a given point is often referred as a "stabbing query".
The segment tree is less efficient than the interval tree for range queries in one dimension, due to its higher storage requirement: "O"("n" log "n") against the O("n") of the interval tree. The importance of the segment tree is that the segments within each node’s canonical subset can be stored in any arbitrary manner.
For "n" intervals whose endpoints are in a small integer range (e.g., in the range [1...,"O"("n")]), optimal data structures exist with a linear preprocessing time and query time "O"(1 + "k") for reporting all "k" intervals containing a given query point.
Another advantage of the segment tree is that it can easily be adapted to counting queries; that is, to report the number of segments containing a given point, instead of reporting the segments themselves. Instead of storing the intervals in the canonical subsets, it can simply store the number of them. Such a segment tree uses linear storage, and requires an "O"(log "n") query time, so it is optimal.
Higher dimensional versions of the interval tree and the priority search tree do not exist; that is, there is no clear extension of these structures that solves the analogous problem in higher dimensions. But the structures can be used as associated structure of segment trees.
History.
The segment tree was invented by Jon Bentley in 1977, in "Solutions to Klee's rectangle problems".
References.
| [
{
"math_id": 0,
"text": "(-\\infty, p_1), [p_1,p_1], (p_1, p_2), [p_2, p_2], \\dots, (p_{m-1}, p_m), [p_m, p_m], (p_m, +\\infty)"
}
]
| https://en.wikipedia.org/wiki?curid=13682464 |
13683508 | Grothendieck construction | The Grothendieck construction (named after Alexander Grothendieck) is a construction used in the mathematical field of category theory. It is a fundamental construction in the theory of descent, in the theory of stacks, and in fibred category theory. In categorical logic, the construction is used to model the relationship between a type theory and a logic over that type theory, and allows for the translation of concepts from indexed category theory into fibred category theory, such as Lawvere's concept of hyperdoctrine.
The Grothendieck construction was first studied for the special case of presheaves of sets by Mac Lane, where it was called the category of elements.
Motivation.
If formula_0 is a family of sets indexed by another set, one can form the disjoint union or coproduct
formula_1,
which is the set of all ordered pairs formula_2 such that formula_3. The disjoint union set is naturally equipped with a "projection" map
formula_4
defined by
formula_5.
From the projection formula_6 it is possible to reconstruct the original family of sets formula_0 up to a canonical bijection, as for each formula_7 via the bijection formula_8. In this context, for formula_9, the preimage formula_10 of the singleton set formula_11 is called the "fiber" of formula_6 over formula_12, and any set formula_13 equipped with a choice of function formula_14 is said to be "fibered" over formula_15. In this way, the disjoint union construction provides a way of viewing any family of sets indexed by formula_15 as a set "fibered" over formula_15, and conversely, for any set formula_14 fibered over formula_15, we can view it as the disjoint union of the fibers of formula_16. Jacobs has referred to these two perspectives as "display indexing" and "pointwise indexing".
The Grothendieck construction generalizes this to categories. For each category formula_17 and each family of categories formula_18 indexed by the objects of formula_17 in a functorial way, the Grothendieck construction returns a new category formula_19 fibered over formula_17 by a functor formula_6 whose fibers are the categories formula_18.
Definition.
Let formula_20 be a functor from any small category to the category of small categories. The Grothendieck construction for formula_21 is the category formula_22 (also written formula_23, formula_24 or formula_25), whose objects are pairs formula_26, where formula_27 and formula_28, and whose morphism sets formula_29 consist of pairs formula_30, where formula_31 is a morphism in formula_17 and formula_32 is a morphism in formula_33.
Composition of morphisms is defined by formula_34.
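To make the composition law concrete, the following small Python check spells it out for the group case treated in the example below, taking (purely for illustration) a cyclic group of order 2 acting on a cyclic group of order 3 by negation, written additively:

```python
from itertools import product

def phi(f, x):
    """Action of Z/2 on Z/3: the nontrivial element acts by negation."""
    return x if f == 0 else (-x) % 3

def compose(a, b):
    """Composition from the Grothendieck construction: (f, g) o (f', g') = (f f', g . F(f)(g'))."""
    f1, g1 = a
    f2, g2 = b
    return ((f1 + f2) % 2, (g1 + phi(f1, g2)) % 3)

elements = list(product(range(2), range(3)))      # the six morphisms (f, g)
identity = (0, 0)

# The construction really is a category with one object, i.e. a group:
assert all(compose(a, identity) == a == compose(identity, a) for a in elements)
assert all(compose(compose(a, b), c) == compose(a, compose(b, c))
           for a in elements for b in elements for c in elements)

# It is non-abelian, so it is the semidirect product (isomorphic to S_3), not the direct product:
print(compose((1, 0), (0, 1)), compose((0, 1), (1, 0)))   # (1, 2) (1, 1)
```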
Example.
If formula_35 is a group, then it can be viewed as a category, formula_36 with one object and all morphisms invertible. Let formula_37 be a functor whose value at the sole object of formula_38 is the category formula_39 a category representing the group formula_40 in the same way. The requirement that formula_21 be a functor is then equivalent to specifying a group homomorphism formula_41 where formula_42 denotes the group of automorphisms of formula_43 Finally, the Grothendieck construction, formula_44 results in a category with one object, which can again be viewed as a group, and in this case, the resulting group is (isomorphic to) the semidirect product formula_45 | [
{
"math_id": 0,
"text": "\\left\\{ A_i \\right\\}_{i\\in I}"
},
{
"math_id": 1,
"text": "\\coprod_{i\\in I} A_i"
},
{
"math_id": 2,
"text": "(i,a)"
},
{
"math_id": 3,
"text": "a\\in A_i"
},
{
"math_id": 4,
"text": "\\pi : \\coprod_{i\\in I} A_i\\to I"
},
{
"math_id": 5,
"text": "\\pi(i,a)=i"
},
{
"math_id": 6,
"text": "\\pi"
},
{
"math_id": 7,
"text": "i\\in I, A_i\\cong \\pi^{-1}(\\{i\\})"
},
{
"math_id": 8,
"text": "a\\mapsto (i,a)"
},
{
"math_id": 9,
"text": "i\\in I"
},
{
"math_id": 10,
"text": "\\pi^{-1}(\\{i\\})"
},
{
"math_id": 11,
"text": "\\{i\\}"
},
{
"math_id": 12,
"text": "i"
},
{
"math_id": 13,
"text": "B"
},
{
"math_id": 14,
"text": "f : B\\to I"
},
{
"math_id": 15,
"text": "I"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "\\mathcal{C}"
},
{
"math_id": 18,
"text": "\\{F(c)\\}_{c\\in\\mathcal{C}}"
},
{
"math_id": 19,
"text": "\\mathcal{E}"
},
{
"math_id": 20,
"text": "F\\colon \\mathcal{C} \\rightarrow \\mathbf{Cat}"
},
{
"math_id": 21,
"text": "F"
},
{
"math_id": 22,
"text": "\\Gamma(F)"
},
{
"math_id": 23,
"text": "\\textstyle\\int_{\\textstyle\\mathcal{C}} F"
},
{
"math_id": 24,
"text": "\\textstyle\\mathcal{C} \\int F"
},
{
"math_id": 25,
"text": "F \\rtimes \\mathcal{C}"
},
{
"math_id": 26,
"text": "(c,x)"
},
{
"math_id": 27,
"text": "c\\in \\operatorname{obj}(\\mathcal{C})"
},
{
"math_id": 28,
"text": "x\\in \\operatorname{obj}(F(c))"
},
{
"math_id": 29,
"text": "\\operatorname{hom}_{\\Gamma(F)}((c_1,x_1),(c_2,x_2))"
},
{
"math_id": 30,
"text": "(f, g)"
},
{
"math_id": 31,
"text": "f: c_1 \\to c_2"
},
{
"math_id": 32,
"text": "g: F(f)(x_1) \\to x_2"
},
{
"math_id": 33,
"text": "F(c_2)"
},
{
"math_id": 34,
"text": "(f,g) \\circ (f',g') = (f \\circ f', g \\circ F(f)(g'))"
},
{
"math_id": 35,
"text": "G"
},
{
"math_id": 36,
"text": "\\mathcal{C}_G,"
},
{
"math_id": 37,
"text": "F:\\mathcal{C}_G\\to\\mathbf{Cat}"
},
{
"math_id": 38,
"text": "\\mathcal{C}_G"
},
{
"math_id": 39,
"text": "\\mathcal{C}_H,"
},
{
"math_id": 40,
"text": "H"
},
{
"math_id": 41,
"text": "\\varphi:G\\to\\operatorname{Aut}(H),"
},
{
"math_id": 42,
"text": "\\operatorname{Aut}(H)"
},
{
"math_id": 43,
"text": "H."
},
{
"math_id": 44,
"text": "F \\rtimes \\mathcal{C}_G,"
},
{
"math_id": 45,
"text": "H \\rtimes_\\varphi G."
}
]
| https://en.wikipedia.org/wiki?curid=13683508 |
13685265 | Logic Theorist | 1956 computer program written by Allen Newell, Herbert A. Simon and Cliff Shaw
Logic Theorist is a computer program written in 1956 by Allen Newell, Herbert A. Simon, and Cliff Shaw. It was the first program deliberately engineered to perform automated reasoning, and has been described as "the first artificial intelligence program". Logic Theorist proved 38 of the first 52 theorems in chapter two of Whitehead and Bertrand Russell's "Principia Mathematica", and found new and shorter proofs for some of them.
History.
In 1955, when Newell and Simon began to work on the Logic Theorist, the field of artificial intelligence did not yet exist. Even the term itself ("artificial intelligence") would not be coined until the following summer.
Simon was a political scientist who had already produced classic work in the study of how bureaucracies function as well as developing his theory of bounded rationality (for which he would later win a Nobel Prize). The study of business organizations requires, like artificial intelligence, an insight into the nature of human problem solving and decision making. Simon remembers consulting at RAND Corporation in the early 1950s and seeing a printer typing out a map, using ordinary letters and punctuation as symbols. He realized that a machine that could manipulate symbols could just as well simulate decision making and possibly even the process of human thought.
The program that printed the map had been written by Newell, a RAND scientist studying logistics and organization theory. For Newell, the decisive moment was in 1954 when Oliver Selfridge came to RAND to describe his work on pattern matching. Watching the presentation, Newell suddenly understood how the interaction of simple, programmable units could accomplish complex behavior, including the intelligent behavior of human beings. "It all happened in one afternoon," he would later say. It was a rare moment of scientific epiphany.
"I had such a sense of clarity that this was a new path, and one I was going to go down. I haven't had that sensation very many times. I'm pretty skeptical, and so I don't normally go off on a toot, but I did on that one. Completely absorbed in it—without existing with the two or three levels consciousness so that you're working, and aware that you're working, and aware of the consequences and implications, the normal mode of thought. No. Completely absorbed for ten to twelve hours."
Newell and Simon began to talk about the possibility of teaching machines to think. Their first project was a program that could prove mathematical theorems like the ones used in Bertrand Russell and Alfred North Whitehead's "Principia Mathematica". They enlisted the help of computer programmer Cliff Shaw, also from RAND, to develop the program. (Newell says "Cliff was the genuine computer scientist of the three".)
The first version was hand-simulated: they wrote the program onto 3x5 cards and, as Simon recalled: "In January 1956, we assembled my wife and three children together with some graduate students. To each member of the group, we gave one of the cards, so that each one became, in effect, a component of the computer program ... Here was nature imitating art imitating nature."
They succeeded in showing that the program could successfully prove theorems as well as a talented mathematician. Eventually Shaw was able to run the program on the computer at RAND's Santa Monica facility.
In the summer of 1956, John McCarthy, Marvin Minsky, Claude Shannon and Nathan Rochester organized a conference on the subject of what they called "artificial intelligence" (a term coined by McCarthy for the occasion). Newell and Simon proudly presented the group with the Logic Theorist. It was met with a lukewarm reception. Pamela McCorduck writes "the evidence is that nobody save Newell and Simon themselves sensed the long-range significance of what they were doing." Simon confides that "we were probably fairly arrogant about it all" and adds:
They didn't want to hear from us, and we sure didn't want to hear from them: we had something to "show" them! ... In a way it was ironic because we already had done the first example of what they were after; and second, they didn't pay much attention to it.
Logic Theorist soon proved 38 of the first 52 theorems in chapter 2 of the "Principia Mathematica". The proof of theorem 2.85 was actually more elegant than the proof produced laboriously by hand by Russell and Whitehead. Simon was able to show the new proof to Russell himself who "responded with delight". They attempted to publish the new proof in "The Journal of Symbolic Logic", but it was rejected on the grounds that a new proof of an elementary mathematical theorem was not notable, apparently overlooking the fact that one of the authors was a computer program.
Newell and Simon formed a lasting partnership, founding one of the first AI laboratories at the Carnegie Institute of Technology and developing a series of influential artificial intelligence programs and ideas, including the General Problem Solver, Soar, and their unified theory of cognition.
Architecture.
This is a brief presentation of the program's architecture.
The logical theorist is a program that performs logical "processes" on logical "expressions".
Expressions.
For example, the logical expression formula_0 is represented as a tree with a root element representing formula_1. Among the attributes of the root element are pointers to the two elements representing the subexpressions formula_2 and formula_3.
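As a rough illustration of such a tree representation (the original program was written in the IPL list-processing language, so the Python below is only a modern sketch with made-up helper names):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    op: str                        # 'var', 'not', 'and', 'or', 'implies'
    name: Optional[str] = None     # variable name when op == 'var'
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def var(name):      return Node('var', name=name)
def neg(a):         return Node('not', left=a)
def conj(a, b):     return Node('and', left=a, right=b)
def implies(a, b):  return Node('implies', left=a, right=b)

def show(n):
    if n.op == 'var':  return n.name
    if n.op == 'not':  return '~' + show(n.left)
    sym = {'and': ' & ', 'or': ' | ', 'implies': ' -> '}[n.op]
    return '(' + show(n.left) + sym + show(n.right) + ')'

# The example expression from the text: the root is the implication, with pointers
# to the subexpressions ~P and (Q & ~P).
expr = implies(neg(var('P')), conj(var('Q'), neg(var('P'))))
print(show(expr))    # (~P -> (Q & ~P))
```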
Processes.
There are four kinds of processes, from the lowest to the highest level.
Logic Theorist's influence on AI.
Logic Theorist introduced several concepts that would be central to AI research: reasoning as search (the program explored a search tree whose root was the initial hypothesis and whose branches were deductions based on the rules of logic); heuristics (rules of thumb used to prune branches unlikely to lead to a proof, keeping the search tree to a manageable size); and list processing (the program was written in IPL, a list-processing language developed by Newell, Shaw and Simon that influenced later languages such as Lisp).
Philosophical implications.
Pamela McCorduck writes that the Logic Theorist was "proof positive that a machine could perform tasks heretofore considered intelligent, creative and uniquely human". And, as such, it represents a milestone in the development of artificial intelligence and our understanding of intelligence in general.
Simon told a graduate class in January 1956, "Over Christmas, Al Newell and I invented a thinking machine,"
and would write:
[We] invented a computer program capable of thinking non-numerically, and thereby solved the venerable mind-body problem, explaining how a system composed of matter can have the properties of mind.
This statement, that machines can have minds just as people do, would be later named "Strong AI" by philosopher John Searle. It remains a serious subject of debate up to the present day.
Pamela McCorduck also sees in the Logic Theorist the debut of a new theory of the mind, the information processing model (sometimes called computationalism or cognitivism). She writes that "this view would come to be central to their later work, and in their opinion, as central to understanding mind in the 20th century as Darwin's principle of natural selection had been to understanding biology in the nineteenth century." Newell and Simon would later formalize this proposal as the physical symbol systems hypothesis.
Notes.
Citations.
| [
{
"math_id": 0,
"text": "\\neg P \\to (Q \\wedge \\neg P)"
},
{
"math_id": 1,
"text": "\\to"
},
{
"math_id": 2,
"text": "\\neg P"
},
{
"math_id": 3,
"text": "Q \\wedge \\neg P"
},
{
"math_id": 4,
"text": "B"
},
{
"math_id": 5,
"text": "A \\to B'"
},
{
"math_id": 6,
"text": "B'"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "A\\to C"
},
{
"math_id": 9,
"text": "A\\to B"
},
{
"math_id": 10,
"text": "B \\to C"
}
]
| https://en.wikipedia.org/wiki?curid=13685265 |
13687788 | Toda bracket | In mathematics, the Toda bracket is an operation on homotopy classes of maps, in particular on homotopy groups of spheres, named after Hiroshi Toda, who defined brackets of this kind and used them to compute homotopy groups of spheres.
Definition.
See the references for more information.
Suppose that
formula_0
is a sequence of maps between spaces, such that the compositions formula_1 and formula_2 are both nullhomotopic. Given a space formula_3, let formula_4 denote the cone of formula_3. Then we get a (non-unique) map
formula_5
induced by a homotopy from formula_1 to a trivial map, which when post-composed with formula_6 gives a map
formula_7.
Similarly we get a non-unique map formula_8 induced by a homotopy from formula_2 to a trivial map, which when composed with formula_9, the cone of the map formula_10, gives another map,
formula_11.
By joining these two cones on formula_12 and the maps from them to formula_13, we get a map
formula_14
representing an element in the group formula_15 of homotopy classes of maps from the suspension formula_16 to formula_13, called the Toda bracket of formula_10, formula_17, and formula_6. The map formula_18 is not uniquely defined up to homotopy, because there was some choice in choosing the maps from the cones. Changing these maps changes the Toda bracket by adding elements of formula_19 and formula_20.
There are also higher Toda brackets of several elements, defined when suitable lower Toda brackets vanish. This parallels the theory of Massey products in cohomology.
The Toda bracket for stable homotopy groups of spheres.
The direct sum
formula_21
of the stable homotopy groups of spheres is a supercommutative graded ring, where multiplication (called composition product) is given by composition of representing maps, and any element of non-zero degree is nilpotent.
If "f" and "g" and "h" are elements of formula_22 with formula_23 and formula_24, there is a "Toda bracket" formula_18 of these elements. The Toda bracket is not quite an element of a stable homotopy group, because it is only defined up to addition of composition products of certain other elements. Hiroshi Toda used the composition product and Toda brackets to label many of the elements of homotopy groups.
It has been shown that every element of the stable homotopy groups of spheres can be expressed using composition products and higher Toda brackets in terms of certain well-known elements, called Hopf elements.
The Toda bracket for general triangulated categories.
In the case of a general triangulated category the Toda bracket can be defined as follows. Again, suppose that
formula_0
is a sequence of morphisms in a triangulated category such that formula_25 and formula_26. Let formula_27 denote the cone of "f", so we obtain an exact triangle
formula_28
The relation formula_25 implies that "g" factors (non-uniquely) through formula_27 as
formula_29
for some formula_30. Then, the relation formula_31 implies that formula_32 factors (non-uniquely) through "W[1]" as
formula_33
for some "b". This "b" is (a choice of) the Toda bracket formula_18 in the group formula_34.
Convergence theorem.
There is a convergence theorem originally due to Moss which states that special Massey products formula_35 of elements in the formula_36-page of the Adams spectral sequence contain a permanent cycle, meaning that they have an associated element in formula_37, assuming the elements formula_38 are permanent cycles. Moreover, these Massey products have a lift to a motivic Adams spectral sequence giving an element in the Toda bracket formula_39 in formula_40 for elements formula_41 lifting formula_38.
References.
| [
{
"math_id": 0,
"text": "W\\stackrel{f}{\\ \\to\\ } X\\stackrel{g}{\\ \\to\\ } Y\\stackrel{h}{\\ \\to\\ } Z"
},
{
"math_id": 1,
"text": "g\\circ f"
},
{
"math_id": 2,
"text": "h\\circ g"
},
{
"math_id": 3,
"text": "A"
},
{
"math_id": 4,
"text": "CA"
},
{
"math_id": 5,
"text": "F\\colon CW\\to Y"
},
{
"math_id": 6,
"text": "h"
},
{
"math_id": 7,
"text": "h\\circ F\\colon CW\\to Z"
},
{
"math_id": 8,
"text": "G\\colon CX\\to Z"
},
{
"math_id": 9,
"text": "C_f\\colon CW\\to CX"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "G\\circ C_f\\colon CW\\to Z"
},
{
"math_id": 12,
"text": "W"
},
{
"math_id": 13,
"text": "Z"
},
{
"math_id": 14,
"text": "\\langle f, g, h\\rangle\\colon SW\\to Z"
},
{
"math_id": 15,
"text": "[SW, Z]"
},
{
"math_id": 16,
"text": "SW"
},
{
"math_id": 17,
"text": "g"
},
{
"math_id": 18,
"text": "\\langle f, g, h\\rangle"
},
{
"math_id": 19,
"text": "h [SW,Y] "
},
{
"math_id": 20,
"text": "[SX,Z]f"
},
{
"math_id": 21,
"text": "\\pi_{\\ast}^S=\\bigoplus_{k\\ge 0}\\pi_k^S"
},
{
"math_id": 22,
"text": "\\pi_{\\ast}^{S}"
},
{
"math_id": 23,
"text": "f \\cdot g= 0"
},
{
"math_id": 24,
"text": "g \\cdot h= 0"
},
{
"math_id": 25,
"text": "g\\circ f = 0"
},
{
"math_id": 26,
"text": "h\\circ g = 0"
},
{
"math_id": 27,
"text": "C_f"
},
{
"math_id": 28,
"text": "W\\stackrel{f}{\\ \\to\\ } X\\stackrel{i}{\\ \\to\\ } C_f \\stackrel{q}{\\ \\to\\ } W[1]"
},
{
"math_id": 29,
"text": "X\\stackrel{i}{\\ \\to\\ } C_f \\stackrel{a}{\\ \\to\\ } Y "
},
{
"math_id": 30,
"text": "a"
},
{
"math_id": 31,
"text": "h\\circ a\\circ i = h\\circ g = 0"
},
{
"math_id": 32,
"text": "h\\circ a"
},
{
"math_id": 33,
"text": "C_f \\stackrel{q}{\\ \\to\\ } W[1] \\stackrel{b}{\\ \\to\\ } Z "
},
{
"math_id": 34,
"text": "\\operatorname{hom}(W[1], Z)"
},
{
"math_id": 35,
"text": "\\langle a,b,c \\rangle"
},
{
"math_id": 36,
"text": "E_r"
},
{
"math_id": 37,
"text": "\\pi^s_*(\\mathbb{S})"
},
{
"math_id": 38,
"text": "a,b,c"
},
{
"math_id": 39,
"text": "\\langle \\alpha,\\beta,\\gamma \\rangle"
},
{
"math_id": 40,
"text": "\\pi_{*,*}"
},
{
"math_id": 41,
"text": "\\alpha,\\beta,\\gamma"
}
]
| https://en.wikipedia.org/wiki?curid=13687788 |
13688204 | EHP spectral sequence | In mathematics, the EHP spectral sequence is a spectral sequence used for inductively calculating the homotopy groups of spheres localized at some prime "p". It is related to the EHP long exact sequence of George W. Whitehead; the name "EHP" comes from the fact that Whitehead named 3 of the maps of his sequence "E" (the first letter of the German word "Einhängung" meaning "suspension"), "H" (for Heinz Hopf, as this map is the second Hopf–James invariant), and "P" (related to Whitehead products).
For formula_0 the spectral sequence uses some exact sequences associated to the fibration
formula_1,
where formula_2 stands for a loop space and the (2) is localization of a topological space at the prime 2. This gives a spectral sequence with formula_3 term equal to
formula_4
and converging to formula_5 (stable homotopy groups of spheres localized at 2). The spectral sequence has the advantage that the input is previously calculated homotopy groups. It has been used to calculate the first 31 stable homotopy groups of spheres.
For arbitrary primes one uses some fibrations found by Toda:
formula_6
formula_7
where formula_8 is the formula_9-skeleton of the loop space formula_10. (For formula_0, the space formula_8 is the same as formula_11, so Toda's fibrations at formula_0 are the same as the James fibrations.) | [
{
"math_id": 0,
"text": "p = 2"
},
{
"math_id": 1,
"text": "S^n(2)\\rightarrow \\Omega S^{n+1}(2)\\rightarrow \\Omega S^{2n+1}(2)"
},
{
"math_id": 2,
"text": "\\Omega"
},
{
"math_id": 3,
"text": "E_1^{k,n}"
},
{
"math_id": 4,
"text": "\\pi_{k+n}(S^{2 n - 1}(2))"
},
{
"math_id": 5,
"text": "\\pi_*^S(2)"
},
{
"math_id": 6,
"text": "\\widehat S^{2n}(p)\\rightarrow \\Omega S^{2n+1}(p)\\rightarrow \\Omega S^{2pn+1}(p)"
},
{
"math_id": 7,
"text": " S^{2n-1}(p)\\rightarrow \\Omega \\widehat S^{2n}(p)\\rightarrow \\Omega S^{2pn-1}(p)"
},
{
"math_id": 8,
"text": "\\widehat S^{2n}"
},
{
"math_id": 9,
"text": "(2np-1)"
},
{
"math_id": 10,
"text": "\\Omega S^{2n+1}"
},
{
"math_id": 11,
"text": " S^{2n}"
}
]
| https://en.wikipedia.org/wiki?curid=13688204 |
1368932 | PEG ratio | Price/earnings to growth ratio, a stock price analysis tool
The PEG ratio (price/earnings to growth ratio) is a valuation metric for determining the relative trade-off between the price of a stock, the earnings generated per share (EPS), and the company's expected growth.
In general, the P/E ratio is higher for a company with a higher growth rate. Thus, using just the P/E ratio would make high-growth companies appear overvalued relative to others. It is assumed that by dividing the P/E ratio by the earnings growth rate, the resulting ratio is better for comparing companies with different growth rates.
The PEG ratio is considered to be a convenient approximation. It was originally developed by Mario Farina who wrote about it in his 1969 Book, "A Beginner's Guide To Successful Investing In The Stock Market". It was later popularized by Peter Lynch, who wrote in his 1989 book "One Up on Wall Street" that "The P/E ratio of any company that's fairly priced will equal its growth rate", i.e., a fairly valued company will have its PEG equal to 1. The formula can be supported theoretically by reference to the Sum of perpetuities method.
formula_0
Basic formula.
The growth rate is expressed as a percent value, and should use real growth only, to correct for inflation. For example, if a company is growing at 30% a year in real terms, and has a P/E of 30.00, it would have a PEG of 1.00. A ratio below 1.00 suggests an undervalued stock, and a value above 1.00 suggests an overvalued one. The P/E ratio used in the calculation may be projected or trailing, and the annual growth rate may be the expected growth rate for the next year or the next five years.
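As an illustration of the arithmetic above, the following Python sketch computes a PEG ratio from a share price, earnings per share, and an expected annual growth rate expressed in percent; the function name and inputs are illustrative, not part of any standard library.

def peg_ratio(price, eps, growth_rate_percent):
    """Price/earnings divided by expected annual EPS growth (in percent)."""
    pe = price / eps
    return pe / growth_rate_percent

# A company with a P/E of 30 growing at 30% a year has a PEG of 1.0.
print(peg_ratio(price=60.0, eps=2.0, growth_rate_percent=30.0))  # 1.0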
As an indicator.
PEG is a widely employed indicator of a stock's possible true value. As with P/E ratios, a lower PEG suggests that a stock is more undervalued. It is favored by many over the price/earnings ratio because it also accounts for growth.
See also PVGO.
The PEG ratio of 1 is sometimes said to represent a fair trade-off between the values of cost and the values of growth, indicating that a stock is reasonably valued given the expected growth. A crude analysis suggests that companies with PEG values between 0 and 1 may provide higher returns.
A PEG Ratio can also be a negative number if a stock's present income figure is negative (negative earnings), or if future earnings are expected to drop (negative growth). PEG ratios calculated from negative present earnings are viewed with skepticism as almost meaningless, other than as an indication of high investment risk.
Criticism.
The PEG ratio is commonly used and provided by numerous sources of financial and stock information. Despite its wide use, the PEG ratio is only a rough rule of thumb. Criticisms of the PEG ratio include that it is an oversimplified ratio that fails to usefully relate the price/earnings ratio to growth because it fails to factor in return on equity (ROE) or the required return factor (T).
When the PEG is quoted in public sources it makes a great deal of difference whether the earnings used in calculating the PEG is the past year's EPS, the estimated future year's EPS, or even selected analysts' speculative estimates of growth over the next five years. Use of the coming year's expected growth rate is considered preferable as the most reliable of the future-looking estimates. Yet which growth rate was selected for calculating a particular published PEG ratio may not be clear, or may require a close reading of the footnotes for the given figure.
The PEG ratio's validity is particularly questionable when used to compare companies expecting high growth with those expecting low-growth, or to compare companies with high P/E with those with a low P/E. It is more apt to be considered when comparing so-called growth companies (those growing earnings significantly faster than the market).
Growth rate numbers are expected to come from an impartial source. This may be from an analyst, whose job it is to be objective, or the investor's own analysis. Management is not impartial and it is assumed that their statements have a bit of puffery, going from a bit optimistic to completely implausible. This is not always true, since some managers tend to predict modest results only to have things come out better than claimed. A prudent investor should investigate for himself whether the estimates are reasonable, and what should be used to compare the stock price.
PEG calculations based on five-year growth estimates are especially subject to over-optimistic growth projections by analysts, which on average are not achieved, and to discounting the risk of outright loss of invested capital.
Advantages.
Investors may prefer the PEG ratio because it explicitly puts a value on the expected growth in earnings of a company. The PEG ratio can offer a suggestion of whether a company's high P/E ratio reflects an excessively high stock price or is a reflection of promising growth prospects for the company.
Disadvantages.
The PEG ratio is less appropriate for measuring companies without high growth. Large, well-established companies, for instance, may offer dependable dividend income, but little opportunity for growth.
A company's growth rate is an estimate. It is subject to the limitations of projecting future events. Future growth of a company can change due to any number of factors: market conditions, expansion setbacks, and hype of investors. Also, the convention that "PEG = 1" is appropriate is somewhat arbitrary and considered a rule-of-thumb metric.
The simplicity and convenience of calculating PEG leaves out several important variables. First, the absolute company growth rate used in the PEG does not account for the overall growth rate of the economy, and hence an investor must compare a stock's PEG to average PEGs across its industry and the entire economy to get any accurate sense of how competitive a stock is for investment. A low (attractive) PEG in times of high growth in the entire economy may not be particularly impressive when compared to other stocks, and vice versa for high PEGs in periods of slow growth or recession.
In addition, company growth rates that are much higher than the economy's growth rate are unstable and vulnerable to any problems the company may face that would prevent it from keeping its current rate. Therefore, a higher-PEG stock with a steady, sustainable growth rate (compared to the economy's growth) can often be a more attractive investment than a low-PEG stock that may happen to just be on a short-term growth "streak". A sustained higher-than-economy growth rate over the years usually indicates a highly profitable company, but can also indicate a scam, especially if the growth is a "flat" percentage no matter how the rest of the economy fluctuates (as was the case for several years for returns in Bernie Madoff's Ponzi scheme).
Finally, the volatility of highly speculative and risky stocks, which have low price/earnings ratios due to their very low price, is also not corrected for in PEG calculations. These stocks may have low PEGs due to a very low short-term (~1 year) P/E ratio (e.g. a 100% growth rate from $1 to $2 per share) that does not indicate any guarantee of maintaining future growth or even solvency.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{PEG Ratio} \\,=\\,\\frac{\\text{Price/Earnings}}{\\text{Annual EPS Growth}} "
}
]
| https://en.wikipedia.org/wiki?curid=1368932 |
1369166 | Elias omega coding | Universal code encoding positive integers
Elias ω coding or Elias omega coding is a universal code encoding the positive integers developed by Peter Elias. Like Elias gamma coding and Elias delta coding, it works by prefixing the positive integer with a representation of its order of magnitude in a universal code. Unlike those other two codes, however, Elias omega recursively encodes that prefix; thus, they are sometimes known as recursive Elias codes.
Omega coding is used in applications where the largest encoded value is not known ahead of time, or to compress data in which small values are much more frequent than large values.
To encode a positive integer "N":
1. Place a "0" at the end of the code.
2. If "N" equals 1, stop; the code is complete.
3. Prepend the binary representation of "N" to the beginning of the code. This will be at least two bits, the first bit of which is a 1.
4. Let "N" equal the number of bits just prepended, minus one.
5. Return to step 2 to prepend the encoding of the new "N".
To decode an Elias omega-encoded positive integer:
1. Start with a variable "N", set to a value of 1.
2. If the next bit is a "0", stop; the decoded number is "N".
3. If the next bit is a "1", then read it plus "N" more bits as a binary number, and use that value as the new "N". Go back to step 2.
Examples.
Omega codes can be thought of as a number of "groups". A group is either a single 0 bit, which terminates the code, or two or more bits beginning with 1, which is followed by another group.
The first few codes are shown below. Included is the so-called "implied distribution", describing the distribution of values for which this coding yields a minimum-size code; see Relationship of universal codes to practical compression for details.
The encoding for 1 googol, 10100, is 11 1000 101001100 (15 bits of length header) followed by the 333-bit binary representation of 1 googol, which is 10010 01001001 10101101 00100101 10010100 11000011 01111100 11101011 00001011 00100111 10000100 11000100 11001110 00001011 11110011 10001010 11001110 01000000 10001110 00100001 00011010 01111100 10101010 10110010 01000011 00001000 10101000 00101110 10001111 00010000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 00000000 and a trailing 0, for a total of 349 bits.
A googol to the hundredth power (1010000) is a 33,220-bit binary number. Its omega encoding is 33,243 bits long: 11 1111 1000000111000011 (22 bits), followed by 33,220 bits of the value, and a trailing 0. Under Elias delta coding, the same number is 33,250 bits long: 000000000000000 1000000111000100 (31 bits) followed by 33,219 bits of the value. The omega and delta coding are, respectively, 0.07% and 0.09% longer than the ordinary 33,220-bit binary representation of the number.
Code length.
For the encoding of a positive integer "N", the number of bits needed, "B"("N"), is given recursively by formula_0 That is, the length of the Elias omega code for the integer formula_1 is formula_2 where the number of terms in the sum is bounded above by the binary iterated logarithm.
To be precise, let formula_3. We have formula_4 for some formula_5, and the length of the code is formula_6. Since formula_7, we have formula_8.
Since the iterated logarithm grows slower than all formula_9 for any fixed formula_10, the asymptotic growth rate is formula_11, where the sum terminates when it drops below one.
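The recursion for "B"("N") translates directly into code. The following Python sketch (the function name is illustrative) computes the code length and can be checked against the examples above.

def omega_code_length(n):
    """Number of bits in the Elias omega code of the positive integer n."""
    length = 1                      # the terminating "0" bit
    while n > 1:
        bits = n.bit_length()       # 1 + floor(log2(n))
        length += bits
        n = bits - 1
    return length

print(omega_code_length(1))         # 1
print(omega_code_length(10**100))   # 349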
Asymptotic optimality.
Elias omega coding is an asymptotically optimal prefix code.
Proof sketch. A prefix code must satisfy the Kraft inequality. For the Elias omega coding, the Kraft inequality states formula_12 Now, the summation is asymptotically the same as an integral, giving us formula_13 If the denominator terminates at some point formula_14, then the integral diverges as formula_15. However, if the denominator terminates at some point formula_16, then the integral converges as formula_17. The Elias omega code is on the edge between diverging and converging.
Example code.
Encoding.
void eliasOmegaEncode(char* source, char* dest)
{
    IntReader intreader(source);
    BitWriter bitwriter(dest);
    while (intreader.hasLeft())
    {
        int num = intreader.getInt();
        BitStack bits;                                   // holds the code bits of one value
        while (num > 1)
        {
            int len = 0;
            for (int temp = num; temp > 0; temp >>= 1)   // calculate 1 + floor(log2(num))
                len++;
            for (int i = 0; i < len; i++)
                bits.pushBit((num >> i) & 1);            // push the binary representation of num
            num = len - 1;
        }
        while (bits.length() > 0)
            bitwriter.putBit(bits.popBit());             // emit the groups, most significant bit first
        bitwriter.putBit(false);                         // write one zero to terminate the code
    }
    bitwriter.close();
    intreader.close();
}
Decoding.
void eliasOmegaDecode(char* source, char* dest)
{
    BitReader bitreader(source);
    IntWriter intwriter(dest);
    while (bitreader.hasLeft())
    {
        int num = 1;
        while (bitreader.inputBit())     // a leading 1 starts another group; potentially dangerous with malformed files
        {
            int len = num;
            num = 1;
            for (int i = 0; i < len; ++i)
            {
                num <<= 1;
                if (bitreader.inputBit())
                    num |= 1;
            }
        }
        intwriter.putInt(num);           // write out the value
    }
    bitreader.close();
    intwriter.close();
}
Generalizations.
Elias omega coding does not encode zero or negative integers.
One way to encode all non-negative integers is to add 1 before encoding and then subtract 1 after decoding, or use the very similar Levenshtein coding.
One way to encode all integers is to set up a bijection, mapping all integers (0, 1, -1, 2, -2, 3, -3, ...) to strictly positive integers (1, 2, 3, 4, 5, 6, 7, ...) before encoding.
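As one concrete choice of such a bijection, the usual "zigzag" mapping and its inverse can be written as follows (a Python sketch; the function names are illustrative):

def zigzag(n):
    """Map the integers 0, 1, -1, 2, -2, ... to 1, 2, 3, 4, 5, ..."""
    return 2 * n if n > 0 else -2 * n + 1

def unzigzag(m):
    """Inverse of zigzag: map 1, 2, 3, 4, 5, ... back to 0, 1, -1, 2, -2, ..."""
    return m // 2 if m % 2 == 0 else -(m - 1) // 2

print([zigzag(n) for n in (0, 1, -1, 2, -2, 3, -3)])   # [1, 2, 3, 4, 5, 6, 7]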
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}B(0) & = 0\\,, \\\\\nB(N) & = 1 + \\lfloor \\log_2(N) \\rfloor + B(\\lfloor \\log_2(N) \\rfloor)\\,.\n\\end{align}"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "1 + (1 + \\lfloor \\log_2 N \\rfloor{}) + (1 + \\lfloor \\log_2 \\lfloor \\log_2 N \\rfloor{} \\rfloor{}) + \\cdots "
},
{
"math_id": 3,
"text": "f(x) = \\lfloor \\log_2 x \\rfloor{} "
},
{
"math_id": 4,
"text": "N > f(N) > f(f(N)) > \\cdots > f^k(N) = 1 "
},
{
"math_id": 5,
"text": "k "
},
{
"math_id": 6,
"text": "(k+1) + f(N) + f^2(N) + \\dots + f^k(N) "
},
{
"math_id": 7,
"text": "f(x) \\leq \\log_2 x "
},
{
"math_id": 8,
"text": "k \\leq \\log_2^*(N) "
},
{
"math_id": 9,
"text": "\\log_2^n N "
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "\\Theta(\\log_2 N + \\log_2^2 N + \\cdots) "
},
{
"math_id": 12,
"text": "\\sum_{n=1}^\\infty \\frac{1}{2^{O(1) + \\log_2 n + \\log_2\\log_2 n + \\cdots}} \\leq 1 "
},
{
"math_id": 13,
"text": "\\int_1^\\infty \\frac{dx}{x \\times \\ln x \\times \\ln \\ln x \\cdots} \\leq O(1) "
},
{
"math_id": 14,
"text": "\\ln^k x"
},
{
"math_id": 15,
"text": "\\ln^{k+1} \\infty"
},
{
"math_id": 16,
"text": "(\\ln^k x)^{1+\\epsilon}"
},
{
"math_id": 17,
"text": "\\frac{1}{(\\ln^{k} \\infty)^\\epsilon} "
}
]
| https://en.wikipedia.org/wiki?curid=1369166 |
1369226 | Brix | Sugar content of an aqueous solution
Degrees Brix (symbol °Bx) is a measure of the dissolved solids in a liquid, and is commonly used to measure dissolved sugar content of an aqueous solution. One degree Brix is 1 gram of sucrose in 100 grams of solution and represents the strength of the solution as percentage by mass. If the solution contains dissolved solids other than pure sucrose, then the °Bx only approximates the dissolved solid content. For example, when one adds equal amounts of salt and sugar to equal amounts of water, the degrees Brix (as measured by refraction) of the salt solution rise faster than those of the sugar solution. The °Bx is traditionally used in the wine, sugar, carbonated beverage, fruit juice, fresh produce, maple syrup, and honey industries. The °Bx is also used for measuring the concentration of a cutting fluid mixed in water for metalworking processes.
Comparable scales for indicating sucrose content are: the Plato scale (°P), which is widely used by the brewing industry; the Oechsle scale used in German and Swiss wine making industries, amongst others; and the Balling scale, which is the oldest of the three systems and therefore mostly found in older textbooks, but is still in use in some parts of the world.
A sucrose solution with an apparent specific gravity (20°/20 °C) of 1.040 would be 9.99325 °Bx or 9.99359 °P while the representative sugar body, the International Commission for Uniform Methods of Sugar Analysis (ICUMSA), which favours the use of mass fraction, would report the solution strength as 9.99249%. Because the differences between the systems are of little practical significance (the differences are less than the precision of most common instruments) and wide historical use of the Brix unit, modern instruments calculate mass fraction using ICUMSA official formulas but report the result as °Bx.
Background.
In the early 1800s, Karl Balling, followed by Adolf Brix, and finally the "Normal-Commissions" under Fritz Plato, prepared pure sucrose solutions of known strength, measured their specific gravities and prepared tables of percent sucrose by mass vs. measured specific gravity. Balling measured specific gravity to 3 decimal places, Brix to 5, and the Normal-Eichungs Kommission to 6 with the goal of the Commission being to correct errors in the 5th and 6th decimal place in the Brix table.
Equipped with one of these tables, a brewer wishing to know how much sugar was in his wort could measure its specific gravity and enter that specific gravity into the Plato table to obtain °Plato, which is the concentration of sucrose by percentage mass. Similarly, a vintner could enter the specific gravity of his must into the Brix table to obtain the °Bx, which is the concentration of sucrose by percent mass. It is important to point out that neither wort nor must is a solution of pure sucrose in pure water. Many other compounds are dissolved as well but these are either sugars, which behave similar to sucrose with respect to specific gravity as a function of concentration, or compounds that are present in small amounts (minerals, hop acids in wort, tannins, acids in must). In any case, even if °Bx is not representative of the exact amount of sugar in a must or fruit juice, it can be used for comparison of relative sugar content.
Measurement.
Specific gravity.
As specific gravity was the basis for the Balling, Brix and Plato tables, dissolved sugar content was originally estimated by measurement of specific gravity using a hydrometer or pycnometer. In modern times, hydrometers are still widely used, but where greater accuracy is required, an electronic oscillating U-tube meter may be employed. Whichever means is used, the analyst enters the tables with specific gravity and takes out (using interpolation if necessary) the sugar content in percent by mass.
If the analyst uses the Plato tables (maintained by the American Society of Brewing Chemists) they report in °P. If using the Brix table (the current version of which is maintained by NIST and can be found on their website), they report in °Bx. If using the ICUMSA tables, they would report in mass fraction (m.f.).
It is not, typically, actually necessary to consult tables as the tabulated °Bx or °P value can be printed directly on the hydrometer scale next to the tabulated value of specific gravity or stored in the memory of the electronic U-tube meter or calculated from polynomial fits to the tabulated data, in fact, the ICUMSA tables are calculated from a best-fit polynomial.
Also note that the tables in use today are not those published by Brix or Plato. Those workers measured true specific gravity referenced to water at 4 °C, using, respectively, 17.5 °C and 20 °C as the temperature at which the density of a sucrose solution was measured. Both NBS and ASBC converted to apparent specific gravity at 20 °C/20 °C. The ICUMSA tables are based on more recent measurements on sucrose, fructose, glucose and invert sugar, and they tabulate true density and weight in air at 20 °C against mass fraction.
Refractive index.
Dissolution of sucrose and other sugars in water changes not only its specific gravity but its optical properties, in particular its refractive index and the extent to which it rotates the plane of linearly polarized light. The refractive index, "n"D, for sucrose solutions of various percentage by mass has been measured and tables of "n"D vs. °Bx published. As with the hydrometer, it is possible to use these tables to calibrate a refractometer so that it reads directly in °Bx. Calibration is usually based on the ICUMSA tables, but the user of an electronic refractometer should verify this.
Infrared absorption.
Sugars also have known infrared absorption spectra and this has made it possible to develop instruments for measuring sugar concentration using mid-infrared (MIR), non-dispersive infrared (NDIR), and Fourier transform infrared (FT-IR) techniques. In-line instruments are available that allow constant monitoring of sugar content in sugar refineries, beverage plants, wineries, etc. As with any other instruments, MIR and FT-IR instruments can be calibrated against pure sucrose solutions and thus report in °Bx, but there are other possibilities with these technologies, as they have the potential to distinguish between sugars and interfering substances. Newer MIR and NDIR instruments have up to five analyzing channels that allow corrections for interference between ingredients.
Tables.
Specific gravity.
Approximate values of °Bx can be computed from 231.61 × (SG − 0.9977), where SG is the apparent specific gravity of the solution at 20 °C/20 °C. More accurate values are available from:
formula_0,
derived from the NBS table with SG as above. This should not be used above SG = 1.17874 (40 °Bx). RMS disagreement between the polynomial and the NBS table is 0.0009 °Bx.
The Plato scale can be approximated with a mean average error of less than 0.02°P with the following equation:
formula_1
or with even higher accuracy (average error less than 0.00053°P with respect to the ASBC tables) from the best-fit polynomial:
formula_2.
The difference between the °Bx and °P as calculated from the respective polynomials is:
formula_3
The difference is generally less than ±0.0005 °Bx or °P with the exception being for weak solutions. As 0 °Bx is approached, °P tends toward as much as 0.002 °P higher than the °Bx calculated for the same specific gravity. Disagreements of this order of magnitude can be expected as the NBS and the ASBC used slightly different values for the density of air and pure water in their calculations for converting to apparent specific gravity. It should be clear from these comments that Plato and Brix are, for all but the most exacting applications, the same. Note: all polynomials in this article are in a format that can be pasted directly into a spreadsheet.
The ICUMSA polynomials are generally only published in the form where mass fraction is used to derive the density. As a result, they are omitted from this section.
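As an illustration, the linear approximation and the polynomial fit quoted above can be evaluated directly; the following Python sketch (function names illustrative) reproduces the example value quoted earlier in the article (SG 1.040 ≈ 9.99 °Bx) and is valid only up to about 40 °Bx, per the note above.

def brix_from_sg(sg):
    """Degrees Brix from apparent specific gravity (20 °C/20 °C), NBS-based polynomial fit."""
    return 182.4601 * sg**3 - 775.6821 * sg**2 + 1262.7794 * sg - 669.5622

def brix_approx(sg):
    """Rough linear approximation."""
    return 231.61 * (sg - 0.9977)

print(brix_from_sg(1.040))   # about 9.99 degrees Brix
print(brix_approx(1.040))    # about 9.80 degrees Brix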
Refractive index.
When a refractometer is used, the Brix value can be obtained from the polynomial fit to the ICUMSA table:
formula_4,
where formula_5 is the refractive index measured at the wavelength of the sodium D line (589.3 nm) at 20 °C. Temperature is important as refractive index changes dramatically with temperature. Many refractometers have built in "Automatic Temperature Compensation" (ATC), which is based on knowledge of the way the refractive index of sucrose changes. For example, the refractive index of a sucrose solution of strength less than 10 °Bx is such that a 1 °C change in temperature would cause the Brix reading to shift by about 0.06 °Bx. Beer, conversely, exhibits a change with temperature about three times this much. It is important, therefore, that users of refractometers either make sure the sample and prism of the instrument are both close to 20 °C or, if that is difficult to ensure, readings should be taken at 2 temperatures separated by a few degrees, the change per degree noted and the final recorded value referenced to 20 °C using the Bx vs. Temp slope information.
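Similarly, the polynomial fit to the ICUMSA refractive-index table given above can be evaluated directly. A brief Python sketch (function name illustrative), which returns approximately 0 °Bx for pure water (formula_5 ≈ 1.3330 at 20 °C):

def brix_from_refractive_index(nd):
    """Degrees Brix from the refractive index nD (589.3 nm, 20 C), ICUMSA-based fit."""
    return (11758.74 * nd**5 - 88885.21 * nd**4 + 270177.93 * nd**3
            - 413145.80 * nd**2 + 318417.95 * nd - 99127.4536)

print(brix_from_refractive_index(1.3330))   # approximately 0 degrees Brix (pure water)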
As solutes other than sucrose may affect the refractive index and the specific gravity differently, this refractive "Brix" value is not interchangeable with the traditional hydrometer Brix unless corrections are applied. The formal term for such a refractive value is "Refractometric Dry Substance" (RDS). See below.
Usage.
The four scales are often used interchangeably since the differences are minor.
Brix is used in the food industry for measuring the approximate amount of sugars in fruits, vegetables, juices, wine, soft drinks and in the starch and sugar manufacturing industry. Different countries use the scales in different industries: In brewing, the UK uses specific gravity × 1000; Europe uses Plato degrees; and the US uses a mix of specific gravity, degrees Brix, degrees Baumé, and degrees Plato. For fruit juices, 1.0 degree Brix is denoted as 1.0% sugar by mass. This usually correlates well with perceived sweetness.
Brix measurements are also used in the dairy industry to measure the quality of colostrum given to newborn calves, goats, and sheep.
Modern optical Brix meters are divided into two categories. In the first are the Abbe-based instruments in which a drop of the sample solution is placed on a prism; the result is observed through an eyepiece. The critical angle (the angle beyond which light is totally reflected back into the sample) is a function of the refractive index and the operator detects this critical angle by noting where a dark-bright boundary falls on an engraved scale. The scale can be calibrated in Brix or refractive index. Often the prism mount contains a thermometer that can be used to correct to 20 °C in situations where measurement cannot be made at exactly that temperature. These instruments are available in bench and handheld versions.
Digital refractometers also find the critical angle, but the light path is entirely internal to the prism. A drop of sample is placed on its surface, so the critical light beam never penetrates the sample. This makes it easier to read turbid samples. The light/dark boundary, whose position is proportional to the critical angle, is sensed by a CCD array. These meters are also available in bench top (laboratory) and portable (pocket) versions. This ability to easily measure Brix in the field makes it possible to determine ideal harvesting times of fruit and vegetables so that products arrive at the consumers in a perfect state or are ideal for subsequent processing steps such as vinification.
Due to higher accuracy and the ability to couple it with other measuring techniques (%CO2 and %alcohol), most soft drink companies and breweries use an oscillating U-tube density meter. Refractometers are still commonly used for fruit juice.
Brix and actual dissolved solids content.
When a sugar solution is measured by refractometer or density meter, the °Bx or °P value obtained by entry into the appropriate table only represents the amount of dry solids dissolved in the sample if the dry solids are exclusively sucrose. This is seldom the case. Grape juice (must), for example, contains little sucrose but does contain glucose, fructose, acids, and other substances. In such cases, the °Bx value clearly cannot be equated with the sucrose content, but it may represent a good approximation to the total sugar content. For example, an 11.0% by mass D-Glucose ("grape sugar") solution measured 10.9 °Bx using a hand held instrument. For these reasons, the sugar content of a solution obtained by use of refractometry with the ICUMSA table is often reported as "Refractometric Dry Substance" (RDS), which could be thought of as an equivalent sucrose content. Where it is desirable to know the actual dry solids content, empirical correction formulas can be developed based on calibrations with solutions similar to those being tested. For example, in sugar refining, dissolved solids can be accurately estimated from refractive index measurement corrected by an optical rotation (polarization) measurement.
Alcohol has a higher refractive index (1.361) than water (1.333). As a consequence, a refractometer measurement made on a sugar solution once fermentation has begun results in a reading substantially higher than the actual solids content. Thus, an operator must be certain that the sample they are testing has not begun to ferment. (If fermentation has indeed started, a correction can be made by estimating alcohol concentration from the original, pre-fermentation reading, termed "OG" by homebrewers.) Brix or Plato measurements based on specific gravity are also affected by fermentation, but in the opposite direction; as ethanol is less dense than water, an ethanol/sugar/water solution gives a Brix or Plato reading that is artificially low.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "^{\\circ}Bx = 182.4601\\,SG^3-775.6821\\,SG^2+1262.7794\\,SG-669.5622"
},
{
"math_id": 1,
"text": "^\\circ P = 260.4 - \\frac{260.4}{SG}"
},
{
"math_id": 2,
"text": "^{\\circ}P = 133.5892\\,SG^3 - 622.5576\\,SG^2 + 1102.9079\\,SG - 613.9427"
},
{
"math_id": 3,
"text": "^{\\circ}P-^{\\circ}Bx= -48.8709\\,SG^3+133.1245\\,SG^2-159.8715\\,SG+55.6195"
},
{
"math_id": 4,
"text": "^{\\circ}Bx= 11758.74n_D^5 -88885.21n_D^4 + 270177.93n_D^3 - 413145.80n_D^2 + 318417.95n_D -99127.4536"
},
{
"math_id": 5,
"text": "n_D"
}
]
| https://en.wikipedia.org/wiki?curid=1369226 |
1369241 | Polar decomposition | Representation of invertible matrices as unitary operator multiplying a Hermitian operator
In mathematics, the polar decomposition of a square real or complex matrix formula_0 is a factorization of the form formula_1, where formula_2 is a unitary matrix and formula_3 is a positive semi-definite Hermitian matrix (formula_2 is an orthogonal matrix and formula_3 is a positive semi-definite symmetric matrix in the real case), both square and of the same size.
If a real formula_4 matrix formula_0 is interpreted as a linear transformation of formula_5-dimensional space formula_6, the polar decomposition separates it into a rotation or reflection formula_2 of formula_6, and a scaling of the space along a set of formula_5 orthogonal axes.
The polar decomposition of a square matrix formula_0 always exists. If formula_0 is invertible, the decomposition is unique, and the factor formula_3 will be positive-definite. In that case, formula_0 can be written uniquely in the form formula_7, where formula_2 is unitary and formula_8 is the unique self-adjoint logarithm of the matrix formula_3. This decomposition is useful in computing the fundamental group of (matrix) Lie groups.
The polar decomposition can also be defined as formula_9 where formula_10 is a symmetric positive-definite matrix with the same eigenvalues as formula_3 but different eigenvectors.
The polar decomposition of a matrix can be seen as the matrix analog of the polar form of a complex number formula_11 as formula_12, where formula_13 is its absolute value (a non-negative real number), and formula_14 is a complex number with unit norm (an element of the circle group).
The definition formula_15 may be extended to rectangular matrices formula_16 by requiring formula_17 to be a semi-unitary matrix and formula_18 to be a positive-semidefinite Hermitian matrix. The decomposition always exists and formula_3 is always unique. The matrix formula_2 is unique if and only if formula_0 has full rank.
Geometric interpretation.
A real square formula_19 matrix formula_0 can be interpreted as the linear transformation of formula_20 that takes a column vector formula_21 to formula_22. Then, in the polar decomposition formula_23, the factor formula_24 is an formula_19 real orthogonal matrix. The polar decomposition then can be seen as expressing the linear transformation defined by formula_0 as a scaling of the space formula_20 along each eigenvector formula_25 of formula_3 by a scale factor formula_26 (the action of formula_3), followed by a rotation of formula_20 (the action of formula_24).
Alternatively, the decomposition formula_27 expresses the transformation defined by formula_0 as a rotation (formula_24) followed by a scaling (formula_3) along certain orthogonal directions. The scale factors are the same, but the directions are different.
Properties.
The polar decomposition of the complex conjugate of formula_0 is given by formula_28 Note that formula_29 gives the corresponding polar decomposition of the determinant of "A", since formula_30 and formula_31. In particular, if formula_0 has determinant 1 then both formula_2 and formula_3 have determinant 1.
The positive-semidefinite matrix "P" is always unique, even if "A" is singular, and is denoted as formula_32 where formula_33 denotes the conjugate transpose of formula_0. The uniqueness of "P" ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that formula_34 is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root. If "A" is invertible, then "P" is positive-definite, thus also invertible and the matrix "U" is uniquely determined by formula_35
Relation to the SVD.
In terms of the singular value decomposition (SVD) of formula_0, formula_36, one has formula_37 where formula_2, formula_38, and formula_39 are unitary matrices (called orthogonal matrices if the field is the reals formula_40). This confirms that formula_3 is positive-definite and formula_2 is unitary. Thus, the existence of the SVD is equivalent to the existence of polar decomposition.
One can also decompose formula_0 in the form formula_41 Here formula_2 is the same as before and formula_42 is given by formula_43 This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. Left polar decomposition is also known as reverse polar decomposition.
The polar decomposition of a square invertible real matrix formula_0 is of the form
formula_44
where formula_45 is a positive-definite matrix and formula_46 is an orthogonal matrix.
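A minimal numerical sketch of this relationship, using NumPy (the helper name is illustrative): the unitary and positive-semidefinite factors are assembled directly from the SVD, exactly as in the formulas above.

import numpy as np

def polar_from_svd(a):
    """Right polar decomposition A = U P computed from the SVD A = W Sigma V*."""
    w, sigma, vh = np.linalg.svd(a)
    u = w @ vh                               # unitary factor U = W V*
    p = vh.conj().T @ np.diag(sigma) @ vh    # positive semi-definite factor P = V Sigma V*
    return u, p

a = np.array([[1.0, 2.0], [3.0, 4.0]])
u, p = polar_from_svd(a)
print(np.allclose(u @ p, a))             # True
print(np.allclose(u.T @ u, np.eye(2)))   # True: U is orthogonal for this real matrix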
Relation to normal matrices.
The matrix formula_0 with polar decomposition formula_47 is normal if and only if formula_2 and formula_3 commute: formula_48, or equivalently, they are simultaneously diagonalizable.
Construction and proofs of existence.
The core idea behind the construction of the polar decomposition is similar to that used to compute the singular-value decomposition.
Derivation for normal matrices.
If formula_0 is normal, then it is unitarily equivalent to a diagonal matrix: formula_49 for some unitary matrix formula_38 and some diagonal matrix formula_50. This makes the derivation of its polar decomposition particularly straightforward, as we can then write
formula_51
where formula_52 is a diagonal matrix containing the "phases" of the elements of formula_50, that is, formula_53 when formula_54, and formula_55 when formula_56.
The polar decomposition is thus formula_47, with formula_2 and formula_3 diagonal in the eigenbasis of formula_0 and having eigenvalues equal to the phases and absolute values of those of formula_0, respectively.
Derivation for invertible matrices.
From the singular-value decomposition, it can be shown that a matrix formula_0 is invertible if and only if formula_34 (equivalently, formula_57) is. Moreover, this is true if and only if the eigenvalues of formula_34 are all nonzero.
In this case, the polar decomposition is directly obtained by writing
formula_58
and observing that formula_59 is unitary. To see this, we can exploit the spectral decomposition of formula_34 to write formula_60.
In this expression, formula_61 is unitary because formula_38 is. To show that also formula_62 is unitary, we can use the SVD to write formula_63, so that
formula_64
where again formula_39 is unitary by construction.
Yet another way to directly show the unitarity of formula_59 is to note that, writing the SVD of formula_0 in terms of rank-1 matrices as formula_65, where formula_66 are the singular values of formula_0, we have
formula_67
which directly implies the unitarity of formula_59 because a matrix is unitary if and only if its singular values have unitary absolute value.
Note how, from the above construction, it follows that "the unitary matrix in the polar decomposition of an invertible matrix is uniquely defined".
General derivation.
The SVD of a square matrix formula_0 reads formula_68, with formula_69 unitary matrices, and formula_70 a diagonal, positive semi-definite matrix. By simply inserting an additional pair of formula_39s or formula_38s, we obtain the two forms of the polar decomposition of formula_0: formula_71 More generally, if formula_72 is some rectangular formula_73 matrix, its SVD can be written as formula_74 where now formula_75 and formula_76 are isometries with dimensions formula_77 and formula_78, respectively, where formula_79, and formula_80 is again a diagonal positive semi-definite square matrix with dimensions formula_81. We can now apply the same reasoning used in the above equation to write formula_82, but now formula_83 is not in general unitary. Nonetheless, formula_84 has the same support and range as formula_72, and it satisfies formula_85 and formula_86. This makes formula_84 into an isometry when its action is restricted onto the support of formula_72, that is, it means that formula_84 is a partial isometry.
As an explicit example of this more general case, consider the SVD of the following matrix: formula_87 We then have formula_88 which is an isometry, but not unitary. On the other hand, if we consider the decomposition of formula_89 we find formula_90 which is a partial isometry (but not an isometry).
Bounded operators on Hilbert space.
The polar decomposition of any bounded linear operator "A" between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.
The polar decomposition for matrices generalizes as follows: if "A" is a bounded linear operator then there is a unique factorization of "A" as a product "A" = "UP" where "U" is a partial isometry, "P" is a non-negative self-adjoint operator and the initial space of "U" is the closure of the range of "P".
The operator "U" must be weakened to a partial isometry, rather than unitary, because of the following issues. If "A" is the one-sided shift on "l"2(N), then |"A"| = {"A*A"}1/2 = "I". So if "A" = "U" |"A"|, "U" must be "A", which is not unitary.
The existence of a polar decomposition is a consequence of Douglas' lemma:
<templatestyles src="Math_theorem/styles.css" />
Lemma — If "A", "B" are bounded operators on a Hilbert space "H", and "A*A" ≤ "B*B", then there exists a contraction "C" such that "A = CB". Furthermore, "C" is unique if ker("B*") ⊂ ker("C").
The operator "C" can be defined by "C"("Bh") := "Ah" for all "h" in "H", extended by continuity to the closure of "Ran"("B"), and by zero on the orthogonal complement to all of "H". The lemma then follows since "A*A" ≤ "B*B" implies ker("B") ⊂ ker("A").
In particular. If "A*A" = "B*B", then "C" is a partial isometry, which is unique if ker("B*") ⊂ ker("C").
In general, for any bounded operator "A",
formula_91
where ("A*A")1/2 is the unique positive square root of "A*A" given by the usual functional calculus. So by the lemma, we have
formula_92
for some partial isometry "U", which is unique if ker("A*") ⊂ ker("U"). Take "P" to be ("A*A")1/2 and one obtains the polar decomposition "A" = "UP". Notice that an analogous argument can be used to show "A = P'U'", where "P' " is positive and "U'" a partial isometry.
When "H" is finite-dimensional, "U" can be extended to a unitary operator; this is not true in general (see example above). Alternatively, the polar decomposition can be shown using the operator version of singular value decomposition.
By property of the continuous functional calculus, |"A"| is in the C*-algebra generated by "A". A similar but weaker statement holds for the partial isometry: "U" is in the von Neumann algebra generated by "A". If "A" is invertible, the polar part "U" will be in the C*-algebra as well.
Unbounded operators.
If "A" is a closed, densely defined unbounded operator between complex Hilbert spaces then it still has a (unique) polar decomposition
formula_93
where |"A"| is a (possibly unbounded) non-negative self adjoint operator with the same domain as "A", and "U" is a partial isometry vanishing on the orthogonal complement of the range ran(|"A"|).
The proof uses the same lemma as above, which goes through for unbounded operators in general. If dom("A*A") = dom("B*B") and "A*Ah" = "B*Bh" for all "h" ∈ dom("A*A"), then there exists a partial isometry "U" such that "A" = "UB". "U" is unique if ran("B")⊥ ⊂ ker("U"). The operator "A" being closed and densely defined ensures that the operator "A*A" is self-adjoint (with dense domain) and therefore allows one to define ("A*A")1/2. Applying the lemma gives polar decomposition.
If an unbounded operator "A" is affiliated to a von Neumann algebra M, and "A" = "UP" is its polar decomposition, then "U" is in M and so is the spectral projection of "P", 1"B"("P"), for any Borel set "B" in [0, ∞).
Quaternion polar decomposition.
The polar decomposition of quaternions formula_94 with orthonormal basis quaternions formula_95 depends on the unit 2-dimensional sphere formula_96 of square roots of minus one, known as "right versors". Given any formula_97 on this sphere and an angle "a", the versor formula_98 is on the unit 3-sphere of formula_99 For "a" = 0 and "a" = π the versor is 1 or −1 respectively, regardless of which "r" is selected. The norm "t" of a quaternion "q" is the Euclidean distance from the origin to "q". When a quaternion is not just a real number, then there is a "unique" polar decomposition:
formula_100
Here "r", "a", and "t" are all uniquely determined: "r" is a right versor, "a" satisfies 0 < "a" < π, and "t" > 0.
Alternative planar decompositions.
In the Cartesian plane, alternative planar ring decompositions arise as follows:
Numerical determination of the matrix polar decomposition.
To compute an approximation of the polar decomposition "A" = "UP", usually the unitary factor "U" is approximated. The iteration is based on Heron's method for the square root of "1" and computes, starting from formula_101, the sequence
formula_102
The combination of inversion and Hermite conjugation is chosen so that in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values.
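A sketch of this basic iteration in NumPy (names illustrative): each step averages the current iterate with the inverse of its conjugate transpose, and for an invertible matrix the iterates converge to the unitary factor.

import numpy as np

def polar_unitary_factor(a, iterations=20):
    """Approximate the unitary factor U of A = U P by the basic averaging iteration."""
    u = a.copy()
    for _ in range(iterations):
        u = 0.5 * (u + np.linalg.inv(u.conj().T))   # U_{k+1} = (U_k + (U_k^*)^{-1}) / 2
    return u

a = np.array([[2.0, 1.0], [0.0, 1.0]])
u = polar_unitary_factor(a)
p = u.conj().T @ a               # then P = U^* A
print(np.allclose(u @ p, a))     # True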
This basic iteration may be refined to speed up the process:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "A = U P"
},
{
"math_id": 2,
"text": "U"
},
{
"math_id": 3,
"text": "P"
},
{
"math_id": 4,
"text": "n\\times n"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "\\mathbb{R}^n"
},
{
"math_id": 7,
"text": "A = U e^X "
},
{
"math_id": 8,
"text": "X"
},
{
"math_id": 9,
"text": "A = P' U"
},
{
"math_id": 10,
"text": "P' = U P U^{-1}"
},
{
"math_id": 11,
"text": "z"
},
{
"math_id": 12,
"text": "z = u r"
},
{
"math_id": 13,
"text": "r"
},
{
"math_id": 14,
"text": "u"
},
{
"math_id": 15,
"text": "A = UP"
},
{
"math_id": 16,
"text": "A\\in\\mathbb{C}^{m \\times n}"
},
{
"math_id": 17,
"text": "U\\in\\mathbb{C}^{m \\times n}"
},
{
"math_id": 18,
"text": "P\\in\\mathbb{C}^{n \\times n}"
},
{
"math_id": 19,
"text": "m\\times m"
},
{
"math_id": 20,
"text": "\\mathbb{R}^m"
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "A x"
},
{
"math_id": 23,
"text": "A = RP"
},
{
"math_id": 24,
"text": "R"
},
{
"math_id": 25,
"text": "e_i"
},
{
"math_id": 26,
"text": "\\sigma_i"
},
{
"math_id": 27,
"text": "A=P R"
},
{
"math_id": 28,
"text": "\\overline{A} = \\overline{U}\\overline{P}."
},
{
"math_id": 29,
"text": "\\det A = \\det U \\det P = e^{i\\theta} r"
},
{
"math_id": 30,
"text": "\\det U = e^{i\\theta}"
},
{
"math_id": 31,
"text": "\\det P = r = \\left|\\det A\\right|"
},
{
"math_id": 32,
"text": "P = \\left(A^* A\\right)^{1/2},"
},
{
"math_id": 33,
"text": "A^*"
},
{
"math_id": 34,
"text": "A^* A"
},
{
"math_id": 35,
"text": "U = AP^{-1}."
},
{
"math_id": 36,
"text": "A = W\\Sigma V^*"
},
{
"math_id": 37,
"text": "\\begin{align}\n P &= V\\Sigma V^* \\\\\n U &= WV^*\n\\end{align}"
},
{
"math_id": 38,
"text": "V"
},
{
"math_id": 39,
"text": "W"
},
{
"math_id": 40,
"text": "\\mathbb{R}"
},
{
"math_id": 41,
"text": "A = P'U"
},
{
"math_id": 42,
"text": "P'"
},
{
"math_id": 43,
"text": "P' = UPU^{-1} = \\left(AA^*\\right)^{1/2} = W \\Sigma W^*."
},
{
"math_id": 44,
"text": "A = |A|R"
},
{
"math_id": 45,
"text": "|A| = \\left(AA^\\textsf{T}\\right)^{1/2}"
},
{
"math_id": 46,
"text": "R = |A|^{-1}A"
},
{
"math_id": 47,
"text": "A=UP"
},
{
"math_id": 48,
"text": "UP = PU"
},
{
"math_id": 49,
"text": "A = V\\Lambda V^*"
},
{
"math_id": 50,
"text": "\\Lambda"
},
{
"math_id": 51,
"text": "A = V\\Phi_\\Lambda |\\Lambda|V^* = \\underbrace{\\left(V\\Phi_\\Lambda V^*\\right)}_{\\equiv U} \\underbrace{\\left(V |\\Lambda| V^*\\right)}_{\\equiv P},"
},
{
"math_id": 52,
"text": "\\Phi_\\Lambda"
},
{
"math_id": 53,
"text": "(\\Phi_\\Lambda)_{ii}\\equiv \\Lambda_{ii}/ |\\Lambda_{ii}|"
},
{
"math_id": 54,
"text": "\\Lambda_{ii}\\neq 0"
},
{
"math_id": 55,
"text": "(\\Phi_\\Lambda)_{ii}=0"
},
{
"math_id": 56,
"text": "\\Lambda_{ii}=0"
},
{
"math_id": 57,
"text": "AA^*"
},
{
"math_id": 58,
"text": "A = A\\left(A^* A\\right)^{-1/2}\\left(A^* A\\right)^{1/2},"
},
{
"math_id": 59,
"text": "A\\left(A^* A\\right)^{-1/2}"
},
{
"math_id": 60,
"text": "A\\left(A^* A\\right)^{-1/2} = AVD^{-1/2}V^*"
},
{
"math_id": 61,
"text": "V^*"
},
{
"math_id": 62,
"text": "AVD^{-1/2}"
},
{
"math_id": 63,
"text": "A = WD^{1/2}V^*"
},
{
"math_id": 64,
"text": "AV D^{-1/2} = WD^{1/2}V^* VD^{-1/2} = W,"
},
{
"math_id": 65,
"text": "A = \\sum_k s_k v_k w_k^*"
},
{
"math_id": 66,
"text": "s_k"
},
{
"math_id": 67,
"text": "A\\left(A^* A\\right)^{-1/2}\n= \\left(\\sum_j \\lambda_j v_j w_j^*\\right)\\left(\\sum_k |\\lambda_k|^{-1} w_k w_k^*\\right)\n= \\sum_k \\frac{\\lambda_k}{|\\lambda_k|} v_k w_k^*,"
},
{
"math_id": 68,
"text": "A = W D^{1/2} V^*"
},
{
"math_id": 69,
"text": "W, V"
},
{
"math_id": 70,
"text": "D"
},
{
"math_id": 71,
"text": "\n A = WD^{1/2}V^* =\n \\underbrace{\\left(W D^{1/2} W^*\\right)}_P \\underbrace{\\left(W V^*\\right)}_U =\n \\underbrace{\\left(W V^*\\right)}_U \\underbrace{\\left(VD^{1/2} V^*\\right)}_{P'}.\n"
},
{
"math_id": 72,
"text": " A "
},
{
"math_id": 73,
"text": "\n n\\times m "
},
{
"math_id": 74,
"text": " A=WD^{1/2}V^* "
},
{
"math_id": 75,
"text": " W "
},
{
"math_id": 76,
"text": " V "
},
{
"math_id": 77,
"text": " n\\times r "
},
{
"math_id": 78,
"text": "\n m\\times r "
},
{
"math_id": 79,
"text": " r\\equiv\\operatorname{rank}(A) "
},
{
"math_id": 80,
"text": " D "
},
{
"math_id": 81,
"text": " r\\times r "
},
{
"math_id": 82,
"text": " A=PU=UP'"
},
{
"math_id": 83,
"text": " U\\equiv WV^* "
},
{
"math_id": 84,
"text": " U "
},
{
"math_id": 85,
"text": " U^* U=VV^* "
},
{
"math_id": 86,
"text": " UU^*=WW^* \n"
},
{
"math_id": 87,
"text": "\n A\\equiv \\begin{pmatrix}1&1\\\\2&-2\\\\0&0\\end{pmatrix} =\n\\underbrace{\\begin{pmatrix}1&0\\\\0&1\\\\0&0\\end{pmatrix}}_{\\equiv W}\n\\underbrace{\\begin{pmatrix}\\sqrt2&0\\\\0&\\sqrt8\\end{pmatrix}}_{\\sqrt D}\n\\underbrace{\\begin{pmatrix}\\frac1{\\sqrt2} & \\frac1{\\sqrt2} \\\\ \\frac1{\\sqrt2} & -\\frac1{\\sqrt2}\\end{pmatrix}}_{V^\\dagger}. \n"
},
{
"math_id": 88,
"text": "\n WV^\\dagger = \\frac1{\\sqrt2}\\begin{pmatrix}1&1 \\\\ 1&-1 \\\\ 0&0\\end{pmatrix} \n"
},
{
"math_id": 89,
"text": "\n A\\equiv \\begin{pmatrix}1&0&0\\\\0&2&0\\end{pmatrix} =\n\\begin{pmatrix}1&0\\\\0&1\\end{pmatrix}\n\\begin{pmatrix}1&0\\\\0&2\\end{pmatrix}\n\\begin{pmatrix}1&0&0\\\\0&1&0\\end{pmatrix}, \n"
},
{
"math_id": 90,
"text": "\n WV^\\dagger =\\begin{pmatrix}1&0&0\\\\0&1&0\\end{pmatrix}, \n"
},
{
"math_id": 91,
"text": "A^*A = \\left(A^*A\\right)^{1/2} \\left(A^*A\\right)^{1/2},"
},
{
"math_id": 92,
"text": "A = U\\left(A^*A\\right)^{1/2}"
},
{
"math_id": 93,
"text": "A = U |A|"
},
{
"math_id": 94,
"text": "\\ \\mathbb{H}\\ "
},
{
"math_id": 95,
"text": "\\ 1 , \\widehat{ i }, \\widehat{ j }, \\widehat{ k } \\ "
},
{
"math_id": 96,
"text": "\\ \\widehat{ r } \\in \\lbrace\\ x\\ \\widehat{ i } + y\\ \\widehat{ j } + z\\ \\widehat{ k } \\in \\mathbb{H} \\smallsetminus \\mathbb{R}\\ :\\ x^2 + y^2 +z^2 = 1\\ \\rbrace\\ "
},
{
"math_id": 97,
"text": "\\ \\widehat{ r }\\ "
},
{
"math_id": 98,
"text": "\\ e^{a\\ \\widehat{ r } } {{=}} \\cos (a) + \\widehat{ r }\\ \\sin (a)\\ "
},
{
"math_id": 99,
"text": "\\ \\mathbb{H} ~."
},
{
"math_id": 100,
"text": "\\ q = t\\ \\exp\\left(\\ a\\ \\widehat{ r }\\ \\right) ~."
},
{
"math_id": 101,
"text": "U_0 = A"
},
{
"math_id": 102,
"text": "U_{k+1} = \\frac{1}{2}\\left(U_k + \\left(U_k^*\\right )^{-1}\\right),\\qquad k = 0, 1, 2, \\ldots"
}
]
| https://en.wikipedia.org/wiki?curid=1369241 |
13693464 | Thiazyl fluoride | Chemical compound
Thiazyl fluoride, NSF, is a colourless, pungent gas at room temperature and condenses to a pale yellow liquid at 0.4 °C. Along with thiazyl trifluoride, NSF3, it is an important precursor to sulfur-nitrogen-fluorine compounds. It is notable for its extreme hygroscopicity.
Synthesis.
Thiazyl fluoride can be synthesized by various methods, such as fluorination of tetrasulfur tetranitride with silver(II) fluoride or mercuric fluoride. It can be purified by vacuum distillation. However, because this synthetic pathway yields numerous side-products, an alternative approach is the reaction of imino(triphenyl)phosphines with sulfur tetrafluoride by cleavage of the phosphorus–nitrogen bond to form sulfur difluoride imides and triphenyldifluorophosphorane. These products readily decompose yielding thiazyl fluoride.
For synthesis on a preparative scale, the decomposition of compounds already containing the moiety is commonly used:
Reactivity.
Reactions with electrophiles and Lewis acids.
Lewis acids remove fluoride to afford thiazyl salts:
Thiazyl fluoride functions as a ligand in a number of transition metal complexes, including those of cobalt and nickel. In all of its complexes, NSF is bound to the metal center through nitrogen.
Reactions with nucleophiles.
Thiazyl fluoride reacts violently with water:
Nucleophilic attack on thiazyl fluoride occurs at the sulfur atom.
Fluoride gives an adduct:
The halogen derivatives XNSF2 (X = F, Cl, Br, I) can be synthesized from reacting Hg(NSF)2 with X2; whereby, ClNSF2 is the most stable compound observed in this series.
Oligomerization and cycloaddition.
At room temperature, thiazyl fluoride undergoes cyclic trimerization via the N–S multiple bonding:
1,3,5-trifluoro-1formula_0,3formula_0,5formula_0,2,4,6-trithiatriazine is the yielded cyclic trimer, where each sulfur atom remains tetravalent.
Thiazyl fluoride also reacts via exothermic cycloaddition in the presence of dienes.
Structure and bonding.
The N−S bond length is 1.448 Å, which is short, indicating multiple bonding, and can be represented by the following resonance structures:
The NSF molecule has 18 total valence electrons and is isoelectronic to sulfur dioxide. Thiazyl fluoride adopts Cs-symmetry and has been shown by isotopic substitution to be bent in the ground state. A combination of rotational analysis with Franck-Condon calculations has been applied to study the A″formula_1A′ electronic excitation, which results in elongation of the N–S bond by 0.11 Å and a decrease in the formula_2NSF bond angle by 15.3formula_3.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\lambda^4"
},
{
"math_id": 1,
"text": "-"
},
{
"math_id": 2,
"text": "\\measuredangle"
},
{
"math_id": 3,
"text": "^\\circ"
}
]
| https://en.wikipedia.org/wiki?curid=13693464 |
1369392 | Matching pennies | Simple game studied in game theory
Matching pennies is a non-cooperative game studied in game theory. It is played between two players, Even and Odd. Each player has a penny and must secretly turn the penny to heads or tails. The players then reveal their choices simultaneously. If the pennies match (both heads or both tails), then Even wins and keeps both pennies. If the pennies do not match (one heads and one tails), then Odd wins and keeps both pennies.
Theory.
Matching Pennies is a zero-sum game because each participant's gain or loss of utility is exactly balanced by the losses or gains of the utility of the other participants. If the participants' total gains are added up and their total losses subtracted, the sum will be zero.
The game can be written in a payoff matrix (pictured right - from Even's point of view). Each cell of the matrix shows the two players' payoffs, with Even's payoffs listed first.
Matching pennies is used primarily to illustrate the concept of mixed strategies and a mixed strategy Nash equilibrium.
This game has no pure strategy Nash equilibrium since there is no pure strategy (heads or tails) that is a best response to a best response. In other words, there is no pair of pure strategies such that neither player would want to switch if told what the other would do. Instead, the unique Nash equilibrium of this game is in mixed strategies: each player chooses heads or tails with equal probability. In this way, each player makes the other indifferent between choosing heads or tails, so neither player has an incentive to try another strategy. The best-response functions for mixed strategies are depicted in Figure 1 below:
When either player plays the equilibrium, everyone's expected payoff is zero.
Variants.
Varying the payoffs in the matrix can change the equilibrium point. For example, in the table shown on the right, Even has a chance to win 7 if both he and Odd play Heads. To calculate the equilibrium point in this game, note that a player playing a mixed strategy must be indifferent between his two actions (otherwise he would switch to a pure strategy). This gives us two equations: if Odd plays Heads with probability formula_2, Even is indifferent between Heads and Tails when its expected payoff from Heads, formula_0, equals its expected payoff from Tails, formula_1, which gives formula_3; and if Even plays Heads with probability formula_6, Odd is indifferent when its two expected payoffs formula_4 and formula_5 are equal, which gives formula_7.
Note that since formula_2 is the Heads-probability of "Odd" and formula_6 is the Heads-probability of "Even", the change in Even's payoff affects Odd's equilibrium strategy and not Even's own equilibrium strategy. This may be unintuitive at first. The reasoning is that in equilibrium, the choices must be equally appealing. The +7 possibility for Even is very appealing relative to +1, so to maintain equilibrium, Odd's play must lower the probability of that outcome to compensate and equalize the expected values of the two choices, meaning in equilibrium Odd will play Heads less often and Tails more often.
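To make the computation concrete, the indifference conditions can be solved directly; the short Python sketch below (illustrative names, not from any library) recovers the equilibrium probabilities of this variant.

def equilibrium_probabilities(even_hh_payoff=7.0):
    """Mixed-strategy equilibrium when Even wins `even_hh_payoff` on Heads/Heads and payoffs are otherwise +/-1."""
    # Odd's Heads-probability x makes Even indifferent:
    #   even_hh_payoff*x - (1 - x) = -x + (1 - x)   =>   x = 2 / (even_hh_payoff + 3)
    x = 2.0 / (even_hh_payoff + 3.0)
    # Even's Heads-probability y makes Odd indifferent; Odd's payoffs are unchanged, so y = 0.5.
    y = 0.5
    return x, y

print(equilibrium_probabilities())   # (0.2, 0.5)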
Laboratory experiments.
Human players do not always play the equilibrium strategy. Laboratory experiments reveal several factors that make players deviate from the equilibrium strategy, especially if matching pennies is played repeatedly:
Moreover, when the payoff matrix is asymmetric, other factors influence human behavior even when the game is not repeated:
Real-life data.
The conclusions of laboratory experiments have been criticized on several grounds.
To overcome these difficulties, several authors have done statistical analyses of professional sports games. These are zero-sum games with very high payoffs, and the players have devoted their lives to become experts. Often such games are strategically similar to matching pennies: | [
{
"math_id": 0,
"text": "+7\\cdot x -1\\cdot (1-x)"
},
{
"math_id": 1,
"text": "-1\\cdot x +1\\cdot (1-x)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "x=0.2"
},
{
"math_id": 4,
"text": "+1\\cdot y -1\\cdot (1-y)"
},
{
"math_id": 5,
"text": "-1\\cdot y +1\\cdot (1-y)"
},
{
"math_id": 6,
"text": "y"
},
{
"math_id": 7,
"text": "y=0.5"
}
]
| https://en.wikipedia.org/wiki?curid=1369392 |
1369521 | Magic cube classes | Categories of number cubes
In mathematics, a magic cube of order formula_0 is an formula_1 grid of natural numbers satisfying the property that the numbers in the same row, the same column, the same pillar or the same length-formula_0 diagonal add up to the same number. It is a formula_2-dimensional generalisation of the magic square. A magic cube can be assigned to one of six magic cube classes, based on the cube characteristics. A benefit of this classification is that it is consistent for all orders and all dimensions of magic hypercubes.
The six classes.
The minimum requirements for a magic cube are: all rows, columns, pillars, and 4 space diagonals must sum to the same value. A simple magic cube contains no magic squares or not enough to qualify for the next class. The smallest normal simple magic cube is order 3. Minimum correct summations required = 3"m"2 + 4
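As an illustration of these minimum requirements, the following Python sketch (the function name is illustrative) checks the 3"m"2 rows, columns and pillars and the 4 space diagonals of a normal order-"m" cube stored as a NumPy array, comparing each against the magic constant "m"("m"3 + 1)/2.

import numpy as np

def is_simple_magic_cube(cube):
    """True if every row, column, pillar and the 4 space diagonals sum to the magic constant."""
    m = cube.shape[0]
    s = m * (m**3 + 1) // 2                     # magic constant of a normal cube of order m
    sums = []
    for i in range(m):
        for j in range(m):
            sums.append(cube[i, j, :].sum())    # rows
            sums.append(cube[i, :, j].sum())    # columns
            sums.append(cube[:, i, j].sum())    # pillars
    idx = np.arange(m)
    sums.append(cube[idx, idx, idx].sum())      # the 4 space diagonals (triagonals)
    sums.append(cube[idx, idx, m - 1 - idx].sum())
    sums.append(cube[idx, m - 1 - idx, idx].sum())
    sums.append(cube[m - 1 - idx, idx, idx].sum())
    return all(total == s for total in sums)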
Each of the 3"m" planar arrays must be a simple magic square. The 6 oblique squares are also simple magic. The smallest normal diagonal magic cube is order 5.<br>
These squares were referred to as 'Perfect' by Gardner and others. At the same time he referred to Langman’s 1962 pandiagonal cube also as 'Perfect'.<br>
Christian Boyer and Walter Trump now consider this "and" the next two classes to be "Perfect". (See "Alternate Perfect" below).<br>A. H. Frost referred to all but the simple class as Nasik cubes. <br>The smallest normal diagonal magic cube is order 5; see Diagonal magic cube. Minimum correct summations required = 3"m"2 + 6"m" + 4
All 4"m"² pantriagonals must sum correctly (that is 4 one-segment, 12("m"−1) two-segment, and 4("m"−2)("m"−1) three-segment). There may be some simple AND/OR pandiagonal magic squares, but not enough to satisfy any other classification. <br>The smallest normal pantriagonal magic cube is order 4; see Pantriagonal magic cube. <br>Minimum correct summations required = 7"m"². All pan-"r"-agonals sum correctly for "r" = 1 and 3.
A cube of this class was first constructed in late 2004 by Mitsutoshi Nakamura. This cube is a combination pantriagonal magic cube and diagonal magic cube. Therefore, all main and broken space diagonals sum correctly, and it contains 3"m" planar simple magic squares. In addition, all 6 oblique squares are pandiagonal magic squares. The only such cube constructed so far is order 8. It is not known what other orders are possible; see Pantriagdiag magic cube. Minimum correct summations required = 7"m"² + 6"m"
All 3"m" planar arrays must be pandiagonal magic squares. The 6 oblique squares are always magic (usually simple magic). Several of them "may" be pandiagonal magic.
Gardner also called this (Langman’s pandiagonal) a 'perfect' cube, presumably not realizing it was a higher class than Myer’s cube. See previous note re Boyer and Trump. <br>The smallest normal pandiagonal magic cube is order 7; see Pandiagonal magic cube.<br>Minimum correct summations required = 9"m"² + 4. All pan-"r"-agonals sum correctly for "r" = 1 and 2.
All 3"m" planar arrays must be pandiagonal magic squares. In addition, all pantriagonals must sum correctly. These two conditions combine to provide a total of 9"m" pandiagonal magic squares. <br>The smallest normal perfect magic cube is order 8; see Perfect magic cube.
Nasik;
A. H. Frost (1866) referred to all but the simple magic cube as Nasik!<br>
C. Planck (1905) redefined "Nasik" to mean magic hypercubes of any order or dimension in which all possible lines summed correctly.<br> i.e. Nasik is a preferred alternate, and less ambiguous term for the "perfect" class.<br>Minimum correct summations required = 13"m"². All pan-"r"-agonals sum correctly for "r" = 1, 2 and 3.
Alternate Perfect
Note that the above is a relatively new definition of "perfect". Until about 1995 there was much confusion about what constituted a "perfect" magic cube (see the discussion under Diagonal).<br> Included below are references and links to discussions of the old definition<br>
With the popularity of personal computers it became easier to examine the finer details of magic cubes. Also more and more work was being done with higher-dimension magic hypercubes. For example, John Hendricks constructed the world's first Nasik magic tesseract in 2000; it is classed as a perfect magic tesseract by Hendricks' definition.
Generalized for all dimensions.
A magic hypercube of dimension "n" is perfect if all pan-"n"-agonals sum correctly. Then all lower-dimension hypercubes contained in it are also perfect.<br>
For dimension 2, the pandiagonal magic square has been called "perfect" for many years. This is consistent with the perfect (Nasik) definitions given above for the cube. In this dimension, there is no ambiguity because there are only two classes of magic square, simple and perfect. <br>
In the case of 4 dimensions, the magic tesseract, Mitsutoshi Nakamura has determined that there are 18 classes. He has determined their characteristics and constructed examples of each.
And in this dimension also, the "Perfect" ("Nasik") magic tesseract has all possible lines summing correctly and all cubes and squares contained in it are also Nasik magic.
Another definition and a table.
Proper:
A proper magic cube is a magic cube belonging to one of the six classes of magic cube, but containing exactly the minimum requirements for that class of cube. i.e. a proper simple or pantriagonal magic cube would contain no magic squares, a proper diagonal magic cube would contain exactly 3"m" + 6 simple magic squares, etc. This term was coined by Mitsutoshi Nakamura in April, 2004.
Notes for the table
External links.
Cube classes
Perfect Cube
Tesseract Classes | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n\\times n \\times n"
},
{
"math_id": 2,
"text": "3"
}
]
| https://en.wikipedia.org/wiki?curid=1369521 |
1369832 | R*-tree | A variant of R-trees used for indexing spatial information
In data processing, R*-trees are a variant of R-trees used for indexing spatial information. R*-trees have a slightly higher construction cost than standard R-trees, as the data may need to be reinserted, but the resulting tree will usually have better query performance. Like the standard R-tree, an R*-tree can store both point and spatial data. It was proposed by Norbert Beckmann, Hans-Peter Kriegel, Ralf Schneider, and Bernhard Seeger in 1990.
Difference between R*-trees and R-trees.
Minimization of both coverage and overlap is crucial to the performance of R-trees. Overlap means that, on data query or insertion, more than one branch of the tree needs to be expanded (due to the way data is being split in regions which may overlap). A minimized coverage improves pruning performance, allowing exclusion of whole pages from search more often, in particular for negative range queries. The R*-tree attempts to reduce both, using a combination of a revised node split algorithm and the concept of forced reinsertion at node overflow. This is based on the observation that R-tree structures are highly susceptible to the order in which their entries are inserted, so an insertion-built (rather than bulk-loaded) structure is likely to be sub-optimal. Deletion and reinsertion of entries allows them to "find" a place in the tree that may be more appropriate than their original location.
When a node overflows, a portion of its entries are removed from the node and reinserted into the tree. (In order to avoid an indefinite cascade of reinsertions caused by subsequent node overflow, the reinsertion routine may be called only once in each level of the tree when inserting any one new entry.) This has the effect of producing more well-clustered groups of entries in nodes, reducing node coverage. Furthermore, actual node splits are often postponed, causing average node occupancy to rise. Re-insertion can be seen as a method of incremental tree optimization triggered on node overflow.
The R*-tree describes three metrics by which the quality of a split can be quantified: overlap (common to R*-trees and R-trees), defined as the area of the intersection of the bounding boxes of the two clusters; area-value, the sum of the areas of the two cluster bounding boxes; and margin-value, the sum of the perimeters of the two cluster bounding boxes.
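As a sketch (two-dimensional, with illustrative names of our own choosing, not taken from the original paper), the three metrics for a candidate split into bounding rectangles a and b can be computed as:
<syntaxhighlight lang="python">
from dataclasses import dataclass

@dataclass
class Rect:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def area(self):
        return (self.xmax - self.xmin) * (self.ymax - self.ymin)

    def margin(self):  # perimeter of the bounding rectangle
        return 2 * ((self.xmax - self.xmin) + (self.ymax - self.ymin))

def overlap(a, b):
    """Area of the intersection of two bounding rectangles (0 if disjoint)."""
    w = min(a.xmax, b.xmax) - max(a.xmin, b.xmin)
    h = min(a.ymax, b.ymax) - max(a.ymin, b.ymin)
    return max(w, 0.0) * max(h, 0.0)

def split_metrics(a, b):
    return {"overlap-value": overlap(a, b),
            "area-value": a.area() + b.area(),
            "margin-value": a.margin() + b.margin()}

print(split_metrics(Rect(0, 0, 4, 3), Rect(3, 1, 6, 5)))
</syntaxhighlight>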
Algorithm and complexity.
Worst-case query and delete complexity are thus identical to the R-tree. The insertion strategy of the R*-tree is, at formula_0, more complex than the linear split strategy (formula_1) of the R-tree, but less complex than the quadratic split strategy (formula_2), for a page size of formula_3 objects, and it has little impact on the total complexity. The total insert complexity is still comparable to that of the R-tree: reinsertions affect at most one branch of the tree and thus require formula_4 reinsertions, comparable to performing a split on a regular R-tree. So, overall, the complexity of the R*-tree is the same as that of a regular R-tree.
An implementation of the full algorithm must address many corner cases and tie situations not discussed here.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{O}(M \\log M)"
},
{
"math_id": 1,
"text": "\\mathcal{O}(M)"
},
{
"math_id": 2,
"text": "\\mathcal{O}(M^2)"
},
{
"math_id": 3,
"text": "M"
},
{
"math_id": 4,
"text": "\\mathcal{O}(\\log n)"
}
]
| https://en.wikipedia.org/wiki?curid=1369832 |
13698492 | Brown–Peterson cohomology | In mathematics, Brown–Peterson cohomology is a generalized cohomology theory introduced by
Edgar H. Brown and Franklin P. Peterson (1966), depending on a choice of prime "p". It is described in detail by Douglas Ravenel (2003, Chapter 4).
Its representing spectrum is denoted by BP.
Complex cobordism and Quillen's idempotent.
Brown–Peterson cohomology BP is a summand of MU("p"), which is complex cobordism MU localized at a prime "p". In fact MU"(p)" is a wedge product of suspensions of BP.
For each prime "p", Daniel Quillen showed there is a unique idempotent map of ring spectra ε from MUQ("p") to itself, with the property that ε([CP"n"]) is [CP"n"] if "n"+1 is a power of "p", and 0 otherwise. The spectrum BP is the image of this idempotent ε.
Structure of BP.
The coefficient ring formula_0 is a polynomial algebra over formula_1 on generators formula_2 in degrees formula_3 for formula_4.
formula_5 is isomorphic to the polynomial ring formula_6 over formula_0 with generators formula_7 in formula_8 of degrees formula_9.
The cohomology of the Hopf algebroid formula_10 is the initial term of the Adams–Novikov spectral sequence for calculating p-local homotopy groups of spheres.
BP is the universal example of a complex oriented cohomology theory whose associated formal group law is p-typical. | [
{
"math_id": 0,
"text": "\\pi_*(\\text{BP})"
},
{
"math_id": 1,
"text": "\\Z_{(p)}"
},
{
"math_id": 2,
"text": "v_n"
},
{
"math_id": 3,
"text": "2(p^n-1) "
},
{
"math_id": 4,
"text": "n\\ge 1"
},
{
"math_id": 5,
"text": "\\text{BP}_*(\\text{BP})"
},
{
"math_id": 6,
"text": "\\pi_*(\\text{BP})[t_1, t_2, \\ldots]"
},
{
"math_id": 7,
"text": "t_i"
},
{
"math_id": 8,
"text": "\\text{BP}_{2 (p^i-1)}(\\text{BP})"
},
{
"math_id": 9,
"text": "2 (p^i-1)"
},
{
"math_id": 10,
"text": "(\\pi_*(\\text{BP}), \\text{BP}_*(\\text{BP}))"
}
]
| https://en.wikipedia.org/wiki?curid=13698492 |
1370831 | Velocity of money | Rate of money changing hands
The velocity of money measures the number of times that one unit of currency is used to purchase goods and services within a given time period. In other words, it is the number of times money changes hands. The concept relates the size of economic activity to a given money supply, and the speed of money exchange is one of the variables that determine inflation. The measure of the velocity of money is usually the ratio of the gross national product (GNP) to a country's money supply.
If the velocity of money is increasing, then transactions are occurring between individuals more frequently. The velocity of money changes over time and is influenced by a variety of factors.
Because of the nature of financial transactions, the velocity of money cannot be determined empirically.
Illustration.
If, for example, in a very small economy, a farmer and a mechanic, with just $50 between them, buy new goods and services from each other in just three transactions over the course of a year, then $100 changed hands during that year, even though there is only $50 in this little economy. That $100 level is possible because each dollar was spent on new goods and services an average of twice a year, which is to say that the velocity was formula_0. If the farmer bought a used tractor from the mechanic or made a gift to the mechanic, it would not go into the numerator of velocity, because that transaction would not be part of this tiny economy's gross domestic product (GDP).
Relation to money demand.
The velocity of money provides another perspective on money demand. Given the nominal flow of transactions using money, if the interest rate on alternative financial assets is high, people will not want to hold much money relative to the quantity of their transactions—they try to exchange it fast for goods or other financial assets, and money is said to "burn a hole in their pocket" and velocity is high. This situation is precisely one of money demand being low. Conversely, with a low opportunity cost velocity is low and money demand is high. Both situations contribute to the time-varying nature of the money demand. In money market equilibrium, some economic variables (interest rates, income, or the price level) have adjusted to equate money demand and money supply.
The quantitative relation between velocity and money demand is given by Velocity = Nominal Transactions (however defined) divided by Nominal Money Demand.
Indirect measurement.
In practice, attempts to measure the velocity of money are usually indirect. The transactions velocity can be computed as
formula_1
where
formula_2 is the velocity of money for all transactions in a given time frame;
formula_3 is the price level;
formula_4 is the amount of transactions occurring in a given time frame; and
formula_5 is the total nominal amount of money in circulation on average in the economy (see “Money supply” for details).
Thus formula_6 is the total nominal amount of transactions per period.
Values of formula_6 and formula_7 permit calculation of formula_8.
Similarly, the income velocity of money may be written as
formula_9
where
formula_10 is the velocity for transactions counting towards national or domestic product;
formula_11 is an index of real expenditures (on newly produced goods and services); and
formula_12 is nominal national or domestic product.
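For illustration (the figures below are invented, not actual statistics), the income velocity follows directly from the definition:
<syntaxhighlight lang="python">
# Illustrative, invented figures: nominal GDP of 20 trillion and an average
# money stock of 4 trillion (same currency units) give an income velocity of 5.
nominal_gdp = 20_000e9    # P * Q, nominal national (or domestic) product
money_stock = 4_000e9     # M, average amount of money in circulation

velocity = nominal_gdp / money_stock    # V = PQ / M
print(f"Income velocity of money: {velocity:.2f} per period")
</syntaxhighlight>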
Determination.
The determinants and consequent stability of the velocity of money are a subject of controversy across and within schools of economic thought. Those favoring a quantity theory of money have tended to believe that, in the absence of inflationary or deflationary expectations, velocity will be technologically determined and stable, and that such expectations will not generally arise without a signal that overall prices have changed or will change.
This assumed stability came under scrutiny in 2020-2021, as the levels of the M1 and M2 money supply grew at an increasingly volatile rate while the velocity of M1 and M2 flattened to a stable new low of about 1.10. While interest rates remained stable under the Fed's benchmark rate, the economy saved more of its M1 and M2 rather than consuming it, in the expectation that the Fed would raise its benchmark interest rate from the all-time lows of 0.50%. During this time, inflation rose to new decade highs without a corresponding rise in the velocity of money.
Criticism.
Ludwig von Mises in a 1968 letter to Henry Hazlitt said: "The main deficiency of the velocity of circulation concept is that it does not start from the actions of individuals but looks at the problem from the angle of the whole economic system. This concept in itself is a vicious mode of approaching the problem of prices and purchasing power. It is assumed that, other things being equal, prices must change in proportion to the changes occurring in the total supply of money available. This is not true."
References.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2/\\text{year}"
},
{
"math_id": 1,
"text": "V_T =\\frac{PT}{M}"
},
{
"math_id": 2,
"text": "V_T\\,"
},
{
"math_id": 3,
"text": "P\\,"
},
{
"math_id": 4,
"text": "T\\,"
},
{
"math_id": 5,
"text": "M\\,"
},
{
"math_id": 6,
"text": "PT"
},
{
"math_id": 7,
"text": "M"
},
{
"math_id": 8,
"text": "V_T"
},
{
"math_id": 9,
"text": "V =\\frac{PQ}{M}"
},
{
"math_id": 10,
"text": "V\\,"
},
{
"math_id": 11,
"text": "Q\\,"
},
{
"math_id": 12,
"text": "PQ\\,"
}
]
| https://en.wikipedia.org/wiki?curid=1370831 |
1371087 | Saving identity | The saving identity or the saving-investment identity is a concept in national income accounting stating that the amount saved in an economy will be the amount invested in new physical machinery, new inventories, and the like. More specifically, in an open economy (an economy with foreign trade and capital flows), private saving plus governmental saving (the government budget surplus or the negative of the deficit) plus foreign investment domestically (capital inflows from abroad) must equal private physical investment. In other words, the flow variable investment must be financed by some combination of private domestic saving, government saving (surplus), and foreign saving (foreign capital inflows).
This is an "identity", meaning it is true by definition. This identity only holds true because investment here is defined as including inventory accumulation, both deliberate and unintended. Thus, should consumers decide to save more and spend less, the fall in demand would lead to an increase in business inventories. The change in inventories brings saving and investment into balance without any intention by business to increase investment. Also, the identity holds true because saving is defined to include private saving and "public saving" (public saving is positive when there is a budget surplus, that is, when public debt is being reduced).
As such, this does not imply that an increase in saving must lead directly to an increase in investment. Indeed, businesses may respond to increased inventories by decreasing both output and intended investment. Likewise, this reduction in output by business will reduce income, forcing an unintended reduction in saving. Even if the end result of this process is ultimately a lower level of investment, it will nonetheless remain true at any given point in time that the saving-investment identity holds.
Algebraic statement.
Closed economy identity.
In a closed economy with government, we have:
formula_0
formula_1
This means that the remainder of aggregate output (formula_2), after subtracting consumption by individuals (formula_3) and government (formula_4), must equal investment (formula_5).
However, it is also true that:
formula_6
formula_7
"T" is the amount of taxes levied. This equation says that saving (formula_8) is equal to disposable income (formula_9) minus consumption (formula_3). Combining both expressions (by solving for formula_10 on one side and equating), gives:
formula_11
formula_12
The equality of investment and saving is the basis of investment-savings (IS) theory.
Open economy identity.
In an open economy, a similar expression can be found. The national income identity is:
formula_13
In this equation, formula_14 is the balance of trade (exports minus imports). Private saving is still formula_15, so again combining (by solving for formula_10 on one side and equating) gives:
formula_16
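A numerical illustration of the open-economy identity (the figures are arbitrary and invented, chosen only so the accounts balance):
<syntaxhighlight lang="python">
# Arbitrary illustrative national accounts (not real data).
C, I, G = 650.0, 200.0, 120.0   # consumption, investment, government purchases
X, M = 90.0, 60.0               # exports and imports
T = 150.0                       # taxes

Y = C + I + G + (X - M)         # national income identity
private_saving = Y - T - C
public_saving = T - G           # budget surplus
capital_inflow = M - X          # foreign saving invested domestically

# I = private saving + public saving + capital inflow
assert abs(I - (private_saving + public_saving + capital_inflow)) < 1e-9
</syntaxhighlight>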
Intended and unintended investment.
In the above equations, formula_5 is total investment, both intended and unintended (with unintended investment being unintended accumulation of inventories). With this interpretation of formula_5, the above equations are identities: they automatically hold by definition regardless of the values of any exogenous variables.
If formula_5 is redefined as only intended investment, then all of the above equations are no longer identities but rather are statements of equilibrium in the goods market.
Views from classical macroeconomic theorists.
Adam Smith.
Adam Smith notes this in "The Wealth of Nations" and it figures into the question of general equilibrium and the general glut controversy. In the general equilibrium model savings must equal investment for the economy to clear. The economy grows as division of labor increases productivity of laborers. This increased productivity in laborers creates a surplus that will be split between capitalists’ expenditure on goods for themselves and investment in other capital. The accumulation of saving and parsimony of capitalists leads to greater increases in capital which leads to a more productive state. Smith advocates this parsimony of profit as a virtue. Smith provides the example of the colonial United States for the positive relationship between the wage fund and investment in capital. He says that England is a much more wealthy economy than anywhere in America; however, he believes that the true wealth lies within the market for wage funds and the growth rate of the population. The colonies have a much higher wage rate than England due to a lower cost of provisions and necessities to survive, which result in a higher competition for human capital among the “masters” of the economy, which in turn raises the wage rate and increases the wage fund. The core of this phenomenon is why Adam Smith believes in the saving-investment identity. The reason why wages go up and there is competition between employers is the result of a constant influx of capital that is equal to or greater than the rate at which the amount of labor increases.
David Ricardo.
To understand why Ricardo’s view of the saving-investment identity differed from Smith’s, one must first examine Ricardo’s definition of rent. This rent adds no new value to society, but since land-owners are profit seeking, and since population is increasing in this time of growth, land that yields beyond the value of sustenance for workers is sought out and the return on those pieces of land suffers and lower rent. This is the basis for what Ricardo believed about the saving-investment identity. He agreed with Smith that parsimony and saving was a virtue, and that saving and investment were equal, but he introduced the notion that returns diminish as population decreases. The diminished rate of return results in something Ricardo calls the stationary state, which is when eventually a minimum profit rate is reached at which new investment (i.e., additional capital accumulation) ceases. As long as profits are positive, the capital stock is increasing, and the increased demand for labor will temporarily increase the average wage rate. But when wage rates rise above subsistence population increases. A larger population requires a greater food supply, so that, barring imports, cultivation must be extended to inferior lands (lower rent). As this occurs, rents increase and profits fall, until ultimately the stationary state is reached.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Y = C + I + G"
},
{
"math_id": 1,
"text": "\\to I = Y - C - G"
},
{
"math_id": 2,
"text": "Y"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "G"
},
{
"math_id": 5,
"text": "I"
},
{
"math_id": 6,
"text": "Y = C + S + T"
},
{
"math_id": 7,
"text": "\\to S = Y - T - C"
},
{
"math_id": 8,
"text": "S"
},
{
"math_id": 9,
"text": "Y-T"
},
{
"math_id": 10,
"text": "Y - C"
},
{
"math_id": 11,
"text": "I + G = S + T"
},
{
"math_id": 12,
"text": "\\to I = \\underbrace{S}_{\\text{private saving}} + \\underbrace{(T - G)}_{\\text{public saving}}"
},
{
"math_id": 13,
"text": "Y = C + I + G + (X - M)"
},
{
"math_id": 14,
"text": "(X-M)"
},
{
"math_id": 15,
"text": "S = Y - T - C"
},
{
"math_id": 16,
"text": "I + G + (X - M) = S + T \\to I = \\underbrace{S}_{\\text{private saving}} + \\underbrace{(T - G)}_{\\text{public saving}} + \\underbrace{(M - X)}_{\\text{capital inflow}}"
}
]
| https://en.wikipedia.org/wiki?curid=1371087 |
13714248 | Bisymmetric matrix | Square matrix symmetric about both its diagonal and anti-diagonal
In mathematics, a bisymmetric matrix is a square matrix that is symmetric about both of its main diagonals. More precisely, an "n" × "n" matrix A is bisymmetric if it satisfies both "A" = "A"T (it is its own transpose), and "AJ" = "JA", where J is the "n" × "n" exchange matrix.
For example, any matrix of the form
formula_0
is bisymmetric. The associated formula_1 exchange matrix for this example is
formula_2
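Both defining conditions can be checked directly, for instance with NumPy (the example matrix here is our own, not the one above):
<syntaxhighlight lang="python">
import numpy as np

A = np.array([[1, 2, 3],
              [2, 5, 2],
              [3, 2, 1]])
J = np.fliplr(np.eye(3, dtype=int))            # the 3 x 3 exchange matrix

is_symmetric = np.array_equal(A, A.T)          # A = A^T
commutes_with_J = np.array_equal(A @ J, J @ A) # AJ = JA
print(is_symmetric and commutes_with_J)        # True: A is bisymmetric
</syntaxhighlight>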
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\begin{bmatrix}\na & b & c & d & e \\\\\nb & f & g & h & d \\\\\nc & g & i & g & c \\\\\nd & h & g & f & b \\\\\ne & d & c & b & a \\end{bmatrix}\n= \\begin{bmatrix}\na_{11} & a_{12} & a_{13} & a_{14} & a_{15} \\\\\na_{12} & a_{22} & a_{23} & a_{24} & a_{14} \\\\\na_{13} & a_{23} & a_{33} & a_{23} & a_{13} \\\\\na_{14} & a_{24} & a_{23} & a_{22} & a_{12} \\\\\na_{15} & a_{14} & a_{13} & a_{12} & a_{11}\n\\end{bmatrix}"
},
{
"math_id": 1,
"text": "5\\times 5"
},
{
"math_id": 2,
"text": "J_{5} = \\begin{bmatrix}\n0 & 0 & 0 & 0 & 1 \\\\\n0 & 0 & 0 & 1 & 0 \\\\\n0 & 0 & 1 & 0 & 0 \\\\\n0 & 1 & 0 & 0 & 0 \\\\\n1 & 0 & 0 & 0 & 0\n\\end{bmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=13714248 |
1371746 | Diceware | Method for generating passphrases using dice
Diceware is a method for creating passphrases, passwords, and other cryptographic variables using ordinary dice as a hardware random number generator. For each word in the passphrase, five rolls of a six-sided die are required. The numbers from 1 to 6 that come up in the rolls are assembled as a five-digit number, e.g. "43146". That number is then used to look up a word in a cryptographic word list. In the original Diceware list "43146" corresponds to "munch". By generating several words in sequence, a lengthy passphrase can thus be constructed randomly.
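The lookup procedure can be mimicked in software (a rough sketch, not from the article, which relies on physical dice as the randomness source; the wordlist mapping from five-digit roll strings to words is assumed to be loaded elsewhere):
<syntaxhighlight lang="python">
import math
import secrets

def diceware_word(wordlist):
    """Simulate five die rolls and look the result up, e.g. "43146" -> "munch"."""
    roll = "".join(str(secrets.randbelow(6) + 1) for _ in range(5))
    return wordlist[roll]

def diceware_passphrase(wordlist, n_words=6):
    return " ".join(diceware_word(wordlist) for _ in range(n_words))

print(5 * math.log2(6))   # ~12.9 bits of entropy contributed by each word
</syntaxhighlight>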
A Diceware word list is any list of 6^5 = 7,776 unique words, preferably ones the user will find easy to spell and to remember. The contents of the word list do not have to be protected or concealed in any way, as the security of a Diceware passphrase is in the number of words selected, and the number of words each selected word could be taken from. Lists have been compiled for several languages, including Basque, Bulgarian, Catalan, Chinese, Czech, Danish, Dutch, English, Esperanto, Estonian, Finnish, French, German, Greek, Hebrew, Hungarian, Italian, Japanese, Latin, Māori, Norwegian, Polish, Portuguese, Romanian, Russian, Slovak, Slovenian, Spanish, Swedish and Turkish.
The level of unpredictability of a Diceware passphrase can be easily calculated: each word adds 12.9 bits of entropy to the passphrase (that is, formula_0 bits). Originally, in 1995, Diceware creator Arnold Reinhold considered five words (about 64.6 bits) the minimal length needed by average users. However, in 2014 Reinhold started recommending that at least six words (about 77.5 bits) be used.
This level of unpredictability assumes that potential attackers know three things: that Diceware has been used to generate the passphrase, the particular word list used, and exactly how many words make up the passphrase. If the attacker has less information, the entropy can be greater than this.
The above calculations of the Diceware algorithm's entropy assume that, as recommended by Diceware's author, each word is separated by a space. If, instead, words are simply concatenated, the calculated entropy is slightly reduced due to redundancy; for example, the three-word Diceware phrases "in put clammy" and "input clam my" become identical if the spaces are removed.
EFF wordlists.
The Electronic Frontier Foundation published three alternative English diceware word lists in 2016, further emphasizing ease-of-memorization with a bias against obscure, abstract or otherwise problematic words; one tradeoff is that typical EFF-style passphrases require typing a larger number of characters.
Snippet.
The original diceware word list consists of a line for each of the possible five-die combinations. One excerpt:
Examples.
Diceware wordlist passphrase examples:
EFF wordlist passphrase examples:
The XKCD #936 strip shows a password similar to a Diceware-generated one, even though the wordlist used is shorter than the regular 7,776-word list used for Diceware.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\log_2(6^5)"
}
]
| https://en.wikipedia.org/wiki?curid=1371746 |
1371940 | AEX index | Dutch stock market index
The AEX index, derived from Amsterdam Exchange index, is a stock market index composed of Dutch companies that trade on Euronext Amsterdam, formerly known as the "Amsterdam Stock Exchange". Started in 1983, the index is composed of a maximum of 25 of the most frequently traded securities on the exchange. It is one of the main national indices of the stock exchange group Euronext alongside Euronext Brussels' BEL20, Euronext Dublin's ISEQ 20, Euronext Lisbon's PSI-20, the Oslo Bors OBX Index, and Euronext Paris's CAC 40.
History.
The AEX started from a base level of 100 index points on 3 January 1983 (a corresponding value of 45.378 is used for historic comparisons due to the adoption of the Euro). The index's peak to date, 816.91 points, was set on 26 October 2021. After the dot-com bubble in 1999, the index value more than halved over the following three years before recovering in line with most global financial markets.
The AEX index suffered its second-largest one-day loss on March 12, 2020, when the index closed down almost 11% during the coronavirus pandemic.
The AEX index suffered its third-largest one-day loss on September 29, 2008, when the index closed down almost 9%. The decade between 1998 and 2008 was a poor one for the AEX index, which was the worst-performing stock index in that period except for the OMX Iceland 15. The preceding years had been considerably better compared to the rest of the world.
Annual Returns.
The following table shows the annual development of the AEX index since 1983.
Rules.
Selection.
As of 2011, the AEX index composition is reviewed four times a year - a full "annual" review in March and interim "quarterly" reviews in June, September and December. Any changes made as a result of the reviews take effect on the third Friday of the month. Previously reviews were held in March and September only. Prior to 2008, index changes were made only annually in March.
At the main March review date, the 23 companies listed on Euronext Amsterdam's regulated market with the highest share turnover (in Euros) over the previous year are admitted to the index. Of the companies ranked between 24th and 27th, a further two are selected with preference given to existing constituents of the index. Companies which have fewer than 25% of shares considered free float on Euronext Amsterdam are, however, ineligible for inclusion. Unlike some other European benchmark equity indices (such as the OMXS30), if a company has more than one class of shares traded on the exchange, only the most frequently traded of these will be accepted into the AEX. If a company or companies are removed from the index due to delisting, acquisition or another reason, no replacements are made until the next review date.
At the three interim reviews in June, September and December, no changes are made to the AEX unless either the index has seen one or more constituents removed, or a non-constituent possesses a share turnover ranked 15th or higher overall over the previous 12 months. If vacancies are to be filled, the highest-ranking non-AEX companies are selected to join the index.
Weighting.
The AEX is a capitalization-weighted index. At each main annual review, the index weightings of companies in the index are capped at 15%, but range freely with share price subsequently. The index weights are calculated with respect to the closing prices of the relevant companies on March 1. At the interim reviews, weightings after adjustment are left as close as possible to those of the previous day and are not re-capped.
Calculation.
The index comprises a basket of shares, the numbers of which are based on the constituent weights and index value at the time of readjustment. The value of the index at any given time, It, is calculated using the following formula:
formula_0
with "t" the day of calculation; "N" the number of constituent shares in the index (usually 25); "Qi,t" the number of shares of company "i" on day "t"; "Fi,t" the free float factor of share "i"; "fi,t" the capping factor of share "i" (exactly 1 for all companies not subject to the 15% cap); "Ci,t" the price of share "i" on day "t"; and "dt" the index divisor (a factor calculated from the base capitalisation of the index, which is updated to reflect corporate actions and other index changes.
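A minimal sketch of the calculation (the basket and divisor below are invented for illustration and are not actual index data):
<syntaxhighlight lang="python">
def index_value(constituents, divisor):
    """I_t = sum(Q * F * f * C) / d_t over the constituent shares."""
    return sum(q * F * f * c for q, F, f, c in constituents) / divisor

# Each tuple: (shares Q, free-float factor F, capping factor f, price C).
basket = [(1_000_000, 0.9, 1.0, 25.0),
          (2_500_000, 1.0, 1.0, 10.0),
          (500_000, 0.8, 1.0, 80.0)]
print(index_value(basket, divisor=150_000.0))   # 530.0 index points
</syntaxhighlight>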
Contract Specifications.
The AEX Index is traded as a future on the Euronext Equities & Index Derivatives exchange (EUREID) under the ticker symbol FTI. The size of each contract is €200 × the AEX index value in points (e.g. 200 × 667.55 = €133,510).
Composition.
The index is composed of the following listings as of 30 June 2021.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " I_t = \\frac{\\sum_{i=1}^N Q_{i,t}\\,F_{i,t}\\,f_{i,t}\\,C_{i,t}\\,}{d_{t}\\,} "
}
]
| https://en.wikipedia.org/wiki?curid=1371940 |
13721636 | Goku (disambiguation) | Goku is the main character in "Dragon Ball" media.
Goku may also refer to:
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title "Goku".
{
"math_id": 0,
"text": "10^{48}"
}
]
| https://en.wikipedia.org/wiki?curid=13721636 |
1372167 | Amenable number | An amenable number is a positive integer for which there exists a multiset of as many integers as the original number that both add up to the original number and when multiplied together give the original number. To put it algebraically, for a positive integer "n", there is a multiset of "n" integers {a1, ..., an}, for which the equalities
formula_0
hold. Negative numbers are allowed in the multiset. For example, 5 is amenable since 5 = 1 + (-1) + 1 + (-1) + 5 and also 5 = 1 × (-1) × 1 × (-1) × 5. All and only those numbers congruent to 0 or 1 (mod 4), except 4, are amenable.
The first few amenable numbers are: 1, 5, 8, 9, 12, 13 ... OEIS:
A solution for integers of the form "n" = 4"k" + 1 could be given by a set of 2"k" (+1)s and 2"k" (-1)s and "n" itself. (This generalizes the example of 5 given above.)
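This construction is easy to verify mechanically (an illustrative sketch, not part of the article):
<syntaxhighlight lang="python">
from math import prod

def witness(n):
    """Multiset for n = 4k + 1: 2k ones, 2k negative ones, and n itself."""
    k = (n - 1) // 4
    return [1] * (2 * k) + [-1] * (2 * k) + [n]

for n in (5, 9, 13, 17, 21):
    ms = witness(n)
    assert len(ms) == n and sum(ms) == n and prod(ms) == n
</syntaxhighlight>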
Although not obvious from the definition, the set of amenable numbers is closed under multiplication (the product of two amenable numbers is an amenable number).
All composite numbers would be amenable if the multiset was allowed to be of any length, because, even if other solutions are available, one can always obtain a solution by taking the prime factorization (expressed with repeated factors rather than exponents) and add as many 1s as necessary to add up to "n". The product of this set of integers will yield "n" no matter how many 1s there are in the set. Furthermore, still under this assumption, any integer "n" would be amenable. Consider the inelegant solution for "n" of {1, -1, 1, -1, "n"}. In the sum, the positive ones are cancelled out by the negative ones, leaving "n", while in the product, the two negative ones cancel out the effect of their signs.
Amenable numbers should not be confused with amicable numbers, which are pairs of integers whose divisors add up to each other. | [
{
"math_id": 0,
"text": " n = \\sum_{i=1}^n a_i = \\prod_{i=1}^n a_i"
}
]
| https://en.wikipedia.org/wiki?curid=1372167 |
1372255 | Highly cototient number | Numbers k where x - phi(x) = k has many solutions
In number theory, a branch of mathematics, a highly cototient number is a positive integer formula_0 which is above 1 and has more solutions to the equation
formula_1
than any other integer below formula_0 and above 1. Here, formula_2 is Euler's totient function. There are infinitely many solutions to the equation for
formula_0 = 1
so this value is excluded in the definition. The first few highly cototient numbers are:
2, 4, 8, 23, 35, 47, 59, 63, 83, 89, 113, 119, 167, 209, 269, 299, 329, 389, 419, 509, 629, 659, 779, 839, 1049, 1169, 1259, 1469, 1649, 1679, 1889, ... (sequence in the OEIS)
Many of the highly cototient numbers are odd.
The concept is somewhat analogous to that of highly composite numbers. Just as there are infinitely many highly composite numbers, there are also infinitely many highly cototient numbers. Computations become harder, since integer factorization becomes harder as the numbers get larger.
Example.
The cototient of formula_3 is defined as formula_4, i.e. the number of positive integers less than or equal to formula_3 that have at least one prime factor in common with formula_3. For example, the cototient of 6 is 4 since these four positive integers have a prime factor in common with 6: 2, 3, 4, 6. The cototient of 8 is also 4, this time with these integers: 2, 4, 6, 8. There are exactly two numbers, 6 and 8, which have cototient 4. There are fewer numbers which have cototient 2 and cototient 3 (one number in each case), so 4 is a highly cototient number.
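A brute-force sketch (not from the article) reproduces the start of the sequence; it relies on the fact that a composite number x has smallest prime factor at most √x, so its cototient x − φ(x) is at least √x, meaning that scanning x up to K² finds every solution with cototient at most K (primes all have cototient 1 and are excluded):
<syntaxhighlight lang="python">
from collections import Counter

def phi(n):                      # Euler's totient via trial-division factorization
    result, m, p = n, n, 2
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

K = 100                          # look for record-setters among cototients 2..K
counts = Counter()
for x in range(2, K * K + 1):
    c = x - phi(x)               # the cototient of x
    if 2 <= c <= K:              # primes have cototient 1 and are ignored
        counts[c] += 1

record = 0
for k in range(2, K + 1):
    if counts[k] > record:       # more solutions than any smaller k > 1
        record = counts[k]
        print(k, "has", record, "solutions")   # 2, 4, 8, 23, 35, 47, 59, 63, ...
</syntaxhighlight>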
Primes.
The first few highly cototient numbers which are primes are
2, 23, 47, 59, 83, 89, 113, 167, 269, 389, 419, 509, 659, 839, 1049, 1259, 1889, 2099, 2309, 2729, 3359, 3989, 4289, 4409, 5879, 6089, 6719, 9029, 9239, ... (sequence in the OEIS)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "x - \\phi(x) = k"
},
{
"math_id": 2,
"text": "\\phi"
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": "x - \\phi(x)"
}
]
| https://en.wikipedia.org/wiki?curid=1372255 |
13722767 | Willmore conjecture | Lower bound on the integrated squared mean curvature of a torus
In differential geometry, the Willmore conjecture is a lower bound on the Willmore energy of a torus. It is named after the English mathematician Tom Willmore, who conjectured it in 1965. A proof by Fernando Codá Marques and André Neves was announced in 2012 and published in 2014.
Willmore energy.
Let "v" : "M" → R3 be a smooth immersion of a compact, orientable surface. Giving "M" the Riemannian metric induced by "v", let "H" : "M" → R be the mean curvature (the arithmetic mean of the principal curvatures "κ"1 and "κ"2 at each point). In this notation, the "Willmore energy" "W"("M") of "M" is given by
formula_0
It is not hard to prove that the Willmore energy satisfies "W"("M") ≥ 4"π", with equality if and only if "M" is an embedded round sphere.
Statement.
Calculation of "W"("M") for a few examples suggests that there should be a better bound than "W"("M") ≥ 4"π" for surfaces with genus "g"("M") > 0. In particular, calculation of "W"("M") for tori with various symmetries led Willmore to propose in 1965 the following conjecture, which now bears his name:
For every smooth immersed torus "M" in R3, "W"("M") ≥ 2"π"2.
In 1982, Peter Wai-Kwong Li and Shing-Tung Yau proved the conjecture in the non-embedded case, showing that if formula_1 is an immersion of a compact surface, which is "not" an embedding, then "W"("M") is at least 8"π".
In 2012, Fernando Codá Marques and André Neves proved the conjecture in the embedded case, using the Almgren–Pitts min-max theory of minimal surfaces. Martin Schmidt claimed a proof in 2002, but it was not accepted for publication in any peer-reviewed mathematical journal (although it did not contain a proof of the Willmore conjecture, he proved some other important conjectures in it). Prior to the proof of Marques and Neves, the Willmore conjecture had already been proved for many special cases, such as tube tori (by Willmore himself), and for tori of revolution (by Langer & Singer).
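As a numerical illustration (not part of the proofs above; it assumes the standard parametrization of a torus of revolution, for which the Willmore energy depends only on the ratio of the two radii), the bound 2"π"2 is attained when the ratio of the radii is √2:
<syntaxhighlight lang="python">
import numpy as np

def willmore_energy_of_torus(c, n=2000):
    """Integrate H^2 dA over a torus of revolution with tube radius 1 and
    centre-circle radius c; the energy is scale-invariant, so only c matters."""
    theta = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    # Principal curvatures are 1 and cos(theta)/(c + cos(theta)); H is their mean.
    H = (c + 2.0 * np.cos(theta)) / (2.0 * (c + np.cos(theta)))
    dA = (c + np.cos(theta)) * (2.0 * np.pi) * (2.0 * np.pi / n)
    return float(np.sum(H ** 2 * dA))

print(willmore_energy_of_torus(np.sqrt(2.0)))   # ~19.7392
print(2.0 * np.pi ** 2)                         # 19.7392...
</syntaxhighlight>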
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " W(M) = \\int_M H^2 \\, dA. "
},
{
"math_id": 1,
"text": "f:\\Sigma\\to S^3"
}
]
| https://en.wikipedia.org/wiki?curid=13722767 |
13723417 | Cold gas thruster | Type of rocket engine
A cold gas thruster (or a cold gas propulsion system) is a type of rocket engine which uses the expansion of a (typically inert) pressurized gas to generate thrust. As opposed to traditional rocket engines, a cold gas thruster does not house any combustion and therefore has lower thrust and efficiency compared to conventional monopropellant and bipropellant rocket engines. Cold gas thrusters have been referred to as the "simplest manifestation of a rocket engine" because their design consists only of a fuel tank, a regulating valve, a propelling nozzle, and the little required plumbing. They are the cheapest, simplest, and most reliable propulsion systems available for orbital maintenance, maneuvering and attitude control.
Cold gas thrusters are predominantly used to provide stabilization for smaller space missions which require contaminant-free operation. Specifically, CubeSat propulsion system development has been predominantly focused on cold gas systems because CubeSats have strict regulations against pyrotechnics and hazardous materials.
Design.
The nozzle of a cold gas thruster is generally a convergent-divergent nozzle that provides the required thrust in flight. The nozzle is shaped such that the high-pressure, low-velocity gas that enters the nozzle is accelerated as it approaches the throat (the narrowest part of the nozzle), where the gas velocity matches the speed of sound.
Performance.
Cold gas thrusters benefit from their simplicity; however, they do fall short in other respects. The advantages and disadvantages of a cold gas system can be summarized as:
Thrust.
Thrust is generated by momentum exchange between the exhaust and the spacecraft, which is given by Newton's second law as formula_0 where formula_1 is the mass flow rate, and formula_2 is the velocity of the exhaust.
For a cold gas thruster in space, where the thrusters are designed for infinite expansion (since the ambient pressure is zero), the thrust is given as
formula_3
Where formula_4 is the area of the throat, formula_5 is the chamber pressure in the nozzle, formula_6 is the specific heat ratio, formula_7 is the exit pressure of the propellant, and formula_8 is the exit area of the nozzle.
Specific Impulse.
The specific impulse (Isp) of a rocket engine is the most important metric of efficiency; a high specific impulse is normally desired. Cold gas thrusters have a significantly lower specific impulse than most other rocket engines because they do not take advantage of chemical energy stored in the propellant. The theoretical specific impulse for cold gases is given by
formula_9
where formula_10 is standard gravity and formula_11 is the characteristic velocity which is given by
formula_12
where formula_13 is the sonic velocity of the propellant.
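A small worked example (the nitrogen figures below are assumed for illustration and are not taken from the article) evaluating these expressions in the limit of negligible exit pressure, where the bracketed pressure term tends to 1:
<syntaxhighlight lang="python">
import math

# Assumed illustrative values for a small nitrogen thruster firing into vacuum:
gamma = 1.4          # specific heat ratio of N2
Pc = 1.0e6           # chamber pressure, Pa
At = 1.0e-6          # throat area, m^2 (1 mm^2)
R, T = 296.8, 300.0  # specific gas constant (J/kg/K) and temperature of N2
g0 = 9.80665         # standard gravity, m/s^2

# Common expansion factor with P_e -> 0, so the pressure term tends to 1:
expansion = math.sqrt((2.0 / (gamma - 1.0)) *
                      (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (gamma - 1.0)))

F = At * Pc * gamma * expansion                    # vacuum thrust, ~1.8 N
a0 = math.sqrt(gamma * R * T)                      # sonic velocity of the gas
c_star = a0 / (gamma * (2.0 / (gamma + 1.0)) ** ((gamma + 1.0) / (2.0 * (gamma - 1.0))))
Isp = c_star * gamma * expansion / g0              # theoretical Isp, ~80 s

print(f"F = {F:.2f} N, Isp = {Isp:.1f} s")
</syntaxhighlight>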
Propellants.
Cold gas systems can use either a solid, liquid or gaseous propellant storage system; but the propellant must exit the nozzle in gaseous form. Storing liquid propellant may pose attitude control issues due to the sloshing of fuel in its tank.
When choosing a propellant, a high specific impulse, and a high specific impulse per unit volume of propellant should be considered.
Overview of the specific impulses of propellants suitable for a cold gas propulsion system:
Properties at 0°C and 241 bar.
Applications.
Human Propulsion.
Cold gas thrusters are especially well suited for astronaut propulsion units due to the inert and non-toxic nature of their propellants.
Hand-Held Maneuvering Unit.
Main article: Hand-Held Maneuvering Unit
The Hand-Held Maneuvering Unit (HHMU) used on the Gemini 4 and 10 missions used pressurized oxygen to facilitate the astronauts' extravehicular activities. Although the patent of the HHMU does not categorize the device as a cold gas thruster, the HHMU is described as a "propulsion unit utilizing the thrust developed by a pressurized gas escaping various nozzle means."
Manned Maneuvering Unit.
Twenty-four cold gas thrusters utilizing pressurized gaseous nitrogen were used on the Manned Maneuvering Unit (MMU). The thrusters provided full 6-degree-of-freedom control to the astronaut wearing the MMU. Each thruster provided 1.4 lbf (6.23 N) of thrust. The two propellant tanks onboard provided a total of 40 lb (18 kg) of gaseous nitrogen at 4500 psi, which provided sufficient propellant to generate a change in velocity of 110 to 135 ft/sec (33.53 to 41.15 m/s). At a nominal mass, the MMU had a translational acceleration of 0.3±0.05 ft/sec² (9.1±1.5 cm/s²) and a rotational acceleration of 10.0±3.0 deg/sec² (0.1745±0.052 rad/sec²).
Vernier Engines.
Main article: Vernier Engines
Larger cold gas thrusters are employed to help in the attitude control of the first stage of the SpaceX Falcon 9 rocket as it returns to land.
Automotive.
In a tweet in June 2018, Elon Musk proposed the use of air-based cold gas thrusters to improve car performance.
In September 2018, Bosch successfully tested its proof-of-concept safety system for righting a slipping motorcycle using cold gas thrusters. The system senses a sideways wheel slip and uses a lateral cold gas thruster to keep the motorcycle from slipping further.
Research.
The main focus of research is miniaturization of cold gas thrusters using microelectromechanical systems.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F=\\dot{m}V_e"
},
{
"math_id": 1,
"text": "\\dot{m}"
},
{
"math_id": 2,
"text": "V_e"
},
{
"math_id": 3,
"text": "F=A_tP_c\\gamma \\sqrt{ \\left (\\frac{2}{\\gamma - 1}\\right ) \\left( \\frac{2}{\\gamma + 1} \\right)^{\\frac{\\gamma + 1}{\\gamma - 1}} \\left (1 - \\frac{P_e}{P_c} \\right)^{\\frac{\\gamma - 1}{\\gamma}} } + P_eA_e "
},
{
"math_id": 4,
"text": "A_t"
},
{
"math_id": 5,
"text": "P_c"
},
{
"math_id": 6,
"text": "\\gamma"
},
{
"math_id": 7,
"text": "P_e"
},
{
"math_id": 8,
"text": "A_e"
},
{
"math_id": 9,
"text": "I_{sp} = \\frac{C^*}{g_0} \\gamma \\sqrt{\\left ( \\frac{2}{\\gamma - 1} \\right) \\left ( \\frac{2}{\\gamma +1} \\right )^ \\frac{\\gamma + 1}{\\gamma - 1} \\left ( 1 - \\frac{P_e}{P_c} \\right ) ^ {\\frac{\\gamma - 1}{\\gamma}} }"
},
{
"math_id": 10,
"text": "g_0"
},
{
"math_id": 11,
"text": "C^*"
},
{
"math_id": 12,
"text": "C^* = \\frac{a_0}{\\gamma \\left( \\frac{2}{\\gamma + 1} \\right) ^ \\frac{\\gamma +1}{2(\\gamma - 1)}}"
},
{
"math_id": 13,
"text": "a_0"
}
]
| https://en.wikipedia.org/wiki?curid=13723417 |
1372353 | Hydraulic machinery | Type of machine that uses liquid fluid power to perform work
Hydraulic machines use liquid fluid power to perform work. Heavy construction vehicles are a common example. In this type of machine, hydraulic fluid is pumped to various hydraulic motors and hydraulic cylinders throughout the machine and becomes pressurized according to the resistance present. The fluid is controlled directly or automatically by control valves and distributed through hoses, tubes, or pipes.
Hydraulic systems, like pneumatic systems, are based on Pascal's law which states that any pressure applied to a fluid inside a closed system will transmit that pressure equally everywhere and in all directions. A hydraulic system uses an incompressible liquid as its fluid, rather than a compressible gas.
The popularity of hydraulic machinery is due to the large amount of power that can be transferred through small tubes and flexible hoses, the high power density and a wide array of actuators that can make use of this power, and the huge multiplication of forces that can be achieved by applying pressures over relatively large areas. One drawback, compared to machines using gears and shafts, is that any transmission of power results in some losses due to resistance of fluid flow through the piping.
History.
Joseph Bramah patented the hydraulic press in 1795. While working at Bramah's shop, Henry Maudslay suggested a cup leather packing. Because it produced superior results, the hydraulic press eventually displaced the steam hammer for metal forging.
To supply large-scale power that was impractical for individual steam engines, central station hydraulic systems were developed. Hydraulic power was used to operate cranes and other machinery in British ports and elsewhere in Europe. The largest hydraulic system was in London. Hydraulic power was used extensively in Bessemer steel production. Hydraulic power was also used for elevators, to operate canal locks and rotating sections of bridges. Some of these systems remained in use well into the twentieth century.
Harry Franklin Vickers was called the "Father of Industrial Hydraulics" by ASME.
Force and torque multiplication.
A fundamental feature of hydraulic systems is the ability to apply force or torque multiplication in an easy way, independent of the distance between the input and output, without the need for mechanical gears or levers, either by altering the effective areas in two connected cylinders or the effective displacement (cc/rev) between a pump and motor. In normal cases, hydraulic ratios are combined with a mechanical force or torque ratio for optimum machine designs such as boom movements and track drives for an excavator.
Examples.
Two hydraulic cylinders interconnected.
Cylinder C1 is one inch in radius, and cylinder C2 is ten inches in radius. If the force exerted on C1 is 10 lbf, the force exerted by C2 is 1000 lbf because C2 has a hundred times the area ("S" = π"r"²) of C1. The downside is that C1 must be moved a hundred inches to move C2 one inch. The most common use for this is the classical hydraulic jack where a pumping cylinder with a small diameter is connected to the lifting cylinder with a large diameter.
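The same arithmetic as a short sketch (illustrative only):
<syntaxhighlight lang="python">
import math

r1, r2 = 1.0, 10.0                    # piston radii of C1 and C2, inches
force_in = 10.0                       # lbf applied to C1

area_ratio = (math.pi * r2 ** 2) / (math.pi * r1 ** 2)   # = 100
force_out = force_in * area_ratio                        # 1000 lbf at C2
stroke_ratio = 1.0 / area_ratio       # C2 moves only 1/100 of C1's travel
print(force_out, stroke_ratio)
</syntaxhighlight>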
Pump and motor.
If a hydraulic rotary pump with the displacement 10 cc/rev is connected to a hydraulic rotary motor with 100 cc/rev, the shaft torque required to drive the pump is one-tenth of the torque then available at the motor shaft, but the shaft speed (rev/min) for the motor is also only one-tenth of the pump shaft speed. This combination is actually the same type of force multiplication as the cylinder example, just that the linear force in this case is a rotary force, defined as torque.
Both these examples are usually referred to as a hydraulic transmission or hydrostatic transmission involving a certain hydraulic "gear ratio".
Hydraulic circuits.
A hydraulic circuit is a system comprising an interconnected set of discrete components that transport liquid. The purpose of this system may be to control where fluid flows (as in a network of tubes of coolant in a thermodynamic system) or to control fluid pressure (as in hydraulic amplifiers). For example, hydraulic machinery uses hydraulic circuits (in which hydraulic fluid is pushed, under pressure, through hydraulic pumps, pipes, tubes, hoses, hydraulic motors, hydraulic cylinders, and so on) to move heavy loads. The approach of describing a fluid system in terms of discrete components is inspired by the success of electrical circuit theory. Just as electric circuit theory works when elements are discrete and linear, hydraulic circuit theory works best when the elements (passive components such as pipes or transmission lines or active components such as power packs or pumps) are discrete and linear. This usually means that hydraulic circuit analysis works best for long, thin tubes with discrete pumps, as found in chemical process flow systems or microscale devices.
The circuit comprises the following components:
For the hydraulic fluid to do work, it must flow to the actuator and/or motors, then return to a reservoir. The fluid is then filtered and re-pumped. The path taken by hydraulic fluid is called a hydraulic circuit of which there are several types.
Open loop circuits.
Open-loop: Pump-inlet and motor-return (via the directional valve) are connected to the hydraulic tank. The term loop applies to feedback; the more correct term is open versus closed "circuit". Open center circuits use pumps which supply a continuous flow. The flow is returned to the tank through the control valve's open center; that is, when the control valve is centered, it provides an open return path to the tank and the fluid is not pumped to a high pressure. Otherwise, if the control valve is actuated it routes fluid to and from an actuator and tank. The fluid's pressure will rise to meet any resistance, since the pump has a constant output. If the pressure rises too high, fluid returns to the tank through a pressure relief valve. Multiple control valves may be stacked in series. This type of circuit can use inexpensive, constant displacement pumps.
Closed loop circuits.
Closed-loop: Motor-return is connected directly to the pump-inlet. To keep up pressure on the low-pressure side, the circuits have a charge pump (a small gear pump) that supplies cooled and filtered oil to the low-pressure side. Closed-loop circuits are generally used for hydrostatic transmissions in mobile applications. "Advantages:" No directional valve and better response, and the circuit can work with higher pressure. The pump swivel angle covers both positive and negative flow direction. "Disadvantages:" The pump cannot easily be utilized for any other hydraulic function, and cooling can be a problem due to the limited exchange of oil flow.
High-power closed-loop systems generally must have a 'flush valve' assembled in the circuit in order to exchange much more flow than the basic leakage flow from the pump and the motor, for increased cooling and filtering. The flush valve is normally integrated in the motor housing to get a cooling effect for the oil that is rotating in the motor housing itself. The losses in the motor housing from rotating effects and losses in the ball bearings can be considerable, as motor speeds will reach 4000-5000 rev/min or even more at maximum vehicle speed. The leakage flow as well as the extra flush flow must be supplied by the charge pump. A large charge pump is thus very important if the transmission is designed for high pressures and high motor speeds.
High oil temperature is usually a major problem when using hydrostatic transmissions at high vehicle speeds for longer periods, for instance when transporting the machine from one work place to another. High oil temperatures for long periods will drastically reduce the lifetime of the transmission. To keep down the oil temperature, the system pressure during transport must be lowered, meaning that the minimum displacement for the motor must be limited to a reasonable value. A circuit pressure during transport of around 200-250 bar is recommended.
Closed loop systems in mobile equipment are generally used for the transmission as an alternative to mechanical and hydrodynamic (converter) transmissions. The advantage is a stepless gear ratio (continuously variable speed/torque) and a more flexible control of the gear ratio depending on the load and operating conditions. The hydrostatic transmission is generally limited to around 200 kW maximum power, as the total cost gets too high at higher power compared to a hydrodynamic transmission. Large wheel loaders for instance and heavy machines are therefore usually equipped with converter transmissions. Recent technical achievements for the converter transmissions have improved the efficiency and developments in the software have also improved the characteristics, for example selectable gear shifting programs during operation and more gear steps, giving them characteristics close to the hydrostatic transmission.
Constant pressure and load-sensing systems.
Hydrostatic transmissions for earth moving machines, such as for track loaders, are often equipped with a separate 'inch pedal' that is used to temporarily increase the diesel engine rpm while reducing the vehicle speed in order to increase the available hydraulic power output for the working hydraulics at low speeds and increase the tractive effort. The function is similar to stalling a converter gearbox at high engine rpm. The inch function affects the preset characteristics for the 'hydrostatic' gear ratio versus diesel engine rpm.
Constant pressure systems.
The closed center circuits exist in two basic configurations, normally related to the regulator for the variable pump that supplies the oil:
Load-sensing systems.
Load-sensing systems (LS) generate lower power losses, as the pump can reduce both flow and pressure to match the load requirements, but require more tuning than the CP system with respect to system stability. The LS system also requires additional logical valves and compensator valves in the directional valves, and is thus technically more complex and more expensive than the CP system. The LS system generates a constant power loss related to the regulating pressure drop for the pump regulator:
formula_0
The average formula_1 is around 2 MPa (290 psi). If the pump flow is high, the extra loss can be considerable. The power loss also increases if the load pressures vary a lot. The cylinder areas, motor displacements and mechanical torque arms must be designed to match load pressure in order to bring down the power losses. Pump pressure always equals the maximum load pressure when several functions are run simultaneously, and the power input to the pump equals the (max. load pressure + Δ"p"LS) × sum of flow.
Five basic types of load sensing systems.
Technically the down-stream mounted compensator in a valve block can physically be mounted "up-stream", but work as a down-stream compensator.
System type (3) gives the advantage that activated functions are synchronized independent of pump flow capacity. The flow relation between two or more activated functions remains independent of load pressures, even if the pump reaches the maximum swivel angle. This feature is important for machines that often run with the pump at maximum swivel angle and with several activated functions that must be synchronized in speed, such as with excavators. With the type (4) system, the functions with "up-stream" compensators have priority, for example the steering function for a wheel loader. The system type with down-stream compensators usually have a unique trademark depending on the manufacturer of the valves, for example "LSC" (Linde Hydraulics), "LUDV" (Bosch Rexroth Hydraulics) and "Flowsharing" (Parker Hydraulics) etc. No official standardized name for this type of system has been established but flowsharing is a common name for it.
Components.
Hydraulic pump.
Hydraulic pumps supply fluid to the components in the system. Pressure in the system develops in reaction to the load. Hence, a pump rated for 5,000 psi is capable of maintaining flow against a load of 5,000 psi.
Pumps have a power density about ten times greater than an electric motor (by volume). They are powered by an electric motor or an engine, connected through gears, belts, or a flexible elastomeric coupling to reduce vibration.
Common types of hydraulic pumps used in hydraulic machinery applications are gear pumps, vane pumps and piston pumps.
Piston pumps are more expensive than gear or vane pumps, but provide longer life operating at higher pressure, with difficult fluids and longer continuous duty cycles. Piston pumps make up one half of a hydrostatic transmission.
Control valves.
"Directional control valves" route the fluid to the desired actuator. They usually consist of a spool inside a cast iron or steel housing. The spool slides to different positions in the housing, and intersecting grooves and channels route the fluid based on the spool's position.
The spool has a central (neutral) position maintained with springs; in this position the supply fluid is blocked, or returned to tank. Sliding the spool to one side routes the hydraulic fluid to an actuator and provides a return path from the actuator to tank. When the spool is moved in the opposite direction the supply and return paths are switched. When the spool is allowed to return to the neutral (center) position the actuator fluid paths are blocked, locking the actuator in position.
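The routing described in this paragraph can be summarized as a small lookup, sketched below for a generic 4-way, 3-position spool valve with a closed (blocked) center; the port names P (pump supply), T (tank return) and A/B (actuator ports) follow common practice, but the mapping is illustrative rather than a description of any particular valve.

```python
# Illustrative routing table for a 4-way, 3-position spool valve (closed center).
# Each entry lists which ports the spool joins in that position.
SPOOL_POSITIONS = {
    "neutral": [],                        # all ports blocked: the actuator is locked
    "extend":  [("P", "A"), ("B", "T")],  # supply to A, return path from B to tank
    "retract": [("P", "B"), ("A", "T")],  # supply to B, return path from A to tank
}

def connections(position):
    """Return the port pairs joined by the spool in the given position."""
    return SPOOL_POSITIONS[position]

for pos in ("neutral", "extend", "retract"):
    print(f"{pos:8s} -> {connections(pos)}")
```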
Directional control valves are usually designed to be stackable, with one valve for each hydraulic cylinder, and one fluid input supplying all the valves in the stack.
Tolerances are very tight in order to handle the high pressure and avoid leaking; spools typically have a clearance with the housing of less than a thousandth of an inch (25 μm). The valve block will be mounted to the machine's frame with a "three point" pattern to avoid distorting the valve block and jamming the valve's sensitive components.
The spool position may be actuated by mechanical levers, hydraulic "pilot" pressure, or solenoids which push the spool left or right. A seal allows part of the spool to protrude outside the housing, where it is accessible to the actuator.
The main valve block is usually a stack of "off the shelf" directional control valves chosen by flow capacity and performance. Some valves are designed to be proportional (flow rate proportional to valve position), while others may be simply on-off. The control valve is one of the most expensive and sensitive parts of a hydraulic circuit.
Reservoir.
The hydraulic fluid reservoir holds excess hydraulic fluid to accommodate volume changes from cylinder extension and contraction, temperature-driven expansion and contraction, and leaks. The reservoir is also designed to aid in the separation of air from the fluid and to work as a heat accumulator to cover losses in the system when peak power is used. Reservoirs can also help separate dirt and other particulates from the oil, as the particulates will generally settle to the bottom of the tank.
Some designs include dynamic flow channels on the fluid's return path that allow for a smaller reservoir.
Accumulators.
Accumulators are a common part of hydraulic machinery. Their function is to store energy by using pressurized gas. One type is a tube with a floating piston. On one side of the piston there is a charge of pressurized gas, and on the other side is the fluid. Bladders are used in other designs. Reservoirs, by contrast, store a system's fluid.
Examples of accumulator uses are backup power for steering or brakes, or to act as a shock absorber for the hydraulic circuit.
Hydraulic fluid.
Also known as "tractor fluid", hydraulic fluid is the life of the hydraulic circuit. It is usually petroleum oil with various additives. Some hydraulic machines require fire resistant fluids, depending on their applications. In some factories where food is prepared, either an edible oil or water is used as a working fluid for health and safety reasons.
In addition to transferring energy, hydraulic fluid needs to lubricate components, suspend contaminants and metal filings for transport to the filter, and to function well at temperatures of up to several hundred degrees Fahrenheit or Celsius.
Filters.
Filters are an important part of hydraulic systems; they remove unwanted particles from the fluid. Metal particles are continually produced by mechanical components and need to be removed along with other contaminants.
Filters may be positioned in many locations. The filter may be located between the reservoir and the pump intake. Blockage of the filter will cause cavitation and possibly failure of the pump. Sometimes the filter is located between the pump and the control valves. This arrangement is more expensive, since the filter housing is pressurized, but eliminates cavitation problems and protects the control valve from pump failures. The third common filter location is just before the return line enters the reservoir. This location is relatively insensitive to blockage and does not require a pressurized housing, but contaminants that enter the reservoir from external sources are not filtered until passing through the system at least once. Filter ratings from 7 micron to 15 micron are used, depending on the viscosity grade of the hydraulic oil.
Tubes, pipes and hoses.
"Hydraulic tubes" are seamless steel precision pipes, specially manufactured for hydraulics. The tubes have standard sizes for different pressure ranges, with standard diameters up to 100 mm. The tubes are supplied by manufacturers in lengths of 6 m, cleaned, oiled and plugged. The tubes are interconnected by different types of flanges (especially for the larger sizes and pressures), welding cones/nipples (with o-ring seal), several types of flare connection and by cut-rings. In larger sizes, hydraulic pipes are used. Direct joining of tubes by welding is not acceptable since the interior cannot be inspected.
"Hydraulic pipe" is used in case standard hydraulic tubes are not available. Generally these are used for low pressure. They can be connected by threaded connections, but usually by welds. Because of the larger diameters the pipe can usually be inspected internally after welding. Black pipe is non-galvanized and suitable for welding.
"Hydraulic hose" is graded by pressure, temperature, and fluid compatibility. Hoses are used when pipes or tubes can not be used, usually to provide flexibility for machine operation or maintenance. The hose is built up with rubber and steel layers. A rubber interior is surrounded by multiple layers of woven wire and rubber. The exterior is designed for abrasion resistance. The bend radius of hydraulic hose is carefully designed into the machine, since hose failures can be deadly, and violating the hose's minimum bend radius will cause failure. Hydraulic hoses generally have steel fittings swaged on the ends. The weakest part of the high pressure hose is the connection of the hose to the fitting. Another disadvantage of hoses is the shorter life of rubber which requires periodic replacement, usually at five to seven year intervals.
Tubes and pipes for hydraulic applications are internally oiled before the system is commissioned. Usually steel piping is painted outside. Where flare and other couplings are used, the paint is removed under the nut, which is then a location where corrosion can begin. For this reason, in marine applications most piping is stainless steel.
Seals, fittings and connections.
Components of a hydraulic system [sources (e.g. pumps), controls (e.g. valves) and actuators (e.g. cylinders)] need connections that will contain and direct the hydraulic fluid without leaking or losing the pressure that makes them work. In some cases, the components can be made to bolt together with fluid paths built in. In most cases, though, rigid tubing or flexible hoses are used to direct the flow from one component to the next. Each component has entry and exit points for the fluid involved (called ports), sized according to how much fluid is expected to pass through it.
There are a number of standardized methods in use to attach the hose or tube to the component. Some are intended for ease of use and service, others are better for higher system pressures or control of leakage. The most common method, in general, is to provide in each component a female-threaded port, on each hose or tube a female-threaded captive nut, and use a separate adapter fitting with matching male threads to connect the two. This is functional, economical to manufacture, and easy to service.
Fittings serve several purposes:
A typical piece of machinery or heavy equipment may have thousands of sealed connection points and several different types:
Elastomeric seals (O-ring boss and face seal) are the most common types of seals in heavy equipment and are capable of reliably sealing very high fluid pressures.
References and notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Power loss} = \\Delta p_{\\text{LS}} \\cdot Q_{\\text{tot}}"
},
{
"math_id": 1,
"text": "\\Delta p_{LS}"
}
]
| https://en.wikipedia.org/wiki?curid=1372353 |
1372446 | Gravitational anomaly | Breakdown of general covariance at the quantum level
In theoretical physics, a gravitational anomaly is an example of a gauge anomaly: it is an effect of quantum mechanics — usually a one-loop diagram — that invalidates the general covariance of a theory of general relativity combined with some other fields. The adjective "gravitational" is derived from the symmetry of a gravitational theory, namely from general covariance. A gravitational anomaly is generally synonymous with "diffeomorphism anomaly", since general covariance is a symmetry under coordinate reparametrization, i.e. diffeomorphism.
General covariance is the basis of general relativity, the classical theory of gravitation. Moreover, it is necessary for the consistency of any theory of quantum gravity, since it is required in order to cancel unphysical degrees of freedom with a negative norm, namely gravitons polarized along the time direction. Therefore, all gravitational anomalies must cancel out.
The anomaly usually appears as a Feynman diagram with a chiral fermion running in the loop (a polygon) with "n" external gravitons attached to the loop, where formula_0, with formula_1 the spacetime dimension.
Gravitational anomalies.
Consider a classical gravitational field represented by the vielbein formula_2 and a quantized Fermi field formula_3. The generating functional for this quantum field is
formula_4
where formula_5 is the quantum action and the formula_6 factor before the Lagrangian is the vielbein determinant. The variation of the quantum action yields
formula_7
in which we denote a mean value with respect to the path integral by the bracket formula_8. Let us label the Lorentz, Einstein and Weyl transformations respectively by their parameters formula_9; they generate the following anomalies:
Lorentz anomaly
formula_10
which readily indicates that the energy-momentum tensor has an anti-symmetric part.
Einstein anomaly
formula_11
this is related to the non-conservation of the energy-momentum tensor, i.e. formula_12.
Weyl anomaly
formula_13
which indicates that the trace of the energy-momentum tensor is non-zero.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n=1+D/2"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "e^a_{\\;\\mu}"
},
{
"math_id": 3,
"text": "\\psi"
},
{
"math_id": 4,
"text": "Z[e^a_{\\;\\mu}]=e^{-W[e^a_{\\;\\mu}]}=\\int d\\bar{\\psi}d\\psi\\;\\; e^{-\\int d^4x e \\mathcal{L}_{\\psi}},"
},
{
"math_id": 5,
"text": "W"
},
{
"math_id": 6,
"text": "e"
},
{
"math_id": 7,
"text": "\\delta W[e^a_{\\;\\mu}]=\\int d^4x \\; e \\langle T^\\mu_{\\;a}\\rangle \\delta e^a_{\\;\\mu}"
},
{
"math_id": 8,
"text": "\\langle\\;\\;\\; \\rangle"
},
{
"math_id": 9,
"text": "\\alpha,\\, \\xi,\\, \\sigma"
},
{
"math_id": 10,
"text": "\\delta_\\alpha W=\\int d^4x e \\, \\alpha_{ab}\\langle T^{ab} \\rangle,"
},
{
"math_id": 11,
"text": "\\delta_\\xi W=-\\int d^4x e \\, \\xi^\\nu \\left(\\nabla_\\nu\\langle T^\\mu_{\\;\\nu}\\rangle-\\omega_{ab\\nu}\\langle T^{ab}\\rangle\\right),"
},
{
"math_id": 12,
"text": "\\nabla_\\mu\\langle T^{\\mu\\nu}\\rangle \\neq 0"
},
{
"math_id": 13,
"text": "\\delta_\\sigma W=\\int d^4x e \\, \\sigma\\langle T^\\mu_{\\;\\mu}\\rangle,"
}
]
| https://en.wikipedia.org/wiki?curid=1372446 |
1372450 | Gauge anomaly | Breakdown of gauge symmetry at the quantum level
In theoretical physics, a gauge anomaly is an example of an anomaly: it is a feature of quantum mechanics—usually a one-loop diagram—that invalidates the gauge symmetry of a quantum field theory; i.e. of a gauge theory.
All gauge anomalies must cancel out. Anomalies in gauge symmetries lead to an inconsistency, since a gauge symmetry is required in order to cancel unphysical degrees of freedom with a negative norm (such as a photon polarized in the time direction). Indeed, cancellation does occur in the Standard Model.
The term gauge anomaly is usually used for vector gauge anomalies. Another type of gauge anomaly is the gravitational anomaly, because coordinate reparametrization (called a diffeomorphism) is the gauge symmetry of gravitation.
Calculation of the anomaly.
Anomalies occur only in even spacetime dimensions. For example, the anomalies in the usual 4 spacetime dimensions arise from triangle Feynman diagrams.
Vector gauge anomalies.
In vector gauge anomalies (in gauge symmetries whose gauge boson is a vector), the anomaly is a chiral anomaly, and can be calculated exactly at the one-loop level, via a Feynman diagram with a chiral fermion running in the loop with "n" external gauge bosons attached to the loop, where formula_0, with formula_1 the spacetime dimension.
Let us look at the (semi)effective action we get after integrating over the chiral fermions. If there is a gauge anomaly, the resulting action will not be gauge invariant. If we denote by formula_2 the operator corresponding to an infinitesimal gauge transformation by ε, then the Frobenius consistency condition requires that
formula_3
for any functional formula_4, including the (semi)effective action S where [,] is the Lie bracket. As formula_5 is linear in ε, we can write
formula_6
where Ω(d) is a d-form, as a functional of the nonintegrated fields, and is linear in ε. Let us make the further assumption (which turns out to be valid in all the cases of interest) that this functional is local (i.e. Ω(d)(x) only depends upon the values of the fields and their derivatives at x) and that it can be expressed as the exterior product of p-forms. If the spacetime Md is closed (i.e. without boundary) and oriented, then it is the boundary of some d+1 dimensional oriented manifold Md+1. If we then arbitrarily extend the fields (including ε), as defined on Md, to Md+1, with the only condition being that they match on the boundary, so that the expression Ω(d), being the exterior product of p-forms, can be extended and defined in the interior, then
formula_7
The Frobenius consistency condition now becomes
formula_8
As the previous equation is valid for "any" arbitrary extension of the fields into the interior,
formula_9
Because of the Frobenius consistency condition, this means that there exists a d+1-form Ω(d+1) (not depending upon ε) defined over Md+1 satisfying
formula_10
Ω(d+1) is often called a Chern–Simons form.
Once again, if we assume that Ω(d+1) can be expressed as an exterior product and that it can be extended into a d+1-form defined on a d+2 dimensional oriented manifold, we can define
formula_11
in d+2 dimensions. Ω(d+2) is gauge invariant:
formula_12
as d and δε commute. | [
{
"math_id": 0,
"text": "n=1+D/2"
},
{
"math_id": 1,
"text": "D"
},
{
"math_id": 2,
"text": "\\delta_\\epsilon"
},
{
"math_id": 3,
"text": "\\left[\\delta_{\\epsilon_1},\\delta_{\\epsilon_2}\\right]\\mathcal{F}=\\delta_{\\left[\\epsilon_1,\\epsilon_2\\right]}\\mathcal{F}"
},
{
"math_id": 4,
"text": "\\mathcal{F}"
},
{
"math_id": 5,
"text": "\\delta_\\epsilon S"
},
{
"math_id": 6,
"text": "\\delta_\\epsilon S=\\int_{M^d} \\Omega^{(d)}(\\epsilon)"
},
{
"math_id": 7,
"text": "\\delta_\\epsilon S=\\int_{M^{d+1}} d\\Omega^{(d)}(\\epsilon)."
},
{
"math_id": 8,
"text": "\\left[\\delta_{\\epsilon_1},\\delta_{\\epsilon_2}\\right]S=\\int_{M^{d+1}}\\left[\\delta_{\\epsilon_1}d\\Omega^{(d)}(\\epsilon_2)-\\delta_{\\epsilon_2}d\\Omega^{(d)}(\\epsilon_1)\\right]=\\int_{M^{d+1}}d\\Omega^{(d)}(\\left[\\epsilon_1,\\epsilon_2\\right])."
},
{
"math_id": 9,
"text": "\\delta_{\\epsilon_1}d\\Omega^{(d)}(\\epsilon_2)-\\delta_{\\epsilon_2}d\\Omega^{(d)}(\\epsilon_1)=d\\Omega^{(d)}(\\left[\\epsilon_1,\\epsilon_2\\right])."
},
{
"math_id": 10,
"text": "\\delta_\\epsilon \\Omega^{(d+1)}=d\\Omega^{(d)}( \\epsilon )."
},
{
"math_id": 11,
"text": "\\Omega^{(d+2)}=d\\Omega^{(d+1)}"
},
{
"math_id": 12,
"text": "\\delta_\\epsilon \\Omega^{(d+2)}=d\\delta_\\epsilon \\Omega^{(d+1)}=d^2\\Omega^{(d)}(\\epsilon)=0"
}
]
| https://en.wikipedia.org/wiki?curid=1372450 |
1372463 | Mixed anomaly | Gauge anomaly from multiple gauge groups
In theoretical physics, a mixed anomaly is an example of an anomaly: it is an effect of quantum mechanics — usually a one-loop diagram — that implies that the classically valid general covariance and gauge symmetry of a theory of general relativity combined with gauge fields and fermionic fields cannot be preserved simultaneously in the quantum theory.
The adjective "mixed" usually refers to a mixture of a gravitational anomaly and gauge anomaly, but may also refer to a mixture of two different gauge groups tensored together, like the "SU(2)" and the "U(1)" of the Standard Model.
The anomaly usually appears as a Feynman diagram with a chiral fermion running in the loop (a polygon) with "n−k" external gravitons and "k" external gauge bosons attached to the loop, where formula_0, with formula_1 the spacetime dimension. Chiral fermions only occur in even spacetime dimensions. For example, the anomalies in the usual 4 spacetime dimensions arise from triangle Feynman diagrams.
General covariance and gauge symmetries are very important symmetries for the consistency of the whole theory, and therefore all gravitational, gauge, and mixed anomalies must cancel out. | [
{
"math_id": 0,
"text": "n=1+D/2"
},
{
"math_id": 1,
"text": "D"
}
]
| https://en.wikipedia.org/wiki?curid=1372463 |
13725281 | Differential inclusion | In mathematics, differential inclusions are a generalization of the concept of ordinary differential equation of the form
formula_0
where "F" is a multivalued map, i.e. "F"("t", "x") is a "set" rather than a single point in formula_1. Differential inclusions arise in many situations including differential variational inequalities, projected dynamical systems, Moreau's sweeping process, linear and nonlinear complementarity dynamical systems, discontinuous ordinary differential equations, switching dynamical systems, and fuzzy set arithmetic.
For example, the basic rule for Coulomb friction is that the friction force has magnitude "μN" in the direction opposite to the direction of slip, where "N" is the normal force and "μ" is a constant (the friction coefficient). However, if the slip is zero, the friction force can be "any" force in the correct plane with magnitude smaller than or equal to "μN". Thus, writing the friction force as a function of position and velocity leads to a set-valued function.
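A minimal sketch of this set-valued law for one sliding direction is shown below; the function and parameter names are illustrative, and the zero-slip branch is resolved by clipping the externally applied force to the admissible interval [-μN, μN].

```python
import numpy as np

def coulomb_friction(slip_velocity, applied_force, mu, normal_force):
    """Set-valued Coulomb law: single-valued while sliding, clipped while sticking."""
    bound = mu * normal_force
    if slip_velocity != 0.0:
        # sliding: magnitude mu*N, opposing the slip direction
        return -bound * np.sign(slip_velocity)
    # sticking: any force up to mu*N is admissible; oppose the applied force
    return -np.clip(applied_force, -bound, bound)

print(coulomb_friction(0.3, 10.0, mu=0.5, normal_force=40.0))   # -20.0 (kinetic friction)
print(coulomb_friction(0.0, 10.0, mu=0.5, normal_force=40.0))   # -10.0 (stick holds the load)
print(coulomb_friction(0.0, 90.0, mu=0.5, normal_force=40.0))   # -20.0 (on the verge of slipping)
```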
In a differential inclusion, the right-hand side need not be given as a set-valued map; one can instead prescribe a subset of a Euclidean space formula_2 for some formula_3, in the following way. Let formula_4 and formula_5 The main purpose is then to find a formula_6 function formula_7 satisfying the differential inclusion formula_8 a.e. in formula_9 where formula_10 is an open bounded set.
Theory.
Existence theory usually assumes that "F"("t", "x") is an upper hemicontinuous function of "x", measurable in "t", and that "F"("t", "x") is a closed, convex set for all "t" and "x".
Existence of solutions for the initial value problem
formula_11
for a sufficiently small time interval ["t"0, "t"0 + "ε"), "ε" > 0 then follows.
Global existence can be shown provided "F" does not allow "blow-up" (formula_12 as formula_13 for a finite formula_14).
Existence theory for differential inclusions with non-convex "F"("t", "x") is an active area of research.
Uniqueness of solutions usually requires other conditions.
For example, suppose formula_15 satisfies a one-sided Lipschitz condition:
formula_16
for some "C" for all "x"1 and "x"2. Then the initial value problem
formula_11
has a unique solution.
This is closely related to the theory of maximal monotone operators, as developed by Minty and Haïm Brezis.
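A small sketch of that connection: for the model inclusion dx/dt ∈ -Sign(x), where Sign is the maximal monotone sign graph with Sign(0) = [-1, 1], the backward Euler step x_{n+1} + h·Sign(x_{n+1}) ∋ x_n has exactly one solution, given by the resolvent of the sign graph (the soft-threshold map). The uniqueness of each discrete step mirrors the uniqueness statement above; the names below are illustrative.

```python
def soft_threshold(x, h):
    """Unique solution y of  y + h*Sign(y) containing x  (resolvent of the sign graph)."""
    if x > h:
        return x - h
    if x < -h:
        return x + h
    return 0.0   # |x| <= h: the set-valued branch absorbs x and the state sticks at zero

x, h = 1.0, 0.3
for n in range(6):
    print(f"step {n}: x = {x:+.2f}")
    x = soft_threshold(x, h)   # trajectory reaches 0 in finite time and then stays there
```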
Filippov's theory only allows for discontinuities in the derivative formula_17, but allows no discontinuities in the state, i.e. formula_18 must be continuous. Schatzman, and later Moreau (who gave it the currently accepted name), extended the notion to the "measure differential inclusion" (MDI), in which the inclusion is evaluated by taking the limit from above for formula_18.
Applications.
Differential inclusions can be used to understand and suitably interpret discontinuous ordinary differential equations, such as arise for Coulomb friction in mechanical systems and ideal switches in power electronics. An important contribution has been made by A. F. Filippov, who studied regularizations of discontinuous equations. Further, the technique of regularization was used by N.N. Krasovskii in the theory of differential games.
Differential inclusions are also found at the foundation of non-smooth dynamical systems (NSDS) analysis, which is used in the "analog" study of switching electrical circuits using idealized component equations (for example, using idealized straight vertical line segments in the component characteristics) and in the study of certain non-smooth mechanical systems, such as stick-slip oscillations in systems with dry friction or the dynamics of impact phenomena. Software that solves NSDS systems exists, such as INRIA's Siconos.
When fuzzy set concepts are used in differential inclusions, a new notion arises, the fuzzy differential inclusion, which has applications in atmospheric dispersion modeling and in cybernetics for medical imaging.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{dx}{dt}(t)\\in F(t,x(t)), "
},
{
"math_id": 1,
"text": "\\R^d"
},
{
"math_id": 2,
"text": "\\mathbb R^N "
},
{
"math_id": 3,
"text": "N\\in \\mathbb N "
},
{
"math_id": 4,
"text": "n\\in \\mathbb N"
},
{
"math_id": 5,
"text": "E\\subset \\mathbb R^{n\\times n}\\setminus \\{0\\}."
},
{
"math_id": 6,
"text": "W^{1,\\infty}_{0}(\\Omega, \\mathbb R^n)"
},
{
"math_id": 7,
"text": "u"
},
{
"math_id": 8,
"text": "Du \\in E"
},
{
"math_id": 9,
"text": "\\Omega,"
},
{
"math_id": 10,
"text": "\\Omega\\subset \\mathbb R^n"
},
{
"math_id": 11,
"text": "\\frac{dx}{dt}(t)\\in F(t,x(t)), \\quad x(t_0)=x_0"
},
{
"math_id": 12,
"text": "\\scriptstyle \\Vert x(t)\\Vert\\,\\to\\,\\infty"
},
{
"math_id": 13,
"text": "\\scriptstyle t\\,\\to\\, t^*"
},
{
"math_id": 14,
"text": "\\scriptstyle t^*"
},
{
"math_id": 15,
"text": "F(t,x)"
},
{
"math_id": 16,
"text": "(x_1-x_2)^T(F(t,x_1)-F(t,x_2))\\leq C\\Vert x_1-x_2\\Vert^2"
},
{
"math_id": 17,
"text": "\\frac{dx}{dt}(t)"
},
{
"math_id": 18,
"text": "x(t)"
}
]
| https://en.wikipedia.org/wiki?curid=13725281 |
1372610 | Coleman–Mandula theorem | No-go theorem pertaining the triviality of space-time and internal symmetries
In theoretical physics, the Coleman–Mandula theorem is a no-go theorem stating that spacetime and internal symmetries can only combine in a trivial way. This means that the charges associated with internal symmetries must always transform as Lorentz scalars. Some notable exceptions to the no-go theorem are conformal symmetry and supersymmetry. It is named after Sidney Coleman and Jeffrey Mandula who proved it in 1967 as the culmination of a series of increasingly generalized no-go theorems investigating how internal symmetries can be combined with spacetime symmetries. The supersymmetric generalization is known as the Haag–Łopuszański–Sohnius theorem.
History.
In the early 1960s, the global formula_0 flavor symmetry associated with the eightfold way was shown to successfully describe the hadron spectrum for hadrons of the same spin. This led to efforts to expand the global formula_0 symmetry to a larger formula_1 symmetry mixing both flavour and spin, an idea similar to that previously considered in nuclear physics by Eugene Wigner in 1937 for an formula_2 symmetry. This non-relativistic formula_1 model united vector and pseudoscalar mesons of different spin into a 35-dimensional multiplet and it also united the two baryon decuplets into a 56-dimensional multiplet. While this was reasonably successful in describing various aspects of the hadron spectrum, from the perspective of quantum chromodynamics this success is merely a consequence of the flavour and spin independence of the force between quarks. There were many attempts to generalize this non-relativistic formula_1 model into a fully relativistic one, but these all failed.
At the time it was also an open question whether there existed a symmetry for which particles of different masses could belong to the same multiplet. Such a symmetry could then account for the mass splitting found in mesons and baryons. It was only later understood that this is instead a consequence of the differing up-, down-, and strange-quark masses which leads to a breakdown of the formula_0 internal flavor symmetry.
These two motivations led to a series of no-go theorems to show that spacetime symmetries and internal symmetries could not be combined in any but a trivial way. The first notable theorem was proved by William McGlinn in 1964, with a subsequent generalization by Lochlainn O'Raifeartaigh in 1965. These efforts culminated with the most general theorem by Sidney Coleman and Jeffrey Mandula in 1967.
Little notice was given to this theorem in subsequent years. As a result, the theorem played no role in the early development of supersymmetry, which instead emerged in the early 1970s from the study of dual resonance models, which are the precursor to string theory, rather than from any attempts to overcome the no-go theorem. Similarly, the Haag–Łopuszański–Sohnius theorem, a supersymmetric generalization of the Coleman–Mandula theorem, was proved in 1975 after the study of supersymmetry was already underway.
Theorem.
Consider a theory that can be described by an S-matrix and that satisfies the following conditions
The Coleman–Mandula theorem states that the symmetry group of this theory is necessarily a direct product of the Poincaré group and an internal symmetry group. The last technical assumption is unnecessary if the theory is described by a quantum field theory and is only needed to apply the theorem in a wider context.
A kinematic argument for why the theorem should hold was provided by Edward Witten. The argument is that Poincaré symmetry acts as a very strong constraint on elastic scattering, leaving only the scattering angle unknown. Any additional spacetime dependent symmetry would overdetermine the amplitudes, making them nonzero only at discrete scattering angles. Since this conflicts with the assumed analyticity of the scattering amplitudes in the scattering angle, such additional spacetime dependent symmetries are ruled out.
Limitations.
Conformal symmetry.
The theorem does not apply to a theory of massless particles, with these allowing for conformal symmetry as an additional spacetime dependent symmetry. In particular, the algebra of this group is the conformal algebra, which consists of the Poincaré algebra together with the commutation relations for the dilaton generator and the special conformal transformations generator.
Supersymmetry.
The Coleman–Mandula theorem assumes that the only symmetry algebras are Lie algebras, but the theorem can be generalized by instead considering Lie superalgebras. Doing this allows for additional anticommutating generators known as supercharges which transform as spinors under Lorentz transformations. This extension gives rise to the super-Poincaré algebra, with the associated symmetry known as supersymmetry. The Haag–Łopuszański–Sohnius theorem is the generalization of the Coleman–Mandula theorem to Lie superalgebras, with it stating that supersymmetry is the only new spacetime dependent symmetry that is allowed. For a theory with massless particles, the theorem is again evaded by conformal symmetry which can be present in addition to supersymmetry giving a superconformal algebra.
Low dimensions.
In a one or two dimensional theory the only possible scattering is forwards and backwards scattering, so analyticity in the scattering angle is no longer available and the theorem no longer holds. Spacetime dependent internal symmetries are then possible, such as in the massive Thirring model, which can admit an infinite tower of conserved charges of ever higher tensorial rank.
Quantum groups.
Models with nonlocal symmetries, whose charges do not act on multiparticle states as if they were a tensor product of one-particle states, evade the theorem. Such an evasion is found more generally for quantum group symmetries, which avoid the theorem because the corresponding algebra is no longer a Lie algebra.
Other limitations.
For other spacetime symmetries besides the Poincaré group, such as theories with a de Sitter background or non-relativistic field theories with Galilean invariance, the theorem no longer applies. It also does not hold for discrete symmetries, since these are not Lie groups, or for spontaneously broken symmetries since these do not act on the S-matrix level and thus do not commute with the S-matrix.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{SU}(3)"
},
{
"math_id": 1,
"text": "\\text{SU}(6)"
},
{
"math_id": 2,
"text": "\\text{SU}(4)"
}
]
| https://en.wikipedia.org/wiki?curid=1372610 |
1372623 | No-go theorem | Theorem of physical impossibility
In theoretical physics, a no-go theorem is a theorem that states that a particular situation is not physically possible. This type of theorem imposes boundaries on certain mathematical or physical possibilities via a proof of contradiction.
Instances of no-go theorems.
Full descriptions of the no-go theorems named below are given in other articles linked to their names. A few of them are broad, general categories under which several theorems fall. Other names are broad and general-sounding but only refer to a single theorem.
Proof of impossibility.
In mathematics there is the concept of proof of impossibility referring to problems impossible to solve. The difference between this impossibility and that of the no-go theorems is that a proof of impossibility states a category of logical proposition that may never be true; a no-go theorem instead presents a sequence of events that may never occur.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\; J > \\tfrac{1}{2} \\;"
},
{
"math_id": 1,
"text": "\\; J > 1 \\;"
},
{
"math_id": 2,
"text": "\\; J = 2 \\;"
}
]
| https://en.wikipedia.org/wiki?curid=1372623 |
1372638 | LSZ reduction formula | Connection between correlation functions and the S-matrix
In quantum field theory, the Lehmann–Symanzik–Zimmermann (LSZ) reduction formula is a method to calculate "S"-matrix elements (the scattering amplitudes) from the time-ordered correlation functions of a quantum field theory. It is a step of the path that starts from the Lagrangian of some quantum field theory and leads to prediction of measurable quantities. It is named after the three German physicists Harry Lehmann, Kurt Symanzik and Wolfhart Zimmermann.
Although the LSZ reduction formula cannot handle bound states, massless particles and topological solitons, it can be generalized to cover bound states, by use of composite fields which are often nonlocal. Furthermore, the method, or variants thereof, have turned out to be also fruitful in other fields of theoretical physics. For example, in statistical physics they can be used to get a particularly general formulation of the fluctuation-dissipation theorem.
In and out fields.
"S"-matrix elements are amplitudes of transitions between "in" states and "out" states. An "in" state formula_0 describes the state of a system of particles which, in a far away past, before interacting, were moving freely with definite momenta {"p"}, and, conversely, an "out" state formula_1 describes the state of a system of particles which, long after interaction, will be moving freely with definite momenta {"p"}.
"In" and "out" states are states in Heisenberg picture so they should not be thought to describe particles at a definite time, but rather to describe the system of particles in its entire evolution, so that the S-matrix element:
formula_2
is the probability amplitude for a set of particles which were prepared with definite momenta {"p"} to interact and be measured later as a new set of particles with momenta {"q"}.
The easy way to build "in" and "out" states is to seek appropriate field operators that provide the right creation and annihilation operators. These fields are called respectively "in" and "out" fields:
Just to fix ideas, suppose we deal with a Klein–Gordon field that interacts in some way which doesn't concern us:
formula_3
formula_4 may contain a self interaction "gφ"3 or interaction with other fields, like a Yukawa interaction formula_5. From this Lagrangian, using Euler–Lagrange equations, the equation of motion follows:
formula_6
where, if formula_4 does not contain derivative couplings:
formula_7
We may expect the "in" field to resemble the asymptotic behaviour of the free field as "x"0 → −∞, making the assumption that in the far away past the interaction described by the current "j"0 is negligible, as particles are far from each other. This hypothesis is named the adiabatic hypothesis. However, self-interaction never fades away and, besides many other effects, it causes a difference between the Lagrangian mass "m"0 and the physical mass m of the φ boson. This fact must be taken into account by rewriting the equation of motion as follows:
formula_8
This equation can be solved formally using the retarded Green's function of the Klein–Gordon operator formula_9:
formula_10
allowing us to split interaction from asymptotic behaviour. The solution is:
formula_11
The factor √Z is a normalization factor that will come in handy later; the field "φ"in is a solution of the homogeneous equation associated with the equation of motion:
formula_12
and hence is a free field which describes an incoming unperturbed wave, while the last term of the solution gives the perturbation of the wave due to interaction.
The field "φ"in is indeed the "in" field we were seeking, as it describes the asymptotic behaviour of the interacting field as "x"0 → −∞, though this statement will be made more precise later. It is a free scalar field so it can be expanded in plane waves:
formula_13
where:
formula_14
The inverse relation, expressing the coefficients in terms of the field, is easily obtained and can be put in the elegant form:
formula_15
where:
formula_16
The Fourier coefficients satisfy the algebra of creation and annihilation operators:
formula_17
and they can be used to build "in" states in the usual way:
formula_18
The relation between the interacting field and the "in" field is not very simple to use, and the presence of the retarded Green's function tempts us to write something like:
formula_19
implicitly making the assumption that all interactions become negligible when particles are far away from each other. Yet the current "j"("x") contains also self interactions like those producing the mass shift from "m"0 to m. These interactions do not fade away as particles drift apart, so much care must be used in establishing asymptotic relations between the interacting field and the "in" field.
The correct prescription, as developed by Lehmann, Symanzik and Zimmermann, requires two normalizable states formula_20 and formula_21, and a normalizable solution "f" ("x") of the Klein–Gordon equation formula_22. With these pieces one can state a correct and useful but very weak asymptotic relation:
formula_23
The second member is indeed independent of time as can be shown by differentiating and remembering that both "φ"in and "f" satisfy the Klein–Gordon equation.
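This time independence is easy to confirm numerically. The sketch below evolves two unrelated solutions of a 1+1 dimensional Klein–Gordon equation in a periodic box, evolving each Fourier mode exactly, and evaluates the pairing ∫ dx (f ∂0 g − g ∂0 f) at several times; the grid, mass and initial profiles are of course arbitrary choices made only to illustrate the statement.

```python
import numpy as np

# Two solutions f, g of d_t^2 phi = d_x^2 phi - m^2 phi on a periodic box;
# the pairing Q(t) = sum_x (f d_t g - g d_t f) dx should not depend on t.

L, N, m = 2 * np.pi, 256, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = L / N
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
omega = np.sqrt(k ** 2 + m ** 2)

def evolve(phi0, pi0, t):
    """Exact mode-by-mode evolution of (phi, d_t phi)."""
    phi_k, pi_k = np.fft.fft(phi0), np.fft.fft(pi0)
    phi_t = phi_k * np.cos(omega * t) + pi_k * np.sin(omega * t) / omega
    pi_t = -phi_k * omega * np.sin(omega * t) + pi_k * np.cos(omega * t)
    return np.fft.ifft(phi_t).real, np.fft.ifft(pi_t).real

# two unrelated solutions: a Gaussian bump released from rest, and an
# oscillatory profile with a nonzero initial time derivative
f0, df0 = np.exp(-10 * (x - 2.0) ** 2), np.zeros_like(x)
g0, dg0 = np.cos(3 * x), np.sin(x)

for t in (0.0, 1.0, 5.0, 20.0):
    f, df = evolve(f0, df0, t)
    g, dg = evolve(g0, dg0, t)
    Q = np.sum(f * dg - g * df) * dx
    print(f"t = {t:5.1f}   Q = {Q:+.10f}")   # the same value at every time (up to rounding)
```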
With appropriate changes the same steps can be followed to construct an "out" field that builds "out" states. In particular the definition of the "out" field is:
formula_24
where Δadv("x" − "y") is the advanced Green's function of the Klein–Gordon operator. The weak asymptotic relation between "out" field and interacting field is:
formula_25
The reduction formula for scalars.
The asymptotic relations are all that is needed to obtain the LSZ reduction formula. For future convenience we start with the matrix element:
formula_26
which is slightly more general than an "S"-matrix element. Indeed, formula_27 is the expectation value of the time-ordered product of a number of fields formula_28 between an "out" state and an "in" state. The "out" state can contain anything from the vacuum to an undefined number of particles, whose momenta are summarized by the index β. The "in" state contains at least a particle of momentum p, and possibly many others, whose momenta are summarized by the index α. If there are no fields in the time-ordered product, then formula_27 is obviously an "S"-matrix element. The particle with momentum p can be 'extracted' from the "in" state by use of a creation operator:
formula_29
where the prime on formula_30 denotes that one particle has been taken out. With the assumption that no particle with momentum p is present in the "out" state, that is, we are ignoring forward scattering, we can write:
formula_31
because formula_32 acting on the left gives zero. Expressing the creation operators in terms of "in" and "out" fields, we have:
formula_33
Now we can use the asymptotic condition to write:
formula_34
Then we notice that the field "φ"("x") can be brought inside the time-ordered product, since it appears on the right when "x"0 → −∞ and on the left when "x"0 → ∞:
formula_35
In the following, x dependence in the time-ordered product is what matters, so we set:
formula_36
It can be shown by explicitly carrying out the time integration that:
formula_37
so that, by explicitly carrying out the time derivative, we have:
formula_38
By its definition we see that "fp" ("x") is a solution of the Klein–Gordon equation, which can be written as:
formula_39
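As a quick consistency check, the following sketch verifies symbolically that the on-shell plane wave "fp" ("x") (normalization omitted) satisfies this form of the Klein–Gordon equation; it is only a sanity check of the step above, not part of the derivation.

```python
import sympy as sp

t, x, y, z = sp.symbols('t x y z', real=True)
m = sp.symbols('m', positive=True)
p1, p2, p3 = sp.symbols('p1 p2 p3', real=True)
omega = sp.sqrt(p1**2 + p2**2 + p3**2 + m**2)    # on-shell energy p^0 = omega_p

# plane wave exp(-i p.x) with p^0 on-shell; the (2*pi)^(3/2) (2*omega)^(1/2) factor is dropped
f = sp.exp(-sp.I * (omega * t - p1 * x - p2 * y - p3 * z))

lhs = sp.diff(f, t, 2)                                                     # d_0^2 f
rhs = sp.diff(f, x, 2) + sp.diff(f, y, 2) + sp.diff(f, z, 2) - m**2 * f    # (Laplacian - m^2) f
print(sp.simplify(lhs - rhs))                                              # prints 0
```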
Substituting into the expression for formula_27 and integrating by parts, we arrive at:
formula_40
That is:
formula_41
Starting from this result, and following the same path another particle can be extracted from the "in" state, leading to the insertion of another field in the time-ordered product. A very similar routine can extract particles from the "out" state, and the two can be iterated to get vacuum both on right and on left of the time-ordered product, leading to the general formula:
formula_42
This is the LSZ reduction formula for Klein–Gordon scalars. It takes a much more transparent form when written using the Fourier transform of the correlation function:
formula_43
Using the inverse transform to substitute in the LSZ reduction formula, with some effort, the following result can be obtained:
formula_44
Leaving aside normalization factors, this formula asserts that "S"-matrix elements are the residues of the poles that arise in the Fourier transform of the correlation functions as four-momenta are put on-shell.
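A toy illustration of this pole-residue statement, using a made-up function of a single invariant with a simple pole at "p"2 = "m"2 plus a smooth remainder (it is not an actual field-theory correlation function):

```python
import sympy as sp

p2 = sp.symbols('p2', real=True)        # stands for the invariant p^2
m2 = sp.symbols('m2', positive=True)    # stands for m^2
R, g = sp.symbols('R g')                # pole residue and a coupling for the remainder

G = R / (p2 - m2) + g * sp.log(p2 / m2)          # single-particle pole plus smooth part
amputated = sp.limit((p2 - m2) * G, p2, m2)      # multiply by (p^2 - m^2) and go on-shell
print(amputated)                                 # prints R: only the pole residue survives
```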
Reduction formula for fermions.
Recall that solutions to the quantized free-field Dirac equation may be written as
formula_45
where the metric signature is mostly plus, formula_46 is an annihilation operator for b-type particles of momentum formula_47 and spin formula_48, formula_49 is a creation operator for d-type particles of spin formula_50, and the spinors formula_51 and formula_52 satisfy formula_53 and formula_54. The Lorentz-invariant measure is written as formula_55, with formula_56. Consider now a scattering event consisting of an "in" state formula_57 of non-interacting particles approaching an interaction region at the origin, where scattering occurs, followed by an "out" state formula_58 of outgoing non-interacting particles. The probability amplitude for this process is given by
formula_59
where no extra time-ordered product of field operators has been inserted, for simplicity. The situation considered will be the scattering of formula_60 b-type particles to formula_61 b-type particles. Suppose that the "in" state consists of formula_60 particles with momenta formula_62 and spins formula_63, while the "out" state contains particles of momenta formula_64 and spins formula_65. The "in" and "out" states are then given by
formula_66
Extracting an "in" particle from formula_57 yields a free-field creation operator formula_67 acting on the state with one less particle. Assuming that no outgoing particle has that same momentum, we then can write
formula_68
where the prime on formula_30 denotes that one particle has been taken out. Now recall that in the free theory, the b-type particle operators can be written in terms of the field using the inverse relation
formula_69
where formula_70. Denoting the asymptotic free fields by formula_71 and formula_72, we find
formula_73
The weak asymptotic condition needed for a Dirac field, analogous to that for scalar fields, reads
formula_74
and likewise for the "out" field. The scattering amplitude is then
formula_75
where now the interacting field appears in the inner product. Rewriting the limits in terms of the integral of a time derivative, we have
formula_76
formula_77
where the row vector of matrix elements of the barred Dirac field is written as formula_78. Now, recall that formula_79 is a solution to the Dirac equation:
formula_80
Solving for formula_81, substituting it into the first term in the integral, and performing an integration by parts, yields
formula_82
Switching to Dirac index notation (with sums over repeated indices) allows for a neater expression, in which the quantity in square brackets is to be regarded as a differential operator:
formula_83
Consider next the matrix element appearing in the integral. Extracting an "out" state annihilation operator from the bra and subtracting the corresponding "in" state operator, with the assumption that no incoming particle has the same momentum, we have
formula_84
Remembering that formula_85, where formula_86, we can replace the annihilation operators with "in" fields using the adjoint of the inverse relation. Applying the asymptotic relation, we find
formula_87
Note that a time-ordering symbol has appeared, since the first term requires formula_88 on the left, while the second term requires it on the right. Following the same steps as before, this expression reduces to
formula_89
The rest of the "in" and "out" states can then be extracted and reduced in the same way, ultimately resulting in
formula_90
The same procedure can be done for the scattering of d-type particles, for which formula_51's are replaced by formula_52's, and formula_91's and formula_92's are swapped.
Field strength normalization.
The reason for the normalization factor Z in the definition of the "in" and "out" fields can be understood by taking the relation between the vacuum and a single-particle state formula_93 with four-momentum on-shell:
formula_94
Remembering that both φ and "φ"in are scalar fields that transform under spacetime translations according to:
formula_95
where Pμ is the four-momentum operator, we can write:
formula_96
Applying the Klein–Gordon operator ∂2 + "m"2 on both sides, remembering that the four-momentum p is on-shell and that Δret is the Green's function of the operator, we obtain:
formula_97
So we arrive at the relation:
formula_98
which accounts for the need for the factor Z. The "in" field is a free field, so it can only connect one-particle states with the vacuum. That is, its expectation value between the vacuum and a many-particle state is null. On the other hand, the interacting field can also connect many-particle states to the vacuum, thanks to the interaction, so the expectation values on the two sides of the last equation are different, and a normalization factor is needed in between. The right-hand side can be computed explicitly, by expanding the "in" field in creation and annihilation operators:
formula_99
Using the commutation relation between "a"in and formula_100 we obtain:
formula_101
leading to the relation:
formula_102
by which the value of Z may be computed, provided that one knows how to compute formula_103.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|\\{p\\}\\ \\mathrm{in}\\rangle"
},
{
"math_id": 1,
"text": "|\\{p\\}\\ \\mathrm{out}\\rangle"
},
{
"math_id": 2,
"text": "S_{\\rm fi}=\\langle \\{q\\}\\ \\mathrm{out}| \\{p\\}\\ \\mathrm{in}\\rangle"
},
{
"math_id": 3,
"text": "\\mathcal L= \\frac 1 2 \\partial_\\mu \\varphi\\partial^\\mu \\varphi - \\frac 1 2 m_0^2 \\varphi^2 +\\mathcal L_{\\mathrm{int}}"
},
{
"math_id": 4,
"text": "\\mathcal L_{\\mathrm{int}}"
},
{
"math_id": 5,
"text": "g\\ \\varphi\\bar\\psi\\psi"
},
{
"math_id": 6,
"text": "\\left(\\partial^2+m_0^2\\right)\\varphi(x)=j_0(x)"
},
{
"math_id": 7,
"text": "j_0=\\frac{\\partial\\mathcal L_{\\mathrm{int}}}{\\partial \\varphi}"
},
{
"math_id": 8,
"text": "\\left(\\partial^2+m^2\\right)\\varphi(x)=j_0(x)+\\left(m^2-m_0^2\\right)\\varphi(x)=j(x)"
},
{
"math_id": 9,
"text": "\\partial^2+m^2"
},
{
"math_id": 10,
"text": "\\Delta_{\\mathrm{ret}}(x)=i\\theta\\left(x^0\\right)\\int \\frac{\\mathrm{d}^3k}{(2\\pi)^3 2\\omega_k} \\left(e^{-ik\\cdot x}-e^{ik\\cdot x}\\right)_{k^0=\\omega_k}\\qquad \\omega_k=\\sqrt{\\mathbf{k}^2+m^2}"
},
{
"math_id": 11,
"text": "\\varphi(x)=\\sqrt Z \\varphi_{\\mathrm{in}}(x) +\\int \\mathrm{d}^4y \\Delta_{\\mathrm{ret}}(x-y)j(y)"
},
{
"math_id": 12,
"text": "\\left(\\partial^2+m^2\\right) \\varphi_{\\mathrm{in}}(x)=0,"
},
{
"math_id": 13,
"text": "\\varphi_{\\mathrm{in}}(x)=\\int \\mathrm{d}^3k \\left\\{f_k(x) a_{\\mathrm{in}}(\\mathbf{k})+f^*_k(x) a^\\dagger_{\\mathrm{in}}(\\mathbf{k})\\right\\}"
},
{
"math_id": 14,
"text": "f_k(x)=\\left.\\frac{e^{-ik\\cdot x}}{(2\\pi)^{\\frac{3}{2}}(2\\omega_k)^{\\frac{1}{2}}}\\right|_{k^0=\\omega_k}"
},
{
"math_id": 15,
"text": "a_{\\mathrm{in}}(\\mathbf{k})=i\\int \\mathrm{d}^3x f^*_k(x)\\overleftrightarrow{\\partial_0}\\varphi_{\\mathrm{in}}(x)"
},
{
"math_id": 16,
"text": "{\\mathrm{g}}\\overleftrightarrow{\\partial_0} f = \\mathrm{g}\\partial_0 f -f\\partial_0 \\mathrm{g}."
},
{
"math_id": 17,
"text": "[a_{\\mathrm{in}}(\\mathbf{p}),a_{\\mathrm{in}}(\\mathbf{q})]=0;\\quad [a_{\\mathrm{in}}(\\mathbf{p}),a^\\dagger_{\\mathrm{in}}(\\mathbf{q})]=\\delta^3(\\mathbf{p}-\\mathbf{q});"
},
{
"math_id": 18,
"text": "\\left|k_1,\\ldots,k_n\\ \\mathrm{in}\\right\\rangle=\\sqrt{2\\omega_{k_1}}a_{\\mathrm{in}}^\\dagger(\\mathbf{k}_1)\\ldots \\sqrt{2\\omega_{k_n}}a_{\\mathrm{in}}^\\dagger(\\mathbf{k}_n)|0\\rangle"
},
{
"math_id": 19,
"text": "\\varphi(x)\\sim\\sqrt Z\\varphi_{\\mathrm{in}}(x)\\qquad \\mathrm{as}\\quad x^0\\to-\\infty"
},
{
"math_id": 20,
"text": "|\\alpha\\rangle"
},
{
"math_id": 21,
"text": "|\\beta\\rangle"
},
{
"math_id": 22,
"text": "(\\partial^2+m^2)f(x)=0"
},
{
"math_id": 23,
"text": "\\lim_{x^0\\to-\\infty} \\int \\mathrm{d}^3x \\langle\\alpha|f(x)\\overleftrightarrow{\\partial_0}\\varphi(x)|\\beta\\rangle= \\sqrt Z \\int \\mathrm{d}^3x \\langle\\alpha|f(x)\\overleftrightarrow{\\partial_0}\\varphi_{\\mathrm{in}}(x)|\\beta\\rangle"
},
{
"math_id": 24,
"text": "\\varphi(x)=\\sqrt Z \\varphi_{\\mathrm{out}}(x) +\\int \\mathrm{d}^4y \\Delta_{\\mathrm{adv}}(x-y)j(y)"
},
{
"math_id": 25,
"text": " \\lim_{x^0\\to \\infty} \\int \\mathrm{d}^3x \\langle\\alpha|f(x)\\overleftrightarrow{\\partial_0}\\varphi(x)|\\beta\\rangle= \\sqrt Z \\int \\mathrm{d}^3x\n\\langle\\alpha|f(x)\\overleftrightarrow{\\partial_0}\\varphi_{\\mathrm{out}}(x)|\\beta\\rangle "
},
{
"math_id": 26,
"text": "\\mathcal M=\\langle \\beta\\ \\mathrm{out}|\\mathrm{T}\\varphi(y_1)\\ldots\\varphi(y_n)|\\alpha\\ \\mathrm{in}\\rangle "
},
{
"math_id": 27,
"text": "\\mathcal M"
},
{
"math_id": 28,
"text": "\\varphi(y_1)\\cdots\\varphi(y_n)"
},
{
"math_id": 29,
"text": " \\mathcal M=\\sqrt{2\\omega_p}\\ \\left \\langle \\beta\\ \\mathrm{out} \\bigg| \\mathrm T\\left[\\varphi(y_1)\\ldots\\varphi(y_n)\\right] a_{\\mathrm{in}}^\\dagger(\\mathbf p) \\bigg|\\alpha'\\ \\mathrm{in} \\right \\rangle "
},
{
"math_id": 30,
"text": "\\alpha"
},
{
"math_id": 31,
"text": "\\mathcal M=\\sqrt{2\\omega_p}\\ \\left \\langle \\beta\\ \\mathrm{out} \\bigg| \n\\left\\{ \n\\mathrm T\\left[\\varphi(y_1)\\ldots\\varphi(y_n)\\right] a_{\\mathrm{in}}^\\dagger (\\mathbf p)- a_{\\mathrm{out}}^\\dagger(\\mathbf p) \\mathrm T\\left[\\varphi(y_1)\\ldots\\varphi(y_n)\\right] \n\\right\\}\n\\bigg|\\alpha'\\ \\mathrm{in} \\right \\rangle"
},
{
"math_id": 32,
"text": "a_{\\mathrm{out}}^\\dagger"
},
{
"math_id": 33,
"text": "\\mathcal M=-i\\sqrt{2\\omega_p}\\ \\int \\mathrm{d}^3x f_p(x)\\overleftrightarrow{\\partial_0} \\left\\langle \\beta\\ \\mathrm{out} \\bigg| \n\\left\\{\n\\mathrm T\\left[\\varphi(y_1)\\ldots\\varphi(y_n)\\right] \\varphi_{\\mathrm{in}}(x)- \\varphi_{\\mathrm{out}}(x) \\mathrm T\\left[\\varphi(y_1)\\ldots\\varphi(y_n)\\right] \n\\right\\}\n\\bigg|\\alpha'\\ \\mathrm{in}\\right \\rangle"
},
{
"math_id": 34,
"text": "\\mathcal M= -i\\sqrt{\\frac{2\\omega_p}{Z}} \\left\\{ \\lim_{x^0\\to-\\infty} \\int \\mathrm{d}^3x f_p(x)\\overleftrightarrow{\\partial_0} \\langle \\beta\\ \\mathrm{out}| \\mathrm T\\left[\\varphi(y_1)\\ldots\\varphi(y_n)\\right] \\varphi(x) |\\alpha'\\ \\mathrm{in}\\rangle-\\lim_{x^0\\to\\infty} \\int \\mathrm{d}^3x f_p(x)\\overleftrightarrow{\\partial_0} \\langle \\beta\\ \\mathrm{out}| \\varphi(x) \\mathrm T\\left[\\varphi(y_1)\\ldots\\varphi(y_n)\\right] |\\alpha'\\ \\mathrm{in}\\rangle \\right\\}"
},
{
"math_id": 35,
"text": "\\mathcal M=-i\\sqrt{\\frac{2\\omega_p}{Z}} \\left(\\lim_{x^0\\to-\\infty}-\\lim_{x^0\\to \\infty}\\right) \\int \\mathrm{d}^3x f_p(x) \\overleftrightarrow{\\partial_0} \\langle \\beta\\ \\mathrm{out}| \\mathrm T\\left[\\varphi(x)\\varphi(y_1)\\ldots\\varphi(y_n)\\right] |\\alpha'\\ \\mathrm{in} \\rangle "
},
{
"math_id": 36,
"text": "\\langle \\beta\\ \\mathrm{out}| \\mathrm T\\left[\\varphi(x)\\varphi(y_1)\\ldots\\varphi(y_n)\\right] |\\alpha'\\ \\mathrm{in}\\rangle= \\eta(x) "
},
{
"math_id": 37,
"text": " \\mathcal M=i\\sqrt{\\frac{2\\omega_p}{Z}} \\int \\mathrm{d}(x^0)\\partial_0 \\int \\mathrm{d}^3x f_p(x)\\overleftrightarrow{\\partial_0}\\eta(x)"
},
{
"math_id": 38,
"text": "\\mathcal M=i\\sqrt{\\frac{2\\omega_p}{Z}} \\int \\mathrm{d}^4 x\\left\\{f_p(x)\\partial_0^2\\eta(x)-\\eta(x)\\partial_0^2 f_p(x)\\right\\}"
},
{
"math_id": 39,
"text": "\\partial_0^2f_p(x)=\\left(\\Delta-m^2\\right) f_p(x)"
},
{
"math_id": 40,
"text": "\\mathcal M=i\\sqrt{\\frac{2\\omega_p}{Z}} \\int \\mathrm{d}^4 x f_p(x)\\left(\\partial_0^2-\\Delta+m^2\\right)\\eta(x)"
},
{
"math_id": 41,
"text": " \\mathcal M=\\frac{i}{(2\\pi)^{\\frac{3}{2}} Z^{\\frac{1}{2}}} \\int \\mathrm{d}^4 x e^{-ip\\cdot x} \\left(\\Box+m^2\\right)\\langle \\beta\\ \\mathrm{out}| \\mathrm T\\left[\\varphi(x)\\varphi(y_1)\\ldots\\varphi(y_n)\\right] |\\alpha'\\ \\mathrm{in}\\rangle"
},
{
"math_id": 42,
"text": "\\langle p_1,\\ldots,p_n\\ \\mathrm{out}|q_1,\\ldots,q_m\\ \\mathrm{in}\\rangle=\\int \\prod_{i=1}^{m} \\left\\{\\mathrm{d}^4x_i \\frac{i e^{-iq_i\\cdot x_i} \\left(\\Box_{x_i}+m^2\\right)}{(2\\pi)^{\\frac{3}{2}} Z^{\\frac{1}{2}}} \\right\\} \\prod_{j=1}^{n} \\left\\{ \\mathrm{d}^4y_j \\frac{i e^{ip_j\\cdot y_j}\\left(\\Box_{y_j}+m^2\\right)}{(2\\pi)^{\\frac{3}{2}} Z^{\\frac{1}{2}}} \\right\\} \\langle \\Omega|\\mathrm{T} \\varphi(x_1)\\ldots\\varphi(x_m)\\varphi(y_1)\\ldots\\varphi(y_n)|\\Omega\\rangle"
},
{
"math_id": 43,
"text": " \\Gamma \\left (p_1,\\ldots,p_n \\right )=\\int \\prod_{i=1}^{n} \\left\\{\\mathrm{d}^4x_i e^{i p_i\\cdot x_i} \\right\\} \\langle \\Omega|\\mathrm{T}\\ \\varphi(x_1)\\ldots\\varphi(x_n)|\\Omega\\rangle"
},
{
"math_id": 44,
"text": "\\langle p_1,\\ldots,p_n\\ \\mathrm{out}|q_1,\\ldots,q_m\\ \\mathrm{in}\\rangle= \\prod_{i=1}^{m} \\left\\{-\\frac{i\\left(p_i^2-m^2\\right)}{(2\\pi)^{\\frac{3}{2}} Z^{\\frac{1}{2}}} \\right\\} \\prod_{j=1}^{n} \\left\\{ -\\frac{i\\left(q_j^2-m^2\\right)}{(2\\pi)^{\\frac{3}{2}} Z^{\\frac{1}{2}}} \\right\\} \\Gamma \\left (p_1,\\ldots,p_n;-q_1,\\ldots,-q_m \\right )"
},
{
"math_id": 45,
"text": "\\Psi(x)=\\sum_{s=\\pm}\\int\\!\\mathrm{d}\\tilde{p}\\big(b^s_\\textbf{p}u^s_\\textbf{p}\\mathrm{e}^{ip\\cdot x}+d^{\\dagger s}_\\textbf{p}v^s_\\textbf{p}\\mathrm{e}^{-ip\\cdot x}\\big),"
},
{
"math_id": 46,
"text": " b^s_\\textbf{p}"
},
{
"math_id": 47,
"text": "\\textbf{p}"
},
{
"math_id": 48,
"text": "s=\\pm"
},
{
"math_id": 49,
"text": "d^{\\dagger s}_\\textbf{p}"
},
{
"math_id": 50,
"text": "s"
},
{
"math_id": 51,
"text": "u^s_\\textbf{p}"
},
{
"math_id": 52,
"text": "v^s_\\textbf{p}"
},
{
"math_id": 53,
"text": "(p\\!\\!\\!/+m)u^s_\\textbf{p}=0"
},
{
"math_id": 54,
"text": "(p\\!\\!\\!/-m)v^s_\\textbf{p} = 0"
},
{
"math_id": 55,
"text": "\\mathrm{d}\\tilde{p}:=\\mathrm{d}^3 p/(2\\pi)^3 2\\omega_\\textbf{p}"
},
{
"math_id": 56,
"text": "\\omega_\\textbf{p} = \\sqrt{\\textbf{p}^2+m^2}"
},
{
"math_id": 57,
"text": "|\\alpha\\ \\mathrm{in}\\rangle"
},
{
"math_id": 58,
"text": "|\\beta\\ \\mathrm{out}\\rangle"
},
{
"math_id": 59,
"text": "\\mathcal{M} = \\langle \\beta\\ \\mathrm{out}|\\alpha\\ \\mathrm{in}\\rangle,"
},
{
"math_id": 60,
"text": "n"
},
{
"math_id": 61,
"text": "n'"
},
{
"math_id": 62,
"text": "\\{\\textbf{p}_1,...,\\textbf{p}_n\\}"
},
{
"math_id": 63,
"text": "\\{s_1,...,s_{n}\\}"
},
{
"math_id": 64,
"text": "\\{\\textbf{k}_1,...,\\textbf{k}_{n'}\\}"
},
{
"math_id": 65,
"text": "\\{\\sigma_1,...,\\sigma_{n'}\\}"
},
{
"math_id": 66,
"text": "|\\alpha\\ \\mathrm{in}\\rangle = |\\textbf{p}_1^{s_1},...,\\textbf{p}_n^{s_n}\\rangle\\quad\\text{and}\\quad|\\beta\\ \\mathrm{out}\\rangle = |\\textbf{k}_1^{\\sigma_1},...,\\textbf{k}_{n'}^{\\sigma_{n'}}\\rangle."
},
{
"math_id": 67,
"text": "b^{\\dagger s_1}_{\\textbf{p}_1,\\mathrm{in}}"
},
{
"math_id": 68,
"text": "\\mathcal{M} = \\langle\\beta\\ \\mathrm{out}|b^{\\dagger s_1}_{\\textbf{p}_1,\\mathrm{in}}-b^{\\dagger s_1}_{\\textbf{p}_1,\\mathrm{out}}|\\alpha'\\ \\mathrm{in}\\rangle,"
},
{
"math_id": 69,
"text": "b^{\\dagger s}_\\textbf{p} = \\int\\!\\mathrm{d}^3 x\\;\\mathrm{e}^{ip\\cdot x}\\bar{\\Psi}(x)\\gamma^0 u^s_\\textbf{p},"
},
{
"math_id": 70,
"text": "\\bar{\\Psi}(x)=\\Psi^\\dagger(x)\\gamma^0"
},
{
"math_id": 71,
"text": "\\Psi_\\text{in}"
},
{
"math_id": 72,
"text": "\\Psi_\\text{out}"
},
{
"math_id": 73,
"text": "\\mathcal{M} = \\int\\!\\mathrm{d}^3x_1\\;\\mathrm{e}^{ip_1\\cdot x_1}\\langle\\beta\\ \\mathrm{out}|\\bar{\\Psi}_\\text{in}(x_1)\\gamma^0 u^{s_1}_{\\textbf{p}_1}-\\bar{\\Psi}_\\text{out}(x_1)\\gamma^0 u^{s_1}_{\\textbf{p}_1}|\\alpha'\\ \\mathrm{in}\\rangle."
},
{
"math_id": 74,
"text": "\\lim_{x^0\\rightarrow-\\infty}\\int\\!\\mathrm{d}^3 x\\langle \\beta|\\mathrm{e}^{ip\\cdot x}\\bar{\\Psi}(x)\\gamma^0 u^{s}_{\\textbf{p}}|\\alpha\\rangle=\\sqrt{Z}\\int\\!\\mathrm{d}^3 x\\langle \\beta|\\mathrm{e}^{ip\\cdot x}\\bar{\\Psi}_\\text{in}(x)\\gamma^0 u^{s}_{\\textbf{p}}|\\alpha\\rangle,"
},
{
"math_id": 75,
"text": "\\mathcal{M} = \\frac{1}{\\sqrt{Z}}\\Big(\\lim_{x_1^0\\rightarrow-\\infty}-\\lim_{x^0_1\\rightarrow+\\infty}\\Big)\\int\\!\\mathrm{d}^3 x_1\\;\\mathrm{e}^{ip_1\\cdot x_1}\\langle\\beta\\ \\mathrm{out}|\\bar{\\Psi}(x_1)\\gamma^0 u^{s_1}_{\\textbf{p}_1}|\\alpha'\\ \\mathrm{in}\\rangle,"
},
{
"math_id": 76,
"text": "\\mathcal{M} = -\\frac{1}{\\sqrt{Z}}\\int\\!\\mathrm{d}^4 x_1\\partial_0\\big(\\mathrm{e}^{ip_1\\cdot x_1}\\langle\\beta\\ \\mathrm{out}|\\bar{\\Psi}(x_1)\\gamma^0 u^{s_1}_{\\textbf{p}_1}|\\alpha'\\ \\mathrm{in}\\rangle\\big)"
},
{
"math_id": 77,
"text": " =-\\frac{1}{\\sqrt{Z}}\\int\\!\\mathrm{d}^4 x_1(\\partial_0\\mathrm{e}^{ip_1\\cdot x_1}\\eta(x_1)+\\mathrm{e}^{ip_1\\cdot x_1}\\partial_0\\eta(x_1)\\big)\\gamma^0 u^{s_1}_{\\textbf{p}_1},"
},
{
"math_id": 78,
"text": "\\eta(x_1):=\\langle\\beta\\ \\mathrm{out}|\\bar{\\Psi}(x_1)|\\alpha'\\ \\mathrm{in}\\rangle"
},
{
"math_id": 79,
"text": "\\mathrm{e}^{ip\\cdot x}u^s_\\textbf{p}"
},
{
"math_id": 80,
"text": "(-i\\partial\\!\\!\\!/+m)\\mathrm{e}^{ip\\cdot x}u^s_\\textbf{p}=0."
},
{
"math_id": 81,
"text": "\\gamma^0\\partial_0 \\mathrm{e}^{ip\\cdot x}u^s_\\textbf{p}"
},
{
"math_id": 82,
"text": "\\mathcal{M} = \\frac{i}{\\sqrt{Z}}\\int\\!\\mathrm{d}^4x_1\\mathrm{e}^{ip_1\\cdot x_1}\\big(i\\partial_\\mu\\eta(x_1)\\gamma^\\mu + \\eta(x_1)m\\big)u^{s_1}_{\\textbf{p}_1}."
},
{
"math_id": 83,
"text": "\\mathcal{M} = \\frac{i}{\\sqrt{Z}}\\int\\!\\mathrm{d}^4x_1\\mathrm{e}^{ip_1\\cdot x_1}[(i{\\partial\\!\\!\\!/}_{x_1} + m)u^{s_1}_{\\textbf{p}_1}]_{\\alpha_1}\\langle\\beta\\ \\mathrm{out}|\\bar{\\Psi}_{\\alpha_1}(x_1)|\\alpha'\\ \\mathrm{in}\\rangle."
},
{
"math_id": 84,
"text": "\\langle\\beta\\ \\mathrm{out}|\\bar{\\Psi}_{\\alpha_1}(x_1)|\\alpha'\\ \\mathrm{in}\\rangle = \\langle\\beta'\\ \\mathrm{out}|b^{\\sigma_1}_{\\textbf{k}_1,\\mathrm{out}}\\bar{\\Psi}_{\\alpha_1}(x_1) - \\bar{\\Psi}_{\\alpha_1}(x_1)b^{\\sigma_1}_{\\textbf{k}_1,\\mathrm{in}}|\\alpha'\\ \\mathrm{in}\\rangle."
},
{
"math_id": 85,
"text": "(\\bar{\\Psi}\\gamma^0u^s_\\textbf{p})^\\dagger = \\bar{u}^s_\\textbf{p}\\gamma^0\\Psi"
},
{
"math_id": 86,
"text": "\\bar{u}^s_\\textbf{p}:=u^{\\dagger s}_\\textbf{p}\\beta"
},
{
"math_id": 87,
"text": "\\langle\\beta\\ \\mathrm{out}|\\bar{\\Psi}_{\\alpha_1}(x_1)|\\alpha'\\ \\mathrm{in}\\rangle =\\frac{1}{\\sqrt{Z}}\\Big(\\lim_{y^0_1\\rightarrow\\infty}-\\lim_{y^0_1\\rightarrow-\\infty}\\Big)\\int\\!\\mathrm{d}^3 y_1\\mathrm{e}^{-ik_1\\cdot y_1}[\\bar{u}^{\\sigma_1}_{\\textbf{k}_1}\\gamma^0]_{\\beta_1}\\langle\\beta'\\ \\mathrm{out}|\\mathrm{T}[\\Psi_{\\beta_1}(y_1)\\bar{\\Psi}_{\\alpha_1}(x_1)]|\\alpha'\\ \\mathrm{in}\\rangle."
},
{
"math_id": 88,
"text": "\\Psi_{\\beta_1}(y_1)"
},
{
"math_id": 89,
"text": "\\langle\\beta\\ \\mathrm{out}|\\bar{\\Psi}_{\\alpha_1}(x_1)|\\alpha'\\ \\mathrm{in}\\rangle =\\frac{i}{\\sqrt{Z}}\\int\\!\\mathrm{d}^4y_1\\mathrm{e}^{-ik_1\\cdot y_1}[\\bar{u}^{\\sigma_1}_{\\textbf{k}_1}(-i\\partial\\!\\!\\!/_{y_1}+m)]_{\\beta_1}\\langle\\beta'\\ \\mathrm{out}|\\mathrm{T}[\\Psi_{\\beta_1}(y_1)\\bar{\\Psi}_{\\alpha_1}(x_1)]|\\alpha'\\ \\mathrm{in}\\rangle."
},
{
"math_id": 90,
"text": "\\langle \\beta\\ \\mathrm{out}|\\alpha\\ \\mathrm{in}\\rangle=\\int\\!\\prod_{j=1}^n \\mathrm{d}^4 x_j \\frac{i\\mathrm{e}^{-ip_j x_j}}{\\sqrt{Z}} [(i{\\partial\\!\\!\\!/}_{x_j}+m)u^{s_j}_{\\textbf{p}_j}]_{\\alpha_j}\\prod_{l=1}^{n'}\\mathrm{d}^4 y_l\\frac{i\\mathrm{e}^{ik_l y_l}}{\\sqrt{Z}}[\\bar{u}^{\\sigma_l}_{\\textbf{k}_l}(-i{\\partial\\!\\!\\!/}_{y_l}+m)]_{\\beta_l} \\langle 0| \\mathrm{T}[\\Psi_{\\beta_1}(y_1)...\\Psi_{\\beta_{n'}}(y_{n'})\\bar{\\Psi}_{\\alpha_1}(x_1)...\\bar{\\Psi}_{\\alpha_n}(x_n)]|0\\rangle."
},
{
"math_id": 91,
"text": "\\Psi"
},
{
"math_id": 92,
"text": "\\bar{\\Psi}"
},
{
"math_id": 93,
"text": "|p\\rangle"
},
{
"math_id": 94,
"text": "\\langle 0|\\varphi(x)|p\\rangle= \\sqrt Z \\langle 0|\\varphi_{\\mathrm{in}}(x)|p\\rangle + \\int \\mathrm{d}^4y \\Delta_{\\mathrm{ret}}(x-y) \\langle 0|j(y)|p\\rangle"
},
{
"math_id": 95,
"text": "\\varphi(x)=e^{iP\\cdot x}\\varphi(0)e^{-iP\\cdot x}"
},
{
"math_id": 96,
"text": " e^{-ip\\cdot x}\\langle 0|\\varphi(0)|p\\rangle= \\sqrt Z e^{-ip\\cdot x} \\langle 0|\\varphi_{\\mathrm{in}}(0)|p\\rangle + \\int \\mathrm{d}^4y \\Delta_{\\mathrm{ret}}(x-y)\\langle 0|j(y)|p\\rangle"
},
{
"math_id": 97,
"text": " 0=0 + \\int \\mathrm{d}^4y \\delta^4(x-y) \\langle 0|j(y)|p\\rangle; \\quad\\Leftrightarrow\\quad \\langle 0|j(x)|p\\rangle=0"
},
{
"math_id": 98,
"text": "\\langle 0|\\varphi(x)|p\\rangle= \\sqrt Z \\langle 0|\\varphi_{\\mathrm{in}}(x)|p\\rangle "
},
{
"math_id": 99,
"text": "\\langle 0|\\varphi_{\\mathrm{in}}(x)|p\\rangle= \\int \\frac{\\mathrm{d}^3q}{(2\\pi)^{\\frac{3}{2}}(2\\omega_q)^{\\frac{1}{2}}} e^{-iq\\cdot x} \\langle 0|a_{\\mathrm{in}}(\\mathbf q)|p\\rangle= \\int \\frac{\\mathrm{d}^3q}{(2\\pi)^{\\frac{3}{2}}} e^{-iq\\cdot x} \\langle 0|a_{\\mathrm{in}}(\\mathbf q)a^\\dagger_{\\mathrm{in}}(\\mathbf p)|0\\rangle "
},
{
"math_id": 100,
"text": "a^\\dagger_{\\mathrm{in}}"
},
{
"math_id": 101,
"text": " \\langle 0|\\varphi_{\\mathrm{in}}(x)|p\\rangle= \\frac{e^{-ip\\cdot x}}{(2\\pi)^{\\frac{3}{2}}} "
},
{
"math_id": 102,
"text": "\\langle 0|\\varphi(0)|p\\rangle= \\sqrt \\frac{Z}{(2\\pi)^3}"
},
{
"math_id": 103,
"text": "\\langle 0|\\varphi(0)|p\\rangle"
}
]
| https://en.wikipedia.org/wiki?curid=1372638 |
1372673 | Mandelstam variables | Variables used in scattering processes
In theoretical physics, the Mandelstam variables are numerical quantities that encode the energy, momentum, and angles of particles in a scattering process in a Lorentz-invariant fashion. They are used for scattering processes of two particles to two particles. The Mandelstam variables were first introduced by physicist Stanley Mandelstam in 1958.
If the Minkowski metric is chosen to be formula_0, the Mandelstam variables formula_1 are then defined by
*formula_2
*formula_3
*formula_4,
where "p"1 and "p"2 are the four-momenta of the incoming particles and "p"3 and "p"4 are the four-momenta of the outgoing particles.
formula_5 is also known as the square of the center-of-mass energy (invariant mass) and formula_6 as the square of the four-momentum transfer.
Feynman diagrams.
The letters "s,t,u" are also used in the terms s-channel (timelike channel), t-channel, and u-channel (both spacelike channels). These channels represent different Feynman diagrams or different possible scattering events where the interaction involves the exchange of an intermediate particle whose squared four-momentum equals "s,t,u", respectively.
For example, the s-channel corresponds to particles 1 and 2 joining into an intermediate particle that eventually splits into particles 3 and 4. The t-channel represents the process in which particle 1 emits the intermediate particle and becomes the final particle 3, while particle 2 absorbs the intermediate particle and becomes particle 4. The u-channel is the t-channel with the roles of particles 3 and 4 interchanged.
When evaluating a Feynman amplitude, one often encounters scalar products of the external four-momenta. The Mandelstam variables can be used to simplify these:
formula_7
formula_8
formula_9
where formula_10 is the mass of the particle with corresponding four-momentum formula_11.
Sum.
Note that
formula_12
where "m""i" is the mass of particle "i".
Relativistic limit.
In the relativistic limit, the momentum is large compared to the rest mass, so by the relativistic energy-momentum relation the energy becomes essentially the norm of the momentum (e.g. formula_13 becomes formula_14). The rest mass can then be neglected.
So for example,
formula_15
because formula_16 and formula_17, and these rest-mass terms are negligible in this limit.
Thus, in this limit each Mandelstam variable reduces to twice a scalar product of external four-momenta, e.g. s ≈ 2 p1·p2 ≈ 2 p3·p4, t ≈ −2 p1·p3, and u ≈ −2 p1·p4.
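A quick numerical illustration of this limit, under the same conventions as the previous sketch (natural units, metric (+, -, -, -)); the mass and beam momenta are invented for illustration.

```python
# High-energy limit: for |p| >> m the exact s = (p1 + p2)^2 approaches
# 2 p1.p2, because the p_i^2 = m_i^2 terms become negligible.
import numpy as np

METRIC = np.diag([1.0, -1.0, -1.0, -1.0])

def minkowski_dot(a, b):
    return a @ METRIC @ b

def beam(mass, pz):
    return np.array([np.sqrt(mass**2 + pz**2), 0.0, 0.0, pz])

m = 0.000511                     # illustrative (electron-like) mass in GeV
for p in (0.01, 1.0, 100.0):     # increasingly relativistic beam momenta
    p1, p2 = beam(m, p), beam(m, -p)
    s_exact = minkowski_dot(p1 + p2, p1 + p2)
    s_approx = 2 * minkowski_dot(p1, p2)
    rel_err = abs(s_exact - s_approx) / s_exact
    print(f"|p| = {p:>6} GeV:  s = {s_exact:.6g},  2 p1.p2 = {s_approx:.6g},"
          f"  relative difference = {rel_err:.2e}")
```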
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{diag}(1, -1,-1,-1)"
},
{
"math_id": 1,
"text": "s,t,u"
},
{
"math_id": 2,
"text": "s=(p_1+p_2)^2 c^2 =(p_3+p_4)^2 c^2"
},
{
"math_id": 3,
"text": "t=(p_1-p_3)^2 c^2 =(p_4-p_2)^2 c^2"
},
{
"math_id": 4,
"text": "u=(p_1-p_4)^2 c^2 =(p_3-p_2)^2 c^2"
},
{
"math_id": 5,
"text": "s"
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "p_1 \\cdot p_2 = \\frac{s/c^2 - m_1^2 - m_2^2 }{2}"
},
{
"math_id": 8,
"text": "p_1 \\cdot p_3 = \\frac{m_1^2 + m_3^2 - t/c^2}{2}"
},
{
"math_id": 9,
"text": "p_1 \\cdot p_4 = \\frac{m_1^2 + m_4^2 - u/c^2}{2}"
},
{
"math_id": 10,
"text": "m_i"
},
{
"math_id": 11,
"text": "p_i"
},
{
"math_id": 12,
"text": "(s+t+u)/c^4 = m_1^2 + m_2^2 + m_3^2 + m_4^2"
},
{
"math_id": 13,
"text": "E^2= \\mathbf{p} \\cdot \\mathbf{p} + {m_0}^2"
},
{
"math_id": 14,
"text": "E^2 \\approx \\mathbf{p} \\cdot \\mathbf{p}"
},
{
"math_id": 15,
"text": "s/c^2=(p_1+p_2)^2=p_1^2+p_2^2+2 p_1 \\cdot p_2 \\approx 2 p_1 \\cdot p_2"
},
{
"math_id": 16,
"text": "p_1^2 = m_1^2"
},
{
"math_id": 17,
"text": "p_2^2 = m_2^2"
}
]
| https://en.wikipedia.org/wiki?curid=1372673 |
1372715 | Seesaw mechanism | In the theory of grand unification of particle physics, and, in particular, in theories of neutrino masses and neutrino oscillation, the seesaw mechanism is a generic model used to understand the relative sizes of observed neutrino masses, of the order of eV, compared to those of quarks and charged leptons, which are millions of times heavier. The name of the seesaw mechanism was given by Tsutomu Yanagida in a Tokyo conference in 1981.
There are several types of models, each extending the Standard Model. The simplest version, "Type 1", extends the Standard Model by assuming two or more additional right-handed neutrino fields inert under the electroweak interaction, and the existence of a very large mass scale. This allows the mass scale to be identifiable with the postulated scale of grand unification.
Type 1 seesaw.
For each of the three known neutrino flavors, this model produces a light neutrino and a corresponding very heavy neutrino; the heavy neutrinos have yet to be observed.
The simple mathematical principle behind the seesaw mechanism is the following property of any 2×2 matrix of the form
formula_0
It has two eigenvalues:
formula_1
and
formula_2
The geometric mean of formula_3 and formula_4 equals formula_5, since the determinant formula_6.
Thus, if one of the eigenvalues goes up, the other goes down, and vice versa. This is the point of the name "seesaw" of the mechanism.
In applying this model to neutrinos, formula_7 is taken to be much larger than formula_8
Then the larger eigenvalue, formula_9 is approximately equal to formula_10 while the smaller eigenvalue is approximately equal to
formula_11
This mechanism serves to explain why the neutrino masses are so small.
The matrix A is essentially the mass matrix for the neutrinos. The Majorana mass component formula_7 is comparable to the GUT scale and violates lepton number conservation, while the Dirac mass components formula_12 are of the order of the much smaller electroweak scale, called the VEV or "vacuum expectation value" below. The smaller eigenvalue formula_13 then leads to a very small neutrino mass, of order M^2/B, which is in qualitative accord with experiments and is sometimes regarded as supportive evidence for the framework of Grand Unified Theories.
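The eigenvalue pattern is easy to verify numerically. The sketch below (Python with NumPy) uses arbitrary illustrative values of M and B, with a hierarchy modest enough that the tiny eigenvalue is still resolvable in double precision; it checks that one eigenvalue is pushed up to roughly B while the other is pushed down to roughly -M^2/B, their product being fixed at -M^2.

```python
# Numerical illustration of the seesaw eigenvalue structure.
# M and B are illustrative values only, kept modest so the small
# eigenvalue is not lost to floating-point round-off.
import numpy as np

M, B = 2.0, 1000.0
A = np.array([[0.0, M],
              [M,   B]])

lam_minus, lam_plus = np.linalg.eigvalsh(A)   # eigenvalues in ascending order

print(f"lambda_+ = {lam_plus:.6f}   (close to  B     = {B})")
print(f"lambda_- = {lam_minus:.6f}   (close to -M^2/B = {-M**2 / B})")

# The product of the eigenvalues is the determinant, -M^2: pushing one
# eigenvalue up necessarily pushes the other down -- the "seesaw".
assert np.isclose(lam_plus * lam_minus, -M**2)
```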
Background.
The 2×2 matrix A arises in a natural manner within the standard model by considering the most general mass matrix allowed by gauge invariance of the standard model action, and the corresponding charges of the lepton- and neutrino fields.
Denote the neutrino part by a Weyl spinor formula_14 part of a left-handed lepton weak isospin doublet; the other part is the left-handed charged lepton formula_15
formula_16
as it is present in the minimal standard model with neutrino masses omitted, and let formula_17 be a postulated right-handed neutrino Weyl spinor which is a singlet under weak isospin – i.e. a neutrino that fails to interact weakly, such as a sterile neutrino.
There are now three ways to form Lorentz covariant mass terms, giving either
formula_18
and their complex conjugates, which can be written as a quadratic form,
formula_19
Since the right-handed neutrino spinor is uncharged under all standard model gauge symmetries, B is a free parameter which can in principle take any arbitrary value.
The parameter M is forbidden by electroweak gauge symmetry, and can only appear after the symmetry has been spontaneously broken by a Higgs mechanism, like the Dirac masses of the charged leptons. In particular, since the lepton doublet L has weak isospin 1/2 like the Higgs field H, and formula_20 has weak isospin 0, the mass parameter M can be generated from Yukawa interactions with the Higgs field, in the conventional standard model fashion,
formula_21
This means that M is naturally of the order of the vacuum expectation value (VEV) of the standard model Higgs field,
formula_22
formula_23
if the dimensionless Yukawa coupling is of order formula_24. It can be chosen smaller consistently, but extreme values formula_25 can make the model nonperturbative.
The parameter formula_26, on the other hand, is forbidden, since no renormalizable singlet under weak hypercharge and isospin can be formed using these doublet components – only a nonrenormalizable, dimension 5 term is allowed. This is the origin of the pattern and hierarchy of scales of the mass matrix formula_27 within the "Type 1" seesaw mechanism.
The large size of B can be motivated in the context of grand unification. In such models, enlarged gauge symmetries may be present, which initially force formula_28 in the unbroken phase, but generate a large, non-vanishing value formula_29 around the scale of their spontaneous symmetry breaking. So, given a mass formula_30, one has formula_31 A huge scale has thus induced a dramatically small neutrino mass for the eigenvector formula_32
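Evaluating the approximate light eigenvalue at the scales quoted above is a one-line estimate; the sketch below simply carries out that arithmetic (the scale choices are the illustrative ones from the text, not fitted values).

```python
# Order-of-magnitude estimate: Dirac mass at the electroweak scale,
# Majorana mass at a GUT-like scale, light eigenvalue |lambda_-| ~ M^2 / B.
M_dirac_GeV = 100.0        # ~ electroweak scale (illustrative)
B_majorana_GeV = 1.0e15    # ~ grand-unification scale (illustrative)
GEV_TO_EV = 1.0e9

m_light_eV = (M_dirac_GeV**2 / B_majorana_GeV) * GEV_TO_EV
print(f"|lambda_-| ~ M^2 / B = {m_light_eV:.3g} eV")   # ~ 0.01 eV
```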
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " A = \\begin{pmatrix} 0 & M \\\\\n M & B \\end{pmatrix} ."
},
{
"math_id": 1,
"text": "\\lambda_{(+)} = \\frac{B + \\sqrt{ B^2 + 4 M^2 }}{2} ,"
},
{
"math_id": 2,
"text": "\\lambda_{(-)} = \\frac{B - \\sqrt{ B^2 + 4 M^2 } }{2} ."
},
{
"math_id": 3,
"text": "\\lambda_{(+)}"
},
{
"math_id": 4,
"text": "\\lambda_{(-)} "
},
{
"math_id": 5,
"text": "\\left| M \\right|"
},
{
"math_id": 6,
"text": " \\lambda_{(+)} \\; \\lambda_{(-)} = -M^2 "
},
{
"math_id": 7,
"text": " B "
},
{
"math_id": 8,
"text": " M ."
},
{
"math_id": 9,
"text": "\\lambda_{(+)},"
},
{
"math_id": 10,
"text": " B ,"
},
{
"math_id": 11,
"text": " \\lambda_- \\approx -\\frac{M^2}{B} ."
},
{
"math_id": 12,
"text": " M "
},
{
"math_id": 13,
"text": " \\lambda_{(-)} "
},
{
"math_id": 14,
"text": " \\chi ,"
},
{
"math_id": 15,
"text": "\\ell,"
},
{
"math_id": 16,
"text": " L = \\begin{pmatrix} \\chi \\\\ \\ell \\end{pmatrix} ,"
},
{
"math_id": 17,
"text": "\\eta"
},
{
"math_id": 18,
"text": " \\tfrac{1}{2} \\, B' \\, \\chi^\\alpha \\chi_\\alpha \\, , \\quad \\frac{1}{2} \\, B\\, \\eta^\\alpha \\eta_\\alpha \\, , \\quad \\mathrm{ or } \\quad M \\, \\eta^\\alpha \\chi_\\alpha \\, ,"
},
{
"math_id": 19,
"text": "\n \\frac{1}{2} \\, \\begin{pmatrix} \\chi & \\eta \\end{pmatrix}\n \\begin{pmatrix} B' & M \\\\\n M & B \\end{pmatrix}\n \\begin{pmatrix} \\chi \\\\\n \\eta \\end{pmatrix} ."
},
{
"math_id": 20,
"text": " \\eta "
},
{
"math_id": 21,
"text": " \\mathcal{L}_{yuk}=y \\, \\eta L \\epsilon H^* + ... "
},
{
"math_id": 22,
"text": "\\quad v \\; \\approx \\; \\mathrm{ 246 \\; GeV }, \\qquad \\qquad | \\langle H \\rangle| \\; = \\; v / \\sqrt{2} "
},
{
"math_id": 23,
"text": " M_t = \\mathcal{O} \\left( v / \\sqrt{2} \\right) \\; \\approx \\; \\mathrm{ 174 \\; GeV } ,"
},
{
"math_id": 24,
"text": " y \\approx 1 "
},
{
"math_id": 25,
"text": " y \\gg 1 "
},
{
"math_id": 26,
"text": " B' "
},
{
"math_id": 27,
"text": " A "
},
{
"math_id": 28,
"text": " B = 0 "
},
{
"math_id": 29,
"text": " B \\approx M_\\mathsf{GUT} \\approx \\mathrm{10^{15}~GeV},"
},
{
"math_id": 30,
"text": "M \\approx \\mathrm{ 100 \\; GeV }"
},
{
"math_id": 31,
"text": "\\lambda_{(-)} \\; \\approx \\; \\mathrm{ 0.01 \\; eV }."
},
{
"math_id": 32,
"text": " \\nu \\approx \\chi - \\frac{\\; M \\;}{B} \\eta ."
}
]
| https://en.wikipedia.org/wiki?curid=1372715 |
13727384 | Sten scores | The results for some scales of some psychometric instruments are returned as sten scores, sten being an abbreviation for 'Standard Ten' and thus closely related to stanine scores.
<templatestyles src="Template:TOC limit/styles.css" />
Definition.
A sten score indicates an individual's approximate position (as a range of values) with respect to the population of values and, therefore, to other people in that population. The individual sten scores are defined by reference to a standard normal distribution. Unlike stanine scores, which have a midpoint of five, sten scores have no midpoint (the midpoint is the value 5.5). Like stanines, individual sten scores are demarcated by half standard deviations. Thus, a sten score of 5 includes all standard scores from -0.5 to zero and is centered at -0.25, and a sten score of 4 includes all standard scores from -1.0 to -0.5 and is centered at -0.75. A sten score of 1 includes all standard scores below -2.0. Sten scores of 6-10 "mirror" scores 5-1. The table below shows the standard scores that define stens and the percent of individuals drawn from a normal distribution that would receive each sten score.
The percentile given for each sten score is the percentile of the mid-point of its range of z-scores.
Sten scores (for the entire population of results) have a mean of 5.5 and a standard deviation of 2.
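Because the sten bands are fixed half-standard-deviation slices of a normal distribution, the band boundaries and population percentages referred to above can be computed directly from the standard-normal cumulative distribution function. The sketch below (plain Python, standard library only) does this; the printed percentages are computed here rather than quoted from the original table.

```python
# Share of a normal population falling in each sten band, from the
# standard-normal CDF. Bands are half a standard deviation wide and
# open-ended at stens 1 and 10.
from math import erf, sqrt, inf

def norm_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

edges = [-inf, -2.0, -1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5, 2.0, inf]

for sten in range(1, 11):
    lo, hi = edges[sten - 1], edges[sten]
    pct = 100.0 * (norm_cdf(hi) - norm_cdf(lo))
    print(f"sten {sten:>2}: z in ({lo}, {hi}]  ~ {pct:4.1f}%")
```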
Calculation of sten scores.
When the score distribution is approximately normally distributed, sten scores can be calculated by a linear transformation: (1) the scores are first standardized; (2) then multiplied by the desired standard deviation of 2; and finally, (3) the desired mean of 5.5 is added. The resulting decimal value may be used as-is or rounded to an integer.
For example, suppose that scale scores are found to have a mean of 23.5, a standard deviation of 4.2, and to be approximately normally distributed. Then sten scores for this scale can be calculated using the formula, formula_0. It is also usually necessary to truncate such scores, particularly if the scores are skewed.
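A minimal sketch of this calculation in Python, using the example figures above (mean 23.5, standard deviation 4.2); the result is rounded and then clamped to the 1-10 range as per the truncation remark, and the raw scores fed in are invented for illustration.

```python
# Linear sten transformation: standardize, rescale to sd 2, shift mean to 5.5,
# then round and clamp to the valid 1-10 range.
def sten_score(raw, mean=23.5, sd=4.2):
    z = (raw - mean) / sd          # 1) standardize
    sten = z * 2 + 5.5             # 2) rescale and 3) shift
    return min(10, max(1, round(sten)))

for raw in (12, 20, 23.5, 27, 35):   # illustrative raw scores
    print(f"raw score {raw:>5} -> sten {sten_score(raw)}")
```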
An alternative method of calculation requires that the scale developer prepare a table to convert raw scores to sten scores by apportioning percentages according to the distribution shown in the table. For example, if the scale developer observes that raw scores 0-3 comprise 2% of the population, then these raw scores will be converted to a sten score of 1 and a raw score of 4 (and possibly 5, etc.) will be converted to a sten score of 2. This procedure is a non-linear transformation that will normalize the sten scores and usually the resulting stens will only approximate the percentages shown in the table. The 16PF Questionnaire uses this scoring method. | [
{
"math_id": 0,
"text": "\\frac {(s - 23.5)}{4.2} 2 + 5.5"
}
]
| https://en.wikipedia.org/wiki?curid=13727384 |