id | title | text | formulas | url
---|---|---|---|---
105956 | Palindromic number | Number that remains the same when its digits are reversed
A palindromic number (also known as a numeral palindrome or a numeric palindrome) is a number (such as 16461) that remains the same when its digits are reversed. In other words, it has reflectional symmetry across a vertical axis. The term "palindromic" is derived from palindrome, which refers to a word (such as "rotor" or "racecar") whose spelling is unchanged when its letters are reversed. The first 30 palindromic numbers (in decimal) are:
0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 11, 22, 33, 44, 55, 66, 77, 88, 99, 101, 111, 121, 131, 141, 151, 161, 171, 181, 191, 202, ... (sequence in the OEIS).
Palindromic numbers receive most attention in the realm of recreational mathematics. A typical problem asks for numbers that possess a certain property "and" are palindromic. For instance:
the palindromic primes are 2, 3, 5, 7, 11, 101, 131, 151, ...
the palindromic square numbers are 0, 1, 4, 9, 121, 484, 676, 10201, 12321, ...
It is obvious that in any base there are infinitely many palindromic numbers, since in any base the infinite sequence of numbers written (in that base) as 101, 1001, 10001, 100001, etc. consists solely of palindromic numbers.
Formal definition.
Although palindromic numbers are most often considered in the decimal system, the concept of palindromicity can be applied to the natural numbers in any numeral system. Consider a number "n" > 0 in base "b" ≥ 2, where it is written in standard notation with "k"+1 digits "a""i" as:
formula_0
with, as usual, 0 ≤ "a""i" < "b" for all "i" and "a""k" ≠ 0. Then "n" is palindromic if and only if "a""i" = "a""k"−"i" for all "i". Zero is written 0 in any base and is also palindromic by definition.
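This definition can be checked directly by computing the base-"b" digits of a number and comparing them with their reversal. The following Python sketch does exactly that (the function names are illustrative, not from any standard library):

```python
def digits(n, b):
    """Digits a_0, ..., a_k of n >= 0 in base b >= 2, least significant first."""
    if n == 0:
        return [0]
    ds = []
    while n > 0:
        n, r = divmod(n, b)
        ds.append(r)
    return ds

def is_palindromic(n, b=10):
    """True iff a_i = a_(k-i) for all i, i.e. the base-b digit string equals its reverse."""
    ds = digits(n, b)
    return ds == ds[::-1]

# Reproduces the first 30 decimal palindromes listed above:
print([n for n in range(300) if is_palindromic(n)][:30])
```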
Decimal palindromic numbers.
All numbers with one digit are palindromic, so in base 10 there are ten palindromic numbers with one digit:
{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}.
There are 9 palindromic numbers with two digits:
{11, 22, 33, 44, 55, 66, 77, 88, 99}.
All palindromic numbers with an even number of digits are divisible by 11.
There are 90 palindromic numbers with three digits (Using the rule of product: 9 choices for the first digit - which determines the third digit as well - multiplied by 10 choices for the second digit):
{101, 111, 121, 131, 141, 151, 161, 171, 181, 191, ..., 909, 919, 929, 939, 949, 959, 969, 979, 989, 999}
There are likewise 90 palindromic numbers with four digits (again, 9 choices for the first digit multiplied by ten choices for the second digit. The other two digits are determined by the choice of the first two):
{1001, 1111, 1221, 1331, 1441, 1551, 1661, 1771, 1881, 1991, ..., 9009, 9119, 9229, 9339, 9449, 9559, 9669, 9779, 9889, 9999},
so there are 199 palindromic numbers smaller than 10^4.
There are 1099 palindromic numbers smaller than 10^5, and for higher powers 10^n we have: 1999, 10999, 19999, 109999, 199999, 1099999, ... (sequence in the OEIS). The number of palindromic numbers which have some other property is listed below:
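These counts can be reproduced by brute force for small exponents; a short illustrative check in Python:

```python
def is_palindromic(n):
    s = str(n)
    return s == s[::-1]

# Number of palindromic numbers below 10^k, for k = 1, ..., 6:
for k in range(1, 7):
    print(k, sum(1 for n in range(10 ** k) if is_palindromic(n)))
# prints 10, 19, 109, 199, 1099, 1999 -- matching the counts above
```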
Perfect powers.
There are many palindromic perfect powers "n""k", where "n" is a natural number and "k" is 2, 3 or 4.
The first nine terms of the sequence 1^2, 11^2, 111^2, 1111^2, ... form the palindromes 1, 121, 12321, 1234321, ... (sequence in the OEIS)
The only known non-palindromic number whose cube is a palindrome is 2201, and it is conjectured that the fourth root of every palindromic fourth power is a palindrome of the form 100000...000001 (that is, 10^n + 1).
Gustavus Simmons conjectured there are no palindromes of form "n""k" for "k" > 4 (and "n" > 1).
Other bases.
Palindromic numbers can be considered in numeral systems other than decimal. For example, the binary palindromic numbers are those with the binary representations:
0, 1, 11, 101, 111, 1001, 1111, 10001, 10101, 11011, 11111, 100001, ... (sequence in the OEIS)
or in decimal:
0, 1, 3, 5, 7, 9, 15, 17, 21, 27, 31, 33, ... (sequence in the OEIS)
The Fermat primes and the Mersenne primes form a subset of the binary palindromic primes.
Any number formula_1 is palindromic in all bases formula_2 with formula_3 (trivially so, because formula_1 is then a single-digit number), and also in base formula_4 (because formula_1 is then formula_5). Even excluding cases where the number is smaller than the base, most numbers are palindromic in more than one base. For example, formula_6, formula_7. A number formula_1 is never palindromic in base formula_2 if formula_8. Moreover, a prime number formula_9 is never palindromic in base formula_2 if formula_10.
A number that is non-palindromic in all bases "b" in the range 2 ≤ "b" ≤ "n" − 2 can be called a "strictly non-palindromic number". For example, the number 6 is written as "110" in base 2, "20" in base 3, and "12" in base 4, none of which are palindromes. All strictly non-palindromic numbers larger than 6 are prime. Indeed, if formula_11 is composite, then either formula_12 for some formula_13, in which case "n" is the palindrome "aa" in base formula_14, or else it is a perfect square formula_15, in which case "n" is the palindrome "121" in base formula_16 (except for the special case of formula_17).
The first few strictly non-palindromic numbers (sequence in the OEIS) are:
0, 1, 2, 3, 4, 6, 11, 19, 47, 53, 79, 103, 137, 139, 149, 163, 167, 179, 223, 263, 269, 283, 293, 311, 317, 347, 359, 367, 389, 439, 491, 563, 569, 593, 607, 659, 739, 827, 853, 877, 977, 983, 997, ...
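The definition of strict non-palindromicity translates directly into a brute-force test; a minimal Python sketch that reproduces the start of the sequence above:

```python
def digits(n, b):
    ds = []
    while n > 0:
        n, r = divmod(n, b)
        ds.append(r)
    return ds or [0]

def strictly_non_palindromic(n):
    """True iff n is non-palindromic in every base b with 2 <= b <= n - 2
    (vacuously true for n <= 3, where the range of bases is empty)."""
    for b in range(2, n - 1):
        ds = digits(n, b)
        if ds == ds[::-1]:
            return False
    return True

print([n for n in range(100) if strictly_non_palindromic(n)])
# [0, 1, 2, 3, 4, 6, 11, 19, 47, 53, 79]
```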
Antipalindromic numbers.
If the digits of a natural number don't only have to be reversed in order, but also subtracted from formula_14 to yield the original sequence again, then the number is said to be "antipalindromic". Formally, in the usual decomposition of a natural number into its digits formula_18 in base formula_2, a number is antipalindromic iff formula_19.
Lychrel process.
Non-palindromic numbers can be paired with palindromic ones via a series of operations. First, the non-palindromic number is reversed and the result is added to the original number. If the result is not a palindromic number, this is repeated until a palindromic number is obtained. Such a number is called a "delayed palindrome".
It is not known whether all non-palindromic numbers can be paired with palindromic numbers in this way. While no number has been proven to be unpaired, many do not appear to be paired. For example, 196 does not yield a palindrome even after 700,000,000 iterations. Any number that never becomes palindromic in this way is known as a Lychrel number.
On January 24, 2017, the number 1,999,291,987,030,606,810 was published in OEIS as and announced "The Largest Known Most Delayed Palindrome". The sequence of 125 261-step most delayed palindromes preceding 1,999,291,987,030,606,810 and not reported before was published separately as .
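The reverse-and-add process itself is easy to experiment with; a small Python sketch (the iteration cap is an arbitrary choice, since Lychrel candidates such as 196 are only conjectured never to terminate):

```python
def reverse_and_add(n, max_iters=1000):
    """Repeatedly add n to its digit reversal; return (palindrome, steps),
    or None if no palindrome appears within max_iters iterations."""
    for step in range(1, max_iters + 1):
        n += int(str(n)[::-1])
        if str(n) == str(n)[::-1]:
            return n, step
    return None

print(reverse_and_add(59))    # (1111, 3): 59 -> 154 -> 605 -> 1111
print(reverse_and_add(196))   # None -- 196 is the smallest Lychrel candidate
```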
Sum of the reciprocals.
The sum of the reciprocals of the palindromic numbers is a convergent series, whose value is approximately 3.37028... (sequence in the OEIS).
Scheherazade numbers.
Scheherazade numbers are a set of numbers identified by Buckminster Fuller in his book "Synergetics". Fuller does not give a formal definition for this term, but from the examples he gives, it can be understood to be those numbers that contain a factor of the primorial "n"#, where "n" ≥ 13 is the largest prime factor in the number. Fuller called these numbers "Scheherazade numbers" because they must have a factor of 1001. Scheherazade is the storyteller of "One Thousand and One Nights", telling a new story each night to delay her execution. Since "n" must be at least 13, the primorial must be at least 1·2·3·5·7·11·13, and 7×11×13 = 1001. Fuller also refers to powers of 1001 as Scheherazade numbers. The smallest primorial that is a Scheherazade number is 13# = 30,030.
Fuller pointed out that some of these numbers are palindromic by groups of digits. For instance 17# = 510,510 shows a symmetry of groups of three digits. Fuller called such numbers "Scheherazade Sublimely Rememberable Comprehensive Dividends", or SSRCD numbers. Fuller notes that 1001 raised to a power not only produces "sublimely rememberable" numbers that are palindromic in three-digit groups, but also the values of the groups are the binomial coefficients. For instance,
formula_20
This sequence fails at (1001)^13 because a carry digit is taken into the group to its left in some groups. Fuller suggests writing these "spillovers" on a separate line. If this is done, using more spillover lines as necessary, the symmetry is preserved indefinitely to any power. Many other Scheherazade numbers show similar symmetries when expressed in this way.
Sums of palindromes.
In 2018, a paper was published demonstrating that every positive integer can be written as the sum of three palindromic numbers in every number system with base 5 or greater.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n=\\sum_{i=0}^ka_ib^i"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "b"
},
{
"math_id": 3,
"text": "b > n"
},
{
"math_id": 4,
"text": "n-1"
},
{
"math_id": 5,
"text": "11_{n-1}"
},
{
"math_id": 6,
"text": "1221_4=151_8=77_{14}=55_{20}=33_{34}=11_{104}"
},
{
"math_id": 7,
"text": "1991_{10}=7C7_{16}"
},
{
"math_id": 8,
"text": "n/2 \\le b \\le n-2"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "\\sqrt{p} \\le b \\le p-2"
},
{
"math_id": 11,
"text": "n > 6"
},
{
"math_id": 12,
"text": "n = ab"
},
{
"math_id": 13,
"text": "2 \\le a < b"
},
{
"math_id": 14,
"text": "b-1"
},
{
"math_id": 15,
"text": "n = a^2"
},
{
"math_id": 16,
"text": "a-1"
},
{
"math_id": 17,
"text": "n = 9 = 1001_2"
},
{
"math_id": 18,
"text": "a_i"
},
{
"math_id": 19,
"text": "a_i = b - 1 - a_{k-i}"
},
{
"math_id": 20,
"text": "(1001)^6 = 1,006,015,020,015,006,001 "
}
] | https://en.wikipedia.org/wiki?curid=105956 |
105967 | Cayley's theorem | Representation of groups by permutations
In group theory, Cayley's theorem, named in honour of Arthur Cayley, states that every group G is isomorphic to a subgroup of a symmetric group.
More specifically, G is isomorphic to a subgroup of the symmetric group formula_0 whose elements are the permutations of the underlying set of G.
Explicitly, for each formula_1, the left-multiplication map formula_2 sending each element "x" to "gx" is a permutation of "G", and the map formula_3 sending each element "g" to formula_4 is an injective group homomorphism, so it defines an isomorphism from "G" onto a subgroup of formula_0.
The homomorphism formula_3 can also be understood as arising from the left translation action of G on the underlying set G.
When G is finite, formula_0 is finite too. The proof of Cayley's theorem in this case shows that if G is a finite group of order n, then G is isomorphic to a subgroup of the standard symmetric group formula_5. But G might also be isomorphic to a subgroup of a smaller symmetric group, formula_6 for some formula_7; for instance, the order 6 group formula_8 is not only isomorphic to a subgroup of formula_9, but also (trivially) isomorphic to a subgroup of formula_10. The problem of finding the minimal-order symmetric group into which a given group G embeds is rather difficult.
Alperin and Bell note that "in general the fact that finite groups are imbedded in symmetric groups has not influenced the methods used to study finite groups".
When G is infinite, formula_0 is infinite, but Cayley's theorem still applies.
History.
While it seems elementary enough, at the time the modern definitions did not exist, and when Cayley introduced what are now called "groups" it was not immediately clear that this was equivalent to the previously known groups, which are now called "permutation groups". Cayley's theorem unifies the two.
Although Burnside attributes the theorem to Jordan, Eric Nummela nonetheless argues that the standard name—"Cayley's Theorem"—is in fact appropriate. Cayley, in his original 1854 paper, showed that the correspondence in the theorem is one-to-one, but he failed to explicitly show it was a homomorphism (and thus an embedding). However, Nummela notes that Cayley made this result known to the mathematical community at the time, thus predating Jordan by 16 years or so.
The theorem was later published by Walther Dyck in 1882 and is attributed to Dyck in the first edition of Burnside's book.
Background.
A "permutation" of a set A is a bijective function from A to A. The set of all permutations of A forms a group under function composition, called "the symmetric group on" A, and written as formula_11.
In particular, taking A to be the underlying set of a group G produces a symmetric group denoted formula_0.
Proof of the theorem.
If "g" is any element of a group "G" with operation ∗, consider the function "f""g" : "G" → "G", defined by "f""g"("x") = "g" ∗ "x". By the existence of inverses, this function has also an inverse, formula_12. So multiplication by "g" acts as a bijective function. Thus, "f""g" is a permutation of "G", and so is a member of Sym("G").
The set "K" = {"f""g" : "g" ∈ "G"} is a subgroup of Sym("G") that is isomorphic to "G". The fastest way to establish this is to consider the function "T" : "G" → Sym("G") with "T"("g") = "f""g" for every "g" in "G". "T" is a group homomorphism because (using · to denote composition in Sym("G")):
formula_13
for all "x" in "G", and hence:
formula_14
The homomorphism "T" is injective since "T"("g") = id"G" (the identity element of Sym("G")) implies that "g" ∗ "x" = "x" for all "x" in "G", and taking "x" to be the identity element "e" of "G" yields "g" = "g" ∗ "e" = "e", i.e. the kernel is trivial. Alternatively, "T" is also injective since "g" ∗ "x" = "g"′ ∗ "x" implies that "g" = "g"′ (because every group is cancellative).
Thus "G" is isomorphic to the image of "T", which is the subgroup "K".
"T" is sometimes called the "regular representation of" "G".
Alternative setting of proof.
An alternative setting uses the language of group actions. We consider the group formula_15 as acting on itself by left multiplication, i.e. formula_16, which has a permutation representation, say formula_17.
The representation is faithful if formula_18 is injective, that is, if the kernel of formula_18 is trivial. Suppose formula_19. Then, formula_20. Thus, formula_21 is trivial. The result follows by use of the first isomorphism theorem, from which we get formula_22.
Remarks on the regular group representation.
The identity element of the group corresponds to the identity permutation. All other group elements correspond to derangements: permutations that do not leave any element unchanged. Since this also applies to powers of a group element lower than the order of that element, each element corresponds to a permutation that consists of cycles all of the same length: this length is the order of that element. The elements in each cycle form a right coset of the subgroup generated by the element.
Examples of the regular group representation.
formula_23 with addition modulo 2; group element 0 corresponds to the identity permutation e, group element 1 to permutation (12) (see cycle notation). E.g. 0 + 1 = 1 and 1 + 1 = 0, so formula_24 and formula_25, as they would under a permutation.
formula_26 with addition modulo 3; group element 0 corresponds to the identity permutation e, group element 1 to permutation (123), and group element 2 to permutation (132). E.g. 1 + 1 = 2 corresponds to (123)(123) = (132).
formula_27 with addition modulo 4; the elements correspond to e, (1234), (13)(24), (1432).
The elements of Klein four-group {e, a, b, c} correspond to e, (12)(34), (13)(24), and (14)(23).
S3 (dihedral group of order 6) is the group of all permutations of 3 objects, but also a permutation group of the 6 group elements, and the latter is how it is realized by its regular representation.
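The formula_27 example above can be verified with a short script; this is a sketch using plain Python dictionaries as permutations, not a standard group-theory library:

```python
def left_translation(g, elements, op):
    """The permutation f_g : x -> g * x, represented as a dict."""
    return {x: op(g, x) for x in elements}

Z4 = [0, 1, 2, 3]
add = lambda a, b: (a + b) % 4

# The regular representation T : g -> f_g
T = {g: left_translation(g, Z4, add) for g in Z4}
print(T[1])   # {0: 1, 1: 2, 2: 3, 3: 0} -- a 4-cycle, matching (1234) above

# T is a homomorphism: f_g composed with f_h equals f_{g+h}
compose = lambda f, g: {x: f[g[x]] for x in Z4}
assert all(compose(T[g], T[h]) == T[add(g, h)] for g in Z4 for h in Z4)

# T is injective: distinct group elements give distinct permutations
assert len({tuple(sorted(p.items())) for p in T.values()}) == 4
```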
More general statement.
Theorem:
Let G be a group, and let H be a subgroup.
Let formula_28 be the set of left cosets of H in G.
Let N be the normal core of H in G, defined to be the intersection of the conjugates of H in G.
Then the quotient group formula_29 is isomorphic to a subgroup of formula_30.
The special case formula_31 is Cayley's original theorem.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{Sym}(G)"
},
{
"math_id": 1,
"text": "g \\in G"
},
{
"math_id": 2,
"text": "\\ell_g \\colon G \\to G"
},
{
"math_id": 3,
"text": "G \\to \\operatorname{Sym}(G)"
},
{
"math_id": 4,
"text": "\\ell_g"
},
{
"math_id": 5,
"text": "S_n"
},
{
"math_id": 6,
"text": "S_m"
},
{
"math_id": 7,
"text": "m<n"
},
{
"math_id": 8,
"text": "G=S_3"
},
{
"math_id": 9,
"text": "S_6"
},
{
"math_id": 10,
"text": "S_3"
},
{
"math_id": 11,
"text": "\\operatorname{Sym}(A)"
},
{
"math_id": 12,
"text": "f_{g^{-1}}"
},
{
"math_id": 13,
"text": " (f_g \\cdot f_h)(x) = f_g(f_h(x)) = f_g(h*x) = g*(h*x) = (g*h)*x = f_{g*h}(x) ,"
},
{
"math_id": 14,
"text": " T(g) \\cdot T(h) = f_g \\cdot f_h = f_{g*h} = T(g*h) ."
},
{
"math_id": 15,
"text": "G"
},
{
"math_id": 16,
"text": "g \\cdot x = gx"
},
{
"math_id": 17,
"text": "\\phi : G \\to \\mathrm{Sym}(G)"
},
{
"math_id": 18,
"text": "\\phi"
},
{
"math_id": 19,
"text": "g\\in\\ker\\phi"
},
{
"math_id": 20,
"text": "g = ge = g\\cdot e = e"
},
{
"math_id": 21,
"text": "\\ker\\phi"
},
{
"math_id": 22,
"text": "\\mathrm{Im}\\, \\phi \\cong G"
},
{
"math_id": 23,
"text": " \\mathbb Z_2 = \\{0,1\\} "
},
{
"math_id": 24,
"text": "1\\mapsto0"
},
{
"math_id": 25,
"text": "0\\mapsto1,"
},
{
"math_id": 26,
"text": " \\mathbb Z_3 = \\{0,1,2\\} "
},
{
"math_id": 27,
"text": " \\mathbb Z_4 = \\{0,1,2,3\\} "
},
{
"math_id": 28,
"text": "G/H"
},
{
"math_id": 29,
"text": "G/N"
},
{
"math_id": 30,
"text": "\\operatorname{Sym}(G/H)"
},
{
"math_id": 31,
"text": "H=1"
}
] | https://en.wikipedia.org/wiki?curid=105967 |
10597273 | Sara Seager | Canadian astronomer
Sara Seager (born 21 July 1971) is a Canadian-American astronomer and planetary scientist. She is a professor at the Massachusetts Institute of Technology and is known for her work on extrasolar planets and their atmospheres. She is the author of two textbooks on these topics, and has been recognized for her research by "Popular Science", "Discover Magazine", "Nature", and "TIME Magazine". Seager was awarded a MacArthur Fellowship in 2013 citing her theoretical work on detecting chemical signatures on exoplanet atmospheres and developing low-cost space observatories to observe planetary transits.
Background.
Seager was born in Toronto, Ontario, Canada, and is Jewish. Her father, David Seager, who lost his hair when he was 19 years old, was a pioneer and one of the world's leaders in hair transplantation and the founder of the Seager Hair Transplant Center in Toronto.
She earned her BSc degree in Mathematics and Physics from the University of Toronto in 1994, assisted by a NSERC University Undergraduate Student Research Award, and a PhD in astronomy from Harvard University in 1999. Her doctoral thesis developed theoretical models of atmospheres on extrasolar planets and was supervised by Dimitar Sasselov.
She held a postdoctoral research fellow position at the Institute for Advanced Study between 1999 and 2002 and a senior research staff member at the Carnegie Institution of Washington until 2006. She joined the Massachusetts Institute of Technology in January 2007 as an associate professor in both physics and planetary science, was granted tenure in July 2007, and was elevated to full professor in July 2010. She currently holds the "Class of 1941" chair.
She was elected a Legacy Fellow of the American Astronomical Society in 2020.
She is married to Charles Darrow and they have two sons from her first marriage. Her first spouse, Michael Wevrick, died of cancer in 2011.
Academic research.
Seager's research has been primarily directed toward the discovery and analysis of exoplanets; in particular her work is centered around ostensibly rare earth analogs, leading NASA to dub her "an astronomical Indiana Jones." Seager used the term "gas dwarf" for a high-mass super-Earth-type planet composed mainly of hydrogen and helium in an animation of one model of the exoplanet Gliese 581c. The term "gas dwarf" has also been used to refer to planets smaller than gas giants, with thick hydrogen and helium atmospheres. Together with Marc Kuchner, Seager had predicted the existence of carbon planets.
Seager has been the chair of the NASA Science and Technology Definition team for a proposed mission, "Starshade", to launch a free-flying occulting disk, used to block the light from a distant star in order for a telescope to be able to resolve the (much dimmer) light from an accompanying exoplanet located in the habitable zone of the star.
In the years since 2020, Seager has focused on work related to Venus, including the potential discovery of phosphine, a biosignature gas, in its upper atmosphere.
Seager equation.
Seager developed a parallel version of the Drake equation to estimate the number of habitable planets in the Galaxy. Instead of aliens with radio technology, Seager has revised the Drake equation to focus on simply the presence of any alien life detectable from Earth. The equation focuses on the search for planets with biosignature gases, gases produced by life that can accumulate in a planet atmosphere to levels that can be detected with remote space telescopes.
formula_0
where:
Asteria Spacecraft.
Seager was the principal investigator of the Asteria (Arcsecond Space Telescope Enabling Research in Astrophysics) spacecraft, a 6-U cubesat designed to do precision photometry to search for extrasolar planets, a collaborative project between MIT and NASA's Jet Propulsion Laboratory. ASTERIA was launched into low Earth orbit from the International Space Station on 20 November 2017, and successfully operated until its orbital decay on 24 April 2020.
Venus Life Finder.
In 2020, Seager led a team proposing a mission, "Venus Life Finder", a small spacecraft to investigate the possibility of life in the atmosphere of Venus. The mission will be a privately funded spacecraft to be launched by Rocket Lab on the Electron rocket, with a target launch date of January 2025.
Honors and awards.
Seager was awarded the 2012 Sackler Prize for "analysis of the atmospheres and internal compositions of extra-solar planets," the Helen B. Warner Prize from the American Astronomical Society in 2007 for developing "fundamental techniques for understanding, analyzing, and finding the atmospheres of extrasolar planets," and the 2004 Harvard Book Prize in Astronomy. She was appointed as a fellow to the American Association for the Advancement of Science in 2012 and elected to the Royal Astronomical Society of Canada as an honorary member in 2013. In September 2013 she became a MacArthur Fellow. She was elected to the American Philosophical Society in 2018. She was the Elizabeth R. Laird Lecturer at Memorial University of Newfoundland in 2018. On 19 August 2020 Seager appeared on the "Lex Fridman Podcast" (#116).
In 2020, she was appointed as an Officer of the Order of Canada. She won the 2020 "Los Angeles Times" Prize for Science and Technology for "The Smallest Lights in the Universe."
She was an honorary graduand at her Alma Mater, the University of Toronto Spring 2023 Convocation.
In 2024, Seager was awarded the Kavli Prize in Astrophysics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " N = N^*F_\\text{Q} F_\\text{HZ} F_\\text{O} F_\\text{L} F_\\text{S} "
}
] | https://en.wikipedia.org/wiki?curid=10597273 |
105979 | Exclusive or | True when either but not both inputs are true
Exclusive or, exclusive disjunction, exclusive alternation, logical non-equivalence, or logical inequality is a logical operator whose negation is the logical biconditional. With two inputs, XOR is true if and only if the inputs differ (one is true, one is false). With multiple inputs, XOR is true if and only if the number of true inputs is odd.
It gains the name "exclusive or" because the meaning of "or" is ambiguous when both operands are true. XOR "excludes" that case. Some informal ways of describing XOR are "one or the other but not both", "either one or the other", and "A or B, but not A and B".
It is symbolized by the prefix operator formula_0 and by the infix operators XOR (, , or ), EOR, EXOR, formula_1, formula_2, formula_3, ⩛, formula_4, formula_5, and formula_6.
Definition.
The truth table of formula_7 shows that it outputs true whenever the inputs differ:
false ⊕ false = false
false ⊕ true = true
true ⊕ false = true
true ⊕ true = false
Equivalences, elimination, and introduction.
Exclusive disjunction essentially means 'either one, but not both nor none'. In other words, the statement is true if and only if one is true and the other is false. For example, if two horses are racing, then one of the two will win the race, but not both of them. The exclusive disjunction formula_8, also denoted by formula_9 or formula_10, can be expressed in terms of the logical conjunction ("logical and", formula_11), the disjunction ("logical or", formula_12), and the negation (formula_13) as follows:
formula_14
The exclusive disjunction formula_8 can also be expressed in the following way:
formula_15
This representation of XOR may be found useful when constructing a circuit or network, because it has only one formula_13 operation and small number of formula_16 and formula_12 operations. A proof of this identity is given below:
formula_17
It is sometimes useful to write formula_8 in the following way:
formula_18
or:
formula_19
This equivalence can be established by applying De Morgan's laws twice to the fourth line of the above proof.
The exclusive or is also equivalent to the negation of a logical biconditional, by the rules of material implication (a material conditional is equivalent to the disjunction of the negation of its antecedent and its consequence) and material equivalence.
In summary, we have, in mathematical and in engineering notation:
formula_20
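These equivalences can be checked mechanically by enumerating the four truth assignments; a small Python sketch:

```python
from itertools import product

xor = lambda p, q: p != q

for p, q in product([False, True], repeat=2):
    assert xor(p, q) == ((p or q) and not (p and q))             # (p or q) and not (p and q)
    assert xor(p, q) == ((p and not q) or (not p and q))         # (p and not q) or (not p and q)
    assert xor(p, q) == ((p or q) and (not p or not q))          # (p or q) and (not p or not q)
    assert xor(p, q) == (not ((p and q) or (not p and not q)))   # not ((p and q) or (not p and not q))
print("all equivalences hold")
```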
Negation of the operator.
By applying the spirit of De Morgan's laws, we get:
formula_21
Relation to modern algebra.
Although the operators formula_11 (conjunction) and formula_12 (disjunction) are very useful in logic systems, they fail a more generalizable structure in the following way:
The systems formula_22 and formula_23 are monoids, but neither is a group. This unfortunately prevents the combination of these two systems into larger structures, such as a mathematical ring.
However, the system using exclusive or formula_24 "is" an abelian group. The combination of operators formula_11 and formula_4 over elements formula_25 produce the well-known two-element field formula_26. This field can represent any logic obtainable with the system formula_27 and has the added benefit of the arsenal of algebraic analysis tools for fields.
More specifically, if one associates formula_28 with 0 and formula_29 with 1, one can interpret the logical "AND" operation as multiplication on formula_26 and the "XOR" operation as addition on formula_26:
formula_30
The description of a Boolean function as a polynomial in formula_26, using this basis, is called the function's algebraic normal form.
Exclusive or in natural language.
Disjunction is often understood exclusively in natural languages. In English, the disjunctive word "or" is often understood exclusively, particularly when used with the particle "either". The English example below would normally be understood in conversation as implying that Mary is not both a singer and a poet.
1. Mary is a singer or a poet.
However, disjunction can also be understood inclusively, even in combination with "either". For instance, the first example below shows that "either" can be felicitously used in combination with an outright statement that both disjuncts are true. The second example shows that the exclusive inference vanishes away under downward entailing contexts. If disjunction were understood as exclusive in this example, it would leave open the possibility that some people ate both rice and beans.
2. Mary is either a singer or a poet or both.
3. Nobody ate either rice or beans.
Examples such as the above have motivated analyses of the exclusivity inference as pragmatic conversational implicatures calculated on the basis of an inclusive semantics. Implicatures are typically cancellable and do not arise in downward entailing contexts if their calculation depends on the Maxim of Quantity. However, some researchers have treated exclusivity as a bona fide semantic entailment and proposed nonclassical logics which would validate it.
This behavior of English "or" is also found in other languages. However, many languages have disjunctive constructions which are robustly exclusive such as French "soit... soit".
Alternative symbols.
The symbol used for exclusive disjunction varies from one field of application to the next, and even depends on the properties being emphasized in a given context of discussion. In addition to the abbreviation "XOR", any of the following symbols may also be seen:
Properties.
<templatestyles src="Glossary/styles.css" />
If using binary values for true (1) and false (0), then "exclusive or" works exactly like addition modulo 2.
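This correspondence, and the companion fact that conjunction behaves like multiplication modulo 2, can be spot-checked directly:

```python
for a in (0, 1):
    for b in (0, 1):
        assert a ^ b == (a + b) % 2   # XOR is addition modulo 2
        assert a & b == (a * b) % 2   # AND is multiplication modulo 2
```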
Computer science.
Bitwise operation.
Exclusive disjunction is often used for bitwise operations. Examples:
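The original examples are not reproduced here; as an illustration, Python's ^ operator applies exclusive disjunction bit by bit to integers:

```python
a, b = 0b1010, 0b0110
print(bin(a ^ b))        # 0b1100 -- bits differ in the two high positions
print(bin(a ^ a))        # 0b0    -- any value XORed with itself is zero
print(bin(a ^ 0))        # 0b1010 -- XOR with zero leaves the value unchanged
print(bin((a ^ b) ^ b))  # 0b1010 -- applying the same mask twice restores a
```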
As noted above, since exclusive disjunction is identical to addition modulo 2, the bitwise exclusive disjunction of two "n"-bit strings is identical to the standard vector of addition in the vector space formula_49.
In computer science, exclusive disjunction has several uses:
In logical circuits, a simple adder can be made with an XOR gate to add the numbers, and a series of AND, OR and NOT gates to create the carry output.
On some computer architectures, it is more efficient to store a zero in a register by XOR-ing the register with itself (bits XOR-ed with themselves are always zero) than to load and store the value zero.
In cryptography, XOR is sometimes used as a simple, self-inverse mixing function, such as in one-time pad or Feistel network systems. XOR is also heavily used in block ciphers such as AES (Rijndael) or Serpent and in block cipher implementation (CBC, CFB, OFB or CTR).
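A minimal sketch of the self-inverse mixing property behind the one-time pad (illustrative only, not a production cipher):

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each data byte with the corresponding key byte (key at least as long as data)."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(message))       # one-time pad: random key as long as the message
ciphertext = xor_bytes(message, key)
assert xor_bytes(ciphertext, key) == message  # XOR is self-inverse, so the same key decrypts
```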
In simple threshold-activated artificial neural networks, modeling the XOR function requires a second layer because XOR is not a linearly separable function.
Similarly, XOR can be used in generating entropy pools for hardware random number generators. The XOR operation preserves randomness, meaning that a random bit XORed with a non-random bit will result in a random bit. Multiple sources of potentially random data can be combined using XOR, and the unpredictability of the output is guaranteed to be at least as good as the best individual source.
XOR is used in RAID 3–6 for creating parity information. For example, RAID can "back up" bytes 10011100₂ and 01101100₂ from two (or more) hard drives by XORing the just mentioned bytes, resulting in 11110000₂, and writing it to another drive. Under this method, if any one of the three hard drives is lost, the lost byte can be re-created by XORing bytes from the remaining drives. For instance, if the drive containing 01101100₂ is lost, 10011100₂ and 11110000₂ can be XORed to recover the lost byte.
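The byte-level recovery just described, as a sketch:

```python
drive_a = 0b10011100
drive_b = 0b01101100
parity  = drive_a ^ drive_b        # 0b11110000, written to a third drive

# If the drive holding drive_b fails, its byte is recovered from the survivors:
recovered = drive_a ^ parity
assert recovered == drive_b
```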
XOR is also used to detect an overflow in the result of a signed binary arithmetic operation. If the leftmost retained bit of the result is not the same as the infinite number of digits to the left, then that means overflow occurred. XORing those two bits will give a "1" if there is an overflow.
XOR can be used to swap two numeric variables in computers, using the XOR swap algorithm; however this is regarded as more of a curiosity and not encouraged in practice.
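The swap itself takes three XOR assignments and no temporary variable (shown here only as the curiosity it is described as):

```python
x, y = 10, 25
x ^= y
y ^= x   # y now holds the original x
x ^= y   # x now holds the original y
assert (x, y) == (25, 10)
```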
XOR linked lists leverage XOR properties in order to save space to represent doubly linked list data structures.
In computer graphics, XOR-based drawing methods are often used to manage such items as bounding boxes and cursors on systems without alpha channels or overlay planes.
Encodings.
It is also called "not left-right arrow" (codice_0) in LaTeX-based markdown (formula_5). Apart from the ASCII codes, the operator is encoded at and , both in block mathematical operators. | [
{
"math_id": 0,
"text": "J"
},
{
"math_id": 1,
"text": "\\dot{\\vee}"
},
{
"math_id": 2,
"text": "\\overline{\\vee}"
},
{
"math_id": 3,
"text": "\\underline{\\vee}"
},
{
"math_id": 4,
"text": "\\oplus"
},
{
"math_id": 5,
"text": "\\nleftrightarrow"
},
{
"math_id": 6,
"text": "\\not\\equiv"
},
{
"math_id": 7,
"text": "A \\oplus B"
},
{
"math_id": 8,
"text": "p \\nleftrightarrow q"
},
{
"math_id": 9,
"text": "p\\operatorname{?}q"
},
{
"math_id": 10,
"text": "Jpq"
},
{
"math_id": 11,
"text": "\\wedge"
},
{
"math_id": 12,
"text": "\\lor"
},
{
"math_id": 13,
"text": "\\lnot"
},
{
"math_id": 14,
"text": "\\begin{matrix}\n p \\nleftrightarrow q & = & (p \\lor q) \\land \\lnot (p \\land q)\n\\end{matrix}"
},
{
"math_id": 15,
"text": "\\begin{matrix}\n p \\nleftrightarrow q & = & (p \\land \\lnot q) \\lor (\\lnot p \\land q)\n\\end{matrix}"
},
{
"math_id": 16,
"text": "\\land"
},
{
"math_id": 17,
"text": "\\begin{matrix}\n p \\nleftrightarrow q & = & (p \\land \\lnot q) & \\lor & (\\lnot p \\land q) \\\\[3pt]\n & = & ((p \\land \\lnot q) \\lor \\lnot p) & \\land & ((p \\land \\lnot q) \\lor q) \\\\[3pt]\n & = & ((p \\lor \\lnot p) \\land (\\lnot q \\lor \\lnot p)) & \\land & ((p \\lor q) \\land (\\lnot q \\lor q)) \\\\[3pt]\n & = & (\\lnot p \\lor \\lnot q) & \\land & (p \\lor q) \\\\[3pt]\n & = & \\lnot (p \\land q) & \\land & (p \\lor q)\n\\end{matrix}"
},
{
"math_id": 18,
"text": "\\begin{matrix}\n p \\nleftrightarrow q & = & \\lnot ((p \\land q) \\lor (\\lnot p \\land \\lnot q))\n\\end{matrix}"
},
{
"math_id": 19,
"text": "\\begin{matrix}\n p \\nleftrightarrow q & = & (p \\lor q) \\land (\\lnot p \\lor \\lnot q)\n\\end{matrix}"
},
{
"math_id": 20,
"text": "\\begin{matrix}\n p \\nleftrightarrow q & = & (p \\land \\lnot q) & \\lor & (\\lnot p \\land q) & = & p\\overline{q} + \\overline{p}q \\\\[3pt]\n & = & (p \\lor q) & \\land & (\\lnot p \\lor \\lnot q) & = & (p + q)(\\overline{p} + \\overline{q}) \\\\[3pt]\n & = & (p \\lor q) & \\land & \\lnot (p \\land q) & = & (p + q)(\\overline{pq})\n\\end{matrix}"
},
{
"math_id": 21,
"text": "\\lnot(p \\nleftrightarrow q) \\Leftrightarrow \\lnot p \\nleftrightarrow q \\Leftrightarrow p \\nleftrightarrow \\lnot q."
},
{
"math_id": 22,
"text": "(\\{T, F\\}, \\wedge)"
},
{
"math_id": 23,
"text": "(\\{T, F\\}, \\lor)"
},
{
"math_id": 24,
"text": "(\\{T, F\\}, \\oplus)"
},
{
"math_id": 25,
"text": "\\{T, F\\}"
},
{
"math_id": 26,
"text": "\\mathbb{F}_2"
},
{
"math_id": 27,
"text": "(\\land, \\lor)"
},
{
"math_id": 28,
"text": "F"
},
{
"math_id": 29,
"text": "T"
},
{
"math_id": 30,
"text": "\\begin{matrix}\n r = p \\land q & \\Leftrightarrow & r = p \\cdot q \\pmod 2 \\\\[3pt]\n r = p \\oplus q & \\Leftrightarrow & r = p + q \\pmod 2 \\\\\n\\end{matrix}"
},
{
"math_id": 31,
"text": "+"
},
{
"math_id": 32,
"text": "x,y"
},
{
"math_id": 33,
"text": "x+y"
},
{
"math_id": 34,
"text": "\\vee"
},
{
"math_id": 35,
"text": "A\\operatorname{\\overline{\\vee}}B"
},
{
"math_id": 36,
"text": "A"
},
{
"math_id": 37,
"text": "B"
},
{
"math_id": 38,
"text": "\\not="
},
{
"math_id": 39,
"text": "="
},
{
"math_id": 40,
"text": "\\circ"
},
{
"math_id": 41,
"text": "a\\circ b=a-b\\,\\cup\\,b-a"
},
{
"math_id": 42,
"text": "\\cup"
},
{
"math_id": 43,
"text": "\\vee\\vee"
},
{
"math_id": 44,
"text": "J\\phi\\psi"
},
{
"math_id": 45,
"text": "S"
},
{
"math_id": 46,
"text": "S\\ominus T"
},
{
"math_id": 47,
"text": "S\\mathop{\\triangledown} T"
},
{
"math_id": 48,
"text": "S\\mathop{\\vartriangle} T"
},
{
"math_id": 49,
"text": "(\\Z/2\\Z)^n"
},
{
"math_id": 50,
"text": "A \\oplus B \\oplus C \\oplus D \\oplus E"
}
] | https://en.wikipedia.org/wiki?curid=105979 |
1059994 | Sun Zhiwei | Chinese mathematician
Sun Zhiwei (, born October 16, 1965) is a Chinese mathematician, working primarily in number theory, combinatorics, and group theory. He is a professor at Nanjing University.
Biography.
Sun Zhiwei was born in Huai'an, Jiangsu. Sun and his twin brother Sun Zhihong proved a theorem about what are now known as the Wall–Sun–Sun primes.
Sun proved Sun's curious identity in 2002. In 2003, he presented a unified approach to three topics of Paul Erdős in combinatorial number theory: covering systems, restricted sumsets, and zero-sum problems or EGZ Theorem.
With Stephen Redmond, he posed the Redmond–Sun conjecture in 2006.
In 2013, he published a paper containing many conjectures on primes, one of which states that for any positive integer formula_0 there are consecutive primes formula_1 not exceeding formula_2 such that formula_3, where formula_4 denotes the formula_5-th prime.
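The conjecture can be checked by brute force for small "m"; a sketch assuming SymPy is available for prime generation:

```python
from sympy import primerange

def witness(m):
    """Search for consecutive primes p_k, ..., p_n (k < n), all <= 2m + 2.2*sqrt(m),
    with m = p_n - p_(n-1) + ... + (-1)^(n-k) * p_k; return (p_k, p_n) if found."""
    primes = list(primerange(2, int(2 * m + 2.2 * m ** 0.5) + 1))
    for n in range(len(primes)):
        total, sign = 0, 1
        for k in range(n, -1, -1):       # walk down from p_n with alternating signs
            total += sign * primes[k]
            sign = -sign
            if k < n and total == m:
                return primes[k], primes[n]
    return None

for m in range(1, 21):
    print(m, witness(m))                 # a witness pair is found for each of these m
```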
He is the Editor-in-Chief of the "Journal of Combinatorics and Number Theory".
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "p_k,\\ldots,p_n\\ (k<n)"
},
{
"math_id": 2,
"text": "2m+2.2\\sqrt{m}"
},
{
"math_id": 3,
"text": "m=p_n-p_{n-1}+...+(-1)^{n-k}p_k"
},
{
"math_id": 4,
"text": "p_j"
},
{
"math_id": 5,
"text": "j"
}
] | https://en.wikipedia.org/wiki?curid=1059994 |
1060236 | Simplicial homology | Concept in algebraic topology
In algebraic topology, simplicial homology is the sequence of homology groups of a simplicial complex. It formalizes the idea of the number of holes of a given dimension in the complex. This generalizes the number of connected components (the case of dimension 0).
Simplicial homology arose as a way to study topological spaces whose building blocks are "n"-simplices, the "n"-dimensional analogs of triangles. This includes a point (0-simplex), a line segment (1-simplex), a triangle (2-simplex) and a tetrahedron (3-simplex). By definition, such a space is homeomorphic to a simplicial complex (more precisely, the geometric realization of an abstract simplicial complex). Such a homeomorphism is referred to as a "triangulation" of the given space. Many topological spaces of interest can be triangulated, including every smooth manifold (Cairns and Whitehead).
Simplicial homology is defined by a simple recipe for any abstract simplicial complex. It is a remarkable fact that simplicial homology only depends on the associated topological space. As a result, it gives a computable way to distinguish one space from another.
Definitions.
Orientations.
A key concept in defining simplicial homology is the notion of an orientation of a simplex. By definition, an orientation of a "k"-simplex is given by an ordering of the vertices, written as ("v"0...,"v""k"), with the rule that two orderings define the same orientation if and only if they differ by an even permutation. Thus every simplex has exactly two orientations, and switching the order of two vertices changes an orientation to the opposite orientation. For example, choosing an orientation of a 1-simplex amounts to choosing one of the two possible directions, and choosing an orientation of a 2-simplex amounts to choosing what "counterclockwise" should mean.
Chains.
Let "S" be a simplicial complex. A simplicial "k"-chain is a finite formal sum
formula_0
where each "c""i" is an integer and σ"i" is an oriented "k"-simplex. In this definition, we declare that each oriented simplex is equal to the negative of the simplex with the opposite orientation. For example,
formula_1
The group of "k"-chains on "S" is written "Ck". This is a free abelian group which has a basis in one-to-one correspondence with the set of "k"-simplices in "S". To define a basis explicitly, one has to choose an orientation of each simplex. One standard way to do this is to choose an ordering of all the vertices and give each simplex the orientation corresponding to the induced ordering of its vertices.
Boundaries and cycles.
Let σ = ("v"0...,"v""k") be an oriented "k"-simplex, viewed as a basis element of "Ck". The boundary operator
formula_2
is the homomorphism defined by:
formula_3
where the oriented simplex
formula_4
is the "i"th face of "σ", obtained by deleting its "i"th vertex.
In "Ck", elements of the subgroup
formula_5
are referred to as cycles, and the subgroup
formula_6
is said to consist of boundaries.
Boundaries of boundaries.
Because formula_7, where formula_8 is the "second" face removed, formula_9. In geometric terms, this says that the boundary of a boundary of anything has no boundary. Equivalently, the abelian groups
formula_10
form a chain complex. Another equivalent statement is that "Bk" is contained in "Zk".
As an example, consider a tetrahedron with vertices oriented as formula_11. By definition, its boundary is given by
formula_12.
The boundary of the boundary is given by
formula_13
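This cancellation can be reproduced with a few lines of Python, representing a chain as a dictionary from ordered vertex tuples to integer coefficients (an illustrative sketch, not a library API):

```python
def boundary(chain):
    """Boundary of an integer chain {oriented simplex (tuple of vertices): coefficient}."""
    out = {}
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]          # delete the i-th vertex
            out[face] = out.get(face, 0) + (-1) ** i * coeff
    return {f: c for f, c in out.items() if c != 0}

tetrahedron = {("w", "x", "y", "z"): 1}
faces = boundary(tetrahedron)   # xyz - wyz + wxz - wxy, as above
print(faces)
print(boundary(faces))          # {} -- every edge coefficient cancels: the boundary of the boundary is zero
```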
Homology groups.
The "k"th homology group "Hk" of "S" is defined to be the quotient abelian group
formula_14
It follows that the homology group "H""k"("S") is nonzero exactly when there are "k"-cycles on "S" which are not boundaries. In a sense, this means that there are "k"-dimensional holes in the complex. For example, consider the complex "S" obtained by gluing two triangles (with no interior) along one edge, shown in the image. The edges of each triangle can be oriented so as to form a cycle. These two cycles are by construction not boundaries (since every 2-chain is zero). One can compute that the homology group "H"1("S") is isomorphic to Z^2, with a basis given by the two cycles mentioned. This makes precise the informal idea that "S" has two "1-dimensional holes".
Holes can be of different dimensions. The rank of the "k"th homology group, the number
formula_15
is called the "k"th Betti number of "S". It gives a measure of the number of "k"-dimensional holes in "S".
Example.
Homology groups of a triangle.
Let "S" be a triangle (without its interior), viewed as a simplicial complex. Thus "S" has three vertices, which we call "v"0, "v"1, "v"2, and three edges, which are 1-dimensional simplices. To compute the homology groups of "S", we start by describing the chain groups "C""k":
The boundary homomorphism ∂: "C"1 → "C"0 is given by:
formula_17
formula_18
formula_19
Since "C"−1 = 0, every 0-chain is a cycle (i.e. "Z"0 = "C"0); moreover, the group "B"0 of the 0-boundaries is generated by the three elements on the right of these equations, creating a two-dimensional subgroup of "C"0. So the 0th homology group "H"0("S") = "Z"0/"B"0 is isomorphic to Z, with a basis given (for example) by the image of the 0-cycle ("v"0). Indeed, all three vertices become equal in the quotient group; this expresses the fact that "S" is connected.
Next, the group of 1-cycles is the kernel of the homomorphism ∂ above, which is isomorphic to Z, with a basis given (for example) by ("v"0,"v"1) − ("v"0,"v"2) + ("v"1,"v"2). (A picture reveals that this 1-cycle goes around the triangle in one of the two possible directions.) Since "C"2 = 0, the group of 1-boundaries is zero, and so the 1st homology group "H"1("S") is isomorphic to Z/0 ≅ Z. This makes precise the idea that the triangle has one 1-dimensional hole.
Next, since the triangle has no 2-simplices, "C"2 = 0 (the trivial group), so there are no 2-cycles and the 2nd homology group "H"2("S") is zero. The same is true for "H""i"("S") for all "i" not equal to 0 or 1. Therefore, the homological connectivity of the triangle is 0 (it is the largest "k" for which the reduced homology groups up to "k" are trivial).
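Over the rational numbers, these conclusions can be recovered numerically from the ranks of the boundary matrices, since the "k"th Betti number equals dim "C""k" − rank ∂"k" − rank ∂"k"+1. A sketch for the hollow triangle (this computes Betti numbers over Q, so it does not detect torsion):

```python
import numpy as np

# Rows = vertices v0, v1, v2; columns = edges (v0,v1), (v0,v2), (v1,v2).
# Each column is the boundary of an edge, e.g. d(v0,v1) = v1 - v0.
d1 = np.array([
    [-1, -1,  0],   # v0
    [ 1,  0, -1],   # v1
    [ 0,  1,  1],   # v2
])

rank_d1 = np.linalg.matrix_rank(d1)   # = 2
beta_0 = 3 - 0 - rank_d1              # dim C_0 = 3, d_0 = 0  ->  1 connected component
beta_1 = 3 - rank_d1 - 0              # dim C_1 = 3, d_2 = 0  ->  1 one-dimensional hole
print(beta_0, beta_1)                 # 1 1
```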
Homology groups of higher-dimensional simplices.
Let "S" be a tetrahedron (without its interior), viewed as a simplicial complex. Thus "S" has four 0-dimensional vertices, six 1-dimensional edges, and four 2-dimensional faces. The construction of the homology groups of a tetrahedron is described in detail here. It turns out that "H"0("S") is isomorphic to Z, "H"2("S") is isomorphic to Z too, and all other groups are trivial. Therefore, the homological connectivity of the tetrahedron is 0.
If the tetrahedron contains its interior, then "H"2("S") is trivial too.
In general, if "S" is a "d"-dimensional simplex, the following holds:
Simplicial maps.
Let "S" and "T" be simplicial complexes. A simplicial map "f" from "S" to "T" is a function from the vertex set of "S" to the vertex set of "T" such that the image of each simplex in "S" (viewed as a set of vertices) is a simplex in "T". A simplicial map "f": "S" → "T" determines a homomorphism of homology groups "H""k"("S") → "H""k"("T") for each integer "k". This is the homomorphism associated to a chain map from the chain complex of "S" to the chain complex of "T". Explicitly, this chain map is given on "k"-chains by
formula_20
if "f"("v"0), ..., "f"("v""k") are all distinct, and otherwise "f"(("v"0, ..., "v""k")) = 0.
This construction makes simplicial homology a functor from simplicial complexes to abelian groups. This is essential to applications of the theory, including the Brouwer fixed point theorem and the topological invariance of simplicial homology.
Related homologies.
Singular homology is a related theory that is better adapted to theory rather than computation. Singular homology is defined for all topological spaces and depends only on the topology, not any triangulation; and it agrees with simplicial homology for spaces which can be triangulated. Nonetheless, because it is possible to compute the simplicial homology of a simplicial complex automatically and efficiently, simplicial homology has become important for application to real-life situations, such as image analysis, medical imaging, and data analysis in general.
Another related theory is Cellular homology.
Applications.
A standard scenario in many computer applications is a collection of points (measurements, dark pixels in a bit map, etc.) in which one wishes to find a topological feature. Homology can serve as a qualitative tool to search for such a feature, since it is readily computable from combinatorial data such as a simplicial complex. However, the data points have to first be triangulated, meaning one replaces the data with a simplicial complex approximation. Computation of persistent homology involves analysis of homology at different resolutions, registering homology classes (holes) that persist as the resolution is changed. Such features can be used to detect structures of molecules, tumors in X-rays, and cluster structures in complex data.
More generally, simplicial homology plays a central role in topological data analysis, a technique in the field of data mining.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum_{i=1}^N c_i \\sigma_i, \\,"
},
{
"math_id": 1,
"text": " (v_0,v_1) = -(v_1,v_0)."
},
{
"math_id": 2,
"text": "\\partial_k: C_k \\rightarrow C_{k-1}"
},
{
"math_id": 3,
"text": "\\partial_k(\\sigma)=\\sum_{i=0}^k (-1)^i (v_0 , \\dots , \\widehat{v_i} , \\dots ,v_k),"
},
{
"math_id": 4,
"text": "(v_0 , \\dots , \\widehat{v_i} , \\dots ,v_k)"
},
{
"math_id": 5,
"text": "Z_k := \\ker \\partial_k"
},
{
"math_id": 6,
"text": "B_k := \\operatorname{im} \\partial_{k+1}"
},
{
"math_id": 7,
"text": "(-1)^{i+j-1}(v_0 , \\dots , \\widehat{v_i} , \\dots , \\widehat\\widehat{v_j} ,\\dots , v_k) = - (-1)^{i+j}(v_0 , \\dots , \\widehat\\widehat{v_i} , \\dots , \\widehat{v_j} ,\\dots , v_k)"
},
{
"math_id": 8,
"text": "\\widehat\\widehat{v_x}"
},
{
"math_id": 9,
"text": "\\partial^2 = 0"
},
{
"math_id": 10,
"text": "(C_k, \\partial_k)"
},
{
"math_id": 11,
"text": "w,x,y,z"
},
{
"math_id": 12,
"text": "xyz - wyz + wxz - wxy"
},
{
"math_id": 13,
"text": "(yz-xz+xy)-(yz-wz+wy)+(xz-wz+wx)-(xy-wy+wx) = 0 "
},
{
"math_id": 14,
"text": "H_k(S) = Z_k/B_k\\, ."
},
{
"math_id": 15,
"text": "\\beta_k = \\operatorname{rank} (H_k(S))\\,"
},
{
"math_id": 16,
"text": "(v_0, v_1, v_2)"
},
{
"math_id": 17,
"text": " \\partial(v_0,v_1) = (v_1)-(v_0)"
},
{
"math_id": 18,
"text": " \\partial(v_0,v_2) = (v_2)-(v_0)"
},
{
"math_id": 19,
"text": " \\partial(v_1,v_2) = (v_2)-(v_1)"
},
{
"math_id": 20,
"text": "f((v_0, \\ldots, v_k)) = (f(v_0),\\ldots,f(v_k))"
}
] | https://en.wikipedia.org/wiki?curid=1060236 |
10603 | Field (mathematics) | Algebraic structure with addition, multiplication, and division
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics.
The best known fields are the field of rational numbers, the field of real numbers and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and "p"-adic fields are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements.
The theory of fields proves that angle trisection and squaring the circle cannot be done with a compass and straightedge. Galois theory, devoted to understanding the symmetries of field extensions, provides an elegant proof of the Abel-Ruffini theorem that general quintic equations cannot be solved in radicals.
Fields serve as foundational notions in several mathematical domains. This includes different branches of mathematical analysis, which are based on fields with additional structure. Basic theorems in analysis hinge on the structural properties of the field of real numbers. Most importantly for algebraic purposes, any field may be used as the scalars for a vector space, which is the standard general context for linear algebra. Number fields, the siblings of the field of rational numbers, are studied in depth in number theory. Function fields can help describe properties of geometric objects.
Definition.
Informally, a field is a set, along with two operations defined on that set: an addition operation written as "a" + "b", and a multiplication operation written as "a" ⋅ "b", both of which behave similarly as they behave for rational numbers and real numbers, including the existence of an additive inverse −"a" for all elements a, and of a multiplicative inverse "b"−1 for every nonzero element b. This allows one to also consider the so-called "inverse" operations of subtraction, "a" − "b", and division, "a" / "b", by defining:
"a" − "b" := "a" + (−"b"),
"a" / "b" := "a" ⋅ "b"−1.
Classic definition.
Formally, a field is a set "F" together with two binary operations on F called "addition" and "multiplication". A binary operation on F is a mapping "F" × "F" → "F", that is, a correspondence that associates with each ordered pair of elements of F a uniquely determined element of F. The result of the addition of "a" and "b" is called the sum of "a" and "b", and is denoted "a" + "b". Similarly, the result of the multiplication of "a" and "b" is called the product of "a" and "b", and is denoted "ab" or "a" ⋅ "b". These operations are required to satisfy the following properties, referred to as "field axioms" (in these axioms, a, b, and c are arbitrary elements of the field F):
Associativity of addition and multiplication: "a" + ("b" + "c") = ("a" + "b") + "c", and "a" ⋅ ("b" ⋅ "c") = ("a" ⋅ "b") ⋅ "c".
Commutativity of addition and multiplication: "a" + "b" = "b" + "a", and "a" ⋅ "b" = "b" ⋅ "a".
Additive and multiplicative identity: there exist two distinct elements 0 and 1 in "F" such that "a" + 0 = "a" and "a" ⋅ 1 = "a".
Additive inverses: for every "a" in "F", there exists an element in "F", denoted −"a", called the additive inverse of "a", such that "a" + (−"a") = 0.
Multiplicative inverses: for every "a" ≠ 0 in "F", there exists an element in "F", denoted "a"−1 or 1/"a", called the multiplicative inverse of "a", such that "a" ⋅ "a"−1 = 1.
Distributivity of multiplication over addition: "a" ⋅ ("b" + "c") = ("a" ⋅ "b") + ("a" ⋅ "c").
An equivalent, and more succinct, definition is: a field has two commutative operations, called addition and multiplication; it is a group under addition with 0 as the additive identity; the nonzero elements are a group under multiplication with 1 as the multiplicative identity; and multiplication distributes over addition.
Even more succinctly: a field is a commutative ring where 0 ≠ 1 and all nonzero elements are invertible under multiplication.
Alternative definition.
Fields can also be defined in different, but equivalent ways. One can alternatively define a field by four binary operations (addition, subtraction, multiplication, and division) and their required properties. Division by zero is, by definition, excluded. In order to avoid existential quantifiers, fields can be defined by two binary operations (addition and multiplication), two unary operations (yielding the additive and multiplicative inverses respectively), and two nullary operations (the constants 0 and 1). These operations are then subject to the conditions above. Avoiding existential quantifiers is important in constructive mathematics and computing. One may equivalently define a field by the same two binary operations, one unary operation (the multiplicative inverse), and two (not necessarily distinct) constants 1 and −1, since 0 = 1 + (−1) and −"a" = (−1)"a".
Examples.
Rational numbers.
Rational numbers have been widely used a long time before the elaboration of the concept of field.
They are numbers that can be written as fractions
"a"/"b", where "a" and "b" are integers, and "b" ≠ 0. The additive inverse of such a fraction is −"a"/"b", and the multiplicative inverse (provided that "a" ≠ 0) is "b"/"a", which can be seen as follows:
formula_0
The abstractly required field axioms reduce to standard properties of rational numbers. For example, the law of distributivity can be proven as follows:
formula_1
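Python's fractions.Fraction implements exact rational arithmetic, so individual instances of these axioms can be spot-checked (a sketch, not a proof):

```python
from fractions import Fraction

a, b, c = Fraction(2, 3), Fraction(-7, 5), Fraction(1, 4)

assert a * (b + c) == a * b + a * c   # distributivity
assert a + (-a) == 0                  # additive inverse
assert b * b ** -1 == 1               # multiplicative inverse of a nonzero element
assert a / b == a * b ** -1           # division is multiplication by the inverse
```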
Real and complex numbers.
The real numbers R, with the usual operations of addition and multiplication, also form a field. The complex numbers C consist of expressions
"a" + "bi", with "a", "b" real,
where "i" is the imaginary unit, i.e., a (non-real) number satisfying "i"2 = −1.
Addition and multiplication of real numbers are defined in such a way that expressions of this type satisfy all field axioms and thus hold for C. For example, the distributive law enforces
("a" + "bi")("c" + "di") = "ac" + "bci" + "adi" + "bdi"2 = ("ac" − "bd") + ("bc" + "ad")"i".
It is immediate that this is again an expression of the above type, and so the complex numbers form a field. Complex numbers can be geometrically represented as points in the plane, with Cartesian coordinates given by the real numbers of their describing expression, or as the arrows from the origin to these points, specified by their length and an angle enclosed with some distinct direction. Addition then corresponds to combining the arrows to the intuitive parallelogram (adding the Cartesian coordinates), and the multiplication is – less intuitively – combining rotating and scaling of the arrows (adding the angles and multiplying the lengths). The fields of real and complex numbers are used throughout mathematics, physics, engineering, statistics, and many other scientific disciplines.
Constructible numbers.
In antiquity, several geometric problems concerned the (in)feasibility of constructing certain numbers with compass and straightedge. For example, it was unknown to the Greeks that it is, in general, impossible to trisect a given angle in this way. These problems can be settled using the field of constructible numbers. Real constructible numbers are, by definition, lengths of line segments that can be constructed from the points 0 and 1 in finitely many steps using only compass and straightedge. These numbers, endowed with the field operations of real numbers, restricted to the constructible numbers, form a field, which properly includes the field Q of rational numbers. The illustration shows the construction of square roots of constructible numbers, not necessarily contained within Q. Using the labeling in the illustration, construct the segments "AB", "BD", and a semicircle over "AD" (center at the midpoint "C"), which intersects the perpendicular line through "B" in a point "F", at a distance of exactly formula_2 from "B" when "BD" has length one.
Not all real numbers are constructible. It can be shown that formula_3 is not a constructible number, which implies that it is impossible to construct with compass and straightedge the length of the side of a cube with volume 2, another problem posed by the ancient Greeks.
A field with four elements.
In addition to familiar number systems such as the rationals, there are other, less immediate examples of fields. The following example is a field consisting of four elements called "O", "I", "A", and "B". The notation is chosen such that "O" plays the role of the additive identity element (denoted 0 in the axioms above), and "I" is the multiplicative identity (denoted 1 in the axioms above). The field axioms can be verified by using some more field theory, or by direct computation. For example,
"A" ⋅ ("B" + "A") = "A" ⋅ "I" = "A", which equals "A" ⋅ "B" + "A" ⋅ "A" = "I" + "B" = "A", as required by the distributivity.
This field is called a finite field or Galois field with four elements, and is denoted F4 or GF(4). The subset consisting of "O" and "I" (highlighted in red in the tables at the right) is also a field, known as the "binary field" F2 or GF(2).
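One concrete model of F4 represents "O", "I", "A", "B" as the polynomials 0, 1, "x", "x" + 1 over the binary field F2, with multiplication reduced modulo "x"^2 + "x" + 1. The following Python sketch uses this model (the 2-bit integer encoding is an illustrative choice, not standard notation):

```python
# GF(4) elements as 2-bit integers: 0 = O, 1 = I, 2 = x (A), 3 = x + 1 (B).
def gf4_add(a, b):
    return a ^ b                      # coefficient-wise addition over F_2

def gf4_mul(a, b):
    prod = 0
    for i in range(2):                # carry-less polynomial multiplication
        if (b >> i) & 1:
            prod ^= a << i
    for i in (3, 2):                  # reduce modulo x^2 + x + 1 (so x^2 = x + 1)
        if (prod >> i) & 1:
            prod ^= 0b111 << (i - 2)
    return prod

A, B, I = 2, 3, 1
assert gf4_mul(A, gf4_add(B, A)) == gf4_mul(A, I) == A              # A · (B + A) = A · I = A
assert gf4_add(gf4_mul(A, B), gf4_mul(A, A)) == gf4_add(I, B) == A  # the distributivity check above
```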
Elementary notions.
In this section, "F" denotes an arbitrary field and "a" and "b" are arbitrary elements of "F".
Consequences of the definition.
One has "a" ⋅ 0 = 0 and −"a" = (−1) ⋅ "a". In particular, one may deduce the additive inverse of every element as soon as one knows −1.
If "ab" = 0 then "a" or "b" must be 0, since, if "a" ≠ 0, then
"b" = ("a"−1"a")"b" = "a"−1("ab") = "a"−1 ⋅ 0 = 0. This means that every field is an integral domain.
In addition, the following properties are true for any elements "a" and "b":
−0 = 0
1−1 = 1
(−(−"a")) = "a"
(−"a") ⋅ "b" = "a" ⋅ (−"b") = −("a" ⋅ "b")
("a"−1)−1 = "a" if "a" ≠ 0
Additive and multiplicative groups of a field.
The axioms of a field "F" imply that it is an abelian group under addition. This group is called the additive group of the field, and is sometimes denoted by ("F", +) when denoting it simply as "F" could be confusing.
Similarly, the "nonzero" elements of "F" form an abelian group under multiplication, called the multiplicative group, and denoted by formula_4 or just formula_5, or "F"×.
A field may thus be defined as a set "F" equipped with two operations denoted as an addition and a multiplication such that "F" is an abelian group under addition, formula_5 is an abelian group under multiplication (where 0 is the identity element of the addition), and multiplication is distributive over addition. Some elementary statements about fields can therefore be obtained by applying general facts of groups. For example, the additive and multiplicative inverses −"a" and "a"−1 are uniquely determined by "a".
The requirement 1 ≠ 0 is imposed by convention to exclude the trivial ring, which consists of a single element; this guides any choice of the axioms that define fields.
Every finite subgroup of the multiplicative group of a field is cyclic.
Characteristic.
In addition to the multiplication of two elements of "F", it is possible to define the product "n" ⋅ "a" of an arbitrary element "a" of "F" by a positive integer "n" to be the "n"-fold sum
"a" + "a" + ... + "a" (which is an element of "F".)
If there is no positive integer "n" such that
"n" ⋅ 1 = 0,
then "F" is said to have characteristic 0. For example, the field of rational numbers Q has characteristic 0 since no positive integer "n" is zero. Otherwise, if there "is" a positive integer "n" satisfying this equation, the smallest such positive integer can be shown to be a prime number. It is usually denoted by "p" and the field is said to have characteristic "p" then.
For example, the field F4 has characteristic 2 since (in the notation of the above addition table) "I" + "I" = O.
If "F" has characteristic "p", then "p" ⋅ "a" = 0 for all "a" in "F". This implies that
("a" + "b")"p" = "a""p" + "b""p",
since all other binomial coefficients appearing in the binomial formula are divisible by "p". Here, "a""p" := "a" ⋅ "a" ⋅ ⋯ ⋅ "a" ("p" factors) is the "p"th power, i.e., the "p"-fold product of the element "a". Therefore, the Frobenius map
"F" → "F" : "x" ↦ "x""p"
is compatible with the addition in "F" (and also with the multiplication), and is therefore a field homomorphism. The existence of this homomorphism makes fields in characteristic "p" quite different from fields of characteristic 0.
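As a small illustration, the following Python sketch (a brute-force check over a few small primes, chosen here purely as examples) verifies that the Frobenius map respects addition in F"p", and that the binomial coefficients responsible for the cross terms are indeed divisible by "p".

```python
from math import comb

# Check that (a + b)^p = a^p + b^p when all arithmetic is done modulo p,
# i.e. that the Frobenius map x -> x^p is additive on the prime field F_p.

def frobenius_is_additive(p):
    return all(
        pow(a + b, p, p) == (pow(a, p, p) + pow(b, p, p)) % p
        for a in range(p) for b in range(p)
    )

for p in (2, 3, 5, 7, 11):          # small primes only, for a brute-force check
    assert frobenius_is_additive(p)

# The underlying reason: the binomial coefficients C(p, k) for 0 < k < p
# are all divisible by p, so the cross terms of (a + b)^p vanish mod p.
p = 7
assert all(comb(p, k) % p == 0 for k in range(1, p))
```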
Subfields and prime fields.
A "subfield" "E" of a field "F" is a subset of "F" that is a field with respect to the field operations of "F". Equivalently "E" is a subset of "F" that contains 1, and is closed under addition, multiplication, additive inverse and multiplicative inverse of a nonzero element. This means that 1 ∊ "E", that for all "a", "b" ∊ "E" both "a" + "b" and "a" ⋅ "b" are in "E", and that for all "a" ≠ 0 in "E", both −"a" and 1/"a" are in "E".
Field homomorphisms are maps "φ": "E" → "F" between two fields such that "φ"("e"1 + "e"2) = "φ"("e"1) + "φ"("e"2), "φ"("e"1"e"2) = "φ"("e"1) "φ"("e"2), and "φ"(1"E") = 1"F", where "e"1 and "e"2 are arbitrary elements of "E". All field homomorphisms are injective. If "φ" is also surjective, it is called an isomorphism (or the fields "E" and "F" are called isomorphic).
A field is called a prime field if it has no proper (i.e., strictly smaller) subfields. Any field "F" contains a prime field. If the characteristic of "F" is "p" (a prime number), the prime field is isomorphic to the finite field F"p" introduced below. Otherwise the prime field is isomorphic to Q.
Finite fields.
"Finite fields" (also called "Galois fields") are fields with finitely many elements, whose number is also referred to as the order of the field. The above introductory example F4 is a field with four elements. Its subfield F2 is the smallest field, because by definition a field has at least two distinct elements, 0 and 1.
The simplest finite fields, with prime order, are most directly accessible using modular arithmetic. For a fixed positive integer "n", arithmetic "modulo "n"" means to work with the numbers
Z/"n"Z = {0, 1, ..., "n" − 1}.
The addition and multiplication on this set are done by performing the operation in question in the set Z of integers, dividing by "n" and taking the remainder as result. This construction yields a field precisely if "n" is a prime number. For example, taking the prime "n" = 2 results in the above-mentioned field F2. For "n" = 4 and more generally, for any composite number (i.e., any number "n" which can be expressed as a product "n" = "r" ⋅ "s" of two strictly smaller natural numbers), Z/"n"Z is not a field: the product of two non-zero elements is zero since "r" ⋅ "s" = 0 in Z/"n"Z, which, as was explained above, prevents Z/"n"Z from being a field. The field Z/"p"Z with "p" elements ("p" being prime) constructed in this way is usually denoted by F"p".
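The following Python sketch illustrates this dichotomy (the specific moduli 7 and 4 are arbitrary examples): for a prime modulus every nonzero residue has a multiplicative inverse, computed here with the extended Euclidean algorithm, while a composite modulus has zero divisors and therefore non-invertible elements.

```python
def inverse_mod(a, n):
    """Return the inverse of a modulo n, or None if it does not exist."""
    # extended Euclidean algorithm, tracking the coefficient of a
    old_r, r = a % n, n
    old_s, s = 1, 0
    while r != 0:
        q = old_r // r
        old_r, r = r, old_r - q * r
        old_s, s = s, old_s - q * s
    return old_s % n if old_r == 1 else None

p = 7                                   # a prime: Z/7Z = F_7 is a field
assert all(inverse_mod(a, p) is not None for a in range(1, p))

n = 4                                   # composite: Z/4Z is not a field
assert inverse_mod(2, n) is None        # 2 has no inverse ...
assert (2 * 2) % n == 0                 # ... because 2 is a zero divisor
```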
Every finite field "F" has "q" = "p""n" elements, where "p" is prime and "n" ≥ 1. This statement holds since "F" may be viewed as a vector space over its prime field. The dimension of this vector space is necessarily finite, say "n", which implies the asserted statement.
A field with "q" = "p""n" elements can be constructed as the splitting field of the polynomial
"f"("x") = "x""q" − "x".
Such a splitting field is an extension of F"p" in which the polynomial "f" has "q" zeros. This means "f" has as many zeros as possible since the degree of "f" is "q". For "q" = 22 = 4, it can be checked case by case using the above multiplication table that all four elements of F4 satisfy the equation "x"4 = "x", so they are zeros of "f". By contrast, in F2, "f" has only two zeros (namely 0 and 1), so "f" does not split into linear factors in this smaller field. Elaborating further on basic field-theoretic notions, it can be shown that two finite fields with the same order are isomorphic. It is thus customary to speak of "the" finite field with "q" elements, denoted by F"q" or GF("q").
History.
Historically, three algebraic disciplines led to the concept of a field: the question of solving polynomial equations, algebraic number theory, and algebraic geometry. A first step towards the notion of a field was made in 1770 by Joseph-Louis Lagrange, who observed that permuting the zeros "x"1, "x"2, "x"3 of a cubic polynomial in the expression
("x"1 + "ωx"2 + "ω"2"x"3)3
(with "ω" being a third root of unity) only yields two values. This way, Lagrange conceptually explained the classical solution method of Scipione del Ferro and François Viète, which proceeds by reducing a cubic equation for an unknown "x" to a quadratic equation for "x"3. Together with a similar observation for equations of degree 4, Lagrange thus linked what eventually became the concept of fields and the concept of groups. Vandermonde, also in 1770, and to a fuller extent, Carl Friedrich Gauss, in his "Disquisitiones Arithmeticae" (1801), studied the equation
"x"&hairsp;"p" = 1
for a prime "p" and, again using modern language, the resulting cyclic Galois group. Gauss deduced that a regular "p"-gon can be constructed if "p" = 22"k" + 1. Building on Lagrange's work, Paolo Ruffini claimed (1799) that quintic equations (polynomial equations of degree 5) cannot be solved algebraically; however, his arguments were flawed. These gaps were filled by Niels Henrik Abel in 1824. Évariste Galois, in 1832, devised necessary and sufficient criteria for a polynomial equation to be algebraically solvable, thus establishing in effect what is known as Galois theory today. Both Abel and Galois worked with what is today called an algebraic number field, but conceived neither an explicit notion of a field, nor of a group.
In 1871 Richard Dedekind introduced, for a set of real or complex numbers that is closed under the four arithmetic operations, the German word "Körper", which means "body" or "corpus" (to suggest an organically closed entity). The English term "field" was introduced by Eliakim Hastings Moore in 1893.
<templatestyles src="Template:Blockquote/styles.css" />By a field we will mean every infinite system of real or complex numbers so closed in itself and perfect that addition, subtraction, multiplication, and division of any two of these numbers again yields a number of the system.
In 1881 Leopold Kronecker defined what he called a "domain of rationality", which is a field of rational fractions in modern terms. Kronecker's notion did not cover the field of all algebraic numbers (which is a field in Dedekind's sense), but on the other hand was more abstract than Dedekind's in that it made no specific assumption on the nature of the elements of a field. Kronecker interpreted a field such as Q(π) abstractly as the rational function field Q("X"). Examples of transcendental numbers had been known since Joseph Liouville's work in 1844; Charles Hermite (1873) and Ferdinand von Lindemann (1882) later proved the transcendence of "e" and "π", respectively.
The first clear definition of an abstract field is due to Heinrich Martin Weber (1893). In particular, Weber's notion included the field F"p". Giuseppe Veronese (1891) studied the field of formal power series, which led Kurt Hensel to introduce the field of "p"-adic numbers. Ernst Steinitz (1910) synthesized the knowledge of abstract field theory accumulated so far. He axiomatically studied the properties of fields and defined many important field-theoretic concepts. The majority of the theorems mentioned in the sections Galois theory, Constructing fields and Elementary notions can be found in Steinitz's work. Emil Artin and Otto Schreier (1927) linked the notion of orderings in a field, and thus the area of analysis, to purely algebraic properties. Emil Artin redeveloped Galois theory from 1928 through 1942, eliminating the dependency on the primitive element theorem.
Constructing fields.
Constructing fields from rings.
A commutative ring is a set that is equipped with an addition and multiplication operation and satisfies all the axioms of a field, except for the existence of multiplicative inverses "a"−1. For example, the integers Z form a commutative ring, but not a field: the reciprocal of an integer "n" is not itself an integer, unless "n" = ±1.
In the hierarchy of algebraic structures fields can be characterized as the commutative rings "R" in which every nonzero element is a unit (which means that every nonzero element is invertible). Similarly, fields are the commutative rings with precisely two distinct ideals, (0) and "R". Fields are also precisely the commutative rings in which (0) is the only prime ideal.
Given a commutative ring "R", there are two ways to construct a field related to "R", i.e., two ways of modifying "R" such that all nonzero elements become invertible: forming the field of fractions, and forming residue fields. The field of fractions of Z is Q, the rationals, while the residue fields of Z are the finite fields F"p".
Field of fractions.
Given an integral domain "R", its field of fractions "Q"("R") is built with the fractions of two elements of "R" exactly as Q is constructed from the integers. More precisely, the elements of "Q"("R") are the fractions "a"/"b" where "a" and "b" are in "R", and "b" ≠ 0. Two fractions "a"/"b" and "c"/"d" are equal if and only if "ad" = "bc". The operation on the fractions work exactly as for rational numbers. For example,
formula_6
It is straightforward to show that, if the ring is an integral domain, the set of fractions forms a field.
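A minimal Python sketch of this construction, taking the integral domain "R" to be the integers for concreteness (the class name and method choices are illustrative only), represents a fraction by a pair (a, b) and uses the equality and arithmetic rules stated above:

```python
class Frac:
    """a/b with a, b in the integral domain R = Z (b nonzero)."""
    def __init__(self, a, b):
        if b == 0:
            raise ValueError("denominator must be nonzero")
        self.a, self.b = a, b

    def __eq__(self, other):                 # a/b = c/d  iff  ad = bc
        return self.a * other.b == self.b * other.a

    def __add__(self, other):                # a/b + c/d = (ad + bc)/(bd)
        return Frac(self.a * other.b + other.a * self.b, self.b * other.b)

    def __mul__(self, other):                # (a/b)(c/d) = (ac)/(bd)
        return Frac(self.a * other.a, self.b * other.b)

    def inverse(self):                       # (a/b)^(-1) = b/a, for a != 0
        if self.a == 0:
            raise ZeroDivisionError("0 has no multiplicative inverse")
        return Frac(self.b, self.a)

# 1/2 + 1/3 = 5/6, and every nonzero fraction is invertible:
assert Frac(1, 2) + Frac(1, 3) == Frac(5, 6)
assert Frac(3, 4) * Frac(3, 4).inverse() == Frac(1, 1)
```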
The field "F"("x") of the rational fractions over a field (or an integral domain) "F" is the field of fractions of the polynomial ring "F"["x"]. The field "F"(("x")) of Laurent series
formula_7
over a field "F" is the field of fractions of the ring "F""x" of formal power series (in which "k" ≥ 0). Since any Laurent series is a fraction of a power series divided by a power of "x" (as opposed to an arbitrary power series), the representation of fractions is less important in this situation, though.
Residue fields.
In addition to the field of fractions, which embeds "R" injectively into a field, a field can be obtained from a commutative ring "R" by means of a surjective map onto a field "F". Any field obtained in this way is a quotient "R" / "m", where "m" is a maximal ideal of "R". If "R" has only one maximal ideal "m", this field is called the residue field of "R".
The ideal generated by a single polynomial "f" in the polynomial ring "R" = "E"["X"] (over a field "E") is maximal if and only if "f" is irreducible over "E", i.e., if "f" cannot be expressed as the product of two polynomials in "E"["X"] of smaller degree. This yields a field
"F" = "E"["X"] / ("f"("X")).
This field "F" contains an element "x" (namely the residue class of "X") which satisfies the equation
"f"("x") = 0.
For example, C is obtained from R by adjoining the imaginary unit symbol i, which satisfies "f"("i") = 0, where "f"("X") = "X"2 + 1. Moreover, "f" is irreducible over R, which implies that the map that sends a polynomial "g"("X") ∊ R["X"] to "g"("i") yields an isomorphism
formula_8
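The following Python sketch (with hypothetical helper names, and polynomials stored simply as coefficient lists) illustrates arithmetic in such a quotient: elements are remainders after division by a monic "f", and with "f"("X") = "X"2 + 1 the residue class of "X" behaves exactly like the imaginary unit.

```python
def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists [c0, c1, ...]."""
    out = [0] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def poly_mod(p, f):
    """Remainder of p on division by the monic polynomial f."""
    p = list(p)
    while len(p) >= len(f):
        c = p[-1]                       # leading coefficient of p
        shift = len(p) - len(f)
        for i, fi in enumerate(f):
            p[shift + i] -= c * fi      # subtract c * X^shift * f
        while p and p[-1] == 0:
            p.pop()
    return p or [0]

f = [1, 0, 1]                           # f(X) = X^2 + 1, irreducible over R
x = [0, 1]                              # the residue class of X, i.e. "i"

assert poly_mod(poly_mul(x, x), f) == [-1]          # i^2 = -1
# (a + bX)(c + dX) reduces to (ac - bd) + (ad + bc)X, the familiar rule
# for multiplying complex numbers:
assert poly_mod(poly_mul([1, 2], [3, 4]), f) == [3 - 8, 4 + 6]
```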
Constructing fields within a bigger field.
Fields can be constructed inside a given bigger container field. Suppose given a field "E", and a field "F" containing "E" as a subfield. For any element "x" of "F", there is a smallest subfield of "F" containing "E" and "x", called the subfield of "F" generated by "x" and denoted "E"("x"). The passage from "E" to "E"("x") is referred to by "adjoining an element" to "E". More generally, for a subset "S" ⊂ "F", there is a minimal subfield of "F" containing "E" and "S", denoted by "E"("S").
The compositum of two subfields "E" and "E"′ of some field "F" is the smallest subfield of "F" containing both "E" and "E"′. The compositum can be used to construct the biggest subfield of "F" satisfying a certain property, for example the biggest subfield of "F", which is, in the language introduced below, algebraic over "E".
Field extensions.
The notion of a subfield "E" ⊂ "F" can also be regarded from the opposite point of view, by referring to "F" being a "field extension" (or just extension) of "E", denoted by
"F" / "E",
and read ""F" over "E"".
A basic datum of a field extension is its degree ["F" : "E"], i.e., the dimension of "F" as an "E"-vector space. For a tower of fields "E" ⊂ "F" ⊂ "G", the degrees satisfy the multiplicativity formula
["G" : "E"] = ["G" : "F"] ["F" : "E"].
Extensions whose degree is finite are referred to as finite extensions. The extensions C / R and F4 / F2 are of degree 2, whereas R / Q is an infinite extension.
Algebraic extensions.
A pivotal notion in the study of field extensions "F" / "E" is that of an algebraic element. An element "x" ∈ "F" is "algebraic" over "E" if it is a root of a polynomial with coefficients in "E", that is, if it satisfies a polynomial equation
"e""n"&hairsp;"x""n" + "e""n"−1"x""n"−1 + ⋯ + "e"1"x" + "e"0 = 0,
with "e""n", ..., "e"0 in E, and "e""n" ≠ 0.
For example, the imaginary unit "i" in C is algebraic over R, and even over Q, since it satisfies the equation
"i"2 + 1 = 0.
A field extension in which every element of "F" is algebraic over "E" is called an algebraic extension. Any finite extension is necessarily algebraic, as can be deduced from the above multiplicativity formula.
The subfield "E"("x") generated by an element "x", as above, is an algebraic extension of "E" if and only if "x" is an algebraic element. That is to say, if "x" is algebraic, all other elements of "E"("x") are necessarily algebraic as well. Moreover, the degree of the extension "E"("x") / "E", i.e., the dimension of "E"("x") as an "E"-vector space, equals the minimal degree "n" such that there is a polynomial equation involving "x", as above. If this degree is "n", then the elements of "E"("x") have the form
formula_9
For example, the field Q("i") of Gaussian rationals is the subfield of C consisting of all numbers of the form "a" + "bi" where both "a" and "b" are rational numbers: summands of the form "i"2 (and similarly for higher exponents) do not have to be considered here, since "a" + "bi" + "ci"2 can be simplified to "a" − "c" + "bi".
Transcendence bases.
The above-mentioned field of rational fractions "E"("X"), where "X" is an indeterminate, is not an algebraic extension of "E" since there is no polynomial equation with coefficients in "E" whose zero is "X". Elements, such as "X", which are not algebraic are called transcendental. Informally speaking, the indeterminate "X" and its powers do not interact with elements of "E". A similar construction can be carried out with a set of indeterminates, instead of just one.
Once again, the field extension "E"("x") / "E" discussed above is a key example: if "x" is not algebraic (i.e., "x" is not a root of a polynomial with coefficients in "E"), then "E"("x") is isomorphic to "E"("X"). This isomorphism is obtained by substituting "x" for "X" in rational fractions.
A subset "S" of a field "F" is a transcendence basis if it is algebraically independent (do not satisfy any polynomial relations) over "E" and if "F" is an algebraic extension of "E"("S"). Any field extension "F" / "E" has a transcendence basis. Thus, field extensions can be split into ones of the form "E"("S") / "E" (purely transcendental extensions) and algebraic extensions.
Closure operations.
A field is algebraically closed if it does not have any strictly bigger algebraic extensions or, equivalently, if any polynomial equation
"f""n"&hairsp;"x""n" + "f""n"−1"x""n"−1 + ⋯ + "f"1"x" + "f"0 = 0, with coefficients "f""n", ..., "f"0 ∈ "F", "n" > 0,
has a solution "x" ∊ "F". By the fundamental theorem of algebra, C is algebraically closed, i.e., "any" polynomial equation with complex coefficients has a complex solution. The rational and the real numbers are "not" algebraically closed since the equation
"x"2 + 1 = 0
does not have any rational or real solution. A field containing "F" is called an "algebraic closure" of "F" if it is algebraic over "F" (roughly speaking, not too big compared to "F") and is algebraically closed (big enough to contain solutions of all polynomial equations).
By the above, C is an algebraic closure of R. The situation that the algebraic closure is a finite extension of the field "F" is quite special: by the Artin–Schreier theorem, the degree of this extension is necessarily 2, and "F" is elementarily equivalent to R. Such fields are also known as real closed fields.
Any field "F" has an algebraic closure, which is moreover unique up to (non-unique) isomorphism. It is commonly referred to as "the" algebraic closure and denoted . For example, the algebraic closure of Q is called the field of algebraic numbers. The field is usually rather implicit since its construction requires the ultrafilter lemma, a set-theoretic axiom that is weaker than the axiom of choice. In this regard, the algebraic closure of F"q", is exceptionally simple. It is the union of the finite fields containing F"q" (the ones of order "q""n"). For any algebraically closed field "F" of characteristic 0, the algebraic closure of the field "F"(("t")) of Laurent series is the field of Puiseux series, obtained by adjoining roots of "t".
Fields with additional structure.
Since fields are ubiquitous in mathematics and beyond, several refinements of the concept have been adapted to the needs of particular mathematical areas.
Ordered fields.
A field "F" is called an "ordered field" if any two elements can be compared, so that "x" + "y" ≥ 0 and "xy" ≥ 0 whenever "x" ≥ 0 and "y" ≥ 0. For example, the real numbers form an ordered field, with the usual ordering ≥. The Artin–Schreier theorem states that a field can be ordered if and only if it is a formally real field, which means that any quadratic equation
formula_10
only has the solution "x"1 = "x"2 = ⋯ = "x""n" = 0. The set of all possible orders on a fixed field "F" is in bijection with the set of ring homomorphisms from the Witt ring W("F") of quadratic forms over "F", to Z.
An Archimedean field is an ordered field such that for each element there exists a finite expression
1 + 1 + ⋯ + 1
whose value is greater than that element, that is, there are no infinite elements. Equivalently, the field contains no infinitesimals (positive elements that are smaller than every positive rational number); or, equivalently again, the field is isomorphic to a subfield of R.
An ordered field is Dedekind-complete if all upper bounds, lower bounds (see "Dedekind cut") and limits that should exist do exist. More formally, every nonempty subset of "F" that is bounded above is required to have a least upper bound. Any complete field is necessarily Archimedean, since in any non-Archimedean field there is neither a greatest infinitesimal nor a least positive rational, whence the sequence 1/2, 1/3, 1/4, ..., every element of which is greater than every infinitesimal, has no limit.
Since every proper subfield of the reals also contains such gaps, R is the unique complete ordered field, up to isomorphism. Several foundational results in calculus follow directly from this characterization of the reals.
The hyperreals R* form an ordered field that is not Archimedean. It is an extension of the reals obtained by including infinite and infinitesimal numbers: the former are larger in absolute value than any real number, the latter smaller in absolute value than any positive real number. The hyperreals form the foundational basis of non-standard analysis.
Topological fields.
Another refinement of the notion of a field is a topological field, in which the set "F" is a topological space, such that all operations of the field (addition, multiplication, the maps "a" ↦ −"a" and "a" ↦ "a"−1) are continuous maps with respect to the topology of the space.
The topology of all the fields discussed below is induced from a metric, i.e., a function
"d" : "F" × "F" → R,
that measures a "distance" between any two elements of "F".
The completion of "F" is another field in which, informally speaking, the "gaps" in the original field "F" are filled, if there are any. For example, any irrational number "x", such as "x" = √2, is a "gap" in the rationals Q in the sense that it is a real number that can be approximated arbitrarily closely by rational numbers "p"/"q", in the sense that distance of "x" and "p"/"q" given by the absolute value is as small as desired.
Standard examples of this construction include the completion of Q with respect to the usual absolute value, which yields the field R of real numbers, and, for a prime "p", the completion of Q with respect to the "p"-adic absolute value, which yields the field Q"p" of "p"-adic numbers. In both cases, a zero sequence, i.e., a sequence whose limit (for "n" → ∞) is zero, is given for example by 1/"n" in the first case and by "p""n" in the second.
The field Q"p" is used in number theory and "p"-adic analysis. The algebraic closure carries a unique norm extending the one on Q"p", but is not complete. The completion of this algebraic closure, however, is algebraically closed. Because of its rough analogy to the complex numbers, it is sometimes called the field of complex "p"-adic numbers and is denoted by C"p".
Local fields.
The following topological fields are called "local fields": finite extensions of Q"p" (local fields of characteristic zero) and finite extensions of F"p"(("t")), the field of Laurent series over F"p" (local fields of characteristic "p").
These two types of local fields share some fundamental similarities. In this relation, the elements "p" ∈ Q"p" and "t" ∈ F"p"(("t")) (referred to as uniformizer) correspond to each other. The first manifestation of this is at an elementary level: the elements of both fields can be expressed as power series in the uniformizer, with coefficients in F"p". (However, since the addition in Q"p" is done using carrying, which is not the case in F"p"(("t")), these fields are not isomorphic.) Two further facts show that this superficial similarity goes much deeper: by the Ax–Kochen theorem, a first-order statement holds in Q"p" for all but finitely many primes "p" if and only if it holds in F"p"(("t")) for all but finitely many "p"; and, by a theorem of Fontaine and Wintenberger, adjoining all "p"-power roots of the uniformizers yields an isomorphism of absolute Galois groups formula_11
Differential fields.
Differential fields are fields equipped with a derivation, i.e., they allow one to take derivatives of elements of the field. For example, the field R("X"), together with the standard derivative of polynomials, forms a differential field. These fields are central to differential Galois theory, a variant of Galois theory dealing with linear differential equations.
Galois theory.
Galois theory studies algebraic extensions of a field by studying the symmetry in the arithmetic operations of addition and multiplication. An important notion in this area is that of finite Galois extensions "F" / "E", which are, by definition, those that are separable and normal. The primitive element theorem shows that finite separable extensions are necessarily simple, i.e., of the form
"F" = "E"["X"] / "f"("X"),
where "f" is an irreducible polynomial (as above). For such an extension, being normal and separable means that all zeros of "f" are contained in "F" and that "f" has only simple zeros. The latter condition is always satisfied if "E" has characteristic 0.
For a finite Galois extension, the Galois group Gal("F"/"E") is the group of field automorphisms of "F" that are trivial on "E" (i.e., the bijections "σ" : "F" → "F" that preserve addition and multiplication and that send elements of "E" to themselves). The importance of this group stems from the fundamental theorem of Galois theory, which constructs an explicit one-to-one correspondence between the set of subgroups of Gal("F"/"E") and the set of intermediate extensions of the extension "F"/"E". By means of this correspondence, group-theoretic properties translate into facts about fields. For example, if the Galois group of a Galois extension as above is not solvable (cannot be built from abelian groups), then the zeros of "f" "cannot" be expressed in terms of addition, multiplication, and radicals, i.e., expressions involving formula_12. For example, the symmetric group S"n" is not solvable for "n" ≥ 5. Consequently, as can be shown, the zeros of the following polynomials are not expressible by sums, products, and radicals. For the latter polynomial, this fact is known as the Abel–Ruffini theorem:
"f"("X") = "X"5 − 4"X" + 2 (and "E" = Q),
"f"("X") = "X""n" + "a""n"−1"X""n"−1 + ⋯ + "a"0 (where "f" is regarded as a polynomial in "E"("a"0, ..., "a""n"−1), for some indeterminates "a""i", "E" is any field, and "n" ≥ 5).
The tensor product of fields is not usually a field. For example, a finite extension "F" / "E" of degree "n" is a Galois extension if and only if there is an isomorphism of "F"-algebras
"F" ⊗"E" "F" ≅ "F""n".
This fact is the beginning of Grothendieck's Galois theory, a far-reaching extension of Galois theory applicable to algebro-geometric objects.
Invariants of fields.
Basic invariants of a field "F" include the characteristic and the transcendence degree of "F" over its prime field. The latter is defined as the maximal number of elements in "F" that are algebraically independent over the prime field. Two algebraically closed fields "E" and "F" are isomorphic precisely if these two data agree. This implies that any two uncountable algebraically closed fields of the same cardinality and the same characteristic are isomorphic. For example, and C are isomorphic (but "not" isomorphic as topological fields).
Model theory of fields.
In model theory, a branch of mathematical logic, two fields "E" and "F" are called elementarily equivalent if every mathematical statement that is true for "E" is also true for "F" and conversely. The mathematical statements in question are required to be first-order sentences (involving 0, 1, the addition and multiplication). A typical example, for "n" > 0, "n" an integer, is
"φ"("E") = "any polynomial of degree "n" in "E" has a zero in "E""
The set of such formulas for all "n" expresses that "E" is algebraically closed.
The Lefschetz principle states that C is elementarily equivalent to any algebraically closed field "F" of characteristic zero. Moreover, any fixed statement "φ" holds in C if and only if it holds in any algebraically closed field of sufficiently high characteristic.
If "U" is an ultrafilter on a set "I", and "F""i" is a field for every "i" in "I", the ultraproduct of the "F""i" with respect to "U" is a field. It is denoted by
ulim"i"→∞ "F""i",
since it behaves in several ways as a limit of the fields "F""i": Łoś's theorem states that any first order statement that holds for all but finitely many "F""i", also holds for the ultraproduct. Applied to the above sentence φ, this shows that there is an isomorphism
formula_13
The Ax–Kochen theorem mentioned above also follows from this and an isomorphism of the ultraproducts (in both cases over all primes "p")
ulim"p" Q"p" ≅ ulim"p" F"p"(("t")).
In addition, model theory also studies the logical properties of various other types of fields, such as real closed fields or exponential fields (which are equipped with an exponential function exp : "F" → "F"×).
Absolute Galois group.
For fields that are not algebraically closed (or not separably closed), the absolute Galois group Gal("F") is fundamentally important: extending the case of finite Galois extensions outlined above, this group governs "all" finite separable extensions of "F". By elementary means, the group Gal(F"q") can be shown to be the Prüfer group, the profinite completion of Z. This statement subsumes the fact that the only algebraic extensions of F"q" are the fields F"q""n" for "n" > 0, and that the Galois groups of these finite extensions are given by
Gal(F"q""n" / F"q") = Z/"n"Z.
A description in terms of generators and relations is also known for the Galois groups of "p"-adic number fields (finite extensions of Q"p").
Representations of Galois groups and of related groups such as the Weil group are fundamental in many branches of arithmetic, such as the Langlands program. The cohomological study of such representations is done using Galois cohomology. For example, the Brauer group, which is classically defined as the group of central simple "F"-algebras, can be reinterpreted as a Galois cohomology group, namely
Br("F") = H2("F", Gm).
K-theory.
Milnor K-theory is defined as
formula_14
The norm residue isomorphism theorem, proved around 2000 by Vladimir Voevodsky, relates this to Galois cohomology by means of an isomorphism
formula_15
Algebraic K-theory is related to the group of invertible matrices with coefficients in the given field. For example, the process of taking the determinant of an invertible matrix leads to an isomorphism "K"1("F") = "F"×. Matsumoto's theorem shows that "K"2("F") agrees with "K"2"M"("F"). In higher degrees, K-theory diverges from Milnor K-theory and remains hard to compute in general.
Applications.
Linear algebra and commutative algebra.
If "a" ≠ 0, then the equation
"ax" = "b"
has a unique solution "x" in a field "F", namely formula_16 This immediate consequence of the definition of a field is fundamental in linear algebra. For example, it is an essential ingredient of Gaussian elimination and of the proof that any vector space has a basis.
The theory of modules (the analogue of vector spaces over rings instead of fields) is much more complicated, because the above equation may have several or no solutions. In particular, systems of linear equations over a ring are much more difficult to solve than in the case of fields, even in the especially simple case of the ring Z of the integers.
Finite fields: cryptography and coding theory.
A widely applied cryptographic routine uses the fact that discrete exponentiation, i.e., computing
"a""n" = "a" ⋅ "a" ⋅ ⋯ ⋅ "a" ("n" factors, for an integer "n" ≥ 1)
in a (large) finite field F"q" can be performed much more efficiently than the discrete logarithm, which is the inverse operation, i.e., determining the solution "n" to an equation
"a""n" = "b".
In elliptic curve cryptography, the multiplication in a finite field is replaced by the operation of adding points on an elliptic curve, i.e., the solutions of an equation of the form
"y"2 = "x"3 + "ax" + "b".
Finite fields are also used in coding theory and combinatorics.
Geometry: field of functions.
Functions from a suitable topological space "X" into a field "F" can be added and multiplied pointwise; e.g., the product of two functions is defined by the product of their values within the domain:
("f" ⋅ "g")("x") = "f"("x") ⋅ "g"("x").
This makes these functions a commutative "F"-algebra.
For having a "field" of functions, one must consider algebras of functions that are integral domains. In this case the ratios of two functions, i.e., expressions of the form
formula_17
form a field, called field of functions.
This occurs in two main cases. When "X" is a complex manifold, one considers the algebra of holomorphic functions, i.e., complex differentiable functions. Their ratios form the field of meromorphic functions on "X".
The function field of an algebraic variety "X" (a geometric object defined as the common zeros of polynomial equations) consists of ratios of regular functions, i.e., ratios of polynomial functions on the variety. The function field of the "n"-dimensional space over a field "F" is "F"("x"1, ..., "x""n"), i.e., the field consisting of ratios of polynomials in "n" indeterminates. The function field of "X" is the same as the one of any open dense subvariety. In other words, the function field is insensitive to replacing "X" by a (slightly) smaller subvariety.
The function field is invariant under isomorphism and birational equivalence of varieties. It is therefore an important tool for the study of abstract algebraic varieties and for the classification of algebraic varieties. For example, the dimension, which equals the transcendence degree of "F"("X"), is invariant under birational equivalence. For curves (i.e., the dimension is one), the function field "F"("X") is very close to "X": if "X" is smooth and proper (the analogue of being compact), "X" can be reconstructed, up to isomorphism, from its field of functions. In higher dimension the function field remembers less, but still decisive information about "X". The study of function fields and their geometric meaning in higher dimensions is referred to as birational geometry. The minimal model program attempts to identify the simplest (in a certain precise sense) algebraic varieties with a prescribed function field.
Number theory: global fields.
Global fields are in the limelight in algebraic number theory and arithmetic geometry.
They are, by definition, number fields (finite extensions of Q) or function fields over F"q" (finite extensions of F"q"("t")). As for local fields, these two types of fields share several similar features, even though they are of characteristic 0 and positive characteristic, respectively. This function field analogy can help to shape mathematical expectations, often first by understanding questions about function fields, and later treating the number field case. The latter is often more difficult. For example, the Riemann hypothesis concerning the zeros of the Riemann zeta function (open as of 2017) can be regarded as being parallel to the Weil conjectures (proven in 1974 by Pierre Deligne).
Cyclotomic fields are among the most intensely studied number fields. They are of the form Q("ζ""n"), where "ζ""n" is a primitive "n"th root of unity, i.e., a complex number "ζ" that satisfies "ζ""n" = 1 and "ζ""m" ≠ 1 for all 0 < "m" < "n". For "n" being a regular prime, Kummer used cyclotomic fields to prove Fermat's Last Theorem, which asserts the non-existence of rational nonzero solutions to the equation
"x""n" + "y""n" = "z""n".
Local fields are completions of global fields. Ostrowski's theorem asserts that the only completions of Q, a global field, are the local fields Q"p" and R. Studying arithmetic questions in global fields may sometimes be done by looking at the corresponding questions locally. This technique is called the local–global principle. For example, the Hasse–Minkowski theorem reduces the problem of finding rational solutions of quadratic equations to solving these equations in R and Q"p", whose solutions can easily be described.
Unlike for local fields, the Galois groups of global fields are not known. Inverse Galois theory studies the (unsolved) problem of whether every finite group is the Galois group Gal("F"/Q) for some number field "F". Class field theory describes the abelian extensions, i.e., ones with abelian Galois group, or equivalently the abelianized Galois groups of global fields. A classical statement, the Kronecker–Weber theorem, describes the maximal abelian extension Qab of Q: it is the field
Q("ζ""n", "n" ≥ 2)
obtained by adjoining all primitive "n"th roots of unity. Kronecker's Jugendtraum asks for a similarly explicit description of the maximal abelian extension "F"ab of a general number field "F". For imaginary quadratic fields, formula_18, "d" > 0, the theory of complex multiplication describes "F"ab using elliptic curves. For general number fields, no such explicit description is known.
Related notions.
In addition to the additional structure that fields may enjoy, fields admit various other related notions. Since in any field 0 ≠ 1, any field has at least two elements. Nonetheless, there is a concept of field with one element, which is suggested to be a limit of the finite fields F"p", as "p" tends to 1. In addition to division rings, there are various other weaker algebraic structures related to fields such as quasifields, near-fields and semifields.
There are also proper classes with field structure, which are sometimes called Fields, with a capital 'F'. The surreal numbers form a Field containing the reals, and would be a field except for the fact that they are a proper class, not a set. The nimbers, a concept from game theory, form such a Field as well.
Division rings.
Dropping one or several axioms in the definition of a field leads to other algebraic structures. As was mentioned above, commutative rings satisfy all field axioms except for the existence of multiplicative inverses. Dropping instead commutativity of multiplication leads to the concept of a "division ring" or "skew field"; sometimes associativity is weakened as well. The only division rings that are finite-dimensional R-vector spaces are R itself, C (which is a field), and the quaternions H (in which multiplication is non-commutative). This result is known as the Frobenius theorem. The octonions O, for which multiplication is neither commutative nor associative, form a normed alternative division algebra, but are not a division ring. This fact was proved using methods of algebraic topology in 1958 by Michel Kervaire, Raoul Bott, and John Milnor.
Wedderburn's little theorem states that all finite division rings are fields.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac b a \\cdot \\frac a b = \\frac{ba}{ab} = 1."
},
{
"math_id": 1,
"text": "\n\\begin{align}\n& \\frac a b \\cdot \\left(\\frac c d + \\frac e f \\right) \\\\[6pt]\n= {} & \\frac a b \\cdot \\left(\\frac c d \\cdot \\frac f f + \\frac e f \\cdot \\frac d d \\right) \\\\[6pt]\n= {} & \\frac{a}{b} \\cdot \\left(\\frac{cf}{df} + \\frac{ed}{fd}\\right) = \\frac{a}{b} \\cdot \\frac{cf + ed}{df} \\\\[6pt]\n= {} & \\frac{a(cf + ed)}{bdf} = \\frac{acf}{bdf} + \\frac{aed}{bdf} = \\frac{ac}{bd} + \\frac{ae}{bf} \\\\[6pt]\n= {} & \\frac a b \\cdot \\frac c d + \\frac a b \\cdot \\frac e f.\n\\end{align}\n"
},
{
"math_id": 2,
"text": "h=\\sqrt p"
},
{
"math_id": 3,
"text": "\\sqrt[3] 2"
},
{
"math_id": 4,
"text": "(F \\smallsetminus \\{0\\}, \\cdot)"
},
{
"math_id": 5,
"text": "F \\smallsetminus \\{0\\}"
},
{
"math_id": 6,
"text": "\\frac{a}{b}+\\frac{c}{d} = \\frac{ad+bc}{bd}."
},
{
"math_id": 7,
"text": "\\sum_{i=k}^\\infty a_i x^i \\ (k \\in \\Z, a_i \\in F)"
},
{
"math_id": 8,
"text": "\\mathbf R[X]/\\left(X^2 + 1\\right) \\ \\stackrel \\cong \\longrightarrow \\ \\mathbf C."
},
{
"math_id": 9,
"text": "\\sum_{k=0}^{n-1} a_k x^k, \\ \\ a_k \\in E."
},
{
"math_id": 10,
"text": "x_1^2 + x_2^2 + \\dots + x_n^2 = 0"
},
{
"math_id": 11,
"text": "\\operatorname {Gal}\\left(\\mathbf Q_p \\left(p^{1/p^\\infty} \\right) \\right) \\cong \\operatorname {Gal}\\left(\\mathbf F_p((t))\\left(t^{1/p^\\infty}\\right)\\right)."
},
{
"math_id": 12,
"text": "\\sqrt[n]{~}"
},
{
"math_id": 13,
"text": "\\operatorname{ulim}_{p \\to \\infty} \\overline \\mathbf F_p \\cong \\mathbf C."
},
{
"math_id": 14,
"text": "K_n^M(F) = F^\\times \\otimes \\cdots \\otimes F^\\times / \\left\\langle x \\otimes (1-x) \\mid x \\in F \\smallsetminus \\{0, 1\\} \\right\\rangle."
},
{
"math_id": 15,
"text": "K_n^M(F) / p = H^n(F, \\mu_l^{\\otimes n})."
},
{
"math_id": 16,
"text": "x=a^{-1}b."
},
{
"math_id": 17,
"text": "\\frac{f(x)}{g(x)},"
},
{
"math_id": 18,
"text": "F=\\mathbf Q(\\sqrt{-d})"
}
] | https://en.wikipedia.org/wiki?curid=10603 |
1060349 | Differential wheeled robot | Robot with a particular driving system
A differential wheeled robot is a mobile robot whose movement is based on two separately driven wheels placed on either side of the robot body. It can thus change its direction by varying the relative rate of rotation of its wheels and hence does not require an additional steering motion. Robots with such a drive typically have one or more castor wheels to prevent the vehicle from tilting.
Details.
If both the wheels are driven in the same direction and at the same speed, the robot will go in a straight line. If both wheels are turned with equal speed in opposite directions, as is clear from the diagram shown, the robot will rotate about the central point of the axis. Otherwise, depending on the speed of rotation and its direction, the center of rotation may fall anywhere on the line defined by the two contact points of the tires. While the robot is traveling in a straight line, the center of rotation is an infinite distance from the robot. Since the direction of the robot is dependent on the rate and direction of rotation of the two driven wheels, these quantities should be sensed and controlled precisely.
A differentially steered robot is similar to the differential gears used in automobiles in that both the wheels can have different rates of rotations, but unlike the differential gearing system, a differentially steered system will have both the wheels powered. Differential wheeled robots are used extensively in robotics, since their motion is easy to program and can be well controlled. Virtually all consumer robots on the market today use differential steering primarily for its low cost and simplicity.
Kinematics of Differential Drive Robots.
The illustration on the right shows the differential drive kinematics of a mobile wheeled robot. The variables are expressed using the following notation: formula_0 and formula_1 are the global coordinate system. Using the point midway between the wheels as the origin of the robot, one can define formula_2 and formula_3 as the local body coordinate system. The orientation of the robot with respect to the global coordinate system is the angle formula_4. The radius of the wheels is formula_5 and the width of the vehicle is formula_6. Assuming that the wheels are at all times in contact with the ground (there is no slip), the wheels describe arcs in the plane in such a way that the vehicle always rotates around a point referred to as formula_7, the instantaneous center of rotation. The ground contact speed of the left wheel formula_8 and of the right wheel formula_9 lead to a rotation of the vehicle with angular velocity formula_10. Following the definition of angular velocity, one obtains: formula_11 Solving these two equations for formula_10 and formula_12, where the latter is defined as the distance from formula_7 to the center of the robot, yields: formula_13 Using the equation for the angular velocity, the instantaneous velocity formula_14 of the point midway between the robot's wheels is given by formula_15 The wheel tangential velocities can also be written as formula_16 where formula_17 and formula_18 are the left and the right angular velocities of the wheels around their axes. The robot kinematics in local body coordinates can thus be written as formula_19 Using a coordinate transformation (rotation of axes), the robot's kinematic model in global coordinates is finally obtained as formula_20 where formula_14 and formula_21 are the control variables.
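A minimal Python sketch of these kinematic equations (the wheel radius, track width, time step and wheel speeds below are made-up example values) integrates the global model with simple Euler steps:

```python
import math

r, b = 0.05, 0.30            # wheel radius [m], distance between wheels [m]

def step(state, w_left, w_right, dt):
    """One Euler step of x' = V cos(phi), y' = V sin(phi), phi' = omega."""
    x, y, phi = state
    v_l, v_r = r * w_left, r * w_right          # ground contact speeds
    V = (v_r + v_l) / 2.0                        # speed of the midpoint
    omega = (v_r - v_l) / b                      # angular velocity
    return (x + V * math.cos(phi) * dt,
            y + V * math.sin(phi) * dt,
            phi + omega * dt)

# Equal wheel speeds -> straight line; opposite speeds would turn in place.
state = (0.0, 0.0, 0.0)
for _ in range(100):
    state = step(state, w_left=10.0, w_right=10.0, dt=0.01)
print(state)     # roughly (0.5, 0.0, 0.0): 0.5 m straight ahead after 1 s
```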
Differential Drive Controller.
One might face a situation where the velocity formula_22 and the angular velocity formula_10 are given as inputs, and the angular velocities of the left wheel formula_23 and of the right wheel formula_24 are sought as control variables (see figure above). In this case, the already mentioned equation can be easily reformulated. Using the relations formula_25 and formula_26 in
formula_27, one obtains the equation for the angular velocity of the right wheel formula_24
formula_28
The same procedure can be applied to the calculation of the angular velocity of the left wheel formula_23
formula_29
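The two controller equations can be implemented directly; the following Python sketch (again with made-up values for "r" and "b") converts a commanded pair (formula_22, formula_10) into wheel angular velocities and checks the result against the forward relations:

```python
r, b = 0.05, 0.30            # wheel radius [m], distance between wheels [m]

def wheel_speeds(V, omega):
    """Angular velocities of the right and left wheels for a commanded (V, omega)."""
    w_right = (V + omega * b / 2.0) / r
    w_left  = (V - omega * b / 2.0) / r
    return w_left, w_right

w_l, w_r = wheel_speeds(V=0.5, omega=1.0)

# Consistency check with the forward kinematics: recover V and omega.
v_l, v_r = r * w_l, r * w_r
assert abs((v_r + v_l) / 2.0 - 0.5) < 1e-12
assert abs((v_r - v_l) / b - 1.0) < 1e-12
```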
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "Y"
},
{
"math_id": 2,
"text": "X_{B}"
},
{
"math_id": 3,
"text": "Y_{B}"
},
{
"math_id": 4,
"text": "\\varphi"
},
{
"math_id": 5,
"text": "r"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "ICR"
},
{
"math_id": 8,
"text": "v_{L}"
},
{
"math_id": 9,
"text": "v_{R}"
},
{
"math_id": 10,
"text": "\\omega"
},
{
"math_id": 11,
"text": "\\begin{align} \\omega \\cdot (R+b/2) &= v_{R} \\\\ \\omega \\cdot (R-b/2) &= v_{L} \\\\ \\end{align}"
},
{
"math_id": 12,
"text": "R"
},
{
"math_id": 13,
"text": "\\begin{align} \\omega &= (v_{R}-v_{L})/b \\\\ R &= b/2 \\cdot (v_{R}+v_{L}) / (v_{R}-v_{L}) \\\\ \\end{align}"
},
{
"math_id": 14,
"text": "V"
},
{
"math_id": 15,
"text": "V = \\omega \\cdot R = \\frac{v_{R}+v_{L}}{2}"
},
{
"math_id": 16,
"text": "\\begin{align}v_{R} & = r \\cdot \\omega_{R} \\\\ v_{L} & = r \\cdot \\omega_{L} \\end{align}"
},
{
"math_id": 17,
"text": "\\omega_{R}"
},
{
"math_id": 18,
"text": "\\omega_{L}"
},
{
"math_id": 19,
"text": "\\begin{bmatrix}\n \\dot{x}_{B}\\\\\n \\dot{y}_{B}\\\\\n\\dot{\\varphi}\n\\end{bmatrix}\n= \n\n\\begin{bmatrix}\nv\\, {x}_{B} \\\\\nv\\, {y}_{B} \\\\\n\\omega\n\\end{bmatrix}\n\\overbrace{=}^{v=r \\omega}\n\n\\begin{bmatrix}\n\\frac{r}{2} & \\frac{r}{2}\\\\\n0 &0\\\\\n-\\frac{r}{b} & \\frac{r}{b} \n\\end{bmatrix}\n\\begin{bmatrix}\n\\omega_{L}\\\\\n\\omega_{R}\n\\end{bmatrix}"
},
{
"math_id": 20,
"text": "\\begin{bmatrix}\n \\dot{x}\\\\\n \\dot{y}\\\\\n\\dot{\\varphi}\n\\end{bmatrix}\n= \n\n\\begin{bmatrix}\n\\cos{\\varphi} & 0\\\\\n\\sin{\\varphi} &0\\\\\n0 & 1 \n\\end{bmatrix}\n\\begin{bmatrix}\nV\\\\\n\\omega\n\\end{bmatrix}"
},
{
"math_id": 21,
"text": "\\omega"
},
{
"math_id": 22,
"text": "V"
},
{
"math_id": 23,
"text": "\\omega_{L}"
},
{
"math_id": 24,
"text": "\\omega_{R}"
},
{
"math_id": 25,
"text": "R = V/\\omega"
},
{
"math_id": 26,
"text": "\\omega_{R} = v_{R}/r"
},
{
"math_id": 27,
"text": "\\omega \\cdot (R+b/2) = v_{R}"
},
{
"math_id": 28,
"text": "\\omega_{R} = \\frac{V+\\omega \\cdot b/2}{r}"
},
{
"math_id": 29,
"text": "\\omega_{L} = \\frac{V-\\omega \\cdot b/2}{r}"
}
] | https://en.wikipedia.org/wiki?curid=1060349 |
10603568 | Unitarity (physics) | Requirement that quantum states' time evolution operators are unitary transformations
In quantum physics, unitarity is the condition that the time evolution of a quantum state according to the Schrödinger equation is mathematically represented by a unitary operator; a process whose evolution operator is unitary is called a unitary process. This is typically taken as an axiom or basic postulate of quantum mechanics, while generalizations of or departures from unitarity are part of speculations about theories that may go beyond quantum mechanics. A unitarity bound is any inequality that follows from the unitarity of the evolution operator, i.e. from the statement that time evolution preserves inner products in Hilbert space.
Hamiltonian evolution.
Time evolution described by a time-independent Hamiltonian is represented by a one-parameter family of unitary operators, for which the Hamiltonian is a generator: formula_0.
In the Schrödinger picture, the unitary operators are taken to act upon the system's quantum state, whereas in the Heisenberg picture, the time dependence is incorporated into the observables instead.
Implications of unitarity on measurement results.
In quantum mechanics, every state is described as a vector in Hilbert space. When a measurement is performed, it is convenient to describe this space using a vector basis in which every basis vector has a defined result of the measurement – e.g., a vector basis of defined momentum in case momentum is measured. The measurement operator is diagonal in this basis.
The probability to get a particular measured result depends on the probability amplitude, given by the inner product of the physical state formula_1 with the basis vectors formula_2 that diagonalize the measurement operator. For a physical state that is measured after it has evolved in time, the probability amplitude can be described either by the inner product of the physical state after time evolution with the relevant basis vectors, or equivalently by the inner product of the physical state with the basis vectors that are evolved backwards in time. Using the time evolution operator formula_3, we have:
formula_4
But by definition of Hermitian conjugation, this is also:
formula_5
Since these equalities are true for every two vectors, we get
formula_6
This means that the Hamiltonian is Hermitian and the time evolution operator formula_3 is unitary.
Since by the Born rule the norm determines the probability to get a particular result in a measurement, unitarity together with the Born rule guarantees the sum of probabilities is always one. Furthermore, unitarity together with the Born rule implies that the measurement operators in Heisenberg picture indeed describe how the measurement results are expected to evolve in time.
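As a numerical illustration (a sketch using NumPy and SciPy, with a randomly generated 4×4 Hamiltonian and ħ set to 1), one can check directly that the evolution operator built from a Hermitian matrix is unitary and preserves the norm of a state vector:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2              # Hermitian by construction

t = 0.7
U = expm(-1j * H * t)                 # evolution operator exp(-i H t), hbar = 1

# U is unitary: U^dagger U = identity (up to floating-point error).
assert np.allclose(U.conj().T @ U, np.eye(4))

# The norm (total probability) of an arbitrary state is preserved.
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
assert np.isclose(np.linalg.norm(U @ psi), 1.0)
```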
Implications on the form of the Hamiltonian.
That the time evolution operator is unitary, is equivalent to the Hamiltonian being Hermitian. Equivalently, this means that the possible measured energies, which are the eigenvalues of the Hamiltonian, are always real numbers.
Scattering amplitude and the optical theorem.
The S-matrix is used to describe how the physical system changes in a scattering process. It is in fact equal to the time evolution operator over a very long time (approaching infinity) acting on momentum states of particles (or bound complex of particles) at infinity. Thus it must be a unitary operator as well; a calculation yielding a non-unitary S-matrix often implies a bound state has been overlooked.
Optical theorem.
Unitarity of the S-matrix implies, among other things, the optical theorem. This can be seen as follows:
The S-matrix can be written as:
formula_7
where formula_8 is the part of the S-matrix that is due to interactions; e.g. formula_9 just implies the S-matrix is 1, no interactions occur and all states remain unchanged.
Unitarity of the S-matrix:
formula_10
is then equivalent to:
formula_11
The left-hand side is twice the imaginary part of "T". In order to see what the right-hand side is, let us look at any specific element of this matrix, e.g. between some initial state formula_12 and final state formula_13, each of which may include many particles. The matrix element is then:
formula_14
where {Ai} is the set of possible on-shell states - i.e. momentum states of particles (or bound complex of particles) at infinity.
Thus, twice the imaginary part of the S-matrix, is equal to a sum representing products of contributions from all the scatterings of the initial state of the S-matrix to any other physical state at infinity, with the scatterings of the latter to the final state of the S-matrix. Since the imaginary part of the S-matrix can be calculated by virtual particles appearing in intermediate states of the Feynman diagrams, it follows that these virtual particles must only consist of real particles that may also appear as final states. The mathematical machinery which is used to ensure this includes gauge symmetry and sometimes also Faddeev–Popov ghosts.
Unitarity bounds.
According to the optical theorem, the probability amplitude "M (= iT)" for any scattering process must obey
formula_15
Similar unitarity bounds imply that the amplitudes and cross sections cannot increase too much with energy, or that they must decrease as quickly as a certain formula dictates. For example, the Froissart bound says that the total cross section for the scattering of two particles is bounded by formula_16, where formula_17 is a constant, and formula_18 is the square of the center-of-mass energy. (See Mandelstam variables)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U(t) = e^{-i \\hat{H} t/ \\hbar}"
},
{
"math_id": 1,
"text": "|\\psi\\rangle"
},
{
"math_id": 2,
"text": "\\{|\\phi_i\\rangle\\}"
},
{
"math_id": 3,
"text": "e^{-i\\hat{H}t/\\hbar}"
},
{
"math_id": 4,
"text": "\\left\\langle \\phi_i \\left| e^{-i\\hat{H}t/\\hbar} \\psi \\right.\\right\\rangle = \\left\\langle\\left. e^{-i\\hat{H}(-t)/\\hbar} \\phi_i \\right| \\psi \\right\\rangle"
},
{
"math_id": 5,
"text": "\n \\left\\langle \\phi_i \\left| e^{-i\\hat{H}t/\\hbar} \\psi \\right.\\right\\rangle = \n \\left\\langle\\left. \\phi_i \\left( e^{-i\\hat{H}t/\\hbar}\\right)^{\\dagger} \\right| \\psi \\right\\rangle =\n \\left\\langle\\left. \\phi_i e^{-i\\hat{H}^{\\dagger}(-t)/\\hbar} \\right| \\psi \\right\\rangle\n"
},
{
"math_id": 6,
"text": "\\hat{H}^{\\dagger} = \\hat{H}"
},
{
"math_id": 7,
"text": "S = 1 + i T "
},
{
"math_id": 8,
"text": "T"
},
{
"math_id": 9,
"text": "T = 0"
},
{
"math_id": 10,
"text": "S^{\\dagger} S = 1"
},
{
"math_id": 11,
"text": "-i\\left(T - T^{\\dagger}\\right) = T^{\\dagger}T"
},
{
"math_id": 12,
"text": "|I\\rangle "
},
{
"math_id": 13,
"text": "\\langle F|"
},
{
"math_id": 14,
"text": "\\left\\langle F \\left| T^{\\dagger}T \\right| I\\right\\rangle = \\sum_i \\left\\langle F | T^{\\dagger} | A_i \\right\\rangle \\left\\langle A_i | T | I\\right\\rangle"
},
{
"math_id": 15,
"text": "|M|^2 = 2\\operatorname{Im}(M)"
},
{
"math_id": 16,
"text": " c \\ln^2 s "
},
{
"math_id": 17,
"text": " c "
},
{
"math_id": 18,
"text": " s "
}
] | https://en.wikipedia.org/wiki?curid=10603568 |
10604601 | Net volatility | Volatility implied by the price of an option spread trade involving two or more options
Net volatility refers to the volatility implied by the price of an option spread trade involving two or more options. Essentially, it is the volatility at which the theoretical value of the spread trade matches the price quoted in the market, or, in other words, the implied volatility of the spread.
Formula.
The net volatility for a two-legged spread (with one long leg, and one short) can be estimated, to a first order approximation, by the formula:
formula_0
where
formula_1 is the net volatility for the spread
formula_2 and formula_3 are the implied volatility and vega for the long leg
formula_4 and formula_5 are the implied volatility and vega for the short leg
Example.
It is now mid-April 2007, and you are considering going long a Sep07/May07 100 call spread, i.e. buy the Sep 100 call and sell the May 100 call. The Sep 100 call is offered at a 14.1% implied volatility and the May 100 call is bid at an 18.3% implied volatility. The vega of the Sep 100 call is 4.3 and the vega of the May 100 call is 2.3. Using the formula above, the net volatility of the spread is:
formula_6
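The calculation can be scripted directly; the following Python sketch (function and variable names are illustrative only) reproduces the example above:

```python
def net_volatility(sigma_long, vega_long, sigma_short, vega_short):
    """Net volatility of a two-legged spread (one long leg, one short leg)."""
    return (vega_long * sigma_long - vega_short * sigma_short) / (vega_long - vega_short)

# The Sep07/May07 100 call spread example from the text:
sigma_n = net_volatility(sigma_long=0.141, vega_long=4.3,
                         sigma_short=0.183, vega_short=2.3)
print(round(sigma_n * 100, 2))   # 9.27 (percent)
```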
Interpretation.
In the example above, going short a May 100 call and long a Sep 100 call results in a synthetic "forward" option – i.e. an option struck at 100 that spans the period from May to September expirations. To see this, consider that the two options essentially offset each other from today until the expiration of the short May option.
Thus, the net volatility calculated above is, in fact, the implied volatility of this synthetic forward option. While it may seem counter-intuitive that one can create a synthetic option whose implied volatility is lower than the implied volatilities of its components, consider that the implied volatility of the short May leg, 18.3%, corresponds to the period from today to May expiration, while the implied volatility of the long September leg, 14.1%, corresponds to the period from today to September expiration. Therefore, the implied volatility for the period May to September must be less than 14.1% to compensate for the higher implied volatility during the period to May.
In practice, one sees this type of situation often when the short leg is being bid up for a specific reason. For instance, the near option may include an upcoming event, such as an earnings announcement, that will, in all probability, cause the underlier price to move. After the event has passed, the market may expect the underlier to be relatively stable which results in a lower implied volatility for the subsequent period. | [
{
"math_id": 0,
"text": "\\sigma_N=\\frac{\\nu_L \\sigma_L - \\nu_S \\sigma_S}{\\nu_L - \\nu_S}\\,"
},
{
"math_id": 1,
"text": "\\sigma_N"
},
{
"math_id": 2,
"text": "\\sigma_L"
},
{
"math_id": 3,
"text": "\\nu_L"
},
{
"math_id": 4,
"text": "\\sigma_S"
},
{
"math_id": 5,
"text": "\\nu_S"
},
{
"math_id": 6,
"text": "\\frac{14.1\\%\\cdot4.3-18.3\\%\\cdot2.3}{4.3-2.3}=9.27\\%"
}
] | https://en.wikipedia.org/wiki?curid=10604601 |
10605275 | Fiber derivative | In the context of Lagrangian mechanics, the fiber derivative is used to convert between the Lagrangian and Hamiltonian forms. In particular, if formula_0 is the configuration manifold then the Lagrangian formula_1 is defined on the tangent bundle formula_2 , and the Hamiltonian is defined on the cotangent bundle formula_3—the fiber derivative is a map formula_4 such that
formula_5,
where formula_6 and formula_7 are vectors from the same tangent space. When restricted to a particular point, the fiber derivative is a Legendre transformation. | [
{
"math_id": 0,
"text": "Q"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "TQ"
},
{
"math_id": 3,
"text": "T^* Q"
},
{
"math_id": 4,
"text": "\\mathbb{F}L:TQ \\rightarrow T^* Q"
},
{
"math_id": 5,
"text": "\\mathbb{F}L(v) \\cdot w = \\left. \\frac{d}{ds} \\right|_{s=0} L(v+sw)"
},
{
"math_id": 6,
"text": "v"
},
{
"math_id": 7,
"text": "w"
}
] | https://en.wikipedia.org/wiki?curid=10605275 |
10606 | Factorial | Product of numbers from 1 to n
In mathematics, the factorial of a non-negative integer formula_0, denoted by formula_1, is the product of all positive integers less than or equal to formula_0. The factorial of formula_0 also equals the product of formula_0 with the next smaller factorial:
formula_2
For example,
formula_3
The value of 0! is 1, according to the convention for an empty product.
Factorials have been discovered in several ancient cultures, notably in Indian mathematics in the canonical works of Jain literature, and by Jewish mystics in the Talmudic book "Sefer Yetzirah". The factorial operation is encountered in many areas of mathematics, notably in combinatorics, where its most basic use counts the possible distinct sequences – the permutations – of formula_0 distinct objects: there are formula_1. In mathematical analysis, factorials are used in power series for the exponential function and other functions, and they also have applications in algebra, number theory, probability theory, and computer science.
Much of the mathematics of the factorial function was developed beginning in the late 18th and early 19th centuries.
Stirling's approximation provides an accurate approximation to the factorial of large numbers, showing that it grows more quickly than exponential growth. Legendre's formula describes the exponents of the prime numbers in a prime factorization of the factorials, and can be used to count the trailing zeros of the factorials. Daniel Bernoulli and Leonhard Euler interpolated the factorial function to a continuous function of complex numbers, except at the negative integers, the (offset) gamma function.
Many other notable functions and number sequences are closely related to the factorials, including the binomial coefficients, double factorials, falling factorials, primorials, and subfactorials. Implementations of the factorial function are commonly used as an example of different computer programming styles, and are included in scientific calculators and scientific computing software libraries. Although directly computing large factorials using the product formula or recurrence is not efficient, faster algorithms are known, matching to within a constant factor the time for fast multiplication algorithms for numbers with the same number of digits.
History.
The concept of factorials has arisen independently in many cultures:
From the late 15th century onward, factorials became the subject of study by Western mathematicians. In a 1494 treatise, Italian mathematician Luca Pacioli calculated factorials up to 11!, in connection with a problem of dining table arrangements. Christopher Clavius discussed factorials in a 1603 commentary on the work of Johannes de Sacrobosco, and in the 1640s, French polymath Marin Mersenne published large (but not entirely correct) tables of factorials, up to 64!, based on the work of Clavius. The power series for the exponential function, with the reciprocals of factorials for its coefficients, was first formulated in 1676 by Isaac Newton in a letter to Gottfried Wilhelm Leibniz. Other important works of early European mathematics on factorials include extensive coverage in a 1685 treatise by John Wallis, a study of their approximate values for large values of formula_0 by Abraham de Moivre in 1721, a 1729 letter from James Stirling to de Moivre stating what became known as Stirling's approximation, and work at the same time by Daniel Bernoulli and Leonhard Euler formulating the continuous extension of the factorial function to the gamma function. Adrien-Marie Legendre included Legendre's formula, describing the exponents in the factorization of factorials into prime powers, in an 1808 text on number theory.
The notation formula_1 for factorials was introduced by the French mathematician Christian Kramp in 1808. Many other notations have also been used. Another later notation formula_4, in which the argument of the factorial was half-enclosed by the left and bottom sides of a box, was popular for some time in Britain and America but fell out of use, perhaps because it is difficult to typeset. The word "factorial" (originally French: "factorielle") was first used in 1800 by Louis François Antoine Arbogast, in the first work on Faà di Bruno's formula, but referring to a more general concept of products of arithmetic progressions. The "factors" that this name refers to are the terms of the product formula for the factorial.
Definition.
The factorial function of a positive integer formula_0 is defined by the product of all positive integers not greater than formula_0
formula_5
This may be written more concisely in product notation as
formula_6
If this product formula is changed to keep all but the last term, it would define a product of the same form, for a smaller factorial. This leads to a recurrence relation, according to which each value of the factorial function can be obtained by multiplying the previous value
formula_7
For example, formula_8.
Factorial of zero.
The factorial of formula_9 is formula_10, or in symbols, formula_11. There are several motivations for this definition:
Applications.
The earliest uses of the factorial function involve counting permutations: there are formula_1 different ways of arranging formula_0 distinct objects into a sequence. Factorials appear more broadly in many formulas in combinatorics, to account for different orderings of objects. For instance the binomial coefficients formula_16 count the formula_17-element combinations (subsets of formula_17 elements) from a set with formula_0 elements, and can be computed from factorials using the formula formula_18 The Stirling numbers of the first kind sum to the factorials, and count the permutations of formula_0 grouped into subsets with the same numbers of cycles. Another combinatorial application is in counting derangements, permutations that do not leave any element in its original position; the number of derangements of formula_0 items is the nearest integer to formula_19.
In algebra, the factorials arise through the binomial theorem, which uses binomial coefficients to expand powers of sums. They also occur in the coefficients used to relate certain families of polynomials to each other, for instance in Newton's identities for symmetric polynomials. Their use in counting permutations can also be restated algebraically: the factorials are the orders of finite symmetric groups. In calculus, factorials occur in Faà di Bruno's formula for chaining higher derivatives. In mathematical analysis, factorials frequently appear in the denominators of power series, most notably in the series for the exponential function, formula_20
and in the coefficients of other Taylor series (in particular those of the trigonometric and hyperbolic functions), where they cancel factors of formula_1 coming from the formula_0th derivative of formula_21. This usage of factorials in power series connects back to analytic combinatorics through the exponential generating function, which for a combinatorial class with formula_22 elements of size formula_23 is defined as the power series formula_24
In number theory, the most salient property of factorials is the divisibility of formula_1 by all positive integers up to formula_0, described more precisely for prime factors by Legendre's formula. It follows that arbitrarily large prime numbers can be found as the prime factors of the numbers
formula_25, leading to a proof of Euclid's theorem that the number of primes is infinite. When formula_25 is itself prime it is called a factorial prime; relatedly, Brocard's problem, also posed by Srinivasa Ramanujan, concerns the existence of square numbers of the form formula_26. In contrast, the numbers formula_27 must all be composite, proving the existence of arbitrarily large prime gaps. An elementary proof of Bertrand's postulate on the existence of a prime in any interval of the form formula_28, one of the first results of Paul Erdős, was based on the divisibility properties of factorials. The factorial number system is a mixed radix notation for numbers in which the place values of each digit are factorials.
Factorials are used extensively in probability theory, for instance in the Poisson distribution and in the probabilities of random permutations. In computer science, beyond appearing in the analysis of brute-force searches over permutations, factorials arise in the lower bound of formula_29 on the number of comparisons needed to comparison sort a set of formula_0 items, and in the analysis of chained hash tables, where the distribution of keys per cell can be accurately approximated by a Poisson distribution. Moreover, factorials naturally appear in formulae from quantum and statistical physics, where one often considers all the possible permutations of a set of particles. In statistical mechanics, calculations of entropy such as Boltzmann's entropy formula or the Sackur–Tetrode equation must correct the count of microstates by dividing by the factorials of the numbers of each type of indistinguishable particle to avoid the Gibbs paradox. Quantum physics provides the underlying reason for why these corrections are necessary.
Properties.
Growth and approximation.
As a function of formula_0, the factorial has faster than exponential growth, but grows more slowly than a double exponential function. Its growth rate is similar to formula_31, but slower by an exponential factor. One way of approaching this result is by taking the natural logarithm of the factorial, which turns its product formula into a sum, and then estimating the sum by an integral:
formula_32
Exponentiating the result (and ignoring the negligible formula_33 term) approximates formula_1 as formula_30.
More carefully bounding the sum both above and below by an integral, using the trapezoid rule, shows that this estimate needs a correction factor proportional to formula_34. The constant of proportionality for this correction can be found from the Wallis product, which expresses formula_35 as a limiting ratio of factorials and powers of two. The result of these corrections is Stirling's approximation:
formula_36
Here, the formula_37 symbol means that, as formula_0 goes to infinity, the ratio between the left and right sides approaches one in the limit.
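As a numerical illustration of this convergence (a minimal C sketch, not part of the original text), the following program compares formula_1 with Stirling's approximation for small formula_0; the ratio approaches one, roughly like one plus the first correction term of the asymptotic series below:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        const double E  = 2.71828182845904523536;
        for (int n = 1; n <= 10; n++) {
            double fact = 1.0;
            for (int i = 2; i <= n; i++)
                fact *= i;                                   /* exact n! for small n */
            double stirling = sqrt(2.0 * PI * n) * pow(n / E, n);
            printf("%2d  %10.0f  %12.1f  ratio %.4f\n", n, fact, stirling, fact / stirling);
        }
        return 0;
    }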
Stirling's formula provides the first term in an asymptotic series that becomes even more accurate when taken to greater numbers of terms:
formula_38
An alternative version uses only odd exponents in the correction terms:
formula_39
Many other variations of these formulas have also been developed, by Srinivasa Ramanujan, Bill Gosper, and others.
The binary logarithm of the factorial, used to analyze comparison sorting, can be very accurately estimated using Stirling's approximation. In the formula below, the formula_40 term invokes big O notation.
formula_41
Divisibility and digits.
The product formula for the factorial implies that formula_1 is divisible by all prime numbers that are at most formula_0, and by no larger prime numbers. More precise information about its divisibility is given by Legendre's formula, which gives the exponent of each prime formula_42 in the prime factorization of formula_1 as
formula_43
Here formula_44 denotes the sum of the base-formula_42 digits of formula_0, and the exponent given by this formula can also be interpreted in advanced mathematics as the p-adic valuation of the factorial. Applying Legendre's formula to the product formula for binomial coefficients produces Kummer's theorem, a similar result on the exponent of each prime in the factorization of a binomial coefficient. Grouping the prime factors of the factorial into prime powers in different ways produces the multiplicative partitions of factorials.
The special case of Legendre's formula for formula_45 gives the number of trailing zeros in the decimal representation of the factorials. According to this formula, the number of zeros can be obtained by subtracting the base-5 digits of formula_0 from formula_0, and dividing the result by four. Legendre's formula implies that the exponent of the prime formula_46 is always larger than the exponent for formula_45, so each factor of five can be paired with a factor of two to produce one of these trailing zeros. The leading digits of the factorials are distributed according to Benford's law. Every sequence of digits, in any base, is the sequence of initial digits of some factorial number in that base.
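Legendre's formula translates directly into a short loop. The following C sketch (with an illustrative input of 100, chosen here as an example) computes the exponent of a prime formula_42 in formula_1, and hence the number of trailing zeros of 100!:

    #include <stdio.h>

    /* Exponent of the prime p in n!, by Legendre's formula: sum of floor(n/p^i). */
    long legendre_exponent(long n, long p) {
        long e = 0;
        for (long pk = p; pk <= n; pk *= p)   /* pk runs over p, p^2, p^3, ... */
            e += n / pk;
        return e;
    }

    int main(void) {
        long n = 100;
        printf("exponent of 5 in %ld! = %ld\n", n, legendre_exponent(n, 5));  /* 24 */
        printf("exponent of 2 in %ld! = %ld\n", n, legendre_exponent(n, 2));  /* 97 */
        /* 100! therefore ends in 24 trailing zeros: each factor of five pairs with a factor of two. */
        return 0;
    }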
Another result on divisibility of factorials, Wilson's theorem, states that formula_47 is divisible by formula_0 if and only if formula_0 is a prime number. For any given integer formula_48, the Kempner function of formula_48 is given by the smallest formula_0 for which formula_48 divides formula_1. For almost all numbers (all but a subset of exceptions with asymptotic density zero), it coincides with the largest prime factor of formula_48.
The product of two factorials, formula_49, always evenly divides ("m" + "n")!. There are infinitely many factorials that equal the product of other factorials: if formula_0 is itself any product of factorials, then formula_1 equals that same product multiplied by one more factorial, formula_50. The only known examples of factorials that are products of other factorials but are not of this "trivial" form are formula_51, formula_52, and formula_53. It would follow from the abc conjecture that there are only finitely many nontrivial examples.
The greatest common divisor of the values of a primitive polynomial of degree formula_54 over the integers evenly divides "d"!.
Continuous interpolation and non-integer generalization.
There are infinitely many ways to extend the factorials to a continuous function. The most widely used of these uses the gamma function, which can be defined for positive real numbers as the integral
formula_55
The resulting function is related to the factorial of a non-negative integer formula_0 by the equation
formula_56
which can be used as a definition of the factorial for non-integer arguments.
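In practice, this relationship is exposed by standard math libraries; for instance, the C library's tgamma function evaluates the gamma function, so the following sketch computes both an integer factorial and an interpolated non-integer value:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        /* tgamma computes the gamma function, so tgamma(n + 1) equals n!
           and also interpolates between the integer values. */
        printf("Gamma(5)   = %.1f\n", tgamma(5.0));   /* 4! = 24 */
        printf("Gamma(5.5) = %.4f\n", tgamma(5.5));   /* about 52.3428 */
        return 0;
    }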
At all values formula_48 for which both formula_57 and formula_58 are defined, the gamma function obeys the functional equation
formula_59
generalizing the recurrence relation for the factorials.
The same integral converges more generally for any complex number formula_60 whose real part is positive. It can be extended to the non-integer points in the rest of the complex plane by solving for Euler's reflection formula
formula_61
However, this formula cannot be used at integers because, for them, the formula_62 term would produce a division by zero. The result of this extension process is an analytic function, the analytic continuation of the integral formula for the gamma function. It has a nonzero value at all complex numbers, except for the non-positive integers where it has simple poles. Correspondingly, this provides a definition for the factorial at all complex numbers other than the negative integers.
One property of the gamma function, distinguishing it from other continuous interpolations of the factorials, is given by the Bohr–Mollerup theorem, which states that the gamma function (offset by one) is the only log-convex function on the positive real numbers that interpolates the factorials and obeys the same functional equation. A related uniqueness theorem of Helmut Wielandt states that the complex gamma function and its scalar multiples are the only holomorphic functions on the positive complex half-plane that obey the functional equation and remain bounded for complex numbers with real part between 1 and 2.
Other complex functions that interpolate the factorial values include Hadamard's gamma function, which is an entire function over all the complex numbers, including the non-positive integers. In the p-adic numbers, it is not possible to continuously interpolate the factorial function directly, because the factorials of large integers (a dense subset of the p-adics) converge to zero according to Legendre's formula, forcing any continuous function that is close to their values to be zero everywhere. Instead, the p-adic gamma function provides a continuous interpolation of a modified form of the factorial, omitting the factors in the factorial that are divisible by p.
The digamma function is the logarithmic derivative of the gamma function. Just as the gamma function provides a continuous interpolation of the factorials, offset by one, the digamma function provides a continuous interpolation of the harmonic numbers, offset by the Euler–Mascheroni constant.
Computation.
The factorial function is a common feature in scientific calculators. It is also included in scientific programming libraries such as the Python mathematical functions module and the Boost C++ library. If efficiency is not a concern, computing factorials is trivial: just successively multiply a variable initialized to formula_10 by the integers up to formula_0. The simplicity of this computation makes it a common example in the use of different computer programming styles and methods.
The computation of formula_1 can be expressed in pseudocode using iteration as
define factorial("n"):
"f" := 1
for "i" := 1, 2, 3, ..., "n":
"f" := "f" * "i"
return "f"
or using recursion based on its recurrence relation as
define factorial("n"):
if ("n" = 0) return 1
return "n" * factorial("n" − 1)
Other methods suitable for its computation include memoization, dynamic programming, and functional programming. The computational complexity of these algorithms may be analyzed using the unit-cost random-access machine model of computation, in which each arithmetic operation takes constant time and each number uses a constant amount of storage space. In this model, these methods can compute formula_1 in time formula_63, and the iterative version uses space formula_40. Unless optimized for tail recursion, the recursive version takes linear space to store its call stack. However, this model of computation is only suitable when formula_0 is small enough to allow formula_1 to fit into a machine word. The values 12! and 20! are the largest factorials that can be stored in, respectively, the 32-bit and 64-bit integers. Floating point can represent larger factorials, but approximately rather than exactly, and will still overflow for factorials larger than 170!.
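A direct C version of the iterative pseudocode above, using a 64-bit unsigned word, illustrates this limit (a sketch that is only valid up to the 20! bound just mentioned):

    #include <stdio.h>
    #include <stdint.h>

    /* Iterative factorial in a 64-bit word; the product overflows beyond n = 20. */
    uint64_t factorial64(unsigned n) {
        uint64_t f = 1;
        for (unsigned i = 2; i <= n; i++)
            f *= i;
        return f;
    }

    int main(void) {
        printf("20! = %llu\n", (unsigned long long) factorial64(20));  /* 2432902008176640000 */
        return 0;
    }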
The exact computation of larger factorials involves arbitrary-precision arithmetic, because of fast growth and integer overflow. Time of computation can be analyzed as a function of the number of digits or bits in the result. By Stirling's formula, formula_1 has formula_64 bits. The Schönhage–Strassen algorithm can produce a formula_65-bit product in time formula_66, and faster multiplication algorithms taking time formula_67 are known. However, computing the factorial involves repeated products, rather than a single multiplication, so these time bounds do not apply directly. In this setting, computing formula_1 by multiplying the numbers from 1 to formula_0 in sequence is inefficient, because it involves formula_0 multiplications, a constant fraction of which take time formula_68 each, giving total time formula_69. A better approach is to perform the multiplications as a divide-and-conquer algorithm that multiplies a sequence of formula_23 numbers by splitting it into two subsequences of formula_70 numbers, multiplies each subsequence, and combines the results with one last multiplication. This approach to the factorial takes total time formula_71: one logarithm comes from the number of bits in the factorial, a second comes from the multiplication algorithm, and a third comes from the divide and conquer.
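The splitting strategy can be sketched as follows; a real implementation would operate on arbitrary-precision integers, and the 64-bit type here only shows the recursion structure:

    #include <stdio.h>
    #include <stdint.h>

    /* Product of the integers lo..hi (inclusive) by balanced recursive splitting;
       n! is prod_range(1, n). The roughly equal-sized operands are what let fast
       multiplication algorithms pay off on huge factorials. */
    uint64_t prod_range(uint64_t lo, uint64_t hi) {
        if (lo > hi)  return 1;
        if (lo == hi) return lo;
        uint64_t mid = lo + (hi - lo) / 2;
        return prod_range(lo, mid) * prod_range(mid + 1, hi);
    }

    int main(void) {
        printf("20! = %llu\n", (unsigned long long) prod_range(1, 20));
        return 0;
    }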
Even better efficiency is obtained by computing "n"! from its prime factorization, based on the principle that exponentiation by squaring is faster than expanding an exponent into a product. An algorithm for this by Arnold Schönhage begins by finding the list of the primes up to formula_0, for instance using the sieve of Eratosthenes, and uses Legendre's formula to compute the exponent for each prime. Then it computes the product of the prime powers with these exponents, using a recursive algorithm, as follows: first, the primes whose exponents are odd are multiplied together by divide and conquer; second, the algorithm is applied recursively with every exponent halved (rounding down), and the result is squared; third, the two partial results are multiplied together.
The product of all primes up to formula_0 is an formula_63-bit number, by the prime number theorem, so the time for the first step is formula_68, with one logarithm coming from the divide and conquer and another coming from the multiplication algorithm. In the recursive calls to the algorithm, the prime number theorem can again be invoked to prove that the numbers of bits in the corresponding products decrease by a constant factor at each level of recursion, so the total time for these steps at all levels of recursion adds in a geometric series to formula_68. The time for the squaring in the second step and the multiplication in the third step are again formula_68, because each is a single multiplication of a number with formula_72 bits. Again, at each level of recursion the numbers involved have a constant fraction as many bits (because otherwise repeatedly squaring them would produce too large a final result) so again the amounts of time for these steps in the recursive calls add in a geometric series to formula_68. Consequentially, the whole algorithm takes time formula_68, proportional to a single multiplication with the same number of bits in its result.
Related sequences and functions.
Several other integer sequences are similar to or related to the factorials:
The alternating factorial is the absolute value of the alternating sum of the first formula_0 factorials, formula_73. These have mainly been studied in connection with their primality; only finitely many of them can be prime, but a complete list of primes of this form is not known.
The Bhargava factorials are a family of integer sequences defined by Manjul Bhargava with similar number-theoretic properties to the factorials, including the factorials themselves as a special case.
The product of all the odd integers up to some odd positive integer formula_0 is called the double factorial of formula_0, and denoted by "n"!!. That is, formula_74 For example, 9!! = 1 × 3 × 5 × 7 × 9 = 945. Double factorials are used in trigonometric integrals, in expressions for the gamma function at half-integers and the volumes of hyperspheres, and in counting binary trees and perfect matchings.
Just as triangular numbers sum the numbers from formula_10 to formula_0, and factorials take their product, the exponential factorial exponentiates. The exponential factorial is defined recursively as formula_75. For example, the exponential factorial of 4 is formula_76 These numbers grow much more quickly than regular factorials.
The notations formula_77 or formula_78 are sometimes used to represent the product of the formula_0 integers counting up to and including formula_48, equal to formula_79. This is also known as a falling factorial or backward factorial, and the formula_77 notation is a Pochhammer symbol. Falling factorials count the number of different sequences of formula_0 distinct items that can be drawn from a universe of formula_48 items. They occur as coefficients in the higher derivatives of polynomials, and in the factorial moments of random variables.
The hyperfactorial of formula_0 is the product formula_80. These numbers form the discriminants of Hermite polynomials. They can be continuously interpolated by the K-function, and obey analogues to Stirling's formula and Wilson's theorem.
The Jordan–Pólya numbers are the products of factorials, allowing repetitions. Every tree has a symmetry group whose number of symmetries is a Jordan–Pólya number, and every Jordan–Pólya number counts the symmetries of some tree.
The primorial formula_81 is the product of prime numbers less than or equal to formula_0; this construction gives them some similar divisibility properties to factorials, but unlike factorials they are squarefree. As with the factorial primes formula_25, researchers have studied primorial primes of the form formula_81 ± 1.
The subfactorial yields the number of derangements of a set of formula_0 objects. It is sometimes denoted formula_82, and equals the closest integer to formula_19.
The superfactorial of formula_0 is the product of the first formula_0 factorials. The superfactorials are continuously interpolated by the Barnes G-function.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n!"
},
{
"math_id": 2,
"text": "\n\\begin{align}\nn! &= n \\times (n-1) \\times (n-2) \\times (n-3) \\times \\cdots \\times 3 \\times 2 \\times 1 \\\\\n &= n\\times(n-1)!\\\\\n\\end{align}"
},
{
"math_id": 3,
"text": "5! = 5\\times 4! = 5 \\times 4 \\times 3 \\times 2 \\times 1 = 120. "
},
{
"math_id": 4,
"text": "\\vert\\!\\underline{\\,n}"
},
{
"math_id": 5,
"text": "n! = 1 \\cdot 2 \\cdot 3 \\cdots (n-2) \\cdot (n-1) \\cdot n."
},
{
"math_id": 6,
"text": "n! = \\prod_{i = 1}^n i."
},
{
"math_id": 7,
"text": " n! = n\\cdot (n-1)!."
},
{
"math_id": 8,
"text": "5! = 5\\cdot 4!=5\\cdot 24=120"
},
{
"math_id": 9,
"text": "0"
},
{
"math_id": 10,
"text": "1"
},
{
"math_id": 11,
"text": "0!=1"
},
{
"math_id": 12,
"text": "n=0"
},
{
"math_id": 13,
"text": "\\tbinom{n}{n} = \\tfrac{n!}{n!0!} = 1,"
},
{
"math_id": 14,
"text": "n=1"
},
{
"math_id": 15,
"text": "0! = \\Gamma(0+1) = 1"
},
{
"math_id": 16,
"text": "\\tbinom{n}{k}"
},
{
"math_id": 17,
"text": "k"
},
{
"math_id": 18,
"text": "\\binom{n}{k}=\\frac{n!}{k!(n-k)!}."
},
{
"math_id": 19,
"text": "n!/e"
},
{
"math_id": 20,
"text": "e^x=1+\\frac{x}{1}+\\frac{x^2}{2}+\\frac{x^3}{6}+\\cdots=\\sum_{i=0}^{\\infty}\\frac{x^i}{i!},"
},
{
"math_id": 21,
"text": "x^n"
},
{
"math_id": 22,
"text": "n_i"
},
{
"math_id": 23,
"text": "i"
},
{
"math_id": 24,
"text": "\\sum_{i=0}^{\\infty} \\frac{x^i n_i}{i!}."
},
{
"math_id": 25,
"text": "n!\\pm 1"
},
{
"math_id": 26,
"text": "n!+1"
},
{
"math_id": 27,
"text": "n!+2,n!+3,\\dots n!+n"
},
{
"math_id": 28,
"text": "[n,2n]"
},
{
"math_id": 29,
"text": "\\log_2 n!=n\\log_2n-O(n)"
},
{
"math_id": 30,
"text": "(n/e)^n"
},
{
"math_id": 31,
"text": "n^n"
},
{
"math_id": 32,
"text": "\\ln n! = \\sum_{x=1}^n \\ln x \\approx \\int_1^n\\ln x\\, dx=n\\ln n-n+1."
},
{
"math_id": 33,
"text": "+1"
},
{
"math_id": 34,
"text": "\\sqrt n"
},
{
"math_id": 35,
"text": "\\pi"
},
{
"math_id": 36,
"text": "n!\\sim\\sqrt{2\\pi n}\\left(\\frac{n}{e}\\right)^n\\,."
},
{
"math_id": 37,
"text": "\\sim"
},
{
"math_id": 38,
"text": "\nn! \\sim \\sqrt{2\\pi n}\\left(\\frac{n}{e}\\right)^n \\left(1 +\\frac{1}{12n}+\\frac{1}{288n^2} - \\frac{139}{51840n^3} -\\frac{571}{2488320n^4}+ \\cdots \\right)."
},
{
"math_id": 39,
"text": "\nn! \\sim \\sqrt{2\\pi n}\\left(\\frac{n}{e}\\right)^n \\exp\\left(\\frac{1}{12n} - \\frac{1}{360n^3} + \\frac{1}{1260n^5} -\\frac{1}{1680n^7}+ \\cdots \\right)."
},
{
"math_id": 40,
"text": "O(1)"
},
{
"math_id": 41,
"text": "\\log_2 n! = n\\log_2 n-(\\log_2 e)n + \\frac12\\log_2 n + O(1)."
},
{
"math_id": 42,
"text": "p"
},
{
"math_id": 43,
"text": "\\sum_{i=1}^\\infty \\left \\lfloor \\frac n {p^i} \\right \\rfloor=\\frac{n - s_p(n)}{p - 1}."
},
{
"math_id": 44,
"text": "s_p(n)"
},
{
"math_id": 45,
"text": "p=5"
},
{
"math_id": 46,
"text": "p=2"
},
{
"math_id": 47,
"text": "(n-1)!+1"
},
{
"math_id": 48,
"text": "x"
},
{
"math_id": 49,
"text": "m!\\cdot n!"
},
{
"math_id": 50,
"text": "(n-1)!"
},
{
"math_id": 51,
"text": "9!=7!\\cdot 3!\\cdot 3!\\cdot 2!"
},
{
"math_id": 52,
"text": "10!=7!\\cdot 6!=7!\\cdot 5!\\cdot 3!"
},
{
"math_id": 53,
"text": "16!=14!\\cdot 5!\\cdot 2!"
},
{
"math_id": 54,
"text": "d"
},
{
"math_id": 55,
"text": " \\Gamma(z) = \\int_0^\\infty x^{z-1} e^{-x}\\,dx."
},
{
"math_id": 56,
"text": " n!=\\Gamma(n+1),"
},
{
"math_id": 57,
"text": "\\Gamma(x)"
},
{
"math_id": 58,
"text": "\\Gamma(x-1)"
},
{
"math_id": 59,
"text": " \\Gamma(n)=(n-1)\\Gamma(n-1),"
},
{
"math_id": 60,
"text": "z"
},
{
"math_id": 61,
"text": "\\Gamma(z)\\Gamma(1-z)=\\frac{\\pi}{\\sin\\pi z}."
},
{
"math_id": 62,
"text": "\\sin\\pi z"
},
{
"math_id": 63,
"text": "O(n)"
},
{
"math_id": 64,
"text": "b = O(n\\log n)"
},
{
"math_id": 65,
"text": "b"
},
{
"math_id": 66,
"text": "O(b\\log b\\log\\log b)"
},
{
"math_id": 67,
"text": "O(b\\log b)"
},
{
"math_id": 68,
"text": "O(n\\log^2 n)"
},
{
"math_id": 69,
"text": "O(n^2\\log^2 n)"
},
{
"math_id": 70,
"text": "i/2"
},
{
"math_id": 71,
"text": "O(n\\log^3 n)"
},
{
"math_id": 72,
"text": "O(n\\log n)"
},
{
"math_id": 73,
"text": "\\sum_{i = 1}^n (-1)^{n - i}i!"
},
{
"math_id": 74,
"text": "(2k-1)!! = \\prod_{i=1}^k (2i-1) = \\frac{(2k)!}{2^k k!}."
},
{
"math_id": 75,
"text": "a_0 = 1,\\ a_n = n^{a_{n - 1}}"
},
{
"math_id": 76,
"text": "4^{3^{2^{1}}}=262144."
},
{
"math_id": 77,
"text": "(x)_{n}"
},
{
"math_id": 78,
"text": "x^{\\underline n}"
},
{
"math_id": 79,
"text": "x!/(x-n)!"
},
{
"math_id": 80,
"text": "1^1\\cdot 2^2\\cdots n^n"
},
{
"math_id": 81,
"text": "n\\#"
},
{
"math_id": 82,
"text": "!n"
}
] | https://en.wikipedia.org/wiki?curid=10606 |
1060624 | Newton's rings | Optical phenomenon
Newton's rings is a phenomenon in which an interference pattern is created by the reflection of light between two surfaces, typically a spherical surface and an adjacent touching flat surface. It is named after Isaac Newton, who investigated the effect in 1666. When viewed with monochromatic light, Newton's rings appear as a series of concentric, alternating bright and dark rings centered at the point of contact between the two surfaces. When viewed with white light, it forms a concentric ring pattern of rainbow colors because the different wavelengths of light interfere at different thicknesses of the air layer between the surfaces.
History.
The phenomenon was first described by Robert Hooke in his 1665 book "Micrographia". Its name derives from the mathematician and physicist Sir Isaac Newton, who studied the phenomenon in 1666 while sequestered at home in Lincolnshire in the time of the Great Plague that had shut down Trinity College, Cambridge. He recorded his observations in an essay entitled "Of Colours". The phenomenon became a source of dispute between Newton, who favored a corpuscular nature of light, and Hooke, who favored a wave-like nature of light. Newton did not publish his analysis until after Hooke's death, as part of his treatise "Opticks" published in 1704.
Theory.
The pattern is created by placing a very slightly convex curved glass on an optical flat glass. The two pieces of glass make contact only at the center. At other points there is a slight air gap between the two surfaces, increasing with radial distance from the center, as shown in Fig. 3.
Consider monochromatic (single color) light incident from the top that reflects from both the bottom surface of the top lens and the top surface of the optical flat below it. The light passes through the glass lens until it comes to the glass-to-air boundary, where the transmitted light goes from a higher refractive index ("n") value to a lower "n" value. The transmitted light passes through this boundary with no phase change. The reflected light undergoing internal reflection (about 4% of the total) also has no phase change. The light that is transmitted into the air travels a distance, "t", before it is reflected at the flat surface below. Reflection at this air-to-glass boundary causes a half-cycle (180°) phase shift because the air has a lower refractive index than the glass. The reflected light at the lower surface returns a distance of (again) "t" and passes back into the lens. The additional path length is equal to twice the gap between the surfaces. The two reflected rays will interfere according to the total phase change caused by the extra path length 2"t" and by the half-cycle phase change induced in reflection at the flat surface. When the distance 2"t" is zero (lens touching optical flat) the waves interfere destructively, hence the central region of the pattern is dark, as shown in Fig. 2.
A similar analysis for illumination of the device from below instead of from above shows that in this case the central portion of the pattern is bright, not dark, as shown in Fig. 1. When the light is not monochromatic, the radial position of the fringe pattern has a "rainbow" appearance, as shown in Fig. 5.
Constructive interference.
(Fig. 4a): In areas where the path length difference between the two rays is equal to an odd multiple of half a wavelength (λ/2) of the light waves, the reflected waves will be in phase, so the "troughs" and "peaks" of the waves coincide. Therefore, the waves will reinforce (add) and the resulting reflected light intensity will be greater. As a result, a bright area will be observed there.
Destructive interference.
(Fig. 4b): At other locations, where the path length difference is equal to an even multiple of a half-wavelength, the reflected waves will be 180° out of phase, so a "trough" of one wave coincides with a "peak" of the other wave. Therefore, the waves will cancel (subtract) and the resulting light intensity will be weaker or zero. As a result, a dark area will be observed there. Because of the 180° phase reversal due to reflection of the bottom ray, the center where the two pieces touch is dark.
This interference results in a pattern of bright and dark lines or bands called "interference fringes" being observed on the surface. These are similar to contour lines on maps, revealing differences in the thickness of the air gap. The gap between the surfaces is constant along a fringe. The path length difference between two adjacent bright or dark fringes is one wavelength "λ" of the light, so the difference in the gap between the surfaces is one-half wavelength. Since the wavelength of light is so small, this technique can measure very small departures from flatness. For example, the wavelength of red light is about 700 nm, so using red light the difference in height between two fringes is half that, or 350 nm, about <templatestyles src="Fraction/styles.css" />1⁄100 the diameter of a human hair. Since the gap between the glasses increases radially from the center, the interference fringes form concentric rings. For glass surfaces that are not axially symmetric, the fringes will not be rings but will have other shapes.
Quantitative relationships.
For illumination from above, with a dark center, the radius of the "N"th bright ring is given by
formula_0
where
"N" is the bright-ring number, "R" is the radius of curvature of the glass lens the light is passing through, and "λ" is the wavelength of the light. The above formula is also applicable for dark rings for the ring pattern obtained by transmitted light.
Given the radial distance of a bright ring, "r", and a radius of curvature of the lens, "R", the air gap between the glass surfaces, "t", is given to a good approximation by
formula_1
where the effect of viewing the pattern at an angle oblique to the incident rays is ignored.
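As a worked illustration (the wavelength and radius of curvature below are assumed example values, not taken from the text), the first few bright-ring radii and the corresponding air gaps can be computed in C as:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        double lambda = 700e-9;   /* assumed wavelength of red light, metres */
        double R = 1.0;           /* assumed radius of curvature of the lens, metres */
        for (int N = 1; N <= 5; N++) {
            double r = sqrt(lambda * R * (N - 0.5));  /* radius of the Nth bright ring */
            double t = r * r / (2.0 * R);             /* air gap at that radius */
            printf("N=%d  r = %.3f mm  t = %.0f nm\n", N, r * 1e3, t * 1e9);
        }
        return 0;
    }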
Thin-film interference.
The phenomenon of Newton's rings is explained on the same basis as thin-film interference, including effects such as "rainbows" seen in thin films of oil on water or in soap bubbles. The difference is that here the "thin film" is a thin layer of air. | [
{
"math_id": 0,
"text": " r_N= \\left[\\lambda R \\left(N - {1 \\over 2}\\right)\\right]^{1/2}, "
},
{
"math_id": 1,
"text": "t = {r^2 \\over 2R}, "
}
] | https://en.wikipedia.org/wiki?curid=1060624 |
1060721 | Basic Linear Algebra Subprograms | Routines for performing common linear algebra operations
Basic Linear Algebra Subprograms (BLAS) is a specification that prescribes a set of low-level routines for performing common linear algebra operations such as vector addition, scalar multiplication, dot products, linear combinations, and matrix multiplication. They are the "de facto" standard low-level routines for linear algebra libraries; the routines have bindings for both C ("CBLAS interface") and Fortran ("BLAS interface"). Although the BLAS specification is general, BLAS implementations are often optimized for speed on a particular machine, so using them can bring substantial performance benefits. BLAS implementations will take advantage of special floating point hardware such as vector registers or SIMD instructions.
It originated as a Fortran library in 1979 and its interface was standardized by the BLAS Technical (BLAST) Forum, whose latest BLAS report can be found on the netlib website. This Fortran library is known as the "reference implementation" (sometimes confusingly referred to as "the" BLAS library) and is not optimized for speed but is in the public domain.
Most libraries that offer linear algebra routines conform to the BLAS interface, allowing library users to develop programs that are indifferent to the BLAS library being used.
Many BLAS libraries have been developed, targeting various hardware platforms. Examples include cuBLAS (NVIDIA GPU, GPGPU), rocBLAS (AMD GPU), and OpenBLAS. Examples of CPU-based BLAS library branches include: OpenBLAS, BLIS (BLAS-like Library Instantiation Software), Arm Performance Libraries, ATLAS, and Intel Math Kernel Library (iMKL). AMD maintains a fork of BLIS that is optimized for the AMD platform. ATLAS is a portable library that automatically optimizes itself for an arbitrary architecture. iMKL is a freeware and proprietary vendor library optimized for x86 and x86-64 with a performance emphasis on Intel processors. OpenBLAS is an open-source library that is hand-optimized for many of the popular architectures. The LINPACK benchmarks rely heavily on the BLAS routine codice_0 for their performance measurements.
Many numerical software applications use BLAS-compatible libraries to do linear algebra computations, including LAPACK, LINPACK, Armadillo, GNU Octave, Mathematica, MATLAB, NumPy, R, Julia and Lisp-Stat.
Background.
With the advent of numerical programming, sophisticated subroutine libraries became useful. These libraries would contain subroutines for common high-level mathematical operations such as root finding, matrix inversion, and solving systems of equations. The language of choice was FORTRAN. The most prominent numerical programming library was IBM's Scientific Subroutine Package (SSP). These subroutine libraries allowed programmers to concentrate on their specific problems and avoid re-implementing well-known algorithms. The library routines would also be better than average implementations; matrix algorithms, for example, might use full pivoting to get better numerical accuracy. The library routines would also have more efficient routines. For example, a library may include a program to solve a matrix that is upper triangular. The libraries would include single-precision and double-precision versions of some algorithms.
Initially, these subroutines used hard-coded loops for their low-level operations. For example, if a subroutine needed to perform a matrix multiplication, then the subroutine would have three nested loops. Linear algebra programs have many common low-level operations (the so-called "kernel" operations, not related to operating systems). Between 1973 and 1977, several of these kernel operations were identified. These kernel operations became defined subroutines that math libraries could call. The kernel calls had advantages over hard-coded loops: the library routine would be more readable, there were fewer chances for bugs, and the kernel implementation could be optimized for speed. A specification for these kernel operations using scalars and vectors, the level-1 Basic Linear Algebra Subroutines (BLAS), was published in 1979. BLAS was used to implement the linear algebra subroutine library LINPACK.
The BLAS abstraction allows customization for high performance. For example, LINPACK is a general purpose library that can be used on many different machines without modification. LINPACK could use a generic version of BLAS. To gain performance, different machines might use tailored versions of BLAS. As computer architectures became more sophisticated, vector machines appeared. BLAS for a vector machine could use the machine's fast vector operations. (While vector processors eventually fell out of favor, vector instructions in modern CPUs are essential for optimal performance in BLAS routines.)
Other machine features became available and could also be exploited. Consequently, BLAS was augmented from 1984 to 1986 with level-2 kernel operations that concerned vector-matrix operations. Memory hierarchy was also recognized as something to exploit. Many computers have cache memory that is much faster than main memory; keeping matrix manipulations localized allows better usage of the cache. In 1987 and 1988, the level 3 BLAS were identified to do matrix-matrix operations. The level 3 BLAS encouraged block-partitioned algorithms. The LAPACK library uses level 3 BLAS.
The original BLAS concerned only densely stored vectors and matrices. Further extensions to BLAS, such as for sparse matrices, have been addressed.
Functionality.
BLAS functionality is categorized into three sets of routines called "levels", which correspond to both the chronological order of definition and publication, as well as the degree of the polynomial in the complexities of algorithms; Level 1 BLAS operations typically take linear time, "O"("n"), Level 2 operations quadratic time and Level 3 operations cubic time. Modern BLAS implementations typically provide all three levels.
Level 1.
This level consists of all the routines described in the original presentation of BLAS (1979), which defined only "vector operations" on strided arrays: dot products, vector norms, a generalized vector addition of the form
formula_0
(called "codice_1", "a x plus y") and several other operations.
Level 2.
This level contains "matrix-vector operations" including, among other things, a "ge"neralized "m"atrix-"v"ector multiplication (codice_2):
formula_1
as well as a solver for x in the linear equation
formula_2
with T being triangular. Design of the Level 2 BLAS started in 1984, with results published in 1988. The Level 2 subroutines are especially intended to improve performance of programs using BLAS on vector processors, where Level 1 BLAS are suboptimal "because they hide the matrix-vector nature of the operations from the compiler."
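A Level 2 call has the same flavor; a minimal CBLAS sketch of codice_2 for a row-major 2×3 matrix (again assuming a CBLAS header and a linked BLAS library):

    #include <stdio.h>
    #include <cblas.h>

    int main(void) {
        double A[6] = {1, 2, 3,
                       4, 5, 6};          /* 2x3 matrix, row-major */
        double x[3] = {1, 1, 1};
        double y[2] = {0, 0};

        /* y <- 1.0*A*x + 0.0*y */
        cblas_dgemv(CblasRowMajor, CblasNoTrans, 2, 3,
                    1.0, A, 3, x, 1, 0.0, y, 1);

        printf("%g %g\n", y[0], y[1]);    /* prints 6 15 */
        return 0;
    }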
Level 3.
This level, formally published in 1990, contains "matrix-matrix operations", including a "general matrix multiplication" (codice_0), of the form
formula_3
where A and B can optionally be transposed or hermitian-conjugated inside the routine, and all three matrices may be strided. The ordinary matrix multiplication A B can be performed by setting "α" to one and C to an all-zeros matrix of the appropriate size.
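A minimal CBLAS sketch of a codice_0 call on 2×2 row-major matrices (assuming a CBLAS header and a linked BLAS implementation):

    #include <stdio.h>
    #include <cblas.h>

    int main(void) {
        double A[4] = {1, 2,
                       3, 4};
        double B[4] = {5, 6,
                       7, 8};
        double C[4] = {0, 0,
                       0, 0};

        /* C <- 1.0*A*B + 0.0*C */
        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    2, 2, 2,        /* M, N, K        */
                    1.0, A, 2,      /* alpha, A, lda  */
                    B, 2,           /* B, ldb         */
                    0.0, C, 2);     /* beta, C, ldc   */

        printf("%g %g\n%g %g\n", C[0], C[1], C[2], C[3]);   /* 19 22 / 43 50 */
        return 0;
    }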
Also included in Level 3 are routines for computing
formula_4
where T is a triangular matrix, among other functionality.
Due to the ubiquity of matrix multiplications in many scientific applications, including for the implementation of the rest of Level 3 BLAS, and because faster algorithms exist beyond the obvious repetition of matrix-vector multiplication, codice_0 is a prime target of optimization for BLAS implementers. E.g., by decomposing one or both of A, B into block matrices, codice_0 can be implemented recursively. This is one of the motivations for including the "β" parameter, so the results of previous blocks can be accumulated. Note that this decomposition requires the special case "β" = 1, which many implementations optimize for, thereby eliminating one multiplication for each value of C. This decomposition allows for better locality of reference both in space and time of the data used in the product. This, in turn, takes advantage of the cache on the system. For systems with more than one level of cache, the blocking can be applied a second time to the order in which the blocks are used in the computation. Both of these levels of optimization are used in implementations such as ATLAS. More recently, implementations by Kazushige Goto have shown that blocking only for the L2 cache, combined with careful amortizing of copying to contiguous memory to reduce TLB misses, is superior to ATLAS. A highly tuned implementation based on these ideas is part of the GotoBLAS, OpenBLAS and BLIS.
A common variation of codice_0 for complex matrices, often provided as "gemm3m", calculates a complex product using "three real matrix multiplications and five real matrix additions instead of the conventional four real matrix multiplications and two real matrix additions", an algorithm similar to the Strassen algorithm first described by Peter Ungar.
SGI's Scientific Computing Software Library contains BLAS and LAPACK implementations for SGI's Irix workstations.
Sparse BLAS.
Several extensions to BLAS for handling sparse matrices have been suggested over the course of the library's history; a small set of sparse matrix kernel routines was finally standardized in 2002.
Batched BLAS.
The traditional BLAS functions have also been ported to architectures that support large amounts of parallelism such as GPUs. Here, the traditional BLAS functions typically provide good performance for large matrices. However, when computing, e.g., matrix-matrix products of many small matrices by using the GEMM routine, those architectures show significant performance losses. To address this issue, a batched version of the BLAS functions was specified in 2017.
Taking the GEMM routine from above as an example, the batched version performs the following computation simultaneously for many matrices:
formula_5
The index formula_6 in square brackets indicates that the operation is performed for all matrices formula_6 in a stack. Often, this operation is implemented for a strided batched memory layout where all matrices are stored concatenated in the arrays formula_7, formula_8 and formula_9.
Batched BLAS functions can be a versatile tool and allow e.g. a fast implementation of exponential integrators and Magnus integrators that handle long integration periods with many time steps. Here, the matrix exponentiation, the computationally expensive part of the integration, can be implemented in parallel for all time-steps by using Batched BLAS functions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\boldsymbol{y} \\leftarrow \\alpha \\boldsymbol{x} + \\boldsymbol{y}"
},
{
"math_id": 1,
"text": "\\boldsymbol{y} \\leftarrow \\alpha \\boldsymbol{A} \\boldsymbol{x} + \\beta \\boldsymbol{y}"
},
{
"math_id": 2,
"text": "\\boldsymbol{T} \\boldsymbol{x} = \\boldsymbol{y}"
},
{
"math_id": 3,
"text": "\\boldsymbol{C} \\leftarrow \\alpha \\boldsymbol{A} \\boldsymbol{B} + \\beta \\boldsymbol{C},"
},
{
"math_id": 4,
"text": "\\boldsymbol{B} \\leftarrow \\alpha \\boldsymbol{T}^{-1} \\boldsymbol{B},"
},
{
"math_id": 5,
"text": "\\boldsymbol{C}[k] \\leftarrow \\alpha \\boldsymbol{A}[k] \\boldsymbol{B}[k] + \\beta \\boldsymbol{C}[k] \\quad \\forall k "
},
{
"math_id": 6,
"text": "k"
},
{
"math_id": 7,
"text": "A"
},
{
"math_id": 8,
"text": "B"
},
{
"math_id": 9,
"text": "C"
}
] | https://en.wikipedia.org/wiki?curid=1060721 |
1060825 | Strömgren sphere | In theoretical astrophysics, there can be a sphere of ionized hydrogen (H II) around a young star of the spectral classes O or B. The theory was derived by Bengt Strömgren in 1937 and later named Strömgren sphere after him. The Rosette Nebula is the most prominent example of this type of emission nebula from the H II-regions.
The physics.
Very hot stars of the spectral class O or B emit very energetic radiation, especially ultraviolet radiation, which is able to ionize the neutral hydrogen (H I) of the surrounding interstellar medium, so that hydrogen atoms lose their single electrons. This state of hydrogen is called H II. After a while, free electrons recombine with those hydrogen ions. Energy is re-emitted, not as a single photon, but rather as a series of photons of lesser energy. The photons lose energy as they travel outward from the star's surface, and are not energetic enough to again contribute to ionization. Otherwise, the entire interstellar medium would be ionized. A Strömgren sphere is the theoretical construct which describes the ionized regions.
The model.
In its first and simplest form, derived by the Danish astrophysicist Bengt Strömgren in 1939, the model examines the effects of the electromagnetic radiation of a single star (or a tight cluster of similar stars) of a given surface temperature and luminosity on the surrounding interstellar medium of a given density. To simplify calculations, the interstellar medium is taken to be homogeneous and consisting entirely of hydrogen.
The formula derived by Strömgren describes the relationship between the luminosity and temperature of the exciting star on the one hand, and the density of the surrounding hydrogen gas on the other. Using it, the size of the idealized ionized region can be calculated as the "Strömgren radius". Strömgren's model also shows that there is a very sharp cut-off of the degree of ionization at the edge of the Strömgren sphere. This is caused by the fact that the transition region between gas that is highly ionized and neutral hydrogen is very narrow, compared to the overall size of the Strömgren sphere.
The above-mentioned relationships are as follows:
* The hotter and more luminous the exciting star, the larger the Strömgren sphere.
* The denser the surrounding hydrogen gas, the smaller the Strömgren sphere.
In Strömgren's model, the sphere now named Strömgren's sphere is made almost exclusively of free protons and electrons. A very small number of hydrogen atoms appears at a density that increases nearly exponentially toward the surface. Outside the sphere, radiation at the atoms' frequencies cools the gas strongly, so that it appears as a thin region in which the radiation emitted by the star is strongly absorbed by the atoms, which lose their energy by radiation in all directions. Thus a Strömgren system appears as a bright star surrounded by a less-emitting and difficult-to-observe globe.
Strömgren did not know Einstein's theory of optical coherence. The density of excited hydrogen is low, but the paths may be long, so that the hypothesis of a super-radiance and other effects observed using lasers must be tested. A supposed super-radiant Strömgren's shell emits space-coherent, time-incoherent beams in the direction for which the path in excited hydrogen is maximal, that is, tangential to the sphere.
In Strömgren's explanations, the shell absorbs only the resonant lines of hydrogen, so that the available energy is low. Assuming that the star is a supernova, the radiance of the light it emits corresponds (by Planck's law) to a temperature of several hundreds of kelvins, so that several frequencies may combine to produce the resonance frequencies of hydrogen atoms. Thus, almost all light emitted by the star is absorbed, and almost all energy radiated by the star amplifies the tangent, super-radiant rays.
The Necklace Nebula is a Strömgren sphere. It shows a dotted circle which gives its name.
In supernova remnant 1987A, the Strömgren shell is strangulated into an hourglass whose limbs are like three pearl necklaces.
Both Strömgren's original model and the one modified by McCullough do not take into account the effects of dust, clumpiness, detailed radiative transfer, or dynamical effects.
The history.
In 1938 the American astronomers Otto Struve and Chris T. Elvey published their observations of emission nebulae in the constellations Cygnus and Cepheus, most of which are not concentrated toward individual bright stars (in contrast to planetary nebulae). They suggested the UV radiation of the O- and B-stars to be the required energy source.
In 1939 Bengt Strömgren took up the problem of the ionization and excitation of the interstellar hydrogen. This is the paper identified with the concept of the Strömgren sphere. It draws, however, on his earlier similar efforts published in 1937.
In 2000 Peter R. McCullough published a modified model allowing for an evacuated, spherical cavity either centered on the star or with the star displaced with respect to the evacuated cavity. Such cavities might be created by stellar winds and supernovae. The resulting images more closely resemble many actual H II-regions than the original model.
Mathematical basis.
Let's suppose the region is exactly spherical, fully ionized (x=1), and composed only of hydrogen, so that the numerical density of protons equals the density of electrons (formula_0). Then the Strömgren radius will be the region where the recombination rate equals the ionization rate. We will consider the recombination rate formula_1 of all energy levels, which is
formula_2
formula_3 is the recombination rate of the n-th energy level. The reason we have excluded n=1 is that if an electron recombines directly to the ground level, the hydrogen atom releases another photon that is itself capable of ionizing hydrogen from the ground level, so such recombinations do not reduce the net ionization. Since the electric dipole mechanism always ionizes from the ground level, excluding n=1 accounts for these ionizing recombination photons. Now, the recombination rate of a particular energy level formula_3 is (with formula_4):
formula_5
where formula_6 is the recombination coefficient of the "n"th energy level in a unitary volume at a temperature formula_7, which is the temperature of the electrons in kelvins and is usually the same as the sphere. So after doing the sum, we arrive at
formula_8
where formula_9 is the total recombination rate and has an approximate value of
formula_10
Using formula_11 as the number of nucleons (in this case, protons), we can introduce the degree of ionization formula_12 so formula_13, and the numerical density of neutral hydrogen is formula_14. With a cross section formula_15 (which has units of area) and the number of ionizing photons per area per second formula_16, the ionization rate formula_17 is
formula_18
For simplicity we will consider only the geometric effects on formula_16 as we get further from the ionizing source (a source of flux formula_19), so we have an inverse square law:
formula_20
We are now in position to calculate the Stromgren radius formula_21 from the balance between the recombination and ionization
formula_22
and finally, remembering that the region is considered as fully ionized ("x" = 1):
formula_23
This is the radius of a region ionized by a type O-B star.
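Plugging assumed order-of-magnitude values into this expression (the numbers below are illustrative and not taken from the text) gives a radius of a few parsecs, typical of an H II region; a short C sketch:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
        const double PI = 3.14159265358979323846;
        double S  = 1e49;    /* assumed ionizing-photon rate of the star, photons/s */
        double n  = 1e8;     /* assumed hydrogen number density, m^-3 */
        double Te = 1e4;     /* assumed electron temperature, K */

        double beta2 = 2e-16 * pow(Te, -0.75);                    /* recombination coefficient, m^3/s */
        double Rs = cbrt(3.0 * S / (4.0 * PI * n * n * beta2));   /* Stromgren radius, m */

        printf("R_S = %.2e m  (about %.1f pc)\n", Rs, Rs / 3.086e16);
        return 0;
    }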
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n_e = n_p"
},
{
"math_id": 1,
"text": "N_R"
},
{
"math_id": 2,
"text": "N_R = \\sum_{n=2}^{\\infty}N_n, "
},
{
"math_id": 3,
"text": "N_n"
},
{
"math_id": 4,
"text": "n_e=n_p"
},
{
"math_id": 5,
"text": "N_n=n_e n_p \\beta_{n}(T_e)=n_e^2 \\beta_{n}(T_e),"
},
{
"math_id": 6,
"text": "\\beta_{n}(T_e)"
},
{
"math_id": 7,
"text": "T_e"
},
{
"math_id": 8,
"text": "N_R=n_e^2 \\beta_2(T_e),"
},
{
"math_id": 9,
"text": "\\beta_2(T_e)"
},
{
"math_id": 10,
"text": "\\beta_2(T_e) \\approx 2 \\times 10^{-16} T_e^{-3/4} \\ \\mathrm{[m^{3} s^{-1}]}."
},
{
"math_id": 11,
"text": "n"
},
{
"math_id": 12,
"text": "0\\leq x \\leq1 "
},
{
"math_id": 13,
"text": "n_e=xn"
},
{
"math_id": 14,
"text": "n_h=(1-x)n"
},
{
"math_id": 15,
"text": "\\alpha_0"
},
{
"math_id": 16,
"text": "J"
},
{
"math_id": 17,
"text": "N_I"
},
{
"math_id": 18,
"text": "N_I=\\alpha_0 n_h J."
},
{
"math_id": 19,
"text": "S_*"
},
{
"math_id": 20,
"text": "\\alpha_0 n_h J(r)=\\frac{3 S_*}{4 \\pi r^3}."
},
{
"math_id": 21,
"text": "R_S"
},
{
"math_id": 22,
"text": "\\frac{4 \\pi}{3} (nx)^2 \\beta_2 R_S^3 = S_*"
},
{
"math_id": 23,
"text": "R_S=\\left( \\frac{3}{4 \\pi} \\frac{S_*}{n^2 \\beta_2} \\right)^{\\frac{1}{3}}."
}
] | https://en.wikipedia.org/wiki?curid=1060825 |
1060920 | Specific weight | Weight per unit volume of a material
The specific weight, also known as the unit weight (symbol "γ", the Greek letter gamma), is a volume-specific quantity defined as the weight per unit volume of a material.
A commonly used reference value is the specific weight of water on Earth at 4 °C, which is approximately 9.81 kN/m3.
Definition.
The specific weight, γ, of a material is defined as the product of its density, ρ, and the standard gravity, g:
formula_0
The density of the material is defined as mass per unit volume, typically measured in kg/m3. The standard gravity is the acceleration due to gravity, usually given in m/s2, and on Earth usually taken as 9.81 m/s2.
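As a minimal worked example (a sketch in Python; the rounded textbook values below are assumptions for illustration):
rho_water = 1000.0   # density of water [kg/m^3], approximately, near 4 degrees C
g = 9.81             # standard gravity [m/s^2]
gamma = rho_water * g
print(gamma)         # about 9810 N/m^3, i.e. roughly 9.81 kN/m^3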
Unlike density, specific weight is not a fixed property of a material. It depends on the value of the gravitational acceleration, which varies with location. Pressure may also affect values, depending upon the bulk modulus of the material, but generally, at moderate pressures, has a less significant effect than the other factors.
Applications.
Fluid mechanics.
In fluid mechanics, specific weight represents the force exerted by gravity on a unit volume of a fluid. For this reason, units are expressed as force per unit volume (e.g., N/m3 or lbf/ft3). Specific weight can be used as a characteristic property of a fluid.
Soil mechanics.
Specific weight is often used as a property of soil to solve earthwork problems.
In soil mechanics, specific weight may refer to:
<templatestyles src="Glossary/styles.css" />
Civil and mechanical engineering.
Specific weight can be used in civil engineering and mechanical engineering to determine the weight of a structure designed to carry certain loads while remaining intact and remaining within limits regarding deformation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma = \\rho \\, g"
}
] | https://en.wikipedia.org/wiki?curid=1060920 |
10609701 | Quasi-polynomial | Generalization of polynomials
In mathematics, a quasi-polynomial (pseudo-polynomial) is a generalization of polynomials. While the coefficients of a polynomial come from a ring, the coefficients of quasi-polynomials are instead periodic functions with integral period. Quasi-polynomials appear throughout much of combinatorics as the enumerators for various objects.
A quasi-polynomial can be written as formula_0, where formula_1 is a periodic function with integral period. If formula_2 is not identically zero, then the degree of formula_3 is formula_4. Equivalently, a function formula_5 is a quasi-polynomial if there exist polynomials formula_6 such that formula_7 when formula_8. The polynomials formula_9 are called the constituents of formula_10.
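For instance, the counting function "f"("k") giving the number of integers in the interval [0, "k"/2], for positive integers "k", is a quasi-polynomial of degree 1 with period 2: its constituents are p_0("k") = "k"/2 + 1 (for even "k") and p_1("k") = ("k" + 1)/2 (for odd "k"). A minimal Python check of this claim (an illustration added here, not from the article):
# f(k) = number of integers in [0, k/2] = floor(k/2) + 1
def f(k):
    return k // 2 + 1
# constituents p_0 (even k) and p_1 (odd k) of the quasi-polynomial
def constituent(k):
    return k / 2 + 1 if k % 2 == 0 else (k + 1) / 2
assert all(f(k) == constituent(k) for k in range(1, 1000))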
An important example is the Ehrhart quasi-polynomial: given a formula_4-dimensional polytope formula_11 with rational vertices formula_12, define formula_13 to be the convex hull of formula_14. The lattice-point counting function formula_15 is then a quasi-polynomial in formula_16 of degree formula_4; in particular, formula_17 is a function formula_18.
Another example is the convolution of two quasi-polynomials formula_19 and formula_20, defined by
formula_21
which is a quasi-polynomial with degree formula_22 | [
{
"math_id": 0,
"text": "q(k) = c_d(k) k^d + c_{d-1}(k) k^{d-1} + \\cdots + c_0(k)"
},
{
"math_id": 1,
"text": "c_i(k)"
},
{
"math_id": 2,
"text": "c_d(k)"
},
{
"math_id": 3,
"text": "q"
},
{
"math_id": 4,
"text": "d"
},
{
"math_id": 5,
"text": "f \\colon \\mathbb{N} \\to \\mathbb{N}"
},
{
"math_id": 6,
"text": "p_0, \\dots, p_{s-1}"
},
{
"math_id": 7,
"text": "f(n) = p_i(n)"
},
{
"math_id": 8,
"text": "i \\equiv n \\bmod s"
},
{
"math_id": 9,
"text": "p_i"
},
{
"math_id": 10,
"text": "f"
},
{
"math_id": 11,
"text": "P"
},
{
"math_id": 12,
"text": "v_1,\\dots,v_n"
},
{
"math_id": 13,
"text": "tP"
},
{
"math_id": 14,
"text": "tv_1,\\dots,tv_n"
},
{
"math_id": 15,
"text": "L(P,t) = \\#(tP \\cap \\mathbb{Z}^d)"
},
{
"math_id": 16,
"text": "t"
},
{
"math_id": 17,
"text": "L(P,t)"
},
{
"math_id": 18,
"text": "\\mathbb{N} \\to \\mathbb{N}"
},
{
"math_id": 19,
"text": "F"
},
{
"math_id": 20,
"text": "G"
},
{
"math_id": 21,
"text": "(F*G)(k) = \\sum_{m=0}^k F(m)G(k-m)"
},
{
"math_id": 22,
"text": "\\le \\deg F + \\deg G + 1."
}
] | https://en.wikipedia.org/wiki?curid=10609701 |
10610469 | Theory of tides | Scientific interpretation of tidal forces
The theory of tides is the application of continuum mechanics to interpret and predict the tidal deformations of planetary and satellite bodies and their atmospheres and oceans (especially Earth's oceans) under the gravitational loading of another astronomical body or bodies (especially the Moon and Sun).
History.
Australian Aboriginal astronomy.
The Yolngu people of northeastern Arnhem Land in the Northern Territory of Australia identified a link between the Moon and the tides, which they mythically attributed to the Moon filling with water and emptying out again.
Classical era.
The tides received relatively little attention in the civilizations around the Mediterranean Sea, as the tides there are relatively small, and the areas that experience tides do so unreliably. A number of theories were advanced, however, from comparing the movements to breathing or blood flow to theories involving whirlpools or river cycles. A similar "breathing earth" idea was considered by some Asian thinkers. Plato reportedly believed that the tides were caused by water flowing in and out of undersea caverns. Crates of Mallus attributed the tides to "the counter-movement (ἀντισπασμός) of the sea” and Apollodorus of Corcyra to "the refluxes from the Ocean". An ancient Indian Purana text dated to 400-300 BC refers to the ocean rising and falling because of heat expansion from the light of the Moon.
Ultimately the link between the Moon (and Sun) and the tides became known to the Greeks, although the exact date of discovery is unclear; references to it are made in sources such as Pytheas of Massilia in 325 BC and Pliny the Elder's "Natural History" in 77 AD. Although the schedule of the tides and the link to lunar and solar movements was known, the exact mechanism that connected them was unclear. The classicist Thomas Little Heath claimed that both Pytheas and Posidonius connected the tides with the Moon, "the former directly, the latter through the setting up of winds". Seneca mentions in "De Providentia" the periodic motion of the tides controlled by the lunar sphere. Eratosthenes (3rd century BC) and Posidonius (1st century BC) both produced detailed descriptions of the tides and their relationship to the phases of the Moon, Posidonius in particular making lengthy observations of the sea on the Spanish coast, although little of their work survived. The influence of the Moon on tides was mentioned in Ptolemy's "Tetrabiblos" as evidence of the reality of astrology. Seleucus of Seleucia is thought to have theorized around 150 BC that tides were caused by the Moon as part of his heliocentric model.
Aristotle, judging from discussions of his beliefs in other sources, is thought to have believed the tides were caused by winds driven by the Sun's heat, and he rejected the theory that the Moon caused the tides. An apocryphal legend claims that he committed suicide in frustration with his failure to fully understand the tides. Heraclides also held "the sun sets up winds, and that these winds, when they blow, cause the high tide and, when they cease, the low tide". Dicaearchus also "put the tides down to the direct action of the sun according to its position". Philostratus discusses tides in Book Five of "Life of Apollonius of Tyana" (circa 217-238 AD); he was vaguely aware of a correlation of the tides with the phases of the Moon but attributed them to spirits moving water in and out of caverns, which he connected with the legend that spirits of the dead cannot move on at certain phases of the Moon.
Medieval period.
The Venerable Bede discusses the tides in "The Reckoning of Time" and shows that the twice-daily timing of tides is related to the Moon and that the lunar monthly cycle of spring and neap tides is also related to the Moon's position. He goes on to note that the times of tides vary along the same coast and that the water movements cause low tide at one place when there is high tide elsewhere. However, he made no progress regarding the question of how exactly the Moon created the tides.
Medieval rule-of-thumb methods for predicting tides were said to allow one "to know what Moon makes high water" from the Moon's movements. Dante references the Moon's influence on the tides in his "Divine Comedy".
Medieval European understanding of the tides was often based on works of Muslim astronomers that became available through Latin translation starting from the 12th century. Abu Ma'shar al-Balkhi, in his "Introductorium in astronomiam", taught that ebb and flood tides were caused by the Moon. Abu Ma'shar discussed the effects of wind and Moon's phases relative to the Sun on the tides. In the 12th century, al-Bitruji contributed the notion that the tides were caused by the general circulation of the heavens. Medieval Arabic astrologers frequently referenced the Moon's influence on the tides as evidence for the reality of astrology; some of their treatises on the topic influenced western Europe. Some theorized that the influence was caused by lunar rays heating the ocean's floor.
Modern era.
Simon Stevin, in his 1608 "De spiegheling der Ebbenvloet" ("The Theory of Ebb and Flood"), dismisses a large number of misconceptions that still existed about ebb and flood. Stevin pleads for the idea that the attraction of the Moon was responsible for the tides and writes in clear terms about ebb, flood, spring tide and neap tide, stressing that further research was needed. In 1609, Johannes Kepler correctly suggested that the gravitation of the Moon causes the tides, which he compared to magnetic attraction, basing his argument upon ancient observations and correlations.
In 1616, Galileo Galilei wrote "Discourse on the Tides." He strongly and mockingly rejects the lunar theory of the tides, and tries to explain the tides as the result of the Earth's rotation and revolution around the Sun, believing that the oceans moved like water in a large basin: as the basin moves, so does the water. Therefore, as the Earth revolves, the force of the Earth's rotation causes the oceans to "alternately accelerate and retardate". His view on the oscillation and "alternately accelerated and retardated" motion of the Earth's rotation is a "dynamic process" that deviated from the previous dogma, which proposed "a process of expansion and contraction of seawater." However, Galileo's theory was erroneous. In subsequent centuries, further analysis led to the current tidal physics. Galileo tried to use his tidal theory to prove the movement of the Earth around the Sun. Galileo theorized that because of the Earth's motion, borders of the oceans like the Atlantic and Pacific would show one high tide and one low tide per day. The Mediterranean Sea had two high tides and low tides, though Galileo argued that this was a product of secondary effects and that his theory would hold in the Atlantic. However, Galileo's contemporaries noted that the Atlantic also had two high tides and low tides per day, which led to Galileo omitting this claim from his 1632 "Dialogue".
René Descartes theorized that the tides (alongside the movement of planets, etc.) were caused by aetheric vortices, without reference to Kepler's theories of gravitation by mutual attraction; this was extremely influential, with numerous followers of Descartes expounding on this theory throughout the 17th century, particularly in France. However, Descartes and his followers acknowledged the influence of the Moon, speculating that pressure waves from the Moon via the aether were responsible for the correlation.
Newton, in the "Principia", provides a correct explanation for the tidal force, which can be used to explain tides on a planet covered by a uniform ocean but which takes no account of the distribution of the continents or ocean bathymetry.
Dynamic theory.
While Newton explained the tides by describing the tide-generating forces and Daniel Bernoulli gave a description of the static reaction of the waters on Earth to the tidal potential, the "dynamic theory of tides", developed by Pierre-Simon Laplace in 1775, describes the ocean's real reaction to tidal forces. Laplace's theory of ocean tides takes into account friction, resonance and natural periods of ocean basins. It predicts the large amphidromic systems in the world's ocean basins and explains the oceanic tides that are actually observed.
The equilibrium theory, based on the gravitational gradient from the Sun and Moon but ignoring the Earth's rotation, the effects of continents, and other important effects, could not explain the real ocean tides. Since measurements have confirmed the dynamic theory, many phenomena now have possible explanations, such as how the tides interact with deep-sea ridges, and how chains of seamounts give rise to deep eddies that transport nutrients from the deep to the surface. The equilibrium theory predicts a tide-wave height of less than half a meter, while the dynamic theory explains why tides can reach up to 15 meters.
Satellite observations confirm the accuracy of the dynamic theory, and the tides worldwide are now measured to within a few centimeters. Measurements from the CHAMP satellite closely match the models based on the TOPEX data. Accurate models of tides worldwide are essential for research since the variations due to tides must be removed from measurements when calculating gravity and changes in sea levels.
Laplace's tidal equations.
In 1776, Laplace formulated a single set of linear partial differential equations for tidal flow described as a barotropic two-dimensional sheet flow. Coriolis effects are introduced as well as lateral forcing by gravity. Laplace obtained these equations by simplifying the fluid dynamics equations, but they can also be derived from energy integrals via Lagrange's equation.
For a fluid sheet of average thickness "D", the vertical tidal elevation "ζ", as well as the horizontal velocity components "u" and "v" (in the latitude "φ" and longitude "λ" directions, respectively) satisfy Laplace's tidal equations:
formula_0
where "Ω" is the angular frequency of the planet's rotation, "g" is the planet's gravitational acceleration at the mean ocean surface, "a" is the planetary radius, and "U" is the external gravitational tidal-forcing potential.
William Thomson (Lord Kelvin) rewrote Laplace's momentum terms using the curl to find an equation for vorticity. Under certain conditions this can be further rewritten as a conservation of vorticity.
Tidal analysis and prediction.
Harmonic analysis.
Laplace's improvements in theory were substantial, but they still left prediction in an approximate state. This position changed in the 1860s when the local circumstances of tidal phenomena were more fully brought into account by William Thomson's application of Fourier analysis to the tidal motions as harmonic analysis. Thomson's work in this field was further developed and extended by George Darwin, applying the lunar theory current in his time. Darwin's symbols for the tidal harmonic constituents are still used.
Darwin's harmonic developments of the tide-generating forces were later improved when A.T. Doodson, applying the lunar theory of E.W. Brown, developed the tide-generating potential (TGP) in harmonic form, distinguishing 388 tidal frequencies. Doodson's work was carried out and published in 1921. Doodson devised a practical system for specifying the different harmonic components of the tide-generating potential, the Doodson numbers, a system still in use.
Since the mid-twentieth century, further analysis has generated many more terms than Doodson's 388. About 62 constituents are of sufficient size to be considered for possible use in marine tide prediction, but sometimes many fewer can predict tides to useful accuracy. The calculations of tide predictions using the harmonic constituents are laborious, and from the 1870s to about the 1960s they were carried out using a mechanical tide-predicting machine, a special-purpose form of analog computer. More recently, digital computers using the method of matrix inversion have been used to determine the tidal harmonic constituents directly from tide gauge records, as in the sketch below.
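A minimal sketch of that idea in Python (the constituent periods, amplitudes and the synthetic record below are illustrative assumptions, not a real tide-gauge dataset): each constituent contributes a cosine and a sine column to a design matrix, and the constituent amplitudes and phases follow from an ordinary least-squares solve.
import numpy as np
# Illustrative constituents: M2 (period about 12.42 h) and S2 (12 h)
periods = {"M2": 12.4206012, "S2": 12.0}            # hours (assumed values)
omegas = {k: 2 * np.pi / p for k, p in periods.items()}
t = np.arange(0.0, 30 * 24.0, 1.0)                  # 30 days of hourly samples
rng = np.random.default_rng(0)
h = (1.2 * np.cos(omegas["M2"] * t - 0.5)           # synthetic "record" with known constituents
     + 0.4 * np.cos(omegas["S2"] * t - 1.0)
     + 0.05 * rng.standard_normal(t.size))
cols = [np.ones_like(t)]                            # mean sea level column
for w in omegas.values():
    cols += [np.cos(w * t), np.sin(w * t)]          # one cosine and one sine column per constituent
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, h, rcond=None)        # the least-squares "matrix inversion" step
for i, name in enumerate(omegas):
    c, s = coef[1 + 2 * i], coef[2 + 2 * i]
    print(name, np.hypot(c, s), np.arctan2(s, c))   # recovers the amplitude and phase of each constituent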
Tidal constituents.
Tidal constituents combine to give an endlessly varying aggregate because of their different and incommensurable frequencies: the effect is visualized in an animation of the American Mathematical Society illustrating the way in which the components used to be mechanically combined in the tide-predicting machine. Amplitudes (half of peak-to-peak amplitude) of tidal constituents are given below for six example locations:
Eastport, Maine (ME), Biloxi, Mississippi (MS), San Juan, Puerto Rico (PR), Kodiak, Alaska (AK), San Francisco, California (CA), and Hilo, Hawaii (HI).
Doodson numbers.
In order to specify the different harmonic components of the tide-generating potential, Doodson devised a practical system which is still in use, involving what are called the Doodson numbers based on the six "Doodson arguments" or Doodson variables. The number of different tidal frequency components is large, but each corresponds to a specific linear combination of six frequencies using small-integer multiples, positive or negative. In principle, these basic angular arguments can be specified in numerous ways; Doodson's choice of his six "Doodson arguments" has been widely used in tidal work. In terms of these Doodson arguments, each tidal frequency can then be specified as a sum made up of a small integer multiple of each of the six arguments. The resulting six small integer multipliers effectively encode the frequency of the tidal argument concerned, and these are the Doodson numbers: in practice all except the first are usually biased upwards by +5 to avoid negative numbers in the notation. (In the case that the biased multiple exceeds 9, the system adopts X for 10, and E for 11.)
The Doodson arguments are specified in the following way, in order of decreasing frequency:
formula_1 is mean Lunar time, the Greenwich hour angle of the mean Moon plus 12 hours.
formula_2 is the mean longitude of the Moon.
formula_3 is the mean longitude of the Sun.
formula_4 is the longitude of the Moon's mean perigee.
formula_5 is the negative of the longitude of the Moon's mean ascending node on the ecliptic.
formula_6 or formula_7 is the longitude of the Sun's mean perigee.
In these expressions, the symbols formula_8, formula_9, formula_10 and formula_11 refer to an alternative set of fundamental angular arguments (usually preferred for use in modern lunar theory), in which:-
formula_8 is the mean anomaly of the Moon (distance from its perigee).
formula_9 is the mean anomaly of the Sun (distance from its perigee).
formula_10 is the Moon's mean argument of latitude (distance from its node).
formula_11 is the Moon's mean elongation (distance from the sun).
It is possible to define several auxiliary variables on the basis of combinations of these.
In terms of this system, each tidal constituent frequency can be identified by its Doodson numbers. The strongest tidal constituent "M2" has a frequency of 2 cycles per lunar day; its Doodson numbers are usually written 255.555, meaning that its frequency is composed of twice the first Doodson argument, and zero times all of the others. The second strongest tidal constituent "S2" is influenced by the sun, and its Doodson numbers are 273.555, meaning that its frequency is composed of twice the first Doodson argument, +2 times the second, -2 times the third, and zero times each of the other three. This aggregates to the angular equivalent of mean solar time +12 hours. These two strongest component frequencies have simple arguments for which the Doodson system might appear needlessly complex, but each of the hundreds of other component frequencies can be briefly specified in a similar way, showing in the aggregate the usefulness of the encoding. The encoding step itself is illustrated in the sketch below.
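The following Python sketch (a hypothetical helper written for illustration, not a standard tidal-analysis routine) applies the encoding described above: all multipliers except the first are biased by +5, with X standing for 10 and E for 11.
def doodson_number(multipliers):
    # multipliers: the six small integer multiples of the Doodson arguments
    digits = []
    for i, k in enumerate(multipliers):
        v = k if i == 0 else k + 5              # bias all but the first multiplier by +5
        digits.append({10: "X", 11: "E"}.get(v, str(v)))
    return "".join(digits[:3]) + "." + "".join(digits[3:])
print(doodson_number((2, 0, 0, 0, 0, 0)))       # M2 -> "255.555"
print(doodson_number((2, 2, -2, 0, 0, 0)))      # S2 -> "273.555"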
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n \\begin{align} \n \\frac{\\partial \\zeta}{\\partial t}\n &+ \\frac{1}{a \\cos( \\varphi )} \\left[\n \\frac{\\partial}{\\partial \\lambda} (uD)\n + \\frac{\\partial}{\\partial \\varphi} \\left(vD \\cos( \\varphi )\\right) \n \\right]\n = 0,\n \\\\[2ex]\n \\frac{\\partial u}{\\partial t}\n &- v \\, 2 \\Omega \\sin( \\varphi )\n + \\frac{1}{a \\cos( \\varphi )} \\frac{\\partial}{\\partial \\lambda} \\left( g \\zeta + U \\right)\n = 0,\n \\quad \\text{and} \\\\[2ex]\n \\frac{\\partial v}{\\partial t}\n &+ u \\, 2 \\Omega \\sin( \\varphi )\n + \\frac{1}{a} \\frac{\\partial}{\\partial \\varphi} \\left( g \\zeta + U \\right)\n = 0,\n \\end{align}\n"
},
{
"math_id": 1,
"text": "\\beta_1 = \\tau = ( \\theta_M + \\pi - s )"
},
{
"math_id": 2,
"text": "\\beta_2 = s = ( F + \\Omega )"
},
{
"math_id": 3,
"text": "\\beta_3 = h = ( s - D )"
},
{
"math_id": 4,
"text": "\\beta_4 = p = ( s - l )"
},
{
"math_id": 5,
"text": "\\beta_5 = N' = ( -\\Omega )"
},
{
"math_id": 6,
"text": "\\beta_6 = p_l"
},
{
"math_id": 7,
"text": "p_s = ( s - D - l' )"
},
{
"math_id": 8,
"text": "l"
},
{
"math_id": 9,
"text": "l'"
},
{
"math_id": 10,
"text": "F"
},
{
"math_id": 11,
"text": "D"
}
] | https://en.wikipedia.org/wiki?curid=10610469 |
1061279 | Patronage concentration | Patronage concentration is a term used in marketing and retailing. It is the share of an individual consumer's expenditures in an industry or retail sector that is spent at one company. It is the amount that a person spends at one company divided by the amount that a person spends at all companies in the industry.
The relation is as follows:
formula_0.
For example, a consumer may spend $1000 per year at fast food restaurants. If they spend $100 of that at Wendy's Restaurants, then Wendy's has 100/1000 = 10%, or ten percent, of that consumer's patronage. As long as the amount spent at one firm is less than the total amount spent at all firms in the industry, the customer will be patronizing more than one firm, and patronage concentration will be less than 100%.
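A minimal sketch of the calculation in Python (repeating the Wendy's figures above):
def patronage_concentration(spend_at_firm, total_industry_spend):
    # share of one consumer's industry spending captured by a single firm
    return spend_at_firm / total_industry_spend
print(patronage_concentration(100, 1000))   # 0.1, i.e. ten percent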
The goal of many firms is to increase the patronage concentration ratio of its customers to 100%. Some firms set different patronage concentration targets for various classes of customers. This reflects the fact that some types of customers are more profitable than others.
This is very similar to market share. Whereas market share describes the percentage of all customers that patronize a company relative to the industry total, the patronage concentration ratio describes the percentage of one customer's patronage going to a company, relative to that person's spending in the industry. That is, market share is the aggregate or macro version of the patronage concentration ratio. Or alternatively, patronage concentration is the micro equivalent of market share.
In retailing, it has been demonstrated that store patronage is a continuum between single store loyalty and the use of several different stores. In particular, patronage concentration involves trading off economic resources against product assortment, and spatial and temporal benefits. It has been shown that patronage decisions are associated with consumer characteristics that are suggestive of heterogeneous cost–benefit tradeoffs and opportunity costs of time.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\text{amount spent at one company}}{\\text{amount spent at all companies in the industry}}"
}
] | https://en.wikipedia.org/wiki?curid=1061279 |
10614125 | Anderson's theorem | On when a function on convex body K does not decrease if K is translated inwards
In mathematics, Anderson's theorem is a result in real analysis and geometry which says that the integral of an integrable, symmetric, unimodal, non-negative function "f" over an "n"-dimensional convex body "K" does not decrease if "K" is translated inwards towards the origin. This is a natural statement, since the graph of "f" can be thought of as a hill with a single peak over the origin; however, for "n" ≥ 2, the proof is not entirely obvious, as there may be points "x" of the body "K" where the value "f"("x") is larger than at the corresponding translate of "x".
Anderson's theorem, named after Theodore Wilbur Anderson, also has an interesting application to probability theory.
Statement of the theorem.
Let "K" be a convex body in "n"-dimensional Euclidean space R"n" that is symmetric with respect to reflection in the origin, i.e. "K" = −"K". Let "f" : R"n" → R be a non-negative, symmetric, globally integrable function; i.e.
Suppose also that the super-level sets "L"("f", "t") of "f", defined by
formula_1
are convex subsets of R"n" for every "t" ≥ 0. (This property is sometimes referred to as being unimodal.) Then, for any 0 ≤ "c" ≤ 1 and "y" ∈ R"n",
formula_2
Application to probability theory.
Given a probability space (Ω, Σ, Pr), suppose that "X" : Ω → R"n" is an R"n"-valued random variable with probability density function "f" : R"n" → [0, +∞) and that "Y" : Ω → R"n" is an independent random variable. The probability density functions of many well-known probability distributions are "p"-concave for some "p", and hence unimodal. If they are also symmetric (e.g. the Laplace and normal distributions), then Anderson's theorem applies, in which case
formula_3
for any origin-symmetric convex body "K" ⊆ R"n". | [
{
"math_id": 0,
"text": "\\int_{\\mathbb{R}^{n}} f(x) \\, \\mathrm{d} x < + \\infty."
},
{
"math_id": 1,
"text": "L(f, t) = \\{ x \\in \\mathbb{R}^{n} | f(x) \\geq t \\},"
},
{
"math_id": 2,
"text": "\\int_{K} f(x + c y) \\, \\mathrm{d} x \\geq \\int_{K} f(x + y) \\, \\mathrm{d} x."
},
{
"math_id": 3,
"text": "\\Pr ( X \\in K ) \\geq \\Pr ( X + Y \\in K )"
}
] | https://en.wikipedia.org/wiki?curid=10614125 |
10614436 | Shephard's problem | In mathematics, Shephard's problem is the following geometrical question asked by Geoffrey Colin Shephard in 1964: if "K" and "L" are centrally symmetric convex bodies in "n"-dimensional Euclidean space such that whenever "K" and "L" are projected onto a hyperplane, the volume of the projection of "K" is smaller than the volume of the projection of "L", then does it follow that the volume of "K" is smaller than that of "L"?
In this case, "centrally symmetric" means that the reflection of "K" in the origin, "−K", is a translate of "K", and similarly for "L". If π"k" : R"n" → Π"k" is a projection of R"n" onto some "k"-dimensional hyperplane Π"k" (not necessarily a coordinate hyperplane) and "V""k" denotes "k"-dimensional volume, Shephard's problem is to determine the truth or falsity of the implication
formula_0
"V""k"(π"k"("K")) is sometimes known as the brightness of "K" and the function "V""k" π"k" as a ("k"-dimensional) brightness function.
In dimensions "n" = 1 and 2, the answer to Shephard's problem is "yes". In 1967, however, Petty and Schneider showed that the answer is "no" for every "n" ≥ 3. The solution of Shephard's problem requires Minkowski's first inequality for convex bodies and the notion of projection bodies of convex bodies.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "V_{k} (\\pi_{k} (K)) \\leq V_{k} (\\pi_{k} (L)) \\mbox{ for all } 1 \\leq k < n \\implies V_{n} (K) \\leq V_{n} (L)."
}
] | https://en.wikipedia.org/wiki?curid=10614436 |
10614602 | Minkowski's first inequality for convex bodies | In mathematics, Minkowski's first inequality for convex bodies is a geometrical result due to the German mathematician Hermann Minkowski. The inequality is closely related to the Brunn–Minkowski inequality and the isoperimetric inequality.
Statement of the inequality.
Let "K" and "L" be two "n"-dimensional convex bodies in "n"-dimensional Euclidean space R"n". Define a quantity "V"1("K", "L") by
formula_0
where "V" denotes the "n"-dimensional Lebesgue measure and + denotes the Minkowski sum. Then
formula_1
with equality if and only if "K" and "L" are homothetic, i.e. are equal up to translation and dilation.
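As a quick worked check (an illustration added here, not part of the statement), take "n" = 2 with "K" a square of side "a" and "L" a disk of radius "r". The area of "K" + "ε""L" is "a"² + 4"a""ε""r" + π("ε""r")², so "V"1("K", "L") = 2"ar", while the right-hand side is "ar"√π ≈ 1.77 "ar"; the inequality holds strictly because a square and a disk are not homothetic. A short numerical sketch in Python:
import math
a, r = 1.0, 1.0                                   # square of side a, disk of radius r (illustrative values)
def area_K_plus_eps_L(eps):
    # the Minkowski sum of a square and a scaled disk is a rounded square
    return a**2 + 4 * a * eps * r + math.pi * (eps * r)**2
eps = 1e-6
V1 = (area_K_plus_eps_L(eps) - a**2) / (2 * eps)  # n = 2, so divide the one-sided derivative by 2
rhs = math.sqrt(a**2) * math.sqrt(math.pi * r**2)
print(V1, ">=", rhs)                              # about 2.0 >= about 1.77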
Connection to other inequalities.
The Brunn–Minkowski inequality.
One can show that the Brunn–Minkowski inequality for convex bodies in R"n" implies Minkowski's first inequality for convex bodies in R"n", and that equality in the Brunn–Minkowski inequality implies equality in Minkowski's first inequality.
The isoperimetric inequality.
By taking "L" = "B", the "n"-dimensional unit ball, in Minkowski's first inequality for convex bodies, one obtains the isoperimetric inequality for convex bodies in R"n": if "K" is a convex body in R"n", then
formula_2
with equality if and only if "K" is a ball of some radius. | [
{
"math_id": 0,
"text": "n V_{1} (K, L) = \\lim_{\\varepsilon \\downarrow 0} \\frac{V (K + \\varepsilon L) - V(K)}{\\varepsilon},"
},
{
"math_id": 1,
"text": "V_{1} (K, L) \\geq V(K)^{(n - 1) / n} V(L)^{1 / n},"
},
{
"math_id": 2,
"text": "\\left( \\frac{V(K)}{V(B)} \\right)^{1 / n} \\leq \\left( \\frac{S(K)}{S(B)} \\right)^{1 / (n - 1)},"
}
] | https://en.wikipedia.org/wiki?curid=10614602 |
1061511 | Digital watermarking | Marker covertly embedded in a signal
A digital watermark is a kind of marker covertly embedded in a noise-tolerant signal such as audio, video or image data. It is typically used to identify ownership of the copyright of such a signal. Digital watermarking is the process of hiding digital information in a carrier signal; the hidden information should, but does not need to, contain a relation to the carrier signal. Digital watermarks may be used to verify the authenticity or integrity of the carrier signal or to show the identity of its owners. It is prominently used for tracing copyright infringements and for banknote authentication.
Like traditional physical watermarks, digital watermarks are often only perceptible under certain conditions, e.g. after using some algorithm. If a digital watermark distorts the carrier signal in a way that it becomes easily perceivable, it may be considered less effective depending on its purpose. Traditional watermarks may be applied to visible media (like images or video), whereas in digital watermarking, the signal may be audio, pictures, video, texts or 3D models. A signal may carry several different watermarks at the same time. Unlike metadata that is added to the carrier signal, a digital watermark does not change the size of the carrier signal.
The needed properties of a digital watermark depend on the use case in which it is applied. For marking media files with copyright information, a digital watermark has to be rather robust against modifications that can be applied to the carrier signal. Instead, if integrity has to be ensured, a fragile watermark would be applied.
Both steganography and digital watermarking employ steganographic techniques to embed data covertly in noisy signals. While steganography aims for imperceptibility to human senses, digital watermarking tries to control the robustness as top priority.
Since a digital copy of data is the same as the original, digital watermarking is a passive protection tool. It just marks data, but does not degrade it or control access to the data.
One application of digital watermarking is "source tracking". A watermark is embedded into a digital signal at each point of distribution. If a copy of the work is found later, then the watermark may be retrieved from the copy and the source of the distribution is known. This technique reportedly has been used to detect the source of illegally copied movies.
History.
The term "digital watermark" was coined by Andrew Tirkel and Charles Osborne in December 1992. The first successful embedding and extraction of a steganographic spread spectrum watermark was demonstrated in 1993 by Andrew Tirkel, Gerard Rankin, Ron Van Schyndel, Charles Osborne, and others.
Watermarks are identification marks produced during the paper-making process. The first watermarks appeared in Italy during the 13th century, but their use rapidly spread across Europe. They were used as a means to identify the paper maker or the trade guild that manufactured the paper. The marks often were created by a wire sewn onto the paper mold. Watermarks continue to be used today as manufacturer's marks and to prevent forgery.
Applications.
Digital watermarking may be used for a wide range of applications, such as copyright protection, source tracking, broadcast monitoring, and authentication or tamper detection of content.
Digital watermarking life-cycle phases.
The information to be embedded in a signal is called a digital watermark, although in some contexts the phrase digital watermark means the difference between the watermarked signal and the cover signal. The signal where the watermark is to be embedded is called the "host" signal. A watermarking system is usually divided into three distinct steps, embedding, attack, and detection. In embedding, an algorithm accepts the host and the data to be embedded, and produces a watermarked signal.
Then the watermarked digital signal is transmitted or stored, usually transmitted to another person. If this person makes a modification, this is called an "attack". While the modification may not be malicious, the term attack arises from copyright protection applications, where third parties may attempt to remove the digital watermark through modification. There are many possible modifications, for example, lossy compression of the data (in which resolution is diminished), cropping an image or video, or intentionally adding noise.
"Detection" (often called extraction) is an algorithm that is applied to the attacked signal to attempt to extract the watermark from it. If the signal was unmodified during transmission, then the watermark still is present and it may be extracted. In "robust" digital watermarking applications, the extraction algorithm should be able to produce the watermark correctly, even if the modifications were strong. In "fragile" digital watermarking, the extraction algorithm should fail if any change is made to the signal.
Classification.
A digital watermark is called "robust" with respect to transformations if the embedded information may be detected reliably from the marked signal, even if degraded by any number of transformations. Typical image degradations are JPEG compression, rotation, cropping, additive noise, and quantization. For video content, temporal modifications and MPEG compression often are added to this list. A digital watermark is called "imperceptible" if the watermarked content is perceptually equivalent to the original, unwatermarked content. In general, it is easy to create either robust watermarks or imperceptible watermarks, but the creation of both robust and imperceptible watermarks has proven to be quite challenging. Robust imperceptible watermarks have been proposed as a tool for the protection of digital content, for example as an embedded "no-copy-allowed" flag in professional video content.
Digital watermarking techniques may be classified in several ways.
Robustness.
A digital watermark is called "fragile" if it fails to be detectable after the slightest modification. Fragile watermarks are commonly used for tamper detection (integrity proof). Modifications to an original work that clearly are noticeable, commonly are not referred to as watermarks, but as generalized barcodes.
A digital watermark is called "semi-fragile" if it resists benign transformations, but fails detection after malignant transformations. Semi-fragile watermarks commonly are used to detect malignant transformations.
A digital watermark is called "robust" if it resists a designated class of transformations. Robust watermarks may be used in copy protection applications to carry copy and no access control information.
Perceptibility.
A digital watermark is called "imperceptible" if the original cover signal and the marked signal are perceptually indistinguishable.
A digital watermark is called "perceptible" if its presence in the marked signal is noticeable (e.g. digital on-screen graphics like a network logo, content bug, codes, or opaque images). On videos and images, some watermarks are made transparent or translucent for the convenience of consumers, since an opaque mark blocks a portion of the view and therefore degrades it.
This should not be confused with "perceptual", that is, watermarking which uses the limitations of human perception to be imperceptible.
Capacity.
The length of the embedded message determines two different main classes of digital watermarking schemes. In zero-bit (or presence) watermarking the message is conceptually zero bits long, and the system is designed only to detect whether or not the watermark is present in the marked object. In multiple-bit (or non-zero-bit) watermarking the message is an "n"-bit-long stream formula_0 with formula_1, taken from a message space formula_2, which is modulated into the watermark and must be recovered at detection.
Embedding method.
A digital watermarking method is referred to as "spread-spectrum" if the marked signal is obtained by an additive modification. Spread-spectrum watermarks are known to be modestly robust, but also to have a low information capacity due to host interference.
A digital watermarking method is said to be of "quantization type" if the marked signal is obtained by quantization. Quantization watermarks suffer from low robustness, but have a high information capacity due to rejection of host interference.
A digital watermarking method is referred to as "amplitude modulation" if the marked signal is embedded by an additive modification similar to that of the spread-spectrum method, but embedded specifically in the spatial domain. A minimal sketch of the additive spread-spectrum idea is given below.
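The following Python/NumPy sketch is a toy illustration of additive spread-spectrum watermarking, not a production scheme: a key-seeded ±1 pseudo-noise pattern is added to the host samples with a small strength alpha, and detection correlates the received signal with the same pattern.
import numpy as np
def embed(host, key, alpha=0.05):
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=host.shape)     # key-seeded pseudo-noise watermark
    return host + alpha * w                          # additive modification of the host signal
def detect(signal, key, alpha=0.05):
    rng = np.random.default_rng(key)
    w = rng.choice([-1.0, 1.0], size=signal.shape)   # regenerate the same pattern from the key
    corr = float(np.mean(signal * w))                # correlation detector
    return corr > alpha / 2, corr
rng = np.random.default_rng(42)
host = rng.standard_normal(100_000)                  # stand-in for image or audio samples
marked = embed(host, key=1234)
print(detect(marked, key=1234))                      # (True, about 0.05): watermark detected
print(detect(host, key=1234))                        # (False, about 0.0): no watermark in the host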
Evaluation and benchmarking.
The evaluation of digital watermarking schemes may provide detailed information for a watermark designer or for end-users, therefore, different evaluation strategies exist. Often used by a watermark designer is the evaluation of single properties to show, for example, an improvement. Mostly, end-users are not interested in detailed information. They want to know if a given digital watermarking algorithm may be used for their application scenario, and if so, which parameter sets seems to be the best.
Cameras.
Epson and Kodak have produced cameras with security features such as the Epson PhotoPC 3000Z and the Kodak DC-290. Both cameras added irremovable features to the pictures which distorted the original image, making them unacceptable for some applications such as forensic evidence in court. According to Blythe and Fridrich, "[n]either camera can provide an undisputable proof of the image origin or its author".
A secure digital camera (SDC) was proposed by Saraju Mohanty et al. in 2003 and published in January 2004. This was not the first time such a camera had been proposed: Blythe and Fridrich also worked on the SDC in 2004, for a digital camera that would use lossless watermarking to embed a biometric identifier together with a cryptographic hash.
Reversible data hiding.
"Reversible data hiding" is a technique which enables images to be authenticated and then restored to their original form by removing the digital watermark and replacing the image data that had been overwritten.
Watermarking for relational databases.
Digital watermarking for relational databases has emerged as a candidate solution to provide copyright protection, tamper detection, traitor tracing, and maintaining integrity of relational data. Many watermarking techniques have been proposed in the literature to address these purposes. A survey of the current state-of-the-art and a classification of the different techniques according to their intent, the way they express the watermark, the cover type, granularity level, and verifiability was published in 2010 by Halder et al. in the Journal of Universal Computer Science.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(m=m_1\\ldots m_n,\\; n\\in\\N\\right."
},
{
"math_id": 1,
"text": "\\left.n=|m|\\right)"
},
{
"math_id": 2,
"text": "M=\\{0,1\\}^n"
}
] | https://en.wikipedia.org/wiki?curid=1061511 |
10615177 | MA plot | Within computational biology, an MA plot is an application of a Bland–Altman plot for visual representation of genomic data. The plot visualizes the differences between measurements taken in two samples, by transforming the data onto M (log ratio) and A (mean average) scales, then plotting these values. Though originally applied in the context of two channel DNA microarray gene expression data, MA plots are also used to visualise high-throughput sequencing analysis.
Explanation.
Microarray data is often normalized within arrays to control for systematic biases in dye coupling and hybridization efficiencies, as well as other technical biases in the DNA probes and the print tip used to spot the array. By minimizing these systematic variations, true biological differences can be found. To determine whether normalization is needed, one can plot Cy5 (R) intensities against Cy3 (G) intensities and see whether the slope of the line is around 1. An improved method, which is basically a scaled, 45 degree rotation of the R vs. G plot, is the MA-plot. The MA-plot is a plot of the distribution of the red/green intensity ratio ('M') plotted against the average intensity ('A'). M and A are defined by the following equations.
formula_0
formula_1
M is, therefore, the binary logarithm of the intensity ratio (or difference between log intensities) and A is the average log intensity for a dot in the plot. MA plots are then used to visualize intensity-dependent ratio of raw microarray data (microarrays typically show a bias here, with higher A resulting in higher |M|, i.e. the brighter the spot the more likely an observed difference between sample and control). The MA plot puts the variable "M" on the "y"-axis and "A" on the "x"-axis and gives a quick overview of the distribution of the data.
In many microarray gene expression experiments, an underlying assumption is that most of the genes would not see any change in their expression; therefore, the majority of the points on the "y"-axis ("M") would be located at 0, since log(1) is 0. If this is not the case, then a normalization method such as LOESS should be applied to the data before statistical analysis. (On the diagram below, the red line runs below the zero mark before normalization when it should be straight. Since it is not straight, the data should be normalized. After being normalized, the red line is straight on the zero line and shows as pink/black.)
Packages.
Several Bioconductor packages for the R software provide facilities for creating MA plots. These include affy (ma.plot, mva.pairs), limma (plotMA), marray (maPlot), and edgeR (maPlot).
Similar "RA" plots can be generated using the raPlot function in the caroline CRAN R package.
An interactive MA plot to filter genes by M, A and p-values, search by names or with a lasso, and save selected genes, is available as an R-Shiny code Enhanced-MA-Plot.
Example in the R programming language.
library(affy)
# Load an example two-array dataset from the affydata package
if (require(affydata)) {
  data(Dilution)
}
y <- exprs(Dilution)[, c("20B", "10A")]
x11()
# Raw data: A = mean of the log2 intensities, M = difference of the log2 intensities
ma.plot(rowMeans(log2(y)), log2(y[, 1]) - log2(y[, 2]), cex = 1)
title("Dilutions Dataset (array 20B v 10A)")
# Quantile normalization, then the same MA plot on the normalized intensities
library(preprocessCore)
x <- normalize.quantiles(y)
x11()
ma.plot(rowMeans(log2(x)), log2(x[, 1]) - log2(x[, 2]), cex = 1)
title("Post Norm: Dilutions Dataset (array 20B v 10A)") | [
{
"math_id": 0,
"text": "M=\\log_2(R/G)=\\log_2(R)-\\log_2(G)"
},
{
"math_id": 1,
"text": "A=\\frac12 \\log_2(RG) = \\frac12 (\\log_2(R) + \\log_2(G))"
}
] | https://en.wikipedia.org/wiki?curid=10615177 |
1062015 | Associated Legendre polynomials | Canonical solutions of the general Legendre equation
In mathematics, the associated Legendre polynomials are the canonical solutions of the general Legendre equation
formula_0
or equivalently
formula_1
where the indices "ℓ" and "m" (which are integers) are referred to as the degree and order of the associated Legendre polynomial respectively. This equation has nonzero solutions that are nonsingular on [−1, 1] only if "ℓ" and "m" are integers with 0 ≤ "m" ≤ "ℓ", or with trivially equivalent negative values. When in addition "m" is even, the function is a polynomial. When "m" is zero and "ℓ" integer, these functions are identical to the Legendre polynomials. In general, when "ℓ" and "m" are integers, the regular solutions are sometimes called "associated Legendre polynomials", even though they are not polynomials when "m" is odd. The fully general class of functions with arbitrary real or complex values of "ℓ" and "m" are Legendre functions. In that case the parameters are usually labelled with Greek letters.
The Legendre ordinary differential equation is frequently encountered in physics and other technical fields. In particular, it occurs when solving Laplace's equation (and related partial differential equations) in spherical coordinates. Associated Legendre polynomials play a vital role in the definition of spherical harmonics.
Definition for non-negative integer parameters ℓ and m.
These functions are denoted formula_2, where the superscript indicates the order and not a power of "P". Their most straightforward definition is in terms
of derivatives of ordinary Legendre polynomials ("m" ≥ 0)
formula_3
The (−1)"m" factor in this formula is known as the Condon–Shortley phase. Some authors omit it. That the functions described by this equation satisfy the general Legendre differential equation with the indicated values of the parameters "ℓ" and "m" follows by differentiating "m" times the Legendre equation for "P""ℓ":
formula_4
Moreover, since by Rodrigues' formula,
formula_5
the "P" can be expressed in the form
formula_6
This equation allows extension of the range of "m" to: −"ℓ" ≤ "m" ≤ "ℓ". The definitions of "P""ℓ"±"m", resulting from this expression by substitution of ±"m", are proportional. Indeed, equate the coefficients of equal powers on the left and right hand side of
formula_7
then it follows that the proportionality constant is
formula_8
so that
formula_9
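A quick numerical cross-check of these definitions (a sketch that assumes SciPy's scipy.special.lpmv uses the same Condon–Shortley sign convention as above; the comparison values are the explicit expressions listed further below):
import numpy as np
from scipy.special import lpmv
x = np.linspace(-0.99, 0.99, 7)
# Explicit closed forms, as tabulated below
P11 = -np.sqrt(1 - x**2)                 # P_1^1(x)
P21 = -3 * x * np.sqrt(1 - x**2)         # P_2^1(x)
P22 = 3 * (1 - x**2)                     # P_2^2(x)
assert np.allclose(lpmv(1, 1, x), P11)   # lpmv takes (order m, degree l, x)
assert np.allclose(lpmv(1, 2, x), P21)
assert np.allclose(lpmv(2, 2, x), P22)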
Alternative notations.
The following alternative notations are also used in literature:
formula_10
Closed form.
The associated Legendre polynomials can also be written as:
formula_11
with simple monomials and the generalized form of the binomial coefficient.
Orthogonality.
The associated Legendre polynomials are not mutually orthogonal in general. For example, formula_12 is not orthogonal to formula_13. However, some subsets are orthogonal. Assuming 0 ≤ "m" ≤ "ℓ", they satisfy the orthogonality condition for fixed "m":
formula_14
Where "δ""k","ℓ" is the Kronecker delta.
Also, they satisfy the orthogonality condition for fixed "ℓ":
formula_15
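Both relations are easy to verify numerically; a minimal sketch of the fixed-"m" check, assuming SciPy's lpmv and quad:
from math import factorial
from scipy.integrate import quad
from scipy.special import lpmv
def inner(k, l, m):
    # integral of P_k^m(x) * P_l^m(x) over [-1, 1]
    return quad(lambda x: lpmv(m, k, x) * lpmv(m, l, x), -1, 1)[0]
m = 2
for k in (2, 3, 4):
    for l in (2, 3, 4):
        expected = 0.0 if k != l else 2 * factorial(l + m) / ((2 * l + 1) * factorial(l - m))
        assert abs(inner(k, l, m) - expected) < 1e-5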
Negative m and/or negative ℓ.
The differential equation is clearly invariant under a change in sign of "m".
The functions for negative "m" were shown above to be proportional to those of positive "m":
formula_16
formula_17
The differential equation is also invariant under a change from ℓ to −"ℓ" − 1, and the functions for negative ℓ are defined by
formula_18
Parity.
From their definition, one can verify that the associated Legendre functions are either even or odd according to
formula_19
The first few associated Legendre functions.
The first few associated Legendre functions, including those for negative values of "m", are:
formula_20
formula_21
formula_22
formula_23
formula_24
Recurrence formula.
These functions have a number of recurrence properties:
formula_25
formula_26
formula_27
formula_28
formula_29
formula_30
formula_31
formula_32
formula_33
formula_34
formula_35
formula_36
formula_37
formula_38
formula_39
Helpful identities (initial values for the first recursion):
formula_40
formula_41
formula_42
with !! the double factorial.
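These identities give a standard way to evaluate formula_2 numerically: seed the recursion with formula_41 and formula_42, then step the degree upward using formula_26. A minimal Python sketch of this common scheme (written here as an illustration):
import math
def assoc_legendre(l, m, x):
    # P_l^m(x) for 0 <= m <= l and -1 <= x <= 1, by upward recursion in the degree
    pmm = (-1)**m * math.prod(range(1, 2 * m, 2)) * (1 - x * x)**(m / 2)   # P_m^m(x)
    if l == m:
        return pmm
    pm1m = x * (2 * m + 1) * pmm                                           # P_{m+1}^m(x)
    if l == m + 1:
        return pm1m
    p_prev, p_curr = pmm, pm1m
    for ll in range(m + 1, l):
        # (l - m + 1) P_{l+1}^m = (2l + 1) x P_l^m - (l + m) P_{l-1}^m
        p_prev, p_curr = p_curr, ((2 * ll + 1) * x * p_curr - (ll + m) * p_prev) / (ll - m + 1)
    return p_curr
print(assoc_legendre(2, 1, 0.5))   # -3 * 0.5 * sqrt(0.75), about -1.299
print(assoc_legendre(3, 0, 0.5))   # (5 * 0.125 - 3 * 0.5) / 2 = -0.4375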
Gaunt's formula.
The integral over the product of three associated Legendre polynomials (with orders matching as shown below) is a necessary ingredient when developing products of Legendre polynomials into a series linear in the Legendre polynomials. For instance, this turns out to be necessary when doing atomic calculations of the Hartree–Fock variety where matrix elements of the Coulomb operator are needed. For this we have Gaunt's formula
formula_43
This formula is to be used under the following assumptions: the degrees are non-negative integers, formula_44; all three orders are non-negative integers, formula_45; formula_46 is the largest of the three orders; the orders sum up as formula_47; and the degrees obey formula_48.
Other quantities appearing in the formula are defined as
formula_49
formula_50
formula_51
The integral is zero unless the sum of the degrees is even, so that formula_52 is an integer, and the triangular condition formula_53 is satisfied.
Dong and Lemus (2002) generalized the derivation of this formula to integrals over a product of an arbitrary number of associated Legendre polynomials.
Generalization via hypergeometric functions.
These functions may actually be defined for general complex parameters and argument:
formula_54
where formula_55 is the gamma function and formula_56 is the hypergeometric function
formula_57
They are called the Legendre functions when defined in this more general way. They satisfy the same differential equation as before:
formula_58
Since this is a second order differential equation, it has a second solution, formula_59, defined as:
formula_60
formula_61 and formula_59 both obey the various recurrence formulas given previously.
Reparameterization in terms of angles.
These functions are most useful when the argument is reparameterized in terms of angles, letting formula_62:
formula_63
Using the relation formula_64, the list given above yields the first few polynomials, parameterized this way, as:
formula_65
The orthogonality relations given above become in this formulation:
for fixed "m", formula_66 are orthogonal, parameterized by θ over formula_67, with weight formula_68:
formula_69
Also, for fixed "ℓ":
formula_70
In terms of θ, formula_71 are solutions of
formula_72
More precisely, given an integer "m" formula_73 0, the above equation has
nonsingular solutions only when formula_74 for "ℓ"
an integer ≥ "m", and those solutions are proportional to
formula_71.
Applications in physics: spherical harmonics.
In many occasions in physics, associated Legendre polynomials in terms of angles occur where spherical symmetry is involved. The colatitude angle in spherical coordinates is
the angle formula_75 used above. The longitude angle, formula_76, appears in a multiplying factor. Together, they make a set of functions called spherical harmonics. These functions express the symmetry of the two-sphere under the action of the Lie group SO(3).
What makes these functions useful is that they are central to the solution of the equation
formula_77 on the surface of a sphere. In spherical coordinates θ (colatitude) and φ (longitude), the Laplacian is
formula_78
When the partial differential equation
formula_79
is solved by the method of separation of variables, one gets a φ-dependent part formula_80 or formula_81 for integer m≥0, and an equation for the θ-dependent part
formula_72
for which the solutions are formula_71 with formula_82
and formula_83.
Therefore, the equation
formula_84
has nonsingular separated solutions only when formula_83,
and those solutions are proportional to
formula_85
and
formula_86
For each choice of "ℓ", there are 2ℓ + 1 functions
for the various values of "m" and choices of sine and cosine.
They are all orthogonal in both "ℓ" and "m" when integrated over the
surface of the sphere.
The solutions are usually written in terms of complex exponentials:
formula_87
The functions formula_88 are the spherical harmonics, and the quantity in the square root is a normalizing factor.
Recalling the relation between the associated Legendre functions of positive and negative "m", it is easily shown that the spherical harmonics satisfy the identity
formula_89
The spherical harmonic functions form a complete orthonormal set of functions in the sense of Fourier series. Workers in the fields of geodesy, geomagnetism and spectral analysis use a different phase and normalization factor than given here (see spherical harmonics).
When a 3-dimensional spherically symmetric partial differential equation is solved by the method of separation of variables in spherical coordinates, the part that remains after removal of the radial part is typically
of the form
formula_90
and hence the solutions are spherical harmonics.
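As a numerical sanity check of the normalization (a sketch; it assumes the conventional factor sqrt((2ℓ+1)(ℓ-m)!/(4π(ℓ+m)!)) for the square root mentioned above, and SciPy's lpmv for formula_71):
from math import cos, factorial, pi, sin
from scipy.integrate import quad
from scipy.special import lpmv
def sph_harm_norm(l, m):
    # integral of |Y_l^m|^2 over the sphere, with the assumed normalization factor
    norm = (2 * l + 1) * factorial(l - m) / (4 * pi * factorial(l + m))
    theta_int = quad(lambda t: lpmv(m, l, cos(t))**2 * sin(t), 0, pi)[0]
    return norm * 2 * pi * theta_int        # the phi integral of |e^{i m phi}|^2 contributes 2*pi
for l in range(4):
    for m in range(l + 1):
        assert abs(sph_harm_norm(l, m) - 1.0) < 1e-6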
Generalizations.
The Legendre polynomials are closely related to hypergeometric series. In the form of spherical harmonics, they express the symmetry of the two-sphere under the action of the Lie group SO(3). There are many other Lie groups besides SO(3), and analogous generalizations of the Legendre polynomials exist to express the symmetries of semi-simple Lie groups and Riemannian symmetric spaces. Crudely speaking, one may define a Laplacian on symmetric spaces; the eigenfunctions of the Laplacian can be thought of as generalizations of the spherical harmonics to other settings.
Notes and references.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\left(1 - x^2\\right) \\frac{d^2}{d x^2} P_\\ell^m(x) - 2 x \\frac{d}{d x} P_\\ell^m(x) + \\left[ \\ell (\\ell + 1) - \\frac{m^2}{1 - x^2} \\right] P_\\ell^m(x) = 0,"
},
{
"math_id": 1,
"text": "\\frac{d}{d x} \\left[ \\left(1 - x^2\\right) \\frac{d}{d x} P_\\ell^m(x) \\right] + \\left[ \\ell (\\ell + 1) - \\frac{m^2}{1 - x^2} \\right] P_\\ell^m(x) = 0,"
},
{
"math_id": 2,
"text": "P_\\ell^{m}(x)"
},
{
"math_id": 3,
"text": " P_\\ell^{m}(x) = (-1)^m (1-x^2)^{m/2} \\frac{d^m}{dx^m} \\left( P_\\ell(x) \\right), "
},
{
"math_id": 4,
"text": "\\left(1-x^2\\right) \\frac{d^2}{dx^2}P_\\ell(x) -2x\\frac{d}{dx}P_\\ell(x)+ \\ell(\\ell+1)P_\\ell(x) = 0."
},
{
"math_id": 5,
"text": "P_\\ell(x) = \\frac{1}{2^\\ell\\,\\ell!} \\ \\frac{d^\\ell}{dx^\\ell}\\left[(x^2-1)^\\ell\\right],"
},
{
"math_id": 6,
"text": "P_\\ell^{m}(x) = \\frac{(-1)^m}{2^\\ell \\ell!} (1-x^2)^{m/2}\\ \\frac{d^{\\ell+m}}{dx^{\\ell+m}}(x^2-1)^\\ell."
},
{
"math_id": 7,
"text": "\\frac{d^{\\ell-m}}{dx^{\\ell-m}} (x^2-1)^{\\ell} = c_{lm} (1-x^2)^m \\frac{d^{\\ell+m}}{dx^{\\ell+m}}(x^2-1)^{\\ell},"
},
{
"math_id": 8,
"text": "c_{lm} = (-1)^m \\frac{(\\ell-m)!}{(\\ell+m)!} ,"
},
{
"math_id": 9,
"text": "P^{-m}_\\ell(x) = (-1)^m \\frac{(\\ell-m)!}{(\\ell+m)!} P^{m}_\\ell(x)."
},
{
"math_id": 10,
"text": "P_{\\ell m}(x) = (-1)^m P_\\ell^{m}(x) "
},
{
"math_id": 11,
"text": " P_l^m(x)=(-1)^{m} \\cdot 2^{l} \\cdot (1-x^2)^{m/2} \\cdot \\sum_{k=m}^l \\frac{k!}{(k-m)!}\\cdot x^{k-m} \\cdot \\binom{l}{k} \\binom{\\frac{l+k-1}{2}}{l} "
},
{
"math_id": 12,
"text": "P_1^1"
},
{
"math_id": 13,
"text": "P_2^2"
},
{
"math_id": 14,
"text": "\\int_{-1}^{1} P_k ^{m} P_\\ell ^{m} dx = \\frac{2 (\\ell+m)!}{(2\\ell+1)(\\ell-m)!}\\ \\delta _{k,\\ell}"
},
{
"math_id": 15,
"text": "\\int_{-1}^{1} \\frac{P_\\ell ^{m} P_\\ell ^{n}}{1-x^2}dx = \\begin{cases}\n0 & \\text{if } m\\neq n \\\\\n\\frac{(\\ell+m)!}{m(\\ell-m)!} & \\text{if } m=n\\neq0 \\\\\n\\infty & \\text{if } m=n=0\n\\end{cases}"
},
{
"math_id": 16,
"text": "P_\\ell ^{-m} = (-1)^m \\frac{(\\ell-m)!}{(\\ell+m)!} P_\\ell ^{m}"
},
{
"math_id": 17,
"text": "\\text{If}\\quad |m| > \\ell\\,\\quad\\text{then}\\quad P_\\ell^{m} = 0.\\,"
},
{
"math_id": 18,
"text": "P_{-\\ell} ^{m} = P_{\\ell-1} ^{m},\\ (\\ell=1,\\,2,\\, \\dots)."
},
{
"math_id": 19,
"text": "P_\\ell ^{m} (-x) = (-1)^{\\ell - m} P_\\ell ^{m}(x) "
},
{
"math_id": 20,
"text": "P_{0}^{0}(x)=1"
},
{
"math_id": 21,
"text": "\\begin{align}\nP_{1}^{-1}(x)&=-\\tfrac{1}{2}P_{1}^{1}(x) \\\\\nP_{1}^{0}(x)&=x \\\\\nP_{1}^{1}(x)&=-(1-x^2)^{1/2}\n\\end{align}"
},
{
"math_id": 22,
"text": "\\begin{align}\nP_{2}^{-2}(x)&=\\tfrac{1}{24}P_{2}^{2}(x) \\\\\nP_{2}^{-1}(x)&=-\\tfrac{1}{6}P_{2}^{1}(x) \\\\\nP_{2}^{0}(x)&=\\tfrac{1}{2}(3x^{2}-1) \\\\\nP_{2}^{1}(x)&=-3x(1-x^2)^{1/2} \\\\\nP_{2}^{2}(x)&=3(1-x^2)\n\\end{align}"
},
{
"math_id": 23,
"text": "\\begin{align}\nP_{3}^{-3}(x)&=-\\tfrac{1}{720}P_{3}^{3}(x) \\\\\nP_{3}^{-2}(x)&=\\tfrac{1}{120}P_{3}^{2}(x) \\\\\nP_{3}^{-1}(x)&=-\\tfrac{1}{12}P_{3}^{1}(x) \\\\\nP_{3}^{0}(x)&=\\tfrac{1}{2}(5x^3-3x) \\\\\nP_{3}^{1}(x)&=\\tfrac{3}{2}(1-5x^{2})(1-x^2)^{1/2} \\\\\nP_{3}^{2}(x)&=15x(1-x^2) \\\\\nP_{3}^{3}(x)&=-15(1-x^2)^{3/2}\n\\end{align}"
},
{
"math_id": 24,
"text": "\\begin{align}\nP_{4}^{-4}(x)&=\\tfrac{1}{40320}P_{4}^{4}(x) \\\\\nP_{4}^{-3}(x)&=-\\tfrac{1}{5040}P_{4}^{3}(x) \\\\\nP_{4}^{-2}(x)&=\\tfrac{1}{360}P_{4}^{2}(x) \\\\\nP_{4}^{-1}(x)&=-\\tfrac{1}{20}P_{4}^{1}(x) \\\\\nP_{4}^{0}(x)&=\\tfrac{1}{8}(35x^{4}-30x^{2}+3) \\\\\nP_{4}^{1}(x)&=-\\tfrac{5}{2}(7x^3-3x)(1-x^2)^{1/2} \\\\\nP_{4}^{2}(x)&=\\tfrac{15}{2}(7x^2-1)(1-x^2) \\\\\nP_{4}^{3}(x)&= - 105x(1-x^2)^{3/2} \\\\\nP_{4}^{4}(x)&=105(1-x^2)^{2}\n\\end{align}"
},
{
"math_id": 25,
"text": "(\\ell-m-1)(\\ell-m)P_{\\ell}^{m}(x) = -P_{\\ell}^{m+2}(x) + P_{\\ell-2}^{m+2}(x) + (\\ell+m)(\\ell+m-1)P_{\\ell-2}^{m}(x)"
},
{
"math_id": 26,
"text": "(\\ell-m+1)P_{\\ell+1}^{m}(x) = (2\\ell+1)xP_{\\ell}^{m}(x) - (\\ell+m)P_{\\ell-1}^{m}(x)"
},
{
"math_id": 27,
"text": "2mxP_{\\ell}^{m}(x)=-\\sqrt{1-x^2}\\left[P_{\\ell}^{m+1}(x)+(\\ell+m)(\\ell-m+1)P_{\\ell}^{m-1}(x)\\right]"
},
{
"math_id": 28,
"text": "\\frac{1}{\\sqrt{1-x^2}}P_\\ell^m(x) = \\frac{-1}{2m} \\left[ P_{\\ell-1}^{m+1}(x) + (\\ell+m-1)(\\ell+m)P_{\\ell-1}^{m-1}(x) \\right]"
},
{
"math_id": 29,
"text": "\\frac{1}{\\sqrt{1-x^2}}P_\\ell^m(x) = \\frac{-1}{2m} \\left[ P_{\\ell+1}^{m+1}(x) + (\\ell-m+1)(\\ell-m+2)P_{\\ell+1}^{m-1}(x) \\right]"
},
{
"math_id": 30,
"text": " \\sqrt{1-x^2}P_\\ell^m(x) = \\frac1{2\\ell+1} \\left[ (\\ell-m+1)(\\ell-m+2) P_{\\ell+1}^{m-1}(x) - (\\ell+m-1)(\\ell+m) P_{\\ell-1}^{m-1}(x) \\right] "
},
{
"math_id": 31,
"text": " \\sqrt{1-x^2}P_\\ell^m(x) = \\frac{-1}{2\\ell+1} \\left[ P_{\\ell+1}^{m+1}(x) - P_{\\ell-1}^{m+1}(x) \\right] "
},
{
"math_id": 32,
"text": "\\sqrt{1-x^2}P_\\ell^{m+1}(x) = (\\ell-m)xP_{\\ell}^{m}(x) - (\\ell+m)P_{\\ell-1}^{m}(x)"
},
{
"math_id": 33,
"text": "\\sqrt{1-x^2}P_\\ell^{m+1}(x) = (\\ell-m+1)P_{\\ell+1}^m(x) - (\\ell+m+1)xP_\\ell^m(x)"
},
{
"math_id": 34,
"text": " \\sqrt{1-x^2}\\frac{d}{dx}{P_\\ell^m}(x) = \\frac12 \\left[ (\\ell+m)(\\ell-m+1)P_\\ell^{m-1}(x) - P_\\ell^{m+1}(x) \\right] "
},
{
"math_id": 35,
"text": " (1-x^2)\\frac{d}{dx}{P_\\ell^m}(x) = \\frac1{2\\ell+1} \\left[ (\\ell+1)(\\ell+m)P_{\\ell-1}^m(x) - \\ell(\\ell-m+1)P_{\\ell+1}^m(x) \\right] "
},
{
"math_id": 36,
"text": "(x^2-1)\\frac{d}{dx}{P_{\\ell}^{m}}(x) = {\\ell}xP_{\\ell}^{m}(x) - (\\ell+m)P_{\\ell-1}^{m}(x)"
},
{
"math_id": 37,
"text": "(x^2-1)\\frac{d}{dx}{P_{\\ell}^{m}}(x) = -(\\ell+1)xP_{\\ell}^{m}(x) + (\\ell-m+1)P_{\\ell+1}^{m}(x)"
},
{
"math_id": 38,
"text": "(x^2-1)\\frac{d}{dx}{P_{\\ell}^{m}}(x) = \\sqrt{1-x^2}P_{\\ell}^{m+1}(x) + mxP_{\\ell}^{m}(x)"
},
{
"math_id": 39,
"text": "(x^2-1)\\frac{d}{dx}{P_{\\ell}^{m}}(x) = -(\\ell+m)(\\ell-m+1)\\sqrt{1-x^2}P_{\\ell}^{m-1}(x) - mxP_{\\ell}^{m}(x)"
},
{
"math_id": 40,
"text": "P_{\\ell +1}^{\\ell +1}(x) = - (2\\ell+1) \\sqrt{1-x^2} P_{\\ell}^{\\ell}(x)"
},
{
"math_id": 41,
"text": "P_{\\ell}^{\\ell}(x) = (-1)^\\ell (2\\ell-1)!! (1- x^2)^{(\\ell/2)}"
},
{
"math_id": 42,
"text": "P_{\\ell +1}^{\\ell}(x) = x (2\\ell+1) P_{\\ell}^{\\ell}(x)"
},
{
"math_id": 43,
"text": "\\begin{align}\n\\frac{1}{2} \\int_{-1}^1 P_l^u(x) P_m^v(x) P_n^w(x) dx\n={}&{}(-1)^{s-m-w}\\frac{(m+v)!(n+w)!(2s-2n)!s!}{(m-v)!(s-l)!(s-m)!(s-n)!(2s+1)!} \\\\\n &{}\\times \\ \\sum_{t=p}^q (-1)^t \\frac{(l+u+t)!(m+n-u-t)!}{t!(l-u-t)!(m-n+u+t)!(n-w-t)!}\n\\end{align}"
},
{
"math_id": 44,
"text": "l,m,n\\ge0"
},
{
"math_id": 45,
"text": "u,v,w\\ge 0"
},
{
"math_id": 46,
"text": "u"
},
{
"math_id": 47,
"text": "u=v+w"
},
{
"math_id": 48,
"text": " m\\ge n"
},
{
"math_id": 49,
"text": " 2s = l+m+n "
},
{
"math_id": 50,
"text": " p = \\max(0,\\,n-m-u) "
},
{
"math_id": 51,
"text": " q = \\min(m+n-u,\\,l-u,\\,n-w) "
},
{
"math_id": 52,
"text": "s"
},
{
"math_id": 53,
"text": "m+n\\ge l \\ge m-n"
},
{
"math_id": 54,
"text": "P_{\\lambda}^{\\mu}(z) = \\frac{1}{\\Gamma(1-\\mu)} \\left[\\frac{1+z}{1-z}\\right]^{\\mu/2} \\,_2F_1 (-\\lambda, \\lambda+1; 1-\\mu; \\frac{1-z}{2})"
},
{
"math_id": 55,
"text": "\\Gamma"
},
{
"math_id": 56,
"text": " _2F_1"
},
{
"math_id": 57,
"text": "\\,_2F_1 (\\alpha, \\beta; \\gamma; z) = \\frac{\\Gamma(\\gamma)}{\\Gamma(\\alpha)\\Gamma(\\beta)} \\sum_{n=0}^\\infty\\frac{\\Gamma(n+\\alpha)\\Gamma(n+\\beta)}{\\Gamma(n+\\gamma)\\ n!}z^n,"
},
{
"math_id": 58,
"text": "(1-z^2)\\,y'' -2zy' + \\left(\\lambda[\\lambda+1] - \\frac{\\mu^2}{1-z^2}\\right)\\,y = 0.\\,"
},
{
"math_id": 59,
"text": "Q_\\lambda^{\\mu}(z)"
},
{
"math_id": 60,
"text": "Q_{\\lambda}^{\\mu}(z) = \\frac{\\sqrt{\\pi}\\ \\Gamma(\\lambda+\\mu+1)}{2^{\\lambda+1}\\Gamma(\\lambda+3/2)}\\frac{1}{z^{\\lambda+\\mu+1}}(1-z^2)^{\\mu/2} \\,_2F_1 \\left(\\frac{\\lambda+\\mu+1}{2}, \\frac{\\lambda+\\mu+2}{2}; \\lambda+\\frac{3}{2}; \\frac{1}{z^2}\\right)"
},
{
"math_id": 61,
"text": "P_\\lambda^{\\mu}(z)"
},
{
"math_id": 62,
"text": "x = \\cos\\theta"
},
{
"math_id": 63,
"text": "P_\\ell^{m}(\\cos\\theta) = (-1)^m (\\sin \\theta)^m\\ \\frac{d^m}{d(\\cos\\theta)^m}\\left(P_\\ell(\\cos\\theta)\\right)"
},
{
"math_id": 64,
"text": "(1 - x^2)^{1 / 2} = \\sin\\theta"
},
{
"math_id": 65,
"text": "\\begin{align}\nP_0^0(\\cos\\theta) & = 1 \\\\[8pt]\nP_1^0(\\cos\\theta) & = \\cos\\theta \\\\[8pt]\nP_1^1(\\cos\\theta) & = -\\sin\\theta \\\\[8pt]\nP_2^0(\\cos\\theta) & = \\tfrac{1}{2} (3\\cos^2\\theta-1) \\\\[8pt]\nP_2^1(\\cos\\theta) & = -3\\cos\\theta\\sin\\theta \\\\[8pt]\nP_2^2(\\cos\\theta) & = 3\\sin^2\\theta \\\\[8pt]\nP_3^0(\\cos\\theta) & = \\tfrac{1}{2} (5\\cos^3\\theta-3\\cos\\theta) \\\\[8pt]\nP_3^1(\\cos\\theta) & = -\\tfrac{3}{2} (5\\cos^2\\theta-1)\\sin\\theta \\\\[8pt]\nP_3^2(\\cos\\theta) & = 15\\cos\\theta\\sin^2\\theta \\\\[8pt]\nP_3^3(\\cos\\theta) & = -15\\sin^3\\theta \\\\[8pt]\nP_4^0(\\cos\\theta) & = \\tfrac{1}{8} (35\\cos^4\\theta-30\\cos^2\\theta+3) \\\\[8pt]\nP_4^1(\\cos\\theta) & = - \\tfrac{5}{2} (7\\cos^3\\theta-3\\cos\\theta)\\sin\\theta \\\\[8pt]\nP_4^2(\\cos\\theta) & = \\tfrac{15}{2} (7\\cos^2\\theta-1)\\sin^2\\theta \\\\[8pt]\nP_4^3(\\cos\\theta) & = -105\\cos\\theta\\sin^3\\theta \\\\[8pt]\nP_4^4(\\cos\\theta) & = 105\\sin^4\\theta\n\\end{align}"
},
{
"math_id": 66,
"text": "P_\\ell^m(\\cos\\theta)"
},
{
"math_id": 67,
"text": "[0, \\pi]"
},
{
"math_id": 68,
"text": "\\sin \\theta"
},
{
"math_id": 69,
"text": "\\int_0^\\pi P_k^{m}(\\cos\\theta) P_\\ell^{m}(\\cos\\theta)\\,\\sin\\theta\\,d\\theta = \\frac{2 (\\ell+m)!}{(2\\ell+1)(\\ell-m)!}\\ \\delta _{k,\\ell}"
},
{
"math_id": 70,
"text": "\\int_0^\\pi P_\\ell^{m}(\\cos\\theta) P_\\ell^{n}(\\cos\\theta) \\csc\\theta\\,d\\theta = \\begin{cases} 0 & \\text{if } m\\neq n \\\\ \\frac{(\\ell+m)!}{m(\\ell-m)!} & \\text{if } m=n\\neq0 \\\\ \\infty & \\text{if } m=n=0\\end{cases}"
},
{
"math_id": 71,
"text": "P_\\ell^{m}(\\cos \\theta)"
},
{
"math_id": 72,
"text": "\\frac{d^{2}y}{d\\theta^2} + \\cot \\theta \\frac{dy}{d\\theta} + \\left[\\lambda - \\frac{m^2}{\\sin^2\\theta}\\right]\\,y = 0\\,"
},
{
"math_id": 73,
"text": "\\ge"
},
{
"math_id": 74,
"text": "\\lambda = \\ell(\\ell+1)\\,"
},
{
"math_id": 75,
"text": "\\theta"
},
{
"math_id": 76,
"text": "\\phi"
},
{
"math_id": 77,
"text": "\\nabla^2\\psi + \\lambda\\psi = 0"
},
{
"math_id": 78,
"text": "\\nabla^2\\psi = \\frac{\\partial^2\\psi}{\\partial\\theta^2} + \\cot \\theta \\frac{\\partial \\psi}{\\partial \\theta} + \\csc^2 \\theta\\frac{\\partial^2\\psi}{\\partial\\phi^2}."
},
{
"math_id": 79,
"text": "\\frac{\\partial^2\\psi}{\\partial\\theta^2} + \\cot \\theta \\frac{\\partial \\psi}{\\partial \\theta} + \\csc^2 \\theta\\frac{\\partial^2\\psi}{\\partial\\phi^2} + \\lambda \\psi = 0"
},
{
"math_id": 80,
"text": "\\sin(m\\phi)"
},
{
"math_id": 81,
"text": "\\cos(m\\phi)"
},
{
"math_id": 82,
"text": "\\ell{\\ge}m"
},
{
"math_id": 83,
"text": "\\lambda = \\ell(\\ell+1)"
},
{
"math_id": 84,
"text": "\\nabla^2\\psi + \\lambda\\psi = 0"
},
{
"math_id": 85,
"text": "P_\\ell^{m}(\\cos \\theta)\\ \\cos (m\\phi)\\ \\ \\ \\ 0 \\le m \\le \\ell"
},
{
"math_id": 86,
"text": "P_\\ell^{m}(\\cos \\theta)\\ \\sin (m\\phi)\\ \\ \\ \\ 0 < m \\le \\ell."
},
{
"math_id": 87,
"text": "Y_{\\ell, m}(\\theta, \\phi) = \\sqrt{\\frac{(2\\ell+1)(\\ell-m)!}{4\\pi(\\ell+m)!}}\\ P_\\ell^{m}(\\cos \\theta)\\ e^{im\\phi}\\qquad -\\ell \\le m \\le \\ell.\n"
},
{
"math_id": 88,
"text": "Y_{\\ell, m}(\\theta, \\phi)"
},
{
"math_id": 89,
"text": "Y_{\\ell, m}^*(\\theta, \\phi) = (-1)^m Y_{\\ell, -m}(\\theta, \\phi)."
},
{
"math_id": 90,
"text": "\\nabla^2\\psi(\\theta, \\phi) + \\lambda\\psi(\\theta, \\phi) = 0,"
}
] | https://en.wikipedia.org/wiki?curid=1062015 |
10620457 | Lambek–Moser theorem | On integer partitions from monotonic functions
The Lambek–Moser theorem is a mathematical description of partitions of the natural numbers into two complementary sets. For instance, it applies to the partition of numbers into even and odd, or into prime and non-prime (one and the composite numbers). There are two parts to the Lambek–Moser theorem. One part states that any two non-decreasing integer functions that are inverse, in a certain sense, can be used to split the natural numbers into two complementary subsets, and the other part states that every complementary partition can be constructed in this way. When a formula is known for the formula_0th natural number in a set, the Lambek–Moser theorem can be used to obtain a formula for the formula_0th number not in the set.
The Lambek–Moser theorem belongs to combinatorial number theory. It is named for Joachim Lambek and Leo Moser, who published it in 1954, and should be distinguished from an unrelated theorem of Lambek and Moser, later strengthened by Wild, on the number of primitive Pythagorean triples. It extends Rayleigh's theorem, which describes complementary pairs of Beatty sequences, the sequences of rounded multiples of irrational numbers.
From functions to partitions.
Let formula_1 be any function from positive integers to non-negative integers that is both non-decreasing (each value in the sequence formula_5 is at least as large as any earlier value) and unbounded (it eventually increases past any fixed value).
The sequence of its values may skip some numbers, so it might not have an inverse function with the same properties. Instead, define a non-decreasing and unbounded integer function formula_2 that is as close as possible to the inverse in the sense that, for all positive integers formula_0,
formula_6
Equivalently, formula_7 may be defined as the number of values formula_8 for which formula_9.
It follows from either of these definitions that formula_10. If the two functions formula_1 and formula_2 are plotted as histograms, they form mirror images of each other across the diagonal line formula_11.
From these two functions formula_1 and formula_2, define two more functions formula_3 and formula_4, from positive integers to positive integers, by
formula_12
Then the first part of the Lambek–Moser theorem states that each positive integer occurs exactly once among the values of either formula_3 or formula_4.
That is, the values obtained from formula_3 and the values obtained from formula_4 form two complementary sets of positive integers. More strongly, each of these two functions maps its argument formula_0 to the formula_0th member of its set in the partition.
As an example of the construction of a partition from a function, let formula_13, the function that squares its argument. Then its inverse is the square root function, whose closest integer approximation (in the sense used for the Lambek–Moser theorem) is formula_14.
These two functions give formula_15 and formula_16
For formula_17 the values of formula_3 are the pronic numbers
2, 6, 12, 20, 30, 42, 56, 72, 90, 110, ...
while the values of formula_4 are
1, 3, 4, 5, 7, 8, 9, 10, 11, 13, 14, ...
These two sequences are complementary: each positive integer belongs to exactly one of them. The Lambek–Moser theorem states that this phenomenon is not specific to the pronic numbers, but rather it arises for any choice of formula_1 with the appropriate properties.
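To make the construction concrete, the following minimal Python sketch (an illustration, not part of the original theorem statement; the function names are arbitrary) computes formula_2 by direct counting and checks that the values of formula_3 and formula_4 are complementary for the square example above:

```python
def f(n):
    """Example non-decreasing, unbounded function: f(n) = n**2."""
    return n * n

def f_star(n):
    """Number of positive integers x with f(x) < n (the near-inverse of f)."""
    count, x = 0, 1
    while f(x) < n:
        count += 1
        x += 1
    return count

def F(n):       # n-th member of the first set (here the pronic numbers)
    return f(n) + n

def F_star(n):  # n-th member of the complementary set
    return f_star(n) + n

N = 30
A = {F(n) for n in range(1, N + 1)}
B = {F_star(n) for n in range(1, N + 1)}
limit = min(max(A), max(B))
# every positive integer up to `limit` lies in exactly one of the two sets
assert all((k in A) != (k in B) for k in range(1, limit + 1))
print([F(n) for n in range(1, 6)])        # [2, 6, 12, 20, 30]
print([F_star(n) for n in range(1, 6)])   # [1, 3, 4, 5, 7]
```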
From partitions to functions.
The second part of the Lambek–Moser theorem states that this construction of partitions from inverse functions is universal, in the sense that it can explain any partition of the positive integers into two infinite parts. If formula_18 and formula_19 are any two complementary increasing sequences of integers, one may construct a pair of functions formula_1 and formula_2 from which this partition may be derived using the Lambek–Moser theorem. To do so, define formula_20 and formula_21.
One of the simplest examples to which this could be applied is the partition of positive integers into even and odd numbers. The functions formula_22 and formula_23 should give the formula_0th even or odd number, respectively, so formula_24 and formula_25. From these are derived the two functions formula_26 and formula_27. They form an inverse pair, and the partition generated via the Lambek–Moser theorem from this pair is just the partition of the positive integers into even and odd numbers. Another integer partition, into evil numbers and odious numbers (by the parity of the binary representation) uses almost the same functions, adjusted by the values of the Thue–Morse sequence.
Limit formula.
In the same work in which they proved the Lambek–Moser theorem, Lambek and Moser provided a method of going directly from formula_3, the function giving the formula_0th member of a set of positive integers, to formula_4, the function giving the formula_0th non-member, without going through formula_1 and formula_2. Let formula_28 denote the number of values of formula_8 for which formula_29; this is an approximation to the inverse function of formula_3, but (because it uses formula_30 in place of formula_31) offset by one from the type of inverse used to define formula_2 from formula_1. Then, for any formula_0, formula_23 is the limit of the sequence
formula_32
meaning that this sequence eventually becomes constant, and the value it takes when it does is formula_23.
Lambek and Moser used the prime numbers as an example, following earlier work by Viggo Brun and D. H. Lehmer. If formula_33 is the prime-counting function (the number of primes less than or equal to formula_0), then the formula_0th non-prime (1 or a composite number) is given by the limit of the sequence
formula_34
For some other sequences of integers, the corresponding limit converges in a fixed number of steps, and a direct formula for the complementary sequence is possible. In particular, the formula_0th positive integer that is not a formula_35th power can be obtained from the limiting formula as
formula_36
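A small Python sketch of this limiting process (illustrative only; the trial-division prime counter is chosen for readability rather than efficiency) iterates the sequence until it stabilizes, and also evaluates the closed form for the formula_0th non-square:

```python
import math

def is_prime(k):
    if k < 2:
        return False
    return all(k % d for d in range(2, math.isqrt(k) + 1))

def prime_pi(m):
    """pi(m): the number of primes less than or equal to m."""
    return sum(is_prime(k) for k in range(2, m + 1))

def nth_nonprime(n):
    """n-th positive integer that is not prime (1 or a composite), via the limit formula."""
    m = n
    while True:
        m_next = n + prime_pi(m)
        if m_next == m:          # the sequence has become constant
            return m
        m = m_next

def nth_nonsquare(n):
    """Closed form for the n-th non-square (the k = 2 case of the formula above)."""
    return n + math.isqrt(n + math.isqrt(n))

print([nth_nonprime(n) for n in range(1, 8)])    # [1, 4, 6, 8, 9, 10, 12]
print([nth_nonsquare(n) for n in range(1, 8)])   # [2, 3, 5, 6, 7, 8, 10]
```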
History and proofs.
The theorem was discovered by Leo Moser and Joachim Lambek, who published it in 1954. Moser and Lambek cite the previous work of Samuel Beatty on Beatty sequences as their inspiration, and also cite the work of Viggo Brun and D. H. Lehmer from the early 1930s on methods related to their limiting formula for formula_4. Edsger W. Dijkstra has provided a visual proof of the result, and later another proof based on algorithmic reasoning. Yuval Ginosar has provided an intuitive proof based on an analogy of two athletes running in opposite directions around a circular racetrack.
Related results.
For non-negative integers.
A variation of the theorem applies to partitions of the non-negative integers, rather than to partitions of the positive integers. For this variation, every partition corresponds to a Galois connection of the ordered non-negative integers to themselves. This is a pair of non-decreasing functions
formula_37 with the property that, for all formula_8 and formula_38, formula_39 if and only if formula_40. The corresponding functions formula_3 and formula_4 are defined slightly less symmetrically by formula_41 and formula_42. For functions defined in this way, the values of formula_3 and formula_4 (for non-negative arguments, rather than positive arguments) form a partition of the non-negative integers, and every partition can be constructed in this way.
Rayleigh's theorem.
Rayleigh's theorem states that for two positive irrational numbers formula_43 and formula_44, both greater than one, with formula_45,
the two sequences formula_46 and formula_47 for formula_48, obtained by rounding down to an integer the multiples of formula_43 and formula_44, are complementary. It can be seen as an instance of the Lambek–Moser theorem with formula_49 and formula_50. The condition that formula_43 and formula_44 be greater than one implies that these two functions are non-decreasing; the derived functions are formula_51 and formula_52 The sequences of values of formula_3 and formula_4 forming the derived partition are known as Beatty sequences, after Samuel Beatty's 1926 rediscovery of Rayleigh's theorem.
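As a quick numerical check of this correspondence (an illustrative sketch; the golden ratio is just one convenient irrational choice for formula_43), the two Beatty sequences can be generated and compared directly:

```python
import math

r = (1 + math.sqrt(5)) / 2    # golden ratio; with s = r/(r-1) we have 1/r + 1/s = 1
s = r / (r - 1)               # equals r + 1 for this particular r

N = 25
A = {math.floor(i * r) for i in range(1, N + 1)}   # 1, 3, 4, 6, 8, 9, 11, ...
B = {math.floor(i * s) for i in range(1, N + 1)}   # 2, 5, 7, 10, 13, 15, ...

# complementary up to the point where both sequences have been fully generated
limit = min(max(A), max(B))
assert all((k in A) != (k in B) for k in range(1, limit + 1))
```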
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "f^*"
},
{
"math_id": 3,
"text": "F"
},
{
"math_id": 4,
"text": "F^*"
},
{
"math_id": 5,
"text": "f(1),f(2),f(3),\\dots"
},
{
"math_id": 6,
"text": "f\\bigl(f^*(n)\\bigr) < n \\le f\\bigl(f^*(n)+1\\bigr)."
},
{
"math_id": 7,
"text": "f^*(n)"
},
{
"math_id": 8,
"text": "x"
},
{
"math_id": 9,
"text": "f(x)<n"
},
{
"math_id": 10,
"text": "f^*{}^*=f"
},
{
"math_id": 11,
"text": "x=y"
},
{
"math_id": 12,
"text": "\n\\begin{align}\nF(n)&=f(n)+n\\\\\nF^*(n)&=f^*(n)+n\\\\\n\\end{align}"
},
{
"math_id": 13,
"text": "f(n)=n^2"
},
{
"math_id": 14,
"text": "f^*(n)=\\lfloor\\sqrt{n-1}\\rfloor"
},
{
"math_id": 15,
"text": "F(n)=n^2+n"
},
{
"math_id": 16,
"text": "F^*(n)=\\lfloor\\sqrt{n-1}\\rfloor+n."
},
{
"math_id": 17,
"text": "n=1,2,3,\\dots"
},
{
"math_id": 18,
"text": "S=s_1,s_2,\\dots"
},
{
"math_id": 19,
"text": "S^*=s^*_1,s^*_2,\\dots"
},
{
"math_id": 20,
"text": "f(n)=s_n-n"
},
{
"math_id": 21,
"text": "f^*(n)=s^*_n-n"
},
{
"math_id": 22,
"text": "F(n)"
},
{
"math_id": 23,
"text": "F^*(n)"
},
{
"math_id": 24,
"text": "F(n)=2n"
},
{
"math_id": 25,
"text": "F^*(n)=2n-1"
},
{
"math_id": 26,
"text": "f(n)=F(n)-n=n"
},
{
"math_id": 27,
"text": "f^*(n)=F^*(n)-n=n-1"
},
{
"math_id": 28,
"text": "F^{\\#}(n)"
},
{
"math_id": 29,
"text": "F(x)\\le n"
},
{
"math_id": 30,
"text": "\\le"
},
{
"math_id": 31,
"text": "<"
},
{
"math_id": 32,
"text": "n, n+F^{\\#}(n), n+F^{\\#}\\bigl(n+F^{\\#}(n)\\bigr), \\dots,"
},
{
"math_id": 33,
"text": "\\pi(n)"
},
{
"math_id": 34,
"text": "n, n+\\pi(n), n+\\pi\\bigl(n+\\pi(n)\\bigr), \\dots"
},
{
"math_id": 35,
"text": "k"
},
{
"math_id": 36,
"text": "n+\\left\\lfloor\\sqrt[k]{n + \\lfloor\\sqrt[k]{n}\\rfloor}\\right\\rfloor."
},
{
"math_id": 37,
"text": "(f,f^*)"
},
{
"math_id": 38,
"text": "y"
},
{
"math_id": 39,
"text": "f(x)\\le y"
},
{
"math_id": 40,
"text": "x\\le f(y)"
},
{
"math_id": 41,
"text": "F(n)=f(n)+n"
},
{
"math_id": 42,
"text": "F^*(n)=f^*(n)+n+1"
},
{
"math_id": 43,
"text": "r"
},
{
"math_id": 44,
"text": "s"
},
{
"math_id": 45,
"text": "\\tfrac1r+\\tfrac1s=1"
},
{
"math_id": 46,
"text": "\\lfloor i\\cdot r\\rfloor"
},
{
"math_id": 47,
"text": "\\lfloor i\\cdot s\\rfloor"
},
{
"math_id": 48,
"text": "i=1,2,3,\\dots"
},
{
"math_id": 49,
"text": "f(n)=\\lfloor rn\\rfloor-n"
},
{
"math_id": 50,
"text": "f^\\ast(n)=\\lfloor sn\\rfloor-n"
},
{
"math_id": 51,
"text": "F(n)=\\lfloor rn\\rfloor"
},
{
"math_id": 52,
"text": "F^*(n)=\\lfloor sn\\rfloor."
}
] | https://en.wikipedia.org/wiki?curid=10620457 |
10621792 | Filtration fraction | In renal physiology, the filtration fraction is the ratio of the glomerular filtration rate (GFR) over the renal plasma flow (RPF).
Filtration Fraction, FF = GFR/RPF, or formula_0.
The filtration fraction, therefore, represents the proportion of the fluid reaching the kidneys that passes into the renal tubules. It is normally about 20%.
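A minimal worked example (the numbers are typical textbook values assumed purely for illustration, not measurements cited in this article):

```python
gfr = 125.0   # glomerular filtration rate, mL/min (assumed typical value)
rpf = 625.0   # renal plasma flow, mL/min (assumed typical value)

ff = gfr / rpf
print(f"Filtration fraction = {ff:.0%}")   # 20%, consistent with the usual figure
```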
GFR on its own is the most common and important measure of renal function. However, in conditions such as renal artery stenosis, blood flow to the kidneys is reduced. Filtration fraction must therefore be increased in order to perform the normal functions of the kidney. Loop diuretics and thiazide diuretics decrease filtration fraction.
Catecholamines (norepinephrine and epinephrine) increase filtration fraction by vasoconstriction of afferent and efferent arterioles, possibly through activation of alpha-1 adrenergic receptors.
Severe hemorrhage will also result in an increased filtration fraction.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "FF = \\frac{GFR}{RPF}"
}
] | https://en.wikipedia.org/wiki?curid=10621792 |
106218 | Cooperative binding | Molecular mechanism
Cooperative binding occurs in molecular binding systems containing more than one type, or species, of molecule and in which one of the partners is not mono-valent and can bind more than one molecule of the other species. In general, molecular binding is an interaction between molecules that results in a stable physical association between those molecules.
Cooperative binding occurs in a molecular binding system where two or more "ligand" molecules can bind to a "receptor" molecule. Binding can be considered "cooperative" if the actual binding of the first molecule of the ligand to the receptor changes the binding affinity of the second ligand molecule. In that case, the binding of ligand molecules to the different sites on the receptor molecule does not constitute a set of mutually independent events. Cooperativity can be positive or negative, meaning that it becomes more or less likely that successive ligand molecules will bind to the receptor molecule.
Cooperative binding is observed in many biopolymers, including proteins and nucleic acids. Cooperative binding has been shown to be the mechanism underlying a large range of biochemical and physiological processes.
History and mathematical formalisms.
Christian Bohr and the concept of cooperative binding.
In 1904, Christian Bohr studied hemoglobin binding to oxygen under different conditions. When plotting hemoglobin saturation with oxygen as a function of the partial pressure of oxygen, he obtained a sigmoidal (or "S-shaped") curve. This indicates that the more oxygen is bound to hemoglobin, the easier it is for more oxygen to bind - until all binding sites are saturated. In addition, Bohr noticed that increasing CO2 pressure shifted this curve to the right - i.e. higher concentrations of CO2 make it more difficult for hemoglobin to bind oxygen. This latter phenomenon, together with the observation that hemoglobin's affinity for oxygen increases with increasing pH, is known as the Bohr effect.
A receptor molecule is said to exhibit cooperative binding if its binding to ligand scales non-linearly with ligand concentration. Cooperativity can be positive (if binding of a ligand molecule increases the receptor's apparent affinity, and hence increases the chance of another ligand molecule binding) or negative (if binding of a ligand molecule decreases affinity and hence makes binding of other ligand molecules less likely). The "fractional occupancy" formula_0 of a receptor with a given ligand is defined as the quantity of ligand-bound binding sites divided by the total quantity of ligand binding sites:
formula_1
If formula_2, then the protein is completely unbound, and if formula_3, it is completely saturated. If the plot of formula_0 at equilibrium as a function of ligand concentration is sigmoidal in shape, as observed by Bohr for hemoglobin, this indicates positive cooperativity. If it is not, no statement can be made about cooperativity from looking at this plot alone.
The concept of cooperative binding only applies to molecules or complexes with more than one ligand binding sites. If several ligand binding sites exist, but ligand binding to any one site does not affect the others, the receptor is said to be non-cooperative. Cooperativity can be homotropic, if a ligand influences the binding of ligands of the same kind, or heterotropic, if it influences binding of other kinds of ligands. In the case of hemoglobin, Bohr observed homotropic positive cooperativity (binding of oxygen facilitates binding of more oxygen) and heterotropic negative cooperativity (binding of CO2 reduces hemoglobin's facility to bind oxygen.)
Throughout the 20th century, various frameworks have been developed to describe the binding of a ligand to a protein with more than one binding site and the cooperative effects observed in this context.
The Hill equation.
The first description of cooperative binding to a multi-site protein was developed by A.V. Hill. Drawing on observations of oxygen binding to hemoglobin and the idea that cooperativity arose from the aggregation of hemoglobin molecules, each one binding one oxygen molecule, Hill suggested a phenomenological equation that has since been named after him:
formula_4
where formula_5 is the "Hill coefficient", formula_6 denotes ligand concentration, formula_7 denotes an apparent association constant (used in the original form of the equation), formula_8 is an empirical dissociation constant, and formula_9 a microscopic dissociation constant (used in modern forms of the equation, and equivalent to an formula_10). If formula_11, the system exhibits negative cooperativity, whereas cooperativity is positive if formula_12. The total number of ligand binding sites is an upper bound for formula_5. The Hill equation can be linearized as:
formula_13
The "Hill plot" is obtained by plotting formula_14 versus formula_15. In the case of the Hill equation, it is a line with slope formula_16 and intercept formula_17. This means that cooperativity is assumed to be fixed, i.e. it does not change with saturation. It also means that binding sites always exhibit the same affinity, and cooperativity does not arise from an affinity increasing with ligand concentration.
The Adair equation.
G.S. Adair found that the Hill plot for hemoglobin was not a straight line, and hypothesized that binding affinity was not a fixed term, but dependent on ligand saturation. Having demonstrated that hemoglobin contained four hemes (and therefore binding sites for oxygen), he worked from the assumption that fully saturated hemoglobin is formed in stages, with intermediate forms with one, two, or three bound oxygen molecules. The formation of each intermediate stage from unbound hemoglobin can be described using an apparent macroscopic association constant formula_18. The resulting fractional occupancy can be expressed as:
formula_19
Or, for any protein with "n" ligand binding sites:
formula_20
where "n" denotes the number of binding sites and each formula_18 is a combined association constant, describing the binding of "i" ligand molecules.
By combining the Adair treatment with the Hill plot, one arrives at the modern experimental definition of cooperativity (Hill, 1985, Abeliovich, 2005). The resultant Hill coefficient, or more correctly the slope of the Hill plot as calculated from the Adair equation, can be shown to be the ratio of the variance of the binding number to the variance of the binding number in an equivalent system of non-interacting binding sites. Thus, the Hill coefficient defines cooperativity as a statistical dependence of one binding site on the state of the other site(s).
The Klotz equation.
Working on calcium binding proteins, Irving Klotz deconvoluted Adair's association constants by considering stepwise formation of the intermediate stages, and tried to express the cooperative binding in terms of elementary processes governed by mass action law. In his framework, formula_21 is the association constant governing binding of the first ligand molecule, formula_22 the association constant governing binding of the second ligand molecule (once the first is already bound) etc. For formula_0, this gives:
formula_23
It is worth noting that the constants formula_21, formula_22 and so forth do not relate to individual binding sites. They describe "how many" binding sites are occupied, rather than "which ones". This form has the advantage that cooperativity is easily recognised when considering the association constants. If all ligand binding sites are identical with a microscopic association constant formula_7, one would expect formula_24 (that is formula_25) in the absence of cooperativity. We have positive cooperativity if formula_18 lies above these expected values for formula_26.
The Klotz equation (which is sometimes also called the Adair-Klotz equation) is still often used in the experimental literature to describe measurements of ligand binding in terms of sequential apparent binding constants.
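The stepwise form lends itself to a quick numerical check (a sketch with arbitrary illustrative constants): with the non-cooperative choice formula_25 the occupancy collapses to simple hyperbolic binding, while larger late-step constants produce a steeper, cooperative curve.

```python
import numpy as np

def klotz_occupancy(K_steps, X):
    """Fractional occupancy from stepwise association constants K_1..K_n."""
    cum = np.cumprod(K_steps)                 # K1, K1*K2, ..., K1*...*Kn
    n = len(K_steps)
    num = sum((i + 1) * cum[i] * X**(i + 1) for i in range(n))
    den = 1 + sum(cum[i] * X**(i + 1) for i in range(n))
    return num / (n * den)

K, n = 1.0, 4
K_noncoop = [(n - i + 1) / i * K for i in range(1, n + 1)]   # statistical factors only
K_coop = [0.1, 0.3, 1.0, 10.0]                               # later steps bind more tightly

X = np.logspace(-2, 2, 9)
assert np.allclose(klotz_occupancy(K_noncoop, X), K * X / (1 + K * X))  # no cooperativity
print(klotz_occupancy(K_coop, X))                                       # sigmoidal response
```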
Pauling equation.
By the middle of the 20th century, there was increasing interest in models that would not only describe binding curves phenomenologically, but also offer an underlying biochemical mechanism. Linus Pauling reinterpreted the equation provided by Adair, assuming that his constants combined the binding constant for the ligand (formula_7 in the equation below) with the energy coming from the interaction between subunits of the cooperative protein (formula_27 below). Pauling actually derived several equations, depending on the degree of interaction between subunits. Based on incorrect assumptions about the localization of the hemes, he opted for the wrong one to describe oxygen binding by hemoglobin, assuming that the subunits were arranged in a square. The equation below gives the version for a tetrahedral structure, which would be more accurate in the case of hemoglobin:
formula_28
The KNF model.
Based on results showing that the structure of cooperative proteins changed upon binding to their ligand, Daniel Koshland and colleagues refined the biochemical explanation of the mechanism described by Pauling. The Koshland-Némethy-Filmer (KNF) model assumes that each subunit can exist in one of two conformations: active or inactive. Ligand binding to one subunit would induce an immediate conformational change of that subunit from the inactive to the active conformation, a mechanism described as "induced fit". Cooperativity, according to the KNF model, would arise from interactions between the subunits, the strength of which varies depending on the relative conformations of the subunits involved. For a tetrahedral structure (they also considered linear and square structures), they proposed the following formula:
formula_29
Where formula_30 is the constant of association for X, formula_31 is the ratio of B and A states in the absence of ligand ("transition"), formula_32 and formula_33 are the relative stabilities of pairs of neighbouring subunits relative to a pair where both subunits are in the A state (Note that the KNF paper actually presents formula_34, the number of occupied sites, which is here 4 times formula_0).
The MWC model.
The Monod-Wyman-Changeux (MWC) model for concerted allosteric transitions went a step further by exploring cooperativity based on thermodynamics and three-dimensional conformations. It was originally formulated for oligomeric proteins with symmetrically arranged, identical subunits, each of which has one ligand binding site. According to this framework, two (or more) interconvertible conformational states of an allosteric protein coexist in a thermal equilibrium. The states - often termed tense (T) and relaxed (R) - differ in affinity for the ligand molecule. The ratio between the two states is regulated by the binding of ligand molecules that stabilizes the higher-affinity state. Importantly, all subunits of a molecule change states at the same time, a phenomenon known as "concerted transition".
The allosteric isomerisation constant "L" describes the equilibrium between both states when no ligand molecule is bound: formula_35. If "L" is very large, most of the protein exists in the T state in the absence of ligand. If "L" is small (close to one), the R state is nearly as populated as the T state. The ratio of dissociation constants for the ligand from the T and R states is described by the constant "c": formula_36. If formula_37, both R and T states have the same affinity for the ligand and the ligand does not affect isomerisation. The value of "c" also indicates how much the equilibrium between T and R states changes upon ligand binding: the smaller "c", the more the equilibrium shifts towards the R state after one binding. With formula_38, fractional occupancy is described as:
formula_39
The sigmoid Hill plot of allosteric proteins can then be analysed as a progressive transition from the T state (low affinity) to the R state (high affinity) as the saturation increases. The slope of the Hill plot also depends on saturation, with a maximum value at the inflexion point. The intercepts of the two asymptotes with the y-axis allow the affinities of both states for the ligand to be determined.
In proteins, conformational change is often associated with activity, or activity towards specific targets. Such activity is often what is physiologically relevant or what is experimentally measured. The degree of conformational change is described by the state function formula_40, which denotes the fraction of protein present in the formula_41 state; formula_40 increases as more ligand molecules bind. The expression for formula_40 is:
formula_42
A crucial aspect of the MWC model is that the curves for formula_0 and formula_40 do not coincide, i.e. fractional saturation is not a direct indicator of conformational state (and hence, of activity). Moreover, the extents of the cooperativity of binding and the cooperativity of activation can be very different: an extreme case is provided by the bacterial flagellar motor, with a Hill coefficient of 1.7 for binding and 10.3 for activation. The supra-linearity of the response is sometimes called ultrasensitivity.
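The MWC expressions for formula_0 and formula_40 are straightforward to evaluate side by side. The sketch below (parameter values are arbitrary illustrations, not fitted to any particular protein) makes the difference between binding and conformational state visible:

```python
import numpy as np

def mwc_saturation(alpha, L, c, n):
    """Fractional saturation of the MWC model."""
    return ((alpha * (1 + alpha)**(n - 1) + L * c * alpha * (1 + c * alpha)**(n - 1))
            / ((1 + alpha)**n + L * (1 + c * alpha)**n))

def mwc_state(alpha, L, c, n):
    """State function: fraction of the protein in the R conformation."""
    return (1 + alpha)**n / ((1 + alpha)**n + L * (1 + c * alpha)**n)

alpha = np.logspace(-2, 2, 9)    # normalized ligand concentration [X]/Kd_R
L, c, n = 1000.0, 0.01, 4        # assumed allosteric constant, affinity ratio, subunit count
print(mwc_saturation(alpha, L, c, n))
print(mwc_state(alpha, L, c, n))   # the two curves do not coincide
```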
If an allosteric protein binds to a target that also has a higher affinity for the R state, then target binding further stabilizes the R state, hence increasing ligand affinity. If, on the other hand, a target preferentially binds to the T state, then target binding will have a negative effect on ligand affinity. Such targets are called allosteric modulators.
Since its inception, the MWC framework has been extended and generalized. Variations have been proposed, for example to cater for proteins with more than two states, proteins that bind to several types of ligands or several types of allosteric modulators and proteins with non-identical subunits or ligand-binding sites.
Examples.
The list of molecular assemblies that exhibit cooperative binding of ligands is very large, but some examples are particularly notable for their historical interest, their unusual properties, or their physiological importance.
As described in the historical section, the most famous example of cooperative binding is hemoglobin. Its quaternary structure, solved by Max Perutz using X-ray diffraction, exhibits a pseudo-symmetrical tetrahedron carrying four binding sites (hemes) for oxygen. Many other molecular assemblies exhibiting cooperative binding have been studied in great detail.
Multimeric enzymes.
The activity of many enzymes is regulated by allosteric effectors. Some of these enzymes are multimeric and carry several binding sites for the regulators.
Threonine deaminase was one of the first enzymes suggested to behave like hemoglobin and shown to bind ligands cooperatively. It was later shown to be a tetrameric protein.
Another enzyme that has been suggested early to bind ligands cooperatively is aspartate trans-carbamylase. Although initial models were consistent with four binding sites, its structure was later shown to be hexameric by William Lipscomb and colleagues.
Ion channels.
Most ion channels are formed of several identical or pseudo-identical monomers or domains, arranged symmetrically in biological membranes. Several classes of such channels whose opening is regulated by ligands exhibit cooperative binding of these ligands.
It was suggested as early as 1967 (when the exact nature of those channels was still unknown) that the nicotinic acetylcholine receptors bound acetylcholine in a cooperative manner due to the existence of several binding sites. The purification of the receptor and its characterization demonstrated a pentameric structure with binding sites located at the interfaces between subunits, confirmed by the structure of the receptor binding domain.
Inositol triphosphate (IP3) receptors form another class of ligand-gated ion channels exhibiting cooperative binding. The structure of those receptors shows four IP3 binding sites symmetrically arranged.
Multi-site molecules.
Although most proteins showing cooperative binding are multimeric complexes of homologous subunits, some proteins carry several binding sites for the same ligand on the same polypeptide. One such example is calmodulin. One molecule of calmodulin binds four calcium ions cooperatively. Its structure presents four EF-hand domains, each one binding one calcium ion. The molecule does not display a square or tetrahedron structure, but is formed of two lobes, each carrying two EF-hand domains.
Transcription factors.
Cooperative binding of proteins onto nucleic acids has also been shown. A classical example is the binding of the lambda phage repressor to its operators, which occurs cooperatively. Other examples of transcription factors exhibit positive cooperativity when binding their target, such as the repressor of the TtgABC pumps (n=1.6), as well as conditional cooperativity exhibited by the transcription factors HOXA11 and FOXO1.
Conversely, examples of negative cooperativity for the binding of transcription factors were also documented, as for the homodimeric repressor of the "Pseudomonas putida" cytochrome P450cam hydroxylase operon (n=0.56).
Conformational spread and binding cooperativity.
Early on, it was argued that some proteins, especially those consisting of many subunits, could be regulated by a generalized MWC mechanism, in which the transition between the R and T states is not necessarily synchronized across the entire protein. In 1969, Wyman proposed such a model with "mixed conformations" (i.e. some protomers in the R state, some in the T state) for respiratory proteins in invertebrates.
Following a similar idea, the conformational spread model by Duke and colleagues subsumes both the KNF and the MWC model as special cases. In this model, a subunit does not automatically change conformation upon ligand binding (as in the KNF model), nor do all subunits in a complex change conformations together (as in the MWC model). Conformational changes are stochastic with the likelihood of a subunit switching states depending on whether or not it is ligand bound and on the conformational state of neighbouring subunits. Thus, conformational states can "spread" around the entire complex.
Impact of upstream and downstream components on module's ultrasensitivity.
In a living cell, ultrasensitive modules are embedded in a bigger network with upstream and downstream components. These components may constrain the range of inputs that the module will receive as well as the range of the module's outputs that the network will be able to detect. The sensitivity of a modular system is affected by these restrictions. The dynamic range limitations imposed by downstream components can produce effective sensitivities much larger than that of the original module when considered in isolation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\bar{Y}"
},
{
"math_id": 1,
"text": "\n\\bar{Y}=\\frac{[\\text{bound sites}]}{[\\text{bound sites}]+[\\text{unbound sites}]} = \\frac{[\\text{bound sites}]}{[\\text{total sites}]}\n"
},
{
"math_id": 2,
"text": "\\bar{Y}=0"
},
{
"math_id": 3,
"text": "\\bar{Y}=1"
},
{
"math_id": 4,
"text": "\n\\bar{Y} = \\frac{K\\cdot{}[X]^n}{1+ K\\cdot{}[X]^n} = \\frac{[X]^n}{K^* + [X]^n} = \\frac{[X]^n}{K_d^n + [X]^n}\n"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "[X]"
},
{
"math_id": 7,
"text": "K"
},
{
"math_id": 8,
"text": "K^*"
},
{
"math_id": 9,
"text": "K_d"
},
{
"math_id": 10,
"text": "\\mathrm{EC}_{50}"
},
{
"math_id": 11,
"text": "n<1"
},
{
"math_id": 12,
"text": "n>1"
},
{
"math_id": 13,
"text": "\n\\log \\frac{\\bar{Y}}{1-\\bar{Y}} = n\\cdot{}\\log [X] - n\\cdot{}\\log K_d\n"
},
{
"math_id": 14,
"text": "\\log \\frac{\\bar{Y}}{1-\\bar{Y}}"
},
{
"math_id": 15,
"text": "\\log [X]"
},
{
"math_id": 16,
"text": "n_H"
},
{
"math_id": 17,
"text": "n\\cdot\\log(K_d)"
},
{
"math_id": 18,
"text": "K_i"
},
{
"math_id": 19,
"text": "\n\\bar{Y} = \\frac{1}{4}\\cdot{}\\frac{K_I[X]+2K_{II}[X]^2+3K_{III}[X]^3+4K_{IV}[X]^4}{1+K_I[X]+K_{II}[X]^2+K_{III}[X]^3+K_{IV}[X]^4}\n"
},
{
"math_id": 20,
"text": "\n\\bar{Y}=\\frac{1}{n}\\frac{K_I[X] + 2K_{II}[X]^2 + \\ldots + nK_{n} [X]^n}{1+K_I[X]+K_{II}[X]^2+ \\ldots +K_n[X]^n}\n"
},
{
"math_id": 21,
"text": "K_1"
},
{
"math_id": 22,
"text": "K_2"
},
{
"math_id": 23,
"text": "\n\\bar{Y}=\\frac{1}{n}\\frac{K_1[X] + 2K_1K_2[X]^2 + \\ldots + n\\left(K_1K_2 \\ldots K_n\\right)[X]^n}{1+K_1[X]+K_1K_2[X]^2+ \\ldots +\\left(K_1K_2 \\ldots K_n\\right)[X]^n}\n"
},
{
"math_id": 24,
"text": "K_1=nK, K_2=\\frac{n-1}{2}K, \\ldots K_n=\\frac{1}{n}K"
},
{
"math_id": 25,
"text": "K_i=\\frac{n-i+1}{i}K"
},
{
"math_id": 26,
"text": "i>1"
},
{
"math_id": 27,
"text": "\\alpha"
},
{
"math_id": 28,
"text": "\n\\bar{Y} = \\frac{K[X]+3\\alpha{}K^2[X]^2+3\\alpha{}^3K^3[X]^3+\\alpha{}^6K^4[X]^4}{1+4K[X]+6\\alpha{}K^2[X]^2+4\\alpha{}^3K^3[X]^3+\\alpha{}^6K^4[X]^4}\n"
},
{
"math_id": 29,
"text": "\n\\bar{Y} = \\frac{K_{AB}^3(K_XK_t[X])+3K_{AB}^4K_{BB}(K_XK_t[X])^2+3K_{AB}^3K_{BB}^3(K_XK_t[X])^3+K_{BB}^6(K_XK_t[X])^4}{1+4K_{AB}^3(K_XK_t[X])+6K_{AB}^4K_{BB}(K_XK_t[X])^2+4K_{AB}^3K_{BB}^3(K_XK_t[X])^3+K_{BB}^6(K_XK_t[X])^4} \n"
},
{
"math_id": 30,
"text": "K_X"
},
{
"math_id": 31,
"text": "K_t"
},
{
"math_id": 32,
"text": "K_{AB}"
},
{
"math_id": 33,
"text": "K_{BB}"
},
{
"math_id": 34,
"text": "N_s"
},
{
"math_id": 35,
"text": "L=\\frac{\\left[T_0\\right]}{\\left[R_0\\right]}"
},
{
"math_id": 36,
"text": "c = \\frac{K_d^R}{K_d^T}"
},
{
"math_id": 37,
"text": "c=1"
},
{
"math_id": 38,
"text": "\\alpha = \\frac{[X]}{K_d^R}"
},
{
"math_id": 39,
"text": "\n\\bar{Y} = \\frac{\\alpha(1+\\alpha)^{n-1}+Lc\\alpha(1+c\\alpha)^{n-1}}{(1+\\alpha)^n+L(1+c\\alpha)^n} \n"
},
{
"math_id": 40,
"text": "\\bar{R}"
},
{
"math_id": 41,
"text": "R"
},
{
"math_id": 42,
"text": "\n\\bar{R}=\\frac{(1+\\alpha)^n}{(1+\\alpha)^n+L(1+c\\alpha)^n}\n"
}
] | https://en.wikipedia.org/wiki?curid=106218 |
1062283 | Brazilian cruzado | Brazilian currency from 1986 to 1989
The cruzado was the currency of Brazil from 1986 to 1989. It replaced the second cruzeiro (at first called the "cruzeiro novo") in 1986, at a rate of 1 cruzado = 1000 cruzeiros (novos) and was replaced in 1989 by the cruzado novo at a rate of 1000 cruzados = 1 cruzado novo.
This currency was subdivided into 100 centavos; it had the symbol formula_0 and the ISO 4217 code "BRC".
Coins.
Standard.
Stainless-steel coins were introduced in 1986 in denominations of 1, 5, 10, 20 and 50 centavos, and 1 and 5 cruzados, with 10 cruzados following in 1987. Coin production ceased in 1988.
Commemorative.
Three designs of commemorative 100 cruzado coins, celebrating the 100th anniversary of the abolition of slavery in the country (the Lei Áurea), were produced in 1988. Although these coins were very rare in circulation, their numeral design was carried over into both the Cruzado Novo and the third Cruzeiro.
Banknotes.
The first banknotes were overprints on cruzeiro notes, in denominations of 10, 50 and 100 cruzados. Regular notes followed in denominations of 10, 50, 100 and 500 cruzados, with 1000 cruzados added in 1987, and 5000 and 10,000 cruzados in 1988.
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{CzS}\\!\\!\\!\\Vert"
}
] | https://en.wikipedia.org/wiki?curid=1062283 |
1062316 | Brazilian cruzeiro real | Brazilian currency from 1993 to 1994
The cruzeiro real (formula_0, plural: "cruzeiros reais") was the short-lived currency of Brazil between August 1, 1993, and June 30, 1994. It was subdivided into 100 centavos; however, this subunit was used only for accounting purposes, and coins and banknotes worth 10 to 500 of the preceding cruzeiro remained valid and were used to represent the corresponding centavos of the cruzeiro real, especially when the redenomination was carried out. The currency had the ISO 4217 code "BRR".
This redenomination, at the beginning of the second half of 1993, was intended to simplify day-to-day accounting, since amounts in the previous unit carried so many zeros that they were difficult to enter into calculators and other machines.
The cruzeiro real was replaced with the current Brazilian real as part of the Plano Real.
History.
The cruzeiro real replaced the third cruzeiro, with 1,000 cruzeiros = 1 cruzeiro real. The cruzeiro real was replaced in circulation by the real at a rate of 1 real for 2,750 cruzeiros reais. Before this occurred, the unidade real de valor (pegged to the U.S. dollar at parity) was used in pricing, to allow the population to become accustomed to a stable currency (after many years of high inflation) before the real was introduced.
Coins.
Standard circulation stainless-steel coins were issued in 1993 and 1994 in denominations of 5, 10, 50 and 100 cruzeiros reais. The reverse of the coins portrayed iconic animals of the Brazilian fauna. Coins worth 10 or more of the previous cruzeiro were retained to correspond to smaller denominations, such as the 1,000-cruzeiro coin for a single cruzeiro real, but became scarce by the end of 1993.
No commemorative coins were issued for the Cruzeiro Real.
The macaw and the jaguar were depicted again on banknotes of the real after that currency's introduction in 1994, and the maned wolf was later portrayed on a banknote introduced in late 2020.
Banknotes.
In 1993, provisional banknotes were introduced in the form of cruzeiro notes overprinted in the new currency. These were in denominations of 50, 100 and 500 cruzeiros reais. Regular notes followed in denominations of 1,000, 5,000 and 50,000 cruzeiros reais. The 10,000 cruzeiros reais banknote was designed and scheduled to be put into circulation in the first months of 1994, but inflation and the impending release of a new economic plan put its release on hold, and only the 50,000 cruzeiros reais banknote was issued.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{CRS}\\!\\!\\!\\Vert"
}
] | https://en.wikipedia.org/wiki?curid=1062316 |
10625394 | Rescaled range | The rescaled range is a statistical measure of the variability of a time series introduced by the British hydrologist Harold Edwin Hurst (1880–1978). Its purpose is to provide an assessment of how the apparent variability of a series changes with the length of the time-period being considered.
The rescaled range of time series is calculated from dividing the range of its mean adjusted cumulative deviate series (see the Calculation section below) by the standard deviation of the time series itself. For example, consider a time series {1,3,1,0,2,5}, which has a mean m = 2 and standard deviation S = 1.79. Subtracting m from each value of the series gives mean adjusted series {-1,1,-1,-2,0,3}. To calculate cumulative deviate series we take the first value -1, then sum of the first two values -1+1=0, then sum of the first three values and so on to get {-1,0,-1,-3,-3,0}, range of which is R = 3, so the rescaled range is R/S = 1.68.
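The arithmetic of this example is easy to reproduce (a minimal sketch; the sample standard deviation with the n-1 divisor is used so that S matches the 1.79 quoted above):

```python
import numpy as np

X = np.array([1, 3, 1, 0, 2, 5], dtype=float)
Y = X - X.mean()          # mean-adjusted series: [-1, 1, -1, -2, 0, 3]
Z = np.cumsum(Y)          # cumulative deviate series: [-1, 0, -1, -3, -3, 0]
R = Z.max() - Z.min()     # range: 3
S = X.std(ddof=1)         # sample standard deviation: ~1.79
print(R / S)              # rescaled range: ~1.68
```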
If we consider the same time series, but increase the number of observations of it, the rescaled range will generally also increase. The increase of the rescaled range can be characterized by making a plot of the logarithm of R/S vs. the logarithm of the number of samples. The slope of this line gives the Hurst exponent, H. If the time series is generated by a random walk (or a Brownian motion process) it has the value of H =1/2. Many physical phenomena that have a long time series suitable for analysis exhibit a Hurst exponent greater than 1/2. For example, observations of the height of the Nile River measured annually over many years gives a value of H = 0.77.
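A rough sketch of this estimation procedure (illustrative only; the series length, window sizes and random seed are arbitrary, and R/S estimates from short windows are known to be biased somewhat above 1/2) averages the rescaled range over non-overlapping windows and fits the log-log slope. Here it is applied to a series of independent increments, whose cumulative deviate series is a random walk, so the estimate should come out near 0.5:

```python
import numpy as np

def rescaled_range(x):
    z = np.cumsum(x - x.mean())
    return (z.max() - z.min()) / x.std(ddof=1)

rng = np.random.default_rng(0)
x = rng.standard_normal(2**13)               # independent increments

sizes, avg_rs = [], []
for k in range(4, 11):                       # window lengths 16, 32, ..., 1024
    n = 2**k
    windows = x[: (len(x) // n) * n].reshape(-1, n)
    sizes.append(n)
    avg_rs.append(np.mean([rescaled_range(w) for w in windows]))

H, _ = np.polyfit(np.log(sizes), np.log(avg_rs), 1)
print(H)   # roughly 0.5-0.6; the asymptotic value for independent increments is 1/2
```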
Several researchers (including Peters, 1991) have found that the prices of many financial instruments (such as currency exchange rates, stock values, etc.) also have H > 1/2. This means that they have a behavior that is distinct from a random walk, and therefore the time series is not generated by a stochastic process in which the nth value is independent of all of the values before it. According to the model of fractional Brownian motion, this is referred to as long memory with positive linear autocorrelation. However, it has been shown that this measure is correct only for linear evaluation: complex nonlinear processes with memory require additional descriptive parameters. Several studies using Lo's modified rescaled range statistic have contradicted Peters' results as well.
Calculation.
The rescaled range is calculated for a time series, formula_0, as follows:
1. Calculate the mean: formula_1
2. Create a mean-adjusted series: formula_2
3. Calculate the cumulative deviate series: formula_3
4. Compute the range series: formula_4
5. Compute the standard deviation series: formula_5 where the mean in this expression is taken over the first formula_6 values formula_7
6. Calculate the rescaled range series: formula_8
Lo (1991) advocates adjusting the standard deviation formula_9 for the expected increase in range formula_10 resulting from short-range autocorrelation in the time series. This involves replacing formula_9 by formula_11, which is the square root of
formula_12
where formula_13 is some maximum lag over which short-range autocorrelation might be substantial and formula_14 is the sample autocovariance at lag formula_15. Using this adjusted rescaled range, he concludes that stock market return time series show no evidence of long-range memory.
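A compact sketch of the adjusted statistic (an illustration of the formula above, assuming Bartlett-weighted sample autocovariances; the choice of the lag formula_13 is left to the analyst):

```python
import numpy as np

def lo_modified_rs(x, q):
    """Rescaled range with Lo's autocovariance-adjusted denominator."""
    n = len(x)
    y = x - x.mean()
    z = np.cumsum(y)
    R = z.max() - z.min()
    s2 = np.mean(y**2)                             # ordinary sample variance S^2
    for j in range(1, q + 1):
        c_j = np.sum(y[:-j] * y[j:]) / n           # sample autocovariance C(j)
        s2 += 2.0 * (1.0 - j / (q + 1)) * c_j      # Bartlett-weighted correction
    return R / np.sqrt(s2)

rng = np.random.default_rng(1)
returns = rng.standard_normal(1000)   # placeholder data; in practice, a series of returns
print(lo_modified_rs(returns, q=5))
```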
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "X=X_1,X_2,\\dots, X_n \\, "
},
{
"math_id": 1,
"text": "m=\\frac{1}{n} \\sum_{i=1}^{n} X_i \\,"
},
{
"math_id": 2,
"text": "Y_t=X_{t}-m \\text{ for } t=1,2, \\dots ,n \\, "
},
{
"math_id": 3,
"text": "Z_t= \\sum_{i=1}^{t} Y_{i} \\text{ for } t=1,2, \\dots ,n \\, "
},
{
"math_id": 4,
"text": " R_t = \\max\\left (Z_1, Z_2, \\dots, Z_t \\right )- \\min\\left (Z_1, Z_2, \\dots, Z_t \\right ) \\text{ for } t=1,2, \\dots, n \\, "
},
{
"math_id": 5,
"text": "S_{t}= \\sqrt{\\frac{1}{t} \\sum_{i=1}^{t}\\left ( X_{i} - m(t) \\right )^{2}} \\text{ for } t=1,2, \\dots ,n \\, "
},
{
"math_id": 6,
"text": "t"
},
{
"math_id": 7,
"text": "X_1,X_2, \\dots, X_t \\, "
},
{
"math_id": 8,
"text": "\\left ( R/S \\right )_{t} = \\frac{R_{t}}{S_{t}} \\text{ for } t=1,2, \\dots, n \\, "
},
{
"math_id": 9,
"text": "S"
},
{
"math_id": 10,
"text": "R"
},
{
"math_id": 11,
"text": " \\hat{S} "
},
{
"math_id": 12,
"text": " \\hat{S}^2 = S^2 + 2 \\sum_{j=1}^{q} \\left( 1 - \\frac{j}{q + 1} \\right) C(j), "
},
{
"math_id": 13,
"text": "q"
},
{
"math_id": 14,
"text": "C(j)"
},
{
"math_id": 15,
"text": "j"
}
] | https://en.wikipedia.org/wiki?curid=10625394 |
1062753 | Louis Bachelier | French pioneer in mathematical economics (1870-1946)
Louis Jean-Baptiste Alphonse Bachelier (11 March 1870 – 28 April 1946) was a French mathematician at the turn of the 20th century. He is credited with being the first person to model the stochastic process now called Brownian motion, as part of his doctoral thesis "The Theory of Speculation" ("Théorie de la spéculation", defended in 1900).
Bachelier's doctoral thesis, which introduced the first mathematical model of Brownian motion and its use for valuing stock options, was the first paper to use advanced mathematics in the study of finance. His Bachelier model has been influential in the development of other widely used models, including the Black-Scholes model.
Bachelier is considered the forefather of mathematical finance and a pioneer in the study of stochastic processes.
Early years.
Bachelier was born in Le Havre, in Seine-Maritime. His father was a wine merchant and amateur scientist, and the vice-consul of Venezuela at Le Havre. His mother was the daughter of an important banker (who was also a writer of poetry books). Both of Louis's parents died just after he completed his high school diploma ("baccalauréat" in French), forcing him to take care of his sister and three-year-old brother and to assume the family business, which effectively put his graduate studies on hold. During this time Bachelier gained a practical acquaintance with the financial markets. His studies were further delayed by military service. Bachelier arrived in Paris in 1892 to study at the Sorbonne, where his grades were less than ideal.
The doctoral thesis.
Defended on 29 March 1900 at the University of Paris, Bachelier's thesis was not well received because it attempted to apply mathematics to an area mathematicians found unfamiliar. However, his instructor, Henri Poincaré, is recorded as having given some positive feedback (though insufficient to secure Bachelier an immediate teaching position in France at that time). For example, Poincaré praised his approach to deriving Gauss's law of errors.
The thesis received a grade of "honorable," and was accepted for publication in the prestigious "Annales Scientifiques de l’École Normale Supérieure". While it did not receive a mark of "très honorable", despite its ultimate importance, the grade assigned is still interpreted as an appreciation for his contribution. Jean-Michel Courtault et al. point out in "On the Centenary of "Théorie de la spéculation"" that "honorable" was "the highest note which could be awarded for a thesis that was essentially outside mathematics and that had a number of arguments far from being rigorous".
Academic career.
For several years following the successful defense of his thesis, Bachelier further developed the theory of diffusion processes, and was published in prestigious journals. In 1909 he became a "free professor" at the Sorbonne. In 1914, he published a book, "Le Jeu, la Chance, et le Hasard" (Games, Chance, and Randomness), that sold over six thousand copies. With the support of the Council of the University of Paris, Bachelier was given a permanent professorship at the Sorbonne, but World War I intervened and he was drafted into the French army as a private. His army service ended on December 31, 1918. In 1919, he found a position as an assistant professor in Besançon, replacing a regular professor on leave. He married Augustine Jeanne Maillot in September 1920 but was soon widowed. When the professor returned in 1922, Bachelier replaced another professor at Dijon. He moved to Rennes in 1925, but was finally awarded a permanent professorship in 1927 at the University of Besançon, where he worked for 10 years until his retirement.
Besides the setback that the war had caused him, Bachelier was blackballed in 1926 when he attempted to receive a permanent position at Dijon. This was due to a "misinterpretation" of one of Bachelier's papers by Professor Paul Lévy, who—to Bachelier's understandable fury—knew nothing of Bachelier's work, nor of the candidate that Lévy recommended above him. Lévy later learned of his error, and reconciled himself with Bachelier.
Although Bachelier's work on random walks predated Einstein's celebrated study of Brownian motion by five years, the pioneering nature of his work was recognized only after several decades, first by Andrey Kolmogorov who pointed out his work to Paul Lévy, then by Leonard Jimmie Savage who translated Bachelier's thesis into English and brought the work of Bachelier to the attention of Paul Samuelson. The arguments Bachelier used in his thesis also predate Eugene Fama's efficient-market hypothesis, which is very closely related, as the idea of a random walk is suited to predict the random future in a stock market where everyone has all the available information. His work in finance is recognized as one of the foundations for the Black–Scholes model.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "{\\pi}"
}
] | https://en.wikipedia.org/wiki?curid=1062753 |
106284 | Centrifuge | Device using centrifugal force to separate fluids
A centrifuge is a device that uses centrifugal force to subject a specimen to a specified constant force, for example to separate various components of a fluid. This is achieved by spinning the fluid at high speed within a container, thereby separating fluids of different densities (e.g. cream from milk) or liquids from solids. It works by causing denser substances and particles to move outward in the radial direction. At the same time, objects that are less dense are displaced and moved to the centre. In a laboratory centrifuge that uses sample tubes, the radial acceleration causes denser particles to settle to the bottom of the tube, while low-density substances rise to the top. A centrifuge can be a very effective filter that separates contaminants from the main body of fluid.
Industrial scale centrifuges are commonly used in manufacturing and waste processing to sediment suspended solids, or to separate immiscible liquids. An example is the cream separator found in dairies. Very high speed centrifuges and ultracentrifuges able to provide very high accelerations can separate fine particles down to the nano-scale, and molecules of different masses. Large centrifuges are used to simulate high gravity or acceleration environments (for example, high-G training for test pilots). Medium-sized centrifuges are used in washing machines and at some swimming pools to draw water out of fabrics. Gas centrifuges are used for isotope separation, such as to enrich nuclear fuel for fissile isotopes.
History.
English military engineer Benjamin Robins (1707–1751) invented a whirling arm apparatus to determine drag. In 1864, Antonin Prandtl proposed the idea of a dairy centrifuge to separate cream from milk. The idea was subsequently put into practice by his brother, Alexander Prandtl, who made improvements to his brother's design, and exhibited a working butterfat extraction machine in 1875.
Types.
A centrifuge can be described as a machine with a rapidly rotating container that applies centrifugal force to its contents. There are multiple types of centrifuge, which can be classified by intended use or by rotor design:
Types by rotor design:
Types by intended use:
Industrial centrifuges may otherwise be classified according to the type of separation of the high density fraction from the low density one.
Generally, there are two types of centrifuge: filtration and sedimentation centrifuges. In a filtration or so-called screen centrifuge the drum is perforated and lined with a filter, for example a filter cloth, wire mesh or slot screen. The suspension flows from the inside to the outside through the filter and the perforated drum wall; the solid material is retained and can then be removed. How it is removed depends on the type of centrifuge, for example manually or periodically. Common types are:
In sedimentation centrifuges the drum has a solid (non-perforated) wall. This type of centrifuge is used for the purification of a suspension: the centrifugal force accelerates the natural settling process of the suspension. In so-called overflow centrifuges the suspension is fed in continuously and the clarified liquid is drained off. Common types are:
Though most modern centrifuges are electrically powered, a hand-powered variant inspired by the whirligig has been developed for medical applications in developing countries.
Many designs have been shared for free and open-source centrifuges that can be digitally manufactured. One open-source hardware design for a hand-powered centrifuge for larger volumes of fluid, reaching a rotational speed of over 1750 rpm and over 50 N of relative centrifugal force, can be completely 3-D printed for about $25. Other open hardware designs use custom 3-D printed fixtures with inexpensive electric motors to make low-cost centrifuges, e.g. the Dremelfuge, which uses a Dremel power tool, and the CNC-cut OpenFuge.
Uses.
Laboratory separations.
A wide variety of laboratory-scale centrifuges are used in chemistry, biology, biochemistry and clinical medicine for isolating and separating suspensions and immiscible liquids. They vary widely in speed, capacity, temperature control, and other characteristics. Laboratory centrifuges often can accept a range of different fixed-angle and swinging bucket rotors able to carry different numbers of centrifuge tubes and rated for specific maximum speeds. Controls vary from simple electrical timers to programmable models able to control acceleration and deceleration rates, running speeds, and temperature regimes. Ultracentrifuges spin the rotors under vacuum, eliminating air resistance and enabling exact temperature control. Zonal rotors and continuous flow systems are capable of handling bulk and larger sample volumes, respectively, in a laboratory-scale instrument.
An application in laboratories is blood separation. Blood separates into cells and proteins (RBC, WBC, platelets, etc.) and serum. DNA preparation is another common application for pharmacogenetics and clinical diagnosis. DNA samples are purified and prepared for separation by adding buffers and centrifuging for a set time. The waste is then removed, another buffer is added, and the sample is spun again. Once the waste has been removed and a further buffer added, the pellet can be resuspended and cooled; proteins can then be removed, the mixture centrifuged once more, and the DNA isolated completely. Specialized cytocentrifuges are used in medical and biological laboratories to concentrate cells for microscopic examination.
Isotope separation.
Other centrifuges, the first being the Zippe-type centrifuge, separate isotopes, and these kinds of centrifuges are in use in nuclear power and nuclear weapon programs.
Aeronautics and astronautics.
Human centrifuges are exceptionally large centrifuges that test the reactions and tolerance of pilots and astronauts to acceleration above those experienced in the Earth's gravity.
The first centrifuges used for human research were those of Erasmus Darwin, the grandfather of Charles Darwin. The first large-scale human centrifuge designed for aeronautical training was created in Germany in 1933.
The US Air Force at Brooks City Base, Texas, operates a human centrifuge while awaiting completion of the new human centrifuge under construction at Wright-Patterson AFB, Ohio. The centrifuge at Brooks City Base is operated by the United States Air Force School of Aerospace Medicine for the purpose of training and evaluating prospective fighter pilots for high-"g" flight in Air Force fighter aircraft.
The use of large centrifuges to simulate a feeling of gravity has been proposed for future long-duration space missions. Exposure to this simulated gravity would prevent or reduce the bone decalcification and muscle atrophy that affect individuals exposed to long periods of freefall.
Non-human centrifuge.
At the European Space Agency (ESA) technology center ESTEC (in Noordwijk, the Netherlands), a large-diameter centrifuge is used to expose samples in the life sciences as well as the physical sciences. This Large Diameter Centrifuge (LDC) began operation in 2007. Samples can be exposed to a maximum of 20 times Earth's gravity. With its four arms and six freely swinging-out gondolas it is possible to expose samples to different g-levels at the same time. Gondolas can be fixed at eight different positions; depending on their locations one could, for example, run an experiment at 5 g and 10 g in the same run. Each gondola can hold an experiment up to a specified maximum mass. Experiments performed in this facility have involved zebrafish, metal alloys, plasma, cells, liquids, planaria, Drosophila and plants.
Industrial centrifugal separator.
An industrial centrifugal separator is a coolant filtration system for separating particles from liquids such as grinding and machining coolant. It is usually used to separate non-ferrous particles such as silicon, glass, ceramic, and graphite. The filtering process does not require consumable parts such as filter bags, which reduces waste.
Geotechnical centrifuge modeling.
Geotechnical centrifuge modeling is used for physical testing of models involving soils. Centrifuge acceleration is applied to scale models to scale the gravitational acceleration and enable prototype-scale stresses to be obtained in scale models. It is applied to problems such as building and bridge foundations, earth dams, tunnels, and slope stability, including effects such as blast loading and earthquake shaking.
Synthesis of materials.
High-gravity conditions generated by centrifuges are applied in the chemical industry, casting, and material synthesis. Convection and mass transfer are greatly affected by the gravitational conditions. Researchers have reported that high gravity levels can effectively affect the phase composition and morphology of the products.
Mathematical description.
Protocols for centrifugation typically specify the amount of acceleration to be applied to the sample, rather than specifying a rotational speed such as revolutions per minute. This distinction is important because two rotors with different diameters running at the same rotational speed will subject samples to different accelerations. During circular motion the acceleration is the product of the radius and the square of the angular velocity formula_0, and the acceleration relative to "g" is traditionally named "relative centrifugal force" (RCF). The acceleration is measured in multiples of ""g" (or × "g""), the standard acceleration due to gravity at the Earth's surface, a dimensionless quantity given by the expression:
formula_1
where
formula_2 is Earth's gravitational acceleration,
formula_3 is the rotational radius,
formula_0 is the angular velocity in radians per unit time.
This relationship may be written as
formula_4
or
formula_5
where
formula_6 is the rotational radius measured in millimeters (mm), and
formula_7 is rotational speed measured in revolutions per minute (RPM).
To avoid having to perform a mathematical calculation every time, one can find nomograms for converting RCF to rpm for a rotor of a given radius. A ruler or other straight edge lined up with the radius on one scale, and the desired RCF on another scale, will point at the correct rpm on the third scale. Based on automatic rotor recognition, modern centrifuges have a button for automatic conversion from RCF to rpm and vice versa.
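As a rough illustration (not part of any standard), the conversion above can also be scripted; the small Python helper below uses the approximate constant 1.118 × 10^−6 from the formula, with the radius in millimetres and the speed in RPM:
def rcf(radius_mm, rpm):
    """Relative centrifugal force in multiples of g."""
    return 1.118e-6 * radius_mm * rpm ** 2

def rpm_for_rcf(target_rcf, radius_mm):
    """Rotational speed (RPM) needed to reach a target RCF at a given radius."""
    return (target_rcf / (1.118e-6 * radius_mm)) ** 0.5

# Example: a rotor of radius 100 mm spun at 3,000 RPM gives roughly 1,006 × g.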
References and notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega"
},
{
"math_id": 1,
"text": " \\text{RCF} = \\frac{r \\omega^2}{g}"
},
{
"math_id": 2,
"text": "\\textstyle g "
},
{
"math_id": 3,
"text": "\\textstyle r "
},
{
"math_id": 4,
"text": " \\text{RCF} = \\frac{10^{-3} r_\\text{mm} \\, \\left(\\frac{2 \\pi N_\\text{RPM}}{60}\\right)^2}{g}"
},
{
"math_id": 5,
"text": " \\text{RCF} = 1.118(2)\\, \\times 10^{-6}\\, r_\\text{mm} \\, N_\\text{RPM}^2"
},
{
"math_id": 6,
"text": "\\textstyle r_\\text{mm} "
},
{
"math_id": 7,
"text": "\\textstyle N_\\text{RPM} "
}
] | https://en.wikipedia.org/wiki?curid=106284 |
10629043 | NCAA Division I FBS passing leaders | College football statistics
The NCAA Division I FBS passing leaders are career, single-season, and single-game passing leaders in yards, touchdowns, efficiency, completions, completion percentage, and interception percentage. These lists are dominated by more recent players for several reasons:
Only seasons in which a team was considered to be a part of the Football Bowl Subdivision are included in these lists. Players such as Taylor Heinicke and Chad Pennington played for teams who reclassified to the FBS during their careers, and only their stats from the FBS years are eligible for inclusion. Similarly, players such as Vernon Adams and Bailey Zappe finished their careers by transferring to an FBS school, but their earlier seasons are not counted.
Quarters and halves are not counted, "per se", but for good measure, Andre Ware of Houston is the leader of those periods with 340 yards/5 touchdowns and 517 yards/6 touchdowns, respectively. Ware is also the only passer atop the leaderboards to collect the Heisman Trophy or the Davey O'Brien Award.
All records are current as of the end of the 2023 season. Until the 2024 season is over, entries may be incomplete.
Passing yards.
The career leader in passing yards is Houston's Case Keenum. He is the only player to amass three 5,000+ yards seasons. Keenum was granted a fifth year of eligibility after being injured in Houston's third game in 2010, but he would still top the list by over 1,500 yards if 2010 were not included. Keenum passed Hawaii's Timmy Chang, who also received a fifth year of eligibility after being injured in Hawaii's third game in 2001. Chang broke the record previously held by BYU's Ty Detmer, who shattered a record previously held by San Diego State's Todd Santos, who finished his career in 1987 and is no longer in the top 50.
The single-season leader in passing yards is Bailey Zappe, who transferred to Western Kentucky for his final year of eligibility after starting his career at FCS Houston Baptist (now Houston Christian). He broke a record that had stood for 18 years from Texas Tech's B. J. Symons. Prior to Symons, the record had been held by Detmer, who edged out Houston's David Klingler in 1990.
The first player to pass for 600 yards in a single game was Illinois' Dave Wilson, whose record stood for eight years. The 700-yard barrier was first breached in 1990 by David Klingler. The current single-game record of 734 is shared by Connor Halliday and Patrick Mahomes.
<templatestyles src="Col-begin/styles.css"/>
Passing touchdowns.
The holders of the career and single-season passing yards records, Case Keenum and Bailey Zappe also hold the records for passing touchdowns. The single-game record holder is Houston's David Klingler, who threw for 11 touchdowns in a 1990 game against Eastern Washington.
<templatestyles src="Col-begin/styles.css"/>
Efficiency.
Passing efficiency is a measure of quarterback performance based on the following formula:
formula_0
Only passing statistics are included in the formula. Any yards or touchdowns gained rushing or by any other method are not a factor in the formula, and neither are fumbles. Players tend to rank highly on the list when they have a high completion percentage, high yards per completion, and many touchdowns to few interceptions. The career leader in efficiency (with a minimum of 350 completions) is Alabama's Tua Tagovailoa. All of the top 10 in single-season efficiency have come since 2016, with Grayson McCall of Coastal Carolina breaking the record in 2021.
The NCAA does not recognize a single-game leaderboard in passing efficiency, and detailed box scores do not exist for every year going back to the beginning of college football, but the single-game record holder is Cincinnati's Gunner Kiel, who achieved an efficiency rating of 388.6, going 15-for-15 for 319 yards and 5 touchdowns in a 2015 game against UCF.
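For illustration, the rating formula above translates directly into a short Python function (a sketch, not an official NCAA tool; the function name is arbitrary):
def passing_efficiency(completions, attempts, yards, touchdowns, interceptions):
    return (100 * completions + 8.4 * yards
            + 330 * touchdowns - 200 * interceptions) / attempts

# Kiel's record game above: 15-for-15, 319 yards, 5 touchdowns, no interceptions
print(passing_efficiency(15, 15, 319, 5, 0))  # 388.64, i.e. the 388.6 quoted above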
<templatestyles src="Col-begin/styles.css"/>
Completions.
Case Keenum also holds the career record for completions. The single season record is held by Texas Tech's Graham Harrell. In fact, 13 of the top 17 performances on the single-season list were by quarterbacks who played under head coach Mike Leach.
<templatestyles src="Col-begin/styles.css"/>
Completion percentage.
Alabama's Mac Jones holds the NCAA record for completion percentage, with 413 completions on 556 attempts. This is over 1.5 percentage points higher than the second place on the list, Northwestern's Dan Persa. The highest completion percentage among quarterbacks with over 1,000 career attempts is the 70.39% of Hawaii's Colt Brennan.
Jones also held the single-season record, until it was broken in 2023 by Oregon's Bo Nix. At the end of the 20th century, the single season record was held by Daunte Culpepper, and while he is still 7th on the list, he is the only 20th century player on either list.
The NCAA doesn't recognize a full list for single games, but top performances include:
<templatestyles src="Col-begin/styles.css"/>
Interception percentage.
With a minimum of 500 passing attempts, Northern Illinois's Drew Hare is the only quarterback in FBS history with fewer than 1% of his passes intercepted. Before Hare, the record had been held by Baylor's Bryce Petty and before that, Fresno State's Billy Volek. The lowest interception percentage among quarterbacks with over 1,000 career attempts is 1.20% by Oregon's Marcus Mariota.
On the single-season list, 28 of the top 30 are 21st century players, though the list is topped by Virginia's Matt Blundin, who is the only player ever to have 20 passing attempts per game without throwing a single interception.
The single-game record is 0%, which is accomplished hundreds of times every season. However the quarterback with the most single-game attempts without throwing an interception is Houston's David Piland, who attempted 77 passes in a 2012 game against Louisiana Tech without any interceptions.
<templatestyles src="Col-begin/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\text{Passing Efficiency} = {(100 \\times \\text{completions}) + (8.4 \\times \\text{yards}) + (330 \\times \\text{touchdowns}) - (200 \\times \\text{interceptions}) \\over \\text{attempts}}"
}
] | https://en.wikipedia.org/wiki?curid=10629043 |
1063 | Algorithms for calculating variance | Important algorithms in numerical statistics
Algorithms for calculating variance play a major role in computational statistics. A key difficulty in the design of good algorithms for this problem is that formulas for the variance may involve sums of squares, which can lead to numerical instability as well as to arithmetic overflow when dealing with large values.
Naïve algorithm.
A formula for calculating the variance of an entire population of size "N" is:
formula_0
Using Bessel's correction to calculate an unbiased estimate of the population variance from a finite sample of "n" observations, the formula is:
formula_1
Therefore, a naïve algorithm to calculate the estimated variance is given by the following:
<templatestyles src="Framebox/styles.css" />
Let n ← 0, Sum ← 0, SumSq ← 0
For each datum x:
    n ← n + 1
    Sum ← Sum + x
    SumSq ← SumSq + x × x
Var = (SumSq − (Sum × Sum) / n) / (n − 1)
This algorithm can easily be adapted to compute the variance of a finite population: simply divide by "n" instead of "n" − 1 on the last line.
Because SumSq and (Sum×Sum)/"n" can be very similar numbers, cancellation can cause the precision of the result to be much less than the inherent precision of the floating-point arithmetic used to perform the computation. Thus this algorithm should not be used in practice, and several alternate, numerically stable, algorithms have been proposed. This is particularly bad if the standard deviation is small relative to the mean.
Computing shifted data.
The variance is invariant with respect to changes in a location parameter, a property which can be used to avoid the catastrophic cancellation in this formula.
formula_2
with formula_3 any constant, which leads to the new formula
formula_4
The closer formula_3 is to the mean value, the more accurate the result will be, but just choosing a value inside the range of samples will guarantee the desired stability. If the values formula_5 are small then there are no problems with the sum of their squares; on the contrary, if they are large it necessarily means that the variance is large as well. In any case the second term in the formula is always smaller than the first one, so no catastrophic cancellation can occur.
If just the first sample is taken as formula_3, the algorithm can be written in the Python programming language as
def shifted_data_variance(data):
if len(data) < 2:
return 0.0
K = data[0]
n = Ex = Ex2 = 0.0
for x in data:
n += 1
Ex += x - K
Ex2 += (x - K) ** 2
variance = (Ex2 - Ex**2 / n) / (n - 1)
# use n instead of (n-1) if you want to compute the exact variance of the given data
# use (n-1) if data are samples of a larger population
return variance
This formula also facilitates the incremental computation that can be expressed as
K = Ex = Ex2 = 0.0
n = 0
def add_variable(x):
global K, n, Ex, Ex2
if n == 0:
K = x
n += 1
Ex += x - K
Ex2 += (x - K) ** 2
def remove_variable(x):
global K, n, Ex, Ex2
n -= 1
Ex -= x - K
Ex2 -= (x - K) ** 2
def get_mean():
global K, n, Ex
return K + Ex / n
def get_variance():
global n, Ex, Ex2
return (Ex2 - Ex**2 / n) / (n - 1)
Two-pass algorithm.
An alternative approach, using a different formula for the variance, first computes the sample mean,
formula_6
and then computes the sum of the squares of the differences from the mean,
formula_7
where "s" is the standard deviation. This is given by the following code:
def two_pass_variance(data):
n = len(data)
mean = sum(data) / n
variance = sum((x - mean) ** 2 for x in data) / (n - 1)
return variance
This algorithm is numerically stable if "n" is small. However, the results of both of these simple algorithms ("naïve" and "two-pass") can depend inordinately on the ordering of the data and can give poor results for very large data sets due to repeated roundoff error in the accumulation of the sums. Techniques such as compensated summation can be used to combat this error to a degree.
Welford's online algorithm.
It is often useful to be able to compute the variance in a single pass, inspecting each value formula_8 only once; for example, when the data is being collected without enough storage to keep all the values, or when costs of memory access dominate those of computation. For such an online algorithm, a recurrence relation is required between quantities from which the required statistics can be calculated in a numerically stable fashion.
The following formulas can be used to update the mean and (estimated) variance of the sequence, for an additional element "x""n". Here, formula_9 denotes the sample mean of the first "n" samples formula_10, formula_11 their biased sample variance, and formula_12 their unbiased sample variance.
formula_13
formula_14
formula_15
These formulas suffer from numerical instability, as they repeatedly subtract a small number from a big number which scales with "n". A better quantity for updating is the sum of squares of differences from the current mean, formula_16, here denoted formula_17:
formula_18
This algorithm was found by Welford, and it has been thoroughly analyzed. It is also common to denote formula_19 and formula_20.
An example Python implementation for Welford's algorithm is given below.
def update(existing_aggregate, new_value):
(count, mean, M2) = existing_aggregate
count += 1
delta = new_value - mean
mean += delta / count
delta2 = new_value - mean
M2 += delta * delta2
return (count, mean, M2)
def finalize(existing_aggregate):
(count, mean, M2) = existing_aggregate
if count < 2:
return float("nan")
else:
(mean, variance, sample_variance) = (mean, M2 / count, M2 / (count - 1))
return (mean, variance, sample_variance)
This algorithm is much less prone to loss of precision due to catastrophic cancellation, but might not be as efficient because of the division operation inside the loop. For a particularly robust two-pass algorithm for computing the variance, one can first compute and subtract an estimate of the mean, and then use this algorithm on the residuals.
The parallel algorithm below illustrates how to merge multiple sets of statistics calculated online.
Weighted incremental algorithm.
The algorithm can be extended to handle unequal sample weights, replacing the simple counter "n" with the sum of weights seen so far. West (1979) suggests this incremental algorithm:
def weighted_incremental_variance(data_weight_pairs):
w_sum = w_sum2 = mean = S = 0
for x, w in data_weight_pairs:
w_sum = w_sum + w
w_sum2 = w_sum2 + w**2
mean_old = mean
mean = mean_old + (w / w_sum) * (x - mean_old)
S = S + w * (x - mean_old) * (x - mean)
population_variance = S / w_sum
# Bessel's correction for weighted samples
# Frequency weights
sample_frequency_variance = S / (w_sum - 1)
# Reliability weights
sample_reliability_variance = S / (w_sum - w_sum2 / w_sum)
Parallel algorithm.
Chan et al. note that Welford's online algorithm detailed above is a special case of an algorithm that works for combining arbitrary sets formula_21 and formula_22:
formula_23.
This may be useful when, for example, multiple processing units may be assigned to discrete parts of the input.
Chan's method for estimating the mean is numerically unstable when formula_24 and both are large, because the numerical error in formula_25 is not scaled down in the way that it is in the formula_26 case. In such cases, prefer formula_27.
def parallel_variance(n_a, avg_a, M2_a, n_b, avg_b, M2_b):
n = n_a + n_b
delta = avg_b - avg_a
M2 = M2_a + M2_b + delta**2 * n_a * n_b / n
var_ab = M2 / (n - 1)
return var_ab
This can be generalized to allow parallelization with AVX, with GPUs, and computer clusters, and to covariance.
Example.
Assume that all floating point operations use standard IEEE 754 double-precision arithmetic. Consider the sample (4, 7, 13, 16) from an infinite population. Based on this sample, the estimated population mean is 10, and the unbiased estimate of population variance is 30. Both the naïve algorithm and two-pass algorithm compute these values correctly.
Next consider the sample (10^8 + 4, 10^8 + 7, 10^8 + 13, 10^8 + 16), which gives rise to the same estimated variance as the first sample. The two-pass algorithm computes this variance estimate correctly, but the naïve algorithm returns 29.333333333333332 instead of 30.
While this loss of precision may be tolerable and viewed as a minor flaw of the naïve algorithm, further increasing the offset makes the error catastrophic. Consider the sample (10^9 + 4, 10^9 + 7, 10^9 + 13, 10^9 + 16). Again the estimated population variance of 30 is computed correctly by the two-pass algorithm, but the naïve algorithm now computes it as −170.66666666666666. This is a serious problem with the naïve algorithm and is due to catastrophic cancellation in the subtraction of two similar numbers at the final stage of the algorithm.
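The effect can be reproduced with a few lines of Python (a sketch assuming IEEE 754 doubles; naive_variance follows the naïve algorithm, and two_pass_variance repeats the implementation given earlier):
def naive_variance(data):
    n = len(data)
    s = sum(data)
    s2 = sum(x * x for x in data)
    return (s2 - s * s / n) / (n - 1)

def two_pass_variance(data):
    n = len(data)
    mean = sum(data) / n
    return sum((x - mean) ** 2 for x in data) / (n - 1)

for shift in (0.0, 1e8, 1e9):
    sample = [shift + 4, shift + 7, shift + 13, shift + 16]
    print(shift, naive_variance(sample), two_pass_variance(sample))
# The two-pass result stays at 30.0 for every shift, while the naïve result
# degrades at 1e8 and becomes negative at 1e9, as described above.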
Higher-order statistics.
Terriberry extends Chan's formulae to calculating the third and fourth central moments, needed for example when estimating skewness and kurtosis:
formula_28
Here the formula_29 are again the sums of powers of differences from the mean formula_30, giving
formula_31
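A direct transcription of these combination rules into Python might look as follows (a sketch; each summary is a tuple (n, mean, M2, M3, M4) for one data set, with the mean merged as in Chan et al.'s formula above):
def combine_moments(a, b):
    n_a, mean_a, M2_a, M3_a, M4_a = a
    n_b, mean_b, M2_b, M3_b, M4_b = b
    n = n_a + n_b
    delta = mean_b - mean_a
    mean = mean_a + delta * n_b / n
    M2 = M2_a + M2_b + delta ** 2 * n_a * n_b / n
    M3 = (M3_a + M3_b
          + delta ** 3 * n_a * n_b * (n_a - n_b) / n ** 2
          + 3 * delta * (n_a * M2_b - n_b * M2_a) / n)
    M4 = (M4_a + M4_b
          + delta ** 4 * n_a * n_b * (n_a ** 2 - n_a * n_b + n_b ** 2) / n ** 3
          + 6 * delta ** 2 * (n_a ** 2 * M2_b + n_b ** 2 * M2_a) / n ** 2
          + 4 * delta * (n_a * M3_b - n_b * M3_a) / n)
    return n, mean, M2, M3, M4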
For the incremental case (i.e., formula_32), this simplifies to:
formula_33
By preserving the value formula_34, only one division operation is needed and the higher-order statistics can thus be calculated for little incremental cost.
An example of the online algorithm for kurtosis implemented as described is:
def online_kurtosis(data):
n = mean = M2 = M3 = M4 = 0
for x in data:
n1 = n
n = n + 1
delta = x - mean
delta_n = delta / n
delta_n2 = delta_n**2
term1 = delta * delta_n * n1
mean = mean + delta_n
M4 = M4 + term1 * delta_n2 * (n**2 - 3*n + 3) + 6 * delta_n2 * M2 - 4 * delta_n * M3
M3 = M3 + term1 * delta_n * (n - 2) - 3 * delta_n * M2
M2 = M2 + term1
# Note, you may also calculate variance using M2, and skewness using M3
# Caution: If all the inputs are the same, M2 will be 0, resulting in a division by 0.
kurtosis = (n * M4) / (M2**2) - 3
return kurtosis
Pébaÿ
further extends these results to arbitrary-order central moments, for the incremental and the pairwise cases, and subsequently Pébaÿ et al.
for weighted and compound moments. One can also find there similar formulas for covariance.
Choi and Sweetman
offer two alternative methods to compute the skewness and kurtosis, each of which can save substantial computer memory requirements and CPU time in certain applications. The first approach is to compute the statistical moments by separating the data into bins and then computing the moments from the geometry of the resulting histogram, which effectively becomes a one-pass algorithm for higher moments. One benefit is that the statistical moment calculations can be carried out to arbitrary accuracy such that the computations can be tuned to the precision of, e.g., the data storage format or the original measurement hardware. A relative histogram of a random variable can be constructed in the conventional way: the range of potential values is divided into bins and the number of occurrences within each bin are counted and plotted such that the area of each rectangle equals the portion of the sample values within that bin:
formula_35
where formula_36 and formula_37 represent the frequency and the relative frequency at bin formula_38 and formula_39 is the total area of the histogram. After this normalization, the formula_40 raw moments and central moments of formula_41 can be calculated from the relative histogram:
formula_42
formula_43
where the superscript formula_44 indicates the moments are calculated from the histogram. For constant bin width formula_45 these two expressions can be simplified using formula_46:
formula_47
formula_48
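A minimal Python sketch of this first approach, assuming a constant bin width so that the simplified expressions above apply (here bin_centers plays the role of formula_38 and counts the role of formula_36):
def histogram_moments(bin_centers, counts):
    # With constant bin width, I = A / Δx is simply the total count.
    total = float(sum(counts))
    raw = [sum(c ** k * h for c, h in zip(bin_centers, counts)) / total
           for k in range(1, 5)]
    mean = raw[0]
    central = [sum((c - mean) ** k * h for c, h in zip(bin_centers, counts)) / total
               for k in range(2, 5)]
    return raw, central  # first four raw moments; second to fourth central moments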
The second approach from Choi and Sweetman is an analytical methodology to combine statistical moments from individual segments of a time-history such that the resulting overall moments are those of the complete time-history. This methodology could be used for parallel computation of statistical moments with subsequent combination of those moments, or for combination of statistical moments computed at sequential times.
If formula_49 sets of statistical moments are known:
formula_50 for formula_51, then each formula_52 can
be expressed in terms of the equivalent formula_40 raw moments:
formula_53
where formula_54 is generally taken to be the duration of the formula_55 time-history, or the number of points if formula_56 is constant.
The benefit of expressing the statistical moments in terms of formula_57 is that the formula_49 sets can be combined by addition, and there is no upper limit on the value of formula_49.
formula_58
where the subscript formula_59 represents the concatenated time-history or combined formula_57. These combined values of formula_57 can then be inversely transformed into raw moments representing the complete concatenated time-history
formula_60
Known relationships between the raw moments (formula_61) and the central moments (formula_62)
are then used to compute the central moments of the concatenated time-history. Finally, the statistical moments of the concatenated history are computed from the central moments:
formula_63
Covariance.
Very similar algorithms can be used to compute the covariance.
Naïve algorithm.
The naïve algorithm is
formula_64
For the algorithm above, one could use the following Python code:
def naive_covariance(data1, data2):
n = len(data1)
sum1 = sum(data1)
sum2 = sum(data2)
sum12 = sum([i1 * i2 for i1, i2 in zip(data1, data2)])
covariance = (sum12 - sum1 * sum2 / n) / n
return covariance
With estimate of the mean.
As for the variance, the covariance of two random variables is also shift-invariant, so given any two constant values formula_65 and formula_66 it can be written:
formula_67
and again choosing a value inside the range of values will stabilize the formula against catastrophic cancellation as well as make it more robust against big sums. Taking the first value of each data set, the algorithm can be written as:
def shifted_data_covariance(data_x, data_y):
n = len(data_x)
if n < 2:
return 0
kx = data_x[0]
ky = data_y[0]
Ex = Ey = Exy = 0
for ix, iy in zip(data_x, data_y):
Ex += ix - kx
Ey += iy - ky
Exy += (ix - kx) * (iy - ky)
return (Exy - Ex * Ey / n) / n
Two-pass.
The two-pass algorithm first computes the sample means, and then the covariance:
formula_68
formula_69
formula_70
The two-pass algorithm may be written as:
def two_pass_covariance(data1, data2):
n = len(data1)
mean1 = sum(data1) / n
mean2 = sum(data2) / n
covariance = 0
for i1, i2 in zip(data1, data2):
a = i1 - mean1
b = i2 - mean2
covariance += a * b / n
return covariance
A slightly more accurate compensated version performs the full naive algorithm on the residuals. The final sums formula_71 and formula_72 "should" be zero, but the second pass compensates for any small error.
Online.
A stable one-pass algorithm exists, similar to the online algorithm for computing the variance, that computes co-moment formula_73:
formula_74
The apparent asymmetry in that last equation is due to the fact that formula_75, so both update terms are equal to formula_76. Even greater accuracy can be achieved by first computing the means, then using the stable one-pass algorithm on the residuals.
Thus the covariance can be computed as
formula_77
def online_covariance(data1, data2):
meanx = meany = C = n = 0
for x, y in zip(data1, data2):
n += 1
dx = x - meanx
meanx += dx / n
meany += (y - meany) / n
C += dx * (y - meany)
population_covar = C / n
# Bessel's correction for sample covariance
sample_covar = C / (n - 1)
A small modification can also be made to compute the weighted covariance:
def online_weighted_covariance(data1, data2, data3):
meanx = meany = 0
wsum = wsum2 = 0
C = 0
for x, y, w in zip(data1, data2, data3):
wsum += w
wsum2 += w * w
dx = x - meanx
meanx += (w / wsum) * dx
meany += (w / wsum) * (y - meany)
C += w * dx * (y - meany)
population_covar = C / wsum
# Bessel's correction for sample covariance
# Frequency weights
sample_frequency_covar = C / (wsum - 1)
# Reliability weights
sample_reliability_covar = C / (wsum - wsum2 / wsum)
Likewise, there is a formula for combining the covariances of two sets that can be used to parallelize the computation:
formula_78
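In code this mirrors the parallel_variance function given earlier; the sketch below merges per-partition summaries of the form (n, mean_x, mean_y, C):
def parallel_covariance(a, b):
    n_a, meanx_a, meany_a, C_a = a
    n_b, meanx_b, meany_b, C_b = b
    n = n_a + n_b
    meanx = (n_a * meanx_a + n_b * meanx_b) / n
    meany = (n_a * meany_a + n_b * meany_b) / n
    C = C_a + C_b + (meanx_a - meanx_b) * (meany_a - meany_b) * n_a * n_b / n
    return n, meanx, meany, C  # population covariance of the union is C / n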
Weighted batched version.
A version of the weighted online algorithm that does batched updates also exists: let formula_79 denote the weights, and write
formula_80
The covariance can then be computed as
formula_81 | [
{
"math_id": 0,
"text": "\\sigma^2 = \\overline{(x^2)} - \\bar x^2 = \\frac {\\sum_{i=1}^N x_i^2 - (\\sum_{i=1}^N x_i)^2/N}{N}."
},
{
"math_id": 1,
"text": "s^2 = \\left(\\frac {\\sum_{i=1}^n x_i^2} n - \\left( \\frac {\\sum_{i=1}^n x_i} n \\right)^2\\right) \\cdot \\frac {n}{n-1}. "
},
{
"math_id": 2,
"text": "\\operatorname{Var}(X-K)=\\operatorname{Var}(X)."
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "\\sigma^2 = \\frac {\\sum_{i=1}^n (x_i-K)^2 - (\\sum_{i=1}^n (x_i-K))^2/n}{n-1}. "
},
{
"math_id": 5,
"text": "(x_i - K)"
},
{
"math_id": 6,
"text": "\\bar x = \\frac {\\sum_{j=1}^n x_j} n,"
},
{
"math_id": 7,
"text": "\\text{sample variance} = s^2 = \\dfrac {\\sum_{i=1}^n (x_i - \\bar x)^2}{n-1}, "
},
{
"math_id": 8,
"text": "x_i"
},
{
"math_id": 9,
"text": "\\overline{x}_n = \\frac{1}{n} \\sum_{i=1}^n x_i "
},
{
"math_id": 10,
"text": "(x_1,\\dots,x_n)"
},
{
"math_id": 11,
"text": "\\sigma^2_n = \\frac{1}{n} \\sum_{i=1}^n \\left(x_i - \\overline{x}_n \\right)^2"
},
{
"math_id": 12,
"text": "s^2_n = \\frac{1}{n - 1} \\sum_{i=1}^n \\left(x_i - \\overline{x}_n \\right)^2"
},
{
"math_id": 13,
"text": "\\bar x_n = \\frac{(n-1) \\, \\bar x_{n-1} + x_n}{n} = \\bar x_{n-1} + \\frac{x_n - \\bar x_{n-1}}{n} "
},
{
"math_id": 14,
"text": "\\sigma^2_n = \\frac{(n-1) \\, \\sigma^2_{n-1} + (x_n - \\bar x_{n-1})(x_n - \\bar x_n)}{n} = \\sigma^2_{n-1} + \\frac{(x_n - \\bar x_{n-1})(x_n - \\bar x_n) - \\sigma^2_{n-1}}{n}."
},
{
"math_id": 15,
"text": "s^2_n = \\frac{n-2}{n-1} \\, s^2_{n-1} + \\frac{(x_n - \\bar x_{n-1})^2}{n} = s^2_{n-1} + \\frac{(x_n - \\bar x_{n-1})^2}{n} - \\frac{s^2_{n-1}}{n-1}, \\quad n>1 "
},
{
"math_id": 16,
"text": "\\sum_{i=1}^n (x_i - \\bar x_n)^2"
},
{
"math_id": 17,
"text": "M_{2,n}"
},
{
"math_id": 18,
"text": "\\begin{align}\nM_{2,n} & = M_{2,n-1} + (x_n - \\bar x_{n-1})(x_n - \\bar x_n) \\\\[4pt]\n\\sigma^2_n & = \\frac{M_{2,n}}{n} \\\\[4pt]\ns^2_n & = \\frac{M_{2,n}}{n-1}\n\\end{align}"
},
{
"math_id": 19,
"text": "M_k = \\bar x_k"
},
{
"math_id": 20,
"text": "S_k = M_{2,k}"
},
{
"math_id": 21,
"text": "A"
},
{
"math_id": 22,
"text": "B"
},
{
"math_id": 23,
"text": "\\begin{align}\nn_{AB} & = n_A + n_B \\\\\n\\delta & = \\bar x_B - \\bar x_A \\\\\n\\bar x_{AB} & = \\bar x_A + \\delta\\cdot\\frac{n_B}{n_{AB}} \\\\\nM_{2,AB} & = M_{2,A} + M_{2,B} + \\delta^2\\cdot\\frac{n_A n_B}{n_{AB}} \\\\\n\\end{align}"
},
{
"math_id": 24,
"text": "n_A \\approx n_B"
},
{
"math_id": 25,
"text": "\\delta = \\bar x_B - \\bar x_A"
},
{
"math_id": 26,
"text": "n_B = 1"
},
{
"math_id": 27,
"text": "\\bar x_{AB} = \\frac{n_A \\bar x_A + n_B \\bar x_B}{n_{AB}}"
},
{
"math_id": 28,
"text": "\n\\begin{align}\nM_{3,X} = M_{3,A} + M_{3,B} & {} + \\delta^3\\frac{n_A n_B (n_A - n_B)}{n_X^2} + 3\\delta\\frac{n_AM_{2,B} - n_BM_{2,A}}{n_X} \\\\[6pt]\nM_{4,X} = M_{4,A} + M_{4,B} & {} + \\delta^4\\frac{n_A n_B \\left(n_A^2 - n_A n_B + n_B^2\\right)}{n_X^3} \\\\[6pt]\n & {} + 6\\delta^2\\frac{n_A^2 M_{2,B} + n_B^2 M_{2,A}}{n_X^2} + 4\\delta\\frac{n_AM_{3,B} - n_BM_{3,A}}{n_X}\n\\end{align}"
},
{
"math_id": 29,
"text": "M_k"
},
{
"math_id": 30,
"text": "\\sum(x - \\overline{x})^k"
},
{
"math_id": 31,
"text": "\n\\begin{align}\n& \\text{skewness} = g_1 = \\frac{\\sqrt{n} M_3}{M_2^{3/2}}, \\\\[4pt]\n& \\text{kurtosis} = g_2 = \\frac{n M_4}{M_2^2}-3.\n\\end{align}\n"
},
{
"math_id": 32,
"text": "B = \\{x\\}"
},
{
"math_id": 33,
"text": "\n\\begin{align}\n\\delta & = x - m \\\\[5pt]\nm' & = m + \\frac{\\delta}{n} \\\\[5pt]\nM_2' & = M_2 + \\delta^2 \\frac{n-1}{n} \\\\[5pt]\nM_3' & = M_3 + \\delta^3 \\frac{ (n - 1) (n - 2)}{n^2} - \\frac{3\\delta M_2}{n} \\\\[5pt]\nM_4' & = M_4 + \\frac{\\delta^4 (n - 1) (n^2 - 3n + 3)}{n^3} + \\frac{6\\delta^2 M_2}{n^2} - \\frac{4\\delta M_3}{n}\n\\end{align}\n"
},
{
"math_id": 34,
"text": "\\delta / n"
},
{
"math_id": 35,
"text": " H(x_k)=\\frac{h(x_k)}{A}"
},
{
"math_id": 36,
"text": "h(x_k)"
},
{
"math_id": 37,
"text": "H(x_k)"
},
{
"math_id": 38,
"text": "x_k"
},
{
"math_id": 39,
"text": "A= \\sum_{k=1}^K h(x_k) \\,\\Delta x_k"
},
{
"math_id": 40,
"text": "n"
},
{
"math_id": 41,
"text": "x(t)"
},
{
"math_id": 42,
"text": "\n m_n^{(h)} = \\sum_{k=1}^{K} x_k^n H(x_k) \\, \\Delta x_k\n = \\frac{1}{A} \\sum_{k=1}^K x_k^n h(x_k) \\, \\Delta x_k\n"
},
{
"math_id": 43,
"text": "\n \\theta_n^{(h)}= \\sum_{k=1}^{K} \\Big(x_k-m_1^{(h)}\\Big)^n \\, H(x_k) \\, \\Delta x_k\n = \\frac{1}{A} \\sum_{k=1}^{K} \\Big(x_k-m_1^{(h)}\\Big)^n h(x_k) \\, \\Delta x_k\n"
},
{
"math_id": 44,
"text": "^{(h)}"
},
{
"math_id": 45,
"text": "\\Delta x_k=\\Delta x"
},
{
"math_id": 46,
"text": "I= A/\\Delta x"
},
{
"math_id": 47,
"text": "\n m_n^{(h)}= \\frac{1}{I} \\sum_{k=1}^K x_k^n \\, h(x_k)\n"
},
{
"math_id": 48,
"text": "\n \\theta_n^{(h)}= \\frac{1}{I} \\sum_{k=1}^K \\Big(x_k-m_1^{(h)}\\Big)^n h(x_k)\n"
},
{
"math_id": 49,
"text": "Q"
},
{
"math_id": 50,
"text": "(\\gamma_{0,q},\\mu_{q},\\sigma^2_{q},\\alpha_{3,q},\\alpha_{4,q})\n\\quad "
},
{
"math_id": 51,
"text": "q=1,2,\\ldots,Q "
},
{
"math_id": 52,
"text": "\\gamma_n"
},
{
"math_id": 53,
"text": "\n\\gamma_{n,q}= m_{n,q} \\gamma_{0,q} \\qquad \\quad \\textrm{for} \\quad n=1,2,3,4 \\quad \\text{ and } \\quad q = 1,2, \\dots ,Q\n"
},
{
"math_id": 54,
"text": "\\gamma_{0,q}"
},
{
"math_id": 55,
"text": "q^{th}"
},
{
"math_id": 56,
"text": "\\Delta t"
},
{
"math_id": 57,
"text": "\\gamma"
},
{
"math_id": 58,
"text": "\n \\gamma_{n,c}= \\sum_{q=1}^Q \\gamma_{n,q} \\quad \\quad \\text{for } n=0,1,2,3,4\n"
},
{
"math_id": 59,
"text": "_c"
},
{
"math_id": 60,
"text": "\n m_{n,c}=\\frac{\\gamma_{n,c}}{\\gamma_{0,c}} \\quad \\text{for } n=1,2,3,4\n"
},
{
"math_id": 61,
"text": "m_n"
},
{
"math_id": 62,
"text": " \\theta_n = \\operatorname E[(x-\\mu)^n])"
},
{
"math_id": 63,
"text": "\n \\mu_c=m_{1,c}\n \\qquad \\sigma^2_c=\\theta_{2,c}\n \\qquad \\alpha_{3,c}=\\frac{\\theta_{3,c}}{\\sigma_c^3}\n \\qquad \\alpha_{4,c}={\\frac{\\theta_{4,c}}{\\sigma_c^4}}-3\n"
},
{
"math_id": 64,
"text": "\\operatorname{Cov}(X,Y) = \\frac {\\sum_{i=1}^n x_i y_i - (\\sum_{i=1}^n x_i)(\\sum_{i=1}^n y_i)/n}{n}. "
},
{
"math_id": 65,
"text": "k_x"
},
{
"math_id": 66,
"text": "k_y,"
},
{
"math_id": 67,
"text": "\\operatorname{Cov}(X,Y) = \\operatorname{Cov}(X-k_x,Y-k_y) = \\dfrac {\\sum_{i=1}^n (x_i-k_x) (y_i-k_y) - (\\sum_{i=1}^n (x_i-k_x))(\\sum_{i=1}^n (y_i-k_y))/n}{n}. "
},
{
"math_id": 68,
"text": "\\bar x = \\sum_{i=1}^n x_i/n"
},
{
"math_id": 69,
"text": "\\bar y = \\sum_{i=1}^n y_i/n"
},
{
"math_id": 70,
"text": "\\operatorname{Cov}(X,Y) = \\frac {\\sum_{i=1}^n (x_i - \\bar x)(y_i - \\bar y)}{n}. "
},
{
"math_id": 71,
"text": "\\sum_i x_i"
},
{
"math_id": 72,
"text": "\\sum_i y_i"
},
{
"math_id": 73,
"text": " C_n = \\sum_{i=1}^n (x_i - \\bar x_n)(y_i - \\bar y_n)"
},
{
"math_id": 74,
"text": "\\begin{alignat}{2}\n\\bar x_n &= \\bar x_{n-1} &\\,+\\,& \\frac{x_n - \\bar x_{n-1}}{n} \\\\[5pt]\n\\bar y_n &= \\bar y_{n-1} &\\,+\\,& \\frac{y_n - \\bar y_{n-1}}{n} \\\\[5pt]\nC_n &= C_{n-1} &\\,+\\,& (x_n - \\bar x_n)(y_n - \\bar y_{n-1}) \\\\[5pt]\n &= C_{n-1} &\\,+\\,& (x_n - \\bar x_{n-1})(y_n - \\bar y_n)\n\\end{alignat}"
},
{
"math_id": 75,
"text": " (x_n - \\bar x_n) = \\frac{n-1}{n}(x_n - \\bar x_{n-1})"
},
{
"math_id": 76,
"text": " \\frac{n-1}{n}(x_n - \\bar x_{n-1})(y_n - \\bar y_{n-1})"
},
{
"math_id": 77,
"text": "\\begin{align}\n\\operatorname{Cov}_N(X,Y) = \\frac{C_N}{N} &= \\frac{\\operatorname{Cov}_{N-1}(X,Y)\\cdot(N-1) + (x_n - \\bar x_n)(y_n - \\bar y_{n-1})}{N}\\\\\n &= \\frac{\\operatorname{Cov}_{N-1}(X,Y)\\cdot(N-1) + (x_n - \\bar x_{n-1})(y_n - \\bar y_n)}{N}\\\\\n &= \\frac{\\operatorname{Cov}_{N-1}(X,Y)\\cdot(N-1) + \\frac{N-1}{N}(x_n - \\bar x_{n-1})(y_n - \\bar y_{n-1})}{N}\\\\\n &= \\frac{\\operatorname{Cov}_{N-1}(X,Y)\\cdot(N-1) + \\frac{N}{N-1}(x_n - \\bar x_{n})(y_n - \\bar y_{n})}{N}.\n\\end{align}"
},
{
"math_id": 78,
"text": "C_X = C_A + C_B + (\\bar x_A - \\bar x_B)(\\bar y_A - \\bar y_B)\\cdot\\frac{n_A n_B}{n_X}. "
},
{
"math_id": 79,
"text": "w_1, \\dots w_N"
},
{
"math_id": 80,
"text": "\\begin{alignat}{2}\n\\bar x_{n+k} &= \\bar x_n &\\,+\\,& \\frac{\\sum_{i=n+1}^{n+k} w_i (x_i - \\bar x_n)}{\\sum_{i=1}^{n+k} w_i} \\\\\n\\bar y_{n+k} &= \\bar y_n &\\,+\\,& \\frac{\\sum_{i=n+1}^{n+k} w_i (y_i - \\bar y_n)}{\\sum_{i=1}^{n+k} w_i} \\\\\nC_{n+k} &= C_n &\\,+\\,& \\sum_{i=n+1}^{n+k} w_i (x_i - \\bar x_{n+k})(y_i - \\bar y_n) \\\\\n &= C_n &\\,+\\,& \\sum_{i=n+1}^{n+k} w_i (x_i - \\bar x_n)(y_i - \\bar y_{n+k}) \\\\\n\\end{alignat}"
},
{
"math_id": 81,
"text": "\\operatorname{Cov}_N(X,Y) = \\frac{C_N}{\\sum_{i=1}^{N} w_i}"
}
] | https://en.wikipedia.org/wiki?curid=1063 |
10630303 | Section (category theory) | In category theory, a branch of mathematics, a section is a right inverse of some morphism. Dually, a retraction is a left inverse of some morphism.
In other words, if formula_2 and formula_3 are morphisms whose composition formula_4 is the identity morphism on formula_5, then formula_1 is a section of formula_0, and formula_0 is a retraction of formula_1.
Every section is a monomorphism (every morphism with a left inverse is left-cancellative), and every retraction is an epimorphism (every morphism with a right inverse is right-cancellative).
In algebra, sections are also called split monomorphisms and retractions are also called split epimorphisms. In an abelian category, if formula_2 is a split epimorphism with split monomorphism formula_3, then formula_6 is isomorphic to the direct sum of formula_5 and the kernel of formula_0. The synonym coretraction for section is sometimes seen in the literature, although rarely in recent work.
Terminology.
The concept of a retraction in category theory comes from the essentially similar notion of a retraction in topology: formula_7, where formula_8 is a subspace of formula_9, is a retraction in the topological sense if it is a retraction of the inclusion map formula_10 in the category-theoretic sense. The concept in topology was defined by Karol Borsuk in 1931.
Borsuk's student, Samuel Eilenberg, was, with Saunders Mac Lane, the founder of category theory, and (as the earliest publications on category theory concerned various topological spaces) one might have expected this term to have initially been used. In fact, their earlier publications, up to, e.g., Mac Lane (1963)'s "Homology", used the term right inverse. It was not until 1965 when Eilenberg and John Coleman Moore coined the dual term 'coretraction' that Borsuk's term was lifted to category theory in general. The term coretraction gave way to the term section by the end of the 1960s.
Both use of left/right inverse and section/retraction are commonly seen in the literature: the former use has the advantage that it is familiar from the theory of semigroups and monoids; the latter is considered less confusing by some because one does not have to think about 'which way around' composition goes, an issue that has become greater with the increasing popularity of the synonym "f;g" for "g∘f".
Examples.
In the category of sets, every monomorphism (injective function) with a non-empty domain is a section, and every epimorphism (surjective function) is a retraction; the latter statement is equivalent to the axiom of choice.
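As a concrete, if toy, illustration of the set-theoretic case in Python: an injective map out of a non-empty set admits a left inverse, exhibiting it as a section and the left inverse as a retraction (the particular sets and maps below are arbitrary):
X = [0, 1, 2]
Y = ["a", "b", "c", "d"]
f = {0: "a", 1: "b", 2: "c"}            # an injective map (monomorphism) X -> Y
g = {"a": 0, "b": 1, "c": 2, "d": 0}    # one possible left inverse Y -> X
assert all(g[f[x]] == x for x in X)     # g∘f = id on X, so f is a section of g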
In the category of vector spaces over a field "K", every monomorphism and every epimorphism splits; this follows from the fact that linear maps can be uniquely defined by specifying their values on a basis.
In the category of abelian groups, the epimorphism Z → Z/2Z which sends every integer to its remainder modulo 2 does not split; in fact the only morphism Z/2Z → Z is the zero map. Similarly, the natural monomorphism Z/2Z → Z/4Z doesn't split even though there is a non-trivial morphism Z/4Z → Z/2Z.
The categorical concept of a section is important in homological algebra, and is also closely related to the notion of a section of a fiber bundle in topology: in the latter case, a section of a fiber bundle is a section of the bundle projection map of the fiber bundle.
Given a quotient space formula_11 with quotient map formula_12, a section of formula_13 is called a transversal.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "g"
},
{
"math_id": 2,
"text": "f: X\\to Y"
},
{
"math_id": 3,
"text": "g: Y\\to X"
},
{
"math_id": 4,
"text": "f \\circ g: Y\\to Y"
},
{
"math_id": 5,
"text": "Y"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": " f:X \\to Y "
},
{
"math_id": 8,
"text": " Y "
},
{
"math_id": 9,
"text": " X "
},
{
"math_id": 10,
"text": " i:Y\\hookrightarrow X "
},
{
"math_id": 11,
"text": "\\bar X"
},
{
"math_id": 12,
"text": "\\pi\\colon X \\to \\bar X"
},
{
"math_id": 13,
"text": "\\pi"
}
] | https://en.wikipedia.org/wiki?curid=10630303 |
106304 | Graduated cylinder | Laboratory equipment to measure liquid volume
A graduated cylinder, also known as a measuring cylinder or mixing cylinder, is a common piece of laboratory equipment used to measure the volume of a liquid. It has a narrow cylindrical shape. Each marked line on the graduated cylinder represents the amount of liquid that has been measured.
Materials and structure.
Large graduated cylinders are usually made of polypropylene for its excellent chemical resistance, or of polymethylpentene for its transparency, making them lighter and less fragile than glass. Polypropylene (PP) can be autoclaved repeatedly; however, autoclaving above a certain temperature (which depends on the chemical formulation; typical commercial-grade polypropylene melts at a considerably higher temperature) can warp or damage polypropylene graduated cylinders, affecting accuracy.
A traditional graduated cylinder is usually narrow and tall so as to increase the accuracy and precision of volume measurement. It has a plastic or glass base (stand, foot, support) and a "spout" for easy pouring of the measured liquid. An additional version is wide and low.
Mixing cylinders have ground glass joints instead of a spout, so they can be closed with a stopper or connected directly with other elements of a manifold. With this kind of cylinder, the metered liquid does not pour directly, but is often removed using a Cannula. A graduated cylinder is meant to be read with the surface of the liquid at eye level, where the center of the meniscus shows the measurement line. Typical capacities of graduated cylinders are from 10 mL to 1000 mL.
Common uses.
Graduated cylinders are often used to measure the volume of a liquid. Graduated cylinders are generally more accurate and precise than laboratory flasks and beakers, but they should not be used to perform volumetric analysis; volumetric glassware, such as a volumetric flask or volumetric pipette, should be used, as it is even more accurate and precise. Graduated cylinders are sometimes used to measure the volume of a solid indirectly by measuring the displacement of a liquid.
Scales and accuracy.
For accuracy, the volume on graduated cylinders is depicted on scales with 3 significant digits: 100 mL cylinders have 1 mL grading divisions, while 10 mL cylinders have 0.1 mL grading divisions.
Two classes of accuracy exist for graduated cylinders. Class A has double the accuracy of class B.
Cylinders can have single or double scales. A single scale allows the volume to be read from top to bottom (filling volume), while a double-scale cylinder can be read both for filling and for pouring (reverse scale).
Graduated cylinders are calibrated either “to contain” (indicating the liquid volume held inside the cylinder), marked “TC”, or “to deliver” (indicating the liquid volume poured out, accounting for liquid traces left in the cylinder), marked “TD”. Formerly the tolerances for “to deliver” and “to contain” cylinders were distinct; now they are the same. Also, the international symbols “IN” and “EX” are now more likely to be used instead of “TC” and “TD”, respectively.
Measurement.
To read the volume accurately, the observation must be made at eye level, reading the liquid level at the bottom of the meniscus.
The volume is read at the meniscus because of how a liquid behaves in a container. The liquid in the cylinder is attracted to the surrounding wall by molecular forces, which causes the liquid surface to become either convex or concave, depending on the liquid. Reading the bottom of a concave surface or the top of a convex surface is equivalent to reading the liquid at its meniscus. In the picture, the level of the liquid is read at the bottom of the meniscus, which is concave. The finest reading that can be made here is 1 mL, given the graduations on the cylinder, and the resulting uncertainty is one tenth of that smallest division. For instance, if the reading is 36.5 mL, the uncertainty of 0.1 mL must be included, so the value is reported as 36.5 formula_0 0.1 mL, that is, between 36.4 and 36.6 mL; three significant figures can therefore be read from the graduated cylinder in the picture. As another example, a reading of 40.0 mL is reported as 40.0 formula_0 0.1 mL, that is, between 39.9 and 40.1 mL.
History.
The graduated cylinder was first introduced in 1784 by Louis Bernard Guyton de Morveau, for use in volumetric analysis.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\pm"
}
] | https://en.wikipedia.org/wiki?curid=106304 |
10632988 | Pyramorphix | The Pyramorphix (), also called Pyramorphinx, is a tetrahedral puzzle similar to the Rubik's Cube. It has a total of 8 movable pieces to rearrange, compared to the 20 of the Rubik's Cube. Although it looks like a trivially simple version of the Pyraminx, it is an edge-turning puzzle with the mechanism identical to that of the Pocket Cube.
Description.
At first glance, the Pyramorphix appears to be a trivial puzzle. It resembles the Pyraminx, and its appearance would suggest that only the four corners could be rotated. In fact, the puzzle is a specially shaped 2×2×2 cube. Four of the cube's corners are reshaped into pyramids and the other four are reshaped into triangles. The result of this is a puzzle that changes shape as it is turned.
The original name for the Pyramorphix was "The Junior Pyraminx." This was altered to reflect the "Shape Changing" aspect of the puzzle which makes it appear less like the 2×2×2 Cube. "Junior" also made it sound less desirable to an adult customer. The only remaining reference to the name "Junior Pyraminx" is on Uwe Mèffert's website-based solution which still has the title "jpmsol.html".
The purpose of the puzzle is to scramble the colors and the shape, and then restore it to its original state of being a tetrahedron with one color per face.
Number of combinations.
The puzzle is available either with stickers or plastic tiles on the faces. Both have a ribbed appearance, giving a visible orientation to the flat pieces. This results in 3,674,160 combinations, the same as the 2×2×2 cube.
However, if there were no means of identifying the orientation of those pieces, the number of combinations would be reduced. There would be 8! ways to arrange the pieces, divided by 24 to account for the lack of center pieces, and there would be 3^4 ways to rotate the four pyramidal pieces.
formula_0
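The arithmetic can be checked directly, for example in Python:
from math import factorial
print(factorial(8) * 3 ** 4 // 24)  # 136080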
The Pyramorphix can be rotated around three axes by multiples of 90°. The corners cannot rotate individually as on the Pyraminx. The Pyramorphix rotates in a way that changes the position of center pieces not only with other center pieces but also with corner pieces, leading to a variety of shapes.
Master Pyramorphix.
The Master Pyramorphix, informally referred to as the Mastermorphix, is a more complex variant of the Pyramorphix. Like the Pyramorphix, it is an edge-turning tetrahedral puzzle capable of changing shape as it is twisted, leading to a large variety of irregular shapes. Several different variants have been made, including flat-faced custom-built puzzles by puzzle fans and Uwe Mèffert's commercially produced pillowed variant (pictured), sold through his puzzle shop, Meffert's.
The puzzle consists of 4 corner pieces, 4 face centers, 6 edge pieces, and 12 non-center face pieces. Being an edge-turning puzzle, the edge pieces only rotate in place, while the rest of the pieces can be permuted. The face centers and corner pieces are interchangeable because they are both corners although they are shaped differently, and the non-center face pieces may be flipped, leading to a wide variety of exotic shapes as the puzzle is twisted. If only 180° turns are made, it is possible to scramble only the colors while retaining the puzzle's tetrahedral shape. When 90° and 180° turns are made, this puzzle can "shape shift".
In spite of superficial similarities, the only way that this puzzle is related to the Pyraminx is that they are both "twisty puzzles"; the Pyraminx is a face-turning puzzle. On the Mastermorphix the corner pieces are non-trivial; they cannot be simply rotated in place to the right orientation.
Solutions.
Despite its appearance, the puzzle is in fact equivalent to a shape modification of the original 3x3x3 Rubik's Cube. Its 4 corner pieces on the corners and 4 corner pieces on the face centers together are equivalent to the 8 corner pieces of the Rubik's Cube, its 6 edge pieces are equivalent to the face centers of the Rubik's Cube, and its non-center face pieces are equivalent to the edge pieces of the Rubik's Cube. Thus, the same methods used to solve the Rubik's Cube may be used to solve the Master Pyramorphix, with a few minor differences: the center pieces are sensitive to orientation because they have two colors, unlike the usual coloring scheme used for the Rubik's Cube, and the face centers are "not" sensitive to orientation (however when in the "wrong" orientation parity errors may occur). In effect, it behaves as a Rubik's Cube with a non-standard coloring scheme where center piece orientation matters, and the orientation of 4 of the 8 corner pieces do not, technically, matter.
Unlike the Square One, another shape-changing puzzle, the most straightforward solutions of the Master Pyramorphix do not involve first restoring the tetrahedral shape of the puzzle and then restoring the colors; most of the algorithms carried over from the 3x3x3 Rubik's Cube translate to shape-changing permutations of the Master Pyramorphix. Some methods, such as the equivalent of Philip Marshall's "Ultimate Solution", show a gradual progression in shape as the solution progresses; first the non-center face pieces are put into place, resulting in a partial restoration of the tetrahedral shape except at the face centers and corners, and then the complete restoration of tetrahedral shape as the face centers and corners are solved.
Number of combinations.
There are four corners and four face centers. These may be interchanged with each other in 8! different ways. There are 3^7 ways for these pieces to be oriented, since the orientation of the last piece depends on the preceding seven, and the texture of the stickers makes the face center orientation visible. There are twelve non-central face pieces. These can be flipped in 2^11 ways and there are 12!/2 ways to arrange them. The three pieces of a given color are distinguishable due to the texture of the stickers. There are six edge pieces which are fixed in position relative to one another, each of which has four possible orientations. If the puzzle is solved apart from these pieces, the number of edge twists will always be even, making 4^6/2 possibilities for these pieces.
formula_1
The full number is .
However, if the stickers were smooth the number of combinations would be reduced. There would be 3^4 ways for the corners to be oriented, but the face centers would not have visible orientations. The three non-central face pieces of a given color would be indistinguishable. Since there are six ways to arrange the three pieces of the same color and there are four colors, there would be 2^11×12!/6^4 possibilities for these pieces.
formula_2
The full number is .
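Both counts can be evaluated exactly with a few lines of Python (a quick check of the expressions above):
from math import factorial
textured = factorial(8) * 3 ** 7 * factorial(12) * 2 ** 9 * 4 ** 6
smooth = factorial(8) * 3 ** 4 * factorial(12) * 2 ** 10 * 4 ** 6 // 6 ** 4
print(textured)  # about 8.86 × 10^22, matching the figure quoted above
print(smooth)    # about 5.06 × 10^18, matching the figure quoted above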
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{8! \\times 3^4}{24}=136080"
},
{
"math_id": 1,
"text": " {8! \\times 3^7 \\times 12! \\times 2^9 \\times 4^6} \\approx 8.86 \\times 10^{22}"
},
{
"math_id": 2,
"text": " \\frac {8! \\times 3^4 \\times 12! \\times 2^{10} \\times 4^6}{6^4} \\approx 5.06 \\times 10^{18}"
}
] | https://en.wikipedia.org/wiki?curid=10632988 |
1063395 | KCDSA | KCDSA (Korean Certificate-based Digital Signature Algorithm) is a digital signature algorithm created by a team led by the Korea Internet & Security Agency (KISA). It is an ElGamal variant, similar to the Digital Signature Algorithm and GOST R 34.10-94. The standard algorithm is implemented over formula_0, but an elliptic curve variant (EC-KCDSA) is also specified.
KCDSA requires a collision-resistant cryptographic hash function that can produce a variable-sized output (from 128 to 256 bits, in 32-bit increments). HAS-160, another Korean standard, is the suggested choice.
Domain parameters.
The domain parameters are a large prime formula_1 with formula_2 for formula_3; a prime formula_4 dividing formula_5 with formula_6 for formula_7; and a base element formula_8 of order formula_4 in formula_9. The revised version of the spec additionally requires either that formula_10 be prime or that all of its prime factors are greater than formula_4.
User parameters.
The user parameters are a private key formula_11 with formula_12; the public key formula_13, computed as formula_14 where formula_15; and a certification value formula_16 computed as formula_17, where formula_18 is the hash function. The 1998 spec is unclear about the exact format of the "Cert Data". In the revised spec, z is defined as being the bottom B bits of the public key y, where B is the block size of the hash function in bits (typically 512 or 1024). The effect is that the first input block corresponds to y mod 2^B.
Signing.
To sign a message formula_19:
The specification is vague about how the integer formula_26 is to be reinterpreted as a byte string input to the hash function. In the example in section C.1 the interpretation is consistent with formula_27 using the definition of I2OSP from PKCS#1/RFC3447.
Verifying.
To verify a signature formula_25 on a message formula_19:
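The verification consists of checking the ranges of "r" and "s", recomputing formula_30, and testing formula_31. A minimal and deliberately insecure Python sketch (with tiny made-up domain parameters, SHA-256 standing in for HAS-160, the XOR value reduced mod "q" for simplicity, and "z" taken simply as the encoded public key rather than the hashed certification data) illustrates how the signing and verification equations fit together:

```python
import hashlib
import secrets

# Toy domain parameters (far too small for real use): p is prime, q is a prime
# divisor of p - 1, and g generates the subgroup of order q in GF(p).
p, q, g = 2039, 1019, 4

def H(*parts: bytes) -> int:
    """SHA-256 (standing in for HAS-160) of the concatenated byte strings, as an integer."""
    return int.from_bytes(hashlib.sha256(b"".join(parts)).digest(), "big")

def i2osp(x: int, length: int) -> bytes:
    return x.to_bytes(length, "big")

# Key pair: private x with 0 < x < q, public y = g^(x^-1 mod q) mod p.
x = 123
y = pow(g, pow(x, -1, q), p)
z = i2osp(y, 2)                 # simplification: stand-in for the hashed certification data

def sign(m: bytes):
    while True:
        k = secrets.randbelow(q - 1) + 1        # random 0 < k < q
        w = pow(g, k, p)
        r = H(i2osp(w, 2))
        e = (r ^ H(z, m)) % q                   # simplification: reduce the XOR mod q
        s = (x * (k - e)) % q
        if s != 0:
            return r, s

def verify(m: bytes, r: int, s: int) -> bool:
    if not 0 < s < q:
        return False
    e = (r ^ H(z, m)) % q
    return r == H(i2osp(pow(y, s, p) * pow(g, e, p) % p, 2))

r, s = sign(b"sample message")
print(verify(b"sample message", r, s))    # True
print(verify(b"another message", r, s))   # almost certainly False
```

The check works because y^s · g^e = g^(x^-1·x·(k − e) + e) = g^k = w modulo p, so the verifier recomputes exactly the value that was hashed to produce "r".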
EC-KCDSA.
EC-KCDSA is essentially the same algorithm using Elliptic-curve cryptography instead of discrete log cryptography.
The domain parameters are:
The user parameters and algorithms are essentially the same as for discrete log KCDSA except that modular exponentiation is replaced by point multiplication. The specific differences are: | [
{
"math_id": 0,
"text": "GF(p)"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "|p| = 512 + 256i"
},
{
"math_id": 3,
"text": "i = 0, 1, \\dots, 6"
},
{
"math_id": 4,
"text": "q"
},
{
"math_id": 5,
"text": "p-1"
},
{
"math_id": 6,
"text": "|q| = 128 + 32j"
},
{
"math_id": 7,
"text": "j = 0, 1, \\dots, 4"
},
{
"math_id": 8,
"text": "g"
},
{
"math_id": 9,
"text": "\\operatorname{GF}(p)"
},
{
"math_id": 10,
"text": "(p-1)/(2q)"
},
{
"math_id": 11,
"text": "x"
},
{
"math_id": 12,
"text": "0 < x < q"
},
{
"math_id": 13,
"text": "y"
},
{
"math_id": 14,
"text": "y=g^\\bar{x} \\pmod{p},"
},
{
"math_id": 15,
"text": "\\bar{x}=x^{-1} \\pmod{q}"
},
{
"math_id": 16,
"text": "z"
},
{
"math_id": 17,
"text": "z = h(\\text{Cert Data})"
},
{
"math_id": 18,
"text": "h"
},
{
"math_id": 19,
"text": "m"
},
{
"math_id": 20,
"text": "0 < k < q"
},
{
"math_id": 21,
"text": "w = g^k \\mod{p}"
},
{
"math_id": 22,
"text": "r = h(w)"
},
{
"math_id": 23,
"text": "s = x(k - r \\oplus h(z \\parallel m)) \\pmod{q}"
},
{
"math_id": 24,
"text": "s=0"
},
{
"math_id": 25,
"text": "(r, s)"
},
{
"math_id": 26,
"text": "w"
},
{
"math_id": 27,
"text": "r = h(I2OSP(w, |q|/8))"
},
{
"math_id": 28,
"text": "0 \\le r < 2^{|q|}"
},
{
"math_id": 29,
"text": "0 < s < q"
},
{
"math_id": 30,
"text": "e = r \\oplus h(z \\parallel m)"
},
{
"math_id": 31,
"text": "r = h(y^s \\cdot g^e \\mod{p})"
},
{
"math_id": 32,
"text": "E"
},
{
"math_id": 33,
"text": "G"
},
{
"math_id": 34,
"text": "n"
},
{
"math_id": 35,
"text": "Y=\\bar{x}G"
},
{
"math_id": 36,
"text": "r=h(W_x || W_y)"
},
{
"math_id": 37,
"text": "W=kG"
},
{
"math_id": 38,
"text": "r=h(sY+eG)"
}
] | https://en.wikipedia.org/wiki?curid=1063395 |
106341 | Direct sum of groups | Means of constructing a group from two subgroups
In mathematics, a group "G" is called the direct sum of two normal subgroups with trivial intersection if it is generated by the subgroups. In abstract algebra, this method of construction of groups can be generalized to direct sums of vector spaces, modules, and other structures; see the article direct sum of modules for more information. A group which can be expressed as a direct sum of non-trivial subgroups is called "decomposable", and if a group cannot be expressed as such a direct sum then it is called "indecomposable".
Definition.
A group "G" is called the direct sum of two subgroups "H"1 and "H"2 if
More generally, "G" is called the direct sum of a finite set of subgroups {"H""i"} if
If "G" is the direct sum of subgroups "H" and "K" then we write "G" = "H" + "K", and if "G" is the direct sum of a set of subgroups {"H""i"} then we often write "G" = Σ"H""i". Loosely speaking, a direct sum is isomorphic to a weak direct product of subgroups.
Properties.
If "G" = "H" + "K", then it can be proven that:
The above assertions can be generalized to the case of "G" = Σ"H""i", where {"H"i} is a finite set of subgroups:
"g" = "h"1 ∗ "h"2 ∗ ... ∗ "h""i" ∗ ... ∗ "h""n"
Note the similarity with the direct product, where each "g" can be expressed uniquely as
"g" = ("h"1,"h"2, ..., "h""i", ..., "h""n").
Since "h""i" ∗ "h""j" = "h""j" ∗ "h""i" for all "i" ≠ "j", it follows that multiplication of elements in a direct sum is isomorphic to multiplication of the corresponding elements in the direct product; thus for finite sets of subgroups, Σ"H""i" is isomorphic to the direct product ×{"H""i"}.
Direct summand.
Given a group formula_1, we say that a subgroup formula_2 is a direct summand of formula_1 if there exists another subgroup formula_3 of formula_1 such that formula_4.
In abelian groups, if formula_2 is a divisible subgroup of formula_1, then formula_2 is a direct summand of formula_1.
Equivalence of decompositions into direct sums.
In the decomposition of a finite group into a direct sum of indecomposable subgroups the embedding of the subgroups is not unique. For example, in the Klein group formula_11 we have that
formula_12 and
formula_13
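These two decompositions can be checked by brute force. The following Python sketch (an illustration only, representing formula_11 as pairs of bits added componentwise mod 2) verifies that each pair of cyclic subgroups intersects trivially and generates the whole group; normality is automatic here since the group is abelian:

```python
from itertools import product

# The Klein four-group V4 = C2 x C2, with componentwise addition mod 2.
V4 = {(a, b) for a in (0, 1) for b in (0, 1)}

def op(g, h):
    return ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def cyclic(gen):
    """Subgroup generated by one element (every non-identity element has order 2)."""
    return {(0, 0), gen}

def is_direct_sum(g1, g2):
    H1, H2 = cyclic(g1), cyclic(g2)
    trivial = (H1 & H2) == {(0, 0)}                          # trivial intersection
    whole = {op(a, b) for a, b in product(H1, H2)} == V4     # H1 and H2 generate V4
    return trivial and whole

print(is_direct_sum((0, 1), (1, 0)))   # True
print(is_direct_sum((1, 1), (1, 0)))   # True: a different embedding of the same summands
```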
However, the Remak-Krull-Schmidt theorem states that given a "finite" group "G" = Σ"A""i" = Σ"B""j", where each "A""i" and each "B""j" is non-trivial and indecomposable, the two sums have equal terms up to reordering and isomorphism.
The Remak-Krull-Schmidt theorem fails for infinite groups; so in the case of infinite "G" = "H" + "K" = "L" + "M", even when all subgroups are non-trivial and indecomposable, we cannot conclude that "H" is isomorphic to either "L" or "M".
Generalization to sums over infinite sets.
To describe the above properties in the case where "G" is the direct sum of an infinite (perhaps uncountable) set of subgroups, more care is needed.
If "g" is an element of the cartesian product Π{"H""i"} of a set of groups, let "g""i" be the "i"th element of "g" in the product. The external direct sum of a set of groups {"H""i"} (written as ΣE{"H""i"}) is the subset of Π{"H""i"}, where, for each element "g" of ΣE{"H""i"}, "g""i" is the identity formula_14 for all but a finite number of "g""i" (equivalently, only a finite number of "g""i" are not the identity). The group operation in the external direct sum is pointwise multiplication, as in the usual direct product.
This subset does indeed form a group, and for a finite set of groups {"H""i"} the external direct sum is equal to the direct product.
If "G" = Σ"H""i", then "G" is isomorphic to ΣE{"H""i"}. Thus, in a sense, the direct sum is an "internal" external direct sum. For each element "g" in "G", there is a unique finite set "S" and a unique set {"h""i" ∈ "H""i" : "i" ∈ "S"} such that "g" = Π {"h""i" : "i" in "S"}.
References.
| [
{
"math_id": 0,
"text": "e"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "H"
},
{
"math_id": 3,
"text": "K"
},
{
"math_id": 4,
"text": "G = H+K"
},
{
"math_id": 5,
"text": " G= \\prod_{i\\in I} H_i "
},
{
"math_id": 6,
"text": " G "
},
{
"math_id": 7,
"text": " H_{i_0} \\times \\prod_{i\\not=i_0}H_i"
},
{
"math_id": 8,
"text": "G=K+H"
},
{
"math_id": 9,
"text": "\\mathbb R"
},
{
"math_id": 10,
"text": "G/K"
},
{
"math_id": 11,
"text": "V_4 \\cong C_2 \\times C_2"
},
{
"math_id": 12,
"text": "V_4 = \\langle(0,1)\\rangle + \\langle(1,0)\\rangle,"
},
{
"math_id": 13,
"text": "V_4 = \\langle(1,1)\\rangle + \\langle(1,0)\\rangle."
},
{
"math_id": 14,
"text": "e_{H_i}"
}
] | https://en.wikipedia.org/wiki?curid=106341 |
1063435 | Normal force | Force exerted on an object by a body with which it is in contact, and vice versa
In mechanics, the normal force formula_0 is the component of a contact force that is perpendicular to the surface that an object contacts. In this instance "normal" is used in the geometric sense and means perpendicular, as opposed to the common language use of "normal" meaning "ordinary" or "expected". A person standing still on a platform is acted upon by gravity, which would pull them down towards the Earth's core unless there were a countervailing force from the resistance of the platform's molecules, a force which is named the "normal force".
The normal force is one type of ground reaction force. If the person stands on a slope and does not sink into the ground or slide downhill, the total ground reaction force can be divided into two components: a normal force perpendicular to the ground and a frictional force parallel to the ground. In another common situation, if an object hits a surface with some speed, and the surface can withstand the impact, the normal force provides for a rapid deceleration, which will depend on the flexibility of the surface and the object.
Equations.
In the case of an object resting upon a flat table (unlike on an incline as in Figures 1 and 2), the normal force on the object is equal in magnitude but opposite in direction to the gravitational force applied on the object (or the weight of the object), that is, formula_1, where "m" is mass, and "g" is the gravitational field strength (about 9.81 m/s^2 on Earth). The normal force here represents the force applied by the table against the object that prevents it from sinking through the table and requires that the table be sturdy enough to deliver this normal force without breaking. However, it is easy to assume that the normal force and weight are action-reaction force pairs (a common mistake). In this case, the normal force and weight need to be equal in magnitude to explain why there is no upward acceleration of the object. For example, a ball that bounces upwards accelerates upwards because the normal force acting on the ball is larger in magnitude than the weight of the ball.
Where an object rests on an incline as in Figures 1 and 2, the normal force is perpendicular to the plane the object rests on. Still, the normal force will be as large as necessary to prevent sinking through the surface, presuming the surface is sturdy enough. The strength of the force can be calculated as:
formula_2
where formula_0 is the normal force, "m" is the mass of the object, "g" is the gravitational field strength, and "θ" is the angle of the inclined surface measured from the horizontal.
The normal force is one of the several forces which act on the object. In the simple situations so far considered, the most important other forces acting on it are friction and the force of gravity.
Using vectors.
In general, the magnitude of the normal force, "N", is the projection of the net surface interaction force, "T", in the normal direction, "n", and so the normal force vector can be found by scaling the normal direction by the net surface interaction force. The surface interaction force, in turn, is equal to the dot product of the unit normal with the Cauchy stress tensor describing the stress state of the surface. That is:
formula_3
or, in indicial notation,
formula_4
The parallel shear component of the contact force is known as the frictional force (formula_5).
The static coefficient of friction for an object on an inclined plane can be calculated as follows:
formula_6
for an object on the point of sliding where formula_7 is the angle between the slope and the horizontal.
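As a quick numerical illustration (a sketch in SI units with arbitrarily chosen values), the two formulas above give the normal force on a block resting on a slope and the static friction coefficient at the point of sliding:

```python
import math

g = 9.81                       # gravitational field strength, m/s^2
m = 10.0                       # mass of the block, kg
theta = math.radians(30.0)     # incline angle

F_n = m * g * math.cos(theta)  # normal force, N
mu_s = math.tan(theta)         # static friction coefficient at the point of sliding

print(f"Normal force: {F_n:.1f} N")                # about 85.0 N
print(f"Static friction coefficient: {mu_s:.2f}")  # about 0.58
```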
Physical origin.
Normal force is directly a result of the Pauli exclusion principle and not a true force "per se": it is a result of the interactions of the electrons at the surfaces of the objects. The atoms in the two surfaces cannot penetrate one another without a large investment of energy because there is no low energy state for which the electron wavefunctions from the two surfaces overlap; thus no microscopic force is needed to prevent this penetration.
However these interactions are often modeled as van der Waals force, a force that grows very large very quickly as distance becomes smaller.
On the more macroscopic level, such surfaces can be treated as a single object, and two bodies do not penetrate each other due to the stability of matter, which is again a consequence of the Pauli exclusion principle, but also of the fundamental forces of nature: cracks in the bodies do not widen due to electromagnetic forces that create the chemical bonds between the atoms; the atoms themselves do not disintegrate because of the electromagnetic forces between the electrons and the nuclei; and the nuclei do not disintegrate due to the nuclear forces.
Practical applications.
In an elevator either stationary or moving at constant velocity, the normal force on the person's feet balances the person's weight. In an elevator that is accelerating upward, the normal force is greater than the person's ground weight and so the person's perceived weight increases (making the person feel heavier). In an elevator that is accelerating downward, the normal force is less than the person's ground weight and so a passenger's perceived weight decreases. If a passenger were to stand on a weighing scale, such as a conventional bathroom scale, while riding the elevator, the scale will be reading the normal force it delivers to the passenger's feet, and will be different than the person's ground weight if the elevator cab is "accelerating" up or down. The weighing scale measures normal force (which varies as the elevator cab accelerates), not gravitational force (which does not vary as the cab accelerates).
When we define upward to be the positive direction, constructing Newton's second law and solving for the normal force on a passenger yields the following equation:
formula_8
In a gravitron amusement ride, the static friction caused by and perpendicular to the normal force acting on the passengers against the walls results in suspension of the passengers above the floor as the ride rotates. In such a scenario, the walls of the ride apply normal force to the passengers in the direction of the center, which is a result of the centripetal force applied to the passengers as the ride rotates. As a result of the normal force experienced by the passengers, the static friction between the passengers and the walls of the ride counteracts the pull of gravity on the passengers, resulting in suspension above ground of the passengers throughout the duration of the ride.
When we define the center of the ride to be the positive direction, solving for the normal force on a passenger that is suspended above ground yields the following equation:
formula_9
where formula_10 is the normal force on the passenger, formula_11 is the mass of the passenger, formula_12 is the tangential velocity of the passenger and formula_13 is the distance of the passenger from the center of the ride.
With the normal force known, we can solve for the static coefficient of friction needed to maintain a net force of zero in the vertical direction:
formula_14
where formula_15 is the static coefficient of friction, and formula_16 is the gravitational field strength.
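As a numerical illustration (a sketch in SI units; the rider mass, ride radius and speed are arbitrary example values), the last two formulas give the wall force and the friction coefficient needed to keep a rider suspended:

```python
g = 9.81      # gravitational field strength, m/s^2
m = 70.0      # passenger mass, kg
r = 5.0       # distance from the centre of the ride, m
v = 10.0      # tangential speed, m/s

N = m * v**2 / r        # normal force from the wall, N
mu = m * g / N          # required static friction coefficient (equals g*r/v^2)

print(f"Wall normal force: {N:.0f} N")             # 1400 N
print(f"Required friction coefficient: {mu:.2f}")  # about 0.49
```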
References.
| [
{
"math_id": 0,
"text": "F_n"
},
{
"math_id": 1,
"text": "F_n = mg"
},
{
"math_id": 2,
"text": "F_n = mg \\cos(\\theta)"
},
{
"math_id": 3,
"text": "\\mathbf{N} = \\mathbf{n}\\, N = \\mathbf{n}\\, (\\mathbf{T} \\cdot \\mathbf{n}) = \\mathbf{n}\\, (\\mathbf{n} \\cdot \\mathbf{\\tau} \\cdot \\mathbf{n})."
},
{
"math_id": 4,
"text": "N_i = n_i N = n_i T_j n_j = n_i n_k \\tau_{jk} n_j."
},
{
"math_id": 5,
"text": "F_{fr}"
},
{
"math_id": 6,
"text": "\\mu_s=\\tan(\\theta)"
},
{
"math_id": 7,
"text": "\\theta"
},
{
"math_id": 8,
"text": "N = m(g + a)"
},
{
"math_id": 9,
"text": "N = \\frac{mv^2}{r}"
},
{
"math_id": 10,
"text": "N"
},
{
"math_id": 11,
"text": "m"
},
{
"math_id": 12,
"text": "v"
},
{
"math_id": 13,
"text": "r"
},
{
"math_id": 14,
"text": "\\mu = \\frac{mg}{N}"
},
{
"math_id": 15,
"text": "\\mu"
},
{
"math_id": 16,
"text": "g"
}
] | https://en.wikipedia.org/wiki?curid=1063435 |
10636131 | Mathematical markup language | A mathematical markup language is a computer notation for representing mathematical formulae, based on mathematical notation. Specialized markup languages are necessary because computers normally deal with linear text and more limited character sets (although increasing support for Unicode is obsoleting very simple uses). A formally standardized syntax also allows a computer to interpret otherwise ambiguous content, for rendering or even evaluating. For computer-interpretable syntaxes, the most popular are TeX/LaTeX, MathML (Mathematical Markup Language), OpenMath and OMDoc.
Notations for human input.
Popular languages for input by humans and interpretation by computers include TeX/LaTeX and eqn.
Computer algebra systems such as Macsyma, Mathematica (Wolfram Language), Maple, and MATLAB each have their own syntax.
When the purpose is informal communication with other humans, syntax is often ad hoc, sometimes called "ASCII math notation". Academics sometimes use syntax based on TeX due to familiarity with it from writing papers. Those used to programming languages may also use shorthands like "!" for formula_0. Web pages may also use a limited amount of HTML to mark up a small subset, for example superscripting. Ad hoc syntax requires context to interpret ambiguous syntax, for example "<=" could be "is implied by" or "less than or equal to", and "dy/dx" is likely to denote a derivative, but strictly speaking could also mean a finite quantity "dy" divided by "dx".
Unicode improves the support for mathematics, compared to ASCII only.
Markup languages for computer interchange.
Markup languages optimized for computer-to-computer communication include MathML, OpenMath, and OMDoc. These are designed for clarity, parseability and to minimize ambiguity, at the price of verbosity. However, the verbosity makes them clumsier for humans to type directly.
Conversion.
Many input, rendering, and conversion tools exist.
Microsoft Word included Equation Editor, a limited version of MathType, until 2007. These allow entering formulae using a graphical user interface, and converting to standard markup languages such as MathML. With Microsoft's release of Microsoft Office 2007 and the Office Open XML file formats, they introduced a new equation editor which uses a new format, "Office Math Markup Language" (OMML). The lack of compatibility led some prestigious scientific journals to refuse to accept manuscripts which had been produced using Microsoft Office 2007.
SciWriter is another GUI that can generate MathML and LaTeX.
ASCIIMathML, a JavaScript program, can convert ad hoc ASCII notation to MathML.
References.
| [
{
"math_id": 0,
"text": "\\neg"
}
] | https://en.wikipedia.org/wiki?curid=10636131 |
106364 | Algebraic structure | Set with operations obeying given axioms
In mathematics, an algebraic structure consists of a nonempty set "A" (called the underlying set, carrier set or domain), a collection of operations on "A" (typically binary operations such as addition and multiplication), and a finite set of identities (known as "axioms") that these operations must satisfy.
An algebraic structure may be based on other algebraic structures with operations and axioms involving several structures. For instance, a vector space involves a second structure called a field, and an operation called "scalar multiplication" between elements of the field (called "scalars"), and elements of the vector space (called "vectors").
Abstract algebra is the name that is commonly given to the study of algebraic structures. The general theory of algebraic structures has been formalized in universal algebra. Category theory is another formalization that includes also other mathematical structures and functions between structures of the same type (homomorphisms).
In universal algebra, an algebraic structure is called an "algebra"; this term may be ambiguous, since, in other contexts, an algebra is an algebraic structure that is a vector space over a field or a module over a commutative ring.
The collection of all structures of a given type (same operations and same laws) is called a variety in universal algebra; this term is also used with a completely different meaning in algebraic geometry, as an abbreviation of algebraic variety. In category theory, the collection of all structures of a given type and homomorphisms between them form a concrete category.
Introduction.
Addition and multiplication are prototypical examples of operations that combine two elements of a set to produce a third element of the same set. These operations obey several algebraic laws. For example, "a" + ("b" + "c") = ("a" + "b") + "c" and "a"("bc") = ("ab")"c" are associative laws, and "a" + "b" = "b" + "a" and "ab" = "ba" are commutative laws. Many systems studied by mathematicians have operations that obey some, but not necessarily all, of the laws of ordinary arithmetic. For example, the possible moves of an object in three-dimensional space can be combined by performing a first move of the object, and then a second move from its new position. Such moves, formally called rigid motions, obey the associative law, but fail to satisfy the commutative law.
Sets with one or more operations that obey specific laws are called "algebraic structures". When a new problem involves the same laws as such an algebraic structure, all the results that have been proved using only the laws of the structure can be directly applied to the new problem.
In full generality, algebraic structures may involve an arbitrary collection of operations, including operations that combine more than two elements (higher arity operations) and operations that take only one argument (unary operations) or even zero arguments (nullary operations). The examples listed below are by no means a complete list, but include the most common structures taught in undergraduate courses.
Common axioms.
Equational axioms.
An axiom of an algebraic structure often has the form of an identity, that is, an equation such that the two sides of the equals sign are expressions that involve operations of the algebraic structure and variables. If the variables in the identity are replaced by arbitrary elements of the algebraic structure, the equality must remain true. Here are some common examples.
Existential axioms.
Some common axioms contain an existential clause. In general, such a clause can be avoided by introducing further operations, and replacing the existential clause by an identity involving the new operation. More precisely, let us consider an axiom of the form ""for all X there is y such that" formula_6", where X is a k-tuple of variables. Choosing a specific value of y for each value of X defines a function formula_7 which can be viewed as an operation of arity k, and the axiom becomes the identity formula_8
The introduction of such an auxiliary operation slightly complicates the statement of an axiom, but has some advantages. Given a specific algebraic structure, the proof that an existential axiom is satisfied generally consists of the definition of the auxiliary function, completed with straightforward verifications. Also, when computing in an algebraic structure, one generally uses the auxiliary operations explicitly. For example, in the case of numbers, the additive inverse is provided by the unary minus operation formula_9
Also, in universal algebra, a variety is a class of algebraic structures that share the same operations, and the same axioms, with the condition that all axioms are identities. What precedes shows that existential axioms of the above form are accepted in the definition of a variety.
Here are some of the most common existential axioms.
A binary operation formula_0 has an identity element if there is an element e such that formula_10 for all x in the structure. Here, the auxiliary operation is the operation of arity zero that has e as its result.
Given a binary operation formula_0 that has an identity element e, an element x is "invertible" if it has an inverse element, that is, if there exists an element formula_11 such that formula_12For example, a group is an algebraic structure with a binary operation that is associative, has an identity element, and for which all elements are invertible.
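For instance, the following brute-force Python sketch (an illustrative helper written only for this example) checks closure, associativity, the existence of an identity element and the invertibility of every element for a finite set with a binary operation, confirming that the integers mod 6 form a group under addition but not under multiplication:

```python
from itertools import product

def is_group(elements, op):
    """Brute-force check of the group axioms for a finite set with a binary operation."""
    # Closure
    if any(op(a, b) not in elements for a, b in product(elements, repeat=2)):
        return False
    # Associativity: (a*b)*c == a*(b*c)
    if any(op(op(a, b), c) != op(a, op(b, c))
           for a, b, c in product(elements, repeat=3)):
        return False
    # Identity element e with e*x == x*e == x
    identities = [e for e in elements
                  if all(op(e, x) == x and op(x, e) == x for x in elements)]
    if not identities:
        return False
    e = identities[0]
    # Every element is invertible
    return all(any(op(x, y) == e and op(y, x) == e for y in elements)
               for x in elements)

n = 6
Zn = set(range(n))
print(is_group(Zn, lambda a, b: (a + b) % n))   # True:  (Z/6Z, +) is a group
print(is_group(Zn, lambda a, b: (a * b) % n))   # False: 0, 2, 3 and 4 have no inverse
```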
Non-equational axioms.
The axioms of an algebraic structure can be any first-order formula, that is a formula involving logical connectives (such as "and", "or" and "not"), and logical quantifiers (formula_13) that apply to elements (not to subsets) of the structure.
A typical such axiom is inversion in fields. This axiom cannot be reduced to axioms of the preceding types. (It follows that fields do not form a variety in the sense of universal algebra.) It can be stated: "Every nonzero element of a field is invertible;" or, equivalently: "the structure has a unary operation inv such that
formula_14
The operation inv can be viewed either as a partial operation that is not defined for "x" = 0; or as an ordinary function whose value at 0 is arbitrary and must not be used.
Common algebraic structures.
One set with operations.
Simple structures: no binary operation:
Group-like structures: one binary operation. The binary operation can be indicated by any symbol, or with no symbol (juxtaposition) as is done for ordinary multiplication of real numbers.
Ring-like structures or Ringoids: two binary operations, often called addition and multiplication, with multiplication distributing over addition.
Lattice structures: two or more binary operations, including operations called meet and join, connected by the absorption law.
Hybrid structures.
Algebraic structures can also coexist with added structure of non-algebraic nature, such as partial order or a topology. The added structure must be compatible, in some sense, with the algebraic structure.
Universal algebra.
Algebraic structures are defined through different configurations of axioms. Universal algebra abstractly studies such objects. One major dichotomy is between structures that are axiomatized entirely by "identities" and structures that are not. If all axioms defining a class of algebras are identities, then this class is a variety (not to be confused with algebraic varieties of algebraic geometry).
Identities are equations formulated using only the operations the structure allows, and variables that are tacitly universally quantified over the relevant universe. Identities contain no connectives, existentially quantified variables, or relations of any kind other than the allowed operations. The study of varieties is an important part of universal algebra. An algebraic structure in a variety may be understood as the quotient algebra of term algebra (also called "absolutely free algebra") divided by the equivalence relations generated by a set of identities. So, a collection of functions with given signatures generate a free algebra, the term algebra "T". Given a set of equational identities (the axioms), one may consider their symmetric, transitive closure "E". The quotient algebra "T"/"E" is then the algebraic structure or variety. Thus, for example, groups have a signature containing two operators: the multiplication operator "m", taking two arguments, and the inverse operator "i", taking one argument, and the identity element "e", a constant, which may be considered an operator that takes zero arguments. Given a (countable) set of variables "x", "y", "z", etc. the term algebra is the collection of all possible terms involving "m", "i", "e" and the variables; so for example, "m"("i"("x"), "m"("x", "m"("y","e"))) would be an element of the term algebra. One of the axioms defining a group is the identity "m"("x", "i"("x")) = "e"; another is "m"("x","e") = "x". The axioms can be represented as trees. These equations induce equivalence classes on the free algebra; the quotient algebra then has the algebraic structure of a group.
Some structures do not form varieties, because either:
Structures whose axioms unavoidably include nonidentities are among the most important ones in mathematics, e.g., fields and division rings. Structures with nonidentities present challenges varieties do not. For example, the direct product of two fields is not a field, because formula_15, but fields do not have zero divisors.
Category theory.
Category theory is another tool for studying algebraic structures (see, for example, Mac Lane 1998). A category is a collection of "objects" with associated "morphisms." Every algebraic structure has its own notion of homomorphism, namely any function compatible with the operation(s) defining the structure. In this way, every algebraic structure gives rise to a category. For example, the category of groups has all groups as objects and all group homomorphisms as morphisms. This concrete category may be seen as a category of sets with added category-theoretic structure. Likewise, the category of topological groups (whose morphisms are the continuous group homomorphisms) is a category of topological spaces with extra structure. A forgetful functor between categories of algebraic structures "forgets" a part of a structure.
There are various concepts in category theory that try to capture the algebraic character of a context, for instance
Different meanings of "structure".
In a slight abuse of notation, the word "structure" can also refer to just the operations on a structure, instead of the underlying set itself. For example, the sentence, "We have defined a ring "structure" on the set formula_16", means that we have defined ring "operations" on the set formula_16. For another example, the group formula_17 can be seen as a set formula_18 that is equipped with an "algebraic structure," namely the "operation" formula_3.
Notes.
| [
{
"math_id": 0,
"text": "*"
},
{
"math_id": 1,
"text": "x*y=y*x "
},
{
"math_id": 2,
"text": "(x*y)*z=x*(y*z) "
},
{
"math_id": 3,
"text": "+"
},
{
"math_id": 4,
"text": "x*(y+z)=(x*y)+(x*z) "
},
{
"math_id": 5,
"text": "(y+z)*x=(y*x)+(z*x) "
},
{
"math_id": 6,
"text": "f(X,y)=g(X,y)"
},
{
"math_id": 7,
"text": "\\varphi:X\\mapsto y,"
},
{
"math_id": 8,
"text": "f(X,\\varphi(X))=g(X,\\varphi(X))."
},
{
"math_id": 9,
"text": "x\\mapsto -x."
},
{
"math_id": 10,
"text": "x*e=x\\quad \\text{and} \\quad e*x=x"
},
{
"math_id": 11,
"text": "\\operatorname{inv}(x)"
},
{
"math_id": 12,
"text": "\\operatorname{inv}(x)*x=e \\quad \\text{and} \\quad x*\\operatorname{inv}(x)=e."
},
{
"math_id": 13,
"text": "\\forall, \\exists"
},
{
"math_id": 14,
"text": "\\forall x, \\quad x=0 \\quad\\text{or} \\quad x \\cdot\\operatorname{inv}(x)=1."
},
{
"math_id": 15,
"text": "(1,0)\\cdot(0,1)=(0,0)"
},
{
"math_id": 16,
"text": "A"
},
{
"math_id": 17,
"text": "(\\mathbb Z, +)"
},
{
"math_id": 18,
"text": "\\mathbb Z"
}
] | https://en.wikipedia.org/wiki?curid=106364 |
1063654 | Rossby parameter | The Rossby parameter (or simply beta formula_0) is a number used in geophysics and meteorology which arises due to the meridional variation of the Coriolis force caused by the spherical shape of the Earth. It is important in the generation of Rossby waves. The Rossby parameter formula_0 is given by
formula_1
where formula_2 is the Coriolis parameter, formula_3 is the latitude, formula_4 is the angular speed of the Earth's rotation, and formula_5 is the mean radius of the Earth. Although both involve Coriolis effects, the Rossby parameter describes the "variation" of the effects with latitude (hence the latitudinal derivative), and should not be confused with the Rossby number. | [
{
"math_id": 0,
"text": "\\beta"
},
{
"math_id": 1,
"text": "\\beta = \\frac{\\partial f}{\\partial y} = \\frac{1}{a} \\frac{d}{d\\phi} (2 \\omega \\sin\\phi) = \\frac{2\\omega \\cos\\phi}{a} "
},
{
"math_id": 2,
"text": " f "
},
{
"math_id": 3,
"text": "\\phi"
},
{
"math_id": 4,
"text": "\\omega"
},
{
"math_id": 5,
"text": "a"
}
] | https://en.wikipedia.org/wiki?curid=1063654 |
10638693 | Prime zeta function | Mathematical function
In mathematics, the prime zeta function is an analogue of the Riemann zeta function, studied by . It is defined as the following infinite series, which converges for formula_0:
formula_1
Properties.
The Euler product for the Riemann zeta function "ζ"("s") implies that
formula_2
which by Möbius inversion gives
formula_3
When "s" goes to 1, we have formula_4.
This is used in the definition of Dirichlet density.
This gives the continuation of "P"("s") to formula_5, with an infinite number of logarithmic singularities at points "s" where "ns" is a pole (only "ns" = 1 when "n" is a squarefree number greater than or equal to 1), or zero of the Riemann zeta function "ζ"(.). The line formula_6 is a natural boundary as the singularities cluster near all points of this line.
If one defines a sequence
formula_7
then
formula_8
The prime zeta function is related to Artin's constant by
formula_9
where "L""n" is the "n"th Lucas number.
Specific values are:
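For instance, a short numerical sketch (assuming Python with the mpmath library, and truncating both the prime sum and the Möbius series above) recovers P(2) ≈ 0.4522474…:

```python
from mpmath import mp, zeta, log

mp.dps = 30

def mobius(n):
    """Moebius function by trial factorization (adequate for small n)."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0            # n has a squared factor
            result = -result
        d += 1
    return -result if n > 1 else result

def prime_zeta(s, terms=60):
    """P(s) = sum_{n >= 1} mu(n) * log(zeta(n*s)) / n, truncated."""
    return sum(mobius(n) * log(zeta(n * s)) / n for n in range(1, terms + 1))

# Direct comparison: sum of 1/p^2 over the primes below one million.
N = 10**6
sieve = bytearray([1]) * (N + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(N**0.5) + 1):
    if sieve[i]:
        sieve[i * i :: i] = bytearray(len(range(i * i, N + 1, i)))

print(prime_zeta(2))                                         # 0.45224742004...
print(sum(1.0 / p**2 for p in range(2, N + 1) if sieve[p]))  # close, converging from below
```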
Analysis.
Integral.
The integral over the prime zeta function is usually anchored at infinity,
because the pole at formula_10 prohibits defining a nice lower bound
at some finite integer without entering a discussion on branch cuts in the complex plane:
formula_11
The noteworthy values are again those where the sums converge slowly:
Derivative.
The first derivative is
formula_12
The interesting values are again those where the sums converge slowly:
Generalizations.
Almost-prime zeta functions.
As the Riemann zeta function is a sum of inverse powers over the integers
and the prime zeta function a sum of inverse powers of the prime numbers,
the k-primes (the integers which are a product of formula_13 not
necessarily distinct primes) define a sort of intermediate sums:
formula_14
where formula_15 is the total number of prime factors.
Each integer in the denominator of the Riemann zeta function formula_16
may be classified by its value of the index formula_13, which decomposes the Riemann zeta
function into an infinite sum of the formula_17:
formula_18
Since we know that the Dirichlet series (in some formal parameter "u") satisfies
formula_19
we can use formulas for the symmetric polynomial variants with a generating function of the right-hand-side type. Namely, we have the coefficient-wise identity that formula_20 when the sequences correspond to formula_21 where formula_22 denotes the characteristic function of the primes. Using Newton's identities, we have a general formula for these sums given by
formula_23
Special cases include the following explicit expansions:
formula_24
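The first non-trivial case can be checked numerically: the expansion of "P"2("s") given above agrees with a direct sum of inverse powers over the 2-almost primes. The following sketch (a slow brute-force illustration using trial factorization, truncated at 10^5) does this for "s" = 2:

```python
def big_omega(n):
    """Number of prime factors of n counted with multiplicity."""
    count, d = 0, 2
    while d * d <= n:
        while n % d == 0:
            n //= d
            count += 1
        d += 1
    return count + (1 if n > 1 else 0)

N = 10**5
omegas = [0, 0] + [big_omega(n) for n in range(2, N + 1)]

def P(s):
    """Prime zeta function, truncated at N."""
    return sum(1.0 / n**s for n in range(2, N + 1) if omegas[n] == 1)

# P_2(2) summed directly over the 2-almost primes, and via the expansion above.
direct = sum(1.0 / n**2 for n in range(2, N + 1) if omegas[n] == 2)
from_expansion = (P(2) ** 2 + P(4)) / 2
print(direct, from_expansion)        # both approximately 0.1408
```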
Prime modulo zeta functions.
Constructing the sum not over all primes but only over primes which are in the same modulo class introduces further types of infinite series that are a reduction of the Dirichlet L-function.
References.
| [
{
"math_id": 0,
"text": "\\Re(s) > 1"
},
{
"math_id": 1,
"text": "P(s)=\\sum_{p\\,\\in\\mathrm{\\,primes}} \\frac{1}{p^s}=\\frac{1}{2^s}+\\frac{1}{3^s}+\\frac{1}{5^s}+\\frac{1}{7^s}+\\frac{1}{11^s}+\\cdots."
},
{
"math_id": 2,
"text": "\\log\\zeta(s)=\\sum_{n>0} \\frac{P(ns)} n"
},
{
"math_id": 3,
"text": "P(s)=\\sum_{n>0} \\mu(n)\\frac{\\log\\zeta(ns)} n"
},
{
"math_id": 4,
"text": "P(s)\\sim \\log\\zeta(s)\\sim\\log\\left(\\frac{1}{s-1} \\right)"
},
{
"math_id": 5,
"text": "\\Re(s) > 0"
},
{
"math_id": 6,
"text": "\\Re(s) = 0"
},
{
"math_id": 7,
"text": "a_n=\\prod_{p^k \\mid n} \\frac{1}{k}=\\prod_{p^k \\mid \\mid n} \\frac 1 {k!} "
},
{
"math_id": 8,
"text": "P(s)=\\log\\sum_{n=1}^\\infty \\frac{a_n}{n^s}."
},
{
"math_id": 9,
"text": "\\ln C_{\\mathrm{Artin}} = - \\sum_{n=2}^{\\infty} \\frac{(L_n-1)P(n)}{n}"
},
{
"math_id": 10,
"text": "s=1"
},
{
"math_id": 11,
"text": "\\int_s^\\infty P(t) \\, dt = \\sum_p \\frac 1 {p^s\\log p}"
},
{
"math_id": 12,
"text": "P'(s) \\equiv \\frac{d}{ds} P(s) = - \\sum_p \\frac{\\log p}{p^s}"
},
{
"math_id": 13,
"text": "k"
},
{
"math_id": 14,
"text": "P_k(s)\\equiv \\sum_{n: \\Omega(n)=k} \\frac 1 {n^s}"
},
{
"math_id": 15,
"text": "\\Omega"
},
{
"math_id": 16,
"text": "\\zeta"
},
{
"math_id": 17,
"text": "P_k"
},
{
"math_id": 18,
"text": "\\zeta(s) = 1+\\sum_{k=1,2,\\ldots} P_k(s)"
},
{
"math_id": 19,
"text": "P_{\\Omega}(u, s) := \\sum_{n \\geq 1} \\frac{u^{\\Omega(n)}}{n^s} = \\prod_{p \\in \\mathbb{P}} \\left(1-up^{-s}\\right)^{-1},"
},
{
"math_id": 20,
"text": "P_k(s) = [u^k] P_{\\Omega}(u, s) = h(x_1, x_2, x_3, \\ldots)"
},
{
"math_id": 21,
"text": "x_j := j^{-s} \\chi_{\\mathbb{P}}(j)"
},
{
"math_id": 22,
"text": "\\chi_{\\mathbb{P}}"
},
{
"math_id": 23,
"text": "P_n(s) = \\sum_{{k_1+2k_2+\\cdots+nk_n=n} \\atop {k_1,\\ldots,k_n \\geq 0}} \\left[\\prod_{i=1}^n \\frac{P(is)^{k_i}}{k_i! \\cdot i^{k_i}}\\right] = -[z^n]\\log\\left(1 - \\sum_{j \\geq 1} \\frac{P(js) z^j}{j}\\right)."
},
{
"math_id": 24,
"text": "\\begin{align}P_1(s) & = P(s) \\\\ P_2(s) & = \\frac{1}{2}\\left(P(s)^2+P(2s)\\right) \\\\ P_3(s) & = \\frac{1}{6}\\left(P(s)^3+3P(s)P(2s)+2P(3s)\\right) \\\\ P_4(s) & = \\frac{1}{24}\\left(P(s)^4+6P(s)^2 P(2s)+3 P(2s)^2+8P(s)P(3s)+6P(4s)\\right).\\end{align}"
}
] | https://en.wikipedia.org/wiki?curid=10638693 |
1063946 | Occurs check | In computer science, the occurs check is a part of algorithms for syntactic unification. It causes unification of a variable "V" and a structure "S" to fail if "S" contains "V".
Application in theorem proving.
In theorem proving, unification without the occurs check can lead to unsound inference. For example, the Prolog goal
formula_0
will succeed, binding "X" to a cyclic structure which has no counterpart in the Herbrand universe.
As another example,
without occurs-check, a resolution proof can be found for the non-theorem
formula_1: the negation of that formula has the conjunctive normal form formula_2, with formula_3 and formula_4 denoting the Skolem function for the first and second existential quantifier, respectively. Without occurs check, the literals formula_5 and formula_6 are unifiable, producing the refuting empty clause.
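A toy unifier in Python (a sketch only, with compound terms written as tuples whose first entry is the functor and variables written as bare strings) makes the role of the check concrete: with the occurs check the goals above fail, while without it the unifier happily builds a cyclic binding:

```python
def walk(term, subst):
    """Follow variable bindings until an unbound variable or a compound term is reached."""
    while isinstance(term, str) and term in subst:
        term = subst[term]
    return term

def occurs(var, term, subst):
    """True if the variable var occurs in term under the current substitution."""
    term = walk(term, subst)
    if term == var:
        return True
    if isinstance(term, tuple):
        return any(occurs(var, arg, subst) for arg in term[1:])
    return False

def unify(t1, t2, subst, occurs_check=True):
    t1, t2 = walk(t1, subst), walk(t2, subst)
    if t1 == t2:
        return subst
    if isinstance(t1, str):                        # t1 is an unbound variable
        if occurs_check and occurs(t1, t2, subst):
            return None                            # occurs check failure
        return {**subst, t1: t2}
    if isinstance(t2, str):
        return unify(t2, t1, subst, occurs_check)
    if isinstance(t1, tuple) and isinstance(t2, tuple) \
            and t1[0] == t2[0] and len(t1) == len(t2):   # same functor and arity
        for a, b in zip(t1[1:], t2[1:]):
            subst = unify(a, b, subst, occurs_check)
            if subst is None:
                return None
        return subst
    return None

print(unify("X", ("f", "X"), {}))                      # None: rejected by the occurs check
print(unify("X", ("f", "X"), {}, occurs_check=False))  # {'X': ('f', 'X')}: a cyclic binding
print(unify(("p", "X", ("f", "X")), ("p", ("g", "Y"), "Y"), {}))  # None with the check
```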
Rational tree unification.
Prolog implementations usually omit the occurs check for reasons of efficiency, which can lead to circular data structures and looping.
By not performing the occurs check, the worst case complexity of unifying a term
formula_7 with term formula_8
is reduced in many cases from
formula_9
to
formula_10;
in the particular, frequent case of variable-term unifications, runtime shrinks to
formula_11.
Modern implementations, based on Colmerauer's Prolog II,
use rational tree unification to avoid looping. However, it is difficult to keep the time complexity linear in the presence of cyclic terms. Examples where Colmerauer's algorithm becomes quadratic can be readily constructed, but refinement proposals exist.
See image for an example run of the unification algorithm given in Unification (computer science)#A unification algorithm, trying to solve the goal formula_12, however without the "occurs check rule" (named "check" there); applying rule "eliminate" instead leads to a cyclic graph (i.e. an infinite term) in the last step.
Sound unification.
ISO Prolog implementations have the built-in predicate "unify_with_occurs_check/2" for sound unification but are free to use unsound or even looping algorithms when unification is invoked otherwise, provided the algorithm works correctly for all cases that are "not subject to occurs-check" (NSTO). The built-in "acyclic_term/1" serves to check the finiteness of terms.
Implementations offering sound unification for all unifications are Qu-Prolog and Strawberry Prolog and (optionally, via a runtime flag): XSB, SWI-Prolog, Tau Prolog, Trealla Prolog and Scryer Prolog. A variety
of optimizations can render sound unification feasible for common cases.
Notes.
References.
| [
{
"math_id": 0,
"text": "X = f(X)"
},
{
"math_id": 1,
"text": "(\\forall x \\exists y. p(x,y)) \\rightarrow (\\exists y \\forall x. p(x,y))"
},
{
"math_id": 2,
"text": "p(X,f(X)) \\land \\lnot p(g(Y),Y)"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "g"
},
{
"math_id": 5,
"text": "p(X,f(X))"
},
{
"math_id": 6,
"text": "p(g(Y),Y)"
},
{
"math_id": 7,
"text": "t_1"
},
{
"math_id": 8,
"text": "t_2"
},
{
"math_id": 9,
"text": "O(\\text{size}(t_1)+\\text{size}(t_2))"
},
{
"math_id": 10,
"text": "O(\\text{min}(\\text{size}(t_1),\\text{size}(t_2)))"
},
{
"math_id": 11,
"text": "O(1)"
},
{
"math_id": 12,
"text": "cons(x,y) \\stackrel{?}{=} cons(1,cons(x,cons(2,y)))"
}
] | https://en.wikipedia.org/wiki?curid=1063946 |
10640172 | Robotics conventions | Conventions used in the robotics research field
There are many conventions used in the robotics research field. This article summarises these conventions.
Line representations.
Lines in robotics are the following:
Non-minimal vector coordinates.
A line formula_0 is completely defined by the ordered set of two vectors:
Each point formula_4 on the line is given a parameter value formula_5 that satisfies:
formula_6. The parameter t is unique once formula_1 and formula_3 are chosen. The representation formula_0 is not minimal, because it uses six parameters for only four degrees of freedom. The following two constraints apply:
Plücker coordinates.
Arthur Cayley and Julius Plücker introduced an alternative representation using two free vectors. This representation was finally named after Plücker.
The Plücker representation is denoted by formula_7. Both formula_3 and formula_8 are free vectors: formula_3 represents the direction of the line and formula_8 is the moment of formula_3 about the chosen reference origin. formula_9 (formula_8 is independent of which point formula_1 on the line is chosen!)
The advantage of the Plücker coordinates is that they are homogeneous.
A line in Plücker coordinates still has four out of six independent parameters, so it is not a minimal representation. The two constraints on the six Plücker coordinates are
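In particular, formula_3 and formula_8 are always orthogonal (this follows immediately from formula_9), and the moment does not depend on which point of the line is chosen. A short NumPy sketch (an illustration only) checks both facts for a random line:

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.normal(size=3)           # a point on the line
d = rng.normal(size=3)           # direction of the line

m = np.cross(p, d)               # moment of d about the chosen origin

# Orthogonality: d . m = 0 (up to rounding).
print(np.isclose(np.dot(d, m), 0.0))        # True

# The moment is independent of which point on the line is used.
p2 = p + 2.7 * d                            # another point on the same line
print(np.allclose(np.cross(p2, d), m))      # True
```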
Minimal line representation.
A line representation is minimal if it uses four parameters, which is the minimum needed to represent all possible lines in the Euclidean Space (E³).
Denavit–Hartenberg line coordinates.
Jacques Denavit and Richard S. Hartenberg presented the first minimal representation for a line which is now widely used. The common normal between two lines was the main geometric concept that allowed Denavit and Hartenberg to find a minimal representation. Engineers use the Denavit–Hartenberg convention (D–H) to help them describe the positions of links and joints unambiguously. Every link gets its own coordinate system. There are a few rules to consider in choosing the coordinate system:
Once the coordinate frames are determined, inter-link transformations are uniquely described by the following four parameters:
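As an illustration, the following sketch (assuming NumPy and the classic "distal" D–H convention, in which θ and d act about and along the previous z axis while r and α act along and about the new x axis) assembles the corresponding 4×4 homogeneous transform between consecutive link frames:

```python
import numpy as np

def dh_transform(theta, d, r, alpha):
    """One link transform in the classic Denavit-Hartenberg convention:
    rotate by theta about and translate by d along the previous z axis, then
    translate by r along and rotate by alpha about the resulting x axis."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, r * ct],
        [st,  ct * ca, -ct * sa, r * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

# Example: a single revolute joint rotated by 90 degrees with a 0.5 m link.
T = dh_transform(np.pi / 2, 0.0, 0.5, 0.0)
print(np.round(T, 3))
```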
Hayati–Roberts line coordinates.
The Hayati–Roberts line representation, denoted formula_19, is another minimal line representation, with parameters:
This representation is unique for a directed line. The coordinate singularities are different from the DH singularities: it has singularities if the line becomes parallel to either the formula_22 or formula_23 axis of the world frame.
Product of exponentials formula.
The product of exponentials formula represents the kinematics of an open-chain mechanism as the product of exponentials of twists, and may be used to describe a series of revolute, prismatic, and helical joints. | [
{
"math_id": 0,
"text": "L(p,d)"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "L"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "t"
},
{
"math_id": 6,
"text": "x = p+td"
},
{
"math_id": 7,
"text": "L_{pl}(d,m)"
},
{
"math_id": 8,
"text": "m"
},
{
"math_id": 9,
"text": "m = p\\times d"
},
{
"math_id": 10,
"text": "z"
},
{
"math_id": 11,
"text": "x_n = z_n \\times z_{n - 1}"
},
{
"math_id": 12,
"text": "y"
},
{
"math_id": 13,
"text": "\\theta\\,"
},
{
"math_id": 14,
"text": "d\\,"
},
{
"math_id": 15,
"text": "r\\,"
},
{
"math_id": 16,
"text": "a"
},
{
"math_id": 17,
"text": "\\alpha"
},
{
"math_id": 18,
"text": "\\alpha\\,"
},
{
"math_id": 19,
"text": "L_{hr}(e_{x},e_{y},l_{x},l_{y})"
},
{
"math_id": 20,
"text": "e_{x}"
},
{
"math_id": 21,
"text": "e_{y}"
},
{
"math_id": 22,
"text": "X"
},
{
"math_id": 23,
"text": "Y"
},
{
"math_id": 24,
"text": "e"
},
{
"math_id": 25,
"text": "Z"
},
{
"math_id": 26,
"text": "e_{z}=(1-e_{x}^2-e_{y}^2)^{\\frac{1}{2}}"
},
{
"math_id": 27,
"text": "l_{x}"
},
{
"math_id": 28,
"text": "l_{y}"
}
] | https://en.wikipedia.org/wiki?curid=10640172 |
10640474 | Positive-definite function on a group | In mathematics, and specifically in operator theory, a positive-definite function on a group relates the notions of positivity, in the context of Hilbert spaces, and algebraic groups. It can be viewed as a particular type of positive-definite kernel where the underlying set has the additional group structure.
Definition.
Let "G" be a group, "H" be a complex Hilbert space, and "L"("H") be the bounded operators on "H".
A positive-definite function on "G" is a function "F": "G" → "L"("H") that satisfies
formula_0
for every function "h": "G" → "H" with finite support ("h" takes non-zero values for only finitely many "s").
In other words, a function "F": "G" → "L"("H") is said to be a positive-definite function if the kernel "K": "G" × "G" → "L"("H") defined by "K"("s", "t") = "F"("s"−1"t") is a positive-definite kernel.
Unitary representations.
A unitary representation is a unital homomorphism Φ: "G" → "L"("H") where Φ("s") is a unitary operator for all "s". For such Φ, Φ("s"−1) = Φ("s")*.
Positive-definite functions on "G" are intimately related to unitary representations of "G". Every unitary representation of "G" gives rise to a family of positive-definite functions. Conversely, given a positive-definite function, one can define a unitary representation of "G" in a natural way.
Let Φ: "G" → "L"("H") be a unitary representation of "G". If "P" ∈ "L"("H") is the projection onto a closed subspace "H′" of "H", then "F"("s") = "P" Φ("s") is a positive-definite function on "G" with values in "L"("H′"). This can be shown readily:
formula_1
for every "h": "G" → "H′" with finite support. If "G" has a topology and Φ is weakly (resp. strongly) continuous, then clearly so is "F".
On the other hand, consider now a positive-definite function "F" on "G". A unitary representation of "G" can be obtained as follows. Let "C"00("G", "H") be the family of functions "h": "G" → "H" with finite support. The corresponding positive kernel "K"("s", "t") = "F"("s"−1"t") defines a (possibly degenerate) inner product on "C"00("G", "H"). Let the resulting Hilbert space be denoted by "V".
We notice that the "matrix elements" satisfy "K"("s", "t") = "K"("a"−1"s", "a"−1"t") for all "a", "s", "t" in "G". So the operator "U""a" defined by ("U""a""h")("s") = "h"("a"−1"s") preserves the inner product on "V", i.e. it is unitary in "L"("V"). It is clear that the map Φ("a") = "U""a" is a representation of "G" on "V".
The unitary representation is unique, up to Hilbert space isomorphism, provided the following minimality condition holds:
formula_2
where formula_3 denotes the closure of the linear span.
Identify "H" as elements (possibly equivalence classes) in "V", whose support consists of the identity element "e" ∈ "G", and let "P" be the projection onto this subspace. Then we have "PUaP" = "F"("a") for all "a" ∈ "G".
Toeplitz kernels.
Let "G" be the additive group of integers Z. The kernel "K"("n", "m") = "F"("m" − "n") is called a kernel of "Toeplitz" type, by analogy with Toeplitz matrices. Suppose "F" is of the form "F"("n") = "T^n" where "T" is a bounded operator acting on some Hilbert space. One can show that the kernel "K"("n", "m") is positive if and only if "T" is a contraction. By the discussion from the previous section, we have a unitary representation of Z, Φ("n") = "U^n" for a unitary operator "U". Moreover, the property "PUaP" = "F"("a") now translates to "PU^nP" = "T^n". This is precisely Sz.-Nagy's dilation theorem and hints at an important dilation-theoretic characterization of positivity that leads to a parametrization of arbitrary positive-definite kernels.
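For a scalar operator this positivity criterion is easy to check numerically: taking "T" = "t" a real number (and reading the negative powers as adjoints, so that the kernel becomes "t"^|"m" − "n"|), finite sections of the kernel matrix are positive semidefinite exactly when |"t"| ≤ 1. A small NumPy sketch (an illustration only) tests two cases:

```python
import numpy as np

def toeplitz_section(t, size):
    """Finite section of the kernel K(n, m) = t^|m - n| for a real scalar 'operator' t."""
    idx = np.arange(size)
    return t ** np.abs(idx[:, None] - idx[None, :])

def is_positive(matrix, tol=1e-10):
    return np.min(np.linalg.eigvalsh(matrix)) >= -tol

print(is_positive(toeplitz_section(0.8, 20)))   # True:  |t| <= 1, a contraction
print(is_positive(toeplitz_section(1.5, 20)))   # False: |t| > 1, not a contraction
```
| [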
{
"math_id": 0,
"text": "\\sum_{s,t \\in G}\\langle F(s^{-1}t) h(t), h(s) \\rangle \\geq 0 ,"
},
{
"math_id": 1,
"text": "\\begin{align}\n\\sum_{s,t \\in G}\\langle F(s^{-1}t) h(t), h(s) \\rangle \n& =\\sum_{s,t \\in G}\\langle P \\Phi (s^{-1}t) h(t), h(s) \\rangle \\\\ \n{} & =\\sum_{s,t \\in G}\\langle \\Phi (t) h(t), \\Phi(s)h(s) \\rangle \\\\ \n{} & = \\left\\langle \\sum_{t \\in G} \\Phi (t) h(t), \\sum_{s \\in G} \\Phi(s)h(s) \\right\\rangle \\\\ \n{} & \\geq 0\n\\end{align}\n"
},
{
"math_id": 2,
"text": "V = \\bigvee_{s \\in G} \\Phi(s)H \\, "
},
{
"math_id": 3,
"text": "\\bigvee"
}
] | https://en.wikipedia.org/wiki?curid=10640474 |
106418 | Computational physics | Numerical simulations of physical problems via computers
Computational physics is the study and implementation of numerical analysis to solve problems in physics. Historically, computational physics was the first application of modern computers in science, and is now a subset of computational science. It is sometimes regarded as a subdiscipline (or offshoot) of theoretical physics, but others consider it an intermediate branch between theoretical and experimental physics — an area of study which supplements both theory and experiment.
Overview.
In physics, different theories based on mathematical models provide very precise predictions on how systems behave. Unfortunately, it is often the case that solving the mathematical model for a particular system in order to produce a useful prediction is not feasible. This can occur, for instance, when the solution does not have a closed-form expression, or is too complicated. In such cases, numerical approximations are required. Computational physics is the subject that deals with these numerical approximations: the approximation of the solution is written as a finite (and typically large) number of simple mathematical operations (algorithm), and a computer is used to perform these operations and compute an approximated solution and respective error.
Status in physics.
There is a debate about the status of computation within the scientific method. Sometimes it is regarded as more akin to theoretical physics; some others regard computer simulation as "computer experiments", yet still others consider it an intermediate or different branch between theoretical and experimental physics, a third way that supplements theory and experiment. While computers can be used in experiments for the measurement and recording (and storage) of data, this clearly does not constitute a computational approach.
Challenges in computational physics.
Computational physics problems are in general very difficult to solve exactly. This is due to several (mathematical) reasons: lack of algebraic and/or analytic solvability, complexity, and chaos. For example, even apparently simple problems, such as calculating the wavefunction of an electron orbiting an atom in a strong electric field (Stark effect), may require great effort to formulate a practical algorithm (if one can be found); other cruder or brute-force techniques, such as graphical methods or root finding, may be required. On the more advanced side, mathematical perturbation theory is also sometimes used (a worked example is shown for this particular case here). In addition, the computational cost and computational complexity for many-body problems (and their classical counterparts) tend to grow quickly. A macroscopic system typically has a size of the order of formula_0 constituent particles, so simulating its constituents directly is computationally out of reach. Solving quantum mechanical problems is generally of exponential order in the size of the system and for classical N-body it is of order N-squared. Finally, many physical systems are inherently nonlinear at best, and at worst chaotic: this means it can be difficult to ensure any numerical errors do not grow to the point of rendering the 'solution' useless.
Methods and algorithms.
Because computational physics uses a broad class of problems, it is generally divided amongst the different mathematical problems it numerically solves, or the methods it applies. Between them, one can consider:
All these methods (and several others) are used to calculate physical properties of the modeled systems.
Computational physics also borrows a number of ideas from computational chemistry - for example, the density functional theory used by computational solid state physicists to calculate properties of solids is basically the same as that used by chemists to calculate the properties of molecules.
Furthermore, computational physics encompasses the tuning of the software/hardware structure used to solve the problems (as the problems usually can be very large, in processing power needs or in memory requests).
Divisions.
It is possible to find a corresponding computational branch for every major field in physics:
Applications.
Due to the broad class of problems computational physics deals, it is an essential component of modern research in different areas of physics, namely: accelerator physics, astrophysics, general theory of relativity (through numerical relativity), fluid mechanics (computational fluid dynamics), lattice field theory/lattice gauge theory (especially lattice quantum chromodynamics), plasma physics (see plasma modeling), simulating physical systems (using e.g. molecular dynamics), nuclear engineering computer codes, protein structure prediction, weather prediction, solid state physics, soft condensed matter physics, hypervelocity impact physics etc.
Computational solid state physics, for example, uses density functional theory to calculate properties of solids, a method similar to that used by chemists to study molecules. Other quantities of interest in solid state physics, such as the electronic band structure, magnetic properties and charge densities can be calculated by this and several methods, including the Luttinger-Kohn/k.p method and ab-initio methods.
References.
| [
{
"math_id": 0,
"text": "10^{23}"
}
] | https://en.wikipedia.org/wiki?curid=106418 |
10646392 | Mortality (computability theory) | In computability theory, the mortality problem is a decision problem related to the halting problem. For Turing machines, the halting problem can be stated as follows:
"Given a Turing machine, and a word, decide whether the machine halts when run on the given word."
In contrast, the mortality problem for Turing machines asks whether "all" executions of the machine, starting from "any configuration", halt.
In the statement above, a "configuration" specifies both the machine's state (not necessarily its initial state),
its tape position and the contents of the tape. While we usually assume that in the starting configuration all but finitely many cells on the tape are blanks, in the mortality problem the tape can have arbitrary content, including infinitely many non-blank symbols written on it.
Philip K. Hooper proved in 1966 that the mortality problem is undecidable. This is true both for a machine with a tape infinite in both directions, and for a machine with semi-infinite tape. Note that this result does not directly follow from the well-known total function problem (Does a given machine halt for every input?), since the latter problem concerns only valid computations (starting with an initial configuration).
The variant in which only finite configurations are considered is also undecidable, as proved by Herman, who calls it "the uniform halting problem". He shows that the problem is not just undecidable, but formula_0-complete.
Additional Models.
The problem can naturally be rephrased for any computational model in which there are notions of "configuration" and "transition". A member of the model will be mortal if there is no configuration that leads to an infinite chain of transitions. The mortality problem has been proved undecidable for:
References.
| [
{
"math_id": 0,
"text": "\\Pi^0_2"
},
{
"math_id": 1,
"text": "{\\mathbb R}^n"
},
{
"math_id": 2,
"text": "{\\mathbb Q}^n"
},
{
"math_id": 3,
"text": "{\\mathbb Z}^n"
},
{
"math_id": 4,
"text": "n\\ge 2"
}
] | https://en.wikipedia.org/wiki?curid=10646392 |
1064839 | Bose gas | State of matter of many bosons
An ideal Bose gas is a quantum-mechanical phase of matter, analogous to a classical ideal gas. It is composed of bosons, which have an integer value of spin and abide by Bose–Einstein statistics. The statistical mechanics of bosons were developed by Satyendra Nath Bose for a photon gas and extended to massive particles by Albert Einstein, who realized that an ideal gas of bosons would form a condensate at a low enough temperature, unlike a classical ideal gas. This condensate is known as a Bose–Einstein condensate.
Introduction and examples.
Bosons are quantum mechanical particles that follow Bose–Einstein statistics, or equivalently, that possess integer spin. These particles can be classified as elementary, such as the Higgs boson, the photon, the gluon, the W/Z bosons and the hypothetical graviton, or composite, such as the hydrogen atom, the 16O atom, the deuterium nucleus, mesons, etc. Additionally, some quasiparticles in more complex systems can also be considered bosons, such as plasmons (quanta of charge density waves).
The first model that treated a gas of several bosons was the photon gas, a gas of photons, developed by Bose. This model leads to a better understanding of Planck's law and black-body radiation. The photon gas can easily be extended to any kind of ensemble of massless non-interacting bosons. The "phonon gas", also known as the Debye model, is an example where the normal modes of vibration of the crystal lattice of a metal can be treated as effective massless bosons. Peter Debye used the phonon gas model to explain the behaviour of the heat capacity of metals at low temperature.
An interesting example of a Bose gas is an ensemble of helium-4 atoms. When a system of 4He atoms is cooled to temperatures near absolute zero, many quantum mechanical effects are present. Below 2.17 K, the ensemble starts to behave as a superfluid, a fluid with almost zero viscosity. The Bose gas is the simplest quantitative model that explains this phase transition. When a gas of bosons is cooled down, it forms a Bose–Einstein condensate, a state in which a large number of bosons occupy the lowest energy state (the ground state) and quantum effects, such as wave interference, become macroscopically visible.
The theory of Bose–Einstein condensates and Bose gases can also explain some features of superconductivity, where charge carriers couple in pairs (Cooper pairs) and behave like bosons. As a result, superconductors behave as if they had no electrical resistivity at low temperatures.
The equivalent model for half-integer particles (like electrons or helium-3 atoms), that follow Fermi–Dirac statistics, is called the Fermi gas (an ensemble of non-interacting fermions). At low enough particle number density and high temperature, both the Fermi gas and the Bose gas behave like a classical ideal gas.
Macroscopic limit.
The thermodynamics of an ideal Bose gas is best calculated using the grand canonical ensemble. The grand potential for a Bose gas is given by:
formula_0
where each term in the sum corresponds to a particular single-particle energy level "ε""i"; "g""i" is the number of states with energy "ε""i"; "z" is the absolute activity (or "fugacity"), which may also be expressed in terms of the chemical potential "μ" by defining:
formula_1
and "β" defined as:
formula_2
where "k"B is the Boltzmann constant and "T" is the temperature. All thermodynamic quantities may be derived from the grand potential and we will consider all thermodynamic quantities to be functions of only the three variables "z", "β" (or "T"), and "V". All partial derivatives are taken with respect to one of these three variables while the other two are held constant.
The permissible range of "z" is from negative infinity to +1, as any value beyond this would give an infinite number of particles to states with an energy level of 0 (it is assumed that the energy levels have been offset so that the lowest energy level is 0).
Macroscopic limit, result for uncondensed fraction.
Following the procedure described in the gas in a box article, we can apply the Thomas–Fermi approximation, which assumes that the average energy is large compared to the energy difference between levels so that the above sum may be replaced by an integral. This replacement gives the macroscopic grand potential function formula_3, which is close to formula_4:
formula_5
The degeneracy "dg" may be expressed for many different situations by the general formula:
formula_6
where "α" is a constant, "E"c is a "critical" energy, and Γ is the gamma function. For example, for a massive Bose gas in a box, "α" = 3/2 and the critical energy is given by:
formula_7
where Λ is the thermal wavelength, and "f" is a degeneracy factor ("f" = 1 for simple spinless bosons). For a massive Bose gas in a harmonic trap we will have "α" = 3 and the critical energy is given by:
formula_8
where "V"("r") = "mω"2"r"2/2 is the harmonic potential. It is seen that "E"c is a function of volume only.
This integral expression for the grand potential evaluates to:
formula_9
where Li"s"("x") is the polylogarithm function.
The problem with this continuum approximation for a Bose gas is that the ground state has been effectively ignored, giving a degeneracy of zero for zero energy. This inaccuracy becomes serious when dealing with the Bose–Einstein condensate and will be dealt with in the next sections. As will be seen, even at low temperatures the above result is still useful for accurately describing the thermodynamics of just the uncondensed portion of the gas.
Limit on number of particles in uncondensed phase, critical temperature.
The total number of particles is found from the grand potential by
formula_10
This increases monotonically with "z" (up to the maximum "z" = +1). The behaviour when approaching "z" = 1 is however crucially dependent on the value of "α" (i.e., dependent on whether the gas is 1D, 2D, 3D, whether it is in a flat or harmonic potential well).
For "α" > 1, the number of particles only increases up to a finite maximum value, i.e., "N"m is finite at "z" = 1:
formula_11
where "ζ"("α") is the Riemann zeta function (using Li"α"("1") = "ζ"("α")). Thus, for a fixed number of particles "N"m, the largest possible value that "β" can have is a critical value "β"c. This corresponds to a critical temperature "T"c = 1/"k"B"β"c, below which the Thomas–Fermi approximation breaks down (the continuum of states simply can no longer support this many particles, at lower temperatures). The above equation can be solved for the critical temperature:
formula_12
For example, for the three-dimensional Bose gas in a box (formula_13 and using the above noted value of "E"c) we get:
formula_14
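As a rough numerical illustration of this last formula, the short Python sketch below evaluates "T"c for parameters approximating liquid helium-4; the mass and number density used here are illustrative assumptions, not values taken from this article. The result comes out near 3 K, of the same order as the 2.17 K superfluid transition mentioned in the introduction.

```python
# Illustrative sketch (parameter values are assumptions, not from the article):
# evaluate T_c = (N/(V f zeta(3/2)))^(2/3) * h^2 / (2*pi*m*k_B) numerically.
from scipy.constants import h, k as k_B, pi
from scipy.special import zeta

m = 4.0026 * 1.66054e-27   # approximate mass of a helium-4 atom, kg
n = 2.18e28                # approximate number density of liquid 4He, m^-3
f = 1                      # degeneracy factor for spinless bosons

T_c = (n / (f * zeta(1.5))) ** (2.0 / 3.0) * h**2 / (2 * pi * m * k_B)
print(f"T_c ≈ {T_c:.2f} K")   # of order 3 K for these parameters
```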
For "α" ≤ 1, there is no upper limit on the number of particles ("N"m diverges as "z" approaches 1), and thus for example for a gas in a one- or two-dimensional box ("α" = 1/2 and "α" = 1 respectively) there is no critical temperature.
Inclusion of the ground state.
The above problem raises the question for "α" > 1: if a Bose gas with a fixed number of particles is lowered down below the critical temperature, what happens?
The problem here is that the Thomas–Fermi approximation has set the degeneracy of the ground state to zero, which is wrong. There is no ground state to accept the condensate and so particles simply 'disappear' from the continuum of states. It turns out, however, that the macroscopic equation gives an accurate estimate of the number of particles in the excited states, and it is not a bad approximation to simply "tack on" a ground state term to accept the particles that fall out of the continuum:
formula_15
where "N"0 is the number of particles in the ground state condensate.
Thus in the macroscopic limit, when "T" < "T"c, the value of "z" is pinned to 1 and "N"0 takes up the remainder of particles. For "T" > "T"c there is the normal behaviour, with "N"0 = 0. This approach gives the fraction of condensed particles in the macroscopic limit:
formula_16
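A minimal sketch of this condensed-fraction formula, useful as a sanity check (the default "α" = 3/2 corresponds to the gas-in-a-box case discussed above):

```python
# Condensed fraction in the macroscopic limit: N0/N = 1 - (T/T_c)^alpha for
# alpha > 1 and T < T_c, and 0 otherwise.
def condensate_fraction(T, T_c, alpha=1.5):
    if alpha <= 1 or T >= T_c:
        return 0.0
    return 1.0 - (T / T_c) ** alpha

print(condensate_fraction(T=0.5, T_c=1.0))   # 1 - 0.5**1.5, about 0.65
```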
Limitations of the macroscopic Bose gas model.
The above standard treatment of a macroscopic Bose gas is straightforward, but the inclusion of the ground state is somewhat inelegant. Another approach is to include the ground state explicitly (contributing a term to the grand potential, as in the section below); this gives rise to an unrealistic fluctuation catastrophe: the number of particles in any given state follows a geometric distribution, meaning that when condensation happens at "T" < "T"c and most particles are in one state, there is a huge uncertainty in the total number of particles. This is related to the fact that the compressibility becomes unbounded for "T" < "T"c. Calculations can instead be performed in the canonical ensemble, which fixes the total particle number, but the calculations are not as easy.
Practically however, the aforementioned theoretical flaw is a minor issue, as the most unrealistic assumption is that of non-interaction between bosons. Experimental realizations of boson gases always have significant interactions, i.e., they are non-ideal gases. The interactions significantly change the physics of how a condensate of bosons behaves: the ground state spreads out, the chemical potential saturates to a positive value even at zero temperature, and the fluctuation problem disappears (the compressibility becomes finite). See the article Bose–Einstein condensate.
Approximate behaviour in small gases.
For smaller, mesoscopic, systems (for example, with only thousands of particles), the ground state term can be more explicitly approximated by adding in an actual discrete level at energy "ε"=0 in the grand potential:
formula_17
which gives instead "N"0 = "g"0"z"/(1 − "z"). Now, the behaviour is smooth when crossing the critical temperature, and "z" approaches 1 very closely but does not reach it.
This can now be solved down to absolute zero in temperature. Figure 1 shows the results of the solution to this equation for "α" = 3/2, with "k" = "ε"c = 1, which corresponds to a gas of bosons in a box. The solid black line is the fraction of excited states 1 − "N"0/"N" for "N" = and the dotted black line is the solution for "N" = 1000. The blue lines are the fraction of condensed particles "N"0/"N". The red lines plot values of the negative of the chemical potential "μ" and the green lines plot the corresponding values of "z". The horizontal axis is the normalized temperature "τ" defined by
formula_18
It can be seen that each of these parameters becomes linear in "τ""α" in the limit of low temperature and, except for the chemical potential, linear in 1/"τ""α" in the limit of high temperature. As the number of particles increases, the condensed and excited fractions tend towards a discontinuity at the critical temperature.
The equation for the number of particles can be written in terms of the normalized temperature as:
formula_19
For a given "N" and "τ", this equation can be solved for "τα" and then a series solution for "z" can be found by the method of inversion of series, either in powers of "τ""α" or as an asymptotic expansion in inverse powers of "τα". From these expansions, we can find the behavior of the gas near "T" = 0 and in the Maxwell–Boltzmann as "T" approaches infinity. In particular, we are interested in the limit as "N" approaches infinity, which can be easily determined from these expansions.
This approach to modelling small systems may in fact be unrealistic, however, since the variance in the number of particles in the ground state is very large, equal to the number of particles. In contrast, the variance of particle number in a normal gas is only the square-root of the particle number, which is why it can normally be ignored. This high variance is due to the choice of using the grand canonical ensemble for the entire system, including the condensate state.
Thermodynamics.
Expanded out, the grand potential is:
formula_20
All thermodynamic properties can be computed from this potential. The following table lists various thermodynamic quantities calculated in the limit of low temperature and high temperature, and in the limit of infinite particle number. An equal sign (=) indicates an exact result, while an approximation symbol indicates that only the first few terms of a series in formula_21 are shown.
It is seen that all quantities approach the values for a classical ideal gas in the limit of large temperature. The above values can be used to calculate other thermodynamic quantities. For example, the relationship between internal energy and the product of pressure and volume is the same as that for a classical ideal gas over all temperatures:
formula_22
A similar situation holds for the specific heat at constant volume
formula_23
The entropy is given by:
formula_24
Note that in the limit of high temperature, we have
formula_25
which, for "α" = 3/2 is simply a restatement of the Sackur–Tetrode equation. In one dimension bosons with delta interaction behave as fermions, they obey Pauli exclusion principle. In one dimension Bose gas with delta interaction can be solved exactly by Bethe ansatz. The bulk free energy and thermodynamic potentials were calculated by Chen-Ning Yang. In one dimensional case correlation functions also were evaluated. In one dimension Bose gas is equivalent to quantum non-linear Schrödinger equation.
References.
<templatestyles src="Reflist/styles.css" />
General references.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\Omega=-\\ln(\\mathcal{Z}) = \\sum_i g_i \\ln\\left(1-ze^{-\\beta\\epsilon_i}\\right)."
},
{
"math_id": 1,
"text": "z(\\beta,\\mu)= e^{\\beta \\mu}"
},
{
"math_id": 2,
"text": "\\beta = \\frac{1}{k_{\\rm B}T}"
},
{
"math_id": 3,
"text": "\\Omega_m"
},
{
"math_id": 4,
"text": "\\Omega"
},
{
"math_id": 5,
"text": "\\Omega_{\\rm m} = \\int_0^\\infty \\ln\\left(1-ze^{-\\beta E}\\right)\\,dg \\approx \\Omega."
},
{
"math_id": 6,
"text": "dg = \\frac{1}{\\Gamma(\\alpha)}\\,\\frac{E^{\\,\\alpha-1}}{ E_{\\rm c}^{\\alpha}} ~dE"
},
{
"math_id": 7,
"text": "\\frac{1}{(\\beta E_{\\rm c})^\\alpha}=\\frac{Vf}{\\Lambda^3}"
},
{
"math_id": 8,
"text": "\\frac{1}{(\\beta E_c)^\\alpha}=\\frac{f}{(\\hbar\\omega\\beta)^3}"
},
{
"math_id": 9,
"text": "\\Omega_{\\rm m} = -\\frac{\\textrm{Li}_{\\alpha+1}(z)}{\\left(\\beta E_\\text{c}\\right)^\\alpha},"
},
{
"math_id": 10,
"text": "N_{\\rm m} = -z\\frac{\\partial\\Omega_m}{\\partial z} = \\frac{\\textrm{Li}_\\alpha(z)}{(\\beta E_c)^\\alpha}."
},
{
"math_id": 11,
"text": "N_{\\rm m, max} = \\frac{\\zeta(\\alpha)}{(\\beta E_{\\rm c})^\\alpha},"
},
{
"math_id": 12,
"text": "T_{\\rm c}=\\left(\\frac{N}{\\zeta(\\alpha)}\\right)^{1/\\alpha}\\frac{E_{\\rm c}}{k_{\\rm B}}"
},
{
"math_id": 13,
"text": "\\alpha=3/2"
},
{
"math_id": 14,
"text": "T_{\\rm c}=\\left(\\frac{N}{Vf\\zeta(3/2)}\\right)^{2/3}\\frac{h^2}{2\\pi m k_{\\rm B}}"
},
{
"math_id": 15,
"text": "N = N_0+ N_{\\rm m} = N_0 + \\frac{\\textrm{Li}_\\alpha(z)}{(\\beta E_{\\rm c})^\\alpha}"
},
{
"math_id": 16,
"text": "\\frac{N_0}{N} =\n\\begin{cases}\n1 - \\left(\\frac{T}{T_{\\rm c}}\\right)^\\alpha &\\mbox{if } \\alpha > 1 \\mbox{ and } T < T_{\\rm c}, \\\\ \n0 & \\mbox{otherwise}.\n\\end{cases}\n"
},
{
"math_id": 17,
"text": "\\Omega = g_0\\ln(1-z) + \\Omega_{\\rm m}"
},
{
"math_id": 18,
"text": "\\tau=\\frac{T}{T_{\\rm c}}"
},
{
"math_id": 19,
"text": "N = \\frac{g_0\\,z}{1-z}+N~\\frac{\\textrm{Li}_\\alpha(z)}{\\zeta(\\alpha)}~\\tau^\\alpha"
},
{
"math_id": 20,
"text": "\\Omega = g_0\\ln(1-z)-\\frac{\\textrm{Li}_{\\alpha+1}(z)}{\\left(\\beta E_{\\rm c}\\right)^\\alpha}"
},
{
"math_id": 21,
"text": "\\tau^\\alpha"
},
{
"math_id": 22,
"text": "U=\\frac{\\partial \\Omega}{\\partial \\beta}=\\alpha PV"
},
{
"math_id": 23,
"text": "C_V=\\frac{\\partial U}{\\partial T}=k_{\\rm B}(\\alpha+1)\\,U\\beta"
},
{
"math_id": 24,
"text": "TS=U+PV-G\\,"
},
{
"math_id": 25,
"text": "TS=(\\alpha+1)+\\ln\\left(\\frac{\\tau^\\alpha}{\\zeta(\\alpha)}\\right)"
}
] | https://en.wikipedia.org/wiki?curid=1064839 |
10648468 | Transmissibility (vibration) | Transmissibility is the ratio of output to input.
It is defined as the ratio of the force transmitted to the force applied. The transmitted force is the force passed on to the foundation or to the body of a particular system, while the applied force is the external force that causes the vibration to be generated and transmitted in the first place.
Transmissibility: formula_0
formula_1 means amplification, and maximum amplification occurs when the forcing frequency (formula_2) and the natural frequency (formula_3) of the system coincide.
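As an illustration of this behaviour, the sketch below uses the textbook single-degree-of-freedom expression for transmissibility in terms of the frequency ratio "r" = formula_2/formula_3 and a damping ratio; that closed form is a standard result assumed here, not something stated in this article.

```python
# Sketch (assumed textbook SDOF formula, not given in the article):
#   T(r, zeta) = sqrt((1 + (2*zeta*r)**2) / ((1 - r**2)**2 + (2*zeta*r)**2))
from math import sqrt

def transmissibility(r, zeta):
    num = 1 + (2 * zeta * r) ** 2
    den = (1 - r ** 2) ** 2 + (2 * zeta * r) ** 2
    return sqrt(num / den)

for r in (0.5, 1.0, 2.0):
    print(r, round(transmissibility(r, zeta=0.1), 2))
# Near r = 1 (forcing frequency equal to natural frequency) T >> 1: amplification.
# For r > sqrt(2) the transmissibility falls below 1: isolation.
```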
There is no unit designation for transmissibility, although it may sometimes be referred to as the "Q factor".
The transmissibility is used in the calculation of passive isolation efficiency.
The lower the transmissibility, the better the damping or isolation system.
formula_4 is desirable,
formula_5 means the system acts as a rigid body,
formula_1 is undesirable | [
{
"math_id": 0,
"text": "T = \\frac{\\text{output}}{\\text{input}}"
},
{
"math_id": 1,
"text": "T>1"
},
{
"math_id": 2,
"text": "f_f"
},
{
"math_id": 3,
"text": "f_n"
},
{
"math_id": 4,
"text": "T<1"
},
{
"math_id": 5,
"text": "T=1"
}
] | https://en.wikipedia.org/wiki?curid=10648468 |
10648600 | Group 2 organometallic chemistry | Group 2 organometallic chemistry refers to the organic derivatives of any group 2 element. It is a subtheme of main group organometallic chemistry. By far the most common group 2 organometallic compounds are the magnesium-containing Grignard reagents, which are widely used in organic chemistry. Other organometallic group 2 compounds are typically of purely academic interest.
Characteristics.
As the group 2 elements (also referred to as the alkaline earth metals) contain two valence electrons, their chemistries have similarities to those of group 12 organometallic compounds. Both readily assume a +2 oxidation state, with higher and lower states being rare, and both are less electronegative than carbon. However, as the group 2 elements (with the exception of beryllium) have considerably lower electronegativities, the resulting C–M bonds are more highly polarized and ionic-like, if not entirely ionic for the heavier barium compounds. The lighter organoberyllium and organomagnesium compounds are often considered covalent, but with some ionic bond character owing to the attached carbon bearing a partial negative charge. This higher ionic character and bond polarization tends to produce high coordination numbers, and many compounds (particularly dialkyls) are polymeric in the solid or liquid state, with highly complex structures in solution, though in the gaseous state they are often monomeric.
Metallocene compounds with group 2 elements are rare, but some do exist. Bis(cyclopentadienyl)beryllium, or beryllocene (Cp2Be), with a molecular dipole moment of 2.2 D, is a so-called slipped η5/η1 sandwich. While magnesocene (Cp2Mg) is a regular metallocene, bis(pentamethylcyclopentadienyl)calcium, (Cp*)2Ca, is bent with an angle of 147°.
Synthesis.
Mixed alkyl/aryl-halide compounds, which contain a single C–M bond and a C–X bond, are typically prepared by oxidative addition. Magnesium-containing compounds of this configuration are known as Grignard reagents. Some calcium Grignards are also known, but they are more reactive, more sensitive to decomposition, and must be pre-activated prior to synthesis.
There are three key reaction pathways for dialkyl and diaryl group 2 metal compounds.
MX2 + 2 R-Y → MR2 + 2 Y-X
M'R2 + M → MR2 + M'
2 RMX → MR2 + MX2
Compounds.
Although organomagnesium compounds are widespread in the form of Grignard reagents, the other organo-group 2 compounds are almost exclusively of academic interest. Organoberyllium chemistry is limited due to the cost and toxicity of beryllium. Calcium is nontoxic and cheap, but organocalcium compounds are difficult to prepare; strontium and barium compounds are even more so. One use for these types of compounds is in chemical vapor deposition.
Organoberyllium.
Beryllium derivatives and reagents are often prepared by alkylation of beryllium chloride. Examples of known organoberyllium compounds are dineopentylberyllium, beryllocene (Cp2Be), "diallylberyllium" (by exchange reaction of diethyl beryllium with triallyl boron), bis(1,3-trimethylsilylallyl)beryllium and Be(mes)2. Ligands can also be aryls and alkynyls.
Organomagnesium.
The distinctive feature of the Grignard reagents is their formation from the organic halide and magnesium metal. Most other group II organic compounds are generated by salt metathesis, which limits their accessibility. The formation of the Grignard reagents has received intense scrutiny. It proceeds by a SET process. For less reactive organic halides, activated forms of magnesium have been produced in the form of Rieke magnesium. Examples of Grignard reagents are phenylmagnesium bromide and ethylmagnesium bromide. These simplified formulas are deceptive: Grignard reagents generally exist as dietherates, RMgX(ether)2. As such they obey the octet rule.
Grignard reagents participate in the Schlenk equilibrium. Exploiting this reaction is a way to generate dimethylmagnesium. Beyond Grignard reagents, another organomagnesium compound is magnesium anthracene. This orange solid is used as a source of highly active magnesium. Butadiene-magnesium serves as a source for the butadiene dianion. Ate complexes of magnesium are also well known, e.g LiMgBu3.
Organocalcium.
Dimethylcalcium is obtained by metathesis reaction of calcium bis(trimethylsilyl)amide and methyllithium in diethyl ether:
formula_0
A well known organocalcium compound is (Cp)calcium(I). Bis(allyl)calcium was described in 2009. It forms in a metathesis reaction of allylpotassium and calcium iodide as a stable non-pyrophoric off-white powder:
<chem>\overset{allylpotassium}{2KC3H5} + \overset{calcium\ iodide}{CaI2} ->[\ce{THF}][25^\circ \ce C] {(C3H5)2Ca} + 2KI</chem>
The bonding mode is η3. This compound is also reported to give access to an η1 polymeric (CaCH2CHCH2)"n" compound.
The compound [(thf)3Ca{μ-C6H3-1,3,5-Ph3}Ca(thf)3] also described in 2009 is an inverse sandwich compound with two calcium atoms at either side of an arene.
Olefins tethered to cyclopentadienyl ligands have been shown to coordinate to calcium(II), strontium(II), and barium(II):
Organocalcium compounds have been investigated as catalysts.
Organostrontium.
Organostrontium compounds have been reported as intermediates in Barbier-type reactions.
Organobarium.
Organobarium compounds of the type (allyl)BaCl can be prepared by reaction of activated barium (Rieke method reduction of barium iodide with lithium biphenylide) with allyl halides. These allylbarium compounds react with carbonyl compounds. Such reagents are more alpha-selective and more stereoselective than the related Grignards or organocalcium compounds. The metallocene (Cp*)2Ba has also been reported.
Organoradium.
The only known organoradium compound is the gas-phase acetylide.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{Ca[N\\{Si(CH_3 )_3\\}_2]_2 + 2\\ LiCH_3 \\longrightarrow Ca(CH_3)_2 + 2\\ Li[N\\{Si(CH_3)_3\\}_2]}"
}
] | https://en.wikipedia.org/wiki?curid=10648600 |
10649582 | Leader election | In distributed computing, leader election is the process of designating a single process as the organizer of some task distributed among several computers (nodes). Before the task has begun, all network nodes are either unaware which node will serve as the "leader" (or coordinator) of the task, or unable to communicate with the current coordinator. After a leader election algorithm has been run, however, each node throughout the network recognizes a particular, unique node as the task leader.
The network nodes communicate among themselves in order to decide which of them will get into the "leader" state. For that, they need some method in order to break the symmetry among them. For example, if each node has unique and comparable identities, then the nodes can compare their identities, and decide that the node with the highest identity is the leader.
The definition of this problem is often attributed to LeLann, who formalized it as a method to create a new token in a token ring network in which the token has been lost.
Leader election algorithms are designed to be economical in terms of total bytes transmitted, and time. The algorithm suggested by Gallager, Humblet, and Spira for general undirected graphs has had a strong impact on the design of distributed algorithms in general, and won the Dijkstra Prize for an influential paper in distributed computing.
Many other algorithms have been suggested for different kinds of network graphs, such as undirected rings, unidirectional rings, complete graphs, grids, directed Euler graphs, and others. A general method that decouples the issue of the graph family from the design of the leader election algorithm was suggested by Korach, Kutten, and Moran.
Definition.
The problem of leader election is for each processor eventually to decide whether it is a leader or not, subject to the constraint that exactly one processor decides that it is the leader. An algorithm solves the leader election problem if:
A valid leader election algorithm must meet the following conditions:
An algorithm for leader election may vary in the following aspects:
Algorithms.
Leader election in rings.
A ring network is a connected-graph topology in which each node is connected to exactly two other nodes; i.e., for a graph with n nodes, there are exactly n edges connecting the nodes. A ring can be unidirectional, which means processors only communicate in one direction (a node can only send messages to the left or only send messages to the right), or bidirectional, meaning processors may transmit and receive messages in both directions (a node can send messages to both the left and the right).
Anonymous rings.
A ring is said to be anonymous if every processor is identical. More formally, the system has the same state machine for every processor. There is no deterministic algorithm to elect a leader in anonymous rings, even when the size of the network is known to the processes. This is due to the fact that there is no possibility of breaking symmetry in an anonymous ring if all processes run at the same speed. The state of processors after some steps only depends on the initial state of neighbouring nodes. So, because their states are identical and execute the same procedures, in every round the same messages are sent by each processor. Therefore, each processor state also changes identically and as a result if one processor is elected as a leader, so are all the others.
For simplicity, here is a proof in anonymous synchronous rings. It is a proof by contradiction. Consider an anonymous ring R with size n>1. Assume there exists an algorithm "A" to solve leader election in this anonymous ring R.
"Lemma: after round formula_0 of the admissible execution of A in R, all the processes have the same states."
Proof. Proof by induction on formula_0.
Base case: formula_1: all the processes are in the initial state, so all the processes are identical.
Induction hypothesis: assume the lemma is true for formula_2 rounds.
Inductive step: in round formula_0, every process sends the same message formula_3 to the right and the same message formula_4 to the left. Since all the processes are in the same state after round formula_2, in round formula_0 every process will receive the message formula_3 from the left edge and the message formula_4 from the right edge. Since all processes receive the same messages in round formula_0, they are in the same state after round formula_0.
The above lemma contradicts the fact that after some finite number of rounds in an execution of A, one process entered the elected state and other processes entered the non-elected state.
Randomized (probabilistic) leader election.
A common approach to solve the problem of leader election in anonymous rings is the use of probabilistic algorithms. In such approaches, processors generally assume some identities based on a probabilistic function and communicate them to the rest of the network. At the end, through the application of an algorithm, a leader is selected (with high probability).
Asynchronous ring.
Since there is no algorithm for anonymous rings (as proved above), asynchronous rings here are taken to be asynchronous non-anonymous rings. In non-anonymous rings, each process has a unique formula_5, and the processes do not know the size of the ring. Leader election in asynchronous rings can be solved by algorithms using either formula_6 messages or formula_7 messages.
In the formula_6 algorithm, every process sends a message with its formula_5 to the left edge and then waits for a message from the right edge. If the formula_5 in the message is greater than its own formula_5, it forwards the message to the left edge; otherwise it ignores the message and does nothing. If the formula_5 in the message is equal to its own formula_5, the process sends a message to the left announcing that it has been elected. The other processes forward the announcement to the left and mark themselves as non-elected. It is clear that the upper bound is formula_6 messages for this algorithm.
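A minimal round-based simulation of this formula_6-message algorithm is sketched below in Python (the final announcement circulation is omitted for brevity, and message passing is modelled synchronously, which is only a convenience of the simulation):

```python
# Sketch of the O(n^2) unidirectional ring election described above
# (Chang–Roberts / LCR style). Process i sends "to the left", modelled as i+1 mod n.
def ring_election(ids):
    n = len(ids)
    in_transit = {(i + 1) % n: [ids[i]] for i in range(n)}  # initial id messages
    messages, leader = n, None
    while leader is None:
        next_transit = {i: [] for i in range(n)}
        for i, incoming in in_transit.items():
            for msg in incoming:
                if msg == ids[i]:
                    leader = ids[i]                         # own id came back: elected
                elif msg > ids[i]:
                    next_transit[(i + 1) % n].append(msg)   # forward larger ids
                    messages += 1
                # smaller ids are ignored
        in_transit = next_transit
    return leader, messages

print(ring_election([3, 7, 2, 9, 4]))   # elects 9, the largest id
```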
The formula_7 algorithm runs in phases. In the formula_0th phase, a process determines whether it is the winner among its formula_8 neighbours on the left side and its formula_8 neighbours on the right side; if it is a winner, it can go to the next phase. In phase formula_9, each process formula_10 determines whether it is a winner by sending a message with its formula_5 to its left and right neighbours (the neighbours do not forward the message). A neighbour replies with an formula_11 only if the formula_5 in the message is larger than its own formula_5; otherwise it replies with an formula_12. If formula_10 receives two formula_11s, one from the left and one from the right, then formula_10 is the winner in phase formula_9. In phase formula_0, the winners of phase formula_2 send a message with their formula_5 to their formula_8 left and formula_8 right neighbours. If a neighbour on the path receives a message whose formula_5 is larger than its own formula_5, it forwards the message to the next neighbour; otherwise it replies with an formula_12. If the formula_8th neighbour receives a formula_5 larger than its own formula_5, it sends back an formula_11; otherwise it replies with an formula_12. If a process receives two formula_11s, it is a winner in phase formula_0. In the last phase, the final winner receives its own formula_5 in a message, terminates, and sends a termination message to the other processes. In the worst case there are at most formula_13 winners in each phase, where formula_0 is the phase number, and there are formula_14 phases in total. Each winner sends on the order of formula_8 messages in each phase, so the message complexity is formula_7.
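The phase structure of this formula_7 algorithm can be summarised by the following high-level Python sketch, which checks the phase winners directly instead of simulating the individual probe and reply messages (so it illustrates the logic, not the message complexity):

```python
# High-level sketch of the phased (Hirschberg–Sinclair style) election: in phase k
# a candidate survives only if its id beats every id within distance 2^k on both sides.
def phased_ring_election(ids):
    n = len(ids)
    candidates = set(range(n))
    k = 0
    while len(candidates) > 1:
        radius = 2 ** k
        survivors = set()
        for i in candidates:
            window = [ids[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
            if all(ids[i] > x for x in window):
                survivors.add(i)          # winner of phase k
        candidates, k = survivors, k + 1
    return ids[candidates.pop()]

print(phased_ring_election([3, 7, 2, 9, 4]))   # 9, the maximum id
```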
Synchronous ring.
Attiya and Welch, in their "Distributed Computing" book, describe a non-uniform algorithm using formula_15 messages in a synchronous ring with known ring size formula_16. The algorithm operates in phases; each phase has formula_16 rounds, and each round is one time unit. In phase formula_9, if there is a process with formula_17, then that process sends a termination message to the other processes (sending the termination messages costs formula_16 rounds); otherwise the algorithm moves to the next phase. In general, the algorithm checks whether the current phase number equals some process formula_5 and, if so, performs the same steps as in phase formula_9. At the end of the execution, the minimal formula_5 is elected as the leader. The algorithm uses exactly formula_16 messages and formula_18 rounds.
Itai and Rodeh introduced an algorithm for a unidirectional ring with synchronized processes. They assume the size of the ring (the number of nodes) is known to the processes. For a ring of size n, a ≤ n processors are active. Each active processor decides with probability 1/a whether to become a candidate. At the end of each phase, each processor calculates the number of candidates c, and if it is equal to 1, that candidate becomes the leader.
To determine the value of c, each candidate sends a token (pebble) at the start of the phase, which is passed around the ring and returns to its sender after exactly n time units. Every processor determines c by counting the number of pebbles that passed through it. This algorithm achieves leader election with an expected message complexity of O(n log n). A similar approach also employs a time-out mechanism to detect deadlocks in the system. There are also algorithms for rings of special sizes, such as prime size and odd size.
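The phase structure of this probabilistic scheme can be sketched as follows (the token/pebble counting is abstracted away; only the candidate-selection logic is modelled):

```python
# High-level sketch of the probabilistic (Itai–Rodeh style) election: each active
# processor becomes a candidate with probability 1/a; if exactly one candidate
# appears it is the leader, otherwise the candidates repeat the phase.
import random

def probabilistic_ring_election(n, rng=random.Random(0)):
    active, phases = n, 0
    while True:
        phases += 1
        c = sum(1 for _ in range(active) if rng.random() < 1.0 / active)
        if c == 1:
            return phases                 # a unique leader has been elected
        if c > 1:
            active = c                    # only the candidates stay active
        # c == 0: repeat the phase with the same active set

print(probabilistic_ring_election(16))    # number of phases until a unique leader
```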
Uniform algorithm.
In typical approaches to leader election, the size of the ring is assumed to be known to the processes. In the case of anonymous rings, without using an external entity, it is not possible to elect a leader. Even assuming such an algorithm exists, the leader could not estimate the size of the ring; i.e., in any anonymous ring, there is a positive probability that an algorithm computes a wrong ring size. To overcome this problem, Fisher and Jiang used a so-called leader oracle Ω? that each processor can ask whether there is a unique leader. They show that from some point onward, it is guaranteed to return the same answer to all processes.
Rings with unique IDs.
In one of the early works, Chang and Roberts proposed a uniform algorithm in which the processor with the highest ID is selected as the leader. Each processor sends its ID in a clockwise direction. A process receiving a message compares the received ID with its own; if the received ID is bigger, the process passes the message on, otherwise it discards it. They show that this algorithm uses at most formula_6 messages, and formula_7 in the average case.
Hirschberg and Sinclair improved this algorithm to formula_7 message complexity by introducing a two-directional message-passing scheme that allows the processors to send messages in both directions.
Leader election in a mesh.
The mesh is another popular form of network topology, especially in parallel systems, redundant memory systems and interconnection networks.
In a mesh structure, nodes are either corner (only two neighbours), border (only three neighbours) or interior (with four neighbours). The number of edges in a mesh of size a x b is m=2ab-a-b.
Unoriented mesh.
A typical algorithm to solve the leader election in an unoriented mesh is to only elect one of the four corner nodes as the leader. Since the corner nodes might not be aware of the state of other processes, the algorithm should first wake up the corner nodes. A leader can be elected as follows.
The message complexity is at most formula_21, and if the mesh is square-shaped, formula_22.
Oriented mesh.
An oriented mesh is a special case where port numbers are compass labels, i.e. north, south, east and west. Leader election in an oriented mesh is trivial: we only need to nominate a corner, e.g. the node whose labels are "north" and "east", and make sure that node knows it is the leader.
Torus.
A special case of mesh architecture is a torus which is a mesh with "wrap-around". In this structure, every node has exactly 4 connecting edges.
One approach to elect a leader in such a structure is known as electoral stages. Similar to procedures in ring structures, this method eliminates potential candidates in each stage until eventually one candidate node is left. This node becomes the leader and then notifies all other processes of termination. This approach can be used to achieve a complexity of O(n). There are also more practical approaches for dealing with the presence of faulty links in the network.
Election in hypercubes.
A hypercube formula_23 is a network consisting of formula_24 nodes, each with degree formula_0, and formula_7 edges in total.
A similar electoral-stages approach as before can be used to solve the problem of leader election. In each stage two nodes (called duelists) compete, and the winner is promoted to the next stage. This means that in each stage only half of the duelists enter the next stage. This procedure continues until only one duelist is left, which becomes the leader. Once selected, it notifies all other processes. This algorithm requires formula_15 messages. In the case of unoriented hypercubes, a similar approach can be used, but with a higher message complexity of formula_25.
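The duelist idea can be summarised by the following high-level sketch; pairing the candidates of neighbouring subcubes across one dimension per stage and letting the larger id win is an illustrative choice made here, not a detail taken from the cited algorithm.

```python
# Rough sketch of the electoral stages on a hypercube with n = 2^k nodes: in each
# stage the candidates of two neighbouring subcubes duel and the winner advances.
def hypercube_election(ids):
    n = len(ids)                                   # assumed to be a power of two
    winners = {node: ids[node] for node in range(n)}
    dim = 1
    while dim < n:
        merged = {}
        for base in range(0, n, 2 * dim):
            merged[base] = max(winners[base], winners[base + dim])   # the duel
        winners = merged
        dim *= 2
    return winners[0]

print(hypercube_election([5, 12, 7, 3, 9, 1, 14, 8]))   # 14, after log2(8) stages
```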
Election in complete networks.
Complete networks are structures in which all processes are connected to one another, i.e., the degree of each node is n-1, n being the size of the network. An optimal solution with O(n) message and space complexity is known. In this algorithm, processes have the following states:
Although a node does not know the total set of nodes in the system, every node in this arrangement is required to know the identifier of its single successor, which is called its neighbor, and every node is known by another one.
All processors initially start in a passive state until they are woken up. Once the nodes are awake, they are candidates to become the leader. Based on a priority scheme, candidate nodes collaborate in the virtual ring. At some point, candidates become aware of the identity of candidates that precede them in the ring. The higher priority candidates ask the lower ones about their predecessors. The candidates with lower priority become dummies after replying to the candidates with higher priority. Based on this scheme, the highest priority candidate eventually knows that all nodes in the system are dummies except itself, at which point it knows it is the leader.
The above algorithm is not correct — it needs further improvement.
Universal leader election techniques.
As the name implies, these algorithms are designed to be used in every form of process networks without any prior knowledge of the topology of a network or its properties, such as its size.
Shout.
Shout (protocol) builds a spanning tree on a generic graph and elects its root as leader. The algorithm has a total cost linear in the number of edges.
Mega-Merger.
This technique is in essence similar to finding a minimum spanning tree (MST), in which the root of the tree becomes the leader. The basic idea of this method is that individual nodes merge with each other to form bigger structures. The result of this algorithm is a tree (a graph with no cycles) whose root is the leader of the entire system. The cost of the mega-merger method is formula_26, where m is the number of edges and n is the number of nodes.
Yo-yo.
Yo-yo (algorithm) is a minimum finding algorithm consisting of two parts: a preprocessing phase and a series of iterations. In the first phase or "setup", each node exchanges its id with all its neighbours and based on the value it orients its incident edges. For instance, if node x has a smaller id than y, x orients towards y. If a node has a smaller id than all its neighbours it becomes a source. In contrast, a node with all inward edges (i.e., with id larger than all of its neighbours) is a sink. All other nodes are internal nodes.
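The setup phase just described amounts to orienting each edge from the smaller id to the larger id and classifying the nodes, as in the following sketch (the example graph is an arbitrary four-node ring chosen for illustration):

```python
# Sketch of the Yo-yo "setup" phase: orient edges from smaller to larger id,
# then classify nodes as sources (all edges outgoing), sinks (all incoming),
# or internal nodes (both).
def yoyo_setup(ids, edges):
    oriented = [(u, v) if ids[u] < ids[v] else (v, u) for u, v in edges]
    outgoing = {u for u, _ in oriented}
    incoming = {v for _, v in oriented}
    roles = {}
    for node in ids:
        has_out, has_in = node in outgoing, node in incoming
        roles[node] = "source" if not has_in else "sink" if not has_out else "internal"
    return oriented, roles

ids = {"a": 1, "b": 2, "c": 4, "d": 3}
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
print(yoyo_setup(ids, edges)[1])   # a: source, c: sink, b and d: internal
```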
Once all the edges are oriented, the "iteration" phase starts. Each iteration is an electoral stage in which some candidates are removed. Each iteration has two phases: "YO-" and "–YO". In the "YO-" phase, sources start a process that propagates to each sink the smallest of the ids of the sources connected to that sink.
Yo-
-yo
After the final stage, any source that receives a NO is no longer a source and becomes a sink.
An additional stage, "pruning", also is introduced to remove the nodes that are useless, i.e. their existence has no impact on the next iterations.
This method has a total cost of O(m log n) messages. Its real message complexity including pruning is an open research problem and is unknown.
Applications.
Radio networks.
In radio network protocols, leader election is often used as a first step to approach more advanced communication primitives, such as message gathering or broadcasts. The very nature of wireless networks induces collisions when adjacent nodes transmit at the same time; electing a leader makes it possible to coordinate this process better. While the diameter "D" of a network is a natural lower bound for the time needed to elect a leader, upper and lower bounds for the leader election problem depend on the specific radio model studied.
Models and runtime.
In radio networks, the "n" nodes may in every round choose to either transmit or receive a message. If no "collision detection" is available, then a node cannot distinguish between silence and receiving more than one message at a time. Should "collision detection" be available, a node may detect more than one incoming message at the same time, even though the messages themselves cannot be decoded in that case. In the "beeping model", nodes can only distinguish between silence and at least one message via carrier sensing.
Known runtimes for single-hop networks range from a constant (expected with collision detection) to "O(n log n)" rounds (deterministic and no collision detection). In multi-hop networks, known runtimes differ from roughly "O((D+ log n)(log2 log n))" rounds (with high probability in the beeping model), "O(D log n)" (deterministic in the beeping model), "O(n)" (deterministic with collision detection) to "O(n log3/2 n (log log n)0.5)" rounds (deterministic and no collision detection).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "k=0"
},
{
"math_id": 2,
"text": "k-1"
},
{
"math_id": 3,
"text": "m_r"
},
{
"math_id": 4,
"text": "m_l"
},
{
"math_id": 5,
"text": "id"
},
{
"math_id": 6,
"text": "O(n^2)"
},
{
"math_id": 7,
"text": "O(n \\log n)"
},
{
"math_id": 8,
"text": "2^k"
},
{
"math_id": 9,
"text": "0"
},
{
"math_id": 10,
"text": "P"
},
{
"math_id": 11,
"text": "ACK"
},
{
"math_id": 12,
"text": "ACK_{fault}"
},
{
"math_id": 13,
"text": "\\frac{n}{2^k+1}"
},
{
"math_id": 14,
"text": "\\lceil \\log (n-1) \\rceil"
},
{
"math_id": 15,
"text": "O(n)"
},
{
"math_id": 16,
"text": "n"
},
{
"math_id": 17,
"text": "id=0"
},
{
"math_id": 18,
"text": "n(minimum\\_id +1)"
},
{
"math_id": 19,
"text": "3n+k"
},
{
"math_id": 20,
"text": "6(a+b)-16"
},
{
"math_id": 21,
"text": "6(a + b) - 16"
},
{
"math_id": 22,
"text": "O(\\sqrt{n})"
},
{
"math_id": 23,
"text": "H_k"
},
{
"math_id": 24,
"text": "n=2^k"
},
{
"math_id": 25,
"text": "O(n \\log \\log n)"
},
{
"math_id": 26,
"text": "O(m+n \\log n)"
}
] | https://en.wikipedia.org/wiki?curid=10649582 |
10650443 | Preimage theorem | On the preimage of points in a manifold under the action of a smooth map
In mathematics, particularly in the field of differential topology, the preimage theorem is a variation of the implicit function theorem concerning the preimage of particular points in a manifold under the action of a smooth map.
Statement of Theorem.
"Definition." Let formula_0 be a smooth map between manifolds. We say that a point formula_1 is a "regular value of" formula_2 if for all formula_3 the map formula_4 is surjective. Here, formula_5 and formula_6 are the tangent spaces of formula_7 and formula_8 at the points formula_9 and formula_10
"Theorem." Let formula_11 be a smooth map, and let formula_1 be a regular value of formula_12 Then formula_13 is a submanifold of formula_14 If formula_15 then the codimension of formula_13 is equal to the dimension of formula_16 Also, the tangent space of formula_13 at formula_9 is equal to formula_17
There is also a complex version of this theorem:
"Theorem." Let formula_18 and formula_19 be two complex manifolds of complex dimensions formula_20 Let formula_21 be a holomorphic map and let formula_22 be such that formula_23 for all formula_24 Then formula_25 is a complex submanifold of formula_7 of complex dimension formula_26
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f : X \\to Y"
},
{
"math_id": 1,
"text": "y \\in Y"
},
{
"math_id": 2,
"text": "f"
},
{
"math_id": 3,
"text": "x \\in f^{-1}(y)"
},
{
"math_id": 4,
"text": "d f_x: T_x X \\to T_y Y"
},
{
"math_id": 5,
"text": "T_x X"
},
{
"math_id": 6,
"text": "T_y Y"
},
{
"math_id": 7,
"text": "X"
},
{
"math_id": 8,
"text": "Y"
},
{
"math_id": 9,
"text": "x"
},
{
"math_id": 10,
"text": "y."
},
{
"math_id": 11,
"text": "f: X \\to Y"
},
{
"math_id": 12,
"text": "f."
},
{
"math_id": 13,
"text": "f^{-1}(y)"
},
{
"math_id": 14,
"text": "X."
},
{
"math_id": 15,
"text": "y \\in \\text{im}(f),"
},
{
"math_id": 16,
"text": "Y."
},
{
"math_id": 17,
"text": " \\ker(df_x)."
},
{
"math_id": 18,
"text": "X^n"
},
{
"math_id": 19,
"text": "Y^m"
},
{
"math_id": 20,
"text": "n > m."
},
{
"math_id": 21,
"text": "g : X \\to Y"
},
{
"math_id": 22,
"text": "y \\in \\text{im}(g)"
},
{
"math_id": 23,
"text": "\\text{rank}(dg_x) = m"
},
{
"math_id": 24,
"text": "x \\in g^{-1}(y)."
},
{
"math_id": 25,
"text": "g^{-1}(y)"
},
{
"math_id": 26,
"text": "n - m."
}
] | https://en.wikipedia.org/wiki?curid=10650443 |
1065627 | Ground and neutral | In mains electricity, part of a circuit connected to ground or earth
In electrical engineering, ground and neutral (earth and neutral) are circuit conductors used in alternating current (AC) electrical systems. The neutral conductor returns current to the supply. To limit the effects of leakage current from higher-voltage systems, the neutral conductor is often connected to earth ground at the point of supply. A ground conductor is not intended to carry current for normal operation of the circuit, but instead connects exposed metallic components (such as equipment enclosures or conduits enclosing wiring) to earth ground. A ground conductor only carries significant current if there is a circuit fault that would otherwise energize exposed conductive parts and present a shock hazard. Circuit protection devices may detect a fault to a grounded metal enclosure and automatically de-energize the circuit, or may provide a warning of a ground fault.
Under certain conditions, a conductor used to connect to a system neutral is also used for grounding (earthing) of equipment and structures. Current carried on a grounding conductor can result in objectionable or dangerous voltages appearing on equipment enclosures, so the installation of grounding conductors and neutral conductors is carefully defined in electrical regulations. Where a neutral conductor is used also to connect equipment enclosures to earth, care must be taken that the neutral conductor never rises to a high voltage with respect to local ground.
Definitions.
Ground or earth in a mains (AC power) electrical wiring system is a conductor that provides a low-impedance path to the earth to prevent hazardous voltages (high voltage spikes) from appearing on equipment. The terms "ground" and "earth" are used synonymously in this section; "ground" is more common in North American English, and "earth" is more common in British English. Under normal conditions, a grounding conductor does not carry current. Grounding is also an integral part of home wiring because it allows protective devices such as circuit breakers and GFCIs to trip more quickly during a fault, which is safer. Adding new grounds requires a qualified electrician with knowledge particular to a power distribution region.
Neutral is a circuit conductor that normally completes the circuit back to the source. The NEC states that the neutral and ground wires should be connected at the neutral point of the transformer or generator, or otherwise at some "system neutral point", but not anywhere else. That applies to simple single-panel installations; for multiple panels the situation is more complex.
In a polyphase (usually three-phase) AC system, the neutral conductor is intended to have similar voltages to each of the other circuit conductors, but may carry very little current if the phases are balanced.
All neutral wires of the same earthed (grounded) electrical system should have the same electrical potential, because they are all connected through the system ground. Neutral conductors are usually insulated for the same voltage as the line conductors, with interesting exceptions.
Circuitry.
Neutral wires are usually connected at a neutral bus within panelboards or switchboards, and are "bonded" to earth ground at either the electrical service entrance, or at transformers within the system. For electrical installations with split-phase (three-wire single-phase) service, the neutral point of the system is at the center-tap on the secondary side of the service transformer. For larger electrical installations, such as those with polyphase service, the neutral point is usually at the common connection on the secondary side of delta/wye connected transformers. Other arrangements of polyphase transformers may result in no neutral point, and no neutral conductors.
Grounding systems.
The IEC standard (IEC 60364) codifies methods of installing neutral and ground conductors in a building, where these earthing systems are designated with letter symbols. The letter symbols are common in countries using IEC standards, but North American practices rarely refer to the IEC symbols. The differences are that the conductors may be separate over their entire run from equipment to earth ground, or may be combined all or part of their length. Different systems are used to minimize the voltage difference between neutral and local earth ground. Current flowing in a grounding conductor will produce a voltage drop along the conductor, and grounding systems seek to ensure this voltage does not reach unsafe levels.
In the TN-S system, separate neutral and protective earth conductors are installed between the equipment and the source of supply (generator or electric utility transformer). Normal circuit currents flow only in the neutral, and the protective earth conductor bonds all equipment cases to earth to intercept any leakage current due to insulation failure. The neutral conductor is connected to earth at the building point of supply, but no common path to ground exists for circuit current and the protective conductor.
In the TN-C system, a common conductor provides both the neutral and protective grounding. The neutral conductor is connected to earth ground at the point of supply, and equipment cases are connected to the neutral. The danger exists that a broken neutral connection will allow all the equipment cases to rise to a dangerous voltage if any leakage or insulation fault exists in any equipment. This can be mitigated with special cables but the cost is then higher.
In the TN-C-S system, each piece of electrical equipment has both a protective ground connection to its case, and a neutral connection. These are all brought back to some common point in the building system, and a common connection is then made from that point back to the source of supply and to the earth.
In a TT system, no lengthy common protective ground conductor is used, instead each article of electrical equipment (or building distribution system) has its own connection to earth ground.
Indian CEAR, Rule 41, makes the following provisions:
Combining neutral with ground.
Stray voltages created in grounding (earthing) conductors by currents flowing in the supply utility neutral conductors can be troublesome. For example, special measures may be required in barns used for milking dairy cattle. Very small voltages, not usually perceptible to humans, may cause low milk yield, or even mastitis (inflammation of the udder).
So-called "tingle voltage filters" may be required in the electrical distribution system for a milking parlour.
Connecting the neutral to the equipment case provides some protection against faults, but may produce a dangerous voltage on the case if the neutral connection is broken.
Combined neutral and ground conductors are commonly used in electricity supply companies' wiring and occasionally for fixed wiring in buildings and for some specialist applications where there is little alternative, such as railways and trams. Since normal circuit currents in the neutral conductor can lead to objectionable or dangerous differences between local earth potential and the neutral, and to protect against neutral breakages, special precautions such as frequent rodding down to earth (multiple ground rod connections), use of cables where the combined neutral and earth completely surrounds the phase conductor(s), and thicker than normal equipotential bonding must be considered to ensure the system is safe.
Fixed appliances on three-wire circuits.
In the United States, the cases of some kitchen stoves (ranges, ovens), cooktops, clothes dryers and other specifically "listed" appliances were grounded through their neutral wires, a measure adopted to conserve copper during World War II. This practice was removed from the NEC in the 1996 edition, but existing installations (called "old work") may still allow the cases of such "listed" appliances to be connected to the neutral conductor for grounding. (Canada did not adopt this practice, and instead during this time and into the present uses separate neutral and ground wires.)
This practice arose from the three-wire system used to supply both 120 volt and 240 volt loads. Because these "listed" appliances often have components that use either 120, or both 120 and 240 volts, there is often some current on the neutral wire. This differs from the protective grounding wire, which only carries current under fault conditions. Using the neutral conductor for grounding the equipment enclosure was considered safe since the devices were permanently wired to the supply and so the neutral was unlikely to be broken without also breaking both supply conductors. Also, the unbalanced current due to lamps and small motors in the appliances was small compared to the rating of the conductors and therefore unlikely to cause a large voltage drop in the neutral conductor.
Portable appliances.
In North American and European practice, small portable equipment connected by a cord set is permitted under certain conditions to have merely two conductors in the attachment plug. A polarized plug can be used to maintain the identity of the neutral conductor into the appliance but neutral is never used as a chassis/case ground. The small cords to lamps, etc., often have one or more molded ridges or embedded strings to identify the neutral conductor, or may be identified by colour. Portable appliances never use the neutral conductor for case grounding, and often feature "double-insulated" construction.
In places where the design of the plug and socket cannot ensure that a system neutral conductor is connected to particular terminals of the device ("unpolarized" plugs), portable appliances must be designed on the assumption that either pole of each circuit may reach full main voltage with respect to the ground.
Technical equipment.
In North American practice, equipment connected by a cord set must have three wires if supplied exclusively by 240 volts, or must have four wires (including neutral and ground), if supplied by 120/240 volts.
There are special provisions in the NEC for so-called technical equipment, mainly professional grade audio and video equipment supplied by so-called "balanced" 120 volt circuits. The center tap of a transformer is connected to ground, and the equipment is supplied by two line wires each 60 volts to ground (and 120 volts between line conductors). The center tap is not distributed to the equipment and no neutral conductor is used. These cases generally use a grounding conductor which is separated from the safety grounding conductor specifically for the purposes of noise and "hum" reduction.
Another specialized distribution system was formerly specified in patient care areas of hospitals. An isolated power system was furnished, from a special isolation transformer, with the intention of minimizing any leakage current that could pass through equipment directly connected to a patient (for example, an electrocardiograph for monitoring the heart). The neutral of the circuit was not connected to ground. The leakage current was due to the distributed capacitance of the wiring and capacitance of the supply transformer. Such distribution systems were monitored by permanently installed instruments to give an alarm when high leakage current was detected.
Shared neutral.
A shared neutral is a connection in which a plurality of circuits use the same neutral connection. This is also known as a common neutral, and the circuits and neutral together are sometimes referred to as an Edison circuit.
Three-phase circuits.
In a three-phase circuit, a neutral is shared between all three phases. Commonly the system neutral is connected to the star point on the feeding transformer. This is the reason that the secondary side of most three-phase distribution transformers is wye- or star-wound. Three-phase transformers and their associated neutrals are usually found in industrial distribution environments.
A system could be made entirely ungrounded. In this case a fault between one phase and ground would not cause any significant current. Commonly the neutral is grounded (earthed) through a bond between the neutral bar and the earth bar. It is common on larger systems to monitor any current flowing through the neutral-to-earth link and use this as the basis for neutral fault protection.
The connection between neutral and earth allows any phase-to-earth fault to develop enough current flow to "trip" the circuit overcurrent protection device. In some jurisdictions, calculations are required to ensure the fault loop impedance is low enough so that fault current will trip the protection (In Australia, this is referred to in AS3000:2007 Fault loop impedance calculation). This may limit the length of a branch circuit.
In the case of two phases sharing one neutral with the third phase disconnected, the worst-case current draw occurs when one side has zero load and the other has full load, or when both sides have full load. The latter case results in formula_0, formula_1 or formula_2, where formula_3 is the magnitude of the current. In other words, the magnitude of the current in the neutral equals that of each of the other two wires.
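These phasor sums can be checked directly with complex arithmetic, as in the short Python sketch below (the 1 A magnitude is an arbitrary illustrative choice):

```python
# Quick numerical check of the phasor sums above using complex numbers.
import cmath

def phasor(magnitude, degrees):
    return cmath.rect(magnitude, cmath.pi * degrees / 180)

I_m = 1.0
for a, b in [(0, -120), (0, 120), (120, -120)]:
    total = phasor(I_m, a) + phasor(I_m, b)
    angle = cmath.phase(total) * 180 / cmath.pi
    print(f"{a}° + {b}°: magnitude {abs(total):.3f}, angle {angle:.0f}°")
# Each sum has magnitude I_m, i.e. the neutral carries the same current magnitude
# as each of the two fully loaded phase conductors.
```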
In a three-phase linear circuit with three identical resistive or reactive loads, the neutral carries no current. The neutral carries current if the loads on each phase are not identical. In some jurisdictions, the neutral is allowed to be reduced in size if no unbalanced current flow is expected. If the neutral is smaller than the phase conductors, it can be overloaded if a large unbalanced load occurs.
The current drawn by non-linear loads, such as fluorescent and HID lighting and electronic equipment containing switching power supplies, often contains harmonics. Triplen harmonic currents (odd multiples of the third harmonic) are additive, resulting in more current in the shared neutral conductor than in any of the phase conductors. In the absolute worst case, the current in the shared neutral conductor can be triple that in each phase conductor. Some jurisdictions prohibit the use of shared neutral conductors when feeding single-phase loads from a three-phase source; others require that the neutral conductor be substantially larger than the phase conductors. It is good practice to use four-pole circuit breakers (as opposed to the standard three-pole) where the fourth pole carries the neutral, which is hence protected against overcurrent on the neutral conductor.
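The additive behaviour of triplen harmonics can be illustrated with a short numerical sketch. The waveform below is hypothetical (a 10 A fundamental with a 3 A third harmonic per phase), chosen only to show that balanced fundamentals cancel in the neutral while the third-harmonic components add.

```python
import numpy as np

t = np.linspace(0, 1 / 60, 2000, endpoint=False)  # one cycle at 60 Hz
w = 2 * np.pi * 60                                 # fundamental angular frequency

def phase_current(shift_deg, i1=10.0, i3=3.0):
    """Hypothetical phase current: a fundamental plus a third harmonic."""
    shift = np.radians(shift_deg)
    # Shifting the fundamental by 120 degrees shifts the third harmonic by 360 degrees,
    # so the third-harmonic components of all three phases are in phase with each other.
    return i1 * np.sin(w * t - shift) + i3 * np.sin(3 * (w * t - shift))

i_neutral = sum(phase_current(s) for s in (0, 120, 240))  # current in the shared neutral
print(round(float(np.max(np.abs(i_neutral))), 2))  # ~9.0 A = 3 x 3.0 A; the fundamentals cancel
```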
Split phase.
In split-phase wiring, for example a duplex receptacle in a North American kitchen, devices may be connected with a cable that has three conductors, in addition to ground. The three conductors are usually coloured red, black, and white. The white serves as a common neutral, while the red and black each feed, separately, the top and bottom hot sides of the receptacle. Typically such receptacles are supplied from two circuit breakers in which the handles of two poles are tied together for a common trip. If two large appliances are used at once, current passes through both and the neutral only carries the difference in current. The advantage is that only three wires are required to serve these loads, instead of four. If one kitchen appliance overloads the circuit, the other side of the duplex receptacle will be shut off as well. This is called a multiwire branch circuit. Common trip is required when the connected load uses more than one phase simultaneously. The common trip prevents overloading of the shared neutral if one device draws more than rated current.
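A minimal sketch of the current balance described above, with hypothetical appliance currents:

```python
# Split-phase multiwire branch circuit: the two hot conductors are 180 degrees out of
# phase, so the shared (white) neutral carries only the difference of the two currents.
i_red = 12.0    # hypothetical load current on the red (L1) side, in amperes
i_black = 9.0   # hypothetical load current on the black (L2) side, in amperes

i_neutral = abs(i_red - i_black)
print(i_neutral)  # 3.0 A -- much less than either hot conductor carries
```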
Grounding problems.
A ground connection that is missing or of inadequate capacity may not provide the protective functions as intended during a fault in the connected equipment. Extra connections between ground and circuit neutral may result in circulating current in the ground path, stray current introduced in the earth or in a structure, and stray voltage. Extra ground connections on a neutral conductor may bypass the protection provided by a ground-fault circuit interrupter. Signal circuits that rely on a ground connection will not function or will have erratic function if the ground connection is missing.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "I_\\mathrm{m} \\angle {0}^\\circ + I_\\mathrm{m} \\angle {-120}^\\circ = I_\\mathrm{m} \\angle {-60}^\\circ"
},
{
"math_id": 1,
"text": "I_\\mathrm{m} \\angle {0}^\\circ + I_\\mathrm{m} \\angle {120}^\\circ = I_\\mathrm{m} \\angle {60}^\\circ"
},
{
"math_id": 2,
"text": "I_\\mathrm{m} \\angle {120}^\\circ + I_\\mathrm{m} \\angle {-120}^\\circ = I_\\mathrm{m} \\angle {180}^\\circ"
},
{
"math_id": 3,
"text": "I_\\mathrm{m}"
}
] | https://en.wikipedia.org/wiki?curid=1065627 |
10656445 | Derivation of the Navier–Stokes equations | Equations of fluid dynamics
The derivation of the Navier–Stokes equations, as well as their application and formulation for different families of fluids, is an important exercise in fluid dynamics with applications in mechanical engineering, physics, chemistry, heat transfer, and electrical engineering. A proof explaining the properties and bounds of the equations, such as Navier–Stokes existence and smoothness, is one of the important unsolved problems in mathematics.
Basic assumptions.
The Navier–Stokes equations are based on the assumption that the fluid, at the scale of interest, is a continuum – a continuous substance rather than discrete particles. Another necessary assumption is that all the fields of interest including pressure, flow velocity, density, and temperature are at least weakly differentiable.
The equations are derived from the basic principles of continuity of mass, conservation of momentum, and conservation of energy. Sometimes it is necessary to consider a finite arbitrary volume, called a control volume, over which these principles can be applied. This finite volume is denoted by Ω and its bounding surface ∂Ω. The control volume can remain fixed in space or can move with the fluid.
The material derivative.
Changes in properties of a moving fluid can be measured in two different ways. One can measure a given property by either carrying out the measurement on a fixed point in space as particles of the fluid pass by, or by following a parcel of fluid along its streamline. The derivative of a field with respect to a fixed position in space is called the "Eulerian" derivative, while the derivative following a moving parcel is called the "advective" or "material" (or "Lagrangian") derivative.
The material derivative is defined as the "nonlinear operator":
formula_0
where u is the flow velocity. The first term on the right-hand side of the equation is the ordinary Eulerian derivative (the derivative on a fixed reference frame, representing changes at a point with respect to time) whereas the second term represents changes of a quantity with respect to position (see advection). This "special" derivative is in fact the ordinary derivative of a function of many variables along a path following the fluid motion; it may be derived through application of the chain rule in which all independent variables are checked for change along the path (which is to say, the total derivative).
For example, the measurement of changes in wind velocity in the atmosphere can be obtained with the help of an anemometer in a weather station or by observing the movement of a weather balloon. The anemometer in the first case is measuring the velocity of all the moving particles passing through a fixed point in space, whereas in the second case the instrument is measuring changes in velocity as it moves with the flow.
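As a concrete illustration, the operator can be applied symbolically. The sketch below uses SymPy with an arbitrary scalar field and a simple rotating velocity field chosen purely for demonstration.

```python
import sympy as sp

x, y, z, t = sp.symbols('x y z t')

# Hypothetical scalar field (e.g. a temperature) and flow velocity, for illustration only.
T = sp.exp(-t) * sp.sin(x) * sp.cos(y)
u = sp.Matrix([y, -x, 0])  # a simple rotating flow

grad_T = sp.Matrix([sp.diff(T, x), sp.diff(T, y), sp.diff(T, z)])

# Material derivative: the local (Eulerian) rate of change plus the advective term u . grad(T)
DT_Dt = sp.diff(T, t) + (u.T * grad_T)[0]
print(sp.simplify(DT_Dt))
```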
Continuity equations.
The Navier–Stokes equation is a special continuity equation. A continuity equation may be derived from the conservation principles of mass, momentum, and energy.
A continuity equation (or conservation law) is an integral relation stating that the rate of change of some integrated property φ defined over a control volume Ω must be equal to the rate at which it is lost or gained through the boundaries Γ of the volume plus the rate at which it is created or consumed by sources and sinks inside the volume. This is expressed by the following integral continuity equation:
formula_1
where u is the flow velocity of the fluid, n is the outward-pointing unit normal vector, and s represents the sources and sinks in the flow, taking the sinks as positive.
The divergence theorem may be applied to the surface integral, changing it into a volume integral:
formula_2
Applying the Reynolds transport theorem to the integral on the left and then combining all of the integrals:
formula_3
The integral must be zero for any control volume; this can only be true if the integrand itself is zero, so that:
formula_4
From this valuable relation (a very generic continuity equation), three important concepts may be concisely written: conservation of mass, conservation of momentum, and conservation of energy. Validity is retained if φ is a vector, in which case the vector-vector product in the second term will be a dyad.
Conservation of mass.
Mass may be considered also. When the intensive property φ is considered as the mass, by substitution into the general continuity equation, and taking "s" = 0 (no sources or sinks of mass):
formula_5
where ρ is the mass density (mass per unit volume), and u is the flow velocity. This equation is called the mass continuity equation, or simply "the" continuity equation. This equation generally accompanies the Navier–Stokes equation.
In the case of an incompressible fluid, "Dρ"/"Dt" = 0 (the density following the path of a fluid element is constant) and the equation reduces to:
formula_6
which is in fact a statement of the conservation of volume.
Conservation of momentum.
A general momentum equation is obtained when the conservation relation is applied to momentum. When the intensive property φ is considered as the mass flux (also "momentum density"), that is, the product of mass density and flow velocity "ρ"u, by substitution into the general continuity equation:
formula_7
where u ⊗ u is a dyad, a special case of tensor product, which results in a second rank tensor; the divergence of a second rank tensor is again a vector (a first-rank tensor).
Using the formula for the divergence of a dyad,
formula_8
we then have
formula_9
Note that the gradient of a vector is a special case of the covariant derivative; the operation results in a second-rank tensor, and except in Cartesian coordinates it is not simply an element-by-element gradient. Rearranging:
formula_10
The leftmost expression enclosed in parentheses is, by mass continuity (shown before), equal to zero. Noting that what remains on the left side of the equation is the material derivative of flow velocity:
formula_11
This appears to simply be an expression of Newton's second law (F = "m"a) in terms of body forces instead of point forces. Each term in any case of the Navier–Stokes equations is a body force. A shorter though less rigorous way to arrive at this result would be the application of the chain rule to acceleration:
formula_12
where u = ("u", "v", "w"). The reason why this is "less rigorous" is that we haven't shown that the choice of
formula_13
is correct; however it does make sense since with that choice of path the derivative is "following" a fluid "particle", and in order for Newton's second law to work, forces must be summed following a particle. For this reason the convective derivative is also known as the particle derivative.
Cauchy momentum equation.
The generic density of the momentum source s seen previously is made specific first by breaking it up into two new terms, one to describe internal stresses and one for external forces, such as gravity. By examining the forces acting on a small cube in a fluid, it may be shown that
formula_14
where σ is the Cauchy stress tensor, and f accounts for body forces present. This equation is called the Cauchy momentum equation and describes the non-relativistic momentum conservation of "any" continuum that conserves mass. σ is a rank two symmetric tensor given by its covariant components. In orthogonal coordinates in three dimensions it is represented as the 3 × 3 matrix:
formula_15
where the σ are normal stresses and τ shear stresses. This matrix is split up into two terms:
formula_16
where I is the 3 × 3 identity matrix and τ is the deviatoric stress tensor. Note that the mechanical pressure p is equal to the negative of the mean normal stress:
formula_17
The motivation for doing this is that pressure is typically a variable of interest, and also this simplifies application to specific fluid families later on since the rightmost tensor τ in the equation above must be zero for a fluid at rest. Note that τ is traceless. The Cauchy equation may now be written in another more explicit form:
formula_18
This equation is still incomplete. For completion, one must make hypotheses on the forms of τ and p; that is, one needs a constitutive law for the stress tensor (which can be obtained for specific fluid families) and a hypothesis on the pressure. Some of these hypotheses lead to the Euler equations (fluid dynamics), others lead to the Navier–Stokes equations. Additionally, if the flow is assumed compressible, an equation of state will be required, which will likely further require a conservation of energy formulation.
Application to different fluids.
The general form of the equations of motion is not "ready for use": the stress tensor is still unknown, so more information is needed; this information is normally some knowledge of the viscous behavior of the fluid. For different types of fluid flow this results in specific forms of the Navier–Stokes equations.
Newtonian fluid.
Compressible Newtonian fluid.
The formulation for Newtonian fluids stems from an observation made by Newton that, for most fluids,
formula_19
In order to apply this to the Navier–Stokes equations, three assumptions were made by Stokes:
* The stress tensor is a linear function of the strain rate tensor or equivalently the velocity gradient.
* The fluid is isotropic.
* For a fluid at rest, ∇ ⋅ τ must be zero (so that hydrostatic pressure results).
The above list states the classic argument that the shear strain rate tensor (the (symmetric) shear part of the velocity gradient) is a pure shear tensor and does not include any inflow/outflow part (any compression/expansion part). This means that its trace is zero, and this is achieved by subtracting ∇ ⋅ u in a symmetric way from the diagonal elements of the tensor. The compressional contribution to viscous stress is added as a separate diagonal tensor.
Applying these assumptions will lead to:
formula_20
or in tensor form
formula_21
That is, the deviatoric part of the deformation rate tensor is identified with the deviatoric part of the stress tensor, up to a factor of "μ".
"δij" is the Kronecker delta. μ and λ are proportionality constants associated with the assumption that stress depends on strain linearly; μ is called the first coefficient of viscosity or shear viscosity (usually just called "viscosity") and λ is the second coefficient of viscosity or volume viscosity (and it is related to bulk viscosity). The value of λ, which produces a viscous effect associated with volume change, is very difficult to determine, not even its sign is known with absolute certainty. Even in compressible flows, the term involving λ is often negligible; however it can occasionally be important even in nearly incompressible flows and is a matter of controversy. When taken nonzero, the most common approximation is "λ" ≈ −"μ".
A straightforward substitution of "τij" into the momentum conservation equation will yield the Navier–Stokes equations, describing a compressible Newtonian fluid:
formula_22
The body force has been decomposed into density and external acceleration, that is, f = "ρ"g. The associated mass continuity equation is:
formula_23
In addition to this equation, an equation of state and an equation for the conservation of energy are needed. The equation of state to use depends on context (often the ideal gas law); the conservation of energy will read:
formula_24
Here, h is the specific enthalpy, T is the temperature, and Φ is a function representing the dissipation of energy due to viscous effects:
formula_25
With a good equation of state and good functions for the dependence of parameters (such as viscosity) on the variables, this system of equations seems to properly model the dynamics of all known gases and most liquids.
Incompressible Newtonian fluid.
For the special (but very common) case of incompressible flow, the momentum equations simplify significantly. Using the following assumptions:
* The viscosity "μ" is constant.
* The second viscosity effect vanishes, "λ" = 0.
* The simplified mass continuity equation, ∇ ⋅ u = 0.
This gives the incompressible Navier–Stokes equations, describing an incompressible Newtonian fluid:
formula_26
then looking at the viscous terms of the "x" momentum equation, for example, we have:
formula_27
Similarly for the y and z momentum directions we have "μ"∇2"v" and "μ"∇2"w".
The above solution is key to deriving Navier–Stokes equations from the equation of motion in fluid dynamics when density and viscosity are constant.
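The cancellation used above can be checked symbolically. The following SymPy sketch verifies that, for constant "μ", the "x" component of ∇ ⋅ [μ(∇u + (∇u)T)] differs from "μ"∇2"u" only by a term proportional to the gradient of ∇ ⋅ u, which vanishes for incompressible flow.

```python
import sympy as sp

x, y, z, mu = sp.symbols('x y z mu')
u = sp.Function('u')(x, y, z)
v = sp.Function('v')(x, y, z)
w = sp.Function('w')(x, y, z)

# x component of div[ mu (grad u + (grad u)^T) ] for constant viscosity mu
visc_x = mu * (sp.diff(2 * sp.diff(u, x), x)
               + sp.diff(sp.diff(u, y) + sp.diff(v, x), y)
               + sp.diff(sp.diff(u, z) + sp.diff(w, x), z))

# It should equal mu * Laplacian(u) plus mu * d/dx(div u);
# the second term vanishes for incompressible flow, since div u = 0.
laplace_u = sp.diff(u, x, 2) + sp.diff(u, y, 2) + sp.diff(u, z, 2)
div_u = sp.diff(u, x) + sp.diff(v, y) + sp.diff(w, z)

residual = sp.simplify(visc_x - mu * laplace_u - mu * sp.diff(div_u, x))
print(residual)  # 0
```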
Non-Newtonian fluids.
A non-Newtonian fluid is a fluid whose flow properties differ in any way from those of Newtonian fluids. Most commonly the viscosity of non-Newtonian fluids is a function of shear rate or shear rate history. However, there are some non-Newtonian fluids with shear-independent viscosity, that nonetheless exhibit normal stress-differences or other non-Newtonian behaviour. Many salt solutions and molten polymers are non-Newtonian fluids, as are many commonly found substances such as ketchup, custard, toothpaste, starch suspensions, paint, blood, and shampoo. In a Newtonian fluid, the relation between the shear stress and the shear rate is linear, passing through the origin, the constant of proportionality being the coefficient of viscosity. In a non-Newtonian fluid, the relation between the shear stress and the shear rate is different, and can even be time-dependent. The study of the non-Newtonian fluids is usually called rheology. A few examples are given here.
Bingham fluid.
In Bingham fluids, the situation is slightly different:
formula_28
These are fluids capable of bearing some stress before they start flowing. Some common examples are toothpaste and clay.
Power-law fluid.
A power law fluid is an idealised fluid for which the shear stress, τ, is given by
formula_29
This form is useful for approximating all sorts of general fluids, including shear thinning (such as latex paint) and shear thickening (such as a corn starch and water mixture).
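A brief numerical sketch of the power-law model; the consistency index K and flow-behaviour index n below are made-up illustrative values.

```python
# Power-law (Ostwald-de Waele) model: tau = K * (du/dy)**n
# n < 1: shear thinning, n = 1: Newtonian, n > 1: shear thickening.
def shear_stress(shear_rate, K, n):
    """Shear stress of a power-law fluid; K and n are illustrative values."""
    return K * shear_rate ** n

for shear_rate in (0.1, 1.0, 10.0, 100.0):
    thinning = shear_stress(shear_rate, K=1.0, n=0.5)    # hypothetical shear-thinning fluid
    thickening = shear_stress(shear_rate, K=1.0, n=1.5)  # hypothetical shear-thickening fluid
    print(f"rate {shear_rate:7.1f}  thinning {thinning:8.3f}  thickening {thickening:10.3f}")
```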
Stream function formulation.
In the analysis of a flow, it is often desirable to reduce the number of equations and/or the number of variables. The incompressible Navier–Stokes equation with mass continuity (four equations in four unknowns) can be reduced to a single equation with a single dependent variable in 2D, or one vector equation in 3D. This is enabled by two vector calculus identities:
formula_30
for any differentiable scalar φ and vector A. The first identity implies that any term in the Navier–Stokes equation that may be represented as the gradient of a scalar will disappear when the curl of the equation is taken. Commonly, pressure p and external acceleration g will be eliminated, resulting in (this is true in 2D as well as 3D):
formula_31
where it is assumed that all body forces are describable as gradients (for example, this is true for gravity), and the equation has been divided through by density so that the viscosity becomes the kinematic viscosity.
The second vector calculus identity above states that the divergence of the curl of a vector field is zero. Since the (incompressible) mass continuity equation specifies the divergence of flow velocity being zero, we can replace the flow velocity with the curl of some vector ψ so that mass continuity is always satisfied:
formula_32
So, as long as flow velocity is represented through u = ∇ × ψ, mass continuity is unconditionally satisfied. With this new dependent vector variable, the Navier–Stokes equation (with curl taken as above) becomes a single fourth order vector equation, no longer containing the unknown pressure variable and no longer dependent on a separate mass continuity equation:
formula_33
Apart from containing fourth order derivatives, this equation is fairly complicated, and is thus uncommon. Note that if the cross differentiation is left out, the result is a third order vector equation containing an unknown vector field (the gradient of pressure) that may be determined from the same boundary conditions that one would apply to the fourth order equation above.
2D flow in orthogonal coordinates.
The true utility of this formulation is seen when the flow is two dimensional in nature and the equation is written in a general orthogonal coordinate system, in other words a system where the basis vectors are orthogonal. Note that this by no means limits application to Cartesian coordinates; in fact most of the common coordinate systems are orthogonal, including familiar ones like cylindrical coordinates and obscure ones like toroidal coordinates.
The 3D flow velocity is expressed as (note that the discussion has been coordinate-free up to this point):
formula_34
where e"i" are basis vectors, not necessarily constant and not necessarily normalized, and "ui" are flow velocity components; let also the coordinates of space be ("x"1, "x"2, "x"3).
Now suppose that the flow is 2D. This does not mean the flow is in a plane; rather, it means that the component of flow velocity in one direction is zero and the remaining components are independent of that direction. In that case (take component 3 to be zero):
formula_35
The vector function ψ is still defined via:
formula_36
but this must simplify in some way also since the flow is assumed 2D. If orthogonal coordinates are assumed, the curl takes on a fairly simple form, and the equation above expanded becomes:
formula_37
formula_38
Examining this equation shows that we can set "ψ"1 = "ψ"2 = 0 and retain equality with no loss of generality, so that:
formula_39
the significance here is that only one component of ψ remains, so that 2D flow becomes a problem with only one dependent variable. The cross differentiated Navier–Stokes equation becomes two 0 = 0 equations and one meaningful equation.
The remaining component "ψ"3 = "ψ" is called the stream function. The equation for ψ can simplify since a variety of quantities will now equal zero, for example:
formula_40
if the scale factors "h"1 and "h"2 also are independent of "x"3. Also, from the definition of the vector Laplacian
formula_41
Manipulating the cross differentiated Navier–Stokes equation using the above two equations and a variety of identities will eventually yield the 1D scalar equation for the stream function:
formula_42
where ∇4 is the biharmonic operator. This is very useful because it is a single self-contained scalar equation that describes both momentum and mass conservation in 2D. The only other equations that this partial differential equation needs are initial and boundary conditions.
The assumptions for the stream function equation are:
* The flow is 2D: "u"3 = ∂"u"1/∂"x"3 = ∂"u"2/∂"x"3 = 0.
* The scale factors "h"1 and "h"2 are independent of "x"3: ∂"h"1/∂"x"3 = ∂"h"2/∂"x"3 = 0, otherwise extra terms appear.
The stream function has some useful properties: since −∇2"ψ" = ∇ × (∇ × ψ) = ∇ × u, the vorticity of the flow is just the negative of the Laplacian of the stream function.
The stress tensor.
The derivation of the Navier–Stokes equation involves the consideration of forces acting on fluid elements, so that a quantity called the stress tensor appears naturally in the Cauchy momentum equation. Since the divergence of this tensor is taken, it is customary to write out the equation fully simplified, so that the original appearance of the stress tensor is lost.
However, the stress tensor still has some important uses, especially in formulating boundary conditions at fluid interfaces. Recalling that σ = −"p"I + τ, for a Newtonian fluid the stress tensor is:
formula_43
If the fluid is assumed to be incompressible, the tensor simplifies significantly. In 3D Cartesian coordinates for example:
formula_44
e is the strain rate tensor, by definition:
formula_45
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{D}{Dt} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{\\partial}{\\partial t} + \\mathbf{u}\\cdot\\nabla "
},
{
"math_id": 1,
"text": "\\frac{d}{dt}\\int_{\\Omega} \\varphi \\ d\\Omega = -\\int_{\\Gamma} \\varphi \\mathbf{u\\cdot n} \\ d\\Gamma - \\int_{\\Omega} s \\ d\\Omega"
},
{
"math_id": 2,
"text": "\\frac{d}{d t} \\int_{\\Omega} \\varphi \\ d\\Omega = -\\int_{\\Omega} \\nabla \\cdot ( \\varphi \\mathbf u) \\ d\\Omega - \\int_{\\Omega} s \\ d\\Omega."
},
{
"math_id": 3,
"text": "\\int_{\\Omega} \\frac{\\partial \\varphi}{\\partial t} \\ d\\Omega = - \\int_{\\Omega}\\nabla \\cdot (\\varphi\\mathbf u) \\ d\\Omega - \\int_{\\Omega} s \\ d\\Omega\n\\quad \\Rightarrow \\quad\n\\int_{\\Omega} \\left( \\frac{\\partial \\varphi}{\\partial t} + \\nabla \\cdot (\\varphi\\mathbf u) + s \\right) d\\Omega = 0."
},
{
"math_id": 4,
"text": "\\frac{\\partial \\varphi}{\\partial t} + \\nabla \\cdot (\\varphi \\mathbf u) + s = 0."
},
{
"math_id": 5,
"text": "\\frac{\\partial \\rho}{\\partial t} + \\nabla \\cdot (\\rho \\mathbf{u}) = 0"
},
{
"math_id": 6,
"text": "\\nabla\\cdot\\mathbf{u} = 0"
},
{
"math_id": 7,
"text": "\\frac{\\partial}{\\partial t}(\\rho \\mathbf u) + \\nabla \\cdot (\\rho \\mathbf u\\otimes\\mathbf u) = \\mathbf{s}"
},
{
"math_id": 8,
"text": "\\nabla \\cdot (\\mathbf a \\otimes\\mathbf b) = (\\nabla \\cdot \\mathbf a)\\mathbf b + \\mathbf a\\cdot \\nabla \\mathbf b"
},
{
"math_id": 9,
"text": "\\mathbf u \\frac{\\partial \\rho}{\\partial t} + \\rho \\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\nabla \\cdot (\\rho\\mathbf u) + \n\\rho \\mathbf u \\cdot \\nabla \\mathbf u = \\mathbf{s}"
},
{
"math_id": 10,
"text": "\\mathbf u \\left(\\frac{\\partial \\rho}{\\partial t} + \\nabla \\cdot (\\rho \\mathbf u)\\right) + \\rho \\left(\\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\cdot \\nabla \\mathbf u\\right) = \\mathbf{s}\n"
},
{
"math_id": 11,
"text": "\\rho\\frac{D \\mathbf u}{D t} = \\rho \\left(\\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\cdot \\nabla \\mathbf u\\right) = \\mathbf{s} "
},
{
"math_id": 12,
"text": "\\begin{align}\n\\rho \\frac{d}{dt}\\bigl(\\mathbf u(x, y, z, t)\\bigr) = \\mathbf{s} \\quad &\\Rightarrow& \\rho \\left(\n\\frac{\\partial \\mathbf u}{\\partial t} + \n\\frac{\\partial \\mathbf u}{\\partial x}\\frac{d x}{d t} + \n\\frac{\\partial \\mathbf u}{\\partial y}\\frac{d y}{d t} + \n\\frac{\\partial \\mathbf u}{\\partial z}\\frac{d z}{d t} \n\\right) &= \\mathbf{s} \\\\\n\\quad &\\Rightarrow& \\rho \\left(\n\\frac{\\partial \\mathbf u}{\\partial t} + \nu \\frac{\\partial \\mathbf u}{\\partial x} + \nv \\frac{\\partial \\mathbf u}{\\partial y} + \nw \\frac{\\partial \\mathbf u}{\\partial z} \n\\right) &= \\mathbf{s} \\\\\n\\quad &\\Rightarrow& \\rho \\left(\\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\cdot \\nabla \\mathbf u\\right) &= \\mathbf{s}\n\\end{align}"
},
{
"math_id": 13,
"text": "\\mathbf u = \\left(\\frac{d x}{d t}, \\frac{d y}{d t}, \\frac{d z}{d t}\\right)"
},
{
"math_id": 14,
"text": "\\rho\\frac{D\\mathbf u}{D t} = \\nabla \\cdot \\boldsymbol{\\sigma} + \\mathbf{f}"
},
{
"math_id": 15,
"text": "\\sigma_{ij} = \\begin{pmatrix}\n\\sigma_{xx} & \\tau_{xy} & \\tau_{xz} \\\\\n\\tau_{yx} & \\sigma_{yy} & \\tau_{yz} \\\\\n\\tau_{zx} & \\tau_{zy} & \\sigma_{zz}\n\\end{pmatrix}"
},
{
"math_id": 16,
"text": "\\sigma_{ij} = \\begin{pmatrix}\n\\sigma_{xx} & \\tau_{xy} & \\tau_{xz} \\\\\n\\tau_{yx} & \\sigma_{yy} & \\tau_{yz} \\\\\n\\tau_{zx} & \\tau_{zy} & \\sigma_{zz}\n\\end{pmatrix}\n=\n-\\begin{pmatrix}\np &0&0\\\\\n0&p &0\\\\\n0&0&p \n\\end{pmatrix}\n+ \n\\begin{pmatrix}\n\\sigma_{xx}+p & \\tau_{xy} & \\tau_{xz} \\\\\n\\tau_{yx} & \\sigma_{yy}+p & \\tau_{yz} \\\\\n\\tau_{zx} & \\tau_{zy} & \\sigma_{zz}+p \n\\end{pmatrix}\n= -p \\mathbf{I} + \\boldsymbol \\tau\n"
},
{
"math_id": 17,
"text": "p = -\\tfrac13 \\left( \\sigma_{xx} + \\sigma_{yy} + \\sigma_{zz} \\right)."
},
{
"math_id": 18,
"text": "\\rho\\frac{D\\mathbf u}{D t} = -\\nabla p + \\nabla \\cdot\\boldsymbol \\tau + \\mathbf{f}"
},
{
"math_id": 19,
"text": "\\tau \\propto \\frac{\\partial u}{\\partial y}"
},
{
"math_id": 20,
"text": " \\boldsymbol\\tau = \\mu \\left(\\nabla \\mathbf u + \\left(\\nabla \\mathbf u\\right)^\\mathsf{T} \\right) + \\lambda \\left( \\nabla \\cdot \\mathbf u \\right) \\mathbf I "
},
{
"math_id": 21,
"text": "\\tau_{ij} = \\mu\\left(\\frac{\\partial u_i}{\\partial x_j} + \\frac{\\partial u_j}{\\partial x_i} \\right) + \\delta_{ij} \\lambda \\frac{\\partial u_k}{\\partial x_k} "
},
{
"math_id": 22,
"text": "\\rho \\left(\\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\cdot \\nabla \\mathbf u\\right) = \n-\\nabla p + \\nabla \\cdot \\left[\\mu \\left(\\nabla \\mathbf u + \\left(\\nabla \\mathbf u\\right)^\\mathsf{T}\\right)\\right] \n+ \\nabla \\cdot \\left[ \\lambda \\left( \\nabla \\cdot \\mathbf u \\right) \\mathbf I \\right]\n+ \\rho \\mathbf{g}"
},
{
"math_id": 23,
"text": "\\frac{\\partial \\rho}{\\partial t} + \\nabla \\cdot (\\rho \\mathbf u) = 0"
},
{
"math_id": 24,
"text": "\\rho \\frac{D h}{D t} = \\frac{D p}{D t} + \\nabla \\cdot (k \\nabla T) + \\Phi"
},
{
"math_id": 25,
"text": "\\Phi = \\mu \\left(2\\left(\\frac{\\partial u}{\\partial x}\\right)^2 + 2\\left(\\frac{\\partial v}{\\partial y}\\right)^2 + 2\\left(\\frac{\\partial w}{\\partial z}\\right)^2 + \\left(\\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial y}\\right)^2 + \\left(\\frac{\\partial w}{\\partial y} + \\frac{\\partial v}{\\partial z}\\right)^2 + \\left(\\frac{\\partial u}{\\partial z} + \\frac{\\partial w}{\\partial x}\\right)^2\\right) + \\lambda (\\nabla \\cdot \\mathbf u)^2."
},
{
"math_id": 26,
"text": "\\rho \\left(\\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\cdot \\nabla \\mathbf u\\right) = \n-\\nabla p + \\nabla \\cdot \\left[\\mu \\left(\\nabla \\mathbf u + \\left(\\nabla \\mathbf u\\right)^\\mathsf{T}\\right)\\right] \n+ \\rho \\mathbf{g}"
},
{
"math_id": 27,
"text": "\\begin{align}\n &\\frac{\\partial}{\\partial x}\\left(2 \\mu \\frac{\\partial u}{\\partial x}\\right) + \n \\frac{\\partial}{\\partial y}\\left(\\mu\\left(\\frac{\\partial u}{\\partial y} + \\frac{\\partial v}{\\partial x}\\right)\\right) + \n \\frac{\\partial}{\\partial z}\\left(\\mu\\left(\\frac{\\partial u}{\\partial z} + \\frac{\\partial w}{\\partial x}\\right)\\right) \\\\ [8px]\n &\\qquad = \n 2 \\mu \\frac{\\partial^2 u}{\\partial x^2} + \n \\mu \\frac{\\partial^2 u}{\\partial y^2} + \\mu \\frac{\\partial^2 v}{\\partial y \\, \\partial x} + \n \\mu \\frac{\\partial^2 u}{\\partial z^2} + \\mu \\frac{\\partial^2 w}{\\partial z \\, \\partial x} \\\\ [8px]\n &\\qquad = \n \\mu \\frac{\\partial^2 u}{\\partial x^2} + \n \\mu \\frac{\\partial^2 u}{\\partial y^2} + \n \\mu \\frac{\\partial^2 u}{\\partial z^2} + \n \\mu \\frac{\\partial^2 u}{\\partial x^2} + \\mu \\frac{\\partial^2 v}{\\partial y \\, \\partial x} + \\mu \\frac{\\partial^2 w}{\\partial z \\, \\partial x} \\\\ [8px]\n &\\qquad = \\mu \\nabla^2 u + \\mu \\frac{\\partial}{\\partial x} \\cancelto{0}{\\left(\\frac{\\partial u}{\\partial x} + \\frac{\\partial v}{\\partial y} + \\frac{\\partial w}{\\partial z}\\right)} \\\\ [8px]\n &\\qquad = \\mu \\nabla^2 u\n\\end{align}\\,"
},
{
"math_id": 28,
"text": "\\frac{\\partial u}{\\partial y} = \n\\begin{cases} \n0 ,& \\tau < \\tau_0 \\\\[5px] \n\\dfrac{\\tau - \\tau_0}{\\mu} ,& \\tau \\ge \\tau_0 \n\\end{cases}"
},
{
"math_id": 29,
"text": "\\tau = K \\left(\\frac{\\partial u}{\\partial y}\\right)^n "
},
{
"math_id": 30,
"text": "\\begin{align}\n\\nabla \\times (\\nabla \\phi) &= 0 \\\\\n\\nabla \\cdot (\\nabla \\times \\mathbf{A}) &= 0\n\\end{align}"
},
{
"math_id": 31,
"text": "\\nabla \\times \\left(\\frac{\\partial \\mathbf u}{\\partial t} + \\mathbf u \\cdot \\nabla \\mathbf u\\right) = \\nu \\nabla \\times \\left(\\nabla^2 \\mathbf u\\right)"
},
{
"math_id": 32,
"text": "\\nabla \\cdot \\mathbf u = 0 \\quad \\Rightarrow \\quad \\nabla \\cdot (\\nabla \\times \\boldsymbol \\psi) = 0 \\quad \\Rightarrow \\quad 0 = 0"
},
{
"math_id": 33,
"text": "\\nabla \\times \\left(\\frac{\\partial}{\\partial t}(\\nabla \\times \\boldsymbol \\psi) + (\\nabla \\times \\boldsymbol \\psi) \\cdot \\nabla (\\nabla \\times \\boldsymbol \\psi)\\right) = \\nu \\nabla \\times \\left(\\nabla^2 (\\nabla \\times \\boldsymbol \\psi)\\right)"
},
{
"math_id": 34,
"text": "\\mathbf u = u_1 \\mathbf e_1 + u_2 \\mathbf e_2 + u_3 \\mathbf e_3"
},
{
"math_id": 35,
"text": "\\mathbf u = u_1 \\mathbf e_1 + u_2 \\mathbf e_2; \\qquad \\frac{\\partial u_1}{\\partial x_3} = \\frac{\\partial u_2}{\\partial x_3} = 0"
},
{
"math_id": 36,
"text": "\\mathbf u = \\nabla \\times \\boldsymbol \\psi"
},
{
"math_id": 37,
"text": "u_1 \\mathbf e_1 + u_2 \\mathbf e_2 = \n\\frac{\\mathbf{e}_{1}}{h_{2} h_{3}} \n\\left[\n\\frac{\\partial}{\\partial x_{2}} \\left( h_{3} \\psi_{3} \\right) - \n\\frac{\\partial}{\\partial x_{3}} \\left( h_{2} \\psi_{2} \\right)\n\\right] + \n"
},
{
"math_id": 38,
"text": "\n{\\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ \\ }\n+\n\\frac{\\mathbf{e}_{2}}{h_{3} h_{1}} \n\\left[\n\\frac{\\partial}{\\partial x_{3}} \\left( h_{1} \\psi_{1} \\right) - \n\\frac{\\partial}{\\partial x_{1}} \\left( h_{3} \\psi_{3} \\right)\n\\right] + \n\\frac{\\mathbf{e}_{3}}{h_{1} h_{2}} \n\\left[\n\\frac{\\partial}{\\partial x_{1}} \\left( h_{2} \\psi_{2} \\right) - \n\\frac{\\partial}{\\partial x_{2}} \\left( h_{1} \\psi_{1} \\right)\n\\right]\n"
},
{
"math_id": 39,
"text": "u_1 \\mathbf e_1 + u_2 \\mathbf e_2 = \n\\frac{\\mathbf{e}_{1}}{h_{2} h_{3}} \\frac{\\partial}{\\partial x_{2}} \\left( h_{3} \\psi_{3} \\right)\n- \\frac{\\mathbf{e}_{2}}{h_{3} h_{1}} \\frac{\\partial}{\\partial x_{1}} \\left( h_{3} \\psi_{3} \\right)\n"
},
{
"math_id": 40,
"text": "\n\\nabla \\cdot \\boldsymbol \\psi = \\frac{1}{h_{1} h_{2} h_{3}} \\frac{\\partial}{\\partial x_3} \\left(\\psi h_1 h_2\\right) = 0\n"
},
{
"math_id": 41,
"text": "\n\\nabla \\times (\\nabla \\times \\boldsymbol \\psi) = \\nabla(\\nabla \\cdot \\boldsymbol \\psi) - \\nabla^2 \\boldsymbol \\psi = -\\nabla^2 \\boldsymbol \\psi\n"
},
{
"math_id": 42,
"text": "\n\\frac{\\partial}{\\partial t}\\left(\\nabla^2 \\psi\\right)\n + (\\nabla \\times \\boldsymbol \\psi) \\cdot \\nabla\\left(\\nabla^2 \\psi\\right) = \\nu \\nabla^4 \\psi"
},
{
"math_id": 43,
"text": "\\sigma_{ij} = -p\\delta_{ij}+ \\mu\\left(\\frac{\\partial u_i}{\\partial x_j} + \\frac{\\partial u_j}{\\partial x_i}\\right) + \\delta_{ij} \\lambda \\nabla \\cdot \\mathbf u."
},
{
"math_id": 44,
"text": "\\begin{align}\n\\boldsymbol\\sigma &= \n-\\begin{pmatrix}\np&0&0\\\\\n0&p&0\\\\\n0&0&p\n\\end{pmatrix} +\n\n\\mu \\begin{pmatrix}\n2 \\displaystyle{\\frac{\\partial u}{\\partial x}} & \\displaystyle{\\frac{\\partial u}{\\partial y} + \\frac{\\partial v}{\\partial x}} &\\displaystyle{ \\frac{\\partial u}{\\partial z} + \\frac{\\partial w}{\\partial x}} \\\\\n\\displaystyle{\\frac{\\partial v}{\\partial x} + \\frac{\\partial u}{\\partial y}} & 2 \\displaystyle{\\frac{\\partial v}{\\partial y}} & \\displaystyle{\\frac{\\partial v}{\\partial z} + \\frac{\\partial w}{\\partial y}} \\\\\n\\displaystyle{\\frac{\\partial w}{\\partial x} + \\frac{\\partial u}{\\partial z}} & \\displaystyle{\\frac{\\partial w}{\\partial y} + \\frac{\\partial v}{\\partial z}} & 2\\displaystyle{\\frac{\\partial w}{\\partial z}}\n\\end{pmatrix} \\\\[6px]\n\n&= -p \\mathbf{I} + \\mu \\left(\\nabla \\mathbf u + \\left(\\nabla \\mathbf u\\right)^\\mathsf{T}\\right) \\\\[6px]\n&= -p \\mathbf{I} + 2 \\mu \\mathbf{e}\n\\end{align}\n"
},
{
"math_id": 45,
"text": "e_{ij} = \\frac12\\left(\\frac{\\partial u_i}{\\partial x_j} + \\frac{\\partial u_j}{\\partial x_i}\\right)."
}
] | https://en.wikipedia.org/wiki?curid=10656445 |
10657439 | Transformer types | Overview of electrical transformer types
Various types of electrical transformer are made for different purposes. Despite their design differences, the various types employ the same basic principle as discovered in 1831 by Michael Faraday, and share several key functional parts.
Power transformer.
Laminated core.
This is the most common type of transformer, widely used in electric power transmission and appliances to convert mains voltage to low voltage to power electronic devices. They are available in power ratings ranging from mW to MW. The insulated laminations minimize eddy current losses in the iron core.
Small appliance and electronic transformers may use a split bobbin, giving a high level of insulation between the windings. The rectangular cores are made up of stampings, often in E-I shape pairs, but other shapes are sometimes used. Shields between primary and secondary may be fitted to reduce EMI (electromagnetic interference), or a screen winding is occasionally used.
Small appliance and electronics transformers may have a thermal cut-out built into the winding, to shut-off power at high temperatures to prevent further overheating.
Toroidal.
Donut-shaped toroidal transformers save space compared to E-I cores, and may reduce external magnetic field. These use a ring shaped core, copper windings wrapped around this ring (and thus threaded through the ring during winding), and tape for insulation.
Toroidal transformers have a lower external magnetic field compared to rectangular transformers, and can be smaller for a given power rating. However, they cost more to make, as winding requires more complex and slower equipment.
They can be mounted by a bolt through the center, using washers and rubber pads or by potting in resin. Care must be taken that the bolt does not form part of a short-circuit turn.
Autotransformer.
An autotransformer consists of only one winding that is tapped at some point along the winding. Voltage is applied across a portion of the winding, and a higher (or lower) voltage is produced across another portion of the same winding. The equivalent power rating of the autotransformer is lower than the actual load power rating. It is calculated by: load VA × (|Vin – Vout|)/Vin. For example, an autotransformer that adapts a 1000 VA load rated at 120 volts to a 240 volt supply has an equivalent rating of at least: 1,000 VA × (240 V – 120 V) / 240 V = 500 VA. However, the actual rating (shown on the nameplate) must be at least 1000 VA.
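The rating arithmetic in the example above can be written as a small helper function; this is a sketch of the calculation, not a sizing tool.

```python
def autotransformer_equivalent_va(load_va, v_in, v_out):
    """Equivalent (transformed) power rating: load VA x |Vin - Vout| / Vin."""
    return load_va * abs(v_in - v_out) / v_in

# The example from the text: a 1000 VA, 120 V load fed from a 240 V supply.
print(autotransformer_equivalent_va(1000, 240, 120))  # 500.0 VA
```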
For voltage ratios that don't exceed about 3:1, an autotransformer is cheaper, lighter, smaller, and more efficient than an isolating (two-winding) transformer of the same rating. Large three-phase autotransformers are used in electric power distribution systems, for example, to interconnect 220 kV and 33 kV sub-transmission networks or other high voltage levels.
Variable autotransformer.
By exposing part of the winding coils of an autotransformer, and making the secondary connection through a sliding carbon brush, an autotransformer with a near-continuously variable turns ratio can be obtained, allowing for wide voltage adjustment in very small increments.
Induction regulator.
The induction regulator is similar in design to a wound-rotor induction motor, but it is essentially a transformer whose output voltage is varied by rotating its secondary relative to the primary—i.e., rotating the angular position of the rotor. It can be seen as a power transformer exploiting rotating magnetic fields. The major advantage of the induction regulator is that, unlike variacs, it is practical for ratings over 5 kVA. Hence, such regulators find widespread use in high-voltage laboratories.
Polyphase transformer.
For polyphase systems, multiple single-phase transformers can be used, or all phases can be connected to a single polyphase transformer. For a three phase transformer, the three primary windings are connected together and the three secondary windings are connected together. Examples of connections are wye-delta, delta-wye, delta-delta, and wye-wye. A vector group indicates the configuration of the windings and the phase angle difference between them. If a winding is connected to earth (grounded), the earth connection point is usually the center point of a wye winding. If the secondary is a delta winding, the ground may be connected to a center tap on one winding (high leg delta) or one phase may be grounded (corner grounded delta). A special purpose polyphase transformer is the zigzag transformer. There are many possible configurations that may involve more or fewer than six windings and various tap connections.
Grounding transformer.
"Grounding" or "earthing transformers" let three wire (delta) polyphase system supplies accommodate phase to neutral loads by providing a return path for current to a neutral. Grounding transformers most commonly incorporate a single winding transformer with a zigzag winding configuration but may also be created with a wye-delta isolated winding transformer connection.
Phase-shifting transformer.
This is a specialized type of transformer which can be configured to adjust the phase relationship between input and output. This allows power flow in an electric grid to be controlled, e.g. to steer power flows away from a shorter (but overloaded) link to a longer path with excess capacity.
Variable-frequency transformer.
A "variable-frequency transformer" is a specialized three-phase power transformer which allows the phase relationship between the input and output windings to be continuously adjusted by rotating one half. They are used to interconnect electrical grids with the same nominal frequency but without synchronous phase coordination.
Leakage or stray field transformer.
A leakage transformer, also called a stray-field transformer, has a significantly higher leakage inductance than other transformers, sometimes increased by a magnetic bypass or shunt in its core between primary and secondary, which may be adjustable with a set screw. This provides a transformer with an inherent current limitation due to the loose coupling between its primary and secondary windings. The adjustable short-circuit inductance acts as a current limiting parameter.
The output and input currents are kept low enough to preclude thermal overload under any load conditions — even if the secondary is shorted.
Uses.
Leakage transformers are used for arc welding and high voltage discharge lamps (neon lights and cold cathode fluorescent lamps, which are series connected up to 7.5 kV AC). It acts both as a voltage transformer and as a magnetic ballast.
Other applications are short-circuit-proof extra-low voltage transformers for toys or doorbell installations.
Resonant transformer.
A resonant transformer is a transformer in which one or both windings has a capacitor across it and functions as a tuned circuit. Used at radio frequencies, resonant transformers can function as high Q factor bandpass filters. The transformer windings have either air or ferrite cores and the bandwidth can be adjusted by varying the coupling (mutual inductance). One common form is the IF (intermediate frequency) transformer, used in superheterodyne radio receivers. They are also used in radio transmitters.
Resonant transformers are also used in electronic ballasts for gas discharge lamps, and high voltage power supplies. They are also used in some types of switching power supplies. Here the short-circuit inductance value is an important parameter that determines the resonance frequency of the resonant transformer. Often only the secondary winding has a resonant capacitor (or stray capacitance) and acts as a series resonant tank circuit. When the short-circuit inductance of the secondary side of the transformer is Lsc and the resonant capacitor (or stray capacitance) of the secondary side is Cr, the resonance frequency ωs is given by
formula_0
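Assuming the usual series L-C relation ωs = 1/√(Lsc Cr) for the secondary-side tank (the component values below are illustrative, not taken from the text), the resonance frequency can be computed as follows.

```python
import math

def resonant_frequency_hz(L_sc, C_r):
    """Resonant frequency of the series L-C tank formed by the secondary-side
    short-circuit inductance L_sc (henries) and resonant capacitance C_r (farads)."""
    omega_s = 1.0 / math.sqrt(L_sc * C_r)  # rad/s
    return omega_s / (2.0 * math.pi)       # Hz

# Hypothetical values: 100 uH short-circuit inductance, 10 nF resonant capacitor.
print(round(resonant_frequency_hz(100e-6, 10e-9)))  # 159155 Hz, i.e. about 159 kHz
```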
The transformer is driven by a pulse or square wave for efficiency, generated by an electronic oscillator circuit. Each pulse serves to drive resonant sinusoidal oscillations in the tuned winding, and due to resonance a high voltage can be developed across the secondary.
Applications:
Constant voltage transformer.
By arranging particular magnetic properties of a transformer core, and installing a ferro-resonant tank circuit (a capacitor and an additional winding), a transformer can be arranged to automatically keep the secondary winding voltage relatively constant for varying primary supply without additional circuitry or manual adjustment. Ferro-resonant transformers run hotter than standard power transformers, because regulating action depends on core saturation, which reduces efficiency. The output waveform is heavily distorted unless careful measures are taken to prevent this. Saturating transformers provide a simple rugged method to stabilize an AC power supply.
Ferrite core.
Ferrite core power transformers are widely used in switched-mode power supplies (SMPSs). The powder core enables high-frequency operation, and hence much smaller size-to-power ratio than laminated-iron transformers.
Ferrite transformers are not used as power transformers at mains frequency since laminated iron cores cost less than an equivalent ferrite core.
Planar transformer.
Manufacturers either use flat copper sheets or etch spiral patterns on a printed circuit board to form the "windings" of a planar transformer, replacing the turns of wire used to make other types. Some planar transformers are commercially sold as discrete components, other planar transformers are etched directly into the main printed circuit board and only need a ferrite core to be attached over the PCB. A planar transformer can be thinner than other transformers, which is useful for low-profile applications or when several printed circuit boards are stacked. Almost all planar transformers use a ferrite planar core.
Liquid-cooled transformer.
Large transformers used in power distribution or electrical substations have their core and coils immersed in oil, which cools and insulates. Oil circulates through ducts in the coil and around the coil and core assembly, moved by convection. The oil is cooled by the outside of the tank in small ratings, and by an air-cooled radiator in larger ratings. Where a higher rating is required, or where the transformer is in a building or underground, oil pumps circulate the oil, fans may force air over the radiators, or an oil-to-water heat exchanger may also be used.
Transformer oil is flammable, so oil-filled transformers inside a building are installed in vaults to prevent spread of fire and smoke from a burning transformer. Some transformers were built to use fire-resistant polychlorinated biphenyls (PCBs), but because these compounds persist in the environment and have adverse effects on organisms, their use has been discontinued in most areas; for example, after 1979 in South Africa. Substitute fire-resistant liquids such as silicone oils are now used instead.
Cast resin transformer.
Cast-resin power transformers encase the windings in epoxy resin. These transformers simplify installation since they are dry, without cooling oil, and so require no fire-proof vault for indoor installations. The epoxy protects the windings from dust and corrosive atmospheres. However, because the molds for casting the coils are only available in fixed sizes, the design of the transformers is less flexible, which may make them more costly if customized features (voltage, turns ratio, taps) are required.
Isolating transformer.
An isolation transformer links two circuits magnetically, but provides no metallic conductive path between the circuits. An example application would be in the power supply for medical equipment, when it is necessary to prevent any leakage from the AC power system into devices connected to a patient. Special purpose isolation transformers may include shielding to prevent coupling of electromagnetic noise between circuits, or may have reinforced insulation to withstand thousands of volts of potential difference between primary and secondary circuits.
Solid-state transformer.
A solid-state transformer is actually a power converter that performs the same function as a conventional transformer, sometimes with added functionality. Most contain a smaller high-frequency transformer. It can consist of an AC-to-AC converter, or a rectifier powering an inverter.
Instrument transformer.
Instrument transformers are typically used to operate instruments from high voltage lines or high current circuits, safely isolating measurement and control circuitry from the high voltages or currents. The primary winding of the transformer is connected to the high voltage or high current circuit, and the meter or relay is connected to the secondary circuit. Instrument transformers may also be used as an isolation transformer so that secondary quantities may be used without affecting the primary circuitry.
Terminal identifications (either alphanumeric such as H1, X1, Y1, etc. or a colored spot or dot impressed in the case) indicate one end of each winding, indicating the same instantaneous polarity and phase between windings. This applies to both types of instrument transformers. Correct identification of terminals and wiring is essential for proper operation of metering and protective relay instrumentation.
Current transformer.
A current transformer (CT) is a series connected measurement device designed to provide a current in its secondary coil proportional to the current flowing in its primary. Current transformers are commonly used in metering and protective relays in the electrical power industry.
Current transformers are often constructed by passing a single primary turn (either an insulated cable or an uninsulated bus bar) through a well-insulated toroidal core wrapped with many turns of wire. The CT is typically described by its current ratio from primary to secondary. For example, a 1000:1 CT provides an output current of 1 ampere when 1000 amperes flow through the primary winding. Standard secondary current ratings are 5 amperes or 1 ampere, compatible with standard measuring instruments. The secondary winding can be single ratio or have several tap points to provide a range of ratios. Care must be taken to make sure the secondary winding is not disconnected from its low-impedance load while current flows in the primary, as this may produce a dangerously high voltage across the open secondary and may permanently affect the accuracy of the transformer.
Specially constructed wideband CTs are also used, usually with an oscilloscope, to measure high frequency waveforms or pulsed currents within pulsed power systems. One type provides a voltage output that is proportional to the measured current. Another, called a Rogowski coil, requires an external integrator in order to provide a proportional output.
A current clamp uses a current transformer with a split core that can be easily wrapped around a conductor in a circuit. This is a common method used in portable current measuring instruments but permanent installations use more economical types of current transformer.
Voltage transformer or potential transformer.
Voltage transformers (VT), also called potential transformers (PT), are a parallel connected type of instrument transformer, used for metering and protection in high-voltage circuits or phasor phase shift isolation. They are designed to present negligible load to the supply being measured and to have an accurate voltage ratio to enable accurate metering. A potential transformer may have several secondary windings on the same core as a primary winding, for use in different metering or protection circuits. The primary may be connected phase to ground or phase to phase. The secondary is usually grounded on one terminal.
There are three primary types of voltage transformers (VT): electromagnetic, capacitor, and optical. The electromagnetic voltage transformer is a wire-wound transformer. The capacitor voltage transformer uses a capacitance potential divider and is used at higher voltages due to a lower cost than an electromagnetic VT. An optical voltage transformer exploits the electrical properties of optical materials. Measurement of high voltages is possible with potential transformers. An optical voltage transformer is not strictly a transformer, but a sensor similar to a Hall effect sensor.
Combined instrument transformer.
A combined instrument transformer encloses a current transformer and a voltage transformer in the same transformer. There are two main combined current and voltage transformer designs: oil-paper insulated and SF6 insulated. One advantage of applying this solution is reduced substation footprint, due to reduced number of transformers in a bay, supporting structures and connections as well as lower costs for civil works, transportation and installation.
Pulse transformer.
A pulse transformer is a transformer that is optimised for transmitting rectangular electrical pulses (that is, pulses with fast rise and fall times and a relatively constant amplitude). Small versions called "signal" types are used in digital logic and telecommunications circuits such as in Ethernet, often for matching logic drivers to transmission lines. These are also called Ethernet transformer modules.
Medium-sized "power" versions are used in power-control circuits such as camera flash controllers. Larger "power" versions are used in the electrical power distribution industry to interface low-voltage control circuitry to the high-voltage gates of power semiconductors. Special high voltage pulse transformers are also used to generate high power pulses for radar, particle accelerators, or other high energy pulsed power applications.
To minimize distortion of the pulse shape, a pulse transformer needs to have low values of leakage inductance and distributed capacitance, and a high open-circuit inductance. In power-type pulse transformers, a low coupling capacitance (between the primary and secondary) is important to protect the circuitry on the primary side from high-powered transients created by the load. For the same reason, high insulation resistance and high breakdown voltage are required. A good transient response is necessary to maintain the rectangular pulse shape at the secondary, because a pulse with slow edges would create switching losses in the power semiconductors.
The product of the peak pulse voltage and the duration of the pulse (or more accurately, the voltage-time integral) is often used to characterise pulse transformers. Generally speaking, the larger this product, the larger and more expensive the transformer.
Pulse transformers by definition have a duty cycle of less than 1⁄2; whatever energy is stored in the coil during the pulse must be "dumped" out before the pulse is fired again.
RF transformer.
There are several types of transformer used in radio frequency (RF) work, distinguished by how their windings are connected, and by the type of cores (if any) the coil turns are wound onto.
Laminated steel used for power transformer cores is very inefficient at RF, wasting a lot of RF power as heat, so transformers for use at radio frequencies tend to use magnetic ceramics for winding cores, such as powdered iron (for mediumwave and lower shortwave frequencies) or ferrite (for upper shortwave).
The core material a coil is wrapped around can increase its inductance dramatically – hundreds to thousands of times more than “air” – thereby raising the transformer's Q. The cores of such transformers tend to help performance the most at the lower end of the frequency band the transformer was designed for.
Old RF transformers sometimes included an extra, third coil (called a tickler winding) to inject feedback into an earlier (detector) stage in antique regenerative radio receivers.
Air-core transformer.
So-called “air-core” transformers actually have no core at all – they are wound onto non-magnetic forms or frames, or merely held in shape by the stiffness of the coiled wire. These are used for very high frequency and upper shortwave work.
The lack of a magnetically reactive core means very low inductance per turn, requiring many turns of wire on the transformer coil. All forward current excites a reverse current and induces a secondary voltage proportional to the mutual inductance. At VHF, such transformers may be nothing more than a few turns of wire soldered onto a printed circuit board.
Ferrite-core transformer.
Ferrite core transformers are widely used in RF transformers, especially for current balancing (see below) and impedance matching for TV and radio antennas. Because of the enormous improvement in inductance that ferrite produces, many ferrite cored transformers work well with only one or two turns.
Ferrite is an intensely magnetically reactive ceramic material made from iron oxide (rust) mixed with small fractions of other metals or their oxides, such as magnesium, zinc, and nickel. Different mixtures respond best at different frequencies.
Because they are ceramics, ferrites are (almost) non-conductive, so they respond only to the magnetic fields created by nearby currents, and not to the electric fields created by the accompanying voltages.
Choke transformer.
For radio frequency use, "choke" transformers are sometimes made from windings of transmission line wired in parallel. Sometimes the windings are coaxial cable, sometimes bifilar (paired parallel wire); either is wound around a ferrite, powdered iron, or "air" core. This style of transformer gives an extremely wide bandwidth but only a limited number of impedance ratios (such as 1:1, 1:4, or 1:9) can be achieved with this technique.
Choke transformers are sometimes called "transmission-line transformers" (although see below for a different transformer type with the same name), or "Guanella transformers", or "current baluns", or "line isolators". Although called a "transmission line" transformer, it is distinct from the transformers made from segments of transmission line.
Line section transformer.
At radio frequencies and microwave frequencies, a quarter-wave impedance transformer can provide impedance matching between circuits over a limited range of frequencies, using only a section of transmission line no more than a quarter wave long. The line may be coaxial cable, waveguide, stripline, or microstrip. For upper VHF and UHF frequencies, where coil self resonance interferes with proper operation, it is usually the only feasible method for transforming line impedances.
Single frequency transformers are made using sections of transmission line, often called a "matching section" or a "matching stub". Like the choke transformer above, it is also called a "transmission line transformer" even though the two are very different in form and operation.
Unless it is terminated in its characteristic impedance, any transmission line will produce standing waves of impedance along its length, repeating every half wavelength and covering its full range of absolute values over only a quarter wave. One may exploit this behavior to transform currents and voltages by connecting sections of transmission line with mismatched impedances to deliberately create a standing wave on a line, and then cut and reconnect the line at the position where a desired impedance is reached – never requiring more than a quarter wave of mismatched line.
This type of transformer is very efficient (very little loss) but severely limited in the frequency span it will operate on: Whereas the choke transformer, above, is very broadbanded, a line section transformer is very narrowbanded.
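The arithmetic behind such a matching section can be sketched briefly. The relation Zin = Z0²/Zload for a quarter-wave line is standard transmission-line theory rather than a formula quoted in this article, and the numeric values below are illustrative assumptions:

```python
import math

def quarter_wave_z0(z_in: float, z_load: float) -> float:
    """Characteristic impedance Z0 of a quarter-wave section that
    transforms a resistive load z_load into an input impedance z_in,
    from Z_in = Z0**2 / Z_load."""
    return math.sqrt(z_in * z_load)

# Illustrative example: presenting a 300-ohm load as 75 ohms calls for
# a quarter-wave section of about 150 ohms characteristic impedance.
print(quarter_wave_z0(75.0, 300.0))  # 150.0
```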
Balun.
"Balun" is a generic name for any transformer configured specifically to connect between balanced (non-grounded) and unbalanced (grounded) circuits. They can be made using any transformer type, but the actual balance achieved depends on the type; for example, "choke" baluns produce balanced current and autotransformer-type baluns produce balanced voltages. Baluns can also be made from configurations of transmission line, using bifilar or coaxial cable similar to transmission line transformers in construction and operation.
In addition to interfacing between balanced and unbalanced loads by producing balanced current or balanced voltage (or both), baluns can also separately transform (match) impedance between the loads.
IF transformer.
Ferrite-core transformers are widely used in intermediate frequency (IF) stages in superheterodyne radio receivers. They are mostly tuned transformers, containing a threaded ferrite slug that is screwed in or out to adjust IF tuning. The transformers are usually canned (shielded) for stability and to reduce interference.
Audio transformer.
Audio transformers are those specifically designed for use in audio circuits to carry audio signal. They can be used to block radio frequency interference or the DC component of an audio signal, to split or combine audio signals, or to provide impedance matching between high impedance and low impedance circuits, such as between a high impedance tube (valve) amplifier output and a low impedance loudspeaker, or between a high impedance instrument output and the low impedance input of a mixing console. Audio transformers that operate with loudspeaker voltages and current are larger than those that operate at microphone or line level, which carry much less power. Bridge transformers connect 2-wire and 4-wire communication circuits.
Being magnetic devices, audio transformers are susceptible to external magnetic fields such as those generated by AC current-carrying conductors. "Hum" is a term commonly used to describe unwanted signals originating from the "mains" power supply (typically 50 or 60 Hz). Audio transformers used for low-level signals, such as those from microphones, often include magnetic shielding to protect against extraneous magnetically coupled signals.
Audio transformers were originally designed to connect different telephone systems to one another while keeping their respective power supplies isolated, and are still commonly used to interconnect professional audio systems or system components, to eliminate buzz and hum. Such transformers typically have a 1:1 ratio between the primary and the secondary. These can also be used for splitting signals, balancing unbalanced signals, or feeding a balanced signal to unbalanced equipment. Transformers are also used in DI boxes to convert high-impedance instrument signals (e.g., bass guitar) to low impedance signals to enable them to connect to a microphone input on the mixing console.
A particularly critical component is the output transformer of a valve amplifier. Valve circuits for quality reproduction have long been produced with no other (inter-stage) audio transformers, but an output transformer is needed to couple the relatively high impedance (up to a few hundred ohms depending upon configuration) of the output valve(s) to the low impedance of a loudspeaker. (The valves can deliver a low current at a high voltage; the speakers require high current at low voltage.) Most solid-state power amplifiers need no output transformer at all.
Audio transformers affect the sound quality because they are non-linear. They add harmonic distortion to the original signal, especially odd-order harmonics, with an emphasis on third-order harmonics. When the incoming signal amplitude is very low there is not enough level to energize the magnetic core (see coercivity and magnetic hysteresis). When the incoming signal amplitude is very high the transformer saturates and adds harmonics from soft clipping. Another non-linearity comes from limited frequency response. For good low-frequency response a relatively large magnetic core is required; high power handling increases the required core size. Good high-frequency response requires carefully designed and implemented windings without excessive leakage inductance or stray capacitance. All this makes for an expensive component.
Early transistor audio power amplifiers often had output transformers, but they were eliminated as advances in semiconductors allowed the design of amplifiers with sufficiently low output impedance to drive a loudspeaker directly.
Loudspeaker transformer.
In the same way that transformers create high voltage power transmission circuits that minimize transmission losses, loudspeaker transformers can power many individual loudspeakers from a single audio circuit operated at higher than normal loudspeaker voltages. This arrangement is common in public address systems. Such circuits are commonly referred to as constant-voltage speaker systems. Such systems are also known by the nominal voltage of the loudspeaker line, such as "25-", "70-" and "100-volt" speaker systems (the voltage corresponding to the power rating of a speaker or amplifier). A transformer steps up the output of the system's amplifier to the distribution voltage. At the distant loudspeaker locations, a step-down transformer matches the speaker to the rated voltage of the line, so the speaker produces rated nominal output when the line is at nominal voltage. Loudspeaker transformers commonly have multiple primary taps to adjust the volume at each speaker in steps.
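The load that each tapped speaker transformer places on the line follows from Z = V²/P; a minimal sketch with assumed figures (they are not taken from the text above):

```python
def tap_impedance(line_voltage: float, tap_power: float) -> float:
    """Impedance presented to a constant-voltage speaker line by a
    transformer tap rated at tap_power watts: Z = V**2 / P."""
    return line_voltage ** 2 / tap_power

# Assumed example: taps on a 70-volt line.
for watts in (10.0, 5.0, 1.0):
    print(f"{watts:4.1f} W tap -> {tap_impedance(70.0, watts):6.0f} ohms")
# 10 W -> 490 ohms, 5 W -> 980 ohms, 1 W -> 4900 ohms
```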
Output transformer.
Valve (tube) amplifiers almost always use an output transformer to match the high load impedance requirement of the valves (several kilohms) to a low impedance speaker.
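Since a transformer reflects impedance by the square of its turns ratio (Zp/Zs = (Np/Ns)²), the required ratio can be sketched as follows; the 5 kΩ and 8 Ω figures are assumed for illustration only:

```python
import math

def output_turns_ratio(z_primary: float, z_secondary: float) -> float:
    """Turns ratio Np/Ns that reflects a speaker load z_secondary to the
    valve(s) as z_primary, from Zp / Zs = (Np / Ns)**2."""
    return math.sqrt(z_primary / z_secondary)

# Assumed example: an 8-ohm loudspeaker presented to the output stage
# as a 5-kilohm load needs a turns ratio of 25:1.
print(output_turns_ratio(5000.0, 8.0))  # 25.0
```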
Small-signal transformer.
Moving coil phonograph cartridges produce a very small voltage. For this to be amplified with a reasonable signal-to-noise ratio usually requires a transformer to convert the voltage to the range of the more common moving-magnet cartridges.
Microphones may also be matched to their load with a small transformer, which is mu-metal shielded to minimise noise pickup. These transformers are less widely used today, as transistorized buffers are now cheaper.
Interstage and coupling transformer.
In a push–pull amplifier, an inverted signal is required and can be obtained from a transformer with a center-tapped winding, used to drive two active devices in opposite phase. These phase splitting transformers are not much used today.
Other types.
Transactor.
A transactor is a combination of a transformer and a reactor. A transactor has an iron core with an air-gap, which limits the coupling between windings.
Hedgehog.
Hedgehog transformers are occasionally encountered in homemade 1920s radios. They are homemade audio interstage coupling transformers.
Enameled copper wire is wound round the central half of the length of a bundle of insulated iron wire (e.g., florists' wire), to make the windings. The ends of the iron wires are then bent around the electrical winding to complete the magnetic circuit, and the whole is wrapped with tape or string to hold it together.
Variometer and variocoupler.
A variometer is a type of continuously variable air-core RF inductor with two windings. One common form consisted of a coil wound on a short hollow cylindrical form, with a second smaller coil inside, mounted on a shaft so its magnetic axis can be rotated with respect to the outer coil. The two coils are connected in series. When the two coils are collinear, with their magnetic fields pointed in the same direction, the two magnetic fields add, and the inductance is maximum. If the inner coil is rotated so its axis is at an angle to the outer coil, the magnetic fields do not add and the inductance is less. If the inner coil is rotated so it is collinear with the outer coil but their magnetic fields point in opposite directions, the fields cancel each other out and the inductance is very small or zero. The advantage of the variometer is that inductance can be adjusted continuously, over a wide range. Variometers were widely used in 1920s radio receivers. One of their main uses today is as antenna matching coils to match longwave radio transmitters to their antennas.
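An idealized model of the series-connected variometer treats the mutual inductance as varying with the cosine of the rotation angle; this is a simplified sketch, not a formula given in the article, and the component values are assumed:

```python
import math

def variometer_inductance(l1: float, l2: float, k: float, theta_deg: float) -> float:
    """Total inductance of two series-connected coils whose mutual
    inductance is modelled as k * sqrt(l1 * l2) * cos(theta):
    0 deg = fields aiding (maximum), 180 deg = fields opposing (minimum)."""
    m = k * math.sqrt(l1 * l2) * math.cos(math.radians(theta_deg))
    return l1 + l2 + 2.0 * m

# Assumed example: two 100 uH coils with coupling coefficient 0.9.
for angle in (0, 90, 180):
    print(angle, variometer_inductance(100e-6, 100e-6, 0.9, angle))
# 0 deg ~ 380 uH, 90 deg = 200 uH, 180 deg ~ 20 uH
```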
The "vario-coupler" was a device with similar construction, but the two coils were not connected but attached to separate circuits. So it functioned as an air-core RF transformer with variable coupling. The inner coil could be rotated from 0° to 90° angle with the outer, reducing the mutual inductance from maximum to near zero.
The pancake coil variometer was another common construction used in both 1920s receivers and transmitters. It consists of two flat spiral coils suspended vertically facing each other, hinged at one side so one could swing away from the other to an angle of 90° to reduce the coupling. The flat spiral design served to reduce parasitic capacitance and losses at radio frequencies.
Pancake or "honeycomb" coil vario-couplers were used in the 1920s in the common Armstrong or "tickler" regenerative radio receivers. One coil was connected to the detector tube's grid circuit. The other coil, the "tickler" coil was connected to the tube's plate (output) circuit. It fed back some of the signal from the plate circuit into the input again, and this positive feedback increased the tube's gain and selectivity.
Rotary transformer.
A rotary (rotatory) transformer is a specialized transformer that couples electrical signals between two parts that rotate in relation to each other—as an alternative to slip rings, which are prone to wear and contact noise. They are commonly used in helical scan magnetic tape applications.
Variable differential transformer.
A variable differential transformer is a rugged non-contact position sensor. It has two oppositely-phased primaries which nominally produce zero output in the secondary, but any movement of the core changes the coupling to produce a signal.
Resolver and synchro.
The two-phase resolver and related three-phase synchro are rotary position sensors which work over a full 360°. The primary is rotated within two or three secondaries at different angles, and the amplitudes of the secondary signals can be decoded into an angle. Unlike variable differential transformers, the coils, and not just the core, move relative to each other, so slip rings are required to connect the primary.
Resolvers produce in-phase and quadrature components which are useful for computation. Synchros produce three-phase signals which can be connected to other synchros to rotate them in a generator/motor configuration.
Piezoelectric transformer.
Two piezoelectric transducers can be mechanically coupled or integrated in one piece of material, creating a piezoelectric transformer.
Flyback.
A flyback transformer is a high-voltage, high-frequency transformer used in plasma balls and with cathode-ray tubes (CRTs). It provides the high (often several kV) anode DC voltage required for operation of CRTs. Variations in anode voltage supplied by the flyback can result in distortions in the image displayed by the CRT. CRT flybacks may contain multiple secondary windings to provide several other, lower voltages. Its output is often pulsed because it is often used with a voltage multiplier, which may be integrated with the flyback.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\omega_s=\\frac{1}{\\sqrt{L_{sc} C_r}}=\\frac{1}{\\sqrt{(1-k^2)L_s C_r}}"
}
] | https://en.wikipedia.org/wiki?curid=10657439 |
1065871 | Superreal number | Class of extensions of the real numbers
In abstract algebra, the superreal numbers are a class of extensions of the real numbers, introduced by H. Garth Dales and W. Hugh Woodin as a generalization of the hyperreal numbers and primarily of interest in non-standard analysis, model theory, and the study of Banach algebras. The field of superreals is itself a subfield of the surreal numbers.
Dales and Woodin's superreals are distinct from the super-real numbers of David O. Tall, which are lexicographically ordered fractions of formal power series over the reals.
Formal definition.
Suppose "X" is a Tychonoff space and C("X") is the algebra of continuous real-valued functions on "X". Suppose "P" is a prime ideal in C("X"). Then the factor algebra "A" = C("X")/"P" is by definition an integral domain that is a real algebra and that can be seen to be totally ordered. The field of fractions "F" of "A" is a superreal field if "F" strictly contains the real numbers formula_0, so that "F" is not order isomorphic to formula_0.
If the prime ideal "P" is a maximal ideal, then "F" is a field of hyperreal numbers (Robinson's hyperreals being a very special case). | [
{
"math_id": 0,
"text": "\\R"
}
] | https://en.wikipedia.org/wiki?curid=1065871 |
10659231 | Facet theory | Facet theory is a metatheory for the multivariate behavioral sciences that posits that scientific theories and measurements can be advanced by discovering relationships between "conceptual classifications" of research variables and "empirical partitions" of data-representation spaces. For this purpose, facet theory proposes procedures for (1) Constructing or selecting variables for observation, using the "mapping sentence" technique (a formal definitional framework for a system of observations), and (2) Analyzing multivariate data, using data representation spaces, notably those depicting similarity measures (e.g., correlations), or partially ordered sets, derived from the data.
Facet theory is characterized by its direct concern with the entire content-universe under study, containing many, possibly infinitely many, variables. Observed variables are regarded just as a sample of statistical units from the multitude of variables that make up the investigated attribute (the "content-universe"). Hence, Facet theory proposes techniques for "sampling" variables for observation from the entire content universe; and for "making inferences" from the sample of observed variables to the entire content universe. The sampling of variables is done with the aid of the mapping sentence technique (see Section 1); and inferences from the sample of observed variables to the entire content universe are made with respect to correspondences between conceptual classifications (of attribute-variables or of population-members) and partitions of empirical geometric representation spaces obtained in data analysis (see Sections 2 & 3).
Of the many types of representation spaces that have been proposed, two stand out as especially fruitful: "Faceted-SSA (Faceted Smallest Space Analysis)" for structuring the investigated attribute (see Section 2); and "POSAC (Partial Order Scalogram Analysis by base Coordinates)" for multiple scaling measurements of the investigated attribute (see Section 3).
Inasmuch as observed variables in a behavioral study form in fact but a sample from the content-universe of interest, facet theory's procedures and principles serve to avoid errors that may ensue from incidental sampling of observed variables, thus meeting the challenge of the replication crisis in psychological research and in behavioral research in general.
Facet Theory was initiated by Louis Guttman and has been further developed and applied in a variety of disciplines of the behavioral sciences including psychology, sociology, and business administration.
The mapping sentence.
Definition and properties of the mapping sentence.
Definition (Guttman). A "mapping sentence" is a verbal statement of the domain and of the range of a mapping including connectives between facets as in ordinary language.
In the context of behavioral research, a mapping sentence is essentially a function whose domain consists of the respondents and of the stimuli as arguments, and whose image consists of the cartesian product of the ranges of responses to the stimuli, where each response-range is similarly ordered from high to low with respect to a concept common to all stimuli. When stimuli are classified a priori by one or more content criteria, the mapping sentence facilitates stratified sampling of the content-universe. A classification of the stimuli by their content is called a "content facet"; and the pre-specified set of responses to a stimulus (classifying respondents by their response to that stimulus) is called a "range facet".
The mapping sentence defines the system of observations to be performed. As such, the mapping sentence provides also the essential concepts in terms of which research hypotheses may be formulated.
An example from intelligence research.
Suppose members "pi" of a population P are observed with respect to their success in a written verbal intelligence test. Such observations may be described as a mapping from the observed population to the set of possible scores, say, "R" = {1...,10}: "P""q"1 → "R", where "q"1 is the sense in which a specific score is assigned to every individual in the observed population "P", i.e., "q"1 is "verbal intelligence" in this example. Now, one may be interested in observing also the mathematical or, more specifically, the numerical intelligence of the investigated population; and possibly also their spatial intelligence. Each of these kinds of intelligence is a "sense" in which population members "pi" may be mapped into a range of scores "R" = {1...,10}. Thus, 'intelligence' is now differentiated into three types of materials: verbal ("q"1), numerical ("q"2) and spatial ("q"3). Together, "P", the population, and "Q" = {"q"1, "q"2, "q"3}, the set of types of intelligence, form a cartesian product which constitutes the mapping domain. The mapping is from the set of pairs (pi, qj) to the common range of test-scores "R" = {1...,10}: "P" × "Q" → "R".
A "facet" is a set that serves as a component-set of a cartesian product. Thus, "P" is called the "population facet", "Q" is called a "content facet," and the set of scores obtainable for each test is a "range facet". The range facets of the various items (variables) need not be identical in size: they may have any finite number of scores, or categories, greater or equal to 2.
The Common Meaning Range (CMR).
The ranges of the items pertaining to an investigated content-universe – intelligence in this example – should all have a Common Meaning Range (CMR); that is, they must be ordered from high to low with respect to a common meaning. Following Guttman, the common meaning proposed for the ranges of intelligence-items is "correctness with respect to an objective rule".
The concept of CMR is central in facet theory: It serves to define the content-universe being studied by specifying the universe of items pertaining to that content-universe. Thus, the "mapping-definition" of intelligence, advanced by facet theory is:
"An item belongs to the universe of intelligence items if and only if its domain requires performance of a cognitive task concerning an objective rule and its range is ordered from high correctness to low correctness with respect to that rule."
An initial framework for observing intelligence could be Mapping Sentence 1.
The mapping sentence serves as a unified semantic device for specifying the system of intelligence test items, according to the present conceptualization. Its content facet, the material facet, may now serve as a classification of intelligence test items to be considered. Thus, in designing observations, a stratified sampling of items is afforded by ensuring an appropriate selection of items from each of the material facet elements; that is, from each class of items: the verbal, the numerical and the spatial.
Enriching the mapping sentence.
The research design can be enriched by introducing to the mapping sentence an additional, independent classification of the observations in the form of an additional content-facet, thereby facilitating systematic differentiations of the observations. For example, intelligence items may be classified also according to the cognitive operation required in order to respond correctly to an item: whether rule-recall (memory), rule-application, or rule-inference. Instead of the three sub-content-universes of intelligence defined by the material facet alone, we now have nine sub-content-universes defined by the cartesian multiplication of the material and the mental-operation facets. See mapping sentence 2.
Another way of enriching a mapping sentence (and the scope of the research) is by adding an element (a class) to an existing content facet; for example, by adding Interpersonal material as a new element to the extant material facet. See Mapping Sentence 3.
Content profiles.
A selection of one element from each of the two content facets defines a content profile which represents a sub-content-universe of intelligence. For example, the content profile ("c2, q2") represents the application of rules for performing mathematical computations, such as performing long division. The 3x4=12 sub-content-universes constitute twelve classes of intelligence items. In designing observations, the researcher would strive to include a number of varied items from each of these 12 classes so that the sample of observed items would be representative of the entire intelligence universe. Of course, this stratified sampling of items depends on the researchers' conception of the studied domain, reflected in their choice of content-facets. But, in the larger cycle of the scientific investigation (which includes Faceted SSA of empirical data, see next section), this conception may undergo adjustments and remolding, converging to improved choices of content-facets and observations, and ultimately to robust theories in research domain. In general, mapping sentences may attain high levels of complexity, size and abstraction through various logical operations such as recursion, twist, decomposition and completion.
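The combinatorial side of this design is simple enough to sketch in code. The facet element names below merely paraphrase the facets discussed above and carry no standard notation:

```python
from itertools import product

# Content facets of the (enriched) intelligence mapping sentence.
materials = ["verbal", "numerical", "spatial", "interpersonal"]
operations = ["rule-recall", "rule-application", "rule-inference"]

# The cartesian product of the two content facets gives the 4 x 3 = 12
# content profiles, i.e. the sub-content-universes from which items
# are sampled in a stratified observational design.
content_profiles = list(product(materials, operations))
print(len(content_profiles))   # 12
print(content_profiles[:2])    # [('verbal', 'rule-recall'), ('verbal', 'rule-application')]

# Including, say, three items per content profile yields a 36-item design.
print(3 * len(content_profiles))  # 36
```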
Cartesian decomposition and completion: an example.
In drafting a mapping sentence, an effort is made to include the most salient content-facets, according to the researcher's existing conception of the investigated domain. And for each content facet, attempt is made to specify its elements (classes) so that they be exhaustive (complete) and exclusive (non-overlapping) of each other. Thus, the element 'interpersonal' has been added to the incumbent 3-element material facet of intelligence by a two-step facet-analytic procedure. Step 1, cartesian decomposition of the 3-element material facet into two binary elementary facets: The Environment Facet, whose elements are 'physical environment' and 'human-environment'; and the Symbolization Facet whose elements are 'symbolic' (or high symbolization), and 'concrete' (or low symbolization). Step 2, cartesian completion of the material facet is then sought by attempting to infer the missing material classifiable as 'human environment' and 'concrete'.
In facet theory, this 2×2 classification of intelligence-testing material may now be formulated as an hypothesis to be tested empirically, using Faceted Smallest Space Analysis (SSA).
Complementary topics concerning the mapping sentence.
Despite its seemingly rigid appearance, the mapping sentence format can accommodate complex semantic structures such as twists and recursions, while retaining its essential cartesian structure.
In addition to guiding the collection of data, mapping sentences have been used to content-analyze varieties of conceptualizations and texts—such as organizational quality, legal documents and even dream stories.
Concepts as spaces: faceted SSA.
Description of Faceted Smallest Space Analysis (Faceted SSA).
Facet theory conceives of a multivariate attribute as a content-universe defined by the set of all its items, as specified by the attribute mapping-definition, illustrated above. In facet-theoretical data analysis, the attribute (e.g., intelligence) is likened to a geometric space of suitable dimensionality, whose points represent all possible items. Observed items are processed by Faceted SSA, a version of Multidimensional Scaling (MDS), which involves the following steps:
Step 3 of Faceted SSA incorporates the idea that observed variables included in the Faceted SSA procedure, typically constitute a small subset from the countless items that define the attribute content-universe. But their locations in space may serve as clues that guide the partitioning of the space into regions, in effect classifying all points in space, including those pertaining to unobserved items (had they been observed). This procedure, then, tests the regional hypothesis that the sub-content-universes defined by a content-facet elements exist each as a distinct empirical entity. The Shye-Kingsley Separation Index (SI) assesses the goodness-of-fit of the partition to the content-facet.
The spatial scientific imagery suggested by Facet Theory has far reaching consequences that set Facet Theory apart from other statistical procedures and research strategies. Specifically, it facilitates inferences concerning the structure of the entire content-universe investigated, including unobserved items.
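A minimal computational sketch of the space-embedding step can be given with a general-purpose MDS routine standing in for the SSA program proper (Faceted SSA is a nonmetric method and also involves the partitioning step, which is omitted here); the data below are simulated and the scikit-learn call is used only as a convenient stand-in:

```python
import numpy as np
from sklearn.manifold import MDS

# Simulated data standing in for real observations:
# 200 respondents answering 8 variables with a common meaning range.
rng = np.random.default_rng(0)
scores = rng.normal(size=(200, 8))

# Step analogous to Faceted SSA: compute similarities (correlations)
# between variables, turn them into dissimilarities, and embed each
# variable as a point in a 2-dimensional space.
corr = np.corrcoef(scores, rowvar=False)
dissim = 1.0 - corr
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed",
          random_state=0)
coords = mds.fit_transform(dissim)   # one (x, y) point per variable

# A regional hypothesis is then examined by checking whether the points
# of each content-facet element occupy a distinct region of this map.
print(coords.shape)                  # (8, 2)
```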
Example 1. The structure of intelligence.
Intelligence testing has been conceived as described above, with Mapping Sentence 2 as a framework for its
observation. In many studies, different samples of variables conforming to Mapping Sentence 2 have been analyzed confirming two regional hypotheses:
The superposition of these two partition patterns results in a scheme known as the Radex Theory of Intelligence, see Figure 1.
The radex structure, which originated earlier as "a new approach to factor analysis", has been found also in the study of color perception as well as in other domains of research.
Faceted SSA has been applied in a wide variety of research areas including value research, social work, and criminology, among many others.
Example 2. The structure of quality of life.
The Systemic Quality of Life (SQOL) has been defined as the effective functioning of human individuals in four functioning subsystems: the cultural, the social, the physical and the personality subsystems. The axiomatic foundations of SQOL suggest the regional hypothesis that the four subsystems should be empirically validated (i.e., items of each would occupy a distinct region) and that they be mutually oriented in space in a specific 2x2 pattern topologically equivalent to the 2x2 classification shown in Figure 2 (i.e., personality opposite cultural, and physical opposite social). The hypothesis has been confirmed by many studies.
Types of partition patterns.
Of the many possible partitions of a 2-d concept space, three stand out as especially useful for theory construction:
The advantages of these partition patterns as likely models for behavioral data are that they are describable by a minimal number of parameters, hence avoid overfitting; and that they are generalizable to partitions of spaces of higher dimensionality.
In testing regional hypotheses, the fit of a content-facet to any one of these three models is assessed by the Separation Index (SI), a normalized measure of the deviation of variables from the region assigned to them by the model.
Concept spaces in higher dimensionalities have been found as well.
Principles of faceted SSA: A summary.
1. The attribute under study is represented by a geometric space.
2. Variables of the attribute are represented as points in that space. Conversely, every point in the geometric space is a variable of the attribute. This is the Continuity Principle.
3. The observed variables, located as points in the empirical Faceted SSA map, constitute but a sample drawn from the many (possibly infinitely many) variables constituting the content universe of the attribute investigated.
4. The observed variables chosen for SSA must all belong to the same content universe. This is ensured by including in the SSA only variables whose ranges are similarly ordered with respect to a common meaning (CMR).
5. The sample of variables marked on the Faceted SSA map is used as a guide for inferring possible partitions of the SSA-attribute-map into distinct regions, each region representing a component, or subdomain, of the attribute.
6. In Facet Theory, relationships between attribute components (such as verbal intelligence and numeric intelligence as components of intelligence) are expressed in geometric terms – such as shapes and spatial orientation – rather than in algebraic terms, just as one would describe relationships between neighboring countries in terms of their shapes and geographical orientation, not in terms of distances between them.
7. The imagery of an attribute as a continuous space, from which variables are sampled, implies that clustering of variables in the SSA map has no significance: It is just an artifact of the sampling of the variables. Sampled variables that are clustered together may belong to different subdomains, just as two cities that are close together may be located in different countries. Conversely, variables that are far apart may belong to the same sub-domain, just as two cities that are far apart may belong to the same country. What matters is the identification of distinct regions with well-defined sub-domains. Facet Theory proposes a way of transcending accidental clustering of variables by focusing on a robust and replicable aspect of the data, namely the partitionability of the attribute-space.
These principles bring in new concepts, raise new questions, and open new ways of understanding behavior. Thus, Facet Theory represents a paradigm of its own for multivariate behavioral research.
Complementary topics in faceted SSA.
Besides analyzing a data matrix of "N" individuals by "n" variables, as discussed above, Faceted SSA is usefully employed in additional modes.
Direct measures of (dis)similarity. Given a set of objects and a similarity (or dissimilarity) measure between every pair of objects, Faceted SSA can provide a map whose regions correspond to a specified classification of the objects. For example, in a study of color perception, a sample of spectral colors, with a measure of perceived similarity between every pair of colors, yielded the radex theory of spectral color perception. In a study of community elites, a measure of distance devised between pairs of community leaders yielded a sociometric map whose regions were interpreted from the perspective of sociological theory.
Transposed data matrix. Switching the roles of individuals and variables, Faceted SSA may be applied to individuals rather than to the variables. This rarely used procedure may be justified to the extent that the variables evenly cover a research domain. For example, intercorrelations between members of a multidisciplinary team of experts were computed based on their human quality-of-life value assessments. The resulting Faceted SSA map yielded a radex of disciplines, supporting the association between social institutions and human values.
Multiple scaling by POSAC.
Description of Partial Order Scalogram Analysis by Coordinates (POSAC).
In Facet Theory, the measurement of investigated individuals (and, by extension, of all individuals belonging to the sampled population) with respect to a multivariate attribute, is based on the following assumptions and conditions:
"Partial order analysis of observed data." Let observed items "v1...,vn" with a common-meaning range (CMR) represent an investigated content universe; let "A"1...,"A""n" be their ranges with each "Aj" ordered from high to low with respect to the common meaning; and let "A" = "A"1×"A"2 × ... × "A""n" be the cartesian product of all the range facets, "Aj" ("j" = 1...,"n"). A system of observations is a mapping "P" → "A" from the observed subjects "P" to "A", that is, each subject "p""i" gets a score from each "A""j" ("j" = 1...,"n"), or "p""i" → ["a""i"1,"a""i"2, ..., "a""in"] formula_0 "a"("pi"). The point "a"("p""i") in "A" is also called the profile of "p""i", and the subset "A"′ of "A" (formula_1) of observed profiles is called a scalogram. Facet Theory defines relations between profiles as follows: Two different profiles "a""i" = ["a""i"1,"a""i"2...,"a""in"] and "a""j" = ["a""j"1,"a""j"2...,"a""jn"] are comparable, denoted by "a""i""Sa""j", with "a""i" greater than "a""j", "a""i" > "a""j", if and only if "a""ik" ≥ "a""jk" for "k" = 1, ..., "n", and "a""ik"′ > "a""jk"′ for some "k"′. Two different profiles are incomparable, denoted by "a""i" $ "a""j", if neither "a""i" > "a""j" nor "a""j" > "a""i". "A", and therefore its subset "A"′, form a partially ordered set.
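The order relations just defined are easy to state in code; the following is a direct transcription of the definition (not a POSAC implementation):

```python
def compare_profiles(a, b):
    """Coordinatewise partial order on score profiles of equal length:
    returns 'equal', 'greater', 'less' or 'incomparable'."""
    if tuple(a) == tuple(b):
        return "equal"
    if all(x >= y for x, y in zip(a, b)):
        return "greater"      # a > b: a dominates b and differs somewhere
    if all(x <= y for x, y in zip(a, b)):
        return "less"         # a < b
    return "incomparable"     # denoted a $ b in the text

print(compare_profiles((2, 2, 3, 1), (1, 2, 3, 1)))  # greater
print(compare_profiles((2, 1, 3, 1), (1, 2, 3, 1)))  # incomparable
```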
Facet Theoretical measurement consists in mapping points "a"("pi") of "A"' into a coordinate space "X" of the lowest dimensionality while preserving observed order relations, including incomparability:
Definition. The p.o. dimensionality of scalogram "A'" is the smallest "m" ("m" ≤ "n") for which there exist "m" facets "X"1 ... "X""m" (each "Xi" is ordered) and there exists a 1-1 mapping "Q":"X"′ → "A"′ from "X"′ (formula_2) to "A"′ such that "a" > "a"′ if and only if "x" > "x"′ whenever "Q" maps points "x", "x"′ in "X"′ to points "a", "a"′ ∈ "A".
The coordinate scales, "Xi" ("i" = 1, ..., "m") represent underlying fundamental variables whose meanings must be inferred in any specific application. The well known Guttman scale [24] (example: 1111, 1121, 1131, 2131, 2231, 2232) is simply a 1-d scalogram, i.e. one all of whose profiles are comparable.
The procedure of identifying and interpreting the coordinate scales "X"1..."X""m" is called multiple scaling. Multiple scaling is facilitated by partial order scalogram analysis by base coordinates (POSAC) for which algorithms and computer programs have been devised. In practice, a particular dimensionality is attempted and a solution that best accommodates the order-preserving condition is sought. The POSAC/LSA program finds an optimal solution in 2-d coordinate space, then goes on to analyze by Lattice Space Analysis (LSA) the role played by each of the variables in structuring the POSAC 2-space, thereby facilitating interpretation of the derived coordinate scales, "X"1, "X"2. Recent developments include the algorithms for computerized partitioning of the POSAC space by the range facet of each variable, which induces meaningful intervals on the coordinate scales, "X", "Y".
Example 3. TV watching patterns: analysis of simplified survey data.
Members of a particular population were asked four questions: whether they watched TV the night before for an hour at 7 PM (hour 1), at 8 PM (hour 2), at 9 PM (hour 3) and at 10 PM (hour 4). A positive answer to a question was recorded as 1, and a negative answer, as 0. Thus, for example, the profile 1010 represents a person who watched TV at 7 PM and at 9 PM but not at 8 PM and at 10 PM. Suppose that out of the 16 combinatorially possible profiles, only the following eleven profiles were observed empirically: 0000, 1000, 0100, 0010, 0001, 1100, 0110, 0011, 1110, 0111, 1111. Figure 3 is an order-preserving mapping of these profiles into a 2-dimensional coordinate space.
Given this POSAC solution, an attempt is made to interpret the two coordinates, "X"1 and "X"2, as two fundamental scales of the investigated phenomenon of evening TV watching by the investigated population. This is done by, first, interpreting the intervals (equivalence classes) within each coordinate, and then trying to conceptualize the derived meanings of the ordered intervals, in terms of a meaningful notion that may be attributed to the coordinate.
In the present simplified example, this is easy: Inspecting the map, we attempt to identify the feature that distinguishes all profiles with a given score in "X"1. Thus, we find that profiles with "X"1 = 4, and only they, represent TV watching in the fourth hour. Profiles with "X"1 = 3 all have 1 in the third watching hour but 0 at the fourth hour, i.e., the third hour is the latest watching hour. "X"1 = 2 is assigned to, and only to, profiles whose latest watching hour is the second hour. And, finally, "X"1 = 1 is for the profile 1000, which represents the fact that the first hour is the only – and therefore the latest – watching hour (ignoring the profile 0000 of those who did not watch TV at the specified hours, which could be assigned (0,0) in this coordinate space). Hence, it may be concluded that intervals of coordinate "X"1 represent "j", the latest of the four observed hours in which TV was watched ("j" = 1, ..., 4). Similarly, intervals of coordinate "X"2 represent 5 − "k", where "k" ("k" = 1, ..., 4) is the earliest hour of TV watching.
Indeed, for profiles of the observed set, which represent a single sequence of continuous TV watching, specification of the earliest and latest watching hours, provide full description of the watching hours.
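This interpretation can be reproduced mechanically for the observed profiles; the short sketch below simply applies the "latest hour" and "5 minus earliest hour" reading stated above and is not the POSAC algorithm itself:

```python
profiles = ["0000", "1000", "0100", "0010", "0001", "1100",
            "0110", "0011", "1110", "0111", "1111"]

def coordinates(profile: str):
    """X1 = latest watching hour, X2 = 5 - earliest watching hour;
    the all-zero profile (no watching) is assigned (0, 0)."""
    hours = [i + 1 for i, c in enumerate(profile) if c == "1"]
    if not hours:
        return (0, 0)
    return (max(hours), 5 - min(hours))

for p in profiles:
    print(p, coordinates(p))
# e.g. 0110 -> (3, 3): latest hour 3, earliest hour 2, so X2 = 5 - 2 = 3.
```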
Example 3 illustrates key features of Multiple Scaling by POSAC that render this procedure a theory-based multivariate measurement:
These features are present also in applications that are less obvious, to produce scales with novel meanings.
Example 4. Measuring distributive justice attitudes.
In the systemic theory of distributive justice (DJ), alternative allocations of a given amount of an educational resource (100 supplementary teaching hours) between gifted and disadvantaged pupils, may be classified by one of four types, the preference for each reflecting one's DJ attitude:
Equality, where the gifted and the disadvantaged pupils get the same amount of the supplementary resource;
Fairness, where the disadvantaged pupils get more of the resource than the gifted, in proportion to their weakness relative to the gifted;
Utility, where the gifted get more of the resource than the disadvantaged pupils (so as to promote future contribution to the general good);
Corrective Action, where the disadvantaged pupils get more of the resource than the gifted over and above the proportion of their weakness relative to the gifted pupils (so as to compensate them for past accumulated disadvantage).
Following the Faceted SSA validation of the four DJ modes of Equality, Fairness, Utility, and Corrective Action, profiles based on eight dichotomized DJ attitude variables observed on a sample of 191 respondents were created. Of the 256 combinatorially possible profiles, 35 were observed and analyzed by POSAC to obtain the measurement space shown in Figure 4. For each of the variables an optimal partition-line was computed that separates a high from a low score in that variable. (Logically, partition-lines must look like non-increasing step functions.) Then, for each of the four attitude types, the characteristic partition-line was identified as follows:
Fairness—a straight vertical line;
Utility—a straight horizontal line;
Equality—an L-shaped line;
Corrective action—an inverted-L-shaped line
The content significance of the intervals induced by these partition-lines on the X coordinate and on the Y coordinate of the POSAC space is now identified, thereby defining the contents of the X and Y Coordinate Scales of DJ attitudes.
The X-coordinate Scale, interpreted as "Enhanced Fairness Attitude Scale:"
That is, Enhanced Fairness Attitude, even if low (intervals 1 and 2), is somewhat present when Equality is favored (interval 2). And if Enhanced Fairness Attitude is high (intervals 3 and 4), it reaches the extreme level (interval 4) when Corrective Action is favored.
The Y-coordinate Scale, interpreted as "Enhanced Utility Attitude Scale":
That is, Enhanced Utility Attitude, even if low (intervals 1 and 2), is somewhat present when Equality is favored (interval 2). If Enhanced Utility Attitude is high (intervals 3 and 4), it reaches the extreme level (interval 4) when Corrective Action is favored. (This may well reflect the sentiment that, in the long run, the advancement of disadvantaged pupils serves the common good.)
The meanings of the fundamental variables, X and Y, while relying on the concepts of fairness and of utility, respectively, suggest new notions that modify them. The new notions were christened Enhanced (or Extended) Fairness and Enhanced (or Extended) Utility.
Complementary topics in partial order spaces.
"Higher order partition lines." The above simple measurement space illustrates partition-lines that are straight or have one bend. More complex measurement spaces result with items whose partition-lines have two or more bends.
While partial order spaces are used mainly for analyzing score profiles (based on range facets), under certain conditions, they may be applied to the analysis of content profiles; i.e., those based on content facets.
"Relating POSAC Measurement Space to the SSA Concept Space." Based on the same data matrix, POSAC measurement space and Faceted SSA concept space are mathematically related. Proved relationships rely on the introduction of a new kind of coefficient, E*, the coefficient of structural similarity. While E* assesses pairwise similarity between variables, it does depend on variations in the remaining n-2 variables processed. That is, in the spirit of Facet Theory, E* depends on the sampled contents as well as on the sampled population. LSA1 procedure, within 2-dimensional POSAC/LSA program, is a special version of SSA with E* as the similarity coefficient, and with lattice ("city block") as the distance function. Under specified conditions, LSA1 may be readily derived from the boundary scales of the POSAC configuration, thereby highlighting concept/measurement space duality.
Facet theory: comparisons and comments.
Concerned with the entire cycle of multivariate research – concept definition, observational design, and data analysis for concept-structure and measurement – Facet Theory constitutes a novel paradigm for the behavioral sciences. Hence, only limited aspects of it can be compared with specific statistical methods.
A distinctive feature of Facet Theory is its explicit concern with the entire
set of variables included in the investigated content-universe, regarding the subset of observed variables as but a sample from which inferences can be made. Hence, clusters of variables, if observed, are of no significance. They are simply unimportant artifacts of the procedure for sampling of the variables. This is in contrast with cluster analysis or factor analysis where recorded clustering patterns determine research results and interpretations. There have been various attempts to describe technical differences between Factor Analysis and Facet Theory. Briefly, it may be said that while Factor Analysis aims to structure the set of variables selected for observation, Facet Theory aims to structure the entire content universe of all variables, observed as well as unobserved, relying on the continuity principle and using regional hypotheses as an inferential procedure.
Guttman's SSA, as well as Multidimensional Scaling (MDS) in general, has often been described as a procedure for visualizing similarities (e.g., correlations) between analyzed units (e.g., variables) in which the researcher has a specific interest. (See, for example, Wikipedia, October 2020: "Multidimensional scaling (MDS) is a means of visualizing the level of similarity of individual cases of a dataset".) Modern Facet Theory, however, concerned with theory construction in the behavioral sciences, assigns the SSA/MDS space a different role. It regards the analyzed units as a sample of statistical units representing all units that pertain to the content-universe, and uses their dispersion in the SSA/MDS space to infer the structure of the content universe; namely, to infer space partitionings that define components of the content-universe and their spatial interrelationships. The inferred structure, if replicated, may suggest a theory in the investigated domain and provide a basis for theory-based measurements.
Misgivings and responses.
One reservation that has been voiced concerns the usefulness of a successful SSA map (one whose partition-pattern matches a content-classification of the mapped variables). What are the consequences of an SSA map? Does such a map qualify as a theory?
In response, it may be pointed out that (a) consistently replicated empirical partition-patterns in a domain of research constitute a scientific lawfulness which, as such, is of interest to Science; (b) Often a partition-pattern leads to insights that explain behavior and may have potential applications. For example, the "Radex Theory of Intelligence" implies that inferential abilities are less differentiated by kinds of material than memory (or rule-recall, see Example 1 above). (c) Faceted SSA is a useful preliminary procedure for performing meaningful, non-arbitrary measurements by Multiple Scaling (POSAC). See Example 4.
A common doubt about SSA was voiced by a sympathetic but mystified user of SSA: "Smallest Space Analysis seems to come up with provocative pictures that an imaginative observer can usually make some sense of –– in fact, I have often referred to SSA as the sociologist's Rorschach test for imagination". Indeed, missing in Facet Theory are statistical significance tests that would indicate the stability of discovered or hypothesized partition patterns across population samples. For example, it is not clear how to compute the probability of obtaining a hypothesized partition pattern, assuming that in fact the variables are randomly dispersed over the SSA map.
In response, facet theorists claim that in Facet Theory the stability of research results is established by replications, as is the common practice in the natural sciences. Thus, if the same partition-pattern is observed across many population samples (and if no unexplained counterexamples are recorded), confidence in the research outcome would increase. Moreover, Facet Theory adds a stringent requirement for establishing scientific lawfulness, namely that the hypothesized partition-pattern would hold also across different selections of variables, sampled from the same mapping sentence.
Facet Theory is regarded as a promising metatheory for the behavioral sciences by Clyde Coombs, an eminent psychometrician and pioneer of mathematical psychology, who commented: “It is not uncommon for a behavioral theory to be somewhat ambiguous about its domain. The result is that an experiment usually can be performed which will support it and another experiment will disconfirm it. … The problem of how to define the boundaries of a domain, especially in social and behavioral science, is subtle and complex. Guttman’s facet theory (see Shye, 1978) is, I believe, the only substantial attempt to provide a general theory for characterizing domains; in this sense, it is a metatheory. As behavioral science advances so will the need for such theory.” | [
{
"math_id": 0,
"text": "\\equiv"
},
{
"math_id": 1,
"text": "A'\\subseteq A"
},
{
"math_id": 2,
"text": "X'\\subseteq X= X_1\\times\\cdots\\times X_m"
}
] | https://en.wikipedia.org/wiki?curid=10659231 |
1066093 | Real closed field | Non algebraically closed field whose extension by sqrt(–1) is algebraically closed
In mathematics, a real closed field is a field "F" that has the same first-order properties as the field of real numbers. Some examples are the field of real numbers, the field of real algebraic numbers, and the field of hyperreal numbers.
Definition.
A real closed field is a field "F" in which any of the following equivalent conditions is true:
Real closure.
If "F" is an ordered field, the Artin–Schreier theorem states that "F" has an algebraic extension, called the real closure "K" of "F", such that "K" is a real closed field whose ordering is an extension of the given ordering on "F", and is unique up to a unique isomorphism of fields identical on "F" (note that every ring homomorphism between real closed fields automatically is order preserving, because "x" ≤ "y" if and only if ∃"z" : "y" = "x" + "z"2). For example, the real closure of the ordered field of rational numbers is the field formula_1 of real algebraic numbers. The theorem is named for Emil Artin and Otto Schreier, who proved it in 1926.
If ("F", "P") is an ordered field, and "E" is a Galois extension of "F", then by Zorn's lemma there is a maximal ordered field extension ("M", "Q") with "M" a subfield of "E" containing "F" and the order on "M" extending "P". This "M", together with its ordering "Q", is called the relative real closure of ("F", "P") in "E". We call ("F", "P") real closed relative to "E" if "M" is just "F". When "E" is the algebraic closure of "F" the relative real closure of "F" in "E" is actually the real closure of "F" described earlier.
If "F" is a field (no ordering compatible with the field operations is assumed, nor is it assumed that "F" is orderable) then "F" still has a real closure, which may not be a field anymore, but just a
real closed ring. For example, the real closure of the field formula_2 is the ring formula_3 (the two copies correspond to the two orderings of formula_2). On the other hand, if formula_2 is considered as an ordered subfield
of formula_4, its real closure is again the field formula_1.
Decidability and quantifier elimination.
The language of real closed fields formula_5 includes symbols for the operations of addition and multiplication, the constants 0 and 1, and the order relation ≤ (as well as equality, if this is not considered a logical symbol). In this language, the (first-order) theory of real closed fields, formula_6, consists of all sentences that follow from the following axioms:
All of these axioms can be expressed in first-order logic (i.e. quantification ranges only over elements of the field). Note that formula_6 is just the set of all first-order sentences that are true about the field of real numbers.
Tarski showed that formula_6 is complete, meaning that any formula_5-sentence can be proven either true or false from the above axioms. Furthermore, formula_6 is decidable, meaning that there is an algorithm to determine the truth or falsity of any such sentence. This was done by showing quantifier elimination: there is an algorithm that, given any formula_5-formula, which may contain free variables, produces an equivalent quantifier-free formula in the same free variables, where "equivalent" means that the two formulas are true for exactly the same values of the variables. Tarski's proof uses a generalization of Sturm's theorem. Since the truth of quantifier-free formulas without free variables can be easily checked, this yields the desired decision procedure. These results were obtained c. 1930 and published in 1948.
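A standard textbook illustration of quantifier elimination over a real closed field (not an example given in this article) is the solvability condition for a monic quadratic, which reduces to a sign condition on the discriminant:

```latex
% Existence of a root of x^2 + bx + c over a real closed field,
% expressed by an equivalent quantifier-free formula:
\[
  \exists x \,\bigl( x^{2} + b x + c = 0 \bigr)
  \;\Longleftrightarrow\;
  b^{2} - 4c \ge 0 .
\]
```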
The Tarski–Seidenberg theorem extends this result to the following "projection theorem". If R is a real closed field, a formula with n free variables defines a subset of R"n", the set of the points that satisfy the formula. Such a subset is called a semialgebraic set. Given a subset of k variables, the "projection" from R"n" to R"k" is the function that maps every n-tuple to the k-tuple of the components corresponding to the subset of variables. The projection theorem asserts that a projection of a semialgebraic set is a semialgebraic set, and that there is an algorithm that, given a quantifier-free formula defining a semialgebraic set, produces a quantifier-free formula for its projection.
In fact, the projection theorem is equivalent to quantifier elimination, as the projection of a semialgebraic set defined by the formula "p"("x", "y") is defined by
formula_8
where x and y represent respectively the set of eliminated variables, and the set of kept variables.
The decidability of a first-order theory of the real numbers depends dramatically on the primitive operations and functions that are considered (here addition and multiplication). Adding other functions symbols, for example, the sine or the exponential function, can provide undecidable theories; see Richardson's theorem and Decidability of first-order theories of the real numbers.
Furthermore, the completeness and decidability of the first-order theory of the real numbers (using addition and multiplication) contrasts sharply with Gödel's and Turing's results about the incompleteness and undecidability of the first-order theory of the natural numbers (using addition and multiplication). There is no contradiction, since the statement ""x" is an integer" cannot be formulated as a first-order formula in the language formula_5.
Complexity of deciding 𝘛rcf.
Tarski's original algorithm for quantifier elimination has nonelementary computational complexity, meaning that no tower
formula_9
can bound the execution time of the algorithm if n is the size of the input formula. The cylindrical algebraic decomposition, introduced by George E. Collins, provides a much more practicable algorithm of complexity
formula_10
where n is the total number of variables (free and bound), d is the product of the degrees of the polynomials occurring in the formula, and "O"("n") is big O notation.
Davenport and Heintz (1988) proved that this worst-case complexity is nearly optimal for quantifier elimination by producing a family Φ"n" of formulas of length "O"("n"), with n quantifiers, and involving polynomials of constant degree, such that any quantifier-free formula equivalent to Φ"n" must involve polynomials of degree formula_11 and length formula_12 where formula_13 is big Omega notation. This shows that both the time complexity and the space complexity of quantifier elimination are intrinsically double exponential.
For the decision problem, Ben-Or, Kozen, and Reif (1986) claimed to have proved that the theory of real closed fields is decidable in exponential space, and therefore in double exponential time, but their argument (in the case of more than one variable) is generally held as flawed; see Renegar (1992) for a discussion.
For purely existential formulas, that is for formulas of the form
∃"x"1, ..., ∃"x""k" "P"1("x"1, ..., "x""k") ⋈ 0 ∧ ... ∧ "P""s"("x"1, ..., "x""k") ⋈ 0,
where ⋈ stands for either <, > or =, the complexity is lower. Basu and Roy (1996) provided a well-behaved algorithm to decide the truth of such an existential formula with complexity of "s""k"+1"d""O"("k") arithmetic operations and polynomial space.
Order properties.
A crucially important property of the real numbers is that it is an Archimedean field, meaning it has the Archimedean property that for any real number, there is an integer larger than it in absolute value. Note that this statement is not expressible in the first-order language of ordered fields, since it is not possible to quantify over integers in that language.
There are real-closed fields that are non-Archimedean; for example, any field of hyperreal numbers is real closed and non-Archimedean. These fields contain infinitely large (larger than any integer) and infinitesimal (positive but smaller than any positive rational) elements.
The Archimedean property is related to the concept of cofinality. A set "X" contained in an ordered set "F" is cofinal in "F" if for every "y" in "F" there is an "x" in "X" such that "y" < "x". In other words, "X" is an unbounded sequence in "F". The cofinality of "F" is the cardinality of the smallest cofinal set, which is to say, the size of the smallest cardinality giving an unbounded sequence. For example, natural numbers are cofinal in the reals, and the cofinality of the reals is therefore formula_14.
We have therefore the following invariants defining the nature of a real closed field "F": the cardinality of "F", and the cofinality of "F".
To this we may add the weight of "F", which is the minimum cardinality of a dense subset of "F".
These three cardinal numbers tell us much about the order properties of any real closed field, though it may be difficult to discover what they are, especially if we are not willing to invoke the generalized continuum hypothesis. There are also particular properties that may or may not hold, such as completeness or the "η"1 property discussed below.
The generalized continuum hypothesis.
The characteristics of real closed fields become much simpler if we are willing to assume the generalized continuum hypothesis. If the continuum hypothesis holds, all real closed fields with cardinality of the continuum and having the "η"1 property are order isomorphic. This unique field "Ϝ" can be defined by means of an ultrapower, as formula_16, where M is a maximal ideal not leading to a field order-isomorphic to formula_4. This is the most commonly used hyperreal number field in nonstandard analysis, and its uniqueness is equivalent to the continuum hypothesis. (Even without the continuum hypothesis we have that if the cardinality of the continuum is
formula_17 then we have a unique "η""β" field of size formula_17.)
Moreover, we do not need ultrapowers to construct "Ϝ"; we can do so much more constructively, as the subfield of series with a countable number of nonzero terms of the field formula_18 of formal power series on a totally ordered abelian divisible group "G" that is an "η"1 group of cardinality formula_19.
"Ϝ" however is not a complete field; if we take its completion, we end up with a field "Κ" of larger cardinality. "Ϝ" has the cardinality of the continuum, which by hypothesis is formula_19, "Κ" has cardinality formula_20, and contains "Ϝ" as a dense subfield. It is not an ultrapower but it "is" a hyperreal field, and hence a suitable field for the usages of nonstandard analysis. It can be seen to be the higher-dimensional analogue of the real numbers; with cardinality formula_20 instead of formula_19, cofinality formula_19 instead of formula_14, and weight formula_19 instead of formula_14, and with the "η"1 property in place of the "η"0 property (which merely means between any two real numbers we can find another).
Elementary Euclidean geometry.
Tarski's axioms are an axiom system for the first-order ("elementary") portion of Euclidean geometry. Using those axioms, one can show that the points on a line form a real closed field R, and one can introduce coordinates so that the Euclidean plane is identified with R2. Employing the decidability of the theory of real closed fields, Tarski then proved that the elementary theory of Euclidean geometry is complete and decidable.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F(\\sqrt{-1}\\,)"
},
{
"math_id": 1,
"text": "\\mathbb{R}_\\mathrm{alg}"
},
{
"math_id": 2,
"text": "\\mathbb{Q}(\\sqrt 2)"
},
{
"math_id": 3,
"text": "\\mathbb{R}_\\mathrm{alg} \\!\\times \\mathbb{R}_\\mathrm{alg}"
},
{
"math_id": 4,
"text": "\\mathbb{R}"
},
{
"math_id": 5,
"text": "\\mathcal{L}_\\text{rcf}"
},
{
"math_id": 6,
"text": "\\mathcal{T}_\\text{rcf}"
},
{
"math_id": 7,
"text": "d"
},
{
"math_id": 8,
"text": "(\\exists x) P(x,y),"
},
{
"math_id": 9,
"text": "2^{2^{\\cdot^{\\cdot^{\\cdot^n}}}}"
},
{
"math_id": 10,
"text": "d^{2^{O(n)}}"
},
{
"math_id": 11,
"text": "2^{2^{\\Omega(n)}}"
},
{
"math_id": 12,
"text": "2^{2^{\\Omega(n)}},"
},
{
"math_id": 13,
"text": "\\Omega(n)"
},
{
"math_id": 14,
"text": "\\aleph_0"
},
{
"math_id": 15,
"text": "\\aleph_\\alpha"
},
{
"math_id": 16,
"text": "\\mathbb{R}^{\\mathbb{N}}/\\mathbf{M}"
},
{
"math_id": 17,
"text": "\\aleph_\\beta"
},
{
"math_id": 18,
"text": "\\mathbb{R}[[G]]"
},
{
"math_id": 19,
"text": "\\aleph_1"
},
{
"math_id": 20,
"text": "\\aleph_2"
}
] | https://en.wikipedia.org/wiki?curid=1066093 |
10664957 | Penney's game | Sequence generating game between two players
Penney's game, named after its inventor Walter Penney, is a binary (head/tail) sequence generating game between two players. Player A selects a sequence of heads and tails (of length 3 or larger), and shows this sequence to player B. Player B then selects another sequence of heads and tails of the same length. Subsequently, a fair coin is tossed until either player A's or player B's sequence appears as a consecutive subsequence of the coin toss outcomes. The player whose sequence appears first wins.
Provided sequences of at least length three are used, the second player (B) has an edge over the starting player (A). This is because the game is nontransitive such that for any given sequence of length three or longer one can find another sequence that has higher probability of occurring first.
Analysis of the three-bit game.
For the three-bit sequence game, the second player can optimize their odds by choosing their sequence according to the following rule.
An easy way to remember the sequence is for the second player to start with the opposite of the middle choice of the first player, then follow it with the first player's first two choices.
So for the first player's choice of 1-2-3
the second player must choose (not-2)-1-2
where (not-2) is the opposite of the second choice of the first player.
An intuitive explanation for this result is that, whenever the first player's sequence does not appear right at the start, the chance of the first player's sequence beginning to appear (its opening two choices) is usually also the chance that the second player has already completed their full sequence, since the second player's sequence ends with those same two choices. So the second player will most likely "finish before" the first player.
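This rule is easy to check by simulation. The short Python sketch below plays repeated games for one illustrative pairing (the first player choosing HHT, the second answering with THH according to the (not-2)-1-2 rule); the sequences and trial count are arbitrary choices, not part of the original analysis.

```python
import random

def penney_trial(seq_a, seq_b):
    """Flip a fair coin until seq_a or seq_b appears as a suffix; return the winner."""
    history = ""
    while True:
        history += random.choice("HT")
        if history.endswith(seq_a):
            return "A"
        if history.endswith(seq_b):
            return "B"

# First player picks HHT; the (not-2)-1-2 rule gives THH for the second player.
seq_a, seq_b = "HHT", "THH"
trials = 100_000
b_wins = sum(penney_trial(seq_a, seq_b) == "B" for _ in range(trials))
print(f"Second player won {b_wins / trials:.3f} of the games (exact value: 0.75)")
```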
Strategy for more than three bits.
The optimal strategy for the first player (for any sequence of length at least 4) was found by J.A. Csirik (see References). It is to choose HTTTT...TTTHH (formula_0 T's), in which case the second player's maximal odds of winning are formula_1.
Variation with playing cards.
One suggested variation on Penney's Game uses a pack of ordinary playing cards. The Humble-Nishiyama Randomness Game follows the same format using Red and Black cards, instead of Heads and Tails. The game is played as follows. At the start of a game each player decides on their three colour sequence for the whole game. The cards are then turned over one at a time and placed in a line, until one of the chosen triples appears. The winning player takes the upturned cards, having won that "trick". The game continues with the rest of the unused cards, with players collecting tricks as their triples come up, until all the cards in the pack have been used. The winner of the game is the player that has won the most tricks. An average game will consist of around 7 "tricks". As this card-based version is quite similar to multiple repetitions of the original coin game, the second player's advantage is greatly amplified. The probabilities are slightly different because the odds for each flip of a coin are independent while the odds of drawing a red or black card each time is dependent on previous draws. Note that HHT is a 2:1 favorite over HTH and HTT but the odds are different for BBR over BRB and BRR.
Below are approximate probabilities of the outcomes for each strategy based on computer simulations:
If the game is ended after the first trick, there is a negligible chance of a draw. The odds of the second player winning in such a game appear in the table below.
Variation with a Roulette wheel.
Recently Robert W. Vallin, and later Vallin and Aaron M. Montgomery, presented results with Penney's game as it applies to (American) roulette with players choosing Red/Black rather than Heads/Tails. In this situation the probability of the ball landing on red or black is 9/19 and the remaining 1/19 is the chance the ball lands on green for the numbers 0 and 00. There are various ways to interpret green: (1) as a "wild card", so that BGR can be read as Black, Black, Red and as Black, Red, Red, (2) as a do-over, where the game stops when green appears and restarts with the next spin, (3) as just itself, with no extra interpretation. Results have been worked out for odds and wait times.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k-3"
},
{
"math_id": 1,
"text": " (2^{k-1}+1):(2^{k-2}+1) "
}
] | https://en.wikipedia.org/wiki?curid=10664957 |
1066621 | Characteristic (algebra) | Smallest integer n for which n equals 0 in a ring
In mathematics, the characteristic of a ring "R", often denoted char("R"), is defined to be the smallest positive number of copies of the ring's multiplicative identity (1) that will sum to the additive identity (0). If no such number exists, the ring is said to have characteristic zero.
That is, char("R") is the smallest positive number "n" such that:(p198, Thm. 23.14)
formula_0
if such a number "n" exists, and 0 otherwise.
Motivation.
The special definition of the characteristic zero is motivated by the equivalent definitions characterized in the next section, where the characteristic zero is not required to be considered separately.
The characteristic may also be taken to be the exponent of the ring's additive group, that is, the smallest positive integer "n" such that:(p198, Def. 23.12)
formula_1
for every element "a" of the ring (again, if "n" exists; otherwise zero). This definition applies in the more general class of rngs (see ""); for (unital) rings the two definitions are equivalent due to their distributive law.
Equivalent characterizations.
0. If nothing "smaller" (in this ordering) than 0 will suffice, then the characteristic is 0. This is the appropriate partial ordering because of such facts as that char("A" × "B") is the least common multiple of char "A" and char "B", and that no ring homomorphism "f" : "A" → "B" exists unless char "B" divides char "A".
0 for all "a" ∈ "R" implies that "k" is a multiple of "n".
Case of rings.
If "R" and "S" are rings and there exists a ring homomorphism "R" → "S", then the characteristic of "S" divides the characteristic of "R". This can sometimes be used to exclude the possibility of certain ring homomorphisms. The only ring with characteristic 1 is the zero ring, which has only a single element 0. If a nontrivial ring "R" does not have any nontrivial zero divisors, then its characteristic is either 0 or prime. In particular, this applies to all fields, to all integral domains, and to all division rings. Any ring of characteristic 0 is infinite.
The ring formula_3 of integers modulo "n" has characteristic "n". If "R" is a subring of "S", then "R" and "S" have the same characteristic. For example, if "p" is prime and "q"("X") is an irreducible polynomial with coefficients in the field formula_4 with p elements, then the quotient ring formula_5 is a field of characteristic "p". Another example: The field formula_6 of complex numbers contains formula_2, so the characteristic of formula_6 is 0.
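The defining property can be checked directly for the rings formula_3 using modular arithmetic. The following Python sketch is only an illustration of the definition (repeatedly adding 1 until the sum is 0), not an efficient algorithm.

```python
def characteristic_mod_n(n):
    """Smallest k > 0 such that k copies of 1 sum to 0 in the integers modulo n."""
    total, k = 0, 0
    while True:
        k += 1
        total = (total + 1) % n
        if total == 0:
            return k

for n in (2, 6, 12, 97):
    assert characteristic_mod_n(n) == n
print("char(Z/nZ) = n confirmed for the sample moduli")
```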
A formula_3-algebra is equivalently a ring whose characteristic divides "n". This is because for every ring "R" there is a ring homomorphism formula_7, and this map factors through formula_3 if and only if the characteristic of "R" divides "n". In this case, for any "r" in the ring, adding "r" to itself "n" times gives "nr" = 0.
If a commutative ring "R" has "prime characteristic" "p", then we have ("x" + "y")"p"
"x""p" + "y""p" for all elements "x" and "y" in "R" – the normally incorrect "freshman's dream" holds for power "p".
The map "x" ↦ "x""p" then defines a ring homomorphism "R" → "R", which is called the "Frobenius homomorphism". If "R" is an integral domain it is injective.
Case of fields.
As mentioned above, the characteristic of any field is either 0 or a prime number. A field of non-zero characteristic is called a field of "finite characteristic" or "positive characteristic" or "prime characteristic". The "characteristic exponent" is defined similarly, except that it is equal to 1 when the characteristic is 0; otherwise it has the same value as the characteristic.
Any field "F" has a unique minimal subfield, also called its <templatestyles src="Template:Visible anchor/styles.css" />prime field. This subfield is isomorphic to either the rational number field formula_8 or a finite field formula_4 of prime order. Two prime fields of the same characteristic are isomorphic, and this isomorphism is unique. In other words, there is essentially a unique prime field in each characteristic.
Fields of characteristic zero.
The most common fields of "characteristic zero" are the subfields of the complex numbers. The p-adic fields are characteristic zero fields that are widely used in number theory. They have absolute values which are very different from those of complex numbers.
For any ordered field, such as the field of rational numbers formula_8 or the field of real numbers formula_9, the characteristic is 0. Thus, every algebraic number field and the field of complex numbers formula_6 are of characteristic zero.
Fields of prime characteristic.
The finite field GF("p""n") has characteristic "p".
There exist infinite fields of prime characteristic. For example, the field of all rational functions over formula_10, the algebraic closure of formula_10 or the field of formal Laurent series formula_11.
The size of any finite ring of prime characteristic "p" is a power of "p". Since in that case it contains formula_10, it is also a vector space over that field, and from linear algebra we know that the sizes of finite vector spaces over finite fields are powers of the size of the field. This also shows that the size of any finite vector space is a prime power.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\underbrace{1+\\cdots+1}_{n \\text{ summands}} = 0"
},
{
"math_id": 1,
"text": "\\underbrace{a+\\cdots+a}_{n \\text{ summands}} = 0"
},
{
"math_id": 2,
"text": "\\mathbb{Z}"
},
{
"math_id": 3,
"text": "\\mathbb{Z}/n\\mathbb{Z}"
},
{
"math_id": 4,
"text": "\\mathbb F_p"
},
{
"math_id": 5,
"text": "\\mathbb F_p[X]/(q(X))"
},
{
"math_id": 6,
"text": "\\mathbb{C}"
},
{
"math_id": 7,
"text": "\\mathbb{Z}\\to R"
},
{
"math_id": 8,
"text": "\\mathbb{Q}"
},
{
"math_id": 9,
"text": "\\mathbb{R}"
},
{
"math_id": 10,
"text": "\\mathbb{Z}/p\\mathbb{Z}"
},
{
"math_id": 11,
"text": "\\mathbb{Z}/p\\mathbb{Z}((T))"
}
] | https://en.wikipedia.org/wiki?curid=1066621 |
1066694 | Salem number | Type of algebraic integer
In mathematics, a Salem number is a real algebraic integer formula_0 whose conjugate roots all have absolute value no greater than 1, and at least one of which has absolute value exactly 1. Salem numbers are of interest in Diophantine approximation and harmonic analysis. They are named after Raphaël Salem.
Properties.
Because it has a root of absolute value 1, the minimal polynomial for a Salem number must be a reciprocal polynomial. This implies that formula_1 is also a root, and that all other roots have absolute value exactly one. As a consequence α must be a unit in the ring of algebraic integers, being of norm 1.
Every Salem number is a Perron number (a real algebraic number greater than one all of whose conjugates have smaller absolute value).
Relation with Pisot–Vijayaraghavan numbers.
The smallest known Salem number is the largest real root of Lehmer's polynomial (named after Derrick Henry Lehmer)
formula_2
which is about formula_3: it is conjectured that it is indeed the smallest Salem number, and the smallest possible Mahler measure of an irreducible non-cyclotomic polynomial.
Lehmer's polynomial is a factor of the shorter degree-12 polynomial,
formula_4
all twelve roots of which satisfy the relation
formula_5
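The quoted value can be reproduced numerically by computing the roots of Lehmer's polynomial, for example with NumPy; the sketch below simply reports the largest real root and the absolute values of the remaining roots.

```python
import numpy as np

# Coefficients of Lehmer's polynomial, highest degree first:
# x^10 + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1
coeffs = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]
roots = np.roots(coeffs)

salem = max(r.real for r in roots if abs(r.imag) < 1e-9)
print(f"largest real root: {salem:.6f}")                           # about 1.176281
print("moduli of the other roots:", np.sort(np.abs(roots))[:-1])   # 1/salem plus eight of modulus 1
```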
Salem numbers can be constructed from Pisot–Vijayaraghavan numbers. To recall, the smallest of the latter is the unique real root of the cubic polynomial,
formula_6
known as the "plastic ratio" and approximately equal to 1.324718. This can be used to generate a family of Salem numbers including the smallest one found so far. The general approach is to take the minimal polynomial formula_7 of a Pisot–Vijayaraghavan number and its reciprocal polynomial, formula_8, and solve the equation,
formula_9
for integer formula_10 above a bound. Subtracting one side from the other, factoring, and disregarding trivial factors will then yield the minimal polynomial of certain Salem numbers. For example, using the negative case of the above,
formula_11
then for formula_12, this factors as,
formula_13
where the decic is Lehmer's polynomial. Using higher formula_10 will yield a family with a root approaching the plastic ratio. This can be better understood by taking formula_10th roots of both sides,
formula_14
so as formula_10 goes higher, formula_15 will approach the solution of formula_16. If the positive case is used, then formula_15 approaches the plastic ratio from the opposite direction. Using the minimal polynomial of the next smallest Pisot–Vijayaraghavan number gives
formula_17
which for formula_18 factors as
formula_19
a decic not generated by the previous family; it has the root formula_20, which is the fifth smallest known Salem number. As formula_21, this family in turn tends towards the larger real root of formula_22.
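The factorisations used in this construction can be checked symbolically, for instance with SymPy; the sketch below confirms the formula_12 case that produces Lehmer's polynomial.

```python
from sympy import symbols, factor

x = symbols('x')

# x^n (x^3 - x - 1) = -(x^3 + x^2 - 1) with n = 8, rewritten with everything on one side:
expr = x**8 * (x**3 - x - 1) + (x**3 + x**2 - 1)
print(factor(expr))
# -> (x - 1)*(x**10 + x**9 - x**7 - x**6 - x**5 - x**4 - x**3 + x + 1)
```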
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\alpha > 1"
},
{
"math_id": 1,
"text": "1/\\alpha"
},
{
"math_id": 2,
"text": "P(x) = x^{10} + x^9 - x^7 - x^6 - x^5 - x^4 - x^3 + x + 1,"
},
{
"math_id": 3,
"text": "x = 1.17628"
},
{
"math_id": 4,
"text": "Q(x) = x^{12} - x^7 - x^6 - x^5 + 1,"
},
{
"math_id": 5,
"text": "x^{630}-1 = \\frac{(x^{315}-1)(x^{210}-1)(x^{126}-1)^2(x^{90}-1)(x^{3}-1)^3(x^{2}-1)^5(x-1)^3 }{(x^{35}-1)(x^{15}-1)^2(x^{14}-1)^2(x^{5}-1)^6\\,x^{68}}"
},
{
"math_id": 6,
"text": "x^3 - x - 1,"
},
{
"math_id": 7,
"text": "P(x)"
},
{
"math_id": 8,
"text": "P^*(x)"
},
{
"math_id": 9,
"text": "x^n P(x) = \\pm P^{*}(x)"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "x^n(x^3 - x - 1) = -(x^3 + x^2 - 1)"
},
{
"math_id": 12,
"text": "n=8"
},
{
"math_id": 13,
"text": "(x-1)(x^{10} + x^9 -x^7 -x^6 -x^5 -x^4 -x^3 +x +1) = 0"
},
{
"math_id": 14,
"text": "x(x^3 - x - 1)^{1/n} = \\pm (x^3 + x^2 - 1)^{1/n} "
},
{
"math_id": 15,
"text": "x"
},
{
"math_id": 16,
"text": "x^3 - x - 1=0"
},
{
"math_id": 17,
"text": "x^n(x^4 - x^3 - 1) = -(x^4 + x - 1),"
},
{
"math_id": 18,
"text": "n=7"
},
{
"math_id": 19,
"text": "(x-1)(x^{10} - x^6 - x^5 - x^4 + 1) = 0,"
},
{
"math_id": 20,
"text": "x=1.216391\\ldots"
},
{
"math_id": 21,
"text": "n\\to\\infty"
},
{
"math_id": 22,
"text": "x^4 - x^3 - 1 = 0"
}
] | https://en.wikipedia.org/wiki?curid=1066694 |
1066843 | Vickrey auction | Auction priced by second-highest sealed bid
A Vickrey auction or sealed-bid second-price auction (SBSPA) is a type of sealed-bid auction. Bidders submit written bids without knowing the bid of the other people in the auction. The highest bidder wins but the price paid is the second-highest bid. This type of auction is strategically similar to an English auction and gives bidders an incentive to bid their true value. The auction was first described academically by Columbia University professor William Vickrey in 1961 though it had been used by stamp collectors since 1893. In 1797 Johann Wolfgang von Goethe sold a manuscript using a sealed-bid, second-price auction.
Vickrey's original paper mainly considered auctions where only a single, indivisible good is being sold. The terms "Vickrey auction" and "second-price sealed-bid auction" are, in this case only, equivalent and used interchangeably. In the case of multiple identical goods, the bidders submit inverse demand curves and pay the opportunity cost.
Vickrey auctions are much studied in economic literature but uncommon in practice. Generalized variants of the Vickrey auction for multiunit auctions exist, such as the generalized second-price auction used in Google's and Yahoo!'s online advertisement programmes (not incentive compatible) and the Vickrey–Clarke–Groves auction (incentive compatible).
Properties.
Self-revelation and incentive compatibility.
In a Vickrey auction with private values each bidder maximizes their expected utility by bidding (revealing) their valuation of the item for sale. These type of auctions are sometimes used for specified pool trading in the agency mortgage-backed securities (MBS) market.
Ex-post efficiency.
A Vickrey auction is decision efficient (the winner is the bidder with the highest valuation) under the most general circumstances; it thus provides a baseline model against which the efficiency properties of other types of auctions can be posited. It is only ex-post efficient (sum of transfers equal to zero) if the seller is included as "player zero," whose transfer equals the negative of the sum of the other players' transfers (i.e. the bids).
Proof of dominance of truthful bidding.
The dominant strategy in a Vickrey auction with a single, indivisible item is for each bidder to bid their true value of the item.
Let formula_0 be bidder i's value for the item. Let formula_1 be bidder formula_2 bid for the item.
The payoff for bidder formula_3 is
formula_4
The strategy of overbidding is dominated by bidding truthfully (i.e. bidding formula_5). Assume that bidder formula_3 bids formula_6.
If formula_7 then the bidder would win the item with a truthful bid as well as an overbid. The bid's amount does not change the payoff so the two strategies have equal payoffs in this case.
If formula_8 then the bidder would lose the item either way so the strategies have equal payoffs in this case.
If formula_9 then only the strategy of overbidding would win the auction. The payoff would be negative for the strategy of overbidding because they paid more than their value of the item, while the payoff for a truthful bid would be zero. Thus the strategy of bidding higher than one's true valuation is dominated by the strategy of truthfully bidding.
The strategy of underbidding is dominated by bidding truthfully. Assume that bidder formula_3 bids formula_10.
If formula_11 then the bidder would lose the item with a truthful bid as well as an underbid, so the strategies have equal payoffs for this case.
If formula_12 then the bidder would win the item either way so the strategies have equal payoffs in this case.
If formula_13 then only the strategy of truthfully bidding would win the auction. The payoff for the truthful strategy would be positive as they paid less than their value of the item, while the payoff for an underbid bid would be zero. Thus the strategy of underbidding is dominated by the strategy of truthfully bidding.
Truthful bidding dominates the other possible strategies (underbidding and overbidding) so it is an optimal strategy.
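The same conclusion can be illustrated numerically. The sketch below runs simulated second-price auctions against a single rival whose bid is drawn uniformly from [0, 1] and compares a truthful bidder's average payoff with bidders who shade their bid up or down by a fixed amount; the bidder's value, the shading amounts and the rival's distribution are illustrative assumptions, not part of the proof.

```python
import random

def payoff(bid, value, rival_bid):
    """Second-price payoff: win if the bid is highest, then pay the rival's bid."""
    return value - rival_bid if bid > rival_bid else 0.0

random.seed(1)
value = 0.6                      # this bidder's private value (illustrative)
strategies = {"truthful": value, "overbid": value + 0.2, "underbid": value - 0.2}
trials = 200_000

for name, bid in strategies.items():
    total = sum(payoff(bid, value, random.random()) for _ in range(trials))
    print(f"{name:9s}: average payoff {total / trials:.4f}")
```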
Revenue equivalence of the Vickrey auction and sealed first price auction.
The two most common auctions are the sealed first price (or high-bid) auction and the open ascending price (or English) auction. In the former each buyer submits a sealed bid. The high bidder is awarded the item and pays his or her bid. In the latter, the auctioneer announces successively higher asking prices and continues until no one is willing to accept a higher price. Suppose that a buyer's valuation is formula_14 and the current asking price is formula_15. If formula_16, then the buyer loses by raising his hand. If formula_17 and the buyer is not the current high bidder, it is more profitable to bid than to let someone else be the winner. Thus it is a dominant strategy for a buyer to drop out of the bidding when the asking price reaches his or her valuation. Thus, just as in the Vickrey sealed second price auction, the price paid by the buyer with the highest valuation is equal to the second highest value.
Consider then the expected payment in the sealed second-price auction. Vickrey considered the case of two buyers and assumed that each buyer's value was an independent draw from a uniform distribution with support formula_18. With buyers bidding according to their dominant strategies, a buyer with valuation formula_14 wins if his opponent's value formula_19. Suppose that formula_14 is the high value. Then the winning payment is uniformly distributed on the interval formula_20 and so the expected payment of the winner is
formula_21
We now argue that in the sealed first price auction the equilibrium bid of a buyer with valuation formula_14 is
formula_22
That is, the payment of the winner in the sealed first-price auction is equal to the expected revenue in the sealed second-price auction.
Suppose that buyer 2 bids according to the strategy formula_23, where formula_24 is the buyer's bid for a valuation formula_14. We need to show that buyer 1's best response is to use the same strategy.
Note first that if buyer 2 uses the strategy formula_23, then buyer 2's maximum bid is formula_25 and so buyer 1 wins with probability 1 with any bid of 1/2 or more. Consider then a bid formula_15 on the interval formula_26. Let buyer 2's value be formula_27. Then buyer 1 wins if formula_28, that is, if formula_29. Under Vickrey's assumption of uniformly distributed values, the win probability is formula_30. Buyer 1's expected payoff is therefore
formula_31
Note that formula_32 takes on its maximum at formula_33.
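This revenue equivalence is easy to check by simulation. The sketch below draws two independent uniform valuations, applies the equilibrium strategies of each format (truthful bidding in the second-price auction, "B"("v") = "v"/2 in the first-price auction), and compares the sellers' average revenue; both averages should be close to 1/3.

```python
import random

random.seed(2)
trials = 200_000
rev_second = rev_first = 0.0

for _ in range(trials):
    v1, v2 = random.random(), random.random()
    rev_second += min(v1, v2)      # winner pays the second-highest (truthful) bid
    rev_first += max(v1, v2) / 2   # winner pays their own bid B(v) = v/2

print(f"second-price revenue: {rev_second / trials:.4f}")   # about 1/3
print(f"first-price revenue : {rev_first / trials:.4f}")    # about 1/3
```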
Use in network routing.
In network routing, VCG mechanisms are a family of payment schemes based on the added value concept. The basic idea of a VCG mechanism in network routing is to pay the owner of each link or node (depending on the network model) that is part of the solution, its declared cost "plus" its added value. In many routing problems, this mechanism is not only strategyproof, but also the minimum among all strategyproof mechanisms.
In the case of network flows, unicast or multicast, a minimum-cost flow (MCF) in graph "G" is calculated based on the declared costs "d""k" of each of the links and payment is calculated as follows:
Each link (or node) formula_34 in the MCF is paid
formula_35
where MCF("G") indicates the cost of the minimum-cost flow in graph "G" and "G" − "e""k" indicates graph "G" without the link "e""k". Links not in the MCF are paid nothing. This routing problem is one of the cases for which VCG is strategyproof and minimum.
In 2004, it was shown that the expected VCG overpayment of an Erdős–Rényi random graph with "n" nodes and edge probability "p", formula_36 approaches
formula_37
as "n", approaches formula_38, for formula_39. Prior to this result, it was known that
VCG overpayment in "G"("n", "p") is
formula_40
and
formula_41
with high probability given
formula_42
Generalizations.
The most obvious generalization to multiple or divisible goods is to have all winning bidders pay the amount of the highest non-winning bid. This is known as a uniform price auction. The uniform-price auction does not, however, result in bidders bidding their true valuations as they do in a second-price auction unless each bidder has demand for only a single unit. A generalization of the Vickrey auction that maintains the incentive to bid truthfully is known as the Vickrey–Clarke–Groves (VCG) mechanism. The idea in VCG is that items are assigned to maximize the sum of utilities; then each bidder pays the "opportunity cost" that their presence introduces to all the other players. This opportunity cost for a bidder is defined as the total bids of all the other bidders that would have won if the first bidder had not bid, minus the total bids of all the other actual winning bidders.
A different kind of generalization is to set a reservation price—a minimum price below which the item is not sold at all. In some cases, setting a reservation price can substantially increase the revenue of the auctioneer. This is an example of Bayesian-optimal mechanism design.
In mechanism design, the revelation principle can be viewed as a generalization of the Vickrey auction. | [
{
"math_id": 0,
"text": "v_i"
},
{
"math_id": 1,
"text": "b_i"
},
{
"math_id": 2,
"text": "i\\text{'s} "
},
{
"math_id": 3,
"text": "i "
},
{
"math_id": 4,
"text": "\n \\begin{cases}\n v_i-\\max_{j\\neq i} b_j & \\text{if } b_i > \\max_{j\\neq i} b_j, \\\\\n0 & \\text{otherwise.}\n \\end{cases}\n "
},
{
"math_id": 5,
"text": " v_i "
},
{
"math_id": 6,
"text": " b_i > v_i "
},
{
"math_id": 7,
"text": "\\max_{j\\neq i} b_j < v_i "
},
{
"math_id": 8,
"text": "\\max_{j\\neq i} b_j > b_i "
},
{
"math_id": 9,
"text": "v_i < \\max_{j\\neq i} b_j < b_i "
},
{
"math_id": 10,
"text": " b_i < v_i "
},
{
"math_id": 11,
"text": "\\max_{j\\neq i} b_j > v_i "
},
{
"math_id": 12,
"text": "\\max_{j\\neq i} b_j < b_i "
},
{
"math_id": 13,
"text": "b_i < \\max_{j\\neq i} b_j < v_i "
},
{
"math_id": 14,
"text": "v"
},
{
"math_id": 15,
"text": "b"
},
{
"math_id": 16,
"text": "v<b"
},
{
"math_id": 17,
"text": "v>b"
},
{
"math_id": 18,
"text": "[0,1]"
},
{
"math_id": 19,
"text": "x<v"
},
{
"math_id": 20,
"text": "[0,v]"
},
{
"math_id": 21,
"text": "e(v)=\\tfrac{1}{2}v. "
},
{
"math_id": 22,
"text": "B(v)=e(v)=\\tfrac{1}{2}v. "
},
{
"math_id": 23,
"text": "B(v) = v/2"
},
{
"math_id": 24,
"text": "B(v)"
},
{
"math_id": 25,
"text": "B(1) = 1/2"
},
{
"math_id": 26,
"text": "[0,1/2]"
},
{
"math_id": 27,
"text": "x"
},
{
"math_id": 28,
"text": "B(x) = x/2 < b"
},
{
"math_id": 29,
"text": "x < 2b"
},
{
"math_id": 30,
"text": "w(b) = 2b"
},
{
"math_id": 31,
"text": "U(b)=w(b)(v-b)=2b(v-b)=\\tfrac{1}{2}[v^2-(v-2b)^2]"
},
{
"math_id": 32,
"text": "U(b)"
},
{
"math_id": 33,
"text": "b = v/2 = B(v)"
},
{
"math_id": 34,
"text": "e_k"
},
{
"math_id": 35,
"text": "p_k = d_k + \\operatorname{MCF}(G - e_k) - \\operatorname{MCF}(G), "
},
{
"math_id": 36,
"text": "\\scriptstyle G \\in G(n, p)"
},
{
"math_id": 37,
"text": " \\frac{p}{2-p} "
},
{
"math_id": 38,
"text": "\\scriptstyle \\infty "
},
{
"math_id": 39,
"text": "n p = \\omega(\\sqrt{n \\log n})"
},
{
"math_id": 40,
"text": "\\Omega\\left(\\frac{1}{np}\\right)"
},
{
"math_id": 41,
"text": "O(1)\\,"
},
{
"math_id": 42,
"text": "np=\\omega(\\log n).\\,"
}
] | https://en.wikipedia.org/wiki?curid=1066843 |
10670706 | Gentisic acid | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Gentisic acid is a dihydroxybenzoic acid. It is a derivative of benzoic acid and a minor (1%) product of the metabolic break down of aspirin, excreted by the kidneys.
It is also found in the African tree "Alchornea cordifolia" and in wine.
Production.
Gentisic acid is produced by carboxylation of hydroquinone.
C6H4(OH)2 + CO2 → C6H3(CO2H)(OH)2
This conversion is an example of a Kolbe–Schmitt reaction.
Alternatively the compound can be synthesized from salicylic acid via Elbs persulfate oxidation.
Reactions.
In the presence of the enzyme gentisate 1,2-dioxygenase, gentisic acid reacts with oxygen to give maleylpyruvate:
2,5-dihydroxybenzoate + O2 formula_0 maleylpyruvate
Applications.
As a hydroquinone, gentisic acid is readily oxidised and is used as an antioxidant excipient in some pharmaceutical preparations.
In the laboratory, it is used as a sample matrix in matrix-assisted laser desorption/ionization (MALDI) mass spectrometry, and has been shown to conveniently detect peptides incorporating the boronic acid moiety by MALDI.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\rightleftharpoons"
}
] | https://en.wikipedia.org/wiki?curid=10670706 |
1067223 | Formally real field | Field that can be equipped with an ordering
In mathematics, in particular in field theory and real algebra, a formally real field is a field that can be equipped with a (not necessarily unique) ordering that makes it an ordered field.
Alternative definitions.
The definition given above is not a first-order definition, as it requires quantifiers over sets. However, the following criteria can be coded as (infinitely many) first-order sentences in the language of fields and are equivalent to the above definition.
A formally real field "F" is a field that also satisfies one of the following equivalent properties:
It is easy to see that these three properties are equivalent. It is also easy to see that a field that admits an ordering must satisfy these three properties.
A proof that if "F" satisfies these three properties, then "F" admits an ordering uses the notion of and positive cones. Suppose −1 is not a sum of squares; then a Zorn's Lemma argument shows that the prepositive cone of sums of squares can be extended to a positive cone "P" ⊆ "F". One uses this positive cone to define an ordering: "a" ≤ "b" if and only if "b" − "a" belongs to "P".
Real closed fields.
A formally real field with no formally real proper algebraic extension is a real closed field. If "K" is formally real and Ω is an algebraically closed field containing "K", then there is a real closed subfield of Ω containing "K". A real closed field can be ordered in a unique way, and the non-negative elements are exactly the squares.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\forall x_1 (-1 \\ne x_1^2)"
},
{
"math_id": 1,
"text": "\\forall x_1 x_2 (-1 \\ne x_1^2 + x_2^2)"
}
] | https://en.wikipedia.org/wiki?curid=1067223 |
10673638 | Solar-like oscillations | Solar-like oscillations are oscillations in stars that are excited in the same way as those in the Sun, namely by turbulent convection in its outer layers. Stars that show solar-like oscillations are called solar-like oscillators. The oscillations are standing pressure and mixed pressure-gravity modes that are excited over a range in frequency, with the amplitudes roughly following a bell-shaped distribution. Unlike opacity-driven oscillators, all the modes in the frequency range are excited, making the oscillations relatively easy to identify. The surface convection also damps the modes, and each is well-approximated in frequency space by a Lorentzian curve, the width of which corresponds to the lifetime of the mode: the faster it decays, the broader is the Lorentzian. All stars with surface convection zones are expected to show solar-like oscillations, including cool main-sequence stars (up to surface temperatures of about 7000K), subgiants and red giants. Because of the small amplitudes of the oscillations, their study has advanced tremendously thanks to space-based missions (mainly COROT and Kepler).
Solar-like oscillations have been used, among other things, to precisely determine the masses and radii of planet-hosting stars and thus improve the measurements of the planets' masses and radii.
Red giants.
In red giants, mixed modes are observed, which are in part directly sensitive to the core properties of the star. These have been used to distinguish red giants burning helium in their cores from those that are still only burning hydrogen in a shell, to show that the cores of red giants are rotating more slowly than models predict, and to constrain the internal magnetic fields of the cores.
Echelle diagrams.
The peak of the oscillation power roughly corresponds to lower frequencies and radial orders for larger stars. For the Sun, the highest amplitude modes occur around a frequency of 3 mHz with order formula_1, and no mixed modes are observed. For more massive and more evolved stars, the modes are of lower radial order and overall lower frequencies. Mixed modes can be seen in the evolved stars. In principle, such mixed modes may also be present in main-sequence stars but they are at too low frequency to be excited to observable amplitudes. High-order pressure modes of a given angular degree formula_0 are expected to be roughly evenly-spaced in frequency, with a characteristic spacing known as the large separation formula_2. This motivates the echelle diagram, in which the mode frequencies are plotted as a function of the frequency modulo the large separation, and modes of a particular angular degree form roughly vertical ridges.
Scaling relations.
The frequency of maximum oscillation power is generally accepted to scale with the acoustic cut-off frequency, above which waves can propagate in the stellar atmosphere and thus are neither trapped nor able to contribute to standing modes. This gives
formula_3
Similarly, the large frequency separation formula_2 is known to be roughly proportional to the square root of the density:
formula_4
When combined with an estimate of the effective temperature, this allows one to solve directly for the mass and radius of the star, basing the constants of proportionality on the known values for the Sun. These are known as the scaling relations:
formula_5
formula_6
Equivalently, if one knows the star's luminosity, then the temperature can be replaced via the blackbody luminosity relationship formula_7, which gives
formula_8
formula_9
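A minimal sketch of these scaling relations in Python is given below. The solar reference values (a ν_max of about 3090 μHz, a Δν of about 135.1 μHz and an effective temperature of 5777 K) are commonly adopted calibrations rather than values taken from this article, and the input star is invented for illustration.

```python
# Solar reference values commonly adopted in asteroseismology (assumed here):
NU_MAX_SUN = 3090.0   # muHz
DNU_SUN = 135.1       # muHz
TEFF_SUN = 5777.0     # K

def scaling_mass_radius(nu_max, dnu, teff):
    """Mass and radius in solar units from the asteroseismic scaling relations."""
    m = (nu_max / NU_MAX_SUN) ** 3 * (dnu / DNU_SUN) ** -4 * (teff / TEFF_SUN) ** 1.5
    r = (nu_max / NU_MAX_SUN) * (dnu / DNU_SUN) ** -2 * (teff / TEFF_SUN) ** 0.5
    return m, r

# Illustrative inputs for a subgiant-like star (not a real measurement):
m, r = scaling_mass_radius(nu_max=1000.0, dnu=60.0, teff=5900.0)
print(f"M = {m:.2f} M_sun, R = {r:.2f} R_sun")
```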
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\ell"
},
{
"math_id": 1,
"text": "n_\\mathrm{max}\\approx20"
},
{
"math_id": 2,
"text": "\\Delta\\nu"
},
{
"math_id": 3,
"text": "\\nu_\\mathrm{max}\\propto\\frac{g}{\\sqrt{T_\\mathrm{eff}}}"
},
{
"math_id": 4,
"text": "\\Delta\\nu\\propto\\sqrt{\\frac{M}{R^3}}"
},
{
"math_id": 5,
"text": "M\\propto\\frac{\\nu_\\mathrm{max}^3}{\\Delta\\nu^4}T_\\mathrm{eff}^{3/2}"
},
{
"math_id": 6,
"text": "R\\propto\\frac{\\nu_\\mathrm{max}}{\\Delta\\nu^2}T_\\mathrm{eff}^{1/2}"
},
{
"math_id": 7,
"text": "L\\propto R^2T_\\mathrm{eff}^4"
},
{
"math_id": 8,
"text": "M\\propto\\frac{\\nu_\\mathrm{max}^{12/5}}{\\Delta\\nu^{14/5}}L^{3/10}"
},
{
"math_id": 9,
"text": "R\\propto\\frac{\\nu_\\mathrm{max}^{4/5}}{\\Delta\\nu^{8/5}}L^{1/10}"
}
] | https://en.wikipedia.org/wiki?curid=10673638 |
10678861 | Parhelic circle | Type of halo, an optical phenomenon
A parhelic circle is a type of halo, an optical phenomenon appearing as a horizontal white line on the same altitude as the Sun, or occasionally the Moon. If complete, it stretches all around the sky, but more commonly it only appears in sections. If the halo occurs due to light from the Moon rather than the Sun, it is known as a paraselenic circle.
Even fractions of parhelic circles are less common than sun dogs and 22° halos. While parhelic circles are generally white in colour because they are produced by reflection, they can however show a bluish or greenish tone near the 120° parhelia and be reddish or deep violet along the fringes.
Parhelic circles form as beams of sunlight are reflected by vertical or almost vertical hexagonal ice crystals. The reflection can be either external (e.g. without the light passing through the crystal) which contributes to the parhelic circle near the Sun, or internal (one or more reflections inside the crystal) which creates much of the circle away from the Sun. Because an increasing number of reflections makes refraction asymmetric some colour separation occurs away from the Sun. Sun dogs are always aligned to the parhelic circle (but not always to the 22° halo).
The intensity distribution of the parhelic circle is largely dominated by 1-3-2 and 1-3-8-2 rays (cf. the nomenclature by W. Tape, i.e. 1 denotes the top hexagonal face, 2 the bottom face, and 3-8 enumerate the side faces in counter-clockwise fashion. A ray is notated by the sequence in which it encounters the prism faces). The former ray-path is responsible for the blue spot halo which occurs at an azimuth
formula_0,
with formula_1 being the material's index of refraction (not the Bravais index of refraction for inclined rays). However, many more features give a structure to the intensity pattern of the parhelic circle. Among the features of the parhelic circle are the Liljequist parhelia, the 90° parhelia (likely unobservable), the second order 90° parhelia (unobservable), the 22° parhelia and more.
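The azimuth formula above is straightforward to evaluate numerically. In the sketch below, the symbol "e" is taken to be the elevation of the Sun, and both the index of refraction and the sample elevations are illustrative values rather than data from observations.

```python
import math

def blue_spot_azimuth(n, elevation_deg):
    """Evaluate the 1-3-2 azimuth formula above, returning degrees (or None if undefined)."""
    e = math.radians(elevation_deg)
    s = n * math.cos(math.asin(1.0 / n)) / math.cos(e)
    if s > 1.0:
        return None   # the arcsine has no real value at this elevation
    return math.degrees(2.0 * math.asin(s))

for elev in (0.0, 15.0, 30.0):
    print(f"elevation {elev:4.1f} deg -> azimuth {blue_spot_azimuth(1.31, elev):.1f} deg")
```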
Artificial parhelic circles can be realized by experimental means using, for instance, spinning crystals. | [
{
"math_id": 0,
"text": "\\theta_{\\mathfrak{1}\\mathfrak{3}\\mathfrak{2}}=2\\arcsin\\left(n\\cos\\left(\\arcsin\\left(1/n\\right)\\right)/\\cos\\left(e\\right)\\right)"
},
{
"math_id": 1,
"text": "n"
}
] | https://en.wikipedia.org/wiki?curid=10678861 |
1067914 | Lévy's constant | In mathematics Lévy's constant (sometimes known as the Khinchin–Lévy constant) occurs in an expression for the asymptotic behaviour of the denominators of the convergents of continued fractions.
In 1935, the Soviet mathematician Aleksandr Khinchin showed that the denominators "q""n" of the convergents of the continued fraction expansions of almost all real numbers satisfy
formula_0
Soon afterward, in 1936, the French mathematician Paul Lévy found the explicit expression for the constant, namely
formula_1 (sequence in the OEIS)
The term "Lévy's constant" is sometimes used to refer to formula_2 (the logarithm of the above expression), which is approximately equal to 1.1865691104… The value derives from the asymptotic expectation of the logarithm of the ratio of successive denominators, using the Gauss-Kuzmin distribution. In particular, the ratio has the asymptotic density function
formula_3
for formula_4 and zero otherwise. This gives Lévy's constant as
formula_5.
The base-10 logarithm of Lévy's constant, which is approximately 0.51532041…, is half of the reciprocal of the limit in Lochs' theorem.
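A rough numerical check of the theorem can be made by expanding random rationals with large power-of-two denominators and averaging ln("q""n")/"n"; the parameters below (number of bits, depth and sample size) are arbitrary, and the agreement is only approximate because the expectation converges slowly.

```python
import math, random
from fractions import Fraction

def log_qn(x, n):
    """Natural log of the denominator q_n of the n-th continued-fraction convergent of x."""
    q_prev, q = 0, 1                      # q_{-1} and q_0
    for _ in range(n):
        if x == 0:                        # expansion terminated early (rational with few terms)
            break
        x = 1 / x                         # next complete quotient, in exact arithmetic
        a = int(x)
        x -= a
        q_prev, q = q, a * q + q_prev
    return math.log(q)

random.seed(0)
n, bits, samples = 60, 256, 500
total = sum(log_qn(Fraction(random.getrandbits(bits) | 1, 2 ** bits), n) for _ in range(samples))
print("exp(mean of ln(q_n)/n):", round(math.exp(total / (samples * n)), 3))
print("exp(pi^2 / (12 ln 2)) :", round(math.exp(math.pi ** 2 / (12 * math.log(2))), 3))
```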
Proof.
The proof assumes basic properties of continued fractions.
Let formula_6 be the Gauss map.
Lemma.
formula_7where formula_8 is the Fibonacci number.
Proof. Define the function formula_9. The quantity to estimate is then formula_10.
By the mean value theorem, for any formula_11, formula_12. The denominator sequence formula_13 satisfies a recurrence relation, and so it is at least as large as the Fibonacci sequence formula_14.
Ergodic argument.
Since formula_15, and formula_16, we have formula_17. By the lemma,
formula_18
where formula_19 is finite, and is called the reciprocal Fibonacci constant.
By Birkhoff's ergodic theorem, the limit formula_20 converges to formula_21 almost surely, where formula_22 is the density of the Gauss–Kuzmin distribution.
{
"math_id": 0,
"text": "\\lim_{n \\to \\infty}{q_n}^{1/n}= e^{\\beta}"
},
{
"math_id": 1,
"text": "e^{\\beta} = e^{\\pi^2/(12\\ln2)} = 3.275822918721811159787681882\\ldots"
},
{
"math_id": 2,
"text": "\\pi^2/(12\\ln2)"
},
{
"math_id": 3,
"text": "f(z)=\\frac{1}{z(z+1)\\ln(2)}"
},
{
"math_id": 4,
"text": "z \\geq 1"
},
{
"math_id": 5,
"text": "\\beta=\\int_1^\\infty\\frac{\\ln z}{z(z+1)\\ln 2}dz=\\int_0^1\\frac{\\ln z^{-1}}{(z+1)\\ln 2}dz=\\frac{\\pi^2}{12\\ln 2}"
},
{
"math_id": 6,
"text": "T : x \\mapsto 1/x \\mod 1"
},
{
"math_id": 7,
"text": "|\\ln x - \\ln p_n(x)/q_n(x)| \\leq 1/q_n(x) \\leq 1/F_n"
},
{
"math_id": 8,
"text": "F_n"
},
{
"math_id": 9,
"text": "f(t) = \\ln\\frac{p_n + p_{n-1}t}{q_n + q_{n-1}t}"
},
{
"math_id": 10,
"text": "|f(T^n x) - f(0)| "
},
{
"math_id": 11,
"text": "t\\in [0, 1]"
},
{
"math_id": 12,
"text": "\n |f(t)-f(0)| \\leq \\max_{t \\in [0, 1]}|f'(t)| = \\max_{t \\in [0, 1]} \\frac{1}{(p_n + tp_{n-1})(q_n + tq_{n-1})} = \\frac{1}{p_nq_n} \\leq \\frac{1}{q_n}\n "
},
{
"math_id": 13,
"text": "q_{0}, q_1, q_2, \\dots"
},
{
"math_id": 14,
"text": "1, 1, 2, \\dots"
},
{
"math_id": 15,
"text": "p_n(x) = q_{n-1}(Tx)"
},
{
"math_id": 16,
"text": "p_1 = 1"
},
{
"math_id": 17,
"text": "-\\ln q_n = \\ln\\frac{p_n(x)}{q_n(x)} + \\ln\\frac{p_{n-1}(Tx)}{q_{n-1}(Tx)} + \\dots + \\ln\\frac{p_1(T^{n-1}x)}{q_1(T^{n-1} x)}"
},
{
"math_id": 18,
"text": "\n -\\ln q_n = \\ln x + \\ln Tx + \\dots + \\ln T^{n-1}x + \\delta\n "
},
{
"math_id": 19,
"text": "|\\delta| \\leq \\sum_{k=1}^\\infty 1/F_n"
},
{
"math_id": 20,
"text": "\\lim_{n \\to \\infty}\\frac{\\ln q_n}{n}"
},
{
"math_id": 21,
"text": "\n \\int_0^1 ( -\\ln t )\\rho(t) dt = \\frac{\\pi^2}{12\\ln 2} "
},
{
"math_id": 22,
"text": "\\rho(t) = \\frac{1}{(1+t) \\ln 2}"
}
] | https://en.wikipedia.org/wiki?curid=1067914 |
10679980 | 22° halo | Atmospheric optical phenomenon
A 22° halo is an atmospheric optical phenomenon that consists of a halo with an apparent diameter of approximately 22° around the Sun or Moon. Around the Sun, it may also be called a sun halo. Around the Moon, it is also known as a moon ring, storm ring, or winter halo. It forms as sunlight or moonlight is refracted by millions of hexagonal ice crystals suspended in the atmosphere. Its radius, as viewed from Earth, is roughly the length of an outstretched hand at arm's length.
Formation.
Even though it is one of the most common types of halo, the shape and orientation of the ice crystals responsible for the 22° halo are the topic of debate. Hexagonal, randomly oriented columns are usually put forward as the most likely candidate, but this explanation presents problems, such as the fact that the aerodynamic properties of such crystals lead them to be oriented horizontally rather than randomly. Alternative explanations include the involvement of clusters of bullet-shaped ice columns.
As light passes through the 60° apex angle of the hexagonal ice prisms, it is deflected twice, resulting in deviation angles ranging from 22° to 50°. Given the angle of incidence onto the hexagonal ice prism formula_0 and the refractive index inside the prism formula_1, then the angle of deviation formula_2 can be derived from Snell's law:
formula_3
For formula_1 = 1.309, the angle of minimum deviation is almost 22° (21.76°, when formula_0 = 40.88°). More specifically, the angle of minimum deviation is 21.84° on average (formula_1 = 1.31); 21.54° for red light (formula_1 = 1.306) and 22.37° for blue light (formula_1 = 1.317). This wavelength-dependent variation in refraction causes the inner edge of the circle to be reddish while the outer edge is bluish.
The ice crystals in the clouds all deviate the light similarly, but only the ones from the specific ring at 22 degrees contribute to the effect for an observer at a set distance. As no light is refracted at angles smaller than 22°, the sky is darker inside the halo.
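The minimum-deviation figures quoted above can be reproduced by scanning the deviation formula over angles of incidence; in the sketch below the scan range and step size are arbitrary choices that simply bracket the minimum.

```python
import math

def deviation(incidence_deg, n):
    """Deviation (degrees) through the 60-degree prism for a given angle of incidence."""
    i = math.radians(incidence_deg)
    r1 = math.asin(math.sin(i) / n)
    r2 = math.pi / 3 - r1
    return math.degrees(i + math.asin(n * math.sin(r2)) - math.pi / 3)

for n in (1.306, 1.31, 1.317):   # red, mean and blue indices quoted above
    best = min((deviation(i / 100, n), i / 100) for i in range(2800, 6000))
    print(f"n = {n}: minimum deviation {best[0]:.2f} deg at incidence {best[1]:.2f} deg")
```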
Another way to understand the formation of the 22° halo intuitively is to express the deviation directly in terms of the crystal's angle of rotation formula_4:
formula_5
Another phenomenon resulting in a ring around the Sun or Moon—and therefore sometimes confused with the 22° halo—is the corona. Unlike the 22° halo, however, it is produced by water droplets instead of ice crystals and is much smaller and more colorful.
Weather relation.
In folklore, moon rings are said to warn of approaching storms. Like other ice halos, 22° halos appear when the sky is covered by thin cirrus or cirrostratus clouds that often come a few days before a large storm front. However, the same clouds can also occur without any associated weather change, making a 22° halo unreliable as a sign of bad weather.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta_\\text{incidence}"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\theta_\\text{deviation}"
},
{
"math_id": 3,
"text": "\\theta_\\text{deviation} = \\theta_\\text{incidence} + \\sin^{-1}\\left[n\\sin\\left(\\frac{\\pi}{3}-\\sin^{-1}\\frac{\\sin \\theta_\\text{incidence}}{n}\\right)\\right]-\\frac{\\pi}{3}."
},
{
"math_id": 4,
"text": "\\alpha"
},
{
"math_id": 5,
"text": "\\theta = \\sin^{-1}\\left[n\\sin\\left(\\frac{\\pi}{3}-\\sin^{-1}\\frac{\\sin \\left(\\pi/3-\\alpha\\right)}{n}\\right)\\right]-\\alpha."
}
] | https://en.wikipedia.org/wiki?curid=10679980 |
10681476 | Brauer's three main theorems | Three results in the representation theory of finite groups
Brauer's main theorems are three theorems in representation theory of finite groups linking the blocks of a finite group (in characteristic "p") with those of its "p"-local subgroups, that is to say, the normalizers of its nontrivial "p"-subgroups.
The second and third main theorems allow refinements of orthogonality relations for ordinary characters which may be applied in finite group theory. These do not presently admit a proof purely in terms of ordinary characters.
All three main theorems are stated in terms of the Brauer correspondence.
Brauer correspondence.
There are many ways to extend the definition which follows, but this is close to the early treatments
by Brauer. Let "G" be a finite group, "p" be a prime, "F" be a field of characteristic "p".
Let "H" be a subgroup of "G" which contains
formula_0
for some "p"-subgroup "Q" of "G", and is contained in the normalizer
formula_1,
where formula_2 is the centralizer of "Q" in "G".
The Brauer homomorphism (with respect to "H") is a linear map from the center of the group algebra of "G" over "F" to the corresponding algebra for "H". Specifically, it is the restriction to
formula_3 of the (linear) projection from formula_4 to formula_5 whose
kernel is spanned by the elements of "G" outside formula_2. The image of this map is contained in
formula_6, and it transpires that the map is also a ring homomorphism.
Since it is a ring homomorphism, for any block "B" of "FG", the Brauer homomorphism
sends the identity element of "B" either to 0 or to an idempotent element. In the latter case,
the idempotent may be decomposed as a sum of (mutually orthogonal) primitive idempotents of "Z"("FH").
Each of these primitive idempotents is the multiplicative identity of some block of "FH." The block "b" of "FH" is said to be a Brauer correspondent of "B" if its identity element occurs
in this decomposition of the image of the identity of "B" under the Brauer homomorphism.
Brauer's first main theorem.
Brauer's first main theorem (Brauer 1944, 1956, 1970) states that if formula_7 is a finite group and formula_8 is a formula_9-subgroup of formula_7, then there is a bijection between the set of
(characteristic "p") blocks of formula_7 with defect group formula_8 and blocks of the normalizer formula_10 with
defect group "D". This bijection arises because when formula_11, each block of "G"
with defect group "D" has a unique Brauer correspondent block of "H", which also has defect
group "D".
Brauer's second main theorem.
Brauer's second main theorem (Brauer 1944, 1959) gives, for an element "t" whose order is a power of a prime "p", a criterion for a (characteristic "p") block of formula_12 to correspond to a given block of formula_7, via "generalized decomposition numbers". These are the coefficients which occur when the restrictions of ordinary characters of formula_7 (from the given block) to elements of the form "tu", where "u" ranges over elements of order prime to "p" in formula_12, are written as linear combinations of the irreducible Brauer characters of formula_12. The content of the theorem is that it is only necessary to use Brauer characters from blocks of formula_12 which are Brauer correspondents of the chosen block of "G".
Brauer's third main theorem.
Brauer's third main theorem states that when "Q" is a "p"-subgroup of the finite group "G",
and "H" is a subgroup of "G" containing formula_0 and contained in formula_1,
then the principal block of "H" is the only Brauer correspondent of the principal block of "G" (where the blocks referred to are calculated in characteristic "p"). | [
{
"math_id": 0,
"text": "QC_G(Q)"
},
{
"math_id": 1,
"text": "N_G(Q)"
},
{
"math_id": 2,
"text": "C_G(Q)"
},
{
"math_id": 3,
"text": "Z(FG)"
},
{
"math_id": 4,
"text": "FG"
},
{
"math_id": 5,
"text": "FC_G(Q)"
},
{
"math_id": 6,
"text": "Z(FH)"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "D"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "N_G(D)"
},
{
"math_id": 11,
"text": "H = N_G(D)"
},
{
"math_id": 12,
"text": "C_G(t)"
}
] | https://en.wikipedia.org/wiki?curid=10681476 |
1068230 | Market power | Ability of a firm to raise the market price of a commodity over marginal cost
In economics, market power refers to the ability of a firm to influence the price at which it sells a product or service by manipulating either the supply or demand of the product or service to increase economic profit. In other words, market power occurs if a firm does not face a perfectly elastic demand curve and can profitably set its price (P) above marginal cost (MC). This indicates that the magnitude of market power is associated with the gap between P and MC at a firm's profit maximising level of output. The size of the gap, which encapsulates the firm's level of market dominance, is determined by the shape of the residual demand curve: a steeper residual demand curve indicates higher earnings and greater dominance in the market. This contrasts with perfectly competitive markets, where market participants have no market power, P = MC, and firms earn zero economic profit. Market participants in perfectly competitive markets are consequently referred to as 'price takers', whereas market participants that exhibit market power are referred to as 'price makers' or 'price setters'.
The market power of any individual firm is controlled by multiple factors, including, but not limited to, its size, the structure of the market it is involved in, and the barriers to entry for the particular market. A firm with market power has the ability to individually affect either the total quantity or price in the market. This said, market power has been seen to exert more upward pressure on prices due to effects relating to Nash equilibria and profitable deviations that can be made by raising prices. Price makers face a downward-sloping demand curve and as a result, price increases lead to a lower quantity demanded. The decrease in supply creates an economic deadweight loss (DWL) and a decline in consumer surplus. This is viewed as socially undesirable and has implications for welfare and resource allocation, as larger firms with high markups negatively affect labour markets by providing lower wages. Perfectly competitive markets do not exhibit such issues as firms set prices that reflect costs, which is to the benefit of the customer. As a result, many countries have antitrust or other legislation intended to limit the ability of firms to accrue market power. Such legislation often regulates mergers and sometimes introduces a judicial power to compel divestiture.
Market power provides firms with the ability to engage in unilateral anti-competitive behavior. As a result, legislation recognises that firms with market power can, in some circumstances, damage the competitive process. In particular, firms with market power are accused of limit pricing, predatory pricing, holding excess capacity and strategic bundling. A firm usually has market power by having a high market share although this alone is not sufficient to establish the possession of significant market power. This is because highly concentrated markets may be contestable if there are no barriers to entry or exit. Invariably, this limits the incumbent firm's ability to raise its price above competitive levels.
If no individual participant in the market has significant market power, anti-competitive conduct can only take place through collusion, or the exercise of a group of participants' collective market power. An example was seen in 2007, when British Airways was found to have colluded with Virgin Atlantic between 2004 and 2006, increasing their surcharges per ticket from £5 to £60.
Regulators are able to assess the level of market power and dominance a firm has and measure competition through the use of several tools and indicators. Although market power is extremely difficult to measure, through the use of widely used analytical techniques such as concentration ratios, the Herfindahl-Hirschman index and the Lerner index, regulators are able to oversee and attempt to restore market competitiveness.
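As an illustration of these indicators, the sketch below computes a four-firm concentration ratio, the Herfindahl-Hirschman index and a Lerner index from made-up market shares, prices and costs; all of the figures are hypothetical.

```python
# Hypothetical market shares (percent) of the firms in an industry.
shares = [32, 25, 18, 10, 8, 4, 3]

cr4 = sum(sorted(shares, reverse=True)[:4])       # four-firm concentration ratio
hhi = sum(s ** 2 for s in shares)                 # Herfindahl-Hirschman index (scale 0-10000)
print(f"CR4 = {cr4}%  HHI = {hhi}")

# Lerner index for a single firm: (P - MC) / P, which is zero under perfect competition.
price, marginal_cost = 25.0, 15.0                 # illustrative figures
print(f"Lerner index = {(price - marginal_cost) / price:.2f}")
```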
Market structure.
In economics, market structure can profoundly affect the behavior and financial performance of firms. Market structure depicts how different industries are characterized and differentiated based upon the types of goods the firms sell (homogeneous or heterogeneous) and the nature of competition within the industry. The degree of market power firms assert in different markets is relative to the market structure in which they operate. There are four main forms of market structure that are observed: perfect competition, monopolistic competition, oligopoly, and monopoly. Perfect competition and monopoly represent the two opposite extremes of market structure, while monopolistic competition and oligopoly exist in between these two extremes.
Perfect competition power.
"Perfect Competition" refers to a market structure that is devoid of any barriers or interference and describes those marketplaces where neither corporations nor consumers are powerful enough to affect pricing. In terms of economics, it is one of the many conventional market forms and the optimal condition of market competition. The concept of perfect competition represents a theoretical market structure where the market reaches an equilibrium that is Pareto optimal. This occurs when the quantity supplied by sellers in the market equals the quantity demanded by buyers in the market at the current price. Firms competing in a perfectly competitive market faces a market price that is equal to their marginal cost, therefore, no economic profits are present. The following criteria need to be satisfied in a perfectly competitive market:
As all firms in the market are price takers, they essentially hold zero market power and must accept the price given by the market. A perfectly competitive market is logically impossible to achieve in a real-world scenario, as it embodies inherent contradictions, and it is therefore considered an idealised framework by economists.
Monopolistic competition power.
Monopolistic competition can be described as the "middle ground" between perfect competition and a monopoly as it shares elements present in both market structures that are on different ends of the market structure spectrum. Monopolistic competition is a type of market structure defined by many producers that are competing against each other by selling similar goods which are differentiated, and thus are not perfect substitutes. In the short term, firms are able to obtain economic profits as a result of differentiated goods providing sellers with some degree of market power; however, profits approach zero as competition in the industry intensifies. The main characteristics of monopolistic competition include:
Firms within this market structure are not price takers and compete based on product price, quality and through marketing efforts, setting individual prices for the unique differentiated products. Examples of industries with monopolistic competition include restaurants, hairdressers and clothing.
Monopoly power.
The word monopoly is used in various instances, referring to a single seller of a product, a producer with an overwhelming level of market share, or a large firm. All of these treatments have one unifying factor, which is the ability to influence the market price by altering the supply of the good or service through its own production decisions. The most discussed form of market power is that of a monopoly, but other forms such as monopsony and more moderate versions of these extremes exist. A monopoly is considered a 'market failure' and consists of one firm that produces a unique product or service without close substitutes. Whilst pure monopolies are rare, monopoly power is far more common and can be seen in many industries even with more than one supplier in the market. Firms with monopoly power can charge a higher price for products (higher markup) as demand is relatively inelastic. They also see a falling rate of labour share as firms divest from expensive inputs such as labour. Often, firms with monopoly power exist in industries with high barriers to entry, which include, but are not limited to:
A well-known example of monopolistic market power is Microsoft's market share in PC operating systems. The "United States v. Microsoft" case dealt with an allegation that Microsoft illegally exercised its market power by bundling its web browser with its operating system. In this respect, the notion of dominance and dominant position in EU Antitrust Law is a strictly related aspect.
Oligopoly power.
Another form of market power is that of an oligopoly or oligopsony. Within this market structure, the market is highly concentrated and several firms control a significant share of market sales. The emergence of oligopolistic market forms is mainly attributed to two sources: market monopoly acquired by enterprises through their competitive advantages, and administrative monopoly arising from government regulation, such as when the government grants monopoly power to an enterprise in an industry through laws and regulations while at the same time imposing certain controls on it to improve efficiency. The main characteristics of an oligopoly are:
It is salient to note that only a few firms make up the market share. Hence, their market power is large as a collective and each firm has little or no market power independently. For firms trying to enter these industries, unless they can start with a large production scale and capture a significant market share, the high average costs will make it impossible for them to compete with the existing firms. Generally, when a firm operating in an oligopolistic market adjusts prices, other firms in the industry will be directly impacted.
The graph below depicts the kinked demand curve hypothesis which was proposed by Paul Sweezy who was an American economist. It is important to note that this graph is a simplistic example of a kinked demand curve.
Oligopolistic firms are believed to operate within the confines of the kinked demand function. This means that when firms set prices above the prevailing price level (P*), prices are relatively elastic because individuals are likely to switch to a competitor's product as a substitute. Prices below P* are believed to be relatively inelastic as competitive firms are likely to mimic the change in prices, meaning less gains are experienced by the firm.
An oligopoly may engage in collusion, either tacit or overt, to exercise market power and manipulate prices to control demand and revenue for a collection of firms. A group of firms that explicitly agree to affect market price or output is called a cartel, with the Organization of the Petroleum Exporting Countries (OPEC) being one of the best-known examples of an international cartel.
Sources of market power.
By remaining consistent with the strict definition of market power as any firm with a positive Lerner index, the sources of market power are derived from the distinctiveness of the good and/or the seller. For a monopolist, distinctiveness is a necessary condition that needs to be satisfied, but this is just the starting point. Without barriers to entry, the above-normal profits experienced by monopolists would not persist, as other sellers of homogeneous or similar goods would continue to enter the industry until above-normal profits are eliminated and the industry experiences perfect competition.
There are several sources of market power including:
Measurement of market power.
Measuring market power is inherently complex because the most widely used measures are sensitive to the definition of a market and the range of analysis.
The magnitude of a firm's market power is shown by its ability to deviate from an elastic demand curve and charge a higher price (P) above its marginal cost (C), commonly referred to as a firm's mark-up or margin. The higher a firm's mark-up, the larger the magnitude of power. This said, markups are complicated to measure as they are reliant on a firm's marginal costs; as a result, concentration ratios are the more common measures as they require only publicly accessible revenue data.
Concentration ratios.
Market concentration, also referred to as industry concentration, refers to the extent to which the market shares of the largest firms in the market account for a significant portion of economic activity, as quantified by various metrics such as sales, employment, and active users. Recent macroeconomic market power literature indicates that concentration ratios are the most frequently used measure of market power. Measures of concentration summarise the share of market or industry activity accounted for by large firms. An advantage of using concentration as an empirical tool to quantify market power is that it requires only firms' revenue data; the corresponding disadvantage is that costs and profits are not taken into account.
"N"-firm concentration ratio.
The "N"-firm concentration ratio gives the combined market share of the largest "N" firms in the market. For example, a 4-firm concentration ratio measures the total market share of the four largest firms in an industry. In order to calculate the "N"-firm concentration ratio, one usually uses sales revenue to calculate market share, however, concentration ratios based on other measures such as production capacity may also be used. For a monopoly, the 4-firm concentration ratio is 100 per cent whilst for perfect competition, the ratio is zero. Moreover, studies indicate that a concentration ratio of between 40 and 70 percent suggests that the firm operates as an oligopoly. These figures are viable but should be used as a 'rule of thumb' as it is important to consider other market factors when analysing concentration ratios.
An advantage of concentration ratios as an empirical tool for studying market power is that it requires only data on revenues and is thus easy to compute. The corresponding disadvantage is that concentration is about relative revenue and includes no information about costs or profits.
Herfindahl-Hirschman index.
The Herfindahl-Hirschman index (HHI) is another measure of concentration and is the sum of the squared market shares of all firms in a market. The HHI is a more widely used indicator in economics and government regulation. The index reflects not only the market share of large firms within the market, but also the market structure outside of large firms, and therefore, more accurately reflects the degree of influence of large firms on the market. For example, in a market with two firms, each with 50% market share, the HHI is formula_0 = 0.50² + 0.50² = 0.50. The HHI for a monopoly is 1 whilst for perfect competition, the HHI is zero. Unlike the "N"-firm concentration ratio, large firms are given more weight in the HHI and as a result, the HHI conveys more information. However, the HHI has its own limitations, as it is sensitive to the definition of a market; this means it cannot be used to compare different industries, or to do analysis over time as the industry changes.
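As a minimal illustration of the two concentration measures described above, the following Python sketch (the function names and market shares are hypothetical, with shares expressed as fractions that sum to one) computes an "N"-firm concentration ratio and the HHI:

def concentration_ratio(shares, n=4):
    # combined market share of the n largest firms
    return sum(sorted(shares, reverse=True)[:n])

def hhi(shares):
    # Herfindahl-Hirschman index: sum of squared market shares
    return sum(s ** 2 for s in shares)

shares = [0.30, 0.25, 0.20, 0.15, 0.10]   # hypothetical market
print(concentration_ratio(shares))        # 0.90
print(hhi(shares))                        # 0.225
print(hhi([0.5, 0.5]))                    # 0.50, the two-firm example above; a monopoly gives 1.0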
Lerner index.
The Lerner index is a widely accepted and applied method of estimating market power in a monopoly. It compares a firm's price of output with its associated marginal cost, where marginal cost pricing is the "socially optimal level" achieved in a market with perfect competition. Lerner (1934) viewed market power as the monopoly producer's ability to raise prices above its marginal cost. This notion can be expressed by using the formula:
formula_1
where P represents the price of the good set by the firm and MC represents the firm's marginal cost. The formula focuses on the nature of monopoly and emphasises the welfare-economic implications of the Pareto optimality principle. Although Lerner is usually credited with the price/cost margin index, the generalized version was fully derived prior to WWII by the Italian neoclassical economist Luigi Amoroso.
Connection with Competition Law.
Market power within competition law can be used to determine whether or not a firm has unfairly manipulated the market in their favour, or to the detriment of entrants. The Sherman Antitrust Act of 1890 under section 2 restricts firms from engaging in anticompetitive conduct by utilising an individual firm's power to manipulate the market or partake in anticompetitive acts.
A firm can be found in breach of the act if they have leveraged their market power to unfairly gain further market power in a manner that is detrimental to the market and consumers. The measurement of market power is key in determining a breach of the act and can be determined from multiple measurements as discussed in measurements of market power above.
In Australia, consumer law allows for firms to have significant market power and utilise it, as long as it is determined to not have “the purpose, effect or likely effect of substantially lessening competition”.
Elasticity of demand.
The degree to which a firm can raise its price above marginal cost depends on the shape of the demand curve at a firm's profit maximising level of output. Consequently, the relationship between market power and the price elasticity of demand (PED) can be summarised by the equation:
formula_2
The ratio formula_3 is always greater than "1" and the higher the formula_3 ratio, the more market power the firm possesses. As PED increases in magnitude, the formula_3 ratio approaches "1" and market power approaches zero. The equation is derived from the monopolist pricing rule:
formula_4
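To make the relationship above concrete, a short Python sketch (the elasticity values are purely illustrative) evaluates the Lerner index and the P/MC markup implied by a given price elasticity of demand:

def lerner_index(ped):
    # monopolist pricing rule: (P - MC)/P = -1/PED, valid for PED < -1
    return -1.0 / ped

def price_over_mc(ped):
    # equivalent markup ratio: P/MC = PED / (1 + PED)
    return ped / (1.0 + ped)

for ped in (-1.5, -2.0, -5.0):
    print(ped, lerner_index(ped), price_over_mc(ped))
# as demand becomes more elastic (larger |PED|), P/MC approaches 1 and market power vanishes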
Nobel Memorial Prize.
Jean Tirole was awarded the 2014 Nobel Memorial Prize in Economic Sciences for his analysis of market power and economic regulation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\sum(S_i)^2"
},
{
"math_id": 1,
"text": "L = (P-MC)/P"
},
{
"math_id": 2,
"text": "\\frac{P}{MC}=\\frac{PED}{1+PED}."
},
{
"math_id": 3,
"text": "P/MC"
},
{
"math_id": 4,
"text": "\\frac{P-MC}{P}=-\\frac 1 {PED}."
}
] | https://en.wikipedia.org/wiki?curid=1068230 |
1068378 | Multimodal distribution | Probability distribution with more than one mode
In statistics, a multimodal distribution is a probability distribution with more than one mode (i.e., more than one local peak of the distribution). These appear as distinct peaks (local maxima) in the probability density function, as shown in Figures 1 and 2. Categorical, continuous, and discrete data can all form multimodal distributions. Among univariate analyses, multimodal distributions are commonly bimodal.
Terminology.
When the two modes are unequal the larger mode is known as the major mode and the other as the minor mode. The least frequent value between the modes is known as the antimode. The difference between the major and minor modes is known as the amplitude. In time series the major mode is called the acrophase and the antimode the batiphase.
Galtung's classification.
Galtung introduced a classification system (AJUS) for distributions:
This classification has since been modified slightly:
Under this classification bimodal distributions are classified as type S or U.
Examples.
Bimodal distributions occur both in mathematics and in the natural sciences.
Probability distributions.
Important bimodal distributions include the arcsine distribution and the beta distribution (iff both parameters "a" and "b" are less than 1). Others include the U-quadratic distribution.
The ratio of two normal distributions is also bimodally distributed. Let
formula_0
where "a" and "b" are constant and "x" and "y" are distributed as normal variables with a mean of 0 and a standard deviation of 1. "R" has a known density that can be expressed as a confluent hypergeometric function.
The distribution of the reciprocal of a "t" distributed random variable is bimodal when the degrees of freedom are more than one. Similarly the reciprocal of a normally distributed variable is also bimodally distributed.
A "t" statistic generated from a data set drawn from a Cauchy distribution is bimodal.
Occurrences in nature.
Examples of variables with bimodal distributions include the time between eruptions of certain geysers, the color of galaxies, the size of worker weaver ants, the age of incidence of Hodgkin's lymphoma, the speed of inactivation of the drug isoniazid in US adults, the absolute magnitude of novae, and the circadian activity patterns of those crepuscular animals that are active both in morning and evening twilight. In fishery science multimodal length distributions reflect the different year classes and can thus be used for age-distribution and growth estimates of the fish population. Sediments are usually distributed in a bimodal fashion. When sampling mining galleries that cross both the host rock and the mineralized veins, the distribution of geochemical variables would be bimodal. Bimodal distributions are also seen in traffic analysis, where traffic peaks during the AM rush hour and then again during the PM rush hour. This phenomenon is also seen in daily water distribution, as water demand, in the form of showers, cooking, and toilet use, generally peaks in the morning and evening periods.
Econometrics.
In econometric models, the parameters may be bimodally distributed.
Origins.
Mathematical.
A bimodal distribution commonly arises as a mixture of two different unimodal distributions (i.e. distributions having only one mode). In other words, the bimodally distributed random variable X is defined as formula_1 with probability formula_2 or formula_3 with probability formula_4 where "Y" and "Z" are unimodal random variables and formula_5 is a mixture coefficient.
Mixtures with two distinct components need not be bimodal and two component mixtures of unimodal component densities can have more than two modes. There is no immediate connection between the number of components in a mixture and the number of modes of the resulting density.
Particular distributions.
Bimodal distributions, despite their frequent occurrence in data sets, have only rarely been studied. This may be because of the difficulties in estimating their parameters either with frequentist or Bayesian methods. Among those that have been studied are
Bimodality also naturally arises in the cusp catastrophe distribution.
Biology.
In biology five factors are known to contribute to bimodal distributions of population sizes:
The bimodal distribution of sizes of weaver ant workers arises due to existence of two distinct classes of workers, namely major workers and minor workers.
The distribution of fitness effects of mutations for both whole genomes and individual genes is also frequently found to be bimodal with most mutations being either neutral or lethal with relatively few having intermediate effect.
General properties.
A mixture of two unimodal distributions with differing means is not necessarily bimodal. The combined distribution of heights of men and women is sometimes used as an example of a bimodal distribution, but in fact the difference in mean heights of men and women is too small relative to their standard deviations to produce bimodality when the two distribution curves are combined.
Bimodal distributions have the peculiar property that – unlike the unimodal distributions – the mean may be a more robust sample estimator than the median. This is clearly the case when the distribution is U-shaped like the arcsine distribution. It may not be true when the distribution has one or more long tails.
Moments of mixtures.
Let
formula_6
where "g""i" is a probability distribution and "p" is the mixing parameter.
The moments of "f"("x") are
formula_7
formula_8
formula_9
formula_10
where
formula_11
formula_12
formula_13
and "S""i" and "K""i" are the skewness and kurtosis of the "i"th distribution.
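The moment formulas above translate directly into code. The sketch below (a hypothetical helper, not taken from any particular package) returns the mean and the second to fourth central moments of a two-component mixture; for normal components the skewness is 0 and the kurtosis is 3:

def mixture_central_moments(p, mu1, s1, S1, K1, mu2, s2, S2, K2):
    # f(x) = p*g1(x) + (1 - p)*g2(x); S_i, K_i are the component skewness and kurtosis
    mu = p * mu1 + (1 - p) * mu2
    d1, d2 = mu1 - mu, mu2 - mu
    v2 = p * (s1**2 + d1**2) + (1 - p) * (s2**2 + d2**2)
    v3 = (p * (S1*s1**3 + 3*d1*s1**2 + d1**3)
          + (1 - p) * (S2*s2**3 + 3*d2*s2**2 + d2**3))
    v4 = (p * (K1*s1**4 + 4*S1*d1*s1**3 + 6*d1**2*s1**2 + d1**4)
          + (1 - p) * (K2*s2**4 + 4*S2*d2*s2**3 + 6*d2**2*s2**2 + d2**4))
    return mu, v2, v3, v4

# equal-weight mixture of N(0, 1) and N(3, 1): mean 1.5, variance 3.25, third central moment 0
print(mixture_central_moments(0.5, 0.0, 1.0, 0.0, 3.0, 3.0, 1.0, 0.0, 3.0))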
Mixture of two normal distributions.
It is not uncommon to encounter situations where an investigator believes that the data comes from a mixture of two normal distributions. Because of this, this mixture has been studied in some detail.
A mixture of two normal distributions has five parameters to estimate: the two means, the two variances and the mixing parameter. A mixture of two normal distributions with equal standard deviations is bimodal only if their means differ by at least twice the common standard deviation. Estimation of the parameters is simplified if the variances can be assumed to be equal (the homoscedastic case).
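The two-standard-deviation condition can be checked numerically. The sketch below (assuming scipy is available; the chosen means are only examples) counts the modes of an equal-weight, equal-variance mixture density on a fine grid:

import numpy as np
from scipy.stats import norm

def n_modes(mu1, mu2, sigma, p=0.5):
    # count local maxima of the mixture density on a fine grid
    x = np.linspace(min(mu1, mu2) - 5*sigma, max(mu1, mu2) + 5*sigma, 20001)
    f = p * norm.pdf(x, mu1, sigma) + (1 - p) * norm.pdf(x, mu2, sigma)
    return int(((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])).sum())

print(n_modes(0.0, 1.5, 1.0))   # separation 1.5 sigma -> 1 mode (unimodal)
print(n_modes(0.0, 3.0, 1.0))   # separation 3 sigma   -> 2 modes (bimodal)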
If the means of the two normal distributions are equal, then the combined distribution is unimodal. Conditions for unimodality of the combined distribution were derived by Eisenberger. Necessary and sufficient conditions for a mixture of normal distributions to be bimodal have been identified by Ray and Lindsay.
A mixture of two approximately equal mass normal distributions has a negative kurtosis since the two modes on either side of the center of mass effectively reduce the tails of the distribution.
A mixture of two normal distributions with highly unequal mass has a positive kurtosis since the smaller distribution lengthens the tail of the more dominant normal distribution.
Mixtures of other distributions require additional parameters to be estimated.
Summary statistics.
Bimodal distributions are a commonly used example of how summary statistics such as the mean, median, and standard deviation can be deceptive when used on an arbitrary distribution. For example, in the distribution in Figure 1, the mean and median would be about zero, even though zero is not a typical value. The standard deviation is also larger than the standard deviation of each of the component normal distributions.
Although several have been suggested, there is no presently generally agreed summary statistic (or set of statistics) to quantify the parameters of a general bimodal distribution. For a mixture of two normal distributions the means and standard deviations along with the mixing parameter (the weight for the combination) are usually used – a total of five parameters.
Ashman's D.
A statistic that may be useful is Ashman's D:
formula_23
where "μ"1, "μ"2 are the means and "σ"1, "σ"2 are the standard deviations.
For a mixture of two normal distributions "D" > 2 is required for a clean separation of the distributions.
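A minimal Python sketch of this statistic (the parameter values are illustrative only):

import math

def ashman_d(mu1, sigma1, mu2, sigma2):
    # Ashman's D for a two-component mixture
    return abs(mu1 - mu2) / math.sqrt(2 * (sigma1**2 + sigma2**2))

print(ashman_d(0.0, 1.0, 4.0, 1.0))   # 2.0, the borderline for a "clean" separation
print(ashman_d(0.0, 1.0, 8.0, 1.0))   # 4.0, well separated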
van der Eijk's A.
This measure is a weighted average of the degree of agreement in the frequency distribution. "A" ranges from -1 (perfect bimodality) to +1 (perfect unimodality). It is defined as
formula_24
where "U" is the unimodality of the distribution, "S" the number of categories that have nonzero frequencies and "K" the total number of categories.
The value of U is 1 if the distribution has any of the three following characteristics:
With distributions other than these the data must be divided into 'layers'. Within a layer the responses are either equal or zero. The categories do not have to be contiguous. A value for "A" for each layer ("A"i) is calculated and a weighted average for the distribution is determined. The weights ("w"i) for each layer are the number of responses in that layer. In symbols
formula_25
A uniform distribution has "A" = 0: when all the responses fall into one category "A" = +1.
One theoretical problem with this index is that it assumes that the intervals are equally spaced. This may limit its applicability.
Bimodal separation.
This index assumes that the distribution is a mixture of two normal distributions with means ("μ"1 and "μ"2) and standard deviations ("σ"1 and "σ"2):
formula_26
Bimodality coefficient.
Sarle's bimodality coefficient "b" is
formula_27
where "γ" is the skewness and "κ" is the kurtosis. The kurtosis is here defined to be the standardised fourth moment around the mean. The value of "b" lies between 0 and 1. The logic behind this coefficient is that a bimodal distribution with light tails will have very low kurtosis, an asymmetric character, or both – all of which increase this coefficient.
The formula for a finite sample is
formula_28
where "n" is the number of items in the sample, "g" is the sample skewness and "k" is the sample excess kurtosis.
The value of "b" for the uniform distribution is 5/9. This is also its value for the exponential distribution. Values greater than 5/9 may indicate a bimodal or multimodal distribution, though corresponding values can also result for heavily skewed unimodal distributions. The maximum value (1.0) is reached only by a Bernoulli distribution with only two distinct values or the sum of two different Dirac delta functions (a bi-delta distribution).
The distribution of this statistic is unknown. It is related to a statistic proposed earlier by Pearson – the difference between the kurtosis and the square of the skewness ("vide infra").
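The finite-sample coefficient can be computed directly from the sample skewness and excess kurtosis, as in the following sketch (assuming scipy is available; the simulated samples are purely illustrative):

import numpy as np
from scipy.stats import skew, kurtosis

def sarle_b(x):
    # finite-sample bimodality coefficient; kurtosis() returns excess kurtosis by default
    n = len(x)
    g, k = skew(x), kurtosis(x)
    return (g**2 + 1) / (k + 3 * (n - 1)**2 / ((n - 2) * (n - 3)))

rng = np.random.default_rng(0)
print(sarle_b(rng.uniform(size=100_000)))                   # close to 5/9
print(sarle_b(np.concatenate([rng.normal(-3, 1, 50_000),
                              rng.normal(3, 1, 50_000)])))  # well above 5/9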
Bimodality amplitude.
This is defined as
formula_29
where "A"1 is the amplitude of the smaller peak and "A"an is the amplitude of the antimode.
"A"B is always < 1. Larger values indicate more distinct peaks.
Bimodal ratio.
This is the ratio of the left and right peaks. Mathematically
formula_30
where "A"l and "A"r are the amplitudes of the left and right peaks respectively.
Bimodality parameter.
This parameter ("B") is due to Wilcock.
formula_31
where "A"l and "A"r are the amplitudes of the left and right peaks respectively and "P""i" is the logarithm taken to the base 2 of the proportion of the distribution in the ith interval. The maximal value of the "ΣP" is 1 but the value of "B" may be greater than this.
To use this index, the log of the values is taken. The data is then divided into intervals of width Φ, whose value is log 2. The widths of the peaks are taken to be four times 1/4Φ, centered on their maximum values.
Bimodality indices.
The bimodality index proposed by Wang "et al" assumes that the distribution is a sum of two normal distributions with equal variances but differing means. It is defined as follows:
formula_32
where "μ"1, "μ"2 are the means and "σ" is the common standard deviation.
formula_33
where "p" is the mixing parameter.
A different bimodality index has been proposed by Sturrock.
This index ("B") is defined as
formula_34
When "m" = 2 and "γ" is uniformly distributed, "B" is exponentially distributed.
This statistic is a form of periodogram. It suffers from the usual problems of estimation and spectral leakage common to this form of statistic.
Another bimodality index has been proposed by de Michele and Accatino. Their index ("B") is
formula_35
where "μ" is the arithmetic mean of the sample and
formula_36
where "m""i" is the number of data points in the "i"th bin, "x""i" is the center of the "i"th bin and "L" is the number of bins.
The authors suggested a cut off value of 0.1 for "B" to distinguish between a bimodal ("B" > 0.1) and a unimodal ("B" < 0.1) distribution. No statistical justification was offered for this value.
A further index ("B") has been proposed by Sambrook Smith "et al"
formula_37
where "p"1 and "p"2 are the proportion contained in the primary (that with the greater amplitude) and secondary (that with the lesser amplitude) mode and "φ"1 and "φ"2 are the "φ"-sizes of the primary and secondary mode. The "φ"-size is defined as minus one times the log of the data size taken to the base 2. This transformation is commonly used in the study of sediments.
The authors recommended a cut off value of 1.5 with B being greater than 1.5 for a bimodal distribution and less than 1.5 for a unimodal distribution. No statistical justification for this value was given.
Otsu's method for finding a threshold for separation between two modes relies on minimizing the quantity
formula_38
where "n""i" is the number of data points in the "i"th subpopulation, "σ""i"2 is the variance of the "i"th subpopulation, "m" is the total size of the sample and "σ"2 is the sample variance. Some researchers (particularly in the field of digital image processing) have applied this quantity more broadly as an index for detecting bimodality, with a small value indicating a more bimodal distribution.
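A rough Python sketch of this criterion (the simulated sample and the grid of candidate thresholds are arbitrary choices made for illustration, not part of Otsu's original image-processing formulation):

import numpy as np

def within_class_ratio(x, t):
    # (n1*var1 + n2*var2) / (m * total variance) for a candidate threshold t
    a, b = x[x <= t], x[x > t]
    if len(a) < 2 or len(b) < 2:
        return np.inf
    return (len(a) * a.var() + len(b) * b.var()) / (len(x) * x.var())

rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(0, 1, 1000), rng.normal(5, 1, 1000)])
ts = np.linspace(x.min(), x.max(), 512)
print(ts[np.argmin([within_class_ratio(x, t) for t in ts])])   # near the antimode, around 2.5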
Statistical tests.
A number of tests are available to determine if a data set is distributed in a bimodal (or multimodal) fashion.
Graphical methods.
In the study of sediments, particle size is frequently bimodal. Empirically, it has been found useful to plot the frequency against the log( size ) of the particles. This usually gives a clear separation of the particles into a bimodal distribution. In geological applications the logarithm is normally taken to the base 2. The log transformed values are referred to as phi (Φ) units. This system is known as the Krumbein (or phi) scale.
An alternative method is to plot the log of the particle size against the cumulative frequency. This graph will usually consist of two reasonably straight lines with a connecting line corresponding to the antimode.
Approximate values for several statistics can be derived from the graphic plots.
formula_39
formula_40
formula_41
formula_42
where "Mean" is the mean, "StdDev" is the standard deviation, "Skew" is the skewness, "Kurt" is the kurtosis and "φ"x is the value of the variate "φ" at the "x"th percentage of the distribution.
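These graphic estimates are simple functions of percentiles of the φ values, as in the sketch below (the simulated grain-size sample is hypothetical):

import numpy as np

def graphic_statistics(phi):
    p5, p16, p25, p50, p75, p84, p95 = np.percentile(phi, [5, 16, 25, 50, 75, 84, 95])
    mean = (p16 + p50 + p84) / 3
    std = (p84 - p16) / 4 + (p95 - p5) / 6.6
    skew = ((p84 + p16 - 2 * p50) / (2 * (p84 - p16))
            + (p95 + p5 - 2 * p50) / (2 * (p95 - p5)))
    kurt = (p95 - p5) / (2.44 * (p75 - p25))
    return mean, std, skew, kurt

rng = np.random.default_rng(2)
phi = np.concatenate([rng.normal(1.0, 0.5, 400), rng.normal(4.0, 0.5, 600)])  # already in phi units
print(graphic_statistics(phi))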
Unimodal vs. bimodal distribution.
Pearson in 1894 was the first to devise a procedure to test whether a distribution could be resolved into two normal distributions. This method required the solution of a ninth order polynomial. In a subsequent paper Pearson reported that for any distribution skewness² + 1 < kurtosis. Later Pearson showed that
formula_43
where "b"2 is the kurtosis and "b"1 is the square of the skewness. Equality holds only for the two point Bernoulli distribution or the sum of two different Dirac delta functions. These are the most extreme cases of bimodality possible. The kurtosis in both these cases is 1. Since they are both symmetrical their skewness is 0 and the difference is 1.
Baker proposed a transformation to convert a bimodal to a unimodal distribution.
Several tests of unimodality versus bimodality have been proposed: Haldane suggested one based on second central differences. Larkin later introduced a test based on the F test; Benett created one based on Fisher's G test. Tokeshi has proposed a fourth test. A test based on a likelihood ratio has been proposed by Holzmann and Vollmer.
A method based on the score and Wald tests has been proposed. This method can distinguish between unimodal and bimodal distributions when the underlying distributions are known.
Antimode tests.
Statistical tests for the antimode are known.
Otsu's method is commonly employed in computer graphics to determine the optimal separation between two distributions.
General tests.
To test if a distribution is other than unimodal, several additional tests have been devised: the bandwidth test, the dip test, the excess mass test, the MAP test, the mode existence test, the runt test, the span test, and the saddle test.
An implementation of the dip test is available for the R programming language. The p-values for the dip statistic values range between 0 and 1. P-values less than 0.05 indicate significant multimodality and p-values greater than 0.05 but less than 0.10 suggest multimodality with marginal significance.
Silverman's test.
Silverman introduced a bootstrap method for the number of modes. The test uses a fixed bandwidth which reduces the power of the test and its interpretability. Under-smoothed densities may have an excessive number of modes, whose count during bootstrapping is unstable.
Bajgier-Aggarwal test.
Bajgier and Aggarwal have proposed a test based on the kurtosis of the distribution.
Special cases.
Additional tests are available for a number of special cases:
A study of a mixture density of two normal distributions data found that separation into the two normal distributions was difficult unless the means were separated by 4–6 standard deviations.
In astronomy the Kernel Mean Matching algorithm is used to decide if a data set belongs to a single normal distribution or to a mixture of two normal distributions.
This distribution is bimodal for certain values of its parameters. A test for these values has been described.
Parameter estimation and fitting curves.
Assuming that the distribution is known to be bimodal or has been shown to be bimodal by one or more of the tests above, it is frequently desirable to fit a curve to the data. This may be difficult.
Bayesian methods may be useful in difficult cases.
Software.
A package for R is available for testing for bimodality. This package assumes that the data are distributed as a sum of two normal distributions. If this assumption is not correct the results may not be reliable. It also includes functions for fitting a sum of two normal distributions to the data.
Assuming that the distribution is a mixture of two normal distributions then the expectation-maximization algorithm may be used to determine the parameters. Several programmes are available for this including Cluster, and the R package nor1mix.
The mixtools package available for R can test for and estimate the parameters of a number of different distributions. A package for a mixture of two right-tailed gamma distributions is available.
Several other packages for R are available to fit mixture models; these include flexmix, mcclust, agrmt, and mixdist.
The statistical programming language SAS can also fit a variety of mixed distributions with the PROC FREQ procedure.
In Python, the package Scikit-learn contains a tool for mixture modeling.
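For example, the mixture module of Scikit-learn can fit a two-component normal mixture by expectation-maximization; the sketch below applies it to a simulated bimodal sample:

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2.0, 1.0, 500),
                    rng.normal(3.0, 1.5, 500)]).reshape(-1, 1)

gm = GaussianMixture(n_components=2, random_state=0).fit(x)
print(gm.weights_)                       # estimated mixing proportions
print(gm.means_.ravel())                 # estimated component means
print(np.sqrt(gm.covariances_.ravel()))  # estimated component standard deviations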
Example software application.
The CumFreqA program for the fitting of composite probability distributions to a data set (X) can divide the set into two parts with a different distribution. The figure shows an example of a double generalized mirrored Gumbel distribution as in distribution fitting with cumulative distribution function (CDF) equations:
X < 8.10 : CDF = 1 - exp[-exp{-(0.092X^0.01+935)}]
X > 8.10 : CDF = 1 - exp[-exp{-(-0.0039X^2.79+1.05)}]
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R = \\frac{ a + x }{ b + y } "
},
{
"math_id": 1,
"text": " Y "
},
{
"math_id": 2,
"text": " \\alpha "
},
{
"math_id": 3,
"text": " Z "
},
{
"math_id": 4,
"text": " (1-\\alpha), "
},
{
"math_id": 5,
"text": "0 < \\alpha < 1"
},
{
"math_id": 6,
"text": " f( x ) = p g_1( x ) + ( 1 - p ) g_2( x ) \\, "
},
{
"math_id": 7,
"text": " \\mu = p \\mu_1 + ( 1 - p ) \\mu_2 "
},
{
"math_id": 8,
"text": " \\nu_2 = p[ \\sigma_1^2 + \\delta_1^2 ] + ( 1 - p )[ \\sigma_2^2 + \\delta_2^2 ]"
},
{
"math_id": 9,
"text": " \\nu_3 = p [ S_1 \\sigma_1^3 + 3 \\delta_1 \\sigma_1^2 + \\delta_1^3 ] + ( 1 - p )[ S_2 \\sigma_2^3 + 3 \\delta_2 \\sigma_2^2 + \\delta_2^3 ] "
},
{
"math_id": 10,
"text": " \\nu_4 = p[ K_1 \\sigma_1^4 + 4 S_1 \\delta_1 \\sigma_1^3 + 6 \\delta_1^2 \\sigma_1^2 + \\delta_1^4 ] + ( 1 - p )[ K_2 \\sigma_2^4 + 4 S_2 \\delta_2 \\sigma_2^3 + 6 \\delta_2^2 \\sigma_2^2 + \\delta_2^4 ]"
},
{
"math_id": 11,
"text": " \\mu = \\int x f( x ) \\, dx "
},
{
"math_id": 12,
"text": " \\delta_i = \\mu_i - \\mu "
},
{
"math_id": 13,
"text": " \\nu_r = \\int ( x - \\mu )^r f( x ) \\, dx "
},
{
"math_id": 14,
"text": " d \\le 1 "
},
{
"math_id": 15,
"text": " \\left\\vert \\log( 1 - p ) - \\log( p ) \\right\\vert \\ge 2 \\log( d - \\sqrt{ d^2 - 1 } ) + 2d \\sqrt{ d^2 - 1 } ,"
},
{
"math_id": 16,
"text": " d = \\frac{ \\left\\vert \\mu_1 - \\mu_2 \\right\\vert }{ 2 \\sigma }, "
},
{
"math_id": 17,
"text": " r = \\frac{ \\sigma_1^2 }{ \\sigma_2^2 } ."
},
{
"math_id": 18,
"text": " S = \\frac{ \\sqrt{ -2 + 3r + 3r^2 - 2r^3 + 2( 1 - r + r^2 )^{ 1.5 } } }{ \\sqrt{ r }( 1 + \\sqrt{ r } ) } ."
},
{
"math_id": 19,
"text": " | \\mu_1 - \\mu_2 | < S | \\sigma_1 + \\sigma_2 | ."
},
{
"math_id": 20,
"text": "|\\mu_1-\\mu_2| \\le2\\min (\\sigma_1,\\sigma_2)."
},
{
"math_id": 21,
"text": "\\sigma,"
},
{
"math_id": 22,
"text": "|\\mu _1-\\mu_2|\\le 2\\sigma \\sqrt{1+\\frac{|\\log p-\\ln (1-p)|}{2}}."
},
{
"math_id": 23,
"text": " D = \\frac{ \\left| \\mu_1 - \\mu_2 \\right| }{ \\sqrt{ 2 ( \\sigma_1^2 + \\sigma_2^2 ) } } "
},
{
"math_id": 24,
"text": " A = U \\left( 1 - \\frac{ S - 1 }{ K - 1 } \\right) "
},
{
"math_id": 25,
"text": " A_\\text{overall} = \\sum_i w_i A_i "
},
{
"math_id": 26,
"text": " S = \\frac{ \\mu_1 - \\mu_2 }{ 2( \\sigma_1 +\\sigma_2 ) } "
},
{
"math_id": 27,
"text": " \\beta = \\frac{ \\gamma^2 + 1 }{ \\kappa } "
},
{
"math_id": 28,
"text": " b = \\frac{ g^2 + 1 }{ k + \\frac{ 3( n - 1 )^2 }{ ( n - 2 )( n - 3 ) } } "
},
{
"math_id": 29,
"text": " A_B = \\frac{A_1 - A_{ an } }{ A_1 } "
},
{
"math_id": 30,
"text": " R = \\frac{ A_r }{ A_l } "
},
{
"math_id": 31,
"text": " B = \\sqrt{ \\frac{ A_r }{ A_l } } \\sum P_i "
},
{
"math_id": 32,
"text": " \\delta = \\frac{ | \\mu_1 - \\mu_2 |}{ \\sigma } "
},
{
"math_id": 33,
"text": " BI = \\delta \\sqrt{ p( 1 - p ) } "
},
{
"math_id": 34,
"text": " B = \\frac{ 1 }{ N } \\left[ \\left( \\sum_1^N \\cos ( 2 \\pi m \\gamma ) \\right)^2 + \\left( \\sum_1^N \\sin ( 2 \\pi m \\gamma ) \\right)^2 \\right] "
},
{
"math_id": 35,
"text": " B = | \\mu - \\mu_M | "
},
{
"math_id": 36,
"text": " \\mu_M = \\frac{ \\sum_{ i = 1 }^L m_i x_i }{ \\sum_{ i = 1 }^L m_i } "
},
{
"math_id": 37,
"text": " B = | \\phi_2 - \\phi_1 | \\frac{ p_2 }{ p_1 } "
},
{
"math_id": 38,
"text": " \\frac{ n_1 \\sigma_1^2 + n_2 \\sigma_2^2 }{ m \\sigma^2 } "
},
{
"math_id": 39,
"text": "\\mathit{Mean} = \\frac{ \\phi_{ 16 } + \\phi_{ 50 } + \\phi_{ 84 } }{ 3 }"
},
{
"math_id": 40,
"text": "\\mathit{StdDev} = \\frac{ \\phi_{ 84 } - \\phi_{ 16 } }{ 4 } + \\frac{ \\phi_{ 95 } - \\phi_{ 5 } }{ 6.6 } "
},
{
"math_id": 41,
"text": "\\mathit{Skew} = \\frac{ \\phi_{ 84 } + \\phi_{ 16 } - 2 \\phi_{ 50 } }{ 2 ( \\phi_{ 84 } - \\phi_{ 16 } ) } + \\frac{ \\phi_{ 95 } + \\phi_{ 5 } - 2 \\phi_{ 50 } }{ 2( \\phi_{ 95 } - \\phi_{ 5 } ) } "
},
{
"math_id": 42,
"text": "\\mathit{Kurt} = \\frac{ \\phi_{ 95 } - \\phi_{ 5 } }{ 2.44 ( \\phi_{ 75 } - \\phi_{ 25 } ) }"
},
{
"math_id": 43,
"text": " b_2 - b_1 \\ge 1 "
}
] | https://en.wikipedia.org/wiki?curid=1068378 |
10685654 | Second sound | Quantum mechanical phenomenon in which heat transfer occurs by wave-like motion
In condensed matter physics, second sound is a quantum mechanical phenomenon in which heat transfer occurs by wave-like motion, rather than by the more usual mechanism of diffusion. Its presence leads to a very high thermal conductivity. It is known as "second sound" because the wave motion of entropy and temperature is similar to the propagation of pressure waves in air (sound). The phenomenon of second sound was first described by Lev Landau in 1941.
Description.
Normal sound waves are fluctuations in the displacement and density of molecules in a substance;
second sound waves are fluctuations in the density of quasiparticle thermal excitations (rotons and phonons). Second sound can be observed in any system in which most phonon-phonon collisions conserve momentum, like superfluids and in some dielectric crystals when Umklapp scattering is small.
Contrary to molecules in a gas, quasiparticles are not necessarily conserved. Also gas molecules in a box conserve momentum (except at the boundaries of box), while quasiparticles can sometimes not conserve momentum in the presence of impurities or Umklapp scattering. Umklapp phonon-phonon scattering exchanges momentum with the crystal lattice, so phonon momentum is not conserved, but Umklapp processes can be reduced at low temperatures.
Normal sound in gases is a consequence of the collision rate 1/"τ" between molecules being large compared to the frequency of the sound wave, "ω" ≪ 1/"τ". For second sound, the Umklapp rate 1/"τ"u has to be small compared to the oscillation frequency, "ω" ≫ 1/"τ"u, so that energy and momentum are conserved. However, analogous to gases, the rate 1/"τ""N" of the normal (momentum-conserving) collisions has to be large with respect to the frequency, "ω" ≪ 1/"τ""N", leaving a window:
formula_0
for sound-like behaviour or second sound. The second sound thus behaves as oscillations of the local number of quasiparticles (or of the local energy carried by these particles). Contrary to the normal sound where energy is related to pressure and temperature, in a crystal the local energy density is purely a function of the temperature. In this sense, the second sound can also be considered as oscillations of the local temperature. Second sound is a wave-like phenomenon, which makes it very different from usual heat diffusion.
In helium II.
Second sound is observed in liquid helium at temperatures below the lambda point, 2.1768 K, where 4He becomes a superfluid known as helium II. Helium II has the highest thermal conductivity of any known material (several hundred times higher than copper). Second sound can be observed either as pulses or in a resonant cavity.
The speed of second sound is close to zero near the lambda point, increasing to approximately 20 m/s around 1.8 K, about ten times slower than normal sound waves.
At temperatures below 1 K, the speed of second sound in helium II increases as the temperature decreases.
Second sound is also observed in superfluid helium-3 below its lambda point 2.5 mK.
As per the two-fluid model, the speed of second sound is given by
formula_1
where formula_2 is the temperature, formula_3 is the entropy, formula_4 is the specific heat, formula_5 is the superfluid density and formula_6 is the normal fluid density. As formula_7, formula_8, where formula_9 is the ordinary (or first) sound speed.
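As an illustration only, the two-fluid formula can be evaluated numerically; the sketch below uses made-up, order-of-magnitude inputs rather than tabulated helium II data, with the entropy and specific heat taken per unit mass so that the result comes out in m/s:

import math

def second_sound_speed(T, S, C, rho_s_over_rho_n):
    # c2 = sqrt((T * S**2 / C) * (rho_s / rho_n)), with S and C as specific (per-mass) quantities
    return math.sqrt(T * S**2 / C * rho_s_over_rho_n)

print(second_sound_speed(T=1.8, S=200.0, C=400.0, rho_s_over_rho_n=2.0))  # about 19 m/s for these illustrative inputs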
In other media.
Second sound has been observed in solid 4He and 3He,
and in some dielectric solids such as Bi in the temperature
range of 1.2 to 4.0 K with a velocity of 780 ± 50 m/s,
or solid sodium fluoride (NaF) around 10 to 20 K. In 2021 this effect was observed in a BKT superfluid as well as in a germanium semiconductor.
In graphite.
In 2019 it was reported that ordinary graphite exhibits second sound at 120 K. This feature was both predicted theoretically and observed experimentally, and
was by far the highest temperature at which second sound has been observed. However, this second sound is observed only at the microscale, because the wave dies out exponentially with
characteristic length 1-10 microns. Therefore, presumably graphite in the right temperature regime has extraordinarily high thermal conductivity "but" only for the purpose of transferring heat pulses distances of order 10 microns, and for pulses of duration on the order of 10 nanoseconds. For more "normal" heat-transfer, graphite's observed thermal conductivity is less than that of, e.g., copper. The theoretical models, however, predict longer absorption lengths would be seen in isotopically pure graphite, and perhaps over a wider temperature range, e.g. even at room temperature. (As of March 2019, that experiment has not yet been tried.)
Applications.
Measuring the speed of second sound in 3He-4He mixtures can be
used as a thermometer in the range 0.01-0.7 K.
Oscillating superleak transducers (OST) use second sound to locate defects in superconducting accelerator cavities. | [
{
"math_id": 0,
"text": "\\frac{1}{\\tau_{\\rm u}} \\ll \\omega\\ll \\frac{1}{\\tau_N}"
},
{
"math_id": 1,
"text": "c_2 = \\left(\\frac{TS^2}{C}\\,\\frac{\\rho_s}{\\rho_n}\\right)^{1/2}"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "C"
},
{
"math_id": 5,
"text": "\\rho_s"
},
{
"math_id": 6,
"text": "\\rho_n"
},
{
"math_id": 7,
"text": "T\\rightarrow 0"
},
{
"math_id": 8,
"text": "c_2=c/\\sqrt{3}"
},
{
"math_id": 9,
"text": "c=(\\partial p/\\partial \\rho)_S\\approx (\\partial p/\\partial \\rho)_T"
}
] | https://en.wikipedia.org/wiki?curid=10685654 |
10686210 | Jupiter mass | Unit of mass equal to the total mass of the planet Jupiter
Jupiter mass, also called Jovian mass, is the unit of mass equal to the total mass of the planet Jupiter. This value may refer to the mass of the planet alone, or the mass of the entire Jovian system to include the moons of Jupiter. Jupiter is by far the most massive planet in the Solar System. It is approximately 2.5 times as massive as all of the other planets in the Solar System combined.
Jupiter mass is a common unit of mass in astronomy that is used to indicate the masses of other similarly-sized objects, including the outer planets, extrasolar planets, and brown dwarfs, as this unit provides a convenient scale for comparison.
Current best estimates.
The current best known value for the mass of Jupiter can be expressed as:
formula_0
which is about 1⁄1000 as massive as the Sun (about 0.1% M☉):
formula_1
Jupiter is 318 times as massive as Earth:
formula_2
Context and implications.
Jupiter's mass is 2.5 times that of all the other planets in the Solar System combined—this is so massive that its barycenter with the Sun lies beyond the Sun's surface at 1.068 solar radii from the Sun's center.
Because the mass of Jupiter is so large compared to the other objects in the Solar System, the effects of its gravity must be included when calculating satellite trajectories and the precise orbits of other bodies in the Solar System, including the Moon and even Pluto.
Theoretical models indicate that if Jupiter had much more mass than it does at present, its atmosphere would collapse, and the planet would shrink. For small changes in mass, the radius would not change appreciably, but above about 500 ME (1.6 Jupiter masses) the interior would become so much more compressed under the increased pressure that its volume would "decrease" despite the increasing amount of matter. As a result, Jupiter is thought to have about as large a diameter as a planet of its composition and evolutionary history can achieve. The process of further shrinkage with increasing mass would continue until appreciable stellar ignition was achieved, as in high-mass brown dwarfs having around 50 Jupiter masses. Jupiter would need to be about 80 times as massive to fuse hydrogen and become a star.
Gravitational constant.
The mass of Jupiter is derived from the measured value called the Jovian mass parameter, which is denoted with "GM"J. The mass of Jupiter is calculated by dividing "GM"J by the constant "G". For celestial bodies such as Jupiter, Earth and the Sun, the value of the "GM" product is known to many orders of magnitude more precisely than either factor independently. The limited precision available for "G" limits the uncertainty of the derived mass. For this reason, astronomers often prefer to refer to the gravitational parameter, rather than the explicit mass. The "GM" products are used when computing the ratio of Jupiter mass relative to other objects.
In 2015, the International Astronomical Union defined the "nominal Jovian mass parameter" to remain constant regardless of subsequent improvements in measurement precision of MJ. This constant is defined as exactly
formula_3
If the explicit mass of Jupiter is needed in SI units, it can be calculated by dividing "GM" by "G", where "G" is the gravitational constant.
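A one-line calculation illustrates this, using the nominal Jovian mass parameter quoted above and the CODATA 2018 value of the gravitational constant:

GM_J = 1.2668653e17   # nominal Jovian mass parameter, m^3 s^-2
G = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2 (CODATA 2018)
print(GM_J / G)       # about 1.898e27 kg, consistent with the value quoted above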
Mass composition.
The majority of Jupiter's mass is hydrogen and helium. These two elements make up more than 87% of the total mass of Jupiter. The total mass of heavy elements other than hydrogen and helium in the planet is between 11 and 45 ME. The bulk of the hydrogen on Jupiter is solid hydrogen. Evidence suggests that Jupiter contains a central dense core. If so, the mass of the core is predicted to be no larger than about 12 ME. The exact mass of the core is uncertain due to the relatively poor knowledge of the behavior of solid hydrogen at very high pressures.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_\\mathrm{J}=(1.89813 \\pm 0.00019)\\times10^{27} \\text{ kg},"
},
{
"math_id": 1,
"text": "M_\\mathrm{J}=\\frac{1}{1047.348644 \\pm 0.000017} M_{\\odot} \\approx (9.547919 \\pm 0.000002) \\times10^{-4} M_{\\odot}."
},
{
"math_id": 2,
"text": "M_\\mathrm{J} = 3.1782838 \\times 10^2 M_\\oplus. "
},
{
"math_id": 3,
"text": "(\\mathcal{GM})^\\mathrm N_\\mathrm J = 1.266\\,8653 \\times 10^{17} \\text{ m}^3/\\text{s}^2 "
}
] | https://en.wikipedia.org/wiki?curid=10686210 |
10686498 | Impact parameter | Distance between a projectile path and center of a potential field affecting it
In physics, the impact parameter b is defined as the perpendicular distance between the path of a projectile and the center of a potential field "U"("r") created by an object that the projectile is approaching (see diagram). It is often referred to in nuclear physics (see Rutherford scattering) and in classical mechanics.
The impact parameter is related to the scattering angle θ by
formula_0
where v∞ is the velocity of the projectile when it is far from the center, and "r"min is its closest distance from the center.
Scattering from a hard sphere.
The simplest example illustrating the use of the impact parameter is in the case of scattering from a sphere. Here, the object that the projectile is approaching is a hard sphere with radius formula_1. In the case of a hard sphere, formula_2 when formula_3, and formula_4 for formula_5. When formula_6, the projectile misses the hard sphere. We immediately see that formula_7. When formula_8, we find that formula_9
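A short Python sketch of the hard-sphere relation (the function name and sample values are illustrative):

import numpy as np

def hard_sphere_angle(b, R):
    # theta = 2*arccos(b/R) for b <= R; the projectile misses (theta = 0) for b > R
    return np.where(b <= R, 2 * np.arccos(np.clip(np.asarray(b, float) / R, 0.0, 1.0)), 0.0)

R = 1.0
for b in (0.0, 0.5, 1.0, 1.5):
    print(b, np.degrees(hard_sphere_angle(b, R)))
# b = 0 gives head-on backscattering (180 degrees); b = R grazes (0 degrees); b > R misses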
Collision centrality.
In high-energy nuclear physics — specifically, in colliding-beam experiments — collisions may be classified according to their impact parameter. Central collisions have formula_10, peripheral collisions have formula_11, and ultraperipheral collisions (UPCs) have formula_12, where the colliding nuclei are viewed as hard spheres with radius formula_1.
Because the color force has an extremely short range, it cannot couple quarks that are separated by much more than one nucleon's radius; hence, strong interactions are suppressed in peripheral and ultraperipheral collisions. This means that final-state particle multiplicity (the total number of particles resulting from the collision) is typically greatest in the most central collisions, due to the partons involved having the greatest probability of interacting in some way. This has led to charged particle multiplicity being used as a common measure of collision centrality, as charged particles are much easier to detect than uncharged particles.
Because strong interactions are effectively impossible in ultraperipheral collisions, they may be used to study electromagnetic interactions — i.e. photon–photon, photon–nucleon, or photon–nucleus interactions — with low background contamination. Because UPCs typically produce only two to four final-state particles, they are also relatively "clean" when compared to central collisions, which may produce hundreds of particles per event.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\theta = \\pi - 2b\\int_{r_\\text{min}}^\\infty \\frac{dr}{r^2\\sqrt{1 - (b/r)^2 - 2U/(mv_\\infty^2)}},"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "U(r) = 0"
},
{
"math_id": 3,
"text": "r > R"
},
{
"math_id": 4,
"text": "U(r) = \\infty"
},
{
"math_id": 5,
"text": " r \\leq R "
},
{
"math_id": 6,
"text": " b > R "
},
{
"math_id": 7,
"text": "\\theta = 0"
},
{
"math_id": 8,
"text": "b \\leq R"
},
{
"math_id": 9,
"text": "b = R \\cos\\tfrac{\\theta}{2}."
},
{
"math_id": 10,
"text": "b \\approx 0"
},
{
"math_id": 11,
"text": "0 < b < 2R"
},
{
"math_id": 12,
"text": "b > 2R"
}
] | https://en.wikipedia.org/wiki?curid=10686498 |
10687767 | Schatten norm | In mathematics, specifically functional analysis, the Schatten norm (or Schatten–von-Neumann norm) arises as a generalization of "p"-integrability similar to the trace class norm and the Hilbert–Schmidt norm.
Definition.
Let formula_0, formula_1 be Hilbert spaces, and formula_2 a (linear) bounded operator from
formula_0 to formula_1. For formula_3, define the Schatten "p"-norm of formula_2 as
formula_4
where formula_5, using the operator square root.
If formula_2 is compact and formula_6 are separable, then
formula_7
for formula_8 the singular values of formula_2, i.e. the eigenvalues of the Hermitian operator formula_5.
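For matrices, the definition can be evaluated directly from the singular values; the NumPy sketch below (the helper name is arbitrary) also compares the cases "p" = 1, 2, ∞ with the built-in nuclear, Frobenius and spectral norms:

import numpy as np

def schatten_norm(A, p):
    # ||A||_p = (sum of singular values to the p-th power)^(1/p); p = inf gives the largest singular value
    s = np.linalg.svd(A, compute_uv=False)
    return s.max() if np.isinf(p) else (s**p).sum() ** (1.0 / p)

A = np.array([[1.0, 2.0], [3.0, 4.0]])
print(schatten_norm(A, 1), np.linalg.norm(A, 'nuc'))    # trace-class (nuclear) norm
print(schatten_norm(A, 2), np.linalg.norm(A, 'fro'))    # Hilbert-Schmidt (Frobenius) norm
print(schatten_norm(A, np.inf), np.linalg.norm(A, 2))   # operator (spectral) norm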
Properties.
In the following we formally extend the range of formula_9 to formula_10 with the convention that formula_11 is the operator norm. The dual index to formula_12 is then formula_13.
The Schatten norms are unitarily invariant: for unitary operators formula_14 and formula_15 and formula_16,
formula_17
They satisfy Hölder's inequality: for all formula_18 and formula_19 such that formula_20, and operators formula_21 defined on Hilbert spaces formula_22 and formula_23,
formula_24
If formula_25 satisfy formula_26, then we have
formula_27.
The latter version of Hölder's inequality is proven in higher generality (for noncommutative formula_28 spaces instead of Schatten-p classes) in.
Schatten norms are sub-multiplicative:
formula_29
They are monotonically decreasing in formula_9: for formula_30,
formula_31
The Schatten norms satisfy a duality relation: for separable Hilbert spaces formula_32 and dual indices formula_19 and formula_9 with formula_20,
formula_33
where formula_34 denotes the Hilbert–Schmidt inner product.
In particular, for formula_36 the trace norm can be bounded in terms of the matrix elements of formula_2 with respect to orthonormal bases formula_35 of formula_0 and formula_1:
formula_37
Remarks.
Notice that formula_38 is the Hilbert–Schmidt norm (see Hilbert–Schmidt operator), formula_39 is the trace class norm (see trace class), and formula_40 is the operator norm (see operator norm).
For formula_41 the function formula_42 is an example of a quasinorm.
An operator which has a finite Schatten norm is called a Schatten class operator and the space of such operators is denoted by formula_43. With this norm, formula_43 is a Banach space, and a Hilbert space for "p" = 2.
Observe that formula_44, the algebra of compact operators. This follows from the fact that if the sum is finite the spectrum will be finite or countable with the origin as limit point, and hence a compact operator (see compact operator on Hilbert space).
The case "p" = 1 is often referred to as the nuclear norm (also known as the "trace norm", or the Ky Fan "n"-norm).
See also.
Matrix norms
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_1"
},
{
"math_id": 1,
"text": "H_2"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "p\\in [1,\\infty)"
},
{
"math_id": 4,
"text": " \\|T\\| _p = [\\operatorname{Tr} (|T|^p)]^{1/p}, "
},
{
"math_id": 5,
"text": "|T|:=\\sqrt{(T^*T)}"
},
{
"math_id": 6,
"text": "H_1,\\,H_2"
},
{
"math_id": 7,
"text": " \\|T\\| _p := \\bigg( \\sum_{n\\ge 1} s^p_n(T)\\bigg)^{1/p} "
},
{
"math_id": 8,
"text": " s_1(T) \\ge s_2(T) \\ge \\cdots \\ge s_n(T) \\ge \\cdots \\ge 0"
},
{
"math_id": 9,
"text": " p "
},
{
"math_id": 10,
"text": "[1,\\infty]"
},
{
"math_id": 11,
"text": " \\|\\cdot\\|_{\\infty} "
},
{
"math_id": 12,
"text": "p=\\infty"
},
{
"math_id": 13,
"text": "q=1"
},
{
"math_id": 14,
"text": " U "
},
{
"math_id": 15,
"text": " V "
},
{
"math_id": 16,
"text": "p\\in [1,\\infty]"
},
{
"math_id": 17,
"text": " \\|U T V\\|_p = \\|T\\|_p. "
},
{
"math_id": 18,
"text": " p\\in [1,\\infty]"
},
{
"math_id": 19,
"text": "q"
},
{
"math_id": 20,
"text": "\\frac{1}{p} + \\frac{1}{q} = 1"
},
{
"math_id": 21,
"text": " S\\in\\mathcal{L}(H_2,H_3), T\\in\\mathcal{L}(H_1,H_2)"
},
{
"math_id": 22,
"text": "H_1, H_2, "
},
{
"math_id": 23,
"text": " H_3"
},
{
"math_id": 24,
"text": " \\|ST\\|_1 \\leq \\|S\\|_p \\|T\\|_q. "
},
{
"math_id": 25,
"text": "p,q,r\\in [1,\\infty]"
},
{
"math_id": 26,
"text": "\\tfrac{1}{p} + \\tfrac{1}{q} = \\tfrac{1}{r}"
},
{
"math_id": 27,
"text": "\\|ST\\|_r \\leq \\|S\\|_p \\|T\\|_q"
},
{
"math_id": 28,
"text": "L^p"
},
{
"math_id": 29,
"text": " \\|ST\\|_p \\leq \\|S\\|_p \\|T\\|_p ."
},
{
"math_id": 30,
"text": " 1\\leq p\\leq p'\\leq\\infty"
},
{
"math_id": 31,
"text": " \\|T\\|_1 \\geq \\|T\\|_p \\geq \\|T\\|_{p'} \\geq \\|T\\|_\\infty. "
},
{
"math_id": 32,
"text": "H_1, H_2"
},
{
"math_id": 33,
"text": " \\|S\\|_p = \\sup\\lbrace |\\langle S,T\\rangle | \\mid \\|T\\|_q = 1\\rbrace, "
},
{
"math_id": 34,
"text": "\\langle S,T\\rangle = \\operatorname{tr}(S^*T) "
},
{
"math_id": 35,
"text": " (e_k)_k,(f_{k'})_{k'}"
},
{
"math_id": 36,
"text": " p=1"
},
{
"math_id": 37,
"text": " \\|T\\|_1 \\leq \\sum_{k,k'}\\left|T_{k,k'}\\right|. "
},
{
"math_id": 38,
"text": "\\|\\cdot\\|_2 "
},
{
"math_id": 39,
"text": "\\|\\cdot\\|_1 "
},
{
"math_id": 40,
"text": "\\|\\cdot\\|_\\infty"
},
{
"math_id": 41,
"text": "p\\in(0,1)"
},
{
"math_id": 42,
"text": "\\|\\cdot\\|_p"
},
{
"math_id": 43,
"text": " S_p(H_1,H_2)"
},
{
"math_id": 44,
"text": " S_p(H_1,H_2) \\subseteq \\mathcal{K} (H_1,H_2)"
}
] | https://en.wikipedia.org/wiki?curid=10687767 |
10688404 | Acetoacetate decarboxylase | Enzyme
Acetoacetate decarboxylase (AAD or ADC) is an enzyme (EC 4.1.1.4) involved in both the ketone body production pathway in humans and other mammals, and solventogenesis in bacteria. Acetoacetate decarboxylase plays a key role in solvent production by catalyzing the decarboxylation of acetoacetate, yielding acetone and carbon dioxide.
This enzyme has been of particular interest because it is a classic example of how pKa values of ionizable groups in the enzyme active site can be significantly perturbed. Specifically, the pKa value of lysine 115 in the active site is unusually low, allowing for the formation of a Schiff base intermediate and catalysis.
History.
Acetoacetate decarboxylase is an enzyme with major historical implications, specifically in World War I and in establishing the state of Israel. During the war the Allies needed pure acetone as a solvent for nitrocellulose, a highly flammable compound that is the main component in gunpowder. In 1916, biochemist and future first president of Israel Chaim Weizmann was the first to isolate "Clostridium acetobutylicum", a Gram-positive, anaerobic bacterium in which acetoacetate decarboxylase is found. Weizmann was able to harness the organism's ability to yield acetone from starch in order to mass-produce explosives during the war. This led the American and British governments to install the process devised by Chaim Weizmann in several large plants in England, France, Canada, and the United States. Through Weizmann's scientific contributions in World War I, he became close with influential British leaders, educating them about his Zionist beliefs. One of them was Arthur Balfour, the man after whom the Balfour Declaration, the first document pronouncing British support for the establishment of a Jewish homeland, was named.
The production of acetone by acetoacetate decarboxylase-containing or clostridial bacteria was utilized in large-scale industrial syntheses in the first half of the twentieth century. In the 1960s, the industry replaced this process with less expensive, more efficient chemical syntheses of acetone from petroleum and petroleum derivatives. However, there has been a growing interest in acetone production that is more environmentally friendly, causing a resurgence in utilizing acetoacetate decarboxylase-containing bacteria. Similarly, isopropanol and butanol fermentation using clostridial species is also becoming popular.
Structure.
Acetoacetate decarboxylase is a 365 kDa complex with a homododecameric structure. The overall structure consists of antiparallel β-sheets and a central seven-stranded cone-shaped β-barrel. The core of this β-barrel surrounds the active site in each protomer of the enzyme. The active site, consisting of residues such as Phe27, Met97, and Tyr113, is mostly hydrophobic. However, the active site does contain two charged residues: Arg29 and Glu76.
Arg29 is thought to play a role in substrate binding, while Glu76 is thought to play a role in orienting the active site for catalysis. The overall hydrophobic environment of the active site plays a critical role in favoring the neutral amine form of Lys115, a key residue involved in the formation of a Schiff base intermediate. Another important lysine residue, Lys116, is thought to play an important role in the positioning of Lys115 in the active site. Through hydrogen bonds with Ser16 and Met210, Lys116 positions Lys115 in the hydrophobic pocket of the active site to favor the neutral amine form.
Reaction mechanism.
Acetoacetate decarboxylase from "Clostridium acetobutylicum" catalyzes the decarboxylation of acetoacetate to yield acetone and carbon dioxide (Figure 1). The reaction mechanism proceeds via the formation of a Schiff base intermediate, which is covalently attached to lysine 115 in the active site. The first line of support for this mechanism came from a radiolabeling experiment in which researchers labeled the carbonyl group of acetoacetate with 18O and observed that oxygen exchange to water, used as the solvent, is a necessary part of decarboxylation step. These results provided support that the mechanism proceeds through a Schiff base intermediate between the ketoacid and an amino acid residue on the enzyme.
Further research led to the isolation of an active site peptide sequence and identification of the active site lysine, Lys115, that is involved in the formation of the Schiff base intermediate. Additionally, later experiments led to the finding that maximum activity of the enzyme occurs at pH 5.95, suggesting that the pK of the ε-ammonium group of Lys115 is significantly perturbed in the active site. If the pK were not perturbed downward, the lysine residue would remain protonated as an ammonium cation, making it unreactive for the nucleophilic addition necessary to form the Schiff base.
Building upon this finding, Westheimer et al. directly measured the pK of Lys115 in the active site using 5-nitrosalicylaldehyde (5-NSA). Reaction of 5-NSA with acetoacetate decarboxylase and subsequent reduction of the resulting Schiff base with sodium borohydride led to the incorporation of a 2-hydroxy-5-nitrobenzylamino reporter molecule in the active site (Figure 2). Titration of the enzyme with this attached reporter group revealed that the pK of Lys115 is decreased to 5.9 in the active site. These results were the basis for the proposal that the perturbation in the pK of Lys115 was due to its proximity to the positively charged ε-ammonium group of Lys116 in the active site. A nearby positive charge could cause unfavorable electrostatic repulsions that weaken the N-H bond of Lys115. Westheimer et al.'s proposal was further supported by site-directed mutagenesis studies. When Lys116 was mutated to cysteine or asparagine, the pK of Lys115 was found to be significantly elevated to over 9.2, indicating that positively charged Lys116 plays a critical role in determining the pK of Lys115. Although a crystal structure was not yet solved to provide structural evidence, this proposal was widely accepted and cited as a textbook example of how the active site can be precisely organized to perturb a pK and affect reactivity.
In 2009, a crystal structure of acetoacetate decarboxylase from "Clostridium acetobutylicum" was solved, allowing Westheimer et al.'s proposal to be evaluated from a new perspective. From the crystal structure, researchers found that Lys115 and Lys116 are oriented in opposite directions and separated by 14.8 Å (Figure 3). This distance makes it unlikely that the positive charge of Lys116 is able to affect the pK of Lys115. Instead, through hydrogen bonds with Ser16 and Met210, Lys116 likely holds Lys115 in position in a hydrophobic pocket of the active site. This positioning disrupts the stability of the protonated ammonium cation of Lys115, suggesting that the perturbation of Lys115's pK occurs through a 'desolvation effect'.
Inactivation and inhibition.
Acetoacetate decarboxylase is inhibited by a number of compounds. Acetic anhydride performs an electrophilic attack on the critical catalytic residue, Lys115, of acetoacetate decarboxylase to inactivate the enzyme. The rate of inactivation was assessed through the hydrolysis of the synthetic substrate 2,4-dinitrophenyl propionate to dinitrophenol by acetoacetate decarboxylase. In the presence of acetic anhydride, the enzyme is inactivated and unable to catalyze the hydrolysis of 2,4-dinitrophenyl propionate to dinitrophenol.
Acetonylsulfonate acts as a competitive inhibitor (KI = 8.0 mM) as it mimics the characteristics of the natural substrate, acetoacetate (KM = 8.0 mM). The monoanion version of acetonylphosphonate is also a good inhibitor (KI = 0.8 mM), more efficient than the acetonylphosphonate monoester or dianion. These findings indicate that the active site is very discriminatory and sterically restricted.
Hydrogen cyanide seems to be an uncompetitive inhibitor, combining with Schiff's base compounds formed at the active site. Addition of carbonyl compounds to the enzyme, in the presence of hydrogen cyanide, increases hydrogen cyanide's ability to inhibit acetoacetate decarboxylase, suggesting that carbonyl compounds readily form Schiff's bases at the active site. Hydrogen cyanide is most potent as an inhibitor at pH 6, the optimum pH for the enzyme, suggesting that the rate-limiting step of catalysis is the formation of the Schiff base intermediate.
Beta-diketones appear to inhibit acetoacetate decarboxylase well but slowly. Acetoacetate decarboxylase has a KM for acetoacetate of 7×10−3 M whereas the enzyme has a KI for benzoylacetone of 1.9×10−6 M. An enamine is most likely formed upon interaction of beta-diketones with free enzyme.
The reaction of acetoacetate decarboxylase with "p"-chloromercuriphenylsulfonate (CMS) results in decreased catalytic activity upon the binding of two equivalents of CMS per enzyme subunit. CMS interacts with two sulfhydryl groups located on each enzyme subunit. Further inactivation occurs upon addition of a third equivalent of CMS per subunit. Addition of free cysteine to the inhibited enzyme is able to reverse CMS inhibition of acetoacetate decarboxylase.
Activity in bacteria.
Acetoacetate decarboxylase has been found and studied in the following bacteria in addition to "Clostridium acetobutylicum":
Activity in humans and mammals.
While this enzyme has not been purified from human tissue, the activity was shown to be present in human blood serum. The acetoacetate decarboxylase activity is mainly attributed to human serum albumin; however, the activity is only about formula_0 times that of real acetoacetate decarboxylase ("C. acetobutylicum").
In humans and other mammals, the conversion of acetoacetate into acetone and carbon dioxide by acetoacetate decarboxylase is a final, irreversible step in the ketone-body pathway that supplies the body with a secondary source of energy. In the liver, acetyl-CoA formed from fats and lipids is transformed into three ketone bodies: acetoacetate, D-β-hydroxybutyrate and acetone. Acetoacetate and D-β-hydroxybutyrate are exported to non-hepatic tissues, where they are converted back into acetyl-CoA and used for fuel. Acetone and carbon dioxide, on the other hand, are exhaled and do not accumulate under normal conditions.
Acetoacetate and D-β-hydroxybutyrate freely interconvert through the action of D-β-hydroxybutyrate dehydrogenase. Consequently, one function of acetoacetate decarboxylase may be to regulate the concentrations of the other two 4-carbon ketone bodies.
Clinical significance.
Ketone body production increases significantly when the rate of glucose metabolism is insufficient in meeting the body's energy needs. Such conditions include high-fat ketogenic diets, diabetic ketoacidosis, or severe starvation.
Under elevated levels of acetoacetate and D-β-hydroxybutyrate, acetoacetate decarboxylase produces significantly more acetone. Acetone is toxic, and can accumulate in the body under these conditions. Elevated levels of acetone in the human breath can be used to diagnose diabetes.
References.
| [
{
"math_id": 0,
"text": "10^{-4}"
}
] | https://en.wikipedia.org/wiki?curid=10688404 |
1069091 | Exponent bias | In IEEE 754 floating-point numbers, the exponent is biased in the engineering sense of the word – the value stored is offset from the actual value by the exponent bias, also called a biased exponent.
Biasing is done because exponents have to be signed values in order to be able to represent both tiny and huge values, but two's complement, the usual representation for signed values, would make comparison harder.
To solve this problem the exponent is stored as an unsigned value which is suitable for comparison, and when being interpreted it is converted into an exponent within a signed range by subtracting the bias.
By arranging the fields such that the sign bit takes the most significant bit position, the biased exponent takes the middle position, and the significand takes the least significant bits, the resulting value will be ordered properly. This is the case whether it is interpreted as a floating-point or as an integer value. The purpose of this is to enable high-speed comparisons between floating-point numbers using fixed-point hardware.
If there are formula_0 bits in the exponent, the bias is typically set as formula_1.
Therefore, the possible integer values that the biased exponent can express lie in the range formula_2.
To understand this range, with formula_0 bits in the exponent, the possible unsigned integers lie in the range formula_3.
However, the strings containing all zeros and all ones are reserved for special values, so the expressible integers lie in the range formula_4.
It follows that the largest actual exponent that can be expressed is formula_5 and the smallest is formula_6.
When interpreting the floating-point number, the bias is subtracted to retrieve the actual exponent.
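As a concrete illustration, the sketch below decodes the stored (biased) exponent of an IEEE 754 single-precision number, for which the exponent field has 8 bits and the bias is 2^(8-1) - 1 = 127. The helper function is illustrative only; it ignores the reserved all-zeros and all-ones exponent encodings (subnormals, infinities, NaNs).

```python
import struct

def decode_binary32(x):
    """Return (sign, biased exponent, actual exponent, fraction bits) of a float32."""
    bits = struct.unpack('>I', struct.pack('>f', x))[0]
    sign = bits >> 31
    biased = (bits >> 23) & 0xFF       # stored, unsigned (biased) exponent
    frac = bits & 0x7FFFFF
    return sign, biased, biased - 127, frac   # subtract the bias to recover the exponent

print(decode_binary32(6.5))   # (0, 129, 2, ...) since 6.5 = 1.625 * 2**2
```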
History.
The floating-point format of the IBM 704 introduced the use of a biased exponent in 1954.
References.
| [
{
"math_id": 0,
"text": "e"
},
{
"math_id": 1,
"text": "b = 2^{e-1}-1"
},
{
"math_id": 2,
"text": "[1-b, b]"
},
{
"math_id": 3,
"text": "[0, 2^{e}-1]"
},
{
"math_id": 4,
"text": "[1, 2^{e}-2]"
},
{
"math_id": 5,
"text": "(2^{e}-2) - b = 2b - b = b"
},
{
"math_id": 6,
"text": "1 - b"
}
] | https://en.wikipedia.org/wiki?curid=1069091 |
1069723 | Cohomology ring | In mathematics, specifically algebraic topology, the cohomology ring of a topological space "X" is a ring formed from the cohomology groups of "X" together with the cup product serving as the ring multiplication. Here 'cohomology' is usually understood as singular cohomology, but the ring structure is also present in other theories such as de Rham cohomology. It is also functorial: for a continuous mapping of spaces one obtains a ring homomorphism on cohomology rings, which is contravariant.
Specifically, given a sequence of cohomology groups "H""k"("X";"R") on "X" with coefficients in a commutative ring "R" (typically "R" is Z"n", Z, Q, R, or C) one can define the cup product, which takes the form
formula_0
The cup product gives a multiplication on the direct sum of the cohomology groups
formula_1
This multiplication turns "H"•("X";"R") into a ring. In fact, it is naturally an N-graded ring with the nonnegative integer "k" serving as the degree. The cup product respects this grading.
The cohomology ring is graded-commutative in the sense that the cup product commutes up to a sign determined by the grading. Specifically, for pure elements of degree "k" and "ℓ", we have
formula_2
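One standard consequence of this sign rule, written out here as a short illustration: an element of odd degree squares to a 2-torsion element.

```latex
% For \alpha of odd degree k, graded-commutativity applied with \beta = \alpha gives
\alpha \smile \alpha = (-1)^{k \cdot k}\,(\alpha \smile \alpha) = -(\alpha \smile \alpha),
\qquad \text{hence} \qquad 2\,(\alpha \smile \alpha) = 0 \ \text{in } H^{2k}(X;R).
```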
A numerical invariant derived from the cohomology ring is the cup-length, which is the maximum number of graded elements of degree ≥ 1 that when multiplied give a non-zero result. For example, a complex projective space has cup-length equal to its complex dimension. | [
{
"math_id": 0,
"text": "H^k(X;R) \\times H^\\ell(X;R) \\to H^{k+\\ell}(X; R)."
},
{
"math_id": 1,
"text": "H^\\bullet(X;R) = \\bigoplus_{k\\in\\mathbb{N}} H^k(X; R)."
},
{
"math_id": 2,
"text": "(\\alpha^k \\smile \\beta^\\ell) = (-1)^{k\\ell}(\\beta^\\ell \\smile \\alpha^k)."
},
{
"math_id": 3,
"text": "\\operatorname{H}^*(\\mathbb{R}P^n; \\mathbb{F}_2) = \\mathbb{F}_2[\\alpha]/(\\alpha^{n+1})"
},
{
"math_id": 4,
"text": "|\\alpha|=1"
},
{
"math_id": 5,
"text": "\\operatorname{H}^*(\\mathbb{R}P^\\infty; \\mathbb{F}_2) = \\mathbb{F}_2[\\alpha]"
},
{
"math_id": 6,
"text": "\\operatorname{H}^*(\\mathbb{C}P^n; \\mathbb{Z}) = \\mathbb{Z}[\\alpha]/(\\alpha^{n+1})"
},
{
"math_id": 7,
"text": "|\\alpha|=2"
},
{
"math_id": 8,
"text": "\\operatorname{H}^*(\\mathbb{C}P^\\infty; \\mathbb{Z}) = \\mathbb{Z}[\\alpha]"
},
{
"math_id": 9,
"text": "\\operatorname{H}^*(\\mathbb{H}P^n; \\mathbb{Z}) = \\mathbb{Z}[\\alpha]/(\\alpha^{n+1})"
},
{
"math_id": 10,
"text": "|\\alpha|=4"
},
{
"math_id": 11,
"text": "\\operatorname{H}^*(\\mathbb{H}P^\\infty; \\mathbb{Z}) = \\mathbb{Z}[\\alpha]"
},
{
"math_id": 12,
"text": "\\mathbb{R}P^\\infty"
},
{
"math_id": 13,
"text": "\\mathbb{F}_2"
}
] | https://en.wikipedia.org/wiki?curid=1069723 |
10698304 | Schatten class operator | In mathematics, specifically functional analysis, a "p"th Schatten-class operator is a bounded linear operator on a Hilbert space with finite "p"th Schatten norm. The space of "p"th Schatten-class operators is a Banach space with respect to the Schatten norm.
Via polar decomposition, one can prove that the space of "p"th Schatten class operators is an ideal in "B(H)". Furthermore, the Schatten norm satisfies a type of Hölder inequality:
formula_0
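For finite matrices the Schatten norm is simply the p-norm of the vector of singular values, so the inequality can be checked numerically; the sketch below uses arbitrary conjugate exponents p = 3, q = 3/2 and random test matrices, purely as an illustration.

```python
import numpy as np

def schatten_norm(A, p):
    """Schatten p-norm of a matrix: p-norm of its singular values."""
    s = np.linalg.svd(A, compute_uv=False)
    return np.sum(s ** p) ** (1.0 / p)

rng = np.random.default_rng(1)
S = rng.standard_normal((6, 6))
T = rng.standard_normal((6, 6))
p, q = 3.0, 1.5                                 # conjugate: 1/p + 1/q = 1
lhs = schatten_norm(S @ T, 1.0)
rhs = schatten_norm(S, p) * schatten_norm(T, q)
print(lhs <= rhs + 1e-12, lhs, rhs)             # True: Hoelder-type inequality holds
```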
If we denote by formula_1 the Banach space of compact operators on "H" with respect to the operator norm, the above Hölder-type inequality even holds for formula_2. From this it follows that the map formula_3, formula_4, is a well-defined contraction. (Here the prime denotes (topological) dual.)
Observe that the "2"nd Schatten class is in fact the Hilbert space of Hilbert–Schmidt operators. Moreover, the "1"st Schatten class is the space of trace class operators. | [
{
"math_id": 0,
"text": " \\| S T\\| _{S_1} \\leq \\| S\\| _{S_p} \\| T\\| _{S_q} \\ \\mbox{if} \\ S \\in S_p , \\ T\\in S_q \\mbox{ and } 1/p+1/q=1. "
},
{
"math_id": 1,
"text": " S_\\infty"
},
{
"math_id": 2,
"text": " p \\in [1,\\infty] "
},
{
"math_id": 3,
"text": " \\phi : S_p \\rightarrow S_q '"
},
{
"math_id": 4,
"text": " T \\mapsto \\mathrm{tr}(T\\cdot ) "
}
] | https://en.wikipedia.org/wiki?curid=10698304 |
10698414 | Operator system | Given a unital C*-algebra formula_0, a *-closed subspace "S" containing "1" is called an operator system. One can associate to each subspace formula_1 of a unital C*-algebra an operator system via formula_2.
The appropriate morphisms between operator systems are completely positive maps.
By a theorem of Choi and Effros, operator systems can be characterized as *-vector spaces equipped with an Archimedean matrix order.
References.
| [
{
"math_id": 0,
"text": " \\mathcal{A} "
},
{
"math_id": 1,
"text": " \\mathcal{M} \\subseteq \\mathcal{A} "
},
{
"math_id": 2,
"text": " S:= \\mathcal{M}+\\mathcal{M}^* +\\mathbb{C} 1 "
}
] | https://en.wikipedia.org/wiki?curid=10698414 |
10701883 | Reassignment method | The method of reassignment is a technique for sharpening a time-frequency representation (e.g. spectrogram or the short-time Fourier transform) by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal. The method has been independently introduced by several parties under various names, including "method of reassignment", "remapping", "time-frequency reassignment", and "modified moving-window method". The method of reassignment sharpens blurry time-frequency data by relocating the data according to local estimates of instantaneous frequency and group delay. This mapping to reassigned time-frequency coordinates is very precise for signals that are separable in time and frequency with respect to the analysis window.
Introduction.
Many signals of interest have a distribution of energy that varies in time and frequency. For example, any sound signal having a beginning or an end has an energy distribution that varies in time, and most sounds exhibit considerable variation in both time and frequency over their duration. Time-frequency representations are commonly used to analyze or characterize such signals. They map the one-dimensional time-domain signal into a two-dimensional function of time and frequency. A time-frequency representation describes the variation of spectral energy distribution over time, much as a musical score describes the variation of musical pitch over time.
In audio signal analysis, the spectrogram is the most commonly used time-frequency representation, probably because it is well understood, and immune to so-called "cross-terms" that sometimes make other time-frequency representations difficult to interpret. But the windowing operation required in spectrogram computation introduces an unsavory tradeoff between time resolution and frequency resolution, so spectrograms provide a time-frequency representation that is blurred in time, in frequency, or in both dimensions. The method of time-frequency reassignment is a technique for refocussing time-frequency data in a blurred representation like the spectrogram by mapping the data to time-frequency coordinates that are nearer to the true region of support of the analyzed signal.
The spectrogram as a time-frequency representation.
One of the best-known time-frequency representations is the spectrogram, defined as the squared magnitude of the short-time Fourier transform. Though the short-time phase spectrum is known to contain important temporal information about the signal, this information is difficult to interpret, so typically, only the short-time magnitude spectrum is considered in short-time spectral analysis.
As a time-frequency representation, the spectrogram has relatively poor resolution. Time and frequency resolution are governed by the choice of analysis window and greater concentration in one domain is accompanied by greater smearing in the other.
A time-frequency representation having improved resolution, relative to the spectrogram, is the Wigner–Ville distribution, which may be interpreted as a short-time Fourier transform with a window function that is perfectly matched to the signal. The Wigner–Ville distribution is highly concentrated in time and frequency, but it is also highly nonlinear and non-local. Consequently, this
distribution is very sensitive to noise, and generates cross-components that often mask the components of interest, making it difficult to extract useful information concerning the distribution of energy in multi-component signals.
Cohen's class of bilinear time-frequency representations is a class of "smoothed" Wigner–Ville distributions, employing a smoothing kernel that can reduce the sensitivity of the distribution to noise and suppress cross-components, at the expense of smearing the distribution in time and frequency. This smearing causes the distribution to be non-zero in regions where the true Wigner–Ville distribution shows no energy.
The spectrogram is a member of Cohen's class. It is a smoothed Wigner–Ville distribution with the smoothing kernel equal to the Wigner–Ville distribution of the analysis window. The method of reassignment smooths the Wigner–Ville distribution, but then refocuses the distribution back to the true regions of support of the signal components. The method has been shown to reduce time and frequency smearing of any member of Cohen's class.
In the case of the reassigned spectrogram, the short-time phase spectrum is used to correct the nominal time and frequency coordinates of the spectral data, and map it back nearer to the true regions of support of the analyzed signal.
The method of reassignment.
Pioneering work on the method of reassignment was published by Kodera, Gendrin, and de Villedary under the name of "Modified Moving Window Method". Their technique enhances the resolution in time and frequency of the classical Moving Window Method (equivalent to the spectrogram) by assigning to each data point a new time-frequency coordinate that better-reflects the distribution of energy in the analyzed signal.
In the classical moving window method, a time-domain signal, formula_0 is decomposed into a set of coefficients, formula_1, based on a set of elementary signals, formula_2, defined
formula_3
where formula_4 is a (real-valued) lowpass kernel function, like the window function in the short-time Fourier transform. The coefficients in this decomposition are defined
formula_5
where formula_6 is the magnitude, and formula_7 the phase, of formula_8, the Fourier transform of the signal formula_0 shifted in time by formula_9 and windowed by formula_4.
formula_0 can be reconstructed from the moving window coefficients by
formula_10
For signals having magnitude spectra, formula_11, whose time variation is slow relative to the phase variation, the maximum contribution to the reconstruction integral comes from the vicinity of the point formula_12 satisfying the phase stationarity condition
formula_13
or equivalently, around the point formula_14 defined by
formula_15
This phenomenon is known in such fields as optics as the principle of stationary phase, which states that for periodic or quasi-periodic signals, the variation of the Fourier phase spectrum not attributable to periodic oscillation is slow with respect to time in the vicinity of the frequency of oscillation, and in surrounding regions the variation is relatively rapid. Analogously, for impulsive signals, that are concentrated in time, the variation of the phase spectrum is slow with respect to frequency near the time of the impulse, and in surrounding regions the variation is relatively rapid.
In reconstruction, positive and negative contributions to the synthesized waveform cancel, due to destructive interference, in frequency regions of rapid phase variation. Only regions of slow phase variation (stationary phase) will contribute significantly to the reconstruction, and the maximum contribution (center of gravity) occurs at the point where the phase is changing most slowly with respect to time and frequency.
The time-frequency coordinates thus computed are equal to the local group delay, formula_16 and local instantaneous frequency, formula_17 and are computed from the phase of the short-time Fourier transform, which is normally ignored when constructing the spectrogram. These quantities are "local" in the sense that they represent a windowed and filtered signal that is localized in time and frequency, and are not global properties of the signal under analysis.
The modified moving window method, or method of reassignment, changes (reassigns) the point of attribution of formula_18 to this point of maximum contribution formula_19, rather than to the point formula_12 at which it is computed. This point is sometimes called the "center of gravity" of the distribution, by way of analogy to a mass distribution. This analogy is a useful reminder that the attribution of spectral energy to the center of gravity of its distribution only makes sense when there is energy to attribute, so the method of reassignment has no meaning at points where the spectrogram is zero-valued.
Efficient computation of reassigned times and frequencies.
In digital signal processing, it is most common to sample the time and frequency domains. The discrete Fourier transform is used to compute samples formula_20 of the Fourier transform from samples formula_21 of a time domain signal. The reassignment operations proposed by Kodera et al. cannot be applied directly to the discrete short-time Fourier transform data, because partial derivatives cannot be computed directly on data that is discrete in time and frequency, and it has been suggested that this difficulty has been the primary barrier to wider use of the method of reassignment.
It is possible to approximate the partial derivatives using finite differences. For example, the phase spectrum can be evaluated at two nearby times, and the partial derivative with respect to time be approximated as the difference between the two values divided by the time difference, as in
formula_22
For sufficiently small values of formula_23 and formula_24 and provided that the phase difference is appropriately "unwrapped", this finite-difference method yields good approximations to the partial derivatives of phase, because in regions of the spectrum in which the evolution of the phase is dominated by rotation due to sinusoidal oscillation of a single, nearby component, the phase is a linear function.
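A minimal numerical sketch of this finite-difference idea, for a single pure tone, is given below. The sampling rate, window, and test frequency are arbitrary choices; note that with the phase referenced to the window start, as in this sketch, the phase difference over one sample already gives the full instantaneous frequency rather than an offset from the bin frequency.

```python
import numpy as np

fs, f0, N = 8000.0, 1200.0, 512
x = np.cos(2 * np.pi * f0 * np.arange(4096) / fs)
h = np.hanning(N)
k = round(f0 * N / fs)                  # nearest DFT bin
w = 2 * np.pi * k / N                   # bin frequency (rad/sample)

def X(n0):
    """One short-time Fourier transform sample at bin k, window starting at n0."""
    n = np.arange(N)
    return np.sum(x[n0:n0 + N] * h * np.exp(-1j * w * n))

# phase difference over a one-sample hop, wrapped to (-pi, pi]
dphi = np.angle(X(1001) * np.conj(X(1000)))
print(dphi * fs / (2 * np.pi))          # ~1200 Hz, not the bin's 1203.125 Hz
```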
Independently of Kodera "et al.", Nelson arrived at a similar method for improving the time-frequency precision of short-time spectral data from partial derivatives of the short-time phase spectrum. It is easily shown that Nelson's "cross spectral surfaces" compute an approximation of the derivatives that is equivalent to the finite differences method.
Auger and Flandrin showed that the method of reassignment, proposed in the context of the spectrogram by Kodera et al., could be extended to any member of Cohen's class of time-frequency representations by generalizing the reassignment operations to
formula_25
where formula_26 is the Wigner–Ville distribution of formula_0, and formula_27 is the kernel function that defines the distribution. They further described an efficient method for computing the times and frequencies for the reassigned spectrogram accurately, without explicitly computing the partial derivatives of phase.
In the case of the spectrogram, the reassignment operations can be computed by
formula_28
where formula_29 is the short-time Fourier transform computed using an analysis window formula_30 is the short-time Fourier transform computed using a time-weighted analysis window formula_31 and formula_32 is the short-time Fourier transform computed using a time-derivative analysis window formula_33.
Using the auxiliary window functions formula_34 and formula_35, the reassignment operations can be computed at any time-frequency coordinate formula_12 from an algebraic combination of three Fourier transforms evaluated at formula_12. Since these algorithms operate only on short-time spectral data evaluated at a single time and frequency, and do not explicitly compute any derivatives, this gives an efficient method of computing the reassigned discrete short-time Fourier transform.
One constraint in this method of computation is that the formula_36 must be non-zero. This is not much of a restriction, since the reassignment operation itself implies that there is some energy to reassign, and has no meaning when the distribution is zero-valued.
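The following sketch implements these operations for a test chirp using three short-time Fourier transforms. It follows the formulas quoted above, but the window, hop size, and the sign and centering conventions are assumptions of this sketch (published implementations differ in such details), so it should be read as an illustration rather than a reference implementation.

```python
import numpy as np

# Reassigned time/frequency estimates from three STFTs (windows h, t*h, dh/dt).
# Because the STFT definition used in this article applies the window as
# h(t - tau), the auxiliary windows below are built on (t - tau) as well.
fs, N, hop = 8000, 512, 64
ts = np.arange(8192) / fs
x = np.cos(2 * np.pi * (500 * ts + 100 * ts**2))   # chirp, instantaneous freq 500 + 200*t Hz

m = np.arange(N)
c = (N - 1) / 2                                    # window centre (samples)
h = np.hanning(N)
th = ((c - m) / fs) * h                            # (t - tau) * h(t - tau), in seconds
dh = -np.gradient(h, 1 / fs)                       # h'(t - tau) from the sampled gradient

def stft(sig, win):
    frames = np.array([sig[s:s + N] * win for s in range(0, len(sig) - N + 1, hop)])
    return np.fft.rfft(frames, axis=1)

Xh, Xt, Xd = stft(x, h), stft(x, th), stft(x, dh)
mag2 = np.abs(Xh)**2 + 1e-16                       # avoid division by zero
t_grid = (np.arange(Xh.shape[0]) * hop + c) / fs   # nominal frame-centre times (s)
f_grid = np.fft.rfftfreq(N, 1 / fs)                # nominal bin frequencies (Hz)

t_hat = t_grid[:, None] - np.real(Xt * np.conj(Xh)) / mag2
f_hat = f_grid[None, :] + np.imag(Xd * np.conj(Xh)) / (2 * np.pi * mag2)

# For bins carrying significant energy, (t_hat, f_hat) should lie near the
# chirp's instantaneous-frequency law f = 500 + 200*t rather than on the grid.
i = 60
k = int(round((500 + 200 * t_grid[i]) * N / fs))
print(t_hat[i, k], f_hat[i, k], 500 + 200 * t_hat[i, k])
```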
Separability.
The short-time Fourier transform can often be used to estimate the amplitudes and phases of the individual components in a "multi-component" signal, such as a quasi-harmonic musical instrument tone. Moreover, the time and frequency reassignment operations can be used to sharpen the representation by attributing the spectral energy reported by the short-time Fourier transform to the point that is the local center of gravity of the complex energy distribution.
For a signal consisting of a single component, the instantaneous frequency can be estimated from the partial derivatives of phase of any short-time Fourier transform channel that passes the component. If the signal is to be decomposed into many components,
formula_37
and the instantaneous frequency of each component is defined as the derivative of its phase with respect to time, that is,
formula_38
then the instantaneous frequency of each individual component can be computed from the phase of the response of a filter that passes that component, provided that no more than one component lies in the passband of the filter.
This is the property, in the frequency domain, that Nelson called "separability" and is required of all signals so analyzed. If this property is not met, then the desired multi-component decomposition cannot be achieved, because the parameters of individual components cannot be estimated from the short-time Fourier transform. In such cases, a different analysis window must be chosen so that the separability criterion is satisfied.
If the components of a signal are separable in frequency with respect to a particular short-time spectral analysis window, then the output of each short-time Fourier transform filter is a filtered version of, at most, a single dominant (having significant energy) component, and so the derivative, with respect to time, of the phase of the formula_39 is equal to the derivative with respect to time, of the phase of the dominant component at formula_40 Therefore, if a component, formula_41 having instantaneous frequency formula_42 is the dominant component in the vicinity of formula_43 then the instantaneous frequency of that component can be computed from the phase of the short-time Fourier transform evaluated at formula_40 That is,
formula_44
Just as each bandpass filter in the short-time Fourier transform filterbank may pass at most a single complex exponential component, two temporal events must be sufficiently separated in time that they do not lie in the same windowed segment of the input signal. This is the property of separability in the time domain, and is equivalent to requiring that the time between two events be greater than the length of the impulse response of the short-time Fourier transform filters, the span of non-zero samples in formula_45
In general, there is an infinite number of equally valid decompositions for a multi-component signal. The separability property must be considered in the context of the desired decomposition. For example, in the analysis of a speech signal, an analysis window that is long relative to the time between glottal pulses is sufficient to separate harmonics, but the individual glottal pulses will be smeared, because many pulses are covered by each window (that is, the individual pulses are not separable, in time, by the chosen analysis window). An analysis window that is much shorter than the time between glottal pulses may resolve the glottal pulses, because no window spans more than one pulse, but the harmonic frequencies are smeared together, because the main lobe of the analysis window spectrum is wider than the spacing between the harmonics (that is, the harmonics are not separable, in frequency, by the chosen analysis window).
Extensions.
Consensus complex reassignment.
Gardner and Magnasco (2006) argue that the auditory nerves may use a form of the reassignment method to process sounds. These nerves are known for preserving timing (phase) information better than they preserve magnitudes. The authors propose a variation of reassignment with complex values (i.e. both phase and magnitude) and show that it produces sparse outputs like auditory nerves do. By running this reassignment with windows of different bandwidths (see discussion in the section above), a "consensus" that captures multiple kinds of signals is found, again like the auditory system. They argue that the algorithm is simple enough for neurons to implement.
References.
| [
{
"math_id": 0,
"text": "x(t)"
},
{
"math_id": 1,
"text": "\\epsilon( t, \\omega )"
},
{
"math_id": 2,
"text": "h_{\\omega}(t)"
},
{
"math_id": 3,
"text": "h_{\\omega}(t) = h(t) e^{j \\omega t} "
},
{
"math_id": 4,
"text": "h(t)"
},
{
"math_id": 5,
"text": "\\begin{align}\n\\epsilon( t, \\omega ) &= \\int x(\\tau) h( t - \\tau ) e^{ -j \\omega \\left[ \\tau - t \\right]} d\\tau \\\\\n\t&= e^{ j \\omega t} \\int x(\\tau) h( t - \\tau ) e^{ -j \\omega \\tau } d\\tau \\\\\n\t&= e^{ j \\omega t} X(t, \\omega) \\\\\n\t&= X_{t}(\\omega) \\\\\n &= M_{t}(\\omega) e^{j \\phi_{\\tau}(\\omega)}\n\\end{align}"
},
{
"math_id": 6,
"text": "M_{t}(\\omega)"
},
{
"math_id": 7,
"text": "\\phi_{\\tau}(\\omega)"
},
{
"math_id": 8,
"text": "X_{t}(\\omega)"
},
{
"math_id": 9,
"text": "t"
},
{
"math_id": 10,
"text": "\\begin{align}\nx(t) & = \\iint X_{\\tau}(\\omega) h^{*}_{\\omega}(\\tau - t) d\\omega d\\tau \\\\\n \t& = \\iint X_{\\tau}(\\omega) h( \\tau - t ) e^{ -j \\omega \\left[ \\tau - t \\right]} d\\omega d\\tau \\\\\n\t&= \\iint M_{\\tau}(\\omega) e^{j \\phi_{\\tau}(\\omega)} h( \\tau - t ) e^{ -j \\omega \\left[ \\tau - t \\right]} d\\omega d\\tau \\\\\n\t&= \\iint M_{\\tau}(\\omega) h( \\tau - t ) e^{ j \\left[ \\phi_{\\tau}(\\omega) - \\omega \\tau+ \\omega t \\right] } d\\omega d\\tau\n\\end{align}"
},
{
"math_id": 11,
"text": "M(t,\\omega)"
},
{
"math_id": 12,
"text": "t,\\omega"
},
{
"math_id": 13,
"text": "\\begin{align}\n\\frac{\\partial}{\\partial \\omega} \\left[ \\phi_{\\tau}(\\omega) - \\omega \\tau + \\omega t\\right] & = 0 \\\\\n\\frac{\\partial}{\\partial \\tau} \\left[ \\phi_{\\tau}(\\omega) - \\omega \\tau + \\omega t \\right] & = 0 \n\\end{align}"
},
{
"math_id": 14,
"text": "\\hat{t}, \\hat{\\omega}"
},
{
"math_id": 15,
"text": "\\begin{align}\n\\hat{t}(\\tau, \\omega) & = \\tau - \\frac{\\partial \\phi_{\\tau}(\\omega)}{\\partial \\omega} = -\\frac{\\partial \\phi(\\tau, \\omega)}{\\partial \\omega} \\\\\n\\hat{\\omega}(\\tau, \\omega) & = \\frac{\\partial \\phi_{\\tau}(\\omega)}{\\partial \\tau} = \\omega + \\frac{\\partial \\phi(\\tau, \\omega)}{\\partial \\tau}\n\\end{align}"
},
{
"math_id": 16,
"text": "\\hat{t}_{g}(t,\\omega),"
},
{
"math_id": 17,
"text": "\\hat{\\omega}_{i}(t,\\omega),"
},
{
"math_id": 18,
"text": "\\epsilon(t,\\omega)"
},
{
"math_id": 19,
"text": "\\hat{t}(t,\\omega), \\hat{\\omega}(t,\\omega)"
},
{
"math_id": 20,
"text": "X(k)"
},
{
"math_id": 21,
"text": "x(n)"
},
{
"math_id": 22,
"text": "\\begin{align}\n\\frac{\\partial \\phi(t, \\omega)}{\\partial t} & \\approx \\frac{1}{\\Delta t} \\left[ \\phi \\left (t + \\frac{\\Delta t}{2}, \\omega \\right ) - \\phi \\left (t - \\frac{\\Delta t}{2}, \\omega \\right ) \\right] \\\\\n\\frac{\\partial \\phi(t, \\omega)}{\\partial \\omega} & \\approx \\frac{1}{\\Delta \\omega}\\left[ \\phi \\left (t, \\omega+ \\frac{\\Delta \\omega}{2} \\right ) - \\phi \\left (t, \\omega-\\frac{\\Delta \\omega}{2} \\right ) \\right] \n\\end{align}"
},
{
"math_id": 23,
"text": "\\Delta t"
},
{
"math_id": 24,
"text": "\\Delta \\omega,"
},
{
"math_id": 25,
"text": "\\begin{align}\n\\hat{t} (t,\\omega) &= t - \\frac{\\iint \\tau \\cdot W_{x}(t-\\tau,\\omega -\\nu) \\cdot \\Phi(\\tau,\\nu) d\\tau d\\nu} {\\iint W_{x} \\left (t-\\tau,\\omega -\\nu \\right ) \\cdot \\Phi (\\tau,\\nu) d\\tau d\\nu } \\\\\n\\hat{\\omega} (t,\\omega) & = \\omega - \\frac{\\iint \\nu \\cdot W_{x}(t-\\tau,\\omega -\\nu) \\cdot \\Phi(\\tau,\\nu) d\\tau d\\nu} {\\iint W_{x}(t-\\tau,\\omega -\\nu) \\cdot \\Phi(\\tau,\\nu) d\\tau d\\nu}\n\\end{align}"
},
{
"math_id": 26,
"text": "W_{x}(t,\\omega)"
},
{
"math_id": 27,
"text": "\\Phi(t,\\omega)"
},
{
"math_id": 28,
"text": "\\begin{align}\n \\hat{t} (t,\\omega) & = t - \\Re \\left \\{ \\frac{ X_{\\mathcal{T}h}(t,\\omega) \\cdot X^*(t,\\omega) }{ | X(t,\\omega) |^2 } \\right \\} \\\\\n \\hat{\\omega}(t,\\omega) & = \\omega + \\Im \\left \\{ \\frac{ X_{\\mathcal{D}h}(t,\\omega) \\cdot X^*(t,\\omega) }{ | X(t,\\omega) |^2 } \\right \\} \n\\end{align}"
},
{
"math_id": 29,
"text": "X(t,\\omega)"
},
{
"math_id": 30,
"text": "h(t), X_{\\mathcal{T}h}(t,\\omega)"
},
{
"math_id": 31,
"text": "h_{\\mathcal{T}}(t) = t \\cdot h(t)"
},
{
"math_id": 32,
"text": "X_{\\mathcal{D}h}(t,\\omega)"
},
{
"math_id": 33,
"text": "h_{\\mathcal{D}}(t) = \\tfrac{d}{dt}h(t)"
},
{
"math_id": 34,
"text": "h_{\\mathcal{T}}(t)"
},
{
"math_id": 35,
"text": "h_{\\mathcal{D}}(t)"
},
{
"math_id": 36,
"text": "| X(t,\\omega) |^2"
},
{
"math_id": 37,
"text": "x(t) = \\sum_{n} A_{n}(t) e^{j \\theta_{n}(t)}"
},
{
"math_id": 38,
"text": "\\omega_{n}(t) = \\frac{d \\theta_{n}(t)}{d t},"
},
{
"math_id": 39,
"text": "X(t,\\omega_0)"
},
{
"math_id": 40,
"text": "\\omega_0."
},
{
"math_id": 41,
"text": "x_n(t),"
},
{
"math_id": 42,
"text": "\\omega_{n}(t)"
},
{
"math_id": 43,
"text": "\\omega_0,"
},
{
"math_id": 44,
"text": "\\begin{align}\n\\omega_{n}(t) &= \\frac{\\partial}{\\partial t} \\arg\\{ x_{n}(t) \\} \\\\\n\t&= \\frac{\\partial }{\\partial t} \\arg\\{ X(t,\\omega_{0}) \\}\n\\end{align}"
},
{
"math_id": 45,
"text": "h(t)."
}
] | https://en.wikipedia.org/wiki?curid=10701883 |
10702027 | Harmonic wavelet transform | In the mathematics of signal processing, the harmonic wavelet transform, introduced by David Edward Newland in 1993, is a wavelet-based linear transformation of a given function into a time-frequency representation. It combines advantages of the short-time Fourier transform and the continuous wavelet transform. It can be expressed in terms of repeated Fourier transforms, and its discrete analogue can be computed efficiently using a fast Fourier transform algorithm.
Harmonic wavelets.
The transform uses a family of "harmonic" wavelets indexed by two integers "j" (the "level" or "order") and "k" (the "translation"), given by formula_0, where
formula_1
These functions are orthogonal, and their Fourier transforms are a square window function (constant in a certain octave band and zero elsewhere). In particular, they satisfy:
formula_2
formula_3
where "*" denotes complex conjugation and formula_4 is Kronecker's delta.
As the order "j" increases, these wavelets become more localized in Fourier space (frequency) and in higher frequency bands, and conversely become less localized in time ("t"). Hence, when they are used as a basis for expanding an arbitrary function, they represent behaviors of the function on different timescales (and at different time offsets for different "k").
However, it is possible to combine all of the negative orders ("j" < 0) together into a single family of "scaling" functions formula_5 where
formula_6
The function "φ" is orthogonal to itself for different "k" and is also orthogonal to the wavelet functions for non-negative "j":
formula_7
formula_8
formula_9
formula_10
In the harmonic wavelet transform, therefore, an arbitrary real- or complex-valued function formula_11 (in L2) is expanded in the basis of the harmonic wavelets (for all integers "j") and their complex conjugates:
formula_12
or alternatively in the basis of the wavelets for non-negative "j" supplemented by the scaling functions "φ":
formula_13
The expansion coefficients can then, in principle, be computed using the orthogonality relationships:
formula_14
For a real-valued function "f"("t"), formula_15 and formula_16, so one can cut the number of independent expansion coefficients in half.
This expansion has the property, analogous to Parseval's theorem, that:
formula_17
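As a complement to the FFT-based computation mentioned in the next paragraph, the sketch below builds a discrete analogue of the transform by partitioning an FFT into octave bands and checks that the original signal is recovered exactly. The function names, normalization, and the handling of the DC and Nyquist terms are choices made for illustration rather than a canonical definition.

```python
import numpy as np

def dhwt(x):
    """Discrete harmonic wavelet transform of a real signal of length 2**m.

    Returns the DC term, one complex coefficient array per octave level,
    and the Nyquist term (an illustrative octave-band FFT construction).
    """
    N = len(x)
    m = int(np.log2(N))
    assert 2**m == N, "length must be a power of two"
    X = np.fft.fft(x)
    levels = []
    for j in range(m - 1):                    # octave bands [2**j, 2**(j+1))
        levels.append(np.fft.ifft(X[2**j : 2**(j + 1)]))   # 2**j coefficients at level j
    return X[0], levels, X[N // 2]

def idhwt(dc, levels, nyq):
    """Inverse transform: rebuild the spectrum from the octave blocks."""
    m = len(levels) + 1
    N = 2**m
    X = np.zeros(N, dtype=complex)
    X[0] = dc
    for j, c in enumerate(levels):
        X[2**j : 2**(j + 1)] = np.fft.fft(c)
    X[N // 2] = nyq
    # negative-frequency bins follow from conjugate symmetry for real signals
    X[N // 2 + 1 :] = np.conj(X[1 : N // 2][::-1])
    return np.fft.ifft(X).real

rng = np.random.default_rng(0)
x = rng.standard_normal(256)
dc, levels, nyq = dhwt(x)
assert np.allclose(idhwt(dc, levels, nyq), x)   # perfect reconstruction
```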
Rather than computing the expansion coefficients directly from the orthogonality relationships, however, it is possible to do so using a sequence of Fourier transforms. This is much more efficient in the discrete analogue of this transform (discrete "t"), where it can exploit fast Fourier transform algorithms. | [
{
"math_id": 0,
"text": "w(2^j t - k) \\!"
},
{
"math_id": 1,
"text": "w(t) = \\frac{e^{i4\\pi t} - e^{i 2\\pi t}}{i 2\\pi t} ."
},
{
"math_id": 2,
"text": "\\int_{-\\infty}^\\infty w^*(2^j t - k) \\cdot w(2^{j'} t - k') \\, dt = \\frac{1}{2^j} \\delta_{j,j'} \\delta_{k,k'}"
},
{
"math_id": 3,
"text": "\\int_{-\\infty}^\\infty w(2^j t - k) \\cdot w(2^{j'} t - k') \\, dt = 0"
},
{
"math_id": 4,
"text": "\\delta"
},
{
"math_id": 5,
"text": "\\varphi(t - k)"
},
{
"math_id": 6,
"text": "\\varphi(t) = \\frac{e^{i2\\pi t} - 1}{i 2\\pi t}."
},
{
"math_id": 7,
"text": "\\int_{-\\infty}^\\infty \\varphi^*(t - k) \\cdot \\varphi(t - k') \\, dt = \\delta_{k,k'}"
},
{
"math_id": 8,
"text": "\\int_{-\\infty}^\\infty w^*(2^j t - k) \\cdot \\varphi(t - k') \\, dt = 0\\text{ for }j \\geq 0"
},
{
"math_id": 9,
"text": "\\int_{-\\infty}^\\infty \\varphi(t - k) \\cdot \\varphi(t - k') \\, dt = 0"
},
{
"math_id": 10,
"text": "\\int_{-\\infty}^\\infty w(2^j t - k) \\cdot \\varphi(t - k') \\, dt = 0\\text{ for }j \\geq 0."
},
{
"math_id": 11,
"text": "f(t)"
},
{
"math_id": 12,
"text": "f(t) = \\sum_{j=-\\infty}^\\infty \\sum_{k=-\\infty}^\\infty \\left[ a_{j,k} w(2^j t - k) + \\tilde{a}_{j,k} w^*(2^j t - k)\\right],"
},
{
"math_id": 13,
"text": "f(t) = \\sum_{k=-\\infty}^\\infty \\left[ a_k \\varphi(t - k) + \\tilde{a}_k \\varphi^*(t - k) \\right] + \\sum_{j=0}^\\infty \\sum_{k=-\\infty}^\\infty \\left[ a_{j,k} w(2^j t - k) + \\tilde{a}_{j,k} w^*(2^j t - k)\\right] ."
},
{
"math_id": 14,
"text": "\n\\begin{align}\na_{j,k} & {} = 2^j \\int_{-\\infty}^\\infty f(t) \\cdot w^*(2^j t - k) \\, dt \\\\\n\\tilde{a}_{j,k} & {} = 2^j \\int_{-\\infty}^\\infty f(t) \\cdot w(2^j t - k) \\, dt \\\\\na_k & {} = \\int_{-\\infty}^\\infty f(t) \\cdot \\varphi^*(t - k) \\, dt \\\\\n\\tilde{a}_k & {} = \\int_{-\\infty}^\\infty f(t) \\cdot \\varphi(t - k) \\, dt.\n\\end{align}\n"
},
{
"math_id": 15,
"text": "\\tilde{a}_{j,k} = a_{j,k}^*"
},
{
"math_id": 16,
"text": "\\tilde{a}_k = a_k^*"
},
{
"math_id": 17,
"text": "\n\\begin{align}\n& \\sum_{j=-\\infty}^\\infty \\sum_{k=-\\infty}^\\infty 2^{-j} \\left( |a_{j,k}|^2 + |\\tilde{a}_{j,k}|^2 \\right) \\\\\n& {} = \\sum_{k=-\\infty}^\\infty \\left( |a_k|^2 + |\\tilde{a}_k|^2 \\right) + \\sum_{j=0}^\\infty \\sum_{k=-\\infty}^\\infty 2^{-j} \\left( |a_{j,k}|^2 + |\\tilde{a}_{j,k}|^2 \\right) \\\\\n& {} = \\int_{-\\infty}^\\infty |f(x)|^2 \\, dx.\n\\end{align}\n"
}
] | https://en.wikipedia.org/wiki?curid=10702027 |
1070326 | Total derivative | Type of derivative in mathematics
In mathematics, the total derivative of a function f at a point is the best linear approximation near this point of the function with respect to its arguments. Unlike partial derivatives, the total derivative approximates the function with respect to all of its arguments, not just a single one. In many situations, this is the same as considering all partial derivatives simultaneously. The term "total derivative" is primarily used when f is a function of several variables, because when f is a function of a single variable, the total derivative is the same as the ordinary derivative of the function.
The total derivative as a linear map.
Let formula_0 be an open subset. Then a function formula_1 is said to be (totally) differentiable at a point formula_2 if there exists a linear transformation formula_3 such that
formula_4
The linear map formula_5 is called the (total) derivative or (total) differential of formula_6 at formula_7. Other notations for the total derivative include formula_8 and formula_9. A function is (totally) differentiable if its total derivative exists at every point in its domain.
Conceptually, the definition of the total derivative expresses the idea that formula_5 is the best linear approximation to formula_6 at the point formula_7. This can be made precise by quantifying the error in the linear approximation determined by formula_5. To do so, write
formula_10
where formula_11 equals the error in the approximation. To say that the derivative of formula_6 at formula_7 is formula_5 is equivalent to the statement
formula_12
where formula_13 is little-o notation and indicates that formula_11 is much smaller than formula_14 as formula_15. The total derivative formula_5 is the "unique" linear transformation for which the error term is this small, and this is the sense in which it is the best linear approximation to formula_6.
The function formula_6 is differentiable if and only if each of its components formula_16 is differentiable, so when studying total derivatives, it is often possible to work one coordinate at a time in the codomain. However, the same is not true of the coordinates in the domain. It is true that if formula_6 is differentiable at formula_7, then each partial derivative formula_17 exists at formula_7. The converse does not hold: it can happen that all of the partial derivatives of formula_6 at formula_7 exist, but formula_6 is not differentiable at formula_7. This means that the function is very "rough" at formula_7, to such an extreme that its behavior cannot be adequately described by its behavior in the coordinate directions. When formula_6 is not so rough, this cannot happen. More precisely, if all the partial derivatives of formula_6 at formula_7 exist and are continuous in a neighborhood of formula_7, then formula_6 is differentiable at formula_7. When this happens, then in addition, the total derivative of formula_6 is the linear transformation corresponding to the Jacobian matrix of partial derivatives at that point.
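The defining limit can be observed numerically: for a differentiable map, the error of the linear approximation given by the Jacobian shrinks faster than the norm of the increment. The function, point, and direction in the sketch below are arbitrary choices for illustration.

```python
import numpy as np

def f(v):
    x, y = v
    return np.array([x**2 * y, np.sin(x) + y**3])

def jacobian(v):
    x, y = v
    return np.array([[2 * x * y, x**2],
                     [np.cos(x), 3 * y**2]])

a = np.array([1.0, 2.0])
for eps in [1e-1, 1e-2, 1e-3, 1e-4]:
    h = eps * np.array([0.6, -0.8])             # ||h|| = eps
    err = f(a + h) - f(a) - jacobian(a) @ h     # error of the linear approximation
    print(eps, np.linalg.norm(err) / eps)       # ratio tends to 0 as eps -> 0
```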
The total derivative as a differential form.
When the function under consideration is real-valued, the total derivative can be recast using differential forms. For example, suppose that formula_18 is a differentiable function of variables formula_19. The total derivative of formula_6 at formula_7 may be written in terms of its Jacobian matrix, which in this instance is a row matrix:
formula_20
The linear approximation property of the total derivative implies that if
formula_21
is a small vector (where the formula_22 denotes transpose, so that this vector is a column vector), then
formula_23
Heuristically, this suggests that if formula_24 are infinitesimal increments in the coordinate directions, then
formula_25
In fact, the notion of the infinitesimal, which is merely symbolic here, can be equipped with extensive mathematical structure. Techniques such as the theory of differential forms effectively give analytical and algebraic descriptions of objects like infinitesimal increments, formula_26. For instance, formula_26 may be interpreted as a linear functional on the vector space formula_27. Evaluating formula_26 at a vector formula_28 in formula_27 measures how much formula_28 points in the formula_29th coordinate direction. The total derivative formula_5 is a linear combination of linear functionals and hence is itself a linear functional. The evaluation formula_30 measures how much formula_6 points in the direction determined by formula_28 at formula_7, and this direction is the gradient. This point of view makes the total derivative an instance of the exterior derivative.
Suppose now that formula_6 is a vector-valued function, that is, formula_31. In this case, the components formula_32 of formula_6 are real-valued functions, so they have associated differential forms formula_33. The total derivative formula_34 amalgamates these forms into a single object and is therefore an instance of a vector-valued differential form.
The chain rule for total derivatives.
The chain rule has a particularly elegant statement in terms of total derivatives. It says that, for two functions formula_6 and formula_35, the total derivative of the composite function formula_36 at formula_7 satisfies
formula_37
If the total derivatives of formula_6 and formula_35 are identified with their Jacobian matrices, then the composite on the right-hand side is simply matrix multiplication. This is enormously useful in applications, as it makes it possible to account for essentially arbitrary dependencies among the arguments of a composite function.
Example: Differentiation with direct dependencies.
Suppose that "f" is a function of two variables, "x" and "y". If these two variables are independent, so that the domain of "f" is formula_38, then the behavior of "f" may be understood in terms of its partial derivatives in the "x" and "y" directions. However, in some situations, "x" and "y" may be dependent. For example, it might happen that "f" is constrained to a curve formula_39. In this case, we are actually interested in the behavior of the composite function formula_40. The partial derivative of "f" with respect to "x" does not give the true rate of change of "f" with respect to changing "x" because changing "x" necessarily changes "y". However, the chain rule for the total derivative takes such dependencies into account. Write formula_41. Then, the chain rule says
formula_42
By expressing the total derivative using Jacobian matrices, this becomes:
formula_43
Suppressing the evaluation at formula_44 for legibility, we may also write this as
formula_45
This gives a straightforward formula for the derivative of formula_40 in terms of the partial derivatives of formula_6 and the derivative of formula_46.
For example, suppose
formula_47
The rate of change of "f" with respect to "x" is usually the partial derivative of "f" with respect to "x"; in this case,
formula_48
However, if "y" depends on "x", the partial derivative does not give the true rate of change of "f" as "x" changes because the partial derivative assumes that "y" is fixed. Suppose we are constrained to the line
formula_49
Then
formula_50
and the total derivative of "f" with respect to "x" is
formula_51
which we see is not equal to the partial derivative formula_52. Instead of immediately substituting for "y" in terms of "x", however, we can also use the chain rule as above:
formula_53
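This worked example can be reproduced with a computer algebra system; the sketch below uses SymPy, and the substitutions at the end impose the constraint y = x.

```python
import sympy as sp

# Total derivative of f(x, y(x)) = x*y via the chain rule, then restricted to y = x.
x = sp.symbols('x')
y = sp.Function('y')(x)
f = x * y

total = sp.diff(f, x)                                   # chain rule applied automatically
print(total)                                            # x*Derivative(y(x), x) + y(x)
print(total.subs(sp.Derivative(y, x), 1).subs(y, x))    # 2*x, as computed above
```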
Example: Differentiation with indirect dependencies.
While one can often perform substitutions to eliminate indirect dependencies, the chain rule provides for a more efficient and general technique. Suppose formula_54 is a function of time formula_55 and formula_56 variables formula_57 which themselves depend on time. Then, the time derivative of formula_58 is
formula_59
The chain rule expresses this derivative in terms of the partial derivatives of formula_58 and the time derivatives of the functions formula_57:
formula_60
This expression is often used in physics for a gauge transformation of the Lagrangian, as two Lagrangians that differ only by the total time derivative of a function of time and the formula_56 generalized coordinates lead to the same equations of motion. An interesting example concerns the resolution of causality in the Wheeler–Feynman time-symmetric theory. The operator in brackets (in the final expression above) is also called the total derivative operator (with respect to formula_55).
For example, the total derivative of formula_61 is
formula_62
Here there is no formula_63 term since formula_6 itself does not depend on the independent variable formula_55 directly.
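The same computation can be checked symbolically; in the sketch below the particular function is an arbitrary illustrative choice.

```python
import sympy as sp

# Total time derivative of f(x(t), y(t)) for an illustrative choice of f.
t = sp.symbols('t')
x, y = sp.Function('x')(t), sp.Function('y')(t)
f = x**2 * sp.sin(y)

print(sp.diff(f, t))   # 2*x*sin(y)*x' + x**2*cos(y)*y', i.e. f_x*dx/dt + f_y*dy/dt
```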
Total differential equation.
A "total differential equation" is a differential equation expressed in terms of total derivatives. Since the exterior derivative is coordinate-free, in a sense that can be given a technical meaning, such equations are intrinsic and "geometric".
Application to equation systems.
In economics, it is common for the total derivative to arise in the context of a system of equations. For example, a simple supply-demand system might specify the quantity "q" of a product demanded as a function "D" of its price "p" and consumers' income "I", the latter being an exogenous variable, and might specify the quantity supplied by producers as a function "S" of its price and two exogenous resource cost variables "r" and "w". The resulting system of equations
formula_64
formula_65
determines the market equilibrium values of the variables "p" and "q". The total derivative formula_66 of "p" with respect to "r", for example, gives the sign and magnitude of the reaction of the market price to the exogenous variable "r". In the indicated system, there are a total of six possible total derivatives, also known in this context as comparative static derivatives: "dp" / "dr", "dp" / "dw", "dp" / "dI", "dq" / "dr", "dq" / "dw", and "dq" / "dI". The total derivatives are found by totally differentiating the system of equations, dividing through by, say "dr", treating "dq" / "dr" and "dp" / "dr" as the unknowns, setting "dI" = "dw" = 0, and solving the two totally differentiated equations simultaneously, typically by using Cramer's rule.
References.
| [
{
"math_id": 0,
"text": "U \\subseteq \\R^n"
},
{
"math_id": 1,
"text": "f:U \\to \\R^m"
},
{
"math_id": 2,
"text": "a\\in U"
},
{
"math_id": 3,
"text": "df_a:\\R^n \\to \\R^m"
},
{
"math_id": 4,
"text": "\\lim_{x \\to a} \\frac{\\|f(x)-f(a)-df_a(x-a)\\|}{\\|x-a\\|}=0."
},
{
"math_id": 5,
"text": "df_a"
},
{
"math_id": 6,
"text": "f"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "D_a f"
},
{
"math_id": 9,
"text": "Df(a)"
},
{
"math_id": 10,
"text": "f(a + h) = f(a) + df_a(h) + \\varepsilon(h),"
},
{
"math_id": 11,
"text": "\\varepsilon(h)"
},
{
"math_id": 12,
"text": "\\varepsilon(h) = o(\\lVert h\\rVert),"
},
{
"math_id": 13,
"text": "o"
},
{
"math_id": 14,
"text": "\\lVert h\\rVert"
},
{
"math_id": 15,
"text": "h \\to 0"
},
{
"math_id": 16,
"text": "f_i \\colon U \\to \\R"
},
{
"math_id": 17,
"text": "\\partial f/\\partial x_i"
},
{
"math_id": 18,
"text": "f \\colon \\R^n \\to \\R"
},
{
"math_id": 19,
"text": "x_1, \\ldots, x_n"
},
{
"math_id": 20,
"text": "D f_a = \\begin{bmatrix} \\frac{\\partial f}{\\partial x_1}(a) & \\cdots & \\frac{\\partial f}{\\partial x_n}(a) \\end{bmatrix}."
},
{
"math_id": 21,
"text": "\\Delta x = \\begin{bmatrix} \\Delta x_1 & \\cdots & \\Delta x_n \\end{bmatrix}^\\mathsf{T}"
},
{
"math_id": 22,
"text": "\\mathsf{T}"
},
{
"math_id": 23,
"text": "f(a + \\Delta x) - f(a) \\approx D f_a \\cdot \\Delta x = \\sum_{i=1}^n \\frac{\\partial f}{\\partial x_i}(a) \\cdot \\Delta x_i."
},
{
"math_id": 24,
"text": "dx_1, \\ldots, dx_n"
},
{
"math_id": 25,
"text": "df_a = \\sum_{i=1}^n \\frac{\\partial f}{\\partial x_i}(a) \\cdot dx_i."
},
{
"math_id": 26,
"text": "dx_i"
},
{
"math_id": 27,
"text": "\\R^n"
},
{
"math_id": 28,
"text": "h"
},
{
"math_id": 29,
"text": "i"
},
{
"math_id": 30,
"text": "df_a(h)"
},
{
"math_id": 31,
"text": "f \\colon \\R^n \\to \\R^m"
},
{
"math_id": 32,
"text": "f_i"
},
{
"math_id": 33,
"text": "df_i"
},
{
"math_id": 34,
"text": "df"
},
{
"math_id": 35,
"text": "g"
},
{
"math_id": 36,
"text": "f \\circ g"
},
{
"math_id": 37,
"text": "d(f \\circ g)_a = df_{g(a)} \\cdot dg_a."
},
{
"math_id": 38,
"text": "\\R^2"
},
{
"math_id": 39,
"text": "y = y(x)"
},
{
"math_id": 40,
"text": "f(x, y(x))"
},
{
"math_id": 41,
"text": "\\gamma(x) = (x, y(x))"
},
{
"math_id": 42,
"text": "d(f \\circ \\gamma)_{x_0} = df_{(x_0, y(x_0))} \\cdot d\\gamma_{x_0}."
},
{
"math_id": 43,
"text": "\\frac{df(x, y(x))}{dx}(x_0) = \\frac{\\partial f}{\\partial x}(x_0, y(x_0)) \\cdot \\frac{\\partial x}{\\partial x}(x_0) + \\frac{\\partial f}{\\partial y}(x_0, y(x_0)) \\cdot \\frac{\\partial y}{\\partial x}(x_0)."
},
{
"math_id": 44,
"text": "x_0"
},
{
"math_id": 45,
"text": "\\frac{df(x, y(x))}{dx} = \\frac{\\partial f}{\\partial x} \\frac{\\partial x}{\\partial x} + \\frac{\\partial f}{\\partial y} \\frac{\\partial y}{\\partial x}."
},
{
"math_id": 46,
"text": "y(x)"
},
{
"math_id": 47,
"text": "f(x,y)=xy."
},
{
"math_id": 48,
"text": "\\frac{\\partial f}{\\partial x} = y."
},
{
"math_id": 49,
"text": "y=x."
},
{
"math_id": 50,
"text": "f(x,y) = f(x,x) = x^2,"
},
{
"math_id": 51,
"text": "\\frac{df}{dx} = 2 x,"
},
{
"math_id": 52,
"text": "\\partial f/\\partial x"
},
{
"math_id": 53,
"text": "\\frac{df}{dx} = \\frac{\\partial f}{\\partial x} + \\frac{\\partial f}{\\partial y}\\frac{dy}{dx} = y+x \\cdot 1 = x+y = 2x."
},
{
"math_id": 54,
"text": "L(t,x_1,\\dots,x_n)"
},
{
"math_id": 55,
"text": "t"
},
{
"math_id": 56,
"text": "n"
},
{
"math_id": 57,
"text": "x_i"
},
{
"math_id": 58,
"text": "L"
},
{
"math_id": 59,
"text": "\\frac{dL}{dt} = \\frac{d}{dt} L \\bigl(t, x_1(t), \\ldots, x_n(t)\\bigr)."
},
{
"math_id": 60,
"text": "\\frac{dL}{dt}\n= \\frac{\\partial L}{\\partial t} + \\sum_{i=1}^n \\frac{\\partial L}{\\partial x_i}\\frac{dx_i}{dt}\n= \\biggl(\\frac{\\partial}{\\partial t} + \\sum_{i=1}^n \\frac{dx_i}{dt}\\frac{\\partial}{\\partial x_i}\\biggr)(L)."
},
{
"math_id": 61,
"text": "f(x(t),y(t))"
},
{
"math_id": 62,
"text": "\\frac{df}{dt} = { \\partial f \\over \\partial x}{dx \\over dt} + {\\partial f \\over \\partial y}{dy \\over dt }."
},
{
"math_id": 63,
"text": "\\partial f / \\partial t"
},
{
"math_id": 64,
"text": "q=D(p, I),"
},
{
"math_id": 65,
"text": "q=S(p, r, w),"
},
{
"math_id": 66,
"text": "dp/dr"
}
] | https://en.wikipedia.org/wiki?curid=1070326 |
1070395 | Magnon | Spin 1 quasiparticle; quantum of a spin wave
A magnon is a quasiparticle, a collective excitation of the electrons' spin structure in a crystal lattice. In the equivalent wave picture of quantum mechanics, a magnon can be viewed as a quantized spin wave. Magnons carry a fixed amount of energy and lattice momentum, and are spin-1, indicating that they behave as bosons.
Brief history.
The concept of a magnon was introduced in 1930 by Felix Bloch in order to explain the reduction of the spontaneous magnetization in a ferromagnet. At absolute zero temperature (0 K), a Heisenberg ferromagnet reaches the state of lowest energy (so-called ground state), in which all of the atomic spins (and hence magnetic moments) point in the same direction. As the temperature increases, more and more spins deviate randomly from the alignment, increasing the internal energy and reducing the net magnetization. If one views the perfectly magnetized state at zero temperature as the vacuum state of the ferromagnet, the low-temperature state with a few misaligned spins can be viewed as a gas of quasiparticles, in this case magnons. Each magnon reduces the total spin along the direction of magnetization by one unit of formula_0 (reduced Planck's constant) and the magnetization by formula_1, where formula_2 is the gyromagnetic ratio. This leads to Bloch's law for the temperature dependence of spontaneous magnetization:
formula_3
where formula_4 is the (material-dependent) critical temperature, and formula_5 is the magnitude of the spontaneous magnetization at absolute zero.
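As a minimal numerical sketch (not taken from the sources above), Bloch's law can be evaluated directly; the function name and the sample values of formula_5 and formula_4 below are illustrative only:

```python
def bloch_magnetization(T, M0, Tc):
    """Spontaneous magnetization M(T) = M0 * [1 - (T/Tc)**(3/2)] for 0 <= T <= Tc.
    The T**(3/2) law is most accurate well below Tc."""
    return M0 * (1.0 - (T / Tc) ** 1.5)

# Illustrative values: M0 normalized to 1, Tc = 1043 K (roughly the Curie temperature of iron)
for T in (0.0, 300.0, 700.0, 1000.0):
    print(T, round(bloch_magnetization(T, M0=1.0, Tc=1043.0), 3))
```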
The quantitative theory of magnons, quantized spin waves, was developed further by Theodore Holstein and Henry Primakoff, and then by Freeman Dyson. Using the second quantization formalism they showed that magnons behave as weakly interacting quasiparticles obeying Bose–Einstein statistics (bosons). A comprehensive treatment can be found in the solid state textbook by Charles Kittel or the early review article by Van Kranendonk and Van Vleck.
Direct experimental detection of magnons by inelastic neutron scattering in ferrite was achieved in 1957 by Bertram Brockhouse. Since then magnons have been detected in ferromagnets, ferrimagnets, and antiferromagnets.
The fact that magnons obey the Bose–Einstein statistics was confirmed by the light scattering experiments done during the 1960s through the 1980s. Classical theory predicts equal intensity of Stokes and anti-Stokes lines. However, the scattering showed that if the magnon energy is comparable to or smaller than the thermal energy, or formula_6, then the Stokes line becomes more intense, as follows from Bose–Einstein statistics. Bose–Einstein condensation of magnons was proven in an antiferromagnet at low temperatures by Nikuni "et al." and in a ferrimagnet by Demokritov "et al." at room temperature. In 2015 Uchida "et al." reported the generation of spin currents by surface plasmon resonance.
Paramagnons.
Paramagnons are magnons in magnetic materials which are in their high temperature, disordered (paramagnetic) phase. For low enough temperatures, the local atomic magnetic moments (spins) in ferromagnetic or anti-ferromagnetic compounds will become ordered. Small oscillations of the moments around their natural direction will propagate as waves (magnons). At temperatures higher than the critical temperature, long range order is lost, but spins will still align locally in patches, allowing for spin waves to propagate for short distances. These waves are known as paramagnons, and undergo diffusive (instead of ballistic or long range) transport.
The concept was first proposed based on the spin fluctuations in transition metals, by Berk and Schrieffer and Doniach and Engelsberg, to explain additional repulsion between electrons in some metals, which reduces the critical temperature for superconductivity.
Properties.
Magnon behavior can be studied with a variety of scattering techniques. Magnons behave as a Bose gas with no chemical potential. Microwave pumping can be used to excite spin waves and create additional non-equilibrium magnons which thermalize into phonons. At a critical density, a condensate is formed, which appears as the emission of monochromatic microwaves. This microwave source can be tuned with an applied magnetic field.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hbar"
},
{
"math_id": 1,
"text": "\\gamma\\hbar"
},
{
"math_id": 2,
"text": "\\gamma"
},
{
"math_id": 3,
"text": "M(T) = M_0 \\left[1 - \\left(\\frac{T}{T_c}\\right)^{3/2}\\right]"
},
{
"math_id": 4,
"text": "T_c"
},
{
"math_id": 5,
"text": "M_0"
},
{
"math_id": 6,
"text": "\\hbar \\omega < k_B T"
}
] | https://en.wikipedia.org/wiki?curid=1070395 |
10704974 | Structural risk minimization | Structural risk minimization (SRM) is an inductive principle of use in machine learning. Commonly in machine learning, a generalized model must be selected from a finite data set, with the consequent problem of overfitting – the model becoming too strongly tailored to the particularities of the training set and generalizing poorly to new data. The SRM principle addresses this problem by balancing the model's complexity against its success at fitting the training data. This principle was first set out in a 1974 book by Vladimir Vapnik and Alexey Chervonenkis and uses the VC dimension.
In practical terms, structural risk minimization is implemented by minimizing formula_0, where formula_1 is the training error, the function formula_2 is called a regularization function, and formula_3 is a constant. formula_2 is chosen such that it takes large values on parameters formula_4 that belong to high-capacity subsets of the parameter space. Minimizing formula_2 in effect limits the capacity of the accessible subsets of the parameter space, thereby controlling the trade-off between minimizing the training error and minimizing the expected gap between the training error and the test error.
The SRM problem can be formulated in terms of data. Given n data points consisting of data x and labels y, the objective formula_5 is often expressed in the following manner:
formula_6
The first term is the mean squared error (MSE) between the value of the learned model, formula_7, and the given labels formula_8. This term is the training error, formula_1, that was discussed earlier. The second term places a prior over the weights that penalizes large weights. The trade-off coefficient, formula_9, is a hyperparameter that places more or less importance on the regularization term. Larger formula_9 shrinks the weights toward zero at the expense of a higher training MSE, and smaller formula_9 relaxes the regularization, allowing the model to fit the data more closely. Note that as formula_10 the weights become zero, and as formula_11, the model typically suffers from overfitting.
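A minimal sketch of this objective in Python follows. It illustrates the formula above for a linear model formula_7; the variable names and the toy data are ours, not from any cited source:

```python
import numpy as np

def srm_objective(theta, X, y, lam):
    """J(theta) = (1/2n) * sum((X @ theta - y)**2) + (lam/2) * sum(theta**2)."""
    n = len(y)
    residuals = X @ theta - y                       # h_theta(x^i) - y^i for each data point
    mse_term = (residuals ** 2).sum() / (2 * n)     # training-error term
    reg_term = (lam / 2) * (theta ** 2).sum()       # capacity-penalty (regularization) term
    return mse_term + reg_term

# Toy example: 5 data points with 2 features
rng = np.random.default_rng(0)
X = rng.normal(size=(5, 2))
y = X @ np.array([1.0, -2.0]) + 0.1 * rng.normal(size=5)
print(srm_objective(np.array([0.5, -1.0]), X, y, lam=0.1))
```

In practice the trade-off described above is usually chosen by sweeping formula_9 over a grid and keeping the value with the best held-out error.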
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "E_{train} + \\beta H(W)"
},
{
"math_id": 1,
"text": "E_{train}"
},
{
"math_id": 2,
"text": "H(W)"
},
{
"math_id": 3,
"text": "\\beta"
},
{
"math_id": 4,
"text": "W"
},
{
"math_id": 5,
"text": "J(\\theta)"
},
{
"math_id": 6,
"text": "J(\\theta) = \\frac{1}{2n} \\sum_{i=1}^{n}(h_{\\theta}(x^i) - y^i)^2 + \\frac{\\lambda}{2} \\sum_{j=1}^{d} \\theta_j^2 "
},
{
"math_id": 7,
"text": "h_{\\theta}"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": "\\lambda"
},
{
"math_id": 10,
"text": "\\lambda \\to \\infty"
},
{
"math_id": 11,
"text": "\\lambda \\to 0"
}
] | https://en.wikipedia.org/wiki?curid=10704974 |
1070567 | Home advantage | Advantage a team has playing in home venue
In team sports, the term home advantage – also called home ground, home field, home-field advantage, home court, home-court advantage, defender's advantage or home-ice advantage – describes the benefit that the home team is said to gain over the visiting team. This benefit has been attributed to psychological effects supporting fans have on the competitors or referees; to psychological or physiological advantages of playing near home in familiar situations; to the disadvantages away teams suffer from changing time zones or climates, or from the rigors of travel; and in some sports, to specific rules that favor the home team directly or indirectly. In baseball and cricket in particular, the difference may also be the result of the home team having been assembled to take advantage of the idiosyncrasies of the home ballpark/ground, such as the distances to the outfield walls/boundaries; most other sports are played in standardized venues.
The term is also widely used in "best-of" playoff formats (e.g., best-of-seven) as being given to the team that is scheduled to play one more game at home than their opponent if all necessary games are played.
In many sports, such designations may also apply to games played at a neutral site, as the rules of various sports make different provisions for home and visiting teams. In baseball, for instance, the visiting team always bats first in each inning. Therefore, one team must be chosen to be the "visitor" when games are played at neither team's home field. Likewise, there are uncommon instances in which a team playing a game at their home venue is officially the visiting team, and their opponent officially the home team, such as when a game originally scheduled to play at one venue must be postponed and is later resumed at the other team's venue.
Advantages.
In most team sports, the home or hosting team is considered to have a significant advantage over the away or visiting team. Due to this, many important games (such as playoff or elimination matches) in many sports have special rules for determining what match is played where. In association football, matches with two legs, one game played in each team's "home", are common. It is also common to hold important games, such as the Super Bowl, at a neutral site in which the location is determined years in advance. In many team sports in North America (including baseball, basketball, and ice hockey), playoff series are often held with a nearly equal number of games at each team's site. However, as it is usually beneficial to have an odd number of matches in a series (to prevent ties), the final home game is often awarded to the team that had more success over the regular season.
An example is UEFA Champions League, UEFA Europa League and UEFA Europa Conference League home and away legs, with weaker teams often beating the favourites when playing at home. The World Cup victories of Uruguay (1930), Italy (1934), England (1966), Germany (1974), Argentina (1978) and France (1998) are all in part attributed to the fact that the World Cup was held in the winner's country. A 2006 study by "The Times" found that in the English Premiership, a home team can be expected to score 37.29% more goals than the away team, though this changes depending on the quality of the teams involved. Others have suggested that the increase in British medals during the 2012 Olympics may have been impacted by home court advantage. (However, having home court did not help Canada at the 1976 Montreal Olympics, the only Summer Games at which the hosting country failed to win a single gold medal.)
The strength of the home advantage varies for different sports, regions, seasons, and divisions. For all sports, it seems to be strongest in the early period after the creation of a new league. The effect seems to have become somewhat weaker in some sports in recent decades.
Adams & Kupper (1994) described home-field advantage as an expertise deficiency. They demonstrated that, in theory and in practice, home-field advantage decreases as superiority of performance increases. They also showed that home-field advantage is not applicable for no-hit major league baseball games for pitchers who either replicated performance by winning two or more no-hitters or amassed a large number of career wins. Their general finding was that home-field advantage is a metric for the inability to maintain performance independent of environment and that this metric is inversely related to variables of expertise.
In recognition of the difficulty in winning away matches, cup competitions in association football often invoke the away goals rule. Away goals can also sometimes be used to separate teams level on points and goal difference in league competitions.
Causes.
Factors related to the location and the venue.
There are many factors that contribute to home advantage, such as crowd involvement (for instance, Marek and Vávra (2024) conducted a study on ice hockey results during COVID-19, when no spectators were allowed at matches), travel considerations, and environmental factors. The most commonly cited factors of home advantage are usually those which are difficult to measure, and so even their existence is debated. Most of these are psychological in nature: the home teams are familiar with the playing venue; they can lodge in their homes, rather than in a hotel, and so have less far to travel before the game; and they have the psychological support of the home fans.
Other factors, however, are easier to detect and can have noticeable effects on the outcome of the game. In American football, for instance, the crowd often makes as much noise as it can when the visiting team is about to run a play. That can make it very difficult for the visiting team's quarterback to call audible play changes, or for any player to hear the snap count. In contrast, the crowd is often quiet while the home team is on offense, and that enables the quarterback to use the hard count intended to draw the defense offsides as the defense can hear the hard count. In basketball, when a visiting player is making a free throw, home fans behind the backboard typically wave their arms or other objects in an attempt to break the visiting player's focus on making the shot. Environmental factors such as weather and altitude are easy to measure, yet their effects are debatable, as both teams have to play in the same conditions; but the home team may be more acclimated to local conditions with difficult environments, such as extremely warm or cold weather, or high altitude (such as the case of Denver teams, as well as the Mexico national football team, many of whose home matches are played in Mexico City).
The stadium or arena will typically be filled with home supporters, who are sometimes described as being as valuable as an extra player for the home team. The home fans can sometimes create a psychological lift by cheering loudly for their team when good things happen in the game. The home crowd can also intimidate visiting players by booing, whistling, or heckling. Generally the home fans vastly outnumber the visiting team's supporters. While some visiting fans may travel to attend the game, home team fans will generally have better access to tickets and easier transport to the event, thus in most cases they outnumber the visitors' fans (although in local derbies and crosstown rivalries this may not always be the case). In some sports, such as association football, sections of the stadium will be reserved for supporters of one team or the other (to prevent fan violence) but the home team's fans will have the bulk of the seating available to them. In addition, stadium/arena light shows, sound effects, fireworks, cheerleaders, and other means to enliven the crowd will be in support of the home team. Stadium announcers in many sports will emphasize the home team's goals and lineup to excite the crowd.
Ryan Boyko, a research assistant in the Department of Psychology in the Faculty of Arts and Sciences at Harvard University, studied 5,000 English Premier League games from 1992 to 2006, to discern any officiating bias and the influence of home crowds. The data was published in the "Journal of Sports Sciences" and suggested that for every additional 10,000 people attending, home team advantage increased by 0.1 goals. Additionally, his study found that home teams are likely to be awarded more penalty kicks, and this is more likely with inexperienced referees.
Further, home players can be accustomed to peculiar environmental conditions of their home area. The city of Denver, being roughly one mile (1,600 m) above sea level, has thinner air, enough so that it affects the stamina of athletes whose bodies are not used to it. Although baseball is less aerobically demanding than many other sports, high altitude affects that sport's game play in several important ways. Denver's combination of altitude and a semi-arid climate (the city averages only about 16 in/400 mm precipitation annually) allows fly balls to travel about 10% farther than at sea level, and also slightly reduces the ability of a pitcher to throw an effective breaking ball. The low humidity also causes baseballs to dry out, making it harder for pitchers to grip them and further reducing their ability to throw breaking balls. The Colorado Rockies have a very large home advantage. This anomaly has been countered with Colorado's innovative use of humidors to keep the baseballs from drying out. Denver's altitude advantage has also come into play in gridiron football; the second longest field goal in National Football League history took place in Denver, as did the longest recorded punt. The national association football team of Bolivia also enjoys the advantage of playing at high altitude: at home during World Cup qualifiers at the even more extreme 3,600 m (11,800 ft) altitude of La Paz they have even been known to beat Brazil, a team regularly ranked number one in the FIFA World Rankings. More recently, Bolivia beat Argentina, who were ranked sixth in the world, 6–1 on April 1, 2009, Argentina's heaviest defeat since 1958. In cricket, the condition of the pitch and the behaviour of the ball when it bounces off the pitch varies significantly in different parts of the world, and consequently the players on the visiting team must adjust to the ball behaving in an unfamiliar way to be successful on foreign surfaces; additionally, the home team has the right to adjust the preparation of its pitches in a manner which specifically enhances its own strengths or exacerbates its opponent's weaknesses.
The weather can also play a major factor. For example, the February average temperature minimum in Tel Aviv, Israel is , while the average at the same time in Kazan, Russia is , with snow being common. This means that when Rubin Kazan played at home to Hapoel Tel Aviv in the 2009-10 UEFA Europa League, Hapoel needed to acclimatize and were therefore at a disadvantage. Hapoel duly lost the match 3–0. This advantage, however, can also be a disadvantage to the home team, as weather conditions can impede the home team as much as the visitors: the Buffalo Bills, whose home stadium (Highmark Stadium) is subject to high and unpredictable winds and lake-effect snow in the late fall and early winter, regularly suffer large numbers of injuries late in the season.
Sometimes the unique attributes of a stadium create a home-field advantage. The unique off-white Teflon-coated roof of the Hubert H. Humphrey Metrodome trapped and reflected noise to such an extent that it was distracting or even harmful. This, combined with the color of the roof, caused opposing baseball players to commit more errors in the Dome than in other ballparks. While this is no longer an issue for opponents of the Minnesota Twins with that team's 2010 move to the open-air Target Field, it remained important to the many college baseball teams that played games in the Dome until its late 2013 closing. Hard Rock Stadium, the home stadium of the NFL's Miami Dolphins, is designed in such a way that when the sun is overhead, the home sideline is in the shade while the visitors' sideline is directly in the sun's path. This, combined with the hot tropical climate South Florida receives, can lead to differences of up to 30 degrees Fahrenheit or more between the two sidelines, with the visiting sideline temperature getting as high as 120 °F (49 °C). The parquet floor at the Boston Celtics' former home of Boston Garden contained many defects, which were said to give the Celtics, who were more likely to be familiar with the playing surface, an advantage. During the 1985–1986 season, the Larry Bird-led Celtics posted a home court record of 40–1; this record still stands in the NBA. Memorial Gymnasium, the venue for men's and women's basketball at Vanderbilt University, was built in 1952 with the team benches at the ends of the court instead of along one of the sidelines, a setup that was not unusual at the time. However, the configuration is now unique in U.S. major-college sports, and has been said to give the Commodores an edge because opposing coaches are not used to directing their teams from the baseline. Cherry Hill Arena, a New Jersey-based arena in the southern suburbs of Philadelphia, had a number of idiosyncrasies that its home teams used to their advantage but earned the arena an extremely poor reputation, including a slanted ice surface that forced opponents to skate the majority of the game uphill and a lack of showers for the visiting team.
"Sports Illustrated", in a 17 January 2011 article, reported that home crowds, rigor of travel for visiting teams, scheduling, and unique home field characteristics, "were not" factors in giving home teams an advantage. The journal concluded that it was favorable treatment by game officials and referees that conferred advantages on home teams. They stated that sports officials are unwittingly and psychologically influenced by home crowds and the influence is significant enough to affect the outcomes of sporting events in favor of the home team.
Other research has found that crowd support, travel fatigue, geographical distance, pitch familiarity, and referee bias do not have a strong effect when each factor is considered alone suggesting that it is the combination of several different factors that creates the overall home advantage effect. An evolutionary psychology explanation for the home advantage effect refers to observed behavioral and physiological responses in animals when they are defending their home territory against intruders. This causes a rise in aggression and testosterone levels in the defenders. A similar effect has been observed in football with testosterone levels being significantly higher in home games than in away games. Goalkeepers, the last line of defense, have particularly strong testosterone changes when playing against a bitter rival as compared to a training season. How testosterone may influence results is unclear but may include cognitive effects such as motivation and physiological effects such as reaction time.
An extreme example of home advantage was the 2013 Nigeria Premier League; each of the 20 teams lost at most 3 of 19 home matches and won at most 3 of 19 away matches. Paul Doyle ascribed this to visiting teams' facing "violent crowds, questionable refereeing and [...] [a]rriving just before kick-off after long road trips, often on hazardous surfaces".
The 2020–21 NHL season saw major disruption due to COVID-19-restricted conditions that resulted in bubble playoffs and ghost games, as fans were unable to attend in person. New research has shown that this led to a significant drop in home advantage compared with the previous six seasons. In 592 games played under the restricted conditions through March, home teams' win rates declined by 10% while road teams' win rates increased by 7%.
Factors related to the game rules.
In a number of sports, the hosting team has the advantage of playing with their first choice uniforms, while the visiting team wears their alternative away/road colors. Some sports leagues simply state that the team wears its away uniforms only when its primary jerseys would clash with the colors of the home team, while other leagues mandate that visiting teams must always wear their away colors regardless. However, sometimes teams wear their alternative uniforms by choice. This is especially true in North American sports where generally one uniform is white or grey, and "Color vs. color" games (e.g., blue vs. red uniforms) are a rarity, having been discouraged in the era of black-and-white television. Many teams from warm-weather cities may wear their white uniforms at home, forcing their opponent to wear dark uniforms in the hot weather. An exception is a rule in high school American football (except Texas, which plays by NCAA football rules), requiring the home team to wear dark jerseys and the visiting team to wear white jerseys, which may work as a disadvantage to the home team in hot-weather games.
In ice hockey, there are at least three distinct rule-related advantages for the home team. The first is referred to as "last change", where during stoppages of play, the home team is allowed to make player substitutions after the visiting team does (unless the home team ices the puck, in which case no substitutions are allowed). This allows the home team to obtain favorable player matchups. This rule makes the home team designation important even in games played on neutral ice. Traditionally, the second advantage was that when lining up for the face-off, the away team's centre always had to place his stick on the ice before the centre of the home team. However, in both the NHL and international rule sets, this now applies only for face-offs at the centre-ice spot; when a face-off takes place anywhere else on the ice, the defending centre has to place his stick first. The centre who is allowed to place his stick last gains the ability to time the face-off better and gives him greater odds of winning it. The third advantage is that the home team has the benefit of choosing whether to take the first or second attempt in a shootout.
In baseball, the home team – which bats in the bottom half of each inning – enjoys the advantage of being able to end the game immediately if it has the lead in the ninth inning (or other scheduled final inning) or in extra innings. If the home team is leading at the end of the top half of the ninth inning, the game ends without the bottom half being played. If the home team is trailing or the score is tied in the bottom half of the ninth inning or any extra inning, the game ends immediately if the home team takes the lead; the visiting team does not get another opportunity to score and the home team does not have to protect their lead. On the other hand, if the visiting team has the lead when the top half of the ninth inning or extra inning ends, the home team still gets an opportunity to score and so the visiting team must protect their lead. In addition, in the late innings, the home team knows how many runs they need to win or tie, and can therefore strategize accordingly. For example, in a tie game in the ninth inning or extra innings, the home team may employ strategies such as bunting or stealing bases, which oftentimes increase the chance of scoring one run, but decrease the chance of scoring multiple runs.
Additionally in baseball, the host team is familiar with the unique dimensions of their home field yielding them advantages (pitching, hitting, fielding) over visiting teams. Before the Major League Baseball labor agreement that ended the 2021–22 lockout saw the designated hitter added permanently in the National League, the home league's rules concerning the designated hitter (DH) were followed during interleague games, including the World Series. This put AL teams at a disadvantage when they played in NL parks, as AL pitchers were typically not used to batting nor baserunning. NL teams at AL parks were at a disadvantage because a player who did not play often had to bat an entire game, usually on consecutive nights. The NL team's DH was a pinch-hitter who batted perhaps once every two or three games during the season, or alternated in a platoon system with other players (such as a catcher who does not start because the starting pitcher uses the other catcher), while the AL team's DH batted three or more times a game throughout the season.
Measuring and comparing of home-field advantage.
Measuring the home-field advantage of a team (in a league with a balanced schedule) requires a determination of the number of opponents for which the result at the home field was better (formula_0), the same (formula_1), and worse (formula_2). Goals scored and conceded – in the so-called "combined measure of home team advantage" – are used to determine which results are better, the same, and which are worse. Given two results between teams formula_3 and formula_4, formula_5 played at formula_3's field and formula_6 played at formula_4's field, we can compute differences in scores (e.g. from formula_3's point of view): formula_7 and formula_8. Team formula_3 played better at its home field if formula_9, and formula_3 played better at its away field if formula_10 (for example, if Arsenal won 3–1 at home against Chelsea, i.e. formula_11, and Arsenal won 3–0 at Chelsea, i.e. formula_12, then the result for Arsenal at home was worse). The same approach has to be used for all opponents in one season to obtain formula_0, formula_1, and formula_2.
Values of formula_0, formula_1, and formula_2 are used to estimate probabilities as formula_13, where formula_14 is the total number of opponents in the league (this is a Bayesian estimator). To test the hypothesis that home-field advantage is statistically significant we can compute formula_15, where formula_16 is the regularized incomplete beta function. For example, Newcastle in the 2015/2016 English Premier League season recorded a better result at home for 13 opponents, the same result for 4 opponents, and a worse result for 2 opponents; therefore formula_17 and the hypothesis of home-field advantage can be accepted. This procedure was introduced and applied by Marek and Vávra (2017) to English Premier League seasons 1992/1993 – 2015/2016. Later, the procedure was finalised in Marek and Vávra (2020).
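A minimal sketch of this test in Python, using SciPy's regularized incomplete beta function, follows; the helper function is ours and the worked example is the Newcastle case above:

```python
from scipy.special import betainc  # regularized incomplete beta function I_x(a, b)

def home_advantage_test(k_better, k_same, k_worse):
    """Bayesian estimates of (p_1, p_0, p_-1) and the probability P(p_1 > p_-1)."""
    K = k_better + k_same + k_worse                          # number of opponents in the league
    p_hat = [(k + 1) / (K + 3) for k in (k_better, k_same, k_worse)]
    prob = 1.0 - betainc(k_better + 1, k_worse + 1, 0.5)     # P(p_1 > p_-1)
    return p_hat, prob

# Newcastle, 2015/2016 Premier League: better at home against 13 opponents, same against 4, worse against 2
_, prob = home_advantage_test(13, 4, 2)
print(round(prob, 3))   # ≈ 0.998, matching the value quoted above
```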
Marek and Vávra (2018) described a procedure which allows the observed counts of the combined measure of home team advantage (formula_0, formula_1, and formula_2) in two leagues to be compared by the "test for homogeneity of parallel samples" (for the test see Rao (2002)). The second proposed approach is based on the distance between the estimated probability descriptions of home team advantage in the two leagues (formula_18), which can be measured by the Jeffrey divergence (a symmetric version of the Kullback–Leibler divergence). They tested the top five English football leagues and the top two Spanish leagues between the 2007/2008 and 2016/2017 seasons. The main result is that home team advantage in Spain is stronger. The Spanish La Liga has the strongest home team advantage, and English Football League Two has the lowest home team advantage, among the analysed leagues.
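A minimal sketch of the Jeffrey divergence used in that comparison follows; the two probability vectors are hypothetical, not the published league estimates:

```python
import numpy as np

def jeffrey_divergence(p, q):
    """Symmetrised Kullback–Leibler divergence: sum_i (p_i - q_i) * ln(p_i / q_i)."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return float(np.sum((p - q) * np.log(p / q)))

# Hypothetical estimates (p_-1, p_0, p_1) of worse/same/better home results for two leagues
league_a = [0.15, 0.20, 0.65]
league_b = [0.25, 0.25, 0.50]
print(jeffrey_divergence(league_a, league_b))   # larger values mean more dissimilar home advantage
```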
Comparison of home advantage in 19 European football leagues between the 2007/2008 and 2016/2017 seasons was made in Marek and Vávra (2024). They found that, among the analysed leagues, the Super League Greece had the strongest home advantage and English Football League Two had the lowest home advantage.
Gaining or losing home-field advantage.
During the regular season for a sport, in the interest of fairness, schedulers try to ensure that each team plays an equal number of home and away games. Thus, having home-field advantage for any particular regular-season game is largely due to random chance. (This is only true for fully organized leagues with structured schedules; for a counterexample, college football schedules often have an imbalance in which the most successful and largest teams can negotiate more home appearances than mid-majors, a situation that was also prevalent in the early, disorganized years of the National Football League.)
However, in the playoffs, home advantage is usually given to the team with the higher seed (which may or may not have the better record), as is the case in the NFL, MLB, and Stanley Cup playoffs. One exception to this was MLB's World Series, which from 2003 to 2016 awarded home-field advantage to the team representing the league that won the All-Star Game that year, to help raise interest in the All-Star Game after a tie in 2002; before 2003, home-field advantage alternated each year between the National League and the American League. Since 2017, home-field advantage in the World Series has been given to the team with the best regular season record. Home-ice advantage in the Stanley Cup Finals is given to the team with the best season record. The NBA is the only league that has home-court advantage based solely on which team has the best record (using various tiebreakers to settle the question should the teams finish with identical records).
For most championship series, such as the NBA Finals, the team with the better regular season record, regardless of seed, has home-court advantage.
Rugby union's European Rugby Champions Cup also uses a seeding system to determine home advantage in the quarterfinals (though not in the semifinals, where the nominal "home" teams are determined by a blind draw).
In many sports, playoffs consist of a 'series' of games played between two teams. These series are usually a best-of-5 or best-of-7 format, where the first team to win 3 or 4 games, respectively, wins the playoff. Since these best-of series always involve an odd number of games, it is impossible to guarantee that an equal number of games will be played at each team's home venue. As a result, the team with the better regular season record must be scheduled to have one more home game than the other. This team is said to have home-field advantage for that playoff series.
During the course of these playoff series, however, sports announcers or columnists will sometimes mention a team "gaining" or "losing" home-field advantage. This can happen after a visiting team has just won a game in the series. In playoff series format, the home-field advantage is said to exist for whichever team would win the series "if" all remaining games in the series are won by the home team "for that game". Therefore, it is possible for a visiting team to win a game and, hence, gain home-field advantage. This is somewhat similar to the concept of losing serve in tennis.
As an example, in the 1982 NBA Finals, the Los Angeles Lakers played the Philadelphia 76ers, with the 76ers having earned home court advantage because of a better regular season record. Four games were scheduled to be played in Philadelphia, while three were scheduled in Los Angeles. If the home team were to win each game, then the 76ers would have won four games, the Lakers would have won three games, and the 76ers would have won the series, so we say that Philadelphia had the home-court advantage. However, the visiting Lakers won Game 1. Los Angeles now had one win, and there were three games remaining at each arena. The home team went on to win all of the remaining games in that series, so Los Angeles won four games, while Philadelphia won two (Game 7, which would have been played in Philadelphia, was omitted, as even if the 76ers won, they'd still lose the series 4–3). Since the Lakers won the series in this scenario, it is said that they have taken home-court advantage away from the 76ers.
In some cup competitions, (for example the FA Cup in all rounds prior to the semi-final), home advantage is determined by a random drawing. However, if the initial match is drawn (tied), home advantage for the replay is given to the other team.
Neutral venues.
For certain sporting events, home advantage may be removed by use of a neutral venue. This may be a national stadium that is not a home stadium to any club (for example Wembley Stadium hosts the FA Cup Final and semi-finals). Alternatively the neutral venue may be the home stadium of another club, such as was used historically to stage FA Cup semi-finals.
If the venue is chosen before the start of the competition however, it is still possible for one team to gain home field advantage. For example, in the European Cup/UEFA Champions League, there have been four instances where a club has managed to reach the final hosted in its own stadium (1957, 1965, 1984, and 2012). Most recently Bayern Munich played (and lost) the 2012 final at their home stadium of Allianz Arena, as it was chosen as the venue in January 2010. In the Champions League Final, however, if the "home" shirt colors of both teams conflict (e.g. both are red) then there is a draw which assigns one of the teams their "away" shirt. The NFL's Super Bowl is also played in a venue chosen years in advance of the game. Super Bowl LV in 2021 was the first Super Bowl in which one of the participating teams was playing in its home stadium, as the Tampa Bay Buccaneers played at Raymond James Stadium in Tampa, Florida. The next year saw the Los Angeles Rams play Super Bowl LVI at their home of SoFi Stadium in Inglewood, California. Both games were awarded to their respective locations in 2017. Two other Super Bowls (XIV in 1980 and XIX in 1985) were played in neutral stadiums in the market area of one of the participating teams. Regardless, tickets are allocated equally between both competing teams, even if one happens to be playing in its own stadium.
Neutral-venue matches may arise out of necessity. For example, on December 12, 2010, the roof of the Minnesota Vikings' stadium, the Hubert H. Humphrey Metrodome collapsed due to a snowstorm. The Vikings were supposed to play against the New York Giants at the stadium the next day. The game was moved to the Detroit Lions' stadium, Ford Field. The following week, the Vikings' "Monday Night Football" game against the Chicago Bears was moved to the University of Minnesota's TCF Bank Stadium.
As part of a settlement for a 1992 strike by the NHL Players Association, the National Hockey League scheduled two neutral-site games for each team in a non-NHL market, with one as designated home team and one as the designated away team. The neutral site games ended after the 1993–94 NHL season, as the following season was lockout-shortened, and the 1995–96 NHL season reduced the regular season from 84 to 82 games per team. The NHL has held neutral-site, season-opening games in Europe (sometimes also including preseason exhibitions against European clubs), first from 2007 to 2011 as the "NHL Premiere", and from 2017 as the "NHL Global Series". The 2019 NHL Heritage Classic was also a neutral site game, played in the non-NHL market of Regina, Saskatchewan—falling roughly halfway between the markets of the participating teams, the Calgary Flames and Winnipeg Jets.
A requirement to play home matches at a neutral venue has been used as a punishment by UEFA for teams whose fans caused disturbances at a previous match. For example, after the violent clashes following the Turkey–Switzerland game in 2005, UEFA punished the Turkish team by requiring it to play its next six regular international home matches abroad. It is also required where one team's home location is in a war zone or at high risk of terrorism, or if the visiting team is prevented from travelling to (one of) their opponent's regular stadium(s) for political reasons. The latter consideration is uncommon because governing bodies typically implement measures to prevent national teams from jurisdictions with the most serious political disputes (for example, Spain and Gibraltar or Serbia and Kosovo) from being drawn in the same group, but it does happen; for example, the Ukraine–Kosovo qualifier for the 2018 FIFA World Cup was held in Krakow, Poland, because Ukraine does not recognize Kosovo and does not admit Kosovar nationals to its territory. In all such cases, the match is still treated as a "home" match for such purposes as implementing the away goals rule.
In North America, Major League Soccer formerly hosted its MLS Cup final at a neutral site. Since 2012, the game has been held at the venue of the participating team that had the better regular season record.
By competition.
Baseball.
In the 2018 Major League Baseball regular season, the home team won 1,277 games (52.6%), and the away team won 1,149 games (47.4%). These totals do not include the six games played at neutral sites (though all of the neutral-site games had a designated "home" team).
Basketball.
In the 2018–19 NBA regular season, the home team won 729 games (59%), and the away team won 501 games (41%).
Cricket.
Across the 2008, 2010, and 2011 seasons of the Indian Premier League, the home team won 95 games (54.3%), and the away team won 80 games (45.7%).
American football.
At least some degree of home field advantage has been recorded in almost every National Football League season; each year, designated home teams have won more games than they have lost. The 2020 NFL season, played in empty and near-empty stadiums, was the first in which no significant advantage was recorded: home teams that year finished 127–128–1.
Hockey.
In the 2019 Stanley Cup playoffs, for the first time in NHL history all division winners (who had home-ice advantage) were eliminated in the first round as all the wild-cards advanced to the second round. The Columbus Blue Jackets won a playoff series for the first time, defeating the first-place Lightning in four games, and marking the first time in Stanley Cup playoff history that the Presidents' Trophy winners were swept in the opening round, and the first time since 2012 that the Presidents' Trophy winners were defeated in the opening round. They were soon followed by the Calgary Flames, who with their five-game loss to the Colorado Avalanche, ensured that for the first time in NHL history, neither of the conference number one seeds advanced to the second round. After that, the two remaining division winners, the Nashville Predators and Washington Capitals, were each eliminated in an overtime game, the Predators in six by the Dallas Stars, and the Capitals in seven by the Carolina Hurricanes.
Association football.
In the 2018–19 Premier League, the home team won 181 matches (47%), the away team won 128 matches (34%), and teams drew in 71 matches (19%); however, this is considered an aberration, as home advantage has statistically been steadily declining for over a century.
References.
Notes
<templatestyles src="Reflist/styles.css" />
Sources
Further reading | [
{
"math_id": 0,
"text": "k_1"
},
{
"math_id": 1,
"text": "k_0"
},
{
"math_id": 2,
"text": "k_{-1}"
},
{
"math_id": 3,
"text": "T_1"
},
{
"math_id": 4,
"text": "T_2"
},
{
"math_id": 5,
"text": "h_{T_1}:a_{T_2}"
},
{
"math_id": 6,
"text": "h_{T_2}:a_{T_1}"
},
{
"math_id": 7,
"text": "d_{h,T_1}=h_{T_1}-a_{T_2}"
},
{
"math_id": 8,
"text": "d_{a,T_1}=a_{T_1}-h_{T_2}"
},
{
"math_id": 9,
"text": "d_{h,T_1}>d_{a,T_1}"
},
{
"math_id": 10,
"text": "d_{h,T_1}<d_{a,T_1}"
},
{
"math_id": 11,
"text": "d_{h,Arsenal}=2"
},
{
"math_id": 12,
"text": "d_{a,Arsenal}=3"
},
{
"math_id": 13,
"text": "\\hat{p}_r=\\frac{k_r+1}{K+3}, r=-1,0,1"
},
{
"math_id": 14,
"text": "K"
},
{
"math_id": 15,
"text": "P(p_1>p_{-1})=1-I_{1/2}(k_1+1,k_{-1}+1)"
},
{
"math_id": 16,
"text": "I_{1/2}()"
},
{
"math_id": 17,
"text": "P(p_1>p_{-1})=1-I_{1/2}(14,3)=0.998"
},
{
"math_id": 18,
"text": "\\hat{p}_r=\\frac{k_r}{K}, r=-1,0,1"
}
] | https://en.wikipedia.org/wiki?curid=1070567 |
10708102 | Bundle metric | In differential geometry, the notion of a metric tensor can be extended to an arbitrary vector bundle, and to some principal fiber bundles. This metric is often called a bundle metric, or fibre metric.
Definition.
If "M" is a topological manifold and π : "E" → "M" a vector bundle on "M", then a metric on "E" is a bundle map "k" : "E" ×"M" "E" → "M" × R from the fiber product of "E" with itself to the trivial bundle with fiber R such that the restriction of "k" to each fibre over "M" is a nondegenerate bilinear map of vector spaces. Roughly speaking, "k" gives a kind of dot product (not necessarily symmetric or positive definite) on the vector space above each point of "M", and these products vary smoothly over "M".
Properties.
Every vector bundle with paracompact base space can be equipped with a bundle metric. For a vector bundle of rank "n", this follows from the bundle charts formula_0: the bundle metric can be taken as the pullback of the inner product of a metric on formula_1; for example, the orthonormal charts of Euclidean space. The structure group of such a metric is the orthogonal group "O"("n").
Example: Riemann metric.
If "M" is a Riemannian manifold, and "E" is its tangent bundle T"M", then the Riemannian metric gives a bundle metric, and vice versa.
Example: on vertical bundles.
If the bundle π:"P" → "M" is a principal fiber bundle with group "G", and "G" is a compact Lie group, then there exists an Ad("G")-invariant inner product "k" on the fibers, taken from the inner product on the corresponding compact Lie algebra. More precisely, there is a metric tensor "k" defined on the vertical bundle E = V"P" such that "k" is invariant under left-multiplication:
formula_2
for vertical vectors "X", "Y" and "L""g" is left-multiplication by "g" along the fiber, and "L""g*" is the pushforward. That is, "E" is the vector bundle that consists of the vertical subspace of the tangent of the principal bundle.
More generally, whenever one has a compact group with Haar measure μ, and an arbitrary inner product "h(X,Y)" defined at the tangent space of some point in "G", one can define an invariant metric simply by averaging over the entire group, i.e. by defining
formula_3
as the average.
The above notion can be extended to the associated bundle formula_4 where "V" is a vector space transforming covariantly under some representation of "G".
In relation to Kaluza–Klein theory.
If the base space "M" is also a metric space, with metric "g", and the principal bundle is endowed with a connection form ω, then π*g+kω is a metric defined on the entire tangent bundle "E" = T"P".
More precisely, one writes π*g("X","Y") = "g"(π*"X", π*"Y") where π* is the pushforward of the projection π, and "g" is the metric tensor on the base space "M". The expression "kω" should be understood as ("kω")("X","Y") = "k"("ω"("X"),"ω"("Y")), with "k" the metric tensor on each fiber. Here, "X" and "Y" are elements of the tangent space T"P".
Observe that the lift π*g vanishes on the vertical subspace T"V" (since π* vanishes on vertical vectors), while kω vanishes on the horizontal subspace T"H" (since the horizontal subspace is defined as that part of the tangent space T"P" on which the connection ω vanishes). Since the total tangent space of the bundle is a direct sum of the vertical and horizontal subspaces (that is, T"P" = T"V" ⊕ T"H"), this metric is well-defined on the entire bundle.
This bundle metric underpins the generalized form of Kaluza–Klein theory due to several interesting properties that it possesses. The scalar curvature derived from this metric is constant on each fiber, this follows from the Ad("G") invariance of the fiber metric "k". The scalar curvature on the bundle can be decomposed into three distinct pieces:
"R""E" = "R"M("g") + "L"("g", ω) + "R""G"("k")
where "R""E" is the scalar curvature on the bundle as a whole (obtained from the metric π*g+kω above), and "R"M("g") is the scalar curvature on the base manifold "M" (the Lagrangian density of the Einstein–Hilbert action), and "L"("g", ω) is the Lagrangian density for the Yang–Mills action, and "R""G"("k") is the scalar curvature on each fibre (obtained from the fiber metric "k", and constant, due to the Ad("G")-invariance of the metric "k"). The arguments denote that "R"M("g") only depends on the metric "g" on the base manifold, but not ω or "k", and likewise, that "R""G"("k") only depends on "k", and not on "g" or ω, and so-on.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi:\\pi^{-1}(U)\\to U\\times\\mathbb{R}^n"
},
{
"math_id": 1,
"text": "\\mathbb{R}^n"
},
{
"math_id": 2,
"text": "k(L_{g*}X, L_{g*}Y)=k(X,Y)"
},
{
"math_id": 3,
"text": "k(X,Y)=\\int_G h(L_{g*} X, L_{g*} Y) d\\mu_g"
},
{
"math_id": 4,
"text": "P\\times_G V"
}
] | https://en.wikipedia.org/wiki?curid=10708102 |
10708199 | Therapeutic inertia | Therapeutic inertia (also known as clinical inertia) is a measurement of the resistance to therapeutic treatment for an existing medical condition. It is commonly measured as a percentage of the number of encounters in which a patient with a condition received new or increased therapeutic treatment out of the total number of visits to a health care professional by the patient. A high percentage indicates that the health care provider is slow to treat a medical condition. A low percentage indicates that a provider is extremely quick in prescribing new treatment at the onset of any medical condition.
Calculation.
There are two common methods used in calculating therapeutic inertia. For the following examples, consider that a patient has five visits with a health provider. In four of those visits, a condition is not controlled (such as high blood pressure or high cholesterol). In two of those visits, the provider made a change to the patient's treatment for the condition.
In Dr. Okonofua's original paper, this patient's therapeutic inertia is calculated as formula_0 where "h" is the number of visits with an uncontrolled condition, "c" is the number of visits in which a change was made, and "v" is the total number of visits. Therefore, the patient's therapeutic inertia is formula_1.
An alternative, which avoids consideration of visits where the condition was already controlled and the provider should not be expected to make a treatment change, is formula_2. Using the above example, there are 2 changes and 4 visits with an uncontrolled condition. The therapeutic inertia is formula_3.
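A minimal sketch of both measures in Python follows; the function names are ours, and the example is the five-visit case described above:

```python
def inertia_okonofua(uncontrolled_visits, changes, total_visits):
    """Okonofua's original measure: h/v - c/v."""
    return uncontrolled_visits / total_visits - changes / total_visits

def inertia_alternative(uncontrolled_visits, changes):
    """Alternative measure: 1 - c/h (ignores visits where the condition was already controlled)."""
    return 1 - changes / uncontrolled_visits

# Worked example: 5 visits, 4 with an uncontrolled condition, 2 with a treatment change
print(inertia_okonofua(4, 2, 5))   # 0.4 -> 40%
print(inertia_alternative(4, 2))   # 0.5 -> 50%
```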
Reception.
Therapeutic inertia was devised as a metric for measuring treatment of hypertension. It has now become a standard metric for analysing treatment of many common comorbidities such as diabetes and hyperlipidemia. Both feedback reporting processes and intervention studies aimed at reducing therapeutic inertia have been shown to increase control of hypertension, diabetes, and hyperlipidemia.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{h}{v} - \\frac{c}{v}"
},
{
"math_id": 1,
"text": "\\frac{4}{5} - \\frac{2}{5} = 0.4 = 40\\%"
},
{
"math_id": 2,
"text": "1 - \\frac{c}{h}"
},
{
"math_id": 3,
"text": "1 - \\frac{2}{4} = 0.5 = 50\\%"
}
] | https://en.wikipedia.org/wiki?curid=10708199 |
10708511 | Social cost of carbon | Monetary damage caused by a tonne of greenhouse gas from North America
The social cost of carbon (SCC) is the marginal cost of the impacts caused by emitting one extra tonne of carbon dioxide at any point in time. The purpose of putting a price on a tonne of emitted CO2 is to aid policymakers or other legislators in evaluating whether a policy designed to curb climate change is justified. The social cost of carbon is a calculation focused on taking corrective measures for climate change, which can be deemed a form of market failure. The only governments which use the SCC are in North America. The Intergovernmental Panel on Climate Change suggested that a carbon price of $100 per tonne of CO2 could reduce global GHG emissions by at least half the 2019 level by 2030.
Because of politics, the SCC is different from a carbon price. According to economic theory, a carbon price should be set equal to the SCC. In reality, carbon taxes and carbon emission trading only cover a limited number of countries and sectors, and prices are vastly below the optimal SCC. In 2024, estimates of the social cost of carbon ranged up to over $1000/tCO2, while carbon prices only ranged up to about $160/tCO2. From a technological cost perspective, the 2018 IPCC report suggested that limiting global warming below 1.5 °C requires technology costs of around $135 to $5500/tCO2 in 2030 and $245 to $13000/tCO2 in 2050. This is more than three times higher than for a 2 °C limit.
A 2024 study estimated the social cost of carbon (SCC) to be over $1000 per tonne of CO2—more than five times the United States Environmental Protection Agency recommended value of around $190 per tonne, which is in turn much more than the US government value of $51.
Calculations.
Calculating the SCC requires estimating the impacts of climate change. This includes impacts on human health and the environment, as measured by the amount of damage done and the cost to remedy it. Valuations can be difficult because the impacts on biomes do not have a market price. In economics, comparing impacts over time involves a discount rate or time preference. This rate determines the weight placed on impacts occurring at different times.
Best estimates of the SCC come from integrated assessment models (IAM) which predict the effects of climate change under various scenarios and allow for calculation of monetised damages. One of the most widely used IAMs is the Dynamic Integrated model of Climate and the Economy (DICE).
The DICE model, developed by William Nordhaus, makes provisions for the calculation of a social cost of carbon. The DICE model defines the SCC to be "equal to the economic impact of a unit of emissions in terms of t-period consumption as a numéraire".
Other popular IAMs used to calculate the social cost of carbon include the Policy Analysis for Greenhouse Effect Model (PAGE) and the Climate Framework for Uncertainty, Negotiation, and Distribution (FUND).
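Conceptually, IAM-based SCC estimates compare a model run with one extra tonne of CO2 emitted against a baseline run and discount the resulting difference in damages. The following is a minimal toy sketch of that bookkeeping, with purely hypothetical numbers and none of the climate or economic dynamics of DICE, PAGE or FUND:

```python
def scc_from_damage_paths(damages_with_pulse, damages_baseline, discount_rate):
    """Discounted sum of the extra damages caused by a one-tonne emissions pulse in year 0."""
    return sum((d1 - d0) / (1 + discount_rate) ** t
               for t, (d1, d0) in enumerate(zip(damages_with_pulse, damages_baseline)))

# Hypothetical yearly damage paths (in dollars) over a four-year horizon
baseline   = [0.0, 10.0, 20.0, 30.0]
with_pulse = [0.0, 10.4, 20.9, 31.5]
print(scc_from_damage_paths(with_pulse, baseline, discount_rate=0.03))   # ≈ 2.61
```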
In the United States, the Trump Administration was criticised for calculating the SCC with existing IAMs that lacked appropriate treatment of interactions between regions. For instance, catastrophes caused by climate change in one region may have a domino effect on the economy of neighboring regions or trading partners.
The wide range of estimates is explained mostly by underlying uncertainties in the science of climate change including the climate sensitivity, which is a measure of the amount of global warming expected for a doubling in the atmospheric concentration of CO2, different choices of discount rate, treatment of equity, and how potential catastrophic impacts are estimated.
The Interagency Working Group in the United States usually uses four values when calculating the cost. Three of the values come from applying discount rates of 2.5%, 3%, and 5% in the integrated assessment models. The SCC must also reflect the different probabilities of outcomes, depending on whether the mitigation undertaken betters or worsens the environment, because there can be lower-probability but higher-impact outcomes from climate change. This is where the fourth value comes into play: it uses the 3% discount rate but is set at the 95th percentile of the frequency distribution of estimates.
In "The U.S. Government's Social Cost of Carbon Estimates after Their First Two Years: Pathways for Improvement", Kopp and Mignone suggest that these calculation rates do not reflect the multiple ways that humans can respond to climate change. They propose an alternative approach that should be considered by calculating through a cost-benefit optimization analysis based on if the public "panics" about climate change and implement mitigation policies accordingly.
Discount rate.
It has been popular to compare rates of saving over time involving a discount rate or time preference. These rates determine the weight placed on impacts occurring at different times, applying a theoretical model of inter-generational welfare developed by Ramsey.
What discount rate to use is "consequential and contentious" because it defines the relative value of present costs and future damages, an inherently ethical and political judgment. A 2015 survey of 200 general economists found that most preferred a rate between 1% and 3%. Some, like Nordhaus, advocate for a time discount rate that is pegged to the current average rate of time discount as estimated from market interest rates; this reasoning has been criticised as spurious because intragenerational interest rates need not reflect the intergenerational trade-offs in question. Others, like Stern, propose a much smaller discount rate because "normal" discount rates are skewed when applied over the time scales over which climate change acts. A 2015 survey of 1,100 economists who had published on climate change found that those who estimated discount rates preferred that they decline over time and that explicit ethical considerations be factored in.
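To illustrate how consequential the choice is, the following minimal sketch discounts a constant stream of future marginal damages at the rates most often discussed; the damage stream and horizon are purely illustrative and are not taken from any of the studies cited:

```python
def present_value_of_damages(annual_damage, years, discount_rate):
    """Discounted sum of a constant stream of future damages from one extra tonne of CO2."""
    return sum(annual_damage / (1 + discount_rate) ** t for t in range(1, years + 1))

# Illustrative only: $1 of extra damage per year for 300 years
for r in (0.01, 0.03, 0.05):
    pv = present_value_of_damages(1.0, 300, r)
    print(f"discount rate {r:.0%}: present value ≈ ${pv:,.0f}")
# 1% -> about $95, 3% -> about $33, 5% -> about $20: the lower the rate, the larger the implied SCC
```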
Carbon pricing recommendations.
According to economic theory, a carbon price should be set equal to the SCC. In reality, carbon taxes and carbon emission trading only cover a limited number of countries and sectors, and the prices charged are vastly below the optimal SCC. Estimates of the social cost of carbon range from −$13 to $2387 per tonne of CO2, while carbon prices in 2022 only ranged from $0.50 to $137 per tonne of CO2. From a technological cost perspective, the 2018 IPCC report suggested that limiting global warming below 1.5 °C requires technology costs of around $135 to $5500 per tonne of CO2 in 2030 and $245 to $13000 per tonne of CO2 in 2050. This is more than three times higher than for a 2 °C limit.
In 2021, the study "The social cost of carbon dioxide under climate-economy feedbacks and temperature variability" estimated costs of even more than $300 per tonne of CO2. A study published in September 2022 in "Nature" estimated the social cost of carbon (SCC) to be $185 per tonne of CO2—3.6 times higher than the U.S. government's then-current value of $51 per tonne.
Large studies in the late 2010s estimated the social cost of carbon as high as $417/tCO2 or as low as $54/tCO2. Both of those studies subsume wide ranges; the latter is a meta-study whose source estimates range from −$13.36/tCO2 to $2,386.91/tCO2. Note that the costs derive not from the element carbon, but from the molecule carbon dioxide. Each tonne of carbon dioxide consists of about 0.27 tonnes of carbon and 0.73 tonnes of oxygen.
According to David Anthoff and Johannes Emmerling, the social cost of carbon can be expressed by the following equation: formula_0.
This equation represents how one additional tonne of carbon dioxide impacts the environment and incorporates equity and social impact. Chen, Van der Beek, and Cloud examine the benefits of incorporating a second measure of the externalities of carbon by accounting for both the social cost of carbon and the risk cost of carbon, a technique that accounts for the cost of risk to climate change goals. Matsuo and Schmidt suggest that carbon policies revolve around two renewable energy targets: bringing down the cost of renewable energy and growing the industry. The problem with these objectives in policy is that their prioritization can affect how the policy plays out, which can negatively affect the social cost of carbon by shaping how renewable energy is incorporated into society. Newbery, Reiner, and Ritz discuss a carbon price floor (CPF) as a means of reflecting the social cost of carbon in prices; they argue that incorporating a CPF can, in the long term, lead to less coal usage, higher electricity prices, and more innovation and investment in low-carbon alternatives. Yang et al. estimated the social cost of carbon under alternative socioeconomic pathways; according to their results, regional rivalries with increased trade friction can increase the social cost of carbon by a factor of 2 to 4.
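The structure underlying such formulas can be illustrated with a deliberately simplified sketch: at its core, the SCC is the discounted sum of the extra damages that one additional tonne of CO2 causes in each future year. The marginal-damage stream and discount rate below are invented numbers for illustration, not estimates from any model:

```python
# Toy SCC: present value of an invented stream of marginal damages
# ($ per tonne of CO2 per year) caused by one extra tonne emitted today.
marginal_damage = [0.8] * 50 + [1.6] * 50   # rises in the second half-century
discount_rate = 0.03

scc = sum(d / (1 + discount_rate) ** t for t, d in enumerate(marginal_damage))
print(round(scc, 2))   # a single present-value figure, in $/tCO2
```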
Use in investment decisions.
Organizations that take an integrated management approach use the social cost of carbon to help evaluate investment decisions and guide long-term planning, so as to consider the full extent of how their operations impact society and the environment. By placing a value on carbon emissions, decision makers can expand upon traditional financial decision-making tools and create new metrics for measuring the short- and long-term outcomes of their actions. This takes the triple bottom line a step further and promotes an integrated bottom line (IBL) approach. Prioritising an IBL approach begins with changing the way traditional financial measurements are viewed, as these do not take into consideration the full extent of the short- and long-term impacts of a decision or action. Instead, return on investment can be expanded to return on integration, internal rate of return can evolve into integrated rate of return, and instead of focusing on net present value, companies can plan for integrated future value.
By country.
The SCC is highly sensitive to socioeconomic narratives. Because carbon dioxide is a global externality, a rational, coordinated human society (or what economists might call "the social planner") would never want to set policy based on anything other than the global aggregate value. However, given the de-globalization trend around the world, the country-level or regional-level social cost of carbon is also calculated. Yang et al. calculated the regional social cost of carbon using a regional cost-benefit IAM (RICE). Generally, SCCs in developing countries are much more sensitive to socioeconomic uncertainty and risk valuation; average SCCs in developing regions are 20 times higher than in developed regions. Cost-benefit IAMs require more computational resources to provide SCC at the country level, so Ricke et al. calculate the social cost of carbon based on discounted future damage. Their estimation shows that countries that incur large fractions of the global cost consistently include India, China, Saudi Arabia and the United States.
Canada.
In 2023 the SCC was estimated as 261 Canadian dollars/tCO2, the same as the US SCC.
United States.
In February 2021 the US government set the social cost of carbon to $51 per tonne, based on a 3% discount rate, but it planned a more thorough review of the issue. However, in February 2022 a federal court ruled against the government and said the figure was invalid, holding that only damage within the US could be included. In March 2022, a three-judge panel of the 5th Circuit Court of Appeals stayed that injunction, permitting continued use of the interim figure. The social cost of carbon is used in policymaking.
Executive Order 12866 requires agencies to consider the costs and benefits of any potential regulations and, bearing in mind that some factors may be difficult to assign monetary value, only propose regulations whose benefits would justify the cost. Social cost of carbon estimates allow agencies to bring considerations of the impact of increased carbon dioxide emissions into cost-benefit analyses of proposed regulations.
The United States government was not required to implement greenhouse gas emission requirements until after the 2007 court case "Massachusetts v. EPA". The U.S. government struggled to implement such requirements in part due to the lack of an accurate social cost of carbon to guide policy making.
Due to the varying estimates of the social cost of carbon, in 2009 the Office of Management and Budget (OMB) and the Council of Economic Advisers established the Interagency Working Group on the Social Cost of Greenhouse Gases (IWG) in an attempt to develop standard estimates of the SCC for use by federal agencies considering regulatory policies. The body was formerly named the Interagency Working Group on the Social Cost of Carbon, but its scope has since been extended to include multiple greenhouse gases. The IWG works closely with the National Academies of Sciences, Engineering, and Medicine when researching and creating up-to-date reports on the SCC.
When developing the 2010 and 2013 social cost of carbon estimates, the U.S. Government Accountability Office used a consensus-based approach with working groups, alongside existing academic works, studies, and models. These produced estimates of the social costs and benefits that government agencies could use when creating environmental policies. Members of the public are able to comment on the developed social cost of carbon.
Along with the Office of Management and Budget (OMB) and the Council of Economic Advisers, six federal agencies worked in the working group: the Environmental Protection Agency (EPA), the United States Department of Agriculture, the Department of Commerce, the Department of Energy, the Department of Transportation (DOT), and the Department of the Treasury. The Interagency Working Group analyzed and advised that policy surrounding the social cost of carbon must be implemented based on global impacts instead of domestic ones. Support for this expansion in scope stems from theories that climate change may lead to global migration and political and environmental destabilization that affects both the national security and economy of the United States, as well as its allies and trading partners. The social cost of carbon in the United States government is intended to be continuously updated, with an end goal of public and scientific approval, in order to make efficient environmental policy.
The price set for the social cost of carbon depends on the administration in charge. The Obama administration paved the way for the first federal estimate putting a price on carbon emissions, estimating the cost at $36 per tonne in 2015, $42 in 2020, and $46 in 2025.
The Trump administration estimated the economic damage at between $1 and $7 per tonne in 2020. Trump's Executive Order 13783 mandated that SCC estimates be calculated based on guidelines from the 2003 OMB Circular A-4, rather than guidelines based on more recent climate science.
In November 2022, the EPA issued an estimate of $190 per tonne for 2020, and published a detailed methodology.
Criticism.
The SCC has been criticised as being extremely uncertain and as having to change over time and according to the level of emissions, and is claimed to be useless to policymakers because the Paris Agreement sets a goal of limiting the temperature rise to 2 °C.
Calculating the SCC brings about a degree of uncertainty particularly due to unknown future economic growth and socioeconomic development paths. Societal preferences over development, international trade, and the potential for technological innovation, as well as national preferences regarding energy development should be taken into account. The discount rate, damage and pending climate system response also contribute to uncertainty.
Furthermore, SCC calculations produce a range of figures, with the most commonly used number being the central case value (an average over the entire data set at a given discount rate). The SCC is no longer used for policy appraisal in the UK or the EU.
History.
The concept of a social cost of carbon was first mooted by the Reagan administration of the United States in 1981. Federal agencies such as the Environmental Protection Agency and the Department of Transportation began to develop other forms of social cost calculations from carbon during the George H. W. Bush administration. Furthermore, an economic social cost from carbon was judicially mandated in cost-benefit analysis for new policy in 2008, following a decision by a federal appellate court. The following year, in 2009, there was a call for a uniform calculation of the social cost of carbon to be used across the government.
The UK government has estimated the social cost of carbon since 2002, when a Government Economic Service working paper, "Estimating the social cost of carbon emissions", suggested £19/tCO2 within a range of £10 to £38/tCO2. This cost was set to rise at a rate of £0.27/tCO2 per year to reflect the increasing marginal cost of emissions. In 2009 the UK government conducted a review of the approach taken to developing carbon values; its conclusion was to move to a "target-consistent" or "abatement cost" approach to carbon valuation rather than a social cost of carbon (SCC) approach, and the SCC has not been used since. Following a cross-government review during 2020 and 2021, UK carbon valuations were further updated for consistency with the global 1.5 °C goal and the UK's domestic targets.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "SCCx=(\\partial w/\\partial cx0)^-1 \\sum_{t=0}^T \\sum_{r=1}^R\\partial w/\\partial c\n"
}
] | https://en.wikipedia.org/wiki?curid=10708511 |
10708900 | Reversible reference system propagation algorithm | Time-stepping algorithm in molecular dynamics
Reversible reference system propagation algorithm (r-RESPA) is a time stepping algorithm used in molecular dynamics.
It evolves the system state over time,
formula_0
where the "L" is the Liouville operator. | [
{
"math_id": 0,
"text": "\\Gamma(t) = e^{iLt}\\Gamma(t=0) \\, "
}
] | https://en.wikipedia.org/wiki?curid=10708900 |
10711453 | Long short-term memory | Artificial recurrent neural network architecture used in deep learning
<templatestyles src="Machine learning/styles.css"/>
Long short-term memory (LSTM) is a type of recurrent neural network (RNN) aimed at dealing with the vanishing gradient problem present in traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models and other sequence learning methods. It aims to provide a short-term memory for RNNs that can last thousands of timesteps, thus "long short-term memory". The name is made in analogy with long-term memory and short-term memory and their relationship, studied by cognitive psychologists since the early 20th century.
It is applicable to classification, processing and predicting data based on time series, such as in handwriting, speech recognition, machine translation, speech activity detection, robot control, video games, and healthcare.
A common LSTM unit is composed of a cell, an input gate, an output gate and a forget gate. The cell remembers values over arbitrary time intervals and the three "gates" regulate the flow of information into and out of the cell. Forget gates decide what information to discard from the previous state by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 means to keep the information, and a value of 0 means to discard it. Input gates decide which pieces of new information to store in the current cell state, using the same system as forget gates. Output gates control which pieces of information in the current cell state to output by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful, long-term dependencies to make predictions, both in current and future time-steps.
Motivation.
In theory, classic RNNs can keep track of arbitrary long-term dependencies in the input sequences. The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow with little to no attenuation. However, LSTM networks can still suffer from the exploding gradient problem.
The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information. In other words, the network effectively learns which information might be needed later on in a sequence and when that information is no longer needed. For instance, in the context of natural language processing, the network can learn grammatical dependencies. An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject "Dave", noting that this information is pertinent for the pronoun "his", and noting that this information is no longer important after the verb "is".
Variants.
In the equations below, the lowercase variables represent vectors. Matrices formula_0 and formula_1 contain, respectively, the weights of the input and recurrent connections, where the subscript formula_2 can either be the input gate formula_3, output gate formula_4, the forget gate formula_5 or the memory cell formula_6, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, formula_7 is not just one unit of one LSTM cell, but contains formula_8 LSTM cell's units.
LSTM with a forget gate.
The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:
formula_9
where the initial values are formula_10 and formula_11 and the operator formula_12 denotes the Hadamard product (element-wise product). The subscript formula_13 indexes the time step.
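As an illustration of the forward pass above, here is a minimal NumPy sketch of a single LSTM cell with a forget gate. The weight matrices are randomly initialized purely for demonstration (they are not trained), the dimensions are arbitrary choices, and the names W, U and b mirror the matrices and bias vectors in the equations.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One forward step of an LSTM cell with a forget gate."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])      # output gate
    c_tilde = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # cell input activation
    c_t = f_t * c_prev + i_t * c_tilde   # new cell state (element-wise products)
    h_t = o_t * np.tanh(c_t)             # new hidden state / output
    return h_t, c_t

# Arbitrary dimensions and random (untrained) parameters, for illustration only.
d, h = 4, 3                                   # input features, hidden units
rng = np.random.default_rng(0)
W = {g: rng.standard_normal((h, d)) for g in 'fioc'}
U = {g: rng.standard_normal((h, h)) for g in 'fioc'}
b = {g: np.zeros(h) for g in 'fioc'}

h_t, c_t = np.zeros(h), np.zeros(h)           # initial values h_0 = 0 and c_0 = 0
for x_t in rng.standard_normal((5, d)):       # a short input sequence of length 5
    h_t, c_t = lstm_step(x_t, h_t, c_t, W, U, b)
print(h_t)
```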
Variables.
Letting the superscripts formula_14 and formula_8 refer to the number of input features and number of hidden units, respectively: formula_15 is the input vector to the LSTM unit; formula_16, formula_17 and formula_18 are the activation vectors of the forget, input/update and output gates; formula_19 is the hidden state vector, also known as the output vector of the LSTM unit; formula_20 is the cell input activation vector; formula_7 is the cell state vector; and formula_21, formula_22 and formula_23 are the weight matrices and bias vector parameters learned during training. formula_24 denotes the sigmoid function, formula_25 the hyperbolic tangent function, and formula_26 the hyperbolic tangent function or, as the peephole LSTM paper suggests, formula_27.
Peephole LSTM.
The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM). Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state. formula_28 is not used; formula_29 is used instead in most places.
formula_30
Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. formula_31 and formula_32 represent the activations of, respectively, the input and output gates and the forget gate, at time step formula_13.
The 3 exit arrows from the memory cell formula_6 to the 3 gates formula_33 and formula_5 represent the "peephole" connections. These peephole connections actually denote the contributions of the activation of the memory cell formula_6 at time step formula_34, i.e. the contribution of formula_29 (and not formula_35, as the picture may suggest). In other words, the gates formula_33 and formula_5 calculate their activations at time step formula_13 (i.e., respectively, formula_31 and formula_32) also considering the activation of the memory cell formula_6 at time step formula_36, i.e. formula_29.
The single left-to-right arrow exiting the memory cell is "not" a peephole connection and denotes formula_35.
The little circles containing a formula_37 symbol represent an element-wise multiplication between its inputs. The big circles containing an "S"-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.
Peephole convolutional LSTM.
Peephole convolutional LSTM. The formula_38 denotes the convolution operator.
formula_39
Training.
An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to corresponding weight.
A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to formula_40 if the spectral radius of formula_41 is smaller than 1.
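A small numerical illustration of this point, with an arbitrary 2×2 matrix whose spectral radius is below 1: its repeated powers shrink toward the zero matrix, which is the mechanism by which back-propagated gradients vanish in such a linearized recurrence.

```python
import numpy as np

W = np.array([[0.5, 0.2],
              [0.1, 0.4]])
print(max(abs(np.linalg.eigvals(W))))          # spectral radius = 0.6 < 1

for n in (1, 10, 50):
    print(n, np.linalg.norm(np.linalg.matrix_power(W, n)))   # norm decays toward 0
```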
However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.
CTC score function.
Many applications use stacks of LSTM RNNs and train them by connectionist temporal classification (CTC) to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
Alternatives.
Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution or by policy gradient methods, especially when there is no "teacher" (that is, training labels).
Success.
There have been several successful stories of training, in a non-supervised fashion, RNNs with LSTM units.
In 2018, Bill Gates called it a "huge milestone in advancing artificial intelligence" when bots developed by OpenAI were able to beat humans in the game of Dota 2. OpenAI Five consists of five independent but coordinated neural networks. Each network is trained by a policy gradient method without a supervising teacher and contains a single-layer, 1024-unit long short-term memory that sees the current game state and emits actions through several possible action heads.
In 2018, OpenAI also trained a similar LSTM by policy gradients to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.
In 2019, DeepMind's program AlphaStar used a deep LSTM core to excel at the complex video game Starcraft II. This was viewed as significant progress towards Artificial General Intelligence.
Applications.
Applications of LSTM include:
<templatestyles src="Div col/styles.css"/>
Timeline of development.
1989: Mike Mozer's work on "focused back-propagation" anticipates aspects of LSTM, which the LSTM paper cites.
1991: Sepp Hochreiter analyzed the vanishing gradient problem and developed principles of the method in his German diploma thesis, which was considered highly significant by his supervisor Jürgen Schmidhuber.
1995: "Long Short-Term Memory (LSTM)" is published in a technical report by Sepp Hochreiter and Jürgen Schmidhuber.
1996: LSTM is published at NIPS'1996, a peer-reviewed conference.
1997: The main LSTM paper is published in the journal Neural Computation. By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of LSTM block included cells, input and output gates.
1999: Felix Gers, Jürgen Schmidhuber, and Fred Cummins introduced the forget gate (also called "keep gate") into the LSTM architecture,
enabling the LSTM to reset its own state.
2000: Gers, Schmidhuber, and Cummins added peephole connections (connections from the cell to the gates) into the architecture. Additionally, the output activation function was omitted.
2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models.
Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm).
2004: First successful application of LSTM to speech by Alex Graves et al.
2005: First publication (Graves and Schmidhuber) of LSTM with full backpropagation through time and of bi-directional LSTM.
2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher.
2006: Graves, Fernandez, Gomez, and Schmidhuber introduce a new error function for LSTM: Connectionist Temporal Classification (CTC) for simultaneous alignment and recognition of sequences. CTC-trained LSTM led to breakthroughs in speech recognition.
Mayer et al. trained LSTM to control robots.
2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher.
Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.
2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves. One was the most accurate model in the competition and another was the fastest. This was the first time an RNN won international competitions.
2009: Justin Bayer et al. introduced neural architecture search for LSTM.
2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.
2014: Kyunghyun Cho et al. put forward a simplified variant of the forget gate LSTM called Gated recurrent unit (GRU).
2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice. According to the official blog post, the new model cut transcription errors by 49%.
2015: Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks. 7 months later, Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called the Residual neural network. This has become the most cited neural network of the 21st century.
2016: Google started using an LSTM to suggest messages in the Allo conversation app. In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%.
Apple announced in its Worldwide Developers Conference that it would start using the LSTM for quicktype in the iPhone and for Siri.
Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.
2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.
Researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference. Their Time-Aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM.
Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".
2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2, and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.
2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of Starcraft II.
2021: According to Google Scholar, in 2021, LSTM was cited over 16,000 times within a single year. This reflects applications of LSTM in many different fields including healthcare.
2024: An evolution of LSTM called xLSTM is published by a team led by Sepp Hochreiter.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "W_q"
},
{
"math_id": 1,
"text": "U_q"
},
{
"math_id": 2,
"text": "_q"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "o"
},
{
"math_id": 5,
"text": "f"
},
{
"math_id": 6,
"text": "c"
},
{
"math_id": 7,
"text": "c_t \\in \\mathbb{R}^{h}"
},
{
"math_id": 8,
"text": "h"
},
{
"math_id": 9,
"text": "\n\\begin{align}\nf_t &= \\sigma_g(W_{f} x_t + U_{f} h_{t-1} + b_f) \\\\\ni_t &= \\sigma_g(W_{i} x_t + U_{i} h_{t-1} + b_i) \\\\\no_t &= \\sigma_g(W_{o} x_t + U_{o} h_{t-1} + b_o) \\\\\n\\tilde{c}_t &= \\sigma_c(W_{c} x_t + U_{c} h_{t-1} + b_c) \\\\\nc_t &= f_t \\odot c_{t-1} + i_t \\odot \\tilde{c}_t \\\\\nh_t &= o_t \\odot \\sigma_h(c_t)\n\\end{align}\n"
},
{
"math_id": 10,
"text": "c_0 = 0"
},
{
"math_id": 11,
"text": "h_0 = 0"
},
{
"math_id": 12,
"text": "\\odot"
},
{
"math_id": 13,
"text": "t"
},
{
"math_id": 14,
"text": "d"
},
{
"math_id": 15,
"text": "x_t \\in \\mathbb{R}^{d}"
},
{
"math_id": 16,
"text": "f_t \\in {(0,1)}^{h}"
},
{
"math_id": 17,
"text": "i_t \\in {(0,1)}^{h}"
},
{
"math_id": 18,
"text": "o_t \\in {(0,1)}^{h}"
},
{
"math_id": 19,
"text": "h_t \\in {(-1,1)}^{h}"
},
{
"math_id": 20,
"text": "\\tilde{c}_t \\in {(-1,1)}^{h}"
},
{
"math_id": 21,
"text": "W \\in \\mathbb{R}^{h \\times d}"
},
{
"math_id": 22,
"text": "U \\in \\mathbb{R}^{h \\times h} "
},
{
"math_id": 23,
"text": "b \\in \\mathbb{R}^{h}"
},
{
"math_id": 24,
"text": "\\sigma_g"
},
{
"math_id": 25,
"text": "\\sigma_c"
},
{
"math_id": 26,
"text": "\\sigma_h"
},
{
"math_id": 27,
"text": "\\sigma_h(x) = x"
},
{
"math_id": 28,
"text": "h_{t-1}"
},
{
"math_id": 29,
"text": "c_{t-1}"
},
{
"math_id": 30,
"text": "\n\\begin{align}\nf_t &= \\sigma_g(W_{f} x_t + U_{f} c_{t-1} + b_f) \\\\\ni_t &= \\sigma_g(W_{i} x_t + U_{i} c_{t-1} + b_i) \\\\\no_t &= \\sigma_g(W_{o} x_t + U_{o} c_{t-1} + b_o) \\\\\nc_t &= f_t \\odot c_{t-1} + i_t \\odot \\sigma_c(W_{c} x_t + b_c) \\\\\nh_t &= o_t \\odot \\sigma_h(c_t)\n\\end{align}\n"
},
{
"math_id": 31,
"text": "i_t, o_t"
},
{
"math_id": 32,
"text": "f_t"
},
{
"math_id": 33,
"text": "i, o"
},
{
"math_id": 34,
"text": "t-1"
},
{
"math_id": 35,
"text": "c_{t}"
},
{
"math_id": 36,
"text": "t - 1"
},
{
"math_id": 37,
"text": "\\times"
},
{
"math_id": 38,
"text": "*"
},
{
"math_id": 39,
"text": "\n\\begin{align}\nf_t &= \\sigma_g(W_{f} * x_t + U_{f} * h_{t-1} + V_{f} \\odot c_{t-1} + b_f) \\\\\ni_t &= \\sigma_g(W_{i} * x_t + U_{i} * h_{t-1} + V_{i} \\odot c_{t-1} + b_i) \\\\\nc_t &= f_t \\odot c_{t-1} + i_t \\odot \\sigma_c(W_{c} * x_t + U_{c} * h_{t-1} + b_c) \\\\\no_t &= \\sigma_g(W_{o} * x_t + U_{o} * h_{t-1} + V_{o} \\odot c_{t} + b_o) \\\\\nh_t &= o_t \\odot \\sigma_h(c_t)\n\\end{align}\n"
},
{
"math_id": 40,
"text": "\\lim_{n \\to \\infty}W^n = 0"
},
{
"math_id": 41,
"text": "W"
}
] | https://en.wikipedia.org/wiki?curid=10711453 |
1071289 | Finite-state transducer | Finite state machine with two tapes (input, output)
A finite-state transducer (FST) is a finite-state machine with two memory "tapes", following the terminology for Turing machines: an input tape and an output tape. This contrasts with an ordinary finite-state automaton, which has a single tape. An FST is a type of finite-state automaton (FSA) that maps between two sets of symbols. An FST is more general than an FSA. An FSA defines a formal language by defining a set of accepted strings, while an FST defines a relation between sets of strings.
An FST reads a set of strings on the input tape and generates a set of relations on the output tape. An FST can be thought of as a translator or relater between strings in a set.
In morphological parsing, an example would be inputting a string of letters into the FST; the FST would then output a string of morphemes.
Overview.
An automaton can be said to "recognize" a string if we view the content of its tape as input. In other words, the automaton computes a function that maps strings into the set {0,1}. Alternatively, we can say that an automaton "generates" strings, which means viewing its tape as an output tape. On this view, the automaton generates a formal language, which is a set of strings. The two views of automata are equivalent: the function that the automaton computes is precisely the indicator function of the set of strings it generates. The class of languages generated by finite automata is known as the class of regular languages.
The two tapes of a transducer are typically viewed as an input tape and an output tape. On this view, a transducer is said to "transduce" (i.e., translate) the contents of its input tape to its output tape, by accepting a string on its input tape and generating another string on its output tape. It may do so nondeterministically and it may produce more than one output for each input string. A transducer may also produce no output for a given input string, in which case it is said to "reject" the input. In general, a transducer computes a relation between two formal languages.
Each string-to-string finite-state transducer relates the input alphabet Σ to the output alphabet Γ. Relations "R" on Σ*×Γ* that can be implemented as finite-state transducers are called rational relations. Rational relations that are partial functions, i.e. that relate every input string from Σ* to at most one Γ*, are called rational functions.
Finite-state transducers are often used for phonological and morphological analysis in natural language processing research and applications. Pioneers in this field include Ronald Kaplan, Lauri Karttunen, Martin Kay and Kimmo Koskenniemi.
A common way of using transducers is in a so-called "cascade", where transducers for various operations are combined into a single transducer by repeated application of the composition operator (defined below).
Formal construction.
Formally, a finite transducer "T" is a 6-tuple ("Q", Σ, Γ, "I", "F", δ) such that: "Q" is a finite set, the set of "states"; Σ is a finite set, called the "input alphabet"; Γ is a finite set, called the "output alphabet"; "I" is a subset of "Q", the set of "initial states"; "F" is a subset of "Q", the set of "final states"; and formula_0 (where ε is the empty string) is the "transition relation".
We can view ("Q", "δ") as a labeled directed graph, known as the "transition graph" of "T": the set of vertices is "Q", and formula_1 means that there is a labeled edge going from vertex "q" to vertex "r". We also say that "a" is the "input label" and "b" the "output label" of that edge.
NOTE: This definition of finite transducer is also called "letter transducer" (Roche and Schabes 1997); alternative definitions are possible, but can all be converted into transducers following this one.
Define the "extended transition relation" formula_2 as the smallest set such that:
The extended transition relation is essentially the reflexive transitive closure of the transition graph that has been augmented to take edge labels into account. The elements of formula_2 are known as "paths". The edge labels of a path are obtained by concatenating the edge labels of its constituent transitions in order.
The "behavior" of the transducer "T" is the rational relation ["T"] defined as follows: formula_9 if and only if there exists formula_10 and formula_11 such that formula_12. This is to say that "T" transduces a string formula_13 into a string formula_14 if there exists a path from an initial state to a final state whose input label is "x" and whose output label is "y".
Weighted automata.
Finite-state transducers can be weighted, where each transition is labelled with a weight in addition to the input and output labels. A weighted finite-state transducer (WFST) over a set "K" of weights can be defined similarly to an unweighted one as an 8-tuple "T"=("Q", Σ, Γ, "I", "F", "E", "λ", "ρ"), where "Q", Σ, Γ, "I" and "F" are as above, formula_15 is the finite set of weighted transitions, formula_16 is an initial weight function, and formula_17 is a final weight function.
In order to make certain operations on WFSTs well-defined, it is convenient to require the set of weights to form a semiring. Two typical semirings used in practice are the log semiring and tropical semiring: nondeterministic automata may be regarded as having weights in the Boolean semiring.
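For example, in the tropical semiring the semiring "addition" of weights is the minimum and the semiring "multiplication" is ordinary addition, so a path's weight is the sum of its transition weights and a string's weight is the minimum over its accepting paths. A small Python sketch with invented path weights:

```python
from functools import reduce
import math

# Tropical semiring over the reals extended with +infinity.
ZERO, ONE = math.inf, 0.0         # semiring zero (for "plus") and one (for "times")
t_add = min                       # semiring "plus"  = minimum

def t_mul(a, b):                  # semiring "times" = ordinary addition
    return a + b

# Hypothetical per-transition weights along two accepting paths for some string.
paths = [[1.5, 0.2, 0.3],
         [0.4, 0.9]]

path_weights = [reduce(t_mul, p, ONE) for p in paths]   # 2.0 and 1.3
string_weight = reduce(t_add, path_weights, ZERO)       # minimum of the path weights
print(string_weight)                                    # 1.3
```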
Stochastic FST.
Stochastic FSTs (also known as probabilistic FSTs or statistical FSTs) are presumably a form of weighted FST.
Operations on finite-state transducers.
The following operations defined on finite automata also apply to finite transducers:
Union. Given transducers "T" and "S", there exists a transducer formula_18 such that formula_19 if and only if formula_9 or formula_20.
Concatenation. Given transducers "T" and "S", there exists a transducer formula_21 such that formula_22 if and only if there exist formula_23 with formula_24 and formula_25
Kleene closure. Given a transducer "T", a transducer formula_26 can be constructed such that: (k1) the pair of empty strings is in the relation; (k2) whenever "w" and "y" are related by formula_26 and "x" and "z" are related by "T", then "wx" and "yz" are related by formula_26; and formula_27 does not hold unless mandated by (k1) or (k2).
Composition. Given a transducer "T" on alphabets Σ and Γ and a transducer "S" on alphabets Γ and Δ, there exists a transducer formula_28 such that formula_29 if and only if there exists a string "y" such that formula_9 and formula_30.
This definition uses the same notation used in mathematics for relation composition. However, the conventional reading for relation composition is the other way around: given two relations T and S, formula_31 when there exist some y such that formula_32 and formula_33
Projection to an automaton. There are two projection functions: formula_34 preserves the input tape, and formula_35 preserves the output tape. The first projection, formula_34, is defined as follows: given a transducer "T", there exists a finite automaton formula_36 such that formula_36 accepts "x" if and only if there exists a string "y" for which formula_37
The second projection, formula_35 is defined similarly.
Applications.
FSTs are used in the lexical analysis phase of compilers to associate semantic value with the discovered tokens.
Context-sensitive rewriting rules of the form "a" → "b" / "c" _ "d", used in linguistics to model phonological rules and sound change, are computationally equivalent to finite-state transducers, provided that application is nonrecursive, i.e. the rule is not allowed to rewrite the same substring twice.
Weighted FSTs found applications in natural language processing, including machine translation, and in machine learning. An implementation for part-of-speech tagging can be found as one component of the OpenGrm library.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\delta \\subseteq Q \\times (\\Sigma\\cup\\{\\epsilon\\}) \\times (\\Gamma\\cup\\{\\epsilon\\}) \\times Q"
},
{
"math_id": 1,
"text": "(q,a,b,r)\\in\\delta"
},
{
"math_id": 2,
"text": "\\delta^*"
},
{
"math_id": 3,
"text": "\\delta\\subseteq\\delta^*"
},
{
"math_id": 4,
"text": "(q,\\epsilon,\\epsilon,q)\\in\\delta^*"
},
{
"math_id": 5,
"text": "q\\in Q"
},
{
"math_id": 6,
"text": "(q,x,y,r) \\in \\delta^*"
},
{
"math_id": 7,
"text": "(r,a,b,s) \\in \\delta"
},
{
"math_id": 8,
"text": "(q,xa,yb,s) \\in \\delta^*"
},
{
"math_id": 9,
"text": "x[T]y"
},
{
"math_id": 10,
"text": "i \\in I"
},
{
"math_id": 11,
"text": "f \\in F"
},
{
"math_id": 12,
"text": "(i,x,y,f) \\in \\delta^*"
},
{
"math_id": 13,
"text": "x\\in\\Sigma^*"
},
{
"math_id": 14,
"text": "y\\in\\Gamma^*"
},
{
"math_id": 15,
"text": " E \\subseteq Q \\times (\\Sigma\\cup\\{\\epsilon\\}) \\times (\\Gamma\\cup\\{\\epsilon\\}) \\times Q \\times K"
},
{
"math_id": 16,
"text": "\\lambda: I \\rightarrow K "
},
{
"math_id": 17,
"text": "\\rho: F \\rightarrow K "
},
{
"math_id": 18,
"text": "T\\cup S"
},
{
"math_id": 19,
"text": "x[T\\cup S]y"
},
{
"math_id": 20,
"text": "x[S]y"
},
{
"math_id": 21,
"text": "T\\cdot S"
},
{
"math_id": 22,
"text": "x[T\\cdot S]y"
},
{
"math_id": 23,
"text": "x_1, x_2, y_1, y_2"
},
{
"math_id": 24,
"text": "x=x_1x_2, y =y_1y_2, x_1[T]y_1"
},
{
"math_id": 25,
"text": "x_2[S]y_2."
},
{
"math_id": 26,
"text": "T^*"
},
{
"math_id": 27,
"text": "x[T^*]y"
},
{
"math_id": 28,
"text": "T \\circ S"
},
{
"math_id": 29,
"text": "x[T \\circ S]z"
},
{
"math_id": 30,
"text": "y[S]z"
},
{
"math_id": 31,
"text": "(x,z)\\in T\\circ S"
},
{
"math_id": 32,
"text": "(x,y)\\in S"
},
{
"math_id": 33,
"text": "(y,z)\\in T."
},
{
"math_id": 34,
"text": "\\pi_1"
},
{
"math_id": 35,
"text": "\\pi_2"
},
{
"math_id": 36,
"text": "\\pi_1 T"
},
{
"math_id": 37,
"text": "x[T]y."
},
{
"math_id": 38,
"text": "L=(\\Sigma\\cup\\{\\epsilon\\}) \\times (\\Gamma\\cup\\{\\epsilon\\})"
},
{
"math_id": 39,
"text": "L"
},
{
"math_id": 40,
"text": "L=[(\\Sigma\\cup\\{\\epsilon\\}) \\times \\Gamma] \\cup [\\Sigma \\times (\\Gamma\\cup\\{\\epsilon\\})]"
}
] | https://en.wikipedia.org/wiki?curid=1071289 |
1071369 | Ford Tempo | Car model
The Ford Tempo is an automobile that was produced by Ford from the 1984 to 1994 model years. The successor of the Ford Fairmont, the Tempo marked both the downsizing of the Ford compact car line and its adoption of front-wheel drive. Through its production, the model line was offered as a two-door coupe and four-door sedan, with the Mercury Topaz marketed as its divisional counterpart (no Lincoln version was sold).
Deriving its chassis underpinnings and powertrain from the Ford Escort, the Tempo was the first aerodynamically styled sedan introduced by Ford. First seen on the 1982 Ford Sierra hatchbacks (designed by Ford of Europe) and the 1983 Ford Thunderbird, the model line was followed by the 1986 Ford Taurus.
Produced across multiple facilities in North America, the Tempo/Topaz was produced in a single generation of two-doors; two generations of four-door sedans were produced. For the 1995 model year, the Tempo/Topaz four-door sedan was replaced by the Ford Contour (and Mercury Mystique), developed from the Ford Mondeo; the two-door Tempo was not directly replaced.
Development.
In the late 1970s Ford began planning to replace their compact rear wheel drive Ford Fairmont and Mercury Zephyr models with a new smaller front wheel drive (FWD) car. This new compact was expected to compete in the marketplace with General Motors' X-Body, but wound up more similar to GM's J-cars. Ford's chief development engineer for the new car was Ed Cascardo.
The Tempo and Topaz chassis shared some parts with the front-wheel-drive platform used on the first North American Ford Escort, but with a stretched wheelbase and distinctive new bodies. There were few common components due to the Tempo and Topaz's larger size. Switching to front-wheel drive freed up interior space that would have otherwise been lost to accommodate a driveshaft and rear differential.
Wind tunnel testing on the Tempo began in December 1978. More than 450 hours of testing resulted in over 950 different design changes. The Tempo and Topaz both featured windshields inclined at a 60° angle and aircraft-inspired door frames, two features that had both appeared on the Thunderbird and Cougar in 1983. The door frames wrapped up into the roofline, which improved sealing, allowed for hidden drip rails, and cleaned up the A-pillar area of the car. The rear track was widened, improving aerodynamic efficiency. The front grille was laid back and the leading edge of the hood was tuned for aerodynamic cleanliness. The wheels were pushed out to the corners of the body, reducing turbulence. The cars' backlights were also laid down at 60°, and the rear deck was raised, reducing drag and resulting in greater fuel efficiency. Viewed from the side, the raised trunk imparted a wedge stance to the car which was especially prominent on the two-door coupes. The aerodynamic work resulted in a coefficient of drag (formula_0) of 0.36 for the two-door Tempo, equal to that of the aero Ford Thunderbird. The four door returned a drag coefficient of 0.37.
The Tempo was designed for a four-cylinder engine, but all production of Ford's 2.3 L Lima OHC four was committed to other product lines. In 1983 Ford had stopped production of their 200 cubic inch Thriftpower inline six, leaving unused capacity at the Lima Engine plant. Ford developed a four-cylinder engine that shared some features of the Thriftpower six, topped with a new cylinder head and using other new technologies, while repurposing as much tooling as possible at the Lima plant.
When the cars were released, a turbocharged version of the new four cylinder was said to be in development, but this engine never became available. At that point, a V6 was not being considered.
Released in 1983 as a 1984 model, more than 107,000 two-door Tempos and more than 295,000 four-door Tempos were sold in its first year.
Features.
Chassis and suspension.
The Tempo's chassis is a steel unibody. The structure from the firewall forward is shared with the contemporary Ford Escort.
The Tempo's front suspension on each side comprises a lower lateral link triangulated by the anti-roll bar and a coil over MacPherson strut. The rear "Quadralink" suspension is two parallel lower lateral control links and a radius rod per side, with coil over MacPherson struts. This differed from the Escort's rear suspension, which used a lower lateral arm, radius rod, and non-concentric coil spring and shock absorber. The Tempo was the first ever American built Ford with an independent rear suspension using MacPherson struts.
Brakes are discs in front and drums in back.
Steering is by power assisted rack and pinion, with three turns lock-to-lock.
Standard tires on base models are 175/80R13, while higher trim levels have 185/70R14, and sportier models are fitted with Michelin TRX 185/65R365 metric tires on aluminum wheels.
Powertrain.
The Tempo's four-cylinder gasoline engine is called the "High Swirl Combustion" (HSC) engine and displaces 2.3 L. It has a cast iron block and head, with a single cam-in-block and two overhead valves (OHV) per cylinder with pushrods and rocker arms. The HSC engine was also offered in a more powerful "High Specific Output" (HSO) version.
In 1992 the 3.0 L Ford Vulcan V6 engine became an option in the Tempo. Fitting the Vulcan V6 into the Tempo required changes to the water pump, and use of a more restrictive exhaust system that reduced maximum power.
The original base transmission in the gasoline fueled Tempo/Topaz is a four-speed IB4 manual that made up part of what Ford called the "Fuel Saver" powertrain. A five-speed MTX-III manual or a three-speed FLC automatic were optional upgrades.
From 1984 to 1986 a version of Mazda's four-cylinder RF diesel engine was offered in the Tempo and Topaz. The only transmission paired with the diesel engine was the 5-speed manual.
All Wheel Drive.
An optional All Wheel Drive (AWD) system became available in the Tempo and Topaz in 1987, and was offered until 1991. Although Ford had a long history with four wheel drive, and had built prototypes based on other car models before, the AWD Tempo was their first production passenger sedan to offer four wheel drive.
This design of part-time system is not meant for serious off-road driving, nor for use on dry streets. It is designed specifically to provide additional traction in slippery road conditions. The system is controlled by a rocker switch in the interior. When activated, the system engages a clutch which sends power to a limited-slip rear differential via a new driveshaft. There is no transfer case.
The all-wheel-drive system adds to the weight of the car and slightly increases ride height.
The only engine offered with the AWD option was the HSO four cylinder, with the 3-speed automatic transmission.
In 1991 Ford started referring to the system as "Four Wheel Drive" instead of All Wheel Drive.
First generation (1984–1987).
The first generation Tempo and Topaz were unveiled on the deck of the USS Intrepid, a decommissioned aircraft carrier that had been turned into a floating museum in New York Harbor. They were released on 26 May 1983 as 1984 models. An early advertisement for the car featured a Tempo sedan performing a 360 degree loop on a stunt track. The car in the ad was securely attached to a track, and was pulled through the shot rather than operating under its own power.
As Ford's first downsized compact car, the Tempo arrived four years after GM's compact X-Bodies in 1979 for the 1980 model years, and two and a half years after Chrysler's compact K-cars were introduced. The four door Tempo had three windows in profile, somewhat similar to the European Ford Sierra, while the four-door Topaz received a more upright C-pillar without rear quarter windows. The front of the car featured two sealed-beam halogen headlamps in chrome trimmed recessed mounts, and the grille between them featured four thin horizontal slats, swept back to allow for greater air flow into the engine compartment and over the hood.
The first generation Tempo came standard with a four-cylinder 2.3 L gasoline engine or an optional Mazda-built 2.0 L diesel engine. In late 1985 the four speed manual transmission was discontinued and the five-speed became standard. A slight modification was made to the five-speed transmission, moving the "reverse" position in the shift pattern from right beside first gear to the opposite bottom corner to reduce the likelihood of mistakenly selecting reverse rather than first gear during takeoff. Other changes for 1985 included a redesigned instrument panel with passenger side shelf, side window demisters, and a redesigned driver's pod with separate area for the radio. Also for 1985, the engine received a new central fuel injection (CFI) system, although the carbureted version was offered in Canada until 1987.
The instrument panel featured a new, easier to read gauge layout, with all switches and controls placed within easy reach of the driver.
In early 1985, the Tempo became the first production American automobile to feature a driver's side airbag as a supplemental restraint system. In 1984, Ford entered a contract with the General Services Administration and the Department of Transportation to supply 5,000 airbag-equipped Tempos. Half also received a special windshield designed to minimize lacerations to passengers, and all were early recipients of the high-mounted brake lights that became required by law in 1986.
1986 update.
In October 1985, the Tempo and the Topaz received several minor changes for the 1986 model year. The rectangular sealed-beam halogen headlamps were replaced with new, plastic composite enclosures with a replaceable lamp. The new headlights were flush-mounted to match the redesigned front corner lights and freshly restyled grille, which more closely matched that of the new Taurus that debuted in 1986, while the Topaz received a half pseudo-lightbar grille similar to the one on the upcoming Sable. In back, the trunk and taillights were slightly restyled.
In 1986, the Tempo surpassed the Hyundai Pony to become the best selling new car in Canada. A new "LX" luxury trim level replaced the GLX. An all-wheel-drive model was added for 1987 on both 2- and 4-door. It included all the interior amenities of the LX/LS models, and the HSO engine from the Sport (detailed below). An automatic transaxle was standard, and the AWD was driver-selected with a pushbutton.
From 1985 to 1987, Ford offered the Sport GL, which included unique interior and exterior styling cues, the 2.3 L HSO engine, alloy wheels, tachometer, and the five-speed manual transaxle with a lower (numerically higher) final drive ratio of 3.73 for quicker acceleration. It was badged simply as "GL", but was recognizable because it lacked the GL's chrome front and rear bumpers, had 14" alloy wheels and charcoal trim accents.
Trim levels.
First generation Tempo trim levels:
First generation Topaz trim levels:
Second generation (1988–1994).
For 1988, the Tempo and Topaz sedans were redesigned, while the coupes were just facelifted. Both cars arrived in November 1987. The changes made the Tempo and Topaz look even more like their respective Taurus and Sable stablemates. The front of the Tempo got a completely restyled grille featuring three thin horizontal chrome bars with a Ford oval in the center, and two composite flush-mounted rectangular headlamps with restyled front turn signal housings on either side. On the Tempo GLS, the grille was blacked out, as was the "D" pillar. At the rear were brand new flush mounted tail lamps. The rear quarter window was redesigned to match and blend evenly with the restyled rear door trim. The Topaz was differentiated from the Tempo by a more formal, more vertical rear window, a waterfall grille, more upscale wheels, and solid red tail lamps.
The HSO engine was standard equipment on the Mercury Topaz XR5 and LTS models.
Both the sedan and coupe received a brand new instrument panel design with a central gauge cluster that included a standard engine temperature gauge, and more ergonomic driver controls. Fan and windshield wiper controls were now mounted on rotary-style switches on either side of the instrument panel, and the HVAC controls received a new push-button control layout. Other changes included reworked interior door panels. A driver's side airbag continued as an option, a rarity then for an economy level car. On Tempo LX and AWD models, the interior received chrome and wood trim on the dashboard and doors. Topaz models featured a tachometer-equipped gauge cluster and a front center armrest as standard.
For the 1991 model year the all-wheel drive Tempo and Topaz and the Canadian market exclusive entry-level Tempo L were discontinued. For 1992, the Tempo and Topaz got a minor restyle, with the Tempo gaining body-colored side trim that replaced the black and chrome trim, as well as full body-colored bumpers. The Tempo's three bar chrome grille was replaced with a body-colored monochromatic piece, while the Topaz's chrome grille was replaced with a non-functional light-bar.
Also for 1992, the 3.0 L Vulcan V6 engine from the Taurus and Sable was introduced as an option for the GL and LX models, and as the standard engine on the GLS. The 1992 model year was the last year of the GLS, as it and its Topaz counterpart were discontinued in 1993. This left the Tempo with only two trim level options, GL and LX. 1992 also brought a slightly redesigned gauge cluster, with a tachometer reading up to 7,000 rpm instead of the previous 6,000 rpm. A fuel door indicator was added to the fuel gauge as an arrow pointing to the side of the car where the fuel door was located. 1992 was the only year when a speedometer reading to 120 mph was available in American models, and only in the GLS, XR5 and LTS trim levels; all other model years read to 85 MPH.
A revised body with eight headlamps was previewed late in 1991, and a redesigned Tempo was expected for 1993 or 1998. However, no new Tempo model appeared as it was discontinued in 1994 and was replaced by the Contour for 1995.
Trim levels.
Second generation Tempo trim levels:
Second generation Topaz trim levels:
End of production.
Although a third-generation Tempo had been spotted testing in 1990, this was eventually scrapped in favor of replacing the car with an adapted version of the European Ford Mondeo, then late in development. By 1993 Ford had been losing money on the Tempo for a decade. While the Tempo had long been a loss leader for Ford, the incoming Contour was based on the Mondeo, one of the most expensive cars in Ford's European lineup. Ford was unsuccessful in drawing a distinction between the Tempo and Contour, and many buyers assumed that the new car would be priced the same as the old, causing some to face a large sticker shock.
For buyers shopping for a compact Ford, moving to the Contour came with a jump in price: the range-topping 1994 Tempo LX sedan with V6 cost about $12,900, while a base model 1995 Contour GL with four-cylinder engine and manual transmission was $13,990.
1994 marked the last year for the "HSC" engine, the 2.5 L having been dropped from the Taurus in 1991. It was also the end for the 3-speed FLC automatic transmission, with the Ford Escort and Mercury Tracer using the Ford F-4EAT transmission.
The last Ford Tempo was built at Oakville Assembly on May 20, 1994. The facility was retooled to build the Ford Windstar. Kansas City Assembly and Cuautitlán Assembly became assembly points for the Ford Contour and Mercury Mystique.
Production figures.
The Tempo was a sales success for Ford, staying one of the top ten best selling cars in the US, if not one of the top five, during its entire production run. For the introductory, extended 1984 model year (16 months long), Ford sold a total of 531,468 examples of the Tempo and Topaz combined, but this was also the nameplate's best year. All model year production figures for the Tempo are as follows:
Other markets.
China.
During the early 1990s, in an attempt to ease trade tensions with the United States, China agreed to buy millions of dollars worth of automobiles from each of the Big Three American automakers; the share of the deal for Ford was worth $32 million. Along with reducing Chinese dependence on imports of Japanese automobiles, the arrangement let China keep its favored nation status with the United States (despite political tensions of the late 1980s).
In 1992, Ford began sales operations in China for the first time, introducing the Tempo alongside the Ford Taurus, Ford Crown Victoria, Ford Aerostar, and the Lincoln Town Car. In total, the Chinese government purchased over 8,200 Tempos, with the initial order of 3,010 1992 four-door Tempo sedans serving as the largest single fleet order ever received by Ford. In addition to the fitment of metric-unit instrument panels, the Tempo underwent several revisions for the Chinese market, including modifications of the fuel system to accommodate leaded fuel, heavier-duty suspension, and heavier-grade wiring.
The initial government fleet order was originally intended for use as taxis and tourist vehicles; all were white GL-trim four-door sedans. Following their importation, the Tempos remained under government ownership in official use, and were later sold to private owners.
Mexico.
Ford assembled and sold two models based on the American Tempo in Mexico — the Ford Topaz and the Ford Ghia.
The cars were built in Ford's Cuautitlán Assembly plant. The lower level of automation at that plant translated into higher assembly costs, making a Mexican-built Ford Topaz retail for about US$400 more than a better-equipped US model with more effective pollution controls.
The first to hit the Mexican market was the Ford Topaz, debuting in 1984. This car was based on the American Tempo, and was offered in both two door and four door sedan versions. Like the US model, the Ford Topaz was restyled in 1988.
The Ford Ghia debuted in 1991. Based on the American Mercury Topaz and trimmed to the level of the American Topaz LS, this model was more luxurious, with woodgrain door panels, a power antenna, a five-band graphic equalizer stereo, and 14-inch alloy wheels similar to the BBS design used on the European Ford Fiesta and Volkswagen Golf GTI. Initially only the 2.3 L four-cylinder engine was available; for 1992, a 3-speed automatic transmission, the V6 engine, and a leather interior became available.
T-Drive Tempo.
In or around 1990, a modified Tempo was used as a rolling testbed for a new powertrain configuration that Ford called "T-Drive". This system used transverse inline engines with the drive for the transmission taken off the center of the crankshaft, rather than the end.
The test Tempo's original transverse straight-four engine and transaxle were replaced with a transversely mounted DOHC straight-eight engine and a center-mounted longitudinal transaxle, making the Tempo rear-wheel drive. The T-Drive specification allowed for engines of 4, 6, and 8 cylinders of 2.0 L, 3.2 L, and 4.0 L respectively, and the Tempo received the largest engine. Its power output was only ever estimated. It is possible that the Tempo was chosen as it could accommodate a drive shaft and rear differential, due to the availability of the AWD model. This 8-cylinder, RWD Tempo with independent rear suspension was shown briefly and never seen again.
In 1991, the T-Drive technology was unveiled to the public as part of two concept cars; the Contour sedan and the Mystique minivan. It did not go into production.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\scriptstyle C_\\mathrm d\\,"
}
] | https://en.wikipedia.org/wiki?curid=1071369 |
1071730 | Hyperbolic quaternion | Mutation of quaternions where unit vectors square to +1
In abstract algebra, the algebra of hyperbolic quaternions is a nonassociative algebra over the real numbers with elements of the form
formula_0
where the squares of i, j, and k are +1 and distinct elements of {i, j, k} multiply with the anti-commutative property.
The four-dimensional algebra of hyperbolic quaternions incorporates some of the features of the older and larger algebra of biquaternions. They both contain subalgebras isomorphic to the split-complex number plane. Furthermore, just as the quaternion algebra H can be viewed as a union of complex planes, so the hyperbolic quaternion algebra is a pencil of planes of split-complex numbers sharing the same real line.
It was Alexander Macfarlane who promoted this concept in the 1890s as his "Algebra of Physics", first through the American Association for the Advancement of Science in 1891, then through his 1894 book of five "Papers in Space Analysis", and in a series of lectures at Lehigh University in 1900.
Algebraic structure.
Like the quaternions, the set of hyperbolic quaternions form a vector space over the real numbers of dimension 4. A linear combination
formula_1
is a hyperbolic quaternion when formula_2 and formula_3 are real numbers and the basis set formula_4 has these products:
formula_5
formula_6
formula_7
formula_8
Using the distributive property, these relations can be used to multiply any two hyperbolic quaternions.
Unlike the ordinary quaternions, the hyperbolic quaternions are not associative. For example, formula_9, while formula_10. In fact, this example shows that the hyperbolic quaternions are not even an alternative algebra.
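These rules are easy to experiment with numerically. The following Python sketch (an illustration added here, not part of any original treatment) encodes hyperbolic quaternions as 4-tuples, multiplies them directly from the basis products, and confirms the anti-commutativity, the non-associative example just given, and the conjugate product q(q*) discussed below.

```python
# Hyperbolic quaternions as 4-tuples (a, b, c, d) = a + b*i + c*j + d*k,
# with i^2 = j^2 = k^2 = +1 and ij = k = -ji, jk = i = -kj, ki = j = -ik.
def hmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (
        a1*a2 + b1*b2 + c1*c2 + d1*d2,    # real part: the squares are +1
        a1*b2 + b1*a2 + c1*d2 - d1*c2,    # i part: jk = i, kj = -i
        a1*c2 + c1*a2 + d1*b2 - b1*d2,    # j part: ki = j, ik = -j
        a1*d2 + d1*a2 + b1*c2 - c1*b2,    # k part: ij = k, ji = -k
    )

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

one, i, j, k = (1, 0, 0, 0), (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)

assert hmul(i, i) == one and hmul(j, j) == one and hmul(k, k) == one
assert hmul(i, j) == k and hmul(j, i) == (0, 0, 0, -1)   # anti-commutativity
assert hmul(hmul(i, j), j) == (0, -1, 0, 0)              # (ij)j = kj = -i
assert hmul(i, hmul(j, j)) == i                          # i(jj) = i, so not associative

# Conjugate product gives the spacetime quadratic form a^2 - b^2 - c^2 - d^2.
p = (2, 1, 3, -1)
a, b, c, d = p
assert hmul(p, conj(p)) == (a*a - b*b - c*c - d*d, 0, 0, 0)
```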
The first three relations show that products of the (non-real) basis elements are anti-commutative. Although this basis set does not form a group, the set
formula_11
forms a loop, that is, a quasigroup with an identity element. One also notes that any subplane of the set "M" of hyperbolic quaternions that contains the real axis forms a plane of split-complex numbers. If
formula_12
is the conjugate of formula_13, then the product
formula_14
is the quadratic form used in spacetime theory. In fact, for events "p" and "q", the bilinear form
formula_15
arises as the negative of the real part of the hyperbolic quaternion product "pq"*, and is used in Minkowski space.
Note that the set of units U = {"q" : "qq"* ≠ 0 } is "not" closed under multiplication. See the references (external link) for details.
Discussion.
The hyperbolic quaternions form a nonassociative ring; the failure of associativity in this algebra curtails the facility of this algebra in transformation theory. Nevertheless,
this algebra put a focus on analytical kinematics by suggesting a mathematical model:
When one selects a unit vector "r" in the hyperbolic quaternions, then "r"2 = +1. The plane formula_16 with hyperbolic quaternion multiplication is a commutative and associative subalgebra isomorphic to the split-complex number plane.
The hyperbolic versor formula_17 transforms Dr by
formula_18
Since the direction "r" in space is arbitrary, this hyperbolic quaternion multiplication can express any Lorentz boost using the parameter "a" called rapidity. However, the hyperbolic quaternion algebra is deficient for representing the full Lorentz group (see biquaternion instead).
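Because "r" squares to +1, elements of the plane formula_16 multiply like split-complex numbers, and the boost formula above can be checked numerically. A minimal Python sketch, using an arbitrarily chosen rapidity and event (illustrative values only):

```python
import math

# Elements of the plane D_r = {t + x r} with r^2 = +1 multiply like split-complex
# numbers: (t1 + x1 r)(t2 + x2 r) = (t1 t2 + x1 x2) + (t1 x2 + x1 t2) r.
def dmul(p, q):
    t1, x1 = p
    t2, x2 = q
    return (t1*t2 + x1*x2, t1*x2 + x1*t2)

def versor(a):
    # exp(a r) = cosh(a) + r sinh(a)
    return (math.cosh(a), math.sinh(a))

a = 0.7               # rapidity (arbitrary example value)
event = (3.0, 1.2)    # t + x r

t, x = event
boosted = dmul(versor(a), event)
expected = (math.cosh(a)*t + math.sinh(a)*x, math.sinh(a)*t + math.cosh(a)*x)
assert all(abs(u - v) < 1e-12 for u, v in zip(boosted, expected))

# The versor acts as a Lorentz boost: it preserves t^2 - x^2.
tb, xb = boosted
assert abs((tb*tb - xb*xb) - (t*t - x*x)) < 1e-12
```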
Writing in 1967 about the dialogue on vector methods in the 1890s, historian Michael J. Crowe commented
"The introduction of another system of vector analysis, even a sort of compromise system such as Macfarlane's, could scarcely be well received by the advocates of the already existing systems and moreover probably acted to broaden the question beyond the comprehension of the as-yet uninitiated reader."
Geometry.
Later, Macfarlane published an article in the "Proceedings of the Royal Society of Edinburgh" in 1900. In it he treats a model for hyperbolic space H3 on the hyperboloid
formula_19
This isotropic model is called the hyperboloid model and consists of all the hyperbolic versors in the ring of hyperbolic quaternions.
Historical review.
The 1890s felt the influence of the posthumous publications of W. K. Clifford and the "continuous groups" of Sophus Lie. An example of a one-parameter group is the hyperbolic versor with the hyperbolic angle parameter. This parameter is part of the polar decomposition of a split-complex number. But it is a startling aspect of finite mathematics that makes the hyperbolic quaternion ring different:
The basis formula_20 of the vector space of hyperbolic quaternions is not closed under multiplication: for example, formula_21. Nevertheless, the set formula_22 is closed under multiplication. It satisfies all the properties of an abstract group except the associativity property; being finite, it is a Latin square or quasigroup, a peripheral mathematical structure. Loss of the associativity property of multiplication as found in quasigroup theory is not consistent with linear algebra since all linear transformations compose in an associative manner. Yet physical scientists were calling in the 1890s for mutation of the squares of formula_23, formula_24, and formula_25 to be formula_26 instead of formula_27:
The Yale University physicist Willard Gibbs had pamphlets with the plus one square in his three-dimensional vector system. Oliver Heaviside in England wrote columns in the "Electrician", a trade paper, advocating the positive square. In 1892 he brought his work together in "Transactions of the Royal Society A" where he says his vector system is
simply the elements of Quaternions without quaternions, with the notation simplified to the uttermost, and with the very inconvenient "minus" sign before scalar product done away with.
So the appearance of Macfarlane's hyperbolic quaternions had some motivation, but the disagreeable non-associativity precipitated a reaction. Cargill Gilston Knott was moved to offer the following:
Theorem (Knott 1892)
If a 4-algebra on basis formula_20 is associative and off-diagonal products are given by Hamilton's rules, then formula_28.
Proof:
formula_29, so formula_30. Cycle the letters formula_23, formula_24, formula_25 to obtain formula_31. "QED".
This theorem needed statement to justify resistance to the call of the physicists and the "Electrician". The quasigroup stimulated a considerable stir in the 1890s: the journal "Nature" was especially conducive to an exhibit of what was known by giving two digests of Knott's work as well as those of several other vector theorists. Michael J. Crowe devotes chapter six of his book "A History of Vector Analysis" to the various published views, and notes the hyperbolic quaternion:
"Macfarlane constructed a new system of vector analysis more in harmony with Gibbs–Heaviside system than with the quaternion system. ...he...defined a full product of two vectors which was comparable to the full quaternion product except that the scalar part was positive, not negative as in the older system."
In 1899 Charles Jasper Joly noted the hyperbolic quaternion and the non-associativity property while ascribing its origin to Oliver Heaviside.
The hyperbolic quaternions, as the "Algebra of Physics", undercut the claim that ordinary quaternions made on physics. As for mathematics, the hyperbolic quaternion is another hypercomplex number, as such structures were called at the time. By the 1890s Richard Dedekind had introduced the ring concept into commutative algebra, and the vector space concept was being abstracted by Giuseppe Peano. In 1899 Alfred North Whitehead promoted Universal algebra, advocating for inclusivity. The concepts of quasigroup and algebra over a field are examples of mathematical structures describing hyperbolic quaternions.
Macfarlane's hyperbolic quaternion paper of 1900.
The "Proceedings of the Royal Society of Edinburgh" published "Hyperbolic Quaternions"
in 1900, a paper in which Macfarlane regains associativity for multiplication by reverting
to complexified quaternions. While there he used some expressions later
made famous by Wolfgang Pauli: where Macfarlane wrote
formula_32
formula_33
formula_34
the Pauli matrices satisfy
formula_35
formula_36
formula_37
while referring to the same complexified quaternions.
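This correspondence is easy to verify with the standard Pauli matrices; the following NumPy check (added here for illustration, not taken from Macfarlane's paper) confirms that each matrix squares to the identity and that the three products hold:

```python
import numpy as np

# Standard Pauli matrices; each squares to the identity, mirroring i^2 = j^2 = k^2 = +1.
s1 = np.array([[0, 1], [1, 0]], dtype=complex)
s2 = np.array([[0, -1j], [1j, 0]], dtype=complex)
s3 = np.array([[1, 0], [0, -1]], dtype=complex)
I = np.eye(2, dtype=complex)
root = 1j   # sqrt(-1)

assert np.allclose(s1 @ s1, I) and np.allclose(s2 @ s2, I) and np.allclose(s3 @ s3, I)
assert np.allclose(s1 @ s2, root * s3)   # sigma_1 sigma_2 = sigma_3 sqrt(-1)
assert np.allclose(s2 @ s3, root * s1)   # sigma_2 sigma_3 = sigma_1 sqrt(-1)
assert np.allclose(s3 @ s1, root * s2)   # sigma_3 sigma_1 = sigma_2 sqrt(-1)
```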
The opening sentence of the paper is "It is well known that quaternions are intimately connected with spherical trigonometry and in fact they reduce the subject to a branch of algebra." This statement may be verified by reference to the contemporary work "Vector Analysis" which works with a reduced quaternion system based on dot product and cross product. In Macfarlane's paper there is an effort to produce "trigonometry on the surface of the equilateral hyperboloids" through the algebra of hyperbolic quaternions, now re-identified in an associative ring of eight real dimensions. The effort is reinforced by a plate of nine figures on page 181. They illustrate the descriptive power of his "space analysis" method. For example, figure 7 is the
common Minkowski diagram used today in special relativity to discuss change of velocity of a frame of reference and relativity of simultaneity.
On page 173 Macfarlane expands on his greater theory of quaternion variables. By way of contrast he notes that Felix Klein appears not to look beyond the theory of Quaternions and spatial rotation.
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "q = a + bi + cj + dk, \\quad a,b,c,d \\in \\mathbb{R} \\!"
},
{
"math_id": 1,
"text": "q = a+bi+cj+dk"
},
{
"math_id": 2,
"text": "a, b, c,"
},
{
"math_id": 3,
"text": "d"
},
{
"math_id": 4,
"text": "\\{1,i,j,k\\}"
},
{
"math_id": 5,
"text": "ij=k=-ji"
},
{
"math_id": 6,
"text": "jk=i=-kj"
},
{
"math_id": 7,
"text": "ki=j=-ik"
},
{
"math_id": 8,
"text": "i^2=j^2=k^2=+1"
},
{
"math_id": 9,
"text": "(ij)j = kj = -i"
},
{
"math_id": 10,
"text": "i(jj) = i"
},
{
"math_id": 11,
"text": "\\{1,i,j,k,-1,-i,-j,-k\\}"
},
{
"math_id": 12,
"text": "q^*=a-bi-cj-dk"
},
{
"math_id": 13,
"text": "q"
},
{
"math_id": 14,
"text": "q(q^*)=a^2-b^2-c^2-d^2"
},
{
"math_id": 15,
"text": "\\eta (p,q) = -p_0q_0 + p_1q_1 + p_2q_2 + p_3q_3 "
},
{
"math_id": 16,
"text": "D_r = \\lbrace t + x r : t, x \\in R \\rbrace "
},
{
"math_id": 17,
"text": "\\exp(a r) = \\cosh(a) + r \\sinh(a) "
},
{
"math_id": 18,
"text": "\\begin{align}\nt + x r && \\mapsto \\quad & \\exp(a r) (t + x r)\\\\\n&&=\\quad& (\\cosh(a) t + x \\sinh(a)) + (\\sinh(a) t + x \\cosh(a)) r .\n\\end{align}"
},
{
"math_id": 19,
"text": "H^3 = \\{ q \\in M: q(q^*)=1 \\} ."
},
{
"math_id": 20,
"text": "\\{1,\\,i,\\,j,\\,k\\}"
},
{
"math_id": 21,
"text": "ji=-\\!k"
},
{
"math_id": 22,
"text": "\\{1,\\,i,\\,j,\\,k,\\,-\\!1,\\,-\\!i,\\,-\\!j,\\,-\\!k\\}"
},
{
"math_id": 23,
"text": "i"
},
{
"math_id": 24,
"text": "j"
},
{
"math_id": 25,
"text": "k"
},
{
"math_id": 26,
"text": "+1"
},
{
"math_id": 27,
"text": "-1"
},
{
"math_id": 28,
"text": "i^2=-\\!1=j^2=k^2"
},
{
"math_id": 29,
"text": "j = ki = (-ji)i = -j(ii)"
},
{
"math_id": 30,
"text": "i^2 = -1"
},
{
"math_id": 31,
"text": "i^2=-1=j^2=k^2"
},
{
"math_id": 32,
"text": "ij=k\\sqrt{-1}"
},
{
"math_id": 33,
"text": "jk=i\\sqrt{-1}"
},
{
"math_id": 34,
"text": "ki=j\\sqrt{-1},"
},
{
"math_id": 35,
"text": "\\sigma_1\\sigma_2=\\sigma_3\\sqrt{-1}"
},
{
"math_id": 36,
"text": "\\sigma_2\\sigma_3=\\sigma_1\\sqrt{-1}"
},
{
"math_id": 37,
"text": "\\sigma_3\\sigma_1=\\sigma_2\\sqrt{-1}"
}
] | https://en.wikipedia.org/wiki?curid=1071730 |
1072006 | Median (geometry) | Line segment joining a triangle's vertex to the midpoint of the opposite side
In geometry, a median of a triangle is a line segment joining a vertex to the midpoint of the opposite side, thus bisecting that side. Every triangle has exactly three medians, one from each vertex, and they all intersect at the triangle's centroid. In the case of isosceles and equilateral triangles, a median bisects any angle at a vertex whose two adjacent sides are equal in length.
The concept of a median extends to tetrahedra.
Relation to center of mass.
Each median of a triangle passes through the triangle's centroid, which is the center of mass of an infinitely thin object of uniform density coinciding with the triangle. Thus, the object would balance at the intersection point of the medians. The centroid is twice as close along any median to the side that the median intersects as it is to the vertex it emanates from.
Equal-area division.
Each median divides the area of the triangle in half, hence the name, and hence a triangular object of uniform density would balance on any median. (Any other line that divides the triangle's area into two equal parts does not pass through the centroid.) The three medians divide the triangle into six smaller triangles of equal area.
Proof of equal-area property.
Consider a triangle "ABC". Let "D" be the midpoint of formula_0, "E" be the midpoint of formula_1, "F" be the midpoint of formula_2, and "O" be the centroid (most commonly denoted "G").
By definition, formula_3. Thus formula_4 and formula_5, where formula_6 represents the area of triangle formula_7 ; these hold because in each case the two triangles have bases of equal length and share a common altitude from the (extended) base, and a triangle's area equals one-half its base times its height.
We have:
formula_8
formula_9
Thus, formula_10 and formula_11
Since formula_12, it follows that formula_13.
Using the same method, one can show that formula_14.
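The equal-area property and the 2:1 centroid ratio can also be verified numerically for any particular triangle; the following Python sketch uses an arbitrary example triangle and the same labels as the proof above:

```python
import math

# Arbitrary example triangle, labelled as in the proof above:
# D, E, F are the midpoints of AB, BC, AC, and O is the centroid.
A, B, C = (0.0, 0.0), (7.0, 1.0), (2.0, 5.0)

def midpoint(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

def area(P, Q, R):
    # Shoelace formula for the area of triangle PQR.
    return abs((Q[0] - P[0]) * (R[1] - P[1]) - (R[0] - P[0]) * (Q[1] - P[1])) / 2

D, E, F = midpoint(A, B), midpoint(B, C), midpoint(A, C)
O = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

# The three medians cut the triangle into six smaller triangles of equal area.
small = [area(A, D, O), area(D, B, O), area(B, E, O),
         area(E, C, O), area(C, F, O), area(F, A, O)]
assert all(math.isclose(s, area(A, B, C) / 6) for s in small)

# The centroid is twice as far from a vertex as from the opposite side's midpoint.
assert math.isclose(math.dist(A, O), 2 * math.dist(O, E))   # median from A to E
```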
Three congruent triangles.
In 2014 Lee Sallows discovered the following theorem:
The medians of any triangle dissect it into six equal area smaller triangles as in the figure above where three adjacent pairs of triangles meet at the midpoints D, E and F. If the two triangles in each such pair are rotated about their common midpoint until they meet so as to share a common side, then the three new triangles formed by the union of each pair are congruent.
Formulas involving the medians' lengths.
The lengths of the medians can be obtained from Apollonius' theorem as:
formula_15
formula_16
formula_17
where formula_18 and formula_19 are the sides of the triangle with respective medians formula_20 and formula_21 from their midpoints.
These formulas imply the relationships:
formula_22
formula_23
formula_24
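As a concrete check, the following Python sketch computes the medians of an arbitrary triangle with sides 5, 6, and 7 from Apollonius' theorem and then recovers the sides from the medians using the inverted formulas above:

```python
import math

def medians(a, b, c):
    # Apollonius' theorem: m_a = sqrt((2b^2 + 2c^2 - a^2) / 4), and cyclically.
    m_a = math.sqrt((2*b*b + 2*c*c - a*a) / 4)
    m_b = math.sqrt((2*a*a + 2*c*c - b*b) / 4)
    m_c = math.sqrt((2*a*a + 2*b*b - c*c) / 4)
    return m_a, m_b, m_c

a, b, c = 5.0, 6.0, 7.0          # arbitrary example triangle
m_a, m_b, m_c = medians(a, b, c)

# Recover each side from the medians: a = (2/3) sqrt(-m_a^2 + 2 m_b^2 + 2 m_c^2), etc.
assert math.isclose(a, (2/3) * math.sqrt(-m_a**2 + 2*m_b**2 + 2*m_c**2))
assert math.isclose(b, (2/3) * math.sqrt(-m_b**2 + 2*m_a**2 + 2*m_c**2))
assert math.isclose(c, (2/3) * math.sqrt(-m_c**2 + 2*m_b**2 + 2*m_a**2))
```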
Other properties.
Let "ABC" be a triangle, let "G" be its centroid, and let "D", "E", and "F" be the midpoints of "BC", "CA", and "AB", respectively. For any point "P" in the plane of "ABC" then
formula_25
The centroid divides each median into parts in the ratio 2:1, with the centroid being twice as close to the midpoint of a side as it is to the opposite vertex.
For any triangle with sides formula_26 and medians formula_27
formula_28
The medians from sides of lengths formula_29 and formula_30 are perpendicular if and only if formula_31
The medians of a right triangle with hypotenuse formula_19 satisfy formula_32
Any triangle's area "T" can be expressed in terms of its medians formula_33, and formula_21 as follows. If their semi-sum formula_34 is denoted by formula_35 then
formula_36
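The following Python sketch (using the same arbitrary 5, 6, 7 example) checks the sum-of-squares identity above and confirms that the median-based area formula agrees with Heron's formula applied to the sides:

```python
import math

def heron(x, y, z):
    s = (x + y + z) / 2
    return math.sqrt(s * (s - x) * (s - y) * (s - z))

a, b, c = 5.0, 6.0, 7.0                       # same arbitrary example triangle
m_a = math.sqrt((2*b*b + 2*c*c - a*a) / 4)    # medians via Apollonius' theorem
m_b = math.sqrt((2*a*a + 2*c*c - b*b) / 4)
m_c = math.sqrt((2*a*a + 2*b*b - c*c) / 4)

# Identity from above: (3/4)(a^2 + b^2 + c^2) = m_a^2 + m_b^2 + m_c^2.
assert math.isclose(0.75 * (a*a + b*b + c*c), m_a**2 + m_b**2 + m_c**2)

# Area from the medians, T = (4/3) sqrt(sigma (sigma - m_a)(sigma - m_b)(sigma - m_c)),
# agrees with Heron's formula applied to the sides.
sigma = (m_a + m_b + m_c) / 2
T_from_medians = (4/3) * math.sqrt(sigma * (sigma - m_a) * (sigma - m_b) * (sigma - m_c))
assert math.isclose(T_from_medians, heron(a, b, c))
```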
Tetrahedron.
A tetrahedron is a three-dimensional object having four triangular faces. A line segment joining a vertex of a tetrahedron with the centroid of the opposite face is called a "median" of the tetrahedron. There are four medians, and they are all concurrent at the "centroid" of the tetrahedron. As in the two-dimensional case, the centroid of the tetrahedron is the center of mass. However, contrary to the two-dimensional case, the centroid divides the medians not in a 2:1 ratio but in a 3:1 ratio (Commandino's theorem).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\overline{AB}"
},
{
"math_id": 1,
"text": "\\overline{BC}"
},
{
"math_id": 2,
"text": "\\overline{AC}"
},
{
"math_id": 3,
"text": "AD=DB, AF=FC, BE=EC "
},
{
"math_id": 4,
"text": "[ADO]=[BDO], [AFO]=[CFO], [BEO]=[CEO],"
},
{
"math_id": 5,
"text": "[ABE]=[ACE] "
},
{
"math_id": 6,
"text": "[ABC]"
},
{
"math_id": 7,
"text": "\\triangle ABC"
},
{
"math_id": 8,
"text": "[ABO]=[ABE]-[BEO] "
},
{
"math_id": 9,
"text": "[ACO]=[ACE]-[CEO] "
},
{
"math_id": 10,
"text": "[ABO]=[ACO] "
},
{
"math_id": 11,
"text": "[ADO]=[DBO], [ADO]=\\frac{1}{2}[ABO]"
},
{
"math_id": 12,
"text": "[AFO]=[FCO], [AFO]= \\frac{1}{2}[ACO]=\\frac{1}{2}[ABO]=[ADO]"
},
{
"math_id": 13,
"text": "[AFO]=[FCO]=[DBO]=[ADO]"
},
{
"math_id": 14,
"text": "[AFO]=[FCO]=[DBO]=[ADO]=[BEO]=[CEO] "
},
{
"math_id": 15,
"text": "m_a = \\sqrt{\\frac{2 b^2 + 2 c^2 - a^2}{4}}"
},
{
"math_id": 16,
"text": "m_b = \\sqrt{\\frac{2 a^2 + 2 c^2 - b^2}{4}}"
},
{
"math_id": 17,
"text": "m_c = \\sqrt{\\frac{2 a^2 + 2 b^2 - c^2}{4}}"
},
{
"math_id": 18,
"text": "a, b,"
},
{
"math_id": 19,
"text": "c"
},
{
"math_id": 20,
"text": "m_a, m_b,"
},
{
"math_id": 21,
"text": "m_c"
},
{
"math_id": 22,
"text": "a = \\frac{2}{3} \\sqrt{-m_a^2 + 2m_b^2 + 2m_c^2} = \\sqrt{2(b^2+c^2)-4m_a^2} = \\sqrt{\\frac{b^2}{2} - c^2 + 2m_b^2} = \\sqrt{\\frac{c^2}{2} - b^2 + 2m_c^2}"
},
{
"math_id": 23,
"text": "b = \\frac{2}{3} \\sqrt{-m_b^2 + 2m_a^2 + 2m_c^2} = \\sqrt{2(a^2+c^2)-4m_b^2} = \\sqrt{\\frac{a^2}{2} - c^2 + 2m_a^2} = \\sqrt{\\frac{c^2}{2} - a^2 + 2m_c^2}"
},
{
"math_id": 24,
"text": "c = \\frac{2}{3} \\sqrt{-m_c^2 + 2m_b^2 + 2m_a^2} = \\sqrt{2(b^2+a^2)-4m_c^2} = \\sqrt{\\frac{b^2}{2} - a^2 + 2m_b^2} = \\sqrt{\\frac{a^2}{2} - b^2 + 2m_a^2}."
},
{
"math_id": 25,
"text": "PA+PB+PC \\leq 2(PD+PE+PF) + 3PG."
},
{
"math_id": 26,
"text": "a, b, c"
},
{
"math_id": 27,
"text": "m_a, m_b, m_c,"
},
{
"math_id": 28,
"text": "\\tfrac{3}{4}(a+b+c) < m_a + m_b + m_c < a+b+c \\quad \\text{ and } \\quad \\tfrac{3}{4}\\left(a^2+b^2+c^2\\right) = m_a^2 + m_b^2 + m_c^2."
},
{
"math_id": 29,
"text": "a"
},
{
"math_id": 30,
"text": "b"
},
{
"math_id": 31,
"text": "a^2 + b^2 = 5c^2."
},
{
"math_id": 32,
"text": "m_a^2 + m_b^2 = 5m_c^2."
},
{
"math_id": 33,
"text": "m_a, m_b"
},
{
"math_id": 34,
"text": "\\left(m_a + m_b + m_c\\right)/2"
},
{
"math_id": 35,
"text": "\\sigma"
},
{
"math_id": 36,
"text": "T = \\frac{4}{3} \\sqrt{\\sigma \\left(\\sigma - m_a\\right)\\left(\\sigma - m_b\\right)\\left(\\sigma - m_c\\right)}."
}
] | https://en.wikipedia.org/wiki?curid=1072006 |
1072144 | Grothendieck universe | Set-theoretic concept
In mathematics, a Grothendieck universe is a set "U" with the following properties: if "x" is an element of "U" and "y" is an element of "x", then "y" is also an element of "U" (that is, "U" is a transitive set); if "x" and "y" are both elements of "U", then the pair formula_0 is an element of "U"; if "x" is an element of "U", then its power set "P"("x") is also an element of "U"; and if formula_1 is a family of elements of "U" and "I" is an element of "U", then the union formula_2 is an element of "U".
A Grothendieck universe is meant to provide a set in which all of mathematics can be performed. (In fact, uncountable Grothendieck universes provide models of set theory with the natural ∈-relation, natural powerset operation etc.). Elements of a Grothendieck universe are sometimes called small sets. The idea of universes is due to Alexander Grothendieck, who used them as a way of avoiding proper classes in algebraic geometry.
The existence of a nontrivial Grothendieck universe goes beyond the usual axioms of Zermelo–Fraenkel set theory; in particular it would imply the existence of strongly inaccessible cardinals.
Tarski–Grothendieck set theory is an axiomatic treatment of set theory, used in some automatic proof systems, in which every set belongs to a Grothendieck universe.
The concept of a Grothendieck universe can also be defined in a topos.
Properties.
As an example, we will prove an easy proposition.
Proposition. If formula_3 and formula_4, then formula_5.
Proof. formula_6 because formula_4. formula_7 because formula_3, so formula_5.
It is similarly easy to prove that any Grothendieck universe "U" contains all singletons of its elements, all products of families of elements of "U" indexed by an element of "U", all disjoint unions and intersections of such families, all functions between any two of its elements, and all subsets of "U" whose cardinality is an element of "U".
In particular, it follows from the last axiom that if "U" is non-empty, it must contain all of its finite subsets and a subset of each finite cardinality. One can also prove immediately from the definitions that the intersection of any class of universes is a universe.
Grothendieck universes and inaccessible cardinals.
There are two simple examples of Grothendieck universes: the empty set, and the set formula_8 of all hereditarily finite sets.
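The second of these examples can be explored concretely: modelling hereditarily finite sets as nested frozensets, the operations a universe must be closed under (pairing, power set, union of a family) never lead outside formula_8. A purely illustrative Python sketch:

```python
from itertools import chain, combinations

# Hereditarily finite sets modelled as nested frozensets; their totality is V_omega.
empty = frozenset()

def pair(x, y):                 # {x, y}
    return frozenset({x, y})

def powerset(x):                # P(x)
    elems = list(x)
    return frozenset(frozenset(s) for s in chain.from_iterable(
        combinations(elems, r) for r in range(len(elems) + 1)))

def union_family(family):       # union of a (finite) family of sets
    return frozenset(chain.from_iterable(family))

def hereditarily_finite(x):
    return isinstance(x, frozenset) and all(hereditarily_finite(y) for y in x)

# A few von Neumann naturals: 0 = {}, 1 = {0}, 2 = {0, 1}.
zero, one = empty, frozenset({empty})
two = frozenset({zero, one})

# Applying the universe operations to hereditarily finite sets stays hereditarily finite.
for s in (pair(one, two), powerset(two), union_family(two)):
    assert hereditarily_finite(s)

# Transitivity: every element of an element is again hereditarily finite.
assert all(hereditarily_finite(y) for y in two)
```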
Other examples are more difficult to construct. Loosely speaking, this is because Grothendieck universes are equivalent to strongly inaccessible cardinals. More formally, the following two axioms are equivalent:
(U) For each set "x", there exists a Grothendieck universe "U" such that "x" ∈ "U".
(C) For each cardinal κ, there is a strongly inaccessible cardinal λ that is strictly larger than κ.
To prove this fact, we introduce the function c("U"). Define:
formula_9
where by |"x"| we mean the cardinality of "x". Then for any universe "U", c("U") is either zero or strongly inaccessible. Assuming it is non-zero, it is a strong limit cardinal because the power set of any element of "U" is an element of "U" and every element of "U" is a subset of "U". To see that it is regular, suppose that "c""λ" is a collection of cardinals indexed by I, where the cardinality of I and of each "cλ" is less than c("U"). Then, by the definition of c("U"), I and each "c""λ" can be replaced by an element of "U". The union of elements of "U" indexed by an element of "U" is an element of "U", so the sum of the "c""λ" has the cardinality of an element of "U", hence is less than c("U"). By invoking the axiom of foundation, that no set is contained in itself, it can be shown that c("U") equals |"U"|; when the axiom of foundation is not assumed, there are counterexamples (we may take for example U to be the set of all finite sets of finite sets etc. of the sets xα where the index α is any real number, and "x""α" = {"x""α"} for each "α". Then "U" has the cardinality of the continuum, but all of its members have finite cardinality and so formula_10 ; see Bourbaki's article for more details).
Let κ be a strongly inaccessible cardinal. Say that a set "S" is strictly of type κ if for any sequence "s""n" ∈ ... ∈ "s"0 ∈ "S", |"s""n"| < "κ". ("S" itself corresponds to the empty sequence.) Then the set "u"("κ") of all sets strictly of type κ is a Grothendieck universe of cardinality κ. The proof of this fact is long, so for details, we again refer to Bourbaki's article, listed in the references.
To show that the large cardinal axiom (C) implies the universe axiom (U), choose a set "x". Let "x"0 = "x", and for each "n", let formula_11 be the union of the elements of "xn". Let "y" = formula_12. By (C), there is a strongly inaccessible cardinal κ such that |y| < "κ". Let "u"("κ") be the universe of the previous paragraph. "x" is strictly of type κ, so "x" ∈ "u"("κ"). To show that the universe axiom (U) implies the large cardinal axiom (C), choose a cardinal κ. κ is a set, so it is an element of a Grothendieck universe "U". The cardinality of "U" is strongly inaccessible and strictly larger than that of κ.
In fact, any Grothendieck universe is of the form "u"("κ") for some κ. This gives another form of the equivalence between Grothendieck universes and strongly inaccessible cardinals:
For any Grothendieck universe "U", |"U"| is either zero, formula_13, or a strongly inaccessible cardinal. And if κ is zero, formula_13, or a strongly inaccessible cardinal, then there is a Grothendieck universe "u"("κ"). Furthermore, "u"(|"U"|) = "U", and |"u"("κ")| = "κ".
Since the existence of strongly inaccessible cardinals cannot be proved from the axioms of Zermelo–Fraenkel set theory (ZFC), the existence of universes other than the empty set and formula_8 cannot be proved from ZFC either. However, strongly inaccessible cardinals are on the lower end of the list of large cardinals; thus, most set theories that use large cardinals (such as "ZFC plus there is a measurable cardinal", "ZFC plus there are infinitely many Woodin cardinals") will prove that Grothendieck universes exist.
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\{x,y\\}"
},
{
"math_id": 1,
"text": "\\{x_\\alpha\\}_{\\alpha\\in I}"
},
{
"math_id": 2,
"text": "\\bigcup_{\\alpha\\in I} x_\\alpha"
},
{
"math_id": 3,
"text": "x \\in U"
},
{
"math_id": 4,
"text": "y \\subseteq x"
},
{
"math_id": 5,
"text": "y \\in U"
},
{
"math_id": 6,
"text": "y \\in P(x)"
},
{
"math_id": 7,
"text": "P(x) \\in U"
},
{
"math_id": 8,
"text": "V_\\omega"
},
{
"math_id": 9,
"text": "\\mathbf{c}(U) = \\sup_{x \\in U} |x|"
},
{
"math_id": 10,
"text": "\\mathbf{c}(U) = \\aleph_0 "
},
{
"math_id": 11,
"text": "x_{n+1} = \\bigcup x_n"
},
{
"math_id": 12,
"text": "\\bigcup_n x_n"
},
{
"math_id": 13,
"text": "\\aleph_0"
}
] | https://en.wikipedia.org/wiki?curid=1072144 |
1072223 | Compliance (physiology) | Ability of a biological organ to distend
Compliance is the ability of a hollow organ (vessel) to distend and increase volume with increasing transmural pressure or the tendency of a hollow organ to resist recoil toward its original dimensions on application of a distending or compressing force. It is the reciprocal of "elastance", hence elastance is a measure of the tendency of a hollow organ to recoil toward its original dimensions upon removal of a distending or compressing force.
Blood vessels.
The terms elastance and compliance are of particular significance in cardiovascular physiology and respiratory physiology. In compliance, an increase in volume occurs in a vessel when the pressure in that vessel is increased. The tendency of the arteries and veins to stretch in response to pressure has a large effect on perfusion and blood pressure. This physically means that blood vessels with a higher compliance deform easier than lower compliance blood vessels under the same pressure and volume conditions.
Venous compliance is approximately 30 times larger than arterial compliance.
Compliance is calculated using the following equation, where ΔV is the change in volume (mL), and ΔP is the change in pressure (mmHg):
formula_0
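As a simple numerical illustration (the values below are arbitrary examples, not reference physiological data), compliance follows directly from a measured volume change and the accompanying pressure change:

```python
def compliance(delta_v_ml, delta_p_mmhg):
    """Compliance C = deltaV / deltaP, in mL/mmHg."""
    return delta_v_ml / delta_p_mmhg

# Hypothetical example: a vessel segment accepts 2 mL of additional blood
# when transmural pressure rises by 10 mmHg.
c_example = compliance(delta_v_ml=2.0, delta_p_mmhg=10.0)
print(f"C = {c_example:.2f} mL/mmHg")       # C = 0.20 mL/mmHg

# Per the text above, venous compliance is roughly 30 times arterial compliance,
# so a hypothetical artery with C = 0.05 mL/mmHg would pair with a vein near 1.5 mL/mmHg.
arterial = 0.05
venous = 30 * arterial
print(f"venous ~ {venous:.2f} mL/mmHg")
```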
Physiologic compliance is generally in agreement with the above and adds "dP/dt" as a common academic physiologic measurement of both pulmonary and cardiac tissues. Adaptation of equations initially applied to rubber and latex allows modeling of the dynamics of pulmonary and cardiac tissue compliance.
Veins have a much higher compliance than arteries (largely due to their thinner walls). Veins that are abnormally compliant can be associated with edema. Pressure stockings are sometimes used to externally reduce compliance, and thus keep blood from pooling in the legs.
Vasodilation and vasoconstriction are complex phenomena; they are functions not merely of the fluid mechanics of pressure and tissue elasticity but also of active homeostatic regulation with hormones and cell signaling, in which the body produces endogenous vasodilators and vasoconstrictors to modify its vessels' compliance. For example, the muscle tone of the smooth muscle tissue of the tunica media can be adjusted by the renin–angiotensin system. In patients whose endogenous homeostatic regulation is not working well, dozens of pharmaceutical drugs that are also vasoactive can be added. The response of vessels to such vasoactive substances is called vasoactivity (or sometimes vasoreactivity). Vasoactivity can vary between persons because of genetic and epigenetic differences, and it can be impaired by pathosis and by age. This makes the topic of haemodynamic response (including vascular compliance and vascular resistance) a matter of medical and pharmacologic complexity beyond mere hydraulic considerations (which are complex enough by themselves).
The relationship between vascular compliance, pressure, and flow rate is "Q" = "C"(d"P"/d"t"), where "Q" is the flow rate (cm3/sec).
Arterial compliance.
The classic definition by MP Spencer and AB Denison of compliance (C) is the change in arterial blood volume (ΔV) due to a given change in arterial blood pressure (ΔP). They wrote this in the "Handbook of Physiology" in 1963 in work entitled "Pulsatile Flow in the Vascular System". So, C = ΔV/ΔP.
Arterial compliance is an index of the elasticity of large arteries such as the thoracic aorta. Arterial compliance is an important cardiovascular risk factor. Compliance diminishes with age and menopause. Arterial compliance is measured by ultrasound as a pressure (carotid artery) and volume (outflow into aorta) relationship.
Compliance, in simple terms, is the degree to which a container experiences pressure or force without disruption. It is used as an indication of arterial stiffness. An increase in age and in systolic blood pressure (SBP) is accompanied by a decrease in arterial compliance.
Endothelial dysfunction results in reduced compliance (increased arterial stiffness), especially in the smaller arteries. This is characteristic of patients with hypertension. However, it may be seen in normotensive patients (with normal blood pressure) before the appearance of clinical hypertension. Reduced arterial compliance is also seen in patients with diabetes and also in smokers. It is actually a part of a vicious cycle that further elevates blood pressure, aggravates atherosclerosis (hardening of the arteries), and leads to increased cardiovascular risk. Arterial compliance can be measured by several techniques. Most of them are invasive and are not clinically appropriate. Pulse contour analysis is a non-invasive method that allows easy measurement of arterial elasticity to identify patients at risk for cardiovascular events.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "C = \\frac{ \\Delta V}{ \\Delta P} "
}
] | https://en.wikipedia.org/wiki?curid=1072223 |