1104945
Hill cipher
Substitution cipher based on linear algebra In classical cryptography, the Hill cipher is a polygraphic substitution cipher based on linear algebra. Invented by Lester S. Hill in 1929, it was the first polygraphic cipher in which it was practical (though barely) to operate on more than three symbols at once. The following discussion assumes an elementary knowledge of matrices. Encryption. Each letter is represented by a number modulo 26. Though this is not an essential feature of the cipher, this simple scheme is often used: A = 0, B = 1, ..., Z = 25. To encrypt a message, each block of "n" letters (considered as an "n"-component vector) is multiplied by an invertible "n" × "n" matrix, modulo 26. To decrypt the message, each block is multiplied by the inverse of the matrix used for encryption. The matrix used for encryption is the cipher key, and it should be chosen randomly from the set of invertible "n" × "n" matrices (modulo 26). The cipher can, of course, be adapted to an alphabet with any number of letters; all arithmetic just needs to be done modulo the number of letters instead of modulo 26. Consider the message 'ACT', and the key below (or GYB/NQK/URP in letters): formula_0 Since 'A' is 0, 'C' is 2 and 'T' is 19, the message is the vector: formula_1 Thus the enciphered vector is given by: formula_2 which corresponds to a ciphertext of 'POH'. Now, suppose that our message is instead 'CAT', or: formula_3 This time, the enciphered vector is given by: formula_4 which corresponds to a ciphertext of 'FIN'. Every letter has changed. The Hill cipher has achieved Shannon's diffusion, and an "n"-dimensional Hill cipher can diffuse fully across "n" symbols at once. Decryption. In order to decrypt, we turn the ciphertext back into a vector, then simply multiply by the inverse matrix of the key matrix (IFK/VIV/VMI in letters). We find that, modulo 26, the inverse of the matrix used in the previous example is: formula_5 Taking the previous example ciphertext of 'POH', we get: formula_6 which gets us back to 'ACT', as expected. Two complications exist in picking the encrypting matrix: the matrix must have an inverse (its determinant must be nonzero), and the determinant must not have any common factors with the modular base. Thus, if we work modulo 26 as above, the determinant must be nonzero, and must not be divisible by 2 or 13. If the determinant is 0, or has common factors with the modular base, then the matrix cannot be used in the Hill cipher, and another matrix must be chosen (otherwise it will not be possible to decrypt). Fortunately, matrices which satisfy the conditions to be used in the Hill cipher are fairly common. For our example key matrix: formula_7 So, modulo 26, the determinant is 25. Since formula_8 and formula_9, 25 has no common factors with 26, and this matrix can be used for the Hill cipher. The risk of the determinant having common factors with the modulus can be eliminated by making the modulus prime. Consequently, a useful variant of the Hill cipher adds 3 extra symbols (such as a space, a period and a question mark) to increase the modulus to 29. Example. Let formula_10 be the key and suppose the plaintext message is 'HELP'. Then this plaintext is represented by two pairs formula_11 Then we compute formula_12 and formula_13 and continue encryption as follows: formula_14 The matrix "K" is invertible, hence formula_15 exists such that formula_16. The inverse of "K" can be computed by using the formula formula_17 This formula still holds after a modular reduction if a modular multiplicative inverse is used to compute formula_18. 
Hence in this case, we compute formula_19 formula_20 Then we compute formula_21 and formula_22 Therefore, formula_23. Security. The basic Hill cipher is vulnerable to a known-plaintext attack because it is completely linear. An opponent who intercepts formula_24 plaintext/ciphertext character pairs can set up a linear system which can (usually) be easily solved; if it happens that this system is indeterminate, it is only necessary to add a few more plaintext/ciphertext pairs. Calculating this solution by standard linear algebra algorithms then takes very little time. While matrix multiplication alone does not result in a secure cipher it is still a useful step when combined with other non-linear operations, because matrix multiplication can provide diffusion. For example, an appropriately chosen matrix can guarantee that small differences before the matrix multiplication will result in large differences after the matrix multiplication. Indeed, some modern ciphers use a matrix multiplication step to provide diffusion. For example, the MixColumns step in AES is a matrix multiplication. The function "g" in Twofish is a combination of non-linear S-boxes with a carefully chosen matrix multiplication (MDS). Key space size. The key space is the set of all possible keys. The key space size is the number of possible keys. The effective key size, in number of bits, is the binary logarithm of the key space size. There are formula_25 matrices of dimension "n" × "n". Thus formula_26 or about formula_27 is an upper bound on the key size of the Hill cipher using "n" × "n" matrices. This is only an upper bound because not every matrix is invertible and thus usable as a key. The number of invertible matrices can be computed via the Chinese Remainder Theorem. I.e., a matrix is invertible modulo 26 if and only if it is invertible both modulo 2 and modulo 13. The number of invertible "n" × "n" matrices modulo 2 is equal to the order of the general linear group GL(n,Z2). It is formula_28 Equally, the number of invertible matrices modulo 13 (i.e. the order of GL(n,Z13)) is formula_29 The number of invertible matrices modulo 26 is the product of those two numbers. Hence it is formula_30 Additionally it seems to be prudent to avoid too many zeroes in the key matrix, since they reduce diffusion. The net effect is that the effective keyspace of a basic Hill cipher is about formula_31. For a 5 × 5 Hill cipher, that is about 114 bits. Of course, key search is not the most efficient known attack. Mechanical implementation. When operating on 2 symbols at once, a Hill cipher offers no particular advantage over Playfair or the bifid cipher, and in fact is weaker than either, and slightly more laborious to operate by pencil-and-paper. As the dimension increases, the cipher rapidly becomes infeasible for a human to operate by hand. A Hill cipher of dimension 6 was implemented mechanically. Hill and a partner were awarded a patent (U.S. patent 1,845,947) for this device, which performed a 6 × 6 matrix multiplication modulo 26 using a system of gears and chains. Unfortunately the gearing arrangements (and thus the key) were fixed for any given machine, so triple encryption was recommended for security: a secret nonlinear step, followed by the wide diffusive step from the machine, followed by a third secret nonlinear step. (The much later Even–Mansour cipher also uses an unkeyed diffusive middle step). 
Such a combination was actually very powerful for 1929, and indicates that Hill apparently understood the concepts of a meet-in-the-middle attack as well as confusion and diffusion. Unfortunately, his machine did not sell. See also. Other practical "pencil-and-paper" polygraphic ciphers include:
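As an illustration of the procedure described above (this sketch is not part of the original article), the following Python snippet encrypts a message block by block by multiplying each block with the key matrix modulo 26, using the standard A = 0 ... Z = 25 encoding and the example key GYB/NQK/URP. The function name is illustrative; decryption works the same way with the inverse key matrix, which, as noted above, exists only when the key's determinant is nonzero and coprime to 26.

```python
# Hedged sketch of Hill-cipher encryption (not from the article): multiply each
# n-letter block, viewed as a vector, by the key matrix and reduce modulo 26.
# Assumes the message length is a multiple of the block size n.

def hill_encrypt(key, text, modulus=26):
    n = len(key)
    nums = [ord(ch) - ord('A') for ch in text.upper()]   # A=0, B=1, ..., Z=25
    out = []
    for i in range(0, len(nums), n):
        block = nums[i:i + n]
        # matrix-vector product modulo 26
        out.extend(sum(key[r][c] * block[c] for c in range(n)) % modulus
                   for r in range(n))
    return ''.join(chr(v + ord('A')) for v in out)

key = [[6, 24, 1], [13, 16, 10], [20, 17, 15]]   # GYB / NQK / URP, as in the example
print(hill_encrypt(key, "ACT"))   # POH
print(hill_encrypt(key, "CAT"))   # FIN
```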
[ { "math_id": 0, "text": "\\begin{pmatrix} 6 & 24 & 1 \\\\ 13 & 16 & 10 \\\\ 20 & 17 & 15 \\end{pmatrix}" }, { "math_id": 1, "text": "\\begin{pmatrix} 0 \\\\ 2 \\\\ 19 \\end{pmatrix}" }, { "math_id": 2, "text": "\\begin{pmatrix} 6 & 24 & 1 \\\\ 13 & 16 & 10 \\\\ 20 & 17 & 15 \\end{pmatrix} \\begin{pmatrix} 0 \\\\ 2 \\\\ 19 \\end{pmatrix} = \\begin{pmatrix} 67 \\\\ 222 \\\\ 319 \\end{pmatrix} \\equiv \\begin{pmatrix} 15 \\\\ 14 \\\\ 7 \\end{pmatrix} \\pmod{26}" }, { "math_id": 3, "text": "\\begin{pmatrix} 2 \\\\ 0 \\\\ 19 \\end{pmatrix}" }, { "math_id": 4, "text": "\\begin{pmatrix} 6 & 24 & 1 \\\\ 13 & 16 & 10 \\\\ 20 & 17 & 15 \\end{pmatrix} \\begin{pmatrix} 2 \\\\ 0 \\\\ 19 \\end{pmatrix} = \\begin{pmatrix} 31 \\\\ 216 \\\\ 325 \\end{pmatrix} \\equiv \\begin{pmatrix} 5 \\\\ 8 \\\\ 13 \\end{pmatrix} \\pmod{26}" }, { "math_id": 5, "text": "\\begin{pmatrix} 6 & 24 & 1 \\\\ 13 & 16 & 10 \\\\ 20 & 17 & 15 \\end{pmatrix}^{-1} \\pmod{26}\\equiv \\begin{pmatrix} 8 & 5 & 10 \\\\ 21 & 8 & 21 \\\\ 21 & 12 & 8 \\end{pmatrix} " }, { "math_id": 6, "text": "\\begin{pmatrix} 8 & 5 & 10 \\\\ 21 & 8 & 21 \\\\ 21 & 12 & 8 \\end{pmatrix} \\begin{pmatrix} 15 \\\\ 14 \\\\ 7 \\end{pmatrix} = \\begin{pmatrix} 260 \\\\ 574 \\\\ 539 \\end{pmatrix} \\equiv \\begin{pmatrix} 0 \\\\ 2 \\\\ 19 \\end{pmatrix} \\pmod{26}" }, { "math_id": 7, "text": "\\begin{vmatrix} 6 & 24& 1 \\\\ 13 & 16 & 10 \\\\ 20 & 17 & 15 \\end{vmatrix} = 6(16\\cdot15-10\\cdot17)-24(13\\cdot15-10\\cdot20)+1(13\\cdot17-16\\cdot20) = 441 \\equiv 25 \\pmod{26}" }, { "math_id": 8, "text": "25=5^2" }, { "math_id": 9, "text": "26=2 \\times 13" }, { "math_id": 10, "text": "K= \\begin{pmatrix} 3 & 3 \\\\ 2 & 5 \\end{pmatrix}" }, { "math_id": 11, "text": "HELP \\to \\begin{pmatrix} H \\\\ E \\end{pmatrix} , \\begin{pmatrix} L \\\\ P \\end{pmatrix} \\to \\begin{pmatrix} 7 \\\\ 4 \\end{pmatrix} , \\begin{pmatrix} 11 \\\\ 15 \\end{pmatrix}" }, { "math_id": 12, "text": "\\begin{pmatrix} 3 & 3 \\\\ 2 & 5 \\end{pmatrix} \\begin{pmatrix} 7 \\\\ 4 \\end{pmatrix} \\equiv \\begin{pmatrix} 7 \\\\ 8 \\end{pmatrix} \\pmod{26}," }, { "math_id": 13, "text": "\\begin{pmatrix} 3 & 3 \\\\ 2 & 5 \\end{pmatrix} \\begin{pmatrix} 11 \\\\ 15 \\end{pmatrix} \\equiv \\begin{pmatrix} 0 \\\\ 19 \\end{pmatrix}\\pmod{26}" }, { "math_id": 14, "text": "\\begin{pmatrix} 7 \\\\ 8 \\end{pmatrix}, \\begin{pmatrix} 0 \\\\ 19 \\end{pmatrix} \\to \\begin{pmatrix} H \\\\ I \\end{pmatrix}, \\begin{pmatrix} A \\\\ T \\end{pmatrix}" }, { "math_id": 15, "text": "K^{-1}" }, { "math_id": 16, "text": "KK^{-1}=K^{-1}K=I_2" }, { "math_id": 17, "text": "\\begin{pmatrix} a & b \\\\ c & d \\end{pmatrix}^{-1}=(ad-bc)^{-1}\\begin{pmatrix} d & -b \\\\ -c & a \\end{pmatrix}" }, { "math_id": 18, "text": "(ad-bc)^{-1}" }, { "math_id": 19, "text": "K^{-1} \\equiv 9^{-1} \\begin{pmatrix} 5 & 23 \\\\ 24 & 3 \\end{pmatrix} \\equiv 3 \\begin{pmatrix} 5 & 23 \\\\ 24 & 3 \\end{pmatrix} \\equiv \\begin{pmatrix} 15 & 17 \\\\ 20 & 9 \\end{pmatrix}\\pmod{26}" }, { "math_id": 20, "text": "HIAT \\to \\begin{pmatrix} H \\\\ I \\end{pmatrix}, \\begin{pmatrix} A \\\\ T \\end{pmatrix} \\to \\begin{pmatrix} 7 \\\\ 8 \\end{pmatrix}, \\begin{pmatrix} 0 \\\\ 19 \\end{pmatrix}" }, { "math_id": 21, "text": "\\begin{pmatrix} 15 & 17 \\\\ 20 & 9 \\end{pmatrix}\\begin{pmatrix} 7 \\\\ 8 \\end{pmatrix} = \\begin{pmatrix} 241 \\\\ 212 \\end{pmatrix} \\equiv \\begin{pmatrix} 7 \\\\ 4 \\end{pmatrix}\\pmod{26}," }, { "math_id": 22, "text": "\\begin{pmatrix} 15 & 17 \\\\ 20 & 9 \\end{pmatrix}\\begin{pmatrix} 0 \\\\ 19 \\end{pmatrix} = 
\\begin{pmatrix} 323 \\\\ 171 \\end{pmatrix} \\equiv \\begin{pmatrix} 11 \\\\ 15 \\end{pmatrix}\\pmod{26}" }, { "math_id": 23, "text": "\\begin{pmatrix} 7 \\\\ 4 \\end{pmatrix}, \\begin{pmatrix} 11 \\\\ 15 \\end{pmatrix} \\to \\begin{pmatrix} H \\\\ E \\end{pmatrix}, \\begin{pmatrix} L \\\\ P \\end{pmatrix} \\to HELP" }, { "math_id": 24, "text": "n^2" }, { "math_id": 25, "text": "26^{n^2}" }, { "math_id": 26, "text": "\\log_2(26^{n^2})" }, { "math_id": 27, "text": "4.7n^2" }, { "math_id": 28, "text": "2^{n^2}(1-1/2)(1-1/2^2)\\cdots(1-1/2^n)." }, { "math_id": 29, "text": "13^{n^2}(1-1/13)(1-1/13^2)\\cdots(1-1/13^n)." }, { "math_id": 30, "text": "26^{n^2}(1-1/2)(1-1/2^2)\\cdots(1-1/2^n)(1-1/13)(1-1/13^2)\\cdots(1-1/13^n)." }, { "math_id": 31, "text": "4.64n^2 - 1.7" } ]
https://en.wikipedia.org/wiki?curid=1104945
1104948
Grigory Margulis
Russian mathematician Grigory Aleksandrovich Margulis (first name often given as Gregory, Grigori or Gregori; born February 24, 1946) is a Russian-American mathematician known for his work on lattices in Lie groups, and the introduction of methods from ergodic theory into diophantine approximation. He was awarded a Fields Medal in 1978, a Wolf Prize in Mathematics in 2005, and an Abel Prize in 2020, becoming the fifth mathematician to receive the three prizes. In 1991, he joined the faculty of Yale University, where he is currently the Erastus L. De Forest Professor of Mathematics. Biography. Margulis was born to a Russian family of Lithuanian Jewish descent in Moscow, Soviet Union. At age 16 in 1962 he won the silver medal at the International Mathematical Olympiad. He received his PhD in 1970 from Moscow State University, starting research in ergodic theory under the supervision of Yakov Sinai. Early work with David Kazhdan produced the Kazhdan–Margulis theorem, a basic result on discrete groups. His superrigidity theorem from 1975 clarified an area of classical conjectures about the characterisation of arithmetic groups amongst lattices in Lie groups. He was awarded the Fields Medal in 1978, but was not permitted to travel to Helsinki to accept it in person, allegedly due to antisemitism against Jewish mathematicians in the Soviet Union. His position improved, and in 1979 he visited Bonn, and was later able to travel freely, though he still worked at the Institute of Problems of Information Transmission, a research institute rather than a university. In 1991, Margulis accepted a professorial position at Yale University. Margulis was elected a member of the U.S. National Academy of Sciences in 2001. In 2012 he became a fellow of the American Mathematical Society. In 2005, Margulis received the Wolf Prize for his contributions to the theory of lattices and applications to ergodic theory, representation theory, number theory, combinatorics, and measure theory. In 2020, Margulis received the Abel Prize jointly with Hillel Furstenberg "For pioneering the use of methods from probability and dynamics in group theory, number theory and combinatorics." Mathematical contributions. Margulis's early work dealt with Kazhdan's property (T) and the questions of rigidity and arithmeticity of lattices in semisimple algebraic groups of higher rank over a local field. It had been known since the 1950s (Borel, Harish-Chandra) that a certain simple-minded way of constructing subgroups of semisimple Lie groups produces examples of lattices, called "arithmetic lattices". It is analogous to considering the subgroup "SL"("n",Z) of the real special linear group "SL"("n",R) that consists of matrices with "integer" entries. Margulis proved that under suitable assumptions on "G" (no compact factors and split rank greater than or equal to two), "any" (irreducible) lattice "Γ" in it is arithmetic, i.e. can be obtained in this way. Thus "Γ" is commensurable with the subgroup "G"(Z) of "G", i.e. they agree on subgroups of finite index in both. Unlike general lattices, which are defined by their properties, arithmetic lattices are defined by a construction. Therefore, these results of Margulis pave the way for a classification of lattices. Arithmeticity turned out to be closely related to another remarkable property of lattices discovered by Margulis. "Superrigidity" for a lattice "Γ" in "G" roughly means that any homomorphism of "Γ" into the group of real invertible "n" × "n" matrices extends to the whole "G". 
The name derives from the following variant: If "G" and "G' " are semisimple algebraic groups over a local field without compact factors and whose split rank is at least two and "Γ" and "Γ"formula_0 are irreducible lattices in them, then any homomorphism "f": "Γ" → "Γ"formula_0 between the lattices agrees on a finite index subgroup of "Γ" with a homomorphism between the algebraic groups themselves. (The case when "f" is an isomorphism is known as the strong rigidity.) While certain rigidity phenomena had already been known, the approach of Margulis was at the same time novel, powerful, and very elegant. Margulis solved the Banach–Ruziewicz problem that asks whether the Lebesgue measure is the only normalized rotationally invariant finitely additive measure on the "n"-dimensional sphere. The affirmative solution for "n" ≥ 4, which was also independently and almost simultaneously obtained by Dennis Sullivan, follows from a construction of a certain dense subgroup of the orthogonal group that has property (T). Margulis gave the first construction of expander graphs, which was later generalized in the theory of Ramanujan graphs. In 1986, Margulis gave a complete resolution of the Oppenheim conjecture on quadratic forms and diophantine approximation. This was a question that had been open for half a century, on which considerable progress had been made by the Hardy–Littlewood circle method; but to reduce the number of variables to the point of getting the best-possible results, the more structural methods from group theory proved decisive. He has formulated a further program of research in the same direction, that includes the Littlewood conjecture. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "'" } ]
https://en.wikipedia.org/wiki?curid=1104948
11051917
Günter Nimtz
German physicist Günter Nimtz (born 22 September 1936) is a German physicist, working at the 2nd Physics Institute at the University of Cologne in Germany. He has investigated narrow-gap semiconductors and liquid crystals. He claims to have shown that particles may travel faster than the speed of light when undergoing quantum tunneling. Academic career. Günter Nimtz studied Electrical Engineering in Mannheim and Physics at the University of Heidelberg. He graduated from the University of Vienna and became a professor of physics at the University of Cologne in 1983. During 1977 he was a research associate, teaching and doing research at McGill University in Montreal, Canada. He achieved emeritus status in 2001. During 2004 he was a visiting professor at the University of Shanghai and at the Beijing University of Posts and Telecommunications. From 2001 to 2008 he was teaching and doing fundamental research at the University of Koblenz-Landau. Industrial research and development. In 1993 Günter Nimtz and Achim Enders invented a novel absorber for electromagnetic anechoic chambers. It is based on a 10-nanometer-thick metal film placed on an incombustible pyramidal carrier. At the Merck Company in Darmstadt, Nimtz designed an apparatus for the production of ceramic aerosols. Experiments related to superluminal quantum tunneling. Nimtz and his coauthors have been investigating superluminal quantum tunneling since 1992. Their experiment involved microwaves either being sent across two space-separated prisms or through frequency-filtered waveguides. In the latter case either an additional undersized waveguide or a reflective grating structure was used. In 1994 Nimtz and Horst Aichmann carried out a tunneling experiment at the laboratories of Hewlett-Packard, after which Nimtz stated that the frequency modulated (FM) carrier wave transported a signal 4.7 times faster than light due to the effect of quantum tunneling. This experiment was later successfully reproduced by Peter Elsen and Simon Tebeck and presented at "Jugend forscht", the German pupil science competition, in 2019. They won the first prize of Rheinland-Pfalz and the Heraeus Prize of Germany. Alfons Stahlhofen and Nimtz described an experiment which sent a beam of microwaves towards a pair of prisms. The angle provided for total internal reflection, setting up an evanescent wave. Because the second prism was close to the first prism, some light leaked across that gap. The transmitted and reflected waves arrived at detectors at the same time, despite the transmitted light having also traversed the distance of the gap. This is the basis for the assertion of faster-than-c transmission of information. Nimtz and coworkers asserted that the measured tunneling time is spent at the barrier front, whereas inside the barrier zero time is spent. This result was observed in several tunneling barriers and in various fields. Zero time tunneling was already calculated by several theoreticians, while other mathematical results point to a completely subluminal process when the standard relativistic causality notion is used together with relativistic wave equations for the wavefunction. Scientific opponents and their interpretations. Chris Lee has stated that there is no new physics involved here, and that the apparent faster-than-c transmission can be explained by carefully considering how the time of arrival is measured (whether the group velocity or some other measure). Recent papers by Herbert Winful point out errors in Nimtz' interpretation. 
These articles propose that Nimtz has provided a rather trivial experimental confirmation for General Relativity. Winful says that there is nothing specifically quantum-mechanical about Nimtz's experiment, that in fact the results agree with the predictions of classical electromagnetism (Maxwell's equations), and that in one of his papers on tunneling through undersized waveguides Nimtz himself had written "Therefore microwave tunneling, i.e. the propagation of guided evanescent modes, can be described to an extremely high degree of accuracy by a theory based on Maxwell's equations and on phase time approach." (Elsewhere Nimtz has argued that since evanescent modes have an imaginary wave number, they represent a "mathematical analogy" to quantum tunnelling, and that "evanescent modes are not fully describable by the Maxwell equations and quantum mechanics have to be taken into consideration.") Since Maxwell's laws respect special relativity, Winful argues that an experiment which is describable using these laws cannot involve a relativistic causality violation (which would be implied by transmitting information faster than light). He also argues that "Nothing was observed to be traveling faster than light. The measured delay is the lifetime of stored energy leaking out of both sides of the barrier. The equality of transmission and reflection delays is what one expects for energy leaking out of both sides of a symmetric barrier." Aephraim M. Steinberg of the University of Toronto has also stated that Nimtz has not demonstrated causality violation (which would be implied by transmitting information faster than light). Steinberg also uses a classical argument. In a "New Scientist" article, he uses the analogy of a train traveling from Chicago to New York, but dropping off train cars at each station along the way, so that the center of the train moves forward at each stop; in this way, the speed of the center of the train exceeds the speed of any of the individual cars. Herbert Winful argues that the train analogy is a variant of the "reshaping argument" for superluminal tunneling velocities, but he goes on to say that this argument is not actually supported by experiment or simulations, which actually show that the transmitted pulse has the same length and shape as the incident pulse. Instead, Winful argues that the group delay in tunneling is not actually the transit time for the pulse (whose spatial length must be greater than the barrier length in order for its spectrum to be narrow enough to allow tunneling), but is instead the lifetime of the energy stored in a standing wave which forms inside the barrier. Since the stored energy in the barrier is less than the energy stored in a barrier-free region of the same length due to destructive interference, the group delay for the energy to escape the barrier region is shorter than it would be in free space, which according to Winful is the explanation for apparently superluminal tunneling. This becomes obviously wrong in a standing waveguide set-up at frequencies below the cut-off frequency. Apart from these strange interpretations, further authors have published papers arguing that quantum tunneling does not violate the relativistic notion of causality, and that Nimtz's experiments (which are argued to be purely classical in nature) don't violate it either. Some oppositional theoretical interpretations have been published. Nimtz' interpretation. 
Nimtz and others argue that an analysis of signal shape and frequency spectrum has evidenced that a superluminal signal velocity has been measured and that tunneling is the one and only observed violation of special relativity. However, in contradiction to their opponents, they explicitly point out that this does not lead to a violation of primitive causality: Due to the temporal extent of any kind of signal it is impossible to transport information into the past. After all, they claim that tunneling can generally be explained with virtual photons, the strange particles introduced by Richard Feynman and shown for evanescent modes by Ali and by Carniglia and Mandel. In that sense it is common to calculate the imaginary tunneling wave number with the Helmholtz and the Schrödinger equations, as Günter Nimtz and Herbert Winful did. However, Nimtz highlights that the final tunneling time was always obtained by the Wigner phase time approach. Günter Nimtz outlines that such evanescent modes only exist in the classically forbidden region of energy. As a consequence, they cannot be explained by classical physics or by the postulates of special relativity: A negative energy of evanescent modes follows from the imaginary wave number, i.e. from the imaginary refractive index according to the Maxwell relation formula_0 for electromagnetic and elastic fields. Nimtz explicitly points out that tunneling indeed confronts special relativity and that any other statement must be considered incorrect. In this view, all waves have a zero tunneling time, the barrier can be seen as a timeless macroscopic space, and Winful's tunneling model is not correct. Recently it was proven in several experiments with photonic and Schrödinger wave packets that all waves have a zero tunneling time. Related experiments. It was later claimed by the Keller group in Switzerland that particle tunneling does indeed occur in zero real time. Their tests involved tunneling electrons, where the group argued a relativistic prediction for tunneling time should be 500-600 attoseconds (an attosecond is one quintillionth of a second). All that could be measured was 24 attoseconds, which is the limit of the test accuracy. Again, though, other physicists believe that tunneling experiments in which particles appear to spend anomalously short times inside the barrier are in fact fully compatible with relativity, although there is disagreement about whether the explanation involves reshaping of the wave packet or other effects. This claimed zero tunnel time for electrons is in apparent contrast with the known fact that quantum tunneling is a completely subluminal effect (namely, it is consistent with the standard notion of relativistic causality and does not lead to faster-than-light propagation of information) when modeled with the relativistic Dirac equation. Therefore, either the Dirac equation has to be disregarded to model relativistic tunneling, or the interpretation of the experimental results should be made consistent with the standard textbook relativistic causality notion in which the wave function cannot propagate beyond its future light-cone envelope. Temporal conclusions and future research. Nimtz' interpretation is based on the following theory: The expression formula_1 in the Feynman photon propagator means that a photon has the highest probability of traveling exactly at the speed of light formula_2, but it has a nonvanishing probability to violate the laws of special relativity, as a “virtual photon”, over short time and length scales. 
While it would be impossible to transport information over cosmologically relevant time scales using tunneling (the tunneling probability is simply too small if the classically forbidden region is too large), over short time and length scales, the tunneling photons are allowed to propagate faster than light, in view of their property as virtual particles. The photon propagation probability is nonvanishing even if the photon’s angular frequency omega is not equal to the product of the speed of light "c" and the wave momentum "p". Nimtz has written in more detail on signals and the described interpretation of the FTL tunneling experiments. Although his experimental results have been well documented since the early 1990s, Günter Nimtz' interpretation of the implications of these results represents a highly debated topic, which numerous researchers consider as incorrect (see above, #Scientific opponents and their interpretations). Some oppositional studies on zero time tunneling have been published. The common descriptions of FTL-tunneling signals presented in most textbooks and articles are corrected into final conclusions according to Brillouin and other important physicists. References. <templatestyles src="Reflist/styles.css" />
[ { "math_id": 0, "text": "n := \\sqrt{\\epsilon_r\\mu_r}" }, { "math_id": 1, "text": "1 \\over {(hv)^2 - (pc)^2}" }, { "math_id": 2, "text": "(hv = pc)" } ]
https://en.wikipedia.org/wiki?curid=11051917
11052041
Determination of equilibrium constants
Equilibrium constants are determined in order to quantify chemical equilibria. When an equilibrium constant K is expressed as a concentration quotient, formula_0 it is implied that the activity quotient is constant. For this assumption to be valid, equilibrium constants must be determined in a medium of relatively high ionic strength. Where this is not possible, consideration should be given to possible activity variation. The equilibrium expression above is a function of the concentrations [A], [B] etc. of the chemical species in equilibrium. The equilibrium constant value can be determined if any one of these concentrations can be measured. The general procedure is that the concentration in question is measured for a series of solutions with known analytical concentrations of the reactants. Typically, a titration is performed with one or more reactants in the titration vessel and one or more reactants in the burette. Knowing the analytical concentrations of reactants initially in the reaction vessel and in the burette, all analytical concentrations can be derived as a function of the volume (or mass) of titrant added. The equilibrium constants may be derived by best-fitting of the experimental data with a chemical model of the equilibrium system. Experimental methods. There are four main experimental methods. For less commonly used methods, see Rossotti and Rossotti. In all cases the range can be extended by using the competition method. An example of the application of this method can be found in palladium(II) cyanide. Potentiometric measurements. A free concentration [A] or activity {A} of a species A is measured by means of an ion selective electrode such as the glass electrode. If the electrode is calibrated using activity standards it is assumed that the Nernst equation applies in the form formula_1 where "E"0 is the standard electrode potential. When buffer solutions of known pH are used for calibration, the meter reading will be a pH value. formula_2 At 298 K, 1 pH unit is approximately equal to 59 mV. When the electrode is calibrated with solutions of known concentration, by means of a strong acid–strong base titration, for example, a modified Nernst equation is assumed. formula_3 where s is an empirical slope factor. A solution of known hydrogen ion concentration may be prepared by standardization of a strong acid against borax. Constant-boiling hydrochloric acid may also be used as a primary standard for hydrogen ion concentration. Range and limitations. The most widely used electrode is the glass electrode, which is selective for the hydrogen ion. This is suitable for all acid–base equilibria. log10 "β" values between about 2 and 11 can be measured directly by potentiometric titration using a glass electrode. This enormous range of stability constant values (ca. 100 to 10^11) is possible because of the logarithmic response of the electrode. The limitations arise because the Nernst equation breaks down at very low or very high pH. When a glass electrode is used to obtain the measurements on which the calculated equilibrium constants depend, the precision of the calculated parameters is limited by secondary effects such as variation of liquid junction potentials in the electrode. In practice it is virtually impossible to obtain a precision for log β better than ±0.001. Spectrophotometric measurements. Absorbance. It is assumed that the Beer–Lambert law applies. formula_4 where l is the optical path length, ε is a molar absorbance at unit path length and c is a concentration. 
More than one of the species may contribute to the absorbance. In principle absorbance may be measured at one wavelength only, but in present-day practice it is common to record complete spectra. Range and limitations. An upper limit on log10 "β" of 4 is usually quoted, corresponding to the precision of the measurements, but it also depends on how intense the effect is. Spectra of contributing species should be clearly distinct from each other. Fluorescence (luminescence) intensity. It is assumed that the scattered light intensity is a linear function of species’ concentrations. formula_5 where φ is a proportionality constant. Range and limitations. The magnitude of the constant φ may be higher than the value of the molar extinction coefficient, ε, for a species. When this is so, the detection limit for that species will be lower. At high solute concentrations, fluorescence intensity becomes non-linear with respect to concentration due to self-absorption of the scattered radiation. NMR chemical shift measurements. Chemical exchange is assumed to be rapid on the NMR time-scale. An individual chemical shift is the mole-fraction-weighted average of the shifts δ of nuclei in contributing species. formula_6 Example: the p"K"a of the hydroxyl group in citric acid has been determined from 13C chemical shift data to be 14.4. Neither potentiometry nor ultraviolet–visible spectroscopy could be used for this determination. Range and limitations. Limited precision of chemical shift measurements also puts an upper limit of about 4 on log10 "β". Limited to diamagnetic systems. 1H NMR cannot be used with solutions of compounds in 1H2O. Calorimetric measurements. Simultaneous measurement of K and Δ"H" for 1:1 adducts is routinely carried out using isothermal titration calorimetry. Extension to more complex systems is limited by the availability of suitable software. Range and limitations. Insufficient evidence is currently available. The competition method. The competition method may be used when a stability constant value is too large to be determined by a direct method. It was first used by Schwarzenbach in the determination of the stability constants of complexes of EDTA with metal ions. For simplicity consider the determination of the stability constant formula_7 of a binary complex, "AB", of a reagent "A" with another reagent "B". formula_8 where [X] represents the concentration, at equilibrium, of a species X in a solution of given composition. A ligand "C" is chosen which forms a weaker complex with "A". The stability constant, KAC, is small enough to be determined by a direct method. For example, in the case of EDTA complexes "A" is a metal ion and "C" may be a polyamine such as diethylenetriamine. formula_9 The stability constant, "K", for the competition reaction formula_10 can be expressed as formula_11 It follows that formula_12 where K is the stability constant for the competition reaction. Thus, the value of the stability constant formula_7 may be derived from the experimentally determined values of "K" and formula_13. Computational methods. It is assumed that the collected experimental data comprise a set of data points. At each data point i, the analytical concentrations of the reactants, "T"A("i"), "T"B("i") etc. are known along with a measured quantity, yi, that depends on one or more of these analytical concentrations. 
A general computational procedure has four main components: definition of the chemical model, calculation of the concentrations of all species (speciation), refinement of the equilibrium constants, and selection of the best chemical model. The value of the equilibrium constant for the formation of a 1:1 complex, such as a host-guest species, may be calculated with a dedicated spreadsheet application, Bindfit: In this case step 2 can be performed with a non-iterative procedure and the pre-programmed routine Solver can be used for step 3. The chemical model. The chemical model consists of a set of chemical species present in solution, both the reactants added to the reaction mixture and the complex species formed from them. Denoting the reactants by A, B..., each "complex species" is specified by the stoichiometric coefficients that relate the particular combination of "reactants" forming them. <chem>{\mathit p A} + \mathit q B \cdots <=> A_\mathit{p}B_\mathit{q} \cdots</chem>: formula_14 When using general-purpose computer programs, it is usual to use cumulative association constants, as shown above. Electrical charges are not shown in general expressions such as this and are often omitted from specific expressions, for simplicity of notation. In fact, electrical charges have no bearing on the equilibrium processes other than there being a requirement for overall electrical neutrality in all systems. With aqueous solutions the concentrations of proton (hydronium ion) and hydroxide ion are constrained by the self-dissociation of water. <chem>H2O <=> H+ + OH-</chem>: formula_15 With dilute solutions the concentration of water is assumed constant, so the equilibrium expression is written in the form of the ionic product of water. formula_16 When both H+ and OH− must be considered as reactants, one of them is eliminated from the model by specifying that its concentration be derived from the concentration of the other. Usually the concentration of the hydroxide ion is given by formula_17 In this case the equilibrium constant for the formation of hydroxide has the stoichiometric coefficients −1 in regard to the proton and zero for the other reactants. This has important implications for all protonation equilibria in aqueous solution and for hydrolysis constants in particular. It is quite usual to omit from the model those species whose concentrations are considered negligible. For example, it is usually assumed that there is no interaction between the reactants and/or complexes and the electrolyte used to maintain constant ionic strength or the buffer used to maintain constant pH. These assumptions may or may not be justified. Also, it is implicitly assumed that there are no other complex species present. When complexes are wrongly ignored, a systematic error is introduced into the calculations. Equilibrium constant values are usually estimated initially by reference to data sources. Speciation calculations. A speciation calculation is one in which concentrations of all the species in an equilibrium system are calculated, knowing the analytical concentrations, TA, TB etc. of the reactants A, B etc. This means solving a set of nonlinear equations of mass-balance formula_18 for the free concentrations [A], [B] etc. When the pH (or equivalent e.m.f., E) is measured, the free concentration of hydrogen ions, [H], is obtained from the measured value as formula_19 or formula_20, and only the free concentrations of the other reactants are calculated. The concentrations of the complexes are derived from the free concentrations via the chemical model. 
Some authors include the free reactant terms in the sums by declaring "identity" (unit) β constants for which the stoichiometric coefficients are 1 for the reactant concerned and zero for all other reactants. For example, with 2 reagents, the mass-balance equations assume the simpler form: formula_21 formula_22 In this manner, all chemical species, "including the free reactants", are treated in the same way, having been "formed" from the combination of reactants that is specified by the stoichiometric coefficients. In a titration system the analytical concentrations of the reactants at each titration point are obtained from the initial conditions, the burette concentrations and volumes. The analytical (total) concentration of a reactant R at the ith titration point is given by formula_23 where R0 is the initial amount of R in the titration vessel, "v"0 is the initial volume, [R] is the concentration of R in the burette and vi is the volume added. The burette concentration of a reactant not present in the burette is taken to be zero. In general, solving these nonlinear equations presents a formidable challenge because of the huge range over which the free concentrations may vary. At the beginning, values for the free concentrations must be estimated. Then, these values are refined, usually by means of Newton–Raphson iterations. The logarithms of the free concentrations may be refined rather than the free concentrations themselves. Refinement of the logarithms of the free concentrations has the added advantage of automatically imposing a non-negativity constraint on the free concentrations. Once the free reactant concentrations have been calculated, the concentrations of the complexes are derived from them and the equilibrium constants. Note that the free reactant concentrations can be regarded as implicit parameters in the equilibrium constant refinement process. In that context the values of the free concentrations are constrained by forcing the conditions of mass-balance to apply at all stages of the process. Equilibrium constant refinement. The objective of the refinement process is to find equilibrium constant values that give the best fit to the experimental data. This is usually achieved by minimising an objective function, U, by the method of non-linear least-squares. First the residuals are defined as formula_24 Then the most general objective function is given by formula_25 The matrix of weights, W, should be, ideally, the inverse of the variance-covariance matrix of the observations. It is rare for this to be known. However, when it is, the expectation value of U is one, which means that the data are fitted "within experimental error". Most often only the diagonal elements are known, in which case the objective function simplifies to formula_26 with "Wij" = 0 when "j" ≠ "i". Unit weights, "Wii" = 1, are often used but, in that case, the expectation value of U is the root mean square of the experimental errors. The minimization may be performed using the Gauss–Newton method. Firstly, the objective function is linearised by approximating it as a first-order Taylor series expansion about an initial parameter set, p. formula_27 The increments δ"pi" are added to the corresponding initial parameters such that U is less than "U"0. At the minimum the derivatives of U with respect to the parameters, which are simply related to the elements of the Jacobian matrix, J formula_28 where pk is the kth parameter of the refinement, are equal to zero. One or more equilibrium constants may be parameters of the refinement. 
However, the measured quantities (see above) represented by y are not expressed in terms of the equilibrium constants, but in terms of the species concentrations, which are implicit functions of these parameters. Therefore, the Jacobian elements must be obtained using implicit differentiation. The parameter increments δp are calculated by solving the normal equations, derived from the condition that the derivatives of U with respect to the parameters are zero at the minimum. formula_29 The increments δp are added iteratively to the parameters formula_30 where n is an iteration number. The species concentrations and "y"calc values are recalculated at every data point. The iterations are continued until no significant reduction in U is achieved, that is, until a convergence criterion is satisfied. If, however, the updated parameters do not result in a decrease of the objective function, that is, if divergence occurs, the increment calculation must be modified. The simplest modification is to use a fraction, f, of the calculated increment, so-called shift-cutting. formula_31 In this case, the direction of the shift vector, δp, is unchanged. With the more powerful Levenberg–Marquardt algorithm, on the other hand, the shift vector is rotated towards the direction of steepest descent, by modifying the normal equations, formula_32 where λ is the Marquardt parameter and I is an identity matrix. Other methods of handling divergence have been proposed. A particular issue arises with NMR and spectrophotometric data. For the latter, the observed quantity is absorbance, A, and the Beer–Lambert law can be written as formula_33 It can be seen that, assuming that the concentrations, c, are known, the absorbance, A, at a given wavelength, formula_34, and path length formula_35, is a linear function of the molar absorptivities, ε. With 1 cm path-length, in matrix notation formula_36 There are two approaches to the calculation of the unknown molar absorptivities. (1) The ε values are considered parameters of the minimization and the Jacobian is constructed on that basis. However, the ε values themselves are calculated at each step of the refinement by linear least-squares: formula_37 using the refined values of the equilibrium constants to obtain the speciation. The matrix formula_38 is an example of a pseudo-inverse. Golub and Pereyra showed how the pseudo-inverse can be differentiated so that parameter increments for both molar absorptivities and equilibrium constants can be calculated by solving the normal equations. (2) The Beer–Lambert law is written as formula_39 The unknown molar absorbances of all "coloured" species are found by using the non-iterative method of linear least-squares, one wavelength at a time. The calculations are performed once every refinement cycle, using the stability constant values obtained at that refinement cycle to calculate species' concentration values in the matrix formula_40. Parameter errors and correlation. In the region close to the minimum of the objective function, U, the system approximates to a linear least-squares system, for which formula_41 Therefore, the parameter values are (approximately) linear combinations of the observed data values, and the errors on the parameters, p, can be obtained by error propagation from the observations, yobs, using the linear formula. Let the variance-covariance matrix for the observations be denoted by Σy and that of the parameters by Σp. Then, formula_42 When W = (Σy)−1, this simplifies to formula_43 In most cases the errors on the observations are uncorrelated, so that Σy is diagonal. 
If so, each weight should be the reciprocal of the variance of the corresponding observation. For example, in a potentiometric titration, the weight at a titration point, k, can be given by formula_44 where σE is the error in electrode potential or pH, (∂E/∂v) is the slope of the titration curve and σv is the error on added volume. When unit weights are used (W = I, p = (JTJ)−1JTy), it is implied that the experimental errors are uncorrelated and all equal: Σy = "σ"2I, where "σ"2 is known as the variance of an observation of unit weight, and I is an identity matrix. In this case "σ"2 is approximated by formula_45 where U is the minimum value of the objective function and "n"d and "n"p are the number of data and parameters, respectively. formula_46 In all cases, the variance of the parameter pi is given by the ith diagonal element of Σp, and the covariance between parameters pi and pj is given by the corresponding off-diagonal element. Standard deviation is the square root of variance. These error estimates reflect only random errors in the measurements. The true uncertainty in the parameters is larger due to the presence of systematic errors, which, by definition, cannot be quantified. Note that even though the observations may be uncorrelated, the parameters are always correlated. Derived constants. When cumulative constants have been refined it is often useful to derive stepwise constants from them. The general procedure is to write down the defining expressions for all the constants involved and then to equate concentrations. For example, suppose that one wishes to derive the pKa for removing one proton from a tribasic acid, LH3, such as citric acid. formula_47 The stepwise "association" constant for formation of LH3 is given by formula_48 Substitute the expressions for the concentrations of LH3 and LH2− into this equation: formula_49 whence formula_50 and since p"K"a = −log10(1/"K") = log10 "K", its value is given by formula_51 formula_52 formula_53 Note the reverse numbering for pK and log β. When calculating the error on the stepwise constant, the fact that the cumulative constants are correlated must be accounted for. By error propagation formula_54 and formula_55 Model selection. Once a refinement has been completed the results should be checked to verify that the chosen model is acceptable. Generally speaking, a model is acceptable when the data are fitted within experimental error, but there is no single criterion to use to make the judgement. The following should be considered. The objective function. When the weights have been correctly derived from estimates of experimental error, the expectation value of U/("n"d − "n"p) is 1. It is therefore very useful to estimate experimental errors and derive some reasonable weights from them, as this is an absolute indicator of the goodness of fit. When unit weights are used, it is implied that all observations have the same variance. U/("n"d − "n"p) is expected to be equal to that variance. Parameter errors. One would want the errors on the stability constants to be roughly commensurate with experimental error. For example, with pH titration data, if pH is measured to 2 decimal places, the errors of log10 "β" should not be much larger than 0.01. In exploratory work where the nature of the species present is not known in advance, several different chemical models may be tested and compared. 
There will be models where the uncertainties in the best estimate of an equilibrium constant may be somewhat or even significantly larger than "σ"pH, especially with those constants governing the formation of comparatively minor species, but the decision as to how large is acceptable remains subjective. The decision process as to whether or not to include comparatively uncertain equilibria in a model, and for the comparison of competing models in general, can be made objective and has been outlined by Hamilton. Distribution of residuals. At the minimum in U the system can be approximated to a linear one; the residuals in the case of unit weights are related to the observations by formula_56 The symmetric, idempotent matrix J(JTJ)−1JT is known in the statistics literature as the hat matrix, H. Thus, formula_57 and formula_58 where I is an identity matrix and Mr and My are the variance-covariance matrices of the residuals and observations, respectively. This shows that even though the observations may be uncorrelated, the residuals are always correlated. The diagram at the right shows the result of a refinement of the stability constants of Ni(Gly)+, Ni(Gly)2 and Ni(Gly)3- (where GlyH = glycine). The observed values are shown as blue diamonds and the species concentrations, as a percentage of the total nickel, are superimposed. The residuals are shown in the lower box. The residuals are not distributed as randomly as would be expected. This is due to the variation of liquid junction potentials and other effects at the glass/liquid interfaces. Those effects are very slow compared to the rate at which equilibrium is established. Physical constraints. Some physical constraints are usually incorporated in the calculations. For example, all the concentrations of free reactants and species must have positive values and association constants must have positive values. With spectrophotometric data the calculated molar absorptivity (or emissivity) values should all be positive. Most computer programs do not impose this constraint on the calculations. Chemical constraints. When determining the stability constants of metal-ligand complexes, it is common practice to fix ligand protonation constants at values that have been determined using data obtained from metal-free solutions. Hydrolysis constants of metal ions are usually fixed at values which were obtained using ligand-free solutions. When determining the stability constants for ternary complexes, MpAqBr, it is common practice to fix the values for the corresponding binary complexes, Mp′Aq′ and Mp′′Bq′′, at values which have been determined in separate experiments. Use of such constraints reduces the number of parameters to be determined, but may result in the calculated errors on refined stability constant values being under-estimated. Other models. If the model is not acceptable, a variety of other models should be examined to find one that best fits the experimental data, within experimental error. The main difficulty is with the so-called minor species. These are species whose concentration is so low that the effect on the measured quantity is at or below the level of error in the experimental measurement. The constant for a minor species may prove impossible to determine if there is no means to increase the concentration of the species. Implementations. Some simple systems are amenable to spreadsheet calculations. A large number of general-purpose computer programs for equilibrium constant calculation have been published. See for a bibliography. 
The most frequently used programs are: References. <templatestyles src="Reflist/styles.css" />
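To make the workflow above concrete, here is a small, self-contained sketch (not taken from any of the programs mentioned in the article) for the simplest case discussed above: a 1:1 host-guest complex followed by NMR titration. The speciation step reduces to a quadratic in the complex concentration, the limiting chemical shift is a linear parameter re-fitted at each trial constant (mirroring the treatment of molar absorptivities described above), and the nonlinear refinement of K is done here by a crude grid scan rather than Gauss–Newton or Levenberg–Marquardt. The data, names and numbers are invented for illustration only.

```python
import math

# Illustrative sketch only: refine the association constant K of a 1:1 complex
# H + G <=> HG from fast-exchange NMR titration data. Synthetic data; a real
# program would use Gauss-Newton or Levenberg-Marquardt as described above.

def complex_conc(K, TH, TG):
    """Speciation step: mass balance for a 1:1 complex gives a quadratic in [HG]."""
    b = TH + TG + 1.0 / K
    return (b - math.sqrt(b * b - 4.0 * TH * TG)) / 2.0

def objective(K, TH, TG_list, d_obs, d_free):
    """Sum of squared residuals for a trial K; the limiting shift change of the
    bound host is a linear parameter, solved exactly by linear least squares."""
    x = [complex_conc(K, TH, TG) / TH for TG in TG_list]   # mole fraction of bound host
    y = [d - d_free for d in d_obs]                        # observed shift changes
    d_bound = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return sum((yi - d_bound * xi) ** 2 for xi, yi in zip(x, y)), d_bound

# Invented titration: constant total host TH, increasing total guest TG.
TH, d_free = 1.0e-3, 7.00
TG_list = [2.0e-4, 5.0e-4, 1.0e-3, 2.0e-3, 5.0e-3]
d_obs = [7.06, 7.14, 7.25, 7.38, 7.52]

# Crude refinement: scan log10(K) and keep the best fit.
best_U, best_d, best_K = min(
    objective(10.0 ** lk, TH, TG_list, d_obs, d_free) + (10.0 ** lk,)
    for lk in (i / 50.0 for i in range(50, 301)))
print(f"best-fit K = {best_K:.3g}, limiting shift change = {best_d:.3f}, U = {best_U:.2e}")
```

Separating the linear parameter from the nonlinear one in this way is the same design choice as approaches (1) and (2) above, where the molar absorptivities are obtained by linear least squares inside each refinement cycle.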
[ { "math_id": 0, "text": "K=\\frac{\\mathrm{[S]} ^\\sigma \\mathrm{[T]}^\\tau \\cdots } {\\mathrm{[A]}^\\alpha \\mathrm{[B]}^\\beta \\cdots }" }, { "math_id": 1, "text": " E=E^0+\\frac{RT}{nF}\\ln\\mathrm{\\{A\\}}" }, { "math_id": 2, "text": "\\mathrm{pH}=\\frac{nF}{RT}\\left(E^0-E\\right)" }, { "math_id": 3, "text": "E=E^0 + s\\log_{10}\\mathrm{[A]}" }, { "math_id": 4, "text": "A=l \\sum {\\varepsilon c}" }, { "math_id": 5, "text": "I=\\sum \\varphi c " }, { "math_id": 6, "text": "\\bar {\\delta} =\\frac{\\sum x_i \\delta_i}{\\sum x_i}" }, { "math_id": 7, "text": "K_{AB}" }, { "math_id": 8, "text": "K_{AB}=\\frac{[AB]}{[A][B]}" }, { "math_id": 9, "text": "K_{AC}=\\frac{[AC]}{[A][C]}" }, { "math_id": 10, "text": "AC + B \\leftrightharpoons AB +C" }, { "math_id": 11, "text": "K=\\frac{[AB][C]}{[AC][B]}" }, { "math_id": 12, "text": "K_{AB}=K \\times K_{AC}" }, { "math_id": 13, "text": "K_{AC}" }, { "math_id": 14, "text": "\\beta_{pq\\cdots}=\\frac {[\\ce{A}_p\\ce{B}_q \\cdots ]} {[\\ce A]^p[\\ce B]^q \\cdots }" }, { "math_id": 15, "text": "K_\\mathrm{W}^' = \\frac{[H^+][OH^-]}{[H_2O]} " }, { "math_id": 16, "text": "K_\\mathrm{W}=\\ce{[H+]}[\\ce{OH-}]\\," }, { "math_id": 17, "text": "[\\ce{OH-}]=\\frac{K_\\ce{W}}{[\\ce{H+}]}\\," }, { "math_id": 18, "text": "\n\\begin{align}\n\\ce{T_A} & = [\\ce A]+\\sum_{1,nk}p\\beta_{pq \\cdots}[\\ce A]^p[\\ce B]^q \\cdots \\\\\n\\ce{T_B} & = [\\ce B]+\\sum_{1,nk}q\\beta_{pq \\cdots}[\\ce A]^p[\\ce B]^q \\cdots \\\\\netc.\n\\end{align}\n" }, { "math_id": 19, "text": "[\\mathrm H]=10^{-\\mathrm{pH}}" }, { "math_id": 20, "text": "[\\mathrm H]=e^\\mathrm{{ -\\frac{nF}{RT}}(E-E^0) }" }, { "math_id": 21, "text": "\n\\begin{align}\nT_\\ce{A} & = \\sum_{0,nk}p\\beta_{pq}[\\ce A]^p[\\ce B]^q \\\\[4pt]\nT_\\ce{B} & = \\sum_{0,nk}q\\beta_{pq}[\\ce A]^p[\\ce B]^q \\\\\n\\end{align}\n" }, { "math_id": 22, "text": "\\beta_{10}= \\beta_{01} = 1" }, { "math_id": 23, "text": "T_\\ce{R}=\\frac{\\ce{R}_0+v_i\\ce{[R]}}{v_0+v_i}" }, { "math_id": 24, "text": "r_i=y_i^\\text{obs}-y_i^\\text{calc}" }, { "math_id": 25, "text": "U=\\sum_i\\sum_j r_i W_{ij} r_j\\," }, { "math_id": 26, "text": "U=\\sum_i W_{ii}r_i^2" }, { "math_id": 27, "text": "U=U^0+\\sum_i \\frac{\\partial U}{\\partial p_i}\\delta p_i" }, { "math_id": 28, "text": "J_{jk}=\\frac{\\partial y_j^\\mathrm{calc}}{\\partial p_k}" }, { "math_id": 29, "text": "{ \\left(J^\\mathrm{T} W J\\right) \\delta p=J^\\mathrm{T} W r }" }, { "math_id": 30, "text": "\\mathbf{p}^{n+1}=\\mathbf{p}^n +\\delta \\mathbf{p}" }, { "math_id": 31, "text": "\\mathbf{p}^{n+1}=\\mathbf{p}^n +f \\mathbf{\\delta p}" }, { "math_id": 32, "text": "\\mathbf{ \\left(J^\\mathrm{T} W J +\\lambda I\\right)\\delta p=J^\\mathrm{T} W r }" }, { "math_id": 33, "text": "A_\\lambda=l\\sum(\\varepsilon_{pq..})_\\lambda c_{pq..}" }, { "math_id": 34, "text": "\\lambda" }, { "math_id": 35, "text": "l" }, { "math_id": 36, "text": "\\mathbf{A}=\\boldsymbol{\\varepsilon} \\mathbf{C} \\, " }, { "math_id": 37, "text": "\\boldsymbol{\\varepsilon} = \\mathbf{\\left(C^\\mathrm{T}C\\right)^{-1}C^\\mathrm{T}A }" }, { "math_id": 38, "text": "\\mathbf{\\left(C^TC\\right)^{-1}C^T}" }, { "math_id": 39, "text": "\\mathbf{\\boldsymbol\\varepsilon}_\\lambda= \\mathbf{A}^{-1}_\\lambda \\mathbf{C} \\, " }, { "math_id": 40, "text": " \\mathbf{C} " }, { "math_id": 41, "text": "\\mathbf{p=\\left(J^\\mathrm{T}WJ\\right)^{-1}J^\\mathrm{T}Wy^\\mathrm{obs}}" }, { "math_id": 42, "text": "\\mathbf{\\Sigma^p=\\left(J^\\mathrm{T}WJ\\right)^{-1}J^\\mathrm{T}W \\Sigma^y 
W^\\mathrm{T}J(J^\\mathrm{T}WJ)^{-1}}" }, { "math_id": 43, "text": "\\mathbf{\\Sigma^p=\\left(J^\\mathrm{T}WJ\\right)^{-1}}" }, { "math_id": 44, "text": "W_k= \\frac{1}{\\sigma^2_E+\\left( \\frac{\\partial E}{\\partial v} \\right)^2_k\\sigma^2_v} " }, { "math_id": 45, "text": "\\sigma^2 = \\frac{U}{n_\\mathrm{d}-n_\\mathrm{p}}" }, { "math_id": 46, "text": "\\mathbf{\\Sigma^p}=\\frac{U}{n_\\mathrm{d}-n_\\mathrm{p}}\\left(\\mathbf{J}^\\mathrm{T}\\mathbf{J}\\right)^{-1}" }, { "math_id": 47, "text": "\\begin{align}\n\\ce{L^3-}+ \\ce{ H+ <=> }\\ \\ce{LH^2-} &:\\ [\\ce{LH^2-}] =\\beta_{11} [\\ce{L^3-}] [\\ce{H+}]\\\\\n\\ce{L^3-}+ \\ce{2H+ <=> }\\ \\ce{LH2^-} &:\\ [\\ce{LH2^-}] =\\beta_{12} [\\ce{L^3-}] [\\ce{H+}]^2\\\\\n\\ce{L^3-}+ \\ce{3H+ <=> }\\ \\ce{LH3} &:\\ [\\ce{LH3}] =\\beta_{13} [\\ce{L^3-}] [\\ce{H+}]^3\n\\end{align}" }, { "math_id": 48, "text": "\\ce{{LH2^-} + H+ <=> LH3\\ ; \\quad\\ [LH3]}=K[\\ce{LH2^-}][\\ce{H+}]" }, { "math_id": 49, "text": "\\beta_{13}[\\ce{L^3-}][\\ce{H+}]^3=K\\beta_{12}[\\ce{L^3-}][\\ce{H+}]^2[\\ce{H+}]" }, { "math_id": 50, "text": "\\beta_{13}=K\\beta_{12}; K=\\frac{\\beta_{13}}{\\beta_{12}} \\," }, { "math_id": 51, "text": "\\ce{p}K_\\ce{a1} = \\log_{10} \\beta_{13}-\\log_{10} \\beta_{12}\\, " }, { "math_id": 52, "text": "\\ce{p}K_\\ce{a2} = \\log_{10} \\beta_{12}-\\log_{10} \\beta_{11}\\, " }, { "math_id": 53, "text": "\\ce{p}K_\\ce{a3} = \\log_{10} \\beta_{11}\\, " }, { "math_id": 54, "text": "\\sigma^2_K=\\sigma^2_{\\beta_{12}}+\\sigma^2_{\\beta_{13}}-2 \\sigma_{\\beta_{12}} \\sigma_{\\beta_{13}}\\rho_{12,13}\\," }, { "math_id": 55, "text": "\\sigma_{\\log_{10} K}=\\frac{\\sigma_K}{K}" }, { "math_id": 56, "text": "\\mathbf{r=y^\\mathrm{obs}-J \\left(J^\\mathrm{T}T \\right)^{-1}J^\\mathrm{T} y^\\mathrm{obs}}" }, { "math_id": 57, "text": "\\mathbf{r=\\left(I-H \\right) y^\\mathrm{obs}}" }, { "math_id": 58, "text": "\\mathbf{M^r=\\left(I-H \\right) M^y \\left(I-H \\right)}" } ]
https://en.wikipedia.org/wiki?curid=11052041
11053577
TrueSkill
Rating system supporting games with more than 2 players TrueSkill is a skill-based ranking system developed by Microsoft for use with video game matchmaking on the Xbox network. Unlike the popular Elo rating system, which was initially designed for chess, TrueSkill is designed to support games with more than two players. In 2018, Microsoft published details about an extended version of TrueSkill, named TrueSkill2. Calculation. A player's skill is represented as a normal distribution formula_0 characterized by a mean value of formula_1 (mu, representing perceived skill) and a standard deviation of formula_2 (sigma, representing how "unconfident" the system is in the player's formula_1 value). As such, formula_3 can be interpreted as the probability density that the player's "true" skill is formula_4. On Xbox Live, players start with formula_5 and formula_6; formula_1 always increases after a win and always decreases after a loss. The extent of actual updates depends on each player's formula_2 and on how "surprising" the outcome is to the system. Unbalanced games, for example, result in either negligible updates when the favorite wins, or huge updates when the favorite loses surprisingly. Factor graphs and expectation propagation via moment matching are used to compute the message-passing equations, which in turn compute the skills for the players. Player ranks are displayed as the conservative estimate of their skill, formula_7. This is conservative because the system is 99% sure that the player's skill is actually higher than what is displayed as their rank. The system can be used with arbitrary scales, but Microsoft uses a scale from 0 to 50 for Xbox Live. Hence, players start with a rank of formula_8. This means that a new player's defeat results in a large drop in sigma, which can partially or completely compensate for the drop in mu in the displayed rank. This explains why people may gain ranks from losses. Use in other projects. TrueSkill is patented, and the name is trademarked, so it is limited to Microsoft projects and commercial projects that obtain a license to use the algorithm. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
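The relationship between mu, sigma and the displayed rank can be illustrated with a short sketch. The following Python snippet is only an illustration of the conservative-rank rule described above, not Microsoft's implementation; apart from the documented starting values (mu = 25, sigma = 25/3) and the rule R = mu − 3·sigma, the post-game numbers are invented for demonstration.

```python
# Illustrative sketch of TrueSkill's displayed (conservative) rank.
# Not Microsoft's implementation; apart from the documented defaults
# (mu = 25, sigma = 25/3) the numbers below are invented for illustration.

def conservative_rank(mu: float, sigma: float) -> float:
    """Displayed skill R = mu - 3*sigma: the system is ~99% sure the
    player's true skill is at least this high."""
    return mu - 3.0 * sigma

# A new player starts at mu = 25, sigma = 25/3, so the displayed rank is 0.
print(conservative_rank(25.0, 25.0 / 3.0))        # 0.0

# After a first game the system is more certain, so sigma shrinks a lot.
# Even if that game was a loss and mu dropped, the displayed rank can rise:
print(conservative_rank(22.0, 6.0))               # 4.0, i.e. higher than 0.0
```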
[ { "math_id": 0, "text": "\\mathcal{N}" }, { "math_id": 1, "text": "\\mu" }, { "math_id": 2, "text": "\\sigma" }, { "math_id": 3, "text": "\\mathcal{N}(x)" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "\\mu = 25" }, { "math_id": 6, "text": "\\sigma = 25/3" }, { "math_id": 7, "text": "R = \\mu - 3 \\times \\sigma" }, { "math_id": 8, "text": "R = 25 - 3 \\cdot \\frac{25}{3} = 0" } ]
https://en.wikipedia.org/wiki?curid=11053577
11054533
Archard equation
Model used to describe wear The Archard wear equation is a simple model used to describe sliding wear and is based on the theory of asperity contact. The Archard equation was developed much later than Reye's hypothesis (sometimes also known as the energy dissipative hypothesis), though both came to the same physical conclusion: that the volume of the debris removed due to wear is proportional to the work done by friction forces. Theodor Reye's model became popular in Europe and it is still taught in university courses of applied mechanics. Until recently, however, Reye's theory of 1860 was largely ignored in English and American literature, where subsequent works by Ragnar Holm and John Frederick Archard are usually cited. In 1960, Mikhail Mikhailovich Khrushchov and Mikhail Alekseevich Babichev published a similar model as well. In modern literature, the relation is therefore also known as the Reye–Archard–Khrushchov wear law. In 2022, the steady-state Archard wear equation was extended into the running-in regime using the bearing ratio curve representing the initial surface topography. Equation. formula_0 where: "Q" is the total volume of wear debris produced "K" is a dimensionless constant "W" is the total normal load "L" is the sliding distance "H" is the hardness of the softer of the contacting surfaces Note that formula_1 is proportional to the work done by the friction forces as described by Reye's hypothesis. Also, "K" is obtained from experimental results and depends on several parameters. Among them are surface quality, chemical affinity between the materials of the two surfaces, surface hardness, heat transfer between the two surfaces, and others. Derivation. The equation can be derived by first examining the behavior of a single asperity. The local load formula_2, supported by an asperity assumed to have a circular cross-section with a radius formula_3, is: formula_4 where "P" is the yield pressure for the asperity, assumed to be deforming plastically. "P" will be close to the indentation hardness, "H", of the asperity. If the volume of wear debris, formula_5, for a particular asperity is a hemisphere sheared off from the asperity, it follows that: formula_6 This fragment is formed by the material having slid a distance 2"a". Hence formula_7, the wear volume of material produced from this asperity per unit distance moved, is: formula_8 making the approximation that formula_9 However, not all asperities will have had material removed after sliding a distance 2"a". Therefore, the total wear debris produced per unit distance moved, formula_10, will be lower than the ratio of "W" to 3"H". This is accounted for by the addition of a dimensionless constant "K", which also incorporates the factor 3 above. These operations produce the Archard equation as given above. Archard interpreted the "K" factor as the probability of forming wear debris from asperity encounters. Typically, for 'mild' wear "K" ≈ 10−8, whereas for 'severe' wear "K" ≈ 10−2. Recently, it has been shown that there exists a critical length scale that controls the wear debris formation at the asperity level. This length scale defines a critical junction size, where bigger junctions produce debris, while smaller ones deform plastically. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
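As a quick illustration of how the equation is used, the sketch below evaluates Q = KWL/H for round, assumed numbers (a 'mild' wear coefficient, a 100 N load, 1000 m of sliding, a hardness of 1 GPa); the values are chosen for demonstration and are not measured data.

```python
# Minimal sketch of the Archard wear relation Q = K * W * L / H.
# The material values below are round illustrative numbers, not measurements.

def archard_wear_volume(K: float, W: float, L: float, H: float) -> float:
    """Total wear volume Q for wear coefficient K, normal load W (N),
    sliding distance L (m) and hardness H of the softer surface (Pa)."""
    return K * W * L / H

# 'Mild' wear regime (K ~ 1e-8), 100 N load sliding 1000 m against a
# surface with hardness ~1 GPa:
Q = archard_wear_volume(K=1e-8, W=100.0, L=1000.0, H=1e9)
print(Q)    # 1e-12 m^3, i.e. about 0.001 mm^3 of removed material
```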
[ { "math_id": 0, "text": "Q = \\frac {KWL}H" }, { "math_id": 1, "text": "WL" }, { "math_id": 2, "text": "\\, \\delta W " }, { "math_id": 3, "text": "\\, a " }, { "math_id": 4, "text": "\\delta W = P \\pi {a^2} \\,\\!" }, { "math_id": 5, "text": "\\, \\delta V " }, { "math_id": 6, "text": " \\delta V = \\frac 2 3 \\pi a^3 " }, { "math_id": 7, "text": "\\, \\delta Q " }, { "math_id": 8, "text": " \\delta Q = \\frac {\\delta V} {2a} = \\frac {\\pi a^2} 3 \\equiv \\frac {\\delta W} {3P} \\approx \\frac {\\delta W} {3H}" }, { "math_id": 9, "text": "\\,P \\approx H" }, { "math_id": 10, "text": "\\, Q " } ]
https://en.wikipedia.org/wiki?curid=11054533
11054805
Flyback diode
Voltage-spike stopping diode across an inductor A flyback diode is any diode connected across an inductor used to eliminate flyback, which is the sudden voltage spike seen across an inductive load when its supply current is suddenly reduced or interrupted. It is used in circuits in which inductive loads are controlled by switches, and in switching power supplies and inverters. Flyback circuits have been used since 1930 and were refined starting in 1950 for use in television receivers. The word "flyback" comes from the horizontal movement of the electron beam in a cathode ray tube, because the beam flew back to begin the next horizontal line. This diode is known by many other names, such as snubber diode, commutating diode, freewheeling diode, flywheel diode, suppressor diode, clamp diode, or catch diode. Operation. Fig. 1 shows an inductor connected to a battery - a constant voltage source. The resistor represents the small residual resistance of the inductor's wire windings. When the switch is closed, the voltage from the battery is applied to the inductor, causing current from the battery's positive terminal to flow down through the inductor and resistor. The increase in current causes a back EMF (voltage) across the inductor due to Faraday's law of induction which opposes the change in current. Since the voltage across the inductor is limited to the battery's voltage of 24 volts, the rate of increase of the current is limited to an initial value of formula_0 so the current through the inductor increases slowly as energy from the battery is stored in the inductor's magnetic field. As the current rises, more voltage is dropped across the resistor and less across the inductor, until the current reaches a steady value of formula_1 with all the battery voltage across the resistance and none across the inductance. However, the current drops rapidly when the switch is opened in Fig. 2. The inductor resists the drop in current by developing a very large induced voltage of polarity in the opposite direction of the battery, positive at the lower end of the inductor and negative at the upper end. This voltage pulse, sometimes called the inductive "kick", which can be much larger than the battery voltage, appears across the switch contacts. It causes electrons to jump the air gap between the contacts, causing a momentary electric arc to develop across the contacts as the switch is opened. The arc continues until the energy stored in the inductor's magnetic field is dissipated as heat in the arc. The arc can damage the switch contacts, causing pitting and burning, eventually destroying them. If a transistor is used to switch the current, such as switching power supplies, the high reverse voltage can destroy the transistor. To prevent the inductive voltage pulse on turnoff, a diode is connected across the inductor, as shown in Fig. 3. The diode doesn't conduct current while the switch is closed because it is reverse-biased by the battery voltage, so it doesn't interfere with the normal operation of the circuit. However, when the switch is opened, the induced voltage across the inductor of opposite polarity forward biases the diode, and it conducts current, limiting the voltage across the inductor and thus preventing the arc from forming at the switch. The inductor and diode momentarily form a loop or circuit powered by the stored energy in the inductor. 
This circuit supplies a current path to the inductor to replace the current from the battery, so the inductor current does not drop abruptly and does not develop a high voltage. The voltage across the inductor is limited to the forward voltage of the diode, around 0.7–1.5 V. This "freewheeling" or "flyback" current through the diode and inductor decreases slowly to zero as the magnetic energy in the inductor is dissipated as heat in the series resistance of the windings. This may take a few milliseconds in a small inductor. These images show the voltage spike and its elimination through the use of a flyback diode (1N4007). In this case, the inductor is a solenoid connected to a 24 V DC power supply. Each waveform was taken using a digital oscilloscope set to trigger when the voltage across the inductor dipped below zero. Note the different scaling: left image 50 V/division, right image 1 V/division. In Figure 1, the voltage measured across the switch spikes to around −300 V. In Figure 2, a flyback diode was added in antiparallel with the solenoid. Instead of spiking to −300 V, the voltage is limited by the flyback diode to approximately −1.4 V (a combination of the forward bias of the 1N4007 diode (1.1 V) and the foot of wiring separating the diode and the solenoid). The waveform in Figure 2 is also smoother than the waveform in Figure 1, perhaps due to arcing at the switch in Figure 1. In both cases, the total time for the solenoid to discharge is a few milliseconds, though the lower voltage drop across the diode will slow relay dropout. Design. When used with a DC coil relay, a flyback diode can cause delayed drop-out of the contacts when power is removed, due to the continued circulation of current in the relay coil and diode. When rapid opening of the contacts is important, a resistor or reverse-biased Zener diode can be placed in series with the diode to help dissipate the coil energy faster, at the expense of a higher voltage at the switch. Schottky diodes are preferred in flyback diode applications for switching power converters because they have the lowest forward drop (~0.2 V rather than &gt;0.7 V for low currents) and are able to respond quickly to reverse bias (when the inductor is being re-energized). They therefore dissipate less energy while transferring energy from the inductor to a capacitor. Induction at the opening of a contact. According to Faraday's law of induction, if the current through an inductance changes, this inductance induces a voltage, so the current will flow as long as there is energy in the magnetic field. If the current can only flow through the air, the voltage is so high that the air conducts. That is why in mechanically switched circuits, the near-instantaneous dissipation which occurs without a flyback diode is often observed as an arc across the opening mechanical contacts. Energy is dissipated in this arc primarily as intense heat, which causes undesirable premature erosion of the contacts. Another way to dissipate energy is through electromagnetic radiation. Similarly, for non-mechanical solid-state switching (i.e., a transistor), large voltage drops across an unactivated solid-state switch can destroy the component in question (either instantaneously or through accelerated wear and tear). Some energy is also lost from the system as a whole and from the arc as a broad spectrum of electromagnetic radiation, in the form of radio waves and light.
These radio waves can cause undesirable clicks and pops on nearby radio receivers. To minimise the antenna-like radiation of this electromagnetic energy from wires connected to the inductor, the flyback diode should be connected as physically close to the inductor as practicable. This approach also minimises those parts of the circuit that are subject to an unwanted high-voltage — a good engineering practice. Derivation. The voltage at an inductor is, by the law of electromagnetic induction and the definition of inductance: formula_2 If there is no flyback diode but only something with great resistance (such as the air between two metal contacts), say, "R"2, we will approximate it as: formula_3 If we open the switch and ignore "V"CC and "R"1, we get: formula_4 or formula_5 which is a differential equation with the solution: formula_6 We observe that the current will decrease faster if the resistance is high, such as with air. Now if we open the switch with the diode in place, we only need to consider "L"1, "R"1 and "D"1. For "I" &gt; 0, we can assume: formula_7 so: formula_8 which is: formula_9 whose (first order differential equation) solution is: formula_10 We can calculate the time it needs to switch off by determining for which t it is "I"("t") = 0. formula_11 If "V"CC = "I"0"R"1, then formula_12 Applications. Flyback diodes are commonly used when semiconductor devices switch inductive loads off: in relay drivers, H-bridge motor drivers, and so on. A switched-mode power supply also exploits this effect, but the energy is not dissipated to heat and is instead used to pump a packet of additional charge into a capacitor, in order to supply power to a load. When the inductive load is a relay, the flyback diode can noticeably delay the release of the relay by keeping the coil current flowing longer. A resistor in series with the diode will make the circulating current decay faster at the drawback of an increased reverse voltage. A zener diode in series but with reverse polarity with regard to the flyback diode has the same properties, albeit with a fixed reverse voltage increase. Both the transistor voltages and the resistor or zener diode power ratings should be checked in this case. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
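The switch-off behaviour derived above is easy to evaluate numerically. The sketch below uses the closed-form solution for I(t) and the turn-off time t = (L/R1)·ln(Vcc/Vd + 1); the component values are illustrative (they reuse the 24 V supply of the earlier example), and the constant diode drop is the same simplifying assumption made in the derivation.

```python
# Sketch of the freewheeling-current decay worked out in the derivation above.
# Component values are illustrative, not from a real design; the constant
# diode drop Vd is the same simplifying assumption used in the derivation.
import math

L   = 0.1     # coil inductance, henries
R1  = 50.0    # series (winding) resistance, ohms
Vd  = 0.7     # assumed diode forward drop, volts
Vcc = 24.0    # supply voltage, volts (as in the earlier 24 V example)
I0  = Vcc / R1          # current flowing when the switch opens (0.48 A)

def freewheel_current(t: float) -> float:
    """I(t) = (I0 + Vd/R1) * exp(-R1*t/L) - Vd/R1, valid while I(t) > 0."""
    return (I0 + Vd / R1) * math.exp(-R1 * t / L) - Vd / R1

# Time for the current to reach zero: t = (L/R1) * ln(Vcc/Vd + 1)
t_off = (L / R1) * math.log(Vcc / Vd + 1.0)
print(freewheel_current(0.0))   # 0.48 A at the moment the switch opens
print(t_off)                    # ≈ 0.0071 s, i.e. a few milliseconds
```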
[ { "math_id": 0, "text": "{dI \\over dt} = {V_B \\over L}," }, { "math_id": 1, "text": "I = V_B/R" }, { "math_id": 2, "text": "V_L = - {d\\Phi_B \\over dt} = - L {dI \\over dt}" }, { "math_id": 3, "text": "V_{R_2} = R_2 \\cdot I" }, { "math_id": 4, "text": "V_L = V_{R_2}" }, { "math_id": 5, "text": "- L {dI \\over dt} = R_2 \\cdot I" }, { "math_id": 6, "text": "I(t) = I_0 \\cdot e^{- {R_2 \\over L} t}" }, { "math_id": 7, "text": "V_D = \\mathrm{constant}" }, { "math_id": 8, "text": "V_L = V_{R_1} + V_D" }, { "math_id": 9, "text": "- L {dI \\over dt} = R_1 \\cdot I + V_D" }, { "math_id": 10, "text": "I(t) = (I_0+{1\\over R_1} V_D) \\cdot e^{- {R_1 \\over L} t} - {1 \\over R_1} V_D" }, { "math_id": 11, "text": "t = {-L\\over R_1} \\cdot ln{\\left({V_D \\over {V_D + I_0{R_1}}}\\right)}" }, { "math_id": 12, "text": "t = {-L\\over R_1} \\cdot ln{\\left({1 \\over {\\frac{V_{CC}}{V_D} + 1}}\\right)}={L\\over R_1} \\cdot ln{\\left({{\\frac{V_{CC}}{V_D} + 1}}\\right)}" } ]
https://en.wikipedia.org/wiki?curid=11054805
1105488
Classification of Clifford algebras
In abstract algebra, in particular in the theory of nondegenerate quadratic forms on vector spaces, the finite-dimensional real and complex Clifford algebras for a nondegenerate quadratic form have been completely classified as rings. In each case, the Clifford algebra is algebra isomorphic to a full matrix ring over R, C, or H (the quaternions), or to a direct sum of two copies of such an algebra, though not in a canonical way. Below it is shown that distinct Clifford algebras may be algebra-isomorphic, as is the case of Cl1,1(R) and Cl2,0(R), which are both isomorphic as rings to the ring of two-by-two matrices over the real numbers. Notation and conventions. The Clifford product is the manifest ring product for the Clifford algebra, and all algebra homomorphisms in this article are with respect to this ring product. Other products defined within Clifford algebras, such as the exterior product, and other structure, such as the distinguished subspace of generators "V", are not used here. This article uses the (+) sign convention for Clifford multiplication so that formula_0 for all vectors "v" in the vector space of generators "V", where "Q" is the quadratic form on the vector space "V". We will denote the algebra of "n" × "n" matrices with entries in the division algebra "K" by M"n"("K") or End("K""n"). The direct sum of two such identical algebras will be denoted by M"n"("K") ⊕ M"n"("K"), which is isomorphic to M"n"("K" ⊕ "K"). Bott periodicity. Clifford algebras exhibit a 2-fold periodicity over the complex numbers and an 8-fold periodicity over the real numbers, which is related to the same periodicities for homotopy groups of the stable unitary group and stable orthogonal group, and is called Bott periodicity. The connection is explained by the geometric model of loop spaces approach to Bott periodicity: their 2-fold/8-fold periodic embeddings of the classical groups in each other (corresponding to isomorphism groups of Clifford algebras), and their successive quotients are symmetric spaces which are homotopy equivalent to the loop spaces of the unitary/orthogonal group. Complex case. The complex case is particularly simple: every nondegenerate quadratic form on a complex vector space is equivalent to the standard diagonal form formula_1 where "n" = dim("V"), so there is essentially only one Clifford algebra for each dimension. This is because the complex numbers include "i" by which −"u""k"2 = +("iu""k")2 and so positive or negative terms are equivalent. We will denote the Clifford algebra on C"n" with the standard quadratic form by Cl"n"(C). There are two separate cases to consider, according to whether "n" is even or odd. When "n" is even, the algebra Cl"n"(C) is central simple and so by the Artin–Wedderburn theorem is isomorphic to a matrix algebra over C. When "n" is odd, the center includes not only the scalars but the pseudoscalars (degree "n" elements) as well. We can always find a normalized pseudoscalar "ω" such that "ω"2 = 1. Define the operators formula_2 These two operators form a complete set of orthogonal idempotents, and since they are central they give a decomposition of Cl"n"(C) into a direct sum of two algebras formula_3 where formula_4 The algebras Cl"n"&amp;pm;(C) are just the positive and negative eigenspaces of "ω" and the "P"&amp;pm; are just the projection operators. Since "ω" is odd, these algebras are mixed by "α" (the linear map on "V" defined by "v" ↦ −"v"): formula_5 and therefore isomorphic (since "α" is an automorphism). 
These two isomorphic algebras are each central simple and so, again, isomorphic to a matrix algebra over C. The sizes of the matrices can be determined from the fact that the dimension of Cl"n"(C) is 2"n". What we have then is the following table: The even subalgebra Cl(C) of Cl"n"(C) is (non-canonically) isomorphic to Cl"n"−1(C). When "n" is even, the even subalgebra can be identified with the block diagonal matrices (when partitioned into 2 × 2 block matrices). When "n" is odd, the even subalgebra consists of those elements of End(C"N") ⊕ End(C"N") for which the two pieces are identical. Picking either piece then gives an isomorphism with Cl"n"(C) ≅ End(C"N"). Complex spinors in even dimension. The classification allows Dirac spinors and Weyl spinors to be defined in even dimension. In even dimension "n", the Clifford algebra Cl"n"(C) is isomorphic to End(C"N"), which has its fundamental representation on Δ"n" := C"N". A complex Dirac spinor is an element of Δ"n". The term "complex" signifies that it is the element of a representation space of a complex Clifford algebra, rather than that is an element of a complex vector space. The even subalgebra Cl"n"0(C) is isomorphic to End(C"N"/2) ⊕ End(C"N"/2) and therefore decomposes to the direct sum of two irreducible representation spaces Δ ⊕ Δ, each isomorphic to C"N"/2. A left-handed (respectively right-handed) complex Weyl spinor is an element of Δ (respectively, Δ). Proof of the structure theorem for complex Clifford algebras. The structure theorem is simple to prove inductively. For base cases, Cl0(C) is simply C ≅ End(C), while Cl1(C) is given by the algebra C ⊕ C ≅ End(C) ⊕ End(C) by defining the only gamma matrix as "γ"1 = (1, −1). We will also need Cl2(C) ≅ End(C2). The Pauli matrices can be used to generate the Clifford algebra by setting "γ"1 = "σ"1, "γ"2 = "σ"2. The span of the generated algebra is End(C2). The proof is completed by constructing an isomorphism Cl"n"+2(C) ≅ Cl"n"(C) ⊗ Cl2(C). Let "γ""a" generate Cl"n"(C), and formula_6 generate Cl2(C). Let "ω" = "i"formula_7 be the chirality element satisfying "ω"2 = 1 and formula_8"ω" + "ω"formula_8 = 0. These can be used to construct gamma matrices for Cl"n"+2(C) by setting Γ"a" = "γ""a" ⊗ "ω" for 1 ≤ "a" ≤ "n" and Γ"a" = 1 ⊗ formula_9 for "a" = "n" + 1, "n" + 2. These can be shown to satisfy the required Clifford algebra and by the universal property of Clifford algebras, there is an isomorphism Cl"n"(C) ⊗ Cl2(C) → Cl"n"+2(C). Finally, in the even case this means by the induction hypothesis Cl"n"+2(C) ≅ End(C"N") ⊗ End(C2) ≅ End(C"N"+1). The odd case follows similarly as the tensor product distributes over direct sums. Real case. The real case is significantly more complicated, exhibiting a periodicity of 8 rather than 2, and there is a 2-parameter family of Clifford algebras. Classification of quadratic forms. Firstly, there are non-isomorphic quadratic forms of a given degree, classified by signature. Every nondegenerate quadratic form on a real vector space is equivalent to the standard diagonal form: formula_10 where "n" = "p" + "q" is the dimension of the vector space. The pair of integers ("p", "q") is called the signature of the quadratic form. The real vector space with this quadratic form is often denoted R"p","q". The Clifford algebra on R"p","q" is denoted Cl"p","q"(R). A standard orthonormal basis {"e""i"} for R"p","q" consists of "n" = "p" + "q" mutually orthogonal vectors, "p" of which have norm +1 and "q" of which have norm −1. Unit pseudoscalar. 
Given a standard basis {"e""i"} as defined in the previous subsection, the unit pseudoscalar in Cl"p","q"(R) is defined as formula_11 This is both a Coxeter element of sorts (product of reflections) and a longest element of a Coxeter group in the Bruhat order; this is an analogy. It corresponds to and generalizes a volume form (in the exterior algebra; for the trivial quadratic form, the unit pseudoscalar is a volume form), and lifts reflection through the origin (meaning that the image of the unit pseudoscalar is reflection through the origin, in the orthogonal group). To compute the square "ω"2 = ("e"1"e"2⋅⋅⋅"e""n")("e"1"e"2⋅⋅⋅"e""n"), one can either reverse the order of the second group, yielding sgn("σ")"e"1"e"2⋅⋅⋅"e""n""e""n"⋅⋅⋅"e"2"e"1, or apply a perfect shuffle, yielding sgn("σ")"e"1"e"1"e"2"e"2⋅⋅⋅"e""n""e""n". These both have sign (−1)⌊"n"/2⌋ = (−1)"n"("n"−1)/2, which is 4-periodic (proof), and combined with "e""i""e""i" = &amp;pm;1, this shows that the square of "ω" is given by formula_12 Note that, unlike the complex case, it is not in general possible to find a pseudoscalar that squares to +1. Center. If "n" (equivalently, "p" − "q") is even, the algebra Cl"p","q"(R) is central simple and so isomorphic to a matrix algebra over R or H by the Artin–Wedderburn theorem. If "n" (equivalently, "p" − "q") is odd then the algebra is no longer central simple but rather has a center which includes the pseudoscalars as well as the scalars. If "n" is odd and "ω"2 = +1 (equivalently, if "p" − "q" ≡ 1 (mod 4)) then, just as in the complex case, the algebra Cl"p","q"(R) decomposes into a direct sum of isomorphic algebras formula_13 each of which is central simple and so isomorphic to matrix algebra over R or H. If "n" is odd and "ω"2 = −1 (equivalently, if "p" − "q" ≡ −1 (mod 4)) then the center of Cl"p","q"(R) is isomorphic to C and can be considered as a "complex" algebra. As a complex algebra, it is central simple and so isomorphic to a matrix algebra over C. Classification. All told there are three properties which determine the class of the algebra Cl"p","q"(R): Each of these properties depends only on the signature "p" − "q" modulo 8. The complete classification table is given below. The size of the matrices is determined by the requirement that Cl"p","q"(R) have dimension 2"p"+"q". It may be seen that of all matrix ring types mentioned, there is only one type shared by complex and real algebras: the type M2"m"(C). For example, Cl2(C) and Cl3,0(R) are both determined to be M2(C). It is important to note that there is a difference in the classifying isomorphisms used. Since the Cl2(C) is algebra isomorphic via a C-linear map (which is necessarily R-linear), and Cl3,0(R) is algebra isomorphic via an R-linear map, Cl2(C) and Cl3,0(R) are R-algebra isomorphic. A table of this classification for "p" + "q" ≤ 8 follows. Here "p" + "q" runs vertically and "p" − "q" runs horizontally (e.g. the algebra Cl1,3(R) ≅ M2(H) is found in row 4, column −2). Symmetries. There is a tangled web of symmetries and relationships in the above table. formula_14 Going over 4 spots in any row yields an identical algebra. From these Bott periodicity follows: formula_15 If the signature satisfies "p" − "q" ≡ 1 (mod 4) then formula_16 Thus if the signature satisfies "p" − "q" ≡ 1 (mod 4), formula_17 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
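The dependence of Cl"p","q"(R) on "p" − "q" modulo 8 can be packaged in a few lines of code. The sketch below reproduces the standard eightfold classification (type of division ring, with the matrix size fixed by requiring the total real dimension to be 2"p"+"q"); it is written for illustration here rather than taken from any library, and the string encoding of the answer is an arbitrary choice.

```python
# Sketch: the real Clifford algebra Cl_{p,q}(R) as a matrix algebra,
# determined by p - q mod 8 together with dim = 2^(p+q).
# Illustrative only; follows the standard eightfold (Bott) periodicity.

def clifford_type(p: int, q: int) -> str:
    n = p + q
    s = (p - q) % 8
    if s in (0, 2):              # central simple over R
        return f"M_{2**(n//2)}(R)"
    if s == 1:                   # n odd, omega^2 = +1: splits over R
        m = 2**((n - 1)//2)
        return f"M_{m}(R) + M_{m}(R)"
    if s in (3, 7):              # n odd, omega^2 = -1: center is C
        return f"M_{2**((n - 1)//2)}(C)"
    if s in (4, 6):              # central simple over H
        return f"M_{2**((n - 2)//2)}(H)"
    m = 2**((n - 3)//2)          # s == 5: n odd, omega^2 = +1: splits over H
    return f"M_{m}(H) + M_{m}(H)"

print(clifford_type(2, 0))   # M_2(R) -- same ring type as Cl_{1,1}(R)
print(clifford_type(3, 0))   # M_2(C) -- same ring type as Cl_2(C)
print(clifford_type(1, 3))   # M_2(H) -- the example cited in the text
print(clifford_type(0, 3))   # M_1(H) + M_1(H), i.e. H ⊕ H
```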
[ { "math_id": 0, "text": "v^2 = Q(v)1" }, { "math_id": 1, "text": "Q(u) = u_1^2 + u_2^2 + \\cdots + u_n^2 ," }, { "math_id": 2, "text": "P_{\\pm} = \\frac{1}{2}(1\\pm\\omega)." }, { "math_id": 3, "text": "\\mathrm{Cl}_n(\\mathbf{C}) = \\mathrm{Cl}_n^+(\\mathbf{C}) \\oplus \\mathrm{Cl}_n^-(\\mathbf{C})," }, { "math_id": 4, "text": "\\mathrm{Cl}_n^\\pm(\\mathbf{C}) = P_\\pm \\mathrm{Cl}_n(\\mathbf{C})." }, { "math_id": 5, "text": "\\alpha\\left(\\mathrm{Cl}_n^\\pm(\\mathbf{C})\\right) = \\mathrm{Cl}_n^\\mp(\\mathbf{C}) ," }, { "math_id": 6, "text": "\\tilde \\gamma_a" }, { "math_id": 7, "text": "\\tilde\\gamma_1 \\tilde\\gamma_2" }, { "math_id": 8, "text": "\\tilde\\gamma_a" }, { "math_id": 9, "text": "\\tilde \\gamma_{a - n}" }, { "math_id": 10, "text": "Q(u) = u_1^2 + \\cdots + u_p^2 - u_{p+1}^2 - \\cdots - u_{p+q}^2" }, { "math_id": 11, "text": "\\omega = e_1e_2\\cdots e_n." }, { "math_id": 12, "text": "\\omega^2 = (-1)^{\\frac{n(n-1)}{2}}(-1)^q = (-1)^{\\frac{(p-q)(p-q-1)}{2}} = \\begin{cases}+1 & p-q \\equiv 0,1 \\mod{4}\\\\ -1 & p-q \\equiv 2,3 \\mod{4}.\\end{cases}" }, { "math_id": 13, "text": "\\operatorname{Cl}_{p,q}(\\mathbf{R}) = \\operatorname{Cl}_{p,q}^{+}(\\mathbf{R})\\oplus \\operatorname{Cl}_{p,q}^{-}(\\mathbf{R}) ," }, { "math_id": 14, "text": "\\begin{align}\n \\operatorname{Cl}_{p+1,q+1}(\\mathbf{R}) &= \\mathrm{M}_2(\\operatorname{Cl}_{p,q}(\\mathbf{R})) \\\\\n \\operatorname{Cl}_{p+4,q}(\\mathbf{R}) &= \\operatorname{Cl}_{p,q+4}(\\mathbf{R})\n\\end{align}" }, { "math_id": 15, "text": "\\operatorname{Cl}_{p+8,q}(\\mathbf{R}) = \\operatorname{Cl}_{p+4,q+4}(\\mathbf{R}) = M_{2^4}(\\operatorname{Cl}_{p,q}(\\mathbf{R})) ." }, { "math_id": 16, "text": "\\operatorname{Cl}_{p+k,q}(\\mathbf{R}) = \\operatorname{Cl}_{p,q+k}(\\mathbf{R}) ." }, { "math_id": 17, "text": "\\operatorname{Cl}_{p+k,q}(\\mathbf{R}) = \\operatorname{Cl}_{p,q+k}(\\mathbf{R}) = \\operatorname{Cl}_{p-k+k,q+k}(\\mathbf{R}) = \\mathrm{M}_{2^k}(\\operatorname{Cl}_{p-k,q}(\\mathbf{R})) = \\mathrm{M}_{2^k}(\\operatorname{Cl}_{p,q-k}(\\mathbf{R})) ." } ]
https://en.wikipedia.org/wiki?curid=1105488
11055227
Admissible representation
Class of representations In mathematics, admissible representations are a well-behaved class of representations used in the representation theory of reductive Lie groups and locally compact totally disconnected groups. They were introduced by Harish-Chandra. Real or complex reductive Lie groups. Let "G" be a connected reductive (real or complex) Lie group. Let "K" be a maximal compact subgroup. A continuous representation (π, "V") of "G" on a complex Hilbert space "V" is called admissible if π restricted to "K" is unitary and each irreducible unitary representation of "K" occurs in it with finite multiplicity. The prototypical example is that of an irreducible unitary representation of "G". An admissible representation π induces a formula_0-module which is easier to deal with as it is an algebraic object. Two admissible representations are said to be infinitesimally equivalent if their associated formula_0-modules are isomorphic. Though for general admissible representations, this notion is different than the usual equivalence, it is an important result that the two notions of equivalence agree for unitary (admissible) representations. Additionally, there is a notion of unitarity of formula_0-modules. This reduces the study of the equivalence classes of irreducible unitary representations of "G" to the study of infinitesimal equivalence classes of admissible representations and the determination of which of these classes are infinitesimally unitary. The problem of parameterizing the infinitesimal equivalence classes of admissible representations was fully solved by Robert Langlands and is called the Langlands classification. Totally disconnected groups. Let "G" be a locally compact totally disconnected group (such as a reductive algebraic group over a nonarchimedean local field or over the finite adeles of a global field). A representation (π, "V") of "G" on a complex vector space "V" is called smooth if the subgroup of "G" fixing any vector of "V" is open. If, in addition, the space of vectors fixed by any compact open subgroup is finite dimensional then π is called admissible. Admissible representations of "p"-adic groups admit more algebraic description through the action of the Hecke algebra of locally constant functions on "G". Deep studies of admissible representations of "p"-adic reductive groups were undertaken by Casselman and by Bernstein and Zelevinsky in the 1970s. Progress was made more recently by Howe, Moy, Gopal Prasad and Bushnell and Kutzko, who developed a "theory of types" and classified the admissible dual (i.e. the set of equivalence classes of irreducible admissible representations) in many cases. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(\\mathfrak{g},K)" } ]
https://en.wikipedia.org/wiki?curid=11055227
11056409
Kelvin functions
In applied mathematics, the Kelvin functions ber"ν"("x") and bei"ν"("x") are the real and imaginary parts, respectively, of formula_0 where "x" is real, and "Jν"("z"), is the "ν"th order Bessel function of the first kind. Similarly, the functions kerν("x") and keiν("x") are the real and imaginary parts, respectively, of formula_1 where "Kν"("z") is the "ν"th order modified Bessel function of the second kind. These functions are named after William Thomson, 1st Baron Kelvin. While the Kelvin functions are defined as the real and imaginary parts of Bessel functions with "x" taken to be real, the functions can be analytically continued for complex arguments "xe""iφ", 0 ≤ "φ" &lt; 2"π". With the exception of ber"n"("x") and bei"n"("x") for integral "n", the Kelvin functions have a branch point at "x" = 0. Below, Γ("z") is the gamma function and "ψ"("z") is the digamma function. ber("x"). For integers "n", ber"n"("x") has the series expansion formula_2 where Γ("z") is the gamma function. The special case ber0("x"), commonly denoted as just ber("x"), has the series expansion formula_3 and asymptotic series formula_4, where formula_5 formula_6 formula_7 bei("x"). For integers "n", bei"n"("x") has the series expansion formula_8 The special case bei0("x"), commonly denoted as just bei("x"), has the series expansion formula_9 and asymptotic series formula_10 where α, formula_11, and formula_12 are defined as for ber("x"). ker("x"). For integers "n", ker"n"("x") has the (complicated) series expansion formula_13 The special case ker0("x"), commonly denoted as just ker("x"), has the series expansion formula_14 and the asymptotic series formula_15 where formula_16 formula_17 formula_18 kei("x"). For integer "n", kei"n"("x") has the series expansion formula_19 The special case kei0("x"), commonly denoted as just kei("x"), has the series expansion formula_20 and the asymptotic series formula_21 where "β", "f"2("x"), and "g"2("x") are defined as for ker("x").
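For small and moderate arguments the defining series above are straightforward to evaluate. The sketch below implements ber("x") and bei("x") directly from those expansions in plain Python; the truncation at 25 terms is an arbitrary choice, and the values printed in the comments are for comparison only.

```python
# Sketch: ber(x) and bei(x) from the series expansions quoted above.
# Plain-Python illustration; the truncation at 25 terms is an arbitrary
# choice that is more than sufficient for moderate |x|.
from math import factorial

def ber(x: float, terms: int = 25) -> float:
    # ber(x) = 1 + sum_{k>=1} (-1)^k / ((2k)!)^2 * (x/2)^(4k)
    return 1.0 + sum((-1)**k / factorial(2 * k)**2 * (x / 2)**(4 * k)
                     for k in range(1, terms))

def bei(x: float, terms: int = 25) -> float:
    # bei(x) = sum_{k>=0} (-1)^k / ((2k+1)!)^2 * (x/2)^(4k+2)
    return sum((-1)**k / factorial(2 * k + 1)**2 * (x / 2)**(4 * k + 2)
               for k in range(terms))

# ber(x) + i*bei(x) equals J_0(x * exp(3*pi*i/4)); spot values at x = 1:
print(ber(1.0))   # ≈ 0.9843818
print(bei(1.0))   # ≈ 0.2495660
```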
[ { "math_id": 0, "text": "J_\\nu \\left (x e^{\\frac{3 \\pi i}{4}} \\right ),\\," }, { "math_id": 1, "text": "K_\\nu \\left (x e^{\\frac{\\pi i}{4}} \\right ),\\," }, { "math_id": 2, "text": "\\mathrm{ber}_n(x) = \\left(\\frac{x}{2}\\right)^n \\sum_{k \\geq 0} \\frac{\\cos\\left[\\left(\\frac{3n}{4} + \\frac{k}{2}\\right)\\pi\\right]}{k! \\Gamma(n + k + 1)} \\left(\\frac{x^2}{4}\\right)^k ," }, { "math_id": 3, "text": "\\mathrm{ber}(x) = 1 + \\sum_{k \\geq 1} \\frac{(-1)^k}{[(2k)!]^2} \\left(\\frac{x}{2} \\right )^{4k}" }, { "math_id": 4, "text": "\\mathrm{ber}(x) \\sim \\frac{e^{\\frac{x}{\\sqrt{2}}}}{\\sqrt{2 \\pi x}} \\left (f_1(x) \\cos \\alpha + g_1(x) \\sin \\alpha \\right ) - \\frac{\\mathrm{kei}(x)}{\\pi}" }, { "math_id": 5, "text": "\\alpha = \\frac{x}{\\sqrt{2}} - \\frac{\\pi}{8}," }, { "math_id": 6, "text": "f_1(x) = 1 + \\sum_{k \\geq 1} \\frac{\\cos(k \\pi / 4)}{k! (8x)^k} \\prod_{l = 1}^k (2l - 1)^2" }, { "math_id": 7, "text": "g_1(x) = \\sum_{k \\geq 1} \\frac{\\sin(k \\pi / 4)}{k! (8x)^k} \\prod_{l = 1}^k (2l - 1)^2 ." }, { "math_id": 8, "text": "\\mathrm{bei}_n(x) = \\left(\\frac{x}{2}\\right)^n \\sum_{k \\geq 0} \\frac{\\sin\\left[\\left(\\frac{3n}{4} + \\frac{k}{2}\\right)\\pi\\right]}{k! \\Gamma(n + k + 1)} \\left(\\frac{x^2}{4}\\right)^k ." }, { "math_id": 9, "text": "\\mathrm{bei}(x) = \\sum_{k \\geq 0} \\frac{(-1)^k }{[(2k+1)!]^2} \\left(\\frac{x}{2} \\right )^{4k+2}" }, { "math_id": 10, "text": "\\mathrm{bei}(x) \\sim \\frac{e^{\\frac{x}{\\sqrt{2}}}}{\\sqrt{2 \\pi x}} [f_1(x) \\sin \\alpha - g_1(x) \\cos \\alpha] - \\frac{\\mathrm{ker}(x)}{\\pi}," }, { "math_id": 11, "text": "f_1(x)" }, { "math_id": 12, "text": "g_1(x)" }, { "math_id": 13, "text": "\\begin{align}\n&\\mathrm{ker}_n(x) = - \\ln\\left(\\frac{x}{2}\\right) \\mathrm{ber}_n(x) + \\frac{\\pi}{4}\\mathrm{bei}_n(x) \\\\\n&+ \\frac{1}{2} \\left(\\frac{x}{2}\\right)^{-n} \\sum_{k=0}^{n-1} \\cos\\left[\\left(\\frac{3n}{4} + \\frac{k}{2}\\right)\\pi\\right] \\frac{(n-k-1)!}{k!} \\left(\\frac{x^2}{4}\\right)^k \\\\\n&+ \\frac{1}{2} \\left(\\frac{x}{2}\\right)^n \\sum_{k \\geq 0} \\cos\\left[\\left(\\frac{3n}{4} + \\frac{k}{2}\\right)\\pi\\right] \\frac{\\psi(k+1) + \\psi(n + k + 1)}{k! (n+k)!} \\left(\\frac{x^2}{4}\\right)^k .\n\\end{align}" }, { "math_id": 14, "text": "\\mathrm{ker}(x) = -\\ln\\left(\\frac{x}{2}\\right) \\mathrm{ber}(x) + \\frac{\\pi}{4}\\mathrm{bei}(x) + \\sum_{k \\geq 0} (-1)^k \\frac{\\psi(2k + 1)}{[(2k)!]^2} \\left(\\frac{x^2}{4}\\right)^{2k}" }, { "math_id": 15, "text": "\\mathrm{ker}(x) \\sim \\sqrt{\\frac{\\pi}{2x}} e^{-\\frac{x}{\\sqrt{2}}} [f_2(x) \\cos \\beta + g_2(x) \\sin \\beta]," }, { "math_id": 16, "text": "\\beta = \\frac{x}{\\sqrt{2}} + \\frac{\\pi}{8}," }, { "math_id": 17, "text": "f_2(x) = 1 + \\sum_{k \\geq 1} (-1)^k \\frac{\\cos(k \\pi / 4)}{k! (8x)^k} \\prod_{l = 1}^k (2l - 1)^2" }, { "math_id": 18, "text": "g_2(x) = \\sum_{k \\geq 1} (-1)^k \\frac{\\sin(k \\pi / 4)}{k! (8x)^k} \\prod_{l = 1}^k (2l - 1)^2." }, { "math_id": 19, "text": "\\begin{align}\n&\\mathrm{kei}_n(x) = - \\ln\\left(\\frac{x}{2}\\right) \\mathrm{bei}_n(x) - \\frac{\\pi}{4}\\mathrm{ber}_n(x) \\\\\n&-\\frac{1}{2} \\left(\\frac{x}{2}\\right)^{-n} \\sum_{k=0}^{n-1} \\sin\\left[\\left(\\frac{3n}{4} + \\frac{k}{2}\\right)\\pi\\right] \\frac{(n-k-1)!}{k!} \\left(\\frac{x^2}{4}\\right)^k \\\\\n&+ \\frac{1}{2} \\left(\\frac{x}{2}\\right)^n \\sum_{k \\geq 0} \\sin\\left[\\left(\\frac{3n}{4} + \\frac{k}{2}\\right)\\pi\\right] \\frac{\\psi(k+1) + \\psi(n + k + 1)}{k! 
(n+k)!} \\left(\\frac{x^2}{4}\\right)^k .\n\\end{align}" }, { "math_id": 20, "text": "\\mathrm{kei}(x) = -\\ln\\left(\\frac{x}{2}\\right) \\mathrm{bei}(x) - \\frac{\\pi}{4}\\mathrm{ber}(x) + \\sum_{k \\geq 0} (-1)^k \\frac{\\psi(2k + 2)}{[(2k+1)!]^2} \\left(\\frac{x^2}{4}\\right)^{2k+1}" }, { "math_id": 21, "text": "\\mathrm{kei}(x) \\sim -\\sqrt{\\frac{\\pi}{2x}} e^{-\\frac{x}{\\sqrt{2}}} [f_2(x) \\sin \\beta + g_2(x) \\cos \\beta]," } ]
https://en.wikipedia.org/wiki?curid=11056409
1105826
Kirby calculus
Describes how distinct surgery presentations of a given 3-manifold are related In mathematics, the Kirby calculus in geometric topology, named after Robion Kirby, is a method for modifying framed links in the 3-sphere using a finite set of moves, the Kirby moves. Using four-dimensional Cerf theory, he proved that if "M" and "N" are 3-manifolds resulting from Dehn surgery on framed links "L" and "J" respectively, then they are homeomorphic if and only if "L" and "J" are related by a sequence of Kirby moves. According to the Lickorish–Wallace theorem, any closed orientable 3-manifold is obtained by such surgery on some link in the 3-sphere. Some ambiguity exists in the literature on the precise use of the term "Kirby moves". Different presentations of "Kirby calculus" have a different set of moves and these are sometimes called Kirby moves. Kirby's original formulation involved two kinds of move, the "blow-up" and the "handle slide"; Roger Fenn and Colin Rourke exhibited an equivalent construction in terms of a single move, the Fenn–Rourke move, which appears in many expositions and extensions of the Kirby calculus. Dale Rolfsen's book, "Knots and Links", from which many topologists have learned the Kirby calculus, describes a set of two moves: 1) delete or add a component with surgery coefficient infinity, and 2) twist along an unknotted component and modify surgery coefficients appropriately (this is called the Rolfsen twist). This allows an extension of the Kirby calculus to rational surgeries. There are also various tricks to modify surgery diagrams. One such useful move is the slam-dunk. An extended set of diagrams and moves is used for describing 4-manifolds. A framed link in the 3-sphere encodes instructions for attaching 2-handles to the 4-ball. (The 3-dimensional boundary of this manifold is the 3-manifold interpretation of the link diagram mentioned above.) 1-handles are denoted either by a pair of balls (to be identified with each other) or by an unknotted circle marked with a dot. The dot indicates that a neighborhood of a standard 2-disk with boundary the dotted circle is to be excised from the interior of the 4-ball. Excising this 2-handle is equivalent to adding a 1-handle; 3-handles and 4-handles are usually not indicated in the diagram. Handle decomposition. Two different smooth handlebody decompositions of a smooth 4-manifold are related by a finite sequence of isotopies of the attaching maps, and the creation/cancellation of handle pairs.
[ { "math_id": 0, "text": "\\R^4" } ]
https://en.wikipedia.org/wiki?curid=1105826
1105886
Selection (genetic algorithm)
Selection is the stage of a genetic algorithm or more general evolutionary algorithm in which individual genomes are chosen from a population for later breeding (e.g., using the crossover operator). Selection mechanisms are also used to choose candidate solutions (individuals) for the next generation. Retaining the best individuals of a generation unchanged in the next generation is called "elitism" or "elitist selection". It is a successful (slight) variant of the general process of constructing a new population. A selection procedure for breeding used early on may be implemented as follows: the fitness function is evaluated for each individual and the fitness values are normalized so that they sum to 1; the normalized values are accumulated in order of decreasing fitness; a random number "R" between 0 and 1 is drawn; and the selected individual is the first one whose accumulated normalized fitness is greater than or equal to "R". For many problems this algorithm might be computationally demanding. A simpler and faster alternative uses so-called stochastic acceptance. If the procedure is repeated until there are enough selected individuals, this selection method is called fitness proportionate selection or "roulette-wheel selection". If instead of a single pointer spun multiple times, there are multiple, equally spaced pointers on a wheel that is spun once, it is called stochastic universal sampling. Repeatedly selecting the best individual of a randomly chosen subset is tournament selection. Taking the best half, third or another proportion of the individuals is truncation selection. There are other selection algorithms that do not consider all individuals for selection, but only those with a fitness value that is higher than a given (arbitrary) constant. Other algorithms select from a restricted pool where only a certain percentage of the individuals are allowed, based on fitness value. Methods of selection (evolutionary algorithm). The listed methods differ mainly in the selection pressure, which can be set by a strategy parameter in the rank selection described below. The higher the selection pressure, the faster a population converges towards a certain solution, and the search space may not be explored sufficiently. For more selection methods and further detail, see the references. Roulette wheel selection. In roulette wheel selection, the probability of choosing an individual for breeding of the next generation is proportional to its fitness: the better the fitness, the higher the chance for that individual to be chosen. Choosing individuals can be depicted as spinning a roulette wheel that has as many pockets as there are individuals in the current generation, with sizes depending on their probability. The probability of choosing individual formula_0 is equal to formula_1, where formula_2 is the fitness of formula_0 and formula_3 is the size of the current generation (note that in this method one individual can be drawn multiple times). Rank selection. In rank selection, the selection probability does not depend directly on the fitness, but on the fitness rank of an individual within the population. This puts large fitness differences into perspective; moreover, the exact fitness values themselves do not have to be available, but only a sorting of the individuals according to quality. Linear ranking, which goes back to Baker, is often used. It allows the selection pressure to be set by the parameter formula_4, which can take values between 1.0 (no selection pressure) and 2.0 (high selection pressure). The probability formula_5 for rank positions formula_6 is obtained as follows: formula_7 In addition to the adjustable selection pressure, a further advantage of rank-based selection is that it also gives worse individuals a chance to reproduce and thus to improve.
This can be particularly helpful in applications with restrictions, since it facilitates the overcoming of a restriction in several intermediate steps, i.e. via a sequence of several individuals rated poorly due to restriction violations. Steady state selection. In every generation few chromosomes are selected (good - with high fitness) for creating a new offspring. Then some (bad - with low fitness) chromosomes are removed and the new offspring is placed in their place. The rest of population survives to new generation. Tournament selection. Tournament selection is a method of choosing the individual from the set of individuals. The winner of each tournament is selected to perform crossover. Elitist selection. Often to get better results, strategies with partial reproduction are used. One of them is elitism, in which a small portion of the best individuals from the last generation is carried over (without any changes) to the next one. Boltzmann selection. In Boltzmann selection, a continuously varying temperature controls the rate of selection according to a preset schedule. The temperature starts out high, which means that the selection pressure is low. The temperature is gradually lowered, which gradually increases the selection pressure, thereby allowing the GA to narrow in more closely to the best part of the search space while maintaining the appropriate degree of diversity. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
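The schemes described above translate directly into code. The sketch below gives minimal Python versions of roulette-wheel selection, tournament selection and the linear-ranking probabilities; it is an illustration written for this article (the helper names and the toy population are invented), not code from any particular GA library.

```python
# Minimal sketches of three of the selection schemes described above.
# Illustrative only; a population is a list of (individual, fitness) pairs
# with non-negative fitness values.
import random

def roulette_wheel(population, rng=random):
    """Fitness-proportionate selection: pick one individual with
    probability f_i / sum_j f_j (one spin of the wheel)."""
    total = sum(fitness for _, fitness in population)
    pointer = rng.uniform(0.0, total)
    running = 0.0
    for individual, fitness in population:
        running += fitness
        if running >= pointer:
            return individual
    return population[-1][0]          # guard against floating-point rounding

def tournament(population, k=3, rng=random):
    """Pick the fittest of k randomly chosen individuals."""
    contestants = rng.sample(population, k)
    return max(contestants, key=lambda pair: pair[1])[0]

def linear_rank_probabilities(n, sp=1.5):
    """Selection probabilities P(R_i) from the linear-ranking formula above,
    for rank positions i = 1..n; position 1 receives the largest share."""
    return [(sp - (2 * sp - 2) * (i - 1) / (n - 1)) / n for i in range(1, n + 1)]

pop = [("A", 1.0), ("B", 4.0), ("C", 5.0)]                 # toy population
parents = [roulette_wheel(pop) for _ in range(4)]           # breeding pool
print(parents, tournament(pop), linear_rank_probabilities(5))
```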
[ { "math_id": 0, "text": "i" }, { "math_id": 1, "text": "p_i = \\frac{f_i}{\\Sigma_{j=1}^{N} f_j}" }, { "math_id": 2, "text": "f_i" }, { "math_id": 3, "text": "N" }, { "math_id": 4, "text": "sp " }, { "math_id": 5, "text": "P " }, { "math_id": 6, "text": "R_i " }, { "math_id": 7, "text": "P(R_i) =\\frac{1}{n}\\Bigl(sp-(2sp-2)\\frac{i-1}{n-1}\\Bigr) \\quad \\quad 1\\leq i \\leq n ,\\quad 1 \\leq sp \\leq 2 \\quad \\mathsf{with} \\quad P(R_i) \\ge 0, \\quad \\sum_{i=1}^nP(R_i)=1 " } ]
https://en.wikipedia.org/wiki?curid=1105886
1105907
Spark-gap transmitter
Type of radio transmitter A spark-gap transmitter is an obsolete type of radio transmitter which generates radio waves by means of an electric spark. Spark-gap transmitters were the first type of radio transmitter, and were the main type used during the wireless telegraphy or "spark" era, the first three decades of radio, from 1887 to the end of World War I. German physicist Heinrich Hertz built the first experimental spark-gap transmitters in 1887, with which he proved the existence of radio waves and studied their properties. A fundamental limitation of spark-gap transmitters is that they generate a series of brief transient pulses of radio waves called damped waves; they are unable to produce the continuous waves used to carry audio (sound) in modern AM or FM radio transmission. So spark-gap transmitters could not transmit audio, and instead transmitted information by radiotelegraphy; the operator switched the transmitter on and off with a telegraph key, creating pulses of radio waves to spell out text messages in Morse code. The first practical spark gap transmitters and receivers for radiotelegraphy communication were developed by Guglielmo Marconi around 1896. One of the first uses for spark-gap transmitters was on ships, to communicate with shore and broadcast a distress call if the ship was sinking. They played a crucial role in maritime rescues such as the 1912 RMS "Titanic" disaster. After World War I, vacuum tube transmitters were developed, which were less expensive and produced continuous waves which had a greater range, produced less interference, and could also carry audio, making spark transmitters obsolete by 1920. The radio signals produced by spark-gap transmitters are electrically "noisy"; they have a wide bandwidth, creating radio frequency interference (RFI) that can disrupt other radio transmissions. This type of radio emission has been prohibited by international law since 1934. Theory of operation. Electromagnetic waves are radiated by electric charges when they are accelerated. Radio waves, electromagnetic waves of radio frequency, can be generated by time-varying electric currents, consisting of electrons flowing through a conductor which suddenly change their velocity, thus accelerating. An electrically charged capacitance discharged through an electric spark across a spark gap between two conductors was the first device known which could generate radio waves. The spark itself doesn't produce the radio waves, it merely serves as a fast acting switch to excite resonant radio frequency oscillating electric currents in the conductors of the attached circuit. The conductors radiate the energy in this oscillating current as radio waves. Due to the inherent inductance of circuit conductors, the discharge of a capacitor through a low enough resistance (such as a spark) is oscillatory; the charge flows rapidly back and forth through the spark gap for a brief period, charging the conductors on each side alternately positive and negative, until the oscillations die away. A practical spark gap transmitter consists of these parts: Operation cycle. The transmitter works in a rapid repeating cycle in which the capacitor is charged to a high voltage by the transformer and discharged through the coil by a spark across the spark gap. The impulsive spark excites the resonant circuit to "ring" like a bell, producing a brief oscillating current which is radiated as electromagnetic waves by the antenna. 
The transmitter repeats this cycle at a rapid rate, so the spark appeared continuous, and the radio signal sounded like a whine or buzz in a radio receiver. The cycle is very rapid, taking less than a millisecond. With each spark, this cycle produces a radio signal consisting of an oscillating sinusoidal wave that increases rapidly to a high amplitude and decreases exponentially to zero, called a damped wave. The frequency formula_0 of the oscillations, which is the frequency of the emitted radio waves, is equal to the resonant frequency of the resonant circuit, determined by the capacitance formula_1 of the capacitor and the inductance formula_2 of the coil: formula_3 The transmitter repeats this cycle rapidly, so the output is a repeating string of damped waves. This is equivalent to a radio signal amplitude modulated with a steady frequency, so it could be demodulated in a radio receiver by a rectifying AM detector, such as the crystal detector or Fleming valve used during the wireless telegraphy era. The frequency of repetition (spark rate) is in the audio range, typically 50 to 1000 sparks per second, so in a receiver's earphones the signal sounds like a steady tone, whine, or buzz. In order to transmit information with this signal, the operator turns the transmitter on and off rapidly by tapping on a switch called a telegraph key in the primary circuit of the transformer, producing sequences of short (dot) and long (dash) strings of damped waves, to spell out messages in Morse code. As long as the key is pressed the spark gap fires repetitively, creating a string of pulses of radio waves, so in a receiver the keypress sounds like a buzz; the entire Morse code message sounds like a sequence of buzzes separated by pauses. In low-power transmitters the key directly breaks the primary circuit of the supply transformer, while in high-power transmitters the key operates a heavy duty relay that breaks the primary circuit. Charging circuit and spark rate. The circuit which charges the capacitors, along with the spark gap itself, determines the "spark rate" of the transmitter, the number of sparks and resulting damped wave pulses it produces per second, which determines the tone of the signal heard in the receiver. The spark rate should not be confused with the "frequency" of the transmitter, which is the number of sinusoidal oscillations per second in each damped wave. Since the transmitter produces one pulse of radio waves per spark, the output power of the transmitter was proportional to the spark rate, so higher rates were favored. Spark transmitters generally used one of three types of power circuits: Induction coil. An induction coil (Ruhmkorff coil) was used in low-power transmitters, usually less than 500 watts, often battery-powered. An induction coil is a type of transformer powered by DC, in which a vibrating arm switch contact on the coil called an interrupter repeatedly breaks the circuit that provides current to the primary winding, causing the coil to generate pulses of high voltage. When the primary current to the coil is turned on, the primary winding creates a magnetic field in the iron core which pulls the springy interrupter arm away from its contact, opening the switch and cutting off the primary current. Then the magnetic field collapses, creating a pulse of high voltage in the secondary winding, and the interrupter arm springs back to close the contact again, and the cycle repeats. 
Each pulse of high voltage charged up the capacitor until the spark gap fired, resulting in one spark per pulse. Interrupters were limited to low spark rates of 20–100 Hz, sounding like a low buzz in the receiver. In powerful induction coil transmitters, instead of a vibrating interrupter, a mercury turbine interrupter was used. This could break the current at rates up to several thousand hertz, and the rate could be adjusted to produce the best tone. AC transformer. In higher power transmitters powered by AC, a transformer steps the input voltage up to the high voltage needed. The sinusoidal voltage from the transformer is applied directly to the capacitor, so the voltage on the capacitor varies from a high positive voltage, to zero, to a high negative voltage. The spark gap is adjusted so sparks only occur near the maximum voltage, at peaks of the AC sine wave, when the capacitor was fully charged. Since the AC sine wave has two peaks per cycle, ideally two sparks occurred during each cycle, so the spark rate was equal to twice the frequency of the AC power (often multiple sparks occurred during the peak of each half cycle). The spark rate of transmitters powered by 50 or 60 Hz mains power was thus 100 or 120 Hz. However higher audio frequencies cut through interference better, so in many transmitters the transformer was powered by a motor–alternator set, an electric motor with its shaft turning an alternator, that produced AC at a higher frequency, usually 500 Hz, resulting in a spark rate of 1000 Hz. Quenched spark gap. The speed at which signals may be transmitted is naturally limited by the time taken for the spark to be extinguished. If, as described above, the conductive plasma does not, during the zero points of the alternating current, cool enough to extinguish the spark, a 'persistent spark' is maintained until the stored energy is dissipated, permitting practical operation only up to around 60 signals per second. If active measures are taken to break the arc (either by blowing air through the spark or by lengthening the spark gap), a much shorter "quenched spark" may be obtained. A simple quenched spark system still permits several oscillations of the capacitor circuit in the time taken for the spark to be quenched. With the spark circuit broken, the transmission frequency is solely determined by the antenna resonant circuit, which permits simpler tuning. Rotary spark gap. In a transmitter with a "rotary" spark gap "(below)", the capacitor was charged by AC from a high-voltage transformer as above, and discharged by a spark gap consisting of electrodes spaced around a wheel which was spun by an electric motor, which produced sparks as they passed by a stationary electrode. The spark rate was equal to the rotations per second times the number of spark electrodes on the wheel. It could produce spark rates up to several thousand hertz, and the rate could be adjusted by changing the speed of the motor. The rotation of the wheel was usually synchronized to the AC sine wave so the moving electrode passed by the stationary one at the peak of the sine wave, initiating the spark when the capacitor was fully charged, which produced a musical tone in the receiver. When tuned correctly in this manner, the need for external cooling or quenching airflow was eliminated, as was the loss of power directly from the charging circuit (parallel to the capacitor) through the spark. History. The invention of the radio transmitter resulted from the convergence of two lines of research. 
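The two rates discussed above, the radio frequency set by the resonant circuit and the audio-range spark rate set by the charging circuit, can be illustrated with a short sketch. The component values below are illustrative guesses, not data from any historical transmitter; the relations themselves (f = 1/(2π√(LC)) and, ideally, two sparks per AC cycle) are the ones described in the text.

```python
# Sketch of the two rates described above: the radio frequency set by the
# resonant circuit, f = 1 / (2*pi*sqrt(L*C)), and the audio-range spark rate
# set by the charging circuit. Component values are illustrative only.
import math

def oscillation_frequency(L: float, C: float) -> float:
    """Resonant (carrier) frequency in hertz for inductance L (H)
    and capacitance C (F) of the tank circuit."""
    return 1.0 / (2.0 * math.pi * math.sqrt(L * C))

def spark_rate_from_ac(supply_hz: float, sparks_per_peak: int = 1) -> float:
    """Ideal spark rate for an AC-transformer set: one spark per voltage
    peak, i.e. twice the supply frequency (times any extra sparks per peak)."""
    return 2.0 * supply_hz * sparks_per_peak

# An assumed tank circuit of 100 microhenries and 2 nanofarads:
print(oscillation_frequency(100e-6, 2e-9))   # ≈ 3.6e5 Hz (about 356 kHz)

print(spark_rate_from_ac(50))     # 100 sparks/s from 50 Hz mains
print(spark_rate_from_ac(500))    # 1000 sparks/s from a 500 Hz alternator
```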
One was efforts by inventors to devise a system to transmit telegraph signals without wires. Experiments by a number of inventors had shown that electrical disturbances could be transmitted short distances through the air. However most of these systems worked not by radio waves but by electrostatic induction or electromagnetic induction, which had too short a range to be practical. In 1866 Mahlon Loomis claimed to have transmitted an electrical signal through the atmosphere between two 600 foot wires held aloft by kites on mountaintops 14 miles apart. Thomas Edison had come close to discovering radio in 1875; he had generated and detected radio waves which he called "etheric currents" experimenting with high-voltage spark circuits, but due to lack of time did not pursue the matter. David Edward Hughes in 1879 had also stumbled on radio wave transmission which he received with his carbon microphone detector, however he was persuaded that what he observed was induction. Neither of these individuals are usually credited with the discovery of radio, because they did not understand the significance of their observations and did not publish their work before Hertz. The other was research by physicists to confirm the theory of electromagnetism proposed in 1864 by Scottish physicist James Clerk Maxwell, now called Maxwell's equations. Maxwell's theory predicted that a combination of oscillating electric and magnetic fields could travel through space as an "electromagnetic wave". Maxwell proposed that light consisted of electromagnetic waves of short wavelength, but no one knew how to confirm this, or generate or detect electromagnetic waves of other wavelengths. By 1883 it was theorized that accelerated electric charges could produce electromagnetic waves, and George Fitzgerald had calculated the output power of a loop antenna. Fitzgerald in a brief note published in 1883 suggested that electromagnetic waves could be generated practically by discharging a capacitor rapidly; the method used in spark transmitters, however there is no indication that this inspired other inventors. The division of the history of spark transmitters into the different types below follows the organization of the subject used in many wireless textbooks. Hertzian oscillators. German physicist Heinrich Hertz in 1887 built the first experimental spark gap transmitters during his historic experiments to demonstrate the existence of electromagnetic waves predicted by James Clerk Maxwell in 1864, in which he discovered radio waves, which were called "Hertzian waves" until about 1910. Hertz was inspired to try spark excited circuits by experiments with "Reiss spirals", a pair of flat spiral inductors with their conductors ending in spark gaps. A Leyden jar capacitor discharged through one spiral, would cause sparks in the gap of the other spiral. See circuit diagram. Hertz's transmitters consisted of a dipole antenna made of a pair of collinear metal rods of various lengths with a spark gap "(S)" between their inner ends and metal balls or plates for capacitance "(C)" attached to the outer ends. The two sides of the antenna were connected to an induction coil (Ruhmkorff coil) "(T)" a common lab power source which produced pulses of high voltage, 5 to 30 kV. In addition to radiating the waves, the antenna also acted as a harmonic oscillator (resonator) which generated the oscillating currents. High-voltage pulses from the induction coil "(T)" were applied between the two sides of the antenna. 
Each pulse stored electric charge in the capacitance of the antenna, which was immediately discharged by a spark across the spark gap. The spark excited brief oscillating standing waves of current between the sides of the antenna. The antenna radiated the energy as a momentary pulse of radio waves; a damped wave. The frequency of the waves was equal to the resonant frequency of the antenna, which was determined by its length; it acted as a half-wave dipole, which radiated waves roughly twice the length of the antenna (for example a dipole 1 meter long would generate 150 MHz radio waves). Hertz detected the waves by observing tiny sparks in micrometer spark gaps "(M)" in loops of wire which functioned as resonant receiving antennas. Oliver Lodge was also experimenting with spark oscillators at this time and came close to discovering radio waves before Hertz, but his focus was on waves on wires, not in free space. Hertz and the first generation of physicists who built these "Hertzian oscillators", such as Jagadish Chandra Bose, Lord Rayleigh, George Fitzgerald, Frederick Trouton, Augusto Righi and Oliver Lodge, were mainly interested in radio waves as a scientific phenomenon, and largely failed to foresee its possibilities as a communication technology. Due to the influence of Maxwell's theory, their thinking was dominated by the similarity between radio waves and light waves; they thought of radio waves as an invisible form of light. By analogy with light, they assumed that radio waves only traveled in straight lines, so they thought radio transmission was limited by the visual horizon like existing optical signalling methods such as semaphore, and therefore was not capable of longer distance communication. As late as 1894 Oliver Lodge speculated that the maximum distance Hertzian waves could be transmitted was a half mile. To investigate the similarity between radio waves and light waves, these researchers concentrated on producing short wavelength high-frequency waves with which they could duplicate classic optics experiments with radio waves, using quasioptical components such as prisms and lenses made of paraffin wax, sulfur, and pitch and wire diffraction gratings. Their short antennas generated radio waves in the VHF, UHF, or microwave bands. In his various experiments, Hertz produced waves with frequencies from 50 to 450 MHz, roughly the frequencies used today by broadcast television transmitters. Hertz used them to perform historic experiments demonstrating standing waves, refraction, diffraction, polarization and interference of radio waves. He also measured the speed of radio waves, showing they traveled at the same speed as light. These experiments established that light and radio waves were both forms of Maxwell's electromagnetic waves, differing only in frequency. Augusto Righi and Jagadish Chandra Bose around 1894 generated microwaves of 12 and 60 GHz respectively, using small metal balls as resonator-antennas. The high frequencies produced by Hertzian oscillators could not travel beyond the horizon. The dipole resonators also had low capacitance and couldn't store much charge, limiting their power output. Therefore, these devices were not capable of long distance transmission; their reception range with the primitive receivers employed was typically limited to roughly 100 yards (100 meters). Non-syntonic transmitters. 
&lt;templatestyles src="Template:Blockquote/styles.css" /&gt;I could scarcely conceive it possible that [radio's] application to useful purposes could have escaped the notice of such eminent scientists. Italian radio pioneer Guglielmo Marconi was one of the first people to believe that radio waves could be used for long distance communication, and singlehandedly developed the first practical radiotelegraphy transmitters and receivers, mainly by combining and tinkering with the inventions of others. Starting at age 21 on his family's estate in Italy, between 1894 and 1901 he conducted a long series of experiments to increase the transmission range of Hertz's spark oscillators and receivers. He was unable to communicate beyond a half-mile until 1895, when he discovered that the range of transmission could be increased greatly by replacing one side of the Hertzian dipole antenna in his transmitter and receiver with a connection to Earth and the other side with a long wire antenna suspended high above the ground. These antennas functioned as quarter-wave monopole antennas. The length of the antenna determined the wavelength of the waves produced and thus their frequency. Longer, lower frequency waves have less attenuation with distance. As Marconi tried longer antennas, which radiated lower frequency waves, probably in the MF band around 2 MHz, he found that he could transmit further. Another advantage was that these vertical antennas radiated vertically polarized waves, instead of the horizontally polarized waves produced by Hertz's horizontal antennas. These longer vertically polarized waves could travel beyond the horizon, because they propagated as a ground wave that followed the contour of the Earth. Under certain conditions they could also reach beyond the horizon by reflecting off layers of charged particles (ions) in the upper atmosphere, later called skywave propagation. Marconi did not understand any of this at the time; he simply found empirically that the higher his vertical antenna was suspended, the further it would transmit. After failing to interest the Italian government, in 1896 Marconi moved to England, where William Preece of the British General Post Office funded his experiments. Marconi applied for a patent on his radio system 2 June 1896, often considered the first wireless patent. In May 1897 he transmitted 14 km (8.7 miles), on 27 March 1899 he transmitted across the English Channel, 46 km (28 miles), in fall 1899 he extended the range to 136 km (85 miles), and by January 1901 he had reached 315 km (196 miles). These demonstrations of wireless Morse code communication at increasingly long distances convinced the world that radio, or "wireless telegraphy" as it was called, was not just a scientific curiosity but a commercially useful communication technology. In 1897 Marconi started a company to produce his radio systems, which became the Marconi Wireless Telegraph Company. and radio communication began to be used commercially around 1900. His first large contract in 1901 was with the insurance firm Lloyd's of London to equip their ships with wireless stations. Marconi's company dominated marine radio throughout the spark era. 
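The frequency estimates quoted in the preceding sections (a 1 m Hertzian dipole radiating near 150 MHz, and Marconi's long grounded wires radiating in the MF band around 2 MHz) follow from the simple half-wave and quarter-wave relations. The sketch below illustrates them, ignoring end effects and the extra loading of real antenna hardware; the 35 m wire height is an invented value chosen only to land near 2 MHz.

SPEED_OF_LIGHT = 3.0e8  # metres per second

def half_wave_dipole_frequency(length_m):
    # an ideal half-wave dipole resonates where the wavelength is about twice its length
    return SPEED_OF_LIGHT / (2.0 * length_m)

def quarter_wave_monopole_frequency(height_m):
    # a grounded quarter-wave monopole resonates where the wavelength is about four times its height
    return SPEED_OF_LIGHT / (4.0 * height_m)

print(half_wave_dipole_frequency(1.0) / 1e6)        # ~150 MHz for Hertz's 1 m dipole
print(quarter_wave_monopole_frequency(35.0) / 1e6)  # ~2.1 MHz for a hypothetical 35 m elevated wire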
Inspired by Marconi, in the late 1890s other researchers also began developing competing spark radio communication systems; Alexander Popov in Russia, Eugène Ducretet in France, Reginald Fessenden and Lee de Forest in America, and Karl Ferdinand Braun, Adolf Slaby, and Georg von Arco in Germany who in 1903 formed the Telefunken Co., Marconi's chief rival. Disadvantages. The primitive transmitters prior to 1897 had no resonant circuits (also called LC circuits, tank circuits, or tuned circuits), the spark gap was in the antenna, which functioned as the resonator to determine the frequency of the radio waves. These were called "unsyntonized" or "plain antenna" transmitters. The average power output of these transmitters was low, because due to its low capacitance the antenna was a highly damped oscillator (in modern terminology, it had very low Q factor). During each spark the energy stored in the antenna was quickly radiated away as radio waves, so the oscillations decayed to zero quickly. The radio signal consisted of brief pulses of radio waves, repeating tens or at most a few hundreds of times per second, separated by comparatively long intervals of no output. The power radiated was dependent on how much electric charge could be stored in the antenna before each spark, which was proportional to the capacitance of the antenna. To increase their capacitance to ground, antennas were made with multiple parallel wires, often with capacitive toploads, in the "harp", "cage", "umbrella", "inverted-L", and "T" antennas characteristic of the "spark" era. The only other way to increase the energy stored in the antenna was to charge it up to very high voltages. However the voltage that could be used was limited to about 100 kV by corona discharge which caused charge to leak off the antenna, particularly in wet weather, and also energy lost as heat in the longer spark. A more significant drawback of the large damping was that the radio transmissions were electrically "noisy"; they had a very large bandwidth. These transmitters did not produce waves of a single frequency, but a continuous band of frequencies. They were essentially radio noise sources radiating energy over a large part of the radio spectrum, which made it impossible for other transmitters to be heard. When multiple transmitters attempted to operate in the same area, their broad signals overlapped in frequency and interfered with each other. The radio receivers used also had no resonant circuits, so they had no way of selecting one signal from others besides the broad resonance of the antenna, and responded to the transmissions of all transmitters in the vicinity. An example of this interference problem was an embarrassing public debacle in August 1901 when Marconi, Lee de Forest, and G. W. Pickard attempted to report the New York Yacht Race to newspapers from ships with their untuned spark transmitters. The Morse code transmissions interfered, and the reporters on shore failed to receive any information from the garbled signals. Syntonic transmitters. It became clear that for multiple transmitters to operate, some system of "selective signaling" had to be devised to allow a receiver to select which transmitter's signal to receive, and reject the others. In 1892 William Crookes had given an influential lecture on radio in which he suggested using resonance (then called "syntony") to reduce the bandwidth of transmitters and receivers. 
Using a resonant circuit (also called tuned circuit or tank circuit) in transmitters would narrow the bandwidth of the radiated signal; it would occupy a smaller range of frequencies around its center frequency, so that the signals of transmitters "tuned" to transmit on different frequencies would no longer overlap. A receiver which had its own resonant circuit could receive a particular transmitter by "tuning" its resonant frequency to the frequency of the desired transmitter, analogously to the way one musical instrument can be tuned to resonance with another. This is the system used in all modern radio. During the period 1897 to 1900 wireless researchers realized the advantages of "syntonic" or "tuned" systems, and added capacitors (Leyden jars) and inductors (coils of wire) to transmitters and receivers, to make resonant circuits (tuned circuits, or tank circuits). Oliver Lodge, who had been researching electrical resonance for years, patented the first "syntonic" transmitter and receiver in May 1897. Lodge added an inductor (coil) between the sides of his dipole antennas, which resonated with the capacitance of the antenna to make a tuned circuit. Although his complicated circuit did not see much practical use, Lodge's "syntonic" patent was important because it was the first to propose a radio transmitter and receiver containing resonant circuits which were tuned to resonance with each other. In 1911, when the patent was renewed, the Marconi Company was forced to buy it to protect its own syntonic system against infringement suits. The resonant circuit functioned analogously to a tuning fork, storing oscillating electrical energy and increasing the Q factor of the circuit so the oscillations were less damped. Another advantage was that the frequency of the transmitter was no longer determined by the length of the antenna but by the resonant circuit, so it could easily be changed by adjustable taps on the coil. The antenna was brought into resonance with the tuned circuit using loading coils. The energy in each spark, and thus the power output, was no longer limited by the capacitance of the antenna but by the size of the capacitor in the resonant circuit. In order to increase the power, very large capacitor banks were used. The form that the resonant circuit took in practical transmitters was the inductively-coupled circuit described in the next section. Inductive coupling. In developing these syntonic transmitters, researchers found it impossible to achieve low damping with a single resonant circuit. A resonant circuit can only have low damping (high Q, narrow bandwidth) if it is a "closed" circuit, with no energy-dissipating components. But such a circuit does not produce radio waves. A resonant circuit with an antenna radiating radio waves (an "open" tuned circuit) loses energy quickly, giving it high damping (low Q, wide bandwidth). There was a fundamental tradeoff between a circuit which produced persistent, narrow-bandwidth oscillations and one which radiated high power. The solution found by a number of researchers was to use two resonant circuits in the transmitter, with their coils inductively (magnetically) coupled, making a resonant transformer (called an "oscillation transformer"); this was called an "inductively coupled", "coupled circuit", or "two circuit" transmitter. See circuit diagram. 
The primary winding of the oscillation transformer ("L1") with the capacitor ("C1") and spark gap ("S") formed a "closed" resonant circuit which generated the oscillations, while the secondary winding ("L2") was connected to the wire antenna ("A") and ground, forming an "open" resonant circuit with the capacitance of the antenna ("C2"). Both circuits were tuned to the same resonant frequency. The advantage of the inductively coupled circuit was that the "loosely coupled" transformer transferred the oscillating energy of the tank circuit to the radiating antenna circuit gradually, creating long "ringing" waves. A second advantage was that it allowed a large primary capacitance "(C1)" to be used which could store a lot of energy, increasing the power output enormously. Powerful transoceanic transmitters often had huge Leyden jar capacitor banks filling rooms "(see pictures above)". The receiver in most systems also used two inductively coupled circuits, with the antenna an "open" resonant circuit coupled through an oscillation transformer to a "closed" resonant circuit containing the detector. A radio system with a "two circuit" (inductively coupled) transmitter and receiver was called a "four circuit" system. The first person to use resonant circuits in a radio application was Nikola Tesla, who invented the resonant transformer in 1891. At a March 1893 St. Louis lecture he had demonstrated a wireless system that, although it was intended for wireless power transmission, had many of the elements of later radio communication systems. A grounded capacitance-loaded spark-excited resonant transformer (his "Tesla coil") attached to an elevated wire monopole antenna transmitted radio waves, which were received across the room by a similar wire antenna attached to a receiver consisting of a second grounded resonant transformer tuned to the transmitter's frequency, which lighted a Geissler tube. This system, patented by Tesla 2 September 1897, 4 months after Lodge's "syntonic" patent, was in effect an inductively coupled radio transmitter and receiver, the first use of the "four circuit" system claimed by Marconi in his 1900 patent "(below)". However, Tesla was mainly interested in wireless power and never developed a practical radio "communication" system. In addition to Tesla's system, inductively coupled radio systems were patented by Oliver Lodge in February 1898, Karl Ferdinand Braun, in November 1899, and John Stone Stone in February 1900. Braun made the crucial discovery that low damping required "loose coupling" (reduced mutual inductance) between the primary and secondary coils. Marconi at first paid little attention to syntony, but by 1900 developed a radio system incorporating features from these systems, with a two circuit transmitter and two circuit receiver, with all four circuits tuned to the same frequency, using a resonant transformer he called the "jigger". In spite of the above prior patents, Marconi in his 26 April 1900 "four circuit" or "master tuning" patent on his system claimed rights to the inductively coupled transmitter and receiver. This was granted a British patent, but the US patent office twice rejected his patent as lacking originality. 
Then in a 1904 appeal a new patent commissioner reversed the decision and granted the patent, on the narrow grounds that Marconi's patent by including an antenna loading coil "(J in circuit above)" provided the means for tuning the four circuits to the same frequency, whereas in the Tesla and Stone patents this was done by adjusting the length of the antenna. This patent gave Marconi a near monopoly of syntonic wireless telegraphy in England and America. Tesla sued Marconi's company for patent infringement but didn't have the resources to pursue the action. In 1943 the US Supreme Court invalidated the inductive coupling claims of Marconi's patent due to the prior patents of Lodge, Tesla, and Stone, but this came long after spark transmitters had become obsolete. The inductively coupled or "syntonic" spark transmitter was the first type that could communicate at intercontinental distances, and also the first that had sufficiently narrow bandwidth that interference between transmitters was reduced to a tolerable level. It became the dominant type used during the "spark" era. A drawback of the plain inductively coupled transmitter was that unless the primary and secondary coils were very loosely coupled it radiated on two frequencies. This was remedied by the quenched-spark and rotary gap transmitters" (below)". In recognition of their achievements in radio, Marconi and Braun shared the 1909 Nobel Prize in physics. First transatlantic radio transmission. Marconi decided in 1900 to attempt transatlantic communication, which would allow him to dominate Atlantic shipping and compete with submarine telegraph cables. This would require a major scale-up in power, a risky gamble for his company. Up to that time his small induction coil transmitters had an input power of 100 - 200 watts, and the maximum range achieved was around 150 miles. To build the first high power transmitter, Marconi hired an expert in electric power engineering, Prof. John Ambrose Fleming of University College, London, who applied power engineering principles. Fleming designed a complicated inductively-coupled transmitter "(see circuit)" with two cascaded spark gaps "(S1, S2)" firing at different rates, and three resonant circuits, powered by a 25 kW alternator "(D)" turned by a combustion engine. The first spark gap and resonant circuit "(S1, C1, T2)" generated the high voltage to charge the capacitor "(C2)" powering the second spark gap and resonant circuit "(S2, C2, T3)", which generated the output. The spark rate was low, perhaps as low as 2 - 3 sparks per second. Fleming estimated the radiated power was around 10 - 12 kW. The transmitter was built in secrecy on the coast at Poldhu, Cornwall, UK. Marconi was pressed for time because Nikola Tesla was building his own transatlantic radiotelegraphy transmitter on Long Island, New York, in a bid to be first (this was the Wardenclyffe Tower, which lost funding and was abandoned unfinished after Marconi's success). Marconi's original round 400-wire transmitting antenna collapsed in a storm 17 September 1901 and he hastily erected a temporary antenna consisting of 50 wires suspended in a fan shape from a cable between two 160 foot poles. The frequency used is not known precisely, as Marconi did not measure wavelength or frequency, but it was between 166 and 984 kHz, probably around 500 kHz. He received the signal on the coast of St. John's, Newfoundland using an untuned coherer receiver with a 400 ft. wire antenna suspended from a kite. 
Marconi announced the first transatlantic radio transmission took place on 12 December 1901, from Poldhu, Cornwall to Signal Hill, Newfoundland, a distance of 2100 miles (3400 km). Marconi's achievement received worldwide publicity, and was the final proof that radio was a practical communication technology. The scientific community at first doubted Marconi's report. Virtually all wireless experts besides Marconi believed that radio waves traveled in straight lines, so no one (including Marconi) understood how the waves had managed to propagate around the 300 mile high curve of the Earth between Britain and Newfoundland. In 1902 Arthur Kennelly and Oliver Heaviside independently theorized that radio waves were reflected by a layer of ionized atoms in the upper atmosphere, enabling them to return to Earth beyond the horizon. In 1924 Edward V. Appleton demonstrated the existence of this layer, now called the "Kennelly–Heaviside layer" or "E-layer", for which he received the 1947 Nobel Prize in Physics. Knowledgeable sources today doubt whether Marconi actually received this transmission. Ionospheric conditions should not have allowed the signal to be received during the daytime at that range. Marconi knew the Morse code signal to be transmitted was the letter 'S' (three dots). He and his assistant could have mistaken atmospheric radio noise ("static") in their earphones for the clicks of the transmitter. Marconi made many subsequent transatlantic transmissions which clearly establish his priority, but reliable transatlantic communication was not achieved until 1907 with more powerful transmitters. Quenched-spark transmitters. The inductively-coupled transmitter had a more complicated output waveform than the non-syntonic transmitter, due to the interaction of the two resonant circuits. The two magnetically coupled tuned circuits acted as a coupled oscillator, producing beats "(see top graphs)". The oscillating radio frequency energy was passed rapidly back and forth between the primary and secondary resonant circuits as long as the spark continued. Each time the energy returned to the primary, some was lost as heat in the spark. In addition, unless the coupling was very loose the oscillations caused the transmitter to transmit on two separate frequencies. Since the narrow passband of the receiver's resonant circuit could only be tuned to one of these frequencies, the power radiated at the other frequency was wasted. This troublesome backflow of energy to the primary circuit could be prevented by extinguishing (quenching) the spark at the right instant, after all the energy from the capacitors was transferred to the antenna circuit. Inventors tried various methods to accomplish this, such as air blasts and Elihu Thomson's magnetic blowout. In 1906, a new type of spark gap was developed by German physicist Max Wien, called the "series" or "quenched" gap. A quenched gap consisted of a stack of wide cylindrical electrodes separated by thin insulating spacer rings to create many narrow spark gaps in series, of around . The wide surface area of the electrodes terminated the ionization in the gap quickly by cooling it after the current stopped. In the inductively coupled transmitter, the narrow gaps extinguished ("quenched") the spark at the first nodal point (Q) when the primary current momentarily went to zero after all the energy had been transferred to the secondary winding "(see lower graph)". 
Since without the spark no current could flow in the primary circuit, this effectively uncoupled the secondary from the primary circuit, allowing the secondary resonant circuit and antenna to oscillate completely free of the primary circuit after that (until the next spark). This produced output power centered on a single frequency instead of two frequencies. It also eliminated most of the energy loss in the spark, producing very lightly damped, long "ringing" waves, with decrements of only 0.08 to 0.25 (a Q of 12-38) and consequently a very "pure", narrow bandwidth radio signal. Another advantage was the rapid quenching allowed the time between sparks to be reduced, allowing higher spark rates of around 1000 Hz to be used, which had a musical tone in the receiver which penetrated radio static better. The quenched gap transmitter was called the "singing spark" system. The German wireless giant Telefunken Co., Marconi's rival, acquired the patent rights and used the quenched spark gap in their transmitters. Rotary gap transmitters. A second type of spark gap that had a similar quenching effect was the "rotary gap", invented by Tesla in 1896 and applied to radio transmitters by Reginald Fessenden and others. It consisted of multiple electrodes equally spaced around a disk rotor spun at high speed by a motor, which created sparks as they passed by a stationary electrode. By using the correct motor speed, the rapidly separating electrodes extinguished the spark after the energy had been transferred to the secondary. The rotating wheel also kept the electrodes cooler, important in high-power transmitters. There were two types of rotary spark transmitter: To reduce interference caused by the "noisy" signals of the burgeoning numbers of spark transmitters, the 1912 US Congress "Act to Regulate Radio Communication" required that "the logarithmic decrement per oscillation in the wave trains emitted by the transmitter shall not exceed two tenths" (this is equivalent to a Q factor of 15 or greater). Virtually the only spark transmitters which could satisfy this condition were the quenched-spark and rotary gap types above, and they dominated wireless telegraphy for the rest of the spark era. Marconi's timed spark system. In 1912 in his high-power stations Marconi developed a refinement of the rotary discharger called the "timed spark" system, which generated what was probably the nearest to a continuous wave that sparks could produce. He used several identical resonant circuits in parallel, with the capacitors charged by a DC dynamo. These were discharged sequentially by multiple rotary discharger wheels on the same shaft to create overlapping damped waves shifted progressively in time, which were added together in the oscillation transformer so the output was a superposition of damped waves. The speed of the discharger wheel was controlled so that the time between sparks was equal to an integer multiple of the wave period. Therefore, oscillations of the successive wave trains were in phase and reinforced each other. The result was essentially a continuous sinusoidal wave, whose amplitude varied with a ripple at the spark rate. This system was necessary to give Marconi's transoceanic stations a narrow enough bandwidth that they didn't interfere with other transmitters on the narrow VLF band. Timed spark transmitters achieved the longest transmission range of any spark transmitters, but these behemoths represented the end of spark technology. The "spark" era. 
The first application of radio was on ships, to keep in touch with shore and send out a distress call if the ship were sinking. The Marconi Company built a string of shore stations and in 1904 established the first Morse code distress call, the letters "CQD", used until the Second International Radiotelegraphic Convention in 1906, at which "SOS" was agreed on. The first significant marine rescue due to radiotelegraphy was the 23 January 1909 sinking of the luxury liner RMS "Republic", in which 1500 people were saved. Spark transmitters and the crystal receivers used to receive them were simple enough that they were widely built by hobbyists. During the first decades of the 20th century this exciting new high tech hobby attracted a growing community of "radio amateurs", many of them teenage boys, who used their homebuilt sets recreationally to contact distant amateurs and chat with them by Morse code, and to relay messages. Low-power amateur transmitters ("squeak boxes") were often built with "trembler" ignition coils from early automobiles such as the Ford Model T. In the US prior to 1912 there was no government regulation of radio, and a chaotic "wild west" atmosphere prevailed, with stations transmitting without regard to other stations on their frequency, and deliberately interfering with each other. The expanding numbers of non-syntonic broadband spark transmitters created uncontrolled congestion in the airwaves, interfering with commercial and military wireless stations. The sinking of the RMS "Titanic" on 14 April 1912 increased public appreciation for the role of radio, but the loss of life brought attention to the disorganized state of the new radio industry, and prompted regulation which corrected some abuses. Although the "Titanic" radio operator's "CQD" distress calls summoned the RMS "Carpathia", which rescued 705 survivors, the rescue operation was delayed four hours because the nearest ship, the SS "Californian", only a few miles away, did not hear the "Titanic"'s call as its radio operator had gone to bed. This was held responsible for most of the 1500 deaths. Existing international regulations required all ships with more than 50 passengers to carry wireless equipment, but after the disaster subsequent regulations mandated that ships have enough radio officers so that a round-the-clock radio watch could be kept. US President Taft and the public heard reports of chaos on the air the night of the disaster, with amateur stations interfering with official naval messages and passing false information. In response Congress passed the 1912 Radio Act, in which licenses were required for all radio transmitters, maximum damping of transmitters was limited to a decrement of 0.2 to get old noisy non-syntonic transmitters off the air, and amateurs were mainly restricted to the unused frequencies above 1.5 MHz and output power of 1 kilowatt. The largest spark transmitters were powerful transoceanic radiotelegraphy stations with input power of 100 - 300 kW. Beginning about 1910, industrial countries built global networks of these stations to exchange commercial and diplomatic telegram traffic with other countries and communicate with their overseas colonies. During World War I, radio became a strategic defensive technology, as it was realized that a nation without long distance radiotelegraph stations could be isolated by an enemy cutting its submarine telegraph cables. 
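Before continuing with the transoceanic networks, a brief aside on the decrement figures quoted above: for a lightly damped oscillation the logarithmic decrement δ and the Q factor are related by the standard approximation Q ≈ π/δ, which is where the "Q factor of 15 or greater" gloss on the 1912 limit of 0.2 comes from. A minimal sketch:

import math

def q_from_decrement(log_decrement):
    # standard light-damping approximation: Q is roughly pi divided by the logarithmic decrement
    return math.pi / log_decrement

print(q_from_decrement(0.2))    # ~15.7, the 1912 legal limit on damping
print(q_from_decrement(0.25))   # ~12.6, the most heavily damped quenched-gap waves mentioned earlier
print(q_from_decrement(0.08))   # ~39, the most lightly damped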
Most of these networks were built by the two giant wireless corporations of the age: the British Marconi Company, which constructed the Imperial Wireless Chain to link the possessions of the British Empire, and the German Telefunken Co. which was dominant outside the British Empire. Marconi transmitters used the timed spark rotary discharger, while Telefunken transmitters used its quenched spark gap technology. Paper tape machines were used to transmit Morse code text at high speed. To achieve a maximum range of around 3000 – 6000 miles, transoceanic stations transmitted mainly in the very low frequency (VLF) band, from 50 kHz to as low as 15 – 20 kHz. At these wavelengths even the largest antennas were electrically short, a tiny fraction of a wavelength tall, and so had low radiation resistance (often below 1 ohm), so these transmitters required enormous wire umbrella and flattop antennas up to several miles long with large capacitive toploads, to achieve adequate efficiency. The antenna required a large loading coil at the base, 6 – 10 feet tall, to make it resonant with the transmitter. Continuous waves. Although their damping had been reduced as much as possible, spark transmitters still produced damped waves, which due to their large bandwidth caused interference between transmitters. The spark also made a very loud noise when operating, produced corrosive ozone gas, eroded the spark electrodes, and could be a fire hazard. Despite its drawbacks, most wireless experts believed along with Marconi that the impulsive "whipcrack" of a spark was necessary to produce radio waves that would communicate long distances. From the beginning, physicists knew that another type of waveform, continuous sinusoidal waves (CW), had theoretical advantages over damped waves for radio transmission. Because their energy is essentially concentrated at a single frequency, in addition to causing almost no interference to other transmitters on adjacent frequencies, continuous wave transmitters could transmit longer distances with a given output power. They could also be modulated with an audio signal to carry sound. The problem was no techniques were known for generating them. The efforts described above to reduce the damping of spark transmitters can be seen as attempts to make their output approach closer to the ideal of a continuous wave, but spark transmitters could not produce true continuous waves. Beginning about 1904, continuous wave transmitters were developed using new principles, which competed with spark transmitters. Continuous waves were first generated by two short-lived technologies: These transmitters, which could produce power outputs of up to one megawatt, slowly replaced the spark transmitter in high-power radiotelegraphy stations. However spark transmitters remained popular in two way communication stations because most continuous wave transmitters were not capable of a mode called "break in" or "listen in" operation. With a spark transmitter, when the telegraph key was up between Morse symbols the carrier wave was turned off and the receiver was turned on, so the operator could listen for an incoming message. This allowed the receiving station, or a third station, to interrupt or "break in" to an ongoing transmission. In contrast, these early CW transmitters had to operate continuously; the carrier wave was not turned off between Morse code symbols, words, or sentences but just detuned, so a local receiver could not operate as long as the transmitter was powered up. 
Therefore, these stations could not receive messages until the transmitter was turned off. Obsolescence. All these early technologies were superseded by the vacuum tube feedback electronic oscillator, invented in 1912 by Edwin Armstrong and Alexander Meissner, which used the triode vacuum tube invented in 1906 by Lee de Forest. Vacuum tube oscillators were a far cheaper source of continuous waves, and could be easily modulated to carry sound. Due to the development of the first high-power transmitting tubes by the end of World War I, in the 1920s tube transmitters replaced the arc converter and alternator transmitters, as well as the last of the old noisy spark transmitters. The 1927 International Radiotelegraph Convention in Washington, D.C. saw a political battle to finally eliminate spark radio. Spark transmitters were long obsolete at this point, and broadcast radio audiences and aviation authorities were complaining of the disruption to radio reception that noisy legacy marine spark transmitters were causing. But shipping interests vigorously fought a blanket prohibition on damped waves, due to the capital expenditure that would be required to replace ancient spark equipment that was still being used on older ships. The Convention prohibited licensing of new land spark transmitters after 1929. Damped wave radio emission, called Class B, was banned after 1934 except for emergency use on ships. This loophole allowed shipowners to avoid replacing spark transmitters, which were kept as emergency backup transmitters on ships through World War II. Legacy. One legacy of spark-gap transmitters is that radio operators were regularly nicknamed "Sparky" long after the devices ceased to be used. Even today, the German verb "funken", literally, "to spark", also means "to send a radio message". The spark gap oscillator was also used in nonradio applications, continuing long after it became obsolete in radio. In the form of the Tesla coil and Oudin coil it was used until the 1940s in the medical field of diathermy for deep body heating. High oscillating voltages of hundreds of thousands of volts at frequencies of 0.1 - 1 MHz from a Tesla coil were applied directly to the patient's body. The treatment was not painful, because currents in the radio frequency range do not cause the physiological reaction of electric shock. In 1926 William T. Bovie discovered that RF currents applied to a scalpel could cut and cauterize tissue in medical operations, and spark oscillators were used as electrosurgery generators or "Bovies" as late as the 1980s. In the 1950s a Japanese toy company, Matsudaya, produced a line of cheap remote control toy trucks, boats and robots called Radicon, which used a low-power spark transmitter in the controller as an inexpensive way to produce the radio control signals. The signals were received in the toy by a coherer receiver. Spark gap oscillators are still used to generate high-frequency high voltage needed to initiate welding arcs in gas tungsten arc welding. Powerful spark gap pulse generators are still used to simulate EMPs. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "C" }, { "math_id": 2, "text": "L" }, { "math_id": 3, "text": "f = \\frac {1}{2 \\pi} \\sqrt { \\frac {1}{LC}} \\," } ]
https://en.wikipedia.org/wiki?curid=1105907
1106042
Reciprocal polynomial
In algebra, given a polynomial formula_0 with coefficients from an arbitrary field, its reciprocal polynomial or reflected polynomial, denoted by "p"∗ or "p"R, is the polynomial formula_1 That is, the coefficients of "p"∗ are the coefficients of "p" in reverse order. Reciprocal polynomials arise naturally in linear algebra as the characteristic polynomial of the inverse of a matrix. In the special case where the field is the complex numbers, when formula_2 the conjugate reciprocal polynomial, denoted "p"†, is defined by, formula_3 where formula_4 denotes the complex conjugate of formula_5, and is also called the reciprocal polynomial when no confusion can arise. A polynomial "p" is called self-reciprocal or palindromic if "p"("x") = "p"∗("x"). The coefficients of a self-reciprocal polynomial satisfy "a""i" = "a""n"−"i" for all "i". Properties. Reciprocal polynomials have several connections with their original polynomials, including: Other properties of reciprocal polynomials may be obtained, for instance: Palindromic and antipalindromic polynomials. A self-reciprocal polynomial is also called palindromic because its coefficients, when the polynomial is written in the order of ascending or descending powers, form a palindrome. That is, if formula_7 is a polynomial of degree "n", then "P" is "palindromic" if "ai" = "a""n"−"i" for "i" = 0, 1, ..., "n". Similarly, a polynomial "P" of degree "n" is called antipalindromic if "ai" = −"a""n"−"i" for "i" = 0, 1, ..., "n". That is, a polynomial "P" is "antipalindromic" if "P"("x") = –"P"∗("x"). Examples. From the properties of the binomial coefficients, it follows that the polynomials "P"("x") = ("x" + 1)"n" are palindromic for all positive integers "n", while the polynomials "Q"("x") = ("x" – 1)"n" are palindromic when "n" is even and antipalindromic when "n" is odd. Other examples of palindromic polynomials include cyclotomic polynomials and Eulerian polynomials. Real coefficients. A polynomial with real coefficients all of whose complex roots lie on the unit circle in the complex plane (that is, all the roots have modulus 1) is either palindromic or antipalindromic. Conjugate reciprocal polynomials. A polynomial is conjugate reciprocal if formula_8 and self-inversive if formula_9 for a scale factor "ω" on the unit circle. If "p"("z") is the minimal polynomial of "z"0 with |"z"0| = 1, "z"0 ≠ 1, and "p"("z") has real coefficients, then "p"("z") is self-reciprocal. This follows because formula_10 So "z"0 is a root of the polynomial formula_11 which has degree "n". But, the minimal polynomial is unique, hence formula_12 for some constant "c", i.e. formula_13. Sum from "i" = 0 to "n" and note that 1 is not a root of "p". We conclude that "c" = 1. A consequence is that the cyclotomic polynomials Φ"n" are self-reciprocal for "n" &gt; 1. This is used in the special number field sieve to allow numbers of the form "x"11 ± 1, "x"13 ± 1, "x"15 ± 1 and "x"21 ± 1 to be factored taking advantage of the algebraic factors by using polynomials of degree 5, 6, 4 and 6 respectively – note that "φ" (Euler's totient function) of the exponents are 10, 12, 8 and 12. Per Cohn's theorem, a self-inversive polynomial has as many roots in the unit disk formula_14 as the reciprocal polynomial of its derivative. Application in coding theory. The reciprocal polynomial finds a use in the theory of cyclic error correcting codes. Suppose "x""n" − 1 can be factored into the product of two polynomials, say "x""n" − 1 = "g"("x")"p"("x"). 
When "g"("x") generates a cyclic code "C", then the reciprocal polynomial "p"∗ generates "C"⊥, the orthogonal complement of "C". Also, "C" is "self-orthogonal" (that is, "C" ⊆ "C"⊥), if and only if "p"∗ divides "g"("x"). Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "p(x) = a_0 + a_1x + a_2x^2 + \\cdots + a_nx^n," }, { "math_id": 1, "text": "p^*(x) = a_n + a_{n-1}x + \\cdots + a_0x^n = x^n p(x^{-1})." }, { "math_id": 2, "text": "p(z) = a_0 + a_1z + a_2z^2 + \\cdots + a_nz^n," }, { "math_id": 3, "text": "p^{\\dagger}(z) = \\overline{a_n} + \\overline{a_{n-1}}z + \\cdots + \\overline{a_0}z^n = z^n\\overline{p(\\bar{z}^{-1})}," }, { "math_id": 4, "text": "\\overline{a_i}" }, { "math_id": 5, "text": "a_i" }, { "math_id": 6, "text": "a_0" }, { "math_id": 7, "text": " P(x) = \\sum_{i=0}^n a_ix^i" }, { "math_id": 8, "text": "p(x) \\equiv p^{\\dagger}(x)" }, { "math_id": 9, "text": "p(x) = \\omega p^{\\dagger}(x)" }, { "math_id": 10, "text": "z_0^n\\overline{p(1/\\bar{z_0})} = z_0^n\\overline{p(z_0)} = z_0^n\\bar{0} = 0." }, { "math_id": 11, "text": "z^n\\overline{p(\\bar{z}^{-1})}" }, { "math_id": 12, "text": "cp(z) = z^n\\overline{p(\\bar{z}^{-1})}" }, { "math_id": 13, "text": "ca_i=\\overline{a_{n-i}}=a_{n-i}" }, { "math_id": 14, "text": "\\{z\\in\\mathbb{C}: |z| < 1\\}" } ]
https://en.wikipedia.org/wiki?curid=1106042
11062
Friction
Force resisting sliding motion &lt;templatestyles src="Hlist/styles.css"/&gt; Friction is the force resisting the relative motion of solid surfaces, fluid layers, and material elements sliding against each other. Types of friction include dry, fluid, lubricated, skin, and internal -- an incomplete list. The study of the processes involved is called tribology, and has a history of more than 2000 years. Friction can have dramatic consequences, as illustrated by the use of friction created by rubbing pieces of wood together to start a fire. Another important consequence of many types of friction can be wear, which may lead to performance degradation or damage to components. It is known that frictional energy losses account for about 20% of the total energy expenditure of the world. As briefly discussed later, there are many different contributors to the retarding force in friction, ranging from asperity deformation to the generation of charges and changes in local structure. Friction is not itself a fundamental force, it is a non-conservative force – work done against friction is path dependent. In the presence of friction, some kinetic or mechanical energy is transformed to thermal energy as well as the free energy of the structural changes and other types of dissipation, so mechanical energy is not conserved. The complexity of the interactions involved makes the calculation of friction from first principles difficult and it is often easier to use empirical methods for analysis and the development of theory. Types. There are several types of friction: History. Many ancient authors including Aristotle, Vitruvius, and Pliny the Elder, were interested in the cause and mitigation of friction. They were aware of differences between static and kinetic friction with Themistius stating in 350 A.D. that "it is easier to further the motion of a moving body than to move a body at rest". The classic laws of sliding friction were discovered by Leonardo da Vinci in 1493, a pioneer in tribology, but the laws documented in his notebooks were not published and remained unknown. These laws were rediscovered by Guillaume Amontons in 1699 and became known as Amonton's three laws of dry friction. Amontons presented the nature of friction in terms of surface irregularities and the force required to raise the weight pressing the surfaces together. This view was further elaborated by Bernard Forest de Bélidor and Leonhard Euler (1750), who derived the angle of repose of a weight on an inclined plane and first distinguished between static and kinetic friction. John Theophilus Desaguliers (1734) first recognized the role of adhesion in friction. Microscopic forces cause surfaces to stick together; he proposed that friction was the force necessary to tear the adhering surfaces apart. The understanding of friction was further developed by Charles-Augustin de Coulomb (1785). Coulomb investigated the influence of four main factors on friction: the nature of the materials in contact and their surface coatings; the extent of the surface area; the normal pressure (or load); and the length of time that the surfaces remained in contact (time of repose). Coulomb further considered the influence of sliding velocity, temperature and humidity, in order to decide between the different explanations on the nature of friction that had been proposed. The distinction between static and dynamic friction is made in Coulomb's friction law (see below), although this distinction was already drawn by Johann Andreas von Segner in 1758. 
The effect of the time of repose was explained by Pieter van Musschenbroek (1762) by considering the surfaces of fibrous materials, with fibers meshing together, which takes a finite time in which the friction increases. John Leslie (1766–1832) noted a weakness in the views of Amontons and Coulomb: If friction arises from a weight being drawn up the inclined plane of successive asperities, why then isn't it balanced through descending the opposite slope? Leslie was equally skeptical about the role of adhesion proposed by Desaguliers, which should on the whole have the same tendency to accelerate as to retard the motion. In Leslie's view, friction should be seen as a time-dependent process of flattening, pressing down asperities, which creates new obstacles in what were cavities before. In the long course of the development of the law of conservation of energy and of the first law of thermodynamics, friction was recognised as a mode of conversion of mechanical work into heat. In 1798, Benjamin Thompson reported on cannon boring experiments. Arthur Jules Morin (1833) developed the concept of sliding versus rolling friction. In 1842, Julius Robert Mayer frictionally generated heat in paper pulp and measured the temperature rise. In 1845, Joule published a paper entitled "The Mechanical Equivalent of Heat", in which he specified a numerical value for the amount of mechanical work required to "produce a unit of heat", based on the friction of an electric current passing through a resistor, and on the friction of a paddle wheel rotating in a vat of water. Osborne Reynolds (1866) derived the equation of viscous flow. This completed the classic empirical model of friction (static, kinetic, and fluid) commonly used today in engineering. In 1877, Fleeming Jenkin and J. A. Ewing investigated the continuity between static and kinetic friction. In 1907, G.H. Bryan published an investigation of the foundations of thermodynamics, "Thermodynamics: an Introductory Treatise dealing mainly with First Principles and their Direct Applications". He noted that for a driven hard surface sliding on a body driven by it, the work done by the driver exceeds the work received by the body. The difference is accounted for by heat generated by friction. Over the years, for example in his 1879 thesis, but particularly in 1926, Planck advocated regarding the generation of heat by rubbing as the most specific way to define heat, and the prime example of an irreversible thermodynamic process. The focus of research during the 20th century has been to understand the physical mechanisms behind friction. Frank Philip Bowden and David Tabor (1950) showed that, at a microscopic level, the actual area of contact between surfaces is a very small fraction of the apparent area. This actual area of contact, caused by asperities increases with pressure. The development of the atomic force microscope (ca. 1986) enabled scientists to study friction at the atomic scale, showing that, on that scale, dry friction is the product of the inter-surface shear stress and the contact area. These two discoveries explain Amonton's first law "(below)"; the macroscopic proportionality between normal force and static frictional force between dry surfaces. Laws of dry friction. The elementary property of sliding (kinetic) friction were discovered by experiment in the 15th to 18th centuries and were expressed as three empirical laws: Dry friction. Dry friction resists relative lateral motion of two solid surfaces in contact. 
The two regimes of dry friction are 'static friction' ("stiction") between non-moving surfaces, and "kinetic friction" (sometimes called sliding friction or dynamic friction) between moving surfaces. Coulomb friction, named after Charles-Augustin de Coulomb, is an approximate model used to calculate the force of dry friction. It is governed by the model: formula_0 where The Coulomb friction formula_1 may take any value from zero up to formula_4, and the direction of the frictional force against a surface is opposite to the motion that surface would experience in the absence of friction. Thus, in the static case, the frictional force is exactly what it must be in order to prevent motion between the surfaces; it balances the net force tending to cause such motion. In this case, rather than providing an estimate of the actual frictional force, the Coulomb approximation provides a threshold value for this force, above which motion would commence. This maximum force is known as traction. The force of friction is always exerted in a direction that opposes movement (for kinetic friction) or potential movement (for static friction) between the two surfaces. For example, a curling stone sliding along the ice experiences a kinetic force slowing it down. For an example of potential movement, the drive wheels of an accelerating car experience a frictional force pointing forward; if they did not, the wheels would spin, and the rubber would slide backwards along the pavement. Note that it is not the direction of movement of the vehicle they oppose, it is the direction of (potential) sliding between tire and road. Normal force. The normal force is defined as the net force compressing two parallel surfaces together, and its direction is perpendicular to the surfaces. In the simple case of a mass resting on a horizontal surface, the only component of the normal force is the force due to gravity, where formula_5. In this case, conditions of equilibrium tell us that the magnitude of the friction force is "zero", formula_6. In fact, the friction force always satisfies formula_7, with equality reached only at a critical ramp angle (given by formula_8) that is steep enough to initiate sliding. The friction coefficient is an empirical (experimentally measured) structural property that depends only on various aspects of the contacting materials, such as surface roughness. The coefficient of friction is not a function of mass or volume. For instance, a large aluminum block has the same coefficient of friction as a small aluminum block. However, the magnitude of the friction force itself depends on the normal force, and hence on the mass of the block. Depending on the situation, the calculation of the normal force formula_9 might include forces other than gravity. If an object is on a level surface and subjected to an external force formula_10 tending to cause it to slide, then the normal force between the object and the surface is just formula_11, where formula_12 is the block's weight and formula_13 is the downward component of the external force. Prior to sliding, this friction force is formula_14, where formula_15 is the horizontal component of the external force. Thus, formula_16 in general. Sliding commences only after this frictional force reaches the value formula_17. Until then, friction is whatever it needs to be to provide equilibrium, so it can be treated as simply a reaction. 
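A minimal numerical sketch of the behaviour just described, for a block pushed horizontally on a level surface: below the static threshold the friction force simply balances the applied force, and above it the block slides against kinetic friction. The mass, forces and coefficients are invented illustrative values.

G = 9.81  # gravitational acceleration, m/s^2

def friction_response(mass_kg, applied_force_n, mu_static, mu_kinetic):
    # returns (friction force in N, acceleration in m/s^2) for a horizontal push on a level surface
    normal_force = mass_kg * G                  # only gravity presses the surfaces together here
    if applied_force_n <= mu_static * normal_force:
        return applied_force_n, 0.0             # static case: friction balances the push exactly
    kinetic_friction = mu_kinetic * normal_force
    return kinetic_friction, (applied_force_n - kinetic_friction) / mass_kg

print(friction_response(10.0, 30.0, 0.5, 0.4))  # (30.0, 0.0): below the ~49 N threshold, no motion
print(friction_response(10.0, 60.0, 0.5, 0.4))  # (~39.2, ~2.1): sliding against kinetic friction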
If the object is on a tilted surface such as an inclined plane, the normal force from gravity is smaller than formula_12, because less of the force of gravity is perpendicular to the face of the plane. The normal force and the frictional force are ultimately determined using vector analysis, usually via a free body diagram. In general, the process for solving any statics problem with friction is to treat contacting surfaces "tentatively" as immovable so that the corresponding tangential reaction force between them can be calculated. If this frictional reaction force satisfies formula_16, then the tentative assumption was correct, and it is the actual frictional force. Otherwise, the friction force must be set equal to formula_17, and the resulting force imbalance would then determine the acceleration associated with slipping. Coefficient of friction. The coefficient of friction (COF), often symbolized by the Greek letter μ, is a dimensionless scalar value which equals the ratio of the force of friction between two bodies to the force pressing them together, either during or at the onset of slipping. The coefficient of friction depends on the materials used; for example, ice on steel has a low coefficient of friction, while rubber on pavement has a high coefficient of friction. Coefficients of friction range from near zero to greater than one. The coefficient of friction between two surfaces of similar metals is greater than that between two surfaces of different metals; for example, brass has a higher coefficient of friction when moved against brass, but less if moved against steel or aluminum. For surfaces at rest relative to each other, formula_18, where formula_19 is the "coefficient of static friction". This is usually larger than its kinetic counterpart. The coefficient of static friction exhibited by a pair of contacting surfaces depends upon the combined effects of material deformation characteristics and surface roughness, both of which have their origins in the chemical bonding between atoms in each of the bulk materials and between the material surfaces and any adsorbed material. The fractality of surfaces, a parameter describing the scaling behavior of surface asperities, is known to play an important role in determining the magnitude of the static friction. For surfaces in relative motion, formula_20, where formula_21 is the "coefficient of kinetic friction". The Coulomb friction is equal to formula_1, and the frictional force on each surface is exerted in the direction opposite to its motion relative to the other surface. Arthur Morin introduced the term and demonstrated the utility of the coefficient of friction. The coefficient of friction is an empirical measurement; it has to be measured experimentally, and cannot be found through calculations. Rougher surfaces tend to have higher effective values. Both static and kinetic coefficients of friction depend on the pair of surfaces in contact; for a given pair of surfaces, the coefficient of static friction is "usually" larger than that of kinetic friction; in some sets the two coefficients are equal, such as teflon-on-teflon. Most dry materials in combination have friction coefficient values between 0.3 and 0.6. Values outside this range are rarer, but teflon, for example, can have a coefficient as low as 0.04. A value of zero would mean no friction at all, an elusive property. Rubber in contact with other surfaces can yield friction coefficients from 1 to 2. 
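Because the coefficient of friction has to be measured rather than calculated, one common bench method is to tilt the surface slowly and note the angle at which the object just begins to slide; by the critical-angle condition mentioned earlier, the tangent of that angle gives an estimate of the static coefficient. A sketch with made-up measurements:

import math

def static_coefficient_from_angle(slip_angle_degrees):
    # at the verge of slipping on an incline, mu_s = tan(theta)
    return math.tan(math.radians(slip_angle_degrees))

print(static_coefficient_from_angle(17.0))  # ~0.31
print(static_coefficient_from_angle(31.0))  # ~0.60, both within the typical 0.3-0.6 range quoted above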
Occasionally it is maintained that "μ" is always < 1, but this is not true. While in most relevant applications "μ" < 1, a value above 1 merely implies that the force required to slide an object along the surface is greater than the normal force of the surface on the object. For example, silicone rubber or acrylic rubber-coated surfaces have a coefficient of friction that can be substantially larger than 1. While it is often stated that the COF is a "material property," it is better categorized as a "system property." Unlike true material properties (such as conductivity, dielectric constant, yield strength), the COF for any two materials depends on system variables like temperature, velocity, atmosphere and also what are now popularly described as aging and deaging times; as well as on geometric properties of the interface between the materials, namely surface structure. For example, a copper pin sliding against a thick copper plate can have a COF that varies from 0.6 at low speeds (metal sliding against metal) to below 0.2 at high speeds when the copper surface begins to melt due to frictional heating. The latter speed, of course, does not determine the COF uniquely; if the pin diameter is increased so that the frictional heating is removed rapidly, the temperature drops, the pin remains solid and the COF rises to that of a 'low speed' test. In systems with significant non-uniform stress fields, because local slip occurs before the system slides, the macroscopic coefficient of static friction depends on the applied load, system size, or shape; Amontons' law is not satisfied macroscopically. Approximate coefficients of friction. Under certain conditions some materials have very low friction coefficients. An example is (highly ordered pyrolytic) graphite which can have a friction coefficient below 0.01. This ultralow-friction regime is called superlubricity. Static friction. Static friction is friction between two or more solid objects that are not moving relative to each other. For example, static friction can prevent an object from sliding down a sloped surface. The coefficient of static friction, typically denoted as "μ"s, is usually higher than the coefficient of kinetic friction. Static friction is considered to arise as the result of surface roughness features across multiple length scales at solid surfaces. These features, known as asperities, are present down to nano-scale dimensions and result in true solid to solid contact existing only at a limited number of points accounting for only a fraction of the apparent or nominal contact area. The linearity between applied load and true contact area, arising from asperity deformation, gives rise to the linearity between static frictional force and normal force, found for typical Amontons–Coulomb type friction. The static friction force must be overcome by an applied force before an object can move. The maximum possible friction force between two surfaces before sliding begins is the product of the coefficient of static friction and the normal force: formula_22. When there is no sliding occurring, the friction force can have any value from zero up to formula_23. Any force smaller than formula_23 attempting to slide one surface over the other is opposed by a frictional force of equal magnitude and opposite direction. Any force larger than formula_23 overcomes the force of static friction and causes sliding to occur.
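As a concrete illustration of this threshold, the short sketch below (Python; every numerical value is an assumption chosen for the example) checks whether a block resting on an incline stays put, and also reports the critical ramp angle arctan("μ"s) mentioned earlier at which sliding begins.

```python
# Does a block on an incline stay put?  On a ramp at angle theta the normal
# force is m*g*cos(theta), the force pulling the block down the slope is
# m*g*sin(theta), and sliding starts once tan(theta) exceeds mu_s.
# All numerical values below are assumed for the example.

import math

g = 9.81                      # m/s^2
m = 5.0                       # kg
mu_s = 0.4                    # coefficient of static friction
theta = math.radians(25.0)    # ramp angle

N = m * g * math.cos(theta)           # normal force
driving = m * g * math.sin(theta)     # component of gravity along the slope
F_max = mu_s * N                      # maximum static friction available

print(f"critical angle = {math.degrees(math.atan(mu_s)):.1f} degrees")
if driving <= F_max:
    print(f"block stays put: {driving:.1f} N <= {F_max:.1f} N")
else:
    print(f"block slides: {driving:.1f} N > {F_max:.1f} N")
```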
The instant sliding occurs, static friction is no longer applicable: the friction between the two surfaces is then called kinetic friction. However, an apparent static friction can be observed even in the case when the true static friction is zero. An example of static friction is the force that prevents a car wheel from slipping as it rolls on the ground. Even though the wheel is in motion, the patch of the tire in contact with the ground is stationary relative to the ground, so it is static rather than kinetic friction. Upon slipping, the wheel friction changes to kinetic friction. An anti-lock braking system operates on the principle of allowing a locked wheel to resume rotating so that the car maintains static friction. The maximum value of static friction, when motion is impending, is sometimes referred to as limiting friction, although this term is not used universally. Kinetic friction. Kinetic friction, also known as dynamic friction or sliding friction, occurs when two objects are moving relative to each other and rub together (like a sled on the ground). The coefficient of kinetic friction is typically denoted as "μ"k, and is usually less than the coefficient of static friction for the same materials. However, Richard Feynman comments that "with dry metals it is very hard to show any difference." The friction force between two surfaces after sliding begins is the product of the coefficient of kinetic friction and the normal force: formula_24. This is responsible for the Coulomb damping of an oscillating or vibrating system. New models are beginning to show how kinetic friction can be greater than static friction. Kinetic friction is now understood, in many cases, to be caused primarily by chemical bonding between the surfaces rather than by interlocking asperities; however, in many other cases roughness effects are dominant, for example in rubber to road friction. Surface roughness and contact area affect kinetic friction for micro- and nano-scale objects where surface area forces dominate inertial forces. The origin of kinetic friction at nanoscale can be rationalized by an energy model. During sliding, a new surface forms at the back of a sliding true contact, and existing surface disappears at the front of it. Since all surfaces involve the thermodynamic surface energy, work must be spent in creating the new surface, and energy is released as heat in removing the surface. Thus, a force is required to move the back of the contact, and frictional heat is released at the front. Angle of friction. For certain applications, it is more useful to define static friction in terms of the maximum angle of inclination at which one of the items will just begin to slide. This is called the "angle of friction" or "friction angle". It is defined as: formula_25 and thus: formula_26 where formula_27 is the angle from horizontal and "μs" is the static coefficient of friction between the objects. This formula can also be used to calculate "μs" from empirical measurements of the friction angle. Friction at the atomic level. Determining the forces required to move atoms past each other is a challenge in designing nanomachines. In 2008 scientists for the first time were able to move a single atom across a surface, and measure the forces required. Using ultrahigh vacuum and nearly zero temperature (5 K), a modified atomic force microscope was used to drag a cobalt atom, and a carbon monoxide molecule, across surfaces of copper and platinum. Limitations of the Coulomb model.
The Coulomb approximation follows from the assumptions that surfaces are in atomically close contact only over a small fraction of their overall area, that this contact area is proportional to the normal force (until saturation, which takes place when all area is in atomic contact), and that the frictional force is proportional to the applied normal force, independently of the contact area. The Coulomb approximation is fundamentally an empirical construct. It is a rule-of-thumb describing the approximate outcome of an extremely complicated physical interaction. The strength of the approximation is its simplicity and versatility. Though the relationship between normal force and frictional force is not exactly linear (and so the frictional force is not entirely independent of the contact area of the surfaces), the Coulomb approximation is an adequate representation of friction for the analysis of many physical systems. When the surfaces are conjoined, Coulomb friction becomes a very poor approximation (for example, adhesive tape resists sliding even when there is no normal force, or a negative normal force). In this case, the frictional force may depend strongly on the area of contact. Some drag racing tires are adhesive for this reason. However, despite the complexity of the fundamental physics behind friction, the relationships are accurate enough to be useful in many applications. "Negative" coefficient of friction. As of 2012, a single study has demonstrated the potential for an "effectively negative coefficient of friction in the low-load regime", meaning that a decrease in normal force leads to an increase in friction. This contradicts everyday experience in which an increase in normal force leads to an increase in friction. This was reported in the journal "Nature" in October 2012 and involved the friction encountered by an atomic force microscope stylus when dragged across a graphene sheet in the presence of graphene-adsorbed oxygen. Numerical simulation of the Coulomb model. Despite being a simplified model of friction, the Coulomb model is useful in many numerical simulation applications such as multibody systems and granular material. Even its most simple expression encapsulates the fundamental effects of sticking and sliding which are required in many applied cases, although specific algorithms have to be designed in order to efficiently numerically integrate mechanical systems with Coulomb friction and bilateral or unilateral contact. Some quite nonlinear effects, such as the so-called Painlevé paradoxes, may be encountered with Coulomb friction. Dry friction and instabilities. Dry friction can induce several types of instabilities in mechanical systems which display a stable behaviour in the absence of friction. These instabilities may be caused by the decrease of the friction force with an increasing velocity of sliding, by material expansion due to heat generation during friction (the thermo-elastic instabilities), or by pure dynamic effects of sliding of two elastic materials (the Adams–Martins instabilities). The latter were originally discovered in 1995 by George G. Adams and João Arménio Correia Martins for smooth surfaces and were later found in periodic rough surfaces. In particular, friction-related dynamical instabilities are thought to be responsible for brake squeal and the 'song' of a glass harp, phenomena which involve stick and slip, modelled as a drop of friction coefficient with velocity.
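A rough numerical sketch of this stick-slip mechanism is given below. It simulates a block dragged across a surface through a spring whose far end moves at constant speed, with the static coefficient of friction larger than the kinetic one; the block alternately sticks and slips, which is the qualitative behaviour behind brake squeal and bowed strings. This is only an illustration under assumed parameter values, not a model taken from the literature discussed here.

```python
# Stick-slip sketch: block of mass m on a level surface, pulled through a
# spring whose far end moves at constant speed v_drive.  While stuck, the block
# does not move until the spring force exceeds the static threshold mu_s*N;
# while slipping, it feels kinetic friction mu_k*N opposing its motion.
# All parameter values are assumed for the example.

import math

m, g = 1.0, 9.81
k = 50.0                  # spring stiffness, N/m
mu_s, mu_k = 0.6, 0.4     # static and kinetic coefficients
v_drive = 0.1             # speed of the dragged spring end, m/s
N = m * g                 # normal force on a level surface

dt, t_end = 1e-4, 20.0
x, v = 0.0, 0.0           # block position and velocity
sticking = True
events = []               # (event, time) pairs

t = 0.0
while t < t_end:
    spring = k * (v_drive * t - x)            # force exerted by the spring
    if sticking and abs(spring) > mu_s * N:   # static threshold exceeded
        sticking = False
        events.append(("slip", round(t, 3)))
    if not sticking:
        direction = v if v != 0 else spring
        friction = -math.copysign(mu_k * N, direction)
        v_new = v + (spring + friction) / m * dt
        if v != 0 and v_new * v <= 0:         # velocity passed through zero
            v_new = 0.0
            if abs(spring) <= mu_s * N:       # spring can no longer drag it
                sticking = True
                events.append(("stick", round(t, 3)))
        v = v_new
        x += v * dt
    t += dt

print(events[:8])   # alternating slip/stick times: a stick-slip oscillation
```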
A practically important case is the self-oscillation of the strings of bowed instruments such as the violin, cello, hurdy-gurdy, erhu, etc. A connection between dry friction and flutter instability in a simple mechanical system has also been discovered. Frictional instabilities can lead to the formation of new self-organized patterns (or "secondary structures") at the sliding interface, such as in-situ formed tribofilms, which are utilized for the reduction of friction and wear in so-called self-lubricating materials. Fluid friction. Fluid friction occurs between fluid layers that are moving relative to each other. This internal resistance to flow is named "viscosity". In everyday terms, the viscosity of a fluid is described as its "thickness". Thus, water is "thin", having a lower viscosity, while honey is "thick", having a higher viscosity. The less viscous the fluid, the greater its ease of deformation or movement. All real fluids (except superfluids) offer some resistance to shearing and therefore are viscous. For teaching and explanatory purposes it is helpful to use the concept of an inviscid fluid or an ideal fluid which offers no resistance to shearing and so is not viscous. Lubricated friction. Lubricated friction is a case of fluid friction where a fluid separates two solid surfaces. Lubrication is a technique employed to reduce wear of one or both surfaces in close proximity moving relative to one another by interposing a substance called a lubricant between the surfaces. In most cases the applied load is carried by pressure generated within the fluid due to the frictional viscous resistance to motion of the lubricating fluid between the surfaces. Adequate lubrication allows smooth continuous operation of equipment, with only mild wear, and without excessive stresses or seizures at bearings. When lubrication breaks down, metal or other components can rub destructively over each other, causing heat and possibly damage or failure. Skin friction. Skin friction arises from the interaction between the fluid and the skin of the body, and is directly related to the area of the surface of the body that is in contact with the fluid. Skin friction follows the drag equation and rises with the square of the velocity. Skin friction is caused by viscous drag in the boundary layer around the object. There are two ways to decrease skin friction: the first is to shape the moving body so that smooth flow is possible, like an airfoil. The second method is to decrease the length and cross-section of the moving object as much as is practicable. Internal friction. Internal friction is the force resisting motion between the elements making up a solid material while it undergoes deformation. Plastic deformation in solids is an irreversible change in the internal molecular structure of an object. This change may be due to either (or both) an applied force or a change in temperature. The change of an object's shape is called strain. The force causing it is called stress. Elastic deformation in solids is a reversible change in the internal molecular structure of an object. Stress does not necessarily cause permanent change. As deformation occurs, internal forces oppose the applied force. If the applied stress is not too large these opposing forces may completely resist the applied force, allowing the object to assume a new equilibrium state and to return to its original shape when the force is removed. This is known as elastic deformation or elasticity. Radiation friction.
As a consequence of light pressure, Einstein in 1909 predicted the existence of "radiation friction" which would oppose the movement of matter. He wrote, "radiation will exert pressure on both sides of the plate. The forces of pressure exerted on the two sides are equal if the plate is at rest. However, if it is in motion, more radiation will be reflected on the surface that is ahead during the motion (front surface) than on the back surface. The backward-acting force of pressure exerted on the front surface is thus larger than the force of pressure acting on the back. Hence, as the resultant of the two forces, there remains a force that counteracts the motion of the plate and that increases with the velocity of the plate. We will call this resultant 'radiation friction' in brief." Other types of friction. Rolling resistance. Rolling resistance is the force that resists the rolling of a wheel or other circular object along a surface caused by deformations in the object or surface. Generally the force of rolling resistance is less than that associated with kinetic friction. Typical values for the coefficient of rolling resistance are 0.001. One of the most common examples of rolling resistance is the movement of motor vehicle tires on a road, a process which generates heat and sound as by-products. Braking friction. Any wheel equipped with a brake is capable of generating a large retarding force, usually for the purpose of slowing and stopping a vehicle or piece of rotating machinery. Braking friction differs from rolling friction because the coefficient of friction for rolling friction is small whereas the coefficient of friction for braking friction is designed to be large by choice of materials for brake pads. Triboelectric effect. Rubbing two materials against each other can lead to charge transfer, either electrons or ions. The energy required for this contributes to the friction. In addition, sliding can cause a build-up of electrostatic charge, which can be hazardous if flammable gases or vapours are present. When the static build-up discharges, explosions can be caused by ignition of the flammable mixture. Belt friction. Belt friction is a physical property observed from the forces acting on a belt wrapped around a pulley, when one end is being pulled. The resulting tension, which acts on both ends of the belt, can be modeled by the belt friction equation. In practice, the theoretical tension acting on the belt or rope calculated by the belt friction equation can be compared to the maximum tension the belt can support. This helps a designer of such a rig to know how many times the belt or rope must be wrapped around the pulley to prevent it from slipping. Mountain climbers and sailing crews demonstrate a standard knowledge of belt friction when accomplishing basic tasks. Reduction. Devices. Devices such as wheels, ball bearings, roller bearings, and air cushion or other types of fluid bearings can change sliding friction into a much smaller type of rolling friction. Many thermoplastic materials such as nylon, HDPE and PTFE are commonly used in low friction bearings. They are especially useful because the coefficient of friction falls with increasing imposed load. For improved wear resistance, very high molecular weight grades are usually specified for heavy duty or critical bearings. Lubricants. A common way to reduce friction is by using a lubricant, such as oil, water, or grease, which is placed between the two surfaces, often dramatically lessening the coefficient of friction. 
The science of friction and lubrication is called tribology. Lubricant technology is the application of science to the formulation and use of lubricants, especially for industrial or commercial objectives. Superlubricity, a recently discovered effect, has been observed in graphite: it is the substantial decrease of friction between two sliding objects, approaching zero levels. A very small amount of frictional energy would still be dissipated. Lubricants to overcome friction need not always be thin, turbulent fluids or powdery solids such as graphite and talc; acoustic lubrication actually uses sound as a lubricant. Another way to reduce friction between two parts is to superimpose micro-scale vibration onto one of the parts. This can be sinusoidal vibration, as used in ultrasound-assisted cutting, or vibration noise, known as dither. Energy of friction. According to the law of conservation of energy, no energy is destroyed due to friction, though it may be lost to the system of concern. Energy is transformed from other forms into thermal energy. A sliding hockey puck comes to rest because friction converts its kinetic energy into heat which raises the thermal energy of the puck and the ice surface. Since heat quickly dissipates, many early philosophers, including Aristotle, wrongly concluded that moving objects lose energy without a driving force. When an object is pushed along a path C on a surface, the energy converted to heat is given by a line integral, in accordance with the definition of work formula_28 where formula_29 is the friction force, formula_30 is the normal force, and formula_31 is the position of the object along the path. Dissipation of energy by friction in a process is a classic example of thermodynamic irreversibility. Work of friction. The work done by friction can translate into deformation, wear, and heat that can affect the contact surface properties (even the coefficient of friction between the surfaces). This can be beneficial as in polishing. The work of friction is used to mix and join materials such as in the process of friction welding. Excessive erosion or wear of mating sliding surfaces occurs when work due to frictional forces rises to unacceptable levels. Harder corrosion particles caught between mating surfaces in relative motion (fretting) exacerbate this wear. As surfaces are worn by work due to friction, fit and surface finish of an object may degrade until it no longer functions properly. For example, bearing seizure or failure may result from excessive wear due to work of friction. In the reference frame of the interface between two surfaces, static friction does "no" work, because there is never displacement between the surfaces. In the same reference frame, kinetic friction is always in the direction opposite the motion, and does "negative" work. However, friction can do "positive" work in certain frames of reference. One can see this by placing a heavy box on a rug, then pulling on the rug quickly. In this case, the box slides backwards relative to the rug, but moves forward relative to the frame of reference in which the floor is stationary. Thus, the kinetic friction between the box and rug accelerates the box in the same direction that the box moves, doing "positive" work. When sliding takes place between two rough bodies in contact, the algebraic sum of the works done is different from zero, and the algebraic sum of the quantities of heat gained by the two bodies is equal to the quantity of work lost by friction, and the total quantity of heat gained is positive.
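For the common textbook case of a block dragged in a straight line at constant speed across a level surface, the line integral above reduces to "μ"k"mgd", where "d" is the distance dragged. The tiny sketch below (Python; all values are assumed for the example) evaluates this special case.

```python
# Heat generated by dragging a block a distance d over a level surface at
# constant speed: E_th = mu_k * m * g * d (special case of the line integral).
# All numerical values are assumed.

mu_k = 0.3         # coefficient of kinetic friction
m, g = 2.0, 9.81   # mass (kg) and gravitational acceleration (m/s^2)
d = 4.0            # distance dragged, m

E_th = mu_k * m * g * d
print(f"energy converted to heat: {E_th:.1f} J")   # about 23.5 J
```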
In a natural thermodynamic process, the work done by an agency in the surroundings of a thermodynamic system or working body is greater than the work received by the body, because of friction. Thermodynamic work is measured by changes in a body's state variables, sometimes called work-like variables, other than temperature and entropy. Examples of work-like variables, which are ordinary macroscopic physical variables and which occur in conjugate pairs, are pressure – volume, and electric field – electric polarization. Temperature and entropy are a specifically thermodynamic conjugate pair of state variables. They can be affected microscopically at an atomic level by mechanisms such as friction, thermal conduction, and radiation. The part of the work done by an agency in the surroundings that does not change the volume of the working body but is dissipated in friction is called isochoric work. It is received as heat by the working body, and sometimes partly by a body in the surroundings. It is not counted as thermodynamic work received by the working body. Applications. Friction is an important factor in many engineering disciplines. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "F_\\mathrm{f} \\leq \\mu F_\\mathrm{n}," }, { "math_id": 1, "text": "F_\\mathrm{f}" }, { "math_id": 2, "text": "\\mu" }, { "math_id": 3, "text": "F_\\mathrm{n}" }, { "math_id": 4, "text": "\\mu F_\\mathrm{n}" }, { "math_id": 5, "text": "N=mg\\," }, { "math_id": 6, "text": "F_f = 0" }, { "math_id": 7, "text": "F_f\\le \\mu N" }, { "math_id": 8, "text": "\\tan^{-1}\\mu" }, { "math_id": 9, "text": "N" }, { "math_id": 10, "text": "P" }, { "math_id": 11, "text": "N = mg + P_y" }, { "math_id": 12, "text": "mg" }, { "math_id": 13, "text": "P_y" }, { "math_id": 14, "text": "F_f = -P_x" }, { "math_id": 15, "text": "P_x" }, { "math_id": 16, "text": "F_f \\le \\mu N" }, { "math_id": 17, "text": "F_f = \\mu N" }, { "math_id": 18, "text": "\\mu = \\mu_\\mathrm{s}" }, { "math_id": 19, "text": "\\mu_\\mathrm{s}" }, { "math_id": 20, "text": "\\mu = \\mu_\\mathrm{k}" }, { "math_id": 21, "text": "\\mu_\\mathrm{k}" }, { "math_id": 22, "text": "F_\\text{max} = \\mu_\\mathrm{s} F_\\text{n}" }, { "math_id": 23, "text": "F_\\text{max}" }, { "math_id": 24, "text": "F_{k} = \\mu_\\mathrm{k} F_{n}" }, { "math_id": 25, "text": "\\tan{\\theta} = \\mu_\\mathrm{s}" }, { "math_id": 26, "text": "\\theta = \\arctan{\\mu_\\mathrm{s}}" }, { "math_id": 27, "text": "\\theta" }, { "math_id": 28, "text": "E_{th} = \\int_C \\mathbf{F}_\\mathrm{fric}(\\mathbf{x}) \\cdot d\\mathbf{x}\\ = \\int_C \\mu_\\mathrm{k}\\ \\mathbf{F}_\\mathrm{n}(\\mathbf{x}) \\cdot d\\mathbf{x}, " }, { "math_id": 29, "text": "\\mathbf{F}_\\mathrm{fric}" }, { "math_id": 30, "text": "\\mathbf{F}_\\mathrm{n}" }, { "math_id": 31, "text": "\\mathbf{x}" } ]
https://en.wikipedia.org/wiki?curid=11062
1106213
Laplace distribution
Probability distribution In probability theory and statistics, the Laplace distribution is a continuous probability distribution named after Pierre-Simon Laplace. It is also sometimes called the double exponential distribution, because it can be thought of as two exponential distributions (with an additional location parameter) spliced together along the abscissa, although the term is also sometimes used to refer to the Gumbel distribution. The difference between two independent identically distributed exponential random variables is governed by a Laplace distribution, as is a Brownian motion evaluated at an exponentially distributed random time. Increments of Laplace motion or a variance gamma process evaluated over the time scale also have a Laplace distribution. Definitions. Probability density function. A random variable has a formula_2 distribution if its probability density function is formula_3 where formula_0 is a location parameter, and formula_1, which is sometimes referred to as the "diversity", is a scale parameter. If formula_4 and formula_5, the positive half-line is exactly an exponential distribution scaled by 1/2. The probability density function of the Laplace distribution is also reminiscent of the normal distribution; however, whereas the normal distribution is expressed in terms of the squared difference from the mean formula_0, the Laplace density is expressed in terms of the absolute difference from the mean. Consequently, the Laplace distribution has fatter tails than the normal distribution. It is a special case of the generalized normal distribution and the hyperbolic distribution. Continuous symmetric distributions that have exponential tails, like the Laplace distribution, but which have probability density functions that are differentiable at the mode include the logistic distribution, hyperbolic secant distribution, and the Champernowne distribution. Cumulative distribution function. The Laplace distribution is easy to integrate (if one distinguishes two symmetric cases) due to the use of the absolute value function. Its cumulative distribution function is as follows: formula_6 The inverse cumulative distribution function is given by formula_7 The raw moments of the distribution are given by formula_8 Properties. Probability of a Laplace being greater than another. Let formula_51 be independent Laplace random variables: formula_52 and formula_53, and suppose we want to compute formula_54. The probability of formula_55 can be reduced (using the scaling and translation properties of the Laplace distribution) to formula_56, where formula_57. This probability is equal to formula_58 When formula_59, both expressions are replaced by their limit as formula_60: formula_61 To compute the case for formula_62, note that formula_63 since formula_64 when formula_65. Relation to the exponential distribution. A Laplace random variable can be represented as the difference of two independent and identically distributed (iid) exponential random variables. One way to show this is by using the characteristic function approach. For any set of independent continuous random variables, for any linear combination of those variables, its characteristic function (which uniquely determines the distribution) can be obtained by multiplying the corresponding characteristic functions. Consider two i.i.d. random variables formula_15. The characteristic functions for formula_66 are formula_67 respectively.
On multiplying these characteristic functions (equivalent to the characteristic function of the sum of the random variables formula_68), the result is formula_69 This is the same as the characteristic function for formula_70, which is formula_71 Sargan distributions. Sargan distributions are a system of distributions of which the Laplace distribution is a core member. A formula_72th order Sargan distribution has density formula_73 for parameters formula_74. The Laplace distribution results for formula_75. Statistical inference. Given formula_76 independent and identically distributed samples formula_77, the maximum likelihood (MLE) estimator of formula_0 is the sample median, formula_78 The MLE estimator of formula_79 is the mean absolute deviation from the median, formula_80 revealing a link between the Laplace distribution and least absolute deviations. A correction for small samples can be applied as follows: formula_81 (see: exponential distribution#Parameter estimation). Occurrence and applications. The Laplacian distribution has been used in speech recognition to model priors on DFT coefficients and in JPEG image compression to model AC coefficients generated by a DCT. The Laplace distribution, being a composite or double distribution, is applicable in situations where the lower values originate under different external conditions than the higher ones so that they follow a different pattern. Random variate generation. Given a random variable formula_82 drawn from the uniform distribution in the interval formula_83, the random variable formula_84 has a Laplace distribution with parameters formula_0 and formula_79. This follows from the inverse cumulative distribution function given above. A formula_85 variate can also be generated as the difference of two i.i.d. formula_86 random variables. Equivalently, formula_87 can also be generated as the logarithm of the ratio of two i.i.d. uniform random variables. History. This distribution is often referred to as "Laplace's first law of errors". He published it in 1774, modeling the frequency of an error as an exponential function of its magnitude once its sign was disregarded. Laplace would later replace this model with his "second law of errors", based on the normal distribution, after the discovery of the central limit theorem. Keynes published a paper in 1911 based on his earlier thesis wherein he showed that the Laplace distribution minimised the absolute deviation from the median. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mu" }, { "math_id": 1, "text": "b > 0" }, { "math_id": 2, "text": "\\operatorname{Laplace}(\\mu, b)" }, { "math_id": 3, "text": "f(x \\mid \\mu, b) = \\frac{1}{2b} \\exp\\left( -\\frac{|x - \\mu|}{b} \\right)," }, { "math_id": 4, "text": "\\mu = 0" }, { "math_id": 5, "text": "b = 1" }, { "math_id": 6, "text": "\\begin{align}\nF(x) &= \\int_{-\\infty}^x \\!\\!f(u)\\,\\mathrm{d}u = \\begin{cases}\n \\frac12 \\exp \\left( \\frac{x-\\mu}{b} \\right) & \\mbox{if }x < \\mu \\\\\n 1-\\frac12 \\exp \\left( -\\frac{x-\\mu}{b} \\right) & \\mbox{if }x \\geq \\mu\n \\end{cases} \\\\\n&=\\tfrac{1}{2} + \\tfrac{1}{2} \\sgn(x-\\mu) \\left(1-\\exp \\left(-\\frac{|x-\\mu|}{b} \\right ) \\right ).\n\\end{align}" }, { "math_id": 7, "text": "F^{-1}(p) = \\mu - b\\,\\sgn(p-0.5)\\,\\ln(1 - 2|p-0.5|)." }, { "math_id": 8, "text": "\\mu_r' = \\bigg({\\frac{1}{2}}\\bigg) \\sum_{k=0}^r \\bigg[{\\frac{r!}{(r-k)!}} b^k \\mu^{(r-k)} \\{1 + (-1)^k\\}\\bigg]." }, { "math_id": 9, "text": "X \\sim \\textrm{Laplace}(\\mu, b)" }, { "math_id": 10, "text": "kX + c \\sim \\textrm{Laplace}(k\\mu + c, |k|b)" }, { "math_id": 11, "text": "X \\sim \\textrm{Laplace}(0, 1)" }, { "math_id": 12, "text": "bX \\sim \\textrm{Laplace}(0, b)" }, { "math_id": 13, "text": "X \\sim \\textrm{Laplace}(0, b)" }, { "math_id": 14, "text": "\\left|X\\right| \\sim \\textrm{Exponential}\\left(b^{-1}\\right)" }, { "math_id": 15, "text": "X, Y \\sim \\textrm{Exponential}(\\lambda)" }, { "math_id": 16, "text": "X - Y \\sim \\textrm{Laplace}\\left(0, \\lambda^{-1}\\right)" }, { "math_id": 17, "text": "\\left|X - \\mu\\right| \\sim \\textrm{Exponential}(b^{-1})" }, { "math_id": 18, "text": "X \\sim \\textrm{EPD}(\\mu, b, 1)" }, { "math_id": 19, "text": "X_1, ...,X_4 \\sim \\textrm{N}(0, 1)" }, { "math_id": 20, "text": "X_1X_2 - X_3X_4 \\sim \\textrm{Laplace}(0, 1)" }, { "math_id": 21, "text": "(X_1^2 - X_2^2 + X_3^2 - X_4^2)/2 \\sim \\textrm{Laplace}(0, 1)" }, { "math_id": 22, "text": "X_i \\sim \\textrm{Laplace}(\\mu, b)" }, { "math_id": 23, "text": "\\frac{\\displaystyle 2}{b} \\sum_{i=1}^n |X_i-\\mu| \\sim \\chi^2(2n)" }, { "math_id": 24, "text": "X, Y \\sim \\textrm{Laplace}(\\mu, b)" }, { "math_id": 25, "text": "\\tfrac{|X-\\mu|}{|Y-\\mu|} \\sim \\operatorname{F}(2,2)" }, { "math_id": 26, "text": "X, Y \\sim \\textrm{U}(0, 1)" }, { "math_id": 27, "text": "\\log(X/Y) \\sim \\textrm{Laplace}(0, 1)" }, { "math_id": 28, "text": "X \\sim \\textrm{Exponential}(\\lambda)" }, { "math_id": 29, "text": "Y \\sim \\textrm{Bernoulli}(0.5)" }, { "math_id": 30, "text": "X" }, { "math_id": 31, "text": "X(2Y - 1) \\sim \\textrm{Laplace}\\left(0, \\lambda^{-1}\\right)" }, { "math_id": 32, "text": "Y \\sim \\textrm{Exponential}(\\nu)" }, { "math_id": 33, "text": "\\lambda X - \\nu Y \\sim \\textrm{Laplace}(0, 1)" }, { "math_id": 34, "text": "Y \\sim \\textrm{Exponential}(\\lambda)" }, { "math_id": 35, "text": "XY \\sim \\textrm{Laplace}(0, 1/\\lambda)" }, { "math_id": 36, "text": "V \\sim \\textrm{Exponential}(1)" }, { "math_id": 37, "text": "Z \\sim N(0, 1)" }, { "math_id": 38, "text": "V" }, { "math_id": 39, "text": "X = \\mu + b \\sqrt{2 V}Z \\sim \\mathrm{Laplace}(\\mu,b)" }, { "math_id": 40, "text": "X \\sim \\textrm{GeometricStable}(2, 0, \\lambda, 0)" }, { "math_id": 41, "text": "X \\sim \\textrm{Laplace}(0, \\lambda)" }, { "math_id": 42, "text": "X|Y \\sim \\textrm{N}(\\mu,Y^2)" }, { "math_id": 43, "text": "Y \\sim \\textrm{Rayleigh}(b)" }, { "math_id": 44, "text": "Y^2 \\sim \\textrm{Gamma}(1,2b^2) " }, { "math_id": 45, "text": 
"\\textrm{E}(Y^2)=2b^2" }, { "math_id": 46, "text": "\\textrm{Exp}(1/(2b^2))" }, { "math_id": 47, "text": "n \\ge 1" }, { "math_id": 48, "text": "X_i, Y_i \\sim \\Gamma\\left(\\frac{1}{n}, b\\right)" }, { "math_id": 49, "text": "k, \\theta" }, { "math_id": 50, "text": "\\sum_{i=1}^n \\left( \\frac{\\mu}{n} + X_i - Y_i\\right) \\sim \\textrm{Laplace}(\\mu, b)" }, { "math_id": 51, "text": "X, Y" }, { "math_id": 52, "text": "X \\sim \\textrm{Laplace}(\\mu_X, b_X)" }, { "math_id": 53, "text": "Y \\sim \\textrm{Laplace}(\\mu_Y, b_Y)" }, { "math_id": 54, "text": "P(X>Y)" }, { "math_id": 55, "text": "P(X > Y)" }, { "math_id": 56, "text": "P(\\mu + bZ_1 > Z_2)" }, { "math_id": 57, "text": "Z_1, Z_2 \\sim \\textrm{Laplace}(0, 1)" }, { "math_id": 58, "text": "P(\\mu + bZ_1 > Z_2) = \\begin{cases}\n\\frac{b^2 e^{\\mu/b} - e^\\mu}{2(b^2 -1)}, & \\text{when } \\mu < 0 \\\\\n1-\\frac{ b^2 e^{-\\mu/b} - e^{-\\mu}}{2(b^2 -1)}, & \\text{when } \\mu > 0 \\\\\n\\end{cases}" }, { "math_id": 59, "text": " b = 1 " }, { "math_id": 60, "text": " b \\to 1" }, { "math_id": 61, "text": "P(\\mu + Z_1 > Z_2) = \\begin{cases}\ne^\\mu\\frac{(2-\\mu)}4, & \\text{when } \\mu < 0 \\\\\n1-e^{-\\mu}\\frac{(2+\\mu)}4, & \\text{when } \\mu > 0 \\\\\n\\end{cases}" }, { "math_id": 62, "text": " \\mu > 0 " }, { "math_id": 63, "text": "P(\\mu + Z_1 > Z_2) = 1 - P(\\mu + Z_1 < Z_2) = 1 - P(-\\mu - Z_1 > -Z_2) = 1 - P(-\\mu + Z_1 > Z_2)" }, { "math_id": 64, "text": " Z \\sim -Z " }, { "math_id": 65, "text": "Z \\sim \\textrm{Laplace}(0, 1) " }, { "math_id": 66, "text": "X, -Y" }, { "math_id": 67, "text": "\\frac{\\lambda }{-i t+\\lambda }, \\quad \\frac{\\lambda }{i t+\\lambda }" }, { "math_id": 68, "text": "X + (-Y)" }, { "math_id": 69, "text": "\\frac{\\lambda ^2}{(-i t+\\lambda ) (i t+\\lambda )} = \\frac{\\lambda ^2}{t^2+\\lambda ^2}." }, { "math_id": 70, "text": "Z \\sim \\textrm{Laplace}(0,1/\\lambda)" }, { "math_id": 71, "text": "\\frac{1}{1+\\frac{t^2}{\\lambda ^2}}." }, { "math_id": 72, "text": "p" }, { "math_id": 73, "text": "f_p(x)=\\tfrac{1}{2} \\exp(-\\alpha |x|) \\frac{\\displaystyle 1+\\sum_{j=1}^p \\beta_j \\alpha^j |x|^j}{\\displaystyle 1+\\sum_{j=1}^p j!\\beta_j}," }, { "math_id": 74, "text": "\\alpha \\ge 0, \\beta_j \\ge 0" }, { "math_id": 75, "text": "p = 0" }, { "math_id": 76, "text": "n" }, { "math_id": 77, "text": "x_1, x_2, ..., x_n" }, { "math_id": 78, "text": "\\hat{\\mu} = \\mathrm{med}(x)." }, { "math_id": 79, "text": "b" }, { "math_id": 80, "text": "\\hat{b} = \\frac{1}{n} \\sum_{i = 1}^{n} |x_i - \\hat{\\mu}|." }, { "math_id": 81, "text": "\\hat{b}^* = \\hat{b} \\cdot n/(n-2)" }, { "math_id": 82, "text": "U" }, { "math_id": 83, "text": "\\left(-1/2, 1/2\\right)" }, { "math_id": 84, "text": "X=\\mu - b\\,\\sgn(U)\\,\\ln(1 - 2|U|)" }, { "math_id": 85, "text": "\\textrm{Laplace}(0, b)" }, { "math_id": 86, "text": "\\textrm{Exponential}(1/b)" }, { "math_id": 87, "text": "\\textrm{Laplace}(0,1)" } ]
https://en.wikipedia.org/wiki?curid=1106213
11063114
Linear relation
Type of mathematical equation In linear algebra, a linear relation, or simply relation, between elements of a vector space or a module is a linear equation that has these elements as a solution. More precisely, if formula_0 are elements of a (left) module M over a ring R (the case of a vector space over a field is a special case), a relation between formula_0 is a sequence formula_1 of elements of R such that formula_2 The relations between formula_0 form a module. One is generally interested in the case where formula_0 is a generating set of a finitely generated module M, in which case the module of the relations is often called a syzygy module of M. The syzygy module depends on the choice of a generating set, but it is unique up to the direct sum with a free module. That is, if formula_3 and formula_4 are syzygy modules corresponding to two generating sets of the same module, then they are stably isomorphic, which means that there exist two free modules formula_5 and formula_6 such that formula_7 and formula_8 are isomorphic. Higher order syzygy modules are defined recursively: a first syzygy module of a module M is simply its syzygy module. For "k" > 1, a kth syzygy module of M is a syzygy module of a ("k" – 1)-th syzygy module. Hilbert's syzygy theorem states that, if formula_9 is a polynomial ring in n indeterminates over a field, then every nth syzygy module is free. The case "n" = 0 is the fact that every finite dimensional vector space has a basis, and the case "n" = 1 is the fact that "K"["x"] is a principal ideal domain and that every submodule of a finitely generated free "K"["x"] module is also free. The construction of higher order syzygy modules is generalized as the definition of free resolutions, which allows restating Hilbert's syzygy theorem as: a polynomial ring in n indeterminates over a field has global homological dimension n. If a and b are two elements of the commutative ring R, then ("b", –"a") is a relation between a and b that is said to be "trivial", since "ba" – "ab" = 0. The "module of trivial relations" of an ideal is the submodule of the first syzygy module of the ideal that is generated by the trivial relations between the elements of a generating set of an ideal. The concept of trivial relations can be generalized to higher order syzygy modules, and this leads to the concept of the Koszul complex of an ideal, which provides information on the non-trivial relations between the generators of an ideal. Basic definitions. Let R be a ring, and M be a left R-module. A "linear relation", or simply a "relation" between k elements formula_10 of M is a sequence formula_11 of elements of R such that formula_12 If formula_10 is a generating set of M, the relation is often called a "syzygy" of M. It makes sense to call it a syzygy of formula_13 without regard to formula_14 because, although the syzygy module depends on the chosen generating set, most of its properties are independent; see § Stable properties, below. If the ring R is Noetherian, or at least coherent, and if M is finitely generated, then the syzygy module is also finitely generated. A syzygy module of this syzygy module is a "second syzygy module" of M. Continuing this way one can define a kth syzygy module for every positive integer k. Hilbert's syzygy theorem asserts that, if M is a finitely generated module over a polynomial ring formula_15 over a field, then any nth syzygy module is a free module. Stable properties.
Generally speaking, in the language of K-theory, a property is "stable" if it becomes true by making a direct sum with a sufficiently large free module. A fundamental property of syzygy modules is that they are "stably independent" of the choice of generating sets for the modules involved. The following result is the basis of these stable properties. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Proposition — Let formula_16 be a generating set of a module M, and let formula_19, for "i" = 1, …, "n", be other elements of M. The module of the relations between formula_17 is the direct sum of the module of the relations between formula_18 and a free module of rank "n". "Proof." As formula_16 is a generating set, each formula_19 can be written formula_20 This provides a relation formula_21 between formula_22 Now, if formula_23 is any relation, then formula_24 is a relation between the formula_25 only. In other words, every relation between formula_17 is a sum of a relation between formula_18 and a linear combination of the formula_21s. It is straightforward to prove that this decomposition is unique, and this proves the result. formula_26 This proves that the first syzygy module is "stably unique". More precisely, given two generating sets of a module M, if formula_3 and formula_4 are the corresponding modules of relations, then there exist two free modules formula_5 and formula_6 such that formula_7 and formula_8 are isomorphic. For proving this, it suffices to apply the preceding proposition twice, obtaining two decompositions of the module of the relations between the union of the two generating sets. For obtaining a similar result for higher syzygy modules, it remains to prove that, if M is any module, and L is a free module, then M and "M" ⊕ "L" have isomorphic syzygy modules. It suffices to consider a generating set of "M" ⊕ "L" that consists of a generating set of M and a basis of L. For every relation between the elements of this generating set, the coefficients of the basis elements of L are all zero, and the syzygies of "M" ⊕ "L" are exactly the syzygies of M extended with zero coefficients. This completes the proof of the following theorem. &lt;templatestyles src="Math_theorem/styles.css" /&gt; Theorem — For every positive integer k, the kth syzygy module of a module M is unique up to stable isomorphism: any two kth syzygy modules of M become isomorphic after taking the direct sum of each with a suitable free module. Relationship with free resolutions. Given a generating set formula_27 of an R-module, one can consider a free module L with basis formula_28 where formula_29 are new indeterminates. This defines an exact sequence formula_30 where the left arrow is the linear map that maps each formula_31 to the corresponding formula_32 The kernel of this left arrow is a first syzygy module of M. One can repeat this construction with this kernel in place of M. Repeating this construction again and again, one gets a long exact sequence formula_33 where all formula_34 are free modules. By definition, such a long exact sequence is a free resolution of M. For every "k" ≥ 1, the kernel formula_35 of the arrow starting from formula_36 is a kth syzygy module of M. It follows that the study of free resolutions is the same as the study of syzygy modules. A free resolution is "finite" of length ≤ "n" if formula_37 is free. In this case, one can take formula_38 and formula_39 (the zero module) for every "k" > "n". This allows restating Hilbert's syzygy theorem: If formula_40 is a polynomial ring in n indeterminates over a field K, then every free resolution is finite of length at most n. The global dimension of a commutative Noetherian ring is either infinite, or the minimal n such that every free resolution is finite of length at most n. A commutative Noetherian ring is regular if its global dimension is finite. In this case, the global dimension equals its Krull dimension.
So, Hilbert's syzygy theorem may be restated in a very short sentence that hides much mathematics: "A polynomial ring over a field is a regular ring." Trivial relations. In a commutative ring R, one has always "ab" – "ba" = 0. This implies "trivially" that ("b", –"a") is a linear relation between a and b. Therefore, given a generating set formula_41 of an ideal I, one calls trivial relation or trivial syzygy every element of the submodule of the syzygy module that is generated by these trivial relations between two generating elements. More precisely, the module of trivial syzygies is generated by the relations formula_42 such that formula_43, formula_44, and formula_45 otherwise. History. The word "syzygy" came into mathematics with the work of Arthur Cayley, who used it in the theory of resultants and discriminants. As the word syzygy was used in astronomy to denote a linear relation between planets, Cayley used it to denote linear relations between minors of a matrix, such as, in the case of a 2×3 matrix: formula_46 Then, the word "syzygy" was popularized (among mathematicians) by David Hilbert in his 1890 article, which contains three fundamental theorems on polynomials: Hilbert's syzygy theorem, Hilbert's basis theorem and Hilbert's Nullstellensatz. In his article, Cayley makes use, in a special case, of what was later called the Koszul complex, after a similar construction in differential geometry by the mathematician Jean-Louis Koszul. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e_1,\\dots,e_n" }, { "math_id": 1, "text": "(f_1,\\dots, f_n)" }, { "math_id": 2, "text": "f_1e_1+\\dots+f_ne_n=0." }, { "math_id": 3, "text": "S_1" }, { "math_id": 4, "text": "S_2" }, { "math_id": 5, "text": "L_1" }, { "math_id": 6, "text": "L_2" }, { "math_id": 7, "text": "S_1\\oplus L_1" }, { "math_id": 8, "text": "S_2\\oplus L_2" }, { "math_id": 9, "text": "R=K[x_1,\\dots,x_n]" }, { "math_id": 10, "text": "x_1, \\dots, x_k" }, { "math_id": 11, "text": "(a_1, \\dots, a_k)" }, { "math_id": 12, "text": "a_1x_1+\\dots+ a_kx_k=0." }, { "math_id": 13, "text": "M" }, { "math_id": 14, "text": "x_1,..,x_k" }, { "math_id": 15, "text": "K[x_1, \\dots, x_n]" }, { "math_id": 16, "text": "\\{x_1,\\dots, x_m\\}" }, { "math_id": 17, "text": "x_1,\\dots, x_m, y_1,\\dots, y_n" }, { "math_id": 18, "text": "x_1,\\dots, x_m," }, { "math_id": 19, "text": "y_i" }, { "math_id": 20, "text": "\\textstyle y_i=\\sum \\alpha_{i,j}x_j." }, { "math_id": 21, "text": "r_i" }, { "math_id": 22, "text": "x_1,\\dots, x_m, y_1,\\dots, y_n." }, { "math_id": 23, "text": "r=(a_1, \\dots,a_m, b_1,\\dots,b_n)" }, { "math_id": 24, "text": "\\textstyle r-\\sum b_ir_i" }, { "math_id": 25, "text": "x_1,\\dots, x_m" }, { "math_id": 26, "text": "\\blacksquare" }, { "math_id": 27, "text": "g_1,\\dots,g_n" }, { "math_id": 28, "text": "G_1,\\dots,G_n," }, { "math_id": 29, "text": "G_1,\\dots,G_n" }, { "math_id": 30, "text": "L\\longrightarrow M \\longrightarrow 0," }, { "math_id": 31, "text": "G_i" }, { "math_id": 32, "text": "g_i." }, { "math_id": 33, "text": "\\cdots\\longrightarrow L_k\\longrightarrow L_{k-1} \\longrightarrow \\cdots\\longrightarrow L_0 \\longrightarrow M \\longrightarrow 0," }, { "math_id": 34, "text": "L_i" }, { "math_id": 35, "text": "S_k" }, { "math_id": 36, "text": "L_{k-1}" }, { "math_id": 37, "text": "S_n" }, { "math_id": 38, "text": "L_n = S_n," }, { "math_id": 39, "text": "L_k = 0" }, { "math_id": 40, "text": "R=K[x_1, \\dots, x_n]" }, { "math_id": 41, "text": "g_1, \\dots,g_k" }, { "math_id": 42, "text": "r_{i,j}= (x_1,\\dots,x_r)" }, { "math_id": 43, "text": "x_i=g_j," }, { "math_id": 44, "text": "x_j=-g_i," }, { "math_id": 45, "text": "x_h=0" }, { "math_id": 46, "text": "a\\,\\begin{vmatrix}b&c\\\\e&f\\end{vmatrix} - b\\,\\begin{vmatrix}a&c\\\\d&f\\end{vmatrix} +c\\,\\begin{vmatrix}a&b\\\\d&e\\end{vmatrix}=0." } ]
https://en.wikipedia.org/wiki?curid=11063114
11063933
Rule 184
Elementary cellular automaton Rule 184 is a one-dimensional binary cellular automaton rule, notable for solving the majority problem as well as for its ability to simultaneously describe several, seemingly quite different, particle systems: it can be interpreted as a simple model of traffic flow in a single lane of a highway, of the deposition of particles onto an irregular surface, and of ballistic annihilation, in which moving particles and antiparticles destroy each other when they collide. The apparent contradiction between these descriptions is resolved by different ways of associating features of the automaton's state with particles. The name of Rule 184 is a Wolfram code that defines the evolution of its states. The earliest research on Rule 184 includes the work of Krug and Spohn, who already describe all three types of particle system modeled by Rule 184. Definition. A state of the Rule 184 automaton consists of a one-dimensional array of cells, each containing a binary value (0 or 1). In each step of its evolution, the Rule 184 automaton applies the following rule to each of the cells in the array, simultaneously for all cells, to determine the new state of the cell:

current pattern (left cell, cell, right cell): 111 110 101 100 011 010 001 000
new state for the center cell: 1 0 1 1 1 0 0 0

An entry in this table defines the new state of each cell as a function of the previous state and the previous values of the neighboring cells on either side. The name for this rule, Rule 184, is the Wolfram code describing the state table above: the bottom row of the table, 10111000, when viewed as a binary number, is equal to the decimal number 184. The rule set for Rule 184 may also be described intuitively, in several different ways; most simply, at each step, wherever a cell in state 1 has a cell in state 0 immediately to its right, the two values swap places, so that each 1 moves one position rightwards whenever there is room for it to do so. Dynamics and majority classification. From the descriptions of the rules above, two important properties of its dynamics may immediately be seen. First, in Rule 184, for any finite set of cells with periodic boundary conditions, the number of 1s and the number of 0s in a pattern remains invariant throughout the pattern's evolution. Rule 184 and its reflection are the only nontrivial elementary cellular automata to have this property of number conservation. Similarly, if the density of 1s is well-defined for an infinite array of cells, it remains invariant as the automaton carries out its steps. And second, although Rule 184 is not symmetric under left-right reversal, it does have a different symmetry: reversing left and right and at the same time swapping the roles of the 0 and 1 symbols produces a cellular automaton with the same update rule. Patterns in Rule 184 typically quickly stabilize, either to a pattern in which the cell states move in lockstep one position leftwards at each step, or to a pattern that moves one position rightwards at each step. Specifically, if the initial density of cells with state 1 is less than 50%, the pattern stabilizes into clusters of cells in state 1, spaced two units apart, with the clusters separated by blocks of cells in state 0. Patterns of this type move rightwards. If, on the other hand, the initial density is greater than 50%, the pattern stabilizes into clusters of cells in state 0, spaced two units apart, with the clusters separated by blocks of cells in state 1, and patterns of this type move leftwards. If the density is exactly 50%, the initial pattern stabilizes (more slowly) to a pattern that can equivalently be viewed as moving either leftwards or rightwards at each step: an alternating sequence of 0s and 1s. The majority problem is the problem of constructing a cellular automaton that, when run on any finite set of cells, can compute the value held by a majority of its cells. In a sense, Rule 184 solves this problem, as follows.
If Rule 184 is run on a finite set of cells with periodic boundary conditions, with an unequal number of 0s and 1s, then each cell will eventually see two consecutive states of the majority value infinitely often, but will see two consecutive states of the minority value only finitely many times. The majority problem cannot be solved perfectly if it is required that all cells eventually stabilize to the majority state, but the Rule 184 solution avoids this impossibility result by relaxing the criterion by which the automaton recognizes a majority. Traffic flow. If one interprets each 1-cell in Rule 184 as containing a particle, these particles behave in many ways similarly to automobiles in a single lane of traffic: they move forward at a constant speed if there is open space in front of them, and otherwise they stop. Traffic models such as Rule 184 and its generalizations that discretize both space and time are commonly called "particle-hopping models". Although very primitive, the Rule 184 model of traffic flow already predicts some of the familiar emergent features of real traffic: clusters of freely moving cars separated by stretches of open road when traffic is light, and waves of stop-and-go traffic when it is heavy. It is difficult to pinpoint the first use of Rule 184 for traffic flow simulation, in part because the focus of research in this area has been less on achieving the greatest level of mathematical abstraction and more on verisimilitude: even the earlier papers on cellular automaton based traffic flow simulation typically make the model more complex in order to more accurately simulate real traffic. Nevertheless, Rule 184 is fundamental to traffic simulation by cellular automata. One group of authors states, for instance, that "the basic cellular automaton model describing a one-dimensional traffic flow problem is rule 184", and another writes: "Much work using CA models for traffic is based on this model." Several authors describe one-dimensional models with vehicles moving at multiple speeds; such models degenerate to Rule 184 in the single-speed case. Other authors extend the Rule 184 dynamics to two-lane highway traffic with lane changes; their model shares with Rule 184 the property that it is symmetric under simultaneous left-right and 0-1 reversal. Still others describe a two-dimensional city grid model in which the dynamics of individual lanes of traffic is essentially that of Rule 184. In-depth surveys of cellular automaton traffic modeling and the associated statistical mechanics are available in the literature. When viewing Rule 184 as a traffic model, it is natural to consider the average speed of the vehicles. When the density of traffic is less than 50%, this average speed is simply one unit of distance per unit of time: after the system stabilizes, no car ever slows. However, when the density is a number ρ greater than 1/2, the average speed of traffic is formula_0. Thus, the system exhibits a second-order kinetic phase transition at "ρ" = 1/2. When Rule 184 is interpreted as a traffic model, and started from a random configuration whose density is at this critical value "ρ" = 1/2, then the average speed approaches its stationary limit as the square root of the number of steps. In contrast, for random configurations whose density is not at the critical value, the approach to the limiting speed is exponential. Surface deposition. As shown in the figure, and as originally described by Krug and Spohn, Rule 184 may be used to model deposition of particles onto a surface.
In this model, one has a set of particles that occupy a subset of the positions in a square lattice oriented diagonally (the darker particles in the figure). If a particle is present at some position of the lattice, the lattice positions below and to the right, and below and to the left of the particle must also be filled, so the filled part of the lattice extends infinitely downward to the left and right. The boundary between filled and unfilled positions (the thin black line in the figure) is interpreted as modeling a surface, onto which more particles may be deposited. At each time step, the surface grows by the deposition of new particles in each local minimum of the surface; that is, at each position where it is possible to add one new particle that has existing particles below it on both sides (the lighter particles in the figure). To model this process by Rule 184, observe that the boundary between filled and unfilled lattice positions can be marked by a polygonal line, the segments of which separate adjacent lattice positions and have slopes +1 and −1. Model a segment with slope +1 by an automaton cell with state 0, and a segment with slope −1 by an automaton cell with state 1. The local minima of the surface are the points where a segment of slope −1 lies to the left of a segment of slope +1; that is, in the automaton, a position where a cell with state 1 lies to the left of a cell with state 0. Adding a particle to that position corresponds to changing the states of these two adjacent cells from 1,0 to 0,1, so advancing the polygonal line. This is exactly the behavior of Rule 184. Related work on this model concerns deposition in which the arrival times of additional particles are random, rather than having particles arrive at all local minima simultaneously. These stochastic growth processes can be modeled as an asynchronous cellular automaton. Ballistic annihilation. Ballistic annihilation describes a process by which moving particles and antiparticles annihilate each other when they collide. In the simplest version of this process, the system consists of a single type of particle and antiparticle, moving at equal speeds in opposite directions in a one-dimensional medium. This process can be modeled by Rule 184, as follows. The particles are modeled as points that are aligned, not with the cells of the automaton, but rather with the interstices between cells. Two consecutive cells that both have state 0 model a particle at the space between these two cells that moves rightwards one cell at each time step. Symmetrically, two consecutive cells that both have state 1 model an antiparticle that moves leftwards one cell at each time step. The remaining possibilities for two consecutive cells are that they both have differing states; this is interpreted as modeling a background material without any particles in it, through which the particles move. With this interpretation, the particles and antiparticles interact by ballistic annihilation: when a rightwards-moving particle and a leftwards-moving antiparticle meet, the result is a region of background from which both particles have vanished, without any effect on any other nearby particles. The behavior of certain other systems, such as one-dimensional cyclic cellular automata, can also be described in terms of ballistic annihilation. 
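The update rule is short enough that the particle interpretations described above are easy to experiment with directly. The sketch below (Python; the lattice size, initial density and number of steps are arbitrary choices for the example, not values taken from the article) implements one synchronous Rule 184 step on a ring of cells and checks the conservation of the number of 1s noted earlier.

```python
# One synchronous Rule 184 step with periodic boundaries.  A cell becomes 1 if
# it is currently 1 with a 1 to its right (blocked particle), or currently 0
# with a 1 to its left (a particle moves in); this reproduces the rule table
# given earlier (every "10" pair becomes "01").

import random

def rule184_step(cells):
    n = len(cells)
    return [
        1 if (cells[i] == 1 and cells[(i + 1) % n] == 1) or
             (cells[i] == 0 and cells[(i - 1) % n] == 1) else 0
        for i in range(n)
    ]

random.seed(0)
cells = [1 if random.random() < 0.3 else 0 for _ in range(60)]  # light traffic
ones = sum(cells)

for _ in range(100):
    cells = rule184_step(cells)

assert sum(cells) == ones        # the number of 1s is conserved
print("".join(map(str, cells)))  # typically free flow: no two adjacent 1s left
```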
There is a technical restriction on the particle positions for the ballistic annihilation view of Rule 184 that does not arise in these other systems, stemming from the alternating pattern of the background: in the particle system corresponding to a Rule 184 state, if two consecutive particles are both of the same type they must be an odd number of cells apart, while if they are of opposite types they must be an even number of cells apart. However this parity restriction does not play a role in the statistical behavior of this system. uses a similar but more complicated particle-system view of Rule 184: he not only views alternating 0–1 regions as background, but also considers regions consisting solely of a single state to be background as well. Based on this view he describes seven different particles formed by boundaries between regions, and classifies their possible interactions. See for a more general survey of the cellular automaton models of annihilation processes. Context-free parsing. In his book "A New Kind of Science", Stephen Wolfram points out that rule 184, when run on patterns with density 50%, can be interpreted as parsing the context-free language describing strings formed from nested parentheses. This interpretation is closely related to the ballistic annihilation view of rule 184: in Wolfram's interpretation, an open parenthesis corresponds to a left-moving particle while a close parenthesis corresponds to a right-moving particle. Notes. References.
[ { "math_id": 0, "text": "\\tfrac{1-\\rho}{\\rho}" } ]
https://en.wikipedia.org/wiki?curid=11063933
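The traffic interpretation of Rule 184 described above can be checked with a short simulation. The following is an illustrative NumPy sketch (not part of the original article); the function names and parameter choices are ours, and the stationary speed min(1, (1 − ρ)/ρ) used for comparison is the one quoted in the text. Near ρ = 1/2 the transient is long, as the article notes, so a generous number of warm-up steps is used.

import numpy as np

def rule184_step(cells):
    """One synchronous update of Rule 184 on a ring (periodic boundaries).
    An occupied cell stays occupied only if the cell ahead is occupied;
    an empty cell becomes occupied if the cell behind it is occupied."""
    ahead = np.roll(cells, -1)
    behind = np.roll(cells, 1)
    return np.where(cells == 1, ahead, behind)

def average_speed(density, n=1000, transient=2000, measured=200, seed=0):
    """Fraction of cars that advance per step, averaged after an initial transient."""
    rng = np.random.default_rng(seed)
    cars = int(round(density * n))
    cells = np.zeros(n, dtype=int)
    cells[rng.choice(n, size=cars, replace=False)] = 1
    for _ in range(transient):
        cells = rule184_step(cells)
    moved = 0
    for _ in range(measured):
        moved += int(np.sum((cells == 1) & (np.roll(cells, -1) == 0)))
        cells = rule184_step(cells)
    return moved / (measured * cars)

for rho in (0.3, 0.5, 0.6, 0.8):
    predicted = min(1.0, (1.0 - rho) / rho)
    print(rho, average_speed(rho), predicted)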
11063979
Tangent measure
In measure theory, tangent measures are used to study the local behavior of Radon measures, in much the same way as tangent spaces are used to study the local behavior of differentiable manifolds. Tangent measures (introduced by David Preiss in his study of rectifiable sets) are a useful tool in geometric measure theory. For example, they are used in proving Marstrand's theorem and Preiss' theorem. Definition. Consider a Radon measure "μ" defined on an open subset Ω of "n"-dimensional Euclidean space R"n" and let "a" be an arbitrary point in Ω. We can "zoom in" on a small open ball of radius "r" around "a", "B""r"("a"), via the transformation formula_0 which enlarges the ball of radius "r" about "a" to a ball of radius 1 centered at 0. With this, we may now zoom in on how "μ" behaves on "B""r"("a") by looking at the push-forward measure defined by formula_1 where formula_2 As "r" gets smaller, this transformation on the measure "μ" spreads out and enlarges the portion of "μ" supported around the point "a". We can get information about our measure around "a" by looking at what these measures tend to look like in the limit as "r" approaches zero. Definition. A "tangent measure" of a Radon measure "μ" at the point "a" is a second Radon measure "ν" such that there exist sequences of positive numbers "c""i" > 0 and decreasing radii "r""i" → 0 such that formula_3 where the limit is taken in the weak-∗ topology, i.e., for any continuous function "φ" with compact support in Ω, formula_4 We denote the set of tangent measures of "μ" at "a" by Tan("μ", "a"). Existence. The set Tan("μ", "a") of tangent measures of a measure "μ" at a point "a" in the support of "μ" is nonempty under mild conditions on "μ". By the weak compactness of Radon measures, Tan("μ", "a") is nonempty if one of the following conditions holds: formula_5 or, for some exponent "s" with formula_7, formula_6. Properties. The collection of tangent measures at a point is closed under two types of scaling: if formula_8 and formula_9, then formula_10 (scaling of the measure); and if formula_8 and formula_11, then formula_12 (rescaling of the underlying space). Cones of measures were also defined by Preiss. At typical points in the support of a measure, the cone of tangent measures is also closed under translations: if formula_13 and "x" lies in the support of "ν", then formula_14. Related concepts. There is an associated notion of the tangent space of a measure. A "k"-dimensional subspace "P" of R"n" is called the "k"-dimensional tangent space of "μ" at "a" ∈ Ω if — after appropriate rescaling — "μ" "looks like" "k"-dimensional Hausdorff measure "H""k" on "P". More precisely: Definition. "P" is the "k"-"dimensional tangent space" of "μ" at "a" if there is a "θ" > 0 such that formula_15 where "μ""a","r" is the translated and rescaled measure given by formula_16 The number "θ" is called the "multiplicity" of "μ" at "a", and the tangent space of "μ" at "a" is denoted T"a"("μ"). Further study of tangent measures and tangent spaces leads to the notion of a varifold. References.
[ { "math_id": 0, "text": "T_{a,r}(x)=\\frac{x-a}{r}," }, { "math_id": 1, "text": " T_{a,r \\#}\\mu(A)=\\mu(a+rA)" }, { "math_id": 2, "text": "a+rA=\\{a+rx:x\\in A\\}." }, { "math_id": 3, "text": "\\lim_{i\\rightarrow\\infty} c_{i}T_{a,r_{i}\\#}\\mu =\\nu " }, { "math_id": 4, "text": " \\lim_{i\\rightarrow\\infty}\\int_{\\Omega} \\varphi \\, \\mathrm{d} (c_{i}T_{a,r_{i}\\#}\\mu)=\\int_{\\Omega} \\varphi \\, \\mathrm{d} \\nu." }, { "math_id": 5, "text": "\\limsup_{r\\downarrow 0} \\frac{\\mu(B(a,2r))}{\\mu(B(a,r))}<\\infty" }, { "math_id": 6, "text": "0<\\limsup_{r\\downarrow 0}\\frac{\\mu(B(a,r))}{r^s}<\\infty" }, { "math_id": 7, "text": "0<s<\\infty" }, { "math_id": 8, "text": "\\nu\\in \\mathrm{Tan}(\\mu,a)" }, { "math_id": 9, "text": "c>0" }, { "math_id": 10, "text": "c\\nu\\in \\mathrm{Tan}(\\mu,a)" }, { "math_id": 11, "text": "r>0" }, { "math_id": 12, "text": "T_{0,r\\#}\\nu \\in \\mathrm{Tan}(\\mu,a)" }, { "math_id": 13, "text": "\\nu\\in\\mathrm{Tan}(\\mu,a)" }, { "math_id": 14, "text": "T_{x,1\\#}\\nu\\in\\mathrm{Tan}(\\mu,a)" }, { "math_id": 15, "text": "\\mu_{a, r} \\xrightarrow[r \\to 0]{*} \\theta H^{k} \\lfloor_{P}," }, { "math_id": 16, "text": "\\mu_{a, r} (A) = \\frac1{r^{n - 1}} \\mu(a + r A)." } ]
https://en.wikipedia.org/wiki?curid=11063979
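Two standard examples may help make the definition of a tangent measure concrete (they are well-known facts of geometric measure theory, not taken from the article above):

\[
\text{(i) For Lebesgue measure } \mathcal{L}^n \text{ on } \mathbb{R}^n:\qquad
T_{a,r\#}\mathcal{L}^n(A) \;=\; \mathcal{L}^n(a+rA) \;=\; r^n\,\mathcal{L}^n(A),
\]
\[
\text{so taking } c_i = r_i^{-n} \text{ gives } c_i\,T_{a,r_i\#}\mathcal{L}^n = \mathcal{L}^n \text{ for every } r_i,
\qquad \operatorname{Tan}(\mathcal{L}^n,a)=\{c\,\mathcal{L}^n : c>0\} \text{ for every } a.
\]
\[
\text{(ii) For } \mu = \mathcal{H}^1\llcorner\Gamma, \text{ with } \Gamma \text{ a } C^1 \text{ curve and } a\in\Gamma:\qquad
\operatorname{Tan}(\mu,a)=\{c\,\mathcal{H}^1\llcorner L_a : c>0\},
\]
where \(L_a\) is the line through the origin parallel to the tangent of \(\Gamma\) at \(a\); under the blow-ups the curve flattens onto its tangent line.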
11064899
Ballistic limit
The ballistic limit or limit velocity is the velocity required for a particular projectile to reliably (at least 50% of the time) penetrate a particular piece of material. In other words, a given projectile will generally not pierce a given target when the projectile velocity is lower than the ballistic limit. The term "ballistic limit" is used specifically in the context of armor; "limit velocity" is used in other contexts. The ballistic limit equation for laminates, as derived by Reid and Wen, is as follows: formula_0 where formula_1 is the ballistic limit velocity and formula_2, formula_3, formula_4, formula_5, formula_6 and formula_7 are constants and parameters of the projectile and the laminate target. Additionally, the ballistic limit for small-caliber projectiles penetrating homogeneous armor, as given by TM5-855-1, is: formula_8 where formula_9 is the ballistic limit and formula_10, formula_11, formula_12 and formula_13 are parameters of the projectile, the armor plate, and the impact geometry. References.
[ { "math_id": 0, "text": "V_b=\\frac{\\pi\\,\\Gamma\\,\\sqrt{\\rho_t\\,\\sigma_e}\\,D^2\\,T}{4\\,m} \\left [1+\\sqrt{1+\\frac{8\\,m}{\\pi\\,\\Gamma^2\\,\\rho_t\\,D^2\\,T}}\\, \\right ]" }, { "math_id": 1, "text": "V_b\\," }, { "math_id": 2, "text": "\\Gamma\\," }, { "math_id": 3, "text": "\\rho_t\\," }, { "math_id": 4, "text": "\\sigma_e\\," }, { "math_id": 5, "text": "D\\," }, { "math_id": 6, "text": "T\\," }, { "math_id": 7, "text": "m\\," }, { "math_id": 8, "text": "V_1= 19.72 \\left [ \\frac{7800 d^3 \\left [ \\left ( \\frac{e_h}{d} \\right) \\sec \\theta \\right ]^{1.6}}{W_T} \\right ]^{0.5}" }, { "math_id": 9, "text": "V_1" }, { "math_id": 10, "text": "d" }, { "math_id": 11, "text": "e_h" }, { "math_id": 12, "text": "\\theta" }, { "math_id": 13, "text": "W_T" } ]
https://en.wikipedia.org/wiki?curid=11064899
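Since the two ballistic-limit formulas above are explicit, they can be evaluated directly. Below is an illustrative Python sketch (not from the article): the functions simply transcribe formula_0 and formula_8, the numeric inputs are made-up placeholder values, and all quantities are assumed to be in a mutually consistent unit system (the units implied by the constants 19.72 and 7800 in the TM5-855-1 expression follow the source document).

import math

def ballistic_limit_laminate(gamma, rho_t, sigma_e, D, T, m):
    """Reid-Wen laminate expression (formula_0 above), transcribed directly."""
    prefactor = math.pi * gamma * math.sqrt(rho_t * sigma_e) * D**2 * T / (4.0 * m)
    bracket = 1.0 + math.sqrt(1.0 + 8.0 * m / (math.pi * gamma**2 * rho_t * D**2 * T))
    return prefactor * bracket

def ballistic_limit_tm5(d, e_h, theta, W_T):
    """TM5-855-1 small-caliber expression (formula_8 above), transcribed directly."""
    return 19.72 * math.sqrt(7800.0 * d**3 * ((e_h / d) / math.cos(theta))**1.6 / W_T)

# Placeholder inputs, for illustration only:
print(ballistic_limit_laminate(gamma=1.0, rho_t=1800.0, sigma_e=2.0e8,
                               D=0.0127, T=0.010, m=0.010))
print(ballistic_limit_tm5(d=0.30, e_h=0.50, theta=math.radians(30.0), W_T=150.0))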
1106564
Smith normal form
Matrix normal formIn mathematics, the Smith normal form (sometimes abbreviated SNF) is a normal form that can be defined for any matrix (not necessarily square) with entries in a principal ideal domain (PID). The Smith normal form of a matrix is diagonal, and can be obtained from the original matrix by multiplying on the left and right by invertible square matrices. In particular, the integers are a PID, so one can always calculate the Smith normal form of an integer matrix. The Smith normal form is very useful for working with finitely generated modules over a PID, and in particular for deducing the structure of a quotient of a free module. It is named after the Irish mathematician Henry John Stephen Smith. Definition. Let formula_0 be a nonzero formula_1 matrix over a principal ideal domain formula_2. There exist invertible formula_3 and formula_4-matrices formula_5 (with coefficients in formula_2) such that the product formula_6 is formula_7 and the diagonal elements formula_8 satisfy formula_9 for all formula_10. This is the Smith normal form of the matrix formula_0. The elements formula_8 are unique up to multiplication by a unit and are called the "elementary divisors", "invariants", or "invariant factors". They can be computed (up to multiplication by a unit) as formula_11 where formula_12 (called "i"-th "determinant divisor") equals the greatest common divisor of the determinants of all formula_13 minors of the matrix formula_0 and formula_14. Example : For a formula_15 matrix, formula_16 with formula_17 and formula_18. Algorithm. The first goal is to find invertible square matrices formula_19 and formula_20 such that the product formula_21 is diagonal. This is the hardest part of the algorithm. Once diagonality is achieved, it becomes relatively easy to put the matrix into Smith normal form. Phrased more abstractly, the goal is to show that, thinking of formula_0 as a map from formula_22 (the free formula_2-module of rank formula_23) to formula_24 (the free formula_2-module of rank formula_25), there are isomorphisms formula_26 and formula_27 such that formula_28 has the simple form of a diagonal matrix. The matrices formula_19 and formula_20 can be found by starting out with identity matrices of the appropriate size, and modifying formula_19 each time a row operation is performed on formula_0 in the algorithm by the corresponding column operation (for example, if row formula_29 is added to row formula_30 of formula_0, then column formula_30 should be subtracted from column formula_29 of formula_19 to retain the product invariant), and similarly modifying formula_20 for each column operation performed. Since row operations are left-multiplications and column operations are right-multiplications, this preserves the invariant formula_31 where formula_32 denote current values and formula_0 denotes the original matrix; eventually the matrices in this invariant become diagonal. Only invertible row and column operations are performed, which ensures that formula_19 and formula_20 remain invertible matrices. For formula_33, write formula_34 for the number of prime factors of formula_35 (these exist and are unique since any PID is also a unique factorization domain). In particular, formula_2 is also a Bézout domain, so it is a gcd domain and the gcd of any two elements satisfies a Bézout's identity. To put a matrix into Smith normal form, one can repeatedly apply the following, where formula_36 loops from 1 to formula_25. Step I: Choosing a pivot. 
Choose formula_37 to be the smallest column index of formula_0 with a non-zero entry, starting the search at column index formula_38 if formula_39. We wish to have formula_40; if this is the case this step is complete, otherwise there is by assumption some formula_41 with formula_42, and we can exchange rows formula_36 and formula_41, thereby obtaining formula_40. Our chosen pivot is now at position formula_43. Step II: Improving the pivot. If there is an entry at position ("k","j""t") such that formula_44, then, letting formula_45, we know by the Bézout property that there exist σ, τ in "R" such that formula_46 By left-multiplication with an appropriate invertible matrix "L", it can be achieved that row "t" of the matrix product is the sum of σ times the original row "t" and τ times the original row "k", that row "k" of the product is another linear combination of those original rows, and that all other rows are unchanged. Explicitly, if σ and τ satisfy the above equation, then for formula_47 and formula_48 (which divisions are possible by the definition of β) one has formula_49 so that the matrix formula_50 is invertible, with inverse formula_51 Now "L" can be obtained by fitting formula_52 into rows and columns "t" and "k" of the identity matrix. By construction the matrix obtained after left-multiplying by "L" has entry β at position ("t","j""t") (and due to our choice of α and γ it also has an entry 0 at position ("k","j""t"), which is useful though not essential for the algorithm). This new entry β divides the entry formula_53 that was there before, and so in particular formula_54; therefore repeating these steps must eventually terminate. One ends up with a matrix having an entry at position ("t","j""t") that divides all entries in column "j""t". Step III: Eliminating entries. Finally, adding appropriate multiples of row "t", it can be achieved that all entries in column "j""t" except for that at position ("t","j""t") are zero. This can be achieved by left-multiplication with an appropriate matrix. However, to make the matrix fully diagonal we need to eliminate nonzero entries on the row of position ("t","j""t") as well. This can be achieved by repeating the steps in Step II for columns instead of rows, and using multiplication on the right by the transpose of the obtained matrix "L". In general this will result in the zero entries from the prior application of Step III becoming nonzero again. However, notice that each application of Step II for either rows or columns must continue to reduce the value of formula_55, and so the process must eventually stop after some number of iterations, leading to a matrix where the entry at position ("t","j""t") is the only non-zero entry in both its row and column. At this point, only the block of "A" to the lower right of ("t","j""t") needs to be diagonalized, and conceptually the algorithm can be applied recursively, treating this block as a separate matrix. In other words, we can increment "t" by one and go back to Step I. Final step. Applying the steps described above to the remaining non-zero columns of the resulting matrix (if any), we get an formula_1-matrix with column indices formula_56 where formula_57. The matrix entries formula_58 are non-zero, and every other entry is zero. Now we can move the null columns of this matrix to the right, so that the nonzero entries are on positions formula_59 for formula_60. For short, set formula_8 for the element at position formula_59. 
The condition of divisibility of diagonal entries might not be satisfied. For any index formula_61 for which formula_62, one can repair this shortcoming by operations on rows and columns formula_29 and formula_63 only: first add column formula_63 to column formula_29 to get an entry formula_64 in column "i" without disturbing the entry formula_8 at position formula_59, and then apply a row operation to make the entry at position formula_59 equal to formula_65 as in Step II; finally proceed as in Step III to make the matrix diagonal again. Since the new entry at position formula_66 is a linear combination of the original formula_67, it is divisible by β. The value formula_68 does not change by the above operation (it is δ of the determinant of the upper formula_69 submatrix), whence that operation does diminish (by moving prime factors to the right) the value of formula_70 So after finitely many applications of this operation no further application is possible, which means that we have obtained formula_71 as desired. Since all row and column manipulations involved in the process are invertible, this shows that there exist invertible formula_3 and formula_4-matrices "S, T" so that the product "S A T" satisfies the definition of a Smith normal form. In particular, this shows that the Smith normal form exists, which was assumed without proof in the definition. Applications. The Smith normal form is useful for computing the homology of a chain complex when the chain modules of the chain complex are finitely generated. For instance, in topology, it can be used to compute the homology of a finite simplicial complex or CW complex over the integers, because the boundary maps in such a complex are just integer matrices. It can also be used to determine the invariant factors that occur in the structure theorem for finitely generated modules over a principal ideal domain, which includes the fundamental theorem of finitely generated abelian groups. The Smith normal form is also used in control theory to compute transmission and blocking zeros of a transfer function matrix. Example. As an example, we will find the Smith normal form of the following matrix over the integers. formula_72 The following matrices are the intermediate steps as the algorithm is applied to the above matrix. formula_73 formula_74 formula_75 So the Smith normal form is formula_76 and the invariant factors are 2, 2 and 156. Run-time complexity. The Smith Normal Form of an N-by-N matrix A can be computed in time formula_77. If the matrix is sparse, the computation is typically much faster. Similarity. The Smith normal form can be used to determine whether or not matrices with entries over a common field formula_78 are similar. Specifically two matrices "A" and "B" are similar if and only if the characteristic matrices formula_79 and formula_80 have the same Smith normal form (working in the PID formula_81). For example, with formula_82 "A" and "B" are similar because the Smith normal form of their characteristic matrices match, but are not similar to "C" because the Smith normal form of the characteristic matrices do not match. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "A" }, { "math_id": 1, "text": "m \\times n" }, { "math_id": 2, "text": "R" }, { "math_id": 3, "text": "m \\times m" }, { "math_id": 4, "text": "n \\times n" }, { "math_id": 5, "text": "S,T" }, { "math_id": 6, "text": "SAT" }, { "math_id": 7, "text": "\n\\begin{pmatrix}\n\\alpha_1 & 0 & 0 & \\cdots & 0 & \\cdots & 0 \\\\\n0 & \\alpha_2 & 0 & & & & \\\\\n0 & 0 & \\ddots & & \\vdots & & \\vdots\\\\\n\\vdots & & & \\alpha_r & & & \\\\\n0 & & \\cdots & & 0 & \\cdots & 0 \\\\\n\\vdots & & & & \\vdots & & \\vdots \\\\\n0 & & \\cdots & & 0 & \\cdots & 0\n\\end{pmatrix}.\n" }, { "math_id": 8, "text": "\\alpha_i" }, { "math_id": 9, "text": "\\alpha_i \\mid \\alpha_{i+1}" }, { "math_id": 10, "text": "1 \\le i < r" }, { "math_id": 11, "text": "\\alpha_i = \\frac{d_i(A)}{d_{i-1}(A)}," }, { "math_id": 12, "text": "d_i(A)" }, { "math_id": 13, "text": "i\\times i" }, { "math_id": 14, "text": "d_0(A):=1" }, { "math_id": 15, "text": "2\\times2" }, { "math_id": 16, "text": "{\\rm SNF}{a~~b\\choose c~~d} \n = {\\rm diag}(d_1, d_2/d_1)" }, { "math_id": 17, "text": "d_1 = \\gcd(a,b,c,d)" }, { "math_id": 18, "text": "d_2 = ad-bc" }, { "math_id": 19, "text": "S" }, { "math_id": 20, "text": "T" }, { "math_id": 21, "text": "S A T" }, { "math_id": 22, "text": "R^n" }, { "math_id": 23, "text": "n" }, { "math_id": 24, "text": "R^m" }, { "math_id": 25, "text": "m" }, { "math_id": 26, "text": "S:R^m \\to R^m" }, { "math_id": 27, "text": "T:R^n \\to R^n" }, { "math_id": 28, "text": "S \\cdot A \\cdot T" }, { "math_id": 29, "text": "i" }, { "math_id": 30, "text": "j" }, { "math_id": 31, "text": "A'=S'\\cdot A\\cdot T'" }, { "math_id": 32, "text": "A',S',T'" }, { "math_id": 33, "text": "a \\in R\\setminus \\{0\\}" }, { "math_id": 34, "text": "\\delta(a)" }, { "math_id": 35, "text": "a" }, { "math_id": 36, "text": "t" }, { "math_id": 37, "text": "j_t" }, { "math_id": 38, "text": "j_{t-1}+1" }, { "math_id": 39, "text": "t> 1" }, { "math_id": 40, "text": "a_{t,j_t}\\neq0" }, { "math_id": 41, "text": "k" }, { "math_id": 42, "text": "a_{k,j_t} \\neq 0" }, { "math_id": 43, "text": "(t, j_t)" }, { "math_id": 44, "text": "a_{t,j_t} \\nmid a_{k,j_t}" }, { "math_id": 45, "text": "\\beta =\\gcd\\left(a_{t,j_t}, a_{k,j_t}\\right)" }, { "math_id": 46, "text": "\na_{t,j_t} \\cdot \\sigma + a_{k,j_t} \\cdot \\tau=\\beta.\n" }, { "math_id": 47, "text": "\\alpha=a_{t,j_t}/\\beta" }, { "math_id": 48, "text": "\\gamma=a_{k,j_t}/\\beta" }, { "math_id": 49, "text": "\n\\sigma\\cdot \\alpha + \\tau \\cdot \\gamma=1,\n" }, { "math_id": 50, "text": " L_0=\n\\begin{pmatrix}\n\\sigma & \\tau \\\\\n-\\gamma & \\alpha \\\\\n\\end{pmatrix}\n" }, { "math_id": 51, "text": "\n\\begin{pmatrix}\n\\alpha & -\\tau \\\\\n\\gamma & \\sigma \\\\\n\\end{pmatrix}\n." 
}, { "math_id": 52, "text": "L_0" }, { "math_id": 53, "text": "a_{t,j_t}" }, { "math_id": 54, "text": "\\delta(\\beta) < \\delta(a_{t,j_t})" }, { "math_id": 55, "text": "\\delta(a_{t,j_t})" }, { "math_id": 56, "text": "j_1 < \\ldots < j_r" }, { "math_id": 57, "text": "r \\le \\min(m,n)" }, { "math_id": 58, "text": "(l,j_l)" }, { "math_id": 59, "text": "(i,i)" }, { "math_id": 60, "text": "1 \\le i\\le r" }, { "math_id": 61, "text": "i<r" }, { "math_id": 62, "text": "\\alpha_i\\nmid\\alpha_{i+1}" }, { "math_id": 63, "text": "i+1" }, { "math_id": 64, "text": "\\alpha_{i+1}" }, { "math_id": 65, "text": "\\beta=\\gcd(\\alpha_i,\\alpha_{i+1})" }, { "math_id": 66, "text": "(i+1,i+1)" }, { "math_id": 67, "text": "\\alpha_i,\\alpha_{i+1}" }, { "math_id": 68, "text": "\\delta(\\alpha_1)+\\cdots+\\delta(\\alpha_r)" }, { "math_id": 69, "text": "r\\times r" }, { "math_id": 70, "text": "\\sum_{j=1}^r(r-j)\\delta(\\alpha_j)." }, { "math_id": 71, "text": "\\alpha_1\\mid\\alpha_2\\mid\\cdots\\mid\\alpha_r" }, { "math_id": 72, "text": "\n\\begin{pmatrix}\n2 & 4 & 4 \\\\\n-6 & 6 & 12 \\\\\n10 & 4 & 16\n\\end{pmatrix}\n" }, { "math_id": 73, "text": "\n\\to\n\\begin{pmatrix}\n2 & 0 & 0 \\\\\n-6 & 18 & 24 \\\\\n10 & -16 & -4\n\\end{pmatrix}\n\\to\n\\begin{pmatrix}\n2 & 0 & 0 \\\\\n0 & 18 & 24 \\\\\n0 & -16 & -4\n\\end{pmatrix}\n" }, { "math_id": 74, "text": "\n\\to\n\\begin{pmatrix}\n2 & 0 & 0 \\\\\n0 & 2 & 20 \\\\\n0 & -16 & -4\n\\end{pmatrix}\n\\to\n\\begin{pmatrix}\n2 & 0 & 0 \\\\\n0 & 2 & 20 \\\\\n0 & 0 & 156\n\\end{pmatrix}\n" }, { "math_id": 75, "text": "\n\\to\n\\begin{pmatrix}\n2 & 0 & 0 \\\\\n0 & 2 & 0 \\\\\n0 & 0 & 156\n\\end{pmatrix}\n" }, { "math_id": 76, "text": "\n\\begin{pmatrix}\n2 & 0 & 0 \\\\\n0 & 2 & 0 \\\\\n0 & 0 & 156\n\\end{pmatrix}\n" }, { "math_id": 77, "text": "O(\\|A\\| \\log \\|A\\| N^4\\log N)" }, { "math_id": 78, "text": "K" }, { "math_id": 79, "text": "xI-A" }, { "math_id": 80, "text": "xI-B" }, { "math_id": 81, "text": "K[x]" }, { "math_id": 82, "text": "\n\\begin{align}\nA & {} =\\begin{bmatrix}\n 1 & 2 \\\\\n 0 & 1 \n\\end{bmatrix}, & & \\mbox{SNF}(xI-A) =\\begin{bmatrix}\n 1 & 0 \\\\\n 0 & (x-1)^2\n\\end{bmatrix} \\\\\nB & {} =\\begin{bmatrix}\n 3 & -4 \\\\\n 1 & -1 \n\\end{bmatrix}, & & \\mbox{SNF}(xI-B) =\\begin{bmatrix}\n 1 & 0 \\\\\n 0 & (x-1)^2\n\\end{bmatrix} \\\\\nC & {} =\\begin{bmatrix}\n 1 & 0 \\\\\n 1 & 2 \n\\end{bmatrix}, & & \\mbox{SNF}(xI-C) =\\begin{bmatrix}\n 1 & 0 \\\\\n 0 & (x-1)(x-2)\n\\end{bmatrix}.\n\\end{align}\n" } ]
https://en.wikipedia.org/wiki?curid=1106564
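The invariant factors in the Smith normal form example above can be checked independently with the determinant-divisor formula from the Definition section (each invariant factor equals d_i(A)/d_{i-1}(A), where d_i(A) is the gcd of all i × i minors). The following is a small illustrative Python sketch (not part of the article); it is a brute-force verification, not the elimination algorithm described above, and is only practical for small matrices.

from itertools import combinations
from math import gcd
from functools import reduce

def int_det(M):
    """Integer determinant by Laplace expansion (adequate for small matrices)."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * int_det(minor)
    return total

def determinant_divisor(A, i):
    """d_i(A): the gcd of all i x i minors of A."""
    rows, cols = len(A), len(A[0])
    minors = []
    for rs in combinations(range(rows), i):
        for cs in combinations(range(cols), i):
            sub = [[A[r][c] for c in cs] for r in rs]
            minors.append(abs(int_det(sub)))
    return reduce(gcd, minors)

def invariant_factors(A):
    """Invariant factors alpha_i = d_i(A) / d_{i-1}(A), up to the rank of A."""
    r = min(len(A), len(A[0]))
    d_prev, factors = 1, []
    for i in range(1, r + 1):
        d_i = determinant_divisor(A, i)
        if d_i == 0:
            break  # all larger minors vanish; the rank has been reached
        factors.append(d_i // d_prev)
        d_prev = d_i
    return factors

A = [[2, 4, 4], [-6, 6, 12], [10, 4, 16]]
print(invariant_factors(A))   # expected [2, 2, 156], matching the worked example above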
11067444
Darboux frame
In the differential geometry of surfaces, a Darboux frame is a natural moving frame constructed on a surface. It is the analog of the Frenet–Serret frame as applied to surface geometry. A Darboux frame exists at any non-umbilic point of a surface embedded in Euclidean space. It is named after French mathematician Jean Gaston Darboux. Darboux frame of an embedded curve. Let "S" be an oriented surface in three-dimensional Euclidean space E3. The construction of Darboux frames on "S" first considers frames moving along a curve in "S", and then specializes when the curves move in the direction of the principal curvatures. Definition. At each point "p" of an oriented surface, one may attach a unit normal vector u("p") in a unique way, as soon as an orientation has been chosen for the normal at any particular fixed point. If "γ"("s") is a curve in "S", parametrized by arc length, then the Darboux frame of "γ" is defined by formula_0    (the "unit tangent") formula_1    (the "unit normal") formula_2    (the "tangent normal") The triple T, t, u defines a positively oriented orthonormal basis attached to each point of the curve: a natural moving frame along the embedded curve. Geodesic curvature, normal curvature, and relative torsion. Note that a Darboux frame for a curve does not yield a natural moving frame on the surface, since it still depends on an initial choice of tangent vector. To obtain a moving frame on the surface, we first compare the Darboux frame of γ with its Frenet–Serret frame. Let Since the tangent vectors are the same in both cases, there is a unique angle α such that a rotation in the plane of N and B produces the pair t and u: formula_5 Taking a differential, and applying the Frenet–Serret formulas yields formula_6 where: Darboux frame on a surface. This section specializes the case of the Darboux frame on a curve to the case when the curve is a principal curve of the surface (a "line of curvature"). In that case, since the principal curves are canonically associated to a surface at all non-umbilic points, the Darboux frame is a canonical moving frame. The trihedron. The introduction of the trihedron (or "trièdre"), an invention of Darboux, allows for a conceptual simplification of the problem of moving frames on curves and surfaces by treating the coordinates of the point on the curve and the frame vectors in a uniform manner. A trihedron consists of a point P in Euclidean space, and three orthonormal vectors e1, e2, and e3 based at the point P. A moving trihedron is a trihedron whose components depend on one or more parameters. For example, a trihedron moves along a curve if the point P depends on a single parameter "s", and P("s") traces out the curve. Similarly, if P("s","t") depends on a pair of parameters, then this traces out a surface. A trihedron is said to be adapted to a surface if P always lies on the surface and e3 is the oriented unit normal to the surface at P. In the case of the Darboux frame along an embedded curve, the quadruple (P("s") = γ("s"), e1("s") = T("s"), e2("s") = t("s"), e3("s") = u("s")) defines a tetrahedron adapted to the surface into which the curve is embedded. In terms of this trihedron, the structural equations read formula_7 Change of frame. Suppose that any other adapted trihedron (P, e1, e2, e3) is given for the embedded curve. 
Since, by definition, P remains the same point on the curve as for the Darboux trihedron, and e3 = u is the unit normal, this new trihedron is related to the Darboux trihedron by a rotation of the form formula_8 where θ = θ("s") is a function of "s". Taking a differential and applying the Darboux equation yields formula_9 where the (ωi,ωij) are functions of "s", satisfying formula_10 Structure equations. The Poincaré lemma, applied to each double differential ddP, dde"i", yields the following Cartan structure equations. From ddP = 0, formula_11 From ddei = 0, formula_12 The latter are the Gauss–Codazzi equations for the surface, expressed in the language of differential forms. Principal curves. Consider the second fundamental form of "S". This is the symmetric 2-form on "S" given by formula_13 By the spectral theorem, there is some choice of frame (ei) in which ("ii"ij) is a diagonal matrix. The eigenvalues are the principal curvatures of the surface. A diagonalizing frame a1, a2, a3 consists of the normal vector a3, and two principal directions a1 and a2. This is called a Darboux frame on the surface. The frame is canonically defined (by an ordering on the eigenvalues, for instance) away from the umbilics of the surface. Moving frames. The Darboux frame is an example of a natural moving frame defined on a surface. With slight modifications, the notion of a moving frame can be generalized to a hypersurface in an "n"-dimensional Euclidean space, or indeed any embedded submanifold. This generalization is among the many contributions of Élie Cartan to the method of moving frames. Frames on Euclidean space. A (Euclidean) frame on the Euclidean space E"n" is a higher-dimensional analog of the trihedron. It is defined to be an ("n" + 1)-tuple of vectors drawn from E"n", ("v"; "f"1, ..., "f""n"), where: Let "F"("n") be the ensemble of all Euclidean frames. The Euclidean group acts on "F"("n") as follows. Let φ ∈ Euc("n") be an element of the Euclidean group decomposing as formula_14 where "A" is an orthogonal transformation and "x"0 is a translation. Then, on a frame, formula_15 Geometrically, the affine group moves the origin in the usual way, and it acts via a rotation on the orthogonal basis vectors since these are "attached" to the particular choice of origin. This is an effective and transitive group action, so "F"("n") is a principal homogeneous space of Euc("n"). Structure equations. Define the following system of functions "F"("n") → E"n": formula_16 The projection operator "P" is of special significance. The inverse image of a point "P"−1("v") consists of all orthonormal bases with basepoint at "v". In particular, "P" : "F"("n") → E"n" presents "F"("n") as a principal bundle whose structure group is the orthogonal group O("n"). (In fact this principal bundle is just the tautological bundle of the homogeneous space "F"("n") → "F"("n")/O("n") = E"n".) The exterior derivative of "P" (regarded as a vector-valued differential form) decomposes uniquely as formula_17 for some system of scalar valued one-forms ωi. Similarly, there is an "n" × "n" matrix of one-forms (ωij) such that formula_18 Since the "e"i are orthonormal under the inner product of Euclidean space, the matrix of 1-forms ωij is skew-symmetric. In particular it is determined uniquely by its upper-triangular part (ω"j""i" | "i" &lt; "j"). The system of "n"("n" + 1)/2 one-forms (ωi, ω"j""i" ("i"&lt;"j")) gives an absolute parallelism of "F"("n"), since the coordinate differentials can each be expressed in terms of them. 
Under the action of the Euclidean group, these forms transform as follows. Let φ be the Euclidean transformation consisting of a translation "v"i and rotation matrix ("A""j""i"). Then the following are readily checked by the invariance of the exterior derivative under pullback: formula_19 formula_20 Furthermore, by the Poincaré lemma, one has the following structure equations formula_21 formula_22 Adapted frames and the Gauss–Codazzi equations. Let φ : "M" → E"n" be an embedding of a "p"-dimensional smooth manifold into a Euclidean space. The space of adapted frames on "M", denoted here by "F"φ("M") is the collection of tuples ("x"; "f"1...,"f"n) where "x" ∈ "M", and the "f"i form an orthonormal basis of E"n" such that "f"1...,"f""p" are tangent to φ("M") at φ("x"). Several examples of adapted frames have already been considered. The first vector T of the Frenet–Serret frame (T, N, B) is tangent to a curve, and all three vectors are mutually orthonormal. Similarly, the Darboux frame on a surface is an orthonormal frame whose first two vectors are tangent to the surface. Adapted frames are useful because the invariant forms (ωi,ωji) pullback along φ, and the structural equations are preserved under this pullback. Consequently, the resulting system of forms yields structural information about how "M" is situated inside Euclidean space. In the case of the Frenet–Serret frame, the structural equations are precisely the Frenet–Serret formulas, and these serve to classify curves completely up to Euclidean motions. The general case is analogous: the structural equations for an adapted system of frames classifies arbitrary embedded submanifolds up to a Euclidean motion. In detail, the projection π : "F"("M") → "M" given by π("x"; "f"i) = "x" gives "F"("M") the structure of a principal bundle on "M" (the structure group for the bundle is O("p") × O("n" − "p").) This principal bundle embeds into the bundle of Euclidean frames "F"("n") by φ("v";"f""i") := (φ("v");"f""i") ∈ "F"("n"). Hence it is possible to define the pullbacks of the invariant forms from "F"("n"): formula_23 Since the exterior derivative is equivariant under pullbacks, the following structural equations hold formula_24 Furthermore, because some of the frame vectors "f"1..."f"p are tangent to "M" while the others are normal, the structure equations naturally split into their tangential and normal contributions. Let the lowercase Latin indices "a","b","c" range from 1 to "p" (i.e., the tangential indices) and the Greek indices μ, γ range from "p"+1 to "n" (i.e., the normal indices). The first observation is that formula_25 since these forms generate the submanifold φ("M") (in the sense of the Frobenius integration theorem.) The first set of structural equations now becomes formula_26 Of these, the latter implies by Cartan's lemma that formula_27 where "s"μab is "symmetric" on "a" and "b" (the second fundamental forms of φ("M")). Hence, equations (1) are the Gauss formulas (see Gauss–Codazzi equations). In particular, θ"b""a" is the connection form for the Levi-Civita connection on "M". The second structural equations also split into the following formula_28 The first equation is the Gauss equation which expresses the curvature form Ω of "M" in terms of the second fundamental form. The second is the Codazzi–Mainardi equation which expresses the covariant derivatives of the second fundamental form in terms of the normal connection. The third is the Ricci equation. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\mathbf{T}(s) = \\gamma'(s), " }, { "math_id": 1, "text": " \\mathbf{u}(s) = \\mathbf{u}(\\gamma(s)), " }, { "math_id": 2, "text": " \\mathbf{t}(s) = \\mathbf{u}(s) \\times \\mathbf{T}(s), " }, { "math_id": 3, "text": " \\mathbf{N}(s) = \\frac{\\mathbf{T}'(s)}{\\|\\mathbf{T}'(s)\\|}, " }, { "math_id": 4, "text": " \\mathbf{B}(s) = \\mathbf{T}(s)\\times\\mathbf{N}(s), " }, { "math_id": 5, "text": "\n\\begin{bmatrix}\n\\mathbf{T}\\\\\n\\mathbf{t}\\\\\n\\mathbf{u}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1&0&0\\\\\n0&\\cos\\alpha&\\sin\\alpha\\\\\n0&-\\sin\\alpha&\\cos\\alpha\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathbf{T}\\\\\n\\mathbf{N}\\\\\n\\mathbf{B}\n\\end{bmatrix}.\n" }, { "math_id": 6, "text": "\\begin{align}\n\\mathrm{d}\\begin{bmatrix}\n\\mathbf{T}\\\\\n\\mathbf{t}\\\\\n\\mathbf{u}\n\\end{bmatrix}\n&=\n\\begin{bmatrix}\n0&\\kappa\\cos\\alpha\\, \\mathrm{d}s&-\\kappa\\sin\\alpha\\, \\mathrm{d}s\\\\\n-\\kappa\\cos\\alpha\\, \\mathrm{d}s&0&\\tau \\, \\mathrm{d}s + \\mathrm{d}\\alpha\\\\\n\\kappa\\sin\\alpha\\, \\mathrm{d}s&-\\tau \\, \\mathrm{d}s - \\mathrm{d}\\alpha&0\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathbf{T}\\\\\n\\mathbf{t}\\\\\n\\mathbf{u}\n\\end{bmatrix} \\\\\n&=\n\\begin{bmatrix}\n0&\\kappa_g \\, \\mathrm{d}s&\\kappa_n \\, \\mathrm{d}s\\\\\n-\\kappa_g \\, \\mathrm{d}s&0&\\tau_r \\, \\mathrm{d}s\\\\\n-\\kappa_n \\, \\mathrm{d}s&-\\tau_r \\, \\mathrm{d}s&0\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathbf{T}\\\\\n\\mathbf{t}\\\\\n\\mathbf{u}\n\\end{bmatrix}\n\\end{align}" }, { "math_id": 7, "text": "\n\\mathrm{d}\\begin{bmatrix}\n\\mathbf{P}\\\\\n\\mathbf{T}\\\\\n\\mathbf{t}\\\\\n\\mathbf{u}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n0&\\mathrm{d}s&0&0\\\\\n0&0&\\kappa_g \\, \\mathrm{d}s&\\kappa_n \\, \\mathrm{d}s\\\\\n0&-\\kappa_g \\, \\mathrm{d}s&0&\\tau_r \\, \\mathrm{d}s\\\\\n0&-\\kappa_n \\, \\mathrm{d}s&-\\tau_r \\, \\mathrm{d}s&0\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathbf{P}\\\\\n\\mathbf{T}\\\\\n\\mathbf{t}\\\\\n\\mathbf{u}\n\\end{bmatrix}.\n" }, { "math_id": 8, "text": "\n\\begin{bmatrix}\n\\mathbf{P}\\\\\n\\mathbf{e}_1\\\\\n\\mathbf{e}_2\\\\\n\\mathbf{e}_3\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n1&0&0&0\\\\\n0&\\cos\\theta&\\sin\\theta&0\\\\\n0&-\\sin\\theta&\\cos\\theta&0\\\\\n0&0&0&1\n\\end{bmatrix}\n\\begin{bmatrix}\n\\mathbf{P}\\\\\n\\mathbf{T}\\\\\n\\mathbf{t}\\\\\n\\mathbf{u}\n\\end{bmatrix}\n" }, { "math_id": 9, "text": "\n\\begin{align}\n\\mathrm{d}\\mathbf{P} & = \\mathbf{T} \\mathrm{d}s = \\omega^1\\mathbf{e}_1+\\omega^2\\mathbf{e}_2\\\\\n\\mathrm{d}\\mathbf{e}_i & = \\sum_j \\omega^j_i\\mathbf{e}_j\n\\end{align}\n" }, { "math_id": 10, "text": "\n\\begin{align}\n\\omega^1 & = \\cos\\theta \\, \\mathrm{d}s,\\quad \\omega^2 = -\\sin\\theta \\, \\mathrm{d}s\\\\\n\\omega_i^j & = -\\omega_j^i\\\\\n\\omega_1^2 & = \\kappa_g \\, \\mathrm{d}s + \\mathrm{d}\\theta\\\\\n\\omega_1^3 & = (\\kappa_n\\cos\\theta + \\tau_r\\sin\\theta) \\, \\mathrm{d}s\\\\\n\\omega_2^3 & = -(\\kappa_n\\sin\\theta + \\tau_r\\cos\\theta) \\, \\mathrm{d}s\n\\end{align}\n" }, { "math_id": 11, "text": "\n\\begin{align}\n\\mathrm{d}\\omega^1 & =\\omega^2\\wedge\\omega_2^1\\\\\n\\mathrm{d}\\omega^2 & =\\omega^1\\wedge\\omega_1^2\\\\\n0 & =\\omega^1\\wedge\\omega_1^3+\\omega^2\\wedge\\omega_2^3\n\\end{align}\n" }, { "math_id": 12, "text": "\n\\begin{align}\n\\mathrm{d}\\omega_1^2 & =\\omega_1^3\\wedge\\omega_3^2\\\\\n\\mathrm{d}\\omega_1^3 & =\\omega_1^2\\wedge\\omega_2^3\\\\\n\\mathrm{d}\\omega_2^3 & =\\omega_2^1\\wedge\\omega_1^3\n\\end{align}\n" }, { "math_id": 13, "text": 
"\n\\mathrm{I\\!I} = -\\mathrm{d}\\mathbf{N}\\cdot \\mathrm{d}\\mathbf{P} = \\omega_1^3\\odot\\omega^1 + \\omega_2^3\\odot\\omega^2\n=\\begin{pmatrix}\\omega^1 & \\omega^2\\end{pmatrix}\n\\begin{pmatrix}\nii_{11}&ii_{12}\\\\\nii_{21}&ii_{22}\n\\end{pmatrix}\n\\begin{pmatrix}\\omega^1\\\\\\omega^2\\end{pmatrix}.\n" }, { "math_id": 14, "text": "\\phi(x) = Ax + x_0" }, { "math_id": 15, "text": "\\phi(v;f_1,\\dots,f_n) := (\\phi(v);Af_1, \\dots, Af_n)." }, { "math_id": 16, "text": "\\begin{align}\nP(v; f_1,\\dots, f_n) & = v\\\\\ne_i(v; f_1,\\dots, f_n) & = f_i, \\qquad i=1,2,\\dots,n.\n\\end{align}\n" }, { "math_id": 17, "text": "\\mathrm{d}P = \\sum_i \\omega^ie_i,\\, " }, { "math_id": 18, "text": "\\mathrm{d}e_i = \\sum_j \\omega_i^je_j." }, { "math_id": 19, "text": "\\phi^*(\\omega^i) = (A^{-1})_j^i\\omega^j" }, { "math_id": 20, "text": "\\phi^*(\\omega_j^i) = (A^{-1})_p^i\\, \\omega_q^p\\, A_j^q." }, { "math_id": 21, "text": "\\mathrm{d}\\omega^i = -\\omega_j^i\\wedge\\omega^j" }, { "math_id": 22, "text": "\\mathrm{d}\\omega_j^i = -\\omega^i_k\\wedge\\omega^k_j." }, { "math_id": 23, "text": "\\theta^i = \\phi^*\\omega^i,\\quad \\theta_j^i=\\phi^*\\omega_j^i." }, { "math_id": 24, "text": "\\mathrm{d}\\theta^i=-\\theta_j^i\\wedge\\theta^j,\\quad \\mathrm{d}\\theta_j^i = -\\theta_k^i\\wedge\\theta_j^k." }, { "math_id": 25, "text": "\\theta^\\mu = 0,\\quad \\mu=p+1,\\dots,n" }, { "math_id": 26, "text": "\\left.\\begin{array}{l}\n\\mathrm{d}\\theta^a = -\\sum_{b=1}^p\\theta_b^a\\wedge\\theta^b\\\\\n\\\\\n0=\\mathrm{d}\\theta^\\mu = -\\sum_{b=1}^p \\theta_b^\\mu\\wedge\\theta^b\n\\end{array}\\right\\}\\,\\,\\, (1)\n" }, { "math_id": 27, "text": "\n\\theta_b^\\mu = s^\\mu_{ab}\\theta^a\n" }, { "math_id": 28, "text": "\n\\left.\\begin{array}{l}\n\\mathrm{d}\\theta_b^a + \\sum_{c=1}^p\\theta_c^a\\wedge\\theta_b^c = \\Omega_b^a = -\\sum_{\\mu=p+1}^n\\theta_\\mu^a\\wedge\\theta^\\mu_b\\\\\n\\\\\n\\mathrm{d}\\theta_b^\\gamma = -\\sum_{c=1}^p\\theta_c^\\gamma\\wedge\\theta_b^c-\\sum_{\\mu=p+1}^n\\theta_\\mu^\\gamma\\wedge\\theta_b^\\mu\\\\\n\\\\\n\\mathrm{d}\\theta_\\mu^\\gamma = -\\sum_{c=1}^p\\theta_c^\\gamma\\wedge\\theta_\\mu^c-\\sum_{\\delta=p+1}^n\\theta_\\delta^\\gamma\\wedge\\theta_\\mu^\\delta\n\\end{array}\\right\\}\\,\\,\\, (2)\n" } ]
https://en.wikipedia.org/wiki?curid=11067444
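As a small numerical illustration of the Darboux frame of an embedded curve (our own sketch, not from the article): for a circle of latitude at colatitude φ on the unit sphere, with the outward normal, the standard values are geodesic curvature cot φ (up to the sign fixed by the chosen orientation), normal curvature −1, and relative torsion 0. The NumPy code below recovers these from the defining relations dT/ds = κ_g t + κ_n u and dt/ds = −κ_g T + τ_r u, using central finite differences.

import numpy as np

def latitude_circle(phi):
    """Arc-length parametrization of the circle of colatitude phi on the unit sphere."""
    def gamma(s):
        u = s / np.sin(phi)
        return np.array([np.sin(phi) * np.cos(u),
                         np.sin(phi) * np.sin(u),
                         np.cos(phi)])
    return gamma

def darboux_quantities(gamma, normal, s, h=1e-5):
    """Geodesic curvature, normal curvature and relative torsion at arc length s,
    read off from the Darboux frame (T, t, u) by central finite differences."""
    def frame(s):
        T = (gamma(s + h) - gamma(s - h)) / (2 * h)
        T /= np.linalg.norm(T)
        u = normal(gamma(s))
        t = np.cross(u, T)
        return T, t, u
    T, t, u = frame(s)
    dT = (frame(s + h)[0] - frame(s - h)[0]) / (2 * h)
    dt = (frame(s + h)[1] - frame(s - h)[1]) / (2 * h)
    return float(np.dot(dT, t)), float(np.dot(dT, u)), float(np.dot(dt, u))

phi = np.pi / 3
gamma = latitude_circle(phi)
outward_normal = lambda p: p / np.linalg.norm(p)   # on the unit sphere the outward normal is the point itself
print(darboux_quantities(gamma, outward_normal, s=0.2))
print(1 / np.tan(phi), -1.0, 0.0)                  # expected (cot(phi), -1, 0) up to orientation conventions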
1106771
Kinetic isotope effect
Change in chemical reaction rate due to isotopic substitution formula_0An example of the kinetic isotope effect.In the reaction of methyl bromide with cyanide, the kinetic isotope effect of the carbon in the methyl group was found to be 1.082 ± 0.008. In physical organic chemistry, a kinetic isotope effect (KIE) is the change in the reaction rate of a chemical reaction when one of the atoms in the reactants is replaced by one of its isotopes. Formally, it is the ratio of rate constants for the reactions involving the light ("kL") and the heavy ("kH") isotopically substituted reactants (isotopologues): formula_1 This change in reaction rate is a quantum mechanical effect that mainly results from heavier isotopologues having lower vibrational frequencies compared to their lighter counterparts. In most cases, this implies a greater energetic input needed for heavier isotopologues to reach the transition state (or, in rare cases, dissociation limit), and therefore, a slower reaction rate. The study of KIEs can help elucidate the reaction mechanism of certain chemical reactions, and is occasionally exploited in drug development to improve unfavorable pharmacokinetics by protecting metabolically vulnerable C-H bonds. Background. KIE is considered one of the most essential and sensitive tools for studying reaction mechanisms, the knowledge of which allows improvement of the desirable qualities of said reactions. For example, KIEs can be used to reveal whether a nucleophilic substitution reaction follows a unimolecular (SN1) or bimolecular (SN2) pathway. In the reaction of methyl bromide and cyanide (shown in the introduction), the observed methyl carbon KIE indicates an SN2 mechanism. Depending on the pathway, different strategies may be used to stabilize the transition state of the rate-determining step of the reaction and improve the reaction rate and selectivity, which are important for industrial applications. Isotopic rate changes are most pronounced when the relative mass change is greatest, since the effect is related to vibrational frequencies of the affected bonds. Thus, replacing normal hydrogen (1H) with its isotope deuterium (D or 2H), doubles the mass; whereas in replacing carbon-12 with carbon-13, the mass increases by only 8%. The rate of a reaction involving a C–1H bond is typically 6–10x faster than with a C–2H bond, whereas a 12C reaction is only 4% faster than the corresponding 13C reaction; even though, in both cases, the isotope is one atomic mass unit (amu) (dalton) heavier. Isotopic substitution can modify the rate of reaction in a variety of ways. In many cases, the rate difference can be rationalized by noting that the mass of an atom affects the vibrational frequency of the chemical bond that it forms, even if the potential energy surface for the reaction is nearly identical. Heavier isotopes will (classically) lead to lower vibration frequencies, or, viewed quantum mechanically, will have lower zero-point energy (ZPE). With a lower ZPE, more energy must be supplied to break the bond, resulting in a higher activation energy for bond cleavage, which in turn lowers the measured rate (see, for example, the Arrhenius equation). Classification. Primary kinetic isotope effects. A primary kinetic isotope effect (PKIE) may be found when a bond to the isotopically labeled atom is being formed or broken. Depending on the way a KIE is probed (parallel measurement of rates vs. intermolecular competition vs. 
intramolecular competition), the observation of a PKIE is indicative of breaking/forming a bond to the isotope at the rate-limiting step, or subsequent product-determining step(s). (The misconception that a PKIE must reflect bond cleavage/formation to the isotope at the rate-limiting step is often repeated in textbooks and the primary literature: "see the section on experiments below.") For the aforementioned nucleophilic substitution reactions, PKIEs have been investigated for both the leaving groups, the nucleophiles, and the α-carbon at which the substitution occurs. Interpretation of the leaving group KIEs was difficult at first due to significant contributions from temperature independent factors. KIEs at the α-carbon can be used to develop some understanding into the symmetry of the transition state in SN2 reactions, though this KIE is less sensitive than what would be ideal, also due to contribution from non-vibrational factors. Secondary kinetic isotope effects. A secondary kinetic isotope effect (SKIE) is observed when no bond to the isotopically labeled atom in the reactant is broken or formed. SKIEs tend to be much smaller than PKIEs; however, secondary deuterium isotope effects can be as large as 1.4 per 2H atom, and techniques have been developed to measure heavy-element isotope effects to very high precision, so SKIEs are still very useful for elucidating reaction mechanisms. For the aforementioned nucleophilic substitution reactions, secondary hydrogen KIEs at the α-carbon provide a direct means to distinguish between SN1 and SN2 reactions. It has been found that SN1 reactions typically lead to large SKIEs, approaching to their theoretical maximum at about 1.22, while SN2 reactions typically yield SKIEs that are very close to or less than 1. KIEs greater than 1 are called normal kinetic isotope effects, while KIEs less than 1 are called inverse kinetic isotope effects (IKIE). In general, smaller force constants in the transition state are expected to yield a normal KIE, and larger force constants in the transition state are expected to yield an IKIE when stretching vibrational contributions dominate the KIE. The magnitudes of such SKIEs at the α-carbon atom are largely determined by the Cα-H(2H) vibrations. For an SN1 reaction, since the carbon atom is converted into an sp2 hybridized carbenium ion during the transition state for the rate-determining step with an increase in Cα-H(2H) bond order, an IKIE would be expected if only the stretching vibrations were important. The observed large normal KIEs are found to be caused by significant out-of-plane bending vibrational contributions when going from the reactants to the transition state of carbenium ion formation. For SN2 reactions, bending vibrations still play an important role for the KIE, but stretching vibrational contributions are of more comparable magnitude, and the resulting KIE may be normal or inverse depending on the specific contributions of the respective vibrations. Theory. The theoretical treatment of isotope effects relies heavily on transition state theory, which assumes a single potential energy surface for the reaction, and a barrier between the reactants and the products on this surface, on top of which resides the transition state. The KIE arises largely from the changes to vibrational ground states produced by the isotopic perturbation along the minimum energy pathway of the potential energy surface, which may only be accounted for with quantum mechanical treatments of the system. 
Depending on the mass of the atom that moves along the reaction coordinate and nature (width and height) of the energy barrier, quantum tunnelling may also make a large contribution to an observed kinetic isotope effect and may need to be separately considered, in addition to the "semi-classical" transition state theory model. The deuterium kinetic isotope effect (2H KIE) is by far the most common, useful, and well-understood type of KIE. The accurate prediction of the numerical value of a 2H KIE using density functional theory calculations is now fairly routine. Moreover, several qualitative and semi-quantitative models allow rough estimates of deuterium isotope effects to be made without calculations, often providing enough information to rationalize experimental data or even support or refute different mechanistic possibilities. Starting materials containing 2H are often commercially available, making the synthesis of isotopically enriched starting materials relatively straightforward. Also, due to the large relative difference in the mass of 2H and 1H and the attendant differences in vibrational frequency, the magnitude of the isotope effect is larger than any other pair of isotopes except 1H and 3H, allowing both primary and secondary isotope effects to be easily measured and interpreted. In contrast, secondary effects are generally very small for heavier elements and close in magnitude to the experimental uncertainty, which complicates their interpretation and limits their utility. In the context of isotope effects, "hydrogen" often means the light isotope, protium (1H), specifically. In the rest of this article, reference to "hydrogen" and "deuterium" in parallel grammatical constructions or direct comparisons between them should be interpreted as meaning 1H and 2H. The theory of KIEs was first formulated by Jacob Bigeleisen in 1949. Bigeleisen's general formula for 2H KIEs (which is also applicable to heavier elements) is given below. It employs transition state theory and a statistical mechanical treatment of translational, rotational, and vibrational levels for the calculation of rate constants "k"H and "k"D. However, this formula is "semi-classical" in that it neglects the contribution from quantum tunneling, which is often introduced as a separate correction factor. Bigeleisen's formula also does not deal with differences in non-bonded repulsive interactions caused by the slightly shorter C–D bond compared to a C–H bond. In the equation, subscript H or D refers to the 1H- or 2H-substituted species, respectively; quantities with or without the double-dagger, ‡, refer to transition state or reactant ground state, respectively. (Strictly speaking, a formula_2 term resulting from an isotopic difference in transmission coefficients should also be included.) formula_3, where we define formula_4 and formula_5. Here, "h" = Planck constant; "k"B = Boltzmann constant; formula_6 = frequency of vibration, expressed in wavenumber; "c" = speed of light; "N"A = Avogadro constant; and "R" = universal gas constant. The σX (X = H or D) are the symmetry numbers for the reactants and transition states. The "M"X are the molecular masses of the corresponding species, and the "Iq"X ("q" = "x", "y", or "z") terms are the moments of inertia about the three principal axes. The "ui"X are directly proportional to the corresponding vibrational frequencies, "νi", and the vibrational zero-point energy (ZPE) (see below). 
The integers "N" and "N"‡ are the number of atoms in the reactants and the transition states, respectively. The complicated expression given above can be represented as the product of four separate factors: formula_7. For the special case of 2H isotope effects, we will argue that the first three terms can be treated as equal to or well approximated by unity. The first factor S (containing σX) is the ratio of the symmetry numbers for the various species. This will be a rational number (a ratio of integers) that depends on the number of molecular and bond rotations leading to the permutation of identical atoms or groups in the reactants and the transition state. For systems of low symmetry, all σX (reactant and transition state) will be unity; thus S can often be neglected. The MMI factor (containing the "M"X and "Iq"X) refers to the ratio of the molecular masses and the moments of inertia. Since hydrogen and deuterium tend to be much lighter than most reactants and transition states, there is little difference in the molecular masses and moments of inertia between H and D containing molecules, so the MMI factor is usually also approximated as unity. The EXC factor (containing the product of vibrational partition functions) corrects for the KIE caused by the reactions of vibrationally excited molecules. The fraction of molecules with enough energy to have excited state A–H/D bond vibrations is generally small for reactions at or near room temperature (bonds to hydrogen usually vibrate at 1000 cm−1 or higher, so exp(-"ui") = exp(-"hνi"/"k"B"T") &lt; 0.01 at 298 K, resulting in negligible contributions from the 1–exp(-"ui") factors). Hence, for hydrogen/deuterium KIEs, the observed values are typically dominated by the last factor, ZPE (an exponential function of vibrational ZPE differences), consisting of contributions from the ZPE differences for each of the vibrational modes of the reactants and transition state, which can be represented as follows: formula_8, where we define formula_9 and formula_10. The sums in the exponent of the second expression can be interpreted as running over all vibrational modes of the reactant ground state and the transition state. Or, one may interpret them as running over those modes unique to the reactant or the transition state or whose vibrational frequencies change substantially upon advancing along the reaction coordinate. The remaining pairs of reactant and transition state vibrational modes have very similar formula_11 and formula_12, and cancellations occur when the sums in the exponent are calculated. Thus, in practice, 2H KIEs are often largely dependent on a handful of key vibrational modes because of this cancellation, making qualitative analyses of "k"H/"k"D possible. As mentioned, especially for 1H/2H substitution, most KIEs arise from the difference in ZPE between the reactants and the transition state of the isotopologues; this difference can be understood qualitatively as follows: in the Born–Oppenheimer approximation, the potential energy surface is the same for both isotopic species. However, a quantum treatment of the energy introduces discrete vibrational levels onto this curve, and the lowest possible energy state of a molecule corresponds to the lowest vibrational energy level, which is slightly higher in energy than the minimum of the potential energy curve. This difference, known as the ZPE, is a manifestation of the uncertainty principle that necessitates an uncertainty in the C-H or C-D bond length. 
Since the heavier (in this case the deuterated) species behaves more "classically", its vibrational energy levels are closer to the classical potential energy curve, and it has a lower ZPE. The ZPE differences between the two isotopic species, at least in most cases, diminish in the transition state, since the bond force constant decreases during bond breaking. Hence, the lower ZPE of the deuterated species translates into a larger activation energy for its reaction, as shown in the following figure, leading to a normal KIE. This effect should, in principle, be taken into account all 3"N−"6 vibrational modes for the starting material and 3"N"‡"−"7 vibrational modes at the transition state (one mode, the one corresponding to the reaction coordinate, is missing at the transition state, since a bond breaks and there is no restorative force against the motion). The harmonic oscillator is a good approximation for a vibrating bond, at least for low-energy vibrational states. Quantum mechanics gives the vibrational ZPE as formula_13. Thus, we can readily interpret the factor of and the sums of formula_14 terms over ground state and transition state vibrational modes in the exponent of the simplified formula above. For a harmonic oscillator, vibrational frequency is inversely proportional to the square root of the reduced mass of the vibrating system: formula_15, where "k"f is the force constant. Moreover, the reduced mass is approximated by the mass of the light atom of the system, X = H or D. Because "m"D ≈ 2"m"H, formula_16. In the case of homolytic C–H/D bond dissociation, the transition state term disappears; and neglecting other vibrational modes, "k"H/"k"D = exp(Δ"ui"). Thus, a larger isotope effect is observed for a stiffer ("stronger") C–H/D bond. For most reactions of interest, a hydrogen atom is transferred between two atoms, with a transition-state [A···H···B]‡ and vibrational modes at the transition state need to be accounted for. Nevertheless, it is still generally true that cleavage of a bond with a higher vibrational frequency will give a larger isotope effect. To calculate the maximum possible value for a non-tunneling 2H KIE, we consider the case where the ZPE difference between the stretching vibrations of a C-1H bond (3000 cm−1) and C-2H bond (2200 cm−1) disappears in the transition state (an energy difference of [3000 – 2200 cm−1]/2 = 400 cm−1 ≈ 1.15 kcal/mol), without any compensation from a ZPE difference at the transition state (e.g., from the symmetric A···H···B stretch, which is unique to the transition state). The simplified formula above, predicts a maximum for "k"H/"k"D as 6.9. If the complete disappearance of two bending vibrations is also included, "k"H/"k"D values as large as 15-20 can be predicted. Bending frequencies are very unlikely to vanish in the transition state, however, and there are only a few cases in which "k"H/"k"D values exceed 7-8 near room temperature. Furthermore, it is often found that tunneling is a major factor when they do exceed such values. A value of "k"H/"k"D ~ 10 is thought to be maximal for a semi-classical PKIE (no tunneling) for reactions at ≈298 K. (The formula for "k"H/"k"D has a temperature dependence, so larger isotope effects are possible at lower temperatures.) Depending on the nature of the transition state of H-transfer (symmetric vs. "early" or "late" and linear vs. bent); the extent to which a primary 2H isotope effect approaches this maximum, varies. 
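The ≈6.9 estimate quoted above (complete loss, at the transition state, of the zero-point-energy difference between a 3000 cm−1 C–H stretch and a 2200 cm−1 C–D stretch at 298 K) can be reproduced in a few lines of Python. This is an illustrative sketch of the single-mode ZPE term only, not of the full Bigeleisen expression; the physical constants are standard CODATA values and the function name is ours.

import math

h  = 6.62607015e-34   # Planck constant, J s
c  = 2.99792458e10    # speed of light, cm/s (so h*c*nu is an energy for nu in cm^-1)
kB = 1.380649e-23     # Boltzmann constant, J/K

def zpe_limited_kie(nu_H, nu_D, T=298.0):
    """kH/kD when the reactant-state stretching ZPE difference is entirely
    lost at the transition state (the dominant term discussed above)."""
    delta_zpe = 0.5 * h * c * (nu_H - nu_D)   # J per molecule
    return math.exp(delta_zpe / (kB * T))

print(zpe_limited_kie(3000.0, 2200.0))        # ~6.9, the semiclassical maximum quoted in the text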
A model developed by Westheimer predicted that symmetrical (thermoneutral, by Hammond's postulate), linear transition states have the largest isotope effects, while transition states that are "early" or "late" (for exothermic or endothermic reactions, respectively), or nonlinear (e.g. cyclic), exhibit smaller effects. These predictions have since received extensive experimental support. For secondary 2H isotope effects, Streitwieser proposed that weakening (or strengthening, in the case of an inverse isotope effect) of bending modes from the reactant ground state to the transition state is largely responsible for observed isotope effects. These changes are attributed to a change in steric environment when the carbon bound to the H/D undergoes rehybridization from sp3 to sp2 or vice versa (an α SKIE), or to bond weakening due to hyperconjugation in cases where a carbocation is being generated one carbon atom away (a β SKIE). These isotope effects have a theoretical maximum of "k"H/"k"D = 2^0.5 ≈ 1.4. For a SKIE at the α position, rehybridization from sp3 to sp2 produces a normal isotope effect, while rehybridization from sp2 to sp3 results in an inverse isotope effect with a theoretical minimum of "k"H/"k"D = 2^−0.5 ≈ 0.7. In practice, "k"H/"k"D ~ 1.1-1.2 and "k"H/"k"D ~ 0.8-0.9 are typical for α SKIEs, while "k"H/"k"D ~ 1.15-1.3 are typical for β SKIEs. For reactants containing several isotopically substituted β-hydrogens, the observed isotope effect is often the result of several H/D's at the β position acting in concert. In these cases, the effect of each isotopically labeled atom is multiplicative, and cases where "k"H/"k"D > 2 are not uncommon. The following simple expressions relating 2H and 3H KIEs, which are also known as the Swain equation (or the Swain-Schaad-Stivers equations), can be derived from the general expression given above using some simplifications: formula_17; i.e., formula_18. In deriving these expressions, the reasonable approximation that reduced mass roughly equals the mass of 1H, 2H, or 3H was used. Also, the vibrational motion was assumed to be approximated by a harmonic oscillator, so that formula_19, where X = 1H, 2H, or 3H. The subscript "s" refers to these "semi-classical" KIEs, which disregard quantum tunneling. Tunneling contributions must be treated separately as a correction factor. For isotope effects involving elements other than hydrogen, many of these simplifications are not valid, and the magnitude of the isotope effect may depend strongly on some or all of the neglected factors. Thus, KIEs for elements other than hydrogen are often much more difficult to rationalize or interpret. In many cases, and especially for hydrogen-transfer reactions, contributions to KIEs from tunneling are significant (see below). Tunneling. In some cases, a further rate enhancement is seen for the lighter isotope, possibly due to quantum tunneling. This is typically only observed for reactions involving bonds to hydrogen. Tunneling occurs when a molecule penetrates through a potential energy barrier rather than over it. Though not allowed by classical mechanics, particles can pass through classically forbidden regions of space in quantum mechanics based on wave–particle duality.
Tunneling can be analyzed using Bell's modification of the Arrhenius equation, which includes the addition of a tunneling factor, Q: formula_20 where A is the Arrhenius parameter, E is the barrier height, and formula_21 where formula_22 and formula_23 Examination of the "β" term shows exponential dependence on the particle's mass. As a result, tunneling is much more likely for a lighter particle such as hydrogen. Simply doubling the mass of a tunneling proton by replacing it with a deuteron drastically reduces the rate of such reactions. As a result, very large KIEs are observed that cannot be accounted for by differences in ZPEs. Also, the "β" term depends linearly on the barrier width, 2a. As with mass, tunneling is greatest for small barrier widths. The optimal tunneling distance for a proton between donor and acceptor atoms is about 40 pm. Temperature dependence in tunneling. Tunneling is a quantum effect tied to the laws of wave mechanics, not kinetics. Therefore, tunneling tends to become more important at low temperatures, where even the smallest kinetic energy barriers may not be overcome but can be tunneled through. Peter S. Zuev et al. reported rate constants for the ring expansion of 1-methylcyclobutylfluorocarbene to be 4.0 × 10−6/s in nitrogen and 4.0 × 10−5/s in argon at 8 kelvin. They calculated that at 8 kelvin, the reaction would proceed via a single quantum state of the reactant, so that the reported rate constant is temperature independent, and that the tunneling contribution to the rate was 152 orders of magnitude greater than the contribution of passage over the transition state energy barrier. So even though conventional chemical reactions tend to slow down dramatically as the temperature is lowered, tunneling reactions rarely change at all. That particles can tunnel through an activation barrier is a direct result of the fact that the wave function of an intermediate species, reactant, or product is not confined to the energy well of a particular trough along the energy surface of a reaction but can "leak out" into the next energy minimum. In light of this, tunneling "should" be temperature independent. For the hydrogen abstraction from gaseous n-alkanes and cycloalkanes by hydrogen atoms over the temperature range 363–463 K, the H/D KIE data were characterized by small preexponential factor ratios "A"H/"A"D ranging from 0.43 to 0.54 and large activation energy differences from 9.0 to 9.7 kJ/mol. On the basis of transition state theory, the small "A"-factor ratios together with activation energy differences much larger than usual (about 4.5 kJ/mol for C–H(D) bonds) provided strong evidence for tunneling. For the purpose of this discussion, the important point is that the "A"-factor ratio for the various paraffins used was roughly constant throughout the temperature range. The observation that tunneling is not entirely temperature independent can be explained by the fact that not all molecules of a given species occupy their vibrational ground state at varying temperatures. Adding thermal energy to a potential energy well could cause higher vibrational levels than the ground state to become populated. For a conventional kinetically driven reaction, this excitation would only have a small influence on the rate. However, for a tunneling reaction, the difference between the ZPE and the first vibrational energy level could be huge.
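A numerical sketch of Bell's correction is given below (not from the source; the barrier height, barrier width, and temperature are assumed, illustrative values, and this truncated form of Q is only meaningful when β exceeds α, i.e., for moderate tunneling):

```python
# Minimal sketch: Bell's tunneling factor Q = e^a/(b-a) * (b*e^-a - a*e^-b)
# evaluated for a proton and a deuteron crossing the same assumed barrier.
import math

NA = 6.02214076e23          # Avogadro constant, 1/mol
h = 6.62607015e-34          # Planck constant, J s
R = 8.314462618             # gas constant, J/(mol K)

def bell_Q(E_kJ_mol, width_m, mass_kg, T=298.0):
    E_molar = E_kJ_mol * 1e3                 # barrier height, J/mol
    E_particle = E_molar / NA                # barrier height, J per particle
    alpha = E_molar / (R * T)
    beta = width_m * math.pi**2 * math.sqrt(2 * mass_kg * E_particle) / h
    # valid only for beta > alpha (otherwise tunneling dominates and this form breaks down)
    return math.exp(alpha) / (beta - alpha) * (beta * math.exp(-alpha) - alpha * math.exp(-beta))

m_H, m_D = 1.6735e-27, 3.3446e-27            # proton and deuteron masses, kg
E, width = 40.0, 0.8e-10                     # assumed barrier: 40 kJ/mol, width 2a = 80 pm

Q_H, Q_D = bell_Q(E, width, m_H), bell_Q(E, width, m_D)
print(f"Q(H) = {Q_H:.1f}, Q(D) = {Q_D:.1f}, Q(H)/Q(D) = {Q_H / Q_D:.1f}")
```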
The tunneling correction term "Q" is linearly dependent on the barrier width, and this width is significantly diminished as the number of vibrational modes on the Morse potential increases. The decrease of the barrier width can have such a huge impact on the tunneling rate that even a small population of excited vibrational states would dominate this process. Criteria for KIE tunneling. To determine whether tunneling is involved in the KIE of a reaction with H or D, a few criteria are considered. Also, for reactions where the isotopes include H, D and T, a criterion of tunneling is given by the Swain-Schaad relations, which compare the rate constants ("k") of the reactions where H, D or T are exchanged: "k"H/"k"T = ("k"D/"k"T)^"X" and "k"H/"k"T = ("k"H/"k"D)^"Y". Experimental values of X exceeding 3.26 and Y lower than 1.44 are evidence of a certain amount of contribution from tunneling. Examples for tunneling in KIE. In organic reactions, this proton tunneling effect has been observed in such reactions as the deprotonation and iodination of nitropropane with a hindered pyridine base, with a reported KIE of 25 at 25°C, and in a 1,5-sigmatropic hydrogen shift, though it is observed that it is hard to extrapolate experimental values obtained at high temperature to lower temperatures. It has long been speculated that the high efficiency of enzyme catalysis in proton or hydride ion transfer reactions could be due partly to the quantum mechanical tunneling effect. The environment at the active site of an enzyme positions the donor and acceptor atoms close to the optimal tunneling distance, where the amino acid side chains can "force" the donor and acceptor atoms closer together by electrostatic and noncovalent interactions. It is also possible that the enzyme and its unusual hydrophobic environment inside a reaction site provide tunneling-promoting vibrations. Studies on ketosteroid isomerase have provided experimental evidence that the enzyme actually enhances the coupled motion/hydrogen tunneling, by comparing primary and secondary KIEs of the reaction under enzyme-catalyzed and non-enzyme-catalyzed conditions. Many examples exist for proton tunneling in enzyme-catalyzed reactions that were discovered by KIE. A well-studied example is methylamine dehydrogenase, where large primary KIEs of 5–55 have been observed for the proton transfer step. Another example of a tunneling contribution to proton transfer in enzymatic reactions is the reaction carried out by alcohol dehydrogenase. Competitive KIEs for the hydrogen transfer step at 25°C resulted in 3.6 and 10.2 for the primary and secondary KIEs, respectively. Transient kinetic isotope effect. Isotope effects expressed with the equations given above refer only to reactions that can be described with first-order kinetics. In all instances in which this is not possible, transient KIEs should be taken into account using the GEBIK and GEBIF equations. Experiments. Simmons and Hartwig refer to the following three cases as the main types of KIE experiments involving C-H bond functionalization: A) KIE determined from absolute rates of two parallel reactions. In this experiment, the rate constants for the normal substrate and its isotopically labeled analogue are determined independently, and the KIE is obtained as a ratio of the two. The accuracy of the measured KIE is severely limited by the accuracy with which each of these rate constants can be measured.
Furthermore, reproducing the exact conditions in the two parallel reactions can be very challenging. Nevertheless, a measurement of a large kinetic isotope effect through direct comparison of rate constants is indicative that C-H bond cleavage occurs at the rate-determining step. (A smaller value could indicate an isotope effect due to a pre-equilibrium, so that the C-H bond cleavage occurs somewhere before the rate-determining step.) B) KIE determined from an intermolecular competition. This type of experiment uses the same substrates as in Experiment A, but they are allowed to react in the same container, instead of in two separate containers. The KIE in this experiment is determined by the relative amount of products formed from C-H versus C-D functionalization (or it can be inferred from the relative amounts of unreacted starting materials). One must quench the reaction before it goes to completion to observe the KIE (see the Evaluation section below). Generally, the reaction is halted at low conversion (~5 to 10% conversion), or a large excess (> 5 equiv.) of the isotopic mixture is used. This experiment type ensures that both C-H and C-D bond functionalizations occur under exactly the same conditions, and the ratio of products from C-H and C-D bond functionalizations can be measured with much greater precision than the rate constants in Experiment A. Moreover, only a single measurement of product concentrations from a single sample is required. However, an observed kinetic isotope effect from this experiment is more difficult to interpret, since it may mean either that C-H bond cleavage occurs during the rate-determining step or that it occurs at a product-determining step that follows the rate-determining step. The absence of a KIE, at least according to Simmons and Hartwig, is nonetheless indicative of the C-H bond cleavage not occurring during the rate-determining step. C) KIE determined from an intramolecular competition. This type of experiment is analogous to Experiment B, except this time there is an intramolecular competition for the C-H or C-D bond functionalization. In most cases, the substrate possesses a directing group (DG) between the C-H and C-D bonds. Calculation of the KIE from this experiment and its interpretation follow the same considerations as those of Experiment B. However, the results of Experiments B and C will differ if the irreversible binding of the isotope-containing substrate takes place in Experiment B "prior" to the cleavage of the C-H or C-D bond. In such a scenario, an isotope effect may be observed in Experiment C (where the choice of the isotope can take place even after substrate binding) but not in Experiment B (since the choice of whether the C-H or C-D bond cleaves is already made as soon as the substrate binds irreversibly). In contrast to Experiment B, the reaction need not be halted at low consumption of isotopic starting material to obtain an accurate "k"H/"k"D, since the ratio of H and D in the starting material is 1:1 regardless of the extent of conversion. One non-C-H-activation example of different isotope effects being observed in the case of intermolecular (Experiment B) and intramolecular (Experiment C) competition is the photolysis of diphenyldiazomethane in the presence of "t"-butylamine. To explain this result, the formation of diphenylcarbene, followed by irreversible nucleophilic attack by "t"-butylamine, was proposed.
Because there is little isotopic difference in the rate of nucleophilic attack, the intermolecular experiment resulted in a KIE close to 1. In the intramolecular case, however, the product ratio is determined by the proton transfer that occurs after the nucleophilic attack, a process which has a substantial KIE of 2.6. Thus, Experiments A, B, and C give results of differing levels of precision and require different experimental setups and ways of analyzing data. As a result, the feasibility of each type of experiment will depend on the kinetic and stoichiometric profile of the reaction, as well as the physical characteristics of the reaction mixture (e.g., homogeneous vs. heterogeneous). Moreover, as noted in the paragraph above, the experiments provide KIE data for different steps of a multi-step reaction, depending on the relative locations of the rate-limiting step, the product-determining steps, and/or the C-H/D cleavage step. The hypothetical examples below illustrate common scenarios. Consider the following reaction coordinate diagram. For a reaction with this profile, all three experiments (A, B, and C) will yield a significant primary KIE. On the other hand, if a reaction has the following energy profile, in which the C-H or C-D bond cleavage is irreversible but occurs after the rate-determining step (RDS), no significant KIE will be observed with Experiment A, since the overall rate is not affected by the isotopic substitution. Nevertheless, the irreversible C-H bond cleavage step will give a primary KIE with the other two experiments, since the second step would still affect the product distribution. Therefore, with Experiments B and C, it is possible to observe the KIE even if C-H or C-D bond cleavage occurs not in the rate-determining step, but in the product-determining step. Evaluation of KIEs in a Hypothetical Multi-Step Reaction. A large part of the KIE arises from vibrational ZPE differences between the reactant ground state and the transition state that vary between the reactant and its isotopically substituted analog. While one can carry out involved calculations of KIEs using computational chemistry, much of the work done is simpler, involving the investigation of whether particular isotopic substitutions produce a detectable KIE or not. Vibrational changes from isotopic substitution at atoms away from the site where the reaction occurs tend to cancel between the reactant and the transition state. Therefore, the presence of a KIE indicates that the isotopically labeled atom is at or very near the reaction site. The absence of an isotope effect is more difficult to interpret: it may mean that the isotopically labeled atom is away from the reaction site, but it may also mean that there are certain compensating effects that lead to the lack of an observable KIE. For example, the differences between the reactant and transition state ZPEs may be identical between the normal reactant and its isotopically labeled version. Alternatively, it may mean that the isotopic substitution is at the reaction site, but vibrational changes associated with bonds to this atom occur after the rate-determining step. Such a case is illustrated in the following example, in which ABCD represents the atomic skeleton of a molecule.
Assuming steady-state conditions for the intermediate ABC, the overall rate of reaction is the following: formula_24 If the first step is rate-determining, this equation reduces to: formula_25 Or, if the second step is rate-determining, the equation reduces to: formula_26 In most cases, isotopic substitution at A, especially if it is a heavy atom, will not alter "k"1 or "k"2, but it will most probably alter "k"3. Hence, if the first step is rate-determining, there will not be an observable kinetic isotope effect in the overall reaction with isotopic labeling of A, but there will be one if the second step is rate-determining. For intermediate cases where both steps have comparable rates, the magnitude of the kinetic isotope effect will depend on the ratio of "k"3 and "k"2. Isotopic substitution of D will alter "k"1 and "k"2 while not affecting "k"3. The KIE will always be observable with this substitution, since "k"1 appears in the simplified rate expression regardless of which step is rate-determining, but it will be less pronounced if the second step is rate-determining, due to some cancellation between the isotope effects on "k"1 and "k"2. This outcome is related to the fact that equilibrium isotope effects are usually smaller than KIEs. Isotopic substitution of B will clearly alter "k"3, but it may also alter "k"1 to a lesser extent if the B-C bond vibrations are affected in the transition state of the first step. There may thus be a small isotope effect even if the first step is rate-determining. This hypothetical consideration reveals how observing KIEs may be used to investigate reaction mechanisms. The existence of a KIE is indicative of a change to the vibrational force constant of a bond associated with the isotopically labeled atom at or before the rate-controlling step. Intricate calculations may be used to learn a great amount of detail about the transition state from observed kinetic isotope effects. More commonly, though, the mere qualitative knowledge that a bond associated with the isotopically labeled atom is altered in a certain way can be very useful. Evaluation of rate constant ratios from intermolecular competition reactions. In competition reactions, the KIE is calculated from isotopic product or remaining reactant ratios after the reaction, but these ratios depend strongly on the extent of completion of the reaction. Most often, the isotopic substrate consists of molecules labeled in a specific position and their unlabeled, ordinary counterparts. In the case of 13C KIEs, as well as similar cases, one can also simply rely on the natural abundance of the isotopic carbon for the KIE experiments, eliminating the need for isotopic labeling. The two isotopic substrates will react through the same mechanism, but at different rates. The ratio between the amounts of the two species in the reactants and the products will thus change gradually over the course of the reaction, and this gradual change can be treated as follows: Assume that two isotopic molecules, A1 and A2, undergo irreversible competition reactions: formula_27 The KIE for this scenario is found to be: formula_28 where F1 and F2 refer to the fractions of conversion for the isotopic species A1 and A2, respectively. Evaluation. In this treatment, all other reactants are assumed to be non-isotopic.
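Before going through the derivation, the expression formula_28 can be checked numerically. The following sketch (rate constants and reaction time are assumed, illustrative values) simulates two parallel first-order reactions and recovers their rate-constant ratio from the fractional conversions:

```python
# Minimal sketch: two isotopologues A1 and A2 consumed by parallel first-order
# reactions; the KIE is recovered from the conversions via ln(1-F1)/ln(1-F2).
import math

k1, k2 = 0.50, 0.40           # assumed rate constants for A1 (light) and A2 (heavy)
t = 3.0                       # reaction time, arbitrary units

F1 = 1 - math.exp(-k1 * t)    # fractional conversion of A1
F2 = 1 - math.exp(-k2 * t)    # fractional conversion of A2

kie = math.log(1 - F1) / math.log(1 - F2)
print(f"F1 = {F1:.3f}, F2 = {F2:.3f}, recovered KIE = {kie:.3f}")   # equals k1/k2 = 1.25
```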
Assuming further that the reaction is of first order with respect to the isotopic substrate A, the following general rate expression for both these reactions can be written: formula_29 Since f([B],[C]...) does not depend on the isotopic composition of A, it can be solved for in both rate expressions, with A1 and A2, and the two can be equated to derive the following relations: formula_30 and formula_31, where [A1]0 and [A2]0 are the initial concentrations of A1 and A2, respectively. This leads to the following KIE expression: formula_32 which can also be expressed in terms of the fractional conversions of the two reactions, F1 and F2, where 1−Fn = [An]/[An]0 for n = 1 or 2, as follows: formula_33 As for finding the KIEs, mixtures of substrates containing stable isotopes may be analyzed with a mass spectrometer, which yields the ratios of the isotopic molecules in the initial substrate (defined here as [A2]0/[A1]0 = R0), in the substrate after some conversion ([A2]/[A1] = R), or in the product ([P2]/[P1] = RP). When one of the species, e.g. 2, is a radioisotope, its mixture with the other species can also be analyzed by its radioactivity, which is measured in molar activities that are proportional to [A2]0 / ([A1]0+[A2]0) ≈ [A2]0/[A1]0 = R0 in the initial substrate, [A2] / ([A1]+[A2]) ≈ [A2]/[A1] = R in the substrate after some conversion, and [P2] / ([P1]+[P2]) ≈ [P2]/[P1] = RP in the product, so that the same ratios as in the other case can be measured as long as the radioisotope is present in tracer amounts. Such ratios may also be determined using NMR spectroscopy. When the substrate composition is followed, the following KIE expression in terms of R0 and R can be derived: formula_34 Measurement of F1 in terms of weights per unit volume or molarities of the reactants. Taking the ratio of R and R0 using the previously derived expression for F2, one gets: formula_35 This relation can be solved in terms of the KIE to obtain the KIE expression given above. When the uncommon isotope has very low abundance, both R0 and R are very small and not significantly different from each other, such that 1-"F"1 can be approximated with "m"/"m"0 or "c"/"c"0. Isotopic enrichment of the starting material can be calculated from the dependence of "R/R"0 on "F"1 for various KIEs, yielding the following figure. Due to the exponential dependence, even very low KIEs lead to large changes in the isotopic composition of the starting material at high conversions. When the products are followed, the KIE can be calculated using the product ratio "R"P along with "R"0 as follows: formula_36 Kinetic isotope effect measurement at natural abundance. KIE measurement at natural abundance is a simple general method for measuring KIEs for chemical reactions performed with materials of natural abundance. This technique for measuring KIEs overcomes many limitations of previous KIE measurement methods. KIE measurements from isotopically labeled materials require a new synthesis for each isotopically labeled material (a process often prohibitively difficult), a competition reaction, and an analysis. The KIE measurement at natural abundance avoids these issues by taking advantage of high-precision quantitative techniques (nuclear magnetic resonance spectroscopy, isotope-ratio mass spectrometry) to site-selectively measure the kinetic fractionation of isotopes, in either the product or the starting material for a given chemical reaction. Single-pulse NMR.
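The exponential dependence of the enrichment on conversion noted above can be illustrated numerically. In this sketch (not from the source) the "true" KIE is an assumed value of 1.05, typical of a small heavy-atom effect, and the substrate ratio R/R0 is computed from formula_35 and then inverted with formula_34:

```python
# Minimal sketch: isotopic enrichment of the unreacted substrate vs. conversion,
# and recovery of the KIE from the measured R/R0 and F1.
import math

def enrichment(F1, kie):
    """R/R0 of remaining substrate at fractional conversion F1 of the light species."""
    return (1.0 - F1) ** (1.0 / kie - 1.0)        # (1-F1)^(k2/k1 - 1) with KIE = k1/k2

def kie_from_substrate(F1, R_over_R0):
    """KIE = k1/k2 = ln(1-F1) / ln[(1-F1) * R/R0]."""
    return math.log(1.0 - F1) / math.log((1.0 - F1) * R_over_R0)

true_kie = 1.05                                   # assumed
for F1 in (0.5, 0.9, 0.99):
    r = enrichment(F1, true_kie)
    print(f"F1 = {F1:.2f}: R/R0 = {r:.3f}, recovered KIE = {kie_from_substrate(F1, r):.3f}")
```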
Quantitative single-pulse nuclear magnetic resonance spectroscopy (NMR) is a method amenable to measuring the kinetic fractionation of isotopes for natural abundance KIE measurements. Pascal et al. were inspired by studies demonstrating dramatic variations of deuterium within identical compounds from different sources and hypothesized that NMR could be used to measure 2H KIEs at natural abundance. Pascal and coworkers tested their hypothesis by studying the insertion reaction of dimethyl diazomalonate into cyclohexane. Pascal et al. measured a KIE of 2.2 using 2H NMR for materials of natural abundance. Singleton and coworkers demonstrated the capacity of 13C NMR based natural abundance KIE measurements for studying the mechanism of the [4 + 2] cycloaddition of isoprene with maleic anhydride. Previous studies by Gajewski on isotopically enriched materials observed KIE results that suggested an asynchronous transition state, but were always consistent, within error, with a perfectly synchronous reaction mechanism. This work by Singleton et al. established the measurement of multiple 13C KIEs within the design of a single experiment. These 2H and 13C KIE measurements determined at natural abundance found that the "inside" hydrogens of the diene experience a more pronounced 2H KIE than the "outside" hydrogens, and that C1 and C4 experience a significant KIE. These key observations suggest an asynchronous reaction mechanism for the cycloaddition of isoprene with maleic anhydride. The limitations for determining KIEs at natural abundance using NMR are that the recovered material must have a suitable amount and purity for NMR analysis (the signal of interest should be distinct from other signals), the reaction of interest must be irreversible, and the reaction mechanism must not change for the duration of the chemical reaction. Experimental details for using quantitative single-pulse NMR to measure KIE at natural abundance are as follows: the experiment needs to be performed under quantitative conditions, including a relaxation delay of 5 T1, a measured 90° flip angle, a digital resolution of at least 5 points across a peak, and a signal-to-noise ratio greater than 250. The raw FID is zero-filled to at least 256K points before the Fourier transform. NMR spectra are phased and then treated with a zeroth-order baseline correction without any tilt correction. Signal integrations are determined numerically with a minimal tolerance for each integrated signal. Organometallic reaction mechanism elucidation examples. Colletto et al. developed a regioselective β-arylation of benzo[b]thiophenes at room temperature with aryl iodides as coupling partners and sought to understand the mechanism of this reaction by performing natural abundance KIE measurements via single-pulse NMR. The observation of a primary 13C isotope effect at C3, an inverse 2H isotope effect, a secondary 13C isotope effect at C2, and the lack of a 2H isotope effect at C2 led Colletto "et al." to suggest a Heck-type reaction mechanism for the regioselective β-arylation of benzo[b]thiophenes at room temperature with aryl iodides as coupling partners. Frost "et al." sought to understand the effects of Lewis acid additives on the mechanism of enantioselective palladium-catalyzed C-N bond activation using natural abundance KIE measurements via single-pulse NMR. The primary 13C KIE observed in the absence of BPh3 suggests a reaction mechanism with rate-limiting cis oxidative addition into the C–CN bond of the cyanoformamide.
The addition of BPh3 causes a relative decrease in the observed 13C KIE, which led Frost et al. to suggest a change in the rate-limiting step from cis oxidative addition to coordination of palladium to the cyanoformamide. DEPT-55 NMR. Though KIE measurements at natural abundance are a powerful tool for understanding reaction mechanisms, the amounts of material required for analysis can make this technique inaccessible for reactions that employ expensive reagents or unstable starting materials. To mitigate these limitations, Jacobsen and coworkers developed 1H to 13C polarization transfer as a means to reduce the time and material required for KIE measurements at natural abundance. Distortionless enhancement by polarization transfer (DEPT) takes advantage of the larger gyromagnetic ratio of 1H over 13C to theoretically improve measurement sensitivity by a factor of 4 or decrease experiment time by a factor of 16. This method for natural abundance kinetic isotope measurement is favorable for the analysis of reactions containing unstable starting materials and relatively costly catalysts or products. Jacobsen and coworkers identified the thiourea-catalyzed glycosylation of galactose as a reaction that met both of the aforementioned criteria (expensive materials and unstable substrates) and whose mechanism was poorly understood. Glycosylation is a special case of nucleophilic substitution that lacks a clear definition between SN1 and SN2 mechanistic character. The presence of the oxygen adjacent to the site of displacement (i.e., C1) can stabilize positive charge. This charge stabilization can cause any potential concerted pathway to become asynchronous and to approach intermediates with the oxocarbenium character of the SN1 mechanism for glycosylation. Jacobsen and coworkers observed small normal KIEs at C1, C2, and C5, which suggests significant oxocarbenium character in the transition state and an asynchronous reaction mechanism with a large degree of charge separation. Isotope-ratio mass spectrometry. High-precision isotope-ratio mass spectrometry (IRMS) is another method for measuring the kinetic fractionation of isotopes for natural abundance KIE measurements. Widlanski and coworkers demonstrated natural abundance 34S KIE measurements for the hydrolysis of sulfate monoesters. Their observation of a large KIE suggests that S-O bond cleavage is rate controlling and likely rules out an associative reaction mechanism. The major limitation for determining KIEs at natural abundance using IRMS is the required site-selective degradation, without isotopic fractionation, into an analyzable small molecule, a non-trivial task. Case studies. Primary hydrogen isotope effects. Primary hydrogen KIEs refer to cases in which a bond to the isotopically labeled hydrogen is formed or broken at a rate- and/or product-determining step of a reaction. These are the most commonly measured KIEs, and much of the previously covered theory refers to primary KIEs. When there is adequate evidence that transfer of the labeled hydrogen occurs in the rate-determining step of a reaction, if a fairly large KIE is observed, e.g. kH/kD of at least 5-6 or kH/kT of about 10–13 at room temperature, it is quite likely that the hydrogen transfer is linear and that the hydrogen is fairly symmetrically located in the transition state. It is usually not possible to make comments about tunneling contributions to the observed isotope effect unless the effect is very large.
If the primary KIE is not as large, it is generally considered to be indicative of a significant contribution from heavy-atom motion to the reaction coordinate, though it may also mean that hydrogen transfer follows a nonlinear pathway. Secondary hydrogen isotope effects. A secondary hydrogen isotope effect, or secondary KIE (SKIE), arises in cases where the isotopic substitution is remote from the bond being broken. The remote atom nonetheless influences the internal vibrations of the system, which via changes in zero-point energy (ZPE) affect the rates of chemical reactions. Such effects are expressed as ratios of the rate for the light isotope to that for the heavy isotope and can be "normal" (ratio ≥ 1) or "inverse" (ratio < 1) effects. SKIEs are defined as "α,β" (etc.) secondary isotope effects, where such prefixes refer to the position of the isotopic substitution relative to the reaction center (see alpha and beta carbon). The prefix "α" refers to the isotope associated with the reaction center and the prefix "β" refers to the isotope associated with an atom neighboring the reaction center, and so on. In physical organic chemistry, SKIE is discussed in terms of electronic effects such as induction, bond hybridization, or hyperconjugation. These properties are determined by electron distribution and depend upon vibrationally averaged bond lengths and angles that are not greatly affected by isotopic substitution. Thus, the term "electronic isotope effect", while legitimate, is discouraged, as it can be misinterpreted to suggest that the isotope effect is electronic in nature rather than vibrational. SKIEs can be explained in terms of changes in orbital hybridization. When the hybridization of a carbon atom changes from sp3 to sp2, a number of vibrational modes (stretches, in-plane and out-of-plane bending) are affected. The in-plane and out-of-plane bends in an sp3 hybridized carbon are similar in frequency due to the symmetry of an sp3 hybridized carbon. In an sp2 hybridized carbon, the in-plane bend is much stiffer than the out-of-plane bend, resulting in a large difference in frequency, ZPE, and thus the SKIE (which exists when there is a difference in the ZPE between the reactant and the transition state). The theoretical maximum change caused by the bending frequency difference has been calculated as 1.4. When carbon undergoes a reaction that changes its hybridization from sp3 to sp2, the out-of-plane bending force constant at the transition state is weaker, as it is developing sp2 character, and a "normal" SKIE is observed, with typical values of 1.1 to 1.2. Conversely, when carbon's hybridization changes from sp2 to sp3, the out-of-plane bending force constants at the transition state increase, and an inverse SKIE is observed, with typical values of 0.8 to 0.9. More generally, the SKIE for reversible reactions can be "normal" one way and "inverse" the other if bonding in the transition state is midway in stiffness between substrate and product, or "normal" both ways if bonding is weaker in the transition state, or "inverse" both ways if bonding is stronger in the transition state than in either reactant. An example of an "inverse" α SKIE can be seen in the work of Fitzpatrick and Kurtz, who used such an effect to distinguish between two proposed pathways for the reaction of d-amino acid oxidase with nitroalkane anions. Path A involves a nucleophilic attack on the coenzyme flavin adenine dinucleotide (FAD), while path B involves a free-radical intermediate.
As path A results in the intermediate carbon changing hybridization from sp2 to sp3, an "inverse" SKIE is expected. If path B occurs, then no SKIE should be observed, as the free-radical intermediate does not change hybridization. An SKIE of 0.84 was observed and path A was verified, as shown in the scheme below. Another example of an SKIE is the oxidation of benzyl alcohols by dimethyldioxirane, where three transition states for different mechanisms were proposed. Again, by considering how and whether the hydrogen atoms were involved in each, researchers predicted whether they would expect an effect from isotopic substitution of them. Then, analysis of the experimental data for the reaction allowed them to choose which pathway was most likely based on the observed isotope effect. Secondary hydrogen isotope effects from the methylene hydrogens were also used to show that the Cope rearrangement of 1,5-hexadiene follows a concerted bond-rearrangement pathway, and not one of the alternatively proposed allyl radical or 1,4-diyl pathways, all of which are presented in the following scheme. Alternative mechanisms for the Cope rearrangement of 1,5-hexadiene: (from top to bottom) allyl radical, synchronous concerted, and 1,4-diyl pathways. The predominant pathway is found to be the middle one, which has six delocalized π electrons corresponding to an aromatic intermediate. Steric isotope effects. The steric isotope effect (SIE) is an SKIE that does not involve bond breaking or formation. This effect is attributed to the different vibrational amplitudes of isotopologues. An example of such an effect is the racemization of 9,10-dihydro-4,5-dimethylphenanthrene. The smaller amplitude of vibration for 2H than for 1H in C–1H and C–2H bonds results in a smaller van der Waals radius or effective size, in addition to a difference in ZPE between the two. When molecules containing one isotope have a greater effective bulk than those containing the other, this may be manifested as a steric effect on the rate constant. In the example above, the 2H-substituted compound racemizes faster than the 1H compound, resulting in an SIE. A model for the SIE was developed by Bartell. An SIE is usually small, unless the transformation passes through a transition state with severe steric encumbrance, as in the racemization process shown above. Another example of the SIE is in the deslipping reaction of rotaxanes. 2H, due to its smaller effective size, allows easier passage of the stoppers through the macrocycle, resulting in faster deslipping for the deuterated rotaxanes. Inverse kinetic isotope effects. Reactions are known where the deuterated species reacts "faster" than the undeuterated one, and these cases are said to exhibit inverse KIEs (IKIE). IKIEs are often observed in the reductive elimination of alkyl metal hydrides, e.g. ((Me2NCH2)2)PtMe(H). In such cases the C-D bond in the transition state, an agostic species, is highly stabilized relative to the C–H bond. An inverse effect can also occur in a multistep reaction if the overall rate constant depends on a pre-equilibrium prior to the rate-determining step which has an inverse equilibrium isotope effect. For example, the rates of acid-catalyzed reactions are usually 2-3 times greater for reactions in D2O catalyzed by D3O+ than for the analogous reactions in H2O catalyzed by H3O+. This can be explained for a mechanism of specific hydrogen-ion catalysis of a reactant R by H3O+ (or D3O+):
H3O+ + R ⇌ RH+ + H2O
RH+ + H2O → H3O+ + P
The rate of formation of products is then d[P]/dt = k2[RH+] = k2K1[H3O+][R] = kobs[H3O+][R].
In the first step, H3O+ is usually a stronger acid than RH+. Deuteration shifts the equilibrium toward the more strongly bound acid species RD+, in which the effect of deuteration on the zero-point vibrational energy is greater, so that the deuterated equilibrium constant K1D is greater than K1H. This equilibrium isotope effect in the first step usually outweighs the kinetic isotope effect in the second step, so that there is an apparent inverse isotope effect and the observed overall rate constant kobs = k2K1 increases. Solvent hydrogen kinetic isotope effects. For solvent isotope effects to be measurable, a fraction of the solvent must have a different isotopic composition than the rest. Therefore, large amounts of the less common isotopic species must be available, limiting observable solvent isotope effects to isotopic substitutions involving hydrogen. Detectable KIEs occur only when solutes exchange hydrogen with the solvent or when there is a specific solute-solvent interaction near the reaction site. Both such phenomena are common for protic solvents, in which the hydrogen is exchangeable, and they may form dipole-dipole interactions or hydrogen bonds with polar molecules. Carbon-13 isotope effects. Most organic reactions involve breaking and making bonds to carbon; thus, it is reasonable to expect detectable carbon isotope effects. When 13C is used as the label, though, the change in mass of the isotope is only ~8%, which limits the observable KIEs to much smaller values than those observable with hydrogen isotope effects. Compensating for variations in 13C natural abundance. Often, the largest source of error in a study that depends on the natural abundance of carbon is the slight variation in natural 13C abundance itself. Such variations arise because the starting materials in the reaction are themselves products of other reactions that have KIEs and that thus isotopically enrich the products. To compensate for this error when NMR spectroscopy is used to determine the KIE, the following guidelines have been proposed: If these, as well as some other precautions listed by Jankowski, are followed, KIEs with precisions of three decimal places can be achieved. Isotope effects with elements heavier than carbon. Interpretation of carbon isotope effects is usually complicated by simultaneously forming and breaking bonds to carbon. Even reactions that involve only bond cleavage from the carbon, such as SN1 reactions, involve strengthening of the remaining bonds to carbon. In many such reactions, leaving group isotope effects tend to be easier to interpret. For example, substitution and elimination reactions in which chlorine acts as a leaving group are convenient to interpret, especially since chlorine acts as a monatomic species with no internal bonding to complicate the reaction coordinate, and it has two stable isotopes, 35Cl and 37Cl, both with high abundance. The major challenge to the interpretation of such isotope effects is the solvation of the leaving group. Owing to experimental uncertainties, measurement of isotope effects may entail significant uncertainty. Often isotope effects are determined through complementary studies on a series of isotopomers. Accordingly, it is quite useful to combine hydrogen isotope effects with heavy-atom isotope effects.
For instance, determining the nitrogen isotope effect along with the hydrogen isotope effect was used to show that the reaction of the 2-phenylethyltrimethylammonium ion with ethoxide in ethanol at 40°C follows an E2 mechanism, as opposed to alternative non-concerted mechanisms. This conclusion was reached upon showing that this reaction yields a nitrogen isotope effect, "k"14/"k"15, of 1.0133±0.0002 along with a hydrogen KIE of 3.2 at the leaving hydrogen. Similarly, combining nitrogen and hydrogen isotope effects was used to show that syn eliminations of simple ammonium salts also follow a concerted mechanism, which was previously a matter of debate. In the following two reactions of the 2-phenylcyclopentyltrimethylammonium ion with ethoxide, both of which yield 1-phenylcyclopentene, both isomers exhibited a nitrogen isotope effect "k"14/"k"15 at 60°C. Though the reaction of the trans isomer, which follows syn elimination, has a smaller nitrogen KIE (1.0064) than the cis isomer, which undergoes anti elimination (1.0108), both results are large enough to be indicative of weakening of the C-N bond in the transition state that would occur in a concerted process. Other examples. Since KIEs arise from differences in isotopic mass, the largest observable KIEs are associated with isotopic substitution of 1H with 2H (a 2x increase in mass) or 3H (a 3x increase in mass). Isotopic mass ratios as large as 36.4 can be reached using muons. Researchers have produced the lightest "hydrogen" atom, 0.11H (0.113 amu), in which an electron orbits a positive muon (μ+) "nucleus" that has a mass of 206 electron masses. They have also prepared the heaviest "hydrogen" atom by replacing one electron in helium with a negative muon μ− to form Heμ (mass 4.116 amu). Since μ− is much heavier than an electron, it orbits much closer to the nucleus, effectively shielding one proton, making Heμ behave as 4.1H. With these exotic atoms, the reaction of H with 1H2 was investigated. Rate constants from reacting the lightest and the heaviest hydrogen analogs with 1H2 were then used to calculate "k"0.11/"k"4.1, in which there is a 36.4x difference in isotopic mass. For this reaction, isotopic substitution happens to produce an IKIE, and the authors report a KIE as low as 1.74 × 10−4, which is the smallest KIE ever reported. The KIE leads to a specific distribution of 2H in natural products, depending on the route by which they were synthesized in nature. By NMR spectroscopy, it is therefore easy to detect whether the alcohol in wine was fermented from glucose or from illicitly added saccharose. Another reaction mechanism that was elucidated using the KIE is the halogenation of toluene: In this particular "intramolecular KIE" study, a benzylic hydrogen undergoes radical substitution by bromine using "N"-bromosuccinimide as the brominating agent. It was found that PhCH3 brominates 4.86x faster than PhCD3 (PhC2H3). A large KIE of 5.56 is associated with the reaction of ketones with bromine and sodium hydroxide. In this reaction the rate-limiting step is formation of the enolate by deprotonation of the ketone. In this study the KIE is calculated from the reaction rate constants for regular 2,4-dimethyl-3-pentanone and its deuterated isomer by optical density measurements. In asymmetric catalysis, there are rare cases where a KIE manifests as a significant difference in the enantioselectivity observed for a deuterated substrate compared to a non-deuterated one.
One example was reported by Toste and coworkers, in which a deuterated substrate produced an enantioselectivity of 83% ee, compared to 93% ee for the undeuterated substrate. The effect was taken to corroborate additional inter- and intramolecular competition KIE data that suggested cleavage of the C-H/D bond in the enantiodetermining step.
[ { "math_id": 0, "text": "\n\\begin{matrix}\\\\\n\\ce{ {CN^-} + {^{12}CH3-Br} ->[k_{12}] {^{12}CH3-CN} + Br^-}\\\\\n\\ce{ {CN^-} + {^{13}CH3-Br} ->[k_{13}] {^{13}CH3-CN} + Br^-}\\\\{}\n\\end{matrix}\n\\qquad\n\\text{KIE}=\\frac{k_{12}}{k_{13}} = 1.082 \\pm 0.008\n" }, { "math_id": 1, "text": "\\text{KIE}=\\frac{k_L}{k_H}" }, { "math_id": 2, "text": "\n\\kappa_\\mathrm{H}/\\kappa_{\\mathrm{D}}\n" }, { "math_id": 3, "text": "\n\\frac{k_\\ce{H}}{k_\\ce{D}} = \\left(\\frac{\\sigma_\\ce{H} \\sigma^\\ddagger_\\ce{D}}{\\sigma_\\ce{D} \\sigma^\\ddagger_\\ce{H}} \\right) \\left(\\frac{M^\\ddagger_\\ce{H} M_\\ce{D}}{M^\\ddagger_\\ce{D} M_\\ce{H}}\\right)^{\\frac 3 2}\\left(\\frac{I^\\ddagger_{x\\ce{H}}I^\\ddagger_{y\\ce{H}}I^\\ddagger_{z\\ce{H}}}{I^\\ddagger_{x\\ce{D}}I^\\ddagger_{y\\ce{D}}I^\\ddagger_{z\\ce{D}}}\\frac{I_{x\\ce{D}}I_{y\\ce{D}}I_{z\\ce{D}}}{I_{x\\ce{H}}I_{y\\ce{H}}I_{z\\ce{H}}}\\right)^{\\frac 1 2}\n \\left(\\frac{\\prod\\limits_{i=1}^{3N^\\ddagger -7}\\frac{1-e^{-u^\\ddagger_{i\\ce{D}}}}{1-e^{-u^\\ddagger_{i\\ce{H}}}}}{\\prod\\limits_{i=1}^{3N -6}\\frac{1-e^{-u_{i\\ce{D}}}}{1-e^{-u_{i\\ce{H}}}}} \\right) e^{-\\frac 1 2 \\left[\\sum\\limits_{i=1}^{3N^\\ddagger-7}(u^\\ddagger_{i\\ce{H}}-u^\\ddagger_{i\\ce{D}})-\\sum\\limits_{i=1}^{3N-6}(u_{i\\ce{H}}-u_{i\\ce{D}})\\right]}\n" }, { "math_id": 4, "text": "\nu_i:= \\frac{h\\nu_i}{k_\\mathrm{B}T} =\n\\frac{hcN_\\mathrm{A}\\tilde{\\nu}_i}{RT}\n" }, { "math_id": 5, "text": "\nu_i^\\ddagger:= \\frac{h\\nu_i^\\ddagger}{k_\\mathrm{B}T} =\n\\frac{hcN_\\mathrm{A}\\tilde{\\nu}_i^\\ddagger}{RT}\n" }, { "math_id": 6, "text": "\\tilde{\\nu}_i" }, { "math_id": 7, "text": "\\frac{k_\\ce{H}}{k_\\ce{D}} = \\mathbf{S} \\times \\mathbf{MMI} \\times \\mathbf{EXC} \\times \\mathbf{ZPE}" }, { "math_id": 8, "text": "\\begin{align}\n\\frac{k_\\ce{H}}{k_\\ce{D}} &\\cong \\exp\\left\\{-\\frac 1 2\\left[\\sum\\limits_{i=1}^{3N^\\ddagger-7}(u^\\ddagger_{i\\ce{H}}-u^\\ddagger_{i\\ce{D}})-\\sum\\limits_{i=1}^{3N-6}(u_{i\\ce{H}}-u_{i\\ce{D}})\\right]\\right\\}\n\\\\ &\\cong \\exp\\left[\\sum_{i}^{\\mathrm{(react.)}}\\frac{1}{2}\\Delta u_i-\\sum_i^{\\mathrm{(TS)}}\\frac{1}{2}\\Delta u_i^\\ddagger\\right]\n\\end{align}" }, { "math_id": 9, "text": "\\Delta u_i := u_{i\\mathrm{H}}-u_{i\\mathrm{D}}" }, { "math_id": 10, "text": "\\Delta u_i^\\ddagger := u_{i\\mathrm{H}}^\\ddagger-u_{i\\mathrm{D}}^\\ddagger" }, { "math_id": 11, "text": "\\Delta u_i" }, { "math_id": 12, "text": "\\Delta u_i^\\ddagger" }, { "math_id": 13, "text": "\n\\epsilon_i^{(0)}=\\frac{1}{2}h\\nu_i\n" }, { "math_id": 14, "text": "\nu_i = h\\nu_i/k_\\mathrm{B}T\n" }, { "math_id": 15, "text": "\n\\nu_{\\mathrm{X}}=\\frac{1}{2\\pi }\\sqrt{\\frac{k_\\mathrm{f}}{\\mu_\\mathrm{X}}}\\cong\n\\frac{1}{2\\pi }\\sqrt{\\frac{k_\\mathrm{f}}{m_\\mathrm{X}}}\n" }, { "math_id": 16, "text": "\n\\Delta u_i \\cong \\left(1-\\frac{1}{\\sqrt 2}\\right)\\frac{h\\nu_{i\\mathrm{H}}}{k_\\mathrm{B}T}\n" }, { "math_id": 17, "text": "\\left(\\frac{\\ln\\left(\\frac{k_\\ce{H}}{k_\\ce{T}}\\right)}{\\ln\\left(\\frac{k_\\ce{H}}{k_\\ce{D}}\\right)}\\right)_s\\cong\\frac{1-\\sqrt{m_\\ce{H}/m_\\ce{T}}}{1-\\sqrt{m_\\ce{H}/m_\\ce{D}}}=\\frac{1-\\sqrt{1/3}}{1-\\sqrt{1/2}}\\cong1.44" }, { "math_id": 18, "text": "\\left(\\frac{k_\\ce{H}}{k_\\ce{T}}\\right)_s=\\left(\\frac{k_\\ce{H}}{k_\\ce{D}}\\right)_s^{1.44}" }, { "math_id": 19, "text": "\nu_{i\\mathrm{X}}\\propto \\mu_{\\mathrm{X}}^{-1/2}\\cong m_{\\mathrm{X}}^{-1/2}\n" }, { "math_id": 20, "text": "k=QAe^{-E/RT}" }, { "math_id": 21, "text": "Q = \\frac{e^\\alpha}{\\beta-\\alpha}(\\beta 
e^{-\\alpha}-\\alpha e^{- \\beta})" }, { "math_id": 22, "text": "\\alpha=\\frac{E}{RT}" }, { "math_id": 23, "text": "\\beta=\\frac{2a \\pi ^2(2mE)^{1/2}}{h}" }, { "math_id": 24, "text": "\\frac{d[A]}{dt} = \\frac{k_1k_3[ABCD]}{k_2[D]+k_3}" }, { "math_id": 25, "text": "\\frac{d[A]}{dt} = k_1[ABCD]" }, { "math_id": 26, "text": "\\frac{d[A]}{dt} = \\frac{k_1k_3[ABCD]}{k_2[D]}" }, { "math_id": 27, "text": "\\begin{align}\n\\ce{ {A1} + {B} + {C} + \\cdots}\\ &\\ce{->[k_1] P1}\\\\\n\\ce{ {A2} + {B} + {C} + \\cdots}\\ &\\ce{->[k_2] P2}\n\\end{align}" }, { "math_id": 28, "text": "\\text{KIE} = {k_1 \\over k_2} = \\frac{\\ln (1-F_1)}{\\ln (1-F_2) }" }, { "math_id": 29, "text": "\\text{rate} = {-d[\\ce A_n]\\over dt} = k_n \\times [\\ce A_n] \\times f([\\ce B],[\\ce C],\\cdots) \\text{ where } n=1 \\text{ or } 2" }, { "math_id": 30, "text": "{1\\over k_1} \\times \\ce{\\mathit{d}[A1]\\over [A1]} = {1\\over k_2} \\times \\ce{\\mathit{d}[A2] \\over [A2]}" }, { "math_id": 31, "text": "{1\\over k_1} \\times \\int \\limits_\\ce{[A1]^0}^\\ce{[A1]} {d[\\ce A'_1]\\over [\\ce A'_1]} = {1\\over k_2} \\times \\int \\limits_\\ce{[A2]^0}^\\ce{[A2]}{d[\\ce A'_2] \\over [\\ce A'_2]}" }, { "math_id": 32, "text": "{k_1 \\over k_2} = \\frac\\ce{\\ln ([A1]/[A1]^0)}\\ce{\\ln ([A2]/[A2]^0) }" }, { "math_id": 33, "text": "{k_1 \\over k_2} = \\frac{\\ln (1-F_1)}{\\ln (1-F_2) }" }, { "math_id": 34, "text": "\\text{KIE} = \\frac{k_1}{k_2} = \\frac {\\ln(1-F_1)}{\\ln[(1-F_1)R/R_0]}" }, { "math_id": 35, "text": "{R \\over R_0} = \\ce{\\frac {[A2]/[A1]}{[A2]^0/[A1]^0}} = \\ce{\\frac {[A2]/[A2]^0}{[A1]/[A1]^0}} = \\frac{1-F_2}{1-F_1}=(1-F_1)^{(k_2/k_1)-1}" }, { "math_id": 36, "text": "{k_1 \\over k_2} = \\frac {\\ln(1-F_1)} {\\ln[1-(F_1R_P/R_0)]}" } ]
https://en.wikipedia.org/wiki?curid=1106771
11068276
Vividness of Visual Imagery Questionnaire
Psychometric instrument for visual imagery vividness The Vividness of Visual Imagery Questionnaire (VVIQ) was developed in 1973 by the British psychologist David Marks. The VVIQ consists of 16 items in four groups of 4 items, in which the participant is invited to consider the mental image formed in thinking about specific scenes and situations. The vividness of the image is rated along a 5-point scale. The questionnaire has been widely used as a measure of individual differences in vividness of visual imagery. A large body of evidence confirms that the VVIQ is a valid and reliable psychometric measure of visual image vividness. In 1995 Marks published a new version of the VVIQ, the VVIQ2. This questionnaire consists of twice the number of items and reverses the rating scale so that higher scores reflect higher vividness. More recently, Campos and Pérez-Fabello evaluated the reliability and construct validity of the VVIQ and the VVIQ2. Cronbach's formula_0 reliabilities for both the VVIQ and the VVIQ2 were found to be high. Estimates of internal consistency reliability and construct validity were found to be similar for the two versions. Validation. The VVIQ has proved an essential tool in the scientific investigation of mental imagery as a phenomenological, behavioral and neurological construct. Marks' 1973 paper has been cited in close to 2000 studies of mental imagery in a variety of fields including cognitive psychology, clinical psychology and neuropsychology. The procedure can be carried out with eyes closed and/or with eyes open. The total score on the VVIQ is a predictor of the person's performance in a variety of cognitive, motor, and creative tasks. For example, Marks (1973) reported that high vividness scores correlate with the accuracy of recall of coloured photographs. The VVIQ is available in several languages apart from English, including Spanish, Japanese, French (Denis, 1982), and Polish (Jankowska and Karwowski, 2020). Factor analysis of the Spanish VVIQ by Campos, González, and Amor (2002) indicated a single factor that explained 37% of the variance, with good internal consistency (Cronbach's α = .88). The VVIQ has spawned imagery vividness questionnaires across several other modalities including auditory (VAIQ; Brett and Starker, 1977), movement (VMIQ; Isaac, Marks and Russell, 1986), olfactory (VOIQ; Gilbert, Crouch and Kemp, 1998) and wine imagery (VWIQ; Croijmans, Speed, Arshamian and Majid, 2019). Some critics have argued that introspective or 'self-report' questionnaires including the VVIQ are "too subjective" and can fall under the influence of social desirability, demand characteristics and other uncontrolled factors (Kaufmann, 198). In spite of this issue, acceptably strong evidence of criterion validity for the VVIQ has been found in a meta analysis of more than 200 studies. The meta analysis by McKelvie (1995) indicated that the internal consistency and test-retest reliability of the VVIQ were acceptable and minimally acceptable, respectively, while alternate-form reliability was unacceptable. McKelvie (1995) reported only a weak correlation (r = .137) between VVIQ imagery ability and memory performance. However, McKelvie (1995, p. 59) asserted that his "findings support the construct of vividness and the validity of the VVIQ". McKelvie's meta analysis (1995, p. 81) obtained an acceptable relationship between the VVIQ and criterion test performance, with the strongest relationship being with self-report tasks.
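The internal-consistency figures cited above are Cronbach's formula_0 coefficients computed from item-level ratings. The following sketch is illustrative only (it uses simulated ratings, not actual VVIQ data) and shows how such a coefficient is obtained from a 16-item, 5-point response matrix:

```python
# Minimal sketch: Cronbach's alpha for a simulated 16-item, 5-point questionnaire.
import numpy as np

rng = np.random.default_rng(0)
base = rng.integers(1, 6, size=(50, 1))                       # per-respondent "imagery level"
items = np.clip(base + rng.integers(-1, 2, size=(50, 16)), 1, 5).astype(float)

k = items.shape[1]
item_variances = items.var(axis=0, ddof=1)                    # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)                # variance of the total score
alpha = k / (k - 1) * (1 - item_variances.sum() / total_variance)
print(f"Cronbach's alpha for {k} items: {alpha:.2f}")
```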
A meta analysis of gender differences in 16 comparisons of men's and women's VVIQ scores showed a "slight but reliable tendency for women to report more vivid images than men" (Richardson, 1995). However, Richardson observed that randomizing the order of the items abolishes the gender differences, suggesting that the latter are "determined by psychosocial factors rather than by biological ones" (p. 177). Rodway, Gillies and Schepman (2006) used a novel long-term change detection task to determine whether participants with low and high vividness scores on the VVIQ2 showed any performance differences. Rodway et al. (2006) found that high vividness participants were significantly more accurate at detecting salient changes to pictures than low vividness participants. This replicated an earlier study by Gur and Hilgard (1975). An unresolved issue about image vividness ratings concerns whether the ratings are measures of a "trait", a "state" or a mixture of the two. An updated meta analysis of the validity of the VVIQ by Runge, Cheung and D'Angiulli (2017) compared the two main formats used to measure imagery vividness: trial-by-trial vividness ratings (VRs) and the Vividness of Visual Imagery Questionnaire (VVIQ). The associations between the vividness scores obtained using these two formats and all existing behavioural, cognitive and neuroscientific measures were computed. Significantly larger effect sizes were found for VRs than for the VVIQ, which suggests that VRs provide a more reliable self-report measure than the VVIQ and "may reflect a more direct route of reportability than the latter". Neuropsychological studies. Recent studies have found that individual differences in VVIQ scores can be used to predict changes in a person's brain while visualizing different activities. Unlike associations between cognitive or perceptual performance measures and VVIQ scores, demand characteristics and social desirability effects can be eliminated as possible explanations of any observed differences between vivid and non-vivid imagers. Marks and Isaac (1995) mapped electroencephalographic (EEG) activity topographically during visual and motor imagery in vivid and non-vivid imagers. Topographical maps of EEG activation revealed attenuation of alpha power in vivid imagers during visual imagery, particularly in the left posterior quadrant of the cortex, but enhanced alpha power during motor imagery. Amedi, Malach and Pascual-Leone (2005) predicted that VVIQ scores might be correlated with the degree of deactivation of the auditory cortex in individual subjects in functional magnetic resonance imaging (fMRI). These investigators found a significant positive correlation between the magnitude of A1 deactivation (a negative blood-oxygen-level-dependent (BOLD) signal in the auditory cortex) and the subjective vividness of visual imagery (Spearman r = 0.73, p < 0.05). In a related study, Xu Cui, Cameron Jeter, Dongni Yang, Read Montague and David Eagleman (2007) also observed that reported vividness is correlated with an objective measure of brain activity: early visual cortex activity relative to whole-brain activity measured by fMRI. These results show that individual differences in visual imagery vividness are quantifiable even in the absence of subjective report. In a meta analysis, Runge, Cheung and D'Angiulli (2017) observed that both VRs and the VVIQ "are more strongly associated with the neural, than the cognitive and behavioural correlates of imagery.
If one establishes neuroscience measures as the criterion variable, then self-reports of vividness show higher construct validity than behavioural/cognitive measures of imagery". In a large study with 285 participants, Tabi, Maio, Attaallah, et al. (2022) investigated the association between VVIQ scores, visual short-term memory performance and the volumes of brain structures including the hippocampus, amygdala, primary motor cortex, primary visual cortex and the fusiform gyrus. Tabi et al. (2022) used a variant of the "What was where?" visual object-location binding task to assess the participants' memories over 1- or 4-second delays. In healthy volunteers, there was no evidence of an association between the vividness of visual imagery and short-term memory. However, significant positive correlations occurred between visual imagery and the volumes of the hippocampus and primary visual cortex. The corresponding figure plots VVIQ scores against bilateral hippocampal volume, amygdala volume, and the volumes of the primary motor cortex, primary visual cortex and fusiform gyrus: VVIQ scores correlated positively with the volumes of the hippocampus and the primary visual cortex, but not with those of the control regions, the amygdala and primary motor cortex, suggesting an involvement of the former two areas in visual imagery; there was no correlation between fusiform gyrus volume and the VVIQ (Tabi et al., 2022). The neuropsychological evidence thus indicates that high and low VVIQ scorers differ in the volumes of cortical structures thought to be responsible for image generation.
[ { "math_id": 0, "text": "\\alpha" } ]
https://en.wikipedia.org/wiki?curid=11068276
1107021
Accumulation function
The accumulation function "a"("t") is a function defined in terms of time "t" expressing the ratio of the value at time "t" (future value) to the initial investment (present value). It is used in interest theory. Thus "a"(0) = 1, and the value at time "t" is given by formula_0, where the initial investment is formula_1 For various interest-accumulation protocols, the accumulation function is as follows (with "i" denoting the interest rate and "d" denoting the discount rate): simple interest, formula_2; compound interest, formula_3; simple discount, formula_4; compound discount, formula_5. In the case of a positive rate of return, as in the case of interest, the accumulation function is an increasing function. Variable rate of return. The logarithmic or continuously compounded return, sometimes called the force of interest, is a function of time defined as follows: formula_6 which is the rate of change with time of the natural logarithm of the accumulation function. Conversely, formula_7 reducing to formula_8 for constant formula_9. The effective annual percentage rate at any time is: formula_10
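A minimal sketch of these accumulation functions follows (not from the source; the parameter values are illustrative, and the variable-rate case is integrated numerically):

```python
# Minimal sketch: accumulation functions for simple/compound interest and discount,
# plus the general a(t) = exp(integral of delta(u) du) for a variable force of interest.
import math

def a_simple(t, i):            return 1 + t * i
def a_compound(t, i):          return (1 + i) ** t
def a_simple_discount(t, d):   return 1 + t * d / (1 - d)
def a_compound_discount(t, d): return (1 - d) ** (-t)

def a_variable(t, delta, steps=10_000):
    """Numerically integrate a time-dependent force of interest delta(u) on [0, t]."""
    du = t / steps
    integral = sum(delta(k * du) * du for k in range(steps))
    return math.exp(integral)

i, d, t = 0.05, 0.05, 10
print(a_simple(t, i), a_compound(t, i))
print(a_simple_discount(t, d), a_compound_discount(t, d))
# a constant delta reproduces continuous compounding: a(t) = e^(t*delta)
print(a_variable(t, lambda u: 0.05), math.exp(0.05 * t))
```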
[ { "math_id": 0, "text": "A(t) = A(0) \\cdot a(t)" }, { "math_id": 1, "text": "A(0)." }, { "math_id": 2, "text": "a(t)=1+t \\cdot i" }, { "math_id": 3, "text": "a(t)=(1+i)^t" }, { "math_id": 4, "text": "a(t) = 1+\\frac{td}{1-d}" }, { "math_id": 5, "text": "a(t) = (1-d)^{-t}" }, { "math_id": 6, "text": "\\delta_{t}=\\frac{a'(t)}{a(t)}\\," }, { "math_id": 7, "text": "a(t)=e^{\\int_0^t \\delta_u\\, du}" }, { "math_id": 8, "text": "a(t)=e^{t \\delta}" }, { "math_id": 9, "text": "\\delta" }, { "math_id": 10, "text": " r(t) = e^{\\delta_t} - 1" } ]
https://en.wikipedia.org/wiki?curid=1107021
11070790
Maximum entropy spectral estimation
Spectral density estimation method Maximum entropy spectral estimation is a method of spectral density estimation. The goal is to improve the spectral quality based on the principle of maximum entropy. The method is based on choosing the spectrum which corresponds to the most random or the most unpredictable time series whose autocorrelation function agrees with the known values. This assumption, which corresponds to the concept of maximum entropy as used in both statistical mechanics and information theory, is maximally non-committal with regard to the unknown values of the autocorrelation function of the time series. It is simply the application of maximum entropy modeling to any type of spectrum and is used in all fields where data is presented in spectral form. The usefulness of the technique varies based on the source of the spectral data, since it is dependent on the amount of assumed knowledge about the spectrum that can be applied to the model. In maximum entropy modeling, probability distributions are created on the basis of that which is known, leading to a type of statistical inference about the missing information which is called the maximum entropy estimate. For example, in spectral analysis the expected peak shape is often known, but in a noisy spectrum the center of the peak may not be clear. In such a case, inputting the known information allows the maximum entropy model to derive a better estimate of the center of the peak, thus improving spectral accuracy. Method description. In the periodogram approach to calculating the power spectra, the sample autocorrelation function is multiplied by some window function and then Fourier transformed. The window is applied to provide statistical stability as well as to avoid leakage from other parts of the spectrum. However, the window limits the spectral resolution. The maximum entropy method attempts to improve the spectral resolution by extrapolating the correlation function beyond the maximum lag in such a way that the entropy of the corresponding probability density function is maximized in each step of the extrapolation. The maximum entropy rate stochastic process that satisfies the given empirical autocorrelation and variance constraints is an autoregressive model with independent and identically distributed zero-mean Gaussian input. Therefore, the maximum entropy method is equivalent to least-squares fitting the available time series data to an autoregressive model formula_0 where the formula_1 are independent and identically distributed as formula_2. The unknown coefficients formula_3 are found using the least-squares method. Once the autoregressive coefficients have been determined, the spectrum of the time series data is estimated by evaluating the power spectral density function of the fitted autoregressive model formula_4 where formula_5 is the sampling period and formula_6 is the imaginary unit.
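A minimal sketch of the procedure just described, assuming a least-squares fit of the autoregressive coefficients with NumPy; all function and variable names are illustrative, and the sign in the denominator follows the model convention stated in the comments:

```python
import numpy as np

def ar_psd_estimate(x, order, ts=1.0, n_freq=512):
    """Least-squares AR fit of x and the implied power spectral density.

    Model: x[t] ~ sum_k a[k] * x[t-k] + noise, for k = 1..order.
    Returns (freqs, psd, a, sigma2)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    # Regression matrix of lagged samples: column k holds x[t-k].
    X = np.column_stack([x[order - k:n - k] for k in range(1, order + 1)])
    y = x[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ a
    sigma2 = np.mean(resid ** 2)

    freqs = np.linspace(0.0, 0.5 / ts, n_freq)   # up to the Nyquist frequency
    omega = 2 * np.pi * freqs
    k = np.arange(1, order + 1)
    # Denominator 1 - sum_k a_k exp(-i k w Ts); this matches the article's
    # formula up to the sign convention chosen for the coefficients.
    denom = 1.0 - np.exp(-1j * np.outer(omega * ts, k)) @ a
    psd = sigma2 * ts / np.abs(denom) ** 2
    return freqs, psd, a, sigma2

# Illustrative use: a noisy sinusoid sampled at 1 Hz.
rng = np.random.default_rng(0)
t = np.arange(500)
x = np.sin(2 * np.pi * 0.1 * t) + 0.5 * rng.standard_normal(500)
freqs, psd, _, _ = ar_psd_estimate(x, order=8)
print(freqs[np.argmax(psd)])   # spectral peak expected near 0.1
```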
[ { "math_id": 0, "text": " X_t = \\sum_{k=1}^M \\alpha_k X_{t-k} + \\epsilon_k" }, { "math_id": 1, "text": "\\epsilon_k" }, { "math_id": 2, "text": "N(0, \\sigma^2)" }, { "math_id": 3, "text": "\\alpha_k" }, { "math_id": 4, "text": " \\hat{S}(\\omega) = \\frac{\\sigma^2 T_s}{\\left| 1 + \\sum_{k=1}^M \\alpha_k e^{- i k \\omega T_s} \\right|^2}, " }, { "math_id": 5, "text": "T_s" }, { "math_id": 6, "text": "i = \\sqrt{-1}" } ]
https://en.wikipedia.org/wiki?curid=11070790
11071463
Entropy rate
Time density of the average information in a stochastic process In the mathematical theory of probability, the entropy rate or source information rate is a function assigning an entropy to a stochastic process. For a strongly stationary process, the conditional entropy of the latest random variable, given all the preceding ones, eventually tends towards this rate value. Definition. A process formula_0 with a countable index gives rise to the sequence of its joint entropies formula_1. If the limit exists, the entropy rate is defined as formula_2 Note that given any sequence formula_3 with formula_4 and letting formula_5, by telescoping one has formula_6. The entropy rate thus computes the mean of the first formula_7 such entropy changes, with formula_7 going to infinity. The behaviour of joint entropies from one index to the next also appears explicitly in some characterizations of entropy. Discussion. While formula_0 may be understood as a sequence of random variables, the entropy rate formula_8 represents the average entropy change per random variable, in the long term. It can be thought of as a general property of stochastic sources - this is the subject of the asymptotic equipartition property. For strongly stationary processes. A stochastic process also gives rise to a sequence of conditional entropies, comprising more and more random variables. For strongly stationary stochastic processes, the entropy rate equals the limit of that sequence formula_9 The quantity given by the limit on the right is also denoted formula_10, a notation motivated by the fact that this limit is again a rate associated with the process, in the above sense. For Markov chains. Since a stochastic process defined by a Markov chain that is irreducible, aperiodic and positive recurrent has a stationary distribution, the entropy rate is independent of the initial distribution. For example, consider a Markov chain defined on a countable number of states. Given its right stochastic transition matrix formula_11 and an entropy formula_12 associated with each state, one finds formula_13 where formula_14 is the asymptotic distribution of the chain. In particular, it follows that the entropy rate of an i.i.d. stochastic process is the same as the entropy of any individual member of the process. For hidden Markov models. The entropy rate of hidden Markov models (HMM) has no known closed-form solution. However, it has known upper and lower bounds. Let the underlying Markov chain formula_15 be stationary, and let formula_16 be the observable states; then we have formula_17 and in the limit formula_18, both sides converge to the middle. Applications. The entropy rate may be used to estimate the complexity of stochastic processes. It is used in diverse applications ranging from characterizing the complexity of languages, blind source separation, through to optimizing quantizers and data compression algorithms. For example, a maximum entropy rate criterion may be used for feature selection in machine learning. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
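For the Markov-chain case above, here is a short numerical sketch (illustrative values only) that computes the stationary distribution and the rate formula_13:

```python
import numpy as np

def markov_entropy_rate(P):
    """Entropy rate (in bits per step) of a finite, irreducible Markov chain.

    P is a right stochastic matrix; the rate is sum_i mu_i * h_i, where mu is
    the stationary distribution and h_i the entropy of row i of P."""
    P = np.asarray(P, dtype=float)
    # Stationary distribution: left eigenvector of P for eigenvalue 1.
    w, v = np.linalg.eig(P.T)
    mu = np.real(v[:, np.argmin(np.abs(w - 1.0))])
    mu = mu / mu.sum()
    # Row entropies h_i = -sum_j P_ij log2 P_ij, with 0*log(0) treated as 0.
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(P > 0, P * np.log2(P), 0.0)
    h = -terms.sum(axis=1)
    return float(mu @ h)

# Two-state example; transition probabilities are illustrative.
P = [[0.9, 0.1],
     [0.2, 0.8]]
print(markov_entropy_rate(P))   # about 0.55 bits per step
```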
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "H_n(X_1, X_2, \\dots X_n)" }, { "math_id": 2, "text": "H(X) := \\lim_{n \\to \\infty} \\tfrac{1}{n} H_n." }, { "math_id": 3, "text": "(a_n)_n" }, { "math_id": 4, "text": "a_0=0" }, { "math_id": 5, "text": "\\Delta a_k := a_k - a_{k-1}" }, { "math_id": 6, "text": "a_n={\\textstyle \\sum_{k=1}^n}\\Delta a_k" }, { "math_id": 7, "text": "n" }, { "math_id": 8, "text": "H(X)" }, { "math_id": 9, "text": "H(X) = \\lim_{n \\to \\infty} H(X_n|X_{n-1}, X_{n-2}, \\dots X_1)" }, { "math_id": 10, "text": "H'(X)" }, { "math_id": 11, "text": "P_{ij}" }, { "math_id": 12, "text": "h_i := -\\sum_{j} P_{ij} \\log P_{ij}" }, { "math_id": 13, "text": "\\displaystyle H(X) = \\sum_{i} \\mu_i h_i," }, { "math_id": 14, "text": "\\mu_i" }, { "math_id": 15, "text": "X_{1:\\infty}" }, { "math_id": 16, "text": "Y_{1:\\infty}" }, { "math_id": 17, "text": "H(Y_n|X_1 , Y_{1:n-1} ) \\leq H(Y) \\leq H(Y_n|Y_{1:n-1} )" }, { "math_id": 18, "text": "n \\to \\infty" } ]
https://en.wikipedia.org/wiki?curid=11071463
1107299
Token bucket
Scheduling algorithm for network transmissions The token bucket is an algorithm used in packet-switched and telecommunications networks. It can be used to check that data transmissions, in the form of packets, conform to defined limits on bandwidth and burstiness (a measure of the unevenness or variations in the traffic flow). It can also be used as a scheduling algorithm to determine the timing of transmissions that will comply with the limits set for the bandwidth and burstiness: see network scheduler. Overview. The token bucket algorithm is based on an analogy of a fixed capacity bucket into which tokens, normally representing a unit of bytes or a single packet of predetermined size, are added at a fixed rate. When a packet is to be checked for conformance to the defined limits, the bucket is inspected to see if it contains sufficient tokens at that time. If so, the appropriate number of tokens, e.g. equivalent to the length of the packet in bytes, are removed ("cashed in"), and the packet is passed, e.g., for transmission. The packet does not conform if there are insufficient tokens in the bucket, and the contents of the bucket are not changed. Non-conformant packets can be treated in various ways: they may be dropped; they may be enqueued for subsequent transmission when sufficient tokens have accumulated in the bucket; or they may be transmitted but marked as non-conformant, possibly to be dropped subsequently if the network is overloaded. A conforming flow can thus contain traffic with an average rate up to the rate at which tokens are added to the bucket, and have a burstiness determined by the depth of the bucket. This burstiness may be expressed in terms of either a jitter tolerance, i.e. how much sooner a packet might conform (e.g. arrive or be transmitted) than would be expected from the limit on the average rate, or a burst tolerance or maximum burst size, i.e. how much more than the average level of traffic might conform in some finite period. Algorithm. The token bucket algorithm can be conceptually understood as follows: a token is added to the bucket every formula_0 seconds; the bucket can hold at most formula_1 tokens, and a token arriving when the bucket is full is discarded; when a packet of "n" bytes arrives and at least "n" tokens are in the bucket, "n" tokens are removed and the packet is passed; if fewer than "n" tokens are available, no tokens are removed and the packet is considered non-conformant. Variations. Implementers of this algorithm on platforms lacking the clock resolution necessary to add a single token to the bucket every formula_0 seconds may want to consider an alternative formulation. Given the ability to update the token bucket every S milliseconds, the number of tokens to add every S milliseconds = formula_2. Properties. Average rate. Over the long run the output of conformant packets is limited by the token rate, formula_3. Burst size. Let formula_4 be the maximum possible transmission rate in bytes/second. Then formula_5 is the maximum burst time, that is the time for which the rate formula_4 is fully utilized. The maximum burst size is thus formula_6 Uses. The token bucket can be used in either traffic shaping or traffic policing. In traffic policing, nonconforming packets may be discarded (dropped) or may be reduced in priority (for downstream traffic management functions to drop if there is congestion). In traffic shaping, packets are delayed until they conform. Traffic policing and traffic shaping are commonly used to protect the network against excess or excessively bursty traffic, see bandwidth management and congestion avoidance. Traffic shaping is commonly used in the network interfaces in hosts to prevent transmissions being discarded by traffic management functions in the network. The token bucket algorithm is also used in controlling database IO flow. In it, limitation applies to neither IOPS nor the bandwidth but rather to a linear combination of both. By defining tokens to be the normalized sum of IO request weight and its length, the algorithm makes sure that the time derivative of the aforementioned function stays below the needed threshold. 
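A minimal sketch of the bucket logic described above, assuming one token per byte and a refill computed from a monotonic clock; class and parameter names are illustrative, not part of any particular implementation:

```python
import time

class TokenBucket:
    """Minimal token-bucket meter: refill rate r tokens/second, depth b tokens.

    conforms(n) removes n tokens and returns True if a packet of n bytes
    conforms; otherwise it leaves the bucket unchanged and returns False."""

    def __init__(self, rate, capacity):
        self.rate = float(rate)          # token refill rate r (tokens/s)
        self.capacity = float(capacity)  # bucket depth b (tokens)
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def _refill(self):
        now = time.monotonic()
        # Add rate * elapsed tokens, never exceeding the bucket depth.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now

    def conforms(self, n):
        self._refill()
        if self.tokens >= n:
            self.tokens -= n
            return True
        return False   # non-conformant: drop, queue, or mark downstream

# Illustrative use: 1000 bytes/s average rate, bursts of up to 500 bytes.
bucket = TokenBucket(rate=1000, capacity=500)
print(bucket.conforms(400))   # True, burst allowed
print(bucket.conforms(400))   # False until enough tokens accumulate
```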
Comparison to leaky bucket. The token bucket algorithm is directly comparable to one of the two versions of the leaky bucket algorithm described in the literature. This comparable version of the leaky bucket is described on the relevant Wikipedia page as the leaky bucket algorithm as a meter. This is a mirror image of the token bucket, in that conforming packets add fluid, equivalent to the tokens removed by a conforming packet in the token bucket algorithm, to a finite capacity bucket, from which this fluid then drains away at a constant rate, equivalent to the process in which tokens are added at a fixed rate. There is, however, another version of the leaky bucket algorithm, described on the relevant Wikipedia page as the leaky bucket algorithm as a queue. This is a special case of the leaky bucket as a meter, which can be described by the conforming packets passing through the bucket. The leaky bucket as a queue is therefore applicable only to traffic shaping, and does not, in general, allow the output packet stream to be bursty, i.e. it is jitter free. It is therefore significantly different from the token bucket algorithm. These two versions of the leaky bucket algorithm have both been described in the literature under the same name. This has led to considerable confusion over the properties of that algorithm and its comparison with the token bucket algorithm. However, fundamentally, the two algorithms are the same, and will, if implemented correctly and given the same parameters, see exactly the same packets as conforming and nonconforming. Hierarchical token bucket. The hierarchical token bucket (HTB) is a faster replacement for the class-based queueing (CBQ) queuing discipline in Linux. It is useful for limiting each client's download/upload rate so that the limited client cannot saturate the total bandwidth. Conceptually, HTB is an arbitrary number of token buckets arranged in a hierarchy. The primary egress queuing discipline ("qdisc") on any device is known as the root qdisc. The root qdisc will contain one class. This single HTB class will be set with two parameters, a "rate" and a "ceil". These values should be the same for the top-level class, and will represent the total available bandwidth on the link. In HTB, "rate" means the guaranteed bandwidth available for a given class and "ceil" (short for ceiling) indicates the maximum bandwidth that class is allowed to consume. When a class requests a bandwidth more than guaranteed, it may borrow bandwidth from its parent as long as both ceils are not reached. Hierarchical Token Bucket implements a classful queuing mechanism for the Linux traffic control system, and provides rate and ceil to allow the user to control the absolute bandwidth to particular classes of traffic as well as indicate the ratio of distribution of bandwidth when extra bandwidth become available (up to ceil). When choosing the bandwidth for a top-level class, traffic shaping only helps at the bottleneck between the LAN and the Internet. Typically, this is the case in home and office network environments, where an entire LAN is serviced by a DSL or T1 connection. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "1/r" }, { "math_id": 1, "text": "b" }, { "math_id": 2, "text": "(r*S)/1000" }, { "math_id": 3, "text": "r" }, { "math_id": 4, "text": "M" }, { "math_id": 5, "text": "T_\\text{max} =\n\\begin{cases}\nb/(M -r) & \\text{ if } r < M \\\\\n\\infty & \\text{ otherwise }\n\\end{cases}\n" }, { "math_id": 6, "text": "B_\\text{max} = T_\\text{max}*M" } ]
https://en.wikipedia.org/wiki?curid=1107299
11073025
Epistemic closure
Principle in epistemology Epistemic closure is a property of some belief systems. It is the principle that if a subject formula_0 knows formula_1, and formula_0 knows that formula_1 entails formula_2, then formula_0 can thereby come to know formula_2. Most epistemological theories involve a closure principle and many skeptical arguments assume a closure principle. On the other hand, some epistemologists, including Robert Nozick, have denied closure principles on the basis of reliabilist accounts of knowledge. Nozick, in "Philosophical Explanations", advocated that, when considering the Gettier problem, the least counter-intuitive assumption we give up should be epistemic closure. Nozick suggested a "truth tracking" theory of knowledge, in which the x was said to know P if x's belief in P tracked the truth of P through the relevant modal scenarios. A subject may not actually believe q, for example, regardless of whether he or she is justified or warranted. Thus, one might instead say that knowledge is closed under "known" deduction: if, while knowing p, S believes q because S knows that p entails q, then S knows q. An even stronger formulation would be as such: If, while knowing various propositions, S believes p because S knows that these propositions entail p, then S knows p. While the principle of epistemic closure is generally regarded as intuitive, philosophers such as Robert Nozick and Fred Dretske have argued against it. Epistemic closure and skeptical arguments. The epistemic closure principle typically takes the form of a modus ponens argument: This epistemic closure principle is central to many versions of skeptical arguments. A skeptical argument of this type will involve knowledge of some piece of widely accepted information to be knowledge, which will then be pointed out to entail knowledge of some skeptical scenario, such as the brain in a vat scenario or the Cartesian evil demon scenario. A skeptic might say, for example, that if you know that you have hands, then you know that you are not a handless brain in a vat (because knowledge that you have hands implies that you know you are not handless, and if you know that you are not handless, then you know that you are not a handless brain in a vat). The skeptic will then utilize this conditional to form a modus tollens argument. For example, the skeptic might make an argument like the following: Much of the epistemological discussion surrounding this type of skeptical argument involves whether to accept or deny the conclusion, and how to do each. Ernest Sosa says that there are three possibilities in responding to the skeptic: Justificatory closure. In the seminal 1963 paper, “Is Justified True Belief Knowledge?”, Edmund Gettier gave an assumption (later called the “principle of deducibility for justification” by Irving Thalberg, Jr.) that would serve as a basis for the rest of his piece: “for any proposition P, if S is justified in believing P and P entails Q, and S deduces Q from P and accepts Q as a result of this deduction, then S is justified in believing Q.” This was seized upon by Thalberg, who rejected the principle in order to demonstrate that one of Gettier's examples fails to support Gettier's main thesis that justified true belief is not knowledge (in the following quotation, (1) refers to “Jones will get the job”, (2) refers to “Jones has ten coins”, and (3) is the logical conjunction of (1) and (2)): Why doesn't Gettier's principle (PDJ) hold in the evidential situation he has described? 
You multiply your risks of being wrong when you believe a conjunction. [… T]he most elementary theory of probability indicates that Smith's prospects of being right on both (1) and (2), namely, of being right on (3), are bound to be less favorable than his prospects of being right on either (1) or (2). In fact, Smith's chances of being right on (3) might not come up to the minimum standard of justification which (1) and (2) barely satisfy, and Smith would be unjustified in accepting (3). Epistemic closure in U.S. political discussion. The term "epistemic closure" has been used in an unrelated sense in American political debate to refer to the claim that political belief systems can be closed systems of deduction, unaffected by empirical evidence. This use of the term was popularized by libertarian blogger and commentator Julian Sanchez in 2010 as an extreme form of confirmation bias. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "S" }, { "math_id": 1, "text": "p" }, { "math_id": 2, "text": "q" } ]
https://en.wikipedia.org/wiki?curid=11073025
1107334
Notch signaling pathway
Series of molecular signals The Notch signaling pathway is a highly conserved cell signaling system present in most animals. Mammals possess four different notch receptors, referred to as NOTCH1, NOTCH2, NOTCH3, and NOTCH4. The notch receptor is a single-pass transmembrane receptor protein. It is a hetero-oligomer composed of a large extracellular portion, which associates in a calcium-dependent, non-covalent interaction with a smaller piece of the notch protein composed of a short extracellular region, a single transmembrane-pass, and a small intracellular region. Notch signaling promotes proliferative signaling during neurogenesis, and its activity is inhibited by Numb to promote neural differentiation. It plays a major role in the regulation of embryonic development. Notch signaling is dysregulated in many cancers, and faulty notch signaling is implicated in many diseases, including T-cell acute lymphoblastic leukemia (T-ALL), cerebral autosomal-dominant arteriopathy with sub-cortical infarcts and leukoencephalopathy (CADASIL), multiple sclerosis, Tetralogy of Fallot, and Alagille syndrome. Inhibition of notch signaling inhibits the proliferation of T-cell acute lymphoblastic leukemia in both cultured cells and a mouse model. Discovery. In 1914, John S. Dexter noticed the appearance of a notch in the wings of the fruit fly "Drosophila melanogaster". The alleles of the gene were identified in 1917 by American evolutionary biologist Thomas Hunt Morgan. Its molecular analysis and sequencing was independently undertaken in the 1980s by Spyros Artavanis-Tsakonas and Michael W. Young. Alleles of the two "C. elegans" "Notch" genes were identified based on developmental phenotypes: "lin-12" and "glp-1". The cloning and partial sequence of "lin-12" was reported at the same time as "Drosophila" "Notch" by Iva Greenwald. Mechanism. The Notch protein spans the cell membrane, with part of it inside and part outside. Ligand proteins binding to the extracellular domain induce proteolytic cleavage and release of the intracellular domain, which enters the cell nucleus to modify gene expression. The cleavage model was first proposed in 1993 based on work done with "Drosophila" "Notch" and "C. elegans" "lin-12", informed by the first oncogenic mutation affecting a human "Notch" gene. Compelling evidence for this model was provided in 1998 by in vivo analysis in "Drosophila" by Gary Struhl and in cell culture by Raphael Kopan. Although this model was initially disputed, the evidence in favor of the model was irrefutable by 2001. The receptor is normally triggered via direct cell-to-cell contact, in which the transmembrane proteins of the cells in direct contact form the ligands that bind the notch receptor. The Notch binding allows groups of cells to organize themselves such that, if one cell expresses a given trait, this may be switched off in neighbouring cells by the intercellular notch signal. In this way, groups of cells influence one another to make large structures. Thus, lateral inhibition mechanisms are key to Notch signaling. "lin-12" and "Notch" mediate binary cell fate decisions, and lateral inhibition involves feedback mechanisms to amplify initial differences. The Notch cascade consists of Notch and Notch ligands, as well as intracellular proteins transmitting the notch signal to the cell's nucleus. The Notch/Lin-12/Glp-1 receptor family was found to be involved in the specification of cell fates during development in "Drosophila" and "C. elegans". 
The intracellular domain of Notch forms a complex with CBF1 and Mastermind to activate transcription of target genes. The structure of the complex has been determined. Pathway. Maturation of the notch receptor involves cleavage at the prospective extracellular side during intracellular trafficking in the Golgi complex. This results in a bipartite protein, composed of a large extracellular domain linked to the smaller transmembrane and intracellular domain. Binding of ligand promotes two proteolytic processing events; as a result of proteolysis, the intracellular domain is liberated and can enter the nucleus to engage other DNA-binding proteins and regulate gene expression. Notch and most of its ligands are transmembrane proteins, so the cells expressing the ligands typically must be adjacent to the notch expressing cell for signaling to occur. The notch ligands are also single-pass transmembrane proteins and are members of the DSL (Delta/Serrate/LAG-2) family of proteins. In "Drosophila melanogaster" (the fruit fly), there are two ligands named Delta and Serrate. In mammals, the corresponding names are Delta-like and Jagged. In mammals there are multiple Delta-like and Jagged ligands, as well as possibly a variety of other ligands, such as F3/contactin. In the nematode "C. elegans", two genes encode homologous proteins, "glp-1" and "lin-12". There has been at least one report that suggests that some cells can send out processes that allow signaling to occur between cells that are as much as four or five cell diameters apart. The notch extracellular domain is composed primarily of small cystine-rich motifs called EGF-like repeats. Notch 1, for example, has 36 of these repeats. Each EGF-like repeat is composed of approximately 40 amino acids, and its structure is defined largely by six conserved cysteine residues that form three conserved disulfide bonds. Each EGF-like repeat can be modified by "O"-linked glycans at specific sites. An "O"-glucose sugar may be added between the first and second conserved cysteines, and an "O"-fucose may be added between the second and third conserved cysteines. These sugars are added by an as-yet-unidentified "O"-glucosyltransferase (except for Rumi), and GDP-fucose Protein "O"-fucosyltransferase 1 (POFUT1), respectively. The addition of "O"-fucose by POFUT1 is absolutely necessary for notch function, and, without the enzyme to add "O"-fucose, all notch proteins fail to function properly. As yet, the manner by which the glycosylation of notch affects function is not completely understood. The "O"-glucose on notch can be further elongated to a trisaccharide with the addition of two xylose sugars by xylosyltransferases, and the "O"-fucose can be elongated to a tetrasaccharide by the ordered addition of an N-acetylglucosamine (GlcNAc) sugar by an N-Acetylglucosaminyltransferase called Fringe, the addition of a galactose by a galactosyltransferase, and the addition of a sialic acid by a sialyltransferase. To add another level of complexity, in mammals there are three Fringe GlcNAc-transferases, named lunatic fringe, manic fringe, and radical fringe. These enzymes are responsible for something called a "fringe effect" on notch signaling. If Fringe adds a GlcNAc to the "O"-fucose sugar then the subsequent addition of a galactose and sialic acid will occur. In the presence of this tetrasaccharide, notch signals strongly when it interacts with the Delta ligand, but has markedly inhibited signaling when interacting with the Jagged ligand. 
The means by which this addition of sugar inhibits signaling through one ligand, and potentiates signaling through another is not clearly understood. Once the notch extracellular domain interacts with a ligand, an ADAM-family metalloprotease called ADAM10, cleaves the notch protein just outside the membrane. This releases the extracellular portion of notch (NECD), which continues to interact with the ligand. The ligand plus the notch extracellular domain is then endocytosed by the ligand-expressing cell. There may be signaling effects in the ligand-expressing cell after endocytosis; this part of notch signaling is a topic of active research. After this first cleavage, an enzyme called γ-secretase (which is implicated in Alzheimer's disease) cleaves the remaining part of the notch protein just inside the inner leaflet of the cell membrane of the notch-expressing cell. This releases the intracellular domain of the notch protein (NICD), which then moves to the nucleus, where it can regulate gene expression by activating the transcription factor CSL. It was originally thought that these CSL proteins suppressed Notch target transcription. However, further research showed that, when the intracellular domain binds to the complex, it switches from a repressor to an activator of transcription. Other proteins also participate in the intracellular portion of the notch signaling cascade. Ligand interactions. Notch signaling is initiated when Notch receptors on the cell surface engage ligands presented "in trans" on opposing cells"." Despite the expansive size of the Notch extracellular domain, it has been demonstrated that EGF domains 11 and 12 are the critical determinants for interactions with Delta. Additional studies have implicated regions outside of Notch EGF11-12 in ligand binding. For example, Notch EGF domain 8 plays a role in selective recognition of Serrate/Jagged and EGF domains 6-15 are required for maximal signaling upon ligand stimulation. A crystal structure of the interacting regions of Notch1 and Delta-like 4 (Dll4) provided a molecular-level visualization of Notch-ligand interactions, and revealed that the N-terminal MNNL (or C2) and DSL domains of ligands bind to Notch EGF domains 12 and 11, respectively. The Notch1-Dll4 structure also illuminated a direct role for Notch O-linked fucose and glucose moieties in ligand recognition, and rationalized a structural mechanism for the glycan-mediated tuning of Notch signaling. Synthetic Notch signaling. It is possible to engineer synthetic Notch receptors by replacing the extracellular receptor and intracellular transcriptional domains with other domains of choice. This allows researchers to select which ligands are detected, and which genes are upregulated in response. Using this technology, cells can report or change their behavior in response to contact with user-specified signals, facilitating new avenues of both basic and applied research into cell-cell signaling. Notably, this system allows multiple synthetic pathways to be engineered into a cell in parallel. Function. The Notch signaling pathway is important for cell-cell communication, which involves gene regulation mechanisms that control multiple cell differentiation processes during embryonic and adult life. 
Notch signaling also has a role in the following processes: * neuronal function and development * stabilization of arterial endothelial fate and angiogenesis * regulation of crucial cell communication events between endocardium and myocardium during both the formation of the valve primordial and ventricular development and differentiation * cardiac valve homeostasis, as well as implications in other human disorders involving the cardiovascular system * timely cell lineage specification of both endocrine and exocrine pancreas * influencing of binary fate decisions of cells that must choose between the secretory and absorptive lineages in the gut * expansion of the hematopoietic stem cell compartment during bone development and participation in commitment to the osteoblastic lineage, suggesting a potential therapeutic role for notch in bone regeneration and osteoporosis * expansion of the hemogenic endothelial cells along with signaling axis involving Hedgehog signaling and Scl *T cell lineage commitment from common lymphoid precursor * regulation of cell-fate decision in mammary glands at several distinct development stages * possibly some non-nuclear mechanisms, such as control of the actin cytoskeleton through the tyrosine kinase Abl * Regulation of the mitotic/meiotic decision in the "C. elegans" germline * development of alveoli in the lung. It has also been found that Rex1 has inhibitory effects on the expression of notch in mesenchymal stem cells, preventing differentiation. Role in embryogenesis. The Notch signaling pathway plays an important role in cell-cell communication, and further regulates embryonic development. Embryo polarity. Notch signaling is required in the regulation of polarity. For example, mutation experiments have shown that loss of Notch signaling causes abnormal anterior-posterior polarity in somites. Also, Notch signaling is required during left-right asymmetry determination in vertebrates. Early studies in the nematode model organism "C. elegans" indicate that Notch signaling has a major role in the induction of mesoderm and cell fate determination. As mentioned previously, C. elegans has two genes that encode for partially functionally redundant Notch homologs, "glp-1" and "lin-12". During C. elegans, GLP-1, the C. elegans Notch homolog, interacts with APX-1, the C. elegans Delta homolog. This signaling between particular blastomeres induces differentiation of cell fates and establishes the dorsal-ventral axis. Role in somitogenesis. Notch signaling is central to somitogenesis. In 1995, Notch1 was shown to be important for coordinating the segmentation of somites in mice. Further studies identified the role of Notch signaling in the segmentation clock. These studies hypothesized that the primary function of Notch signaling does not act on an individual cell, but coordinates cell clocks and keep them synchronized. This hypothesis explained the role of Notch signaling in the development of segmentation and has been supported by experiments in mice and zebrafish. Experiments with Delta1 mutant mice that show abnormal somitogenesis with loss of anterior/posterior polarity suggest that Notch signaling is also necessary for the maintenance of somite borders. During somitogenesis, a molecular oscillator in paraxial mesoderm cells dictates the precise rate of somite formation. A clock and wavefront model has been proposed in order to spatially determine the location and boundaries between somites. 
This process is highly regulated as somites must have the correct size and spacing in order to avoid malformations within the axial skeleton that may potentially lead to spondylocostal dysostosis. Several key components of the Notch signaling pathway help coordinate key steps in this process. In mice, mutations in Notch1, Dll1 or Dll3, Lfng, or Hes7 result in abnormal somite formation. Similarly, in humans, the following mutations have been seen to lead to development of spondylocostal dysostosis: DLL3, LFNG, or HES7. Role in epidermal differentiation. Notch signaling is known to occur inside ciliated, differentiating cells found in the first epidermal layers during early skin development. Furthermore, it has found that presenilin-2 works in conjunction with ARF4 to regulate Notch signaling during this development. However, it remains to be determined whether gamma-secretase has a direct or indirect role in modulating Notch signaling. Role in central nervous system development and function. Early findings on Notch signaling in central nervous system (CNS) development were performed mainly in "Drosophila" with mutagenesis experiments. For example, the finding that an embryonic lethal phenotype in "Drosophila" was associated with Notch dysfunction indicated that Notch mutations can lead to the failure of neural and Epidermal cell segregation in early "Drosophila" embryos. In the past decade, advances in mutation and knockout techniques allowed research on the Notch signaling pathway in mammalian models, especially rodents. The Notch signaling pathway was found to be critical mainly for neural progenitor cell (NPC) maintenance and self-renewal. In recent years, other functions of the Notch pathway have also been found, including glial cell specification, neurites development, as well as learning and memory. Neuron cell differentiation. The Notch pathway is essential for maintaining NPCs in the developing brain. Activation of the pathway is sufficient to maintain NPCs in a proliferating state, whereas loss-of-function mutations in the critical components of the pathway cause precocious neuronal differentiation and NPC depletion. Modulators of the Notch signal, e.g., the Numb protein are able to antagonize Notch effects, resulting in the halting of cell cycle and the differentiation of NPCs. Conversely, the fibroblast growth factor pathway promotes Notch signaling to keep stem cells of the cerebral cortex in the proliferative state, amounting to a mechanism regulating cortical surface area growth and, potentially, gyrification. In this way, Notch signaling controls NPC self-renewal as well as cell fate specification. A non-canonical branch of the Notch signaling pathway that involves the phosphorylation of STAT3 on the serine residue at amino acid position 727 and subsequent Hes3 expression increase (STAT3-Ser/Hes3 Signaling Axis) has been shown to regulate the number of NPCs in culture and in the adult rodent brain. In adult rodents and in cell culture, Notch3 promotes neuronal differentiation, having a role opposite to Notch1/2. This indicates that individual Notch receptors can have divergent functions, depending on cellular context. Neurite development. "In vitro" studies show that Notch can influence neurite development. "In vivo", deletion of the Notch signaling modulator, Numb, disrupts neuronal maturation in the developing cerebellum, whereas deletion of Numb disrupts axonal arborization in sensory ganglia. 
Although the mechanism underlying this phenomenon is not clear, together these findings suggest Notch signaling might be crucial in neuronal maturation. Gliogenesis. In gliogenesis, Notch appears to have an instructive role that can directly promote the differentiation of many glial cell subtypes. For example, activation of Notch signaling in the retina favors the generation of Muller glia cells at the expense of neurons, whereas reduced Notch signaling induces production of ganglion cells, causing a reduction in the number of Muller glia. Adult brain function. Apart from its role in development, evidence shows that Notch signaling is also involved in neuronal apoptosis, neurite retraction, and neurodegeneration of ischemic stroke in the brain In addition to developmental functions, Notch proteins and ligands are expressed in cells of the adult nervous system, suggesting a role in CNS plasticity throughout life. Adult mice heterozygous for mutations in either Notch1 or Cbf1 have deficits in spatial learning and memory. Similar results are seen in experiments with presenilins1 and 2, which mediate the Notch intramembranous cleavage. To be specific, conditional deletion of presenilins at 3 weeks after birth in excitatory neurons causes learning and memory deficits, neuronal dysfunction, and gradual neurodegeneration. Several gamma secretase inhibitors that underwent human clinical trials in Alzheimer's disease and MCI patients resulted in statistically significant worsening of cognition relative to controls, which is thought to be due to its incidental effect on Notch signalling. Role in cardiovascular development. The Notch signaling pathway is a critical component of cardiovascular formation and morphogenesis in both development and disease. It is required for the selection of endothelial tip and stalk cells during sprouting angiogenesis. Cardiac development. Notch signal pathway plays a crucial role in at least three cardiac development processes: Atrioventricular canal development, myocardial development, and cardiac outflow tract (OFT) development. Ventricular development. Some studies in Xenopus and in mouse embryonic stem cells indicate that cardiomyogenic commitment and differentiation require Notch signaling inhibition. Active Notch signaling is required in the ventricular endocardium for proper trabeculae development subsequent to myocardial specification by regulating BMP10, NRG1, and EphrinB2 expression. Notch signaling sustains immature cardiomyocyte proliferation in mammals and zebrafish. A regulatory correspondence likely exists between Notch signaling and Wnt signaling, whereby upregulated Wnt expression downregulates Notch signaling, and a subsequent inhibition of ventricular cardiomyocyte proliferation results. This proliferative arrest can be rescued using Wnt inhibitors. The downstream effector of Notch signaling, HEY2, was also demonstrated to be important in regulating ventricular development by its expression in the interventricular septum and the endocardial cells of the cardiac cushions. Cardiomyocyte and smooth muscle cell-specific deletion of HEY2 results in impaired cardiac contractility, malformed right ventricle, and ventricular septal defects. Ventricular outflow tract development. During development of the aortic arch and the aortic arch arteries, the Notch receptors, ligands, and target genes display a unique expression pattern. 
When the Notch pathway was blocked, the induction of vascular smooth muscle cell marker expression failed to occur, suggesting that Notch is involved in the differentiation of cardiac neural crest cells into vascular cells during outflow tract development. Angiogenesis. Endothelial cells use the Notch signaling pathway to coordinate cellular behaviors during the blood vessel sprouting that occurs sprouting angiogenesis. Activation of Notch takes place primarily in "connector" cells and cells that line patent stable blood vessels through direct interaction with the Notch ligand, Delta-like ligand 4 (Dll4), which is expressed in the endothelial tip cells. VEGF signaling, which is an important factor for migration and proliferation of endothelial cells, can be downregulated in cells with activated Notch signaling by lowering the levels of Vegf receptor transcript. Zebrafish embryos lacking Notch signaling exhibit ectopic and persistent expression of the zebrafish ortholog of VEGF3, flt4, within all endothelial cells, while Notch activation completely represses its expression. Notch signaling may be used to control the sprouting pattern of blood vessels during angiogenesis. When cells within a patent vessel are exposed to VEGF signaling, only a restricted number of them initiate the angiogenic process. Vegf is able to induce DLL4 expression. In turn, DLL4 expressing cells down-regulate Vegf receptors in neighboring cells through activation of Notch, thereby preventing their migration into the developing sprout. Likewise, during the sprouting process itself, the migratory behavior of connector cells must be limited to retain a patent connection to the original blood vessel. Role in endocrine development. During development, definitive endoderm and ectoderm differentiates into several gastrointestinal epithelial lineages, including endocrine cells. Many studies have indicated that Notch signaling has a major role in endocrine development. Pancreatic development. The formation of the pancreas from endoderm begins in early development. The expression of elements of the Notch signaling pathway have been found in the developing pancreas, suggesting that Notch signaling is important in pancreatic development. Evidence suggests Notch signaling regulates the progressive recruitment of endocrine cell types from a common precursor, acting through two possible mechanisms. One is the "lateral inhibition", which specifies some cells for a primary fate but others for a secondary fate among cells that have the potential to adopt the same fate. Lateral inhibition is required for many types of cell fate determination. Here, it could explain the dispersed distribution of endocrine cells within pancreatic epithelium. A second mechanism is "suppressive maintenance", which explains the role of Notch signaling in pancreas differentiation. Fibroblast growth factor10 is thought to be important in this activity, but the details are unclear. Intestinal development. The role of Notch signaling in the regulation of gut development has been indicated in several reports. Mutations in elements of the Notch signaling pathway affect the earliest intestinal cell fate decisions during zebrafish development. Transcriptional analysis and gain of function experiments revealed that Notch signaling targets Hes1 in the intestine and regulates a binary cell fate decision between adsorptive and secretory cell fates. Bone development. 
Early "in vitro" studies have found the Notch signaling pathway functions as down-regulator in osteoclastogenesis and osteoblastogenesis. Notch1 is expressed in the mesenchymal condensation area and subsequently in the hypertrophic chondrocytes during chondrogenesis. Overexpression of Notch signaling inhibits bone morphogenetic protein2-induced osteoblast differentiation. Overall, Notch signaling has a major role in the commitment of mesenchymal cells to the osteoblastic lineage and provides a possible therapeutic approach to bone regeneration. Role in cancer. Leukemia. Aberrant Notch signaling is a driver of T cell acute lymphoblastic leukemia (T-ALL) and is mutated in at least 65% of all T-ALL cases. Notch signaling can be activated by mutations in Notch itself, inactivating mutations in FBXW7 (a negative regulator of Notch1), or rarely by t(7;9)(q34;q34.3) translocation. In the context of T-ALL, Notch activity cooperates with additional oncogenic lesions such as c-MYC to activate anabolic pathways such as ribosome and protein biosynthesis thereby promoting leukemia cell growth. Urothelial bladder cancer. Loss of Notch activity is a driving event in urothelial cancer. A study identified inactivating mutations in components of the Notch pathway in over 40% of examined human bladder carcinomas. In mouse models, genetic inactivation of Notch signaling results in Erk1/2 phosphorylation leading to tumorigenesis in the urinary tract. As not all NOTCH receptors are equally involved in the urothelial bladder cancer, 90% of samples in one study had some level of NOTCH3 expression, suggesting that NOTCH3 plays an important role in urothelial bladder cancer. A higher level of NOTCH3 expression was observed in high-grade tumors, and a higher level of positivity was associated with a higher mortality risk. NOTCH3 was identified as an independent predictor of poor outcome. Therefore, it is suggested that NOTCH3 could be used as a marker for urothelial bladder cancer-specific mortality risk. It was also shown that NOTCH3 expression could be a prognostic immunohistochemical marker for clinical follow-up of urothelial bladder cancer patients, contributing to a more individualized approach by selecting patients to undergo control cystoscopy after a shorter time interval. Liver cancer. In hepatocellular carcinoma, for instance, it was suggesting that AXIN1 mutations would provoke Notch signaling pathway activation, fostering the cancer development, but a recent study demonstrated that such an effect cannot be detected. Thus the exact role of Notch signaling in the cancer process awaits further elucidation. Notch inhibitors. The involvement of Notch signaling in many cancers has led to investigation of notch inhibitors (especially gamma-secretase inhibitors) as cancer treatments which are in different phases of clinical trials. As of 2013[ [update]] at least 7 notch inhibitors were in clinical trials. MK-0752 has given promising results in an early clinical trial for breast cancer. Preclinical studies showed beneficial effects of gamma-secretase inhibitors in endometriosis, a disease characterised by increased expression of notch pathway constituents. Several notch inhibitors, including the gamma-secretase inhibitor LY3056480, are being studied for their potential ability to regenerate hair cells in the cochlea, which could lead to treatments for hearing loss and tinnitus. Mathematical modeling. 
Mathematical modeling in Notch-Delta signaling has become a pivotal tool in understanding pattern formation driven by cell-cell interactions, particularly in the context of lateral-inhibition mechanisms. The Collier model, a cornerstone in this field, employs a system of coupled ordinary differential equations to describe the feedback loop between adjacent cells. The model is defined by the equations:formula_0 where formula_1 and formula_2 represent the levels of Notch and Delta activity in cell formula_3, respectively. Functions formula_4 and formula_5 are typically Hill functions, reflecting the regulatory dynamics of the signaling process. The term formula_6 denotes the average level of Delta activity in the cells adjacent to cell formula_3, integrating juxtacrine signaling effects. Recent extensions of this model incorporate long-range signaling, acknowledging the role of cell protrusions like filopodia (cytonemes) that reach non-neighboring cells. One extended model, often referred to as the formula_7-Collier model, introduces a weighting parameter formula_8 to balance juxtacrine and long-range signaling. The interaction term formula_6 is modified to include these protrusions, creating a more complex, non-local signaling network. This model is instrumental in exploring pattern formation robustness and biological pattern refinement, considering the stochastic nature of filopodia dynamics and intrinsic noise. The application of mathematical modeling in Notch-Delta signaling has been particularly illuminating in understanding the patterning of sensory organ precursors (SOPs) in the Drosophila's notum and wing margin. The mathematical modeling of Notch-Delta signaling thus provides significant insights into lateral inhibition mechanisms and pattern formation in biological systems. It enhances the understanding of cell-cell interaction variations leading to diverse tissue structures, contributing to developmental biology and offering potential therapeutic pathways in diseases related to Notch-Delta dysregulation. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
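A minimal numerical sketch of the Collier equations above on a ring of cells, using forward Euler integration; the Hill-function forms follow the text, while the parameter values, cell count and initial conditions are assumptions chosen only for illustration:

```python
import numpy as np

def collier_lateral_inhibition(n_cells=12, steps=4000, dt=0.01,
                               v=1.0, a=0.01, b=100.0, k=2, h=2, seed=0):
    """Euler integration of the Collier Notch-Delta model on a ring of cells.

    dn_i/dt = f(<d_i>) - n_i,   dd_i/dt = v * (g(n_i) - d_i),
    with Hill functions f(x) = x**k / (a + x**k) and g(x) = 1 / (1 + b*x**h);
    <d_i> is the mean Delta level of the two ring neighbours."""
    rng = np.random.default_rng(seed)
    n = 0.5 + 0.01 * rng.standard_normal(n_cells)   # small random perturbation
    d = 0.5 + 0.01 * rng.standard_normal(n_cells)

    f = lambda x: x**k / (a + x**k)
    g = lambda x: 1.0 / (1.0 + b * x**h)

    for _ in range(steps):
        d_neigh = 0.5 * (np.roll(d, 1) + np.roll(d, -1))  # <d_i> on a ring
        n = n + dt * (f(d_neigh) - n)
        d = d + dt * v * (g(n) - d)
    return n, d

n, d = collier_lateral_inhibition()
# Lateral inhibition should drive high-Delta cells to alternate with
# low-Delta neighbours around the ring.
print(np.round(d, 2))
```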
[ { "math_id": 0, "text": "\\begin{align}\n\\frac{d}{dt}n_i &= f(\\langle d_i\\rangle) - n_i \\\\\n\\frac{d}{dt}d_i &= \\nu(g(n_i) - d_i)\n\\end{align}\n" }, { "math_id": 1, "text": "n_i" }, { "math_id": 2, "text": "d_i" }, { "math_id": 3, "text": "i" }, { "math_id": 4, "text": "f" }, { "math_id": 5, "text": "g" }, { "math_id": 6, "text": "\\langle d_i\\rangle" }, { "math_id": 7, "text": "\\epsilon" }, { "math_id": 8, "text": "\\epsilon \\in [0,1]" } ]
https://en.wikipedia.org/wiki?curid=1107334
1107567
Birch–Murnaghan equation of state
The Birch–Murnaghan isothermal equation of state, published in 1947 by Albert Francis Birch of Harvard, is a relationship between the volume of a body and the pressure to which it is subjected. Birch proposed this equation based on the work of Francis Dominic Murnaghan of Johns Hopkins University published in 1944, so that the equation is named in honor of both scientists. Expressions for the equation of state. The third-order Birch–Murnaghan isothermal equation of state is given by formula_0 where "P" is the pressure, "V"0 is the reference volume, "V" is the deformed volume, "B"0 is the bulk modulus, and "B"0' is the derivative of the bulk modulus with respect to pressure. The bulk modulus and its derivative are usually obtained from fits to experimental data and are defined as formula_1 and formula_2 The expression for the equation of state is obtained by expanding the Helmholtz free energy in powers of the finite strain parameter "f", defined as formula_3 in the form of a series. This is more evident by writing the equation in terms of "f". Expanded to third order in finite strain, the equation reads, formula_4 with formula_5. The internal energy, "E"("V"), is found by integration of the pressure: formula_6 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
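A short sketch evaluating the third-order Birch–Murnaghan pressure formula_0 above; the numerical values are illustrative placeholders, not data from the article:

```python
def birch_murnaghan_pressure(V, V0, B0, B0_prime):
    """Third-order Birch-Murnaghan pressure P(V); units follow those of B0."""
    x = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (x**7 - x**5) * (1.0 + 0.75 * (B0_prime - 4.0) * (x**2 - 1.0))

# Illustrative values loosely typical of a stiff solid (GPa and cubic angstroms).
V0, B0, B0p = 40.0, 160.0, 4.0
for V in (40.0, 38.0, 36.0):
    print(V, round(birch_murnaghan_pressure(V, V0, B0, B0p), 2))  # P = 0 at V = V0
```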
[ { "math_id": 0, "text": "\nP(V)=\\frac{3B_0}{2}\n\\left[\\left(\\frac{V_0}{V}\\right)^{7/3} - \n\\left(\\frac{V_0}{V}\\right)^{5/3}\\right]\n\\left\\{1+\\frac{3}{4}\\left(B_0^\\prime-4\\right)\n\\left[\\left(\\frac{V_0}{V}\\right)^{2/3} - 1\\right]\\right\\}.\n" }, { "math_id": 1, "text": " B_0 = -V \\left(\\frac{\\partial P}{\\partial V}\\right)_{P = 0}" }, { "math_id": 2, "text": "B_0' = \\left(\\frac{\\partial B}{\\partial P}\\right)_{P = 0}" }, { "math_id": 3, "text": "f = \\frac{1}{2}\\left[\\left(\\frac{V_0}{V}\\right)^{{2/3}} - 1\\right] \\,," }, { "math_id": 4, "text": "\nP(f) = 3 B_0 f (1 + 2 f)^{5/2} ( 1 + a f + \\mathit{higher~order ~terms})\\,,\n" }, { "math_id": 5, "text": " a = \\frac{3}{2}(B_0' - 4) " }, { "math_id": 6, "text": "\nE(V) = E_0 + \\frac{9V_0B_0}{16}\n\\left\\{\n\\left[\\left(\\frac{V_0}{V}\\right)^{2/3} - 1\\right]^3 B_0^\\prime + \n\\left[\\left(\\frac{V_0}{V}\\right)^{2/3} - 1\\right]^2\n\\left[6-4\\left(\\frac{V_0}{V}\\right)^{2/3}\\right]\\right\\}.\n" } ]
https://en.wikipedia.org/wiki?curid=1107567
1107596
Linear approximation
Approximation of a function by its tangent line at a point In mathematics, a linear approximation is an approximation of a general function using a linear function (more precisely, an affine function). They are widely used in the method of finite differences to produce first-order methods for solving or approximating solutions to equations. Definition. Given a twice continuously differentiable function formula_0 of one real variable, Taylor's theorem for the case formula_1 states that formula_2 where formula_3 is the remainder term. The linear approximation is obtained by dropping the remainder: formula_4 This is a good approximation when formula_5 is close enough to formula_6, since a curve, when closely observed, begins to resemble a straight line. Therefore, the expression on the right-hand side is just the equation for the tangent line to the graph of formula_0 at formula_7. For this reason, this process is also called the tangent line approximation. Linear approximations in this case are further improved when the second derivative at formula_6, formula_8, is sufficiently small (close to zero), i.e., at or near an inflection point. If formula_0 is concave down in the interval between formula_5 and formula_6, the approximation will be an overestimate (since the derivative is decreasing in that interval). If formula_0 is concave up, the approximation will be an underestimate. Linear approximations for vector functions of a vector variable are obtained in the same way, with the derivative at a point replaced by the Jacobian matrix. For example, given a differentiable function formula_9 with real values, one can approximate formula_9 for formula_10 close to formula_11 by the formula formula_12 The right-hand side is the equation of the plane tangent to the graph of formula_13 at formula_14 In the more general case of Banach spaces, one has formula_15 where formula_16 is the Fréchet derivative of formula_0 at formula_6. Applications. Optics. "Gaussian optics" is a technique in geometrical optics that describes the behaviour of light rays in optical systems by using the paraxial approximation, in which only rays which make small angles with the optical axis of the system are considered. In this approximation, trigonometric functions can be expressed as linear functions of the angles. Gaussian optics applies to systems in which all the optical surfaces are either flat or are portions of a sphere. In this case, simple explicit formulae can be given for parameters of an imaging system such as focal distance, magnification and brightness, in terms of the geometrical shapes and material properties of the constituent elements. Period of oscillation. The period of swing of a simple gravity pendulum depends on its length, the local strength of gravity, and to a small extent on the maximum angle that the pendulum swings away from vertical, "θ"0, called the amplitude. It is independent of the mass of the bob. The true period "T" of a simple pendulum, the time taken for a complete cycle of an ideal simple gravity pendulum, can be written in several different forms (see pendulum), one example being the infinite series: formula_17 where L is the length of the pendulum and g is the local acceleration of gravity. However, if one takes the linear approximation (i.e. if the amplitude is limited to small swings, "θ"0 ≪ 1 radian), the period is approximately "T" ≈ 2π√("L"/"g"). In the linear approximation, the period of swing is approximately the same for different size swings: that is, "the period is independent of amplitude". 
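A small sketch comparing the truncated series above with the linear (small-angle) approximation of the pendulum period; the pendulum length and gravity value are illustrative:

```python
import math

def pendulum_period_series(L, g, theta0):
    """Period from the first three terms of the series quoted above."""
    correction = 1.0 + theta0**2 / 16.0 + 11.0 * theta0**4 / 3072.0
    return 2 * math.pi * math.sqrt(L / g) * correction

def pendulum_period_linear(L, g):
    """Linear (small-angle) approximation: T ~ 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(L / g)

# Illustrative values: a 1 m pendulum under standard gravity.
L, g = 1.0, 9.81
for theta0 in (0.1, 0.5, 1.0):                 # amplitude in radians
    print(theta0,
          round(pendulum_period_series(L, g, theta0), 4),
          round(pendulum_period_linear(L, g), 4))
# The two values agree closely for small amplitudes and drift apart
# as theta0 grows, illustrating where the linear approximation holds.
```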
This property, called isochronism, is the reason pendulums are so useful for timekeeping. Successive swings of the pendulum, even if changing in amplitude, take the same amount of time. Electrical resistivity. The electrical resistivity of most materials changes with temperature. If the temperature "T" does not vary too much, a linear approximation is typically used: formula_18 where formula_19 is called the "temperature coefficient of resistivity", formula_20 is a fixed reference temperature (usually room temperature), and formula_21 is the resistivity at temperature formula_20. The parameter formula_19 is an empirical parameter fitted from measurement data. Because the linear approximation is only an approximation, formula_19 is different for different reference temperatures. For this reason it is usual to specify the temperature that formula_19 was measured at with a suffix, such as formula_22, and the relationship only holds in a range of temperatures around the reference. When the temperature varies over a large temperature range, the linear approximation is inadequate and a more detailed analysis and understanding should be used. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
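A one-line check of the linear resistivity model above; the copper figures are commonly quoted values, used here purely as an illustration rather than taken from the article:

```python
def resistivity_linear(T, rho0, alpha, T0=20.0):
    """Linear approximation rho(T) = rho0 * (1 + alpha * (T - T0))."""
    return rho0 * (1.0 + alpha * (T - T0))

# Copper near room temperature: rho0 ~ 1.68e-8 ohm*m, alpha ~ 0.0039 per deg C.
for T in (0.0, 20.0, 100.0):
    print(T, resistivity_linear(T, rho0=1.68e-8, alpha=0.0039))
```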
[ { "math_id": 0, "text": "f" }, { "math_id": 1, "text": "n = 1 " }, { "math_id": 2, "text": " f(x) = f(a) + f'(a)(x - a) + R_2 " }, { "math_id": 3, "text": "R_2" }, { "math_id": 4, "text": " f(x) \\approx f(a) + f'(a)(x - a)." }, { "math_id": 5, "text": "x" }, { "math_id": 6, "text": "a" }, { "math_id": 7, "text": "(a,f(a))" }, { "math_id": 8, "text": " f''(a) " }, { "math_id": 9, "text": "f(x, y)" }, { "math_id": 10, "text": "(x, y)" }, { "math_id": 11, "text": "(a, b)" }, { "math_id": 12, "text": "f\\left(x,y\\right)\\approx f\\left(a,b\\right) + \\frac{\\partial f}{\\partial x} \\left(a,b\\right)\\left(x-a\\right) + \\frac{\\partial f}{\\partial y} \\left(a,b\\right)\\left(y-b\\right)." }, { "math_id": 13, "text": "z=f(x, y)" }, { "math_id": 14, "text": "(a, b)." }, { "math_id": 15, "text": " f(x) \\approx f(a) + Df(a)(x - a)" }, { "math_id": 16, "text": "Df(a)" }, { "math_id": 17, "text": "\nT = 2\\pi \\sqrt{L\\over g} \\left( 1+ \\frac{1}{16}\\theta_0^2 + \\frac{11}{3072}\\theta_0^4 + \\cdots \\right)\n" }, { "math_id": 18, "text": "\\rho(T) = \\rho_0[1+\\alpha (T - T_0)]" }, { "math_id": 19, "text": "\\alpha" }, { "math_id": 20, "text": "T_0" }, { "math_id": 21, "text": "\\rho_0" }, { "math_id": 22, "text": "\\alpha_{15}" } ]
https://en.wikipedia.org/wiki?curid=1107596
1107642
Jones polynomial
Mathematical invariant of a knot or link In the mathematical field of knot theory, the Jones polynomial is a knot polynomial discovered by Vaughan Jones in 1984. Specifically, it is an invariant of an oriented knot or link which assigns to each oriented knot or link a Laurent polynomial in the variable formula_0 with integer coefficients. Definition by the bracket. Suppose we have an oriented link formula_1, given as a knot diagram. We will define the Jones polynomial formula_2 by using Louis Kauffman's bracket polynomial, which we denote by formula_3. Here the bracket polynomial is a Laurent polynomial in the variable formula_4 with integer coefficients. First, we define the auxiliary polynomial (also known as the normalized bracket polynomial) formula_5 where formula_6 denotes the writhe of formula_1 in its given diagram. The writhe of a diagram is the number of positive crossings (formula_7 in the figure below) minus the number of negative crossings (formula_8). The writhe is not a knot invariant. formula_9 is a knot invariant since it is invariant under changes of the diagram of formula_1 by the three Reidemeister moves. Invariance under type II and III Reidemeister moves follows from invariance of the bracket under those moves. The bracket polynomial is known to change by a factor of formula_10 under a type I Reidemeister move. The definition of the formula_11 polynomial given above is designed to nullify this change, since the writhe changes appropriately by formula_12 or formula_13 under type I moves. Now make the substitution formula_14 in formula_9 to get the Jones polynomial formula_2. This results in a Laurent polynomial with integer coefficients in the variable formula_0. Jones polynomial for tangles. This construction of the Jones polynomial for tangles is a simple generalization of the Kauffman bracket of a link. The construction was developed by Vladimir Turaev and published in 1990. Let formula_15 be a non-negative integer and formula_16 denote the set of all isotopic types of tangle diagrams, with formula_17 ends, having no crossing points and no closed components (smoothings). Turaev's construction makes use of the previous construction for the Kauffman bracket and associates to each formula_17-end oriented tangle an element of the free formula_18-module formula_19, where formula_18 is the ring of Laurent polynomials with integer coefficients in the variable formula_0. Definition by braid representation. Jones' original formulation of his polynomial came from his study of operator algebras. In Jones' approach, it resulted from a kind of "trace" of a particular braid representation into an algebra which originally arose while studying certain models, e.g. the Potts model, in statistical mechanics. Let a link "L" be given. A theorem of Alexander states that it is the trace closure of a braid, say with "n" strands. Now define a representation formula_20 of the braid group on "n" strands, "Bn", into the Temperley–Lieb algebra formula_21 with coefficients in formula_22 and formula_23. The standard braid generator formula_24 is sent to formula_25, where formula_26 are the standard generators of the Temperley–Lieb algebra. It can be checked easily that this defines a representation. Take the braid word formula_27 obtained previously from formula_1 and compute formula_28 where formula_29 is the Markov trace. This gives formula_30, where formula_31 formula_32 is the bracket polynomial. 
This can be seen by considering, as Louis Kauffman did, the Temperley–Lieb algebra as a particular diagram algebra. An advantage of this approach is that one can pick similar representations into other algebras, such as the "R"-matrix representations, leading to "generalized Jones invariants". Properties. The Jones polynomial is characterized by taking the value 1 on any diagram of the unknot and satisfies the following skein relation: formula_33 where formula_7, formula_8, and formula_34 are three oriented link diagrams that are identical except in one small region where they differ by the crossing changes or smoothing shown in the figure below: The definition of the Jones polynomial by the bracket makes it simple to show that for a knot formula_35, the Jones polynomial of its mirror image is given by substitution of formula_36 for formula_37 in formula_38. Thus, an amphicheiral knot, a knot equivalent to its mirror image, has palindromic entries in its Jones polynomial. See the article on skein relation for an example of a computation using these relations. Another remarkable property of this invariant states that the Jones polynomial of an alternating link is an alternating polynomial. This property was proved by Morwen Thistlethwaite in 1987. Another proof of this last property is due to Hernando Burgos-Soto, who also gave an extension to tangles of the property. The Jones polynomial is not a complete invariant. There exist an infinite number of non-equivalent knots that have the same Jones polynomial. An example of two distinct knots having the same Jones polynomial can be found in the book by Murasugi. Colored Jones polynomial. For a positive integer formula_39, the formula_39-colored Jones polynomial formula_40 is a generalisation of the Jones polynomial. It is the Reshetikhin–Turaev invariant associated with the formula_41-irreducible representation of the quantum group formula_42. In this scheme, the Jones polynomial is the 1-colored Jones polynomial, the Reshetikhin-Turaev invariant associated to the standard representation (irreducible and two-dimensional) of formula_42. One thinks of the strands of a link as being "colored" by a representation, hence the name. More generally, given a link formula_1 of formula_15 components and representations formula_43 of formula_42, the formula_44-colored Jones polynomial formula_45 is the Reshetikhin–Turaev invariant associated to formula_43 (here we assume the components are ordered). Given two representations formula_46 and formula_47, colored Jones polynomials satisfy the following two properties: *formula_48, *formula_49, where formula_50 denotes the 2-cabling of formula_1. These properties are deduced from the fact that colored Jones polynomials are Reshetikhin-Turaev invariants. Let formula_35 be a knot. Recall that by viewing a diagram of formula_35 as an element of the Temperley-Lieb algebra thanks to the Kauffman bracket, one recovers the Jones polynomial of formula_35. Similarly, the formula_39-colored Jones polynomial of formula_35 can be given a combinatorial description using the Jones-Wenzl idempotents, as follows: *consider the formula_39-cabling formula_51 of formula_35; *view it as an element of the Temperley-Lieb algebra; *insert the Jones-Wenzl idempotents on some formula_39 parallel strands. The resulting element of formula_52 is the formula_39-colored Jones polynomial. See appendix H of for further details. Relationship to other theories. Link with Chern–Simons theory. 
As first shown by Edward Witten, the Jones polynomial of a given knot formula_53 can be obtained by considering Chern–Simons theory on the three-sphere with gauge group formula_54, and computing the vacuum expectation value of a Wilson loop formula_55, associated to formula_53, and the fundamental representation formula_56 of formula_54. Link with quantum knot invariants. By substituting formula_57 for the variable formula_37 of the Jones polynomial and expanding it as the series of h each of the coefficients turn to be the Vassiliev invariant of the knot formula_35. In order to unify the Vassiliev invariants (or, finite type invariants), Maxim Kontsevich constructed the Kontsevich integral. The value of the Kontsevich integral, which is the infinite sum of 1, 3-valued chord diagrams, named the Jacobi chord diagrams, reproduces the Jones polynomial along with the formula_58 weight system studied by Dror Bar-Natan. Link with the volume conjecture. By numerical examinations on some hyperbolic knots, Rinat Kashaev discovered that substituting the "n"-th root of unity into the parameter of the colored Jones polynomial corresponding to the "n"-dimensional representation, and limiting it as "n" grows to infinity, the limit value would give the hyperbolic volume of the knot complement. (See Volume conjecture.) Link with Khovanov homology. In 2000 Mikhail Khovanov constructed a certain chain complex for knots and links and showed that the homology induced from it is a knot invariant (see Khovanov homology). The Jones polynomial is described as the Euler characteristic for this homology. Detection of the unknot. It is an open question whether there is a nontrivial knot with Jones polynomial equal to that of the unknot. It is known that there are nontrivial "links" with Jones polynomial equal to that of the corresponding unlinks by the work of Morwen Thistlethwaite. It was shown by Kronheimer and Mrowka that there is no nontrivial knot with Khovanov homology equal to that of the unknot. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
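Two small checks of the constructions above, sketched with SymPy. The first carries out the bracket recipe from the definition section for a two-crossing diagram of the Hopf link: the loop counts of the four smoothing states are worked out by hand and hardcoded, and the writhe of the diagram is taken to be +2, a convention-dependent assumption. The second substitutes formula_57 into the commonly quoted Jones polynomial of a trefoil, −t⁻⁴ + t⁻³ + t⁻¹ (a chirality-convention assumption), showing that the constant term is 1 and that the first-order coefficient vanishes, the higher coefficients being the finite-type data mentioned above:

```python
import sympy as sp

A, t, h = sp.symbols('A t h', positive=True)
delta = -A**2 - A**-2                      # value contributed by each extra closed loop

# Kauffman bracket state sum for a standard 2-crossing diagram of the Hopf link.
# Loop counts for the four smoothing states were worked out by hand for this diagram.
states = {('A', 'A'): 2, ('A', 'B'): 1, ('B', 'A'): 1, ('B', 'B'): 2}
bracket = sp.expand(sum(A**(s.count('A') - s.count('B')) * delta**(loops - 1)
                        for s, loops in states.items()))
print("bracket <L> =", bracket)            # expected: -A**4 - A**(-4)

w = 2                                      # writhe of this oriented diagram (assumed)
X = sp.expand((-1)**w * A**(-3 * w) * bracket)   # (-A**3)**(-w) * <L>, written out
V = sp.expand(X.subs(A, t**sp.Rational(-1, 4)))  # substitution A = t**(-1/4)
print("V(Hopf link) =", V)                 # a Laurent polynomial in t**(1/2)

# Expansion of a knot's Jones polynomial at t = exp(h); the trefoil value below is
# the commonly quoted one (chirality convention assumed).
V_trefoil = -t**-4 + t**-3 + t**-1
expansion = sp.expand(sp.series(V_trefoil.subs(t, sp.exp(h)), h, 0, 5).removeO())
print("V(e^h) =", expansion)               # constant term 1, no first-order term in h
```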
[ { "math_id": 0, "text": "t^{1/2}" }, { "math_id": 1, "text": "L" }, { "math_id": 2, "text": "V(L)" }, { "math_id": 3, "text": "\\langle~\\rangle" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "X(L) = (-A^3)^{-w(L)}\\langle L \\rangle, " }, { "math_id": 6, "text": "w(L)" }, { "math_id": 7, "text": "L_{+}" }, { "math_id": 8, "text": "L_{-}" }, { "math_id": 9, "text": "X(L)" }, { "math_id": 10, "text": "-A^{\\pm 3}" }, { "math_id": 11, "text": "X" }, { "math_id": 12, "text": "+1" }, { "math_id": 13, "text": "-1" }, { "math_id": 14, "text": "A = t^{-1/4} " }, { "math_id": 15, "text": "k" }, { "math_id": 16, "text": "S_k" }, { "math_id": 17, "text": "2k" }, { "math_id": 18, "text": "\\mathrm{R}" }, { "math_id": 19, "text": "\\mathrm{R}[S_k]" }, { "math_id": 20, "text": "\\rho" }, { "math_id": 21, "text": "\\operatorname{TL}_n" }, { "math_id": 22, "text": "\\Z [A, A^{-1}]" }, { "math_id": 23, "text": "\\delta = -A^2 - A^{-2}" }, { "math_id": 24, "text": "\\sigma_i" }, { "math_id": 25, "text": "A\\cdot e_i + A^{-1}\\cdot 1" }, { "math_id": 26, "text": "1, e_1, \\dots, e_{n-1}" }, { "math_id": 27, "text": "\\sigma" }, { "math_id": 28, "text": "\\delta^{n-1} \\operatorname{tr} \\rho(\\sigma)" }, { "math_id": 29, "text": "\\operatorname{tr}" }, { "math_id": 30, "text": "\\langle L \\rangle" }, { "math_id": 31, "text": "\\langle" }, { "math_id": 32, "text": "\\rangle" }, { "math_id": 33, "text": " (t^{1/2} - t^{-1/2})V(L_0) = t^{-1}V(L_{+}) - tV(L_{-}) \\," }, { "math_id": 34, "text": "L_{0}" }, { "math_id": 35, "text": "K" }, { "math_id": 36, "text": "t^{-1}" }, { "math_id": 37, "text": "t" }, { "math_id": 38, "text": "V(K)" }, { "math_id": 39, "text": "N" }, { "math_id": 40, "text": "V_N(L,t)" }, { "math_id": 41, "text": "(N+1)" }, { "math_id": 42, "text": "U_q(\\mathfrak{sl}_2)" }, { "math_id": 43, "text": "V_1,\\ldots,V_k" }, { "math_id": 44, "text": "(V_1,\\ldots,V_k)" }, { "math_id": 45, "text": "V_{V_1,\\ldots,V_k}(L,t)" }, { "math_id": 46, "text": "V" }, { "math_id": 47, "text": "W" }, { "math_id": 48, "text": "V_{V\\oplus W}(L,t)=V_V(L,t)+V_W(L,t)" }, { "math_id": 49, "text": "V_{V\\otimes W}(L,t) = V_{V,W}(L^2,t)" }, { "math_id": 50, "text": "L^2" }, { "math_id": 51, "text": "K^N" }, { "math_id": 52, "text": "\\mathbb{Q}(t)" }, { "math_id": 53, "text": "\\gamma" }, { "math_id": 54, "text": "\\mathrm{SU}(2)" }, { "math_id": 55, "text": "W_F(\\gamma)" }, { "math_id": 56, "text": "F" }, { "math_id": 57, "text": "e^h" }, { "math_id": 58, "text": "\\mathfrak{sl}_2" } ]
https://en.wikipedia.org/wiki?curid=1107642
1107647
Alexander polynomial
Knot invariant In mathematics, the Alexander polynomial is a knot invariant which assigns a polynomial with integer coefficients to each knot type. James Waddell Alexander II discovered this, the first knot polynomial, in 1923. In 1969, John Conway showed a version of this polynomial, now called the Alexander–Conway polynomial, could be computed using a skein relation, although its significance was not realized until the discovery of the Jones polynomial in 1984. Soon after Conway's reworking of the Alexander polynomial, it was realized that a similar skein relation was exhibited in Alexander's paper on his polynomial. Definition. Let "K" be a knot in the 3-sphere. Let "X" be the infinite cyclic cover of the knot complement of "K". This covering can be obtained by cutting the knot complement along a Seifert surface of "K" and gluing together infinitely many copies of the resulting manifold with boundary in a cyclic manner. There is a covering transformation "t" acting on "X". Consider the first homology (with integer coefficients) of "X", denoted formula_0. The transformation "t" acts on the homology and so we can consider formula_0 a module over the ring of Laurent polynomials formula_1. This is called the Alexander invariant or Alexander module. The module is finitely presentable; a presentation matrix for this module is called the Alexander matrix. If the number of generators, formula_2, is less than or equal to the number of relations, formula_3 , then we consider the ideal generated by all formula_4 minors of the matrix; this is the zeroth Fitting ideal or Alexander ideal and does not depend on choice of presentation matrix. If formula_5, set the ideal equal to 0. If the Alexander ideal is principal, take a generator; this is called an Alexander polynomial of the knot. Since this is only unique up to multiplication by the Laurent monomial formula_6, one often fixes a particular unique form. Alexander's choice of normalization is to make the polynomial have a positive constant term. Alexander proved that the Alexander ideal is nonzero and always principal. Thus an Alexander polynomial always exists, and is clearly a knot invariant, denoted formula_7. It turns out that the Alexander polynomial of a knot is the same polynomial for the mirror image knot. In other words, it cannot distinguish between a knot and its mirror image. Computing the polynomial. The following procedure for computing the Alexander polynomial was given by J. W. Alexander in his paper. Take an oriented diagram of the knot with formula_8 crossings; there are formula_9 regions of the knot diagram. To work out the Alexander polynomial, first one must create an incidence matrix of size formula_10. The formula_8 rows correspond to the formula_8 crossings, and the formula_9 columns to the regions. The values for the matrix entries are either formula_11. Consider the entry corresponding to a particular region and crossing. If the region is not adjacent to the crossing, the entry is 0. If the region is adjacent to the crossing, the entry depends on its location. The following table gives the entry, determined by the location of the region at the crossing from the perspective of the incoming undercrossing line. on the left before undercrossing: formula_12 on the right before undercrossing: formula_13 on the left after undercrossing: formula_14 on the right after undercrossing: formula_15 Remove two columns corresponding to adjacent regions from the matrix, and work out the determinant of the new formula_16 matrix. 
Depending on the columns removed, the answer will differ by multiplication by formula_6, where the power of formula_8 is not necessarily the number of crossings in the knot. To resolve this ambiguity, divide out the largest possible power of formula_14 and multiply by formula_15 if necessary, so that the constant term is positive. This gives the Alexander polynomial. The Alexander polynomial can also be computed from the Seifert matrix. After the work of J. W. Alexander, Ralph Fox considered a copresentation of the knot group formula_17, and introduced non-commutative differential calculus, which also permits one to compute formula_7. Basic properties of the polynomial. The Alexander polynomial is symmetric: formula_18 for all knots K. From the point of view of the definition, this is an expression of the Poincaré Duality isomorphism formula_19 where formula_20 is the quotient of the field of fractions of formula_21 by formula_21, considered as a formula_21-module, and where formula_22 is the conjugate formula_21-module to formula_23 ie: as an abelian group it is identical to formula_23 but the covering transformation formula_14 acts by formula_24. Furthermore, the Alexander polynomial evaluates to a unit on 1: formula_25. From the point of view of the definition, this is an expression of the fact that the knot complement is a homology circle, generated by the covering transformation formula_14. More generally if formula_26 is a 3-manifold such that formula_27 it has an Alexander polynomial formula_28 defined as the order ideal of its infinite-cyclic covering space. In this case formula_29 is, up to sign, equal to the order of the torsion subgroup of formula_30. Every integral Laurent polynomial which is both symmetric and evaluates to a unit at 1 is the Alexander polynomial of a knot. Geometric significance of the polynomial. Since the Alexander ideal is principal, formula_31 if and only if the commutator subgroup of the knot group is perfect (i.e. equal to its own commutator subgroup). For a topologically slice knot, the Alexander polynomial satisfies the Fox–Milnor condition formula_32 where formula_33 is some other integral Laurent polynomial. Twice the knot genus is bounded below by the degree of the Alexander polynomial. Michael Freedman proved that a knot in the 3-sphere is topologically slice; i.e., bounds a "locally-flat" topological disc in the 4-ball, if the Alexander polynomial of the knot is trivial. Kauffman describes the first construction of the Alexander polynomial via state sums derived from physical models. A survey of these topic and other connections with physics are given in. There are other relations with surfaces and smooth 4-dimensional topology. For example, under certain assumptions, there is a way of modifying a smooth 4-manifold by performing a surgery that consists of removing a neighborhood of a two-dimensional torus and replacing it with a knot complement crossed with "S"1. The result is a smooth 4-manifold homeomorphic to the original, though now the Seiberg–Witten invariant has been modified by multiplication with the Alexander polynomial of the knot. Knots with symmetries are known to have restricted Alexander polynomials. Nonetheless, the Alexander polynomial can fail to detect some symmetries, such as strong invertibility. If the knot complement fibers over the circle, then the Alexander polynomial of the knot is known to be "monic" (the coefficients of the highest and lowest order terms are equal to formula_34). 
In fact, if formula_35 is a fiber bundle where formula_36 is the knot complement, let formula_37 represent the monodromy, then formula_38 where formula_39 is the induced map on homology. Relations to satellite operations. If a knot formula_40 is a satellite knot with pattern knot formula_41 (there exists an embedding formula_42 such that formula_43, where formula_44 is an unknotted solid torus containing formula_41), then formula_45, where formula_46 is the integer that represents formula_47 in formula_48. Examples: For a connect-sum formula_49. If formula_40 is an untwisted Whitehead double, then formula_50. Alexander–Conway polynomial. Alexander proved the Alexander polynomial satisfies a skein relation. John Conway later rediscovered this in a different form and showed that the skein relation together with a choice of value on the unknot was enough to determine the polynomial. Conway's version is a polynomial in "z" with integer coefficients, denoted formula_51 and called the Alexander–Conway polynomial (also known as Conway polynomial or Conway–Alexander polynomial). Suppose we are given an oriented link diagram, where formula_52 are link diagrams resulting from crossing and smoothing changes on a local region of a specified crossing of the diagram, as indicated in the figure. Here are Conway's skein relations: The relationship to the standard Alexander polynomial is given by formula_55. Here formula_56 must be properly normalized (by multiplication of formula_57) to satisfy the skein relation formula_58. Note that this relation gives a Laurent polynomial in "t1/2". See knot theory for an example computing the Conway polynomial of the trefoil. Relation to Floer homology. Using pseudo-holomorphic curves, Ozsváth-Szabó and Rasmussen associated a bigraded abelian group, called knot Floer homology, to each isotopy class of knots. The graded Euler characteristic of knot Floer homology is the Alexander polynomial. While the Alexander polynomial gives a lower bound on the genus of a knot, showed that knot Floer homology detects the genus. Similarly, while the Alexander polynomial gives an obstruction to a knot complement fibering over the circle, showed that knot Floer homology completely determines when a knot complement fibers over the circle. The knot Floer homology groups are part of the Heegaard Floer homology family of invariants; see Floer homology for further discussion. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
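A small SymPy check of several statements above, using the commonly quoted Seifert matrix of a trefoil knot, [[−1, 1], [0, −1]], and the determinant formula Δ(t) = det(V − tVᵀ); both the matrix and this particular form of the formula are convention-dependent assumptions. The script verifies the symmetry formula_18, the unit value formula_25, and the Alexander–Conway relation formula_55 with the commonly quoted Conway polynomial ∇(z) = z² + 1 of the trefoil:

```python
import sympy as sp

t, z = sp.symbols('t z')

# Commonly quoted Seifert matrix of a trefoil knot (convention-dependent assumption).
V = sp.Matrix([[-1, 1],
               [ 0, -1]])

# One standard formula: Delta(t) = det(V - t * V^T), defined up to units +-t^n.
Delta = sp.expand((V - t * V.T).det())
print("Delta(t) =", Delta)                      # expect t**2 - t + 1

# Symmetrise by dividing out t, giving the palindromic form t - 1 + 1/t.
Delta_sym = sp.expand(Delta / t)

# Basic properties: Delta(1) = +-1 and Delta(t) = Delta(1/t) up to units.
print("Delta(1)        =", Delta.subs(t, 1))
print("symmetry check  =", sp.simplify(Delta_sym - Delta_sym.subs(t, 1/t)))  # expect 0

# Alexander-Conway relation: Delta(t^2) = nabla(t - 1/t), with nabla(z) = z**2 + 1
# the commonly quoted Conway polynomial of the trefoil.
nabla = z**2 + 1
lhs = sp.expand(Delta_sym.subs(t, t**2))
rhs = sp.expand(nabla.subs(z, t - 1/t))
print("Conway relation =", sp.simplify(lhs - rhs))  # expect 0
```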
[ { "math_id": 0, "text": "H_1(X)" }, { "math_id": 1, "text": "\\mathbb{Z}[t, t^{-1}]" }, { "math_id": 2, "text": "r" }, { "math_id": 3, "text": "s" }, { "math_id": 4, "text": "r \\times r" }, { "math_id": 5, "text": "r > s" }, { "math_id": 6, "text": "\\pm t^n" }, { "math_id": 7, "text": "\\Delta_K(t)" }, { "math_id": 8, "text": "n" }, { "math_id": 9, "text": "n+2" }, { "math_id": 10, "text": "(n, n + 2)" }, { "math_id": 11, "text": "0,1,-1,t,-t" }, { "math_id": 12, "text": "-t" }, { "math_id": 13, "text": "1" }, { "math_id": 14, "text": "t" }, { "math_id": 15, "text": "-1" }, { "math_id": 16, "text": "n \\times n" }, { "math_id": 17, "text": "\\pi_1(S^3\\backslash K)" }, { "math_id": 18, "text": "\\Delta_K(t^{-1}) = \\Delta_K(t)" }, { "math_id": 19, "text": " \\overline{H_1 X} \\simeq \\mathrm{Hom}_{\\mathbb Z[t,t^{-1}]}(H_1 X, G) " }, { "math_id": 20, "text": "G" }, { "math_id": 21, "text": "\\mathbb Z[t,t^{-1}]" }, { "math_id": 22, "text": "\\overline{H_1 X}" }, { "math_id": 23, "text": "H_1 X" }, { "math_id": 24, "text": "t^{-1}" }, { "math_id": 25, "text": "\\Delta_K(1)=\\pm 1" }, { "math_id": 26, "text": "M" }, { "math_id": 27, "text": "rank(H_1 M) = 1" }, { "math_id": 28, "text": "\\Delta_M(t)" }, { "math_id": 29, "text": "\\Delta_M(1)" }, { "math_id": 30, "text": "H_1 M" }, { "math_id": 31, "text": "\\Delta_K(t)=1" }, { "math_id": 32, "text": "\\Delta_K(t) = f(t)f(t^{-1})" }, { "math_id": 33, "text": "f(t)" }, { "math_id": 34, "text": "\\pm 1" }, { "math_id": 35, "text": "S \\to C_K \\to S^1" }, { "math_id": 36, "text": "C_K" }, { "math_id": 37, "text": "g : S \\to S" }, { "math_id": 38, "text": "\\Delta_K(t) = {\\rm Det}(tI-g_*)" }, { "math_id": 39, "text": "g_*\\colon H_1 S \\to H_1 S" }, { "math_id": 40, "text": "K" }, { "math_id": 41, "text": "K'" }, { "math_id": 42, "text": "f : S^1 \\times D^2 \\to S^3" }, { "math_id": 43, "text": "K=f(K')" }, { "math_id": 44, "text": "S^1 \\times D^2 \\subset S^3" }, { "math_id": 45, "text": "\\Delta_K(t) = \\Delta_{f(S^1 \\times \\{0\\})}(t^a) \\Delta_{K'}(t)" }, { "math_id": 46, "text": "a \\in \\mathbb Z" }, { "math_id": 47, "text": "K' \\subset S^1 \\times D^2" }, { "math_id": 48, "text": "H_1(S^1\\times D^2) = \\mathbb Z" }, { "math_id": 49, "text": "\\Delta_{K_1 \\# K_2}(t) = \\Delta_{K_1}(t) \\Delta_{K_2}(t)" }, { "math_id": 50, "text": "\\Delta_K(t)=\\pm 1" }, { "math_id": 51, "text": "\\nabla(z)" }, { "math_id": 52, "text": "L_+, L_-, L_0" }, { "math_id": 53, "text": "\\nabla(O) = 1" }, { "math_id": 54, "text": "\\nabla(L_+) - \\nabla(L_-) = z \\nabla(L_0)" }, { "math_id": 55, "text": "\\Delta_L(t^2) = \\nabla_L(t - t^{-1})" }, { "math_id": 56, "text": "\\Delta_L" }, { "math_id": 57, "text": "\\pm t^{n/2}" }, { "math_id": 58, "text": "\\Delta(L_+) - \\Delta(L_-) = (t^{1/2} - t^{-1/2}) \\Delta(L_0)" } ]
https://en.wikipedia.org/wiki?curid=1107647
11076807
Céa's lemma
Céa's lemma is a lemma in mathematics. Introduced by Jean Céa in his Ph.D. dissertation, it is an important tool for proving error estimates for the finite element method applied to elliptic partial differential equations. Lemma statement. Let formula_0 be a real Hilbert space with the norm formula_1 Let formula_2 be a bilinear form with the properties Let formula_9 be a bounded linear operator. Consider the problem of finding an element formula_10 in formula_0 such that formula_11 for all formula_8 in formula_12 Consider the same problem on a finite-dimensional subspace formula_13 of formula_14 so, formula_15 in formula_13 satisfies formula_16 for all formula_8 in formula_17 By the Lax–Milgram theorem, each of these problems has exactly one solution. Céa's lemma states that formula_18 for all formula_8 in formula_17 That is to say, the subspace solution formula_15 is "the best" approximation of formula_10 in formula_19 up to the constant formula_20 The proof is straightforward formula_21 for all formula_8 in formula_17 We used the formula_22-orthogonality of formula_23 and formula_24 formula_25 which follows directly from formula_26 formula_27 for all formula_8 in formula_13. Note: Céa's lemma holds on complex Hilbert spaces also, one then uses a sesquilinear form formula_28 instead of a bilinear one. The coercivity assumption then becomes formula_29 for all formula_8 in formula_0 (notice the absolute value sign around formula_30). Error estimate in the energy norm. In many applications, the bilinear form formula_2 is symmetric, so formula_31 for all formula_5 in formula_12 This, together with the above properties of this form, implies that formula_28 is an inner product on formula_12 The resulting norm formula_32 is called the energy norm, since it corresponds to a physical energy in many problems. This norm is equivalent to the original norm formula_1 Using the formula_22-orthogonality of formula_23 and formula_13 and the Cauchy–Schwarz inequality formula_33 for all formula_8 in formula_13. Hence, in the energy norm, the inequality in Céa's lemma becomes formula_34 for all formula_8 in formula_13 (notice that the constant formula_35 on the right-hand side is no longer present). This states that the subspace solution formula_15 is the best approximation to the full-space solution formula_10 in respect to the energy norm. Geometrically, this means that formula_15 is the projection of the solution formula_10 onto the subspace formula_13 in respect to the inner product formula_28 (see the adjacent picture). Using this result, one can also derive a sharper estimate in the norm formula_36. Since formula_37 for all formula_8 in formula_13, it follows that formula_38 for all formula_8 in formula_13. An application of Céa's lemma. We will apply Céa's lemma to estimate the error of calculating the solution to an elliptic differential equation by the finite element method. Consider the problem of finding a function formula_39 satisfying the conditions formula_40 where formula_41 is a given continuous function. Physically, the solution formula_10 to this two-point boundary value problem represents the shape taken by a string under the influence of a force such that at every point formula_42 between formula_22 and formula_43 the force density is formula_44 (where formula_45 is a unit vector pointing vertically, while the endpoints of the string are on a horizontal line, see the adjacent picture). 
For example, that force may be the gravity, when formula_46 is a constant function (since the gravitational force is the same at all points). Let the Hilbert space formula_0 be the Sobolev space formula_47 which is the space of all square-integrable functions formula_8 defined on formula_48 that have a weak derivative on formula_48 with formula_49 also being square integrable, and formula_8 satisfies the conditions formula_50 The inner product on this space is formula_51 for all formula_8 and formula_52 in formula_12 After multiplying the original boundary value problem by formula_8 in this space and performing an integration by parts, one obtains the equivalent problem formula_11 for all formula_8 in formula_0, with formula_53, and formula_54 It can be shown that the bilinear form formula_28 and the operator formula_55 satisfy the assumptions of Céa's lemma. In order to determine a finite-dimensional subspace formula_13 of formula_14 consider a partition formula_56 of the interval formula_57 and let formula_13 be the space of all continuous functions that are affine on each subinterval in the partition (such functions are called piecewise-linear). In addition, assume that any function in formula_13 takes the value 0 at the endpoints of formula_58 It follows that formula_13 is a vector subspace of formula_0 whose dimension is formula_59 (the number of points in the partition that are not endpoints). Let formula_15 be the solution to the subspace problem formula_16 for all formula_8 in formula_19 so one can think of formula_15 as of a piecewise-linear approximation to the exact solution formula_60 By Céa's lemma, there exists a constant formula_61 dependent only on the bilinear form formula_62 such that formula_63 for all formula_8 in formula_17 To explicitly calculate the error between formula_10 and formula_64 consider the function formula_65 in formula_13 that has the same values as formula_10 at the nodes of the partition (so formula_65 is obtained by linear interpolation on each interval formula_66 from the values of formula_10 at interval's endpoints). It can be shown using Taylor's theorem that there exists a constant formula_67 that depends only on the endpoints formula_22 and formula_68 such that formula_69 for all formula_42 in formula_57 where formula_70 is the largest length of the subintervals formula_66 in the partition, and the norm on the right-hand side is the L2 norm. This inequality then yields an estimate for the error formula_71 Then, by substituting formula_72 in Céa's lemma it follows that formula_73 where formula_74 is a different constant from the above (it depends only on the bilinear form, which implicitly depends on the interval formula_48). This result is of a fundamental importance, as it states that the finite element method can be used to approximately calculate the solution of our problem, and that the error in the computed solution decreases proportionately to the partition size formula_75 Céa's lemma can be applied along the same lines to derive error estimates for finite element problems in higher dimensions (here the domain of formula_10 was in one dimension), and while using higher order polynomials for the subspace formula_17
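A minimal numerical sketch of the piecewise-linear finite element method just described, on the interval [0, 1] with a uniform mesh; the right-hand side f(x) = π² sin(πx) (so that the exact solution is u(x) = sin(πx)) and the midpoint-rule quadrature for the load vector are assumptions made only for the illustration. The decrease of the error as the mesh is refined is consistent with the bound formula_73 above:

```python
import numpy as np

def solve_fem(n, f):
    """Piecewise-linear FEM for -u'' = f on [0, 1] with u(0) = u(1) = 0,
    on a uniform mesh with n subintervals. Returns mesh nodes and nodal values."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix on the interior nodes (tridiagonal: 2/h on the diagonal, -1/h off it).
    main = (2.0 / h) * np.ones(n - 1)
    off = (-1.0 / h) * np.ones(n - 2)
    A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
    # Load vector: L(phi_i) = integral of f * phi_i, approximated by the midpoint rule
    # on the two subintervals supporting each hat function phi_i.
    b = np.zeros(n - 1)
    for i in range(1, n):
        b[i - 1] = (h / 2) * f(x[i] - h / 2) + (h / 2) * f(x[i] + h / 2)
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

f = lambda x: np.pi**2 * np.sin(np.pi * x)      # assumed right-hand side
exact = lambda x: np.sin(np.pi * x)             # corresponding exact solution

for n in (4, 8, 16, 32, 64):
    x, u = solve_fem(n, f)
    err = np.max(np.abs(u - exact(x)))
    print(f"n = {n:3d}   max nodal error = {err:.3e}")
```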
[ { "math_id": 0, "text": "V" }, { "math_id": 1, "text": "\\|\\cdot\\|." }, { "math_id": 2, "text": "a:V\\times V\\to \\mathbb R" }, { "math_id": 3, "text": "|a(v, w)| \\le \\gamma \\|v\\|\\,\\|w\\|" }, { "math_id": 4, "text": "\\gamma>0" }, { "math_id": 5, "text": "v, w " }, { "math_id": 6, "text": "a(v, v) \\ge \\alpha \\|v\\|^2" }, { "math_id": 7, "text": "\\alpha>0" }, { "math_id": 8, "text": "v" }, { "math_id": 9, "text": "L:V\\to \\mathbb R" }, { "math_id": 10, "text": "u" }, { "math_id": 11, "text": "a(u, v)=L(v)" }, { "math_id": 12, "text": "V." }, { "math_id": 13, "text": "V_h" }, { "math_id": 14, "text": "V," }, { "math_id": 15, "text": "u_h" }, { "math_id": 16, "text": "a(u_h, v)=L(v)" }, { "math_id": 17, "text": "V_h." }, { "math_id": 18, "text": "\\|u-u_h\\|\\le \\frac{\\gamma}{\\alpha}\\|u-v\\|" }, { "math_id": 19, "text": "V_h," }, { "math_id": 20, "text": "\\gamma/\\alpha." }, { "math_id": 21, "text": "\\alpha\\|u-u_h\\|^2 \\le a(u-u_h,u-u_h) = a(u-u_h,u-v) + a(u-u_h,v - u_h) = a(u-u_h,u-v)\n \\le \\gamma\\|u-u_h\\|\\|u-v\\|" }, { "math_id": 22, "text": "a" }, { "math_id": 23, "text": "u-u_h" }, { "math_id": 24, "text": "v - u_h \\in V_h" }, { "math_id": 25, "text": "a(u-u_h,v) = 0, \\ \\forall \\ v \\in V_h" }, { "math_id": 26, "text": "V_h \\subset V" }, { "math_id": 27, "text": "a(u, v) = L(v) = a(u_h, v)" }, { "math_id": 28, "text": "a(\\cdot, \\cdot)" }, { "math_id": 29, "text": "|a(v, v)| \\ge \\alpha \\|v\\|^2" }, { "math_id": 30, "text": "a(v, v)" }, { "math_id": 31, "text": "a(v, w) =a(w, v)" }, { "math_id": 32, "text": "\\|v\\|_a=\\sqrt{a(v, v)}" }, { "math_id": 33, "text": "\\|u-u_h\\|_a^2 = a(u-u_h,u-u_h) = a(u-u_h,u-v) \\le \\|u-u_h\\|_a \\cdot \\|u-v\\|_a" }, { "math_id": 34, "text": "\\|u-u_h\\|_a\\le \\|u-v\\|_a" }, { "math_id": 35, "text": "\\gamma/\\alpha" }, { "math_id": 36, "text": "\\| \\cdot \\|" }, { "math_id": 37, "text": "\\alpha \\|u-u_h\\|^2 \\le a(u-u_h,u-u_h) = \\|u-u_h\\|_a^2 \\le \\|u - v\\|_a^2 \\le \\gamma \\|u-v\\|^2" }, { "math_id": 38, "text": "\\|u-u_h\\| \\le \\sqrt{\\frac{\\gamma}{\\alpha}} \\|u-v\\|" }, { "math_id": 39, "text": "u:[a, b]\\to \\mathbb R" }, { "math_id": 40, "text": "\\begin{cases} \n-u''=f \\mbox { in } [a, b] \\\\\nu(a)=u(b)=0 \n\\end{cases}\n" }, { "math_id": 41, "text": "f:[a, b]\\to \\mathbb R" }, { "math_id": 42, "text": "x" }, { "math_id": 43, "text": "b" }, { "math_id": 44, "text": "f(x)\\mathbf{e}" }, { "math_id": 45, "text": "\\mathbf{e}" }, { "math_id": 46, "text": "f" }, { "math_id": 47, "text": "H^1_0(a, b)," }, { "math_id": 48, "text": "[a, b]" }, { "math_id": 49, "text": "v'" }, { "math_id": 50, "text": "v(a)=v(b)=0." }, { "math_id": 51, "text": "(v, w)=\\int_a^b\\! \\left( v(x)w(w) + v'(x) w'(x)\\right)\\,dx" }, { "math_id": 52, "text": "w" }, { "math_id": 53, "text": "a(u, v)=\\int_a^b\\! u'(x) v'(x)\\,dx" }, { "math_id": 54, "text": "L(v) = \\int_a^b\\! f(x) v(x) \\, dx." }, { "math_id": 55, "text": "L" }, { "math_id": 56, "text": "a=x_0< x_1 < \\cdots < x_{n-1} < x_n = b" }, { "math_id": 57, "text": "[a, b]," }, { "math_id": 58, "text": "[a, b]." }, { "math_id": 59, "text": "n-1" }, { "math_id": 60, "text": "u." 
}, { "math_id": 61, "text": "C>0" }, { "math_id": 62, "text": "a(\\cdot, \\cdot)," }, { "math_id": 63, "text": "\\|u-u_h\\|\\le C \\|u-v\\|" }, { "math_id": 64, "text": "u_h," }, { "math_id": 65, "text": "\\pi u" }, { "math_id": 66, "text": "[x_i, x_{i+1}]" }, { "math_id": 67, "text": "K" }, { "math_id": 68, "text": "b," }, { "math_id": 69, "text": "|u'(x)-(\\pi u)'(x)|\\le K h \\|u''\\|_{L^2(a, b)}" }, { "math_id": 70, "text": "h" }, { "math_id": 71, "text": "\\|u-\\pi u\\|." }, { "math_id": 72, "text": "v=\\pi u" }, { "math_id": 73, "text": "\\|u-u_h\\|\\le C h \\|u''\\|_{L^2(a, b)}, " }, { "math_id": 74, "text": "C" }, { "math_id": 75, "text": "h." } ]
https://en.wikipedia.org/wiki?curid=11076807
11081803
Phase retrieval
Algorithmic determination of wave cycle parts Phase retrieval is the process of algorithmically finding solutions to the phase problem. Given a complex signal formula_0, of amplitude formula_1, and phase formula_2: formula_3 where "x" is an "M"-dimensional spatial coordinate and "k" is an "M"-dimensional spatial frequency coordinate. Phase retrieval consists of finding the phase that satisfies a set of constraints for a measured amplitude. Important applications of phase retrieval include X-ray crystallography, transmission electron microscopy and coherent diffractive imaging, for which formula_4. Uniqueness theorems for both 1-D and 2-D cases of the phase retrieval problem, including the phaseless 1-D inverse scattering problem, were proven by Klibanov and his collaborators (see References). Problem formulation. Here we consider 1-D discrete Fourier transform (DFT) phase retrieval problem. The DFT of a complex signal formula_5 is given by formula_6, and the oversampled DFT of formula_7 is given by formula_8, where formula_9. Since the DFT operator is bijective, this is equivalent to recovering the phase formula_10. It is common recovering a signal from its autocorrelation sequence instead of its Fourier magnitude. That is, denote by formula_11 the vector formula_12 after padding with formula_13 zeros. The autocorrelation sequence of formula_11 is then defined as formula_14, and the DFT of formula_15, denoted by formula_16, satisfies formula_17. Methods. Error reduction algorithm. The error reduction is a generalization of the Gerchberg–Saxton algorithm. It solves for formula_18 from measurements of formula_19 by iterating a four-step process. For the formula_20th iteration the steps are as follows: Step (1): formula_21, formula_22, and formula_23 are estimates of, respectively, formula_24, formula_25 and formula_18. In the first step we calculate the Fourier transform of formula_23: formula_26 Step (2): The experimental value of formula_19, calculated from the diffraction pattern via the signal equation, is then substituted for formula_27, giving an estimate of the Fourier transform: formula_28 where the ' denotes an intermediate result that will be discarded later on. Step (3): the estimate of the Fourier transform formula_29 is then inverse Fourier transformed: formula_30 Step (4): formula_31 then must be changed so that the new estimate of the object, formula_32, satisfies the object constraints. formula_32 is therefore defined piecewise as: formula_33 where formula_34 is the domain in which formula_31 does not satisfy the object constraints. A new estimate formula_32 is obtained and the four step process is repeated. This process is continued until both the Fourier constraint and object constraint are satisfied. Theoretically, the process will always lead to a convergence, but the large number of iterations needed to produce a satisfactory image (generally &gt;2000) results in the error-reduction algorithm by itself being unsuitable for practical applications. Hybrid input-output algorithm. The hybrid input-output algorithm is a modification of the error-reduction algorithm - the first three stages are identical. However, formula_23 no longer acts as an estimate of formula_18, but the input function corresponding to the output function formula_31, which is an estimate of formula_18. In the fourth step, when the function formula_31 violates the object constraints, the value of formula_35 is forced towards zero, but optimally not to zero. 
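A minimal one-dimensional sketch of the four-step error-reduction iteration described above, with the hybrid input-output update of the fourth step (discussed next) included for comparison; the object constraint used is a known support together with non-negativity, and the signal length, support size, feedback parameter β = 0.9 and iteration count are assumptions of the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ground-truth object: real, non-negative, supported on the first 32 of 128 samples.
n, support_len = 128, 32
support = np.zeros(n, dtype=bool)
support[:support_len] = True
f_true = np.zeros(n)
f_true[support] = rng.random(support_len)

measured_modulus = np.abs(np.fft.fft(f_true))        # |F(u)|: the only data kept

def retrieve(method="ER", beta=0.9, n_iter=500):
    g = rng.random(n) * support                      # initial object-domain guess
    for _ in range(n_iter):
        G = np.fft.fft(g)                                        # step 1
        G_prime = measured_modulus * np.exp(1j * np.angle(G))    # step 2: impose |F(u)|
        g_prime = np.real(np.fft.ifft(G_prime))                  # step 3
        violates = (~support) | (g_prime < 0)                    # object-constraint violation
        g_new = g_prime.copy()                                   # step 4
        if method == "ER":
            g_new[violates] = 0.0
        else:                                                    # hybrid input-output
            g_new[violates] = g[violates] - beta * g_prime[violates]
        g = g_new
    est = g_prime.copy()                             # object-domain estimate
    est[violates] = 0.0
    return est

for method in ("ER", "HIO"):
    est = retrieve(method)
    err = np.linalg.norm(np.abs(np.fft.fft(est)) - measured_modulus) / np.linalg.norm(measured_modulus)
    print(f"{method}: relative Fourier-modulus misfit after 500 iterations = {err:.2e}")
```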
The chief advantage of the hybrid input-output algorithm is that the function formula_23 contains feedback information concerning previous iterations, reducing the probability of stagnation. It has been shown that the hybrid input-output algorithm converges to a solution significantly faster than the error reduction algorithm; its convergence rate can be further improved through step-size optimization algorithms. The object-domain update used in its fourth step is formula_36 where formula_37 is a feedback parameter, with formula_38 being a common choice. Shrinkwrap. For a two-dimensional phase retrieval problem, there is a degeneracy of solutions, as formula_18 and its conjugate formula_39 have the same Fourier modulus. This leads to "image twinning", in which the phase retrieval algorithm stagnates, producing an image with features of both the object and its conjugate. The shrinkwrap technique periodically updates the estimate of the support by low-pass filtering the current estimate of the object amplitude (by convolution with a Gaussian) and applying a threshold, leading to a reduction in the image ambiguity. Semidefinite relaxation-based algorithm for short time Fourier transform. Phase retrieval is an ill-posed problem. To uniquely identify the underlying signal, in addition to methods that impose extra prior information, such as the Gerchberg–Saxton algorithm, another approach is to collect additional magnitude-only measurements, such as those of the short-time Fourier transform (STFT). The method introduced below is mainly based on the work of Jaganathan "et al". Short time Fourier transform. Consider a discrete signal formula_40 sampled from formula_41. We use a window of length "W": formula_42 to compute the STFT of formula_43, denoted by formula_44: formula_45 for formula_46 and formula_47, where the parameter formula_48 denotes the separation in time between adjacent short-time sections and the parameter formula_49 denotes the number of short-time sections considered. Another interpretation (called the sliding window interpretation) of the STFT can be given with the help of the discrete Fourier transform (DFT). Let formula_50 denote the window element obtained from the shifted and flipped window formula_51. Then we have formula_52, where formula_53. Problem definition. Let formula_54 be the formula_55 measurements corresponding to the magnitude-square of the STFT of formula_56, and let formula_57 be the formula_58 diagonal matrix with diagonal elements formula_59 STFT phase retrieval can then be stated as: Find formula_56 such that formula_60 for formula_61 and formula_62, where formula_63 is the formula_64-th column of the formula_65-point inverse DFT matrix. Intuitively, a computational complexity that grows with formula_65 would make the method impractical. In practice, however, for most cases we only need to consider the measurements corresponding to formula_66, for any parameter formula_67 satisfying formula_68. More specifically, if both the signal and the window are not "vanishing", that is, formula_69 for all formula_70 and formula_71 for all formula_72 formula_73, the signal formula_56 can be uniquely identified from its STFT magnitude if the window parameters satisfy formula_74. The proof can be found in Jaganathan's work, which reformulates STFT phase retrieval as the following least-squares problem: formula_75. The algorithm, although without theoretical recovery guarantees, is empirically able to converge to the global minimum when there is substantial overlap between adjacent short-time sections. Semidefinite relaxation-based algorithm. 
To establish recovery guarantees, one way is to formulate the problems as a semidefinite program (SDP), by embedding the problem in a higher dimensional space using the transformation formula_76 and relax the rank-one constraint to obtain a convex program. The problem reformulated is stated below: Obtain formula_77 by solving:formula_78for formula_79 and formula_62 Once formula_77 is found, we can recover signal formula_56 by best rank-one approximation. Applications. Phase retrieval is a key component of coherent diffraction imaging (CDI). In CDI, the intensity of the diffraction pattern scattered from a target is measured. The phase of the diffraction pattern is then obtained using phase retrieval algorithms and an image of the target is constructed. In this way, phase retrieval allows for the conversion of a diffraction pattern into an image without an optical lens. Using phase retrieval algorithms, it is possible to characterize complex optical systems and their aberrations. For example, phase retrieval was used to diagnose and repair the flawed optics of the Hubble Space Telescope. Other applications of phase retrieval include X-ray crystallography and transmission electron microscopy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
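A small numerical sketch of the STFT magnitude-square measurements defined above; the signal length N = 16, window length W = 8, shift L = 4 and the all-ones window are assumptions chosen only to keep the illustration small:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed sizes for the illustration.
N, W, L = 16, 8, 4                      # signal length, window length, shift
R = int(np.ceil((N + W - 1) / L))       # number of short-time sections

f = rng.standard_normal(N) + 1j * rng.standard_normal(N)   # non-vanishing test signal
w = np.ones(W)                                              # non-vanishing window (assumed)

def stft_magnitude_squared(f, w, L, R):
    """Z[m, r] = |sum_n f[n] * w[r*L - n] * exp(-2j*pi*m*n/N)|**2, as defined above."""
    N = len(f)
    Z = np.zeros((N, R))
    for r in range(R):
        windowed = np.zeros(N, dtype=complex)
        for n in range(N):
            k = r * L - n                    # index into the shifted, flipped window
            if 0 <= k < len(w):
                windowed[n] = f[n] * w[k]
        Z[:, r] = np.abs(np.fft.fft(windowed)) ** 2
    return Z

Z = stft_magnitude_squared(f, w, L, R)
print("measurement matrix shape:", Z.shape)   # (N, R)
```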
[ { "math_id": 0, "text": "F(k)" }, { "math_id": 1, "text": "|F (k)| " }, { "math_id": 2, "text": "\\psi(k)" }, { "math_id": 3, "text": "F(k) = |F(k)|e^{i \\psi(k)} =\\int_{-\\infty}^{\\infty} f(x)\\ e^{- 2\\pi i k \\cdot x}\\,dx" }, { "math_id": 4, "text": " M = 2" }, { "math_id": 5, "text": "f[n]" }, { "math_id": 6, "text": "F[k]=\\sum_{n=0}^{N-1} f[n] e^{-j 2 \\pi \\frac{k n}{N},}=|F[k]|\\cdot e^{j\\psi[k]} \\quad k=0,1, \\ldots, N-1" }, { "math_id": 7, "text": " x " }, { "math_id": 8, "text": "F[k]=\\sum_{n=0}^{N-1} f[n] e^{-j 2 \\pi \\frac{k n}{M},} \\quad k=0,1, \\ldots, M-1" }, { "math_id": 9, "text": " M > N " }, { "math_id": 10, "text": " \\psi[k] " }, { "math_id": 11, "text": " \\hat{f} " }, { "math_id": 12, "text": " f " }, { "math_id": 13, "text": " N-1 " }, { "math_id": 14, "text": " g[m]=\\sum_{i=\\max \\{1, m+1\\}}^{N} \\hat{f}_{i} \\overline{\\hat{f}_{i-m}}, \\quad m=-(N-1), \\ldots, N-1 " }, { "math_id": 15, "text": " g[m] " }, { "math_id": 16, "text": " G[k] " }, { "math_id": 17, "text": " G[k]=|F[k]|^2 " }, { "math_id": 18, "text": " f(x) " }, { "math_id": 19, "text": " |F(u)| " }, { "math_id": 20, "text": " k " }, { "math_id": 21, "text": " G_k(u) " }, { "math_id": 22, "text": " \\phi_k " }, { "math_id": 23, "text": " g_k(x) " }, { "math_id": 24, "text": " F(u) " }, { "math_id": 25, "text": " \\psi " }, { "math_id": 26, "text": "\nG_k(u) = |G_k(u)|e^{i \\phi_k(u)} = \\mathcal{F} (g_k (x)).\n" }, { "math_id": 27, "text": " |G_k(u)| " }, { "math_id": 28, "text": "\nG'_k(u) = |F(u)|e^{i \\phi_k(u)},\n" }, { "math_id": 29, "text": " G'_k(u) " }, { "math_id": 30, "text": "\ng'_k(x) =|g'_k(x)|e^{i \\theta'_k(x)} = \\mathcal{F}^{-1}(G'_k (u)).\n" }, { "math_id": 31, "text": " g'_k (x) " }, { "math_id": 32, "text": " g_{k+1} (x) " }, { "math_id": 33, "text": "\ng_{k+1} (x) \\equiv \\begin{cases}\n g'_k(x), & x \\notin \\gamma, \\\\\n 0, & x \\in \\gamma,\n\\end{cases}\n" }, { "math_id": 34, "text": " \\gamma " }, { "math_id": 35, "text": " g_{k+1}(x) " }, { "math_id": 36, "text": "\ng_{k+1} (x) \\equiv \\begin{cases}\n g'_k(x), & x \\notin \\gamma, \\\\\n g_k (x) - \\beta{g'_k(x)}, & x \\in \\gamma.\n\\end{cases}\n" }, { "math_id": 37, "text": " \\beta " }, { "math_id": 38, "text": " \\beta \\approx 0.9 " }, { "math_id": 39, "text": " f^*(-x) " }, { "math_id": 40, "text": "\\mathbf{x}=(f[0],f[1],...,f[N-1])^T" }, { "math_id": 41, "text": "f(x)" }, { "math_id": 42, "text": "\\mathbf{w}=(w[0],w[1],...,w[W-1])^T" }, { "math_id": 43, "text": "\\mathrm{f}" }, { "math_id": 44, "text": "\\mathbf{Y}" }, { "math_id": 45, "text": "Y[m,r]=\\sum_{n=0}^{N-1}{f[n]w[rL-n]e^{-i2\\pi \\frac{mn}{N}}}" }, { "math_id": 46, "text": "0\\leq m \\leq N-1" }, { "math_id": 47, "text": "0\\leq r \\leq R-1" }, { "math_id": 48, "text": "L" }, { "math_id": 49, "text": "R = \\left \\lceil \\frac{N+W-1}{L} \\right \\rceil" }, { "math_id": 50, "text": "w_r[n]=w[rL-n]" }, { "math_id": 51, "text": "\\mathbf{w}" }, { "math_id": 52, "text": "\\mathbf{Y}=[\\mathbf{Y}_0,\\mathbf{Y}_1,...,\\mathbf{Y}_{R-1}]" }, { "math_id": 53, "text": "\\mathbf{Y}_r = \\mathbf{x}\\circ\\mathbf{w}_r" }, { "math_id": 54, "text": "{Z}_{w}[m,r]=|Y[m,r]|^2" }, { "math_id": 55, "text": "N \\times R" }, { "math_id": 56, "text": "\\mathbf{x}" }, { "math_id": 57, "text": "\\mathbf{W}_{r}" }, { "math_id": 58, "text": "N \\times N" }, { "math_id": 59, "text": "\\left(w_{r}[0], w_{r}[1], \\ldots, w_{r}[N-1]\\right) ." 
}, { "math_id": 60, "text": "\nZ_{w}[m, r]=\\left|\\left\\langle\\mathbf{f}_{m}, \\mathbf{W}_{r} \\mathbf{x}\\right\\rangle\\right|^{2}\n" }, { "math_id": 61, "text": "0 \\leq m \\leq N-1" }, { "math_id": 62, "text": "0 \\leq r \\leq R-1" }, { "math_id": 63, "text": "\\mathbf{f}_{m}" }, { "math_id": 64, "text": "m" }, { "math_id": 65, "text": "N" }, { "math_id": 66, "text": "0 \\leq m \\leq M" }, { "math_id": 67, "text": "M" }, { "math_id": 68, "text": "2W \\leq M \\leq N" }, { "math_id": 69, "text": "x[n] \\neq 0" }, { "math_id": 70, "text": "0 \\leq n \\leq N-1" }, { "math_id": 71, "text": "w[n] \\neq 0" }, { "math_id": 72, "text": "0 \\leq" }, { "math_id": 73, "text": "n \\leq W-1" }, { "math_id": 74, "text": "L < W \\leq N/2" }, { "math_id": 75, "text": "\\min _{\\mathbf{x}} \\sum_{r=0}^{R-1} \\sum_{m=0}^{N-1}\\left(Z_{w}[m, r]-\\left|\\left\\langle\\mathbf{f}_{m}, \\mathbf{W}_{r} \\mathbf{x}\\right\\rangle\\right|^{2}\\right)^{2}" }, { "math_id": 76, "text": "\\mathbf{X}=\\mathbf{x}\\mathbf{x}^\\ast" }, { "math_id": 77, "text": "\\mathbf{\\hat{X}}" }, { "math_id": 78, "text": "\\begin{align}\n&\\mathrm{minimize}~~~\\mathrm{trace}(\\mathbf{X})\\\\[6pt]\n&\\mathrm{subject~to}~~Z[m,r]=\\mathrm{trace}(\\mathbf{W}_r^{\\ast}\\mathbf{f}_m\\mathbf{f}_m^\\ast\\mathbf{W}_r\\mathbf{X})\\\\[0pt]\n&~~~~~~~~~~~~~~~~~~~\\mathbf{X}\\succeq0\n\\end{align}" }, { "math_id": 79, "text": "1 \\leq m \\leq M" } ]
https://en.wikipedia.org/wiki?curid=11081803
11084460
Horocycle
Curve whose normals converge asymptotically In hyperbolic geometry, a horocycle (from Greek roots meaning "boundary circle"), sometimes called an oricycle or limit circle, is a curve of constant curvature where all the perpendicular geodesics (normals) through a point on a horocycle are limiting parallel, and all converge asymptotically to a single ideal point called the "centre" of the horocycle. In some models of hyperbolic geometry it looks as though the two "ends" of a horocycle get closer and closer to each other and closer to its centre, but this is not true: the two "ends" of a horocycle get further and further away from each other and remain at an infinite distance from its centre. The horosphere is the three-dimensional analogue of a horocycle. In Euclidean space, all curves of constant curvature are either straight lines (geodesics) or circles, but in a hyperbolic space of sectional curvature formula_0 the curves of constant curvature come in four types: geodesics with curvature formula_1 hypercycles with curvature formula_2 horocycles with curvature formula_3 and circles with curvature formula_4 Any two horocycles are congruent, and can be superimposed by an isometry (translation and rotation) of the hyperbolic plane. A horocycle can also be described as the limit of the circles that share a tangent at a given point, as their radii tend to infinity, or as the limit of hypercycles tangent at the point as the distances from their axes tend to infinity. Two horocycles with the same centre are called "concentric". As for concentric circles, any geodesic perpendicular to a horocycle is also perpendicular to every concentric horocycle. The length of an arc of a horocycle between two points is: longer than the length of the line segment between those two points, longer than the length of the arc of a hypercycle between those two points and shorter than the length of any circle arc between those two points. Properties. Standardized Gaussian curvature. When the hyperbolic plane has the standardized Gaussian curvature "K" of −1, the length "s" of an arc of a horocycle whose endpoints are a hyperbolic distance "d" apart (measured along the geodesic chord) is given by formula_5. Representations in models of hyperbolic geometry. Poincaré disk model. In the Poincaré disk model of the hyperbolic plane, horocycles are represented by circles tangent to the boundary circle; the centre of the horocycle is the ideal point where the horocycle touches the boundary circle. The compass and straightedge construction of the two horocycles through two points is the same as the CPP construction for the Special cases of Apollonius' problem where both points are inside the circle. In the Poincaré disk model, it looks like points near opposite "ends" of a horocycle get closer to each other and to the center of the horocycle (on the boundary circle), but in hyperbolic geometry every point on a horocycle is infinitely distant from the center of the horocycle. Also the distance between points on opposite "ends" of the horocycle increases as the arc length between those points increases. (The Euclidean intuition can be misleading because the scale of the model increases to infinity at the boundary circle.) Poincaré half-plane model. In the Poincaré half-plane model, horocycles are represented by circles tangent to the boundary line, in which case their centre is the ideal point where the circle touches the boundary line. When the centre of the horocycle is the ideal point at formula_6 then the horocycle is a line parallel to the boundary line. The compass and straightedge construction in the first case is the same construction as the LPP construction for the Special cases of Apollonius' problem. Hyperboloid model. 
In the hyperboloid model horocycles are represented by intersections of the hyperboloid with planes whose normal lies on the asymptotic cone (i.e., is a null vector in three-dimensional Minkowski space.) Metric. If the metric is normalized to have Gaussian curvature −1, then the horocycle is a curve of geodesic curvature 1 at every point. Horocycle flow. Every horocycle is the orbit of a unipotent subgroup of PSL(2,R) in the hyperbolic plane. Moreover, the displacement at unit speed along the horocycle tangent to a given unit tangent vector induces a flow on the unit tangent bundle of the hyperbolic plane. This flow is called the "horocycle flow" of the hyperbolic plane. Identifying the unit tangent bundle with the group PSL(2,R), the horocycle flow is given by the right-action of the unipotent subgroup formula_7, where: formula_8 That is, the flow at time formula_9 starting from a vector represented by formula_10 is equal to formula_11. If formula_12 is a hyperbolic surface its unit tangent bundle also supports a horocycle flow. If formula_12 is uniformised as formula_13 the unit tangent bundle is identified with formula_14 and the flow starting at formula_15 is given by formula_16. When formula_12 is compact, or more generally when formula_17 is a lattice, this flow is ergodic (with respect to the normalised Liouville measure). Moreover, in this setting Ratner's theorems describe very precisely the possible closures for its orbits. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
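A small numerical check, in the Poincaré half-plane model with curvature −1, of the relation formula_5 between the length "s" of a horocyclic arc and the hyperbolic distance "d" between its endpoints; the horocycle used is the horizontal line y = 1 (centred at the ideal point at infinity), and the specific endpoints are assumptions of the illustration:

```python
import math

# Upper half-plane model with metric ds^2 = (dx^2 + dy^2) / y^2 (curvature -1).
# The horocycle centred at the ideal point at infinity is the horizontal line y = c.

def horocycle_arc_length(x1, x2, c=1.0):
    """Arc length along the horocycle y = c between (x1, c) and (x2, c)."""
    return abs(x2 - x1) / c

def geodesic_distance(p, q):
    """Hyperbolic distance between two points of the upper half-plane."""
    (x1, y1), (x2, y2) = p, q
    cosh_d = 1 + ((x2 - x1)**2 + (y2 - y1)**2) / (2 * y1 * y2)
    return math.acosh(cosh_d)

c = 1.0
for dx in (0.5, 1.0, 2.0, 5.0):
    s = horocycle_arc_length(0.0, dx, c)
    d = geodesic_distance((0.0, c), (dx, c))
    print(f"dx = {dx:4.1f}:  arc length s = {s:.6f},  2*sinh(d/2) = {2*math.sinh(d/2):.6f}")
```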
[ { "math_id": 0, "text": "-1," }, { "math_id": 1, "text": "\\kappa = 0," }, { "math_id": 2, "text": "0 < |\\kappa| < 1," }, { "math_id": 3, "text": "|\\kappa| = 1," }, { "math_id": 4, "text": "|\\kappa| > 1." }, { "math_id": 5, "text": " s = 2 \\sinh \\left( \\frac{1}{2} d \\right) = \\sqrt{2 (\\cosh d -1) } " }, { "math_id": 6, "text": " y = \\infty " }, { "math_id": 7, "text": "U = \\{u_t,\\, t \\in \\mathbb R \\}" }, { "math_id": 8, "text": " u_t = \\pm\\left(\\begin{array}{cc} 1 & t \\\\ 0 & 1 \\end{array}\\right). " }, { "math_id": 9, "text": "t" }, { "math_id": 10, "text": "g \\in \\mathrm{PSL}_2(\\mathbb R)" }, { "math_id": 11, "text": "gu_t" }, { "math_id": 12, "text": "S" }, { "math_id": 13, "text": "S = \\Gamma \\backslash \\mathbb H^2" }, { "math_id": 14, "text": "\\Gamma \\backslash \\mathrm{PSL}_2(\\mathbb R)" }, { "math_id": 15, "text": "\\Gamma g" }, { "math_id": 16, "text": "t \\mapsto \\Gamma gu_t" }, { "math_id": 17, "text": "\\Gamma" } ]
https://en.wikipedia.org/wiki?curid=11084460
11084869
Gravitational-wave observatory
Device used to measure gravitational waves A gravitational-wave detector (used in a gravitational-wave observatory) is any device designed to measure tiny distortions of spacetime called gravitational waves. Since the 1960s, various kinds of gravitational-wave detectors have been built and constantly improved. The present-day generation of laser interferometers has reached the necessary sensitivity to detect gravitational waves from astronomical sources, thus forming the primary tool of gravitational-wave astronomy. The first direct observation of gravitational waves was made in September 2015 by the Advanced LIGO observatories, detecting gravitational waves with wavelengths of a few thousand kilometers from a merging binary of stellar black holes. In June 2023, four pulsar timing array collaborations presented the first strong evidence for a gravitational wave background of wavelengths spanning light years, most likely from many binaries of supermassive black holes. Challenge. The direct detection of gravitational waves is complicated by the extraordinarily small effect the waves produce on a detector. The amplitude of a spherical wave falls off as the inverse of the distance from the source. Thus, even waves from extreme systems such as merging binary black holes die out to a very small amplitude by the time they reach the Earth. Astrophysicists predicted that some gravitational waves passing the Earth might produce differential motion on the order 10−18 m in a LIGO-size instrument. Resonant mass antennas. A simple device to detect the expected wave motion is called a resonant mass antenna – a large, solid body of metal isolated from outside vibrations. This type of instrument was the first type of gravitational-wave detector. Strains in space due to an incident gravitational wave excite the body's resonant frequency and could thus be amplified to detectable levels. Conceivably, a nearby supernova might be strong enough to be seen without resonant amplification. However, up to 2018, no gravitational wave observation that would have been widely accepted by the research community has been made on any type of resonant mass antenna, despite certain claims of observation by researchers operating the antennas. There are three types of resonant mass antenna that have been built: room-temperature bar antennas, cryogenically cooled bar antennas and cryogenically cooled spherical antennas. The earliest type was the room-temperature bar-shaped antenna called a Weber bar; these were dominant in 1960s and 1970s and many were built around the world. It was claimed by Weber and some others in the late 1960s and early 1970s that these devices detected gravitational waves; however, other experimenters failed to detect gravitational waves using them, and a consensus developed that Weber bars would not be a practical means to detect gravitational waves. The second generation of resonant mass antennas, developed in the 1980s and 1990s, were the cryogenic bar antennas which are also sometimes called Weber bars. In the 1990s there were five major cryogenic bar antennas: AURIGA (Padua, Italy), NAUTILUS (Rome, Italy), EXPLORER (CERN, Switzerland), ALLEGRO (Louisiana, US), and NIOBE (Perth, Australia). In 1997, these five antennas run by four research groups formed the International Gravitational Event Collaboration (IGEC) for collaboration. 
While there were several cases of unexplained deviations from the background signal, there were no confirmed instances of the observation of gravitational waves with these detectors. In the 1980s, there was also a cryogenic bar antenna called ALTAIR, which, along with a room-temperature bar antenna called GEOGRAV, was built in Italy as a prototype for later bar antennas. Operators of the GEOGRAV-detector claimed to have observed gravitational waves coming from the supernova SN1987A (along with another room-temperature bar antenna), but these claims were not adopted by the wider community. These modern cryogenic forms of the Weber bar operated with superconducting quantum interference devices to detect vibration (ALLEGRO, for example). Some of them continued in operation after the interferometric antennas started to reach astrophysical sensitivity, such as AURIGA, an ultracryogenic resonant cylindrical bar gravitational wave detector based at INFN in Italy. The AURIGA and LIGO teams collaborated in joint observations. In the 2000s, the third generation of resonant mass antennas, the spherical cryogenic antennas, emerged. Four spherical antennas were proposed around year 2000 and two of them were built as downsized versions, the others were cancelled. The proposed antennas were GRAIL (Netherlands, downsized to MiniGRAIL), TIGA (US, small prototypes made), SFERA (Italy), and Graviton (Brasil, downsized to Mario Schenberg). The two downsized antennas, MiniGRAIL and the Mario Schenberg, are similar in design and are operated as a collaborative effort. MiniGRAIL is based at Leiden University, and consists of an exactingly machined sphere cryogenically cooled to . The spherical configuration allows for equal sensitivity in all directions, and is somewhat experimentally simpler than larger linear devices requiring high vacuum. Events are detected by measuring deformation of the detector sphere. MiniGRAIL is highly sensitive in the 2–4 kHz range, suitable for detecting gravitational waves from rotating neutron star instabilities or small black hole mergers. It is the current consensus that current cryogenic resonant mass detectors are not sensitive enough to detect anything but extremely powerful (and thus very rare) gravitational waves. As of 2020, no detection of gravitational waves by cryogenic resonant antennas has occurred. Laser interferometers. A more sensitive detector uses laser interferometry to measure gravitational-wave induced motion between separated 'free' masses. This allows the masses to be separated by large distances (increasing the signal size); a further advantage is that it is sensitive to a wide range of frequencies (not just those near a resonance as is the case for Weber bars). Ground-based interferometers are now operational. Currently, the most sensitive ground-based laser interferometer is LIGO – the Laser Interferometer Gravitational Wave Observatory. LIGO is famous as the site of the first confirmed detections of gravitational waves in 2015. LIGO has two detectors: one in Livingston, Louisiana; the other at the Hanford site in Richland, Washington. Each consists of two light storage arms which are 4 km in length. These are at 90 degree angles to each other, with the light passing through diameter vacuum tubes running the entire . A passing gravitational wave will slightly stretch one arm as it shortens the other. This is precisely the motion to which a Michelson interferometer is most sensitive. 
Even with such long arms, the strongest gravitational waves will only change the distance between the ends of the arms by at most roughly 10−18 meters. LIGO should be able to detect gravitational waves as small as formula_0. Upgrades to LIGO and other detectors such as Virgo, GEO600, and TAMA 300 should increase the sensitivity further, and the next generation of instruments (Advanced LIGO Plus and Advanced Virgo Plus) will be more sensitive still. Another highly sensitive interferometer (KAGRA) began operations in 2020. A key point is that a ten-times increase in sensitivity (radius of "reach") increases the volume of space accessible to the instrument by one thousand. This increases the rate at which detectable signals should be seen from one per tens of years of observation, to tens per year. Interferometric detectors are limited at high frequencies by shot noise, which occurs because the lasers produce photons randomly. One analogy is to rainfall: the rate of rainfall, like the laser intensity, is measurable, but the raindrops, like photons, fall at random times, causing fluctuations around the average value. This leads to noise at the output of the detector, much like radio static. In addition, for sufficiently high laser power, the random momentum transferred to the test masses by the laser photons shakes the mirrors, masking signals at low frequencies. Thermal noise (e.g., Brownian motion) is another limit to sensitivity. In addition to these "stationary" (constant) noise sources, all ground-based detectors are also limited at low frequencies by seismic noise and other forms of environmental vibration, and other "non-stationary" noise sources; creaks in mechanical structures, lightning or other large electrical disturbances, etc. may also create noise masking an event or may even imitate an event. All these must be taken into account and excluded by analysis before a detection may be considered a true gravitational-wave event. Space-based interferometers, such as LISA and DECIGO, are also being developed. LISA's design calls for three test masses forming an equilateral triangle, with lasers from each spacecraft to each other spacecraft forming two independent interferometers. LISA is planned to occupy a solar orbit trailing the Earth, with each arm of the triangle being five million kilometers. This puts the detector in an excellent vacuum far from Earth-based sources of noise, though it will still be susceptible to shot noise, as well as artifacts caused by cosmic rays and solar wind. Einstein@Home. In some sense, the easiest signals to detect should be constant sources. Supernovae and neutron star or black hole mergers should have larger amplitudes and be more interesting, but the waves generated will be more complicated. The waves given off by a spinning, bumpy neutron star would be "monochromatic" – like a pure tone in acoustics. It would not change very much in amplitude or frequency. The Einstein@Home project is a distributed computing project similar to SETI@home intended to detect this type of simple gravitational wave. By taking data from LIGO and GEO, and sending it out in little pieces to thousands of volunteers for parallel analysis on their home computers, Einstein@Home can sift through the data far more quickly than would be possible otherwise. Pulsar timing arrays. 
A different approach to detecting gravitational waves is used by pulsar timing arrays, such as the European Pulsar Timing Array, the North American Nanohertz Observatory for Gravitational Waves, and the Parkes Pulsar Timing Array. These projects propose to detect gravitational waves by looking at the effect these waves have on the incoming signals from an array of 20–50 well-known millisecond pulsars. As a gravitational wave passing through the Earth contracts space in one direction and expands space in another, the times of arrival of pulsar signals from those directions are shifted correspondingly. By studying a fixed set of pulsars across the sky, these arrays should be able to detect gravitational waves in the nanohertz range. Such signals are expected to be emitted by pairs of merging supermassive black holes. In June 2023, four pulsar timing array collaborations, the three mentioned above and the Chinese Pulsar Timing Array, presented independent but similar evidence for a stochastic background of nanohertz gravitational waves. The source of this background could not yet be identified. Cosmic microwave background. The cosmic microwave background, radiation left over from when the Universe cooled sufficiently for the first atoms to form, can contain the imprint of gravitational waves from the very early Universe. The microwave radiation is polarized. The pattern of polarization can be split into two classes called "E"-modes and "B"-modes. This is in analogy to electrostatics where the electric field ("E"-field) has a vanishing curl and the magnetic field ("B"-field) has a vanishing divergence. The "E"-modes can be created by a variety of processes, but the "B"-modes can only be produced by gravitational lensing, gravitational waves, or scattering from dust. On 17 March 2014, astronomers at the Harvard-Smithsonian Center for Astrophysics announced the apparent detection of the imprint gravitational waves in the cosmic microwave background, which, if confirmed, would provide strong evidence for inflation and the Big Bang. However, on 19 June 2014, lowered confidence in confirming the findings was reported; and on 19 September 2014, even more lowered confidence. Finally, on 30 January 2015, the European Space Agency announced that the signal can be entirely attributed to dust in the Milky Way. Novel detector designs. There are currently two detectors focusing on detections at the higher end of the gravitational-wave spectrum (10−7 to 105 Hz): one at University of Birmingham, England, and the other at INFN Genoa, Italy. A third is under development at Chongqing University, China. The Birmingham detector measures changes in the polarization state of a microwave beam circulating in a closed loop about one meter across. Two have been fabricated and they are currently expected to be sensitive to periodic spacetime strains of formula_1, given as an amplitude spectral density. The INFN Genoa detector is a resonant antenna consisting of two coupled spherical superconducting harmonic oscillators a few centimeters in diameter. The oscillators are designed to have (when uncoupled) almost equal resonant frequencies. The system is currently expected to have a sensitivity to periodic spacetime strains of formula_2, with an expectation to reach a sensitivity of formula_3. The Chongqing University detector is planned to detect relic high-frequency gravitational waves with the predicted typical parameters ~ 1010 Hz (10 GHz) and "h" ~ 10−30 to 10−31. 
Levitated Sensor Detector is a proposed detector for gravitational waves with a frequency between 10 kHz and 300 kHz, potentially coming from primordial black holes. It will use optically-levitated dielectric particles in an optical cavity. A torsion-bar antenna (TOBA) is a proposed design composed of two, long, thin bars, suspended as torsion pendula in a cross-like fashion, in which the differential angle is sensitive to tidal gravitational wave forces. Detectors based on matter waves (atom interferometers) have also been proposed and are being developed. There have been proposals since the beginning of the 2000s. Atom interferometry is proposed to extend the detection bandwidth in the infrasound band (10 mHz – 10 Hz), where current ground based detectors are limited by low frequency gravity noise. A demonstrator project called "Matter wave laser based Interferometer Gravitation Antenna" (MIGA) started construction in 2018 in the underground environment of LSBB (Rustrel, France). List of gravitational wave detectors. Interferometers. Interferometric gravitational-wave detectors are often grouped into generations based on the technology used. The interferometric detectors deployed in the 1990s and 2000s were proving grounds for many of the foundational technologies necessary for initial detection and are commonly referred to as the first generation. The second generation of detectors operating in the 2010s, mostly at the same facilities like LIGO and Virgo, improved on these designs with sophisticated techniques such as cryogenic mirrors and the injection of squeezed vacuum. This led to the first unambiguous detection of a gravitational wave by Advanced LIGO in 2015. The third generation of detectors are currently in the planning phase, and seek to improve over the second generation by achieving greater detection sensitivity and a larger range of accessible frequencies. All these experiments involve many technologies under continuous development over multiple decades, so the categorization by generation is necessarily only rough. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "h \\approx 5\\times 10^{-22}" }, { "math_id": 1, "text": "h\\sim{2 \\times 10^{-13}/\\sqrt{\\mathit{Hz}}} " }, { "math_id": 2, "text": "h\\sim{2 \\times 10^{-17}/\\sqrt{\\mathit{Hz}}} " }, { "math_id": 3, "text": "h\\sim{2 \\times 10^{-20}/\\sqrt{\\mathit{Hz}}} " } ]
https://en.wikipedia.org/wiki?curid=11084869
11085278
Langlands decomposition
In mathematics, the Langlands decomposition writes a parabolic subgroup "P" of a semisimple Lie group as a product formula_0 of a reductive subgroup "M", an abelian subgroup "A", and a nilpotent subgroup "N". Applications. A key application is in parabolic induction, which leads to the Langlands program: if formula_1 is a reductive algebraic group and formula_0 is the Langlands decomposition of a parabolic subgroup "P", then parabolic induction consists of taking a representation of formula_2, extending it to formula_3 by letting formula_4 act trivially, and inducing the result from formula_3 to formula_1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "P=MAN" }, { "math_id": 1, "text": "G" }, { "math_id": 2, "text": "MA" }, { "math_id": 3, "text": "P" }, { "math_id": 4, "text": "N" } ]
https://en.wikipedia.org/wiki?curid=11085278
11085311
Anomaly matching condition
Principle in quantum field theory In quantum field theory, the anomaly matching condition by Gerard 't Hooft states that the calculation of any chiral anomaly for the flavor symmetry must not depend on what scale is chosen for the calculation if it is done by using the degrees of freedom of the theory at some energy scale. It is also known as the 't Hooft condition and the 't Hooft UV-IR anomaly matching condition. 't Hooft anomalies. There are two closely related but different types of obstructions to formulating a quantum field theory that are both called anomalies: chiral, or "Adler–Bell–Jackiw" anomalies, and 't Hooft anomalies. If we say that the symmetry of the theory has a "'t Hooft anomaly", it means that the symmetry is exact as a global symmetry of the quantum theory, but there is some impediment to using it as a gauge in the theory. As an example of a 't Hooft anomaly, we consider quantum chromodynamics with formula_0 massless fermions: This is the formula_1 gauge theory with formula_0 massless Dirac fermions. This theory has the global symmetry formula_2, which is often called the flavor symmetry, and this has a 't Hooft anomaly. Anomaly matching for continuous symmetry. The anomaly matching condition by G. 't Hooft proposes that a 't Hooft anomaly of continuous symmetry can be computed both in the high-energy and low-energy degrees of freedom (“UV” and “IR”) and give the same answer. Example. For example, consider the quantum chromodynamics with "N"f massless quarks. This theory has the flavor symmetry formula_2 This flavor symmetry formula_2 becomes anomalous when the background gauge field is introduced. One may use either the degrees of freedom at the far low energy limit (far “IR” ) or the degrees of freedom at the far high energy limit (far “UV”) in order to calculate the anomaly. In the former case one should only consider massless fermions or Nambu–Goldstone bosons which may be composite particles, while in the latter case one should only consider the elementary fermions of the underlying short-distance theory. In both cases, the answer must be the same. Indeed, in the case of QCD, the chiral symmetry breaking occurs and the Wess–Zumino–Witten term for the Nambu–Goldstone bosons reproduces the anomaly. Proof. One proves this condition by the following procedure: we may add to the theory a gauge field which couples to the current related with this symmetry, as well as chiral fermions which couple only to this gauge field, and cancel the anomaly (so that the gauge symmetry will remain non-anomalous, as needed for consistency). In the limit where the coupling constants we have added go to zero, one gets back to the original theory, plus the fermions we have added; the latter remain good degrees of freedom at every energy scale, as they are free fermions at this limit. The gauge symmetry anomaly can be computed at any energy scale, and must always be zero, so that the theory is consistent. One may now get the anomaly of the symmetry in the original theory by subtracting the free fermions we have added, and the result is independent of the energy scale. Alternative proof. Another way to prove the anomaly matching for continuous symmetries is to use the anomaly inflow mechanism. To be specific, we consider four-dimensional spacetime in the following. For global continuous symmetries formula_3, we introduce the background gauge field formula_4 and compute the effective action formula_5. 
If there is a 't Hooft anomaly for formula_3, the effective action formula_5 is not invariant under the formula_3 gauge transformation on the background gauge field formula_4 and it cannot be restored by adding any four-dimensional local counter terms of formula_4. Wess–Zumino consistency condition shows that we can make it gauge invariant by adding the five-dimensional Chern–Simons action. With the extra dimension, we can now define the effective action formula_5 by using the low-energy effective theory that only contains the massless degrees of freedom by integrating out massive fields. Since it must be again gauge invariant by adding the same five-dimensional Chern–Simons term, the 't Hooft anomaly does not change by integrating out massive degrees of freedom. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "N_f" }, { "math_id": 1, "text": "SU(N_c)" }, { "math_id": 2, "text": "SU(N_f)_L\\times SU(N_f)_R\\times U(1)_V" }, { "math_id": 3, "text": "G" }, { "math_id": 4, "text": "A" }, { "math_id": 5, "text": "\\Gamma[A]" } ]
https://en.wikipedia.org/wiki?curid=11085311
11085324
Free-energy perturbation
Method in computational chemistry Free-energy perturbation (FEP) is a method based on statistical mechanics that is used in computational chemistry for computing free-energy differences from molecular dynamics or Metropolis Monte Carlo simulations. The FEP method was introduced by Robert W. Zwanzig in 1954. According to the free-energy perturbation method, the free-energy difference for going from state A to state B is obtained from the following equation, known as the "Zwanzig equation": formula_0 where "T" is the temperature, "k"B is the Boltzmann constant, and the angular brackets denote an average over a simulation run for state A. In practice, one runs a normal simulation for state A, but each time a new configuration is accepted, the energy for state B is also computed. The difference between states A and B may be in the atom types involved, in which case the Δ"F" obtained is for "mutating" one molecule onto another, or it may be a difference of geometry, in which case one obtains a free-energy map along one or more reaction coordinates. This free-energy map is also known as a "potential of mean force" (PMF). Free-energy perturbation calculations only converge properly when the difference between the two states is small enough; therefore it is usually necessary to divide a perturbation into a series of smaller "windows", which are computed independently. Since there is no need for constant communication between the simulation for one window and the next, the process can be trivially parallelized by running each window on a different CPU, in what is known as an "embarrassingly parallel" setup. Application. FEP calculations have been used for studying host–guest binding energetics, pKa predictions, solvent effects on reactions, and enzymatic reactions. Other applications are the virtual screening of ligands in drug discovery, "in silico" mutagenesis studies and antibody affinity maturation. For the study of reactions it is often necessary to involve a quantum-mechanical (QM) representation of the reaction center because the molecular mechanics (MM) force fields used for FEP simulations cannot handle breaking bonds. A hybrid method that has the advantages of both QM and MM calculations is called QM/MM. Umbrella sampling is another free-energy calculation technique that is typically used for calculating the free-energy change associated with a change in "position" coordinates as opposed to "chemical" coordinates, although umbrella sampling can also be used for a chemical transformation when the "chemical" coordinate is treated as a dynamic variable (as in the case of the Lambda dynamics approach of Kong and Brooks). An alternative to free-energy perturbation for computing potentials of mean force in chemical space is thermodynamic integration. Another alternative, which is probably more efficient, is the Bennett acceptance ratio method. Adaptations to FEP exist which attempt to apportion free-energy changes to subsections of the chemical structure. Software. Several software packages have been developed to help perform FEP calculations. Below is a short list of some of the most common programs: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n \\Delta F(\\mathbf{A} \\to \\mathbf{B}) =\n F_\\mathbf{B} - F_\\mathbf{A} =\n -k_\\text{B} T \\ln \\left\\langle \\exp\\left(-\\frac{E_\\text{B} - E_\\mathbf{A}}{k_\\text{B}T} \\right) \\right\\rangle_\\mathbf{A},\n" } ]
https://en.wikipedia.org/wiki?curid=11085324
1108598
PAQ
PAQ is a series of lossless data compression archivers that have gone through collaborative development to top rankings on several benchmarks measuring compression ratio (although at the expense of speed and memory usage). Specialized versions of PAQ have won the Hutter Prize and the Calgary Challenge. PAQ is free software distributed under the GNU General Public License. Algorithm. PAQ uses a context mixing algorithm. Context mixing is related to prediction by partial matching (PPM) in that the compressor is divided into a predictor and an arithmetic coder, but differs in that the next-symbol prediction is computed using a weighted combination of probability estimates from a large number of models conditioned on different contexts. Unlike PPM, a context doesn't need to be contiguous. Most PAQ versions collect next-symbol statistics for the following contexts: All PAQ versions predict and compress one bit at a time, but differ in the details of the models and how the predictions are combined and postprocessed. Once the next-bit probability is determined, it is encoded by arithmetic coding. There are three methods for combining predictions, depending on the version: PAQ1SSE and later versions postprocess the prediction using secondary symbol estimation (SSE). The combined prediction and a small context are used to look up a new prediction in a table. After the bit is encoded, the table entry is adjusted to reduce the prediction error. SSE stages can be pipelined with different contexts or computed in parallel with the outputs averaged. Arithmetic coding. A string "s" is compressed to the shortest byte string representing a base-256 big-endian number "x" in the range [0, 1] such that P("r" &lt; "s") ≤ "x" &lt; P("r" ≤ "s"), where P("r" &lt; "s") is the probability that a random string "r" with the same length as "s" will be lexicographically less than "s". It is always possible to find an "x" such that the length of "x" is at most one byte longer than the Shannon limit, −log2P("r" = "s") bits. The length of "s" is stored in the archive header. The arithmetic coder in PAQ is implemented by maintaining for each prediction a lower and upper bound on "x", initially [0, 1]. After each prediction, the current range is split into two parts in proportion to P(0) and P(1), the probability that the next bit of "s" will be a 0 or 1 respectively, given the previous bits of "s". The next bit is then encoded by selecting the corresponding subrange to be the new range. The number "x" is decompressed back to string "s" by making an identical series of bit predictions (since the previous bits of "s" are known). The range is split as with compression. The portion containing "x" becomes the new range, and the corresponding bit is appended to "s". In PAQ, the lower and upper bounds of the range are represented in 3 parts. The most significant base-256 digits are identical, so they can be written as the leading bytes of "x". The next 4 bytes are kept in memory, such that the leading byte is different. The trailing bits are assumed to be all zeros for the lower bound and all ones for the upper bound. Compression is terminated by writing one more byte from the lower bound. Adaptive model weighting. In PAQ versions through PAQ6, each model maps a set of distinct contexts to a pair of counts, formula_1, a count of zero bits, and formula_2, a count of 1 bits. In order to favor recent history, half of the count over 2 is discarded when the opposite bit is observed. 
For example, if the current state associated with a context is formula_3 and a 1 is observed, then the counts are updated to (7, 4). A bit is arithmetically coded with space proportional to its probability, either P(1) or P(0) = 1 − P(1). The probabilities are computed by weighted addition of the 0 and 1 counts: where "wi" is the weight of the "i"-th model. Through PAQ3, the weights were fixed and set in an ad-hoc manner. (Order-"n" contexts had a weight of "n"2.) Beginning with PAQ4, the weights were adjusted adaptively in the direction that would reduce future errors in the same context set. If the bit to be coded is "y", then the weight adjustment is: Neural-network mixing. Beginning with PAQ7, each model outputs a prediction (instead of a pair of counts). These predictions are averaged in the logistic domain: where P(1) is the probability that the next bit will be a 1, P"i"(1) is the probability estimated by the "i"-th model, and After each prediction, the model is updated by adjusting the weights to minimize coding cost: where η is the learning rate (typically 0.002 to 0.01), "y" is the predicted bit, and ("y" − P(1)) is the prediction error. The weight update algorithm differs from backpropagation in that the terms P(1)P(0) are dropped. This is because the goal of the neural network is to minimize coding cost, not root mean square error. Most versions of PAQ use a small context to select among sets of weights for the neural network. Some versions use multiple networks whose outputs are combined with one more network prior to the SSE stages. Furthermore, for each input prediction there may be several inputs which are nonlinear functions of P"i"(1) in addition to stretch(P(1)). Context modeling. Each model partitions the known bits of "s" into a set of contexts and maps each context to a bit history represented by an 8-bit state. In versions through PAQ6, the state represents a pair of counters ("n"0, "n"1). In PAQ7 and later versions under certain conditions, the state also represents the value of the last bit or the entire sequence. The states are mapped to probabilities using a 256-entry table for each model. After a prediction by the model, the table entry is adjusted slightly (typically by 0.4%) to reduce the prediction error. In all PAQ8 versions, the representable states are as follows: To keep the number of states to 256, the following limits are placed on the representable counts: (41, 0), (40, 1), (12, 2), (5, 3), (4, 4), (3, 5), (2, 12), (1, 40), (0, 41). If a count exceeds this limit, then the next state is one chosen to have a similar ratio of "n"0 to "n"1. Thus, if the current state is ("n"0 = 4, "n"1 = 4, last bit = 0) and a 1 is observed, then the new state is not ("n"0 = 4, "n"1 = 5, last bit = 1). Rather, it is ("n"0 = 3, n1 = 4, last bit = 1). Most context models are implemented as hash tables. Some small contexts are implemented as direct lookup tables. Text preprocessing. Some versions of PAQ, in particular PAsQDa, PAQAR (both PAQ6 derivatives), and PAQ8HP1 through PAQ8HP8 (PAQ8 derivatives and Hutter prize recipients) preprocess text files by looking up words in an external dictionary and replacing them with 1- to 3-byte codes. In addition, uppercase letters are encoded with a special character followed by the lowercase letter. In the PAQ8HP series, the dictionary is organized by grouping syntactically and semantically related words together. This allows models to use just the most significant bits of the dictionary codes as context. Comparison. 
The following table is a sample from the Large Text Compression Benchmark by Matt Mahoney that consists of a file consisting of 109 bytes (1 GB, or 0.931 GiB) of English Wikipedia text. See Lossless compression benchmarks for a list of file compression benchmarks. History. The following lists the major enhancements to the PAQ algorithm. In addition, there have been a large number of incremental improvements, which are omitted. Hutter Prizes. The series PAQ8HP1 through PAQ8HP8 were released by Alexander Ratushnyak from August 21, 2006 through January 18, 2007 as Hutter Prize submissions. The Hutter Prize is a text compression contest using a 100 MB English and XML data set derived from Wikipedia's source. The PAQ8HP series was forked from PAQ8H. The programs include text preprocessing dictionaries and models tuned specifically to the benchmark. All non-text models were removed. The dictionaries were organized to group syntactically and semantically related words and to group words by common suffix. The former strategy improves compression because related words (which are likely to appear in similar context) can be modeled on the high order bits of their dictionary codes. The latter strategy makes the dictionary easier to compress. The size of the decompression program and compressed dictionary is included in the contest ranking. On October 27, 2006, it was announced that PAQ8HP5 won a Hutter Prize for Lossless Compression of Human Knowledge of €3,416. On June 30, 2007, Ratushnyak's PAQ8HP12 was awarded a second Hutter prize of €1732, improving upon his previous record by 3.46%. PAQ derivations. Being free software, PAQ can be modified and redistributed by anyone who has a copy. This has allowed other authors to fork the PAQ compression engine and add new features such as a graphical user interface or better speed (at the expense of compression ratio). Notable PAQ derivatives include: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "(n_0, n_1)" }, { "math_id": 1, "text": "n_0" }, { "math_id": 2, "text": "n_1" }, { "math_id": 3, "text": "(n_0,n_1) = (12,3)" } ]
https://en.wikipedia.org/wiki?curid=1108598
11087133
Holding value
In the field of financial economics, Holding value is an indicator of a theoretical value of an asset that someone has in their portfolio. It is a value which sums the impacts of all the dividends that would be given to the holder in the future, to help them estimate a price to buy or sell assets. Expression. The following formula gives the holding value (HV) for a period beginning at i through the period n. formula_0 where div = dividend r = interest rate (of the money if it is kept at the bank; e.g., 0.02 or 2%) i = the period at the beginning of the estimation n = the last period considered in the window of future dividends. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "{HV}_{[i,n]}=\\sum_{k=0}^{n-i}\\frac{div(i+k)}{{(1+r)}^{n-i-k}}" } ]
https://en.wikipedia.org/wiki?curid=11087133
1108755
National saving
Sum of a country's private and public saving In economics, a country's national saving is the sum of private and public saving.187 It equals a nation's income minus consumption and the government spending.174 Economic model. Closed economy with public deficit or surplus possible. In this simple economic model with a closed economy there are three uses for GDP (the goods and services it produces in a year). If Y is national income (GDP), then the three uses of C consumption, I investment, and G government purchases can be expressed as: National saving can be thought of as the amount of remaining income that is not consumed, or spent by government. In a simple model of a closed economy, anything that is not spent is assumed to be invested: National saving can be split into private saving and public saving. Denoting T for taxes paid by consumers that go directly to the government and TR for transfers paid by the government to the consumers as shown here: (Y − T + TR) is disposable income whereas (Y − T + TR − C) is private saving. Public saving, also known as the budget surplus, is the term (T − G − TR), which is government revenue through taxes, minus government expenditures on goods and services, minus transfers. Thus we have that private plus public saving equals investment. The interest rate plays the important role of creating an equilibrium between saving S and investment in neoclassical economics. where the interest rate r affects saving positively and affects physical investment negatively. Open economy with balanced public spending. In an open economic model international trade is introduced. Therefore the current account is split into exports and imports: The net exports is the part of GDP which is not consumed by domestic demand: If we transform the identity for net exports by subtracting consumption, investment and government spending we get the national accounts identity: The national saving is the part of the GDP which is not consumed or spent by the government. Therefore the difference between the national saving and the investment is equal to the net exports: Open economy with public deficit or surplus. The government budget can be directly introduced into the model. We consider now an open economic model with public deficits or surpluses. Therefore the budget is split into revenues, which are the taxes (T), and the spendings, which are transfers (TR) and government spendings (G). Revenue minus spending results in the public (governmental) saving: The disposable income of the households is the income Y minus the taxes net of transfers: Disposable income can only be used for saving or for consumption: where the subscript P denotes the private sector. Therefore private saving in this model equals the disposable income of the households minus consumption: By this equation the private saving can be written as: and the national accounts as: Once this equation is used in Y=C+I+G+X-M we obtain By one transformation we get the determination of net exports and investment by private and public saving: By another transformation we get the sectoral balances of the economy as developed by Wynne Godley. This corresponds approximately to Balances Mechanics developed by Wolfgang Stützel: References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Y = C + I + G" }, { "math_id": 1, "text": "\\text{National Saving} = Y + I - C - G " }, { "math_id": 2, "text": "(Y - T + TR - C) + (T - G - TR) = I" }, { "math_id": 3, "text": "S(r)=I(r)" }, { "math_id": 4, "text": " \\text{Net eXports} = NX = \\text{eXports} - \\text{iMports} = X - M " }, { "math_id": 5, "text": " NX = Y - ( C + I + G ) = Y - \\text{Domestic demand} " }, { "math_id": 6, "text": " Y = C + I + G + NX " }, { "math_id": 7, "text": " Y - C - G = S = I + NX " }, { "math_id": 8, "text": " S-I = NX " }, { "math_id": 9, "text": " S_G = T - G - TR " }, { "math_id": 10, "text": " Y_d = Y - T + TR " }, { "math_id": 11, "text": " Y_d = C + S_P " }, { "math_id": 12, "text": " S_P = Y_d - C " }, { "math_id": 13, "text": " S_P = Y - T + TR - C " }, { "math_id": 14, "text": " Y = S_P + C + T - TR " }, { "math_id": 15, "text": " C + I + G + (X - M) = S(P) + C + T - TR " }, { "math_id": 16, "text": " S_P + S_G = I + (X - M) " }, { "math_id": 17, "text": " (S_P - I) + S_G = (X - M) " } ]
https://en.wikipedia.org/wiki?curid=1108755
1108758
Hasse's theorem on elliptic curves
Estimates the number of points on an elliptic curve over a finite field Hasse's theorem on elliptic curves, also referred to as the Hasse bound, provides an estimate of the number of points on an elliptic curve over a finite field, bounding the value both above and below. If "N" is the number of points on the elliptic curve "E" over a finite field with "q" elements, then Hasse's result states that formula_0 The reason is that "N" differs from "q" + 1, the number of points of the projective line over the same field, by an 'error term' that is the sum of two complex numbers, each of absolute value formula_1 This result had originally been conjectured by Emil Artin in his thesis. It was proven by Hasse in 1933, with the proof published in a series of papers in 1936. Hasse's theorem is equivalent to the determination of the absolute value of the roots of the local zeta-function of "E". In this form it can be seen to be the analogue of the Riemann hypothesis for the function field associated with the elliptic curve. Hasse–Weil Bound. A generalization of the Hasse bound to higher genus algebraic curves is the Hasse–Weil bound. This provides a bound on the number of points on a curve over a finite field. If the number of points on the curve "C" of genus "g" over the finite field formula_2 of order "q" is formula_3, then formula_4 This result is again equivalent to the determination of the absolute value of the roots of the local zeta-function of "C", and is the analogue of the Riemann hypothesis for the function field associated with the curve. The Hasse–Weil bound reduces to the usual Hasse bound when applied to elliptic curves, which have genus "g=1". The Hasse–Weil bound is a consequence of the Weil conjectures, originally proposed by André Weil in 1949 and proved by André Weil in the case of curves. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "|N - (q+1)| \\le 2 \\sqrt{q}." }, { "math_id": 1, "text": "\\sqrt{q}." }, { "math_id": 2, "text": "\\mathbb{F}_q" }, { "math_id": 3, "text": "\\#C(\\mathbb{F}_q)" }, { "math_id": 4, "text": "|\\#C(\\mathbb{F}_q) - (q+1)| \\le 2g \\sqrt{q}." } ]
https://en.wikipedia.org/wiki?curid=1108758
1108795
Hasse norm theorem
In cyclic extension of number fields, if k is a local norm everywhere, it is a global norm In number theory, the Hasse norm theorem states that if L/K is a cyclic extension of number fields, then if a nonzero element of K is a local norm everywhere, then it is a global norm. Here to be a global norm means to be an element "k" of K such that there is an element "l" of L with formula_0; in other words "k" is a relative norm of some element of the extension field L. To be a local norm means that for some prime p of K and some prime P of L lying over K, then "k" is a norm from LP; here the "prime" p can be an archimedean valuation, and the theorem is a statement about completions in all valuations, archimedean and non-archimedean. The theorem is no longer true in general if the extension is abelian but not cyclic. Hasse gave the counterexample that 3 is a local norm everywhere for the extension formula_1 but is not a global norm. Serre and Tate showed that another counterexample is given by the field formula_2 where every rational square is a local norm everywhere but formula_3 is not a global norm. This is an example of a theorem stating a local-global principle. The full theorem is due to Hasse (1931). The special case when the degree "n" of the extension is 2 was proved by , and the special case when "n" is prime was proved by Furtwangler in 1902. The Hasse norm theorem can be deduced from the theorem that an element of the Galois cohomology group H2("L"/"K") is trivial if it is trivial locally everywhere, which is in turn equivalent to the deep theorem that the first cohomology of the idele class group vanishes. This is true for all finite Galois extensions of number fields, not just cyclic ones. For cyclic extensions the group H2("L"/"K") is isomorphic to the Tate cohomology group H0("L"/"K") which describes which elements are norms, so for cyclic extensions it becomes Hasse's theorem that an element is a norm if it is a local norm everywhere.
[ { "math_id": 0, "text": "\\mathbf{N}_{L/K}(l) = k" }, { "math_id": 1, "text": "{\\mathbf Q}(\\sqrt{-3},\\sqrt{13})/{\\mathbf Q}" }, { "math_id": 2, "text": "{\\mathbf Q}(\\sqrt{13},\\sqrt{17})/{\\mathbf Q}" }, { "math_id": 3, "text": "5^2" } ]
https://en.wikipedia.org/wiki?curid=1108795
11092492
Amalgamation property
Concept in model theory In the mathematical field of model theory, the amalgamation property is a property of collections of structures that guarantees, under certain conditions, that two structures in the collection can be regarded as substructures of a larger one. This property plays a crucial role in Fraïssé's theorem, which characterises classes of finite structures that arise as ages of countable homogeneous structures. The diagram of the amalgamation property appears in many areas of mathematical logic. Examples include in modal logic as an incestual accessibility relation, and in lambda calculus as a manner of reduction having the Church–Rosser property. Definition. An "amalgam" can be formally defined as a 5-tuple ("A,f,B,g,C") such that "A,B,C" are structures having the same signature, and "f: A" → "B, g": "A" → "C" are "embeddings". Recall that "f: A" → "B" is an "embedding" if "f" is an injective morphism which induces an isomorphism from "A" to the substructure "f(A)" of "B". A class "K" of structures has the amalgamation property if for every amalgam with "A,B,C" ∈ "K" and "A" ≠ Ø, there exist both a structure "D" ∈ "K" and embeddings "f':" "B" → "D, g':" "C" → "D" such that formula_0 A first-order theory formula_1 has the amalgamation property if the class of models of formula_1 has the amalgamation property. The amalgamation property has certain connections to the quantifier elimination. In general, the amalgamation property can be considered for a category with a specified choice of the class of morphisms (in place of embeddings). This notion is related to the categorical notion of a pullback, in particular, in connection with the strong amalgamation property (see below). Examples. A similar but different notion to the amalgamation property is the joint embedding property. To see the difference, first consider the class "K" (or simply the set) containing three models with linear orders, "L"1 of size one, "L"2 of size two, and "L"3 of size three. This class "K" has the joint embedding property because all three models can be embedded into "L"3. However, "K" does not have the amalgamation property. The counterexample for this starts with "L"1 containing a single element "e" and extends in two different ways to "L"3, one in which "e" is the smallest and the other in which "e" is the largest. Now any common model with an embedding from these two extensions must be at least of size five so that there are two elements on either side of "e". Now consider the class of algebraically closed fields. This class has the amalgamation property since any two field extensions of a prime field can be embedded into a common field. However, two arbitrary fields cannot be embedded into a common field when the characteristic of the fields differ. Strong amalgamation property. A class "K" of structures has the "strong amalgamation property" (SAP), also called the "disjoint amalgamation property" (DAP), if for every amalgam with "A,B,C" ∈ "K" there exist both a structure "D" ∈ "K" and embeddings "f':" "B" → "D, g': C" → "D" such that formula_3 and formula_4 where for any set "X" and function "h" on "X," formula_5 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "f'\\circ f = g' \\circ g. \\, " }, { "math_id": 1, "text": "T" }, { "math_id": 2, "text": "B*C/A" }, { "math_id": 3, "text": "f' \\circ f = g' \\circ g \\, " }, { "math_id": 4, "text": "f '[B] \\cap g '[C] = (f ' \\circ f)[A] = (g ' \\circ g)[A] \\, " }, { "math_id": 5, "text": "h \\lbrack X \\rbrack = \\lbrace h(x) \\mid x \\in X \\rbrace. \\, " } ]
https://en.wikipedia.org/wiki?curid=11092492
11095324
Integro-differential equation
Equation involving both integrals and derivatives of a function In mathematics, an integro-differential equation is an equation that involves both integrals and derivatives of a function. General first order linear equations. The general first-order, linear (only with respect to the term involving derivative) integro-differential equation is of the form formula_0 As is typical with differential equations, obtaining a closed-form solution can often be difficult. In the relatively few cases where a solution can be found, it is often by some kind of integral transform, where the problem is first transformed into an algebraic setting. In such situations, the solution of the problem may be derived by applying the inverse transform to the solution of this algebraic equation. Example. Consider the following second-order problem, formula_1 where formula_2 is the Heaviside step function. The Laplace transform is defined by, formula_3 Upon taking term-by-term Laplace transforms, and utilising the rules for derivatives and integrals, the integro-differential equation is converted into the following algebraic equation, formula_4 Thus, formula_5. Inverting the Laplace transform using contour integral methods then gives formula_6. Alternatively, one can complete the square and use a table of Laplace transforms ("exponentially decaying sine wave") or recall from memory to proceed: formula_7. Applications. Integro-differential equations model many situations from science and engineering, such as in circuit analysis. By Kirchhoff's second law, the net voltage drop across a closed loop equals the voltage impressed formula_8. (It is essentially an application of energy conservation.) An RLC circuit therefore obeys formula_9 where formula_10 is the current as a function of time, formula_11 is the resistance, formula_12 the inductance, and formula_13 the capacitance. The activity of interacting "inhibitory" and "excitatory" neurons can be described by a system of integro-differential equations, see for example the Wilson-Cowan model. The Whitham equation is used to model nonlinear dispersive waves in fluid dynamics. Epidemiology. Integro-differential equations have found applications in epidemiology, the mathematical modeling of epidemics, particularly when the models contain age-structure or describe spatial epidemics. The Kermack-McKendrick theory of infectious disease transmission is one particular example where age-structure in the population is incorporated into the modeling framework. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\frac{d}{dx}u(x) + \\int_{x_0}^x f(t,u(t))\\,dt = g(x,u(x)), \\qquad u(x_0) = u_0, \\qquad x_0 \\ge 0.\n" }, { "math_id": 1, "text": "\nu'(x) + 2u(x) + 5\\int_{0}^{x}u(t)\\,dt = \\theta(x)\n \\qquad \\text{with} \\qquad u(0)=0,\n" }, { "math_id": 2, "text": "\n \\theta(x) = \\left\\{ \\begin{array}{ll}\n 1, \\qquad x \\geq 0\\\\\n 0, \\qquad x < 0 \\end{array} \n\\right.\n" }, { "math_id": 3, "text": " U(s) = \\mathcal{L} \\left\\{u(x)\\right\\}=\\int_0^{\\infty} e^{-sx} u(x) \\,dx. " }, { "math_id": 4, "text": " s U(s) - u(0) + 2U(s) + \\frac{5}{s}U(s) = \\frac{1}{s}. " }, { "math_id": 5, "text": " U(s) = \\frac{1}{s^2 + 2s + 5} " }, { "math_id": 6, "text": " u(x) = \\frac{1}{2} e^{-x} \\sin(2x) \\theta(x) " }, { "math_id": 7, "text": " U(s) = \\frac{1}{s^2 + 2s + 5} = \\frac{1}{2} \\frac{2}{(s+1)^2+4} \\Rightarrow u(x) = \\mathcal L^{-1}\\left\\{ U(s) \\right\\} = \\frac{1}{2} e^{-x} \\sin(2x) \\theta(x) " }, { "math_id": 8, "text": " E(t) " }, { "math_id": 9, "text": " L \\frac{d}{dt}I(t) + RI(t) + \\frac{1}{C} \\int_{0}^{t} I(\\tau) d\\tau = E(t), " }, { "math_id": 10, "text": "I(t)" }, { "math_id": 11, "text": "R" }, { "math_id": 12, "text": "L" }, { "math_id": 13, "text": "C" } ]
https://en.wikipedia.org/wiki?curid=11095324
11095361
Umdeutung paper
Scientific article by Werner Heisenberg In the history of physics, "On the quantum-theoretical reinterpretation of kinematical and mechanical relationships" (), also known as the Umdeutung (reinterpretation) paper, was a breakthrough article in quantum mechanics written by Werner Heisenberg, which appeared in "Zeitschrift für Physik" in September 1925. In the article, Heisenberg tried to explain the energy levels of a one-dimensional anharmonic oscillator, avoiding the concrete but unobservable representations of electron orbits by using observable parameters such as transition probabilities for quantum jumps, which necessitated using two indexes corresponding to the initial and final states. Mathematically, Heisenberg showed the need of non-commutative operators. This insight would later become the basis for Heisenberg's uncertainty principle. This article was followed by the paper by Pascual Jordan and Max Born of the same year, and by the 'three-man paper' () by Born, Heisenberg and Jordan in 1926. These articles laid the groundwork for matrix mechanics that would come to substitute old quantum theory, leading to the modern quantum mechanics. Heisenberg received the Nobel Prize in Physics in 1932 for his work on developing quantum mechanics. Historical context. Heisenberg was 23 years old when he worked on the article while recovering from hay fever on the island of Heligoland, corresponding with Wolfgang Pauli on the subject. When asked for his opinion of the manuscript, Pauli responded favorably, but Heisenberg said that he was still "very uncertain about it". In July 1925, he sent the manuscript to Max Born to review and decide whether to submit it for publication. When Born read the article, he recognized the formulation as one which could be transcribed and extended to the systematic language of matrices. Born, with the help of his assistant and former student Pascual Jordan, began immediately to make the transcription and extension, and they submitted their results for publication; their manuscript was received for publication just 60 days after Heisenberg’s article. A follow-on article by all three authors extending the theory to multiple dimensions was submitted for publication before the end of the year. Heisenberg determined to base his quantum mechanics "exclusively upon relationships between quantities that in principle are observable." He observed that one could not then use any statements about such things as "the position and period of revolution of the electron." Rather, to make true progress in understanding the radiation of the simplest case, the radiation of excited hydrogen atoms, one had measurements only of the frequencies and the intensities of the hydrogen bright-line spectrum to work with. In classical physics, the intensity of each frequency of light produced in a radiating system is equal to the square of the amplitude of the radiation at that frequency, so attention next fell on amplitudes. The classical equations that Heisenberg hoped to use to form quantum theoretical equations would first yield the amplitudes, and in classical physics one could compute the intensities simply by squaring the amplitudes. But Heisenberg saw that "the simplest and most natural assumption would be" to follow the lead provided by recent work in computing light dispersion done by Hans Kramers. 
The work he had done assisting Kramers in the previous year now gave him an important clue about how to model what happened to excited hydrogen gas when it radiated light and what happened when incoming radiation of one frequency excited atoms in a dispersive medium and then the energy delivered by the incoming light was re-radiated – sometimes at the original frequency but often at two lower frequencies the sum of which equalled the original frequency. According to their model, an electron that had been driven to a higher energy state by accepting the energy of an incoming photon might return in one step to its equilibrium position, re-radiating a photon of the same frequency, or it might return in more than one step, radiating one photon for each step in its return to its equilibrium state. Because of the way factors cancel out in deriving the new equation based on these considerations, the result turns out to be relatively simple. Also included in the manuscript was the "Heisenberg commutator", his law of multiplication needed to describe certain properties of atoms, whereby the product of two physical quantities did not commute. Therefore, "PQ" would differ from "QP" where, for example, "P" was an electron's momentum, and "Q" its position. Paul Dirac, who had received a proof copy in August 1925, realized that the commutative law had not been fully developed, and he produced an algebraic formulation to express the same results in more logical form. Heisenberg's multiplication rule. By means of an intense series of mathematical analogies that some physicists have termed "magical," Heisenberg wrote out an equation that is the quantum mechanical analog for the classical computation of intensities. The equation below appears in the paper. Its general form is as follows: formula_0 This general format indicates that some term C is to be computed by summing up all of the products of some group of terms A by some related group of terms B. There will potentially be an infinite series of A terms and their matching B terms. Each of these multiplications has as its factors two measurements that pertain to sequential downward transitions between energy states of an electron. This type of rule differentiates matrix mechanics from the kind of physics familiar in everyday life because the important values are where (in what energy state or "orbital") the electron begins and in what energy state it ends, not what the electron is doing while in one or another state. If A and B both refer to lists of frequencies, for instance, the calculation proceeds as follows: Multiply the frequency for a change of energy from state n to state "n" - "a" by the frequency for a change of energy from state n-a to state n-b. and to that add the product found by multiplying the frequency for a change of energy from state n-a to state n-b by the frequency for a change of energy from state n-b to state n-c, and so forth. Symbolically, that is: formula_1 It would be easy to perform each individual step of this process for some measured quantity. For instance, the boxed formula at the head of this article gives each needed wavelength in sequence. The values calculated could very easily be filled into a grid as described below. However, since the series is infinite, nobody could do the entire set of calculations. Heisenberg originally devised this equation to enable himself to multiply two measurements of the same kind (amplitudes), so it happened not to matter in which order they were multiplied. 
Heisenberg noticed, however that if he tried to use the same schema to multiply two variables, such as momentum, "p", and displacement, "q", then "a significant difficulty arises." It turns out that multiplying a matrix of "p" by a matrix of "q" gives a different result from multiplying a matrix of "q" by a matrix of "p". It only made a tiny bit of difference, but that difference could never be reduced below a certain limit, and that limit involved Planck's constant, "h". More on that later. Below is a very short sample of what the calculations would be, placed into grids that are called matrices. Heisenberg's teacher saw almost immediately that his work should be expressed in a matrix format because mathematicians already were familiar with how to do computations involving matrices in an efficient way. (Since Heisenberg was interested in photon radiation, the illustrations will be given in terms of electrons going from a higher energy level to a lower level, e.g., "n" ← "n"-1, instead of going from a lower level to a higher level, e.g., "n"→"n"-1) formula_2 (equation for the conjugate variables momentum and position) Matrix of "p" Matrix of "q" The matrix for the product of the above two matrices as specified by the relevant equation in the "Umdeutung" paper is Where &lt;templatestyles src="Block indent/styles.css"/&gt;"A" = "p"("n︎" ← "n" - "a")*"q"("n" - "a︎" ← "n" - "b") + "p"("n︎" ← "n" - "b")*"q"("n" - "b︎" ← "n" - "b") + "p"("n︎" ← "n" - "c")*"q"("n" - "c︎" ← "n" - "b") + ... &lt;templatestyles src="Block indent/styles.css"/&gt;"B" = "p"("n" - "a︎" ← "n" - "a)*q(n" - "a︎" ← "n" - "c") + "p"("n" - "a︎" ← "n" - "b")*"q"("n" - "b︎" ← "n" - "c") + "p"("n" - "a︎" ← "n" - "c")*"q"("n" - "c︎" ← "n" - "c") + ... &lt;templatestyles src="Block indent/styles.css"/&gt;"C" = "p"("n" - "b︎" ← "n" - "a)*q(n" - "a︎" ← "n" - "d)+p(n" - "b︎" ← "n" - "b")*"q"("n" - "b︎" ← "n" - "d") + "p"("n" - "b︎" ← "n" - "c")*"q"("n" - "d︎" ← "n" - "d") + ... and so forth. If the matrices were reversed, the following values would result &lt;templatestyles src="Block indent/styles.css"/&gt;"A" = "q"("n︎" ← "n" - "a")*"p"("n" - "a︎" ← "n" - "b") + "q"("n︎" ← "n" - "b")*"p"("n" - "b︎" ← "n" - "b") + "q"("n︎" ← "n" - "c")*"p"("n" - "c︎" ← "n" - "b") + ... &lt;templatestyles src="Block indent/styles.css"/&gt;"B" = "q(n" - "a︎" ← "n" - "a")*"p(n" - "a︎" ← "n" - "c") + "q"("n" - "a︎" ← "n" - "b")*"p"("n" - "b︎" ← "n" - "c") + "q"("n" - "a︎" ← "n" - "c")*"p"("n" - "c︎" ← "n" - "c") + ... &lt;templatestyles src="Block indent/styles.css"/&gt;"C" = "q(n" - "b︎" ← "n" - "a)*p(n" - "a︎" ← "n" - "d)+q(n" - "b︎" ← "n" - "b")*"p"("n" - "b︎" ← "n" - "d") + "q"("n" - "b︎" ← "n" - "c")*"p"("n" - "d︎" ← "n" - "d") + ... and so forth. Development of matrix mechanics. Werner Heisenberg used the idea that since classical physics is correct when it applies to phenomena in the world of things larger than atoms and molecules, it must stand as a special case of a more inclusive quantum theoretical model. So he hoped that he could modify quantum physics in such a way that when the parameters were on the scale of everyday objects it would look just like classical physics, but when the parameters were pulled down to the atomic scale the discontinuities seen in things like the widely spaced frequencies of the visible hydrogen bright line spectrum would come back into sight. The one thing that people at that time most wanted to understand about hydrogen radiation was how to predict or account for the intensities of the lines in its spectrum. 
Although Heisenberg did not know it at the time, the general format he worked out to express his new way of working with quantum theoretical calculations can serve as a recipe for two matrices and how to multiply them. The "Umdeutung" paper does not mention matrices. Heisenberg's great advance was the "scheme which was capable in principle of determining uniquely the relevant physical qualities (transition frequencies and amplitudes)" of hydrogen radiation. After Heisenberg wrote the "Umdeutung" paper, he turned it over to one of his senior colleagues for any needed corrections and went on vacation. Max Born puzzled over the equations and the non-commuting equations that Heisenberg had found troublesome and disturbing. After several days he realized that these equations amounted to directions for writing out matrices. By consideration of ...examples...[Heisenberg] found this rule... This was in the summer of 1925. Heisenberg...took leave of absence...and handed over his paper to me for publication...Heisenberg's rule of multiplication left me no peace, and after a week of intensive thought and trial, I suddenly remembered an algebraic theory...Such quadratic arrays are quite familiar to mathematicians and are called matrices, in association with a definite rule of multiplication. I applied this rule to Heisenberg's quantum condition and found that it agreed for the diagonal elements. It was easy to guess what the remaining elements must be, namely, null; and immediately there stood before me the strange formula formula_3 The symbol "Q" is the matrix for displacement, "P" is the matrix for momentum, i stands for the square root of negative one, and h is Planck's constant. Born and a few colleagues took up the task of working everything out in matrix form before Heisenberg returned from his time off, and within a few months the new quantum mechanics in matrix form formed the basis for another paper. This relation is now known as Heisenberg's uncertainty principle. When quantities such as position and momentum are mentioned in the context of Heisenberg's matrix mechanics, a statement such as "pq" ≠ "qp" does not refer to a single value of "p" and a single value "q" but to a matrix (grid of values arranged in a defined way) of values of position and a matrix of values of momentum. So multiplying "p" times "q" or "q" times "p" is really talking about the matrix multiplication of the two matrices. When two matrices are multiplied, the answer is a third matrix. Paul Dirac decided that the essence of Heisenberg's work lay in the very feature that Heisenberg had originally found problematical – the fact of non-commutativity such as that between multiplication of a momentum matrix by a displacement matrix and multiplication of a displacement matrix by a momentum matrix. That insight led Dirac in new and productive directions. References. &lt;templatestyles src="Refbegin/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
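A standard way to see Born's formula at work, though it is not taken from the "Umdeutung" paper itself, is to build the position and momentum matrices of the harmonic oscillator in a truncated basis and compute QP - PQ numerically; units with hbar = m = omega = 1 are assumed. The truncation spoils the relation in the last diagonal entry, which illustrates why the commutator can only be exactly proportional to Planck's constant for infinite matrices.

```python
# A textbook illustration (not from the Umdeutung paper) of QP - PQ = ih/2pi, using the
# harmonic-oscillator position and momentum matrices truncated to N states, with
# hbar = m = omega = 1.
import numpy as np

N = 6
n = np.arange(1, N)
a = np.diag(np.sqrt(n), k=1)           # annihilation operator: a|n> = sqrt(n)|n-1>
adag = a.conj().T                      # creation operator

Q = (a + adag) / np.sqrt(2)            # position matrix
P = 1j * (adag - a) / np.sqrt(2)       # momentum matrix

comm = Q @ P - P @ Q
print(np.round(comm, 10))
# Every diagonal entry equals i*hbar = 1j except the last one, which is spoiled by the
# truncation: the exact relation [Q, P] = i*hbar*I (Born's formula, with hbar = h/2pi)
# only holds for infinite matrices.
```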
[ { "math_id": 0, "text": "C(n, n-b) = \\sum_{a} A(n, n-a) B(n-a, n-b)" }, { "math_id": 1, "text": "f(n, n-a) f(n-a, n-b) +f(n-a, n - b) f(n - b, n - c)+\\cdots" }, { "math_id": 2, "text": "Y(n, n-b) = \\sum_{a} p(n,n-a)q(n-a,n-b)" }, { "math_id": 3, "text": "{QP - PQ = \\frac{ih}{2\\pi}}" } ]
https://en.wikipedia.org/wiki?curid=11095361
11096735
Static light scattering
Static light scattering is a technique in physical chemistry that measures the intensity of the scattered light to obtain the average molecular weight "Mw" of a macromolecule like a polymer or a protein in solution. Measurement of the scattering intensity at many angles allows calculation of the root mean square radius, also called the radius of gyration "Rg". By measuring the scattering intensity for many samples of various concentrations, the second virial coefficient, "A2", can be calculated. Static light scattering is also commonly utilized to determine the size of particle suspensions in the sub-μm and supra-μm ranges, via the Lorenz-Mie (see Mie scattering) and Fraunhofer diffraction formalisms, respectively. For static light scattering experiments, a high-intensity monochromatic light, usually a laser, is launched into a solution containing the macromolecules. One or many detectors are used to measure the scattering intensity at one or many angles. The angular dependence is required to obtain accurate measurements of both molar mass and size for all macromolecules of radius above 1–2% of the incident wavelength. Hence simultaneous measurements at several angles relative to the direction of the incident light, known as multi-angle light scattering (MALS) or multi-angle laser light scattering (MALLS), are generally regarded as the standard implementation of static light scattering. Additional details on the history and theory of MALS may be found in multi-angle light scattering. To measure the average molecular weight directly without calibration from the light scattering intensity, the laser intensity, the quantum efficiency of the detector, and the full scattering volume and solid angle of the detector need to be known. Since this is impractical, all commercial instruments are calibrated using a strong, known scatterer like toluene since the Rayleigh ratio of toluene and a few other solvents were measured using an absolute light scattering instrument. Theory. For a light scattering instrument composed of many detectors placed at various angles, all the detectors need to respond the same way. Usually, detectors will have slightly different quantum efficiency, different gains, and are looking at different geometrical scattering volumes. In this case, a normalization of the detectors is absolutely needed. To normalize the detectors, a measurement of a pure solvent is made first. Then an isotropic scatterer is added to the solvent. Since isotropic scatterers scatter the same intensity at any angle, the detector efficiency and gain can be normalized with this procedure. It is convenient to normalize all the detectors to the 90° angle detector. formula_0 where "IR(90)" is the scattering intensity measured for the Rayleigh scatterer by the 90° angle detector. The most common equation to measure the weight-average molecular weight, "Mw", is the Zimm equation (the right-hand side of the Zimm equation is provided incorrectly in some texts, as noted by Hiemenz and Lodge): formula_1 where formula_2 and formula_3 with formula_4 and the scattering vector for vertically polarized light is formula_5 with "n"0 the refractive index of the solvent, "λ" the wavelength of the light source, "N"A the Avogadro constant, "c" the solution concentration, and d"n"/d"c" the change in the refractive index of the solution with change in concentration. The intensity of the analyte measured at an angle is "IA(θ)". 
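As an illustration of the quantities defined above, the following sketch computes the optical constant "K" and the scattering vector "q", and evaluates the right-hand side of the Zimm equation for assumed molecular parameters. All numerical values (solvent index, "dn/dc", "Mw", "Rg", "A2", concentration) are placeholders chosen only for illustration.

```python
# A sketch of the quantities entering the Zimm equation; none of the numbers below come
# from the article, they are illustrative placeholders for a polymer solution.
import numpy as np

NA   = 6.02214076e23      # Avogadro constant, 1/mol
lam  = 632.8e-7           # HeNe wavelength in cm
n0   = 1.496              # refractive index of the solvent (assumed)
dndc = 0.11               # refractive index increment, cm^3/g (assumed)

# Optical constant K = 4 pi^2 n0^2 (dn/dc)^2 / (NA lam^4)
K = 4 * np.pi**2 * n0**2 * dndc**2 / (NA * lam**4)

# Scattering vector for vertically polarized light, q = 4 pi n0 sin(theta/2) / lambda
theta = np.deg2rad(90.0)
q = 4 * np.pi * n0 * np.sin(theta / 2) / lam

# Right-hand side of the Zimm equation for assumed molecular parameters
Mw = 1.0e5                # g/mol (assumed)
Rg = 20e-7                # cm, i.e. 20 nm (assumed)
A2 = 4.0e-4               # mol cm^3 / g^2 (assumed)
c  = 2.0e-3               # g/cm^3 (assumed)

Kc_over_dR = (1 / Mw) * (1 + q**2 * Rg**2 / 3) + 2 * A2 * c
print(f"K = {K:.3e},  q = {q:.3e} 1/cm,  Kc/dR = {Kc_over_dR:.3e} mol/g")
```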
In these equations, the subscript A denotes the analyte (the solution) and T denotes the toluene, with the Rayleigh ratio of toluene, "RT", being 1.35×10−5 cm−1 for a HeNe laser. As described above, the radius of gyration, "Rg", and the second virial coefficient, "A2", are also calculated from this equation. The refractive index increment "dn/dc" characterizes the change of the refractive index "n" with the concentration "c" and can be measured with a differential refractometer. A Zimm plot is built from a double extrapolation to zero angle and zero concentration, using measurements at many angles and many concentrations. In its simplest form, the Zimm equation reduces to: formula_6 for measurements made at low angle and infinite dilution, since "P"(0) = 1. Several analyses have been developed to interpret the scattering of particles in solution and to derive the physical characteristics named above. In a simple static light scattering experiment, the average intensity of the sample, corrected for the scattering of the solvent, yields the Rayleigh ratio "R" as a function of the scattering angle or of the wave vector "q". Data analyses. Guinier plot. The scattered intensity can be plotted as a function of the angle to give information on "Rg", which can be calculated using the Guinier approximation as follows: formula_7 where ln(Δ"R"("θ")) = ln "P"("θ"), the form factor, with "q" = 4π"n"0 sin("θ"/2)/"λ". Hence a plot of the corrected Rayleigh ratio, Δ"R"("θ"), vs sin2("θ"/2) or "q"2 will yield a slope "Rg"2/3. However, this approximation is only valid for "qRg" &lt; 1. Note that for a Guinier plot, the values of "dn/dc" and of the concentration are not needed. Kratky plot. The Kratky plot is typically used to analyze the conformation of proteins, but can also be used to analyze the random walk model of polymers. A Kratky plot can be made by plotting "sin2(θ/2)ΔR(θ) vs sin(θ/2)" or "q2ΔR(θ) vs q". Zimm plot. For polymers and polymer complexes that are monodisperse (formula_8), as determined by static light scattering, a Zimm plot is a conventional means of deriving parameters such as "Rg", the molecular mass "Mw" and the second virial coefficient "A2". Note that if the material constant "K" is not included, a Zimm plot will only yield "Rg". Including "K" yields the following equation: formula_9 The analysis performed with the Zimm plot uses a double extrapolation to zero concentration and zero scattering angle, resulting in a characteristic rhomboid plot. As the angular information is available, it is also possible to obtain the radius of gyration ("Rg"). Experiments are performed at several angles satisfying the condition formula_10 and at a minimum of 4 concentrations. Performing a Zimm analysis on a single concentration is known as a partial Zimm analysis and is only valid for dilute solutions of strong point scatterers. The partial Zimm analysis, however, does not yield the second virial coefficient, because the concentration of the sample is not varied. More specifically, the value of the second virial coefficient is either assumed to equal zero or is entered as a known value in order to perform the partial Zimm analysis. Debye plot. If the measured particles are smaller than λ/20, the form factor "P(θ)" can be neglected ("P(θ)" → 1). Therefore, the Zimm equation simplifies to the Debye equation, as follows: formula_11 Note that this is also the result of an extrapolation to zero scattering angle.
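A minimal Guinier-type analysis can be sketched as follows: synthetic corrected Rayleigh ratios are generated with a known radius of gyration, and the slope of ln Δ"R" versus "q"2 is used to recover it. The wavelength, solvent index and "Rg" value are assumed and the data are artificial, but the sketch shows why neither "dn/dc" nor the concentration is needed for this plot.

```python
# A sketch of a Guinier-type analysis: recover Rg from the slope of ln(dR) versus q^2.
# The "measured" data are synthetic and the physical parameters are assumed.
import numpy as np

n0, lam = 1.33, 632.8e-7                        # solvent index and wavelength in cm (assumed)
Rg_true = 30e-7                                  # 30 nm, the value we try to recover
theta = np.deg2rad(np.arange(30, 151, 15))       # detector angles
q = 4 * np.pi * n0 * np.sin(theta / 2) / lam

# Synthetic corrected Rayleigh ratios following dR ~ exp(-q^2 Rg^2 / 3), valid for q*Rg < 1
dR = 1e-4 * np.exp(-q**2 * Rg_true**2 / 3)

slope, intercept = np.polyfit(q**2, np.log(dR), 1)
Rg_fit = np.sqrt(-3 * slope)
print(f"fitted Rg = {Rg_fit*1e7:.1f} nm")        # ~30 nm; no dn/dc or concentration needed
```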
By acquiring data on concentration and scattering intensity, the Debye plot is constructed by plotting "Kc"/"ΔR(θ)" vs. concentration. The intercept of the fitted line gives the molecular mass, while the slope corresponds to the 2nd virial coefficient. As the Debye plot is a simplification of the Zimm equation, the same limitations of the latter apply, i.e., samples should present a monodisperse nature. For polydisperse samples, the resulting molecular mass from a static light-scattering measurement will represent an average value. An advantage of the Debye plot is the possibility to determine the second virial coefficient. This parameter describes the interaction between particles and the solvent. In macromolecule solutions, for instance, it can assume negative (particle-particle interactions are favored), zero, or positive values (particle-solvent interactions are favored). Multiple scattering. Static light scattering assumes that each detected photon has only been scattered exactly once. Therefore, analysis according to the calculations stated above will only be correct if the sample has been diluted sufficiently to ensure that photons are not scattered multiple times by the sample before being detected. Accurate interpretation becomes exceedingly difficult for systems with non-negligible contributions from multiple scattering. In many commercial instruments where analysis of the scattering signal is automatically performed, the error may never be noticed by the user. Particularly for larger particles and those with high refractive index contrast, this limits the application of standard static light scattering to very low particle concentrations. On the other hand, for soluble macromolecules that exhibit a relatively low refractive index contrast versus the solvent, including most polymers and biomolecules in their respective solvents, multiple scattering is rarely a limiting factor even at concentrations that approach the limits of solubility. However, as shown by Schaetzel, it is possible to suppress multiple scattering in static light scattering experiments via a cross-correlation approach. The general idea is to isolate singly scattered light and suppress undesired contributions from multiple scattering in a static light scattering experiment. Different implementations of cross-correlation light scattering have been developed and applied. Currently, the most widely used scheme is the so-called 3D-dynamic light scattering method. The same method can also be used to correct dynamic light scattering data for multiple scattering contributions. Composition-gradient static light scattering. Samples that change their properties after dilution may not be analyzed via static light scattering in terms of the simple model presented here as the Zimm equation. A more sophisticated analysis known as 'composition-gradient static (or multi-angle) light scattering' (CG-SLS or CG-MALS) is an important class of methods to investigate protein–protein interactions, colligative properties, and other macromolecular interactions as it yields, in addition to size and molecular weight, information on the affinity and stoichiometry of molecular complexes formed by one or more associating macromolecular/biomolecular species. In particular, static light scattering from a dilution series may be analyzed to quantify self-association, reversible oligomerization, and non-specific attraction or repulsion, while static light scattering from mixtures of species may be analyzed to quantify hetero-association. 
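The Debye analysis described above amounts to a straight-line fit. The sketch below generates synthetic values of "Kc"/Δ"R" at several concentrations from assumed values of "Mw" and "A2" and recovers them from the intercept and slope; all numbers are placeholders.

```python
# A sketch of a Debye-plot analysis: plot Kc/dR against concentration; the intercept gives
# 1/Mw and the slope gives 2*A2.  The data are synthetic placeholders.
import numpy as np

Mw_true, A2_true = 5.0e4, 2.0e-4                 # g/mol and mol cm^3/g^2 (assumed)
c = np.array([1.0, 2.0, 4.0, 8.0]) * 1e-3        # concentrations in g/cm^3

Kc_over_dR = 1 / Mw_true + 2 * A2_true * c       # synthetic "measurements" at theta -> 0

slope, intercept = np.polyfit(c, Kc_over_dR, 1)
print(f"Mw = {1/intercept:.3g} g/mol,  A2 = {slope/2:.3g} mol cm^3/g^2")
```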
Applications. One of the main applications of static light scattering for molecular mass determination is in the field of macromolecules, such as proteins and polymers, as it is possible to measure the molecular mass of proteins without any assumption about their shape. Static light scattering is usually combined with other particle characterization techniques, such as size-exclusion chromatography (SEC), dynamic light scattering (DLS), and electrophoretic light scattering (ELS). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\ N(\\theta) = \\frac{I_R(\\theta) - I_S(\\theta)} {I_R(90) - I_S(90)}" }, { "math_id": 1, "text": " \\frac{Kc}{\\Delta R(\\theta, c)}= \\frac{1}{M_w}\\left(1+ \\frac{q^2 R_g^2}{3}+O(q^4)\\right)+2A_2c+O(c^2)" }, { "math_id": 2, "text": "\\ K=4\\pi^2 n_0^2 (dn/dc)^2/N_\\text{A}\\lambda^4" }, { "math_id": 3, "text": "\\ \\Delta R(\\theta, c)= R_A(\\theta) - R_0(\\theta)" }, { "math_id": 4, "text": "\\ R(\\theta) = \\frac{I_A(\\theta) n_0^2}{I_T(\\theta) n_T^2} \\frac{R_T}{N(\\theta)} " }, { "math_id": 5, "text": "\\ q = 4\\pi n_0 \\sin(\\theta/2)/\\lambda" }, { "math_id": 6, "text": "\\ Kc/\\Delta R(\\theta \\rightarrow 0, c \\rightarrow 0)=1/M_w" }, { "math_id": 7, "text": "\\ln(\\Delta R(\\theta)) = 1 - (R_g^2/3)q^2 " }, { "math_id": 8, "text": "\\scriptstyle \\mu_2/\\bar{\\Gamma}^2 < 0.3" }, { "math_id": 9, "text": " \\frac{Kc}{\\Delta R(\\theta, c)}=\\frac{1}{M_w}\\left(1+ \\frac{q^2 R_g^2}{3}+O(q^4)\\right)+2A_2c+O(c^2)" }, { "math_id": 10, "text": "qR_g < 1" }, { "math_id": 11, "text": " \\frac{Kc}{\\Delta R(\\theta, c)}=\\frac{1}{M_w}+2A_2c" } ]
https://en.wikipedia.org/wiki?curid=11096735
1109946
Balance theory
Theory of attitude change In the psychology of motivation, balance theory is a theory of attitude change, proposed by Fritz Heider. It conceptualizes the cognitive consistency motive as a drive toward psychological balance. The consistency motive is the urge to maintain one's values and beliefs over time. Heider proposed that "sentiment" or liking relationships are balanced if the affect valence in a system multiplies out to a positive result. Structural balance theory in social network analysis is the extension proposed by Dorwin Cartwright and Frank Harary. It was the framework for the discussion at a Dartmouth College symposium in September 1975. P-O-X model. For example: a Person (formula_0) who likes (formula_1) an Other (formula_2) person will be balanced by the same valence attitude on behalf of the other. Symbolically, formula_3 and formula_4 result in psychological balance. This can be extended to things or objects (formula_5) as well, thus introducing triadic relationships. If a person formula_0 likes object formula_5 but dislikes another person formula_2, what does formula_0 feel upon learning that person formula_2 created the object formula_5? This is symbolized by the three relations formula_6, formula_7 and formula_8. Cognitive balance is achieved when there are three positive links or two negatives with one positive. Two positive links and one negative, as in the example above, create imbalance or cognitive dissonance. Multiplying the signs shows that the person will perceive imbalance (a negative multiplicative product) in this relationship, and will be motivated to correct the imbalance somehow. The Person can restore balance by revising one of the three relations: re-evaluating the other person formula_2, re-evaluating the object formula_5, or denying that formula_2 is really linked to formula_5. Any of these will result in psychological balance, thus resolving the dilemma and satisfying the drive. (Person formula_0 could also avoid object formula_5 and other person formula_2 entirely, lessening the stress created by psychological imbalance.) To predict the outcome of a situation using Heider's balance theory, one must weigh the effects of all the potential results, and the one requiring the least amount of effort will be the likely outcome. Determining if the triad is balanced is simple math: formula_9; Balanced. formula_10; Balanced. formula_11; Unbalanced. Examples. Balance theory is useful in examining how celebrity endorsement affects consumers' attitudes toward products. If a person likes a celebrity and perceives (due to the endorsement) that said celebrity likes a product, said person will tend to like the product more, in order to achieve psychological balance. However, if the person already had a dislike for the product being endorsed by the celebrity, they may begin disliking the celebrity, again to achieve psychological balance. Heider's balance theory can explain why holding the same negative attitudes as others promotes closeness. See The enemy of my enemy is my friend. Signed graphs and social networks. Dorwin Cartwright and Frank Harary looked at Heider's triads as 3-cycles in a signed graph. The sign of a path in a graph is the product of the signs of its edges. They considered cycles in a signed graph representing a social network. A balanced signed graph has only cycles of positive sign. Harary proved that a balanced graph is polarized, that is, it decomposes into two entirely positive subgraphs that are joined by negative edges. In the interest of realism, a weaker property was suggested by Davis: No cycle has exactly one negative edge. Graphs with this property may decompose into more than two entirely positive subgraphs, called clusters.
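The sign-product rule for a triad is easy to state in code. The sketch below is a minimal illustration rather than part of Heider's formulation; it checks the three cases listed above and the celebrity-endorsement example.

```python
# A minimal check of triad balance: a P-O-X triad is balanced exactly when the product of
# the three signs is positive, matching the three cases listed above.
def is_balanced(p_o: int, p_x: int, o_x: int) -> bool:
    """Each argument is +1 (liking) or -1 (disliking)."""
    return p_o * p_x * o_x > 0

print(is_balanced(+1, +1, +1))   # + + +  -> True  (balanced)
print(is_balanced(-1, +1, -1))   # - + -  -> True  (balanced)
print(is_balanced(-1, +1, +1))   # - + +  -> False (unbalanced)

# Celebrity-endorsement example: a liked celebrity endorsing a disliked product is unbalanced,
# so either the product or the celebrity attitude tends to change.
print(is_balanced(+1, -1, +1))   # -> False
```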
The property has been called the "clusterability axiom". Then balanced graphs are recovered by assuming the Parsimony axiom: The subgraph of positive edges has at most two components. The significance of balance theory for social dynamics was expressed by Anatol Rapoport: The hypothesis implies roughly that attitudes of the group members will tend to change in such a way that one's friends' friends will tend to become one's friends and one's enemies' enemies also one's friends, and one's enemies' friends and one's friends' enemies will tend to become one's enemies, and moreover, that these changes tend to operate even across several removes (one's friends' friends' enemies' enemies tend to become friends by an iterative process). Note that a triangle of three mutual enemies makes a clusterable graph but not a balanced one. Therefore, in a clusterable network one cannot conclude that "the enemy of my enemy is my friend," although this aphorism is a fact in a balanced network. Criticism. Claude Flament expressed a limit to balance theory imposed by reconciling weak ties with relationships of stronger force such as family bonds: One might think that a valued algebraic graph is necessary to represent psycho-social reality, if it is to take into account the degree of intensity of interpersonal relationships. But in fact it then seems hardly possible to define the balance of a graph, not for mathematical but for psychological reasons. If the relationship "AB" is +3, the relationship "BC" is –4, what should the "AC" relationship be in order that the triangle be balanced? The psychological hypotheses are wanting, or rather they are numerous and little justified. At the 1975 Dartmouth College colloquium on balance theory, Bo Anderson struck at the heart of the notion: In graph theory there exists a "formal" balance theory that contains theorems that are "analytically" true. The statement that Heider's "psychological" balance can be represented, in its essential aspects, by a suitable interpretation of that "formal balance theory" should, however, be regarded as problematical. We cannot routinely identify the positive and negative lines in the formal theory with the positive and negative "sentiment relations", and identify the formal balance notion with the "psychological" idea of balance or structural tension. .. It is puzzling that the fine structure of the relationships between formal and psychological balance has been given scant attention by balance theorists. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
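Harary's characterization of balance (a two-colouring in which positive edges join like colours and negative edges join unlike colours) and Davis's clustering condition (no negative edge inside a component of the positive subgraph) can both be checked mechanically. The sketch below uses these equivalent formulations, which are standard but not spelled out in the text above, and reproduces the observation that a triangle of three mutual enemies is clusterable but not balanced.

```python
# A sketch of Harary's balance criterion and Davis's clusterability criterion for a signed
# graph, given as a dict {(u, v): +1 or -1} of undirected edges.
from collections import defaultdict, deque

def neighbours(edges):
    adj = defaultdict(list)
    for (u, v), s in edges.items():
        adj[u].append((v, s))
        adj[v].append((u, s))
    return adj

def is_balanced(edges):
    """Balanced iff the vertices can be 2-coloured so that positive edges join like colours
    and negative edges join unlike colours (equivalently: every cycle has positive sign)."""
    adj, colour = neighbours(edges), {}
    for start in adj:
        if start in colour:
            continue
        colour[start] = 0
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                want = colour[u] if s > 0 else 1 - colour[u]
                if v not in colour:
                    colour[v] = want
                    queue.append(v)
                elif colour[v] != want:
                    return False
    return True

def is_clusterable(edges):
    """Clusterable iff no negative edge joins two vertices in the same component of the
    positive subgraph (equivalently: no cycle has exactly one negative edge)."""
    positive = {e: s for e, s in edges.items() if s > 0}
    adj, comp = neighbours(positive), {}
    for start in adj:
        if start in comp:
            continue
        comp[start] = start
        queue = deque([start])
        while queue:
            u = queue.popleft()
            for v, _ in adj[u]:
                if v not in comp:
                    comp[v] = start
                    queue.append(v)
    return all(comp.get(u, u) != comp.get(v, v)
               for (u, v), s in edges.items() if s < 0)

three_enemies = {("a", "b"): -1, ("b", "c"): -1, ("a", "c"): -1}
print(is_balanced(three_enemies), is_clusterable(three_enemies))   # False True, as stated above
```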
[ { "math_id": 0, "text": "P" }, { "math_id": 1, "text": "+" }, { "math_id": 2, "text": "O" }, { "math_id": 3, "text": "P (+) > O" }, { "math_id": 4, "text": "P < (+) O" }, { "math_id": 5, "text": "X" }, { "math_id": 6, "text": " P (+) > X " }, { "math_id": 7, "text": " P (-) > O " }, { "math_id": 8, "text": " O (+) > X " }, { "math_id": 9, "text": " + + + = + " }, { "math_id": 10, "text": " - + - = + " }, { "math_id": 11, "text": " - + + = - " } ]
https://en.wikipedia.org/wiki?curid=1109946
1109958
Loop-erased random walk
Model for a random simple path In mathematics, loop-erased random walk is a model for a random simple path with important applications in combinatorics, physics and quantum field theory. It is intimately connected to the uniform spanning tree, a model for a random tree. See also "random walk" for more general treatment of this topic. Definition. Assume "G" is some graph and formula_0 is some path of length "n" on "G". In other words, formula_1 are vertices of "G" such that formula_2 and formula_3 are connected by an edge. Then the loop erasure of formula_0 is a new simple path created by erasing all the loops of formula_0 in chronological order. Formally, we define indices formula_4 inductively using formula_5 formula_6 where "max" here means up to the length of the path formula_0. The induction stops when for some formula_4 we have formula_7. In words, to find formula_8, we hold formula_9 in one hand, and with the other hand, we trace back from the end: formula_10, until we either hit some formula_11, in which case we set formula_12, or we end up at formula_9, in which case we set formula_13. Assume the induction stops at "J" i.e. formula_14 is the last formula_15. Then the loop erasure of formula_0, denoted by formula_16 is a simple path of length "J" defined by formula_17 Now let "G" be some graph, let "v" be a vertex of "G", and let "R" be a random walk on "G" starting from "v". Let "T" be some stopping time for "R". Then the loop-erased random walk until time "T" is LE("R"([1,"T"])). In other words, take "R" from its beginning until "T" — that's a (random) path — erase all the loops in chronological order as above — you get a random simple path. The stopping time "T" may be fixed, i.e. one may perform "n" steps and then loop-erase. However, it is usually more natural to take "T" to be the hitting time in some set. For example, let "G" be the graph Z2 and let "R" be a random walk starting from the point (0,0). Let "T" be the time when "R" first hits the circle of radius 100 (we mean here of course a "discretized" circle). LE("R") is called the loop-erased random walk starting at (0,0) and stopped at the circle. Uniform spanning tree. For any graph "G", a spanning tree of "G" is a subgraph of "G" containing all vertices and some of the edges, which is a tree, i.e. connected and with no cycles. A spanning tree chosen randomly from among all possible spanning trees with equal probability is called a uniform spanning tree. There are typically exponentially many spanning trees (too many to generate them all and then choose one randomly); instead, uniform spanning trees can be generated more efficiently by an algorithm called Wilson's algorithm which uses loop-erased random walks. The algorithm proceeds according to the following steps. First, construct a single-vertex tree "T" by choosing (arbitrarily) one vertex. Then, while the tree "T" constructed so far does not yet include all of the vertices of the graph, let "v" be an arbitrary vertex that is not in "T", perform a loop-erased random walk from "v" until reaching a vertex in "T", and add the resulting path to "T". Repeating this process until all vertices are included produces a uniformly distributed tree, regardless of the arbitrary choices of vertices at each step. A connection in the other direction is also true. If "v" and "w" are two vertices in "G" then, in any spanning tree, they are connected by a unique path. Taking this path in the "uniform" spanning tree gives a random simple path. 
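The chronological loop erasure lends itself to a short implementation: loops are removed as soon as they are closed, which produces the same simple path as the index-based definition above. The sketch below also applies it to a simple random walk on Z2 stopped on leaving a box (a box rather than a discretized circle is used purely for brevity).

```python
# Loop erasure: whenever a vertex is revisited, delete the loop it closes.  Erasing loops as
# they appear yields the same simple path as the chronological definition above.
import random

def loop_erase(path):
    """Erase the loops of `path` (a list of hashable vertices) in chronological order."""
    erased, position = [], {}
    for v in path:
        if v in position:                  # a loop has just been closed:
            del erased[position[v] + 1:]   # remove everything after the first visit to v
            for u in list(position):
                if position[u] > position[v]:
                    del position[u]
        else:
            position[v] = len(erased)
            erased.append(v)
    return erased

print(loop_erase([1, 2, 3, 2, 4, 1, 5]))   # [1, 5]

# Loop-erasing a simple random walk on Z^2 started at (0, 0) and stopped on leaving a box.
def lerw_in_box(r=20):
    walk, (x, y) = [(0, 0)], (0, 0)
    while max(abs(x), abs(y)) < r:
        dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        x, y = x + dx, y + dy
        walk.append((x, y))
    return loop_erase(walk)

print(len(lerw_in_box()))
```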
It turns out that the distribution of this path is identical to the distribution of the loop-erased random walk starting at "v" and stopped at "w". This fact can be used to justify the correctness of Wilson's algorithm. Another corollary is that loop-erased random walk is symmetric in its start and end points. More precisely, the distribution of the loop-erased random walk starting at "v" and stopped at "w" is identical to the distribution of the reversal of loop-erased random walk starting at "w" and stopped at "v". Loop-erasing a random walk and the reverse walk do not, in general, give the same result, but according to this result the distributions of the two loop-erased walks are identical. The Laplacian random walk. Another representation of loop-erased random walk stems from solutions of the discrete Laplace equation. Let "G" again be a graph and let "v" and "w" be two vertices in "G". Construct a random path from "v" to "w" inductively using the following procedure. Assume we have already defined formula_18. Let "f" be a function from "G" to R satisfying formula_19 for all formula_20 and formula_21 "f" is discretely harmonic everywhere else Where a function "f" on a graph is discretely harmonic at a point "x" if "f"("x") equals the average of "f" on the neighbors of "x". With "f" defined choose formula_22 using "f" at the neighbors of formula_23 as weights. In other words, if formula_24 are these neighbors, choose formula_25 with probability formula_26 Continuing this process, recalculating "f" at each step, will result in a random simple path from "v" to "w"; the distribution of this path is identical to that of a loop-erased random walk from "v" to "w". An alternative view is that the distribution of a loop-erased random walk conditioned to start in some path β is identical to the loop-erasure of a random walk conditioned not to hit β. This property is often referred to as the Markov property of loop-erased random walk (though the relation to the usual Markov property is somewhat vague). It is important to notice that while the proof of the equivalence is quite easy, models which involve dynamically changing harmonic functions or measures are typically extremely difficult to analyze. Practically nothing is known about the p-Laplacian walk or diffusion-limited aggregation. Another somewhat related model is the harmonic explorer. Finally there is another link that should be mentioned: Kirchhoff's theorem relates the number of spanning trees of a graph "G" to the eigenvalues of the discrete Laplacian. See spanning tree for details. Grids. Let "d" be the dimension, which we will assume to be at least 2. Examine Z"d" i.e. all the points formula_27 with integer formula_28. This is an infinite graph with degree 2"d" when you connect each point to its nearest neighbors. From now on we will consider loop-erased random walk on this graph or its subgraphs. High dimensions. The easiest case to analyze is dimension 5 and above. In this case it turns out that there the intersections are only local. A calculation shows that if one takes a random walk of length "n", its loop-erasure has length of the same order of magnitude, i.e. "n". Scaling accordingly, it turns out that loop-erased random walk converges (in an appropriate sense) to Brownian motion as "n" goes to infinity. Dimension 4 is more complicated, but the general picture is still true. 
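Wilson's algorithm, described in the previous section and justified by the path-distribution fact just stated, can be sketched directly on top of the loop_erase function given above; the graph is represented as a dictionary from each vertex to its list of neighbours, and the 3 × 3 grid at the end is only a small test case.

```python
# A sketch of Wilson's algorithm: repeatedly run a random walk from a vertex not yet in the
# tree until it hits the tree, loop-erase it, and add the resulting path.  `loop_erase` is
# the function from the previous sketch.
import random

def wilson_uniform_spanning_tree(graph, root=None):
    vertices = list(graph)
    root = root if root is not None else vertices[0]
    in_tree, tree_edges = {root}, []
    for v in vertices:
        if v in in_tree:
            continue
        walk = [v]
        while walk[-1] not in in_tree:                # random walk until the tree is hit
            walk.append(random.choice(graph[walk[-1]]))
        path = loop_erase(walk)                        # loop-erased random walk to the tree
        tree_edges += list(zip(path, path[1:]))
        in_tree.update(path)
    return tree_edges

# Example: the 3 x 3 grid graph.
grid = {(x, y): [(x + dx, y + dy) for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
                 if 0 <= x + dx < 3 and 0 <= y + dy < 3]
        for x in range(3) for y in range(3)}
tree = wilson_uniform_spanning_tree(grid)
print(len(tree))    # a spanning tree of 9 vertices always has 8 edges
```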
It turns out that the loop-erasure of a random walk of length "n" has approximately formula_29 vertices, but again, after scaling (which takes into account the logarithmic factor) the loop-erased walk converges to Brownian motion. Two dimensions. In two dimensions, arguments from conformal field theory and simulation results led to a number of exciting conjectures. Assume "D" is some simply connected domain in the plane and "x" is a point in "D". Take the graph "G" to be formula_30 that is, a grid of side length ε restricted to "D". Let "v" be the vertex of "G" closest to "x". Examine now a loop-erased random walk starting from "v" and stopped when hitting the "boundary" of "G", i.e. the vertices of "G" which correspond to the boundary of "D". Denote the conjectured scaling limit of this walk, as ε goes to zero, by formula_31. The conjectures state that this scaling limit exists and is conformally invariant: if φ is a conformal map between the domains "D" and "E", then formula_32 The first attack at these conjectures came from the direction of domino tilings. Taking a spanning tree of "G" and adding to it its planar dual, one gets a domino tiling of a special derived graph (call it "H"). Each vertex of "H" corresponds to a vertex, edge or face of "G", and the edges of "H" show which vertex lies on which edge and which edge on which face. It turns out that taking a uniform spanning tree of "G" leads to a uniformly distributed random domino tiling of "H". The number of domino tilings of a graph can be calculated using the determinant of special matrices, which allows one to connect it to the discrete Green function, which is approximately conformally invariant. These arguments made it possible to show that certain observables of loop-erased random walk are (in the limit) conformally invariant, and that the expected number of vertices in a loop-erased random walk stopped at a circle of radius "r" is of the order of formula_33. In 2002 these conjectures were resolved (positively) using Stochastic Löwner Evolution. Very roughly, it is a stochastic conformally invariant ordinary differential equation which makes it possible to capture the Markov property of loop-erased random walk (and many other probabilistic processes). Three dimensions. The scaling limit exists and is invariant under rotations and dilations. If formula_34 denotes the expected number of vertices in the loop-erased random walk until it gets to a distance of "r", then formula_35 where ε, "c" and "C" are some positive numbers (the numbers can, in principle, be calculated from the proofs, but this was not done there). This suggests that the scaling limit should have Hausdorff dimension between formula_36 and 5/3 almost surely. Numerical experiments suggest that it should be formula_37. Notes.
[ { "math_id": 0, "text": "\\gamma" }, { "math_id": 1, "text": "\\gamma(1),\\dots,\\gamma(n)" }, { "math_id": 2, "text": "\\gamma(i)" }, { "math_id": 3, "text": "\\gamma(i+1)" }, { "math_id": 4, "text": "i_j" }, { "math_id": 5, "text": "i_1 = 1\\," }, { "math_id": 6, "text": "i_{j+1}=\\max\\{k:\\gamma(k)=\\gamma(i_j)\\}+1\\," }, { "math_id": 7, "text": "\\gamma(i_j)=\\gamma(n)" }, { "math_id": 8, "text": "i_{j+1}" }, { "math_id": 9, "text": "\\gamma(i_j)" }, { "math_id": 10, "text": "\\gamma(n), \\gamma(n-1), ..." }, { "math_id": 11, "text": "\\gamma(k) = \\gamma(i_j) " }, { "math_id": 12, "text": "i_{j+1} = k+1" }, { "math_id": 13, "text": "i_{j+1} = i_j+1" }, { "math_id": 14, "text": "\\gamma(i_J)=\\gamma(n)" }, { "math_id": 15, "text": "i_J" }, { "math_id": 16, "text": "\\mathrm{LE}(\\gamma)" }, { "math_id": 17, "text": "\\mathrm{LE}(\\gamma)(j)=\\gamma(i_j).\\," }, { "math_id": 18, "text": "\\gamma(1),...,\\gamma(n)" }, { "math_id": 19, "text": "f(\\gamma(i))=0" }, { "math_id": 20, "text": "i\\leq n" }, { "math_id": 21, "text": "f(w)=1" }, { "math_id": 22, "text": "\\gamma(n+1)" }, { "math_id": 23, "text": "\\gamma(n)" }, { "math_id": 24, "text": "x_1,...,x_d" }, { "math_id": 25, "text": "x_i" }, { "math_id": 26, "text": "\\frac{f(x_i)}{\\sum_{j=1}^d f(x_j)}." }, { "math_id": 27, "text": "(a_1,...,a_d)" }, { "math_id": 28, "text": "a_i" }, { "math_id": 29, "text": "n/\\log^{1/3}n" }, { "math_id": 30, "text": "G:=D\\cap \\varepsilon \\mathbb{Z}^2," }, { "math_id": 31, "text": "S_{D,x}" }, { "math_id": 32, "text": "\\phi(S_{D,x})=S_{E,\\phi(x)}.\\," }, { "math_id": 33, "text": "r^{5/4}" }, { "math_id": 34, "text": "L(r)" }, { "math_id": 35, "text": "cr^{1+\\varepsilon}\\leq L(r)\\leq Cr^{5/3}\\," }, { "math_id": 36, "text": "1+\\varepsilon" }, { "math_id": 37, "text": "1.62400\\pm 0.00005" } ]
https://en.wikipedia.org/wiki?curid=1109958
11101129
Film temperature
Temperature at the boundary layer of a fluid undergoing convection In fluid thermodynamics, the film temperature (Tf) is an approximation of the temperature of a fluid inside a convection boundary layer. It is calculated as the arithmetic mean of the temperature at the surface of the solid boundary wall (Tw) and the free-stream temperature (T∞): formula_0 The film temperature is often used as the temperature at which fluid properties are calculated when using the Prandtl number, Nusselt number, Reynolds number or Grashof number to calculate a heat transfer coefficient, because it is a reasonable first approximation to the temperature within the convection boundary layer. Somewhat confusing terminology may be encountered in relation to boilers and heat exchangers, where the same term is used to refer to the limit (hot) temperature of a fluid in contact with a hot surface.
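A small sketch of the definition follows; the temperatures are arbitrary illustrative values in kelvin, and the point is simply that the fluid properties entering Pr, Nu, Re or Gr would then be evaluated at the resulting Tf.

```python
# Film temperature: the arithmetic mean of the wall and free-stream temperatures.
def film_temperature(t_wall: float, t_freestream: float) -> float:
    return (t_wall + t_freestream) / 2.0

Tf = film_temperature(t_wall=380.0, t_freestream=300.0)   # kelvin, illustrative values
print(Tf)   # 340.0 K; viscosity, conductivity, Pr, etc. would be looked up at this temperature
```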
[ { "math_id": 0, "text": "T_f=\\frac{T_w + T_\\infty}{2}" } ]
https://en.wikipedia.org/wiki?curid=11101129
11101338
Equiprobability
Events with equal probabilities of occurring Equiprobability is a property for a collection of events that each have the same probability of occurring. In statistics and probability theory it is applied in the discrete uniform distribution and the equidistribution theorem for rational numbers. If there are formula_0 events under consideration, the probability of each occurring is formula_1 In philosophy it corresponds to a concept that allows one to assign equal probabilities to outcomes when they are judged to be equipossible or to be "equally likely" in some sense. The best-known formulation of the rule is Laplace's principle of indifference (or "principle of insufficient reason"), which states that, when "we have no other information than" that exactly formula_2 mutually exclusive events can occur, we are justified in assigning each the probability formula_3 This subjective assignment of probabilities is especially justified for situations such as rolling dice and lotteries since these experiments carry a symmetry structure, and one's state of knowledge must clearly be invariant under this symmetry. A similar argument could lead to the seemingly absurd conclusion that the sun is as likely to rise as to not rise tomorrow morning. However, the conclusion that the sun is equally likely to rise as it is to not rise is only absurd when additional information is known, such as the laws of gravity and the sun's history. Similar applications of the concept are effectively instances of circular reasoning, with "equally likely" events being assigned equal probabilities, which means in turn that they are equally likely. Despite this, the notion remains useful in probabilistic and statistical modeling. In Bayesian probability, one needs to establish prior probabilities for the various hypotheses before applying Bayes' theorem. One procedure is to assume that these prior probabilities have some symmetry which is typical of the experiment, and then assign a prior which is proportional to the Haar measure for the symmetry group: this generalization of equiprobability is known as the principle of transformation groups and leads to misuse of equiprobability as a model for incertitude. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "\\frac{1}{n}." }, { "math_id": 2, "text": "N" }, { "math_id": 3, "text": "\\frac{1}{N}." } ]
https://en.wikipedia.org/wiki?curid=11101338
11101522
Isomorphism-closed subcategory
In category theory, a branch of mathematics, a subcategory formula_0 of a category formula_1 is said to be isomorphism closed or replete if every formula_1-isomorphism formula_2 with formula_3 belongs to formula_4 This implies that both formula_5 and formula_6 belong to formula_0 as well. A subcategory that is isomorphism closed and full is called strictly full. In the case of full subcategories it is sufficient to check that every formula_1-object that is isomorphic to an formula_0-object is also an formula_0-object. This condition is very natural. For example, in the category of topological spaces one usually studies properties that are invariant under homeomorphisms—so-called topological properties. Every topological property corresponds to a strictly full subcategory of formula_7 References. "This article incorporates material from Isomorphism-closed subcategory on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License."
[ { "math_id": 0, "text": "\\mathcal{A}" }, { "math_id": 1, "text": "\\mathcal{B}" }, { "math_id": 2, "text": "h:A\\to B" }, { "math_id": 3, "text": "A\\in\\mathcal{A}" }, { "math_id": 4, "text": "\\mathcal{A}." }, { "math_id": 5, "text": "B" }, { "math_id": 6, "text": "h^{-1}:B\\to A" }, { "math_id": 7, "text": "\\mathbf{Top}." } ]
https://en.wikipedia.org/wiki?curid=11101522
11103925
Representation theory of SL2(R)
In mathematics, the main results concerning irreducible unitary representations of the Lie group SL(2, R) are due to Gelfand and Naimark (1946), V. Bargmann (1947), and Harish-Chandra (1952). Structure of the complexified Lie algebra. We choose a basis "H", "X", "Y" for the complexification of the Lie algebra of SL(2, R) so that "iH" generates the Lie algebra of a compact Cartan subgroup "K" (so in particular unitary representations split as a sum of eigenspaces of "H"), and {"H", "X", "Y"} is an sl2-triple, which means that they satisfy the relations formula_0 One way of doing this is as follows: formula_1 corresponding to the subgroup "K" of matrices formula_2 formula_3 formula_4 The Casimir operator Ω is defined to be formula_5 It generates the center of the universal enveloping algebra of the complexified Lie algebra of SL(2, R). The Casimir element acts on any irreducible representation as multiplication by some complex scalar μ2. Thus in the case of the Lie algebra sl2, the infinitesimal character of an irreducible representation is specified by one complex number. The center "Z" of the group SL(2, R) is a cyclic group {"I", −"I"} of order 2, consisting of the identity matrix and its negative. On any irreducible representation, the center either acts trivially, or by the nontrivial character of "Z", which represents the matrix -"I" by multiplication by -1 in the representation space. Correspondingly, one speaks of the trivial or nontrivial "central character". The central character and the infinitesimal character of an irreducible representation of any reductive Lie group are important invariants of the representation. In the case of irreducible admissible representations of SL(2, R), it turns out that, generically, there is exactly one representation, up to an isomorphism, with the specified central and infinitesimal characters. In the exceptional cases there are two or three representations with the prescribed parameters, all of which have been determined. Finite-dimensional representations. For each nonnegative integer "n", the group SL(2, R) has an irreducible representation of dimension "n" + 1, which is unique up to an isomorphism. This representation can be constructed in the space of homogeneous polynomials of degree "n" in two variables. The case "n" = 0 corresponds to the trivial representation. An irreducible finite-dimensional representation of a noncompact simple Lie group of dimension greater than 1 is never unitary. Thus this construction produces only one unitary representation of SL(2, R), the trivial representation. The "finite-dimensional" representation theory of the noncompact group SL(2, R) is equivalent to the representation theory of SU(2), its compact form, essentially because their Lie algebras have the same complexification and they are "algebraically simply connected". (More precisely, the group SU(2) is simply connected and, although SL(2, R) is not, it has no non-trivial algebraic central extensions.) However, in the general "infinite-dimensional" case, there is no close correspondence between representations of a group and the representations of its Lie algebra. In fact, it follows from the Peter–Weyl theorem that all irreducible representations of the compact Lie group SU(2) are finite-dimensional and unitary. The situation with SL(2, R) is completely different: it possesses infinite-dimensional irreducible representations, some of which are unitary, and some are not. Principal series representations. 
A major technique of constructing representations of a reductive Lie group is the method of parabolic induction. In the case of the group SL(2, R), there is up to conjugacy only one proper parabolic subgroup, the Borel subgroup of the upper-triangular matrices of determinant 1. The inducing parameter of an induced principal series representation is a (possibly non-unitary) character of the multiplicative group of real numbers, which is specified by choosing ε = ± 1 and a complex number μ. The corresponding principal series representation is denoted "I"ε,μ. It turns out that ε is the central character of the induced representation and the complex number μ may be identified with the infinitesimal character via the Harish-Chandra isomorphism. The principal series representation "I"ε,μ (or more precisely its Harish-Chandra module of "K"-finite elements) admits a basis consisting of elements "w""j", where the index "j" runs through the even integers if ε=1 and the odd integers if ε=-1. The action of "X", "Y", and "H" is given by the formulas formula_6 formula_7 formula_8 Admissible representations. Using the fact that the Casimir operator acts by a scalar on any irreducible admissible representation, and that such a representation contains an eigenvector for "H", it follows easily that any irreducible admissible representation is a subrepresentation of a parabolically induced representation. (This also is true for more general reductive Lie groups and is known as Casselman's subrepresentation theorem.) Thus the irreducible admissible representations of SL(2, R) can be found by decomposing the principal series representations "I"ε,μ into irreducible components and determining the isomorphisms. We summarize the decompositions as follows: This gives the following list of irreducible admissible representations: Relation with the Langlands classification. According to the Langlands classification, the irreducible admissible representations are parametrized by certain tempered representations of Levi subgroups "M" of parabolic subgroups "P"="MAN". This works as follows: Unitary representations. The irreducible unitary representations can be found by checking which of the irreducible admissible representations admit an invariant positive-definite Hermitian form. This results in the following list of unitary representations of SL(2, R): Of these, the two limit of discrete series representations, the discrete series representations, and the two families of principal series representations are tempered, while the trivial and complementary series representations are not tempered.
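The structural facts above can be checked mechanically: the sketch below verifies the sl2-triple relations for the chosen basis numerically and then confirms, symbolically, that the Casimir operator acts on the basis vectors "w""j" of the principal series by the scalar μ2. NumPy and SymPy are used purely as calculation aids.

```python
# Check the relations [H,X]=2X, [H,Y]=-2Y, [X,Y]=H for the explicit basis above, and the
# statement that Omega = H^2 + 1 + 2XY + 2YX acts on the principal series by mu^2.
import numpy as np
import sympy as sp

H = np.array([[0, -1j], [1j, 0]])
X = 0.5 * np.array([[1, 1j], [1j, -1]])
Y = 0.5 * np.array([[1, -1j], [-1j, -1]])

comm = lambda A, B: A @ B - B @ A
assert np.allclose(comm(H, X), 2 * X)
assert np.allclose(comm(H, Y), -2 * Y)
assert np.allclose(comm(X, Y), H)

# On the principal series: H w_j = j w_j, X w_j = (mu+j+1)/2 w_{j+2}, Y w_j = (mu-j+1)/2 w_{j-2},
# so Omega acts on w_j by the scalar computed below, which simplifies to mu^2.
mu, j = sp.symbols("mu j")
omega_on_wj = (j**2 + 1
               + 2 * sp.Rational(1, 4) * (mu - j + 1) * (mu + (j - 2) + 1)    # 2 X Y w_j
               + 2 * sp.Rational(1, 4) * (mu + j + 1) * (mu - (j + 2) + 1))   # 2 Y X w_j
print(sp.simplify(omega_on_wj))   # mu**2
```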
[ { "math_id": 0, "text": " [H,X]=2X, \\quad [H,Y]=-2Y, \\quad [X,Y]=H. " }, { "math_id": 1, "text": "H=\\begin{pmatrix}0 & -i\\\\ i & 0\\end{pmatrix}" }, { "math_id": 2, "text": "\\begin{pmatrix}\\cos(\\theta) & -\\sin(\\theta)\\\\ \\sin(\\theta)& \\cos(\\theta)\\end{pmatrix}" }, { "math_id": 3, "text": "X={1\\over 2}\\begin{pmatrix}1 & i\\\\ i & -1\\end{pmatrix}" }, { "math_id": 4, "text": "Y={1\\over 2}\\begin{pmatrix}1 & -i\\\\ -i & -1\\end{pmatrix}" }, { "math_id": 5, "text": "\\Omega= H^2+1+2XY+2YX." }, { "math_id": 6, "text": "H(w_j) = jw_j" }, { "math_id": 7, "text": "X(w_j) = {\\mu+j+1\\over 2}w_{j+2}" }, { "math_id": 8, "text": "Y(w_j) = {\\mu-j+1\\over 2}w_{j-2}" } ]
https://en.wikipedia.org/wiki?curid=11103925
1110499
Higman–Sims graph
In mathematical graph theory, the Higman–Sims graph is a 22-regular undirected graph with 100 vertices and 1100 edges. It is the unique strongly regular graph srg(100,22,0,6), where no neighboring pair of vertices share a common neighbor and each non-neighboring pair of vertices share six common neighbors. It was first constructed by and rediscovered in 1968 by Donald G. Higman and Charles C. Sims as a way to define the Higman–Sims group, a subgroup of index two in the group of automorphisms of the Hoffman–Singleton graph. Construction. From M22 graph. Take the M22 graph, a strongly regular graph srg(77,16,0,4) and augment it with 22 new vertices corresponding to the points of S(3,6,22), each block being connected to its points, and one additional vertex "C" connected to the 22 points. From Hoffman–Singleton graph. There are 100 independent sets of size 15 in the Hoffman–Singleton graph. Create a new graph with 100 corresponding vertices, and connect vertices whose corresponding independent sets have exactly 0 or 8 elements in common. The resulting Higman–Sims graph can be partitioned into two copies of the Hoffman–Singleton graph in 352 ways. From a cube. Take a cube with vertices labeled 000, 001, 010, ..., 111. Take all 70 possible 4-sets of vertices, and retain only the ones whose XOR evaluates to 000; there are 14 such 4-sets, corresponding to the 6 faces + 6 diagonal-rectangles + 2 parity tetrahedra. This is a 3-(8,4,1) block design on 8 points, with 14 blocks of block size 4, each point appearing in 7 blocks, each pair of points appearing 3 times, each triplet of points occurring exactly once. Permute the original 8 vertices any of 8! = 40320 ways, and discard duplicates. There are then 30 different ways to relabel the vertices (i.e., 30 different designs that are all isomorphic to each other by permutation of the points). This is because there are 1344 automorphisms, and 40320/1344 = 30. Create a vertex for each of the 30 designs, and for each row of every design (there are 70 such rows in total, each row being a 4-set of 8 and appearing in 6 designs). Connect each design to its 14 rows. Connect disjoint designs to each other (each design is disjoint with 8 others). Connect rows to each other if they have exactly one element in common (there are 4x4 = 16 such neighbors). The resulting graph is the Higman–Sims graph. Rows are connected to 16 other rows and to 6 designs == degree 22. Designs are connected to 14 rows and 8 disjoint designs == degree 22. Thus all 100 vertices have degree 22 each. Algebraic properties. The automorphism group of the Higman–Sims graph is a group of order isomorphic to the semidirect product of the Higman–Sims group of order with the cyclic group of order 2. It has automorphisms that take any edge to any other edge, making the Higman–Sims graph an edge-transitive graph. The outer elements induce odd permutations on the graph. As mentioned above, there are 352 ways to partition the Higman–Sims graph into a pair of Hoffman–Singleton graphs; these partitions actually come in 2 orbits of size 176 each, and the outer elements of the Higman–Sims group swap these orbits. The characteristic polynomial of the Higman–Sims graph is ("x" − 22)("x" − 2)77("x" + 8)22. Therefore, the Higman–Sims graph is an integral graph: its spectrum consists entirely of integers. It is also the only graph with this characteristic polynomial, making it a graph determined by its spectrum. Inside the Leech lattice. 
The Higman–Sims graph naturally occurs inside the Leech lattice: if "X", "Y" and "Z" are three points in the Leech lattice such that the distances "XY", "XZ" and "YZ" are formula_0 respectively, then there are exactly 100 Leech lattice points "T" such that all the distances "XT", "YT" and "ZT" are equal to 2, and if we connect two such points "T" and "T"′ when the distance between them is formula_1, the resulting graph is isomorphic to the Higman–Sims graph. Furthermore, the set of all automorphisms of the Leech lattice (that is, Euclidean congruences fixing it) which fix each of "X", "Y" and "Z" is the Higman–Sims group (if we allow exchanging "X" and "Y", the order 2 extension of all graph automorphisms is obtained). This shows that the Higman–Sims group occurs inside the Conway groups Co2 (with its order 2 extension) and Co3, and consequently also Co1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
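The "From a cube" construction given earlier in the article is explicit enough to carry out by machine. The sketch below follows that recipe (70 row-vertices, 30 design-vertices, and the three adjacency rules stated above) and then verifies the strongly regular parameters srg(100, 22, 0, 6); it is a direct transcription of the description, not an independent construction.

```python
# Build the Higman-Sims graph from the 3-(8,4,1) design on the cube labels, following the
# "From a cube" construction above, then check the srg(100, 22, 0, 6) parameters.
from itertools import combinations, permutations
from functools import reduce

points = range(8)
rows = [frozenset(s) for s in combinations(points, 4)]                     # all 70 4-sets

base = frozenset(r for r in rows if reduce(lambda a, b: a ^ b, r) == 0)    # the 14 XOR-0 blocks
designs = list({frozenset(frozenset(p[v] for v in block) for block in base)
                for p in permutations(points)})                            # 30 distinct designs
assert len(base) == 14 and len(designs) == 30

vertices = rows + designs
adj = {v: set() for v in vertices}
def connect(u, v):
    adj[u].add(v); adj[v].add(u)

for d in designs:
    for r in d:                          # each design to its 14 rows
        connect(d, r)
    for e in designs:                    # disjoint designs to each other
        if d != e and not (d & e):
            connect(d, e)
for r, s in combinations(rows, 2):       # rows sharing exactly one element
    if len(r & s) == 1:
        connect(r, s)

assert all(len(adj[v]) == 22 for v in vertices)          # 22-regular
for u, v in combinations(vertices, 2):                   # lambda = 0, mu = 6
    assert len(adj[u] & adj[v]) == (0 if v in adj[u] else 6)
print("srg(100, 22, 0, 6) verified on", len(vertices), "vertices")
```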
[ { "math_id": 0, "text": "2, \\sqrt{6}, \\sqrt{6}" }, { "math_id": 1, "text": " \\sqrt{6} " } ]
https://en.wikipedia.org/wiki?curid=1110499
11105238
SL2(R)
Group of real 2×2 matrices with unit determinant In mathematics, the special linear group SL(2, R) or SL2(R) is the group of 2 × 2 real matrices with determinant one: formula_0 It is a connected non-compact simple real Lie group of dimension 3 with applications in geometry, topology, representation theory, and physics. SL(2, R) acts on the complex upper half-plane by fractional linear transformations. The group action factors through the quotient PSL(2, R) (the 2 × 2 projective special linear group over R). More specifically, PSL(2, R) = SL(2, R) / {±"I"}, where "I" denotes the 2 × 2 identity matrix. It contains the modular group PSL(2, Z). Also closely related is the 2-fold covering group, Mp(2, R), a metaplectic group (thinking of SL(2, R) as a symplectic group). Another related group is SL±(2, R), the group of real 2 × 2 matrices with determinant ±1; this is more commonly used in the context of the modular group, however. Descriptions. SL(2, R) is the group of all linear transformations of R2 that preserve oriented area. It is isomorphic to the symplectic group Sp(2, R) and the special unitary group SU(1, 1). It is also isomorphic to the group of unit-length coquaternions. The group SL±(2, R) preserves unoriented area: it may reverse orientation. The quotient PSL(2, R) has several interesting descriptions, up to Lie group isomorphism: Elements of the modular group PSL(2, Z) have additional interpretations, as do elements of the group SL(2, Z) (as linear transforms of the torus), and these interpretations can also be viewed in light of the general theory of SL(2, R). Homographies. Elements of PSL(2, R) are homographies on the real projective line R ∪ {∞}: formula_1 These projective transformations form a subgroup of PSL(2, C), which acts on the Riemann sphere by Möbius transformations. When the real line is considered the boundary of the hyperbolic plane, PSL(2, R) expresses hyperbolic motions. Möbius transformations. Elements of PSL(2, R) act on the complex plane by Möbius transformations: formula_2 This is precisely the set of Möbius transformations that preserve the upper half-plane. It follows that PSL(2, R) is the group of conformal automorphisms of the upper half-plane. By the Riemann mapping theorem, it is also isomorphic to the group of conformal automorphisms of the unit disc. These Möbius transformations act as the isometries of the upper half-plane model of hyperbolic space, and the corresponding Möbius transformations of the disc are the hyperbolic isometries of the Poincaré disk model. The above formula can be also used to define Möbius transformations of dual and double (aka split-complex) numbers. The corresponding geometries are in non-trivial relations to Lobachevskian geometry. Adjoint representation. The group SL(2, R) acts on its Lie algebra sl(2, R) by conjugation (remember that the Lie algebra elements are also 2 × 2 matrices), yielding a faithful 3-dimensional linear representation of PSL(2, R). This can alternatively be described as the action of PSL(2, R) on the space of quadratic forms on R2. The result is the following representation: formula_3 The Killing form on sl(2, R) has signature (2,1), and induces an isomorphism between PSL(2, R) and the Lorentz group SO+(2,1). This action of PSL(2, R) on Minkowski space restricts to the isometric action of PSL(2, R) on the hyperboloid model of the hyperbolic plane. Classification of elements. 
The eigenvalues of an element "A" ∈ SL(2, R) satisfy the characteristic polynomial formula_4 and therefore formula_5 This leads to the following classification of elements, with corresponding action on the Euclidean plane: The names correspond to the classification of conic sections by eccentricity: if one defines eccentricity as half the absolute value of the trace (ε = |tr|; dividing by 2 corrects for the effect of dimension, while absolute value corresponds to ignoring an overall factor of ±1 such as when working in PSL(2, R)), then this yields: formula_9, elliptic; formula_10, parabolic; formula_11, hyperbolic. The identity element 1 and negative identity element −1 (in PSL(2, R) they are the same), have trace ±2, and hence by this classification are parabolic elements, though they are often considered separately. The same classification is used for SL(2, C) and PSL(2, C) (Möbius transformations) and PSL(2, R) (real Möbius transformations), with the addition of "loxodromic" transformations corresponding to complex traces; analogous classifications are used elsewhere. A subgroup that is contained with the elliptic (respectively, parabolic, hyperbolic) elements, plus the identity and negative identity, is called an elliptic subgroup (respectively, parabolic subgroup, hyperbolic subgroup). The trichotomy of SL(2, R) into elliptic, parabolic, and hyperbolic elements is a classification into "subsets," not "subgroups:" these sets are not closed under multiplication (the product of two parabolic elements need not be parabolic, and so forth). However, each element is conjugate to a member of one of 3 standard one-parameter subgroups (possibly times ±1), as detailed below. Topologically, as trace is a continuous map, the elliptic elements (excluding ±1) form an open set, as do the hyperbolic elements (excluding ±1). By contrast, the parabolic elements, together with ±1, form a closed set that is not open. Elliptic elements. The eigenvalues for an elliptic element are both complex, and are conjugate values on the unit circle. Such an element is conjugate to a rotation of the Euclidean plane – they can be interpreted as rotations in a possibly non-orthogonal basis – and the corresponding element of PSL(2, R) acts as (conjugate to) a rotation of the hyperbolic plane and of Minkowski space. Elliptic elements of the modular group must have eigenvalues {ω, ω−1}, where "ω" is a primitive 3rd, 4th, or 6th root of unity. These are all the elements of the modular group with finite order, and they act on the torus as periodic diffeomorphisms. Elements of trace 0 may be called "circular elements" (by analogy with eccentricity) but this is rarely done; they correspond to elements with eigenvalues ±"i", and are conjugate to rotation by 90°, and square to -"I": they are the non-identity involutions in PSL(2). Elliptic elements are conjugate into the subgroup of rotations of the Euclidean plane, the special orthogonal group SO(2); the angle of rotation is arccos of half of the trace, with the sign of the rotation determined by orientation. (A rotation and its inverse are conjugate in GL(2) but not SL(2).) Parabolic elements. A parabolic element has only a single eigenvalue, which is either 1 or -1. Such an element acts as a shear mapping on the Euclidean plane, and the corresponding element of PSL(2, R) acts as a limit rotation of the hyperbolic plane and as a null rotation of Minkowski space. Parabolic elements of the modular group act as Dehn twists of the torus. 
Parabolic elements are conjugate into the 2 component group of standard shears × ±"I": formula_12. In fact, they are all conjugate (in SL(2)) to one of the four matrices formula_13, formula_14 (in GL(2) or SL±(2), the ± can be omitted, but in SL(2) it cannot). Hyperbolic elements. The eigenvalues for a hyperbolic element are both real, and are reciprocals. Such an element acts as a squeeze mapping of the Euclidean plane, and the corresponding element of PSL(2, R) acts as a translation of the hyperbolic plane and as a Lorentz boost on Minkowski space. Hyperbolic elements of the modular group act as Anosov diffeomorphisms of the torus. Hyperbolic elements are conjugate into the 2 component group of standard squeezes × ±"I": formula_15; the hyperbolic angle of the hyperbolic rotation is given by arcosh of half of the trace, but the sign can be positive or negative: in contrast to the elliptic case, a squeeze and its inverse are conjugate in SL₂ (by a rotation in the axes; for standard axes, a rotation by 90°). Conjugacy classes. By Jordan normal form, matrices are classified up to conjugacy (in GL("n", C)) by eigenvalues and nilpotence (concretely, nilpotence means where 1s occur in the Jordan blocks). Thus elements of SL(2) are classified up to conjugacy in GL(2) (or indeed SL±(2)) by trace (since determinant is fixed, and trace and determinant determine eigenvalues), except if the eigenvalues are equal, so ±I and the parabolic elements of trace +2 and trace -2 are not conjugate (the former have no off-diagonal entries in Jordan form, while the latter do). Up to conjugacy in SL(2) (instead of GL(2)), there is an additional datum, corresponding to orientation: a clockwise and counterclockwise (elliptical) rotation are not conjugate, nor are a positive and negative shear, as detailed above; thus for absolute value of trace less than 2, there are two conjugacy classes for each trace (clockwise and counterclockwise rotations), for absolute value of the trace equal to 2 there are three conjugacy classes for each trace (positive shear, identity, negative shear), and for absolute value of the trace greater than 2 there is one conjugacy class for a given trace. Iwasawa or KAN decomposition. The Iwasawa decomposition of a group is a method to construct the group as a product of three Lie subgroups "K", "A", "N". For formula_16 these three subgroups are formula_17 formula_18 formula_19 These three elements are the generators of the Elliptic, Hyperbolic, and Parabolic subsets respectively. Topology and universal cover. As a topological space, PSL(2, R) can be described as the unit tangent bundle of the hyperbolic plane. It is a circle bundle, and has a natural contact structure induced by the symplectic structure on the hyperbolic plane. SL(2, R) is a 2-fold cover of PSL(2, R), and can be thought of as the bundle of spinors on the hyperbolic plane. The fundamental group of SL(2, R) is the infinite cyclic group Z. The universal covering group, denoted formula_20, is an example of a finite-dimensional Lie group that is not a matrix group. That is, formula_20 admits no faithful, finite-dimensional representation. As a topological space, formula_20 is a line bundle over the hyperbolic plane. When imbued with a left-invariant metric, the 3-manifold formula_20 becomes one of the eight Thurston geometries. For example, formula_20 is the universal cover of the unit tangent bundle to any hyperbolic surface. 
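Two of the computations above, classifying an element by the absolute value of its trace and producing the Iwasawa factors g = kan with k in K, a in A and n in N, can be sketched numerically. The QR factorization route used below is a standard way to obtain the decomposition and is an implementation choice, not part of the text; the trace comparison uses exact equality for the parabolic case, which is adequate for the integer-entry example shown.

```python
# Trace classification and Iwasawa (KAN) decomposition for a matrix in SL(2, R),
# computed via a QR factorization with the diagonal of R forced positive.
import numpy as np

def classify(g):
    t = abs(np.trace(g))
    return "elliptic" if t < 2 else "parabolic" if t == 2 else "hyperbolic"

def iwasawa(g):
    """Write g = k @ a @ n with k a rotation, a positive diagonal, n upper unitriangular."""
    q, r = np.linalg.qr(g)
    signs = np.sign(np.diag(r))           # flip signs so that diag(r) > 0
    q, r = q * signs, (r.T * signs).T
    k = q                                  # rotation: det k = +1 since det g = 1, diag(r) > 0
    a = np.diag(np.diag(r))
    n = np.array([[1.0, r[0, 1] / r[0, 0]], [0.0, 1.0]])
    return k, a, n

g = np.array([[2.0, 1.0], [1.0, 1.0]])     # det = 1, trace = 3 -> hyperbolic
print(classify(g))
k, a, n = iwasawa(g)
assert np.allclose(k @ a @ n, g) and np.allclose(k.T @ k, np.eye(2))
assert np.isclose(np.linalg.det(k), 1.0) and np.all(np.diag(a) > 0)
print(np.round(k, 3), np.round(a, 3), np.round(n, 3), sep="\n")
```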
Any manifold modeled on formula_20 is orientable, and is a circle bundle over some 2-dimensional hyperbolic orbifold (a Seifert fiber space). Under this covering, the preimage of the modular group PSL(2, Z) is the braid group on 3 generators, "B"3, which is the universal central extension of the modular group. These are lattices inside the relevant algebraic groups, and this corresponds algebraically to the universal covering group in topology. The 2-fold covering group can be identified as Mp(2, R), a metaplectic group, thinking of SL(2, R) as the symplectic group Sp(2, R). The aforementioned groups together form a sequence: formula_21 However, there are other covering groups of PSL(2, R) corresponding to all "n", as "n" Z &lt; Z ≅ π1 (PSL(2, R)), which form a lattice of covering groups by divisibility; these cover SL(2, R) if and only if "n" is even. Algebraic structure. The center of SL(2, R) is the two-element group {±1}, and the quotient PSL(2, R) is simple. Discrete subgroups of PSL(2, R) are called Fuchsian groups. These are the hyperbolic analogue of the Euclidean wallpaper groups and Frieze groups. The most famous of these is the modular group PSL(2, Z), which acts on a tessellation of the hyperbolic plane by ideal triangles. The circle group SO(2) is a maximal compact subgroup of SL(2, R), and the circle SO(2) / {±1} is a maximal compact subgroup of PSL(2, R). The Schur multiplier of the discrete group PSL(2, R) is much larger than Z, and the universal central extension is much larger than the universal covering group. However these large central extensions do not take the topology into account and are somewhat pathological. Representation theory. SL(2, R) is a real, non-compact simple Lie group, and is the split-real form of the complex Lie group SL(2, C). The Lie algebra of SL(2, R), denoted sl(2, R), is the algebra of all real, traceless 2 × 2 matrices. It is the Bianchi algebra of type VIII. The finite-dimensional representation theory of SL(2, R) is equivalent to the representation theory of SU(2), which is the compact real form of SL(2, C). In particular, SL(2, R) has no nontrivial finite-dimensional unitary representations. This is a feature of every connected simple non-compact Lie group. For outline of proof, see non-unitarity of representations. The infinite-dimensional representation theory of SL(2, R) is quite interesting. The group has several families of unitary representations, which were worked out in detail by Gelfand and Naimark (1946), V. Bargmann (1947), and Harish-Chandra (1952). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
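The trace classification and the Iwasawa decomposition described above are easy to check numerically. The following sketch (in Python; the function names and the sample matrix are illustrative, not taken from any source) classifies an element of SL(2, R) by the absolute value of its trace and factors it as k·a·n by Gram–Schmidt on its columns.

```python
import numpy as np

def classify(g, tol=1e-9):
    """Classify an element of SL(2, R) by |trace|:
    < 2 elliptic, = 2 parabolic (or +/- identity), > 2 hyperbolic."""
    assert abs(np.linalg.det(g) - 1.0) < tol, "determinant must be 1"
    t = abs(np.trace(g))
    if abs(t - 2.0) < tol:
        return "parabolic (or +/- identity)"
    return "elliptic" if t < 2.0 else "hyperbolic"

def kan(g):
    """Iwasawa decomposition g = k a n: k a rotation, a a positive diagonal
    matrix, n upper unitriangular (Gram-Schmidt on the columns of g)."""
    a0, c0 = g[0, 0], g[1, 0]
    r = np.hypot(a0, c0)                       # length of the first column
    k = np.array([[a0 / r, -c0 / r],
                  [c0 / r,  a0 / r]])          # rotation sending e1 to that column
    t = k.T @ g                                # upper triangular, diagonal (r, 1/r)
    a = np.diag([t[0, 0], t[1, 1]])
    n = np.array([[1.0, t[0, 1] / t[0, 0]],
                  [0.0, 1.0]])
    return k, a, n

g = np.array([[2.0, 1.0],
              [3.0, 2.0]])                     # det = 1, trace = 4
print(classify(g))                             # hyperbolic
k, a, n = kan(g)
print(np.allclose(k @ a @ n, g))               # True
```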
[ { "math_id": 0, "text": "\\mbox{SL}(2,\\mathbf{R}) = \\left\\{ \\begin{pmatrix}\na & b \\\\\nc & d\n\\end{pmatrix} \\colon a,b,c,d \\in \\mathbf{R}\\mbox{ and }ad-bc=1\\right\\}." }, { "math_id": 1, "text": "[x,1] \\mapsto [x,\\ 1] \\begin{pmatrix}a & c \\\\ b & d \\end{pmatrix} \\ = \\ [ax + b,\\ cx + d] \\ = \\, \\left[\\frac{ax+b}{cx+d},\\ 1\\right] ." }, { "math_id": 2, "text": "z \\mapsto \\frac{az+b}{cz+d}\\;\\;\\;\\;\\mbox{ (where }a,b,c,d\\in\\mathbf{R}\\mbox{)}." }, { "math_id": 3, "text": "\\begin{bmatrix}\na & b \\\\\nc & d\n\\end{bmatrix} \\mapsto \\begin{bmatrix}\na^2 & 2ab & b^2 \\\\\nac & ad+bc & bd \\\\\nc^2 & 2cd & d^2\n\\end{bmatrix}." }, { "math_id": 4, "text": " \\lambda^2 \\,-\\, \\mathrm{tr}(A)\\,\\lambda \\,+\\, 1 \\,=\\, 0" }, { "math_id": 5, "text": " \\lambda = \\frac{\\mathrm{tr}(A) \\pm \\sqrt{\\mathrm{tr}(A)^2 - 4}}{2}. " }, { "math_id": 6, "text": "|\\mathrm{tr}(A)| < 2 " }, { "math_id": 7, "text": "|\\mathrm{tr}(A)| = 2 " }, { "math_id": 8, "text": "|\\mathrm{tr}(A)| > 2 " }, { "math_id": 9, "text": "\\epsilon < 1" }, { "math_id": 10, "text": "\\epsilon = 1" }, { "math_id": 11, "text": "\\epsilon > 1" }, { "math_id": 12, "text": "\\left(\\begin{smallmatrix}1 & \\lambda \\\\ & 1\\end{smallmatrix}\\right) \\times \\{\\pm I\\}" }, { "math_id": 13, "text": "\\left(\\begin{smallmatrix}1 & \\pm 1 \\\\ & 1\\end{smallmatrix}\\right)" }, { "math_id": 14, "text": "\\left(\\begin{smallmatrix}-1 & \\pm 1 \\\\ & -1\\end{smallmatrix}\\right)" }, { "math_id": 15, "text": "\\left(\\begin{smallmatrix}\\lambda \\\\ & \\lambda^{-1}\\end{smallmatrix}\\right) \\times \\{\\pm I\\}" }, { "math_id": 16, "text": "\\mbox{SL}(2,\\mathbf{R})" }, { "math_id": 17, "text": " \\mathbf{K} = \\left\\{\n \\begin{pmatrix}\n \\cos \\theta & -\\sin \\theta \\\\\n \\sin \\theta & \\cos \\theta \n \\end{pmatrix} \\in SL(2,\\mathbb{R}) \\ | \\ \\theta\\in\\mathbf{R} \\right\\} \\cong SO(2) ,\n" }, { "math_id": 18, "text": "\n\\mathbf{A} = \\left\\{\n \\begin{pmatrix}\n r & 0 \\\\\n 0 & r^{-1} \n \\end{pmatrix} \\in SL(2,\\mathbb{R}) \\ | \\ r > 0 \\right\\},\n" }, { "math_id": 19, "text": "\n\\mathbf{N} = \\left\\{\n \\begin{pmatrix}\n 1 & x \\\\\n 0 & 1 \n \\end{pmatrix} \\in SL(2,\\mathbb{R}) \\ | \\ x\\in\\mathbf{R} \\right\\}.\n" }, { "math_id": 20, "text": "\\overline{\\mbox{SL}(2,\\mathbf{R})}" }, { "math_id": 21, "text": "\\overline{\\mathrm{SL}(2,\\mathbf{R})} \\to \\cdots \\to \\mathrm{Mp}(2,\\mathbf{R})\n\\to \\mathrm{SL}(2,\\mathbf{R}) \\to \\mathrm{PSL}(2,\\mathbf{R})." } ]
https://en.wikipedia.org/wiki?curid=11105238
1110667
Hall–Janko graph
In the mathematical field of graph theory, the Hall–Janko graph, also known as the Hall–Janko–Wales graph, is a 36-regular undirected graph with 100 vertices and 1800 edges. It is a rank 3 strongly regular graph with parameters (100,36,14,12) and a maximum coclique of size 10. This parameter set is not unique; the graph is, however, uniquely determined by its parameters as a rank 3 graph. The Hall–Janko graph was originally constructed by D. Wales to establish the existence of the Hall–Janko group as an index 2 subgroup of its automorphism group. The Hall–Janko graph can be constructed out of objects in U3(3), the simple group of order 6048. The characteristic polynomial of the Hall–Janko graph is formula_0. Therefore the Hall–Janko graph is an integral graph: its spectrum consists entirely of integers.
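The spectrum quoted above can be recovered from the strongly regular graph parameters alone. A minimal Python sketch (the function name is illustrative), using the standard eigenvalue and multiplicity formulas for a strongly regular graph, reproduces the characteristic polynomial given by formula_0:

```python
from math import isqrt

def srg_spectrum(v, k, lam, mu):
    """Eigenvalues (with multiplicities) of a strongly regular graph
    with parameters (v, k, lambda, mu), via the standard formulas."""
    d = (lam - mu) ** 2 + 4 * (k - mu)
    s = isqrt(d)
    assert s * s == d                      # the spectrum is integral here
    r1 = (lam - mu + s) // 2               # positive eigenvalue
    r2 = (lam - mu - s) // 2               # negative eigenvalue
    m1 = ((v - 1) - (2 * k + (v - 1) * (lam - mu)) // s) // 2
    return {k: 1, r1: m1, r2: (v - 1) - m1}

print(srg_spectrum(100, 36, 14, 12))       # {36: 1, 6: 36, -4: 63}
```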
[ { "math_id": 0, "text": "(x-36)(x-6)^{36}(x+4)^{63}" } ]
https://en.wikipedia.org/wiki?curid=1110667
1110685
Modular representation theory
Studies linear representations of finite groups over a field K of positive characteristic p Modular representation theory is a branch of mathematics, and is the part of representation theory that studies linear representations of finite groups over a field "K" of positive characteristic "p", necessarily a prime number. As well as having applications to group theory, modular representations arise naturally in other branches of mathematics, such as algebraic geometry, coding theory, combinatorics and number theory. Within finite group theory, character-theoretic results proved by Richard Brauer using modular representation theory played an important role in early progress towards the classification of finite simple groups, especially for simple groups whose characterization was not amenable to purely group-theoretic methods because their Sylow 2-subgroups were too small in an appropriate sense. Also, a general result on embedding of elements of order 2 in finite groups called the Z* theorem, proved by George Glauberman using the theory developed by Brauer, was particularly useful in the classification program. If the characteristic "p" of "K" does not divide the order |"G"|, then modular representations are completely reducible, as with "ordinary" (characteristic 0) representations, by virtue of Maschke's theorem. In the other case, when |"G"| ≡ 0 mod "p", the process of averaging over the group needed to prove Maschke's theorem breaks down, and representations need not be completely reducible. Much of the discussion below implicitly assumes that the field "K" is sufficiently large (for example, "K" algebraically closed suffices), otherwise some statements need refinement. History. The earliest work on representation theory over finite fields is by L. E. Dickson, who showed that when "p" does not divide the order of the group, the representation theory is similar to that in characteristic 0. He also investigated modular invariants of some finite groups. The systematic study of modular representations, when the characteristic "p" divides the order of the group, was started by Richard Brauer and was continued by him for the next few decades. Example. Finding a representation of the cyclic group of two elements over F2 is equivalent to the problem of finding matrices whose square is the identity matrix. Over every field of characteristic other than 2, there is always a basis such that the matrix can be written as a diagonal matrix with only 1 or −1 occurring on the diagonal, such as formula_0 Over F2, there are many other possible matrices, such as formula_1 Over an algebraically closed field of positive characteristic, the representation theory of a finite cyclic group is fully explained by the theory of the Jordan normal form. Non-diagonal Jordan forms occur when the characteristic divides the order of the group. Ring theory interpretation. Given a field "K" and a finite group "G", the group algebra "K"["G"] (which is the "K"-vector space with "K"-basis consisting of the elements of "G", endowed with algebra multiplication by extending the multiplication of "G" by linearity) is an Artinian ring. When the order of "G" is divisible by the characteristic of "K", the group algebra is not semisimple, hence has non-zero Jacobson radical. In that case, there are finite-dimensional modules for the group algebra that are not projective modules. By contrast, in the characteristic 0 case every irreducible representation is a direct summand of the regular representation, hence is projective. Brauer characters. 
Modular representation theory was developed by Richard Brauer from about 1940 onwards to study in greater depth the relationships between the characteristic "p" representation theory, ordinary character theory and structure of "G", especially as the latter relates to the embedding of, and relationships between, its "p"-subgroups. Such results can be applied in group theory to problems not directly phrased in terms of representations. Brauer introduced the notion now known as the Brauer character. When "K" is algebraically closed of positive characteristic "p", there is a bijection between roots of unity in "K" and complex roots of unity of order coprime to "p". Once a choice of such a bijection is fixed, the Brauer character of a representation assigns to each group element of order coprime to "p" the sum of complex roots of unity corresponding to the eigenvalues (including multiplicities) of that element in the given representation. The Brauer character of a representation determines its composition factors but not, in general, its equivalence type. The irreducible Brauer characters are those afforded by the simple modules. These are integral (though not necessarily non-negative) combinations of the restrictions to elements of order coprime to "p" of the ordinary irreducible characters. Conversely, the restriction to the elements of order coprime to "p" of each ordinary irreducible character is uniquely expressible as a non-negative integer combination of irreducible Brauer characters. Reduction (mod "p"). In the theory initially developed by Brauer, the link between ordinary representation theory and modular representation theory is best exemplified by considering the group algebra of the group "G" over a complete discrete valuation ring "R" with residue field "K" of positive characteristic "p" and field of fractions "F" of characteristic 0, such as the "p"-adic integers. The structure of "R"["G"] is closely related both to the structure of the group algebra "K"["G"] and to the structure of the semisimple group algebra "F"["G"], and there is much interplay between the module theory of the three algebras. Each "R"["G"]-module naturally gives rise to an "F"["G"]-module, and, by a process often known informally as reduction (mod "p"), to a "K"["G"]-module. On the other hand, since "R" is a principal ideal domain, each finite-dimensional "F"["G"]-module arises by extension of scalars from an "R"["G"]-module. In general, however, not all "K"["G"]-modules arise as reductions (mod "p") of "R"["G"]-modules. Those that do are liftable. Number of simple modules. In ordinary representation theory, the number of simple modules "k"("G") is equal to the number of conjugacy classes of "G". In the modular case, the number "l"("G") of simple modules is equal to the number of conjugacy classes whose elements have order coprime to the relevant prime "p", the so-called "p"-regular classes. Blocks and the structure of the group algebra. In modular representation theory, while Maschke's theorem does not hold when the characteristic divides the group order, the group algebra may be decomposed as the direct sum of a maximal collection of two-sided ideals known as blocks. 
When the field "F" has characteristic 0, or characteristic coprime to the group order, there is still such a decomposition of the group algebra "F"["G"] as a sum of blocks (one for each isomorphism type of simple module), but the situation is relatively transparent when "F" is sufficiently large: each block is a full matrix algebra over "F", the endomorphism ring of the vector space underlying the associated simple module. To obtain the blocks, the identity element of the group "G" is decomposed as a sum of primitive idempotents in "Z"("R"[G]), the center of the group algebra over the maximal order "R" of "F". The block corresponding to the primitive idempotent "e" is the two-sided ideal "e" "R"["G"]. For each indecomposable "R"["G"]-module, there is only one such primitive idempotent that does not annihilate it, and the module is said to belong to (or to be in) the corresponding block (in which case, all its composition factors also belong to that block). In particular, each simple module belongs to a unique block. Each ordinary irreducible character may also be assigned to a unique block according to its decomposition as a sum of irreducible Brauer characters. The block containing the trivial module is known as the principal block. Projective modules. In ordinary representation theory, every indecomposable module is irreducible, and so every module is projective. However, the simple modules with characteristic dividing the group order are rarely projective. Indeed, if a simple module is projective, then it is the only simple module in its block, which is then isomorphic to the endomorphism algebra of the underlying vector space, a full matrix algebra. In that case, the block is said to have 'defect 0'. Generally, the structure of projective modules is difficult to determine. For the group algebra of a finite group, the (isomorphism types of) projective indecomposable modules are in a one-to-one correspondence with the (isomorphism types of) simple modules: the socle of each projective indecomposable is simple (and isomorphic to the top), and this affords the bijection, as non-isomorphic projective indecomposables have non-isomorphic socles. The multiplicity of a projective indecomposable module as a summand of the group algebra (viewed as the regular module) is the dimension of its socle (for large enough fields of characteristic zero, this recovers the fact that each simple module occurs with multiplicity equal to its dimension as a direct summand of the regular module). Each projective indecomposable module (and hence each projective module) in positive characteristic "p" may be lifted to a module in characteristic 0. Using the ring "R" as above, with residue field "K", the identity element of "G" may be decomposed as a sum of mutually orthogonal primitive idempotents (not necessarily central) of "K"["G"]. Each projective indecomposable "K"["G"]-module is isomorphic to "e"."K"["G"] for a primitive idempotent "e" that occurs in this decomposition. The idempotent "e" lifts to a primitive idempotent, say "E", of "R"["G"], and the left module "E"."R"["G"] has reduction (mod "p") isomorphic to "e"."K"["G"]. Some orthogonality relations for Brauer characters. When a projective module is lifted, the associated character vanishes on all elements of order divisible by "p", and (with consistent choice of roots of unity), agrees with the Brauer character of the original characteristic "p" module on "p"-regular elements. 
The (usual character-ring) inner product of the Brauer character of a projective indecomposable with any other Brauer character can thus be defined: this is 0 if the second Brauer character is that of the socle of a non-isomorphic projective indecomposable, and 1 if the second Brauer character is that of its own socle. The multiplicity of an ordinary irreducible character in the character of the lift of a projective indecomposable is equal to the number of occurrences of the Brauer character of the socle of the projective indecomposable when the restriction of the ordinary character to "p"-regular elements is expressed as a sum of irreducible Brauer characters. Decomposition matrix and Cartan matrix. The composition factors of the projective indecomposable modules may be calculated as follows: Given the ordinary irreducible and irreducible Brauer characters of a particular finite group, the irreducible ordinary characters may be decomposed as non-negative integer combinations of the irreducible Brauer characters. The integers involved can be placed in a matrix, with the ordinary irreducible characters assigned rows and the irreducible Brauer characters assigned columns. This is referred to as the "decomposition matrix", and is frequently labelled "D". It is customary to place the trivial ordinary and Brauer characters in the first row and column respectively. The product of the transpose of "D" with "D" itself results in the Cartan matrix, usually denoted "C"; this is a symmetric matrix such that the entries in its "j"-th row are the multiplicities of the respective simple modules as composition factors of the "j"-th projective indecomposable module. The Cartan matrix is non-singular; in fact, its determinant is a power of the characteristic of "K". Since a projective indecomposable module in a given block has all its composition factors in that same block, each block has its own Cartan matrix. Defect groups. To each block "B" of the group algebra "K"["G"], Brauer associated a certain "p"-subgroup, known as its defect group (where "p" is the characteristic of "K"). Formally, it is the largest "p"-subgroup "D" of "G" for which there is a Brauer correspondent of "B" for the subgroup formula_2, where formula_3 is the centralizer of "D" in "G". The defect group of a block is unique up to conjugacy and has a strong influence on the structure of the block. For example, if the defect group is trivial, then the block contains just one simple module, just one ordinary character, the ordinary and Brauer irreducible characters agree on elements of order prime to the relevant characteristic "p", and the simple module is projective. At the other extreme, when "K" has characteristic "p", the Sylow "p"-subgroup of the finite group "G" is a defect group for the principal block of "K"["G"]. The order of the defect group of a block has many arithmetical characterizations related to representation theory. It is the largest invariant factor of the Cartan matrix of the block, and occurs with multiplicity one. Also, the power of "p" dividing the index of the defect group of a block is the greatest common divisor of the powers of "p" dividing the dimensions of the simple modules in that block, and this coincides with the greatest common divisor of the powers of "p" dividing the degrees of the ordinary irreducible characters in that block. 
Other relationships between the defect group of a block and character theory include Brauer's result that if no conjugate of the "p"-part of a group element "g" is in the defect group of a given block, then each irreducible character in that block vanishes at "g". This is one of many consequences of Brauer's second main theorem. The defect group of a block also has several characterizations in the more module-theoretic approach to block theory, building on the work of J. A. Green, which associates a "p"-subgroup known as the vertex to an indecomposable module, defined in terms of relative projectivity of the module. For example, the vertex of each indecomposable module in a block is contained (up to conjugacy) in the defect group of the block, and no proper subgroup of the defect group has that property. Brauer's first main theorem states that the number of blocks of a finite group that have a given "p"-subgroup as defect group is the same as the corresponding number for the normalizer in the group of that "p"-subgroup. The easiest block structure to analyse with non-trivial defect group is when the latter is cyclic. Then there are only finitely many isomorphism types of indecomposable modules in the block, and the structure of the block is by now well understood, by virtue of work of Brauer, E.C. Dade, J.A. Green and J.G. Thompson, among others. In all other cases, there are infinitely many isomorphism types of indecomposable modules in the block. Blocks whose defect groups are not cyclic can be divided into two types: tame and wild. The tame blocks (which only occur for the prime 2) have as a defect group a dihedral group, semidihedral group or (generalized) quaternion group, and their structure has been broadly determined in a series of papers by Karin Erdmann. The indecomposable modules in wild blocks are extremely difficult to classify, even in principle.
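As a small illustration of the decomposition and Cartan matrices discussed above, the following Python sketch computes C as the product of the transpose of D with D. The decomposition matrix used is the commonly quoted one for the symmetric group S3 at p = 3; it is included here as an assumed example rather than taken from the text.

```python
import numpy as np

# Decomposition matrix D for S3 at p = 3 (assumed small example): rows are
# the ordinary irreducible characters (trivial, sign, 2-dimensional),
# columns the irreducible Brauer characters (trivial, sign); the
# 2-dimensional character reduces mod 3 with composition factors trivial + sign.
D = np.array([[1, 0],
              [0, 1],
              [1, 1]])

C = D.T @ D                                # Cartan matrix
print(C)                                   # [[2 1]
                                           #  [1 2]]
print(round(np.linalg.det(C)))             # 3, a power of the characteristic p = 3
```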
[ { "math_id": 0, "text": "\n\\begin{bmatrix}\n1 & 0\\\\\n0 & -1\n\\end{bmatrix}.\n" }, { "math_id": 1, "text": "\n\\begin{bmatrix}\n1 & 1\\\\\n0 & 1\n\\end{bmatrix}.\n" }, { "math_id": 2, "text": "DC_G(D)" }, { "math_id": 3, "text": "C_G(D)" } ]
https://en.wikipedia.org/wiki?curid=1110685
1110742
Multiplicative group
Mathematical structure with multiplication as its operation In mathematics and group theory, the term multiplicative group refers to one of the following concepts: Group scheme of roots of unity. The group scheme of "n"-th roots of unity is by definition the kernel of the "n"-power map on the multiplicative group GL(1), considered as a group scheme. That is, for any integer "n" &gt; 1 we can consider the morphism on the multiplicative group that takes "n"-th powers, and take an appropriate fiber product of schemes, with the morphism "e" that serves as the identity. The resulting group scheme is written μ"n" (or formula_7). It gives rise to a reduced scheme, when we take it over a field "K", if and only if the characteristic of "K" does not divide "n". This makes it a source of some key examples of non-reduced schemes (schemes with nilpotent elements in their structure sheaves); for example μ"p" over a finite field with "p" elements for any prime number "p". This phenomenon is not easily expressed in the classical language of algebraic geometry. For example, it turns out to be of major importance in expressing the duality theory of abelian varieties in characteristic "p" (theory of Pierre Cartier). The Galois cohomology of this group scheme is a way of expressing Kummer theory.
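A small numerical illustration of the n-th power map on the multiplicative group of a prime field (Python; the prime and exponents are arbitrary choices): the number of n-th roots of unity in F_p is gcd(n, p − 1), and when the characteristic divides n the polynomial x^p − 1 collapses to (x − 1)^p, an elementary shadow of the non-reducedness of μp in characteristic p.

```python
from math import gcd, comb

p = 5

# number of n-th roots of unity in F_p is gcd(n, p - 1),
# since the multiplicative group of F_p is cyclic of order p - 1
for n in (2, 3, 4, 5, 8):
    roots = [x for x in range(1, p) if pow(x, n, p) == 1]
    assert len(roots) == gcd(n, p - 1)
    print(n, roots)

# when the characteristic divides n, x^n - 1 acquires repeated roots:
# over F_p we have x^p - 1 = (x - 1)^p, supported at the single point x = 1
binomial = [comb(p, j) * (-1) ** (p - j) % p for j in range(p + 1)]  # (x - 1)^p
x_p_minus_1 = [(p - 1) if j == 0 else 1 if j == p else 0 for j in range(p + 1)]
print(binomial == x_p_minus_1)             # True
```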
[ { "math_id": 0, "text": "\\mathbb{Z}/n\\mathbb{Z}" }, { "math_id": 1, "text": "\\mathbb{R}^+" }, { "math_id": 2, "text": "\\mathbb{R}" }, { "math_id": 3, "text": "F" }, { "math_id": 4, "text": "F^\\times = F -\\{0\\}" }, { "math_id": 5, "text": "F = \\mathbb F_p=\\mathbb Z/p\\mathbb Z" }, { "math_id": 6, "text": "F^\\times \\cong C_{q-1}" }, { "math_id": 7, "text": "\\mu\\!\\!\\mu_n" } ]
https://en.wikipedia.org/wiki?curid=1110742
1110818
Held group
Sporadic simple group In the area of modern algebra known as group theory, the Held group "He" is a sporadic simple group of order 2^10 · 3^3 · 5^2 · 7^3 · 17 = 4030387200 ≈ 4×10^9. History. "He" is one of the 26 sporadic groups and was found by Dieter Held (1969a, 1969b) during an investigation of simple groups containing an involution whose centralizer is an extension of the extra special group 2^(1+6) by the linear group L3(2), which is the same involution centralizer as the Mathieu group M24. A second such group is the linear group L5(2). The Held group is the third possibility, and its construction was completed by John McKay and Graham Higman. In all of these groups, the extension splits. The outer automorphism group has order 2 and the Schur multiplier is trivial. Representations. The smallest faithful complex representation has dimension 51; there are two such representations that are duals of each other. It centralizes an element of order 7 in the Monster group. As a result, the prime 7 plays a special role in the theory of the group; for example, the smallest representation of the Held group over any field is the 50-dimensional representation over the field with 7 elements, and it acts naturally on a vertex operator algebra over the field with 7 elements. The smallest permutation representation is a rank 5 action on 2058 points with point stabilizer Sp4(4):2. The graph associated with this representation has rank 5 and is directed; the outer automorphism reverses the direction of the edges, decreasing the rank to 4. Since He is the normalizer of a Frobenius group 7:3 in the Monster group, it commutes not only with a 7-cycle but also with some 3-cycles. Each of these 3-cycles is normalized by the Fischer group Fi24, so He:2 is a subgroup of the derived subgroup Fi24' (the non-simple group Fi24 has 2 conjugacy classes of He:2, which are fused by an outer automorphism). As mentioned above, the smallest permutation representation of He has 2058 points, and when realized inside Fi24', there is an orbit of 2058 transpositions. Generalized monstrous moonshine. Conway and Norton suggested in their 1979 paper that monstrous moonshine is not limited to the monster, but that similar phenomena may be found for other groups. Larissa Queen and others subsequently found that one can construct the expansions of many Hauptmoduln from simple combinations of dimensions of sporadic groups. For "He", the relevant McKay-Thompson series is formula_0 where one can set the constant term a(0) = 10 (OEIS: ), formula_1 and "η"("τ") is the Dedekind eta function. Presentation. It can be defined in terms of the generators "a" and "b" and relations formula_2 Maximal subgroups. "He" has 11 conjugacy classes of maximal subgroups.
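As a quick arithmetic check of the group order quoted at the start of the article (Python):

```python
order = 2**10 * 3**3 * 5**2 * 7**3 * 17
assert order == 4030387200            # about 4 x 10^9
print(f"{order:,}")                   # 4,030,387,200
```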
[ { "math_id": 0, "text": "T_{7A}(\\tau)" }, { "math_id": 1, "text": "\\begin{align}\n j_{7A}(\\tau)\n &= T_{7A}(\\tau)+10\\\\\n &= \\left(\\left(\\tfrac{\\eta(\\tau)}{\\eta(7\\tau)}\\right)^{2} + 7\\left(\\tfrac{\\eta(7\\tau)}{\\eta(\\tau)}\\right)^2\\right)^2\\\\\n &= \\frac{1}{q} + 10 + 51q + 204q^2 + 681q^3 + 1956q^4 + 5135q^5 + \\dots\n\\end{align}" }, { "math_id": 2, "text": "a^2 = b^7 = (ab)^{17} = [a, b]^6 = \\left [a, b^3 \\right ]^5 = \\left [a, babab^{-1}abab \\right ] = (ab)^4 ab^2 ab^{-3} ababab^{-1}ab^3 ab^{-2}ab^2 = 1." } ]
https://en.wikipedia.org/wiki?curid=1110818
11112693
Virtual temperature
Virtual temperature of a moist air parcel In atmospheric thermodynamics, the virtual temperature (formula_0) of a moist air parcel is the temperature at which a theoretical dry air parcel would have a total pressure and density equal to the moist parcel of air. The virtual temperature of unsaturated moist air is always greater than the absolute air temperature, however, as the existence of suspended cloud droplets reduces the virtual temperature. The virtual temperature effect is also known as the vapor buoyancy effect. It has been described to increase Earth's thermal emission by warming the tropical atmosphere. Introduction. Description. In atmospheric thermodynamic processes, it is often useful to assume air parcels behave approximately adiabatically, and approximately ideally. The specific gas constant for the standardized mass of one kilogram of a particular gas is variable, and described mathematically as formula_1 where formula_2 is the molar gas constant, and formula_3 is the apparent molar mass of gas formula_4 in kilograms per mole. The apparent molar mass of a theoretical moist parcel in Earth's atmosphere can be defined in components of water vapor and dry air as formula_5 with formula_6 being partial pressure of water, formula_7 dry air pressure, and formula_8 and formula_9 representing the molar masses of water vapor and dry air respectively. The total pressure formula_10 is described by Dalton's law of partial pressures: formula_11 Purpose. Rather than carry out these calculations, it is convenient to scale another quantity within the ideal gas law to equate the pressure and density of a dry parcel to a moist parcel. The only variable quantity of the ideal gas law independent of density and pressure is temperature. This scaled quantity is known as virtual temperature, and it allows for the use of the dry-air equation of state for moist air. Temperature has an inverse proportionality to density. Thus, analytically, a higher vapor pressure would yield a lower density, which should yield a higher virtual temperature in turn. Derivation. Consider a moist air parcel containing masses formula_12 and formula_13 of dry air and water vapor in a given volume formula_14. The density is given by formula_15 where formula_16 and formula_17 are the densities the dry air and water vapor would respectively have when occupying the volume of the air parcel. Rearranging the standard ideal gas equation with these variables gives formula_18 and formula_19 Solving for the densities in each equation and combining with the law of partial pressures yields formula_20 Then, solving for formula_10 and using formula_21 is approximately 0.622 in Earth's atmosphere: formula_22 where the virtual temperature formula_0 is formula_23 We now have a non-linear scalar for temperature dependent purely on the unitless value formula_24, allowing for varying amounts of water vapor in an air parcel. This virtual temperature formula_0 in units of kelvin can be used seamlessly in any thermodynamic equation necessitating it. Variations. Often the more easily accessible atmospheric parameter is the mixing ratio formula_25. 
Expanding the definition of vapor pressure in the law of partial pressures as presented above, together with the definition of the mixing ratio, gives formula_26 which allows formula_27 Algebraic expansion of that equation, ignoring higher orders of formula_25 due to its typical order in Earth's atmosphere of formula_28, and substituting formula_29 with its constant value yields the linear approximation formula_30 with the mixing ratio formula_25 expressed in g/g. An approximate conversion using formula_31 in degrees Celsius and mixing ratio formula_25 in g/kg is formula_32 Knowing that specific humidity formula_33 is given in terms of the mixing ratio formula_25 as formula_34, we can write the mixing ratio in terms of the specific humidity as formula_35. We can now write the virtual temperature formula_0 in terms of specific humidity as formula_36 Simplifying the above reduces to formula_37, and using the value formula_38 we can write formula_39 Virtual potential temperature. Virtual potential temperature is similar to potential temperature in that it removes the temperature variation caused by changes in pressure. Virtual potential temperature is useful as a surrogate for density in buoyancy calculations and in turbulence transport which includes vertical air movement. Density temperature. A moist air parcel may also contain liquid droplets and ice crystals in addition to water vapor. A net mixing ratio formula_40 can be defined as the sum of the mixing ratios of water vapor formula_25, liquid formula_41, and ice formula_42 present in the parcel. Assuming that formula_41 and formula_42 are typically much smaller than formula_25, a "density temperature" of a parcel formula_43 can be defined, representing the temperature at which a theoretical dry air parcel would have a total pressure and density equal to those of a moist parcel of air while accounting for condensates: formula_44 Uses. Virtual temperature is used in adjusting CAPE soundings for assessing convective available potential energy from skew-T log-P diagrams. The errors associated with ignoring the virtual temperature correction can be quite significant for smaller CAPE values. Thus, in the early stages of convective storm formation, a virtual temperature correction is significant in identifying the potential intensity in tropical cyclogenesis.
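The formulas above are straightforward to evaluate. A minimal Python sketch (function names and the sample values are illustrative) compares the exact virtual temperature with the linearised and rule-of-thumb approximations:

```python
def virtual_temperature(T, e, p, eps=0.622):
    """Exact form T_v = T / (1 - (e/p) * (1 - eps)); T in kelvin,
    vapour pressure e and total pressure p in the same units."""
    return T / (1.0 - (e / p) * (1.0 - eps))

def virtual_temperature_linear(T, w):
    """Linearised form T_v ~ T * (1 + 0.608 w), mixing ratio w in kg/kg."""
    return T * (1.0 + 0.608 * w)

T, p, e = 300.0, 1000.0, 20.0            # K, hPa, hPa (illustrative values)
w = 0.622 * e / (p - e)                  # mixing ratio consistent with e/p = w/(w + eps)
print(virtual_temperature(T, e, p))      # ~302.3 K
print(virtual_temperature_linear(T, w))  # ~302.3 K
print(T + 1000.0 * w / 6.0)              # additive rule of thumb T_v - T ~ w/6, w in g/kg
```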
[ { "math_id": 0, "text": "T_v" }, { "math_id": 1, "text": "R_x = \\frac{R^*}{M_x}," }, { "math_id": 2, "text": "R^*" }, { "math_id": 3, "text": "M_x" }, { "math_id": 4, "text": "x" }, { "math_id": 5, "text": "M_\\text{air} = \\frac{e}{p} M_v + \\frac{p_d}{p} M_d," }, { "math_id": 6, "text": "e" }, { "math_id": 7, "text": "p_d" }, { "math_id": 8, "text": "M_v" }, { "math_id": 9, "text": "M_d" }, { "math_id": 10, "text": "p" }, { "math_id": 11, "text": "p = p_d + e." }, { "math_id": 12, "text": "m_d" }, { "math_id": 13, "text": "m_v" }, { "math_id": 14, "text": "V" }, { "math_id": 15, "text": "\\rho = \\frac{m_d + m_v}{V} = \\rho_d + \\rho_v," }, { "math_id": 16, "text": "\\rho_d" }, { "math_id": 17, "text": "\\rho_v" }, { "math_id": 18, "text": "e = \\rho_v R_v T" }, { "math_id": 19, "text": "p_d = \\rho_d R_d T." }, { "math_id": 20, "text": "\\rho = \\frac{p - e}{R_dT} + \\frac{e}{R_v T}." }, { "math_id": 21, "text": "\\epsilon = \\tfrac{R_d}{R_v} = \\tfrac{M_v}{M_d}" }, { "math_id": 22, "text": "p = \\rho R_d T_v," }, { "math_id": 23, "text": "T_v = \\frac{T}{1 - \\frac{e}{p}(1 - \\epsilon)}." }, { "math_id": 24, "text": "e/p" }, { "math_id": 25, "text": "w" }, { "math_id": 26, "text": "\\frac{e}{p} = \\frac{w}{w + \\epsilon}," }, { "math_id": 27, "text": "T_v = T\\frac{w + \\epsilon}{\\epsilon(1 + w)}." }, { "math_id": 28, "text": "10^{-3}" }, { "math_id": 29, "text": "\\epsilon" }, { "math_id": 30, "text": "T_v \\approx T(1 + 0.608w)." }, { "math_id": 31, "text": "T" }, { "math_id": 32, "text": "T_v \\approx T + \\frac{w}{6}." }, { "math_id": 33, "text": "q" }, { "math_id": 34, "text": "q = \\frac{w}{1+w}" }, { "math_id": 35, "text": "w = \\frac{q}{1-q}" }, { "math_id": 36, "text": "T_v = T\\frac{\\frac{q}{1-q}+\\epsilon}{\\epsilon(1+\\frac{q}{1-q})}" }, { "math_id": 37, "text": "T_v = T[\\frac{q}{\\epsilon}+(1-q)]" }, { "math_id": 38, "text": "\\epsilon = 0.622" }, { "math_id": 39, "text": "T_v = T(0.608q+1)" }, { "math_id": 40, "text": "w_T" }, { "math_id": 41, "text": "w_i" }, { "math_id": 42, "text": "w_l" }, { "math_id": 43, "text": "T_\\rho" }, { "math_id": 44, "text": "T_\\rho = T \\frac{1 + w/\\epsilon}{1 + w_T}" } ]
https://en.wikipedia.org/wiki?curid=11112693
1111315
Galois cohomology
In mathematics, Galois cohomology is the study of the group cohomology of Galois modules, that is, the application of homological algebra to modules for Galois groups. A Galois group "G" associated with a field extension "L"/"K" acts in a natural way on some abelian groups, for example those constructed directly from "L", but also through other Galois representations that may be derived by more abstract means. Galois cohomology accounts for the way in which taking Galois-invariant elements fails to be an exact functor. History. The current theory of Galois cohomology came together around 1950, when it was realised that the Galois cohomology of ideal class groups in algebraic number theory was one way to formulate class field theory, at the time it was in the process of ridding itself of connections to L-functions. Galois cohomology makes no assumption that Galois groups are abelian groups, so this was a non-abelian theory. It was formulated abstractly as a theory of class formations. Two developments of the 1960s turned the position around. Firstly, Galois cohomology appeared as the foundational layer of étale cohomology theory (roughly speaking, the theory as it applies to zero-dimensional schemes). Secondly, non-abelian class field theory was launched as part of the Langlands philosophy. The earliest results identifiable as Galois cohomology had been known long before, in algebraic number theory and the arithmetic of elliptic curves. The normal basis theorem implies that the first cohomology group of the additive group of "L" will vanish; this is a result on general field extensions, but was known in some form to Richard Dedekind. The corresponding result for the multiplicative group is known as Hilbert's Theorem 90, and was known before 1900. Kummer theory was another such early part of the theory, giving a description of the connecting homomorphism coming from the "m"-th power map. In fact, for a while the multiplicative case of a 1-cocycle for groups that are not necessarily cyclic was formulated as the solubility of Noether's equations, named for Emmy Noether; they appear under this name in Emil Artin's treatment of Galois theory, and may have been folklore in the 1920s. The case of 2-cocycles for the multiplicative group is that of the Brauer group, and the implications seem to have been well known to algebraists of the 1930s. In another direction, that of torsors, these were already implicit in the infinite descent arguments of Fermat for elliptic curves. Numerous direct calculations were done, and the proof of the Mordell–Weil theorem had to proceed by some surrogate of a finiteness proof for a particular "H"1 group. The 'twisted' nature of objects over fields that are not algebraically closed, which are not isomorphic but become so over the algebraic closure, was also known in many cases linked to other algebraic groups (such as quadratic forms, simple algebras, Severi–Brauer varieties), in the 1930s, before the general theory arrived. The needs of number theory were in particular expressed by the requirement to have control of a local-global principle for Galois cohomology. This was formulated by means of results in class field theory, such as Hasse's norm theorem. In the case of elliptic curves, it led to the key definition of the Tate–Shafarevich group in the Selmer group, which is the obstruction to the success of a local-global principle. 
Despite its great importance, for example in the Birch and Swinnerton-Dyer conjecture, it proved very difficult to get any control of it, until results of Karl Rubin gave a way to show in some cases it was finite (a result generally believed, since its conjectural order was predicted by an L-function formula). The other major development of the theory, also involving John Tate was the Tate–Poitou duality result. Technically speaking, "G" may be a profinite group, in which case the definitions need to be adjusted to allow only continuous cochains. Formal details. Galois cohomology is the study of the group cohomology of Galois groups. Let formula_0 be a field extension with Galois group formula_1 and formula_2 an abelian group on which formula_1 acts. The cohomology group: formula_3 is the Galois cohomology group associated to the representation of the Galois group on formula_2. It is possible, moreover, to extend this definition to the case when formula_2 is a non-abelian group and formula_4, and this extension is required for some of the most important applications of the theory. In particular, formula_5 is the set of fixed points of the Galois group in formula_2, and formula_6 is related to the 1-cocycles (which parametrize quaternion algebras for instance). When the extension field formula_7 is the separable closure of the field formula_8, one often writes instead formula_9 and formula_10 Hilbert's theorem 90 in cohomological language is the statement that the first cohomology group with values in the multiplicative group of formula_11 is trivial for a Galois extension formula_0: formula_12 This vanishing theorem can be generalized to a large class of algebraic groups, also formulated in the language of Galois cohomology. The most straightforward generalization is that for any quasisplit formula_8-torus formula_13, formula_14 Denote by formula_15 the general linear group in formula_16 dimensions. Then Hilbert 90 is the formula_17 special case of formula_18 Likewise, the vanishing theorem holds for the special linear group formula_19 and for the symplectic group formula_20 where formula_21 is a non-degenerate alternating bilinear form defined over formula_8. The second cohomology group describes the factor systems attached to the Galois group. Thus for any normal extension formula_0, the relative Brauer group can be identified with the group formula_22 As a special case, with the separable closure, formula_23 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
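Hilbert's Theorem 90 can be seen concretely for a cyclic extension of finite fields. The Python sketch below (the representation of F9 and all names are ad hoc) checks that in the extension F9|F3, whose Galois group is generated by the Frobenius x ↦ x^3, every element of norm 1 has the form σ(β)/β = β^2:

```python
# Elements of F_9 = F_3[i]/(i^2 + 1), stored as pairs (a, b) <-> a + b*i.
def mul(x, y):
    a, b = x
    c, d = y
    return ((a * c - b * d) % 3, (a * d + b * c) % 3)

def power(x, n):
    out = (1, 0)
    for _ in range(n):
        out = mul(out, x)
    return out

units = [(a, b) for a in range(3) for b in range(3) if (a, b) != (0, 0)]

# The Galois group is generated by the Frobenius s(x) = x^3,
# and the norm is N(x) = x * s(x) = x^4.
norm_one = {x for x in units if power(x, 4) == (1, 0)}

# Hilbert 90: every norm-1 element should be of the form s(b)/b = b^2.
frobenius_quotients = {mul(b, b) for b in units}

print(norm_one == frobenius_quotients)    # True
```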
[ { "math_id": 0, "text": "L|K" }, { "math_id": 1, "text": "G(L|K)" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "H^n(L|K, M) := H^n(G(L|K),M),\\quad n\\ge 0" }, { "math_id": 4, "text": "n=0,1" }, { "math_id": 5, "text": "H^0(L|K,M)" }, { "math_id": 6, "text": "H^1(L|K,M)" }, { "math_id": 7, "text": "L=K^s" }, { "math_id": 8, "text": "K" }, { "math_id": 9, "text": "G_K = G(K^s|K)" }, { "math_id": 10, "text": "H^n(K,M) := H^n(G_K,M)." }, { "math_id": 11, "text": "L" }, { "math_id": 12, "text": "H^1(L|K,L^*)=1." }, { "math_id": 13, "text": "T" }, { "math_id": 14, "text": "H^1(K,T) = 1." }, { "math_id": 15, "text": "GL_n(L)" }, { "math_id": 16, "text": "n" }, { "math_id": 17, "text": "n=1" }, { "math_id": 18, "text": "H^1(L|K,GL_n(L))=1." }, { "math_id": 19, "text": "SL_n(L)" }, { "math_id": 20, "text": "Sp(\\omega)_L" }, { "math_id": 21, "text": "\\omega" }, { "math_id": 22, "text": "Br(L|K) = H^2(L|K, L^*)." }, { "math_id": 23, "text": "Br(K) = H^2(K,(K^s)^*)." } ]
https://en.wikipedia.org/wiki?curid=1111315
11118768
Parabolic induction
In mathematics, parabolic induction is a method of constructing representations of a reductive group from representations of its parabolic subgroups. If "G" is a reductive algebraic group and formula_0 is the Langlands decomposition of a parabolic subgroup "P", then parabolic induction consists of taking a representation of formula_1, extending it to "P" by letting "N" act trivially, and inducing the result from "P" to "G". There are some generalizations of parabolic induction using cohomology, such as cohomological parabolic induction and Deligne–Lusztig theory. Philosophy of cusp forms. The "philosophy of cusp forms" was a slogan of Harish-Chandra, expressing his idea of a kind of reverse engineering of automorphic form theory, from the point of view of representation theory. The discrete group Γ fundamental to the classical theory disappears, superficially. What remains is the basic idea that representations in general are to be constructed by parabolic induction of cuspidal representations. A similar philosophy was enunciated by Israel Gelfand, and the philosophy is a precursor of the Langlands program. A consequence for thinking about representation theory is that cuspidal representations are the fundamental class of objects, from which other representations may be constructed by procedures of induction. According to Nolan Wallach: "Put in the simplest terms the "philosophy of cusp forms" says that for each Γ-conjugacy classes of Q-rational parabolic subgroups one should construct automorphic functions (from objects from spaces of lower dimensions) whose constant terms are zero for other conjugacy classes and the constant terms for [an] element of the given class give all constant terms for this parabolic subgroup. This is almost possible and leads to a description of all automorphic forms in terms of these constructs and cusp forms." The construction that does this is the Eisenstein series.
[ { "math_id": 0, "text": "P=MAN" }, { "math_id": 1, "text": "MA" } ]
https://en.wikipedia.org/wiki?curid=11118768
11118994
Piecewise linear continuation
Simplicial continuation, or piecewise linear continuation (Allgower and Georg), is a one-parameter continuation method which is well suited to small to medium embedding spaces. The algorithm has been generalized to compute higher-dimensional manifolds by (Allgower and Gnutzman) and (Allgower and Schmidt). The algorithm for drawing contours is a simplicial continuation algorithm, and since it is easy to visualize, it serves as a good introduction to the algorithm. Contour plotting. The contour plotting problem is to find the zeros (contours) of formula_0 (formula_1 a smooth scalar valued function) in the square formula_2, The square is divided into small triangles, usually by introducing points at the corners of a regular square mesh formula_3, formula_4, making a table of the values of formula_5 at each corner formula_6, and then dividing each square into two triangles. The value of formula_5 at the corners of the triangle defines a unique Piecewise Linear interpolant formula_7 to formula_8 over each triangle. One way of writing this interpolant on the triangle with corners formula_9 is as the set of equations formula_10 formula_11 formula_12 formula_13 formula_14 The first four equations can be solved for formula_15 (this maps the original triangle to a right unit triangle), then the remaining equation gives the interpolated value of formula_8. Over the whole mesh of triangles, this piecewise linear interpolant is continuous. The contour of the interpolant on an individual triangle is a line segment (it is an interval on the intersection of two planes). The equation for the line can be found, however the points where the line crosses the edges of the triangle are the endpoints of the line segment. The contour of the piecewise linear interpolant is a set of curves made up of these line segments. Any point on the edge connecting formula_16 and formula_17 can be written as formula_18 with formula_19 in formula_20, and the linear interpolant over the edge is formula_21 So setting formula_22 formula_23 and formula_24 Since this only depends on values on the edge, every triangle which shares this edge will produce the same point, so the contour will be continuous. Each triangle can be tested independently, and if all are checked the entire set of contour curves can be found. Piecewise linear continuation. Piecewise linear continuation is similar to contour plotting (Dobkin, Levy, Thurston and Wilks), but in higher dimensions. The algorithm is based on the following results: Lemma 1. An '(n-1)'-dimensional simplex has n vertices, and the function F assigns an 'n'-vector to each. The simplex is convex, and any point within the simplex is a convex combination of the vertices. That is: If x is in the interior of an (n-1)-dimensional simplex with n vertices formula_25, then there are positive scalars formula_26 such that formula_27 formula_28 If the vertices of the simplex are linearly independent the non-negative scalars formula_29 are unique for each point x, and are called the barycentric coordinates of x. They determine the value of the unique interpolant by the formula: formula_30 Lemma 2. There are basically two tests. The one which was first used labels the vertices of the simplex with a vector of signs (+/-) of the coordinates of the vertex. For example the vertex (.5,-.2,1.) would be labelled (+,-,+). A simplex is called "completely labelled" if there is a vertex whose label begins with a string of "+" signs of length 0,1,2,3,4...n. 
A completely labelled simplex contains a neighborhood of the origin. This may be surprising, but what underlies this result is that for each coordinate of a completely labelled simplex there is a vector with "+" and another with a "-". Put another way, the smallest cube with edges parallel to the coordinate axes and which covers the simplex has pairs of faces on opposite sides of 0. (i.e. a "+" and a "-" for each coordinate). The second approach is called "vector labelling". It is based on the barycentric coordinates of the vertices of the simplex. The first step is to find the barycentric coordinates of the origin, and then the test that the simplex contains the origin is simply that all the barycentric coordinates are positive and the sum is less than 1. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
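A minimal Python sketch of the two ingredients described above (all function names are illustrative): the zero of the linear interpolant along a triangle edge, and the vector-labelling test that a simplex contains the origin via the barycentric coordinates of the origin.

```python
import numpy as np

def edge_crossing(p0, f0, p1, f1):
    """Zero of the linear interpolant of f along the edge p0-p1
    (the point p0 - f0 * (p1 - p0) / (f1 - f0) from the text),
    or None if f does not change sign on the edge."""
    if f0 * f1 > 0 or f0 == f1:
        return None
    t = -f0 / (f1 - f0)
    return (1 - t) * np.asarray(p0, float) + t * np.asarray(p1, float)

def triangle_contour_segment(pts, vals):
    """End points of the contour segment of the piecewise linear
    interpolant on a single triangle (None if the contour misses it)."""
    crossings = []
    for i, j in ((0, 1), (1, 2), (2, 0)):
        c = edge_crossing(pts[i], vals[i], pts[j], vals[j])
        if c is not None:
            crossings.append(c)
    return crossings[:2] if len(crossings) >= 2 else None

def contains_origin(simplex):
    """Vector-labelling test: solve for the barycentric coordinates of the
    origin and check that they are all non-negative (equivalent to the
    positivity-and-sum test described above)."""
    V = np.asarray(simplex, float)              # one vertex per row
    A = np.vstack([V.T, np.ones(len(V))])       # sum_i alpha_i v_i = 0, sum alpha_i = 1
    b = np.zeros(len(V))
    b[-1] = 1.0
    alpha = np.linalg.solve(A, b)
    return bool(np.all(alpha >= 0)), alpha

tri = [(0.5, 0.5), (1.5, 0.5), (0.5, 1.5)]      # one triangle of a mesh
vals = [x * x + y * y - 1 for x, y in tri]      # f(x, y) = x^2 + y^2 - 1 at its corners
print(triangle_contour_segment(tri, vals))

print(contains_origin([(-1.0, -1.0), (2.0, 0.0), (0.0, 2.0)]))   # contains the origin
```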
[ { "math_id": 0, "text": " f(x,y)=0\\," }, { "math_id": 1, "text": " f(\\cdot)\\," }, { "math_id": 2, "text": "0\\leq x \\leq 1, 0\\leq y \\leq 1\\," }, { "math_id": 3, "text": "ih_x\\leq x\\leq (i+1)h_x\\," }, { "math_id": 4, "text": "jh_y\\leq y \\leq (j+1)h_y\\," }, { "math_id": 5, "text": "f(x_i,y_j)\\," }, { "math_id": 6, "text": "(i,j)\\," }, { "math_id": 7, "text": "lf(x,y)\\," }, { "math_id": 8, "text": "f(\\cdot)\\," }, { "math_id": 9, "text": "(x_0,y_0),~(x_1,y_1),~(x_2,y_2)\\," }, { "math_id": 10, "text": " (x,y) = (x_0,y_0)+(x_1-x_0,y_1-y_0)s+(x_2-x_0,y_2-y_0)t\\," }, { "math_id": 11, "text": " 0\\leq s\\," }, { "math_id": 12, "text": " 0\\leq t\\," }, { "math_id": 13, "text": " s+t \\leq 1\\," }, { "math_id": 14, "text": " lf(x,y) = f(x_0,y_0)+(f(x_1,y_1)-f(x_0,y_0))s+(f(x_2,y_2)-f(x_0,y_0))t\\," }, { "math_id": 15, "text": "(s,t)\\," }, { "math_id": 16, "text": "(x_0,y_0)\\," }, { "math_id": 17, "text": "(x_1,y_1)\\," }, { "math_id": 18, "text": "(x,y) = (x_0,y_0) + t (x_1-x_0,y_1-y_0),\\," }, { "math_id": 19, "text": "t\\," }, { "math_id": 20, "text": "(0,1)\\," }, { "math_id": 21, "text": " f \\sim f_0 + t (f_1-f_0)\\," }, { "math_id": 22, "text": " f = 0\\," }, { "math_id": 23, "text": "t = -f_0/(f_1-f_0)\\," }, { "math_id": 24, "text": " (x,y) = (x_0,y_0)-f_0*(x_1-x_0,y_1-y_0)/(f_1-f_0)\\," }, { "math_id": 25, "text": " v_i " }, { "math_id": 26, "text": "0<\\alpha_i" }, { "math_id": 27, "text": " \\mathbf{x} = \\sum_i \\alpha_i \\mathbf{v}_i " }, { "math_id": 28, "text": " \\sum_i \\alpha_i = 1.\\," }, { "math_id": 29, "text": "\\alpha" }, { "math_id": 30, "text": "LF = \\sum_i \\alpha_i F(\\mathbf{v}_i)" } ]
https://en.wikipedia.org/wiki?curid=11118994
11120026
Sum-free set
In additive combinatorics and number theory, a subset "A" of an abelian group "G" is said to be sum-free if the sumset "A" + "A" is disjoint from "A". In other words, "A" is sum-free if the equation formula_0 has no solution with formula_1. For example, the set of odd numbers is a sum-free subset of the integers, and the set {"N" + 1, ..., 2"N"} forms a large sum-free subset of the set {1, ..., 2"N"}. Fermat's Last Theorem is the statement that, for a given integer "n" > 2, the set of all nonzero "n"th powers of the integers is a sum-free set. Some basic questions that have been asked about sum-free sets are: A sum-free set is said to be maximal if it is not a proper subset of another sum-free set. Let formula_3 be defined as follows: formula_4 is the largest number formula_5 such that any subset of formula_6 of size "n" has a sum-free subset of size "k". The function is subadditive, and by the Fekete subadditivity lemma, formula_7 exists. Erdős proved that formula_8, and conjectured that equality holds. This was proved by Eberhard, Green, and Manners.
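The definition is easy to test by brute force for small sets. A Python sketch (function names are illustrative; note that the two summands in the defining equation are allowed to be equal):

```python
from itertools import combinations

def is_sum_free(A):
    """True if no a + b (a, b in A, possibly equal) lies in A."""
    A = set(A)
    return all(a + b not in A for a in A for b in A)

def largest_sum_free_subset(S):
    """Brute-force search for a largest sum-free subset of a small set S."""
    S = list(S)
    for size in range(len(S), 0, -1):
        for cand in combinations(S, size):
            if is_sum_free(cand):
                return set(cand)
    return set()

print(is_sum_free({1, 3, 5, 7}))             # True: odd numbers are sum-free
print(is_sum_free({5, 6, 7, 8}))             # True: the top half of {1, ..., 8}
print(largest_sum_free_subset(range(1, 9)))  # a sum-free subset of maximum size
```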
[ { "math_id": 0, "text": "a + b = c" }, { "math_id": 1, "text": "a,b,c \\in A" }, { "math_id": 2, "text": "O(2^{N/2})" }, { "math_id": 3, "text": "f: [1, \\infty) \\to [1, \\infty)" }, { "math_id": 4, "text": "f(n)" }, { "math_id": 5, "text": "k" }, { "math_id": 6, "text": "[1, \\infty)" }, { "math_id": 7, "text": "\\lim_n\\frac{f(n)}{n}" }, { "math_id": 8, "text": "\\lim_n\\frac{f(n)}{n} \\geq \\frac 13" } ]
https://en.wikipedia.org/wiki?curid=11120026
11120053
Weyl–Brauer matrices
Matrix realization of the Clifford algebra In mathematics, particularly in the theory of spinors, the Weyl–Brauer matrices are an explicit realization of a Clifford algebra as a matrix algebra of 2⌊"n"/2⌋ × 2⌊"n"/2⌋ matrices. They generalize the Pauli matrices to n dimensions, and are a specific construction of higher-dimensional gamma matrices. They are named for Richard Brauer and Hermann Weyl, and were one of the earliest systematic constructions of spinors from a representation theoretic standpoint. The matrices are formed by taking tensor products of the Pauli matrices, and the space of spinors in n dimensions may then be realized as the column vectors of size 2⌊"n"/2⌋ on which the Weyl–Brauer matrices act. Construction. Suppose that "V" = Rn is a Euclidean space of dimension "n". There is a sharp contrast in the construction of the Weyl–Brauer matrices depending on whether the dimension "n" is even or odd. Let n = 2k (or 2k+1) and suppose that the Euclidean quadratic form on V is given by formula_0 where ("p"i, "q"i) are the standard coordinates on R"n". Define matrices 1, 1', "P", and "Q" by formula_1. In even or in odd dimensionality, this quantization procedure amounts to replacing the ordinary "p", "q" coordinates with non-commutative coordinates constructed from "P", "Q" in a suitable fashion. Even case. In the case when "n" = 2"k" is even, let formula_2 formula_3 for "i" = 1,2...,"k" (where the "P" or "Q" is considered to occupy the "i"-th position). The operation formula_4 is the tensor product of matrices. It is no longer important to distinguish between the "P"s and "Q"s, so we shall simply refer to them all with the symbol "P", and regard the index on "P"i as ranging from "i" = 1 to "i" = 2"k". For instance, the following properties hold: formula_5, and formula_6 for all unequal pairs "i" and "j". (Clifford relations.) Thus the algebra generated by the "P"i is the Clifford algebra of euclidean "n"-space. Let "A" denote the algebra generated by these matrices. By counting dimensions, "A" is a complete 2"k"×2"k" matrix algebra over the complex numbers. As a matrix algebra, therefore, it acts on 2"k"-dimensional column vectors (with complex entries). These column vectors are the spinors. We now turn to the action of the orthogonal group on the spinors. Consider the application of an orthogonal transformation to the coordinates, which in turn acts upon the "P"i via formula_7. That is, formula_8. Since the "P"i generate "A", the action of this transformation extends to all of "A" and produces an automorphism of "A". From elementary linear algebra, any such automorphism must be given by a change of basis. Hence there is a matrix "S", depending on "R", such that formula_9 (1). In particular, "S"("R") will act on column vectors (spinors). By decomposing rotations into products of reflections, one can write down a formula for "S"("R") in much the same way as in the case of three dimensions. There is more than one matrix "S"("R") which produces the action in (1). The ambiguity defines "S"("R") up to a nonevanescent scalar factor "c". Since "S"("R") and "cS"("R") define the same transformation (1), the action of the orthogonal group on spinors is not single-valued, but instead descends to an action on the projective space associated to the space of spinors. This multiple-valued action can be sharpened by normalizing the constant "c" in such a way that (det "S"("R"))2 = 1. 
In order to do this, however, it is necessary to discuss how the space of spinors (column vectors) may be identified with its dual (row vectors). In order to identify spinors with their duals, let "C" be the matrix defined by formula_10 Then conjugation by "C" converts a "P"i matrix to its transpose: t"P"i = "C P"i "C"−1. Under the action of a rotation, formula_11 whence "C" "S"("R") "C"−1 = α t"S"("R")−1 for some scalar α. The scalar factor α can be made to equal one by rescaling "S"("R"). Under these circumstances, (det "S"("R"))2 = 1, as required. In physics, the matrix "C" is conventionally interpreted as charge conjugation. Weyl spinors. Let "U" be the element of the algebra "A" defined by formula_12, ("k" factors). Then "U" is preserved under rotations, so in particular its eigenspace decomposition (which necessarily corresponds to the eigenvalues +1 and -1, occurring in equal numbers) is also stabilized by rotations. As a consequence, each spinor admits a decomposition into eigenvectors under "U": ξ = ξ+ + ξ− into a "right-handed Weyl spinor" ξ+ and a "left-handed Weyl spinor" ξ−. Because rotations preserve the eigenspaces of "U", the rotations themselves act diagonally as matrices "S"("R")+, "S"("R")− via ("S"("R")ξ)+ = "S"+("R") ξ+, and ("S"("R")ξ)− = "S"−("R") ξ−. This decomposition is not, however, stable under improper rotations (e.g., reflections in a hyperplane). A reflection in a hyperplane has the effect of interchanging the two eigenspaces. Thus there are two irreducible spin representations in even dimensions given by the left-handed and right-handed Weyl spinors, each of which has dimension 2k-1. However, there is only one irreducible pin representation (see below) owing to the non-invariance of the above eigenspace decomposition under improper rotations, and that has dimension 2k. Odd case. In the quantization for an odd number 2"k"+1 of dimensions, the matrices "P"i may be introduced as above for "i" = 1,2...,2"k", and the following matrix may be adjoined to the system: formula_13, ("k" factors), so that the Clifford relations still hold. This adjunction has no effect on the algebra "A" of matrices generated by the "P"i, since in either case "A" is still a complete matrix algebra of the same dimension. Thus "A", which is a complete 2"k"×2"k" matrix algebra, is not the Clifford algebra, which is an algebra of dimension 2×2"k"×2"k". Rather "A" is the quotient of the Clifford algebra by a certain ideal. Nevertheless, one can show that if "R" is a proper rotation (an orthogonal transformation of determinant one), then the rotation among the coordinates formula_14 is again an automorphism of "A", and so induces a change of basis formula_9 exactly as in the even-dimensional case. The projective representation "S"("R") may again be normalized so that (det "S"("R"))2 = 1. It may further be extended to general orthogonal transformations by setting "S"("R") = -"S"(-"R") in case det "R" = -1 (i.e., if "R" is a reversal). In the case of odd dimensions it is not possible to split a spinor into a pair of Weyl spinors, and spinors form an irreducible representation of the spin group. As in the even case, it is possible to identify spinors with their duals, but for one caveat. The identification of the space of spinors with its dual space is invariant under "proper" rotations, and so the two spaces are spinorially equivalent. However, if "improper" rotations are also taken into consideration, then the spin space and its dual are not isomorphic. 
Thus, while there is only one spin representation in odd dimensions, there is a pair of inequivalent pin representations. This fact is not evident from the Weyl quantization approach, however, and is more easily seen by considering the representations of the full Clifford algebra.
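The even-dimensional construction can be checked numerically by forming Kronecker (tensor) products of the matrices 1, 1', P and Q defined above and verifying the Clifford relations. The following Python sketch is purely illustrative; the function names and the test size k = 3 (so n = 6) are choices made here, not part of the article.

```python
import numpy as np

# Generators used in the article: 1, 1', P, Q
I2  = np.eye(2)                              # 1
I2p = np.array([[1, 0], [0, -1]])            # 1' = sigma_3
P   = np.array([[0, 1], [1, 0]])             # P  = sigma_1
Q   = np.array([[0, 1j], [-1j, 0]])          # Q  = -sigma_2

def kron_chain(mats):
    """Kronecker product of a list of 2x2 matrices."""
    out = np.array([[1.0 + 0j]])
    for m in mats:
        out = np.kron(out, m)
    return out

def weyl_brauer(k):
    """Return the 2k generators P_1,...,P_2k acting on 2**k-dimensional spinors."""
    gens = []
    for i in range(k):                       # P_i = 1'⊗...⊗1'⊗P⊗1⊗...⊗1
        gens.append(kron_chain([I2p]*i + [P] + [I2]*(k-1-i)))
    for i in range(k):                       # Q_i = 1'⊗...⊗1'⊗Q⊗1⊗...⊗1
        gens.append(kron_chain([I2p]*i + [Q] + [I2]*(k-1-i)))
    return gens

k = 3                                        # hypothetical test size, n = 2k = 6
gens = weyl_brauer(k)
dim = 2**k
ident = np.eye(dim)

# Clifford relations: P_i^2 = 1 and P_i P_j = -P_j P_i for i != j
for i, a in enumerate(gens):
    assert np.allclose(a @ a, ident)
    for j, b in enumerate(gens):
        if i != j:
            assert np.allclose(a @ b, -b @ a)
print(f"{2*k} generators of size {dim}x{dim} satisfy the Clifford relations")
```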
[ { "math_id": 0, "text": "q_1^2+\\dots+q_k^2+p_1^2+\\dots+p_k^2 ~~ (+p_n^2)~," }, { "math_id": 1, "text": "\n\\begin{matrix}\n{\\mathbf 1}=\\sigma_0=\\left(\\begin{matrix}1&0\\\\0&1\\end{matrix}\\right),&\n{\\mathbf 1}'=\\sigma_3=\\left(\\begin{matrix}1&0\\\\0&-1\\end{matrix}\\right),\\\\\nP=\\sigma_1=\\left(\\begin{matrix}0&1\\\\1&0\\end{matrix}\\right),&\nQ=-\\sigma_2=\\left(\\begin{matrix}0&i\\\\-i&0\\end{matrix}\\right)\n\\end{matrix}\n" }, { "math_id": 2, "text": "P_i = {\\mathbf 1}'\\otimes\\dots\\otimes{\\mathbf 1}'\\otimes P \\otimes {\\mathbf 1}\\otimes\\dots\\otimes{\\mathbf 1}" }, { "math_id": 3, "text": "Q_i = {\\mathbf 1}'\\otimes\\dots\\otimes{\\mathbf 1}'\\otimes Q \\otimes {\\mathbf 1}\\otimes\\dots\\otimes{\\mathbf 1}" }, { "math_id": 4, "text": "\\otimes" }, { "math_id": 5, "text": "P_i^2 = 1, i=1,2,...,2k" }, { "math_id": 6, "text": "P_iP_j=-P_jP_i" }, { "math_id": 7, "text": "P_i\\mapsto R(P)_i = \\sum_j R_{ij}P_j" }, { "math_id": 8, "text": "R\\in SO(n)" }, { "math_id": 9, "text": "R(P)_i = S(R)P_iS(R)^{-1}" }, { "math_id": 10, "text": "C=P\\otimes Q\\otimes P\\otimes\\dots\\otimes Q." }, { "math_id": 11, "text": "\\hbox{ }^tP_i\\rightarrow \\,^tS(R)^{-1}\\,^tP_i\\,^tS(R) = (CS(R)C^{-1})\\,^tP_i(CS(R)C^{-1})^{-1}" }, { "math_id": 12, "text": "U={\\mathbf 1}'\\otimes\\dots\\otimes{\\mathbf 1}'" }, { "math_id": 13, "text": "P_n = {\\mathbf 1}'\\otimes\\dots\\otimes{\\mathbf 1}'" }, { "math_id": 14, "text": "R(P)_i = \\sum_j R_{ij}P_j" } ]
https://en.wikipedia.org/wiki?curid=11120053
1112015
Scheimpflug principle
Optical imaging rule The Scheimpflug principle is a description of the geometric relationship between the orientation of the plane of focus, the lens plane, and the image plane of an optical system (such as a camera) when the lens plane is not parallel to the image plane. It is applicable to the use of some camera movements on a view camera. It is also the principle used in corneal pachymetry, the mapping of corneal topography, done prior to refractive eye surgery such as LASIK, and used for early detection of keratoconus. The principle is named after Austrian army Captain Theodor Scheimpflug, who used it in devising a systematic method and apparatus for correcting perspective distortion in aerial photographs, although Captain Scheimpflug himself credits Jules Carpentier with the rule, thus making it an example of Stigler's law of eponymy. Description. Normally, the lens and image (film or sensor) planes of a camera are parallel, and the plane of focus (PoF) is parallel to the lens and image planes. If a planar subject (such as the side of a building) is also parallel to the image plane, it can coincide with the PoF, and the entire subject can be rendered sharply. If the subject plane is not parallel to the image plane, it will be in focus only along a line where it intersects the PoF, as illustrated in Figure 1. But when a lens is tilted with respect to the image plane, an oblique tangent extended from the image plane and another extended from the lens plane meet at a line through which the PoF also passes, as illustrated in Figure 2. With this condition, a planar subject that is not parallel to the image plane can be completely in focus. While many photographers were/are unaware of the exact geometric relationship between the PoF, lens plane, and film plane, swinging and tilting the lens to swing and tilt the PoF was practiced since the middle of the 19th century. But, when Carpentier and Scheimpflug wanted to produce equipment to automate the process, they needed to find a geometric relationship. Scheimpflug (1904) referenced this concept in his British patent; Carpentier (1901) also described the concept in an earlier British patent for a perspective-correcting photographic enlarger. The concept can be inferred from a theorem in projective geometry of Gérard Desargues; the principle also readily derives from simple geometric considerations and application of the Gaussian thin-lens formula, as shown in the section Proof of the Scheimpflug principle. Changing the plane of focus. When the lens and image planes are not parallel, adjusting focus rotates the PoF rather than merely displacing it along the lens axis. The axis of rotation is the intersection of the lens's front focal plane and a plane through the center of the lens parallel to the image plane, as shown in Figure 3. As the image plane is moved from IP1 to IP2, the PoF rotates about the axis G from position PoF1 to position PoF2; the "Scheimpflug line" moves from position S1 to position S2. The axis of rotation has been given many different names: "counter axis" (Scheimpflug 1904), "hinge line" (Merklinger 1996), and "pivot point" (Wheeler). 
Refer to Figure 4; if a lens with focal length f is tilted by an angle θ relative to the image plane, the distance J from the center of the lens to the axis G is given by formula_0 If v′ is the distance along the line of sight from the image plane to the center of the lens, the angle ψ between the image plane and the PoF is given by formula_1 Equivalently, on the object side of the lens, if u′ is the distance along the line of sight from the center of the lens to the PoF, the angle ψ is given by formula_2 The angle ψ increases with focus distance; when the focus is at infinity, the PoF is perpendicular to the image plane for any nonzero value of tilt. The distances u′ and v′ along the line of sight are "not" the object and image distances u and v used in the thin-lens formula formula_3 where the distances are perpendicular to the lens plane. Distances u and v are related to the line-of-sight distances by u = u′ cos θ and v = v′ cos θ. For an essentially planar subject, such as a roadway extending for miles from the camera on flat terrain, the tilt can be set to place the axis G in the subject plane, and the focus then adjusted to rotate the PoF so that it coincides with the subject plane. The entire subject can be in focus, even if it is not parallel to the image plane. The plane of focus also can be rotated so that it does not coincide with the subject plane, and so that only a small part of the subject is in focus. This technique sometimes is referred to as "anti-Scheimpflug", though it actually relies on the Scheimpflug principle. Rotation of the plane of focus can be accomplished by rotating either the lens plane or the image plane. Rotating the lens (as by adjusting the front standard on a view camera) does not alter linear perspective in a planar subject such as the face of a building, but requires a lens with a large image circle to avoid vignetting. Rotating the image plane (as by adjusting the back or rear standard on a view camera) alters perspective (e.g., the sides of a building converge), but works with a lens that has a smaller image circle. Rotation of the lens or back about a horizontal axis is commonly called "tilt", and rotation about a vertical axis is commonly called "swing". Camera movements. Tilt and swing are movements available on most view cameras, often on both the front and rear standards, and on some small- and medium format cameras using special lenses that partially emulate view-camera movements. Such lenses are often called tilt-shift or "perspective control" lenses. For some camera models there are adapters that enable movements with some of the manufacturer's regular lenses, and a crude approximation may be achieved with such attachments as the 'Lensbaby' or by 'freelensing'. Depth of field. When the lens and image planes are parallel, the depth of field (DoF) extends between parallel planes on either side of the plane of focus. When the Scheimpflug principle is employed, the DoF becomes wedge shaped (Merklinger 1996, 32; Tillmanns 1997, 71), with the apex of the wedge at the PoF rotation axis, as shown in Figure 5. The DoF is zero at the apex, remains shallow at the edge of the lens's field of view, and increases with distance from the camera. The shallow DoF near the camera requires the PoF to be positioned carefully if near objects are to be rendered sharply. On a plane parallel to the image plane, the DoF is equally distributed above and below the PoF; in Figure 5, the distances yn and yf on the plane VP are equal. 
This distribution can be helpful in determining the best position for the PoF; if a scene includes a distant tall feature, the best fit of the DoF to the scene often results from having the PoF pass through the vertical midpoint of that feature. The angular DoF, however, is "not" equally distributed about the PoF. The distances yn and yf are given by (Merklinger 1996, 126) formula_4 where f is the lens focal length, v′ and u′ are the image and object distances parallel to the line of sight, uh is the hyperfocal distance, and J is the distance from the center of the lens to the PoF rotation axis. By solving the image-side equation for tan ψ for v′ and substituting for v′ and uh in the equation above, the values may be given equivalently by formula_5 where N is the lens f-number and c is the circle of confusion. At a large focus distance (equivalent to a large angle between the PoF and the image plane), v′ ≈ f, and (Merklinger 1996, 48) formula_6 or formula_7 Thus at the hyperfocal distance, the DoF on a plane parallel to the image plane extends a distance of J on either side of the PoF. With some subjects, such as landscapes, the wedge-shaped DoF is a good fit to the scene, and satisfactory sharpness can often be achieved with a smaller lens f-number (larger aperture) than would be required if the PoF were parallel to the image plane. Selective focus. The region of sharpness can also be made very small by using large tilt and a small "f"-number. For example, with 8° tilt on a 90 mm lens for a small-format camera, the total vertical DoF at the hyperfocal distance is approximately formula_8 At an aperture of "f"/2.8, with a circle of confusion of 0.03 mm, this occurs at a distance "u′" of approximately formula_9 Of course, the tilt also affects the position of the PoF, so if the tilt is chosen to minimize the region of sharpness, the PoF cannot be set to pass through more than one arbitrarily chosen point. If the PoF is to pass through more than one arbitrary point, the tilt and focus are fixed, and the lens "f"-number is the only available control for adjusting sharpness. Derivation of the formulas. Proof of the Scheimpflug principle. In a two-dimensional representation, an object plane inclined to the lens plane is a line described by formula_10 . By optical convention, both object and image distances are positive for real images, so that in Figure 6, the object distance "u" increases to the left of the lens plane LP; the vertical axis uses the normal Cartesian convention, with values above the optical axis positive and those below the optical axis negative. The relationship between the object distance "u", the image distance "v", and the lens focal length "f" is given by the thin-lens equation formula_11 solving for "u" gives formula_12 so that formula_13 . The magnification "m" is the ratio of image height "yv" to object height "yu" : formula_14 "yu" and "yv" are of opposite sense, so the magnification is negative, indicating an inverted image. From similar triangles in Figure 6, the magnification also relates the image and object distances, so that formula_15 . On the image side of the lens, formula_16 giving formula_17 . The locus of focus for the inclined object plane is a plane; in two-dimensional representation, the y-intercept is the same as that for the line describing the object plane, so the object plane, lens plane, and image plane have a common intersection. A similar proof is given by Larmore (1965, 171–173). Angle of the PoF with the image plane. 
From Figure 7, formula_18 where u′ and v′ are the object and image distances along the line of sight and S is the distance from the line of sight to the Scheimpflug intersection at S. Again from Figure 7, formula_19 combining the previous two equations gives formula_20 From the thin-lens equation, formula_21 Solving for u′ gives formula_22 substituting this result into the equation for tan ψ gives formula_23 or formula_24 Similarly, the thin-lens equation can be solved for v′, and the result substituted into the equation for tan ψ to give the object-side relationship formula_25 Noting that formula_26 the relationship between ψ and θ can be expressed in terms of the magnification m of the object in the line of sight: formula_27 Proof of the "hinge rule". From Figure 7, formula_28 combining with the previous result for the object side and eliminating ψ gives formula_29 Again from Figure 7, formula_30 so the distance d is the lens focal length f, and the point G is at the intersection the lens's front focal plane with a line parallel to the image plane. The distance J depends only on the lens tilt and the lens focal length; in particular, it is not affected by changes in focus. From Figure 7, formula_31 so the distance to the Scheimpflug intersection at S varies as the focus is changed. Thus the PoF rotates about the axis at G as focus is adjusted. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
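A short numerical sketch of the relations derived above: the hinge distance J = f/sin θ, the object-side PoF angle ψ, and the DoF half-width y_x on a plane parallel to the image plane. The 90 mm, 8°, f/2.8 values echo the selective-focus example in the article; the function names are illustrative, not a standard API.

```python
import math

def hinge_distance(f, theta_deg):
    """J = f / sin(theta): distance from the lens centre to the PoF rotation axis G."""
    return f / math.sin(math.radians(theta_deg))

def pof_angle_object_side(f, theta_deg, u_prime):
    """Angle psi between image plane and PoF, from tan(psi) = (u'/f) sin(theta)."""
    th = math.radians(theta_deg)
    return math.degrees(math.atan((u_prime / f) * math.sin(th)))

def dof_half_width(f, theta_deg, u_prime, N, c):
    """y_x = (N c / f) (1/tan(theta) - 1/tan(psi)) u' on a plane parallel to the image plane."""
    th = math.radians(theta_deg)
    psi = math.radians(pof_angle_object_side(f, theta_deg, u_prime))
    return (N * c / f) * (1.0 / math.tan(th) - 1.0 / math.tan(psi)) * u_prime

# Values from the selective-focus example: 90 mm lens, 8 degrees of tilt, f/2.8, c = 0.03 mm
f, theta, N, c = 90.0, 8.0, 2.8, 0.03
u_h = f**2 / (N * c)                       # hyperfocal distance in mm (about 96.4 m)
J = hinge_distance(f, theta)

print(f"J          = {J:8.1f} mm (total wedge height 2J = {2*J:.0f} mm)")
print(f"u_h        = {u_h/1000:8.1f} m")
print(f"y_x at u_h = {dof_half_width(f, theta, u_h, N, c):8.1f} mm (approximately J)")
```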
[ { "math_id": 0, "text": "J = \\frac f {\\sin \\theta}." }, { "math_id": 1, "text": "\\tan \\psi = \\frac {v'} {v' \\cos \\theta - f} \\sin \\theta." }, { "math_id": 2, "text": "\\tan \\psi = \\frac {u'} f \\sin \\theta." }, { "math_id": 3, "text": "\\frac 1 u + \\frac 1 v = \\frac 1 f," }, { "math_id": 4, "text": "y_x = \\frac f {v'} \\frac {u'} u_\\mathrm{h} J \\,," }, { "math_id": 5, "text": "y_x = \\frac {Nc} {f} \\left ( \\frac {1} {\\tan \\theta} - \\frac {1} {\\tan \\psi} \\right ) u' \\,," }, { "math_id": 6, "text": "y_x \\approx \\frac {u'} u_\\mathrm{h} J \\,," }, { "math_id": 7, "text": "y_x \\approx \\frac {Nc} {f} \\frac {u'} {\\tan \\theta} \\,." }, { "math_id": 8, "text": "2J = 2 \\frac {f} {\\sin \\theta} = 2 \\times \\frac {90 \\text{ mm}} {\\sin 8^\\circ} = 1293 \\text { mm} \\,." }, { "math_id": 9, "text": "\\frac {f^2} {Nc} = \\frac {90^2} {2.8 \\times 0.03} = 96.4 \\text { m} \\,." }, { "math_id": 10, "text": "y_u=au+b" }, { "math_id": 11, "text": "\\frac 1 u + \\frac 1 v = \\frac 1 f \\,;" }, { "math_id": 12, "text": "u=\\frac{vf}{v-f} \\,," }, { "math_id": 13, "text": "y_u=a \\, \\frac {vf} {v-f} +b" }, { "math_id": 14, "text": "m=\\frac{y_v}{y_u} \\,;" }, { "math_id": 15, "text": "m=-\\frac{v}{u}=-\\frac{v-f}{f}" }, { "math_id": 16, "text": "\\begin{align}\n y_{v} & =my_{u} \\\\ \n & =-\\frac{v-f}{f}\\left( a \\, \\frac{vf}{v-f}+b \\right) \\\\\n & =-\\left( av+\\frac{v}{f}b-b \\right) \\,,\n\\end{align}" }, { "math_id": 17, "text": "y_{v}=-\\left( a + \\frac{b}{f} \\right)v+b" }, { "math_id": 18, "text": "\\tan \\psi = \\frac {u' + v'} {S} \\,," }, { "math_id": 19, "text": "\\tan \\theta = \\frac {v'} {S} \\,;" }, { "math_id": 20, "text": "\\tan \\psi = \\frac {u' + v'} {v'} \\tan \\theta = \\left ( \\frac {u'} {v'} + 1 \\right ) \\tan \\theta \\,." }, { "math_id": 21, "text": "\\frac {1} {u} + \\frac {1} {v} = \\frac {1} {u' \\cos \\theta} + \\frac {1} {v' \\cos \\theta} = \\frac {1} {f} \\,." }, { "math_id": 22, "text": "u' = \\frac {v' f} {v' \\cos \\theta - f} \\,;" }, { "math_id": 23, "text": "\\tan \\psi = \\left ( \\frac {f} {v' \\cos \\theta - f} + 1 \\right ) \\tan \\theta\n = \\frac {f + v' \\cos \\theta - f} {v' \\cos \\theta -f} \\tan \\theta \\,," }, { "math_id": 24, "text": "\\tan \\psi = \\frac {v'} {v' \\cos \\theta - f} \\sin \\theta \\,." }, { "math_id": 25, "text": "\\tan \\psi = \\frac {u'} {f} \\sin \\theta \\,." }, { "math_id": 26, "text": "\\frac {u'} {f} = \\frac {u} {f} \\frac {1} {\\cos \\theta}\n = \\frac {m + 1} {m} \\frac {1} {\\cos \\theta} \\,," }, { "math_id": 27, "text": "\\tan \\psi = \\frac {m + 1} {m} \\tan \\theta \\,." }, { "math_id": 28, "text": "\\tan \\psi = \\frac {u'} {J} \\,;" }, { "math_id": 29, "text": "\\sin \\theta = \\frac {f} {J} \\,." }, { "math_id": 30, "text": "\\sin \\theta = \\frac {d} {J} \\,," }, { "math_id": 31, "text": "\\tan \\theta = \\frac {v'} {S} \\,," } ]
https://en.wikipedia.org/wiki?curid=1112015
1112147
Mixed inhibition
Mixed inhibition is a type of enzyme inhibition in which the inhibitor may bind to the enzyme whether or not the enzyme has already bound the substrate but has a greater affinity for one state or the other. It is called "mixed" because it can be seen as a conceptual "mixture" of competitive inhibition, in which the inhibitor can only bind the enzyme if the substrate "has not" already bound, and uncompetitive inhibition, in which the inhibitor can only bind the enzyme if the substrate "has" already bound. If the ability of the inhibitor to bind the enzyme is "exactly the same" whether or not the enzyme has already bound the substrate, it is known as a non-competitive inhibitor. Non-competitive inhibition is sometimes thought of as a special case of mixed inhibition. In mixed inhibition, the inhibitor binds to an allosteric site, i.e. a site different from the active site where the substrate binds. However, not all inhibitors that bind at allosteric sites are mixed inhibitors. Mixed inhibition may result in either an increased apparent Michaelis constant (formula_0) or a decreased one (formula_1). In either case the inhibition decreases the apparent maximum enzyme reaction rate (formula_2). Mathematically, mixed inhibition occurs when the factors α and α’ (introduced into the Michaelis-Menten equation to account for competitive and uncompetitive inhibition, respectively) are both greater than 1. In the special case where α = α’, noncompetitive inhibition occurs, in which case formula_3 is reduced but formula_4 is unaffected. This is very unusual in practice. Biological examples. In gluconeogenesis, the enzyme cPEPCK (cytosolic phosphoenolpyruvate carboxykinase) is responsible for converting oxaloacetate into phosphoenolpyruvic acid, or PEP, when guanosine triphosphate, GTP, is present. This step is exclusive to gluconeogenesis, which occurs under fasting conditions due to the body's depletion of glucose. cPEPCK is known to be regulated by genistein, an isoflavone that is naturally found in a number of plants. Genistein was first shown to inhibit the activity of cPEPCK. In one study, the presence of this isoflavone resulted in a decrease in the level of blood sugar, i.e. less glucose in the blood. If this occurs in a fasting subject, it is because gluconeogenesis has been inhibited, preventing increased production of glucose. The ability of genistein to lower blood sugar in this way is why it is described as having an anti-diabetic property. The mechanism by which genistein inhibits the enzyme cPEPCK was evaluated further. First, cPEPCK was placed in the presence of 3-Mercaptopropionic acid, or 3-MPA, a known inhibitor of the enzyme. The results were compared with those obtained by placing cPEPCK in the presence of genistein, which revealed that genistein decreases cPEPCK's activity by mixed inhibition. cPEPCK adopts multiple configurations when catalyzing the formation of PEP: it can be unbound, bound to GDP, or bound to GTP. An experiment that studied the affinity of genistein for these different configurations revealed that genistein favors binding to cPEPCK with bound GTP over the enzyme with bound GDP, which was found to be less stable. This is because the GTP-bound cPEPCK exposes an extended binding site for genistein, the same site to which the enzyme's intended substrate, oxaloacetate, binds, whereas the other configurations do not expose this site in the presence of genistein. 
This provided evidence that the mechanism of inhibition of cPEPCK by genistein was a mixture of competitive and non-competitive inhibition. A kallikrein is a type of serine protease, which cleaves peptide bonds after certain amino acids in a protein. These 15 kallikreins, KLK1 to KLK15, are found in human tissues. The ability for this molecule to cleave proteins results in the effective activation of cell surface receptors, making them crucial elements of many biological signal transduction pathways, and its amplification through cascades. This family of serine proteases is often a biomarker to diseases, and therefore, have become a target for inhibition. Inhibition of these kallikreins results in possible therapy for diseases such as metastatic cancer or Alzheimer's disease. Fukugetin, or (+)-morelloflavone, is a type of plant biflavonoid isolated from "Garcinia brasiliensis". After isolating fukugetin, it was placed with KLK1, KLK2, KLK3, KLK4, KLK5, KLK6, and KLK7 in varying concentrations. This allowed for the analysis of enzyme kinetics through derivation of parameters Km and Vmax. Through the model of Michaelis-Menten kinetics, the Eadie-Hofstee diagram was plotted. It confirmed that fukugetin acts as a mixed inhibitor by exhibiting varying but present affinities for the enzyme alone and the enzyme-substrate complex. Analyzing through kinetics, fukugetin decreased the Vmax while it increased the Km for these KLKs. Typically, in competitive inhibition, Vmax remains the same while Km increases, and in non-competitive inhibition, Vmax decreases while Km remains the same. The change in both of these variables is another finding consistent with the effects of a mixed inhibitor. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
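The kinetic behaviour described above can be made concrete with the usual mixed-inhibition form of the Michaelis–Menten rate law, v = Vmax[S]/(α Km + α′[S]) with α = 1 + [I]/Ki and α′ = 1 + [I]/Ki′. The sketch below uses arbitrary made-up parameter values (not data from the genistein or fukugetin studies) and shows that the apparent Vmax always falls, while the apparent Km rises when α > α′ and falls when α < α′.

```python
def mixed_inhibition_rate(S, Vmax, Km, I, Ki, Ki_prime):
    """Michaelis-Menten rate with a mixed inhibitor at concentration I.

    alpha  = 1 + I/Ki   (binding to the free enzyme, competitive component)
    alpha' = 1 + I/Ki'  (binding to the ES complex, uncompetitive component)
    """
    alpha = 1.0 + I / Ki
    alpha_p = 1.0 + I / Ki_prime
    return Vmax * S / (alpha * Km + alpha_p * S)

def apparent_constants(Vmax, Km, I, Ki, Ki_prime):
    """Apparent Vmax and Km that a fit of the inhibited data would report."""
    alpha = 1.0 + I / Ki
    alpha_p = 1.0 + I / Ki_prime
    return Vmax / alpha_p, Km * alpha / alpha_p

# Made-up illustrative values
Vmax, Km, I = 100.0, 2.0, 5.0

for Ki, Ki_p, label in [(1.0, 4.0, "alpha > alpha' (Km_app > Km)"),
                        (4.0, 1.0, "alpha < alpha' (Km_app < Km)"),
                        (2.0, 2.0, "alpha = alpha' (non-competitive)")]:
    Vapp, Kapp = apparent_constants(Vmax, Km, I, Ki, Ki_p)
    print(f"{label}:  Vmax_app = {Vapp:6.1f}, Km_app = {Kapp:5.2f}")
```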
[ { "math_id": 0, "text": "K_m^\\text{app} > K_m" }, { "math_id": 1, "text": "K_m^\\text{app} < K_m" }, { "math_id": 2, "text": "V_{max}^\\text{app} < V_{max}" }, { "math_id": 3, "text": "V_{max}^{app}" }, { "math_id": 4, "text": "K_m" } ]
https://en.wikipedia.org/wiki?curid=1112147
1112273
Heat of combustion
Amount of heat released by combustion of a quantity of substance The heating value (or energy value or calorific value) of a substance, usually a fuel or food (see food energy), is the amount of heat released during the combustion of a specified amount of it. The "calorific value" is the total energy released as heat when a substance undergoes complete combustion with oxygen under standard conditions. The chemical reaction is typically a hydrocarbon or other organic molecule reacting with oxygen to form carbon dioxide and water and release heat. It may be expressed with the quantities: There are two kinds of enthalpy of combustion, called high(er) and low(er) heat(ing) value, depending on how much the products are allowed to cool and whether compounds like H2O are allowed to condense. The high heat values are conventionally measured with a bomb calorimeter. Low heat values are calculated from high heat value test data. They may also be calculated as the difference between the heat of formation Δ"H" of the products and reactants (though this approach is somewhat artificial since most heats of formation are typically calculated from measured heats of combustion).. For a fuel of composition C"c"H"h"O"o"N"n", the (higher) heat of combustion is 419 kJ/mol × ("c" + 0.3 "h" − 0.5 "o") usually to a good approximation (±3%), though it gives poor results for some compounds such as (gaseous) formaldehyde and carbon monoxide, and can be significantly off if "o" + "n" &gt; "c", such as for glycerine dinitrate, . By convention, the (higher) heat of combustion is defined to be the heat released for the complete combustion of a compound in its standard state to form stable products in their standard states: hydrogen is converted to water (in its liquid state), carbon is converted to carbon dioxide gas, and nitrogen is converted to nitrogen gas. That is, the heat of combustion, Δ"H"°comb, is the heat of reaction of the following process: C"c"H"h"N"n"O"o" (std.) + ("c" + &lt;templatestyles src="Fraction/styles.css" /&gt;"h"⁄4 - &lt;templatestyles src="Fraction/styles.css" /&gt;"o"⁄2) O2 (g) → "c"CO2 (g) + &lt;templatestyles src="Fraction/styles.css" /&gt;"h"⁄2H2O ("l") + &lt;templatestyles src="Fraction/styles.css" /&gt;"n"⁄2N2 (g) Chlorine and sulfur are not quite standardized; they are usually assumed to convert to hydrogen chloride gas and SO2 or SO3 gas, respectively, or to dilute aqueous hydrochloric and sulfuric acids, respectively, when the combustion is conducted in a bomb calorimeter containing some quantity of water. Ways of determination. Gross and net. Zwolinski and Wilhoit defined, in 1972, "gross" and "net" values for heats of combustion. In the gross definition the products are the most stable compounds, e.g. H2O(l), Br2(l), I2(s) and H2SO4(l). In the net definition the products are the gases produced when the compound is burned in an open flame, e.g. H2O(g), Br2(g), I2(g) and SO2(g). In both definitions the products for C, F, Cl and N are CO2(g), HF(g), Cl2(g) and N2(g), respectively. Dulong's Formula The heating value of a fuel can be calculated with the results of ultimate analysis of fuel. From analysis, percentages of the combustibles in the fuel (carbon, hydrogen, sulfur) are known. 
Since the heat of combustion of these elements is known, the heating value can be calculated using Dulong's Formula: HHV [kJ/g]= 33.87mC + 122.3(mH - mO ÷ 8) + 9.4mS where mC, mH, mO, mN, and mS are the contents of carbon, hydrogen, oxygen, nitrogen, and sulfur on any (wet, dry or ash free) basis, respectively. Higher heating value. The higher heating value (HHV; "gross energy", "upper heating value", "gross calorific value" "GCV", or "higher calorific value"; "HCV") indicates the upper limit of the available thermal energy produced by a complete combustion of fuel. It is measured as a unit of energy per unit mass or volume of substance. The HHV is determined by bringing all the products of combustion back to the original pre-combustion temperature, including condensing any vapor produced. Such measurements often use a standard temperature of . This is the same as the thermodynamic heat of combustion since the enthalpy change for the reaction assumes a common temperature of the compounds before and after combustion, in which case the water produced by combustion is condensed to a liquid. The higher heating value takes into account the latent heat of vaporization of water in the combustion products, and is useful in calculating heating values for fuels where condensation of the reaction products is practical (e.g., in a gas-fired boiler used for space heat). In other words, HHV assumes all the water component is in liquid state at the end of combustion (in product of combustion) and that heat delivered at temperatures below can be put to use. Lower heating value. The lower heating value (LHV; "net calorific value"; "NCV", or "lower calorific value"; "LCV") is another measure of available thermal energy produced by a combustion of fuel, measured as a unit of energy per unit mass or volume of substance. In contrast to the HHV, the LHV considers energy losses such as the energy used to vaporize water - although its exact definition is not uniformly agreed upon. One definition is simply to subtract the heat of vaporization of the water from the higher heating value. This treats any H2O formed as a vapor that is released as a waste. The energy required to vaporize the water is therefore lost. LHV calculations assume that the water component of a combustion process is in vapor state at the end of combustion, as opposed to the higher heating value (HHV) (a.k.a. "gross calorific value" or "gross CV") which assumes that all of the water in a combustion process is in a liquid state after a combustion process. Another definition of the LHV is the amount of heat released when the products are cooled to . This means that the latent heat of vaporization of water and other reaction products is not recovered. It is useful in comparing fuels where condensation of the combustion products is impractical, or heat at a temperature below cannot be put to use. One definition of lower heating value, adopted by the American Petroleum Institute (API), uses a reference temperature of . Another definition, used by Gas Processors Suppliers Association (GPSA) and originally used by API (data collected for API research project 44), is the enthalpy of all combustion products minus the enthalpy of the fuel at the reference temperature (API research project 44 used 25 °C. GPSA currently uses 60 °F), minus the enthalpy of the stoichiometric oxygen (O2) at the reference temperature, minus the heat of vaporization of the vapor content of the combustion products. 
The definition in which the combustion products are all returned to the reference temperature is more easily calculated from the higher heating value than when using other definitions and will in fact give a slightly different answer. Gross heating value. Gross heating value accounts for water in the exhaust leaving as vapor, as does LHV, but gross heating value also includes liquid water in the fuel prior to combustion. This value is important for fuels like wood or coal, which will usually contain some amount of water prior to burning. Measuring heating values. The higher heating value is experimentally determined in a bomb calorimeter. The combustion of a stoichiometric mixture of fuel and oxidizer (e.g. two moles of hydrogen and one mole of oxygen) in a steel container at is initiated by an ignition device and the reactions allowed to complete. When hydrogen and oxygen react during combustion, water vapor is produced. The vessel and its contents are then cooled to the original 25 °C and the higher heating value is determined as the heat released between identical initial and final temperatures. When the lower heating value (LHV) is determined, cooling is stopped at 150 °C and the reaction heat is only partially recovered. The limit of 150 °C is based on acid gas dew-point. Note: Higher heating value (HHV) is calculated with the product of water being in liquid form while lower heating value (LHV) is calculated with the product of water being in vapor form. Relation between heating values. The difference between the two heating values depends on the chemical composition of the fuel. In the case of pure carbon or carbon monoxide, the two heating values are almost identical, the difference being the sensible heat content of carbon dioxide between 150 °C and 25 °C (sensible heat exchange causes a change of temperature, while latent heat is added or subtracted for phase transitions at constant temperature. Examples: heat of vaporization or heat of fusion). For hydrogen, the difference is much more significant as it includes the sensible heat of water vapor between 150 °C and 100 °C, the latent heat of condensation at 100 °C, and the sensible heat of the condensed water between 100 °C and 25 °C. In all, the higher heating value of hydrogen is 18.2% above its lower heating value (142MJ/kg vs. 120MJ/kg). For hydrocarbons, the difference depends on the hydrogen content of the fuel. For gasoline and diesel the higher heating value exceeds the lower heating value by about 10% and 7%, respectively, and for natural gas about 11%. A common method of relating HHV to LHV is: formula_0 where "H"v is the heat of vaporization of water, "n"H2O,out is the number of moles of water vaporized and "n"fuel,in is the number of moles of fuel combusted. Usage of terms. Engine manufacturers typically rate their engines fuel consumption by the lower heating values since the exhaust is never condensed in the engine, and doing this allows them to publish more attractive numbers than are used in conventional power plant terms. The conventional power industry had used HHV (high heat value) exclusively for decades, even though virtually all of these plants did not condense exhaust either. American consumers should be aware that the corresponding fuel-consumption figure based on the higher heating value will be somewhat higher. The difference between HHV and LHV definitions causes endless confusion when quoters do not bother to state the convention being used. 
There is typically a 10% difference between the two methods for a power plant burning natural gas. For simply benchmarking part of a reaction the LHV may be appropriate, but HHV should be used for overall energy efficiency calculations if only to avoid confusion, and in any case, the value or convention should be clearly stated. Accounting for moisture. Both HHV and LHV can be expressed in terms of AR (all moisture counted), MF and MAF (only water from combustion of hydrogen). AR, MF, and MAF are commonly used for indicating the heating values of coal: Higher heating values of natural gases from various sources. The International Energy Agency reports the following typical higher heating values per Standard cubic metre of gas: &lt;templatestyles src="Div col/styles.css"/&gt; The lower heating value of natural gas is normally about 90% of its higher heating value. This table is in Standard cubic metres (1 atm, 15 °C); to convert to values per Normal cubic metre (1 atm, 0 °C), multiply the above table by 1.0549. See also. &lt;templatestyles src="Div col/styles.css"/&gt; References. &lt;templatestyles src="Reflist/styles.css" /&gt;
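As a rough illustration of the formulas quoted above, Dulong's formula and the HHV–LHV relation can be combined in a short calculation. The sketch assumes mass fractions on a dry, ash-free basis, takes the latent heat of vaporization of water as about 2.44 MJ/kg at 25 °C, and uses a generic coal-like composition invented for the example; none of these numbers come from the article's tables.

```python
def dulong_hhv(mC, mH, mO, mS):
    """Higher heating value in MJ/kg from mass fractions (Dulong's formula)."""
    return 33.87 * mC + 122.3 * (mH - mO / 8.0) + 9.4 * mS

def lhv_from_hhv(hhv, mH, moisture=0.0, h_vap=2.44):
    """Subtract the latent heat of the water formed from hydrogen (9 kg H2O per kg H)
    and of any moisture already in the fuel.  h_vap is in MJ per kg of water."""
    water_per_kg_fuel = 9.0 * mH + moisture
    return hhv - h_vap * water_per_kg_fuel

# Hypothetical dry, ash-free composition (mass fractions)
mC, mH, mO, mS = 0.78, 0.05, 0.10, 0.01

hhv = dulong_hhv(mC, mH, mO, mS)
lhv = lhv_from_hhv(hhv, mH)
print(f"HHV ~ {hhv:5.1f} MJ/kg,  LHV ~ {lhv:5.1f} MJ/kg "
      f"({100*(hhv-lhv)/lhv:.0f}% difference)")
```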
[ { "math_id": 0, "text": "\\mathrm{HHV} = \\mathrm{LHV} + H_\\mathrm{v}\\left(\\frac{n_\\mathrm{H_2O,out}}{n_\\mathrm{fuel,in}}\\right)" } ]
https://en.wikipedia.org/wiki?curid=1112273
1112366
Allen Hatcher
American mathematician Allen Edward Hatcher (born October 23, 1944) is an American topologist. Biography. Hatcher was born in Indianapolis, Indiana. After obtaining his B.S from Oberlin College in 1966, he went for his graduate studies to Stanford University, where he received his Ph.D. in 1971. His thesis, "A K2 Obstruction for Pseudo-Isotopies", was written under the supervision of Hans Samelson. Afterwards, Hatcher went to Princeton University, where he was an NSF postdoc for a year, then a lecturer for another year, and then Assistant Professor from 1973 to 1979. He was also a member of the Institute for Advanced Study in 1975–76 and 1979–80. Hatcher moved to the University of California, Los Angeles as an assistant professor in 1977. From 1983 he has been a professor at Cornell University; he is now a professor emeritus. In 1978 Hatcher was an invited speaker at the International Congresses of Mathematicians in Helsinki. Mathematical contributions. He has worked in geometric topology, both in high dimensions, relating pseudoisotopy to algebraic K-theory, and in low dimensions: surfaces and 3-manifolds, such as proving the Smale conjecture for the 3-sphere. 3-manifolds. Perhaps among his most recognized results in 3-manifolds concern the classification of incompressible surfaces in certain 3-manifolds and their boundary slopes. William Floyd and Hatcher classified all the incompressible surfaces in punctured-torus bundles over the circle. William Thurston and Hatcher classified the incompressible surfaces in 2-bridge knot complements. As corollaries, this gave more examples of non-Haken, non-Seifert fibered, irreducible 3-manifolds and extended the techniques and line of investigation started in Thurston's Princeton lecture notes. Hatcher also showed that irreducible, boundary-irreducible 3-manifolds with toral boundary have at most "half" of all possible boundary slopes resulting from essential surfaces. In the case of one torus boundary, one can conclude that the number of slopes given by essential surfaces is finite. Hatcher has made contributions to the so-called theory of essential laminations in 3-manifolds. He invented the notion of "end-incompressibility" and several of his students, such as Mark Brittenham, Charles Delman, and Rachel Roberts, have made important contributions to the theory. Surfaces. Hatcher and Thurston exhibited an algorithm to produce a presentation of the mapping class group of a closed, orientable surface. Their work relied on the notion of a cut system and moves that relate any two systems. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\mathrm{Diff}(S^{3})\\simeq {\\mathrm O}(4)" } ]
https://en.wikipedia.org/wiki?curid=1112366
11125142
Extouch triangle
Triangle formed from the points of tangency of a given triangle's excircles In Euclidean geometry, the extouch triangle of a triangle is formed by joining the points at which the three excircles touch the triangle. Coordinates. The vertices of the extouch triangle are given in trilinear coordinates by: formula_0 or equivalently, where a, b, c are the lengths of the sides opposite angles A, B, C respectively, formula_1 Related figures. The triangle's splitters are lines connecting the vertices of the original triangle to the corresponding vertices of the extouch triangle; they bisect the triangle's perimeter and meet at the Nagel point. This is shown in blue and labelled "N" in the diagram. The Mandart inellipse is tangent to the sides of the reference triangle at the three vertices of the extouch triangle. Area. The area of the extouch triangle, KT, is given by: formula_2 where K and r are the area and radius of the incircle, s is the semiperimeter of the original triangle, and a, b, c are the side lengths of the original triangle. This is the same area as that of the intouch triangle. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
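The trilinear coordinates and the area formula can be verified numerically by converting the trilinears to Cartesian coordinates and comparing the directly computed area of the extouch triangle with K·2r²s/(abc). The 5-6-7 triangle and the helper names below are illustrative choices, not standard conventions.

```python
import math

def tri_area(P, Q, R):
    """Unsigned area of a triangle given Cartesian vertices."""
    return abs((Q[0]-P[0])*(R[1]-P[1]) - (R[0]-P[0])*(Q[1]-P[1])) / 2.0

def trilinear_to_cartesian(t, A, B, C, a, b, c):
    """Point with trilinear coordinates t = (x : y : z) in triangle ABC with sides a, b, c."""
    w = (a*t[0], b*t[1], c*t[2])              # trilinear -> barycentric weights
    total = sum(w)
    return (sum(wi*V[0] for wi, V in zip(w, (A, B, C))) / total,
            sum(wi*V[1] for wi, V in zip(w, (A, B, C))) / total)

# Hypothetical test triangle with side lengths a = BC, b = CA, c = AB
a, b, c = 5.0, 6.0, 7.0
B, C = (0.0, 0.0), (a, 0.0)
Ax = (a**2 + c**2 - b**2) / (2*a)             # place A using the law of cosines
A = (Ax, math.sqrt(c**2 - Ax**2))

s = (a + b + c) / 2.0                         # semiperimeter
K = tri_area(A, B, C)                         # area of the original triangle
r = K / s                                     # inradius

# Extouch vertices from the trilinear coordinates given above
TA = trilinear_to_cartesian((0.0, (a-b+c)/b, (a+b-c)/c), A, B, C, a, b, c)
TB = trilinear_to_cartesian(((-a+b+c)/a, 0.0, (a+b-c)/c), A, B, C, a, b, c)
TC = trilinear_to_cartesian(((-a+b+c)/a, (a-b+c)/b, 0.0), A, B, C, a, b, c)

K_T_direct  = tri_area(TA, TB, TC)
K_T_formula = K * 2 * r**2 * s / (a * b * c)
print(K_T_direct, K_T_formula)                # the two values agree (about 3.36 here)
```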
[ { "math_id": 0, "text": "\\begin{array}{rccccc}\nT_A =& 0 &:& \\csc^2 \\frac{B}{2} &:& \\csc^2 \\frac{C}{2} \\\\\nT_B =& \\csc^2 \\frac{A}{2} &:& 0 &:& \\csc^2 \\frac{C}{2} \\\\\nT_C =& \\csc^2 \\frac{A}{2} &:& \\csc^2 \\frac{B}{2} &:& 0\n\\end{array}" }, { "math_id": 1, "text": "\\begin{array}{rccccc}\n T_A =& 0 &:& \\frac{a \\, - \\, b \\, + \\, c}{b} &:& \\frac{a \\, + \\, b \\, - \\, c}{c} \\\\\n T_B =& \\frac{-a \\, + \\, b \\, + \\, c}{a} &:& 0 &:& \\frac{a \\, + \\, b \\, - \\, c}{c} \\\\\n T_C =& \\frac{-a \\, + \\, b \\, + \\, c}{a} &:& \\frac{a \\, - \\, b \\, + \\, c}{b} &:& 0\n\\end{array}" }, { "math_id": 2, "text": "K_T= K\\frac{2r^2s}{abc}" } ]
https://en.wikipedia.org/wiki?curid=11125142
11127382
Belt problem
The belt problem is a mathematics problem which requires finding the length of a crossed belt that connects two circular pulleys with radius "r"1 and "r"2 whose centers are separated by a distance "P". The solution of the belt problem requires trigonometry and the concepts of the bitangent line, the vertical angle, and congruent angles. Solution. Clearly triangles ACO and ADO are congruent right angled triangles, as are triangles BEO and BFO. In addition, triangles ACO and BEO are similar. Therefore angles CAO, DAO, EBO and FBO are all equal. Denoting this angle by formula_0 (denominated in radians), the length of the belt is formula_1 formula_2 formula_3 This exploits the convenience of denominating angles in radians that the length of an arc = the radius × the measure of the angle facing the arc. To find formula_0 we see from the similarity of triangles ACO and BEO that formula_4 formula_5 formula_6 formula_7 formula_8 formula_9 For fixed "P" the length of the belt depends only on the sum of the radius values "r"1 + "r"2, and not on their individual values. Pulley problem. There are other types of problems similar to the belt problem. The pulley problem, as shown, is similar to the belt problem; however, the belt does not cross itself. In the pulley problem the length of the belt is formula_10 where "r"1 represents the radius of the larger pulley, "r"2 represents the radius of the smaller one, and: formula_11 Applications. The belt problem is used in the design of aeroplanes, bicycle gearing, cars, and other items with pulleys or belts that cross each other. The pulley problem is also used in the design of conveyor belts found in airport luggage belts and automated factory lines. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
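Both belt lengths follow directly from the formulas above. The sketch below uses arbitrary radii and centre distance, and the function names are illustrative. The last line also checks the observation that, for fixed P, the crossed-belt length depends only on r1 + r2.

```python
import math

def crossed_belt_length(r1, r2, P):
    """Length of a crossed belt around pulleys of radii r1, r2 with centre distance P."""
    phi = math.acos((r1 + r2) / P)           # requires P > r1 + r2
    return 2.0 * (r1 + r2) * (math.tan(phi) + math.pi - phi)

def open_belt_length(r1, r2, P):
    """Length of an uncrossed (open) belt; r1 is the larger radius."""
    theta = 2.0 * math.acos((r1 - r2) / P)   # requires P > r1 - r2
    return 2.0 * P * math.sin(theta / 2.0) + r1 * (2.0 * math.pi - theta) + r2 * theta

# Hypothetical example: radii 3 and 2, centres 10 apart
print(crossed_belt_length(3.0, 2.0, 10.0))
print(open_belt_length(3.0, 2.0, 10.0))

# For a fixed P the crossed-belt length depends only on r1 + r2:
print(crossed_belt_length(4.0, 1.0, 10.0))   # same value as the first call
```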
[ { "math_id": 0, "text": "\\varphi" }, { "math_id": 1, "text": "CO + DO + EO + FO + \\text {arc} CD + \\text {arc} EF \\,\\!" }, { "math_id": 2, "text": "=2r_1\\tan(\\varphi) + 2r_2\\tan(\\varphi) + (2\\pi-2\\varphi)r_1 + (2\\pi-2\\varphi)r_2 \\,\\!" }, { "math_id": 3, "text": "=2(r_1+r_2)(\\tan(\\varphi) + \\pi- \\varphi) \\,\\!" }, { "math_id": 4, "text": "\\frac{AO}{BO} = \\frac{AC}{BE} \\,\\!" }, { "math_id": 5, "text": "\\Rightarrow \\frac{P-x}{x} = \\frac{r_1}{r_2} \\,\\!" }, { "math_id": 6, "text": "\\Rightarrow \\frac{P}{x} = \\frac{r_1+r_2}{r_2} \\,\\!" }, { "math_id": 7, "text": "\\Rightarrow {x} = \\frac{P r_2}{r_1+r_2} \\,\\!" }, { "math_id": 8, "text": " \\cos(\\varphi) = \\frac{r_2}{x} = \\frac{r_2}{\\left(\\dfrac{P r_2}{r_1+r_2}\\right)} = \\frac{r_1+r_2}{P} \\,\\!" }, { "math_id": 9, "text": "\\Rightarrow \\varphi=\\arccos\\left(\\frac{r_1+r_2}{P}\\right) \\,\\!" }, { "math_id": 10, "text": "2 P \\sin\\left(\\frac{\\theta}{2}\\right)+r_1(2\\pi-\\theta)+r_2{\\theta}\\, ," }, { "math_id": 11, "text": "\\theta=2\\arccos\\left(\\frac{r_1-r_2}{P}\\right)\\, ." } ]
https://en.wikipedia.org/wiki?curid=11127382
11127518
Chow's lemma
Chow's lemma, named after Wei-Liang Chow, is one of the foundational results in algebraic geometry. It roughly says that a proper morphism is fairly close to being a projective morphism. More precisely, a version of it states the following: If formula_0 is a scheme that is proper over a noetherian base formula_1, then there exists a projective formula_1-scheme formula_2 and a surjective formula_1-morphism formula_3 that induces an isomorphism formula_4 for some dense open formula_5 Proof. The proof here is a standard one. Reduction to the case of formula_0 irreducible. We can first reduce to the case where formula_0 is irreducible. To start, formula_0 is noetherian since it is of finite type over a noetherian base. Therefore it has finitely many irreducible components formula_6, and we claim that for each formula_6 there is an irreducible proper formula_1-scheme formula_7 so that formula_8 has set-theoretic image formula_6 and is an isomorphism on the open dense subset formula_9 of formula_6. To see this, define formula_7 to be the scheme-theoretic image of the open immersion formula_10 Since formula_11 is set-theoretically noetherian for each formula_12, the map formula_13 is quasi-compact and we may compute this scheme-theoretic image affine-locally on formula_0, immediately proving the two claims. If we can produce for each formula_7 a projective formula_1-scheme formula_14 as in the statement of the theorem, then we can take formula_2 to be the disjoint union formula_15 and formula_16 to be the composition formula_17: this map is projective, and an isomorphism over a dense open set of formula_0, while formula_15 is a projective formula_1-scheme since it is a finite union of projective formula_1-schemes. Since each formula_7 is proper over formula_1, we've completed the reduction to the case formula_0 irreducible. formula_0 can be covered by finitely many quasi-projective formula_1-schemes. Next, we will show that formula_0 can be covered by a finite number of open subsets formula_18 so that each formula_18 is quasi-projective over formula_1. To do this, we may by quasi-compactness first cover formula_1 by finitely many affine opens formula_19, and then cover the preimage of each formula_19 in formula_0 by finitely many affine opens formula_20 each with a closed immersion in to formula_21 since formula_22 is of finite type and therefore quasi-compact. Composing this map with the open immersions formula_23 and formula_24, we see that each formula_25 is a closed subscheme of an open subscheme of formula_26. As formula_26 is noetherian, every closed subscheme of an open subscheme is also an open subscheme of a closed subscheme, and therefore each formula_25 is quasi-projective over formula_1. Construction of formula_2 and formula_27. Now suppose formula_28 is a finite open cover of formula_0 by quasi-projective formula_1-schemes, with formula_29 an open immersion in to a projective formula_1-scheme. Set formula_30, which is nonempty as formula_0 is irreducible. The restrictions of the formula_31 to formula_32 define a morphism formula_33 so that formula_34, where formula_35 is the canonical injection and formula_36 is the projection. Letting formula_37 denote the canonical open immersion, we define formula_38, which we claim is an immersion. 
To see this, note that this morphism can be factored as the graph morphism formula_39 (which is a closed immersion as formula_40 is separated) followed by the open immersion formula_41; as formula_42 is noetherian, we can apply the same logic as before to see that we can swap the order of the open and closed immersions. Now let formula_2 be the scheme-theoretic image of formula_43, and factor formula_43 as formula_44 where formula_45 is an open immersion and formula_46 is a closed immersion. Let formula_47 and formula_48 be the canonical projections. Set formula_49 formula_50 We will show that formula_2 and formula_16 satisfy the conclusion of the theorem. Verification of the claimed properties of formula_2 and formula_16. To show formula_16 is surjective, we first note that it is proper and therefore closed. As its image contains the dense open set formula_51, we see that formula_16 must be surjective. It is also straightforward to see that formula_16 induces an isomorphism on formula_32: we may just combine the facts that formula_52 and formula_43 is an isomorphism on to its image, as formula_43 factors as the composition of a closed immersion followed by an open immersion formula_53. It remains to show that formula_2 is projective over formula_1. We will do this by showing that formula_54 is an immersion. We define the following four families of open subschemes: formula_55 formula_56 formula_57 formula_58 As the formula_18 cover formula_0, the formula_59 cover formula_2, and we wish to show that the formula_60 also cover formula_2. We will do this by showing that formula_61 for all formula_12. It suffices to show that formula_62 is equal to formula_63 as a map of topological spaces. Replacing formula_59 by its reduction, which has the same underlying topological space, we have that the two morphisms formula_64 are both extensions of the underlying map of topological space formula_65, so by the reduced-to-separated lemma they must be equal as formula_32 is topologically dense in formula_18. Therefore formula_61 for all formula_12 and the claim is proven. The upshot is that the formula_66 cover formula_67, and we can check that formula_68 is an immersion by checking that formula_69 is an immersion for all formula_12. For this, consider the morphism formula_70 Since formula_22 is separated, the graph morphism formula_71 is a closed immersion and the graph formula_72 is a closed subscheme of formula_73; if we show that formula_74 factors through this graph (where we consider formula_75 via our observation that formula_16 is an isomorphism over formula_76 from earlier), then the map from formula_60 must also factor through this graph by construction of the scheme-theoretic image. Since the restriction of formula_77 to formula_78 is an isomorphism onto formula_66, the restriction of formula_68 to formula_60 will be an immersion into formula_66, and our claim will be proven. Let formula_79 be the canonical injection formula_80; we have to show that there is a morphism formula_81 so that formula_82. By the definition of the fiber product, it suffices to prove that formula_83, or by identifying formula_51 and formula_75, that formula_84. But formula_85 and formula_86, so the desired conclusion follows from the definition of formula_87 and formula_68 is an immersion. Since formula_88 is proper, any formula_1-morphism out of formula_2 is closed, and thus formula_54 is a closed immersion, so formula_2 is projective. formula_89 Additional statements. 
In the statement of Chow's lemma, if formula_0 is reduced, irreducible, or integral, we can assume that the same holds for formula_2. If both formula_0 and formula_2 are irreducible, then formula_3 is a birational morphism. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "S" }, { "math_id": 2, "text": "X'" }, { "math_id": 3, "text": "f: X' \\to X" }, { "math_id": 4, "text": "f^{-1}(U) \\simeq U" }, { "math_id": 5, "text": "U\\subseteq X." }, { "math_id": 6, "text": "X_i" }, { "math_id": 7, "text": "Y_i" }, { "math_id": 8, "text": "Y_i\\to X" }, { "math_id": 9, "text": "X_i\\setminus \\cup_{j\\neq i} X_j" }, { "math_id": 10, "text": "X\\setminus \\cup_{j\\neq i} X_j \\to X." }, { "math_id": 11, "text": "X\\setminus \\cup_{j\\neq i} X_j" }, { "math_id": 12, "text": "i" }, { "math_id": 13, "text": "X\\setminus \\cup_{j\\neq i} X_j\\to X" }, { "math_id": 14, "text": "Y_i'" }, { "math_id": 15, "text": "\\coprod Y_i'" }, { "math_id": 16, "text": "f" }, { "math_id": 17, "text": "\\coprod Y_i' \\to \\coprod Y_i\\to X" }, { "math_id": 18, "text": "U_i" }, { "math_id": 19, "text": "S_j" }, { "math_id": 20, "text": "X_{jk}" }, { "math_id": 21, "text": "\\mathbb{A}^n_{S_j}" }, { "math_id": 22, "text": "X\\to S" }, { "math_id": 23, "text": "\\mathbb{A}^n_{S_j}\\to \\mathbb{P}^n_{S_j}" }, { "math_id": 24, "text": "\\mathbb{P}^n_{S_j} \\to \\mathbb{P}^n_S" }, { "math_id": 25, "text": "X_{ij}" }, { "math_id": 26, "text": "\\mathbb{P}^n_S" }, { "math_id": 27, "text": "f:X'\\to X" }, { "math_id": 28, "text": "\\{U_i\\}" }, { "math_id": 29, "text": "\\phi_i:U_i\\to P_i" }, { "math_id": 30, "text": "U=\\cap_i U_i" }, { "math_id": 31, "text": "\\phi_i" }, { "math_id": 32, "text": "U" }, { "math_id": 33, "text": "\\phi: U \\to P = P_1 \\times_S \\cdots \\times_S P_n" }, { "math_id": 34, "text": "U\\to U_i\\to P_i = U\\stackrel{\\phi}{\\to} P \\stackrel{p_i}{\\to} P_i" }, { "math_id": 35, "text": "U\\to U_i" }, { "math_id": 36, "text": "p_i:P\\to P_i" }, { "math_id": 37, "text": "j:U\\to X" }, { "math_id": 38, "text": "\\psi=(j,\\phi)_S: U\\to X\\times_S P" }, { "math_id": 39, "text": "U\\to U\\times_S P" }, { "math_id": 40, "text": "P\\to S" }, { "math_id": 41, "text": "U\\times_S P\\to X\\times_S P" }, { "math_id": 42, "text": "X\\times_S P" }, { "math_id": 43, "text": "\\psi" }, { "math_id": 44, "text": " \\psi:U\\stackrel{\\psi'}{\\to} X'\\stackrel{h}{\\to} X\\times_S P" }, { "math_id": 45, "text": "\\psi'" }, { "math_id": 46, "text": "h" }, { "math_id": 47, "text": "q_1:X\\times_S P\\to X" }, { "math_id": 48, "text": "q_2:X\\times_S P\\to P" }, { "math_id": 49, "text": "f:X'\\stackrel{h}{\\to} X\\times_S P \\stackrel{q_1}{\\to} X," }, { "math_id": 50, "text": "g:X'\\stackrel{h}{\\to} X\\times_S P \\stackrel{q_2}{\\to} P." }, { "math_id": 51, "text": "U\\subset X" }, { "math_id": 52, "text": "f^{-1}(U)=h^{-1}(U\\times_S P)" }, { "math_id": 53, "text": "U\\to U\\times_S P \\to X\\times_S P" }, { "math_id": 54, "text": "g:X'\\to P" }, { "math_id": 55, "text": " V_i = \\phi_i(U_i)\\subset P_i " }, { "math_id": 56, "text": " W_i = p_i^{-1}(V_i)\\subset P " }, { "math_id": 57, "text": " U_i' = f^{-1}(U_i)\\subset X' " }, { "math_id": 58, "text": " U_i'' = g^{-1}(W_i)\\subset X'. 
" }, { "math_id": 59, "text": "U_i'" }, { "math_id": 60, "text": "U_i''" }, { "math_id": 61, "text": "U_i'\\subset U_i''" }, { "math_id": 62, "text": "p_i\\circ g|_{U_i'}:U_i'\\to P_i" }, { "math_id": 63, "text": "\\phi_i\\circ f|_{U_i'}:U_i'\\to P_i" }, { "math_id": 64, "text": "(U_i')_{red}\\to P_i" }, { "math_id": 65, "text": "U\\to U_i\\to P_i" }, { "math_id": 66, "text": "W_i" }, { "math_id": 67, "text": "g(X')" }, { "math_id": 68, "text": "g" }, { "math_id": 69, "text": "g|_{U_i''}:U_i''\\to W_i" }, { "math_id": 70, "text": " u_i:W_i\\stackrel{p_i}{\\to} V_i\\stackrel{\\phi_i^{-1}}{\\to} U_i\\to X." }, { "math_id": 71, "text": "\\Gamma_{u_i}:W_i\\to X\\times_S W_i" }, { "math_id": 72, "text": "T_i=\\Gamma_{u_i}(W_i)" }, { "math_id": 73, "text": "X\\times_S W_i" }, { "math_id": 74, "text": "U\\to X\\times_S W_i" }, { "math_id": 75, "text": "U\\subset X'" }, { "math_id": 76, "text": "f^{-1}(U)" }, { "math_id": 77, "text": "q_2" }, { "math_id": 78, "text": "T_i" }, { "math_id": 79, "text": "v_i" }, { "math_id": 80, "text": "U\\subset X' \\to X\\times_S W_i" }, { "math_id": 81, "text": "w_i:U\\subset X'\\to W_i" }, { "math_id": 82, "text": "v_i=\\Gamma_{u_i}\\circ w_i" }, { "math_id": 83, "text": "q_1\\circ v_i= u_i\\circ q_2\\circ v_i" }, { "math_id": 84, "text": "q_1\\circ\\psi=u_i\\circ q_2\\circ \\psi" }, { "math_id": 85, "text": "q_1\\circ\\psi = j" }, { "math_id": 86, "text": "q_2\\circ\\psi=\\phi" }, { "math_id": 87, "text": "\\phi:U\\to P" }, { "math_id": 88, "text": "X'\\to S" }, { "math_id": 89, "text": "\\blacksquare" } ]
https://en.wikipedia.org/wiki?curid=11127518
11128198
Nearly neutral theory of molecular evolution
Variant of one theory of evolution The nearly neutral theory of molecular evolution is a modification of the neutral theory of molecular evolution that accounts for the fact that not all mutations are either so deleterious that they can be ignored or else strictly neutral. Slightly deleterious mutations are reliably purged only when their selection coefficients are greater than one divided by the effective population size. In larger populations, a higher proportion of mutations exceed this threshold, so genetic drift cannot overpower selection on them, leading to fewer fixation events and so slower molecular evolution. The nearly neutral theory was proposed by Tomoko Ohta in 1973. The population-size-dependent threshold for purging mutations has been called the "drift barrier" by Michael Lynch, and used to explain differences in genomic architecture among species. Origins. According to the neutral theory of molecular evolution, the rate at which molecular changes accumulate between species should be equal to the rate of neutral mutations and hence relatively constant across species. However, this is a per-generation rate. Since larger organisms have longer generation times, the neutral theory predicts that their rate of molecular evolution should be slower. However, molecular evolutionists found that rates of protein evolution were fairly independent of generation time. Noting that population size is generally inversely proportional to generation time, Tomoko Ohta proposed that if most amino acid substitutions are slightly deleterious, this would increase the effectively neutral mutation rate in small populations, which could offset the effect of long generation times. However, because noncoding DNA substitutions tend to be more neutral, independent of population size, their rate of evolution is correctly predicted to depend on population size / generation time, unlike the rate of non-synonymous changes. In this case, the faster rate of neutral evolution in proteins expected in small populations (due to a more lenient threshold for purging deleterious mutations) is offset by longer generation times (and vice versa), but in large populations with short generation times, noncoding DNA evolves faster while protein evolution is retarded by selection (which is more significant than drift for large populations). In 1973, Ohta published a short letter in "Nature" suggesting that a wide variety of molecular evidence supported the theory that most mutation events at the molecular level are slightly deleterious rather than strictly neutral. Between then and the early 1990s, many studies of molecular evolution used a "shift model" in which the negative effect on the fitness of a population due to deleterious mutations shifts back to an original value when a mutation reaches fixation. In the early 1990s, Ohta developed a "fixed model" that included both beneficial and deleterious mutations, so that no artificial "shift" of overall population fitness was necessary. According to Ohta, however, the nearly neutral theory largely fell out of favor in the late 1980s, because the mathematically simpler neutral theory was preferred for the widespread molecular systematics research that flourished after the advent of rapid DNA sequencing. As more detailed systematics studies started to compare the evolution of genome regions subject to strong selection versus weaker selection in the 1990s, the nearly neutral theory and the interaction between selection and drift have once again become an important focus of research. 
Theory. The rate of substitution, formula_0 is formula_1, where formula_2 is the mutation rate, formula_3 is the generation time, and formula_4 is the effective population size. The last term is the probability that a new mutation will become fixed. Early models assumed that formula_2 is constant between species, and that formula_3 increases with formula_4. Kimura’s equation for the probability of fixation in a haploid population gives: formula_5, where formula_6 is the selection coefficient of a mutation. When formula_7(completely neutral), formula_8, and when formula_9 (extremely deleterious), formula_10 decreases almost exponentially with formula_4. Mutations with formula_11 are called nearly neutral mutations. These mutations can fix in small-formula_4 populations through genetic drift. In large-formula_4 populations, these mutations are purged by selection. If nearly neutral mutations are common, then the proportion for which formula_12 is dependent on formula_13 The effect of nearly neutral mutations can depend on fluctuations in formula_6. Early work used a “shift model” in which formula_6 can vary between generations but the mean fitness of the population is reset to zero after fixation. This basically assumes the distribution of formula_6 is constant (in this sense, the argument in the previous paragraphs can be regarded as based on the “shift model”). This assumption can lead to indefinite improvement or deterioration of protein function. Alternatively, the later “fixed model” fixes the distribution of mutations’ effect on protein function, but allows the mean fitness of population to evolve. This allows the distribution of formula_6 to change with the mean fitness of population. The “fixed model” provides a slightly different explanation for the rate of protein evolution. In large formula_4 populations, advantageous mutations are quickly picked up by selection, increasing the mean fitness of the population. In response, the mutation rate of nearly neutral mutations is reduced because these mutations are restricted to the tail of the distribution of selection coefficients. The “fixed model” expands the nearly neutral theory. Tachida classified evolution under the “fixed model” based on the product of formula_4 and the variance in the distribution of formula_6: a large product corresponds to adaptive evolution, an intermediate product corresponds to nearly neutral evolution, and a small product corresponds to almost neutral evolution. According to this classification, slightly advantageous mutations can contribute to nearly neutral evolution. The "drift barrier" theory. Michael Lynch has proposed that variation in the ability to purge slightly deleterious mutations (i.e. variation in formula_13) can explain variation in genomic architecture among species, e.g. the size of the genome, or the mutation rate. Specifically, larger populations will have lower mutation rates, more streamlined genomic architectures, and generally more finely tuned adaptations. However, if robustness to the consequences of each possible error in processes such as transcription and translation substantially reduces the cost of making such errors, larger populations might evolve lower rates of global proofreading, and hence have higher rates of error. This may explain why "Escherichia coli" has higher rates of transcription error than "Saccharomyces cerevisiae". This is supported by the fact that transcriptional error rates in "E. 
coli" depend on protein abundance (which is responsible for modulating the locus-specific strength of selection), but do so only for high-error-rate C to U deamination errors in "S. cerevisiae". References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": " \\rho " }, { "math_id": 1, "text": " \\rho = ugN_e \\bar P_{fix}" }, { "math_id": 2, "text": " u " }, { "math_id": 3, "text": " g " }, { "math_id": 4, "text": " N_e " }, { "math_id": 5, "text": " P_{fix} = \\frac{1-e^{- s}}{1-e^{- s N_e}} " }, { "math_id": 6, "text": " s " }, { "math_id": 7, "text": " |s| \\ll \\frac{1}{N_e} " }, { "math_id": 8, "text": " P_{fix}= \\frac{1}{N_e} " }, { "math_id": 9, "text": " - s \\gg \\frac{1}{N_e} " }, { "math_id": 10, "text": " P_{fix} " }, { "math_id": 11, "text": " -s \\simeq \\frac{1}{N_e} " }, { "math_id": 12, "text": " P_{fix} \\ll \\frac{1}{N_e}" }, { "math_id": 13, "text": "N_e" } ]
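The fixation-probability expression formula_5 quoted in the Theory section, together with its neutral limit formula_8, can be explored numerically. The following Python sketch is not part of the article; the particular selection coefficient and population sizes are arbitrary choices, used only to illustrate the drift barrier: a mutation with a small negative selection coefficient fixes almost as often as a neutral one while formula_7 holds, but is effectively purged once the effective population size grows well beyond the reciprocal of its selection coefficient.

import math

def p_fix(s, n_e):
    """Fixation probability of a new mutation with selection coefficient s
    in a population of effective size n_e (the form quoted in the article)."""
    if s == 0:
        return 1.0 / n_e          # completely neutral limit
    return (1 - math.exp(-s)) / (1 - math.exp(-s * n_e))

# A slightly deleterious mutation (s = -0.001) behaves almost neutrally when
# |s| << 1/N_e, but is purged once N_e greatly exceeds 1/|s|.
for n_e in (100, 1_000, 10_000, 100_000):
    ratio = p_fix(-0.001, n_e) / (1.0 / n_e)   # relative to a strictly neutral mutation
    print(f"N_e = {n_e:>6}: P_fix / (1/N_e) = {ratio:.3g}")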
https://en.wikipedia.org/wiki?curid=11128198
11128841
Whewell equation
Mathematical equation The Whewell equation of a plane curve is an equation that relates the tangential angle (φ) with arc length (s), where the tangential angle is the angle between the tangent to the curve at some point and the x-axis, and the arc length is the distance along the curve from a fixed point. These quantities do not depend on the coordinate system used except for the choice of the direction of the x-axis, so this is an intrinsic equation of the curve, or, less precisely, "the" intrinsic equation. If one curve is obtained from another curve by translation then their Whewell equations will be the same. When the relation is a function, so that tangential angle is given as a function of arc length, certain properties become easy to manipulate. In particular, the derivative of the tangential angle with respect to arc length is equal to the curvature. Thus, taking the derivative of the Whewell equation yields a Cesàro equation for the same curve. The concept is named after William Whewell, who introduced it in 1849, in a paper in the Cambridge Philosophical Transactions. In his conception, the angle used is the deviation from the direction of the curve at some fixed starting point, and this convention is sometimes used by other authors as well. This is equivalent to the definition given here by the addition of a constant to the angle or by rotating the curve. Properties. If a point formula_0 on the curve is given parametrically in terms of the arc length, formula_1 then the tangential angle φ is determined by formula_2 which implies formula_3 Parametric equations for the curve can be obtained by integrating: formula_4 Since the curvature is defined by formula_5 the Cesàro equation is easily obtained by differentiating the Whewell equation.
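The parametric integrals above translate directly into a numerical reconstruction of a curve from its Whewell equation. The following Python sketch is illustrative only and not part of the article; it takes φ("s") = "s" (whose derivative, and hence curvature, is constantly 1) and recovers points lying on a unit circle.

import numpy as np

def curve_from_whewell(phi, s_max, n=10_000):
    """Reconstruct (x, y) from a Whewell equation phi(s) by numerically
    integrating dx/ds = cos(phi) and dy/ds = sin(phi)."""
    s = np.linspace(0.0, s_max, n)
    ds = s[1] - s[0]
    x = np.cumsum(np.cos(phi(s))) * ds
    y = np.cumsum(np.sin(phi(s))) * ds
    return x, y

# phi(s) = s has curvature dphi/ds = 1, so the result should be a unit circle.
x, y = curve_from_whewell(lambda s: s, 2 * np.pi)
radii = np.sqrt(x**2 + (y - 1.0)**2)   # circle of radius 1 centred at (0, 1)
print(radii.min(), radii.max())        # both close to 1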
[ { "math_id": 0, "text": "\\vec r = (x, y)" }, { "math_id": 1, "text": "s \\mapsto \\vec r," }, { "math_id": 2, "text": "\\frac {d \\vec r}{ds}\n = \\begin{pmatrix} \\frac{dx}{ds} \\\\ \\frac{dy}{ds} \\end{pmatrix}\n = \\begin{pmatrix} \\cos \\varphi \\\\ \\sin \\varphi \\end{pmatrix}\n \\quad \\text {since} \\quad\n \\left | \\frac {d \\vec r}{ds} \\right | = 1 ," }, { "math_id": 3, "text": "\\frac{dy}{dx} = \\tan \\varphi." }, { "math_id": 4, "text": " \\begin{align}\nx &= \\int \\cos \\varphi \\, ds, \\\\\ny &= \\int \\sin \\varphi \\, ds.\n\\end{align} " }, { "math_id": 5, "text": "\\kappa = \\frac{d\\varphi}{ds}," } ]
https://en.wikipedia.org/wiki?curid=11128841
11129054
Cesàro equation
Equation in geometry In geometry, the Cesàro equation of a plane curve is an equation relating the curvature (κ) at a point of the curve to the arc length (s) from the start of the curve to the given point. It may also be given as an equation relating the radius of curvature (R) to arc length. (These are equivalent because "R" = 1/"κ".) Two congruent curves will have the same Cesàro equation. Cesàro equations are named after Ernesto Cesàro. Log-aesthetic curves. The family of log-aesthetic curves is determined in the general (formula_0) case by the following intrinsic equation: formula_1 This is equivalent to the following explicit formula for curvature: formula_2 Further, the formula_3 constant above represents a simple re-parametrization of the arc length parameter, while formula_4 is equivalent to uniform scaling, so log-aesthetic curves are fully characterized by the formula_5 parameter. In the special case of formula_6, the log-aesthetic curve becomes Nielsen's spiral, with the following Cesàro equation (where formula_7 is a uniform scaling parameter): formula_8 A number of well-known curves are instances of the log-aesthetic curve family. These include circle (formula_9), Euler spiral (formula_10), Logarithmic spiral (formula_11), and Circle involute (formula_12). Examples. Some curves have a particularly simple representation by a Cesàro equation. Some examples are: the straight line, with formula_13; the circle of radius "α", with formula_14; the logarithmic spiral, with formula_15; the circle involute, with formula_16; the Euler spiral, with formula_17; and the catenary, with formula_18. Related parameterizations. The Cesàro equation of a curve is related to its Whewell equation in the following way: if the Whewell equation is "φ" = "f"("s") then the Cesàro equation is "κ" = "f" ′("s"). References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\\alpha \\ne 0" }, { "math_id": 1, "text": "R(s)^\\alpha = c_0s + c_1" }, { "math_id": 2, "text": "\\kappa(s) = (c_0s+c_1)^{-1/\\alpha}" }, { "math_id": 3, "text": "c_1" }, { "math_id": 4, "text": "c_0" }, { "math_id": 5, "text": "\\alpha" }, { "math_id": 6, "text": "\\alpha=0" }, { "math_id": 7, "text": "a" }, { "math_id": 8, "text": "\\kappa(s) = \\frac{e^{\\frac{s}{a}}}{a}" }, { "math_id": 9, "text": "\\alpha = \\infty" }, { "math_id": 10, "text": "\\alpha = -1" }, { "math_id": 11, "text": "\\alpha = 1" }, { "math_id": 12, "text": "\\alpha = 2" }, { "math_id": 13, "text": "\\kappa = 0" }, { "math_id": 14, "text": "\\kappa = \\frac{1}{\\alpha}" }, { "math_id": 15, "text": "\\kappa=\\frac{C}{s}" }, { "math_id": 16, "text": "\\kappa=\\frac{C}{\\sqrt s}" }, { "math_id": 17, "text": "\\kappa=Cs" }, { "math_id": 18, "text": "\\kappa=\\frac{a}{s^2+a^2}" } ]
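The explicit curvature formula formula_2 and the relation between Cesàro and Whewell equations can be checked numerically. The following Python sketch is illustrative only and not part of the article; the names c0 and c1 stand for the constants written formula_4 and formula_3 above, and the values chosen are arbitrary.

import numpy as np

def log_aesthetic_curvature(s, alpha, c0=1.0, c1=1.0):
    """Curvature kappa(s) = (c0*s + c1)**(-1/alpha) of a log-aesthetic curve
    (the explicit form given above); parameter values are illustrative."""
    return (c0 * s + c1) ** (-1.0 / alpha)

s = np.linspace(0.0, 2.0, 201)

# alpha = -1 gives curvature growing linearly with arc length, i.e. an Euler spiral.
kappa = log_aesthetic_curvature(s, alpha=-1.0, c0=1.0, c1=0.0)
print(np.allclose(kappa, s))   # True

# Integrating the Cesàro equation once gives the Whewell equation phi(s) = ∫ kappa ds.
phi = np.concatenate(([0.0], np.cumsum(0.5 * (kappa[1:] + kappa[:-1]) * np.diff(s))))
print(np.allclose(phi, s**2 / 2, atol=1e-4))   # tangent angle of the Euler spiral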
https://en.wikipedia.org/wiki?curid=11129054
11129423
Social impact theory
1981 social theory Social impact theory was created by Bibb Latané in 1981 and consists of four basic rules which consider how individuals can be "sources or targets of social influence". Social impact is the result of social forces including the strength of the source of impact, the immediacy of the event, and the number of sources exerting the impact. The more targets of impact that exist, the less impact each individual target has. Original research. According to psychologist Bibb Latané, social impact is defined as any influence on individual feelings, thoughts or behavior that is created from the real, implied or imagined presence or actions of others. The application of social impact varies from diffusion of responsibility to social loafing, stage fright or persuasive communication. In 1981, Latané developed the social impact theory using three key variables: strength ("S"), immediacy ("I"), and the number of sources ("N"). With these variables, Latané developed three laws through formulas—social forces, psychosocial, and multiplication/division of impact. Social forces. The social forces law states that i = f(S * I * N). Impact (i) is a function of the three variables multiplied and grows as each variable is increased. However, if a variable were to be 0 or significantly low, the overall impact would be affected. Psychosocial law. The psychosocial law states that the most significant difference in social impact will occur in the transition from 0 to 1 source, and that as the number of sources increases further, each additional source makes progressively less difference. The equation Latané uses for this law is formula_0. That is, some power (t) of the number of people (N) multiplied by the scaling constant (s) determines social impact. Latané applied this theory to previous studies done on imitation and conformity as well as on embarrassment. Asch's study of conformity in college students contradicts the psychosocial law, showing that one or two sources of social impact make little difference. However, Gerard, Wilhelmy, and Conolley conducted a similar study on conformity sampling from high school students. High school students were deemed less likely to be resistant to conformity than college students, and thus may be more generalizable, in this regard, than Asch's study. Gerard, Wilhelmy, and Conolley's study supported the psychosocial law, showing that the first few confederates had the greatest impact on conformity. Latané applied his law to imitation as well, using Milgram's gawking experiment. In this experiment various numbers of confederates stood on a street corner in New York craning and gawking at the sky. The results showed that more confederates meant more gawkers, and the change became increasingly insignificant as more confederates were present. In a study Latané and Harkins conducted on stage fright and embarrassment, the results also followed the psychosocial law showing that more audience members meant greater anxiety and that the greatest difference existed between 0 and 1 audience members. Multiplication/division of impact. The third law of social impact states that the strength, immediacy, and number of targets play a role in social impact. That is, the social impact is divided amongst all of the targets, so the greater the strength, immediacy, and number of targets in a social situation, the less impact falls on each individual target. The equation that represents this division is formula_1. This law relates to diffusion of responsibility, in which individuals feel less accountable as the number of people present increases. 
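The two formulas above lend themselves to a quick numerical illustration. The following Python sketch is not from Latané's work; the scaling constant and exponent values are arbitrary choices used only to show the qualitative behaviour (a large jump from 0 to 1 source, then diminishing marginal impact, and impact divided among targets).

def psychosocial_impact(n_sources, s=1.0, t=0.5):
    """Psychosocial law: impact = s * N**t, with t < 1 so that each additional
    source adds less than the one before (values illustrative)."""
    return s * n_sources ** t

def divided_impact(strength, immediacy, n_targets):
    """Division of impact: the force felt by each target shrinks as the strength,
    immediacy, and number of targets grow (illustrative form)."""
    return 1.0 / (strength * immediacy * n_targets)

# Marginal gain of adding one more source keeps shrinking.
for n in range(1, 6):
    gain = psychosocial_impact(n) - psychosocial_impact(n - 1)
    print(f"source {n}: marginal impact {gain:.2f}")

# The same request spread over more targets weighs less on each of them.
for targets in (1, 5, 10):
    print(f"{targets} target(s): impact per target {divided_impact(1.0, 1.0, targets):.2f}")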
In emergency situations, the impact of the emergency is reduced when more people are present. The social impact theory is both a generalizable and a specific theory. It uses one set of equations, which are applicable to many social situations. For example, the psychosocial law can be used to predict instances of conformity, imitation and embarrassment. Yet, it is also specific because the predictions that it makes are specific and can be applied to and observed in the world. The theory is falsifiable as well. It makes predictions through the use of equations; however, the equations may not be able to accurately predict the outcome of social situations. Social impact theory is also useful. It can be used to understand which social situations result in the greatest impact and which situations present exceptions to the rules. While social impact theory explores social situations and can help predict the outcomes of social situations, it also has some shortcomings and questions that are left unresolved. The rules guiding the theory depict people as recipients that passively accept social impact and do not take into account the social impact that people may actively seek out. The model is also static, and does not fully compensate for the dynamics involved in social interactions. The theory is relatively new and fails to address some pertinent issues. These issues include finding more accurate ways to measure social outcomes, understanding the "t" exponent in psychosocial law, taking susceptibility into account, understanding how short-term consequences can develop into chronic consequences, application to group interactions, understanding the model's nature (descriptive vs. explanatory, generalization vs. theory). Applying social impact theory. The social impact theory specifies the effects of social variables—strength, immediacy, and number of sources—but does not explain the nature of these influencing processes. There are various factors not considered by experimenters while implementing the theory. Concepts such as peripheral persuasion affect how communicators may be more credible to some individuals and untrustworthy to others. The variables are inconsistent from individual to individual, possibly associating strength with source credibility and attractiveness or immediacy with physical closeness. Therefore, in the application of the social impact theory, the idea of persuasiveness, the ability to induce someone with an opposing position to change, and supportiveness, the ability to help those who agree with someone's point of view to resist the influence of others, is introduced. Ultimately, an individual's likelihood of change and being influenced is a direct function of strength (persuasiveness), immediacy and the number of advocates and is a direct inverse function of strength (supportiveness), immediacy and number of target individuals. Subsequent development. The dynamic social impact theory, as proposed by Bibb Latané and his colleagues, describes the influence of members between majority and minority groups. The theory serves as extension of the originating social impact theory (i.e., influence is determined by the strength, immediacy, and number of sources present) as it explains how groups, as complex systems, change and develop over time. Groups are constantly organizing and re-organizing into four basic patterns: "consolidation, clustering, correlation," and "continuing diversity". 
These patterns are consistent with groups that are spatially distributed and interacting repeatedly over time. 1. "Consolidation" – as individuals interact with each other regularly, their actions, attitudes, and opinions become more uniform. The opinions held by the majority tend to spread throughout the group, while the minority decreases in size. 2. "Clustering" – occurs when group members communicate more frequently as a consequence of close proximity. As the law of social impact suggests, individuals are susceptible to influence by their closest members, and so clusters of group members with similar opinions emerge in groups. Minority group members are often shielded from majority influence due to clustering. Therefore, subgroups can emerge which may possess similar ideas to one another, but hold different beliefs than the majority population. 3. "Correlation" – over time, individual group members' opinions on a variety of issues (including issues that have never been openly discussed before) converge, so that their opinions become correlated. 4. "Continuing diversity" – as mentioned previously, minority members are often shielded from majority influence due to clustering. Diversity exists if the minority group can resist majority influence and communicate with majority members. However, if the majority is large or minority members are physically isolated from one another, this diversity decreases. Contemporary research. In 1985 Mullen analyzed two of the factors that Latané associated with social impact theory. Mullen conducted a meta-analysis that examined the validity of the source strength and the source immediacy. The studies that were analyzed were sorted by the method of measurement used with the self-reported in one category and the behavior measurements in the other category. Mullen's results showed that the source strength and immediacy were only supported in cases in which tension was self-reported, and not when behavior was measured. He thus concluded that Latané's source strength and immediacy were weak and lacked consistency. Critics of Mullen's study, however, argue that perhaps not enough studies were available or included, which may have skewed his results and given him an inaccurate conclusion. A study conducted by Constantine Sedikides and Jeffrey M. Jackson took another look at the role of strength and within social impact theory. This study was conducted in a bird house at a zoo. In one scenario, an experimenter dressed as a bird keeper walked into the bird house and told visitors that leaning on the railing was prohibited. This was considered the high-strength scenario because of the authority that a zookeeper possesses within a zoo. The other scenario involved an experimenter dressed in ordinary clothes addressing the visitors with the same message. The results of the study showed that visitors responded better to the high-strength scenario, with fewer individuals leaning on the railing after the zookeeper had told them not to. The study also tested the effect that immediacy had on social impact. This was done by measuring the incidences of leaning on the rail both immediately after the message was delivered and at a later point in time. The results showed that immediacy played a role in determining social impact since there were fewer people leaning on the rails immediately after the message. The visitors in the bird house were studied as members of the group they came with to determine how number of targets would influence the targets' behavior. 
The group size ranged from 1 to 6 and the results showed that those in larger groups were less likely to comply with the experimenter's message than those in smaller groups. All of these findings support the parameters of Latané's social impact theory. Kipling D. Williams, and Karen B. Williams theorized that social impact would vary depending on the underlying motive of compliance. When compliance is simply a mechanism to induce the formation of a positive impression, stronger sources should produce a greater social impact. When it is an internal motive that induces compliance, the strength of the source shouldn't matter. Williams and Williams designed a study in which two persuasion methods were utilized, one that would evoke external motivation and one that would evoke internal motivation. Using these techniques, experimenters went from door to door using one of the techniques to attempt to collect money for a zoo. The foot-in-the-door technique was utilized to evoke the internal motive. In this technique, the experimenter would make an initial request that was relatively small, and gradually request larger and larger amounts. This is internally motivated because the target's self-perception is altered to feel more helpful after the original contribution. The door-in-the-face technique, on the other hand, involves the experimenter asking for a large amount first; and when the target declines, they ask for a much smaller amount as a concession. This technique draws on external motivation because the request for a concession makes one feel obliged to comply. The experiment was conducted with low-strength and high-strength experimenters. Those who were approached by higher-strength experimenters were more likely to contribute money. Using the different persuasion approaches did not produce statistically significant results; however, it did support Williams and Williams hypothesis that the strength of the experimenter would heighten the effects of the door-in-the-face technique, but have minimal effect on the foot-in-the-door technique One study conducted by Helen Harton and colleagues examined the four patterns of dynamic social impact theory. The study included one large (six rows of 15-30 people) and two small introductory psychology classes (one group per class). Ten questions were chosen from course-readings and either distributed as a hand-out, read aloud, or presented on an overhead projector. Students were given ~1 min per question to mark their pre-discussion answers. The students were then instructed to discuss each question for 1 or 2 minutes with their neighbours (on either side), but only about the assigned questions - which answer they chose and why. There was little initial diversity on two of the questions - one was too easy (majority got it), and the other was too difficult (majority agreeing on the wrong answer). "Consolidation"- overall, discussion-induced consolidation occurred in 7 out of the 8 independent groups, indicating majority members converting minority members. "Clustering"- prior to discussion, neighbours answers were evenly distributed. Post-discussion, groups exhibited a significant degree of spatial clustering, as neighbours influenced each other to become more similar. "Correlation"- there was an increased tendency for an answer on one question to be associated with an answer on another question that was completely unrelated content-wise. 
"Continuing Diversity"- none of the 8 groups reached unanimity on any of the questions - meaning, minority group members did not completely conform to majority group members. Due to social media's influence, there has been movement towards e-commerce. Researchers have since looked into the relationship between social media influence and visit and purchase intentions within individuals. Most recently, Rodrigo Perez-Vega, Kathryn Waite, and Kevin O'Gorman suggest that the theory is also relevant in the context of social media. Empirical research on this context has found support for the effects of numbers of sources (i.e. likes) in performance outcomes such as box office sales. Furthermore, Babajide Osatuyi and Katia Passerini operationalized strength, immediacy, and number using Social Network Analysis centrality measures, i.e., betweeness, closeness, and degree centralities to test two of the rules stipulated in social impact theory. They compared the influence of using Twitter and discussion board in a learning management system (e.g., Moodle and Blackboard) on student performance, measured as final grade in a course. The results provide support for the first law, i.e., impact (grade) as a multiplicative resultant of strength, immediacy, and number of interactions among students. Additional interesting insights were observed in this study that educators ought to consider to maximize the integration of new social technologies into pedagogy. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "Impact = s\\cdot N^t" }, { "math_id": 1, "text": "Impact = f(1/(S\\cdot I\\cdot N))" } ]
https://en.wikipedia.org/wiki?curid=11129423
11129469
Flare (countermeasure)
Aerial defence against heat-seeking missiles A flare or decoy flare is an aerial infrared countermeasure used by an aircraft to counter an infrared homing ("heat-seeking") surface-to-air missile or air-to-air missile. Flares are commonly composed of a pyrotechnic composition based on magnesium or another hot-burning metal, with burning temperature equal to or hotter than engine exhaust. The aim is to make the infrared-guided missile seek out the heat signature from the flare rather than the aircraft's engines. Tactics. In contrast to radar-guided missiles, IR-guided missiles are very difficult to find as they approach aircraft. They do not emit detectable radar, and they are generally fired from behind, directly toward the engines. In most cases, pilots have to rely on their wingmen to spot the missile's smoke trail and alert them to a launch. Since IR-guided missiles have a shorter range than their radar-guided counterparts, good situational awareness of altitude and potential threats continues to be an effective defense. More advanced electro-optical systems can detect missile launches automatically from the distinct thermal emissions of a missile's rocket motor. Once the presence of a "live" IR missile is indicated, flares are released by the aircraft in an attempt to decoy the missile. Some systems are automatic, while others require manual jettisoning of the flares. The aircraft would then pull away at a sharp angle from the flare (and the terminal trajectory of the missile) and reduce engine power in an attempt to cool the thermal signature. Ideally the missile's seeker head is then confused by this change in temperature and flurry of new heat signatures, and starts to follow one of the flares rather than the aircraft. More modern IR-guided missiles have sophisticated on-board electronics and secondary electro-optical sensors that help discriminate between flares and targets, reducing the effectiveness of flares as a reactionary countermeasure. A newer procedure involves preemptively deploying flares in anticipation of a missile launch, which distorts the expected image of the target should one be let loose. This "pre-flaring" increases the chances that the missile then follows the flares or the open sky in between, rather than a part of the actual defender. Usage. Apart from military use, some civilian aircraft are also equipped with countermeasure flares, against terrorism: the Israeli airline El Al, having been the target of the failed 2002 airliner attack, in which shoulder-launched surface-to-air missiles were fired at an airliner while taking off, began equipping its fleet with radar-based, automated flare release countermeasures from June 2004. This caused concerns in some European countries, which proceeded to ban such aircraft from landing at their airports. On 18 June 2017, after an AIM-9X did not successfully track a targeted Syrian Air Force Su-22 Fitter, US Navy Lt. Cmdr. Michael "Mob" Tremel flying an F/A-18E Super Hornet used an AMRAAM AAM to successfully destroy the enemy aircraft. There is a theory that the Sidewinder is tested against American and not Soviet/Russian flares. The Sidewinder has thus become used to rejecting American, but not Soviet/Russian, flares. Similar issues arose from the testing of the AIM-9P model. The missile would ignore American flares but go for Soviet ones because these flares have a "different burn time, intensity and separation." Decoying. Flares burn at thousands of degrees Celsius, which is much hotter than the exhaust of a jet engine. 
IR missiles seek out the hotter flame, believing it to be an aircraft in afterburner or the beginning of the engine's exhaust source. As the more modern infrared seekers tend to have spectral sensitivity tailored to more closely match the emissions of airplanes and reject other sources (the so-called CCM, or counter-countermeasures), the modernized decoy flares have their emission spectrum optimized to also match the radiation of the airplane (mainly its engines and engine exhaust). In addition to spectral discrimination, the CCMs can include trajectory discrimination and detection of size of the radiation source. The newest generation of the FIM-92 Stinger uses a dual IR and UV seeker head, which allows for a redundant tracking solution, effectively negating the effectiveness of modern decoy flares (according to the U.S. Department of Defense). While research and development in flare technology has produced an IR signature on the same wavelength as hot engine exhaust, modern flares still produce a notably (and immutably) different UV signature than an aircraft engine burning kerosene jet-fuel. Materials used. For the infrared generating charge, two approaches are possible: pyrotechnic and pyrophoric. As stored chemical-energy sources, IR-decoy flares contain pyrotechnic compositions, liquid or solid pyrophoric substances, or liquid or solid highly flammable substances. Upon ignition of the decoy flare, a strongly exothermal reaction is started, releasing infrared energy and visible smoke and flame, emission being dependent on the chemical nature of the payload used. There is a wide variety of calibres and shapes available for aerial decoy flares. Due to volume storage restrictions onboard platforms, many aircraft of American origin use square decoy flare cartridges. Nevertheless, cylindrical cartridges are also available onboard American aircraft, such as MJU 23/B on the B-1 Lancer or MJU-8A/B on the F/A-18 Hornet; however, these are used mainly onboard French aircraft and those of Russian origin (e.g. PPI-26 IW on the MiG 29). Square calibres and typical decoy flares: Cylindrical calibres and typical decoy flares: Pyrotechnic flares. Pyrotechnic flares use a slow-burning fuel-oxidizer mixture that generates intense heat. Thermite-like mixtures (e.g. Magnesium/Teflon/Viton [MTV]) are common. Other combinations include ammonium perchlorate/anthracene/magnesium, or can be based on red phosphorus. To adjust the emission characteristics to match the spectrum of jet engines more closely, charges based on double-base propellants are used. These compositions can avoid the metal content and achieve cleaner burning without the prominent smoke trail. Blackbody payloads. Certain pyrotechnic compositions, for example MTV, give a great flame emission upon combustion and yield a temperature-dependent signature and can be understood as gray bodies of high emissivity (formula_0~0.95). Such payloads are called blackbody payloads. Other payloads, like iron/potassium perchlorate pellets, only yield a low flame emission but also show a temperature-dependent signature. Nevertheless, the lower combustion temperature as compared to MTV results in a lower amount of energy released in the short-wavelength IR range. Other blackbody payloads include ammonium perchlorate/anthracene/magnesium and hydroxyl-terminated polybutadiene (HTPB) binder. Spectrally balanced payloads. 
Other payloads provide large amounts of hot carbon dioxide upon combustion and thus provide a temperature-independent selective emission in the wavelength range between 3 and 5 μm. Typical pyrotechnic payloads of this type resemble whistling compositions and are often made up from potassium perchlorate and hydrogen lean organic fuels. Other spectrally balanced payloads are made up similarly as double base propellants and contain nitrocellulose (NC), and other esters of nitric acid or nitro compounds as oxidizers such as hexanitroethane and nitro compounds and nitramines as high-energy fuels. Pyrophoric flares. Pyrophoric flares work on the principle of ejecting a special pyrophoric material out of an airtight cartridge, usually using a gas generator (e.g. a small pyrotechnic charge or pressurized gas). The material then self-ignites in contact with air. The materials can be solid, e.g. iron platelets coated with ultrafine aluminium, or liquid, often organometallic compounds; e.g. alkyl aluminium compounds (e.g. triethylaluminium). Pyrophoric flares may have reduced effectiveness at high altitudes, due to lower air temperature and lower availability of oxygen; however oxygen can be co-ejected with the pyrophoric fuel. The advantage of alkyl aluminium and similar compounds is the high content of carbon and hydrogen, resulting in bright emission lines similar to spectral signature of burning jet fuel. Controlled content of solid combustion products, generating continuous black-body radiation, allows further matching of emission characteristics to the net infrared emissions of fuel exhaust and hot engine components. The flames of pyrophoric fuels can also reach the size of several metres, in comparison with about less than one metre flame of MTV flares. The trajectory can be also influenced by tailoring the aerodynamic properties of the ejected containers. Highly flammable payloads. These payloads contain red phosphorus as an energetic filler. The red phosphorus is mixed with organic binders to give brushable pastes that can be coated on thin polyimide platelets. The combustion of those platelets yields a temperature-dependent signature. Endergonic additives such as highly dispersed silica or alkali halides may further lower the combustion temperature. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "e" } ]
https://en.wikipedia.org/wiki?curid=11129469
1112960
Monte Carlo integration
Numerical technique In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid, Monte Carlo randomly chooses points at which the integrand is evaluated. This method is particularly useful for higher-dimensional integrals. There are different methods to perform a Monte Carlo integration, such as uniform sampling, stratified sampling, importance sampling, sequential Monte Carlo (also known as a particle filter), and mean-field particle methods. Overview. In numerical integration, methods such as the trapezoidal rule use a deterministic approach. Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome. In Monte Carlo, the final outcome is an approximation of the correct value with respective error bars, and the correct value is likely to be within those error bars. The problem Monte Carlo integration addresses is the computation of a multidimensional definite integral formula_0 where Ω, a subset of formula_1, has volume formula_2 The naive Monte Carlo approach is to sample points uniformly on Ω: given "N" uniform samples, formula_3 "I" can be approximated by formula_4 This is because the law of large numbers ensures that formula_5 Given the estimation of "I" from "QN", the error bars of "QN" can be estimated by the sample variance using the unbiased estimate of the variance. formula_6 which leads to formula_7 As long as the sequence formula_8 is bounded, this variance decreases asymptotically to zero as 1/"N". The estimation of the error of "QN" is thus formula_9 which decreases as formula_10. This is standard error of the mean multiplied with formula_11. This result does not depend on the number of dimensions of the integral, which is the promised advantage of Monte Carlo integration against most deterministic methods that depend exponentially on the dimension. It is important to notice that, unlike in deterministic methods, the estimate of the error is not a strict error bound; random sampling may not uncover all the important features of the integrand that can result in an underestimate of the error. While the naive Monte Carlo works for simple examples, an improvement over deterministic algorithms can only be accomplished with algorithms that use problem-specific sampling distributions. With an appropriate sample distribution it is possible to exploit the fact that almost all higher-dimensional integrands are very localized and only small subspace notably contributes to the integral. A large part of the Monte Carlo literature is dedicated in developing strategies to improve the error estimates. In particular, stratified sampling—dividing the region in sub-domains—and importance sampling—sampling from non-uniform distributions—are two examples of such techniques. Example. A paradigmatic example of a Monte Carlo integration is the estimation of π. Consider the function formula_12 and the set Ω = [−1,1] × [−1,1] with "V" = 4. Notice that formula_13 Thus, a crude way of calculating the value of π with Monte Carlo integration is to pick "N" random numbers on Ω and compute formula_14 In the figure on the right, the relative error formula_15 is measured as a function of "N", confirming the formula_10. C example. Keep in mind that a true random number generator should be used. 
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void) {
    int i, throws = 99999, insideCircle = 0;
    double randX, randY, pi;
    srand(time(NULL));
    for (i = 0; i < throws; ++i) {
        /* random point in the unit square [0,1) x [0,1) */
        randX = rand() / (double) RAND_MAX;
        randY = rand() / (double) RAND_MAX;
        /* count points that fall inside the quarter circle of radius 1 */
        if (randX * randX + randY * randY < 1)
            ++insideCircle;
    }
    pi = 4.0 * insideCircle / throws;
    printf("%f\n", pi);
    return 0;
}
Python example. Made in Python.
from numpy import random

throws = 2000
inside_circle = 0
i = 0
radius = 1
while i < throws:
    # Choose random X and Y centered around 0,0
    x = random.uniform(-radius, radius)
    y = random.uniform(-radius, radius)
    # If the point is inside circle, increase variable
    if x**2 + y**2 <= radius**2:
        inside_circle += 1
    i += 1

area = (((2 * radius) ** 2) * inside_circle) / throws
print(area)
Wolfram Mathematica example. The code below describes a process of integrating the function formula_16 over the range formula_17 using the Monte-Carlo method in Mathematica:
func[x_] := 1/(1 + Sinh[2*x]*(Log[x])^2);
Distrib[x_, average_, var_] := PDF[NormalDistribution[average, var], 1.1*x - 0.1];
n = 10;
RV = RandomVariate[TruncatedDistribution[{0.8, 3}, NormalDistribution[1, 0.399]], n];
Int = 1/n Total[func[RV]/Distrib[RV, 1, 0.399]]*Integrate[Distrib[x, 1, 0.399], {x, 0.8, 3}]
NIntegrate[func[x], {x, 0.8, 3}] (*Compare with real answer*)
Recursive stratified sampling. Recursive stratified sampling is a generalization of one-dimensional adaptive quadratures to multi-dimensional integrals. On each recursion step the integral and the error are estimated using a plain Monte Carlo algorithm. If the error estimate is larger than the required accuracy the integration volume is divided into sub-volumes and the procedure is recursively applied to sub-volumes. The ordinary 'dividing by two' strategy does not work for multi-dimensions as the number of sub-volumes grows far too quickly to keep track. Instead one estimates along which dimension a subdivision should bring the most dividends and only subdivides the volume along this dimension. The stratified sampling algorithm concentrates the sampling points in the regions where the variance of the function is largest thus reducing the grand variance and making the sampling more effective, as shown on the illustration. The popular MISER routine implements a similar algorithm. MISER Monte Carlo. The MISER algorithm is based on recursive stratified sampling. This technique aims to reduce the overall integration error by concentrating integration points in the regions of highest variance. The idea of stratified sampling begins with the observation that for two disjoint regions "a" and "b" with Monte Carlo estimates of the integral formula_18 and formula_19 and variances formula_20 and formula_21, the variance Var("f") of the combined estimate formula_22 is given by, formula_23 It can be shown that this variance is minimized by distributing the points such that, formula_24 Hence the smallest error estimate is obtained by allocating sample points in proportion to the standard deviation of the function in each sub-region. The MISER algorithm proceeds by bisecting the integration region along one coordinate axis to give two sub-regions at each step. The direction is chosen by examining all "d" possible bisections and selecting the one which will minimize the combined variance of the two sub-regions. The variance in the sub-regions is estimated by sampling with a fraction of the total number of points available to the current step. The same procedure is then repeated recursively for each of the two half-spaces from the best bisection. 
The remaining sample points are allocated to the sub-regions using the formula for "Na" and "Nb". This recursive allocation of integration points continues down to a user-specified depth where each sub-region is integrated using a plain Monte Carlo estimate. These individual values and their error estimates are then combined upwards to give an overall result and an estimate of its error. Importance sampling. There are a variety of importance sampling algorithms, such as Importance sampling algorithm. Importance sampling provides a very important tool to perform Monte-Carlo integration. The main result of importance sampling to this method is that the uniform sampling of formula_25 is a particular case of a more generic choice, on which the samples are drawn from any distribution formula_26. The idea is that formula_26 can be chosen to decrease the variance of the measurement "QN". Consider the following example where one would like to numerically integrate a gaussian function, centered at 0, with σ = 1, from −1000 to 1000. Naturally, if the samples are drawn uniformly on the interval [−1000, 1000], only a very small part of them would be significant to the integral. This can be improved by choosing a different distribution from where the samples are chosen, for instance by sampling according to a gaussian distribution centered at 0, with σ = 1. Of course the "right" choice strongly depends on the integrand. Formally, given a set of samples chosen from a distribution formula_27 the estimator for "I" is given by formula_28 Intuitively, this says that if we pick a particular sample twice as much as other samples, we weight it half as much as the other samples. This estimator is naturally valid for uniform sampling, the case where formula_26 is constant. The Metropolis–Hastings algorithm is one of the most used algorithms to generate formula_25 from formula_26, thus providing an efficient way of computing integrals. VEGAS Monte Carlo. The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region which creates the histogram of the function "f". Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like "Kd", the probability distribution is approximated by a separable function: formula_29 so that the number of bins required is only "Kd". This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS. VEGAS incorporates a number of additional features, and combines both stratified sampling and importance sampling. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; References. &lt;templatestyles src="Refbegin/styles.css" /&gt;
[ { "math_id": 0, "text": "I = \\int_{\\Omega}f(\\overline{\\mathbf{x}}) \\, d\\overline{\\mathbf{x}}" }, { "math_id": 1, "text": "\\mathbb{R}^m" }, { "math_id": 2, "text": "V = \\int_{\\Omega}d\\overline{\\mathbf{x}}" }, { "math_id": 3, "text": "\\overline{\\mathbf{x}}_1, \\cdots, \\overline{\\mathbf{x}}_N\\in \\Omega," }, { "math_id": 4, "text": " I \\approx Q_N \\equiv V \\frac{1}{N} \\sum_{i=1}^N f(\\overline{\\mathbf{x}}_i) = V \\langle f\\rangle." }, { "math_id": 5, "text": " \\lim_{N \\to \\infty} Q_N = I." }, { "math_id": 6, "text": " \\mathrm{Var}(f)\\equiv\\sigma_N^2 = \\frac{1}{N-1} \\sum_{i=1}^N \\left (f(\\overline{\\mathbf{x}}_i) - \\langle f \\rangle \\right )^2. " }, { "math_id": 7, "text": " \\mathrm{Var}(Q_N) = \\frac{V^2}{N^2} \\sum_{i=1}^N \\mathrm{Var}(f) = V^2\\frac{\\mathrm{Var}(f)}{N} = V^2\\frac{\\sigma_N^2}{N}." }, { "math_id": 8, "text": " \\left \\{ \\sigma_1^2, \\sigma_2^2, \\sigma_3^2, \\ldots \\right \\} " }, { "math_id": 9, "text": "\\delta Q_N\\approx\\sqrt{\\mathrm{Var}(Q_N)}=V\\frac{\\sigma_N}{\\sqrt{N}}," }, { "math_id": 10, "text": "\\tfrac{1}{\\sqrt{N}}" }, { "math_id": 11, "text": "V" }, { "math_id": 12, "text": "H\\left(x,y\\right)=\\begin{cases}\n1 & \\text{if }x^{2}+y^{2}\\leq1\\\\\n0 & \\text{else}\n\\end{cases}" }, { "math_id": 13, "text": "I_\\pi = \\int_\\Omega H(x,y) dx dy = \\pi." }, { "math_id": 14, "text": "Q_N = 4 \\frac{1}{N}\\sum_{i=1}^N H(x_{i},y_{i})" }, { "math_id": 15, "text": "\\tfrac{Q_N-\\pi}{\\pi}" }, { "math_id": 16, "text": "f(x) = \\frac{1}{1+\\sinh(2x)\\log(x)^2}" }, { "math_id": 17, "text": "0.8<x<3" }, { "math_id": 18, "text": "E_a(f)" }, { "math_id": 19, "text": "E_b(f)" }, { "math_id": 20, "text": "\\sigma_a^2(f)" }, { "math_id": 21, "text": "\\sigma_b^2(f)" }, { "math_id": 22, "text": "E(f) = \\tfrac{1}{2} \\left (E_a(f) + E_b(f) \\right )" }, { "math_id": 23, "text": "\\mathrm{Var}(f) = \\frac{\\sigma_a^2(f)}{4 N_a} + \\frac{\\sigma_b^2(f)}{4 N_b}" }, { "math_id": 24, "text": "\\frac{N_a}{N_a + N_b} = \\frac{\\sigma_a}{\\sigma_a + \\sigma_b}" }, { "math_id": 25, "text": "\\overline{\\mathbf{x}}" }, { "math_id": 26, "text": "p(\\overline{\\mathbf{x}})" }, { "math_id": 27, "text": "p(\\overline{\\mathbf{x}}) : \\qquad \\overline{\\mathbf{x}}_1, \\cdots, \\overline{\\mathbf{x}}_N \\in V, " }, { "math_id": 28, "text": " Q_N \\equiv \\frac{1}{N} \\sum_{i=1}^N \\frac{f(\\overline{\\mathbf{x}}_i)}{p(\\overline{\\mathbf{x}}_i)}" }, { "math_id": 29, "text": "g(x_1, x_2, \\ldots) = g_1(x_1) g_2(x_2) \\ldots " } ]
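As a complement to the uniform-sampling examples above, the following Python sketch (not part of the original article) implements the importance-sampling estimator formula_28 for the Gaussian example discussed in the Importance sampling section: the integrand is a standard normal density integrated over [−1000, 1000], and samples are drawn from a normal distribution centered at 0 instead of uniformly; the proposal width of 1.2 is an arbitrary illustrative choice.

import numpy as np

rng = np.random.default_rng(0)
N = 100_000

def f(x):
    # Integrand: a standard normal density, to be integrated over [-1000, 1000].
    return np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)

sigma = 1.2  # proposal distribution chosen to roughly match the integrand

def p(x):
    # Density of the proposal N(0, sigma^2) from which samples are drawn.
    return np.exp(-x**2 / (2 * sigma**2)) / (sigma * np.sqrt(2 * np.pi))

# Naive uniform sampling on [-1000, 1000]: almost all samples contribute nearly zero.
x_uniform = rng.uniform(-1000, 1000, N)
q_uniform = 2000 * f(x_uniform).mean()

# Importance sampling: average f(x)/p(x) with x drawn from p.
x_importance = sigma * rng.standard_normal(N)
q_importance = (f(x_importance) / p(x_importance)).mean()

print(q_uniform, q_importance)  # both approximate 1, the second with far smaller variance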
https://en.wikipedia.org/wiki?curid=1112960
1113108
Factorial number system
Numeral system in combinatorics In combinatorics, the factorial number system, also called factoradic, is a mixed radix numeral system adapted to numbering permutations. It is also called factorial base, although factorials do not function as base, but as place value of digits. By converting a number less than "n"! to factorial representation, one obtains a sequence of "n" digits that can be converted to a permutation of "n" elements in a straightforward way, either using them as Lehmer code or as inversion table representation; in the former case the resulting map from integers to permutations of "n" elements lists them in lexicographical order. General mixed radix systems were studied by Georg Cantor. The term "factorial number system" is used by Knuth, while the French equivalent "numération factorielle" was first used in 1888. The term "factoradic", which is a portmanteau of factorial and mixed radix, appears to be of more recent date. Definition. The factorial number system is a mixed radix numeral system: the "i"-th digit from the right has base "i", which means that the digit must be strictly less than "i", and that (taking into account the bases of the less significant digits) its value is to be multiplied by ("i" − 1)! (its place value). From this it follows that the rightmost digit is always 0, the second can be 0 or 1, the third 0, 1 or 2, and so on (sequence in the OEIS). The factorial number system is sometimes defined with the 0! place omitted because it is always zero (sequence in the OEIS). In this article, a factorial number representation will be flagged by a subscript "!". In addition, some examples will have digits delimited by a colon. For example, 3:4:1:0:1:0! stands for 3×5! + 4×4! + 1×3! + 0×2! + 1×1! + 0×0! = ((((3×5 + 4)×4 + 1)×3 + 0)×2 + 1)×1 + 0 = 463₁₀. General properties of mixed radix number systems also apply to the factorial number system. For instance, one can convert a number into factorial representation producing digits from right to left, by repeatedly dividing the number by the radix (1, 2, 3, ...), taking the remainder as digits, and continuing with the integer quotient, until this quotient becomes 0. For example, 463₁₀ can be transformed into a factorial representation by these successive divisions:
463 ÷ 1 = 463, remainder 0
463 ÷ 2 = 231, remainder 1
231 ÷ 3 = 77, remainder 0
77 ÷ 4 = 19, remainder 1
19 ÷ 5 = 3, remainder 4
3 ÷ 6 = 0, remainder 3
The process terminates when the quotient reaches zero. Reading the remainders backward gives 3:4:1:0:1:0!. In principle, this system may be extended to represent rational numbers, though rather than the natural extension of place values (−1)!, (−2)!, etc., which are undefined, the symmetric choice of radix values "n" = 0, 1, 2, 3, 4, etc. after the point may be used instead. Again, the 0 and 1 places may be omitted as these are always zero. The corresponding place values are therefore 1/1, 1/1, 1/2, 1/6, 1/24, ..., 1/"n"!, etc. Examples. The following sortable table shows the 24 permutations of four elements with different inversion related vectors. The left and right inversion counts formula_0 and formula_1 (the latter often called Lehmer code) are particularly eligible to be interpreted as factorial numbers. formula_0 gives the permutation's position in reverse colexicographic order (the default order of this table), and the latter the position in lexicographic order (both counted from 0). Sorting by a column that has the omissible 0 on the right makes the factorial numbers in that column correspond to the index numbers in the immovable column on the left. 
The small columns are reflections of the columns next to them, and can be used to bring those in colexicographic order. The rightmost column shows the digit sums of the factorial numbers (OEIS:  in the table's default order). For another example, the greatest number that could be represented with six digits would be 543210! which equals 719 in decimal: 5×5! + 4×4! + 3×3! + 2×2! + 1×1! + 0×0!. Clearly the next factorial number representation after 5:4:3:2:1:0! is 1:0:0:0:0:0:0! which designates 6! = 720₁₀, the place value for the radix-7 digit. So the former number, and its summed out expression above, is equal to: 6! − 1. The factorial number system provides a unique representation for each natural number, with the given restriction on the "digits" used. No number can be represented in more than one way because the sum of consecutive factorials multiplied by their index is always the next factorial minus one: formula_2 This can be easily proved with mathematical induction, or simply by noticing that formula_3: subsequent terms cancel each other, leaving the first and last term (see Telescoping series). However, when using Arabic numerals to write the digits (and not including the subscripts as in the above examples), their simple concatenation becomes ambiguous for numbers having a "digit" greater than 9. The smallest such example is the number 10 × 10! = 36,288,000₁₀, which may be written A0000000000! = 10:0:0:0:0:0:0:0:0:0:0!, but not 100000000000! = 1:0:0:0:0:0:0:0:0:0:0:0! which denotes 11! = 39,916,800₁₀. Thus using letters A–Z to denote digits 10, 11, 12, ..., 35 as in other base-"N" makes the largest representable number 36 × 36! − 1. For arbitrarily greater numbers one has to choose a base for representing individual digits, say decimal, and provide a separating mark between them (for instance by subscripting each digit by its base, also given in decimal, like 2₄0₃1₂0₁; this number can also be written as 2:0:1:0!). In fact the factorial number system itself is not truly a numeral system in the sense of providing a representation for all natural numbers using only a finite alphabet of symbols. Permutations. There is a natural mapping between the integers 0, 1, ..., "n"! − 1 (or equivalently the numbers with "n" digits in factorial representation) and permutations of "n" elements in lexicographical order, when the integers are expressed in factoradic form. This mapping has been termed the Lehmer code (or inversion table). For example, with "n" = 3, such a mapping is
decimal  factoradic  permutation
0₁₀      0:0:0!      (0,1,2)
1₁₀      0:1:0!      (0,2,1)
2₁₀      1:0:0!      (1,0,2)
3₁₀      1:1:0!      (1,2,0)
4₁₀      2:0:0!      (2,0,1)
5₁₀      2:1:0!      (2,1,0)
In each case, calculating the permutation proceeds by using the leftmost factoradic digit (here, 0, 1, or 2) as the first permutation digit, then removing it from the list of choices (0, 1, and 2). Think of this new list of choices as zero indexed, and use each successive factoradic digit to choose from its remaining elements. If the second factoradic digit is "0" then the first element of the list is selected for the second permutation digit and is then removed from the list. Similarly, if the second factoradic digit is "1", the second is selected and then removed. The final factoradic digit is always "0", and since the list now contains only one element, it is selected as the last permutation digit. The process may become clearer with a longer example. Let's say we want the 2982nd permutation of the numbers 0 through 6. The number 2982 is 4:0:4:1:0:0:0! 
in factoradic, and that number picks out digits (4,0,6,2,1,3,5) in turn, via indexing a dwindling ordered set of digits and picking out each digit from the set at each turn:
factoradic:  4 : 0 : 4 : 1 : 0 : 0 : 0!
sets:        (0,1,2,3,4,5,6) ─► (0,1,2,3,5,6) ─► (1,2,3,5,6) ─► (1,2,3,5) ─► (1,3,5) ─► (3,5) ─► (5)
permutation: (4, 0, 6, 2, 1, 3, 5)
A natural index for the direct product of two permutation groups is the concatenation of two factoradic numbers, with two subscript "!"s.
concatenated decimal  factoradics     permutation pair
 0₁₀                  0:0:0!0:0:0!    ((0,1,2),(0,1,2))
 1₁₀                  0:0:0!0:1:0!    ((0,1,2),(0,2,1))
 5₁₀                  0:0:0!2:1:0!    ((0,1,2),(2,1,0))
 6₁₀                  0:1:0!0:0:0!    ((0,2,1),(0,1,2))
 7₁₀                  0:1:0!0:1:0!    ((0,2,1),(0,2,1))
22₁₀                  1:1:0!2:0:0!    ((1,2,0),(2,0,1))
34₁₀                  2:1:0!2:0:0!    ((2,1,0),(2,0,1))
35₁₀                  2:1:0!2:1:0!    ((2,1,0),(2,1,0))
Fractional values. Unlike single radix systems whose place values are "base""n" for both positive and negative integral "n", the factorial number base cannot be extended to negative place values as these would be (−1)!, (−2)! and so on, and these values are undefined (see factorial). One possible extension is therefore to use 1/0!, 1/1!, 1/2!, 1/3!, ..., 1/"n"! etc. instead, possibly omitting the 1/0! and 1/1! places which are always zero. With this method, all rational numbers have a terminating expansion, whose length in 'digits' is less than or equal to the denominator of the rational number represented. This may be proven by considering that there exists a factorial for any integer and therefore the denominator divides into its own factorial even if it does not divide into any smaller factorial. By necessity, therefore, the factoradic expansion of the reciprocal of a prime has a length of exactly that prime (less one if the 1/1! place is omitted). Other terms are given as the sequence A046021 on the OEIS. It can also be proven that the last 'digit' or term of the representation of a rational with prime denominator is equal to the difference between the numerator and the prime denominator. Similar to how checking divisibility by 4 in base 10 requires looking at only the last two digits, checking the divisibility of any number in factorial number system requires looking at only a finite number of digits. That is, it has a divisibility rule for each number. There is also a non-terminating equivalent for every rational number akin to the fact that in decimal 0.24999... = 0.25 = 1/4 and 0.999... = 1, etc., which can be created by reducing the final term by 1 and then filling in the remaining infinite number of terms with the highest value possible for the radix of that position. In the following selection of examples, spaces are used to separate the place values, otherwise represented in decimal. The rational numbers on the left are also in decimal: There are also a small number of constants that have patterned representations with this method:
[ { "math_id": 0, "text": "l" }, { "math_id": 1, "text": "r" }, { "math_id": 2, "text": " \\sum_{i=0}^n {i\\cdot i!} = {(n+1)!} - 1. " }, { "math_id": 3, "text": "\\forall i, i\\cdot i!=(i+1-1)\\cdot i!=(i+1)!-i!" }, { "math_id": 4, "text": "1/2 = 0.0\\ 1_!" }, { "math_id": 5, "text": "1/3 = 0.0\\ 0\\ 2_!" }, { "math_id": 6, "text": "2/3 = 0.0\\ 1\\ 1_!" }, { "math_id": 7, "text": "1/4 = 0.0\\ 0\\ 1\\ 2_!" }, { "math_id": 8, "text": "3/4 = 0.0\\ 1\\ 1\\ 2_!" }, { "math_id": 9, "text": "1/5 = 0.0\\ 0\\ 1\\ 0\\ 4_!" }, { "math_id": 10, "text": "1/6 = 0.0\\ 0\\ 1_!" }, { "math_id": 11, "text": "5/6 = 0.0\\ 1\\ 2_!" }, { "math_id": 12, "text": "1/7 = 0.0\\ 0\\ 0\\ 3\\ 2\\ 0\\ 6_!" }, { "math_id": 13, "text": "1/8 = 0.0\\ 0\\ 0\\ 3_!" }, { "math_id": 14, "text": "1/9 = 0.0\\ 0\\ 0\\ 2\\ 3\\ 2_!" }, { "math_id": 15, "text": "1/10 = 0.0\\ 0\\ 0\\ 2\\ 2_!" }, { "math_id": 16, "text": "1/11 \\ \\ = 0.0\\ 0\\ 0\\ 2\\ 0\\ 5\\ 3\\ 1\\ 4\\ 0\\ A_!" }, { "math_id": 17, "text": "2/11 \\ \\ = 0.0\\ 0\\ 1\\ 0\\ 1\\ 4\\ 6\\ 2\\ 8\\ 1\\ 9_!" }, { "math_id": 18, "text": "9/11 \\ \\ = 0.0\\ 1\\ 1\\ 3\\ 3\\ 1\\ 0\\ 5\\ 0\\ 8\\ 2_!" }, { "math_id": 19, "text": "10/11 = 0.0\\ 1\\ 2\\ 1\\ 4\\ 0\\ 3\\ 6\\ 4\\ 9 \\ 1_!" }, { "math_id": 20, "text": "1/12 \\ \\ = 0.0\\ 0\\ 0\\ 2_!" }, { "math_id": 21, "text": "5/12 \\ \\ = 0.0\\ 0\\ 2\\ 2_!" }, { "math_id": 22, "text": "7/12 \\ \\ = 0.0\\ 1\\ 0\\ 2_!" }, { "math_id": 23, "text": "11/12 = 0.0\\ 1\\ 2\\ 2_!" }, { "math_id": 24, "text": "1/15 = 0.0\\ 0\\ 0\\ 1\\ 3_!" }, { "math_id": 25, "text": "1/16 = 0.0\\ 0\\ 0\\ 1\\ 2\\ 3_!" }, { "math_id": 26, "text": "1/18 = 0.0\\ 0\\ 0\\ 1\\ 1\\ 4_!" }, { "math_id": 27, "text": "1/20 = 0.0\\ 0\\ 0\\ 1\\ 1_!" }, { "math_id": 28, "text": "1/24 = 0.0\\ 0\\ 0\\ 1_!" }, { "math_id": 29, "text": "1/30 = 0.0\\ 0\\ 0\\ 0\\ 4_!" }, { "math_id": 30, "text": "1/36 = 0.0\\ 0\\ 0\\ 0\\ 3\\ 2_!" }, { "math_id": 31, "text": "1/60 = 0.0\\ 0\\ 0\\ 0\\ 2_!" }, { "math_id": 32, "text": "1/72 = 0.0\\ 0\\ 0\\ 0\\ 1\\ 4_!" }, { "math_id": 33, "text": "1/120 = 0.0\\ 0\\ 0\\ 0\\ 1_!" }, { "math_id": 34, "text": "1/144 = 0.0\\ 0\\ 0\\ 0\\ 0\\ 5_!" }, { "math_id": 35, "text": "1/240 = 0.0\\ 0\\ 0\\ 0\\ 0\\ 3_!" }, { "math_id": 36, "text": "1/360 = 0.0\\ 0\\ 0\\ 0\\ 0\\ 2_!" }, { "math_id": 37, "text": "1/720 = 0.0\\ 0\\ 0\\ 0\\ 0\\ 1_!" }, { "math_id": 38, "text": "e = 1\\ 0.0\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1\\ 1..._!" }, { "math_id": 39, "text": "e^{-1} = 0.0\\ 0\\ 2\\ 0\\ 4\\ 0\\ 6\\ 0\\ 8\\ 0\\ A\\ 0\\ C\\ 0\\ E..._!" }, { "math_id": 40, "text": "\\sin(1) = 0.0\\ 1\\ 2\\ 0\\ 0\\ 5\\ 6\\ 0\\ 0\\ 9\\ A\\ 0\\ 0\\ D\\ E..._!" }, { "math_id": 41, "text": "\\cos(1) = 0.0\\ 1\\ 0\\ 0\\ 4\\ 5\\ 0\\ 0\\ 8\\ 9\\ 0\\ 0\\ C\\ D\\ 0..._!" }, { "math_id": 42, "text": "\\sinh(1) = 1.0\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0..._!" }, { "math_id": 43, "text": "\\cosh(1) = 1.0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1\\ 0\\ 1..._!" } ]
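The conversion and permutation-decoding procedures described above are easy to mechanize. The following Python sketch is not part of the article and the function names are illustrative; it converts an integer to its factoradic digits and then decodes those digits as a Lehmer code, reproducing the worked example in which 2982 maps to the permutation (4, 0, 6, 2, 1, 3, 5).

def to_factoradic(n, width):
    """Return the factoradic digits of n, most significant first, padded to `width` digits."""
    digits = []
    radix = 1
    while n > 0 or len(digits) < width:
        n, remainder = divmod(n, radix)   # repeated division by 1, 2, 3, ...
        digits.append(remainder)
        radix += 1
    return digits[::-1]

def factoradic_to_permutation(digits):
    """Decode factoradic digits as a Lehmer code: each digit indexes into the
    shrinking list of not-yet-used elements."""
    pool = list(range(len(digits)))
    return [pool.pop(d) for d in digits]

digits = to_factoradic(2982, 7)
print(digits)                              # [4, 0, 4, 1, 0, 0, 0]
print(factoradic_to_permutation(digits))   # [4, 0, 6, 2, 1, 3, 5]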
https://en.wikipedia.org/wiki?curid=1113108
1113115
Combinatorial number system
In mathematics, and in particular in combinatorics, the combinatorial number system of degree "k" (for some positive integer "k"), also referred to as combinadics, or the Macaulay representation of an integer, is a correspondence between natural numbers (taken to include 0) "N" and "k"-combinations. The combinations are represented as strictly decreasing sequences "c""k" &gt; ... &gt; "c"2 &gt; "c"1 ≥ 0 where each "ci" corresponds to the index of a chosen element in a given "k"-combination. Distinct numbers correspond to distinct "k"-combinations, and produce them in lexicographic order. The numbers less than formula_0 correspond to all "k"-combinations of {0, 1, ..., "n" − 1}. The correspondence does not depend on the size "n" of the set that the "k"-combinations are taken from, so it can be interpreted as a map from N to the "k"-combinations taken from N; in this view the correspondence is a bijection. The number "N" corresponding to ("c""k", ..., "c"2, "c"1) is given by formula_1. The fact that a unique sequence corresponds to any non-negative number "N" was first observed by D. H. Lehmer. Indeed, a greedy algorithm finds the "k"-combination corresponding to "N": take "c""k" maximal with formula_2, then take "c""k"−1 maximal with formula_3, and so forth. Finding the number "N", using the formula above, from the "k"-combination ("c""k", ..., "c"2, "c"1) is also known as "ranking", and the opposite operation (given by the greedy algorithm) as "unranking"; the operations are known by these names in most computer algebra systems, and in computational mathematics. The originally used term "combinatorial representation of integers" was shortened to "combinatorial number system" by Knuth, who also gives a much older reference; the term "combinadic" is introduced by James McCaffrey (without reference to previous terminology or work). Unlike the factorial number system, the combinatorial number system of degree "k" is not a mixed radix system: the part formula_4 of the number "N" represented by a "digit" "c""i" is not obtained from it by simply multiplying by a place value. The main application of the combinatorial number system is that it allows rapid computation of the "k"-combination that is at a given position in the lexicographic ordering, without having to explicitly list the "k"-combinations preceding it; this allows for instance random generation of "k"-combinations of a given set. Enumeration of "k"-combinations has many applications, among which are software testing, sampling, quality control, and the analysis of lottery games. Ordering combinations. A "k"-combination of a set "S" is a subset of "S" with "k" (distinct) elements. The main purpose of the combinatorial number system is to provide a representation, each by a single number, of all formula_0 possible "k"-combinations of a set "S" of "n" elements. Choosing, for any "n", {0, 1, ..., "n" − 1} as such a set, it can be arranged that the representation of a given "k"-combination "C" is independent of the value of "n" (although "n" must of course be sufficiently large); in other words considering "C" as a subset of a larger set by increasing "n" will not change the number that represents "C". Thus for the combinatorial number system one just considers "C" as a "k"-combination of the set N of all natural numbers, without explicitly mentioning "n". 
In order to ensure that the numbers representing the "k"-combinations of {0, 1, ..., "n" − 1} are less than those representing "k"-combinations not contained in {0, 1, ..., "n" − 1}, the "k"-combinations must be ordered in such a way that their largest elements are compared first. The most natural ordering that has this property is lexicographic ordering of the "decreasing" sequence of their elements. So comparing the 5-combinations "C" = {0,3,4,6,9} and "C"′ = {0,1,3,7,9}, one has that "C" comes before "C"′, since they have the same largest part 9, but the next largest part 6 of "C" is less than the next largest part 7 of "C"′; the sequences compared lexicographically are (9,6,4,3,0) and (9,7,3,1,0). Another way to describe this ordering is to view combinations as describing the "k" raised bits in the binary representation of a number, so that "C" = {"c"1, ..., "c""k"} describes the number formula_5 (this associates distinct numbers to "all" finite sets of natural numbers); then comparison of "k"-combinations can be done by comparing the associated binary numbers. In the example "C" and "C"′ correspond to numbers 1001011001₂ = 601₁₀ and 1010001011₂ = 651₁₀, which again shows that "C" comes before "C"′. This number is not, however, the one used to represent the "k"-combination in the combinatorial number system, since many binary numbers have a number of raised bits different from "k"; one wants to find the relative position of "C" in the ordered list of (only) "k"-combinations. Place of a combination in the ordering. The number associated in the combinatorial number system of degree "k" to a "k"-combination "C" is the number of "k"-combinations strictly less than "C" in the given ordering. This number can be computed from "C" = {"c""k", ..., "c"2, "c"1} with "c""k" > ... > "c"2 > "c"1 as follows. From the definition of the ordering it follows that for each "k"-combination "S" strictly less than "C", there is a unique index "i" such that "c""i" is absent from "S", while "c""k", ..., "c""i"+1 are present in "S", and no other value larger than "c""i" is. One can therefore group those "k"-combinations "S" according to the possible values 1, 2, ..., "k" of "i", and count each group separately. For a given value of "i" one must include "c""k", ..., "c""i"+1 in "S", and the remaining "i" elements of "S" must be chosen from the "c""i" non-negative integers strictly less than "c""i"; moreover any such choice will result in a "k"-combination "S" strictly less than "C". The number of possible choices is formula_4, which is therefore the number of combinations in group "i"; the total number of "k"-combinations strictly less than "C" then is formula_6 and this is the index (starting from 0) of "C" in the ordered list of "k"-combinations. Obviously there is for every "N" ∈ N exactly one "k"-combination at index "N" in the list (supposing "k" ≥ 1, since the list is then infinite), so the above argument proves that every "N" can be written in exactly one way as a sum of "k" binomial coefficients of the given form. Finding the "k"-combination for a given number. The given formula allows finding the place in the lexicographic ordering of a given "k"-combination immediately. The reverse process of finding the "k"-combination at a given place "N" requires somewhat more work, but is straightforward nonetheless. 
By the definition of the lexicographic ordering, two "k"-combinations that differ in their largest element "c""k" will be ordered according to the comparison of those largest elements, from which it follows that all combinations with a fixed value of their largest element are contiguous in the list. Moreover the smallest combination with "c""k" as the largest element is formula_7, and it has "c""i" = "i" − 1 for all "i" < "k" (for this combination all terms in the expression except formula_7 are zero). Therefore "c""k" is the largest number such that formula_2. If "k" > 1 the remaining elements of the "k"-combination form the "k" − 1-combination corresponding to the number formula_8 in the combinatorial number system of degree "k" − 1, and can therefore be found by continuing in the same way for formula_8 and "k" − 1 instead of "N" and "k". Example. Suppose one wants to determine the 5-combination at position 72. The successive values of formula_9 for "n" = 4, 5, 6, ... are 0, 1, 6, 21, 56, 126, 252, ..., of which the largest one not exceeding 72 is 56, for "n" = 8. Therefore "c"5 = 8, and the remaining elements form the 4-combination at position 72 − 56 = 16. The successive values of formula_10 for "n" = 3, 4, 5, ... are 0, 1, 5, 15, 35, ..., of which the largest one not exceeding 16 is 15, for "n" = 6, so "c"4 = 6. Continuing similarly to search for a 3-combination at position 16 − 15 = 1 one finds "c"3 = 3, which uses up the final unit; this establishes formula_11, and the remaining values "c""i" will be the maximal ones with formula_12, namely "c""i" = "i" − 1. Thus we have found the 5-combination {8, 6, 3, 1, 0}. National Lottery example. For each of the formula_13 lottery combinations "c"1 < "c"2 < "c"3 < "c"4 < "c"5 < "c"6 , there is a list number "N" between 0 and formula_14 which can be found by adding formula_15
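The ranking and unranking procedures described above are short enough to state as code. The following is a minimal Python sketch (the function names rank and unrank are ours); it reproduces the worked example, mapping position 72 to the 5-combination {8, 6, 3, 1, 0} and back.

```python
from math import comb

def rank(combination):
    """Index N of a k-combination in the combinatorial number system of degree k."""
    # Sort increasingly so the i-th smallest element (1-based) contributes C(c, i).
    return sum(comb(c, i) for i, c in enumerate(sorted(combination), start=1))

def unrank(N, k):
    """Greedy algorithm: the k-combination at index N, listed as c_k > ... > c_1."""
    combination = []
    for i in range(k, 0, -1):
        c = i - 1                      # smallest admissible value, since C(i-1, i) = 0
        while comb(c + 1, i) <= N:     # take c maximal with C(c, i) <= N
            c += 1
        N -= comb(c, i)
        combination.append(c)
    return combination

print(unrank(72, 5))            # [8, 6, 3, 1, 0]
print(rank([8, 6, 3, 1, 0]))    # 72
```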
[ { "math_id": 0, "text": "\\tbinom nk" }, { "math_id": 1, "text": "N=\\binom{c_k}k+\\cdots+\\binom{c_2}2+\\binom{c_1}1" }, { "math_id": 2, "text": "\\tbinom{c_k}k\\leq N" }, { "math_id": 3, "text": "\\tbinom{c_{k-1}}{k-1}\\leq N - \\tbinom{c_k}k" }, { "math_id": 4, "text": "\\tbinom{c_i}i" }, { "math_id": 5, "text": "2^{c_1}+2^{c_2}+\\cdots+2^{c_k}" }, { "math_id": 6, "text": "\\binom{c_1}1+\\binom{c_2}2+\\cdots+\\binom{c_k}k," }, { "math_id": 7, "text": "\\tbinom{c_k}k" }, { "math_id": 8, "text": "N-\\tbinom{c_k}k" }, { "math_id": 9, "text": "\\tbinom n5" }, { "math_id": 10, "text": "\\tbinom n4" }, { "math_id": 11, "text": "72=\\tbinom85+\\tbinom64+\\tbinom33" }, { "math_id": 12, "text": "\\tbinom{c_i}i=0" }, { "math_id": 13, "text": "\\binom{49}6" }, { "math_id": 14, "text": "\\binom{49}6 - 1" }, { "math_id": 15, "text": " \\binom{49-c_1} 6 + \\binom{49-c_2} 5 + \\binom{49-c_3} 4 + \\binom{49-c_4} 3 + \\binom{49-c_5} 2 + \\binom{49-c_6} 1. " } ]
https://en.wikipedia.org/wiki?curid=1113115
11131291
Extra element theorem
The Extra Element Theorem (EET) is an analytic technique developed by R. D. Middlebrook for simplifying the process of deriving driving point and transfer functions for linear electronic circuits. Much like Thévenin's theorem, the extra element theorem breaks down one complicated problem into several simpler ones. Driving point and transfer functions can generally be found using Kirchhoff's circuit laws. However, several complicated equations may result that offer little insight into the circuit's behavior. Using the extra element theorem, a circuit element (such as a resistor) can be removed from a circuit, and the desired driving point or transfer function is found. By removing the element that most complicate the circuit (such as an element that creates feedback), the desired function can be easier to obtain. Next, two correctional factors must be found and combined with the previously derived function to find the exact expression. The general form of the extra element theorem is called the N-extra element theorem and allows multiple circuit elements to be removed at once. General formulation. The (single) extra element theorem expresses any transfer function as a product of the transfer function with that element removed and a correction factor. The correction factor term consists of the impedance of the extra element and two driving point impedances seen by the extra element: The double null injection driving point impedance and the single injection driving point impedance. Because an extra element can be removed in general by either short-circuiting or open-circuiting the element, there are two equivalent forms of the EET: formula_0 or, formula_1 Where the Laplace-domain transfer functions and impedances in the above expressions are defined as follows: "H"("s") is the transfer function with the extra element present. "H"∞("s") is the transfer function with the extra element open-circuited. "H"0("s") is the transfer function with the extra element short-circuited. "Z"("s") is the impedance of the extra element. "Zd"("s") is the single-injection driving point impedance "seen" by the extra element. "Zn"("s") is the double-null-injection driving point impedance "seen" by the extra element. The extra element theorem incidentally proves that any electric circuit transfer function can be expressed as no more than a bilinear function of any particular circuit element. Driving point impedances. Single Injection Driving Point Impedance. "Zd"("s") is found by making the input to the system's transfer function zero (short circuit a voltage source or open circuit a current source) and determining the impedance across the terminals to which the extra element will be connected with the extra element absent. This impedance is same as the Thévenin's equivalent impedance. Double Null Injection Driving Point Impedance. "Zn"("s") is found by replacing the extra element with a second test signal source (either a current source or voltage source as appropriate). Then, "Zn"("s") is defined as the ratio of voltage across the terminals of this second test source to the current leaving its positive terminal when the output of the system's transfer function is nulled for any value of the primary input to the system's transfer function. In practice, "Zn"("s") can be found from working backward from the facts that the output of the transfer function is made zero and that the primary input to the transfer function is unknown. 
Conventional circuit analysis techniques are then used to express both the voltage across the extra element test source's terminals, "vn"("s"), and the current leaving the extra element test source's positive terminal, "in"("s"), and formula_2 is calculated. Although the computation of "Zn"("s") is an unfamiliar process for many engineers, its expressions are often much simpler than those for "Zd"("s") because the nulling of the transfer function's output often leads to other voltages/currents in the circuit being zero, which may allow exclusion of certain components from analysis. Special case with transfer function as a self-impedance. As a special case, the EET can be used to find the input impedance of a network with the addition of an element designated as "extra". In this case, "Zd" is the same as the impedance seen by the extra element with the input test current source signal made zero, or equivalently with the input open-circuited. Likewise, since the transfer function output signal can be considered to be the voltage at the input terminals, "Zn" is found when the input voltage is zero, i.e. the input terminals are short-circuited. Thus, for this particular application, the EET can be written as: formula_3 where formula_4 is the impedance of the extra element, formula_5 is the input impedance with the extra element removed (open-circuited), formula_6 is the impedance seen by the extra element with the input short-circuited, and formula_7 is the impedance seen by the extra element with the input open-circuited. Computing these three impedances may seem like extra effort, but they are often easier to compute than the overall input impedance. Example. Consider the problem of finding formula_8 for the circuit in Figure 1 using the EET (note all component values are unity for simplicity). If the capacitor (gray shading) is denoted the extra element then formula_9 Removing this capacitor from the circuit, formula_10 Calculating the impedance seen by the capacitor with the input shorted, formula_11 Calculating the impedance seen by the capacitor with the input open, formula_12 Therefore, using the EET, formula_13 This problem was solved by calculating three simple driving point impedances by inspection. Feedback amplifiers. The EET is also useful for analyzing single and multi-loop feedback amplifiers. In this case, the EET can take the form of the asymptotic gain model.
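The example can also be checked symbolically. Below is a minimal SymPy sketch (the helper par for the parallel combination of two impedances is ours); it rebuilds the three driving point impedances quoted above and confirms the final result formula_13.

```python
import sympy as sp

s = sp.symbols('s')

def par(a, b):
    """Impedance of two elements in parallel."""
    a, b = sp.sympify(a), sp.sympify(b)
    return a * b / (a + b)

Z       = 1 / s                           # extra element: the unit capacitor
Zin_inf = par(2, 1) + 1                   # input impedance, capacitor removed: 5/3
Ze_0    = par(1, 1 + par(1, 1))           # seen by the capacitor, input shorted: 3/5
Ze_inf  = par(2, 1) + 1                   # seen by the capacitor, input open: 5/3

Zin = Zin_inf * (1 + Ze_0 / Z) / (1 + Ze_inf / Z)
print(sp.simplify(Zin))                   # (3*s + 5)/(5*s + 3)
```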
[ { "math_id": 0, "text": " H(s) = H_{\\infty}(s) \\frac{1 + \\frac{Z_n(s)}{Z(s)}}{1 + \\frac{Z_d(s)}{Z(s)}} " }, { "math_id": 1, "text": " H(s) = H_0(s)\\frac{1 + \\frac{Z(s)}{Z_n(s)}}{1 + \\frac{Z(s)}{Z_d(s)}} ." }, { "math_id": 2, "text": "Z_n(s) = v_n(s) / i_n(s)" }, { "math_id": 3, "text": "Z_\\text{in} = Z^{\\infty}_\\text{in} \\cdot \\frac{1+\\frac{Z^0_{e}}{Z}}{1+\\frac{Z^{\\infty}_{e}}{Z}}" }, { "math_id": 4, "text": "Z " }, { "math_id": 5, "text": "Z^{\\infty}_\\text{in}" }, { "math_id": 6, "text": "Z^0_{e}" }, { "math_id": 7, "text": "Z^{\\infty}_{e}" }, { "math_id": 8, "text": "Z_{in}" }, { "math_id": 9, "text": "Z = \\frac{1}{s}." }, { "math_id": 10, "text": "Z^{\\infty}_{in} = 2\\|1 +1 = \\frac{5}{3}." }, { "math_id": 11, "text": "Z^0_{e} = 1\\|(1+1\\|1) = \\frac{3}{5}." }, { "math_id": 12, "text": "Z^{\\infty}_{e} = 2\\|1+1 = \\frac{5}{3}." }, { "math_id": 13, "text": "Z_{in} = \\frac{5}{3} \\cdot \\frac{1+\\frac{3}{5}s}{1+\\frac{5}{3}s} = \\frac{5+3s}{3+5s}." } ]
https://en.wikipedia.org/wiki?curid=11131291
1113185
Wolstenholme's theorem
Result in number theory In mathematics, Wolstenholme's theorem states that for a prime number formula_0, the congruence formula_1 holds, where the parentheses denote a binomial coefficient. For example, with "p" = 7, this says that 1716 is one more than a multiple of 343. The theorem was first proved by Joseph Wolstenholme in 1862. In 1819, Charles Babbage showed the same congruence modulo "p"^2, which holds for formula_2. An equivalent formulation is the congruence formula_3 for formula_0, which is due to Wilhelm Ljunggren (and, in the special case formula_4, to J. W. L. Glaisher) and is inspired by Lucas' theorem. No known composite numbers satisfy Wolstenholme's theorem and it is conjectured that there are none (see below). A prime that satisfies the congruence modulo "p"^4 is called a Wolstenholme prime (see below). As Wolstenholme himself established, his theorem can also be expressed as a pair of congruences for (generalized) harmonic numbers: formula_5 formula_6 For example, with "p"=7, the first of these says that the numerator of 49/20 is a multiple of 49, while the second says the numerator of 5369/3600 is a multiple of 7. Wolstenholme primes. A prime "p" is called a Wolstenholme prime iff the following condition holds: formula_7 If "p" is a Wolstenholme prime, then Glaisher's theorem holds modulo "p"^4. The only known Wolstenholme primes so far are 16843 and 2124679 (sequence in the OEIS); any other Wolstenholme prime must be greater than 10^9. This result is consistent with the heuristic argument that the residue modulo "p"^4 is a pseudo-random multiple of "p"^3. This heuristic predicts that the number of Wolstenholme primes between "K" and "N" is roughly "ln ln N − ln ln K". The Wolstenholme condition has been checked up to 10^9, and the heuristic says that there should be roughly one Wolstenholme prime between 10^9 and 10^24. A similar heuristic predicts that there are no "doubly Wolstenholme" primes, for which the congruence would hold modulo "p"^5. A proof of the theorem. There is more than one way to prove Wolstenholme's theorem. Here is a proof that directly establishes Glaisher's version using both combinatorics and algebra. For the moment let "p" be any prime, and let "a" and "b" be any non-negative integers. Then a set "A" with "ap" elements can be divided into "a" rings of length "p", and the rings can be rotated separately. Thus, the "a"-fold direct sum of the cyclic group of order "p" acts on the set "A", and by extension it acts on the set of subsets of size "bp". Every orbit of this group action has "p"^"k" elements, where "k" is the number of incomplete rings, i.e., if there are "k" rings that only partly intersect a subset "B" in the orbit. There are formula_8 orbits of size 1 and there are no orbits of size "p". Thus we first obtain Babbage's theorem formula_9 Examining the orbits of size "p"^2, we also obtain formula_10 Among other consequences, this equation tells us that the case "a=2" and "b=1" implies the general case of the second form of Wolstenholme's theorem. Switching from combinatorics to algebra, both sides of this congruence are polynomials in "a" for each fixed value of "b". The congruence therefore holds when "a" is any integer, positive or negative, provided that "b" is a fixed positive integer. In particular, if "a=-1" and "b=1", the congruence becomes formula_11 This congruence becomes an equation for formula_12 using the relation formula_13 When "p" is odd, the relation is formula_14 When "p"≠3, we can divide both sides by 3 to complete the argument. 
A similar derivation modulo "p"^4 establishes that formula_15 for all positive "a" and "b" if and only if it holds when "a=2" and "b=1", i.e., if and only if "p" is a Wolstenholme prime. The converse as a conjecture. Write (1) for the congruence formula_1 with the prime "p" replaced by an arbitrary positive integer "n" and the modulus "p"^3 replaced by "n"^"k". It is conjectured that if (1) holds when "k" = 3, then "n" is prime. The conjecture can be understood by considering "k" = 1 and 2 as well as 3. When "k" = 1, Babbage's theorem implies that it holds for "n" = "p"^2 for "p" an odd prime, while Wolstenholme's theorem implies that it holds for "n" = "p"^3 for "p" > 3, and it holds for "n" = "p"^4 if "p" is a Wolstenholme prime. When "k" = 2, it holds for "n" = "p"^2 if "p" is a Wolstenholme prime. The three numbers 4 = 2^2, 8 = 2^3, and 27 = 3^3 do not satisfy (1) with "k" = 1, but all other prime squares and prime cubes do satisfy (1) with "k" = 1. Only 5 other composite values of "n" (neither prime squares nor prime cubes) are known to satisfy (1) with "k" = 1; they are called Wolstenholme pseudoprimes, and they are 27173, 2001341, 16024189487, 80478114820849201, 20378551049298456998947681, ... (sequence in the OEIS) The first three are not prime powers (sequence in the OEIS); the last two are 16843^4 and 2124679^4, where 16843 and 2124679 are Wolstenholme primes (sequence in the OEIS). Besides, with the exception of 16843^2 and 2124679^2, no composites are known to satisfy (1) with "k" = 2, much less "k" = 3. Thus the conjecture is considered likely because Wolstenholme's congruence seems over-constrained and artificial for composite numbers. Moreover, if the congruence does hold for any particular "n" other than a prime or prime power, and any particular "k", it does not imply that formula_16 The number of Wolstenholme pseudoprimes up to formula_17 is formula_18, so the sum of reciprocals of those numbers converges. The constant formula_19 follows from the existence of only three Wolstenholme pseudoprimes up to formula_20. The number of Wolstenholme pseudoprimes up to formula_20 should be at least 7 if the sum of its reciprocals diverged, and since this is not satisfied because there are only 3 of them in this range, the counting function of these pseudoprimes is at most formula_21 for some efficiently computable constant formula_22; we can take formula_22 as formula_19. The constant in the big O notation is also effectively computable in formula_18. Generalizations. Leudesdorf has proved that for a positive integer "n" coprime to 6, the following congruence holds: formula_23
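Both the theorem and the definition of a Wolstenholme prime are easy to check numerically. A minimal Python sketch (the search bound of 200 is arbitrary; testing "p" = 16843 takes noticeably longer but still finishes quickly):

```python
from math import comb
from sympy import primerange

# Wolstenholme's theorem: C(2p-1, p-1) ≡ 1 (mod p^3) for every prime p >= 5.
for p in primerange(5, 200):
    assert comb(2 * p - 1, p - 1) % p**3 == 1

# A Wolstenholme prime satisfies the stronger congruence modulo p^4.
p = 16843
print(comb(2 * p - 1, p - 1) % p**4 == 1)   # True: 16843 is a Wolstenholme prime
print(comb(2 * 7 - 1, 7 - 1) % 7**4)        # 1716, not 1, so 7 is not one
```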
[ { "math_id": 0, "text": "p \\geq 5" }, { "math_id": 1, "text": "{2p-1 \\choose p-1} \\equiv 1 \\pmod{p^3}" }, { "math_id": 2, "text": "p \\geq 3" }, { "math_id": 3, "text": "{ap \\choose bp} \\equiv {a \\choose b} \\pmod{p^3}" }, { "math_id": 4, "text": "b = 1" }, { "math_id": 5, "text": "1+{1 \\over 2}+{1 \\over 3}+\\dots+{1 \\over p-1} \\equiv 0 \\pmod{p^2} \\mbox{, and}" }, { "math_id": 6, "text": "1+{1 \\over 2^2}+{1 \\over 3^2}+\\dots+{1 \\over (p-1)^2} \\equiv 0 \\pmod p. " }, { "math_id": 7, "text": "{{2p-1}\\choose{p-1}} \\equiv 1 \\pmod{p^4}." }, { "math_id": 8, "text": "\\textstyle {a \\choose b}" }, { "math_id": 9, "text": "{ap \\choose bp} \\equiv {a \\choose b} \\pmod{p^2}." }, { "math_id": 10, "text": "{ap \\choose bp} \\equiv {a \\choose b} + {a \\choose 2}\\left({2p \\choose p} - 2\\right){a -2 \\choose b-1} \\pmod{p^3}." }, { "math_id": 11, "text": "{-p \\choose p} \\equiv {-1 \\choose 1} + {-1 \\choose 2}\\left({2p \\choose p} - 2\\right) \\pmod{p^3}." }, { "math_id": 12, "text": "\\textstyle {2p \\choose p}" }, { "math_id": 13, "text": "{-p \\choose p} = \\frac{(-1)^p}2{2p \\choose p}." }, { "math_id": 14, "text": "3{2p \\choose p} \\equiv 6 \\pmod{p^3}." }, { "math_id": 15, "text": "{ap \\choose bp} \\equiv {a \\choose b} \\pmod{p^4}" }, { "math_id": 16, "text": "{an \\choose bn} \\equiv {a \\choose b} \\pmod{n^k}." }, { "math_id": 17, "text": "x" }, { "math_id": 18, "text": "O(x^{1/2} \\log(\\log(x))^{499712})" }, { "math_id": 19, "text": "499712" }, { "math_id": 20, "text": "10^{12}" }, { "math_id": 21, "text": "O(x^{1/2} \\log(\\log(x))^C)" }, { "math_id": 22, "text": "C" }, { "math_id": 23, "text": " \\sum_{i=1\\atop (i,n)=1}^{n-1} \\frac{1}{i} \\equiv 0\\pmod{n^2}." } ]
https://en.wikipedia.org/wiki?curid=1113185
11135761
Financial result
The financial result is the difference between earnings before interest and taxes and earnings before taxes. It is determined by the gain or loss that results from financial affairs. Interpretation. For most industrial companies the financial result is negative, as the interest charged on borrowing generally exceeds income from investments (dividends). If a company records a positive financial result over several periods, then one has to ask how much capital is invested at which interest rate, and whether this capital would not bear a greater yield if it were invested in the company's growth. In the case of consistently positive financial results a company also has to deal with increasing demands for special distributions to its shareholders. Calculation formula. In mathematical terms, the financial result is defined as follows: formula_0 Advantages. The advantages of the use of financial result as a key performance indicator Disadvantages. The disadvantages of the use of financial result as a key performance indicator
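The calculation formula above translates directly into a short helper. A minimal Python sketch (the field names and the sign convention, write-ups positive and write-downs negative, are ours):

```python
def financial_result(interest_income, interest_expense,
                     writeups_financial_assets=0.0,
                     writeups_marketable_securities=0.0,
                     other_financial_income_and_expenses=0.0):
    """Sum of the financing components listed in the formula above."""
    return (interest_income
            - interest_expense
            + writeups_financial_assets
            + writeups_marketable_securities
            + other_financial_income_and_expenses)

# Typical industrial company: interest paid exceeds interest earned,
# so the financial result is negative.
print(financial_result(interest_income=1200, interest_expense=4500,
                       other_financial_income_and_expenses=300))   # -3000.0
```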
[ { "math_id": 0, "text": " \\textstyle{\\begin{align}\\mbox{Financial result } & = \\mbox{ Interest income} \\\\ & - \\mbox{ Interest expense} \\\\ & \\pm \\mbox{ Write-downs/write-ups for financial assets} \\\\ & \\pm \\mbox{ Write-downs/write-ups for marketable securities} \\\\ & + \\mbox{ Other financial income and expenses}\\end{align}}" } ]
https://en.wikipedia.org/wiki?curid=11135761
1113637
Hurwitz's automorphisms theorem
Bounds the order of the group of automorphisms of a compact Riemann surface of genus g &gt; 1 In mathematics, Hurwitz's automorphisms theorem bounds the order of the group of automorphisms, via orientation-preserving conformal mappings, of a compact Riemann surface of genus "g" &gt; 1, stating that the number of such automorphisms cannot exceed 84("g" − 1). A group for which the maximum is achieved is called a Hurwitz group, and the corresponding Riemann surface a Hurwitz surface. Because compact Riemann surfaces are synonymous with non-singular complex projective algebraic curves, a Hurwitz surface can also be called a Hurwitz curve. The theorem is named after Adolf Hurwitz, who proved it in . Hurwitz's bound also holds for algebraic curves over a field of characteristic 0, and over fields of positive characteristic "p"&gt;0 for groups whose order is coprime to "p", but can fail over fields of positive characteristic "p"&gt;0 when "p" divides the group order. For example, the double cover of the projective line "y"2 = "xp" −"x" branched at all points defined over the prime field has genus "g"=("p"−1)/2 but is acted on by the group PGL2("p") of order "p"3−"p". Interpretation in terms of hyperbolicity. One of the fundamental themes in differential geometry is a trichotomy between the Riemannian manifolds of positive, zero, and negative curvature "K". It manifests itself in many diverse situations and on several levels. In the context of compact Riemann surfaces "X", via the Riemann uniformization theorem, this can be seen as a distinction between the surfaces of different topologies: While in the first two cases the surface "X" admits infinitely many conformal automorphisms (in fact, the conformal automorphism group is a complex Lie group of dimension three for a sphere and of dimension one for a torus), a hyperbolic Riemann surface only admits a discrete set of automorphisms. Hurwitz's theorem claims that in fact more is true: it provides a uniform bound on the order of the automorphism group as a function of the genus and characterizes those Riemann surfaces for which the bound is sharp. Statement and proof. Theorem: Let formula_0 be a smooth connected Riemann surface of genus formula_1. Then its automorphism group formula_2 has size at most formula_3. "Proof:" Assume for now that formula_4 is finite (this will be proved at the end). Now call the righthand side formula_20 and since formula_21 we must have formula_22. Rearranging the equation we find: In conclusion, formula_50. To show that formula_6 is finite, note that formula_6 acts on the cohomology formula_51 preserving the Hodge decomposition and the lattice formula_52. This is a contradiction, and so formula_61 is infinite. Since formula_61 is a closed complex sub variety of positive dimension and formula_0 is a smooth connected curve (i.e. formula_63), we must have formula_64. Thus formula_65 is the identity, and we conclude that formula_59 is injective and formula_66 is finite. Q.E.D. Corollary of the proof: A Riemann surface formula_0 of genus formula_1 has formula_3 automorphisms if and only if formula_0 is a branched cover formula_67 with three ramification points, of indices "2","3" and "7". The idea of another proof and construction of the Hurwitz surfaces. By the uniformization theorem, any hyperbolic surface "X" – i.e., the Gaussian curvature of "X" is equal to negative one at every point – is covered by the hyperbolic plane. 
The conformal mappings of the surface correspond to orientation-preserving automorphisms of the hyperbolic plane. By the Gauss–Bonnet theorem, the area of the surface is A("X") = − 2π χ("X") = 4π("g" − 1). In order to make the automorphism group "G" of "X" as large as possible, we want the area of its fundamental domain "D" for this action to be as small as possible. If the fundamental domain is a triangle with the vertex angles π/p, π/q and π/r, defining a tiling of the hyperbolic plane, then "p", "q", and "r" are integers greater than one, and the area is A("D") = π(1 − 1/"p" − 1/"q" − 1/"r"). Thus we are asking for integers which make the expression 1 − 1/"p" − 1/"q" − 1/"r" strictly positive and as small as possible. This minimal value is 1/42, and 1 − 1/2 − 1/3 − 1/7 = 1/42 gives a unique triple of such integers. This would indicate that the order |"G"| of the automorphism group is bounded by A("X")/A("D")  ≤  168("g" − 1). However, a more delicate reasoning shows that this is an overestimate by the factor of two, because the group "G" can contain orientation-reversing transformations. For the orientation-preserving conformal automorphisms the bound is 84("g" − 1). Construction. To obtain an example of a Hurwitz group, let us start with a (2,3,7)-tiling of the hyperbolic plane. Its full symmetry group is the full (2,3,7) triangle group generated by the reflections across the sides of a single fundamental triangle with the angles π/2, π/3 and π/7. Since a reflection flips the triangle and changes the orientation, we can join the triangles in pairs and obtain an orientation-preserving tiling polygon. A Hurwitz surface is obtained by 'closing up' a part of this infinite tiling of the hyperbolic plane to a compact Riemann surface of genus "g". This will necessarily involve exactly 84("g" − 1) double triangle tiles. The following two regular tilings have the desired symmetry group; the rotational group corresponds to rotation about an edge, a vertex, and a face, while the full symmetry group would also include a reflection. The polygons in the tiling are not fundamental domains – the tiling by (2,3,7) triangles refines both of these and is not regular. Wythoff constructions yields further uniform tilings, yielding eight uniform tilings, including the two regular ones given here. These all descend to Hurwitz surfaces, yielding tilings of the surfaces (triangulation, tiling by heptagons, etc.). From the arguments above it can be inferred that a Hurwitz group "G" is characterized by the property that it is a finite quotient of the group with two generators "a" and "b" and three relations formula_68 thus "G" is a finite group generated by two elements of orders two and three, whose product is of order seven. More precisely, any Hurwitz surface, that is, a hyperbolic surface that realizes the maximum order of the automorphism group for the surfaces of a given genus, can be obtained by the construction given. This is the last part of the theorem of Hurwitz. Examples of Hurwitz groups and surfaces. The smallest Hurwitz group is the projective special linear group PSL(2,7), of order 168, and the corresponding curve is the Klein quartic curve. This group is also isomorphic to PSL(3,2). Next is the Macbeath curve, with automorphism group PSL(2,8) of order 504. Many more finite simple groups are Hurwitz groups; for instance all but 64 of the alternating groups are Hurwitz groups, the largest non-Hurwitz example being of degree 167. The smallest alternating group that is a Hurwitz group is A15. 
Most projective special linear groups of large rank are Hurwitz groups. For lower ranks, fewer such groups are Hurwitz. For "n""p" the order of "p" modulo 7, one has that PSL(2,"q") is Hurwitz if and only if either "q" = 7 or "q" = "p"^("n""p"). Indeed, PSL(3,"q") is Hurwitz if and only if "q" = 2, PSL(4,"q") is never Hurwitz, and PSL(5,"q") is Hurwitz if and only if "q" = 7^4 or "q" = "p"^("n""p"). Similarly, many groups of Lie type are Hurwitz. The finite classical groups of large rank are Hurwitz. The exceptional Lie groups of type G2 and the Ree groups of type ²G2 are nearly always Hurwitz. Other families of exceptional and twisted Lie groups of low rank have also been shown to be Hurwitz. There are 12 sporadic groups that can be generated as Hurwitz groups: the Janko groups J1, J2 and J4, the Fischer groups Fi22 and Fi'24, the Rudvalis group, the Held group, the Thompson group, the Harada–Norton group, the third Conway group Co3, the Lyons group, and the Monster. Automorphism groups in low genus. The largest value that |Aut("X")| can attain for a Riemann surface "X" of genus "g" is shown below, for 2 ≤ "g" ≤ 10, along with a surface "X"0 with |Aut("X"0)| maximal. In this range, a Hurwitz curve exists only in genus "g" = 3 and "g" = 7. Generalizations. The concept of a Hurwitz surface can be generalized in several ways to a definition that has examples in all but a few genera. Perhaps the most natural is a "maximally symmetric" surface: one that cannot be continuously modified through equally symmetric surfaces to a surface whose symmetry properly contains that of the original surface. This is possible for all orientable compact genera (see above section "Automorphism groups in low genus").
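The arithmetic fact used in the construction above, that the triple (2,3,7) uniquely minimizes 1 − 1/"p" − 1/"q" − 1/"r" among positive values, is easy to confirm by brute force. A minimal Python sketch (the search bound of 50 is ours; increasing any entry only increases the expression, so small triples suffice):

```python
from fractions import Fraction

best = None
for p in range(2, 51):
    for q in range(p, 51):
        for r in range(q, 51):
            area = 1 - Fraction(1, p) - Fraction(1, q) - Fraction(1, r)
            if area > 0 and (best is None or area < best[0]):
                best = (area, (p, q, r))

print(best)                                  # (Fraction(1, 42), (2, 3, 7))
print([84 * (g - 1) for g in range(2, 8)])   # Hurwitz bounds 84(g-1) for g = 2..7
```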
[ { "math_id": 0, "text": "X" }, { "math_id": 1, "text": "g \\ge 2" }, { "math_id": 2, "text": "\\operatorname{Aut}(X)" }, { "math_id": 3, "text": "84(g-1)" }, { "math_id": 4, "text": "G = \\operatorname{Aut}(X)" }, { "math_id": 5, "text": "X \\to X/G" }, { "math_id": 6, "text": "G" }, { "math_id": 7, "text": "z \\to z^n" }, { "math_id": 8, "text": "X/G" }, { "math_id": 9, "text": "g_0" }, { "math_id": 10, "text": " 2g-2 \\ = \\ |G| \\cdot \\left( 2g_0-2 + \\sum_{i = 1}^k \\left(1-\\frac{1}{e_i}\\right)\\right) " }, { "math_id": 11, "text": "k" }, { "math_id": 12, "text": "p_i \\in X/G" }, { "math_id": 13, "text": " X \\to X/G" }, { "math_id": 14, "text": "e_i" }, { "math_id": 15, "text": "p_i" }, { "math_id": 16, "text": "e_i f_i = \\deg(X/\\, X/G)" }, { "math_id": 17, "text": "f_i" }, { "math_id": 18, "text": "\\deg(X/\\, X/G) = |G|" }, { "math_id": 19, "text": "e_i \\ge 2" }, { "math_id": 20, "text": "|G| R" }, { "math_id": 21, "text": "g \\ge 2 " }, { "math_id": 22, "text": "R>0" }, { "math_id": 23, "text": "g_0 \\ge 2" }, { "math_id": 24, "text": "R \\ge 2" }, { "math_id": 25, "text": "|G| \\le (g-1) " }, { "math_id": 26, "text": "g_0 = 1 " }, { "math_id": 27, "text": " k \\ge 1" }, { "math_id": 28, "text": "R\\ge 0 + 1 - 1/2 = 1/2" }, { "math_id": 29, "text": "|G| \\le 4(g-1)" }, { "math_id": 30, "text": "g_0 = 0" }, { "math_id": 31, "text": "k \\ge 3" }, { "math_id": 32, "text": "k \\ge 5" }, { "math_id": 33, "text": "R \\ge -2 + k(1 - 1/2) \\ge 1/2" }, { "math_id": 34, "text": "|G| \\le 4(g-1)" }, { "math_id": 35, "text": "k=4" }, { "math_id": 36, "text": " R \\ge -2 + 4 - 1/2 - 1/2 - 1/2 - 1/3 = 1/6" }, { "math_id": 37, "text": "|G| \\le 12(g-1)" }, { "math_id": 38, "text": "k=3" }, { "math_id": 39, "text": "e_1 = p,\\, e_2 = q, \\, e_3 = r" }, { "math_id": 40, "text": "2 \\le p\\le q\\ \\le r" }, { "math_id": 41, "text": " p \\ge 3 " }, { "math_id": 42, "text": " R \\ge -2 + 3 - 1/3 - 1/3 - 1/4 = 1/12" }, { "math_id": 43, "text": "|G| \\le 24(g-1)" }, { "math_id": 44, "text": " p = 2 " }, { "math_id": 45, "text": "q \\ge 4 " }, { "math_id": 46, "text": "R \\ge -2 + 3 - 1/2 - 1/4 - 1/5 = 1/20" }, { "math_id": 47, "text": "|G| \\le 40(g-1)" }, { "math_id": 48, "text": "q = 3 " }, { "math_id": 49, "text": "R \\ge -2 + 3 - 1/2 - 1/3 - 1/7 = 1/42" }, { "math_id": 50, "text": "|G| \\le 84(g-1)" }, { "math_id": 51, "text": "H^*(X,\\mathbf{C})" }, { "math_id": 52, "text": "H^1(X,\\mathbf{Z})" }, { "math_id": 53, "text": "V=H^{0,1}(X,\\mathbf{C})" }, { "math_id": 54, "text": "h: G \\to \\operatorname{GL}(V)" }, { "math_id": 55, "text": "h(G)" }, { "math_id": 56, "text": "(\\omega,\\eta)= i \\int\\bar{\\omega}\\wedge\\eta" }, { "math_id": 57, "text": "V" }, { "math_id": 58, "text": "\\operatorname{U}(V) \\subset \\operatorname{GL}(V)" }, { "math_id": 59, "text": "h" }, { "math_id": 60, "text": "\\varphi \\in G" }, { "math_id": 61, "text": "\\operatorname{fix}(\\varphi)" }, { "math_id": 62, "text": " |\\operatorname{fix}(\\varphi)| = 1 - 2\\operatorname{tr}(h(\\varphi)) + 1 = 2 - 2\\operatorname{tr}(\\mathrm{id}_V) = 2 - 2g < 0. " }, { "math_id": 63, "text": "\\dim_{\\mathbf C}(X) = 1" }, { "math_id": 64, "text": "\\operatorname{fix}(\\varphi) = X" }, { "math_id": 65, "text": "\\varphi" }, { "math_id": 66, "text": "G \\cong h(G)" }, { "math_id": 67, "text": "X \\to \\mathbf{P}^1" }, { "math_id": 68, "text": "a^2 = b^3 = (ab)^7 = 1," } ]
https://en.wikipedia.org/wiki?curid=1113637
11136939
Biochemical systems theory
Biochemical systems theory is a mathematical modelling framework for biochemical systems, based on ordinary differential equations (ODE), in which biochemical processes are represented using power-law expansions in the variables of the system. This framework, which became known as Biochemical Systems Theory, has been developed since the 1960s by Michael Savageau, Eberhard Voit and others for the systems analysis of biochemical processes. According to Cornish-Bowden (2007) they "regarded this as a general theory of metabolic control, which includes both metabolic control analysis and flux-oriented theory as special cases". Representation. The dynamics of a species is represented by a differential equation with the structure: formula_0 where "X""i" represents one of the "n""d" variables of the model (metabolite concentrations, protein concentrations or levels of gene expression). "j" represents the "n""f" biochemical processes affecting the dynamics of the species. On the other hand, formula_1"ij" (stoichiometric coefficient), formula_2"j" (rate constants) and "f""jk" (kinetic orders) are two different kinds of parameters defining the dynamics of the system. The principal difference of power-law models with respect to other ODE models used in biochemical systems is that the kinetic orders can be non-integer numbers. A kinetic order can have even negative value when inhibition is modeled. In this way, power-law models have a higher flexibility to reproduce the non-linearity of biochemical systems. Models using power-law expansions have been used during the last 35 years to model and analyze several kinds of biochemical systems including metabolic networks, genetic networks and recently in cell signalling. References. &lt;templatestyles src="Reflist/styles.css" /&gt; Literature. Books: Scientific articles:
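The power-law structure described above can be made concrete with a toy model. The following is a minimal sketch of a two-variable power-law model integrated with SciPy; every rate constant and kinetic order below is invented purely for illustration (note the non-integer and negative kinetic orders, which are the distinctive feature of the formalism):

```python
from scipy.integrate import solve_ivp

# dX1/dt = 2.0 * X2**(-0.5) - 1.0 * X1**0.8   (production inhibited by X2, degradation of X1)
# dX2/dt = 1.5 * X1**0.5   - 1.0 * X2**1.0
def rhs(t, X):
    X1, X2 = X
    dX1 = 2.0 * X2**-0.5 - 1.0 * X1**0.8
    dX2 = 1.5 * X1**0.5 - 1.0 * X2**1.0
    return [dX1, dX2]

sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 1.0])
print(sol.y[:, -1])   # for these invented parameters the system settles toward a steady state
```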
[ { "math_id": 0, "text": "\\frac{dX_i}{dt}=\\sum_j \\mu_{ij} \\cdot \\gamma_j \\prod_k X_k^{f_{jk}}\\," }, { "math_id": 1, "text": "\\mu" }, { "math_id": 2, "text": "\\gamma" } ]
https://en.wikipedia.org/wiki?curid=11136939
11137122
Mason's invariant
In electronics, Mason's invariant, named after Samuel Jefferson Mason, is a measure of the quality of transistors. "When trying to solve a seemingly difficult problem, Sam said to concentrate on the easier ones first; the rest, including the hardest ones, will follow," recalled Andrew Viterbi, co-founder and former vice-president of Qualcomm. He had been a thesis advisee under Samuel Mason at MIT, and this was one lesson he especially remembered from his professor. A few years earlier, Mason had heeded his own advice when he defined a unilateral power gain for a linear two-port device, or U. After concentrating on easier problems with power gain in feedback amplifiers, a figure of merit for all three-terminal devices followed that is still used today as Mason's Invariant. Origin. In 1953, transistors were only five years old, and they were the only successful solid-state three-terminal active device. They were beginning to be used for RF applications, and they were limited to VHF frequencies and below. Mason wanted to find a figure of merit to compare transistors, and this led him to discover that the unilateral power gain of a linear two-port device was an invariant figure of merit. In his paper "Power Gain in Feedback Amplifiers" published in 1953, Mason stated in his introduction, "A vacuum tube, very often represented as a simple transconductance driving a passive impedance, may lead to relatively simple amplifier designs in which the input impedance (and hence the power gain) is effectively infinite, the voltage gain is the quantity of interest, and the input circuit is isolated from the load. The transistor, however, usually cannot be characterized so easily." He wanted to find a metric to characterize and measure the quality of transistors since up until then, no such measure existed. His discovery turned out to have applications beyond transistors. Derivation of U. Mason first defined the device being studied with the three constraints listed below. Then, according to Madhu Gupta in "Power Gain in Feedback Amplifiers, a Classic Revisited", Mason defined the problem as "being the search for device properties that are invariant with respect to transformations as represented by an embedding network" that satisfy the four constraints listed below. He next showed that all transformations that satisfy the above constraints can be accomplished with just three simple transformations performed sequentially. Similarly, this is the same as representing an embedding network by a set of three embedding networks nested within one another. The three mathematical expressions can be seen below. 1. Reactance padding: formula_0 2. Real Transformations: formula_1 3. Inversion: formula_2 Mason then considered which quantities remained invariant under each of these three transformations. His conclusions, listed respectively to the transformations above, are shown below. Each transformation left the values below unchanged. 1. Reactance padding: formula_3 and formula_4 2. Real transformations: formula_5 and formula_6 3. Inversion: The magnitudes of the two determinants and the sign of the denominator in the above fraction remain unchanged in the inversion transformation. Consequently, the quantity invariant under all three conditions is: formula_7 Importance. Mason's Invariant, or U, is the only device characteristic that is invariant under lossless, reciprocal embeddings. 
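In terms of two-port impedance parameters, the last expression can be evaluated directly. A minimal Python sketch (the numerical Z-matrix below is invented, standing in for measured data at a single frequency):

```python
import numpy as np

def mason_U(Z):
    """Mason's unilateral power gain U from a 2x2 impedance matrix Z."""
    num = abs(Z[0, 1] - Z[1, 0])**2
    den = 4 * (Z[0, 0].real * Z[1, 1].real - Z[0, 1].real * Z[1, 0].real)
    return num / den

# Invented Z-parameters of a hypothetical transistor at one frequency:
Z = np.array([[ 50 + 10j,   2 -  1j],
              [300 + 40j,  80 + 15j]])
print(mason_U(Z))   # about 6.7; U > 1 indicates an active device at this frequency
```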
In other words, U can be used as a figure of merit to compare any two-port active device (which includes three-terminal devices used as two-ports). For example, a factory producing BJTs can calculate U of the transistors it is producing and compare their quality to the other BJTs on the market. Furthermore, U can be used as an indicator of activity. If U is greater than one, the two-port device is active; otherwise, that device is passive. This is especially useful in the microwave engineering community. Though originally published in a circuit theory journal, Mason's paper becomes especially relevant to microwave engineers since U is usually slightly greater than or equal to one in the microwave frequency range. When U is smaller than or considerably larger than one, it becomes relatively useless. While Mason's Invariant can be used as a figure of merit across all operating frequencies, its value at ƒ"max is especially useful. ƒ"max is the maximum oscillation frequency of a device, and it is discovered when formula_8. This frequency is also the frequency at which the maximum stable gain Gms and the maximum available gain Gma of the device become one. Consequently, "ƒ"max is a characteristic of the device, and it has the significance that it is the maximum frequency of oscillation in a circuit where only one active device is present, the device is embedded in a passive network, and only single sinusoidal signals are of interest. Conclusion. In his revisit of Mason's paper, Gupta states, "Perhaps the most convincing evidence of the utility of the concept of a unilateral power gain as a device figure of merit is the fact that for the last three decades, practically every new, active, two-port device developed for high frequency use has been carefully scrutinized for the achievable value of U..." This assumption is appropriate because "Umax" or "maximum unilateral gain" is still listed on transistor specification sheets, and Mason's Invariant is still taught in some undergraduate electrical engineering curricula. Though now it has been over five decades, Mason's finding of an invariant device characteristic still plays a significant role in transistor design. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "\n\\begin{bmatrix}\n\tZ'_{11} & Z'_{12} \\\\\n\tZ'_{21} & Z'_{22}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\tZ_{11}+jx_{11} & Z_{12}+jx_{12} \\\\\n\tZ_{21}+jx_{21} & Z_{22}+jx_{22}\n\\end{bmatrix}\n" }, { "math_id": 1, "text": "\n\\begin{bmatrix}\n\tZ'_{11} & Z'_{12} \\\\\n\tZ'_{21} & Z'_{22}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\tn_{11} & n_{12} \\\\\n\tn_{21} & n_{22}\n\\end{bmatrix}\n\\begin{bmatrix}\n\tZ_{11} & Z_{12} \\\\\n\tZ_{21} & Z_{22}\n\\end{bmatrix}\n\\begin{bmatrix}\n\tn_{11} & n_{12} \\\\\n\tn_{21} & n_{22}\n\\end{bmatrix}\n" }, { "math_id": 2, "text": "\n\\begin{bmatrix}\n\tZ'_{11} & Z'_{12} \\\\\n\tZ'_{21} & Z'_{22}\n\\end{bmatrix}\n=\n\\begin{bmatrix}\n\tZ_{11} & Z_{12} \\\\\n\tZ_{21} & Z_{22}\n\\end{bmatrix}^{-1}\n" }, { "math_id": 3, "text": "\n\\left [ Z-Z_{t} \\right ]\n" }, { "math_id": 4, "text": "\n\\left [ Z+Z^{*} \\right ]\n" }, { "math_id": 5, "text": "\n\\left [ Z-Z_{t} \\right ]\n\\left [ Z+Z^{*} \\right ]\n" }, { "math_id": 6, "text": "\n\\dfrac{\\det{\\left [ Z-Z_{t} \\right ]}}{\\det{\\left [ Z+Z^{*} \\right ]}}\n" }, { "math_id": 7, "text": "\n\\begin{align}\nU & =\\dfrac{|\\det{\\left [ Z-Z_t \\right ]}|}{\\det{\\left [ Z+Z^{*} \\right ]}} \\\\\n& =\n\\dfrac{|Z_{12}-Z_{21}|^{2}}{4 (\\operatorname{Re}[Z_{11}] Re[Z_{22}]-\\operatorname{Re}[Z_{12}] \\operatorname{Re}[Z_{21}])} \\\\\n& =\n\\dfrac{|Y_{21}-Y_{12}|^{2}}{4 (\\operatorname{Re}[Y_{11}] \\operatorname{Re}[Y_{22}]-\\operatorname{Re}[Y_{12}] \\operatorname{Re}[Y_{21}])}\n\\end{align}\n" }, { "math_id": 8, "text": "U(f_\\max) = 1" } ]
https://en.wikipedia.org/wiki?curid=11137122
11137263
GF(2)
Finite field of two elements GF(2) (also denoted formula_0, Z/2Z or formula_1) is the finite field with two elements. GF(2) is the field with the smallest possible number of elements, and is unique if the additive identity and the multiplicative identity are denoted respectively 0 and 1, as usual. The elements of GF(2) may be identified with the two possible values of a bit and with the boolean values "true" and "false". It follows that GF(2) is fundamental and ubiquitous in computer science and its logical foundations. Definition. GF(2) is the unique field with two elements with its additive and multiplicative identities respectively denoted 0 and 1. Its addition is defined as the usual addition of integers but modulo 2, so that 0 + 0 = 0, 0 + 1 = 1 + 0 = 1, and 1 + 1 = 0. If the elements of GF(2) are seen as boolean values, then the addition is the same as that of the logical XOR operation. Since each element equals its opposite, subtraction is thus the same operation as addition. The multiplication of GF(2) is again the usual multiplication modulo 2 (so that 0 · 0 = 0 · 1 = 1 · 0 = 0 and 1 · 1 = 1), and on boolean variables corresponds to the logical AND operation. GF(2) can be identified with the field of the integers modulo 2, that is, the quotient ring of the ring of integers Z by the ideal 2Z of all even numbers: GF(2) = Z/2Z. Notations Z2 and formula_2 may be encountered although they can be confused with the notation of 2-adic integers. Properties. Because GF(2) is a field, many of the familiar properties of number systems such as the rational numbers and real numbers are retained: addition, subtraction, multiplication, and division (by a nonzero element) are defined and satisfy the usual associative, commutative, and distributive laws. Properties that are not familiar from the real numbers include: every element "x" of GF(2) satisfies "x" + "x" = 0, so that "x" = −"x" (the characteristic of the field is 2), and every element "x" satisfies "x" · "x" = "x". Applications. Because of the algebraic properties above, many familiar and powerful tools of mathematics work in GF(2) just as well as in other fields. For example, matrix operations, including matrix inversion, can be applied to matrices with elements in GF(2) ("see" matrix ring). Any group ("V,+") with the property "v" + "v" = 0 for every "v" in "V" is necessarily abelian and can be turned into a vector space over GF(2) in a natural fashion, by defining 0"v" = 0 and 1"v" = "v" for all "v" in "V". This vector space will have a basis, implying that the number of elements of "V" must be a power of 2 (or infinite). In modern computers, data are represented with bit strings of a fixed length, called "machine words". These are endowed with the structure of a vector space over GF(2). The addition of this vector space is the bitwise operation called XOR (exclusive or). The bitwise AND is another operation on this vector space, which makes it a Boolean algebra, a structure that underlies all computer science. These spaces can also be augmented with a multiplication operation that makes them into a field GF(2^"n"), but the multiplication operation cannot be a bitwise operation. When "n" is itself a power of two, the multiplication operation can be nim-multiplication; alternatively, for any "n", one can use multiplication of polynomials over GF(2) modulo an irreducible polynomial (as for instance for the field GF(2^8) in the description of the Advanced Encryption Standard cipher). Vector spaces and polynomial rings over GF(2) are widely used in coding theory, and in particular in error correcting codes and modern cryptography. For example, many common error correcting codes (such as BCH codes) are linear codes over GF(2) (codes defined from vector spaces over GF(2)), or polynomial codes (codes defined as quotients of polynomial rings over GF(2)). Algebraic closure. 
Like any field, GF(2) has an algebraic closure. This is a field "F" which contains GF(2) as a subfield, which is algebraic over GF(2) (i.e. every element of "F" is a root of a polynomial with coefficients in GF(2)), and which is algebraically closed (any non-constant polynomial with coefficients in "F" has a root in "F"). The field "F" is uniquely determined by these properties, up to a field automorphism (i.e. essentially up to the notation of its elements). "F" is countable and contains a single copy of each of the finite fields GF(2"n"); the copy of GF(2"n") is contained in the copy of GF(2"m") if and only if "n" divides "m." The field "F" is countable and is the union of all these finite fields. Conway realized that "F" can be identified with the ordinal number formula_3, where the addition and multiplication operations are defined in a natural manner by transfinite induction (these operations are however different from the standard addition and multiplication of ordinal numbers). The addition in this field is simple to perform and is akin to Nim-addition; Lenstra has shown that the multiplication can also be performed efficiently. References. &lt;templatestyles src="Reflist/styles.css" /&gt; &lt;templatestyles src="Reflist/styles.css" /&gt;
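As a small illustration of the arithmetic described in this article (addition as XOR, multiplication as AND, and GF(2^8) realized as polynomials over GF(2) reduced modulo an irreducible polynomial), here is a minimal Python sketch. The reduction polynomial 0x11B (x^8 + x^4 + x^3 + x + 1) is the one used by AES, and 0x57 · 0x83 = 0xC1 is a standard test value for it.

```python
# GF(2) itself: addition is XOR, multiplication is AND.
print(1 ^ 1, 1 & 1)   # 0 1

def gf256_mul(a, b, modulus=0x11B):
    """Multiply two elements of GF(2^8): bytes viewed as polynomials over GF(2),
    reduced modulo x^8 + x^4 + x^3 + x + 1 (the AES polynomial)."""
    result = 0
    while b:
        if b & 1:
            result ^= a          # polynomial addition over GF(2) is XOR
        b >>= 1
        a <<= 1                  # multiply a by x
        if a & 0x100:
            a ^= modulus         # reduce modulo the irreducible polynomial
    return result

print(hex(gf256_mul(0x57, 0x83)))   # 0xc1
```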
[ { "math_id": 0, "text": "\\mathbb F_2" }, { "math_id": 1, "text": "\\mathbb Z/2\\mathbb Z" }, { "math_id": 2, "text": "\\mathbb Z_2" }, { "math_id": 3, "text": "\\omega^{\\omega^\\omega}" } ]
https://en.wikipedia.org/wiki?curid=11137263
11139487
Frictionless plane
The frictionless plane is a concept from the writings of Galileo Galilei. In his 1638 "The Two New Sciences", Galileo presented a formula that predicted the motion of an object moving down an inclined plane. His formula was based upon his past experimentation with free-falling bodies. However, his model was not based upon experimentation with objects moving down an inclined plane, but from his conceptual modeling of the forces acting upon the object. Galileo understood the mechanics of the inclined plane as the combination of horizontal and vertical vectors; the result of gravity acting upon the object, diverted by the slope of the plane. However, Galileo's equations do not contemplate friction, and therefore do not perfectly predict the results of an actual experiment. This is because some energy is always lost when one mass applies a non-zero normal force to another. Therefore, the observed speed, acceleration and distance traveled should be less than Galileo predicts. This energy is lost in forms like sound and heat. However, from Galileo's predictions of an object moving down an inclined plane in a frictionless environment, he created the theoretical foundation for extremely fruitful real-world experimental prediction. Frictionless planes do not exist in the real world. However, if they did, one can be almost certain that objects on them would behave exactly as Galileo predicts. Despite their nonexistence, they have considerable value in the design of engines, motors, roadways, and even tow-truck beds, to name a few examples. The effect of friction on an object moving down an inclined plane can be calculated as formula_0 where formula_1 is the force of friction exerted by the object and the inclined plane on each other, parallel to the surface of the plane, formula_2 is the normal force exerted by the object and the plane on each other, directed perpendicular to the plane, and formula_3 is the coefficient of kinetic friction. Unless the inclined plane is in a vacuum, a (usually) small amount of potential energy is also lost to air drag.
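The formula above makes the gap between Galileo's idealized prediction and a real incline easy to quantify. A minimal Python sketch (the slope angle and the friction coefficient are invented for illustration); with the normal force equal to m·g·cos(θ) on an incline, the frictional term subtracts μ_k·g·cos(θ) from the frictionless acceleration g·sin(θ):

```python
import math

g = 9.81                      # m/s^2
theta = math.radians(30)      # slope of the inclined plane (invented)
mu_k = 0.2                    # coefficient of kinetic friction (invented)

a_frictionless  = g * math.sin(theta)                             # Galileo's prediction
a_with_friction = g * (math.sin(theta) - mu_k * math.cos(theta))  # observed value is smaller

print(a_frictionless)    # ~4.9 m/s^2
print(a_with_friction)   # ~3.2 m/s^2
```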
[ { "math_id": 0, "text": " F_\\mathrm{f} = \\mu_\\mathrm{k} F_\\mathrm{N}, " }, { "math_id": 1, "text": " F_\\mathrm{f} " }, { "math_id": 2, "text": "F_\\mathrm{N}" }, { "math_id": 3, "text": "\\mu_\\mathrm{k}" } ]
https://en.wikipedia.org/wiki?curid=11139487
11140843
Townsend (unit)
The townsend (symbol Td) is a physical unit of the reduced electric field (the ratio E/N), where formula_0 is the electric field and formula_1 is the concentration of neutral particles. It is named after John Sealy Townsend, who conducted early research into gas ionisation. Definition. It is defined by the relation formula_2 For example, an electric field of formula_3 in a medium with the number density of an ideal gas at 1 atm and 0 °C (the Loschmidt constant formula_4) gives formula_5, which corresponds to formula_6. Uses. This unit is important in gas discharge physics, where it serves as a scaling parameter because the mean energy of electrons (and therefore many other properties of the discharge) is typically a function of formula_7 over a broad range of formula_0 and formula_1. The concentration formula_1, which for an ideal gas is simply related to pressure and temperature, controls the mean free path and the collision frequency. The electric field formula_0 governs the energy gained between two successive collisions. That the reduced electric field is a scaling factor effectively means that increasing the electric field intensity "E" by some factor "q" has the same consequences as lowering the gas density "N" by the factor "q".
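The definition and the numerical example above translate directly into a one-line conversion. A minimal Python sketch (the function name is ours):

```python
def reduced_field_in_Td(E_volts_per_m, N_per_m3):
    """Reduced electric field E/N expressed in townsend (1 Td = 1e-21 V*m^2)."""
    return (E_volts_per_m / N_per_m3) / 1e-21

n0 = 2.6867811e25                      # m^-3, Loschmidt constant (ideal gas, 1 atm, 0 °C)
print(reduced_field_in_Td(2.5e4, n0))  # ~0.93, i.e. roughly 1 Td, as in the example above
```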
[ { "math_id": 0, "text": "E" }, { "math_id": 1, "text": "N" }, { "math_id": 2, "text": "1 \\, {\\rm Td} = 10^{-21} \\, {\\rm V\\cdot m^2} = 10^{-17} \\, {\\rm V\\cdot cm^2}." }, { "math_id": 3, "text": "E = 2.5 \\cdot 10^{4} \\, {\\rm V/m}" }, { "math_id": 4, "text": "n_0 = 2.6867811 \\cdot 10^{25} \\, {\\rm m^{-3}}" }, { "math_id": 5, "text": "E/n_0 \\approx 10^{-21} \\, {\\rm V \\cdot m^{2}}" }, { "math_id": 6, "text": "1 \\, {\\rm Td}" }, { "math_id": 7, "text": "E/N" } ]
https://en.wikipedia.org/wiki?curid=11140843
11141222
Tetrad formalism
Relativity with a basis not derived from coordinates The tetrad formalism is an approach to general relativity that generalizes the choice of basis for the tangent bundle from a coordinate basis to the less restrictive choice of a local basis, i.e. a locally defined set of four linearly independent vector fields called a "tetrad" or "vierbein". It is a special case of the more general idea of a "vielbein formalism", which is set in (pseudo-)Riemannian geometry. This article as currently written makes frequent mention of general relativity; however, almost everything it says is equally applicable to (pseudo-)Riemannian manifolds in general, and even to spin manifolds. Most statements hold simply by substituting arbitrary formula_0 for formula_1. In German, "vier" translates to "four", and "viel" to "many". The general idea is to write the metric tensor as the product of two "vielbeins", one on the left, and one on the right. The effect of the vielbeins is to change the coordinate system used on the tangent manifold to one that is simpler or more suitable for calculations. It is frequently the case that the vielbein coordinate system is orthonormal, as that is generally the easiest to use. Most tensors become simple or even trivial in this coordinate system; thus the complexity of most expressions is revealed to be an artifact of the choice of coordinates, rather than an innate property or physical effect. That is, as a formalism, it does not alter predictions; it is rather a calculational technique. The advantage of the tetrad formalism over the standard coordinate-based approach to general relativity lies in the ability to choose the tetrad basis to reflect important physical aspects of the spacetime. The abstract index notation denotes tensors as if they were represented by their coefficients with respect to a fixed local tetrad. Compared to a completely coordinate free notation, which is often conceptually clearer, it allows an easy and computationally explicit way to denote contractions. The significance of the tetradic formalism appears in the Einstein–Cartan formulation of general relativity. The tetradic formalism of the theory is more fundamental than its metric formulation, as one can "not" convert between the tetradic and metric formulations of the fermionic actions despite this being possible for bosonic actions. This is effectively because Weyl spinors can be very naturally defined on a Riemannian manifold and their natural setting leads to the spin connection. Those spinors take form in the vielbein coordinate system, and not in the manifold coordinate system. The privileged tetradic formalism also appears in the "deconstruction" of "higher dimensional" Kaluza–Klein gravity theories and massive gravity theories, in which the extra dimension(s) is/are replaced by a series of N lattice sites such that the higher dimensional metric is replaced by a set of interacting metrics that depend only on the 4D components. Vielbeins commonly appear in other general settings in physics and mathematics. Vielbeins can be understood as solder forms. Mathematical formulation. The tetrad formulation is a special case of a more general formulation, known as the vielbein or n-bein formulation, with n=4. Make note of the spelling: in German, "viel" means "many", not to be confused with "vier", meaning "four". 
In the vielbein formalism, an open cover of the spacetime manifold formula_2 and a local basis for each of those open sets is chosen: a set of formula_0 independent vector fields formula_3 for formula_4 that together span the formula_0-dimensional tangent bundle at each point in the set. Dually, a vielbein (or tetrad in 4 dimensions) determines (and is determined by) a dual co-vielbein (co-tetrad) — a set of formula_0 independent 1-forms. formula_5 such that formula_6 where formula_7 is the Kronecker delta. A vielbein is usually specified by its coefficients formula_8 with respect to a coordinate basis, despite the choice of a set of (local) coordinates formula_9 being unnecessary for the specification of a tetrad. Each covector is a solder form. From the point of view of the differential geometry of fiber bundles, the n vector fields formula_10 define a section of the frame bundle "i.e." a parallelization of formula_11 which is equivalent to an isomorphism formula_12. Since not every manifold is parallelizable, a vielbein can generally only be chosen locally ("i.e." only on a coordinate chart formula_13 and not all of formula_2.) All tensors of the theory can be expressed in the vector and covector basis, by expressing them as linear combinations of members of the (co)vielbein. For example, the spacetime metric tensor can be transformed from a coordinate basis to the tetrad basis. Popular tetrad bases in general relativity include orthonormal tetrads and null tetrads. Null tetrads are composed of four null vectors, so are used frequently in problems dealing with radiation, and are the basis of the Newman–Penrose formalism and the GHP formalism. Relation to standard formalism. The standard formalism of differential geometry (and general relativity) consists simply of using the coordinate tetrad in the tetrad formalism. The coordinate tetrad is the canonical set of vectors associated with the coordinate chart. The coordinate tetrad is commonly denoted formula_14 whereas the dual cotetrad is denoted formula_15. These tangent vectors are usually defined as directional derivative operators: given a chart formula_16 which maps a subset of the manifold into coordinate space formula_17, and any scalar field formula_18, the coordinate vectors are such that: formula_19 The definition of the cotetrad uses the usual abuse of notation formula_20 to define covectors (1-forms) on formula_2. The involvement of the coordinate tetrad is not usually made explicit in the standard formalism. In the tetrad formalism, instead of writing tensor equations out fully (including tetrad elements and tensor products formula_21 as above) only "components" of the tensors are mentioned. For example, the metric is written as "formula_22". When the tetrad is unspecified this becomes a matter of specifying the type of the tensor called abstract index notation. It allows to easily specify contraction between tensors by repeating indices as in the Einstein summation convention. Changing tetrads is a routine operation in the standard formalism, as it is involved in every coordinate transformation (i.e., changing from one coordinate tetrad basis to another). Switching between multiple coordinate charts is necessary because, except in trivial cases, it is not possible for a single coordinate chart to cover the entire manifold. Changing to and between general tetrads is much similar and equally necessary (except for parallelizable manifolds). 
Any tensor can locally be written in terms of this coordinate tetrad or a general (co)tetrad. For example, the metric tensor formula_23 can be expressed as: formula_24 (Here we use the Einstein summation convention). Likewise, the metric can be expressed with respect to an arbitrary (co)tetrad as formula_25 Here, the choice of alphabet (Latin versus Greek) for the index variables distinguishes the applicable basis. We can translate from a general co-tetrad to the coordinate co-tetrad by expanding the covector formula_26. We then get formula_27 from which it follows that formula_28. Likewise expanding formula_29 with respect to the general tetrad, we get formula_30 which shows that formula_31. Manipulation of indices. The manipulation of tetrad coefficients shows that abstract index formulas can, in principle, be obtained from tensor formulas with respect to a coordinate tetrad by "replacing Greek with Latin indices". However, care must be taken that a coordinate tetrad formula defines a genuine tensor when differentiation is involved. Since the coordinate vector fields have vanishing Lie bracket (i.e. commute: formula_32), naive substitutions of formulas that correctly compute tensor coefficients with respect to a coordinate tetrad may not correctly define a tensor with respect to a general tetrad because the Lie bracket is non-vanishing: formula_33. Thus, it is sometimes said that tetrad coordinates provide a non-holonomic basis. For example, the Riemann curvature tensor is defined for general vector fields formula_34 by formula_35. In a coordinate tetrad this gives tensor coefficients formula_36 The naive "Greek to Latin" substitution of the latter expression formula_37 is incorrect because for fixed "c" and "d", formula_38 is, in general, a first-order differential operator rather than a zeroth-order operator which defines a tensor coefficient. Substituting a general tetrad basis into the abstract formula, however, we find the proper definition of the curvature in abstract index notation: formula_39 where formula_40. Note that the expression formula_41 is indeed a zeroth-order operator, hence (the ("c" "d")-component of) a tensor. Since it agrees with the coordinate expression for the curvature when specialised to a coordinate tetrad, it is clear, even without using the abstract definition of the curvature, that it defines the same tensor as the coordinate basis expression. Example: Lie groups. Given a vector (or covector) in the tangent (or cotangent) manifold, the exponential map describes the corresponding geodesic of that tangent vector. Writing formula_42, the parallel transport of a differential corresponds to formula_43 The above can be readily verified simply by taking formula_44 to be a matrix. For the special case of a Lie algebra, the formula_44 can be taken to be an element of the algebra, the exponential is the exponential map of a Lie group, and group elements correspond to the geodesics of the tangent vector. Choosing a basis formula_45 for the Lie algebra and writing formula_46 for some functions formula_47 the commutators can be explicitly written out. One readily computes that formula_48 for formula_49 the structure constants of the Lie algebra. The series can be written more compactly as formula_50 with the infinite series formula_51 Here, formula_2 is a matrix whose matrix elements are formula_52. The matrix formula_53 is then the vielbein; it expresses the differential formula_54 in terms of the "flat coordinates" (orthonormal, at that) formula_45. 
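The remark that the commutator series for formula_43 "can be readily verified simply by taking formula_44 to be a matrix" invites a quick numerical check. The sketch below is illustrative only (the matrix size, random seed, and truncation depth are arbitrary assumptions); it compares a central finite difference of the matrix exponential against the truncated nested-commutator series:

```python
import math
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
X = 0.5 * rng.normal(size=(4, 4))   # arbitrary matrix standing in for X
dX = rng.normal(size=(4, 4))        # arbitrary direction playing the role of dX

# Left-hand side: e^{-X} d(e^X) along dX, via a central finite difference
h = 1e-6
lhs = expm(-X) @ (expm(X + h * dX) - expm(X - h * dX)) / (2 * h)

# Right-hand side: dX - [X, dX]/2! + [X, [X, dX]]/3! - ..., truncated
rhs = np.zeros_like(X)
term = dX.copy()                    # term holds the k-fold nested commutator ad_X^k(dX)
for k in range(30):
    rhs += (-1) ** k * term / math.factorial(k + 1)
    term = X @ term - term @ X      # next nested commutator

print(np.max(np.abs(lhs - rhs)))    # small residual, limited only by the finite-difference step
```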
Given a map formula_55 from a manifold formula_56 to a Lie group formula_57, the metric tensor on the manifold formula_56 becomes the pullback of the metric tensor formula_58 on the Lie group formula_57: formula_59 The metric tensor formula_58 on the Lie group is the Cartan metric, also known as the Killing form. Note that, as a matrix, the second W is the transpose. For formula_56 a (pseudo-)Riemannian manifold, the metric is a (pseudo-)Riemannian metric. The above generalizes to the case of symmetric spaces. These vielbeins are used to perform calculations in sigma models, of which the supergravity theories are a special case. See also. &lt;templatestyles src="Div col/styles.css"/&gt; Notes. &lt;templatestyles src="Reflist/styles.css" /&gt; Citations. &lt;templatestyles src="Reflist/styles.css" /&gt;
[ { "math_id": 0, "text": "n" }, { "math_id": 1, "text": "n=4" }, { "math_id": 2, "text": "M" }, { "math_id": 3, "text": "e_a = e_a{}^{\\mu} \\partial_\\mu" }, { "math_id": 4, "text": "a=1,\\ldots,n" }, { "math_id": 5, "text": "e^a = e^a{}_{\\mu} dx^\\mu" }, { "math_id": 6, "text": " e^a (e_b) = e^a{}_{\\mu} e_b{}^\\mu = \\delta^{a}_{b}," }, { "math_id": 7, "text": "\\delta^{a}_{b}" }, { "math_id": 8, "text": "e^\\mu{}_{a}" }, { "math_id": 9, "text": "x^\\mu" }, { "math_id": 10, "text": "\\{e_a\\}_{a=1\\dots n}" }, { "math_id": 11, "text": "U\\subset M" }, { "math_id": 12, "text": "TU \\cong U\\times {\\mathbb R^n}" }, { "math_id": 13, "text": "U" }, { "math_id": 14, "text": "\\{\\partial_\\mu\\}" }, { "math_id": 15, "text": "\\{d x^\\mu\\}" }, { "math_id": 16, "text": "{\\varphi = (\\varphi^1, \\ldots, \\varphi^n)}" }, { "math_id": 17, "text": "\\mathbb R^n" }, { "math_id": 18, "text": "f" }, { "math_id": 19, "text": "\\partial_\\mu [f] \\equiv \\frac{\\partial (f \\circ \\varphi^{-1}) }{\\partial x^\\mu}." }, { "math_id": 20, "text": " dx^\\mu = d\\varphi^\\mu" }, { "math_id": 21, "text": "\\otimes" }, { "math_id": 22, "text": "g_{ab}" }, { "math_id": 23, "text": "\\mathbf g" }, { "math_id": 24, "text": "\\mathbf g = g_{\\mu\\nu}dx^\\mu dx^\\nu \\qquad \\text{where}~g_{\\mu\\nu} = \\mathbf g(\\partial_\\mu,\\partial_\\nu) ." }, { "math_id": 25, "text": "\\mathbf g = g_{ab}e^a e^b \\qquad \\text{where}~g_{ab} = \\mathbf g\\left(e_a,e_b\\right) ." }, { "math_id": 26, "text": " e^a = e^a{}_{\\mu} dx^\\mu " }, { "math_id": 27, "text": "\\mathbf g = g_{ab}e^a e^b = \ng_{ab}e^a{}_{\\mu} e^b{}_{\\nu} dx^\\mu dx^\\nu = g_{\\mu\\nu}dx^{\\mu}dx^{\\nu}" }, { "math_id": 28, "text": " g_{\\mu\\nu} = g_{ab}e^a{}_{\\mu} e^b{}_{\\nu}" }, { "math_id": 29, "text": "dx^\\mu = e^\\mu{}_{a}e^a" }, { "math_id": 30, "text": "\\mathbf g = g_{\\mu\\nu}dx^{\\mu}dx^{\\nu} = \ng_{\\mu \\nu} e^\\mu{}_{a} e^\\nu{}_{b} e^a e^b = g_{ab}e^a e^b" }, { "math_id": 31, "text": "g_{ab} = g_{\\mu\\nu}e^\\mu{}_{a} e^\\nu{}_{b}" }, { "math_id": 32, "text": " \\partial_\\mu\\partial_\\nu = \\partial_\\nu\\partial_\\mu " }, { "math_id": 33, "text": "[e_a, e_b] \\ne 0" }, { "math_id": 34, "text": "X, Y" }, { "math_id": 35, "text": " R(X,Y) = \\left(\\nabla_X \\nabla_Y - \\nabla_Y\\nabla_X - \\nabla_{[X,Y]}\\right) " }, { "math_id": 36, "text": " R^\\mu_{\\ \\nu\\sigma\\tau} = \n dx^\\mu\\left((\\nabla_\\sigma\\nabla_\\tau - \\nabla_\\tau\\nabla_\\sigma)\\partial_\\nu\\right)." 
}, { "math_id": 37, "text": " R^a_{\\ bcd} = e^a\\left((\\nabla_c\\nabla_d - \\nabla_d\\nabla_c)e_b\\right) \\qquad \\text{(wrong!)}" }, { "math_id": 38, "text": "\\left(\\nabla_c\\nabla_d - \\nabla_d\\nabla_c\\right) " }, { "math_id": 39, "text": " R^a_{\\ bcd}= e^a\\left((\\nabla_c\\nabla_d - \\nabla_d\\nabla_c - f_{cd}{}^{e}\\nabla_e)e_b\\right)" }, { "math_id": 40, "text": "[e_a, e_b] = f_{ab}{}^{c}e_c" }, { "math_id": 41, "text": "\\left(\\nabla_c\\nabla_d - \\nabla_d\\nabla_c - f_{cd}{}^{e}\\nabla_e\\right)" }, { "math_id": 42, "text": "X\\in TM" }, { "math_id": 43, "text": "e^{-X} de^X= dX-\\frac{1}{2!}\\left[X,dX\\right]+\\frac{1}{3!}[X,[X,dX]]-\\frac{1}{4!}[X,[X,[X,dX]]]+\\cdots" }, { "math_id": 44, "text": "X" }, { "math_id": 45, "text": "e_i" }, { "math_id": 46, "text": "X=X^ie_i" }, { "math_id": 47, "text": "X^i," }, { "math_id": 48, "text": "e^{-X}d e^X= \n dX^i e_i-\\frac{1}{2!} X^i dX^j {f_{ij}}^k e_k +\n \\frac{1}{3!} X^iX^j dX^k {f_{jk}}^l {f_{il}}^m e_m - \\cdots " }, { "math_id": 49, "text": "[e_i,e_j]={f_{ij}}^k e_k" }, { "math_id": 50, "text": "e^{-X}d e^X= e_i{W^i}_j dX^j" }, { "math_id": 51, "text": "W=\\sum_{n=0}^\\infty \\frac{(-1)^nM^n}{(n+1)!} = (I-e^{-M})M^{-1}." }, { "math_id": 52, "text": "{M_j}^k = X^i{f_{ij}}^k" }, { "math_id": 53, "text": "W" }, { "math_id": 54, "text": "dX^j" }, { "math_id": 55, "text": "N\\to G" }, { "math_id": 56, "text": "N" }, { "math_id": 57, "text": "G" }, { "math_id": 58, "text": "B_{mn}" }, { "math_id": 59, "text": "g_{ij}= {W_i}^m B_{mn}{W^n}_j" } ]
https://en.wikipedia.org/wiki?curid=11141222
1114171
Order (ring theory)
In mathematics, an order in the sense of ring theory is a subring formula_0 of a ring formula_1, such that formula_1 is a finite-dimensional algebra over the field of rational numbers formula_2, formula_0 spans formula_1 over formula_2, and formula_0 is a formula_3-lattice in formula_1. The last two conditions can be stated in less formal terms: Additively, formula_0 is a free abelian group generated by a basis for "formula_1" over formula_2. More generally for "formula_4" an integral domain with fraction field "formula_5", an "formula_4"-order in a finite-dimensional "formula_5"-algebra "formula_1" is a subring formula_0 of "formula_1" which is a full "formula_4"-lattice; i.e. is a finite "formula_4"-module with the property that "formula_6". When "formula_1" is not a commutative ring, the idea of order is still important, but the phenomena are different. For example, the Hurwitz quaternions form a maximal order in the quaternions with rational coordinates; they are not the quaternions with integer coordinates in the most obvious sense. Maximal orders exist in general, but need not be unique: there is in general no largest order, but a number of maximal orders. An important class of examples is that of integral group rings. Examples. Some examples of orders are: if formula_1 is the matrix algebra formula_7 over formula_5, then the matrix algebra formula_8 over formula_4 is an formula_4-order in formula_1; if formula_4 is an integral domain and formula_9 a finite separable extension of formula_5, then the integral closure formula_10 of formula_4 in formula_9 is an formula_4-order in formula_9; if formula_11 in formula_1 is an integral element over formula_4, then the polynomial ring formula_12 is an formula_4-order in the algebra formula_13; and if formula_1 is the group ring formula_14 of a finite group formula_15, then formula_16 is an formula_4-order in formula_14. A fundamental property of formula_4-orders is that every element of an formula_4-order is integral over formula_4. If the integral closure formula_10 of formula_4 in formula_1 is an formula_4-order, then the integrality of every element of every formula_4-order shows that formula_10 must be the unique maximal formula_4-order in formula_1. However, formula_10 need not always be an formula_4-order: indeed formula_10 need not even be a ring, and even if formula_10 is a ring (for example, when formula_1 is commutative) then formula_10 need not be an formula_4-lattice. Algebraic number theory. The leading example is the case where "formula_1" is a number field "formula_5" and formula_0 is its ring of integers. In algebraic number theory there are, for any "formula_5" other than the rational field, examples of proper subrings of the ring of integers that are also orders. For example, in the field extension "formula_17" of Gaussian rationals over formula_2, the integral closure of "formula_3" is the ring of Gaussian integers "formula_18" and so this is the unique "maximal" "formula_3"-order: all other orders in "formula_1" are contained in it. For example, we can take the subring of complex numbers of the form formula_19, with formula_11 and formula_20 integers. The maximal order question can be examined at a local field level. This technique is applied in algebraic number theory and modular representation theory. Notes. &lt;templatestyles src="Reflist/styles.css" /&gt;
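A small SymPy check (illustrative only; the specific elements are chosen arbitrarily and are not from the article) makes the Gaussian-integer example concrete: numbers of the form formula_19 are closed under multiplication, and each such element is integral over formula_3, i.e. its minimal polynomial is monic with integer coefficients, as expected for elements of an order:

```python
import sympy as sp

def in_subring(z):
    """Is z of the form a + 2bi with a, b integers? (the example order inside the Gaussian integers)"""
    a, b = sp.re(z), sp.im(z)
    return bool(a.is_integer and (b / 2).is_integer)

z1 = 3 + 10 * sp.I    # a = 3,  b = 5
z2 = -1 + 8 * sp.I    # a = -1, b = 4

# Closure under multiplication: (a + 2bi)(c + 2di) = (ac - 4bd) + 2(ad + bc)i
print(in_subring(sp.expand(z1 * z2)))       # True

# Integrality: the minimal polynomial over the rationals is monic with integer coefficients
x = sp.symbols('x')
print(sp.minimal_polynomial(z1, x))         # x**2 - 6*x + 109
```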
[ { "math_id": 0, "text": "\\mathcal{O}" }, { "math_id": 1, "text": "A" }, { "math_id": 2, "text": "\\mathbb{Q}" }, { "math_id": 3, "text": "\\mathbb{Z}" }, { "math_id": 4, "text": "R" }, { "math_id": 5, "text": "K" }, { "math_id": 6, "text": "\\mathcal{O}\\otimes_RK=A" }, { "math_id": 7, "text": "M_n(K)" }, { "math_id": 8, "text": "M_n(R)" }, { "math_id": 9, "text": "L" }, { "math_id": 10, "text": "S" }, { "math_id": 11, "text": "a" }, { "math_id": 12, "text": "R[a]" }, { "math_id": 13, "text": "K[a]" }, { "math_id": 14, "text": "K[G]" }, { "math_id": 15, "text": "G" }, { "math_id": 16, "text": "R[G]" }, { "math_id": 17, "text": "A=\\mathbb{Q}(i)" }, { "math_id": 18, "text": "\\mathbb{Z}[i]" }, { "math_id": 19, "text": "a+2bi" }, { "math_id": 20, "text": "b" } ]
https://en.wikipedia.org/wiki?curid=1114171
1114297
Pi (disambiguation)
Pi (π) is a mathematical constant equal to a circle's circumference divided by its diameter. Pi, π or Π may also refer to: &lt;templatestyles src="Template:TOC_right/styles.css" /&gt; See also. Topics referred to by the same term &lt;templatestyles src="Dmbox/styles.css" /&gt; This page lists articles associated with the title Pi.
[ { "math_id": 0, "text": "\\pi" } ]
https://en.wikipedia.org/wiki?curid=1114297
11143727
Tower Mounted Amplifier
A Tower Mounted Amplifier (TMA), or Mast Head Amplifier (MHA), is a low-noise amplifier (LNA) mounted as close as practical to the antenna in mobile masts or base transceiver stations. A TMA reduces the base transceiver station noise figure (NF) and therefore improves its overall sensitivity; in other words the mobile mast is able to receive weaker signals. The power to feed the amplifier (in the top of the mast) is usually a DC component on the same coaxial cable that feeds the antenna, otherwise an extra power cable has to be run to the TMA/MHA to supply it with power. Benefits in mobile communications. In two-way communications systems, there are occasions when one way, one link, is weaker than the other, normally referred to as unbalanced links. This can be fixed by making the transmitter on that link stronger or the receiver more sensitive to weaker signals. TMAs are used in mobile networks to improve the sensitivity of the uplink in mobile phone masts, since the transmitter in a mobile phone cannot easily be modified to transmit stronger signals. Improving the "uplink" translates into a combination of better coverage and the mobile transmitting at less power, which in turn implies a lower drain on its battery and thus a longer battery life. There are occasions when the cable between the antenna and the receiver is so lossy (too thin or too long) that the signal from the antenna weakens too much before reaching the receiver; therefore it may be decided to install TMAs from the start to make the system viable. In other words, the TMA can only partially correct, or palliate, the link imbalance. Mathematical principles. In a receiver, the receiving path starts with the signal originating at the antenna. Then the signal is amplified in further stages within the receiver. It is actually not amplified all at once but in stages, with some stages producing other changes (like changing the signal's frequency). The principle can be demonstrated mathematically; the receiver's noise figure is calculated by modularly assessing each amplifier stage. Each stage consists of a noise figure (F) and an amount of amplification, or gain (G). So amplifier number 1 sits right after the antenna and is described by formula_0 and formula_1. The relationship of the stages is known as the Friis formula. formula_2 Note that: Applying the Friis formula to TMAs. Typical receiver without TMA. Start with a typical receiver: Antenna - Connecting Cable (stage 1) - Receiver (stage 2). formula_5 The first stage after the antenna is actually the connecting cable. Therefore: What can be done to improve the receiver to pick up very weak signals? It must have a lower noise figure; that is when the TMA comes into use. Typical receiver with TMA. It is a chain of 4 modules: antenna - short connecting cable (stage 1) - TMA (stage 2) - longer connecting cable (stage 3) - receiver (stage 4) Applying the Friis formula to this case, the noise figure is now: formula_10 In this way, the cable losses are now negligible and do not significantly affect the system noise figure. This number is normally expressed in decibels (dB) thus: formula_11 References. &lt;templatestyles src="Reflist/styles.css" /&gt;
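To make the effect of a TMA on the Friis formula concrete, here is a short Python sketch. All of the numbers (cable loss, TMA gain, noise figures) are assumed, merely typical-looking values, not figures from this article:

```python
import math

def db_to_ratio(db):
    return 10 ** (db / 10)

def friis(stages):
    """System noise factor from a list of (noise_figure_dB, gain_dB) stages."""
    f_sys, cum_gain = 1.0, 1.0
    for nf_db, g_db in stages:
        f, g = db_to_ratio(nf_db), db_to_ratio(g_db)
        f_sys += (f - 1) / cum_gain   # each later stage is divided by the gain before it
        cum_gain *= g
    return f_sys

# A passive lossy cable has a noise figure equal to its loss and a gain equal to minus its loss.
cable = (3.0, -3.0)           # assumed 3 dB feeder loss
receiver = (2.0, 20.0)        # assumed base-station receiver: NF 2 dB, gain 20 dB
tma = (1.5, 12.0)             # assumed TMA: NF 1.5 dB, gain 12 dB
short_jumper = (0.3, -0.3)    # assumed short antenna-to-TMA jumper

without_tma = friis([cable, receiver])
with_tma = friis([short_jumper, tma, cable, receiver])

print(f"System NF without TMA: {10 * math.log10(without_tma):.2f} dB")   # about 5.0 dB
print(f"System NF with TMA:    {10 * math.log10(with_tma):.2f} dB")      # about 2.2 dB
```

With these assumed values the system noise figure drops from roughly 5 dB to roughly 2.2 dB: once a TMA with sufficient gain precedes the feeder, the cable loss no longer adds directly to the system noise figure.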
[ { "math_id": 0, "text": "F_1" }, { "math_id": 1, "text": "G_1" }, { "math_id": 2, "text": " System Noise Figure = F_1\n+ \\frac{ F_2 - 1 }{ G_1 } \n+ \\frac{ F_3 - 1 }{ G_1 \\times G_2 } \n+ \\cdots \n+ \\frac{ F_n - 1 }{ G_1 \\times G_2 \\times G_3 \\times \\cdots \\times G_{ n-1 } } " }, { "math_id": 3, "text": "F_2" }, { "math_id": 4, "text": "G_2" }, { "math_id": 5, "text": " System Noise Figure = F_1\n+ \\frac{ F_2 - 1 }{ G_1 }\n" }, { "math_id": 6, "text": "F_2 - 1" }, { "math_id": 7, "text": "F_3" }, { "math_id": 8, "text": "G_3" }, { "math_id": 9, "text": "G_1 \\approx 1" }, { "math_id": 10, "text": " System Noise Figure = F_1\n+ \\frac{ F_2 - 1 }{ 1 } \n+ \\frac{ F_3 - 1 }{ 1 \\times G_2 } \n+ \\frac{ F_4 - 1 }{ 1 \\times G_2 \\times G_3} \n" }, { "math_id": 11, "text": " Noise Figure (in dB) = 10 \\times \\log_{10} (Fs) " } ]
https://en.wikipedia.org/wiki?curid=11143727
11146362
Coprecipitation
Chemical process In chemistry, coprecipitation (CPT) or co-precipitation is the carrying down by a precipitate of substances normally soluble under the conditions employed. Analogously, in medicine, coprecipitation (referred to as immunoprecipitation) is specifically "an assay designed to purify a single antigen from a complex mixture using a specific antibody attached to a beaded support". Coprecipitation is an important topic in chemical analysis, where it can be undesirable, but can also be usefully exploited. In gravimetric analysis, which consists of precipitating the analyte and measuring its mass to determine its concentration or purity, coprecipitation is a problem because undesired impurities often coprecipitate with the analyte, resulting in excess mass. This problem can often be mitigated by "digestion" (waiting for the precipitate to equilibrate and form larger and purer particles) or by redissolving the sample and precipitating it again. On the other hand, in the analysis of trace elements, as is often the case in radiochemistry, coprecipitation is often the only way of separating an element. Since the trace element is too dilute (sometimes less than a part per trillion) to precipitate by conventional means, it is typically coprecipitated with a "carrier", a substance with a similar crystalline structure that can incorporate the desired element. An example is the separation of francium from other radioactive elements by coprecipitating it with caesium salts such as caesium perchlorate. Otto Hahn is credited with promoting the use of coprecipitation in radiochemistry. There are three main mechanisms of coprecipitation: inclusion, occlusion, and adsorption. An inclusion (incorporation in the crystal lattice) occurs when the impurity occupies a lattice site in the crystal structure of the carrier, resulting in a crystallographic defect; this can happen when the ionic radius and charge of the impurity are similar to those of the carrier. An adsorbate is an impurity that is weakly, or strongly, bound (adsorbed) to the surface of the precipitate. An occlusion occurs when an adsorbed impurity gets physically trapped inside the crystal as it grows. Besides its applications in chemical analysis and in radiochemistry, coprecipitation is also important to many environmental issues related to water resources, including acid mine drainage, radionuclide migration around waste repositories, toxic heavy metal transport at industrial and defense sites, metal concentrations in aquatic systems, and wastewater treatment technology. Coprecipitation is also used as a method of magnetic nanoparticle synthesis. Distribution between precipitate and solution. There are two models describing the distribution of the tracer compound between the two phases (the precipitate and the solution): the Doerner-Hoskins (logarithmic) law, formula_0 and the Berthelot-Nernst (homogeneous) law, formula_1 where: "a" and "b" are the initial concentrations of the tracer and carrier, respectively; "a" − "x" and "b" − "y" are the concentrations of tracer and carrier after separation; "x" and "y" are the amounts of the tracer and carrier on the precipitate; "D" and λ are the distribution coefficients. For "D" and λ greater than 1, the precipitate is enriched in the tracer. Depending on the co-precipitation system and conditions, either λ or "D" may be constant. The derivation of the Doerner-Hoskins law assumes that there is no mass exchange between the interior of the precipitating crystals and the solution. 
When this assumption is fulfilled, the content of the tracer in the crystal is non-uniform (the crystals are said to be heterogeneous). When the Berthelot-Nernst law applies, the concentration of the tracer in the interior of the crystal is uniform (and the crystals are said to be homogeneous). This is the case when diffusion in the interior is possible (as in liquids) or when the initial small crystals are allowed to recrystallize. Kinetic effects (such as the speed of crystallization and the presence of mixing) play a role. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
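For illustration (a sketch with arbitrary, assumed numbers rather than data from the article), the two distribution laws can be compared directly: given the fraction of carrier that has precipitated and a distribution coefficient, each law predicts the fraction of tracer carried down with the precipitate:

```python
def tracer_fraction_homogeneous(D, carrier_fraction):
    """Berthelot-Nernst law x/(a-x) = D*y/(b-y), solved for x/a."""
    q = carrier_fraction                 # q = y/b, the fraction of carrier precipitated
    r = D * q / (1 - q)
    return r / (1 + r)

def tracer_fraction_logarithmic(lam, carrier_fraction):
    """Doerner-Hoskins law ln(a/(a-x)) = lam*ln(b/(b-y)), solved for x/a."""
    q = carrier_fraction
    return 1 - (1 - q) ** lam

for q in (0.5, 0.9, 0.99):               # fraction of carrier precipitated
    print(q,
          round(tracer_fraction_homogeneous(10, q), 3),
          round(tracer_fraction_logarithmic(10, q), 3))
```

With a coefficient of 10 (greater than 1), both laws predict a precipitate enriched in the tracer, but the logarithmic law predicts a somewhat higher tracer uptake at intermediate stages of the precipitation.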
[ { "math_id": 0, "text": "\\ln{a \\over {a-x}} = \\lambda \\ln{ b \\over {b-y}}" }, { "math_id": 1, "text": "{x \\over {a-x}} = D {y \\over {b-y}}" } ]
https://en.wikipedia.org/wiki?curid=11146362
11146921
Acousto-optic deflector
Device that deflects or redirects a laser beam An acousto-optic deflector (AOD) is a device that uses the interaction between sound waves and light waves to deflect or redirect a laser beam. AODs are essentially the same as acousto-optic modulators (AOMs). In both an AOM and an AOD, the amplitude and frequency of different orders are adjusted as light is diffracted. Operation. In the operation of an acousto-optic deflector, the power driving the acoustic transducer is kept on, at a constant level, while the acoustic frequency is varied to deflect the beam to different angular positions. The acousto-optic deflector makes use of the dependence of the diffraction angle on the acoustic frequency: the change in deflection angle formula_0 produced by a change in acoustic frequency formula_1 is given by formula_2 where formula_3 is the optical wavelength and formula_4 is the velocity of the acoustic wave. Impact. AOM technology has made Bose–Einstein condensation practical, for which the 2001 Nobel Prize in Physics was awarded to Eric A. Cornell, Wolfgang Ketterle and Carl E. Wieman. Another application of acousto-optic deflection is optical trapping of small molecules. References. &lt;templatestyles src="Reflist/styles.css" /&gt;
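Plugging representative numbers into the relation gives a feel for the scan range; the wavelength, acoustic velocity and bandwidth below are assumed, illustrative values (roughly those of a HeNe laser and a slow-shear TeO2 deflector), not specifications from the article:

```python
wavelength = 633e-9    # optical wavelength in metres (assumed HeNe laser)
v_acoustic = 650.0     # acoustic velocity in m/s (assumed slow-shear-mode TeO2)
delta_f = 50e6         # swept acoustic bandwidth in Hz (assumed)

delta_theta = (wavelength / v_acoustic) * delta_f      # change in diffraction angle, in radians
print(f"scan range: {delta_theta * 1e3:.1f} mrad")     # about 48.7 mrad, roughly 2.8 degrees
```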
[ { "math_id": 0, "text": "\\Delta \\theta_d" }, { "math_id": 1, "text": "\\Delta f" }, { "math_id": 2, "text": " (12) \\ \\Delta \\theta_d = \\frac{\\lambda}{\\nu}\\Delta f" }, { "math_id": 3, "text": "\\lambda" }, { "math_id": 4, "text": "\\nu" } ]
https://en.wikipedia.org/wiki?curid=11146921
11147109
Kempner function
In number theory, the Kempner function formula_0 is defined for a given positive integer formula_1 to be the smallest number formula_2 such that formula_1 divides the factorial formula_3. For example, the number formula_4 does not divide formula_5, formula_6, or formula_7, but does divide formula_8, so formula_9. This function has a highly inconsistent growth rate: it grows linearly on the prime numbers but only grows sublogarithmically at the factorial numbers. History. This function was first considered by François Édouard Anatole Lucas in 1883, followed by Joseph Jean Baptiste Neuberg in 1887. In 1918, A. J. Kempner gave the first correct algorithm for computing formula_0. The Kempner function is also sometimes called the Smarandache function following Florentin Smarandache's rediscovery of the function in 1980. Properties. Since formula_1 divides formula_10, formula_0 is always at most formula_1. A number formula_1 greater than 4 is a prime number if and only if formula_11. That is, the numbers formula_1 for which formula_0 is as large as possible relative to formula_1 are the primes. In the other direction, the numbers for which formula_0 is as small as possible are the factorials: formula_12, for all formula_13. formula_0 is the smallest possible degree of a monic polynomial with integer coefficients, whose values over the integers are all divisible by formula_1. For instance, the fact that formula_14 means that there is a cubic polynomial whose values are all zero modulo 6, for instance the polynomial formula_15 but that all quadratic or linear polynomials (with leading coefficient one) are nonzero modulo 6 at some integers. In one of the advanced problems in "The American Mathematical Monthly", set in 1991 and solved in 1994, Paul Erdős pointed out that the function formula_0 coincides with the largest prime factor of formula_1 for "almost all" formula_1 (in the sense that the asymptotic density of the set of exceptions is zero). Computational complexity. The Kempner function formula_0 of an arbitrary number formula_1 is the maximum, over the prime powers formula_16 dividing formula_1, of formula_17. When formula_1 is itself a prime power formula_16, its Kempner function may be found in polynomial time by sequentially scanning the multiples of formula_18 until finding the first one whose factorial contains enough multiples of formula_18. The same algorithm can be extended to any formula_1 whose prime factorization is already known, by applying it separately to each prime power in the factorization and choosing the one that leads to the largest value. For a number of the form formula_19, where formula_18 is prime and formula_20 is less than formula_18, the Kempner function of formula_1 is formula_18. It follows from this that computing the Kempner function of a semiprime (a product of two primes) is computationally equivalent to finding its prime factorization, believed to be a difficult problem. More generally, whenever formula_1 is a composite number, the greatest common divisor of formula_0 and formula_1 will necessarily be a nontrivial divisor of formula_1, allowing formula_1 to be factored by repeated evaluations of the Kempner function. Therefore, computing the Kempner function can in general be no easier than factoring composite numbers. References and notes. &lt;templatestyles src="Reflist/styles.css" /&gt; "This article incorporates material from Smarandache function on PlanetMath, which is licensed under the ."
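A direct translation of the definition into code (an illustrative sketch, not part of the article) scans s = 1, 2, 3, ... until the running factorial is divisible by n; it reproduces the values quoted above, including S(8) = 4, S(p) = p for primes, and S(k!) = k:

```python
from math import factorial

def kempner(n):
    """Smallest s such that n divides s!, computed by brute force from the definition."""
    s, f = 1, 1
    while f % n != 0:
        s += 1
        f *= s
    return s

print(kempner(8))                           # 4
print([kempner(n) for n in range(1, 13)])   # [1, 2, 3, 4, 5, 3, 7, 4, 6, 5, 11, 4]
print(kempner(7), kempner(11))              # primes give S(p) = p: 7 11
print(kempner(factorial(6)))                # factorials give S(k!) = k: 6
```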
[ { "math_id": 0, "text": "S(n)" }, { "math_id": 1, "text": "n" }, { "math_id": 2, "text": "s" }, { "math_id": 3, "text": "s!" }, { "math_id": 4, "text": "8" }, { "math_id": 5, "text": "1!" }, { "math_id": 6, "text": "2!" }, { "math_id": 7, "text": "3!" }, { "math_id": 8, "text": "4!" }, { "math_id": 9, "text": "S(8)=4" }, { "math_id": 10, "text": "n!" }, { "math_id": 11, "text": "S(n)=n" }, { "math_id": 12, "text": "S(k!)=k" }, { "math_id": 13, "text": "k\\ge 1" }, { "math_id": 14, "text": "S(6)=3" }, { "math_id": 15, "text": "x(x-1)(x-2)=x^3-3x^2+2x," }, { "math_id": 16, "text": "p^e" }, { "math_id": 17, "text": "S(p^e)" }, { "math_id": 18, "text": "p" }, { "math_id": 19, "text": "n=px" }, { "math_id": 20, "text": "x" } ]
https://en.wikipedia.org/wiki?curid=11147109
11148549
Abstract family of languages
In computer science, in particular in the field of formal language theory, an abstract family of languages is an abstract mathematical notion generalizing characteristics common to the regular languages, the context-free languages and the recursively enumerable languages, and other families of formal languages studied in the scientific literature. Formal definitions. A "formal language" is a set L for which there exists a finite set of abstract symbols Σ such that formula_0, where * is the Kleene star operation. A "family of languages" is an ordered pair formula_1, where Σ is an infinite set of symbols, Λ is a set of formal languages, and for each language L in Λ there exists a finite subset formula_2 such that formula_3. A "trio" is a family of languages closed under homomorphisms that do not introduce the empty word, inverse homomorphisms, and intersections with a regular language. A "full trio," also called a "cone," is a trio closed under arbitrary homomorphism. A "(full) semi-AFL" is a (full) trio closed under union. A "(full) AFL" is a "(full) semi-AFL" closed under concatenation and the Kleene plus. Some families of languages. The following are some simple results from the study of abstract families of languages. Within the Chomsky hierarchy, the regular languages, the context-free languages, and the recursively enumerable languages are all full AFLs. However, the context-sensitive languages and the recursive languages are AFLs, but not full AFLs because they are not closed under arbitrary homomorphisms. The family of regular languages is contained within any cone (full trio). Other categories of abstract families are identifiable by closure under other operations such as shuffle, reversal, or substitution. Origins. Seymour Ginsburg of the University of Southern California and Sheila Greibach of Harvard University presented the first AFL theory paper at the IEEE Eighth Annual Symposium on Switching and Automata Theory in 1967.
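Two of the trio closure properties can be made concrete with deterministic finite automata. The sketch below is only an illustration (the DFA encoding, the example languages, and the homomorphism are assumptions chosen for the demonstration, not constructions from the article): it builds a DFA for the intersection of two regular languages via the product construction, and a DFA for the inverse image of a regular language under a string homomorphism by reading each symbol's image through the original automaton:

```python
def run(dfa, word):
    """A DFA is (delta, start, accepting), with delta mapping (state, symbol) -> state."""
    delta, state, accepting = dfa
    for ch in word:
        state = delta[(state, ch)]
    return state in accepting

def intersect(d1, d2):
    """Product construction: the result accepts exactly the words accepted by both DFAs."""
    (t1, s1, a1), (t2, s2, a2) = d1, d2
    symbols = {ch for (_, ch) in t1}
    states1 = {p for (p, _) in t1} | set(t1.values())
    states2 = {q for (q, _) in t2} | set(t2.values())
    delta = {((p, q), ch): (t1[(p, ch)], t2[(q, ch)])
             for p in states1 for q in states2 for ch in symbols}
    return delta, (s1, s2), {(p, q) for p in a1 for q in a2}

def inverse_hom(dfa, h):
    """DFA for h^-1(L): on each input symbol a, read the whole string h(a) through the original DFA."""
    t, s, acc = dfa
    states = {p for (p, _) in t} | set(t.values())
    delta = {}
    for p in states:
        for a, image in h.items():
            q = p
            for ch in image:
                q = t[(q, ch)]
            delta[(p, a)] = q
    return delta, s, acc

# L1: binary strings with an even number of 1s;  L2: binary strings ending in 0
even_ones = ({('E', '0'): 'E', ('E', '1'): 'O', ('O', '0'): 'O', ('O', '1'): 'E'}, 'E', {'E'})
ends_in_0 = ({('n', '0'): 'y', ('n', '1'): 'n', ('y', '0'): 'y', ('y', '1'): 'n'}, 'n', {'y'})

both = intersect(even_ones, ends_in_0)
print(run(both, '1100'), run(both, '10'))        # True False

# h(a) = '1', h(b) = '10': the inverse image of L1 is the set of words over {a, b} of even length
preimage = inverse_hom(even_ones, {'a': '1', 'b': '10'})
print(run(preimage, 'ab'), run(preimage, 'a'))   # True False
```

Closure under the remaining trio operation, homomorphisms that do not introduce the empty word, can be demonstrated along similar lines by replacing each transition label with the image of that symbol, which in general yields a nondeterministic automaton.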
[ { "math_id": 0, "text": "L \\subseteq\\Sigma^*" }, { "math_id": 1, "text": "(\\Sigma,\\Lambda)" }, { "math_id": 2, "text": "\\Sigma_1 \\subseteq \\Sigma" }, { "math_id": 3, "text": "L \\subseteq \\Sigma_1^*" } ]
https://en.wikipedia.org/wiki?curid=11148549