id | title | text | formulas | url
---|---|---|---|---|
59646351
|
Fine-grained reduction
|
In computational complexity theory, a fine-grained reduction is a transformation from one computational problem to another, used to relate the difficulty of improving the time bounds for the two problems.
Intuitively, it provides a method for solving one problem efficiently by using the solution to the other problem as a subroutine.
If problem formula_0 can be solved in time formula_1 and problem formula_2 can be solved in time formula_3, then the existence of an formula_4-reduction from problem formula_0 to problem formula_2 implies that any significant speedup for problem formula_2 would also lead to a speedup for problem formula_0.
Definition.
Let formula_0 and formula_2 be computational problems, specified as the desired output for each possible input.
Let formula_5 and formula_6 both be time-constructible functions that take an integer argument formula_7 and produce an integer result. Usually, formula_5 and formula_6 are the time bounds for known or naive algorithms for the two problems, and often they are monomials such as formula_8.
Then formula_0 is said to be formula_4-reducible to formula_2
if, for every real number formula_9, there exists a real number formula_10 and an algorithm that solves instances of problem formula_0 by transforming them into a sequence of instances of problem formula_2, taking time formula_11 for the transformation on instances of size formula_7, and producing a sequence of instances whose sizes formula_12 satisfy formula_13.
An formula_4-reduction is then given by the mapping from formula_14 to the pair consisting of this algorithm and formula_15.
Speedup implication.
Suppose formula_0 is formula_4-reducible to formula_2, and there exists formula_9 such that formula_2 can be solved in time formula_16.
Then, with these assumptions, there also exists formula_10 such that formula_0 can be solved in time formula_11. Namely, let formula_15 be the value given by the formula_4-reduction, and solve formula_0 by applying the transformation of the reduction and using the fast algorithm for formula_2 for each resulting subproblem.
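In symbols, and only as a sketch, the running time of this combined algorithm can be bounded using the size condition formula_13 from the definition:
O\bigl(a(n)^{1-\delta}\bigr) + O\Bigl(\sum_i b(n_i)^{1-\epsilon}\Bigr) \le O\bigl(a(n)^{1-\delta}\bigr) + O\bigl(a(n)^{1-\delta}\bigr) = O\bigl(a(n)^{1-\delta}\bigr).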
Equivalently, if formula_0 cannot be solved in time significantly faster than formula_1, then formula_2 cannot be solved in time significantly faster than formula_3.
History.
Fine-grained reductions were defined, in the special case that formula_5 and formula_6 are equal monomials, by Virginia Vassilevska Williams and Ryan Williams in 2010.
They also showed the existence of formula_17-reductions between several problems including all-pairs shortest paths, finding the second-shortest path between two given vertices in a weighted graph, finding negative-weight triangles in weighted graphs, and testing whether a given distance matrix describes a metric space. According to their results, either all of these problems have time bounds with exponents less than three, or none of them do.
The term "fine-grained reduction" comes from later work by Virginia Vassilevska Williams in an invited presentation at the 10th International Symposium on Parameterized and Exact Computation.
Although the original definition of fine-grained reductions involved deterministic algorithms, the corresponding concepts for randomized algorithms and nondeterministic algorithms have also been considered.
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "a(n)"
},
{
"math_id": 2,
"text": "B"
},
{
"math_id": 3,
"text": "b(n)"
},
{
"math_id": 4,
"text": "(a,b)"
},
{
"math_id": 5,
"text": "a"
},
{
"math_id": 6,
"text": "b"
},
{
"math_id": 7,
"text": "n"
},
{
"math_id": 8,
"text": "n^2"
},
{
"math_id": 9,
"text": "\\epsilon>0"
},
{
"math_id": 10,
"text": "\\delta>0"
},
{
"math_id": 11,
"text": "O\\bigl(a(n)^{1-\\delta}\\bigr)"
},
{
"math_id": 12,
"text": "n_i"
},
{
"math_id": 13,
"text": "\\sum_i b(n_i)^{1-\\epsilon}<a(n)^{1-\\delta}"
},
{
"math_id": 14,
"text": "\\epsilon"
},
{
"math_id": 15,
"text": "\\delta"
},
{
"math_id": 16,
"text": "O\\bigl(b(n)^{1-\\epsilon}\\bigr)"
},
{
"math_id": 17,
"text": "(n^3,n^3)"
}
] |
https://en.wikipedia.org/wiki?curid=59646351
|
5965076
|
Floyd Williams
|
American mathematician
Floyd Leroy Williams (born September 20, 1939) is a North American mathematician well known for his work in Lie theory and, most recently, mathematical physics. In addition to Lie theory, his research interests are in homological algebra and the mathematics of quantum mechanics. He received his B.S. (1962) in Mathematics from Lincoln University of Missouri, and later his M.S. (1965) and Ph.D. (1972) from Washington University in St. Louis. Williams was appointed professor of mathematics at the University of Massachusetts Amherst in 1984, and has been professor emeritus since 2005. Williams' accomplishments earned him recognition by Mathematically Gifted & Black as a Black History Month 2019 Honoree.
Biographical Sketch.
Floyd Williams was born on September 20, 1939, and lived in Kansas City, Missouri. He was raised in extreme poverty. His mother told him not to complain about their situation, but rather to have faith in God and work hard. Her advice was taken, and it worked. He eventually was ordained in addition to being a mathematician.
However, it was music, not mathematics, that appealed to him through high school. "In fact," he admits, "mathematics was the only course in which I did not do well." Williams had not thought of going to college until his last week in high school when he was offered a music scholarship at Lincoln University of Missouri in Jefferson City, Missouri.
It was in his sophomore year that he became intrigued by the theory of relativity, which turned out to be his main motivation for studying mathematics. In 1972 he completed his Ph.D. at Washington University, where his thesis was in the field of Lie theory. He was an instructor and lecturer at MIT from 1972 to 1975, before moving to the University of Massachusetts Amherst as an assistant professor in 1975. In 1983 he received an MRI grant to continue researching in this field, ushering him into the mainstream of mathematics.
As an African-American in a field that has had little minority representation, Williams has felt the sting of discrimination during his career. However, he has been a motivation and role model for many young minorities, encouraging them to enter science and engineering. Williams has helped to set up programs that allow pre-college students and undergraduates to meet and talk with mathematicians, scientists and engineers, most notably at a summer camp run at MIT. "All that many of these youngsters see is different courses," he says, "but they want to know what mathematicians do from 8 am to 5 pm. Once minorities commit to graduate work in science or engineering," he continues, "they need extra help and support for what, for many, is the foreign environment of graduate school. Such programs exist at few universities, but we need more of them."
In 2012 he became a fellow of the American Mathematical Society.
Mathematics.
Williams' recent contribution to quantum mechanics has been in the area of the Nikiforov–Uvarov theory of the generalized hypergeometric differential equation, used to solve the Schrödinger equation and to obtain the quantization of energies from a single unified point of view. This theory is developed and is also used to give a uniform approach to the theory of special functions, further connecting modern pure mathematics with physics.
Bibliography.
He has written over 88 papers, including four books, and his work has been cited 157 times by more than 150 authors.
|
[
{
"math_id": 0,
"text": "L^2(\\Gamma \\backslash G)"
}
] |
https://en.wikipedia.org/wiki?curid=5965076
|
59652
|
Merkle–Hellman knapsack cryptosystem
|
The Merkle–Hellman knapsack cryptosystem was one of the earliest public key cryptosystems. It was published by Ralph Merkle and Martin Hellman in 1978. A polynomial time attack was published by Adi Shamir in 1984. As a result, the cryptosystem is now considered insecure.
History.
The concept of public key cryptography was introduced by Whitfield Diffie and Martin Hellman in 1976. At that time they proposed the general concept of a "trap-door one-way function", a function whose inverse is computationally infeasible to calculate without some secret "trap-door information"; but they had not yet found a practical example of such a function. Several specific public-key cryptosystems were then proposed by other researchers over the next few years, such as RSA in 1977 and Merkle-Hellman in 1978.
Description.
Merkle–Hellman is a public key cryptosystem, meaning that two keys are used, a public key for encryption and a private key for decryption. It is based on the subset sum problem (a special case of the knapsack problem). The problem is as follows: given a set of integers formula_0 and an integer formula_1, find a subset of formula_0 which sums to formula_1. In general, this problem is known to be NP-complete. However, if formula_0 is superincreasing, meaning that each element of the set is greater than the sum of all the elements smaller than it, the problem is "easy" and solvable in polynomial time with a simple greedy algorithm.
In Merkle–Hellman, decrypting a message requires solving an apparently "hard" knapsack problem. The private key contains a superincreasing list of numbers formula_2, and the public key contains a non-superincreasing list of numbers formula_3, which is actually a "disguised" version of formula_2. The private key also contains some "trapdoor" information that can be used to transform a hard knapsack problem using formula_3 into an easy knapsack problem using formula_2.
Unlike some other public key cryptosystems such as RSA, the two keys in Merkle-Hellman are not interchangeable; the private key cannot be used for encryption. Thus Merkle-Hellman is not directly usable for authentication by cryptographic signing, although Shamir published a variant that can be used for signing.
Key generation.
1. Choose a block size formula_4. Integers up to formula_4 bits in length can be encrypted with this key.
2. Choose a random superincreasing sequence of formula_4 positive integers
formula_5
The superincreasing requirement means that formula_6, for formula_7.
3. Choose a random integer formula_8 such that
formula_9
4. Choose a random integer formula_10 such that formula_11 (that is, formula_10 and formula_8 are coprime).
5. Calculate the sequence
formula_12
where formula_13.
The public key is formula_3 and the private key is formula_14.
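The key generation steps above can be illustrated with a short Python sketch. The function name, the small random increments, and the use of Python's random module are illustrative choices, not part of the original scheme.

```python
import random
from math import gcd

def generate_keys(n):
    """Generate an illustrative Merkle-Hellman key pair for n-bit messages."""
    # Steps 1-2: build a random superincreasing sequence W of n positive integers.
    w, total = [], 0
    for _ in range(n):
        w_next = total + random.randint(1, 10)   # strictly greater than the sum so far
        w.append(w_next)
        total += w_next
    # Step 3: choose q greater than the sum of all elements of W.
    q = total + random.randint(1, 10)
    # Step 4: choose r coprime to q.
    r = random.randrange(2, q)
    while gcd(r, q) != 1:
        r = random.randrange(2, q)
    # Step 5: disguise W as B, with b_i = r * w_i mod q.
    b = [(r * w_i) % q for w_i in w]
    return b, (w, q, r)   # public key B, private key (W, q, r)
```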
Encryption.
Let formula_15 be an formula_4-bit message consisting of bits formula_16, with formula_17 the highest order bit. Select each formula_18 for which formula_19 is nonzero, and add them together. Equivalently, calculate
formula_20.
The ciphertext is formula_1.
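A minimal encryption sketch to accompany the description above, continuing the illustrative Python; it assumes the message is given as an integer smaller than 2^n.

```python
def encrypt(m, b):
    """Encrypt an n-bit integer m with the public key b = (b_1, ..., b_n)."""
    n = len(b)
    # Extract the bits m_1 ... m_n, with m_1 the highest-order bit.
    bits = [(m >> (n - 1 - i)) & 1 for i in range(n)]
    # The ciphertext is the sum of the b_i selected by the nonzero bits.
    return sum(m_i * b_i for m_i, b_i in zip(bits, b))
```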
Decryption.
To decrypt a ciphertext formula_1, we must find the subset of formula_3 which sums to formula_1. We do this by transforming the problem into one of finding a subset of formula_2. That problem can be solved in polynomial time since formula_2 is superincreasing.
1. Calculate the modular inverse of formula_10 modulo formula_8 using the Extended Euclidean algorithm. The inverse will exist since formula_10 is coprime to formula_8.
formula_21
The computation of formula_22 is independent of the message, and can be done just once when the private key is generated.
2. Calculate
formula_23
3. Solve the subset sum problem for formula_24 using the superincreasing sequence formula_2, by the simple greedy algorithm described below. Let formula_25 be the resulting list of indexes of the elements of formula_2 which sum to formula_24. (That is, formula_26.)
4. Construct the message formula_15 with a 1 in each formula_27 bit position and a 0 in all other bit positions:
formula_28
Solving the subset sum problem.
This simple greedy algorithm finds the subset of a superincreasing sequence formula_2 which sums to formula_24, in polynomial time:
1. Initialize formula_29 to an empty list.
2. Find the largest element in formula_2 which is less than or equal to formula_24, say formula_30.
3. Subtract: formula_31.
4. Append formula_32 to the list formula_29.
5. If formula_24 is greater than zero, return to step 2.
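The decryption procedure and the greedy subset-sum solver above can be combined into one short Python sketch (the function name and the use of Python's built-in modular inverse are illustrative choices):

```python
def decrypt(c, private_key):
    """Decrypt a ciphertext c with the private key (W, q, r)."""
    w, q, r = private_key
    r_inv = pow(r, -1, q)          # modular inverse of r modulo q (Python 3.8+)
    c2 = (c * r_inv) % q           # transform into the easy, superincreasing knapsack
    n = len(w)
    bits = [0] * n
    # Greedy subset sum: repeatedly take the largest element not exceeding the remainder.
    for i in range(n - 1, -1, -1):
        if w[i] <= c2:
            bits[i] = 1
            c2 -= w[i]
    assert c2 == 0, "ciphertext does not correspond to a valid subset"
    # Reassemble the message; bits[0] corresponds to the highest-order bit m_1.
    return sum(bit << (n - 1 - i) for i, bit in enumerate(bits))
```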
Example.
Key generation.
Create a key to encrypt 8-bit numbers by creating a random superincreasing sequence of 8 values:
formula_33
The sum of these is 706, so select a larger value for formula_8:
formula_34.
Choose formula_10 to be coprime to formula_8:
formula_35.
Construct the public key formula_3 by multiplying each element in formula_2 by formula_10 modulo formula_8:
formula_36
Hence formula_37.
Encryption.
Let the 8-bit message be formula_38. We multiply each bit by the corresponding number in formula_3 and add the results:
0 * 295
+ 1 * 592
+ 1 * 301
+ 0 * 14
+ 0 * 28
+ 0 * 353
+ 0 * 120
+ 1 * 236
= 1129
The ciphertext formula_1 is 1129.
Decryption.
To decrypt 1129, first use the Extended Euclidean Algorithm to find the modular inverse of formula_10 mod formula_8:
formula_39.
Compute formula_40.
Use the greedy algorithm to decompose 372 into a sum of formula_41 values:
formula_42
Thus formula_43, and the list of indexes is formula_44. The message can now be computed as
formula_45.
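Using the illustrative functions sketched in the earlier sections, the worked example can be reproduced directly (the keys below are the ones from the example, not freshly generated ones):

```python
W = [2, 7, 11, 21, 42, 89, 180, 354]
q, r = 881, 588
B = [(r * w_i) % q for w_i in W]   # [295, 592, 301, 14, 28, 353, 120, 236]
c = encrypt(97, B)                 # 1129
m = decrypt(c, (W, q, r))          # 97
```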
Cryptanalysis.
In 1984 Adi Shamir published an attack on the Merkle-Hellman cryptosystem which can decrypt encrypted messages in polynomial time without using the private key. The attack analyzes the public key formula_46 and searches for a pair of numbers formula_47 and formula_15 such that formula_48 is a superincreasing sequence. The formula_49 pair found by the attack may not be equal to formula_50 in the private key, but like that pair it can be used to transform a hard knapsack problem using formula_3 into an easy problem using a superincreasing sequence. The attack operates solely on the public key; no access to encrypted messages is necessary.
Shamir's attack on the Merkle-Hellman cryptosystem works in polynomial time even if the numbers in the public key are randomly shuffled, a step which is usually not included in the description of the cryptosystem, but can be helpful against some more primitive attacks.
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "c"
},
{
"math_id": 2,
"text": "W"
},
{
"math_id": 3,
"text": "B"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "W = ( w_1, w_2, \\dots, w_n )"
},
{
"math_id": 6,
"text": "w_k > \\sum_{i = 1}^{k-1} w_i"
},
{
"math_id": 7,
"text": "1 < k \\le n"
},
{
"math_id": 8,
"text": "q"
},
{
"math_id": 9,
"text": "q > \\sum_{i = 1}^n w_i"
},
{
"math_id": 10,
"text": "r"
},
{
"math_id": 11,
"text": "\\gcd(r,q) = 1"
},
{
"math_id": 12,
"text": "B = ( b_1, b_2, \\dots, b_n )"
},
{
"math_id": 13,
"text": "b_i = r w_i \\bmod q"
},
{
"math_id": 14,
"text": "(W,q,r)"
},
{
"math_id": 15,
"text": "m"
},
{
"math_id": 16,
"text": "m_1 m_2 \\dots m_n"
},
{
"math_id": 17,
"text": "m_1"
},
{
"math_id": 18,
"text": "b_i"
},
{
"math_id": 19,
"text": "m_i"
},
{
"math_id": 20,
"text": "c = \\sum_{i = 1}^n m_i b_i"
},
{
"math_id": 21,
"text": "r' := r^{-1} \\pmod q"
},
{
"math_id": 22,
"text": "r'"
},
{
"math_id": 23,
"text": "c' := c r' \\bmod q"
},
{
"math_id": 24,
"text": "c'"
},
{
"math_id": 25,
"text": "X = (x_1, x_2, \\dots, x_k)"
},
{
"math_id": 26,
"text": "c' = \\sum_{i=1}^k w_{x_i}"
},
{
"math_id": 27,
"text": "x_i"
},
{
"math_id": 28,
"text": "m = \\sum_{i=1}^k 2^{n-x_i}"
},
{
"math_id": 29,
"text": "X"
},
{
"math_id": 30,
"text": "w_j"
},
{
"math_id": 31,
"text": "c' := c' - w_j"
},
{
"math_id": 32,
"text": "j"
},
{
"math_id": 33,
"text": "W = ( 2, 7, 11, 21, 42, 89, 180, 354 )"
},
{
"math_id": 34,
"text": "q = 881"
},
{
"math_id": 35,
"text": "r = 588"
},
{
"math_id": 36,
"text": "\\begin{align}\n&(2 * 588) \\bmod 881 = 295 \\\\\n&(7 * 588) \\bmod 881 = 592 \\\\\n&(11 * 588) \\bmod 881 = 301 \\\\\n&(21 * 588) \\bmod 881 = 14 \\\\\n&(42 * 588) \\bmod 881 = 28 \\\\\n&(89 * 588) \\bmod 881 = 353 \\\\\n&(180 * 588) \\bmod 881 = 120 \\\\\n&(354 * 588) \\bmod 881 = 236\n\\end{align}"
},
{
"math_id": 37,
"text": "B = ( 295, 592, 301, 14, 28, 353, 120, 236 )"
},
{
"math_id": 38,
"text": "m = 97 = 01100001_2"
},
{
"math_id": 39,
"text": "r' = r^{-1} \\bmod q = 588^{-1} \\bmod 881 = 442"
},
{
"math_id": 40,
"text": "c' = c r' \\bmod q = 1129*442 \\bmod 881 = 372"
},
{
"math_id": 41,
"text": "w_i"
},
{
"math_id": 42,
"text": "\\begin{align}\nc' &= 372 \\\\\n& w_8 = 354 \\le 372 \\\\\nc' &= 372-354 = 18 \\\\\n& w_3 = 11 \\le 18 \\\\\nc' &= 18-11 = 7 \\\\\n& w_2 = 7 \\le 7 \\\\\nc' &= 7-7 = 0\n\\end{align}"
},
{
"math_id": 43,
"text": "372 = 354 + 11 + 7 = w_8 + w_3 + w_2"
},
{
"math_id": 44,
"text": "X = (8,3,2)"
},
{
"math_id": 45,
"text": "m = \\sum_{i=1}^3 2^{n-x_i} = 2^{8-8} + 2^{8-3} + 2^{8-2} = 1 + 32 + 64 = 97"
},
{
"math_id": 46,
"text": "B = (b_1, b_2, \\dots, b_n)"
},
{
"math_id": 47,
"text": "u"
},
{
"math_id": 48,
"text": "(u b_i \\bmod m)"
},
{
"math_id": 49,
"text": "(u,m)"
},
{
"math_id": 50,
"text": "(r',q)"
}
] |
https://en.wikipedia.org/wiki?curid=59652
|
59652617
|
K-outerplanar graph
|
In graph theory, a "k"-outerplanar graph is a planar graph that has a planar embedding in which the vertices belong to at most formula_0 concentric layers. The outerplanarity index of a planar graph is the minimum value of formula_0 for which it is formula_0-outerplanar.
Definition.
An outerplanar graph (or 1-outerplanar graph) has all of its vertices on the unbounded (outside) face of the graph. A 2-outerplanar graph is a planar graph with the property that, when the vertices on the unbounded face are removed, the remaining vertices all lie on the newly formed unbounded face. And so on.
More formally, a graph is formula_0-outerplanar if it has a planar embedding such that, for every vertex, there is an alternating sequence of at most formula_0 faces and formula_0 vertices of the embedding, starting with the unbounded face and ending with the vertex, in which each consecutive face and vertex are incident to each other.
Properties and applications.
The formula_0-outerplanar graphs have treewidth at most formula_1. However, some bounded-treewidth planar graphs such as the nested triangles graph may be formula_0-outerplanar only for very large formula_0, linear in the number of vertices.
Baker's technique covers a planar graph with a constant number of formula_0-outerplanar graphs and uses their low treewidth in order to quickly approximate several hard graph optimization problems.
In connection with the GNRS conjecture on metric embedding of minor-closed graph families, the formula_0-outerplanar graphs are one of the most general classes of graphs for which the conjecture has been proved.
A conjectured converse of Courcelle's theorem, according to which every graph property recognizable on graphs of bounded treewidth by finite state tree automata is definable in the monadic second-order logic of graphs, has been proven for the formula_0-outerplanar graphs.
Recognition.
The smallest value of formula_0 for which a given graph is formula_0-outerplanar (its outerplanarity index) can be computed in quadratic time.
|
[
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "3k-1"
}
] |
https://en.wikipedia.org/wiki?curid=59652617
|
59652628
|
GNRS conjecture
|
<templatestyles src="Unsolved/styles.css" />
Unsolved problem in mathematics:
Do minor-closed graph families have formula_0 embeddings with bounded distortion?
In theoretical computer science and metric geometry, the GNRS conjecture connects the theory of graph minors, the stretch factor of embeddings, and the approximation ratio of multi-commodity flow problems. It is named after Anupam Gupta, Ilan Newman, Yuri Rabinovich, and Alistair Sinclair, who formulated it in 2004.
Formulation.
One formulation of the conjecture involves embeddings of the shortest path distances of weighted undirected graphs into formula_0 spaces, real vector spaces in which the distance between two vectors is the sum of their coordinate differences. If an embedding maps all pairs of vertices with distance formula_1 to pairs of vectors with distance in the range formula_2 then its stretch factor or distortion is the ratio formula_3; an isometry has stretch factor one, and all other embeddings have greater stretch factor.
The graphs that have an embedding with at most a given distortion are closed under graph minor operations, operations that delete vertices or edges from a graph or contract some of its edges. The GNRS conjecture states that, conversely, every minor-closed family of graphs, other than the family of all graphs, can be embedded into an formula_0 space with bounded distortion. That is, the distortion of graphs in the family is bounded by a constant that depends on the family but not on the individual graphs. For instance, the planar graphs are closed under minors. Therefore, it would follow from the GNRS conjecture that the planar graphs have bounded distortion.
An alternative formulation involves analogues of the max-flow min-cut theorem for undirected multi-commodity flow problems. The ratio of the maximum flow to the minimum cut, in such problems, is known as the "flow-cut gap". The largest flow-cut gap that a flow problem can have on a given graph equals the distortion of the optimal formula_0 embedding of the graph. Therefore, the GNRS conjecture can be rephrased as stating that the minor-closed families of graphs have bounded flow-cut gap.
Related results.
Arbitrary formula_4-vertex graphs (indeed, arbitrary formula_4-point metric spaces) have formula_0 embeddings with distortion formula_5. Some graphs have logarithmic flow-cut gap, and in particular this is true for a multicommodity flow with every pair of vertices having equal demand on a bounded-degree expander graph. Therefore, this logarithmic bound on the distortion of arbitrary graphs is tight. Planar graphs can be embedded with smaller distortion, formula_6.
Although the GNRS conjecture remains unsolved, it has been proven for some minor-closed graph families that bounded-distortion embeddings exist. These include the series–parallel graphs and the graphs of bounded circuit rank, the graphs of bounded pathwidth, the 2-clique-sums of graphs of bounded size, and the formula_7-outerplanar graphs.
In contrast to the behavior of metric embeddings into formula_0 spaces, every finite metric space has embeddings into formula_8 with stretch arbitrarily close to one by the Johnson–Lindenstrauss lemma, and into formula_9 spaces with stretch exactly one by the tight span construction.
|
[
{
"math_id": 0,
"text": "\\ell_1"
},
{
"math_id": 1,
"text": "d"
},
{
"math_id": 2,
"text": "[cd,Cd]"
},
{
"math_id": 3,
"text": "C/c"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "O(\\log n)"
},
{
"math_id": 6,
"text": "O(\\sqrt{\\log n})"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "\\ell_2"
},
{
"math_id": 9,
"text": "\\ell_\\infty"
}
] |
https://en.wikipedia.org/wiki?curid=59652628
|
59654517
|
Graph cut optimization
|
Combinatorial optimization method for a family of functions of discrete variables
Graph cut optimization is a combinatorial optimization method applicable to a family of functions of discrete variables, named after the concept of cut in the theory of flow networks. Thanks to the max-flow min-cut theorem, determining the minimum cut over a graph representing a flow network is equivalent to computing the maximum flow over the network. Given a pseudo-Boolean function formula_0, if it is possible to construct a flow network with positive weights such that
each possible cut formula_1 of the network corresponds to an assignment of values formula_2 to the variables (and vice versa), and the cost of the cut formula_1 equals formula_3 up to an additive constant,
then it is possible to find the global optimum of formula_0 in polynomial time by computing a minimum cut of the graph. The mapping between cuts and variable assignments is done by representing each variable with one node in the graph and, given a cut, each variable has a value of 0 if the corresponding node belongs to the component connected to the source, or 1 if it belongs to the component connected to the sink.
Not all pseudo-Boolean functions can be represented by a flow network, and in the general case the global optimization problem is NP-hard. There exist sufficient conditions to characterise families of functions that can be optimised through graph cuts, such as submodular quadratic functions. Graph cut optimization can be extended to functions of discrete variables with a finite number of values, that can be approached with iterative algorithms with strong optimality properties, computing one graph cut at each iteration.
Graph cut optimization is an important tool for inference over graphical models such as Markov random fields or conditional random fields, and it has applications in computer vision problems such as image segmentation, denoising, registration and stereo matching.
Representability.
A pseudo-Boolean function formula_4 is said to be "representable" if there exists a graph formula_5 with non-negative weights and with source and sink nodes formula_6 and formula_7 respectively, and there exists a set of nodes formula_8 such that, for each tuple of values formula_9 assigned to the variables, formula_10 equals (up to a constant) the value of the flow determined by a minimum cut formula_11 of the graph formula_12 such that formula_13 if formula_14 and formula_15 if formula_16.
It is possible to classify pseudo-Boolean functions according to their order, determined by the maximum number of variables contributing to each single term. All first order functions, where each term depends upon at most one variable, are always representable. Quadratic functions
formula_17
are representable if and only if they are submodular, i.e. for each quadratic term formula_18 the following condition is satisfied
formula_19
Cubic functions
formula_20
are representable if and only if they are "regular", i.e. all possible binary projections to two variables, obtained by fixing the value of the remaining variable, are submodular. For higher-order functions, regularity is a necessary condition for representability.
Graph construction.
Graph construction for a representable function is simplified by the fact that the sum of two representable functions formula_21 and formula_22 is representable, and its graph formula_23 is the union of the graphs formula_24 and formula_25 representing the two functions. This theorem makes it possible to build a separate graph representing each term and to combine them into a graph representing the entire function.
The graph representing a quadratic function of formula_26 variables contains formula_27 vertices, two of them representing the source and sink and the others representing the variables. When representing higher-order functions, the graph contains auxiliary nodes that make it possible to model higher-order interactions.
Unary terms.
A unary term formula_28 depends only on one variable formula_29 and can be represented by a graph with one non-terminal node formula_30 and one edge formula_31 with weight formula_32 if formula_33, or formula_34 with weight formula_35 if formula_36.
Binary terms.
A quadratic (or binary) term formula_18 can be represented by a graph containing two non-terminal nodes formula_30 and formula_37. The term can be rewritten as
formula_38
with
formula_39
In this expression, the first term is constant and is not represented by any edge, the two following terms each depend on one variable and are represented by one edge, as shown in the previous section for unary terms, while the third term is represented by an edge formula_40 with weight formula_41 (submodularity guarantees that the weight is non-negative).
Ternary terms.
A cubic (or ternary) term formula_42 can be represented by a graph with four non-terminal nodes, three of them (formula_30, formula_37 and formula_43) associated with the three variables, plus a fourth auxiliary node formula_44. A generic ternary term can be rewritten as the sum of a constant, three unary terms, three binary terms, and a ternary term in simplified form. There are two different cases, according to the sign of formula_45. If formula_46 then
formula_47
with
formula_49
If formula_48 the construction is similar, but the variables take the opposite values. If the function is regular, then all of its projections onto two variables are submodular, implying that formula_50, formula_51 and formula_52 are non-negative and therefore all terms in the new representation are submodular.
In this decomposition, the constant, unary and binary terms can be represented as shown in the previous sections. If formula_46 the ternary term can be represented with a graph with four edges formula_53, formula_54, formula_55, formula_56, all with weight formula_57, while if formula_48 the term can be represented by four edges formula_58, formula_59, formula_60, formula_61 with weight formula_62.
Minimum cut.
After building a graph representing a pseudo-Boolean function, it is possible to compute a minimum cut using one among the various algorithms developed for flow networks, such as Ford–Fulkerson, Edmonds–Karp, and Boykov–Kolmogorov algorithm. The result is a partition of the graph in two connected components formula_63 and formula_64 such that formula_65 and formula_66, and the function attains its global minimum when formula_14 for each formula_67 such that the corresponding node formula_68, and formula_16 for each formula_67 such that the corresponding node formula_15.
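As a minimal sketch of this last step, assuming the graph has already been built as described above, the cut partition returned by a max-flow routine directly yields the labelling. The toy graph and its weights below are invented for illustration, and networkx is just one possible choice of library:

```python
import networkx as nx

# Toy graph in the shape produced by the constructions above (weights are illustrative).
# Nodes 'v1' and 'v2' represent the two variables; 's' and 't' are source and sink.
G = nx.DiGraph()
G.add_edge('s', 'v1', capacity=5)    # unary term penalising x1 = 1
G.add_edge('v2', 't', capacity=3)    # unary term penalising x2 = 0
G.add_edge('v1', 'v2', capacity=2)   # submodular pairwise term

cut_value, (source_side, sink_side) = nx.minimum_cut(G, 's', 't')
# A variable takes value 0 if its node lies on the source side, 1 otherwise.
labels = {v: 0 if v in source_side else 1 for v in ('v1', 'v2')}
print(cut_value, labels)             # 2 {'v1': 0, 'v2': 1}
```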
Max-flow algorithms such as Boykov–Kolmogorov's are very efficient in practice for sequential computation, but they are difficult to parallelise, making them unsuitable for distributed computing applications and preventing them from exploiting the potential of modern CPUs. Parallel max-flow algorithms have been developed, such as push–relabel and jump-flood, which can also take advantage of hardware acceleration in GPGPU implementations.
Functions of discrete variables with more than two values.
The previous construction allows global optimization of pseudo-Boolean functions only, but it can be extended to quadratic functions of discrete variables with a finite number of values, in the form
formula_69
where formula_70 and formula_71. The function formula_72 represents the unary contribution of each variable (often referred to as the "data term"), while the function formula_73 represents binary interactions between variables (the "smoothness term"). In the general case, optimization of such functions is an NP-hard problem, and stochastic optimization methods such as simulated annealing are sensitive to local minima and can in practice generate arbitrarily sub-optimal results. With graph cuts it is possible to construct move-making algorithms that reach, in polynomial time, a local minimum with strong optimality properties for a wide family of quadratic functions of practical interest (when the binary interaction formula_73 is a metric or a semimetric), such that the value of the function at the solution lies within a constant, known factor of the global optimum.
Given a function formula_74 with formula_75, and a certain assignment of values formula_76 to the variables, it is possible to associate each assignment formula_2 with a partition formula_77 of the set of variables, such that formula_78. Given two distinct assignments formula_79 and formula_80 and a value formula_81, a move that transforms formula_79 into formula_80 is said to be an formula_82-expansion if formula_83 and formula_84. Given a pair of values formula_82 and formula_85, a move is said to be an formula_86-swap if formula_87. Intuitively, an formula_82-expansion move from formula_2 assigns the value formula_82 to some variables that have a different value in formula_2, while an formula_86-swap move assigns formula_82 to some variables that have value formula_85 in formula_2 and vice versa.
For each iteration, the formula_82-expansion algorithm computes, for each possible value formula_82, the minimum of the function among all assignments formula_88 that can be reached with a single formula_82-expansion move from the current temporary solution formula_2, and takes it as the new temporary solution.
formula_89
formula_90
while formula_91:
formula_92
foreach formula_81:
formula_93
if formula_94:
formula_95
formula_90
The formula_86-swap algorithm is similar, but it searches for the minimum among all assignments formula_96 reachable with a single formula_86-swap move from formula_2.
formula_89
formula_90
while formula_91:
formula_92
foreach formula_97:
formula_98
if formula_94:
formula_95
formula_90
In both cases, the optimization problem in the innermost loop can be solved exactly and efficiently with a graph cut. Both algorithms are guaranteed to terminate in a finite number of iterations of the outer loop, and in practice this number is small, with most of the improvement happening at the first iteration. The algorithms can generate different solutions depending on the initial guess, but in practice they are robust with respect to initialisation, and starting with a point where all variables are assigned the same random value is usually sufficient to produce good quality results.
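The outer loop of the formula_82-expansion algorithm above can be written as a short Python sketch. The helper best_expansion_move stands for the inner optimization, which, as explained in the text, is solved exactly with a graph cut; it is left abstract here and is not part of the original pseudocode:

```python
def alpha_expansion(f, labels, x0, best_expansion_move):
    """Iterate alpha-expansion moves until no move decreases the objective f.

    best_expansion_move(f, x, alpha) must return the best assignment reachable
    from x with a single alpha-expansion move (computed with a graph cut).
    """
    x = list(x0)
    improved = True
    while improved:
        improved = False
        for alpha in labels:
            x_hat = best_expansion_move(f, x, alpha)
            if f(x_hat) < f(x):
                x = x_hat
                improved = True
    return x
```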
The solution generated by such algorithms is not necessarily a global optimum, but it has strong guarantees of optimality. If formula_73 is a metric and formula_2 is a solution generated by the formula_82-expansion algorithm, or if formula_73 is a semimetric and formula_2 is a solution generated by the formula_86-swap algorithm, then formula_3 lies within a known and constant factor from the global minimum formula_99:
formula_100
Non-submodular functions.
Generally speaking, the problem of optimizing a non-submodular pseudo-Boolean function is NP-hard and cannot be solved in polynomial time with a simple graph cut. The simplest approach is to approximate the function with a similar but submodular one, for instance by truncating all non-submodular terms or replacing them with similar submodular expressions. Such an approach is generally sub-optimal, and it produces acceptable results only if the number of non-submodular terms is relatively small.
In case of quadratic non-submodular functions, it is possible to compute in polynomial time a partial solution using algorithms such as QPBO. Higher-order functions can be reduced in polynomial time to a quadratic form that can be optimised with QPBO.
Higher-order functions.
Quadratic functions are extensively studied and have been characterised in detail, but more general results have also been derived for higher-order functions. While quadratic functions can indeed model many problems of practical interest, they are limited by the fact that they can represent only binary interactions between variables. The ability to capture higher-order interactions makes it possible to better capture the nature of the problem, and it can provide higher quality results that would be difficult to achieve with quadratic models. For instance, in computer vision applications, where each variable represents a pixel or voxel of the image, higher-order interactions can be used to model texture information that would be difficult to capture using only quadratic functions.
Sufficient conditions analogous to submodularity have been developed to characterise higher-order pseudo-Boolean functions that can be optimised in polynomial time, and there exist algorithms analogous to formula_82-expansion and formula_86-swap for some families of higher-order functions. The problem is NP-hard in the general case, and approximate methods have been developed for fast optimization of functions that do not satisfy such conditions.
|
[
{
"math_id": 0,
"text": "f"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "\\mathbf{x}"
},
{
"math_id": 3,
"text": "f(\\mathbf{x})"
},
{
"math_id": 4,
"text": "f: \\{0, 1\\}^n \\to \\mathbb{R}"
},
{
"math_id": 5,
"text": "G = (V, E)"
},
{
"math_id": 6,
"text": "s"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "V_0 = \\{v_1, \\dots, v_n\\} \\subset V - \\{s, t\\}"
},
{
"math_id": 9,
"text": "(x_1, \\dots, x_n) \\in \\{0, 1\\}^n"
},
{
"math_id": 10,
"text": "f(x_1, \\dots, x_n)"
},
{
"math_id": 11,
"text": "C = (S, T)"
},
{
"math_id": 12,
"text": "G"
},
{
"math_id": 13,
"text": "v_i \\in S"
},
{
"math_id": 14,
"text": "x_i = 0"
},
{
"math_id": 15,
"text": "v_i \\in T"
},
{
"math_id": 16,
"text": "x_i = 1"
},
{
"math_id": 17,
"text": " f(\\mathbf{x}) = w_0 + \\sum_i w_i(x_i) + \\sum_{i < j} w_{ij}(x_i, x_j) . "
},
{
"math_id": 18,
"text": "w_{ij}"
},
{
"math_id": 19,
"text": " w_{ij}(0, 0) + w_{ij}(1, 1) \\le w_{ij}(0, 1) + w_{ij}(1, 0) . "
},
{
"math_id": 20,
"text": " f(\\mathbf{x}) = w_0 + \\sum_i w_i(x_i) + \\sum_{i < j} w_{ij}(x_i, x_j) + \\sum_{i < j < k} w_{ijk}(x_i, x_j, x_k) "
},
{
"math_id": 21,
"text": "f'"
},
{
"math_id": 22,
"text": "f''"
},
{
"math_id": 23,
"text": "G = (V' \\cup V'', E' \\cup E'')"
},
{
"math_id": 24,
"text": "G' = (V', E')"
},
{
"math_id": 25,
"text": "G'' = (V'', E'')"
},
{
"math_id": 26,
"text": "n"
},
{
"math_id": 27,
"text": "n + 2"
},
{
"math_id": 28,
"text": "w_i"
},
{
"math_id": 29,
"text": "x_i"
},
{
"math_id": 30,
"text": "v_i"
},
{
"math_id": 31,
"text": "s \\rightarrow v_i"
},
{
"math_id": 32,
"text": "w_i(1) - w_i(0)"
},
{
"math_id": 33,
"text": "w_i(1) \\ge w_i(0)"
},
{
"math_id": 34,
"text": "v_i \\rightarrow t"
},
{
"math_id": 35,
"text": "w_i(0) - w_i(1)"
},
{
"math_id": 36,
"text": "w_i(1) < w_i(0)"
},
{
"math_id": 37,
"text": "v_j"
},
{
"math_id": 38,
"text": "w_{ij}(x_i, x_j) = w_{ij}(0, 0) + k_i x_i + k_j x_j + k_{ij} \\left( (1 - x_i) x_j + x_i (1 - x_j) \\right)"
},
{
"math_id": 39,
"text": "\n\\begin{align}\n k_i &= \\frac{1}{2} (w_{ij}(1, 0) - w_{ij}(0, 0)) \\\\\n k_j &= \\frac{1}{2} (w_{ij}(1, 1) - w_{ij}(1, 0)) \\\\\n k_{ij} &= \\frac{1}{2} (w_{ij}(0, 1) + w_{ij}(1, 0) - w_{ij}(0, 0) - w_{ij}(1, 1)) .\n\\end{align}\n"
},
{
"math_id": 40,
"text": "v_i \\rightarrow v_j"
},
{
"math_id": 41,
"text": "w_{ij}(0, 1) + w_{ij}(1, 0) - w_{ij}(0, 0) - w_{ij}(1, 1)"
},
{
"math_id": 42,
"text": "w_{ijk}"
},
{
"math_id": 43,
"text": "v_k"
},
{
"math_id": 44,
"text": "v_{ijk}"
},
{
"math_id": 45,
"text": "p = w_{ijk}(0, 0, 0) + w_{ijk}(0, 1, 1) + w_{ijk}(1, 0, 1) + w_{ijk}(1, 1, 0)"
},
{
"math_id": 46,
"text": "p > 0"
},
{
"math_id": 47,
"text": "\n w_{ijk}(x_i, x_j, x_k) =\n w_{ijk}(0, 0, 0)\n + p_1 (x_i - 1) + p_2 (x_j - 1) + p_3 (x_k - 1)\n + p_{23}(x_j - 1) x_k + p_{31} x_i (x_k - 1) + p_{12} (x_i - 1) x_j\n - p x_i x_j x_k\n"
},
{
"math_id": 48,
"text": "p < 0"
},
{
"math_id": 49,
"text": "\n\\begin{align}\n p_1 &= w_{ijk}(1, 0, 1) - w_{ijk}(0, 0, 1) \\\\\n p_2 &= w_{ijk}(1, 1, 0) - w_{ijk}(1, 0, 1) \\\\\n p_3 &= w_{ijk}(0, 1, 1) - w_{ijk}(0, 1, 0) \\\\\n p_{23} &= w_{ijk}(0, 0, 1) + w_{ijk}(0, 1, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(0, 1, 1) \\\\\n p_{31} &= w_{ijk}(0, 0, 1) + w_{ijk}(1, 0, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(1, 0, 1) \\\\\n p_{12} &= w_{ijk}(0, 1, 0) + w_{ijk}(1, 0, 0) - w_{ijk}(0, 0, 0) - w_{ijk}(1, 1, 0) .\n\\end{align}\n"
},
{
"math_id": 50,
"text": "p_{23}"
},
{
"math_id": 51,
"text": "p_{31}"
},
{
"math_id": 52,
"text": "p_{12}"
},
{
"math_id": 53,
"text": "v_i \\rightarrow v_{ijk}"
},
{
"math_id": 54,
"text": "v_j \\rightarrow v_{ijk}"
},
{
"math_id": 55,
"text": "v_k \\rightarrow v_{ijk}"
},
{
"math_id": 56,
"text": "v_{ijk} \\rightarrow t"
},
{
"math_id": 57,
"text": "p"
},
{
"math_id": 58,
"text": "v_{ijk} \\rightarrow v_i"
},
{
"math_id": 59,
"text": "v_{ijk} \\rightarrow v_j"
},
{
"math_id": 60,
"text": "v_{ijk} \\rightarrow v_k"
},
{
"math_id": 61,
"text": "s \\rightarrow v_{ijk}"
},
{
"math_id": 62,
"text": "-p"
},
{
"math_id": 63,
"text": "S"
},
{
"math_id": 64,
"text": "T"
},
{
"math_id": 65,
"text": "s \\in S"
},
{
"math_id": 66,
"text": "t \\in T"
},
{
"math_id": 67,
"text": "i"
},
{
"math_id": 68,
"text": "v_i \\in\nS"
},
{
"math_id": 69,
"text": "f(\\mathbf{x}) = \\sum_{i \\in V} D(x_i) + \\sum_{(i, j) \\in E} S(x_i, x_j)"
},
{
"math_id": 70,
"text": "E \\subseteq V \\times V"
},
{
"math_id": 71,
"text": "x_i \\in \\Lambda = \\{1, \\dots, k\\}"
},
{
"math_id": 72,
"text": "D(x_i)"
},
{
"math_id": 73,
"text": "S(x_i, x_j)"
},
{
"math_id": 74,
"text": "f: \\Lambda^n \\to \\mathbb{R}"
},
{
"math_id": 75,
"text": "\\Lambda = \\{1, \\dots, k\\}"
},
{
"math_id": 76,
"text": "\\mathbf{x} = (x_1, \\dots, x_n) \\in \\Lambda^n"
},
{
"math_id": 77,
"text": "P = \\{P_l | l \\in \\Lambda \\}"
},
{
"math_id": 78,
"text": "P_l = \\{ x_i | x_i = l \\in \\Lambda \\}"
},
{
"math_id": 79,
"text": "P"
},
{
"math_id": 80,
"text": "P'"
},
{
"math_id": 81,
"text": "\\alpha \\in \\Lambda"
},
{
"math_id": 82,
"text": "\\alpha"
},
{
"math_id": 83,
"text": "P_\\alpha \\subset P'_\\alpha"
},
{
"math_id": 84,
"text": "P'_l \\subset P_l \\; \\forall l \\in \\Lambda - \\{ \\alpha \\}"
},
{
"math_id": 85,
"text": "\\beta"
},
{
"math_id": 86,
"text": "\\alpha\\beta"
},
{
"math_id": 87,
"text": "P_l = P'_l \\; \\forall l \\in \\Lambda - \\{ \\alpha, \\beta \\}"
},
{
"math_id": 88,
"text": "\\Alpha(\\mathbf{x})"
},
{
"math_id": 89,
"text": "\\mathbf{x} := \\text{arbitrary value in } \\Lambda^n"
},
{
"math_id": 90,
"text": "\\text{exit} := 0"
},
{
"math_id": 91,
"text": "\\text{exit} \\ne 1"
},
{
"math_id": 92,
"text": "\\text{exit} = 1"
},
{
"math_id": 93,
"text": "\\mathbf{\\hat{x}} := \\arg \\min_{\\mathbf{y} \\in \\Alpha(\\mathbf{x})} f(\\mathbf{y})"
},
{
"math_id": 94,
"text": "f(\\mathbf{\\hat{x}}) < f(\\mathbf{x})"
},
{
"math_id": 95,
"text": "\\mathbf{x} = \\mathbf{\\hat{x}}"
},
{
"math_id": 96,
"text": "\\Alpha\\Beta(\\mathbf{x})"
},
{
"math_id": 97,
"text": "(\\alpha, \\beta) \\in \\Lambda^2"
},
{
"math_id": 98,
"text": "\\mathbf{\\hat{x}} := \\arg \\min_{\\mathbf{y} \\in \\Alpha\\Beta(\\mathbf{x})} f(\\mathbf{y})"
},
{
"math_id": 99,
"text": "f(\\mathbf{x}^*)"
},
{
"math_id": 100,
"text": "f(\\mathbf{x}) \\le 2 \\frac{ \\max_{\\alpha \\ne \\beta \\in \\Lambda} S(\\alpha, \\beta) }{ \\min_{\\alpha \\ne \\beta \\in \\Lambda} S(\\alpha, \\beta) } f(\\mathbf{x}^*) . "
}
] |
https://en.wikipedia.org/wiki?curid=59654517
|
59654519
|
Quadratic pseudo-Boolean optimization
|
Combinatorial optimization method for pseudo-Boolean functions
Quadratic pseudo-Boolean optimisation (QPBO) is a combinatorial optimization method for minimizing quadratic pseudo-Boolean functions in the form
formula_0
in the binary variables formula_1, with formula_2. If formula_3 is submodular then QPBO produces a global optimum equivalently to graph cut optimization, while if formula_3 contains non-submodular terms then the algorithm produces a partial solution with specific optimality properties, in both cases in polynomial time.
QPBO is a useful tool for inference on Markov random fields and conditional random fields, and has applications in computer vision problems such as image segmentation and stereo matching.
Optimization of non-submodular functions.
If the coefficients formula_4 of the quadratic terms satisfy the submodularity condition
formula_5
then the function can be efficiently optimised with graph cut optimization. It is indeed possible to represent it with a non-negative weighted graph, and the global minimum can be found in polynomial time by computing a minimum cut of the graph, which can be computed with algorithms such as Ford–Fulkerson, Edmonds–Karp, and Boykov–Kolmogorov's.
If the function is not submodular, then the problem is NP-hard in the general case and it is not always possible to solve it exactly in polynomial time. It is possible to replace the target function with a similar but submodular approximation, e.g. by removing all non-submodular terms or replacing them with submodular approximations, but such an approach is generally sub-optimal and it produces satisfactory results only if the number of non-submodular terms is relatively small.
QPBO builds an extended graph, introducing a set of auxiliary variables ideally equivalent to the negation of the variables in the problem. If the nodes in the graph associated with a variable (representing the variable itself and its negation) are separated by the minimum cut of the graph into two different connected components, then the optimal value for that variable is well defined; otherwise it is not possible to infer it. This method produces results generally superior to submodular approximations of the target function.
Properties.
QPBO produces a solution where each variable assumes one of three possible values: "true", "false", and "undefined", noted in the following as 1, 0, and formula_6 respectively. The solution has the following two properties.
Partial optimality: if formula_7 is the solution and formula_8 is the set of variables with a defined (non-formula_6) value, then there exists a global minimum formula_9 of the function such that formula_10 for each formula_11.
Persistence: given an arbitrary assignment formula_12, if a new assignment formula_13 is constructed by replacing formula_14 with formula_15 for each formula_11, then formula_16.
Algorithm.
The algorithm can be divided in three steps: graph construction, max-flow computation, and assignment of values to the variables.
When constructing the graph, the set of vertices formula_17 contains the source and sink nodes formula_18 and formula_19, and a pair of nodes formula_20 and formula_21 for each variable. After re-parametrising the function to normal form, a pair of edges is added to the graph for each term formula_22:
for each unary term formula_23, two edges formula_24 and formula_25, each with weight formula_26;
for each unary term formula_27, two edges formula_28 and formula_29, each with weight formula_30;
for each binary term formula_31, two edges formula_32 and formula_33, each with weight formula_34;
for each binary term formula_35, two edges formula_36 and formula_37, each with weight formula_38;
for each binary term formula_39, two edges formula_40 and formula_41, each with weight formula_42;
for each binary term formula_43, two edges formula_44 and formula_45, each with weight formula_46.
The minimum cut of the graph can be computed with a max-flow algorithm. In the general case the minimum cut is not unique, and each minimum cut corresponds to a different partial solution; however, it is possible to build a minimum cut such that the number of undefined variables is minimal.
Once the minimum cut is known, each variable receives a value depending upon the position of its corresponding nodes formula_20 and formula_21: if formula_20 belongs to the connected component containing the source and formula_21 belongs to the connected component containing the sink, then the variable takes the value 0. Vice versa, if formula_20 belongs to the connected component containing the sink and formula_21 to the one containing the source, then the variable takes the value 1. If both nodes formula_20 and formula_21 belong to the same connected component, then the value of the variable is undefined.
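A minimal sketch of this labelling step, assuming the extended graph has already been built and cut; the helper pair(p), which returns the node associated with the negation of variable p, and the container names are illustrative:

```python
def qpbo_labels(variables, source_side, sink_side, pair):
    """Assign 0, 1, or None (undefined) to each variable from a minimum cut.

    source_side and sink_side are the two components of the cut; pair(p)
    returns the node p' paired with the variable node p.
    """
    labels = {}
    for p in variables:
        p_bar = pair(p)
        if p in source_side and p_bar in sink_side:
            labels[p] = 0
        elif p in sink_side and p_bar in source_side:
            labels[p] = 1
        else:
            labels[p] = None   # both nodes on the same side: undefined
    return labels
```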
The way undefined variables can be handled depends upon the context of the problem. In the general case, given a partition of the graph into two sub-graphs and two solutions, each one optimal for one of the sub-graphs, it is possible to combine the two solutions into one solution optimal for the whole graph in polynomial time. However, computing an optimal solution for the subset of undefined variables is still an NP-hard problem. In the context of iterative algorithms such as formula_47-expansion, a reasonable approach is to leave the value of undefined variables unchanged, since the persistence property guarantees that the target function will have non-increasing value. Different exact and approximate strategies to minimise the number of undefined variables exist.
Higher order terms.
It is always possible to reduce a higher-order function to a quadratic function that is equivalent with respect to the optimisation, a problem known as "higher-order clique reduction" (HOCR), and the result of such a reduction can be optimized with QPBO. Generic methods for the reduction of arbitrary functions rely on specific substitution rules, and in the general case they require the introduction of auxiliary variables. In practice most terms can be reduced without introducing additional variables, resulting in a simpler optimization problem, and the remaining terms can be reduced exactly, with the addition of auxiliary variables, or approximately, without the addition of any new variable.
|
[
{
"math_id": 0,
"text": " f(\\mathbf{x}) = w_0 + \\sum_{p \\in V} w_p(x_p) + \\sum_{(p, q) \\in E} w_{pq}(x_p, x_q) "
},
{
"math_id": 1,
"text": "x_p \\in \\{0, 1\\} \\; \\forall p \\in V = \\{1, \\dots, n\\}"
},
{
"math_id": 2,
"text": "E \\subseteq V \\times V"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "w_{pq}"
},
{
"math_id": 5,
"text": " w_{pq}(0, 0) + w_{pq}(1, 1) \\le w_{pq}(0, 1) + w_{pq}(1, 0) "
},
{
"math_id": 6,
"text": "\\emptyset"
},
{
"math_id": 7,
"text": "\\mathbf{x}"
},
{
"math_id": 8,
"text": "\\hat{V} \\subseteq V"
},
{
"math_id": 9,
"text": "\\mathbf{x^*}"
},
{
"math_id": 10,
"text": "x_i = x_i^*"
},
{
"math_id": 11,
"text": "i \\in \\hat{V}"
},
{
"math_id": 12,
"text": "\\mathbf{y}"
},
{
"math_id": 13,
"text": "\\hat{\\mathbf{y}}"
},
{
"math_id": 14,
"text": "y_i"
},
{
"math_id": 15,
"text": "x_i"
},
{
"math_id": 16,
"text": "f(\\hat{\\mathbf{y}}) \\le f(\\mathbf{y})"
},
{
"math_id": 17,
"text": "V"
},
{
"math_id": 18,
"text": "s"
},
{
"math_id": 19,
"text": "t"
},
{
"math_id": 20,
"text": "p"
},
{
"math_id": 21,
"text": "p'"
},
{
"math_id": 22,
"text": "w"
},
{
"math_id": 23,
"text": "w_p(0)"
},
{
"math_id": 24,
"text": "p \\rightarrow t"
},
{
"math_id": 25,
"text": "s \\rightarrow p'"
},
{
"math_id": 26,
"text": "\\frac{1}{2} w_p(0)"
},
{
"math_id": 27,
"text": "w_p(1)"
},
{
"math_id": 28,
"text": "s \\rightarrow p"
},
{
"math_id": 29,
"text": "p' \\rightarrow t"
},
{
"math_id": 30,
"text": "\\frac{1}{2} w_p(1)"
},
{
"math_id": 31,
"text": "w_{pq}(0, 1)"
},
{
"math_id": 32,
"text": "p \\rightarrow q"
},
{
"math_id": 33,
"text": "q' \\rightarrow p'"
},
{
"math_id": 34,
"text": "\\frac{1}{2} w_{pq}(0, 1)"
},
{
"math_id": 35,
"text": "w_{pq}(1, 0)"
},
{
"math_id": 36,
"text": "q \\rightarrow p"
},
{
"math_id": 37,
"text": "p' \\rightarrow q'"
},
{
"math_id": 38,
"text": "\\frac{1}{2} w_{pq}(1, 0)"
},
{
"math_id": 39,
"text": "w_{pq}(0, 0)"
},
{
"math_id": 40,
"text": "p \\rightarrow q'"
},
{
"math_id": 41,
"text": "q \\rightarrow p'"
},
{
"math_id": 42,
"text": "\\frac{1}{2} w_{pq}(0, 0)"
},
{
"math_id": 43,
"text": "w_{pq}(1, 1)"
},
{
"math_id": 44,
"text": "q' \\rightarrow p"
},
{
"math_id": 45,
"text": "p' \\rightarrow q"
},
{
"math_id": 46,
"text": "\\frac{1}{2} w_{pq}(1, 1)"
},
{
"math_id": 47,
"text": "\\alpha"
}
] |
https://en.wikipedia.org/wiki?curid=59654519
|
596556
|
Hartley transform
|
Integral transform closely related to the Fourier transform
In mathematics, the Hartley transform (HT) is an integral transform closely related to the Fourier transform (FT), but which transforms real-valued functions to real-valued functions. It was proposed as an alternative to the Fourier transform by Ralph V. L. Hartley in 1942, and is one of many known Fourier-related transforms. Compared to the Fourier transform, the Hartley transform has the advantages of transforming real functions to real functions (as opposed to requiring complex numbers) and of being its own inverse.
The discrete version of the transform, the discrete Hartley transform (DHT), was introduced by Ronald N. Bracewell in 1983.
The two-dimensional Hartley transform can be computed by an analog optical process similar to an optical Fourier transform (OFT), with the proposed advantage that only its amplitude and sign need to be determined rather than its complex phase. However, optical Hartley transforms do not seem to have seen widespread use.
Definition.
The Hartley transform of a function formula_0 is defined by:
formula_1
where formula_2 can in applications be an angular frequency and
formula_3
is the cosine-and-sine (cas) or "Hartley" kernel. In engineering terms, this transform takes a signal (function) from the time-domain to the Hartley spectral domain (frequency domain).
Inverse transform.
The Hartley transform has the convenient property of being its own inverse (an involution):
formula_4
Conventions.
The above is in accord with Hartley's original definition, but (as with the Fourier transform) various minor details are matters of convention and can be changed without altering the essential properties:
Instead of using the same transform for forward and inverse, the formula_5 factor can be removed from the forward transform and formula_6 used for the inverse (or, more generally, any pair of normalizations whose product is formula_6).
formula_7 can be used instead of formula_8 (i.e., frequency instead of angular frequency), in which case the formula_5 coefficient is omitted entirely.
formula_9 can be used instead of formula_10 as the kernel.
Relation to Fourier transform.
This transform differs from the classic Fourier transform
formula_11 in the choice of the kernel. In the Fourier transform, we have the exponential kernel,
formula_12,
where formula_13 is the imaginary unit.
The two transforms are closely related, however, and the Fourier transform (assuming it uses the same formula_14 normalization convention) can be computed from the Hartley transform via:
formula_15
That is, the real and imaginary parts of the Fourier transform are simply given by the even and odd parts of the Hartley transform, respectively.
Conversely, for real-valued functions formula_0, the Hartley transform is given from the Fourier transform's real and imaginary parts:
formula_16
where formula_17 and formula_18 denote the real and imaginary parts.
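In the discrete case this relation gives a direct way to compute the discrete Hartley transform from an FFT. The short NumPy sketch below uses the unnormalised DFT convention rather than the formula_14 normalization above:

```python
import numpy as np

def dht(x):
    """Discrete Hartley transform of a real sequence via the FFT.

    H[k] = sum_n x[n] * cas(2*pi*n*k/N), which for real x equals
    Re(FFT(x)[k]) - Im(FFT(x)[k]).
    """
    X = np.fft.fft(x)
    return X.real - X.imag

x = np.random.rand(8)
H = dht(x)
np.testing.assert_allclose(dht(H) / len(x), x)   # self-inverse up to a factor of N
```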
Properties.
The Hartley transform is a real linear operator, and is symmetric (and Hermitian). From the symmetric and self-inverse properties, it follows that the transform is a unitary operator (indeed, orthogonal).
Convolution using Hartley transforms is
formula_19
where formula_20 and formula_21
Similar to the Fourier transform, the Hartley transform of an even/odd function is even/odd, respectively.
cas.
The properties of the "Hartley kernel", for which Hartley introduced the name "cas" for the function (from "cosine and sine") in 1942, follow directly from trigonometry, and its definition as a phase-shifted trigonometric function formula_22. For example, it has an angle-addition identity of:
formula_23
Additionally:
formula_24
and its derivative is given by:
formula_25
|
[
{
"math_id": 0,
"text": "f(t)"
},
{
"math_id": 1,
"text": "\nH(\\omega) = \\left\\{\\mathcal{H}f\\right\\}(\\omega) = \\frac{1}{\\sqrt{2\\pi}}\\int_{-\\infty}^\\infty\nf(t) \\operatorname{cas}(\\omega t) \\, \\mathrm{d}t\\,,\n"
},
{
"math_id": 2,
"text": "\\omega"
},
{
"math_id": 3,
"text": "\n\\operatorname{cas}(t) = \\cos(t) + \\sin(t) = \\sqrt{2} \\sin (t+\\pi /4) = \\sqrt{2} \\cos (t-\\pi /4)\\,,\n"
},
{
"math_id": 4,
"text": "f = \\{\\mathcal{H} \\{\\mathcal{H}f \\}\\}\\,."
},
{
"math_id": 5,
"text": "{1}/{\\sqrt{2\\pi}}"
},
{
"math_id": 6,
"text": "{1}/{2\\pi}"
},
{
"math_id": 7,
"text": "2\\pi\\nu t"
},
{
"math_id": 8,
"text": "\\omega t"
},
{
"math_id": 9,
"text": "\\cos-\\sin"
},
{
"math_id": 10,
"text": "\\cos+\\sin"
},
{
"math_id": 11,
"text": "F(\\omega) = \\mathcal{F} \\{ f(t) \\}(\\omega)"
},
{
"math_id": 12,
"text": "\\exp\\left({-\\mathrm{i}\\omega t}\\right) = \\cos(\\omega t) - \\mathrm{i} \\sin(\\omega t)"
},
{
"math_id": 13,
"text": "\\mathrm{i}"
},
{
"math_id": 14,
"text": "1/\\sqrt{2\\pi}"
},
{
"math_id": 15,
"text": "F(\\omega) = \\frac{H(\\omega) + H(-\\omega)}{2} - \\mathrm{i} \\frac{H(\\omega) - H(-\\omega)}{2}\\,."
},
{
"math_id": 16,
"text": "\\{ \\mathcal{H} f \\} = \\Re \\{ \\mathcal{F}f \\} - \\Im \\{ \\mathcal{F}f \\} = \\Re \\{ \\mathcal{F}f \\cdot (1+\\mathrm{i}) \\}\\,,"
},
{
"math_id": 17,
"text": "\\Re"
},
{
"math_id": 18,
"text": "\\Im"
},
{
"math_id": 19,
"text": "\nf(x) * g(x) = \\frac{F(\\omega) G(\\omega) + F(-\\omega) G(\\omega) + F(\\omega) G(-\\omega) - F(-\\omega) G(-\\omega)}{2}\n"
},
{
"math_id": 20,
"text": "F(\\omega) = \\{\\mathcal{H}f\\}(\\omega)"
},
{
"math_id": 21,
"text": "G(\\omega) = \\{\\mathcal{H} g\\}(\\omega)"
},
{
"math_id": 22,
"text": "\\operatorname{cas}(t)=\\sqrt{2} \\sin (t+\\pi /4)=\\sin(t)+\\cos(t)"
},
{
"math_id": 23,
"text": "\n2 \\operatorname{cas} (a+b) = \\operatorname{cas}(a) \\operatorname{cas}(b) + \\operatorname{cas}(-a) \\operatorname{cas}(b) + \\operatorname{cas}(a) \\operatorname{cas}(-b) - \\operatorname{cas}(-a) \\operatorname{cas}(-b)\\,.\n"
},
{
"math_id": 24,
"text": " \n\\operatorname{cas} (a+b) = {\\cos (a) \\operatorname{cas} (b)} + {\\sin (a) \\operatorname{cas} (-b)} = \\cos (b) \\operatorname{cas} (a) + \\sin (b) \\operatorname{cas}(-a)\\,,\n"
},
{
"math_id": 25,
"text": "\n\\operatorname{cas}'(a) = \\frac{d}{da} \\operatorname{cas} (a) = \\cos (a) - \\sin (a) = \\operatorname{cas}(-a)\\,.\n"
}
] |
https://en.wikipedia.org/wiki?curid=596556
|
59656
|
Rayleigh number
|
Dimensionless quantity associated with free convection of a fluid
In fluid mechanics, the Rayleigh number (Ra, after Lord Rayleigh) for a fluid is a dimensionless number associated with buoyancy-driven flow, also known as free (or natural) convection. It characterises the fluid's flow regime: a value in a certain lower range denotes laminar flow; a value in a higher range, turbulent flow. Below a certain critical value, there is no fluid motion and heat transfer is by conduction rather than convection. For most engineering purposes, the Rayleigh number is large, somewhere around 10⁶ to 10⁸.
The Rayleigh number is defined as the product of the Grashof number (Gr), which describes the relationship between buoyancy and viscosity within a fluid, and the Prandtl number (Pr), which describes the relationship between momentum diffusivity and thermal diffusivity: Ra = Gr × Pr. Hence it may also be viewed as the ratio of buoyancy and viscosity forces multiplied by the ratio of momentum and thermal diffusivities: Ra = B/μ × ν/α. It is closely related to the Nusselt number (Nu).
Derivation.
The Rayleigh number describes the behaviour of fluids (such as water or air) when the mass density of the fluid is non-uniform. The mass density differences are usually caused by temperature differences. Typically a fluid expands and becomes less dense as it is heated. Gravity causes denser parts of the fluid to sink, which is called convection. Lord Rayleigh studied the case of Rayleigh-Bénard convection. When the Rayleigh number, Ra, is below a critical value for a fluid, there is no flow and heat transfer is purely by conduction; when it exceeds that value, heat is transferred by natural convection.
When the mass density difference is caused by temperature difference, Ra is, by definition, the ratio of the time scale for diffusive thermal transport to the time scale for convective thermal transport at speed formula_0:
formula_1
This means the Rayleigh number is a type of Péclet number. For a volume of fluid of size formula_2 in all three dimensions and mass density difference formula_3, the force due to gravity is of the order formula_4, where formula_5 is acceleration due to gravity. From the Stokes equation, when the volume of fluid is sinking, viscous drag is of the order formula_6, where formula_7 is the dynamic viscosity of the fluid. When these two forces are equated, the speed formula_8. Thus the time scale for transport via flow is formula_9. The time scale for thermal diffusion across a distance formula_2 is formula_10, where formula_11 is the thermal diffusivity. Thus the Rayleigh number Ra is
formula_12
where we approximated the density difference formula_13 for a fluid of average mass density formula_14, thermal expansion coefficient formula_15 and a temperature difference formula_16 across distance formula_2.
The Rayleigh number can be written as the product of the Grashof number and the Prandtl number:
formula_17
Classical definition.
For free convection near a vertical wall, the Rayleigh number is defined as:
formula_18
where: "x" is the characteristic length (the distance along the wall), "T"s is the surface temperature, "T"∞ is the quiescent (far-field) fluid temperature, "g" is the acceleration due to gravity, "β" is the thermal expansion coefficient, "ν" is the kinematic viscosity, "α" is the thermal diffusivity, and Gr"x" and Pr are the Grashof and Prandtl numbers.
In the above, the fluid properties Pr, "ν", "α" and "β" are evaluated at the film temperature, which is defined as:
formula_20
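As a minimal numerical sketch (added here, not part of the article), the classical definition above can be evaluated directly; the property values below are rough, assumed figures for air near 300 K and serve only as placeholders.
# Evaluate Ra_x = g*beta*(T_s - T_inf)*x**3 / (nu*alpha) for air near a heated vertical wall.
# All property values are assumed, approximate figures (placeholders, not from the article).
g = 9.81           # gravitational acceleration, m/s^2
beta = 1.0 / 300   # thermal expansion coefficient of an ideal gas at ~300 K, 1/K
nu = 1.6e-5        # kinematic viscosity of air, m^2/s (assumed)
alpha = 2.2e-5     # thermal diffusivity of air, m^2/s (assumed)
T_s, T_inf = 320.0, 300.0   # wall and far-field temperatures, K (assumed)
x = 0.5                     # characteristic length along the wall, m
Ra_x = g * beta * (T_s - T_inf) * x**3 / (nu * alpha)
print(f"Ra_x = {Ra_x:.3e}")  # about 2e8 for these values, i.e. in the usual engineering range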
For a uniform wall heating flux, the modified Rayleigh number is defined as:
formula_21
where: "q"″o is the uniform heat flux applied at the wall, "k" is the thermal conductivity of the fluid, and the remaining quantities are defined as for the classical Rayleigh number above.
Other applications.
Solidifying alloys.
The Rayleigh number can also be used as a criterion to predict convectional instabilities, such as A-segregates, in the mushy zone of a solidifying alloy. The mushy zone Rayleigh number is defined as:
formula_22
where: Δ"ρ"/"ρ"0 is the relative density difference driving the flow, "g" is the acceleration due to gravity, "K̄" is the mean permeability of the mushy zone, "L" is the characteristic length scale, "α" is the thermal diffusivity, "ν" is the kinematic viscosity, and "R" is the isotherm (solidification front) speed; the two forms are equivalent because "R" = "α"/"L".
A-segregates are predicted to form when the Rayleigh number exceeds a certain critical value. This critical value is independent of the composition of the alloy, and this is the main advantage of the Rayleigh number criterion over other criteria for prediction of convectional instabilities, such as Suzuki criterion.
Torabi Rad et al. showed that for steel alloys the critical Rayleigh number is 17. Pickering et al. explored Torabi Rad's criterion, and further verified its effectiveness. Critical Rayleigh numbers for lead–tin and nickel-based super-alloys were also developed.
Porous media.
The Rayleigh number above is for convection in a bulk fluid such as air or water, but convection can also occur when the fluid is inside and fills a porous medium, such as porous rock saturated with water. Then the Rayleigh number, sometimes called the Rayleigh–Darcy number, is different. In a bulk fluid, i.e., not in a porous medium, the Stokes equation gives the falling speed of a domain of size formula_2 of liquid as formula_8. In a porous medium, this expression is replaced using Darcy's law by formula_23, with formula_24 the permeability of the porous medium. The Rayleigh or Rayleigh–Darcy number is then
formula_25
This also applies to A-segregates, in the mushy zone of a solidifying alloy.
Geophysical applications.
In geophysics, the Rayleigh number is of fundamental importance: it indicates the presence and strength of convection within a fluid body such as the Earth's mantle. The mantle is a solid that behaves as a fluid over geological time scales. The Rayleigh number for the Earth's mantle due to internal heating alone, Ra"H", is given by:
formula_26
where: "g" is the acceleration due to gravity, "ρ"0 is the reference density of the mantle, "β" is the thermal expansion coefficient, "H" is the rate of internal (radiogenic) heat production per unit mass, "D" is the depth of the mantle, "η" is the dynamic viscosity, "α" is the thermal diffusivity, and "k" is the thermal conductivity.
A Rayleigh number for bottom heating of the mantle from the core, Ra"T", can also be defined as:
formula_27
where: Δ"T"sa is the superadiabatic temperature difference across the mantle, "C""P" is the specific heat capacity at constant pressure, and the remaining symbols are as defined above.
High values for the Earth's mantle indicate that convection within the Earth is vigorous and time-varying, and that convection is responsible for almost all the heat transported from the deep interior to the surface.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "u"
},
{
"math_id": 1,
"text": "\\mathrm{Ra} = \\frac{\\text{time scale for thermal transport via diffusion}}{\\text{time scale for thermal transport via convection at speed}~ u}."
},
{
"math_id": 2,
"text": "l"
},
{
"math_id": 3,
"text": "\\Delta\\rho"
},
{
"math_id": 4,
"text": "\\Delta\\rho l^3g"
},
{
"math_id": 5,
"text": "g"
},
{
"math_id": 6,
"text": "\\eta l u"
},
{
"math_id": 7,
"text": "\\eta"
},
{
"math_id": 8,
"text": "u \\sim \\Delta\\rho l^2 g/\\eta"
},
{
"math_id": 9,
"text": "l/u \\sim \\eta/\\Delta\\rho lg"
},
{
"math_id": 10,
"text": "l^2/\\alpha"
},
{
"math_id": 11,
"text": "\\alpha"
},
{
"math_id": 12,
"text": "\\mathrm{Ra} = \\frac{l^2/\\alpha}{\\eta/\\Delta\\rho lg} = \\frac{\\Delta\\rho l^3g}{\\eta\\alpha} = \\frac{\\rho\\beta\\Delta T l^3g}{\\eta\\alpha}"
},
{
"math_id": 13,
"text": "\\Delta\\rho=\\rho\\beta\\Delta T"
},
{
"math_id": 14,
"text": "\\rho"
},
{
"math_id": 15,
"text": "\\beta"
},
{
"math_id": 16,
"text": "\\Delta T"
},
{
"math_id": 17,
"text": "\\mathrm{Ra} = \\mathrm{Gr}\\mathrm{Pr}."
},
{
"math_id": 18,
"text": "\\mathrm{Ra}_{x} = \\frac{g \\beta} {\\nu \\alpha} (T_s - T_\\infty) x^3 = \\mathrm{Gr}_{x}\\mathrm{Pr}"
},
{
"math_id": 19,
"text": "\\nu"
},
{
"math_id": 20,
"text": "T_f = \\frac{T_s + T_\\infin}{2}."
},
{
"math_id": 21,
"text": "\\mathrm{Ra}^{*}_{x} = \\frac{g \\beta q''_o} {\\nu \\alpha k} x^4 "
},
{
"math_id": 22,
"text": "\\mathrm{Ra} = \\frac{\\frac{\\Delta \\rho}{\\rho_0}g \\bar{K} L}{\\alpha \\nu} = \\frac{\\frac{\\Delta \\rho}{\\rho_0}g \\bar{K} }{R \\nu}"
},
{
"math_id": 23,
"text": "u \\sim \\Delta\\rho k g/\\eta"
},
{
"math_id": 24,
"text": "k"
},
{
"math_id": 25,
"text": "\\mathrm{Ra}=\\frac{\\rho\\beta\\Delta T klg}{\\eta\\alpha}"
},
{
"math_id": 26,
"text": "\\mathrm{Ra}_H = \\frac{g\\rho^{2}_{0}\\beta HD^5}{\\eta \\alpha k}"
},
{
"math_id": 27,
"text": "\\mathrm{Ra}_T = \\frac{\\rho_{0}^2 g\\beta\\Delta T_\\text{sa}D^3 C_P}{\\eta k}"
}
] |
https://en.wikipedia.org/wiki?curid=59656
|
59659427
|
Genomic control
|
Statistical method used in genetic association studies
Genomic control (GC) is a statistical method that is used to control for the confounding effects of population stratification in genetic association studies. The method was originally outlined by Bernie Devlin and Kathryn Roeder in a 1999 paper. It involves using a set of anonymous genetic markers to estimate the effect of population structure on the distribution of the chi-square statistic. The distribution of the chi-square statistics for a given allele that is suspected to be associated with a given trait can then be compared to the distribution of the same statistics for an allele that is expected not to be related to the trait. The method is supposed to involve the use of markers that are not linked to the marker being tested for a possible association. In theory, it takes advantage of the tendency of population structure to cause overdispersion of test statistics in association analyses. The genomic control method is as robust as family-based designs, despite being applied to population-based data. It has the potential to lead to a decrease in statistical power to detect a true association, and it may also fail to eliminate the biasing effects of population stratification. A more robust form of the genomic control method can be performed by expressing the association being studied as two Cochran–Armitage trend tests, and then applying the method to each test separately.
The assumption of population homogeneity in association studies, especially case-control studies, can easily be violated and can lead to both type I and type II errors. It is therefore important for the models used in the study to compensate for the population structure. The problem in case-control studies is that if there is a genetic involvement in the disease, the case population is more likely to be related than the individuals in the control population. This means that the assumption of independence of observations is violated. Often this leads to an overestimation of the significance of an association, although this depends on the way the sample was chosen. If, coincidentally, there is a higher allele frequency in a subpopulation of the cases, an association will be found with any trait that is more prevalent in the case population. This kind of spurious association increases as the sample population grows, so the problem should be of special concern in large-scale association studies in which individual loci cause only relatively small effects on the trait. A method that can, in some cases, compensate for the problems described above was developed by Devlin and Roeder (1999). It uses both a frequentist and a Bayesian approach (the latter being appropriate when dealing with a large number of candidate genes).
The frequentist way of correcting for population structure works by using markers that are not linked with the trait in question to correct for any inflation of the statistic caused by population structure. The method was first developed for binary traits but has since been generalized for quantitative ones. For the binary one, which applies to finding genetic differences between the case and control populations, Devlin and Roeder (1999) use Armitage's trend test
formula_0
and the formula_1 test for allelic frequencies
formula_2
If the population is in Hardy–Weinberg equilibrium, the two statistics are approximately equal. Under the null hypothesis of no population stratification, the trend test statistic asymptotically follows a formula_1 distribution with one degree of freedom. The idea is that the statistic is inflated by a factor formula_3, so that formula_4, where formula_3 depends on the effect of stratification. The above method rests upon the assumption that the inflation factor formula_3 is constant, which means that the loci should have roughly equal mutation rates, should not be under different selection in the two populations, and the amount of Hardy–Weinberg disequilibrium measured in Wright's coefficient of inbreeding "F" should not differ between the different loci. The last of these is of greatest concern. If the effect of the stratification is similar across the different loci, formula_3 can be estimated from the unlinked markers
formula_5
where "L" is the number of unlinked markers. The denominator is derived from the gamma distribution as a robust estimator of formula_3. Other estimators have been suggested, for example, Reich and Goldstein suggested using the mean of the statistics instead. This is not the only way to estimate formula_3 but according to Bacanu et al. it is an appropriate estimate even if some of the unlinked markers are actually in disequilibrium with a disease causing locus or are themselves associated with the disease. Under the null hypothesis and when correcting for stratification using "L" unlinked genes, formula_6 is approximately formula_7 distributed. With this correction the overall type I error rate should be approximately equal to formula_8 even when the population is stratified. Devlin and Roeder (1999) mostly considered the situation where formula_9 gives a 95% confidence level and not smaller p-values. Marchini et al. (2004) demonstrates by simulation that genomic control can lead to an anti-conservative p-value if this value is very small and the two populations (case and control) are extremely distinct. This was especially a problem if the number of unlinked markers were in the order 50−100. This can result in false positives (at that significance level).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\nY^2=\\frac{N(N(r_1+2r_2)-R(n_1+2n_2))^2}{R(N-R)(N(n_1 + 4n_2) - (n_1 + 2n_2)^2)} \n"
},
{
"math_id": 1,
"text": "\\chi^2"
},
{
"math_id": 2,
"text": "\n\\chi^2\\sim X_A^2 = \\frac{2N (2N(r_1 + 2r_2) - R(n_1 + 2n_2))^2}\n{4R(N - R) (2N(n_1 + 2n_2) - (n_1 + 2n_2)^2)} \n"
},
{
"math_id": 3,
"text": "\\lambda"
},
{
"math_id": 4,
"text": "Y^2\\sim\\lambda\\chi_1^2"
},
{
"math_id": 5,
"text": "\\widehat{\\lambda}= \\frac{\\operatorname{median}(Y_1^2,Y_2^2,\\ldots, Y_L^2)}{0.456}\n"
},
{
"math_id": 6,
"text": "Y^2"
},
{
"math_id": 7,
"text": "\\chi^2_1"
},
{
"math_id": 8,
"text": "\\alpha"
},
{
"math_id": 9,
"text": "\\alpha=0.05"
}
] |
https://en.wikipedia.org/wiki?curid=59659427
|
596622
|
Arzelà–Ascoli theorem
|
On when a family of real, continuous functions has a uniformly convergent subsequence
The Arzelà–Ascoli theorem is a fundamental result of mathematical analysis giving necessary and sufficient conditions to decide whether every sequence of a given family of real-valued continuous functions defined on a closed and bounded interval has a uniformly convergent subsequence. The main condition is the equicontinuity of the family of functions. The theorem is the basis of many proofs in mathematics, including that of the Peano existence theorem in the theory of ordinary differential equations, Montel's theorem in complex analysis, and the Peter–Weyl theorem in harmonic analysis and various results concerning compactness of integral operators.
The notion of equicontinuity was introduced in the late 19th century by the Italian mathematicians Cesare Arzelà and Giulio Ascoli. A weak form of the theorem was proven by Ascoli (1883–1884), who established the sufficient condition for compactness, and by Arzelà (1895), who established the necessary condition and gave the first clear presentation of the result. A further generalization of the theorem was proven by Fréchet (1906), to sets of real-valued continuous functions with domain a compact metric space. Modern formulations of the theorem allow for the domain to be compact Hausdorff and for the range to be an arbitrary metric space. More general formulations of the theorem exist that give necessary and sufficient conditions for a family of functions from a compactly generated Hausdorff space into a uniform space to be compact in the compact-open topology.
Statement and first consequences.
By definition, a sequence formula_0 of continuous functions on an interval "I" = ["a", "b"] is "uniformly bounded" if there is a number "M" such that
formula_1
for every function "fn" belonging to the sequence, and every "x" ∈ ["a", "b"]. (Here, "M" must be independent of "n" and "x".)
The sequence is said to be "uniformly equicontinuous" if, for every "ε" > 0, there exists a "δ" > 0 such that
formula_2
whenever |"x" − "y"| < "δ" for all functions "fn" in the sequence. (Here, "δ" may depend on "ε", but not "x", "y" or "n".)
One version of the theorem can be stated as follows:
Consider a sequence of real-valued continuous functions { "fn" }"n" ∈ N defined on a closed and bounded interval ["a", "b"] of the real line. If this sequence is uniformly bounded and uniformly equicontinuous, then there exists a subsequence { "fnk" }"k" ∈ N that converges uniformly.
The converse is also true, in the sense that if every subsequence of { "fn" } itself has a uniformly convergent subsequence, then { "fn" } is uniformly bounded and equicontinuous.
<templatestyles src="Math_proof/styles.css" />Proof
The proof is essentially based on a diagonalization argument. The simplest case is of real-valued functions on a closed and bounded interval:
Let "I" = ["a", "b"] ⊂ R be a closed and bounded interval. If F is an infinite set of functions "f" : "I" → R which is uniformly bounded and equicontinuous, then there is a sequence "fn" of elements of F such that "fn" converges uniformly on "I".
Fix an enumeration {"x""i"}"i" ∈N of rational numbers in "I". Since F is uniformly bounded, the set of points {"f"("x"1)}"f"∈F is bounded, and hence by the Bolzano–Weierstrass theorem, there is a sequence {"f""n"1} of distinct functions in F such that {"f""n"1("x"1)} converges. Repeating the same argument for the sequence of points {"f""n"1("x"2)}, there is a subsequence {"f""n"2} of {"f""n"1} such that {"f""n"2("x"2)} converges.
By induction this process can be continued forever, and so there is a chain of subsequences
formula_3
such that, for each k = 1, 2, 3, ..., the subsequence {"fnk"} converges at "x"1, ..., "xk". Now form the diagonal subsequence {"f"} whose mth term fm is the mth term in the mth subsequence {"fnm"}. By construction, fm converges at every rational point of I.
Therefore, given any "ε" > 0 and rational xk in I, there is an integer "N"
"N"("ε", "xk") such that
formula_4
Since the family F is equicontinuous, for this fixed ε and for every x in I, there is an open interval "Ux" containing x such that
formula_5
for all "f" ∈ F and all "s", "t" in I such that "s", "t" ∈ "Ux".
The collection of intervals Ux, "x" ∈ "I", forms an open cover of I. Since I is closed and bounded, by the Heine–Borel theorem I is compact, implying that this covering admits a finite subcover "U"1, ..., "UJ". There exists an integer K such that each open interval Uj, 1 ≤ "j" ≤ "J", contains a rational xk with 1 ≤ "k" ≤ "K". Finally, for any "t" ∈ "I", there are j and k so that t and xk belong to the same interval "Uj". For this choice of k,
formula_6
for all "n", "m" > "N"
max{"N"("ε", "x"1), ..., "N"("ε", "x""K")}. Consequently, the sequence {"fn"} is uniformly Cauchy, and therefore converges to a continuous function, as claimed. This completes the proof.
Immediate examples.
Differentiable functions.
The hypotheses of the theorem are satisfied by a uniformly bounded sequence { "fn" }of differentiable functions with uniformly bounded derivatives. Indeed, uniform boundedness of the derivatives implies by the mean value theorem that for all x and y,
formula_7
where K is the supremum of the derivatives of functions in the sequence and is independent of n. So, given "ε" > 0, let "δ" = "ε"/"K" to verify the definition of equicontinuity of the sequence. This proves the following corollary:
Let {"fn"} be a uniformly bounded sequence of real-valued differentiable functions on ["a", "b"] such that the derivatives {"fn"′} are uniformly bounded. Then there exists a subsequence {"fnk"} that converges uniformly on ["a", "b"].
If, in addition, the sequence of second derivatives is also uniformly bounded, then the derivatives also converge uniformly (up to a subsequence), and so on. Another generalization holds for continuously differentiable functions. Suppose that the functions "fn" are continuously differentiable with derivatives "fn"′. Suppose that "fn"′ are uniformly equicontinuous and uniformly bounded, and that the sequence { "fn" } is pointwise bounded (or just bounded at a single point). Then there is a subsequence of the { "fn" } converging uniformly to a continuously differentiable function.
The diagonalization argument can also be used to show that a family of infinitely differentiable functions, whose derivatives of each order are uniformly bounded, has a uniformly convergent subsequence, all of whose derivatives are also uniformly convergent. This is particularly important in the theory of distributions.
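As a small numerical illustration (an added sketch, not part of the article), the family "fn"("x") = sin("nx")/"n" on [0, 2π] is uniformly bounded and has derivatives cos("nx") bounded by 1, so the corollary above applies; in this particular case the whole sequence already converges uniformly to 0, which the following script checks by estimating sup norms on a grid.
import numpy as np

xs = np.linspace(0.0, 2.0 * np.pi, 2001)   # grid on [0, 2*pi]
for n in (1, 5, 25, 125):
    fn = np.sin(n * xs) / n                # |fn| <= 1/n, so the family is uniformly bounded
    dfn = np.cos(n * xs)                   # |fn'| <= 1, so the derivatives are uniformly bounded
    print(n, np.max(np.abs(fn)), np.max(np.abs(dfn)))
# The sup norm of fn is 1/n, which tends to 0: fn converges uniformly to the zero function.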
Lipschitz and Hölder continuous functions.
The argument given above proves slightly more, specifically: If { "fn" } is a uniformly bounded sequence of real-valued functions on ["a", "b"] such that each "fn" is Lipschitz continuous with the same Lipschitz constant "K",
formula_8
for all "x", "y" ∈ ["a", "b"] and all "fn" , then there is a subsequence that converges uniformly on ["a", "b"].
The limit function is also Lipschitz continuous with the same value K for the Lipschitz constant. A slight refinement is: A set F of functions "f" on ["a", "b"] that is uniformly bounded and satisfies a Hölder condition of order "α", 0 < "α" ≤ 1, with a fixed constant "M",
formula_9
is relatively compact in C(["a", "b"]). In particular, the unit ball of the Hölder space C0,"α"(["a", "b"]) is compact in C(["a", "b"]).
This holds more generally for scalar functions on a compact metric space X satisfying a Hölder condition with respect to the metric on X.
Generalizations.
Euclidean spaces.
The Arzelà–Ascoli theorem holds, more generally, if the functions "fn" take values in d-dimensional Euclidean space R"d", and the proof is very simple: just apply the R-valued version of the Arzelà–Ascoli theorem d times to extract a subsequence that converges uniformly in the first coordinate, then a sub-subsequence that converges uniformly in the first two coordinates, and so on. The above examples generalize easily to the case of functions with values in Euclidean space.
Compact metric spaces and compact Hausdorff spaces.
The definitions of boundedness and equicontinuity can be generalized to the setting of arbitrary compact metric spaces and, more generally still, compact Hausdorff spaces. Let "X" be a compact Hausdorff space, and let "C"("X") be the space of real-valued continuous functions on "X". A subset F ⊂ "C"("X") is said to be "equicontinuous" if for every "x" ∈ "X" and every "ε" > 0, "x" has a neighborhood "Ux" such that
formula_10
A set F ⊂ "C"("X", R) is said to be "pointwise bounded" if for every "x" ∈ "X",
formula_11
A version of the Theorem holds also in the space "C"("X") of real-valued continuous functions on a compact Hausdorff space "X" :
Let "X" be a compact Hausdorff space. Then a subset F of "C"("X") is relatively compact in the topology induced by the uniform norm if and only if it is equicontinuous and pointwise bounded.
The Arzelà–Ascoli theorem is thus a fundamental result in the study of the algebra of continuous functions on a compact Hausdorff space.
Various generalizations of the above quoted result are possible. For instance, the functions can assume values in a metric space or (Hausdorff) topological vector space with only minimal changes to the statement (see, for instance, , ):
Let "X" be a compact Hausdorff space and "Y" a metric space. Then F ⊂ "C"("X", "Y") is compact in the compact-open topology if and only if it is equicontinuous, pointwise relatively compact and closed.
Here pointwise relatively compact means that for each "x" ∈ "X", the set F"x" = { "f" ("x") : "f" ∈ F} is relatively compact in "Y".
In the case that "Y" is complete, the proof given above can be generalized in a way that does not rely on the separability of the domain. On a compact Hausdorff space "X", for instance, the equicontinuity is used to extract, for each ε = 1/"n", a finite open covering of "X" such that the oscillation of any function in the family is less than ε on each open set in the cover. The role of the rationals can then be played by a set of points drawn from each open set in each of the countably many covers obtained in this way, and the main part of the proof proceeds exactly as above. A similar argument is used as a part of the proof for the general version which does not assume completeness of "Y".
Functions on non-compact spaces.
The Arzelà–Ascoli theorem generalises to functions formula_12 where formula_13 is not compact. Particularly important are cases where formula_13 is a topological vector space. Recall that if formula_13
is a topological space and formula_14 is a uniform space (such as any metric space or any topological group, metrisable or not), there is the topology of compact convergence on the set formula_15 of functions formula_12; it is set up so that a sequence (or more generally a
filter or net) of functions converges if and only if it converges "uniformly" on each compact subset of formula_13. Let formula_16 be the subspace of
formula_15 consisting of continuous functions, equipped with the topology of compact convergence.
Then one form of the Arzelà–Ascoli theorem is the following:
Let formula_13 be a topological space, formula_14 a Hausdorff uniform space and formula_17 an equicontinuous set of continuous functions such that formula_18 is relatively compact in formula_14 for each formula_19. Then formula_20 is relatively compact in formula_16.
This theorem immediately gives the more specialised statements above in cases where formula_13 is compact
and the uniform structure of formula_14 is given by a metric. There are a few other variants in terms of
the topology of precompact convergence or other related topologies on
formula_15. It is also possible to extend the statement to functions that are only continuous when restricted to the sets of a covering of formula_13 by compact subsets. For details one can consult Bourbaki (1998), Chapter X, § 2, nr 5.
Non-continuous functions.
Solutions of numerical schemes for parabolic equations are usually piecewise constant, and therefore not continuous, in time. As their jumps nevertheless tend to become small as the time step goes to formula_21, it is possible to establish uniform-in-time convergence properties using a generalisation to non-continuous functions of the classical Arzelà–Ascoli theorem.
Denote by formula_22 the space of functions from formula_13 to formula_14 endowed with the uniform metric
formula_23
Then we have the following:
Let formula_13 be a compact metric space and formula_14 a complete metric space. Let formula_24 be a sequence in formula_22 such that there exists a function formula_25 and a sequence formula_26 satisfying
formula_27
formula_28
Assume also that, for all formula_29, formula_30 is relatively compact in formula_14. Then formula_24 is relatively compact in formula_22, and any limit of formula_24 in this space is in formula_31.
Necessity.
Whereas most formulations of the Arzelà–Ascoli theorem assert sufficient conditions for a family of functions to be (relatively) compact in some topology, these conditions are typically also necessary. For instance, if a set F is compact in "C"("X"), the Banach space of real-valued continuous functions on a compact Hausdorff space with respect to its uniform norm, then it is bounded in the uniform norm on "C"("X") and in particular is pointwise bounded. Let "N"("ε", "U") be the set of all functions in F whose oscillation over an open subset "U" ⊂ "X" is less than "ε":
formula_32
For a fixed "x"∈"X" and "ε", the sets "N"("ε", "U") form an open covering of F as "U" varies over all open neighborhoods of "x". Choosing a finite subcover then gives equicontinuity.
Further examples.
To every function "g" that is "p"-integrable on [0, 1], with 1 < "p" ≤ ∞, associate the function "G" defined on [0, 1] by
formula_33
Let F be the set of functions "G" corresponding to functions "g" in the unit ball of the space "Lp"([0, 1]). If "q" is the Hölder conjugate of "p", defined by 1/"p" + 1/"q" = 1, then Hölder's inequality implies that all functions in F satisfy a Hölder condition with "α" = 1/"q" and constant "M" = 1.
It follows that F is compact in "C"([0, 1]). This means that the correspondence "g" → "G" defines a compact linear operator "T" between the Banach spaces "Lp"([0, 1]) and "C"([0, 1]). Composing with the injection of "C"([0, 1]) into "Lp"([0, 1]), one sees that "T" acts compactly from "Lp"([0, 1]) to itself. The case "p" = 2 can be seen as a simple instance of the fact that the injection from the Sobolev space formula_34 into "L"2(Ω), for Ω a bounded open set in R"d", is compact.
Indeed, the image "T"("B") of the closed unit ball B of X is contained in a compact subset K of Y. The unit ball "B∗" of "Y ∗" defines, by restricting from Y to K, a set F of (linear) continuous functions on K that is bounded and equicontinuous. By Arzelà–Ascoli, for every sequence {"y"}, in "B∗", there is a subsequence that converges uniformly on K, and this implies that the image formula_35 of that subsequence is Cauchy in "X ∗".
"B"("z"0, "r"), with modulus bounded by M, then (for example by Cauchy's formula) its derivative "f" ′ has modulus bounded by in the smaller disk "D"2
"B"("z"0, ). If a family of holomorphic functions on "D"1 is bounded by M on "D"1, it follows that the family F of restrictions to "D"2 is equicontinuous on "D"2. Therefore, a sequence converging uniformly on "D"2 can be extracted. This is a first step in the direction of Montel's theorem.
References.
<templatestyles src="Reflist/styles.css" />
"This article incorporates material from Ascoli–Arzelà theorem on PlanetMath, which is licensed under the ."
|
[
{
"math_id": 0,
"text": "\\{f_n\\}_{n \\in \\mathbb{N}}"
},
{
"math_id": 1,
"text": "\\left|f_n(x)\\right| \\le M"
},
{
"math_id": 2,
"text": "\\left|f_n(x)-f_n(y)\\right| < \\varepsilon"
},
{
"math_id": 3,
"text": "\\left \\{f_{n_1} \\right \\} \\supseteq \\left \\{f_{n_2} \\right \\} \\supseteq \\cdots"
},
{
"math_id": 4,
"text": "|f_n(x_k) - f_m(x_k)| < \\tfrac{\\varepsilon}{3}, \\qquad n, m \\ge N."
},
{
"math_id": 5,
"text": "|f(s)-f(t)| < \\tfrac{\\varepsilon}{3}"
},
{
"math_id": 6,
"text": "\\begin{align}\n\\left |f_n(t)-f_m(t) \\right| &\\le \\left|f_n(t) - f_n(x_k) \\right| + |f_n(x_k) - f_m(x_k)| + |f_m(x_k) - f_m(t)| \\\\\n&< \\tfrac{\\varepsilon}{3} + \\tfrac{\\varepsilon}{3} + \\tfrac{\\varepsilon}{3}\n\\end{align}"
},
{
"math_id": 7,
"text": "\\left|f_n(x) - f_n(y)\\right| \\le K |x-y|,"
},
{
"math_id": 8,
"text": "\\left|f_n(x) - f_n(y)\\right| \\le K|x-y|"
},
{
"math_id": 9,
"text": "\\left|f(x) - f(y)\\right| \\le M \\, |x - y|^\\alpha, \\qquad x, y \\in [a, b]"
},
{
"math_id": 10,
"text": "\\forall y \\in U_x, \\forall f \\in \\mathbf{F} : \\qquad |f(y) - f(x)| < \\varepsilon."
},
{
"math_id": 11,
"text": "\\sup \\{ | f(x) | : f \\in \\mathbf{F} \\} < \\infty."
},
{
"math_id": 12,
"text": "X \\rightarrow Y"
},
{
"math_id": 13,
"text": "X"
},
{
"math_id": 14,
"text": "Y"
},
{
"math_id": 15,
"text": "\\mathfrak{F}(X,Y)"
},
{
"math_id": 16,
"text": "\\mathcal{C}_c(X,Y)"
},
{
"math_id": 17,
"text": "H\\subset\\mathcal{C}_c(X,Y)"
},
{
"math_id": 18,
"text": "H(x)"
},
{
"math_id": 19,
"text": "x\\in X"
},
{
"math_id": 20,
"text": "H"
},
{
"math_id": 21,
"text": "0"
},
{
"math_id": 22,
"text": "S(X,Y)"
},
{
"math_id": 23,
"text": "d_S(v,w)=\\sup_{t\\in X}d_Y(v(t),w(t))."
},
{
"math_id": 24,
"text": "\\{v_n\\}_{n\\in\\mathbb{N}}"
},
{
"math_id": 25,
"text": "\\omega:X\\times X\\to[0,\\infty]"
},
{
"math_id": 26,
"text": "\\{\\delta_n\\}_{n\\in\\mathbb{N}}\\subset[0,\\infty)"
},
{
"math_id": 27,
"text": "\\lim_{d_X(t,t')\\to0}\\omega(t,t')=0,\\quad\\lim_{n\\to\\infty}\\delta_n=0,"
},
{
"math_id": 28,
"text": "\\forall(t,t')\\in X\\times X,\\quad \\forall n\\in\\mathbb{N},\\quad d_Y(v_n(t),v_n(t'))\\leq \\omega(t,t')+\\delta_n."
},
{
"math_id": 29,
"text": "t\\in X"
},
{
"math_id": 30,
"text": "\\{v_n(t):n\\in\\mathbb{N}\\}"
},
{
"math_id": 31,
"text": "C(X,Y)"
},
{
"math_id": 32,
"text": "N(\\varepsilon, U) = \\{f \\mid \\operatorname{osc}_U f < \\varepsilon\\}."
},
{
"math_id": 33,
"text": "G(x) = \\int_0^x g(t) \\, \\mathrm{d}t."
},
{
"math_id": 34,
"text": "H^1_0(\\Omega)"
},
{
"math_id": 35,
"text": "T^*(y^*_{n_k})"
},
{
"math_id": 36,
"text": "C([0,T],L^1(\\mathbb{R}^N))"
},
{
"math_id": 37,
"text": "\\textstyle\\sup_{t\\in [0,T]}\\|v(\\cdot,t)-w(\\cdot,t)\\|_{L^1(\\mathbb{R}^N)}."
},
{
"math_id": 38,
"text": "u_n=u_n(x,t)\\subset C([0,T];L^1(\\mathbb{R}^N))"
},
{
"math_id": 39,
"text": "x\\mapsto u_n(x,t)"
},
{
"math_id": 40,
"text": "t"
},
{
"math_id": 41,
"text": "(t,t')\\in [0,T]\\times[0,T]"
},
{
"math_id": 42,
"text": "n\\in\\mathbb{N}"
},
{
"math_id": 43,
"text": "\\|u_n(\\cdot,t)-u_n(\\cdot,t')\\|_{L^1(\\mathbb{R}^N)}"
},
{
"math_id": 44,
"text": "|t-t'|"
},
{
"math_id": 45,
"text": "\\{x\\mapsto u_n(x,t):n\\in\\mathbb{N}\\}"
},
{
"math_id": 46,
"text": "L^1(\\mathbb{R}^N)"
},
{
"math_id": 47,
"text": "\\{u_n:n\\in\\mathbb{N}\\}"
},
{
"math_id": 48,
"text": "C([0,T],L^1(\\mathbb{R}^N))."
}
] |
https://en.wikipedia.org/wiki?curid=596622
|
59676244
|
QM-AM-GM-HM inequalities
|
Mathematical relationships
In mathematics, the QM-AM-GM-HM inequalities, also known as the mean inequality chain, state the relationship between the harmonic mean, geometric mean, arithmetic mean, and quadratic mean (also known as root mean square). Suppose that formula_0 are positive real numbers. Then
formula_1
These inequalities often appear in mathematical competitions and have applications in many fields of science.
Proof.
There are three inequalities between means to prove. There are various methods to prove the inequalities, including mathematical induction, the Cauchy–Schwarz inequality, Lagrange multipliers, and Jensen's inequality. For several proofs that GM ≤ AM, see Inequality of arithmetic and geometric means.
AM-QM inequality.
From the Cauchy–Schwarz inequality on real numbers, setting one vector to (1, 1, ...):
formula_2 hence formula_3. For positive formula_4 the square root of this gives the inequality.
HM-GM inequality.
The reciprocal of the harmonic mean is the arithmetic mean of the reciprocals formula_5, and it exceeds formula_6 by the AM-GM inequality. formula_7 implies the inequality:
formula_8
The "n" = 2 case.
When "n" = 2, the inequalities become
formula_9 for all formula_10
which can be visualized in a semi-circle whose diameter is ["AB"] and center "D".
Suppose "AC" = "x"1 and "BC" = "x"2. Construct perpendiculars to ["AB"] at "D" and "C" respectively. Join ["CE"] and ["DF"] and further construct a perpendicular ["CG"] to ["DF"] at "G". Then the length of "GF" can be calculated to be the harmonic mean, "CF" to be the geometric mean, "DE" to be the arithmetic mean, and "CE" to be the quadratic mean. The inequalities then follow easily by the Pythagorean theorem.
Tests.
To infer the correct order, the four expressions can be evaluated with two positive numbers.
For formula_11 and formula_12 in particular, this results in formula_13.
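A short script (an added illustration, not part of the original text) reproduces this check and confirms the ordering:
from math import sqrt, prod

def hm(xs): return len(xs) / sum(1 / x for x in xs)        # harmonic mean
def gm(xs): return prod(xs) ** (1 / len(xs))               # geometric mean
def am(xs): return sum(xs) / len(xs)                       # arithmetic mean
def qm(xs): return sqrt(sum(x * x for x in xs) / len(xs))  # quadratic mean (root mean square)

xs = [10, 40]
print(hm(xs), gm(xs), am(xs), qm(xs))   # 16.0, 20.0, 25.0, ~29.155 (= 5*sqrt(34))
assert hm(xs) <= gm(xs) <= am(xs) <= qm(xs)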
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x_1, x_2, \\ldots, x_n"
},
{
"math_id": 1,
"text": "0<\\frac{n}{\\frac{1}{x_1}+\\frac{1}{x_2}+\\cdots+\\frac{1}{x_n}}\\leq\\sqrt[n]{x_1x_2\\cdots x_n}\\leq\\frac{x_1+x_2+\\cdots+x_n}{n} \\leq\\sqrt{\\frac{x_1^2+x_2^2+\\cdots+x_n^2}{n}}."
},
{
"math_id": 2,
"text": "\\left( \\sum_{i=1}^n x_i \\cdot 1 \\right)^2 \\leq \\left( \\sum_{i=1}^n x_i^2 \\right) \\left( \\sum_{i=1}^n 1^2 \\right) = n \\,\\sum_{i=1}^n x_i^2,"
},
{
"math_id": 3,
"text": "\\left( \\frac{\\sum_{i=1}^n x_i}{n} \\right)^2 \\leq \\frac{\\sum_{i=1}^n x_i^2}{n}"
},
{
"math_id": 4,
"text": "x_i"
},
{
"math_id": 5,
"text": "1/x_1 , \\dots, 1/x_n"
},
{
"math_id": 6,
"text": "1/\\sqrt[n]{x_1 \\dots x_n}"
},
{
"math_id": 7,
"text": "x_i > 0"
},
{
"math_id": 8,
"text": " \\frac{n}{\\frac{1}{x_1} + \\dots + \\frac{1}{x_n}} \\leq \\sqrt[n]{x_1\\dots x_n}. "
},
{
"math_id": 9,
"text": "\\frac 2 {\\frac{1}{x_1}+\\frac{1}{x_2}} \\leq \\sqrt{x_1 x_2} \\leq \\frac{x_1+x_2}{2}\\leq\\sqrt{\\frac{x_1^2+x_2^2}{2}}"
},
{
"math_id": 10,
"text": "x_1, x_2 > 0,"
},
{
"math_id": 11,
"text": "x_1=10"
},
{
"math_id": 12,
"text": "x_2=40"
},
{
"math_id": 13,
"text": "16 < 20 < 25 < 5 \\sqrt{34} "
}
] |
https://en.wikipedia.org/wiki?curid=59676244
|
5968131
|
Waveshaper
|
In electronic music, waveshaping is a type of distortion synthesis in which complex spectra are produced from simple tones by altering the shape of the waveforms.
Uses.
Waveshapers are used mainly by electronic musicians to achieve an extra-abrasive sound. This effect is most often used to enhance the sound of a music synthesizer by altering the waveform or vowel. Rock musicians may also use a waveshaper for heavy distortion of a guitar or bass. Some synthesizers or virtual software instruments have built-in waveshapers. The effect can make instruments sound noisy or overdriven.
In digital modeling of analog audio equipment such as tube amplifiers, waveshaping is used to introduce a static, or memoryless, nonlinearity to approximate the transfer characteristic of a vacuum tube or diode limiter.
How it works.
A waveshaper is an audio effect that changes an audio signal by mapping an input signal to the output signal by applying a fixed or variable mathematical function, called the "shaping function" or "transfer function", to the input signal (the term shaping function is preferred to avoid confusion with the transfer function from systems theory). The function can be any function at all.
Mathematically, the operation is defined by the "waveshaper equation"
formula_0
where "f" is the shaping function, "x(t)" is the input function, and "a(t)" is the "index function", which in general may vary as a function of time. This parameter "a" is often used as a constant gain factor called the "distortion index". In practice, the input to the waveshaper, x, is considered on [-1,1] for digitally sampled signals, and f will be designed such that y is also on [-1,1] to prevent unwanted clipping in software.
Commonly used shaping functions.
Sin, arctan, polynomial functions, or piecewise functions (such as the hard clipping function) are commonly used as waveshaping transfer functions. It is also possible to use table-driven functions, consisting of discrete points with some degree of interpolation or linear segments.
Polynomials.
A polynomial is a function of the form
formula_1
Polynomial functions are convenient as shaping functions because, when given a single sinusoid as input, a polynomial of degree "N" will only introduce up to the "N"th harmonic of the sinusoid. To prove this, consider a sinusoid used as input to the general polynomial.
formula_2
Next, use the inverse Euler's formula to obtain complex sinusoids.
formula_3
Finally, use the binomial formula to transform back to trigonometric form and find coefficients for each harmonic.
formula_4
formula_5
From the above equation, several observations can be made about the effect of a polynomial shaping function on a single sinusoid: the output contains no harmonics above formula_6, so the spectrum stays band-limited; each term formula_7 contributes only to the harmonics of orders "n", "n" − 2, "n" − 4, …, ending at the fundamental for odd "n" and at a constant (DC) offset for even "n"; and the strength of the harmonics produced by a term is set by its coefficient formula_8 through the factor formula_9, so the harmonic content depends on the input amplitude "α" as well as on the polynomial itself.
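These properties can be checked numerically (an added sketch, not from the article): shaping a pure sinusoid with a degree-3 polynomial produces spectral peaks only at DC, the fundamental, and the second and third harmonics.
import numpy as np

sr, f0, N = 48000, 1000, 48000                 # one second of signal, so FFT bins fall on whole hertz
t = np.arange(N) / sr
x = 0.9 * np.sin(2 * np.pi * f0 * t)           # single sinusoid, amplitude alpha = 0.9
y = 0.5 * x + 0.3 * x**2 + 0.2 * x**3          # degree-3 polynomial shaping function

spectrum = np.abs(np.fft.rfft(y)) / N
freqs = np.fft.rfftfreq(N, d=1 / sr)
print(freqs[spectrum > 1e-6])                  # prints [0., 1000., 2000., 3000.]: no higher harmonics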
Problems associated with waveshapers.
The sound produced by digital waveshapers tends to be harsh and unattractive because of problems with aliasing. Waveshaping is a non-linear operation, so it is hard to generalize about the effect of a waveshaping function on an input signal; the mathematics of non-linear operations on audio signals is difficult and not well understood, and the effect is amplitude-dependent, among other things. Generally, however, waveshapers, particularly those with sharp corners (i.e. those with discontinuous derivatives), tend to introduce large numbers of high-frequency harmonics. If these introduced harmonics exceed the Nyquist limit, they are heard as harsh inharmonic content with a distinctly metallic sound in the output signal. Oversampling can partly, but not completely, alleviate this problem, depending on how fast the introduced harmonics fall off.
With relatively simple and relatively smooth waveshaping functions (for example sin("a"·"x"), atan("a"·"x"), or polynomial functions), this procedure may reduce aliased content in the harmonic signal to the point that it is musically acceptable. However, waveshaping functions other than polynomials introduce an infinite number of harmonics into the signal, some of which may audibly alias even at the oversampled rate.
Sources.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "y = f(a(t)x(t))"
},
{
"math_id": 1,
"text": " f(x) = a_n x^n + a_{n-1} x^{n-1} + \\cdots + a_2 x^2 + a_1 x + a_0 = \\sum_{n=0}^{N}a_nx^n "
},
{
"math_id": 2,
"text": "\\sum_{n=0}^{N}a_n(\\alpha \\cos(\\omega t + \\phi))^n "
},
{
"math_id": 3,
"text": "\\sum_{n=0}^{N}a_n \\Bigg(\\alpha \\frac{e^{j(\\omega t + \\phi)}+e^{-j(\\omega t + \\phi)}}{2}\\Bigg)^n\n= a_0 + \\sum_{n=1}^{N}\\frac{a_n \\alpha^n}{2^{n-1}}\\frac{(e^{j(\\omega t + \\phi)}+e^{-j(\\omega t + \\phi)})^n}{2}"
},
{
"math_id": 4,
"text": "a_0 + \\sum_{n=1}^{N}\\Bigg[{\\frac{a_n \\alpha^n}{2^{n-1}} \\sum_{k=0}^{n} {{n \\choose k} \\frac{e^{j(n-k)(\\omega t + \\phi)}e^{-jk(\\omega t + \\phi)}}{2}}\\Bigg]}\n\n=a_0 + \\sum_{n=1}^{N}\\Bigg[{\\frac{a_n \\alpha^n}{2^{n-1}} \\sum_{k=0}^{n} {{n \\choose k} \\frac{e^{j(n-2k)(\\omega t + \\phi)}}{2}}\\Bigg]}\n"
},
{
"math_id": 5,
"text": "\n=a_0 + \\sum_{n=1}^{N}\\Bigg[{\\frac{a_n \\alpha^n}{2^{n-1}} \\sum_{k=0}^{\\lfloor n/2 \\rfloor} {{n \\choose k} \\cos {((n-2k)(\\omega t + \\phi))}}\\Bigg]} \n"
},
{
"math_id": 6,
"text": "N\\omega"
},
{
"math_id": 7,
"text": "x^n"
},
{
"math_id": 8,
"text": "a_n"
},
{
"math_id": 9,
"text": "\\frac{a_n \\alpha^n}{2^{n-1}}"
}
] |
https://en.wikipedia.org/wiki?curid=5968131
|
596816
|
Particle-in-cell
|
Mathematical technique used to solve a certain class of partial differential equations
In plasma physics, the particle-in-cell (PIC) method refers to a technique used to solve a certain class of partial differential equations. In this method, individual particles (or fluid elements) in a Lagrangian frame are tracked in continuous phase space, whereas moments of the distribution such as densities and currents are computed simultaneously on Eulerian (stationary) mesh points.
PIC methods were already in use as early as 1955,
even before the first Fortran compilers were available. The method gained popularity for plasma simulation in the late 1950s and early 1960s by Buneman, Dawson, Hockney, Birdsall, Morse and others. In plasma physics applications, the method amounts to following the trajectories of charged particles in self-consistent electromagnetic (or electrostatic) fields computed on a fixed mesh.
Technical aspects.
For many types of problems, the classical PIC method invented by Buneman, Dawson, Hockney, Birdsall, Morse and others is relatively intuitive and straightforward to implement. This probably accounts for much of its success, particularly for plasma simulation, for which the method typically includes the following procedures:
Models which include interactions of particles only through the average fields are called PM (particle-mesh). Those which include direct binary interactions are PP (particle-particle). Models with both types of interactions are called PP-PM or P3M.
Since the early days, it has been recognized that the PIC method is susceptible to error from so-called "discrete particle noise".
This error is statistical in nature, and today it remains less-well understood than for traditional fixed-grid methods, such as Eulerian or semi-Lagrangian schemes.
Modern geometric PIC algorithms are based on a very different theoretical framework. These algorithms use tools of discrete manifolds, interpolating differential forms, and canonical or non-canonical symplectic integrators to guarantee gauge invariance and the conservation of charge and energy-momentum and, more importantly, to preserve the infinite-dimensional symplectic structure of the particle-field system.
These desired features are attributed to the fact that geometric PIC algorithms are built on the more fundamental field-theoretical framework and are directly linked to the perfect form, i.e., the variational principle of physics.
Basics of the PIC plasma simulation technique.
Inside the plasma research community, systems of different species (electrons, ions, neutrals, molecules, dust particles, etc.) are investigated. The set of equations associated with PIC codes are therefore the Lorentz force as the equation of motion, solved in the so-called "pusher" or "particle mover" of the code, and Maxwell's equations determining the electric and magnetic fields, calculated in the "(field) solver".
Super-particles.
The real systems studied are often extremely large in terms of the number of particles they contain. In order to make simulations efficient or at all possible, so-called "super-particles" are used. A super-particle (or "macroparticle") is a computational particle that represents many real particles; it may be millions of electrons or ions in the case of a plasma simulation, or, for instance, a vortex element in a fluid simulation. The number of particles may be rescaled in this way because the acceleration from the Lorentz force depends only on the charge-to-mass ratio, so a super-particle will follow the same trajectory as a real particle would.
The number of real particles corresponding to a super-particle must be chosen such that sufficient statistics can be collected on the particle motion. If there is a significant difference between the density of different species in the system (between ions and neutrals, for instance), separate real to super-particle ratios can be used for them.
The particle mover.
Even with super-particles, the number of simulated particles is usually very large (> 105), and often the particle mover is the most time consuming part of PIC, since it has to be done for each particle separately. Thus, the pusher is required to be of high accuracy and speed and much effort is spent on optimizing the different schemes.
The schemes used for the particle mover can be split into two categories, implicit and explicit solvers. While implicit solvers (e.g. the implicit Euler scheme) calculate the particle velocity from the already updated fields, explicit solvers use only the old force from the previous time step, and are therefore simpler and faster, but require a smaller time step. In PIC simulation the leapfrog method, a second-order explicit method, is used. The "Boris algorithm", which cancels out the magnetic field in the Newton–Lorentz equation, is also used.
For plasma applications, the leapfrog method takes the following form:
formula_0
formula_1
where the subscript formula_2 refers to "old" quantities from the previous time step, formula_3 to updated quantities from the next time step (i.e. formula_4), and velocities are calculated in-between the usual time steps formula_5.
The equations of the Boris scheme that are substituted into the above equations are:
formula_6
formula_7
with
formula_8
formula_9
formula_10
formula_11
and formula_12.
Because of its excellent long-term accuracy, the Boris algorithm is the de facto standard for advancing a charged particle. It was realized that the excellent long-term accuracy of the nonrelativistic Boris algorithm is due to the fact that it conserves phase-space volume, even though it is not symplectic. The global bound on energy error typically associated with symplectic algorithms still holds for the Boris algorithm, making it an effective algorithm for the multi-scale dynamics of plasmas. It has also been shown
that one can improve on the relativistic Boris push to make it both volume preserving and have a constant-velocity solution in crossed E and B fields.
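A compact sketch of the (nonrelativistic) Boris push defined above is given below; it is added here for illustration, and the variable names and test values are assumptions rather than anything prescribed by the method.
import numpy as np

def boris_push(x, v, E, B, q, m, dt):
    # One Boris step: half electric kick, magnetic rotation, half electric kick,
    # then a position update with the new velocity (leapfrog staggering).
    qp = dt * q / (2.0 * m)                          # q' = dt * (q / 2m)
    u = v + qp * E                                   # half acceleration by E
    h = qp * B                                       # rotation vector h = q' B
    s = 2.0 * h / (1.0 + np.dot(h, h))               # s = 2h / (1 + h^2)
    u_prime = u + np.cross(u + np.cross(u, h), s)    # rotation about B
    v_new = u_prime + qp * E                         # second half acceleration by E
    x_new = x + dt * v_new                           # position update
    return x_new, v_new

# Example: electron-like test particle gyrating in a uniform magnetic field (assumed values).
x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
E, B = np.zeros(3), np.array([0.0, 0.0, 0.01])
for _ in range(1000):
    x, v = boris_push(x, v, E, B, q=-1.602e-19, m=9.109e-31, dt=1.0e-12)
print(np.linalg.norm(v))   # with E = 0 the push is a pure rotation, so the speed is conserved up to round-off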
The field solver.
The most commonly used methods for solving Maxwell's equations (or more generally, partial differential equations (PDE)) belong to one of the following three categories:
With the FDM, the continuous domain is replaced with a discrete grid of points, on which the electric and magnetic fields are calculated. Derivatives are then approximated with differences between neighboring grid-point values and thus PDEs are turned into algebraic equations.
Using FEM, the continuous domain is divided into a discrete mesh of elements. The PDEs are treated as an eigenvalue problem and initially a trial solution is calculated using basis functions that are localized in each element. The final solution is then obtained by optimization until the required accuracy is reached.
Also spectral methods, such as the fast Fourier transform (FFT), transform the PDEs into an eigenvalue problem, but this time the basis functions are high order and defined globally over the whole domain. The domain itself is not discretized in this case, it remains continuous. Again, a trial solution is found by inserting the basis functions into the eigenvalue equation and then optimized to determine the best values of the initial trial parameters.
Particle and field weighting.
The name "particle-in-cell" originates in the way that plasma macro-quantities (number density, current density, etc.) are assigned to simulation particles (i.e., the "particle weighting"). Particles can be situated anywhere on the continuous domain, but macro-quantities are calculated only on the mesh points, just as the fields are. To obtain the macro-quantities, one assumes that the particles have a given "shape" determined by the shape function
formula_13
where formula_14 is the coordinate of the particle and formula_15 the observation point. Perhaps the easiest and most used choice for the shape function is the so-called "cloud-in-cell" (CIC) scheme, which is a first order (linear) weighting scheme. Whatever the scheme is, the shape function has to satisfy the following conditions:
space isotropy, charge conservation, and increasing accuracy (convergence) for higher-order terms.
The fields obtained from the field solver are determined only on the grid points and can't be used directly in the particle mover to calculate the force acting on particles, but have to be interpolated via the "field weighting":
formula_16
where the subscript formula_17 labels the grid point. To ensure that the forces acting on particles are self-consistently obtained, the way of calculating macro-quantities from particle positions on the grid points and interpolating fields from grid points to particle positions has to be consistent, too, since they both appear in Maxwell's equations. Above all, the field interpolation scheme should conserve momentum. This can be achieved by choosing the same weighting scheme for particles and fields and by ensuring the appropriate space symmetry (i.e. no self-force and fulfilling the action-reaction law) of the field solver at the same time.
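The first-order cloud-in-cell scheme mentioned above amounts to linear interpolation between the two nearest grid points. The sketch below (assumed example code, not from the article) uses the same linear weights both to deposit charge on the grid and to gather the field back to the particle, which is the consistency requirement described in this paragraph; boundary handling is omitted for brevity.
import numpy as np

def cic_weights(xp, dx):
    # Return the two neighbouring grid indices and their linear (CIC) weights.
    s = xp / dx
    i = int(np.floor(s))
    w_right = s - i
    return i, i + 1, 1.0 - w_right, w_right

def deposit_charge(xp, qp, rho, dx):
    i, j, wi, wj = cic_weights(xp, dx)
    rho[i] += qp * wi / dx
    rho[j] += qp * wj / dx

def gather_field(xp, E, dx):
    i, j, wi, wj = cic_weights(xp, dx)
    return wi * E[i] + wj * E[j]

# Example: a particle at x = 0.3*dx deposits 70% of its charge on node 0 and 30% on node 1;
# the field is interpolated back to the particle with the same weights.
dx, rho, E = 1.0, np.zeros(8), np.linspace(0.0, 1.0, 8)
deposit_charge(0.3, 1.0, rho, dx)
print(rho[:2], gather_field(0.3, E, dx))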
Collisions.
As the field solver is required to be free of self-forces, inside a cell the field generated by a particle must decrease with decreasing distance from the particle, and hence inter-particle forces inside the cells are underestimated. This can be balanced with the aid of Coulomb collisions between charged particles. Simulating the interaction for every pair of a big system would be computationally too expensive, so several Monte Carlo methods have been developed instead. A widely used method is the "binary collision model", in which particles are grouped according to their cell, then these particles are paired randomly, and finally the pairs are collided.
In a real plasma, many other reactions may play a role, ranging from elastic collisions, such as collisions between charged and neutral particles, over inelastic collisions, such as electron-neutral ionization collision, to chemical reactions; each of them requiring separate treatment. Most of the collision models handling charged-neutral collisions use either the "direct Monte-Carlo" scheme, in which all particles carry information about their collision probability, or the "null-collision" scheme, which does not analyze all particles but uses the maximum collision probability for each charged species instead.
Accuracy and stability conditions.
As in every simulation method, also in PIC, the time step and the grid size must be well chosen, so that the time and length scale phenomena of interest are properly resolved in the problem. In addition, time step and grid size affect the speed and accuracy of the code.
For an electrostatic plasma simulation using an explicit time integration scheme (e.g. leapfrog, which is most commonly used), two important conditions regarding the grid size formula_18 and the time step formula_19 should be fulfilled in order to ensure the stability of the solution:
formula_20
formula_21
which can be derived by considering the harmonic oscillations of a one-dimensional unmagnetized plasma. The latter condition is strictly required, but practical considerations related to energy conservation suggest using a much stricter constraint in which the factor 2 is replaced by a number one order of magnitude smaller; the use of formula_22 is typical. Not surprisingly, the natural time scale in the plasma is given by the inverse plasma frequency formula_23 and the natural length scale by the Debye length formula_24.
For an explicit electromagnetic plasma simulation, the time step must also satisfy the CFL condition:
formula_25
where formula_26, and formula_27 is the speed of light.
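A short helper (an added sketch; the expressions for the Debye length and the electron plasma frequency are standard plasma relations rather than something given in the text above) turns these conditions into concrete limits on the grid size and time step.
import numpy as np

eps0, e, me, kB = 8.854e-12, 1.602e-19, 9.109e-31, 1.381e-23   # SI constants

def explicit_pic_limits(n_e, T_e_eV, c=2.998e8):
    # Return (maximum grid size, maximum time step) for an explicit PIC run.
    Te = T_e_eV * e / kB                               # electron temperature in kelvin
    debye = np.sqrt(eps0 * kB * Te / (n_e * e**2))     # Debye length
    w_pe = np.sqrt(n_e * e**2 / (eps0 * me))           # electron plasma frequency
    dx_max = 3.4 * debye                               # grid-size condition
    dt_max = min(0.1 / w_pe, dx_max / c)               # practical time-step limit and CFL limit at dx_max
    return dx_max, dt_max

print(explicit_pic_limits(n_e=1e18, T_e_eV=10.0))      # assumed example plasma parameters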
Applications.
Within plasma physics, PIC simulation has been used successfully to study laser–plasma interactions, electron acceleration and ion heating in the auroral ionosphere, magnetohydrodynamics, magnetic reconnection, ion-temperature-gradient and other microinstabilities in tokamaks, as well as vacuum discharges and dusty plasmas.
Hybrid models may use the PIC method for the kinetic treatment of some species, while other species (that are Maxwellian) are simulated with a fluid model.
PIC simulations have also been applied outside of plasma physics to problems in solid and fluid mechanics.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{\\mathbf{x}_{k+1} - \\mathbf{x}_{k}}{\\Delta t} = \\mathbf{v}_{k+1/2},"
},
{
"math_id": 1,
"text": "\\frac{\\mathbf{v}_{k+1/2} - \\mathbf{v}_{k-1/2}}{\\Delta t} = \\frac{q}{m} \\left( \\mathbf{E}_k + \\frac{\\mathbf{v}_{k+1/2} + \\mathbf{v}_{k-1/2}}{2} \\times \\mathbf{B}_{k} \\right),"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "k+1"
},
{
"math_id": 4,
"text": "t_{k+1} = t_k + \\Delta t"
},
{
"math_id": 5,
"text": "t_k"
},
{
"math_id": 6,
"text": "\\mathbf{x}_{k+1} = \\mathbf{x}_{k} + {\\Delta t} \\mathbf{v}_{k+1/2},"
},
{
"math_id": 7,
"text": "\\mathbf{v}_{k+1/2} = \\mathbf{u}' + q' \\mathbf{E}_k,"
},
{
"math_id": 8,
"text": "\\mathbf{u}' = \\mathbf{u} + (\\mathbf{u} + (\\mathbf{u} \\times \\mathbf{h})) \\times \\mathbf{s},"
},
{
"math_id": 9,
"text": "\\mathbf{u} = \\mathbf{v}_{k-1/2} + q' \\mathbf{E}_k,"
},
{
"math_id": 10,
"text": "\\mathbf{h} = q' \\mathbf{B}_k,"
},
{
"math_id": 11,
"text": "\\mathbf{s} = 2 \\mathbf{h}/(1 + h^2)"
},
{
"math_id": 12,
"text": "q' = \\Delta t \\times (q/2m)"
},
{
"math_id": 13,
"text": "S(\\mathbf{x}-\\mathbf{X}),"
},
{
"math_id": 14,
"text": "\\mathbf{x}"
},
{
"math_id": 15,
"text": "\\mathbf{X}"
},
{
"math_id": 16,
"text": "\\mathbf{E}(\\mathbf{x}) = \\sum_{i}\\mathbf{E}_i S(\\mathbf{x}_i-\\mathbf{x}),"
},
{
"math_id": 17,
"text": "i"
},
{
"math_id": 18,
"text": "\\Delta x"
},
{
"math_id": 19,
"text": "\\Delta t"
},
{
"math_id": 20,
"text": "\\Delta x < 3.4 \\lambda_D,"
},
{
"math_id": 21,
"text": "\\Delta t \\leq 2 \\omega_{pe}^{-1},"
},
{
"math_id": 22,
"text": "\\Delta t \\leq 0.1 \\omega_{pe}^{-1},"
},
{
"math_id": 23,
"text": "\\omega_{pe}^{-1}"
},
{
"math_id": 24,
"text": "\\lambda_D"
},
{
"math_id": 25,
"text": "\\Delta t < \\Delta x / c ,"
},
{
"math_id": 26,
"text": "\\Delta x \\sim \\lambda_D"
},
{
"math_id": 27,
"text": " c"
}
] |
https://en.wikipedia.org/wiki?curid=596816
|
59681951
|
Diagram (mathematical logic)
|
Concept in model theory
In model theory, a branch of mathematical logic, the diagram of a structure is a simple but powerful concept for proving useful properties of a theory, for example the amalgamation property and the joint embedding property, among others.
Definition.
Let formula_0 be a first-order language and formula_1 be a theory over formula_2 For a model formula_3 of formula_1 one expands formula_0 to a new language
formula_4
by adding a new constant symbol formula_5 for each element formula_6 in formula_7 where formula_8 is a subset of the domain of formula_9 Now one may expand formula_3 to the model
formula_10
The positive diagram of formula_3, sometimes denoted formula_11, is the set of all those atomic sentences which hold in formula_3, while the negative diagram, denoted formula_12 is the set of all those atomic sentences which do not hold in formula_13.
The diagram formula_14 of formula_3 is the set of all atomic sentences and negations of atomic sentences of formula_15 that hold in formula_16 Symbolically, formula_17.
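As a small added illustration (not part of the cited definition), take the language with a single binary relation symbol < and the two-element structure {0, 1} with the usual order. The snippet below enumerates the atomic sentences over the new constants c_0 and c_1 built from <; together with the corresponding sentences for equality (handled the same way), these make up the diagram.
from itertools import product

domain = {0: "c_0", 1: "c_1"}
less_than = {(0, 1)}                     # interpretation of < in the structure

positive, negative = [], []
for a, b in product(domain, repeat=2):
    atom = f"{domain[a]} < {domain[b]}"
    (positive if (a, b) in less_than else negative).append(atom)

diagram = positive + [f"not ({s})" for s in negative]
print(diagram)
# ['c_0 < c_1', 'not (c_0 < c_0)', 'not (c_1 < c_0)', 'not (c_1 < c_1)']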
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal L"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "\\mathcal L."
},
{
"math_id": 3,
"text": "\\mathfrak A"
},
{
"math_id": 4,
"text": "\\mathcal L_A := \\mathcal L\\cup \\{c_a:a\\in A\\}"
},
{
"math_id": 5,
"text": "c_a"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "A,"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "\\mathfrak A."
},
{
"math_id": 10,
"text": "\\mathfrak A_A := (\\mathfrak A,a)_{a\\in A}."
},
{
"math_id": 11,
"text": "D^+(\\mathfrak A)"
},
{
"math_id": 12,
"text": "D^-(\\mathfrak A),"
},
{
"math_id": 13,
"text": " \\mathfrak A "
},
{
"math_id": 14,
"text": " D(\\mathfrak A)"
},
{
"math_id": 15,
"text": "\\mathcal L_A"
},
{
"math_id": 16,
"text": "\\mathfrak A_A."
},
{
"math_id": 17,
"text": " D(\\mathfrak A) = D^+(\\mathfrak A) \\cup \\neg D^-(\\mathfrak A)"
}
] |
https://en.wikipedia.org/wiki?curid=59681951
|
596833
|
Tully–Fisher relation
|
Trend in astronomy
In astronomy, the Tully–Fisher relation (TFR) is a widely verified empirical relationship between the mass or intrinsic luminosity of a spiral galaxy and its asymptotic rotation velocity or emission line width. Since the observed brightness of a galaxy is distance-dependent, the relationship can be used to estimate distances to galaxies from measurements of their rotational velocity.
History.
The connection between rotational velocity measured spectroscopically and distance was first used in 1922 by Ernst Öpik to estimate the distance to the Andromeda Galaxy. In the 1970s, Balkowski et al. measured 13 galaxies but focused on using the data to distinguish galaxy shapes rather than to extract distances.
The relationship was first published in 1977 by astronomers R. Brent Tully and J. Richard Fisher. The luminosity is calculated by multiplying the galaxy's apparent brightness by formula_0, where formula_1 is its distance from Earth, and the spectral-line width is measured using long-slit spectroscopy.
A series of collaborative catalogs of galaxy peculiar velocities called Cosmicflows uses Tully–Fisher analysis; the Cosmicflows-4 catalog has reached 10,000 galaxies. Many values of the Hubble constant have been derived from Tully–Fisher analysis, starting with the first paper and continuing through 2023.
Subtypes.
Several different forms of the TFR exist, depending on which precise measures of mass, luminosity or rotation velocity one takes it to relate. Tully and Fisher used optical luminosity, but subsequent work showed the relation to be tighter when defined using microwave to infrared (K band) radiation (a good proxy for stellar mass), and even tighter when luminosity is replaced by the galaxy's total stellar mass. The relation in terms of stellar mass is dubbed the "stellar mass Tully–Fisher relation" (STFR), and its scatter only shows correlations with the galaxy's kinematic morphology, such that more dispersion-supported systems scatter below the relation. The tightest correlation is recovered when considering the total baryonic mass (the sum of its mass in stars and gas). This latter form of the relation is known as the baryonic Tully–Fisher relation (BTFR), and states that baryonic mass is proportional to velocity to the power of roughly 3.5–4.
The TFR can be used to estimate the distance to spiral galaxies by allowing the luminosity of a galaxy to be derived from its directly measurable line width. The distance can then be found by comparing the luminosity to the apparent brightness. Thus the TFR constitutes a rung of the cosmic distance ladder, where it is calibrated using more direct distance measurement techniques and used in turn to calibrate methods extending to larger distance.
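As an illustration of this procedure, the following minimal Python sketch converts a measured rotation velocity and apparent magnitude into a distance estimate; the slope and zero point of the assumed absolute-magnitude calibration are placeholder values chosen for illustration, not constants taken from the literature.
import math

def tfr_distance_mpc(v_rot_kms, apparent_mag, slope=-7.6, zero_point=-3.4):
    """Estimate a distance in Mpc from a Tully-Fisher-style calibration.
    slope and zero_point are hypothetical calibration constants for an
    absolute-magnitude form  M = slope * log10(v_rot) + zero_point."""
    absolute_mag = slope * math.log10(v_rot_kms) + zero_point
    # Distance modulus: m - M = 5*log10(d_pc) - 5
    distance_pc = 10 ** ((apparent_mag - absolute_mag + 5) / 5)
    return distance_pc / 1e6

# Example: a galaxy rotating at 200 km/s with apparent magnitude 12
print(round(tfr_distance_mpc(200.0, 12.0), 1))   # about 38 Mpc with these placeholder constants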
In the dark matter paradigm, a galaxy's rotation velocity (and hence line width) is primarily determined by the mass of the dark matter halo in which it lives, making the TFR a manifestation of the connection between visible and dark matter mass. In Modified Newtonian dynamics (MOND), the BTFR (with power-law index exactly 4) is a direct consequence of the gravitational force law effective at low acceleration.
The analogues of the TFR for non-rotationally-supported galaxies, such as ellipticals, are known as the Faber–Jackson relation and the fundamental plane.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "4\\pi d^2"
},
{
"math_id": 1,
"text": "d"
}
] |
https://en.wikipedia.org/wiki?curid=596833
|
59689176
|
Naum Il'ich Feldman
|
Soviet mathematician (1918–1994)
Naum Il'ich Feldman (26 November 1918 – 20 April 1994) was a Soviet mathematician who specialized in number theory.
Life.
Feldman was born on 26 November 1918 in Melitopol, Zaporizhia Oblast of southeastern Ukraine.
In 1936 he entered the Faculty of Mathematics and Mechanics at the University of Leningrad, where he specialized in number theory under the supervision of Rodion O. Kuzmin. After his graduation in 1941, Feldman was called up by the army and served from October 1941 until the end of World War II. For his service, he was awarded the Order of the Red Star, the Order of the Patriotic War (second class), and the medals "For the Capture of Königsberg", "For the Defence of Moscow", and "For the Victory over Germany in the Great Patriotic War 1941–1945".
After his demobilization, he started his PhD in 1946 at the Institute of Mathematics at the University of Moscow, under the supervision of Alexander O. Gelfond, and he presented his Ph.D. thesis in 1949. In 1950, he became head of the Department of Mathematics of the Ufimsky Oil Institute, where he was assigned until 1954. He lectured at the Moscow Geological Prospecting Institute from 1954 to 1961.
From September 1961 Feldman worked at Moscow State University, first in the department of mathematical analysis, and then in the department of number theory. In 1974 he became Doctor of Science. Feldman got full professorship in 1980.
Feldman died on 20 April 1994.
Work.
Feldman obtained important results in number theory. His main research area were the theory of Diophantine approximations, the theory of transcendental numbers, and Diophantine equations.
In 1899, the French mathematician Émile Borel strengthened the famous theorem of Charles Hermite, who in 1873 had proved the transcendence of the number e, a number that had not been specifically constructed for that purpose. Later, different estimates of the measure of transcendence were considered for other numbers too. Feldman's mentor Gelfond obtained his most famous result in 1948 in his eponymous theorem, which settled Hilbert's seventh problem:
If α and β are algebraic numbers (with α ≠ 0 and α ≠ 1), and if β is not a real rational number, then any value of α^β is a transcendental number.
In 1949, Feldman further improved Gelfond's method to estimate the measure of transcendence of logarithms of algebraic numbers and of periods of elliptic curves. Of special importance is his result from 1960 on the measure of transcendence of the number formula_0.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\pi"
}
] |
https://en.wikipedia.org/wiki?curid=59689176
|
596987
|
Synchrocyclotron
|
Special type of cyclic particle accelerator
A synchrocyclotron is a special type of cyclotron, patented by Edwin McMillan in 1952, in which the frequency of the driving RF electric field is varied to compensate for relativistic effects as the particles' velocity begins to approach the speed of light. This is in contrast to the classical cyclotron, where this frequency is constant.
There are two major differences between the synchrocyclotron and the classical cyclotron. In the synchrocyclotron, only one "dee" (hollow D-shaped sheet metal electrode) retains its classical shape, while the other pole is open (see patent sketch). Furthermore, the frequency of the oscillating electric field in a synchrocyclotron decreases continuously, instead of being kept constant, so as to maintain cyclotron resonance at relativistic velocities. One terminal of the periodically varying electric potential is applied to the dee, and the other terminal is held at ground potential. The protons or deuterons to be accelerated are made to move in circles of increasing radius. The acceleration of the particles takes place as they enter or leave the dee. At the outer edge, the ion beam can be removed with the aid of an electrostatic deflector. The first synchrocyclotron produced 195 MeV deuterons and 390 MeV α-particles.
Differences from the classical cyclotron.
In a classical cyclotron, the angular frequency of the electric field is given by
formula_0,
where formula_1 is the angular frequency of the electric field, formula_2 is the charge on the particle, formula_3 is the magnetic field, and formula_4 is the mass of the particle. This makes the assumption that the particle is classical and does not experience relativistic effects such as the increase of its relativistic mass. These effects start to become significant when formula_5, the velocity of the particle, becomes greater than formula_6. To correct for this, the relativistic mass is used instead of the rest mass; thus, a factor of formula_7 multiplies the mass, such that
formula_8,
where
formula_9.
This is then the angular frequency of the field applied to the particles as they are accelerated around the synchrocyclotron.
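As a small numerical sketch of this relation (the 1.5 T field strength and the energy values below are illustrative choices, not parameters of any particular machine), the driving frequency for a proton can be computed as follows:
import math

# Physical constants (SI units)
Q = 1.602176634e-19      # proton charge, C
M0 = 1.67262192369e-27   # proton rest mass, kg
C = 299792458.0          # speed of light, m/s

def rf_frequency_hz(kinetic_energy_joules, b_field_tesla):
    """Driving frequency f = qB / (2*pi*m0*gamma) for a synchrocyclotron."""
    gamma = 1.0 + kinetic_energy_joules / (M0 * C**2)
    omega = Q * b_field_tesla / (M0 * gamma)   # angular frequency, rad/s
    return omega / (2.0 * math.pi)

# The required frequency drops as the proton gains energy in a 1.5 T field
for mev in (0, 100, 300, 600):
    e_joules = mev * 1e6 * Q   # convert MeV to joules
    print(mev, "MeV ->", round(rf_frequency_hz(e_joules, 1.5) / 1e6, 2), "MHz")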
Advantages.
The chief advantage of the synchrocyclotron is that there is no need to restrict the number of revolutions executed by the ion before its exit. As such, the potential difference supplied between the dees can be much smaller.
The smaller potential difference needed across the gap has the following uses:
Disadvantages.
The main drawback of this device is that, as a result of the variation in the frequency of the oscillating voltage supply, only a very small fraction of the ions leaving the source are captured in phase-stable orbits of maximum radius and energy with the result that the output beam current has a low duty cycle, and the average beam current is only a small fraction of the instantaneous beam current. Thus the machine produces high energy ions, though with comparatively low intensity.
The next development step of the cyclotron concept, the isochronous cyclotron, maintains a constant RF driving frequency and compensates for relativistic effects by increasing the magnetic field with radius. Isochronous cyclotrons are capable of producing much greater beam current than synchrocyclotrons. As a result, isochronous cyclotrons became more popular in the research field.
History.
In 1945, Robert Lyster Thornton at Ernest Lawrence's Radiation Laboratory led the construction of the 730 MeV cyclotron. In 1946, he oversaw the conversion of the cyclotron to the new design made by McMillan, which would become the first synchrocyclotron and could produce 195 MeV deuterons and 390 MeV α-particles.
After the first synchrocyclotron was operational, the Office of Naval Research (ONR) funded two synchrocyclotron construction initiatives. The first funding was in 1946 for Carnegie Institute of Technology to build a 435-MeV synchrocyclotron led by Edward Creutz and to start its nuclear physics research program. The second initiative was in 1947 for University of Chicago to build a 450-MeV synchrocyclotron under the direction of Enrico Fermi.
In 1948, University of Rochester completed the construction of its 240-MeV synchrocyclotron, followed by a completion of 380-MeV synchrocyclotron at Columbia University in 1950.
In 1950 the 435-MeV synchrocyclotron at Carnegie Institute of Technology was operational, followed by 450-MeV synchrocyclotron of University of Chicago in 1951.
The construction of the 400-MeV synchrocyclotron at the University of Liverpool was completed in 1952 and by April 1954 it was operational. The Liverpool synchrocyclotron first demonstrated the extraction of a particle beam from such a machine, removing the constraint of having to fit experiments inside the synchrocyclotron.
At a UNESCO meeting in Paris in December 1951, there was a discussion on finding a solution to have a medium-energy accelerator for the soon-to-be-formed European Organization for Nuclear Research (CERN). The synchrocyclotron was proposed as a solution to bridge the gap before the 28-GeV Proton Synchrotron was completed. In 1952, Cornelis Bakker led the group to design and construct the synchrocyclotron named Synchro-Cyclotron (SC) at CERN. The design of the Synchro-Cyclotron with in circumference started in 1953. The construction started in 1954 and it achieved 600 MeV proton acceleration in August 1957, with the experimental program started in April 1958.
Current developments.
Synchrocyclotrons are attractive for use in proton therapy because of the ability to make compact systems using high magnetic fields. Medical physics companies Ion Beam Applications and Mevion Medical Systems have developed superconducting synchrocyclotrons that can fit comfortably into hospitals.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\omega = \\frac{q B}{m}"
},
{
"math_id": 1,
"text": "\\omega"
},
{
"math_id": 2,
"text": "q"
},
{
"math_id": 3,
"text": "B"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "v"
},
{
"math_id": 6,
"text": " \\approx \\frac{c}{3}"
},
{
"math_id": 7,
"text": "\\gamma"
},
{
"math_id": 8,
"text": "\\omega = \\frac{q B}{m \\gamma}"
},
{
"math_id": 9,
"text": "\\gamma = \\frac{1}{\\sqrt{1-\\frac{v^2}{c^2}}}"
}
] |
https://en.wikipedia.org/wiki?curid=596987
|
59710426
|
Constant chord theorem
|
Invariant cord in one of two intersecting circles based on any point in the other
The constant chord theorem is a statement in elementary geometry about a property of certain chords in two intersecting circles.
The circles formula_0 and formula_1 intersect in the points formula_2 and formula_3. formula_4 is an arbitrary point on formula_0 being different from formula_2 and formula_3. The lines formula_5 and formula_6 intersect the circle formula_1 in formula_7 and formula_8. The constant chord theorem then states that the length of the chord formula_9 in formula_1 does not depend on the location of formula_4 on formula_0, in other words the length is constant.
The theorem stays valid when formula_4 coincides with formula_2 or formula_3, provided one replaces the then undefined line formula_5 or formula_6 by the tangent on formula_0 at formula_4.
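A quick numerical check of the planar statement can be sketched in Python; the circle centres, radii and sample positions of formula_4 below are arbitrary values chosen for illustration only:
import math

def second_intersection(a, b, center, r):
    """Return the point where the line through a and b meets the circle
    (center, r) a second time, assuming b already lies on that circle."""
    dx, dy = b[0] - a[0], b[1] - a[1]
    fx, fy = a[0] - center[0], a[1] - center[1]
    A = dx * dx + dy * dy
    B = 2 * (fx * dx + fy * dy)
    C = fx * fx + fy * fy - r * r
    disc = math.sqrt(B * B - 4 * A * C)
    s1, s2 = (-B + disc) / (2 * A), (-B - disc) / (2 * A)
    s = s1 if abs(s1 - 1.0) > abs(s2 - 1.0) else s2  # b itself is the root s = 1
    return (a[0] + s * dx, a[1] + s * dy)

# Two intersecting circles (arbitrary example values)
c1, r1 = (0.0, 0.0), 2.0
c2, r2 = (2.5, 0.0), 2.0
x = (c2[0] ** 2 + r1 ** 2 - r2 ** 2) / (2 * c2[0])   # abscissa of the intersection points
y = math.sqrt(r1 ** 2 - x ** 2)
P, Q = (x, y), (x, -y)

for t in (0.3, 1.8, 2.5, 4.0):                       # several positions of Z1 on the first circle
    Z1 = (r1 * math.cos(t), r1 * math.sin(t))
    P1 = second_intersection(Z1, P, c2, r2)
    Q1 = second_intersection(Z1, Q, c2, r2)
    print(round(math.dist(P1, Q1), 6))               # the printed chord length stays constant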
A similar theorem exists in three dimensions for the intersection of two spheres. The spheres formula_0 and formula_1 intersect in the circle formula_10. formula_4 is an arbitrary point on the surface of the first sphere formula_0 that is not on the intersection circle formula_10. The extended cone created by formula_10 and formula_4 intersects the second sphere formula_1 in a circle. The length of the diameter of this circle is constant, that is, it does not depend on the location of formula_4 on formula_0.
Nathan Altshiller Court described the constant chord theorem in 1925 in the article "sur deux cercles secants" for the Belgian math journal Mathesis. Eight years later he published "On Two Intersecting Spheres" in the American Mathematical Monthly, which contained the 3-dimensional version. Later it was included in several textbooks, such as Ross Honsberger's "Mathematical Morsels" and Roger B. Nelsen's "Proof Without Words II", where it was given as a problem, or the German geometry textbook "Mit harmonischen Verhältnissen zu Kegelschnitten" by Halbeisen, Hungerbühler and Läuchli, where it was given as a theorem.
|
[
{
"math_id": 0,
"text": "k_1"
},
{
"math_id": 1,
"text": "k_2"
},
{
"math_id": 2,
"text": "P"
},
{
"math_id": 3,
"text": "Q"
},
{
"math_id": 4,
"text": "Z_1"
},
{
"math_id": 5,
"text": "Z_1P"
},
{
"math_id": 6,
"text": "Z_1Q"
},
{
"math_id": 7,
"text": "P_1"
},
{
"math_id": 8,
"text": "Q_1"
},
{
"math_id": 9,
"text": "P_1Q_1"
},
{
"math_id": 10,
"text": "k_s"
}
] |
https://en.wikipedia.org/wiki?curid=59710426
|
59715
|
Scientific notation
|
Method of writing numbers with a large amount of digits
Scientific notation is a way of expressing numbers that are too large or too small to be conveniently written in decimal form, since to do so would require writing out an inconveniently long string of digits. It may be referred to as scientific form or standard index form, or standard form in the United Kingdom. This base ten notation is commonly used by scientists, mathematicians, and engineers, in part because it can simplify certain arithmetic operations. On scientific calculators, it is usually known as "SCI" display mode.
In scientific notation, nonzero numbers are written in the form
<templatestyles src="Block indent/styles.css"/>"m" × 10"n"
or "m" times ten raised to the power of "n", where "n" is an integer, and the coefficient "m" is a nonzero real number (usually between 1 and 10 in absolute value, and nearly always written as a terminating decimal). The integer "n" is called the exponent and the real number "m" is called the "significand" or "mantissa". The term "mantissa" can be ambiguous where logarithms are involved, because it is also the traditional name of the fractional part of the common logarithm. If the number is negative then a minus sign precedes "m", as in ordinary decimal notation. In normalized notation, the exponent is chosen so that the absolute value (modulus) of the significand "m" is at least 1 but less than 10.
Decimal floating point is a computer arithmetic system closely related to scientific notation.
Normalized notation.
Any real number can be written in the form "m" × 10"n" in many ways: for example, 350 can be written as 3.5 × 10², 35 × 10¹ or 350 × 10⁰.
In "normalized" scientific notation (called "standard form" in the United Kingdom), the exponent "n" is chosen so that the absolute value of "m" remains at least one but less than ten (). Thus 350 is written as . This form allows easy comparison of numbers: numbers with bigger exponents are (due to the normalization) larger than those with smaller exponents, and subtraction of exponents gives an estimate of the number of orders of magnitude separating the numbers. It is also the form that is required when using tables of common logarithms. In normalized notation, the exponent "n" is negative for a number with absolute value between 0 and 1 (e.g. 0.5 is written as ). The 10 and exponent are often omitted when the exponent is 0. For a series of numbers that are to be added or subtracted (or otherwise compared), it can be convenient to use the same value of "m" for all elements of the series.
Normalized scientific form is the typical form of expression of large numbers in many fields, unless an unnormalized or differently normalized form, such as engineering notation, is desired. Normalized scientific notation is often called exponential notation – although the latter term is more general and also applies when "m" is not restricted to the range 1 to 10 (as in engineering notation for instance) and to bases other than 10 (for example, ).
Engineering notation.
Engineering notation (often named "ENG" on scientific calculators) differs from normalized scientific notation in that the exponent "n" is restricted to multiples of 3. Consequently, the absolute value of "m" is in the range 1 ≤ |"m"| < 1000, rather than 1 ≤ |"m"| < 10. Though similar in concept, engineering notation is rarely called scientific notation. Engineering notation allows the numbers to explicitly match their corresponding SI prefixes, which facilitates reading and oral communication. For example, 12.5 × 10⁻⁹ m can be read as "twelve-point-five nanometres" and written as 12.5 nm, while its scientific notation equivalent 1.25 × 10⁻⁸ m would likely be read out as "one-point-two-five times ten-to-the-negative-eight metres".
Significant figures.
A significant figure is a digit in a number that adds to its precision. This includes all nonzero numbers, zeroes between significant digits, and zeroes indicated to be significant.
Leading and trailing zeroes are not significant digits, because they exist only to show the scale of the number. Unfortunately, this leads to ambiguity. The number 1230400 is usually read to have five significant figures: 1, 2, 3, 0, and 4, the final two zeroes serving only as placeholders and adding no precision. The same number, however, would be used if the last two digits were also measured precisely and found to equal 0 – seven significant figures.
When a number is converted into normalized scientific notation, it is scaled down to a number between 1 and 10. All of the significant digits remain, but the placeholding zeroes are no longer required. Thus 1230400 would become 1.2304 × 10⁶ if it had five significant digits. If the number were known to six or seven significant figures, it would be shown as 1.23040 × 10⁶ or 1.230400 × 10⁶. Thus, an additional advantage of scientific notation is that the number of significant figures is unambiguous.
Estimated final digits.
It is customary in scientific measurement to record all the definitely known digits from the measurement and to estimate at least one additional digit if there is any information at all available on its value. The resulting number contains more information than it would without the extra digit, which may be considered a significant digit because it conveys some information leading to greater precision in measurements and in aggregations of measurements (adding them or multiplying them together).
Additional information about precision can be conveyed through additional notation. It is often useful to know how exact the final digit or digits are. For instance, the accepted value of the mass of the proton can properly be expressed as , which is shorthand for . However it is still unclear whether the error ( in this case) is the maximum possible error, standard error, or some other confidence interval.
E notation.
Calculators and computer programs typically present very large or small numbers using scientific notation, and some can be configured to uniformly present all numbers that way. Because superscript exponents like 10⁷ can be inconvenient to display or type, the letter "E" or "e" (for "exponent") is often used to represent "times ten raised to the power of", so that the notation "m"&hairsp;E&hairsp;"n" for a decimal significand "m" and integer exponent "n" means the same as "m" × 10"n". For example 6.022 × 10²³ is written as 6.022E23 or 6.022e23, and 1.6 × 10⁻³⁵ is written as 1.6E-35 or 1.6e-35. While common in computer output, this abbreviated version of scientific notation is discouraged for published documents by some style guides.
Most popular programming languages – including Fortran, C/C++, Python, and JavaScript – use this "E" notation, which comes from Fortran and was present in the first version released for the IBM 704 in 1956. The E notation was already used by the developers of SHARE Operating System (SOS) for the IBM 709 in 1958. Later versions of Fortran (at least since FORTRAN IV as of 1961) also use "D" to signify double precision numbers in scientific notation, and newer Fortran compilers use "Q" to signify quadruple precision. The MATLAB programming language supports the use of either "E" or "D".
The ALGOL 60 (1960) programming language uses a subscript ten "10" character instead of the letter "E", for example: codice_0. This presented a challenge for computer systems which did not provide such a character, so ALGOL W (1966) replaced the symbol by a single quote, e.g. codice_1, and some Soviet Algol variants allowed the use of the Cyrillic letter "ю", e.g. . Subsequently, the ALGOL 68 programming language provided a choice of characters: , , , , or codice_2. The ALGOL "10" character was included in the Soviet GOST 10859 text encoding (1964), and was added to Unicode 5.2 (2009) as .
Some programming languages use other symbols. For instance, Simula uses (or for long), as in . Mathematica supports the shorthand notation (reserving the letter for the mathematical constant "e").
The first pocket calculators supporting scientific notation appeared in 1972. The displays of pocket calculators of the 1970s did not display an explicit symbol between significand and exponent; instead, one or more digits were left blank (e.g. codice_3, as seen in the HP-25), or a pair of smaller and slightly raised digits were reserved for the exponent (e.g. codice_3, as seen in the Commodore PR100). In 1976, Hewlett-Packard calculator user Jim Davidson coined the term "decapower" for the scientific-notation exponent to distinguish it from "normal" exponents, and suggested the letter "D" as a separator between significand and exponent in typewritten numbers (for example, ); these gained some currency in the programmable calculator user community. The letters "E" or "D" were used as a scientific-notation separator by Sharp pocket computers released between 1987 and 1995, "E" used for 10-digit numbers and "D" used for 20-digit double-precision numbers. The Texas Instruments TI-83 and TI-84 series of calculators (1996–present) use a small capital codice_5 for the separator.
In 1962, Ronald O. Whitaker of Rowco Engineering Co. proposed a power-of-ten system nomenclature where the exponent would be circled, e.g. 6.022 × 10³ would be written as "6.022③".
Use of spaces.
In normalized scientific notation, in E notation, and in engineering notation, the space (which in typesetting may be represented by a normal width space or a thin space) that is allowed "only" before and after "×" or in front of "E" is sometimes omitted, though it is less common to do so before the alphabetical character.
Converting numbers.
Converting a number in these cases means to either convert the number into scientific notation form, convert it back into decimal form or to change the exponent part of the equation. None of these alter the actual number, only how it's expressed.
Decimal to scientific.
First, move the decimal separator point sufficient places, "n", to put the number's value within a desired range, between 1 and 10 for normalized notation. If the decimal was moved to the left, append codice_6; to the right, codice_7. To represent the number 1,230,400 in normalized scientific notation, the decimal separator would be moved 6 digits to the left and codice_8 appended, resulting in 1.2304 × 10⁶. The number −0.0040321 would have its decimal separator shifted 3 digits to the right instead of the left and yield −4.0321 × 10⁻³ as a result.
Scientific to decimal.
Converting a number from scientific notation to decimal notation, first remove the codice_6 on the end, then shift the decimal separator "n" digits to the right (positive "n") or left (negative "n"). The number 1.2304 × 10⁶ would have its decimal separator shifted 6 digits to the right and become 1,230,400, while −4.0321 × 10⁻³ would have its decimal separator moved 3 digits to the left and be −0.0040321.
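Both conversion directions can be sketched in a few lines of Python (a minimal illustration written for this description; Python's own formatting, shown in the last line, uses E notation):
def to_scientific(x):
    """Return (m, n) with x == m * 10**n and 1 <= |m| < 10 (x must be nonzero)."""
    m, n = x, 0
    while abs(m) >= 10:
        m /= 10
        n += 1
    while abs(m) < 1:
        m *= 10
        n -= 1
    return m, n

def to_decimal(m, n):
    return m * 10 ** n

print(to_scientific(1230400))   # approximately (1.2304, 6)
print(to_decimal(1.2304, 6))    # 1230400.0
print(f"{1230400:e}")           # 1.230400e+06, the built-in E notation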
Exponential.
Conversion between different scientific notation representations of the same number with different exponential values is achieved by performing opposite operations of multiplication or division by a power of ten on the significand and a corresponding subtraction from or addition to the exponent part. The decimal separator in the significand is shifted "x" places to the left (or right) and "x" is added to (or subtracted from) the exponent, as shown below.
<templatestyles src="Block indent/styles.css"/>= = = 1234
Basic operations.
Given two numbers in scientific notation,
formula_0
and
formula_1
Multiplication and division are performed using the rules for operation with exponentiation:
formula_2
and
formula_3
Some examples are:
formula_4
and
formula_5
Addition and subtraction require the numbers to be represented using the same exponential part, so that the significand can be simply added or subtracted:
<templatestyles src="Block indent/styles.css"/>formula_6 and formula_7 with formula_8
Next, add or subtract the significands:
formula_9
An example:
formula_10
Other bases.
While base ten is normally used for scientific notation, powers of other bases can be used too, base 2 being the next most commonly used one.
For example, in base-2 scientific notation, the number 1001b in binary (=9d) is written as 1.001b × 2d^11b or 1.001b × 10b^11b using binary numbers (or shorter 1.001 × 10^11 if binary context is obvious). In E notation, this is written as 1.001bE11b (or shorter: 1.001E11) with the letter "E" now standing for "times two (10b) to the power" here. In order to better distinguish this base-2 exponent from a base-10 exponent, a base-2 exponent is sometimes also indicated by using the letter "B" instead of "E", a shorthand notation originally proposed by Bruce Alan Martin of Brookhaven National Laboratory in 1968, as in 1.001bB11b (or shorter: 1.001B11). For comparison, the same number in decimal representation: 1.125 × 2^3 (using decimal representation), or 1.125B3 (still using decimal representation). Some calculators use a mixed representation for binary floating point numbers, where the exponent is displayed as decimal number even in binary mode, so the above becomes 1.001b × 10b^3d or shorter 1.001B3.
This is closely related to the base-2 floating-point representation commonly used in computer arithmetic, and the usage of IEC binary prefixes (e.g. 1B10 for 1×2^10 (kibi), 1B20 for 1×2^20 (mebi), 1B30 for 1×2^30 (gibi), 1B40 for 1×2^40 (tebi)).
Similar to "B" (or "b"), the letters "H" (or "h") and "O" (or "o", or "C") are sometimes also used to indicate "times 16 or 8 to the power" as in 1.25 = 1.40h × 10h^0h = 1.40H0 = 1.40h0, or 98000 = 2.7732o × 10o^5o = 2.7732o5 = 2.7732C5.
Another similar convention to denote base-2 exponents is using a letter "P" (or "p", for "power"). In this notation the significand is always meant to be hexadecimal, whereas the exponent is always meant to be decimal. This notation can be produced by implementations of the "printf" family of functions following the C99 specification and (Single Unix Specification) IEEE Std 1003.1 POSIX standard, when using the "%a" or "%A" conversion specifiers. Starting with C++11, C++ I/O functions could parse and print the P notation as well. Meanwhile, the notation has been fully adopted by the language standard since C++17. Apple's Swift supports it as well. It is also required by the IEEE 754-2008 binary floating-point standard. Example: 1.3DEp42 represents 1.3DEh × 2^42.
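As a small illustration of the P notation, Python's built-in float.hex and float.fromhex methods use exactly this hexadecimal-significand, decimal-exponent form:
x = 1.125
print(x.hex())                         # 0x1.2000000000000p+0, i.e. 1.2h × 2^0
print(float.fromhex("0x1.3dep42"))     # parses the 1.3DEh × 2^42 example given above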
Engineering notation can be viewed as a base-1000 scientific notation.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "x_0=m_0\\times10^{n_0}"
},
{
"math_id": 1,
"text": "x_1=m_1\\times10^{n_1}"
},
{
"math_id": 2,
"text": "x_0 x_1=m_0 m_1\\times10^{n_0+n_1}"
},
{
"math_id": 3,
"text": "\\frac{x_0}{x_1}=\\frac{m_0}{m_1}\\times10^{n_0-n_1}"
},
{
"math_id": 4,
"text": "5.67\\times10^{-5} \\times 2.34\\times10^2 \\approx 13.3\\times10^{-5+2} = 13.3\\times10^{-3} = 1.33\\times10^{-2}"
},
{
"math_id": 5,
"text": "\\frac{2.34\\times10^2}{5.67\\times10^{-5}} \\approx 0.413\\times10^{2-(-5)} = 0.413\\times10^{7} = 4.13\\times10^6"
},
{
"math_id": 6,
"text": "x_0 = m_0 \\times10^{n_0}"
},
{
"math_id": 7,
"text": "x_1 = m_1 \\times10^{n_1}"
},
{
"math_id": 8,
"text": "n_0 = n_1"
},
{
"math_id": 9,
"text": "x_0 \\pm x_1=(m_0\\pm m_1)\\times10^{n_0}"
},
{
"math_id": 10,
"text": "2.34\\times10^{-5} + 5.67\\times10^{-6} = 2.34\\times10^{-5} + 0.567\\times10^{-5} = 2.907\\times10^{-5}"
}
] |
https://en.wikipedia.org/wiki?curid=59715
|
59718
|
Identity matrix
|
Square matrix with ones on the main diagonal and zeros elsewhere
In linear algebra, the identity matrix of size formula_0 is the formula_1 square matrix with ones on the main diagonal and zeros elsewhere. It has unique properties, for example when the identity matrix represents a geometric transformation, the object remains unchanged by the transformation. In other contexts, it is analogous to multiplying by the number 1.
Terminology and notation.
The identity matrix is often denoted by formula_2, or simply by formula_3 if the size is immaterial or can be trivially determined by the context.
formula_4
The term unit matrix has also been widely used, but the term "identity matrix" is now standard. The term "unit matrix" is ambiguous, because it is also used for a matrix of ones and for any unit of the ring of all formula_1 matrices.
In some fields, such as group theory or quantum mechanics, the identity matrix is sometimes denoted by a boldface one, formula_5, or called "id" (short for identity). Less frequently, some mathematics books use formula_6 or formula_7 to represent the identity matrix, standing for "unit matrix" and the German word "Einheitsmatrix" respectively.
In terms of a notation that is sometimes used to concisely describe diagonal matrices, the identity matrix can be written as
formula_8
The identity matrix can also be written using the Kronecker delta notation:
formula_9
Properties.
When formula_10 is an formula_11 matrix, it is a property of matrix multiplication that
formula_12
In particular, the identity matrix serves as the multiplicative identity of the matrix ring of all formula_1 matrices, and as the identity element of the general linear group formula_13, which consists of all invertible formula_1 matrices under the matrix multiplication operation. In particular, the identity matrix is invertible. It is an involutory matrix, equal to its own inverse. In this group, two square matrices have the identity matrix as their product exactly when they are the inverses of each other.
When formula_1 matrices are used to represent linear transformations from an formula_0-dimensional vector space to itself, the identity matrix formula_2 represents the identity function, for whatever basis was used in this representation.
The formula_14th column of an identity matrix is the unit vector formula_15, a vector whose formula_14th entry is 1 and 0 elsewhere. The determinant of the identity matrix is 1, and its trace is formula_0.
The identity matrix is the only idempotent matrix with non-zero determinant. That is, it is the only matrix such that, when multiplied by itself, the result is itself, and all of its rows and columns are linearly independent.
The principal square root of an identity matrix is itself, and this is its only positive-definite square root. However, every identity matrix with at least two rows and columns has an infinitude of symmetric square roots.
The rank of an identity matrix formula_2 equals the size formula_0, i.e.:
formula_16
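Several of these properties can be verified numerically; the following short sketch assumes the NumPy library is available and uses an arbitrary 3 × 3 example matrix:
import numpy as np

n = 3
I = np.eye(n)                     # the 3x3 identity matrix
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 4.0],
              [5.0, 0.0, 6.0]])

print(np.allclose(A @ I, A) and np.allclose(I @ A, A))  # multiplicative identity
print(np.linalg.det(I), np.trace(I))                    # determinant 1, trace n
print(np.linalg.matrix_rank(I))                         # rank n
print(np.allclose(I @ I, I))                            # idempotent (and involutory)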
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n\\times n"
},
{
"math_id": 2,
"text": "I_n"
},
{
"math_id": 3,
"text": "I"
},
{
"math_id": 4,
"text": "\nI_1 = \\begin{bmatrix} 1 \\end{bmatrix}\n,\\ \nI_2 = \\begin{bmatrix}\n1 & 0 \\\\\n0 & 1 \\end{bmatrix}\n,\\ \nI_3 = \\begin{bmatrix}\n1 & 0 & 0 \\\\\n0 & 1 & 0 \\\\\n0 & 0 & 1 \\end{bmatrix}\n,\\ \\dots ,\\ \nI_n = \\begin{bmatrix}\n1 & 0 & 0 & \\cdots & 0 \\\\\n0 & 1 & 0 & \\cdots & 0 \\\\\n0 & 0 & 1 & \\cdots & 0 \\\\\n\\vdots & \\vdots & \\vdots & \\ddots & \\vdots \\\\\n0 & 0 & 0 & \\cdots & 1 \\end{bmatrix}.\n"
},
{
"math_id": 5,
"text": "\\mathbf{1}"
},
{
"math_id": 6,
"text": "U"
},
{
"math_id": 7,
"text": "E"
},
{
"math_id": 8,
"text": " I_n = \\operatorname{diag}(1, 1, \\dots, 1)."
},
{
"math_id": 9,
"text": "(I_n)_{ij} = \\delta_{ij}."
},
{
"math_id": 10,
"text": "A"
},
{
"math_id": 11,
"text": "m\\times n"
},
{
"math_id": 12,
"text": "I_m A = A I_n = A."
},
{
"math_id": 13,
"text": "GL(n)"
},
{
"math_id": 14,
"text": "i"
},
{
"math_id": 15,
"text": "e_i"
},
{
"math_id": 16,
"text": "\\operatorname{rank}(I_n) = n ."
}
] |
https://en.wikipedia.org/wiki?curid=59718
|
59730114
|
Parallel external memory
|
In computer science, a parallel external memory (PEM) model is a cache-aware, external-memory abstract machine. It is the parallel-computing analogue of the single-processor external memory (EM) model. In a similar way, it is the cache-aware analogue of the parallel random-access machine (PRAM). The PEM model consists of a number of processors, together with their respective private caches and a shared main memory.
Model.
Definition.
The PEM model is a combination of the EM model and the PRAM model. The PEM model is a computation model which consists of formula_0 processors and a two-level memory hierarchy. This memory hierarchy consists of a large external memory (main memory) of size formula_1 and formula_0 small internal memories (caches). The processors share the main memory. Each cache is exclusive to a single processor, and a processor cannot access another processor's cache. The caches have a size formula_2 which is partitioned into blocks of size formula_3. The processors can only perform operations on data which are in their cache. The data can be transferred between the main memory and the cache in blocks of size formula_3.
I/O complexity.
The complexity measure of the PEM model is the I/O complexity, which determines the number of parallel block transfers between the main memory and the cache. During a parallel block transfer each processor can transfer a block. So if formula_0 processors each load a data block of size formula_3 from the main memory into their caches in parallel, it is considered as an I/O complexity of formula_4, not formula_5. A program in the PEM model should minimize the data transfer between main memory and caches and operate as much as possible on the data in the caches.
Read/write conflicts.
In the PEM model, there is no direct communication network between the P processors. The processors have to communicate indirectly over the main memory. If multiple processors try to access the same block in main memory concurrently, read/write conflicts occur. Like in the PRAM model, three different variations of this problem are considered: concurrent read, concurrent write (CRCW); concurrent read, exclusive write (CREW); and exclusive read, exclusive write (EREW).
The following two algorithms solve the CREW and EREW problem if formula_6 processors write to the same block simultaneously.
A first approach is to serialize the write operations. Only one processor after the other writes to the block. This results in a total of formula_0 parallel block transfers. A second approach needs formula_7 parallel block transfers and an additional block for each processor. The main idea is to schedule the write operations in a binary tree fashion and gradually combine the data into a single block. In the first round formula_0 processors combine their blocks into formula_8 blocks. Then formula_8 processors combine the formula_8 blocks into formula_9. This procedure is continued until all the data is combined in one block.
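The tree-style combining of the second approach can be sketched as follows (a sequential simulation only: each iteration of the loop stands for one parallel block transfer, and combine is a placeholder for whatever associative merge of two blocks the algorithm needs):
def combined_write(blocks, combine):
    """Simulate the O(log P) write-combining scheme.

    blocks  -- one block per processor (len(blocks) == P)
    combine -- associative function merging two blocks into one
    """
    rounds = 0
    while len(blocks) > 1:
        # In one parallel round, pairs of processors merge their blocks.
        paired = [combine(blocks[i], blocks[i + 1])
                  for i in range(0, len(blocks) - 1, 2)]
        if len(blocks) % 2 == 1:          # an unpaired block is carried over
            paired.append(blocks[-1])
        blocks = paired
        rounds += 1
    return blocks[0], rounds              # final block and number of rounds

# Example: 8 processors each contribute a partial sum; log2(8) = 3 rounds
block, rounds = combined_write([1, 2, 3, 4, 5, 6, 7, 8], lambda a, b: a + b)
print(block, rounds)                      # 36 3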
Examples.
Multiway partitioning.
Let formula_10 be a vector of d-1 pivots sorted in increasing order. Let A be an unordered set of N elements. A d-way partition of A is a set formula_11, where formula_12 and formula_13 for formula_14. formula_15 is called the i-th bucket. The elements in formula_15 are greater than formula_16 and smaller than formula_17. In the following algorithm the input is partitioned into N/P-sized contiguous segments formula_18 in main memory. The processor i primarily works on the segment formula_19. The multiway partitioning algorithm (codice_0) uses a PEM prefix sum algorithm to calculate the prefix sum with the optimal formula_20 I/O complexity. This algorithm simulates an optimal PRAM prefix sum algorithm.
// Compute parallelly a d-way partition on the data segments formula_19
for each processor i in parallel do
Read the vector of pivots M into the cache.
Partition formula_19 into d buckets and let vector formula_21 be the number of items in each bucket.
end for
Run PEM prefix sum on the set of vectors formula_22 simultaneously.
// Use the prefix sum vector to compute the final partition
for each processor i in parallel do
Write the elements of formula_19 into memory locations offset appropriately by formula_23 and formula_24.
end for
Using the prefix sums stored in formula_25 the last processor P calculates the vector B of bucket sizes and returns it.
If the vector of formula_26 pivots M and the input set A are located in contiguous memory, then the d-way partitioning problem can be solved in the PEM model with formula_27 I/O complexity. The content of the final buckets have to be located in contiguous memory.
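The same partitioning idea can be illustrated with a sequential Python sketch (written here for exposition; the parallel-for phases of the pseudocode above are simulated by ordinary loops, and the prefix sum is computed directly):
import bisect

def multiway_partition(A, pivots, P):
    """Simulate the d-way partitioning step sequentially.

    Returns the rearranged array and the bucket sizes.  Bucket j holds the
    items x with pivots[j-1] <= x < pivots[j] (boundaries chosen via bisect)."""
    d = len(pivots) + 1
    seg = (len(A) + P - 1) // P
    segments = [A[i * seg:(i + 1) * seg] for i in range(P)]

    # Phase 1: each "processor" counts how many of its items fall in each bucket.
    counts = [[0] * d for _ in range(P)]
    for i, S in enumerate(segments):
        for x in S:
            counts[i][bisect.bisect_right(pivots, x)] += 1

    # Phase 2: exclusive prefix sums give every (processor, bucket) pair its offset.
    offsets = [[0] * d for _ in range(P)]
    total = 0
    bucket_sizes = [0] * d
    for j in range(d):
        for i in range(P):
            offsets[i][j] = total
            total += counts[i][j]
            bucket_sizes[j] += counts[i][j]

    # Phase 3: each "processor" writes its items to the reserved offsets.
    out = [None] * len(A)
    for i, S in enumerate(segments):
        cursor = offsets[i][:]
        for x in S:
            j = bisect.bisect_right(pivots, x)
            out[cursor[j]] = x
            cursor[j] += 1
    return out, bucket_sizes

print(multiway_partition([5, 9, 1, 7, 3, 8, 2, 6], pivots=[4, 7], P=2))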
Selection.
The selection problem is about finding the k-th smallest item in an unordered list A of size N.
The following code makes use of codice_1, a PRAM-optimal sorting algorithm which runs in formula_28 time, and codice_2, a cache-optimal single-processor selection algorithm.
if formula_29 then
formula_30
return formula_31
end if
//Find median of each formula_19
for each processor i in parallel do
formula_32
end for
// Sort medians
formula_33
// Partition around median of medians
formula_34
if formula_35 then
return formula_36
else
return formula_37
end if
Under the assumption that the input is stored in contiguous memory, codice_3 has an I/O complexity of:
formula_38
Distribution sort.
Distribution sort partitions an input list A of size N into d disjoint buckets of similar size. Every bucket is then sorted recursively and the results are combined into a fully sorted list.
If formula_39 the task is delegated to a cache-optimal single-processor sorting algorithm.
Otherwise the following algorithm is used:
// Sample formula_40 elements from A
for each processor i in parallel do
if formula_41 then
formula_42
Load formula_19 in M-sized pages and sort pages individually
else
formula_43
Load and sort formula_19 as single page
end if
Pick every formula_44'th element from each sorted memory page into contiguous vector formula_45 of samples
end for
in parallel do
Combine vectors formula_46 into a single contiguous vector formula_47
Make formula_48 copies of formula_47: formula_49
end do
// Find formula_48 pivots formula_50
for formula_51 to formula_48 in parallel do
formula_52
end for
Pack pivots in contiguous array formula_53
// Partition A around pivots into buckets formula_54
formula_55
// Recursively sort buckets
for formula_51 to formula_56 in parallel do
recursively call formula_57 on bucket j of size formula_58
using formula_59 processors responsible for elements in bucket j
end for
The I/O complexity of codice_4 is:
formula_60
where
formula_61
If the number of processors is chosen such that formula_62 and formula_63, the I/O complexity is then:
formula_64
Other PEM algorithms.
Here formula_65 is the time it takes to sort N items with P processors in the PEM model.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "P"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "B"
},
{
"math_id": 4,
"text": "O(1)"
},
{
"math_id": 5,
"text": "O(P)"
},
{
"math_id": 6,
"text": "P \\leq B"
},
{
"math_id": 7,
"text": "O(\\log(P))"
},
{
"math_id": 8,
"text": "P/2"
},
{
"math_id": 9,
"text": "P/4"
},
{
"math_id": 10,
"text": "M=\\{m_1,...,m_{d-1}\\}"
},
{
"math_id": 11,
"text": "\\Pi=\\{A_1,...,A_d\\}"
},
{
"math_id": 12,
"text": "\\cup_{i=1}^d A_i = A"
},
{
"math_id": 13,
"text": "A_i\\cap A_j=\\emptyset"
},
{
"math_id": 14,
"text": "1\\leq i<j\\leq d"
},
{
"math_id": 15,
"text": "A_i"
},
{
"math_id": 16,
"text": "m_{i-1}"
},
{
"math_id": 17,
"text": "m_{i}^2"
},
{
"math_id": 18,
"text": "S_1,...,S_P"
},
{
"math_id": 19,
"text": "S_i"
},
{
"math_id": 20,
"text": "O\\left(\\frac{N}{PB} + \\log P\\right)"
},
{
"math_id": 21,
"text": "M_i=\\{j_1^i,...,j_d^i\\}"
},
{
"math_id": 22,
"text": "\\{M_1,...,M_P\\}"
},
{
"math_id": 23,
"text": "M_{i-1}"
},
{
"math_id": 24,
"text": "M_{i}"
},
{
"math_id": 25,
"text": "M_P"
},
{
"math_id": 26,
"text": "d=O\\left(\\frac{M}{B}\\right)"
},
{
"math_id": 27,
"text": "O\\left(\\frac{N}{PB} + \\left\\lceil \\frac{d}{B} \\right\\rceil>\\log(P)+d\\log(B)\\right)"
},
{
"math_id": 28,
"text": "O(\\log N)"
},
{
"math_id": 29,
"text": "N \\leq P"
},
{
"math_id": 30,
"text": "\\texttt{PRAMSORT}(A,P)"
},
{
"math_id": 31,
"text": "A[k]"
},
{
"math_id": 32,
"text": "m_i = \\texttt{SELECT}(S_i, \\frac{N}{2P}) "
},
{
"math_id": 33,
"text": "\\texttt{PRAMSORT}(\\lbrace m_1, \\dots, m_2 \\rbrace, P)"
},
{
"math_id": 34,
"text": "t = \\texttt{PEMPARTITION}(A, m_{P/2},P)"
},
{
"math_id": 35,
"text": "k \\leq t"
},
{
"math_id": 36,
"text": "\\texttt{PEMSELECT}(A[1:t], P, k)"
},
{
"math_id": 37,
"text": "\\texttt{PEMSELECT}(A[t+1:N], P, k-t)"
},
{
"math_id": 38,
"text": "O\\left(\\frac{N}{PB} + \\log (PB) \\cdot \\log(\\frac{N}{P})\\right)"
},
{
"math_id": 39,
"text": "P = 1"
},
{
"math_id": 40,
"text": "\\tfrac{4N}{\\sqrt{d}}"
},
{
"math_id": 41,
"text": "M < |S_i|"
},
{
"math_id": 42,
"text": "d = M/B"
},
{
"math_id": 43,
"text": "d = |S_i|"
},
{
"math_id": 44,
"text": "\\sqrt{d}/4"
},
{
"math_id": 45,
"text": "R^i"
},
{
"math_id": 46,
"text": "R^1 \\dots R^P"
},
{
"math_id": 47,
"text": "\\mathcal{R}"
},
{
"math_id": 48,
"text": "\\sqrt{d}"
},
{
"math_id": 49,
"text": "\\mathcal{R}_1 \\dots \\mathcal{R}_{\\sqrt{d}}"
},
{
"math_id": 50,
"text": "\\mathcal{M}[j]"
},
{
"math_id": 51,
"text": "j = 1"
},
{
"math_id": 52,
"text": "\\mathcal{M}[j] = \\texttt{PEMSELECT}(\\mathcal{R}_i, \\tfrac{P}{\\sqrt{d}}, \\tfrac{j \\cdot 4N}{d})"
},
{
"math_id": 53,
"text": "\\mathcal{M}"
},
{
"math_id": 54,
"text": "\\mathcal{B}"
},
{
"math_id": 55,
"text": "\\mathcal{B} = \\texttt{PEMMULTIPARTITION}(A[1:N],\\mathcal{M},\\sqrt{d},P)"
},
{
"math_id": 56,
"text": "\\sqrt{d} + 1"
},
{
"math_id": 57,
"text": "\\texttt{PEMDISTSORT}"
},
{
"math_id": 58,
"text": "\\mathcal{B}[j]"
},
{
"math_id": 59,
"text": "O \\left( \\left \\lceil \\tfrac{\\mathcal{B}[j]}{N / P} \\right \\rceil \\right)"
},
{
"math_id": 60,
"text": "O \\left( \\left \\lceil \\frac{N}{PB} \\right \\rceil \\left ( \\log_d P + \\log_{M/B} \\frac{N}{PB} \\right ) + f(N,P,d) \\cdot \\log_d P \\right)"
},
{
"math_id": 61,
"text": "f(N,P,d) = O \\left ( \\log \\frac{PB}{\\sqrt{d}} \\log \\frac{N}{P} + \\left \\lceil \\frac{\\sqrt{d}}{B} \\log P + \\sqrt{d} \\log B \\right \\rceil \\right )"
},
{
"math_id": 62,
"text": "f(N,P,d) = O\\left ( \\left \\lceil \\tfrac{N}{PB} \\right \\rceil \\right )"
},
{
"math_id": 63,
"text": "M < B^{O(1)}"
},
{
"math_id": 64,
"text": "O \\left ( \\frac{N}{PB} \\log_{M/B} \\frac{N}{B} \\right )"
},
{
"math_id": 65,
"text": "\\textrm{sort}_P(N)"
}
] |
https://en.wikipedia.org/wiki?curid=59730114
|
59732566
|
List of Dutch discoveries
|
The following list is composed of objects, concepts, phenomena and processes that were discovered or invented by people from the Netherlands.
<templatestyles src="Template:TOC limit/styles.css" />
Discoveries.
Archaeology.
Java Man (Homo erectus erectus) (1891).
Java Man ("Homo erectus erectus") is the name given to hominid fossils discovered in 1891 at Trinil – Ngawi Regency on the banks of the Solo River in East Java, Indonesia, one of the first known specimens of Homo erectus. Its discoverer, Dutch paleontologist Eugène Dubois, gave it the scientific name Pithecanthropus erectus, a name derived from Greek and Latin roots meaning "upright ape-man".
Astronomy.
Columba (constellation) (1592).
Columba is a small, faint constellation named in the late sixteenth century. Its name is Latin for dove. It is located just south of Canis Major and Lepus. Columba was named by Dutch astronomer Petrus Plancius in 1592 in order to differentiate the 'unformed stars' of the large constellation Canis Major. Plancius first depicted Columba on the small celestial planispheres of his large wall map of 1592. It is also shown on his smaller world map of 1594 and on early Dutch celestial globes.
Novaya Zemlya effect (1597).
The first person to record the Novaya Zemlya effect was Gerrit de Veer, a member of Willem Barentsz' ill-fated third expedition into the polar region. Novaya Zemlya, the archipelago where de Veer first observed the phenomenon, lends its name to the effect.
12 southern constellations (1597–1598).
Plancius defined 12 constellations created by Plancius from the observations of Pieter Dirkszoon Keyser and Frederick de Houtman.
Camelopardalis (constellation) (1612–1613).
"Camelopardalis" was created by Plancius in 1613 to represent the animal Rebecca rode to marry Isaac in the Bible. One year later, Jakob Bartsch featured it in his atlas. Johannes Hevelius gave it the official name of "Camelopardus" or "Camelopardalis" because he saw the constellation's many faint stars as the spots of a giraffe.
Monoceros (constellation) (1612–1613).
"Monoceros" is a relatively modern creation. Its first certain appearance was on a globe created by Plancius in 1612 or 1613. It was later charted by Bartsch as "Unicornus" in his 1624 star chart.
Rings of Saturn (1655).
In 1655, Huygens became the first person to suggest that Saturn was surrounded by a ring, after Galileo's much less advanced telescope had failed to show rings. Galileo had reported the anomaly as possibly 3 planets instead of one.
Titan (Saturn's moon) (1655).
In 1655, using a 50 power refracting telescope that he designed himself, Huygens discovered the first of Saturn's moons, Titan.
Kapteyn's Star (1897).
Kapteyn's Star is a class M1 red dwarf about 12.76 light-years from Earth in the southern constellation Pictor, and the closest halo star to the Solar System. With a magnitude of nearly 9 it is visible through binoculars or a telescope. It had the highest proper motion of any star known until the discovery of Barnard's Star in 1916. Attention was first drawn to what is now known as Kapteyn's Star by the Dutch astronomer Jacobus Kapteyn, in 1897.
Discovery of evidence for galactic rotation (1904).
In 1904, studying the proper motions of stars, Dutch astronomer Jacobus Kapteyn reported that these were not random, as was believed at that time; stars could be divided into two streams, moving in nearly opposite directions. It was later realized that Kapteyn's data had been the first evidence of the rotation of our Galaxy, which ultimately led to the finding of galactic rotation by Bertil Lindblad and Jan Oort.
Galactic halo (1924).
In 1924, Dutch astronomer Jan Oort discovered the galactic halo, a group of stars orbiting the Milky Way but outside the main disk.
Oort constants (1927).
The Oort constants (discovered by Jan Oort) formula_0 and formula_1 are empirically derived parameters that characterize the local rotational properties of the Milky Way.
Evidence of dark matter (1932).
In 1932, Dutch astronomer Jan Oort became the first person to discover evidence of dark matter. Oort proposed the substance after measuring the motions of nearby stars in the Milky Way relative to the galactic plane. He found that the mass of the galactic plane must be more than the mass of the material that can be seen. A year later (1933), Fritz Zwicky examined the dynamics of clusters of galaxies and found their movements similarly perplexing.
Discovery of methane in the atmosphere of Titan (1944).
The first formal proof of the existence of an atmosphere around Titan came in 1944, when Gerard Kuiper observed Titan with the new McDonald telescope and discovered spectral signatures on Titan at wavelengths longer than 0.6 μm (micrometers), among which he identified two absorption bands of methane at 6190 and 7250 Å (Kuiper1944). This discovery was significant not only because it requires a dense atmosphere with a significant fraction of methane, but also because the atmosphere needs to be chemically evolved, since methane requires hydrogen in the presence of carbon, and molecular and atomic hydrogen would have escaped from Titan's weak gravitational field since the formation of the Solar System.
Discovery of carbon dioxide in the atmosphere of Mars (1947).
Using infrared spectrometry, in 1947 the Dutch-American astronomer Gerard Kuiper detected carbon dioxide in the Martian atmosphere, a discovery of biological significance because it is a principal gas in the process of photosynthesis (see also: History of Mars observation). He was able to estimate that the amount of carbon dioxide over a given area of the surface is double that on the Earth.
Miranda (Uranus's moon) (1948).
Miranda is the smallest and innermost of Uranus's five major moons. It was discovered by Gerard Kuiper on 16 February 1948 at McDonald Observatory.
Nereid (Neptune's moon) (1949).
Nereid, also known as Neptune II, is the third-largest moon of Neptune and was its second moon to be discovered, on 1 May 1949, by Gerard Kuiper, on photographic plates taken with the 82-inch telescope at McDonald Observatory.
Oort cloud (1950).
The "Oort cloud" or "Öpik–Oort cloud", named after Dutch astronomer Jan Oort and Estonian astronomer Ernst Öpik, is a spherical cloud of predominantly icy planetesimals believed to surround the Sun at a distance of up to . Further evidence for the existence of the Kuiper belt emerged from the study of comets. That comets have finite lifespans has been known for some time. As they approach the Sun, its heat causes their volatile surfaces to sublimate into space, gradually evaporating them. In order for comets to continue to be visible over the age of the Solar System, they must be replenished frequently. One such area of replenishment is the Oort cloud, a spherical swarm of comets extending beyond 50,000 AU from the Sun first hypothesised by Dutch astronomer in 1950. The Oort cloud is believed to be the point of origin of long-period comets, which are those, like Hale–Bopp, with orbits lasting thousands of years.
Kuiper belt (1951).
The Kuiper belt was named after Dutch-American astronomer Gerard Kuiper, regarded by many as the father of modern planetary science, though his role in hypothesising it has been heavily contested. In 1951, he proposed the existence of what is now called the Kuiper Belt, a disk-shaped region of minor planets outside the orbit of Neptune, which also is a source of short-period comets.
Biology.
Function of the fallopian tubes (1660s).
Dutch physician and anatomist Regnier de Graaf may have been the first to understand the reproductive function of the fallopian tubes. He described the hydrosalpinx, linking its development to female infertility. de Graaf recognized pathologic conditions of the tubes. He was aware of tubal pregnancies, and he surmised that the mammalian egg traveled from the ovary to the uterus through the tube.
Development of ovarian follicles (1672).
In his "De Mulierum Organis Generatione Inservientibus" (1672), de Graaf provided the first thorough description of the female gonad and established that it produced the ovum. De Graaf used the terminology vesicle or egg (ovum) for what now called the ovarian follicle. Because the fluid-filled ovarian vesicles had been observed previously by others, including Andreas Vesalius and Falloppio, De Graaf did not claim their discovery. He noted that he was not the first to describe them, but to describe their development. De Graaf was the first to observe changes in the ovary before and after mating and describe the corpus luteum. From the observation of pregnancy in rabbits, he concluded that the follicle contained the oocyte. The mature stage of the ovarian follicle is called the Graafian follicle in his honour, although others, including Fallopius, had noticed it previously but failed to recognize its reproductive significance.
Foundations of microbiology (discovery of microorganisms) (1670s).
Antonie van Leeuwenhoek is often considered to be the father of microbiology. Robert Hooke is cited as the first to record microscopic observation of the fruiting bodies of molds, in 1665. However, the first observation of microbes using a microscope is generally credited to van Leeuwenhoek. In the 1670s, he observed and researched bacteria and other microorganisms, using a single-lens microscope of his own design.
In 1981 the British microscopist Brian J. Ford found that Leeuwenhoek's original specimens had survived in the collections of the Royal Society of London. They were found to be of high quality, and were all well preserved. Ford carried out observations with a range of microscopes, adding to our knowledge of Leeuwenhoek's work.
Photosynthesis (1779).
Photosynthesis is a fundamental biochemical process in which plants, algae, and some bacteria convert sunlight to chemical energy. The process was discovered by Jan Ingenhousz in 1779. The chemical energy is used to drive reactions such as the formation of sugars or the fixation of nitrogen into amino acids, the building blocks for protein synthesis. Ultimately, nearly all living things depend on energy produced from photosynthesis. It is also responsible for producing the oxygen that makes animal life possible. Organisms that produce energy through photosynthesis are called photoautotrophs. Plants are the most visible representatives of photoautotrophs, but bacteria and algae also employ the process.
Plant respiration (1779).
Plant respiration was also discovered by Ingenhousz in 1779.
Foundations of virology (1898).
Martinus Beijerinck is considered one of the founders of virology. In 1898, he published results on his filtration experiments, demonstrating that tobacco mosaic disease is caused by an infectious agent smaller than a bacterium. His results were in accordance with similar observations made by Dmitri Ivanovsky in 1892. Like Ivanovsky and Adolf Mayer, predecessor at Wageningen, Beijerinck could not culture the filterable infectious agent. He concluded that the agent can replicate and multiply in living plants. He named the new pathogen "virus" to indicate its non-bacterial nature. This discovery is considered to be the beginning of virology.
Chemistry of photosynthesis (1931).
In 1931, Cornelis van Niel made key discoveries explaining the chemistry of photosynthesis. By studying purple sulfur bacteria and green sulfur bacteria, he was the first scientist to demonstrate that photosynthesis is a light-dependent redox reaction, in which hydrogen reduces carbon dioxide. Expressed as:
2 H2A + CO2 → 2A + CH2O + H2O
where A is the electron acceptor. His discovery predicted that H2O is the hydrogen donor in green plant photosynthesis and is oxidized to O2. The chemical summation of photosynthesis was a milestone in the understanding of the chemistry of photosynthesis. This was later experimentally verified by Robert Hill.
Foundations of modern ethology (Tinbergen's four questions) (1930s).
Many naturalists have studied aspects of animal behaviour throughout history. Ethology has its scientific roots in the work of Charles Darwin and of American and German ornithologists of the late 19th and early 20th century, including Charles O. Whitman, Oskar Heinroth, and Wallace Craig. The modern discipline of ethology is generally considered to have begun during the 1930s with the work of Dutch biologist Nikolaas Tinbergen and by Austrian biologists Konrad Lorenz and Karl von Frisch.
Tinbergen's four questions, named after Nikolaas Tinbergen, one of the founders of modern ethology, are complementary categories of explanations for behaviour. It suggests that an integrative understanding of behaviour must include both a proximate and ultimate (functional) analysis of behaviour, as well as an understanding of both phylogenetic/developmental history and the operation of current mechanisms.
Vroman effect (1975).
The Vroman effect, named after Leo Vroman, describes the competitive adsorption of blood serum proteins to a surface.
Chemistry.
Concept of gas (1600s).
Flemish physician Jan Baptist van Helmont is sometimes considered the founder of pneumatic chemistry, coining the word "gas" and conducting experiments involving gases. Van Helmont had derived the word "gas" from the Dutch word "geest", which means ghost or spirit.
Foundations of stereochemistry (1874).
Dutch chemist Jacobus Henricus van 't Hoff is generally considered to be one of the founders of the field of stereochemistry. In 1874, Van 't Hoff built on the work on isomers of German chemist Johannes Wislicenus, and showed that the four valencies of the carbon atom were probably directed in space toward the four corners of a regular tetrahedron, a model which explained how optical activity could be associated with an asymmetric carbon atom. He shares credit for this with the French chemist Joseph Le Bel, who independently came up with the same idea. Three months before his doctoral degree was awarded Van 't Hoff published this theory, which today is regarded as the foundation of stereochemistry, first in a Dutch pamphlet in the fall of 1874, and then in the following May in a small French book entitled "La chimie dans l'espace". A German translation appeared in 1877, at a time when the only job Van 't Hoff could find was at the Veterinary School in Utrecht. In these early years his theory was largely ignored by the scientific community, and was sharply criticized by one prominent chemist, Hermann Kolbe. However, by about 1880 support for Van 't Hoff's theory by such important chemists as Johannes Wislicenus and Viktor Meyer brought recognition.
Foundations of modern physical chemistry (1880s).
Jacobus van 't Hoff is also considered one of the founders of the modern discipline of physical chemistry. The first scientific journal specifically in the field of physical chemistry was the German journal "Zeitschrift für Physikalische Chemie", founded in 1887 by Wilhelm Ostwald and Van 't Hoff. Together with Svante Arrhenius, they were the leading figures in physical chemistry in the late 19th and early 20th centuries.
Van 't Hoff equation (1884).
The Van 't Hoff equation in chemical thermodynamics relates the change in the equilibrium constant, "Keq", of a chemical equilibrium to the change in temperature, "T", given the standard enthalpy change, "ΔHo", for the process. It was proposed by Dutch chemist Jacobus Henricus van 't Hoff in 1884. The "Van 't Hoff equation" has been widely utilized to explore the changes in state functions in a thermodynamic system. The "Van 't Hoff plot", which is derived from this equation, is especially effective in estimating the change in enthalpy, or total energy, and entropy, or amount of disorder, of a chemical reaction.
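For a standard enthalpy change that is approximately constant over the temperature range, the equation integrates to ln("K"2/"K"1) = −(Δ"H"o/"R")(1/"T"2 − 1/"T"1). The short Python sketch below evaluates this integrated form; the reaction data are illustrative values only, not taken from any specific system.

```python
from math import exp

R = 8.314  # gas constant, J/(mol*K)

def k2_from_vant_hoff(k1, t1, t2, delta_h):
    """Estimate the equilibrium constant at T2 from its value at T1 using the
    integrated van 't Hoff equation, assuming delta_h (J/mol) is constant."""
    return k1 * exp(-delta_h / R * (1.0 / t2 - 1.0 / t1))

# Illustrative numbers: an exothermic reaction with delta_h = -50 kJ/mol
# and K = 10 at 298 K; raising the temperature lowers K, as expected.
print(k2_from_vant_hoff(10.0, 298.0, 330.0, -50e3))  # ~1.4
```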
Van 't Hoff factor (1884).
The van 't Hoff factor formula_2 is a measure of the effect of a solute upon colligative properties such as osmotic pressure, relative lowering in vapor pressure, elevation of boiling point and freezing point depression. The van 't Hoff factor is the ratio between the actual concentration of particles produced when the substance is dissolved, and the concentration of a substance as calculated from its mass.
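As an illustration, the factor enters colligative-property formulas such as the freezing-point depression ΔTf = "i"·"K"f·"b". The sketch below assumes ideal, complete dissociation and uses the commonly quoted cryoscopic constant of water, "K"f ≈ 1.86 K·kg/mol; the solution itself is a made-up example.

```python
def freezing_point_depression(i, kf, molality):
    """Colligative freezing-point depression dT = i * Kf * b."""
    return i * kf * molality

# Illustrative: 0.5 mol/kg NaCl in water (Kf ~ 1.86 K*kg/mol), assuming
# complete dissociation, i.e. an ideal van 't Hoff factor i = 2.
print(freezing_point_depression(2, 1.86, 0.5))  # ~1.86 K of depression
```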
Lobry de Bruyn–van Ekenstein transformation (1885).
In carbohydrate chemistry, the Lobry de Bruyn–van Ekenstein transformation is the base or acid-catalyzed transformation of an aldose into the ketose isomer or vice versa, with a tautomeric enediol as reaction intermediate. The transformation is relevant for the industrial production of certain ketoses and was discovered in 1885 by Cornelis Adriaan Lobry van Troostenburg de Bruyn and Willem Alberda van Ekenstein.
Prins reaction (1919).
The Prins reaction is an organic reaction consisting of an electrophilic addition of an aldehyde or ketone to an alkene or alkyne followed by capture of a nucleophile. Dutch chemist Hendrik Jacobus Prins discovered two new organic reactions, both now carrying the name Prins reaction. The first, the addition of polyhalogen compounds to olefins, was found during Prins' doctoral research, while the second, the acid-catalyzed addition of aldehydes to olefinic compounds, later became of industrial relevance.
Hafnium (1923).
Dutch physicist Dirk Coster and Hungarian-Swedish chemist George de Hevesy co-discovered "Hafnium" (Hf) in 1923, by means of X-ray spectroscopic analysis of zirconium ore. "Hafnium" is named after "Hafnia", the Latin name for Copenhagen (Denmark), where it was discovered.
Crystal bar process (1925).
The crystal bar process (also known as "iodide process" or the "van Arkel–de Boer process") was developed by Dutch chemists Anton Eduard van Arkel and Jan Hendrik de Boer in 1925. It was the first industrial process for the commercial production of pure ductile metallic zirconium. It is used in the production of small quantities of ultra-pure titanium and zirconium.
Koopmans' theorem (1934).
Koopmans' theorem states that in closed-shell Hartree–Fock theory, the first ionization energy of a molecular system is equal to the negative of the orbital energy of the highest occupied molecular orbital (HOMO). This theorem is named after Tjalling Koopmans, who published this result in 1934.
Koopmans became a Nobel laureate in 1975, though neither in physics nor chemistry, but in economics.
Genetics.
Concept of pangene/gene (1889).
In 1889, Dutch botanist Hugo de Vries published his book "Intracellular Pangenesis", in which he postulated that different characters have different hereditary carriers, based on a modified version of Charles Darwin's theory of Pangenesis of 1868. He specifically postulated that inheritance of specific traits in organisms comes in "particles". He called these units "pangenes".
Rediscovery of the laws of inheritance (1900).
1900 marked the "rediscovery of Mendelian genetics". The significance of Gregor Mendel's work was not understood until early in the twentieth century, after his death, when his research was re-discovered by Hugo de Vries, Carl Correns and Erich von Tschermak, who were working on similar problems. They were unaware of Mendel's work. They worked independently on different plant hybrids, and came to Mendel's conclusions about the rules of inheritance.
Geology.
Bushveld Igneous Complex (1897).
The Bushveld Igneous Complex (or BIC) is a large, layered igneous intrusion within the Earth's crust that has been tilted and eroded and now outcrops around what appears to be the edge of a great geological basin, the Transvaal Basin. Located in South Africa, the BIC contains some of Earth's richest ore deposits. The complex contains the world's largest reserves of platinum group metals (PGMs), platinum, palladium, osmium, iridium, rhodium, and ruthenium, along with vast quantities of iron, tin, chromium, titanium and vanadium. The site was discovered around 1897 by Dutch geologist Gustaaf Molengraaff.
Mathematics.
Differential geometry of curves (concepts of the involute and evolute of a curve) (1673).
Christiaan Huygens was the first to publish, in his 1673 "Horologium Oscillatorium", a specific method for determining the evolute and involute of a curve.
Korteweg–de Vries equation (1895).
In mathematics, the Korteweg–de Vries equation (KdV equation for short) is a mathematical model of waves on shallow water surfaces. It is particularly notable as the prototypical example of an exactly solvable model, that is, a non-linear partial differential equation whose solutions can be exactly specified. The equation is named for Diederik Korteweg and Gustav de Vries, who in 1895 proposed it as a mathematical model for predicting the behaviour of waves on shallow water surfaces.
Proof of the Brouwer fixed-point theorem (1911).
The Brouwer fixed-point theorem is a fixed-point theorem in topology, named after the Dutchman Luitzen Brouwer, who proved it in 1911.
Proof of the hairy ball theorem (1912).
The hairy ball theorem of algebraic topology states that there is no nonvanishing continuous tangent vector field on even-dimensional "n"-spheres. The theorem was first stated by Henri Poincaré in the late 19th century. It was first proved in 1912 by Brouwer.
Debye functions (1912).
The Debye functions are named in honor of Peter Debye, who came across this function (with "n" = 3) in 1912 when he analytically computed the heat capacity of what is now called the Debye model.
Kramers–Kronig relations (1927).
The Kramers–Kronig relations are bidirectional mathematical relations, connecting the real and imaginary parts of any complex function that is analytic in the upper half-plane. The relation is named in honor of Ralph Kronig and Hendrik Anthony Kramers.
Heyting algebra (formalized intuitionistic logic) (1930).
Formalized intuitionistic logic was originally developed by Arend Heyting to provide a formal basis for Luitzen Brouwer's programme of intuitionism; to this end, Heyting introduced Heyting algebras in 1930.
Zernike polynomials (1934).
In mathematics, the Zernike polynomials are a sequence of polynomials that are orthogonal on the unit disk. Named after Frits Zernike, the Dutch optical physicist, and the inventor of phase contrast microscopy, they play an important role in beam optics.
Minnaert function (1941).
In 1941, Marcel Minnaert introduced the Minnaert function, a photometric function used in optical measurements of celestial bodies and for interpreting astronomical observations and remote sensing data for the Earth.
Mechanics.
Proof of the law of equilibrium on an inclined plane (1586).
In 1586, Simon Stevin (Stevinus) derived the mechanical advantage of the inclined plane by an argument that used a string of beads. His proof of the law of equilibrium on an inclined plane is known as the "Epitaph of Stevinus".
Centripetal force (1659).
Christiaan Huygens stated what is now known as the second of Newton's laws of motion in a quadratic form. In 1659 he derived the now standard formula for the centripetal force, exerted by an object describing a circular motion, for instance on the string to which it is attached. In modern notation:
formula_3
with "m" the mass of the object, "v" the velocity and "r" the radius. The publication of the general formula for this force in 1673 was a significant step in studying orbits in astronomy. It enabled the transition from Kepler's third law of planetary motion, to the inverse square law of gravitation.
Centrifugal force (1659).
Huygens coined the term "centrifugal force" in his 1659 "De Vi Centrifuga" and wrote of it in his 1673 "Horologium Oscillatorium" on pendulums.
Formula for the period of mathematical pendulum (1659).
In 1659, Christiaan Huygens was the first to derive the formula for the period of an ideal mathematical pendulum (with massless rod or cord and length much longer than its swing), in modern notation:
formula_4
with "T" the period, "l" the length of the pendulum and "g" the gravitational acceleration. By his study of the oscillation period of compound pendulums Huygens made contributions to the development of the concept of moment of inertia.
Tautochrone curve (isochrone curve) (1659).
A tautochrone or isochrone curve is the curve for which the time taken by an object sliding without friction in uniform gravity to its lowest point is independent of its starting point. The curve is a cycloid, and the time is equal to π times the square root of the radius over the acceleration of gravity. Christiaan Huygens was the first to discover the tautochronous property (or isochronous property) of the cycloid. The tautochrone problem, the attempt to identify this curve, was solved by Christiaan Huygens in 1659. He proved geometrically in his "Horologium Oscillatorium", originally published in 1673, that the curve was a cycloid. Huygens also proved that the time of descent is equal to the time a body takes to fall vertically the same distance as the diameter of the circle which generates the cycloid, multiplied by π⁄2. The tautochrone curve is the same as the brachistochrone curve for any given starting point. Johann Bernoulli posed the problem of the brachistochrone to the readers of "Acta Eruditorum" in June, 1696. He published his solution in the journal in May of the following year, and noted that the solution is the same curve as Huygens's tautochrone curve.
Coupled oscillation (spontaneous synchronization) (1665).
Christiaan Huygens observed that two pendulum clocks mounted next to each other on the same support often become synchronized, swinging in opposite directions. In 1665, he reported the results by letter to the Royal Society of London. It is referred to as "an odd kind of sympathy" in the Society's minutes. This may be the first published observation of what is now called "coupled oscillations". In the 20th century, "coupled oscillators" took on great practical importance because of two discoveries: lasers, in which different atoms give off light waves that oscillate in unison, and superconductors, in which pairs of electrons oscillate in synchrony, allowing electricity to flow with almost no resistance. "Coupled oscillators" are even more ubiquitous in nature, showing up, for example, in the synchronized flashing of fireflies and chirping of crickets, and in the pacemaker cells that regulate heartbeats.
Medicine.
Foundations of modern (human) anatomy (1543).
Flemish anatomist and physician Andreas Vesalius is often referred to as the founder of modern human anatomy for the publication of the seven-volume "De humani corporis fabrica" ("On the Structure of the Human Body") in 1543.
Crystals in gouty tophi (1679).
In 1679, van Leeuwenhoek used a microscope to assess tophaceous material and found that gouty tophi consist of aggregates of needle-shaped crystals, and not globules of chalk as was previously believed.
Boerhaave syndrome (1724).
Boerhaave syndrome (also known as "spontaneous esophageal perforation" or "esophageal rupture") refers to an esophageal rupture secondary to forceful vomiting. Originally described in 1724 by Dutch physician/botanist Herman Boerhaave, it is a rare condition with high mortality. The syndrome was described after the case of a Dutch admiral, Baron Jan von Wassenaer, who died of the condition.
Factor V Leiden (1994).
Factor V Leiden is an inherited disorder of blood clotting. It is a variant of human factor V that causes a hypercoagulability disorder. It is named after the city Leiden, where it was first identified by R. Bertina, et al., in 1994.
Microbiology.
Blood cells (1658).
In 1658, Dutch naturalist Jan Swammerdam was the first person to observe red blood cells under a microscope, and in 1695 the microscopist Antoni van Leeuwenhoek, also Dutch, was the first to draw an illustration of "red corpuscles", as they were called. No further blood cell types were identified until 1842, when platelets were discovered.
Red blood cells (1658).
The first person to observe and describe red blood cells was Dutch biologist Jan Swammerdam, who had used an early microscope to study the blood of a frog.
Micro-organisms (1670s).
A resident of Delft, Anton van Leeuwenhoek, used a high-power single-lens simple microscope to discover the world of micro-organisms. His simple microscopes were made with silver or copper frames holding hand-ground lenses capable of magnifications up to 275 times. Using these, he was the first to observe and describe single-celled organisms, which he originally referred to as "animalcules" and which are now referred to as micro-organisms or microbes.
Volvox (1700).
Volvox is a genus of chlorophytes, a type of green algae. It forms spherical colonies of up to 50,000 cells. They live in a variety of freshwater habitats, and were first reported by Van Leeuwenhoek in 1700.
Biological nitrogen fixation (1885).
Biological nitrogen fixation was discovered by Martinus Beijerinck in 1885.
Rhizobium (1888).
"Rhizobium" is a genus of Gram-negative soil bacteria that fix nitrogen. Rhizobium forms an endosymbiotic nitrogen fixing association with roots of legumes and "Parasponia". Martinus Beijerinck in the Netherlands was the first to isolate and cultivate a microorganism from the nodules of legumes in 1888. He named it "Bacillus radicicola", which is now placed in "Bergey's Manual of Determinative Bacteriology" under the genus Rhizobium.
Spirillum (first isolated sulfate-reducing bacteria) (1895).
Martinus Beijerinck discovered the phenomenon of bacterial sulfate reduction, a form of anaerobic respiration. He learned that bacteria could use sulfate as a terminal electron acceptor, instead of oxygen. He isolated and described "Spirillum desulfuricans" (now called "Desulfovibrio desulfuricans"), the first known sulfate-reducing bacterium.
Concept of virus (1898).
In 1898 Beijerinck coined the term "virus" to indicate that the causal agent of tobacco mosaic disease was non-bacterial. Beijerinck discovered what is now known as the tobacco mosaic virus. He observed that the agent multiplied only in cells that were dividing and he called it a contagium vivum fluidum ("contagious living fluid"). Beijerinck's discovery is considered to be the beginning of virology.
Azotobacter (1901).
"Azotobacter" is a genus of usually motile, oval or spherical bacteria that form thick-walled cysts and may produce large quantities of capsular slime. They are aerobic, free-living soil microbes which play an important role in the nitrogen cycle in nature, binding atmospheric nitrogen, which is inaccessible to plants, and releasing it in the form of ammonium ions into the soil. Apart from being a model organism, it is used by humans for the production of biofertilizers, food additives, and some biopolymers. The first representative of the genus, "Azotobacter chroococcum", was discovered and described in 1901 by the Dutch microbiologist and botanist Martinus Beijerinck.
Enrichment culture (1904).
Beijerinck is credited with developing the first enrichment culture, a fundamental method of studying microbes from the environment.
Physics.
31 equal temperament (1661).
Division of the octave into 31 steps arose naturally out of Renaissance music theory; the lesser diesis – the ratio of an octave to three major thirds, 128:125 or 41.06 cents – was approximately a fifth of a tone and a third of a semitone. In 1666, Lemme Rossi first proposed an equal temperament of this order. Shortly thereafter, having discovered it independently, scientist Christiaan Huygens wrote about it also. Since the standard system of tuning at that time was quarter-comma meantone, in which the fifth is tuned to 51/4, the appeal of this method was immediate, as the fifth of 31-et, at 696.77 cents, is only 0.19 cent wider than the fifth of quarter-comma meantone. Huygens not only realized this, he went farther and noted that 31-ET provides an excellent approximation of septimal, or 7-limit harmony. In the twentieth century, physicist, music theorist and composer Adriaan Fokker, after reading Huygens's work, led a revival of interest in this system of tuning which led to a number of compositions, particularly by Dutch composers. Fokker designed the Fokker organ, a 31-tone equal-tempered organ, which was installed in Teyler's Museum in Haarlem in 1951.
Polarization of light (1678).
In 1678, Huygens discovered the polarization of light by double refraction in calcite.
Huygens' principle (concepts of the wavefront and wavelet) (1690).
In his "Treatise on light", Huygens showed how Snell's law of sines could be explained by, or derived from, the wave nature of light, using the Huygens–Fresnel principle.
Bernoulli's principle (1738).
Bernoulli's principle was discovered by Dutch-Swiss mathematician and physicist Daniel Bernoulli and named after him. It states that for an inviscid flow, an increase in the speed of the fluid occurs simultaneously with a decrease in pressure or a decrease in the fluid's potential energy.
Brownian motion (1785).
In 1785, Ingenhousz described the irregular movement of coal dust on the surface of alcohol and therefore has a claim as discoverer of what came to be known as Brownian motion.
Buys Ballot's law (1857).
The law takes its name from Dutch meteorologist C. H. D. Buys Ballot, who published it in the "Comptes Rendus", in November 1857. While William Ferrel first theorized this in 1856, Buys Ballot was the first to provide an empirical validation. The law states that in the Northern Hemisphere, if a person stands with his back to the wind, the low pressure area will be on his left, because wind travels counterclockwise around low pressure zones in that hemisphere. This is approximately true in the higher latitudes and is reversed in the Southern Hemisphere.
Foundations of molecular physics (1873).
Spearheaded by Mach and Ostwald, a strong philosophical current that denied the existence of molecules arose towards the end of the 19th century. The existence of molecules was considered unproven and the molecular hypothesis unnecessary. At the time Van der Waals' thesis was written (1873), the molecular structure of fluids had not been accepted by most physicists, and liquid and vapor were often considered as chemically distinct. But Van der Waals's work affirmed the reality of molecules and allowed an assessment of their size and attractive strength. By comparing his equation of state with experimental data, Van der Waals was able to obtain estimates for the actual size of molecules and the strength of their mutual attraction. By introducing parameters characterizing molecular size and attraction in constructing his equation of state, Van der Waals set the tone for molecular physics (molecular dynamics in particular) of the 20th century. That molecular aspects such as size, shape, attraction, and multipolar interactions should form the basis for mathematical formulations of the thermodynamic and transport properties of fluids is presently considered an axiom.
Van der Waals equation of state (1873).
In 1873, J. D. van der Waals introduced the first equation of state derived by the assumption of a finite volume occupied by the constituent molecules. The Van der Waals equation is generally regarded as the first somewhat realistic equation of state (beyond the ideal gas law). Van der Waals noted the non-ideality of gases and attributed it to the existence of molecular or atomic interactions. His new formula revolutionized the study of equations of state, and was most famously continued via the Redlich-Kwong equation of state (1949) and the Soave modification of Redlich-Kwong. While the Van der Waals equation is definitely superior to the ideal gas law and does predict the formation of a liquid phase, the agreement with experimental data is limited for conditions where the liquid forms. Real gases do not obey the Van der Waals equation in all ranges of pressures and temperatures. Despite its limitations, the equation has historical importance, because it was the first attempt to model the behaviour of real gases.
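Rearranged for the pressure of one mole, the equation reads P = RT/(Vm − b) − a/Vm². The sketch below evaluates this form and compares it with the ideal-gas value; the constants for CO2 are commonly quoted textbook values used here only as an illustration.

```python
R = 0.083145  # gas constant, L*bar/(mol*K)

def vdw_pressure(v_m, t, a, b):
    """Van der Waals pressure of one mole:
    (P + a/Vm^2)(Vm - b) = R*T  =>  P = R*T/(Vm - b) - a/Vm^2."""
    return R * t / (v_m - b) - a / v_m**2

# Illustrative constants for CO2 (a in L^2*bar/mol^2, b in L/mol).
a_co2, b_co2 = 3.640, 0.04267
print(vdw_pressure(1.0, 300.0, a_co2, b_co2))  # ~22.4 bar
print(R * 300.0 / 1.0)                         # ~24.9 bar (ideal gas)
```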
Van der Waals forces (1873).
The van der Waals forces are named after the scientist who first described them in 1873. Johannes Diderik van der Waals noted the non-ideality of gases and attributed it to the existence of molecular or atomic interactions. They are weak attractive forces that act between atoms and molecules. The Van der Waals forces between molecules, much weaker than chemical bonds but present universally, play a role in fields as diverse as supramolecular chemistry, structural biology, polymer science, nanotechnology, surface science, and condensed matter physics.
Van der Waals radius (1873).
The Van der Waals radius, "r"w, of an atom is the radius of an imaginary hard sphere which can be used to model the atom for many purposes. It is named after Johannes Diderik van der Waals, winner of the 1910 Nobel Prize in Physics, as he was the first to recognise that atoms were not simply points and to demonstrate the physical consequences of their size through the van der Waals equation of state.
Law of corresponding states (1880).
The law of corresponding states was first suggested and formulated by van der Waals in 1880. This showed that the van der Waals equation of state can be expressed as a simple function of the critical pressure, critical volume and critical temperature. This general form is applicable to all substances. The compound-specific constants a and b in the original equation are replaced by universal (compound-independent) quantities. It was this law that served as a guide during experiments which ultimately led to the liquefaction of hydrogen by James Dewar in 1898 and of helium by Heike Kamerlingh Onnes in 1908.
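The reduction to corresponding states rests on the critical constants implied by the van der Waals equation: Vc = 3b, Pc = a/(27b²) and Tc = 8a/(27Rb). A quick check with the same illustrative CO2 constants as in the previous sketch:

```python
R = 0.083145  # gas constant, L*bar/(mol*K)

def vdw_critical_point(a, b):
    """Critical constants implied by the van der Waals equation:
    Vc = 3b, Pc = a/(27 b^2), Tc = 8a/(27 R b)."""
    return 3 * b, a / (27 * b**2), 8 * a / (27 * R * b)

# Illustrative CO2 constants; the results are close to the measured
# critical point of CO2 (about 304 K and 74 bar).
vc, pc, tc = vdw_critical_point(3.640, 0.04267)
print(vc, pc, tc)  # ~0.128 L/mol, ~74 bar, ~304 K
```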
Lorentz ether theory (1892).
Lorentz ether theory has its roots in Hendrik Lorentz's "theory of electrons", which was the final point in the development of the classical aether theories at the end of the 19th and at the beginning of the 20th century. Lorentz's initial theory created in 1892 and 1895 was based on a completely motionless aether. Many aspects of Lorentz's theory were incorporated into special relativity with the works of Albert Einstein and Hermann Minkowski.
Lorentz force law (1892).
In 1892, Hendrik Lorentz derived the modern form of the formula for the electromagnetic force which includes the contributions to the total force from both the electric and the magnetic fields. In many textbook treatments of classical electromagnetism, the Lorentz force law is used as the "definition" of the electric and magnetic fields E and B. To be specific, the Lorentz force is understood to be the following empirical statement:
"The electromagnetic force F on a test charge at a given point and time is a certain function of its charge "q" and velocity v, which can be parameterized by exactly two vectors E and B, in the functional form":
formula_5
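A minimal numerical reading of this law, with arbitrary illustrative values for the charge and fields:

```python
import numpy as np

def lorentz_force(q, e_field, velocity, b_field):
    """Lorentz force F = q (E + v x B) on a point charge."""
    return q * (np.asarray(e_field, dtype=float) + np.cross(velocity, b_field))

# Illustrative: a 1 C test charge moving at 10 m/s along x through a
# 0.5 T magnetic field along z feels a 5 N force along -y.
print(lorentz_force(1.0, [0, 0, 0], [10, 0, 0], [0, 0, 0.5]))
```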
Abraham–Lorentz force (1895).
In the physics of electromagnetism, the Abraham–Lorentz force (also "Lorentz-Abraham force") is the recoil force on an accelerating charged particle caused by the particle emitting electromagnetic radiation. It is also called the "radiation reaction force" or the "self force".
Lorentz transformation (1895).
In physics, the Lorentz transformation (or Lorentz transformations) is named after the Dutch physicist Hendrik Lorentz. It was the result of attempts by Lorentz and others to explain how the speed of light was observed to be independent of the reference frame, and to understand the symmetries of the laws of electromagnetism. The Lorentz transformation is in accordance with special relativity, but was derived before special relativity. Early approximations of the transformation were published by Lorentz in 1895. In 1905, Poincaré was the first to recognize that the transformation has the properties of a mathematical group, and named it after Lorentz.
Lorentz contraction (1895).
In physics, length contraction (more formally called Lorentz contraction or Lorentz–FitzGerald contraction after Hendrik Lorentz and George FitzGerald) is the phenomenon of a decrease in the length of an object, as measured by an observer, when the object is travelling at any non-zero velocity relative to that observer. This contraction is usually only noticeable at a substantial fraction of the speed of light.
Lorentz factor (1895).
The Lorentz factor or "Lorentz term" is the factor by which time, length, and relativistic mass change for an object while that object is moving. It is an expression which appears in several equations in special relativity, and it arises from deriving the Lorentz transformations. The name originates from its earlier appearance in Lorentzian electrodynamics – named after the Dutch physicist Hendrik Lorentz.
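In symbols the factor is γ = 1/√(1 − v²/c²); a short numerical illustration:

```python
from math import sqrt

C = 299_792_458.0  # speed of light, m/s

def lorentz_factor(v):
    """gamma = 1 / sqrt(1 - v^2/c^2), as it appears in time dilation,
    length contraction and the Lorentz transformation."""
    return 1.0 / sqrt(1.0 - (v / C) ** 2)

print(lorentz_factor(300.0))    # ~1 at everyday speeds
print(lorentz_factor(0.9 * C))  # ~2.29 at 90% of the speed of light
```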
Zeeman effect (1896).
The Zeeman effect, named after the Dutch physicist Pieter Zeeman, is the effect of splitting a spectral line into several components in the presence of a static magnetic field. It is analogous to the Stark effect, the splitting of a spectral line into several components in the presence of an electric field. Also similar to the Stark effect, transitions between different components have, in general, different intensities, with some being entirely forbidden (in the dipole approximation), as governed by the selection rules.
Since the distance between the Zeeman sub-levels is a function of the magnetic field, this effect can be used to measure the magnetic field, e.g. that of the Sun and other stars or in laboratory plasmas.
The Zeeman effect is important in applications such as nuclear magnetic resonance spectroscopy, electron spin resonance spectroscopy, magnetic resonance imaging (MRI) and Mössbauer spectroscopy. It may also be utilized to improve accuracy in atomic absorption spectroscopy.
A theory about the magnetic sense of birds assumes that a protein in the retina is changed due to the Zeeman effect.
When the spectral lines are absorption lines, the effect is called "inverse Zeeman effect".
Liquid helium (liquefaction of helium) (1908).
Helium was first liquefied (liquid helium) on 10 July 1908, by Dutch physicist Heike Kamerlingh Onnes. With the production of liquid helium, it was said that "the coldest place on Earth" was in Leiden.
Superconductivity (1911).
Superconductivity, the ability of certain materials to conduct electricity with little or no resistance, was discovered in 1911 by Dutch physicist Heike Kamerlingh Onnes.
Einstein–de Haas effect (1910s).
The Einstein–de Haas effect or the "Richardson effect" (after Owen Willans Richardson), is a physical phenomenon delineated by Albert Einstein and Wander Johannes de Haas in the mid-1910s, that exposes a relationship between magnetism, angular momentum, and the spin of elementary particles.
Debye model (1912).
In thermodynamics and solid state physics, the Debye model is a method developed by Peter Debye in 1912 for estimating the phonon contribution to the specific heat (heat capacity) in a solid. It treats the vibrations of the atomic lattice (heat) as phonons in a box, in contrast to the Einstein model, which treats the solid as many individual, non-interacting quantum harmonic oscillators. The Debye model correctly predicts the low temperature dependence of the heat capacity.
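In the Debye model the heat capacity is C_V = 9N·kB·(T/ΘD)³ ∫₀^{ΘD/T} x⁴eˣ/(eˣ−1)² dx. The sketch below evaluates this integral numerically; the Debye temperature used is a made-up illustrative value, and the check is simply that the high-temperature limit approaches the Dulong–Petit value 3N·kB.

```python
import numpy as np
from scipy.integrate import quad

K_B = 1.380649e-23   # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def debye_heat_capacity(t, theta_d, n_atoms):
    """Debye-model heat capacity
    C_V = 9 N k_B (T/theta_D)^3 * integral_0^{theta_D/T} x^4 e^x / (e^x - 1)^2 dx."""
    x_max = theta_d / t
    integral, _ = quad(lambda x: x**4 * np.exp(x) / np.expm1(x) ** 2, 0.0, x_max)
    return 9 * n_atoms * K_B * (t / theta_d) ** 3 * integral

# Illustrative: one mole of atoms with theta_D = 400 K (a made-up value);
# at high temperature C_V approaches the Dulong-Petit limit 3*N*k_B.
print(debye_heat_capacity(1000.0, 400.0, N_A) / (3 * N_A * K_B))  # ~0.99
```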
De Sitter precession (1916).
The geodetic effect (also known as geodetic precession, de Sitter precession or de Sitter effect) represents the effect of the curvature of spacetime, predicted by general relativity, on a vector carried along with an orbiting body. The geodetic effect was first predicted by Willem de Sitter in 1916, who provided relativistic corrections to the Earth–Moon system's motion.
De Sitter space and anti-de Sitter space (1920s).
In mathematics and physics, a de Sitter space is the analog in Minkowski space, or spacetime, of a sphere in ordinary, Euclidean space. The "n"-dimensional de Sitter space, denoted dS"n", is the Lorentzian manifold analog of an "n"-sphere (with its canonical Riemannian metric); it is maximally symmetric, has constant positive curvature, and is simply connected for "n" at least 3. The de Sitter space, as well as the anti-de Sitter space, is named after Willem de Sitter (1872–1934), professor of astronomy at Leiden University and director of the Leiden Observatory. In the 1920s, Willem de Sitter and Albert Einstein worked closely together in Leiden on the spacetime structure of our universe. De Sitter space was discovered by Willem de Sitter, and, at the same time, independently by Tullio Levi-Civita.
Van der Pol oscillator (1920).
In dynamical systems, a Van der Pol oscillator is a non-conservative oscillator with non-linear damping. It was originally proposed by Dutch physicist Balthasar van der Pol while he was working at Philips in 1920. Van der Pol studied a differential equation that describes the circuit of a vacuum tube. It has also been used to model other phenomena, such as human heartbeats, in work with his colleague Jan van der Mark.
Kramers' opacity law (1923).
Kramers' opacity law describes the opacity of a medium in terms of the ambient density and temperature, assuming that the opacity is dominated by bound-free absorption (the absorption of light during ionization of a bound electron) or free-free absorption (the absorption of light when scattering a free ion, also called bremsstrahlung). It is often used to model radiative transfer, particularly in stellar atmospheres. The relation is named after the Dutch physicist Hendrik Kramers, who first derived the form in 1923.
Electron spin (1925).
In 1925, Dutch physicists George Eugene Uhlenbeck and Samuel Goudsmit co-discovered the concept of electron spin, which posits an intrinsic angular momentum for all electrons.
Solidification of helium (1926).
In 1926, Onnes' student, Dutch physicist Willem Hendrik Keesom, invented a method to freeze liquid helium and was the first person to solidify the noble gas.
Ehrenfest theorem (1927).
The Ehrenfest theorem, which relates the time derivative of the expectation values of the position and momentum operators of a quantum system to the expectation value of the force on the particle, is named after Paul Ehrenfest, the Austrian-born theoretical physicist who worked at Leiden University.
De Haas–van Alphen effect (1930).
The de Haas–van Alphen effect, often abbreviated to dHvA, is a quantum mechanical effect in which the magnetic moment of a pure metal crystal oscillates as the intensity of an applied magnetic field B is increased. It was discovered in 1930 by Wander Johannes de Haas and his student P. M. van Alphen.
Shubnikov–de Haas effect (1930).
The Shubnikov–de Haas effect (ShdH), an oscillation in the conductivity of a material that occurs at low temperatures and high magnetic fields, is named after Dutch physicist Wander Johannes de Haas and Russian physicist Lev Shubnikov.
Kramers degeneracy theorem (1930).
In quantum mechanics, the Kramers degeneracy theorem states that for every energy eigenstate of a time-reversal symmetric system with half-integer total spin, there is at least one more eigenstate with the same energy. It was first discovered in 1930 by H. A. Kramers as a consequence of the Breit equation.
Minnaert resonance frequency (1933).
In 1933, Marcel Minnaert published a solution for the acoustic resonance frequency of a single bubble in water, the so-called Minnaert resonance. The Minnaert resonance or Minnaert frequency is the acoustic resonance frequency of a single bubble in an infinite domain of water (neglecting the effects of surface tension and viscous attenuation).
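Neglecting surface tension and viscosity, the resonance frequency of a bubble of radius a is f₀ = (1/(2πa))·√(3γp₀/ρ), with γ the polytropic index of the gas, p₀ the ambient pressure and ρ the density of water. A small sketch with illustrative values for an air bubble in water:

```python
from math import pi, sqrt

def minnaert_frequency(radius, p0=101_325.0, rho=1000.0, gamma=1.4):
    """Minnaert resonance f = sqrt(3*gamma*p0/rho) / (2*pi*a) for a gas
    bubble of radius a in water, neglecting surface tension and viscosity."""
    return sqrt(3 * gamma * p0 / rho) / (2 * pi * radius)

# A 1 mm air bubble at atmospheric pressure rings at roughly 3.3 kHz.
print(minnaert_frequency(1e-3))  # ~3300 Hz
```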
Casimir effect (1948).
In quantum field theory, the Casimir effect and the Casimir–Polder force are physical forces arising from a quantized field. Dutch physicists Hendrik Casimir and Dirk Polder at Philips Research Labs proposed the existence of a force between two polarizable atoms and between such an atom and a conducting plate in 1947. After a conversation with Niels Bohr who suggested it had something to do with zero-point energy, Casimir alone formulated the theory predicting a force between neutral conducting plates in 1948; the former is called the Casimir–Polder force while the latter is the Casimir effect in the narrow sense.
Tellegen's theorem (1952).
Tellegen's theorem is one of the most powerful theorems in network theory. Most of the energy distribution theorems and extremum principles in network theory can be derived from it. It was published in 1952 by Bernard Tellegen. Fundamentally, Tellegen's theorem gives a simple relation between magnitudes that satisfy Kirchhoff's laws of electrical circuit theory.
Stochastic cooling (1970s).
In the early 1970s, Simon van der Meer, a Dutch particle physicist at CERN, developed stochastic cooling, a technique to concentrate proton and antiproton beams, which led to the discovery of the W and Z particles. He won the 1984 Nobel Prize in Physics together with Carlo Rubbia.
Renormalization of gauge theories (1971).
In 1971, Gerardus 't Hooft, who was completing his PhD under the supervision of Dutch theoretical physicist Martinus Veltman, renormalized Yang–Mills theory. They showed that if the symmetries of Yang–Mills theory were to be realized in the spontaneously broken mode, referred to as the Higgs mechanism, then Yang–Mills theory can be renormalized. Renormalization of Yang–Mills theory is considered as a major achievement of twentieth century physics.
Holographic principle (1993).
The holographic principle is a property of string theories and a supposed property of quantum gravity that states that the description of a volume of space can be thought of as encoded on a boundary to the region – preferably a light-like boundary like a gravitational horizon. In 1993, Dutch theoretical physicist Gerard 't Hooft proposed what is now known as the holographic principle. It was given a precise string-theory interpretation by Leonard Susskind who combined his ideas with previous ones of 't Hooft and Charles Thorn.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": " i "
},
{
"math_id": 3,
"text": "F_{c}=\\frac{m\\ v^2}{r}"
},
{
"math_id": 4,
"text": "T = 2 \\pi \\sqrt{\\frac{l}{g}}"
},
{
"math_id": 5,
"text": "\\mathbf{F}=q(\\mathbf{E}+\\mathbf{v}\\times\\mathbf{B})"
}
] |
https://en.wikipedia.org/wiki?curid=59732566
|
59733
|
Hexagon
|
Shape with six sides
In geometry, a hexagon (from Greek ἕξ, "hex", meaning "six", and γωνία, "gonía", meaning "corner, angle") is a six-sided polygon. The total of the internal angles of any simple (non-self-intersecting) hexagon is 720°.
Regular hexagon.
A "regular hexagon" has Schläfli symbol {6} and can also be constructed as a truncated equilateral triangle, t{3}, which alternates two types of edges.
A regular hexagon is defined as a hexagon that is both equilateral and equiangular. It is bicentric, meaning that it is both cyclic (has a circumscribed circle) and tangential (has an inscribed circle).
The common length of the sides equals the radius of the circumscribed circle or circumcircle, which equals formula_0 times the apothem (radius of the inscribed circle). All internal angles are 120 degrees. A regular hexagon has six rotational symmetries ("rotational symmetry of order six") and six reflection symmetries ("six lines of symmetry"), making up the dihedral group D6. The longest diagonals of a regular hexagon, connecting diametrically opposite vertices, are twice the length of one side. From this it can be seen that a triangle with a vertex at the center of the regular hexagon and sharing one side with the hexagon is equilateral, and that the regular hexagon can be partitioned into six equilateral triangles.
Like squares and equilateral triangles, regular hexagons fit together without any gaps to "tile the plane" (three hexagons meeting at every vertex), and so are useful for constructing tessellations. The cells of a beehive honeycomb are hexagonal for this reason and because the shape makes efficient use of space and building materials. The Voronoi diagram of a regular triangular lattice is the honeycomb tessellation of hexagons.
Parameters.
The maximal diameter (which corresponds to the long diagonal of the hexagon), "D", is twice the maximal radius or circumradius, "R", which equals the side length, "t". The minimal diameter or the diameter of the inscribed circle (separation of parallel sides, flat-to-flat distance, short diagonal or height when resting on a flat base), "d", is twice the minimal radius or inradius, "r". The maxima and minima are related by the same factor:
formula_1 and, similarly, formula_2
The area of a regular hexagon
formula_3
For any regular polygon, the area can also be expressed in terms of the apothem "a" and the perimeter "p". For the regular hexagon these are given by "a" = "r", and "p"formula_4, so
formula_5
The regular hexagon fills the fraction formula_6 of its circumscribed circle.
If a regular hexagon has successive vertices A, B, C, D, E, F and if P is any point on the circumcircle between B and C, then PE + PF = PA + PB + PC + PD.
It follows from the ratio of circumradius to inradius that the height-to-width ratio of a regular hexagon is 1:1.1547005; that is, a hexagon with a long diagonal of 1.0000000 will have a distance of 0.8660254 between parallel sides.
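A short Python sketch collecting these relations for a given side length "t" (a convenience helper, following the formulas above):

```python
from math import sqrt

def regular_hexagon(t):
    """Measurements of a regular hexagon with side length t: circumradius
    R = t, inradius r = (sqrt(3)/2) t, long diagonal D = 2t, flat-to-flat
    distance d = sqrt(3) t, and area A = (3*sqrt(3)/2) t**2."""
    R, r = t, sqrt(3) / 2 * t
    return {"R": R, "r": r, "D": 2 * R, "d": 2 * r,
            "area": 3 * sqrt(3) / 2 * t**2}

h = regular_hexagon(1.0)
print(h["d"] / h["D"])  # ~0.8660254, the flat-to-flat / long-diagonal ratio
print(h["area"])        # ~2.598, matching (3*sqrt(3)/2) R^2
```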
Point in plane.
For an arbitrary point in the plane of a regular hexagon with circumradius formula_7, whose distances to the centroid of the regular hexagon and its six vertices are formula_8 and formula_9
respectively, we have
formula_10
formula_11
formula_12
If formula_9 are the distances from the vertices of a regular hexagon to any point on its circumcircle, then
formula_13
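These identities are easy to verify numerically. The sketch below places the vertices at angles of k·60° around the origin, a choice of orientation made only for convenience, since the identities do not depend on it:

```python
import numpy as np

def hexagon_vertex_distances(point, R=1.0):
    """Distances from an arbitrary planar point to the six vertices of a
    regular hexagon with circumradius R centred at the origin."""
    angles = np.arange(6) * np.pi / 3
    vertices = R * np.column_stack((np.cos(angles), np.sin(angles)))
    return np.linalg.norm(vertices - np.asarray(point), axis=1)

R, p = 1.0, np.array([0.3, -0.2])      # any point in the plane
d = hexagon_vertex_distances(p, R)
L = np.linalg.norm(p)                  # distance to the centroid
print(d[0]**2 + d[3]**2, 2 * (R**2 + L**2))   # opposite vertices
print((d[::2]**2).sum(), 3 * (R**2 + L**2))   # alternating vertices
```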
Symmetry.
The "regular hexagon" has D6 symmetry. There are 16 subgroups. There are 8 up to isomorphism: itself (D6), 2 dihedral: (D3, D2), 4 cyclic: (Z6, Z3, Z2, Z1) and the trivial (e)
These symmetries express nine distinct symmetries of a regular hexagon. John Conway labels these by a letter and group order. r12 is full symmetry, and a1 is no symmetry. p6, an isogonal hexagon constructed by three mirrors can alternate long and short edges, and d6, an isotoxal hexagon constructed with equal edge lengths, but vertices alternating two different internal angles. These two forms are duals of each other and have half the symmetry order of the regular hexagon. The i4 forms are regular hexagons flattened or stretched along one symmetry direction. It can be seen as an elongated rhombus, while d2 and p2 can be seen as horizontally and vertically elongated kites. g2 hexagons, with opposite sides parallel are also called hexagonal parallelogons.
Each subgroup symmetry allows one or more degrees of freedom for irregular forms. Only the g6 subgroup has no degrees of freedom but can be seen as directed edges.
Hexagons of symmetry g2, i4, and r12, as parallelogons can tessellate the Euclidean plane by translation. Other hexagon shapes can tile the plane with different orientations.
A2 and G2 groups.
The 6 roots of the simple Lie group A2, represented by a Dynkin diagram, are in a regular hexagonal pattern. The two simple roots have a 120° angle between them.
The 12 roots of the exceptional Lie group G2, represented by a Dynkin diagram, are also in a hexagonal pattern. The two simple roots of two lengths have a 150° angle between them.
Dissection.
Coxeter states that every zonogon (a 2"m"-gon whose opposite sides are parallel and of equal length) can be dissected into parallelograms. In particular this is true for regular polygons with evenly many sides, in which case the parallelograms are all rhombi. This decomposition of a regular hexagon is based on a Petrie polygon projection of a cube, with 3 of 6 square faces. Other parallelogons and projective directions of the cube are dissected within rectangular cuboids.
Related polygons and tilings.
A regular hexagon has Schläfli symbol {6}. A regular hexagon is a part of the regular hexagonal tiling, {6,3}, with three hexagonal faces around each vertex.
A regular hexagon can also be created as a truncated equilateral triangle, with Schläfli symbol t{3}. Seen with two types (colors) of edges, this form only has D3 symmetry.
A truncated hexagon, t{6}, is a dodecagon, {12}, alternating two types (colors) of edges. An alternated hexagon, h{6}, is an equilateral triangle, {3}. A regular hexagon can be stellated with equilateral triangles on its edges, creating a hexagram. A regular hexagon can be dissected into six equilateral triangles by adding a center point. This pattern repeats within the regular triangular tiling.
A regular hexagon can be extended into a regular dodecagon by adding alternating squares and equilateral triangles around it. This pattern repeats within the rhombitrihexagonal tiling.
Self-crossing hexagons.
There are six self-crossing hexagons with the vertex arrangement of the regular hexagon.
Hexagonal structures.
From bees' honeycombs to the Giant's Causeway, hexagonal patterns are prevalent in nature due to their efficiency. In a hexagonal grid each line is as short as it can possibly be if a large area is to be filled with the fewest hexagons. This means that honeycombs require less wax to construct and gain much strength under compression.
Irregular hexagons with parallel opposite edges are called parallelogons and can also tile the plane by translation. In three dimensions, hexagonal prisms with parallel opposite faces are called parallelohedrons and these can tessellate 3-space by translation.
Tessellations by hexagons.
In addition to the regular hexagon, which determines a unique tessellation of the plane, any irregular hexagon which satisfies the Conway criterion will tile the plane.
Hexagon inscribed in a conic section.
Pascal's theorem (also known as the "Hexagrammum Mysticum Theorem") states that if an arbitrary hexagon is inscribed in any conic section, and pairs of opposite sides are extended until they meet, the three intersection points will lie on a straight line, the "Pascal line" of that configuration.
Cyclic hexagon.
The Lemoine hexagon is a cyclic hexagon (one inscribed in a circle) with vertices given by the six intersections of the edges of a triangle and the three lines that are parallel to the edges that pass through its symmedian point.
If the successive sides of a cyclic hexagon are "a", "b", "c", "d", "e", "f", then the three main diagonals intersect in a single point if and only if "ace" = "bdf".
If, for each side of a cyclic hexagon, the adjacent sides are extended to their intersection, forming a triangle exterior to the given side, then the segments connecting the circumcenters of opposite triangles are concurrent.
If a hexagon has vertices on the circumcircle of an acute triangle at the six points (including three triangle vertices) where the extended altitudes of the triangle meet the circumcircle, then the area of the hexagon is twice the area of the triangle.
Hexagon tangential to a conic section.
Let ABCDEF be a hexagon formed by six tangent lines of a conic section. Then Brianchon's theorem states that the three main diagonals AD, BE, and CF intersect at a single point.
In a hexagon that is tangential to a circle and that has consecutive sides "a", "b", "c", "d", "e", and "f",
formula_14
Equilateral triangles on the sides of an arbitrary hexagon.
If an equilateral triangle is constructed externally on each side of any hexagon, then the midpoints of the segments connecting the centroids of opposite triangles form another equilateral triangle.
Skew hexagon.
A skew hexagon is a skew polygon with six vertices and edges but not existing on the same plane. The interior of such a hexagon is not generally defined. A "skew zig-zag hexagon" has vertices alternating between two parallel planes.
A regular skew hexagon is vertex-transitive with equal edge lengths. In three dimensions it will be a zig-zag skew hexagon and can be seen in the vertices and side edges of a triangular antiprism with the same D3d, [2+,6] symmetry, order 12.
The cube and octahedron (same as triangular antiprism) have regular skew hexagons as Petrie polygons.
Petrie polygons.
The regular skew hexagon is the Petrie polygon for several higher-dimensional regular, uniform and dual polyhedra and polytopes.
Convex equilateral hexagon.
A "principal diagonal" of a hexagon is a diagonal which divides the hexagon into quadrilaterals. In any convex equilateral hexagon (one with all sides equal) with common side "a", there exists a principal diagonal "d"1 such that
formula_15
and a principal diagonal "d"2 such that
formula_16
Polyhedra with hexagons.
There is no Platonic solid made of only regular hexagons, because the hexagons tessellate, not allowing the result to "fold up". The Archimedean solids with some hexagonal faces are the truncated tetrahedron, truncated octahedron, truncated icosahedron (of soccer ball and fullerene fame), truncated cuboctahedron and the truncated icosidodecahedron. These hexagons can be considered truncated triangles.
There are other symmetry polyhedra with stretched or flattened hexagons, such as the Goldberg polyhedra of type G(2,0).
There are also 9 Johnson solids with regular hexagons.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\tfrac{2}{\\sqrt{3}}"
},
{
"math_id": 1,
"text": "\\frac{1}{2}d = r = \\cos(30^\\circ) R = \\frac{\\sqrt{3}}{2} R = \\frac{\\sqrt{3}}{2} t"
},
{
"math_id": 2,
"text": "d = \\frac{\\sqrt{3}}{2} D."
},
{
"math_id": 3,
"text": "\\begin{align}\n A &= \\frac{3\\sqrt{3}}{2}R^2 = 3Rr = 2\\sqrt{3} r^2 \\\\[3pt]\n &= \\frac{3\\sqrt{3}}{8}D^2 = \\frac{3}{4}Dd = \\frac{\\sqrt{3}}{2} d^2 \\\\[3pt]\n &\\approx 2.598 R^2 \\approx 3.464 r^2\\\\\n &\\approx 0.6495 D^2 \\approx 0.866 d^2.\n\\end{align}"
},
{
"math_id": 4,
"text": "{} = 6R = 4r\\sqrt{3}"
},
{
"math_id": 5,
"text": "\\begin{align}\n A &= \\frac{ap}{2} \\\\\n &= \\frac{r \\cdot 4r\\sqrt{3}}{2} = 2r^2\\sqrt{3} \\\\\n &\\approx 3.464 r^2.\n\\end{align}"
},
{
"math_id": 6,
"text": "\\tfrac{3\\sqrt{3}}{2\\pi} \\approx 0.8270"
},
{
"math_id": 7,
"text": "R"
},
{
"math_id": 8,
"text": "L"
},
{
"math_id": 9,
"text": "d_i"
},
{
"math_id": 10,
"text": " d_1^2 + d_4^2 = d_2^2 + d_5^2 = d_3^2+ d_6^2= 2\\left(R^2 + L^2\\right), "
},
{
"math_id": 11,
"text": " d_1^2 + d_3^2+ d_5^2 = d_2^2 + d_4^2+ d_6^2 = 3\\left(R^2 + L^2\\right), "
},
{
"math_id": 12,
"text": " d_1^4 + d_3^4+ d_5^4 = d_2^4 + d_4^4+ d_6^4 = 3\\left(\\left(R^2 + L^2\\right)^2 + 2 R^2 L^2\\right). "
},
{
"math_id": 13,
"text": "\\left(\\sum_{i=1}^6 d_i^2\\right)^2 = 4 \\sum_{i=1}^6 d_i^4 ."
},
{
"math_id": 14,
"text": "a + c + e = b + d + f."
},
{
"math_id": 15,
"text": "\\frac{d_1}{a} \\leq 2"
},
{
"math_id": 16,
"text": "\\frac{d_2}{a} > \\sqrt{3}."
}
] |
https://en.wikipedia.org/wiki?curid=59733
|
59734735
|
Conical spiral
|
Plane spiral projected onto the surface of a cone
In mathematics, a conical spiral, also known as a conical helix, is a space curve on a right circular cone, whose floor projection is a plane spiral. If the floor projection is a logarithmic spiral, it is called "conchospiral" (from conch).
Parametric representation.
In the formula_0-formula_1-plane a spiral with parametric representation
formula_2
a third coordinate formula_3 can be added such that the space curve lies on the cone with equation formula_4 :
formula_5
Such curves are called conical spirals. They were known to Pappos.
Parameter formula_6 is the slope of the cone's lines with respect to the formula_0-formula_1-plane.
A conical spiral can instead be seen as the orthogonal projection of the floor plan spiral onto the cone.
1) Starting with an "archimedean spiral" formula_7 gives the conical spiral (see diagram)
formula_8
In this case the conical spiral can be seen as the intersection curve of the cone with a helicoid.
2) The second diagram shows a conical spiral with a "Fermat's spiral" formula_9 as floor plan.
3) The third example has a "logarithmic spiral" formula_10 as floor plan. Its special feature is its constant "slope" (see below).
Introducing the abbreviation formula_11gives the description: formula_12.
4) Example 4 is based on a "hyperbolic spiral" formula_13. Such a spiral has an "asymptote" (black line), which is the floor plan of a hyperbola (purple). The conical spiral approaches the hyperbola for formula_14.
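The parametric description translates directly into a few lines of code. The sketch below lifts the archimedean floor plan of example 1 onto the cone and checks that every point satisfies the cone equation; all parameter values are illustrative.

```python
import numpy as np

def conical_spiral(phi, r_of_phi, m=1.0, z0=0.0):
    """Points (x, y, z) of a conical spiral: the floor-plan spiral r(phi)
    lifted onto the cone z = z0 + m * r(phi)."""
    r = r_of_phi(phi)
    return r * np.cos(phi), r * np.sin(phi), z0 + m * r

# Example 1: archimedean floor plan r = a * phi.
phi = np.linspace(0.0, 6 * np.pi, 400)
a, m = 0.5, 1.0
x, y, z = conical_spiral(phi, lambda p: a * p, m)
# Every point satisfies the cone equation m^2 (x^2 + y^2) = (z - z0)^2:
print(np.allclose(m**2 * (x**2 + y**2), z**2))  # True (here z0 = 0)
```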
Properties.
The following investigation deals with conical spirals of the form formula_15 and formula_16, respectively.
Slope.
The "slope" at a point of a conical spiral is the slope of this point's tangent with respect to the formula_0-formula_1-plane. The corresponding angle is its "slope angle" (see diagram):
formula_17
A spiral with formula_15 gives:
formula_18
For an "archimedean" spiral, formula_19, and hence its slope is formula_20
For a "logarithmic" spiral formula_16 the slope is formula_21 (formula_22).
Because of this property a conchospiral is called an "equiangular" conical spiral.
Arclength.
The length of an arc of a conical spiral can be determined by
formula_23
For an "archimedean" spiral the integral can be solved with help of a table of integrals, analogously to the planar case:
formula_24
For a "logarithmic" spiral the integral can be solved easily:
formula_25
In other cases elliptical integrals occur.
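The general arc-length integral can always be evaluated numerically. The sketch below compares numerical quadrature with the closed form for the logarithmic case quoted above; the parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def arc_length(r, r_prime, m, phi1, phi2):
    """Arc length of a conical spiral with floor plan r(phi):
    L = integral of sqrt((1 + m^2) * r'(phi)^2 + r(phi)^2) dphi."""
    integrand = lambda p: np.sqrt((1 + m**2) * r_prime(p)**2 + r(p)**2)
    return quad(integrand, phi1, phi2)[0]

# Logarithmic floor plan r = a e^{k phi}; compare with the closed form
# L = sqrt((1 + m^2) k^2 + 1) / k * (r(phi2) - r(phi1)).
a, k, m = 1.0, 0.1, 2.0
r = lambda p: a * np.exp(k * p)
rp = lambda p: a * k * np.exp(k * p)
closed = np.sqrt((1 + m**2) * k**2 + 1) / k * (r(4 * np.pi) - r(0.0))
print(arc_length(r, rp, m, 0.0, 4 * np.pi), closed)  # the two values agree
```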
Development.
For the development of a conical spiral the distance formula_26 of a curve point formula_27 to the cone's apex formula_28 and the relation between the angle formula_29 and the corresponding angle formula_30 of the development have to be determined:
formula_31
formula_32
Hence the polar representation of the developed conical spiral is:
formula_33
In case of formula_15 the polar representation of the developed curve is
formula_34
which describes a spiral of the same type.
In case of a "hyperbolic" spiral (formula_35) the development is congruent to the floor plan spiral.
In case of a "logarithmic" spiral formula_16 the development is a logarithmic spiral:
formula_36
Tangent trace.
The collection of intersection points of the tangents of a conical spiral with the formula_0-formula_1-plane (plane through the cone's apex) is called its "tangent trace".
For the conical spiral
formula_37
the tangent vector is
formula_38
and the tangent:
formula_39
formula_40
formula_41
The intersection with the formula_0-formula_1-plane occurs at parameter formula_42, and the intersection point is
formula_43
formula_15 gives formula_44 and the tangent trace is a spiral. In the case formula_35 (hyperbolic spiral) the tangent trace degenerates to a "circle" with radius formula_45 (see diagram). For formula_46 one has formula_47 and the tangent trace is a logarithmic spiral, which is congruent to the floor plan, because of the self-similarity of a logarithmic spiral.
|
[
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "y"
},
{
"math_id": 2,
"text": "x=r(\\varphi)\\cos\\varphi \\ ,\\qquad y=r(\\varphi)\\sin\\varphi"
},
{
"math_id": 3,
"text": "z(\\varphi)"
},
{
"math_id": 4,
"text": "\\;m^2(x^2+y^2)=(z-z_0)^2\\ ,\\ m>0\\;"
},
{
"math_id": 5,
"text": "x=r(\\varphi)\\cos\\varphi \\ ,\\qquad y=r(\\varphi)\\sin\\varphi\\ , \\qquad \\color{red}{z=z_0 + mr(\\varphi)} \\ ."
},
{
"math_id": 6,
"text": " m "
},
{
"math_id": 7,
"text": "\\;r(\\varphi)=a\\varphi\\;"
},
{
"math_id": 8,
"text": "x=a\\varphi\\cos\\varphi \\ ,\\qquad y=a\\varphi\\sin\\varphi\\ , \\qquad z=z_0 + ma\\varphi \\ ,\\quad \\varphi \\ge 0 \\ ."
},
{
"math_id": 9,
"text": "\\;r(\\varphi)=\\pm a\\sqrt{\\varphi}\\;"
},
{
"math_id": 10,
"text": "\\; r(\\varphi)=a e^{k\\varphi} \\; "
},
{
"math_id": 11,
"text": "K=e^k"
},
{
"math_id": 12,
"text": "r(\\varphi)=aK^\\varphi"
},
{
"math_id": 13,
"text": "\\; r(\\varphi)=a/\\varphi\\; "
},
{
"math_id": 14,
"text": " \\varphi \\to 0"
},
{
"math_id": 15,
"text": "r=a\\varphi^n"
},
{
"math_id": 16,
"text": "r=ae^{k\\varphi}"
},
{
"math_id": 17,
"text": "\\tan \\beta = \\frac{z'}{\\sqrt{(x')^2+(y')^2}}=\\frac{mr'}{\\sqrt{(r')^2+r^2}}\\ ."
},
{
"math_id": 18,
"text": "\\tan\\beta=\\frac{mn}{\\sqrt{n^2+\\varphi^2}}\\ ."
},
{
"math_id": 19,
"text": "n=1"
},
{
"math_id": 20,
"text": "\\ \\tan\\beta=\\tfrac{m}{\\sqrt{1+\\varphi^2}}\\ ."
},
{
"math_id": 21,
"text": "\\ \\tan\\beta= \\tfrac{mk}{\\sqrt{1+k^2}}\\ "
},
{
"math_id": 22,
"text": "\\color{red}{\\text{ constant!}}"
},
{
"math_id": 23,
"text": "L=\\int_{\\varphi_1}^{\\varphi_2}\\sqrt{(x')^2+(y')^2+(z')^2}\\,\\mathrm{d}\\varphi\n= \\int_{\\varphi_1}^{\\varphi_2}\\sqrt{(1+m^2)(r')^2+r^2}\\,\\mathrm{d}\\varphi \\ ."
},
{
"math_id": 24,
"text": "L= \\frac{a}{2} \\left[\\varphi\\sqrt{(1+m^2) + \\varphi^2} + (1+m^2)\\ln \\left(\\varphi + \\sqrt{(1+m^2) + \\varphi^2}\\right)\\right ]_{\\varphi_1}^{\\varphi_2}\\ ."
},
{
"math_id": 25,
"text": "L=\\frac{\\sqrt{(1+m^2)k^2+1}}{k}(r\\big(\\varphi_2)-r(\\varphi_1)\\big)\\ ."
},
{
"math_id": 26,
"text": "\\rho(\\varphi)"
},
{
"math_id": 27,
"text": "(x,y,z)"
},
{
"math_id": 28,
"text": "(0,0,z_0)"
},
{
"math_id": 29,
"text": "\\varphi"
},
{
"math_id": 30,
"text": "\\psi"
},
{
"math_id": 31,
"text": "\\rho=\\sqrt{x^2+y^2+(z-z_0)^2}=\\sqrt{1+m^2}\\;r \\ ,"
},
{
"math_id": 32,
"text": "\\varphi= \\sqrt{1+m^2}\\psi \\ ."
},
{
"math_id": 33,
"text": "\\rho(\\psi)=\\sqrt{1+m^2}\\; r(\\sqrt{1+m^2}\\psi)"
},
{
"math_id": 34,
"text": "\\rho=a\\sqrt{1+m^2}^{\\,n+1}\\psi^n,"
},
{
"math_id": 35,
"text": "n=-1"
},
{
"math_id": 36,
"text": "\\rho=a\\sqrt{1+m^2}\\;e^{k\\sqrt{1+m^2}\\psi}\\ ."
},
{
"math_id": 37,
"text": "(r\\cos\\varphi, r\\sin\\varphi,mr)"
},
{
"math_id": 38,
"text": "(r'\\cos\\varphi-r\\sin\\varphi,r'\\sin\\varphi+r\\cos\\varphi,mr')^T"
},
{
"math_id": 39,
"text": "x(t)=r\\cos\\varphi+t(r'\\cos\\varphi-r\\sin\\varphi)\\ ,"
},
{
"math_id": 40,
"text": "y(t)=r\\sin\\varphi +t(r'\\sin\\varphi+r\\cos\\varphi)\\ ,"
},
{
"math_id": 41,
"text": "z(t)=mr+tmr'\\ ."
},
{
"math_id": 42,
"text": "t=-r/r'"
},
{
"math_id": 43,
"text": " \\left( \\frac{r^2}{r'}\\sin\\varphi, -\\frac{r^2}{r'}\\cos\\varphi,0 \\right)\\ ."
},
{
"math_id": 44,
"text": "\\ \\tfrac{r^2}{r'}=\\tfrac{a}{n}\\varphi^{n+1}\\ "
},
{
"math_id": 45,
"text": "a"
},
{
"math_id": 46,
"text": " r=a e^{k\\varphi} "
},
{
"math_id": 47,
"text": "\\ \\tfrac{r^2}{r'}=\\tfrac{r}{k}\\ "
}
] |
https://en.wikipedia.org/wiki?curid=59734735
|
59735
|
Free group
|
Mathematics concept
In mathematics, the free group "F""S" over a given set "S" consists of all words that can be built from members of "S", considering two words to be different unless their equality follows from the group axioms (e.g. "st" = "suu"−1"t" but "s" ≠ "t"−1 for "s","t","u" ∈ "S"). The members of "S" are called generators of "F""S", and the number of generators is the rank of the free group.
An arbitrary group "G" is called free if it is isomorphic to "F""S" for some subset "S" of "G", that is, if there is a subset "S" of "G" such that every element of "G" can be written in exactly one way as a product of finitely many elements of "S" and their inverses (disregarding trivial variations such as "st" = "suu"−1"t").
A related but different notion is a free abelian group; both notions are particular instances of a free object from universal algebra. As such, free groups are defined by their universal property.
History.
Free groups first arose in the study of hyperbolic geometry, as examples of Fuchsian groups (discrete groups acting by isometries on the hyperbolic plane). In an 1882 paper, Walther von Dyck pointed out that these groups have the simplest possible presentations. The algebraic study of free groups was initiated by Jakob Nielsen in 1924, who gave them their name and established many of their basic properties. Max Dehn realized the connection with topology, and obtained the first proof of the full Nielsen–Schreier theorem. Otto Schreier published an algebraic proof of this result in 1927, and Kurt Reidemeister included a comprehensive treatment of free groups in his 1932 book on combinatorial topology. Later on in the 1930s, Wilhelm Magnus discovered the connection between the lower central series of free groups and free Lie algebras.
Examples.
The group (Z,+) of integers is free of rank 1; a generating set is "S" = {1}. The integers are also a free abelian group, although all free groups of rank formula_0 are non-abelian. A free group on a two-element set "S" occurs in the proof of the Banach–Tarski paradox and is described there.
On the other hand, any nontrivial finite group cannot be free, since the elements of a free generating set of a free group have infinite order.
In algebraic topology, the fundamental group of a bouquet of "k" circles (a set of "k" loops having only one point in common) is the free group on a set of "k" elements.
Construction.
The free group "FS" with free generating set "S" can be constructed as follows. "S" is a set of symbols, and we suppose for every "s" in "S" there is a corresponding "inverse" symbol, "s"−1, in a set "S"−1. Let "T" = "S" ∪ "S"−1, and define a word in "S" to be any written product of elements of "T". That is, a word in "S" is an element of the monoid generated by "T". The empty word is the word with no symbols at all. For example, if "S" = {"a", "b", "c"}, then "T" = {"a", "a"−1, "b", "b"−1, "c", "c"−1}, and
formula_1
is a word in "S".
If an element of "S" lies immediately next to its inverse, the word may be simplified by omitting the "c", "c"−1 pair:
formula_2
A word that cannot be simplified further is called reduced.
The free group "FS" is defined to be the group of all reduced words in "S", with concatenation of words (followed by reduction if necessary) as group operation. The identity is the empty word.
A reduced word is called cyclically reduced if its first and last letter are not inverse to each other. Every word is conjugate to a cyclically reduced word, and a cyclically reduced conjugate of a cyclically reduced word is a cyclic permutation of the letters in the word. For instance "b"−1"abcb" is not cyclically reduced, but is conjugate to "abc", which is cyclically reduced. The only cyclically reduced conjugates of "abc" are "abc", "bca", and "cab".
Universal property.
The free group "FS" is the universal group generated by the set "S". This can be formalized by the following universal property: given any function f from "S" to a group "G", there exists a unique homomorphism "φ": "FS" → "G" making the following diagram commute (where the unnamed mapping denotes the inclusion from "S" into "FS"):
That is, homomorphisms "FS" → "G" are in one-to-one correspondence with functions "S" → "G". For a non-free group, the presence of relations would restrict the possible images of the generators under a homomorphism.
To see how this relates to the constructive definition, think of the mapping from "S" to "FS" as sending each symbol to a word consisting of that symbol. To construct "φ" for the given f, first note that "φ" sends the empty word to the identity of "G" and it has to agree with f on the elements of "S". For the remaining words (consisting of more than one symbol), "φ" can be uniquely extended, since it is a homomorphism, i.e., "φ"("ab") = "φ"("a") "φ"("b").
The above property characterizes free groups up to isomorphism, and is sometimes used as an alternative definition. It is known as the universal property of free groups, and the generating set "S" is called a basis for "FS". The basis for a free group is not uniquely determined.
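As a small illustration of this extension (a sketch under assumed names, with the additive group of integers standing in for "G"), the following Python snippet extends an arbitrary map f from the generators into the integers to the induced homomorphism on words:

```python
def phi(f, word):
    """Extend f: S -> Z (integers under addition) to words, sending s^e to e*f[s]."""
    return sum(e * f[s] for s, e in word)

f = {'a': 2, 'b': -1, 'c': 5}                  # an arbitrary choice of images of the generators
u = [('a', 1), ('b', 1)]                        # the word a b
v = [('b', -1), ('c', 1)]                       # the word b^-1 c
# Cancelling a pair s^e s^-e changes the sum by e*f[s] - e*f[s] = 0,
# so phi is unchanged by free reduction and is well defined on the free group.
assert phi(f, u + v) == phi(f, u) + phi(f, v)   # the homomorphism property
```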
Being characterized by a universal property is the standard feature of free objects in universal algebra. In the language of category theory, the construction of the free group (similar to most constructions of free objects) is a functor from the category of sets to the category of groups. This functor is left adjoint to the forgetful functor from groups to sets.
Facts and theorems.
Some properties of free groups follow readily from the definition:
A few other related results are:
Free abelian group.
The free abelian group on a set "S" is defined via its universal property in the analogous way, with obvious modifications:
Consider a pair ("F", "φ"), where "F" is an abelian group and "φ": "S" → "F" is a function. "F" is said to be the free abelian group on "S" with respect to "φ" if for any abelian group "G" and any function "ψ": "S" → "G", there exists a unique homomorphism "f": "F" → "G" such that
"f"("φ"("s")) = "ψ"("s"), for all "s" in "S".
The free abelian group on "S" can be explicitly identified as the free group F("S") modulo the subgroup generated by its commutators, [F("S"), F("S")], i.e.
its abelianisation. In other words, the free abelian group on "S" is the set of words that are distinguished only up to the order of letters. The rank of a free group can therefore also be defined as the rank of its abelianisation as a free abelian group.
Tarski's problems.
Around 1945, Alfred Tarski asked whether the free groups on two or more generators have the same first-order theory, and whether this theory is decidable. Zlil Sela answered the first question by showing that any two nonabelian free groups have the same first-order theory, and Olga Kharlampovich and Alexei Myasnikov answered both questions, showing that this theory is decidable.
A similar unsolved (as of 2011) question in free probability theory asks whether the von Neumann group algebras of any two non-abelian finitely generated free groups are isomorphic.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\geq 2"
},
{
"math_id": 1,
"text": "a b^3 c^{-1} c a^{-1} c\\,"
},
{
"math_id": 2,
"text": "a b^3 c^{-1} c a^{-1} c\\;\\;\\longrightarrow\\;\\;a b^3 \\, a^{-1} c."
},
{
"math_id": 3,
"text": "H \\subset F"
}
] |
https://en.wikipedia.org/wiki?curid=59735
|
59742671
|
DSatur
|
Graph colouring algorithm by Daniel Brélaz
DSatur is a graph colouring algorithm put forward by Daniel Brélaz in 1979. Similarly to the greedy colouring algorithm, DSatur colours the vertices of a graph one after another, adding a previously unused colour when needed. Once a new vertex has been coloured, the algorithm determines which of the remaining uncoloured vertices has the highest number of colours in its neighbourhood and colours this vertex next. Brélaz defines this number as the "degree of saturation" of a given vertex. The contraction of the term "degree of saturation" forms the name of the algorithm. DSatur is a heuristic graph colouring algorithm, yet produces exact results for bipartite, cycle, and wheel graphs. DSatur has also been referred to as saturation LF in the literature.
Pseudocode.
Let the "degree of saturation" of a vertex be the number of different colours being used by its neighbors. Given a simple, undirected graph formula_0 compromising a vertex set formula_1 and edge set formula_2, the algorithm assigns colors to all of the vertices using color labels formula_3. The algorithm operates as follows:
Step 2 of this algorithm assigns colours to vertices using the same scheme as the greedy colouring algorithm. The main difference between the two approaches arises in Step 1 above, where vertices seen to be the most "constrained" are coloured first.
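A compact Python sketch of this procedure is given below. It is only an illustration: the adjacency-map input format is an assumption of the sketch, and ties in Step 1 are broken by degree among the uncoloured vertices, as described above.

```python
def dsatur(adj):
    """adj maps each vertex to the set of its neighbours; returns a dict vertex -> colour."""
    colour = {}
    neighbour_colours = {v: set() for v in adj}          # saturation sets
    while len(colour) < len(adj):
        uncoloured = [v for v in adj if v not in colour]
        # Step 1: highest saturation degree, ties broken by degree among uncoloured vertices.
        v = max(uncoloured,
                key=lambda u: (len(neighbour_colours[u]),
                               sum(1 for w in adj[u] if w not in colour)))
        # Step 2: smallest colour label not used in v's neighbourhood.
        c = 0
        while c in neighbour_colours[v]:
            c += 1
        colour[v] = c
        for w in adj[v]:
            neighbour_colours[w].add(c)
    return colour

# A 4-cycle needs only two colours:
print(dsatur({0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}))
```

This direct implementation takes quadratic time; the heap-based variants mentioned in the Performance section below improve on this for sparse graphs.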
Example.
Consider the graph formula_5 shown on the right. This is a wheel graph and will therefore be optimally colored by the DSatur algorithm. Executing the algorithm results in the vertices being selected and colored as follows. (In this example, where ties occur in both of DSatur's heuristics, the vertex with lowest lexicographic labelling among these is chosen.)
This gives the final three-colored solution formula_13.
Performance.
The worst-case complexity of DSatur is formula_14, where formula_15 is the number of vertices in the graph. This is because the process of selecting the next vertex to colour takes formula_16 time, and this process is carried out formula_15 times. The algorithm can also be implemented using a binary heap to store saturation degrees, operating in formula_17, or formula_18 using a Fibonacci heap, where formula_19 is the number of edges in the graph. This produces much faster runs with sparse graphs.
DSatur is known to be exact for bipartite graphs, as well as for cycle and wheel graphs. In an empirical comparison by Lewis in 2021, DSatur produced significantly better vertex colourings than the greedy algorithm on random graphs with edge probability formula_20, while in turn producing significantly worse colourings than the recursive largest first algorithm.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "V"
},
{
"math_id": 2,
"text": "E"
},
{
"math_id": 3,
"text": "1,2,3,..."
},
{
"math_id": 4,
"text": "v"
},
{
"math_id": 5,
"text": "G=(V,E)"
},
{
"math_id": 6,
"text": "g"
},
{
"math_id": 7,
"text": "a"
},
{
"math_id": 8,
"text": "b"
},
{
"math_id": 9,
"text": "c"
},
{
"math_id": 10,
"text": "d"
},
{
"math_id": 11,
"text": "e"
},
{
"math_id": 12,
"text": "f"
},
{
"math_id": 13,
"text": "\\mathcal{S} = \\{\\{g\\}, \\{a, c, e\\}, \\{b, d, f\\}\\}"
},
{
"math_id": 14,
"text": "O(n^2)"
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "O(n)"
},
{
"math_id": 17,
"text": "O((n+m)\\log n)"
},
{
"math_id": 18,
"text": "O(m+n\\log n)"
},
{
"math_id": 19,
"text": "m"
},
{
"math_id": 20,
"text": "p=0.5"
}
] |
https://en.wikipedia.org/wiki?curid=59742671
|
5974662
|
Relational quantum mechanics
|
Interpretation of quantum mechanics
Relational quantum mechanics (RQM) is an interpretation of quantum mechanics which treats the state of a quantum system as being relational, that is, the state "is" the relation between the observer and the system. This interpretation was first delineated by Carlo Rovelli in a 1994 preprint, and has since been expanded upon by a number of theorists. It is inspired by the key idea behind special relativity, that the details of an observation depend on the reference frame of the observer, and uses some ideas from Wheeler on quantum information.
The physical content of the theory is not concerned with objects themselves, but with the relations between them. As Rovelli puts it:
"Quantum mechanics is a theory about the physical description of physical systems relative to other systems, and this is a complete description of the world".
The essential idea behind RQM is that different observers may give different accurate accounts of the same system. For example, to one observer, a system is in a single, "collapsed" eigenstate. To a second observer, the same system is in a superposition of two or more states and the first observer is in a correlated superposition of two or more states. RQM argues that this is a complete picture of the world because the notion of "state" is always relative to some observer. There is no privileged, "real" account.
The state vector of conventional quantum mechanics becomes a description of the correlation of some "degrees of freedom" in the observer, with respect to the observed system.
The terms "observer" and "observed" apply to any arbitrary system, microscopic or macroscopic. The classical limit is a consequence of aggregate systems of very highly correlated subsystems.
A "measurement event" is thus described as an ordinary physical interaction where two systems become correlated to some degree with respect to each other.
Rovelli criticizes describing this as a form of "observer-dependence" which suggests reality depends upon the presence of a conscious observer, when his point is instead that reality is relational and thus the state of a system can be described even in relation to any physical object and not necessarily a human observer.
The proponents of the relational interpretation argue that this approach resolves some of the traditional interpretational difficulties with quantum mechanics. By giving up our preconception of a global privileged state, issues around the measurement problem and local realism are resolved.
In 2020, Carlo Rovelli published an account of the main ideas of the relational interpretation in his popular book "Helgoland", which was published in an English translation in 2021 as "Helgoland: Making Sense of the Quantum Revolution".
History and development.
Relational quantum mechanics arose from a comparison of the quandaries posed by the interpretations of quantum mechanics with those resulting from Lorentz transformations prior to the development of special relativity. Rovelli suggested that just as pre-relativistic interpretations of Lorentz's equations were complicated by incorrectly assuming an observer-independent time exists, a similarly incorrect assumption frustrates attempts to make sense of the quantum formalism. The assumption rejected by relational quantum mechanics is the existence of an observer-independent state of a system.
The idea has been expanded upon by Lee Smolin and Louis Crane, who have both applied the concept to quantum cosmology, and the interpretation has been applied to the EPR paradox, revealing not only a peaceful co-existence between quantum mechanics and special relativity, but a formal indication of a completely local character to reality.
The problem of the observer and the observed.
This problem was initially discussed in detail in Everett's thesis, "The Theory of the Universal Wavefunction". Consider observer formula_0, measuring the state of the quantum system formula_1. We assume that formula_0 has complete information on the system, and that formula_0 can write down the wavefunction formula_2 describing it. At the same time, there is another observer formula_3, who is interested in the state of the entire formula_0-formula_1 system, and formula_3 likewise has complete information.
To analyse this system formally, we consider a system formula_1 which may take one of two states, which we shall designate formula_4 and formula_5, ket vectors in the Hilbert space formula_6. Now, the observer formula_0 wishes to make a measurement on the system. At time formula_7, this observer may characterize the system as follows:
formula_8
where formula_9 and formula_10 are probabilities of finding the system in the respective states, and these add up to 1. For our purposes here, we can assume that in a single experiment, the outcome is the eigenstate formula_11 (but this can be substituted throughout, without loss of generality, by formula_12). So, we may represent the sequence of events in this experiment, with observer formula_0 doing the observing, as follows:
formula_13
This is the description of the measurement event given by observer formula_0. Now, any measurement is also a physical interaction between two or more systems. Accordingly, we can consider the tensor product Hilbert space formula_14, where formula_15 is the Hilbert space inhabited by state vectors describing formula_0. If the initial state of formula_0 is formula_16, some degrees of freedom in formula_0 become correlated with the state of formula_1 after the measurement, and this correlation can take one of two values: formula_17 or formula_18 where the direction of the arrows in the subscripts corresponds to the outcome of the measurement that formula_0 has made on formula_1. If we now consider the description of the measurement event by the other observer, formula_3, who describes the combined formula_19 system, but does not interact with it, the following gives the description of the measurement event according to formula_3, from the linearity inherent in the quantum formalism:
formula_20
Thus, on the assumption (see hypothesis 2 below) that quantum mechanics is complete, the two observers formula_0 and formula_3 give different but equally correct accounts of the events formula_21.
Note that the above scenario is directly linked to Wigner's Friend thought experiment, which serves as a prime example when understanding different interpretations of quantum theory.
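The linearity argument above can be checked numerically. The following NumPy sketch (an illustration only; the amplitudes 0.6 and 0.8 and the three-state model of the observer are arbitrary choices) builds the pre- and post-measurement states as described by formula_3 and confirms that the final state is a normalised superposition of correlated terms:

```python
import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])   # basis of the system Hilbert space
init, O_up, O_down = np.eye(3)                           # three orthonormal observer states

alpha, beta = 0.6, 0.8                                   # |alpha|^2 + |beta|^2 = 1

# Before: (alpha|up> + beta|down>) tensored with |init>
before = np.kron(alpha * up + beta * down, init)
# After, by linearity: alpha |up>|O_up> + beta |down>|O_down>
after = alpha * np.kron(up, O_up) + beta * np.kron(down, O_down)

print(np.linalg.norm(before), np.linalg.norm(after))     # both 1.0: the norm is preserved
# "after" is entangled: relative to O' neither subsystem has a definite value,
# even though relative to O the measurement has a single, definite outcome.
```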
Central principles.
Observer-dependence of state.
According to formula_0, at formula_22, the system formula_1 is in a determinate state, namely spin up. And, if quantum mechanics is complete, then so is this description. But, for formula_3, formula_1 is "not" uniquely determinate, but is rather entangled with the state of formula_0 – note that his description of the situation at formula_22 is not factorisable no matter what basis chosen. But, if quantum mechanics is complete, then the description that formula_3 gives is "also" complete.
Thus the standard mathematical formulation of quantum mechanics allows different observers to give different accounts of the same sequence of events. There are many ways to overcome this perceived difficulty. It could be described as an epistemic limitation – observers with a full knowledge of the system, we might say, could give a complete and equivalent description of the state of affairs, but obtaining this knowledge is impossible in practice. But full knowledge held by whom? What makes formula_0's description better than that of formula_3, or vice versa? Alternatively, we could claim that quantum mechanics is not a complete theory, and that by adding more structure we could arrive at a universal description (the troubled hidden variables approach). Yet another option is to give a preferred status to a particular observer or type of observer, and assign the epithet of correctness to their description alone. This has the disadvantage of being "ad hoc", since there are no clearly defined or physically intuitive criteria by which this super-observer ("who can observe all possible sets of observations by all observers over the entire universe") ought to be chosen.
RQM, however, takes the point illustrated by this problem at face value. Instead of trying to modify quantum mechanics to make it fit with prior assumptions that we might have about the world, Rovelli says that we should modify our view of the world to conform to what amounts to our best physical theory of motion. Just as forsaking the notion of absolute simultaneity helped clear up the problems associated with the interpretation of the Lorentz transformations, so many of the conundrums associated with quantum mechanics dissolve, provided that the state of a system is assumed to be observer-dependent – like simultaneity in Special Relativity. This insight follows logically from the two main hypotheses which inform this interpretation: the equivalence of all physical systems, so that there is no fundamental distinction between observing and observed systems (hypothesis 1), and the completeness of quantum mechanics, so that the standard formalism needs no supplementary structure to account for observations (hypothesis 2).
Thus, if a state is to be observer-dependent, then a description of a system would follow the form "system "S" is in state "x" "with reference to" observer "O"" or similar constructions, much like in relativity theory. In RQM it is meaningless to refer to the absolute, observer-independent state of any system.
Information and correlation.
It is generally well established that any quantum mechanical measurement can be reduced to a set of yes–no questions or bits that are either 1 or 0. RQM makes use of this fact to formulate the state of a quantum system (relative to a given observer!) in terms of the physical notion of information developed by Claude Shannon. Any yes/no question can be described as a single bit of information. This should not be confused with the idea of a qubit from quantum information theory, because a qubit can be in a superposition of values, whilst the "questions" of RQM are ordinary binary variables.
Any quantum measurement is fundamentally a physical interaction between the system being measured and some form of measuring apparatus. By extension, any physical interaction may be seen to be a form of quantum measurement, as all systems are seen as quantum systems in RQM. A physical interaction is seen as establishing a correlation between the system and the observer, and this correlation is what is described and predicted by the quantum formalism.
But, Rovelli points out, this form of correlation is precisely the same as the definition of information in Shannon's theory. Specifically, an observer "O" observing a system "S" will, after measurement, have some degrees of freedom correlated with those of "S". The amount of this correlation is given by log2"k" bits, where "k" is the number of possible values which this correlation may take – the number of "options" there are.
All systems are quantum systems.
All physical interactions are, at bottom, quantum interactions, and must ultimately be governed by the same rules. Thus, an interaction between two particles does not, in RQM, differ fundamentally from an interaction between a particle and some "apparatus". There is no true wave collapse, in the sense in which it occurs in some interpretations.
Because "state" is expressed in RQM as the correlation between two systems, there can be no meaning to "self-measurement". If observer formula_0 measures system formula_1, formula_1's "state" is represented as a correlation between formula_0 and formula_1. formula_0 itself cannot say anything with respect to its own "state", because its own "state" is defined only relative to another observer, formula_3. If the formula_19 compound system does not interact with any other systems, then it will possess a clearly defined state relative to formula_3. However, because formula_0's measurement of formula_1 breaks its unitary evolution with respect to formula_0, formula_0 will not be able to give a full description of the formula_19 system (since it can only speak of the correlation between formula_1 and itself, not its own behaviour). A complete description of the formula_23 system can only be given by a further, external observer, and so forth.
Taking the model system discussed above, if formula_3 has full information on the formula_19 system, it will know the Hamiltonians of both formula_1 and formula_0, including the interaction Hamiltonian. Thus, the system will evolve entirely unitarily (without any form of collapse) relative to formula_3, if formula_0 measures formula_1. The only reason that formula_0 will perceive a "collapse" is because formula_0 has incomplete information on the system (specifically, formula_0 does not know its own Hamiltonian, and the interaction Hamiltonian for the measurement).
Consequences and implications.
Coherence.
In our system above, formula_3 may be interested in ascertaining whether or not the state of formula_0 accurately reflects the state of formula_1. We can draw up for formula_3 an operator, formula_24, which is specified as:
formula_25
formula_26
formula_27
formula_28
with an eigenvalue of 1 meaning that formula_0 indeed accurately reflects the state of formula_1. So there is a 0 probability of formula_0 reflecting the state of formula_1 as being formula_11 if it is in fact formula_12, and so forth. The implication of this is that at time formula_22, formula_3 can predict with certainty that the formula_19 system is in "some" eigenstate of formula_24, but cannot say "which" eigenstate it is in, unless formula_3 itself interacts with the formula_19 system.
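A direct numerical check of this operator is given below (an illustrative NumPy sketch; for simplicity the observer is modelled here with only the two record states, and the amplitudes are arbitrary):

```python
import numpy as np

up, down = np.eye(2)          # system states
O_up, O_down = np.eye(2)      # observer record states (the |init> state is not needed for this check)

# M projects onto the "agreement" subspace spanned by |up>|O_up> and |down>|O_down>.
agree_up, agree_down = np.kron(up, O_up), np.kron(down, O_down)
M = np.outer(agree_up, agree_up) + np.outer(agree_down, agree_down)

alpha, beta = 0.6, 0.8
psi = alpha * agree_up + beta * agree_down    # the post-measurement state relative to O'

print(np.allclose(M @ psi, psi))                      # True: psi is an eigenvector with eigenvalue 1
print(np.allclose(M @ np.kron(up, O_down), 0))        # True: mismatched records are sent to 0
```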
An apparent paradox arises when one considers the comparison, between two observers, of the specific outcome of a measurement. In the "problem of the observer and the observed" section above, let us imagine that the two experimenters want to compare results. It is obvious that if the observer formula_3 has the full Hamiltonians of both formula_1 and formula_0, he will be able to say with certainty "that" at time formula_22, formula_0 has a determinate result for formula_1's spin, but he will not be able to say "what" formula_0's result is without interaction, and hence breaking the unitary evolution of the compound system (because he doesn't know his own Hamiltonian). The distinction between knowing "that" and knowing "what" is a common one in everyday life: everyone knows "that" the weather will be like something tomorrow, but no-one knows exactly "what" the weather will be like.
But, let us imagine that formula_3 measures the spin of formula_1, and finds it to have spin down (and note that nothing in the analysis above precludes this from happening). What happens if he talks to formula_0, and they compare the results of their experiments? formula_0, it will be remembered, measured a spin up on the particle. This would appear to be paradoxical: the two observers, surely, will realise that they have disparate results.
However, this apparent paradox only arises as a result of the question being framed incorrectly: as long as we presuppose an "absolute" or "true" state of the world, this would, indeed, present an insurmountable obstacle for the relational interpretation. However, in a fully relational context, there is no way in which the problem can even be coherently expressed. The consistency inherent in the quantum formalism, exemplified by the "M-operator" defined above, guarantees that there will be no contradictions between records. The interaction between formula_3 and whatever he chooses to measure, be it the formula_19 compound system or formula_0 and formula_1 individually, will be a "physical" interaction, a "quantum" interaction, and so a complete description of it can only be given by a further observer formula_29, who will have a similar "M-operator" guaranteeing coherency, and so on out. In other words, a situation such as that described above cannot violate any "physical observation", as long as the physical content of quantum mechanics is taken to refer only to relations.
Relational networks.
An interesting implication of RQM arises when we consider that interactions between material systems can only occur within the constraints prescribed by Special Relativity, namely within the intersections of the light cones of the systems: when they are spatiotemporally contiguous, in other words. Relativity tells us that objects have location only relative to other objects. By extension, a network of relations could be built up based on the properties of a set of systems, which determines which systems have properties relative to which others, and when (since properties are no longer well defined relative to a specific observer after unitary evolution breaks down for that observer). On the assumption that all interactions are "local" (which is backed up by the analysis of the EPR paradox presented below), one could say that the ideas of "state" and spatiotemporal contiguity are two sides of the same coin: spacetime location determines the possibility of interaction, but interactions determine spatiotemporal structure. The full extent of this relationship, however, has not yet fully been explored.
RQM and quantum cosmology.
The universe is the sum total of everything in existence with any possibility of direct or indirect interaction with a local observer. A (physical) observer outside of the universe would require a physical breaking of gauge invariance, and a concomitant alteration in the mathematical structure of gauge-invariance theory.
Similarly, RQM conceptually forbids the possibility of an external observer. Since the assignment of a quantum state requires at least two "objects" (system and observer), which must both be physical systems, there is no meaning in speaking of the "state" of the entire universe. This is because this state would have to be ascribed to a correlation between the universe and some other physical observer, but this observer in turn would have to form part of the universe. As was discussed above, it is not possible for an object to contain a complete specification of itself. Following the idea of relational networks above, an RQM-oriented cosmology would have to account for the universe as a set of partial systems providing descriptions of one another. Such a construction was developed in particular by Francesca Vidotto .
Relationship with other interpretations.
The only group of interpretations of quantum mechanics with which RQM is almost completely incompatible is that of hidden variables theories. RQM shares some deep similarities with other views, but differs from them all to the extent to which the other interpretations do not accord with the "relational world" put forward by RQM.
Copenhagen interpretation.
RQM is, in essence, quite similar to the Copenhagen interpretation, but with an important difference. In the Copenhagen interpretation, the macroscopic world is assumed to be intrinsically classical in nature, and wave function collapse occurs when a quantum system interacts with macroscopic apparatus. In RQM, "any" interaction, be it micro or macroscopic, causes the linearity of Schrödinger evolution to break down. RQM could recover a Copenhagen-like view of the world by assigning a privileged status (not dissimilar to a preferred frame in relativity) to the classical world. However, by doing this one would lose sight of the key features that RQM brings to our view of the quantum world.
Hidden-variables theories.
Bohm's interpretation of QM does not sit well with RQM. One of the explicit hypotheses in the construction of RQM is that quantum mechanics is a complete theory, that is it provides a full account of the world. Moreover, the Bohmian view seems to imply an underlying, "absolute" set of states of all systems, which is also ruled out as a consequence of RQM.
We find a similar incompatibility between RQM and suggestions such as that of Penrose, which postulate that some process (in Penrose's case, gravitational effects) violate the linear evolution of the Schrödinger equation for the system.
Relative-state formulation.
The many-worlds family of interpretations (MWI) shares an important feature with RQM, that is, the relational nature of all value assignments (that is, properties). Everett, however, maintains that the universal wavefunction gives a complete description of the entire universe, while Rovelli argues that this is problematic, both because this description is not tied to a specific observer (and hence is "meaningless" in RQM), and because RQM maintains that there is no single, absolute description of the universe as a whole, but rather a net of interrelated partial descriptions.
Consistent histories approach.
In the consistent histories approach to QM, instead of assigning probabilities to single values for a given system, the emphasis is given to "sequences" of values, in such a way as to exclude (as physically impossible) all value assignments which result in inconsistent probabilities being attributed to observed states of the system. This is done by means of ascribing values to "frameworks", and all values are hence framework-dependent.
RQM accords perfectly well with this view. However, the consistent histories approach does not give a full description of the physical meaning of framework-dependent value (that is it does not account for how there can be "facts" if the value of any property depends on the framework chosen). By incorporating the relational view into this approach, the problem is solved: RQM provides the means by which the observer-independent, framework-dependent probabilities of various histories are reconciled with observer-dependent descriptions of the world.
EPR and quantum non-locality.
RQM provides an unusual solution to the EPR paradox. Indeed, it manages to dissolve the problem altogether, inasmuch as there is no superluminal transportation of information involved in a Bell test experiment: the principle of locality is preserved inviolate for all observers.
The problem.
In the EPR thought experiment, a radioactive source produces two electrons in a singlet state, meaning that the sum of the spin on the two electrons is zero. These electrons are fired off at time formula_7 towards two spacelike separated observers, Alice and Bob, who can perform spin measurements, which they do at time formula_22. The fact that the two electrons are a singlet means that if Alice measures z-spin up on her electron, Bob will measure z-spin down on his, and "vice versa": the correlation is perfect. If Alice measures z-axis spin, and Bob measures the orthogonal y-axis spin, however, the correlation will be zero. Intermediate angles give intermediate correlations in a way that, on careful analysis, proves inconsistent with the idea that each particle has a definite, independent probability of producing the observed measurements (the correlations violate Bell's inequality).
This subtle dependence of one measurement on the other holds even when measurements are made simultaneously and a great distance apart, which gives the appearance of a superluminal communication taking place between the two electrons. Put simply, how can Bob's electron "know" what Alice measured on hers, so that it can adjust its own behavior accordingly?
Relational solution.
In RQM, an interaction between a system and an observer is necessary for the system to have clearly defined properties relative to that observer. Since the two measurement events take place at spacelike separation, they do not lie in the intersection of Alice's and Bob's light cones. Indeed, there is "no" observer who can instantaneously measure both electrons' spin.
The key to the RQM analysis is to remember that the results obtained on each "wing" of the experiment only become determinate for a given observer once that observer has interacted with the "other" observer involved. As far as Alice is concerned, the specific results obtained on Bob's wing of the experiment are indeterminate for her, although she will know "that" Bob has a definite result. In order to find out what result Bob has, she has to interact with him at some time formula_30 in their future light cones, through ordinary classical information channels.
The question then becomes one of whether the expected correlations in results will appear: will the two particles behave in accordance with the laws of quantum mechanics? Let us denote by formula_31 the idea that the observer formula_32 (Alice) measures the state of the system formula_33 (Alice's particle).
So, at time formula_22, Alice knows the value of formula_31: the spin of her particle, relative to herself. But, since the particles are in a singlet state, she knows that
formula_34
and so if she measures her particle's spin to be formula_35, she can predict that Bob's particle (formula_36) will have spin formula_37. All this follows from standard quantum mechanics, and there is no "spooky action at a distance" yet. From the "coherence-operator" discussed above, Alice also knows that if at formula_30 she measures Bob's particle and then measures Bob (that is asks him what result he got) – or "vice versa" – the results will be consistent:
formula_38
Finally, if a third observer (Charles, say) comes along and measures Alice, Bob, "and" their respective particles, he will find that everyone still agrees, because his own "coherence-operator" demands that
formula_39 and formula_40
while knowledge that the particles were in a singlet state tells him that
formula_41
Thus the relational interpretation, by shedding the notion of an "absolute state" of the system, allows for an analysis of the EPR paradox which neither violates traditional locality constraints, nor implies superluminal information transfer, since we can assume that all observers are moving at comfortable sub-light velocities. And, most importantly, the results of every observer are in full accordance with those expected by conventional quantum mechanics.
Whether or not this account of locality is successful has been a matter of debate.
Derivation.
A promising feature of this interpretation is that RQM offers the possibility of being derived from a small number of axioms, or postulates based on experimental observations. Rovelli's derivation of RQM uses three fundamental postulates. However, it has been suggested that it may be possible to reformulate the third postulate into a weaker statement, or possibly even do away with it altogether. The derivation of RQM parallels, to a large extent, quantum logic. The first two postulates are motivated entirely by experimental results, while the third postulate, although it accords perfectly with what we have discovered experimentally, is introduced as a means of recovering the full Hilbert space formalism of quantum mechanics from the other two postulates. The two empirical postulates are: that there is a maximum amount of relevant information that may be obtained from a quantum system (postulate 1), and that it is always possible to obtain new information from a system (postulate 2).
We let formula_42 denote the set of all possible questions that may be "asked" of a quantum system, which we shall denote by formula_43, formula_44. We may experimentally find certain relations between these questions: formula_45, corresponding to {intersection, orthogonal sum, orthogonal complement, inclusion, and orthogonality} respectively, where formula_46.
Structure.
From the first postulate, it follows that we may choose a subset formula_47 of formula_48 mutually independent questions, where formula_48 is the number of bits contained in the maximum amount of information. We call such a question formula_47 a "complete question". The value of formula_47 can be expressed as an N-tuple sequence of binary valued numerals, which has formula_49 possible permutations of "0" and "1" values. There will also be more than one possible complete question. If we further assume that the relations formula_50 are defined for all formula_43, then formula_42 is an orthomodular lattice, while all the possible unions of sets of complete questions form a Boolean algebra with the formula_47 as atoms.
The second postulate governs the event of further questions being asked by an observer formula_51 of a system formula_1, when formula_51 already has a full complement of information on the system (an answer to a complete question). We denote by formula_52 the probability that a "yes" answer to a question formula_53 will follow the complete question formula_54. If formula_53 is independent of formula_54, then formula_55, or it might be fully determined by formula_54, in which case formula_56. There is also a range of intermediate possibilities, and this case is examined below.
If the question that formula_51 wants to ask the system is another complete question, formula_57, the probability formula_58 of a "yes" answer has certain constraints upon it:
1. formula_59
2. formula_60
3. formula_61
The three constraints above are inspired by the most basic of properties of probabilities, and are satisfied if
formula_62,
where formula_63 is a unitary matrix.
The third postulate concerns the relation between different complete questions: if formula_64 and formula_65 are two complete questions, the unitary matrix formula_66 relating them, obtained as above, satisfies formula_67 for any complete questions formula_68 and formula_69. This third postulate implies that if we set a complete question formula_70 as a basis vector in a complex Hilbert space, we may then represent any other question formula_71 as a linear combination:
formula_72
And the conventional probability rule of quantum mechanics states that if two sets of basis vectors are in the relation above, then the probability formula_73 is
formula_74
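The constraints on formula_73 can be verified for any unitary matrix. The short NumPy sketch below (illustrative only; the QR decomposition of a random complex matrix is just a convenient way to produce a unitary) checks that the squared moduli of a unitary matrix form a doubly stochastic array of probabilities:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
U, _ = np.linalg.qr(A)                 # U is unitary

P = np.abs(U) ** 2                     # p^{ij} = |U^{ij}|^2

print(np.all((P >= 0) & (P <= 1)))     # constraint 1: each entry is a probability
print(np.allclose(P.sum(axis=0), 1))   # constraint 2: summing over i gives 1
print(np.allclose(P.sum(axis=1), 1))   # constraint 3: summing over j gives 1
```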
Dynamics.
The Heisenberg picture of time evolution accords most easily with RQM. Questions may be labelled by a time parameter formula_75, and are regarded as distinct if they are specified by the same operator but are performed at different times. Because time evolution is a symmetry in the theory (it forms a necessary part of the full formal derivation of the theory from the postulates), the set of all possible questions at time formula_22 is isomorphic to the set of all possible questions at time formula_7. It follows, by standard arguments in quantum logic, from the derivation above that the orthomodular lattice formula_76 has the structure of the set of linear subspaces of a Hilbert space, with the relations between the questions corresponding to the relations between linear subspaces.
It follows that there must be a unitary transformation formula_77 that satisfies:
formula_78
and
formula_79
where formula_80 is the Hamiltonian, a self-adjoint operator on the Hilbert space and the unitary matrices are an abelian group.
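A toy numerical check of this Heisenberg-picture evolution (an illustration; the Hamiltonian, the projector and the time interval are arbitrary choices, and SciPy is used for the matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

H = np.array([[1.0, 0.5], [0.5, -1.0]])      # a toy self-adjoint Hamiltonian
Q_t1 = np.array([[1.0, 0.0], [0.0, 0.0]])    # a yes/no question: "is the system in state |0>?"

dt = 0.7                                      # t2 - t1
U = expm(-1j * dt * H)                        # U(t2 - t1) = exp(-i (t2 - t1) H)

Q_t2 = U @ Q_t1 @ U.conj().T                  # Q(t2) = U Q(t1) U^{-1}

print(np.allclose(Q_t2 @ Q_t2, Q_t2))         # Q(t2) is again a projector, i.e. a yes/no question
print(np.round(np.linalg.eigvalsh(Q_t2), 6))  # eigenvalues 0 and 1
```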
Problems and discussion.
The question is whether RQM denies any objective reality, or otherwise stated: "there is only a subjectively knowable reality." Rovelli limits the scope of this claim by stating that RQM relates to the variables of a physical system and not to constant, intrinsic properties, such as the mass and charge of an electron. Indeed, mechanics in general only predicts the behavior of a physical system under various conditions. In classical mechanics this behavior is mathematically represented in a phase space with certain degrees of freedom; in quantum mechanics this is a state space, mathematically represented as a multidimensional complex Hilbert space, in which the dimensions correspond to the above variables.
Dorato, however, argues that all intrinsic properties of a physical system, including mass and charge, are only knowable in a subjective interaction between the observer and the physical system. The unspoken thought behind this is that intrinsic properties are essentially quantum mechanical properties as well.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O"
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "|\\psi\\rangle"
},
{
"math_id": 3,
"text": "O'"
},
{
"math_id": 4,
"text": "|{\\uparrow}\\rangle "
},
{
"math_id": 5,
"text": " |\\downarrow \\rangle "
},
{
"math_id": 6,
"text": "H_S"
},
{
"math_id": 7,
"text": "t_1"
},
{
"math_id": 8,
"text": "| \\psi \\rangle = \\alpha|{\\uparrow}\\rangle + \\beta|{\\downarrow}\\rangle ,"
},
{
"math_id": 9,
"text": "|\\alpha|^2"
},
{
"math_id": 10,
"text": "|\\beta|^2"
},
{
"math_id": 11,
"text": "|{\\uparrow}\\rangle"
},
{
"math_id": 12,
"text": "|{\\downarrow}\\rangle"
},
{
"math_id": 13,
"text": "\\begin{matrix} t_1 & \\rightarrow & t_2 \\\\\n \\alpha |{\\uparrow}\\rangle + \\beta |{\\downarrow}\\rangle & \\rightarrow & |{\\uparrow}\\rangle.\n \\end{matrix}"
},
{
"math_id": 14,
"text": "H_S \\otimes H_{O}"
},
{
"math_id": 15,
"text": "H_{O}"
},
{
"math_id": 16,
"text": "|\\text{init}\\rangle"
},
{
"math_id": 17,
"text": "|O_{\\uparrow}\\rangle"
},
{
"math_id": 18,
"text": "|O_{\\downarrow}\\rangle"
},
{
"math_id": 19,
"text": "S+O"
},
{
"math_id": 20,
"text": "\\begin{matrix}\n t_1 & \\rightarrow & t_2 \\\\\n \\left( \\alpha |{\\uparrow}\\rangle + \\beta |{\\downarrow}\\rangle \\right) \n \\otimes |\\text{init}\\rangle \n & \\rightarrow \n & \\alpha |{\\uparrow}\\rangle \\otimes |O_{\\uparrow}\\rangle \n + \\beta |{\\downarrow}\\rangle \\otimes |O_{\\downarrow}\\rangle.\n \\end{matrix}"
},
{
"math_id": 21,
"text": "t_1 \\rightarrow t_2"
},
{
"math_id": 22,
"text": "t_2"
},
{
"math_id": 23,
"text": "(S+O)+O'"
},
{
"math_id": 24,
"text": "M"
},
{
"math_id": 25,
"text": "M\\left(|{\\uparrow}\\rangle \\otimes |O_{\\uparrow}\\rangle \\right) = |{\\uparrow}\\rangle \\otimes |O_{\\uparrow}\\rangle"
},
{
"math_id": 26,
"text": "M\\left(|{\\uparrow}\\rangle \\otimes |O_{\\downarrow}\\rangle \\right) = 0"
},
{
"math_id": 27,
"text": "M\\left(|{\\downarrow}\\rangle \\otimes |O_{\\uparrow}\\rangle \\right) = 0"
},
{
"math_id": 28,
"text": "M\\left(|{\\downarrow}\\rangle \\otimes |O_{\\downarrow}\\rangle \\right) = |{\\downarrow}\\rangle \\otimes |O_{\\downarrow}\\rangle"
},
{
"math_id": 29,
"text": "O''"
},
{
"math_id": 30,
"text": "t_3"
},
{
"math_id": 31,
"text": "M_A(\\alpha)"
},
{
"math_id": 32,
"text": "A"
},
{
"math_id": 33,
"text": "\\alpha"
},
{
"math_id": 34,
"text": "M_A(\\alpha)+M_A(\\beta)=0 ,"
},
{
"math_id": 35,
"text": "\\sigma"
},
{
"math_id": 36,
"text": "\\beta"
},
{
"math_id": 37,
"text": "-\\sigma"
},
{
"math_id": 38,
"text": "M_A(B)=M_A(\\beta)"
},
{
"math_id": 39,
"text": "M_C(A)=M_C(\\alpha)"
},
{
"math_id": 40,
"text": "M_C(B)=M_C(\\beta)"
},
{
"math_id": 41,
"text": "M_C(\\alpha)+M_C(\\beta) = 0. "
},
{
"math_id": 42,
"text": "W\\left(S\\right)"
},
{
"math_id": 43,
"text": "Q_i"
},
{
"math_id": 44,
"text": "i \\in W"
},
{
"math_id": 45,
"text": "\\left\\{\\land, \\lor, \\neg, \\supset, \\bot \\right\\}"
},
{
"math_id": 46,
"text": "Q_1 \\bot Q_2 \\equiv Q_1 \\supset \\neg Q_2 "
},
{
"math_id": 47,
"text": "Q_c^{(i)}"
},
{
"math_id": 48,
"text": "N"
},
{
"math_id": 49,
"text": "2^N = k"
},
{
"math_id": 50,
"text": "\\left\\{\\land, \\lor\\right\\}"
},
{
"math_id": 51,
"text": "O_1"
},
{
"math_id": 52,
"text": "p\\left(Q|Q_c^{(j)}\\right)"
},
{
"math_id": 53,
"text": "Q"
},
{
"math_id": 54,
"text": "Q_c^{(j)}"
},
{
"math_id": 55,
"text": "p=0.5"
},
{
"math_id": 56,
"text": "p=1"
},
{
"math_id": 57,
"text": "Q_b^{(i)}"
},
{
"math_id": 58,
"text": "p^{ij}=p\\left(Q_b^{(i)}|Q_c^{(j)}\\right)"
},
{
"math_id": 59,
"text": "0 \\leq p^{ij} \\leq 1, \\ "
},
{
"math_id": 60,
"text": "\\sum_{i} p^{ij} = 1, \\ "
},
{
"math_id": 61,
"text": "\\sum_{j} p^{ij} = 1. \\ "
},
{
"math_id": 62,
"text": "p^{ij} = \\left|U^{ij}\\right|^2"
},
{
"math_id": 63,
"text": "U^{ij}"
},
{
"math_id": 64,
"text": "b"
},
{
"math_id": 65,
"text": "c"
},
{
"math_id": 66,
"text": "U_{bc}"
},
{
"math_id": 67,
"text": "U_{cd} = U_{cb}U_{bd}"
},
{
"math_id": 68,
"text": "b, c"
},
{
"math_id": 69,
"text": "d"
},
{
"math_id": 70,
"text": "|Q^{(i)}_c \\rangle"
},
{
"math_id": 71,
"text": "|Q^{(j)}_b \\rangle"
},
{
"math_id": 72,
"text": "|Q^{(j)}_b \\rangle = \\sum_i U^{ij}_{bc} |Q^{(i)}_c \\rangle."
},
{
"math_id": 73,
"text": "p^{ij}"
},
{
"math_id": 74,
"text": "p^{ij} = |\\langle Q^{(i)}_c | Q^{(j)}_b \\rangle|^2 = |U_{bc}^{ij}|^2."
},
{
"math_id": 75,
"text": "t \\rightarrow Q(t)"
},
{
"math_id": 76,
"text": "W(S)"
},
{
"math_id": 77,
"text": "U \\left( t_2 - t_1 \\right)"
},
{
"math_id": 78,
"text": "Q(t_2) = U \\left( t_2 - t_1 \\right) Q(t_1) U^{-1} \\left( t_2 - t_1 \\right)"
},
{
"math_id": 79,
"text": "U \\left( t_2 - t_1 \\right) = \\exp({-i \\left(t_2 - t_1 \\right)H})"
},
{
"math_id": 80,
"text": "H"
}
] |
https://en.wikipedia.org/wiki?curid=5974662
|
59746854
|
Rank-maximal allocation
|
Rule for fair division of indivisible items
Rank-maximal (RM) allocation is a rule for fair division of indivisible items. Suppose we have to allocate some items among people. Each person can rank the items from best to worst. The RM rule says that we have to give as many people as possible their best (#1) item. Subject to that, we have to give as many people as possible their next-best (#2) item, and so on.
In the special case in which each person should receive a single item (for example, when the "items" are tasks and each task has to be done by a single person), the problem is called rank-maximal matching or greedy matching.
The idea is similar to that of utilitarian cake-cutting, where the goal is to maximize the sum of utilities of all participants. However, the utilitarian rule works with cardinal (numeric) utility functions, while the RM rule works with ordinal utilities (rankings).
Definition.
There are several items and several agents. Each agent has a total order on the items. Agents can be indifferent between some items; for each agent, we can partition the items to equivalence classes that contain items of the same rank. For example, If Alice's preference-relation is x > y,z > w, it means that Alice's 1st choice is x, which is better for her than all other items; Alice's 2nd choice is y and z, which are equally good in her eyes but not as good as x; and Alice's 3rd choice is w, which she considers worse than all other items.
For every allocation of items to the agents, we construct its "rank-vector" as follows. Element #1 in the vector is the total number of items that are 1st-choice for their owners; Element #2 is the total number of items that are 2nd-choice for their owners; and so on.
A rank-maximal allocation is one in which the rank-vector is maximum, in lexicographic order.
Example.
Three items, x, y and z, have to be divided among three agents whose rankings are: Alice prefers "x" to "y" to "z"; Bob prefers "x" to "y" to "z"; and Carl prefers "y" to "x" to "z".
In the allocation ("x", "y", "z"), Alice gets her 1st choice ("x"), Bob gets his 2nd choice ("y"), and Carl gets his 3rd choice ("z"). The rank-vector is thus (1,1,1).
In the allocation ("x","z","y"), both Alice and Carl get their 1st choice and Bob gets his 3rd choice. The rank-vector is thus (2,0,1), which is lexicographically higher than (1,1,1) – it gives more people their 1st choice.
It is easy to check that no allocation produces a lexicographically higher rank-vector. Hence, the allocation ("x","z","y") is rank-maximal. Similarly, the allocation ("z","x","y") is rank-maximal – it produces the same rank-vector (2,0,1).
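The example can be verified by brute force. The following Python sketch (illustrative only) enumerates all allocations in which each agent receives one item, computes each rank-vector, and confirms that the lexicographic maximum is (2,0,1):

```python
from itertools import permutations

# The rankings from the example (1 = best).
rank = {
    'Alice': {'x': 1, 'y': 2, 'z': 3},
    'Bob':   {'x': 1, 'y': 2, 'z': 3},
    'Carl':  {'y': 1, 'x': 2, 'z': 3},
}
agents, items = list(rank), ['x', 'y', 'z']

def rank_vector(allocation):
    vec = [0, 0, 0]
    for agent, item in zip(agents, allocation):
        vec[rank[agent][item] - 1] += 1
    return tuple(vec)                   # tuples compare lexicographically

best = max(permutations(items), key=rank_vector)
print(best, rank_vector(best))          # ('x', 'z', 'y') (2, 0, 1)
```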
Algorithms.
RM matchings were first studied by Robert Irving, who called them "greedy matchings". He presented an algorithm that finds an RM matching in time formula_0, where "n" is the number of agents and "c" is the largest length of a preference-list of an agent.
Later, an improved algorithm was found, which runs in time formula_1, where "m" is the total length of all preference-lists (total number of edges in the graph), and "C" is the maximal rank of an item used in an RM matching (i.e., the maximal number of non-zero elements in an optimal rank vector). The algorithm reduces the problem to maximum-cardinality matching. Intuitively, we would like to first find a maximum-cardinality matching using only edges of rank 1; then, extend this matching to a maximum-cardinality matching using only edges of ranks 1 and 2; then, extend this matching to a maximum-cardinality matching using only edges of ranks 1, 2 and 3; and so on. The problem is that, if we pick the "wrong" maximum-cardinality matching for rank 1, then we might miss the optimal matching for rank 2. The improved algorithm solves this problem using the Dulmage–Mendelsohn decomposition, which is a decomposition that uses a maximum-cardinality matching, but does not depend on which matching is chosen (the decomposition is the same for every maximum-cardinality matching chosen). It works in the following way.
A different solution, using maximum-weight matchings, attains a similar run-time: formula_2.
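One standard way to carry out such a reduction (a sketch, not the specific algorithm cited above; it assumes the networkx library and uses illustrative node labels) is to give each edge of rank "r" the weight (n+1)^(C−r). Since a matching contains at most "n" edges, a single better-ranked edge then outweighs any number of worse-ranked edges, so a maximum-weight matching is rank-maximal:

```python
import networkx as nx

def rank_maximal_matching(ranks, C):
    """ranks: dict mapping (agent, item) -> rank (1 = best); C: an upper bound on the ranks used."""
    n = len({a for a, _ in ranks})                     # number of agents
    G = nx.Graph()
    for (a, i), r in ranks.items():
        # weight (n+1)^(C-r): lexicographic order of rank-vectors matches order of total weight
        G.add_edge(('agent', a), ('item', i), weight=(n + 1) ** (C - r))
    return nx.max_weight_matching(G)

ranks = {('Alice', 'x'): 1, ('Alice', 'y'): 2, ('Alice', 'z'): 3,
         ('Bob', 'x'): 1, ('Bob', 'y'): 2, ('Bob', 'z'): 3,
         ('Carl', 'y'): 1, ('Carl', 'x'): 2, ('Carl', 'z'): 3}
print(rank_maximal_matching(ranks, 3))                 # two agents receive their 1st-choice item
```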
Variants.
The problem has several variants.
1. In maximum-cardinality RM matching, the goal is to find, among all the different RM matchings, one with the largest number of matched edges.
2. In fair matching, the goal is to find a maximum-cardinality matching such that the minimum number of edges of the worst (largest) rank "r" are used; subject to that, the minimum number of edges of rank "r"−1 are used, and so on.
Both maximum-cardinality RM matching and fair matching can be found by reduction to maximum-weight matching.
3. In the capacitated RM matching problem, each agent has an upper capacity denoting an upper bound on the total number of items he should get. Each item has an upper quota denoting an upper bound on the number of different agents it can be allocated to. It was first studied by Mehlhorn and Michail, who gave an algorithm with run-time formula_3. There is an improved algorithm with run-time formula_4, where "B" is the minimum of the sum-of-quotas of the agents and the sum-of-quotas of the items. It is based on an extension of the Gallai–Edmonds decomposition to multi-edge matchings.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O(n^2 c^3)"
},
{
"math_id": 1,
"text": "O(m\\cdot \\min(n, C\\sqrt{n}))"
},
{
"math_id": 2,
"text": "O(m\\cdot \\min(n+C, C\\sqrt{n}))"
},
{
"math_id": 3,
"text": "O(C n m \\log(n^2/m)\\log(n))"
},
{
"math_id": 4,
"text": "O(m\\cdot \\min(B, C \\sqrt{B}))"
}
] |
https://en.wikipedia.org/wiki?curid=59746854
|
59747046
|
Clarke number
|
Relative abundance of elements
Clarke number or clarke is the relative abundance of a chemical element, typically in Earth's crust. The technical definition of "Earth's crust" varies among authors, and the actual numbers also vary significantly.
History.
In the 1930s, the Soviet geochemist Alexander Fersman defined the relative abundance of chemical elements in geological objects, expressed in percent, as the "clarke". This was in honor of the American geochemist Frank Wigglesworth Clarke, who pioneered the estimation of the chemical composition of Earth's crust, based on Clarke and his colleagues' extensive chemical analyses of numerous rock samples carried out from 1889 to 1924.
Examples based on Fersman's definition:
In Russian.
The term is synonymous with "the relative abundance of elements" in any object, either as a weight ratio or as an atomic (number of atoms) ratio, regardless of how "Earth's crust" is defined, and its denotation is not restricted to percentages.
In English.
In the English-speaking world, the term "clarke" was not even used in Wells(1937), which introduced Fersman's proposal, nor in later USGS articles such as Fleischer(1953); they used the term "relative abundance of the elements" instead. Brian Mason also mentioned the term "clarke" in Mason(1952) (mistakenly attributing it to Vladimir Vernadsky, later corrected to Fersman in Mason(1958)), but his definition differed slightly from Fersman's, limiting it to the average percentage in Earth's crust while allowing the hydrosphere and atmosphere to be excluded. Apart from explaining the term, Mason himself did not use it.
A variant term "clarke value" is occasionally used (examples:778). However, "clarke value" can have a different meaning, the clarke of concentration (example:412).
Terms "clarke number" and "Clarke number" are found in articles written by Japanese authors (example:55).
Usage in Japan.
In Japan, "clarke" is translated as . The word is always added, which happens to make the term appear similar in form with scientific constants such as . The term may have a narrower sense than Fersman's. Several of the following constraints may apply:
Another peculiarity in Japan is the existence of a popular version of data, which was tabulated in reference books such as the annual "Chronological Scientific Tables" (RCST1939(1938)E46), the "Dictionary of Physics and Chemistry" (IDPC(1939)app.VI) and other prominent books on geochemistry and chemistry.(62) This version Kimura(1938) was devised by chemist Kenjiro Kimura.5 It was often quoted as "The" "Clarke numbers" (unsourced examples:443,429 t2). The numbers differed from any versions by Clarke / Clarke&Washington (1889–1924), or anything listed in foreign (non-Japanese) articles such as the USGS compilation 4 t2, thus unknown outside of Japan. Yet the numbers were sometimes quoted in English articles without citation (example:55).
As the geological definition of "Earth's crust" evolved, the "10 mile-deep" approximation was deemed out of date, and some people considered the term "clarke number" obsolete too. Yet other people may have meant broader senses, not limited to Earth's crust, leading to confusion. RCST1961(1961) switched its "clarke number" table from Kimura(1938) to one based on Mason(1958), and the label "clarke number" was removed from the table in RCST1963(1962). IDPC(1971) removed its "clarke number" table, which was a variant of Kimura(1938). IDPC(1981) said the term is mostly abandoned, and the dictionary entry for "clarke number" itself was removed from IDPC(1998). So "clarke numbers" became associated almost solely with Kimura(1938)'s data, but Kimura's name was forgotten. Incidentally, in major reference books, there was no data table titled "clarke numbers" that showed Clarke's original tables.
Despite being removed from major reference books, data from Kimura(1938) and phrases such as "the Clarke number of iron is 4.70", unsourced, continue to circulate, even in the 2010s (example:799).
Example data.
This section lists only historical data. For recent data, see Abundance of elements in Earth's crust.
Technical definition of "clarke", "Earth's crust" and "lithosphere" differ among authors, and the actual numbers vary accordingly, sometimes by several times. Even the same author presents multiple versions, with various estimation parameters or knowledge refinements. Yet they are often quoted without source, rendering the data unverifiable.
Clarke & Washington11434 t17 presented estimations of the average composition of the outer part of the Earth with four variants:
"The earth's crust" in Clarke and Washington's works can mean two different things: (a) the whole outer part of the Earth, i.e. lithosphere, hydrosphere and atmosphere; (b) only the lithosphere, which in their works simply meant "the rocky crust of the earth". "Crust" here means (b).
Of the mass of 10 mile-thick lithosphere plus hydrosphere and atmosphere.
Tables of historical data for the relative abundance of some elements in Earth's crust.
Other variants.
Some authors call these "clarkes" too, some do not.
Clarke of concentration.
A related term, "clarke of concentration" or "concentration clarke" (synonym: "concentration factor" in mineralogy), is a measure of how rich a particular ore is.
That is, it is the ratio between the concentration of a chemical element in the ore and its concentration in the whole Earth's crust (i.e. its "clarke") 4243.
If the concentration of a commodity in an ore X is formula_6 [ppm], and the "clarke" of that commodity is formula_7 [ppm],
then "the clarke of concentration" of that commodity X is formula_8 (dimensionless).
The value represents the degree to which the commodity has been concentrated from crustal abundance into the ore by natural geochemical processes; it gives a clue as to whether the commodity could be mined economically.
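As a minimal numerical illustration (the figures below are hypothetical, not measured values), the ratio can be computed directly:

# Hypothetical example: an ore grading 20,000 ppm of a commodity
# whose crustal abundance (clarke) is taken to be 50 ppm.
Kx = 20000      # concentration in the ore, in ppm (hypothetical)
Ke = 50         # clarke of the commodity, in ppm (hypothetical)
Kk = Kx / Ke    # clarke of concentration, dimensionless
print(Kk)       # 400.0: the ore is 400 times richer in the commodity than the average crust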
References.
Footnotes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "Mx"
},
{
"math_id": 1,
"text": "Mo "
},
{
"math_id": 2,
"text": "Km=\\frac{Mo}{Mx}"
},
{
"math_id": 3,
"text": "Ay"
},
{
"math_id": 4,
"text": "As"
},
{
"math_id": 5,
"text": "Ka=\\frac{As}{Ay}"
},
{
"math_id": 6,
"text": "Kx"
},
{
"math_id": 7,
"text": "Ke"
},
{
"math_id": 8,
"text": "Kk=\\frac{Kx}{Ke}"
}
] |
https://en.wikipedia.org/wiki?curid=59747046
|
59747277
|
Gallai–Edmonds decomposition
|
Partition of the vertices of a graph giving information on the structure of maximum matchings
In graph theory, the Gallai–Edmonds decomposition is a partition of the vertices of a graph into three subsets which provides information on the structure of maximum matchings in the graph. Tibor Gallai and Jack Edmonds independently discovered it and proved its key properties.
The Gallai–Edmonds decomposition of a graph can be found using the blossom algorithm.
Properties.
Given a graph formula_0, its Gallai–Edmonds decomposition consists of three disjoint sets of vertices, formula_1, formula_2, and formula_3, whose union is formula_4: the set of all vertices of formula_0. First, the vertices of formula_0 are divided into "essential vertices" (vertices which are covered by every maximum matching in formula_0) and "inessential vertices" (vertices which are left uncovered by at least one maximum matching in formula_0). The set formula_3 is defined to contain all the inessential vertices. Essential vertices are split into formula_1 and formula_2: the set formula_1 is defined to contain all essential vertices adjacent to at least one vertex of formula_3, and formula_2 is defined to contain all essential vertices not adjacent to any vertices of formula_3.
It is common to identify the sets formula_1, formula_2, and formula_3 with the subgraphs induced by those sets. For example, we say "the components of formula_3" to mean the connected components of the subgraph induced by formula_3.
The Gallai–Edmonds decomposition has the following properties. The components of formula_3 are factor-critical: each has an odd number of vertices, and deleting any one vertex from a component leaves a subgraph with a perfect matching. The subgraph induced by formula_2 has a perfect matching. Every nonempty subset formula_5 has neighbors in at least formula_6 components of formula_3. Every maximum matching in formula_0 consists of a perfect matching of formula_2, near-perfect matchings of the components of formula_3 (each missing exactly one vertex), and edges matching every vertex of formula_1 to a distinct component of formula_3. The size of a maximum matching in formula_0 is formula_8, where formula_7 denotes the number of components of formula_3.
Construction.
The Gallai–Edmonds decomposition of a graph formula_0 can be found, somewhat inefficiently, by starting with any algorithm for finding a maximum matching. From the definition, a vertex formula_9 is in formula_3 if and only if formula_10 (the graph obtained from formula_0 by deleting formula_9) has a maximum matching of the same size as formula_0. Therefore we can identify formula_3 by computing a maximum matching in formula_0 and in formula_10 for every vertex formula_9. The complement of formula_3 can be partitioned into formula_1 and formula_2 directly from the definition.
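This brute-force construction can be sketched on top of any maximum-matching routine. The following Python sketch is an illustration only; it assumes formula_0 is given as an undirected networkx graph, and the function name gallai_edmonds is not from any standard library:

import networkx as nx

def gallai_edmonds(G):
    # Naive Gallai-Edmonds decomposition via repeated maximum matchings.
    # max_weight_matching with maxcardinality=True returns a maximum-cardinality matching.
    base = len(nx.max_weight_matching(G, maxcardinality=True))
    # A vertex v is inessential (in D) iff deleting it does not shrink the maximum matching.
    D = {v for v in G
         if len(nx.max_weight_matching(nx.restricted_view(G, [v], []),
                                       maxcardinality=True)) == base}
    # Essential vertices adjacent to D form A; the remaining essential vertices form C.
    A = {v for v in G if v not in D and any(u in D for u in G[v])}
    C = set(G) - D - A
    return D, A, C

# Example: a path on four vertices has a perfect matching, so D and A are empty.
print(gallai_edmonds(nx.path_graph(4)))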
One particular method for finding a maximum matching in a graph is Edmonds' blossom algorithm, and the processing done by this algorithm enables us to find the Gallai–Edmonds decomposition directly.
To find a maximum matching in a graph formula_0, the blossom algorithm starts with a small matching and goes through multiple iterations in which it increases the size of the matching by one edge. We can find the Gallai–Edmonds decomposition from the blossom algorithm's work in the last iteration: the work done when it has a maximum matching formula_11, which it fails to make any larger.
In every iteration, the blossom algorithm passes from formula_0 to smaller graphs by contracting subgraphs called "blossoms" to single vertices. When this is done in the last iteration, the blossoms have a special property:
The first property follows from the algorithm: every vertex of a blossom is the endpoint of an alternating path that starts at a vertex uncovered by the matching. The second property follows from the first by the lemma below:
Let formula_0 be a graph, formula_11 a matching in formula_0, and let formula_12 be a cycle of length formula_13 which contains formula_7 edges of formula_11 and is vertex-disjoint from the rest of formula_11. Construct a new graph formula_14 from formula_0 by shrinking formula_12 to a single vertex. Then formula_15 is a maximum matching in formula_14 if and only if formula_11 is a maximum matching in formula_0.
This lemma also implies that when a blossom is contracted, the set of inessential vertices outside the blossom remains the same.
Once every blossom has been contracted by the algorithm, the result is a smaller graph formula_14, a maximum matching formula_16 in formula_14 of the same size as formula_11, and an alternating forest formula_17 in formula_14 with respect to formula_16. In formula_14, the Gallai–Edmonds decomposition has a short description. The vertices in formula_17 are classified into inner vertices (vertices at an odd distance in formula_17 from a root) and outer vertices (vertices at an even distance in formula_17 from a root); formula_18 is exactly the set of inner vertices, and formula_19 is exactly the set of outer vertices. Vertices of formula_14 that are not in formula_17 form formula_20.
Contracting blossoms preserves the set of inessential vertices; therefore formula_3 can be found from formula_19 by taking all vertices of formula_0 which were contracted as part of a blossom, as well as all vertices in formula_19. The vertices in formula_1 and formula_2 are never contracted; formula_21 and formula_22.
Generalizations.
The Gallai–Edmonds decomposition is a generalization of Dulmage–Mendelsohn decomposition from bipartite graphs to general graphs.
An extension of the Gallai–Edmonds decomposition theorem to multi-edge matchings is given in Katarzyna Paluch's "Capacitated Rank-Maximal Matchings".
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "G"
},
{
"math_id": 1,
"text": "A(G)"
},
{
"math_id": 2,
"text": "C(G)"
},
{
"math_id": 3,
"text": "D(G)"
},
{
"math_id": 4,
"text": "V(G)"
},
{
"math_id": 5,
"text": "X \\subseteq A(G)"
},
{
"math_id": 6,
"text": "|X|+1"
},
{
"math_id": 7,
"text": "k"
},
{
"math_id": 8,
"text": "\\frac{1}{2}(|V(G)| - k + |A(G)|)"
},
{
"math_id": 9,
"text": "v"
},
{
"math_id": 10,
"text": "G-v"
},
{
"math_id": 11,
"text": "M"
},
{
"math_id": 12,
"text": "Z"
},
{
"math_id": 13,
"text": "2k+1"
},
{
"math_id": 14,
"text": "G'"
},
{
"math_id": 15,
"text": "M' = M-E(Z)"
},
{
"math_id": 16,
"text": "M'"
},
{
"math_id": 17,
"text": "F'"
},
{
"math_id": 18,
"text": "A(G')"
},
{
"math_id": 19,
"text": "D(G')"
},
{
"math_id": 20,
"text": "C(G')"
},
{
"math_id": 21,
"text": "A(G) = A(G')"
},
{
"math_id": 22,
"text": "C(G) = C(G')"
}
] |
https://en.wikipedia.org/wiki?curid=59747277
|
59749428
|
Nekrasov matrix
|
In mathematics, a Nekrasov matrix or generalised Nekrasov matrix is a type of diagonally dominant matrix (i.e. one in which the diagonal elements are, in a suitable sense, greater than a function of the non-diagonal elements). Specifically, if A is a (generalised) Nekrasov matrix, its diagonal elements are non-zero and also satisfy
formula_0
where,
formula_1.
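A direct numerical check of this condition can be sketched as follows (an illustration using NumPy; the function names are not from any standard library, and the comparison is applied to the moduli of the diagonal entries, consistent with the requirement that they be non-zero):

import numpy as np

def nekrasov_r(A):
    # Compute R_i(A) recursively, following the definition above.
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    R = np.zeros(n)
    for i in range(n):
        R[i] = (sum(abs(A[i, j]) * R[j] / abs(A[j, j]) for j in range(i))
                + sum(abs(A[i, j]) for j in range(i + 1, n)))
    return R

def is_nekrasov(A):
    A = np.asarray(A, dtype=float)
    return np.all(np.abs(np.diag(A)) > nekrasov_r(A))

# A strictly diagonally dominant matrix is in particular a Nekrasov matrix.
print(is_nekrasov([[4, 1, 1], [1, 4, 1], [1, 1, 4]]))  # True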
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\na_{ii} > R_i(A)\n"
},
{
"math_id": 1,
"text": "\nR_i(A) = \\sum_{j=1}^{i-1} |a_{ij}|\\frac{R_j(A)}{|a_{jj}|}+\\sum_{j=i+1}^n |a_{ij}|\n"
}
] |
https://en.wikipedia.org/wiki?curid=59749428
|
5975433
|
Singular control
|
In optimal control, problems of singular control are problems that are difficult to solve because a straightforward application of Pontryagin's minimum principle fails to yield a complete solution. Only a few such problems have been solved, such as Merton's portfolio problem in financial economics or trajectory optimization in aeronautics. A more technical explanation follows.
The most common difficulty in applying Pontryagin's principle arises when the Hamiltonian depends linearly on the control formula_0, i.e., is of the form: formula_1 and the control is restricted to being between an upper and a lower bound: formula_2. To minimize formula_3, we need to make formula_0 as big or as small as possible, depending on the sign of formula_4, specifically:
formula_5
If formula_6 is positive at some times, negative at others and is only zero instantaneously, then the solution is straightforward and is a bang-bang control that switches from formula_7 to formula_8 at times when formula_6 switches from negative to positive.
The case when formula_6 remains at zero for a finite length of time formula_9 is called the singular control case. Between formula_10 and formula_11 the maximization of the Hamiltonian with respect to formula_0 gives us no useful information and the solution in that time interval is going to have to be found from other considerations. One approach is to repeatedly differentiate formula_12 with respect to time until the control u again explicitly appears, though this is not guaranteed to happen eventually. One can then set that expression to zero and solve for u. This amounts to saying that between formula_10 and formula_11 the control formula_0 is determined by the requirement that the singularity condition continues to hold. The resulting so-called singular arc, if it is optimal, will satisfy the Kelley condition:
formula_13
Others refer to this condition as the generalized Legendre–Clebsch condition.
The term bang-singular control refers to a control that has a bang-bang portion as well as a singular portion.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "u"
},
{
"math_id": 1,
"text": "H(u)=\\phi(x,\\lambda,t)u+\\cdots"
},
{
"math_id": 2,
"text": "a\\le u(t)\\le b"
},
{
"math_id": 3,
"text": "H(u)"
},
{
"math_id": 4,
"text": "\\phi(x,\\lambda,t)"
},
{
"math_id": 5,
"text": "u(t) = \\begin{cases} b, & \\phi(x,\\lambda,t)<0 \\\\ ?, & \\phi(x,\\lambda,t)=0 \\\\ a, & \\phi(x,\\lambda,t)>0.\\end{cases}"
},
{
"math_id": 6,
"text": "\\phi"
},
{
"math_id": 7,
"text": "b"
},
{
"math_id": 8,
"text": "a"
},
{
"math_id": 9,
"text": "t_1\\le t\\le t_2"
},
{
"math_id": 10,
"text": "t_1"
},
{
"math_id": 11,
"text": "t_2"
},
{
"math_id": 12,
"text": "\\partial H/\\partial u"
},
{
"math_id": 13,
"text": "(-1)^k \\frac{\\partial}{\\partial u} \\left[ {\\left( \\frac{d}{dt} \\right)}^{2k} H_u \\right] \\ge 0 ,\\, k=0,1,\\cdots"
}
] |
https://en.wikipedia.org/wiki?curid=5975433
|
5975550
|
Power iteration
|
Eigenvalue algorithm
In mathematics, power iteration (also known as the power method) is an eigenvalue algorithm: given a diagonalizable matrix formula_0, the algorithm will produce a number formula_1, which is the greatest (in absolute value) eigenvalue of formula_0, and a nonzero vector formula_2, which is a corresponding eigenvector of formula_1, that is, formula_3.
The algorithm is also known as the Von Mises iteration.
Power iteration is a very simple algorithm, but it may converge slowly. The most time-consuming operation of the algorithm is the multiplication of matrix formula_0 by a vector, so it is effective for a very large sparse matrix with appropriate implementation. The speed of convergence is like formula_4 (see a later section). In words, convergence is exponential with base being the spectral gap.
The method.
The power iteration algorithm starts with a vector formula_5, which may be an approximation to the dominant eigenvector or a random vector. The method is described by the recurrence relation
formula_6
So, at every iteration, the vector formula_7 is multiplied by the matrix formula_0 and normalized.
If we assume formula_0 has an eigenvalue that is strictly greater in magnitude than its other eigenvalues and the starting vector formula_5 has a nonzero component in the direction of an eigenvector associated with the dominant eigenvalue, then a subsequence formula_8 converges to an eigenvector associated with the dominant eigenvalue.
Without the two assumptions above, the sequence formula_8 does not necessarily converge. In this sequence,
formula_9,
where formula_10 is an eigenvector associated with the dominant eigenvalue, and formula_11. The presence of the term formula_12 implies that formula_13 does not converge unless formula_14. Under the two assumptions listed above, the sequence formula_15 defined by
formula_16
converges to the dominant eigenvalue (with Rayleigh quotient).
One may compute this with the following algorithm (shown in Python with NumPy):
import numpy as np

def power_iteration(A, num_iterations: int):
    # Ideally choose a random vector
    # to decrease the chance that our vector
    # is orthogonal to the eigenvector
    b_k = np.random.rand(A.shape[1])

    for _ in range(num_iterations):
        # calculate the matrix-by-vector product Ab
        b_k1 = np.dot(A, b_k)

        # calculate the norm
        b_k1_norm = np.linalg.norm(b_k1)

        # re-normalize the vector
        b_k = b_k1 / b_k1_norm

    return b_k

power_iteration(np.array([[0.5, 0.5], [0.2, 0.8]]), 10)
The vector formula_7 converges to an associated eigenvector. Ideally, one should use the Rayleigh quotient in order to get the associated eigenvalue.
This algorithm is used to calculate the "Google PageRank".
The method can also be used to calculate the spectral radius (the eigenvalue with the largest magnitude, for a square matrix) by computing the Rayleigh quotient
formula_17
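For example, the dominant eigenvalue can be estimated by applying the Rayleigh quotient to the output of the power_iteration routine above (a short sketch; the matrix is the same illustrative one used above, whose dominant eigenvalue is 1):

A = np.array([[0.5, 0.5], [0.2, 0.8]])
b = power_iteration(A, 100)
# The Rayleigh quotient of the (approximately) converged vector
# approximates the dominant eigenvalue.
eigenvalue = b @ A @ b / (b @ b)
print(eigenvalue)  # close to 1.0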
Analysis.
Let formula_0 be decomposed into its Jordan canonical form: formula_18, where the first column of formula_19 is an eigenvector of formula_0 corresponding to the dominant eigenvalue formula_20. Since generically, the dominant eigenvalue of formula_0 is unique, the first Jordan block of formula_21 is the formula_22 matrix formula_23 where formula_20 is the largest eigenvalue of "A" in magnitude. The starting vector formula_5 can be written as a linear combination of the columns of "V":
formula_24
By assumption, formula_25 has a nonzero component in the direction of the dominant eigenvalue, so formula_26.
The computationally useful recurrence relation for formula_27 can be rewritten as:
formula_28
where the expression: formula_29 is more amenable to the following analysis.
formula_30
The expression above simplifies as formula_31
formula_32
The limit follows from the fact that the eigenvalue of formula_33 is less than 1 in magnitude, so
formula_34
It follows that:
formula_35
Using this fact, formula_36 can be written in a form that emphasizes its relationship with formula_37 when "k" is large:
formula_38
where formula_39 and formula_40 as formula_31
The sequence formula_41 is bounded, so it contains a convergent subsequence. Note that the eigenvector corresponding to the dominant eigenvalue is only unique up to a scalar, so although the sequence formula_42 may not converge,
formula_36 is nearly an eigenvector of "A" for large "k".
Alternatively, if "A" is diagonalizable, then the following proof yields the same result.
Let λ1, λ2, ..., λ"m" be the m eigenvalues (counted with multiplicity) of A and let "v"1, "v"2, ..., "v""m" be the corresponding eigenvectors. Suppose that formula_43 is the dominant eigenvalue, so that formula_44 for formula_45.
The initial vector formula_5 can be written:
formula_46
If formula_5 is chosen randomly (with uniform probability), then "c"1 ≠ 0 with probability 1. Now,
formula_47
On the other hand:
formula_48
Therefore, formula_7 converges to (a multiple of) the eigenvector formula_10. The convergence is geometric, with ratio
formula_49
where formula_50 denotes the second dominant eigenvalue. Thus, the method converges slowly if there is an eigenvalue close in magnitude to the dominant eigenvalue.
Applications.
Although the power iteration method approximates only one eigenvalue of a matrix, it remains useful for certain computational problems. For instance, Google uses it to calculate the PageRank of documents in their search engine, and Twitter uses it to show users recommendations of whom to follow. The power iteration method is especially suitable for sparse matrices, such as the web matrix, or as a matrix-free method that does not require storing the coefficient matrix formula_0 explicitly, but can instead access a function evaluating matrix-vector products formula_51. For non-symmetric matrices that are well-conditioned, the power iteration method can outperform the more complex Arnoldi iteration. For symmetric matrices, the power iteration method is rarely used, since its convergence speed can be easily increased without sacrificing the small cost per iteration; see, e.g., Lanczos iteration and LOBPCG.
Some of the more advanced eigenvalue algorithms can be understood as variations of the power iteration. For instance, the inverse iteration method applies power iteration to the matrix formula_52. Other algorithms look at the whole subspace generated by the vectors formula_7. This subspace is known as the Krylov subspace. It can be computed by Arnoldi iteration or Lanczos iteration.
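As an illustration of the inverse iteration idea just mentioned, the same loop can be applied to formula_52 without forming the inverse explicitly, by solving a linear system at each step (a minimal sketch, assuming formula_0 is invertible; it converges to an eigenvector of the eigenvalue of smallest magnitude):

import numpy as np

def inverse_iteration(A, num_iterations: int):
    b = np.random.rand(A.shape[1])
    for _ in range(num_iterations):
        # Solving A x = b is equivalent to multiplying b by the inverse of A.
        b = np.linalg.solve(A, b)
        b = b / np.linalg.norm(b)
    return b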
Gram iteration is a super-linear and deterministic method to compute the largest eigenpair.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "\\lambda"
},
{
"math_id": 2,
"text": "v"
},
{
"math_id": 3,
"text": "Av = \\lambda v"
},
{
"math_id": 4,
"text": "(\\lambda_1 / \\lambda_2)^k "
},
{
"math_id": 5,
"text": "b_0"
},
{
"math_id": 6,
"text": " b_{k+1} = \\frac{Ab_k}{\\|Ab_k\\|} "
},
{
"math_id": 7,
"text": "b_k"
},
{
"math_id": 8,
"text": "\\left( b_{k} \\right)"
},
{
"math_id": 9,
"text": "b_k = e^{i \\phi_k} v_1 + r_k"
},
{
"math_id": 10,
"text": "v_1"
},
{
"math_id": 11,
"text": " \\| r_{k} \\| \\rightarrow 0"
},
{
"math_id": 12,
"text": "e^{i \\phi_{k}}"
},
{
"math_id": 13,
"text": "\\left( b_{k} \\right) "
},
{
"math_id": 14,
"text": "e^{i \\phi_{k}} = 1"
},
{
"math_id": 15,
"text": "\\left( \\mu_{k} \\right)"
},
{
"math_id": 16,
"text": "\\mu_{k} = \\frac{b_{k}^{*}Ab_{k}}{b_{k}^{*}b_{k}}"
},
{
"math_id": 17,
"text": "\\rho(A) = \\max \\left \\{ |\\lambda_1|, \\dotsc, |\\lambda_n| \\right \\} = \\frac{b_k^\\top A b_k}{b_k^\\top b_k}. "
},
{
"math_id": 18,
"text": "A=VJV^{-1}"
},
{
"math_id": 19,
"text": "V"
},
{
"math_id": 20,
"text": "\\lambda_{1}"
},
{
"math_id": 21,
"text": "J"
},
{
"math_id": 22,
"text": "1 \\times 1"
},
{
"math_id": 23,
"text": "[\\lambda_1],"
},
{
"math_id": 24,
"text": "b_{0} = c_{1}v_{1} + c_{2}v_{2} + \\cdots + c_{n}v_{n}."
},
{
"math_id": 25,
"text": "b_{0}"
},
{
"math_id": 26,
"text": "c_{1} \\ne 0"
},
{
"math_id": 27,
"text": "b_{k+1}"
},
{
"math_id": 28,
"text": "b_{k+1}=\\frac{Ab_{k}}{\\|Ab_{k}\\|}=\\frac{A^{k+1}b_{0}}{\\|A^{k+1}b_{0}\\|},"
},
{
"math_id": 29,
"text": "\\frac{A^{k+1}b_{0}}{\\|A^{k+1}b_{0}\\|}"
},
{
"math_id": 30,
"text": "\\begin{align}\nb_k &= \\frac{A^{k}b_{0}}{\\| A^{k} b_{0} \\|} \\\\\n &= \\frac{\\left( VJV^{-1} \\right)^{k} b_{0}}{\\|\\left( VJV^{-1} \\right)^{k}b_{0}\\|} \\\\\n &= \\frac{ VJ^{k}V^{-1} b_{0}}{\\| V J^{k} V^{-1} b_{0}\\|} \\\\\n &= \\frac{ VJ^{k}V^{-1} \\left( c_{1}v_{1} + c_{2}v_{2} + \\cdots + c_{n}v_{n} \\right)}{\\| V J^{k} V^{-1} \\left( c_{1}v_{1} + c_{2}v_{2} + \\cdots + c_{n}v_{n} \\right)\\|} \\\\\n &= \\frac{ VJ^{k}\\left( c_{1}e_{1} + c_{2}e_{2} + \\cdots + c_{n}e_{n} \\right)}{\\| V J^{k} \\left( c_{1}e_{1} + c_{2}e_{2} + \\cdots + c_{n}e_{n} \\right) \\|} \\\\\n &= \\left( \\frac{\\lambda_{1}}{|\\lambda_{1}|} \\right)^{k} \\frac{c_{1}}{|c_{1}|} \\frac{ v_{1} + \\frac{1}{c_{1}} V \\left( \\frac{1}{\\lambda_1} J \\right)^{k} \\left( c_{2}e_{2} + \\cdots + c_{n}e_{n} \\right)}{ \\left \\| v_{1} + \\frac{1}{c_{1}} V \\left( \\frac{1}{\\lambda_1} J \\right)^{k} \\left( c_{2}e_{2} + \\cdots + c_{n}e_{n} \\right) \\right \\| }\n\\end{align}"
},
{
"math_id": 31,
"text": "k \\to \\infty "
},
{
"math_id": 32,
"text": "\\left( \\frac{1}{\\lambda_{1}} J \\right)^{k} = \n\\begin{bmatrix}\n[1] & & & & \\\\\n& \\left( \\frac{1}{\\lambda_{1}} J_{2} \\right)^{k}& & & \\\\\n& & \\ddots & \\\\\n& & & \\left( \\frac{1}{\\lambda_{1}} J_{m} \\right)^{k} \\\\\n\\end{bmatrix}\n\\rightarrow\n\\begin{bmatrix}\n1 & & & & \\\\\n& 0 & & & \\\\\n& & \\ddots & \\\\\n& & & 0 \\\\\n\\end{bmatrix} \\quad \\text{as} \\quad k \\to \\infty."
},
{
"math_id": 33,
"text": " \\frac{1}{\\lambda_{1}} J_{i} "
},
{
"math_id": 34,
"text": "\\left( \\frac{1}{\\lambda_{1}} J_{i} \\right)^{k} \\to 0 \\quad \\text{as} \\quad k \\to \\infty."
},
{
"math_id": 35,
"text": "\\frac{1}{c_{1}} V \\left( \\frac{1}{\\lambda_1} J \\right)^{k} \\left( c_{2}e_{2} + \\cdots + c_{n}e_{n} \\right) \\to 0 \\quad \\text{as} \\quad k \\to \\infty"
},
{
"math_id": 36,
"text": "b_{k}"
},
{
"math_id": 37,
"text": "v_{1}"
},
{
"math_id": 38,
"text": "\\begin{align}\nb_k &= \\left( \\frac{\\lambda_{1}}{|\\lambda_{1}|} \\right)^{k} \\frac{c_{1}}{|c_{1}|} \\frac{v_{1} + \\frac{1}{c_{1}} V \\left( \\frac{1}{\\lambda_1} J \\right)^{k} \\left( c_{2}e_{2} + \\cdots + c_{n}e_{n} \\right)}{\\left \\| v_{1} + \\frac{1}{c_{1}} V \\left( \\frac{1}{\\lambda_1} J \\right)^{k} \\left( c_{2}e_{2} + \\cdots + c_{n}e_{n} \\right) \\right \\| } \\\\[6pt]\n &= e^{i \\phi_{k}} \\frac{c_{1}}{|c_{1}|} \\frac{v_{1}}{\\|v_{1}\\|} + r_{k}\n\\end{align}"
},
{
"math_id": 39,
"text": "e^{i \\phi_{k}} = \\left( \\lambda_{1} / |\\lambda_{1}| \\right)^{k} "
},
{
"math_id": 40,
"text": " \\| r_{k} \\| \\to 0 "
},
{
"math_id": 41,
"text": " \\left( b_{k} \\right)"
},
{
"math_id": 42,
"text": "\\left(b_{k}\\right)"
},
{
"math_id": 43,
"text": "\\lambda_1"
},
{
"math_id": 44,
"text": "|\\lambda_1| > |\\lambda_j|"
},
{
"math_id": 45,
"text": "j>1"
},
{
"math_id": 46,
"text": "b_0 = c_{1}v_{1} + c_{2}v_{2} + \\cdots + c_{m}v_{m}."
},
{
"math_id": 47,
"text": "\\begin{align}\nA^{k}b_0 &= c_{1}A^{k}v_{1} + c_{2}A^{k}v_{2} + \\cdots + c_{m}A^{k}v_{m} \\\\\n&= c_{1}\\lambda_{1}^{k}v_{1} + c_{2}\\lambda_{2}^{k}v_{2} + \\cdots + c_{m}\\lambda_{m}^{k}v_{m} \\\\\n&= c_{1}\\lambda_{1}^{k} \\left( v_{1} + \\frac{c_{2}}{c_{1}}\\left(\\frac{\\lambda_{2}}{\\lambda_{1}}\\right)^{k}v_{2} + \\cdots + \\frac{c_{m}}{c_{1}}\\left(\\frac{\\lambda_{m}}{\\lambda_{1}}\\right)^{k}v_{m}\\right) \\\\\n&\\to c_{1}\\lambda_{1}^{k} v_1 && \\left |\\frac{\\lambda_j}{\\lambda_1} \\right | < 1 \\text{ for } j>1\n\\end{align}"
},
{
"math_id": 48,
"text": " b_k = \\frac{A^k b_0}{\\|A^kb_0\\|}. "
},
{
"math_id": 49,
"text": " \\left| \\frac{\\lambda_2}{\\lambda_1} \\right|, "
},
{
"math_id": 50,
"text": "\\lambda_2"
},
{
"math_id": 51,
"text": "Ax"
},
{
"math_id": 52,
"text": "A^{-1}"
}
] |
https://en.wikipedia.org/wiki?curid=5975550
|
597564
|
Just-noticeable difference
|
Amount a stimulus must be changed to be detected
In the branch of experimental psychology focused on sense, sensation, and perception, which is called psychophysics, a just-noticeable difference or JND is the amount something must be changed in order for a difference to be noticeable, detectable at least half the time. This limen is also known as the difference limen, difference threshold, or least perceptible difference.
Quantification.
For many sensory modalities, over a wide range of stimulus magnitudes sufficiently far from the upper and lower limits of perception, the 'JND' is a fixed proportion of the reference sensory level, and so the ratio of the JND/reference is roughly constant (that is the JND is a constant proportion/percentage of the reference level). Measured in physical units, we have:
formula_0
where formula_1 is the original intensity of the particular stimulation, formula_2 is the addition to it required for the change to be perceived (the JND), and "k" is a constant. This rule was first discovered by Ernst Heinrich Weber (1795–1878), an anatomist and physiologist, in experiments on the thresholds of perception of lifted weights. A theoretical rationale (not universally accepted) was subsequently provided by Gustav Fechner, so the rule is therefore known either as the Weber Law or as the Weber–Fechner law; the constant "k" is called the Weber constant. It is true, at least to a good approximation, of many but not all sensory dimensions, for example the brightness of lights, and the intensity and the pitch of sounds. It is not true, however, for the wavelength of light. Stanley Smith Stevens argued that it would hold only for what he called "prothetic" sensory continua, where change of input takes the form of increase in intensity or something obviously analogous; it would not hold for "metathetic" continua, where change of input produces a qualitative rather than a quantitative change of the percept. Stevens developed his own law, called Stevens' Power Law, that raises the stimulus to a constant power while, like Weber, also multiplying it by a constant factor in order to achieve the perceived stimulus.
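As a simple numerical illustration of the proportionality (using a hypothetical Weber fraction, not a measured one):

k = 0.1                      # hypothetical Weber fraction for lifted weights
for I in (100, 200, 400):    # reference intensities, in grams
    print(I, "g ->", k * I, "g JND")   # the JND grows in proportion to I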
The JND is a statistical, rather than an exact quantity: from trial to trial, the difference that a given person notices will vary somewhat, and it is therefore necessary to conduct many trials in order to determine the threshold. The JND usually reported is the difference that a person notices on 50% of trials. If a different proportion is used, this should be included in the description—for example one might report the value of the "75% JND".
Modern approaches to psychophysics, for example signal detection theory, imply that the observed JND, even in this statistical sense, is not an absolute quantity, but will depend on situational and motivational as well as perceptual factors. For example, when a researcher flashes a very dim light, a participant may report seeing it on some trials but not on others.
The JND formula has an objective interpretation (implied at the start of this entry) as the disparity between levels of the presented stimulus that is detected on 50% of occasions by a particular observed response, rather than what is subjectively "noticed" or as a difference in magnitudes of consciously experienced 'sensations'. This 50%-discriminated disparity can be used as a universal unit of measurement of the psychological distance of the level of a feature in an object or situation and an internal standard of comparison in memory, such as the 'template' for a category or the 'norm' of recognition. The JND-scaled distances from norm can be combined among observed and inferred psychophysical functions to generate diagnostics among hypothesised information-transforming (mental) processes mediating observed quantitative judgments.
Music production applications.
In music production, a single change in a property of sound which is below the JND does not affect perception of the sound. For amplitude, the JND for humans is around 1 dB.
The JND for tone is dependent on the tone's frequency content. Below 500 Hz, the JND is about 3 Hz for sine waves, and 1 Hz for complex tones; above 1000 Hz, the JND for sine waves is about 0.6% (about 10 cents).
The JND is typically tested by playing two tones in quick succession with the listener asked if there was a difference in their pitches. The JND becomes smaller if the two tones are played simultaneously as the listener is then able to discern beat frequencies. The total number of perceptible pitch steps in the range of human hearing is about 1,400; the total number of notes in the equal-tempered scale, from 16 to 16,000 Hz, is 120.
In speech perception.
JND analysis occurs frequently in both music and speech, the two being related and overlapping in the analysis of speech prosody (i.e. speech melody). While several studies have shown that the JND for tones (not necessarily sine waves) might normally lie between 5 and 9 semitones (STs), a small percentage of individuals exhibit an accuracy of between a quarter and a half ST. Although the JND varies as a function of the frequency band being tested, it has been shown that the JND for the best performers at around 1 kHz is well below 1 Hz (i.e. less than a tenth of a percent). It is, however, important to be aware of the role played by critical bandwidth when performing this kind of analysis.
When analysing speech melody, rather than musical tones, accuracy decreases. This is not surprising given that speech does not stay at fixed intervals in the way that tones in music do. Johan 't Hart (1981) found that JND for speech averaged between 1 and 2 STs but concluded that "only differences of more than 3 semitones play a part in communicative situations".
Note that, given the logarithmic characteristics of Hz, for both music and speech perception, results should not be reported in Hz but either as percentages or in STs (5 Hz between 20 and 25 Hz is very different from 5 Hz between 2000 and 2005 Hz, but an ~18.9% or 3-semitone increase is perceptually the same size difference, regardless of whether one starts at 20 Hz or at 2000 Hz).
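The conversion behind these figures is logarithmic and can be sketched in a few lines (an illustration only):

from math import log2

def semitones(f1, f2):
    # Interval between two frequencies, in equal-tempered semitones (1 ST = 100 cents).
    return 12 * log2(f2 / f1)

print(semitones(20, 25))                   # ~3.9 ST for a 5 Hz step starting at 20 Hz
print(semitones(2000, 2005))               # ~0.04 ST for the same 5 Hz step at 2 kHz
print(semitones(20, 20 * 2 ** (3 / 12)))   # exactly 3 ST, i.e. an ~18.9% increase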
Marketing applications.
Weber's law has important applications in marketing. Manufacturers and marketers endeavor to determine the relevant JND for their products for two very different reasons:
When it comes to product improvements, marketers very much want to meet or exceed the consumer's differential threshold; that is, they want consumers to readily perceive any improvements made in the original products. Marketers use the JND to determine the amount of improvement they should make in their products. Less than the JND is wasted effort because the improvement will not be perceived; more than the JND is again wasteful because it reduces the level of repeat sales. On the other hand, when it comes to price increases, less than the JND is desirable because consumers are unlikely to notice it.
Haptics applications.
Weber's law is used in haptic devices and robotic applications. Exerting the proper amount of force on the human operator is a critical aspect of human-robot interaction and teleoperation scenarios, and it can greatly improve the user's performance in accomplishing a task.
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac {\\Delta I} {I} = k,"
},
{
"math_id": 1,
"text": "I\\!"
},
{
"math_id": 2,
"text": "\\Delta I\\!"
}
] |
https://en.wikipedia.org/wiki?curid=597564
|
597584
|
Tree traversal
|
Class of algorithms
In computer science, tree traversal (also known as tree search and walking the tree) is a form of graph traversal and refers to the process of visiting (e.g. retrieving, updating, or deleting) each node in a tree data structure, exactly once. Such traversals are classified by the order in which the nodes are visited. The following algorithms are described for a binary tree, but they may be generalized to other trees as well.
Types.
Unlike linked lists, one-dimensional arrays and other linear data structures, which are canonically traversed in linear order, trees may be traversed in multiple ways. They may be traversed in depth-first or breadth-first order. There are three common ways to traverse them in depth-first order: in-order, pre-order and post-order. Beyond these basic traversals, various more complex or hybrid schemes are possible, such as depth-limited searches like iterative deepening depth-first search. The latter, as well as breadth-first search, can also be used to traverse infinite trees, see below.
Data structures for tree traversal.
Traversing a tree involves iterating over all nodes in some manner. Because from a given node there is more than one possible next node (it is not a linear data structure), then, assuming sequential computation (not parallel), some nodes must be deferred—stored in some way for later visiting. This is often done via a stack (LIFO) or queue (FIFO). As a tree is a self-referential (recursively defined) data structure, traversal can be defined by recursion or, more subtly, corecursion, in a natural and clear fashion; in these cases the deferred nodes are stored implicitly in the call stack.
Depth-first search is easily implemented via a stack, including recursively (via the call stack), while breadth-first search is easily implemented via a queue, including corecursively.
Depth-first search.
In "depth-first search" (DFS), the search tree is deepened as much as possible before going to the next sibling.
To traverse binary trees with depth-first search, perform the following operations at each node: visit the node itself (N), recursively traverse the node's left subtree (L), and recursively traverse the node's right subtree (R); if the node is empty, simply return. The three depth-first orders differ only in where the visit (N) is placed relative to L and R.
The trace of a traversal is called a sequentialisation of the tree. The traversal trace is a list of each visited node. No one sequentialisation according to pre-, in- or post-order describes the underlying tree uniquely. Given a tree with distinct elements, either pre-order or post-order paired with in-order is sufficient to describe the tree uniquely. However, pre-order with post-order leaves some ambiguity in the tree structure.
There are three positions relative to the node (in the figure: red, green, or blue) at which the visit of the node can take place. The choice of exactly one color determines exactly one visit of a node, as described below. Visiting at all three colors results in a threefold visit of the same node, yielding the “all-order” sequentialisation:
F-B-A-A-A-B-D-C-C-C-D-E-E-E-D-B-F-G-G-I-H-H-H-I-I-G-F
Pre-order, NLR.
The pre-order traversal is a topologically sorted one, because a parent node is processed before any of its child nodes is done.
Post-order, LRN.
Post-order traversal can be useful to get postfix expression of a binary expression tree.
In-order, LNR.
In a binary search tree ordered such that in each node the key is greater than all keys in its left subtree and less than all keys in its right subtree, in-order traversal retrieves the keys in "ascending" sorted order.
Reverse in-order, RNL.
In a binary search tree ordered such that in each node the key is greater than all keys in its left subtree and less than all keys in its right subtree, reverse in-order traversal retrieves the keys in "descending" sorted order.
Arbitrary trees.
To traverse arbitrary trees (not necessarily binary trees) with depth-first search, perform the following operations at each node: perform the pre-order operation, recursively traverse each of the node's subtrees in turn while performing an in-order operation between consecutive subtrees, and finally perform the post-order operation; if the node is empty, simply return.
Depending on the problem at hand, pre-order, post-order, and especially one of the number of subtrees − 1 in-order operations may be optional. Also, in practice more than one of pre-order, post-order, and in-order operations may be required. For example, when inserting into a ternary tree, a pre-order operation is performed by comparing items. A post-order operation may be needed afterwards to re-balance the tree.
Breadth-first search.
In "breadth-first search" (BFS) or "level-order search", the search tree is broadened as much as possible before going to the next depth.
Other types.
There are also tree traversal algorithms that classify as neither depth-first search nor breadth-first search. One such algorithm is Monte Carlo tree search, which concentrates on analyzing the most promising moves, basing the expansion of the search tree on random sampling of the search space.
Applications.
Pre-order traversal can be used to make a prefix expression (Polish notation) from expression trees: traverse the expression tree pre-orderly. For example, traversing the depicted arithmetic expression in pre-order yields "+ * A − B C + D E". In prefix notation, there is no need for any parentheses as long as each operator has a fixed number of operands. Pre-order traversal is also used to create a copy of the tree.
Post-order traversal can generate a postfix representation (Reverse Polish notation) of a binary tree. Traversing the depicted arithmetic expression in post-order yields "A B C − * D E + +"; the latter can easily be transformed into machine code to evaluate the expression by a stack machine. Post-order traversal is also used to delete the tree. Each node is freed after freeing its children.
In-order traversal is very commonly used on binary search trees because it returns values from the underlying set in order, according to the comparator that set up the binary search tree.
Implementations.
Depth-first search implementation.
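The three depth-first orders described above can be written recursively in a few lines each. The following Python sketch is illustrative only; it assumes nodes with value, left and right attributes:

class Node:
    def __init__(self, value, left=None, right=None):
        self.value, self.left, self.right = value, left, right

def preorder(node, visit):      # NLR
    if node is not None:
        visit(node.value)
        preorder(node.left, visit)
        preorder(node.right, visit)

def inorder(node, visit):       # LNR
    if node is not None:
        inorder(node.left, visit)
        visit(node.value)
        inorder(node.right, visit)

def postorder(node, visit):     # LRN
    if node is not None:
        postorder(node.left, visit)
        postorder(node.right, visit)
        visit(node.value)

# Example: inorder(root, print) prints the keys of a binary search tree in ascending order.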
Another variant of Pre-order.
If the tree is represented by an array (first index is 0), it is possible to calculate the index of the next element:
procedure bubbleUp(array, i, leaf)
    k ← 1
    i ← (i - 1)/2
    while (leaf + 1) % (k * 2) ≠ k
        i ← (i - 1)/2
        k ← 2 * k
    return i

procedure preorder(array)
    i ← 0
    while i ≠ array.size
        visit(array[i])
        if i = array.size - 1
            i ← array.size
        else if i < array.size/2
            i ← i * 2 + 1
        else
            leaf ← i - array.size/2
            parent ← bubbleUp(array, i, leaf)
            i ← parent * 2 + 2
Advancing to the next or previous node.
The codice_0 to be started with may have been found in the binary search tree codice_1 by means of a standard search function, which is shown here in an implementation without parent pointers, i.e. it uses a codice_2 for holding the ancestor pointers.
procedure search(bst, key)
    // returns a (node, stack)
    node ← bst.root
    stack ← empty stack
    while node ≠ null
        stack.push(node)
        if key = node.key
            return (node, stack)
        if key < node.key
            node ← node.left
        else
            node ← node.right
    return (null, empty stack)
The function inorderNext returns an in-order-neighbor of codice_0, either the in-order-"suc"cessor (for codice_4) or the in-order-"prede"cessor (for codice_5), and the updated codice_2, so that the binary search tree may be sequentially in-order-traversed and searched in the given direction codice_7 further on.
procedure inorderNext(node, dir, stack)
    newnode ← node.child[dir]
    if newnode ≠ null
        do
            node ← newnode
            stack.push(node)
            newnode ← node.child[1-dir]
        until newnode = null
        return (node, stack)
    // node does not have a dir-child:
    do
        if stack.isEmpty()
            return (null, empty stack)
        oldnode ← node
        node ← stack.pop() // parent of oldnode
    until oldnode ≠ node.child[dir]
    // now oldnode = node.child[1-dir],
    // i.e. node = ancestor (and predecessor/successor) of original node
    return (node, stack)
Note that the function does not use keys, which means that the sequential structure is completely recorded by the binary search tree’s edges. For traversals without change of direction, the (amortised) average complexity is formula_0 because a full traversal takes formula_1 steps for a BST of size formula_2 1 step for edge up and 1 for edge down. The worst-case complexity is formula_3 with formula_4 as the height of the tree.
All the above implementations require stack space proportional to the height of the tree which is a call stack for the recursive and a parent (ancestor) stack for the iterative ones. In a poorly balanced tree, this can be considerable. With the iterative implementations we can remove the stack requirement by maintaining parent pointers in each node, or by threading the tree (next section).
Morris in-order traversal using threading.
A binary tree is threaded by making every left child pointer (that would otherwise be null) point to the in-order predecessor of the node (if it exists) and every right child pointer (that would otherwise be null) point to the in-order successor of the node (if it exists).
Advantages:
Disadvantages:
Morris traversal is an implementation of in-order traversal that uses threading: the tree is temporarily re-threaded so that each node can be reached again after its left subtree has been visited, and the added threads are removed as the traversal proceeds, leaving the tree unchanged.
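A minimal Python sketch of the procedure (illustrative only, assuming nodes with value, left and right attributes):

def morris_inorder(root, visit):
    # In-order traversal using O(1) extra space: each node's in-order
    # predecessor is temporarily threaded to the node, then unthreaded.
    node = root
    while node is not None:
        if node.left is None:
            visit(node.value)
            node = node.right
        else:
            # Find the in-order predecessor: the rightmost node of the left subtree.
            pred = node.left
            while pred.right is not None and pred.right is not node:
                pred = pred.right
            if pred.right is None:
                pred.right = node      # create the thread, then descend left
                node = node.left
            else:
                pred.right = None      # remove the thread, visit, go right
                visit(node.value)
                node = node.right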
Breadth-first search.
Also, listed below is pseudocode for a simple queue based level-order traversal, and will require space proportional to the maximum number of nodes at a given depth. This can be as much as half the total number of nodes. A more space-efficient approach for this type of traversal can be implemented using an iterative deepening depth-first search.
procedure levelorder(node)
    queue ← empty queue
    queue.enqueue(node)
    while not queue.isEmpty()
        node ← queue.dequeue()
        visit(node)
        if node.left ≠ null
            queue.enqueue(node.left)
        if node.right ≠ null
            queue.enqueue(node.right)
If the tree is represented by an array (first index is 0), it is sufficient iterating through all elements:
procedure levelorder(array)
    for i from 0 to array.size
        visit(array[i])
Infinite trees.
While traversal is usually done for trees with a finite number of nodes (and hence finite depth and finite branching factor) it can also be done for infinite trees. This is of particular interest in functional programming (particularly with lazy evaluation), as infinite data structures can often be easily defined and worked with, though they are not (strictly) evaluated, as this would take infinite time. Some finite trees are too large to represent explicitly, such as the game tree for chess or go, and so it is useful to analyze them as if they were infinite.
A basic requirement for traversal is to visit every node eventually. For infinite trees, simple algorithms often fail this. For example, given a binary tree of infinite depth, a depth-first search will go down one side (by convention the left side) of the tree, never visiting the rest, and indeed an in-order or post-order traversal will never visit "any" nodes, as it has not reached a leaf (and in fact never will). By contrast, a breadth-first (level-order) traversal will traverse a binary tree of infinite depth without problem, and indeed will traverse any tree with bounded branching factor.
On the other hand, given a tree of depth 2, where the root has infinitely many children, and each of these children has two children, a depth-first search will visit all nodes, as once it exhausts the grandchildren (children of children of one node), it will move on to the next (assuming it is not post-order, in which case it never reaches the root). By contrast, a breadth-first search will never reach the grandchildren, as it seeks to exhaust the children first.
A more sophisticated analysis of running time can be given via infinite ordinal numbers; for example, the breadth-first search of the depth 2 tree above will take ω·2 steps: ω for the first level, and then another ω for the second level.
Thus, simple depth-first or breadth-first searches do not traverse every infinite tree, and are not efficient on very large trees. However, hybrid methods can traverse any (countably) infinite tree, essentially via a diagonal argument ("diagonal"—a combination of vertical and horizontal—corresponds to a combination of depth and breadth).
Concretely, given the infinitely branching tree of infinite depth, label the root (), the children of the root (1), (2), ..., the grandchildren (1, 1), (1, 2), ..., (2, 1), (2, 2), ..., and so on. The nodes are thus in a one-to-one correspondence with finite (possibly empty) sequences of positive numbers, which are countable and can be placed in order first by sum of entries, and then by lexicographic order within a given sum (only finitely many sequences sum to a given value, so all entries are reached—formally there are a finite number of compositions of a given natural number, specifically 2"n"−1 compositions of "n" ≥ 1), which gives a traversal. Explicitly:
etc.
This can be interpreted as mapping the infinite depth binary tree onto this tree and then applying breadth-first search: replace the "down" edges connecting a parent node to its second and later children with "right" edges from the first child to the second child, from the second child to the third child, etc. Thus at each step one can either go down (append a (, 1) to the end) or go right (add one to the last number) (except the root, which is extra and can only go down), which shows the correspondence between the infinite binary tree and the above numbering; the sum of the entries (minus one) corresponds to the distance from the root, which agrees with the 2"n"−1 nodes at depth "n" − 1 in the infinite binary tree (2 corresponds to binary).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathcal{O}(1) ,"
},
{
"math_id": 1,
"text": "2 n-2"
},
{
"math_id": 2,
"text": "n ,"
},
{
"math_id": 3,
"text": "\\mathcal{O}(h) "
},
{
"math_id": 4,
"text": "h"
}
] |
https://en.wikipedia.org/wiki?curid=597584
|
59769241
|
Performance effects
|
Strategy researchers want to understand differences in firm performance. For example, what can explain performance differences between Toyota’s cars business and Samsung’s mobile phones business? Studies show that just three effects account for most performance differences between such businesses: the industry to which a business belongs (automotive industry vs electronics industry), the corporation it is part of (Toyota vs. Samsung), and the business itself.
Effect.
Performance usually means financial performance, measured most often as return on assets or less often as return on sales, return on invested capital, or market share.
A performance effect is an observed difference in business performance. For example, it compares the performance of Toyota's cars business and that of Samsung's mobile phones business. A performance effect is "not" a causal effect. For example, it does not indicate what the performance of the mobile phone business would have been if Toyota instead of Samsung was the owner.
Levels of analysis.
Performance effects occur at multiple level of analysis.
Industry, corporate, business, and year effects.
Industry, corporate, business, and year effects are among the most investigated levels of analyses. An industry is a group of businesses that sell similar goods or services. For example, Toyota's cars business belongs to the automotive industry and Samsung's mobile phones business to the electronics industry. A corporation is the legal owner of the business. For example, Berkshire Hathaway owns many businesses including of clothing, building products, and insurance. Thus, a corporation can own more than one business. A business is then defined by what it does (i.e. industry) and by whom it is owned (i.e. corporation). Year refers to the year of performance.
An industry effect is the performance difference of businesses in an industry and those in other industries. A corporate effect is the performance difference of businesses of a corporation and those of other corporations. A business effect is the performance difference of a business and those of other businesses. A year effect is the performance difference of businesses in one year and those in another year.
Formally, the performance (formula_0) of a business in industry formula_1, corporation formula_2, and year formula_3 can be written as:
formula_4
Here formula_5 is the mean performance of all businesses across all years. formula_6 is the industry effect for industry formula_1 (the performance difference between industry formula_7 and the mean); formula_8 is the corporate effect for corporation formula_2 (the performance difference between corporation formula_2 and the mean); formula_9 is the business effect for a business in industry formula_1 and corporation formula_2 (the performance difference between that business and the mean); formula_10 is the year effect for year formula_3 (the performance difference between year formula_3 and the mean); and formula_11 is an error term (the performance difference between a business and the mean that is not accounted for by industry, corporate, business, and year effects).
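The bookkeeping behind this model can be illustrated with a small synthetic panel. The following Python sketch is a crude sequential decomposition for illustration only; all names and numbers are hypothetical, and it is not the hierarchical linear model or variance-components estimator used in the literature (with unbalanced data the resulting shares need not sum exactly to one):

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 600
# Hypothetical business-year panel.
df = pd.DataFrame({
    "industry":    rng.choice(["auto", "electronics", "retail"], n),
    "corporation": rng.choice([f"corp{i}" for i in range(20)], n),
    "year":        rng.choice(np.arange(2015, 2020), n),
})
df["business"] = df["corporation"] + "/" + df["industry"]
# Simulated return on assets: a small industry effect plus noise.
df["roa"] = (df["industry"].map({"auto": 0.04, "electronics": 0.08, "retail": 0.02})
             + rng.normal(0, 0.03, n))

m = df["roa"].mean()                                # grand mean
ind = df.groupby("industry")["roa"].mean() - m      # industry effects I_i
corp = df.groupby("corporation")["roa"].mean() - m  # corporate effects C_c
yr = df.groupby("year")["roa"].mean() - m           # year effects Y_t
resid = (df["roa"] - m - df["industry"].map(ind)
         - df["corporation"].map(corp) - df["year"].map(yr))
bus = resid.groupby(df["business"]).mean()          # business effects B_ic
err = resid - df["business"].map(bus)               # error term e_ict

total = df["roa"].var()
for name, s in [("industry", df["industry"].map(ind)), ("corporate", df["corporation"].map(corp)),
                ("business", df["business"].map(bus)), ("year", df["year"].map(yr)), ("error", err)]:
    print(name, round(s.var() / total, 2))          # rough share of total variance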
A meta-analysis finds that the strongest effects are business, then corporate, then industry, and then year. Figures 1 and 2 show the strength of each effect with effect sizes in variance and in standard deviation, respectively.
Other.
Other performance effects include the chief executive officer and geographical region or country.
Effect sizes.
An effect size is a measure of the magnitude of performance differences.
A common measure is the variance. A finding of 36% for business effects means that the variance in business effects is 36% of the total variance in performance. Put differently, about one third of the variance in performance is related to differences between businesses, with the other two-thirds related to other effects (e.g. different industries, different corporations, different years, and random differences). An upside of the variance measure is that the effects sum to 100%. A downside is that the variance uses squared distances, so that large effects are amplified and small effects are shrunk.
Another measure is the standard deviation, which is the square root of the variance. An upside of this measure is that the standard deviation relates to linear distances, so effects are not similarly amplified or shrunk. For example, business effects are greater than year effects by a factor of about 45 when using variance, but by a factor of only about 8 when using standard deviation. Relatedly, the standard deviation measure has the same unit of measurement as performance. For example, if performance is in dollars, then the standard deviation is also in dollars (the variance would be in dollars squared). A downside is that the effects measured in standard deviations do not sum to 100%.
An alternative measure is the sum of squares measure. It seeks to attribute squared performance difference to the different effects. Because the sum of squares measure does not account for degrees of freedom, it is sensitive to sample dimensions. For example, sampling more businesses in the same number of industries will change the ratio of sum of squares due to industry and sum of squares due to business.
Methods.
Different methods are used to estimate effect sizes, including hierarchical linear models, analysis of variance (ANOVA), and variance components analysis (VCA).
|
[
{
"math_id": 0,
"text": "p_{ict}"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "c"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": " p_{ict} = m + I_i + C_c + B_{ic} + Y_t + e_{ict} "
},
{
"math_id": 5,
"text": "m"
},
{
"math_id": 6,
"text": "I_i"
},
{
"math_id": 7,
"text": "it"
},
{
"math_id": 8,
"text": "C_c"
},
{
"math_id": 9,
"text": "B_{ic}"
},
{
"math_id": 10,
"text": "Y_t"
},
{
"math_id": 11,
"text": "e_{ict}"
}
] |
https://en.wikipedia.org/wiki?curid=59769241
|
59776675
|
AWM–Microsoft Research Prize in Algebra and Number Theory
|
Research honor for women
The AWM–Microsoft Research Prize in Algebra and Number Theory is a prize given every other year by the Association for Women in Mathematics to an outstanding young female researcher in algebra or number theory. It was funded in 2012 by Microsoft Research and first issued in 2014.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p"
}
] |
https://en.wikipedia.org/wiki?curid=59776675
|
597837
|
Sato–Tate conjecture
|
Mathematical conjecture about elliptic curves
In mathematics, the Sato–Tate conjecture is a statistical statement about the family of elliptic curves "Ep" obtained from an elliptic curve "E" over the rational numbers by reduction modulo almost all prime numbers "p". Mikio Sato and John Tate independently posed the conjecture around 1960.
If "Np" denotes the number of points on the elliptic curve "Ep" defined over the finite field with "p" elements, the conjecture gives an answer to the distribution of the second-order term for "Np". By Hasse's theorem on elliptic curves,
formula_0
as formula_1, and the point of the conjecture is to predict how the O-term varies.
The original conjecture and its generalization to all totally real fields was proved by Laurent Clozel, Michael Harris, Nicholas Shepherd-Barron, and Richard Taylor under mild assumptions in 2008, and completed by Thomas Barnet-Lamb, David Geraghty, Harris, and Taylor in 2011. Several generalizations to other algebraic varieties and fields are open.
Statement.
Let "E" be an elliptic curve defined over the rational numbers without complex multiplication. For a prime number "p", define "θ""p" as the solution to the equation
formula_2
Then, for every two real numbers formula_3 and formula_4 for which formula_5
formula_6
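The quantities involved can be computed numerically for small primes by brute force. The following Python sketch is an illustration only; the curve (here y² = x³ + x + 1, which has no complex multiplication) and the naive point count are chosen for simplicity, whereas serious computations use dedicated point-counting algorithms:

from math import acos, sqrt
from sympy import primerange

def theta_p(a, b, p):
    # Brute-force count of points on y^2 = x^3 + a*x + b over F_p
    # (affine points plus the point at infinity).
    roots = [0] * p
    for y in range(p):
        roots[y * y % p] += 1
    Np = 1 + sum(roots[(x**3 + a * x + b) % p] for x in range(p))
    ap = p + 1 - Np
    return acos(ap / (2 * sqrt(p)))

a, b = 1, 1   # the curve y^2 = x^3 + x + 1
for p in primerange(5, 60):
    if (4 * a**3 + 27 * b**2) % p == 0:
        continue               # skip primes of bad reduction
    print(p, round(theta_p(a, b, p), 3))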
Details.
By Hasse's theorem on elliptic curves, the ratio
formula_7
is between -1 and 1. Thus it can be expressed as cos "θ" for an angle "θ"; in geometric terms there are two eigenvalues accounting for the remainder and, with the denominator as given, they are complex conjugates of absolute value 1. The "Sato–Tate conjecture", when "E" does not have complex multiplication, states that the probability measure of "θ" is proportional to
formula_8
This is due to Mikio Sato and John Tate (independently, and around 1960, published somewhat later).
Proof.
In 2008, Clozel, Harris, Shepherd-Barron, and Taylor published a proof of the Sato–Tate conjecture, in a series of three joint papers, for elliptic curves over totally real fields satisfying a certain condition (having multiplicative reduction at some prime).
Further results are conditional on improved forms of the Arthur–Selberg trace formula. Harris has a conditional proof of a result for the product of two elliptic curves (not isogenous) following from such a hypothetical trace formula. In 2011, Barnet-Lamb, Geraghty, Harris, and Taylor proved a generalized version of the Sato–Tate conjecture for an arbitrary non-CM holomorphic modular form of weight greater than or equal to two, by improving the potential modularity results of previous papers. The prior issues involved with the trace formula were solved by Michael Harris, and Sug Woo Shin.
In 2015, Richard Taylor was awarded the Breakthrough Prize in Mathematics "for numerous breakthrough results in (...) the Sato–Tate conjecture."
Generalisations.
There are generalisations, involving the distribution of Frobenius elements in Galois groups involved in the Galois representations on étale cohomology. In particular there is a conjectural theory for curves of genus "n" > 1.
Under the random matrix model developed by Nick Katz and Peter Sarnak, there is a conjectural correspondence between (unitarized) characteristic polynomials of Frobenius elements and conjugacy classes in the compact Lie group USp(2"n") = Sp("n"). The Haar measure on USp(2"n") then gives the conjectured distribution, and the classical case is USp(2) = SU(2).
Refinements.
There are also more refined statements. The Lang–Trotter conjecture (1976) of Serge Lang and Hale Trotter states the asymptotic number of primes "p" with a given value of "a""p", the trace of Frobenius that appears in the formula. For the typical case (no complex multiplication, trace ≠ 0) their formula states that the number of "p" up to "X" is asymptotically
formula_9
with a specified constant "c". Neal Koblitz (1988) provided detailed conjectures for the case of a prime number "q" of points on "E""p", motivated by elliptic curve cryptography.
In 1999, Chantal David and Francesco Pappalardi proved an averaged version of the Lang–Trotter conjecture.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "N_p/p = 1 + \\mathrm{O}(1/\\!\\sqrt{p})\\ "
},
{
"math_id": 1,
"text": "p\\to\\infty"
},
{
"math_id": 2,
"text": " p+1-N_p=2\\sqrt{p}\\cos\\theta_p ~~ (0\\leq \\theta_p \\leq \\pi)."
},
{
"math_id": 3,
"text": " \\alpha "
},
{
"math_id": 4,
"text": " \\beta "
},
{
"math_id": 5,
"text": " 0\\leq \\alpha < \\beta \\leq \\pi, "
},
{
"math_id": 6,
"text": "\\lim_{N\\to\\infty}\\frac{\\#\\{p\\leq N:\\alpha\\leq \\theta_p \\leq \\beta\\}}\n{\\#\\{p\\leq N\\}}=\\frac{2}{\\pi} \\int_\\alpha^\\beta \\sin^2 \\theta \\, d\\theta = \\frac{1}{\\pi}\\left(\\beta-\\alpha+\\sin(\\alpha)\\cos(\\alpha)-\\sin(\\beta)\\cos(\\beta)\\right)"
},
{
"math_id": 7,
"text": "\\frac{(p + 1)-N_p}{2\\sqrt{p}}=\\frac{a_p}{2\\sqrt{p}} "
},
{
"math_id": 8,
"text": "\\sin^2 \\theta \\, d\\theta."
},
{
"math_id": 9,
"text": "c \\sqrt{X}/ \\log X\\ "
}
] |
https://en.wikipedia.org/wiki?curid=597837
|
5978424
|
Kernel principal component analysis
|
Multivariate statistical technique
In the field of multivariate statistics, kernel principal component analysis (kernel PCA)
is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space.
Background: Linear PCA.
Recall that conventional PCA operates on zero-centered data; that is,
formula_0,
where formula_1 is one of the formula_2 multivariate observations.
It operates by diagonalizing the covariance matrix,
formula_3
in other words, it gives an eigendecomposition of the covariance matrix:
formula_4
which can be rewritten as
formula_5.
Introduction of the Kernel to PCA.
To understand the utility of kernel PCA, particularly for clustering, observe that, while "N" points cannot, in general, be linearly separated in formula_6 dimensions, they can almost always be linearly separated in formula_7 dimensions. That is, given "N" points, formula_1, if we map them to an "N"-dimensional space with
formula_8 where formula_9,
it is easy to construct a hyperplane that divides the points into arbitrary clusters. Of course, this formula_10 creates linearly independent vectors, so there is no covariance on which to perform eigendecomposition "explicitly" as we would in linear PCA.
Instead, in kernel PCA, a non-trivial, arbitrary formula_10 function is 'chosen' that is never calculated explicitly, allowing the possibility to use very-high-dimensional formula_10's if we never have to actually evaluate the data in that space. Since we generally try to avoid working in the formula_10-space, which we will call the 'feature space', we can create the N-by-N kernel
formula_11
which represents the inner product space (see Gramian matrix) of the otherwise intractable feature space. The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve the eigenvectors and eigenvalues of the covariance matrix in the formula_12-space (see Kernel trick). The N-elements in each column of "K" represent the dot product of one point of the transformed data with respect to all the transformed points (N points). Some well-known kernels are shown in the example below.
Because we are never working directly in the feature space, the kernel-formulation of PCA is restricted in that it computes not the principal components themselves, but the projections of our data onto those components. To evaluate the projection from a point in the feature space formula_12 onto the kth principal component formula_13 (where superscript k means the component k, not powers of k)
formula_14
We note that formula_15 denotes the dot product, which is simply an element of the kernel formula_16. All that remains, then, is to calculate and normalize the formula_17, which can be done by solving the eigenvector equation
formula_18
where formula_2 is the number of data points in the set, and formula_19 and formula_20 are the eigenvalues and eigenvectors of formula_16. Then to normalize the eigenvectors formula_21, we require that
formula_22
Care must be taken regarding the fact that, whether or not formula_23 has zero-mean in its original space, it is not guaranteed to be centered in the feature space (which we never compute explicitly). Since centered data is required to perform an effective principal component analysis, we 'centralize' formula_16 to become formula_24
formula_25
where formula_26 denotes a N-by-N matrix for which each element takes value formula_27. We use formula_24 to perform the kernel PCA algorithm described above.
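The steps above translate almost directly into code. The following NumPy sketch is a minimal illustration rather than a reference implementation; the choice of a Gaussian kernel and the small regularization constant are assumptions made here. It builds the kernel matrix, centers it as in the formula above, solves the eigenproblem, normalizes the eigenvectors, and returns the projections of the training points onto the leading components.

```python
import numpy as np

def kernel_pca(X, n_components, sigma=1.0):
    """Minimal kernel PCA sketch with a Gaussian (RBF) kernel.

    X is an (N, d) array of observations; sigma is the kernel width.
    Returns the projections of the N training points onto the top
    principal components in feature space.
    """
    N = X.shape[0]
    # Kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
    sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq_dists / (2.0 * sigma ** 2))
    # Centering: K' = K - 1_N K - K 1_N + 1_N K 1_N, with (1_N)_ij = 1/N
    one_n = np.full((N, N), 1.0 / N)
    K_c = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Solve the eigenproblem for K'; eigh returns eigenvalues in ascending order
    eigvals, eigvecs = np.linalg.eigh(K_c)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Normalize each eigenvector a^k so that a^k . (K' a^k) = 1, enforcing (V^k)^T V^k = 1
    # (the small floor avoids dividing by near-zero eigenvalues)
    a = eigvecs[:, :n_components] / np.sqrt(np.maximum(eigvals[:n_components], 1e-12))
    # Projections of the training points: (V^k)^T Phi(x_j) = sum_i a^k_i K'(i, j)
    return K_c @ a
```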
One caveat of kernel PCA should be illustrated here. In linear PCA, we can use the eigenvalues to rank the eigenvectors based on how much of the variation of the data is captured by each principal component. This is useful for data dimensionality reduction and it could also be applied to KPCA. However, in practice there are cases in which all variations of the data are the same, which is typically caused by a poor choice of kernel scale.
Large datasets.
In practice, a large data set leads to a large K, and storing K may become a problem. One way to deal with this is to perform clustering on the dataset, and populate the kernel with the means of those clusters. Since even this method may yield a relatively large K, it is common to compute only the top P eigenvalues and their corresponding eigenvectors.
Example.
Consider three concentric clouds of points (shown); we wish to use kernel PCA to identify these groups. The color of the points does not represent information involved in the algorithm, but only shows how the transformation relocates the data points.
First, consider the kernel
formula_28
Applying this to kernel PCA yields the next image.
Now consider a Gaussian kernel:
formula_29
That is, this kernel is a measure of closeness, equal to 1 when the points coincide and approaching 0 as the points move infinitely far apart.
Note in particular that the first principal component is enough to distinguish the three different groups, which is impossible using only linear PCA, because linear PCA operates only in the given (in this case two-dimensional) space, in which these concentric point clouds are not linearly separable.
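Continuing the sketch above, a hypothetical usage on three concentric rings of points, mimicking the example discussed here, might look as follows; the radii, noise level and kernel width are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
radii = np.repeat([1.0, 3.0, 5.0], 100)                 # three rings, 100 points each
theta = rng.uniform(0.0, 2.0 * np.pi, radii.size)
X = np.column_stack([radii * np.cos(theta), radii * np.sin(theta)])
X += 0.1 * rng.normal(size=X.shape)                     # small noise around each ring

# With a suitable kernel width, the first component alone separates the three rings.
Z = kernel_pca(X, n_components=2, sigma=1.0)
```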
Applications.
Kernel PCA has been demonstrated to be useful for novelty detection and image de-noising.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\frac{1}{N}\\sum_{i=1}^N \\mathbf{x}_i = \\mathbf{0}"
},
{
"math_id": 1,
"text": "\\mathbf{x}_i"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "C=\\frac{1}{N}\\sum_{i=1}^N \\mathbf{x}_i\\mathbf{x}_i^\\top"
},
{
"math_id": 4,
"text": "\\lambda \\mathbf{v}=C\\mathbf{v}"
},
{
"math_id": 5,
"text": "\\lambda \\mathbf{x}_i^\\top \\mathbf{v}=\\mathbf{x}_i^\\top C\\mathbf{v} \\quad \\textrm{for}~i=1,\\ldots,N"
},
{
"math_id": 6,
"text": "d < N"
},
{
"math_id": 7,
"text": "d \\geq N"
},
{
"math_id": 8,
"text": "\\Phi(\\mathbf{x}_i)"
},
{
"math_id": 9,
"text": "\\Phi : \\mathbb{R}^d \\to \\mathbb{R}^N"
},
{
"math_id": 10,
"text": "\\Phi"
},
{
"math_id": 11,
"text": "K = k(\\mathbf{x},\\mathbf{y}) = (\\Phi(\\mathbf{x}),\\Phi(\\mathbf{y})) = \\Phi(\\mathbf{x})^T\\Phi(\\mathbf{y})"
},
{
"math_id": 12,
"text": "\\Phi(\\mathbf{x})"
},
{
"math_id": 13,
"text": "V^k"
},
{
"math_id": 14,
"text": "{V^k}^T\\Phi(\\mathbf{x}) =\\left(\\sum_{i=1}^N \\mathbf{a}^k_i\\Phi(\\mathbf{x}_i)\\right)^T\\Phi(\\mathbf{x}) "
},
{
"math_id": 15,
"text": "\\Phi(\\mathbf{x}_i)^T\\Phi(\\mathbf{x})"
},
{
"math_id": 16,
"text": "K"
},
{
"math_id": 17,
"text": "\\mathbf{a}_i^k"
},
{
"math_id": 18,
"text": "N \\lambda\\mathbf{a} =K\\mathbf{a}"
},
{
"math_id": 19,
"text": "\\lambda"
},
{
"math_id": 20,
"text": "\\mathbf{a}"
},
{
"math_id": 21,
"text": "\\mathbf{a}^k"
},
{
"math_id": 22,
"text": "1 = (V^k)^T V^k"
},
{
"math_id": 23,
"text": "x"
},
{
"math_id": 24,
"text": "K'"
},
{
"math_id": 25,
"text": "K' = K - \\mathbf{1_N} K - K \\mathbf{1_N} + \\mathbf{1_N} K \\mathbf{1_N}"
},
{
"math_id": 26,
"text": "\\mathbf{1_N}"
},
{
"math_id": 27,
"text": "1/N"
},
{
"math_id": 28,
"text": "k(\\boldsymbol{x},\\boldsymbol{y}) = (\\boldsymbol{x}^\\mathrm{T}\\boldsymbol{y} + 1)^2"
},
{
"math_id": 29,
"text": "k(\\boldsymbol{x},\\boldsymbol{y}) = e^\\frac{-||\\boldsymbol{x} - \\boldsymbol{y}||^2}{2\\sigma^2},"
}
] |
https://en.wikipedia.org/wiki?curid=5978424
|
5978615
|
Time-evolving block decimation
|
Quantum many-body simulation algorithm
The time-evolving block decimation (TEBD) algorithm is a numerical scheme used to simulate one-dimensional quantum many-body systems, characterized by at most nearest-neighbour interactions. It is dubbed Time-evolving Block Decimation because it dynamically identifies the relevant low-dimensional Hilbert subspaces of an exponentially larger original Hilbert space. The algorithm, based on the Matrix Product States formalism, is highly efficient when the amount of entanglement in the system is limited, a requirement fulfilled by a large class of quantum many-body systems in one dimension.
Introduction.
Considering the inherent difficulties of simulating general quantum many-body systems, the exponential increase in parameters with the size of the system, and correspondingly, the high computational costs, one solution is to look for numerical methods that deal with special cases, where one can profit from the physics of the system. The brute-force approach, which deals directly with all the parameters used to fully characterize a quantum many-body system, is seriously impeded by the exponential growth with system size of the number of variables needed for the simulation, which leads, in the best cases, to unreasonably long computation times and extended use of memory. To get around this problem a number of methods have been developed and put into practice over time, one of the most successful being the quantum Monte Carlo method (QMC). The density matrix renormalization group (DMRG) method is, next to QMC, a very reliable method, with an expanding community of users and an increasing number of applications to physical systems.
When the first quantum computer is plugged in and functioning, the prospects for the field of computational physics will look rather promising, but until that day one has to restrict oneself to the tools offered by classical computers. While experimental physicists are putting a lot of effort into trying to build the first quantum computer, theoretical physicists are searching, in the field of quantum information theory (QIT), for genuine quantum algorithms, appropriate for problems that perform badly on a classical computer but are fast and successful on a quantum one. The search for such algorithms is still ongoing, the best known (and almost the only ones found) being Shor's algorithm for factoring large numbers and Grover's search algorithm.
In the field of QIT one has to identify the primary resources necessary for genuine quantum computation. Such resources may be responsible for the speedup of quantum over classical computation, and identifying them also means identifying systems that can be simulated reasonably efficiently on a classical computer. One such resource is quantum entanglement; hence, it is possible to establish a lower bound on the entanglement needed for quantum computational speedups.
Guifré Vidal, then at the Institute for Quantum Information, Caltech, proposed a scheme useful for simulating a certain category of quantum systems. He asserts that "any quantum computation with pure states can be efficiently simulated with a classical computer provided the amount of entanglement involved is sufficiently restricted".
This happens to be the case with generic Hamiltonians displaying local interactions, as for example, Hubbard-like Hamiltonians. The method exhibits a low-degree polynomial behavior in the increase of computational time with respect to the amount of entanglement present in the system. The algorithm is based on a scheme that exploits the fact that in these one-dimensional systems the eigenvalues of the reduced density matrix on a bipartite split of the system are exponentially decaying, thus allowing us to work in a re-sized space spanned by the eigenvectors corresponding to the eigenvalues we selected.
One can also estimate the amount of computational resources required for the simulation of a quantum system on a classical computer, knowing how the entanglement contained in the system scales with the size of the system. The classically (and quantum, as well) feasible simulations are those that involve systems only lightly entangled—the strongly entangled ones being, on the other hand, good candidates only for genuine quantum computations.
The numerical method is efficient in simulating real-time dynamics or calculations of ground states using imaginary-time evolution or isentropic interpolations between a target Hamiltonian and a Hamiltonian with an already-known ground state. The computational time scales linearly with the system size, hence many-particles systems in 1D can be investigated.
A useful feature of the TEBD algorithm is that it can be reliably employed for time evolution simulations of time-dependent Hamiltonians, describing systems that can be realized with cold atoms in optical lattices, or systems far from equilibrium in quantum transport. From this point of view, TEBD had a certain advantage over DMRG, a very powerful technique that was, until recently, not very well suited for simulating time evolutions. With the Matrix Product States formalism being at the mathematical heart of DMRG, the TEBD scheme was adopted by the DMRG community, thus giving birth to the time-dependent DMRG, or t-DMRG for short.
Around the same time, other groups developed similar approaches in which quantum information plays a predominant role, for example in DMRG implementations for periodic boundary conditions, and for studying mixed-state dynamics in one-dimensional quantum lattice systems.<ref name="cond-mat/0406426"> </ref><ref name="cond-mat/0406440"></ref> Those last approaches actually provide a formalism that is more general than the original TEBD approach, as they also allow one to deal with evolutions by matrix product operators; this enables the simulation of nontrivial non-infinitesimal evolutions, as opposed to the TEBD case, and is a crucial ingredient for dealing with higher-dimensional analogues of matrix product states.
The decomposition of state.
Introducing the decomposition of State.
Consider a chain of "N" qubits, described by the state formula_0. The most natural way of describing formula_1 would be to use the formula_2-dimensional local basis formula_3:
formula_4
where "M" is the on-site dimension.
The trick of TEBD is to re-write the coefficients formula_5:
formula_6
This form, known as a Matrix product state, simplifies the calculations greatly.
To understand why, one can look at the Schmidt decomposition of a state, which uses singular value decomposition to express a state with limited entanglement more simply.
The Schmidt decomposition.
Consider the state of a bipartite system formula_7. Every such state formula_8 can be represented in an appropriately chosen basis as:
formula_9
where formula_10 are formed with vectors formula_11 that make an orthonormal basis in formula_12 and, correspondingly, vectors formula_13, which form an orthonormal basis in formula_14, with the coefficients formula_15 being real and positive, formula_16. This is called the Schmidt decomposition (SD) of a state. In general the summation goes up to formula_17. The Schmidt rank of a bipartite split is given by the number of non-zero Schmidt coefficients. If the Schmidt rank is one, the split is characterized by a product state. The vectors of the SD are determined up to a phase and the eigenvalues and the Schmidt rank are unique.
For example, the two-qubit state:
formula_18
has the following SD:
formula_19
with
formula_20
On the other hand, the state:
formula_21
is a product state:
formula_22
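Numerically, the Schmidt decomposition is just a singular value decomposition of the state reshaped into a matrix. The following NumPy sketch is an illustration only (the index convention and helper name are choices made here); it reproduces the two-qubit example above.

```python
import numpy as np

def schmidt_decomposition(psi, dim_a, dim_b):
    """Schmidt decomposition of a bipartite pure state via SVD.

    psi is a vector of length dim_a * dim_b in the product basis
    |i_A> x |j_B> (row-major ordering).  Returns the Schmidt
    coefficients, the left Schmidt vectors (columns of U) and the
    right Schmidt vectors (rows of Vh).
    """
    M = psi.reshape(dim_a, dim_b)
    U, s, Vh = np.linalg.svd(M, full_matrices=False)
    return s, U, Vh

# Two-qubit example from the text: (|00> + sqrt(3)|01> + sqrt(3)|10> + |11>) / (2 sqrt 2)
psi = np.array([1.0, np.sqrt(3), np.sqrt(3), 1.0]) / (2.0 * np.sqrt(2))
coeffs, U, Vh = schmidt_decomposition(psi, 2, 2)
# coeffs is approximately [(sqrt(3)+1)/(2 sqrt 2), (sqrt(3)-1)/(2 sqrt 2)], as stated above
```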
Building the decomposition of state.
At this point we know enough to see how the decomposition (call it "D") is built explicitly.
Consider the bipartite splitting formula_23. The SD has the coefficients formula_24 and eigenvectors formula_25.
By expanding the formula_26's in the local basis, one can write:
formula_27
The process can be decomposed in three steps, iterated for each bond (and, correspondingly, SD) in the chain:
Step 1: express the formula_28's in a local basis for qubit 2:
formula_29
The vectors formula_30 are not necessarily normalized.
Step 2: write each vector formula_30 in terms of the at most (Vidal's emphasis) formula_31 Schmidt vectors formula_32 and, correspondingly, coefficients formula_33:
formula_34
Step 3: make the substitutions and obtain:
formula_35
Repeating steps 1 to 3, one can construct the whole decomposition of state "D". The last formula_36's are a special case, like the first ones, expressing the right-hand Schmidt vectors at the formula_37 bond in terms of the local basis at the formula_38 lattice site. It is straightforward to obtain the Schmidt decomposition at the formula_39 bond, i.e. formula_40, from "D".
The Schmidt eigenvalues are given explicitly in "D":
formula_41
The Schmidt eigenvectors are simply:
formula_42
and
formula_43
Rationale.
Now, looking at "D", instead of the formula_2 initial terms there are formula_44. Apparently this is just a fancy way of rewriting the coefficients formula_5, but in fact there is more to it than that. Assuming that "N" is even, the Schmidt rank formula_31 for a bipartite cut in the middle of the chain can have a maximal value of formula_45; in this case we end up with at least formula_46 coefficients (considering only the formula_47 ones), even more than the initial formula_2. In truth, the decomposition "D" is useful when dealing with systems that exhibit a low degree of entanglement, which fortunately is the case with many 1D systems, where the Schmidt coefficients of the ground state decay in an exponential manner with formula_48:
formula_49
Therefore, it is possible to take into account only some of the Schmidt coefficients (namely the largest ones), dropping the others and consequently normalizing again the state:
formula_50
where formula_51 is the number of kept Schmidt coefficients.
Let us step away from this abstract picture and consider a concrete example, to emphasize the advantage of making this decomposition. Consider, for instance, the case of 50 fermions in a ferromagnetic chain, for the sake of simplicity. A dimension of, say, 12 for formula_51 would be a reasonable choice, keeping the discarded eigenvalues at formula_52% of the total, as shown by numerical studies, meaning roughly formula_53 coefficients, as compared to the original formula_54 ones.
Even if the Schmidt eigenvalues don't have this exponential decay, but they show an algebraic decrease, we can still use "D" to describe our state formula_55. The number of coefficients to account for a faithful description of formula_55 may be sensibly larger, but still within reach of eventual numerical simulations.
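The truncation step can be sketched in a few lines of NumPy. The function below is a minimal illustration, with a name chosen here for convenience: it keeps the formula_51 largest Schmidt coefficients, renormalizes them as in the formula above, and also returns the discarded weight that reappears later in the error analysis.

```python
import numpy as np

def truncate_schmidt(lambdas, chi_c):
    """Keep the chi_c largest Schmidt coefficients and renormalise them.

    Returns the renormalised coefficients and the discarded weight
    (the sum of the squares of the dropped coefficients).
    """
    lambdas = np.asarray(lambdas, dtype=float)
    order = np.argsort(lambdas)[::-1]          # sort in decreasing order
    kept = lambdas[order[:chi_c]]
    discarded_weight = 1.0 - float(np.sum(kept ** 2))
    return kept / np.sqrt(np.sum(kept ** 2)), discarded_weight
```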
The update of the decomposition.
One can now proceed to investigate the behaviour of the decomposition "D" when acted upon with one-qubit gates (OQG) and two-qubit gates (TQG) acting on neighbouring qubits. Instead of updating all the formula_2 coefficients formula_5, we will restrict ourselves to a number of operations that grows with formula_31 only as a low-degree polynomial, thus saving computational time.
One-qubit gates acting on qubit "k".
The OQGs affect only the qubit they act upon: the update of the state formula_56 after a unitary operator at qubit "k" does not modify the Schmidt eigenvalues or vectors on the left, hence the formula_57's, or on the right, hence the formula_58's. The only formula_36's that will be updated are the formula_59's (requiring at most formula_60 operations), as
formula_61
Two-qubit gates acting on qubits "k, k"+1.
The changes required to update the formula_36's and the formula_62's, following a unitary operation "V" on qubits "k", "k"+1, concern only formula_59, and formula_58.
They require on the order of formula_63 basic operations.
Following Vidal's original approach, formula_56 can be regarded as belonging to only four subsystems:
formula_64
The subspace "J" is spanned by the eigenvectors of the reduced density matrix formula_65:
formula_66
In a similar way, the subspace "K" is spanned by the eigenvectors of the reduced density matrix:
formula_67
The subspaces formula_68 and formula_69 belong to the qubits "k" and "k" + 1.
Using this basis and the decomposition "D", formula_56 can be written as:
formula_70
Using the same reasoning as for the OQG, when applying the TQG "V" to qubits "k", "k" + 1 one needs only to update formula_71, formula_62 and formula_72
We can write formula_73 as:
formula_74
where
formula_75
To find out the new decomposition, the new formula_62's at the bond "k" and their corresponding Schmidt eigenvectors must be computed and expressed in terms of the formula_76's of the decomposition "D". The reduced density matrix formula_77 is therefore diagonalized:
formula_78
The square roots of its eigenvalues are the new formula_62's.
Expressing the eigenvectors of the diagonalized matrix in the basis: formula_79 the formula_80's are obtained as well:
formula_81
From the left-hand eigenvectors,
formula_82
after expressing them in the basis formula_83, the formula_84's are:
formula_85
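Putting the pieces together, the two-site update amounts to contracting the tensors into the coefficient tensor formula_75, applying the gate, and performing a truncated singular value decomposition. The following NumPy sketch is only an illustration: the index conventions, the tensor shapes, and the division by the outer Schmidt coefficients (which a production code would guard against numerically) are assumptions made here, not part of the original formulation.

```python
import numpy as np

def apply_two_site_gate(V, lam_left, gamma_C, lam_mid, gamma_D, lam_right,
                        chi_max, eps=1e-12):
    """Sketch of the TEBD two-site update described above.

    Assumed conventions: gamma_C has shape (chi_l, M, chi_m), gamma_D has
    shape (chi_m, M, chi_r), the lambdas are 1-D arrays of Schmidt
    coefficients, and V is the (M*M, M*M) two-site gate.
    Returns the updated (gamma_C, lam_mid, gamma_D).
    """
    chi_l, M, _ = gamma_C.shape
    chi_r = gamma_D.shape[2]
    # psi_{a m n c} = lam_left_a Gamma_C[a,m,b] lam_mid_b Gamma_D[b,n,c] lam_right_c
    psi = np.einsum('a,amb,b,bnc,c->amnc',
                    lam_left, gamma_C, lam_mid, gamma_D, lam_right)
    # Theta^{ij}_{a c} = sum_{m n} V^{ij}_{mn} psi_{a m n c}
    theta = np.einsum('ijmn,amnc->aijc', V.reshape(M, M, M, M), psi)
    # SVD over the bipartition (a i | j c), then truncate and renormalise
    theta_mat = theta.reshape(chi_l * M, M * chi_r)
    U, s, Vh = np.linalg.svd(theta_mat, full_matrices=False)
    chi_new = max(1, min(chi_max, int(np.sum(s > eps))))
    s = s[:chi_new] / np.linalg.norm(s[:chi_new])
    U, Vh = U[:, :chi_new], Vh[:chi_new, :]
    # Strip the outer lambdas back off to recover the Gamma tensors
    # (dividing by tiny Schmidt values is ill-conditioned; a real code guards this)
    gamma_C_new = U.reshape(chi_l, M, chi_new) / lam_left[:, None, None]
    gamma_D_new = Vh.reshape(chi_new, M, chi_r) / lam_right[None, None, :]
    return gamma_C_new, s, gamma_D_new
```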
The computational cost.
The dimension of the largest tensors in "D" is of the order formula_86; when constructing the formula_87 one makes the summation over formula_88, formula_89 and formula_90 for each formula_91, adding up to a total of formula_92 operations. The same holds for the formation of the elements formula_93, or for computing the left-hand eigenvectors formula_94, a maximum of formula_95, respectively formula_96 basic operations. In the case of qubits, formula_97, hence its role is not very relevant for the order of magnitude of the number of basic operations, but in the case when the on-site dimension is higher than two it has a rather decisive contribution.
The numerical simulation.
The numerical simulation is targeting (possibly time-dependent) Hamiltonians of a system of formula_98 particles arranged in a line, which are composed of arbitrary OQGs and TQGs:
formula_99
It is useful to decompose formula_100 as a sum of two possibly non-commuting terms, formula_101, where
formula_102formula_103
Any two terms within "F" commute with each other, and likewise for "G": formula_104, formula_105
This is done to make the Suzuki–Trotter expansion (ST) of the exponential operator, named after Masuo Suzuki and Hale Trotter.
The Suzuki–Trotter expansion.
The Suzuki–Trotter expansion of the first order (ST1) represents a general way of writing exponential operators:
formula_106
or, equivalently
formula_107
The correction term vanishes in the limit formula_108
For simulations of quantum dynamics it is useful to use operators that are unitary, conserving the norm (unlike power series expansions), and that is where the Suzuki–Trotter expansion comes in. In problems of quantum dynamics the unitarity of the operators in the ST expansion proves quite practical, since the error tends to concentrate in the overall phase, allowing us to faithfully compute expectation values and conserved quantities. Because the ST expansion conserves the phase-space volume, it is also called a symplectic integrator.
The trick of the second-order expansion (ST2) is to write the unitary operators formula_109 as:
formula_110
where formula_111. The number formula_112 is called the Trotter number.
Simulation of the time-evolution.
The operators formula_113, formula_114 are easy to express, as:
formula_115
formula_116
since any two operators formula_117, formula_118 (respectively, formula_119, formula_120) commute for formula_121, a first-order ST expansion keeps only the product of the exponentials, and the approximation is, in this case, exact.
The time-evolution can be made according to
formula_122
For each "time-step" formula_123, the gates formula_124 are applied successively at all even sites, then formula_125 at the odd ones, and formula_124 again at the even ones; this is basically a sequence of TQGs, and it has been explained above how to update the decomposition formula_126 when applying them.
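A sketch of one full second-order step, relying on the hypothetical "apply_two_site_gate" function from the earlier sketch, is shown below. The storage convention (a list of formula_36 tensors and a list of Schmidt vectors with trivial boundary entries) and the use of "scipy.linalg.expm" to exponentiate each two-site Hamiltonian term are assumptions made purely for illustration.

```python
import numpy as np
from scipy.linalg import expm

def trotter_step_st2(gammas, lambdas, h_two_site, dt, chi_max):
    """One second-order Trotter step, applied in place.

    Assumed layout: gammas[k] has shape (len(lambdas[k]), M, len(lambdas[k+1]));
    lambdas has N + 1 entries, with lambdas[0] = lambdas[N] = np.array([1.0]);
    h_two_site[k] is the (M*M, M*M) Hamiltonian term acting on sites k, k+1.
    F collects the even bonds and G the odd ones, following the split above.
    """
    def sweep(bonds, tau):
        for k in bonds:
            gate = expm(-1j * tau * h_two_site[k])
            gammas[k], lambdas[k + 1], gammas[k + 1] = apply_two_site_gate(
                gate, lambdas[k], gammas[k], lambdas[k + 1],
                gammas[k + 1], lambdas[k + 2], chi_max)

    n_bonds = len(gammas) - 1
    even = range(0, n_bonds, 2)
    odd = range(1, n_bonds, 2)
    sweep(even, dt / 2)   # exp(-i dt F / 2)
    sweep(odd, dt)        # exp(-i dt G)
    sweep(even, dt / 2)   # exp(-i dt F / 2)
```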
Our goal is to make the time evolution of a state formula_127 for a time T, towards the state formula_128 using the n-particle Hamiltonian formula_129.
It is rather troublesome, if at all possible, to construct the decomposition formula_126 for an arbitrary n-particle state, since this would mean one has to compute the Schmidt decomposition at each bond, arrange the Schmidt eigenvalues in decreasing order and choose the first formula_51 of them together with the appropriate Schmidt eigenvectors. Note that this would imply diagonalizing rather large reduced density matrices, which, depending on the system one has to simulate, might be a task beyond our reach and patience.
Instead, one can try to do the following:
Error sources.
The errors in the simulation are resulting from the Suzuki–Trotter approximation and the involved truncation of the Hilbert space.
Errors coming from the Suzuki–Trotter expansion.
In the case of a Trotter approximation of formula_130 order, the error is of order formula_131. Taking into account formula_132 steps, the error after the time T is:
formula_133
The unapproximated state formula_134 is:
formula_135
where formula_136 is the state kept after the Trotter expansion and formula_137 accounts for the part that is neglected when doing the expansion.
The total error scales with time formula_138 as:
formula_139
The Trotter error is independent of the dimension of the chain.
Errors coming from the truncation of the Hilbert space.
Considering the errors arising from the truncation of the Hilbert space comprised in the decomposition "D", they are twofold.
First, as we have seen above, the smallest contributions to the Schmidt spectrum are left away, the state being faithfully represented up to:
formula_140
where formula_141 is the sum of all the discarded eigenvalues of the reduced density matrix, at the bond formula_142.
The state formula_56 is, at a given bond formula_142, described by the Schmidt decomposition:
formula_143
where
formula_144
is the state kept after the truncation and
formula_145
is the state formed by the eigenfunctions corresponding to the smallest, irrelevant Schmidt coefficients, which are neglected.
Now, formula_146 because they are spanned by vectors corresponding to orthogonal spaces. Using the same argument as for the Trotter expansion, the error after the truncation is:
formula_147
After moving to the next bond, the state is, similarly:
formula_148
The error, after the second truncation, is:
formula_149
and so on, as we move from bond to bond.
The second error source involved in the decomposition formula_150 is more subtle and requires a little calculation.
As we calculated before, the normalization constant after making the truncation at bond formula_151 formula_152 is:
formula_153
Now let us go to the bond formula_154 and calculate the norm of the right-hand Schmidt vectors formula_155; taking into account the full Schmidt dimension, the norm is:
formula_156
where formula_157.
Taking into account the truncated space, the norm is:
formula_158
Taking the difference, formula_159, we get:
formula_160
Hence, when constructing the reduced density matrix, the trace of the matrix is multiplied by the factor:
formula_161
The total truncation error.
The total truncation error, considering both sources, is upper bounded by:
formula_162
When using the Trotter expansion, we do not move from bond to bond, but between bonds of the same parity; moreover, for the ST2, we make one sweep of the even bonds and two of the odd ones. Nevertheless, the calculation presented above still holds. The error is evaluated by successively multiplying with the normalization constant, each time we build the reduced density matrix and select its relevant eigenvalues.
"Adaptive" Schmidt dimension.
One thing that can save a lot of computational time without loss of accuracy is to use a different Schmidt dimension for each bond instead of a fixed one for all bonds, keeping only the necessary number of relevant coefficients, as usual. For example, at the first bond, in the case of qubits, the Schmidt dimension is just two. Hence, at the first bond, instead of needlessly diagonalizing, say, 10 by 10 or 20 by 20 matrices, we can restrict ourselves to ordinary 2 by 2 ones, making the algorithm generally faster. Alternatively, one can set a threshold for the eigenvalues of the SD, keeping only those that are above it.
TEBD also offers the possibility of straightforward parallelization due to the factorization of the exponential time-evolution operator using the Suzuki–Trotter expansion. A parallel-TEBD has the same mathematics as its non-parallelized counterpart, the only difference is in the numerical implementation.
|
[
{
"math_id": 0,
"text": "| \\Psi \\rangle \\in H^{{\\otimes} N }"
},
{
"math_id": 1,
"text": "| \\Psi \\rangle"
},
{
"math_id": 2,
"text": "M^N"
},
{
"math_id": 3,
"text": "| i_1,i_2,..,i_{N-1},i_N \\rangle"
},
{
"math_id": 4,
"text": "| \\Psi \\rangle=\\sum\\limits_{i=1}^{M}c_{i_1i_2..i_N} | {i_1,i_2,..,i_{N-1},i_N} \\rangle"
},
{
"math_id": 5,
"text": "c_{i_1i_2..i_N}"
},
{
"math_id": 6,
"text": " c_{i_1i_2..i_N}=\\sum\\limits_{\\alpha_1,..,\\alpha_{N-1}=0}^{\\chi}\\Gamma^{[1]i_1}_{\\alpha_1}\\lambda^{[1]}_{\\alpha_1}\\Gamma^{[2]i_2}_{\\alpha_1\\alpha_2}\\lambda^{[2]}_{\\alpha_2}\\Gamma^{[3]i_3}_{\\alpha_2\\alpha_3}\\lambda^{[3]}_{\\alpha_3}\\cdot..\\cdot\\Gamma^{[{N-1}]i_{N-1}}_{\\alpha_{N-2}\\alpha_{N-1}}\\lambda^{[N-1]}_{\\alpha_{N-1}}\\Gamma^{[N]i_N}_{\\alpha_{N-1}}"
},
{
"math_id": 7,
"text": "\\vert \\Psi \\rangle \\in {H_A \\otimes H_B}"
},
{
"math_id": 8,
"text": "|{\\Psi}\\rangle"
},
{
"math_id": 9,
"text": "\\left\\vert \\Psi \\right\\rangle = \\sum\\limits_{i=1}^{M_{A|B}} a_i \\left\\vert{\\Phi^A_i \\Phi^B_i}\\right\\rangle"
},
{
"math_id": 10,
"text": "|{\\Phi^A_i \\Phi^B_i}\\rangle=| {\\Phi^A_i}\\rangle\\otimes | {\\Phi^B_i}\\rangle"
},
{
"math_id": 11,
"text": "|{\\Phi^A_i}\\rangle"
},
{
"math_id": 12,
"text": "H_A"
},
{
"math_id": 13,
"text": "|{\\Phi^B_i}\\rangle"
},
{
"math_id": 14,
"text": "{H_B}"
},
{
"math_id": 15,
"text": "a_i"
},
{
"math_id": 16,
"text": "\\sum\\limits_{i=1}^{M_{A|B}}a^2_i = 1"
},
{
"math_id": 17,
"text": "M_{A|B}=\\min(\\dim({{H_A}}),\\dim({{H_B}}))"
},
{
"math_id": 18,
"text": "| {\\Psi}\\rangle=\\frac{1}{2\\sqrt{2}}\\left( | {00}\\rangle + {\\sqrt{3}} | {01}\\rangle + {\\sqrt{3}} | {10}\\rangle + |{11}\\rangle\\right)"
},
{
"math_id": 19,
"text": "\n\\left|{\\Psi}\\right\\rangle = \\frac{\\sqrt{3}+1}{2\\sqrt{2}} \\left|{\\phi^{A}_1\\phi^{B}_1}\\right\\rangle + \\frac{\\sqrt{3}-1}{2\\sqrt{2}} \\left|{\\phi^{A}_2\\phi^{B}_2}\\right\\rangle"
},
{
"math_id": 20,
"text": "|{\\phi^{A}_1}\\rangle=\\frac{1}{\\sqrt{2}}(|{0_{A}}\\rangle+|{1_{A}}\\rangle), \\ \\ |{\\phi^{B}_1}\\rangle=\\frac{1}{\\sqrt{2}}(|{0_{B}}\\rangle+|{1_{B}}\\rangle), \\ \\ |{\\phi^{A}_2}\\rangle=\\frac{1}{\\sqrt{2}}(|{0_{A}}\\rangle-|{1_{A}}\\rangle), \\ \\ |{\\phi^{B}_2}\\rangle=\\frac{1}{\\sqrt{2}}(|{1_{B}}\\rangle-|{0_{B}}\\rangle)"
},
{
"math_id": 21,
"text": "|{\\Phi}\\rangle =\\frac{1}{\\sqrt{3}}|{00}\\rangle + \\frac{1}{\\sqrt{6}}|{01}\\rangle- \\frac{i}{\\sqrt{3}}|{10}\\rangle - \\frac{i}{\\sqrt{6}}|{11}\\rangle"
},
{
"math_id": 22,
"text": "\\left|\\Phi\\right\\rangle = \\left( \\frac{1}{\\sqrt{3}} \\left|0_A\\right\\rangle - \\frac{i}{\\sqrt{3}} \\left|1_A\\right\\rangle \\right) \\otimes \\left( \\left|0_B\\right\\rangle + \\frac{1}{\\sqrt{2}} \\left|1_B\\right\\rangle \\right)"
},
{
"math_id": 23,
"text": "[1]:[2..N]"
},
{
"math_id": 24,
"text": "\\lambda^{[1]}_{{\\alpha}_1}"
},
{
"math_id": 25,
"text": "\\left|{\\Phi^{[1]}_{\\alpha_1}}\\right\\rangle \\left|{ \\Phi^{[2..N]}_{\\alpha_1}}\\right\\rangle"
},
{
"math_id": 26,
"text": "\\left|{\\Phi^{[1]}_{\\alpha_1}}\\right\\rangle"
},
{
"math_id": 27,
"text": "|{\\Psi}\\rangle=\\sum\\limits_{i_1,{\\alpha_1=1}}^{M,\\chi}\\Gamma^{[1]i_1}_{\\alpha_1}\\lambda^{[1]}_{\\alpha_1}|{i_1}\\rangle|{\\Phi^{[2..N]}_{\\alpha_1}}\\rangle\n"
},
{
"math_id": 28,
"text": "|{\\Phi^{[2..N]}_{\\alpha_1}}\\rangle"
},
{
"math_id": 29,
"text": " |{\\Phi^{[2..N]}_{\\alpha_1}}\\rangle=\\sum_{i_2}|{i_2}\\rangle|{\\tau^{[3..N]}_{\\alpha_1i_2}}\\rangle"
},
{
"math_id": 30,
"text": "|{\\tau^{[3..N]}_{\\alpha_1i_2}}\\rangle"
},
{
"math_id": 31,
"text": "\\chi"
},
{
"math_id": 32,
"text": "|{\\Phi^{[3..N]}_{\\alpha_2}}\\rangle"
},
{
"math_id": 33,
"text": "\\lambda^{[2]}_{{\\alpha}_2}"
},
{
"math_id": 34,
"text": "|\\tau^{[3..N]}_{\\alpha_1i_2}\\rangle=\\sum_{\\alpha_2}\\Gamma^{[2]i_2}_{\\alpha_1\\alpha_2}\\lambda^{[2]}_{{\\alpha}_2}|{\\Phi^{[3..N]}_{\\alpha_2}}\\rangle\n"
},
{
"math_id": 35,
"text": "|{\\Psi}\\rangle=\\sum_{i_1,i_2,\\alpha_1,\\alpha_2}\\Gamma^{[1]i_1}_{\\alpha_1}\\lambda^{[1]}_{\\alpha_1}\\Gamma^{[2]i_2}_{\\alpha_1\\alpha_2}\\lambda^{[2]}_{{\\alpha}_2}|{i_1i_2}\\rangle|{\\Phi^{[3..N]}_{\\alpha_2}}\\rangle"
},
{
"math_id": 36,
"text": "\\Gamma"
},
{
"math_id": 37,
"text": "(N-1)^{th}"
},
{
"math_id": 38,
"text": "N^{th}"
},
{
"math_id": 39,
"text": "k^{th}"
},
{
"math_id": 40,
"text": "[1..k]:[k+1..N]"
},
{
"math_id": 41,
"text": "|{\\Psi}\\rangle=\\sum_{\\alpha_k}\\lambda^{[k]}_{{\\alpha}_k}|{\\Phi^{[1..k]}_{\\alpha_k}}\\rangle|{\\Phi^{[k+1..N]}_{\\alpha_k}}\\rangle"
},
{
"math_id": 42,
"text": "|{\\Phi^{[1..k]}_{\\alpha_k}}\\rangle=\\sum_{\\alpha_1,\\alpha_2..\\alpha_{k-1}}\\Gamma^{[1]i_1}_{\\alpha_1}\\lambda^{[1]}_{\\alpha_1}\\cdot\\cdot\\Gamma^{[k]i_k}_{\\alpha_{k-1}\\alpha_k}|{i_1i_2..i_k}\\rangle"
},
{
"math_id": 43,
"text": "|{\\Phi^{[k+1..N]}_{\\alpha_k}}\\rangle=\\sum_{\\alpha_{k+1},\\alpha_{k+2}..\\alpha_{N}}\\Gamma^{[k+1]i_{k+1}}_{\\alpha_k\\alpha_{k+1}}\\lambda^{[k+1]}_{\\alpha_{k+1}}\\cdot\\cdot\\lambda^{N-1}_{\\alpha_{N-1}}\\Gamma^{[N]i_N}_{\\alpha_{N-1}}|{i_{k+1}i_{k+2}..i_N}\\rangle"
},
{
"math_id": 44,
"text": "{\\chi}^2{\\cdot}M(N-2) + 2{\\chi}M + (N-1)\\chi"
},
{
"math_id": 45,
"text": "M^{N/2}"
},
{
"math_id": 46,
"text": "M^{N+1}{\\cdot}(N-2)"
},
{
"math_id": 47,
"text": "{\\chi}^2"
},
{
"math_id": 48,
"text": "\\alpha"
},
{
"math_id": 49,
"text": "\n\\lambda^{[l]}_{{\\alpha}_l}{\\sim}e^{-K\\alpha_l},\\ K>0."
},
{
"math_id": 50,
"text": "|{\\Psi}\\rangle=\\frac{1}{\\sqrt{\\sum\\limits_{{\\alpha_l}=1}^{{\\chi}_c}{|\\lambda^{[l]}_{{\\alpha}_l}|}^2}}\\cdot\\sum\\limits_{{{\\alpha}_l}=1}^{{\\chi}_c}\\lambda^{[l]}_{{\\alpha}_l}|{\\Phi^{[1..l]}_{\\alpha_l}}\\rangle|{ \\Phi^{[l+1..N]}_{\\alpha_l}}\\rangle,"
},
{
"math_id": 51,
"text": "\\chi_c"
},
{
"math_id": 52,
"text": "0.0001"
},
{
"math_id": 53,
"text": "2^{14}"
},
{
"math_id": 54,
"text": "2^{50}"
},
{
"math_id": 55,
"text": "\\psi"
},
{
"math_id": 56,
"text": "|{\\psi}\\rangle"
},
{
"math_id": 57,
"text": "\\Gamma^{[k-1]}"
},
{
"math_id": 58,
"text": "\\Gamma^{[k+1]}"
},
{
"math_id": 59,
"text": "\\Gamma^{[k]}"
},
{
"math_id": 60,
"text": "{{O}}(M^2\\cdot\\chi^2)"
},
{
"math_id": 61,
"text": "\\Gamma^{'[k]i_k}_{\\alpha_{k-1}\\alpha_k}=\\sum_{j}U^{i_k}_{j_k}\\Gamma^{[k]j_k}_{\\alpha_{k-1}\\alpha_k}."
},
{
"math_id": 62,
"text": "\\lambda"
},
{
"math_id": 63,
"text": "{{O}}({M\\cdot\\chi}^3)"
},
{
"math_id": 64,
"text": "{{H}=J{{{\\otimes}}}H_C{\\otimes}H_D{\\otimes}K}.\\, "
},
{
"math_id": 65,
"text": "\\rho^{J} = Tr_{CDK}|\\psi\\rangle\\langle\\psi|"
},
{
"math_id": 66,
"text": "\\rho^{[1..{k-1}]}=\\sum_{\\alpha}{(\\lambda^{[k-1]}_{\\alpha})}^2|{\\Phi^{[1..{k-1}]}_{\\alpha}}\\rangle\\langle{\\Phi^{[1..{k-1}]}_{\\alpha}}|=\\sum_{\\alpha}{(\\lambda^{[k-1]}_{\\alpha})^2}|{\\alpha}\\rangle\\langle{\\alpha}|."
},
{
"math_id": 67,
"text": "\\rho^{[{k+2}..{N}]}=\\sum_{\\gamma}{(\\lambda^{[k+1]}_{\\gamma})^2}|{\\Phi^{[{k+2}..N]}_{\\gamma}}\\rangle\\langle{\\Phi^{[{k+2}..N]}_{\\gamma}}|=\\sum_{\\gamma}{(\\lambda^{[k+1]}_{\\gamma})^2}|{\\gamma}\\rangle\\langle{\\gamma}|."
},
{
"math_id": 68,
"text": "H_C"
},
{
"math_id": 69,
"text": "H_D"
},
{
"math_id": 70,
"text": "|{\\psi}\\rangle=\\sum\\limits_{\\alpha,\\beta,\\gamma=1}^{\\chi}\\sum\\limits_{i,j=1}^{M}\\lambda^{[C-1]}_{\\alpha}\\Gamma^{[C]i}_{\\alpha\\beta}\\lambda^{[C]}_{\\beta}\\Gamma^{[D]j}_{\\beta\\gamma}\\lambda^{[D]}_{\\gamma}|{{\\alpha}ij{\\gamma}}\\rangle"
},
{
"math_id": 71,
"text": "\\Gamma^{[C]}"
},
{
"math_id": 72,
"text": "\\Gamma^{[D]}."
},
{
"math_id": 73,
"text": "|{\\psi'}\\rangle=V|{\\psi}\\rangle"
},
{
"math_id": 74,
"text": "|{\\psi'}\\rangle=\\sum\\limits_{\\alpha,\\gamma=1}^{\\chi}\\sum\\limits_{i,j=1}^{M}\\lambda_{\\alpha}\\Theta^{ij}_{\\alpha\\gamma}\\lambda_{\\gamma}|{{\\alpha}ij\\gamma}\\rangle"
},
{
"math_id": 75,
"text": "\\Theta^{ij}_{\\alpha\\gamma}=\\sum\\limits_{\\beta=1}^{\\chi}\\sum\\limits_{m,n=1}^{M}V^{ij}_{mn}\\Gamma^{[C]m}_{\\alpha\\beta}\\lambda_{\\beta}\\Gamma^{[D]n}_{\\beta\\gamma}."
},
{
"math_id": 76,
"text": "{{\\Gamma}}"
},
{
"math_id": 77,
"text": "\\rho^{'[DK]}"
},
{
"math_id": 78,
"text": "\\rho^{'[DK]}=Tr_{JC}|{\\psi'}\\rangle\\langle{\\psi'}|=\\sum_{j,j',\\gamma,\\gamma'}\\rho^{jj'}_{\\gamma\\gamma'}|{j\\gamma}\\rangle\\langle{j'\\gamma'}|."
},
{
"math_id": 79,
"text": "\\{|{j\\gamma}\\rangle\\}"
},
{
"math_id": 80,
"text": "\\Gamma^{[{{D]}}}"
},
{
"math_id": 81,
"text": "|{\\Phi^{'[{{DK}}]}}\\rangle=\\sum_{j,\\gamma}\\Gamma^{'[{{D}}]j}_{\\beta\\gamma}\\lambda_{\\gamma}|{j\\gamma}\\rangle."
},
{
"math_id": 82,
"text": "\\lambda^{'}_{\\beta}|{\\Phi^{'[{{JC}}]}_{\\beta}}\\rangle=\\langle{\\Phi^{'[{DK}]}_{\\beta}}|{\\psi'}\\rangle=\\sum_{i,j,\\alpha,\\gamma}(\\Gamma^{'[{D}]j}_{\\beta\\gamma})^{*}\\Theta^{ij}_{\\alpha\\gamma}(\\lambda_{\\gamma})^2\\lambda_{\\alpha}|{{\\alpha}i}\\rangle"
},
{
"math_id": 83,
"text": "\\{|{i\\alpha}\\rangle\\}"
},
{
"math_id": 84,
"text": "\\Gamma^{[{C}]}"
},
{
"math_id": 85,
"text": "|{\\Phi^{'[{{JC}}]}}\\rangle=\\sum_{i,\\alpha}\\Gamma^{'[{{C}}]i}_{\\alpha\\beta}\\lambda_{\\alpha}|{{\\alpha}i}\\rangle."
},
{
"math_id": 86,
"text": "{{O}}(M{\\cdot}{\\chi}^2)"
},
{
"math_id": 87,
"text": "\\Theta^{ij}_{\\alpha\\gamma}"
},
{
"math_id": 88,
"text": "\\beta"
},
{
"math_id": 89,
"text": "\\it{m}"
},
{
"math_id": 90,
"text": "\\it{n}"
},
{
"math_id": 91,
"text": "\\gamma,\\alpha,{\\it{i,j}}"
},
{
"math_id": 92,
"text": "{{O}}(M^4{\\cdot}{\\chi}^3)"
},
{
"math_id": 93,
"text": "\\rho^{{{jj'}}}_{\\gamma\\gamma'}"
},
{
"math_id": 94,
"text": "\\lambda^{'}_{\\beta}|{\\Phi^{'[{\\it{JC}}]}_{\\beta}}\\rangle"
},
{
"math_id": 95,
"text": "{\\it{O}}(M^3{\\cdot}{\\chi}^3)"
},
{
"math_id": 96,
"text": "{\\it{O}}(M^2{\\cdot}{\\chi}^3)"
},
{
"math_id": 97,
"text": "M = 2"
},
{
"math_id": 98,
"text": "N"
},
{
"math_id": 99,
"text": "H_N=\\sum\\limits_{l=1}^{N}K^{[l]}_1 + \\sum\\limits_{l=1}^{N}K^{[l,l+1]}_2."
},
{
"math_id": 100,
"text": "H_N"
},
{
"math_id": 101,
"text": "H_N = F + G"
},
{
"math_id": 102,
"text": "F \\equiv \\sum_{\\text{even } l}(K^{l}_1 + K^{l,l+1}_2) = \\sum_{\\text{even } l}F^{[l]},"
},
{
"math_id": 103,
"text": "G \\equiv \\sum_{\\text{odd } l}(K^{l}_1 + K^{l,l+1}_2) = \\sum_{\\text{odd } l}G^{[l]}."
},
{
"math_id": 104,
"text": "[F^{[l]},F^{[l']}]=0"
},
{
"math_id": 105,
"text": "[G^{[l]},G^{[l']}]=0"
},
{
"math_id": 106,
"text": " e^{A+B} = \\lim_{n\\to\\infty} \\left(e^{\\frac{A}{n}}e^{\\frac{B}{n}} \\right)^n"
},
{
"math_id": 107,
"text": "e^{{\\delta}(A+B)} = e^{{\\delta}A}e^{{\\delta}B} + {{\\it{O}}}(\\delta^2)."
},
{
"math_id": 108,
"text": "\\delta \\to 0"
},
{
"math_id": 109,
"text": "e^{-iHt}"
},
{
"math_id": 110,
"text": "e^{-iH_N T} = [e^{-iH_N\\delta}]^{T/{\\delta}} = [e^{\\frac{{\\delta}}{2}F}e^{{\\delta}G}e^{\\frac{{\\delta}}{2}F}]^{n}"
},
{
"math_id": 111,
"text": "n=\\frac{T}{\\delta}"
},
{
"math_id": 112,
"text": "n"
},
{
"math_id": 113,
"text": "e^{\\frac{{\\delta}}{2}F}"
},
{
"math_id": 114,
"text": "e^{{\\delta}G}"
},
{
"math_id": 115,
"text": "e^{\\frac{{\\delta}}{2}F} = \\prod_{\\text{even } l}e^{\\frac{{\\delta}}{2}F^{[l]}}"
},
{
"math_id": 116,
"text": "e^{{\\delta}G} = \\prod_{\\text{odd } l}e^{{\\delta}G^{[l]}}"
},
{
"math_id": 117,
"text": "F^{[l]}"
},
{
"math_id": 118,
"text": "F^{[l']}"
},
{
"math_id": 119,
"text": "G^{[l]}"
},
{
"math_id": 120,
"text": "G^{[l']}"
},
{
"math_id": 121,
"text": "l{\\neq}l'"
},
{
"math_id": 122,
"text": "|{\\tilde{\\psi}_{t+\\delta}}\\rangle=e^{-i\\frac{{\\delta}}{2}F}e^{{-i\\delta}G}e^{\\frac{{-i\\delta}}{2}F}|{\\tilde{\\psi}_{t}}\\rangle."
},
{
"math_id": 123,
"text": "\\delta"
},
{
"math_id": 124,
"text": "e^{-i\\frac{{\\delta}}{2}F^{[l]}}"
},
{
"math_id": 125,
"text": "e^{{-i\\delta}G^{[l]}}"
},
{
"math_id": 126,
"text": "{\\it{D}}"
},
{
"math_id": 127,
"text": "|{\\psi_0}\\rangle"
},
{
"math_id": 128,
"text": "|{\\psi_{T}}\\rangle"
},
{
"math_id": 129,
"text": "H_n"
},
{
"math_id": 130,
"text": "{\\it{p^{th}}}"
},
{
"math_id": 131,
"text": "{\\delta}^{p+1}"
},
{
"math_id": 132,
"text": "n = \\frac{T}{\\delta}"
},
{
"math_id": 133,
"text": " \\epsilon=\\frac{T}{\\delta}\\delta^{p+1}=T\\delta^p"
},
{
"math_id": 134,
"text": "|{\\tilde{\\psi}_{Tr}}\\rangle"
},
{
"math_id": 135,
"text": "|{\\tilde{\\psi}_{Tr}}\\rangle = \\sqrt{1-{\\epsilon}^2}|{\\psi_{Tr}}\\rangle + {\\epsilon}|{\\psi^{\\bot}_{Tr}}\\rangle"
},
{
"math_id": 136,
"text": "|{\\psi_{Tr}}\\rangle"
},
{
"math_id": 137,
"text": "|{\\psi^{\\bot}_{Tr}}\\rangle"
},
{
"math_id": 138,
"text": "T"
},
{
"math_id": 139,
"text": "\\epsilon(T) = 1 -|\\langle{\\tilde{\\psi_{Tr}}}|{\\psi_{{Tr}}}\\rangle|^2 = 1 - 1 + \\epsilon^2 = \\epsilon^2"
},
{
"math_id": 140,
"text": "\\epsilon({{{\\it{D}}}}) = 1 - \\prod\\limits_{n=1}^{N-1}(1-\\epsilon_n)"
},
{
"math_id": 141,
"text": " \\epsilon_n = \\sum\\limits_{\\alpha=\\chi_c}^{\\chi}(\\lambda^{[n]}_{\\alpha})^2"
},
{
"math_id": 142,
"text": "{\\it{n}}"
},
{
"math_id": 143,
"text": "|{\\psi}\\rangle = \\sqrt{1-\\epsilon_n}|{\\psi_{D}}\\rangle + \\sqrt{\\epsilon_n}|{\\psi^{\\bot}_{D}}\\rangle"
},
{
"math_id": 144,
"text": "|{\\psi_{D}}\\rangle = \\frac{1}{\\sqrt{1-\\epsilon_n}}\\sum\\limits_{{{\\alpha}_n}=1}^{{\\chi}_c}\\lambda^{[n]}_{{\\alpha}_n}|{\\Phi^{[1..n]}_{\\alpha_n}}\\rangle|{ \\Phi^{[n+1..N]}_{\\alpha_n}}\\rangle"
},
{
"math_id": 145,
"text": "|{\\psi^{\\bot}_{D}}\\rangle = \\frac{1}{\\sqrt{\\epsilon_n}}\\sum\\limits_{{{\\alpha}_n}={\\chi}_c}^{{\\chi}}\\lambda^{[n]}_{{\\alpha}_n}|{\\Phi^{[1..n]}_{\\alpha_n}}\\rangle|{ \\Phi^{[n+1..N]}_{\\alpha_n}}\\rangle"
},
{
"math_id": 146,
"text": "\\langle\\psi^{\\bot}_{D}|\\psi_{D}\\rangle=0"
},
{
"math_id": 147,
"text": "\n\\epsilon_n = 1 - |\\langle{\\psi}|\\psi_{D}\\rangle|^2 = \\sum\\limits_{\\alpha=\\chi_c}^{\\chi}(\\lambda^{[n]}_{\\alpha})^2"
},
{
"math_id": 148,
"text": "|{\\psi_{D}}\\rangle = \\sqrt{1-\\epsilon_{n+1}}|{{{\\psi}'}_{D}}\\rangle + \\sqrt{\\epsilon_{n+1}}|{{\\psi'}^{\\bot}_{D}}\\rangle"
},
{
"math_id": 149,
"text": "\\epsilon = 1 - |\\langle{\\psi}|\\psi'_{D}\\rangle|^2 = 1 - (1-\\epsilon_{n+1})|\\langle{\\psi}|\\psi_{D}\\rangle|^2 = 1 - (1-\\epsilon_{n+1})(1-\\epsilon_{n})"
},
{
"math_id": 150,
"text": "D"
},
{
"math_id": 151,
"text": "l"
},
{
"math_id": 152,
"text": "([1..l]:[l+1..N])"
},
{
"math_id": 153,
"text": " R = {\\sum\\limits_{{\\alpha_l}=1}^{{\\chi}_c}{|\\lambda^{[l]}_{{\\alpha}_l}|}^2} = {1 - \\epsilon_l}"
},
{
"math_id": 154,
"text": "{\\it{l}}-1"
},
{
"math_id": 155,
"text": "\\|{\\Phi^{[l-1..N]}_{\\alpha_{l-1}}}\\|"
},
{
"math_id": 156,
"text": "n_1 = 1 = \\sum\\limits_{\\alpha_l=1}^{\\chi_c}(c_{\\alpha_{l-1}\\alpha_{l}})^2(\\lambda^{[l]}_{\\alpha_l})^2 + \\sum\\limits_{\\alpha_l=\\chi_c}^{\\chi}(c_{\\alpha_{l-1}\\alpha_{l}})^2(\\lambda^{[l]}_{\\alpha_l})^2 = S_1 + S_2,"
},
{
"math_id": 157,
"text": "(c_{\\alpha_{l-1}\\alpha_{l}})^2 = \\sum\\limits_{i_l=1}^{d}(\\Gamma^{[l]i_l}_{\\alpha_{l-1}\\alpha_{l}})^{*}\\Gamma^{[l]i_l}_{\\alpha_{l-1}\\alpha_{l}}"
},
{
"math_id": 158,
"text": "n_{2}=\\sum\\limits_{\\alpha_l=1}^{\\chi_c} (c_{\\alpha_{{{l-1}}}\\alpha_{l}})^2\\cdot({\\lambda'}^{[l]}_{\\alpha_l})^2=\\sum\\limits_{\\alpha_l=1}^{\\chi_c}(c_{\\alpha_{{{l-1}}}\\alpha_{l}})^2\\frac{(\\lambda^{[l]}_{\\alpha_l})^2}{R} = \\frac{S_1}{R}"
},
{
"math_id": 159,
"text": "\\epsilon = n_2 - n_1 = n_2 - 1"
},
{
"math_id": 160,
"text": "\\epsilon = \\frac{S_1}{R} - 1 \\leq \\frac{1-R}{R} = \\frac{\\epsilon_l}{1-\\epsilon_l} {\\to}0\\ \\ as\\ \\ {{\\epsilon_l{\\to}{{0}}}} "
},
{
"math_id": 161,
"text": "|\\langle{\\psi_{D}}|\\psi_{D}\\rangle|^2 = 1 - \\frac{\\epsilon_l}{1-\\epsilon_l} = \\frac{1-2\\epsilon_l}{1-\\epsilon_l}\n"
},
{
"math_id": 162,
"text": "\\epsilon({{{{D}}}}) = 1 - \\prod\\limits_{n=1}^{N-1}(1-\\epsilon_n) \\prod\\limits_{n=1}^{N-1}\\frac{1-2\\epsilon_n}{1-\\epsilon_n} = 1 - \\prod\\limits_{n=1}^{N-1}(1-2\\epsilon_n)"
}
] |
https://en.wikipedia.org/wiki?curid=5978615
|
59788104
|
Von Bertalanffy function
|
Growth curve model
The von Bertalanffy growth function (VBGF), or von Bertalanffy curve, is a type of growth curve for a time series and is named after Ludwig von Bertalanffy. It is a special case of the generalised logistic function. The growth curve is used to model mean length from age in animals. The function is commonly applied in ecology to model fish growth and in paleontology to model sclerochronological parameters of shell growth.
The model can be written as the following:
formula_0
where formula_1 is age, formula_2 is the growth coefficient, formula_3 is the theoretical age when size is zero, and formula_4 is asymptotic size. It is the solution of the following linear differential equation:
formula_5
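As a quick illustration, the following Python sketch evaluates the curve for some arbitrary, made-up parameter values (not taken from any particular study) and checks numerically that it satisfies the differential equation above.

```python
import numpy as np

def vbgf(a, L_inf, k, t0):
    """von Bertalanffy growth function L(a) = L_inf * (1 - exp(-k (a - t0)))."""
    return L_inf * (1.0 - np.exp(-k * (a - t0)))

# Illustrative parameter values only.
ages = np.linspace(0.0, 20.0, 200)
lengths = vbgf(ages, L_inf=60.0, k=0.3, t0=-0.5)

# Numerical check that L(a) satisfies dL/da = k (L_inf - L).
dL_da = np.gradient(lengths, ages)
assert np.allclose(dL_da, 0.3 * (60.0 - lengths), atol=0.5)
```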
Seasonally-adjusted von Bertalanffy.
The seasonally-adjusted von Bertalanffy is an extension of this function that accounts for organism growth that occurs seasonally. It was created by I. F. Somers in 1988.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "L(a)= L_\\infty(1-\\exp(-k(a-t_0)))"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "t_0"
},
{
"math_id": 4,
"text": "L_\\infty"
},
{
"math_id": 5,
"text": " \\frac{dL}{da} = k (L_{\\infty} - L ) "
}
] |
https://en.wikipedia.org/wiki?curid=59788104
|
5979010
|
Elliptic unit
|
In mathematics, elliptic units are certain units of abelian extensions of imaginary quadratic fields constructed using singular values of modular functions, or division values of elliptic functions. They were introduced by Gilles Robert in 1973, and were used by John Coates and Andrew Wiles in their work on the Birch and Swinnerton-Dyer conjecture. Elliptic units are an analogue for imaginary quadratic fields of cyclotomic units. They form an example of an Euler system.
Definition.
A system of elliptic units may be constructed for an elliptic curve "E" with complex multiplication by the ring of integers "R" of an imaginary quadratic field "F". For simplicity we assume that "F" has class number one. Let a be an ideal of "R" with generator α. For a Weierstrass model of "E", define
formula_0
where "P" is a point on "E", Δ is the discriminant, and "x" is the X-coordinate on the Weierstrass model. The function Θ is independent of the choice of model, and is defined over the field of definition of "E".
Properties.
Let b be an ideal of "R" coprime to "a" and "Q" an "R"-generator of the b-torsion. Then Θa("Q") is defined over the ray class field "K"(b), and if b is not a prime power then Θa("Q") is a global unit: if b is a power of a prime p then Θa("Q") is a unit away from p.
The function Θa satisfies a "distribution relation" for b = (β) coprime to a:
formula_1
|
[
{
"math_id": 0,
"text": "\\Theta_{\\mathbf{a}}(P) = \\alpha^{-12} \\Delta_E^{N\\mathbf{a} - 1} \\prod_{\\mathbf{a}P=0, P\\ne0} (x-x(P))^{-6} \\ . "
},
{
"math_id": 1,
"text": " \\prod_{\\mathbf{b}Q=0} \\Theta_{\\mathbf{a}}(P+R) = \\Theta_{\\mathbf{a}}(\\beta P) \\ . "
}
] |
https://en.wikipedia.org/wiki?curid=5979010
|
597998
|
Multinomial theorem
|
Generalization of the binomial theorem to other polynomials
In mathematics, the multinomial theorem describes how to expand a power of a sum in terms of powers of the terms in that sum. It is the generalization of the binomial theorem from binomials to multinomials.
Theorem.
For any positive integer m and any non-negative integer n, the multinomial theorem describes how a sum with m terms expands when raised to the nth power:
formula_0
where
formula_1
is a multinomial coefficient. The sum is taken over all combinations of nonnegative integer indices "k"1 through km such that the sum of all ki is n. That is, for each term in the expansion, the exponents of the xi must add up to n.
In the case "m" = 2, this statement reduces to that of the binomial theorem.
Example.
The third power of the trinomial "a" + "b" + "c" is given by
formula_2
This can be computed by hand using the distributive property of multiplication over addition and combining like terms, but it can also be done (perhaps more easily) with the multinomial theorem. It is possible to "read off" the multinomial coefficients from the terms by using the multinomial coefficient formula. For example, formula_3 has coefficient formula_4, formula_5 has coefficient formula_6, and so on.
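The same coefficients can be generated programmatically. The sketch below is an illustration only (the helper name is chosen here): it enumerates all exponent triples with "k"1 + "k"2 + "k"3 = 3, computes their multinomial coefficients, and recovers the expansion above; it also checks that the coefficients sum to 33 = 27, anticipating the identity given later in the article.

```python
from math import factorial
from itertools import product

def multinomial(ks):
    """Multinomial coefficient n! / (k_1! k_2! ... k_m!) with n = sum(ks)."""
    out = factorial(sum(ks))
    for k in ks:
        out //= factorial(k)
    return out

# All terms of (a + b + c)^3: exponent triples (k1, k2, k3) with k1 + k2 + k3 = 3.
n, m = 3, 3
terms = {ks: multinomial(ks)
         for ks in product(range(n + 1), repeat=m) if sum(ks) == n}

assert terms[(2, 0, 1)] == 3 and terms[(1, 1, 1)] == 6   # matches the coefficients above
assert sum(terms.values()) == m ** n                     # sum of all coefficients is m^n = 27
```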
Alternate expression.
The statement of the theorem can be written concisely using multiindices:
formula_7
where
formula_8
and
formula_9
Proof.
This proof of the multinomial theorem uses the binomial theorem and induction on m.
First, for "m" = 1, both sides equal "x"1"n" since there is only one term "k"1 = "n" in the sum. For the induction step, suppose the multinomial theorem holds for m. Then
formula_10
by the induction hypothesis. Applying the binomial theorem to the last factor,
formula_11
formula_12
which completes the induction. The last step follows because
formula_13
as can easily be seen by writing the three coefficients using factorials as follows:
formula_14
Multinomial coefficients.
The numbers
formula_15
appearing in the theorem are the multinomial coefficients. They can be expressed in numerous ways, including as a product of binomial coefficients or of factorials:
formula_16
Sum of all multinomial coefficients.
The substitution of "xi" = 1 for all i into the multinomial theorem
formula_17
gives immediately that
formula_18
Number of multinomial coefficients.
The number of terms in a multinomial sum, #"n","m", is equal to the number of monomials of degree n on the variables "x"1, …, "xm":
formula_19
The count can be performed easily using the method of stars and bars.
Valuation of multinomial coefficients.
The largest power of a prime p that divides a multinomial coefficient may be computed using a generalization of Kummer's theorem.
Asymptotics.
By Stirling's approximation, or equivalently the log-gamma function's asymptotic expansion, formula_20 so that, for example, formula_21
Interpretations.
Ways to put objects into bins.
The multinomial coefficients have a direct combinatorial interpretation, as the number of ways of depositing n distinct objects into m distinct bins, with "k"1 objects in the first bin, "k"2 objects in the second bin, and so on.
Number of ways to select according to a distribution.
In statistical mechanics and combinatorics, if one has a number distribution of labels, then the multinomial coefficients naturally arise from the binomial coefficients. Given a number distribution {"ni"} on a set of N total items, ni represents the number of items to be given the label i. (In statistical mechanics i is the label of the energy state.)
The number of arrangements is found by choosing "n"1 of the "N" items to be labeled 1, which can be done in formula_22 ways, then choosing "n"2 of the remaining "N" − "n"1 items to be labeled 2, which can be done in formula_23 ways, then choosing "n"3 of the remaining items to be labeled 3, in formula_24 ways, and so on.
Multiplying the number of choices at each step results in:
formula_25
Cancellation results in the formula given above.
Number of unique permutations of words.
The multinomial coefficient
formula_26
is also the number of distinct ways to permute a multiset of n elements, where ki is the multiplicity of each of the ith element. For example, the number of distinct permutations of the letters of the word MISSISSIPPI, which has 1 M, 4 Is, 4 Ss, and 2 Ps, is
formula_27
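A small Python sketch (illustrative only) confirms the MISSISSIPPI count with the factorial formula, and cross-checks the formula against brute-force enumeration on a shorter word.

```python
from math import factorial
from itertools import permutations

def count_permutations(word):
    """n! divided by the factorials of the letter multiplicities."""
    out = factorial(len(word))
    for letter in set(word):
        out //= factorial(word.count(letter))
    return out

assert count_permutations("MISSISSIPPI") == 34650
# Brute-force cross-check on a small word: BANANA gives 6! / (3! 2! 1!) = 60.
assert count_permutations("BANANA") == len(set(permutations("BANANA"))) == 60
```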
Generalized Pascal's triangle.
One can use the multinomial theorem to generalize Pascal's triangle or Pascal's pyramid to Pascal's simplex. This provides a quick way to generate a lookup table for multinomial coefficients.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(x_1 + x_2 + \\cdots + x_m)^n\n = \\sum_{\\begin{array}{c} k_1+k_2+\\cdots+k_m=n \\\\ k_1, k_2, \\cdots, k_m \\geq 0\\end{array}} {n \\choose k_1, k_2, \\ldots, k_m}\n x_1^{k_1} \\cdot x_2^{k_2} \\cdots x_m^{k_m}"
},
{
"math_id": 1,
"text": " {n \\choose k_1, k_2, \\ldots, k_m}\n = \\frac{n!}{k_1!\\, k_2! \\cdots k_m!}"
},
{
"math_id": 2,
"text": "\n(a+b+c)^3 = a^3 + b^3 + c^3 + 3 a^2 b + 3 a^2 c + 3 b^2 a + 3 b^2 c + 3 c^2 a + 3 c^2 b + 6 a b c.\n"
},
{
"math_id": 3,
"text": "a^2 b^0 c^1 "
},
{
"math_id": 4,
"text": "{3 \\choose 2, 0, 1} = \\frac{3!}{2!\\cdot 0!\\cdot 1!} = \\frac{6}{2 \\cdot 1 \\cdot 1} = 3"
},
{
"math_id": 5,
"text": "a^1 b^1 c^1"
},
{
"math_id": 6,
"text": "{3 \\choose 1, 1, 1} = \\frac{3!}{1!\\cdot 1!\\cdot 1!} = \\frac{6}{1 \\cdot 1 \\cdot 1} = 6"
},
{
"math_id": 7,
"text": "(x_1+\\cdots+x_m)^n = \\sum_{|\\alpha|=n}{n \\choose \\alpha}x^\\alpha"
},
{
"math_id": 8,
"text": "\n\\alpha=(\\alpha_1,\\alpha_2,\\dots,\\alpha_m)\n"
},
{
"math_id": 9,
"text": "\nx^\\alpha=x_1^{\\alpha_1} x_2^{\\alpha_2} \\cdots x_m^{\\alpha_m}\n"
},
{
"math_id": 10,
"text": "\n\\begin{align}\n& (x_1+x_2+\\cdots+x_m+x_{m+1})^n = (x_1+x_2+\\cdots+(x_m+x_{m+1}))^n \\\\[6pt]\n= {} & \\sum_{k_1+k_2+\\cdots+k_{m-1}+K=n}{n\\choose k_1,k_2,\\ldots,k_{m-1},K} x_1^{k_1} x_2^{k_2}\\cdots x_{m-1}^{k_{m-1}}(x_m+x_{m+1})^K\n\\end{align}\n"
},
{
"math_id": 11,
"text": " = \\sum_{k_1+k_2+\\cdots+k_{m-1}+K=n}{n\\choose k_1,k_2,\\ldots,k_{m-1},K} x_1^{k_1}x_2^{k_2}\\cdots x_{m-1}^{k_{m-1}}\\sum_{k_m+k_{m+1}=K}{K\\choose k_m,k_{m+1}}x_m^{k_m}x_{m+1}^{k_{m+1}}"
},
{
"math_id": 12,
"text": " = \\sum_{k_1+k_2+\\cdots+k_{m-1}+k_m+k_{m+1}=n}{n\\choose k_1,k_2,\\ldots,k_{m-1},k_m,k_{m+1}} x_1^{k_1}x_2^{k_2}\\cdots x_{m-1}^{k_{m-1}}x_m^{k_m}x_{m+1}^{k_{m+1}}\n"
},
{
"math_id": 13,
"text": "{n\\choose k_1,k_2,\\ldots,k_{m-1},K}{K\\choose k_m,k_{m+1}} = {n\\choose k_1,k_2,\\ldots,k_{m-1},k_m,k_{m+1}},"
},
{
"math_id": 14,
"text": " \\frac{n!}{k_1! k_2! \\cdots k_{m-1}!K!} \\frac{K!}{k_m! k_{m+1}!}=\\frac{n!}{k_1! k_2! \\cdots k_{m+1}!}."
},
{
"math_id": 15,
"text": " {n \\choose k_1, k_2, \\ldots, k_m}"
},
{
"math_id": 16,
"text": "\n{n \\choose k_1, k_2, \\ldots, k_m} = \\frac{n!}{k_1!\\, k_2! \\cdots k_m!} = {k_1\\choose k_1}{k_1+k_2\\choose k_2}\\cdots{k_1+k_2+\\cdots+k_m\\choose k_m}\n "
},
{
"math_id": 17,
"text": "\\sum_{k_1+k_2+\\cdots+k_m=n} {n \\choose k_1, k_2, \\ldots, k_m} x_1^{k_1} x_2^{k_2} \\cdots x_m^{k_m}\n= (x_1 + x_2 + \\cdots + x_m)^n"
},
{
"math_id": 18,
"text": "\n\\sum_{k_1+k_2+\\cdots+k_m=n} {n \\choose k_1, k_2, \\ldots, k_m} = m^n.\n"
},
{
"math_id": 19,
"text": "\n\\#_{n,m} = {n+m-1 \\choose m-1}.\n"
},
{
"math_id": 20,
"text": "\\log\\binom{kn}{n, n, \\cdots, n} = k n \\log(k) + \\frac{1}{2} \\left(\\log(k) - (k - 1) \\log(2 \\pi n)\\right) - \\frac{k^2 - 1}{12kn} + \\frac{k^4 - 1}{360k^3n^3} - \\frac{k^6 - 1}{1260k^5n^5} + O\\left(\\frac{1}{n^6}\\right)"
},
{
"math_id": 21,
"text": "\\binom{2n}{n} \\sim \\frac{2^{2n}}{\\sqrt{n\\pi }}"
},
{
"math_id": 22,
"text": "\\tbinom{N}{n_1}"
},
{
"math_id": 23,
"text": "\\tbinom{N-n_1}{n_2}"
},
{
"math_id": 24,
"text": "\\tbinom{N-n_1-n_2}{n_3}"
},
{
"math_id": 25,
"text": "{N \\choose n_1}{N-n_1\\choose n_2}{N-n_1-n_2\\choose n_3}\\cdots=\\frac{N!}{(N-n_1)!n_1!} \\cdot \\frac{(N-n_1)!}{(N-n_1-n_2)!n_2!} \\cdot \\frac{(N-n_1-n_2)!}{(N-n_1-n_2-n_3)!n_3!}\\cdots."
},
{
"math_id": 26,
"text": "\\binom{n}{k_1, \\ldots, k_m}"
},
{
"math_id": 27,
"text": "{11 \\choose 1, 4, 4, 2} = \\frac{11!}{1!\\, 4!\\, 4!\\, 2!} = 34650."
}
] |
https://en.wikipedia.org/wiki?curid=597998
|
598031
|
Independent component analysis
|
Signal processing computational method
In signal processing, independent component analysis (ICA) is a computational method for separating a multivariate signal into additive subcomponents. This is done by assuming that at most one subcomponent is Gaussian and that the subcomponents are statistically independent from each other. ICA was invented by Jeanny Hérault and Christian Jutten in 1985. ICA is a special case of blind source separation. A common example application of ICA is the "cocktail party problem" of listening in on one person's speech in a noisy room.
Introduction.
Independent component analysis attempts to decompose a multivariate signal into independent non-Gaussian signals. As an example, sound is usually a signal that is composed of the numerical addition, at each time t, of signals from several sources. The question then is whether it is possible to separate these contributing sources from the observed total signal. When the statistical independence assumption is correct, blind ICA separation of a mixed signal gives very good results. It is also used for signals that are not supposed to be generated by mixing for analysis purposes.
A simple application of ICA is the "cocktail party problem", where the underlying speech signals are separated from a sample data consisting of people talking simultaneously in a room. Usually the problem is simplified by assuming no time delays or echoes. Note that a filtered and delayed signal is a copy of a dependent component, and thus the statistical independence assumption is not violated.
Mixing weights for constructing the "formula_0" observed signals from the formula_1 components can be placed in an formula_2 matrix. An important thing to consider is that if formula_1 sources are present, at least formula_1 observations (e.g. microphones if the observed signal is audio) are needed to recover the original signals. When there are an equal number of observations and source signals, the mixing matrix is square ("formula_3"). Other cases of underdetermined ("formula_4") and overdetermined ("formula_5") have been investigated.
The success of ICA separation of mixed signals relies on two assumptions and three effects of mixing source signals. The two assumptions are that the source signals are statistically independent of each other and that the values in each source signal have non-Gaussian distributions.
The three effects of mixing source signals are independence (while the source signals are independent, their mixtures are not, since the mixtures share the same source signals), normality (by the central limit theorem, the distribution of a sum of independent random variables with finite variance tends towards a Gaussian distribution, so a signal mixture tends to be more Gaussian than any of its constituent source signals) and complexity (the temporal complexity of any signal mixture is greater than that of its simplest constituent source signal).
Those principles contribute to the basic establishment of ICA. If the signals extracted from a set of mixtures are independent and have non-Gaussian distributions or have low complexity, then they must be source signals.
Defining component independence.
ICA finds the independent components (also called factors, latent variables or sources) by maximizing the statistical independence of the estimated components. We may choose one of many ways to define a proxy for independence, and this choice governs the form of the ICA algorithm. The two broadest definitions of independence for ICA are
The minimization-of-mutual-information (MMI) family of ICA algorithms uses measures such as the Kullback–Leibler divergence and maximum entropy. The non-Gaussianity family of ICA algorithms, motivated by the central limit theorem, uses kurtosis and negentropy.
Typical algorithms for ICA use centering (subtract the mean to create a zero mean signal), whitening (usually with the eigenvalue decomposition), and dimensionality reduction as preprocessing steps in order to simplify and reduce the complexity of the problem for the actual iterative algorithm. Whitening and dimension reduction can be achieved with principal component analysis or singular value decomposition. Whitening ensures that all dimensions are treated equally "a priori" before the algorithm is run. Well-known algorithms for ICA include infomax, FastICA, JADE, and kernel-independent component analysis, among others. In general, ICA cannot identify the actual number of source signals, a uniquely correct ordering of the source signals, nor the proper scaling (including sign) of the source signals.
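As an illustration of these preprocessing steps, the following minimal sketch (an illustrative example, not taken from any particular ICA package; function and variable names are our own) centers a data matrix and whitens it using an eigenvalue decomposition of its covariance matrix:

```python
import numpy as np

def center_and_whiten(X):
    """Center and whiten X of shape (n_signals, n_samples).

    Returns Z with zero mean and (approximately) identity covariance,
    plus the whitening matrix V such that Z = V @ (X - mean)."""
    X = np.asarray(X, dtype=float)
    X_centered = X - X.mean(axis=1, keepdims=True)              # centering
    cov = np.cov(X_centered)                                    # covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)                      # eigenvalue decomposition
    V = eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T   # whitening matrix
    Z = V @ X_centered                                          # whitened data
    return Z, V
```

After this step np.cov(Z) is approximately the identity matrix, so the remaining ICA problem reduces to finding a suitable rotation of the whitened data.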
ICA is important to blind signal separation and has many practical applications. It is closely related to (or even a special case of) the search for a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent.
Mathematical definitions.
Linear independent component analysis can be divided into noiseless and noisy cases, where noiseless ICA is a special case of noisy ICA. Nonlinear ICA should be considered as a separate case.
General definition.
The data are represented by the observed random vector formula_6 and the hidden components as the random vector formula_7 The task is to transform the observed data formula_8 using a linear static transformation formula_9 as formula_10 into a vector of maximally independent components formula_11 measured by some function formula_12 of independence.
Generative model.
Linear noiseless ICA.
The components formula_13 of the observed random vector formula_6 are generated as a sum of the independent components formula_14, formula_15:
formula_16
weighted by the mixing weights formula_17.
The same generative model can be written in vector form as formula_18, where the observed random vector formula_19 is represented by the basis vectors formula_20. The basis vectors formula_21 form the columns of the mixing matrix formula_22 and the generative formula can be written as formula_23, where formula_24.
Given the model and realizations (samples) formula_25 of the random vector formula_19, the task is to estimate both the mixing matrix formula_26 and the sources formula_11. This is done by adaptively calculating the formula_27 vectors and setting up a cost function which either maximizes the non-gaussianity of the calculated formula_28 or minimizes the mutual information. In some cases, a priori knowledge of the probability distributions of the sources can be used in the cost function.
The original sources formula_11 can be recovered by multiplying the observed signals formula_19 with the inverse of the mixing matrix formula_29, also known as the unmixing matrix. Here it is assumed that the mixing matrix is square (formula_30). If the number of basis vectors is greater than the dimensionality of the observed vectors, formula_31, the task is overcomplete but is still solvable with the pseudo inverse.
Linear noisy ICA.
With the added assumption of zero-mean and uncorrelated Gaussian noise formula_32, the ICA model takes the form formula_33.
Nonlinear ICA.
The mixing of the sources does not need to be linear. Using a nonlinear mixing function formula_34 with parameters formula_35 the nonlinear ICA model is formula_36.
Identifiability.
The independent components are identifiable up to a permutation and scaling of the sources. This identifiability requires that at most one of the sources formula_14 is Gaussian, and that the number of observed mixtures, formula_37, is at least as large as the number of estimated components formula_38, that is formula_39 (equivalently, the mixing matrix formula_26 must be of full column rank).
Binary ICA.
A special variant of ICA is binary ICA in which both signal sources and monitors are in binary form and observations from monitors are disjunctive mixtures of binary independent sources. The problem was shown to have applications in many domains including medical diagnosis, multi-cluster assignment, network tomography and internet resource management.
Let formula_40 be the set of binary variables from formula_37 monitors and formula_41 be the set of binary variables from formula_38 sources. Source-monitor connections are represented by the (unknown) mixing matrix formula_42, where formula_43 indicates that signal from the "i"-th source can be observed by the "j"-th monitor. The system works as follows: at any time, if a source formula_44 is active (formula_45) and it is connected to the monitor formula_46 (formula_47) then the monitor formula_46 will observe some activity (formula_48). Formally we have:
formula_49
where formula_50 is Boolean AND and formula_51 is Boolean OR. Noise is not explicitly modelled; rather, it can be treated as independent sources.
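To make the generative model concrete, the following small sketch (an illustration of ours, using the convention that the connection matrix has one row per monitor and one column per source) draws binary observations from the Boolean OR/AND model above:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sources, n_monitors, n_samples = 3, 4, 10

# Random source-monitor connections (monitors x sources) and binary
# source activities (sources x samples).
G = rng.integers(0, 2, size=(n_monitors, n_sources)).astype(bool)
Y = rng.integers(0, 2, size=(n_sources, n_samples)).astype(bool)

# A monitor is active at time t whenever at least one source connected
# to it is active: x = OR over sources of (g AND y).
X = np.zeros((n_monitors, n_samples), dtype=bool)
for t in range(n_samples):
    X[:, t] = np.any(G & Y[:, t], axis=1)
```

Binary ICA then tries to recover G and Y from X alone.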
The above problem can be heuristically solved by assuming the variables are continuous and running FastICA on the binary observation data to get the mixing matrix formula_42 (real values), and then applying rounding techniques to formula_42 to obtain the binary values. This approach has been shown to produce highly inaccurate results.
Another method is to use dynamic programming: recursively breaking the observation matrix formula_52 into sub-matrices and running the inference algorithm on these sub-matrices. The key observation behind this algorithm is that the sub-matrix formula_53 of formula_52, where formula_54, corresponds to the unbiased observation matrix of the hidden components that have no connection to the formula_44-th monitor. Experimental results show that this approach is accurate under moderate noise levels.
The Generalized Binary ICA framework introduces a broader problem formulation which does not necessitate any knowledge on the generative model. In other words, this method attempts to decompose a source into its independent components (as much as possible, and without losing any information) with no prior assumption on the way it was generated. Although this problem appears quite complex, it can be accurately solved with a branch and bound search tree algorithm or tightly upper bounded with a single multiplication of a matrix with a vector.
Methods for blind source separation.
Projection pursuit.
Signal mixtures tend to have Gaussian probability density functions, and source signals tend to have non-Gaussian probability density functions. Each source signal can be extracted from a set of signal mixtures by taking the inner product of a weight vector and those signal mixtures where this inner product provides an orthogonal projection of the signal mixtures. The remaining challenge is finding such a weight vector. One type of method for doing so is projection pursuit.
Projection pursuit seeks one projection at a time such that the extracted signal is as non-Gaussian as possible. This contrasts with ICA, which typically extracts "M" signals simultaneously from "M" signal mixtures and therefore requires estimating an "M" × "M" unmixing matrix. One practical advantage of projection pursuit over ICA is that fewer than "M" signals can be extracted if required, where each source signal is extracted from "M" signal mixtures using an "M"-element weight vector.
We can use kurtosis to recover multiple source signals by finding the correct weight vectors with the use of projection pursuit.
The kurtosis of the probability density function of a signal, for a finite sample, is computed as
formula_55
where formula_56 is the sample mean of formula_57, the extracted signals. The constant 3 ensures that Gaussian signals have zero kurtosis, Super-Gaussian signals have positive kurtosis, and Sub-Gaussian signals have negative kurtosis. The denominator is the variance of formula_57, and ensures that the measured kurtosis takes account of signal variance. The goal of projection pursuit is to maximize the kurtosis, and make the extracted signal as non-normal as possible.
Using kurtosis as a measure of non-normality, we can now examine how the kurtosis of a signal formula_58 extracted from a set of "M" mixtures formula_59 varies as the weight vector formula_60 is rotated around the origin. Given our assumption that each source signal formula_61 is super-Gaussian, we would expect the kurtosis of the extracted signal formula_58 to be maximal precisely when formula_62, that is, when the weight vector formula_60 is orthogonal to one of the transformed axes formula_63 or formula_64.
For multiple source mixture signals, we can use kurtosis and Gram-Schmidt orthogonalization (GSO) to recover the signals. Given "M" signal mixtures in an "M"-dimensional space, GSO projects these data points onto an ("M"-1)-dimensional space by using the weight vector. We can guarantee the independence of the extracted signals with the use of GSO.
In order to find the correct value of formula_60, we can use the gradient descent method. We first whiten the data, transforming formula_65 into a new mixture formula_66 which has unit variance, with formula_67. This can be achieved by applying singular value decomposition to formula_65,
formula_68
We rescale each vector, formula_69, and let formula_70. The signal extracted by a weight vector formula_60 is formula_71. If the weight vector formula_60 has unit length, then the variance of formula_57 is also 1, that is formula_72. The kurtosis can thus be written as:
formula_73
The updating process for formula_60 is:
formula_74
where formula_75 is a small constant (the learning rate) chosen so that formula_60 converges to the optimal solution. After each update we normalize formula_76, set formula_77, and repeat the updating process until convergence. Other update rules, such as fixed-point iterations of the FastICA type, can also be used to update the weight vector formula_60.
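A minimal sketch of this procedure on whitened data (our own illustration; the step size, iteration count and random initialization are arbitrary, and the sign of the step is chosen so that the kurtosis of the extracted signal increases, as is appropriate for the super-Gaussian sources assumed above):

```python
import numpy as np

def kurtosis(y):
    """Sample kurtosis K = E[y^4] / (E[y^2])^2 - 3."""
    y = y - y.mean()
    return np.mean(y**4) / np.mean(y**2)**2 - 3.0

def extract_one_source(Z, eta=0.1, n_iter=500, seed=0):
    """Projected gradient steps on the unit sphere using E[z (w^T z)^3],
    applied to whitened data Z of shape (n_signals, n_samples)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=Z.shape[0])
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        y = w @ Z
        grad = (Z * y**3).mean(axis=1)   # estimate of E[z (w^T z)^3]
        w = w + eta * grad               # step that increases the kurtosis of y
        w /= np.linalg.norm(w)           # renormalize to unit length
    return w, w @ Z
```

To extract several sources, one repeats this while keeping each new weight vector orthogonal to the previously found ones (Gram-Schmidt orthogonalization), as described above.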
Another approach is to use negentropy instead of kurtosis. Using negentropy is more robust than using kurtosis, as kurtosis is very sensitive to outliers. The negentropy methods are based on an important property of the Gaussian distribution: a Gaussian variable has the largest entropy among all continuous random variables of equal variance. This is also the reason why we want to find the most non-Gaussian variables. A simple proof can be found in Differential entropy.
formula_78
where y is a Gaussian random variable with the same covariance matrix as x
formula_79
An approximation for negentropy is
formula_80
A proof can be found in the original papers of Comon; it has been reproduced in the book "Independent Component Analysis" by Aapo Hyvärinen, Juha Karhunen, and Erkki Oja. This approximation also suffers from the same problem as kurtosis (sensitivity to outliers). Other approaches have been developed.
formula_81
A common choice for formula_82 and formula_83 is
formula_84 and formula_85
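As a rough numerical illustration (a simplification of ours: a single contrast function is used and compared directly against a Gaussian reference, so the value is only proportional to the approximated negentropy):

```python
import numpy as np

def logcosh(u, a1=1.0):
    """G(u) = (1/a1) * log cosh(a1 * u), the first contrast function above."""
    return np.log(np.cosh(a1 * u)) / a1

def gauss_contrast(u):
    """G(u) = -exp(-u^2 / 2), the second contrast function above."""
    return -np.exp(-u**2 / 2.0)

def negentropy_proxy(y, G=logcosh, n_gauss=200_000, seed=0):
    """(E[G(y)] - E[G(v)])^2 with v ~ N(0, 1) and y standardized;
    close to zero for Gaussian y, clearly positive for non-Gaussian y."""
    rng = np.random.default_rng(seed)
    y = (np.asarray(y, dtype=float) - np.mean(y)) / np.std(y)
    v = rng.standard_normal(n_gauss)
    return (np.mean(G(y)) - np.mean(G(v)))**2
```

For example, a Laplacian (super-Gaussian) sample gives a visibly larger value than a Gaussian sample of the same size, which is what makes this quantity usable as a non-Gaussianity measure.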
Based on infomax.
Infomax ICA is essentially a multivariate, parallel version of projection pursuit. Whereas projection pursuit extracts a series of signals one at a time from a set of "M" signal mixtures, ICA extracts "M" signals in parallel. This tends to make ICA more robust than projection pursuit.
The projection pursuit method uses Gram-Schmidt orthogonalization to ensure the independence of the extracted signals, while ICA uses infomax and maximum likelihood estimation to ensure their independence. The non-normality of the extracted signal is achieved by assigning an appropriate model, or prior, for the signal.
In short, the process of ICA based on infomax is as follows: given a set of signal mixtures formula_65 and a set of identical independent model cumulative distribution functions (cdfs) formula_86, we seek the unmixing matrix formula_87 which maximizes the joint entropy of the signals formula_88, where formula_89 are the signals extracted by formula_87. Given the optimal formula_87, the signals formula_90 have maximum entropy and are therefore independent, which ensures that the extracted signals formula_91 are also independent. formula_86 is an invertible function, and is the signal model. Note that if the source signal model probability density function formula_92 matches the probability density function of the extracted signal formula_93, then maximizing the joint entropy of formula_94 also maximizes the amount of mutual information between formula_65 and formula_90. For this reason, using entropy to extract independent signals is known as infomax.
Consider the entropy of the vector variable formula_88, where formula_89 is the set of signals extracted by the unmixing matrix formula_87. For a finite set of values sampled from a distribution with pdf formula_93, the entropy of formula_90 can be estimated as:
formula_95
The joint pdf formula_96 can be shown to be related to the joint pdf formula_93 of the extracted signals by the multivariate form:
formula_97
where formula_98 is the Jacobian matrix. We have formula_99, and formula_100 is the pdf assumed for source signals formula_101, therefore,
formula_102
therefore,
formula_103
We know that when formula_104, formula_96 is of uniform distribution, and formula_105 is maximized. Since
formula_106
where formula_107 is the absolute value of the determinant of the unmixing matrix formula_87. Therefore,
formula_108
so,
formula_109
since formula_110, and the choice of formula_87 does not affect formula_111, we can instead maximize the function
formula_112
to achieve the independence of the extracted signal.
If there are "M" marginal pdfs of the model joint pdf formula_113 are independent and use the commonly super-gaussian model pdf for the source signals formula_114, then we have
formula_115
In sum, given an observed signal mixture formula_65, the corresponding set of extracted signals formula_57 and the source signal model formula_116, we can find the optimal unmixing matrix formula_87 and make the extracted signals independent and non-Gaussian. As in the projection pursuit situation, we can use the gradient descent method to find the optimal solution for the unmixing matrix.
Based on maximum likelihood estimation.
Maximum likelihood estimation (MLE) is a standard statistical tool for finding parameter values (e.g. the unmixing matrix formula_87) that provide the best fit of some data (e.g., the extracted signals formula_117) to a given model (e.g., the assumed joint probability density function (pdf) formula_92 of the source signals).
The ML "model" includes a specification of a pdf, which in this case is the pdf formula_92 of the unknown source signals formula_118. Using ML ICA, the objective is to find an unmixing matrix that yields extracted signals formula_119 with a joint pdf as similar as possible to the joint pdf formula_92 of the unknown source signals formula_118.
MLE is thus based on the assumption that if the model pdf formula_92 and the model parameters formula_120 are correct then a high probability should be obtained for the data formula_121 that were actually observed. Conversely, if formula_120 is far from the correct parameter values then a low probability of the observed data would be expected.
Using MLE, we call the probability of the observed data for a given set of model parameter values (e.g., a pdf formula_92 and a matrix formula_120) the "likelihood" of the model parameter values given the observed data.
We define a "likelihood" function formula_122 of formula_87:
formula_123
This equals the probability density at formula_121, since formula_124.
Thus, if we wish to find a formula_87 that is most likely to have generated the observed mixtures formula_121 from the unknown source signals formula_118 with pdf formula_92 then we need only find that formula_87 which maximizes the "likelihood" formula_122. The unmixing matrix that maximizes equation is known as the MLE of the optimal unmixing matrix.
It is common practice to use the log "likelihood", because this is easier to evaluate. As the logarithm is a monotonic function, the formula_87 that maximizes the function formula_122 also maximizes its logarithm formula_125. This allows us to take the logarithm of equation above, which yields the log "likelihood" function
formula_126
If we substitute a commonly used high-kurtosis model pdf for the source signals, formula_127, then we have
formula_128
The matrix formula_87 that maximizes this function is the maximum likelihood estimate of the optimal unmixing matrix.
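One way to maximize this log likelihood numerically is plain gradient ascent. A rough sketch follows (our own, not a published implementation; for the model pdf above the averaged gradient works out to W^{-T} - (2/N) Σ tanh(y) x^T, and the step size and iteration count are arbitrary choices):

```python
import numpy as np

def log_likelihood(W, X):
    """(1/N) log L(W) for the model p_s(s) = 1 - tanh(s)^2, up to an additive
    constant.  X has shape (M, N): M mixtures, N samples."""
    Y = W @ X
    return (np.log(1.0 - np.tanh(Y)**2 + 1e-12).sum(axis=0).mean()
            + np.log(abs(np.linalg.det(W))))

def ml_ica(X, eta=0.01, n_iter=2000, seed=0):
    """Gradient ascent on the log likelihood; returns the unmixing matrix W."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    W = np.eye(M) + 0.1 * rng.normal(size=(M, M))   # start near the identity
    for _ in range(n_iter):
        Y = W @ X
        grad = np.linalg.inv(W).T - (2.0 / N) * np.tanh(Y) @ X.T
        W = W + eta * grad
    return W
```

A commonly used refinement is the natural-gradient form of this update, which multiplies the gradient by W^T W and so avoids the matrix inversion, but the plain version above is enough to illustrate the idea.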
History and background.
The early general framework for independent component analysis was introduced by Jeanny Hérault and Bernard Ans in 1984, further developed by Christian Jutten in 1985 and 1986, refined by Pierre Comon in 1991, and popularized in his paper of 1994. In 1995, Tony Bell and Terry Sejnowski introduced a fast and efficient ICA algorithm based on infomax, a principle introduced by Ralph Linsker in 1987. An interesting link between the ML and infomax approaches has also been established. A comprehensive tutorial on the ML approach was published by J.-F. Cardoso in 1998.
There are many algorithms available in the literature which perform ICA. A widely used one, including in industrial applications, is the FastICA algorithm, developed by Hyvärinen and Oja, which uses negentropy as a cost function, already proposed seven years earlier by Pierre Comon in this context. Other examples are more closely related to blind source separation, where a more general approach is used. For example, one can drop the independence assumption and separate mutually correlated signals, thus, statistically "dependent" signals. Sepp Hochreiter and Jürgen Schmidhuber showed how to obtain non-linear ICA or source separation as a by-product of regularization (1999). Their method does not require a priori knowledge about the number of independent sources.
Applications.
ICA can be extended to analyze non-physical signals. For instance, ICA has been applied to discover discussion topics on a bag of news list archives.
Some ICA applications are listed below:
Availability.
ICA can be applied through the following software:
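As one concrete example, the open-source scikit-learn package provides a FastICA estimator; a minimal sketch (the sources and mixing matrix below are made up for illustration) looks as follows:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 5000)

# Two independent, non-Gaussian sources and an arbitrary mixing matrix.
S = np.c_[np.sin(3 * t), np.sign(np.sin(5 * t))]
A = np.array([[1.0, 0.5],
              [0.4, 1.0]])
X = S @ A.T                      # observed mixtures, shape (n_samples, 2)

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)     # estimated sources
A_est = ica.mixing_              # estimated mixing matrix
```

As noted above, the recovered components are only determined up to ordering, sign and scaling.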
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M"
},
{
"math_id": 1,
"text": "N"
},
{
"math_id": 2,
"text": "M \\times N"
},
{
"math_id": 3,
"text": "M = N"
},
{
"math_id": 4,
"text": "M < N"
},
{
"math_id": 5,
"text": "M > N"
},
{
"math_id": 6,
"text": "\\boldsymbol{x}=(x_1,\\ldots,x_m)^T"
},
{
"math_id": 7,
"text": "\\boldsymbol{s}=(s_1,\\ldots,s_n)^T."
},
{
"math_id": 8,
"text": "\\boldsymbol{x},"
},
{
"math_id": 9,
"text": "\\boldsymbol{W}"
},
{
"math_id": 10,
"text": "\\boldsymbol{s} = \\boldsymbol{W} \\boldsymbol{x},"
},
{
"math_id": 11,
"text": "\\boldsymbol{s}"
},
{
"math_id": 12,
"text": "F(s_1,\\ldots,s_n)"
},
{
"math_id": 13,
"text": "x_i"
},
{
"math_id": 14,
"text": "s_k"
},
{
"math_id": 15,
"text": "k=1,\\ldots,n"
},
{
"math_id": 16,
"text": "x_i = a_{i,1} s_1 + \\cdots + a_{i,k} s_k + \\cdots + a_{i,n} s_n"
},
{
"math_id": 17,
"text": "a_{i,k}"
},
{
"math_id": 18,
"text": "\\boldsymbol{x}=\\sum_{k=1}^{n} s_k \\boldsymbol{a}_k"
},
{
"math_id": 19,
"text": "\\boldsymbol{x}"
},
{
"math_id": 20,
"text": "\\boldsymbol{a}_k=(\\boldsymbol{a}_{1,k},\\ldots,\\boldsymbol{a}_{m,k})^T"
},
{
"math_id": 21,
"text": "\\boldsymbol{a}_k"
},
{
"math_id": 22,
"text": "\\boldsymbol{A}=(\\boldsymbol{a}_1,\\ldots,\\boldsymbol{a}_n)"
},
{
"math_id": 23,
"text": "\\boldsymbol{x}=\\boldsymbol{A} \\boldsymbol{s}"
},
{
"math_id": 24,
"text": "\\boldsymbol{s}=(s_1,\\ldots,s_n)^T"
},
{
"math_id": 25,
"text": "\\boldsymbol{x}_1,\\ldots,\\boldsymbol{x}_N"
},
{
"math_id": 26,
"text": "\\boldsymbol{A}"
},
{
"math_id": 27,
"text": "\\boldsymbol{w}"
},
{
"math_id": 28,
"text": "s_k = \\boldsymbol{w}^T \\boldsymbol{x}"
},
{
"math_id": 29,
"text": "\\boldsymbol{W}=\\boldsymbol{A}^{-1}"
},
{
"math_id": 30,
"text": "n=m"
},
{
"math_id": 31,
"text": "n>m"
},
{
"math_id": 32,
"text": "n\\sim N(0,\\operatorname{diag}(\\Sigma))"
},
{
"math_id": 33,
"text": "\\boldsymbol{x}=\\boldsymbol{A} \\boldsymbol{s}+n"
},
{
"math_id": 34,
"text": "f(\\cdot|\\theta)"
},
{
"math_id": 35,
"text": "\\theta"
},
{
"math_id": 36,
"text": "x=f(s|\\theta)+n"
},
{
"math_id": 37,
"text": "m"
},
{
"math_id": 38,
"text": "n"
},
{
"math_id": 39,
"text": "m \\ge n"
},
{
"math_id": 40,
"text": "{x_1, x_2, \\ldots, x_m}"
},
{
"math_id": 41,
"text": "{y_1, y_2, \\ldots, y_n}"
},
{
"math_id": 42,
"text": "\\boldsymbol{G}"
},
{
"math_id": 43,
"text": "g_{ij} = 1"
},
{
"math_id": 44,
"text": "i"
},
{
"math_id": 45,
"text": "y_i=1"
},
{
"math_id": 46,
"text": "j"
},
{
"math_id": 47,
"text": "g_{ij}=1"
},
{
"math_id": 48,
"text": "x_j=1"
},
{
"math_id": 49,
"text": "\nx_i = \\bigvee_{j=1}^n (g_{ij}\\wedge y_j), i = 1, 2, \\ldots, m,\n"
},
{
"math_id": 50,
"text": "\\wedge"
},
{
"math_id": 51,
"text": "\\vee"
},
{
"math_id": 52,
"text": "\\boldsymbol{X}"
},
{
"math_id": 53,
"text": "\\boldsymbol{X}^0"
},
{
"math_id": 54,
"text": "x_{ij} = 0, \\forall j"
},
{
"math_id": 55,
"text": "\nK=\\frac{\\operatorname{E}[(\\mathbf{y}-\\mathbf{\\overline{y}})^4]}{(\\operatorname{E}[(\\mathbf{y}-\\mathbf{\\overline{y}})^2])^2}-3 \n"
},
{
"math_id": 56,
"text": "\\mathbf{\\overline{y}}"
},
{
"math_id": 57,
"text": "\\mathbf{y}"
},
{
"math_id": 58,
"text": "\\mathbf{y} = \\mathbf{w}^T \\mathbf{x}"
},
{
"math_id": 59,
"text": "\\mathbf{x}=(x_1,x_2,\\ldots,x_M)^T"
},
{
"math_id": 60,
"text": "\\mathbf{w}"
},
{
"math_id": 61,
"text": "\\mathbf{s}"
},
{
"math_id": 62,
"text": "\\mathbf{y} = \\mathbf{s}"
},
{
"math_id": 63,
"text": "S_1"
},
{
"math_id": 64,
"text": "S_2"
},
{
"math_id": 65,
"text": "\\mathbf{x}"
},
{
"math_id": 66,
"text": "\\mathbf{z}"
},
{
"math_id": 67,
"text": "\\mathbf{z}=(z_1,z_2,\\ldots,z_M)^T"
},
{
"math_id": 68,
"text": "\\mathbf{x} = \\mathbf{U} \\mathbf{D} \\mathbf{V}^T"
},
{
"math_id": 69,
"text": "U_i=U_i/\\operatorname{E}(U_i^2)"
},
{
"math_id": 70,
"text": "\\mathbf{z} = \\mathbf{U}"
},
{
"math_id": 71,
"text": "\\mathbf{y} = \\mathbf{w}^T \\mathbf{z}"
},
{
"math_id": 72,
"text": "\\operatorname{E}[(\\mathbf{w}^T \\mathbf{z})^2]=1"
},
{
"math_id": 73,
"text": "\nK=\\frac{\\operatorname{E}[\\mathbf{y}^4]}{(\\operatorname{E}[\\mathbf{y}^2])^2}-3=\\operatorname{E}[(\\mathbf{w}^T \\mathbf{z})^4]-3. \n"
},
{
"math_id": 74,
"text": "\\mathbf{w}_{new}=\\mathbf{w}_{old}-\\eta\\operatorname{E}[\\mathbf{z}(\\mathbf{w}_{old}^T \\mathbf{z})^3 ]."
},
{
"math_id": 75,
"text": "\\eta"
},
{
"math_id": 76,
"text": "\\mathbf{w}_{new}=\\frac{\\mathbf{w}_{new}}{|\\mathbf{w}_{new}|}"
},
{
"math_id": 77,
"text": "\\mathbf{w}_{old}=\\mathbf{w}_{new}"
},
{
"math_id": 78,
"text": "J(x) = S(y) - S(x)\\,"
},
{
"math_id": 79,
"text": "S(x) = - \\int p_x(u) \\log p_x(u) du"
},
{
"math_id": 80,
"text": "J(x)=\\frac{1}{12}(E(x^3))^2 + \\frac{1}{48}(kurt(x))^2"
},
{
"math_id": 81,
"text": "J(y) = k_1(E(G_1(y)))^2 + k_2(E(G_2(y)) - E(G_2(v))^2"
},
{
"math_id": 82,
"text": "G_1"
},
{
"math_id": 83,
"text": "G_2"
},
{
"math_id": 84,
"text": "G_1 = \\frac{1}{a_1}\\log(\\cosh(a_1u))"
},
{
"math_id": 85,
"text": "G_2 = -\\exp(-\\frac{u^2}{2})"
},
{
"math_id": 86,
"text": "g"
},
{
"math_id": 87,
"text": "\\mathbf{W}"
},
{
"math_id": 88,
"text": "\\mathbf{Y}=g(\\mathbf{y})"
},
{
"math_id": 89,
"text": "\\mathbf{y}=\\mathbf{Wx}"
},
{
"math_id": 90,
"text": "\\mathbf{Y}"
},
{
"math_id": 91,
"text": "\\mathbf{y}=g^{-1}(\\mathbf{Y})"
},
{
"math_id": 92,
"text": "p_s"
},
{
"math_id": 93,
"text": "p_{\\mathbf{y}}"
},
{
"math_id": 94,
"text": "Y"
},
{
"math_id": 95,
"text": "\nH(\\mathbf{Y})=-\\frac{1}{N}\\sum_{t=1}^N \\ln p_{\\mathbf{Y}}(\\mathbf{Y}^t)\n"
},
{
"math_id": 96,
"text": "p_{\\mathbf{Y}}"
},
{
"math_id": 97,
"text": "\np_{\\mathbf{Y}}(Y)=\\frac{p_{\\mathbf{y}}(\\mathbf{y})}{|\\frac{\\partial\\mathbf{Y}}{\\partial \\mathbf{y}}|}\n"
},
{
"math_id": 98,
"text": "\\mathbf{J}=\\frac{\\partial\\mathbf{Y}}{\\partial \\mathbf{y}}"
},
{
"math_id": 99,
"text": "|\\mathbf{J}|=g'(\\mathbf{y})"
},
{
"math_id": 100,
"text": "g'"
},
{
"math_id": 101,
"text": "g'=p_s"
},
{
"math_id": 102,
"text": "\np_{\\mathbf{Y}}(Y)=\\frac{p_{\\mathbf{y}}(\\mathbf{y})}{|\\frac{\\partial\\mathbf{Y}}{\\partial \\mathbf{y}}|}=\\frac{p_\\mathbf{y}(\\mathbf{y})}{p_\\mathbf{s}(\\mathbf{y})}\n"
},
{
"math_id": 103,
"text": "\nH(\\mathbf{Y})=-\\frac{1}{N}\\sum_{t=1}^N \\ln\\frac{p_\\mathbf{y}(\\mathbf{y})}{p_\\mathbf{s}(\\mathbf{y})}\n"
},
{
"math_id": 104,
"text": "p_{\\mathbf{y}}=p_s"
},
{
"math_id": 105,
"text": "H({\\mathbf{Y}})"
},
{
"math_id": 106,
"text": "\np_{\\mathbf{y}}(\\mathbf{y})=\\frac{p_\\mathbf{x}(\\mathbf{x})}{|\\frac{\\partial\\mathbf{y}}{\\partial\\mathbf{x}}|}=\\frac{p_\\mathbf{x}(\\mathbf{x})}{|\\mathbf{W}|}\n"
},
{
"math_id": 107,
"text": "|\\mathbf{W}|"
},
{
"math_id": 108,
"text": "\nH(\\mathbf{Y})=-\\frac{1}{N}\\sum_{t=1}^N \\ln\\frac{p_\\mathbf{x}(\\mathbf{x}^t)}{|\\mathbf{W}|p_\\mathbf{s}(\\mathbf{y}^t)}\n"
},
{
"math_id": 109,
"text": "\nH(\\mathbf{Y})=\\frac{1}{N}\\sum_{t=1}^N \\ln p_\\mathbf{s}(\\mathbf{y}^t)+\\ln|\\mathbf{W}|+H(\\mathbf{x})\n"
},
{
"math_id": 110,
"text": "H(\\mathbf{x})=-\\frac{1}{N}\\sum_{t=1}^N\\ln p_\\mathbf{x}(\\mathbf{x}^t)"
},
{
"math_id": 111,
"text": "H_{\\mathbf{x}}"
},
{
"math_id": 112,
"text": "\nh(\\mathbf{Y})=\\frac{1}{N}\\sum_{t=1}^N \\ln p_\\mathbf{s}(\\mathbf{y}^t)+\\ln|\\mathbf{W}|\n"
},
{
"math_id": 113,
"text": "p_{\\mathbf{s}}"
},
{
"math_id": 114,
"text": "p_{\\mathbf{s}}=(1-\\tanh(\\mathbf{s})^2)"
},
{
"math_id": 115,
"text": "\nh(\\mathbf{Y})=\\frac{1}{N}\\sum_{i=1}^M\\sum_{t=1}^N \\ln (1-\\tanh(\\mathbf{w}_i^\\mathsf{T}\\mathbf{x}^t)^2)+\\ln|\\mathbf{W}|\n"
},
{
"math_id": 116,
"text": "p_{\\mathbf{s}}=g'"
},
{
"math_id": 117,
"text": "y"
},
{
"math_id": 118,
"text": "s"
},
{
"math_id": 119,
"text": "y = \\mathbf{W}x"
},
{
"math_id": 120,
"text": "\\mathbf{A}"
},
{
"math_id": 121,
"text": "x"
},
{
"math_id": 122,
"text": "\\mathbf{L(W)}"
},
{
"math_id": 123,
"text": "\\mathbf{ L(W)} = p_s (\\mathbf{W}x)|\\det \\mathbf{W}|. "
},
{
"math_id": 124,
"text": "s = \\mathbf{W}x"
},
{
"math_id": 125,
"text": "\\ln \\mathbf{L(W)}"
},
{
"math_id": 126,
"text": "\\ln \\mathbf{L(W)} =\\sum_{i}\\sum_{t} \\ln p_s(w^T_ix_t) + N\\ln|\\det \\mathbf{W}|"
},
{
"math_id": 127,
"text": "p_s = (1-\\tanh(s)^2)"
},
{
"math_id": 128,
"text": "\\ln \\mathbf{L(W)} ={1 \\over N}\\sum_{i}^{M} \\sum_{t}^{N}\\ln(1-\\tanh(w^T_i x_t )^2) + \\ln |\\det \\mathbf{W}|"
}
] |
https://en.wikipedia.org/wiki?curid=598031
|
5980344
|
Propellane
|
Class of organic compounds with three rings sharing a single carbon bond
In organic chemistry, propellane is any member of a class of polycyclic hydrocarbons whose carbon skeleton consists of three rings of carbon atoms sharing a common carbon–carbon covalent bond. The concept was introduced in 1966 by D. Ginsburg. Propellanes with small cycles are highly strained and unstable, and are easily turned into polymers with interesting structures, such as staffanes. Partly for these reasons, they have been the object of much research.
Nomenclature.
The name derives from a supposed resemblance of the molecule to a propeller: namely, the rings would be the propeller's blades, and the shared C–C bond would be its axis. The bond shared by the three cycles is usually called the "bridge"; the shared carbon atoms are then the "bridgeheads".
Under IUPAC nomenclature, the homologous series of all-carbon propellanes is named tricyclo[x.y.z.0^{1,(x+2)}]alkane. More common in the literature is the notation ["x"."y"."z"]propellane, which denotes the member of the family whose rings have "x", "y", and "z" carbons, not counting the two bridgeheads; or "x" + 2, "y" + 2, and "z" + 2 carbons, counting them. The chemical formula is therefore C2+"x"+"y"+"z"H2("x"+"y"+"z"). The minimum value for "x", "y", and "z" is 1, corresponding to three fused cyclopropane rings forming [1.1.1]propellane. There is no structural ordering between the rings; for example, [1.3.2]propellane is the same substance as [3.2.1]propellane. Therefore, it is customary to sort the indices in decreasing order, "x" ≥ "y" ≥ "z".
Further, heterosubstituted propellanes or structurally embedded propellane moieties exist and have been synthesised and follow a more complex nomenclature (see below).
General properties.
Strain.
Propellanes with small cycles, such as [1.1.1]propellane or [2.2.2]propellane, bear a high absolute strain energy. The two bridgehead carbons and their bonds may even be described as having an inverted tetrahedral geometry.
The resulting steric strain causes such compounds to be unstable and highly reactive. The interbridgehead C-C bond is easily broken (even spontaneously) to yield less-strained bicyclic or even monocyclic hydrocarbons. This so-called strain-release chemistry is used in strategies to access otherwise hard-to-obtain structures.
Surprisingly, the most strained member, [1.1.1], is far more stable than the other small-ring members ([2.1.1], [2.2.1], [2.2.2], [3.2.1], [3.1.1], and [4.1.1]), which can be explained by the special bonding situation of the interbridgehead bond.
Bonding properties.
The bonding situation of small-ring propellanes, such as [n.1.1]propellanes, is a topic of debate. Recent computational studies describe the interbridgehead bond as a charge-shift bond possessing an unusual positive Laplacian formula_0 of the electron density formula_1. Studies by Sterling et al. suggest that delocalisation effects onto the three-membered bridges relax Pauli repulsion and thus stabilise the propellane core.
Reactivity.
Propellanes, especially the synthetically well-studied [1.1.1]propellane, are known to possess omniphilic reactivity. Anions and radicals add to the interbridgehead bond, resulting in bicyclo[1.1.1]pentyl units. In contrast, cations and metals decompose the tricyclic core to monocyclic systems by opening the bridge bonds, forming "exo"-methylene cyclobutanes. For [3.1.1]propellane only radical addition has been reported. The reactivity of other propellanes is far less explored and their reactivity profiles are less clear.
Polymerization.
In principle, any propellane can be polymerized by breaking the axial C–C bond to yield a radical with two active centers, and then joining these radicals in a linear chain. For the propellanes with small cycles (such as [1.1.1], [3.2.1], or 1,3-dihydroadamantane), this process is easily achieved, yielding either simple polymers or alternating copolymers. For example, [1.1.1]propellane yields spontaneously an interesting rigid polymer called staffane; and [3.2.1]propellane combines spontaneously with oxygen at room temperature to give a copolymer where the bridge-opened propellane units [–C8H12–] alternate with [–O–O–] groups.
Synthesis.
The smaller-cycle propellanes are difficult to synthesize because of their strain. Larger members are more easily obtained. Weber and Cook described in 1978 a general method which should yield ["n".3.3]propellanes for any "n" ≥ 3.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\nabla^2 "
},
{
"math_id": 1,
"text": "\\rho"
}
] |
https://en.wikipedia.org/wiki?curid=5980344
|
59808021
|
Theorem of the gnomon
|
Certain parallelograms occurring in a gnomon have areas of equal size
The theorem of the gnomon states that certain parallelograms occurring in a gnomon have areas of equal size.
Theorem.
In a parallelogram formula_0 with a point formula_1 on the diagonal formula_2, the parallel to formula_3 through formula_1 intersects the side formula_4 in formula_5 and the side formula_6 in formula_7. Similarly the parallel to the side formula_6 through formula_1 intersects the side formula_3 in formula_8 and the side formula_9 in formula_10. Then the theorem of the gnomon states that the parallelograms formula_11 and formula_12 have equal areas.
"Gnomon" is the name for the L-shaped figure consisting of the two overlapping parallelograms formula_13 and formula_14. The parallelograms of equal area formula_11 and formula_12 are called "complements" (of the parallelograms on diagonal formula_15 and formula_16).
Proof.
The proof of the theorem is straightforward if one considers the areas of the main parallelogram and the two inner parallelograms around its diagonal:
formula_17
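A quick numerical check of this identity (the spanning vectors and the position of formula_1 on the diagonal below are arbitrary choices, made only for illustration) can be done by computing the areas of the two complements with the 2D cross product:

```python
import numpy as np

def parallelogram_area(p, q, r):
    """Area of the parallelogram with vertex p and adjacent vertices q and r."""
    a, b = q - p, r - p
    return abs(a[0] * b[1] - a[1] * b[0])

A = np.array([0.0, 0.0])
u = np.array([5.0, 1.0])    # vector from A to B
v = np.array([1.0, 3.0])    # vector from A to D
t = 0.3                     # position of P on the diagonal AC

B, D, C = A + u, A + v, A + u + v
P = A + t * (u + v)
H = A + t * u               # parallel to AD through P meets AB in H
G = A + t * u + v           # ... and meets CD in G (shown to complete the figure)
I = A + t * v               # parallel to AB through P meets AD in I
F = A + u + t * v           # ... and meets BC in F (shown to complete the figure)

print(parallelogram_area(H, B, P), parallelogram_area(I, P, D))   # 2.94 2.94
```

Both complements come out with the same area, in line with the theorem.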
Applications and extensions.
The theorem of the gnomon can be used to construct a new parallelogram or rectangle of equal area to a given parallelogram or rectangle by means of straightedge and compass constructions. It also allows the representation of the division of two numbers in geometrical terms, an important feature for reformulating geometrical problems in algebraic terms. More precisely, if two numbers are given as lengths of line segments, one can construct a third line segment whose length matches the quotient of those two numbers (see diagram). Another application is to transfer the ratio of partition of one line segment to another line segment (of different length), thus dividing that other line segment in the same ratio as a given line segment and its partition (see diagram).
A similar statement can be made in three dimensions for parallelepipeds. In this case you have a point formula_1 on the space diagonal of a parallelepiped, and instead of two parallel lines you have three planes through formula_1, each parallel to the faces of the parallelepiped. The three planes partition the parallelepiped into eight smaller parallelepipeds; two of those surround the diagonal and meet at formula_1. Now each of those two parallelepipeds around the diagonal has three of the remaining six parallelepipeds attached to it, and those three play the role of the complements and are of equal volume (see diagram).
General theorem about nested parallelograms.
The theorem of the gnomon is a special case of a more general statement about nested parallelograms with a common diagonal. For a given parallelogram formula_0, consider an arbitrary inner parallelogram formula_18 also having formula_2 as a diagonal. Furthermore, there are two uniquely determined parallelograms formula_19 and formula_20 whose sides are parallel to the sides of the outer parallelogram and which share the vertex formula_10 with the inner parallelogram. The difference of the areas of those two parallelograms then equals the area of the inner parallelogram, that is:
formula_21
This statement yields the theorem of the gnomon if one looks at a degenerate inner parallelogram formula_18 whose vertices are all on the diagonal formula_2. This means in particular for the parallelograms formula_19 and formula_20, that their common point formula_10 is on the diagonal and that the difference of their areas is zero, which is exactly what the theorem of the gnomon states.
Historical aspects.
The theorem of the gnomon was described as early as in Euclid's Elements (around 300 BC), and there it plays an important role in the derivation of other theorems. It is given as proposition 43 in Book I of the Elements, where it is phrased as a statement about parallelograms without using the term "gnomon". The latter is introduced by Euclid as the second definition of the second book of Elements. Further theorems for which the gnomon and its properties play an important role are proposition 6 in Book II, proposition 29 in Book VI and propositions 1 to 4 in Book XIII.
|
[
{
"math_id": 0,
"text": "ABCD"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "AC"
},
{
"math_id": 3,
"text": "AD"
},
{
"math_id": 4,
"text": "CD"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "AB"
},
{
"math_id": 7,
"text": "H"
},
{
"math_id": 8,
"text": "I"
},
{
"math_id": 9,
"text": "BC"
},
{
"math_id": 10,
"text": "F"
},
{
"math_id": 11,
"text": "HBFP"
},
{
"math_id": 12,
"text": "IPGD"
},
{
"math_id": 13,
"text": "ABFI"
},
{
"math_id": 14,
"text": "AHGD"
},
{
"math_id": 15,
"text": "PFCG"
},
{
"math_id": 16,
"text": "AHPI"
},
{
"math_id": 17,
"text": "|IPGD|=\\frac{|ABCD|}{2}-\\frac{|AHPI|}{2}-\\frac{|PFCG|}{2}=|HBFP| "
},
{
"math_id": 18,
"text": "AFCE"
},
{
"math_id": 19,
"text": "GFHD"
},
{
"math_id": 20,
"text": "IBJF"
},
{
"math_id": 21,
"text": "|AFCE|=|GFHD|-|IBJF| "
}
] |
https://en.wikipedia.org/wiki?curid=59808021
|
5980831
|
Contract bridge probabilities
|
Mathematical probabilities in the game of bridge
In the game of bridge, mathematical probabilities play a significant role. Different declarer play strategies lead to success depending on the distribution of the opponents' cards. To decide which strategy has the highest likelihood of success, the declarer needs to have at least an elementary knowledge of probabilities.
The tables below specify the various prior probabilities, i.e. the probabilities in the absence of any further information. During bidding and play, more information about the hands becomes available, allowing players to improve their probability estimates.
Probability of suit distributions (for missing trumps, etc.) in two hidden hands.
This table represents the different ways that two to eight particular cards may be distributed, or may "lie" or "split", between two unknown 13-card hands (before the bidding and play, or "a priori").
The table also shows the number of combinations of particular cards that match any numerical split and the probabilities for each combination.
These probabilities follow directly from the law of Vacant Places.
Calculation of probabilities.
Let formula_0 be the probability of an East player with formula_1 unknown cards holding formula_2 cards in a given suit and a West player with formula_3 unknown cards holding formula_4 cards in the given suit. The total number of arrangements of formula_5 cards in the suit in formula_6 spaces is formula_7, i.e. the number of permutations of formula_6 objects of which the cards in the suit are indistinguishable from one another and the cards not in the suit are indistinguishable from one another. The number of these arrangements in which East holds formula_2 cards in the suit and West holds formula_4 cards in the suit is given by formula_8. Therefore, formula_9. If the direction of the split is unimportant (it is only required that the split be formula_2-formula_4, not that East is specifically required to hold formula_2 cards), then the overall probability is given by formula_10, where the Kronecker delta ensures that the situation where East and West have the same number of cards in the suit is not counted twice.
The above probabilities assume formula_11 and that the direction of the split is unimportant, and so are given by formula_12. The more general formula can be used to calculate the probability of a suit breaking when a player is known, e.g. from the bidding, to hold cards in another suit. Suppose East is known to have 7 spades from the bidding and, after seeing dummy, you deduce that West holds 2 spades; then if your two lines of play are to hope either for diamonds 5-3 or for clubs 4-2, the "a priori" probabilities are 47% and 48% respectively, but formula_13 and formula_14, so now the club line is significantly better than the diamond line.
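A direct transcription of these formulas into code (function and variable names are our own) reproduces both the "a priori" values and the vacant-places example above:

```python
from math import comb

def p_directed(a, b, ne=13, nw=13):
    """P'(a, b, ne, nw): East (ne unknown cards) holds exactly a cards of the
    suit and West (nw unknown cards) holds exactly b."""
    return comb(a + b, a) * comb(ne + nw - a - b, ne - a) / comb(ne + nw, ne)

def p_split(a, b, ne=13, nw=13):
    """P(a, b): probability of an a-b split in either direction."""
    p = p_directed(a, b, ne, nw)
    if a != b:
        p += p_directed(b, a, ne, nw)
    return p

print(round(p_split(4, 2), 3))          # a priori 4-2 split: about 0.484
print(round(p_split(5, 3, 6, 11), 3))   # diamonds 5-3 with 6 and 11 vacant places
print(round(p_split(4, 2, 6, 11), 3))   # clubs 4-2 with 6 and 11 vacant places
```

The last two lines give roughly 42% and 47%, matching the figures quoted above.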
Probability of HCP distribution.
High card points (HCP) are usually counted using the Milton Work scale of 4/3/2/1 points for each Ace/King/Queen/Jack respectively. The a priori probabilities that a given hand contains no more than a specified number of HCP is given in the table below. To find the likelihood of a certain point range, one simply subtracts the two relevant cumulative probabilities. So, the likelihood of being dealt a 12-19 HCP hand (ranges inclusive) is the probability of having at most 19 HCP minus the probability of having at most 11 HCP, or: 0.9855 − 0.6518 = 0.3337.
Hand pattern probabilities.
A "hand pattern" denotes the distribution of the thirteen cards in a hand over the four suits. In total 39 hand patterns are possible, but only 13 of them have an "a priori probability" exceeding 1%. The most likely pattern is the 4-4-3-2 pattern consisting of two four-card suits, a three-card suit and a doubleton.
Note that the hand pattern leaves unspecified which particular suits contain the indicated lengths. For a 4-4-3-2 pattern, one needs to specify which suit contains the three-card and which suit contains the doubleton in order to identify the length in each of the four suits. There are four possibilities to first identify the three-card suit and three possibilities to next identify the doubleton. Hence, the number of "suit permutations" of the 4-4-3-2 pattern is twelve. Or, stated differently, in total there are twelve ways a 4-4-3-2 pattern can be mapped onto the four suits.
The table below lists all 39 possible hand patterns, their probabilities of occurrence, and the number of suit permutations for each pattern. The list is ordered according to likelihood of occurrence of the hand patterns.
The 39 hand patterns can be classified into four "hand types": balanced hands, single-suiters, two-suiters and three-suiters. The table below gives the "a priori" likelihood of being dealt a certain hand type.
Alternative groupings of the 39 hand patterns can be made either by longest suit or by shortest suit. The tables below give the "a priori" chance of being dealt a hand with a longest or a shortest suit of a given length.
Number of possible hands and deals.
There are 635,013,559,600 (formula_15) different hands that one player can hold. Furthermore, when the remaining 39 cards are included with all their combinations, there are 53,644,737,765,488,792,839,237,440,000 (about 53.6 × 10^27) different deals possible (formula_16). The immensity of this number can be understood by answering the question "How large an area would you need to spread out all possible bridge deals if each deal occupied only one square millimeter?". The answer is: "an area more than a hundred million times the surface area of Earth".
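Both counts are easy to reproduce (a two-line check, not part of the original article):

```python
from math import comb, factorial

hands = comb(52, 13)                        # 635,013,559,600 possible hands
deals = factorial(52) // factorial(13)**4   # 53,644,737,765,488,792,839,237,440,000 deals
print(hands, deals)
```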
Obviously, the deals that are identical except for swapping—say—the ♥2 and the ♥3 would be unlikely to give a different result. To make the irrelevance of small cards explicit (which is not always the case though), in bridge such small cards are generally denoted by an 'x'. Thus, the "number of possible deals" in this sense depends on how many non-honour cards (2, 3, .. 9) are considered 'indistinguishable'. For example, if 'x' notation is applied to all cards smaller than ten, then the suit distributions A987-K106-Q54-J32 and A432-K105-Q76-J98 would be considered identical.
The table below gives the number of deals when various numbers of small cards are considered indistinguishable.
Note that the last entry in the table (37,478,624) corresponds to the number of different distributions of the deck (the number of deals when cards are only distinguished by their suit).
Probability of Losing-Trick Counts.
The Losing-Trick Count is an alternative to the HCP count as a method of hand evaluation.
|
[
{
"math_id": 0,
"text": "P'(a, b, n_e, n_w)"
},
{
"math_id": 1,
"text": "n_e"
},
{
"math_id": 2,
"text": "a"
},
{
"math_id": 3,
"text": "n_w"
},
{
"math_id": 4,
"text": "b"
},
{
"math_id": 5,
"text": "(a+b)"
},
{
"math_id": 6,
"text": "(n_e + n_w)"
},
{
"math_id": 7,
"text": "T = \\frac{(n_e + n_w)!}{(n_e + n_w - a - b)!(a + b)!}"
},
{
"math_id": 8,
"text": "S = \\frac{n_e!}{a!(n_e - a)!} \\times \\frac{n_w!}{b!(n_w - b)!}"
},
{
"math_id": 9,
"text": "P'(a, b, n_e, n_w) = \\frac{S}{T} = \\frac{(a+b)!}{a!b!} \\times \\frac{n_e!n_w!(n_e + n_w - a - b)!}{(n_e+n_w)!(n_e - a)! (n_w - b)!} = \\binom{a+b}{a}\\frac{n_e!n_w!(n_e + n_w - a - b)!}{(n_e+n_w)!(n_e - a)! (n_w - b)!}=\\frac{\\binom{a+b}{a}\\binom{n_e+n_w-a-b}{n_e-a}}{\\binom{n_e+n_w}{n_e}}"
},
{
"math_id": 10,
"text": "P(a, b, n_e, n_w) = P'(a, b, n_e, n_w) + (1-\\delta_{a, b})P'(b, a, n_e, n_w)"
},
{
"math_id": 11,
"text": "n_e = n_w = 13"
},
{
"math_id": 12,
"text": "P(a, b) = P(a, b, 13, 13) = \\binom{a+b}{a}\\frac{13!13!(26-a-b)!}{26!(13-a)!(13-b)!}(2-\\delta_{a,b})"
},
{
"math_id": 13,
"text": "P(5, 3, 13-7, 13-2) \\thickapprox 42\\%"
},
{
"math_id": 14,
"text": "P(4, 2, 13-7, 13-2) \\thickapprox 47\\%"
},
{
"math_id": 15,
"text": "{52 \\choose 13}"
},
{
"math_id": 16,
"text": "52!/(13!)^4"
}
] |
https://en.wikipedia.org/wiki?curid=5980831
|
59814243
|
Molecular demon
|
A molecular demon or biological molecular machine is a biological macromolecule that resembles and seems to have the same properties as Maxwell's demon. These macromolecules gather information in order to recognize their substrate or ligand within a myriad of other molecules floating in the intracellular or extracellular plasm. This molecular recognition represents an information gain which is equivalent to an energy gain or a decrease in entropy. When the demon is reset, i.e. when the ligand is released, the information is erased, energy is dissipated and entropy increases, obeying the second law of thermodynamics. The difference between biological molecular demons and the thought experiment of Maxwell's demon is the latter's apparent violation of the second law.
Cycle.
The molecular demon switches mainly between two conformations. The first, or basic, state, upon recognizing and binding the ligand or substrate following an induced fit, undergoes a change in conformation which leads to the second, quasi-stable state: the protein-ligand complex. In order to reset the protein to its original, basic state, it needs ATP. When ATP is consumed or hydrolyzed, the ligand is released and the demon reverts to its basic state, ready to acquire information again. The cycle may then start again.
Ratchet.
The second law of thermodynamics is a statistical law. Hence, occasionally, single molecules may not obey the law. All molecules are subject to the molecular storm, i.e. the random movement of molecules in the cytoplasm and the extracellular fluid. Molecular demons or molecular machines, either biological or artificially constructed, are continuously pushed around by this random thermal motion, sometimes in a direction that violates the law. When this happens, and the macromolecule can be prevented from sliding back to its original state from the movement it has made or the conformational change it has undergone, as is the case with molecular demons, the molecule works as a ratchet; it is then possible to observe, for example, the creation of a gradient of ions or other molecules across the cell membrane, the movement of motor proteins along filament proteins, or the accumulation of products deriving from an enzymatic reaction. Even some artificial molecular machines and experiments are capable of forming a ratchet that apparently defies the second law of thermodynamics. All these molecular demons have to be reset to their original state by consuming external energy that is subsequently dissipated as heat. This final step, in which entropy increases, is therefore irreversible. If the demons were reversible, no work would be done.
Artificial.
An example of an artificial ratchet is the work by Serreli et al. (2007). Serreli et al. constructed a nanomachine, a rotaxane, consisting of a ring-shaped molecule that moves along a tiny molecular axle between two equal compartments, A and B. The normal, random movement of molecules sends the ring back and forth. Since the rings move freely, half of the rotaxanes have the ring on site B and the other half on site A. But the system used by Serreli et al. has a chemical gate on the rotaxane molecule, and the axle contains two sticky parts, one on either side of the gate. This gate opens when the ring is close by. The sticky part in B is close to the gate, and the rings pass more readily from B to A than from A to B. They obtained a deviation from equilibrium of 70:50 for A and B respectively, a bit like Maxwell's demon. But this system works only when light is shone on it and thus needs external energy, just like molecular demons.
Energy and information.
Landauer stated that information is physical. His principle sets fundamental thermodynamical constraints for classical and quantum information processing. Much effort has been dedicated to incorporating information into thermodynamics and measuring the entropic and energetic costs of manipulating information. Gaining information decreases entropy, which has an energy cost; this energy has to be collected from the environment. Landauer established the equivalence of one bit of information with an energy of kT ln 2, where k is the Boltzmann constant and T is room temperature. This bound is called Landauer's limit. Erasing information increases entropy instead. Toyabe et al. (2010) were able to demonstrate experimentally that information can be converted into free energy. It is an elegant experiment that consists of a microscopic particle on a spiral-staircase-like potential. The steps have a height corresponding to kBT, where kB is the Boltzmann constant and T is the temperature. The particle jumps between steps due to random thermal motion. Since the downward jumps following the gradient are more frequent than the upward ones, the particle falls down the stairs on average. But when an upward jump is observed, a block is placed behind the particle to prevent it from falling, just like in a ratchet. This way it should climb the stairs. Information is gained by measuring the particle's location, which is equivalent to a gain in energy, i.e. a decrease in entropy. They used a generalized equation for the second law that contains a variable for information:
formula_0
"ΔF" is the free energy between states", W" is the work done on the system", k" is the Boltzmann constant", T" is temperature, and "I" is the mutual information content obtained by measurements. The brackets indicate that the energy is an average. They could convert the equivalent of one bit information to 0.28 "kT" ln2 of energy or, in other words, they could exploit more than a quarter of the information’s energy content.
Cognitive demons.
In his book "Chance and Necessity," Jacques Monod described the functions of proteins and other molecules capable of recognizing with 'elective discrimination' a substrate or ligand or other molecule. In describing these molecules he introduced the term 'cognitive' functions, the same cognitive functions that Maxwell attributed to his demon. Werner Loewenstein goes further and names these molecules 'molecular demon' or 'demon' in short.
Naming the biological molecular machines in this way makes it easier to understand the similarities between these molecules and Maxwell's demon.
Because of this real discriminative if not 'cognitive' property, Jacques Monod attributed a teleonomic function to these biological complexes. Teleonomy implies the idea of an oriented, coherent and constructive activity. Proteins therefore must be considered essential molecular agents in the teleonomic performances of all living beings.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\langle \\Delta F - W\\rangle \\leq k T I"
}
] |
https://en.wikipedia.org/wiki?curid=59814243
|
59829761
|
Higman–Sims asymptotic formula
|
Asymptotic estimate in group theory
In finite group theory, the Higman–Sims asymptotic formula gives an asymptotic estimate of the number of groups of prime power order.
Statement.
Let formula_0 be a (fixed) prime number. Define formula_1 as the number of isomorphism classes of groups of order formula_2. Then:
formula_3
Here, the big-O notation is with respect to formula_4, not with respect to formula_0 (the constant implied by the big-O notation may depend on formula_0).
|
[
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "f(n,p)"
},
{
"math_id": 2,
"text": "p^n"
},
{
"math_id": 3,
"text": "f(n,p) = p^{\\frac{2}{27}n^3 + \\mathcal O(n^{8/3})}"
},
{
"math_id": 4,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=59829761
|
5983310
|
Degree Lintner
|
Unit used to measure diastatic power
°Lintner or degrees Lintner is a unit used to measure the ability of a malt to reduce starch to sugar, that is, its diastatic power. Degrees Lintner is an intensive unit, not an extensive one; it is independent of the quantity of malt used. While the measurement is applicable to any amylase, in general it refers to the combined α-amylase and β-amylase used in brewing. The term is also generalized to diastatic malt extracts and separately prepared brewing enzymes. The abbreviation °L is official, but in brewing applications it may conflict with °L used for degrees Lovibond.
JECFA, the Joint FAO/WHO Expert Committee on Food Additives, defines the degree Lintner as follows:
"A malt has a diastatic power of 100 °L if 0.1cc of a clear 5% infusion of the malt, acting on 100cc of a 2% starch solution at 20°C for one hour, produces sufficient reducing sugars to reduce completely 5cc of Fehling's solution."
Note that the amylases used in brewing reach their peak efficiencies around 66 °C.
One can convert this definition to the number of international enzyme units (IU, enzyme activity that produces 1 μmole of product per minute) per gram of grain, for example. Maltose, the main sugar produced in mashing, is a disaccharide of glucose with one reducing equivalent (one reactive aldehyde group). One maltose will reduce two Cu2+ in the Fehling reaction. The concentration of Cu2+ in Fehling’s solution is 0.14 M, which is capable of oxidizing 0.070 M maltose. 5 mL of Fehling’s solution can oxidize 0.070 M x 0.005 L = 0.00035 moles of maltose. A 100 °L malt extract produces 0.00035 mol maltose in 60 min, or 5.8 μmol/min, or 5.8 IU of enzyme activity in 0.1 mL of a 5 g/100 mL (5%) infusion. The 0.1 mL of this infusion is equivalent to 0.005 g of malt. Therefore, 5.8 IU/0.005 g of malt = 1160 IU/gram of malt. 100 °L is equivalent to 1160 IU per gram of malt (or 526,176 IU per pound). This value is useful if alternatives to Fehling's reaction are being used to determine the amylase activity.
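The same arithmetic, written out as a short script (a restatement of the calculation above, not an independent measurement):

```python
cu_molarity = 0.14                            # mol/L of Cu2+ in Fehling's solution
fehling_volume = 0.005                        # L (5 mL of Fehling's solution)
maltose = cu_molarity / 2 * fehling_volume    # mol of maltose oxidized: 0.00035

iu_per_assay = maltose * 1e6 / 60             # micromol/min, about 5.8 IU
malt_mass = 0.0001 * 50                       # g of malt in 0.1 mL of a 50 g/L (5%) infusion
iu_per_gram = iu_per_assay / malt_mass        # about 1170 IU/g; the text rounds 5.83 to 5.8
print(round(iu_per_gram))                     # and therefore quotes ~1160 IU per gram
```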
Evaluation of a malt or extract is usually done by the manufacturer rather than by the end user; as a rule of thumb, the total grain bill of a mash should have a diastatic power of at least 40 °L in order to guarantee efficient conversion of all the starches in the mash to sugars.
The most active barley malts currently available have a diastatic activity of 110 - 160 °Lintner (385 - 520 °WK).
In Europe, diastatic activity is often stated in Windisch–Kolbach units (°WK). These are related approximately to °Lintner by:
formula_0
formula_1.
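These conversions are simple enough to state directly in code (function names are our own):

```python
def wk_to_lintner(wk):
    return (wk + 16) / 3.5

def lintner_to_wk(lintner):
    return 3.5 * lintner - 16

print(lintner_to_wk(100))   # 334.0 degrees WK
print(wk_to_lintner(334))   # 100.0 degrees Lintner
```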
|
[
{
"math_id": 0,
"text": "{}^\\circ\\mbox{Lintner} = \\frac{{}^\\circ\\mbox{WK} + 16}{3.5}"
},
{
"math_id": 1,
"text": "{}^\\circ\\mbox{WK} = \\left ( 3.5 \\times {}^\\circ\\mbox{Lintner} \\right ) - 16"
}
] |
https://en.wikipedia.org/wiki?curid=5983310
|
5983414
|
Star domain
|
Property of point sets in Euclidean spaces
In geometry, a set formula_0 in the Euclidean space formula_1 is called a star domain (or star-convex set, star-shaped set or radially convex set) if there exists an formula_2 such that for all formula_3 the line segment from formula_4 to formula_5 lies in formula_6 This definition is immediately generalizable to any real, or complex, vector space.
Intuitively, if one thinks of formula_0 as a region surrounded by a wall, formula_0 is a star domain if one can find a vantage point formula_4 in formula_0 from which any point formula_5 in formula_0 is within line-of-sight. A similar, but distinct, concept is that of a radial set.
Definition.
Given two points formula_7 and formula_8 in a vector space formula_9 (such as Euclidean space formula_1), the convex hull of formula_10 is called the closed interval with endpoints formula_7 and formula_8 and it is denoted by
formula_11
where formula_12 for every vector formula_13
A subset formula_0 of a vector space formula_9 is said to be star-shaped at formula_2 if for every formula_3 the closed interval
formula_14
A set formula_0 is star shaped and is called a star domain if there exists some point formula_2 such that formula_0 is star-shaped at formula_15
A set that is star-shaped at the origin is sometimes called a star set. Such sets are closely related to Minkowski functionals.
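As a small numerical illustration (entirely our own; the set, the candidate points and the sampling density are arbitrary), one can test whether a set given by a membership predicate appears to be star-shaped at a point formula_4 by sampling points of the set and checking that the closed intervals to them stay inside the set:

```python
import numpy as np

def looks_star_shaped_at(member, s0, samples, n_steps=50):
    """Approximate check that [s0, s] lies in S for every sampled s in S,
    where S is given by the boolean predicate `member`."""
    s0 = np.asarray(s0, dtype=float)
    ts = np.linspace(0.0, 1.0, n_steps)
    for s in samples:
        if not member(s):
            continue                      # only test points of the set
        s = np.asarray(s, dtype=float)
        if not all(member(t * s + (1 - t) * s0) for t in ts):
            return False
    return True

# An L-shaped region: not convex, but star-shaped at points of its corner square.
L_shape = lambda p: (0 <= p[0] <= 3 and 0 <= p[1] <= 1) or (0 <= p[0] <= 1 and 0 <= p[1] <= 3)

grid = [(x, y) for x in np.linspace(0, 3, 31) for y in np.linspace(0, 3, 31)]
print(looks_star_shaped_at(L_shape, (0.5, 0.5), grid))   # True
print(looks_star_shaped_at(L_shape, (2.5, 0.5), grid))   # False
```

Being a sampled test, this only gives evidence rather than a proof, but it illustrates the definition.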
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "\\R^n"
},
{
"math_id": 2,
"text": "s_0 \\in S"
},
{
"math_id": 3,
"text": "s \\in S,"
},
{
"math_id": 4,
"text": "s_0"
},
{
"math_id": 5,
"text": "s"
},
{
"math_id": 6,
"text": "S."
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "y"
},
{
"math_id": 9,
"text": "X"
},
{
"math_id": 10,
"text": "\\{x, y\\}"
},
{
"math_id": 11,
"text": "\\left[x, y\\right] ~:=~ \\left\\{t x + (1 - t) y : 0 \\leq t \\leq 1\\right\\} ~=~ x + (y - x) [0, 1],"
},
{
"math_id": 12,
"text": "z [0, 1] := \\{z t : 0 \\leq t \\leq 1\\}"
},
{
"math_id": 13,
"text": "z."
},
{
"math_id": 14,
"text": "\\left[s_0, s\\right] \\subseteq S."
},
{
"math_id": 15,
"text": "s_0."
},
{
"math_id": 16,
"text": "A"
},
{
"math_id": 17,
"text": "\\R^n,"
},
{
"math_id": 18,
"text": "B = \\{t a : a \\in A, t \\in [0, 1]\\}"
},
{
"math_id": 19,
"text": "r < 1,"
},
{
"math_id": 20,
"text": "r"
},
{
"math_id": 21,
"text": "\\R^n."
},
{
"math_id": 22,
"text": "W \\subseteq X,"
},
{
"math_id": 23,
"text": "\\bigcap_{|u|=1} u W"
},
{
"math_id": 24,
"text": "u"
},
{
"math_id": 25,
"text": "W"
},
{
"math_id": 26,
"text": "0 \\in W"
},
{
"math_id": 27,
"text": "r w \\in W"
},
{
"math_id": 28,
"text": "0 \\leq r \\leq 1"
},
{
"math_id": 29,
"text": "w \\in W"
}
] |
https://en.wikipedia.org/wiki?curid=5983414
|
5984147
|
Cohn's irreducibility criterion
|
Sufficient condition for a polynomial to be unfactorable
Cohn's irreducibility criterion is a sufficient condition for a polynomial to be irreducible in formula_0—that is, for it to be unfactorable into the product of lower-degree polynomials with integer coefficients.
Statement.
The criterion is often stated as follows:
If a prime number formula_1 is expressed in base 10 as formula_2 (where formula_3) then the polynomial
formula_4
is irreducible in formula_0.
The theorem can be generalized to other bases as follows:
Assume that formula_5 is a natural number and formula_6 is a polynomial such that formula_7. If formula_8 is a prime number then formula_9 is irreducible in formula_0.
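The statement is easy to experiment with. The following sketch (using SymPy; the helper name is made up) builds the polynomial from the base-10 digits of a prime and checks irreducibility; by the criterion the check should always succeed when the evaluated value is prime and every coefficient is a digit.

```python
# Experimental check of the criterion with SymPy (an illustration, not a proof).
from sympy import Poly, Symbol, isprime

x = Symbol('x')

def cohn_polynomial(n, base=10):
    """Polynomial whose coefficients are the base-`base` digits of n."""
    digits = []
    while n:
        n, d = divmod(n, base)
        digits.append(d)
    return Poly(list(reversed(digits)), x)  # highest-degree coefficient first

for p in range(1000, 1030):
    if isprime(p):
        f = cohn_polynomial(p)
        print(p, f.as_expr(), f.is_irreducible)  # always True, as the criterion predicts
```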
History and extensions.
The base 10 version of the theorem is attributed to Cohn by Pólya and Szegő in "Problems and Theorems in Analysis" while the generalization to any base "b" is due to Brillhart, Filaseta, and Odlyzko. It is clear from context that the "A. Cohn" mentioned by Pólya and Szegő is Arthur Cohn (1894–1940), a student of Issai Schur who was awarded his doctorate from Frederick William University in 1921.
A further generalization of the theorem allowing coefficients larger than digits was given by Filaseta and Gross. In particular, let formula_10 be a polynomial with non-negative integer coefficients such that formula_11 is prime. If all coefficients are formula_12 49598666989151226098104244512918, then formula_13 is irreducible over formula_0. Moreover, they proved that this bound is sharp: if coefficients larger than 49598666989151226098104244512918 are permitted, the conclusion no longer holds in general. The method of Filaseta and Gross was also generalized to provide similar sharp bounds for some other bases by Cole, Dunn, and Filaseta.
An analogue of the theorem also holds for algebraic function fields over finite fields.
Converse.
The converse of this criterion is that, if "p" is an irreducible polynomial with integer coefficients that have greatest common divisor 1, then there exists a base such that the coefficients of "p" form the representation of a prime number in that base. This is the Bunyakovsky conjecture and its truth or falsity remains an open question.
|
[
{
"math_id": 0,
"text": "\\mathbb{Z}[x]"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "p = a_m 10^m + a_{m-1} 10^{m-1} +\\cdots+ a_1 10 + a_0"
},
{
"math_id": 3,
"text": "0\\leq a_i\\leq 9"
},
{
"math_id": 4,
"text": "f(x)=a_mx^m+a_{m-1}x^{m-1}+\\cdots+a_1x+a_0"
},
{
"math_id": 5,
"text": "b \\ge 2"
},
{
"math_id": 6,
"text": "p(x) = a_k x^k + a_{k-1} x^{k-1} +\\cdots+ a_1 x + a_0"
},
{
"math_id": 7,
"text": "0\\leq a_i \\leq b-1"
},
{
"math_id": 8,
"text": "p(b)"
},
{
"math_id": 9,
"text": "p(x)"
},
{
"math_id": 10,
"text": "f(x) "
},
{
"math_id": 11,
"text": "f(10)"
},
{
"math_id": 12,
"text": " \\leq "
},
{
"math_id": 13,
"text": "f(x)"
}
] |
https://en.wikipedia.org/wiki?curid=5984147
|
59841811
|
Competitive inhibition
|
Interruption of a chemical pathway
Competitive inhibition is interruption of a chemical pathway owing to one chemical substance inhibiting the effect of another by competing with it for binding or bonding. Any metabolic or chemical messenger system can potentially be affected by this principle, but several classes of competitive inhibition are especially important in biochemistry and medicine, including the competitive form of enzyme inhibition, the competitive form of receptor antagonism, the competitive form of antimetabolite activity, and the competitive form of poisoning (which can include any of the aforementioned types).
Enzyme inhibition type.
In competitive inhibition of enzyme catalysis, binding of an inhibitor prevents binding of the target molecule of the enzyme, also known as the substrate. This is accomplished by blocking the binding site of the substrate – the active site – by some means. The Vmax indicates the maximum velocity of the reaction, while the Km is the substrate concentration needed to reach half of the Vmax; Km is therefore also an indicator of the enzyme's affinity for its substrate. Competitive inhibition can be overcome by adding more substrate to the reaction, which increases the chances of the enzyme and substrate binding. As a result, competitive inhibition alters only the apparent Km, leaving the Vmax the same. This can be demonstrated using enzyme kinetics plots such as the Michaelis–Menten or the Lineweaver–Burk plot. Once the inhibitor is bound to the enzyme, the slope of the Lineweaver–Burk plot is affected, because the apparent Km increases relative to the Km of the uninhibited reaction.
Most competitive inhibitors function by binding reversibly to the active site of the enzyme. As a result, many sources state that this is the defining feature of competitive inhibitors. This, however, is a misleading oversimplification, as there are many possible mechanisms by which an enzyme may bind either the inhibitor or the substrate but never both at the same time. For example, allosteric inhibitors may display competitive, non-competitive, or uncompetitive inhibition.
Mechanism.
In competitive inhibition, an inhibitor that resembles the normal substrate binds to the enzyme, usually at the active site, and prevents the substrate from binding. At any given moment, the enzyme may be bound to the inhibitor, the substrate, or neither, but it cannot bind both at the same time. During competitive inhibition, the inhibitor and substrate compete for the active site. The active site is a region on an enzyme to which a particular protein or substrate can bind. The active site will thus only allow one of the two complexes to bind to the site, either allowing a reaction to occur or preventing it. In competitive inhibition, the inhibitor resembles the substrate, taking its place and binding to the active site of an enzyme. Increasing the substrate concentration would diminish the "competition" for the substrate to properly bind to the active site and allow a reaction to occur. When the substrate is of higher concentration than the concentration of the competitive inhibitor, it is more probable that the substrate, rather than the inhibitor, will come into contact with the enzyme's active site.
Competitive inhibitors are commonly used to make pharmaceuticals. For example, methotrexate is a chemotherapy drug that acts as a competitive inhibitor. It is structurally similar to the coenzyme folate, which binds to the enzyme dihydrofolate reductase. This enzyme is part of the synthesis of DNA and RNA, and when methotrexate binds the enzyme, it renders it inactive, so that it cannot synthesize DNA and RNA. The cancer cells are thus unable to grow and divide. Another example involves prostaglandins, which are made in large amounts in response to pain and can cause inflammation. Prostaglandins are synthesized by enzymes from essential fatty acids; compounds resembling these fatty acid substrates can therefore act as competitive inhibitors, binding to the enzyme in place of the substrate and blocking prostaglandin production. Such inhibitors have been used as drugs to relieve pain.
An example of non-drug related competitive inhibition is in the prevention of browning of fruits and vegetables. For example, tyrosinase, an enzyme within mushrooms, normally binds to the substrate, monophenols, and forms brown o-quinones. Competitive substrates, such as 4-substituted benzaldehydes for mushrooms, compete with the substrate lowering the amount of the monophenols that bind. These inhibitory compounds added to the produce keep it fresh for longer periods of time by decreasing the binding of the monophenols that cause browning. This allows for an increase in produce quality as well as shelf life.
Competitive inhibition can be reversible or irreversible. If it is reversible inhibition, then effects of the inhibitor can be overcome by increasing substrate concentration. If it is irreversible, the only way to overcome it is to produce more of the target (and typically degrade and/or excrete the irreversibly inhibited target).
In virtually every case, competitive inhibitors bind in the same binding site (active site) as the substrate, but same-site binding is not a requirement. A competitive inhibitor could bind to an allosteric site of the free enzyme and prevent substrate binding, as long as it does not bind to the allosteric site when the substrate is bound. For example, strychnine acts as an allosteric inhibitor of the glycine receptor in the mammalian spinal cord and brain stem. Glycine is a major post-synaptic inhibitory neurotransmitter with a specific receptor site. Strychnine binds to an alternate site that reduces the affinity of the glycine receptor for glycine, resulting in convulsions due to lessened inhibition by the glycine.
In competitive inhibition, the maximum velocity (formula_0) of the reaction is unchanged, while the apparent affinity of the substrate to the binding site is decreased (the formula_1 dissociation constant is apparently increased). The change in formula_2 (Michaelis–Menten constant) is parallel to the alteration in formula_1, as one increases the other must decrease. When a competitive inhibitor is bound to an enzyme the formula_2 increases. This means the binding affinity for the enzyme is decreased, but it can be overcome by increasing the concentration of the substrate. Any given competitive inhibitor concentration can be overcome by increasing the substrate concentration. In that case, the substrate will reduce the availability for an inhibitor to bind, and, thus, outcompete the inhibitor in binding to the enzyme.
Biological examples.
After an accidental ingestion of a contaminated opioid drug desmethylprodine, the neurotoxic effect of 1-methyl-4-phenyl-1,2,3,6-tetrahydropyridine (MPTP) was discovered. MPTP is able to cross the blood–brain barrier and enter acidic lysosomes. MPTP is biologically activated by MAO-B, an isozyme of monoamine oxidase (MAO) that has been implicated in several neurological disorders and diseases. Later, it was discovered that MPTP causes symptoms similar to those of Parkinson's disease. Cells in the central nervous system (astrocytes) include MAO-B that oxidizes MPTP to 1-methyl-4-phenylpyridinium (MPP+), which is toxic. MPP+ eventually travels to the extracellular fluid by a dopamine transporter, which ultimately causes the Parkinson's symptoms. However, competitive inhibition of the MAO-B enzyme or the dopamine transporter protects against the oxidation of MPTP to MPP+. A few compounds have been tested for their ability to inhibit oxidation of MPTP to MPP+ including methylene blue, 5-nitroindazole, norharman, 9-methylnorharman, and menadione. These demonstrated a reduction of neurotoxicity produced by MPTP. Sulfa drugs also act as competitive inhibitors. For example, sulfanilamide competitively binds to the enzyme in the dihydropteroate synthase (DHPS) active site by mimicking the substrate para-aminobenzoic acid (PABA). This prevents the substrate itself from binding, which halts the production of folic acid, an essential nutrient. Bacteria must synthesize folic acid because they do not have a transporter for it. Without folic acid, bacteria cannot grow and divide. Therefore, because of sulfa drugs' competitive inhibition, they are excellent antibacterial agents.
An example of competitive inhibition was demonstrated experimentally for the enzyme succinic dehydrogenase, which catalyzes the oxidation of succinate to fumarate in the Krebs cycle. Malonate is a competitive inhibitor of succinic dehydrogenase. The binding of succinic dehydrogenase to the substrate, succinate, is competitively inhibited. This happens because malonate's chemistry is similar to succinate. Malonate's ability to inhibit binding of the enzyme and substrate is based on the ratio of malonate to succinate. Malonate binds to the active site of succinic dehydrogenase so that succinate cannot. Thus, it inhibits the reaction.
Equation.
The Michaelis–Menten model can be an invaluable tool for understanding enzyme kinetics. According to this model, a plot of the reaction velocity (V0) against the substrate concentration [S] can be used to determine values such as Vmax and Km (the substrate concentration at which the velocity reaches Vmax/2, a measure of the enzyme's affinity for its substrate).
Competitive inhibition increases the apparent value of the Michaelis–Menten constant, formula_3, such that initial rate of reaction, formula_4, is given by
formula_5
where formula_6, formula_7 is the inhibitor's dissociation constant and formula_8 is the inhibitor concentration.
formula_0 remains the same because the presence of the inhibitor can be overcome by higher substrate concentrations. formula_3, the substrate concentration that is needed to reach formula_9, increases with the presence of a competitive inhibitor. This is because the concentration of substrate needed to reach formula_0 with an inhibitor is greater than the concentration of substrate needed to reach formula_0 without an inhibitor.
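A minimal numerical sketch of these relations (all values are arbitrary illustrations, not measured constants): the inhibitor scales the apparent Km by a factor of (1 + [I]/Ki) while Vmax is unchanged, so a sufficiently high substrate concentration restores nearly the uninhibited rate.

```python
def v0(S, Vmax, Km, I=0.0, Ki=1.0):
    """Initial rate under competitive inhibition: V0 = Vmax*[S] / (Km*(1 + [I]/Ki) + [S])."""
    Km_app = Km * (1.0 + I / Ki)   # apparent Michaelis constant
    return Vmax * S / (Km_app + S)

Vmax, Km, Ki = 10.0, 2.0, 0.5      # arbitrary units
for S in (0.5, 2.0, 20.0, 200.0):
    print(S, round(v0(S, Vmax, Km), 2), round(v0(S, Vmax, Km, I=1.0, Ki=Ki), 2))
# At low [S] the inhibited rate lags badly; at [S] much larger than Km_app both approach Vmax = 10.
```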
Derivation.
In the simplest case of a single-substrate enzyme obeying Michaelis–Menten kinetics, the typical scheme
<chem>
E + S <=>[k_1][k_{-1}] ES ->[k_2] E + P
</chem>
is modified to include binding of the inhibitor to the free enzyme:
<chem>
EI + S <=>[k_{-3}][k_3] E + S + I <=>[k_1][k_{-1}] ES + I ->[k_2] E + P + I
</chem>
Note that the inhibitor does not bind to the ES complex and the substrate does not bind to the EI complex. It is generally assumed that this behavior is indicative of both compounds binding at the same site, but that is not strictly necessary. As with the derivation of the Michaelis–Menten equation, assume that the system is at steady-state, i.e. the concentration of each of the enzyme species is not changing.
formula_10
Furthermore, the known total enzyme concentration is formula_11, and the velocity is measured under conditions in which the substrate and inhibitor concentrations do not change substantially and an insignificant amount of product has accumulated.
We can therefore set up a system of equations:
where <chem>[S], [I]</chem> and <chem>[E]_0</chem> are known. The initial velocity is defined as formula_12, so we need to define the unknown <chem>[ES]</chem> in terms of the knowns <chem>[S], [I]</chem> and <chem>[E]_0</chem>.
From equation (3), we can define "E" in terms of "ES" by rearranging to
formula_13
Dividing by formula_14 gives
formula_15
As in the derivation of the Michaelis–Menten equation, the term formula_16 can be replaced by the macroscopic rate constant formula_2:
Substituting equation (5) into equation (4), we have
formula_17
Rearranging, we find that
formula_18
At this point, we can define the dissociation constant for the inhibitor as formula_19, giving
At this point, substitute equation (5) and equation (6) into equation (1):
formula_20
Rearranging to solve for ES, we find
formula_21
Returning to our expression for formula_4, we now have:
formula_22
formula_23
Since the velocity is maximal when all the enzyme is bound as the enzyme-substrate complex, formula_24.
Replacing and combining terms finally yields the conventional form:
To compute the concentration of competitive inhibitor <chem>[I]</chem> that yields a fraction formula_25 of velocity formula_4 where formula_26:
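The steady-state algebra above can also be verified symbolically. A sketch with SymPy follows (symbol names mirror the scheme above; this checks only the algebra leading to the conventional form, not the omitted numbered equations):

```python
# Symbolic check that the steady-state system yields V0 = Vmax*[S]/(Km*(1+[I]/Ki) + [S]).
import sympy as sp

S, I, E0, k1, km1, k2, k3, km3 = sp.symbols('S I E0 k1 km1 k2 k3 km3', positive=True)
E, ES, EI = sp.symbols('E ES EI', positive=True)

eqs = [
    sp.Eq(k1 * E * S, (km1 + k2) * ES),   # steady state for ES
    sp.Eq(k3 * E * I, km3 * EI),          # steady state for EI
    sp.Eq(E0, E + ES + EI),               # conservation of total enzyme
]
sol = sp.solve(eqs, [E, ES, EI], dict=True)[0]

Km = (km1 + k2) / k1
Ki = km3 / k3
V0 = k2 * sol[ES]
expected = k2 * E0 * S / (Km * (1 + I / Ki) + S)
print(sp.simplify(V0 - expected))   # 0, confirming the conventional form
```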
Notes and references.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "V_\\max"
},
{
"math_id": 1,
"text": "K_d"
},
{
"math_id": 2,
"text": "K_m"
},
{
"math_id": 3,
"text": "K^\\text{app}_m"
},
{
"math_id": 4,
"text": "V_0"
},
{
"math_id": 5,
"text": " V_0 = \\frac{V_\\max\\,[S]}{K^\\text{app}_m + [S]}"
},
{
"math_id": 6,
"text": "K^\\text{app}_m=K_m(1+[I]/K_i)"
},
{
"math_id": 7,
"text": "K_i"
},
{
"math_id": 8,
"text": "[I]"
},
{
"math_id": 9,
"text": "V_\\max / 2"
},
{
"math_id": 10,
"text": "\\frac{d[\\ce E]}{dt} = \\frac{d[\\ce{ES}]}{dt} = \\frac{d[\\ce{EI}]}{dt} = 0. "
},
{
"math_id": 11,
"text": "[\\ce E]_0 = [\\ce E] + [\\ce{ES}] + [\\ce{EI}]"
},
{
"math_id": 12,
"text": "V_0 = d[\\ce P]/dt = k_2 [\\ce{ES}]"
},
{
"math_id": 13,
"text": " k_1[\\ce E][\\ce S]=(k_{-1}+k_2)[\\ce{ES}]"
},
{
"math_id": 14,
"text": "k_1[\\ce S]"
},
{
"math_id": 15,
"text": " [\\ce E] = \\frac{(k_{-1}+k_2)[\\ce{ES}]}{k_1[\\ce S]} "
},
{
"math_id": 16,
"text": "(k_{-1}+k_2)/k_1"
},
{
"math_id": 17,
"text": " 0 = \\frac{k_3[\\ce I]K_m[\\ce{ES}]}\\ce{[S]} - k_{-3}[\\ce{EI}] "
},
{
"math_id": 18,
"text": " [\\ce{EI}] = \\frac{K_m k_3[\\ce I][\\ce{ES}]}{k_{-3}[\\ce S]} "
},
{
"math_id": 19,
"text": "K_i = k_{-3}/k_3"
},
{
"math_id": 20,
"text": " [\\ce E]_0 = \\frac{K_m[\\ce{ES}]}\\ce{[S]} + [\\ce{ES}] + \\frac{K_m[\\ce I][\\ce{ES}]}{K_i[\\ce S]}"
},
{
"math_id": 21,
"text": " [\\ce E]_0 = [\\ce{ES}] \\left ( \\frac{K_m}\\ce{[S]} + 1 + \\frac{K_m[\\ce I]}{K_i[\\ce S]} \\right )= [\\ce{ES}] \\frac{K_m K_i + K_i[\\ce S] + K_m[\\ce I]}{K_i[\\ce S]}"
},
{
"math_id": 22,
"text": " V_0 = k_2[\\ce{ES}] = \\frac{k_2 K_i [\\ce S][\\ce E]_0}{K_m K_i + K_i[\\ce S] + K_m[\\ce I]} "
},
{
"math_id": 23,
"text": " V_0 = \\frac{k_2 [\\ce E]_0 [\\ce S]}{K_m + [\\ce S] + K_m\\frac{[\\ce I]}{K_i}} "
},
{
"math_id": 24,
"text": "V_\\max = k_2 [\\ce E]_0"
},
{
"math_id": 25,
"text": "f_{V{_0}}"
},
{
"math_id": 26,
"text": "0 < f_{V{_0}} < 1"
}
] |
https://en.wikipedia.org/wiki?curid=59841811
|
59842040
|
Graph removal lemma
|
In graph theory, the graph removal lemma states that when a graph contains few copies of a given subgraph, then all of the copies can be eliminated by removing a small number of edges.
The special case in which the subgraph is a triangle is known as the triangle removal lemma.
The graph removal lemma can be used to prove Roth's theorem on 3-term arithmetic progressions, and a generalization of it, the hypergraph removal lemma, can be used to prove Szemerédi's theorem. It also has applications to property testing.
Formulation.
Let formula_0 be a graph with formula_1 vertices. The graph removal lemma states that for any
formula_2, there exists a constant formula_3 such that for any formula_4-vertex graph formula_5 with fewer than formula_6 subgraphs isomorphic to formula_0, it is possible to eliminate all copies of formula_0 by removing at most formula_7 edges from formula_5.
An alternative way to state this is to say that for any formula_4-vertex graph formula_5 with formula_8 subgraphs isomorphic to formula_0, it is possible to eliminate all copies of formula_0 by removing formula_9 edges from formula_5. Here, the formula_10 indicates the use of little o notation.
In the case when formula_0 is a triangle, the resulting lemma is called the triangle removal lemma.
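As a toy illustration of the statement (not of its proof), one can count triangles by brute force and destroy them greedily: in the removal-lemma regime, a graph with few triangles needs few edge deletions. The helper names below are made up, and the approach is exponential-time, suitable only for tiny graphs.

```python
from itertools import combinations

def count_triangles(adj):
    """Count triangles in a graph given as a dict mapping each vertex to a set of neighbours."""
    return sum(1 for u, v, w in combinations(adj, 3)
               if v in adj[u] and w in adj[u] and w in adj[v])

def make_triangle_free(adj):
    """Greedily delete one edge from each remaining triangle; return the deleted edges."""
    deleted = []
    changed = True
    while changed:
        changed = False
        for u, v, w in combinations(list(adj), 3):
            if v in adj[u] and w in adj[u] and w in adj[v]:
                adj[u].discard(v); adj[v].discard(u)   # break this triangle
                deleted.append((u, v))
                changed = True
    return deleted

# Toy demo: a 5-cycle plus one chord contains exactly one triangle.
adj = {i: set() for i in range(5)}
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]:
    adj[a].add(b); adj[b].add(a)
print(count_triangles(adj))      # 1
print(make_triangle_free(adj))   # a single deleted edge suffices
print(count_triangles(adj))      # 0
```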
History.
The original motivation for the study of the triangle removal lemma was the Ruzsa–Szemerédi problem. The initial formulation, due to Imre Z. Ruzsa and Szemerédi in 1978, was slightly weaker than the triangle removal lemma used nowadays and can be roughly stated as follows: every locally linear graph on formula_4 vertices contains formula_9 edges. This statement can be quickly deduced from the modern triangle removal lemma. Ruzsa and Szemerédi also provided an alternative proof of Roth's theorem on arithmetic progressions as a simple corollary.
In 1986, during their work on generalizations of the Ruzsa–Szemerédi problem to arbitrary formula_11-uniform hypergraphs, Erdős, Frankl, and Rödl provided a statement for general graphs very close to the modern graph removal lemma: if graph formula_13 is a homomorphic image of formula_12, then any formula_13-free graph formula_5 on formula_4 vertices can be made formula_12-free by removing formula_9 edges.
The modern formulation of the graph removal lemma was first stated by Füredi in 1994. The proof generalized earlier approaches by Ruzsa and Szemerédi and by Erdős, Frankl, and Rödl, also utilizing the Szemerédi regularity lemma.
Graph counting lemma.
A key component of the proof of the graph removal lemma is the graph counting lemma about counting subgraphs in systems of regular pairs. The graph counting lemma is also very useful on its own. According to Füredi, it is used "in most applications of regularity lemma".
Heuristic argument.
Let formula_0 be a graph on formula_1 vertices, whose vertex set is formula_14 and edge set is formula_15. Let formula_16 be sets of vertices of some graph formula_5 such that for all formula_17 the pair formula_18 is formula_19-regular (in the sense of the regularity lemma). Let also formula_20 be the density between the sets formula_21 and formula_22. Intuitively, a regular pair formula_23 with density formula_24 should behave like a random Erdős–Rényi-like graph, where every pair of vertices formula_25 is selected to be an edge independently with probability formula_24. This suggests that the number of copies of formula_0 on vertices formula_26 such that formula_27 should be close to the expected number from the Erdős–Rényi model:
formula_28
where formula_29 and formula_30 are the edge set and the vertex set of formula_0.
Precise statement.
The straightforward formalization of above heuristic claim is as follows. Let formula_0 be a graph on formula_1 vertices, whose vertex set is formula_14 and edge set is formula_15. Let formula_31 be arbitrary. Then there exists formula_32 such that for any formula_16 as above, satisfying formula_33 for all formula_17, the number of graph homomorphisms from formula_0 to formula_5 such that vertex formula_34 is mapped to formula_21 is not smaller than
formula_35
Blow-up Lemma.
One can even find bounded-degree subgraphs of blow-ups of formula_0 in a similar setting. The following claim appears in the literature under the name of the blow-up lemma and was first proven by Komlós, Sárközy and Szemerédi. The precise statement here is a slightly simplified version due to Komlós, who also referred to it as the key lemma, as it is used in numerous regularity-based proofs.
Let formula_13 be an arbitrary graph and formula_36. Construct formula_37 by replacing each vertex formula_38 of formula_0 by independent set formula_39 of size formula_40 and replacing every edge formula_41 of formula_0 by complete bipartite graph on formula_42. Let formula_43 be arbitrary reals, formula_44 be a positive integer and let formula_12 be a subgraph of formula_37 with formula_1 vertices and with maximum degree formula_45. Define formula_46. Finally, let formula_5 be a graph and formula_16 be disjoint sets of vertices of formula_5 such that whenever formula_47 then formula_18 is a formula_19-regular pair with density at least formula_48. Then if formula_49 and formula_50, the number of injective graph homomorphisms from formula_12 to formula_5 is at least formula_51.
In fact, one can only restrict to counting homomorphisms such that any vertex formula_52 of formula_12 such that formula_53 is mapped to a vertex in formula_21.
Proof.
We will provide a proof of the counting lemma in the case when formula_0 is a triangle (the triangle counting lemma). The proofs of the general case and of the blow-up lemma are very similar and do not require different techniques.
Take formula_54. Let formula_55 be the set of those vertices in formula_56 which have at least formula_57 neighbors in formula_58 and at least formula_59 neighbors in formula_60. Note that if there were more than formula_61 vertices in formula_56 with less than formula_57 neighbors in formula_58, then these vertices together with whole formula_58 would witness formula_19-irregularity of the pair formula_62. Repeating this argument for formula_60 shows that we must have formula_63. Now take arbitrary formula_64 and define formula_65 and formula_66 as neighbors of formula_67 in formula_58 and formula_60 respectively. By definition formula_68 and formula_69 so by regularity of formula_70 we obtain existence of at least
formula_71
triangles containing formula_67. Since formula_67 was chosen arbitrarily from the set formula_72 of size at least formula_73, we obtain a total of at least
formula_74
which finishes the proof as formula_54.
Proof.
Proof of the triangle removal lemma.
To prove the triangle removal lemma, consider an formula_75-regular partition formula_76 of the vertex set of formula_5. This exists by the Szemerédi regularity lemma. The idea is to remove all edges between irregular pairs, low-density pairs, and small parts, and prove that if at least one triangle still remains, then many triangles remain. Specifically, remove all edges between parts formula_39 and formula_77 if (1) the pair is not formula_75-regular, (2) its edge density is less than formula_19/2, or (3) either formula_39 or formula_77 has at most (formula_19/(4M))formula_4 vertices.
This procedure removes at most formula_7 edges. If there exists a triangle with vertices in formula_78 after these edges are removed, then the triangle counting lemma tells us there are at least
formula_79
triples in formula_80 which form a triangle. Thus, we may take
formula_81
Proof of the graph removal lemma.
The proof of the case of general formula_0 is analogous to the triangle case, and uses the graph counting lemma instead of the triangle counting lemma.
Induced Graph Removal Lemma.
A natural generalization of the Graph Removal Lemma is to consider induced subgraphs. In property testing it is often useful to consider how far a graph is from being induced H-free. A graph formula_5 is considered to contain an induced subgraph formula_0 if there is an injective map formula_82 such that formula_83 is an edge of formula_5 if and only if formula_84 is an edge of formula_0. Notice that non-edges are considered as well. formula_5 is induced formula_0-free if it contains no induced subgraph isomorphic to formula_0. We define formula_5 as formula_19-far from being induced formula_0-free if it cannot be made induced formula_0-free by adding or deleting at most formula_7 edges.
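The induced-containment condition can be stated directly in code. A brute-force sketch follows (exponential in the size of H, for tiny examples only; graphs are adjacency dictionaries and the function name is made up):

```python
from itertools import permutations

def has_induced_copy(H_adj, G_adj):
    """True if some injective map f: V(H) -> V(G) has f(u)f(v) in E(G) iff uv in E(H)."""
    H_nodes = list(H_adj)
    for image in permutations(G_adj, len(H_nodes)):
        f = dict(zip(H_nodes, image))
        if all((f[v] in G_adj[f[u]]) == (v in H_adj[u])
               for u in H_nodes for v in H_nodes if u != v):
            return True
    return False

# Example: the 4-cycle contains a 3-vertex path as an induced subgraph,
# but the complete graph K4 does not (every triple of K4 vertices spans a triangle).
P3 = {0: {1}, 1: {0, 2}, 2: {1}}
C4 = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
K4 = {i: {j for j in range(4) if j != i} for i in range(4)}
print(has_induced_copy(P3, C4), has_induced_copy(P3, K4))   # True False
```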
Formulation.
A version of the Graph Removal for induced subgraphs was proved by Alon, Fischer, Krivelevich, and Szegedy in 2000. It states that for any graph formula_0 with formula_1 vertices and formula_2, there exists a constant formula_85 such that if an formula_4-vertex graph formula_5 has fewer than formula_6 induced subgraphs isomorphic to formula_0, then it is possible to eliminate all induced copies of formula_0 by adding or removing fewer than formula_7 edges.
The problem can be reformulated as follows: Given a red-blue coloring formula_86 of the complete graph formula_87 (Analogous to the graph formula_0 on the same formula_1 vertices where non-edges are blue, edges are red), and a constant formula_2, then there exists a constant formula_85 such that for any red-blue colorings of formula_88 has fewer than formula_6 subgraphs isomorphic to formula_86, then it is possible to eliminate all copies of formula_0 by changing the colors of fewer than formula_7 edges. Notice that our previous "cleaning" process, where we remove all edges between irregular pairs, low-density pairs, and small parts, only involves removing edges. Removing edges only corresponds to changing edge colors from red to blue. However, there are situations in the induced case where the optimal edit distance involves changing edge colors from blue to red as well. Thus, the Regularity Lemma is insufficient to prove Induced Graph Removal Lemma. The proof of the Induced Graph Removal Lemma must take advantage of the strong regularity lemma.
Proof.
Strong Regularity Lemma.
The strong regularity lemma is a strengthened version of Szemerédi's Regularity Lemma. For any infinite sequence of constants formula_89, there exists an integer formula_90 such that for any graph formula_5, we can obtain two (equitable) partitions formula_91 and formula_92 such that the following properties are satisfied: (1) formula_92 refines formula_91, (2) formula_91 is formula_93-regular and formula_92 is formula_94-regular, (3) formula_95, and (4) formula_96.
The function formula_97 is defined to be the energy function defined in Szemerédi regularity lemma. Essentially, we can find a pair of partitions formula_98 where formula_92 is extremely regular compared to formula_91, and at the same time formula_98 are close to each other. (This property is captured in the third condition)
Corollary of the Strong Regularity Lemma.
The following corollary of the strong regularity lemma is used in the proof of the Induced Graph Removal Lemma. For any infinite sequence of constants formula_89, there exists formula_31 such that there exists a partition formula_99 and subsets formula_100 for each formula_38 where the following properties are satisfied: (1) formula_101 for each formula_38, (2) formula_102 is formula_94-regular for each pair formula_103, and (3) formula_104 for all but at most formula_105 of the pairs formula_103.
The main idea of the proof of this corollary is to start with two partitions formula_91 and formula_92 that satisfy the Strong Regularity Lemma where formula_106. Then for each part formula_107, we uniformly at random choose some part formula_108 that is a part in formula_92. The expected number of irregular pairs formula_102 is less than 1. Thus, there exists some collection of formula_109 such that every pair is formula_94-regular!
The important aspect of this corollary is that every pair of formula_110 are formula_94-regular! This allows us to consider edges and non-edges when we perform our cleaning argument.
Proof of Sketch of the Induced Graph Removal Lemma.
With these results, we are able to prove the Induced Graph Removal Lemma. Take any graph formula_5 with formula_4 vertices that has less than formula_111 copies of formula_0. The idea is to start with a collection of vertex sets formula_109 which satisfy the conditions of the Corollary of the Strong Regularity Lemma. We then can perform a "cleaning" process where we remove all edges between pairs of parts formula_102 with low density, and we can add all edges between pairs of parts formula_102 with high density. We choose the density requirements such that we added/deleted at most formula_7 edges.
If the new graph has no copies of formula_0, then we are done. Suppose the new graph has a copy of formula_0. Suppose the vertex formula_112 is embedded in formula_113. Then if there is an edge connecting formula_114 in formula_0, then formula_110 does not have low density. (Edges between formula_110 were not removed in the cleaning process) Similarly, if there is not an edge connecting formula_114 in formula_0, then formula_110 does not have high density. (Edges between formula_110 were not added in the cleaning process)
Thus, by a similar counting argument to the proof of the triangle counting lemma, that is the graph counting lemma, we can show that formula_5 has more than formula_111 copies of formula_0.
Generalizations.
The graph removal lemma was later extended to directed graphs and to hypergraphs.
Quantitative bounds.
The use of the regularity lemma in the proof of the graph removal lemma forces formula_115 to be extremely small, bounded by a tower function of height polynomial in formula_116, that is, formula_117 (here formula_118 is the tower of twos of height formula_119). A tower function of height formula_120 is necessary in all regularity-based proofs, as is implied by results of Gowers on lower bounds for the regularity lemma. However, in 2011 Fox provided a new proof of the graph removal lemma which does not use the regularity lemma, improving the bound to formula_121 (here formula_1 is the number of vertices of the removed graph formula_0). His proof nevertheless uses regularity-related ideas such as the energy increment, but with a different notion of energy, related to entropy. This proof can also be rephrased using the Frieze–Kannan weak regularity lemma, as noted by Conlon and Fox. In the special case of bipartite formula_0 it was shown that formula_122 is sufficient.
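To give a feel for these bounds, a tower of twos grows explosively even at small heights; a short sketch (illustrative only, with a small cap so the output stays printable):

```python
def tower(k):
    """Tower of twos of height k: tower(0) = 1, tower(k) = 2**tower(k-1)."""
    return 1 if k == 0 else 2 ** tower(k - 1)

for k in range(5):
    print(k, tower(k))   # 1, 2, 4, 16, 65536
# tower(5) = 2**65536 already has nearly 20,000 decimal digits, so a bound of the form
# delta = 1/tower(poly(1/epsilon)) is astronomically small even for modest epsilon.
```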
There is a large gap between the upper and lower bounds for formula_115 in the general case. The current best result true for all graphs formula_0 is due to Alon and states that for each nonbipartite formula_0 there exists a constant formula_123 such that formula_124 is necessary for the graph removal lemma to hold, while for bipartite formula_0 the optimal formula_115 has polynomial dependence on formula_19, which matches the lower bound. The construction for the nonbipartite case is a consequence of the Behrend construction of large Salem–Spencer sets. Indeed, as the triangle removal lemma implies Roth's theorem, the existence of a large Salem–Spencer set may be translated into an upper bound for formula_115 in the triangle removal lemma. This method can be leveraged for arbitrary nonbipartite formula_0 to give the aforementioned bound.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "h"
},
{
"math_id": 2,
"text": "\\epsilon > 0"
},
{
"math_id": 3,
"text": "\\delta = \\delta(\\epsilon, H) > 0"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "\\delta n^h"
},
{
"math_id": 7,
"text": "\\epsilon n^2"
},
{
"math_id": 8,
"text": "o(n^h)"
},
{
"math_id": 9,
"text": "o(n^2)"
},
{
"math_id": 10,
"text": "o"
},
{
"math_id": 11,
"text": "r"
},
{
"math_id": 12,
"text": "H_2"
},
{
"math_id": 13,
"text": "H_1"
},
{
"math_id": 14,
"text": "V=\\{1,2,\\ldots,h\\}"
},
{
"math_id": 15,
"text": "E"
},
{
"math_id": 16,
"text": "X_1,X_2,\\ldots,X_h"
},
{
"math_id": 17,
"text": "ij\\in E"
},
{
"math_id": 18,
"text": "(X_i,X_j)"
},
{
"math_id": 19,
"text": "\\epsilon"
},
{
"math_id": 20,
"text": "d_{ij}"
},
{
"math_id": 21,
"text": "X_i"
},
{
"math_id": 22,
"text": "X_j"
},
{
"math_id": 23,
"text": "(X,Y)"
},
{
"math_id": 24,
"text": "d"
},
{
"math_id": 25,
"text": "(x,y)\\in (X\\times Y)"
},
{
"math_id": 26,
"text": "x_1,x_2,\\ldots,x_h"
},
{
"math_id": 27,
"text": "x_i\\in X_i"
},
{
"math_id": 28,
"text": "\\prod_{ij\\in E(H)}d_{ij}\\prod_{i\\in V(H)}|X_i|"
},
{
"math_id": 29,
"text": "E(H)"
},
{
"math_id": 30,
"text": "V(H)"
},
{
"math_id": 31,
"text": "\\delta>0"
},
{
"math_id": 32,
"text": "\\epsilon>0"
},
{
"math_id": 33,
"text": "d_{ij}>\\delta"
},
{
"math_id": 34,
"text": "i\\in V(H)"
},
{
"math_id": 35,
"text": "\n(1-\\delta)\\prod_{ij\\in E}(d_{ij}-\\delta)\\prod_{i\\in V}|X_i|\n"
},
{
"math_id": 36,
"text": "t\\in\\mathbb{Z}_+"
},
{
"math_id": 37,
"text": "H(t)"
},
{
"math_id": 38,
"text": "i"
},
{
"math_id": 39,
"text": "V_i"
},
{
"math_id": 40,
"text": "t"
},
{
"math_id": 41,
"text": "ij"
},
{
"math_id": 42,
"text": "(V_i,V_j)"
},
{
"math_id": 43,
"text": "\\epsilon,\\delta>0"
},
{
"math_id": 44,
"text": "N"
},
{
"math_id": 45,
"text": "\\Delta"
},
{
"math_id": 46,
"text": "\\epsilon_0=\\delta^\\Delta/(2+\\Delta)"
},
{
"math_id": 47,
"text": "ij\\in E(H_2)"
},
{
"math_id": 48,
"text": "\\epsilon+\\delta"
},
{
"math_id": 49,
"text": "\\epsilon\\leq\\epsilon_0"
},
{
"math_id": 50,
"text": "1-t\\leq N\\epsilon_0"
},
{
"math_id": 51,
"text": "(\\epsilon_0N)^h"
},
{
"math_id": 52,
"text": "k\\in [h]"
},
{
"math_id": 53,
"text": "k\\in V_i"
},
{
"math_id": 54,
"text": "\\epsilon=\\delta/2"
},
{
"math_id": 55,
"text": "X_1'\\subset X_1"
},
{
"math_id": 56,
"text": "X_1"
},
{
"math_id": 57,
"text": "(d_{12}-\\epsilon)|X_2|"
},
{
"math_id": 58,
"text": "X_2"
},
{
"math_id": 59,
"text": "(d_{13}-\\epsilon)|X_3|"
},
{
"math_id": 60,
"text": "X_3"
},
{
"math_id": 61,
"text": "\\epsilon|X_1|"
},
{
"math_id": 62,
"text": "(X_1,X_2)"
},
{
"math_id": 63,
"text": "|X_1'|>(1-2\\epsilon)|X_1|"
},
{
"math_id": 64,
"text": "x\\in X_1'"
},
{
"math_id": 65,
"text": "X_2'"
},
{
"math_id": 66,
"text": "X_3'"
},
{
"math_id": 67,
"text": "x"
},
{
"math_id": 68,
"text": "|X_2'|\\geq (d_{12}-\\epsilon)|X_2|\\geq \\epsilon|X_2|"
},
{
"math_id": 69,
"text": "|X_3'|\\geq \\epsilon|X_3|"
},
{
"math_id": 70,
"text": "(X_2,X_3)"
},
{
"math_id": 71,
"text": "\n(d_{23}-\\epsilon)|X_2'||X_3'|\\geq (d_{12}-\\epsilon)(d_{23}-\\epsilon)(d_{13}-\\epsilon)|X_2||X_3|\n"
},
{
"math_id": 72,
"text": "X_1'"
},
{
"math_id": 73,
"text": "(1-2\\epsilon)|X_1|"
},
{
"math_id": 74,
"text": "\n(1-2\\epsilon)(d_{23}-\\epsilon)|X_2'||X_3'|\\geq (d_{12}-\\epsilon)(d_{23}-\\epsilon)(d_{13}-\\epsilon)|X_1||X_2||X_3|\n"
},
{
"math_id": 75,
"text": "\\epsilon/4"
},
{
"math_id": 76,
"text": "V_1 \\cup \\cdots \\cup V_M"
},
{
"math_id": 77,
"text": "V_j"
},
{
"math_id": 78,
"text": "V_i, V_j, V_k"
},
{
"math_id": 79,
"text": "\\left(1-\\frac{\\epsilon}{2}\\right)\\left(\\frac{\\epsilon}{4}\\right)^3\\left(\\frac{\\epsilon}{4M}\\right)^3\\cdot n^3"
},
{
"math_id": 80,
"text": "V_i \\times V_j \\times V_k"
},
{
"math_id": 81,
"text": "\\delta < \\frac{1}{6} \\left(1-\\frac{\\epsilon}{2}\\right)\\left(\\frac{\\epsilon}{4}\\right)^3\\left(\\frac{\\epsilon}{4M}\\right)^3."
},
{
"math_id": 82,
"text": "f: V(H) \\rightarrow V(G)"
},
{
"math_id": 83,
"text": "(f(u),f(v))"
},
{
"math_id": 84,
"text": "(u,v)"
},
{
"math_id": 85,
"text": "\\delta > 0"
},
{
"math_id": 86,
"text": "H'"
},
{
"math_id": 87,
"text": "K_h"
},
{
"math_id": 88,
"text": "K_n"
},
{
"math_id": 89,
"text": "\\epsilon_0\\ge \\epsilon_1 \\ge ...>0"
},
{
"math_id": 90,
"text": "M"
},
{
"math_id": 91,
"text": "\\mathcal{P}"
},
{
"math_id": 92,
"text": "\\mathcal{Q}"
},
{
"math_id": 93,
"text": "\\epsilon_0"
},
{
"math_id": 94,
"text": "\\epsilon_{|\\mathcal{P}|}"
},
{
"math_id": 95,
"text": "q(\\mathcal{Q})<q(\\mathcal{P})+\\epsilon_0"
},
{
"math_id": 96,
"text": "|\\mathcal{Q}|\\le M"
},
{
"math_id": 97,
"text": "q"
},
{
"math_id": 98,
"text": "\\mathcal{P}, \\mathcal{Q}"
},
{
"math_id": 99,
"text": "\\mathcal{P}={V_1,...,V_k}"
},
{
"math_id": 100,
"text": " W_i \\subset V_i"
},
{
"math_id": 101,
"text": "|W_i|>\\delta n"
},
{
"math_id": 102,
"text": "(W_i,W_j)"
},
{
"math_id": 103,
"text": "i,j"
},
{
"math_id": 104,
"text": "|d(W_i,W_j)-d(V_i,V_j)|\\le \\epsilon_0"
},
{
"math_id": 105,
"text": "\\epsilon_0 |\\mathcal{P}|^2"
},
{
"math_id": 106,
"text": "q(\\mathcal{Q})<q(\\mathcal{P})+\\epsilon_0^3/8"
},
{
"math_id": 107,
"text": "V_i \\in \\mathcal{P}"
},
{
"math_id": 108,
"text": "W_i \\subset V_i"
},
{
"math_id": 109,
"text": "W_i"
},
{
"math_id": 110,
"text": "W_i,W_j"
},
{
"math_id": 111,
"text": "\\delta n^{v(H)}"
},
{
"math_id": 112,
"text": "v_i \\in v(H)"
},
{
"math_id": 113,
"text": "W_{f(i)}"
},
{
"math_id": 114,
"text": "v_i,v_j"
},
{
"math_id": 115,
"text": "\\delta"
},
{
"math_id": 116,
"text": "\\epsilon^{-1}"
},
{
"math_id": 117,
"text": "\\delta=1/\\text{tower}(\\epsilon^{-O(1)})"
},
{
"math_id": 118,
"text": "\\text{tower}(k)"
},
{
"math_id": 119,
"text": "k"
},
{
"math_id": 120,
"text": "\\epsilon^{-O(1)}"
},
{
"math_id": 121,
"text": "\\delta=1/\\text{tower}(5h^2\\log\\epsilon^{-1})"
},
{
"math_id": 122,
"text": "\\delta=\\epsilon^{O(1)}"
},
{
"math_id": 123,
"text": "c>0"
},
{
"math_id": 124,
"text": "\\delta<(\\epsilon/c)^{c\\log (c/\\epsilon)}"
}
] |
https://en.wikipedia.org/wiki?curid=59842040
|
59842050
|
Ruzsa–Szemerédi problem
|
In combinatorial mathematics and extremal graph theory, the Ruzsa–Szemerédi problem or (6,3)-problem asks for the maximum number of edges in a
graph in which every edge belongs to a unique triangle.
Equivalently it asks for the maximum number of edges in a balanced bipartite graph whose edges can be partitioned into a linear number of induced matchings, or the maximum number of triples one can choose from formula_0 points so that every six points contain at most two triples. The problem is named after Imre Z. Ruzsa and Endre Szemerédi, who first proved that its answer is smaller than formula_1 by a slowly-growing (but still unknown) factor.
Equivalence between formulations.
These formulations (the unique-triangle problem, the induced-matching problem, and the triple-system problem described above) all have answers that are asymptotically equivalent: they differ by, at most, constant factors from each other.
The Ruzsa–Szemerédi problem asks for the answer to these equivalent questions.
To convert the bipartite graph induced matching problem into the unique triangle problem, add a third set of formula_0 vertices to the graph, one for each induced matching, and add edges from vertices formula_2 and formula_3 of the bipartite graph to vertex formula_4 in this third set whenever
bipartite edge formula_5 belongs to induced matching formula_4.
The result is a balanced tripartite graph with formula_6 vertices and the unique triangle property. In the other direction, an arbitrary graph with the unique triangle property can be made into a balanced tripartite graph by choosing a partition of the vertices into three equal sets randomly and keeping only the triangles that respect the partition. This will retain (in expectation) a constant fraction of the triangles and edges. A balanced tripartite graph with the unique triangle property can be made into a partitioned bipartite graph by removing one of its three subsets of vertices, and making an induced matching on the neighbors of each removed vertex.
To convert a graph with a unique triangle per edge into a triple system,
let the triples be the triangles of the graph. No six points can include three triangles without either two of the three triangles sharing an edge or all three triangles forming a fourth triangle that shares an edge with each of them.
In the other direction, to convert a triple system into a graph, first eliminate any sets of four points that contain two triples. These four points cannot participate in any other triples, and so cannot contribute towards a more-than-linear total number of triples. Then, form a graph connecting any pair of points that both belong to any of the remaining triples.
Lower bound.
A nearly-quadratic lower bound on the Ruzsa–Szemerédi problem can be derived from a result of Felix Behrend, according to which the numbers modulo an odd prime number formula_7 have large Salem–Spencer sets, subsets formula_8 of size formula_9 with no three-term arithmetic progressions.
Behrend's result can be used to construct tripartite graphs in which each side of the tripartition has formula_7 vertices, there are formula_10 edges, and each edge belongs to a unique triangle. Thus, with this construction, formula_11 and the number of edges is formula_12.
To construct a graph of this form from Behrend's arithmetic-progression-free subset formula_8, number the vertices on each side of the tripartition from formula_13 to formula_14, and construct triangles of the form formula_15 modulo formula_7 for each formula_16 in the range from formula_13 to formula_14 and each formula_17 in formula_8. For example, with formula_18 and formula_19, the result is a nine-vertex balanced tripartite graph with 18 edges, shown in the illustration. The graph formed from the union of these triangles has the desired property that every edge belongs to a unique triangle. For, if not, there would be a triangle formula_20 where formula_17, formula_21, and formula_22 all belong to formula_8, violating the assumption that there be no arithmetic progressions formula_23 in formula_8.
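The construction is easy to reproduce for small parameters. In the sketch below the progression-free set is chosen by hand rather than by Behrend's construction (which only matters asymptotically), and the helper names are made up:

```python
from itertools import product

def is_progression_free(A, p):
    """No nontrivial 3-term arithmetic progressions mod p: a + b = 2c with a, b, c in A forces a = b."""
    return all(not (a != b and (a + b - 2 * c) % p == 0)
               for a, b, c in product(A, repeat=3))

def ruzsa_szemeredi_graph(p, A):
    """Vertices (side, value) for sides 0, 1, 2; triangles (x, x+a, x+2a) mod p for each a in A."""
    edges = set()
    for x in range(p):
        for a in A:
            u, v, w = (0, x), (1, (x + a) % p), (2, (x + 2 * a) % p)
            edges |= {frozenset((u, v)), frozenset((v, w)), frozenset((u, w))}
    return edges

def triangles_through(e, edges):
    verts = {z for f in edges for z in f}
    u, v = tuple(e)
    return [w for w in verts if w not in e
            and frozenset((u, w)) in edges and frozenset((v, w)) in edges]

p, A = 5, [1, 2]                 # {1, 2} is progression-free mod 5
assert is_progression_free(A, p)
E = ruzsa_szemeredi_graph(p, A)
print(len(E))                                                # 3*|A|*p = 30 edges on 15 vertices
print(all(len(triangles_through(e, E)) == 1 for e in E))     # every edge lies in a unique triangle
```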
Upper bound.
The Szemerédi regularity lemma can be used to prove that any solution to the Ruzsa–Szemerédi problem has at most formula_24 edges or triples. A stronger form of the graph removal lemma by Jacob Fox implies that the size of a solution is at most formula_25. Here the formula_26 and formula_27 are instances of little o and big Omega notation, and formula_28 denotes the iterated logarithm.
Fox proves that, in any formula_0-vertex graph with formula_29 triangles for some formula_30, one can find a triangle-free subgraph by removing at most formula_25 edges. In a graph with the unique triangle property, there are (naively) formula_31 triangles, so this result applies with formula_32. But in this graph, each edge removal eliminates only one triangle, so the number of edges that must be removed to eliminate all triangles is the same as the number of triangles.
History.
The problem is named after Imre Z. Ruzsa and Endre Szemerédi, who studied this problem, in the formulation involving triples of points, in a 1978 publication. However, it had been previously studied by W. G. Brown, Paul Erdős, and Vera T. Sós, in two publications in 1973 which
proved that the maximum number of triples can be formula_33, and conjectured that it was formula_24. Ruzsa and Szemerédi provided (unequal) nearly-quadratic upper and lower bounds for the problem, significantly improving the previous lower bound of Brown, Erdős, and Sós, and proving their conjecture.
Applications.
The existence of dense graphs that can be partitioned into large induced matchings has been used to construct efficient tests for whether a Boolean function is linear, a key component of the PCP theorem in computational complexity theory. In the theory of property testing algorithms, the known results on the Ruzsa–Szemerédi problem have been applied to show that it is possible to test whether a graph has no copies of a given subgraph formula_34, with one-sided error in a number of queries polynomial in the error parameter, if and only if formula_34 is a bipartite graph.
In the theory of streaming algorithms for graph matching (for instance to match internet advertisers with advertising slots), the quality of matching covers (sparse subgraphs that approximately preserve the size of a matching in all vertex subsets) is closely related to the density of bipartite graphs that can be partitioned into induced matchings. This construction uses a modified form of the Ruzsa-Szemerédi problem in which the number of induced matchings can be much smaller than the number of vertices, but each induced matching must cover most of the vertices of the graph. In this version of the problem, it is possible to construct graphs with a non-constant number of linear-sized induced matchings, and this result leads to nearly-tight bounds on the approximation ratio of streaming matching algorithms.
The subquadratic upper bound on the Ruzsa–Szemerédi problem was also used to provide an formula_35 bound on the size of cap sets,
before stronger bounds of the form formula_36 for formula_37 were proven for this problem. It also provides the best known upper bound on tripod packing.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "n^2"
},
{
"math_id": 2,
"text": "u"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "w"
},
{
"math_id": 5,
"text": "uv"
},
{
"math_id": 6,
"text": "3n"
},
{
"math_id": 7,
"text": "p"
},
{
"math_id": 8,
"text": "A"
},
{
"math_id": 9,
"text": "|A|=p/e^{O(\\sqrt{\\log p})}"
},
{
"math_id": 10,
"text": "3|A|p"
},
{
"math_id": 11,
"text": "n=3p"
},
{
"math_id": 12,
"text": "n^2/e^{O(\\sqrt{\\log n})}"
},
{
"math_id": 13,
"text": "0"
},
{
"math_id": 14,
"text": "p-1"
},
{
"math_id": 15,
"text": "(x,x+a,x+2a)"
},
{
"math_id": 16,
"text": "x"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "p=3"
},
{
"math_id": 19,
"text": "A=\\{\\pm 1\\}"
},
{
"math_id": 20,
"text": "(x,x+a,x+a+b)"
},
{
"math_id": 21,
"text": "b"
},
{
"math_id": 22,
"text": "c=(a+b)/2"
},
{
"math_id": 23,
"text": "(a,c,b)"
},
{
"math_id": 24,
"text": "o(n^2)"
},
{
"math_id": 25,
"text": "n^2/e^{\\Omega(\\log^* n)}"
},
{
"math_id": 26,
"text": "o"
},
{
"math_id": 27,
"text": "\\Omega"
},
{
"math_id": 28,
"text": "\\log^*"
},
{
"math_id": 29,
"text": "O(n^{3-\\delta})"
},
{
"math_id": 30,
"text": "\\delta>0"
},
{
"math_id": 31,
"text": "O(n^2)"
},
{
"math_id": 32,
"text": "\\delta=1"
},
{
"math_id": 33,
"text": "\\Omega(n^{3/2})"
},
{
"math_id": 34,
"text": "H"
},
{
"math_id": 35,
"text": "o(3^n)"
},
{
"math_id": 36,
"text": "c^n"
},
{
"math_id": 37,
"text": "c<3"
}
] |
https://en.wikipedia.org/wiki?curid=59842050
|
59843147
|
Iron law of processor performance
|
In computer architecture, the iron law of processor performance (or simply iron law of performance) describes the performance trade-off between complexity and the number of primitive instructions that processors use to perform calculations. This formulation of the trade-off spurred the development of Reduced Instruction Set Computers (RISC) whose instruction set architectures (ISAs) leverage a smaller set of core instructions to improve performance. The term was coined by Douglas Clark based on research performed by Clark and Joel Emer in the 1980s.
Explanation.
The performance of a processor is the time it takes to execute a program: formula_0. This can be further broken down into three factors:
formula_1 Selection of an instruction set architecture affects formula_2, whereas formula_3 is largely determined by the manufacturing technology. Classic Complex Instruction Set Computer (CISC) ISAs optimized formula_4 by providing a larger set of more complex CPU instructions. Generally speaking, however, complex instructions inflate the number of clock cycles per instruction formula_5 because they must be decoded into simpler "micro-operations" actually performed by the hardware. After converting x86 binary to the micro-operations used internally, the total number of operations is close to what is produced for a comparable RISC ISA. The iron law of processor performance makes this trade-off explicit and pushes for optimization of formula_0 as a whole, not just a single component.
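A back-of-the-envelope comparison can be read directly off the formula; all numbers below are invented for illustration and do not describe any real processor:

```python
def seconds_per_program(instructions, cpi, clock_hz):
    """Iron law: time/program = (instructions/program) * (cycles/instruction) * (time/cycle)."""
    return instructions * cpi * (1.0 / clock_hz)

# Two hypothetical designs running the same workload at 2 GHz:
risc_like = seconds_per_program(instructions=1.5e9, cpi=1.2, clock_hz=2e9)
cisc_like = seconds_per_program(instructions=1.0e9, cpi=2.0, clock_hz=2e9)
print(risc_like, cisc_like)   # 0.9 s vs 1.0 s: the lower cycles-per-instruction wins here,
                              # even though the RISC-like ISA executes more instructions.
```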
While the iron law is credited for sparking the development of RISC architectures, it does not imply that a simpler ISA is always faster. If that were the case, the fastest ISA would consist of simple binary logic. A single CISC instruction "can" be faster than the equivalent set of RISC instructions when it enables multiple micro-operations to be performed in a single clock cycle. In practice, however, the regularity of RISC instructions allowed a pipelined implementation where the total execution time of an instruction was (typically) ~5 clock cycles, but each instruction followed the previous instruction ~1 clock cycle later. CISC processors can "also" achieve higher performance using techniques such as modular extensions, predictive logic, compressed instructions, and macro-operation fusion.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\mathrm{\\tfrac{Time}{Program}}"
},
{
"math_id": 1,
"text": "\\mathrm{\\frac{Instructions}{Program} \\times \\frac{Clock Cycles}{Instruction} \\times \\frac{Time}{Clock Cycles}}"
},
{
"math_id": 2,
"text": "\\mathrm{\\tfrac{Instructions}{Program} \\times \\tfrac{Clock Cycles}{Instruction}}"
},
{
"math_id": 3,
"text": "\\mathrm{\\tfrac{Time}{Clock Cycles}}"
},
{
"math_id": 4,
"text": "\\mathrm{\\tfrac{Instructions}{Program}}"
},
{
"math_id": 5,
"text": "\\mathrm{\\tfrac{ClockCycles}{Instruction}}"
}
] |
https://en.wikipedia.org/wiki?curid=59843147
|
598434
|
Post-glacial rebound
|
Rise of land masses after glacial period
Post-glacial rebound (also called isostatic rebound or crustal rebound) is the rise of land masses after the removal of the huge weight of ice sheets during the last glacial period, which had caused isostatic depression. Post-glacial rebound and isostatic depression are phases of glacial isostasy (glacial isostatic adjustment, glacioisostasy), the deformation of the Earth's crust in response to changes in ice mass distribution. The direct raising effects of post-glacial rebound are readily apparent in parts of Northern Eurasia, Northern America, Patagonia, and Antarctica. However, through the processes of "ocean siphoning" and "continental levering", the effects of post-glacial rebound on sea level are felt globally far from the locations of current and former ice sheets.
Overview.
During the last glacial period, much of northern Europe, Asia, North America, Greenland and Antarctica was covered by ice sheets, which reached up to three kilometres thick during the glacial maximum about 20,000 years ago. The enormous weight of this ice caused the surface of the Earth's crust to deform and warp downward, forcing the viscoelastic mantle material to flow away from the loaded region. At the end of each glacial period when the glaciers retreated, the removal of this weight led to slow (and still ongoing) uplift or rebound of the land and the return flow of mantle material back under the deglaciated area. Due to the extreme viscosity of the mantle, it will take many thousands of years for the land to reach an equilibrium level.
The uplift has taken place in two distinct stages. The initial uplift following deglaciation was almost immediate due to the elastic response of the crust as the ice load was removed. After this elastic phase, uplift proceeded by slow viscous flow at an exponentially decreasing rate. Today, typical uplift rates are of the order of 1 cm/year or less. In northern Europe, this is clearly shown by the GPS data obtained by the BIFROST GPS network; for example in Finland, the total area of the country is growing by about seven square kilometers per year. Studies suggest that rebound will continue for at least another 10,000 years. The total uplift from the end of deglaciation depends on the local ice load and could be several hundred metres near the centre of rebound.
Recently, the term "post-glacial rebound" is gradually being replaced by the term "glacial isostatic adjustment". This is in recognition that the response of the Earth to glacial loading and unloading is not limited to the upward rebound movement, but also involves downward land movement, horizontal crustal motion, changes in global sea levels and the Earth's gravity field, induced earthquakes, and changes in the Earth's rotation. Another alternate term is "glacial isostasy", because the uplift near the centre of rebound is due to the tendency towards the restoration of isostatic equilibrium (as in the case of isostasy of mountains). Unfortunately, that term gives the wrong impression that isostatic equilibrium is somehow reached, so by appending "adjustment" at the end, the motion of restoration is emphasized.
Effects.
Post-glacial rebound produces measurable effects on vertical crustal motion, global sea levels, horizontal crustal motion, gravity field, Earth's rotation, crustal stress, and earthquakes. Studies of glacial rebound give us information about the flow law of mantle rocks, which is important to the study of mantle convection, plate tectonics and the thermal evolution of the Earth. It also gives insight into past ice sheet history, which is important to glaciology, paleoclimate, and changes in global sea level. Understanding postglacial rebound is also important to our ability to monitor recent global change.
Vertical crustal motion.
Erratic boulders, U-shaped valleys, drumlins, eskers, kettle lakes, bedrock striations are among the common signatures of the Ice Age. In addition, post-glacial rebound has caused numerous significant changes to coastlines and landscapes over the last several thousand years, and the effects continue to be significant.
In Sweden, Lake Mälaren was formerly an arm of the Baltic Sea, but uplift eventually cut it off and led to its becoming a freshwater lake in about the 12th century, at the time when Stockholm was founded at its outlet. Marine seashells found in Lake Ontario sediments imply a similar event in prehistoric times. Other pronounced effects can be seen on the island of Öland, Sweden, which has little topographic relief due to the presence of the very level Stora Alvaret. The rising land has caused the Iron Age settlement area to recede from the Baltic Sea, making the present day villages on the west coast set back unexpectedly far from the shore. These effects are quite dramatic at the village of Alby, for example, where the Iron Age inhabitants were known to subsist on substantial coastal fishing.
As a result of post-glacial rebound, the Gulf of Bothnia is predicted to eventually close up at Kvarken in more than 2,000 years. The Kvarken is a UNESCO World Natural Heritage Site, selected as a "type area" illustrating the effects of post-glacial rebound and the Holocene glacial retreat.
In several other Nordic ports, like Tornio and Pori (formerly at Ulvila), the harbour has had to be relocated several times. Place names in the coastal regions also illustrate the rising land: there are inland places named 'island', 'skerry', 'rock', 'point' and 'sound'. For example, Oulunsalo "island of Oulujoki" is a peninsula, with inland names such as "Koivukari" "Birch Rock", "Santaniemi" "Sandy Cape", and "Salmioja" "the brook of the Sound".
In Great Britain, glaciation affected Scotland but not southern England, and the post-glacial rebound of northern Great Britain (up to 10 cm per century) is causing a corresponding downward movement of the southern half of the island (up to 5 cm per century). This will eventually lead to an increased risk of floods in southern England and south-western Ireland.
Since the glacial isostatic adjustment process causes the land to move relative to the sea, ancient shorelines are found to lie above present day sea level in areas that were once glaciated. On the other hand, places in the peripheral bulge area, which was uplifted during glaciation, now begin to subside. Therefore, ancient beaches are found below present day sea level in the bulge area. The "relative sea level data", which consists of height and age measurements of the ancient beaches around the world, tells us that glacial isostatic adjustment proceeded at a higher rate near the end of deglaciation than today.
The present-day uplift motion in northern Europe is also monitored by a GPS network called BIFROST. Results of GPS data show a peak rate of about 11 mm/year in the north part of the Gulf of Bothnia, but this uplift rate decreases away from this centre and becomes negative outside the former ice margin.
In the near field outside the former ice margin, the land sinks relative to the sea. This is the case along the east coast of the United States, where ancient beaches are found submerged below present day sea level and Florida is expected to be submerged in the future. GPS data in North America also confirms that land uplift becomes subsidence outside the former ice margin.
Global sea levels.
To form the ice sheets of the last Ice Age, water from the oceans evaporated, condensed as snow and was deposited as ice in high latitudes. Thus global sea level fell during glaciation.
The ice sheets at the last glacial maximum were so massive that global sea level fell by about 120 metres. Thus continental shelves were exposed and many islands became connected with the continents through dry land. This was the case between the British Isles and Europe (Doggerland), or between Taiwan, the Indonesian islands and Asia (Sundaland). A land bridge also existed between Siberia and Alaska that allowed the migration of people and animals during the last glacial maximum.
The fall in sea level also affected the circulation of ocean currents and thus had an important impact on climate during the glacial maximum.
During deglaciation, the melted ice water returns to the oceans, thus sea level in the ocean increases again. However, geological records of sea level changes show that the redistribution of the melted ice water is not the same everywhere in the oceans. In other words, depending upon the location, the rise in sea level at a certain site may be more than that at another site. This is due to the gravitational attraction between the mass of the melted water and the other masses, such as remaining ice sheets, glaciers, water masses and mantle rocks and the changes in centrifugal potential due to Earth's variable rotation.
Horizontal crustal motion.
Accompanying vertical motion is the horizontal motion of the crust. The BIFROST GPS network shows that the motion diverges from the centre of rebound. However, the largest horizontal velocity is found near the former ice margin.
The situation in North America is less certain; this is due to the sparse distribution of GPS stations in northern Canada, which is rather inaccessible.
Tilt.
The combination of horizontal and vertical motion changes the tilt of the surface. That is, locations farther north rise faster, an effect that becomes apparent in lakes. The bottoms of the lakes gradually tilt away from the direction of the former ice maximum, such that lake shores on the side of the maximum (typically north) recede and the opposite (southern) shores sink. This causes the formation of new rapids and rivers. For example, Lake Pielinen in Finland, which is large (90 × 30 km) and oriented perpendicularly to the former ice margin, originally drained through an outlet in the middle of the lake near Nunnanlahti to Lake Höytiäinen. The change of tilt caused Pielinen to burst through the Uimaharju esker at the southwestern end of the lake, creating a new river (Pielisjoki) that runs to the sea via Lake Pyhäselkä to Lake Saimaa. The effects are similar to those concerning seashores, but occur above sea level. Tilting of land will also affect the flow of water in lakes and rivers in the future, and thus is important for water resource management planning.
In Sweden, Lake Sommen's outlet in the northwest has a rebound of 2.36 mm/a, while at Svanaviken in the east it is 2.05 mm/a. This means the lake is slowly being tilted and its southeastern shores drowned.
Gravity field.
Ice, water, and mantle rocks have mass, and as they move around, they exert a gravitational pull on other masses towards them. Thus, the gravity field, which is sensitive to all mass on the surface and within the Earth, is affected by the redistribution of ice/melted water on the surface of the Earth and the flow of mantle rocks within.
Today, more than 6000 years after the last deglaciation terminated, the flow of mantle material back to the glaciated area causes the overall shape of the Earth to become less oblate. This change in the topography of Earth's surface affects the long-wavelength components of the gravity field.
The changing gravity field can be detected by repeated land measurements with absolute gravimeters and recently by the GRACE satellite mission. The change in long-wavelength components of Earth's gravity field also perturbs the orbital motion of satellites and has been detected by LAGEOS satellite motion.
Vertical datum.
The "vertical datum" is a reference surface for altitude measurement and plays vital roles in many human activities, including land surveying and construction of buildings and bridges. Since postglacial rebound continuously deforms the crustal surface and the gravitational field, the vertical datum needs to be redefined repeatedly through time.
State of stress, intraplate earthquakes and volcanism.
According to the theory of plate tectonics, plate-plate interaction results in earthquakes near plate boundaries. However, large earthquakes are found in intraplate environments like eastern Canada (up to M7) and northern Europe (up to M5), which are far away from present-day plate boundaries. An important intraplate earthquake was the magnitude 8 New Madrid earthquake that occurred in the mid-continental United States in 1811.
Glacial loads provided more than 30 MPa of vertical stress in northern Canada and more than 20 MPa in northern Europe during glacial maximum. This vertical stress is supported by the mantle and the flexure of the lithosphere. Since the mantle and the lithosphere continuously respond to the changing ice and water loads, the state of stress at any location continuously changes in time. The changes in the orientation of the state of stress are recorded in the postglacial faults in southeastern Canada. When the postglacial faults formed at the end of deglaciation 9000 years ago, the horizontal principal stress orientation was almost perpendicular to the former ice margin, but today the orientation is northeast–southwest, along the direction of seafloor spreading at the Mid-Atlantic Ridge. This shows that the stress due to postglacial rebound played an important role at deglacial times, but has gradually relaxed so that tectonic stress has become more dominant today.
According to the Mohr–Coulomb theory of rock failure, large glacial loads generally suppress earthquakes, but rapid deglaciation promotes earthquakes. According to Wu & Hasegawa, the rebound stress that is available to trigger earthquakes today is of the order of 1 MPa. This stress level is not large enough to rupture intact rocks but is large enough to reactivate pre-existing faults that are close to failure. Thus, both postglacial rebound and past tectonics play important roles in today's intraplate earthquakes in eastern Canada and the southeastern US. Generally, postglacial rebound stress could have triggered the intraplate earthquakes in eastern Canada and may have played some role in triggering earthquakes in the eastern US, including the New Madrid earthquakes of 1811. The situation in northern Europe today is complicated by the current tectonic activities nearby and by coastal loading and weakening.
Increasing pressure due to the weight of the ice during glaciation may have suppressed melt generation and volcanic activities below Iceland and Greenland. On the other hand, decreasing pressure due to deglaciation can increase the melt production and volcanic activities by 20-30 times.
Recent global warming.
Recent global warming has caused mountain glaciers and the ice sheets in Greenland and Antarctica to melt and global sea level to rise. Therefore, monitoring sea level rise and the mass balance of ice sheets and glaciers allows people to understand more about global warming.
Recent rise in sea levels has been monitored by tide gauges and satellite altimetry (e.g. TOPEX/Poseidon). As well as the addition of melted ice water from glaciers and ice sheets, recent sea level changes are affected by the thermal expansion of sea water due to global warming, sea level change due to deglaciation of the last glacial maximum (postglacial sea level change), deformation of the land and ocean floor and other factors. Thus, to understand global warming from sea level change, one must be able to separate all these factors, especially postglacial rebound, since it is one of the leading factors.
Mass changes of ice sheets can be monitored by measuring changes in the ice surface height, the deformation of the ground below and the changes in the gravity field over the ice sheet. Thus the ICESat, GPS and GRACE satellite missions are useful for this purpose. However, glacial isostatic adjustment of the ice sheets affects ground deformation and the gravity field today. Thus understanding glacial isostatic adjustment is important in monitoring recent global warming.
One of the possible impacts of global warming-triggered rebound may be more volcanic activity in previously ice-capped areas such as Iceland and Greenland. It may also trigger intraplate earthquakes near the ice margins of Greenland and Antarctica. Unusually rapid (up to 4.1 cm/year) present glacial isostatic rebound due to recent ice mass losses in the Amundsen Sea embayment region of Antarctica coupled with low regional mantle viscosity is predicted to provide a modest stabilizing influence on marine ice sheet instability in West Antarctica, but likely not to a sufficient degree to arrest it.
Applications.
The speed and amount of postglacial rebound is determined by two factors: the viscosity or rheology (i.e., the flow) of the mantle, and the ice loading and unloading histories on the surface of Earth.
The viscosity of the mantle is important in understanding mantle convection, plate tectonics, the dynamical processes in Earth, and the thermal state and thermal evolution of Earth. However, viscosity is difficult to measure directly, because creep experiments on mantle rocks at natural strain rates would take thousands of years and the required ambient temperature and pressure conditions are not easy to maintain for long enough. Thus, the observations of postglacial rebound provide a natural experiment to measure mantle rheology. Modelling of glacial isostatic adjustment addresses the question of how viscosity changes in the radial and lateral directions and whether the flow law is linear, nonlinear, or composite rheology. Mantle viscosity may additionally be estimated using seismic tomography, where seismic velocity is used as a proxy observable.
Ice thickness histories are useful in the study of paleoclimatology, glaciology and paleo-oceanography. Ice thickness histories are traditionally deduced from three types of information: First, the sea level data at stable sites far away from the centers of deglaciation give an estimate of how much water entered the oceans, or equivalently how much ice was locked up at glacial maximum. Second, the location and dates of terminal moraines tell us the areal extent and retreat of past ice sheets. The physics of glaciers gives the theoretical profile of ice sheets at equilibrium; it also says that the thickness and horizontal extent of equilibrium ice sheets are closely related to the basal conditions of the ice sheets, so that the volume of ice locked up is proportional to their instantaneous area. Finally, the heights of ancient beaches in the sea level data and observed land uplift rates (e.g. from GPS or VLBI) can be used to constrain local ice thickness. A popular ice model deduced this way is the ICE5G model. Because the response of the Earth to changes in ice height is slow, it cannot record rapid fluctuations or surges of ice sheets, thus the ice sheet profiles deduced this way give only the "average height" over a thousand years or so.
Glacial isostatic adjustment also plays an important role in understanding recent global warming and climate change.
Discovery.
Before the eighteenth century, it was thought in Sweden that sea levels were falling. On the initiative of Anders Celsius, a number of marks were made in rock at different locations along the Swedish coast. In 1765 it was possible to conclude that it was not a lowering of sea levels but an uneven rise of land. In 1865 Thomas Jamieson came up with a theory that the rise of land was connected with the ice age that had been first discovered in 1837. The theory was accepted after investigations by Gerard De Geer of old shorelines in Scandinavia published in 1890.
Legal implications.
In areas where the rising of land is seen, it is necessary to define the exact limits of property. In Finland, the "new land" is legally the property of the owner of the water area, not any land owners on the shore. Therefore, if the owner of the land wishes to build a pier over the "new land", they need the permission of the owner of the (former) water area. The landowner of the shore may redeem the new land at market price. Usually the owner of the water area is the partition unit of the landowners of the shores, a collective holding corporation.
Formulation: sea-level equation.
The sea-level equation (SLE) is a linear integral equation that describes the sea-level variations associated with post-glacial rebound (PGR).
The basic idea of the SLE dates back to 1888, when Woodward published his pioneering work on the form and position of mean sea level; it was only later refined by Platzman and Farrell in the context of the study of the ocean tides. In the words of Wu and Peltier, the solution of the SLE yields the space– and time–dependent change of ocean bathymetry which is required to keep the gravitational potential of the sea surface constant for a specific deglaciation chronology and viscoelastic earth model. The SLE theory was then developed by other authors such as Mitrovica & Peltier, Mitrovica et al. and Spada & Stocchi. In its simplest form, the SLE reads
formula_0
where formula_1 is the sea–level change, formula_2 is the sea surface variation as seen from Earth's center of mass, and formula_3 is vertical displacement.
In a more explicit form the SLE can be written as follows:
formula_4
where formula_5 is colatitude and formula_6 is longitude, formula_7 is time, formula_8 and formula_9 are the densities of ice and water, respectively, formula_10 is the reference surface gravity, formula_11 is the sea–level Green's function (dependent upon the formula_12 and formula_13 viscoelastic load–deformation coefficients - LDCs), formula_14 is the ice thickness variation, formula_15 represents the eustatic term (i.e. the ocean–averaged value of formula_1), formula_16 and formula_17 denote spatio-temporal convolutions over the ice- and ocean-covered regions, and the overbar indicates an average over the surface of the oceans that ensures mass conservation.
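Because formula_1 appears on both sides of the SLE (directly, inside the ocean convolution, and in the ocean-averaged terms), the equation is in practice solved numerically by fixed-point iteration: one starts from the eustatic term and repeatedly re-evaluates the right-hand side until the field stops changing. The sketch below illustrates only this iteration scheme and is not an implementation of any particular glacial-isostatic-adjustment code; the functions convolve_ice, convolve_ocean and ocean_average are hypothetical placeholders standing in for the spherical-harmonic and Green's-function machinery a real solver would use.

```python
# Schematic fixed-point solver for the sea-level equation (illustration only).
import numpy as np

RHO_ICE = 931.0      # assumed ice density (kg/m^3)
RHO_WATER = 1000.0   # assumed water density (kg/m^3)
GAMMA = 9.81         # reference surface gravity (m/s^2)

def convolve_ice(ice_thickness_change):
    """Placeholder for the Green's function convolved over ice-covered regions."""
    return 1e-3 * ice_thickness_change          # toy linear response

def convolve_ocean(sea_level_change):
    """Placeholder for the Green's function convolved over the oceans."""
    return 1e-4 * sea_level_change              # toy linear response

def ocean_average(field, ocean_mask):
    """Average of a field over the ocean surface (the overbar terms)."""
    return field[ocean_mask].mean()

def solve_sle(ice_change, ocean_mask, eustatic, n_iter=50, tol=1e-9):
    """Iterate S <- RHS(S) until the sea-level change field stops changing."""
    S = np.full_like(ice_change, eustatic)       # start from the eustatic value
    for _ in range(n_iter):
        ice_term = (RHO_ICE / GAMMA) * convolve_ice(ice_change)
        ocean_term = (RHO_WATER / GAMMA) * convolve_ocean(S)
        rhs = ice_term + ocean_term + eustatic
        # subtract the ocean averages so that the ocean-mean of S stays eustatic
        rhs -= ocean_average(ice_term + ocean_term, ocean_mask)
        if np.max(np.abs(rhs - S)) < tol:
            return rhs
        S = rhs
    return S

# Toy usage on a 1-D "planet": half ocean, half land, uniform ice melt of 10 m.
ice_change = np.full(10, -10.0)
ocean_mask = np.arange(10) >= 5
print(solve_sle(ice_change, ocean_mask, eustatic=3.0))
```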
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": " S=N-U, "
},
{
"math_id": 1,
"text": "S"
},
{
"math_id": 2,
"text": "N"
},
{
"math_id": 3,
"text": "U"
},
{
"math_id": 4,
"text": "S (\\theta, \\lambda, t) = \\frac{\\rho_i}{\\gamma} G_s \\otimes_i I + \\frac{\\rho_w}{\\gamma} G_s \\otimes_o S + S^E - \\frac{\\rho_i}{\\gamma}\\overline{G_s \\otimes_i I } - \\frac{\\rho_w}{\\gamma}\\overline{G_o \\otimes_o S }, "
},
{
"math_id": 5,
"text": "\\theta"
},
{
"math_id": 6,
"text": "\\lambda"
},
{
"math_id": 7,
"text": "t"
},
{
"math_id": 8,
"text": "\\rho_i"
},
{
"math_id": 9,
"text": "\\rho_w"
},
{
"math_id": 10,
"text": "\\gamma"
},
{
"math_id": 11,
"text": "G_s=G_s(h,k)"
},
{
"math_id": 12,
"text": "h"
},
{
"math_id": 13,
"text": "k"
},
{
"math_id": 14,
"text": " I= I(\\theta, \\lambda, t)"
},
{
"math_id": 15,
"text": "S^E=S^E(t)"
},
{
"math_id": 16,
"text": "\\otimes_i"
},
{
"math_id": 17,
"text": "\\otimes_o"
}
] |
https://en.wikipedia.org/wiki?curid=598434
|
59844
|
The Number of the Beast (novel)
|
1980 novel by Robert A. Heinlein
The Number of the Beast is a science fiction novel by American writer Robert A. Heinlein, published in 1980. Excerpts from the novel were serialized in the magazine "Omni" (October–November 1979).
Plot.
The book is a series of diary entries primarily by each of the four main characters: Zebadiah "Zeb" John Carter, programmer Dejah Thoris "Deety" Burroughs Carter, her mathematics professor father Jacob Burroughs, and off-campus socialite Hilda Corners. The names "Dejah Thoris", "Burroughs", and "Carter" are overt references to John Carter and Dejah Thoris, the protagonists of the Barsoom novels by Edgar Rice Burroughs.
In the opening, Deety is dancing with Zeb at a party at Hilda's mansion. Deety is trying to get Zeb to meet her father to discuss what she thinks is an article Zeb wrote about n-dimensional space, even going so far as to offer herself. Zeb figures out and explains to Deety that he is not the one who wrote the article but a relative with a similar name.
After dancing a very intimate tango, Zeb jokingly suggests the dance was so strong they should get married, and Deety agrees. Zeb is taken aback but then accepts. As they are leaving, Deety and Zeb rescue Jacob from a heated argument he is having with another faculty member before a fight breaks out. As they are approaching their vehicles, Hilda comes out, deciding to tag along. Zeb, having a premonition, grabs the three of them and ducks behind another vehicle before Jacob and Deety's vehicle explodes. Zeb gets everyone into his modified air car "Gay Deceiver" and by activating the "Deceiver"'s flying capability, escapes undetected by the authorities or the criminals who put a bomb in the other vehicle.
Zeb flies to Elko, Nevada, the state being the only one to allow people to get married 24 hours a day with no waiting period or blood test. The incidents have so traumatized Jacob that he has agreed to marry Hilda and so they have a double ceremony. The couples then go to Jacob's hidden cabin in the woods, where they have their honeymoons.
Thus begins the series of adventures that the four embark upon as they travel in the "Gay Deceiver", which is equipped with the professor's "continua" device and armed by the Australian Defence Force. The continua device was built by Professor Burroughs while he was formulating his theories on "n"-dimensional non-Euclidean geometry. The geometry of the novel's universe contains six dimensions – the three spatial dimensions, known to the real world, and three time dimensions: "t", the real world's temporal dimension, "τ" (Greek tau), and "т" (Cyrillic te). The continua device can travel on all six axes. The continua device allows travel into various fictional universes, such as the Land of Oz, as well as through time. An attempt to visit Barsoom takes them to an apparently different version of Mars, seemingly under the colonial rule of the British and Russian Empires, but near the end of the novel, Heinlein's recurring character Lazarus Long hints that they had traveled to Barsoom and that its "colonial" status was an illusion imposed on them by the telepathically adept Barsoomians:
<templatestyles src="Template:Blockquote/styles.css" />... E.R.B.'s universe is no harder to reach than any other and Mars is in its usual orbit. But that does not mean that you will find Jolly Green Giants and gorgeous red princesses dressed only in jewels. Unless invited, you are likely to find a Potemkin Village illusion tailored to your subconscious...
Title.
In the novel, the biblical number of the beast turns out to be not 666 but formula_0 = 10,314,424,798,490,535,546,171,949,056, the initial number of parallel universes accessible through the continua device. It is later theorized by the character Jacob that the number may be merely the number of universes instantly accessible from a given location and that there is a larger structure that implies an infinite number of universes.
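For readers who wish to check the arithmetic, the quoted figure is simply the exact value of formula_0; a minimal Python computation (included here only as a verification, not as part of the novel's text) reproduces it.

```python
# Verify the "number of the beast" used in the novel: (6**6)**6.
print((6**6)**6)  # 10314424798490535546171949056
print((6**6)**6 == 10_314_424_798_490_535_546_171_949_056)  # True
```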
Literary significance and reception.
Jack Kirwan wrote in "National Review" that the novel is "about two men and two women in a time machine safari through this and other universes. But describing "The Number of the Beast" thus is like saying "Moby Dick" is about a one-legged guy trying to catch a fish." He went on to state that Heinlein celebrates the "competent person".
Sue K. Hurwitz wrote in her review for the "School Library Journal" that it is "a catalog of Heinlein's sins as an author; it is sophomoric, sexist, militantly right wing, and excessively verbose" and commented that the book's ending was "a devastating parody of SF conventions—will have genre addicts rolling on the floor. It's garbage, but right from the top of the heap."
Heinlein buff David Potter explained on alt.fan.heinlein, in a posting reprinted on the Heinlein Society, that the entire book is actually "one of the greatest textbooks on narrative fiction ever produced, with a truly magnificent set of examples of "how not to do it" right there in the foreground, and constant explanations of how to do it right, with literary references to people and books that "did" do it right, in the background." He noted that "every single time there's a boring lecture or tedious character interaction going on in the foreground, there's an example of how to do it "right" in the background."
Greg Costikyan reviewed "The Number of the Beast" in "Ares Magazine" #5 and commented: "No one writes like Heinlein, and what is a disappointment from him would be a smashing success from anyone else."
James Nicoll has credited it as having taught him that he does not have to finish reading every book he begins.
"The Pursuit of the Pankera".
In 2020, a previously unpublished manuscript by Heinlein was released as The Pursuit of the Pankera. It uses the same premise and characters as "The Number of the Beast", and the first third of the two novels is the same. In the remainder of "The Pursuit of the Pankera", the characters visit fictional universes, primarily Barsoom, Oz, and the world of E. E. "Doc" Smith's "Lensman" series.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(6^6)^6"
}
] |
https://en.wikipedia.org/wiki?curid=59844
|
598500
|
Algebraic K-theory
|
Subject area in mathematics
Algebraic "K"-theory is a subject area in mathematics with connections to geometry, topology, ring theory, and number theory. Geometric, algebraic, and arithmetic objects are assigned objects called "K"-groups. These are groups in the sense of abstract algebra. They contain detailed information about the original object but are notoriously difficult to compute; for example, an important outstanding problem is to compute the "K"-groups of the integers.
"K"-theory was discovered in the late 1950s by Alexander Grothendieck in his study of intersection theory on algebraic varieties. In the modern language, Grothendieck defined only "K"0, the zeroth "K"-group, but even this single group has plenty of applications, such as the Grothendieck–Riemann–Roch theorem. Intersection theory is still a motivating force in the development of (higher) algebraic "K"-theory through its links with motivic cohomology and specifically Chow groups. The subject also includes classical number-theoretic topics like quadratic reciprocity and embeddings of number fields into the real numbers and complex numbers, as well as more modern concerns like the construction of higher regulators and special values of "L"-functions.
The lower "K"-groups were discovered first, in the sense that adequate descriptions of these groups in terms of other algebraic structures were found. For example, if "F" is a field, then "K"0("F") is isomorphic to the integers Z and is closely related to the notion of vector space dimension. For a commutative ring "R", the group "K"0("R") is related to the Picard group of "R", and when "R" is the ring of integers in a number field, this generalizes the classical construction of the class group. The group "K"1("R") is closely related to the group of units "R"×, and if "R" is a field, it is exactly the group of units. For a number field "F", the group "K"2("F") is related to class field theory, the Hilbert symbol, and the solvability of quadratic equations over completions. In contrast, finding the correct definition of the higher "K"-groups of rings was a difficult achievement of Daniel Quillen, and many of the basic facts about the higher "K"-groups of algebraic varieties were not known until the work of Robert Thomason.
History.
The history of "K"-theory was detailed by Charles Weibel.
The Grothendieck group "K"0.
In the 19th century, Bernhard Riemann and his student Gustav Roch proved what is now known as the Riemann–Roch theorem. If "X" is a Riemann surface, then the sets of meromorphic functions and meromorphic differential forms on "X" form vector spaces. A line bundle on "X" determines subspaces of these vector spaces, and if "X" is projective, then these subspaces are finite dimensional. The Riemann–Roch theorem states that the difference in dimensions between these subspaces is equal to the degree of the line bundle (a measure of twistedness) plus one minus the genus of "X". In the mid-20th century, the Riemann–Roch theorem was generalized by Friedrich Hirzebruch to all algebraic varieties. In Hirzebruch's formulation, the Hirzebruch–Riemann–Roch theorem, the theorem became a statement about Euler characteristics: The Euler characteristic of a vector bundle on an algebraic variety (which is the alternating sum of the dimensions of its cohomology groups) equals the Euler characteristic of the trivial bundle plus a correction factor coming from characteristic classes of the vector bundle. This is a generalization because on a projective Riemann surface, the Euler characteristic of a line bundle equals the difference in dimensions mentioned previously, the Euler characteristic of the trivial bundle is one minus the genus, and the only nontrivial characteristic class is the degree.
The subject of "K"-theory takes its name from a 1957 construction of Alexander Grothendieck which appeared in the Grothendieck–Riemann–Roch theorem, his generalization of Hirzebruch's theorem. Let "X" be a smooth algebraic variety. To each vector bundle on "X", Grothendieck associates an invariant, its "class". The set of all classes on "X" was called "K"("X") from the German "Klasse". By definition, "K"("X") is a quotient of the free abelian group on isomorphism classes of vector bundles on "X", and so it is an abelian group. If the basis element corresponding to a vector bundle "V" is denoted ["V"], then for each short exact sequence of vector bundles:
formula_0
Grothendieck imposed the relation ["V"] = ["V′"] + ["V″"]. These generators and relations define "K"("X"), and they imply that it is the universal way to assign invariants to vector bundles in a way compatible with exact sequences.
Grothendieck took the perspective that the Riemann–Roch theorem is a statement about morphisms of varieties, not the varieties themselves. He proved that there is a homomorphism from "K"("X") to the Chow groups of "X" coming from the Chern character and Todd class of "X". Additionally, he proved that a proper morphism "f" : "X" → "Y" to a smooth variety "Y" determines a homomorphism "f"* : "K"("X") → "K"("Y") called the "pushforward". This gives two ways of determining an element in the Chow group of "Y" from a vector bundle on "X": Starting from "X", one can first compute the pushforward in "K"-theory and then apply the Chern character and Todd class of "Y", or one can first apply the Chern character and Todd class of "X" and then compute the pushforward for Chow groups. The Grothendieck–Riemann–Roch theorem says that these are equal. When "Y" is a point, a vector bundle is a vector space, the class of a vector space is its dimension, and the Grothendieck–Riemann–Roch theorem specializes to Hirzebruch's theorem.
The group "K"("X") is now known as "K"0("X"). Upon replacing vector bundles by projective modules, "K"0 also became defined for non-commutative rings, where it had applications to group representations. Atiyah and Hirzebruch quickly transported Grothendieck's construction to topology and used it to define topological K-theory. Topological "K"-theory was one of the first examples of an extraordinary cohomology theory: It associates to each topological space "X" (satisfying some mild technical constraints) a sequence of groups "K""n"("X") which satisfy all the Eilenberg–Steenrod axioms except the normalization axiom. The setting of algebraic varieties, however, is much more rigid, and the flexible constructions used in topology were not available. While the group "K"0 seemed to satisfy the necessary properties to be the beginning of a cohomology theory of algebraic varieties and of non-commutative rings, there was no clear definition of the higher "K""n"("X"). Even as such definitions were developed, technical issues surrounding restriction and gluing usually forced "K""n" to be defined only for rings, not for varieties.
"K"0, "K"1, and "K"2.
A group closely related to "K"1 for group rings was earlier introduced by J.H.C. Whitehead. Henri Poincaré had attempted to define the Betti numbers of a manifold in terms of a triangulation. His methods, however, had a serious gap: Poincaré could not prove that two triangulations of a manifold always yielded the same Betti numbers. It was clearly true that Betti numbers were unchanged by subdividing the triangulation, and therefore it was clear that any two triangulations that shared a common subdivision had the same Betti numbers. What was not known was that any two triangulations admitted a common subdivision. This hypothesis became a conjecture known as the "Hauptvermutung" (roughly "main conjecture"). The fact that triangulations were stable under subdivision led J.H.C. Whitehead to introduce the notion of simple homotopy type. A simple homotopy equivalence is defined in terms of adding simplices or cells to a simplicial complex or cell complex in such a way that each additional simplex or cell deformation retracts into a subdivision of the old space. Part of the motivation for this definition is that a subdivision of a triangulation is simple homotopy equivalent to the original triangulation, and therefore two triangulations that share a common subdivision must be simple homotopy equivalent. Whitehead proved that simple homotopy equivalence is a finer invariant than homotopy equivalence by introducing an invariant called the "torsion". The torsion of a homotopy equivalence takes values in a group now called the "Whitehead group" and denoted "Wh"("π"), where "π" is the fundamental group of the target complex. Whitehead found examples of non-trivial torsion and thereby proved that some homotopy equivalences were not simple. The Whitehead group was later discovered to be a quotient of "K"1(Z"π"), where Z"π" is the integral group ring of "π". Later John Milnor used Reidemeister torsion, an invariant related to Whitehead torsion, to disprove the Hauptvermutung.
The first adequate definition of "K"1 of a ring was made by Hyman Bass and Stephen Schanuel. In topological "K"-theory, "K"1 is defined using vector bundles on a suspension of the space. All such vector bundles come from the clutching construction, where two trivial vector bundles on two halves of a space are glued along a common strip of the space. This gluing data is expressed using the general linear group, but elements of that group coming from elementary matrices (matrices corresponding to elementary row or column operations) define equivalent gluings. Motivated by this, the Bass–Schanuel definition of "K"1 of a ring "R" is "GL"("R") / "E"("R"), where "GL"("R") is the infinite general linear group (the union of all "GL""n"("R")) and "E"("R") is the subgroup of elementary matrices. They also provided a definition of "K"0 of a homomorphism of rings and proved that "K"0 and "K"1 could be fit together into an exact sequence similar to the relative homology exact sequence.
Work in "K"-theory from this period culminated in Bass' book "Algebraic "K"-theory". In addition to providing a coherent exposition of the results then known, Bass improved many of the statements of the theorems. Of particular note is that Bass, building on his earlier work with Murthy, provided the first proof of what is now known as the fundamental theorem of algebraic "K"-theory. This is a four-term exact sequence relating "K"0 of a ring "R" to "K"1 of "R", the polynomial ring "R"["t"], and the localization "R"["t", "t"−1]. Bass recognized that this theorem provided a description of "K"0 entirely in terms of "K"1. By applying this description recursively, he produced negative "K"-groups "K"−n("R"). In independent work, Max Karoubi gave another definition of negative "K"-groups for certain categories and proved that his definitions yielded that same groups as those of Bass.
The next major development in the subject came with the definition of "K"2. Steinberg studied the universal central extensions of a Chevalley group over a field and gave an explicit presentation of this group in terms of generators and relations. In the case of the group E"n"("k") of elementary matrices, the universal central extension is now written St"n"("k") and called the "Steinberg group". In the spring of 1967, John Milnor defined "K"2("R") to be the kernel of the homomorphism St("R") → "E"("R"). The group "K"2 further extended some of the exact sequences known for "K"1 and "K"0, and it had striking applications to number theory. Hideya Matsumoto's 1968 thesis showed that for a field "F", "K"2("F") was isomorphic to:
formula_1
This relation is also satisfied by the Hilbert symbol, which expresses the solvability of quadratic equations over local fields. In particular, John Tate was able to prove that "K"2(Q) is essentially structured around the law of quadratic reciprocity.
Higher "K"-groups.
In the late 1960s and early 1970s, several definitions of higher "K"-theory were proposed. Swan and Gersten both produced definitions of "K""n" for all "n", and Gersten proved that his and Swan's theories were equivalent, but the two theories were not known to satisfy all the expected properties. Nobile and Villamayor also proposed a definition of higher "K"-groups. Karoubi and Villamayor defined well-behaved "K"-groups for all "n", but their equivalent of "K"1 was sometimes a proper quotient of the Bass–Schanuel "K"1. Their "K"-groups are now called "KV""n" and are related to homotopy-invariant modifications of "K"-theory.
Inspired in part by Matsumoto's theorem, Milnor made a definition of the higher "K"-groups of a field. He referred to his definition as "purely "ad hoc"", and it neither appeared to generalize to all rings nor did it appear to be the correct definition of the higher "K"-theory of fields. Much later, it was discovered by Nesterenko and Suslin and by Totaro that Milnor "K"-theory is actually a direct summand of the true "K"-theory of the field. Specifically, "K"-groups have a filtration called the "weight filtration", and the Milnor "K"-theory of a field is the highest weight-graded piece of the "K"-theory. Additionally, Thomason discovered that there is no analog of Milnor "K"-theory for a general variety.
The first definition of higher "K"-theory to be widely accepted was Daniel Quillen's. As part of Quillen's work on the Adams conjecture in topology, he had constructed maps from the classifying spaces "BGL"(F"q") to the homotopy fiber of "ψ""q" − 1, where "ψ""q" is the "q"th Adams operation acting on the classifying space "BU". This map is acyclic, and after modifying "BGL"(F"q") slightly to produce a new space "BGL"(F"q")+, the map became a homotopy equivalence. This modification was called the plus construction. The Adams operations had been known to be related to Chern classes and to "K"-theory since the work of Grothendieck, and so Quillen was led to define the "K"-theory of "R" as the homotopy groups of "BGL"("R")+. Not only did this recover "K"1 and "K"2, the relation of "K"-theory to the Adams operations allowed Quillen to compute the "K"-groups of finite fields.
The classifying space "BGL" is connected, so Quillen's definition failed to give the correct value for "K"0. Additionally, it did not give any negative "K"-groups. Since "K"0 had a known and accepted definition it was possible to sidestep this difficulty, but it remained technically awkward. Conceptually, the problem was that the definition sprung from "GL", which was classically the source of "K"1. Because "GL" knows only about gluing vector bundles, not about the vector bundles themselves, it was impossible for it to describe "K"0.
Inspired by conversations with Quillen, Segal soon introduced another approach to constructing algebraic "K"-theory under the name of Γ-objects. Segal's approach is a homotopy analog of Grothendieck's construction of "K"0. Where Grothendieck worked with isomorphism classes of bundles, Segal worked with the bundles themselves and used isomorphisms of the bundles as part of his data. This results in a spectrum whose homotopy groups are the higher "K"-groups (including "K"0). However, Segal's approach was only able to impose relations for split exact sequences, not general exact sequences. In the category of projective modules over a ring, every short exact sequence splits, and so Γ-objects could be used to define the "K"-theory of a ring. However, there are non-split short exact sequences in the category of vector bundles on a variety and in the category of all modules over a ring, so Segal's approach did not apply to all cases of interest.
In the spring of 1972, Quillen found another approach to the construction of higher "K"-theory which was to prove enormously successful. This new definition began with an exact category, a category satisfying certain formal properties similar to, but slightly weaker than, the properties satisfied by a category of modules or vector bundles. From this he constructed an auxiliary category using a new device called his ""Q"-construction." Like Segal's Γ-objects, the "Q"-construction has its roots in Grothendieck's definition of "K"0. Unlike Grothendieck's definition, however, the "Q"-construction builds a category, not an abelian group, and unlike Segal's Γ-objects, the "Q"-construction works directly with short exact sequences. If "C" is an abelian category, then "QC" is a category with the same objects as "C" but whose morphisms are defined in terms of short exact sequences in "C". The "K"-groups of the exact category are the homotopy groups of Ω"BQC", the loop space of the geometric realization (taking the loop space corrects the indexing). Quillen additionally proved his "+ = "Q" theorem", which states that his two definitions of "K"-theory agree with each other. This yielded the correct "K"0 and led to simpler proofs, but still did not yield any negative "K"-groups.
All abelian categories are exact categories, but not all exact categories are abelian. Because Quillen was able to work in this more general situation, he was able to use exact categories as tools in his proofs. This technique allowed him to prove many of the basic theorems of algebraic "K"-theory. Additionally, it was possible to prove that the earlier definitions of Swan and Gersten were equivalent to Quillen's under certain conditions.
"K"-theory now appeared to be a homology theory for rings and a cohomology theory for varieties. However, many of its basic theorems carried the hypothesis that the ring or variety in question was regular. One of the basic expected relations was a long exact sequence (called the "localization sequence") relating the "K"-theory of a variety "X" and an open subset "U". Quillen was unable to prove the existence of the localization sequence in full generality. He was, however, able to prove its existence for a related theory called "G"-theory (or sometimes "K"′-theory). "G"-theory had been defined early in the development of the subject by Grothendieck. Grothendieck defined "G"0("X") for a variety "X" to be the free abelian group on isomorphism classes of coherent sheaves on "X", modulo relations coming from exact sequences of coherent sheaves. In the categorical framework adopted by later authors, the "K"-theory of a variety is the "K"-theory of its category of vector bundles, while its "G"-theory is the "K"-theory of its category of coherent sheaves. Not only could Quillen prove the existence of a localization exact sequence for "G"-theory, he could prove that for a regular ring or variety, "K"-theory equaled "G"-theory, and therefore "K"-theory of regular varieties had a localization exact sequence. Since this sequence was fundamental to many of the facts in the subject, regularity hypotheses pervaded early work on higher "K"-theory.
Applications of algebraic "K"-theory in topology.
The earliest application of algebraic "K"-theory to topology was Whitehead's construction of Whitehead torsion. A closely related construction was found by C. T. C. Wall in 1963. Wall found that a space "X" dominated by a finite complex has a generalized Euler characteristic taking values in a quotient of "K"0(Z"π"), where "π" is the fundamental group of the space. This invariant is called "Wall's finiteness obstruction" because "X" is homotopy equivalent to a finite complex if and only if the invariant vanishes. Laurent Siebenmann in his thesis found an invariant similar to Wall's that gives an obstruction to an open manifold being the interior of a compact manifold with boundary. If two manifolds with boundary "M" and "N" have isomorphic interiors (in TOP, PL, or DIFF as appropriate), then the isomorphism between them defines an "h"-cobordism between "M" and "N".
Whitehead torsion was eventually reinterpreted in a more directly "K"-theoretic way. This reinterpretation happened through the study of "h"-cobordisms. Two "n"-dimensional manifolds "M" and "N" are "h"-cobordant if there exists an ("n" + 1)-dimensional manifold with boundary "W" whose boundary is the disjoint union of "M" and "N" and for which the inclusions of "M" and "N" into "W" are homotopy equivalences (in the categories TOP, PL, or DIFF). Stephen Smale's "h"-cobordism theorem asserted that if "n" ≥ 5, "W" is compact, and "M", "N", and "W" are simply connected, then "W" is isomorphic to the cylinder "M" × [0, 1] (in TOP, PL, or DIFF as appropriate). This theorem proved the Poincaré conjecture for "n" ≥ 5.
If "M" and "N" are not assumed to be simply connected, then an "h"-cobordism need not be a cylinder. The "s"-cobordism theorem, due independently to Mazur, Stallings, and Barden, explains the general situation: An "h"-cobordism is a cylinder if and only if the Whitehead torsion of the inclusion "M" ⊂ "W" vanishes. This generalizes the "h"-cobordism theorem because the simple connectedness hypotheses imply that the relevant Whitehead group is trivial. In fact the "s"-cobordism theorem implies that there is a bijective correspondence between isomorphism classes of "h"-cobordisms and elements of the Whitehead group.
An obvious question associated with the existence of "h"-cobordisms is their uniqueness. The natural notion of equivalence is isotopy. Jean Cerf proved that for simply connected smooth manifolds "M" of dimension at least 5, isotopy of "h"-cobordisms is the same as a weaker notion called pseudo-isotopy. Hatcher and Wagoner studied the components of the space of pseudo-isotopies and related it to a quotient of "K"2(Z"π").
The proper context for the "s"-cobordism theorem is the classifying space of "h"-cobordisms. If "M" is a CAT manifold, then "H"CAT("M") is a space that classifies bundles of "h"-cobordisms on "M". The "s"-cobordism theorem can be reinterpreted as the statement that the set of connected components of this space is the Whitehead group of "π"1("M"). This space contains strictly more information than the Whitehead group; for example, the connected component of the trivial cobordism describes the possible cylinders on "M" and in particular is the obstruction to the uniqueness of a homotopy between a manifold and "M" × [0, 1]. Consideration of these questions led Waldhausen to introduce his algebraic "K"-theory of spaces. The algebraic "K"-theory of "M" is a space "A"("M") which is defined so that it plays essentially the same role for higher "K"-groups as "K"1(Z"π"1("M")) does for "M". In particular, Waldhausen showed that there is a map from "A"("M") to a space Wh("M") which generalizes the map "K"1(Z"π"1("M")) → Wh("π"1("M")) and whose homotopy fiber is a homology theory.
In order to fully develop "A"-theory, Waldhausen made significant technical advances in the foundations of "K"-theory. Waldhausen introduced Waldhausen categories, and for a Waldhausen category "C" he introduced a simplicial category "S"⋅"C" (the "S" is for Segal) defined in terms of chains of cofibrations in "C". This freed the foundations of "K"-theory from the need to invoke analogs of exact sequences.
Algebraic topology and algebraic geometry in algebraic "K"-theory.
Quillen suggested to his student Kenneth Brown that it might be possible to create a theory of sheaves of spectra of which "K"-theory would provide an example. The sheaf of "K"-theory spectra would, to each open subset of a variety, associate the "K"-theory of that open subset. Brown developed such a theory for his thesis. Simultaneously, Gersten had the same idea. At a Seattle conference in autumn of 1972, they together discovered a spectral sequence converging from the sheaf cohomology of formula_2, the sheaf of "K""n"-groups on "X", to the "K"-group of the total space. This is now called the Brown–Gersten spectral sequence.
Spencer Bloch, influenced by Gersten's work on sheaves of "K"-groups, proved that on a regular surface, the cohomology group formula_3 is isomorphic to the Chow group "CH"2("X") of codimension 2 cycles on "X". Inspired by this, Gersten conjectured that for a regular local ring "R" with fraction field "F", "K""n"("R") injects into "K""n"("F") for all "n". Soon Quillen proved that this is true when "R" contains a field, and using this he proved that
formula_4
for all "p". This is known as "Bloch's formula". While progress has been made on Gersten's conjecture since then, the general case remains open.
Lichtenbaum conjectured that special values of the zeta function of a number field could be expressed in terms of the "K"-groups of the ring of integers of the field. These special values were known to be related to the étale cohomology of the ring of integers. Quillen therefore generalized Lichtenbaum's conjecture, predicting the existence of a spectral sequence like the Atiyah–Hirzebruch spectral sequence in topological "K"-theory. Quillen's proposed spectral sequence would start from the étale cohomology of a ring "R" and, in high enough degrees and after completing at a prime l invertible in "R", abut to the l-adic completion of the "K"-theory of "R". In the case studied by Lichtenbaum, the spectral sequence would degenerate, yielding Lichtenbaum's conjecture.
The necessity of localizing at a prime l suggested to Browder that there should be a variant of "K"-theory with finite coefficients. He introduced "K"-theory groups "K""n"("R"; Z/lZ) which were Z/lZ-vector spaces, and he found an analog of the Bott element in topological "K"-theory. Soule used this theory to construct "étale Chern classes", an analog of topological Chern classes which took elements of algebraic "K"-theory to classes in étale cohomology. Unlike algebraic "K"-theory, étale cohomology is highly computable, so étale Chern classes provided an effective tool for detecting the existence of elements in "K"-theory. William G. Dwyer and Eric Friedlander then invented an analog of "K"-theory for the étale topology called étale "K"-theory. For varieties defined over the complex numbers, étale "K"-theory is isomorphic to topological "K"-theory. Moreover, étale "K"-theory admitted a spectral sequence similar to the one conjectured by Quillen. Thomason proved around 1980 that after inverting the Bott element, algebraic "K"-theory with finite coefficients became isomorphic to étale "K"-theory.
Throughout the 1970s and early 1980s, "K"-theory on singular varieties still lacked adequate foundations. While it was believed that Quillen's "K"-theory gave the correct groups, it was not known that these groups had all of the envisaged properties. For this, algebraic "K"-theory had to be reformulated. This was done by Thomason in a lengthy monograph which he co-credited to his deceased friend Thomas Trobaugh, who he said gave him a key idea in a dream. Thomason combined Waldhausen's construction of "K"-theory with the foundations of intersection theory described in volume six of Grothendieck's Séminaire de Géométrie Algébrique du Bois Marie. There, "K"0 was described in terms of complexes of sheaves on algebraic varieties. Thomason discovered that if one worked in the derived category of sheaves, there was a simple description of when a complex of sheaves could be extended from an open subset of a variety to the whole variety. By applying Waldhausen's construction of "K"-theory to derived categories, Thomason was able to prove that algebraic "K"-theory had all the expected properties of a cohomology theory.
In 1976, Keith Dennis discovered an entirely novel technique for computing "K"-theory based on Hochschild homology. This was based around the existence of the Dennis trace map, a homomorphism from "K"-theory to Hochschild homology. While the Dennis trace map seemed to be successful for calculations of "K"-theory with finite coefficients, it was less successful for rational calculations. Goodwillie, motivated by his "calculus of functors", conjectured the existence of a theory intermediate to "K"-theory and Hochschild homology. He called this theory topological Hochschild homology because its ground ring should be the sphere spectrum (considered as a ring whose operations are defined only up to homotopy). In the mid-1980s, Bokstedt gave a definition of topological Hochschild homology that satisfied nearly all of Goodwillie's conjectural properties, and this made possible further computations of "K"-groups. Bokstedt's version of the Dennis trace map was a transformation of spectra "K" → "THH". This transformation factored through the fixed points of a circle action on "THH", which suggested a relationship with cyclic homology. In the course of proving an algebraic "K"-theory analog of the Novikov conjecture, Bokstedt, Hsiang, and Madsen introduced topological cyclic homology, which bore the same relationship to topological Hochschild homology as cyclic homology did to Hochschild homology.
The Dennis trace map to topological Hochschild homology factors through topological cyclic homology, providing an even more detailed tool for calculations. In 1996, Dundas, Goodwillie, and McCarthy proved that topological cyclic homology has in a precise sense the same local structure as algebraic "K"-theory, so that if a calculation in "K"-theory or topological cyclic homology is possible, then many other "nearby" calculations follow.
Lower "K"-groups.
The lower "K"-groups were discovered first, and given various ad hoc descriptions, which remain useful. Throughout, let "A" be a ring.
"K"0.
The functor "K"0 takes a ring "A" to the Grothendieck group of the set of isomorphism classes of its finitely generated projective modules, regarded as a monoid under direct sum. Any ring homomorphism "A" → "B" gives a map "K"0("A") → "K"0("B") by mapping (the class of) a projective "A"-module "M" to "M" ⊗"A" "B", making "K"0 a covariant functor.
If the ring "A" is commutative, we can define a subgroup of "K"0("A") as the set
formula_5
where :
formula_6
is the map sending every (class of a) finitely generated projective "A"-module "M" to the rank of the free formula_7-module formula_8 (this module is indeed free, as any finitely generated projective module over a local ring is free). This subgroup formula_9 is known as the "reduced zeroth K-theory" of "A".
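The group-completion step underlying this definition can be illustrated with a minimal, hedged sketch in Python: the code below completes the commutative monoid of natural numbers under addition (modelling the ranks, or dimensions, of finitely generated projective modules over a field) to its Grothendieck group by taking formal differences. It is only meant to make the mechanism concrete; the class and method names are invented for the example and do not come from any library.

```python
# Minimal sketch of Grothendieck group completion for a commutative monoid.
# Elements of the completion are formal differences (a, b), read as "a - b",
# with (a1, b1) ~ (a2, b2) iff a1 + b2 + c == a2 + b1 + c for some c.
from dataclasses import dataclass

@dataclass(frozen=True)
class Diff:
    plus: int   # monoid element counted positively (e.g. rank of a module)
    minus: int  # monoid element counted negatively

    def __add__(self, other):
        return Diff(self.plus + other.plus, self.minus + other.minus)

    def equivalent(self, other):
        # For the monoid (N, +) cancellation holds, so no auxiliary c is needed.
        return self.plus + other.minus == other.plus + self.minus

# Over a field, a finitely generated projective module is a vector space, so its
# class in K0 is determined by its dimension; the completion of (N, +) is Z.
three_dim = Diff(3, 0)                    # class of a 3-dimensional space
line = Diff(1, 0)                         # class of a line
print(three_dim + line)                   # Diff(plus=4, minus=0), i.e. "4"
print(Diff(5, 2).equivalent(Diff(3, 0)))  # True: both represent "3"
```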
If "B" is a ring without an identity element, we can extend the definition of K0 as follows. Let "A" = "B"⊕Z be the extension of "B" to a ring with unity obtaining by adjoining an identity element (0,1). There is a short exact sequence "B" → "A" → Z and we define K0("B") to be the kernel of the corresponding map "K"0("A") → K0(Z) = Z.
Examples.
An algebro-geometric variant of this construction is applied to the category of algebraic varieties; it associates with a given algebraic variety "X" the Grothendieck's "K"-group of the category of locally free sheaves (or coherent sheaves) on "X". Given a compact topological space "X", the topological "K"-theory "K"top("X") of (real) vector bundles over "X" coincides with "K0" of the ring of continuous real-valued functions on "X".
Relative "K"0.
Let "I" be an ideal of "A" and define the "double" to be a subring of the Cartesian product "A"×"A":
formula_10
The "relative K-group" is defined in terms of the "double"
formula_11
where the map is induced by projection along the first factor.
The relative "K"0("A","I") is isomorphic to "K"0("I"), regarding "I" as a ring without identity. The independence from "A" is an analogue of the Excision theorem in homology.
"K"0 as a ring.
If "A" is a commutative ring, then the tensor product of projective modules is again projective, and so tensor product induces a multiplication turning K0 into a commutative ring with the class ["A"] as identity. The exterior product similarly induces a λ-ring structure.
The Picard group embeds as a subgroup of the group of units "K"0("A")∗.
"K"1.
Hyman Bass provided this definition, which generalizes the group of units of a ring: "K"1("A") is the abelianization of the infinite general linear group:
formula_12
Here
formula_13
is the direct limit of the GL("n"), which embeds in GL("n" + 1) as the upper left block matrix, and formula_14 is its commutator subgroup. Define an "elementary matrix" to be one which is the sum of an identity matrix and a single off-diagonal element (this is a subset of the elementary matrices used in linear algebra). Then Whitehead's lemma states that the group "E"("A") generated by elementary matrices equals the commutator subgroup [GL("A"), GL("A")]. Indeed, the group GL("A")/E("A") was first defined and studied by Whitehead, and is called the Whitehead group of the ring "A".
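A computation that lies behind Whitehead's lemma, and hence behind this definition, is that the diagonal matrix diag("u", "u"−1) is always a product of elementary matrices, even though a unit "u" itself need not be trivial in "K"1("A"). The following sympy check verifies the classical factorization "h"("u") = "w"("u")"w"(1)−1 with "w"("u") = "e"12("u")"e"21(−"u"−1)"e"12("u"); it is a symbolic verification of a well-known identity, written for illustration only and not taken from any K-theory library.

```python
# Symbolic check (illustration only) that diag(u, 1/u) lies in E_2(A):
# it equals w(u) * w(1)^(-1), where w(u) = e12(u) * e21(-1/u) * e12(u)
# is a product of elementary matrices.
from sympy import Integer, Matrix, simplify, symbols

u = symbols('u', nonzero=True)

def e12(x):
    # elementary matrix with x in the (1, 2) position
    return Matrix([[1, x], [0, 1]])

def e21(x):
    # elementary matrix with x in the (2, 1) position
    return Matrix([[1, 0], [x, 1]])

def w(x):
    return e12(x) * e21(-1 / x) * e12(x)   # equals [[0, x], [-1/x, 0]]

h = (w(u) * w(Integer(1)).inv()).applyfunc(simplify)

print(w(u).applyfunc(simplify))  # Matrix([[0, u], [-1/u, 0]])
print(h)                         # Matrix([[u, 0], [0, 1/u]])
```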
Relative "K"1.
The "relative K-group" is defined in terms of the "double"
formula_15
There is a natural exact sequence
formula_16
Commutative rings and fields.
For "A" a commutative ring, one can define a determinant det: GL("A") → "A*" to the group of units of "A", which vanishes on E("A") and thus descends to a map det : "K"1("A") → "A*". As E("A") ◅ SL("A"), one can also define the special Whitehead group S"K"1("A") := SL("A")/E("A"). This map splits via the map "A*" → GL(1, "A") → "K"1("A") (unit in the upper left corner), and hence is onto, and has the special Whitehead group as kernel, yielding the split short exact sequence:
formula_17
which is a quotient of the usual split short exact sequence defining the special linear group, namely
formula_18
The determinant is split by including the group of units "A*" = GL1("A") into the general linear group GL"(A)", so "K"1("A") splits as the direct sum of the group of units and the special Whitehead group: "K"1("A") ≅ "A*" ⊕ SK1 ("A").
When "A" is a Euclidean domain (e.g. a field, or the integers) S"K"1("A") vanishes, and the determinant map is an isomorphism from "K"1("A") to "A"∗. This is "false" in general for PIDs, thus providing one of the rare mathematical features of Euclidean domains that do not generalize to all PIDs. An explicit PID such that SK1 is nonzero was given by Ischebeck in 1980 and by Grayson in 1981. If "A" is a Dedekind domain whose quotient field is an algebraic number field (a finite extension of the rationals) then shows that S"K"1("A") vanishes.
The vanishing of SK1 can be interpreted as saying that K1 is generated by the image of GL1 in GL. When this fails, one can ask whether K1 is generated by the image of GL2. For a Dedekind domain, this is the case: indeed, K1 is generated by the images of GL1 and SL2 in GL. The subgroup of SK1 generated by SL2 may be studied by Mennicke symbols. For Dedekind domains with all quotients by maximal ideals finite, SK1 is a torsion group.
For a non-commutative ring, the determinant cannot in general be defined, but the map GL("A") → "K"1("A") is a generalisation of the determinant.
Central simple algebras.
In the case of a central simple algebra "A" over a field "F", the reduced norm provides a generalisation of the determinant giving a map "K"1("A") → "F"∗ and S"K"1("A") may be defined as the kernel. Wang's theorem states that if "A" has prime degree then S"K"1("A") is trivial, and this may be extended to square-free degree. Wang also showed that S"K"1("A") is trivial for any central simple algebra over a number field, but Platonov has given examples of algebras of degree prime squared for which S"K"1("A") is non-trivial.
"K"2.
John Milnor found the right definition of "K"2: it is the center of the Steinberg group St("A") of "A".
It can also be defined as the kernel of the map
formula_19
or as the Schur multiplier of the group of elementary matrices.
For a field, K2 is determined by Steinberg symbols: this leads to Matsumoto's theorem.
One can compute that K2 is zero for any finite field. The computation of K2(Q) is complicated: Tate proved
formula_20
and remarked that the proof followed Gauss's first proof of the Law of Quadratic Reciprocity.
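The maps from "K"2(Q) to the groups (Z/"p")∗ appearing in this decomposition can be described by tame symbols: for an odd prime "p", the class of the symbol {"a", "b"} maps to (−1)^("v"("a")"v"("b")) "a"^"v"("b") "b"^(−"v"("a")) reduced modulo "p", where "v" is the "p"-adic valuation. The following Python sketch (an illustration added here, not part of the source) computes this tame symbol and checks the Steinberg relation {"a", 1 − "a"} = 1 on a few sample rational numbers.
```python
from fractions import Fraction

def val(x, p):
    """p-adic valuation of a nonzero rational number x."""
    num, den, v = x.numerator, x.denominator, 0
    while num % p == 0:
        num //= p
        v += 1
    while den % p == 0:
        den //= p
        v -= 1
    return v

def tame_symbol(a, b, p):
    """Tame symbol of {a, b} at an odd prime p, as an element of (Z/p)*."""
    va, vb = val(a, p), val(b, p)
    u = Fraction(-1) ** (va * vb) * a ** vb * b ** (-va)   # this quantity is a p-adic unit
    return u.numerator * pow(u.denominator, -1, p) % p

p = 7
for a in [Fraction(7), Fraction(3, 7), Fraction(-6), Fraction(14, 5)]:
    # Steinberg relation: the symbol {a, 1 - a} is trivial in (Z/p)*.
    assert tame_symbol(a, 1 - a, p) == 1
print("Steinberg relation verified for the sampled values at p =", p)
```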
For non-Archimedean local fields, the group K2("F") is the direct sum of a finite cyclic group of order "m", say, and a divisible group K2("F")"m".
We have K2(Z) = Z/2, and in general K2 is finite for the ring of integers of a number field.
We further have K2(Z/"n") = Z/2 if "n" is divisible by 4, and otherwise zero.
Matsumoto's theorem.
Matsumoto's theorem states that for a field "k", the second "K"-group is given by
formula_21
Matsumoto's original theorem is even more general: For any root system, it gives a presentation for the unstable K-theory. This presentation is different from the one given here only for symplectic root systems. For non-symplectic root systems, the unstable second K-group with respect to the root system is exactly the stable K-group for GL("A"). Unstable second K-groups (in this context) are defined by taking the kernel of the universal central extension of the Chevalley group of universal type for a given root system. This construction yields the kernel of the Steinberg extension for the root systems "A""n" ("n" > 1) and, in the limit, stable second "K"-groups.
Long exact sequences.
If "A" is a Dedekind domain with field of fractions "F" then there is a long exact sequence
formula_22
where p runs over all prime ideals of "A".
There is also an extension of the exact sequence for relative K1 and K0:
formula_23
Pairing.
There is a pairing on K1 with values in K2. Given commuting matrices "X" and "Y" over "A", take elements "x" and "y" in the Steinberg group with "X","Y" as images. The commutator formula_24 is an element of K2. The map is not always surjective.
Milnor "K"-theory.
The above expression for "K"2 of a field "k" led Milnor to the following definition of "higher" "K"-groups by
formula_25
thus as graded parts of a quotient of the tensor algebra of the multiplicative group "k"× by the two-sided ideal, generated by the
formula_26
For "n" = 0,1,2 these coincide with those below, but for "n" ≧ 3 they differ in general. For example, we have "K'(F"'q) = 0" for "n" ≧ 2
but "KnFq" is nonzero for odd "n" (see below).
The tensor product on the tensor algebra induces a product formula_27 making formula_28 a graded ring which is graded-commutative.
The images of elements formula_29 in formula_30 are termed "symbols", denoted formula_31. For integer "m" invertible in "k" there is a map
formula_32
where formula_33 denotes the group of "m"-th roots of unity in some separable extension of "k". This extends to
formula_34
satisfying the defining relations of the Milnor K-group. Hence formula_35 may be regarded as a map on formula_30, called the "Galois symbol" map.
The relation between étale (or Galois) cohomology of the field and Milnor K-theory modulo 2 is the Milnor conjecture, proven by Vladimir Voevodsky. The analogous statement for odd primes is the Bloch-Kato conjecture, proved by Voevodsky, Rost, and others.
Higher "K"-theory.
The accepted definitions of higher "K"-groups were given by Quillen, after a few years during which several incompatible definitions were suggested. The object of the program was to find definitions of K("R") and K("R","I") in terms of classifying spaces so that
"R" ⇒ K("R") and ("R","I") ⇒ K("R","I") are functors into a homotopy category of spaces and the long exact sequence for relative K-groups arises as the long exact homotopy sequence of a fibration K("R","I") → K("R") → K("R"/"I").
Quillen gave two constructions, the "plus-construction" and the ""Q"-construction", the latter subsequently modified in different ways. The two constructions yield the same K-groups.
The +-construction.
One possible definition of higher algebraic "K"-theory of rings was given by Quillen
formula_36
Here π"n" is a homotopy group, GL("R") is the direct limit of the general linear groups over "R" for the size of the matrix tending to infinity, "B" is the classifying space construction of homotopy theory, and the + is Quillen's plus construction. He originally found this idea while studying the group cohomology of formula_37 and noted some of his calculations were related to formula_38.
This definition only holds for "n" > 0 so one often defines the higher algebraic "K"-theory via
formula_39
Since "BGL"("R")+ is path connected and "K"0("R") discrete, this definition doesn't differ in higher degrees and also holds for "n" = 0.
The "Q"-construction.
The "Q"-construction gives the same results as the +-construction, but it applies in more general situations. Moreover, the definition is more direct in the sense that the "K"-groups, defined via the "Q"-construction are functorial by definition. This fact is not automatic in the plus-construction.
Suppose formula_40 is an exact category; associated to formula_40 a new category formula_41 is defined, objects of which are those of formula_40 and morphisms from "M"′ to "M"″ are isomorphism classes of diagrams
formula_42
where the first arrow is an admissible epimorphism and the second arrow is an admissible monomorphism. Note that the morphisms in formula_41 are analogous to the definitions of morphisms in the category of motives, where morphisms are given as correspondences formula_43 such that formula_44 is a diagram where the arrow on the left is a covering map (hence surjective) and the arrow on the right is injective. This category can then be turned into a topological space using the classifying space construction formula_45, which is defined to be the geometric realisation of the "nerve" of formula_41. The "i"-th "K"-group of the exact category formula_40 is then defined as
formula_46
with a fixed zero-object formula_47. Note that the classifying space of a groupoid formula_48 shifts the homotopy groups up by one degree; this accounts for formula_49 being defined as formula_50 of a space.
This definition coincides with the above definition of "K"0("P"). If "P" is the category of finitely generated projective "R"-modules, this definition agrees with the above "BGL+"
definition of "K""n"("R") for all "n".
More generally, for a scheme "X", the higher "K"-groups of "X" are defined to be the "K"-groups of (the exact category of) locally free coherent sheaves on "X".
The following variant of this is also used: instead of finitely generated projective (= locally free) modules, take finitely generated modules. The resulting "K"-groups are usually written "G""n"("R"). When "R" is a noetherian regular ring, then "G"- and "K"-theory coincide. Indeed, the global dimension of regular rings is finite, i.e. any finitely generated module has a finite projective resolution "P"* → "M", and a simple argument shows that the canonical map "K"0(R) → "G"0(R) is an isomorphism, with ["M"]=Σ ± ["P""n"]. This isomorphism extends to the higher "K"-groups, too.
The "S"-construction.
A third construction of "K"-theory groups is the "S"-construction, due to Waldhausen. It applies to categories with cofibrations (also called Waldhausen categories). This is a more general concept than exact categories.
Examples.
While the Quillen algebraic "K"-theory has provided deep insight into various aspects of algebraic geometry and topology, the "K"-groups have proved particularly difficult to compute except in a few isolated but interesting cases. (See also: K-groups of a field.)
Algebraic "K"-groups of finite fields.
The first and one of the most important calculations of the higher algebraic "K"-groups of a ring were made by Quillen himself for the case of finite fields:
If F"q" is the finite field with "q" elements, then:
Rick Jardine (1993) reproved Quillen's computation using different methods.
Algebraic "K"-groups of rings of integers.
Quillen proved that if "A" is the ring of algebraic integers in an algebraic number field "F" (a finite extension of the rationals), then the algebraic K-groups of "A" are finitely generated. Armand Borel used this to calculate "K""i"("A") and K"i"("F") modulo torsion. For example, for the integers Z, Borel proved that (modulo torsion)
The torsion subgroups of K2"i"+1(Z), and the orders of the finite groups K4"k"+2(Z) have recently been determined, but whether the latter groups are cyclic, and whether the groups "K"4"k"(Z) vanish depends upon Vandiver's conjecture about the class groups of cyclotomic integers. See Quillen–Lichtenbaum conjecture for more details.
Applications and open questions.
Algebraic "K"-groups are used in conjectures on special values of L-functions and the formulation of a non-commutative main conjecture of Iwasawa theory and in construction of higher regulators.
Parshin's conjecture concerns the higher algebraic "K"-groups for smooth varieties over finite fields, and states that in this case the groups vanish up to torsion.
Another fundamental conjecture due to Hyman Bass (Bass' conjecture) says that all of the groups "Gn"("A") are finitely generated when "A" is a finitely generated Z-algebra. (The groups "Gn"("A") are the "K"-groups of the category of finitely generated "A"-modules.)
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "0 \\to V' \\to V \\to V'' \\to 0,"
},
{
"math_id": 1,
"text": "F^\\times \\otimes_{\\mathbf{Z}} F^\\times / \\langle x \\otimes (1 - x) \\colon x \\in F \\setminus \\{0, 1\\} \\rangle."
},
{
"math_id": 2,
"text": "\\mathcal K_n"
},
{
"math_id": 3,
"text": "H^2(X, \\mathcal K_2)"
},
{
"math_id": 4,
"text": "H^p(X, \\mathcal K_p) \\cong \\operatorname{CH}^p(X)"
},
{
"math_id": 5,
"text": "\\tilde{K}_0\\left(A\\right) = \\bigcap\\limits_{\\mathfrak p\\text{ prime ideal of }A}\\mathrm{Ker}\\dim_{\\mathfrak p},"
},
{
"math_id": 6,
"text": "\\dim_{\\mathfrak p}:K_0\\left(A\\right)\\to \\mathbf{Z}"
},
{
"math_id": 7,
"text": "A_{\\mathfrak p}"
},
{
"math_id": 8,
"text": "M_{\\mathfrak p}"
},
{
"math_id": 9,
"text": "\\tilde{K}_0\\left(A\\right)"
},
{
"math_id": 10,
"text": "D(A,I) = \\{ (x,y) \\in A \\times A : x-y \\in I \\} \\ . "
},
{
"math_id": 11,
"text": "K_0(A,I) = \\ker \\left({ K_0(D(A,I)) \\rightarrow K_0(A) }\\right) \\ . "
},
{
"math_id": 12,
"text": "K_1(A) = \\operatorname{GL}(A)^{\\mbox{ab}} = \\operatorname{GL}(A) / [\\operatorname{GL}(A),\\operatorname{GL}(A)]"
},
{
"math_id": 13,
"text": "\\operatorname{GL}(A) = \\operatorname{colim} \\operatorname{GL}(n, A)"
},
{
"math_id": 14,
"text": "[\\operatorname{GL}(A), \\operatorname{GL}(A)]"
},
{
"math_id": 15,
"text": "K_1(A,I) = \\ker \\left({ K_1(D(A,I)) \\rightarrow K_1(A) }\\right) \\ . "
},
{
"math_id": 16,
"text": " K_1(A,I) \\rightarrow K_1(A) \\rightarrow K_1(A/I) \\rightarrow K_0(A,I) \\rightarrow K_0(A) \\rightarrow K_0(A/I) \\ . "
},
{
"math_id": 17,
"text": "1 \\to SK_1(A) \\to K_1(A) \\to A^* \\to 1,"
},
{
"math_id": 18,
"text": "1 \\to \\operatorname{SL}(A) \\to \\operatorname{GL}(A) \\to A^* \\to 1."
},
{
"math_id": 19,
"text": "\\varphi\\colon\\operatorname{St}(A)\\to\\mathrm{GL}(A),"
},
{
"math_id": 20,
"text": "K_2(\\mathbf{Q}) = (\\mathbf{Z}/4)^* \\times \\prod_{p \\text{ odd prime}} (\\mathbf{Z}/p)^* \\ "
},
{
"math_id": 21,
"text": "K_2(k) = k^\\times\\otimes_{\\mathbf Z} k^\\times/\\langle a\\otimes(1-a)\\mid a\\not=0,1\\rangle."
},
{
"math_id": 22,
"text": " K_2F \\rightarrow \\oplus_{\\mathbf p} K_1 A/{\\mathbf p} \\rightarrow K_1 A \\rightarrow K_1 F \\rightarrow \\oplus_{\\mathbf p} K_0 A/{\\mathbf p} \\rightarrow K_0 A \\rightarrow K_0 F \\rightarrow 0 \\ "
},
{
"math_id": 23,
"text": "K_2(A) \\rightarrow K_2(A/I) \\rightarrow K_1(A,I) \\rightarrow K_1(A) \\cdots \\ . "
},
{
"math_id": 24,
"text": "x y x^{-1} y^{-1}"
},
{
"math_id": 25,
"text": " K^M_*(k) := T^*(k^\\times)/(a\\otimes (1-a)), "
},
{
"math_id": 26,
"text": "\\left \\{a\\otimes(1-a): \\ a \\neq 0,1 \\right \\}."
},
{
"math_id": 27,
"text": " K_m \\times K_n \\rightarrow K_{m+n}"
},
{
"math_id": 28,
"text": " K^M_*(F)"
},
{
"math_id": 29,
"text": "a_1 \\otimes \\cdots \\otimes a_n"
},
{
"math_id": 30,
"text": "K^M_n(k)"
},
{
"math_id": 31,
"text": "\\{a_1,\\ldots,a_n\\}"
},
{
"math_id": 32,
"text": "\\partial : k^* \\rightarrow H^1(k,\\mu_m) "
},
{
"math_id": 33,
"text": "\\mu_m"
},
{
"math_id": 34,
"text": "\\partial^n : k^* \\times \\cdots \\times k^* \\rightarrow H^n\\left({k,\\mu_m^{\\otimes n}}\\right) \\ "
},
{
"math_id": 35,
"text": "\\partial^n"
},
{
"math_id": 36,
"text": " K_n(R) = \\pi_n(B\\operatorname{GL}(R)^+),"
},
{
"math_id": 37,
"text": "GL_n(\\mathbb{F}_q)"
},
{
"math_id": 38,
"text": "K_1(\\mathbb{F}_q)"
},
{
"math_id": 39,
"text": " K_n(R) = \\pi_n(B\\operatorname{GL}(R)^+\\times K_0(R)) "
},
{
"math_id": 40,
"text": "P"
},
{
"math_id": 41,
"text": "QP"
},
{
"math_id": 42,
"text": " M'\\longleftarrow N\\longrightarrow M'',"
},
{
"math_id": 43,
"text": "Z \\subset X \\times Y"
},
{
"math_id": 44,
"text": "X \\leftarrow Z \\rightarrow Y"
},
{
"math_id": 45,
"text": "BQP"
},
{
"math_id": 46,
"text": " K_i(P)=\\pi_{i+1}(\\mathrm{BQ}P,0)"
},
{
"math_id": 47,
"text": "0"
},
{
"math_id": 48,
"text": "B\\mathcal{G}"
},
{
"math_id": 49,
"text": "K_i"
},
{
"math_id": 50,
"text": "\\pi_{i+1}"
}
] |
https://en.wikipedia.org/wiki?curid=598500
|
59850453
|
W. G. Brown
|
Canadian mathematician
William G. Brown is a Canadian mathematician specializing in graph theory. He is a professor emeritus of mathematics at McGill University.
Education and career.
Brown earned his Ph.D. from the University of Toronto in 1963, under the joint supervision of Harold Scott MacDonald Coxeter and W. T. Tutte. His dissertation was "Enumeration Problems Of Linear Graph Theory (Problems in the Enumeration of Maps)".
In 1968, he moved to McGill from the University of British Columbia as an associate professor.
Contributions.
Brown's dissertation research concerned graph enumeration, and his early publications continued in that direction.[E][T] However, much of his later work was in extremal graph theory. He is known for formulating the Ruzsa–Szemerédi problem on the density of systems of triples in which no six points contain more than two triples in joint work with Paul Erdős and Vera T. Sós,[A][B] and for his constructions of dense formula_0-free graphs in connection with the Zarankiewicz problem.[Z]
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "K_{3,3}"
}
] |
https://en.wikipedia.org/wiki?curid=59850453
|
59850911
|
Salem–Spencer set
|
Progression-free set of numbers
In mathematics, and in particular in arithmetic combinatorics, a Salem–Spencer set is a set of numbers no three of which form an arithmetic progression. Salem–Spencer sets are also called 3-AP-free sequences or progression-free sets. They have also been called non-averaging sets, but this term has also been used to denote a set of integers none of which can be obtained as the average of any subset of the other numbers. Salem–Spencer sets are named after Raphaël Salem and Donald C. Spencer, who showed in 1942 that Salem–Spencer sets can have nearly-linear size. However, a later theorem of Klaus Roth shows that the size is always less than linear.
Examples.
For formula_0, the smallest values of formula_1 such that the numbers from formula_2 to formula_1 have a formula_3-element Salem–Spencer set are
1, 2, 4, 5, 9, 11, 13, 14, 20, 24, 26, 30, 32, 36, ... (sequence in the OEIS)
For instance, among the numbers from 1 to 14, the eight numbers
{1, 2, 4, 5, 10, 11, 13, 14}
form the unique largest Salem-Spencer set.
This example is shifted by adding one to the elements of an infinite Salem–Spencer set, the Stanley sequence
0, 1, 3, 4, 9, 10, 12, 13, 27, 28, 30, 31, 36, 37, 39, 40, ... (sequence in the OEIS)
of numbers that, when written as a ternary number, use only the digits 0 and 1. This sequence is the lexicographically first infinite Salem–Spencer set. Another infinite Salem–Spencer set is given by the cubes
0, 1, 8, 27, 64, 125, 216, 343, 512, 729, 1000, ... (sequence in the OEIS)
It is a theorem of Leonhard Euler that no three cubes are in arithmetic progression.
Size.
In 1942, Salem and Spencer published a proof that the integers in the range from formula_2 to formula_1 have large Salem–Spencer sets, of size formula_4. The denominator of this expression uses big O notation, and grows more slowly than any power of formula_1, so the sets found by Salem and Spencer have a size that is nearly linear. This bound disproved a conjecture of Paul Erdős and Pál Turán that the size of such a set could be at most formula_5 for some formula_6.
The construction of Salem and Spencer was improved by Felix Behrend in 1946, who found sets of size formula_7.
In 1952, Klaus Roth proved Roth's theorem establishing that the size of a Salem–Spencer set must be formula_8. Therefore, although the sets constructed by Salem, Spencer, and Behrend have sizes that are nearly linear, it is not possible to improve them and find sets whose size is actually linear. This result became a special case of Szemerédi's theorem on the density of sets of integers that avoid longer arithmetic progressions. To distinguish Roth's bound on Salem–Spencer sets from Roth's theorem on Diophantine approximation of algebraic numbers, this result has been called "Roth's theorem on arithmetic progressions". After several additional improvements to Roth's theorem, the size of a Salem–Spencer set has been proven to be formula_9. An even better bound of formula_10 (for some formula_6 that has not been explicitly computed) was announced in 2020 in a preprint. In 2023, a new bound of formula_11 was found by computer scientists Kelley and Meka; shortly afterward, an exposition in more familiar mathematical terms was given by Bloom and Sisask, who have since also improved the exponent of the Kelley–Meka bound to formula_12 (and conjectured formula_13) in a preprint.
Construction.
A simple construction for a Salem–Spencer set (of size considerably smaller than Behrend's bound) is to choose the ternary numbers that use only the digits 0 and 1, not 2. Such a set must be progression-free, because if two of its elements formula_14 and formula_15 are the first and second members of an arithmetic progression, the third member must have the digit two at the position of the least significant digit where formula_14 and formula_15 differ. The illustration shows a set of this form, for the three-digit ternary numbers (shifted by one to make the smallest element 1 instead of 0).
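The digit-based construction just described is easy to carry out directly. The following Python sketch (the cutoff "n" = 200 is an arbitrary choice) generates the numbers whose ternary representations use only the digits 0 and 1, applies the shift by one used above, and verifies by brute force that the resulting set contains no three-term arithmetic progression.
```python
def only_digits_0_and_1_base_3(x):
    """True if the base-3 representation of x uses only the digits 0 and 1."""
    while x:
        if x % 3 == 2:
            return False
        x //= 3
    return True

def is_progression_free(s):
    """Brute-force check: no three distinct elements form an arithmetic progression."""
    members, s = set(s), sorted(s)
    return not any(2 * y - x in members
                   for i, x in enumerate(s) for y in s[i + 1:])

n = 200
salem_spencer = [x for x in range(1, n + 1) if only_digits_0_and_1_base_3(x - 1)]
print(salem_spencer[:8])       # [1, 2, 4, 5, 10, 11, 13, 14], as in the example above
assert is_progression_free(salem_spencer)
```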
Behrend's construction uses a similar idea, for a larger odd radix formula_16. His set consists of the numbers whose digits are restricted to the range from formula_17 to formula_18 (so that addition of these numbers has no carries), with the extra constraint that the sum of the squares of the digits is some chosen value formula_3. If the digits of each number are thought of as coordinates of a vector, this constraint describes a sphere in the resulting vector space, and by convexity the average of two distinct values on this sphere will be interior to the sphere rather than on it. Therefore, if two elements of Behrend's set are the endpoints of an arithmetic progression, the middle value of the progression (their average) will not be in the set. Thus, the resulting set is progression-free.
With a careful choice of formula_19, and a choice of formula_3 as the most frequently-occurring sum of squares of digits, Behrend achieves his bound.
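Behrend's construction can likewise be sketched in a few lines of code. In the Python sketch below, the radix parameter "d" = 5 and the number of digits are small, arbitrary choices; the program groups the digit-restricted numbers by the sum of the squares of their digits, keeps a largest group, and checks by brute force that it is progression-free.
```python
from collections import defaultdict
from itertools import product

def behrend_set(d, num_digits):
    """Behrend-style progression-free set: numbers written in base 2d-1 with all
    digits in {0, ..., d-1}, restricted to a most popular value of the sum of
    squared digits (a sphere in digit space, so the midpoint of any progression
    would have to lie strictly inside the sphere rather than on it)."""
    base = 2 * d - 1
    spheres = defaultdict(list)
    for digits in product(range(d), repeat=num_digits):
        value = sum(digit * base ** i for i, digit in enumerate(digits))
        spheres[sum(digit * digit for digit in digits)].append(value)
    return max(spheres.values(), key=len)

def is_progression_free(s):
    members, s = set(s), sorted(s)
    return not any(2 * y - x in members
                   for i, x in enumerate(s) for y in s[i + 1:])

s = behrend_set(d=5, num_digits=4)
print(len(s), "elements, all less than", (2 * 5 - 1) ** 4)
assert is_progression_free(s)
```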
In 1953, Leo Moser proved that there is a single infinite Salem–Spencer sequence achieving the same asymptotic density on every prefix as Behrend's construction.
By considering the convex hull of points inside a sphere, rather than the set of points on a sphere,
it is possible to improve the construction by a factor of formula_20. However, this does not affect the size bound in the form stated above.
Generalization.
The notion of Salem–Spencer sets (3-AP-free sets) can be generalized to formula_3-AP-free sets, in which formula_3 elements form an arithmetic progression if and only if they are all equal. Rankin gave constructions of large formula_3-AP-free sets.
Computational results.
Gasarch, Glenn, and Kruskal have performed a comparison of different computational methods for large subsets of formula_21 with no arithmetic progression. Using these methods they found the exact size of the largest such set for formula_22. Their results include several new bounds for different values of formula_1, found by branch-and-bound algorithms that use linear programming and problem-specific heuristics to bound the size that can be achieved in any branch of the search tree. One heuristic that they found to be particularly effective was the "thirds method", in which two shifted copies of a Salem–Spencer set for formula_1 are placed in the first and last thirds of a set for formula_23.
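For very small instances, maximum sizes of this kind can be reproduced by a direct exhaustive search, far simpler than the methods used in such studies. The following Python sketch (exponential time, suitable only for tiny "n") finds the size of the largest progression-free subset of {1, ..., "n"}; for "n" = 14 it returns 8, matching the example above.
```python
def max_progression_free(n):
    """Size of the largest subset of {1, ..., n} with no three-term arithmetic
    progression, found by exhaustive recursion with a simple pruning bound."""
    best = 0

    def extend(chosen, value):
        nonlocal best
        if len(chosen) + (n - value + 1) <= best:
            return                       # even taking everything left cannot improve
        if value > n:
            best = max(best, len(chosen))
            return
        chosen_set = set(chosen)
        # value can be added unless some already-chosen a < b satisfies 2*b - a == value.
        if all(2 * b - value not in chosen_set for b in chosen):
            extend(chosen + [value], value + 1)
        extend(chosen, value + 1)        # branch in which value is skipped

    extend([], 1)
    return best

for n in [4, 8, 14, 20]:
    print(n, max_progression_free(n))    # expected sizes: 3, 4, 8, 9
```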
Applications.
In connection with the Ruzsa–Szemerédi problem, Salem–Spencer sets have been used to construct dense graphs in which each edge belongs to a unique triangle.
Salem–Spencer sets have also been used in theoretical computer science. They have been used in the design of the Coppersmith–Winograd algorithm for fast matrix multiplication, and in the construction of efficient non-interactive zero-knowledge proofs. Recently, they have been used to show size lower bounds for graph spanners and, assuming the strong exponential time hypothesis, the hardness of the subset sum problem.
These sets can also be applied in recreational mathematics to a mathematical chess problem of
placing as few queens as possible on the main diagonal of an formula_24 chessboard so that all squares of the board are attacked. The set of diagonal squares that remain unoccupied must form a Salem–Spencer set, in which all values have the same parity (all odd or all even).
The smallest possible set of queens is the complement of the largest Salem–Spencer subset of the odd numbers in formula_21.
This Salem-Spencer subset can be found by doubling and subtracting one from the values in a Salem–Spencer subset of all the numbers in formula_25
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k=1,2,\\dots"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "1"
},
{
"math_id": 3,
"text": "k"
},
{
"math_id": 4,
"text": "n/e^{O(\\log n/\\log\\log n)}"
},
{
"math_id": 5,
"text": "n^{1-\\delta}"
},
{
"math_id": 6,
"text": "\\delta>0"
},
{
"math_id": 7,
"text": "n/e^{O(\\sqrt{\\log n})}"
},
{
"math_id": 8,
"text": "O(n/\\log\\log n)"
},
{
"math_id": 9,
"text": "O\\bigl(n(\\log\\log n)^4/\\log n\\bigr)"
},
{
"math_id": 10,
"text": "O\\bigl(n/(\\log n)^{1+\\delta}\\bigr)"
},
{
"math_id": 11,
"text": "\\exp(-c(\\log N)^{1/12})N"
},
{
"math_id": 12,
"text": "\\beta=1/9"
},
{
"math_id": 13,
"text": "\\beta=5/41"
},
{
"math_id": 14,
"text": "x"
},
{
"math_id": 15,
"text": "y"
},
{
"math_id": 16,
"text": "2d-1"
},
{
"math_id": 17,
"text": "0"
},
{
"math_id": 18,
"text": "d-1"
},
{
"math_id": 19,
"text": "d"
},
{
"math_id": 20,
"text": "\\sqrt{\\log n}"
},
{
"math_id": 21,
"text": "\\{1,\\dots n\\}"
},
{
"math_id": 22,
"text": "n\\le 187"
},
{
"math_id": 23,
"text": "3n"
},
{
"math_id": 24,
"text": "n\\times n"
},
{
"math_id": 25,
"text": "\\{1,\\dots n/2\\}."
}
] |
https://en.wikipedia.org/wiki?curid=59850911
|
5985207
|
Expansion of the universe
|
Increase in distance between parts of the universe over time
The expansion of the universe is the increase in distance between gravitationally unbound parts of the observable universe with time. It is an intrinsic expansion, so it does not mean that the universe expands "into" anything or that space exists "outside" it. To any observer in the universe, it appears that all but the nearest galaxies (which are bound to each other by gravity) recede at speeds that are proportional to their distance from the observer, on average. While objects cannot move faster than light, this limitation applies only with respect to local reference frames and does not limit the recession rates of cosmologically distant objects.
Cosmic expansion is a key feature of Big Bang cosmology. It can be modeled mathematically with the Friedmann–Lemaître–Robertson–Walker metric (FLRW), where it corresponds to an increase in the scale of the spatial part of the universe's spacetime metric tensor (which governs the size and geometry of spacetime). Within this framework, the separation of objects over time is associated with the expansion of space itself. However, this is not a generally covariant description but rather only a choice of coordinates. Contrary to common misconception, it is equally valid to adopt a description in which space does not expand and objects simply move apart while under the influence of their mutual gravity. Although cosmic expansion is often framed as a consequence of general relativity, it is also predicted by Newtonian gravity.<ref name="10.1093/mnras/282.1.206">Tipler, Monthly Notices of the Royal Astronomical Society 282(1), pp. 206–210 (1996).</ref>
According to inflation theory, during the inflationary epoch about 10^−32 of a second after the Big Bang, the universe suddenly expanded, and its volume increased by a factor of at least 10^78 (an expansion of distance by a factor of at least 10^26 in each of the three dimensions). This would be equivalent to expanding an object 1 nanometer across (about half the width of a molecule of DNA) to one approximately 10.6 light-years across (about 62 trillion miles). Cosmic expansion subsequently decelerated to much slower rates, until around 9.8 billion years after the Big Bang (4 billion years ago) it began to gradually expand more quickly, and is still doing so. Physicists have postulated the existence of dark energy, appearing as a cosmological constant in the simplest gravitational models, as a way to explain this late-time acceleration. According to the simplest extrapolation of the currently favored cosmological model, the Lambda-CDM model, this acceleration becomes dominant in the future.
History.
In 1912–1914, Vesto M. Slipher discovered that light from remote galaxies was redshifted, a phenomenon later interpreted as galaxies receding from the Earth. In 1922, Alexander Friedmann used the Einstein field equations to provide theoretical evidence that the universe is expanding.
Swedish astronomer Knut Lundmark was the first person to find observational evidence for expansion, in 1924. According to Ian Steer of the NASA/IPAC Extragalactic Database of Galaxy Distances, "Lundmark's extragalactic distance estimates were far more accurate than Hubble's, consistent with an expansion rate (Hubble constant) that was within 1% of the best measurements today."
In 1927, Georges Lemaître independently reached a similar conclusion to Friedmann on a theoretical basis, and also presented observational evidence for a linear relationship between distance to galaxies and their recessional velocity. Edwin Hubble observationally confirmed Lundmark's and Lemaître's findings in 1929. Assuming the cosmological principle, these findings would imply that all galaxies are moving away from each other.
Astronomer Walter Baade recalculated the size of the known universe in the 1940s, doubling the previous calculation made by Hubble in 1929. He announced this finding to considerable astonishment at the 1952 meeting of the International Astronomical Union in Rome. For most of the second half of the 20th century, the value of the Hubble constant was estimated to be between .
On 13 January 1994, NASA formally announced a completion of its repairs related to the main mirror of the Hubble Space Telescope, allowing for sharper images and, consequently, more accurate analyses of its observations. Shortly after the repairs were made, Wendy Freedman's 1994 Key Project analyzed the recession velocity of M100 from the core of the Virgo Cluster, offering a Hubble constant measurement of . Later the same year, Adam Riess et al. used an empirical method of visual-band light-curve shapes to more finely estimate the luminosity of Type Ia supernovae. This further minimized the systematic measurement errors of the Hubble constant, to . Riess's measurements on the recession velocity of the nearby Virgo Cluster more closely agree with subsequent and independent analyses of Cepheid variable calibrations of Type Ia supernova, which estimate a Hubble constant of . In 2003, David Spergel's analysis of the cosmic microwave background during the first year observations of the "Wilkinson Microwave Anisotropy Probe" satellite (WMAP) further agreed with the estimated expansion rates for local galaxies, .
Structure of cosmic expansion.
The universe at the largest scales is observed to be homogeneous (the same everywhere) and isotropic (the same in all directions), consistent with the cosmological principle. These constraints demand that any expansion of the universe accord with Hubble's law, in which objects recede from each observer with velocities proportional to their positions with respect to that observer. That is, recession velocities formula_0 scale with (observer-centered) positions formula_1 according to
formula_2
where the Hubble rate formula_3 quantifies the rate of expansion. formula_3 is a function of cosmic time.
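As a purely illustrative numerical example (the Hubble rate of 70 (km/s)/Mpc used here is an assumption of the sketch, not a figure quoted by this article), Hubble's law can be tabulated directly in Python:
```python
H0 = 70.0                                    # assumed Hubble rate in (km/s)/Mpc
for distance_mpc in [10, 100, 1000, 4300]:
    velocity = H0 * distance_mpc             # Hubble's law: v = H * x
    print(f"{distance_mpc:>5} Mpc  ->  {velocity:>8.0f} km/s")
# Around 4300 Mpc the nominal recession speed exceeds the speed of light
# (about 300000 km/s); see the later discussion of such recession speeds.
```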
Dynamics of cosmic expansion.
The expansion of the universe can be understood as a consequence of an initial impulse (possibly due to inflation), which sent the contents of the universe flying apart. The mutual gravitational attraction of the matter and radiation within the universe gradually slows this expansion over time, but expansion nevertheless continues due to momentum left over from the initial impulse. Also, certain exotic relativistic fluids, such as dark energy and inflation, exert gravitational repulsion in the cosmological context, which accelerates the expansion of the universe. A cosmological constant also has this effect.
Mathematically, the expansion of the universe is quantified by the scale factor, formula_4, which is proportional to the average separation between objects, such as galaxies. The scale factor is a function of time and is conventionally set to be formula_5 at the present time. Because the universe is expanding, formula_4 is smaller in the past and larger in the future. Extrapolating back in time with certain cosmological models will yield a moment when the scale factor was zero; our current understanding of cosmology sets this time at 13.787 ± 0.020 billion years ago. If the universe continues to expand forever, the scale factor will approach infinity in the future. It is also possible in principle for the universe to stop expanding and begin to contract, which corresponds to the scale factor decreasing in time.
The scale factor formula_4 is a parameter of the FLRW metric, and its time evolution is governed by the Friedmann equations. The second Friedmann equation,
formula_6
shows how the contents of the universe influence its expansion rate. Here, formula_7 is the gravitational constant, formula_8 is the energy density within the universe, formula_9 is the pressure, formula_10 is the speed of light, and formula_11 is the cosmological constant. A positive energy density leads to deceleration of the expansion, formula_12, and a positive pressure further decelerates expansion. On the other hand, sufficiently negative pressure with formula_13 leads to accelerated expansion, and the cosmological constant also accelerates expansion. Nonrelativistic matter is essentially pressureless, with formula_14, while a gas of ultrarelativistic particles (such as a photon gas) has positive pressure formula_15. Negative-pressure fluids, like dark energy, are not experimentally confirmed, but the existence of dark energy is inferred from astronomical observations.
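The resulting expansion history for a given mix of contents can be explored numerically. The following Python sketch integrates the first Friedmann equation (which is not written out above) for a spatially flat universe; the parameter values of 30% matter, 70% cosmological constant, and a Hubble rate of 70 (km/s)/Mpc are illustrative assumptions, and with them the computed age of the universe comes out to roughly 13.5 billion years.
```python
import math

H0 = 70.0 / 978.0            # assumed 70 (km/s)/Mpc, converted to 1/Gyr (1 (km/s)/Mpc ~ 1/978 per Gyr)
omega_m, omega_l = 0.3, 0.7  # assumed flat matter + cosmological-constant universe

def hubble(a):
    """Hubble rate as a function of the scale factor, from the first Friedmann
    equation for a flat universe: H(a) = H0 * sqrt(Om * a**-3 + OL)."""
    return H0 * math.sqrt(omega_m * a ** -3 + omega_l)

# Age of the universe: t0 = integral of da / (a * H(a)) for a from 0 to 1,
# evaluated here with a crude midpoint rule.
steps = 200_000
da = 1.0 / steps
age = sum(da / (a * hubble(a)) for a in (da * (i + 0.5) for i in range(steps)))
print(f"age of the universe ~ {age:.1f} Gyr")   # about 13.5 Gyr for these parameters
```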
Distances in the expanding universe.
Comoving coordinates.
In an expanding universe, it is often useful to study the evolution of structure with the expansion of the universe factored out. This motivates the use of comoving coordinates, which are defined to grow proportionally with the scale factor. If an object is moving only with the Hubble flow of the expanding universe, with no other motion, then it remains stationary in comoving coordinates. The comoving coordinates are the spatial coordinates in the FLRW metric.
Shape of the universe.
The universe is a four-dimensional spacetime, but within a universe that obeys the cosmological principle, there is a natural choice of three-dimensional spatial surface. These are the surfaces on which observers who are stationary in comoving coordinates agree on the age of the universe. In a universe governed by special relativity, such surfaces would be hyperboloids, because relativistic time dilation means that rapidly receding distant observers' clocks are slowed, so that spatial surfaces must bend "into the future" over long distances. However, within general relativity, the shape of these "comoving synchronous" spatial surfaces is affected by gravity. Current observations are consistent with these spatial surfaces being geometrically flat (so that, for example, the angles of a triangle add up to 180 degrees).
Cosmological horizons.
An expanding universe typically has a finite age. Light, and other particles, can have propagated only a finite distance. The comoving distance that such particles can have covered over the age of the universe is known as the particle horizon, and the region of the universe that lies within our particle horizon is known as the observable universe.
If the dark energy that is inferred to dominate the universe today is a cosmological constant, then the particle horizon converges to a finite value in the infinite future. This implies that the amount of the universe that we will ever be able to observe is limited. Many systems exist whose light can never reach us, because there is a cosmic event horizon induced by the repulsive gravity of the dark energy.
Within the study of the evolution of structure within the universe, a natural scale emerges, known as the Hubble horizon. Cosmological perturbations much larger than the Hubble horizon are not dynamical, because gravitational influences do not have time to propagate across them, while perturbations much smaller than the Hubble horizon are straightforwardly governed by Newtonian gravitational dynamics.
Consequences of cosmic expansion.
Velocities and redshifts.
An object's peculiar velocity is its velocity with respect to the comoving coordinate grid, i.e., with respect to the average expansion-associated motion of the surrounding material. It is a measure of how a particle's motion deviates from the Hubble flow of the expanding universe. The peculiar velocities of nonrelativistic particles decay as the universe expands, in inverse proportion with the cosmic scale factor. This can be understood as a self-sorting effect. A particle that is moving in some direction gradually overtakes the Hubble flow of cosmic expansion in that direction, asymptotically approaching material with the same velocity as its own.
More generally, the peculiar momenta of both relativistic and nonrelativistic particles decay in inverse proportion with the scale factor. For photons, this leads to the cosmological redshift. While the cosmological redshift is often explained as the stretching of photon wavelengths due to "expansion of space", it is more naturally viewed as a consequence of the Doppler effect.
Temperature.
The universe cools as it expands. This follows from the decay of particles' peculiar momenta, as discussed above. It can also be understood as adiabatic cooling. The temperature of ultrarelativistic fluids, often called "radiation" and including the cosmic microwave background, scales inversely with the scale factor (i.e. formula_16). The temperature of nonrelativistic matter drops more sharply, scaling as the inverse square of the scale factor (i.e. formula_17).
Density.
The contents of the universe dilute as it expands. The number of particles within a comoving volume remains fixed (on average), while the volume expands. For nonrelativistic matter, this implies that the energy density drops as formula_18, where formula_4 is the scale factor.
For ultrarelativistic particles ("radiation"), the energy density drops more sharply, as formula_19. This is because in addition to the volume dilution of the particle count, the energy of each particle (including the rest mass energy) also drops significantly due to the decay of peculiar momenta.
In general, we can consider a perfect fluid with pressure formula_20, where formula_8 is the energy density. The parameter formula_21 is the equation of state parameter. The energy density of such a fluid drops as
formula_22
Nonrelativistic matter has formula_23 while radiation has formula_24. For an exotic fluid with negative pressure, like dark energy, the energy density drops more slowly; if formula_25 it remains constant in time. If formula_26, corresponding to phantom energy, the energy density grows as the universe expands.
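These scaling laws are easy to tabulate; the short Python sketch below evaluates the dilution law formula_22 for matter, radiation, and a cosmological constant when the scale factor doubles.
```python
fluids = {"matter (w = 0)": 0.0,
          "radiation (w = 1/3)": 1.0 / 3.0,
          "cosmological constant (w = -1)": -1.0}

for name, w in fluids.items():
    # Energy density scales as a**(-3 * (1 + w)); ratio when the scale factor doubles.
    ratio = 2.0 ** (-3.0 * (1.0 + w))
    print(f"{name:31s} density falls to {ratio:g} of its value when a doubles")
```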
Expansion history.
Cosmic inflation.
Inflation is a period of accelerated expansion hypothesized to have occurred at a time of around 10^−32 seconds. It would have been driven by the inflaton, a field that has a positive-energy false vacuum state. Inflation was originally proposed to explain the absence of exotic relics predicted by grand unified theories, such as magnetic monopoles, because the rapid expansion would have diluted such relics. It was subsequently realized that the accelerated expansion would also solve the horizon problem and the flatness problem. Additionally, quantum fluctuations during inflation would have created initial variations in the density of the universe, which gravity later amplified to yield the observed spectrum of matter density variations.
During inflation, the cosmic scale factor grew exponentially in time. In order to solve the horizon and flatness problems, inflation must have lasted long enough that the scale factor grew by at least a factor of e^60 (about 10^26).
Radiation epoch.
The history of the universe after inflation but before a time of about 1 second is largely unknown. However, the universe is known to have been dominated by ultrarelativistic Standard Model particles, conventionally called "radiation", by the time of neutrino decoupling at about 1 second. During radiation domination, cosmic expansion decelerated, with the scale factor growing proportionally with the square root of the time.
Matter epoch.
Since radiation redshifts as the universe expands, eventually nonrelativistic matter came to dominate the energy density of the universe. This transition happened at a time of about 50 thousand years after the Big Bang. During the matter-dominated epoch, cosmic expansion also decelerated, with the scale factor growing as the 2/3 power of the time (formula_27). Also, gravitational structure formation is most efficient when nonrelativistic matter dominates, and this epoch is responsible for the formation of galaxies and the large-scale structure of the universe.
Dark energy.
Around 3 billion years ago, at a time of about 11 billion years, dark energy is believed to have begun to dominate the energy density of the universe. This transition came about because dark energy does not dilute as the universe expands, instead maintaining a constant energy density. Similarly to inflation, dark energy drives accelerated expansion, such that the scale factor grows exponentially in time.
Measuring the expansion rate.
The most direct way to measure the expansion rate is to independently measure the recession velocities and the distances of distant objects, such as galaxies. The ratio between these quantities gives the Hubble rate, in accordance with Hubble's law. Typically, the distance is measured using a standard candle, which is an object or event for which the intrinsic brightness is known. The object's distance can then be inferred from the observed apparent brightness. Meanwhile, the recession speed is measured through the redshift. Hubble used this approach for his original measurement of the expansion rate, by measuring the brightness of Cepheid variable stars and the redshifts of their host galaxies. More recently, using Type Ia supernovae, the expansion rate was measured to be "H"0=. This means that for every million parsecs of distance from the observer, recessional velocity of objects at that distance increases by about .
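The logic of such a measurement can be mimicked with synthetic data: generate distances and recession velocities that obey Hubble's law with some scatter, then fit the slope. The Python sketch below uses entirely made-up numbers (an assumed rate of 73 (km/s)/Mpc and arbitrary scatter) and recovers the input value with a least-squares fit through the origin.
```python
import random

random.seed(1)
true_H0 = 73.0                                     # assumed rate, used only to generate fake data
distances = [random.uniform(10, 500) for _ in range(40)]                # Mpc
velocities = [true_H0 * d + random.gauss(0, 300) for d in distances]    # km/s, with scatter

# Least-squares slope of v = H0 * d through the origin: H0 = sum(d*v) / sum(d*d).
H0_fit = sum(d * v for d, v in zip(distances, velocities)) / sum(d * d for d in distances)
print(f"fitted Hubble rate: {H0_fit:.1f} (km/s)/Mpc")   # close to the assumed 73
```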
Supernovae are observable at such great distances that the light travel time therefrom can approach the age of the universe. Consequently, they can be used to measure not only the present-day expansion rate but also the expansion history. In work that was awarded the 2011 Nobel Prize in Physics, supernova observations were used to determine that cosmic expansion is accelerating in the present epoch.
By assuming a cosmological model, e.g. the Lambda-CDM model, another possibility is to infer the present-day expansion rate from the sizes of the largest fluctuations seen in the cosmic microwave background. A higher expansion rate would imply a smaller characteristic size of CMB fluctuations, and vice versa. The Planck collaboration measured the expansion rate this way and determined "H"0 = . There is a disagreement between this measurement and the supernova-based measurements, known as the Hubble tension.
A third option proposed recently is to use information from gravitational wave events (especially those involving the merger of neutron stars, like GW170817), to measure the expansion rate. Such measurements do not yet have the precision to resolve the Hubble tension.
In principle, the cosmic expansion history can also be measured by studying how redshifts, distances, fluxes, angular positions, and angular sizes of astronomical objects change over the course of the time that they are being observed. These effects are too small to have yet been detected. However, changes in redshift or flux could be observed by the Square Kilometre Array or Extremely Large Telescope in the mid-2030s.
Conceptual considerations and misconceptions.
Measuring distances in expanding space.
At cosmological scales, the present universe conforms to Euclidean space, which cosmologists describe as "geometrically flat", to within experimental error.
Consequently, the rules of Euclidean geometry associated with Euclid's fifth postulate hold in the present universe in 3D space. It is, however, possible that the geometry of past 3D space could have been highly curved. The curvature of space is often modeled using a non-zero Riemann curvature tensor, in the sense of the curvature of Riemannian manifolds. Euclidean "geometrically flat" space has a Riemann curvature tensor of zero.
"Geometrically flat" space has three dimensions and is consistent with Euclidean space. However, spacetime has four dimensions; it is not flat according to Einstein's general theory of relativity. Einstein's theory postulates that "matter and energy curve spacetime, and there is enough matter and energy to provide for curvature."
In part to accommodate such different geometries, the expansion of the universe is inherently general-relativistic. It cannot be modeled with special relativity alone: Though such models exist, they may be at fundamental odds with the observed interaction between matter and spacetime seen in the universe.
The images to the right show two views of spacetime diagrams that show the large-scale geometry of the universe according to the ΛCDM cosmological model. Two of the dimensions of space are omitted, leaving one dimension of space (the dimension that grows as the cone gets larger) and one of time (the dimension that proceeds "up" the cone's surface). The narrow circular end of the diagram corresponds to a cosmological time of 700 million years after the Big Bang, while the wide end is a cosmological time of 18 billion years, where one can see the beginning of the accelerating expansion as a splaying outward of the spacetime, a feature that eventually dominates in this model. The purple grid lines mark cosmological time at intervals of one billion years from the Big Bang. The cyan grid lines mark comoving distance at intervals of one billion light-years in the present era (less in the past and more in the future). The circular curling of the surface is an artifact of the embedding with no physical significance and is done for illustrative purposes; a flat universe does not curl back onto itself. (A similar effect can be seen in the tubular shape of the pseudosphere.)
The brown line on the diagram is the worldline of Earth (or more precisely its location in space, even before it was formed). The yellow line is the worldline of the most distant known quasar. The red line is the path of a light beam emitted by the quasar about 13 billion years ago and reaching Earth at the present day. The orange line shows the present-day distance between the quasar and Earth, about 28 billion light-years, which is a larger distance than the age of the universe multiplied by the speed of light, "ct".
According to the equivalence principle of general relativity, the rules of special relativity are "locally" valid in small regions of spacetime that are approximately flat. In particular, light always travels locally at the speed "c"; in the diagram, this means, according to the convention of constructing spacetime diagrams, that light beams always make an angle of 45° with the local grid lines. It does not follow, however, that light travels a distance "ct" in a time "t", as the red worldline illustrates. While it always moves locally at "c", its time in transit (about 13 billion years) is not related to the distance traveled in any simple way, since the universe expands as the light beam traverses space and time. The distance traveled is thus inherently ambiguous because of the changing scale of the universe. Nevertheless, there are two distances that appear to be physically meaningful: the distance between Earth and the quasar when the light was emitted, and the distance between them in the present era (taking a slice of the cone along the dimension defined as the spatial dimension). The former distance is about 4 billion light-years, much smaller than "ct", whereas the latter distance (shown by the orange line) is about 28 billion light-years, much larger than "ct". In other words, if space were not expanding today, it would take 28 billion years for light to travel between Earth and the quasar, while if the expansion had stopped at the earlier time, it would have taken only 4 billion years.
The light took much longer than 4 billion years to reach us though it was emitted from only 4 billion light-years away. In fact, the light emitted towards Earth was actually moving "away" from Earth when it was first emitted; the metric distance to Earth increased with cosmological time for the first few billion years of its travel time, also indicating that the expansion of space between Earth and the quasar at the early time was faster than the speed of light. None of this behavior originates from a special property of metric expansion, but rather from local principles of special relativity integrated over a curved surface.
Topology of expanding space.
Over time, the space that makes up the universe is expanding. The words 'space' and 'universe', sometimes used interchangeably, have distinct meanings in this context. Here 'space' is a mathematical concept that stands for the three-dimensional manifold into which our respective positions are embedded, while 'universe' refers to everything that exists, including the matter and energy in space, the extra dimensions that may be wrapped up in various strings, and the time through which various events take place. The expansion of space is in reference to this 3D manifold only; that is, the description involves no structures such as extra dimensions or an exterior universe.
The ultimate topology of space is "a posteriori" – something that in principle must be observed – as there are no constraints that can simply be reasoned out (in other words there cannot be any "a priori" constraints) on how the space in which we live is connected or whether it wraps around on itself as a compact space. Though certain cosmological models such as Gödel's universe even permit bizarre worldlines that intersect with themselves, ultimately the question as to whether we are in something like a "Pac-Man universe", where if traveling far enough in one direction would allow one to simply end up back in the same place like going all the way around the surface of a balloon (or a planet like the Earth), is an observational question that is constrained as measurable or non-measurable by the universe's global geometry. At present, observations are consistent with the universe having infinite extent and being a simply connected space, though cosmological horizons limit our ability to distinguish between simple and more complicated proposals. The universe could be infinite in extent or it could be finite; but the evidence that leads to the inflationary model of the early universe also implies that the "total universe" is much larger than the observable universe. Thus any edges or exotic geometries or topologies would not be directly observable, since light has not reached scales on which such aspects of the universe, if they exist, are still allowed. For all intents and purposes, it is safe to assume that the universe is infinite in spatial extent, without edge or strange connectedness.
Regardless of the overall shape of the universe, the question of what the universe is expanding into is one that does not require an answer, according to the theories that describe the expansion; the way we define space in our universe in no way requires additional exterior space into which it can expand, since an expansion of an infinite expanse can happen without changing the infinite extent of the expanse. All that is certain is that the manifold of space in which we live simply has the property that the distances between objects are getting larger as time goes on. This only implies the simple observational consequences associated with the metric expansion explored below. No "outside" or embedding in hyperspace is required for an expansion to occur. The visualizations often seen of the universe growing as a bubble into nothingness are misleading in that respect. There is no reason to believe there is anything "outside" the expanding universe into which the universe expands.
Even if the overall spatial extent is infinite and thus the universe cannot get any "larger", we still say that space is expanding because, locally, the characteristic distance between objects is increasing. As an infinite space grows, it remains infinite.
Density of universe during expansion.
Despite being extremely dense when very young and during part of its early expansion – far denser than is usually required to form a black hole – the universe did not re-collapse into a black hole. This is because commonly used calculations for gravitational collapse are usually based upon objects of relatively constant size, such as stars, and do not apply to rapidly expanding space such as the Big Bang.
Effects of expansion on small scales.
The expansion of space is sometimes described as a force that acts to push objects apart. Though this is an accurate description of the effect of the cosmological constant, it is not an accurate picture of the phenomenon of expansion in general.
In addition to slowing the overall expansion, gravity causes local clumping of matter into stars and galaxies. Once objects are formed and bound by gravity, they "drop out" of the expansion and do not subsequently expand under the influence of the cosmological metric, there being no force compelling them to do so.
There is no difference between the inertial expansion of the universe and the inertial separation of nearby objects in a vacuum; the former is simply a large-scale extrapolation of the latter.
Once objects are bound by gravity, they no longer recede from each other. Thus, the Andromeda Galaxy, which is bound to the Milky Way Galaxy, is actually falling "towards" us and is not expanding away. Within the Local Group, the gravitational interactions have changed the inertial patterns of objects such that there is no cosmological expansion taking place. Beyond the Local Group, the inertial expansion is measurable, though systematic gravitational effects imply that larger and larger parts of space will eventually fall out of the "Hubble Flow" and end up as bound, non-expanding objects up to the scales of superclusters of galaxies. Such future events are predicted by knowing the precise way the Hubble Flow is changing as well as the masses of the objects to which we are being gravitationally pulled. Currently, the Local Group is being gravitationally pulled towards either the Shapley Supercluster or the "Great Attractor", with which we would eventually merge if dark energy were not acting.
A consequence of metric expansion being due to inertial motion is that a uniform local "explosion" of matter into a vacuum can be locally described by the FLRW geometry, the same geometry that describes the expansion of the universe as a whole and was also the basis for the simpler Milne universe, which ignores the effects of gravity. In particular, general relativity predicts that light will move at the speed "c" with respect to the local motion of the exploding matter, a phenomenon analogous to frame dragging.
The situation changes somewhat with the introduction of dark energy or a cosmological constant. A cosmological constant due to a vacuum energy density has the effect of adding a repulsive force between objects that is proportional (not inversely proportional) to distance. Unlike inertia it actively "pulls" on objects that have clumped together under the influence of gravity, and even on individual atoms. However, this does not cause the objects to grow steadily or to disintegrate; unless they are very weakly bound, they will simply settle into an equilibrium state that is slightly (undetectably) larger than it would otherwise have been. As the universe expands and the matter in it thins, the gravitational attraction decreases (since it is proportional to the density), while the cosmological repulsion increases. Thus, the ultimate fate of the ΛCDM universe is a near-vacuum expanding at an ever-increasing rate under the influence of the cosmological constant. However, gravitationally bound objects like the Milky Way do not expand, and the Andromeda Galaxy is moving fast enough towards us that it will still merge with the Milky Way in around 3 billion years.
Metric expansion and speed of light.
At the end of the early universe's inflationary period, all the matter and energy in the universe was set on an inertial trajectory consistent with the equivalence principle and Einstein's general theory of relativity. This is when the precise and regular form of the universe's expansion had its origin (that is, matter in the universe is separating because it was separating in the past due to the inflaton field).
While special relativity prohibits objects from moving faster than light with respect to a local reference frame where spacetime can be treated as flat and unchanging, it does not apply to situations where spacetime curvature or evolution in time become important. These situations are described by general relativity, which allows the separation between two distant objects to increase faster than the speed of light, although the definition of "distance" here is somewhat different from that used in an inertial frame. The definition of distance used here is the summation or integration of local comoving distances, all done at constant local proper time. For example, galaxies that are farther than the Hubble radius, approximately 4.5 gigaparsecs or 14.7 billion light-years, away from us have a recession speed that is faster than the speed of light. Visibility of these objects depends on the exact expansion history of the universe. Light that is emitted today from galaxies beyond the more-distant cosmological event horizon, about 5 gigaparsecs or 16 billion light-years, will never reach us, although we can still see the light that these galaxies emitted in the past. Because of the high rate of expansion, it is also possible for a distance between two objects to be greater than the value calculated by multiplying the speed of light by the age of the universe. These details are a frequent source of confusion among amateurs and even professional physicists.<ref name="astro-ph/0310808"></ref> Due to the non-intuitive nature of the subject and what has been described by some as "careless" choices of wording, certain descriptions of the metric expansion of space and the misconceptions to which such descriptions can lead are an ongoing subject of discussion within the fields of education and communication of scientific concepts.
Common analogies for cosmic expansion.
The expansion of the universe is often illustrated with conceptual models where an expanding object is taken to represent expanding space. These models can be misleading to the extent that they give the false impression that expanding space must carry objects with it. In reality, the expansion of the universe can alternatively be thought of as corresponding only to the inertial motion of objects away from one another.
In the "ant on a rubber rope model" one imagines an ant (idealized as pointlike) crawling at a constant speed on a perfectly elastic rope that is constantly stretching. If we stretch the rope in accordance with the ΛCDM scale factor and think of the ant's speed as the speed of light, then this analogy is conceptually accurate – the ant's position over time will match the path of the red line on the embedding diagram above.
In the "rubber sheet model", one replaces the rope with a flat two-dimensional rubber sheet that expands uniformly in all directions. The addition of a second spatial dimension allows for the possibility of showing local perturbations of the spatial geometry by local curvature in the sheet.
In the "balloon model" the flat sheet is replaced by a spherical balloon that is inflated from an initial size of zero (representing the Big Bang). A balloon has positive Gaussian curvature, even though observations suggest that the real universe is spatially flat, but this inconsistency can be eliminated by making the balloon very large so that it is locally flat within the limits of observation. This analogy is potentially confusing since it could wrongly suggest that the Big Bang took place at the center of the balloon. In fact points off the surface of the balloon have no meaning, even if they were occupied by the balloon at an earlier time or will be occupied later.
In the "raisin bread model", one imagines a loaf of raisin bread expanding in an oven. The loaf (space) expands as a whole, but the raisins (gravitationally bound objects) do not expand; they merely move farther away from each other. This analogy has the disadvantage of wrongly implying that the expansion has a center and an edge.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\vec v"
},
{
"math_id": 1,
"text": "\\vec x"
},
{
"math_id": 2,
"text": "\\vec v = H \\vec x,"
},
{
"math_id": 3,
"text": "H"
},
{
"math_id": 4,
"text": "a"
},
{
"math_id": 5,
"text": "a=1"
},
{
"math_id": 6,
"text": "\\frac{\\ddot{a}}{a} = -\\frac{4 \\pi G}{3}\\left(\\rho+\\frac{3p}{c^2}\\right) + \\frac{\\Lambda c^2}{3},"
},
{
"math_id": 7,
"text": "G"
},
{
"math_id": 8,
"text": "\\rho"
},
{
"math_id": 9,
"text": "p"
},
{
"math_id": 10,
"text": "c"
},
{
"math_id": 11,
"text": "\\Lambda"
},
{
"math_id": 12,
"text": "\\ddot{a}<0"
},
{
"math_id": 13,
"text": "p<-\\rho c^2/3"
},
{
"math_id": 14,
"text": "|p|\\ll\\rho c^2"
},
{
"math_id": 15,
"text": "p=\\rho c^2/3"
},
{
"math_id": 16,
"text": "T\\propto a^{-1}"
},
{
"math_id": 17,
"text": "T\\propto a^{-2}"
},
{
"math_id": 18,
"text": "\\rho\\propto a^{-3}"
},
{
"math_id": 19,
"text": "\\rho\\propto a^{-4}"
},
{
"math_id": 20,
"text": "p=w\\rho"
},
{
"math_id": 21,
"text": "w"
},
{
"math_id": 22,
"text": "\\rho\\propto a^{-3(1+w)}."
},
{
"math_id": 23,
"text": "w=0"
},
{
"math_id": 24,
"text": "w=1/3"
},
{
"math_id": 25,
"text": "w=-1"
},
{
"math_id": 26,
"text": "w<-1"
},
{
"math_id": 27,
"text": "a\\propto t^{2/3}"
}
] |
https://en.wikipedia.org/wiki?curid=5985207
|
59852572
|
Behrend's theorem
|
On subsets of the integers in which no member of the set is a multiple of any other
In arithmetic combinatorics, Behrend's theorem states that the subsets of the integers from 1 to formula_0 in which no member of the set is a multiple of any other must have a logarithmic density that goes to zero as formula_0 becomes large. The theorem is named after Felix Behrend, who published it in 1935.
Statement.
The logarithmic density of a set of integers from 1 to formula_0 can be defined by setting the weight of each integer formula_1 to be formula_2, and dividing the total weight of the set by the formula_0th partial sum of the harmonic series (or, equivalently for the purposes of asymptotic analysis, dividing by formula_3). The resulting number is 1 or close to 1 when the set includes all of the integers in that range, but smaller when many integers are missing, and particularly when the missing integers are themselves small.
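A minimal Python sketch of this definition (illustrative only; it divides by log n, which as noted is asymptotically equivalent to dividing by the harmonic sum):
<syntaxhighlight lang="python">
import math

def logarithmic_density(subset, n):
    """Weight each member i of the subset by 1/i and divide by log n
    (asymptotically equivalent to dividing by the n-th harmonic number)."""
    return sum(1.0 / i for i in subset if 1 <= i <= n) / math.log(n)

n = 10_000
print(logarithmic_density(range(1, n + 1), n))           # close to 1 for the full range
print(logarithmic_density(range(n // 2 + 1, n + 1), n))  # about log(2)/log(n), much smaller
</syntaxhighlight>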
A subset of formula_4 is called "primitive" if it has the property that no subset element is a multiple of any other element.
Behrend's theorem states that the logarithmic density of any primitive subset must be small.
More precisely, the logarithmic density of such a set must be formula_5.
For infinite primitive sequences, the maximum possible density is smaller, formula_6.
Examples.
There exist large primitive subsets of formula_4. However, these sets still have small logarithmic density.
In the subset formula_7 of integers in the upper half of the range, no element can be a multiple of another, because any proper multiple of an element would exceed formula_0; nevertheless, its logarithmic density is only formula_9.
The prime numbers, the numbers with exactly formula_8 prime factor, also form a primitive subset; their logarithmic density is formula_10.
Both of these subsets have significantly smaller logarithmic density than the bound given by Behrend's theorem. Resolving a conjecture of G. H. Hardy, both Paul Erdős and Subbayya Sivasankaranarayana Pillai showed that, for formula_11,
the set of numbers with exactly formula_12 prime factors (counted with multiplicity) has logarithmic density
formula_13
exactly matching the form of Behrend's theorem. This example is best possible, in the sense that no other primitive subset has logarithmic density with the same form and a larger leading constant.
History.
This theorem is known as Behrend's theorem because Felix Behrend proved it in 1934, and published it in 1935. Paul Erdős proved the same result, on a 1934 train trip from Hungary to Cambridge to escape the growing anti-semitism in Europe, but on his arrival he discovered that Behrend's proof was already known.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "i"
},
{
"math_id": 2,
"text": "1/i"
},
{
"math_id": 3,
"text": "\\log n"
},
{
"math_id": 4,
"text": "\\{1,\\dots n\\}"
},
{
"math_id": 5,
"text": "O(1/\\sqrt{\\log\\log n})"
},
{
"math_id": 6,
"text": "o(1/\\sqrt{\\log\\log n})"
},
{
"math_id": 7,
"text": "\\{\\lceil (n+1)/2 \\rceil,\\dots n\\}"
},
{
"math_id": 8,
"text": "1"
},
{
"math_id": 9,
"text": "O(1/\\log n)"
},
{
"math_id": 10,
"text": "O(\\log\\log n/\\log n)"
},
{
"math_id": 11,
"text": "k\\approx\\log\\log n"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "\\frac{1+o(1)}{\\sqrt{2\\pi\\log\\log n}},"
}
] |
https://en.wikipedia.org/wiki?curid=59852572
|
598536
|
Einstein ring
|
Feature seen when light is gravitationally lensed by an object
An Einstein ring, also known as an Einstein–Chwolson ring or Chwolson ring (named for Orest Chwolson), is created when light from a galaxy or star passes by a massive object en route to the Earth. Due to gravitational lensing, the light is diverted, making it seem to come from different places. If source, lens, and observer are all in perfect alignment ("syzygy"), the light appears as a ring.
Introduction.
Gravitational lensing is predicted by Albert Einstein's theory of general relativity. Instead of light from a source traveling in a straight line (in three dimensions), it is bent by the presence of a massive body, which distorts spacetime. An Einstein ring is a special case of gravitational lensing, caused by the exact alignment of the source, lens, and observer. This results in symmetry around the lens, causing a ring-like structure.
The size of an Einstein ring is given by the Einstein radius. In radians, it is
formula_0
where
formula_1 is the gravitational constant,
formula_2 is the mass of the lens,
formula_3 is the speed of light,
formula_4 is the angular diameter distance to the lens,
formula_5 is the angular diameter distance to the source, and
formula_6 is the angular diameter distance between the lens and the source.
Over cosmological distances formula_7 in general.
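A minimal Python sketch evaluating this expression (the lens mass and distances below are assumed, purely illustrative values, and the toy shortcut of taking formula_6 as the difference of the other two distances is used even though, as just noted, it does not hold over cosmological distances):
<syntaxhighlight lang="python">
import math

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
PC = 3.086e16        # one parsec in metres

def einstein_radius(mass_kg, d_lens_m, d_source_m, d_ls_m):
    """Einstein radius in radians for a point-mass lens, from the expression above."""
    return math.sqrt(4 * G * mass_kg / c**2 * d_ls_m / (d_source_m * d_lens_m))

# Illustrative, assumed inputs: a 1e12 solar-mass galaxy lens halfway to the source.
# D_LS is taken here as D_S - D_L, a simplification that ignores cosmological distances.
d_l, d_s = 1.0e9 * PC, 2.0e9 * PC
theta = einstein_radius(1e12 * M_SUN, d_l, d_s, d_s - d_l)
print(f"Einstein radius ~ {math.degrees(theta) * 3600:.1f} arcseconds")
</syntaxhighlight>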
History.
The bending of light by a gravitational body was predicted by Albert Einstein in 1912, a few years before the publication of general relativity in 1916 (Renn et al. 1997). The ring effect was first mentioned in the academic literature by Orest Khvolson in a short article in 1924, in which he noted the “halo effect” of gravitation when the source, lens, and observer are in near-perfect alignment. Einstein remarked upon this effect in 1936 in a paper prompted by a letter from the Czech engineer R. W. Mandl, but stated
<templatestyles src="Template:Blockquote/styles.css" />Of course, there is no hope of observing this phenomenon directly. First, we shall scarcely ever approach closely enough to such a central line. Second, the angle β will defy the resolving power of our instruments.
(In this statement, β is the Einstein radius, currently denoted by formula_8 as in the expression above.) However, Einstein was only considering the chance of observing Einstein rings produced by stars, which is low – the chance of observing those produced by larger lenses such as galaxies or black holes is higher, since the angular size of an Einstein ring increases with the mass of the lens.
The first complete Einstein ring, designated B1938+666, was discovered by collaboration between astronomers at the University of Manchester and NASA's Hubble Space Telescope in 1998.
There have apparently not been any observations of a star forming an Einstein ring with another star, but there is a 45% chance of this happening in early May, 2028 when Alpha Centauri A passes between us and a distant red star.
Known Einstein rings.
Hundreds of gravitational lenses are currently known. About half a dozen of them are partial Einstein rings with diameters up to an arcsecond, although as either the mass distribution of the lenses is not perfectly axially symmetrical, or the source, lens, and observer are not perfectly aligned, we have yet to see a perfect Einstein ring. Most rings have been discovered in the radio range. The degree of completeness needed for an image seen through a gravitational lens to qualify as an Einstein ring is yet to be defined.
The first Einstein ring was discovered by Hewitt et al. (1988), who observed the radio source MG1131+0456 using the Very Large Array. This observation saw a quasar lensed by a nearer galaxy into two separate but very similar images of the same object, the images stretched round the lens into an almost complete ring. These dual images are another possible effect of the source, lens, and observer not being perfectly aligned.
The first complete Einstein ring to be discovered was B1938+666, which was found by King et al. (1998) via optical follow-up with the Hubble Space Telescope of a gravitational lens imaged with MERLIN. The galaxy causing the lens at B1938+666 is an ancient elliptical galaxy, and the image we see through the lens is a dark dwarf satellite galaxy, which we would otherwise not be able to see with current technology.
In 2005, the combined power of the Sloan Digital Sky Survey (SDSS) with the Hubble Space Telescope was used in the Sloan Lens ACS (SLACS) Survey to find 19 new gravitational lenses, 8 of which showed Einstein rings, these are the 8 shown in the adjacent image. As of 2009, this survey has found 85 confirmed gravitational lenses but there is not yet a number for how many show Einstein rings. This survey is responsible for most of the recent discoveries of Einstein rings in the optical range, following are some examples which were found:
Another example is the radio/X-ray Einstein ring around PKS 1830-211, which is unusually strong in radio. It was discovered in X-rays by Varsha Gupta et al. at the Chandra X-ray Observatory. It is also notable for being the first case of a quasar being lensed by an almost face-on spiral galaxy.
Galaxy MG1654+1346 features a radio ring. The image in the ring is that of a quasar radio lobe, discovered in 1989 by G. Langston et al.
In June 2023, a team of astronomers led by Justin Spilker announced their discovery of an Einstein ring of a distant galaxy rich in organic molecules (aromatic hydrocarbons).
Extra rings.
Using the Hubble Space Telescope, a double ring has been found by Raphael Gavazzi of the STScI and Tommaso Treu of the University of California, Santa Barbara. This arises from the light from three galaxies at distances of 3, 6, and 11 billion light years. Such rings help in understanding the distribution of dark matter, dark energy, the nature of distant galaxies, and the curvature of the universe. The odds of finding such a double ring around a massive galaxy are 1 in 10,000. Sampling 50 suitable double rings would provide astronomers with a more accurate measurement of the dark matter content of the universe and the equation of state of the dark energy to within 10 percent precision.
Simulation.
Below in the Gallery section is a simulation depicting a zoom on a Schwarzschild black hole in the plane of the Milky Way between us and the centre of the galaxy. The first Einstein ring is the most distorted region of the picture and shows the galactic disc. The zoom then reveals a series of 4 extra rings, increasingly thinner and closer to the black hole shadow. They are multiple images of the galactic disk. The first and third correspond to points which are behind the black hole (from the observer's position) and correspond here to the bright yellow region of the galactic disc (close to the galactic center), whereas the second and fourth correspond to images of objects which are behind the observer, which appear bluer, since the corresponding part of the galactic disc is thinner and hence dimmer here.
References.
<templatestyles src="Reflist/styles.css" />
Journals.
<templatestyles src="Refbegin/styles.css" />
News.
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "\\theta_1 = \\sqrt{\\frac{4GM}{c^2}\\;\\frac{D_{LS}}{D_S D_L}},"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "M"
},
{
"math_id": 3,
"text": "c"
},
{
"math_id": 4,
"text": "D_L"
},
{
"math_id": 5,
"text": "D_S"
},
{
"math_id": 6,
"text": "D_{LS}"
},
{
"math_id": 7,
"text": "D_{LS}\\ne D_S-D_L"
},
{
"math_id": 8,
"text": "\\theta_1,"
}
] |
https://en.wikipedia.org/wiki?curid=598536
|
59854417
|
Confirmatory composite analysis
|
In statistics, confirmatory composite analysis (CCA) is a sub-type of structural equation modeling (SEM).
Although, historically, CCA emerged from a re-orientation and re-start of partial least squares path modeling (PLS-PM),
it has become an independent approach and the two should not be confused.
In many ways it is similar to, but also quite distinct from confirmatory factor analysis (CFA).
It shares with CFA the process of model specification, model identification, model estimation, and model assessment.
However, in contrast to CFA which always assumes the existence of latent variables, in CCA all variables can be observable, with their interrelationships expressed in terms of composites, i.e., linear compounds of subsets of the variables.
The composites are treated as the fundamental objects and path diagrams can be used to illustrate their relationships.
This makes CCA particularly useful for disciplines that examine theoretical concepts designed to attain certain goals (so-called artifacts) and their interplay with theoretical concepts of the behavioral sciences.
Development.
The initial idea of CCA was sketched by Theo K. Dijkstra and Jörg Henseler in 2014.
The scholarly publishing process took its time until the first full description of CCA was published by Florian Schuberth, Jörg Henseler and Theo K. Dijkstra in 2018.
As is common for statistical developments, interim results on CCA were shared with the scientific community in written form.
Moreover, CCA was presented at several conferences including the 5th Modern Modeling Methods Conference, the 2nd International Symposium on Partial Least Squares Path Modeling, the 5th CIM Community Workshop, and the Meeting of the SEM Working Group in 2018.
Statistical model.
A composite is typically a linear combination of observable random variables. However, so-called second-order composites, formed as linear combinations of latent variables or of other composites, are also conceivable.
For a random column vector formula_0 of observable variables that is partitioned into sub-vectors formula_1, composites can be defined as weighted linear combinations.
So the "i"-th composite formula_2 equals:
formula_3,
where the weights of each composite are appropriately normalized (see Confirmatory composite analysis#Model identification).
In the following, it is assumed that the weights are scaled in such a way that each composite has a variance of one, i.e., formula_4.
Moreover, it is assumed that the observable random variables are standardized having a mean of zero and a unit variance.
Generally, the variance-covariance matrices formula_5 of the sub-vectors are not constrained beyond being positive definite.
Similar to the latent variables of a factor model, the composites explain the covariances between the sub-vectors leading to the following inter-block covariance matrix:
formula_6,
where formula_7 is the correlation between the composites formula_8 and formula_2.
The composite model imposes rank one constraints on the inter-block covariance matrices formula_9, i.e., formula_10.
Generally, the variance-covariance matrix of formula_11 is positive definite if and only if the correlation matrix of the composites formula_12 and the variance-covariance matrices formula_13 are all positive definite.
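The covariance structure above can be made concrete with a small numpy sketch (illustrative only; all numerical values are assumed) that assembles the model-implied variance-covariance matrix from the within-block covariance matrices, normalized weights, and the composite correlation matrix:
<syntaxhighlight lang="python">
import numpy as np

def implied_covariance(sigma_blocks, weights, R):
    """Model-implied variance-covariance matrix of a basic composite model.

    sigma_blocks: within-block covariance matrices Sigma_ii
    weights:      weight vectors w_i, scaled so that w_i' Sigma_ii w_i = 1
    R:            correlation matrix of the composites (rho_ij)
    Off-diagonal blocks follow Sigma_ij = rho_ij * (Sigma_ii w_i)(Sigma_jj w_j)'.
    """
    k = len(sigma_blocks)
    blocks = [[None] * k for _ in range(k)]
    for i in range(k):
        for j in range(k):
            if i == j:
                blocks[i][j] = sigma_blocks[i]
            else:
                blocks[i][j] = R[i, j] * np.outer(sigma_blocks[i] @ weights[i],
                                                  sigma_blocks[j] @ weights[j])
    return np.block(blocks)

# Toy example with assumed numbers (illustrative only).
S1 = np.array([[1.0, 0.5], [0.5, 1.0]])
S2 = np.array([[1.0, 0.3], [0.3, 1.0]])
w1 = np.array([0.6, 0.6]); w1 /= np.sqrt(w1 @ S1 @ w1)   # normalize to unit composite variance
w2 = np.array([0.7, 0.5]); w2 /= np.sqrt(w2 @ S2 @ w2)
R = np.array([[1.0, 0.4], [0.4, 1.0]])
print(implied_covariance([S1, S2], [w1, w2], R))
</syntaxhighlight>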
In addition, the composites can be related via a structural model which constrains the correlation matrix formula_14 indirectly via a set of simultaneous equations:
formula_15,
where the vector formula_16 is partitioned into an exogenous and an endogenous part, and the matrices formula_17 and formula_18 contain the so-called path (and feedback) coefficients.
Moreover, the vector formula_19 contains the structural error terms having a zero mean and being uncorrelated with formula_20.
As the model need not be recursive, the matrix formula_17 is not necessarily triangular and the elements of formula_19 may be correlated.
Model identification.
To ensure identification of the composite model, each composite must be correlated with at least one variable not forming the composite. In addition to this non-isolation condition, each composite needs to be normalized, e.g., by fixing one weight per composite, the length of each weight vector, or the composite’s variance to a certain value.
If the composites are embedded in a structural model, also the structural model needs to be identified.
Finally, since the weight signs are still undetermined, it is recommended to select a dominant indicator per block of indicators that dictates the orientation of the composite.
The degrees of freedom of the basic composite model, i.e., with no constraints imposed on the composites' correlation matrix formula_14, are calculated, as is usual in SEM, as the number of non-redundant elements of the indicator variance-covariance matrix minus the number of free model parameters.
Model estimation.
To estimate the parameters of a composite model, various methods that create composites can be used such as approaches to generalized canonical correlation, principal component analysis, and linear discriminant analysis. Moreover, a maximum-likelihood estimator and composite-based methods for SEM such as partial least squares path modeling and generalized structured component analysis can be employed to estimate weights and the correlations among the composites.
Evaluating model fit.
In CCA, the model fit, i.e., the discrepancy between the estimated model-implied variance-covariance matrix formula_21 and its sample counterpart formula_22, can be assessed in two non-exclusive ways.
On the one hand, measures of fit can be employed; on the other hand, a test for overall model fit can be used.
While the former relies on heuristic rules, the latter is based on statistical inferences.
Fit measures for composite models comprise statistics such as the standardized root mean square residual (SRMR) and the root mean squared error of outer residuals (RMSformula_23).
In contrast to fit measures for common factor models, fit measures for composite models are relatively unexplored and reliable thresholds still need to be determined.
To assess the overall model fit by means of statistical testing, the bootstrap test for overall model fit, also known as the Bollen-Stine bootstrap test, can be used to investigate whether a composite model fits the data.
Alternative views on CCA.
Besides the originally proposed CCA, the term CCA has also been applied to the evaluation steps known from partial least squares structural equation modeling (PLS-SEM).
It has been emphasized that PLS-SEM's evaluation steps, in the following called PLS-CCA, differ from CCA in many regards:
(i) While PLS-CCA aims at confirming reflective and formative measurement models, CCA aims at assessing composite models; (ii) PLS-CCA omits overall model fit assessment, which is a crucial step in CCA as well as SEM; (iii) PLS-CCA is strongly linked to PLS-PM, while for CCA PLS-PM can be employed as one estimator, but this is in no way mandatory.
Hence, researchers need to be aware of which technique they are referring to.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\mathbf{x} "
},
{
"math_id": 1,
"text": " \\mathbf{x}_i "
},
{
"math_id": 2,
"text": "c_i"
},
{
"math_id": 3,
"text": "c_i= \\mathbf{w}_i'\\mathbf{x}_i "
},
{
"math_id": 4,
"text": "\\mathbf{w}_i' \\mathbf{\\Sigma}_{ii} \\mathbf{w}_i"
},
{
"math_id": 5,
"text": "\\mathbf{\\Sigma}_{ii}"
},
{
"math_id": 6,
"text": "\\mathbf{\\Sigma}_{ij}=\\rho_{ij} \\mathbf{\\Sigma}_{ii}\\mathbf{w}_i (\\mathbf{\\Sigma}_{jj} \\mathbf{w}_j)' "
},
{
"math_id": 7,
"text": "\\rho_{ij}"
},
{
"math_id": 8,
"text": "c_j"
},
{
"math_id": 9,
"text": "\\mathbf{\\Sigma}_{ij}"
},
{
"math_id": 10,
"text": "\\text{rank}(\\mathbf{\\Sigma}_{ij})=1"
},
{
"math_id": 11,
"text": "\\mathbf{x}"
},
{
"math_id": 12,
"text": "\\mathbf{R}:=(\\rho_{ij})"
},
{
"math_id": 13,
"text": "\\mathbf{\\Sigma}_{jj} "
},
{
"math_id": 14,
"text": "\\mathbf{R}"
},
{
"math_id": 15,
"text": " \\mathbf{B} \\mathbf{c}_{\\text{endogenous}}=\\mathbf{C} \\mathbf{c}_{\\text{exogenous}}+\\mathbf{z} "
},
{
"math_id": 16,
"text": "\\mathbf{c}"
},
{
"math_id": 17,
"text": "\\mathbf{B}"
},
{
"math_id": 18,
"text": "\\mathbf{C}"
},
{
"math_id": 19,
"text": " \\mathbf{z} "
},
{
"math_id": 20,
"text": " \\mathbf{c}_{\\text{exogenous}}"
},
{
"math_id": 21,
"text": "\\hat{\\mathbf{\\Sigma}}"
},
{
"math_id": 22,
"text": "\\mathbf{S}"
},
{
"math_id": 23,
"text": "_{\\theta}"
}
] |
https://en.wikipedia.org/wiki?curid=59854417
|
59859530
|
EP matrix
|
In mathematics, an EP matrix (or range-Hermitian matrix or RPN matrix) is a square matrix "A" whose range is equal to the range of its conjugate transpose "A"*. Another equivalent characterization of EP matrices is that the range of "A" is orthogonal to the nullspace of "A". Thus, EP matrices are also known as RPN (Range Perpendicular to Nullspace) matrices.
EP matrices were introduced in 1950 by Hans Schwerdtfeger, and since then, many equivalent characterizations of EP matrices have been investigated in the literature. The EP abbreviation originally stood for "E"qual "P"rincipal, but it is widely believed to stand for "Equal Projectors" instead, since an equivalent characterization of EP matrices is the equality of the projectors "AA+" and "A+A".
The range of any matrix "A" is perpendicular to the null-space of "A"*, but is not necessarily perpendicular to the null-space of "A". When "A" is an EP matrix, the range of "A" is precisely perpendicular to the null-space of "A".
Decomposition.
The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix.
Weakening the normality condition to EPness, a similar statement is still valid. Precisely, a matrix "A" of rank "r" is an EP matrix if and only if it is unitarily similar to a core-nilpotent matrix, that is,
formula_0
where "U" is an orthogonal matrix and "C" is an "r" x "r" nonsingular matrix. Note that if "A" is full rank, then "A" = "UCU"*.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A = U \\begin{pmatrix} C & 0 \\\\ 0 & 0 \\end{pmatrix} U^{*},"
}
] |
https://en.wikipedia.org/wiki?curid=59859530
|
59861597
|
Induced matching
|
In graph theory, an induced matching or strong matching is a subset of the edges of an undirected graph in which no two edges share a vertex (it is a matching) and no other edge of the graph connects two vertices that are endpoints of the matching edges (it is an induced subgraph).
An induced matching can also be described as an independent set in the square of the line graph of the given graph.
Strong coloring and neighborhoods.
The minimum number of induced matchings into which the edges of a graph can be partitioned is called its "strong chromatic index", by analogy with the chromatic index of the graph, the minimum number of matchings into which its edges can be partitioned. It equals the chromatic number of the square of the line graph. Brooks' theorem, applied to the square of the line graph,
shows that the strong chromatic index is at most quadratic in the maximum degree of the given graph, but better constant factors in the quadratic bound can be obtained by other methods.
The Ruzsa–Szemerédi problem concerns the edge density of balanced bipartite graphs with linear strong chromatic index. Equivalently, it concerns the density of a different class of graphs, the locally linear graphs in which the neighborhood of every vertex is an induced matching. Neither of these types of graph can have a quadratic number of edges, but constructions are known for graphs of this type with nearly-quadratic numbers of edges.
Computational complexity.
Finding an induced matching of size at least formula_0 is NP-complete (and thus, finding an induced matching of maximum size is NP-hard). It can be solved in polynomial time in chordal graphs, because the squares of line graphs of chordal graphs are perfect graphs.
Moreover, it can be solved in linear time in chordal graphs.
Unless an unexpected collapse in the polynomial hierarchy occurs,
the largest induced matching cannot be approximated to within any formula_1 approximation ratio in polynomial time.
The problem is also W[1]-hard, meaning that even finding a small induced matching of a given size formula_0 is unlikely to have an algorithm significantly faster than the brute force search approach of trying all formula_0-tuples of edges. However, the problem of finding formula_0 vertices whose removal leaves an induced matching is fixed-parameter tractable. The problem can also be solved exactly on formula_2-vertex graphs in time formula_3 with exponential space, or in time formula_4 with polynomial space.
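As a concrete illustration of the brute-force baseline mentioned above, the following Python sketch (illustrative only, and exponential-time by design) tries all formula_0-tuples of edges, largest first:
<syntaxhighlight lang="python">
from itertools import combinations

def is_induced_matching(chosen, adjacency):
    """True if the chosen edges are pairwise vertex-disjoint and no graph edge
    joins endpoints of two different chosen edges (the induced condition)."""
    for (u1, v1), (u2, v2) in combinations(chosen, 2):
        if {u1, v1} & {u2, v2}:
            return False
        if any(b in adjacency[a] for a in (u1, v1) for b in (u2, v2)):
            return False
    return True

def maximum_induced_matching(vertices, edge_list):
    """Brute force over all k-tuples of edges, largest k first (exponential time)."""
    adjacency = {v: set() for v in vertices}
    for u, v in edge_list:
        adjacency[u].add(v)
        adjacency[v].add(u)
    for k in range(len(edge_list), 0, -1):
        for subset in combinations(edge_list, k):
            if is_induced_matching(subset, adjacency):
                return list(subset)
    return []

# Example: a six-vertex path has a maximum induced matching of size 2.
path_edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "f")]
print(maximum_induced_matching("abcdef", path_edges))
</syntaxhighlight>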
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "n^{1-\\varepsilon}"
},
{
"math_id": 2,
"text": "n"
},
{
"math_id": 3,
"text": "O(1.3752^n)"
},
{
"math_id": 4,
"text": "O(1.4231^n)"
}
] |
https://en.wikipedia.org/wiki?curid=59861597
|
59863
|
Correspondence principle
|
Physics principle formulated by Niels Bohr
In physics, a correspondence principle is any one of several premises or assertions about the relationship between classical and quantum mechanics.
The physicist Niels Bohr coined the term in 1920 during the early development of quantum theory; he used it to explain how quantized classical orbitals connect to quantum radiation.
Modern sources often use the term for the idea that the behavior of systems described by quantum theory reproduces classical physics in the limit of large quantum numbers: for large orbits and for large energies, quantum calculations must agree with classical calculations. A "generalized" correspondence principle refers to the requirement for a broad set of connections between any old and new theory.
History.
Max Planck was the first to introduce the idea of quanta of energy in 1900 while studying black-body radiation. In 1906, he was also the first to write that quantum theory should recover the classical mechanics at some limit, particularly when Planck constant "h" tends to zero. With this idea he showed that Planck's law for thermal radiation leads to the Rayleigh–Jeans law, the classical prediction (valid for large wavelength).
Niels Bohr used a similar idea, while developing his model of the atom. In 1913, he provided the first postulates of what is now known as old quantum theory. Using these postulates he obtained that for the hydrogen atom, the energy spectrum approaches the classical continuum for large "n" (a quantum number that encodes the energy of the orbit). Bohr coined the term "correspondence principle" during a lecture in 1920.
Arnold Sommerfeld refined Bohr's theory leading to the Bohr-Sommerfeld quantization condition. Sommerfeld referred to the correspondence principle as Bohr's magic wand (), in 1921.
Bohr's correspondence principle.
The seeds of Bohr's correspondence principle appeared from two sources. First, Sommerfeld and Max Born developed a "quantization procedure" based on the action angle variables of classical Hamiltonian mechanics. This gave a mathematical foundation for stationary states of the Bohr-Sommerfeld model of the atom. The second seed was Albert Einstein's quantum derivation of Planck's law in 1916. Einstein developed the statistical mechanics for Bohr-model atoms interacting with electromagnetic radiation, leading to absorption and two kinds of emission, spontaneous and stimulated emission. But for Bohr the important result was the use of classical analogies and the Bohr atomic model to fix inconsistencies in Planck's derivation of the blackbody radiation formula.
Bohr used the word "correspondence" in italics in lectures and writing before calling it a correspondence principle. He viewed this as a correspondence between quantum motion and radiation, not between classical and quantum theories. He writes in 1920 that there exists "a far-reaching correspondence between the various types of possible transitions between the stationary states on the one hand and the various harmonic components of the motion on the other hand."
Bohr's first article containing the definition of the correspondence principle appeared in 1923, in a summary paper entitled (in the English translation) "On the application of quantum theory to atomic structure". In its chapter II, "The process of radiation", he defines his correspondence principle as a condition connecting harmonic components of the electron moment to the possible occurrence of a radiative transition. In modern terms, this condition is a selection rule, saying that a given quantum jump is possible if and only if a particular type of motion exists in the corresponding classical model.
Following his definition of the correspondence principle, Bohr describes two applications. First he shows that the frequency of emitted radiation is related to an integral which can be well approximated by a sum when the quantum numbers inside the integral are large compared with their differences. Similarly he shows a relationship for the intensities of spectral lines and thus the rates at which quantum jumps occur.
These asymptotic relationships are expressed by Bohr as consequences of his general correspondence principle. However, historically each of these applications have been called "the correspondence principle".
The PhD dissertation of Hans Kramers working in Bohr's group in Copenhagen applied Bohr's correspondence principle to account for all of the known facts of the spectroscopic Stark effect, including some spectral components not known at the time of Kramers work.
Sommerfeld had been skeptical of the correspondence principle, as it did not seem to be a consequence of a fundamental theory; Kramers' work nevertheless convinced him that the principle had heuristic utility. Other physicists picked up the concept, including work by John Van Vleck and by Kramers and Heisenberg on dispersion theory. The principle became a cornerstone of the semi-classical Bohr-Sommerfeld atomic theory;
Bohr's 1922 Nobel prize was partly awarded for his work with the correspondence principle.
Despite the successes, the physical theories based on the principle faced increasing challenges in the early 1920s. Theoretical calculations by Van Vleck and by Kramers of the ionization potential of helium disagreed significantly with experimental values. Bohr, Kramers, and John C. Slater responded with a new theoretical approach, now called the BKS theory, based on the correspondence principle but disavowing conservation of energy. Einstein and Wolfgang Pauli criticized the new approach, and the Bothe–Geiger coincidence experiment showed that energy was conserved in quantum collisions.
With the existing theories in conflict with observations, two new quantum mechanics concepts arose. First, Heisenberg's 1925 "Umdeutung" paper on matrix mechanics was inspired by the correspondence principle, although he did not cite Bohr. Further development in collaboration with Pascual Jordan and Max Born resulted in a mathematical model without connection to the principle. Second, Schrödinger's wave mechanics in the following year similarly did not use the principle. Both pictures were later shown to be equivalent and accurate enough to replace old quantum theory. These approaches have no atomic orbits: the correspondence is more of an analogy than a principle.
Dirac's correspondence.
Paul Dirac developed significant portions of the new quantum theory in the second half of the 1920s. While he did not apply Bohr's correspondence principle, he developed a different, more formal classical–quantum correspondence. Dirac connected the structures of classical mechanics known as Poisson brackets to the analogous structures of quantum mechanics known as commutators:
formula_0
By this correspondence, now called canonical quantization, Dirac showed how the mathematical form of classical mechanics could be recast as a basis for the new mathematics of quantum mechanics.
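A small numerical illustration of this correspondence (not from the article; it uses a truncated harmonic-oscillator number basis of assumed size, in units where ħ = m = ω = 1): position and momentum matrices built from ladder operators satisfy the canonical commutation relation that mirrors the classical Poisson bracket {x, p} = 1, apart from the usual truncation artifact in the last basis state.
<syntaxhighlight lang="python">
import numpy as np

hbar, N = 1.0, 12                            # natural units; truncated basis size (assumed)
a = np.diag(np.sqrt(np.arange(1, N)), k=1)   # lowering operator in the number basis
adag = a.conj().T

x = np.sqrt(hbar / 2) * (a + adag)           # with m = omega = 1 for simplicity
p = 1j * np.sqrt(hbar / 2) * (adag - a)

commutator = x @ p - p @ x                   # quantum analogue of the Poisson bracket {x, p} = 1
# In a truncated basis, [x, p] = i*hbar*I holds everywhere except at the last basis state.
print(np.allclose(commutator[:-1, :-1], 1j * hbar * np.eye(N - 1)))   # True
</syntaxhighlight>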
Dirac developed these connections by studying the work of Heisenberg and Kramers on dispersion, work that was directly built on Bohr's correspondence principle; the Dirac approach provides a mathematically sound path to Bohr's goal of a connection between classical and quantum mechanics. While Dirac did not call this correspondence a "principle", physics textbooks refer to his connection as a "correspondence principle".
The classical limit of wave mechanics.
The outstanding success of classical mechanics in the description of natural phenomena up to the 20th century means that quantum mechanics must do as well in similar circumstances.
<templatestyles src="Template:Blockquote/styles.css" />Judged by the test of experience, the laws of classical physics have brilliantly justified themselves in all processes of motion… It must therefore be laid down as an unconditionally necessary postulate, that the new mechanics … must in all these problems reach the same results as the classical mechanics.
One way to quantitatively define this concept is to require quantum mechanical theories to produce classical mechanics results as the quantum of action goes to zero, formula_1. This transition can be accomplished in two different ways.
First, the particle can be approximated by a wave packet, and the indefinite spread of the packet with time can be ignored. In 1927, Paul Ehrenfest proved his namesake theorem, which showed that Newton's laws of motion hold on average in quantum mechanics: the quantum statistical expectation values of position and momentum obey Newton's laws.
Second, the individual particle view can be replaced with a statistical mixture of classical particles with a density matching the quantum probability density. This approach led to the concept of semiclassical physics, beginning with the development of the WKB approximation used in descriptions of quantum tunneling, for example.
Modern view.
While Bohr viewed "correspondence" as a principle aiding his description of quantum phenomena, fundamental differences between the mathematical structure of quantum and of classical mechanics prevent correspondence in many cases. Rather than a principle, "there may be in some situations an approximate correspondence between classical and quantum concepts," as physicist Asher Peres put it. Since quantum mechanics operates in a discrete space and classical mechanics in a continuous one, any correspondence will be necessarily fuzzy and elusive.
Introductory quantum mechanics textbooks suggest that quantum mechanics goes over to classical theory in the limit of high quantum numbers or in a limit where the Planck constant in the quantum formula is reduced to zero, formula_1. However, such correspondence is not always possible. For example, classical systems can exhibit chaotic orbits which diverge, but quantum states are unitary and maintain a fixed overlap.
Generalized correspondence principle.
The term "generalized correspondence principle" has been used in the study of the history of science to mean the reduction of a new scientific theory to an earlier scientific theory in appropriate circumstances. This requires that the new theory explain all the phenomena under circumstances for which the preceding theory was known to be valid; it also means that new theory will retain large parts of the older theory. The generalized principle applies correspondence across aspects of a complete theory, not just a single formula as in the classical limit correspondence. For example, Albert Einstein in his 1905 work on relativity noted that classical mechanics relied on Galilean relativity while electromagnetism did not, and yet both work well. He produced a new theory that combined them in a away that reduced to these separate theories in approximations.
Ironically the singular failure of this "generalized correspondence principle" concept of scientific theories is the replacement of classical mechanics with quantum mechanics.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\{A, B\\} \\longmapsto \\frac{1}{i \\hbar} [\\hat{A}, \\hat{B}]."
},
{
"math_id": 1,
"text": "\\hbar \\rightarrow 0"
}
] |
https://en.wikipedia.org/wiki?curid=59863
|
598676
|
Afterload
|
Pressure in the wall of the left ventricle during ejection
Afterload is the pressure that the heart must work against to eject blood during systole (ventricular contraction). Afterload is proportional to the average arterial pressure. As aortic and pulmonary pressures increase, the afterload increases on the left and right ventricles respectively. Afterload changes to adapt to the continually changing demands on an animal's cardiovascular system. Afterload is proportional to mean systolic blood pressure and is measured in millimeters of mercury (mm Hg).
Hemodynamics.
Afterload is a determinant of cardiac output. Cardiac output is the product of stroke volume and heart rate. Afterload is a determinant of stroke volume (in addition to preload, and strength of myocardial contraction).
Following Laplace's law, the tension upon the muscle fibers in the heart wall is the pressure within the ventricle multiplied by the volume within the ventricle divided by the wall thickness (this ratio is the other factor in setting the afterload). Therefore, when comparing a normal heart to a heart with a dilated left ventricle, if the aortic pressure is the same in both hearts, the dilated heart must create a greater tension to overcome the same aortic pressure to eject blood because it has a larger internal radius and volume. Thus, the dilated heart has a greater total load (tension) on the myocytes, i.e., has a higher afterload. This is also true in the eccentric hypertrophy consequent to high-intensity aerobic training. Conversely, a concentrically hypertrophied left ventricle may have a lower afterload for a given aortic pressure. When contractility becomes impaired and the ventricle dilates, the afterload rises and limits output. This may start a vicious circle, in which cardiac output is reduced as oxygen requirements are increased.
Afterload can also be described as the pressure that the chambers of the heart must generate to eject blood from the heart, and this is a consequence of aortic pressure (for the left ventricle) and pulmonic pressure or pulmonary artery pressure (for the right ventricle). The pressure in the ventricles must be greater than the systemic and pulmonary pressure to open the aortic and pulmonic valves, respectively. As afterload increases, cardiac output decreases. Cardiac imaging is a somewhat limited modality in defining afterload because it depends on the interpretation of volumetric data.
Calculating afterload.
Quantitatively, afterload can be calculated by determining the wall stress of the left ventricle, using the Young–Laplace equation:
formula_0 where
EDP is end-diastolic pressure in the left ventricle, which is typically approximated by taking pulmonary artery wedge pressure,
EDR is end-diastolic radius at the midpoint of the left ventricle, and
"h" is the mean thickness of the left ventricle wall. Both radius and mean thickness of the left ventricle may be measured by echocardiography.
Factors affecting afterload.
Disease processes that increase left ventricular afterload include elevated blood pressure and aortic valve disease.
Systolic hypertension (HTN) (elevated blood pressure) increases the left ventricular (LV) afterload because the LV must work harder to eject blood into the aorta. This is because the aortic valve will not open until the pressure generated in the left ventricle is higher than the elevated blood pressure in the aorta.
Pulmonary hypertension (PH) is increased blood pressure within the right heart leading to the lungs. PH indicates a regionally applied increase in afterload dedicated to the right side of the heart, divided and isolated from the left heart by the interventricular septum.
In the natural aging process, aortic stenosis often increases afterload because the left ventricle must overcome the pressure gradient caused by the calcified and stenotic aortic valve, in addition to the blood pressure required to eject blood into the aorta. For instance, if the blood pressure is 120/80, and the aortic valve stenosis creates a trans-valvular gradient of 30 mmHg, the left ventricle has to generate a pressure of 110 mmHg to open the aortic valve and eject blood into the aorta.
Due to the increased afterload, the ventricle has to work harder to accomplish its goal of ejecting blood into the aorta. Thus, in the long-term, increased afterload (due to the stenosis) results in hypertrophy of the left ventricle to account for the increased work required and also to decrease wall stress since wall thickness and wall stress are inversely proportional.
Aortic insufficiency (Aortic Regurgitation) increases afterload, because a percentage of the blood that ejects forward regurgitates back through the diseased aortic valve. This leads to elevated systolic blood pressure. The diastolic blood pressure in the aorta falls, due to regurgitation. This increases pulse pressure.
Mitral regurgitation (MR) "decreases" afterload. In ventricular systole under MR, regurgitant blood flows backward (retrograde) through the diseased, leaking mitral valve. The remaining blood loaded into the LV is then optimally ejected out through the aortic valve. With an extra pathway for blood flow through the mitral valve, the left ventricle does not have to work as hard to eject its blood, i.e. there is a decreased afterload. Afterload is largely dependent upon aortic pressure.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\left ( \\frac{EDP \\cdot EDR}{2h} \\right )"
}
] |
https://en.wikipedia.org/wiki?curid=598676
|
59872034
|
FHI-aims
|
Molecular dynamics modelling software
FHI-aims (Fritz Haber Institute ab initio materials simulations) is a software package for computational molecular and materials science written in Fortran. It uses density functional theory and many-body perturbation theory to simulate chemical and physical properties of atoms, molecules, nanostructures, solids, and surfaces. Originally developed at the Fritz Haber Institute in Berlin, the ongoing development of the FHI-aims source code is now driven by a worldwide community of collaborating research institutions.
Overview.
The FHI-aims software package is an all-electron, full-potential electronic structure code utilizing numeric atom-centered basis functions for its electronic structure calculations. The localized basis set enables the accurate treatment of all electrons on the same footing in periodic and non-periodic systems without relying on approximations for the core states, such as pseudopotentials. Importantly, the basis sets enable high numerical accuracy on par with the best available all-electron reference methods while remaining scalable to system sizes up to several thousands of atoms. In order to achieve this for bulk solids, surfaces or other low-dimensional systems and molecules, the choice of basis functions is crucial.
The workload of the simulations is efficiently distributable for parallel computing using the MPI communication protocol. The code is routinely used on platforms ranging from laptops to distributed-parallel supercomputers with ten thousand CPUs, and the scalability of the code has been tested up to 100,000s of CPUs.
The primary production methods of FHI-aims are density functional theory as well as many-body methods and higher-level quantum chemistry approaches. For the exchange-correlation treatment, local (LDA), semi-local (e.g., PBE, PBEsol), meta-GGA, and hybrid (e.g., HSE06, B3LYP) functionals have been implemented. The resulting orbitals can be used within the framework of many-body perturbation theory, such as Møller-Plesset perturbation theory or the GW approximation. Moreover, thermodynamic properties of the molecules and solids are accessible via Born-Oppenheimer molecular dynamics and path integral molecular dynamics methods.
The first step is to expand the Kohn-Sham orbitals formula_0 into a set of basis functions formula_1:
formula_2
Since FHI-aims is an all-electron full-potential code that is computationally efficient without compromising accuracy, the choice of basis function is crucial in order to achieve the said accuracy. Therefore, FHI-aims is based on numerically tabulated atom-centered orbitals (NAOs) of the form:
formula_3
As the name implies, the radial shape formula_4 is numerically tabulated and, therefore, fully flexible. This allows the creation of optimized element-dependent basis sets that are as compact as possible while retaining a high and transferable accuracy in production calculations up to meV-level total energy convergence. To obtain real-valued formula_5, formula_6 here denotes the real parts (formula_7) and imaginary parts (formula_8) of complex spherical harmonics, with formula_9 an implicit function of the radial function index formula_10.
History.
The first line of code of the actual FHI-aims code was written in late 2004, using the atomic solver employed in the Fritz Haber Institute pseudopotential program package fhi98PP as a foundation to obtain radial functions for use as basis functions. The first developments benefitted heavily from the excellent set of numerical technologies described in several publications by Bernard Delley and coworkers in the context of the DMol3 code, as well as from many broader methodological developments published in the electronic structure theory community over the years. Initial efforts in FHI-aims focused on developing a complete numeric atom-centered basis set library for density-functional theory from "light" to highly accurate (few meV/atom) accuracy for total energies, available for the elements up to nobelium (Z=102) across the periodic table.
By 2006, work on parallel functionality, support for periodic boundary conditions, total energy gradients (forces) and on exact exchange and many-body perturbation theory had commenced. On May 18, 2009, an initial formal point release of the code, "051809", was made available and laid the foundation for broadening the user and developer base of the code.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\psi_i(r)"
},
{
"math_id": 1,
"text": "\\{\\phi_j(r)\\}"
},
{
"math_id": 2,
"text": "\\psi_i(r) = \\sum_j C_{ij} \\phi_j(r)."
},
{
"math_id": 3,
"text": "\\phi_j(r) = \\frac{\\mu_j(r)}{r} Y_{lm}(\\Omega)."
},
{
"math_id": 4,
"text": "\\mu_j(r)"
},
{
"math_id": 5,
"text": "\\phi_j(r)"
},
{
"math_id": 6,
"text": "Y_{lm}(\\Omega)"
},
{
"math_id": 7,
"text": "m=0,\\ldots,l"
},
{
"math_id": 8,
"text": "m=-l,\\ldots,-1"
},
{
"math_id": 9,
"text": "l"
},
{
"math_id": 10,
"text": "j"
}
] |
https://en.wikipedia.org/wiki?curid=59872034
|
59874
|
Schrödinger equation
|
Description of a quantum-mechanical system
The Schrödinger equation is a partial differential equation that governs the wave function of a quantum-mechanical system. Its discovery was a significant landmark in the development of quantum mechanics. It is named after Erwin Schrödinger, who postulated the equation in 1925 and published it in 1926, forming the basis for the work that resulted in his Nobel Prize in Physics in 1933.
Conceptually, the Schrödinger equation is the quantum counterpart of Newton's second law in classical mechanics. Given a set of known initial conditions, Newton's second law makes a mathematical prediction as to what path a given physical system will take over time. The Schrödinger equation gives the evolution over time of the wave function, the quantum-mechanical characterization of an isolated physical system. The equation was postulated by Schrödinger based on a postulate of Louis de Broglie that all matter has an associated matter wave. The equation predicted bound states of the atom in agreement with experimental observations.
The Schrödinger equation is not the only way to study quantum mechanical systems and make predictions. Other formulations of quantum mechanics include matrix mechanics, introduced by Werner Heisenberg, and the path integral formulation, developed chiefly by Richard Feynman. When these approaches are compared, the use of the Schrödinger equation is sometimes called "wave mechanics". The Klein-Gordon equation is a wave equation which is the relativistic version of the Schrödinger equation. The Schrödinger equation is nonrelativistic because it contains a first derivative in time and a second derivative in space, and therefore space and time are not on equal footing.
Paul Dirac incorporated special relativity and quantum mechanics into a single formulation that simplifies to the Schrödinger equation in the non-relativistic limit. This is the Dirac equation, which contains a single derivative in both space and time. The second-derivative PDE of the Klein-Gordon equation led to a problem with probability density even though it was a relativistic wave equation. The probability density could be negative, which is physically unviable. This was fixed by Dirac by taking the so-called square-root of the Klein-Gordon operator and in turn introducing Dirac matrices. In a modern context, the Klein-Gordon equation describes spin-less particles, while the Dirac equation describes spin-1/2 particles.
<templatestyles src="Template:TOC limit/styles.css" />
Definition.
Preliminaries.
Introductory courses on physics or chemistry typically introduce the Schrödinger equation in a way that can be appreciated knowing only the concepts and notations of basic calculus, particularly derivatives with respect to space and time. A special case of the Schrödinger equation that admits a statement in those terms is the position-space Schrödinger equation for a single nonrelativistic particle in one dimension:
formula_0
Here, formula_1 is a wave function, a function that assigns a complex number to each point formula_2 at each time formula_3. The parameter formula_4 is the mass of the particle, and formula_5 is the "potential" that represents the environment in which the particle exists. The constant formula_6 is the imaginary unit, and formula_7 is the reduced Planck constant, which has units of action (energy multiplied by time).
Broadening beyond this simple case, the mathematical formulation of quantum mechanics developed by Paul Dirac, David Hilbert, John von Neumann, and Hermann Weyl defines the state of a quantum mechanical system to be a vector formula_8 belonging to a separable complex Hilbert space formula_9. This vector is postulated to be normalized under the Hilbert space's inner product, that is, in Dirac notation it obeys formula_10. The exact nature of this Hilbert space is dependent on the system – for example, for describing position and momentum the Hilbert space is the space of square-integrable functions formula_11, while the Hilbert space for the spin of a single proton is the two-dimensional complex vector space formula_12 with the usual inner product.
Physical quantities of interest – position, momentum, energy, spin – are represented by observables, which are self-adjoint operators acting on the Hilbert space. A wave function can be an eigenvector of an observable, in which case it is called an eigenstate, and the associated eigenvalue corresponds to the value of the observable in that eigenstate. More generally, a quantum state will be a linear combination of the eigenstates, known as a quantum superposition. When an observable is measured, the result will be one of its eigenvalues with probability given by the Born rule: in the simplest case the eigenvalue formula_13 is non-degenerate and the probability is given by formula_14, where formula_15 is its associated eigenvector. More generally, the eigenvalue is degenerate and the probability is given by formula_16, where formula_17 is the projector onto its associated eigenspace.
A momentum eigenstate would be a perfectly monochromatic wave of infinite extent, which is not square-integrable. Likewise a position eigenstate would be a Dirac delta distribution, not square-integrable and technically not a function at all. Consequently, neither can belong to the particle's Hilbert space. Physicists sometimes regard these eigenstates as "generalized eigenvectors" for a Hilbert space composed of elements outside that space. These are used for calculational convenience and do not represent physical states. Thus, a position-space wave function formula_1 as used above can be written as the inner product of a time-dependent state vector formula_18 with unphysical but convenient "position eigenstates" formula_19:
formula_20
Time-dependent equation.
The form of the Schrödinger equation depends on the physical situation. The most general form is the time-dependent Schrödinger equation, which gives a description of a system evolving with time:
Time-dependent Schrödinger equation "(general)"
formula_21
where formula_3 is time, formula_22 is the state vector of the quantum system (formula_23 being the Greek letter psi), and formula_24 is an observable, the Hamiltonian operator.
The term "Schrödinger equation" can refer to both the general equation, or the specific nonrelativistic version. The general equation is indeed quite general, used throughout quantum mechanics, for everything from the Dirac equation to quantum field theory, by plugging in diverse expressions for the Hamiltonian. The specific nonrelativistic version is an approximation that yields accurate results in many situations, but only to a certain extent (see relativistic quantum mechanics and relativistic quantum field theory).
To apply the Schrödinger equation, write down the Hamiltonian for the system, accounting for the kinetic and potential energies of the particles constituting the system, then insert it into the Schrödinger equation. The resulting partial differential equation is solved for the wave function, which contains information about the system. In practice, the square of the absolute value of the wave function at each point is taken to define a probability density function. For example, given a wave function in position space formula_1 as above, we have
formula_25
Time-independent equation.
The time-dependent Schrödinger equation described above predicts that wave functions can form standing waves, called stationary states. These states are particularly important as their individual study later simplifies the task of solving the time-dependent Schrödinger equation for "any" state. Stationary states can also be described by a simpler form of the Schrödinger equation, the time-independent Schrödinger equation.
Time-independent Schrödinger equation ("general")
formula_26
where formula_27 is the energy of the system. This is only used when the Hamiltonian itself is not dependent on time explicitly. However, even in this case the total wave function is dependent on time as explained in the section on linearity below. In the language of linear algebra, this equation is an eigenvalue equation. Therefore, the wave function is an eigenfunction of the Hamiltonian operator with corresponding eigenvalue(s) formula_27.
Properties.
Linearity.
The Schrödinger equation is a linear differential equation, meaning that if two state vectors formula_28 and formula_29 are solutions, then so is any linear combination
formula_30
of the two state vectors where a and b are any complex numbers. Moreover, the sum can be extended for any number of state vectors. This property allows superpositions of quantum states to be solutions of the Schrödinger equation. Even more generally, it holds that a general solution to the Schrödinger equation can be found by taking a weighted sum over a basis of states. A choice often employed is the basis of energy eigenstates, which are solutions of the time-independent Schrödinger equation. In this basis, a time-dependent state vector formula_18 can be written as the linear combination
formula_31
where formula_32 are complex numbers and the vectors formula_33 are solutions of the time-independent equation formula_34.
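A minimal numerical sketch of this expansion, in which the Hamiltonian is simply a random Hermitian matrix acting on a four-dimensional toy Hilbert space (all numbers below are arbitrary stand-ins): expanding the initial state in energy eigenvectors and attaching a phase to each coefficient reproduces the state obtained by applying the matrix exponential of the Hamiltonian directly.

import numpy as np
from scipy.linalg import expm

hbar = 1.0
rng = np.random.default_rng(0)

# A random Hermitian matrix standing in for the Hamiltonian of a toy model.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
H = (A + A.conj().T) / 2

E, V = np.linalg.eigh(H)                  # energy eigenvalues and eigenvectors
psi0 = rng.normal(size=4) + 1j * rng.normal(size=4)
psi0 /= np.linalg.norm(psi0)              # normalized initial state

t = 2.7
coeff = V.conj().T @ psi0                 # expansion coefficients in the energy basis
psi_t_expansion = V @ (np.exp(-1j * E * t / hbar) * coeff)

psi_t_direct = expm(-1j * H * t / hbar) @ psi0   # direct evolution for comparison
print(np.allclose(psi_t_expansion, psi_t_direct))   # True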
Unitarity.
Holding the Hamiltonian formula_24 constant, the Schrödinger equation has the solution
formula_35
The operator formula_36 is known as the time-evolution operator, and it is unitary: it preserves the inner product between vectors in the Hilbert space. Unitarity is a general feature of time evolution under the Schrödinger equation. If the initial state is formula_37, then the state at a later time formula_3 will be given by
formula_38
for some unitary operator formula_39. Conversely, suppose that formula_39 is a continuous family of unitary operators parameterized by formula_3. Without loss of generality, the parameterization can be chosen so that formula_40 is the identity operator and that formula_41 for any formula_42. Then formula_39 depends upon the parameter formula_3 in such a way that
formula_43
for some self-adjoint operator formula_44, called the "generator" of the family formula_39. A Hamiltonian is just such a generator (up to the factor of the Planck constant that would be set to 1 in natural units).
To see that the generator is Hermitian, note that with formula_45, we have
formula_46 so formula_39 is unitary to first order only if the generator formula_44 is self-adjoint.
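A short numerical illustration of this argument, with an arbitrary self-adjoint matrix playing the role of the generator: the finite-time evolution operator built from it is unitary, while the first-order approximation deviates from unitarity only at second order in the time step.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
G = (A + A.conj().T) / 2                        # a self-adjoint generator

U = expm(-1j * G * 0.8)                         # finite-time evolution operator
print(np.allclose(U.conj().T @ U, np.eye(3)))   # True: U is unitary

dt = 1e-4
U_dt = np.eye(3) - 1j * G * dt                  # U(dt) to first order
defect = U_dt.conj().T @ U_dt - np.eye(3)
print(np.max(np.abs(defect)))                   # of order dt**2 because G is Hermitian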
Changes of basis.
The Schrödinger equation is often presented using quantities varying as functions of position, but as a vector-operator equation it has a valid representation in any arbitrary complete basis of kets in Hilbert space. As mentioned above, "bases" that lie outside the physical Hilbert space are also employed for calculational purposes. This is illustrated by the "position-space" and "momentum-space" Schrödinger equations for a nonrelativistic, spinless particle. The Hilbert space for such a particle is the space of complex square-integrable functions on three-dimensional Euclidean space, and its Hamiltonian is the sum of a kinetic-energy term that is quadratic in the momentum operator and a potential-energy term:
formula_47
Writing formula_48 for a three-dimensional position vector and formula_49 for a three-dimensional momentum vector, the position-space Schrödinger equation is
formula_50
The momentum-space counterpart involves the Fourier transforms of the wave function and the potential:
formula_51
The functions formula_52 and formula_53 are derived from formula_18 by
formula_54
formula_55
where formula_56 and formula_57 do not belong to the Hilbert space itself, but have well-defined inner products with all elements of that space.
When restricted from three dimensions to one, the position-space equation is just the first form of the Schrödinger equation given above. The relation between position and momentum in quantum mechanics can be appreciated in a single dimension. In canonical quantization, the classical variables formula_2 and formula_58 are promoted to self-adjoint operators formula_59 and formula_60 that satisfy the canonical commutation relation
formula_61
This implies that
formula_62
so the action of the momentum operator formula_60 in the position-space representation is formula_63. Thus, formula_64 becomes a second derivative, and in three dimensions, the second derivative becomes the Laplacian formula_65.
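These statements can be checked symbolically. The following sketch uses the SymPy library with an arbitrary test function: it verifies the canonical commutation relation for the position-space momentum operator and shows that a plane wave is a (generalized) eigenfunction of that operator with eigenvalue ħk.

import sympy as sp

x, hbar, k = sp.symbols('x hbar k', real=True)
f = sp.Function('f')(x)                       # an arbitrary test function

p = lambda g: -sp.I * hbar * sp.diff(g, x)    # momentum operator in the position representation

# Canonical commutation relation applied to the test function: (x p - p x) f = i*hbar*f
commutator = x * p(f) - p(x * f)
print(sp.simplify(commutator))                # I*hbar*f(x)

# A plane wave exp(i k x) is a generalized eigenfunction of p with eigenvalue hbar*k.
plane_wave = sp.exp(sp.I * k * x)
print(sp.simplify(p(plane_wave) / plane_wave))   # hbar*k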
The canonical commutation relation also implies that the position and momentum operators are Fourier conjugates of each other. Consequently, functions originally defined in terms of their position dependence can be converted to functions of momentum using the Fourier transform. In solid-state physics, the Schrödinger equation is often written for functions of momentum, as Bloch's theorem ensures the periodic crystal lattice potential couples formula_66 with formula_67 for only discrete reciprocal lattice vectors formula_68. This makes it convenient to solve the momentum-space Schrödinger equation at each point in the Brillouin zone independently of the other points in the Brillouin zone.
Probability current.
The Schrödinger equation is consistent with local probability conservation. It also ensures that a normalized wavefunction remains normalized after time evolution. In matrix mechanics, this means that the time evolution operator is a unitary operator. This contrasts with, for example, the Klein–Gordon equation, for which a suitably redefined inner product of a wavefunction can be time independent even though the total volume integral of the modulus squared of the wavefunction need not be.
The continuity equation for probability in nonrelativistic quantum mechanics is stated as:
formula_69 where
formula_70
is the probability current or probability flux (flow per unit area).
If the wavefunction is represented as formula_71 where formula_72 is a real function which represents the complex phase of the wavefunction, then the probability flux is calculated as: formula_73 Hence, the spatial variation of the phase of a wavefunction is said to characterize its probability flux. Although the formula_74 term appears to play the role of a velocity, it does not represent velocity at a point, since simultaneous measurement of position and velocity violates the uncertainty principle.
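As a numerical illustration (a sketch with arbitrary parameter values), take a Gaussian probability density carrying the plane-wave phase S(x) = ħ k0 x; the flux formula above then reduces to ρ ħ k0 / m, and a finite-difference evaluation of the general expression reproduces it.

import numpy as np

hbar, m, k0, sigma = 1.0, 1.0, 2.0, 1.0
x = np.linspace(-10, 10, 4001)

rho = (2 * np.pi * sigma**2) ** -0.5 * np.exp(-x**2 / (2 * sigma**2))
psi = np.sqrt(rho) * np.exp(1j * k0 * x)       # phase S(x) = hbar * k0 * x

dpsi = np.gradient(psi, x)                     # finite-difference derivative
j_numeric = (hbar / m) * np.imag(np.conj(psi) * dpsi)
j_formula = rho * hbar * k0 / m                # rho * grad(S) / m

print(np.max(np.abs(j_numeric - j_formula)))   # small (finite-difference error only)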
Separation of variables.
If the Hamiltonian is not an explicit function of time, Schrödinger's equation reads:
formula_75 The operator on the left side depends only on time; the one on the right side depends only on space.
Solving the equation by separation of variables means seeking a solution of the form of a product of spatial and temporal parts
formula_76
where formula_77 is a function of all the spatial coordinate(s) of the particle(s) constituting the system only, and formula_78 is a function of time only. Substituting this expression for formula_23 into the time dependent left hand side shows that formula_78 is a phase factor:
formula_79
A solution of this type is called "stationary," since the only time dependence is a phase factor that cancels when the probability density is calculated via the Born rule.
The spatial part of the full wave function solves:
formula_80
where the energy formula_27 appears in the phase factor.
This generalizes to any number of particles in any number of dimensions (in a time-independent potential): the standing wave solutions of the time-independent equation are the states with definite energy, instead of a probability distribution of different energies. In physics, these standing waves are called "stationary states" or "energy eigenstates"; in chemistry they are called "atomic orbitals" or "molecular orbitals". Superpositions of energy eigenstates change their properties according to the relative phases between the energy levels. The energy eigenstates form a basis: any wave function may be written as a sum over the discrete energy states or an integral over continuous energy states, or more generally as an integral over a measure. This is the spectral theorem in mathematics, and in a finite-dimensional state space it is just a statement of the completeness of the eigenvectors of a Hermitian matrix.
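The product ansatz used above can also be verified symbolically. The following SymPy sketch, written for a single particle in one dimension with a generic time-independent potential, substitutes the product of a spatial function and the phase factor into the time-dependent equation; the time dependence cancels, and what remains is exactly the time-independent equation.

import sympy as sp

x, t, hbar, m, E = sp.symbols('x t hbar m E', real=True, positive=True)
psi = sp.Function('psi')(x)                  # spatial part
V = sp.Function('V')(x)                      # time-independent potential

Psi = psi * sp.exp(-sp.I * E * t / hbar)     # product ansatz with the phase factor

lhs = sp.I * hbar * sp.diff(Psi, t)
rhs = -hbar**2 / (2 * m) * sp.diff(Psi, x, 2) + V * Psi

# Dividing out the phase factor leaves the time-independent equation set to zero:
# E*psi + (hbar**2/(2*m))*psi'' - V*psi
residual = sp.simplify((lhs - rhs) / sp.exp(-sp.I * E * t / hbar))
print(residual)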
Separation of variables can also be a useful method for the time-independent Schrödinger equation. For example, depending on the symmetry of the problem, the Cartesian axes might be separated,
formula_81
or radial and angular coordinates might be separated:
formula_82
Examples.
Particle in a box.
The particle in a one-dimensional potential energy box is the most mathematically simple example where constraints lead to the quantization of energy levels. The box is defined as having zero potential energy "inside" a certain region and infinite potential energy "outside". For the one-dimensional case in the formula_2 direction, the time-independent Schrödinger equation may be written
formula_83
With the differential operator defined by
formula_84
the previous equation is evocative of the classic kinetic energy analogue,
formula_85
with state formula_86 in this case having energy formula_27 coincident with the kinetic energy of the particle.
The general solutions of the Schrödinger equation for the particle in a box are
formula_87
or, from Euler's formula,
formula_88
The infinite potential walls of the box determine the values of formula_89 and formula_90 at formula_91 and formula_92 where formula_86 must be zero. Thus, at formula_91,
formula_93
and formula_94. At formula_92,
formula_95
in which formula_96 cannot be zero as this would conflict with the postulate that formula_86 has norm 1. Therefore, since formula_97, formula_98 must be an integer multiple of formula_99,
formula_100
This constraint on formula_90 implies a constraint on the energy levels, yielding
formula_101
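These analytic levels are easy to reproduce numerically. The sketch below, in units where ħ = m = L = 1 and with an arbitrary grid size, discretizes the second derivative with the wave function pinned to zero at the walls and compares the lowest eigenvalues of the resulting matrix with the formula above.

import numpy as np

hbar = m = L = 1.0
N = 1000
x = np.linspace(0, L, N + 2)[1:-1]        # interior grid points; psi = 0 at the walls
h = x[1] - x[0]

# Tridiagonal second-derivative operator with Dirichlet boundary conditions.
laplacian = (np.diag(np.full(N, -2.0)) +
             np.diag(np.ones(N - 1), 1) +
             np.diag(np.ones(N - 1), -1)) / h**2

H = -hbar**2 / (2 * m) * laplacian        # zero potential inside the box
E_numeric = np.linalg.eigvalsh(H)[:4]
E_exact = np.array([n**2 * np.pi**2 * hbar**2 / (2 * m * L**2) for n in range(1, 5)])
print(E_numeric)
print(E_exact)                            # the two agree to several digits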
A finite potential well is the generalization of the infinite potential well problem to potential wells having finite depth. The finite potential well problem is mathematically more complicated than the infinite particle-in-a-box problem as the wave function is not pinned to zero at the walls of the well. Instead, the wave function must satisfy more complicated mathematical boundary conditions as it is nonzero in regions outside the well. Another related problem is that of the rectangular potential barrier, which furnishes a model for the quantum tunneling effect that plays an important role in the performance of modern technologies such as flash memory and scanning tunneling microscopy.
Harmonic oscillator.
The Schrödinger equation for this situation is
formula_102
where formula_103 is the displacement and formula_104 the angular frequency. Furthermore, it can be used to describe approximately a wide variety of other systems, including vibrating atoms, molecules, and atoms or ions in lattices, and approximating other potentials near equilibrium points. It is also the basis of perturbation methods in quantum mechanics.
The solutions in position space are
formula_105
where formula_106, and the functions formula_107 are the Hermite polynomials of order formula_108. The solution set may be generated by
formula_109
The eigenvalues are
formula_110
The case formula_111 is called the ground state, its energy is called the zero-point energy, and the wave function is a Gaussian.
The harmonic oscillator, like the particle in a box, illustrates the generic feature of the Schrödinger equation that the energies of bound eigenstates are discretized.
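As with the box, the oscillator spectrum can be checked by direct diagonalization of a discretized Hamiltonian. The sketch below uses units with ħ = m = ω = 1 and an arbitrary grid chosen wide enough to contain the low-lying states.

import numpy as np

hbar = m = omega = 1.0
N, x_max = 1500, 10.0
x = np.linspace(-x_max, x_max, N)
h = x[1] - x[0]

laplacian = (np.diag(np.full(N, -2.0)) +
             np.diag(np.ones(N - 1), 1) +
             np.diag(np.ones(N - 1), -1)) / h**2
H = -hbar**2 / (2 * m) * laplacian + np.diag(0.5 * m * omega**2 * x**2)

E = np.linalg.eigvalsh(H)[:5]
print(E)                                       # approximately 0.5, 1.5, 2.5, 3.5, 4.5
print([(n + 0.5) * hbar * omega for n in range(5)])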
Hydrogen atom.
The Schrödinger equation for the electron in a hydrogen atom (or a hydrogen-like atom) is
formula_112
where formula_113 is the electron charge, formula_114 is the position of the electron relative to the nucleus, formula_115 is the magnitude of the relative position, the potential term is due to the Coulomb interaction, wherein formula_116 is the permittivity of free space and
formula_117
is the 2-body reduced mass of the hydrogen nucleus (just a proton) of mass formula_118 and the electron of mass formula_119. The negative sign arises in the potential term since the proton and electron are oppositely charged. The reduced mass in place of the electron mass is used since the electron and proton together orbit each other about a common center of mass, and constitute a two-body problem to solve. The motion of the electron is of principal interest here, so the equivalent one-body problem is the motion of the electron using the reduced mass.
The Schrödinger equation for a hydrogen atom can be solved by separation of variables. In this case, spherical polar coordinates are the most convenient. Thus,
formula_120
where "R" are radial functions and formula_121 are spherical harmonics of degree formula_122 and order formula_123. This is the only atom for which the Schrödinger equation has been solved exactly; multi-electron atoms require approximate methods. The family of solutions is:
formula_124
where formula_125 is the Bohr radius, formula_126 are the generalized Laguerre polynomials of degree formula_127, and formula_128 are respectively the principal, azimuthal, and magnetic quantum numbers, which take the values formula_129 formula_130 and formula_131
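A numerical counterpart, restricted for simplicity to the ℓ = 0 radial problem in atomic units (ħ = m = e = 4πε0 = 1, so the exact levels are -1/(2n^2)): discretize the equation for u(r) = rR(r) on a finite interval and diagonalize. The grid size and cutoff radius below are arbitrary, and the lowest eigenvalues reproduce the exact values only to a few digits.

import numpy as np

N, r_max = 2000, 70.0
r = np.linspace(0, r_max, N + 2)[1:-1]      # u(0) = u(r_max) = 0
h = r[1] - r[0]

laplacian = (np.diag(np.full(N, -2.0)) +
             np.diag(np.ones(N - 1), 1) +
             np.diag(np.ones(N - 1), -1)) / h**2

# l = 0 radial equation for u(r) = r R(r):  -u''/2 - u/r = E u   (atomic units)
H = -0.5 * laplacian + np.diag(-1.0 / r)
print(np.linalg.eigvalsh(H)[:3])            # close to -0.5, -0.125, -0.0556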
Approximate solutions.
It is typically not possible to solve the Schrödinger equation exactly for situations of physical interest. Accordingly, approximate solutions are obtained using techniques like variational methods and WKB approximation. It is also common to treat a problem of interest as a small modification to a problem that can be solved exactly, a method known as perturbation theory.
Semiclassical limit.
One simple way to compare classical to quantum mechanics is to consider the time-evolution of the "expected" position and "expected" momentum, which can then be compared to the time-evolution of the ordinary position and momentum in classical mechanics. The quantum expectation values satisfy the Ehrenfest theorem. For a one-dimensional quantum particle moving in a potential formula_132, the Ehrenfest theorem says
formula_133
Although the first of these equations is consistent with the classical behavior, the second is not: If the pair formula_134 were to satisfy Newton's second law, the right-hand side of the second equation would have to be
formula_135
which is typically not the same as formula_136. For a general formula_137, therefore, quantum mechanics can lead to predictions where expectation values do not mimic the classical behavior. In the case of the quantum harmonic oscillator, however, formula_137 is linear and this distinction disappears, so that in this very special case, the expected position and expected momentum do exactly follow the classical trajectories.
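This special property of the harmonic oscillator can be seen numerically: evolving a displaced ground-state Gaussian on a grid (a sketch in units ħ = m = ω = 1, with an arbitrary displacement) gives an expected position that tracks the classical trajectory x0 cos(ωt).

import numpy as np

hbar = m = omega = 1.0
N, x_max = 600, 12.0
x = np.linspace(-x_max, x_max, N)
h = x[1] - x[0]

laplacian = (np.diag(np.full(N, -2.0)) +
             np.diag(np.ones(N - 1), 1) +
             np.diag(np.ones(N - 1), -1)) / h**2
H = -hbar**2 / (2 * m) * laplacian + np.diag(0.5 * m * omega**2 * x**2)
E, V = np.linalg.eigh(H)

x0 = 2.0                                        # displaced ground-state Gaussian
psi0 = np.exp(-m * omega * (x - x0)**2 / (2 * hbar))
psi0 = psi0 / np.sqrt(np.sum(np.abs(psi0)**2) * h)

coeff = V.conj().T @ psi0                       # expand in energy eigenstates
for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
    psi_t = V @ (np.exp(-1j * E * t / hbar) * coeff)
    x_mean = np.sum(x * np.abs(psi_t)**2) * h
    print(t, x_mean, x0 * np.cos(omega * t))    # quantum <x> follows the classical path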
For general systems, the best we can hope for is that the expected position and momentum will "approximately" follow the classical trajectories. If the wave function is highly concentrated around a point formula_138, then formula_139 and formula_140 will be "almost" the same, since both will be approximately equal to formula_141. In that case, the expected position and expected momentum will remain very close to the classical trajectories, at least for as long as the wave function remains highly localized in position.
The Schrödinger equation in its general form
formula_142
is closely related to the Hamilton–Jacobi equation (HJE)
formula_143
where formula_144 is the classical action and formula_145 is the Hamiltonian function (not operator). Here the generalized coordinates formula_146 for formula_147 (used in the context of the HJE) can be set to the position in Cartesian coordinates as formula_148.
Substituting
formula_149
where formula_150 is the probability density, into the Schrödinger equation and then taking the limit formula_151 in the resulting equation yields the Hamilton–Jacobi equation.
Density matrices.
Wave functions are not always the most convenient way to describe quantum systems and their behavior. When the preparation of a system is only imperfectly known, or when the system under investigation is a part of a larger whole, density matrices may be used instead. A density matrix is a positive semi-definite operator whose trace is equal to 1. (The term "density operator" is also used, particularly when the underlying Hilbert space is infinite-dimensional.) The set of all density matrices is convex, and the extreme points are the operators that project onto vectors in the Hilbert space. These are the density-matrix representations of wave functions; in Dirac notation, they are written formula_152
The density-matrix analogue of the Schrödinger equation for wave functions is
formula_153
where the brackets denote a commutator. This is variously known as the von Neumann equation, the Liouville–von Neumann equation, or just the Schrödinger equation for density matrices. If the Hamiltonian is time-independent, this equation can be easily solved to yield
formula_154
More generally, if the unitary operator formula_39 describes wave function evolution over some time interval, then the time evolution of a density matrix over that same interval is given by
formula_155
Unitary evolution of a density matrix conserves its von Neumann entropy.
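A small sketch for a single qubit (the Hamiltonian and the initial mixed state below are arbitrary choices) illustrates both statements: the evolved density matrix keeps unit trace, and its von Neumann entropy does not change under the unitary evolution.

import numpy as np
from scipy.linalg import expm

hbar = 1.0
sz = np.array([[1, 0], [0, -1]], dtype=complex)
H = 0.5 * sz                                    # a simple qubit Hamiltonian (assumed)

# A mixed state: 75% "spin up along x", 25% maximally mixed.
plus_x = np.array([1, 1], dtype=complex) / np.sqrt(2)
rho0 = 0.75 * np.outer(plus_x, plus_x.conj()) + 0.25 * np.eye(2) / 2

def von_neumann_entropy(rho):
    p = np.linalg.eigvalsh(rho)
    p = p[p > 1e-12]
    return float(-np.sum(p * np.log(p)))

for t in [0.0, 1.0, 5.0]:
    U = expm(-1j * H * t / hbar)
    rho_t = U @ rho0 @ U.conj().T               # rho(t) = U rho(0) U^dagger
    print(t, np.trace(rho_t).real, von_neumann_entropy(rho_t))
    # the trace stays 1 and the entropy stays constant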
Relativistic quantum physics and quantum field theory.
The one-particle Schrödinger equation described above is valid essentially in the nonrelativistic domain. One reason is that it is essentially invariant under Galilean transformations, which form the symmetry group of Newtonian dynamics. Moreover, processes that change particle number are natural in relativity, and so an equation for one particle (or any fixed number thereof) can only be of limited use. A more general form of the Schrödinger equation that also applies in relativistic situations can be formulated within quantum field theory (QFT), a framework that allows the combination of quantum mechanics with special relativity. The region in which both simultaneously apply may be described by relativistic quantum mechanics. Such descriptions may use time evolution generated by a Hamiltonian operator, as in the Schrödinger functional method.
Klein–Gordon and Dirac equations.
Attempts to combine quantum physics with special relativity began with building relativistic wave equations from the relativistic energy–momentum relation
formula_156
instead of nonrelativistic energy equations. The Klein–Gordon equation and the Dirac equation are two such equations. The Klein–Gordon equation,
formula_157
was the first such equation to be obtained, even before the nonrelativistic one-particle Schrödinger equation, and applies to massive spinless particles. Historically, Dirac obtained the Dirac equation by seeking a differential equation that would be first-order in both time and space, a desirable property for a relativistic theory. Taking the "square root" of the left-hand side of the Klein–Gordon equation in this way required factorizing it into a product of two operators, which Dirac wrote using 4 × 4 matrices formula_158. Consequently, the wave function also became a four-component function, governed by the Dirac equation that, in free space, read
formula_159
This has again the form of the Schrödinger equation, with the time derivative of the wave function being given by a Hamiltonian operator acting upon the wave function. Including influences upon the particle requires modifying the Hamiltonian operator. For example, the Dirac Hamiltonian for a particle of mass "m" and electric charge "q" in an electromagnetic field (described by the electromagnetic potentials "φ" and A) is:
formula_160
in which the γ = ("γ"1, "γ"2, "γ"3) and "γ"0 are the Dirac gamma matrices related to the spin of the particle. The Dirac equation is true for all spin-1/2 particles, and the solutions to the equation are 4-component spinor fields with two components corresponding to the particle and the other two to the antiparticle.
For the Klein–Gordon equation, the general form of the Schrödinger equation is inconvenient to use, and in practice the Hamiltonian is not expressed in an analogous way to the Dirac Hamiltonian. The equations for relativistic quantum fields, of which the Klein–Gordon and Dirac equations are two examples, can be obtained in other ways, such as starting from a Lagrangian density and using the Euler–Lagrange equations for fields, or using the representation theory of the Lorentz group in which certain representations can be used to fix the equation for a free particle of given spin (and mass).
In general, the Hamiltonian to be substituted in the general Schrödinger equation is not just a function of the position and momentum operators (and possibly time), but also of spin matrices. Also, the solutions to a relativistic wave equation, for a massive particle of spin "s", are complex-valued spinor fields.
Fock space.
As originally formulated, the Dirac equation is an equation for a single quantum particle, just like the single-particle Schrödinger equation with wave function formula_1. This is of limited use in relativistic quantum mechanics, where particle number is not fixed. Heuristically, this complication can be motivated by noting that mass–energy equivalence implies material particles can be created from energy. A common way to address this in QFT is to introduce a Hilbert space where the basis states are labeled by particle number, a so-called Fock space. The Schrödinger equation can then be formulated for quantum states on this Hilbert space. However, because the Schrödinger equation picks out a preferred time axis, the Lorentz invariance of the theory is no longer manifest, and accordingly, the theory is often formulated in other ways.
History.
Following Max Planck's quantization of light (see black-body radiation), Albert Einstein interpreted Planck's quanta to be photons, particles of light, and proposed that the energy of a photon is proportional to its frequency, one of the first signs of wave–particle duality. Since energy and momentum are related in the same way as frequency and wave number in special relativity, it followed that the momentum formula_58 of a photon is inversely proportional to its wavelength formula_13, or proportional to its wave number formula_90:
formula_161
where formula_162 is the Planck constant and formula_163 is the reduced Planck constant. Louis de Broglie hypothesized that this is true for all particles, even particles which have mass such as electrons. He showed that, assuming that the matter waves propagate along with their particle counterparts, electrons form standing waves, meaning that only certain discrete rotational frequencies about the nucleus of an atom are allowed.
These quantized orbits correspond to discrete energy levels, and de Broglie reproduced the Bohr model formula for the energy levels. The Bohr model was based on the assumed quantization of angular momentum formula_164 according to
formula_165
According to de Broglie, the electron is described by a wave, and a whole number of wavelengths must fit along the circumference of the electron's orbit:
formula_166
This approach essentially confined the electron wave in one dimension, along a circular orbit of radius formula_167.
In 1921, prior to de Broglie, Arthur C. Lunn at the University of Chicago had used the same argument based on the completion of the relativistic energy–momentum 4-vector to derive what we now call the de Broglie relation. Unlike de Broglie, Lunn went on to formulate the differential equation now known as the Schrödinger equation and solve for its energy eigenvalues for the hydrogen atom; the paper was rejected by the "Physical Review", according to Kamen.
Following up on de Broglie's ideas, physicist Peter Debye made an offhand comment that if particles behaved as waves, they should satisfy some sort of wave equation. Inspired by Debye's remark, Schrödinger decided to find a proper 3-dimensional wave equation for the electron. He was guided by William Rowan Hamilton's analogy between mechanics and optics, encoded in the observation that the zero-wavelength limit of optics resembles a mechanical system—the trajectories of light rays become sharp tracks that obey Fermat's principle, an analog of the principle of least action.
The equation he found is
formula_168
By that time Arnold Sommerfeld had refined the Bohr model with relativistic corrections. Schrödinger used the relativistic energy–momentum relation to find what is now known as the Klein–Gordon equation in a Coulomb potential (in natural units):
formula_169
He found the standing waves of this relativistic equation, but the relativistic corrections disagreed with Sommerfeld's formula. Discouraged, he put away his calculations and secluded himself with a mistress in a mountain cabin in December 1925.
While at the cabin, Schrödinger decided that his earlier nonrelativistic calculations were novel enough to publish and decided to leave off the problem of relativistic corrections for the future. Despite the difficulties in solving the differential equation for hydrogen (he had sought help from his friend the mathematician Hermann Weyl) Schrödinger showed that his nonrelativistic version of the wave equation produced the correct spectral energies of hydrogen in a paper published in 1926. Schrödinger computed the hydrogen spectral series by treating a hydrogen atom's electron as a wave formula_170, moving in a potential well formula_132, created by the proton. This computation accurately reproduced the energy levels of the Bohr model.
The Schrödinger equation details the behavior of formula_23 but says nothing of its "nature". Schrödinger tried to interpret the real part of formula_171 as a charge density, and then revised this proposal, saying in his next paper that the modulus squared of formula_23 is a charge density. This approach was, however, unsuccessful. In 1926, just a few days after this paper was published, Max Born successfully interpreted formula_23 as the probability amplitude, whose modulus squared is equal to probability density. Later, Schrödinger himself explained this interpretation as follows:
Interpretation.
The Schrödinger equation provides a way to calculate the wave function of a system and how it changes dynamically in time. However, the Schrödinger equation does not directly say "what," exactly, the wave function is. The meaning of the Schrödinger equation and how the mathematical entities in it relate to physical reality depends upon the interpretation of quantum mechanics that one adopts.
In the views often grouped together as the Copenhagen interpretation, a system's wave function is a collection of statistical information about that system. The Schrödinger equation relates information about the system at one time to information about it at another. While the time-evolution process represented by the Schrödinger equation is continuous and deterministic, in that knowing the wave function at one instant is in principle sufficient to calculate it for all future times, wave functions can also change discontinuously and stochastically during a measurement. The wave function changes, according to this school of thought, because new information is available. The post-measurement wave function generally cannot be known prior to the measurement, but the probabilities for the different possibilities can be calculated using the Born rule. Other, more recent interpretations of quantum mechanics, such as relational quantum mechanics and QBism also give the Schrödinger equation a status of this sort.
Schrödinger himself suggested in 1952 that the different terms of a superposition evolving under the Schrödinger equation are "not alternatives but all really happen simultaneously". This has been interpreted as an early version of Everett's many-worlds interpretation. This interpretation, formulated independently in 1956, holds that "all" the possibilities described by quantum theory "simultaneously" occur in a multiverse composed of mostly independent parallel universes. This interpretation removes the axiom of wave function collapse, leaving only continuous evolution under the Schrödinger equation, and so all possible states of the measured system and the measuring apparatus, together with the observer, are present in a real physical quantum superposition. While the multiverse is deterministic, we perceive non-deterministic behavior governed by probabilities, because we do not observe the multiverse as a whole, but only one parallel universe at a time. Exactly how this is supposed to work has been the subject of much debate. Why should we assign probabilities at all to outcomes that are certain to occur in some worlds, and why should the probabilities be given by the Born rule? Several ways to answer these questions in the many-worlds framework have been proposed, but there is no consensus on whether they are successful.
Bohmian mechanics reformulates quantum mechanics to make it deterministic, at the price of adding a force due to a "quantum potential". It attributes to each physical system not only a wave function but in addition a real position that evolves deterministically under a nonlocal guiding equation. The evolution of a physical system is given at all times by the Schrödinger equation together with the guiding equation.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "i\\hbar\\frac{\\partial}{\\partial t} \\Psi(x,t) = \\left [ - \\frac{\\hbar^2}{2m}\\frac{\\partial^2}{\\partial x^2} + V(x,t)\\right ] \\Psi(x,t)."
},
{
"math_id": 1,
"text": "\\Psi(x,t)"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "t"
},
{
"math_id": 4,
"text": "m"
},
{
"math_id": 5,
"text": "V(x,t)"
},
{
"math_id": 6,
"text": "i"
},
{
"math_id": 7,
"text": "\\hbar"
},
{
"math_id": 8,
"text": "|\\psi\\rangle"
},
{
"math_id": 9,
"text": "\\mathcal H"
},
{
"math_id": 10,
"text": "\\langle \\psi | \\psi \\rangle = 1"
},
{
"math_id": 11,
"text": "L^2"
},
{
"math_id": 12,
"text": "\\Complex^2"
},
{
"math_id": 13,
"text": "\\lambda"
},
{
"math_id": 14,
"text": "|\\langle \\lambda | \\psi\\rangle|^2"
},
{
"math_id": 15,
"text": " |\\lambda\\rangle"
},
{
"math_id": 16,
"text": "\\langle \\psi | P_\\lambda |\\psi\\rangle"
},
{
"math_id": 17,
"text": "P_\\lambda"
},
{
"math_id": 18,
"text": "|\\Psi(t)\\rangle"
},
{
"math_id": 19,
"text": "|x\\rangle"
},
{
"math_id": 20,
"text": "\\Psi(x,t) = \\langle x | \\Psi(t) \\rangle."
},
{
"math_id": 21,
"text": "i \\hbar \\frac{d}{d t}\\vert\\Psi(t)\\rangle = \\hat H\\vert\\Psi(t)\\rangle"
},
{
"math_id": 22,
"text": "\\vert\\Psi(t)\\rangle"
},
{
"math_id": 23,
"text": "\\Psi"
},
{
"math_id": 24,
"text": "\\hat{H}"
},
{
"math_id": 25,
"text": "\\Pr(x,t) = |\\Psi(x,t)|^2."
},
{
"math_id": 26,
"text": "\\operatorname{\\hat H}|\\Psi\\rangle = E |\\Psi\\rangle "
},
{
"math_id": 27,
"text": "E"
},
{
"math_id": 28,
"text": "|\\psi_1\\rangle"
},
{
"math_id": 29,
"text": "|\\psi_2\\rangle"
},
{
"math_id": 30,
"text": " |\\psi\\rangle = a|\\psi_1\\rangle + b |\\psi_2\\rangle "
},
{
"math_id": 31,
"text": "|\\Psi(t)\\rangle = \\sum_{n} A_n e^{ {-iE_n t}/\\hbar} |\\psi_{E_n}\\rangle , "
},
{
"math_id": 32,
"text": "A_n"
},
{
"math_id": 33,
"text": "|\\psi_{E_n}\\rangle"
},
{
"math_id": 34,
"text": "\\hat H |\\psi_{E_n}\\rangle = E_n |\\psi_{E_n}\\rangle"
},
{
"math_id": 35,
"text": " |\\Psi(t)\\rangle = e^{-i\\hat{H}t/\\hbar }|\\Psi(0)\\rangle."
},
{
"math_id": 36,
"text": "\\hat{U}(t) = e^{-i\\hat{H}t/\\hbar}"
},
{
"math_id": 37,
"text": "|\\Psi(0)\\rangle"
},
{
"math_id": 38,
"text": " |\\Psi(t)\\rangle = \\hat{U}(t) |\\Psi(0)\\rangle "
},
{
"math_id": 39,
"text": "\\hat{U}(t)"
},
{
"math_id": 40,
"text": "\\hat{U}(0)"
},
{
"math_id": 41,
"text": "\\hat{U}(t/N)^N = \\hat{U}(t)"
},
{
"math_id": 42,
"text": "N > 0"
},
{
"math_id": 43,
"text": "\\hat{U}(t) = e^{-i\\hat{G}t} "
},
{
"math_id": 44,
"text": "\\hat{G}"
},
{
"math_id": 45,
"text": "\\hat{U}(\\delta t) \\approx \\hat{U}(0)-i\\hat{G} \\delta t"
},
{
"math_id": 46,
"text": "\\hat{U}(\\delta t)^\\dagger \\hat{U}(\\delta t)\\approx(\\hat{U}(0)^\\dagger+i\\hat{G}^\\dagger \\delta t)(\\hat{U}(0)-i\\hat{G}\\delta t)=I+i\\delta t(\\hat{G}^\\dagger-\\hat{G})+O(\\delta t^2),"
},
{
"math_id": 47,
"text": "i\\hbar \\frac{d}{dt}|\\Psi(t)\\rangle = \\left(\\frac{1}{2m}\\hat{p}^2 + \\hat{V}\\right)|\\Psi(t)\\rangle."
},
{
"math_id": 48,
"text": "\\mathbf{r}"
},
{
"math_id": 49,
"text": "\\mathbf{p}"
},
{
"math_id": 50,
"text": "i\\hbar\\frac{\\partial}{\\partial t} \\Psi(\\mathbf{r},t) = - \\frac{\\hbar^2}{2m} \\nabla^2 \\Psi(\\mathbf{r},t) + V(\\mathbf{r}) \\Psi(\\mathbf{r},t)."
},
{
"math_id": 51,
"text": " i\\hbar \\frac{\\partial}{\\partial t} \\tilde{\\Psi}(\\mathbf{p}, t) = \\frac{\\mathbf{p}^2}{2m} \\tilde{\\Psi}(\\mathbf{p},t) + (2\\pi\\hbar)^{-3/2} \\int d^3 \\mathbf{p}' \\, \\tilde{V}(\\mathbf{p} - \\mathbf{p}') \\tilde{\\Psi}(\\mathbf{p}',t)."
},
{
"math_id": 52,
"text": "\\Psi(\\mathbf{r},t)"
},
{
"math_id": 53,
"text": "\\tilde{\\Psi}(\\mathbf{p},t)"
},
{
"math_id": 54,
"text": "\\Psi(\\mathbf{r},t) = \\langle \\mathbf{r} | \\Psi(t)\\rangle,"
},
{
"math_id": 55,
"text": "\\tilde{\\Psi}(\\mathbf{p},t) = \\langle \\mathbf{p} | \\Psi(t)\\rangle,"
},
{
"math_id": 56,
"text": "|\\mathbf{r}\\rangle"
},
{
"math_id": 57,
"text": "|\\mathbf{p}\\rangle"
},
{
"math_id": 58,
"text": "p"
},
{
"math_id": 59,
"text": "\\hat{x}"
},
{
"math_id": 60,
"text": "\\hat{p}"
},
{
"math_id": 61,
"text": "[\\hat{x}, \\hat{p}] = i\\hbar."
},
{
"math_id": 62,
"text": "\\langle x | \\hat{p} | \\Psi \\rangle = -i\\hbar \\frac{d}{dx} \\Psi(x),"
},
{
"math_id": 63,
"text": "-i\\hbar \\frac{d}{dx}"
},
{
"math_id": 64,
"text": "\\hat{p}^2"
},
{
"math_id": 65,
"text": "\\nabla^2"
},
{
"math_id": 66,
"text": "\\tilde{\\Psi}(p) "
},
{
"math_id": 67,
"text": "\\tilde{\\Psi}(p+K) "
},
{
"math_id": 68,
"text": "K "
},
{
"math_id": 69,
"text": "\\frac{\\partial}{\\partial t} \\rho\\left(\\mathbf{r},t\\right) + \\nabla \\cdot \\mathbf{j} = 0, "
},
{
"math_id": 70,
"text": " \\mathbf{j} = \\frac{1}{2m} \\left( \\Psi^*\\hat{\\mathbf{p}}\\Psi - \\Psi\\hat{\\mathbf{p}}\\Psi^* \\right) = -\\frac{i\\hbar}{2m}(\\psi^*\\nabla\\psi-\\psi\\nabla\\psi^*) = \\frac \\hbar m \\operatorname{Im} (\\psi^*\\nabla \\psi) "
},
{
"math_id": 71,
"text": "\\psi( {\\bf x},t)=\\sqrt{\\rho({\\bf x},t)}\\exp\\left(\\frac{i S({\\bf x},t)}{\\hbar}\\right), "
},
{
"math_id": 72,
"text": "S(\\mathbf x,t) "
},
{
"math_id": 73,
"text": " \\mathbf{j} = \\frac{\\rho \\nabla S} {m} "
},
{
"math_id": 74,
"text": " \\frac{ \\nabla S} {m} "
},
{
"math_id": 75,
"text": "i\\hbar\\frac{\\partial}{\\partial t} \\Psi(\\mathbf{r},t) = \\left [ - \\frac{\\hbar^2}{2m}\\nabla^2 + V(\\mathbf{r})\\right ] \\Psi(\\mathbf{r},t)."
},
{
"math_id": 76,
"text": "\\Psi(\\mathbf{r},t)=\\psi(\\mathbf{r})\\tau(t),"
},
{
"math_id": 77,
"text": "\\psi(\\mathbf{r})"
},
{
"math_id": 78,
"text": "\\tau(t)"
},
{
"math_id": 79,
"text": " \\Psi(\\mathbf{r},t) = \\psi(\\mathbf{r}) e^{-i{E t/\\hbar}}."
},
{
"math_id": 80,
"text": " \\nabla^2\\psi(\\mathbf{r}) + \\frac{2m}{\\hbar^2} \\left [E - V(\\mathbf{r})\\right ] \\psi(\\mathbf{r}) = 0."
},
{
"math_id": 81,
"text": "\\psi(\\mathbf{r}) = \\psi_x(x)\\psi_y(y)\\psi_z(z),"
},
{
"math_id": 82,
"text": "\\psi(\\mathbf{r}) = \\psi_r(r)\\psi_\\theta(\\theta)\\psi_\\phi(\\phi)."
},
{
"math_id": 83,
"text": " - \\frac {\\hbar ^2}{2m} \\frac {d ^2 \\psi}{dx^2} = E \\psi."
},
{
"math_id": 84,
"text": " \\hat{p}_x = -i\\hbar\\frac{d}{dx} "
},
{
"math_id": 85,
"text": " \\frac{1}{2m} \\hat{p}_x^2 = E,"
},
{
"math_id": 86,
"text": "\\psi"
},
{
"math_id": 87,
"text": " \\psi(x) = A e^{ikx} + B e ^{-ikx} \\qquad\\qquad E = \\frac{\\hbar^2 k^2}{2m}"
},
{
"math_id": 88,
"text": " \\psi(x) = C \\sin(kx) + D \\cos(kx)."
},
{
"math_id": 89,
"text": "C, D, "
},
{
"math_id": 90,
"text": "k"
},
{
"math_id": 91,
"text": "x=0"
},
{
"math_id": 92,
"text": "x=L"
},
{
"math_id": 93,
"text": "\\psi(0) = 0 = C\\sin(0) + D\\cos(0) = D"
},
{
"math_id": 94,
"text": "D=0"
},
{
"math_id": 95,
"text": " \\psi(L) = 0 = C\\sin(kL),"
},
{
"math_id": 96,
"text": "C"
},
{
"math_id": 97,
"text": "\\sin(kL)=0"
},
{
"math_id": 98,
"text": "kL"
},
{
"math_id": 99,
"text": "\\pi"
},
{
"math_id": 100,
"text": "k = \\frac{n\\pi}{L}\\qquad\\qquad n=1,2,3,\\ldots."
},
{
"math_id": 101,
"text": "E_n = \\frac{\\hbar^2 \\pi^2 n^2}{2mL^2} = \\frac{n^2h^2}{8mL^2}."
},
{
"math_id": 102,
"text": " E\\psi = -\\frac{\\hbar^2}{2m}\\frac{d^2}{d x^2}\\psi + \\frac{1}{2} m\\omega^2 x^2\\psi, "
},
{
"math_id": 103,
"text": " x "
},
{
"math_id": 104,
"text": " \\omega "
},
{
"math_id": 105,
"text": " \\psi_n(x) = \\sqrt{\\frac{1}{2^n\\,n!}} \\ \\left(\\frac{m\\omega}{\\pi \\hbar}\\right)^{1/4} \\ e^{\n- \\frac{m\\omega x^2}{2 \\hbar}} \\ \\mathcal{H}_n\\left(\\sqrt{\\frac{m\\omega}{\\hbar}} x \\right), "
},
{
"math_id": 106,
"text": "n \\in \\{0, 1, 2, \\ldots \\}"
},
{
"math_id": 107,
"text": " \\mathcal{H}_n "
},
{
"math_id": 108,
"text": " n "
},
{
"math_id": 109,
"text": "\\psi_n(x) = \\frac{1}{\\sqrt{n!}} \\left( \\sqrt{\\frac{m \\omega}{2 \\hbar}} \\right)^{n} \\left( x - \\frac{\\hbar}{m \\omega} \\frac{d}{dx}\\right)^n \\left( \\frac{m \\omega}{\\pi \\hbar} \\right)^{\\frac{1}{4}} e^{\\frac{-m \\omega x^2}{2\\hbar}}."
},
{
"math_id": 110,
"text": " E_n = \\left(n + \\frac{1}{2} \\right) \\hbar \\omega. "
},
{
"math_id": 111,
"text": " n = 0 "
},
{
"math_id": 112,
"text": " E \\psi = -\\frac{\\hbar^2}{2\\mu}\\nabla^2\\psi - \\frac{q^2}{4\\pi\\varepsilon_0 r}\\psi "
},
{
"math_id": 113,
"text": " q "
},
{
"math_id": 114,
"text": " \\mathbf{r} "
},
{
"math_id": 115,
"text": " r = |\\mathbf{r}| "
},
{
"math_id": 116,
"text": " \\varepsilon_0 "
},
{
"math_id": 117,
"text": " \\mu = \\frac{m_q m_p}{m_q+m_p} "
},
{
"math_id": 118,
"text": " m_p "
},
{
"math_id": 119,
"text": " m_q "
},
{
"math_id": 120,
"text": " \\psi(r,\\theta,\\varphi) = R(r)Y_\\ell^m(\\theta, \\varphi) = R(r)\\Theta(\\theta)\\Phi(\\varphi),"
},
{
"math_id": 121,
"text": " Y^m_l (\\theta, \\varphi) "
},
{
"math_id": 122,
"text": " \\ell "
},
{
"math_id": 123,
"text": " m "
},
{
"math_id": 124,
"text": " \\psi_{n\\ell m}(r,\\theta,\\varphi) = \\sqrt {\\left ( \\frac{2}{n a_0} \\right )^3\\frac{(n-\\ell-1)!}{2n[(n+\\ell)!]} } e^{- r/na_0} \\left(\\frac{2r}{na_0}\\right)^\\ell L_{n-\\ell-1}^{2\\ell+1}\\left(\\frac{2r}{na_0}\\right) \\cdot Y_{\\ell}^m(\\theta, \\varphi ) "
},
{
"math_id": 125,
"text": " a_0 = \\frac{4 \\pi \\varepsilon_0 \\hbar^2}{m_q q^2} "
},
{
"math_id": 126,
"text": " L_{n-\\ell-1}^{2\\ell+1}(\\cdots) "
},
{
"math_id": 127,
"text": " n - \\ell - 1 "
},
{
"math_id": 128,
"text": " n, \\ell, m "
},
{
"math_id": 129,
"text": "n = 1, 2, 3, \\dots,"
},
{
"math_id": 130,
"text": "\\ell = 0, 1, 2, \\dots, n - 1,"
},
{
"math_id": 131,
"text": "m = -\\ell, \\dots, \\ell."
},
{
"math_id": 132,
"text": "V"
},
{
"math_id": 133,
"text": "m\\frac{d}{dt}\\langle x\\rangle = \\langle p\\rangle;\\quad \\frac{d}{dt}\\langle p\\rangle = -\\left\\langle V'(X)\\right\\rangle."
},
{
"math_id": 134,
"text": "(\\langle X\\rangle, \\langle P\\rangle)"
},
{
"math_id": 135,
"text": "-V'\\left(\\left\\langle X\\right\\rangle\\right)"
},
{
"math_id": 136,
"text": "-\\left\\langle V'(X)\\right\\rangle"
},
{
"math_id": 137,
"text": "V'"
},
{
"math_id": 138,
"text": "x_0"
},
{
"math_id": 139,
"text": "V'\\left(\\left\\langle X\\right\\rangle\\right)"
},
{
"math_id": 140,
"text": "\\left\\langle V'(X)\\right\\rangle"
},
{
"math_id": 141,
"text": "V'(x_0)"
},
{
"math_id": 142,
"text": " i\\hbar \\frac{\\partial}{\\partial t} \\Psi\\left(\\mathbf{r},t\\right) = \\hat{H} \\Psi\\left(\\mathbf{r},t\\right)"
},
{
"math_id": 143,
"text": " -\\frac{\\partial}{\\partial t} S(q_i,t) = H\\left(q_i,\\frac{\\partial S}{\\partial q_i},t \\right) "
},
{
"math_id": 144,
"text": "S"
},
{
"math_id": 145,
"text": "H"
},
{
"math_id": 146,
"text": "q_i"
},
{
"math_id": 147,
"text": "i = 1, 2, 3"
},
{
"math_id": 148,
"text": "\\mathbf{r} = (q_1, q_2, q_3) = (x, y, z)"
},
{
"math_id": 149,
"text": " \\Psi = \\sqrt{\\rho(\\mathbf{r},t)} e^{iS(\\mathbf{r},t)/\\hbar}"
},
{
"math_id": 150,
"text": "\\rho"
},
{
"math_id": 151,
"text": "\\hbar \\to 0"
},
{
"math_id": 152,
"text": " \\hat{\\rho} = |\\Psi\\rangle\\langle \\Psi|."
},
{
"math_id": 153,
"text": " i \\hbar \\frac{\\partial \\hat{\\rho}}{\\partial t} = [\\hat{H}, \\hat{\\rho}],"
},
{
"math_id": 154,
"text": "\\hat{\\rho}(t) = e^{-i \\hat{H} t/\\hbar} \\hat{\\rho}(0) e^{i \\hat{H} t/\\hbar}."
},
{
"math_id": 155,
"text": " \\hat{\\rho}(t) = \\hat{U}(t) \\hat{\\rho}(0) \\hat{U}(t)^\\dagger."
},
{
"math_id": 156,
"text": "E^2 = (pc)^2 + \\left(m_0 c^2\\right)^2,"
},
{
"math_id": 157,
"text": " -\\frac {1}{c^2} \\frac{\\partial^2}{\\partial t^2} \\psi + \\nabla^2 \\psi = \\frac {m^2 c^2}{\\hbar^2} \\psi,"
},
{
"math_id": 158,
"text": "\\alpha_1,\\alpha_2,\\alpha_3,\\beta"
},
{
"math_id": 159,
"text": "\\left(\\beta mc^2 + c\\left(\\sum_{n \\mathop = 1}^{3}\\alpha_n p_n\\right)\\right) \\psi = i \\hbar \\frac{\\partial\\psi }{\\partial t}. "
},
{
"math_id": 160,
"text": "\\hat{H}_{\\text{Dirac}}= \\gamma^0 \\left[c \\boldsymbol{\\gamma}\\cdot\\left(\\hat{\\mathbf{p}} - q \\mathbf{A}\\right) + mc^2 + \\gamma^0q \\varphi \\right],"
},
{
"math_id": 161,
"text": "p = \\frac{h}{\\lambda} = \\hbar k,"
},
{
"math_id": 162,
"text": "h"
},
{
"math_id": 163,
"text": "\\hbar = {h}/{2\\pi}"
},
{
"math_id": 164,
"text": "L"
},
{
"math_id": 165,
"text": " L = n \\frac{h}{2\\pi} = n\\hbar."
},
{
"math_id": 166,
"text": "n \\lambda = 2 \\pi r."
},
{
"math_id": 167,
"text": "r"
},
{
"math_id": 168,
"text": "i\\hbar \\frac{\\partial}{\\partial t} \\Psi(\\mathbf{r}, t) = -\\frac{\\hbar^2}{2m} \\nabla^2 \\Psi(\\mathbf{r}, t) + V(\\mathbf{r})\\Psi(\\mathbf{r}, t)."
},
{
"math_id": 169,
"text": "\\left(E + \\frac{e^2}{r}\\right)^2 \\psi(x) = - \\nabla^2 \\psi(x) + m^2 \\psi(x)."
},
{
"math_id": 170,
"text": "\\Psi(\\mathbf{x}, t)"
},
{
"math_id": 171,
"text": "\\Psi \\frac{\\partial \\Psi^*}{\\partial t}"
}
] |
https://en.wikipedia.org/wiki?curid=59874
|
5987577
|
Hamiltonian (control theory)
|
Function used in optimal control theory
The Hamiltonian is a function used to solve a problem of optimal control for a dynamical system. It can be understood as an instantaneous increment of the Lagrangian expression of the problem that is to be optimized over a certain time period. Inspired by—but distinct from—the Hamiltonian of classical mechanics, the Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his maximum principle. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian.
Problem statement and definition of the Hamiltonian.
Consider a dynamical system of formula_0 first-order differential equations
formula_1
where formula_2 denotes a vector of state variables, and formula_3 a vector of control variables. Once initial conditions formula_4 and controls formula_5 are specified, a solution to the differential equations, called a "trajectory" formula_6, can be found. The problem of optimal control is to choose formula_5 (from some set formula_7) so that formula_8 maximizes or minimizes a certain objective function between an initial time formula_9 and a terminal time formula_10 (where formula_11 may be infinity). Specifically, the goal is to optimize over a performance index formula_12 defined at each point in time,
formula_13, with formula_14
subject to the above equations of motion of the state variables. The solution method involves defining an ancillary function known as the control Hamiltonian
formula_15
which combines the objective function and the state equations much like a Lagrangian in a static optimization problem, only that the multipliers formula_16—referred to as "costate variables"—are functions of time rather than constants.
The goal is to find an optimal control policy function formula_17 and, with it, an optimal trajectory of the state variable formula_18, which by Pontryagin's maximum principle are the arguments that maximize the Hamiltonian,
formula_19 for all formula_20
The first-order necessary conditions for a maximum are given by
formula_21 which is the maximum principle,
formula_22 which generates the state transition function formula_23,
formula_24 which generates the costate equations formula_25
Together, the state and costate equations describe the Hamiltonian dynamical system (again analogous to but distinct from the Hamiltonian system in physics), the solution of which involves a two-point boundary value problem, given that there are formula_26 boundary conditions involving two different points in time, the initial time (the formula_0 differential equations for the state variables), and the terminal time (the formula_0 differential equations for the costate variables; unless a final function is specified, the boundary conditions are formula_27, or formula_28 for infinite time horizons).
A sufficient condition for a maximum is the concavity of the Hamiltonian evaluated at the solution, i.e.
formula_29
where formula_17 is the optimal control, and formula_18 is the resulting optimal trajectory for the state variable. Alternatively, by a result due to Olvi L. Mangasarian, the necessary conditions are sufficient if the functions formula_12 and formula_30 are both concave in formula_8 and formula_5.
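The machinery above can be made concrete with a deliberately simple scalar example (chosen for this sketch, not taken from the sources cited here): maximize the integral of -(x^2 + u^2) over a fixed horizon subject to x' = u and x(0) = 1. The maximum principle reduces it to the two-point boundary value problem written in the comments below, which is solved with SciPy and compared against the known closed-form optimal state path.

import numpy as np
from scipy.integrate import solve_bvp

# Illustrative problem: maximize the integral from 0 to T of -(x^2 + u^2) dt
# subject to x' = u and x(0) = 1.
# Hamiltonian: H = -(x^2 + u^2) + lambda*u
#   dH/du = 0          gives the optimal control  u* = lambda / 2
#   x'      =  dH/dlambda = lambda / 2            (state equation)
#   lambda' = -dH/dx      = 2 x,  lambda(T) = 0   (costate equation + transversality)

T = 2.0

def odes(t, y):
    x, lam = y
    return np.vstack([lam / 2, 2 * x])

def bc(y0, yT):
    return np.array([y0[0] - 1.0, yT[1]])     # x(0) = 1 and lambda(T) = 0

t = np.linspace(0, T, 50)
sol = solve_bvp(odes, bc, t, np.zeros((2, t.size)))

x_exact = np.cosh(T - sol.x) / np.cosh(T)      # closed-form optimal state path
print(np.max(np.abs(sol.y[0] - x_exact)))      # small: the BVP solution matches
print(sol.y[1][-1])                            # lambda(T) is (numerically) zero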
Derivation from the Lagrangian.
A constrained optimization problem as the one stated above usually suggests a Lagrangian expression, specifically
formula_31
where formula_16 compares to the Lagrange multiplier in a static optimization problem but is now, as noted above, a function of time. In order to eliminate formula_32, the last term on the right-hand side can be rewritten using integration by parts, such that
formula_33
which can be substituted back into the Lagrangian expression to give
formula_34
To derive the first-order conditions for an optimum, assume that the solution has been found and the Lagrangian is maximized. Then any perturbation to formula_8 or formula_5 must cause the value of the Lagrangian to decline. Specifically, the total derivative of formula_35 obeys
formula_36
For this expression to equal zero necessitates the following optimality conditions:
formula_37
If both the initial value formula_38 and terminal value formula_39 are fixed, i.e. formula_40, no conditions on formula_41 and formula_42 are needed. If the terminal value is free, as is often the case, the additional condition formula_27 is necessary for optimality. The latter is called a transversality condition for a fixed horizon problem.
It can be seen that the necessary conditions are identical to the ones stated above for the Hamiltonian. Thus the Hamiltonian can be understood as a device to generate the first-order necessary conditions.
The Hamiltonian in discrete time.
When the problem is formulated in discrete time, the Hamiltonian is defined as:
formula_43
and the costate equations are
formula_44
(Note that the discrete time Hamiltonian at time formula_45 involves the costate variable at time formula_46 This small detail is essential so that when we differentiate with respect to formula_47 we get a term involving formula_48 on the right hand side of the costate equations. Using a wrong convention here can lead to incorrect results, i.e. a costate equation which is not a backwards difference equation).
Behavior of the Hamiltonian over time.
From Pontryagin's maximum principle, special conditions for the Hamiltonian can be derived. When the final time formula_49 is fixed and the Hamiltonian does not depend explicitly on time formula_50, then:
formula_51
or if the terminal time is free, then:
formula_52
Further, if the terminal time tends to infinity, a transversality condition on the Hamiltonian applies.
formula_53
The Hamiltonian of control compared to the Hamiltonian of mechanics.
William Rowan Hamilton defined the Hamiltonian for describing the mechanics of a system. It is a function of three variables and related to the Lagrangian as
formula_54
where formula_35 is the Lagrangian, the extremizing of which determines the dynamics ("not" the Lagrangian defined above) and formula_55 is the state variable. The Lagrangian is evaluated with formula_56 representing the time derivative of the state's evolution and formula_57, the so-called "conjugate momentum", relates to it as
formula_58.
Hamilton then formulated his equations to describe the dynamics of the system as
formula_59
formula_60
The Hamiltonian of control theory describes not the "dynamics" of a system but conditions for extremizing some scalar function thereof (the Lagrangian) with respect to a control variable formula_61. As normally defined, it is a function of 4 variables
formula_62
where formula_55 is the state variable and formula_61 is the control variable with respect to that which we are extremizing.
The associated conditions for a maximum are
formula_63
formula_64
formula_65
This definition agrees with that given in the article by Sussmann and Willems (see p. 39, equation 14). Sussmann and Willems show how the control Hamiltonian can be used in dynamics, e.g. for the brachistochrone problem, but do not mention the prior work of Carathéodory on this approach.
Current value and present value Hamiltonian.
In economics, the objective function in dynamic optimization problems often depends directly on time only through exponential discounting, such that it takes the form
formula_66
where formula_67 is referred to as the instantaneous utility function, or felicity function. This allows a redefinition of the Hamiltonian as formula_68 where
formula_69
which is referred to as the current value Hamiltonian, in contrast to the present value Hamiltonian formula_70 defined in the first section. Most notably the costate variables are redefined as formula_71, which leads to modified first-order conditions.
formula_72,
formula_73
which follows immediately from the product rule. Economically, formula_74 represent current-valued shadow prices for the capital goods formula_8.
Example: Ramsey–Cass–Koopmans model.
In economics, the Ramsey–Cass–Koopmans model is used to determine an optimal savings behavior for an economy. The objective function formula_75 is the social welfare function,
formula_76
to be maximized by choice of an optimal consumption path formula_77. The function formula_78 indicates the utility the representative agent derives from consuming formula_79 at any given point in time. The factor formula_80 represents discounting. The maximization problem is subject to the following differential equation for capital intensity, describing the time evolution of capital per effective worker:
formula_81
where formula_77 is period t consumption, formula_82 is period t capital per worker (with formula_83), formula_84 is period t production, formula_0 is the population growth rate, formula_85 is the capital depreciation rate, the agent discounts future utility at rate formula_86, with formula_87 and formula_88.
Here, formula_82 is the state variable which evolves according to the above equation, and formula_77 is the control variable. The Hamiltonian becomes
formula_89
The optimality conditions are
formula_90
formula_91
in addition to the transversality condition formula_92. If we let formula_93, then log-differentiating the first optimality condition with respect to formula_45 yields
formula_94
Inserting this equation into the second optimality condition yields
formula_95
which is known as the Keynes–Ramsey rule; it gives a condition for consumption in every period which, if followed, ensures maximum lifetime utility.
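A numerical sketch of the associated saddle-path dynamics, under assumed functional forms that are not specified in the text above (logarithmic utility and a Cobb–Douglas technology f(k) = k^alpha, for which the Keynes–Ramsey rule becomes c'/c = f'(k) - (n + delta) - rho): a bisection search over initial consumption finds the path that approaches the steady state.

import numpy as np

# Assumed functional forms (illustrative only): u(c) = ln(c), f(k) = k**alpha.
alpha, n, delta, rho = 0.3, 0.01, 0.05, 0.03

k_star = (alpha / (n + delta + rho)) ** (1 / (1 - alpha))   # f'(k*) = n + delta + rho
c_star = k_star**alpha - (n + delta) * k_star
print("steady state (k*, c*):", k_star, c_star)

def simulate(c0, k0, dt=0.02, T=200.0):
    """Euler-integrate the capital and Keynes-Ramsey equations and classify the path."""
    k, c = k0, c0
    for _ in range(int(T / dt)):
        if k <= 1e-6:
            return "too high"      # consumption exhausts the capital stock
        if k >= k_star and c <= c_star:
            return "too low"       # capital overshoots the steady state
        dk = k**alpha - (n + delta) * k - c
        dc = c * (alpha * k ** (alpha - 1) - (n + delta) - rho)
        k, c = k + dt * dk, c + dt * dc
    return "converged"

# Shooting: bisect on initial consumption until the path heads toward (k*, c*).
k0 = 0.5 * k_star
lo, hi = 1e-6, k0**alpha
for _ in range(40):
    c0 = 0.5 * (lo + hi)
    if simulate(c0, k0) == "too high":
        hi = c0
    else:
        lo = c0
print("saddle-path consumption at k0:", 0.5 * (lo + hi))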
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "n"
},
{
"math_id": 1,
"text": "\\dot{\\mathbf{x}}(t) = \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t)"
},
{
"math_id": 2,
"text": "\\mathbf{x}(t) = \\left[ x_{1}(t), x_{2}(t), \\ldots, x_{n}(t) \\right]^{\\mathsf{T}}"
},
{
"math_id": 3,
"text": "\\mathbf{u}(t) = \\left[ u_{1}(t), u_{2}(t), \\ldots, u_{r}(t) \\right]^{\\mathsf{T}}"
},
{
"math_id": 4,
"text": "\\mathbf{x}(t_{0}) = \\mathbf{x}_{0}"
},
{
"math_id": 5,
"text": "\\mathbf{u}(t)"
},
{
"math_id": 6,
"text": "\\mathbf{x}(t; \\mathbf{x}_{0}, t_{0})"
},
{
"math_id": 7,
"text": "\\mathcal{U} \\subseteq \\mathbb{R}^{r}"
},
{
"math_id": 8,
"text": "\\mathbf{x}(t)"
},
{
"math_id": 9,
"text": "t = t_{0}"
},
{
"math_id": 10,
"text": "t = t_{1}"
},
{
"math_id": 11,
"text": "t_{1}"
},
{
"math_id": 12,
"text": "I(\\mathbf{x}(t),\\mathbf{u}(t),t)"
},
{
"math_id": 13,
"text": "\\max_{\\mathbf{u}(t)} J"
},
{
"math_id": 14,
"text": "J = \\int_{t_{0}}^{t_{1}} I[\\mathbf{x}(t),\\mathbf{u}(t),t] \\, \\mathrm{d}t"
},
{
"math_id": 15,
"text": "H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t) \\equiv I(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t)"
},
{
"math_id": 16,
"text": "\\mathbf{\\lambda}(t)"
},
{
"math_id": 17,
"text": "\\mathbf{u}^\\ast(t)"
},
{
"math_id": 18,
"text": "\\mathbf{x}^\\ast(t)"
},
{
"math_id": 19,
"text": "H(\\mathbf{x}^\\ast(t),\\mathbf{u}^\\ast(t),\\mathbf{\\lambda}(t),t) \\geq H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t)"
},
{
"math_id": 20,
"text": "\\mathbf{u}(t) \\in \\mathcal{U}"
},
{
"math_id": 21,
"text": "\\frac{\\partial H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t)}{\\partial \\mathbf{u}} = 0 \\quad"
},
{
"math_id": 22,
"text": "\\frac{\\partial H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t)}{\\partial \\mathbf{\\lambda}} = \\dot{\\mathbf{x}}(t) \\quad"
},
{
"math_id": 23,
"text": "\\, \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t) = \\dot{\\mathbf{x}}(t)"
},
{
"math_id": 24,
"text": "\\frac{\\partial H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t)}{\\partial \\mathbf{x}} = - \\dot{\\mathbf{\\lambda}}(t) \\quad"
},
{
"math_id": 25,
"text": "\\, \\dot{\\mathbf{\\lambda}}(t) = - \\left[ I_{\\mathbf{x}}(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}_{\\mathbf{x}}(\\mathbf{x}(t),\\mathbf{u}(t),t) \\right]"
},
{
"math_id": 26,
"text": "2n"
},
{
"math_id": 27,
"text": "\\mathbf{\\lambda}(t_{1}) = 0"
},
{
"math_id": 28,
"text": "\\lim_{t_{1} \\to \\infty} \\mathbf{\\lambda}(t_{1}) = 0"
},
{
"math_id": 29,
"text": "H_{\\mathbf{uu}}(\\mathbf{x}^\\ast(t),\\mathbf{u}^\\ast(t),\\mathbf{\\lambda}(t),t) \\leq 0"
},
{
"math_id": 30,
"text": "\\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t)"
},
{
"math_id": 31,
"text": "L = \\int_{t_{0}}^{t_{1}} I(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\left[ \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t) - \\dot{\\mathbf{x}}(t) \\right] \\, \\mathrm{d}t"
},
{
"math_id": 32,
"text": "\\dot{\\mathbf{x}}(t)"
},
{
"math_id": 33,
"text": "- \\int_{t_{0}}^{t_{1}} \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\dot{\\mathbf{x}}(t) \\, \\mathrm{d}t = -\\mathbf{\\lambda}^{\\mathsf{T}}(t_{1}) \\mathbf{x}(t_{1}) + \\mathbf{\\lambda}^{\\mathsf{T}}(t_{0}) \\mathbf{x}(t_{0}) + \\int_{t_{0}}^{t_{1}} \\dot{\\mathbf{\\lambda}}^{\\mathsf{T}}(t) \\mathbf{x}(t) \\, \\mathrm{d}t "
},
{
"math_id": 34,
"text": "L = \\int_{t_{0}}^{t_{1}} \\left[ I(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\dot{\\mathbf{\\lambda}}^{\\mathsf{T}}(t) \\mathbf{x}(t) \\right] \\, \\mathrm{d}t - \\mathbf{\\lambda}^{\\mathsf{T}}(t_{1}) \\mathbf{x}(t_{1}) + \\mathbf{\\lambda}^{\\mathsf{T}}(t_{0}) \\mathbf{x}(t_{0}) "
},
{
"math_id": 35,
"text": "L"
},
{
"math_id": 36,
"text": "\\mathrm{d}L = \\int_{t_{0}}^{t_{1}} \\left[ \\left( I_{\\mathbf{u}}(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}_{\\mathbf{u}}(\\mathbf{x}(t),\\mathbf{u}(t),t) \\right) \\mathrm{d}\\mathbf{u}(t) + \\left( I_{\\mathbf{x}}(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}_{\\mathbf{x}}(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\dot{\\mathbf{\\lambda}}(t) \\right) \\mathrm{d}\\mathbf{x}(t) \\right] \\mathrm{d}t - \\mathbf{\\lambda}^{\\mathsf{T}}(t_{1}) \\mathrm{d}\\mathbf{x}(t_{1}) + \\mathbf{\\lambda}^{\\mathsf{T}}(t_{0}) \\mathrm{d}\\mathbf{x}(t_{0}) \\leq 0"
},
{
"math_id": 37,
"text": "\\begin{align}\n\\underbrace{I_{\\mathbf{u}}(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}_{\\mathbf{u}}(\\mathbf{x}(t),\\mathbf{u}(t),t)}_{= \\frac{\\partial H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t)}{\\partial \\mathbf{u}}} &= 0 \\\\\n\\underbrace{I_{\\mathbf{x}}(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}_{\\mathbf{x}}(\\mathbf{x}(t),\\mathbf{u}(t),t)}_{= \\frac{\\partial H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t)}{\\partial \\mathbf{x}}} + \\dot{\\mathbf{\\lambda}}(t) &= 0\n\\end{align}"
},
{
"math_id": 38,
"text": "\\mathbf{x}(t_{0})"
},
{
"math_id": 39,
"text": "\\mathbf{x}(t_{1})"
},
{
"math_id": 40,
"text": "\\mathrm{d}\\mathbf{x}(t_{0}) = \\mathrm{d}\\mathbf{x}(t_{1}) = 0"
},
{
"math_id": 41,
"text": "\\mathbf{\\lambda}(t_{0})"
},
{
"math_id": 42,
"text": "\\mathbf{\\lambda}(t_{1})"
},
{
"math_id": 43,
"text": "\nH(x_{t},u_{t},\\lambda_{t+1},t)=\\lambda^\\top_{t+1}f(x_{t},u_{t},t)+I(x_{t},u_{t},t) \\,\n"
},
{
"math_id": 44,
"text": "\n\\lambda_{t} =\\frac{\\partial H}{\\partial x_{t}}\n"
},
{
"math_id": 45,
"text": "t"
},
{
"math_id": 46,
"text": "t+1."
},
{
"math_id": 47,
"text": "x"
},
{
"math_id": 48,
"text": "\\lambda_{t+1}"
},
{
"math_id": 49,
"text": "t_1"
},
{
"math_id": 50,
"text": "\\left(\\tfrac{\\partial H}{\\partial t} = 0\\right)"
},
{
"math_id": 51,
"text": "H(x^*(t),u^*(t),\\lambda^*(t)) = \\mathrm{constant}\\,"
},
{
"math_id": 52,
"text": "H(x^*(t),u^*(t),\\lambda^*(t)) = 0.\\,"
},
{
"math_id": 53,
"text": "\\lim_{t \\to \\infty} H(t) = 0"
},
{
"math_id": 54,
"text": "\\mathcal{H}(p,q,t) = \\langle p,\\dot{q} \\rangle -L(q,\\dot{q},t)"
},
{
"math_id": 55,
"text": "q"
},
{
"math_id": 56,
"text": "\\dot{q}"
},
{
"math_id": 57,
"text": "p"
},
{
"math_id": 58,
"text": "p = \\frac{\\partial L}{\\partial \\dot{q}}"
},
{
"math_id": 59,
"text": "\\frac{ d}{ dt}p(t) = -\\frac{\\partial}{\\partial q}\\mathcal{H}"
},
{
"math_id": 60,
"text": "\\frac{ d}{ dt}q(t) =~~\\frac{\\partial}{\\partial p}\\mathcal{H}"
},
{
"math_id": 61,
"text": "u"
},
{
"math_id": 62,
"text": "H(q,u,p,t)= \\langle p,\\dot{q} \\rangle -L(q,u,t)"
},
{
"math_id": 63,
"text": "\\frac{dp}{dt} = -\\frac{\\partial H}{\\partial q}"
},
{
"math_id": 64,
"text": "\\frac{dq}{dt} = ~~\\frac{\\partial H}{\\partial p}"
},
{
"math_id": 65,
"text": "\\frac{\\partial H}{\\partial u} = 0"
},
{
"math_id": 66,
"text": "I(\\mathbf{x}(t),\\mathbf{u}(t),t) = e^{-\\rho t} \\nu(\\mathbf{x}(t),\\mathbf{u}(t))"
},
{
"math_id": 67,
"text": "\\nu(\\mathbf{x}(t),\\mathbf{u}(t))"
},
{
"math_id": 68,
"text": "H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t) = e^{-\\rho t} \\bar{H}(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t))"
},
{
"math_id": 69,
"text": "\\begin{align}\n\\bar{H}(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t)) \\equiv& \\, e^{\\rho t} \\left[ I(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\lambda}^{\\mathsf{T}}(t) \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t) \\right] \\\\\n=& \\, \\nu(\\mathbf{x}(t),\\mathbf{u}(t),t) + \\mathbf{\\mu}^{\\mathsf{T}}(t) \\mathbf{f}(\\mathbf{x}(t),\\mathbf{u}(t),t)\n\\end{align}"
},
{
"math_id": 70,
"text": "H(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t),t)"
},
{
"math_id": 71,
"text": "\\mathbf{\\mu}(t) = e^{\\rho t} \\mathbf{\\lambda}(t)"
},
{
"math_id": 72,
"text": "\\frac{\\partial \\bar{H}(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t))}{\\partial \\mathbf{u}} = 0"
},
{
"math_id": 73,
"text": "\\frac{\\partial \\bar{H}(\\mathbf{x}(t),\\mathbf{u}(t),\\mathbf{\\lambda}(t))}{\\partial \\mathbf{x}} = - \\dot{\\mathbf{\\mu}}(t) + \\rho \\mathbf{\\mu}(t)"
},
{
"math_id": 74,
"text": "\\mathbf{\\mu}(t)"
},
{
"math_id": 75,
"text": "J(c)"
},
{
"math_id": 76,
"text": "J(c) = \\int^T_0 e^{-\\rho t}u(c(t)) dt"
},
{
"math_id": 77,
"text": "c(t)"
},
{
"math_id": 78,
"text": "u(c(t))"
},
{
"math_id": 79,
"text": "c"
},
{
"math_id": 80,
"text": "e^{-\\rho t}"
},
{
"math_id": 81,
"text": "\\dot{k}=\\frac{\\partial k}{\\partial t} =f(k(t)) - (n + \\delta)k(t) - c(t)"
},
{
"math_id": 82,
"text": "k(t)"
},
{
"math_id": 83,
"text": "k(0) = k_{0} > 0"
},
{
"math_id": 84,
"text": "f(k(t))"
},
{
"math_id": 85,
"text": "\\delta"
},
{
"math_id": 86,
"text": "\\rho"
},
{
"math_id": 87,
"text": "u'>0"
},
{
"math_id": 88,
"text": "u''<0"
},
{
"math_id": 89,
"text": "H(k,c,\\mu,t)=e^{-\\rho t}u(c(t))+\\mu(t)\\dot{k}=e^{-\\rho t}u(c(t))+\\mu(t)[f(k(t)) - (n + \\delta)k(t) - c(t)]"
},
{
"math_id": 90,
"text": "\\frac{\\partial H}{\\partial c}=0 \\Rightarrow\ne^{-\\rho t}u'(c)=\\mu(t)"
},
{
"math_id": 91,
"text": "\\frac{\\partial H}{\\partial k}=-\\frac{\\partial \\mu}{\\partial t}=-\\dot{\\mu} \\Rightarrow \\mu(t)[f'(k)-(n+\\delta)]=-\\dot{\\mu}"
},
{
"math_id": 92,
"text": "\\mu(T)k(T)=0"
},
{
"math_id": 93,
"text": "u(c)=\\log(c)"
},
{
"math_id": 94,
"text": "-\\rho-\\frac{\\dot{c}}{c(t)}=\\frac{\\dot{\\mu}}{\\mu(t)}"
},
{
"math_id": 95,
"text": "\\rho+\\frac{\\dot{c}}{c(t)}=f'(k)-(n+\\delta)"
}
] |
https://en.wikipedia.org/wiki?curid=5987577
|
5987648
|
Uncertainty quantification
|
Characterization and reduction of uncertainties in both computational and real world applications
Uncertainty quantification (UQ) is the science of quantitative characterization and estimation of uncertainties in both computational and real world applications. It tries to determine how likely certain outcomes are if some aspects of the system are not exactly known. An example would be to predict the acceleration of a human body in a head-on crash with another car: even if the speed was exactly known, small differences in the manufacturing of individual cars, how tightly every bolt has been tightened, etc., will lead to different results that can only be predicted in a statistical sense.
Many problems in the natural sciences and engineering are also rife with sources of uncertainty. Computer experiments on computer simulations are the most common approach to study problems in uncertainty quantification.
Sources.
Uncertainty can enter mathematical models and experimental measurements in various contexts. One way to categorize the sources of uncertainty is to consider:
Aleatoric and epistemic.
Uncertainty is sometimes classified into two categories, aleatoric and epistemic, a distinction prominently seen in medical applications.
In real life applications, both kinds of uncertainties are present. Uncertainty quantification intends to explicitly express both types of uncertainty separately. The quantification of aleatoric uncertainties can be relatively straightforward, where traditional (frequentist) probability is the most basic form. Techniques such as the Monte Carlo method are frequently used. A probability distribution can be represented by its moments (in the Gaussian case, the mean and covariance suffice, although, in general, even knowledge of all moments to arbitrarily high order still does not specify the distribution function uniquely), or more recently, by techniques such as Karhunen–Loève and polynomial chaos expansions. To evaluate epistemic uncertainties, efforts are made to understand the (lack of) knowledge of the system, process or mechanism. Epistemic uncertainty is generally understood through the lens of Bayesian probability, where probabilities are interpreted as indicating how certain a rational person could be regarding a specific claim.
Mathematical perspective.
In mathematics, uncertainty is often characterized in terms of a probability distribution. From that perspective, epistemic uncertainty means not being certain what the relevant probability distribution is, and aleatoric uncertainty means not being certain what a random sample drawn from a probability distribution will be.
Types of problems.
There are two major types of problems in uncertainty quantification: one is the forward propagation of uncertainty (where the various sources of uncertainty are propagated through the model to predict the overall uncertainty in the system response) and the other is the inverse assessment of model uncertainty and parameter uncertainty (where the model parameters are calibrated simultaneously using test data). There has been a proliferation of research on the former problem and a majority of uncertainty analysis techniques were developed for it. On the other hand, the latter problem is drawing increasing attention in the engineering design community, since uncertainty quantification of a model and the subsequent predictions of the true system response(s) are of great interest in designing robust systems.
Forward.
Uncertainty propagation is the quantification of uncertainties in system output(s) propagated from uncertain inputs. It focuses on the influence on the outputs from the "parametric variability" listed in the sources of uncertainty. The targets of uncertainty propagation analysis can be:
Inverse.
Given some experimental measurements of a system and some computer simulation results from its mathematical model, inverse uncertainty quantification estimates the discrepancy between the experiment and the mathematical model (which is called bias correction), and estimates the values of unknown parameters in the model if there are any (which is called parameter calibration or simply calibration). Generally this is a much more difficult problem than forward uncertainty propagation; however it is of great importance since it is typically implemented in a model updating process. There are several scenarios in inverse uncertainty quantification:
Bias correction only.
Bias correction quantifies the "model inadequacy", i.e. the discrepancy between the experiment and the mathematical model. The general model updating formula for bias correction is:
formula_0
where formula_1 denotes the experimental measurements as a function of several input variables formula_2, formula_3 denotes the computer model (mathematical model) response, formula_4 denotes the additive discrepancy function (aka bias function), and formula_5 denotes the experimental uncertainty. The objective is to estimate the discrepancy function formula_4, and as a by-product, the resulting updated model is formula_6. A prediction confidence interval is provided with the updated model as the quantification of the uncertainty.
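As a minimal sketch of the bias-correction idea (not the procedure of any particular reference), one can fit a flexible regressor to the residuals between the experimental measurements and the computer model response and use it as the discrepancy function. The simulator, the data, and the polynomial form below are all hypothetical placeholders.

```python
import numpy as np

# Hypothetical computer model y^m(x) and "experimental" data y^e(x), for illustration only.
def y_model(x):
    return np.sin(x)

rng = np.random.default_rng(0)
x_obs = np.linspace(0.0, 3.0, 20)                       # experimental input settings
y_exp = np.sin(x_obs) + 0.3 * x_obs + rng.normal(0.0, 0.05, x_obs.size)

# Bias correction: fit the discrepancy delta(x) = y^e(x) - y^m(x) with a simple surrogate.
residual = y_exp - y_model(x_obs)
delta = np.poly1d(np.polyfit(x_obs, residual, deg=2))   # quadratic stand-in for delta(x)

# Updated model: y^m(x) + delta(x).
x_new = 1.7
print("updated prediction at x =", x_new, ":", y_model(x_new) + delta(x_new))
```

In practice the discrepancy is usually modelled with a Gaussian process rather than a fixed-degree polynomial, so that a prediction confidence interval can be attached to the updated model.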
Parameter calibration only.
Parameter calibration estimates the values of one or more unknown parameters in a mathematical model. The general model updating formulation for calibration is:
formula_7
where formula_8 denotes the computer model response that depends on several unknown model parameters formula_9, and formula_10 denotes the true values of the unknown parameters in the course of experiments. The objective is to either estimate formula_10, or to come up with a probability distribution of formula_10 that encompasses the best knowledge of the true parameter values.
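A minimal least-squares calibration sketch, assuming a toy simulator with a single unknown parameter; the model form, the data, and the starting value are hypothetical stand-ins for the formulation above.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical computer model y^m(x, theta) with one unknown parameter.
def y_model(x, theta):
    return theta * x**2

rng = np.random.default_rng(1)
x_obs = np.linspace(0.0, 2.0, 15)
theta_true = 1.8                                         # unknown in a real study
y_exp = y_model(x_obs, theta_true) + rng.normal(0.0, 0.05, x_obs.size)

# Calibration: choose theta minimizing the squared discrepancy between y^e and y^m.
def misfit(theta):
    return np.sum((y_exp - y_model(x_obs, theta[0]))**2)

result = minimize(misfit, x0=[1.0])
print("calibrated theta:", result.x[0])                  # close to theta_true
```

A Bayesian treatment would instead return a probability distribution for the parameter, as discussed below.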
Bias correction and parameter calibration.
It considers an inaccurate model with one or more unknown parameters, and its model updating formulation combines the two together:
formula_11
It is the most comprehensive model updating formulation that includes all possible sources of uncertainty, and it requires the most effort to solve.
Selective methodologies.
Much research has been done to solve uncertainty quantification problems, though a majority of them deal with uncertainty propagation. During the past one to two decades, a number of approaches for inverse uncertainty quantification problems have also been developed and have proved to be useful for most small- to medium-scale problems.
Forward propagation.
Existing uncertainty propagation approaches include probabilistic approaches and non-probabilistic approaches. There are basically six categories of probabilistic approaches for uncertainty propagation:
For non-probabilistic approaches, interval analysis, fuzzy theory, possibility theory and evidence theory are among the most widely used.
The probabilistic approach is considered as the most rigorous approach to uncertainty analysis in engineering design due to its consistency with the theory of decision analysis. Its cornerstone is the calculation of probability density functions for sampling statistics. This can be performed rigorously for random variables that are obtainable as transformations of Gaussian variables, leading to exact confidence intervals.
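A minimal sketch of probabilistic forward propagation by Monte Carlo sampling, the simplest of the probabilistic approaches mentioned above; the response function and the assumed input distributions are placeholders.

```python
import numpy as np

# Hypothetical model mapping two uncertain inputs to a scalar response.
def model(x1, x2):
    return x1**2 + np.sin(x2)

rng = np.random.default_rng(42)
n_samples = 100_000

# Assumed parametric input uncertainty (illustrative distributions only).
x1 = rng.normal(loc=1.0, scale=0.1, size=n_samples)
x2 = rng.uniform(low=0.0, high=np.pi, size=n_samples)

y = model(x1, x2)                                  # propagate the samples through the model
print("mean of response:   ", y.mean())
print("std dev of response:", y.std(ddof=1))
print("95% interval:       ", np.percentile(y, [2.5, 97.5]))
```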
Inverse uncertainty.
Frequentist.
In regression analysis and least squares problems, the standard error of parameter estimates is readily available, which can be expanded into a confidence interval.
Bayesian.
Several methodologies for inverse uncertainty quantification exist under the Bayesian framework. The most complicated direction is to aim at solving problems with both bias correction and parameter calibration. The challenges of such problems include not only the influences from model inadequacy and parameter uncertainty, but also the lack of data from both computer simulations and experiments. A common situation is that the input settings are not the same over experiments and simulations. Another common situation is that parameters derived from experiments are input to simulations. For computationally expensive simulations, a surrogate model, e.g. a Gaussian process or a polynomial chaos expansion, is often necessary; this defines an inverse problem of its own, namely finding the surrogate model that best approximates the simulations.
Modular approach.
An approach to inverse uncertainty quantification is the modular Bayesian approach. The modular Bayesian approach derives its name from its four-module procedure. In addition to the currently available data, a prior distribution for the unknown parameters must be assigned.
To address the issue from lack of simulation results, the computer model is replaced with a Gaussian process (GP) model
formula_12
where
formula_13
formula_14 is the dimension of input variables, and formula_15 is the dimension of unknown parameters. While formula_16 is pre-defined, formula_17, known as "hyperparameters" of the GP model, need to be estimated via maximum likelihood estimation (MLE). This module can be considered as a generalized kriging method.
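The product correlation function above can be written down directly; the sketch below uses fixed placeholder values for the roughness parameters ω, which in the modular approach would instead be estimated by MLE together with the other hyperparameters.

```python
import numpy as np

def correlation(x, theta, x2, theta2, omega_x, omega_t):
    """Squared-exponential product correlation R^m((x, theta), (x', theta'))."""
    term_x = np.sum(omega_x * (np.asarray(x) - np.asarray(x2)) ** 2)
    term_t = np.sum(omega_t * (np.asarray(theta) - np.asarray(theta2)) ** 2)
    return np.exp(-term_x) * np.exp(-term_t)

# Placeholder roughness parameters; d = 2 input dimensions, r = 1 unknown parameter.
omega_x = np.array([1.0, 0.5])
omega_t = np.array([2.0])

print(correlation([0.1, 0.2], [1.0], [0.15, 0.25], [1.1], omega_x, omega_t))
```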
As in the first module, the discrepancy function is replaced with a GP model
formula_18
where
formula_19
Together with the prior distribution of unknown parameters, and data from both computer models and experiments, one can derive the maximum likelihood estimates for formula_20. At the same time, formula_21 from Module 1 gets updated as well.
Bayes' theorem is applied to calculate the posterior distribution of the unknown parameters:
formula_22
where formula_23 includes all the fixed hyperparameters in previous modules.
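A minimal random-walk Metropolis sketch of this posterior update; the Gaussian likelihood and prior below are illustrative stand-ins for the likelihood and prior of the modular approach, not its GP-based likelihood.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=30)        # stand-in "experimental" data

def log_likelihood(theta):
    return -0.5 * np.sum((data - theta) ** 2)         # Gaussian likelihood, unit variance

def log_prior(theta):
    return -0.5 * theta**2 / 10.0                     # broad Gaussian prior on theta

# Random-walk Metropolis sampling of p(theta | data), proportional to likelihood * prior.
theta, samples = 0.0, []
for _ in range(20_000):
    proposal = theta + rng.normal(0.0, 0.3)
    log_alpha = (log_likelihood(proposal) + log_prior(proposal)
                 - log_likelihood(theta) - log_prior(theta))
    if np.log(rng.uniform()) < log_alpha:
        theta = proposal
    samples.append(theta)

print("posterior mean of theta:", np.mean(samples[5_000:]))   # discard burn-in
```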
Full approach.
The fully Bayesian approach requires that priors be assigned not only for the unknown parameters formula_9 but also for the other hyperparameters formula_23. It proceeds in the following steps:
However, the approach has significant drawbacks:
The fully Bayesian approach requires a huge amount of calculations and may not yet be practical for dealing with the most complicated modelling situations.
Known issues.
The theories and methodologies for uncertainty propagation are much better established, compared with inverse uncertainty quantification. For the latter, several difficulties remain unsolved:
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " y^e(\\mathbf{x})=y^m(\\mathbf{x})+\\delta(\\mathbf{x})+\\varepsilon "
},
{
"math_id": 1,
"text": " y^e(\\mathbf{x}) "
},
{
"math_id": 2,
"text": " \\mathbf{x} "
},
{
"math_id": 3,
"text": " y^m(\\mathbf{x}) "
},
{
"math_id": 4,
"text": " \\delta(\\mathbf{x}) "
},
{
"math_id": 5,
"text": " \\varepsilon "
},
{
"math_id": 6,
"text": " y^m(\\mathbf{x})+\\delta(\\mathbf{x}) "
},
{
"math_id": 7,
"text": " y^e(\\mathbf{x})=y^m(\\mathbf{x},\\boldsymbol{\\theta}^*)+\\varepsilon "
},
{
"math_id": 8,
"text": " y^m(\\mathbf{x},\\boldsymbol{\\theta}) "
},
{
"math_id": 9,
"text": " \\boldsymbol{\\theta} "
},
{
"math_id": 10,
"text": " \\boldsymbol{\\theta}^* "
},
{
"math_id": 11,
"text": " y^e(\\mathbf{x})=y^m(\\mathbf{x},\\boldsymbol{\\theta}^*)+\\delta(\\mathbf{x})+\\varepsilon "
},
{
"math_id": 12,
"text": " y^m(\\mathbf{x},\\boldsymbol{\\theta})\\sim\\mathcal{GP}\\big(\\mathbf{h}^m(\\cdot)^T\\boldsymbol{\\beta}^m,\\sigma_m^2R^m(\\cdot,\\cdot)\\big) "
},
{
"math_id": 13,
"text": " R^m\\big((\\mathbf{x},\\boldsymbol{\\theta}),(\\mathbf{x}',\\boldsymbol{\\theta}')\\big)=\\exp\\left\\{-\\sum_{k=1}^d \\omega_k^m(x_k-x_k')^2\\right\\}\\exp\\left\\{-\\sum_{k=1}^r \\omega_{d+k}^m(\\theta_k-\\theta_k')^2 \\right\\}. "
},
{
"math_id": 14,
"text": " d "
},
{
"math_id": 15,
"text": " r "
},
{
"math_id": 16,
"text": " \\mathbf{h}^m(\\cdot) "
},
{
"math_id": 17,
"text": " \\left\\{\\boldsymbol{\\beta}^m, \\sigma_m, \\omega_k^m, k=1,\\ldots,d+r\\right\\} "
},
{
"math_id": 18,
"text": " \\delta(\\mathbf{x})\\sim\\mathcal{GP}\\big(\\mathbf{h}^\\delta(\\cdot)^T\\boldsymbol{\\beta}^\\delta,\\sigma_\\delta^2R^\\delta(\\cdot,\\cdot)\\big) "
},
{
"math_id": 19,
"text": " R^\\delta(\\mathbf{x},\\mathbf{x}')=\\exp\\left\\{-\\sum_{k=1}^d \\omega_k^\\delta(x_k-x_k')^2 \\right\\}. "
},
{
"math_id": 20,
"text": " \\left\\{\\boldsymbol{\\beta}^\\delta, \\sigma_\\delta, \\omega_k^\\delta, k=1,\\ldots,d\\right\\} "
},
{
"math_id": 21,
"text": " \\boldsymbol{\\beta}^m "
},
{
"math_id": 22,
"text": " p(\\boldsymbol{\\theta}\\mid\\text{data},\\boldsymbol{\\varphi})\\propto p(\\rm{data}\\mid\\boldsymbol{\\theta},\\boldsymbol{\\varphi})p(\\boldsymbol{\\theta}) "
},
{
"math_id": 23,
"text": " \\boldsymbol{\\varphi} "
},
{
"math_id": 24,
"text": " p(\\boldsymbol{\\theta},\\boldsymbol{\\varphi}\\mid\\text{data}) "
},
{
"math_id": 25,
"text": " p(\\boldsymbol{\\theta}\\mid\\text{data}) "
}
] |
https://en.wikipedia.org/wiki?curid=5987648
|
59877
|
Gas constant
|
Physical constant equivalent to the Boltzmann constant, but in different units
The molar gas constant (also known as the gas constant, universal gas constant, or ideal gas constant) is denoted by the symbol "R" or . It is the molar equivalent to the Boltzmann constant, expressed in units of energy per temperature increment per amount of substance, rather than energy per temperature increment per "particle". The constant is also a combination of the constants from Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. It is a physical constant that is featured in many fundamental equations in the physical sciences, such as the ideal gas law, the Arrhenius equation, and the Nernst equation.
The gas constant is the constant of proportionality that relates the energy scale in physics to the temperature scale and the scale used for amount of substance. Thus, the value of the gas constant ultimately derives from historical decisions and accidents in the setting of units of energy, temperature and amount of substance. The Boltzmann constant and the Avogadro constant were similarly determined, which separately relate energy to temperature and particle count to amount of substance.
The gas constant "R" is defined as the Avogadro constant "N"A multiplied by the Boltzmann constant "k" (or "k"B):
formula_0
Since the 2019 redefinition of SI base units, both "N"A and "k" are defined with exact numerical values when expressed in SI units. As a consequence, the SI value of the molar gas constant is exact.
Some have suggested that it might be appropriate to name the symbol "R" the Regnault constant in honour of the French chemist Henri Victor Regnault, whose accurate experimental data were used to calculate the early value of the constant. However, the origin of the letter "R" to represent the constant is elusive. The universal gas constant was apparently introduced independently by Clausius' student, A.F. Horstmann (1873) and Dmitri Mendeleev, who reported it first on 12 September 1874. Using his extensive measurements of the properties of gases, Mendeleev also calculated it with high precision, within 0.3% of its modern value.
The gas constant occurs in the ideal gas law:
formula_1
where "P" is the absolute pressure, "V" is the volume of gas, "n" is the amount of substance, "m" is the mass, and "T" is the thermodynamic temperature. "R"specific is the mass-specific gas constant. The gas constant is expressed in the same unit as molar heat.
Dimensions.
From the ideal gas law "PV" = "nRT" we get:
formula_2
where "P" is pressure, "V" is volume, "n" is number of moles of a given substance, and "T" is temperature.
As pressure is defined as force per area of measurement, the gas equation can also be written as:
formula_3
Area and volume are (length)2 and (length)3 respectively. Therefore:
formula_4
Since force × length = work:
formula_5
The physical significance of "R" is work per mole per degree. It may be expressed in any set of units representing work or energy (such as joules), units representing degrees of temperature on an absolute scale (such as kelvin or rankine), and any system of units designating a mole or a similar pure number that allows an equation of macroscopic mass and fundamental particle numbers in a system, such as an ideal gas (see "Avogadro constant").
Instead of a mole the constant can be expressed by considering the normal cubic metre.
Otherwise, we can also say that:
formula_6
Therefore, we can write "R" as:
formula_7
And so, in terms of SI base units:
"R" = kg⋅m2⋅s−2⋅K−1⋅mol−1.
Relationship with the Boltzmann constant.
The Boltzmann constant "k"B (alternatively "k") may be used in place of the molar gas constant by working in pure particle count, "N", rather than amount of substance, "n", since:
formula_8
where "N"A is the Avogadro constant.
For example, the ideal gas law in terms of the Boltzmann constant is:
formula_9
where "N" is the number of particles (molecules in this case), or to generalize to an inhomogeneous system the local form holds:
formula_10
where "ρ"N = "N"/"V" is the number density.
Measurement and replacement with defined value.
As of 2006, the most precise measurement of "R" had been obtained by measuring the speed of sound "c"a("P", "T") in argon at the temperature "T" of the triple point of water at different pressures "P", and extrapolating to the zero-pressure limit "c"a(0, "T"). The value of "R" is then obtained from the relation:
formula_11
where:
However, following the 2019 redefinition of the SI base units, "R" now has an exact value defined in terms of other exactly defined physical constants.
Specific gas constant.
The specific gas constant of a gas or a mixture of gases ("R"specific) is given by the molar gas constant divided by the molar mass ("M") of the gas or mixture:
formula_12
Just as the molar gas constant can be related to the Boltzmann constant, so can the specific gas constant by dividing the Boltzmann constant by the molecular mass of the gas:
formula_13
Another important relationship comes from thermodynamics. Mayer's relation relates the specific gas constant to the specific heat capacities for a calorically perfect gas and a thermally perfect gas:
formula_14
where "c"p is the specific heat capacity for a constant pressure and "c"v is the specific heat capacity for a constant volume.
It is common, especially in engineering applications, to represent the specific gas constant by the symbol "R". In such cases, the universal gas constant is usually given a different symbol such as "R" to distinguish it. In any case, the context and/or unit of the gas constant should make it clear as to whether the universal or specific gas constant is being referred to.
In case of air, using the perfect gas law and the standard sea-level conditions (SSL) (air density "ρ"0 = 1.225 kg/m3, temperature "T"0 = 288.15 K and pressure "p"0 = ), we have that "R"air = "P"0/("ρ"0"T"0) = . Then the molar mass of air is computed by "M"0 = "R"/"R"air = .
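A short numerical check of the relation "R"specific = "R"/"M" for dry air. The molar mass and the standard pressure used below are commonly quoted figures inserted here purely for illustration and should be read as assumptions.

```python
R = 8.314462618        # molar gas constant, J/(mol*K)
M_air = 0.0289647      # assumed molar mass of dry air, kg/mol (illustrative value)

R_air = R / M_air      # specific gas constant of air, J/(kg*K)
print("R_air ≈", round(R_air, 2), "J/(kg K)")

# Consistency check with the sea-level state quoted in the text: p0 = rho0 * R_air * T0.
rho0, T0 = 1.225, 288.15
print("p0 ≈", round(rho0 * R_air * T0), "Pa")   # close to the assumed standard pressure of 101325 Pa
```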
U.S. Standard Atmosphere.
The U.S. Standard Atmosphere, 1976 (USSA1976) defines the gas constant "R"∗ as:
"R"∗ = = .
Note the use of the kilomole, with the resulting factor of 1000 in the constant. The USSA1976 acknowledges that this value is not consistent with the cited values for the Avogadro constant and the Boltzmann constant. This disparity is not a significant departure from accuracy, and USSA1976 uses this value of "R"∗ for all the calculations of the standard atmosphere. When using the ISO value of "R", the calculated pressure increases by only 0.62 pascal at 11 kilometres (the equivalent of a difference of only 17.4 centimetres or 6.8 inches) and 0.292 Pa at 20 km (the equivalent of a difference of only 33.8 cm or 13.2 in).
Also note that this was well before the 2019 SI redefinition, through which the constant was given an exact value.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "R = N_{\\rm A} k"
},
{
"math_id": 1,
"text": "PV = nRT = m R_{\\rm specific} T"
},
{
"math_id": 2,
"text": "R = \\frac{PV}{nT}"
},
{
"math_id": 3,
"text": "R = \\frac{ \\dfrac{\\mathrm{force}}{\\mathrm{area}} \\times \\mathrm{volume} }\n { \\mathrm{amount} \\times \\mathrm{temperature} }\n"
},
{
"math_id": 4,
"text": "R = \\frac{ \\dfrac{\\mathrm{force} }{ (\\mathrm{length})^2} \\times (\\mathrm{length})^3 }\n { \\mathrm{amount} \\times \\mathrm{temperature} }\n = \\frac{ \\mathrm{force} \\times \\mathrm{length} }\n { \\mathrm{amount} \\times \\mathrm{temperature} }\n"
},
{
"math_id": 5,
"text": "R = \\frac{ \\mathrm{work} }\n { \\mathrm{amount} \\times \\mathrm{temperature} }\n"
},
{
"math_id": 6,
"text": "\\mathrm{force} = \\frac{ \\mathrm{mass} \\times \\mathrm{length} }\n { (\\mathrm{time})^2 }\n"
},
{
"math_id": 7,
"text": "R = \\frac{ \\mathrm{mass} \\times \\mathrm{length}^2 }\n { \\mathrm{amount} \\times \\mathrm{temperature} \\times (\\mathrm{time})^2 }\n"
},
{
"math_id": 8,
"text": "R = N_{\\rm A} k_{\\rm B},\\,"
},
{
"math_id": 9,
"text": "PV = Nk_{\\rm B} T,"
},
{
"math_id": 10,
"text": "P = \\rho_{\\rm N} k_{\\rm B} T,"
},
{
"math_id": 11,
"text": "c_\\mathrm{a}(0, T) = \\sqrt{\\frac{\\gamma_0 R T}{A_\\mathrm{r}(\\mathrm{Ar}) M_\\mathrm{u}}},"
},
{
"math_id": 12,
"text": " R_{\\rm specific} = \\frac{R}{M} "
},
{
"math_id": 13,
"text": " R_{\\rm specific} = \\frac{k_{\\rm B}}{m} "
},
{
"math_id": 14,
"text": " R_{\\rm specific} = c_{\\rm p} - c_{\\rm v}\\ "
}
] |
https://en.wikipedia.org/wiki?curid=59877
|
598776
|
Informant (statistics)
|
Gradient of the likelihood function
In statistics, the score (or informant) is the gradient of the log-likelihood function with respect to the parameter vector. Evaluated at a particular point of the parameter vector, the score indicates the steepness of the log-likelihood function and thereby the sensitivity to infinitesimal changes to the parameter values. If the log-likelihood function is differentiable over the parameter space, the score vanishes at an interior local maximum or minimum; this fact is used in maximum likelihood estimation to find the parameter values that maximize the likelihood function.
Since the score is a function of the observations, which are subject to sampling error, it lends itself to a test statistic known as "score test" in which the parameter is held at a particular value. Further, the ratio of two likelihood functions evaluated at two distinct parameter values can be understood as a definite integral of the score function.
Definition.
The score is the gradient (the vector of partial derivatives) of formula_0, the natural logarithm of the likelihood function, with respect to an m-dimensional parameter vector formula_1.
formula_2
This differentiation yields a formula_3 row vector at each value of formula_4 and formula_5, and indicates the sensitivity of the likelihood (its derivative normalized by its value).
In older literature, "linear score" may refer to the score with respect to infinitesimal translation of a given density. This convention arises from a time when the primary parameter of interest was the mean or median of a distribution. In this case, the likelihood of an observation is given by a density of the form formula_6. The "linear score" is then defined as
formula_7
Properties.
Mean.
While the score is a function of formula_1, it also depends on the observations formula_8 at which the likelihood function is evaluated, and in view of the random character of sampling one may take its expected value over the sample space. Under certain regularity conditions on the density functions of the random variables, the expected value of the score, evaluated at the true parameter value formula_1, is zero. To see this, rewrite the likelihood function formula_9 as a probability density function formula_10, and denote the sample space formula_11. Then:
formula_12
The assumed regularity conditions allow the interchange of derivative and integral (see Leibniz integral rule), hence the above expression may be rewritten as
formula_13
It is worth restating the above result in words: the expected value of the score, evaluated at the true parameter value formula_1, is zero. Thus, if one were to repeatedly sample from some distribution and repeatedly calculate the score, the mean value of the scores would tend to zero as the number of samples grows.
Variance.
The variance of the score, formula_14, can be derived from the above expression for the expected value.
formula_15
Hence the variance of the score is equal to the negative expected value of the Hessian matrix of the log-likelihood.
formula_16
The latter is known as the Fisher information and is written formula_17. Note that the Fisher information is not a function of any particular observation, as the random variable formula_18 has been averaged out. This concept of information is useful when comparing two methods of observation of some random process.
Examples.
Bernoulli process.
Consider observing the first "n" trials of a Bernoulli process, and seeing that "A" of them are successes and the remaining "B" are failures, where the probability of success is "θ".
Then the likelihood formula_9 is
formula_19
so the score "s" is
formula_20
We can now verify that the expectation of the score is zero. Noting that the expectation of "A" is "nθ" and the expectation of "B" is "n"(1 − "θ") [recall that "A" and "B" are random variables], we can see that the expectation of "s" is
formula_21
We can also check the variance of formula_22. We know that "A" + "B" = "n" (so "B" = "n" − "A") and the variance of "A" is "nθ"(1 − "θ") so the variance of "s" is
formula_23
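A quick simulation check of the two results above, using the hypothetical values "n" = 50 and "θ" = 0.3: the sample mean of the score should be close to zero and its sample variance close to "n"/("θ"(1 − "θ")).

```python
import numpy as np

rng = np.random.default_rng(0)
n, theta = 50, 0.3                                   # hypothetical trial count and success probability

A = rng.binomial(n, theta, size=200_000)             # number of successes in each replication
s = A / theta - (n - A) / (1.0 - theta)              # score of each replication

print("sample mean of score:    ", s.mean())          # near 0
print("sample variance of score:", s.var(ddof=1))     # near n / (theta * (1 - theta))
print("theoretical variance:    ", n / (theta * (1 - theta)))
```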
Binary outcome model.
For models with binary outcomes ("Y" = 1 or 0), the model can be scored with the logarithm of predictions
formula_24
where "p" is the probability in the model to be estimated and "S" is the score.
Applications.
Scoring algorithm.
The scoring algorithm is an iterative method for numerically determining the maximum likelihood estimator.
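A minimal sketch of the scoring iteration, which updates the parameter by the score divided by the Fisher information, applied to the Bernoulli example of the previous section; the data values are hypothetical. For this model the iteration reaches the maximum likelihood estimate "A"/"n" immediately.

```python
# Fisher scoring for the Bernoulli success probability theta.
A, n = 37, 50                  # hypothetical data: 37 successes in 50 trials
theta = 0.5                    # starting value

for _ in range(20):
    score = A / theta - (n - A) / (1.0 - theta)    # s(theta)
    info = n / (theta * (1.0 - theta))             # Fisher information I(theta)
    theta = theta + score / info                   # scoring update

print("MLE of theta:", theta)  # converges to A / n = 0.74
```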
Score test.
Note that formula_22 is a function of formula_1 and the observation formula_8, so that, in general, it is not a statistic. However, in certain applications, such as the score test, the score is evaluated at a specific value of formula_1 (such as a null-hypothesis value), in which case the result is a statistic. Intuitively, if the restricted estimator is near the maximum of the likelihood function, the score should not differ from zero by more than sampling error. In 1948, C. R. Rao first proved that the square of the score divided by the information matrix follows an asymptotic χ2-distribution under the null hypothesis.
Further note that the likelihood-ratio test is given by
formula_25
which means that the likelihood-ratio test can be understood as the area under the score function between formula_26 and formula_27.
Score matching (machine learning).
Score matching describes the process of applying machine learning algorithms (commonly neural networks) to approximate the score function formula_28 of an unknown distribution formula_29 from finite samples. The learned function formula_30 can then be used in generative modeling to draw new samples from formula_29.
It might seem confusing that the word score is used for formula_31, since it is not the derivative of a log-likelihood: the function involved is a density rather than a likelihood, and the gradient is taken with respect to the data rather than the parameters. For more information about this definition, see the referenced paper.
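For a distribution whose density is known in closed form, the quantity formula_31 can be written down directly; the Gaussian case below, with hypothetical parameters, is a standard illustration of the target that score matching tries to learn from samples, together with a finite-difference check.

```python
import numpy as np

mu, sigma = 1.0, 2.0            # hypothetical Gaussian parameters

def score(x):
    """d/dx log p(x) for a N(mu, sigma^2) density, i.e. -(x - mu) / sigma^2."""
    return -(x - mu) / sigma**2

def log_density(x):
    return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))

# Finite-difference check of the analytic score at a test point.
x, h = 0.3, 1e-5
numeric = (log_density(x + h) - log_density(x - h)) / (2 * h)
print(score(x), numeric)        # the two values should agree closely
```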
History.
The term "score function" may initially seem unrelated to its contemporary meaning, which centers around the derivative of the log-likelihood function in statistical models. This apparent discrepancy can be traced back to the term's historical origins. The concept of the "score function" was first introduced by British statistician Ronald Fisher in his 1935 paper titled "The Detection of Linkage with 'Dominant' Abnormalities." Fisher employed the term in the context of genetic analysis, specifically for families where a parent had a dominant genetic abnormality. Over time, the application and meaning of the "score function" have evolved, diverging from its original context but retaining its foundational principles.
Fisher's initial use of the term was in the context of analyzing genetic attributes in families with a parent possessing a genetic abnormality. He categorized the children of such parents into four classes based on two binary traits: whether they had inherited the abnormality or not, and their zygosity status as either homozygous or heterozygous. Fisher devised a method to assign each family a "score," calculated based on the number of children falling into each of the four categories. This score was used to estimate what he referred to as the "linkage parameter," which described the probability of the genetic abnormality being inherited. Fisher evaluated the efficacy of his scoring rule by comparing it with an alternative rule and against what he termed the "ideal score." The ideal score was defined as the derivative of the logarithm of the sampling density, as mentioned on page 193 of his work.
The term "score" later evolved through subsequent research, notably expanding beyond the specific application in genetics that Fisher had initially addressed. Various authors adapted Fisher's original methodology to more generalized statistical contexts. In these broader applications, the term "score" or "efficient score" started to refer more commonly to the derivative of the log-likelihood function of the statistical model in question. This conceptual expansion was significantly influenced by a 1948 paper by C. R. Rao, which introduced "efficient score tests" that employed the derivative of the log-likelihood function.
Thus, what began as a specialized term in the realm of genetic statistics has evolved to become a fundamental concept in broader statistical theory, often associated with the derivative of the log-likelihood function.
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\log \\mathcal{L}(\\theta;x)"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "s(\\theta;x) \\equiv \\frac{\\partial \\log \\mathcal{L}(\\theta;x)}{\\partial \\theta}"
},
{
"math_id": 3,
"text": "(1 \\times m)"
},
{
"math_id": 4,
"text": " \\theta "
},
{
"math_id": 5,
"text": "x"
},
{
"math_id": 6,
"text": "\\mathcal L(\\theta;X)=f(X+\\theta)"
},
{
"math_id": 7,
"text": "\ns_{\\rm linear}\n= \\frac{\\partial}{\\partial X} \\log f(X)\n"
},
{
"math_id": 8,
"text": "\\mathbf{x} = (x_1, x_2, \\ldots, x_T)"
},
{
"math_id": 9,
"text": "\\mathcal L"
},
{
"math_id": 10,
"text": "\\mathcal L(\\theta; x) = f(x; \\theta)"
},
{
"math_id": 11,
"text": "\\mathcal{X}"
},
{
"math_id": 12,
"text": "\n\\begin{align}\n\\operatorname{E}(s\\mid\\theta)\n& =\\int_{\\mathcal{X}}\nf(x; \\theta) \\frac{\\partial}{\\partial\\theta} \\log \\mathcal L(\\theta;x)\n\\,dx \\\\[6pt]\n& = \\int_{\\mathcal{X}}\nf(x; \\theta) \\frac{1}{f(x; \\theta)}\\frac{\\partial f(x; \\theta)}{\\partial \\theta}\\, dx\n=\\int_{\\mathcal{X}} \\frac{\\partial f(x; \\theta)}{\\partial \\theta} \\, dx\n\\end{align}\n"
},
{
"math_id": 13,
"text": "\n\\frac{\\partial}{\\partial\\theta} \\int_{\\mathcal{X}}\n f(x; \\theta) \\, dx\n=\n\\frac{\\partial}{\\partial\\theta}1 = 0.\n"
},
{
"math_id": 14,
"text": "\\operatorname{Var}(s(\\theta)) = \\operatorname{E}(s(\\theta) s(\\theta)^{\\mathsf{T}})"
},
{
"math_id": 15,
"text": "\n\\begin{align}\n0\n& =\\frac{\\partial}{\\partial \\theta^{\\mathsf{T}}} \\operatorname{E}(s\\mid\\theta) \\\\[6pt]\n& =\\frac{\\partial}{\\partial \\theta^{\\mathsf{T}}} \\int_{\\mathcal{X}}\n \\frac{\\partial \\log \\mathcal L(\\theta;X)}{\\partial\\theta} f(x; \\theta)\n\\,dx \\\\[6pt]\n& = \\int_{\\mathcal{X}}\n \\frac{\\partial}{\\partial \\theta^{\\mathsf{T}}} \\left\\{ \\frac{\\partial \\log \\mathcal L(\\theta;X)}{\\partial\\theta} f(x; \\theta) \\right\\}\n\\,dx \\\\[6pt]\n& = \\int_{\\mathcal{X}} \\left\\{ \\frac{\\partial^2 \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta \\, \\partial \\theta^\\mathsf{T}} f(x; \\theta) + \\frac{\\partial \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta} \\frac{\\partial f(x; \\theta)}{\\partial \\theta^\\mathsf{T} } \\right\\} \\,dx \\\\[6pt]\n& = \\int_{\\mathcal{X}} \\frac{\\partial^2 \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta \\partial \\theta^\\mathsf{T}} f(x; \\theta) \\,dx + \\int_{\\mathcal{X}} \\frac{\\partial \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta} \\frac{\\partial f(x; \\theta)}{\\partial \\theta^\\mathsf{T} } \\,dx \\\\[6pt]\n& = \\int_{\\mathcal{X}} \\frac{\\partial^2 \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta \\, \\partial \\theta^\\mathsf{T}} f(x; \\theta) \\,dx + \\int_{\\mathcal{X}} \\frac{\\partial \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta} \\frac{\\partial \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta^\\mathsf{T} } f(x; \\theta) \\,dx \\\\[6pt]\n& = \\operatorname{E}\\left( \\frac{\\partial^2 \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta \\, \\partial \\theta^\\mathsf{T}} \\right) + \\operatorname{E}\\left( \\frac{\\partial \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta} \\left[ \\frac{\\partial \\log \\mathcal{L}(\\theta;X)}{\\partial \\theta} \\right]^\\mathsf{T} \\right)\n\\end{align}\n"
},
{
"math_id": 16,
"text": "\\operatorname{E}(s(\\theta) s(\\theta)^{\\mathsf{T}}) = - \\operatorname{E}\\left( \\frac{\\partial^2 \\log \\mathcal{L}}{\\partial \\theta \\, \\partial \\theta^{\\mathsf{T}} } \\right)"
},
{
"math_id": 17,
"text": "\\mathcal{I}(\\theta)"
},
{
"math_id": 18,
"text": "X"
},
{
"math_id": 19,
"text": "\n\\mathcal L(\\theta;A,B)=\\frac{(A+B)!}{A!B!}\\theta^A(1-\\theta)^B,"
},
{
"math_id": 20,
"text": "\ns=\\frac{\\partial \\log \\mathcal L}{\\partial \\theta}=\\frac{1}{\\mathcal L}\\frac{\\partial \\mathcal L}{\\partial\\theta} = \\frac{A}{\\theta}-\\frac{B}{1-\\theta}.\n"
},
{
"math_id": 21,
"text": "\nE(s)\n= \\frac{n\\theta}{\\theta} - \\frac{n(1-\\theta)}{1-\\theta}\n= n - n \n= 0.\n"
},
{
"math_id": 22,
"text": "s"
},
{
"math_id": 23,
"text": "\n\\begin{align}\n\\operatorname{var}(s) & =\\operatorname{var}\\left(\\frac{A}{\\theta}-\\frac{n-A}{1-\\theta}\\right)\n=\\operatorname{var}\\left(A\\left(\\frac{1}{\\theta}+\\frac{1}{1-\\theta}\\right)\\right) \\\\\n& =\\left(\\frac{1}{\\theta}+\\frac{1}{1-\\theta}\\right)^2\\operatorname{var}(A)\n=\\frac{n}{\\theta(1-\\theta)}.\n\\end{align}\n"
},
{
"math_id": 24,
"text": " S = Y \\log( p ) + ( 1 - Y ) ( \\log( 1 - p ) ) "
},
{
"math_id": 25,
"text": "-2 \\left[ \\log \\mathcal{L}(\\theta_0) - \\log \\mathcal{L}(\\hat{\\theta}) \\right] = 2 \\int_{\\theta_0}^{\\hat{\\theta}} \\frac{ d \\, \\log \\mathcal{L}(\\theta) }{d \\theta} \\, d \\theta = 2 \\int_{\\theta_0}^{\\hat{\\theta}} s(\\theta) \\, d \\theta "
},
{
"math_id": 26,
"text": "\\theta_{0}"
},
{
"math_id": 27,
"text": "\\hat{\\theta}"
},
{
"math_id": 28,
"text": "s_\\theta \\approx \\nabla_x \\log p(x)"
},
{
"math_id": 29,
"text": "\\pi(x)"
},
{
"math_id": 30,
"text": "s_\\theta"
},
{
"math_id": 31,
"text": " \\nabla_x \\log p(x)"
}
] |
https://en.wikipedia.org/wiki?curid=598776
|
59881
|
Ideal gas law
|
Equation of the state of a hypothetical ideal gas
The ideal gas law, also called the general gas equation, is the equation of state of a hypothetical ideal gas. It is a good approximation of the behavior of many gases under many conditions, although it has several limitations. It was first stated by Benoît Paul Émile Clapeyron in 1834 as a combination of the empirical Boyle's law, Charles's law, Avogadro's law, and Gay-Lussac's law. The ideal gas law is often written in an empirical form:
formula_0
where formula_1, formula_2 and formula_3 are the pressure, volume and temperature respectively; formula_4 is the amount of substance; and formula_5 is the ideal gas constant.
It can also be derived from the microscopic kinetic theory, as was achieved (apparently independently) by August Krönig in 1856 and Rudolf Clausius in 1857.
Equation.
The state of an amount of gas is determined by its pressure, volume, and temperature. The modern form of the equation relates these simply in two main forms. The temperature used in the equation of state is an absolute temperature: the appropriate SI unit is the kelvin.
Common forms.
The most frequently introduced forms are:
formula_6
where:
In SI units, "p" is measured in pascals, "V" is measured in cubic metres, "n" is measured in moles, and "T" in kelvins (the Kelvin scale is a shifted Celsius scale, where 0.00 K = −273.15 °C, the lowest possible temperature). "R" has for value 8.314 J/(mol·K) = 1.989 ≈ 2 cal/(mol·K), or 0.0821 L⋅atm/(mol⋅K).
Molar form.
How much gas is present could be specified by giving the mass instead of the chemical amount of gas. Therefore, an alternative form of the ideal gas law may be useful. The chemical amount, "n" (in moles), is equal to total mass of the gas ("m") (in kilograms) divided by the molar mass, "M" (in kilograms per mole):
formula_10
By replacing "n" with "m"/"M" and subsequently introducing density "ρ" = "m"/"V", we get:
formula_11
formula_12
formula_13
Defining the specific gas constant "R"specific as the ratio "R"/"M",
formula_14
This form of the ideal gas law is very useful because it links pressure, density, and temperature in a unique formula independent of the quantity of the considered gas. Alternatively, the law may be written in terms of the specific volume "v", the reciprocal of density, as
formula_15
It is common, especially in engineering and meteorological applications, to represent the specific gas constant by the symbol "R". In such cases, the universal gas constant is usually given a different symbol such as formula_16 or formula_17 to distinguish it. In any case, the context and/or units of the gas constant should make it clear as to whether the universal or specific gas constant is being used.
Statistical mechanics.
In statistical mechanics, the following molecular equation is derived from first principles
formula_18
where "P" is the absolute pressure of the gas, "n" is the number density of the molecules (given by the ratio "n" = "N"/"V", in contrast to the previous formulation in which "n" is the "number of moles"), "T" is the absolute temperature, and "k"B is the Boltzmann constant relating temperature and energy, given by:
formula_19
where "N"A is the Avogadro constant.
From this we notice that for a gas of mass "m", with an average particle mass of "μ" times the atomic mass constant, "m"u, (i.e., the mass is "μ" Da) the number of molecules will be given by
formula_20
and since "ρ" = "m"/"V" = "nμm"u, we find that the ideal gas law can be rewritten as
formula_21
In SI units, "P" is measured in pascals, "V" in cubic metres, "T" in kelvins, and "k"B = in SI units.
Combined gas law.
Combining the laws of Charles, Boyle and Gay-Lussac gives the combined gas law, which takes the same functional form as the ideal gas law except that the number of moles is unspecified, and the ratio of formula_22 to formula_3 is simply taken as a constant:
formula_23
where formula_24 is the pressure of the gas, formula_2 is the volume of the gas, formula_3 is the absolute temperature of the gas, and formula_25 is a constant. When comparing the same substance under two different sets of conditions, the law can be written as
formula_26
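Solving the two-state form for an unknown final pressure, with purely hypothetical numbers:

```python
# Combined gas law: P1 * V1 / T1 = P2 * V2 / T2, solved for P2.
P1, V1, T1 = 100_000.0, 2.0e-3, 300.0    # initial state (Pa, m^3, K), hypothetical
V2, T2 = 1.0e-3, 350.0                   # final volume and temperature, hypothetical

P2 = P1 * V1 / T1 * T2 / V2
print("P2 ≈", round(P2, 1), "Pa")        # about 233333.3 Pa
```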
Energy associated with a gas.
According to the assumptions of the kinetic theory of ideal gases, one can consider that there are no intermolecular attractions between the molecules, or atoms, of an ideal gas. In other words, its potential energy is zero. Hence, all the energy possessed by the gas is the kinetic energy of the molecules, or atoms, of the gas.
formula_27
This corresponds to the kinetic energy of "n" moles of a monoatomic gas having 3 degrees of freedom; "x", "y", "z". The table here below gives this relationship for different amounts of a monoatomic gas.
Applications to thermodynamic processes.
The table below essentially simplifies the ideal gas equation for particular processes, thus making the equation easier to solve using numerical methods.
A thermodynamic process is defined as a system that moves from state 1 to state 2, where the state number is denoted by subscript. As shown in the first column of the table, basic thermodynamic processes are defined such that one of the gas properties ("P", "V", "T", "S", or "H") is constant throughout the process.
For a given thermodynamic process, in order to specify the extent of a particular process, one of the property ratios (which are listed under the column labeled "known ratio") must be specified (either directly or indirectly). Also, the property for which the ratio is known must be distinct from the property held constant in the previous column (otherwise the ratio would be unity, and not enough information would be available to simplify the gas law equation).
In the final three columns, the properties ("p", "V", or "T") at state 2 can be calculated from the properties at state 1 using the equations listed.
a. In an isentropic process, system entropy ("S") is constant. Under these conditions, "p"1"V"1"γ" = "p"2"V"2"γ", where "γ" is defined as the heat capacity ratio, which is constant for a calorically perfect gas. The value used for "γ" is typically 1.4 for diatomic gases like nitrogen (N2) and oxygen (O2) (and air, which is 99% diatomic). Also "γ" is typically 1.6 for monatomic gases like the noble gases helium (He) and argon (Ar). In internal combustion engines "γ" varies between 1.35 and 1.15, depending on the constituent gases and temperature.
b. In an isenthalpic process, system enthalpy ("H") is constant. In the case of free expansion for an ideal gas, there are no molecular interactions, and the temperature remains constant. For real gases, the molecules do interact via attraction or repulsion depending on temperature and pressure, and heating or cooling does occur. This is known as the Joule–Thomson effect. For reference, the Joule–Thomson coefficient μJT for air at room temperature and sea level is 0.22 °C/bar.
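A minimal sketch of the isentropic relation from note (a), "p"1"V"1"γ" = "p"2"V"2"γ", with illustrative numbers for a diatomic gas such as air ("γ" ≈ 1.4):

```python
# Isentropic compression of a diatomic ideal gas (gamma ~ 1.4 for air).
gamma = 1.4
p1, V1 = 101_325.0, 1.0e-3          # initial pressure (Pa) and volume (m^3), hypothetical
V2 = 0.5e-3                         # gas compressed to half its initial volume

p2 = p1 * (V1 / V2) ** gamma        # from p1 * V1**gamma = p2 * V2**gamma
print("p2 ≈", round(p2), "Pa")      # about 2.64 times p1
```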
Deviations from ideal behavior of real gases.
The equation of state given here ("PV" = "nRT") applies only to an ideal gas, or as an approximation to a real gas that behaves sufficiently like an ideal gas. There are in fact many different forms of the equation of state. Since the ideal gas law neglects both molecular size and intermolecular attractions, it is most accurate for monatomic gases at high temperatures and low pressures. The neglect of molecular size becomes less important for lower densities, i.e. for larger volumes at lower pressures, because the average distance between adjacent molecules becomes much larger than the molecular size. The relative importance of intermolecular attractions diminishes with increasing thermal kinetic energy, i.e., with increasing temperatures. More detailed "equations of state", such as the van der Waals equation, account for deviations from ideality caused by molecular size and intermolecular forces.
Derivations.
Empirical.
The empirical laws that led to the derivation of the ideal gas law were discovered with experiments that changed only 2 state variables of the gas and kept every other one constant.
All the possible gas laws that could have been discovered with this kind of setup are:
where "P" stands for pressure, "V" for volume, "N" for number of particles in the gas and "T" for temperature; where formula_34 are constants in this context because of each equation requiring only the parameters explicitly noted in them changing.
To derive the ideal gas law one does not need to know all 6 formulas: knowing 3 of them is enough to derive the rest, and knowing 4 is enough to obtain the ideal gas law directly.
Since each formula holds only when the state variables involved in it change while the others (which are properties of the gas but are not explicitly noted in the formula) remain constant, we cannot simply use algebra and directly combine them all. For example, Boyle did his experiments while keeping "N" and "T" constant, and this must be taken into account; in the same way, every experiment kept some parameters constant, and this must be taken into account in the derivation.
Keeping this in mind, to carry the derivation on correctly, one must imagine the gas being altered by one process at a time (as it was done in the experiments). The derivation using 4 formulas can look like this:
at first the gas has parameters formula_35
Say we start by changing only pressure and volume; then, according to Boyle's law (equation 1):
After this process, the gas has parameters formula_36
Then, using equation (5) to change the number of particles in the gas and the temperature,
After this process, the gas has parameters formula_37
Then, using equation (6) to change the pressure and the number of particles,
After this process, the gas has parameters formula_38
Then, using Charles's law (equation 2) to change the volume and temperature of the gas,
After this process, the gas has parameters formula_39
Using simple algebra on equations (7), (8), (9) and (10) yields the result:
formula_40 or formula_41 where formula_42 stands for the Boltzmann constant.
Another equivalent result, using the fact that formula_43, where "n" is the number of moles in the gas and "R" is the universal gas constant, is:
formula_44 which is known as the ideal gas law.
If three of the six equations are known, it may be possible to derive the remaining three using the same method. However, because each formula has two variables, this is possible only for certain groups of three. For example, if you were to have equations (1), (2) and (4) you would not be able to get any more because combining any two of them will only give you the third. However, if you had equations (1), (2) and (3) you would be able to get all six equations because combining (1) and (2) will yield (4), then (1) and (3) will yield (6), then (4) and (6) will yield (5), as well as would the combination of (2) and (3) as is explained in the following visual relation:
where the numbers represent the gas laws numbered above.
If you were to use the same method used above on 2 of the 3 laws on the vertices of one triangle that has an "O" inside it, you would get the third.
For example:
Change only pressure and volume first:
then only volume and temperature:
then as we can choose any value for formula_45, if we set formula_46, equation (2') becomes:
combining equations (1') and (3') yields formula_47, which is equation (4), of which we had no prior knowledge until this derivation.
Theoretical.
Kinetic theory.
The ideal gas law can also be derived from first principles using the kinetic theory of gases, in which several simplifying assumptions are made, chief among which are that the molecules, or atoms, of the gas are point masses, possessing mass but no significant volume, and undergo only elastic collisions with each other and the sides of the container in which both linear momentum and kinetic energy are conserved.
First we show that the fundamental assumptions of the kinetic theory of gases imply that
formula_48
Consider a container in the formula_49 Cartesian coordinate system. For simplicity, we assume that a third of the molecules moves parallel to the formula_50-axis, a third moves parallel to the formula_51-axis and a third moves parallel to the formula_52-axis. If all molecules move with the same velocity formula_53, denote the corresponding pressure by formula_54. We choose an area formula_55 on a wall of the container, perpendicular to the formula_50-axis. When time formula_56 elapses, all molecules in the volume formula_57 moving in the positive direction of the formula_50-axis will hit the area. There are formula_58 molecules in a part of volume formula_57 of the container, but only one sixth (i.e. a half of a third) of them moves in the positive direction of the formula_50-axis. Therefore, the number of molecules formula_59 that will hit the area formula_55 when the time formula_56 elapses is formula_60.
When a molecule bounces off the wall of the container, it changes its momentum formula_61 to formula_62. Hence the magnitude of change of the momentum of one molecule is formula_63. The magnitude of the change of momentum of all molecules that bounce off the area formula_55 when time formula_56 elapses is then formula_64. From formula_65 and formula_66 we get
formula_67
We considered a situation where all molecules move with the same velocity formula_53. Now we consider a situation where they can move with different velocities, so we apply an "averaging transformation" to the above equation, effectively replacing formula_54 by a new pressure formula_24 and formula_68 by the arithmetic mean of all squares of all velocities of the molecules, i.e. by formula_69 Therefore
formula_70
which gives the desired formula.
Using the Maxwell–Boltzmann distribution, the fraction of molecules that have a speed in the range formula_53 to formula_71 is formula_72, where
formula_73
and formula_25 denotes the Boltzmann constant. The root-mean-square speed can be calculated by
formula_74
Using the integration formula
formula_75
it follows that
formula_76
from which we get the ideal gas law:
formula_77
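A quick numerical check of the step above: sampling Maxwell–Boltzmann velocities (each Cartesian component Gaussian with variance "kT"/"m") and verifying that the mean squared speed equals 3"kT"/"m", from which "PV" = "NkT" follows. The molecular mass below, roughly that of N2 at 300 K, is an illustrative choice.

```python
import numpy as np

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 300.0               # temperature, K
m = 4.65e-26            # approximate mass of an N2 molecule, kg (illustrative)

rng = np.random.default_rng(0)
n = 200_000
# Each velocity component of a Maxwell-Boltzmann gas is Gaussian with variance k_B*T/m.
v = rng.normal(0.0, np.sqrt(k_B * T / m), size=(n, 3))
v_sq = np.sum(v**2, axis=1)

print("sampled <v^2>:", v_sq.mean())
print("theory 3kT/m :", 3 * k_B * T / m)   # the two should agree closely
```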
Statistical mechanics.
Let q = ("q"x, "q"y, "q"z) and p = ("p"x, "p"y, "p"z) denote the position vector and momentum vector of a particle of an ideal gas, respectively. Let F denote the net force on that particle. Then (two times) the time-averaged kinetic energy of the particle is:
formula_78
where the first equality is Newton's second law, and the second line uses Hamilton's equations and the equipartition theorem. Summing over a system of "N" particles yields
formula_79
By Newton's third law and the ideal gas assumption, the net force of the system is the force applied by the walls of the container, and this force is given by the pressure "P" of the gas. Hence
formula_80
where dS is the infinitesimal area element along the walls of the container. Since the divergence of the position vector q is
formula_81
the divergence theorem implies that
formula_82
where "dV" is an infinitesimal volume within the container and "V" is the total volume of the container.
Putting these equalities together yields
formula_83
which immediately implies the ideal gas law for "N" particles:
formula_84
where "n" = "N"/"N"A is the number of moles of gas and "R" = "N"A"k"B is the gas constant.
Other dimensions.
For a "d"-dimensional system, the ideal gas pressure is:
formula_85
where formula_86 is the volume of the "d"-dimensional domain in which the gas exists. The dimensions of the pressure changes with dimensionality.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "pV = nRT"
},
{
"math_id": 1,
"text": "p"
},
{
"math_id": 2,
"text": "V"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "R"
},
{
"math_id": 6,
"text": "pV = nRT = n k_\\text{B} N_\\text{A} T = N k_\\text{B} T "
},
{
"math_id": 7,
"text": "k_\\text{B}"
},
{
"math_id": 8,
"text": "N_{A}"
},
{
"math_id": 9,
"text": "N"
},
{
"math_id": 10,
"text": " n = \\frac{m}{M}. "
},
{
"math_id": 11,
"text": " pV = \\frac{m}{M} RT "
},
{
"math_id": 12,
"text": " p = \\frac{m}{V} \\frac{RT}{M} "
},
{
"math_id": 13,
"text": " p = \\rho \\frac{R}{M} T "
},
{
"math_id": 14,
"text": " p = \\rho R_\\text{specific}T "
},
{
"math_id": 15,
"text": " pv = R_\\text{specific}T. "
},
{
"math_id": 16,
"text": "\\bar R"
},
{
"math_id": 17,
"text": "R^*"
},
{
"math_id": 18,
"text": " P = nk_\\text{B}T, "
},
{
"math_id": 19,
"text": " k_\\text{B} = \\frac{R}{N_\\text{A}} "
},
{
"math_id": 20,
"text": " N = \\frac{m}{\\mu m_\\text{u}}, "
},
{
"math_id": 21,
"text": " P = \\frac{1}{V}\\frac{m}{\\mu m_\\text{u}} k_\\text{B} T = \\frac{k_\\text{B}}{\\mu m_\\text{u}} \\rho T. "
},
{
"math_id": 22,
"text": "PV"
},
{
"math_id": 23,
"text": "\\frac{PV}{T}=k,"
},
{
"math_id": 24,
"text": "P"
},
{
"math_id": 25,
"text": "k"
},
{
"math_id": 26,
"text": " \\frac{P_1 V_1}{T_1}= \\frac{P_2 V_2}{T_2}. "
},
{
"math_id": 27,
"text": "E=\\frac{3}{2} n RT "
},
{
"math_id": 28,
"text": "PV = C_1 \\quad \\text{or} \\quad P_1 V_1 = P_2 V_2 "
},
{
"math_id": 29,
"text": "\\frac{V}{T} = C_2 \\quad \\text{or} \\quad \\frac{V_1}{T_1} = \\frac{V_2}{T_2} "
},
{
"math_id": 30,
"text": "\\frac{V}{N}=C_3 \\quad \\text{or} \\quad \\frac{V_1}{N_1}=\\frac{V_2}{N_2} "
},
{
"math_id": 31,
"text": "\\frac{P}{T}=C_4 \\quad \\text{or} \\quad \\frac{P_1}{T_1}=\\frac{P_2}{T_2} "
},
{
"math_id": 32,
"text": "NT = C_5 \\quad \\text{or} \\quad N_1 T_1 = N_2 T_2 "
},
{
"math_id": 33,
"text": "\\frac{P}{N} = C_6 \\quad \\text{or} \\quad \\frac{P_1}{N_1}=\\frac{P_2}{N_2} "
},
{
"math_id": 34,
"text": "C_1, C_2, C_3, C_4, C_5, C_6 "
},
{
"math_id": 35,
"text": "P_1, V_1, N_1, T_1 "
},
{
"math_id": 36,
"text": "P_2,V_2,N_1,T_1 "
},
{
"math_id": 37,
"text": "P_2,V_2,N_2,T_2 "
},
{
"math_id": 38,
"text": "P_3,V_2,N_3,T_2 "
},
{
"math_id": 39,
"text": "P_3,V_3,N_3,T_3 "
},
{
"math_id": 40,
"text": "\\frac{P_1 V_1}{N_1 T_1} = \\frac{P_3 V_3}{N_3 T_3} "
},
{
"math_id": 41,
"text": "\\frac{PV}{NT} = k_\\text{B} ,"
},
{
"math_id": 42,
"text": " k_\\text{B} "
},
{
"math_id": 43,
"text": "nR = N k_\\text{B} "
},
{
"math_id": 44,
"text": "PV = nRT, "
},
{
"math_id": 45,
"text": "V_3"
},
{
"math_id": 46,
"text": "V_1 = V_3"
},
{
"math_id": 47,
"text": "\\frac{P_1}{T_1} = \\frac{P_2}{T_2}"
},
{
"math_id": 48,
"text": "PV = \\frac{1}{3}Nmv_{\\text{rms}}^2."
},
{
"math_id": 49,
"text": "xyz"
},
{
"math_id": 50,
"text": "x"
},
{
"math_id": 51,
"text": "y"
},
{
"math_id": 52,
"text": "z"
},
{
"math_id": 53,
"text": "v"
},
{
"math_id": 54,
"text": "P_0"
},
{
"math_id": 55,
"text": "S"
},
{
"math_id": 56,
"text": "t"
},
{
"math_id": 57,
"text": "vtS"
},
{
"math_id": 58,
"text": "NvtS/V"
},
{
"math_id": 59,
"text": "N'"
},
{
"math_id": 60,
"text": "NvtS/(6V)"
},
{
"math_id": 61,
"text": "\\mathbf{p}_1"
},
{
"math_id": 62,
"text": "\\mathbf{p}_2=-\\mathbf{p}_1"
},
{
"math_id": 63,
"text": "|\\mathbf{p}_2-\\mathbf{p}_1|=2mv"
},
{
"math_id": 64,
"text": "|\\Delta \\mathbf{p}|=2mvN'=NtSmv^2/(3V)"
},
{
"math_id": 65,
"text": "F=|\\Delta \\mathbf{p}|/t"
},
{
"math_id": 66,
"text": "P_0=F/S"
},
{
"math_id": 67,
"text": "P_0=\\frac{1}{3}Nm\\frac{v^2}{V}."
},
{
"math_id": 68,
"text": "v^2"
},
{
"math_id": 69,
"text": "v_{\\text{rms}}^2."
},
{
"math_id": 70,
"text": "P=\\frac{1}{3}Nm\\frac{v_{\\text{rms}}^2}{V}"
},
{
"math_id": 71,
"text": "v + dv"
},
{
"math_id": 72,
"text": "f(v) \\, dv"
},
{
"math_id": 73,
"text": "f(v) = 4\\pi \\left(\\frac{m}{2\\pi kT}\\right)^{\\!\\frac{3}{2}}v^2 e^{-\\frac{mv^2}{2kT}}"
},
{
"math_id": 74,
"text": "v_{\\text{rms}}^2 = \\int_0^\\infty v^2 f(v) \\, dv = 4\\pi \\left(\\frac{m}{2\\pi kT}\\right)^{\\frac{3}{2}}\\int_0^\\infty v^4 e^{-\\frac{mv^2}{2kT}} \\, dv."
},
{
"math_id": 75,
"text": "\\int_0^\\infty x^{2n}e^{-\\frac{x^2}{a^2}} \\, dx = \\sqrt{\\pi} \\, \\frac{(2n)!}{n!}\\left(\\frac{a}{2}\\right)^{2n+1},\\quad n\\in\\mathbb{N},\\,a\\in\\mathbb{R}^+,"
},
{
"math_id": 76,
"text": "v_{\\text{rms}}^2 = 4\\pi\\left(\\frac{m}{2\\pi kT}\\right)^{\\!\\frac{3}{2}}\\sqrt{\\pi} \\, \\frac{4!}{2!}\\left(\\frac{\\sqrt{\\frac{2kT}{m}}}{2}\\right)^{\\!5} = \\frac{3kT}{m},"
},
{
"math_id": 77,
"text": "PV = \\frac{1}{3} Nm\\left(\\frac{3kT}{m}\\right) = NkT."
},
{
"math_id": 78,
"text": "\\begin{align}\n\\langle \\mathbf{q} \\cdot \\mathbf{F} \\rangle\n&= \\left\\langle q_{x} \\frac{dp_{x}}{dt} \\right\\rangle +\n\\left\\langle q_{y} \\frac{dp_{y}}{dt} \\right\\rangle +\n\\left\\langle q_{z} \\frac{dp_{z}}{dt} \\right\\rangle\\\\\n&=-\\left\\langle q_{x} \\frac{\\partial H}{\\partial q_x} \\right\\rangle -\n\\left\\langle q_{y} \\frac{\\partial H}{\\partial q_y} \\right\\rangle -\n\\left\\langle q_{z} \\frac{\\partial H}{\\partial q_z} \\right\\rangle = -3k_\\text{B} T,\n\\end{align}"
},
{
"math_id": 79,
"text": "3Nk_{B} T = - \\left\\langle \\sum_{k=1}^{N} \\mathbf{q}_{k} \\cdot \\mathbf{F}_{k} \\right\\rangle."
},
{
"math_id": 80,
"text": "-\\left\\langle\\sum_{k=1}^{N} \\mathbf{q}_{k} \\cdot \\mathbf{F}_{k}\\right\\rangle = P \\oint_{\\text{surface}} \\mathbf{q} \\cdot d\\mathbf{S},"
},
{
"math_id": 81,
"text": "\n\\nabla \\cdot \\mathbf{q} =\n\\frac{\\partial q_{x}}{\\partial q_{x}} +\n\\frac{\\partial q_{y}}{\\partial q_{y}} +\n\\frac{\\partial q_{z}}{\\partial q_{z}} = 3,\n"
},
{
"math_id": 82,
"text": "P \\oint_{\\text{surface}} \\mathbf{q} \\cdot d\\mathbf{S}\n= P \\int_{\\text{volume}} \\left( \\nabla \\cdot \\mathbf{q} \\right) dV\n= 3PV,"
},
{
"math_id": 83,
"text": "3 N k_\\text{B} T = -\\left\\langle \\sum_{k=1}^{N} \\mathbf{q}_{k} \\cdot \\mathbf{F}_{k} \\right\\rangle = 3PV,"
},
{
"math_id": 84,
"text": "PV = Nk_{B} T = nRT,"
},
{
"math_id": 85,
"text": "P^{(d)} = \\frac{N k_B T}{L^d}, "
},
{
"math_id": 86,
"text": "L^d"
}
] |
https://en.wikipedia.org/wiki?curid=59881
|
59886546
|
Popov criterion
|
In nonlinear control and stability theory, the Popov criterion is a stability criterion discovered by Vasile M. Popov for the absolute stability of a class of nonlinear systems whose nonlinearity must satisfy an open-sector condition. While the circle criterion can be applied to nonlinear time-varying systems, the Popov criterion is applicable only to autonomous (that is, time invariant) systems.
System description.
The sub-class of Lur'e systems studied by Popov is described by:
formula_0
formula_1
where "x" ∈ R"n", "ξ","u","y" are scalars, and "A","b","c" and "d" have commensurate dimensions. The nonlinear element Φ: R → R is a time-invariant nonlinearity belonging to "open sector" (0, ∞), that is, Φ(0) = 0 and "y"Φ("y") > 0 for all "y" not equal to 0.
Note that the system studied by Popov has a pole at the origin and no direct pass-through from input to output; the transfer function from "u" to "y" is given by
formula_2
Criterion.
Consider the system described above and suppose that "A" is Hurwitz, that the pair ("A", "b") is controllable, that the pair ("A", "c") is observable, and that "d" > 0.
Then the system is globally asymptotically stable if there exists a number "r" > 0 such that formula_3
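A minimal numerical sketch of this test is given below; the system matrices and the candidate multiplier "r" are illustrative values only (not taken from any reference), and the infimum over all real frequencies is approximated by a finite grid, so a positive minimum on the grid only suggests, rather than proves, that the condition holds.
import numpy as np

# illustrative Lur'e-type system data (chosen only for demonstration)
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
b = np.array([[0.0], [1.0]])
c = np.array([[1.0, 0.0]])
d = 1.0
r = 0.5                                     # candidate Popov multiplier

def H(w):
    # transfer function H(jw) = d/(jw) + c (jw I - A)^{-1} b
    s = 1j * w
    return d / s + (c @ np.linalg.inv(s * np.eye(2) - A) @ b).item()

ws = np.logspace(-3, 3, 2000)               # frequency grid in rad/s (the real part is even in w)
popov_curve = np.real((1 + 1j * ws * r) * np.array([H(w) for w in ws]))
print(popov_curve.min() > 0)                # True indicates the Popov condition holds on the grid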
|
[
{
"math_id": 0,
"text": "\n\\begin{align}\n\\dot{x} & = Ax+bu \\\\\n\\dot{\\xi} & = u \\\\\ny & = cx+d\\xi \n\\end{align} "
},
{
"math_id": 1,
"text": " \\begin{matrix} u = -\\varphi (y) \\end{matrix} "
},
{
"math_id": 2,
"text": " H(s) = \\frac{d}{s} + c(sI-A)^{-1}b"
},
{
"math_id": 3,
"text": " \\inf_{\\omega\\,\\in\\,\\mathbb R} \\operatorname{Re} \\left[ (1+j\\omega r) H(j\\omega)\\right] > 0. "
}
] |
https://en.wikipedia.org/wiki?curid=59886546
|
59890989
|
Stable marriage with indifference
|
Variant of the stable marriage problem
Stable marriage with indifference is a variant of the stable marriage problem. As in the original problem, the goal is to match all men to all women in such a way that no man and woman who are not matched to each other would both prefer to leave their current partners and pair up with each other instead.
In the classic version of the problem, each person must rank the members of the opposite sex in strict order of preference. However, in a real-world setting, a person may regard two or more potential partners as equally favorable. Such a tied preference is termed "indifference".
Below is such an instance, where formula_0 is indifferent between formula_1, and formula_2 is indifferent between formula_3.
formula_4
formula_5
formula_6
If tied preference lists are allowed, then the stable marriage problem has three notions of stability, which are discussed in the sections below.
1. A matching is called weakly stable unless there is a couple each of whom strictly prefers the other to his/her partner in the matching. Robert W. Irving extended the Gale–Shapley algorithm as shown below to provide such a weakly stable matching in formula_7 time, where "n" is the size of the stable marriage problem. Ties in the men's and women's preference lists are broken arbitrarily, and preference lists are reduced as the algorithm proceeds; a Python rendering of this pseudocode is given after it.
Assign each person to be free;
while (some man m is free) do
begin
w := first woman on m’s list;
m proposes, and becomes engaged, to w;
if (some man m' is engaged to w) then
assign m' to be free;
for each (successor m" of m on w’s list) do
delete the pair (m", w)
end;
output the engaged pairs, which form a stable matching
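For concreteness, the following Python sketch mirrors the pseudocode above. It assumes the ties have already been broken into some arbitrary strict order (here, the order in which the example's preference lists are written), so it returns one weakly stable matching; the function and variable names are illustrative, not taken from Irving's paper.
from collections import deque

def weakly_stable_matching(men_prefs, women_prefs):
    # men_prefs / women_prefs: dict mapping each person to a strictly ordered
    # preference list, with any ties already broken arbitrarily
    rank = {w: {m: i for i, m in enumerate(prefs)}
            for w, prefs in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}      # index of the next woman m will propose to
    engaged_to = {}                              # woman -> man currently engaged to her
    free_men = deque(men_prefs)

    while free_men:
        m = free_men.popleft()
        w = men_prefs[m][next_choice[m]]         # first woman remaining on m's list
        next_choice[m] += 1
        current = engaged_to.get(w)
        if current is None:
            engaged_to[w] = m                    # w accepts her first proposal
        elif rank[w][m] < rank[w][current]:
            engaged_to[w] = m                    # w prefers m; her previous partner becomes free
            free_men.append(current)
        else:
            free_men.append(m)                   # w rejects m; he proposes again later
    return {m: w for w, m in engaged_to.items()}

# the instance from above, with the ties (w3 w1) and (m1 m2) broken in written order
men = {'m1': ['w2', 'w1', 'w3'], 'm2': ['w3', 'w1', 'w2'], 'm3': ['w1', 'w2', 'w3']}
women = {'w1': ['m3', 'm2', 'm1'], 'w2': ['m1', 'm2', 'm3'], 'w3': ['m2', 'm3', 'm1']}
print(weakly_stable_matching(men, women))        # {'m1': 'w2', 'm2': 'w3', 'm3': 'w1'}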
2. A matching is called super-stable if there is no couple each of whom either strictly prefers the other to his/her partner or is indifferent between them. Robert W. Irving has modified the above algorithm to check whether such super stable matching exists and outputs matching in formula_7 time if it exists. Below is the pseudocode.
assign each person to be free;
repeat
while (some man m is free) do
for each (woman w at the head of m’s list) do
begin
m proposes, and becomes engaged, to w;
for each (strict successor m' of m on w’s list) do
begin
if (m' is engaged to w) then
break the engagement;
delete the pair (m', w)
end
end
for each (woman w who is multiply engaged) do
begin
break all engagements involving w;
for each (man m at the tail of w’s list) do
delete the pair (m, w)
end;
until (some man’s list is empty) or (everyone is engaged);
if everyone is engaged then
the engagement relation is a super-stable matching
else
no super-stable matching exists
3. A matching is strongly stable if there is no couple x, y such that x strictly prefers y to his/her partner and y either strictly prefers x to his/her partner or is indifferent between them. Robert W. Irving has provided an algorithm that checks whether such a strongly stable matching exists and outputs the matching if it does. The algorithm computes a perfect matching between the sets of men and women, thus finding the critical set of men who are engaged to multiple women. Since such engagements are never stable, all such pairs are deleted and the proposal sequence is repeated until either 1) some man's preference list becomes empty (in which case no strongly stable matching exists) or 2) a strongly stable matching is obtained. Below is the pseudo-code for finding a strongly stable matching. It runs in formula_8 time, as shown in Lemma 4.6 of the original paper.
Assign each person to be free;
repeat
while (some man m is free) do
for each (woman w at the head of m's list) do
begin
m proposes, and becomes engaged, to w;
for each (strict successor m' of m on w’s list) do
begin
if (m' is engaged to w) then
break the engagement;
delete the pair (m', w)
end
end
if (the engagement relation does not contain a perfect matching) then
begin
find the critical set Z of men;
for each (woman w who is engaged to a man in Z) do
begin
break all engagements involving w;
for each man m at the tail of w’s list do
delete the pair (m, w)
end;
end;
until (some man’s list is empty) or (everyone is engaged);
if everyone is engaged then
the engagement relation is a strongly stable matching
else
no strongly stable matching exists
Structure of stable marriage with indifference.
In many problems, there can be several different stable matchings. The set of stable matchings has a special structure. David F. Manlove proved that both the set of strongly stable matchings and the set of super-stable matchings form a distributive lattice.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "m_2"
},
{
"math_id": 1,
"text": "w_3 \\& w_1"
},
{
"math_id": 2,
"text": "w_2"
},
{
"math_id": 3,
"text": "m_1 \\& m_2"
},
{
"math_id": 4,
"text": "m_1[\\ w_2\\ w_1\\ w_3 \\ ] \\ \\ \\ \\ \\ \\ w_1[\\ m_3\\ m_2\\ m_1 \\ ]"
},
{
"math_id": 5,
"text": "m_2[\\left( w_3\\ w_1 \\right) w_2] \\ \\ \\ \\ \\ \\ w_2[\\left( m_1\\ m_2\\right) m_3 ]"
},
{
"math_id": 6,
"text": "m_3[\\ w_1\\ w_2\\ w_3 \\ ] \\ \\ \\ \\ \\ \\ w_3[\\ m_2\\ m_3\\ m_1 \\ ]"
},
{
"math_id": 7,
"text": "O(n^2)"
},
{
"math_id": 8,
"text": "O(n^4)"
}
] |
https://en.wikipedia.org/wiki?curid=59890989
|
59891279
|
Thyrotroph Thyroid Hormone Sensitivity Index
|
The Thyrotroph Thyroid Hormone Sensitivity Index (abbreviated "TTSI", also referred to as "Thyrotroph T4 Resistance Index" or "TT4RI") is a calculated structure parameter of thyroid homeostasis. It was originally developed to deliver a method for fast screening for resistance to thyroid hormone. Today it is also used to get an estimate for the set point of thyroid homeostasis, especially to assess dynamic thyrotropic adaptation of the anterior pituitary gland, including non-thyroidal illnesses.
How to determine TTSI.
Universal form.
The TTSI can be calculated with
formula_0
from equilibrium serum or plasma concentrations of thyrotropin (TSH), free T4 (FT4) and the assay-specific upper limit of the reference interval for FT4 concentration ("lu").
Short form.
Some publications use a simpler form of this equation that doesn't correct for the reference range of free T4. It is calculated with
formula_1.
The disadvantage of this uncorrected version is that its numeric results are highly dependent on the assays used and on their units of measurement.
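A direct translation of the two formulas into Python might look like the following sketch; the function names are arbitrary and the example values (TSH in mIU/L, FT4 and its upper reference limit in pmol/L) are purely illustrative.
def ttsi(tsh, ft4, ft4_upper_limit):
    # corrected form: 100 * TSH * FT4 / l_u
    return 100.0 * tsh * ft4 / ft4_upper_limit

def ttsi_uncorrected(tsh, ft4):
    # short form without correction for the FT4 reference range
    return 100.0 * tsh * ft4

# illustrative values: TSH = 2.0 mIU/L, FT4 = 16 pmol/L, upper reference limit 23 pmol/L
print(ttsi(2.0, 16.0, 23.0))          # about 139
print(ttsi_uncorrected(2.0, 16.0))    # 3200; heavily assay- and unit-dependent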
Biochemical associations.
In case of resistance to thyroid hormone, the magnitude of TTSI depends on which nucleotide in the THRB gene is mutated, but also on the genotype of coactivators. A systematic investigation in mice demonstrated a strong association of TT4RI to the genotypes of THRB and the steroid receptor coactivator (SRC-1) gene.
Clinical significance.
The TTSI is used as a screening parameter for resistance to thyroid hormone due to mutations in the THRB gene, where it is elevated. It is also beneficial for assessing the severity of already confirmed thyroid hormone resistance, even on replacement therapy with L-T4, and for monitoring the pituitary response to substitution therapy with thyromimetics (e.g. TRIAC) in RTH Beta.
In autoimmune thyroiditis the TTSI is moderately elevated.
A large cohort study demonstrated TTSI to be strongly influenced by genetic factors. A variant of the TTSI that is not corrected for the upper limit of the FT4 reference range was shown to be significantly increased in offspring from long-lived siblings compared to their partners.
Conversely, an elevated set point of thyroid homeostasis, as quantified by the TT4RI, is associated with a higher prevalence of metabolic syndrome and with several of the harmonized criteria of the International Diabetes Federation, including triglyceride and HDL concentrations and blood pressure.
In certain phenotypes of non-thyroidal illness syndrome, especially in cases with concomitant sepsis, the TTSI is reduced. This reflects a reduced set point of thyroid homeostasis, as also experimentally predicted in rodent models of inflammation and sepsis.
Negative correlation of the TTSI with the urinary excretion of certain phthalates suggests that endocrine disruptors may affect the central set point of thyroid homeostasis.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "TTSI = {100 \\cdot TSH \\cdot FT4 \\over l_u}"
},
{
"math_id": 1,
"text": "TTSI = {100 \\cdot TSH \\cdot FT4}"
}
] |
https://en.wikipedia.org/wiki?curid=59891279
|
59892172
|
Neural style transfer
|
Type of software algorithm for image manipulation
Neural style transfer (NST) refers to a class of software algorithms that manipulate digital images, or videos, in order to adopt the appearance or visual style of another image. NST algorithms are characterized by their use of deep neural networks for the sake of image transformation. Common uses for NST are the creation of artificial artwork from photographs, for example by transferring the appearance of famous paintings to user-supplied photographs. Several notable mobile apps use NST techniques for this purpose, including DeepArt and Prisma. This method has been used by artists and designers around the globe to develop new artwork based on existent style(s).
Earlier style transfer algorithms.
NST is an example of image stylization, a problem studied for over two decades within the field of non-photorealistic rendering. The first two example-based style transfer algorithms were image analogies and image quilting. Both of these methods were based on patch-based texture synthesis algorithms.
Given a training pair of images (a photo and an artwork depicting that photo), a transformation could be learned and then applied to create new artwork from a new photo, by analogy. If no training photo was available, it would need to be produced by processing the input artwork; image quilting did not require this processing step, though it was demonstrated on only one style.
NST.
NST was first published in the paper "A Neural Algorithm of Artistic Style" by Leon Gatys et al., originally released to arXiv in 2015, and subsequently accepted by the peer-reviewed CVPR conference in 2016. The original paper used a VGG-19 architecture that has been pre-trained to perform object recognition using the ImageNet dataset.
In 2017, Google AI introduced a method that allows a single deep convolutional style transfer network to learn multiple styles at the same time. This algorithm permits style interpolation in real-time, even when done on video media.
Formulation.
The process of NST assumes an input image formula_0 and an example style image formula_1.
The image formula_0 is fed through the CNN, and network activations are sampled at a late convolution layer of the VGG-19 architecture. Let formula_2 be the resulting output sample, called the 'content' of the input formula_0.
The style image formula_1 is then fed through the same CNN, and network activations are sampled at the early to middle layers of the CNN. These activations are encoded into a Gramian matrix representation, call it formula_3 to denote the 'style' of formula_1.
The goal of NST is to synthesize an output image formula_4 that exhibits the content of formula_0 applied with the style of formula_1, i.e. formula_5 and formula_6.
An iterative optimization (usually gradient descent) then gradually updates formula_4 to minimize the loss function error:
formula_7,
where formula_8 is the L2 distance. The constant formula_9 controls the level of the stylization effect.
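The content and style terms are easy to write down once feature maps have been extracted from the network. The NumPy sketch below assumes the feature maps are already given as arrays of shape (channels, height, width), uses squared L2 distances for both terms, and stands in for, but does not implement, the VGG-19 feature extraction; all names and shapes are illustrative.
import numpy as np

def gram_matrix(features):
    # features: array of shape (channels, height, width)
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (h * w)                     # (channels, channels) style representation

def nst_loss(content_feat, style_feat, x_content_feat, x_style_feat, k=1e-2):
    content_term = np.sum((x_content_feat - content_feat) ** 2)
    style_term = np.sum((gram_matrix(x_style_feat) - gram_matrix(style_feat)) ** 2)
    return content_term + k * style_term

# random arrays standing in for CNN activations of p, a and the current synthesis x
rng = np.random.default_rng(0)
p_feat = rng.normal(size=(64, 32, 32))           # content features C(p)
a_feat = rng.normal(size=(64, 32, 32))           # style features of a
x_feat = rng.normal(size=(64, 32, 32))           # features of x (same array reused for both terms here)
print(nst_loss(p_feat, a_feat, x_feat, x_feat))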
Training.
Image formula_4 is initially approximated by adding a small amount of white noise to input image formula_0 and feeding it through the CNN. Then we successively backpropagate this loss through the network with the CNN weights fixed in order to update the pixels of formula_4. After several thousand epochs of training, an formula_4 (hopefully) emerges that matches the style of formula_1 and the content of formula_0.
Algorithms are typically implemented for GPUs, so that training takes a few minutes.
Extensions.
NST has also been extended to videos.
Subsequent work improved the speed of NST for images.
A paper by Fei-Fei Li et al. adopted a different regularized loss metric and an accelerated training method to produce results in real time (three orders of magnitude faster than Gatys's method). Their idea was to use not the "pixel-based loss" defined above but rather a 'perceptual loss' measuring the differences between higher-level layers within the CNN. They used a symmetric encoder-decoder CNN. Training uses a similar loss function to the basic NST method but also regularizes the output for smoothness using a total variation (TV) loss. Once trained, the network may be used to transform an image into the style used during training, using a single feed-forward pass of the network. However, the network is restricted to the single style in which it has been trained.
In a later work, Chen Dongdong et al. explored the fusion of optical flow information into feedforward networks in order to improve the temporal coherence of the output.
More recently, feature-transform-based NST methods have been explored for fast stylization that is not coupled to a single specific style and that enables user-controllable "blending" of styles, for example the whitening and coloring transform (WCT).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "C(p)"
},
{
"math_id": 3,
"text": "S(a)"
},
{
"math_id": 4,
"text": "x"
},
{
"math_id": 5,
"text": "C(x)=C(p)"
},
{
"math_id": 6,
"text": "S(x)=S(a)"
},
{
"math_id": 7,
"text": "\\mathcal{L(x)} = | C(x)-C(p) | + k |S(x)-S(a)|"
},
{
"math_id": 8,
"text": "|.|"
},
{
"math_id": 9,
"text": "k"
}
] |
https://en.wikipedia.org/wiki?curid=59892172
|
5989592
|
Maximum cardinality matching
|
Graph theory problem: find a matching containing the most edges
Maximum cardinality matching is a fundamental problem in graph theory.
We are given a graph G, and the goal is to find a matching containing as many edges as possible; that is, a maximum cardinality subset of the edges such that each vertex is adjacent to at most one edge of the subset. As each edge will cover exactly two vertices, this problem is equivalent to the task of finding a matching that covers as many vertices as possible.
An important special case of the maximum cardinality matching problem is when G is a bipartite graph, whose vertices V are partitioned between left vertices in X and right vertices in Y, and edges in E always connect a left vertex to a right vertex. In this case, the problem can be efficiently solved with simpler algorithms than in the general case.
Algorithms for bipartite graphs.
Flow-based algorithm.
The simplest way to compute a maximum cardinality matching is to follow the Ford–Fulkerson algorithm. This algorithm solves the more general problem of computing the maximum flow. A bipartite graph ("X" + "Y", "E") can be converted to a flow network as follows: add a source vertex "s" with an edge to every vertex in "X", add a sink vertex "t" with an edge from every vertex in "Y", direct every edge of "E" from "X" to "Y", and give every edge capacity 1.
Since each edge in the network has integral capacity, there exists a maximum flow where all flows are integers; these integers must be either 0 or 1 since all the capacities are 1. Each integral flow defines a matching in which an edge is in the matching if and only if its flow is 1. It is a matching because each vertex in "X" has a single incoming edge (from "s") of capacity 1, so at most one unit of flow can pass through it, and likewise each vertex in "Y" has a single outgoing edge (to "t") of capacity 1.
The Ford–Fulkerson algorithm proceeds by repeatedly finding an augmenting path from some "x" ∈ "X" to some "y" ∈ Y and updating the matching M by taking the symmetric difference of that path with M (assuming such a path exists). As each path can be found in "O"("E") time, the running time is "O"("VE"), and the maximum matching consists of the edges of E that carry flow from X to Y.
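A compact Python rendering of this augmenting-path idea for bipartite graphs (often presented as Kuhn's algorithm) is sketched below; it is an illustrative implementation with made-up variable names rather than code from any of the cited works, and it has the same "O"("VE") running time as the flow-based description above.
def max_bipartite_matching(adj, n_left, n_right):
    # adj[u] lists the right-side vertices adjacent to left vertex u
    match_right = [-1] * n_right                 # right vertex -> matched left vertex, or -1

    def try_augment(u, visited):
        # depth-first search for an augmenting path starting at left vertex u
        for v in adj[u]:
            if not visited[v]:
                visited[v] = True
                # v is free, or its current partner can be re-matched elsewhere
                if match_right[v] == -1 or try_augment(match_right[v], visited):
                    match_right[v] = u
                    return True
        return False

    matching_size = 0
    for u in range(n_left):
        if try_augment(u, [False] * n_right):
            matching_size += 1
    return matching_size, match_right

# toy graph: left vertices 0..2, right vertices 0..2
adj = [[0, 1], [0], [1, 2]]
print(max_bipartite_matching(adj, 3, 3))         # (3, [1, 0, 2]): a maximum matching of size 3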
Advanced algorithms.
An improvement to this algorithm is given by the more elaborate Hopcroft–Karp algorithm, which searches for multiple augmenting paths simultaneously. This algorithm runs in formula_0 time.
The algorithm of Chandran and Hochbaum for bipartite graphs runs in time that depends on the size "k" of the maximum matching, which for |"X"| ≤ |"Y"| is
formula_1
Using Boolean operations on words of size formula_2, the complexity is further improved to
formula_3
More efficient algorithms exist for special kinds of bipartite graphs; for example, for sparse bipartite graphs the problem can be solved in formula_4 time with an algorithm based on electrical flows.
Algorithms for arbitrary graphs.
The blossom algorithm finds a maximum-cardinality matching in general (not necessarily bipartite) graphs. It runs in time formula_5. A better running time of formula_0 for general graphs, matching the performance of the Hopcroft–Karp algorithm on bipartite graphs, can be achieved with the much more complicated algorithm of Micali and Vazirani. The same bound was achieved by an algorithm by Blum and an algorithm by Gabow and Tarjan.
An alternative approach uses randomization and is based on the fast matrix multiplication algorithm. This gives a randomized algorithm for general graphs with complexity formula_6. This is better in theory for sufficiently dense graphs, but in practice the algorithm is slower.
Other algorithms for the task are reviewed by Duan and Pettie (see Table I). In terms of approximation algorithms, they also point out that the blossom algorithm and the algorithms by Micali and Vazirani can be seen as approximation algorithms running in linear time for any fixed error bound.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O(\\sqrt{V}E)"
},
{
"math_id": 1,
"text": "O\\left(\\min\\{|X|k,E\\}+ \\sqrt{k} \\min \\{k^2,E\\}\\right)."
},
{
"math_id": 2,
"text": "\\lambda"
},
{
"math_id": 3,
"text": "O\\left(\\min \\left\\{|X|k, \\frac{|X||Y|}{\\lambda}, E\\right\\} + k^2 + \\frac{k^{2.5}}{\\lambda}\\right)."
},
{
"math_id": 4,
"text": "\\tilde{O}(E^{10/7})"
},
{
"math_id": 5,
"text": "O(|V|^2 \\cdot |E|)"
},
{
"math_id": 6,
"text": "O(V^{2.372})"
}
] |
https://en.wikipedia.org/wiki?curid=5989592
|
5989598
|
Maximum weight matching
|
Graph theory problem: find a matching with max total weight
In computer science and graph theory, the maximum weight matching problem is the problem of finding, in a weighted graph, a matching in which the sum of weights is maximized.
A special case of it is the assignment problem, in which the input is restricted to be a bipartite graph, and the matching is constrained to have cardinality equal to that of the smaller of the two partitions. Another special case is the problem of finding a maximum cardinality matching on an unweighted graph: this corresponds to the case where all edge weights are the same.
Algorithms.
There is a formula_0 time algorithm to find a maximum matching or a maximum weight matching in a graph that is not bipartite; it is due to Jack Edmonds, is called the "paths, trees, and flowers" method or simply Edmonds' algorithm, and uses bidirected edges. A generalization of the same technique can also be used to find maximum independent sets in claw-free graphs.
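In practice, a general-purpose implementation of maximum weight matching in general graphs is available, for example, in the NetworkX library; the snippet below is a usage sketch on a small graph whose edge weights are chosen arbitrarily.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ('a', 'b', 3.0),
    ('b', 'c', 4.0),
    ('c', 'd', 3.0),
    ('a', 'd', 1.0),
])

matching = nx.max_weight_matching(G)                       # set of matched pairs
print(matching)                                            # e.g. {('a', 'b'), ('c', 'd')}
print(sum(G[u][v]['weight'] for u, v in matching))         # total weight 6.0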
More elaborate algorithms exist and are reviewed by Duan and Pettie (see Table III). Their work proposes an approximation algorithm for the maximum weight matching problem, which runs in linear time for any fixed error bound.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "O(V^{2}E)"
}
] |
https://en.wikipedia.org/wiki?curid=5989598
|
5989665
|
One woodland terminal model
|
The ITU terrestrial model for one terminal in woodland is a radio propagation model belonging to the class of foliage models. This model is a successor of the early ITU model.
Applicable to/under conditions.
Applicable to the scenario where one terminal of a link is inside foliage and the other end is free.
Coverage.
Frequency: below 5 GHz
Depth of foliage: unspecified
Mathematical formulation.
The mathematical formulation of the model is:
formula_0
Where,
Av = Attenuation due to vegetation. Unit: decibel (dB)
A = Maximum attenuation for one terminal caused by a certain foliage. Unit: decibel (dB)
d = Depth of foliage along the path. Unit: meter (m)
formula_1 = Specific attenuation for very short vegetative paths. Unit: decibel per meter (dB/m)
Points to note.
The value of formula_2 depends on frequency and is an empirical constant.
The model assumes that exactly one of the terminals is located inside a forest or plantation, and the depth "d" is the distance from that terminal to the edge of the plantation, measured along the link.
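The formula is straightforward to evaluate numerically; the Python sketch below uses purely illustrative values for "A", formula_2 and "d" (they are not taken from any ITU recommendation) and shows how the attenuation saturates at "A" as the foliage depth grows.
import math

def vegetation_attenuation(A_max, gamma, depth):
    # Av = A * (1 - exp(-d * gamma / A)), with Av and A in dB, gamma in dB/m, depth in m
    return A_max * (1.0 - math.exp(-depth * gamma / A_max))

# illustrative parameters: maximum attenuation 30 dB, specific attenuation 0.5 dB/m
for d in (5, 20, 100, 500):
    print(d, round(vegetation_attenuation(30.0, 0.5, d), 2))
# the attenuation approaches the saturation value A (30 dB) for large foliage depths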
|
[
{
"math_id": 0,
"text": "A_v\\;=\\;A\\;[1\\;-\\;e^-{\\frac{d \\gamma}{A}}]"
},
{
"math_id": 1,
"text": "\\gamma\\;"
},
{
"math_id": 2,
"text": "\\gamma"
}
] |
https://en.wikipedia.org/wiki?curid=5989665
|
598971
|
Fisher information
|
Notion in statistics
In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable "X" carries about an unknown parameter "θ" of a distribution that models "X". Formally, it is the variance of the score, or the expected value of the observed information.
The role of the Fisher information in the asymptotic theory of maximum-likelihood estimation was emphasized and explored by the statistician Sir Ronald Fisher (following some initial results by Francis Ysidro Edgeworth). The Fisher information matrix is used to calculate the covariance matrices associated with maximum-likelihood estimates. It can also be used in the formulation of test statistics, such as the Wald test.
In Bayesian statistics, the Fisher information plays a role in the derivation of non-informative prior distributions according to Jeffreys' rule. It also appears as the large-sample covariance of the posterior distribution, provided that the prior is sufficiently smooth (a result known as Bernstein–von Mises theorem, which was anticipated by Laplace for exponential families). The same result is used when approximating the posterior with Laplace's approximation, where the Fisher information appears as the covariance of the fitted Gaussian.
Statistical systems of a scientific nature (physical, biological, etc.) whose likelihood functions obey shift invariance have been shown to obey maximum Fisher information. The level of the maximum depends upon the nature of the system constraints.
Definition.
The Fisher information is a way of measuring the amount of information that an observable random variable formula_0 carries about an unknown parameter formula_1 upon which the probability of formula_0 depends. Let formula_2 be the probability density function (or probability mass function) for formula_0 conditioned on the value of formula_1. It describes the probability that we observe a given outcome of formula_0, "given" a known value of formula_1. If formula_3 is sharply peaked with respect to changes in formula_1, it is easy to indicate the "correct" value of formula_1 from the data, or equivalently, that the data formula_0 provides a lot of information about the parameter formula_1. If formula_3 is flat and spread-out, then it would take many samples of formula_0 to estimate the actual "true" value of formula_1 that "would" be obtained using the entire population being sampled. This suggests studying some kind of variance with respect to formula_1.
Formally, the partial derivative with respect to formula_1 of the natural logarithm of the likelihood function is called the "score". Under certain regularity conditions, if formula_1 is the true parameter (i.e. formula_0 is actually distributed as formula_2), it can be shown that the expected value (the first moment) of the score, evaluated at the true parameter value formula_1, is 0:
formula_4
The Fisher information is defined to be the variance of the score:
formula_5
Note that formula_6. A random variable carrying high Fisher information implies that the absolute value of the score is often high. The Fisher information is not a function of a particular observation, as the random variable "X" has been averaged out.
If log "f"("x"; "θ") is twice differentiable with respect to "θ", and under certain regularity conditions, then the Fisher information may also be written as
formula_7
since
formula_8
and
formula_9
Thus, the Fisher information may be seen as the curvature of the support curve (the graph of the log-likelihood). Near the maximum likelihood estimate, low Fisher information therefore indicates that the maximum appears "blunt", that is, the maximum is shallow and there are many nearby values with a similar log-likelihood. Conversely, high Fisher information indicates that the maximum is sharp.
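As a numerical sanity check of the two equivalent expressions (variance of the score and negative expected curvature), the Monte Carlo sketch below treats the mean of a normal distribution with known standard deviation, for which the exact Fisher information is 1/σ²; the sample size and seed are arbitrary.
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 1.5, 2.0
x = rng.normal(mu, sigma, size=200_000)

# score of the normal model in its mean: d/d(mu) log f(x; mu) = (x - mu) / sigma**2
score = (x - mu) / sigma**2
print(np.var(score))            # Monte Carlo estimate, close to 1/sigma**2 = 0.25

# the second derivative of the log-density in mu is -1/sigma**2 for every x,
# so minus its expectation is again 1/sigma**2, matching the curvature form
print(1.0 / sigma**2)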
Regularity conditions.
The regularity conditions are as follows: (1) the partial derivative of "f"("x"; "θ") with respect to "θ" exists almost everywhere (it may fail to exist on a null set, as long as this set does not depend on "θ"); (2) the integral of "f"("x"; "θ") can be differentiated under the integral sign with respect to "θ"; (3) the support of "f"("x"; "θ") does not depend on "θ".
If "θ" is a vector then the regularity conditions must hold for every component of "θ". It is easy to find an example of a density that does not satisfy the regularity conditions: The density of a Uniform(0, "θ") variable fails to satisfy conditions 1 and 3. In this case, even though the Fisher information can be computed from the definition, it will not have the properties it is typically assumed to have.
In terms of likelihood.
Because the likelihood of "θ" given "X" is always proportional to the probability "f"("X"; "θ"), their logarithms necessarily differ by a constant that is independent of "θ", and the derivatives of these logarithms with respect to "θ" are necessarily equal. Thus one can substitute in a log-likelihood "l"("θ"; "X") instead of log "f"("X"; "θ") in the definitions of Fisher Information.
Samples of any size.
The value "X" can represent a single sample drawn from a single distribution or can represent a collection of samples drawn from a collection of distributions. If there are "n" samples and the corresponding "n" distributions are statistically independent then the Fisher information will necessarily be the sum of the single-sample Fisher information values, one for each single sample from its distribution. In particular, if the "n" distributions are independent and identically distributed then the Fisher information will necessarily be "n" times the Fisher information of a single sample from the common distribution. Stated in other words, the Fisher Information of i.i.d. observations of a sample of size n from a population is equal to the product of n and the Fisher Information of a single observation from the same population.
Informal derivation of the Cramér–Rao bound.
The Cramér–Rao bound states that the inverse of the Fisher information is a lower bound on the variance of any unbiased estimator of "θ". H.L. Van Trees (1968) and B. Roy Frieden (2004) provide the following method of deriving the Cramér–Rao bound, a result which describes use of the Fisher information.
Informally, we begin by considering an unbiased estimator formula_10. Mathematically, "unbiased" means that
formula_11
This expression is zero independent of "θ", so its partial derivative with respect to "θ" must also be zero. By the product rule, this partial derivative is also equal to
formula_12
For each "θ", the likelihood function is a probability density function, and therefore formula_13. By using the chain rule on the partial derivative of formula_14 and then dividing and multiplying by formula_15, one can verify that
formula_16
Using these two facts in the above, we get
formula_17
Factoring the integrand gives
formula_18
Squaring the expression in the integral, the Cauchy–Schwarz inequality yields
formula_19
The second bracketed factor is defined to be the Fisher Information, while the first bracketed factor is the expected mean-squared error of the estimator formula_20. By rearranging, the inequality tells us that
formula_21
In other words, the precision to which we can estimate "θ" is fundamentally limited by the Fisher information of the likelihood function.
Alternatively, the same conclusion can be obtained directly from the Cauchy–Schwarz inequality for random variables, formula_22, applied to the random variables formula_10 and formula_23, and observing that for unbiased estimators we haveformula_24
Examples.
Single-parameter Bernoulli experiment.
A Bernoulli trial is a random variable with two possible outcomes, 0 and 1, with 1 having a probability of "θ". The outcome can be thought of as determined by the toss of a biased coin, with the probability of heads (1) being "θ" and the probability of tails (0) being 1 − "θ".
Let "X" be a Bernoulli trial of one sample from the distribution. The Fisher information contained in "X" may be calculated to be:
formula_25
Because Fisher information is additive, the Fisher information contained in "n" independent Bernoulli trials is therefore
formula_26
If formula_27 is one of the formula_28 possible outcomes of "n" independent Bernoulli trials and formula_29 is the "j" th outcome of the "i" th trial, then the probability of formula_27 is given by:
formula_30
The mean of the "i" th trial is formula_31
The expected value of the mean of a trial is:
formula_32
where the sum is over all formula_28 possible trial outcomes. The expected value of the square of the means is:
formula_33
so the variance in the value of the mean is:
formula_34
It is seen that the Fisher information is the reciprocal of the variance of the proportion of successes in "n" Bernoulli trials. This is generally true. In this case, the Cramér–Rao bound is an equality.
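The single-trial value derived above can be checked by direct enumeration; the brief Python sketch below computes the mean and variance of the score over the two outcomes of one Bernoulli trial (the value of "θ" is arbitrary).
theta = 0.3

def score(x, theta):
    # derivative of log(theta**x * (1 - theta)**(1 - x)) with respect to theta
    return x / theta - (1 - x) / (1 - theta)

mean_score = (1 - theta) * score(0, theta) + theta * score(1, theta)
info = (1 - theta) * score(0, theta) ** 2 + theta * score(1, theta) ** 2
print(mean_score)                          # 0.0 up to floating-point error
print(info, 1 / (theta * (1 - theta)))     # both approximately 4.7619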
Estimate formula_1 from formula_35.
As another toy example consider a random variable formula_0 with possible outcomes 0 and 1, with probabilities formula_36 and formula_37, respectively, for some formula_38. Our goal is estimating formula_1 from observations of formula_0.
The Fisher information in this case reads
formula_39
This expression can also be derived directly from the reparametrization formula given below. More generally, for any sufficiently regular function formula_3 such that formula_40, the Fisher information to retrieve formula_1 from formula_41 is similarly computed to be
formula_42
Matrix form.
When there are "N" parameters, so that "θ" is an "N" × 1 vector formula_43 then the Fisher information takes the form of an "N" × "N" matrix. This matrix is called the Fisher information matrix (FIM) and has typical element
formula_44
The FIM is a "N" × "N" positive semidefinite matrix. If it is positive definite, then it defines a Riemannian metric on the "N"-dimensional parameter space. The topic information geometry uses this to connect Fisher information to differential geometry, and in that context, this metric is known as the Fisher information metric.
Under certain regularity conditions, the Fisher information matrix may also be written as
formula_45
The result is interesting in several ways; in particular, it expresses the Fisher information matrix as the negative expected Hessian of the log-likelihood.
Information orthogonal parameters.
We say that two parameter component vectors "θ1" and "θ2" are information orthogonal if the Fisher information matrix is block diagonal, with these components in separate blocks. Orthogonal parameters are easy to deal with in the sense that their maximum likelihood estimates are asymptotically uncorrelated. When considering how to analyse a statistical model, the modeller is advised to invest some time searching for an orthogonal parametrization of the model, in particular when the parameter of interest is one-dimensional, but the nuisance parameter can have any dimension.
Singular statistical model.
If the Fisher information matrix is positive definite for all θ, then the corresponding statistical model is said to be "regular"; otherwise, the statistical model is said to be "singular". Examples of singular statistical models include the following: normal mixtures, binomial mixtures, multinomial mixtures, Bayesian networks, neural networks, radial basis functions, hidden Markov models, stochastic context-free grammars, reduced rank regressions, Boltzmann machines.
In machine learning, if a statistical model is devised so that it extracts hidden structure from a random phenomenon, then it naturally becomes singular.
Multivariate normal distribution.
The FIM for a "N"-variate multivariate normal distribution, formula_46 has a special form. Let the "K"-dimensional vector of parameters be formula_47 and the vector of random normal variables be formula_48. Assume that the mean values of these random variables are formula_49, and let formula_50 be the covariance matrix. Then, for formula_51, the ("m", "n") entry of the FIM is:
formula_52
where formula_53 denotes the transpose of a vector, formula_54 denotes the trace of a square matrix, and:
formula_55
Note that a special, but very common, case is the one where formula_56, a constant. Then
formula_57
In this case the Fisher information matrix may be identified with the coefficient matrix of the normal equations of least squares estimation theory.
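For this constant-covariance case the matrix can be assembled directly from the Jacobian of the mean; the NumPy sketch below uses a small, purely illustrative linear mean model μ("θ") = "M""θ" with an arbitrary covariance.
import numpy as np

# Jacobian d(mu)/d(theta) of an illustrative linear mean model with 3 observations, 2 parameters
M = np.array([[1.0, 0.5],
              [0.0, 1.0],
              [2.0, 1.0]])
Sigma = np.diag([1.0, 0.5, 2.0])                 # constant covariance of the observations

fim = M.T @ np.linalg.inv(Sigma) @ M             # I_{m,n} = (dmu/dtheta_m)^T Sigma^{-1} (dmu/dtheta_n)
print(fim)                                       # coefficient matrix of the least-squares normal equations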
Another special case occurs when the mean and covariance depend on two different vector parameters, say, "β" and "θ". This is especially popular in the analysis of spatial data, which often uses a linear model with correlated residuals. In this case,
formula_58
where
formula_59
Properties.
Chain rule.
Similar to the entropy or mutual information, the Fisher information also possesses a chain rule decomposition. In particular, if "X" and "Y" are jointly distributed random variables, it follows that:
formula_60
where formula_61 and formula_62 is the Fisher information of "Y" relative to formula_1 calculated with respect to the conditional density of "Y" given a specific value "X" = "x".
As a special case, if the two random variables are independent, the information yielded by the two random variables is the sum of the information from each random variable separately:
formula_63
Consequently, the information in a random sample of "n" independent and identically distributed observations is "n" times the information in a sample of size 1.
F-divergence.
Given a convex function formula_64 such that formula_65 is finite for all formula_66, formula_67, and formula_68 (which could be infinite), it defines an f-divergence formula_69. If formula_3 is strictly convex at formula_70, then locally at formula_71 the Fisher information matrix is a metric, in the sense that
formula_72
where formula_73 is the distribution parametrized by formula_1, that is, the distribution with pdf formula_74.
In this form, it is clear that the Fisher information matrix is a Riemannian metric, and varies correctly under a change of variables. (see section on Reparameterization.)
Sufficient statistic.
The information provided by a sufficient statistic is the same as that of the sample "X". This may be seen by using Neyman's factorization criterion for a sufficient statistic. If "T"("X") is sufficient for "θ", then
formula_75
for some functions "g" and "h". The independence of "h"("X") from "θ" implies
formula_76
and the equality of information then follows from the definition of Fisher information. More generally, if "T" = "t"("X") is a statistic, then
formula_77
with equality if and only if "T" is a sufficient statistic.
Reparameterization.
The Fisher information depends on the parametrization of the problem. If "θ" and "η" are two scalar parametrizations of an estimation problem, and "θ" is a continuously differentiable function of "η", then
formula_78
where formula_79 and formula_80 are the Fisher information measures of "η" and "θ", respectively.
In the vector case, suppose formula_81 and formula_82 are "k"-vectors which parametrize an estimation problem, and suppose that formula_81 is a continuously differentiable function of formula_82, then,
formula_83
where the ("i", "j")th element of the "k" × "k" Jacobian matrix formula_84 is defined by
formula_85
and where formula_86 is the matrix transpose of formula_87
In information geometry, this is seen as a change of coordinates on a Riemannian manifold, and the intrinsic properties of curvature are unchanged under different parametrizations. In general, the Fisher information matrix provides a Riemannian metric (more precisely, the Fisher–Rao metric) for the manifold of thermodynamic states, and can be used as an information-geometric complexity measure for a classification of phase transitions, e.g., the scalar curvature of the thermodynamic metric tensor diverges at (and only at) a phase transition point.
In the thermodynamic context, the Fisher information matrix is directly related to the rate of change in the corresponding order parameters. In particular, such relations identify second-order phase transitions via divergences of individual elements of the Fisher information matrix.
Isoperimetric inequality.
The Fisher information matrix plays a role in an inequality like the isoperimetric inequality. Of all probability distributions with a given entropy, the one whose Fisher information matrix has the smallest trace is the Gaussian distribution. This is like how, of all bounded sets with a given volume, the sphere has the smallest surface area.
The proof involves taking a multivariate random variable formula_0 with density function formula_3 and adding a location parameter to form a family of densities formula_88. Then, by analogy with the Minkowski–Steiner formula, the "surface area" of formula_0 is defined to be
formula_89
where formula_90 is a Gaussian variable with covariance matrix formula_91. The name "surface area" is apt because the entropy power formula_92 is the volume of the "effective support set," so formula_93 is the "derivative" of the volume of the effective support set, much like the Minkowski-Steiner formula. The remainder of the proof uses the entropy power inequality, which is like the Brunn–Minkowski inequality. The trace of the Fisher information matrix is found to be a factor of formula_93.
Applications.
Optimal design of experiments.
Fisher information is widely used in optimal experimental design. Because of the reciprocity of estimator-variance and Fisher information, "minimizing" the "variance" corresponds to "maximizing" the "information".
When the linear (or linearized) statistical model has several parameters, the mean of the parameter estimator is a vector and its variance is a matrix. The inverse of the variance matrix is called the "information matrix". Because the variance of the estimator of a parameter vector is a matrix, the problem of "minimizing the variance" is complicated. Using statistical theory, statisticians compress the information-matrix using real-valued summary statistics; being real-valued functions, these "information criteria" can be maximized.
Traditionally, statisticians have evaluated estimators and designs by considering some summary statistic of the covariance matrix (of an unbiased estimator), usually with positive real values (like the determinant or matrix trace). Working with positive real numbers brings several advantages: If the estimator of a single parameter has a positive variance, then the variance and the Fisher information are both positive real numbers; hence they are members of the convex cone of nonnegative real numbers (whose nonzero members have reciprocals in this same cone).
For several parameters, the covariance matrices and information matrices are elements of the convex cone of nonnegative-definite symmetric matrices in a partially ordered vector space, under the Loewner (Löwner) order. This cone is closed under matrix addition and inversion, as well as under the multiplication of positive real numbers and matrices. An exposition of matrix theory and Loewner order appears in Pukelsheim.
The traditional optimality criteria are the information matrix's invariants, in the sense of invariant theory; algebraically, the traditional optimality criteria are functionals of the eigenvalues of the (Fisher) information matrix (see optimal design).
Jeffreys prior in Bayesian statistics.
In Bayesian statistics, the Fisher information is used to calculate the Jeffreys prior, which is a standard, non-informative prior for continuous distribution parameters.
Computational neuroscience.
The Fisher information has been used to find bounds on the accuracy of neural codes. In that case, "X" is typically the joint responses of many neurons representing a low dimensional variable "θ" (such as a stimulus parameter). In particular the role of correlations in the noise of the neural responses has been studied.
Epidemiology.
Fisher information was used to study how informative different data sources are for estimation of the reproduction number of SARS-CoV-2.
Derivation of physical laws.
Fisher information plays a central role in a controversial principle put forward by Frieden as the basis of physical laws, a claim that has been disputed.
Machine learning.
The Fisher information is used in machine learning techniques such as elastic weight consolidation, which reduces catastrophic forgetting in artificial neural networks.
Fisher information can be used as an alternative to the Hessian of the loss function in second-order gradient descent network training.
Color discrimination.
Using a Fisher information metric, da Fonseca et al. investigated the degree to which MacAdam ellipses (color discrimination ellipses) can be derived from the response functions of the retinal photoreceptors.
Relation to relative entropy.
Fisher information is related to relative entropy. The relative entropy, or Kullback–Leibler divergence, between two distributions formula_94 and formula_95 can be written as
formula_96
Now, consider a family of probability distributions formula_74 parametrized by formula_97. Then the Kullback–Leibler divergence between two distributions in the family can be written as
formula_98
If formula_1 is fixed, then the relative entropy between two distributions of the same family is minimized at formula_99. For formula_100 close to formula_1, one may expand the previous expression in a series up to second order:
formula_101
But the second order derivative can be written as
formula_102
Thus the Fisher information represents the curvature of the relative entropy of a conditional distribution with respect to its parameters.
History.
The Fisher information was discussed by several early statisticians, notably F. Y. Edgeworth. For example, Savage says: "In it [Fisher information], he [Fisher] was to some extent anticipated (Edgeworth 1908–9 esp. 502, 507–8, 662, 677–8, 82–5 and references he [Edgeworth] cites including Pearson and Filon 1898 [. . .])." There are a number of early historical sources and a number of reviews of this early work.
See also.
Other measures employed in information theory:
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "X"
},
{
"math_id": 1,
"text": "\\theta"
},
{
"math_id": 2,
"text": "f(X;\\theta)"
},
{
"math_id": 3,
"text": "f"
},
{
"math_id": 4,
"text": "\\begin{align}\n \\operatorname{E} \\left[\\left. \\frac{\\partial}{\\partial\\theta} \\log f(X;\\theta)\\,\\,\\right|\\,\\,\\theta \\right] \n ={} &\\int_{\\mathbb{R}} \\frac{\\frac{\\partial}{\\partial\\theta} f(x;\\theta)}{f(x; \\theta)} f(x;\\theta)\\,dx \\\\[6pt]\n ={} &\\frac{\\partial}{\\partial\\theta} \\int_{\\mathbb{R}} f(x; \\theta)\\,dx \\\\[6pt]\n ={} &\\frac{\\partial}{\\partial\\theta} 1 \\\\[6pt]\n ={} & 0.\n\\end{align}"
},
{
"math_id": 5,
"text": " \\mathcal{I}(\\theta) = \\operatorname{E} \\left[\\left. \\left(\\frac{\\partial}{\\partial\\theta} \\log f(X;\\theta)\\right)^2 \\,\\, \\right| \\,\\, \\theta \\right] = \\int_{\\mathbb{R}} \\left(\\frac{\\partial}{\\partial\\theta} \\log f(x;\\theta)\\right)^2 f(x; \\theta)\\,dx,"
},
{
"math_id": 6,
"text": "\\mathcal{I}(\\theta) \\geq 0"
},
{
"math_id": 7,
"text": " \\mathcal{I}(\\theta) = - \\operatorname{E} \\left[\\left. \\frac{\\partial^2}{\\partial\\theta^2} \\log f(X;\\theta) \\,\\, \\right| \\,\\, \\theta \\right],"
},
{
"math_id": 8,
"text": "\\frac{\\partial^2}{\\partial\\theta^2} \\log f(X;\\theta) = \\frac{\\frac{\\partial^2}{\\partial\\theta^2} f(X;\\theta)}{f(X; \\theta)} - \\left( \\frac{\\frac{\\partial}{\\partial\\theta} f(X;\\theta)}{f(X; \\theta)} \\right)^2\n= \\frac{\\frac{\\partial^2}{\\partial\\theta^2} f(X;\\theta)}{f(X; \\theta)} - \\left( \\frac{\\partial}{\\partial\\theta} \\log f(X;\\theta)\\right)^2 "
},
{
"math_id": 9,
"text": " \\operatorname{E} \\left[\\left. \\frac{\\frac{\\partial^2}{\\partial\\theta^2} f(X;\\theta)}{f(X; \\theta)} \\,\\, \\right| \\,\\, \\theta \\right] = \\frac{\\partial^2}{\\partial\\theta^2} \\int_{\\mathbb{R}} f(x;\\theta)\\,dx = 0. "
},
{
"math_id": 10,
"text": "\\hat\\theta(X)"
},
{
"math_id": 11,
"text": "\n\\operatorname{E}\\left[ \\left. \\hat\\theta(X) - \\theta \\,\\, \\right| \\,\\, \\theta \\right]\n= \\int \\left(\\hat\\theta(x) - \\theta\\right) \\, f(x ;\\theta) \\, dx = 0 \\text{ regardless of the value of } \\theta.\n"
},
{
"math_id": 12,
"text": "\n0 = \\frac{\\partial}{\\partial\\theta} \\int \\left(\\hat\\theta(x) - \\theta \\right) \\, f(x ;\\theta) \\,dx\n= \\int \\left(\\hat\\theta(x)-\\theta\\right) \\frac{\\partial f}{\\partial\\theta} \\, dx - \\int f \\,dx.\n"
},
{
"math_id": 13,
"text": "\\int f\\,dx = 1"
},
{
"math_id": 14,
"text": "\\log f"
},
{
"math_id": 15,
"text": "f(x;\\theta)"
},
{
"math_id": 16,
"text": "\\frac{\\partial f}{\\partial\\theta} = f \\, \\frac{\\partial \\log f}{\\partial\\theta}."
},
{
"math_id": 17,
"text": "\n\\int \\left(\\hat\\theta-\\theta\\right) f \\, \\frac{\\partial \\log f}{\\partial\\theta} \\, dx = 1.\n"
},
{
"math_id": 18,
"text": "\n\\int \\left(\\left(\\hat\\theta-\\theta\\right) \\sqrt{f} \\right) \\left( \\sqrt{f} \\, \\frac{\\partial \\log f}{\\partial\\theta} \\right) \\, dx = 1.\n"
},
{
"math_id": 19,
"text": "\n1 =\n\\biggl( \\int \\left[\\left(\\hat\\theta-\\theta\\right) \\sqrt{f} \\right] \\cdot \\left[ \\sqrt{f} \\, \\frac{\\partial \\log f}{\\partial\\theta} \\right] \\, dx \\biggr)^2\n\\le\n\\left[ \\int \\left(\\hat\\theta - \\theta\\right)^2 f \\, dx \\right] \\cdot \\left[ \\int \\left( \\frac{\\partial \\log f}{\\partial\\theta} \\right)^2 f \\, dx \\right].\n"
},
{
"math_id": 20,
"text": "\\hat\\theta"
},
{
"math_id": 21,
"text": "\n\\operatorname{Var}\\left(\\hat\\theta\\right) \\geq \\frac{1}{\\mathcal{I}\\left(\\theta\\right)}.\n"
},
{
"math_id": 22,
"text": "|\\operatorname{Cov}(A,B)|^2 \\le \\operatorname{Var}(A)\\operatorname{Var}(B)"
},
{
"math_id": 23,
"text": "\\partial_\\theta\\log f(X;\\theta)"
},
{
"math_id": 24,
"text": "\\operatorname{Cov}[\\hat\\theta(X),\\partial_\\theta \\log f(X;\\theta)] =\n\\int \\hat\\theta(x)\\, \\partial_\\theta f(x;\\theta)\\, dx = \\partial_\\theta \\operatorname E[\\hat\\theta] = 1."
},
{
"math_id": 25,
"text": "\\begin{align}\n \\mathcal{I}(\\theta)\n &= -\\operatorname{E}\\left[\\left. \\frac{\\partial^2}{\\partial\\theta^2} \\log\\left(\\theta^X (1 - \\theta)^{1 - X}\\right)\\right|\\theta\\right] \\\\[5pt]\n &= -\\operatorname{E}\\left[\\left. \\frac{\\partial^2}{\\partial\\theta^2} \\left(X\\log\\theta + (1 - X)\\log(1 - \\theta)\\right) \\,\\, \\right| \\,\\, \\theta \\right] \\\\[5pt]\n &= \\operatorname{E}\\left[\\left. \\frac{X}{\\theta^2} + \\frac{1 - X}{(1 - \\theta)^2} \\,\\, \\right| \\,\\, \\theta\\right] \\\\[5pt]\n &= \\frac{\\theta}{\\theta^2} + \\frac{1 - \\theta}{(1 - \\theta)^2} \\\\[5pt]\n &= \\frac{1}{\\theta(1 - \\theta)}.\n\\end{align}"
},
{
"math_id": 26,
"text": "\\mathcal{I}(\\theta) = \\frac{n}{\\theta(1 - \\theta)}."
},
{
"math_id": 27,
"text": "x_i"
},
{
"math_id": 28,
"text": "2^n"
},
{
"math_id": 29,
"text": "x_{ij}"
},
{
"math_id": 30,
"text": "p(x_i,\\theta)=\\prod_{j=0}^n \\theta^{x_{ij}}(1-\\theta)^{x_{ij}} "
},
{
"math_id": 31,
"text": "\\mu_i = (1/n)\\sum_{j=1}^n x_{ij}"
},
{
"math_id": 32,
"text": "E(\\mu)=\\sum_{x_i} \\mu_i \\, p(x_i,\\theta) = \\theta"
},
{
"math_id": 33,
"text": "E(\\mu^2)=\\sum_{x_i} \\mu_i^2 \\, p(x_i,\\theta) = \\frac{(1+(n-1)\\theta)\\theta}{n}"
},
{
"math_id": 34,
"text": "E(\\mu^2)-E(\\mu)^2 = (1/n)\\theta(1-\\theta)"
},
{
"math_id": 35,
"text": "X\\sim \\operatorname{Bern}(\\sqrt\\theta)"
},
{
"math_id": 36,
"text": "p_0=1-\\sqrt\\theta"
},
{
"math_id": 37,
"text": "p_1=\\sqrt\\theta"
},
{
"math_id": 38,
"text": "\\theta\\in[0,1]"
},
{
"math_id": 39,
"text": "\\begin{align}\n\\mathcal I(\\theta) &= \\mathrm E\\left[\n\\left(\\frac{\\partial}{\\partial\\theta} \\log \n f(X;\\theta)\\right)^2\\Bigg| \\,\\theta\n\\right]\n\\\\&= (1-\\sqrt\\theta)\\left(\\frac{-1}{2\\sqrt\\theta(1-\\sqrt\\theta)}\\right)^2\n+ \\sqrt\\theta\\left(\\frac{1}{2\\theta}\\right)^2 \\\\\n&= \\frac{1}{4\\theta}\\left(\\frac{1}{1-\\sqrt\\theta} + \\frac{1}{\\sqrt\\theta}\\right)\n\\end{align}."
},
{
"math_id": 40,
"text": "f(\\theta)\\in[0,1]"
},
{
"math_id": 41,
"text": "X\\sim\\operatorname{Bern}(f(\\theta))"
},
{
"math_id": 42,
"text": "\\mathcal I(\\theta) = f'(\\theta)^2 \\left(\\frac{1}{1-f(\\theta)}+\\frac{1}{f(\\theta)} \\right)."
},
{
"math_id": 43,
"text": "\\theta = \\begin{bmatrix}\\theta_1 & \\theta_2 & \\dots & \\theta_N\\end{bmatrix}^\\textsf{T},"
},
{
"math_id": 44,
"text": "\n \\bigl[\\mathcal{I}(\\theta)\\bigr]_{i, j} =\n \\operatorname{E}\\left[\\left.\n \\left(\\frac{\\partial}{\\partial\\theta_i} \\log f(X;\\theta)\\right)\n \\left(\\frac{\\partial}{\\partial\\theta_j} \\log f(X;\\theta)\\right)\n \\,\\, \\right| \\,\\,\\theta\\right].\n"
},
{
"math_id": 45,
"text": "\n \\bigl[\\mathcal{I}(\\theta) \\bigr]_{i, j} =\n -\\operatorname{E}\\left[\\left.\n \\frac{\\partial^2}{\\partial\\theta_i\\, \\partial\\theta_j} \\log f(X;\\theta)\n \\,\\, \\right| \\,\\, \\theta\\right]\\,.\n"
},
{
"math_id": 46,
"text": "\\,X \\sim N\\left(\\mu(\\theta),\\, \\Sigma(\\theta)\\right)"
},
{
"math_id": 47,
"text": "\\theta = \\begin{bmatrix} \\theta_1 & \\dots & \\theta_K \\end{bmatrix}^\\textsf{T}"
},
{
"math_id": 48,
"text": "X = \\begin{bmatrix} X_1 & \\dots & X_N \\end{bmatrix}^\\textsf{T}"
},
{
"math_id": 49,
"text": "\\,\\mu(\\theta) = \\begin{bmatrix} \\mu_1(\\theta) & \\dots & \\mu_N(\\theta) \\end{bmatrix}^\\textsf{T}"
},
{
"math_id": 50,
"text": "\\,\\Sigma(\\theta)"
},
{
"math_id": 51,
"text": "1 \\le m,\\, n \\le K"
},
{
"math_id": 52,
"text": "\n \\mathcal{I}_{m,n} =\n \\frac{\\partial\\mu^\\textsf{T}}{\\partial\\theta_m}\\Sigma^{-1}\n \\frac{\\partial\\mu}{\\partial\\theta_n} +\n \\frac{1}{2}\\operatorname{tr}\\left(\n \\Sigma^{-1}\\frac{\\partial\\Sigma}{\\partial\\theta_m}\n \\Sigma^{-1}\\frac{\\partial\\Sigma}{\\partial\\theta_n}\n \\right),\n"
},
{
"math_id": 53,
"text": "(\\cdot)^\\textsf{T}"
},
{
"math_id": 54,
"text": "\\operatorname{tr}(\\cdot)"
},
{
"math_id": 55,
"text": "\\begin{align}\n \\frac{\\partial \\mu}{\\partial \\theta_m} &=\n \\begin{bmatrix}\n \\dfrac{\\partial\\mu_1}{\\partial\\theta_m} &\n \\dfrac{\\partial\\mu_2}{\\partial\\theta_m} &\n \\cdots &\n \\dfrac{\\partial\\mu_N}{\\partial\\theta_m}\n \\end{bmatrix}^\\textsf{T}; \\\\[8pt]\n \\dfrac{\\partial \\Sigma}{\\partial \\theta_m} &=\n \\begin{bmatrix}\n \\dfrac{\\partial\\Sigma_{1,1}}{\\partial\\theta_m} &\n \\dfrac{\\partial\\Sigma_{1,2}}{\\partial\\theta_m} &\n \\cdots &\n \\dfrac{\\partial\\Sigma_{1,N}}{\\partial\\theta_m} \\\\[5pt]\n \\dfrac{\\partial\\Sigma_{2,1}}{\\partial\\theta_m} &\n \\dfrac{\\partial\\Sigma_{2,2}}{\\partial\\theta_m} &\n \\cdots &\n \\dfrac{\\partial\\Sigma_{2,N}}{\\partial\\theta_m} \\\\\n \\vdots & \\vdots & \\ddots & \\vdots \\\\\n \\dfrac{\\partial\\Sigma_{N,1}}{\\partial\\theta_m} &\n \\dfrac{\\partial\\Sigma_{N,2}}{\\partial\\theta_m} &\n \\cdots &\n \\dfrac{\\partial\\Sigma_{N,N}}{\\partial\\theta_m}\n \\end{bmatrix}.\n\\end{align}"
},
{
"math_id": 56,
"text": "\\Sigma(\\theta) = \\Sigma"
},
{
"math_id": 57,
"text": "\n \\mathcal{I}_{m,n} =\n \\frac{\\partial\\mu^\\textsf{T}}{\\partial\\theta_m}\\Sigma^{-1}\n \\frac{\\partial\\mu}{\\partial\\theta_n}.\\ \n"
},
{
"math_id": 58,
"text": "\\mathcal{I}(\\beta, \\theta) = \\operatorname{diag}\\left(\\mathcal{I}(\\beta), \\mathcal{I}(\\theta)\\right)"
},
{
"math_id": 59,
"text": "\\begin{align}\n \\mathcal{I}{(\\beta)_{m,n}} &= \\frac{\\partial\\mu^\\textsf{T}}{\\partial\\beta_m} \\Sigma^{-1} \\frac{\\partial\\mu}{\\partial\\beta_n}, \\\\[5pt]\n \\mathcal{I}{(\\theta)_{m,n}} &= \\frac{1}{2}\\operatorname{tr}\\left(\\Sigma^{-1} \\frac{\\partial \\Sigma}{\\partial\\theta_m}{\\Sigma^{-1}}\\frac{\\partial\\Sigma}{\\partial\\theta_n}\\right)\n\\end{align}"
},
{
"math_id": 60,
"text": "\\mathcal{I}_{X,Y}(\\theta) = \\mathcal{I}_X(\\theta) + \\mathcal{I}_{Y\\mid X}(\\theta),"
},
{
"math_id": 61,
"text": "\\mathcal{I}_{Y\\mid X}(\\theta) = \\operatorname{E}_{X} \\left[ \\mathcal{I}_{Y\\mid X = x}(\\theta) \\right] "
},
{
"math_id": 62,
"text": " \\mathcal{I}_{Y\\mid X = x}(\\theta) "
},
{
"math_id": 63,
"text": "\\mathcal{I}_{X,Y}(\\theta) = \\mathcal{I}_X(\\theta) + \\mathcal{I}_Y(\\theta)."
},
{
"math_id": 64,
"text": "f: [0, \\infty)\\to(-\\infty, \\infty]"
},
{
"math_id": 65,
"text": "f(x)"
},
{
"math_id": 66,
"text": "x > 0"
},
{
"math_id": 67,
"text": "f(1)=0"
},
{
"math_id": 68,
"text": "f(0)=\\lim_{t\\to 0^+} f(t) "
},
{
"math_id": 69,
"text": "D_f"
},
{
"math_id": 70,
"text": "1"
},
{
"math_id": 71,
"text": "\\theta\\in\\Theta"
},
{
"math_id": 72,
"text": "(\\delta\\theta)^T I(\\theta) (\\delta\\theta) = \\frac{1}{f''(1)}D_f(P_{\\theta+\\delta\\theta} \\parallel P_\\theta)"
},
{
"math_id": 73,
"text": "P_\\theta"
},
{
"math_id": 74,
"text": "f(x; \\theta)"
},
{
"math_id": 75,
"text": "f(X; \\theta) = g(T(X), \\theta) h(X)"
},
{
"math_id": 76,
"text": "\\frac{\\partial}{\\partial\\theta} \\log \\left[f(X; \\theta)\\right] = \\frac{\\partial}{\\partial\\theta} \\log\\left[g(T(X);\\theta)\\right],"
},
{
"math_id": 77,
"text": " \\mathcal{I}_T(\\theta) \\leq \\mathcal{I}_X(\\theta) "
},
{
"math_id": 78,
"text": "{\\mathcal I}_\\eta(\\eta) = {\\mathcal I}_\\theta(\\theta(\\eta)) \\left( \\frac{d\\theta}{d\\eta} \\right)^2"
},
{
"math_id": 79,
"text": "{\\mathcal I}_\\eta"
},
{
"math_id": 80,
"text": "{\\mathcal I}_\\theta"
},
{
"math_id": 81,
"text": "{\\boldsymbol \\theta}"
},
{
"math_id": 82,
"text": "{\\boldsymbol \\eta}"
},
{
"math_id": 83,
"text": "{\\mathcal I}_{\\boldsymbol \\eta}({\\boldsymbol \\eta}) = {\\boldsymbol J}^\\textsf{T} {\\mathcal I}_{\\boldsymbol \\theta} ({\\boldsymbol \\theta}({\\boldsymbol \\eta})) {\\boldsymbol J}\n"
},
{
"math_id": 84,
"text": "\\boldsymbol J"
},
{
"math_id": 85,
"text": "J_{ij} = \\frac{\\partial \\theta_i}{\\partial \\eta_j},"
},
{
"math_id": 86,
"text": "{\\boldsymbol J}^\\textsf{T}"
},
{
"math_id": 87,
"text": "{\\boldsymbol J}."
},
{
"math_id": 88,
"text": "\\{f(x-\\theta) \\mid \\theta \\in \\mathbb{R}^n\\}"
},
{
"math_id": 89,
"text": "S(X) = \\lim_{\\varepsilon \\to 0} \\frac{e^{H(X+Z_\\varepsilon)} - e^{H(X)}}{\\varepsilon}"
},
{
"math_id": 90,
"text": "Z_\\varepsilon"
},
{
"math_id": 91,
"text": "\\varepsilon I"
},
{
"math_id": 92,
"text": "e^{H(X)}"
},
{
"math_id": 93,
"text": "S(X)"
},
{
"math_id": 94,
"text": "p"
},
{
"math_id": 95,
"text": "q"
},
{
"math_id": 96,
"text": "KL(p:q) = \\int p(x)\\log\\frac{p(x)}{q(x)} \\, dx."
},
{
"math_id": 97,
"text": "\\theta \\in \\Theta"
},
{
"math_id": 98,
"text": "D(\\theta,\\theta') = KL(p({}\\cdot{};\\theta):p({}\\cdot{};\\theta'))= \\int f(x; \\theta)\\log\\frac{f(x;\\theta)}{f(x; \\theta')} \\, dx."
},
{
"math_id": 99,
"text": "\\theta'=\\theta"
},
{
"math_id": 100,
"text": "\\theta'"
},
{
"math_id": 101,
"text": "D(\\theta,\\theta') = \\frac{1}{2}(\\theta' - \\theta)^\\textsf{T} \\left(\\frac{\\partial^2}{\\partial\\theta'_i\\, \\partial\\theta'_j} D(\\theta,\\theta')\\right)_{\\theta'=\\theta}(\\theta' - \\theta) + o\\left( (\\theta'-\\theta)^2 \\right)"
},
{
"math_id": 102,
"text": " \\left(\\frac{\\partial^2}{\\partial\\theta'_i\\, \\partial\\theta'_j} D(\\theta,\\theta')\\right)_{\\theta'=\\theta} = - \\int f(x; \\theta) \\left( \\frac{\\partial^2}{\\partial\\theta'_i\\, \\partial\\theta'_j} \\log(f(x; \\theta'))\\right)_{\\theta'=\\theta} \\, dx = [\\mathcal{I}(\\theta)]_{i,j}. "
}
] |
https://en.wikipedia.org/wiki?curid=598971
|
599012
|
Pocket Cube
|
2x2x2 combination puzzle
The Pocket Cube (also known as the Mini Cube) is a 2×2×2 combination puzzle invented in 1970 by American puzzle designer Larry D. Nichols. The cube consists of 8 pieces, which are all corners.
History.
In February 1970, Larry D. Nichols invented a 2×2×2 "Puzzle with Pieces Rotatable in Groups" and filed a Canadian patent application for it. Nichols's cube was held together with magnets. Nichols was granted U.S. patent 3655201 on April 11, 1972, two years before Rubik invented his Cube.
Nichols assigned his patent to his employer Moleculon Research Corp., which sued Ideal in 1982. In 1984, Ideal lost the patent infringement suit and appealed. In 1986, the appeals court affirmed the judgment that Rubik's 2×2×2 Pocket Cube infringed Nichols's patent, but overturned the judgment on Rubik's 3×3×3 Cube.
Group Theory.
The group theory of the 3×3×3 cube can be transferred to the 2×2×2 cube. The elements of the group are typically the moves that can be executed on the cube (both individual rotations of layers and composite moves made up of several rotations), and the group operator is the concatenation of moves.
To analyse the group of the 2×2×2 cube, the cube configuration has to be determined. This can be represented as a 2-tuple made up of the following parameters: the permutation of the eight corner pieces and the orientation of each corner piece.
Two moves formula_0 and formula_1 from the set formula_2 of all moves are considered equal if they produce the same configuration when applied to the same initial configuration of the cube. With the 2×2×2 cube, it must also be taken into account that there is no fixed orientation or top side of the cube, because the 2×2×2 cube has no fixed center pieces. Therefore, the equivalence relation formula_3 is introduced, with formula_4 and formula_1 resulting in the same cube configuration (allowing for a rotation of the whole cube). This relation is reflexive, as two identical moves transform the cube into the same final configuration from the same initial configuration. In addition, the relation is symmetric and transitive, as it is similar to the mathematical relation of equality.
With this equivalence relation, equivalence classes can be formed that are defined with formula_5 on the set of all moves formula_2. Accordingly, each equivalence class formula_6 contains all moves of the set formula_2 that are equivalent to the move with the equivalence relation. formula_6 is a subset of formula_2. All equivalent elements of an equivalence class formula_6 are the representatives of its equivalence class.
The quotient set formula_7 can be formed using these equivalence classes. It contains the equivalence classes of all cube moves without containing the same moves twice. The elements of formula_7 are all equivalence classes with regard to the equivalence relation formula_3. The following therefore applies: formula_8. This quotient set is the set of the group of the cube.
The 2×2×2 Rubik's Cube has eight permutation objects (corner pieces), three possible orientations for each of the eight corner pieces, and 24 possible rotations of the cube in space, as there is no unique top side.
Any permutation of the eight corners is possible (8! positions), and seven of them can be independently rotated with three possible orientations (3^7 positions). There is nothing identifying the orientation of the cube in space, reducing the positions by a factor of 24. This is because all 24 possible positions and orientations of the first corner are equivalent due to the lack of fixed centers (similar to what happens in circular permutations). This factor does not appear when calculating the permutations of "N"×"N"×"N" cubes where "N" is odd, since those puzzles have fixed centers which identify the cube's spatial orientation. The number of possible positions of the cube is
formula_9 This is the order of the group as well.
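As an illustration (not part of the original article, and with a hypothetical function name), the counting argument above can be reproduced with a few lines of Python:

from math import factorial

def pocket_cube_positions():
    # 8 corner pieces can be permuted arbitrarily (8!), only 7 of the 8
    # corner orientations are independent (3**7), and dividing by 24
    # removes the freedom to rotate the whole cube in space.
    return factorial(8) * 3**7 // 24

assert pocket_cube_positions() == factorial(7) * 3**6 == 3_674_160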
Any cube configuration can be solved in up to 14 turns (when making only quarter turns) or in up to 11 turns (when making half turns in addition to quarter turns).
The number a of positions that require n "any" (half or quarter) turns and number q of positions that require n quarter turns only are:
The two-generator subgroup (the number of positions generated just by rotations of two adjacent faces) is of order 29,160.
Methods.
A pocket cube can be solved with the same methods as a 3x3x3 Rubik's cube, simply by treating it as a 3x3x3 with solved (invisible) centers and edges. More advanced methods combine multiple steps and require more algorithms. These algorithms designed for solving a 2×2×2 cube are often significantly shorter and faster than the algorithms one would use for solving a 3×3×3 cube.
The Ortega method, also called the Varasano method, is an intermediate method. First a face is built (but the pieces may be permuted incorrectly), then the last layer is oriented (OLL) and lastly both layers are permuted (PBL). The Ortega method requires a total of 12 algorithms.
The CLL method first builds a layer (with correct permutation) and then solves the second layer in one step by using one of 42 algorithms. A more advanced version of CLL is the TCLL method, also known as Twisty CLL. One layer is built with correct permutation similarly to normal CLL, however one corner piece can be incorrectly oriented. The rest of the cube is solved, and the incorrect corner oriented, in one step. There are 83 cases for TCLL.
One of the most advanced methods is the EG method. It starts by building a face like in the Ortega method, but then solves the rest of the puzzle in one step. It requires knowing 128 algorithms, 42 of which are the CLL algorithms.
Top-level speedcubers may also 1-look the puzzle: during inspection they examine the entire cube, predict where the pieces will go after finishing a side, plan out several possible solutions, and choose the best one before starting the solve.
Notation.
Notation is based on 3×3×3 notation, but some moves are redundant (moves are 90° turns unless suffixed with ‘2’, which denotes a 180° turn):
World records.
The world record for the fastest single solve time is 0.43 seconds, set by Teodor Zajder of Poland at Warsaw Cube Masters 2023.
The world record average of 5 solves (excluding fastest and slowest) is 0.78 seconds, set by Yiheng Wang (王艺衡) of China at Johor Cube Open 2024, with times of 0.74, (0.70), (0.97), 0.78, and 0.81 seconds.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "M_1"
},
{
"math_id": 1,
"text": "M_2"
},
{
"math_id": 2,
"text": "A_M"
},
{
"math_id": 3,
"text": "\\sim \n"
},
{
"math_id": 4,
"text": "M_1 \\sim M_2 := M_1 "
},
{
"math_id": 5,
"text": "[ M ] := \\{ M' \\in A_M | M' \\sim M \\} \\subseteq A_M"
},
{
"math_id": 6,
"text": "[M]"
},
{
"math_id": 7,
"text": "A_M / \\sim "
},
{
"math_id": 8,
"text": "A_M / \\sim := \\{ [M] | M \\in A_M \\}"
},
{
"math_id": 9,
"text": "\\frac{8! \\times 3^7}{24}=7! \\times 3^6=3,674,160."
}
] |
https://en.wikipedia.org/wiki?curid=599012
|
5990317
|
Ignorability
|
In statistics, ignorability is a feature of an experiment design whereby the method of data collection (and the nature of missing data) does not depend on the missing data. A missing data mechanism such as a treatment assignment or survey sampling strategy is "ignorable" if the missing data matrix, which indicates which variables are observed or missing, is independent of the missing data conditional on the observed data.
This idea is part of the Rubin Causal Inference Model, developed by Donald Rubin in collaboration with Paul Rosenbaum in the early 1970s. The exact definition differs between their articles from that period. In one of Rubin's articles from 1978, he discusses "ignorable assignment mechanisms", which can be understood as meaning that the way individuals are assigned to treatment groups is irrelevant for the data analysis, given everything that is recorded about those individuals. Later, in 1983, Rubin and Rosenbaum rather define "strongly ignorable treatment assignment", which is a stronger condition, mathematically formulated as formula_0, where formula_1 is a potential outcome given treatment formula_2, formula_3 denotes some covariates and formula_4 is the actual treatment.
Pearl devised a simple graphical criterion, called "back-door", that entails ignorability and identifies sets of covariates that achieve this condition.
Ignorability means we can ignore how one ended up in one vs. the other group (‘treated’ formula_5, or ‘control’ formula_6) when it comes to the potential outcome (say formula_7). It has also been called unconfoundedness, selection on the observables, or no omitted variable bias.
Formally it has been written as formula_8, or, in words, the potential formula_7 outcome of person formula_9 had they been treated or not does not depend on whether they really have been (observably) treated or not. We can ignore, in other words, how people ended up in one vs. the other condition, and treat their potential outcomes as exchangeable. While this seems dense, it becomes clear if we add subscripts for the ‘realized’ and superscripts for the ‘ideal’ (potential) worlds (notation suggested by David Freedman).
So: Y_1^1 / *Y_0^1 are potential Y outcomes had the person been treated (superscript 1), when in reality they actually have been treated (Y_1^1, subscript 1) or not (*Y_0^1: the formula_10 signals this quantity can never be realized or observed, or is "fully" contrary-to-fact or counterfactual, CF).
Similarly, formula_11 are potential formula_7 outcomes had the person not been treated (superscript formula_12), when in reality they have been treated (formula_13, subscript formula_14) or not actually treated (formula_15).
Only one of each pair of potential outcomes (PO) can be realized, the other cannot, for the same assignment to condition, so when we try to estimate treatment effects, we need something to replace the fully contrary-to-fact ones with observables (or estimate them). When ignorability/exogeneity holds, such as when people are randomized to be treated or not, we can ‘replace’ *Y_0^1 with its observable counterpart Y_1^1, and *Y_1^0 with its observable counterpart Y_0^0, not at the level of the individual Y_i’s, but when it comes to averages like E[Y_i^1 – Y_i^0], which is exactly the causal treatment effect (TE) one tries to recover.
Because of the ‘consistency rule’, the potential outcomes are the values actually realized, so we can write Y_i^0 = Y_{i0}^0 and Y_i^1 = Y_{i1}^1 (“the consistency rule states that an individual’s potential outcome under a hypothetical condition that happened to materialize is precisely the outcome experienced by that individual”, p. 872). Hence TE = E[Y_i^1 – Y_i^0] = E[Y_{i1}^1 – Y_{i0}^0].
Now, by simply adding and subtracting the same fully counterfactual quantity *Y_1^0 we get:
E[Y_{i1}^1 – Y_{i0}^0] = E[Y_{i1}^1 – *Y_1^0 + *Y_1^0 – Y_{i0}^0] = E[Y_{i1}^1 – *Y_1^0] + E[*Y_1^0 – Y_{i0}^0] = ATT + {Selection Bias},
where ATT = average treatment effect on the treated and the second term is the bias introduced when people have the choice to belong to either the ‘treated’ or the ‘control’ group.
Ignorability, either plain or conditional on some other variables, implies that such selection bias can be ignored, so one can recover (or estimate) the causal effect.
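As an illustrative sketch only (not drawn from the cited literature; variable names and parameter values are hypothetical), the following Python simulation contrasts randomized assignment, where ignorability holds and the naive difference in means recovers the true effect, with self-selection on a covariate, where the selection-bias term above appears:

import numpy as np

rng = np.random.default_rng(0)
n = 200_000
true_effect = 2.0

# A covariate that affects the outcome and, under self-selection,
# also the probability of taking the treatment.
x = rng.normal(size=n)
y0 = x + rng.normal(size=n)      # potential outcome without treatment
y1 = y0 + true_effect            # potential outcome with treatment

def naive_estimate(treated):
    # Difference in observed means between treated and control units.
    y_obs = np.where(treated, y1, y0)
    return y_obs[treated].mean() - y_obs[~treated].mean()

randomized = rng.random(n) < 0.5                           # ignorability holds
self_selected = rng.random(n) < 1 / (1 + np.exp(-2 * x))   # selection on x

print(naive_estimate(randomized))     # close to 2.0
print(naive_estimate(self_selected))  # noticeably larger: selection bias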
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "(r_1,r_0) \\perp \\!\\!\\!\\perp z \\mid v ,\\quad 0<\\operatorname{pr}(z=1)<1 \\quad \\forall v"
},
{
"math_id": 1,
"text": "r_t"
},
{
"math_id": 2,
"text": "t"
},
{
"math_id": 3,
"text": "v"
},
{
"math_id": 4,
"text": "z"
},
{
"math_id": 5,
"text": "Tx = 1"
},
{
"math_id": 6,
"text": "Tx = 0"
},
{
"math_id": 7,
"text": "Y"
},
{
"math_id": 8,
"text": "[Y_i^1, Y_i^0] \\perp Tx_i"
},
{
"math_id": 9,
"text": "i"
},
{
"math_id": 10,
"text": "^*"
},
{
"math_id": 11,
"text": "^*Y_1^0 / Y_0^0"
},
{
"math_id": 12,
"text": "^0"
},
{
"math_id": 13,
"text": "^*Y_1^0"
},
{
"math_id": 14,
"text": "_1"
},
{
"math_id": 15,
"text": "Y_0^0"
}
] |
https://en.wikipedia.org/wiki?curid=5990317
|
59904030
|
Rachinger correction
|
In X-ray diffraction, the Rachinger correction is a method for accounting for the effect of an undesired K-alpha 2 peak in the energy spectrum. Ideally, diffraction measurements are made with X-rays of a single wavelength. In practice, the X-rays for a measurement are usually generated in an X-ray tube from a metal's K-alpha line. This generation creates X-rays at a variety of wavelengths, but most of the non-K-alpha X-rays can be blocked from reaching the sample by filters. However, the K-alpha line is actually two X-ray lines close together: the stronger K-alpha 1 peak and the weaker K-alpha 2 peak. Compared to other radiation such as the Bremsstrahlung, the K-alpha 2 peak is more difficult to filter out mechanically. The Rachinger correction is a recursive method suggested by William Albert Rachinger (1927) to eliminate the disturbing formula_0 peak.
Cause of the double peak.
Diffraction experiments with X-rays usually use the formula_1 radiation of the anode material. However, this line is a doublet, so in reality it consists of two slightly different wavelengths. According to the diffraction conditions of the Laue or Bragg equations, each of the two wavelengths produces its own intensity maximum. These maxima lie very close to each other, with their separation depending on the diffraction angle formula_2. For larger angles, the separation of the intensity maxima is greater.
Procedure.
Basics.
The wavelengths of the formula_3 and formula_0 radiation are known, and they are related to the corresponding energies through the relationship:
formula_4
From this, the angular separation formula_5 of the two Kα peaks can be determined for each diffraction angle.
Furthermore, it is known how the intensities of formula_3 and formula_0 relate to each other in the diffraction pattern. This ratio is determined quantum mechanically and is the same for all anode materials:
formula_6
Calculation.
The total intensity is:
formula_7,
where formula_8 is the intensity of the "pure" formula_3 peak and formula_9 the intensity of the "pure" formula_10 peak.
The intensity of the formula_0 peak can be expressed as:
formula_11,
so the overall intensity is:
formula_12
Practical Implementation.
To perform the Rachinger correction in practice, one starts on the rising edge of a peak. For a given angle formula_13, the intensity of the diffraction pattern formula_14 is taken and scaled by formula_15 to give formula_16; at the same time, the angle difference formula_5 is calculated. At the point formula_17, the true intensity formula_18 (the intensity that would be observed if there were no formula_0 peak) can then be calculated by:
formula_19.
Since the measured values of X-ray diffraction experiments are usually available as ASCII tables, this procedure can be repeated step by step until the entire diffraction pattern has been run through.
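A minimal Python sketch of this stepwise procedure is given below; it is only an illustration, not Rachinger's original code. It assumes an equally spaced 2-theta grid with a constant doublet separation expressed in grid steps (in reality formula_5 grows with the diffraction angle), and it subtracts formula_15 times the already corrected intensity one separation earlier, which is what makes the method recursive.

import numpy as np

def rachinger_correction(intensity, delta_steps, r=0.5):
    # intensity   : measured counts on an equally spaced 2-theta grid
    # delta_steps : K-alpha1/K-alpha2 separation, expressed in grid steps
    # r           : intensity ratio I(alpha2)/I(alpha1), about 0.5
    corrected = np.asarray(intensity, dtype=float).copy()
    for i in range(delta_steps, len(corrected)):
        # The K-alpha2 contribution at this point is r times the already
        # corrected (pure K-alpha1) intensity one separation earlier.
        corrected[i] -= r * corrected[i - delta_steps]
    return corrected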
Today this method is hardly used anymore. Thanks to the power of modern computers, the formula_0 peak can simply be fitted simultaneously.
Restrictions.
From the way the corrected diffraction pattern is calculated, it follows that no correction is made for small diffraction angles. Furthermore, Rachinger's assumption that the formula_0 peak is just a scaled copy of the formula_3 peak is not exactly correct, as the two lines generally have different widths; in reality there is therefore a deviation in both shape and intensity. The correction also loses its validity when the background is not negligible, since the background itself then receives an unwanted correction.
|
[
{
"math_id": 0,
"text": "K_{\\alpha_2}"
},
{
"math_id": 1,
"text": "K_\\alpha"
},
{
"math_id": 2,
"text": "2\\theta"
},
{
"math_id": 3,
"text": "K_{\\alpha_1}"
},
{
"math_id": 4,
"text": "E = h \\frac{c_0}{\\lambda}"
},
{
"math_id": 5,
"text": "\\Delta\\theta"
},
{
"math_id": 6,
"text": "r = \\frac{I_{\\alpha_2}}{I_{\\alpha_1}} = 0.5"
},
{
"math_id": 7,
"text": "I(\\theta) = I_1(\\theta) + I_2(\\theta)"
},
{
"math_id": 8,
"text": "I_1(\\theta)"
},
{
"math_id": 9,
"text": "I_2(\\theta)"
},
{
"math_id": 10,
"text": "\\alpha_2"
},
{
"math_id": 11,
"text": "I_2(\\theta) = r\\cdot I_1(\\theta-\\Delta\\theta)"
},
{
"math_id": 12,
"text": "I(\\theta) = I_1(\\theta) + r\\cdot I_1(\\theta-\\Delta\\theta)"
},
{
"math_id": 13,
"text": "\\theta"
},
{
"math_id": 14,
"text": "I(\\theta)"
},
{
"math_id": 15,
"text": "r"
},
{
"math_id": 16,
"text": "I'(\\theta) = r\\cdot I(\\theta)"
},
{
"math_id": 17,
"text": "\\theta+\\Delta\\theta"
},
{
"math_id": 18,
"text": "I_1"
},
{
"math_id": 19,
"text": "I_1(\\theta+\\Delta\\theta) = I(\\theta+\\Delta\\theta) - I'(\\theta)"
}
] |
https://en.wikipedia.org/wiki?curid=59904030
|
5991396
|
Cyclotomic unit
|
Algebraic number field unit
In mathematics, a cyclotomic unit (or circular unit) is a unit of an algebraic number field which is the product of numbers of the form (ζ^a − 1) for ζ an "n"th root of unity and 0 < "a" < "n".
Properties.
The cyclotomic units form a subgroup of finite index in the group of units of a cyclotomic field. The index of this subgroup of "real" cyclotomic units (those cyclotomic units in the maximal real subfield) within the full real unit group is equal to the class number of the maximal real subfield of the cyclotomic field.
The cyclotomic units satisfy "distribution relations". Let "a" be a rational number prime to "p" and let "g""a" denote exp(2πia) − 1. Then for "a" ≠ 0 we have formula_0.
Using these distribution relations and the symmetry relation ζ^(−a) − 1 = −ζ^(−a) (ζ^a − 1), a basis "B""n" of the cyclotomic units can be constructed with the property that "B""d" ⊆ "B""n" for "d" | "n".
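The distribution relation above can be checked numerically in floating-point arithmetic; the short Python sketch below is only an illustration (an odd prime p is assumed, and g is the map a ↦ exp(2πia) − 1 defined above).

import cmath

def g(a):
    # g_a = exp(2*pi*i*a) - 1 for a rational number a (passed as a float)
    return cmath.exp(2j * cmath.pi * a) - 1

p = 5          # an odd prime
a = 2 / 7      # a rational number prime to p
product = 1
for j in range(p):
    product *= g((a + j) / p)   # product over all b with p*b = a (mod 1)

assert abs(product - g(a)) < 1e-12   # the distribution relation, up to rounding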
Notes.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": " \\prod_{p b=a} g_b = g_a"
}
] |
https://en.wikipedia.org/wiki?curid=5991396
|
59915725
|
Behrend sequence
|
Type of integer sequence
In number theory, a Behrend sequence is an integer sequence whose multiples include almost all integers. The sequences are named after Felix Behrend.
Definition.
If formula_0 is a sequence of integers greater than one, and if formula_1 denotes the set of positive integer multiples of members of formula_0, then formula_0 is a Behrend sequence if formula_1 has natural density one. This means that the proportion of the integers from 1 to formula_2 that belong to formula_1 converges, in the limit of large formula_2, to one.
Examples.
The prime numbers form a Behrend sequence, because every integer greater than one is a multiple of a prime number. More generally, a subsequence formula_0 of the prime numbers forms a Behrend sequence if and only if the sum of reciprocals of formula_0 diverges.
The semiprimes, the products of two prime numbers, also form a Behrend sequence. The only integers that are not multiples of a semiprime are 1 and the prime numbers. But as the primes have density zero, their complement, the set of multiples of the semiprimes, has density one.
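As a numerical illustration (not part of the original sources; function names are hypothetical), a simple sieve shows the density of the multiples of the semiprimes creeping towards one:

def multiple_density(generators, n):
    # Fraction of the integers 1..n divisible by at least one generator.
    hit = [False] * (n + 1)
    for a in generators:
        for m in range(a, n + 1, a):
            hit[m] = True
    return sum(hit[1:]) / n

def semiprimes(limit):
    # All products of two primes (not necessarily distinct) up to limit.
    primes = [p for p in range(2, limit // 2 + 1)
              if all(p % q for q in range(2, int(p**0.5) + 1))]
    result = set()
    for i, p in enumerate(primes):
        for q in primes[i:]:
            if p * q > limit:
                break
            result.add(p * q)
    return sorted(result)

n = 100_000
print(multiple_density(semiprimes(n), n))
# about 0.90 here; the only integers missed are 1 and the primes,
# whose density tends to zero as n grows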
History.
The problem of characterizing these sequences was described as "very difficult" by Paul Erdős in 1979.
These sequences were named "Behrend sequences" in 1990 by Richard R. Hall, with a definition using logarithmic density in place of natural density. Hall chose their name in honor of Felix Behrend, who proved that for a Behrend sequence formula_0, the sum of reciprocals of formula_0 must diverge. Later, Hall and Gérald Tenenbaum used natural density to define Behrend sequences in place of logarithmic density. This variation in definitions makes no difference in which sequences are Behrend sequences, because the Davenport–Erdős theorem shows that, for sets of multiples, having natural density one and having logarithmic density one are equivalent.
Derived sequences.
When formula_0 is a Behrend sequence, one may derive another Behrend sequence by omitting from formula_0 any finite number of elements.
Every Behrend sequence may be decomposed into the disjoint union of infinitely many Behrend sequences.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "M(A)"
},
{
"math_id": 2,
"text": "n"
}
] |
https://en.wikipedia.org/wiki?curid=59915725
|
59915731
|
Davenport–Erdős theorem
|
Equivalence of notions of density for sets of multiples of integers
In number theory, the Davenport–Erdős theorem states that, for sets of multiples of integers, several different notions of density are equivalent.
Let formula_0 be a sequence of positive integers. Then the multiples of formula_1 are another set formula_2 that can be defined as the set formula_3 of numbers formed by multiplying members of formula_1 by arbitrary positive integers.
According to the Davenport–Erdős theorem, for a set formula_2, the following notions of density are equivalent, in the sense that they all produce the same number as each other for the density of formula_2:
The lower natural density, the inferior limit as formula_4 goes to infinity of the proportion of members of formula_2 in the interval formula_5.
The logarithmic density, the weighted proportion of members of formula_2 in the interval formula_5, again in the limit, where the weight of an element formula_6 is formula_7.
The sequential density, defined as the limit (as formula_8 goes to infinity) of the densities of the sets formula_9 of multiples of the first formula_8 elements of formula_1. As these sets can be decomposed into finitely many disjoint arithmetic progressions, their densities are well defined without resorting to limits.
However, there exist sequences formula_1 and their sets of multiples formula_2 for which the upper natural density (taken using the superior limit in place of the inferior limit) differs from the lower density, and for which the natural density itself (the limit of the same sequence of values) does not exist.
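For a concrete illustration (not part of the original papers; names are hypothetical), one can compare the natural-density and logarithmic-density estimates for the multiples of the squares of primes, i.e. the non-squarefree numbers, whose density is 1 − 6/π² ≈ 0.392:

import math

def sieve_multiples(generators, n):
    # hit[m] is True when m is a multiple of some element of generators.
    hit = [False] * (n + 1)
    for a in generators:
        for m in range(a, n + 1, a):
            hit[m] = True
    return hit

n = 100_000
prime_squares = [p * p for p in range(2, int(n**0.5) + 1)
                 if all(p % q for q in range(2, int(p**0.5) + 1))]
hit = sieve_multiples(prime_squares, n)   # marks the non-squarefree numbers

natural = sum(hit[1:]) / n
logarithmic = (sum(1 / m for m in range(1, n + 1) if hit[m])
               / sum(1 / m for m in range(1, n + 1)))

print(natural, logarithmic)   # both estimates approach 1 - 6/pi**2 ≈ 0.392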
The theorem is named after Harold Davenport and Paul Erdős, who published it in 1936. Their original proof used the Hardy–Littlewood tauberian theorem; later, they published another, elementary proof.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "A=a_1,a_2,\\dots"
},
{
"math_id": 1,
"text": "A"
},
{
"math_id": 2,
"text": "M(A)"
},
{
"math_id": 3,
"text": "M(A)=\\{ka\\mid k\\in\\mathbb{N}, a\\in A\\}"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "[1,n]"
},
{
"math_id": 6,
"text": "a"
},
{
"math_id": 7,
"text": "1/a"
},
{
"math_id": 8,
"text": "i"
},
{
"math_id": 9,
"text": "M(\\{a_1,\\dots a_i\\})"
}
] |
https://en.wikipedia.org/wiki?curid=59915731
|
5991764
|
Bondi k-calculus
|
Method of teaching special relativity
Bondi "k"-calculus is a method of teaching special relativity popularised by Sir Hermann Bondi, that has been used in university-level physics classes (e.g. at the University of Oxford), and in some relativity textbooks.
The usefulness of the "k"-calculus is its simplicity. Many introductions to relativity begin with the concept of velocity and a derivation of the Lorentz transformation. Other concepts such as time dilation, length contraction, the relativity of simultaneity, the resolution of the twins paradox and the relativistic Doppler effect are then derived from the Lorentz transformation, all as functions of velocity.
Bondi, in his book "Relativity and Common Sense", first published in 1964 and based on articles published in "The Illustrated London News" in 1962, reverses the order of presentation. He begins with what he calls "a fundamental ratio" denoted by the letter formula_0 (which turns out to be the radial Doppler factor). From this he explains the twins paradox, and the relativity of simultaneity, time dilation, and length contraction, all in terms of formula_0. It is not until later in the exposition that he provides a link between velocity and the fundamental ratio formula_0. The Lorentz transformation appears towards the end of the book.
History.
The "k"-calculus method had previously been used by E. A. Milne in 1935. Milne used the letter formula_1 to denote a constant Doppler factor, but also considered a more general case involving non-inertial motion (and therefore a varying Doppler factor). Bondi used the letter formula_0 instead of formula_1 and simplified the presentation (for constant formula_0 only), and introduced the name ""k"-calculus".109
Bondi's "k"-factor.
Consider two inertial observers, Alice and Bob, moving directly away from each other at constant relative velocity. Alice sends a flash of blue light towards Bob once every formula_2 seconds, as measured by her own clock. Because Alice and Bob are separated by a distance, there is a delay between Alice sending a flash and Bob receiving a flash. Furthermore, the separation distance is steadily increasing at a constant rate, so the delay keeps on increasing. This means that the time interval between Bob receiving the flashes, as measured by his clock, is greater than formula_2 seconds, say formula_3 seconds for some constant formula_4. (If Alice and Bob were, instead, moving directly towards each other, a similar argument would apply, but in that case formula_5.)
Bondi describes formula_0 as “a fundamental ratio”, and other authors have since called it "the Bondi "k"-factor" or "Bondi's "k"-factor".
Alice's flashes are transmitted at a frequency of formula_6 Hz, by her clock, and received by Bob at a frequency of formula_7 Hz, by his clock. This implies a Doppler factor of formula_8. So Bondi's "k"-factor is another name for the Doppler factor (when source Alice and observer Bob are moving directly away from or towards each other).
If Alice and Bob were to swap roles, and Bob sent flashes of light to Alice, the Principle of Relativity (Einstein's first postulate) implies that the "k"-factor from Bob to Alice would be the same value as the "k"-factor from Alice to Bob, as all inertial observers are equivalent. So the "k"-factor depends only on the relative speed between the observers and nothing else.
The reciprocal "k"-factor.
Consider, now, a third inertial observer Dave who is a fixed distance from Alice, and such that Bob lies on the straight line between Alice and Dave. As Alice and Dave are mutually at rest, the delay from Alice to Dave is constant. This means that Dave receives Alice's blue flashes at a rate of once every formula_2 seconds, by his clock, the same rate as Alice sends them. In other words, the "k"-factor from Alice to Dave is equal to one.
Now suppose that whenever Bob receives a blue flash from Alice he immediately sends his own red flash towards Dave, once every formula_3 seconds (by Bob's clock). Einstein's second postulate, that the speed of light is independent of the motion of its source, implies that Alice's blue flash and Bob's red flash both travel at the same speed, neither overtaking the other, and therefore arrive at Dave at the same time. So Dave receives a red flash from Bob every formula_2 seconds, by Dave's clock, which were sent by Bob every formula_3 seconds by Bob's clock. This implies that the "k"-factor from Bob to Dave is formula_9.
This establishes that the "k"-factor for observers moving directly apart (red shift) is the reciprocal of the "k"-factor for observers moving directly towards each other at the same speed (blue shift).
The twins paradox.
Consider, now, a fourth inertial observer Carol who travels from Dave to Alice at exactly the same speed as Bob travels from Alice to Dave. Carol's journey is timed such that she leaves Dave at exactly the same time as Bob arrives. Denote times recorded by Alice's, Bob's and Carol's clocks by formula_10.
When Bob passes Alice, they both synchronise their clocks to formula_11. When Carol passes Bob, she synchronises her clock to Bob's, formula_12. Finally, as Carol passes Alice, they compare their clocks against each other. In Newtonian physics, the expectation would be that, at the final comparison, Alice's and Carol's clock would agree, formula_13. It will be shown below that in relativity this is not true. This is a version of the well-known "twins paradox" in which identical twins separate and reunite, only to find that one is now older than the other.
If Alice sends a flash of light at time formula_14 towards Bob, then, by the definition of the "k"-factor, it will be received by Bob at time formula_15. The flash is timed so that it arrives at Bob just at the moment that Bob meets Carol, so Carol synchronises her clock to read formula_16.
Also, when Bob and Carol meet, they both simultaneously send flashes to Alice, which are received simultaneously by Alice. Considering, first, Bob's flash, sent at time formula_15, it must be received by Alice at time formula_17, using the fact that the "k"-factor from Alice to Bob is the same as the "k"-factor from Bob to Alice.
As Bob's outward journey had a duration of formula_3, by his clock, it follows by symmetry that Carol's return journey over the same distance at the same speed must also have a duration of formula_3, by her clock, and so when Carol meets Alice, Carol's clock reads formula_18. The "k"-factor for this leg of the journey must be the reciprocal formula_9 (as discussed earlier), so, considering Carol's flash towards Alice, a transmission interval of formula_3 corresponds to a reception interval of formula_2. This means that the final time on Alice's clock, when Carol and Alice meet, is formula_19. This is larger than Carol's clock time formula_20 since
formula_21
provided formula_22 and formula_23.
Radar measurements and velocity.
In the "k"-calculus methodology, distances are measured using radar. An observer sends a radar pulse towards a target and receives an echo from it. The radar pulse (which travels at formula_24, the speed of light) travels a total distance, there and back, that is twice the distance to the target, and takes time formula_25, where formula_26 and formula_27 are times recorded by the observer's clock at transmission and reception of the radar pulse. This implies that the distance to the target is60
formula_28
Furthermore, since the speed of light is the same in both directions, the time at which the radar pulse arrives at the target must be, according to the observer, halfway between the transmission and reception times, namely
formula_29
In the particular case where the radar observer is Alice and the target is Bob (momentarily co-located with Dave) as described previously, by "k"-calculus we have formula_30, and so
formula_31
As Alice and Bob were co-located at formula_32, the velocity of Bob relative to Alice is given by
formula_33
This equation expresses velocity as a function of the Bondi "k"-factor. It can be solved for formula_0 to give formula_0 as a function of formula_34:
formula_35
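These relations are easy to check numerically; the following Python sketch is purely illustrative (c = 1 units, a hypothetical choice of v = 0.6) and repeats the twins-paradox bookkeeping from the earlier section.

import math

def k_from_v(v):
    # Bondi k-factor (radial Doppler factor) for recession speed v, with c = 1.
    return math.sqrt((1 + v) / (1 - v))

def v_from_k(k):
    # The inverse relation derived above.
    return (k**2 - 1) / (k**2 + 1)

k = k_from_v(0.6)                     # equals 2 when v = 0.6
assert abs(v_from_k(k) - 0.6) < 1e-12

# Twins-paradox bookkeeping: Alice emits her flash at time T by her clock.
T = 1.0
alice_total = (k**2 + 1) * T          # Alice's clock when she meets Carol
carol_total = 2 * k * T               # Carol's clock at the same meeting
assert abs((alice_total - carol_total) - (k - 1)**2 * T) < 1e-12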
Velocity composition.
Consider three inertial observers Alice, Bob and Ed, arranged in that order and moving at different speeds along the same straight line. In this section, the notation formula_36 will be used to denote the "k"-factor from Alice to Bob (and similarly between other pairs of observers).
As before, Alice sends a blue flash towards Bob and Ed every formula_2 seconds, by her clock, which Bob receives every formula_37 seconds, by Bob's clock, and Ed receives every formula_38 seconds, by Ed's clock.
Now suppose that whenever Bob receives a blue flash from Alice he immediately sends his own red flash towards Ed, once every formula_37 seconds by Bob's clock, so Ed receives a red flash from Bob every formula_39 seconds, by Ed's clock. Einstein's second postulate, that the speed of light is independent of the motion of its source, implies that Alice's blue flash and Bob's red flash both travel at the same speed, neither overtaking the other, and therefore arrive at Ed at the same time. Therefore, as measured by Ed, the red flash interval formula_39 and the blue flash interval formula_38 must be the same. So the rule for combining "k"-factors is simply multiplication:
formula_40
Finally, substituting
formula_41
gives the velocity composition formula
formula_42
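A quick numerical check (illustrative only, c = 1 units, arbitrary sample speeds) confirms that multiplying "k"-factors reproduces this composition formula:

import math

def k_from_v(v):
    return math.sqrt((1 + v) / (1 - v))

def v_from_k(k):
    return (k**2 - 1) / (k**2 + 1)

v_ab, v_be = 0.5, 0.7
v_via_k = v_from_k(k_from_v(v_ab) * k_from_v(v_be))   # k_AE = k_AB * k_BE
v_via_formula = (v_ab + v_be) / (1 + v_ab * v_be)
assert abs(v_via_k - v_via_formula) < 1e-12           # both give 0.888...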
The invariant interval.
Using the radar method described previously, inertial observer Alice assigns coordinates formula_43 to an event by transmitting a radar pulse at time formula_44 and receiving its echo at time formula_45, as measured by her clock.
Similarly, inertial observer Bob can assign coordinates formula_46 to the same event by transmitting a radar pulse at time formula_47 and receiving its echo at time formula_48, as measured by his clock. However, it is not necessary for Bob to generate his own radar signal, as he can simply take the timings from Alice's signal instead.
Now, applying the "k"-calculus method to the signal that travels from Alice to Bob
formula_49
Similarly, applying the "k"-calculus method to the signal that travels from Bob to Alice
formula_50
Equating the two expressions for formula_0 and rearranging,
formula_51
This establishes that the quantity formula_52 is an invariant: it takes the same value in any inertial coordinate system and is known as the invariant interval.
The Lorentz transformation.
The two equations for formula_0 in the previous section can be solved as simultaneous equations to obtain:
formula_53
These equations are the Lorentz transformation expressed in terms of the Bondi "k"-factor instead of in terms of velocity. By substituting
formula_54
the more traditional form
formula_55
is obtained.
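As an illustration (c = 1 units, hypothetical sample values), the "k"-factor form and the velocity form of the transformation can be checked against each other, along with the invariance of c²t² − x²:

import math

def lorentz_k(t, x, k):
    # Lorentz transformation written in terms of the Bondi k-factor.
    a, b = (k + 1 / k) / 2, (k - 1 / k) / 2
    return a * t - b * x, a * x - b * t

def lorentz_v(t, x, v):
    # The usual velocity form of the same transformation.
    gamma = 1 / math.sqrt(1 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

v = 0.6
k = math.sqrt((1 + v) / (1 - v))
t_a, x_a = 3.0, 1.2

t_b, x_b = lorentz_k(t_a, x_a, k)
t_b2, x_b2 = lorentz_v(t_a, x_a, v)
assert abs(t_b - t_b2) < 1e-12 and abs(x_b - x_b2) < 1e-12
assert abs((t_a**2 - x_a**2) - (t_b**2 - x_b**2)) < 1e-12   # invariant interval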
Rapidity.
Rapidity formula_56 can be defined from the "k"-factor by
formula_57
and so
formula_58
The "k"-factor version of the Lorentz transform becomes
formula_59
It follows from the composition rule for formula_0, formula_60, that the composition rule for rapidities is addition:
formula_61
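A one-line numerical check (illustrative, c = 1 units) of the additivity of rapidity:

import math

v_ab, v_be = 0.5, 0.7
v_ae = (v_ab + v_be) / (1 + v_ab * v_be)        # relativistic composition
# rapidity = artanh(v) = ln k, so composed rapidities should simply add
assert abs(math.atanh(v_ab) + math.atanh(v_be) - math.atanh(v_ae)) < 1e-12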
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "s"
},
{
"math_id": 2,
"text": "T"
},
{
"math_id": 3,
"text": "kT"
},
{
"math_id": 4,
"text": "k > 1"
},
{
"math_id": 5,
"text": "k < 1"
},
{
"math_id": 6,
"text": "f_s = 1/T"
},
{
"math_id": 7,
"text": "f_o = 1/(kT) "
},
{
"math_id": 8,
"text": "f_s / f_o = k"
},
{
"math_id": 9,
"text": "1/k"
},
{
"math_id": 10,
"text": "t_A, t_B, t_C"
},
{
"math_id": 11,
"text": "t_A=t_B=0"
},
{
"math_id": 12,
"text": "t_C=t_B"
},
{
"math_id": 13,
"text": "t_C=t_A"
},
{
"math_id": 14,
"text": "t_A=T"
},
{
"math_id": 15,
"text": "t_B = kT"
},
{
"math_id": 16,
"text": "t_C = t_B = kT"
},
{
"math_id": 17,
"text": "t_A=k^2 T"
},
{
"math_id": 18,
"text": "t_C=2kT"
},
{
"math_id": 19,
"text": "t_A = (k^2+1)T"
},
{
"math_id": 20,
"text": "t_C = 2kT"
},
{
"math_id": 21,
"text": "t_A-t_C=(k^2-2k+1)T = (k-1)^2 T > 0,"
},
{
"math_id": 22,
"text": "k \\neq 1"
},
{
"math_id": 23,
"text": "T > 0"
},
{
"math_id": 24,
"text": "c"
},
{
"math_id": 25,
"text": "T_2 - T_1"
},
{
"math_id": 26,
"text": "T_1"
},
{
"math_id": 27,
"text": "T_2"
},
{
"math_id": 28,
"text": "x_A = \\tfrac{1}{2} c(T_2-T_1). "
},
{
"math_id": 29,
"text": "t_A = \\tfrac{1}{2} (T_2+T_1). "
},
{
"math_id": 30,
"text": "T_2 = k^2 T_1"
},
{
"math_id": 31,
"text": "\\begin{align}\nx_A &= \\tfrac{1}{2} c(k^2-1) T_1 \\\\\nt_A &= \\tfrac{1}{2} (k^2+1) T_1.\n\\end{align} "
},
{
"math_id": 32,
"text": "t_A=0, x_A=0"
},
{
"math_id": 33,
"text": "v = \\frac{x_A}{t_A} = \\frac{\\tfrac{1}{2} c(k^2-1) T_1}{\\tfrac{1}{2} (k^2+1) T_1} = c \\frac{k^2-1}{k^2+1} = c \\frac{k-k^{-1}}{k+k^{-1}}."
},
{
"math_id": 34,
"text": "v"
},
{
"math_id": 35,
"text": "k = \\sqrt{\\frac{1+v/c}{1-v/c}}."
},
{
"math_id": 36,
"text": "k_{AB}"
},
{
"math_id": 37,
"text": "k_{AB} T"
},
{
"math_id": 38,
"text": "k_{AE} T"
},
{
"math_id": 39,
"text": "k_{BE} (k_{AB} T)"
},
{
"math_id": 40,
"text": "k_{AE} = k_{AB} k_{BE}. "
},
{
"math_id": 41,
"text": "k_{AB}=\\sqrt{\\frac{1+v_{AB}/c}{1-v_{AB}/c}}, \\, k_{BE}=\\sqrt{\\frac{1+v_{BE}/c}{1-v_{BE}/c}}, \\, v_{AE}=c \\frac{k_{AE}^2-1}{k_{AE}^2+1}"
},
{
"math_id": 42,
"text": "v_{AE}=\\frac{v_{AB} + v_{BE}}{1 + v_{AB}v_{BE}/c^2}. "
},
{
"math_id": 43,
"text": "(t_A, x_A)"
},
{
"math_id": 44,
"text": "t_A - x_A/c "
},
{
"math_id": 45,
"text": "t_A+x_A/c"
},
{
"math_id": 46,
"text": "(t_B, x_B)"
},
{
"math_id": 47,
"text": "t_B-x_B/c"
},
{
"math_id": 48,
"text": "t_B+x_B/c"
},
{
"math_id": 49,
"text": "k = \\frac{t_B-x_B/c}{t_A-x_A/c}. "
},
{
"math_id": 50,
"text": "k=\\frac{t_A+x_A/c}{t_B+x_B/c}. "
},
{
"math_id": 51,
"text": "c^2 t_A^2-x_A^2=c^2 t_B^2-x_B^2. "
},
{
"math_id": 52,
"text": "c^2 t^2-x^2"
},
{
"math_id": 53,
"text": "\\begin{align}\nct_B &= \\tfrac{1}{2} (k+k^{-1} ) ct_A - \\tfrac{1}{2} (k-k^{-1} ) x_A \\\\\nx_B &= \\tfrac{1}{2} (k+k^{-1} ) x_A - \\tfrac{1}{2} (k-k^{-1} ) ct_A\n\\end{align}"
},
{
"math_id": 54,
"text": " k = \\sqrt{\\frac{1+v/c}{1-v/c}}, "
},
{
"math_id": 55,
"text": "t_B=\\frac{t_A-vx_A/c^2}{\\sqrt{1-v^2/c^2}}; \\, x_B=\\frac{x_A-vt_A}{\\sqrt{1-v^2/c^2}}"
},
{
"math_id": 56,
"text": "\\varphi"
},
{
"math_id": 57,
"text": "\\varphi = \\log_e k, \\, k = e^\\varphi,"
},
{
"math_id": 58,
"text": "v = c \\frac{k-k^{-1}}{k+k^{-1}} = c \\tanh \\varphi."
},
{
"math_id": 59,
"text": "\\begin{align}\nct_B &= ct_A \\cosh \\varphi - x_A \\sinh \\varphi \\\\\nx_B &= x_A \\cosh \\varphi - ct_A \\sinh \\varphi\n\\end{align}"
},
{
"math_id": 60,
"text": "k_{AE}=k_{AB} k_{BE}"
},
{
"math_id": 61,
"text": "\\varphi_{AE} = \\varphi_{AB} + \\varphi_{BE}. "
}
] |
https://en.wikipedia.org/wiki?curid=5991764
|
59920190
|
James E. Lewis
|
African-American artist, art historian, curator, and professor
James Edward Lewis (August 4, 1923 – August 9, 1997) was an African-American artist, art collector, professor, and curator in the city of Baltimore. He is best known for his role as the leading force for the creation of the James E. Lewis Museum of Art, an institution of the HBCU Morgan State University. His work as the chairman of the Morgan Art Department from 1950 to 1986 allowed for the museum to amass a large collection of more than 3,000 works, predominantly of African and African diasporan art. In addition, he is also well known for his role as an interdisciplinary artist, primarily focused on sculpture, though also having notable examples of lithography and illustration. His artistic style throughout the years has developed from an earlier focus on African-American history and historical figures, for which he is most notable as an artist, to a more contemporary style of African-inspired abstract expressionism.
Early and personal life.
James E. Lewis was born in rural Phenix, Virginia on August 4, 1923 to James T. Lewis and Pearlean Harvey. Lewis' parents were both sharecroppers. Shortly after his birth, his father moved to Baltimore for increased job opportunities; James E. was subsequently raised by his mother until the family was reunited in 1925. They lived for a short time with distant relatives until moving to a four-bedroom house on 1024 North Durham Street in East Baltimore, a predominantly African-American lower-class neighborhood close to Johns Hopkins Hospital. Lewis' primary school, PS 101, was the only public school in East Baltimore that served black children. Lewis grew up in a church-going family, his parents both active members of the Faith Baptist Church, devoting the entirety of their Sundays to church activities. His parents worked a variety of different jobs throughout his youth: his father working as a stevedore for a shipping company, a mechanic, a custodian, a mailroom handler, and an elevator operator. His mother worked as both a clerk at a drugstore and a laundress for a private family.
Lewis' primary exposure to the arts came from Dr. Leon Winslow, a faculty member at PS 101 who Lewis saw as "providing encouragement and art materials to those who wanted and needed it." In fifth grade, Lewis transferred to PS 102. Here, he was able to receive specialized Art Education in Ms. William's class under the guidance of Winslow. He was considered a standout pupil at PS 102 as a result of his introduction to the connection between the arts and the other studies. His time spent in Ms. Pauline Wharton's class allowed for him to experiment with singing, to which he was considered a talented singer. His involvement in this class challenged his earlier belief that singing was not a masculine artistic pursuit. He was able to study both European classics and negro spirituals, which was one of his earliest introductions to arts specific to American black culture. Under Ms. Wharton's direction, he was also involved in many different musical performances, including some works of the Works Progress Administration's Federal Theatre Project. Lewis attended Paul Laurence Dunbar High School, where his love of the arts was heightened through his industrial art class with Lee Davis, who instilled in him a care for fine craftsmanship. At age sixteen, Lewis had won a citywide poster design contest, and later had the work displayed at the Enoch Pratt Free Library. He produced his first sculpture at age 17 out of earthen clay from the East Monument Street fairgrounds. He was personally very close with the school faculty, often going over to Davis' house to listen to jazz music or visiting Dr. Winslow and his children. The connection he had with the Winslow family solidified his interest in pursuing fine art after high school. By age 19, he had produced five completed portrait busts. While still in high school, Lewis had been awarded a Carnegie Institute grant to study at the Maryland Institute College of Art, but the school was highly segregated at the time and thus he was prohibited from attending. Luckily, a compromise was made with the school to allow an advanced student, American artist Charles Cross, to tutor him in private sessions. He graduated from Dunbar High with the highest average in the arts.
Pearlean was supportive of her son's desire to pursue a career in the arts, but her husband felt the opposite, believing that his son should make an honest living through manual labor. As a result, James E. Lewis opted to work at the Baltimore Calvert distillery during the summer following graduation, beginning on the 30th of June of that year. Having just graduated and now a young adult, Lewis registered for the Selective Service System. During this time, a legal loophole was created that allowed for African-Americans in Maryland to study at any college of their choice with tuition, travel, and room & board covered by the state, so long as they intended to study a field that had no current representation in any of the black schools they were allowed to attend. Lewis, seeing this law as a form of "poetic justice", decided to apply to study fine art at Philadelphia College of Art, now called the University of the Arts. Lewis studied for a year at PCA before receiving a letter in the mail from the United States Armed Forces stating that he had been drafted into World War II.
Lewis had been drafted into the United States Navy, but soon after joined the United States Marine Corps due to a new policy that allowed for black recruits, something which he saw as both progress and a personal challenge. He was stationed at Camp Lejeune in Jacksonville, North Carolina, traveling there with his fellow Marines in a segregated train car. At the camp, the segregation further continued, with the black soldiers living in makeshift huts and their white counterparts living in brick buildings. In 1943, he received notice of his father's passing, and returned home for the funeral. Shortly after the services, he returned to camp, something which he regretted in hindsight after finding out he could have been discharged, as he was his mother's sole source of financial support. He returned to discover that many of his unit members had been shipped out to the front lines in the Pacific. He heard stories of black soldiers being sent out with no weapons and so he had himself transferred to the 51st Defense Battalion, the first black fighting unit of the Marine Corps. Given the prevalence of racial discrimination in the United States military and the skill of the battalion, Lewis claimed that they were shipped out to Easter Island to keep them away from the action and preserve the image of the white Marines. He served a short stint in gunnery and intelligence before leaving the military in 1946.
James E. Lewis returned to Philadelphia College of Art and received a Bachelor of Fine Arts in 1949. During this time, he met and married his wife, Jacqueline Lucille Adams, at Saint Cyprian's Episcopal Church in the Elmwood neighborhood of Philadelphia on June 8, 1946. Lewis planned on having a career solely in illustration, but realized that the field was not welcoming to African-Americans. The Philadelphia College of Art offered him a position as a drawing instructor, and so he worked there for a short time. He received an offer to teach at Morgan State University not long after this but refused the position, instead choosing to use the G.I. Bill to stay in Philadelphia and attend the Tyler School of Art at Temple University. He received his Master of Fine Arts from Temple in 1950. After graduating, he was offered a teaching position at Jackson State Teachers' College and accepted. Three days before his planned arrival in Jackson, Mississippi, he received a call from Martin David Jenkins with another offer to teach at Morgan State. Lewis changed his mind and took the Morgan position, settling back in Baltimore with his wife and son, James.
Lewis was also a personal collector of art. He once cancelled a vacation because he ended up spending all the funds on purchasing a Henry Ossawa Tanner work in New York. He was also known to have collected the works of his students, buying them to inspire them to keep producing art.
He was known to have spoken with Martin Luther King Jr. prior to King's death. Lewis was a strong supporter of placing a bust of King in the United States Capitol, and was one of the first to lobby for it.
Lewis and Adams had two children together, Cathleen Susan Lewis (born March 17, 1958) and James Edward Lewis Jr. (born October 16, 1949).
James E. Lewis died of stroke complications on August 9, 1997, at Genesis Nursing Home in Baltimore. He was 74 years old.
Academic career.
Morgan State educator.
At the start of his career, Lewis was part of a three-person art department, including himself, Charles Stallings, and Samella Lewis. Samella Lewis was on leave at the start of his position, and she left shortly after his promotion to chairman at the end of the year. He chose to restructure the department in 1951, and created the program that grants bachelor's degrees in art education. Lewis' role as an art collector on behalf of Morgan State began in 1952 with the purchase of five works of African art for $595. Sometime in the early 1950s, Lewis also introduced the first courses on African and African-American art. In 1954, he was awarded the Ford Foundation Fellowship to study as a fellow at various different institutions. He was a fellow at Temple University and Syracuse University in 1954 and Yale University in 1955. It was during his time at Yale that he was working directly under the great Bauhaus artist Josef Albers, who "shook up" what Lewis previously knew about art. It was Albers who inspired Lewis to seek motifs from traditional African art for his own work. Around this time, Lewis received a letter from his former mentor, Charles Cross, who sought a position in the department, though Cross didn't end up working there.
Lewis, on behalf of Morgan State University, was awarded a $5,000 grant from the American Federation of Arts to add works to the university's collection, the first award the arts department had received for its gallery. Shortly after they were purchased, however, three works were stolen from the collection while Lewis was working to find a permanent location to display them. In 1964, he made a visit to galleries in New York City, one of which was the Hirschl & Adler Galleries, which later gifted the museum thirty-five works of European and American importance. Lewis was working on curating an exhibition at Morgan's art galleries at the time, one of the most influential shows of his career, entitled "The Calculated Image".
In March 1969, Lewis took a trip to Europe. While in Paris, Lewis saw the works of Bill Hutson, Sam Middleton, and Edward Clark at the American Center for Students and Artists, as well as some work by Beauford Delaney at the United States Information Center. Afterward, he visited the Galerie Dürr in Munich to see the work of Lawrence Compton Kolawole. Inspired by the success that these artists (as well as Dean Dixon) had abroad, he secured a grant from the Smithsonian Institution to do a study on African-American artists who have moved abroad to pursue their artistic careers the following year. He conducted his study on the aforementioned visual artists with the additions of Herbert Gentry, William Johnson, Jacob Lawrence, and Henry Ossawa Tanner. Using this information, he guest-curated a show of the works of those artists entitled "Afro American Artists Abroad" at the University of Texas at Austin. He also was a guest-curator at the Baltimore Museum of Art in primitive art.
On December 9, 1990, the Murphy Fine Art Galleries were officially renamed to the James E. Lewis Museum of Art in his honor. An official dedication exhibition was held to celebrate his commitment to education and African-American art. Many artists donated works to the museum to celebrate its renaming, including Gordon Parks, Sam Gilliam, Grace Hartigan, Joyce J. Scott, Jack White, and Joan Erbe. Aaron Sopher also made an original illustration of Lewis for the museum. During his time at MSU, Lewis was able to collect over 3,000 works for the museum. Some of the artists represented in the museum's collection as a result of Lewis' collecting efforts include Hale Woodruff, Romare Bearden, Henri de Toulouse-Lautrec, Thomas Cole, Mary Cassatt, Robert Rauschenberg, and Pablo Picasso.
Archaeological digs.
Lewis began visiting Africa originally to collect works of art for JELMA. He gave lectures at different West African universities for the American Society of African Culture in 1965, one of which was Ahmadu Bello University in Zaria, Nigeria. He became area director of the ASAC the following year. During his visit, he met Epko O. Eyo, the director of antiquities of Nigeria, while spending time in the city of Lagos. While in Africa, he was also the organizer of the Dakar Arts Festival. In 1965, at the recommendation of Eyo, Lewis returned to Nigeria, to Owo, to join a fourteen-person party on an archaeological excavation. This dig was his third trip to Nigeria and his seventh to Africa. The group labored in intense heat to excavate, work that uncovered thousands of terracotta artifacts and fragments. Lewis estimated that some of the items found at Owo, such as a leopard figure, were valued at over $100,000. Lewis returned twice after this to continue the search. Some of the oldest works from the site are dated to around the 12th century. These finds were significant in that they identified a cultural connection between ancient Ifẹ and Benin societies, visible through the similarities to both of their works. Lewis made 15 trips to Africa in his life, most of which were on behalf of the United States Information Agency.
Lewis also did some archaeological work in Israel.
Organizations.
Lewis was a member of a number of different organizations. In 1962, he was appointed to the Maryland Fine Art Commission by Governor J. Millard Tawes. In addition to this, he was a member of the Baltimore City Commission for Historical and Architectural Preservation, the American Society of African Culture, the College Art Association, the Eastern Art Association, the American Federation of Arts, the Maryland Art Association, and the American Association of Museums. He was also a founding member of the Baltimore Council on Foreign Relations. Lewis also worked briefly with the Baltimore Symphony Orchestra.
Artistic career.
James E. Lewis was heavily inspired by the history and culture of African-Americans and Africa. Lewis once said, "We need to be more supportive of our unique cultural heritage and its arts." One such example of his inspirations were gold weights of the Ashanti people. Another major influence for him were the masks of the Senufo culture. Lewis cited his success as a result of his capability to express himself within the limitation of a predominantly white art world. His primary means of gaining acceptance for his works were to create sculptures in a Western naturalistic or abstract expressionist style. He once said, "Had I gone in to meetings with their committee looking exotic in a dashiki and all the trappings worn by the young black activists who have knowledge of African aesthetic, my ideas would have been promptly rejected. But they accepted what I proposed because it seemed to them that I myself was quite within their norm, and they assumed that what I was proposing had to be sound, whether they fully understood it or not." He became the most comfortable with producing three-dimensional works during his first few years at MSU, referring to it as his métier, working on them whatever chance he got. His daughter said that he often would go nights without sleep to keep working on his art. Many of his works have been described as "socially charged".
"Negro soldier" controversy.
Sometime prior to fall 1968, an anonymous donor put out a nationwide search for an artist willing to create a statue dedicated to African-Americans who had been involved in military conflict. A law firm, on behalf of the donor, sent a letter to Lewis to gauge his interest in the project, asking him to submit a sketch and a cost estimate. Because of Baltimore's reputation as a city of monuments, Lewis accepted the project. His original sketch for the work was revealed in "The Baltimore Sun" on October 4, 1968, and displayed a wreath and a list of wars that black soldiers had been a part of. The original estimate for the creation of the sculpture and the pedestal was between $23,000 and $25,000. In June 1968, the Baltimore Board of Recreation and Parks approved the placement of the statue at Battle Monument Square. On October 3 of that year, the Baltimore Art Commission also approved the statue's creation.
The choice of location incited a major controversy on the Baltimore political scene. Harry D. Kaufman, a member of the Park Board, criticized the fact that the statue was going to be of an unidentified black male, arguing that it was a tribute to a race as opposed to an individual. He suggested that the statue pay tribute to Crispus Attucks, Harriet Tubman, or Doris Miller instead. Additional objections were raised by the General Society of the War of 1812, the Constellation Committee, and the Star-Spangled Banner Flag House. Concerns were also raised about the location itself, a plaza dedicated to the fallen soldiers of Fort McHenry, which some believed the statue would change in scope and meaning. The work was also criticized for its choice to dress the soldier in modern clothing. Despite the criticism, Lewis refused to meet with his opponents to discuss any changes to the statue or its location.
The work was completed by the famed New York foundry Roman Bronze Works in December 1971. Lewis had requested that the city pay for the work's pedestal, to be made of brick and marble at a cost of around $500, though the city did not approve this. The final cost of the work was about $30,000. The statue was erected in Battle Monument Square on May 30, 1972, and was covered in a black fabric. Weeks before the official unveiling, a vandal destroyed the fabric and exposed the statue. The statue was later further criticized for its orientation: it faced away from traffic along a one-way street, meaning that only pedestrians could appreciate the work.
The statue sat in Battle Monument Square for more than 30 years, after which time the African American Patriots Consortium made a request to have the statue moved to War Memorial Plaza, close to Baltimore City Hall. With the approval of the city as well as Lewis' wife and son, the statue was moved on January 12, 2007.
Known works.
Other works by the artist include sculptures of figures such as Dwight O. W. Holmes, Theodore McKeldin, Carl J. Murphy, William H. Hastie, Charles Key, and Dr. Edward N. Wilson. The locations of these works are currently unknown.
Lewis is known to have made multiple sculptures of Frederick Douglass, two of which are of unknown location, as well as sculptures of Martin Luther King Jr., which stand in Lusaka, Zambia, and in the King Memorial Park in Woodstock, Maryland.
Works by Lewis are also held in the collections of the Baltimore Museum of Art, Howard University, and Clark Atlanta University.
|
[
{
"math_id": 0,
"text": "{7 \\over 8}"
},
{
"math_id": 1,
"text": "7 \\over 8"
},
{
"math_id": 2,
"text": "1\\over2"
}
] |
https://en.wikipedia.org/wiki?curid=59920190
|
59921575
|
Fluorescence imaging
|
Type of non-invasive imaging technique
Fluorescence imaging is a type of non-invasive imaging technique that can help visualize biological processes taking place in a living organism. Images can be produced using a variety of methods, including microscopy, imaging probes, and spectroscopy.
Fluorescence itself is a form of luminescence that results from matter emitting light of a certain wavelength after absorbing electromagnetic radiation. Molecules that re-emit light upon absorption of light are called fluorophores.
Fluorescence imaging photographs fluorescent dyes and fluorescent proteins to mark molecular mechanisms and structures. It allows one to experimentally observe the dynamics of gene expression, protein expression, and molecular interactions in a living cell. It essentially serves as a precise, quantitative tool for biochemical applications.
Contrary to a common misconception, fluorescence differs from bioluminescence in how each process produces light. Bioluminescence is a chemical process in which enzymes break down a substrate to produce light. Fluorescence involves the physical excitation of an electron and its subsequent return to the ground state, which emits light.
Attributes.
Fluorescence mechanism.
When a certain molecule absorbs light, the energy of the molecule is briefly raised to a higher excited state. The subsequent return to the ground state results in emission of fluorescent light that can be detected and measured. The emitted light, resulting from the absorbed photon of energy "hν", has a specific wavelength. It is important to know this wavelength beforehand, so that during an experiment the detector can be set to the correct wavelength for measuring the light produced. This wavelength is determined by the equation:
formula_0
Where "h" = Planck's constant, and "c" = the speed of light. Typically a large scanning device or CCD is used here to measure the intensity and digitally photograph an image.
Fluorescent dyes versus proteins.
Fluorescent dyes, which require no maturation time, offer higher photostability and brightness than fluorescent proteins. In terms of brightness, luminosity depends on the fluorophore's extinction coefficient, or ability to absorb light, and its quantum efficiency, or effectiveness at converting absorbed light into emitted fluorescence. The dyes themselves are not very fluorescent, but when they bind to proteins they become much easier to detect. One example, NanoOrange, binds to the coating and hydrophobic regions of a protein while being immune to reducing agents. Fluorescent proteins, by contrast, fluoresce themselves when they absorb light of a specific incident wavelength. One example, green fluorescent protein (GFP), fluoresces green when exposed to light in the blue to UV range. Fluorescent proteins are excellent reporter molecules that can aid in localizing proteins, observing protein binding, and quantifying gene expression.
Imaging range.
Since some wavelengths of fluorescence are beyond the range of the human eye, charge-coupled devices (CCDs) are used to accurately detect the light and image the emission. This typically occurs in the 300–800 nm range. One advantage of fluorescent signaling is that the intensity of the emitted light behaves rather linearly with respect to the quantity of fluorescent molecules present, provided that the absorbed light intensity and wavelength are held constant. The image itself is usually recorded in a 12-bit or 16-bit data format.
Imaging systems.
The main components of fluorescence imaging systems are:
Applications.
Types of microscopy.
A diverse array of microscopy techniques can be employed to change the visualization and contrast of an image. Each method comes with pros and cons, but all utilize the same mechanism of fluorescence to observe a biological process.
Disadvantages.
Overall, this form of imaging is extremely useful in cutting-edge research, with its ability to monitor biological processes. The progression from 2D fluorescent images to 3D ones has allowed scientists to better study spatial precision and resolution. In addition, concentrated efforts towards 4D analysis now allow scientists to monitor a cell in real time, enabling them to observe fast-acting processes.
Future directions.
Developing more effective fluorescent proteins is a task that many scientists have taken up in order to improve imaging probe capabilities. Often, mutations at certain residues can significantly change a protein's fluorescent properties. For example, introducing the F64L mutation into jellyfish GFP allows the protein to fluoresce more efficiently at 37 °C, an important attribute when growing cultures in a laboratory. Genetic engineering can also produce proteins that emit light at more useful wavelengths or frequencies. Beyond the protein itself, the environment plays a crucial role: fluorescence lifetime can be stabilized in a polar environment.
Mechanisms that have been well described but not necessarily incorporated into practical applications hold promising potential for fluorescence imaging. Fluorescence resonance energy transfer (FRET) is an extremely sensitive mechanism that reports on molecular interactions over separations in the range of 1–10 nm.
Improvements in the techniques that underlie fluorescence processes are also crucial for more efficient designs. Fluorescence correlation spectroscopy (FCS) is an analysis technique that observes the fluctuation of fluorescence intensity. This analysis is a component of many fluorescence imaging machines, and improvements in its spatial resolution could improve sensitivity and range.
Development of more sensitive probes and analytical techniques for laser induced fluorescence can allow for more accurate, up-to-date experimental data.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\\lambda_{emission}=\\ \\frac{hc}{{Energy}_{emission}}"
}
] |
https://en.wikipedia.org/wiki?curid=59921575
|
59921578
|
Amplitwist
|
Concept used to represent a derivative
In mathematics, the amplitwist is a concept created by Tristan Needham in the book "Visual Complex Analysis" (1997) to represent the derivative of a complex function visually.
Definition.
The "amplitwist" associated with a given function is its derivative in the complex plane. More formally, it is a complex number formula_0 such that in an infinitesimally small neighborhood of a point formula_1 in the complex plane, formula_2 for an infinitesimally small vector formula_3. The complex number formula_0 is defined to be the derivative of formula_4 at formula_1.
Uses.
The concept of an amplitwist is used primarily in complex analysis to offer a way of visualizing the derivative of a complex-valued function as a local amplification and twist of vectors at a point in the complex plane.
Examples.
Define the function formula_5. Consider the derivative of the function at the point formula_6. Since the derivative of formula_7 is formula_8, we can say that for an infinitesimal vector formula_9 at formula_6, formula_10.
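A small numerical check of this example, sketched with Python's cmath module for illustration (the step ε is an arbitrary small complex number, not part of the original example):
```python
import cmath

f = lambda z: z ** 3
a = cmath.exp(1j * cmath.pi / 4)                 # the point e^{i pi/4}
eps = 1e-6 * cmath.exp(0.3j)                     # an arbitrary tiny vector at a

image = f(a + eps) - f(a)                        # image of the tiny vector under f
amplitwist = 3 * cmath.exp(1j * cmath.pi / 2)    # the derivative 3a^2 = 3e^{i pi/2}

print(abs(image / eps))                          # ~3.0: the amplification
print(cmath.phase(image / eps))                  # ~pi/2 ~ 1.5708: the twist
print(abs(image / eps - amplitwist) < 1e-4)      # True
```
The tiny vector is amplified by a factor of 3 and twisted by a quarter turn, which is exactly what the amplitwist 3e^{iπ/2} encodes.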
|
[
{
"math_id": 0,
"text": "z"
},
{
"math_id": 1,
"text": "a"
},
{
"math_id": 2,
"text": "f(\\xi) = z \\xi"
},
{
"math_id": 3,
"text": "\\xi"
},
{
"math_id": 4,
"text": "f"
},
{
"math_id": 5,
"text": "f(z) = z^3"
},
{
"math_id": 6,
"text": "e^{i\\frac{\\pi}{4}}"
},
{
"math_id": 7,
"text": "f(z)"
},
{
"math_id": 8,
"text": "3z^2"
},
{
"math_id": 9,
"text": "\\gamma"
},
{
"math_id": 10,
"text": "f(\\gamma)=3(e^{i\\frac{\\pi}{4}})^2\\gamma = 3e^{i\\frac{\\pi}{2}}\\gamma"
}
] |
https://en.wikipedia.org/wiki?curid=59921578
|
59925424
|
Sudoku graph
|
Mathematical graph of a Sudoku
In the mathematics of Sudoku, the Sudoku graph is an undirected graph whose vertices represent the cells of a (blank) Sudoku puzzle and whose edges represent pairs of cells that belong to the same row, column, or block of the puzzle. The problem of solving a Sudoku puzzle can be represented as precoloring extension on this graph. It is an integral Cayley graph.
Basic properties and examples.
On a Sudoku board of size formula_2, the Sudoku graph has formula_3 vertices, each with exactly formula_4 neighbors. Therefore, it is a regular graph. The total number of edges is formula_5.
For instance, the graph shown in the figure above, for a formula_0 board, has 16 vertices and 56 edges, and is 7-regular.
For the most common form of Sudoku, on a formula_1 board, the Sudoku graph is a 20-regular graph with 81 vertices and 810 edges.
The second figure shows how to count the neighbors of each cell in a formula_1 board.
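The counts above can be verified directly. The following sketch, written here for illustration rather than taken from the cited sources, constructs the Sudoku graph explicitly and checks the vertex, degree, and edge counts for the formula_0 and formula_1 boards:
```python
from itertools import combinations

def sudoku_graph(n):
    """Vertices and edges of the Sudoku graph on an n^2 x n^2 board.

    Cells are pairs (row, column); two cells are adjacent when they share
    a row, a column, or an n x n block."""
    size = n * n
    cells = [(r, c) for r in range(size) for c in range(size)]
    edges = {
        (u, v)
        for u, v in combinations(cells, 2)
        if u[0] == v[0] or u[1] == v[1]
        or (u[0] // n, u[1] // n) == (v[0] // n, v[1] // n)
    }
    return cells, edges

for n in (2, 3):
    cells, edges = sudoku_graph(n)
    degree = 3 * n**2 - 2 * n - 1
    assert len(cells) == n**4 and len(edges) == n**4 * degree // 2
    print(n, len(cells), len(edges))   # 2 -> 16, 56 ; 3 -> 81, 810
```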
Puzzle solutions and graph coloring.
Each row, column, or block of the Sudoku puzzle forms a clique in the Sudoku graph, whose size equals the number of symbols used to solve the puzzle. A graph coloring of the Sudoku graph using this number of colors (the minimum possible number of colors for this graph) can be interpreted as a solution to the puzzle. The usual form of a Sudoku puzzle, in which some cells are filled in with symbols and the rest must be filled in by the person solving the puzzle, corresponds to the precoloring extension problem on this graph.
Algebraic properties.
For any formula_6, the Sudoku graph of an formula_2 Sudoku board is an integral graph, meaning that the spectrum of its adjacency matrix consists only of integers. More precisely, its spectrum consists of the eigenvalues formula_4 (the degree), with multiplicity formula_7; formula_8, with multiplicity formula_9; formula_10, with multiplicity formula_11; formula_12, with multiplicity formula_13; formula_14, with multiplicity formula_15; and formula_16, with multiplicity formula_17.
It can be represented as a Cayley graph of the abelian group formula_18.
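As a numerical sanity check of the spectrum listed above (an illustrative sketch using NumPy, not part of the cited results), the eigenvalues of the formula_0 Sudoku graph can be computed directly:
```python
from collections import Counter
from itertools import combinations
import numpy as np

n = 2
size = n * n
cells = [(r, c) for r in range(size) for c in range(size)]
index = {cell: i for i, cell in enumerate(cells)}
A = np.zeros((len(cells), len(cells)))
for u, v in combinations(cells, 2):
    if u[0] == v[0] or u[1] == v[1] or (u[0] // n, u[1] // n) == (v[0] // n, v[1] // n):
        A[index[u], index[v]] = A[index[v], index[u]] = 1

spectrum = Counter(int(round(val)) for val in np.linalg.eigvalsh(A))
# Expected for n = 2: {7: 1, 3: 2, 1: 4, -1: 5, -3: 4}; the eigenvalue -1 has
# multiplicity (n-1)^2 + n^2(n-1)^2 = 5 because n^2-2n-1 coincides with -1 here.
print(dict(spectrum))
```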
Related graphs.
The Sudoku graph contains as a subgraph the rook's graph, which is defined in the same way using only the rows and columns (but not the blocks) of the Sudoku board.
The 20-regular 81-vertex Sudoku graph should be distinguished from a different 20-regular graph on 81 vertices, the Brouwer–Haemers graph, which has smaller cliques (of size 3) and requires fewer colors (7 instead of 9).
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "4\\times 4"
},
{
"math_id": 1,
"text": "9\\times 9"
},
{
"math_id": 2,
"text": "n^2\\times n^2"
},
{
"math_id": 3,
"text": "n^4"
},
{
"math_id": 4,
"text": "3n^2-2n-1"
},
{
"math_id": 5,
"text": "n^4(3n^2-2n-1)/2"
},
{
"math_id": 6,
"text": "n"
},
{
"math_id": 7,
"text": "1"
},
{
"math_id": 8,
"text": "2n^2-2n-1"
},
{
"math_id": 9,
"text": "2(n-1)"
},
{
"math_id": 10,
"text": "n^2-n-1"
},
{
"math_id": 11,
"text": "2n(n-1)"
},
{
"math_id": 12,
"text": "n^2-2n-1"
},
{
"math_id": 13,
"text": "(n-1)^2"
},
{
"math_id": 14,
"text": "-1"
},
{
"math_id": 15,
"text": "n^2(n-1)^2"
},
{
"math_id": 16,
"text": "-n-1"
},
{
"math_id": 17,
"text": "2n(n-1)^2"
},
{
"math_id": 18,
"text": "Z_n^4"
}
] |
https://en.wikipedia.org/wiki?curid=59925424
|
599306
|
ABA routing transit number
|
Code used in U.S. check transactions
In the United States, an ABA routing transit number (ABA RTN) is a nine-digit code printed on the bottom of checks to identify the financial institution on which it was drawn. The American Bankers Association (ABA) developed the system in 1910 to facilitate the sorting, bundling, and delivering of paper checks to the drawer's (check writer's) bank for debit to the drawer's account.
Newer electronic payment methods continue to rely on ABA RTNs to identify the paying bank or other financial institution. The Federal Reserve Bank uses ABA RTNs in processing Fedwire funds transfers. The ACH Network also uses ABA RTNs in processing direct deposits, bill payments, and other automated money transfers.
Management.
Since 1911, the American Bankers Association has partnered with a series of registrars, currently Accuity, to manage the ABA routing number system. Accuity is the Official Routing Number Registrar and is responsible for assigning ABA RTNs and managing the ABA RTN system. Accuity publishes the "American Bankers Association Key to Routing Numbers" semi-annually. The "Key Book" contains the listing of all ABA RTNs that have been assigned.
There are approximately 26,895 active ABA RTNs currently in use. Every financial institution in the United States has at least one. The Routing Number Policy allows for up to five ABA RTNs to be assigned to a financial institution. Many institutions have more than five ABA RTNs as a result of mergers.
ABA RTNs are only for use in payment transactions within the United States. They are used on paper checks, wire transfers, and ACH transactions. On a paper check, the ABA RTN is usually the middle set of nine numbers printed at the bottom of the check. Domestic transfers that use the ABA RTN will usually be returned to the paying bank.
Incoming international wire transfers also use a BIC code, also known as a SWIFT code, as they are administered by the Society for Worldwide Interbank Financial Telecommunication (SWIFT) and defined by ISO 9362. In addition, many international financial institutions use an IBAN code.
The IBAN was originally developed to facilitate payments within the European Union, but the format is flexible enough to be applied globally. It consists of an ISO 3166-1 alpha-2 country code, followed by two check digits that are calculated using a mod-97 technique, and a Basic Bank Account Number (BBAN) of up to thirty alphanumeric characters. The BBAN includes the domestic bank account number and potentially routing information. The national banking communities decide individually on a fixed length for all BBANs in their country.
History.
The bank numbers in the United States were originated by the American Bankers Association (ABA) in 1911. Banks had been disagreeing on identification, so the ABA arranged a meeting of clearing house managers in Chicago in December 1910. The gathering chose a committee to assign each bank in the country a convenient number to use. In May 1911, the American Bankers Association released the codes. The numerical committee consisted of W. G. Schroeder, C. R. McKay, and J. A. Walker, and the publisher of the new directory was Rand-McNally and Company.
The ABA clearing house codes work like the sub-headings in a decimal outline: the prefixes denote locations and the suffixes denote banking firms within those locations. Half of the prefixes represent major cities; the other half represent regions of the United States. Lower prefixes were assigned to higher populations, based initially on the 1910 U.S. Census. Likewise, within each prefix area, banks were numbered in order of city population and bank seniority, although single-bank towns were numbered in alphabetical order. When a new bank is organized, the current publisher of the directory of banks assigns it a transit code. The American Bankers Association asked banks to use the directory exclusively so that banks would agree on how to sort checks.
The book was titled "Key to Numerical System of The American Bankers Association" and was abbreviated as the "Key". It was published by Rand McNally & Co. In 1952 Rand McNally moved its corporate headquarters to Skokie, Illinois, and became more interested in publishing maps. Also based in Skokie is Accuity, which, through its corporate predecessors, has been the official registrar of ABA bank numbers since 1911. By 2014 it was the publisher of the semi-annual "ABA Key to Routing Numbers" and was owned by Reed Business Information, a British publisher of reference works for professionals, which in turn is owned by Reed Elsevier, an English-Dutch publisher of online reference works for professionals. Over the years the ABA's identification numbers for banks accommodated the Federal Reserve Act, the Expedited Funds Availability Act, and the Check 21 Act. By 2014 the "Key" included the U.S. Federal Reserve's nine-digit magnetic-ink routing numbers.
Formats.
The ABA RTN appears in two forms on a standard check – the fraction form and the MICR (magnetic ink character recognition) form. Both forms give essentially the same information, though there are slight differences.
The MICR form is the main form – it is printed in magnetic ink and is machine-readable; it appears at the bottom left of a check and consists of nine digits.
The fraction form was used for manual processing before the invention of the MICR line, and still serves as a backup in check processing should the MICR line become illegible or torn; it generally appears in the upper right part of a check near the date.
The MICR number is of the form
XXXXYYYYC
where XXXX is Federal Reserve Routing Symbol, YYYY is ABA Institution Identifier,
and C is the Check Digit, while the fraction is of the form:
PP-YYYY/XXXX
where PP is a one- or two-digit prefix, no longer used in processing but still printed, representing the bank's check-processing center location: 1 through 49 denote processing centers located in a major city, while 50 through 99 indicate that processing is done in a non-major city in a particular state. Sometimes a branch number or the account number is printed below the fraction form; the branch number is not used in processing, while the account number is listed in MICR form at the bottom. Further, the Federal Reserve Routing Symbol and ABA Institution Identifier may have fewer than four digits in the fraction form. The essential data, shared by both forms, are the Federal Reserve Routing Symbol (XXXX) and the ABA Institution Identifier (YYYY); these are usually the same in both the fraction form and the MICR form, with only the order and format switched (and left-padded with 0s to ensure that they are four digits long).
The prefix and the Federal Reserve Routing Symbol (XXXX) are determined by the bank's geographical location and its treatment by the Federal Reserve, while the remaining data (YYYY, and the branch number, if present) depend on the specific bank and are unique within a Federal Reserve district.
In the check depicted above right, the fraction form is "11-3167/1210" (with "01" below it) and MICR form is "129131673" which are analyzed as follows:
In the case of a MICR line that is illegible or torn, the check can still be processed without the check digit. Typically, a repair strip or sleeve is attached to the check, then a new MICR line is imprinted. Either 021200025 or 0212-0002 (with a hyphen in place of the check digit) may be printed, and both occupy nine positions. The former (with check digit) is preferred to ensure better accuracy, but requires computing the check digit, while the latter is easily determined by inspection of the fraction, with minimal clerical handling.
MICR routing number format.
The MICR routing number consists of nine digits:
XXXXYYYYC
where XXXX is Federal Reserve Routing Symbol, YYYY is ABA Institution Identifier,
and C is the check digit.
Federal Reserve.
The Federal Reserve uses the ABA RTN system for processing its customers' payments. The ABA RTNs were originally assigned in the systematic way outlined below, reflecting a financial institution's geographical location and internal handling by the Federal Reserve. Following consolidation of the Federal Reserve's check processing facilities, and the consolidation in the banking industry, the RTN a financial institution uses may not reflect the "Fed District" where the financial institution's place of business is located. Check processing is now centralized at the Federal Reserve Bank of Atlanta.
The first two digits of the nine digit RTN must be in the ranges 00 through 12, 21 through 32, 61 through 72, or 80.
The digits are assigned as follows:
The first two digits correspond to the 12 Federal Reserve Banks as follows:
The third digit corresponds to the Federal Reserve check processing center originally assigned to the bank.
The fourth digit is "0" if the bank is located in the Federal Reserve city proper, and otherwise is 1–9, according to which state in the Federal Reserve district it is.
ABA Institution Identifier.
The fifth through eighth digits constitute the bank's unique ABA identity within the given Federal Reserve district.
Check digit.
The ninth digit, the check digit, provides a checksum test using a position-weighted sum of all nine digits. High-speed check-sorting equipment will typically verify the checksum and, if it fails, route the item to a reject pocket for manual examination, repair, and re-sorting. Mis-routings to an incorrect bank are thus greatly reduced.
The following condition must hold:
(3(d1 + d4 + d7) + 7(d2 + d5 + d8) + (d3 + d6 + d9)) mod 10 = 0
(Mod or modulo is the remainder of a division operation.)
In terms of weights, this is 371 371 371. This allows one to catch any single-digit error (incorrectly inputting one digit), together with most transposition errors. 1, 3, and 7 are used because they (together with 9) are coprime to 10; using a coefficient divisible by 2 or 5 would lose information (because formula_0), and thus would not catch some substitution errors. These weights do not catch transpositions of two adjacent digits that differ by 5 (0 and 5, 1 and 6, 2 and 7, 3 and 8, 4 and 9), but they do catch other transposition errors.
As an example, consider 111000025 (which is a valid routing number of Bank of America in Virginia). Applying the formula, we get:
(3(1 + 0 + 0) + 7(1 + 0 + 2) + (1 + 0 + 5)) mod 10 = 0.
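A minimal sketch of this checksum test (written for illustration; the routing numbers used are the ones quoted in this section, plus a deliberately corrupted copy):
```python
def aba_checksum_ok(rtn):
    """Return True when a nine-digit ABA routing number passes the 3-7-1 test."""
    if len(rtn) != 9 or not rtn.isdigit():
        return False
    d = [int(ch) for ch in rtn]
    total = 3 * (d[0] + d[3] + d[6]) + 7 * (d[1] + d[4] + d[7]) + (d[2] + d[5] + d[8])
    return total % 10 == 0

print(aba_checksum_ok("111000025"))  # True: the example above
print(aba_checksum_ok("121000025"))  # False: a single-digit error is caught
```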
Routing symbol.
The symbol that delimits a routing transit number is the MICR E-13B transit character ⑆, which has Unicode value U+2446.
Fraction format.
The fraction form looks like a fraction, with a numerator and a denominator.
The numerator consists of two parts separated by a dash. The prefix (no longer used in check processing, yet still printed on most checks) is a 1 or 2 digit code (P or PP) indicating the region where the bank is located. The numbers 1 to 49 are cities, assigned by size of the cities in 1910. The numbers 50 to 99 are states, assigned in a rough spatial geographic order, and are used for banks located outside one of the 49 numbered cities.
The second part of the numerator (after the dash) is the bank's ABA Institution Identifier, which also forms digits 5 to 8 of the nine digit routing number (YYYY).
The denominator is also part of the routing number; by adding leading zeroes to make up four digits where necessary (e.g. 212 is written as 0212, 31 is written as 0031, etc.), it forms the first four digits of the routing number (XXXX).
There might also be a fourth element printed to the right of the fraction: this is the bank's branch number. It is not included in the MICR line. It would only be used internally by the bank, e.g. to show where the signature card is located, where to contact the responsible officer in case of an overdraft, etc.
For example, a check from Wachovia Bank in Yardley, PA, has a fraction of 55-2/212 and a routing number of 021200025. The prefix (55) no longer has any relevance, but from the remainder of the fraction, the first 8 digits of the routing number (02120002) can be determined, and the check digit (the last digit, 5 in this example) can be calculated by using the check digit formula (thus giving 021200025).
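The reconstruction described above can be sketched as follows (an illustration, not an official algorithm: it simply pads the two fields of the fraction to four digits and appends the check digit implied by the 3-7-1 weighting):
```python
def check_digit(first_eight):
    """Ninth digit making 3(d1+d4+d7) + 7(d2+d5+d8) + (d3+d6+d9) divisible by 10."""
    d = [int(ch) for ch in first_eight]
    partial = 3 * (d[0] + d[3] + d[6]) + 7 * (d[1] + d[4] + d[7]) + d[2] + d[5]
    return str(-partial % 10)

def rtn_from_fraction(institution, denominator):
    """Nine-digit routing number from the fraction's YYYY and XXXX parts."""
    xxxx = denominator.zfill(4)      # Federal Reserve Routing Symbol
    yyyy = institution.zfill(4)      # ABA Institution Identifier
    return xxxx + yyyy + check_digit(xxxx + yyyy)

# Wachovia example from the text: fraction 55-2/212 (the prefix 55 is ignored).
print(rtn_from_fraction("2", "212"))   # 021200025
```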
ABA prefix table.
This table is up to date as of 2020. One weakness of the current routing table arrangement is that various territories like American Samoa, Guam, Puerto Rico and the US Virgin Islands share the same routing code.
See also.
General category
Canada has similar but different transaction routing structures
References.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Refbegin/styles.css" />
|
[
{
"math_id": 0,
"text": "5 \\cdot 0 = 5 \\cdot 2 = 5 \\cdot 4 = 5 \\cdot 6 = 5 \\cdot 8 = 0 \\mod 10"
}
] |
https://en.wikipedia.org/wiki?curid=599306
|
59934381
|
Spectral G-index
|
Variable to quantify the short wavelength light in a visible light source
The spectral G-Index is a variable that was developed to quantify the amount of short-wavelength light in a visible light source relative to its visible emission (it is a measure of the amount of blue light per lumen). The smaller the G-index, the more blue, violet, or ultraviolet light a lamp emits relative to its total output. It is used to select outdoor lamps that minimize skyglow and ecological light pollution. The G-index was originally proposed by an astrophysicist at Calar Alto Observatory.
Definition.
The G-index is grounded in the system of astronomical photometry, and is defined as follows:
formula_0
where
The sums are to be taken using a step size of 1 nm. For lamps with absolutely no emissions below 500 nm (e.g. Low Pressure Sodium or PC Amber LED), the G-index would in principle be undefined. In practice, such lamps would be reported as having G greater than some value, due to the limits of measurement precision. The Regional Government of Andalusia has developed a spreadsheet to allow calculation of the G-index for any lamp for which the spectral power distribution is known, and it can also be calculated in the "Astrocalc" software or the f.luxometer web app.
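A minimal sketch of that calculation (illustrative only; it assumes the caller supplies the lamp's spectral power distribution sampled at 1 nm steps and a table of the photopic luminosity function V(λ), neither of which is provided here):
```python
import math

def g_index(spd, v_lambda):
    """Spectral G-index from a spectral power distribution sampled at 1 nm steps.

    spd: dict mapping wavelength in nm to spectral power E(lambda).
    v_lambda: dict mapping wavelength in nm to the photopic luminosity V(lambda).
    Raises a math domain error if there is no emission at all below 500 nm,
    matching the 'undefined' case described above."""
    blue = sum(spd.get(w, 0.0) for w in range(380, 501))
    visible = sum(spd.get(w, 0.0) * v_lambda.get(w, 0.0) for w in range(380, 781))
    return -2.5 * math.log10(blue / visible)
```
A lamp with relatively little emission below 500 nm yields a large positive G, consistent with the thresholds discussed below.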
The G-index does not directly measure light pollution, but rather says something about the color of light coming from a lamp. For example, since the equation defining G-index is normalised to total flux, if twice as many lamps are used, the G-index would not change; it is a measure of fractional light, not total light. Similarly, the definition of G-index does not include the direction in which light shines, so it is not directly related to skyglow, which depends strongly on direction.
Rationale.
The ongoing global switch from (mainly) orange high pressure sodium lamps for street lighting to (mainly) white LEDs has resulted in a shift towards broad spectrum light, with greater short wavelength (blue) emissions. This switch is problematic from the perspective of increased astronomical and ecological light pollution. Short wavelength light is more likely to scatter in the atmosphere, and therefore produces more artificial skyglow than an equivalent amount of longer wavelength light. Additionally, both broad spectrum (white) light and short wavelength light tend to have greater overall ecological impacts than narrow band and long wavelength visible light. For this reason, lighting guidelines, recommendations, norms, and legislation frequently place limits on blue light emissions. For example, the "fixture seal of approval" program of the International Dark-Sky Association limits lights to have a correlated color temperature (CCT) below 3000 K, while the national French light pollution law restricts CCT to maximum 3000 K in most areas, and 2400 K or 2700 K in protected areas such as nature reserves.
The problem with these approaches is that CCT is not perfectly correlated with blue light emissions. Lamps with identical CCT can have quite different fractional blue light emissions. This is because CCT is based upon comparison to a blackbody light source, which is a poor approximation for LEDs and vapor discharge lamps such as high pressure sodium. The G-index was therefore developed for use in decision making for the purchase of outdoor lamps and in lighting regulations as an improved alternative to the CCT metric.
Use.
In 2019, the European Commission's Joint Research Centre incorporated the G-index into their guidelines for the Green Public Procurement of road lighting. Specifically, in areas needing protection for astronomical or ecological reasons, they recommend the use of the G-index instead of CCT in making lighting decisions, because the G-index more accurately quantifies the amount of blue light. In their "core criteria", they recommend that "in parks, gardens and areas considered by the procurer to be ecologically sensitive, the G-index shall be ≥1.5". In the case that G-index could for some reason not be calculated, they suggest that CCT≤3000 K is likely to satisfy this criterion. In the stricter "comprehensive criteria", they recommend that parks and ecologically sensitive areas or areas at specified distances from optical astronomy observatories have a G-index greater than or equal to 2.0. Again, in this case if calculating the G-index is not possible, CCT≤2700 K is suggested.
The G-index is planned to be used by the Regional Government of Andalusia, specifically for the purpose of protecting the night sky. Depending on the "environmental zone", the regulation requires lighting to have a G value above 2, 1.5, or 1. In areas where astronomical activities are ongoing, it is expected that only monochromatic or quasi-monochromatic lamps will be used, with G>3.5 and in principle only emissions in the interval 585-605 nm.
Questionable Use Warning.
The G-index has not been evaluated or adopted by a standards development organization (SDO), such as the CIE. Generally, for a specification to be used in a regulation or tender, it must go through the rigorous process of evaluation and adoption by an SDO. It is thus questionable for the EC Joint Research Center and the Andalusian Regional Government (and others) to suggest or prescribe mandatory requirements based on the G-index.
A measure focused solely on reducing blue light will not by itself provide ecological protection. Because the intensity of light plays a role as strong as or stronger than spectrum, putting the light in the right places (on road surfaces and sidewalks) and avoiding spillage into ecological regions is likely to be more effective than manipulating the spectrum of the light. Spectrum does play a role, but in order to prevent disturbance to sensitive animals, changes must be made to the spectrum which cannot be described by the G-index, and those changes are also species dependent. A specific (red-dominant) spectrum has been shown to be as good as darkness for many (but not all) light-sensitive insect and bat species, while an amber spectrum has been shown to be less eco-friendly than a red spectrum for some species, although both have negligible blue content and a 'favorable' G-index. The use of the spectral G-index is therefore overly simplistic, may do more harm than good, and is strongly discouraged in lighting specifications or regulations.
References.
<templatestyles src="Reflist/styles.css" />
|
[
{
"math_id": 0,
"text": "\nG=-2.5\\mathrm{log}_{10}\n\\frac{\\sum\\limits _{\\lambda=380 \\mathrm{nm}} ^{500\\mathrm{nm}} E(\\lambda)}\n{\\sum\\limits _{\\lambda=380 \\mathrm{nm}} ^{780\\mathrm{nm}} E(\\lambda)V(\\lambda)}\n"
}
] |
https://en.wikipedia.org/wiki?curid=59934381
|
5993712
|
Koide formula
|
Unexplained empirical equation in particle physics
The Koide formula is an unexplained empirical equation discovered by Yoshio Koide in 1981. In its original form it was not purely empirical but part of a model, built on a set of guesses, for the masses of quarks and leptons as well as the CKM angles. Of that model, only the observation about the masses of the three charged leptons survives; later authors have extended the relation to neutrinos, quarks, and other families of particles.
Formula.
The Koide formula is
formula_0
where the masses of the electron, muon, and tau are the experimentally measured values; throughout, digits in parentheses denote the uncertainty in the last digits of the quantity.
No matter what masses are chosen to stand in place of the electron, muon, and tau, the ratio "Q" is constrained to lie between 1/3 and 1. The upper bound follows from the fact that the square roots are necessarily positive, and the lower bound follows from the Cauchy–Bunyakovsky–Schwarz inequality. The experimentally determined value lies at the center of the mathematically allowed range. Note, however, that if the requirement of positive roots is removed, it is possible to fit an extra tuple in the quark sector (the one with strange, charm, and bottom).
The mystery is in the physical value. Not only is the result peculiar in that three ostensibly arbitrary numbers give a simple fraction, but also in that, for the electron, muon, and tau, "Q" is exactly halfway between the two extremes of all possible combinations: 1/3 (if the three masses were equal) and 1 (if one mass dwarfs the other two). "Q" is a dimensionless quantity, so the relation holds regardless of which unit is used to express the magnitudes of the masses.
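A quick numerical check of the ratio (a sketch; the lepton masses used below are approximate published values in MeV/c², supplied here for illustration rather than quoted from this article):
```python
from math import sqrt

# Approximate charged-lepton masses in MeV/c^2 (rounded published values,
# supplied here for illustration).
m_e, m_mu, m_tau = 0.510999, 105.6584, 1776.86

def koide_q(m1, m2, m3):
    return (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3)) ** 2

print(koide_q(m_e, m_mu, m_tau))   # ~0.66666, very close to 2/3
```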
Robert Foot also interpreted the Koide formula as a geometrical relation, in which the value formula_1 is the squared cosine of the angle between the vector formula_2 and the vector formula_3 (see dot product). That angle is almost exactly 45 degrees: formula_4
When the formula is assumed to hold exactly, it may be used to predict the tau mass from the (more precisely known) electron and muon masses; the resulting prediction agrees with the measured tau mass within its experimental uncertainty.
While the original formula arose in the context of preon models, other ways have been found to derive it (both by Sumino and by Koide – see references below). As a whole, however, understanding remains incomplete. Similar matches have been found for triplets of quarks depending on running masses. With alternating quarks, chaining Koide equations for consecutive triplets, it is possible to reach a result of 173.263947(6) GeV for the mass of the top quark.
Speculative extension.
Carl Brannen has proposed the lepton masses are given by the squares of the eigenvalues of a circulant matrix with real eigenvalues, corresponding to the relation
formula_5 for n = 0, 1, 2, ...
which can be fit to experimental data with η2 = 0.500003(23) (corresponding to the Koide relation) and phase δ = 0.2222220(19), which is almost exactly 2/9. However, the experimental data are in conflict with simultaneous exact equality of η2 = 1/2 and δ = 2/9.
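One can verify numerically that this parametrisation yields the Koide ratio whenever η2 = 1/2, independently of the overall scale μ (a sketch; the values of μ and δ below are illustrative, with δ taken from the fit quoted above):
```python
from math import cos, pi

def koide_q_from_brannen(mu, eta, delta):
    """Koide ratio implied by sqrt(m_n) = mu*(1 + 2*eta*cos(delta + 2*pi*n/3))."""
    sqrt_m = [mu * (1 + 2 * eta * cos(delta + 2 * pi * n / 3)) for n in (0, 1, 2)]
    return sum(s * s for s in sqrt_m) / sum(sqrt_m) ** 2

# Algebraically the ratio equals (1 + 2*eta^2)/3, so eta^2 = 1/2 gives exactly
# 2/3 for any overall scale mu; delta = 0.2222 is the fitted phase quoted above.
print(koide_q_from_brannen(mu=1.0, eta=0.5 ** 0.5, delta=0.2222))   # 0.666666...
```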
This kind of relation has also been proposed for the quark families, with phases equal to the low-energy values 2/27 = 2/9 × 1/3 and 4/27 = 2/9 × 2/3, hinting at a relation with the electric charge of the particle family (1/3 and 2/3 for quarks vs. 1 for the leptons).
Origins.
The original derivation postulates formula_6 with the conditions
formula_7
formula_8
from which the formula follows. Besides, masses for neutrinos and down quarks were postulated to be proportional to formula_9 while masses for up quarks were postulated to be formula_10
The published model justifies the first condition as part of a symmetry breaking scheme, and the second one as a "flavor charge" for preons in the interaction that causes this symmetry breaking.
Note that in matrix form with formula_11 and formula_12 the equations are simply formula_13 and formula_14
Similar formulae.
There are similar formulae which relate other masses.
Quark masses depend on the energy scale used to measure them, which makes an analysis more complicated.
Taking the heaviest three quarks, charm (1.275 ± 0.03 GeV), bottom (4.180 ± 0.04 GeV) and top (173.0 ± 0.40 GeV), regardless of their uncertainties, one arrives at the value cited by F. G. Cao (2012):
formula_15
This was noticed by Rodejohann and Zhang in the first version of their 2011 article, but the observation was removed in the published version, so the first published mention is in 2012 from Cao.
Similarly, the masses of the lightest quarks, up (2.2 ± 0.4 MeV), down (4.7 ± 0.3 MeV), and strange (95.0 ± 4.0 MeV), without using their experimental uncertainties, yield
formula_16
a value also cited by Cao in the same article.
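Both quark ratios can be reproduced directly from the central mass values quoted above (a sketch; uncertainties are ignored, as in the text):
```python
from math import sqrt

def koide_q(masses):
    return sum(masses) / sum(sqrt(m) for m in masses) ** 2

heavy = [1.275, 4.180, 173.0]          # charm, bottom, top (GeV)
light = [2.2e-3, 4.7e-3, 95.0e-3]      # up, down, strange (GeV)

print(round(koide_q(heavy), 3))   # 0.669
print(round(koide_q(light), 2))   # 0.57
```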
Note that an older article by H. Harari et al. calculates "theoretical" values for the up, down, and strange quarks, coincidentally matching the later Koide formula, albeit with a massless up quark.
formula_17
Running of particle masses.
In quantum field theory, quantities like coupling constant and mass "run" with the energy scale.
That is, their value depends on the energy scale at which the observation occurs, in a way described by a renormalization group equation (RGE).
One usually expects relationships between such quantities to be simple at high energies (where some symmetry is unbroken) but not at low energies, where the RG flow will have produced complicated deviations from the high-energy relation. The Koide relation is exact (within experimental error) for the pole masses, which are low-energy quantities defined at different energy scales. For this reason, many physicists regard the relation as "numerology".
However, the Japanese physicist Yukinari Sumino has proposed mechanisms to explain the origin of the charged lepton spectrum as well as the Koide formula, e.g., by constructing an effective field theory with a new gauge symmetry that causes the pole masses to exactly satisfy the relation.
Koide has published his opinions concerning Sumino's model.
François Goffinet's doctoral thesis gives a discussion on pole masses and how the Koide formula can be reformulated to avoid using square roots for the masses.
As solutions to a cubic equation.
A cubic equation usually arises in symmetry breaking when solving for the Higgs vacuum, and is a natural object when considering three generations of particles. This involves finding the eigenvalues of a 3×3 mass matrix.
For this example, consider a characteristic polynomial
formula_18
with roots formula_19 that must be real and positive.
To derive the Koide relation, let formula_20 and the resulting polynomial can be factored into
formula_21
or
formula_22
The elementary symmetric polynomials of the roots must reproduce the corresponding coefficients of the polynomial that they solve, so formula_23 and formula_24 Taking the ratio of these symmetric polynomials, squaring the first so as to divide out the unknown parameter formula_25 we obtain a Koide-type formula: regardless of the value of formula_25 the solutions to the cubic equation for formula_26 must satisfy
formula_27
so
formula_28
and
formula_29
Converting back to formula_30
formula_31
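A numerical illustration of this derivation (a sketch using NumPy; n = 2 is an arbitrary choice for which the three roots of the first factor happen to be real and positive):
```python
import numpy as np

n = 2.0
# Roots of x^3 - 3n x^2 + (3/2) n^2 x - 3/2, the first factor above.
x = np.roots([1.0, -3.0 * n, 1.5 * n**2, -1.5])
assert np.allclose(x.imag, 0.0) and np.all(x.real > 0)   # real, positive roots
m = x.real ** 2
print(m.sum() / x.real.sum() ** 2)   # 0.6666... = 2/3
```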
For the relativistic case, Goffinet's dissertation presented a similar method to build a polynomial with only even powers of formula_32
Higgs mechanism.
Koide proposed that an explanation for the formula could be a Higgs particle with formula_33 flavour charge formula_34 given by:
formula_35
with the charged lepton mass terms given by formula_36 Such a potential is minimised when the masses fit the Koide formula. Minimising does not give the mass scale, which would have to be given by additional terms of the potential, so the Koide formula might indicate existence of additional scalar particles beyond the Standard Model's Higgs boson.
In fact, one such Higgs potential would be precisely formula_37 which, when the determinant is expanded in terms of traces, simplifies by means of the Koide relations.
Footnotes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
*
|
[
{
"math_id": 0,
"text": "Q = \\frac{~ m_e + m_\\mu + m_\\tau ~}{\\left(\\ \\sqrt{\\ m_e\\ } + \\sqrt{\\ m_\\mu\\ } + \\sqrt{\\ m_\\tau\\ }\\ \\right)^2} = 0.666661(7) \\approx \\frac{\\ 2\\ }{ 3 }\\ ,"
},
{
"math_id": 1,
"text": "\\ \\frac{ 1 }{\\ 3\\ Q\\ }\\ "
},
{
"math_id": 2,
"text": "\\ [\\ \\sqrt{\\ m_e\\ }, \\sqrt{\\ m_\\mu\\ }, \\sqrt{\\ m_\\tau\\ }\\ ]\\ "
},
{
"math_id": 3,
"text": "\\ [\\ 1, 1, 1\\ ]\\ "
},
{
"math_id": 4,
"text": "\\ \\theta = 45.000^\\circ \\pm 0.001^\\circ ~."
},
{
"math_id": 5,
"text": "\\sqrt{\\,m_n\\;} = \\mu \\left[\\,1 + 2 \\eta \\cos\\left( \\delta + \\frac{\\,2\\pi\\,}{3}\\cdot n \\right) \\,\\right] ~,~"
},
{
"math_id": 6,
"text": "\\ m_{e_i} \\propto\\ (z_0 + z_i)^2\\ "
},
{
"math_id": 7,
"text": "\\ z_1 + z_2 + z_3 = 0\\ "
},
{
"math_id": 8,
"text": "\\ \\tfrac{\\ 1\\ }{ 3 }\\ (z_1^2+z_2^2+z_3^2) = z_0^2\\ "
},
{
"math_id": 9,
"text": "\\ z_i^2\\ "
},
{
"math_id": 10,
"text": "\\ \\propto\\ ( z_0 + 2 z_i )^2 ~."
},
{
"math_id": 11,
"text": "~~ M = A\\ A^\\dagger ~~"
},
{
"math_id": 12,
"text": "~~ A = Z_0 + Z ~~"
},
{
"math_id": 13,
"text": "~~ \\operatorname{tr} Z = 0 ~~"
},
{
"math_id": 14,
"text": "~~ \\operatorname{tr} Z_0^2 = \\operatorname{tr} Z^2 ~."
},
{
"math_id": 15,
"text": "Q_\\text{heavy} = \\frac{m_c + m_b + m_t}{\\big(\\sqrt{m_c} + \\sqrt{m_b} + \\sqrt{m_t}\\big)^2} \\approx 0.669 \\approx \\frac{2}{3}."
},
{
"math_id": 16,
"text": "Q_\\text{light} = \\frac{m_u + m_d + m_s}{\\big(\\sqrt{m_u} + \\sqrt{m_d} + \\sqrt{m_s}\\big)^2} \\approx 0.57 \\approx \\frac{5}{9},"
},
{
"math_id": 17,
"text": "Q_\\text{light} = \\frac{0 + m_d + m_s}{\\big(\\sqrt{0} + \\sqrt{m_d} + \\sqrt{m_s}\\big)^2} "
},
{
"math_id": 18,
"text": "\\ 4\\ m^3 - 24\\ n^2\\ m^2 + 9\\ n\\ (n^3 - 4)\\ m - 9\\ "
},
{
"math_id": 19,
"text": "\\ m_j\\ :\\ j = 1, 2, 3\\ ,"
},
{
"math_id": 20,
"text": "\\ m \\equiv x^2\\ "
},
{
"math_id": 21,
"text": "\\ (\\ 2\\ x^3 - 6\\ n\\ x^2 + 3\\ n^2x - 3\\ )(\\ 2\\ x^3 + 6\\ n\\ x^2 + 3\\ n^2\\ x + 3\\ )\\ "
},
{
"math_id": 22,
"text": "\\ 4\\ (\\ x^3 - 3\\ n\\ x^2 + \\tfrac{ 3 }{\\ 2\\ }\\ n^2x - \\tfrac{ 3 }{\\ 2\\ }\\ )(\\ x^3 + 3\\ n\\ x^2 + \\tfrac{ 3 }{\\ 2\\ }\\ n^2\\ x + \\tfrac{ 3 }{\\ 2\\ }\\ )\\ "
},
{
"math_id": 23,
"text": "~~ x_1 + x_2 + x_3 = \\pm 3\\ n ~~"
},
{
"math_id": 24,
"text": "~~ x_1 x_2 + x_2 x_3 + x_3 x_1 = + \\tfrac{ 3 }{\\ 2\\ }\\ n^2 ~."
},
{
"math_id": 25,
"text": "\\ n\\ ,"
},
{
"math_id": 26,
"text": "\\ x\\ "
},
{
"math_id": 27,
"text": "\\ \\frac{\\ 2\\ ( x_1 x_2 + x_2 x_3 + x_3 x_1)\\ }{~ ( x_1 + x_2 + x_3 )^2\\ } = \\frac{\\ (3\\ n^2)\\ }{~ (\\pm 3\\ n)^2\\ } = \\frac{\\ 1\\ }{ 3 }\\ "
},
{
"math_id": 28,
"text": "\\ 1 - \\frac{\\ 2\\ x_1 x_2 + 2\\ x_2 x_3 + 2\\ x_3 x_1\\ }{~ ( x_1 + x_2 + x_3 )^2\\ } = 1 - \\frac{\\ 1\\ }{ 3 } = \\frac{\\ 2\\ }{ 3 }\\ ~."
},
{
"math_id": 29,
"text": "\\ 1 - \\frac{\\ 2\\ x_1 x_2 + 2\\ x_2 x_3 + 2\\ x_3 x_1\\ }{~ ( x_1 + x_2 + x_3 )^2\\ } = \\frac{\\ ( x_1 + x_2 + x_3 )^2 - 2\\ x_1 x_2 - 2\\ x_2 x_3 - 2\\ x_3 x_1\\ }{~ ( x_1 + x_2 + x_3 )^2\\ } = \\frac{\\ x_1^2 + x_2^2 + x_3^2\\ }{~ ( x_1 + x_2 + x_3 )^2\\ } ~."
},
{
"math_id": 30,
"text": "\\ \\sqrt{\\ m\\ } = x\\ "
},
{
"math_id": 31,
"text": "\\frac{\\ m_1 + m_2 + m_3\\ }{\\ \\left(\\ \\sqrt{\\ m_1\\ } + \\sqrt{\\ m_2\\ } + \\sqrt{\\ m_3\\ }\\ \\right)^2\\ } = \\frac{\\ 2\\ }{ 3 }\\ ~."
},
{
"math_id": 32,
"text": "\\ m ~."
},
{
"math_id": 33,
"text": "U(3)"
},
{
"math_id": 34,
"text": "\\Phi^{a\\overline{b}}"
},
{
"math_id": 35,
"text": "\\ V(\\Phi) = \\left[\\ 2\\ \\left[tr(\\Phi)\\right]^2 - 3\\ tr(\\Phi^2)\\ \\right]^2\\ "
},
{
"math_id": 36,
"text": "\\ \\overline{\\psi}\\ \\Phi^2\\ \\psi ~."
},
{
"math_id": 37,
"text": "V(\\Phi) = \\det[(\\Phi-\\sqrt{m_e})]^2 + \\det[(\\Phi-\\sqrt{m_\\mu})]^2 + \\det[(\\Phi-\\sqrt{m_\\tau})]^2"
}
] |
https://en.wikipedia.org/wiki?curid=5993712
|
59937621
|
Van Schooten's theorem
|
On lines connecting the vertices of an equilateral triangle to a point on its circumcircle
Van Schooten's theorem, named after the Dutch mathematician Frans van Schooten, describes a property of equilateral triangles. It states:
"For an equilateral triangle formula_0 with a point formula_1 on its circumcircle the length of longest of the three line segments formula_2 connecting formula_1 with the vertices of the triangle equals the sum of the lengths of the other two."
The theorem is a consequence of Ptolemy's theorem for concyclic quadrilaterals. Let formula_3 be the side length of the equilateral triangle formula_0 and formula_4 the longest line segment. The triangle's vertices together with formula_1 form a concyclic quadrilateral and hence Ptolemy's theorem yields:
formula_5
Dividing the last equation by formula_3 delivers Van Schooten's theorem.
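A quick numerical illustration (a sketch; the circumcircle is taken to be the unit circle and the point P is chosen arbitrarily on the arc between two vertices):
```python
import cmath

# Equilateral triangle inscribed in the unit circle.
A, B, C = (cmath.exp(2j * cmath.pi * k / 3) for k in range(3))
P = cmath.exp(1j * cmath.pi)        # a point on the arc between B and C

PA, PB, PC = abs(P - A), abs(P - B), abs(P - C)
print(PA, PB + PC)                  # both equal 2 (up to rounding)
print(abs(PA - (PB + PC)) < 1e-12)  # True
```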
|
[
{
"math_id": 0,
"text": "\\triangle ABC"
},
{
"math_id": 1,
"text": "P"
},
{
"math_id": 2,
"text": "PA, PB, PC"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "PA"
},
{
"math_id": 5,
"text": "\n\\begin{align} \n& |BC| \\cdot |PA| =|AC| \\cdot |PB| + |AB| \\cdot |PC| \\\\[6pt]\n \\Longleftrightarrow & a \\cdot |PA| =a \\cdot |PB| + a \\cdot |PC|\n\\end{align}\n"
}
] |
https://en.wikipedia.org/wiki?curid=59937621
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.