id (stringlengths 2–8) | title (stringlengths 1–130) | text (stringlengths 0–252k) | formulas (listlengths 1–823) | url (stringlengths 38–44) |
---|---|---|---|---|
72683786 | Earth–Moon problem | Unsolved problem on graph coloring
Unsolved problem in mathematics:
How many colors are needed to color biplanar graphs?
The Earth–Moon problem is an unsolved problem on graph coloring in mathematics. It is an extension of the planar map coloring problem (solved by the four color theorem), and was posed by Gerhard Ringel in 1959. An intuitive form of the problem asks how many colors are needed to color political maps of the Earth and Moon, in a hypothetical future where each Earth country has a Moon colony which must be given the same color. In mathematical terms, it seeks the chromatic number of biplanar graphs. It is known that this number is at least 9 and at most 12.
The Earth–Moon problem has been extended to analogous problems of coloring maps on any number of planets. For this extension the lower bounds and upper bounds on the number of colors are closer, within two of each other. One real-world application of the Earth–Moon problem involves testing printed circuit boards.
Formulation and history.
In the map coloring problem, finitely many simply connected regions in the Euclidean plane or a topologically equivalent space, such as countries on the surface of the Earth, are to be colored so that, when two regions share a boundary of nonzero length, they have different colors. It can be transformed into a graph coloring problem by making a vertex for each region and an edge for each two neighboring regions, producing a planar graph whose vertices are to be colored. Corresponding to the requirement that adjacent regions should have different colors, adjacent vertices (the two endpoints of any edge) should have different colors. According to the four color theorem, the resulting planar graph (or any planar graph) can be colored using at most four different colors, no matter how many regions are given.
In 1959, Gerhard Ringel published a book on colorings of surfaces, surveying the results at the time on the four color problem and the Heawood conjecture on coloring maps on non-planar surfaces such as the torus and Klein bottle. Both had been long-conjectured but were unsolved at the time. Ringel himself later proved the Heawood conjecture in a 1968 paper with J. W. T. Youngs; the four-color theorem evaded proof until 1976. Another topic of Ringel's book was a result of Percy John Heawood from 1890, on the "empire problem": coloring maps in which each empire has some number formula_0 of distinct regions on the Earth (a home country and formula_1 colonies). As Heawood showed for formula_2, and Ringel later proved with Jackson in 1984 for formula_3, formula_4 colors are necessary and sufficient. Perhaps inspired by this problem and the dawn of the space age, Ringel included the Earth-Moon problem in his book as a variant of the empire problem in which the colonies are on the Moon rather than on the Earth. In a formulation of Martin Gardner, the colonies are instead on Mars.
In Ringel's Earth–Moon problem, each country on the Earth has a corresponding colony on the surface of the Moon, that must be given the same color. These colonies may have borders that are completely different from the arrangement of the borders on the Earth. The countries must be colored, using the same color for each country and its colony, so that when two countries share a border either on the Earth or on the Moon they are given different colors. Ringel's problem asks: how many colors are needed to guarantee that the countries can all be colored, no matter how their boundaries are arranged? Ringel proved that the number of colors needed was at least 8 and at most 12, conjecturing that 8 was the correct answer.
Again, one can phrase the same question equivalently as one in graph theory, with a single vertex for each pair of a country and its colony, and an edge for each adjacency between countries or colonies. As in the planar case, after this transformation, it is the vertices that must be colored, with different colors for the endpoints of each edge. The graphs that result in this version of the problem are biplanar graphs, or equivalently the graphs of thickness two: their edges can be partitioned into two subsets (the edges coming from Earth adjacencies and those coming from Moon adjacencies) such that the corresponding two subgraphs are both planar. In mathematical terms, Ringel's problem asks for the maximum chromatic number of biplanar graphs.
Bounds.
A biplanar graph on formula_5 vertices has at most formula_6 edges (double the number that a planar graph can have), from which it follows from the degree sum formula that it has at least one vertex with at most 11 neighbors. Removing this vertex, coloring the remaining graph recursively, and then using the smallest-numbered unused color for the removed vertex leads to a coloring with at most 12 colors; this is the greedy coloring for a degeneracy ordering of the graph. Therefore, biplanar graphs require at most 12 colors.
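The argument above is a greedy coloring along a degeneracy ordering; a minimal Python sketch follows (the function name and adjacency-dict format are made up for the illustration, not taken from any source implementation).
```python
def degeneracy_coloring(adj):
    """Greedy coloring along a degeneracy ordering.

    adj: dict mapping each vertex to a set of neighboring vertices.
    Returns a dict vertex -> color (0, 1, 2, ...).
    For a biplanar graph (at most 6n - 12 edges) every subgraph has a
    vertex of degree <= 11, so this never uses more than 12 colors.
    """
    remaining = {v: set(nbrs) for v, nbrs in adj.items()}
    order = []
    while remaining:
        v = min(remaining, key=lambda u: len(remaining[u]))  # minimum-degree vertex
        order.append(v)
        for u in remaining[v]:
            remaining[u].discard(v)
        del remaining[v]

    coloring = {}
    for v in reversed(order):  # color in reverse removal order
        used = {coloring[u] for u in adj[v] if u in coloring}
        color = 0
        while color in used:
            color += 1
        coloring[v] = color
    return coloring
```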
An example of a biplanar graph requiring 9 colors can be constructed as the join of a 6-vertex complete graph and a 5-vertex cycle graph. This means that these two subgraphs are connected by all possible edges from one subgraph to the other. The resulting graph has 11 vertices, and requires 6 colors for the complete subgraph and 3 colors for the cycle subgraph, giving 9 colors overall. This construction, by Thom Sulanke in 1974, disproved the conjecture of Ringel that 8 colors would always suffice. Subsequently, an infinite family of biplanar 9-critical graphs (minimal graphs that require nine colors) has been constructed.
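The chromatic number of this construction can be checked by exhaustive search; the sketch below (hypothetical helper code, not from the cited sources) builds the join of a 6-vertex complete graph and a 5-vertex cycle and confirms that it is 9-colorable but not 8-colorable.
```python
from itertools import combinations

def is_colorable(n, edges, k):
    """Backtracking test: can the graph on vertices 0..n-1 be properly k-colored?"""
    adj = [set() for _ in range(n)]
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    colors = [-1] * n

    def assign(v):
        if v == n:
            return True
        # Fixing the first vertex's color is a harmless symmetry break.
        candidates = [0] if v == 0 else range(k)
        for c in candidates:
            if all(colors[u] != c for u in adj[v]):
                colors[v] = c
                if assign(v + 1):
                    return True
                colors[v] = -1
        return False

    return assign(0)

# Join of K6 (vertices 0-5) and C5 (vertices 6-10): Sulanke's 9-chromatic example.
clique = list(combinations(range(6), 2))
cycle = [(6 + i, 6 + (i + 1) % 5) for i in range(5)]
join = [(u, v) for u in range(6) for v in range(6, 11)]
edges = clique + cycle + join

print(is_colorable(11, edges, 8))  # False: eight colors do not suffice
print(is_colorable(11, edges, 9))  # True: nine colors suffice
```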
Despite a lack of further progress on the problem, in 2018 Ellen Gethner conjectured that the correct number of colors for this problem is 11. She suggests several candidates for 10-chromatic biplanar graphs, including the graph formula_7 obtained as the strong product of a cycle graph with a clique, and the graph obtained by removing any vertex from formula_8. These graphs can be shown to require 10 colors, because they have no independent set large enough to be the largest color class in a coloring with fewer colors. Additionally, they meet the bounds on the number of edges a biplanar graph can have. However, a representation of them as biplanar graphs (or Earth–Moon maps) remains elusive.
Application.
One application of colorings of biplanar graphs involves testing printed circuit boards for short circuits. The electrical conductors within these boards include crossings, but (for double-sided printed circuit boards) their adjacencies can be assumed to form a biplanar graph. After coloring this graph, short circuits between adjacent conductors can be detected by adding extra circuitry to connect all conductors with the same colors to each other and testing for connections between pairs of different colors. With some care, this idea can be used to reduce the number of tests needed per circuit to only four.
Generalizations.
Various generalizations of the problem have also been considered, including versions of the problem with more than two planets or with countries that can have more than one region per planet. Maps with one planet and multiple regions per country give Heawood's empire problem. Maps with more than two planets but only one region per planet correspond to graphs whose thickness is at most equal to the number of planets. For these graphs, more precise (although still incomplete) results are known. For the graphs of thickness formula_9, and the corresponding formula_10-planet maps, the chromatic number is at most formula_11 by the same degeneracy argument used in the Earth–Moon problem. As well, for formula_9, a complete graph with formula_12 vertices has thickness formula_10, showing some of these graphs require formula_12 colors. Thus, in this case, the upper and lower bounds are within two colors of each other.
References.
| [
{
"math_id": 0,
"text": "m"
},
{
"math_id": 1,
"text": "m-1"
},
{
"math_id": 2,
"text": "m=2"
},
{
"math_id": 3,
"text": "m>2"
},
{
"math_id": 4,
"text": "6m"
},
{
"math_id": 5,
"text": "n"
},
{
"math_id": 6,
"text": "6n-12"
},
{
"math_id": 7,
"text": "C_7\\boxtimes K_4"
},
{
"math_id": 8,
"text": "C_5\\boxtimes K_4"
},
{
"math_id": 9,
"text": "t\\ge 3"
},
{
"math_id": 10,
"text": "t"
},
{
"math_id": 11,
"text": "6t"
},
{
"math_id": 12,
"text": "6t-2"
}
]
| https://en.wikipedia.org/wiki?curid=72683786 |
72699825 | Undesigned coincidences | Type of Christian apologetic argument
In Christian apologetics, the argument from undesigned coincidences aims to support the historical reliability of the Bible. So named by J. J. Blunt, building on previous work by William Paley, an undesigned coincidence is said to have occurred when one biblical account of an event omits a piece or pieces of information that are filled in, seemingly coincidentally, by a different account, which helps to answer questions raised by the first. According to this approach, undesigned coincidences often occur when one account omits the reason for an action, a reason which is supplied by a different account (one that often does not mention the action itself). In this case, so the argument goes, both authors were writing down a complex, unified story despite gathering their material from different witnesses.
Perspectives.
Criticism.
The arguments for the reliability of the New Testament from undesigned coincidences have been criticized as merely implying that the Gospel authors developed their work by sharing information from the same source literature. This source has been often referred to as the Q source.
Advocacy.
Evangelists will often claim that account formula_0 relies on information from account formula_1 while account formula_1 in turn relies on information from account formula_0 (or point to some larger example of this kind involving all four Gospel accounts).
References.
| [
{
"math_id": 0,
"text": "A"
},
{
"math_id": 1,
"text": "B"
}
]
| https://en.wikipedia.org/wiki?curid=72699825 |
72703925 | Welfare maximization | The welfare maximization problem is an optimization problem studied in economics and computer science. Its goal is to partition a set of items among agents with different utility functions, such that the welfare – defined as the sum of the agents' utilities – is as high as possible. In other words, the goal is to find an item allocation satisfying the utilitarian rule.
An equivalent problem in the context of combinatorial auctions is called the winner determination problem. In this context, each agent submits a list of bids on sets of items, and the goal is to determine what bid or bids should win, such that the sum of the winning bids is maximum.
Definitions.
There is a set "M" of "m" items, and a set "N" of "n" agents. Each agent "i" in "N" has a utility function formula_0. The function assigns a real value to every possible subset of items. It is usually assumed that the utility functions are monotone set functions, that is, formula_1 implies formula_2. It is also assumed that formula_3. Together with monotonicity, this implies that all utilities are non-negative.
An allocation is an ordered partition of the items into "n" disjoint subsets, one subset per agent, denoted formula_4, such that formula_5. The welfare of an allocation is the sum of agents' utilities: formula_6.
The welfare maximization problem is: find an allocation X that maximizes "W"(X).
The welfare maximization problem has many variants, depending on the type of allowed utility functions, the way by which the algorithm can access the utility functions, and whether there are additional constraints on the allowed allocations.
Additive agents.
An additive agent has a utility function that is an additive set function: for every additive agent "i" and item "j", there is a value formula_7, such that formula_8 for every set "Z" of items. When all agents are additive, welfare maximization can be done by a simple polynomial-time algorithm: give each item "j" to an agent for whom formula_7 is maximum (breaking ties arbitrarily). The problem becomes more challenging when there are additional constraints on the allocation.
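A minimal sketch of the unconstrained additive case (the function name and example numbers are made up for the illustration): each item simply goes to an agent that values it most.
```python
def max_welfare_additive(values):
    """values[i][j] = value of item j to agent i (additive utilities).

    Returns (allocation, welfare), where allocation[i] lists the items
    given to agent i. Giving each item to an agent that values it most
    maximizes the sum of utilities.
    """
    n, m = len(values), len(values[0])
    allocation = [[] for _ in range(n)]
    welfare = 0
    for j in range(m):
        best = max(range(n), key=lambda i: values[i][j])  # ties broken arbitrarily
        allocation[best].append(j)
        welfare += values[best][j]
    return allocation, welfare

# Example: 2 agents, 3 items.
print(max_welfare_additive([[5, 1, 3],
                            [2, 4, 3]]))  # ([[0, 2], [1]], 12)
```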
Fairness constraints.
One may want to maximize the welfare among all allocations that are "fair", for example, envy-free up to one item (EF1), proportional up to one item (PROP1), or equitable up to one item (EQ1). This problem is strongly NP-hard when "n" is variable. For any fixed "n ≥ 2," the problem is weakly NP-hard, and has a pseudo-polynomial time algorithm based on dynamic programming. For "n = 2", the problem has a fully polynomial-time approximation scheme.
There are algorithms for solving this problem in polynomial time when there are few agent types, few item types or small value levels. The problem can also be solved in polynomial time when the agents' additive utilities are "binary" (the value of every item is either 0 or 1), as well as for a more general class of utilities called "generalized binary".
Matroid constraints.
Another constraint on the allocation is that the bundles must be independent sets of a matroid. For example, every bundle must contain at most "k" items, where "k" is a fixed integer (this corresponds to a uniform matroid). Or, the items may be partitioned into categories, and each bundle must contain at most "kc" items from each category "c" (this corresponds to a partition matroid). In general, there may be a different matroid for each agent, and the allocation must give each agent "i" a subset "Xi" that is an independent set of their own matroid.
Welfare maximization with additive utilities under heterogeneous matroid constraints can be done in polynomial time, by reduction to the weighted matroid intersection problem.
Gross-substitute agents.
Gross-substitute utilities are more general than additive utilities. Welfare maximization with gross-substitute agents can be done in polynomial time. This is because, with gross-substitute agents, a Walrasian equilibrium always exists, and it maximizes the sum of utilities. A Walrasian equilibrium can be found in polynomial time.
Submodular agents.
A submodular agent has a utility function that is a submodular set function. This means that the agent's utility has decreasing marginals. Submodular utilities are more general than gross-substitute utilities.
Hardness.
Welfare maximization with submodular agents is NP-hard. Moreover, it cannot be approximated to a factor better than (1-1/e)≈0.632 unless P=NP. Furthermore, a better than (1-1/e) approximation would require an exponential number of queries to a value oracle, regardless of whether P=NP.
Greedy algorithm.
The maximum welfare can be approximated by the following polynomial-time greedy algorithm: process the items one at a time, in an arbitrary order, and allocate each item to an agent whose marginal utility for it (given the items already allocated to that agent) is largest.
Lehmann, Lehmann and Nisan prove that the greedy algorithm finds a 1/2-factor approximation (they note that this result follows from a result of Fisher, Nemhauser and Wolsey regarding the maximization of a single submodular valuation over a matroid). The proof idea is as follows. Suppose the algorithm allocates an item "g" to some agent "i". This contributes to the welfare some amount "v", which is the marginal utility of "g" for "i" at that point. Suppose that, in the optimal solution, "g" should be given to another agent, say "k". Consider how the welfare changes if we move "g" from "i" to "k":
So, for every contribution of "v" to the algorithm welfare, the potential contribution to the optimal welfare could be at most 2"v". Therefore, the optimal welfare is at most 2 times the algorithm welfare. The factor of 2 is tight for the greedy algorithm. For example, suppose there are two items x,y and the valuations are:
The optimal allocation is Alice: {y}, George: {x}, with welfare 2. But if the greedy algorithm allocates x first, it might allocate it to Alice. Then, regardless of how y is allocated, the welfare is only 1.
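A sketch of the greedy procedure and of the tightness example just described. The concrete utility numbers below are one illustrative choice consistent with the text, not values quoted from a source: Alice values any non-empty bundle at 1, while George values any bundle containing x at 1 and the bundle {y} alone at 0.
```python
def greedy_welfare(items, agents, utility):
    """Greedy 1/2-approximation for submodular welfare maximization.

    items: iterable of items; agents: iterable of agent names;
    utility(agent, frozenset_of_items) -> number (monotone submodular).
    Items are processed one by one; each goes to an agent with the
    largest marginal utility for it.
    """
    bundles = {a: frozenset() for a in agents}
    for g in items:
        def marginal(a):
            return utility(a, bundles[a] | {g}) - utility(a, bundles[a])
        best = max(agents, key=marginal)
        bundles[best] = bundles[best] | {g}
    welfare = sum(utility(a, bundles[a]) for a in agents)
    return bundles, welfare

def u(agent, bundle):
    if agent == "Alice":
        return 1 if bundle else 0          # any non-empty bundle is worth 1
    return 1 if "x" in bundle else 0       # George: only x matters

print(greedy_welfare(["x", "y"], ["Alice", "George"], u))
# Allocating x first may give it to Alice (a tie), after which the final
# welfare is only 1, while the optimum (Alice: {y}, George: {x}) has welfare 2.
```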
Algorithms using a value oracle.
A value oracle is an oracle that, given a set of items, returns the agent's value to this set. In this model:
The welfare maximization problem (with "n" different submodular functions) can be reduced to the problem of maximizing a "single" submodular set function subject to a matroid constraint: given an instance with "m" items and "n" agents, construct an instance with "m"*"n" (agent,item) pairs, where each pair represents the assignment of an item to an agent. Construct a single function that assigns, to each set of pairs, the total welfare of the corresponding allocation. It can be shown that, if all utilities are submodular, then this welfare function is also submodular. This function should be maximized subject to a partition matroid constraint, ensuring that each item is allocated to at most one agent.
Algorithms using a demand oracle.
Another way to access the agents' utilities is using a demand oracle (an oracle that, given a price-vector, returns the agent's most desired bundle). In this model:
Subadditive agents.
When agents' utilities are subadditive set functions (more general than submodular), a formula_10 approximation would require an exponential number of value queries.
Feige presents a way of rounding any fractional solution to an LP relaxation to this problem to a feasible solution with welfare at least 1/2 the value of the fractional solution. This gives a 1/2-approximation for general subadditive agents, and (1-1/e)-approximation for the special case of fractionally-subadditive valuations.
Superadditive agents.
When agents' utilities are superadditive set functions (more general than supermodular), a formula_11 approximation would require a super-polynomial number of value queries.
Single-minded agents.
A single-minded agent wants only a specific set of items. For every single-minded agent "i", there is a demanded set "Di", and a value "Vi" > 0, such that formula_12. That is, the agent receives a fixed positive utility if and only if their bundle contains their demanded set.
Welfare maximization with single-minded agents is NP-hard even when formula_13 for all "i". In this case, the problem is equivalent to set packing, which is known to be NP-hard. Moreover, it cannot be approximated within any constant factor (in contrast to the case of submodular agents). The best known algorithm approximates it within a factor of formula_14.
General agents.
When agents can have arbitrary monotone utility functions (including complementary items), welfare maximization is hard to approximate within a factor of formula_15 for any formula_16. However, there are algorithms based on state space search that work very well in practice.
References.
| [
{
"math_id": 0,
"text": "u_i: 2^M \\to \\mathbb{R}"
},
{
"math_id": 1,
"text": "Z_1\\supseteq Z_2"
},
{
"math_id": 2,
"text": "u_i(Z_1) \\geq u_i(Z_2)"
},
{
"math_id": 3,
"text": "u_i(\\emptyset)=0"
},
{
"math_id": 4,
"text": "\\mathbf{X} = (X_1, \\ldots, X_n)"
},
{
"math_id": 5,
"text": "M = X_1\\sqcup \\cdots \\sqcup X_n"
},
{
"math_id": 6,
"text": "W(\\mathbf{X}) := \\sum_{i\\in N} u_i(X_i)"
},
{
"math_id": 7,
"text": "v_{i,j}"
},
{
"math_id": 8,
"text": "u_i(Z) = \\sum_{j\\in X_i} v_{i,j}"
},
{
"math_id": 9,
"text": "n/(2n-1)"
},
{
"math_id": 10,
"text": "\\frac{1}{m^{1/2-\\epsilon}}"
},
{
"math_id": 11,
"text": "\\frac{(\\log m)^{1+\\epsilon}}{m}"
},
{
"math_id": 12,
"text": "u_i(Z) = \n\\begin{cases}\nV_i & Z\\supseteq D_i\n\\\\\n0 & \\text{otherwise}\n\\end{cases}\n"
},
{
"math_id": 13,
"text": "V_i=1\n"
},
{
"math_id": 14,
"text": "O(\\sqrt{m})"
},
{
"math_id": 15,
"text": "O(n^{1/2-\\epsilon})"
},
{
"math_id": 16,
"text": "\\epsilon>0"
}
]
| https://en.wikipedia.org/wiki?curid=72703925 |
727196 | Tarski monster group | Type of infinite group in group theory
In the area of modern algebra known as group theory, a Tarski monster group, named for Alfred Tarski, is an infinite group "G", such that every proper subgroup "H" of "G", other than the identity subgroup, is a cyclic group of order a fixed prime number "p". A Tarski monster group is necessarily simple. It was shown by Alexander Yu. Olshanskii in 1979 that Tarski groups exist, and that there is a Tarski "p"-group for every prime "p" > 10^75. They are a source of counterexamples to conjectures in group theory, most importantly to Burnside's problem and the von Neumann conjecture.
Definition.
Let formula_0 be a fixed prime number. An infinite group formula_1 is called a Tarski monster group for formula_0 if every proper, nontrivial subgroup (i.e. every subgroup other than 1 and G itself) has formula_0 elements. | [
{
"math_id": 0,
"text": "p"
},
{
"math_id": 1,
"text": "G"
},
{
"math_id": 2,
"text": "N\\trianglelefteq G"
},
{
"math_id": 3,
"text": "U\\leq G"
},
{
"math_id": 4,
"text": "N"
},
{
"math_id": 5,
"text": "NU"
},
{
"math_id": 6,
"text": "p^2"
},
{
"math_id": 7,
"text": "p>10^{75}"
}
]
| https://en.wikipedia.org/wiki?curid=727196 |
72719782 | Polynomial root-finding algorithms | Finding polynomial roots is a long-standing problem that has been the object of much research throughout history. A testament to this is that up until the 19th century, algebra meant essentially theory of polynomial equations.
Principles.
Finding the root of a linear polynomial (degree one) is easy and needs only one division: the general equation formula_0 has solution formula_1 For quadratic polynomials (degree two), the quadratic formula produces a solution, but its numerical evaluation may require some care for ensuring numerical stability. For degrees three and four, there are closed-form solutions in terms of radicals, which are generally not convenient for numerical evaluation, as they are too complicated and involve the computation of several nth roots whose computation is not easier than the direct computation of the roots of the polynomial (for example the expression of the real roots of a cubic polynomial may involve non-real cube roots). For polynomials of degree five or higher, the Abel–Ruffini theorem asserts that there is, in general, no radical expression of the roots.
So, except for very low degrees, root finding of polynomials consists of finding approximations of the roots. By the fundamental theorem of algebra, a polynomial of degree n has exactly n real or complex roots counting multiplicities.
It follows that the problem of root finding for polynomials may be split in three different subproblems;
For finding one root, Newton's method and other general iterative methods work generally well.
For finding all the roots, arguably the most reliable method is the Francis QR algorithm computing the eigenvalues of the Companion matrix corresponding to the polynomial, implemented as the standard method in MATLAB.
The oldest method of finding all roots is to start by finding a single root. When a root r has been found, it can be removed from the polynomial by dividing out the binomial "x" – "r". The resulting polynomial contains the remaining roots, which can be found by iterating on this process. However, except for low degrees, this does not work well because of the numerical instability: Wilkinson's polynomial shows that a very small modification of one coefficient may change dramatically not only the value of the roots, but also their nature (real or complex). Also, even with a good approximation, when one evaluates a polynomial at an approximate root, one may get a result that is far from zero. For example, if a polynomial of degree 20 (the degree of Wilkinson's polynomial) has a root close to 10, the derivative of the polynomial at the root may be of the order of formula_2 this implies that an error of formula_3 on the value of the root may produce a value of the polynomial at the approximate root that is of the order of formula_4
For avoiding these problems, methods have been elaborated, which compute all roots simultaneously, to any desired accuracy. Presently the most efficient method is Aberth method. A free implementation is available under the name of MPSolve. This is a reference implementation, which can find routinely the roots of polynomials of degree larger than 1,000, with more than 1,000 significant decimal digits.
The methods for computing all roots may be used for computing real roots. However, it may be difficult to decide whether a root with a small imaginary part is real or not. Moreover, as the number of the real roots is, on the average, proportional to the logarithm of the degree, it is a waste of computer resources to compute the non-real roots when one is interested in real roots.
The oldest method for computing the number of real roots, and the number of roots in an interval results from Sturm's theorem, but the methods based on Descartes' rule of signs and its extensions—Budan's and Vincent's theorems—are generally more efficient. For root finding, all proceed by reducing the size of the intervals in which roots are searched until getting intervals containing zero or one root. Then the intervals containing one root may be further reduced for getting a quadratic convergence of Newton's method to the isolated roots. The main computer algebra systems (Maple, Mathematica, SageMath, PARI/GP) have each a variant of this method as the default algorithm for the real roots of a polynomial.
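As a small illustration of the Descartes-based approach, the sketch below (an assumed helper, not the algorithm used by any of the named systems) counts sign changes in the coefficient sequence, which bounds the number of positive real roots.
```python
def descartes_sign_changes(coeffs):
    """Count sign changes in the coefficient sequence (highest degree first).

    By Descartes' rule of signs, the number of positive real roots (counted
    with multiplicity) equals this count or is smaller by an even number.
    """
    signs = [c for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a * b < 0)

# p(x) = x^3 - x^2 - 2x + 2 = (x - 1)(x^2 - 2): sign pattern + - - +.
print(descartes_sign_changes([1, -1, -2, 2]))   # 2 -> two or zero positive roots
# Substituting x -> -x bounds the negative real roots in the same way.
print(descartes_sign_changes([-1, -1, 2, 2]))   # 1 -> exactly one negative root
```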
The class of methods based on converting the problem of finding polynomial roots to the problem of finding the eigenvalues of the companion matrix of the polynomial can, in principle, use any eigenvalue algorithm to find the roots of the polynomial. However, for efficiency reasons one prefers methods that employ the structure of the matrix, that is, that can be implemented in matrix-free form. Among these methods is the power method, whose application to the transpose of the companion matrix is the classical Bernoulli's method for finding the root of greatest modulus. The inverse power method with shifts, which finds some smallest root first, is what drives the complex ("cpoly") variant of the Jenkins–Traub algorithm and gives it its numerical stability. Additionally, it has fast convergence with order formula_5 (where formula_6 is the golden ratio) even in the presence of clustered roots. This fast convergence comes with a cost of three polynomial evaluations per step, resulting in a residual of "O"(|"f"("x")|^(2+3"φ")), which is a slower convergence than with three steps of Newton's method.
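A tiny illustration of the companion-matrix idea (an assumed example, not MATLAB's implementation): build the companion matrix of a monic cubic and take its eigenvalues.
```python
import numpy as np

# Roots of the monic polynomial p(x) = x^3 - 6x^2 + 11x - 6 via its companion matrix.
coeffs = [1, -6, 11, -6]
C = np.diag(np.ones(2), -1)              # 3x3 matrix with ones on the subdiagonal
C[:, -1] = -np.array(coeffs[:0:-1])      # last column: -a_0, -a_1, -a_2
print(np.linalg.eigvals(C))              # the roots 1, 2, 3 (in some order)
print(np.roots(coeffs))                  # np.roots uses the same companion-matrix idea
```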
Finding one root.
The most widely used method for computing a root is Newton's method, which consists of the iterations of the computation of
formula_7
by starting from a well-chosen value formula_8
If f is a polynomial, the computation is faster when using Horner's method or evaluation with preprocessing for computing the polynomial and its derivative in each iteration.
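A minimal sketch of this step (illustrative code, not a reference implementation): Horner's scheme evaluates the polynomial and its derivative together, and their quotient drives the Newton update.
```python
def horner_with_derivative(coeffs, x):
    """Evaluate p(x) and p'(x) together; coeffs are given from highest degree down."""
    p, dp = coeffs[0], 0.0
    for c in coeffs[1:]:
        dp = dp * x + p      # derivative accumulates before p is updated
        p = p * x + c
    return p, dp

def newton_polynomial_root(coeffs, x0, tol=1e-12, max_iter=100):
    """Newton iteration x_{n+1} = x_n - p(x_n)/p'(x_n) from the starting value x0."""
    x = x0
    for _ in range(max_iter):
        p, dp = horner_with_derivative(coeffs, x)
        if dp == 0:
            break                      # stationary point of p: no Newton step possible
        step = p / dp
        x -= step
        if abs(step) < tol:
            break
    return x

# Example: p(x) = x^3 - 2x - 5 has a real root near 2.0945514815.
print(newton_polynomial_root([1, 0, -2, -5], 2.0))
```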
Though the convergence is generally quadratic, the method may converge much more slowly or even not converge at all. In particular, if the polynomial has no real root, and formula_9 is real, then Newton's method cannot converge. However, if the polynomial has a real root that is larger than the largest real root of its derivative, then Newton's method converges quadratically to this largest root if formula_9 is larger than it (there are easy ways of computing an upper bound on the roots; see Properties of polynomial roots). This is the starting point of Horner's method for computing the roots.
When one root r has been found, one may use Euclidean division for removing the factor "x" – "r" from the polynomial. Computing a root of the resulting quotient, and repeating the process provides, in principle, a way for computing all roots. However, this iterative scheme is numerically unstable; the approximation errors accumulate during the successive factorizations, so that the last roots are determined with a polynomial that deviates widely from a factor of the original polynomial. To reduce this error, one may, for each root that is found, restart Newton's method with the original polynomial, and this approximate root as starting value.
However, there is no warranty that this will allow finding all roots. In fact, the problem of finding the roots of a polynomial from its coefficients can be highly ill-conditioned. This is illustrated by
Wilkinson's polynomial: the roots of this polynomial of degree 20 are the first 20 positive integers; changing the last bit of the 32-bit representation of one of its coefficients (equal to –210) produces a polynomial with only 10 real roots and 10 complex roots with imaginary parts larger than 0.6.
Closely related to Newton's method are Halley's method and Laguerre's method. Both use the polynomial and its first two derivatives for an iterative process that has cubic convergence. Combining two consecutive steps of these methods into a single step, one gets a rate of convergence of 9, at the cost of 6 polynomial evaluations (with Horner's rule). On the other hand, combining three steps of Newton's method gives a rate of convergence of 8 at the cost of the same number of polynomial evaluations. This gives a slight advantage to these methods (less clear for Laguerre's method, as a square root has to be computed at each step).
When applying these methods to polynomials with real coefficients and real starting points, Newton's and Halley's method stay inside the real number line. One has to choose complex starting points to find complex roots. In contrast, the Laguerre method with a square root in its evaluation will leave the real axis of its own accord.
Finding roots in pairs.
If the given polynomial only has real coefficients, one may wish to avoid computations with complex numbers. To that effect, one has to find quadratic factors for pairs of conjugate complex roots. The application of the multidimensional Newton's method to this task results in Bairstow's method.
The real variant of Jenkins–Traub algorithm is an improvement of this method.
Finding all roots at once.
The simple Durand–Kerner and the slightly more complicated Aberth method simultaneously find all of the roots using only simple complex number arithmetic. Accelerated algorithms for multi-point evaluation and interpolation similar to the fast Fourier transform can help speed them up for large degrees of the polynomial. It is advisable to choose an asymmetric, but evenly distributed set of initial points. The implementation of this method in the free software MPSolve is a reference for its efficiency and its accuracy.
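A compact sketch of the Durand–Kerner (Weierstrass) iteration for a monic polynomial; it is only an illustration, not the MPSolve reference implementation, and the usual asymmetric starting points (0.4 + 0.9i)^k are used as an assumed choice.
```python
import numpy as np

def durand_kerner(coeffs, tol=1e-12, max_iter=500):
    """Find all roots simultaneously; coeffs are given from highest degree down."""
    c = np.array(coeffs, dtype=complex)
    c = c / c[0]                              # make the polynomial monic
    n = len(c) - 1
    z = (0.4 + 0.9j) ** np.arange(1, n + 1)   # asymmetric, evenly spread initial guesses
    for _ in range(max_iter):
        w = np.empty_like(z)
        for i in range(n):
            others = np.delete(z, i)
            w[i] = z[i] - np.polyval(c, z[i]) / np.prod(z[i] - others)
        if np.max(np.abs(w - z)) < tol:
            return w
        z = w
    return z

# Example: the roots of x^3 - 6x^2 + 11x - 6 are 1, 2 and 3.
print(np.sort_complex(durand_kerner([1, -6, 11, -6])))
```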
Another method with this style is the Dandelin–Gräffe method (sometimes also ascribed to Lobachevsky), which uses polynomial transformations to repeatedly and implicitly square the roots. This greatly magnifies variances in the roots. Applying Viète's formulas, one obtains easy approximations for the modulus of the roots, and with some more effort, for the roots themselves.
Exclusion and enclosure methods.
Several fast tests exist that tell if a segment of the real line or a region of the complex plane contains no roots. By bounding the modulus of the roots and recursively subdividing the initial region indicated by these bounds, one can isolate small regions that may contain roots and then apply other methods to locate them exactly.
All these methods involve finding the coefficients of shifted and scaled versions of the polynomial. For large degrees, FFT-based accelerated methods become viable.
For real roots, see next sections.
The Lehmer–Schur algorithm uses the Schur–Cohn test for circles; a variant, Wilf's global bisection algorithm uses a winding number computation for rectangular regions in the complex plane.
The splitting circle method uses FFT-based polynomial transformations to find large-degree factors corresponding to clusters of roots. The precision of the factorization is maximized using a Newton-type iteration. This method is useful for finding the roots of polynomials of high degree to arbitrary precision; it has almost optimal complexity in this setting.
Real-root isolation.
Finding the real roots of a polynomial with real coefficients is a problem that has received much attention since the beginning of 19th century, and is still an active domain of research. Most root-finding algorithms can find some real roots, but cannot certify having found all the roots. Methods for finding all complex roots, such as Aberth method can provide the real roots. However, because of the numerical instability of polynomials (see Wilkinson's polynomial), they may need arbitrary-precision arithmetic for deciding which roots are real. Moreover, they compute all complex roots when only few are real.
It follows that the standard way of computing real roots is to compute first disjoint intervals, called "isolating intervals", such that each one contains exactly one real root, and together they contain all the roots. This computation is called "real-root isolation". Having an isolating interval, one may use fast numerical methods, such as Newton's method for improving the precision of the result.
The oldest complete algorithm for real-root isolation results from Sturm's theorem. However, it appears to be much less efficient than the methods based on Descartes' rule of signs and Vincent's theorem. These methods divide into two main classes, one using continued fractions and the other using bisection. Both methods have been dramatically improved since the beginning of the 21st century. With these improvements they reach a computational complexity that is similar to that of the best algorithms for computing all the roots (even when all roots are real).
These algorithms have been implemented and are available in Mathematica (continued fraction method) and Maple (bisection method). Both implementations can routinely find the real roots of polynomials of degree higher than 1,000.
Finding multiple roots of polynomials.
Numerical computation of multiple roots.
Multiple roots are highly sensitive, known to be ill-conditioned and inaccurate in numerical computation in general. A method by
Zhonggang Zeng (2004), implemented as a MATLAB package, computes multiple roots and corresponding multiplicities of a polynomial accurately even if the coefficients are inexact.
The method can be summarized in two steps. Let formula_10 be the given polynomial. The first step determines the multiplicity structure by applying square-free factorization with a numerical greatest common divisor algorithm. This allows writing formula_10 as
formula_11
where formula_12 are the multiplicities of the distinct roots. This equation is an overdetermined system: it has formula_13 unknowns formula_14 and formula_15 equations obtained by matching coefficients, with formula_16 (the leading coefficient formula_17 is not a variable). The least-squares solution is no longer ill-conditioned in most cases. The second step applies the Gauss–Newton algorithm to solve the overdetermined system for the distinct roots.
The sensitivity of multiple roots can be regularized due to a geometric property of multiple roots discovered by William Kahan (1972) and the overdetermined system model formula_18 maintains the multiplicities formula_12.
Square-free factorization.
For polynomials whose coefficients are exactly given as integers or rational numbers, there is an efficient method to factorize them into factors that have only simple roots and whose coefficients are also exactly given. This method, called "square-free factorization", is based on the multiple roots of a polynomial being the roots of the greatest common divisor of the polynomial and its derivative.
The square-free factorization of a polynomial "p" is a factorization formula_19 where each formula_20 is either 1 or a polynomial without multiple roots, and two different formula_20 do not have any common root.
An efficient method to compute this factorization is Yun's algorithm.
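A sketch of Yun's algorithm using exact rational arithmetic via SymPy; the helper below is an illustrative reimplementation under stated assumptions (characteristic zero, exact coefficients), not SymPy's built-in `sqf_list`, which performs the same task.
```python
import sympy as sp

def yun_square_free(f, x):
    """Return [p_1, p_2, ...] with f = (constant) * p_1 * p_2**2 * p_3**3 * ...

    Each p_i is square-free and the p_i are pairwise coprime (some may be 1).
    """
    a = sp.gcd(f, sp.diff(f, x))
    b = sp.quo(f, a, x)                    # product of the distinct irreducible factors
    c = sp.quo(sp.diff(f, x), a, x)
    d = sp.expand(c - sp.diff(b, x))
    factors = []
    while sp.degree(b, x) > 0:
        a_i = sp.gcd(b, d)
        factors.append(a_i)
        b = sp.quo(b, a_i, x)
        c = sp.quo(d, a_i, x)
        d = sp.expand(c - sp.diff(b, x))
    return factors

x = sp.symbols('x')
f = sp.expand((x - 1) * (x - 2)**2 * (x - 3)**3)
print(yun_square_free(f, x))   # [x - 1, x - 2, x - 3]
```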
References.
| [
{
"math_id": 0,
"text": "ax + b = 0"
},
{
"math_id": 1,
"text": "x = -b/a."
},
{
"math_id": 2,
"text": "10^{20};"
},
{
"math_id": 3,
"text": "10^{-10}"
},
{
"math_id": 4,
"text": "10^{10}."
},
{
"math_id": 5,
"text": "1+\\varphi\\approx 2.6"
},
{
"math_id": 6,
"text": "\\varphi"
},
{
"math_id": 7,
"text": "x_{n+1}=x_n-\\frac{f(x_n)}{f'(x_n)},"
},
{
"math_id": 8,
"text": "x_0."
},
{
"math_id": 9,
"text": "x_0"
},
{
"math_id": 10,
"text": "p"
},
{
"math_id": 11,
"text": "(*)\\;\\;\\;\\; p(x) = a(x-z_1)^{m_1} \\cdots (x-z_k)^{m_k},"
},
{
"math_id": 12,
"text": "m_1,\\ldots,m_k"
},
{
"math_id": 13,
"text": "k"
},
{
"math_id": 14,
"text": "z_1,\\ldots,z_k"
},
{
"math_id": 15,
"text": "n"
},
{
"math_id": 16,
"text": "k<n"
},
{
"math_id": 17,
"text": "a"
},
{
"math_id": 18,
"text": "(*)"
},
{
"math_id": 19,
"text": "p=p_1p_2^2\\cdots p_k^k "
},
{
"math_id": 20,
"text": "p_i"
}
]
| https://en.wikipedia.org/wiki?curid=72719782 |
727271 | Pentagonal hexecontahedron | In geometry, a pentagonal hexecontahedron is a Catalan solid, dual of the snub dodecahedron. It has two distinct forms, which are mirror images (or "enantiomorphs") of each other. It has 92 vertices that span 60 pentagonal faces. It is the Catalan solid with the most vertices. Among the Catalan and Archimedean solids, it has the second largest number of vertices, after the truncated icosidodecahedron, which has 120 vertices.
Properties.
The faces are irregular pentagons with two long edges and three short edges. Let formula_0 be the real zero of the polynomial formula_1.
Then the ratio formula_2 of the edge lengths is given by:
formula_3.
The faces have four equal obtuse angles and one acute angle (between the two long edges). The obtuse angles equal formula_4, and the acute one equals formula_5. The dihedral angle equals formula_6.
Note that the face centers of the snub dodecahedron cannot serve directly as vertices of the pentagonal hexecontahedron: the four triangle centers lie in one plane but the pentagon center does not; it needs to be radially pushed out to make it coplanar with the triangle centers. Consequently, the vertices of the pentagonal hexecontahedron do not all lie on the same sphere and by definition it is not a zonohedron.
To find the volume and surface area of a pentagonal hexecontahedron, denote the shorter side of one of the pentagonal faces as formula_7, and set a constant "t"
formula_8
Then the surface area (formula_9) is:
formula_10.
And the volume (formula_11) is:
formula_12.
Using these, one can calculate the measure of sphericity for this shape:
formula_13
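The quantities above are easy to check numerically; the following sketch is an illustrative verification (with b = 1) that evaluates each formula exactly as stated in this section.
```python
import numpy as np

phi = (1 + 5**0.5) / 2

# xi: the real zero of x^3 + 2x^2 - phi^2.
xi = [r.real for r in np.roots([1, 2, 0, -phi**2]) if abs(r.imag) < 1e-12][0]
l = (1 + xi) / (2 - xi**2)                        # ratio of long to short edges

obtuse   = np.degrees(np.arccos(-xi / 2))
acute    = np.degrees(np.arccos(-phi**2 * xi / 2 + phi))
dihedral = np.degrees(np.arccos(-xi / (2 - xi)))

s = (81 * phi - 15) ** 0.5
t = (np.cbrt(44 + 12 * phi * (9 + s)) + np.cbrt(44 + 12 * phi * (9 - s)) - 4) / 12

b = 1.0                                           # shorter edge length
A = 30 * b**2 * (2 + 3 * t) * (1 - t**2) ** 0.5 / (1 - 2 * t**2)
V = 5 * b**3 * (1 + t) * (2 + 3 * t) / ((1 - 2 * t**2) * (1 - 2 * t) ** 0.5)
psi = np.pi ** (1 / 3) * (6 * V) ** (2 / 3) / A   # sphericity

print(xi, l, obtuse, acute, dihedral)   # ~0.943151, 1.749853, 118.137, 67.454, 153.2
print(t, A, V, psi)                     # ~0.472, 162.698, 189.789, 0.982
```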
Construction.
The pentagonal hexecontahedron can be constructed from a snub dodecahedron without taking the dual. Pentagonal pyramids are added to the 12 pentagonal faces of the snub dodecahedron, and triangular pyramids are added to the 20 triangular faces that do not share an edge with a pentagon. The pyramid heights are adjusted to make them coplanar with the other 60 triangular faces of the snub dodecahedron. The result is the pentagonal hexecontahedron.
An alternate construction method uses quaternions and the icosahedral symmetry of the Weyl group orbits formula_14 of order 60. This is shown in the figure on the right.
Specifically, with quaternions from the binary Icosahedral group formula_15, where formula_16 is the conjugate of formula_17 and formula_18 and formula_19, then just as the Coxeter group formula_20 is the symmetry group of the 600-cell and the 120-cell of order 14400, we have formula_21 of order 120. formula_22 is defined as the even permutations of formula_23 such that formula_24 gives the 60 twisted chiral snub dodecahedron coordinates, where formula_25 is one permutation from the first set of 12 in those listed above. The exact coordinate for formula_26 is obtained by taking the solution to formula_27, with formula_28, and applying it to the normalization of formula_29.
Cartesian coordinates.
Using the icosahedral symmetry in the orbits of the Weyl group formula_30 of order 60 gives the following Cartesian coordinates, where formula_31 is the golden ratio:
formula_35 and formula_36
A group of two sets of twelve have 0 or 2 minus signs (i.e. 1 or 3 plus signs):
formula_38
formula_39
and another group of three sets of 12 have 0 or 2 plus signs (i.e. 1 or 3 minus signs):
formula_40
formula_41
formula_42
Negating all vertices in both groups gives the mirror of the chiral snub dodecahedron, yet results in the same pentagonal hexecontahedron convex hull.
Variations.
Isohedral variations can be constructed with pentagonal faces with 3 edge lengths.
This variation shown can be constructed by adding pyramids to 12 pentagonal faces and 20 triangular faces of a snub dodecahedron such that the new triangular faces are coparallel to other triangles and can be merged into the pentagon faces.
Orthogonal projections.
The "pentagonal hexecontahedron" has three symmetry positions, two on vertices, and one mid-edge.
Related polyhedra and tilings.
This polyhedron is topologically related as a part of sequence of polyhedra and tilings of pentagons with face configurations (V3.3.3.3."n"). (The sequence progresses into tilings the hyperbolic plane to any "n".) These face-transitive figures have (n32) rotational symmetry.
References.
| [
{
"math_id": 0,
"text": "\\xi\\approx 0.943\\,151\\,259\\,24"
},
{
"math_id": 1,
"text": "x^3+2x^2-\\phi^2"
},
{
"math_id": 2,
"text": "l"
},
{
"math_id": 3,
"text": " l = \\frac{1+\\xi}{2-\\xi^2}\\approx 1.749\\,852\\,566\\,74"
},
{
"math_id": 4,
"text": "\\arccos(-\\xi/2)\\approx 118.136\\,622\\,758\\,62^{\\circ}"
},
{
"math_id": 5,
"text": "\\arccos(-\\phi^2\\xi/2+\\phi)\\approx 67.453\\,508\\,965\\,51^{\\circ}"
},
{
"math_id": 6,
"text": "\\arccos(-\\xi/(2-\\xi))\\approx 153.2^{\\circ}"
},
{
"math_id": 7,
"text": "b"
},
{
"math_id": 8,
"text": " t = \\frac{\\sqrt[3]{44+12\\phi(9+\\sqrt{81\\phi-15})}+\\sqrt[3]{44+12\\phi(9-\\sqrt{81\\phi-15})}-4}{12} \\approx 0.472. "
},
{
"math_id": 9,
"text": " A "
},
{
"math_id": 10,
"text": " A = \\frac{30b^2\\cdot(2+3t)\\cdot\\sqrt{1-t^2}}{1-2t^2}\\approx162.698b^2"
},
{
"math_id": 11,
"text": " V "
},
{
"math_id": 12,
"text": " V = \\frac{5b^3(1+t)(2+3t)}{(1-2t^2)\\cdot\\sqrt{1-2t}}\\approx189.789b^3"
},
{
"math_id": 13,
"text": " \\Psi = \\frac{\\pi^{\\frac{1}{3}}(6V)^{\\frac{2}{3}}}{A} \\approx 0.982 "
},
{
"math_id": 14,
"text": "O(\\Lambda)=W(H_3)/C_2 \\approx A_5=I"
},
{
"math_id": 15,
"text": "(p,q) \\in I_h"
},
{
"math_id": 16,
"text": "q=\\bar p"
},
{
"math_id": 17,
"text": "p"
},
{
"math_id": 18,
"text": "[p,q]:r\\rightarrow r'=prq"
},
{
"math_id": 19,
"text": "[p,q]^*:r\\rightarrow r''=p\\bar rq"
},
{
"math_id": 20,
"text": "W(H_4)=\\lbrace[p,\\bar p] \\oplus [p,\\bar p]^*\\rbrace "
},
{
"math_id": 21,
"text": "W(H_3)=\\lbrace[p,\\bar p] \\oplus [p,\\bar p]^*\\rbrace=A_5\\times C_2=I_h"
},
{
"math_id": 22,
"text": "I "
},
{
"math_id": 23,
"text": "I_h"
},
{
"math_id": 24,
"text": "[I,\\bar I]:r "
},
{
"math_id": 25,
"text": "r\\approx -0.389662 e_1 + 0.267979 e_2 -0.881108 e_3 "
},
{
"math_id": 26,
"text": "r"
},
{
"math_id": 27,
"text": "x^3-x^2-x-\\phi=0"
},
{
"math_id": 28,
"text": "x\\approx 1.94315"
},
{
"math_id": 29,
"text": "r=(-1+x^2(-1-2/\\phi-x\\phi)e_1 + (3-x^2+3x\\phi)e_2 + ((x^3-1/\\phi)\\phi^3)e_3"
},
{
"math_id": 30,
"text": "O(\\Lambda)=W(H_3)/C_2 \\approx A_5"
},
{
"math_id": 31,
"text": "\\phi=\\frac{1+\\sqrt{5}}{2}"
},
{
"math_id": 32,
"text": "\\frac{(0, \\pm 1, \\pm \\phi)}{\\sqrt{\\phi^2 + 1}} , \\frac{(\\pm 1, \\pm \\phi, 0)}{\\sqrt{\\phi^2 + 1}} , \\frac{(\\pm \\phi, 0, \\pm 1)}{\\sqrt{\\phi^2 + 1}}."
},
{
"math_id": 33,
"text": "R\\approx 0.95369785218"
},
{
"math_id": 34,
"text": "700569 - 1795770 x^2 + 1502955 x^4 - 423900 x^6 + 14175 x^8 - 2250 x^{10} + 125 x^{12} = 0"
},
{
"math_id": 35,
"text": "(\\pm 1, \\pm 1,\\pm 1)\\frac{R}{\\sqrt{3}} "
},
{
"math_id": 36,
"text": "(0, \\pm \\phi, \\pm \\frac{1}{\\phi})\\frac{R}{\\sqrt{3}} , (\\pm \\frac{1}{\\phi}, 0 , \\pm \\phi) \\frac{R}{\\sqrt{3}} , (\\pm \\phi, \\pm \\frac{1}{\\phi},0)\\frac{R}{\\sqrt{3}}."
},
{
"math_id": 37,
"text": "R"
},
{
"math_id": 38,
"text": "(\\pm 0.267979, \\pm 0.881108, \\pm 0.389662) R ,"
},
{
"math_id": 39,
"text": "(\\pm 0.721510, \\pm 0.600810, \\pm 0.344167) R ,"
},
{
"math_id": 40,
"text": "(\\pm 0.176956, \\pm 0.824852, \\pm 0.536941) R ,"
},
{
"math_id": 41,
"text": "(\\pm 0.435190, \\pm0.777765, \\pm 0.453531) R ,"
},
{
"math_id": 42,
"text": "(\\pm 0.990472, \\pm 0.103342, \\pm 0.091023) R ."
}
]
| https://en.wikipedia.org/wiki?curid=727271 |
72727516 | Perdita Stevens | British mathematician and computer scientist
Perdita Emma Stevens (born 1966) is a British mathematician, theoretical computer scientist, and software engineer who holds a personal chair in the mathematics of software engineering as part of the School of Informatics at the University of Edinburgh. Her research includes work on model-driven engineering, including model transformation, model checking, and the Unified Modeling Language.
Education and career.
Stevens read mathematics at the University of Cambridge, earning a bachelor's degree in 1987. She went to the University of Warwick for graduate study in abstract algebra, earning a master's degree in 1988 and completing a PhD in 1992. Her doctoral dissertation, "Integral Forms for Weyl Modules of formula_0", was supervised by Sandy Green.
After working in industry as a software engineer, Stevens joined the Department of Computer Science at the University of Edinburgh in 1994. She became a reader there in 2003 and in 2014 was given a personal chair as Professor of Mathematics of Software Engineering.
Books.
Stevens is the author of books including:
References.
| [
{
"math_id": 0,
"text": "\\mathrm{GL}(2,\\mathrm{Q})"
}
]
| https://en.wikipedia.org/wiki?curid=72727516 |
72732384 | Jeopardy! Masters | American television quiz show
Jeopardy! Masters is an American game show hosted by Ken Jennings on ABC. Its first season featured six recent notable "Jeopardy!" champions competing against each other in a "Champions League-style" format. It premiered on May 8, 2023. In February 2024, it was announced that the show had been renewed for a second season, which premiered on May 1, 2024.
Contestants.
Season 1 (2023).
The following six contestants, listed in order of finish, competed in the first "Jeopardy! Masters" competition:
As the three finalists, Holzhauer, Roach, and Amodio all received invitations to the next "Masters" competition.
Season 2 (2024).
The following six contestants, listed in order of finish, competed in the second "Jeopardy! Masters" competition:
As the three finalists, Groce, Raut, and Holzhauer all received invitations to the next Masters competition.
Tournament format.
The tournament features six former "Jeopardy!" champions competing round-robin style, with the first season consisting of 10 hour-long episodes featuring two games each, for a total of 20 games. Initially, the producers intended to structure the tournament as a pure round-robin system with every possible combination of three players (formula_0), without eliminations. This was adjusted to a three-round structure prior to production. In the second season, the number of episodes was reduced to nine (eighteen games total), with the same overall structure.
Unlike traditional "Jeopardy!", which is scored in dollars, all of the games in this tournament are scored in points, just like in "Super Jeopardy!", the first two seasons of "Rock & Roll Jeopardy!", "Sports Jeopardy!," and "Jeopardy! The Greatest of All Time".
The producers have also used "Jeopardy! Masters" to experiment with variations to the "Jeopardy!" format. In the first season, each round began with the revelation to the television audience of the location of that round's Daily Double(s); this did not continue in the second season.
Quarterfinals.
The quarterfinals consist of several round-robin matches of two games each; in each episode, three of the contestants play each other in the first game, and the remaining three play in the second game. The winner of each game receives three match points, the runner-up receives one match point, and the third player receives no match points. The second game of each episode except for the first pairs up the winners from the previous episode against another randomly-selected contestant who has not already played against both winners.
After all quarterfinal episodes, the match points are totaled; the top four contestants advance to the semifinals, while the other two are eliminated from the competition.
There were seven quarterfinal episodes in the first season, and six episodes in the second season.
Semifinals.
The four remaining contestants play each other round-robin over four games, with each player sitting out one game. Each player's match points total is reset to zero, and, as in the quarterfinals, the winner of each game receives three match points, second place receives one, and last place receives none. The three highest-ranked players move on to the finals, while the lowest-ranked player is eliminated.
Finals.
The three remaining players play each other in a two-game match, as is standard in the final round of most "Jeopardy!" tournaments. The player with the highest combined score over the two games is declared champion. Furthermore, all three finalists automatically qualify for the next edition of the tournament.
Tiebreakers.
Should either the quarterfinals or semifinals end in a tie for match points, the following tie-breaking criteria are used, in order:
Episodes.
The winner of each game and the final is highlighted in bold.
Notes.
References.
| [
{
"math_id": 0,
"text": "C(6,3) = 20"
}
]
| https://en.wikipedia.org/wiki?curid=72732384 |
72734643 | PFQ | PFQ or pFq can refer to:
Topics referred to by the same term
This page lists articles associated with the title PFQ.
{
"math_id": 0,
"text": "_pF_q"
}
]
| https://en.wikipedia.org/wiki?curid=72734643 |
7274114 | STAR model | In statistics, Smooth Transition Autoregressive (STAR) models are typically applied to time series data as an extension of autoregressive models, in order to allow for higher degree of flexibility in model parameters through a smooth transition.
Given a time series of data "x"_"t", the STAR model is a tool for understanding and, perhaps, predicting future values in this series, assuming that the behaviour of the series changes depending on the value of the transition variable. The transition might depend on the past values of the "x" series (similar to the SETAR models), or on exogenous variables.
The model consists of 2 autoregressive (AR) parts linked by the transition function. The model is usually referred to as the STAR("p") model preceded by the letter describing the transition function (see below), where "p" is the order of the autoregressive part. The most popular transition functions include the exponential function and the first- and second-order logistic functions. They give rise to the Logistic STAR (LSTAR) and Exponential STAR (ESTAR) models.
Definition.
AutoRegressive Models.
Consider a simple AR("p") model for a time series "y"_"t"
formula_0
where:
formula_1 for "i"=1,2...,"p" are autoregressive coefficients, assumed to be constant over time;
formula_2 stands for white-noise error term with constant variance.
written in a following vector form:
formula_3
where:
formula_4 is a column vector of variables;
formula_5 is the vector of parameters :formula_6;
formula_7 stands for white-noise error term with constant variance.
STAR as an Extension of the AutoRegressive Model.
STAR models were introduced and comprehensively developed by Kung-sik Chan and Howell Tong in 1986 (esp. p. 187), in which the same acronym was used. It originally stands for Smooth Threshold AutoRegressive. For some background history, see Tong (2011, 2012). The models can be thought of in terms of extension of autoregressive models discussed above, allowing for changes in the model parameters according to the value of a "transition variable" "zt". Chan and Tong (1986) rigorously proved that the family of STAR models includes the SETAR model as a limiting case by showing the uniform boundedness and equicontinuity with respect to the switching parameter. Without this proof, to say that STAR models nest the SETAR model lacks justification. Unfortunately, whether one should use a SETAR model or a STAR model for one's data has been a matter of subjective judgement, taste and inclination in much of the literature. Fortunately, the test procedure, based on David Cox's test of separate family of hypotheses and developed by Gao, Ling and Tong (2018, Statistica Sinica, volume 28, 2857-2883) is now available to address this issue. Such a test is important before adopting a STAR model because, among other issues, the parameter controlling its rate of switching is notoriously data-hungry.
Defined in this way, the STAR model can be presented as follows:
formula_8
where:
formula_9 is a column vector of variables;
formula_10 is the transition function bounded between 0 and 1.
Basic Structure.
They can be understood as a two-regime SETAR model with a smooth transition between regimes, or as a continuum of regimes. In both cases the presence of the transition function is the defining feature of the model, as it allows for changes in the values of the parameters.
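As an illustration, here is a minimal simulation of a two-regime LSTAR(1) process; the parameter values are arbitrary choices for the sketch, the lagged series itself serves as the transition variable, and the first-order logistic transition function from the next subsection is used.
```python
import numpy as np

def simulate_lstar1(n, gamma1, gamma2, zeta, c, sigma=1.0, seed=0):
    """Simulate a two-regime LSTAR(1) process.

    y_t = (gamma1[0] + gamma1[1]*y_{t-1})
          + G(y_{t-1}) * (gamma2[0] + gamma2[1]*y_{t-1}) + sigma*e_t,
    where G(z) = 1 / (1 + exp(-zeta*(z - c))) is the first-order logistic
    transition function and the lagged series is the transition variable.
    """
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        z = y[t - 1]
        G = 1.0 / (1.0 + np.exp(-zeta * (z - c)))
        y[t] = (gamma1[0] + gamma1[1] * y[t - 1]
                + G * (gamma2[0] + gamma2[1] * y[t - 1])
                + sigma * rng.standard_normal())
    return y

# Example: persistent behaviour for small y, extra mean reversion once y exceeds c = 1.
y = simulate_lstar1(500, gamma1=(0.0, 0.9), gamma2=(0.0, -0.5), zeta=5.0, c=1.0)
print(y[:5])
```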
Transition Function.
Three basic transition functions and the name of resulting models are:
formula_11
formula_12
formula_13 | [
{
"math_id": 0,
"text": "y_{t}=\\gamma_{0}+\\gamma_{1}y_{t-1}+\\gamma_{2}y_{t-2}+...+\\gamma_{p}y_{t-p}+\\epsilon_{t}.\\,"
},
{
"math_id": 1,
"text": " \\gamma_{i}\\,"
},
{
"math_id": 2,
"text": " \\epsilon_{t}\\stackrel{\\mathit{iid}}{\\sim}WN(0;\\sigma^{2})\\, "
},
{
"math_id": 3,
"text": " y_{t}=\\mathbf{X_{t}\\gamma}+\\sigma\\epsilon_{t}.\\,"
},
{
"math_id": 4,
"text": "\\mathbf{X_{t}}=(1,y_{t-1},y_{t-2},\\ldots,y_{t-p})\\,"
},
{
"math_id": 5,
"text": "\\gamma \\,"
},
{
"math_id": 6,
"text": "\\gamma_{0}, \\gamma_{1},\\gamma_{2},..., \\gamma_{p}\\,"
},
{
"math_id": 7,
"text": "\\epsilon_{t}\\stackrel{\\mathit{iid}}{\\sim}WN(0;1)\\,"
},
{
"math_id": 8,
"text": "\ty_{t}=\\mathbf{X_{t}}+ G(z_{t}, \\zeta, c)\\mathbf{X_{t}}+\\sigma^{(j)}\\epsilon_{t}\\,"
},
{
"math_id": 9,
"text": " X_{t}=(1,y_{t-1},y_{t-2},...,y_{t-p})\\,"
},
{
"math_id": 10,
"text": "G(z_{t}, \\zeta, c)"
},
{
"math_id": 11,
"text": "G(z_{t}, \\zeta, c) = (1+exp(-\\zeta(z_{t}-c)))^{-1}, \\zeta>0 "
},
{
"math_id": 12,
"text": "G(z_{t}, \\zeta, c) = 1-exp(-\\zeta(z_{t}-c)^{2}), \\zeta>0 "
},
{
"math_id": 13,
"text": "G(z_{t}, \\zeta, c) = (1+exp(-\\zeta(z_{t}-c_{1})(z_{t}-c_{2})))^{-1}, \\zeta>0 "
}
]
| https://en.wikipedia.org/wiki?curid=7274114 |
727472 | Rose (mathematics) | Multi-lobed plane curve
In mathematics, a rose or rhodonea curve is a sinusoid specified by either the cosine or sine functions with no phase angle that is plotted in polar coordinates. Rose curves or "rhodonea" were named by the Italian mathematician who studied them, Guido Grandi, between the years 1723 and 1728.
General overview.
Specification.
A rose is the set of points in polar coordinates specified by the polar equation
formula_0
or in Cartesian coordinates using the parametric equations
formula_1
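For instance, the polar specification can be plotted directly; the matplotlib sketch below is illustrative, with arbitrary values of a and k, and converts negative radii to the equivalent point at the opposite angle as described later in this article.
```python
import numpy as np
import matplotlib.pyplot as plt

def plot_rose(a, k, ax):
    """Plot the rose r = a*cos(k*theta) on a polar axes object."""
    theta = np.linspace(0, 2 * np.pi, 2000)
    r = a * np.cos(k * theta)
    # A point with r < 0 is plotted at angle theta + pi with radius |r|.
    ax.plot(np.where(r >= 0, theta, theta + np.pi), np.abs(r))
    ax.set_title(f"k = {k}")

fig, axes = plt.subplots(1, 3, subplot_kw={"projection": "polar"})
for k, ax in zip([2, 3, 5], axes):
    plot_rose(1.0, k, ax)
plt.show()
```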
Roses can also be specified using the sine function. Since
formula_2.
Thus, the rose specified by "r" = "a" sin("kθ") is identical to that specified by "r" = "a" cos("kθ") rotated counter-clockwise by "π"/(2"k") radians, which is one-quarter the period of either sinusoid.
Since they are specified using the cosine or sine function, roses are usually expressed as polar coordinate (rather than Cartesian coordinate) graphs of sinusoids that have angular frequency of k and an amplitude of a that determine the radial coordinate r given the polar angle θ (though when k is a rational number, a rose curve can be expressed in Cartesian coordinates since those can be specified as algebraic curves).
General properties.
Roses are directly related to the properties of the sinusoids that specify them.
Petals.
The graph of a rose is composed of petals, each traced by one half-cycle of the sinusoid that specifies the rose. (A cycle is a portion of a sinusoid that is one period 2"π"/"k" long and consists of a positive half-cycle, the continuous set of points where "r" ≥ 0 and is "π"/"k" long, and a negative half-cycle is the other half where "r" ≤ 0.)
The shape of every petal is the same as that of the petal of the rose specified by "r" = "a" cos("kθ") (that is bounded by the angle interval −"π"/(2"k") ≤ "θ" ≤ "π"/(2"k")). The petal is symmetric about the polar axis. All other petals are rotations of this petal about the pole, including those for roses specified by the sine function with the same values for a and k.
Each petal reaches a maximum radial distance of "a" from the pole, so the rose is inscribed in the circle "r" = "a". (See the figure in the introduction section.)
Symmetry.
All roses display one or more forms of symmetry due to the underlying symmetric and periodic properties of sinusoids.
A rose specified by "r" = "a" cos("kθ") is symmetric about the polar axis (the line "θ" = 0) because of the identity "a" cos("kθ") = "a" cos(−"kθ") that makes the roses specified by the two polar equations coincident.
A rose specified by "r" = "a" sin("kθ") is symmetric about the vertical line "θ" = "π"/2 because of the identity "a" sin("kθ") = "a" sin("π" − "kθ") that makes the roses specified by the two polar equations coincident.
Roses with non-zero integer values of "k".
When k is a non-zero integer, the curve will be rose-shaped with 2"k" "petals" if k is even, and k petals when k is odd. The properties of these roses are a special case of roses with angular frequencies k that are rational numbers discussed in the next section of this article.
"a", corresponding to the radial coordinate of all of its peaks.
"k" cycles displayed in the graph. No additional points need be plotted because the radial coordinate at "θ"
0 is the same value at "θ"
2"π" (which are crests for two different positive half-cycles for roses specified by the cosine function).
"a". Line segments connecting successive peaks will form a regular polygon with an even number of vertices that has its center at the pole and a radius through each peak, and likewise:
radians. Thus, these roses have rotational symmetry of order 2"k".
"a". These rose's positive and negative half-cycles are coincident, which means that in graphing them, only the positive half-cycles or only the negative half-cycles need to plotted in order to form the full curve. (Equivalently, a complete curve will be graphed by plotting any continuous interval of polar angles that is π radians long such as "θ"
0 to "θ"
"π".) Line segments connecting successive peaks will form a regular polygon with an odd number of vertices, and likewise:
The circle.
A rose with "k"
1 is a circle that lies on the pole with a diameter that lies on the polar axis when "r"
"a" cos("θ"). The circle is the curve's single petal. (See the circle being formed at the end of the next section.) In Cartesian coordinates, the equivalent cosine and sine specifications are
formula_3
and
formula_4
respectively.
The quadrifolium.
A rose with "k"
2 is called a quadrifolium because it has "2k"
4 petals. In Cartesian coordinates the cosine and sine specifications are
formula_5
and
formula_6
respectively.
The trifolium.
A rose with "k"
3 is called a trifolium because it has "k"
3 petals. The curve is also called the Paquerette de Mélibée. In Cartesian Coordinates the cosine and sine specifications are
formula_7
and
formula_8
respectively. (See the trifolium being formed at the end of the next section.)
The octafolium.
A rose with "k"
4 is called a octafolium because it has "2k"
8 petals. In Cartesian Coordinates the cosine and sine specifications are
formula_9
and
formula_10
respectively.
The pentafolium.
A rose with "k"
5 is called a pentafolium because it has "k"
5 petals. In Cartesian Coordinates the cosine and sine specifications are
formula_11
and
formula_12
respectively.
Total and petal areas.
The total area of a rose with polar equation of the form "r" = "a" cos("kθ") or "r" = "a" sin("kθ"), where "k" is a non-zero integer, is
formula_13
When "k" is even, there are 2"k" petals; and when "k" is odd, there are "k" petals, so in either case the area of each petal is "π""a"2/(4"k").
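These closed forms can be checked numerically. The sketch below approximates the polar-area integral (1/2)∫"r"2 d"θ" over one full tracing of the curve (0 to 2"π" for even "k", 0 to "π" for odd "k") and compares the result with "π""a"2/2 and "π""a"2/4; the function name and the sample size are arbitrary.
import numpy as np

def rose_area(a, k, samples=1_000_000):
    """Numerically evaluate (1/2) * integral of r^2 dtheta for r = a*cos(k*theta)."""
    upper = 2.0 * np.pi if k % 2 == 0 else np.pi   # odd k traces the whole curve once over [0, pi]
    theta = np.linspace(0.0, upper, samples, endpoint=False)
    r = a * np.cos(k * theta)
    return 0.5 * np.sum(r**2) * (upper / samples)  # Riemann sum approximation of the area integral

print(rose_area(1.0, 4), np.pi / 2)   # even k: total area pi*a^2/2
print(rose_area(1.0, 3), np.pi / 4)   # odd k: total area pi*a^2/4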
Roses with rational number values for "k".
In general, when "k" is a rational number in the irreducible fraction form "k" = "n"/"d", where "n" and "d" are non-zero integers, the number of petals is the denominator of the expression 1/2 − 1/2"k" = ("n" − "d")/2"n". This means that the number of petals is "n" if both "n" and "d" are odd, and 2"n" otherwise (see the computational sketch below).
or at "θ"
. (This means that roses "r"
"a" cos("kθ") and "r"
"a" sin("kθ") with non-zero integer values of k are never coincident.)
"a", corresponding to the radial coordinate of all of its peaks.
The Dürer folium.
A rose with "k"
is called the Dürer folium, named after the German painter and engraver Albrecht Dürer. The roses specified by "r"
"a" cos() and "r"
"a" sin() are coincident even though "a" cos() ≠ "a" sin(). In Cartesian coordinates the rose is specified as
formula_14
The Dürer folium is also a trisectrix, a curve that can be used to trisect angles.
The limaçon trisectrix.
A rose with "k"
is a limaçon trisectrix that has the property of trisectrix curves that can be used to trisect angles. The rose has a single petal with two loops. (See the animation below.)
Roses with irrational number values for "k".
A rose curve specified with an irrational number for "k" has an infinite number of petals and will never complete. For example, the sinusoid "r" = "a" cos("πθ") has a period "T" = 2, so it has a petal in the polar angle interval −1/2 ≤ "θ" ≤ 1/2 with a crest on the polar axis; however, there is no other polar angle in the domain of the polar equation that will plot at the coordinates ("a", 0). Overall, roses specified by sinusoids with angular frequencies that are irrational constants form a dense set (that is, they come arbitrarily close to specifying every point in the disk "r" ≤ "a").
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "r=a\\cos(k\\theta)"
},
{
"math_id": 1,
"text": "\\begin{align}\nx &= r\\cos(\\theta) = a\\cos(k\\theta)\\cos(\\theta) \\\\\ny &= r\\sin(\\theta) = a\\cos(k\\theta)\\sin(\\theta)\n\\end{align}"
},
{
"math_id": 2,
"text": "\\sin(k \\theta) = \\cos\\left( k \\theta - \\frac{\\pi}{2} \\right) = \\cos\\left( k \\left( \\theta-\\frac{\\pi}{2k} \\right) \\right)"
},
{
"math_id": 3,
"text": "\\left(x-\\frac{a}{2}\\right)^2+y^2=\\left(\\frac{a}{2}\\right)^2"
},
{
"math_id": 4,
"text": "x^2+\\left(y-\\frac{a}{2}\\right)^2=\\left(\\frac{a}{2}\\right)^2"
},
{
"math_id": 5,
"text": "\\left(x^2+y^2\\right)^3=a^2\\left(x^2-y^2\\right)^2"
},
{
"math_id": 6,
"text": "\\left(x^2+y^2\\right)^3=4\\left(axy\\right)^2"
},
{
"math_id": 7,
"text": "\\left(x^2+y^2\\right)^2=a\\left(x^3-3xy^2\\right)"
},
{
"math_id": 8,
"text": "\\left(x^2+y^2\\right)^2=-a\\left(x^3-3xy^2\\right)"
},
{
"math_id": 9,
"text": "\\left(x^2+y^2\\right)^5=a^2\\left(x^4-6x^2y^2+y^4\\right)^2"
},
{
"math_id": 10,
"text": "\\left(x^2+y^2\\right)^5=16a^2\\left(xy^3-yx^3\\right)^2"
},
{
"math_id": 11,
"text": "\\left(x^2+y^2\\right)^3=a\\left(x^5-10x^3y^2+5xy^4\\right)"
},
{
"math_id": 12,
"text": "\\left(x^2+y^2\\right)^3=a\\left(5x^4y-10x^2y^3+y^5\\right)"
},
{
"math_id": 13,
"text": "\\begin{align}\n \\frac{1}{2}\\int_{0}^{2\\pi}(a\\cos (k\\theta))^2\\,d\\theta &= \\frac {a^2}{2} \\left(\\pi + \\frac{\\sin(4k\\pi)}{4k}\\right) = \\frac{\\pi a^2}{2} &&\\quad\\text{for even }k\\\\[8px]\n \\frac{1}{2}\\int_{0}^{\\pi}(a\\cos (k\\theta))^2\\,d\\theta &= \\frac {a^2}{2} \\left(\\frac{\\pi}{2} + \\frac{\\sin(2k\\pi)}{4k}\\right) = \\frac{\\pi a^2}{4} &&\\quad\\text{for odd }k\n\\end{align}"
},
{
"math_id": 14,
"text": "\\left(x^2+y^2\\right)\\left(2\\left(x^2+y^2\\right)-a^2\\right)^2=a^4x^2"
}
]
| https://en.wikipedia.org/wiki?curid=727472 |
727476 | Page replacement algorithm | Algorithm for virtual memory implementation
In a computer operating system that uses paging for virtual memory management, page replacement algorithms decide which memory pages to page out, sometimes called swap out, or write to disk, when a page of memory needs to be allocated. Page replacement happens when a requested page is not in memory (page fault) and a free page cannot be used to satisfy the allocation, either because there are none, or because the number of free pages is lower than some threshold.
When the page that was selected for replacement and paged out is referenced again it has to be paged in (read in from disk), and this involves waiting for I/O completion. This determines the "quality" of the page replacement algorithm: the less time waiting for page-ins, the better the algorithm. A page replacement algorithm looks at the limited information about accesses to the pages provided by hardware, and tries to guess which pages should be replaced to minimize the total number of page misses, while balancing this with the costs (primary storage and processor time) of the algorithm itself.
The page replacement problem is a typical online problem from the competitive analysis perspective in the sense that the optimal deterministic algorithm is known.
History.
Page replacement algorithms were a hot topic of research and debate in the 1960s and 1970s.
That mostly ended with the development of sophisticated LRU (least recently used) approximations and working set algorithms. Since then, some basic assumptions made by the traditional page replacement algorithms were invalidated, resulting in a revival of research. In particular, the following trends in the behavior of underlying hardware and user-level software have affected the performance of page replacement algorithms:
Requirements for page replacement algorithms have changed due to differences in operating system kernel architectures. In particular, most modern OS kernels have unified virtual memory and file system caches, requiring the page replacement algorithm to select a page from among the pages of both user program virtual address spaces and cached files. The latter pages have specific properties. For example, they can be locked, or can have write ordering requirements imposed by journaling. Moreover, as the goal of page replacement is to minimize total time waiting for memory, it has to take into account memory requirements imposed by other kernel sub-systems that allocate memory. As a result, page replacement in modern kernels (Linux, FreeBSD, and Solaris) tends to work at the level of a general purpose kernel memory allocator, rather than at the higher level of a virtual memory subsystem.
Local vs. global replacement.
Replacement algorithms can be "local" or "global."
When a process incurs a page fault, a local page replacement algorithm selects for replacement some page that belongs to that same process (or a group of processes sharing a memory partition).
A global replacement algorithm is free to select any page in memory.
Local page replacement assumes some form of memory partitioning that determines how many pages are to be assigned to a given process or a group of processes. Most popular forms of partitioning are "fixed partitioning" and "balanced set" algorithms based on the working set model. The advantage of local page replacement is its scalability: each process can handle its page faults independently, leading to more consistent performance for that process. However global page replacement is more efficient on an overall system basis.
Detecting which pages are referenced and modified.
Modern general purpose computers and some embedded processors have support for virtual memory. Each process has its own virtual address space. A page table maps a subset of the process virtual addresses to physical addresses. In addition, in most architectures the page table holds an "access" bit and a "dirty" bit for each page in the page table. The CPU sets the access bit when the process reads or writes memory in that page. The CPU sets the dirty bit when the process writes memory in that page. The operating system can modify the access and dirty bits. The operating system can detect accesses to memory and files through the following means:
Precleaning.
Most replacement algorithms simply return the target page as their result. This means that if target page is "dirty" (that is, contains data that have to be written to the stable storage before page can be reclaimed), I/O has to be initiated to send that page to the stable storage (to "clean" the page). In the early days of virtual memory, time spent on cleaning was not of much concern, because virtual memory was first implemented on systems with full duplex channels to the stable storage, and cleaning was customarily overlapped with paging. Contemporary commodity hardware, on the other hand, does not support full duplex transfers, and cleaning of target pages becomes an issue.
To deal with this situation, various "precleaning" policies are implemented. Precleaning is the mechanism that starts I/O on dirty pages that are (likely) to be replaced soon. The idea is that by the time the precleaned page is actually selected for the replacement, the I/O will complete and the page will be clean. Precleaning assumes that it is possible to identify pages that will be replaced "next". Precleaning that is too eager can waste I/O bandwidth by writing pages that manage to get re-dirtied before being selected for replacement.
The (h,k)-paging problem.
The (h,k)-paging problem is a generalization of the model of paging problem: Let h,k be positive integers such that formula_0. We measure the performance of an algorithm with cache of size formula_0 relative to the theoretically optimal page replacement algorithm. If formula_1, we provide the optimal page replacement algorithm with strictly less resource.
The (h,k)-paging problem is a way to measure how an online algorithm performs by comparing it with the performance of the optimal algorithm, specifically, separately parameterizing the cache size of the online algorithm and optimal algorithm.
Marking algorithms.
Marking algorithms is a general class of paging algorithms. For each page, we associate it with a bit called its mark. Initially, we set all pages as unmarked. During a stage (a period of operation or a sequence of requests) of page requests, we mark a page when it is first requested in this stage. A marking algorithm is such an algorithm that never pages out a marked page.
If ALG is a marking algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of size h, where formula_0, then ALG is formula_2-competitive. So every marking algorithm attains the formula_2-competitive ratio.
LRU is a marking algorithm while FIFO is not a marking algorithm.
Conservative algorithms.
An algorithm is conservative, if on any consecutive request sequence containing k or fewer distinct page references, the algorithm will incur k or fewer page faults.
If ALG is a conservative algorithm with a cache of size k, and OPT is the optimal algorithm with a cache of formula_0, then ALG is formula_2-competitive. So every conservative algorithm attains the formula_2-competitive ratio.
LRU, FIFO and CLOCK are conservative algorithms.
Page replacement algorithms.
There are a variety of page replacement algorithms:
The theoretically optimal page replacement algorithm.
The theoretically optimal page replacement algorithm (also known as OPT, clairvoyant replacement algorithm, or Bélády's optimal page replacement policy) is an algorithm that works as follows: when a page needs to be swapped in, the operating system swaps out the page whose next use will occur farthest in the future. For example, a page that is not going to be used for the next 6 seconds will be swapped out over a page that is going to be used within the next 0.4 seconds.
This algorithm cannot be implemented in a general purpose operating system because it is impossible to compute reliably how long it will be before a page is going to be used, except when all software that will run on a system is either known beforehand and is amenable to static analysis of its memory reference patterns, or only a class of applications allowing run-time analysis. Despite this limitation, algorithms exist that can offer near-optimal performance — the operating system keeps track of all pages referenced by the program, and it uses those data to decide which pages to swap in and out on subsequent runs. This algorithm can offer near-optimal performance, but not on the first run of a program, and only if the program's memory reference pattern is relatively consistent each time it runs.
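Offline, with the whole reference string known in advance, the clairvoyant policy is straightforward to simulate. The following Python sketch counts page faults under Bélády's rule — evict the resident page whose next use lies farthest in the future — on an illustrative reference string; the helper names are ours.
def opt_page_faults(references, frames):
    """Count page faults under Belady's optimal (clairvoyant) replacement policy."""
    memory, faults = [], 0
    for pos, page in enumerate(references):
        if page in memory:
            continue
        faults += 1
        if len(memory) < frames:
            memory.append(page)
            continue

        def next_use(p):
            # Distance to the next reference of p, or infinity if p is never used again.
            future = references[pos + 1:]
            return future.index(p) if p in future else float("inf")

        victim = max(memory, key=next_use)
        memory[memory.index(victim)] = page
    return faults

print(opt_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))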
Analysis of the paging problem has also been done in the field of online algorithms. Efficiency of randomized online algorithms for the paging problem is measured using amortized analysis.
Not recently used.
The not recently used (NRU) page replacement algorithm is an algorithm that favours keeping pages in memory that have been recently used. This algorithm works on the following principle: when a page is referenced, a referenced bit is set for that page, marking it as referenced. Similarly, when a page is modified (written to), a modified bit is set. The setting of the bits is usually done by the hardware, although it is possible to do so on the software level as well.
At a certain fixed time interval, a timer interrupt triggers and clears the referenced bit of all the pages, so only pages referenced within the current timer interval are marked with a referenced bit. When a page needs to be replaced, the operating system divides the pages into four classes:
3. referenced, modified
2. referenced, not modified
1. not referenced, modified
0. not referenced, not modified
Although it does not seem possible for a page to be modified yet not referenced, this happens when a class 3 page has its referenced bit cleared by the timer interrupt. The NRU algorithm picks a random page from the lowest category for removal. So out of the above four page categories, the NRU algorithm will replace a not-referenced, not-modified page if such a page exists. Note that this algorithm implies that a modified but not-referenced (within the last timer interval) page is less important than a not-modified page that is intensely referenced.
NRU is a marking algorithm, so it is formula_2-competitive.
First-in, first-out.
The simplest page-replacement algorithm is a FIFO algorithm. The first-in, first-out (FIFO) page replacement algorithm is a low-overhead algorithm that requires little bookkeeping on the part of the operating system. The idea is obvious from the name – the operating system keeps track of all the pages in memory in a queue, with the most recent arrival at the back, and the oldest arrival in front. When a page needs to be replaced, the page at the front of the queue (the oldest page) is selected. While FIFO is cheap and intuitive, it performs poorly in practical application. Thus, it is rarely used in its unmodified form. This algorithm experiences Bélády's anomaly.
In simple words, on a page fault, the frame that has been in memory the longest is replaced.
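A minimal Python sketch of FIFO as a page-fault counter (the reference string and function name are illustrative); running it with three and then four frames on the string below exhibits Bélády's anomaly.
from collections import deque

def fifo_page_faults(references, frames):
    """Count page faults under first-in, first-out replacement."""
    queue, resident, faults = deque(), set(), 0
    for page in references:
        if page in resident:
            continue
        faults += 1
        if len(resident) >= frames:
            evicted = queue.popleft()   # the oldest arrival is evicted
            resident.remove(evicted)
        queue.append(page)
        resident.add(page)
    return faults

# Belady's anomaly: adding frames can increase the number of faults under FIFO.
refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
print(fifo_page_faults(refs, 3), fifo_page_faults(refs, 4))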
The FIFO page replacement algorithm is used by the OpenVMS operating system, with some modifications. Partial second chance is provided by skipping a limited number of entries with valid translation table references, and additionally, pages are displaced from process working set to a systemwide pool from which they can be recovered if not already re-used.
FIFO is a conservative algorithm, so it is formula_2-competitive.
Second-chance.
A modified form of the FIFO page replacement algorithm, known as the Second-chance page replacement algorithm, fares relatively better than FIFO at little cost for the improvement. It works by looking at the front of the queue as FIFO does, but instead of immediately paging out that page, it checks to see if its referenced bit is set. If it is not set, the page is swapped out. Otherwise, the referenced bit is cleared, the page is inserted at the back of the queue (as if it were a new page) and this process is repeated. This can also be thought of as a circular queue. If all the pages have their referenced bit set, on the second encounter of the first page in the list, that page will be swapped out, as it now has its referenced bit cleared. If all the pages have their reference bit cleared, then second chance algorithm degenerates into pure FIFO.
As its name suggests, Second-chance gives every page a "second-chance" – an old page that has been referenced is probably in use, and should not be swapped out over a new page that has not been referenced.
Clock.
Clock is a more efficient version of FIFO than Second-chance because pages don't have to be constantly pushed to the back of the list, but it performs the same general function as Second-Chance. The clock algorithm keeps a circular list of pages in memory, with the "hand" (iterator) pointing to the last examined page frame in the list. When a page fault occurs and no empty frames exist, then the R (referenced) bit is inspected at the hand's location. If R is 0, the new page is put in place of the page the "hand" points to, and the hand is advanced one position. Otherwise, the R bit is cleared, then the clock hand is incremented and the process is repeated until a page is replaced. This algorithm was first described in 1969 by Fernando J. Corbató.
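A hedged Python sketch of the clock policy just described, with the reference (R) bit simulated in software (in a real kernel the bit is set by the hardware on access); the structure and names are ours.
def clock_replace(frames):
    """Return an access(page) function implementing the clock replacement policy."""
    pages = [None] * frames       # circular buffer of resident pages
    ref_bits = [0] * frames       # simulated reference (R) bits
    index = {}                    # page -> slot in the circular buffer
    hand = 0

    def access(page):
        """Reference a page; return True if the access caused a page fault."""
        nonlocal hand
        if page in index:
            ref_bits[index[page]] = 1        # hit: hardware would set R here
            return False
        # Fault: sweep the hand, clearing R bits, until a slot with R == 0 is found.
        while ref_bits[hand] == 1:
            ref_bits[hand] = 0
            hand = (hand + 1) % frames
        victim = pages[hand]
        if victim is not None:
            del index[victim]
        pages[hand] = page
        ref_bits[hand] = 1
        index[page] = hand
        hand = (hand + 1) % frames
        return True

    return access

access = clock_replace(3)
faults = sum(access(p) for p in [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5])
print(faults)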
Variants of clock.
CLOCK is a conservative algorithm, so it is formula_2-competitive.
Least recently used.
The least recently used (LRU) page replacement algorithm, though similar in name to NRU, differs in the fact that LRU keeps track of page usage over a short period of time, while NRU just looks at the usage in the last clock interval. LRU works on the idea that pages that have been most heavily used in the past few instructions are most likely to be used heavily in the next few instructions too. While LRU can provide near-optimal performance in theory (almost as good as adaptive replacement cache), it is rather expensive to implement in practice. There are a few implementation methods for this algorithm that try to reduce the cost yet keep as much of the performance as possible.
The most expensive method is the linked list method, which uses a linked list containing all the pages in memory. At the back of this list is the least recently used page, and at the front is the most recently used page. The cost of this implementation lies in the fact that items in the list will have to be moved about every memory reference, which is a very time-consuming process.
Another method that requires hardware support is as follows: suppose the hardware has a 64-bit counter that is incremented at every instruction. Whenever a page is accessed, it acquires the value equal to the counter at the time of page access. Whenever a page needs to be replaced, the operating system selects the page with the lowest counter and swaps it out.
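A sketch of the counter (timestamp) approach just described, with a software counter standing in for the hardware register; the helper name is ours.
def lru_page_faults(references, frames):
    """Count faults for LRU using a last-use counter per resident page."""
    last_use = {}          # page -> counter value at the most recent access
    faults = 0
    for counter, page in enumerate(references):
        if page not in last_use:
            faults += 1
            if len(last_use) >= frames:
                # The lowest counter marks the least recently used page.
                victim = min(last_use, key=last_use.get)
                del last_use[victim]
        last_use[page] = counter
    return faults

print(lru_page_faults([1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5], 3))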
Because of implementation costs, one may consider algorithms (like those that follow) that are similar to LRU, but which offer cheaper implementations.
One important advantage of the LRU algorithm is that it is amenable to full statistical analysis. It has been proven, for example, that LRU can never result in more than N-times more page faults than OPT algorithm, where N is proportional to the number of pages in the managed pool.
On the other hand, LRU's weakness is that its performance tends to degenerate under many quite common reference patterns. For example, if there are N pages in the LRU pool, an application executing a loop over array of N + 1 pages will cause a page fault on each and every access. As loops over large arrays are common, much effort has been put into modifying LRU to work better in such situations. Many of the proposed LRU modifications try to detect looping reference patterns and to switch into suitable replacement algorithm, like Most Recently Used (MRU).
Variants on LRU.
A comparison of ARC with other algorithms (LRU, MQ, 2Q, LRU-2, LRFU, LIRS) can be found in Megiddo & Modha 2004.
LRU is a marking algorithm, so it is formula_2-competitive.
Random.
Random replacement algorithm replaces a random page in memory. This eliminates the overhead cost of tracking page references. Usually it fares better than FIFO, and for looping memory references it is better than LRU, although generally LRU performs better in practice. OS/390 uses global LRU approximation and falls back to random replacement when LRU performance degenerates, and the Intel i860 processor used a random replacement policy (Rhodehamel 1989).
Not frequently used (NFU).
The not frequently used (NFU) page replacement algorithm requires a counter, and every page has one counter of its own which is initially set to 0. At each clock interval, all pages that have been referenced within that interval will have their counter incremented by 1. In effect, the counters keep track of how frequently a page has been used. Thus, the page with the lowest counter can be swapped out when necessary.
The main problem with NFU is that it keeps track of the frequency of use without regard to the time span of use. Thus, in a multi-pass compiler, pages which were heavily used during the first pass, but are not needed in the second pass will be favoured over pages which are comparably lightly used in the second pass, as they have higher frequency counters. This results in poor performance. Other common scenarios exist where NFU will perform similarly, such as an OS boot-up. Thankfully, a similar and better algorithm exists, and its description follows.
The not frequently used page-replacement algorithm generates fewer page faults than the least recently used page replacement algorithm when the page table contains null pointer values.
Aging.
The aging algorithm is a descendant of the NFU algorithm, with modifications to make it aware of the time span of use. Instead of just incrementing the counters of pages referenced, putting equal emphasis on page references regardless of the time, the reference counter on a page is first shifted right (divided by 2), before adding the referenced bit to the left of that binary number. For instance, if a page has referenced bits 1,0,0,1,1,0 in the past 6 clock ticks, its referenced counter will look like this in chronological order: 10000000, 01000000, 00100000, 10010000, 11001000, 01100100. Page references closer to the present time have more impact than page references long ago. This ensures that pages referenced more recently, though less frequently referenced, will have higher priority over pages more frequently referenced in the past. Thus, when a page needs to be swapped out, the page with the lowest counter will be chosen.
The following Python code simulates the aging algorithm.
Counters formula_3 are initialized with 0 and updated as described above via formula_4, using arithmetic shift operators.
from collections.abc import Sequence
def simulate_aging(Rs: Sequence, k: int) -> None:
    # Simulate aging
    print(" t | R-bits (0-{length}) | Counters for pages 0-{length}".format(length=len(Rs[0]) - 1))
    Vs = [0] * len(Rs[0])
    for t, R in enumerate(Rs):
        # Shift each counter right and place the page's R bit in the most significant position.
        Vs[:] = [R[i] << (k - 1) | V >> 1
                 for i, V in enumerate(Vs)]
        print("{:02d} | {} | [{}]".format(t, R,
              ", ".join(["{:0{}b}".format(V, k)
                         for V in Vs])))
In the given example of R-bits for 6 pages over 5 clock ticks, the function prints the following output, which lists the R-bits for each clock tick t and the individual counter values formula_3 for each page in binary representation.
>>> Rs = [[1,0,1,0,1,1], [1,1,0,0,1,0], [1,1,0,1,0,1], [1,0,0,0,1,0], [0,1,1,0,0,0]]
>>> k = 8
>>> simulate_aging(Rs, k)
t | R-bits (0-5) | Counters for pages 0-5
00 | [1, 0, 1, 0, 1, 1] | [10000000, 00000000, 10000000, 00000000, 10000000, 10000000]
01 | [1, 1, 0, 0, 1, 0] | [11000000, 10000000, 01000000, 00000000, 11000000, 01000000]
02 | [1, 1, 0, 1, 0, 1] | [11100000, 11000000, 00100000, 10000000, 01100000, 10100000]
03 | [1, 0, 0, 0, 1, 0] | [11110000, 01100000, 00010000, 01000000, 10110000, 01010000]
04 | [0, 1, 1, 0, 0, 0] | [01111000, 10110000, 10001000, 00100000, 01011000, 00101000]
Note that aging differs from LRU in the sense that aging can only keep track of the references in the latest (depending on the bit size of the processor's integers) time intervals. Consequently, two pages may have referenced counters of 00000000, even though one page was referenced 9 intervals ago and the other 1000 intervals ago. Generally speaking, knowing the usage within the past 16 intervals is sufficient for making a good decision as to which page to swap out. Thus, aging can offer near-optimal performance for a moderate price.
Longest distance first (LDF) page replacement algorithm.
The basic idea behind this algorithm is locality of reference, as used in LRU, but the difference is that in LDF, locality is based on distance rather than on the recency of references. In LDF, the page that is at the longest distance from the current page is replaced; if two pages are at the same distance, the page that is next to the current page in the anti-clockwise direction is replaced.
Implementation details.
Techniques for hardware with no reference bit.
Many of the techniques discussed above assume the presence of a reference bit associated with each page. Some hardware has no such bit, so its efficient use requires techniques that operate well without one.
One notable example is VAX hardware running OpenVMS. This system knows if a page has been modified, but not necessarily if a page has been read. Its approach is known as Secondary Page Caching. Pages removed from working sets (process-private memory, generally) are placed on special-purpose lists while remaining in physical memory for some time. Removing a page from a working set is not technically a page-replacement operation, but effectively identifies that page as a candidate. A page whose backing store is still valid (whose contents are not dirty, or otherwise do not need to be preserved) is placed on the tail of the Free Page List. A page that requires writing to backing store will be placed on the Modified Page List. These actions are typically triggered when the size of the Free Page List falls below an adjustable threshold.
Pages may be selected for working set removal in an essentially random fashion, with the expectation that if a poor choice is made, a future reference may retrieve that page from the Free or Modified list before it is removed from physical memory. A page referenced this way will be removed from the Free or Modified list and placed back into a process working set. The Modified Page List additionally provides an opportunity to write pages out to backing store in groups of more than one page, increasing efficiency. These pages can then be placed on the Free Page List. The sequence of pages that works its way to the head of the Free Page List resembles the results of a LRU or NRU mechanism and the overall effect has similarities to the Second-Chance algorithm described earlier.
Another example is used by the Linux kernel on ARM. The lack of hardware functionality is made up for by providing two page tables – the processor-native page tables, with neither referenced bits nor dirty bits, and software-maintained page tables with the required bits present. The emulated bits in the software-maintained table are set by page faults. In order to get the page faults, clearing emulated bits in the second table revokes some of the access rights to the corresponding page, which is implemented by altering the native table.
Page cache in Linux.
Linux uses a unified page cache for both file-backed memory (such as memory-mapped files and ordinary filesystem reads and writes) and anonymous, swap-backed process memory.
The unified page cache operates on units of the smallest page size supported by the CPU (4 KiB in ARMv8, x86 and x86-64) with some pages of the next larger size (2 MiB in x86-64) called "huge pages" by Linux. The pages in the page cache are divided in an "active" set and an "inactive" set. Both sets keep a LRU list of pages. In the basic case, when a page is accessed by a user-space program it is put in the head of the inactive set. When it is accessed repeatedly, it is moved to the active list. Linux moves the pages from the active set to the inactive set as needed so that the active set is smaller than the inactive set. When a page is moved to the inactive set it is removed from the page table of any process address space, without being paged out of physical memory. When a page is removed from the inactive set, it is paged out of physical memory. The size of the "active" and "inactive" list can be queried from codice_6 in the fields "Active", "Inactive", "Active(anon)", "Inactive(anon)", "Active(file)" and "Inactive(file)".
Working set.
The working set of a process is the set of pages expected to be used by that process during some time interval.
The "working set model" isn't a page replacement algorithm in the strict sense (it's actually a kind of medium-term scheduler)
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Div col/styles.css"/> | [
{
"math_id": 0,
"text": "h \\leq k"
},
{
"math_id": 1,
"text": "h<k"
},
{
"math_id": 2,
"text": " \\tfrac{k}{k-h+1}"
},
{
"math_id": 3,
"text": "V_i"
},
{
"math_id": 4,
"text": "V_i \\leftarrow (R_i \\ll (k-1)) | (V_i \\gg 1)"
}
]
| https://en.wikipedia.org/wiki?curid=727476 |
7276069 | Kendall tau distance | Metric to compare ordering
The Kendall tau rank distance is a metric (distance function) that counts the number of pairwise disagreements between two ranking lists. The larger the distance, the more dissimilar the two lists are. Kendall tau distance is also called bubble-sort distance since it is equivalent to the number of swaps that the bubble sort algorithm would take to place one list in the same order as the other list. The Kendall tau distance was created by Maurice Kendall.
Definition.
The Kendall tau ranking distance between two lists formula_0 and formula_1 is
formula_2
where formula_3 and formula_4 are the rankings of the element formula_5 in formula_0 and formula_1 respectively.
formula_6 will be equal to 0 if the two lists are identical and formula_7 (where formula_8 is the list size) if one list is the reverse of the other.
Kendall tau distance may also be defined as
formula_9
where "P" is the set of unordered pairs of distinct elements in formula_0 and formula_1, and formula_10 is 0 if "i" and "j" are in the same order in formula_0 and formula_11 Otherwise, formula_10 is 1.
Kendall tau distance can also be defined as the total number of discordant pairs.
Kendall tau distance in Rankings: A permutation (or ranking) is an array of N integers where each of the integers between 0 and N-1 appears exactly once.
The Kendall tau distance between two rankings is the number of pairs that are in different order in the two rankings. For example, the Kendall tau distance between 0 3 1 6 2 5 4 and 1 0 3 6 4 2 5 is four because the pairs 0-1, 3-1, 2-4, 5-4 are in different order in the two rankings, but all other pairs are in the same order.
The normalized Kendall tau distance formula_12 is formula_13 and therefore lies in the interval [0,1].
If the Kendall tau distance function is computed as formula_14 instead of formula_15 (where formula_0 and formula_1 are the rankings of the elements of formula_16 and formula_17 respectively), then the triangle inequality is not guaranteed. The triangle inequality can also fail when there are repetitions in the lists; in such cases the function is no longer a metric.
Generalised versions of Kendall tau distance have been proposed to give weights to different items and different positions in the ranking.
Comparison to Kendall tau rank correlation coefficient.
The Kendall tau distance (formula_18) must not be confused with the Kendall tau rank correlation coefficient (formula_19) used in statistics.
They are related by formula_20, formula_21
Or more simply by formula_22 where formula_12 is the normalised distance formula_23 (see above).
The distance is a value between 0 and formula_24.
The correlation is between -1 and 1.
The distance between equals is 0, the correlation between equals is 1.
The distance between reversals is formula_25, and the correlation between reversals is -1.
For example, comparing the rankings A>B>C>D and A>B>C>D, the distance is 0 and the correlation is 1.
Comparing the rankings A>B>C>D and D>C>B>A, the distance is 6 and the correlation is -1.
Comparing the rankings A>B>C>D and B>D>A>C, the distance is 3 and the correlation is 0.
Example.
Suppose one ranks a group of five people by height and by weight:
Person: A B C D E
Rank by height: 1 2 3 4 5
Rank by weight: 3 4 1 2 5
Here person A is tallest and third-heaviest, B is the second-tallest and fourth-heaviest, and so on.
In order to calculate the Kendall tau distance, pair each person with every other person and count the number of times the values in list 1 are in the opposite order of the values in list 2.
Since there are four pairs whose values are in opposite order, the Kendall tau distance is 4. The normalized Kendall tau distance is
formula_26
A value of 0.4 indicates that 40% of pairs differ in ordering between the two lists.
Computing the Kendall tau distance.
A naive implementation in Python (using NumPy) is:
import numpy as np
def normalised_kendall_tau_distance(values1, values2):
    """Compute the normalised Kendall tau distance."""
    n = len(values1)
    assert len(values2) == n, "Both lists have to be of equal length"
    i, j = np.meshgrid(np.arange(n), np.arange(n))
    a = np.argsort(values1)
    b = np.argsort(values2)
    ndisordered = np.logical_or(np.logical_and(a[i] < a[j], b[i] > b[j]), np.logical_and(a[i] > a[j], b[i] < b[j])).sum()
    return ndisordered / (n * (n - 1))
However, this requires formula_27 memory, which is inefficient for large arrays.
Given two rankings formula_28, it is possible to rename the items such that formula_29. Then, the problem of computing the Kendall tau distance reduces to computing the number of "inversions" in formula_1—the number of index pairs formula_30 such that formula_31 while formula_32. There are several algorithms for calculating this number; for example, counting the inversions with a merge sort takes formula_33 time, and more sophisticated algorithms run in formula_34 time (a sketch of the merge-sort approach appears after the C implementation below).
Here is a basic C implementation.
#include <stdbool.h>
#include <stdlib.h>

int kendallTau(short x[], short y[], int len) {
    int i, j, v = 0;
    bool a, b;
    for (i = 0; i < len; i++) {
        for (j = i + 1; j < len; j++) {
            a = x[i] < x[j] && y[i] > y[j];
            b = x[i] > x[j] && y[i] < y[j];
            if (a || b)
                v++;
        }
    }
    return abs(v);
}

float normalize(int kt, int len) {
    return kt / (len * (len - 1) / 2.0);
}
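The quadratic-time pairwise comparison above can be improved. The following Python sketch counts inversions with a merge sort in O(n log n) time; the relabeling step follows the reduction described earlier, and the final example reproduces the distance of 4 from the rankings discussed above. The function names are ours.
def count_inversions(seq):
    """Count pairs i < j with seq[i] > seq[j], via merge sort, in O(n log n) time."""
    if len(seq) <= 1:
        return 0, list(seq)
    mid = len(seq) // 2
    inv_left, left = count_inversions(seq[:mid])
    inv_right, right = count_inversions(seq[mid:])
    merged, inversions, i, j = [], inv_left + inv_right, 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
            inversions += len(left) - i   # each remaining left element is inverted with right[j]
    merged.extend(left[i:])
    merged.extend(right[j:])
    return inversions, merged

# Relabel so the first ranking becomes 0, 1, 2, ...; the distance is then the
# number of inversions in the relabeled second ranking.
tau1 = [0, 3, 1, 6, 2, 5, 4]
tau2 = [1, 0, 3, 6, 4, 2, 5]
position = {item: idx for idx, item in enumerate(tau1)}
print(count_inversions([position[item] for item in tau2])[0])   # prints 4, as in the example above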
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tau_1"
},
{
"math_id": 1,
"text": "\\tau_2"
},
{
"math_id": 2,
"text": "K_d(\\tau_1, \\tau_2) = |\\{(i,j): i < j, [ \\tau_1(i) < \\tau_1(j) \\wedge \\tau_2(i) > \\tau_2(j) ] \\vee [ \\tau_1(i) > \\tau_1(j) \\wedge \\tau_2(i) < \\tau_2(j) ]\\}|."
},
{
"math_id": 3,
"text": "\\tau_1(i)"
},
{
"math_id": 4,
"text": "\\tau_2(i)"
},
{
"math_id": 5,
"text": "i"
},
{
"math_id": 6,
"text": "K_d(\\tau_1,\\tau_2)"
},
{
"math_id": 7,
"text": " \\frac{1}{2} n (n-1)"
},
{
"math_id": 8,
"text": "n"
},
{
"math_id": 9,
"text": "K_d(\\tau_1,\\tau_2) = \\sum_{\\{i,j\\}\\in P , i<j } \\bar{K}_{i,j}(\\tau_1,\\tau_2) "
},
{
"math_id": 10,
"text": "\\bar{K}_{i,j}(\\tau_1,\\tau_2)"
},
{
"math_id": 11,
"text": "\\tau_2."
},
{
"math_id": 12,
"text": "K_n"
},
{
"math_id": 13,
"text": " \\frac{K_d}{\\frac{1}{2} n (n-1)} = \\frac{2 K_d}{n (n-1)} "
},
{
"math_id": 14,
"text": "K(L1,L2)"
},
{
"math_id": 15,
"text": "K(\\tau_1,\\tau_2)"
},
{
"math_id": 16,
"text": "L1"
},
{
"math_id": 17,
"text": "L2"
},
{
"math_id": 18,
"text": "K_d"
},
{
"math_id": 19,
"text": "K_c"
},
{
"math_id": 20,
"text": " K_c = 1 - 4 K_d /(n(n-1)) "
},
{
"math_id": 21,
"text": " K_d = (1 - K_c) (n(n-1))/4 "
},
{
"math_id": 22,
"text": "K_c = 1 - 2 K_n , K_n = (1-K_c)/2"
},
{
"math_id": 23,
"text": "2 K_d / (n(n-1)) "
},
{
"math_id": 24,
"text": "n(n-1) /2"
},
{
"math_id": 25,
"text": "n(n-1) / 2 "
},
{
"math_id": 26,
"text": "\\frac{4}{5(5 - 1)/2} = 0.4."
},
{
"math_id": 27,
"text": "n^2"
},
{
"math_id": 28,
"text": "\\tau_1,\\tau_2"
},
{
"math_id": 29,
"text": "\\tau_1 = (1,2,3,...)"
},
{
"math_id": 30,
"text": "i,j"
},
{
"math_id": 31,
"text": "i<j"
},
{
"math_id": 32,
"text": "\\tau_2(i) > \\tau_2(j)"
},
{
"math_id": 33,
"text": "O(n \\log n)"
},
{
"math_id": 34,
"text": "O(n\\sqrt{\\log{n}})"
}
]
| https://en.wikipedia.org/wiki?curid=7276069 |
7277012 | Greedy algorithm for Egyptian fractions | Simple method for finding Egyptian fractions.
In mathematics, the greedy algorithm for Egyptian fractions is a greedy algorithm, first described by Fibonacci, for transforming rational numbers into Egyptian fractions. An Egyptian fraction is a representation of an irreducible fraction as a sum of distinct unit fractions, such as 5/6 = 1/2 + 1/3. As the name indicates, these representations have been used as long ago as ancient Egypt, but the first published systematic method for constructing such expansions was described in 1202 in the "Liber Abaci" of Leonardo of Pisa (Fibonacci). It is called a greedy algorithm because at each step the algorithm chooses greedily the largest possible unit fraction that can be used in any representation of the remaining fraction.
Fibonacci actually lists several different methods for constructing Egyptian fraction representations. He includes the greedy method as a last resort for situations when several simpler methods fail; see Egyptian fraction for a more detailed listing of these methods. As Salzer (1948) details, the greedy method, and extensions of it for the approximation of irrational numbers, have been rediscovered several times by modern mathematicians, earliest and most notably by J. J. Sylvester (1880). A closely related expansion method that produces closer approximations at each step by allowing some unit fractions in the sum to be negative dates back to .
The expansion produced by this method for a number formula_0 is called the greedy Egyptian expansion, Sylvester expansion, or Fibonacci–Sylvester expansion of formula_0. However, the term "Fibonacci expansion" usually refers, not to this method, but to representation of integers as sums of Fibonacci numbers.
Algorithm and examples.
Fibonacci's algorithm expands the fraction formula_1 to be represented, by repeatedly performing the replacement
formula_2
(simplifying the second term in this replacement as necessary). For instance:
formula_3
in this expansion, the denominator 3 of the first unit fraction is the result of rounding 15/7 up to the next larger integer, and the remaining fraction 2/15 is the result of simplifying 6/45 = 2/15. The denominator of the second unit fraction, 8, is the result of rounding 15/2 up to the next larger integer, and the remaining fraction 1/120 is what is left from 7/15 after subtracting both 1/3 and 1/8.
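A short Python sketch of this replacement process, using exact rational arithmetic; the function name is ours, and the code follows the greedy rule described above rather than any particular historical presentation.
from fractions import Fraction

def greedy_egyptian(x, y):
    """Return the denominators of the greedy (Fibonacci-Sylvester) expansion of x/y."""
    remainder, denominators = Fraction(x, y), []
    while remainder > 0:
        # Smallest d with 1/d <= remainder, computed with exact integer arithmetic.
        d = -(-remainder.denominator // remainder.numerator)
        denominators.append(d)
        remainder -= Fraction(1, d)
    return denominators

print(greedy_egyptian(7, 15))    # [3, 8, 120], matching the expansion above
print(greedy_egyptian(5, 121))   # five terms with rapidly growing denominators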
As each expansion step reduces the numerator of the remaining fraction to be expanded, this method always terminates with a finite expansion; however, compared to ancient Egyptian expansions or to more modern methods, this method may produce expansions that are quite long, with large denominators. For instance, this method expands
formula_4
while other methods lead to the much better expansion
formula_5
suggests an even more badly-behaved example, . The greedy method leads to an expansion with ten terms, the last of which has over 500 digits in its denominator; however, has a much shorter non-greedy representation, + + +.
Sylvester's sequence and closest approximation.
Sylvester's sequence 2, 3, 7, 43, 1807, ... (OEIS: ) can be viewed as generated by an infinite greedy expansion of this type for the number 1, where at each step we choose the denominator ⌊"y"/"x"⌋ + 1 instead of ⌈"y"/"x"⌉. Truncating this sequence to "k" terms and forming the corresponding Egyptian fraction, e.g. (for "k" = 4)
formula_6
results in the closest possible underestimate of 1 by any "k"-term Egyptian fraction. That is, for example, any Egyptian fraction for a number in the open interval (1805/1806, 1) requires at least five terms. describes an application of these closest-approximation results in lower-bounding the number of divisors of a perfect number, while describes applications in group theory.
Maximum-length expansions and congruence conditions.
Any fraction "x"/"y" requires at most "x" terms in its greedy expansion. and examine the conditions under which the greedy method produces an expansion of "x"/"y" with exactly "x" terms; these can be described in terms of congruence conditions on "y".
More generally the sequence of fractions that have "x"-term greedy expansions and that have the smallest possible denominator "y" for each "x" is
<templatestyles src="Block indent/styles.css"/>
Approximation of polynomial roots.
and describe a method of finding an accurate approximation for the roots of a polynomial based on the greedy method. Their algorithm computes the greedy expansion of a root; at each step in this expansion it maintains an auxiliary polynomial that has as its root the remaining fraction to be expanded. Consider as an example applying this method to find the greedy expansion of the golden ratio, one of the two solutions of the polynomial equation "P"0("x") = "x"2 − "x" − 1 = 0. The algorithm of Stratemeyer and Salzer performs the following sequence of steps:
Continuing this approximation process eventually produces the greedy expansion for the golden ratio,
<templatestyles src="Block indent/styles.css"/>
Other integer sequences.
The length, minimum denominator, and maximum denominator of the greedy expansion for all fractions with small numerators and denominators can be found in the On-Line Encyclopedia of Integer Sequences as sequences OEIS: , OEIS: , and OEIS: , respectively. In addition, the greedy expansion of any irrational number leads to an infinite increasing sequence of integers, and the OEIS contains expansions of several well known constants. Some additional entries in the OEIS, though not labeled as being produced by the greedy algorithm, appear to be of the same type.
Related expansions.
In general, if one wants an Egyptian fraction expansion in which the denominators are constrained in some way, it is possible to define a greedy algorithm in which at each step one chooses the expansion
formula_8
where formula_9 is chosen, among all possible values satisfying the constraints, as small as possible such that formula_10 and such that formula_9 is distinct from all previously chosen denominators. Examples of methods defined in this way include Engel expansion, in which each successive denominator must be a multiple of the previous one, and odd greedy expansion, in which all denominators are constrained to be odd numbers.
However, it may be difficult to determine whether an algorithm of this type can always succeed in finding a finite expansion. In particular, it is unknown whether the odd greedy expansion terminates with a finite expansion for all fractions formula_1 for which formula_11 is odd, although it is possible to find finite odd expansions for these fractions by non-greedy methods.
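Because termination of the odd greedy expansion is not known to be guaranteed, any implementation needs an escape hatch. The following hedged Python sketch applies the odd greedy rule but gives up after a fixed number of steps; the cap and the function name are arbitrary choices.
from fractions import Fraction

def odd_greedy(x, y, max_terms=100):
    """Attempt an odd greedy expansion of x/y (y odd); give up after max_terms steps."""
    remainder, denominators = Fraction(x, y), []
    while remainder > 0 and len(denominators) < max_terms:
        d = -(-remainder.denominator // remainder.numerator)   # ordinary greedy choice
        if d % 2 == 0:
            d += 1                                             # smallest admissible *odd* denominator
        denominators.append(d)
        remainder -= Fraction(1, d)
    return denominators if remainder == 0 else None

print(odd_greedy(3, 7))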
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "x"
},
{
"math_id": 1,
"text": "x/y"
},
{
"math_id": 2,
"text": "\\frac{x}{y}=\\frac{1}{\\left\\lceil \\frac y x \\right\\rceil}+\\frac{(-y)\\bmod x}{y\\left\\lceil \\frac y x \\right\\rceil}"
},
{
"math_id": 3,
"text": "\\frac{7}{15}=\\frac{1}{3}+\\frac{2}{15}=\\frac{1}{3}+\\frac{1}{8}+\\frac{1}{120}."
},
{
"math_id": 4,
"text": "\\frac{5}{121}=\\frac{1}{25}+\\frac{1}{757}+\\frac{1}{763\\,309}+\\frac{1}{873\\,960\\,180\\,913}+\\frac{1}{1\\,527\\,612\\,795\\,642\\,093\\,418\\,846\\,225},"
},
{
"math_id": 5,
"text": "\\frac{5}{121}=\\frac{1}{33}+\\frac{1}{121}+\\frac{1}{363}."
},
{
"math_id": 6,
"text": "\\frac12+\\frac13+\\frac17+\\frac1{43}=\\frac{1805}{1806}"
},
{
"math_id": 7,
"text": "\\frac{(-y)\\bmod x}{y\\left\\lceil \\frac y x \\right\\rceil} = \\frac2{\\,\\frac{y(y+2)}{3}\\,}"
},
{
"math_id": 8,
"text": "\\frac{x}{y}=\\frac{1}{d}+\\frac{xd-y}{yd},"
},
{
"math_id": 9,
"text": "d"
},
{
"math_id": 10,
"text": "xd > y"
},
{
"math_id": 11,
"text": "y"
}
]
| https://en.wikipedia.org/wiki?curid=7277012 |
7277255 | Continuous embedding | In mathematics, one normed vector space is said to be continuously embedded in another normed vector space if the inclusion function between them is continuous. In some sense, the two norms are "almost equivalent", even though they are not both defined on the same space. Several of the Sobolev embedding theorems are continuous embedding theorems.
Definition.
Let "X" and "Y" be two normed vector spaces, with norms ||·||"X" and ||·||"Y" respectively, such that "X" ⊆ "Y". If the inclusion map (identity function)
formula_0
is continuous, i.e. if there exists a constant "C" > 0 such that
formula_1
for every "x" in "X", then "X" is said to be continuously embedded in "Y". Some authors use the hooked arrow "↪" to denote a continuous embedding, i.e. ""X" ↪ "Y" means "X" and "Y" are normed spaces with "X" continuously embedded in "Y"". This is a consistent use of notation from the point of view of the category of topological vector spaces, in which the morphisms ("arrows") are the continuous linear maps.
A simple finite-dimensional example of a continuous embedding is given by identifying the real line with the "x"-axis in the plane via the map
formula_2
In this case, ||"x"||"X" = ||"x"||"Y" for every real number "X". Clearly, the optimal choice of constant "C" is "C" = 1.
For a more substantial example, let Ω be a bounded, open subset of R"n" with Lipschitz boundary, let 1 ≤ "p" < "n", and define the Sobolev conjugate exponent
formula_3
Then the Sobolev space "W"1,"p"(Ω; R) is continuously embedded in the "L""p" space "L""p"∗(Ω; R). In fact, for 1 ≤ "q" < "p"∗, this embedding is compact. The optimal constant "C" will depend upon the geometry of the domain Ω.
For an example of an inclusion that is not continuous, let
formula_4
the space of continuous real-valued functions defined on the unit interval, but equip "X" with the "L"1 norm and "Y" with the supremum norm. For "n" ∈ N, let "f""n" be the continuous, piecewise linear function given by
formula_5
Then, for every "n", ||"f""n"||"Y" = ||"f""n"||∞ = "n", but
formula_6
Hence, no constant "C" can be found such that ||"f""n"||"Y" ≤ "C"||"f""n"||"X", and so the embedding of "X" into "Y" is discontinuous. | [
{
"math_id": 0,
"text": "i : X \\hookrightarrow Y : x \\mapsto x"
},
{
"math_id": 1,
"text": "\\| x \\|_Y \\leq C \\| x \\|_X"
},
{
"math_id": 2,
"text": "i : \\mathbf{R} \\to \\mathbf{R}^2 : x \\mapsto (x, 0)"
},
{
"math_id": 3,
"text": "p^{*} = \\frac{n p}{n - p}."
},
{
"math_id": 4,
"text": "X = Y = C^0 ([0, 1]; \\mathbf{R}),"
},
{
"math_id": 5,
"text": "f_n (x) = \\begin{cases} - n^2 x + n , & 0 \\leq x \\leq \\tfrac 1 n; \\\\ 0, & \\text{otherwise.} \\end{cases}"
},
{
"math_id": 6,
"text": "\\| f_n \\|_{L^1} = \\int_0^1 | f_n (x) | \\, \\mathrm{d} x = \\frac1{2}."
}
]
| https://en.wikipedia.org/wiki?curid=7277255 |
72773869 | Madhava's correction term | Madhava's correction term is a mathematical expression attributed to Madhava of Sangamagrama (c. 1340 – c. 1425), the founder of the Kerala school of astronomy and mathematics, that can be used to give a better approximation to the value of the mathematical constant π ("pi") than the partial sum approximation obtained by truncating the Madhava–Leibniz infinite series for π. The Madhava–Leibniz infinite series for π is
formula_0
Taking the partial sum of the first formula_1 terms we have the following approximation to π:
formula_2
Denoting the Madhava correction term by formula_3, we have the following better approximation to π:
formula_4
Three different expressions have been attributed to Madhava as possible values of formula_3, namely,
formula_5
formula_6
formula_7
In the extant writings of the mathematicians of the Kerala school there are some indications regarding how the correction terms formula_8 and formula_9 have been obtained, but there are no indications on how the expression formula_10 has been obtained. This has led to a lot of speculative work on how the formulas might have been derived.
Correction terms as given in Kerala texts.
The expressions for formula_9 and formula_10 are given explicitly in the "Yuktibhasha", a major treatise on mathematics and astronomy authored by the Indian astronomer Jyesthadeva of the Kerala school of mathematics around 1530, but that for formula_8 appears there only as a step in the argument leading to the derivation of formula_9.
The "Yuktidipika–Laghuvivrthi" commentary of "Tantrasangraha", a treatise written by Nilakantha Somayaji an astronomer/mathematician belonging to the Kerala school of astronomy and mathematics and completed in 1501, presents the second correction term in the following verses (Chapter 2: Verses 271–274):
English translation of the verses:
"To the diameter multiplied by 4 alternately add and subtract in order the diameter multiplied by 4 and divided separately by the odd numbers 3, 5, etc. That odd number at which this process ends, four times the diameter should be multiplied by the next even number, halved and [then] divided by one added to that [even] number squared. The result is to be added or subtracted according as the last term was subtracted or added. This gives the circumference more accurately than would be obtained by going on with that process."
In modern notations this can be stated as follows (where formula_11 is the diameter of the circle):
Circumference formula_12
If we set formula_13, the last term in the right hand side of the above equation reduces to formula_14.
The same commentary also gives the correction term formula_10 in the following verses (Chapter 2: Verses 295–296):
English translation of the verses:
"A subtler method, with another correction. [Retain] the first procedure involving division of four times the diameter by the odd numbers, 3, 5, etc. [But] then add or subtract it [four times the diameter] multiplied by one added to the next even number halved and squared, and divided by one added to four times the preceding multiplier [with this] multiplied by the even number halved."
In modern notations, this can be stated as follows:
formula_15
where the "multiplier" formula_16 If we set formula_13, the last term in the right hand side of the above equation reduces to formula_17.
Accuracy of the correction terms.
Let
formula_18.
Then, writing formula_19, the errors formula_20 have the following bounds:
formula_21
Numerical values of the errors in the computation of π.
The errors in using these approximations in computing the value of π are
formula_22
formula_23
The following table gives the values of these errors for a few selected values of formula_1.
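These errors are easy to compute directly. The following Python sketch evaluates the truncated series and the three corrected values using high-precision arithmetic; it assumes the third-party mpmath library is available, and the working precision and sample values of "n" are arbitrary choices.
from mpmath import mp, mpf, pi

mp.dps = 50   # working precision (an arbitrary choice)

def corrected_errors(n):
    """Return (E(n), E_1(n), E_2(n), E_3(n)) for the series truncated after n terms."""
    partial = 4 * sum(mpf(1) / (2 * k - 1) * (-1) ** (k - 1) for k in range(1, n + 1))
    sign = (-1) ** n
    F1 = mpf(1) / (4 * n)
    F2 = mpf(n) / (4 * n * n + 1)
    F3 = mpf(n * n + 1) / (4 * n ** 3 + 5 * n)
    E = pi - partial
    return E, E - 4 * sign * F1, E - 4 * sign * F2, E - 4 * sign * F3

for n in (10, 50, 100):
    print(n, [float(e) for e in corrected_errors(n)])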
Continued fraction expressions for the correction terms.
It has been noted that the correction terms formula_24 are the first three convergents of the following continued fraction expressions:
formula_25
formula_26
The function formula_27 that renders the equation
formula_28
exact can be expressed in the following form:
formula_29
The first three convergents of this infinite continued fraction are precisely the correction terms of Madhava. Also, this function formula_27 has the following property:
formula_30
Speculative derivation by Hayashi "et al.".
In a paper published in 1990, a group of three Japanese researchers proposed an ingenious method by which Madhava might have obtained the three correction terms. Their proposal was based on two assumptions: Madhava used formula_31 as the value of π and he used the Euclidean algorithm for division.
Writing
formula_32
and taking formula_33, compute the values formula_34, express them as fractions with 1 as numerator, and finally ignore the fractional parts in the denominators to obtain the approximations:
formula_35
This suggests the following first approximation to formula_36 which is the correction term formula_8 talked about earlier.
formula_37
The fractions that were ignored can then be expressed with 1 as numerator, with the fractional parts in the denominators ignored to obtain the next approximation. Two such steps are:
formula_38
This yields the next two approximations to formula_34 exactly the same as the correction terms formula_39
formula_40
and formula_41
formula_42
attributed to Madhava.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\frac{\\pi}{4}=1-\\frac{1}{3}+\\frac{1}{5}-\\frac{1}{7}+ \\cdots"
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "\\frac{\\pi}{4} \\approx 1-\\frac{1}{3}+\\frac{1}{5}-\\frac{1}{7}+ \\cdots + (-1)^{n-1}\\frac{1}{2n-1}"
},
{
"math_id": 3,
"text": "F(n)"
},
{
"math_id": 4,
"text": "\\frac{\\pi}{4} \\approx 1-\\frac{1}{3}+\\frac{1}{5}-\\frac{1}{7}+ \\cdots + (-1)^{n-1}\\frac{1}{2n-1} + (-1)^n F(n)"
},
{
"math_id": 5,
"text": "F_1(n)=\\frac{1}{4n}"
},
{
"math_id": 6,
"text": "F_2(n)=\\frac{n}{4n^2+1}"
},
{
"math_id": 7,
"text": "F_3(n)=\\frac{n^2+1}{4n^3+5n}"
},
{
"math_id": 8,
"text": "F_1(n)"
},
{
"math_id": 9,
"text": "F_2(n)"
},
{
"math_id": 10,
"text": "F_3(n)"
},
{
"math_id": 11,
"text": "d"
},
{
"math_id": 12,
"text": " = 4d - \\frac{4d}{3} + \\frac{4d}{5}- \\cdots \\pm \\frac{4d}{p} \\mp \\frac{ 4d\\left(p+1\\right) / 2 }{1 + (p+1)^2}"
},
{
"math_id": 13,
"text": "p=2n-1"
},
{
"math_id": 14,
"text": "4d F_2(n)"
},
{
"math_id": 15,
"text": "\n\\text{Circumference}\n= 4d - \\frac{4d}3 + \\frac{4d}5 - \\cdots \\pm \\frac{4d}p \\mp \\frac{4dm}{\\left(1 + 4m\\right)(p+1)/2},"
},
{
"math_id": 16,
"text": "m = 1 + \\left((p+1)/2\\right)^2."
},
{
"math_id": 17,
"text": "4d F_3(n)"
},
{
"math_id": 18,
"text": "s_i = 1-\\frac{1}{3}+\\frac{1}{5}-\\frac{1}{7}+ \\cdots + (-1)^{n-1}\\frac{1}{2n-1} + (-1)^n F_i(n)"
},
{
"math_id": 19,
"text": "p=2n+1"
},
{
"math_id": 20,
"text": "\\left|\\frac{\\pi}{4}-s_i(n)\\right|"
},
{
"math_id": 21,
"text": "\\begin{align}&\\begin{align}\n\\frac{1} {p^3 - p} - \\frac{1} {(p+2)^3-(p+2)} &< \\left| \\frac{\\pi}{4}-s_1(n)\\right| < \\frac{1}{p^3 - p}, \\\\[10mu]\n\\frac{4}{p^5 + 4p} - \\frac{4}{(p+2)^5+4(p+2)} &< \\left| \\frac{\\pi}{4}-s_2(n)\\right| < \\frac{4}{p^5 +4 p },\n\\end{align}\\\\[20mu]\n&\\begin{align}\n&\\frac{36}{p^7+7p^5+28p^3-36p} - \\frac{36}{(p+2)^7+7(p+2)^5+28(p+2)^3-36(p+2)} \\cdots \\\\[10mu]\n&\\phantom{\\frac{4}{p^5 + 4p} - \\frac{4}{(p+2)^5+4(p+2)}}\n< \\left| \\frac{\\pi}{4}-s_3(n)\\right| < \\frac{36}{p^7+7p^5+28p^3-36p}.\n\\end{align}\\end{align}\n\n"
},
{
"math_id": 22,
"text": "E(n) = \\pi - 4\\left( 1-\\frac{1}{3}+\\frac{1}{5}-\\frac{1}{7}+ \\cdots + (-1)^{n-1}\\frac{1}{2n-1} \\right)"
},
{
"math_id": 23,
"text": "E_i(n) = E(n) - 4\\times (-1)^n F_i(n)"
},
{
"math_id": 24,
"text": "F_1(n), F_2(n), F_3(n)"
},
{
"math_id": 25,
"text": " \\cfrac{1}{4n + \\cfrac{1}{n + \\cfrac{1}{n + \\cdots}}} "
},
{
"math_id": 26,
"text": " \\cfrac{1}{4n + \\cfrac{1^2}{ n + \\cfrac{2^2}{ 4n + \\cfrac{3^2}{ n + \\cfrac{\\cdots}{\\cdots + \\cfrac{r^2}{ n[4 - 3(r \\bmod 2)] +\\cdots}}}}}}= \\cfrac{1}{4n + \\cfrac{2^2}{4n + \\cfrac{4^2}{4n + \\cfrac{6^2}{4n + \\cfrac{8^2}{4n +\\cdots}}}}}"
},
{
"math_id": 27,
"text": "f(n)"
},
{
"math_id": 28,
"text": " \\frac{\\pi}{4} = 1 - \\frac{1}{3}+\\frac{1}{5} - \\cdots \\pm \\frac{1}{n} \\mp f(n+1)"
},
{
"math_id": 29,
"text": " f(n) = \\frac{1}{2}\\times \\cfrac{1}{ n + \\cfrac{1^2}{n + \\cfrac{2^2}{ n + \\cfrac{3^2}{n + \\cdots}}}}"
},
{
"math_id": 30,
"text": "f(2n) = \\cfrac{1}{4n + \\cfrac{2^2}{4n + \\cfrac{4^2}{4n + \\cfrac{6^2}{4n + \\cfrac{8^2}{4n +\\cdots}}}}}"
},
{
"math_id": 31,
"text": "355/113"
},
{
"math_id": 32,
"text": " S(n) = \\left| 1 - \\frac{1}{3} +\\frac{1}{5}-\\frac{1}{7}+ \\cdots +\\frac{(-1)^{n-1}}{2n-1} - \\frac{\\pi}{4}\\right|"
},
{
"math_id": 33,
"text": "\\pi=355/113,"
},
{
"math_id": 34,
"text": "S(n),"
},
{
"math_id": 35,
"text": "\\begin{alignat}{3}\nS(1) &= \\ \\ \\,\\frac{97}{452} &&= \\ \\ \\ \\frac{1}{4 + \\frac{64}{97}} &&\\approx \\frac{1}{4}, \\\\[6mu]\nS(2) &= \\ \\ \\frac{161}{1356} &&= \\ \\ \\,\\frac{1}{8 + \\frac{68}{161}} &&\\approx \\frac{1}{8}, \\\\[6mu]\nS(3) &= \\ \\ \\frac{551}{6780} &&= \\ \\,\\frac{1}{12 +\\frac{168}{551}} &&\\approx \\frac{1}{12}, \\\\[6mu]\nS(4) &= \\ \\frac{2923}{47460} &&= \\ \\frac{1}{16 +\\frac{ 692}{2923}} &&\\approx \\frac{1}{16}, \\\\[6mu]\nS(5) &= \\frac{21153}{427140} &&= \\frac{1}{20 +\\frac{ 4080}{21153}} &&\\approx \\frac{1}{20}.\n\\end{alignat}"
},
{
"math_id": 36,
"text": "S(n)"
},
{
"math_id": 37,
"text": " S(n) \\approx \\frac{1}{4n}"
},
{
"math_id": 38,
"text": "\\begin{alignat}{5}\n \\frac{64}{97} &= \\ \\,\\frac{1}{1 + \\frac{33}{64}} &&\\approx \\frac{1}{1},\n& \\frac{33}{64} &= \\,\\frac{1}{1 + \\frac{31}{33}} &&\\approx \\frac{1}{1}, \\\\[6mu]\n \\frac{68}{161} &= \\ \\,\\frac{1}{2 + \\frac{25}{68}} &&\\approx \\frac{1}{2},\n& \\frac{25}{68} &= \\,\\frac{1}{2 + \\frac{18}{25}} &&\\approx \\frac{1}{2}, \\\\[6mu]\n \\frac{168}{551} &= \\ \\frac{1}{3 + \\frac{47}{168}} &&\\approx \\frac{1}{3},\n& \\frac{47}{168} &= \\,\\frac{1}{3 + \\frac{27}{47}} &&\\approx \\frac{1}{3}, \\\\[6mu]\n\\frac{ 692}{2923} &= \\frac{1}{4 + \\frac{155}{692}} &&\\approx \\frac{1}{4},\n& \\frac{155}{692} &= \\frac{1}{4 + \\frac{72}{155}} &&\\approx \\frac{1}{4}, \\\\[6mu]\n \\frac{4080}{21153} &= \\frac{1}{5 + \\frac{753}{4080}} &&\\approx \\frac{1}{5},\n&\\quad \\frac{753}{4080} &= \\frac{1}{5 + \\frac{315}{753}} &&\\approx \\frac{1}{5}.\n\\end{alignat}"
},
{
"math_id": 39,
"text": "F_2(n),"
},
{
"math_id": 40,
"text": "S(n)\\approx \\frac{1}{4n+\\dfrac{1}{n}} = \\frac{n}{4n^2+1},"
},
{
"math_id": 41,
"text": "F_3(n),"
},
{
"math_id": 42,
"text": " S(n) \\approx \\dfrac{1}{4n + \\dfrac{1}{n+\\dfrac{1}{n}}} = \\frac{n^2+1}{n(4n^2+5)},"
}
]
| https://en.wikipedia.org/wiki?curid=72773869 |
72775401 | Optimal kidney exchange | Optimal kidney exchange (OKE) is an optimization problem faced by programs for kidney paired donations (also called Kidney Exchange Programs). Such programs have large databases of patient-donor pairs, where the donor is willing to donate a kidney in order to help the patient, but cannot do so due to medical incompatibility. The centers try to arrange exchanges between such pairs. For example, the donor in pair A donates to the patient in pair B, the donor in pair B donates to the patient in pair C, and the donor in pair C donates to the patient in pair A.
The objective of the OKE problem is to find an optimal arrangement of such exchanges. "Optimal" usually means that the number of transplants is as large as possible, but there may be other objectives. A crucial constraint in this optimization problem is that a donor gives a kidney only if his patient receives a compatible kidney, so that no pair loses a kidney from participating. This requirement is sometimes called "individual rationality".
The OKE problem has many variants, which differ in the allowed size of each exchange, the objective function, and other factors.
Definitions.
Input.
An instance of OKE is usually described as a directed graph. Every node represents a patient-donor pair. A directed arc from pair A to pair B means that the donor in pair A is medically compatible with the patient in pair B (compatibility is determined based on the blood types of the donor and patient, as well as other factors such as particular antigens in their blood). A directed cycle in the compatibility graph represents a possible exchange. A directed cycle of size 2 (e.g. A -> B -> A) represents a possible "pairwise exchange" - an exchange between a pair of pairs.
A more general variant of OKE considers also nodes of a second type, that represent "altruistic donors" - donors who are not paired to a patient, and are willing to donate a kidney to any compatible patient. Altruistic donor nodes have only outgoing arcs. With altruistic donors, it is possible to arrange exchanges not only with cycles but also with "chains", starting at an altruistic donor.
The arcs in the graph may have weights, representing e.g. the probability of success of the involved transplants. They may also have priorities, determined e.g. by medical urgency or by the time the patient has waited in the transplantation queue.
Output.
The output of an OKE is a set of pairwise-disjoint directed cycles (and possibly directed chains, if altruistic donors are available). The simplest objective in OKE is to maximize the number of patients who receive a kidney. Other common objectives are:
Unrestricted cycle length.
Initially, the problem was studied without any bound on the length of the exchange cycles. Roth, Sonmez and Unver presented a mechanism, based on an extension of the top trading cycles mechanism, for finding exchange cycles in a Pareto-optimal and incentive-compatible way.
Abraham, Blum and Sandholm show that, with unbounded cycle length, a maximum-cardinality and maximum-weight exchange can be found in polynomial time. For example, to find a maximum-cardinality exchange, given the original directed graph "G", construct an undirected bipartite graph H("X"+"Y", "E") in which:
Every maximum-cardinality exchange in "G" corresponds to a maximum-weight matching in "H." Note that the weights guarantee that every maximum-weight matching in "H" is perfect, so that every patient is matched, either to a compatible donor, or to his own donor. So no donor gives a kidney unless his patient receives a kidney, which satisfies the requirement of individual rationality.
It is easy to extend this algorithm to maximum-weight exchanges, and to incorporate altruistic donors.
Pairwise kidney exchange.
In the discussions towards implementing a kidney exchange program in New England in 2004, it was found that, logistically, only pairwise exchanges are possible. This is because all operations in an exchange must be done simultaneously. This requirement aims to ensure the "individual rationality" constraint: to avoid the risk that a donor refuses to donate after his patient has received a kidney. An exchange cycle of size "k" requires 2"k" simultaneous operations. At that time, it was not practical to arrange more than 4 simultaneous operations, so the size of cycles was limited to 2.
In this setting, it is possible to reduce the directed compatibility graph to an undirected graph, where pairs A and B are connected if and only if A->B and B->A. Finding a maximum-cardinality pairwise exchange is equivalent to finding a maximum-cardinality matching in that undirected graph. Moreover, when only pairwise exchanges are allowed, a matching is Pareto-efficient if and only if it has maximum cardinality. Therefore, such an exchange can be found in polynomial time.
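As an illustration, this reduction can be carried out with a general-purpose graph library. The following sketch, which uses the networkx package and a small hypothetical set of compatibility arcs, builds the undirected graph of mutually compatible pairs and computes a maximum matching; since all edges have unit weight, a maximum-weight matching is also a maximum-cardinality one, and each matched edge corresponds to one pairwise exchange.
```python
import networkx as nx

# Hypothetical directed compatibility arcs: (u, v) means the donor of pair u
# is compatible with the patient of pair v.
arcs = {("A", "B"), ("B", "A"), ("B", "C"), ("C", "B"), ("C", "D"), ("A", "D")}

# Undirected edge u-v if and only if both u->v and v->u are present.
H = nx.Graph()
H.add_nodes_from({u for arc in arcs for u in arc})
H.add_edges_from((u, v) for (u, v) in arcs if (v, u) in arcs and u < v)

# Each matched pair of nodes is one pairwise exchange.
matching = nx.max_weight_matching(H)
print(matching)   # e.g. {('B', 'A')}: pairs A and B exchange kidneys
```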
Roth, Sonmez and Unver study two extensions of the simple maximum-cardinality exchange:
Cycles of length k.
In later years, logistic improvements allowed the execution of larger number of simultaneous operations. Accordingly, exchange cycles involving three or more pairs were made possible. Finding a maximum-cardinality exchange is called, in graph theoretic terms, maximum cycle packing. Maximum cycle packing with cycles of length at most "k", for any fixed "k ≥ 3", is an NP-hard computational problem (this can be proved by reduction from the problem of 3-dimensional matching in a hypergraph).
Abraham, Blum and Sandholm present two techniques for maximum cycle packing: column generation and constraint generation. They report that column generation scales much better. Their algorithm has been implemented in the Alliance for Paired Kidney Donations.
Biro, Manlove and Rizzi suggest two approaches for solving this problem even when the edges have weights (in which case it is called maximum-weight cycle packing):
Cycles of length k, and chains of unbounded length.
Altruistic donors can be used to initiate a chain of exchanges, that is not a cycle. In such a chain, the operations need not be done simultaneously: it is possible to guarantee to each patient that he receives a kidney before his donor gives a kidney. If a donor defects, it breaks the chain, but it does not harm patients whose donor already gave a kidney, so it does not break individual rationality.
Anderson, Ashlagi, Gamarnik and Roth present two algorithms for finding a maximum-cardinality packing into cycles of length at most k and chains of unbounded length:
Uncertain transplants.
Early theoretic works in OKE assumed that, once a set of exchanges is determined, all of them will be executed. In practice, however, transplants might be cancelled. For example, the medical examination done just before the transplant might reveal that the donor is incompatible with the patient, even though in the database they are registered as compatible. Therefore, newer works aim to maximize the expected number of transplants. For example, Alvelos, Klimentova and Viana present a branch-and-price algorithm for this problem.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "O(3^s)"
}
]
| https://en.wikipedia.org/wiki?curid=72775401 |
727811 | Snark (graph theory) | 3-regular graph with no 3-edge-coloring
In the mathematical field of graph theory, a snark is an undirected graph with exactly three edges per vertex whose edges cannot be colored with only three colors. In order to avoid trivial cases, snarks are often restricted to have additional requirements on their connectivity and on the length of their cycles. Infinitely many snarks exist.
One of the equivalent forms of the four color theorem is that every snark is a non-planar graph. Research on snarks originated in Peter G. Tait's work on the four color theorem in 1880, but their name is much newer, given to them by Martin Gardner in 1976. Beyond coloring, snarks also have connections to other hard problems in graph theory: writing in the "Electronic Journal of Combinatorics", Miroslav Chladný and Martin Škoviera state that
As well as the problems they mention, W. T. Tutte's "snark conjecture" concerns the existence of Petersen graphs as graph minors of snarks; its proof has been long announced but remains unpublished, and would settle a special case of the existence of nowhere zero 4-flows.
History and examples.
Snarks were so named by the American mathematician Martin Gardner in 1976, after the mysterious and elusive object of the poem "The Hunting of the Snark" by Lewis Carroll. However, the study of this class of graphs is significantly older than their name. Peter G. Tait initiated the study of snarks in 1880, when he proved that the four color theorem is equivalent to the statement that no snark is planar. The first graph known to be a snark was the Petersen graph; it was proved to be a snark by Julius Petersen in 1898, although it had already been studied for a different purpose by Alfred Kempe in 1886.
The next four known snarks were
In 1975, Rufus Isaacs generalized Blanuša's method to construct two infinite families of snarks: the flower snarks and the Blanuša–Descartes–Szekeres snarks, a family that includes the two Blanuša snarks, the Descartes snark and the Szekeres snark. Isaacs also discovered a 30-vertex snark that does not belong to the Blanuša–Descartes–Szekeres family and that is not a flower snark: the double-star snark.
The 50-vertex Watkins snark was discovered in 1989.
Another notable cubic non-three-edge-colorable graph is Tietze's graph, with 12 vertices; as Heinrich Franz Friedrich Tietze discovered in 1910, it forms the boundary of a subdivision of the Möbius strip requiring six colors. However, because it contains a triangle, it is not generally considered a snark. Under strict definitions of snarks, the smallest snarks are the Petersen graph and Blanuša snarks, followed by six different 20-vertex snarks.
A list of all of the snarks up to 36 vertices (according to a strict definition), and up to 34 vertices (under a weaker definition), was generated by Gunnar Brinkmann, Jan Goedgebeur, Jonas Hägglund and Klas Markström in 2012. The number of snarks for a given even number of vertices grows at least exponentially in the number of vertices. (Because they have odd-degree vertices, all snarks must have an even number of vertices by the handshaking lemma.) OEIS sequence contains the number of non-trivial snarks of formula_0 vertices for small values of formula_1.
Definition.
The precise definition of snarks varies among authors, but generally refers to cubic graphs (having exactly three edges at each vertex) whose edges cannot be colored with only three colors. By Vizing's theorem, the number of colors needed for the edges of a cubic graph is either three ("class one" graphs) or four ("class two" graphs), so snarks are cubic graphs of class two. However, in order to avoid cases where a snark is of class two for trivial reasons, or is constructed in a trivial way from smaller graphs, additional restrictions on connectivity and cycle lengths are often imposed. In particular:
Although these definitions only consider constraints on the girth up to five, snarks with arbitrarily large girth exist.
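For small graphs, membership in class one or class two can be checked directly by exhaustive search. The following sketch is a simple backtracking test (not an efficient algorithm) that confirms the triangular prism graph is 3-edge-colorable while the Petersen graph, the smallest snark, is not.
```python
def three_edge_colorable(edges):
    """Backtracking search for a proper 3-edge-coloring of a graph given as an edge list."""
    coloring = {}

    def extend(i):
        if i == len(edges):
            return True
        e = edges[i]
        used = {coloring[f] for f in coloring if set(e) & set(f)}  # colors on adjacent edges
        for c in range(3):
            if c not in used:
                coloring[e] = c
                if extend(i + 1):
                    return True
                del coloring[e]
        return False

    return extend(0)

petersen = ([(i, (i + 1) % 5) for i in range(5)]             # outer 5-cycle
            + [(i, i + 5) for i in range(5)]                 # spokes
            + [(5 + i, 5 + (i + 2) % 5) for i in range(5)])  # inner pentagram
prism = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5), (5, 3), (0, 3), (1, 4), (2, 5)]

print(three_edge_colorable(prism))     # True  -- class one
print(three_edge_colorable(petersen))  # False -- class two, hence a snark
```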
Properties.
Work by Peter G. Tait established that the four-color theorem is true if and only if every snark is non-planar. This theorem states that every planar graph has a graph coloring of its vertices with four colors, but Tait showed how to convert 4-vertex-colorings of maximal planar graphs into 3-edge-colorings of their dual graphs, which are cubic and planar, and vice versa. A planar snark would therefore necessarily be dual to a counterexample to the four-color theorem. Thus, the subsequent proof of the four-color theorem also demonstrates that all snarks are non-planar.
All snarks are non-Hamiltonian: when a cubic graph has a Hamiltonian cycle, it is always possible to 3-color its edges, by using two colors in alternation for the cycle, and the third color for the remaining edges. However, many known snarks are close to being Hamiltonian, in the sense that they are hypohamiltonian graphs: the removal of any single vertex leaves a Hamiltonian subgraph. A hypohamiltonian snark must be "bicritical": the removal of any two vertices leaves a three-edge-colorable subgraph. The "oddness" of a cubic graph is defined as the minimum number of odd cycles, in any system of cycles that covers each vertex once (a 2-factor). For the same reason that they have no Hamiltonian cycles, snarks have positive oddness: a completely even 2-factor would lead to a 3-edge-coloring, and vice versa. It is possible to construct infinite families of snarks whose oddness grows linearly with their numbers of vertices.
The cycle double cover conjecture posits that in every bridgeless graph one can find a collection of cycles covering each edge twice, or equivalently that the graph can be embedded onto a surface in such a way that all faces of the embedding are simple cycles. When a cubic graph has a 3-edge-coloring, it has a cycle double cover consisting of the cycles formed by each pair of colors. Therefore, among cubic graphs, the snarks are the only possible counterexamples. More generally, snarks form the difficult case for this conjecture: if it is true for snarks, it is true for all graphs. In this connection, Branko Grünbaum conjectured that no snark could be embedded onto a surface in such a way that all faces are simple cycles and such that every two faces either are disjoint or share only a single edge; if any snark had such an embedding, its faces would form a cycle double cover. However, a counterexample to Grünbaum's conjecture was found by Martin Kochol.
Determining whether a given cyclically 5-connected cubic graph is 3-edge-colorable is NP-complete. Therefore, determining whether a graph is a snark is co-NP-complete.
Snark conjecture.
W. T. Tutte conjectured that every snark has the Petersen graph as a minor. That is, he conjectured that the smallest snark, the Petersen graph, may be formed from any other snark by contracting some edges and deleting others. Equivalently (because the Petersen graph has maximum degree three) every snark has a subgraph that can be formed from the Petersen graph by subdividing some of its edges. This conjecture is a strengthened form of the four color theorem, because any graph containing the Petersen graph as a minor must be nonplanar. In 1999, Neil Robertson, Daniel P. Sanders, Paul Seymour, and Robin Thomas announced a proof of this conjecture. Steps towards this result have been published in 2016 and 2019, but the complete proof remains unpublished. See the Hadwiger conjecture for other problems and results relating graph coloring to graph minors.
Tutte also conjectured a generalization to arbitrary graphs: every bridgeless graph with no Petersen minor has a nowhere zero 4-flow. That is, the edges of the graph may be assigned a direction, and a number from the set {1, 2, 3}, such that the sum of the incoming numbers minus the sum of the outgoing numbers at each vertex is divisible by four. As Tutte showed, for cubic graphs such an assignment exists if and only if the edges can be colored by three colors, so the conjecture would follow from the snark conjecture in this case. However, proving the snark conjecture would not settle the question of the existence of 4-flows for non-cubic graphs.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "2n"
},
{
"math_id": 1,
"text": "n"
}
]
| https://en.wikipedia.org/wiki?curid=727811 |
72786336 | Flattening transformation | The flattening transformation is an algorithm that transforms nested data parallelism into flat data parallelism. It was pioneered by Guy Blelloch as part of the NESL programming language. The flattening transformation is also sometimes called vectorization, but is completely unrelated to automatic vectorization. The original flattening algorithm was concerned solely with first-order multidimensional arrays containing primitive types, but was extended to handle higher-order and recursive data types in the work on Data Parallel Haskell.
Overview.
Flattening works by "lifting" functions to operate on arrays instead of on single values. For example, a function formula_0 is lifted to a function formula_1. This means an expression formula_2 can be replaced with an application of the lifted function: formula_3. Intuitively, flattening thus works by replacing all function applications with applications of the corresponding lifted function.
After flattening, arrays are represented as a single-dimensional value vector "V" containing scalar elements, alongside auxiliary information recording the nested structure, typically in the form of a boolean flag vector "F". The flag vector indicates, for the corresponding element in the value vector, whether it is the beginning of a new "segment". For example, the two-dimensional irregular array formula_4 can be represented as the data vector formula_5 alongside the flag vector formula_6.
This flag vector is necessary in order to correctly flatten nested parallelism. For example, it is used in the flattening of prefix sum to segmented scan.
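The following sketch is a sequential model of this representation and of the segmented prefix sum, using a small nested array and NumPy only for array storage; an actual flattening compiler would emit data-parallel primitives instead of Python loops.
```python
import numpy as np

def flatten(nested):
    """Return (value vector V, head-flag vector F) for a list of non-empty segments."""
    values = np.array([x for seg in nested for x in seg])
    flags = np.zeros(len(values), dtype=int)
    pos = 0
    for seg in nested:
        flags[pos] = 1            # mark the first element of each segment
        pos += len(seg)
    return values, flags

def segmented_scan(values, flags):
    """Inclusive prefix sum restarted at every segment head (the lifted prefix sum)."""
    out = np.empty_like(values)
    running = 0
    for i, (v, f) in enumerate(zip(values, flags)):
        running = v if f else running + v
        out[i] = running
    return out

A = [[1, 2, 3], [4, 5], [6], [7]]
V, F = flatten(A)
print(V)                     # [1 2 3 4 5 6 7]
print(F)                     # [1 0 0 1 0 1 1]
print(segmented_scan(V, F))  # [1 3 6 4 9 6 7] -- prefix sums within each segment
```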
Flattening can increase the asymptotic work and space complexity of the original program, leading to a much less efficient result.
Usage.
Flattening was originally developed for vector machines such as the Connection Machine, and often produces code that is not a good fit for modern multicore CPUs. However, the principles underlying its simpler cases can be found in constructs such as the codice_0 in Google Jax.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f : A \\rightarrow B"
},
{
"math_id": 1,
"text": "f' : [A] \\rightarrow [B]"
},
{
"math_id": 2,
"text": "map~f~x"
},
{
"math_id": 3,
"text": "f'~x"
},
{
"math_id": 4,
"text": "A=[[1,2,3], [4,5], [], [6]]"
},
{
"math_id": 5,
"text": "V = [1,2,3,4,5,6,7]"
},
{
"math_id": 6,
"text": "F = [1, 0, 0, 1, 0, 1, 1]"
}
]
| https://en.wikipedia.org/wiki?curid=72786336 |
727981 | Analytic capacity | In the mathematical discipline of complex analysis, the analytic capacity of a compact subset "K" of the complex plane is a number that denotes "how big" a bounded analytic function on C \ "K" can become. Roughly speaking, "γ"("K") measures the size of the unit ball of the space of bounded analytic functions outside "K".
It was first introduced by Lars Ahlfors in the 1940s while studying the removability of singularities of bounded analytic functions.
Definition.
Let "K" ⊂ C be compact. Then its analytic capacity is defined to be
formula_0
Here, formula_1 denotes the set of bounded analytic functions "U" → C, whenever "U" is an open subset of the complex plane. Further,
formula_2
formula_3
Note that formula_4, where formula_5. However, usually formula_6.
Equivalently, the analytic capacity may be defined as
formula_7
where "C" is a contour enclosing "K" and the supremum is taken over "f" satisfying the same conditions as above: "f" is bounded analytic outside "K", the bound is one, and formula_8
If "A" ⊂ C is an arbitrary set, then we define
formula_9
Removable sets and Painlevé's problem.
The compact set "K" is called removable if, whenever Ω is an open set containing "K", every function which is bounded and holomorphic on the set Ω \ "K" has an analytic extension to all of Ω. By Riemann's theorem for removable singularities, every singleton is removable. This motivated Painlevé to pose a more general question in 1880: "Which subsets of C are removable?"
It is easy to see that "K" is removable if and only if "γ"("K") = 0. However, analytic capacity is a purely complex-analytic concept, and much more work needs to be done in order to obtain a more geometric characterization.
Ahlfors function.
For each compact "K" ⊂ C, there exists a unique extremal function, i.e. formula_10 such that formula_11, "f"(∞) = 0 and "f′"(∞) = "γ"("K"). This function is called the Ahlfors function of "K". Its existence can be proved by using a normal family argument involving Montel's theorem.
Analytic capacity in terms of Hausdorff dimension.
Let dim"H" denote Hausdorff dimension and "H"1 denote 1-dimensional Hausdorff measure. Then "H"1("K") = 0 implies "γ"("K") = 0 while dim"H"("K") > 1 guarantees "γ"("K") > 0. However, the case when dim"H"("K") = 1 and "H"1("K") ∈ (0, ∞] is more difficult.
Positive length but zero analytic capacity.
Given the partial correspondence between the 1-dimensional Hausdorff measure of a compact subset of C and its analytic capacity, it might be conjectured that "γ"("K") = 0 implies "H"1("K") = 0. However, this conjecture is false. A counterexample was first given by A. G. Vitushkin, and a much simpler one by John B. Garnett in his 1970 paper. This latter example is the linear four corners Cantor set, constructed as follows:
Let "K"0 := [0, 1] × [0, 1] be the unit square. Then, "K"1 is the union of 4 squares of side length 1/4 and these squares are located in the corners of "K"0. In general, "Kn" is the union of 4"n" squares (denoted by formula_12) of side length 4−"n", each formula_12 being in the corner of some formula_13. Take "K" to be the intersection of all "K""n" then formula_14 but "γ"("K") = 0.
Vitushkin's conjecture.
Let "K" ⊂ C be a compact set. Vitushkin's conjecture states that
formula_15
where formula_16 denotes the orthogonal projection in direction θ. By the results described above, Vitushkin's conjecture is true when dim"H""K" ≠ 1.
Guy David published a proof in 1998 of Vitushkin's conjecture for the case dim"H""K" = 1 and "H"1("K") < ∞. In 2002, Xavier Tolsa proved that analytic capacity is countably semiadditive. That is, there exists an absolute constant "C" > 0 such that if "K" ⊂ C is a compact set and formula_17, where each "K""i" is a Borel set, then formula_18.
David's and Tolsa's theorems together imply that Vitushkin's conjecture is true when "K" is "H"1-sigma-finite.
In the non "H"1-sigma-finite case, Pertti Mattila proved in 1986 that the conjecture is false, but his proof did not specify which implication of the conjecture fails. Subsequent work by Jones and Muray produced an example of a set with zero Favard length and positive analytic capacity, explicitly disproving one of the directions of the conjecture. As of 2023 it is not known whether the other implication holds but some progress has been made towards a positive answer by Chang and Tolsa.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma(K) = \\sup \\{|f'(\\infty)|;\\ f\\in\\mathcal{H}^\\infty(\\mathbf{C}\\setminus K),\\ \\|f\\|_\\infty\\leq 1,\\ f(\\infty)=0\\}"
},
{
"math_id": 1,
"text": "\\mathcal{H}^\\infty (U) "
},
{
"math_id": 2,
"text": " f'(\\infty):= \\lim_{z\\to\\infty}z\\left(f(z)-f(\\infty)\\right) "
},
{
"math_id": 3,
"text": " f(\\infty):= \\lim_{z\\to\\infty}f(z) "
},
{
"math_id": 4,
"text": "f'(\\infty) = g'(0)"
},
{
"math_id": 5,
"text": "g(z) = f(1/z)"
},
{
"math_id": 6,
"text": " f'(\\infty)\\neq \\lim_{z\\to\\infty} f'(z)"
},
{
"math_id": 7,
"text": "\\gamma(K)=\\sup \\left|\\frac1{2\\pi} \\int_C f(z)dz\\right|"
},
{
"math_id": 8,
"text": "f(\\infty)=0."
},
{
"math_id": 9,
"text": "\\gamma(A) = \\sup \\{ \\gamma(K) : K \\subset A, \\, K \\text{ compact} \\}."
},
{
"math_id": 10,
"text": "f\\in\\mathcal{H}^\\infty(\\mathbf{C}\\setminus K)"
},
{
"math_id": 11,
"text": "\\|f\\|\\leq 1"
},
{
"math_id": 12,
"text": "Q_n^j"
},
{
"math_id": 13,
"text": "Q_{n-1}^k"
},
{
"math_id": 14,
"text": "H^1(K)=\\sqrt{2}"
},
{
"math_id": 15,
"text": " \\gamma(K)=0\\ \\iff \\ \\int_0^\\pi \\mathcal H^1(\\operatorname{proj}_\\theta(K)) \\, d\\theta = 0 "
},
{
"math_id": 16,
"text": "\\operatorname{proj}_\\theta(x,y) := x \\cos \\theta + y\\sin\\theta"
},
{
"math_id": 17,
"text": "K = \\bigcup_{i=1}^\\infty K_i"
},
{
"math_id": 18,
"text": "\\gamma(K) \\leq C \\sum_{i=1}^\\infty\\gamma(K_i)"
}
]
| https://en.wikipedia.org/wiki?curid=727981 |
728019 | Triaugmented triangular prism | Convex polyhedron with 14 triangle faces
The triaugmented triangular prism, in geometry, is a convex polyhedron with 14 equilateral triangles as its faces. It can be constructed from a triangular prism by attaching equilateral square pyramids to each of its three square faces. The same shape is also called the tetrakis triangular prism, tricapped trigonal prism, tetracaidecadeltahedron, or tetrakaidecadeltahedron; these last names mean a polyhedron with 14 triangular faces. It is an example of a deltahedron, composite polyhedron, and Johnson solid.
The edges and vertices of the triaugmented triangular prism form a maximal planar graph with 9 vertices and 21 edges, called the Fritsch graph. It was used by Rudolf and Gerda Fritsch to show that Alfred Kempe's attempted proof of the four color theorem was incorrect. The Fritsch graph is one of only six graphs in which every neighborhood is a 4- or 5-vertex cycle.
The dual polyhedron of the triaugmented triangular prism is an associahedron, a polyhedron with four quadrilateral faces and six pentagons whose vertices represent the 14 triangulations of a regular hexagon. In the same way, the nine vertices of the triaugmented triangular prism represent the nine diagonals of a hexagon, with two vertices connected by an edge when the corresponding two diagonals do not cross. Other applications of the triaugmented triangular prism appear in chemistry as the basis for the tricapped trigonal prismatic molecular geometry, and in mathematical optimization as a solution to the Thomson problem and Tammes problem.
Construction.
The triaugmented triangular prism is a composite polyhedron, meaning it can be constructed by attaching equilateral square pyramids to each of the three square faces of a triangular prism, a process called augmentation. These pyramids cover each square, replacing it with four equilateral triangles, so that the resulting polyhedron has 14 equilateral triangles as its faces. A polyhedron with only equilateral triangles as faces is called a deltahedron. There are only eight different convex deltahedra, one of which is the triaugmented triangular prism. More generally, the convex polyhedra in which all faces are regular polygons are called the Johnson solids, and every convex deltahedron is a Johnson solid. The triaugmented triangular prism is numbered among the Johnson solids as formula_1.
One possible system of Cartesian coordinates for the vertices of a triaugmented triangular prism, giving it edge length 2, is:
formula_2
Properties.
A triaugmented triangular prism with edge length formula_3 has surface area
formula_4
the area of 14 equilateral triangles. Its volume,
formula_5
can be derived by slicing it into a central prism and three square pyramids, and adding their volumes.
It has the same three-dimensional symmetry group as the triangular prism, the dihedral group formula_0 of order twelve. Its dihedral angles can be calculated by adding the angles of the component pyramids and prism. The prism itself has square-triangle dihedral angles formula_6 and square-square angles formula_7. The triangle-triangle angles on the pyramid are the same as in the regular octahedron, and the square-triangle angles are half that. Therefore, for the triaugmented triangular prism, the dihedral angles incident to the degree-four vertices, on the edges of the prism triangles, and on the square-to-square prism edges are, respectively,
formula_8
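These values can be checked numerically from the coordinates given above (which have edge length 2). The following sketch counts the vertex pairs at distance 2, recovering the 21 edges, and evaluates the surface area, volume, and the dihedral angle at the degree-four vertices.
```python
from math import sqrt, acos, pi, isclose

s3, s6 = sqrt(3), sqrt(6)
vertices = ([(0, 2 / s3, z) for z in (1, -1)]
            + [(sx, -1 / s3, z) for sx in (1, -1) for z in (1, -1)]
            + [(0, -(1 + s6) / s3, 0)]
            + [(sx * (1 + s6) / 2, (1 + s6) / (2 * s3), 0) for sx in (1, -1)])

def dist(p, q):
    return sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

edges = [(i, j) for i in range(9) for j in range(i + 1, 9)
         if isclose(dist(vertices[i], vertices[j]), 2.0)]
print(len(edges))                          # 21 edges, as in the Fritsch graph below

a = 2                                      # edge length of these coordinates
print(7 * sqrt(3) / 2 * a**2)              # surface area, about 24.25
print((2 * sqrt(2) + sqrt(3)) / 4 * a**3)  # volume, about 9.12
print(acos(-1 / 3) * 180 / pi)             # about 109.47 degrees at the degree-4 vertices
```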
Fritsch graph.
The graph of the triaugmented triangular prism has 9 vertices and 21 edges. It was used by Rudolf and Gerda Fritsch as a small counterexample to Alfred Kempe's false proof of the four color theorem using Kempe chains, and its dual map was used as their book's cover illustration. Therefore, this graph has subsequently been named the Fritsch graph. An even smaller counterexample, called the Soifer graph, is obtained by removing one edge from the Fritsch graph.
The Fritsch graph is one of only six connected graphs in which the neighborhood of every vertex is a cycle of length four or five. More generally, when every vertex in a graph has a cycle of length at least four as its neighborhood, the triangles of the graph automatically link up to form a topological surface called a Whitney triangulation. These six graphs come from the six Whitney triangulations that, when their triangles are equilateral, have positive angular defect at every vertex. This makes them a combinatorial analogue of the positively curved smooth surfaces. They come from six of the eight deltahedra—excluding the two that have a vertex with a triangular neighborhood. As well as the Fritsch graph, the other five are the graphs of the regular octahedron, regular icosahedron, pentagonal bipyramid, snub disphenoid, and gyroelongated square bipyramid.
Dual associahedron.
The dual polyhedron of the triaugmented triangular prism has a face for each vertex of the triaugmented triangular prism, and a vertex for each face. It is an enneahedron (that is, a nine-sided polyhedron) that can be realized with three non-adjacent square faces, and six more faces that are congruent irregular pentagons. It is also known as an order-5 associahedron, a polyhedron whose vertices represent the 14 triangulations of a regular hexagon. A less-symmetric form of this dual polyhedron, obtained by slicing a truncated octahedron into four congruent quarters by two planes that perpendicularly bisect two parallel families of its edges, is a space-filling polyhedron.
More generally, when a polytope is the dual of an associahedron, its boundary (a simplicial complex of triangles, tetrahedra, or higher-dimensional simplices) is called a "cluster complex". In the case of the triaugmented triangular prism, it is a cluster complex of type formula_9, associated with the formula_9 Dynkin diagram , the formula_9 root system, and the formula_9 cluster algebra. The connection with the associahedron provides a correspondence between the nine vertices of the triaugmented triangular prism and the nine diagonals of a hexagon. The edges of the triaugmented triangular prism correspond to pairs of diagonals that do not cross, and the triangular faces of the triaugmented triangular prism correspond to the triangulations of the hexagon (consisting of three non-crossing diagonals). The triangulations of other regular polygons correspond to polytopes in the same way, with dimension equal to the number of sides of the polygon minus three.
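This correspondence is small enough to verify by enumeration. The following sketch builds the graph whose vertices are the nine diagonals of a convex hexagon, with an edge whenever two diagonals do not cross, and counts 9 vertices, 21 edges, and 14 triangles, matching the vertices, edges, and faces of the triaugmented triangular prism.
```python
from itertools import combinations

diagonals = [(i, j) for i, j in combinations(range(6), 2) if (j - i) % 6 not in (1, 5)]

def crossing(d1, d2):
    """Two diagonals of a convex hexagon cross iff their endpoints strictly interleave."""
    (a, b), (c, d) = d1, d2
    if {a, b} & {c, d}:
        return False              # sharing an endpoint does not count as crossing
    return (a < c < b) != (a < d < b)

edges = [(d1, d2) for d1, d2 in combinations(diagonals, 2) if not crossing(d1, d2)]
triangles = [t for t in combinations(diagonals, 3)
             if all(not crossing(d1, d2) for d1, d2 in combinations(t, 2))]
print(len(diagonals), len(edges), len(triangles))   # 9 21 14
```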
Applications.
In the geometry of chemical compounds, it is common to visualize an atom cluster surrounding a central atom as a polyhedron—the convex hull of the surrounding atoms' locations. The tricapped trigonal prismatic molecular geometry describes clusters for which this polyhedron is a triaugmented triangular prism, although not necessarily one with equilateral triangle faces. For example, the lanthanides from lanthanum to dysprosium dissolve in water to form cations surrounded by nine water molecules arranged as a triaugmented triangular prism.
In the Thomson problem, concerning the minimum-energy configuration of formula_10 charged particles on a sphere, and for the Tammes problem of constructing a spherical code maximizing the smallest distance among the points, the minimum solution known for formula_11 places the points at the vertices of a triaugmented triangular prism with non-equilateral faces, inscribed in a sphere. This configuration is proven optimal for the Tammes problem, but a rigorous solution to this instance of the Thomson problem is not known.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D_{3\\mathrm{h}}"
},
{
"math_id": 1,
"text": "J_{51}"
},
{
"math_id": 2,
"text": "\\begin{align}\n\\left(0,\\frac2{\\sqrt3},\\pm1 \\right),\\qquad & \\left(\\pm1,-\\frac1{\\sqrt3},\\pm1 \\right),\\\\\n\\left(0,-\\frac{1+\\sqrt6}{\\sqrt3},0 \\right),\\qquad & \\left(\\pm\\frac{1+\\sqrt6}{2},\\frac{1+\\sqrt6}{2\\sqrt3},0\\right).\\\\\n\\end{align}"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "\\frac{7\\sqrt{3}}{2}a^2\\approx 6.062a^2,"
},
{
"math_id": 5,
"text": "\\frac{2\\sqrt{2}+\\sqrt{3}}{4}a^3\\approx 1.140a^3,"
},
{
"math_id": 6,
"text": "\\pi/2"
},
{
"math_id": 7,
"text": "\\pi/3"
},
{
"math_id": 8,
"text": "\n\\begin{align}\n\\arccos\\left(-\\frac13\\right)&\\approx 109.5^\\circ,\\\\\n\\frac{\\pi}{2}+\\frac12\\arccos\\left(-\\frac13\\right)&\\approx 144.7^\\circ,\\\\\n\\frac{\\pi}{3}+\\arccos\\left(-\\frac13\\right)&\\approx 169.5^\\circ.\\\\\n\\end{align}"
},
{
"math_id": 9,
"text": "A_3"
},
{
"math_id": 10,
"text": "n"
},
{
"math_id": 11,
"text": "n=9"
}
]
| https://en.wikipedia.org/wiki?curid=728019 |
728038 | Pentagonal bipyramid | Two pentagonal pyramids joined at the bases
In geometry, the pentagonal bipyramid (or pentagonal dipyramid) is a polyhedron with 10 triangular faces. It is constructed by attaching two pentagonal pyramids at their bases. If the triangular faces are equilateral, the pentagonal bipyramid is an example of a deltahedron, a composite polyhedron, and a Johnson solid.
The pentagonal bipyramid may be represented as a 4-connected well-covered graph. In chemistry, this polyhedron describes an atom cluster arrangement known as the pentagonal bipyramidal molecular geometry; it also arises as a solution to the Thomson problem and in decahedral nanoparticles.
Special cases.
As a right bipyramid.
Like other bipyramids, the pentagonal bipyramid can be constructed by attaching two pentagonal pyramids at their bases. These pyramids cover their pentagonal base, such that the resulting polyhedron has 10 triangles as its faces, 15 edges, and 7 vertices. The pentagonal bipyramid is said to be right if the pyramids are symmetrically regular and both of their apices are on the line passing through the base's center; otherwise, it is oblique.
Like other right bipyramids, the pentagonal bipyramid has the dihedral group formula_0 of order 20 as its three-dimensional symmetry group: the appearance is symmetrical under rotation around the axis of symmetry that passes vertically through the apices and the base's center, it has mirror symmetry relative to any bisector of the base, and it is also symmetrical when reflected across a horizontal plane. Therefore, the pentagonal bipyramid is face-transitive, or isohedral.
The pentagonal bipyramid is 4-connected, meaning that it takes the removal of four vertices to disconnect the remaining vertices. It is one of only four 4-connected simplicial well-covered polyhedra, meaning that all of the maximal independent sets of its vertices have the same size. The other three polyhedra with this property are the regular octahedron, the snub disphenoid, and an irregular polyhedron with 12 vertices and 20 triangular faces.
The dual polyhedron of a pentagonal bipyramid is the pentagonal prism. More generally, the dual polyhedron of every bipyramid is a prism, and vice versa. The pentagonal prism has 2 pentagonal faces at its bases, and the remaining 5 faces are rectangles.
As a Johnson solid.
If the pyramids are regular, then all edges of the pentagonal bipyramid are equal in length, making its faces equilateral triangles. A polyhedron with only equilateral triangles as faces is called a deltahedron. There are only eight different convex deltahedra, one of which is the pentagonal bipyramid with regular faces. More generally, a convex polyhedron in which all faces are regular polygons is a Johnson solid, and every convex deltahedron is a Johnson solid. The pentagonal bipyramid with regular faces is numbered among the Johnson solids as formula_1, the thirteenth Johnson solid. It is an example of a composite polyhedron, because it is constructed by attaching two regular pentagonal pyramids.
A pentagonal bipyramid's surface area formula_2 is the total area of its 10 equilateral triangle faces, and its volume formula_3 can be obtained by slicing it into two pentagonal pyramids and adding their volumes. In the case of edge length formula_4, they are:
formula_5
The dihedral angle of a pentagonal bipyramid with regular faces can be calculated by adding the angles of its pentagonal pyramids. The dihedral angle of a pentagonal pyramid between two adjacent triangles is approximately 138.2°, and that between the triangular face and the base is 37.4°. Therefore, the dihedral angle of a pentagonal bipyramid with regular faces between two adjacent triangular faces, on the edge where the two pyramids are attached, is approximately 74.8°.
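The quoted angles can be reproduced numerically from coordinates of a regular pentagonal pyramid with unit edge length, as in the following sketch; the dihedral along an edge is computed from the components of the two adjacent faces perpendicular to that edge.
```python
import numpy as np

R = 1 / (2 * np.sin(np.pi / 5))                  # base circumradius for edge length 1
h = np.sqrt(1 - R**2)                            # apex height
base = [np.array([R * np.cos(2 * np.pi * k / 5),
                  R * np.sin(2 * np.pi * k / 5), 0.0]) for k in range(5)]
apex = np.array([0.0, 0.0, h])

def dihedral(p, q, a, b):
    """Dihedral angle (degrees) along edge p-q between the faces containing a and b."""
    e = (q - p) / np.linalg.norm(q - p)
    u = (a - p) - np.dot(a - p, e) * e           # component of p->a perpendicular to the edge
    v = (b - p) - np.dot(b - p, e) * e
    return np.degrees(np.arccos(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))))

print(dihedral(base[0], base[1], apex, base[2]))      # about 37.4: lateral face to base
print(dihedral(apex, base[1], base[0], base[2]))      # about 138.2: adjacent lateral faces
print(2 * dihedral(base[0], base[1], apex, base[2]))  # about 74.8: equatorial edge of the bipyramid
```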
Applications.
In the geometry of chemical compounds, the pentagonal bipyramid can be used as the atom cluster surrounding an atom. The pentagonal bipyramidal molecular geometry describes clusters for which this polyhedron is a pentagonal bipyramid. An example of such a cluster is iodine heptafluoride in the gas phase.
The Thomson problem concerns the minimum-energy configuration of charged particles on a sphere. A known solution for the case of seven electrons places the particles at the vertices of a pentagonal bipyramid inscribed in a sphere.
Pentagonal bipyramids and related five-fold shapes are found in decahedral nanoparticles, which can also be macroscopic in size when they are also called fiveling cyclic twins in mineralogy.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " D_{5\\mathrm{h}} "
},
{
"math_id": 1,
"text": " J_{13} "
},
{
"math_id": 2,
"text": " A "
},
{
"math_id": 3,
"text": " V "
},
{
"math_id": 4,
"text": " a "
},
{
"math_id": 5,
"text": " \\begin{align}\n A &= \\frac{5\\sqrt{3}}{2}a^2 &\\approx 4.3301a^2, \\\\\n V &= \\frac{5 + \\sqrt{5}}{12}a^3 &\\approx 0.603a^3.\n\\end{align} "
}
]
| https://en.wikipedia.org/wiki?curid=728038 |
72805267 | Graphical time warping | Graphical time warping
Graphical time warping (GTW) is a framework for jointly aligning multiple pairs of time series or sequences. GTW considers both the alignment accuracy of each sequence pair and the similarity among pairs. In contrast, alignment with dynamic time warping (DTW) considers the pairs independently and minimizes only the distance between the two sequences in a given pair. Therefore, GTW generalizes DTW and could achieve a better alignment performance when similarity among pairs is expected.
One application of GTW is signal propagation analysis in time-lapse bio-imaging data, where the propagation patterns in adjacent pixels are generally similar. Other applications include signature identification, binocular stereo depth calculation, and liquid chromatography–mass spectrometry (LC-MS) profile alignment in proteomics data analysis. Indeed, as long as the data are structured with inter-dependent time series/sequences, they can be analyzed with GTW.
GTW is able to model constraints or similarities between warping paths by transforming the DTW-equivalent shortest path problem to the maximum flow problem in the dual graph, which can be solved by most max-flow algorithms. However, when the data is large, these algorithms become time-consuming and the memory usage is high. An efficient algorithm, Bidirectional pushing with Linear Component Operations (BILCO), was developed to solve the GTW problem. It could achieve an average 10-fold improvement in both running time and memory usage compared with state-of-the-art generic maximum flow algorithms in GTW applications.
Joint alignment and GTW formulation.
Joint alignment.
Assume there are formula_0 pairs of time series formula_1, and each pair formula_2 has a corresponding warping path formula_3. Some pairs of warping paths are known to be similar, and the set of all such pairs is denoted as formula_4. For example, if formula_5 is in this set, warping paths formula_6 and formula_3 are similar. To optimize both the similarity between the aligned time series and the warping paths distances, the joint alignment problem is formulated as a minimization problem:
formula_7
Here formula_8 denotes the distance between formula_9 and formula_10 after alignment with warping function formula_3, formula_11 is the distance between warping paths formula_6 and formula_3 defined by the area of the region bounded by formula_6 and formula_3, and formula_12 is a hyperparameter balancing the time series alignment cost term and warping function distance term.
Notice that the similarity strength can be application-specific or user-designed. For each pair of related warping paths formula_5, a different parameter formula_13 can be set. For simplicity, the unified hyperparameter formula_12 is used here.
The above minimization problem is intuitively formulated. However, it is not clear how to efficiently solve it in its original form, and a naïve enumeration of the warping paths leads to an NP-hard problem.
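For reference, the per-pair subproblem that GTW couples together is ordinary DTW, which can be solved independently for each pair by dynamic programming. The following sketch (using absolute differences as the point-wise cost, an assumption made for illustration) computes the alignment cost and warping path of a single pair; GTW additionally penalizes disagreement between the warping paths of related pairs, which this independent solver ignores.
```python
import numpy as np

def dtw(x, y):
    """Return (alignment cost, warping path) for one sequence pair."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(x[i - 1] - y[j - 1]) + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Trace the warping path back from (n, m); it is monotone and continuous.
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        i, j = min([(i - 1, j), (i, j - 1), (i - 1, j - 1)], key=lambda s: D[s])
    path.append((0, 0))
    return D[n, m], path[::-1]

cost, path = dtw(np.array([0.0, 1.0, 2.0, 1.0, 0.0]), np.array([0.0, 0.0, 1.0, 2.0, 0.0]))
print(cost, path)
```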
GTW formulation.
This minimization problem can be reformulated into a minimum cut problem on a special graph termed GTW graph, where the minimum cut and the warping paths are equivalent. The formulation could be described as:
Explanation of the equivalence.
Each GTW subgraph formula_15 is the dual graph of the DTW graph representing the alignment of a single time-series pair. As a result, the cut within a GTW subgraph is dual to a warping path in the DTW graph, and the profile alignment cost term can be represented by the cut cost within subgraphs. The infinite capacities of reverse edges are used to guarantee the monotonicity and continuity of warping paths.
Cross edges constrain the similarity of warping paths and contribute to the distance term in the objective function. Notice that in a minimum cut problem, the nodes would eventually be assigned to the source side or the sink side, and the final cut is defined by the edges between the two sides. Each pair of mismatched nodes in formula_16 and formula_15 contributes to the distance between formula_6 and formula_3 and would result in an extra formula_17 cost. Thus, the distance term could be represented by the cut cost in cross edges.
Therefore, the cut cost in the GTW graph corresponds to the cost terms in the objective function. Recalling the fact that the cut within each subgraph corresponds to the warping path of one time-series pair, the minimum cut of GTW graph corresponds to the optimal solution of warping paths in the joint alignment.
Extension.
Neighbor-wise Compound-specific Graphical Time Warping (ncGTW).
In multiple sequence alignment, the purpose is to align all sequences to a common reference. However, this common reference is usually unknown. In addition, there is also structural information among the sequences. Though GTW cannot be directly applied in these applications, a two-stage framework called ncGTW was built upon GTW to solve this problem. In the first stage, the prior structural knowledge among the sequences is utilized to obtain the warping functions. In the second stage, these warping functions help to jointly align all sequences to a virtual reference, which does not need to be explicitly specified. ncGTW was applied to LC-MS profile alignment problems in proteomics data and performed better than existing approaches.
Efficient algorithm.
Bidirectional pushing with Linear Component Operations (BILCO).
Solving the minimum cut problem on the GTW graph with traditional maximum flow algorithms would take a long running time and large memory usage due to the large graph size, which limits the usage of GTW. The BILCO algorithm utilizes two important properties of the joint alignment problem and achieves an average 10-fold improvement in both running time and memory usage. The two properties are:
According to the first property, BILCO divides the flow exchange into two types: (1) flow exchange within a GTW subgraph; (2) flow exchange across related GTW subgraphs. The process can be analogized to pumping water between connected water tanks, and the two types of flow exchange are termed "Drain" and "Discharge". To fully utilize this property, components (each component is a connected subset of a GTW subgraph), rather than single nodes, are used as the operation unit. Both "Drain" and "Discharge" component operations can be implemented in linear time.
The second property inspires the bidirectional-pushing strategy. In this strategy, BILCO first segments the graph into two parts using the initial approximate solution, and then pushes excess/deficit in the obtained sink/source parts, respectively. Compared with existing push-relabel-based maximum flow algorithms, BILCO significantly reduces redundant computation. It is worth noting that such a strategy could also be utilized to help accelerate other push-relabel-based algorithms.
Applications.
Signal propagation analysis.
In time-lapse bio-imaging data, signal propagation is a widely observed phenomenon in many cell types. Studying signal propagation may help uncover the function of these cells in both normal and pathological conditions. The propagation information could be derived from the warping paths by aligning pixels’ curves with a reference signal. Due to the low signal-to-noise ratio in bio-imaging data, pairwise alignment methods usually lead to unsatisfactory results. Considering the spatial correlation of the signals, the similarity of warping paths between adjacent pixels can be utilized in GTW to enhance the alignment performance, which may lead to a more accurate calculation of propagation properties.
Depth extraction.
In binocular stereo images, alignment techniques can be used to extract depth information. The depth can be derived from the disparity between corresponding rows of the left and right images. Since the depths of adjacent rows should be similar, GTW can be utilized to enhance the extraction result.
Signature identification.
A signature usually contains multiple feature sequences, such as the x location, the y location, and the pressure. Those feature sequences are correlated, which indicates that when comparing two signatures, the distance measure obtained by pairwise alignment is not optimal. GTW could take the dependency between features into account and provide a better distance measure.
Biological sequence alignment.
In a biological sequence data set, it is common that there is some structural information among the sequences. In LC-MS data, the samples of nearby profiles tend to have similar patterns of distortion, and GTW was extended to jointly align these profiles. The same technique may also be applied to the joint alignment of other sequences. Structural information between sequences also exists in DNA and amino acid data. For example, sequences from closely related species are more similar than sequences from more remotely related species. This information could be utilized by GTW.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\{(x_n,y_n)|n = 1,2,...,N\\}"
},
{
"math_id": 2,
"text": "{x_n,y_n}"
},
{
"math_id": 3,
"text": "P_n"
},
{
"math_id": 4,
"text": "{(m,n)}"
},
{
"math_id": 5,
"text": "(m,n)"
},
{
"math_id": 6,
"text": "P_m"
},
{
"math_id": 7,
"text": "\\min_{\\{P_n| n = 1,2,...,N\\}} \\left( \\sum_{n=1}^N cost(P_n) + \\kappa \\sum_{(m,n)\\in Neib} dist (P_m,P_n) \\right)"
},
{
"math_id": 8,
"text": "cost(P_n)"
},
{
"math_id": 9,
"text": "x_n"
},
{
"math_id": 10,
"text": "y_n"
},
{
"math_id": 11,
"text": "dist(P_m,P_n)"
},
{
"math_id": 12,
"text": "\\kappa"
},
{
"math_id": 13,
"text": "\\kappa_(m,n)"
},
{
"math_id": 14,
"text": "n_th"
},
{
"math_id": 15,
"text": "G^n"
},
{
"math_id": 16,
"text": "G^m"
},
{
"math_id": 17,
"text": "\\kappa/2"
}
]
| https://en.wikipedia.org/wiki?curid=72805267 |
7280707 | Variable-length code | Code which maps information to a variable number of bits
In coding theory, a variable-length code is a code which maps source symbols to a "variable" number of bits. The equivalent concept in computer science is "bit string".
Variable-length codes can allow sources to be compressed and decompressed with "zero" error (lossless data compression) and still be read back symbol by symbol. With the right coding strategy an independent and identically-distributed source may be compressed almost arbitrarily close to its entropy. This is in contrast to fixed-length coding methods, for which data compression is only possible for large blocks of data, and any compression beyond the logarithm of the total number of possibilities comes with a finite (though perhaps arbitrarily small) probability of failure.
Some examples of well-known variable-length coding strategies are Huffman coding, Lempel–Ziv coding, arithmetic coding, and context-adaptive variable-length coding.
Codes and their extensions.
The extension of a code is the mapping of finite length source sequences to finite length bit strings, that is obtained by concatenating for each symbol of the source sequence the corresponding codeword produced by the original code.
Using terms from formal language theory, the precise mathematical definition is as follows: Let formula_0 and formula_1 be two finite sets, called the source and target alphabets, respectively. A code formula_2 is a total function mapping each symbol from formula_0 to a sequence of symbols over formula_1, and the extension of formula_3 to a homomorphism of formula_4 into formula_5, which naturally maps each sequence of source symbols to a sequence of target symbols, is referred to as its extension.
Classes of variable-length codes.
Variable-length codes can be strictly nested in order of decreasing generality as non-singular codes, uniquely decodable codes and prefix codes. Prefix codes are always uniquely decodable, and these in turn are always non-singular:
Non-singular codes.
A code is non-singular if each source symbol is mapped to a different non-empty bit string, i.e. the mapping from source symbols to bit strings is injective.
Uniquely decodable codes.
A code is uniquely decodable if its extension is non-singular. Whether a given code is uniquely decodable can be decided with the Sardinas–Patterson algorithm.
Prefix codes.
A code is a prefix code if no target bit string in the mapping is a prefix of the target bit string of a different source symbol in the same mapping. This means that symbols can be decoded instantaneously after their entire codeword is received. Other commonly used names for this concept are prefix-free code, instantaneous code, or context-free code.
Example of encoding and decoding:
aabacdab → 00100110111010 → |0|0|10|0|110|111|0|10| → aabacdab
A special case of prefix codes are block codes. Here all codewords must have the same length. The latter are not very useful in the context of source coding, but often serve as forward error correction in the context of channel coding.
Another special case of prefix codes are LEB128 and variable-length quantity (VLQ) codes, which encode arbitrarily large integers as a sequence of octets—i.e., every codeword is a multiple of 8 bits.
Advantages.
The advantage of a variable-length code is that unlikely source symbols can be assigned longer codewords and likely source symbols can be assigned shorter codewords, thus giving a low "expected" codeword length. For the above example, if the probabilities of (a, b, c, d) were formula_11, the expected number of bits used to represent a source symbol using the code above would be:
formula_12.
As the entropy of this source is 1.75 bits per symbol, this code compresses the source as much as possible so that the source can be recovered with "zero" error.
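The following sketch reproduces the example above using the prefix code it implies (a → 0, b → 10, c → 110, d → 111): it encodes, decodes greedily codeword by codeword, and evaluates the expected codeword length for the given probabilities.
```python
code = {"a": "0", "b": "10", "c": "110", "d": "111"}
inverse = {w: s for s, w in code.items()}

def encode(msg):
    return "".join(code[s] for s in msg)

def decode(bits):
    out, buf = [], ""
    for bit in bits:
        buf += bit
        if buf in inverse:        # prefix property: a complete codeword is decodable instantly
            out.append(inverse[buf])
            buf = ""
    return "".join(out)

bits = encode("aabacdab")
print(bits)                       # 00100110111010
print(decode(bits))               # aabacdab

p = {"a": 1 / 2, "b": 1 / 4, "c": 1 / 8, "d": 1 / 8}
print(sum(p[s] * len(code[s]) for s in code))   # 1.75 expected bits per source symbol
```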
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "S"
},
{
"math_id": 1,
"text": "T"
},
{
"math_id": 2,
"text": "C: S \\to T^*"
},
{
"math_id": 3,
"text": "C"
},
{
"math_id": 4,
"text": "S^*"
},
{
"math_id": 5,
"text": "T^*"
},
{
"math_id": 6,
"text": "M_1 = \\{\\, a\\mapsto 0, b\\mapsto 0, c\\mapsto 1\\,\\}"
},
{
"math_id": 7,
"text": "M_2 = \\{\\, a \\mapsto 1, b \\mapsto 011, c\\mapsto 01110, d\\mapsto 1110, e\\mapsto 10011, f\\mapsto0\\}"
},
{
"math_id": 8,
"text": "M_3 = \\{\\, a\\mapsto 0, b\\mapsto 01, c\\mapsto 011\\,\\}"
},
{
"math_id": 9,
"text": "M_2"
},
{
"math_id": 10,
"text": "M_3"
},
{
"math_id": 11,
"text": "\\textstyle\\left(\\frac{1}{2}, \\frac{1}{4}, \\frac{1}{8}, \\frac{1}{8}\\right)"
},
{
"math_id": 12,
"text": "1\\times\\frac{1}{2}+2\\times\\frac{1}{4}+3\\times\\frac{1}{8}+3\\times\\frac{1}{8}=\\frac{7}{4}"
}
]
| https://en.wikipedia.org/wiki?curid=7280707 |
72808068 | Parameterized approximation algorithm | Type of algorithm
A parameterized approximation algorithm is a type of algorithm that aims to find approximate solutions to NP-hard optimization problems in polynomial time in the input size and a function of a specific parameter. These algorithms are designed to combine the best aspects of both traditional approximation algorithms and fixed-parameter tractability.
In traditional approximation algorithms, the goal is to find solutions that are at most a certain factor α away from the optimal solution, known as an α-approximation, in polynomial time. On the other hand, parameterized algorithms are designed to find exact solutions to problems, but with the constraint that the running time of the algorithm is polynomial in the input size and a function of a specific parameter k. The parameter describes some property of the input and is small in typical applications. The problem is said to be fixed-parameter tractable (FPT) if there is an algorithm that can find the optimum solution in formula_0 time, where formula_1 is a function independent of the input size n.
A parameterized approximation algorithm aims to find a balance between these two approaches by finding approximate solutions in FPT time: the algorithm computes an α-approximation in formula_0 time, where formula_1 is a function independent of the input size n. This approach aims to overcome the limitations of both traditional approaches by having stronger guarantees on the solution quality compared to traditional approximations while still having efficient running times as in FPT algorithms. An overview of the research area studying parameterized approximation algorithms can be found in the survey of Marx and the more recent survey by Feldmann et al.
Obtainable approximation ratios.
The full potential of parameterized approximation algorithms is utilized when a given optimization problem is shown to admit an α-approximation algorithm running in formula_0 time, while in contrast the problem neither has a polynomial-time α-approximation algorithm (under some complexity assumption, e.g., formula_2), nor an FPT algorithm for the given parameter k (i.e., it is at least W[1]-hard).
For example, some problems that are APX-hard and W[1]-hard admit a parameterized approximation scheme (PAS), i.e., for any formula_3 a formula_4-approximation can be computed in formula_5 time for some functions f and g. This then circumvents the lower bounds in terms of polynomial-time approximation and fixed-parameter tractability. A PAS is similar in spirit to a polynomial-time approximation scheme (PTAS) but additionally exploits a given parameter k. Since the degree of the polynomial in the runtime of a PAS depends on a function formula_6, the value of formula_7 is assumed to be arbitrary but constant in order for the PAS to run in FPT time. If this assumption is unsatisfying, formula_7 is treated as a parameter as well to obtain an efficient parameterized approximation scheme (EPAS), which for any formula_3 computes a formula_4-approximation in formula_8 time for some function f. This is similar in spirit to an efficient polynomial-time approximation scheme (EPTAS).
"k"-cut.
The "k"-cut problem has no polynomial-time formula_9-approximation algorithm for any formula_3, assuming formula_2 and the small set expansion hypothesis. It is also W[1]-hard parameterized by the number k of required components. However an EPAS exists, which computes a formula_4-approximation in formula_10 time.
Steiner Tree.
The Steiner Tree problem is FPT parameterized by the number of terminals. However, for the "dual" parameter consisting of the number k of non-terminals contained in the optimum solution, the problem is W[2]-hard (due to a folklore reduction from the Dominating Set problem). Steiner Tree is also known to be APX-hard. However, there is an EPAS computing a formula_4-approximation in formula_11 time. The more general Steiner Forest problem is NP-hard on graphs of treewidth 3. However, on graphs of treewidth t an EPAS can compute a formula_4-approximation in formula_12 time.
Strongly-connected Steiner subgraph.
It is known that the Strongly Connected Steiner Subgraph problem is W[1]-hard parameterized by the number k of terminals, and also does not admit an formula_13-approximation in polynomial time (under standard complexity assumptions). However a 2-approximation can be computed in formula_14 time. Furthermore, this is best possible, since no formula_9-approximation can be computed in formula_0 time for any function f, under Gap-ETH.
"k"-median and "k"-means.
For the well-studied metric clustering problems of "k"-median and "k"-means parameterized by the number k of centers, it is known that no formula_15-approximation for k-Median and no formula_16-approximation for k-Means can be computed in formula_0 time for any function f, under Gap-ETH. Matching parameterized approximation algorithms exist, but it is not known whether matching approximations can be computed in polynomial time.
Clustering is often considered in settings of low dimensional data, and thus a practically relevant parameterization is by the dimension of the underlying metric. In the Euclidean space, the k-Median and k-Means problems admit an EPAS parameterized by the dimension d, and also an EPAS parameterized by k. The former was generalized to an EPAS for the parameterization by the doubling dimension. For the loosely related highway dimension parameter, only an approximation scheme with XP runtime is known to date.
"k"-center.
For the metric "k"-center problem a 2-approximation can be computed in polynomial time. However, when parameterizing by either the number k of centers, the doubling dimension (in fact the dimension of a Manhattan metric), or the highway dimension, no parameterized formula_9-approximation algorithm exists, under standard complexity assumptions. Furthermore, the k-Center problem is W[1]-hard even on planar graphs when simultaneously parameterizing it by the number k of centers, the doubling dimension, the highway dimension, and the pathwidth. However, when combining k with the doubling dimension an EPAS exists, and the same is true when combining k with the highway dimension. For the more general version with vertex capacities, an EPAS exists for the parameterization by k and the doubling dimension, but not when using k and the highway dimension as the parameter. Regarding the pathwidth, k-Center admits an EPAS even for the more general treewidth parameter, and also for cliquewidth.
Densest subgraph.
An optimization variant of the "k"-Clique problem is the Densest "k"-Subgraph problem (which is a 2-ary Constraint Satisfaction problem), where the task is to find a subgraph on k vertices with the maximum number of edges. It is not hard to obtain a formula_17-approximation by just picking a matching of size formula_18 in the given input graph, since the maximum number of edges on k vertices is always at most formula_19. This is also asymptotically optimal, since under Gap-ETH no formula_20-approximation can be computed in FPT time parameterized by k.
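A minimal sketch of this matching-based formula_17-approximation, assuming the input graph contains at least formula_18 pairwise disjoint edges (the function name and the edge-list representation are illustrative choices):

```python
def densest_k_subgraph_by_matching(edges, k):
    """Greedily pick floor(k/2) pairwise disjoint edges; their endpoints span at least
    floor(k/2) edges, while any k vertices span at most k*(k-1)/2 edges, which yields
    the (k-1)-approximation described above (pad with arbitrary vertices if fewer
    than k endpoints are returned)."""
    matched = set()
    for u, v in edges:
        if len(matched) >= 2 * (k // 2):
            break
        if u != v and u not in matched and v not in matched:
            matched.update((u, v))
    return matched

# Example: a 5-cycle with k = 4 returns the endpoints of two disjoint edges.
print(densest_k_subgraph_by_matching([(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)], 4))
```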
Dominating set.
For the Dominating set problem it is W[1]-hard to compute any formula_21-approximation in formula_0 time for any functions g and f.
Approximate kernelization.
Kernelization is a technique used in fixed-parameter tractability to pre-process an instance of an NP-hard problem in order to remove "easy parts" and reveal the NP-hard core of the instance. A kernelization algorithm takes an instance I and a parameter k, and returns a new instance formula_22 with parameter formula_23 such that the size of formula_22 and formula_23 is bounded as a function of the input parameter k, and the algorithm runs in polynomial time. An α-approximate kernelization algorithm is a variation of this technique that is used in parameterized approximation algorithms. It returns a kernel formula_22 such that any β-approximation in formula_22 can be converted into an αβ-approximation to the input instance I in polynomial time. This notion was introduced by Lokshtanov et al., but there are other related notions in the literature such as Turing kernels and α-fidelity kernelization.
As for regular (non-approximate) kernels, a problem admits an α-approximate kernelization algorithm if and only if it has a parameterized α-approximation algorithm. The proof of this fact is very similar to the one for regular kernels. However the guaranteed approximate kernel might be of exponential size (or worse) in the input parameter. Hence it becomes interesting to find problems that admit polynomial sized approximate kernels. Furthermore, a polynomial-sized approximate kernelization scheme (PSAKS) is an α-approximate kernelization algorithm that computes a polynomial-sized kernel and for which α can be set to formula_24 for any formula_3.
For example, while the Connected Vertex Cover problem is FPT parameterized by the solution size, it does not admit a (regular) polynomial sized kernel (unless formula_25), but a PSAKS exists. Similarly, the Steiner Tree problem is FPT parameterized by the number of terminals, does not admit a polynomial sized kernel (unless formula_25), but a PSAKS exists. When parameterizing Steiner Tree by the number of non-terminals in the optimum solution, the problem is W[2]-hard (and thus admits no exact kernel at all, unless FPT=W[2]), but still admits a PSAKS.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(k)n^{O(1)}"
},
{
"math_id": 1,
"text": "f(k)"
},
{
"math_id": 2,
"text": "\\mathsf{P}\\neq \\mathsf{NP}"
},
{
"math_id": 3,
"text": "\\varepsilon>0"
},
{
"math_id": 4,
"text": "(1+\\varepsilon)"
},
{
"math_id": 5,
"text": "f(k,\\varepsilon)n^{g(\\varepsilon)}"
},
{
"math_id": 6,
"text": "g(\\varepsilon)"
},
{
"math_id": 7,
"text": "\\varepsilon"
},
{
"math_id": 8,
"text": "f(k,\\varepsilon)n^{O(1)}"
},
{
"math_id": 9,
"text": "(2-\\varepsilon)"
},
{
"math_id": 10,
"text": "(k/\\varepsilon)^{O(k)}n^{O(1)}"
},
{
"math_id": 11,
"text": "2^{O(k^2/\\varepsilon^4)}n^{O(1)}"
},
{
"math_id": 12,
"text": "2^{O(\\frac{t^2}{\\varepsilon}\\log \\frac{t}{\\varepsilon})}n^{O(1)}"
},
{
"math_id": 13,
"text": "O(\\log^{2-\\varepsilon} n)"
},
{
"math_id": 14,
"text": "3^{k}n^{O(1)}"
},
{
"math_id": 15,
"text": "(1+2/e-\\varepsilon)"
},
{
"math_id": 16,
"text": "(1+8/e-\\varepsilon)"
},
{
"math_id": 17,
"text": "(k-1)"
},
{
"math_id": 18,
"text": "k/2"
},
{
"math_id": 19,
"text": "{k \\choose 2}= k(k-1)/2"
},
{
"math_id": 20,
"text": "k^{1-o(1)}"
},
{
"math_id": 21,
"text": "g(k)"
},
{
"math_id": 22,
"text": "I'"
},
{
"math_id": 23,
"text": "k'"
},
{
"math_id": 24,
"text": "1+\\varepsilon"
},
{
"math_id": 25,
"text": "\\textsf{NP}\\subseteq \\textsf{coNP/poly}"
}
]
| https://en.wikipedia.org/wiki?curid=72808068 |
728168 | Monstrous moonshine | Unexpected connection in group theory
In mathematics, monstrous moonshine, or moonshine theory, is the unexpected connection between the monster group "M" and modular functions, in particular, the "j" function. The initial numerical observation was made by John McKay in 1978, and the phrase was coined by John Conway and Simon P. Norton in 1979.
The monstrous moonshine is now known to be underlain by a vertex operator algebra called the moonshine module (or monster vertex algebra) constructed by Igor Frenkel, James Lepowsky, and Arne Meurman in 1988, which has the monster group as its group of symmetries. This vertex operator algebra is commonly interpreted as a structure underlying a two-dimensional conformal field theory, allowing physics to form a bridge between two mathematical areas. The conjectures made by Conway and Norton were proven by Richard Borcherds for the moonshine module in 1992 using the no-ghost theorem from string theory and the theory of vertex operator algebras and generalized Kac–Moody algebras.
History.
In 1978, John McKay found that the first few terms in the Fourier expansion of the normalized J-invariant (sequence in the OEIS) could be expressed in terms of linear combinations of the dimensions of the irreducible representations formula_0 of the monster group "M" (sequence in the OEIS) with "small" non-negative coefficients. The J-invariant is
formula_1
with formula_2 and "τ" as the half-period ratio, and the "M" expressions, letting formula_0 = 1, 196883, 21296876, 842609326, 18538750076, 19360062527, 293553734298, ..., are
formula_3
The LHS are the coefficients of formula_4, while in the RHS the integers formula_0 are the dimensions of irreducible representations of the monster group "M". (Since there can be several linear relations between the formula_0 such as formula_5, the decomposition may be written in more than one way.)
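The first few of these decompositions can be checked mechanically; the following short Python sketch uses only the dimensions formula_0 listed above:

```python
# Dimensions of the first few irreducible representations of the monster group, as listed above.
r = [None, 1, 196883, 21296876, 842609326, 18538750076, 19360062527, 293553734298]

# Coefficients of the normalized J-invariant and their decompositions displayed above.
assert 1 == r[1]
assert 196884 == r[1] + r[2]
assert 21493760 == r[1] + r[2] + r[3]
assert 864299970 == 2*r[1] + 2*r[2] + r[3] + r[4]
assert 20245856256 == 3*r[1] + 3*r[2] + r[3] + 2*r[4] + r[5]
assert 20245856256 == 2*r[1] + 3*r[2] + 2*r[3] + r[4] + r[6]
assert 333202640600 == 5*r[1] + 5*r[2] + 2*r[3] + 3*r[4] + 2*r[5] + r[7]
assert 333202640600 == 4*r[1] + 5*r[2] + 3*r[3] + 2*r[4] + r[5] + r[6] + r[7]
print("all listed decompositions hold")
```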
McKay viewed this as evidence that there is a naturally occurring infinite-dimensional graded representation of "M", whose graded dimension is given by the coefficients of "J", and whose lower-weight pieces decompose into irreducible representations as above. After he informed John G. Thompson of this observation, Thompson suggested that because the graded dimension is just the graded trace of the identity element, the graded traces of nontrivial elements "g" of "M" on such a representation may be interesting as well.
Conway and Norton computed the lower-order terms of such graded traces, now known as McKay–Thompson series "T""g", and found that all of them appeared to be the expansions of Hauptmoduln. In other words, if "G""g" is the subgroup of SL2(R) which fixes "T""g", then the quotient of the upper half of the complex plane by "G""g" is a sphere with a finite number of points removed, and furthermore, "T""g" generates the field of meromorphic functions on this sphere.
Based on their computations, Conway and Norton produced a list of Hauptmoduln, and conjectured the existence of an infinite dimensional graded representation of "M", whose graded traces "T""g" are the expansions of precisely the functions on their list.
In 1980, A.O.L. Atkin, Paul Fong and Stephen D. Smith produced strong computational evidence that such a graded representation exists, by decomposing a large number of coefficients of "J" into representations of "M". A graded representation whose graded dimension is "J", called the moonshine module, was explicitly constructed by Igor Frenkel, James Lepowsky, and Arne Meurman, giving an effective solution to the McKay–Thompson conjecture, and they also determined the graded traces for all elements in the centralizer of an involution of "M", partially settling the Conway–Norton conjecture. Furthermore, they showed that the vector space they constructed, called the Moonshine Module formula_6, has the additional structure of a vertex operator algebra, whose automorphism group is precisely "M".
In 1985, the Atlas of Finite Groups was published by a group of mathematicians, including John Conway. The Atlas, which enumerates all sporadic groups, included "Moonshine" as a section in its list of notable properties of the monster group.
Borcherds proved the Conway–Norton conjecture for the Moonshine Module in 1992. He won the Fields Medal in 1998 in part for his solution of the conjecture.
The moonshine module.
The Frenkel–Lepowsky–Meurman construction starts with two main tools:
Frenkel, Lepowsky, and Meurman then showed that the automorphism group of the moonshine module, as a vertex operator algebra, is "M". Furthermore, they determined that the graded traces of elements in the subgroup 21+24."Co"1 match the functions predicted by Conway and Norton ().
Borcherds' proof.
Richard Borcherds' proof of the conjecture of Conway and Norton can be broken into the following major steps:
Thus, the proof is completed (). Borcherds was later quoted as saying "I was over the moon when I proved the moonshine conjecture", and "I sometimes wonder if this is the feeling you get when you take certain drugs. I don't actually know, as I have not tested this theory of mine."
More recent work has simplified and clarified the last steps of the proof. Jurisich (, ) found that the homology computation could be substantially shortened by replacing the usual triangular decomposition of the Monster Lie algebra with a decomposition into a sum of "gl"2 and two free Lie algebras. Cummins and Gannon showed that the recursion relations automatically imply the McKay-Thompson series are either Hauptmoduln or terminate after at most 3 terms, thus eliminating the need for computation at the last step.
Generalized moonshine.
Conway and Norton suggested in their 1979 paper that perhaps moonshine is not limited to the monster, but that similar phenomena may be found for other groups. While Conway and Norton's claims were not very specific, computations by Larissa Queen in 1980 strongly suggested that one can construct the expansions of many Hauptmoduln from simple combinations of dimensions of irreducible representations of sporadic groups. In particular, she decomposed the coefficients of McKay-Thompson series into representations of subquotients of the Monster in the following cases:
Queen found that the traces of non-identity elements also yielded "q"-expansions of Hauptmoduln, some of which were not McKay–Thompson series from the Monster. In 1987, Norton combined Queen's results with his own computations to formulate the Generalized Moonshine conjecture. This conjecture asserts that there is a rule that assigns to each element "g" of the monster, a graded vector space "V"("g"), and to each commuting pair of elements ("g", "h") a holomorphic function "f"("g", "h", τ) on the upper half-plane, such that:
This is a generalization of the Conway–Norton conjecture, because Borcherds's theorem concerns the case where "g" is set to the identity.
Like the Conway–Norton conjecture, Generalized Moonshine also has an interpretation in physics, proposed by Dixon–Ginsparg–Harvey in 1988 (). They interpreted the vector spaces "V"("g") as twisted sectors of a conformal field theory with monster symmetry, and interpreted the functions "f"("g", "h", τ) as genus one partition functions, where one forms a torus by gluing along twisted boundary conditions. In mathematical language, the twisted sectors are irreducible twisted modules, and the partition functions are assigned to elliptic curves with principal monster bundles, whose isomorphism type is described by monodromy along a basis of 1-cycles, i.e., a pair of commuting elements.
Modular moonshine.
In the early 1990s, the group theorist A. J. E. Ryba discovered remarkable similarities between parts of the character table of the monster, and Brauer characters of certain subgroups. In particular, for an element "g" of prime order "p" in the monster, many irreducible characters of an element of order "kp" whose "k"th power is "g" are simple combinations of Brauer characters for an element of order "k" in the centralizer of "g". This was numerical evidence for a phenomenon similar to monstrous moonshine, but for representations in positive characteristic. In particular, Ryba conjectured in 1994 that for each prime factor "p" in the order of the monster, there exists a graded vertex algebra over the finite field F"p" with an action of the centralizer of an order "p" element "g", such that the graded Brauer character of any "p"-regular automorphism "h" is equal to the McKay-Thompson series for "gh" ().
In 1996, Borcherds and Ryba reinterpreted the conjecture as a statement about Tate cohomology of a self-dual integral form of formula_6. This integral form was not known to exist, but they constructed a self-dual form over Z[1/2], which allowed them to work with odd primes "p". The Tate cohomology for an element of prime order naturally has the structure of a super vertex algebra over F"p", and they broke up the problem into an easy step equating graded Brauer super-trace with the McKay-Thompson series, and a hard step showing that Tate cohomology vanishes in odd degree. They proved the vanishing statement for small odd primes, by transferring a vanishing result from the Leech lattice (). In 1998, Borcherds showed that vanishing holds for the remaining odd primes, using a combination of Hodge theory and an integral refinement of the no-ghost theorem (, ).
The case of order 2 requires the existence of a form of formula_6 over a 2-adic ring, i.e., a construction that does not divide by 2, and this was not known to exist at the time. There remain many additional unanswered questions, such as how Ryba's conjecture should generalize to Tate cohomology of composite order elements, and the nature of any connections to generalized moonshine and other moonshine phenomena.
Conjectured relationship with quantum gravity.
In 2007, E. Witten suggested that AdS/CFT correspondence yields a duality between pure quantum gravity in (2 + 1)-dimensional anti de Sitter space and extremal holomorphic CFTs. Pure gravity in 2 + 1 dimensions has no local degrees of freedom, but when the cosmological constant is negative, there is nontrivial content in the theory, due to the existence of BTZ black hole solutions. Extremal CFTs, introduced by G. Höhn, are distinguished by a lack of Virasoro primary fields in low energy, and the moonshine module is one example.
Under Witten's proposal (), gravity in AdS space with maximally negative cosmological constant is AdS/CFT dual to a holomorphic CFT with central charge "c=24", and the partition function of the CFT is precisely "j"-744, i.e., the graded character of the moonshine module. By assuming Frenkel-Lepowsky-Meurman's conjecture that moonshine module is the unique holomorphic VOA with central charge 24 and character "j"-744, Witten concluded that pure gravity with maximally negative cosmological constant is dual to the monster CFT. Part of Witten's proposal is that Virasoro primary fields are dual to black-hole-creating operators, and as a consistency check, he found that in the large-mass limit, the Bekenstein-Hawking semiclassical entropy estimate for a given black hole mass agrees with the logarithm of the corresponding Virasoro primary multiplicity in the moonshine module. In the low-mass regime, there is a small quantum correction to the entropy, e.g., the lowest energy primary fields yield ln(196883) ~ 12.19, while the Bekenstein–Hawking estimate gives 4π ~ 12.57.
Later work has refined Witten's proposal. Witten had speculated that the extremal CFTs with larger cosmological constant may have monster symmetry much like the minimal case, but this was quickly ruled out by independent work of Gaiotto and Höhn. Work by Witten and Maloney () suggested that pure quantum gravity may not satisfy some consistency checks related to its partition function, unless some subtle properties of complex saddles work out favorably. However, Li–Song–Strominger () have suggested that a chiral quantum gravity theory proposed by Manschot in 2007 may have better stability properties, while being dual to the chiral part of the monster CFT, i.e., the monster vertex algebra. Duncan–Frenkel () produced additional evidence for this duality by using Rademacher sums to produce the McKay–Thompson series as (2 + 1)-dimensional gravity partition functions by a regularized sum over global torus-isogeny geometries. Furthermore, they conjectured the existence of a family of twisted chiral gravity theories parametrized by elements of the monster, suggesting a connection with generalized moonshine and gravitational instanton sums. At present, all of these ideas are still rather speculative, in part because 3d quantum gravity does not have a rigorous mathematical foundation.
Mathieu moonshine.
In 2010, Tohru Eguchi, Hirosi Ooguri, and Yuji Tachikawa observed that the elliptic genus of a K3 surface can be decomposed into characters of the "N" = (4, 4) superconformal algebra, such that the multiplicities of massive states appear to be simple combinations of irreducible representations of the Mathieu group M24. This suggests that there is a sigma-model conformal field theory with K3 target that carries M24 symmetry. However, by the Mukai–Kondo classification, there is no faithful action of this group on any K3 surface by symplectic automorphisms, and by work of Gaberdiel–Hohenegger–Volpato, there is no faithful action on any K3 sigma-model conformal field theory, so the appearance of an action on the underlying Hilbert space is still a mystery.
By analogy with McKay–Thompson series, Cheng suggested that both the multiplicity functions and the graded traces of nontrivial elements of M24 form mock modular forms. In 2012, Gannon proved that all but the first of the multiplicities are non-negative integral combinations of representations of M24, and Gaberdiel–Persson–Ronellenfitsch–Volpato computed all analogues of generalized moonshine functions, strongly suggesting that some analogue of a holomorphic conformal field theory lies behind Mathieu moonshine. Also in 2012, Cheng, Duncan, and Harvey amassed numerical evidence of an umbral moonshine phenomenon where families of mock modular forms appear to be attached to Niemeier lattices. The special case of the Niemeier lattice whose root system consists of 24 copies of "A"1 yields Mathieu Moonshine, but in general the phenomenon does not yet have an interpretation in terms of geometry.
Origin of the term.
The term "monstrous moonshine" was coined by Conway, who, when told by John McKay in the late 1970s that the coefficient of formula_11 (namely 196884) was precisely one more than the degree of the smallest faithful complex representation of the monster group (namely 196883), replied that this was "moonshine" (in the sense of being a crazy or foolish idea). Thus, the term not only refers to the monster group "M"; it also refers to the perceived craziness of the intricate relationship between "M" and the theory of modular functions.
Related observations.
The monster group was investigated in the 1970s by mathematicians Jean-Pierre Serre, Andrew Ogg and John G. Thompson; they studied the quotient of the hyperbolic plane by subgroups of SL2(R), particularly, the normalizer Γ0("p")+ of the Hecke congruence subgroup Γ0("p") in SL(2,R). They found that the Riemann surface resulting from taking the quotient of the hyperbolic plane by Γ0("p")+ has genus zero exactly for "p" = 2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 41, 47, 59 or 71. When Ogg heard about the monster group later on, and noticed that these were precisely the prime factors of the size of "M", he published a paper offering a bottle of Jack Daniel's whiskey to anyone who could explain this fact ().
These 15 primes are known as the supersingular primes, not to be confused with the use of the same phrase with a different meaning in algebraic number theory.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "r_n"
},
{
"math_id": 1,
"text": "J(\\tau) = \\frac{1}{{q}}+ 744 + 196884{q} + 21493760{q}^2 + 864299970{q}^3 + 20245856256{q}^4 + \\cdots"
},
{
"math_id": 2,
"text": "{q} = e^{2\\pi i\\tau}"
},
{
"math_id": 3,
"text": "\\begin{align}\n1 & = r_1 \\\\\n196884 & = r_1 + r_2 \\\\\n21493760 & = r_1 + r_2 + r_3 \\\\\n864299970 & = 2r_1 + 2r_2 + r_3 + r_4 \\\\\n20245856256 & = 3r_1 + 3r_2 + r_3 + 2r_4 + r_5 = 2r_1+ 3r_2 + 2r_3 + r_4 + r_6\\\\\n333202640600 & = 5r_1 + 5r_2 + 2r_3 + 3r_4 + 2r_5 + r_7 = 4r_1 + 5r_2 + 3r_3 + 2r_4 + r_5 + r_6 + r_7\\\\\n\\end{align}"
},
{
"math_id": 4,
"text": "j(\\tau)"
},
{
"math_id": 5,
"text": "r_1 - r_3 + r_4 + r_5 - r_6 = 0"
},
{
"math_id": 6,
"text": "V^\\natural"
},
{
"math_id": 7,
"text": "\\mathfrak{m}"
},
{
"math_id": 8,
"text": "(\\begin{smallmatrix} a & b \\\\ c & d \\end{smallmatrix}) \\in \\operatorname{SL}_2(\\mathbf{Z})"
},
{
"math_id": 9,
"text": "f(g, h, \\tfrac{a\\tau + b}{c\\tau + d})"
},
{
"math_id": 10,
"text": "f(g^a h^c, g^b h^d, \\tau)"
},
{
"math_id": 11,
"text": "{q}"
}
]
| https://en.wikipedia.org/wiki?curid=728168 |
728209 | Half-period ratio | Elliptic functions
In mathematics, the half-period ratio τ of an elliptic function is the ratio
formula_0
of the two half-periods formula_1 and formula_2 of the elliptic function, where the elliptic function is defined in such a way that
formula_3
is in the upper half-plane.
Quite often in the literature, ω1 and ω2 are defined to be the periods of an elliptic function rather than its half-periods. Regardless of the choice of notation, the ratio ω2/ω1 of periods is identical to the ratio (ω2/2)/(ω1/2) of half-periods. Hence, the period ratio is the same as the "half-period ratio".
Note that the half-period ratio can be thought of as a simple number, namely, one of the parameters to elliptic functions, or it can be thought of as a function itself, because the half periods can be given in terms of the elliptic modulus or in terms of the nome.
See the pages on quarter period and elliptic integrals for additional definitions and relations on the arguments and parameters to elliptic functions.
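As a small illustration of the convention, the following Python sketch (with arbitrary sample half-periods, not taken from any particular elliptic function) forms the ratio and relabels the periods if the result does not lie in the upper half-plane:

```python
# Arbitrary illustrative half-periods omega_1/2 and omega_2/2.
half_omega1 = complex(2.0, 0.0)
half_omega2 = complex(0.5, 1.5)

tau = half_omega2 / half_omega1   # equals (omega_2/2)/(omega_1/2) = omega_2/omega_1

# The convention requires Im(tau) > 0; if Im(tau) < 0, swapping the two periods fixes the sign.
if tau.imag < 0:
    half_omega1, half_omega2 = half_omega2, half_omega1
    tau = half_omega2 / half_omega1

print(tau, tau.imag > 0)          # (0.25+0.75j) True
```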
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tau = \\frac{\\omega_2}{\\omega_1}"
},
{
"math_id": 1,
"text": "\\frac{\\omega_1}{2}"
},
{
"math_id": 2,
"text": "\\frac{\\omega_2}{2}"
},
{
"math_id": 3,
"text": "\\Im(\\tau) > 0"
}
]
| https://en.wikipedia.org/wiki?curid=728209 |
72824 | Boundary (topology) | All points not part of the interior of a subset of a topological space
In topology and mathematics in general, the boundary of a subset S of a topological space X is the set of points in the closure of S not belonging to the interior of S. An element of the boundary of S is called a boundary point of S. The term boundary operation refers to finding or taking the boundary of a set. Notations used for boundary of a set S include formula_0 and formula_1.
Some authors (for example Willard, in "General Topology") use the term frontier instead of boundary in an attempt to avoid confusion with a different definition used in algebraic topology and the theory of manifolds. Despite widespread acceptance of the meaning of the terms boundary and frontier, they have sometimes been used to refer to other sets. For example, "Metric Spaces" by E. T. Copson uses the term boundary to refer to Hausdorff's border, which is defined as the intersection of a set with its boundary. Hausdorff also introduced the term residue, which is defined as the intersection of a set with the closure of the border of its complement.
Definitions.
There are several equivalent definitions for the boundary of a subset formula_2 of a topological space formula_3 which will be denoted by formula_4 formula_5 or simply formula_1 if formula_6 is understood: it is the set of points in the closure of formula_7 not belonging to the interior of formula_7; equivalently, it is the intersection of the closure of formula_7 with the closure of its complement; and it is also the set of points formula_10 such that every neighborhood of formula_11 contains at least one point of formula_7 and at least one point not in formula_7.
A boundary point of a set is any element of that set's boundary. The boundary formula_12 defined above is sometimes called the set's topological boundary to distinguish it from other similarly named notions such as the boundary of a manifold with boundary or the boundary of a manifold with corners, to name just a few examples.
A connected component of the boundary of S is called a boundary component of S.
Properties.
The closure of a set formula_7 equals the union of the set with its boundary:
formula_13
where formula_8 denotes the closure of formula_7 in formula_9
A set is closed if and only if it contains its boundary, and open if and only if it is disjoint from its boundary. The boundary of a set is closed; this follows from the formula formula_14 which expresses formula_12 as the intersection of two closed subsets of formula_9
("Trichotomy") Given any subset formula_15 each point of formula_6 lies in exactly one of the three sets formula_16 and formula_17 Said differently, formula_18 and these three sets are pairwise disjoint. Consequently, if these set are not empty then they form a partition of formula_9
A point formula_10 is a boundary point of a set if and only if every neighborhood of formula_11 contains at least one point in the set and at least one point not in the set.
The boundary of the interior of a set as well as the boundary of the closure of a set are both contained in the boundary of the set.
"Conceptual Venn diagram showing the relationships among different points of a subset formula_7 of formula_19 formula_20 = set of accumulation points of formula_7 (also called limit points), formula_21 set of boundary points of formula_22 area shaded green = set of interior points of formula_22 area shaded yellow = set of isolated points of formula_22 areas shaded black = empty sets. Every point of formula_7 is either an interior point or a boundary point. Also, every point of formula_7 is either an accumulation point or an isolated point. Likewise, every boundary point of formula_7 is either an accumulation point or an isolated point. Isolated points are always boundary points."
Examples.
Characterizations and general examples.
A set and its complement have the same boundary:
formula_23
A set formula_24 is a dense open subset of formula_6 if and only if formula_25
The interior of the boundary of a closed set is empty.
Consequently, the interior of the boundary of the closure of a set is empty.
The interior of the boundary of an open set is also empty.
Consequently, the interior of the boundary of the interior of a set is empty.
In particular, if formula_2 is a closed or open subset of formula_6 then there does not exist any nonempty subset formula_26 such that formula_24 is open in formula_9
This fact is important for the definition and use of nowhere dense subsets, meager subsets, and Baire spaces.
A set is the boundary of some open set if and only if it is closed and nowhere dense.
The boundary of a set is empty if and only if the set is both closed and open (that is, a clopen set).
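These characterizations can be checked mechanically on a finite topological space. The following Python sketch uses a small hypothetical space and topology (chosen only for illustration) and computes boundaries as the intersection of the closure of a set with the closure of its complement:

```python
# A hypothetical finite topological space, chosen only for illustration.
X = frozenset({1, 2, 3, 4})
opens = [frozenset(), frozenset({1}), frozenset({1, 2}), X]   # the open sets of a topology on X
closed = [X - U for U in opens]                               # their complements, the closed sets

def closure(S):
    """Smallest closed set containing S."""
    return frozenset.intersection(*[C for C in closed if S <= C])

def boundary(S):
    """Topological boundary of S in X: the closure of S intersected with the closure of X \\ S."""
    return closure(S) & closure(X - S)

S = frozenset({2})
print(sorted(boundary(S)))                     # [2, 3, 4] in this particular topology
assert boundary(S) == boundary(X - S)          # a set and its complement have the same boundary
assert closure(S) == S | boundary(S)           # the closure is the union of the set with its boundary
assert boundary(boundary(S)) <= boundary(S)    # taking the boundary again cannot give a larger set
```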
Concrete examples.
Consider the real line formula_27 with the usual topology (that is, the topology whose basis sets are open intervals) and formula_28 the subset of rational numbers (whose topological interior in formula_27 is empty). Then formula_29, formula_30, formula_31, and formula_32.
These last two examples illustrate the fact that the boundary of a dense set with empty interior is its closure. They also show that it is possible for the boundary formula_1 of a subset formula_7 to contain a non-empty open subset of formula_33; that is, for the interior of formula_1 in formula_6 to be non-empty. However, a closed subset's boundary always has an empty interior.
In the space of rational numbers with the usual topology (the subspace topology of formula_27), the boundary of formula_34 where formula_35 is irrational, is empty.
The boundary of a set is a topological notion and may change if one changes the topology. For example, given the usual topology on formula_36 the boundary of a closed disk formula_37 is the disk's surrounding circle: formula_38 If the disk is viewed as a set in formula_39 with its own usual topology, that is, formula_40 then the boundary of the disk is the disk itself: formula_41 If the disk is viewed as its own topological space (with the subspace topology of formula_42), then the boundary of the disk is empty.
Boundary of an open ball vs. its surrounding sphere.
This example demonstrates that the topological boundary of an open ball of radius formula_43 is not necessarily equal to the corresponding sphere of radius formula_44 (centered at the same point); it also shows that the closure of an open ball of radius formula_43 is not necessarily equal to the closed ball of radius formula_44 (again centered at the same point).
Denote the usual Euclidean metric on formula_42 by
formula_45
which induces on formula_42 the usual Euclidean topology.
Let formula_46 denote the union of the formula_47-axis formula_48 with the unit circle formula_49 centered at the origin formula_50; that is, formula_51 which is a topological subspace of formula_42 whose topology is equal to that induced by the (restriction of) the metric formula_52
In particular, the sets formula_53 and formula_54 are all closed subsets of formula_42 and thus also closed subsets of its subspace formula_9
Henceforth, unless clearly indicated otherwise, every open ball, closed ball, and sphere should be assumed to be centered at the origin formula_55 and moreover, only the metric space formula_56 will be considered (and not its superspace formula_57); this being a path-connected and locally path-connected complete metric space.
Denote the open ball of radius formula_43 in formula_56 by
formula_58
so that when formula_59 then
formula_60
is the open sub-interval of the formula_47-axis strictly between formula_61 and formula_62
The unit sphere in formula_56 ("unit" meaning that its radius is formula_59) is
formula_63
while the closed unit ball in formula_56 is the union of the open unit ball and the unit sphere centered at this same point:
formula_64
However, the topological boundary formula_65 and topological closure formula_66 in formula_6 of the open unit ball formula_67 are:
formula_68
In particular, the open unit ball's topological boundary formula_69 is a proper subset of the unit sphere formula_70 in formula_71
And the open unit ball's topological closure formula_72 is a proper subset of the closed unit ball formula_73 in formula_71
The point formula_74 for instance, cannot belong to formula_66 because there does not exist a sequence in formula_75 that converges to it; the same reasoning generalizes to also explain why no point in formula_6 outside of the closed sub-interval formula_54 belongs to formula_76 Because the topological boundary of the set formula_67 is always a subset of formula_67's closure, it follows that formula_65 must also be a subset of formula_77
In any metric space formula_78 the topological boundary in formula_79 of an open ball of radius formula_43 centered at a point formula_80 is always a subset of the sphere of radius formula_44 centered at that same point formula_81; that is,
formula_82
always holds.
Moreover, the unit sphere in formula_56 contains formula_83 which is an open subset of formula_9 This shows, in particular, that the unit sphere formula_84 in formula_56 contains a non-empty open subset of formula_9
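The membership claims above can also be illustrated numerically. The following sketch approximates formula_67 by a dense sample of the open segment (the grid resolution is an arbitrary choice) and estimates the distance from a given point of formula_6 to it; a point belongs to the closure exactly when this distance can be made arbitrarily small:

```python
import numpy as np

# Dense sample of the open segment B_1 = {0} x (-1, 1); the grid resolution is an arbitrary choice.
t = np.linspace(-1.0, 1.0, 20001)[1:-1]            # endpoints excluded, since the interval is open
ball = np.stack([np.zeros_like(t), t], axis=1)

def dist_to_B1(p):
    """Euclidean distance from the point p to the sampled copy of B_1."""
    return float(np.min(np.linalg.norm(ball - np.asarray(p, dtype=float), axis=1)))

print(dist_to_B1((0.0, 1.0)))   # ~0.0001: (0, 1) is a limit of points of B_1, so it lies in the closure
print(dist_to_B1((1.0, 0.0)))   # ~1.0: (1, 0) stays at distance 1 from B_1, so it is not in the closure
```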
Boundary of a boundary.
For any set formula_85 where formula_86 denotes the superset relation, with equality holding if and only if the boundary of formula_7 has no interior points, which will be the case for example if formula_7 is either closed or open. Since the boundary of a set is closed, formula_87 for any set formula_88 The boundary operator thus satisfies a weakened kind of idempotence.
In discussing boundaries of manifolds or simplexes and their simplicial complexes, one often meets the assertion that the boundary of the boundary is always empty. Indeed, the construction of the singular homology rests critically on this fact. The explanation for the apparent incongruity is that the topological boundary (the subject of this article) is a slightly different concept from the boundary of a manifold or of a simplicial complex. For example, the boundary of an open disk viewed as a manifold is empty, as is its topological boundary viewed as a subset of itself, while its topological boundary viewed as a subset of the real plane is the circle surrounding the disk. Conversely, the boundary of a closed disk viewed as a manifold is the bounding circle, as is its topological boundary viewed as a subset of the real plane, while its topological boundary viewed as a subset of itself is empty. In particular, the topological boundary depends on the ambient space, while the boundary of a manifold is invariant.
Notes.
<templatestyles src="Reflist/styles.css" />
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\operatorname{bd}(S), \\operatorname{fr}(S),"
},
{
"math_id": 1,
"text": "\\partial S"
},
{
"math_id": 2,
"text": "S \\subseteq X"
},
{
"math_id": 3,
"text": "X,"
},
{
"math_id": 4,
"text": "\\partial_X S,"
},
{
"math_id": 5,
"text": "\\operatorname{Bd}_X S,"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "S"
},
{
"math_id": 8,
"text": "\\overline{S} = \\operatorname{cl}_X S"
},
{
"math_id": 9,
"text": "X."
},
{
"math_id": 10,
"text": "p \\in X"
},
{
"math_id": 11,
"text": "p"
},
{
"math_id": 12,
"text": "\\partial_X S"
},
{
"math_id": 13,
"text": "\\overline{S} = S \\cup \\partial_X S"
},
{
"math_id": 14,
"text": "\\partial_X S ~:=~ \\overline{S} \\cap \\overline{(X \\setminus S)},"
},
{
"math_id": 15,
"text": "S \\subseteq X,"
},
{
"math_id": 16,
"text": "\\operatorname{int}_X S, \\partial_X S,"
},
{
"math_id": 17,
"text": "\\operatorname{int}_X (X \\setminus S)."
},
{
"math_id": 18,
"text": "X ~=~ \\left(\\operatorname{int}_X S\\right) \\;\\cup\\; \\left(\\partial_X S\\right) \\;\\cup\\; \\left(\\operatorname{int}_X (X \\setminus S)\\right)"
},
{
"math_id": 19,
"text": "\\R^n."
},
{
"math_id": 20,
"text": "A"
},
{
"math_id": 21,
"text": "B = "
},
{
"math_id": 22,
"text": "S,"
},
{
"math_id": 23,
"text": "\\partial_X S = \\partial_X (X \\setminus S)."
},
{
"math_id": 24,
"text": "U"
},
{
"math_id": 25,
"text": "\\partial_X U = X \\setminus U."
},
{
"math_id": 26,
"text": "U \\subseteq \\partial_X S"
},
{
"math_id": 27,
"text": "\\R"
},
{
"math_id": 28,
"text": "\\Q,"
},
{
"math_id": 29,
"text": "\\partial (0,5) = \\partial [0,5) = \\partial (0,5] = \\partial [0,5] = \\{0, 5\\}"
},
{
"math_id": 30,
"text": "\\partial \\varnothing= \\varnothing"
},
{
"math_id": 31,
"text": "\\partial \\Q = \\R"
},
{
"math_id": 32,
"text": "\\partial (\\Q \\cap [0, 1]) = [0, 1]"
},
{
"math_id": 33,
"text": "X := \\R"
},
{
"math_id": 34,
"text": "(-\\infty, a),"
},
{
"math_id": 35,
"text": "a"
},
{
"math_id": 36,
"text": "\\R^2,"
},
{
"math_id": 37,
"text": "\\Omega = \\left\\{(x, y) : x^2 + y^2 \\leq 1 \\right\\}"
},
{
"math_id": 38,
"text": "\\partial \\Omega = \\left\\{(x, y) : x^2 + y^2 = 1 \\right\\}."
},
{
"math_id": 39,
"text": "\\R^3"
},
{
"math_id": 40,
"text": "\\Omega = \\left\\{(x, y, 0) : x^2 + y^2 \\leq 1 \\right\\},"
},
{
"math_id": 41,
"text": "\\partial \\Omega = \\Omega."
},
{
"math_id": 42,
"text": "\\R^2"
},
{
"math_id": 43,
"text": "r > 0"
},
{
"math_id": 44,
"text": "r"
},
{
"math_id": 45,
"text": "d((a, b), (x, y)) := \\sqrt{(x - a)^2 + (y - b)^2}"
},
{
"math_id": 46,
"text": "X \\subseteq \\R^2"
},
{
"math_id": 47,
"text": "y"
},
{
"math_id": 48,
"text": "Y := \\{ 0 \\} \\times \\R"
},
{
"math_id": 49,
"text": "S^1 := \\left\\{ p \\in \\R^2 : d(p, \\mathbf{0}) = 1 \\right\\} = \\left\\{ (x, y) \\in \\R^2 : x^2 + y^2 = 1 \\right\\}"
},
{
"math_id": 50,
"text": "\\mathbf{0} := (0, 0) \\in \\R^2"
},
{
"math_id": 51,
"text": "X := Y \\cup S^1,"
},
{
"math_id": 52,
"text": "d."
},
{
"math_id": 53,
"text": "Y, S^1, Y \\cap S^1 = \\{ (0, \\pm 1) \\},"
},
{
"math_id": 54,
"text": "\\{ 0 \\} \\times [-1, 1]"
},
{
"math_id": 55,
"text": "\\mathbf{0} = (0, 0)"
},
{
"math_id": 56,
"text": "(X, d)"
},
{
"math_id": 57,
"text": "(\\R^2, d)"
},
{
"math_id": 58,
"text": "B_r := \\left\\{ p \\in X : d(p, \\mathbf{0}) < r \\right\\}"
},
{
"math_id": 59,
"text": "r = 1"
},
{
"math_id": 60,
"text": "B_1 = \\{ 0 \\} \\times (-1, 1)"
},
{
"math_id": 61,
"text": "y = -1"
},
{
"math_id": 62,
"text": "y = 1."
},
{
"math_id": 63,
"text": "\\left\\{ p \\in X : d(p, \\mathbf{0}) = 1 \\right\\} = S^1"
},
{
"math_id": 64,
"text": "\\left\\{ p \\in X : d(p, \\mathbf{0}) \\leq 1 \\right\\} = S^1 \\cup \\left(\\{ 0 \\} \\times [-1, 1]\\right)."
},
{
"math_id": 65,
"text": "\\partial_X B_1"
},
{
"math_id": 66,
"text": "\\operatorname{cl}_X B_1"
},
{
"math_id": 67,
"text": "B_1"
},
{
"math_id": 68,
"text": "\\partial_X B_1 = \\{ (0, 1), (0, -1) \\} \\quad \\text{ and } \\quad \\operatorname{cl}_X B_1 ~=~ B_1 \\cup \\partial_X B_1 ~=~ B_1 \\cup\\{ (0, 1), (0, -1) \\} ~=~\\{ 0 \\} \\times [-1, 1]."
},
{
"math_id": 69,
"text": "\\partial_X B_1 = \\{ (0, 1), (0, -1) \\}"
},
{
"math_id": 70,
"text": "\\left\\{ p \\in X : d(p, \\mathbf{0}) = 1 \\right\\} = S^1"
},
{
"math_id": 71,
"text": "(X, d)."
},
{
"math_id": 72,
"text": "\\operatorname{cl}_X B_1 = B_1 \\cup \\{ (0, 1), (0, -1) \\}"
},
{
"math_id": 73,
"text": "\\left\\{ p \\in X : d(p, \\mathbf{0}) \\leq 1 \\right\\} = S^1 \\cup \\left(\\{ 0 \\} \\times [-1, 1]\\right)"
},
{
"math_id": 74,
"text": "(1, 0) \\in X,"
},
{
"math_id": 75,
"text": "B_1 = \\{ 0 \\} \\times (-1, 1)"
},
{
"math_id": 76,
"text": "\\operatorname{cl}_X B_1."
},
{
"math_id": 77,
"text": "\\{ 0 \\} \\times [-1, 1]."
},
{
"math_id": 78,
"text": "(M, \\rho),"
},
{
"math_id": 79,
"text": "M"
},
{
"math_id": 80,
"text": "c \\in M"
},
{
"math_id": 81,
"text": "c"
},
{
"math_id": 82,
"text": "\\partial_M \\left(\\left\\{ m \\in M : \\rho(m, c) < r \\right\\}\\right) ~\\subseteq~ \\left\\{ m \\in M : \\rho(m, c)= r \\right\\}"
},
{
"math_id": 83,
"text": "X \\setminus Y = S^1 \\setminus \\{ (0, \\pm 1) \\},"
},
{
"math_id": 84,
"text": "\\left\\{ p \\in X : d(p, \\mathbf{0}) = 1 \\right\\}"
},
{
"math_id": 85,
"text": "S, \\partial S \\supseteq \\partial\\partial S,"
},
{
"math_id": 86,
"text": "\\,\\supseteq\\,"
},
{
"math_id": 87,
"text": "\\partial \\partial S = \\partial \\partial \\partial S"
},
{
"math_id": 88,
"text": "S."
}
]
| https://en.wikipedia.org/wiki?curid=72824 |
72824901 | Gauss composition law | In mathematics, specifically in number theory, the Gauss composition law is a rule, invented by Carl Friedrich Gauss, for performing a binary operation on integral binary quadratic forms (IBQFs). Gauss presented this rule in his "Disquisitiones Arithmeticae", a textbook on number theory published in 1801, in Articles 234–244. The Gauss composition law is one of the deepest results in the theory of IBQFs, and Gauss's formulation of the law and his proofs of its properties are generally considered highly complicated and difficult. Several later mathematicians have simplified the formulation of the composition law and have presented it in a format suitable for numerical computations. The concept has also found generalisations in several directions.
Integral binary quadratic forms.
An expression of the form formula_0, where formula_1 are all integers, is called an integral binary quadratic form (IBQF). The form formula_2 is called a primitive IBQF if formula_3 are relatively prime. The quantity formula_4 is called the discriminant of the IBQF formula_2. An integer formula_5 is the discriminant of some IBQF if and only if formula_6. formula_5 is called a fundamental discriminant if and only if one of the following statements holds: either formula_7 and formula_5 is square-free, or formula_8 where formula_9 and formula_10 is square-free.
If formula_11 and formula_12 then formula_2 is said to be positive definite; if formula_11 and formula_13 then formula_2 is said to be negative definite; if formula_14 then formula_2 is said to be indefinite.
Equivalence of IBQFs.
Two IBQFs formula_15 and formula_16 are said to be equivalent (or, properly equivalent) if there exist integers α, β, γ, δ such that
formula_17 and formula_18
The notation formula_19 is used to denote the fact that the two forms are equivalent. The relation "formula_20" is an equivalence relation in the set of all IBQFs. The equivalence class to which the IBQF formula_15 belongs is denoted by formula_21.
Two IBQFs formula_15 and formula_16 are said to be improperly equivalent if
formula_22 and formula_18
The relation in the set of IBQFs of being improperly equivalent is also an equivalence relation.
It can be easily seen that equivalent IBQFs (properly or improperly) have the same discriminant.
Gauss's formulation of the composition law.
Historical context.
The following identity, called the Brahmagupta identity, was known to the Indian mathematician Brahmagupta (598–668), who used it to calculate successively better fractional approximations to square roots of positive integers:
formula_23
Writing formula_24 this identity can be put in the form
formula_25 where formula_26.
Gauss's composition law of IBQFs generalises this identity to an identity of the form formula_27 where formula_28 are all IBQFs and formula_29 are linear combinations of the products formula_30.
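The Brahmagupta identity itself is easy to confirm by a brute-force check over small integers, as in the following purely illustrative Python sketch:

```python
from itertools import product

def f(x, y, D):
    return x * x + D * y * y

# Check (x^2 + D y^2)(u^2 + D v^2) = X^2 + D Y^2 with X = xu + Dyv and Y = xv - yu
# over a small range of integers (the range is an arbitrary choice).
for D, x, y, u, v in product(range(-3, 4), repeat=5):
    X, Y = x * u + D * y * v, x * v - y * u
    assert f(x, y, D) * f(u, v, D) == f(X, Y, D)
print("Brahmagupta identity verified on all tested values")
```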
The composition law of IBQFs.
Consider the following IBQFs:
formula_31
formula_32
formula_33
If it is possible to find integers formula_34 and formula_35 such that the following six numbers
formula_36
have no common divisors other than ±1, and such that if we let
formula_37
formula_38
the following relation is identically satisfied
formula_39,
then the form formula_40 is said to be a composite of the forms formula_15 and formula_41. It may be noted that the composite of two IBQFs, if it exists, is not unique.
Example.
Consider the following binary quadratic forms:
formula_42
formula_43
formula_44
Let
formula_45
We have
formula_46.
These six numbers have no common divisors other than ±1.
Let
formula_47,
formula_48.
Then it can be verified that
formula_49.
Hence formula_40 is a composite of formula_15 and formula_41.
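This verification can be carried out mechanically; the following Python sketch checks the identity formula_49 over a small range of integer arguments (the range is an arbitrary choice):

```python
from itertools import product

g = lambda x, y: 2*x*x + 3*x*y - 10*y*y
h = lambda u, v: 5*u*u + 3*u*v - 4*v*v
F = lambda X, Y: 10*X*X + 3*X*Y - 2*Y*Y

# Bilinear substitution from the example: X = xu + 2yv and Y = 2xv + 5yu + 3yv.
for x, y, u, v in product(range(-4, 5), repeat=4):
    X = x*u + 2*y*v
    Y = 2*x*v + 5*y*u + 3*y*v
    assert g(x, y) * h(u, v) == F(X, Y)
print("g(x,y) h(u,v) = F(X,Y) holds on all tested values")
```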
An algorithm to find the composite of two IBQFs.
The following algorithm can be used to compute the composite of two IBQFs.
Algorithm.
Given the following IBQFs having the same discriminant formula_5:
formula_50
formula_51
formula_52
# Compute formula_53
# Compute formula_54
# Compute formula_55 such that formula_56
# Compute formula_57
# Compute formula_58
# Compute formula_59
# Compute formula_60
# Compute
formula_61
formula_62
Then formula_63 so that formula_64 is a composite of formula_65 and formula_66.
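The steps above translate directly into a short program. The following Python sketch is only illustrative (the function and variable names are our own, and for simplicity it assumes positive leading coefficients, as in the worked example above):

```python
from math import gcd

def ext_gcd(a, b):
    """Extended Euclid: returns (g, x, y) with a*x + b*y = g = gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def compose(f1, f2):
    """Composite of two IBQFs f1 = (a1, b1, c1) and f2 = (a2, b2, c2) with the same
    discriminant, following steps 1-6 of the algorithm above."""
    a1, b1, c1 = f1
    a2, b2, c2 = f2
    D = b1*b1 - 4*a1*c1
    assert D == b2*b2 - 4*a2*c2, "the forms must have the same discriminant"

    beta = (b1 + b2) // 2                         # step 1
    n = gcd(gcd(a1, a2), beta)                    # step 2
    g1, s, u0 = ext_gcd(a1, a2)                   # a1*s + a2*u0 = g1
    _, w, v = ext_gcd(g1, beta)                   # g1*w + beta*v = n
    t, u = s * w, u0 * w                          # step 3: a1*t + a2*u + beta*v = n

    A = a1 * a2 // (n * n)                                   # step 4
    B = (a1*b2*t + a2*b1*u + v*(b1*b2 + D)//2) // n          # step 5
    C = (B*B - D) // (4*A)                                   # step 6
    return A, B, C

# The worked example above: composing 2x^2+3xy-10y^2 with 5x^2+3xy-4y^2 (discriminant 89).
print(compose((2, 3, -10), (5, 3, -4)))           # (10, 3, -2), i.e. the form found above
```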
Properties of the composition law.
Existence of the composite.
The composite of two IBQFs exists if and only if they have the same discriminant.
Equivalent forms and the composition law.
Let formula_67 be IBQFs and let there be the following equivalences:
formula_68
formula_69
If formula_64 is a composite of formula_15 and formula_16, and formula_70 is a composite of formula_71 and formula_72, then
formula_73
A binary operation.
Let formula_74 be a fixed integer and consider the set formula_75 of all possible primitive IBQFs of discriminant formula_74. Let formula_76 be the set of equivalence classes in this set under the equivalence relation "formula_20". Let formula_21 and formula_77 be two elements of formula_76. Let formula_40 be a composite of the IBQFs formula_15 and formula_16 in formula_75. Then the following equation
formula_78
defines a well-defined binary operation "formula_79" in formula_76. Under this operation formula_76 becomes an abelian group: the operation formula_80 is commutative and associative, the identity element is the class formula_81 and the inverse of the class formula_82 is formula_83.
Modern approach to the composition law.
The following sketch of the modern approach to the composition law of IBQFs is based on a monograph by Duncan A. Buell. The book may be consulted for further details and for proofs of all the statements made hereunder.
Quadratic algebraic numbers and integers.
Let formula_84 be the set of integers. Hereafter, in this section, elements of formula_84 will be referred to as "rational integers" to distinguish them from "algebraic integers" to be defined below.
A complex number formula_85 is called a "quadratic algebraic number" if it satisfies an equation of the form
formula_86 where formula_87.
formula_88 is called a "quadratic algebraic integer" if it satisfies an equation of the form
formula_89 where formula_90
The quadratic algebraic numbers are numbers of the form
formula_91 where formula_92 and formula_93 has no square factors other than formula_94.
The integer formula_95 is called the "radicand" of the algebraic integer formula_85. The "norm" of the quadratic algebraic number formula_85 is defined as
formula_96.
Let formula_97 be the field of rational numbers. The smallest field containing formula_97 and a quadratic algebraic number formula_85 is the "quadratic field" containing formula_85 and is denoted by formula_98. This field can be shown to be
formula_99
The "discriminant" formula_5 of the field formula_100 is defined by
formula_101
Let formula_102 be a rational integer without square factors (except 1). The set of quadratic algebraic integers of radicand formula_93 is denoted by formula_103. This set is given by
formula_104
formula_105 is a ring under ordinary addition and multiplication. If we let
formula_106
then
formula_107.
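That formula_105 is closed under multiplication can be checked directly in the less obvious case; the following Python sketch performs a brute-force verification for the sample radicand d = 5 (an arbitrary illustrative choice with d congruent to 1 modulo 4):

```python
from fractions import Fraction
from itertools import product

d = 5  # sample radicand with d = 1 (mod 4), so elements have the form (a + b*sqrt(d))/2, a = b (mod 2)

def mul(x, y):
    """Multiply (a1 + b1*sqrt(d))/2 by (a2 + b2*sqrt(d))/2 and return (a, b)
    with the product written as (a + b*sqrt(d))/2."""
    a1, b1 = x
    a2, b2 = y
    return Fraction(a1 * a2 + d * b1 * b2, 2), Fraction(a1 * b2 + a2 * b1, 2)

for a1, b1, a2, b2 in product(range(-4, 5), repeat=4):
    if (a1 - b1) % 2 == 0 and (a2 - b2) % 2 == 0:
        a, b = mul((a1, b1), (a2, b2))
        # The product must again have integer entries a, b with a and b of the same parity.
        assert a.denominator == 1 and b.denominator == 1 and (a - b) % 2 == 0
print("closure under multiplication verified for the tested elements")
```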
Ideals in quadratic fields.
Let formula_108 be an ideal in the ring of integers formula_105; that is, let formula_108 be a nonempty subset of formula_105 such that for any formula_109 and any formula_110, formula_111. (An ideal formula_112 as defined here is sometimes referred to as an "integral ideal" to distinguish it from a "fractional ideal", to be defined below.) If formula_108 is an ideal in formula_105 then one can find formula_113 such that any element in formula_108 can be uniquely represented in the form formula_114 with formula_115. Such a pair of elements in formula_105 is called a "basis" of the ideal formula_108. This is indicated by writing formula_116. The "norm" of formula_116 is defined as
formula_117.
The norm is independent of the choice of the basis.
A "fractional ideal" is a nonempty subset of formula_100, closed under addition and subtraction, that satisfies the following two conditions:
# For any formula_124 and for any formula_110, formula_125.
# There exists a fixed algebraic integer formula_126 such that for every formula_127, formula_128.
Some special ideals.
There is this important result: "Given any ideal (integral or fractional) formula_129, there exists an integral ideal formula_132 such that the product ideal formula_133 is a principal ideal."
An equivalence relation in the set of ideals.
Two (integral or fractional) ideals formula_112 and formula_134 are said to be "equivalent", denoted formula_135, if there is a principal ideal formula_131 such that formula_136. These ideals are "narrowly equivalent" if the norm of formula_85 is positive. The relation, in the set of ideals, of being equivalent or narrowly equivalent as defined here is indeed an equivalence relation.
The equivalence classes (respectively, narrow equivalence classes) of fractional ideals of a ring of quadratic algebraic integers formula_105 form an abelian group under multiplication of ideals. The identity of the group is the class of all principal ideals (respectively, the class of all principal ideals formula_137 with formula_138). The groups of classes of ideals and of narrow classes of ideals are called the "class group" and the "narrow class group" of the field formula_100.
Binary quadratic forms and classes of ideals.
The main result that connects the IBQFs and classes of ideals can now be stated as follows:
"The group of classes of binary quadratic forms of discriminant formula_5 is isomorphic to the narrow class group of the quadratic number field formula_139."
Bhargava's approach to the composition law.
Manjul Bhargava, a Canadian-American Fields Medal-winning mathematician, introduced a configuration of eight integers formula_140 (see figure), called a Bhargava cube, to study the composition laws of binary quadratic forms and other such forms. Defining matrices associated with the opposite faces of this cube as given below
formula_141,
Bhargava constructed three IBQFs as follows:
formula_142
Bhargava established the following result connecting a Bhargava cube with the Gauss composition law:
"If a cube A gives rise to three primitive binary quadratic forms "Q"1, "Q"2, "Q"3, then "Q"1, "Q"2, "Q"3 have the same discriminant, and the product of these three forms is the identity in the group defined by Gauss composition. Conversely, if "Q"1, "Q"2, "Q"3 are any three primitive binary quadratic forms of the same discriminant whose product is the identity under Gauss composition, then there exists a cube A yielding "Q"1, "Q"2, "Q"3."
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Q(x,y)=\\alpha x^2 + \\beta xy + \\gamma y^2"
},
{
"math_id": 1,
"text": "\\alpha, \\beta, \\gamma, x, y"
},
{
"math_id": 2,
"text": "Q(x,y)"
},
{
"math_id": 3,
"text": "\\alpha, \\beta, \\gamma"
},
{
"math_id": 4,
"text": "\\Delta = \\beta^2-4\\alpha\\gamma"
},
{
"math_id": 5,
"text": "\\Delta"
},
{
"math_id": 6,
"text": " \\Delta \\equiv 0, 1 (\\mathrm{mod}\\,\\, 4)"
},
{
"math_id": 7,
"text": " \\Delta \\equiv 1\\,\\, (\\mathrm{mod}\\,\\, 4)"
},
{
"math_id": 8,
"text": " \\Delta = 4m"
},
{
"math_id": 9,
"text": "m=2 \\text{ or } 3\\,\\, (\\mathrm{mod}\\,\\, 4)"
},
{
"math_id": 10,
"text": "m"
},
{
"math_id": 11,
"text": " \\Delta<0"
},
{
"math_id": 12,
"text": "\\alpha>0"
},
{
"math_id": 13,
"text": "\\alpha<0"
},
{
"math_id": 14,
"text": " \\Delta>0"
},
{
"math_id": 15,
"text": "g(x,y)"
},
{
"math_id": 16,
"text": "h(x,y)"
},
{
"math_id": 17,
"text": "\\alpha\\delta - \\beta\\gamma = 1"
},
{
"math_id": 18,
"text": "g(\\alpha x + \\beta y, \\gamma x + \\delta y) = h(x,y)."
},
{
"math_id": 19,
"text": "g(x,y) \\sim h(x,y)"
},
{
"math_id": 20,
"text": "\\sim"
},
{
"math_id": 21,
"text": "[g(x,y)]"
},
{
"math_id": 22,
"text": "\\alpha\\delta - \\beta\\gamma = -1"
},
{
"math_id": 23,
"text": " (x^2 +D y^2)(u^2 + D v^2) = (xu+Dyv)^2 + D(xv - yu)^2 "
},
{
"math_id": 24,
"text": "f(x,y)=x^2+Dy^2"
},
{
"math_id": 25,
"text": "f(x,y)f(u,v)=f(X,Y)"
},
{
"math_id": 26,
"text": " X = xu+Dyv, Y=xv-yu"
},
{
"math_id": 27,
"text": "g(x,y)h(u,v)=F(X,Y)"
},
{
"math_id": 28,
"text": "g(x,y), h(x,y), F(X,Y)"
},
{
"math_id": 29,
"text": "X,Y"
},
{
"math_id": 30,
"text": "xu, xv, yu, yv"
},
{
"math_id": 31,
"text": " g(x,y) = ax^2+bxy+cy^2"
},
{
"math_id": 32,
"text": " h(x,y) = dx^2+exy+ fy^2 "
},
{
"math_id": 33,
"text": " F(x,y) = Ax^2 + Bxy + Cy^2"
},
{
"math_id": 34,
"text": "p,q,r,s"
},
{
"math_id": 35,
"text": "p^\\prime, q^\\prime, r^\\prime, s^\\prime"
},
{
"math_id": 36,
"text": "pq^\\prime - qp^\\prime, pr^\\prime - rp^\\prime, ps^\\prime - sp^\\prime, qr^\\prime - rq^\\prime, qs^\\prime - sq^\\prime, rs^\\prime - sr^\\prime"
},
{
"math_id": 37,
"text": " X = pxu + qxv + ryu+syv "
},
{
"math_id": 38,
"text": " Y = p^\\prime xu + q^\\prime xv + r^\\prime yu+s^\\prime yv "
},
{
"math_id": 39,
"text": " g(x,y)h(u,v) = F(X,Y) "
},
{
"math_id": 40,
"text": "F(x,y)"
},
{
"math_id": 41,
"text": "h(x, y)"
},
{
"math_id": 42,
"text": " g(x,y) = 2x^2+3xy-10y^2"
},
{
"math_id": 43,
"text": "h(x,y) = 5x^2 + 3xy-4y^2 "
},
{
"math_id": 44,
"text": " F(x,y) = 10x^2 +3xy - 2 y^2"
},
{
"math_id": 45,
"text": "[p, q, r, s] = [1, 0, 0, 2], \\quad [p^\\prime, q^\\prime, r^\\prime, s^\\prime] =[0, 2, 5, 3]"
},
{
"math_id": 46,
"text": "pq^\\prime - qp^\\prime=2, pr^\\prime - rp^\\prime=5, ps^\\prime - sp^\\prime=3, qr^\\prime - rq^\\prime=0, qs^\\prime - sq^\\prime=4, rs^\\prime - sr^\\prime=10"
},
{
"math_id": 47,
"text": " X = pxu + qxv + ryu+syv = xu+2yv"
},
{
"math_id": 48,
"text": " Y = p^\\prime xu + q^\\prime xv + r^\\prime yu+s^\\prime yv = 2xv+5yu+3yv"
},
{
"math_id": 49,
"text": " g(x,y)h(u,v) = F(X,Y)"
},
{
"math_id": 50,
"text": "f_1(x,y) = a_1x^2+b_1xy+c_1y^2"
},
{
"math_id": 51,
"text": "f_2(x,y) = a_2x^2 + b_2xy + c_2y^2 "
},
{
"math_id": 52,
"text": "\\Delta=b_1^2-4a_1c_1=b_2^2-4a_2c_2"
},
{
"math_id": 53,
"text": " \\beta = \\frac{b_1+b_2}{2} "
},
{
"math_id": 54,
"text": "n = \\gcd (a_1,a_2,\\beta)"
},
{
"math_id": 55,
"text": "t,u,v"
},
{
"math_id": 56,
"text": "a_1t+a_2 u+\\beta v = n "
},
{
"math_id": 57,
"text": " A = \\frac{a_1a_2}{n^2} "
},
{
"math_id": 58,
"text": " B = \\frac{a_1b_2t + a_2b_1u + v(b_1b_2+\\Delta)/2}{n} "
},
{
"math_id": 59,
"text": " C = \\frac{B^2 - \\Delta}{4A} "
},
{
"math_id": 60,
"text": " F(x,y) = Ax^2 + Bxy + Cy^2 "
},
{
"math_id": 61,
"text": " X = nx_1x_2 + \\frac{(b_2-B)n}{2a_2} x_1y_2 + \\frac{(b_1-B)n}{2a_1} y_1 x_1+ \\frac{[b_1b_2+\\Delta - B(b_1+b_2)]n}{4a_1a_2} y_1y_2 "
},
{
"math_id": 62,
"text": " Y = \\frac{a_1}{n} x_1y_2 + \\frac{a_2}{n}y_1 x_1+ \\frac{b_1+b_2}{2n} y_1y_2 "
},
{
"math_id": 63,
"text": "F(X,Y) = f_1(x_1,y_1)f_2(x_2,y_2)"
},
{
"math_id": 64,
"text": " F(x,y)"
},
{
"math_id": 65,
"text": "f_1(x,y)"
},
{
"math_id": 66,
"text": "f_2(x,y)"
},
{
"math_id": 67,
"text": "g(x,y), h(x,y), g^\\prime(x,y), h^\\prime(x,y)"
},
{
"math_id": 68,
"text": " g(x,y) \\sim g^\\prime (x,y)"
},
{
"math_id": 69,
"text": " h(x,y) \\sim h^\\prime (x,y)"
},
{
"math_id": 70,
"text": " F^\\prime(x,y)"
},
{
"math_id": 71,
"text": "g^\\prime(x,y)"
},
{
"math_id": 72,
"text": "h^\\prime(x,y)"
},
{
"math_id": 73,
"text": " F(x,y) \\sim F^\\prime (x,y)."
},
{
"math_id": 74,
"text": "D"
},
{
"math_id": 75,
"text": "S_D"
},
{
"math_id": 76,
"text": "G_D"
},
{
"math_id": 77,
"text": "[h(x,y)]"
},
{
"math_id": 78,
"text": " [g(x,y)] \\circ [h(x,y)] = [F(x,y)] "
},
{
"math_id": 79,
"text": " \\circ"
},
{
"math_id": 80,
"text": "\\circ"
},
{
"math_id": 81,
"text": " \\begin{cases}\n[x^2-(D/4)y^2] & \\text{ if } D \\equiv 0\\,(\\mathrm{mod}\\,\\, 4)\\\\[1mm]\n[x^2+xy+((1-D)/4)y^2] & \\text{ if } D \\equiv 1\\, (\\mathrm{mod}\\,\\, 4)\n\\end{cases}\n"
},
{
"math_id": 82,
"text": "[ax^2+bxy+cy^2]"
},
{
"math_id": 83,
"text": "[ax^2 -bxy+cy^2]"
},
{
"math_id": 84,
"text": "\\mathbb Z"
},
{
"math_id": 85,
"text": "\\alpha"
},
{
"math_id": 86,
"text": "ax^2+bx+c=0"
},
{
"math_id": 87,
"text": "a,b,c \\in \\mathbb Z"
},
{
"math_id": 88,
"text": " \\alpha"
},
{
"math_id": 89,
"text": "x^2+bx+c=0"
},
{
"math_id": 90,
"text": "b, c \\in \\mathbb Z"
},
{
"math_id": 91,
"text": "\\alpha = \\frac{-b +e\\sqrt{d}}{2a}"
},
{
"math_id": 92,
"text": "a,b,d,e \\in \\mathbb Z"
},
{
"math_id": 93,
"text": "d"
},
{
"math_id": 94,
"text": "1"
},
{
"math_id": 95,
"text": " d"
},
{
"math_id": 96,
"text": "N(\\alpha) = (b^2+e^2d)/4a^2"
},
{
"math_id": 97,
"text": " \\mathbb Q"
},
{
"math_id": 98,
"text": "\\mathbb Q (\\alpha)"
},
{
"math_id": 99,
"text": "\\mathbb Q (\\alpha) = \\mathbb Q (\\sqrt{d}) = \\{ t+u\\sqrt{d}\\,|\\, t,u \\in \\mathbb Q\\}"
},
{
"math_id": 100,
"text": "\\mathbb Q(\\sqrt{d})"
},
{
"math_id": 101,
"text": "\\Delta = \n\\begin{cases}\n4d & \\text{ if } d \\equiv 2 \\text{ or } 3 \\,\\, (\\mathrm{mod}\\,\\, 4 )\\\\[1mm]\nd & \\text{ if } d \\equiv 1 \\,\\, (\\mathrm{mod}\\,\\, 4 )\n\\end{cases}\n"
},
{
"math_id": 102,
"text": " d \\ne 1"
},
{
"math_id": 103,
"text": " O(\\sqrt{d})"
},
{
"math_id": 104,
"text": "\nO(\\sqrt{d}) = \\begin{cases} \n\\{ a+ b \\sqrt{d}\\, |\\, a,b \\in \\mathbb Z\\} & \\text{ if } d \\equiv 2\\text{ or }3 \\,\\,(\\mathrm{mod}\\,\\,4)\\\\[1mm] \n\\{ (a+ b \\sqrt{d})/2\\, |\\, a,b \\in \\mathbb Z, a\\equiv b \\,\\,\\mathrm{mod}\\,\\, 2)\\} & \\text{ if } d \\equiv 1 \\,\\,(\\mathrm{mod}\\,\\,4)\\}\n\\end{cases}\n"
},
{
"math_id": 105,
"text": "O(\\sqrt{d})"
},
{
"math_id": 106,
"text": " \\delta =\n\\begin{cases}\n-\\sqrt{d} & \\text{ if } \\delta \\text{ is even}\\\\[1mm]\n(1-\\sqrt{d})/2 & \\text{ if } \\delta \\text{ is odd}\n\\end{cases}\n"
},
{
"math_id": 107,
"text": "O(\\sqrt{d}) = \\{ a + b\\delta,|, a,b \\in \\mathbb Z\\}"
},
{
"math_id": 108,
"text": " \\mathbf a "
},
{
"math_id": 109,
"text": "\\alpha,\\beta \\in \\mathbf a "
},
{
"math_id": 110,
"text": "\\lambda, \\mu \\in O(\\sqrt{d})"
},
{
"math_id": 111,
"text": "\\lambda\\alpha + \\mu\\beta \\in \\mathbf a "
},
{
"math_id": 112,
"text": "\\mathbf a"
},
{
"math_id": 113,
"text": "\\alpha_1, \\alpha_2 \\in O(\\sqrt{d})"
},
{
"math_id": 114,
"text": "\\alpha_1 x + \\alpha_2 y"
},
{
"math_id": 115,
"text": "x,y\\in \\mathbb Z"
},
{
"math_id": 116,
"text": " \\mathbf a = \\langle \\alpha_1, \\alpha_2 \\rangle "
},
{
"math_id": 117,
"text": " N(\\mathbf a) = |\\alpha_1\\overline{\\alpha_2} - \\overline{\\alpha_1}\\alpha_2|/\\sqrt{\\Delta}"
},
{
"math_id": 118,
"text": "\\mathbf a = \\langle \\alpha_1, \\alpha_2 \\rangle "
},
{
"math_id": 119,
"text": "\\mathbf b = \\langle \\beta_1, \\beta_2 \\rangle "
},
{
"math_id": 120,
"text": "\\mathbf a \\mathbf b"
},
{
"math_id": 121,
"text": "\\alpha_1\\beta_1, \\alpha_1\\beta_2, \\alpha_2\\beta_1, \\alpha_2\\beta_2 "
},
{
"math_id": 122,
"text": "I"
},
{
"math_id": 123,
"text": "\\mathbb Q(\\sqrt{\\Delta})"
},
{
"math_id": 124,
"text": "\\alpha, \\beta \\in I"
},
{
"math_id": 125,
"text": " \\lambda \\alpha + \\mu \\beta \\in I "
},
{
"math_id": 126,
"text": "\\nu"
},
{
"math_id": 127,
"text": "\\alpha \\in I"
},
{
"math_id": 128,
"text": " \\nu \\alpha \\in O(\\sqrt{d})"
},
{
"math_id": 129,
"text": "\\mathbf a "
},
{
"math_id": 130,
"text": " \\mathbf a = \\{ \\lambda \\alpha\\, | \\, \\lambda \\in O(\\sqrt{d}) \\}"
},
{
"math_id": 131,
"text": "(\\alpha)"
},
{
"math_id": 132,
"text": " \\mathbf b "
},
{
"math_id": 133,
"text": " \\mathbf{ab} "
},
{
"math_id": 134,
"text": "\\mathbf b"
},
{
"math_id": 135,
"text": " \\mathbf a \\sim \\mathbf b "
},
{
"math_id": 136,
"text": "\\mathbf a = (\\alpha)\\mathbf b "
},
{
"math_id": 137,
"text": " (\\alpha) "
},
{
"math_id": 138,
"text": " N(\\alpha)>0"
},
{
"math_id": 139,
"text": "\\mathbb Q(\\sqrt {\\Delta})"
},
{
"math_id": 140,
"text": "a,b,c,d,e,f"
},
{
"math_id": 141,
"text": "M_1=\\begin{bmatrix} a & b \\\\ c & d\\end{bmatrix},N_1=\\begin{bmatrix} e & f \\\\ g & h\\end{bmatrix}, M_2=\\begin{bmatrix} a & c \\\\ e & g\\end{bmatrix},N_2=\\begin{bmatrix} b & d \\\\ f & h\\end{bmatrix}, M_3=\\begin{bmatrix} a & e \\\\ b & f\\end{bmatrix},N_3=\\begin{bmatrix} c & g \\\\ d & h\\end{bmatrix} "
},
{
"math_id": 142,
"text": "Q_1=-\\det(M_1x+N_1y), \\,\\,Q_2=-\\det(M_2x+N_2y)\\,\\,Q_3=-\\det(M_3x+N_3y)"
}
]
| https://en.wikipedia.org/wiki?curid=72824901 |
7282499 | Surface-area-to-volume ratio | Surface area per unit volume
The surface-area-to-volume ratio or surface-to-volume ratio (denoted as SA:V, SA/V, or sa/vol) is the ratio between surface area and volume of an object or collection of objects.
SA:V is an important concept in science and engineering. It is used to explain the relation between structure and function in processes occurring through the surface and the volume. Good examples of such processes are those governed by the heat equation, that is, diffusion and heat transfer by thermal conduction. SA:V is used to explain the diffusion of small molecules, like oxygen and carbon dioxide, between air, blood and cells, water loss by animals, bacterial morphogenesis, organisms' thermoregulation, and the design of artificial bone tissue, artificial lungs and many more biological and biotechnological structures. For more examples see Glazier.
The relation between SA:V and the rate of diffusion or heat conduction is explained from the flux and surface perspective, focusing on the surface of a body as the place where diffusion or heat conduction takes place: the larger the SA:V, the more surface area per unit volume there is through which material can diffuse, and therefore the faster the diffusion or heat conduction will be. A similar explanation appears in the literature: "Small size implies a large ratio of surface area to volume, thereby helping to maximize the uptake of nutrients across the plasma membrane", and elsewhere.
For a given volume, the object with the smallest surface area (and therefore with the smallest SA:V) is a ball, a consequence of the isoperimetric inequality in 3 dimensions. By contrast, objects with acute-angled spikes will have very large surface area for a given volume.
For solid spheres.
A "solid sphere" or "ball" is a three-dimensional object, being the solid figure bounded by a sphere. (In geometry, the term "sphere" properly refers only to the surface, so a sphere thus lacks volume in this context.)
For an ordinary three-dimensional ball, the SA:V can be calculated using the standard equations for the surface area and volume, which are, respectively, formula_0 and formula_1. For the unit case in which "r" = 1 the SA:V is thus 3. In the general case, SA:V equals 3/"r", an inverse relationship with the radius: if the radius is doubled, the SA:V halves (see figure).
For "n"-dimensional balls.
Balls exist in any dimension and are generically called ""n"-balls" or "hyperballs", where "n" is the number of dimensions.
The same reasoning can be generalized to n-balls using the general equations for volume and surface area, which are:
formula_2
formula_3
So the ratio equals formula_4. Thus, the same linear relationship between area and volume holds for any number of dimensions (see figure): doubling the radius always halves the ratio.
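This relationship can be checked numerically; the sketch below (the function names are illustrative, not from any particular library) evaluates the general volume and surface-area formulas using Python's standard-library gamma function:
import math

def nball_volume(n, r):
    # V = r^n * pi^(n/2) / Gamma(1 + n/2)
    return r**n * math.pi**(n / 2) / math.gamma(1 + n / 2)

def nball_surface_area(n, r):
    # SA = n * r^(n-1) * pi^(n/2) / Gamma(1 + n/2)
    return n * r**(n - 1) * math.pi**(n / 2) / math.gamma(1 + n / 2)

for n in (2, 3, 4):
    for r in (1.0, 2.0):
        print(n, r, nball_surface_area(n, r) / nball_volume(n, r))  # prints n/r
Doubling "r" from 1 to 2 halves each printed ratio, in line with the ratio given above.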
Dimension and units.
The surface-area-to-volume ratio has physical dimension inverse length (L−1) and is therefore expressed in units of inverse metre (m-1) or its prefixed unit multiples and submultiples. As an example, a cube with sides of length 1 cm will have a surface area of 6 cm2 and a volume of 1 cm3. The surface to volume ratio for this cube is thus
formula_5.
For a given shape, SA:V is inversely proportional to size. A cube 2 cm on a side has a ratio of 3 cm−1, half that of a cube 1 cm on a side. Conversely, preserving SA:V as size increases requires changing to a less compact shape.
Applications.
Physical chemistry.
Materials with high surface area to volume ratio (e.g. very small diameter, very porous, or otherwise not compact) react at much faster rates than monolithic materials, because more surface is available to react. An example is grain dust: while grain is not typically flammable, grain dust is explosive. Finely ground salt dissolves much more quickly than coarse salt.
A high surface area to volume ratio provides a strong "driving force" to speed up thermodynamic processes that minimize free energy.
Biology.
The ratio between the surface area and volume of cells and organisms has an enormous impact on their biology, including their physiology and behavior. For example, many aquatic microorganisms have increased surface area to increase their drag in the water. This reduces their rate of sinking and allows them to remain near the surface with less energy expenditure.
An increased surface area to volume ratio also means increased exposure to the environment. The finely-branched appendages of filter feeders such as krill provide a large surface area to sift the water for food.
Individual organs like the lung have numerous internal branchings that increase the surface area; in the case of the lung, the large surface supports gas exchange, bringing oxygen into the blood and releasing carbon dioxide from the blood. Similarly, the small intestine has a finely wrinkled internal surface, allowing the body to absorb nutrients efficiently.
Cells can achieve a high surface area to volume ratio with an elaborately convoluted surface, like the microvilli lining the small intestine.
Increased surface area can also lead to biological problems. More contact with the environment through the surface of a cell or an organ (relative to its volume) increases loss of water and dissolved substances. High surface area to volume ratios also present problems of temperature control in unfavorable environments.
The surface to volume ratios of organisms of different sizes also leads to some biological rules such as Allen's rule, Bergmann's rule and gigantothermy.
Fire spread.
In the context of wildfires, the ratio of the surface area of a solid fuel to its volume is an important measurement. Fire spread behavior is frequently correlated to the surface-area-to-volume ratio of the fuel (e.g. leaves and branches). The higher its value, the faster a particle responds to changes in environmental conditions, such as temperature or moisture. Higher values are also correlated to shorter fuel ignition times, and hence faster fire spread rates.
Planetary cooling.
A body of icy or rocky material in outer space may, if it can build and retain sufficient heat, develop a differentiated interior and alter its surface through volcanic or tectonic activity. The length of time through which a planetary body can maintain surface-altering activity depends on how well it retains heat, and this is governed by its surface area-to-volume ratio. For Vesta (r=263 km), the ratio is so high that astronomers were surprised to find that it "did" differentiate and have brief volcanic activity. The moon, Mercury and Mars have radii in the low thousands of kilometers; all three retained heat well enough to be thoroughly differentiated although after a billion years or so they became too cool to show anything more than very localized and infrequent volcanic activity. As of April 2019, however, NASA has announced the detection of a "marsquake" measured on April 6, 2019, by NASA's InSight lander. Venus and Earth (r>6,000 km) have sufficiently low surface area-to-volume ratios (roughly half that of Mars and much lower than all other known rocky bodies) so that their heat loss is minimal.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "SA=4\\pi{r^2}"
},
{
"math_id": 1,
"text": "V=(4/3)\\pi{r^3}"
},
{
"math_id": 2,
"text": "V=\\frac{r^n\\pi^{n/2}}{\\Gamma(1+n/2)}"
},
{
"math_id": 3,
"text": "SA=\\frac{nr^{n-1}\\pi^{n/2}}{\\Gamma(1+{n/2})}"
},
{
"math_id": 4,
"text": "SA/V=nr^{-1}"
},
{
"math_id": 5,
"text": "\\mbox{SA:V} = \\frac{6~\\mbox{cm}^2}{1~\\mbox{cm}^3} = 6~\\mbox{cm}^{-1}"
}
]
| https://en.wikipedia.org/wiki?curid=7282499 |
7282654 | Trailing zero | In mathematics, trailing zeros are a sequence of 0s in the decimal representation (or more generally, in any positional representation) of a number, after which no other digits follow.
Trailing zeros to the right of a decimal point, as in 12.340, don't affect the value of a number and may be omitted if all that is of interest is its numerical value. This is true even if the zeros recur infinitely. For example, in pharmacy, trailing zeros are omitted from dose values to prevent misreading. However, trailing zeros may be useful for indicating the number of significant figures, for example in a measurement. In such a context, "simplifying" a number by removing trailing zeros would be incorrect.
The number of trailing zeros in a non-zero base-"b" integer "n" equals the exponent of the highest power of "b" that divides "n". For example, 14000 has three trailing zeros and is therefore divisible by 1000 = 103, but not by 104. This property is useful when looking for small factors in integer factorization. Some computer architectures have a count trailing zeros operation in their instruction set for efficiently determining the number of trailing zero bits in a machine word.
Factorial.
The number of trailing zeros in the decimal representation of "n"!, the factorial of a non-negative integer "n", is simply the multiplicity of the prime factor 5 in "n"!. This can be determined with this special case of de Polignac's formula:
formula_0
where "k" must be chosen such that
formula_1
more precisely
formula_2
formula_3
and formula_4 denotes the floor function applied to "a". For "n" = 0, 1, 2, ... this is
0, 0, 0, 0, 0, 1, 1, 1, 1, 1, 2, 2, 2, 2, 2, 3, 3, 3, 3, 3, 4, 4, 4, 4, 4, 6, ... (sequence in the OEIS).
For example, 53 > 32, and therefore 32! = 263130836933693530167218012160000000 ends in
formula_5
zeros. If "n" < 5, the inequality is satisfied by "k" = 0; in that case the sum is empty, giving the answer 0.
The formula actually counts the number of factors 5 in "n"!, but since there are at least as many factors 2, this is equivalent to the number of factors 10, each of which gives one more trailing zero.
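A direct way to evaluate this count in code, matching the recurrence given below, is to divide repeatedly by 5 and accumulate the quotients; the following Python sketch (function name chosen for illustration) does this:
def factorial_trailing_zeros(n):
    # Number of trailing zeros of n!, i.e. the number of factors 5 in n!.
    count = 0
    q = n
    while q:
        q //= 5
        count += q
    return count

print(factorial_trailing_zeros(32))                       # 7
print([factorial_trailing_zeros(n) for n in range(26)])   # 0, 0, 0, 0, 0, 1, ..., 4, 6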
Defining
formula_6
the following recurrence relation holds:
formula_7
This can be used to simplify the computation of the terms of the summation, which can be stopped as soon as "q i" reaches zero. The condition 5"k"+1 > "n" is equivalent to "q" "k"+1 = 0. | [
{
"math_id": 0,
"text": "f(n) = \\sum_{i=1}^k \\left \\lfloor \\frac{n}{5^i} \\right \\rfloor =\n\\left \\lfloor \\frac{n}{5} \\right \\rfloor + \\left \\lfloor \\frac{n}{5^2} \\right \\rfloor + \\left \\lfloor \\frac{n}{5^3} \\right \\rfloor + \\cdots + \\left \\lfloor \\frac{n}{5^k} \\right \\rfloor, \\,"
},
{
"math_id": 1,
"text": "5^{k+1} > n,\\,"
},
{
"math_id": 2,
"text": "5^{k} \\le n < 5^{k+1},"
},
{
"math_id": 3,
"text": "k = \\left \\lfloor \\log_{5} n \\right \\rfloor,"
},
{
"math_id": 4,
"text": "\\lfloor a \\rfloor"
},
{
"math_id": 5,
"text": "\\left \\lfloor \\frac{32}{5} \\right \\rfloor + \\left \\lfloor \\frac{32}{5^2} \\right \\rfloor = 6 + 1 = 7\\,"
},
{
"math_id": 6,
"text": "q_i = \\left \\lfloor \\frac{n}{5^i} \\right \\rfloor,\\,"
},
{
"math_id": 7,
"text": "\\begin{align}q_0\\,\\,\\,\\,\\, & = \\,\\,\\,n,\\quad \\\\\n q_{i+1} & = \\left \\lfloor \\frac{q_i}{5} \\right \\rfloor.\\,\\end{align}"
}
]
| https://en.wikipedia.org/wiki?curid=7282654 |
72827 | Cauchy's integral formula | Provides integral formulas for all derivatives of a holomorphic function
In mathematics, Cauchy's integral formula, named after Augustin-Louis Cauchy, is a central statement in complex analysis. It expresses the fact that a holomorphic function defined on a disk is completely determined by its values on the boundary of the disk, and it provides integral formulas for all derivatives of a holomorphic function. Cauchy's formula shows that, in complex analysis, "differentiation is equivalent to integration": complex differentiation, like integration, behaves well under uniform limits – a result that does not hold in real analysis.
Theorem.
Let "U" be an open subset of the complex plane C, and suppose the closed disk "D" defined as
formula_0
is completely contained in "U". Let "f" : "U" → C be a holomorphic function, and let "γ" be the circle, oriented counterclockwise, forming the boundary of "D". Then for every "a" in the interior of "D",
formula_1
The proof of this statement uses the Cauchy integral theorem and like that theorem, it only requires "f" to be complex differentiable. Since formula_2 can be expanded as a power series in the variable formula_3
formula_4
it follows that holomorphic functions are analytic, i.e. they can be expanded as convergent power series.
In particular "f" is actually infinitely differentiable, with
formula_5
This formula is sometimes referred to as Cauchy's differentiation formula.
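The differentiation formula lends itself to direct numerical verification; the sketch below (a rough illustration, with an arbitrarily chosen sample count) approximates the contour integral over a circle around "a" by a Riemann sum:
import cmath, math

def nth_derivative(f, a, n, radius=1.0, samples=20000):
    # Approximates f^(n)(a) = n!/(2*pi*i) * integral of f(z)/(z-a)^(n+1) dz over |z - a| = radius.
    total = 0
    for k in range(samples):
        t = 2 * math.pi * k / samples
        z = a + radius * cmath.exp(1j * t)
        dz = 1j * radius * cmath.exp(1j * t) * (2 * math.pi / samples)
        total += f(z) / (z - a) ** (n + 1) * dz
    return math.factorial(n) / (2j * math.pi) * total

print(nth_derivative(cmath.exp, 0.0, 3))   # approximately 1, since every derivative of exp at 0 is 1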
The theorem stated above can be generalized. The circle "γ" can be replaced by any closed rectifiable curve in "U" which has winding number one about "a". Moreover, as for the Cauchy integral theorem, it is sufficient to require that "f" be holomorphic in the open region enclosed by the path and continuous on its closure.
Note that not every continuous function on the boundary can be used to produce a function inside the boundary that fits the given boundary function. For instance, if we put the function "f"("z") = 1/"z", defined for |"z"| = 1, into the Cauchy integral formula, we get zero for all points inside the circle. In fact, giving just the real part on the boundary of a holomorphic function is enough to determine the function up to an imaginary constant: there is only one imaginary part on the boundary that corresponds to the given real part, up to addition of a constant. We can use a combination of a Möbius transformation and the Stieltjes inversion formula to construct the holomorphic function from the real part on the boundary. For example, the function "f"("z") = "i" − "iz" has real part Re "f"("z") = Im "z". On the unit circle this can be written "i"/(2"z") − "iz"/2. Using the Möbius transformation and the Stieltjes formula we construct the function inside the circle. The "i"/(2"z") term makes no contribution, and we find the function −"iz". This has the correct real part on the boundary, and also gives us the corresponding imaginary part, but off by a constant, namely "i".
Proof sketch.
By using the Cauchy integral theorem, one can show that the integral over "C" (or the closed rectifiable curve) is equal to the same integral taken over an arbitrarily small circle around "a". Since "f"("z") is continuous, we can choose a circle small enough on which "f"("z") is arbitrarily close to "f"("a"). On the other hand, the integral
formula_6
over any circle "C" centered at "a". This can be calculated directly via a parametrization (integration by substitution) "z"("t")
"a" + "εeit" where 0 ≤ "t" ≤ 2π and "ε" is the radius of the circle.
Letting "ε" → 0 gives the desired estimate
formula_7
Example.
Let
formula_8
and let "C" be the contour described by |"z"| = 2 (the circle of radius 2).
To find the integral of "g"("z") around the contour "C", we need to know the singularities of "g"("z"). Observe that we can rewrite "g" as follows:
formula_9
where "z"1 = − 1 + "i" and "z"2 = − 1 − "i".
Thus, "g" has poles at "z"1 and "z"2. The moduli of these points are less than 2 and thus lie inside the contour. This integral can be split into two smaller integrals by Cauchy–Goursat theorem; that is, we can express the integral around the contour as the sum of the integral around "z"1 and "z"2 where the contour is a small circle around each pole. Call these contours "C"1 around "z"1 and "C"2 around "z"2.
Now, each of these smaller integrals can be evaluated by the Cauchy integral formula, but they first must be rewritten to apply the theorem. For the integral around "C"1, define "f"1 as "f"1("z") = ("z" − "z"1)"g"("z"). This is analytic (since the contour does not contain the other singularity). We can simplify "f"1 to be:
formula_10
and now
formula_11
Since the Cauchy integral formula says that:
formula_12
we can evaluate the integral as follows:
formula_13
Doing likewise for the other contour:
formula_14
we evaluate
formula_15
The integral around the original contour "C" then is the sum of these two integrals:
formula_16
An elementary trick using partial fraction decomposition:
formula_17
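The value −4π"i" can also be checked numerically; a minimal sketch (sample count chosen arbitrarily) that integrates "g" around |"z"| = 2 with the parametrization "z" = 2"e""it" is:
import cmath, math

def g(z):
    return z**2 / (z**2 + 2*z + 2)

samples = 20000
total = 0
for k in range(samples):
    t = 2 * math.pi * k / samples
    z = 2 * cmath.exp(1j * t)
    dz = 2j * cmath.exp(1j * t) * (2 * math.pi / samples)
    total += g(z) * dz

print(total, -4j * math.pi)   # the two values agree to high accuracy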
Consequences.
The integral formula has broad applications. First, it implies that a function which is holomorphic in an open set is in fact infinitely differentiable there. Furthermore, it is an analytic function, meaning that it can be represented as a power series. The proof of this uses the dominated convergence theorem and the geometric series applied to
formula_18
The formula is also used to prove the residue theorem, which is a result for meromorphic functions, and a related result, the argument principle. It is known from Morera's theorem that the uniform limit of holomorphic functions is holomorphic. This can also be deduced from Cauchy's integral formula: indeed the formula also holds in the limit and the integrand, and hence the integral, can be expanded as a power series. In addition the Cauchy formulas for the higher order derivatives show that all these derivatives also converge uniformly.
The analog of the Cauchy integral formula in real analysis is the Poisson integral formula for harmonic functions; many of the results for holomorphic functions carry over to this setting. No such results, however, are valid for more general classes of differentiable or real analytic functions. For instance, the existence of the first derivative of a real function need not imply the existence of higher order derivatives, nor in particular the analyticity of the function. Likewise, the uniform limit of a sequence of (real) differentiable functions may fail to be differentiable, or may be differentiable but with a derivative which is not the limit of the derivatives of the members of the sequence.
Another consequence is that if "f"("z") = Σ "a""n" "z""n" is holomorphic in and 0 < "r" < "R" then the coefficients "a""n" satisfy Cauchy's inequality
formula_19
From Cauchy's inequality, one can easily deduce that every bounded entire function must be constant (which is Liouville's theorem).
The formula can also be used to derive Gauss's Mean-Value Theorem, which states
formula_20
In other words, the average value of "f" over the circle centered at "z" with radius "r" is "f"("z"). This can be calculated directly via a parametrization of the circle.
Generalizations.
Smooth functions.
A version of Cauchy's integral formula is the Cauchy–Pompeiu formula, and holds for smooth functions as well, as it is based on Stokes' theorem. Let "D" be a disc in C and suppose that "f" is a complex-valued "C"1 function on the closure of "D". Then
formula_21
One may use this representation formula to solve the inhomogeneous Cauchy–Riemann equations in "D". Indeed, if "φ" is a function in "D", then a particular solution "f" of the equation is a holomorphic function outside the support of "μ". Moreover, if in an open set "D",
formula_22
for some "φ" ∈ "C""k"("D") (where "k" ≥ 1), then "f"("ζ", "ζ") is also in "C""k"("D") and satisfies the equation
formula_23
The first conclusion is, succinctly, that the convolution "μ" ∗ "k"("z") of a compactly supported measure with the Cauchy kernel
formula_24
is a holomorphic function off the support of "μ". Here p.v. denotes the principal value. The second conclusion asserts that the Cauchy kernel is a fundamental solution of the Cauchy–Riemann equations. Note that for smooth complex-valued functions "f" of compact support on C the generalized Cauchy integral formula simplifies to
formula_25
and is a restatement of the fact that, considered as a distribution, (π"z")−1 is a fundamental solution of the Cauchy–Riemann operator ∂/∂"z̄".
The generalized Cauchy integral formula can be deduced for any bounded open region "X" with "C"1 boundary ∂"X" from this result and the formula for the distributional derivative of the characteristic function "χ""X" of "X":
formula_26
where the distribution on the right hand side denotes contour integration along ∂"X".
<templatestyles src="Math_proof/styles.css" />Proof
For formula_27 calculate:
formula_28
then traverse formula_29 in the anti-clockwise direction. Fix a point formula_30 and let formula_31 denote arc length on formula_29 measured from formula_32 anti-clockwise. Then, if formula_33 is the length of formula_34 is a parametrization of formula_29. The derivative formula_35 is a unit tangent to formula_29 and formula_36 is the unit outward normal on formula_29. We are lined up for use of the divergence theorem: put formula_37 so that formula_38 and we get
formula_39
Hence we proved formula_40.
Now we can deduce the generalized Cauchy integral formula:
<templatestyles src="Math_proof/styles.css" />Proof
Several variables.
In several complex variables, the Cauchy integral formula can be generalized to polydiscs. Let "D" be the polydisc given as the Cartesian product of "n" open discs "D"1, ..., "D""n":
formula_41
Suppose that "f" is a holomorphic function in "D" continuous on the closure of "D". Then
formula_42
where "ζ" = ("ζ"1...,"ζ""n") ∈ "D".
In real algebras.
The Cauchy integral formula is generalizable to real vector spaces of two or more dimensions. The insight into this property comes from geometric algebra, where objects beyond scalars and vectors (such as planar bivectors and volumetric trivectors) are considered, and a proper generalization of Stokes' theorem.
Geometric calculus defines a derivative operator ∇ = ê"i" ∂"i" under its geometric product — that is, for a "k"-vector field "ψ"(r), the derivative ∇"ψ" generally contains terms of grade "k" + 1 and "k" − 1. For example, a vector field ("k" = 1) generally has in its derivative a scalar part, the divergence ("k" = 0), and a bivector part, the curl ("k" = 2). This particular derivative operator has a Green's function:
formula_43
where "Sn" is the surface area of a unit "n"-ball in the space (that is, "S"2 = 2π, the circumference of a circle with radius 1, and "S"3 = 4π, the surface area of a sphere with radius 1). By definition of a Green's function,
formula_44
It is this useful property that can be used, in conjunction with the generalized Stokes theorem:
formula_45
where, for an "n"-dimensional vector space, "dS is an ("n" − 1)-vector and "dV is an "n"-vector. The function "f"(r) can, in principle, be composed of any combination of multivectors. The proof of Cauchy's integral theorem for higher dimensional spaces relies on the using the generalized Stokes theorem on the quantity "G"(r, r′) "f"(r′) and use of the product rule:
formula_46
When ∇"f" = 0, "f"(r) is called a "monogenic function", the generalization of holomorphic functions to higher-dimensional spaces — indeed, it can be shown that the Cauchy–Riemann condition is just the two-dimensional expression of the monogenic condition. When that condition is met, the second term in the right-hand integral vanishes, leaving only
formula_47
where "in" is that algebra's unit "n"-vector, the pseudoscalar. The result is
formula_48
Thus, as in the two-dimensional (complex analysis) case, the value of an analytic (monogenic) function at a point can be found by an integral over the surface surrounding the point, and this is valid not only for scalar functions but vector and general multivector functions as well.
See also.
<templatestyles src="Div col/styles.css"/>
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "D = \\bigl\\{z:|z - z_0| \\leq r\\bigr\\}"
},
{
"math_id": 1,
"text": "f(a) = \\frac{1}{2\\pi i} \\oint_\\gamma \\frac{f(z)}{z-a}\\,dz.\\,"
},
{
"math_id": 2,
"text": "1/(z-a)"
},
{
"math_id": 3,
"text": "a"
},
{
"math_id": 4,
"text": "\\frac{1}{z-a} = \\frac{1+\\frac{a}{z}+\\left(\\frac{a}{z}\\right)^2+\\cdots}{z}"
},
{
"math_id": 5,
"text": "f^{(n)}(a) = \\frac{n!}{2\\pi i} \\oint_\\gamma \\frac{f(z)}{\\left(z-a\\right)^{n+1}}\\,dz."
},
{
"math_id": 6,
"text": "\\oint_C \\frac{1}{z-a} \\,dz = 2 \\pi i,"
},
{
"math_id": 7,
"text": "\\begin{align}\n\\left | \\frac{1}{2 \\pi i} \\oint_C \\frac{f(z)}{z-a} \\,dz - f(a) \\right |\n&= \\left | \\frac{1}{2 \\pi i} \\oint_C \\frac{f(z)-f(a)}{z-a} \\,dz \\right | \\\\[1ex]\n&= \\left | \\frac{1}{2\\pi i}\\int_0^{2\\pi}\\left(\\frac{f\\bigl(z(t)\\bigr)-f(a)}{\\varepsilon e^{it}}\\cdot\\varepsilon e^{it} i\\right )\\,dt\\right | \\\\[1ex]\n&\\leq \\frac{1}{2 \\pi} \\int_0^{2\\pi} \\frac{ \\left|f\\bigl(z(t)\\bigr) - f(a)\\right| } {\\varepsilon} \\,\\varepsilon\\,dt \\\\[1ex]\n&\\leq \\max_{|z-a|=\\varepsilon} \\left|f(z) - f(a)\\right|\n~~ \\xrightarrow[\\varepsilon\\to 0]{} ~~ 0.\n\\end{align}"
},
{
"math_id": 8,
"text": "g(z) = \\frac{z^2}{z^2+2z+2},"
},
{
"math_id": 9,
"text": "g(z) = \\frac{z^2}{(z-z_1)(z-z_2)}"
},
{
"math_id": 10,
"text": "f_1(z) = \\frac{z^2}{z-z_2}"
},
{
"math_id": 11,
"text": "g(z) = \\frac{f_1(z)}{z-z_1}."
},
{
"math_id": 12,
"text": "\\oint_C \\frac{f_1(z)}{z-a}\\, dz=2\\pi i\\cdot f_1(a),"
},
{
"math_id": 13,
"text": "\n \\oint_{C_1} g(z)\\,dz\n =\\oint_{C_1} \\frac{f_1(z)}{z-z_1}\\,dz\n =2\\pi i\\frac{z_1^2}{z_1-z_2}.\n"
},
{
"math_id": 14,
"text": "f_2(z) = \\frac{z^2}{z-z_1},"
},
{
"math_id": 15,
"text": "\n \\oint_{C_2} g(z)\\,dz\n =\\oint_{C_2} \\frac{f_2(z)}{z-z_2}\\,dz\n =2\\pi i\\frac{z_2^2}{z_2-z_1}.\n"
},
{
"math_id": 16,
"text": "\\begin{align}\n \\oint_C g(z)\\,dz\n&{}= \\oint_{C_1} g(z)\\,dz\n + \\oint_{C_2} g(z)\\,dz \\\\[.5em]\n&{}= 2\\pi i\\left(\\frac{z_1^2}{z_1-z_2}+\\frac{z_2^2}{z_2-z_1}\\right) \\\\[.5em]\n&{}= 2\\pi i(-2) \\\\[.3em]\n&{}=-4\\pi i.\n\\end{align}"
},
{
"math_id": 17,
"text": "\n \\oint_C g(z)\\,dz\n =\\oint_C \\left(1-\\frac{1}{z-z_1}-\\frac{1}{z-z_2}\\right) \\, dz\n =0-2\\pi i-2\\pi i\n =-4\\pi i\n"
},
{
"math_id": 18,
"text": "f(\\zeta) = \\frac{1}{2\\pi i}\\int_C \\frac{f(z)}{z-\\zeta}\\,dz."
},
{
"math_id": 19,
"text": "|a_n|\\le r^{-n} \\sup_{|z|=r}|f(z)|."
},
{
"math_id": 20,
"text": "f(z) = \\frac{1}{2\\pi} \\int_{0}^{2\\pi} f(z + r e^{i\\theta}) \\, d\\theta."
},
{
"math_id": 21,
"text": "f(\\zeta) = \\frac{1}{2\\pi i}\\int_{\\partial D} \\frac{f(z) \\,dz}{z-\\zeta} - \\frac{1}{\\pi}\\iint_D \\frac{\\partial f}{\\partial \\bar{z}}(z) \\frac{dx\\wedge dy}{z-\\zeta}."
},
{
"math_id": 22,
"text": "d\\mu = \\frac{1}{2\\pi i}\\varphi \\, dz\\wedge d\\bar{z}"
},
{
"math_id": 23,
"text": "\\frac{\\partial f}{\\partial\\bar{z}} = \\varphi(z,\\bar{z})."
},
{
"math_id": 24,
"text": "k(z) = \\operatorname{p.v.}\\frac{1}{z}"
},
{
"math_id": 25,
"text": "f(\\zeta) = \\frac{1}{2\\pi i}\\iint \\frac{\\partial f}{\\partial \\bar{z}}\\frac{dz\\wedge d\\bar{z}}{z-\\zeta},"
},
{
"math_id": 26,
"text": " \\frac {\\partial \\chi_X}{\\partial \\bar z}= \\frac{i}{2} \\oint_{\\partial X} \\,dz,"
},
{
"math_id": 27,
"text": "\\varphi \\in \\mathcal{D}(X)"
},
{
"math_id": 28,
"text": "\n\\begin{aligned}\n\\left\\langle\\frac{\\partial}{\\partial \\bar{z}}\\left(\\chi_X\\right), \\varphi\\right\\rangle & =-\\int_X \\frac{\\partial \\varphi}{\\partial \\bar{z}} \\mathrm{~d}(x, y) \\\\\n& =-\\frac{1}{2} \\int_X\\left(\\partial_x \\varphi+\\mathrm{i} \\partial_y \\varphi\\right) \\mathrm{d}(x, y) .\n\\end{aligned}\n"
},
{
"math_id": 29,
"text": "\\partial X"
},
{
"math_id": 30,
"text": "p \\in \\partial X"
},
{
"math_id": 31,
"text": "s"
},
{
"math_id": 32,
"text": "p"
},
{
"math_id": 33,
"text": "\\ell"
},
{
"math_id": 34,
"text": "\\partial X,[0, \\ell] \\ni s \\mapsto(x(s), y(s))"
},
{
"math_id": 35,
"text": "\\tau=\\left(x'(s), y'(s)\\right)"
},
{
"math_id": 36,
"text": "\\nu:=\\left(-y'(s), x'(s)\\right)"
},
{
"math_id": 37,
"text": "V=(\\varphi, \\mathrm{i} \\varphi) \\in \\mathcal{D}(X)^2"
},
{
"math_id": 38,
"text": "\\operatorname{div} V=\\partial_x \\varphi+\\mathrm{i} \\partial_y \\varphi"
},
{
"math_id": 39,
"text": "\n\\begin{aligned}\n-\\frac{1}{2} \\int_X\\left(\\partial_x \\varphi+\\mathrm{i} \\partial_y \\varphi\\right) \\mathrm{d}(x, y) & =-\\frac{1}{2} \\int_{\\partial X} V \\cdot \\nu \\mathrm{d} S \\\\\n& =-\\frac{1}{2} \\int_0^{\\ell}\\left(\\varphi \\nu_1+\\mathrm{i} \\varphi \\nu_2\\right) \\mathrm{d} s \\\\\n& =-\\frac{1}{2} \\int_0^{\\ell} \\varphi(x(s), y(s))\\left(y'(s)-\\mathrm{i} x'(s)\\right) \\mathrm{d} s \\\\\n& =\\frac{1}{2} \\int_0^{\\ell} \\mathrm{i} \\varphi(x(s), y(s))\\left(x'(s)+\\mathrm{i} y'(s)\\right) \\mathrm{d} s \\\\\n& =\\frac{\\mathrm{i}}{2} \\int_{\\partial X} \\varphi \\mathrm{d} z\n\\end{aligned}\n"
},
{
"math_id": 40,
"text": " \\frac {\\partial \\chi_X}{\\partial \\bar z}= \\frac{i}{2} \\oint_{\\partial X} \\,dz"
},
{
"math_id": 41,
"text": "D = \\prod_{i=1}^n D_i."
},
{
"math_id": 42,
"text": "f(\\zeta) = \\frac{1}{\\left(2\\pi i\\right)^n}\\int\\cdots\\iint_{\\partial D_1\\times\\cdots\\times\\partial D_n} \\frac{f(z_1,\\ldots,z_n)}{(z_1-\\zeta_1)\\cdots(z_n-\\zeta_n)} \\, dz_1\\cdots dz_n"
},
{
"math_id": 43,
"text": "G\\left(\\mathbf r, \\mathbf r'\\right) = \\frac{1}{S_n} \\frac{\\mathbf r - \\mathbf r'}{\\left|\\mathbf r - \\mathbf r'\\right|^n}"
},
{
"math_id": 44,
"text": "\\nabla G\\left(\\mathbf r, \\mathbf r'\\right) = \\delta\\left(\\mathbf r- \\mathbf r'\\right)."
},
{
"math_id": 45,
"text": "\\oint_{\\partial V} d\\mathbf S \\; f(\\mathbf r) = \\int_V d\\mathbf V \\; \\nabla f(\\mathbf r)"
},
{
"math_id": 46,
"text": "\\oint_{\\partial V'} G\\left(\\mathbf r, \\mathbf r'\\right)\\; d\\mathbf S' \\; f\\left(\\mathbf r'\\right)\n= \\int_V \\left(\\left[\\nabla' G\\left(\\mathbf r, \\mathbf r'\\right)\\right] f\\left(\\mathbf r'\\right) + G\\left(\\mathbf r, \\mathbf r'\\right) \\nabla' f\\left(\\mathbf r'\\right)\\right) \\; d\\mathbf V"
},
{
"math_id": 47,
"text": "\\oint_{\\partial V'} G\\left(\\mathbf r, \\mathbf r'\\right)\\; d\\mathbf S' \\; f\\left(\\mathbf r'\\right)\n= \\int_V \\left[\\nabla' G\\left(\\mathbf r, \\mathbf r'\\right)\\right] f\\left(\\mathbf r'\\right)\n= -\\int_V \\delta\\left(\\mathbf r - \\mathbf r'\\right) f\\left(\\mathbf r'\\right) \\; d\\mathbf V\n=- i_n f(\\mathbf r)"
},
{
"math_id": 48,
"text": "f(\\mathbf r)\n=- \\frac{1}{i_n} \\oint_{\\partial V} G\\left(\\mathbf r, \\mathbf r'\\right)\\; d\\mathbf S \\; f\\left(\\mathbf r'\\right)\n= -\\frac{1}{i_n} \\oint_{\\partial V} \\frac{\\mathbf r - \\mathbf r'}{S_n \\left|\\mathbf r - \\mathbf r'\\right|^n} \\; d\\mathbf S \\; f\\left(\\mathbf r'\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=72827 |
72827238 | Branched pathways | Common pattern in metabolism
Branched pathways, also known as branch points (not to be confused with the mathematical branch point), are a common pattern found in metabolism. This is where an intermediate species is chemically made or transformed by multiple enzymatic processes. In contrast, linear pathways have only one enzymatic reaction producing a species and one enzymatic reaction consuming it.
Branched pathways are present in numerous metabolic reactions, including glycolysis, the synthesis of lysine, glutamine, and penicillin, and in the production of the aromatic amino acids.
In general, a single branch may have formula_1 producing branches and formula_2 consuming branches. If the intermediate at the branch point is given by formula_3, then the rate of change of formula_3 is given by:
formula_4
At steady-state when formula_5 the consumption and production rates must be equal:
formula_6
Biochemical pathways can be investigated by computer simulation or by looking at the sensitivities, i.e. control coefficients for flux and species concentrations using metabolic control analysis.
Elementary properties.
A simple branched pathway has one key property related to the conservation of mass. In general, the rate of change of the branch species based on the above figure is given by:
formula_7
At steady-state the rate of change of formula_8 is zero. This gives rise to a steady-state constraint among the branch reaction rates:
formula_9
Such constraints are key to computational methods such as flux balance analysis.
Control properties of a branch pathway.
Branched pathways have unique control properties compared to simple linear chain or cyclic pathways. These properties can be investigated using metabolic control analysis. The fluxes can be controlled by the enzyme concentrations formula_10, formula_11, and formula_12 respectively, described by the corresponding flux control coefficients. To investigate these, the flux control coefficients with respect to one of the branch fluxes can be derived. The derivation is shown in a subsequent section. The flux control coefficients with respect to the upper branch flux, formula_13, are given by:
formula_14
formula_15
formula_16
where formula_17 is the fraction of flux going through the upper arm, formula_13, and formula_18 the fraction going through the lower arm, formula_19. formula_20 and formula_21 are the elasticities for formula_22 with respect to formula_23 and formula_0 respectively.
For the following analysis, the flux formula_13 will be the observed variable in response to changes in enzyme concentrations.
There are two possible extremes to consider: either most of the flux goes through the upper branch, formula_24, or most of the flux goes through the lower branch, formula_25. The former, depicted in panel a), is the least interesting as it converts the branch into a simple linear pathway. Of more interest is when most of the flux goes through formula_19.
If most of the flux goes through formula_19, so that formula_26 and formula_27 (condition (b) in the figure), the flux control coefficients for formula_13 with respect to formula_11 and formula_12 can be written:
formula_28
formula_29
That is, formula_11 acquires proportional influence over its own flux, formula_13. Since formula_13 only carries a very small amount of flux, any changes in formula_11 will have little effect on formula_30. Hence the flux through formula_11 is almost entirely governed by the activity of formula_11. Because of the flux summation theorem and the fact that formula_31, the remaining two coefficients must be equal and opposite in value. Since formula_32 is positive, formula_33 must be negative. This also means that, in this situation, there can be more than one rate-limiting step in a pathway.
Unlike in a linear pathway, the values of formula_33 and formula_32 are not bounded between zero and one. Depending on the values of the elasticities, it is possible for the control coefficients in a branched system to greatly exceed one. This has been termed the branchpoint effect by some in the literature.
Example.
The following branch pathway model (in Antimony format) illustrates the case where formula_34 and formula_19 have very high flux control and step J2 has proportional control.
J1: $Xo -> S1; e1*k1*Xo
J2: S1 ->; e2*k3*S1/(Km1 + S1)
J3: S1 ->; e3*k4*S1/(Km2 + S1)
k1 = 2.5;
k3 = 5.9; k4 = 20.75
Km1 = 4; Km2 = 0.02
Xo =5;
e1 = 1; e2 = 1; e3 = 1
A simulation of this model yields the following values for the flux control coefficients with respect to flux formula_13
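These coefficients can also be estimated without a dedicated simulator; the following Python sketch (an independent finite-difference check using only the rate laws above, with helper names chosen for illustration) solves for the steady state by bisection and perturbs each enzyme in turn:
def steady_state_S1(e1, e2, e3, Xo=5.0, k1=2.5, k3=5.9, k4=20.75, Km1=4.0, Km2=0.02):
    # Solve e1*k1*Xo = e2*k3*S/(Km1+S) + e3*k4*S/(Km2+S) for S1 by bisection.
    J1 = e1 * k1 * Xo
    f = lambda S: e2 * k3 * S / (Km1 + S) + e3 * k4 * S / (Km2 + S) - J1
    lo, hi = 1e-9, 1e6
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def J2_flux(e1, e2, e3):
    S1 = steady_state_S1(e1, e2, e3)
    return e2 * 5.9 * S1 / (4.0 + S1)

def flux_control_coefficient(i, h=1e-4):
    # Scaled coefficient C^J2_ei estimated by a central finite difference.
    base = [1.0, 1.0, 1.0]
    up = list(base); up[i] *= 1 + h
    dn = list(base); dn[i] *= 1 - h
    return (J2_flux(*up) - J2_flux(*dn)) / (2 * h * J2_flux(*base))

print([round(flux_control_coefficient(i), 2) for i in range(3)])
# approximately [2.5, 1.0, -2.5]: steps one and three have large coefficients of opposite
# sign, step two is close to one, and by the summation theorem the three sum to 1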
Branch point theorems.
In a linear pathway, only two sets of theorems exist: the summation and connectivity theorems. Branched pathways have an additional set of branch-centric summation theorems. When these are combined with the connectivity theorems and the summation theorem, it is possible to derive the control equations shown in the previous section. The derivation of the branch point theorems is as follows.
Let the fraction of flux through the upper branch be given by formula_35 and the fraction through the lower branch by formula_36. Consider a change formula_37 to the second step together with a compensating change to the third step, chosen so that the concentration of the branch species formula_38 is unchanged, that is formula_39; because neither the first enzyme nor the branch species changes, the inlet flux is also unchanged, formula_40. Following these assumptions, two sets of equations can be derived: the flux branch point theorems and the concentration branch point theorems.
Derivation.
From these assumptions, the following system equation can be produced:
formula_41
Because formula_39, and assuming that the reaction rates are directly proportional to the enzyme concentrations so that the elasticities formula_42 equal one, the local equations are:
formula_43
formula_44
Substituting formula_45 for formula_46 in the system equation results in:
formula_47
Conservation of mass dictates formula_48; since formula_49, it follows that formula_50. Substitution eliminates the formula_51 term from the system equation:
formula_52
Dividing out formula_53 results in:
formula_54
formula_55 and formula_56 can be substituted by the fractional rates giving:
formula_57
Rearrangement yields the final form of the first flux branch point theorem:
formula_58
Similar derivations result in two more flux branch point theorems and the three concentration branch point theorems.
formula_59
formula_60
formula_61
formula_62
formula_63
formula_64
Concentration branch point theorems.
Following the flux summation theorem and the connectivity theorem the following system of equations can be produced for the simple pathway.
formula_65
formula_66
formula_67
formula_68
formula_69
formula_70
Using these theorems plus the flux summation and connectivity theorems, values for the concentration and flux control coefficients can be determined using linear algebra.
formula_14
formula_15
formula_16
formula_71
formula_72
formula_73 | [
{
"math_id": 0,
"text": "v_3"
},
{
"math_id": 1,
"text": "b"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "s_i"
},
{
"math_id": 4,
"text": " \\sum_{i=1}^b v_i-\\sum_{j=1}^d v_j=\\frac{d s_i}{d t} "
},
{
"math_id": 5,
"text": "ds_i/dt = 0"
},
{
"math_id": 6,
"text": " \\sum_{i=1}^b v_i=\\sum_{j=1}^d v_j "
},
{
"math_id": 7,
"text": " \\frac{ds_1}{dt} = v_1 - (v_2 + v_3) "
},
{
"math_id": 8,
"text": " S_1 "
},
{
"math_id": 9,
"text": " v_1 = v_2 + v_3 "
},
{
"math_id": 10,
"text": "e_1"
},
{
"math_id": 11,
"text": "e_2"
},
{
"math_id": 12,
"text": "e_3"
},
{
"math_id": 13,
"text": "J_2"
},
{
"math_id": 14,
"text": "C^{J_2}_{e_1} = \\frac{\\varepsilon_2}{\\varepsilon_2 \\alpha + \\varepsilon_3 (1-\\alpha) - \\varepsilon_1} "
},
{
"math_id": 15,
"text": "C^{J_2}_{e_2} = \\frac{\\varepsilon_3 (1-\\alpha) - \\varepsilon_1}{\\varepsilon_2 \\alpha + \\varepsilon_3 (1-\\alpha) - \\varepsilon_1} "
},
{
"math_id": 16,
"text": "C^{J_2}_{e_3} = \\frac{- \\varepsilon_2 (1-\\alpha)}{\\varepsilon_2 \\alpha + \\varepsilon_3 (1-\\alpha) - \\varepsilon_1} "
},
{
"math_id": 17,
"text": "\\alpha"
},
{
"math_id": 18,
"text": "1-\\alpha"
},
{
"math_id": 19,
"text": "J_3"
},
{
"math_id": 20,
"text": "\\varepsilon_1, \\varepsilon_2,"
},
{
"math_id": 21,
"text": "\\varepsilon_3"
},
{
"math_id": 22,
"text": "s_1"
},
{
"math_id": 23,
"text": "v_1, v_2, "
},
{
"math_id": 24,
"text": "J_2 "
},
{
"math_id": 25,
"text": "J_3 "
},
{
"math_id": 26,
"text": " \\alpha \\rightarrow 0 "
},
{
"math_id": 27,
"text": " 1 - \\alpha \\rightarrow 1 "
},
{
"math_id": 28,
"text": " C^{J_2}_{e_2} \\rightarrow 1 "
},
{
"math_id": 29,
"text": " C^{J_2}_{e_3} \\rightarrow \\frac{\\varepsilon_2}{\\varepsilon_1 - \\varepsilon_3} "
},
{
"math_id": 30,
"text": "S"
},
{
"math_id": 31,
"text": "C^{J_2}_{e_2} = 1"
},
{
"math_id": 32,
"text": "C^{J_2}_{e_1}"
},
{
"math_id": 33,
"text": "C^{J_2}_{e_3}"
},
{
"math_id": 34,
"text": "J_1"
},
{
"math_id": 35,
"text": "\\alpha = J_2/J_1"
},
{
"math_id": 36,
"text": "1 - \\alpha = J_3/J_1"
},
{
"math_id": 37,
"text": "\\delta e_2"
},
{
"math_id": 38,
"text": "S_1"
},
{
"math_id": 39,
"text": "\\delta S_1 = 0"
},
{
"math_id": 40,
"text": "\\delta J_1 = 0"
},
{
"math_id": 41,
"text": "C^{J_1}_{e_2} \\frac{\\delta e_2}{e_2} + C^{J_1}_{e_3} \\frac{\\delta e_3}{e_3} = \\frac{\\delta J_1}{J_1} = 0 "
},
{
"math_id": 42,
"text": "\\varepsilon^{v}_{e_i}"
},
{
"math_id": 43,
"text": "\\frac{\\delta v_2}{v_2} = \\frac{\\delta e_2}{e_2} "
},
{
"math_id": 44,
"text": "\\frac{\\delta v_3}{v_3} = \\frac{\\delta e_3}{e_3} "
},
{
"math_id": 45,
"text": "\\frac{\\delta v_i}{v_i} "
},
{
"math_id": 46,
"text": "\\frac{\\delta e_i}{e_i} "
},
{
"math_id": 47,
"text": "C^{J_1}_{e_2} \\frac{\\delta v_2}{v_2} + C^{J_1}_{e_3} \\frac{\\delta v_3}{v_3} = 0 "
},
{
"math_id": 48,
"text": "\\delta J_1 = \\delta J_2 + \\delta J_3 "
},
{
"math_id": 49,
"text": "\\delta J_1 = 0 "
},
{
"math_id": 50,
"text": " \\delta v_2 = - \\delta v_3 "
},
{
"math_id": 51,
"text": " \\delta v_3 "
},
{
"math_id": 52,
"text": "C^{J_1}_{e_2} \\frac{\\delta v_2}{v_2} - C^{J_1}_{e_3} \\frac{\\delta v_2}{v_3} = 0 "
},
{
"math_id": 53,
"text": "\\frac{\\delta v_2}{v_2} "
},
{
"math_id": 54,
"text": "C^{J_1}_{e_2} - C^{J_1}_{e_3} \\frac{v_2}{v_3} = 0 "
},
{
"math_id": 55,
"text": "v_2 "
},
{
"math_id": 56,
"text": "v_3 "
},
{
"math_id": 57,
"text": "C^{J_1}_{e_2} - C^{J_1}_{e_3} \\frac{\\alpha}{1-\\alpha} = 0 "
},
{
"math_id": 58,
"text": "C^{J_1}_{e_2}(1-\\alpha) - C^{J_1}_{e_3} {\\alpha} = 0 "
},
{
"math_id": 59,
"text": "C^{J_1}_{e_2} (1-\\alpha) - C^{J_1}_{e_3} (\\alpha) = 0"
},
{
"math_id": 60,
"text": "C^{J_2}_{e_1} (1-\\alpha) + C^{J_2}_{e_3} (\\alpha) = 0"
},
{
"math_id": 61,
"text": "C^{J_3}_{e_1} (\\alpha) + C^{J_3}_{e_2} = 0"
},
{
"math_id": 62,
"text": "C^{S_1}_{e_2} (1-\\alpha) + C^{S_1}_{e_3} (\\alpha) = 0"
},
{
"math_id": 63,
"text": "C^{S_1}_{e_1} (1-\\alpha) + C^{S_1}_{e_3} = 0"
},
{
"math_id": 64,
"text": "C^{S_1}_{e_1} (\\alpha) + C^{S_1}_{e_2} = 0"
},
{
"math_id": 65,
"text": "C^{J_1}_{e_1} + C^{J_1}_{e_2} + C^{J_1}_{e_3} = 1"
},
{
"math_id": 66,
"text": "C^{J_2}_{e_1} + C^{J_2}_{e_2} + C^{J_2}_{e_3} = 1"
},
{
"math_id": 67,
"text": "C^{J_3}_{e_1} + C^{J_3}_{e_2} + C^{J_3}_{e_3} = 1"
},
{
"math_id": 68,
"text": "C^{J_1}_{e_1} \\varepsilon^{v_1}_s + C^{J_1}_{e_2} \\varepsilon^{v_2}_s +\nC^{J_1}_{e_3} \\varepsilon^{v_3}_s = 0"
},
{
"math_id": 69,
"text": "C^{J_2}_{e_1} \\varepsilon^{v_1}_s + C^{J_2}_{e_2} \\varepsilon^{v_2}_s +\nC^{J_2}_{e_3} \\varepsilon^{v_3}_s = 0"
},
{
"math_id": 70,
"text": "C^{J_3}_{e_1} \\varepsilon^{v_1}_s + C^{J_3}_{e_2} \\varepsilon^{v_2}_s +\nC^{J_3}_{e_3} \\varepsilon^{v_3}_s = 0"
},
{
"math_id": 71,
"text": "C^{S_1}_{e_1} = \\frac{1}{\\varepsilon_2 \\alpha + \\varepsilon_3 (1-\\alpha) - \\varepsilon_1} "
},
{
"math_id": 72,
"text": "C^{S_1}_{e_1} = \\frac{- \\alpha}{\\varepsilon_2 \\alpha + \\varepsilon_3 (1-\\alpha) - \\varepsilon_1} "
},
{
"math_id": 73,
"text": "C^{S_1}_{e_3} = \\frac{-(1-\\alpha)}{\\varepsilon_2 \\alpha + \\varepsilon_3 (1-\\alpha) - \\varepsilon_1} "
}
]
| https://en.wikipedia.org/wiki?curid=72827238 |
72828156 | Negative methane | Negative ion of methane
Negative methane is the negative ion of methane, meaning that a neutral methane molecule has captured an extra electron and become an ion with a total negative electric charge: CH4-. This kind of ion is also known as an anion. Negative ions are relevant in nature because they have been observed to play important roles in several environments: for instance, they have been confirmed in interstellar space, in plasmas, in the atmosphere of Earth, and in the ionosphere of Titan. Negative ions are also key to the radiocarbon dating method.
Negative ions cannot be described with conventional atomic theory. Quantum mechanical models, including more factors than solely the Coulomb attraction, have to be considered to explain their stability. Such factors are Coulomb potential screening and electron correlation.
Relevance.
Negative methane is important for fundamental science because methane was not expected to produce a stable negative state. It is also relevant because the existence of its negative ion demonstrates an extra property of this powerful greenhouse gas. It is also relevant for plasma science, especially for methane-based plasmas. In addition, it may be important in some atmospheric environments where methane is present, such as the ionosphere of the satellite Titan, where negative ion species have been detected.
Negative ions are metastable because they decay over time, releasing the extra electron. Therefore, they can act as time-dependent sources of thermal (low-energy) electrons in plasma environments. The ubiquitous presence of negative ions in the interstellar medium, for example, prompts the question of an efficient formation mechanism, since they are expected to decay over time. In addition, the extra electron is in general weakly attached to the neutral core, so the ion is expected to lose the additional electron with a large probability, again prompting the question of how it forms.
History.
Negative methane went unidentified for at least two reasons. In mass spectrometers, its characteristic mark at "m/q = -16" is similar to that of the well-known anion of oxygen, O-. Because oxygen is present in most mass spectrometers as a very common contaminant from the atmosphere, detections of any signal at this particular mark of "m/q = -16" were readily attributed to the anion of oxygen and not to that of methane.
Second, in chemistry, methane is the molecular isoelectronic analogue of neon gas. Since neon does not have a known stable negative ion state, methane was not expected to support an extra electron either.
However, its molecular nature provides additional degrees of freedom that allow for the formation of a negative ion: a change of its nuclear configuration can form a Feshbach negative ion resonance, in which the electrons or nuclei of the molecule re-arrange into an excited state capable of supporting the extra electron.
Detection and structure.
The existence of a stable state of negative methane was first reported in 2014. In this report, some of its properties were measured, like its very large average radius (3.5 Å), its long lifetime, and the electron detachment cross-section when interacting with molecules N2 and O2.
The findings of that report (an experiment) are consistent with a quantum chemistry model in which the stable configuration was found to correspond to a linear molecular exciplex (CH2:H2)-, which showed stability on the timescale of hundreds of ps. The experiment of 2014, however, demonstrated stability over the larger timescale of μs, making the ion well suited to detection by standard mass spectrometry techniques.
The mechanism of formation of CH4- is not fully understood. However, it may form under high methane density conditions, probably through a three-body collision.
Electron Affinity of Methane.
The electron affinity ("Eea") of an atom or molecule ("A") is the energy difference between the ground state energy of the corresponding neutral species ("EA") and the ground state energy of the negative ion ("EA-"):
formula_0
In the case of CH4-, dissociation into CH2- + H2 is more likely than releasing the extra electron; therefore, the conventional definition of "Eea" does not apply to methane. The energy difference between CH4- and CH2- + H2 is 0.85 kcal/mol according to the available theoretical model.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " E_{ea} = E_{A} - E_{A^-} "
}
]
| https://en.wikipedia.org/wiki?curid=72828156 |
72828889 | Pseudogamma function | Function that interpolates the factorial
In mathematics, a pseudogamma function is a function that interpolates the factorial. The gamma function is the most famous solution to the problem of extending the notion of the factorial beyond the positive integers only. However, it is clearly not the only solution, as, for any set of points, an infinite number of curves can be drawn through those points. Such a curve, namely one which interpolates the factorial but is not equal to the gamma function, is known as a pseudogamma function. The two most famous pseudogamma functions are Hadamard's gamma function:
formula_0
where formula_1 is the Lerch zeta function. We also have the Luschny factorial:
formula_2
where Γ("x") denotes the classical gamma function
and "ψ"("x") denotes the digamma function. Other related pseudo gamma functions are also known.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H(x)=\\frac{\\psi\\left ( 1 - \\frac{x}{2}\\right )-\\psi\\left ( \\frac{1}{2} - \\frac{x}{2}\\right )}{2\\Gamma (1-x)} = \\frac{\\Phi\\left(-1, 1, -x\\right)}{\\Gamma(-x)}"
},
{
"math_id": 1,
"text": "\\Phi"
},
{
"math_id": 2,
"text": "\\Gamma(x+1)\\left(1-\\frac{\\sin\\left(\\pi x\\right)}{\\pi x}\\left(\\frac{x}{2}\\left(\\psi\\left(\\frac{x+1}{2}\\right)-\\psi\\left(\\frac{x}{2}\\right)\\right)-\\frac{1}{2}\\right)\\right)"
}
]
| https://en.wikipedia.org/wiki?curid=72828889 |
72834766 | Gaussian brackets | In mathematics, Gaussian brackets are a special notation invented by Carl Friedrich Gauss to represent the convergents of a simple continued fraction in the form of a simple fraction. Gauss used this notation in the context of finding solutions of the indeterminate equations of the form formula_0.
This notation should not be confused with the widely prevalent use of square brackets to denote the greatest integer function: formula_1 denotes the greatest integer less than or equal to formula_2. This notation was also invented by Gauss and was used in the third proof of the quadratic reciprocity law. The notation formula_3, denoting the floor function, is now more commonly used to denote the greatest integer less than or equal to formula_2.
The notation.
The Gaussian brackets notation is defined as follows:
formula_4
The expanded form of the expression formula_5 can be described thus: "The first term is the product of all "n" members; after it come all possible products of ("n" -2) members in which the numbers have alternately odd and even indices in ascending order, each starting with an odd index; then all possible products of ("n"-4) members likewise have successively higher alternating odd and even indices, each starting with an odd index; and so on. If the bracket has an odd number of members, it ends with the sum of all members of odd index; if it has an even number, it ends with unity."
With this notation, one can easily verify that
formula_6
The Gaussian brackets satisfy a number of basic properties, each of which can be verified directly from the definition.
The brackets can also be expanded from the front:
formula_7
The value of a bracket is unchanged when the order of its members is reversed:
formula_8
The brackets can be written as a determinant (a continuant):
formula_9
For formula_10 one sets formula_11; with this convention the brackets satisfy the determinant identity
formula_12
Changing the signs of all the members gives
formula_13
Finally, the following special values hold when alternate members are zero:
formula_14
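The defining recurrence translates directly into code; a small Python sketch (helper name chosen for illustration) that also checks the continued-fraction identity above is:
from fractions import Fraction

def gaussian_bracket(a):
    # [] = 1, [a1] = a1, [a1,...,ak] = [a1,...,a_{k-1}]*ak + [a1,...,a_{k-2}]
    prev2, prev1 = 0, 1
    for ak in a:
        prev2, prev1 = prev1, prev1 * ak + prev2
    return prev1

a = [2, 3, 1, 4, 2]
cf = Fraction(0)
for ak in reversed(a):
    cf = 1 / (ak + cf)          # build the continued fraction from the bottom up

print(cf)                                                      # 42/95
print(Fraction(gaussian_bracket(a[1:]), gaussian_bracket(a)))  # also 42/95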
Applications.
The Gaussian brackets have been used extensively by optical designers as a time-saving device in computing the effects of changes in surface power, thickness, and separation on focal length, magnification, and object and image distances.
References.
<templatestyles src="Reflist/styles.css" />
Additional reading.
The following papers give additional details regarding the applications of Gaussian brackets in optics. | [
{
"math_id": 0,
"text": "ax=by\\pm 1 "
},
{
"math_id": 1,
"text": "[x]"
},
{
"math_id": 2,
"text": "x"
},
{
"math_id": 3,
"text": "\\lfloor x \\rfloor "
},
{
"math_id": 4,
"text": "\\begin{align}\n\\quad[\\,\\,] & = 1\\\\[1mm]\n[a_1] & = a_1\\\\[1mm]\n[a_1, a_2] & = [a_1]a_2 + [\\,\\,]\\\\[1mm]\n & = a_1a_2+1\\\\[1mm] \n[a_1, a_2, a_3] & = [a_1, a_2]a_3 + [a_1] \\\\[1mm]\n & = a_1a_2a_3 + a_1 + a_3 \\\\[1mm]\n[a_1,a_2,a_3,a_4] & = [a_1,a_2,a_3]a_4 + [a_1,a_2]\\\\[1mm]\n & = a_1a_2a_3a_4 + a_1a_2 + a_1a_4 + a_3a_4 + 1\\\\[1mm]\n[a_1,a_2,a_3,a_4,a_5] & = [a_1,a_2,a_3,a_4]a_5 + [a_1, a_2,a_3]\\\\[1mm]\n & = a_1a_2a_3a_4a_5 + a_1a_2a_3 + a_1a_2a_5 + a_1a_4a_5 + a_3a_4a_5 + a_1+a_3+a_5\\\\[1mm]\n\\vdots & \\\\[1mm]\n[a_1,a_2,\\ldots,a_n] & = [a_1,a_2,\\ldots,a_{n-1}]a_n + [a_1,a_2,\\ldots,a_{n-2}]\n\\end{align}\n"
},
{
"math_id": 5,
"text": "[a_1,a_2,\\ldots, a_n]"
},
{
"math_id": 6,
"text": " \\cfrac{1}{a_1 + \\cfrac{1}{ a_2 + \\cfrac{1}{a_3 + \\cdots \\frac{\\ddots}{ \\cfrac{1}{a_{n-1} +\\frac{1}{a_n}} } }}} = \\frac{[a_2,\\ldots,a_n]}{[a_1,a_2,\\ldots,a_n]}"
},
{
"math_id": 7,
"text": "\\,\\,[a_1,a_2, a_3, \\ldots, a_n]=a_1[a_2,a_3, \\ldots,a_n] + [a_3,\\ldots,a_n]"
},
{
"math_id": 8,
"text": "\\,\\,[a_1,a_2, \\ldots,a_{n-1},a_n]=[a_n,a_{n-1},\\ldots, a_2,a_1]"
},
{
"math_id": 9,
"text": "\\,\\,[a_1,a_2,\\ldots,a_n] = \n\\begin{vmatrix} \na_1 & -1 & 0 & 0 & \\cdots & 0 & 0 & 0 \\\\[1mm] \n1 & a_2 & -1 & 0 & \\cdots & 0 & 0 & 0 \\\\[1mm]\n0 & 1 & a_3 & -1 & \\cdots & 0 & 0 & 0 \\\\[1mm]\n\\vdots & & & & & & & \\\\[1mm]\n0 & 0 & 0 & 0 & \\cdots & 1 & a_{n-1} & -1 \\\\[1mm]\n0 & 0 & 0 & 0 & \\cdots & 0 & 1 & a_n\n\\end{vmatrix}\n"
},
{
"math_id": 10,
"text": "n=1"
},
{
"math_id": 11,
"text": "[a_2,\\ldots,a_0]=0"
},
{
"math_id": 12,
"text": "\\,\\, \\begin{vmatrix} [a_1,\\ldots,a_n] & [a_1,\\ldots,a_{n-1}]\\\\[1mm] [a_2, \\ldots, a_{n}] & [a_2,\\ldots, a_{n-1}]\\end{vmatrix}=(-1)^n"
},
{
"math_id": 13,
"text": "[-a_1, -a_2, \\ldots, -a_n] = (-1)^n[a_1,a_2, \\ldots,a_n]"
},
{
"math_id": 14,
"text": "\n\\begin{align}\n\\,\\,\\quad[a_1,0,a_3,0,\\ldots,a_{2m+1}] & = a_1+a_3+\\cdots + a_{2m+1}\\\\[1mm]\n[a_1,0,a_3,0,\\ldots,a_{2m+1}, 0] & = 1\\\\[1mm]\n[0, a_2, 0, a_4, \\ldots, a_{2m}] & = 1 \\\\[1mm]\n[0, a_2, 0, a_4, \\ldots, a_{2m}, 0] & = 0 \n\\end{align}\n"
}
]
| https://en.wikipedia.org/wiki?curid=72834766 |
72839 | Foucault pendulum | Device to demonstrate Earth's rotation
The Foucault pendulum or Foucault's pendulum is a simple device named after French physicist Léon Foucault, conceived as an experiment to demonstrate the Earth's rotation. A long and heavy pendulum suspended from the high roof above a circular area was monitored over an extended time period, showing that its plane of oscillation rotated.
The pendulum was introduced in 1851 and was the first experiment to give simple, direct evidence of the Earth's rotation. Foucault followed up in 1852 with a gyroscope experiment to further demonstrate the Earth's rotation. Foucault pendulums today are popular displays in science museums and universities.
Original Foucault pendulum.
Foucault was inspired by observing a thin flexible rod on the axis of a lathe, which vibrated in the same plane despite the rotation of the supporting frame of the lathe.
The first public exhibition of a Foucault pendulum took place in February 1851 in the Meridian of the Paris Observatory. A few weeks later, Foucault made his most famous pendulum when he suspended a brass-coated lead bob with a wire from the dome of the Panthéon, Paris.
Because the latitude of its location was formula_0, the plane of the pendulum's swing made a full circle in approximately formula_1, rotating clockwise approximately 11.3° per hour. The proper period of the pendulum was approximately formula_2, so with each oscillation, the pendulum rotates by about formula_3. Foucault reported observing 2.3 mm of deflection on the edge of a pendulum every oscillation, which is achieved if the pendulum swing angle is 2.1°.
Foucault explained his results in an 1851 paper entitled "Physical demonstration of the Earth's rotational movement by means of the pendulum", published in the "Comptes rendus de l'Académie des Sciences". He wrote that, at the North Pole:
...an oscillatory movement of the pendulum mass follows an arc of a circle whose plane is well known, and to which the inertia of matter ensures an unchanging position in space. If these oscillations continue for a certain time, the movement of the earth, which continues to rotate from west to east, will become sensitive in contrast to the immobility of the oscillation plane whose trace on the ground will seem animated by a movement consistent with the apparent movement of the celestial sphere; and if the oscillations could be perpetuated for twenty-four hours, the trace of their plane would then execute an entire revolution around the vertical projection of the point of suspension.
The original bob used in 1851 at the Panthéon was moved in 1855 to the Conservatoire des Arts et Métiers in Paris. A second temporary installation was made for the 50th anniversary in 1902.
During museum reconstruction in the 1990s, the original pendulum was temporarily displayed at the Panthéon (1995), but was later returned to the Musée des Arts et Métiers before it reopened in 2000. On April 6, 2010, the cable suspending the bob in the Musée des Arts et Métiers snapped, causing irreparable damage to the pendulum bob and to the marble flooring of the museum. The original, now damaged pendulum bob is displayed in a separate case adjacent to the current pendulum display.
An exact copy of the original pendulum has been operating under the dome of the Panthéon, Paris since 1995.
Explanation of mechanics.
At either the Geographic North Pole or Geographic South Pole, the plane of oscillation of a pendulum remains fixed relative to the distant masses of the universe while Earth rotates underneath it, taking one sidereal day to complete a rotation. So, relative to Earth, the plane of oscillation of a pendulum at the North Pole – viewed from above – undergoes a full clockwise rotation during one day; a pendulum at the South Pole rotates counterclockwise.
When a Foucault pendulum is suspended at the equator, the plane of oscillation remains fixed relative to Earth. At other latitudes, the plane of oscillation precesses relative to Earth, but more slowly than at the pole; the angular speed, ω (measured in clockwise degrees per sidereal day), is proportional to the sine of the latitude, φ:
formula_4
where latitudes north and south of the equator are defined as positive and negative, respectively. A "pendulum day" is the time needed for the plane of a freely suspended Foucault pendulum to complete an apparent rotation about the local vertical. This is one sidereal day divided by the sine of the latitude. For example, a Foucault pendulum at 30° south latitude, viewed from above by an earthbound observer, rotates counterclockwise 360° in two days.
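As an illustration (not part of the article), the sine-of-latitude law can be evaluated numerically; the constant and function names below are choices made for this sketch.

import math

SIDEREAL_DAY_H = 23.9345          # length of one sidereal day in hours

def precession_deg_per_hour(latitude_deg):
    # Clockwise precession rate of the swing plane, in degrees per hour.
    return 360.0 * math.sin(math.radians(latitude_deg)) / SIDEREAL_DAY_H

def pendulum_day_hours(latitude_deg):
    # Time for one full apparent rotation of the swing plane.
    return SIDEREAL_DAY_H / math.sin(math.radians(latitude_deg))

# Paris (latitude of the Pantheon, about 48.85 degrees north):
print(precession_deg_per_hour(48.85))   # roughly 11.3 degrees per hour
print(pendulum_day_hours(48.85))        # roughly 31.8 hours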
Using enough wire length, the described circle can be wide enough that the tangential displacement along the measuring circle between two oscillations can be visible to the eye, rendering the Foucault pendulum a spectacular experiment: for example, the original Foucault pendulum in the Panthéon moves circularly, with a 6-metre pendulum amplitude, by about 5 mm each period.
A Foucault pendulum requires care to set up because imprecise construction can cause additional veering which masks the terrestrial effect. Heike Kamerlingh Onnes (Nobel laureate 1913) performed precise experiments and developed a fuller theory of the Foucault pendulum for his doctoral thesis (1879). He observed the pendulum to go over from linear to elliptic oscillation in an hour. By a perturbation analysis, he showed that geometrical imperfection of the system or elasticity of the support wire may cause a beat between two horizontal modes of oscillation. The initial launch of the pendulum is also critical; the traditional way to do this is to use a flame to burn through a thread which temporarily holds the bob in its starting position, thus avoiding unwanted sideways motion.
Notably, veering of a pendulum was observed already in 1661 by Vincenzo Viviani, a disciple of Galileo, but there is no evidence that he connected the effect with the Earth's rotation; rather, he regarded it as a nuisance in his study that should be overcome with suspending the bob on two ropes instead of one.
Air resistance damps the oscillation, so some Foucault pendulums in museums incorporate an electromagnetic or other drive to keep the bob swinging; others are restarted regularly, sometimes with a launching ceremony as an added attraction. Besides air resistance (a heavy, symmetrical and aerodynamic bob is used precisely to reduce the relative effect of such friction forces), the other main engineering problem in creating a 1-meter Foucault pendulum nowadays is said to be ensuring there is no preferred direction of swing.
Related physical systems.
Many physical systems precess in a similar manner to a Foucault pendulum. As early as 1836, the Scottish mathematician Edward Sang contrived and explained the precession of a spinning top. In 1851, Charles Wheatstone described an apparatus that consists of a vibrating spring that is mounted on top of a disk so that it makes a fixed angle φ with the disk. The spring is struck so that it oscillates in a plane. When the disk is turned, the plane of oscillation changes just like the one of a Foucault pendulum at latitude φ.
Similarly, consider a nonspinning, perfectly balanced bicycle wheel mounted on a disk so that its axis of rotation makes an angle φ with the disk. When the disk undergoes a full clockwise revolution, the bicycle wheel will not return to its original position, but will have undergone a net rotation of 2π sin "φ".
Foucault-like precession is observed in a virtual system wherein a massless particle is constrained to remain on a rotating plane that is inclined with respect to the axis of rotation.
Spin of a relativistic particle moving in a circular orbit precesses similar to the swing plane of Foucault pendulum. The relativistic velocity space in Minkowski spacetime can be treated as a sphere "S"3 in 4-dimensional Euclidean space with imaginary radius and imaginary timelike coordinate. Parallel transport of polarization vectors along such sphere gives rise to Thomas precession, which is analogous to the rotation of the swing plane of Foucault pendulum due to parallel transport along a sphere "S"2 in 3-dimensional Euclidean space.
In physics, the evolution of such systems is determined by geometric phases. Mathematically they are understood through parallel transport.
Foucault pendulums around the world.
There are numerous Foucault pendulums at universities, science museums, and the like throughout the world. The United Nations General Assembly Building at the United Nations headquarters in New York City has one. The Oregon Convention Center pendulum is claimed to be the largest, its length approximately , however, there are larger ones listed in the article, such as the one in Gamow Tower at the University of Colorado (39.3 m). There used to be much longer pendulums, such as the pendulum in Saint Isaac's Cathedral, Saint Petersburg, Russia.
South Pole.
The experiment has also been carried out at the South Pole, where it was assumed that the rotation of the Earth would have maximum effect. A pendulum was installed in a six-story staircase of a new station under construction at the Amundsen-Scott South Pole Station. It had a length of and the bob weighed . The location was ideal: no moving air could disturb the pendulum. The researchers confirmed about 24 hours as the rotation period of the plane of oscillation.
See also.
<templatestyles src="Div col/styles.css"/>
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\phi = \\mathrm{48^\\circ 52' N}"
},
{
"math_id": 1,
"text": "\\frac{\\mathrm{23h56'}}{\\sin \\phi} \\approx \\mathrm{31.8\\,h} \\;(\\mathrm{31\\,h\\,50\\,min})"
},
{
"math_id": 2,
"text": "2\\pi\\sqrt{l/g}\\approx 16.5 \\,\\mathrm{s}"
},
{
"math_id": 3,
"text": "9.05 \\times 10^{-4} \\mathrm{rad}"
},
{
"math_id": 4,
"text": "\\omega=360^\\circ\\sin\\varphi\\ /\\mathrm{day},"
}
]
| https://en.wikipedia.org/wiki?curid=72839 |
72853748 | Conjugate gradient squared method | Algorithm for solving matrix-vector equations
In numerical linear algebra, the conjugate gradient squared method (CGS) is an iterative algorithm for solving systems of linear equations of the form formula_0, particularly in cases where computing the transpose formula_1 is impractical. The CGS method was developed as an improvement to the biconjugate gradient method.
Background.
A system of linear equations formula_0 consists of a known matrix formula_2 and a known vector formula_3. To solve the system is to find the value of the unknown vector formula_4. A direct method for solving a system of linear equations is to take the inverse of the matrix formula_2, then calculate formula_5. However, computing the inverse is computationally expensive. Hence, iterative methods are commonly used. Iterative methods begin with a guess formula_6, and on each iteration the guess is improved. Once the difference between successive guesses is sufficiently small, the method has converged to a solution.
As with the conjugate gradient method, biconjugate gradient method, and similar iterative methods for solving systems of linear equations, the CGS method can be used to find solutions to multi-variable optimisation problems, such as power-flow analysis, hyperparameter optimisation, and facial recognition.
Algorithm.
The algorithm is as follows:
Choose an initial guess formula_7, compute the initial residual formula_8, and set formula_9.
Then, for formula_10, repeat the following steps until convergence:
Compute formula_11; if formula_12, the method fails.
If formula_13, set formula_14; otherwise compute formula_15, formula_16 and formula_17.
Solve formula_18, where formula_19 is a preconditioning matrix (taken to be the identity if no preconditioner is used), and compute formula_20.
Compute formula_21 and formula_22, then solve formula_23.
Update the solution formula_24, compute formula_25, and update the residual formula_26.
Check convergence; continue the iteration if necessary.
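A minimal, unpreconditioned sketch of this iteration in Python/NumPy is given below for illustration; it is not taken from the article, the convergence test and variable names are choices made here, and the preconditioner formula_19 is taken as the identity.

import numpy as np

def cgs(A, b, x0=None, tol=1e-10, max_iter=1000):
    # Unpreconditioned CGS: follows the update formulas listed above with M = I.
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    r = b - A @ x                  # initial residual
    r_tilde = r.copy()             # fixed "shadow" residual
    rho_old = 1.0
    p = np.zeros(n)
    q = np.zeros(n)
    for i in range(1, max_iter + 1):
        rho = r_tilde @ r
        if rho == 0.0:
            raise RuntimeError("CGS breakdown: rho = 0")
        if i == 1:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_old
            u = r + beta * q
            p = u + beta * (q + beta * p)
        v = A @ p
        alpha = rho / (r_tilde @ v)
        q = u - alpha * v
        x = x + alpha * (u + q)
        r = r - alpha * (A @ (u + q))
        if np.linalg.norm(r) <= tol * np.linalg.norm(b):
            break
        rho_old = rho
    return x

# Example: a small nonsymmetric system.
A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 2.0, 5.0]])
b = np.array([1.0, 2.0, 3.0])
print(cgs(A, b), np.linalg.solve(A, b))   # the two solutions should agree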
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "A{\\bold x} = {\\bold b}"
},
{
"math_id": 1,
"text": "A^T"
},
{
"math_id": 2,
"text": "A"
},
{
"math_id": 3,
"text": "{\\bold b}"
},
{
"math_id": 4,
"text": "{\\bold x}"
},
{
"math_id": 5,
"text": "\\bold x = A^{-1}\\bold b"
},
{
"math_id": 6,
"text": "\\bold x^{(0)}"
},
{
"math_id": 7,
"text": "{\\bold x}^{(0)}"
},
{
"math_id": 8,
"text": "{\\bold r}^{(0)} = {\\bold b} - A{\\bold x}^{(0)}"
},
{
"math_id": 9,
"text": "\\tilde {\\bold r}^{(0)} = {\\bold r}^{(0)}"
},
{
"math_id": 10,
"text": "i = 1, 2, 3, \\dots"
},
{
"math_id": 11,
"text": "\\rho^{(i-1)} = \\tilde {\\bold r}^{T(i-1)}{\\bold r}^{(i-1)}"
},
{
"math_id": 12,
"text": "\\rho^{(i-1)} = 0"
},
{
"math_id": 13,
"text": "i=1"
},
{
"math_id": 14,
"text": "{\\bold p}^{(1)} = {\\bold u}^{(1)} = {\\bold r}^{(0)}"
},
{
"math_id": 15,
"text": "\\beta^{(i-1)} = \\rho^{(i-1)}/\\rho^{(i-2)}"
},
{
"math_id": 16,
"text": "{\\bold u}^{(i)} = {\\bold r}^{(i-1)} + \\beta_{i-1}{\\bold q}^{(i-1)}"
},
{
"math_id": 17,
"text": "{\\bold p}^{(i)} = {\\bold u}^{(i)} + \\beta^{(i-1)}({\\bold q}^{(i-1)} + \\beta^{(i-1)}{\\bold p}^{(i-1)})"
},
{
"math_id": 18,
"text": "M\\hat {\\bold p}={\\bold p}^{(i)}"
},
{
"math_id": 19,
"text": "M"
},
{
"math_id": 20,
"text": "\\hat {\\bold v} = A\\hat {\\bold p}"
},
{
"math_id": 21,
"text": "\\alpha^{(i)} = \\rho^{(i-1)} / \\tilde {\\bold r}^T \\hat {\\bold v}"
},
{
"math_id": 22,
"text": "{\\bold q}^{(i)} = {\\bold u}^{(i)} - \\alpha^{(i)}\\hat {\\bold v}"
},
{
"math_id": 23,
"text": "M\\hat {\\bold u} = {\\bold u}^{(i)} + {\\bold q}^{(i)}"
},
{
"math_id": 24,
"text": "{\\bold x}^{(i)} = {\\bold x}^{(i-1)} + \\alpha^{(i)} \\hat {\\bold u}"
},
{
"math_id": 25,
"text": "\\hat {\\bold q} = A\\hat {\\bold u}"
},
{
"math_id": 26,
"text": "{\\bold r}^{(i)} = {\\bold r}^{(i-1)} - \\alpha^{(i)}\\hat {\\bold q}"
}
]
| https://en.wikipedia.org/wiki?curid=72853748 |
72866575 | Response coefficient (biochemistry) | Biochemical pathway response measurement
Control coefficients measure the response of a biochemical pathway to changes in enzyme activity. The response coefficient, as originally defined by Kacser and Burns, is a measure of how external factors such as inhibitors, pharmaceutical drugs, or boundary species affect the steady-state fluxes and species concentrations. The flux response coefficient is defined by:
formula_0
where formula_1 is the steady-state pathway flux. Similarly, the concentration response coefficient is defined by the expression:
formula_2
where in both cases formula_3 is the concentration of the external factor. The response coefficient measures how sensitive a pathway is to changes in external factors other than enzyme activities.
The flux response coefficient is related to control coefficients and elasticities through the following relationship:
formula_4
Likewise, the concentration response coefficient is related by the following expression:
formula_5
The summation in both cases accounts for cases where a given external factor, formula_6, can act at multiple sites. For example, a given drug might act on multiple protein sites. The overall response is the sum of the individual responses.
These results show that the action of an external factor, such as a drug, has two components: the elasticity of the targeted reaction with respect to the factor, which measures how sensitive the target site itself is to the factor, and the control coefficient of that reaction, which measures how much influence the targeted step exerts over the steady-state flux or species concentration.
When designing drugs for therapeutic action, both aspects must therefore be considered.
Proof of Response Theorem.
There are various ways to prove the response theorems:
Proof by perturbation.
The perturbation proof by Kacser and Burns is given as follows.
Given the simple linear pathway catalyzed by two enzymes formula_7 and formula_8:
formula_9
where formula_10 is the fixed boundary species. Let us increase the concentration of enzyme formula_7 by an amount formula_11. This will cause the steady-state flux, the concentration of formula_12, and all species downstream of formula_8 to increase. The concentration of formula_10 is now decreased such that the flux and the steady-state concentration of formula_12 are restored to their original values. These changes allow one to write down the following local and system equations for the changes that occurred:
formula_13
There is no formula_14 term in either equation because the concentration of formula_14 is unchanged. Both right-hand sides of the equations are guaranteed to be zero by construction. The term formula_15 can be eliminated by combining both equations. If we also assume that the reaction rate for an enzyme-catalyzed reaction is proportional to the enzyme concentration, then formula_16, and therefore:
formula_17
Since formula_18
this yields:
formula_19.
This proof can be generalized to the case where formula_20 may act at multiple sites.
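As an illustration (not from the original text), the theorem can be checked numerically for the two-step pathway above, assuming reversible mass-action kinetics for the first step and rates proportional to enzyme concentration; all rate constants, enzyme activities and the boundary concentration below are arbitrary choices for this sketch.

k1, k2, k3 = 2.0, 1.0, 3.0      # illustrative rate constants

def steady_state(e1, e2, X):
    # Steady state of X -> S -> with v1 = e1*(k1*X - k2*S) and v2 = e2*k3*S.
    S = e1 * k1 * X / (e1 * k2 + e2 * k3)
    J = e2 * k3 * S
    return S, J

def scaled_derivative(f, x, h=1e-6):
    # Scaled (logarithmic) sensitivity (df/dx)*(x/f) by central differences.
    return (f(x * (1 + h)) - f(x * (1 - h))) / (2 * h * f(x))

e1, e2, X = 1.0, 1.0, 5.0
S0, J0 = steady_state(e1, e2, X)

R = scaled_derivative(lambda z: steady_state(e1, e2, z)[1], X)     # response of J to X
C = scaled_derivative(lambda z: steady_state(z, e2, X)[1], e1)     # flux control coefficient of e1
eps = scaled_derivative(lambda z: e1 * (k1 * z - k2 * S0), X)      # elasticity of v1 to X at fixed S

print(R, C * eps)    # both equal 1.0 here (C = 0.75 and eps = 4/3)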
Pure algebraic proof.
The pure algebraic proof is more complex and requires consideration of the system equation:
formula_21
where formula_22 is the stoichiometry matrix and formula_23 the rate vector. In this derivation, we assume there are no conserved moieties in the network, but this doesn't invalidate the proof. Using the chain rule and differentiating with respect to formula_24 yields, after rearrangement:
formula_25
The bracketed inverse, multiplied by formula_22, is the matrix of unscaled concentration control coefficients, so that after scaling it is possible to write:
formula_26
To derive the flux response coefficient theorem, we must use the additional equation:
formula_27
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " R_x^J=\\frac{d J}{d x} \\frac{x}{J} "
},
{
"math_id": 1,
"text": " J "
},
{
"math_id": 2,
"text": " R_x^s=\\frac{d s}{d x} \\frac{x}{s} "
},
{
"math_id": 3,
"text": "x"
},
{
"math_id": 4,
"text": " R_x^J=\\sum_{i=1}^n C_{e_i}^J \\varepsilon_x^{v_i} "
},
{
"math_id": 5,
"text": " R_x^s=\\sum_{i=1}^n C_{e_i}^s \\varepsilon_x^{v_i} "
},
{
"math_id": 6,
"text": " x "
},
{
"math_id": 7,
"text": "e_1"
},
{
"math_id": 8,
"text": "e_2"
},
{
"math_id": 9,
"text": " X \\stackrel{e_1}\\longrightarrow S \\stackrel{e_2}\\longrightarrow "
},
{
"math_id": 10,
"text": "X"
},
{
"math_id": 11,
"text": "\\delta e_1"
},
{
"math_id": 12,
"text": "S"
},
{
"math_id": 13,
"text": "\n\\begin{array}{r}\n\\left. \\dfrac{\\delta v_1}{v_1} = \\varepsilon_x^1 \\dfrac{\\delta x}{x}+\\varepsilon_{e_1}^1 \\dfrac{\\delta e_1}{e_1} = 0 \\right\\} \\text { Local equation } \\\\[5pt]\n\\left. \\dfrac{\\delta J}{J} = R_x^J \\dfrac{\\delta x}{x}+C_{e_1}^J \\dfrac{\\delta e_1}{e_1}=0\n\\right\\} \\text { System equation }\n\\end{array} \n"
},
{
"math_id": 14,
"text": "s"
},
{
"math_id": 15,
"text": " \\delta e_1/e_1 "
},
{
"math_id": 16,
"text": "\\varepsilon_{e_1}^1=1 "
},
{
"math_id": 17,
"text": " 0=R_x^J \\frac{\\delta x}{x}-C_{e_1}^J \\varepsilon_x^1 \\frac{\\delta x}{x} "
},
{
"math_id": 18,
"text": " \\delta e_1/e_1 \\neq 0 "
},
{
"math_id": 19,
"text": " R_x^J=C_{e_1}^J \\varepsilon_x^1 "
},
{
"math_id": 20,
"text": " X"
},
{
"math_id": 21,
"text": " {\\bf N} {\\bf v} (s (p), p) = 0"
},
{
"math_id": 22,
"text": " {\\bf N} "
},
{
"math_id": 23,
"text": " {\\bf v} "
},
{
"math_id": 24,
"text": " p "
},
{
"math_id": 25,
"text": " \\dfrac{ds}{dp} = \\left[ -{\\bf N} \\dfrac{\\partial v}{\\partial s} \\right]^{-1} \\dfrac{\\partial v}{\\partial p} "
},
{
"math_id": 26,
"text": " R^s_p = C^s_v \\varepsilon^v_p "
},
{
"math_id": 27,
"text": " {\\bf v} = {\\bf v} ({\\bf s} (p), p) "
}
]
| https://en.wikipedia.org/wiki?curid=72866575 |
72867679 | Biochemical systems equation | The biochemical systems equation is a compact equation of nonlinear differential equations for describing a kinetic model for any network of coupled biochemical reactions and transport processes.
The equation is expressed in the following form:
formula_0
The notation for the dependent variable x varies among authors. For example, some authors use s, indicating species. x is used here to match the state space notation used in control theory but either notation is acceptable.
formula_1 is the stoichiometry matrix, which is an formula_2 by formula_3 matrix of stoichiometry coefficients. formula_2 is the number of species and formula_3 the number of biochemical reactions. The notation for formula_1 is also variable. In constraint-based modeling the symbol formula_1 tends to be used to indicate 'stoichiometry'. However, in biochemical dynamic modeling and sensitivity analysis, formula_1 is more commonly used to indicate 'number'. In the chemistry domain, the symbol used for the stoichiometry matrix is highly variable, though the symbols S and N have been used in the past.
formula_4 is an n-dimensional column vector of reaction rates, and formula_5 is a p-dimensional column vector of parameters.
Example.
Given the biochemical network:
formula_6
where formula_7 and formula_8 are fixed species to ensure the system is open. The system equation can be written as:
formula_9 formula_10
So that:
formula_11 formula_12
The elements of the rate vector will be rate equations that are functions of one or more species formula_13 and parameters, p. In the example, these might be simple mass-action rate laws such as formula_14 where formula_15 is the rate constant parameter. The particular laws chosen will depend on the specific system under study. Assuming mass-action kinetics, the above equation can be written in complete form as:
formula_11 formula_16
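A minimal simulation of this example in Python with SciPy is sketched below for illustration; it is not part of the article, and the rate-constant values and the concentration of formula_7 are arbitrary choices made here.

import numpy as np
from scipy.integrate import solve_ivp

N = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])    # stoichiometry matrix (3 species by 4 reactions)

k = np.array([1.0, 2.0, 1.5, 3.0])       # rate constants k1..k4
Xo = 10.0                                # fixed boundary species

def v(x):
    # Mass-action rate vector [k1*Xo, k2*x1, k3*x2, k4*x3].
    return np.array([k[0] * Xo, k[1] * x[0], k[2] * x[1], k[3] * x[2]])

def dxdt(t, x):
    return N @ v(x)

sol = solve_ivp(dxdt, (0.0, 20.0), y0=[0.0, 0.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])    # approaches the steady state [k1*Xo/k2, k1*Xo/k3, k1*Xo/k4]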
Analysis.
The system equation can be analyzed by looking at the linear response of the equation around the steady-state with respect to the parameter formula_17. At steady-state, the system equation is set to zero and given by:
formula_18
Differentiating the equation with respect to formula_19 and rearranging gives:
formula_20
This derivation assumes that the stoichiometry matrix has full rank. If this is not the case, then the inverse won't exist.
Example.
For example, consider the same problem from the previous section of a linear chain. The matrix formula_21 is the unscaled elasticity matrix:
formula_22
In this specific problem there are 3 species (formula_23) and 4 reaction steps (formula_24), the elasticity matrix is therefore a formula_25 matrix. However, a number of entries in the matrix will be zero. For example formula_26 will be zero since formula_27 has no effect on formula_28. The matrix, therefore, will contain the following entries:
formula_29
The parameter matrix depends on which parameters are considered. In metabolic control analysis, a common set of parameters is the enzyme activities. For the sake of argument, we can equate the rate constants with the enzyme activity parameters. We also assume that each enzyme, formula_30, can only affect its own step and no other. The matrix formula_31 is the unscaled elasticity matrix with respect to the parameters. Since there are 4 reaction steps and 4 corresponding parameters, the matrix will be a 4 by 4 matrix. Since each parameter only affects one reaction, the matrix will be a diagonal matrix:
formula_32
Since there are 3 species and 4 reactions, the resulting matrix formula_33 will be a 3 by 4 matrix
formula_34
formula_35
formula_36
formula_37
Each expression in the matrix describes how a given parameter influences the steady-state concentration of a given species. Note that this is the unscaled derivative. It is often the case that the derivative is scaled by the parameter and concentration to eliminate units as well as turn the measure into a relative change.
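As a numerical counterpart to these symbolic expressions (an illustration, not part of the article), the unscaled sensitivities can be evaluated directly from the formula in the Analysis section once numerical values are chosen; the rate constants and boundary concentration below are arbitrary.

import numpy as np

N = np.array([[1.0, -1.0, 0.0, 0.0],
              [0.0, 1.0, -1.0, 0.0],
              [0.0, 0.0, 1.0, -1.0]])
k = np.array([1.0, 2.0, 1.5, 3.0])       # rate constants k1..k4, used as the parameters
Xo = 10.0                                # fixed boundary species

x_ss = np.array([k[0] * Xo / k[1], k[0] * Xo / k[2], k[0] * Xo / k[3]])   # steady state

# Unscaled elasticities for the mass-action rates v = [k1*Xo, k2*x1, k3*x2, k4*x3]:
dvdx = np.array([[0.0, 0.0, 0.0],
                 [k[1], 0.0, 0.0],
                 [0.0, k[2], 0.0],
                 [0.0, 0.0, k[3]]])                  # dv/dx, a 4 by 3 matrix
dvdp = np.diag([Xo, x_ss[0], x_ss[1], x_ss[2]])      # dv/dk, a 4 by 4 diagonal matrix

dxdp = -np.linalg.inv(N @ dvdx) @ N @ dvdp           # unscaled dx/dk at the steady state
print(dxdp)   # for example, d x1 / d k1 = Xo/k2 = 5.0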
Assumptions.
The biochemical systems equation makes two key assumptions:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\dfrac{{\\bf dx}}{dt} = {\\bf N} {\\bf v} ({\\bf x} (p), p) "
},
{
"math_id": 1,
"text": " \\bf N "
},
{
"math_id": 2,
"text": " m "
},
{
"math_id": 3,
"text": " n "
},
{
"math_id": 4,
"text": " \\bf v "
},
{
"math_id": 5,
"text": " p "
},
{
"math_id": 6,
"text": " X_o \\stackrel{v_1}\\longrightarrow\\ x_1 \\stackrel{v_2}\\longrightarrow\\ x_2 \\stackrel{v_3}\\longrightarrow\\ x_3 \\stackrel{v_4}\\longrightarrow\\ X_1 "
},
{
"math_id": 7,
"text": " X_o "
},
{
"math_id": 8,
"text": " X_1 "
},
{
"math_id": 9,
"text": "\n\\mathbf{N} = \\begin{bmatrix}\n 1 & -1 & \\phantom{+}0 & \\phantom{+}0 \\\\\n 0 & \\phantom{+}1 & -1 & \\phantom{+}0 \\\\\n 0 & \\phantom{+}0 & \\phantom{+}1 & -1 \\\\\n\\end{bmatrix},\\ \n"
},
{
"math_id": 10,
"text": "\n\\mathbf{v} = \\begin{bmatrix}\n v_1 \\\\\n v_2 \\\\\n v_3 \\\\\n v_4 \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 11,
"text": "\n\\begin{bmatrix}\n \\dfrac{dx_1}{dt} \\\\[4pt]\n \\dfrac{dx_2}{dt} \\\\[4pt]\n \\dfrac{dx_3}{dt} \\\\[4pt]\n \\dfrac{dx_4}{dt} \\\\[4pt]\n\\end{bmatrix}\n= \\begin{bmatrix}\n 1 & -1 & \\phantom{+}0 & \\phantom{+}0 \\\\\n 0 & \\phantom{+}1 & -1 & \\phantom{+}0 \\\\\n 0 & \\phantom{+}0 & \\phantom{+}1 & -1 \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 12,
"text": "\n\\begin{bmatrix}\n v_1 \\\\\n v_2 \\\\\n v_3 \\\\\n v_4 \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 13,
"text": " x_i "
},
{
"math_id": 14,
"text": " v_2 = k_2 x_1 "
},
{
"math_id": 15,
"text": " k_2 "
},
{
"math_id": 16,
"text": "\n\\begin{bmatrix}\n k_1 X_o \\\\\n k_2 x_1 \\\\\n k_3 x_2 \\\\\n k_4 x_3 \\\\\n\\end{bmatrix}\n"
},
{
"math_id": 17,
"text": " \\bf p "
},
{
"math_id": 18,
"text": " 0 = {\\bf N} {\\bf v} ({\\bf x} ({\\bf p}), {\\bf p}) "
},
{
"math_id": 19,
"text": " {\\bf p} "
},
{
"math_id": 20,
"text": " \\dfrac{d{\\bf x}}{d{\\bf p}} = -\\left( {\\bf N} \\frac{\\partial \\mathbf{v}}{\\partial \\mathbf{x}}\\right)^{-1} {\\bf N} \\frac{\\partial \\mathbf{v}}{\\partial \\mathbf{p}} "
},
{
"math_id": 21,
"text": "\\frac{\\partial \\mathbf{v}}{\\partial \\mathbf{x}}"
},
{
"math_id": 22,
"text": "\n\\mathcal{E} = \n\\begin{bmatrix} \n \\dfrac{\\partial v_1}{\\partial x_1} & \\cdots & \\dfrac{\\partial v_1}{\\partial x_m} \\\\ \\vdots & \\ddots & \\vdots \\\\ \n\\dfrac{\\partial v_n}{\\partial x_1} & \\cdots & \\dfrac{\\partial v_n}{\\partial x_m} \n\\end{bmatrix}. \n"
},
{
"math_id": 23,
"text": "m=3"
},
{
"math_id": 24,
"text": " n = 4"
},
{
"math_id": 25,
"text": " m \\times n = 3\\ \\mbox{by}\\ 4 "
},
{
"math_id": 26,
"text": " \\partial v_1/\\partial x_3 "
},
{
"math_id": 27,
"text": " x_3 "
},
{
"math_id": 28,
"text": " v_1 "
},
{
"math_id": 29,
"text": "\n\\mathcal{E} = \n\\begin{bmatrix} \n \\dfrac{\\partial v_1}{\\partial x_1} & 0 & 0 \\\\ \n \\dfrac{\\partial v_2}{\\partial x_1} & \\dfrac{\\partial v_2}{\\partial x_2} & 0 \\\\ \n 0 & \\dfrac{\\partial v_3}{\\partial x_2} & \\dfrac{\\partial v_3}{\\partial x_3} \\\\ \n 0 & 0 & \\dfrac{\\partial v_4}{\\partial x_3} \\\\ \n\\end{bmatrix}. \n"
},
{
"math_id": 30,
"text": " k_i"
},
{
"math_id": 31,
"text": "\\frac{\\partial \\mathbf{v}}{\\partial \\mathbf{p}}"
},
{
"math_id": 32,
"text": "\n\\mathcal{E} = \n\\begin{bmatrix} \n \\dfrac{\\partial v_1}{\\partial k_1} & 0 & 0 & 0 \\\\ \n 0 & \\dfrac{\\partial v_2}{\\partial k_2} & 0 & 0 \\\\ \n 0 & 0 & \\dfrac{\\partial v_3}{\\partial k_3} & 0 \\\\ \n 0 & 0 & & \\dfrac{\\partial v_4}{\\partial k_4} \\\\ \n\\end{bmatrix}. \n"
},
{
"math_id": 33,
"text": " \\frac{d{\\bf x}}{d {\\bf p}} "
},
{
"math_id": 34,
"text": " D = \\mathcal{E}^{1}_{1} \\mathcal{E}^{2}_{2} (\\mathcal{E}^{3}_{3}-\\mathcal{E}^{4}_{3})+\\mathcal{E}^{1}_{1} \\mathcal{E}^{3}_{2} \\mathcal{E}^{4}_{3}-\\mathcal{E}^{1}_{2} \\mathcal{E}^{3}_{2} \\mathcal{E}^{4}_{3} "
},
{
"math_id": 35,
"text": " \\vphantom{ } "
},
{
"math_id": 36,
"text": " \\frac{d{\\bf x}}{d {\\bf p}} = \\frac{1}{D}\n\\left[\n\\begin{array}{ll}\n\\mathcal{E}^{1}_{k_1} (\\mathcal{E}^{2}_{2} (\\mathcal{E}^{3}_{3}-\\mathcal{E}^{4}_{3})+\\mathcal{E}^{3}_{2} \\mathcal{E}^{4}_{3}) & \n-\\mathcal{E}^{3}_{2} \\mathcal{E}^{4}_{3} \\mathcal{E}^{2}_{k_2} \\\\\n\n\\mathcal{E}^{1}_{2} \\mathcal{E}^{1}_{k_1} (\\mathcal{E}^{3}_{3}-\\mathcal{E}^{4}_{3}) & \n\n\\mathcal{E}^{1}_{1} \\mathcal{E}^{2}_{k_2} (\\mathcal{E}^{3}_{3}-\\mathcal{E}^{4}_{3}) \\\\\n\n\\mathcal{E}^{1}_{2} \\mathcal{E}^{3}_{2} \\mathcal{E}^{1}_{k_1} & \n\\mathcal{E}^{1}_{1} \\mathcal{E}^{3}_{2} \\mathcal{E}^{2}_{k_2} \\\\\n\\end{array}\n\\right.\n"
},
{
"math_id": 37,
"text": "\n\\qquad\\qquad\\qquad\\quad\n\\left.\n\\begin{array}{ll}\n\\mathcal{E}^{2}_{2} \\mathcal{E}^{4}_{3} \\mathcal{E}^{3}_{k_3} &\n\\mathcal{E}^{2}_{2} \\mathcal{E}^{3}_{3} \\mathcal{E}^{4}_{k_4} \\\\\n\n\\mathcal{E}^{4}_{3} \\mathcal{E}^{3}_{k_3} (\\mathcal{E}^{1}_{1}-\\mathcal{E}^{1}_{2})\n& \n\\mathcal{E}^{3}_{3} \\mathcal{E}^{4}_{k_4} (\\mathcal{E}^{1}_{2}-\\mathcal{E}^{1}_{1}) \\\\\n\n\\mathcal{E}^{1}_{1} \\mathcal{E}^{2}_{2} \\mathcal{E}^{3}_{k_3} & \n-\\mathcal{E}^{4}_{k_4} (\\mathcal{E}^{1}_{1} (\\mathcal{E}^{2}_{2}-\\mathcal{E}^{3}_{2})+\\mathcal{E}^{1}_{2} \\mathcal{E}^{3}_{2}) \\\\\n\\end{array}\n\\right]\n"
}
]
| https://en.wikipedia.org/wiki?curid=72867679 |
72876243 | Lomonosov's invariant subspace theorem | Lomonosov's invariant subspace theorem is a mathematical theorem from functional analysis concerning the existence of invariant subspaces of a linear operator on some complex Banach space. The theorem was proved in 1973 by the Russian–American mathematician Victor Lomonosov.
Lomonosov's invariant subspace theorem.
Notation and terminology.
Let formula_0 be the space of bounded linear operators from some space formula_1 to itself. For an operator formula_2 we call a closed subspace formula_3 an invariant subspace if formula_4, i.e. formula_5 for every formula_6.
Theorem.
Let formula_1 be an infinite dimensional complex Banach space, formula_2 be compact and such that formula_7. Further let formula_8 be an operator that commutes with formula_9. Then there exists an invariant subspace formula_10 of the operator formula_11, i.e. formula_12.
Citations.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathcal{B}(X):=\\mathcal{B}(X,X)"
},
{
"math_id": 1,
"text": "X"
},
{
"math_id": 2,
"text": "T\\in\\mathcal{B}(X)"
},
{
"math_id": 3,
"text": "M\\subset X,\\;M\\neq \\{0\\}"
},
{
"math_id": 4,
"text": "T(M)\\subset M"
},
{
"math_id": 5,
"text": "Tx\\in M"
},
{
"math_id": 6,
"text": "x\\in M"
},
{
"math_id": 7,
"text": "T\\neq 0"
},
{
"math_id": 8,
"text": "S\\in\\mathcal{B}(X)"
},
{
"math_id": 9,
"text": "T"
},
{
"math_id": 10,
"text": "M"
},
{
"math_id": 11,
"text": "S"
},
{
"math_id": 12,
"text": "S(M)\\subset M"
}
]
| https://en.wikipedia.org/wiki?curid=72876243 |
72876407 | Faxén integral | In mathematics, the Faxén integral (also named Faxén function) is the following integral
formula_0
The integral is named after the Swedish physicist Olov Hilding Faxén, who published it in 1921 in his PhD thesis.
"n"-dimensional Faxén integral.
More generally one defines the "formula_1-dimensional Faxén integral" as
formula_2
with
formula_3 and formula_4
for formula_5 and
formula_6
The parameter formula_7 is only for convenience in calculations.
Properties.
Let formula_8 denote the Gamma function, then formula_9 and formula_10
For formula_11 one has the following relationship to the Scorer function
formula_12
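As an illustrative numerical check (not part of the article), the defining integral and the two special values above can be evaluated by quadrature; the parameter values are arbitrary choices for this sketch.

import math
from scipy.integrate import quad

def faxen(alpha, beta, x):
    # Faxen integral Fi(alpha, beta; x) for real parameters (0 <= alpha < 1, beta > 0).
    integrand = lambda t: math.exp(-t + x * t**alpha) * t**(beta - 1)
    value, _ = quad(integrand, 0.0, math.inf)
    return value

print(faxen(0.5, 1.5, 0.0), math.gamma(1.5))                    # both about 0.8862
print(faxen(0.0, 2.0, 0.7), math.exp(0.7) * math.gamma(2.0))    # both about 2.0138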
Asymptotics.
For formula_13 we have the following asymptotics: formula_14 and formula_15 | [
{
"math_id": 0,
"text": "\\operatorname{Fi}(\\alpha,\\beta;x)=\\int_0^{\\infty} \\exp(-t+xt^{\\alpha})t^{\\beta-1}\\mathrm{d}t,\\qquad (0\\leq \\operatorname{Re}(\\alpha) <1,\\;\\operatorname{Re}(\\beta)>0)."
},
{
"math_id": 1,
"text": "n"
},
{
"math_id": 2,
"text": "I_n(x)=\\lambda_n\\int_0^{\\infty}\\cdots \\int_0^{\\infty}t_1^{\\beta_1-1}\\cdots t_n^{\\beta_n-1}e^{-f(t_1,\\dots,t_n;x)}\\mathrm{d}t_1\\cdots \\mathrm{d}t_n,"
},
{
"math_id": 3,
"text": "f(t_1,\\dots,t_n;x):=\\sum\\limits_{j=1}^n t_j^{\\mu_j}-xt_1^{\\alpha_1}\\cdots t_n^{\\alpha_n}\\quad"
},
{
"math_id": 4,
"text": "\\quad\\lambda_n:=\\prod\\limits_{j=1}^n\\mu_j"
},
{
"math_id": 5,
"text": "x \\in \\C"
},
{
"math_id": 6,
"text": "(0<\\alpha_i <\\mu_i,\\;\\operatorname{Re}(\\beta_i)>0,\\; i=1,\\dots,n)."
},
{
"math_id": 7,
"text": "\\lambda_n"
},
{
"math_id": 8,
"text": "\\Gamma"
},
{
"math_id": 9,
"text": "\\operatorname{Fi}(\\alpha,\\beta;0)=\\Gamma(\\beta),"
},
{
"math_id": 10,
"text": "\\operatorname{Fi}(0,\\beta;x)=e^{x}\\Gamma(\\beta)."
},
{
"math_id": 11,
"text": "\\alpha=\\beta=\\tfrac{1}{3}"
},
{
"math_id": 12,
"text": "\\operatorname{Fi}(\\tfrac{1}{3},\\tfrac{1}{3};x)=3^{2/3}\\pi \\operatorname{Hi}(3^{-1/3}x)."
},
{
"math_id": 13,
"text": "x\\to \\infty"
},
{
"math_id": 14,
"text": "\\operatorname{Fi}(\\alpha,\\beta;-x)\\sim \\frac{\\Gamma(\\beta/\\alpha)}{\\alpha y^{\\beta/\\alpha}},"
},
{
"math_id": 15,
"text": "\\operatorname{Fi}(\\alpha,\\beta;x)\\sim \\left(\\frac{2\\pi}{1-\\alpha}\\right)^{1/2}(\\alpha x)^{(2\\beta-1)/(2-2\\alpha)}\\exp\\left((1-\\alpha)(\\alpha^{\\alpha}y)^{1/(1-\\alpha)}\\right)."
}
]
| https://en.wikipedia.org/wiki?curid=72876407 |
72877 | Odometer | Instrument used for measuring the distance traveled by a vehicle
An odometer or odograph is an instrument used for measuring the distance traveled by a vehicle, such as a bicycle or car. The device may be electronic, mechanical, or a combination of the two (electromechanical). The noun derives from Ancient Greek "hodómetron", from "hodós" ("path" or "gateway") and "métron" ("measure"). Early forms of the odometer existed in the ancient Greco-Roman world as well as in ancient China. In countries using Imperial units or US customary units it is sometimes called a mileometer or milometer, the former name especially being prevalent in the United Kingdom and among members of the Commonwealth.
History.
Classical Era.
Possibly the first evidence for the use of an odometer can be found in the works of the ancient Roman Pliny (NH 6. 61-62) and the ancient Greek Strabo (11.8.9). Both authors list the distances of routes traveled by Alexander the Great (r. 336-323 BC) as measured by his bematists Diognetus and Baeton. However, the high accuracy of the bematists' measurements rather indicates the use of a mechanical device. For example, the section between the cities Hecatompylos and Alexandria Areion, which later became a part of the Silk Road, was given by Alexander's bematists as 575 Roman miles (529 English miles) long, that is with a deviation of 0.2% from the actual distance (531 English miles). From the nine surviving bematists' measurements in Pliny's "Naturalis Historia" eight show a deviation of less than 5% from the actual distance, three of them being within 1%. Since these minor discrepancies can be adequately explained by slight changes in the tracks of roads during the last 2300 years, the overall accuracy of the measurements implies that the bematists already must have used a sophisticated device for measuring distances, although there is no direct mention of such a device.
An odometer for measuring distance was first described by Vitruvius around 27 and 23 BC, although the actual inventor may have been Archimedes of Syracuse (c. 287 BC – c. 212 BC), perhaps during the First Punic War. Hero of Alexandria (10 AD – 70 AD) describes a similar device in chapter 34 of his "Dioptra". The machine was also used in the time of Roman Emperor Commodus (c. 192 AD), although after this point in time there seems to be a gap between its use in Roman times and that of the 15th century in Western Europe. Some researchers have speculated that the device might have included technology similar to that of the Greek Antikythera mechanism.
The odometer of Vitruvius was based on chariot wheels of 4 Roman feet (1.18 m) diameter turning 400 times in one Roman mile (about 1,480 m). For each revolution a pin on the axle engaged a 400-tooth cogwheel thus turning it one complete revolution per mile. This engaged another gear with holes along the circumference, where pebbles ("calculus") were located, that were to drop one by one into a box. The distance traveled would thus be given simply by counting the number of pebbles. Whether this instrument was ever built at the time is disputed. Leonardo da Vinci later tried to build it himself according to the description, but failed. However, in 1981 engineer Andre Sleeswyk built his own replica, replacing the square-toothed gear designs of Leonardo with the triangular, pointed teeth found in the "Antikythera mechanism". With this modification, the Vitruvius odometer functioned perfectly.
Imperial China.
Han Dynasty and Three Kingdoms period.
The odometer was also independently invented in ancient China, possibly by the prolific inventor and early scientist Zhang Heng (78 AD – 139 AD) of the Han dynasty. By the 3rd century (during the Three Kingdoms Period), the Chinese had termed the device as the 'jì lĭ gŭ chē' (記里鼓車), or 'li-recording drum carriage' (Note: the modern measurement of li = ). Chinese texts of the 3rd century tell of the mechanical carriage's functions, and as one li is traversed, a mechanical-driven wooden figure strikes a drum, and when ten li is traversed, another wooden figure would strike a gong or a bell with its mechanical-operated arm.
Despite its association with Zhang Heng or even the later Ma Jun (c. 200–265), there is evidence to suggest that the invention of the odometer was a gradual process in Han Dynasty China that centered around the "huang men" court people (i.e. eunuchs, palace officials, attendants and familiars, actors, acrobats, etc.) that would follow the musical procession of the royal 'drum-chariot'. The historian Joseph Needham asserts that it is no surprise this social group would have been responsible for such a device, since there is already other evidence of their craftsmanship with mechanical toys to delight the emperor and the court. There is speculation that some time in the 1st century BC (during the Western Han Dynasty), the beating of drums and gongs were mechanically-driven by working automatically off the rotation of the road-wheels. This might have actually been the design of one Luoxia Hong (c. 110 BC), yet by 125 AD the mechanical odometer carriage in China was already known (depicted in a mural of the Xiaotangshan Tomb).
The odometer was used also in subsequent periods of Chinese history. In the historical text of the "Jin Shu" (635 AD), the oldest part of the compiled text, the book known as the "Cui Bao" (c. 300 AD), recorded the use of the odometer, providing description (attributing it to the Western Han era, from 202 BC–9 AD). The passage in the "Jin Shu" expanded upon this, explaining that it took a similar form to the mechanical device of the south-pointing chariot invented by Ma Jun (200–265, see also differential gear). As recorded in the "Song Shi" of the Song Dynasty (960-1279 AD), the odometer and south-pointing chariot were combined into one wheeled device by engineers of the 9th century, 11th century, and 12th century. The "Sunzi Suanjing" (Master Sun's Mathematical Manual), dated from the 3rd century to 5th century, presented a mathematical problem for students involving the odometer. It involved a given distance between two cities, the small distance needed for one rotation of the carriage's wheel, and the posed question of how many rotations the wheels would have in all if the carriage was to travel between point A and B.
Song Dynasty.
The historical text of the "Song Shi" (1345 AD), recording the people and events of the Chinese Song Dynasty (960–1279), also mentioned the odometer used in that period. However, unlike written sources of earlier periods, it provided a much more thoroughly detailed description of the device that harkens back to its ancient form (Wade-Giles spelling):
The odometer. [The mile-measuring carriage] is painted red, with pictures of flowers and birds on the four sides, and constructed in two storeys, handsomely adorned with carvings. At the completion of every li, the wooden figure of a man in the lower storey strikes a drum; at the completion of every ten li, the wooden figure in the upper storey strikes a bell. The carriage-pole ends in a phoenix-head, and the carriage is drawn by four horses. The escort was formerly of 18 men, but in the 4th year of the Yung-Hsi reign-period (987 AD) the emperor Thai Tsung increased it to 30. In the 5th year of the Thien-Sheng reign-period (1027 AD) the Chief Chamberlain Lu Tao-lung presented specifications for the construction of odometers as follows:
What follows is a long dissertation made by the Chief Chamberlain Lu Daolong on the ranging measurements and sizes of wheels and gears, along with a concluding description at the end of how the device ultimately functions:
The vehicle should have a single pole and two wheels. On the body are two storeys, each containing a carved wooden figure holding a drumstick. The road-wheels are each 6 ft in diameter, and 18 ft in circumference, one evolution covering 3 paces. According to ancient standards the pace was equal to 6 ft and 300 paces to a li; but now the li is reckoned as 360 paces of 5 ft each.
[Note: the measurement of the Chinese-mile unit, the li, was changed over time, as the li in Song times differed from the length of a li in Han times.]
The vehicle wheel (li lun) is attached to the left road-wheel; it has a diameter of 1.38 ft with a circumference of 4.14 ft, and has 18 cogs (chhih) 2.3 inches apart. There is also a lower horizontal wheel (hsia phing lun), of diameter 4.14 ft and circumference 12.42 ft, with 54 cogs, the same distance apart as those on the vertical wheel (2.3 inches). (This engages with the former.)
Upon a vertical shaft turning with this wheel, there is fixed a bronze "turning-like-the-wind wheel" (hsuan feng lun) which has (only) 3 cogs, the distance between these being 1.2 inches. (This turns the following one.) In the middle is a horizontal wheel, 4 ft in diameter, and 12 ft circumference, with 100 cogs, the distance between these cogs being the same as on the "turning-like-the-wind wheel" (1.2 inches).
Next, there is fixed (on the same shaft) a small horizontal wheel (hsiao phing lun) 3.3 inches in diameter and 1 ft in circumference, having 10 cogs 1.5 inches apart. (Engaging with this) there is an upper horizontal wheel (shang phing lun) having a diameter of 3.3 ft and a circumference of 10 ft, with 100 cogs, the same distance apart as those of the small horizontal wheel (1.5 inches).
When the middle horizontal wheel has made 1 revolution, the carriage will have gone 1 li and the wooden figure in the lower story will strike the drum. When the upper horizontal wheel has made 1 revolution, the carriage will have gone 10 li and the figure in the upper storey will strike the bell. The number of wheels used, great and small, is 8 in all, with a total of 285 teeth. Thus the motion is transmitted as if by the links of a chain, the "dog-teeth" mutually engaging with each other, so that by due revolution everything comes back to its original starting point (ti hsiang kou so, chhuan ya hsiang chih, chou erh fu shih).
Subsequent developments.
Odometers were first developed in the 1600s for wagons and other horse-drawn vehicles in order to measure distances traveled.
Levinus Hulsius published the odometer in 1604 in his work "Gründtliche Beschreibung deß Diensthafften und Nutzbahrn Instruments Viatorii oder Wegzählers, So zu Fuß, zu Pferdt unnd zu Fußen gebraucht werden kann, damit mit geringer mühe zu wissen, wie weit man gegangen, geritten, oder gefahren sey: als auch zu erfahren, ohne messen oder zehlen, wie weit von einem Orth zum andern. Daneben wird auch der grosse verborgene Wegweiser angezeiget und vermeldet".
In 1645, the French mathematician Blaise Pascal invented the "pascaline". Though not an odometer, the "pascaline" utilized gears to compute measurements. Each gear contained 10 teeth. The first gear advanced the next gear one position when moved one complete revolution, the same principle employed on modern mechanical odometers.
Odometers were developed for ships in 1698 with the odometer invented by the Englishman Thomas Savery. Benjamin Franklin, U.S. statesman and the first Postmaster General, built a prototype odometer in 1775 that he attached to his carriage to help measure the mileage of postal routes. In 1847, William Clayton and Orson Pratt, pioneers of the Church of Jesus Christ of Latter-day Saints, first implemented the "Roadometer" they had invented earlier (a version of the modern odometer), which they attached to a wagon used by American settlers heading west. It recorded the distance traveled each day by the wagon trains. The "Roadometer" used two gears and was an early example of an odometer with pascaline-style gears in actual use.
In 1895, Curtis Hussey Veeder invented the "Cyclometer". The "Cyclometer" was a mechanical device that counted the number of rotations of a bicycle wheel. A flexible cable transmitted the number of rotations of the wheel to an analog odometer visible to the rider, which converted the wheel rotations into the number of miles traveled according to a predetermined formula.
In 1903 Arthur P. and Charles H. Warner, two brothers from Beloit, Wisconsin, introduced their patented "Auto-meter". The "Auto-Meter" used a magnet attached to a rotating shaft to induce a magnetic pull upon a thin metal disk. Measuring this pull provided accurate measurements of both distance and speed information to automobile drivers in a single instrument. The Warners sold their company in 1912 to the Stewart & Clark Company of Chicago. The new firm was renamed the Stewart-Warner Corporation. By 1925, Stewart-Warner odometers and trip meters were standard equipment on the vast majority of automobiles and motorcycles manufactured in the United States.
By the early 2000s, mechanical odometers would be phased out on cars from major manufacturers. The Pontiac Grand Prix was the last GM car sold in the US to offer a mechanical odometer in 2003; the Canadian-built Ford Crown Victoria and Mercury Grand Marquis were the last Fords sold with one in 2005.
Trip meters.
Most modern cars include a trip meter (trip odometer). Unlike the odometer, a trip meter is reset at any point in a journey, making it possible to record the distance traveled in any particular journey or part of a journey. It was traditionally a purely mechanical device but, in most modern vehicles, it is now electronic. Many modern vehicles often have multiple trip meters. Most mechanical trip meters will show a maximum value of 999.9. The trip meter may be used to record the distance traveled on each tank of fuel, making it very easy to accurately track the energy efficiency of the vehicle; another common use is resetting it to zero at each instruction in a sequence of driving directions, to be sure when one has arrived at the next turn.
Clocking/busting miles and legality.
A form of fraud is to tamper with the reading on an odometer and presenting the incorrect number of miles/kilometres traveled to a prospective buyer; this is often referred to as "clocking" in the UK and "busting miles" in the US. This is done to make a car appear to have been driven less than it really has been, and thus increase its apparent market value. Most new cars sold today use digital odometers that store the mileage in the vehicle's engine control unit, making it difficult (but not impossible) to manipulate the mileage electronically. With mechanical odometers, the speedometer can be removed from the car dashboard and the digits wound back, or the drive cable can be disconnected and connected to another odometer/speedometer pair while on the road. Older vehicles can be driven in reverse to subtract mileage, a concept which provides the premise for a classic scene in the comedy film "Ferris Bueller's Day Off", but modern odometers add mileage driven in reverse to the total as if driven forward, thereby accurately reflecting the true total wear and tear on the vehicle.
The resale value of a vehicle is often strongly influenced by the total distance shown on the odometer, yet odometers are inherently insecure because they are under the control of their owners. Many jurisdictions have chosen to enact laws which penalize people who are found to commit odometer fraud. In the US (and many other countries), vehicle mechanics are also required to keep records of the odometer any time a vehicle is serviced or inspected. Companies such as Carfax then use these data to help potential car buyers detect whether odometer rollback has occurred.
Prevalence.
Research by Irish vehicle check specialist Cartell found that 20% of vehicles imported to Ireland from Great Britain and Northern Ireland had had their mileometers altered to show a lower mileage.
Accuracy.
Most odometers work by counting wheel rotations and assume that the distance traveled is the number of wheel rotations times the tire circumference, which is a standard tire diameter times pi (3.141592). If nonstandard or severely worn or underinflated tires are used then this will cause some error in the odometer. The formula is formula_0. It is common for odometers to be off by several percent. Odometer errors are typically proportional to speedometer errors.
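As a small worked example (not from the article), the correction formula can be applied directly; the readings and tire diameters below are made up for illustration.

def actual_distance(odo_start, odo_end, actual_diameter, standard_diameter):
    # Correct an odometer-derived distance for a non-standard tire diameter.
    return (odo_end - odo_start) * actual_diameter / standard_diameter

# A worn tire about 1.5% smaller than standard makes the odometer read slightly high:
print(actual_distance(12000.0, 12100.0, 0.630, 0.640))   # about 98.4, not the indicated 100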
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(actual\\ distance\\ traveled) = \\tfrac{((final\\ odometer\\ reading) - (initial\\ odometer\\ reading)) \\cdot (actual\\ tire\\ diameter)}{(standard\\ tire\\ diameter)}"
}
]
| https://en.wikipedia.org/wiki?curid=72877 |
7287830 | Kendall rank correlation coefficient | Statistic for rank correlation
In statistics, the Kendall rank correlation coefficient, commonly referred to as Kendall's τ coefficient (after the Greek letter τ, tau), is a statistic used to measure the ordinal association between two measured quantities. A τ test is a non-parametric hypothesis test for statistical dependence based on the τ coefficient. It is a measure of rank correlation: the similarity of the orderings of the data when ranked by each of the quantities. It is named after Maurice Kendall, who developed it in 1938, though Gustav Fechner had proposed a similar measure in the context of time series in 1897.
Intuitively, the Kendall correlation between two variables will be high when observations have a similar (or identical for a correlation of 1) rank (i.e. relative position label of the observations within the variable: 1st, 2nd, 3rd, etc.) between the two variables, and low when observations have a dissimilar (or fully different for a correlation of −1) rank between the two variables.
Both Kendall's formula_0 and Spearman's formula_1 can be formulated as special cases of a more general correlation coefficient. Its notions of concordance and discordance also appear in other areas of statistics, like the Rand index in cluster analysis.
Definition.
Let formula_2 be a set of observations of the joint random variables "X" and "Y", such that all the values of (formula_3) and (formula_4) are unique. (See the section #Accounting for ties for ways of handling non-unique values.) Any pair of observations formula_5 and formula_6, where formula_7, are said to be "concordant" if the sort order of formula_8 and "formula_9" agrees: that is, if either both formula_10 and formula_11 holds or both formula_12 and formula_13; otherwise they are said to be "discordant".
The Kendall τ coefficient is defined as:
formula_14
where formula_15 is the binomial coefficient for the number of ways to choose two items from n items.
The number of discordant pairs is equal to the number of inversions of the permutation that rearranges the y-sequence into the same order as the x-sequence.
Properties.
The denominator is the total number of pair combinations, so the coefficient must be in the range −1 ≤ "τ" ≤ 1.
Hypothesis test.
The Kendall rank coefficient is often used as a test statistic in a statistical hypothesis test to establish whether two variables may be regarded as statistically dependent. This test is non-parametric, as it does not rely on any assumptions on the distributions of "X" or "Y" or the distribution of ("X","Y").
Under the null hypothesis of independence of "X" and "Y", the sampling distribution of "τ" has an expected value of zero. The precise distribution cannot be characterized in terms of common distributions, but may be calculated exactly for small samples; for larger samples, it is common to use an approximation to the normal distribution, with mean zero and variance formula_17.
Theorem. If the samples are independent, then the variance of formula_18 is given by formula_19.
<templatestyles src="Template:Hidden begin/styles.css"/>Proof
<templatestyles src="Math_proof/styles.css" />ProofValz & McLeod (1990; 1995)
WLOG, we reorder the data pairs, so that formula_20. By assumption of independence, the order of formula_21 is a permutation sampled uniformly at random from formula_22, the permutation group on formula_23.
For each permutation, its unique formula_24 inversion code is formula_25 such that each formula_26 is in the range formula_27. Sampling a permutation uniformly is equivalent to sampling a formula_24-inversion code uniformly, which is equivalent to sampling each formula_26 uniformly and independently.
Then we have formula_28
The first term is just formula_29. The second term can be calculated by noting that formula_26 is a uniform random variable on formula_27, so formula_30 and formula_31, then using the sum of squares formula again.
<templatestyles src="Math_theorem/styles.css" />
Asymptotic normality — At the formula_32 limit, formula_33 converges in distribution to the standard normal distribution.
<templatestyles src="Math_proof/styles.css" />Proof
Use a result from "A class of statistics with asymptotically normal distribution" Hoeffding (1948).
Case of standard normal distributions.
If formula_34 are IID samples from the same jointly normal distribution with a known Pearson correlation coefficient formula_35, then the expectation of Kendall rank correlation has a closed-form formula.
<templatestyles src="Math_theorem/styles.css" />
Greiner's equality — If formula_36 are jointly normal, with correlation formula_35, then formula_37
The name is credited to Richard Greiner (1909) by P. A. P. Moran.<templatestyles src="Template:Hidden begin/styles.css"/>Proof
<templatestyles src="Math_proof/styles.css" />Proof
Define the following quantities.
In the notation, we see that the number of concordant pairs, formula_41, is equal to the number of formula_42 that fall in the subset formula_43. That is, formula_44.
Thus, formula_45
Since each formula_46 is an IID sample of the jointly normal distribution, the pairing does not matter, so each term in the summation is exactly the same, and so formula_47 and it remains to calculate the probability. We perform this by repeated affine transforms.
First normalize formula_36 by subtracting the mean and dividing the standard deviation. This does not change formula_18. This gives us formula_48 where formula_49 is sampled from the standard normal distribution on formula_40.
Thus, formula_50 where the vector formula_51 is still distributed as the standard normal distribution on formula_40. It remains to perform some unenlightening tedious matrix exponentiations and trigonometry, which can be skipped over.
Thus, formula_52 iff formula_53 where the subset on the right is a “squashed” version of two quadrants. Since the standard normal distribution is rotationally symmetric, we need only calculate the angle spanned by each squashed quadrant.
The first quadrant is the sector bounded by the two rays formula_54. It is transformed to the sector bounded by the two rays formula_55 and formula_56. They respectively make angle formula_57 with the horizontal and vertical axis, where formula_58
Together, the two transformed quadrants span an angle of formula_59, so formula_60 and therefore
formula_61
Accounting for ties.
A pair formula_62 is said to be "tied" if and only if formula_63 or formula_64; a tied pair is neither concordant nor discordant. When tied pairs arise in the data, the coefficient may be modified in a number of ways to keep it in the range [−1, 1]:
Tau-a.
The Tau-a statistic tests the strength of association of the cross tabulations. Both variables have to be ordinal. Tau-a will not make any adjustment for ties. It is defined as:
formula_65
where "n""c", "n""d" and "n""0" are defined as in the next section.
Tau-b.
The Tau-b statistic, unlike Tau-a, makes adjustments for ties. Values of Tau-b range from −1 (100% negative association, or perfect inversion) to +1 (100% positive association, or perfect agreement). A value of zero indicates the absence of association.
The Kendall Tau-b coefficient is defined as:
formula_66
where
formula_67
A simple algorithm developed in BASIC computes the Tau-b coefficient using an alternative formula.
Be aware that some statistical packages, e.g. SPSS, use alternative formulas for computational efficiency, with double the 'usual' number of concordant and discordant pairs.
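For illustration, a direct (unoptimized) computation of Tau-b from the definition above might look like the following sketch; the sample data are invented, and for large samples an optimized library routine or the O(n log n) approach described below is preferable:
from itertools import combinations
from collections import Counter
def tau_b(x, y):
    n = len(x)
    nc = nd = 0
    for (xi, yi), (xj, yj) in combinations(zip(x, y), 2):
        s = (xi - xj) * (yi - yj)
        if s > 0:
            nc += 1
        elif s < 0:
            nd += 1                              # pairs with s == 0 are ties
    n0 = n * (n - 1) // 2
    n1 = sum(t * (t - 1) // 2 for t in Counter(x).values())
    n2 = sum(u * (u - 1) // 2 for u in Counter(y).values())
    return (nc - nd) / ((n0 - n1) * (n0 - n2)) ** 0.5
print(tau_b([1, 2, 2, 3, 4], [2, 1, 3, 3, 5]))   # ≈ 0.667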
Tau-c.
Tau-c (also called Stuart-Kendall Tau-c) is more suitable than Tau-b for the analysis of data based on non-square (i.e. rectangular) contingency tables. So use Tau-b if the underlying scale of both variables has the same number of possible values (before ranking) and Tau-c if they differ. For instance, one variable might be scored on a 5-point scale (very good, good, average, bad, very bad), whereas the other might be based on a finer 10-point scale.
The Kendall Tau-c coefficient is defined as:
formula_68
where
formula_69
Significance tests.
When two quantities are statistically dependent, the distribution of formula_0 is not easily characterizable in terms of known distributions. However, for formula_70 the following statistic, formula_71, is approximately distributed as a standard normal when the variables are statistically independent:
formula_72
where formula_73.
Thus, to test whether two variables are statistically dependent, one computes formula_71, and finds the cumulative probability for a standard normal distribution at formula_74. For a 2-tailed test, multiply that number by two to obtain the "p"-value. If the "p"-value is below a given significance level, one rejects the null hypothesis (at that significance level) that the quantities are statistically independent.
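A minimal sketch of this test for tie-free data (the sample values in the usage line are invented):
import math
def kendall_z_test(x, y):
    # z_A for tie-free data, following the formula above
    n = len(x)
    nc = nd = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                nc += 1
            elif s < 0:
                nd += 1
    z = (nc - nd) / math.sqrt(n * (n - 1) * (2 * n + 5) / 18)
    p = math.erfc(abs(z) / math.sqrt(2))   # two-tailed p-value, equal to 2*Phi(-|z|)
    return z, p
print(kendall_z_test([1, 2, 3, 4, 5, 6, 7, 8], [2, 1, 4, 3, 6, 5, 8, 7]))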
When ties are present, several adjustments to formula_71 are required. The following statistic, formula_75, is the analogue of formula_71 for formula_76, and is again approximately distributed as a standard normal when the quantities are statistically independent:
formula_77
where
formula_78
This is sometimes referred to as the Mann-Kendall test.
Algorithms.
The direct computation of the numerator formula_79, involves two nested iterations, as characterized by the following pseudocode:
numer := 0
for i := 2..N do
for j := 1..(i − 1) do
numer := numer + sign(x[i] − x[j]) × sign(y[i] − y[j])
return numer
Although quick to implement, this algorithm is formula_80 in complexity and becomes very slow on large samples. A more sophisticated algorithm built upon the Merge Sort algorithm can be used to compute the numerator in formula_81 time.
Begin by ordering your data points sorting by the first quantity, formula_82, and secondarily (among ties in formula_82) by the second quantity, formula_83. With this initial ordering, formula_83 is not sorted, and the core of the algorithm consists of computing how many steps a Bubble Sort would take to sort this initial formula_83. An enhanced Merge Sort algorithm, with formula_84 complexity, can be applied to compute the number of swaps, formula_85, that would be required by a Bubble Sort to sort formula_4. Then the numerator for formula_0 is computed as:
formula_86
where formula_87 is computed like formula_88 and formula_89, but with respect to the joint ties in formula_82 and formula_83.
A Merge Sort partitions the data to be sorted, formula_83, into two roughly equal halves, formula_90 and formula_91, then sorts each half recursively, and then merges the two sorted halves into a fully sorted vector. The number of Bubble Sort swaps is equal to:
formula_92
where formula_93 and formula_94 are the sorted versions of formula_90 and formula_91, and formula_95 characterizes the Bubble Sort swap-equivalent for a merge operation. formula_95 is computed as depicted in the following pseudo-code:
function M(L[1..n], R[1..m]) is
i := 1
j := 1
nSwaps := 0
while i ≤ n and j ≤ m do
if R[j] < L[i] then
nSwaps := nSwaps + n − i + 1
j := j + 1
else
i := i + 1
return nSwaps
A side effect of the above steps is that you end up with both a sorted version of formula_82 and a sorted version of formula_83. With these, the factors formula_96 and formula_97 used to compute formula_76 are easily obtained in a single linear-time pass through the sorted arrays.
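As an illustrative sketch of the swap-counting step described above (assuming the y values have already been reordered by x; the example input is arbitrary):
def bubble_swaps(y):
    # returns (S(y), sorted copy of y) using the merge-sort recursion above
    if len(y) <= 1:
        return 0, list(y)
    mid = len(y) // 2
    s_left, left = bubble_swaps(y[:mid])
    s_right, right = bubble_swaps(y[mid:])
    merged, swaps, i, j = [], 0, 0, 0
    while i < len(left) and j < len(right):
        if right[j] < left[i]:
            swaps += len(left) - i               # right[j] passes the remaining left elements
            merged.append(right[j]); j += 1
        else:
            merged.append(left[i]); i += 1
    merged.extend(left[i:]); merged.extend(right[j:])
    return s_left + s_right + swaps, merged
print(bubble_swaps([3, 1, 2, 5, 4]))             # (3, [1, 2, 3, 4, 5])
The numerator of the coefficient then follows from the formula given above once the tie counts are tallied.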
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\tau"
},
{
"math_id": 1,
"text": "\\rho"
},
{
"math_id": 2,
"text": "(x_1,y_1), ..., (x_n,y_n)"
},
{
"math_id": 3,
"text": "x_i"
},
{
"math_id": 4,
"text": "y_i"
},
{
"math_id": 5,
"text": "(x_i,y_i)"
},
{
"math_id": 6,
"text": "(x_j,y_j)"
},
{
"math_id": 7,
"text": "i < j"
},
{
"math_id": 8,
"text": "(x_i,x_j)"
},
{
"math_id": 9,
"text": "(y_i,y_j)"
},
{
"math_id": 10,
"text": "x_i>x_j"
},
{
"math_id": 11,
"text": "y_i>y_j "
},
{
"math_id": 12,
"text": "x_i<x_j"
},
{
"math_id": 13,
"text": "y_i<y_j"
},
{
"math_id": 14,
"text": "\\tau = \\frac{(\\text{number of concordant pairs}) - (\\text{number of discordant pairs})}{ \n(\\text{number of pairs}) } = 1- \\frac{2 (\\text{number of discordant pairs})}{ \n {n \\choose 2} } ."
},
{
"math_id": 15,
"text": " {n \\choose 2} = {n (n-1) \\over 2} "
},
{
"math_id": 16,
"text": "\\tau= \\frac{2}{n(n-1)}\\sum_{i<j} \\sgn(x_i-x_j)\\sgn(y_i-y_j)"
},
{
"math_id": 17,
"text": "2(2n+5)/9n (n-1)"
},
{
"math_id": 18,
"text": "\\tau_A"
},
{
"math_id": 19,
"text": "Var[\\tau_A] = 2(2n+5)/9n (n-1)"
},
{
"math_id": 20,
"text": "x_1 < x_2 < \\cdots < x_n"
},
{
"math_id": 21,
"text": "y_1, ..., y_n"
},
{
"math_id": 22,
"text": "S_n"
},
{
"math_id": 23,
"text": "1:n"
},
{
"math_id": 24,
"text": "l"
},
{
"math_id": 25,
"text": "l_0l_1\\cdots l_{n-1}"
},
{
"math_id": 26,
"text": "l_i"
},
{
"math_id": 27,
"text": "0:i"
},
{
"math_id": 28,
"text": "\\begin{aligned}\n E[\\tau_A^2] &= E\\left[\\left(1-\\frac{4\\sum_i l_i}{n(n-1)}\\right)^2\\right] \\\\\n &= 1 - \\frac{8}{n(n-1)}\\sum_i E[l_i] + \\frac{16}{n^2(n-1)^2}\\sum_{ij} E[l_il_j] \\\\\n &= 1 - \\frac{8}{n(n-1)}\\sum_i E[l_i] + \\frac{16}{n^2(n-1)^2} \\left(\\sum_{ij} E[l_i]E[l_j] + \\sum_i V[l_i] \\right) \\\\\n &= 1 - \\frac{8}{n(n-1)}\\sum_i E[l_i] +\\frac{16}{n^2(n-1)^2} \\sum_{ij} E[l_i]E[l_j] + \\frac{16}{n^2(n-1)^2} \\left( \\sum_i V[l_i] \\right) \\\\\n &=\\left(1-\\frac{4\\sum_i E[l_i]}{n(n-1)}\\right)^2 + \\frac{16}{n^2(n-1)^2} \\left( \\sum_i V[l_i] \\right)\n \\end{aligned}"
},
{
"math_id": 29,
"text": "E[\\tau_A]^2 = 0"
},
{
"math_id": 30,
"text": "E[l_i] = \\frac i2"
},
{
"math_id": 31,
"text": "E[l_i^2] = \\frac{0^2+\\cdots + i^2}{i+1} = \\frac{i(2i+1)}6"
},
{
"math_id": 32,
"text": "n\\to \\infty"
},
{
"math_id": 33,
"text": "z_A = \\frac{\\tau_A}{\\sqrt{Var[\\tau_A]}} = {n_C - n_D \\over \\sqrt{n(n-1)(2n+5)/18} }"
},
{
"math_id": 34,
"text": "(x_1, y_1), (x_2, y_2), ..., (x_n, y_n)"
},
{
"math_id": 35,
"text": "r"
},
{
"math_id": 36,
"text": "X, Y"
},
{
"math_id": 37,
"text": "r = \\sin{\\left(\\frac\\pi 2 E[\\tau_A]\\right)}"
},
{
"math_id": 38,
"text": "A^+ := \\{(\\Delta x, \\Delta y) : \\Delta x \\Delta y > 0\\}"
},
{
"math_id": 39,
"text": "\\Delta_{i,j} := (x_i - x_j , y_i - y_j)"
},
{
"math_id": 40,
"text": "\\R^2"
},
{
"math_id": 41,
"text": "n_C"
},
{
"math_id": 42,
"text": "\\Delta_{i, j}"
},
{
"math_id": 43,
"text": "A^+"
},
{
"math_id": 44,
"text": "n_C = \\sum_{1 \\leq i < j \\leq n} 1_{\\Delta_{i,j} \\in A^+} "
},
{
"math_id": 45,
"text": "E[\\tau_A] = \\frac{4}{n(n-1)}E[n_C] - 1 = \\frac{4}{n(n-1)}\\sum_{1 \\leq i < j \\leq n} Pr(\\Delta_{i,j} \\in A^+) - 1"
},
{
"math_id": 46,
"text": "(x_i, y_i)"
},
{
"math_id": 47,
"text": "E[\\tau_A] = 2 Pr(\\Delta_{1,2} \\in A^+) - 1"
},
{
"math_id": 48,
"text": "\n \\begin{bmatrix} x \\\\ y \\end{bmatrix}\n =\\begin{bmatrix} 1 & r \\\\ r & 1 \\end{bmatrix}^{1/2}\n \\begin{bmatrix} z \\\\ w \\end{bmatrix}\n "
},
{
"math_id": 49,
"text": "(Z, W)"
},
{
"math_id": 50,
"text": "\\Delta_{1,2} = \\sqrt 2\\begin{bmatrix} 1 & r \\\\ r & 1 \\end{bmatrix}^{1/2}\n \\begin{bmatrix} (z_1-z_2)/\\sqrt{2} \\\\ (w_1-w_2)/\\sqrt{2} \\end{bmatrix}"
},
{
"math_id": 51,
"text": "\\begin{bmatrix} (z_1-z_2)/\\sqrt{2} \\\\ (w_1-w_2)/\\sqrt{2} \\end{bmatrix}"
},
{
"math_id": 52,
"text": "\\Delta_{1,2} \\in A^+"
},
{
"math_id": 53,
"text": "\\begin{bmatrix} (z_1-z_2)/\\sqrt{2} \\\\ (w_1-w_2)/\\sqrt{2} \\end{bmatrix} \\in \\frac{1}{\\sqrt 2}\\begin{bmatrix} 1 & r \\\\ r & 1 \\end{bmatrix}^{-1/2} A^+ = \\frac{1}{2\\sqrt 2} \n \\begin{bmatrix} \n \\frac{1}{\\sqrt{1+r}}+ \\frac{1}{\\sqrt{1-r}} & \\frac{1}{\\sqrt{1+r}} - \\frac{1}{\\sqrt{1-r}} \\\\\n \\frac{1}{\\sqrt{1+r}} - \\frac{1}{\\sqrt{1-r}} & \\frac{1}{\\sqrt{1+r}} + \\frac{1}{\\sqrt{1-r}}\n \\end{bmatrix}A^+"
},
{
"math_id": 54,
"text": "(1, 0), (0, 1)"
},
{
"math_id": 55,
"text": "(\\frac{1}{\\sqrt{1+r}}+ \\frac{1}{\\sqrt{1-r}}, \\frac{1}{\\sqrt{1+r}} - \\frac{1}{\\sqrt{1-r}})"
},
{
"math_id": 56,
"text": "(\\frac{1}{\\sqrt{1+r}} - \\frac{1}{\\sqrt{1-r}}, \\frac{1}{\\sqrt{1+r}}+ \\frac{1}{\\sqrt{1-r}})"
},
{
"math_id": 57,
"text": "\\theta"
},
{
"math_id": 58,
"text": "\\theta = \\arctan\\frac{\\frac{1}{\\sqrt{1+r}} - \\frac{1}{\\sqrt{1-r}}}{\\frac{1}{\\sqrt{1+r}}+ \\frac{1}{\\sqrt{1-r}}}"
},
{
"math_id": 59,
"text": "\\pi + 4\\theta"
},
{
"math_id": 60,
"text": "Pr(\\Delta_{1,2} \\in A^+) = \\frac{\\pi + 4\\theta}{2\\pi}"
},
{
"math_id": 61,
"text": "\\sin{\\left(\\frac\\pi 2 E[\\tau_A]\\right)} = \\sin(2\\theta) = r"
},
{
"math_id": 62,
"text": " \\{ (x_{i},y_{i}),(x_{j},y_{j}) \\} "
},
{
"math_id": 63,
"text": " x_{i} = x_{j} "
},
{
"math_id": 64,
"text": " y_{i} = y_{j} "
},
{
"math_id": 65,
"text": "\\tau_A = \\frac{n_c-n_d}{n_0}"
},
{
"math_id": 66,
"text": "\\tau_B = \\frac{n_c-n_d}{\\sqrt{(n_0-n_1)(n_0-n_2)}}"
},
{
"math_id": 67,
"text": "\n\\begin{align}\nn_0 & = n(n-1)/2\\\\\nn_1 & = \\sum_i t_i (t_i-1)/2 \\\\\nn_2 & = \\sum_j u_j (u_j-1)/2 \\\\\nn_c & = \\text{Number of concordant pairs} \\\\\nn_d & = \\text{Number of discordant pairs} \\\\\nt_i & = \\text{Number of tied values in the } i^\\text{th} \\text{ group of ties for the first quantity} \\\\\nu_j & = \\text{Number of tied values in the } j^\\text{th} \\text{ group of ties for the second quantity}\n\\end{align}\n"
},
{
"math_id": 68,
"text": "\\tau_C = \\frac{2 (n_c-n_d)}{n^2 \\frac{(m-1)}{m}} = \\tau_A \\frac{n-1}{n} \\frac{m}{m-1}"
},
{
"math_id": 69,
"text": "\n\\begin{align}\nn_c & = \\text{Number of concordant pairs} \\\\\nn_d & = \\text{Number of discordant pairs} \\\\\nr & = \\text{Number of rows} \\\\\nc & = \\text{Number of columns} \\\\\nm & = \\min(r, c)\n\\end{align}\n"
},
{
"math_id": 70,
"text": "\\tau_A"
},
{
"math_id": 71,
"text": "z_A"
},
{
"math_id": 72,
"text": "z_A = {n_c - n_d \\over \\sqrt{\\frac{1}{18}v_0} }"
},
{
"math_id": 73,
"text": "v_0 = n(n-1)(2n+5)"
},
{
"math_id": 74,
"text": "-|z_A|"
},
{
"math_id": 75,
"text": "z_B"
},
{
"math_id": 76,
"text": "\\tau_B"
},
{
"math_id": 77,
"text": "z_B = {n_c - n_d \\over \\sqrt{ v } }"
},
{
"math_id": 78,
"text": "\\begin{array}{ccl}\nv & = & \\frac{1}{18} v_0 - (v_t + v_u)/18 + (v_1 + v_2) \\\\\nv_0 & = & n (n-1) (2n+5) \\\\\nv_t & = & \\sum_i t_i (t_i-1) (2 t_i+5)\\\\\nv_u & = & \\sum_j u_j (u_j-1)(2 u_j+5) \\\\\nv_1 & = & \\sum_i t_i (t_i-1) \\sum_j u_j (u_j-1) / (2n(n-1)) \\\\\nv_2 & = & \\sum_i t_i (t_i-1) (t_i-2) \\sum_j u_j (u_j-1) (u_j-2) / (9 n (n-1) (n-2))\n\\end{array}\n"
},
{
"math_id": 79,
"text": "n_c - n_d"
},
{
"math_id": 80,
"text": "O(n^2)"
},
{
"math_id": 81,
"text": "O(n \\cdot \\log{n})"
},
{
"math_id": 82,
"text": "x"
},
{
"math_id": 83,
"text": "y"
},
{
"math_id": 84,
"text": "O(n \\log n)"
},
{
"math_id": 85,
"text": "S(y)"
},
{
"math_id": 86,
"text": "n_c-n_d = n_0 - n_1 - n_2 + n_3 - 2 S(y),"
},
{
"math_id": 87,
"text": "n_3"
},
{
"math_id": 88,
"text": "n_1"
},
{
"math_id": 89,
"text": "n_2"
},
{
"math_id": 90,
"text": "y_\\mathrm{left}"
},
{
"math_id": 91,
"text": "y_\\mathrm{right}"
},
{
"math_id": 92,
"text": "S(y) = S(y_\\mathrm{left}) + S(y_\\mathrm{right}) + M(Y_\\mathrm{left},Y_\\mathrm{right})"
},
{
"math_id": 93,
"text": "Y_\\mathrm{left}"
},
{
"math_id": 94,
"text": "Y_\\mathrm{right}"
},
{
"math_id": 95,
"text": "M(\\cdot,\\cdot)"
},
{
"math_id": 96,
"text": "t_i"
},
{
"math_id": 97,
"text": "u_j"
},
{
"math_id": 98,
"text": "\\tau_C"
}
]
| https://en.wikipedia.org/wiki?curid=7287830 |
7287996 | Little Carpathians | Mountain range in Slovakia and Austria
The Little Carpathians (also: "Lesser Carpathians", ; ; ) are a low mountain range, about 100 km long, and part of the Carpathian Mountains. The mountains are situated in Western Slovakia, covering the area from Bratislava to Nové Mesto nad Váhom, and northeastern Austria, where a very small part called Hundsheimer Berge (or Hainburger Berge) is located south of the Devín Gate. The Little Carpathians are bordered by the Záhorie Lowland in the west and the Danubian Lowland in the east.
In 1976, the Little Carpathians were declared a protected area under the name Little Carpathians Protected Landscape Area, covering . The area is rich in floral and faunal diversity and contains numerous castles, most notably the Bratislava Castle, and natural caves. Driny is the only cave open to the public. The three highest mountains are Záruby at , Vysoká at , and Vápenná at .
Description.
Geomorphologically, the Little Carpathians belong to the Alps-Himalaya System, the Carpathian Mountains sub-system, the Western Carpathians province, and the Inner Western Carpathians sub-province.
The Little Carpathians are further divided into four parts, from south to north: the Devín Carpathians (), the Pezinok Carpathians (), the Brezová Carpathians () and the Čachtice Carpathians ().
The mountains are densely forested (90% being broad-leaved trees), and the southeastern part contains extensive vineyards (e.g. Rača, Pezinok, and Modra). Several castles or castle ruins are situated in the Little Carpathians, for example Devín, Čachtice, Červený Kameň, and Smolenice castles.
Geologically, the mountain range is part of the Tatra-Fatra Belt of core mountains. There are several active faults which have produced earthquakes. The most notable of them is the Dobra Voda fault, which produced earthquakes in 1906 and 1930 with intensities of 8.5° and 7.5° EMS-98 (equivalent to magnitudes formula_0 = 5.7 and 5.0). This particular fault is closely monitored because of its proximity to the Bohunice Nuclear Power Plant, approximately 15 km away. The Little Carpathians are seismically one of the most active regions in Slovakia, and the epicentres of many earthquakes with an approximate magnitude of 2.5 on the Richter magnitude scale are located here.
There are a total of eight karst areas in the Little Carpathians: the Devín Carpathians, Borinka (Pajštún), Cajlan, Kuchyňa-orešany, Plavecký, Smolenice, Dobrovodský, and Čachtice karsts. The most important karst forms include the caves Deravá, Tmavá skala, Driny, and Čachtická, and additional caves along the Borinský potok. Driny, a limestone cave, is the only cave open to the public. Major streams include Vydrica and Suchý jarok.
History.
Although a relatively low mountain range, the Little Carpathians have long been considered a formidable barrier, as they rise to heights of around 500 meters directly from the surrounding lowlands. In the past, various types of ore were mined in the Little Carpathians, including ores containing gold, silver, antimony, manganese, and pyrite.
During the Second World War, the Little Carpathians were the birthplace of the partisan group "Janko Kráľ". Insurgency in the mountains lasted until their occupation by the Soviet Red Army in 1945.
Tourism.
The Little Carpathians are a popular tourist destination in Western Slovakia. The mountains are used for hiking, cycling, tramping, backpacking, automobile and motorcycle tourism, alpine skiing, cross-country skiing, and other winter sports. The mountain range contains a dense network of trails, and the recreational infrastructure is relatively well developed, especially in the south. The Little Carpathians are a popular destination for the inhabitants of Bratislava and other larger cities in the region.
Since the Middle Ages, the area has been known for its wines and wine-making traditions. Well known centers of local wine-making include Svätý Jur, Modra, and Pezinok. The main tourist centers include the Slovak capital Bratislava, Pezinská Baba (halfway between Pezinok and Pernek), and Zochova chata (near Modra).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "M_\\mathrm{L}"
}
]
| https://en.wikipedia.org/wiki?curid=7287996 |
72883371 | Trend periodic nonstationary processes | Trending periodic processes
Trend periodic non-stationary processes (or trend cyclostationary processes) are a type of cyclostationary process that exhibits both periodic behavior and a statistical trend. The trend can be linear or nonlinear, and it can result from systematic changes in the data over time. A cyclostationary process can be formed by removing the trend component. This approach is utilized in the analysis of the trend-stationary process.
In data analysis, the classification of periodic data into stationary-periodic, trend-periodic, and stochastic-periodic time series is achieved by means of the phase dispersion minimization (PDM) test, a method for identifying periodicity.
Applications.
Trending cyclostationary processes have several applications in finance, engineering, economics, and environmental research. In forecasting, they are used to predict the seasonality and trend of time series data that display both periodic and trending behavior, such as rail and air travel demand. In engineering, they are used to model signals that display both behaviors, such as signals in modulated radio communications or control systems. In economics, time series of this kind are commonly represented with a so-called unit root in the autoregressive part of the model. In environmental research, they are used to model time series data that display both periodic behavior and trends, such as temperature or pollutant occurrence patterns; in fact, almost any pollution-related phenomenon falls into one of the stochastic, periodic-stochastic, or trend-periodic-stochastic categories.
Properties.
Trending cyclostationary processes combine the traits of cyclostationary processes and trends. Like cyclostationary processes, their second-order moments are periodic in time. However, they are non-stationary in the wider sense, because their mean and variance change over time as a result of the trend.
A trend periodic stationary process is a sort of stationary time series data that has a consistent underlying trend that repeats itself regularly. A Fourier series expansion is a popular mathematical depiction of a trend periodic stationary process:
formula_0
where x(t) is the time series data, T is the period of the trend, formula_1 is the mean of the series, formula_2 and formula_3 are the Fourier coefficients, and k is the harmonic number.
Another way to represent trend periodic stationary processes is by using a regression model with a sine and cosine function, such as:
formula_4
where formula_5, formula_6, formula_7, and formula_8 are the regression coefficients that can be estimated using statistical methods.
Decomposing the signal is widely used to separate the trend process from the periodic one and represent the periodic part as sinusoid functions. The spectral density estimation is one of the methods used for this purpose. The decomposed function of the periodic trend process has a trend and a principal function that governs the periodicity.
Example.
An example of a trend periodic process in the second form is formula_9
where 10t is the trend and formula_10 plus the sinusoids form the periodic stationary component.
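As an illustrative sketch, the regression form above can be recovered from this example by ordinary least squares (the sampling grid and noise level are arbitrary choices):
import numpy as np
rng = np.random.default_rng(0)
T = 10.0
t = np.arange(0.0, 100.0, 0.5)
x = 10 * t + 2 + 5 * np.cos(2 * np.pi * t / T) + 7 * np.sin(2 * np.pi * t / T)
x += rng.normal(scale=0.5, size=t.size)          # small additive noise, for realism
# design matrix for beta_0 + beta_1*t + beta_2*cos(2*pi*t/T) + beta_3*sin(2*pi*t/T)
A = np.column_stack([np.ones_like(t), t, np.cos(2 * np.pi * t / T), np.sin(2 * np.pi * t / T)])
beta, *_ = np.linalg.lstsq(A, x, rcond=None)
print(beta)                                      # close to [2, 10, 5, 7]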
Detection and estimation.
Estimation and detection of trending cyclostationary processes are more difficult than for standard cyclostationary processes due to discrepancies in trend definitions. One popular strategy is to first remove the trend from the data before estimating and detecting cyclostationary processes. Another strategy is to represent the data as a cyclostationary process and a trend and estimate the parameters of both components at the same time.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "x(t) = a_0 + \\sum_{k=1}^\\infty(a_k cos(2\\pi kt/T) + b_k sin(2\\pi kt/T)) "
},
{
"math_id": 1,
"text": "a_0"
},
{
"math_id": 2,
"text": "a_k"
},
{
"math_id": 3,
"text": "b_k "
},
{
"math_id": 4,
"text": " x(t) = \\beta_0 + \\beta_1 t+ \\beta_2 cos(2 \\pi t/T) + \\beta_3sin(2\\pi t/T) "
},
{
"math_id": 5,
"text": "\\beta_0"
},
{
"math_id": 6,
"text": "\\beta_1"
},
{
"math_id": 7,
"text": "\\beta_2"
},
{
"math_id": 8,
"text": "\\beta_3"
},
{
"math_id": 9,
"text": " x(t) = 10 t+ 2 + 5 cos(2 \\pi t/10) + 7sin(2\\pi t/10) "
},
{
"math_id": 10,
"text": "a_0=2"
}
]
| https://en.wikipedia.org/wiki?curid=72883371 |
72887334 | Parallax in astronomy | Change in the apparent position of celestial bodies when seen from two different positions
The most important fundamental distance measurements in astronomy come from trigonometric parallax, as applied in the "stellar parallax method". As the Earth orbits the Sun, the position of nearby stars will appear to shift slightly against the more distant background. These shifts are angles in an isosceles triangle, with 2 AU (the distance between the extreme positions of Earth's orbit around the Sun) making the base leg of the triangle and the distance to the star being the long equal-length legs. The amount of shift is quite small, even for the nearest stars, measuring 1 arcsecond for an object at 1 parsec's distance (3.26 light-years), and thereafter decreasing in angular amount as the distance increases. Astronomers usually express distances in units of "parsecs" (parallax arcseconds); light-years are used in popular media.
Because parallax becomes smaller for a greater stellar distance, useful distances can be measured only for stars which are near enough to have a parallax larger than a few times the precision of the measurement. In the 1990s, for example, the Hipparcos mission obtained parallaxes for over a hundred thousand stars with a precision of about a milliarcsecond, providing useful distances for stars out to a few hundred parsecs. The Hubble Space Telescope's Wide Field Camera 3 has the potential to provide a precision of 20 to 40 "micro"arcseconds, enabling reliable distance measurements up to for small numbers of stars. The "Gaia" space mission provided similarly accurate distances to most stars brighter than 15th magnitude.
Distances can be measured within 10% as far as the Galactic Center, about 30,000 light years away. Stars have a velocity relative to the Sun that causes proper motion (transverse across the sky) and radial velocity (motion toward or away from the Sun). The former is determined by plotting the changing position of the stars over many years, while the latter comes from measuring the Doppler shift of the star's spectrum caused by motion along the line of sight. For a group of stars with the same spectral class and a similar magnitude range, a mean parallax can be derived from statistical analysis of the proper motions relative to their radial velocities. This statistical parallax method is useful for measuring the distances of bright stars beyond 50 parsecs and giant variable stars, including Cepheids and the RR Lyrae variables.
The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year, while for halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. However, secular parallax introduces a higher level of uncertainty because the relative velocity of observed stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the uncertainty is inversely proportional to the square root of the sample size.
Moving cluster parallax is a technique where the motions of individual stars in a nearby star cluster can be used to find the distance to the cluster. Only open clusters are near enough for this technique to be useful. In particular the distance obtained for the Hyades has historically been an important step in the distance ladder.
Other individual objects can have fundamental distance estimates made for them under special circumstances. If the expansion of a gas cloud, like a supernova remnant or planetary nebula, can be observed over time, then an "expansion parallax" distance to that cloud can be estimated. Those measurements however suffer from uncertainties in the deviation of the object from sphericity. Binary stars which are both visual and spectroscopic binaries also can have their distance estimated by similar means, and do not suffer from the above geometric uncertainty. The common characteristic to these methods is that a measurement of angular motion is combined with a measurement of the absolute velocity (usually obtained via the Doppler effect). The distance estimate comes from computing how far the object must be to make its observed absolute velocity appear with the observed angular motion.
Expansion parallaxes in particular can give fundamental distance estimates for objects that are very far, because supernova ejecta have large expansion velocities and large sizes (compared to stars). Further, they can be observed with radio interferometers which can measure very small angular motions. These combine to provide fundamental distance estimates to supernovae in other galaxies. Though valuable, such cases are quite rare, so they serve as important consistency checks on the distance ladder rather than workhorse steps by themselves.
Stellar parallax.
Stellar parallax created by the relative motion between the Earth and a star can be seen, in the Copernican model, as arising from the orbit of the Earth around the Sun: the star only "appears" to move relative to more distant objects in the sky. In a geostatic model, the movement of the star would have to be taken as "real" with the star oscillating across the sky with respect to the background stars.
Stellar parallax is most often measured using "annual parallax", defined as the difference in position of a star as seen from the Earth and Sun, i.e. the angle subtended at a star by the mean radius of the Earth's orbit around the Sun. The parsec (3.26 light-years) is defined as the distance for which the annual parallax is 1 arcsecond. Annual parallax is normally measured by observing the position of a star at different times of the year as the Earth moves through its orbit. Measurement of annual parallax was the first reliable way to determine the distances to the closest stars. The first successful measurements of stellar parallax were made by Friedrich Bessel in 1838 for the star 61 Cygni using a heliometer. Stellar parallax remains the standard for calibrating other measurement methods. Accurate calculations of distance based on stellar parallax require a measurement of the distance from the Earth to the Sun, now based on radar reflection off the surfaces of planets.
The angles involved in these calculations are very small and thus difficult to measure. The nearest star to the Sun (and thus the star with the largest parallax), Proxima Centauri, has a parallax of 0.7687 ± 0.0003 arcsec. This angle is approximately that subtended by an object 2 centimeters in diameter located 5.3 kilometers away.
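Because the distance in parsecs is simply the reciprocal of the parallax in arcseconds, the conversion is straightforward; the following sketch uses the Proxima Centauri figure quoted above:
parallax_arcsec = 0.7687             # Proxima Centauri, as quoted above
distance_pc = 1.0 / parallax_arcsec
distance_ly = distance_pc * 3.2616   # 1 parsec ≈ 3.2616 light-years
print(distance_pc, distance_ly)      # ≈ 1.30 pc, ≈ 4.24 light-years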
The fact that stellar parallax was so small that it was unobservable at the time was used as the main scientific argument against heliocentrism during the early modern age. It is clear from Euclid's geometry that the effect would be undetectable if the stars were far enough away, but for various reasons such gigantic distances involved seemed entirely implausible: it was one of Tycho's principal objections to Copernican heliocentrism that for it to be compatible with the lack of observable stellar parallax, there would have to be an enormous and unlikely void between the orbit of Saturn (then the most distant known planet) and the eighth sphere (the fixed stars).
In 1989, the satellite Hipparcos was launched primarily for obtaining improved parallaxes and proper motions for over 100,000 nearby stars, increasing the reach of the method tenfold. Even so, Hipparcos was only able to measure parallax angles for stars up to about 1,600 light-years away, a little more than one percent of the diameter of the Milky Way Galaxy. The European Space Agency's Gaia mission, launched in December 2013, can measure parallax angles to an accuracy of 10 microarcseconds, thus mapping nearby stars (and potentially planets) up to a distance of tens of thousands of light-years from Earth. In April 2014, NASA astronomers reported that the Hubble Space Telescope, by using spatial scanning, can precisely measure distances up to 10,000 light-years away, a ten-fold improvement over earlier measurements.
Diurnal parallax.
"Diurnal parallax" is a parallax that varies with the rotation of the Earth or with a difference in location on the Earth. The Moon and to a smaller extent the terrestrial planets or asteroids seen from different viewing positions on the Earth (at one given moment) can appear differently placed against the background of fixed stars.
The diurnal parallax has been used by John Flamsteed in 1672 to measure the distance to Mars at its opposition and through that to estimate the astronomical unit and the size of the Solar System.
Lunar parallax.
"Lunar parallax" (often short for "lunar horizontal parallax" or "lunar equatorial horizontal parallax"), is a special case of (diurnal) parallax: the Moon, being the nearest celestial body, has by far the largest maximum parallax of any celestial body, at times exceeding 1 degree.
The diagram for stellar parallax can illustrate lunar parallax as well if the diagram is taken to be scaled right down and slightly modified. Instead of 'near star', read 'Moon', and instead of taking the circle at the bottom of the diagram to represent the size of the Earth's orbit around the Sun, take it to be the size of the Earth's globe, and a circle around the Earth's surface. Then, the lunar (horizontal) parallax amounts to the difference in angular position, relative to the background of distant stars, of the Moon as seen from two different viewing positions on the Earth.
One of the viewing positions is the place from which the Moon can be seen directly overhead at a given moment. That is, viewed along the vertical line in the diagram. The other viewing position is a place from which the Moon can be seen on the horizon at the same moment. That is, viewed along one of the diagonal lines, from an Earth-surface position corresponding roughly to one of the blue dots on the modified diagram.
The lunar (horizontal) parallax can alternatively be defined as the angle subtended at the distance of the Moon by the radius of the Earth—equal to angle p in the diagram when scaled-down and modified as mentioned above.
The lunar horizontal parallax at any time depends on the linear distance of the Moon from the Earth. The Earth-Moon linear distance varies continuously as the Moon follows its perturbed and approximately elliptical orbit around the Earth. The range of the variation in linear distance is from about 56 to 63.7 Earth radii, corresponding to a horizontal parallax of about a degree of arc, but ranging from about 61.4' to about 54'. The "Astronomical Almanac" and similar publications tabulate the lunar horizontal parallax and/or the linear distance of the Moon from the Earth on a periodical e.g. daily basis for the convenience of astronomers (and of celestial navigators), and the study of how this coordinate varies with time forms part of lunar theory.
Parallax can also be used to determine the distance to the Moon.
One way to determine the lunar parallax from one location is by using a lunar eclipse. A full shadow of the Earth on the Moon has an apparent radius of curvature equal to the difference between the apparent radii of the Earth and the Sun as seen from the Moon. This radius can be seen to be equal to 0.75 degrees, from which (with the solar apparent radius of 0.25 degrees) we get an Earth apparent radius of 1 degree. This yields an Earth–Moon distance of 60.27 Earth radii, or about 384,000 km. This procedure was first used by Aristarchus of Samos and Hipparchus, and later found its way into the work of Ptolemy.
The diagram at the right shows how daily lunar parallax arises on the geocentric and geostatic planetary model, in which the Earth is at the center of the planetary system and does not rotate. It also illustrates the important point that parallax need not be caused by any motion of the observer, contrary to some definitions of parallax that say it is, but may arise purely from motion of the observed.
Another method is to take two pictures of the Moon at the same time from two locations on Earth and compare the positions of the Moon relative to the stars. Using the orientation of the Earth, those two position measurements, and the distance between the two locations on the Earth, the distance to the Moon can be triangulated:
formula_0
This is the method referred to by Jules Verne in his 1865 novel "From the Earth to the Moon": Until then, many people had no idea how one could calculate the distance separating the Moon from the Earth. The circumstance was exploited to teach them that this distance was obtained by measuring the parallax of the Moon. If the word parallax appeared to amaze them, they were told that it was the angle subtended by two straight lines running from both ends of the Earth's radius to the Moon. If they had doubts about the perfection of this method, they were immediately shown that not only did this mean distance amount to a whole two hundred thirty-four thousand three hundred and forty-seven miles (94,330 leagues) but also that the astronomers were not in error by more than seventy miles (≈ 30 leagues).
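As a rough numerical illustration of the triangulation formula above (the baseline and parallax angle below are illustrative round numbers, not a real measurement):
import math
baseline_km = 6371.0                  # roughly one Earth radius between observers (assumed)
parallax_deg = 0.95                   # assumed observed angular shift against the stars
distance_km = baseline_km / math.tan(math.radians(parallax_deg))
print(distance_km)                    # on the order of 380,000 km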
Solar parallax.
After Copernicus proposed his heliocentric system, with the Earth in revolution around the Sun, it was possible to build a model of the whole Solar System without scale. To ascertain the scale, it is necessary only to measure one distance within the Solar System, e.g., the mean distance from the Earth to the Sun (now called an astronomical unit, or AU). When found by triangulation, this is referred to as the "solar parallax", the difference in position of the Sun as seen from the Earth's center and a point one Earth radius away, i.e., the angle subtended at the Sun by the Earth's mean radius. Knowing the solar parallax and the mean Earth radius allows one to calculate the AU, the first, small step on the long road of establishing the size and expansion age of the visible Universe.
A primitive way to determine the distance to the Sun in terms of the distance to the Moon was already proposed by Aristarchus of Samos in his book "On the Sizes and Distances of the Sun and Moon". He noted that the Sun, Moon, and Earth form a right triangle (with the right angle at the Moon) at the moment of first or last quarter moon. He then estimated that the Moon–Earth–Sun angle was 87°. Using correct geometry but inaccurate observational data, Aristarchus concluded that the Sun was slightly less than 20 times farther away than the Moon. The true value of this angle is close to 89° 50', and the Sun is about 390 times farther away.
Aristarchus pointed out that the Moon and Sun have nearly equal apparent angular sizes, and therefore their diameters must be in proportion to their distances from Earth. He thus concluded that the Sun was around 20 times larger than the Moon. This conclusion, although incorrect, follows logically from his incorrect data. It suggests that the Sun is larger than the Earth, which could be taken to support the heliocentric model.
Although Aristarchus' results were incorrect due to observational errors, they were based on correct geometric principles of parallax, and became the basis for estimates of the size of the Solar System for almost 2000 years, until the transit of Venus was correctly observed in 1761 and 1769. This method was proposed by Edmond Halley in 1716, although he did not live to see the results. The use of Venus transits was less successful than had been hoped due to the black drop effect, but the resulting estimate, 153 million kilometers, is just 2% above the currently accepted value, 149.6 million kilometers.
Much later, the Solar System was "scaled" using the parallax of asteroids, some of which, such as Eros, pass much closer to Earth than Venus. In a favorable opposition, Eros can approach the Earth to within 22 million kilometers. During the opposition of 1900–1901, a worldwide program was launched to make parallax measurements of Eros to determine the solar parallax (or distance to the Sun), with the results published in 1910 by Arthur Hinks of Cambridge and Charles D. Perrine of the Lick Observatory, University of California.
Perrine published progress reports in 1906 and 1908. He took 965 photographs with the Crossley Reflector and selected 525 for measurement. A similar program was then carried out, during a closer approach, in 1930–1931 by Harold Spencer Jones. The value of the Astronomical Unit (roughly the Earth-Sun distance) obtained by this program was considered definitive until 1968, when radar and dynamical parallax methods started producing more precise measurements.
Also radar reflections, both off Venus (1958) and off asteroids, like Icarus, have been used for solar parallax determination. Today, use of spacecraft telemetry links has solved this old problem. The currently accepted value of solar parallax is 8.794 arcseconds.
Moving-cluster parallax.
The open stellar cluster Hyades in Taurus extends over such a large part of the sky, 20 degrees, that the proper motions as derived from astrometry appear to converge with some precision to a perspective point north of Orion. Combining the observed apparent (angular) proper motion in seconds of arc with the also observed true (absolute) receding motion as witnessed by the Doppler redshift of the stellar spectral lines, allows estimation of the distance to the cluster (151 light-years) and its member stars in much the same way as using annual parallax.
Dynamical parallax.
Dynamical parallax has sometimes also been used to determine the distance to a supernova when the optical wavefront of the outburst is seen to propagate through the surrounding dust clouds at an apparent angular velocity, while its true propagation velocity is known to be the speed of light.
Spatio-temporal parallax.
From enhanced relativistic positioning systems, spatio-temporal parallax generalizing the usual notion of parallax in space only has been developed. Then, event fields in spacetime can be deduced directly without intermediate models of light bending by massive bodies such as the one used in the PPN formalism for instance.
Statistical parallax.
Two related techniques can determine the mean distances of stars by modelling the motions of stars. Both are referred to as statistical parallaxes, or individually called secular parallaxes and classical statistical parallaxes.
The motion of the Sun through space provides a longer baseline that will increase the accuracy of parallax measurements, known as secular parallax. For stars in the Milky Way disk, this corresponds to a mean baseline of 4 AU per year. For halo stars the baseline is 40 AU per year. After several decades, the baseline can be orders of magnitude greater than the Earth–Sun baseline used for traditional parallax. Secular parallax introduces a higher level of uncertainty, because the relative velocity of other stars is an additional unknown. When applied to samples of multiple stars, the uncertainty can be reduced; the precision is inversely proportional to the square root of the sample size.
The mean parallaxes and distances of a large group of stars can be estimated from their radial velocities and proper motions. This is known as a classical statistical parallax. The motions of the stars are modelled to statistically reproduce the velocity dispersion based on their distance.
Other methods for distance measurement in astronomy.
In astronomy, the term "parallax" has come to mean a method of estimating distances, not necessarily utilizing a true parallax, such as:
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{distance}_{\\mathrm{moon}} = \\frac {\\mathrm{distance}_{\\mathrm{observerbase}}} {\\tan (\\mathrm{angle})}"
}
]
| https://en.wikipedia.org/wiki?curid=72887334 |
72900340 | Neural radiance field | 3D reconstruction technique
A neural radiance field (NeRF) is a method based on deep learning for reconstructing a three-dimensional representation of a scene from two-dimensional images. The NeRF model enables downstream applications of novel view synthesis, scene geometry reconstruction, and obtaining the reflectance properties of the scene. Additional scene properties such as camera poses may also be jointly learned. First introduced in 2020, it has since gained significant attention for its potential applications in computer graphics and content creation.
Algorithm.
The NeRF algorithm represents a scene as a radiance field parametrized by a deep neural network (DNN). The network predicts a volume density and view-dependent emitted radiance given the spatial location ("x, y, z") and viewing direction in Euler angles ("θ, Φ") of the camera. By sampling many points along camera rays, traditional volume rendering techniques can produce an image.
Data collection.
A NeRF needs to be retrained for each unique scene. The first step is to collect images of the scene from different angles and their respective camera pose. These images are standard 2D images and do not require a specialized camera or software. Any camera is able to generate datasets, provided the settings and capture method meet the requirements for SfM (Structure from Motion).
This requires tracking of the camera position and orientation, often through some combination of SLAM, GPS, or inertial estimation. Researchers often use synthetic data to evaluate NeRF and related techniques. For such data, images (rendered through traditional non-learned methods) and respective camera poses are reproducible and error-free.
Training.
For each sparse viewpoint (image and camera pose) provided, camera rays are marched through the scene, generating a set of 3D points with a given radiance direction (into the camera). For these points, volume density and emitted radiance are predicted using the multi-layer perceptron (MLP). An image is then generated through classical volume rendering. Because this process is fully differentiable, the error between the predicted image and the original image can be minimized with gradient descent over multiple viewpoints, encouraging the MLP to develop a coherent model of the scene.
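The classical volume-rendering step can be written as a short numerical quadrature over the sampled points. The sketch below is one common discretization; it assumes the densities and colors for a single ray have already been predicted by the MLP, and the array names and use of NumPy are incidental choices:
import numpy as np
def render_ray(sigmas, colors, ts):
    # sigmas: (N,) predicted volume densities along the ray
    # colors: (N, 3) predicted emitted radiance at each sample
    # ts:     (N,) distances of the samples along the ray
    deltas = np.append(np.diff(ts), 1e10)                   # spacing between samples
    alphas = 1.0 - np.exp(-sigmas * deltas)                 # opacity of each segment
    trans = np.cumprod(np.append(1.0, 1.0 - alphas))[:-1]   # transmittance up to each sample
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)          # composited RGB pixel value
Because every step here is differentiable, the rendering loss can be backpropagated to the MLP parameters as described above.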
Variations and improvements.
Early versions of NeRF were slow to optimize and required that all input views were taken with the same camera in the same lighting conditions. These performed best when limited to orbiting around individual objects, such as a drum set, plants or small toys. Since the original paper in 2020, many improvements have been made to the NeRF algorithm, with variations for special use cases.
Fourier feature mapping.
In 2020, shortly after the release of NeRF, the addition of Fourier Feature Mapping improved training speed and image accuracy. Deep neural networks struggle to learn high frequency functions in low dimensional domains; a phenomenon known as spectral bias. To overcome this shortcoming, points are mapped to a higher dimensional feature space before being fed into the MLP.
formula_0
Where formula_1 is the input point, formula_2 are the frequency vectors, and formula_3 are coefficients.
This allows for rapid convergence to high frequency functions, such as pixels in a detailed image.
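A minimal sketch of such a mapping, following the common variant in which the frequency vectors are sampled from a Gaussian; the scale and number of frequencies below are arbitrary assumptions:
import numpy as np
rng = np.random.default_rng(0)
m, d = 64, 3                                  # number of frequencies, input dimension (assumed)
B = rng.normal(scale=10.0, size=(m, d))       # frequency vectors; the scale is an assumed hyperparameter
def fourier_features(v):
    proj = 2 * np.pi * B @ v                  # shape (m,)
    return np.concatenate([np.cos(proj), np.sin(proj)])   # shape (2m,), fed to the MLP
print(fourier_features(np.array([0.1, 0.2, 0.3])).shape)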
Bundle-adjusting neural radiance fields.
One limitation of NeRFs is the requirement of knowing accurate camera poses to train the model. Pose estimation methods are often not completely accurate, and in some cases the camera pose is not known at all. These imperfections result in artifacts and suboptimal convergence. A method was therefore developed to optimize the camera pose along with the volumetric function itself. Called Bundle-Adjusting Neural Radiance Field (BARF), the technique uses a dynamic low-pass filter to go from coarse to fine adjustment, minimizing error by finding the geometric transformation to the desired image. This corrects imperfect camera poses and greatly improves the quality of NeRF renders.
Multiscale representation.
Conventional NeRFs struggle to represent detail at all viewing distances, producing blurry images up close and overly aliased images from distant views. In 2021, researchers introduced a technique known as mip-NeRF (the name comes from mipmap) to improve the sharpness of details at different viewing scales. Rather than sampling a single ray per pixel, the technique fits a Gaussian to the conical frustum cast by the camera. This improvement effectively anti-aliases across all viewing scales. mip-NeRF also reduces overall image error and converges faster, while being roughly half the size of a ray-based NeRF.
Learned initializations.
In 2021, researchers applied meta-learning to assign initial weights to the MLP. This rapidly speeds up convergence by effectively giving the network a head start in gradient descent. Meta-learning also allowed the MLP to learn an underlying representation of certain scene types. For example, given a dataset of famous tourist landmarks, an initialized NeRF could partially reconstruct a scene given one image.
NeRF in the wild.
Conventional NeRFs are vulnerable to slight variations in input images (objects, lighting) often resulting in ghosting and artifacts. As a result, NeRFs struggle to represent dynamic scenes, such as bustling city streets with changes in lighting and dynamic objects. In 2021, researchers at Google developed a new method for accounting for these variations, named NeRF in the Wild (NeRF-W). This method splits the neural network (MLP) into three separate models. The main MLP is retained to encode the static volumetric radiance. However, it operates in sequence with a separate MLP for appearance embedding (changes in lighting, camera properties) and an MLP for transient embedding (changes in scene objects). This allows the NeRF to be trained on diverse photo collections, such as those taken by mobile phones at different times of day.
Relighting.
In 2021, researchers added more outputs to the MLP at the heart of NeRFs. The output now included: volume density, surface normal, material parameters, distance to the first surface intersection (in any direction), and visibility of the external environment in any direction. The inclusion of these new parameters lets the MLP learn material properties, rather than pure radiance values. This facilitates a more complex rendering pipeline, calculating direct and global illumination, specular highlights, and shadows. As a result, the NeRF can render the scene under any lighting conditions with no re-training.
Plenoctrees.
Although NeRFs had reached high levels of fidelity, their costly compute time made them useless for many applications requiring real-time rendering, such as VR/AR and interactive content. Introduced in 2021, Plenoctrees (plenoptic octrees) enabled real-time rendering of pre-trained NeRFs through division of the volumetric radiance function into an octree. Rather than assigning a radiance direction into the camera, viewing direction is taken out of the network input and spherical radiance is predicted for each region. This makes rendering over 3000x faster than conventional NeRFs.
Sparse Neural Radiance Grid.
Similar to Plenoctrees, this method enabled real-time rendering of pretrained NeRFs. To avoid querying the large MLP for each point, this method bakes NeRFs into Sparse Neural Radiance Grids (SNeRG). A SNeRG is a sparse voxel grid containing opacity and color, with learned feature vectors to encode view-dependent information. A lightweight, more efficient MLP is then used to produce view-dependent residuals to modify the color and opacity. To enable this compressive baking, small changes to the NeRF architecture were made, such as running the MLP once per pixel rather than for each point along the ray. These improvements make SNeRG extremely efficient, outperforming Plenoctrees.
Instant NeRFs.
In 2022, researchers at Nvidia enabled real-time training of NeRFs through a technique known as Instant Neural Graphics Primitives. An innovative input encoding reduces computation, enabling real-time training of a NeRF, an improvement orders of magnitude above previous methods. The speedup stems from the use of spatial hash functions, which have formula_4 access times, and parallelized architectures which run fast on modern GPUs.
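As an illustrative sketch, a spatial hash of this kind maps an integer grid coordinate to a slot of a fixed-size table in constant time; the prime multipliers and table size below follow commonly cited choices for this technique but should be treated as assumptions rather than the exact values of any particular implementation:
PRIMES = (1, 2654435761, 805459861)      # per-dimension multipliers (assumed values)
TABLE_SIZE = 2 ** 19                     # hash table entries at one resolution level (assumed)
def spatial_hash(ix, iy, iz):
    h = (ix * PRIMES[0]) ^ (iy * PRIMES[1]) ^ (iz * PRIMES[2])
    return h % TABLE_SIZE                # O(1) lookup into the learned feature table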
Related techniques.
Plenoxels.
Plenoxel (plenoptic volume element) uses a sparse voxel representation instead of a volumetric approach as seen in NeRFs. Plenoxel also completely removes the MLP, instead directly performing gradient descent on the voxel coefficients. Plenoxel can match the fidelity of a conventional NeRF in orders of magnitude less training time. Published in 2022, this method disproved the importance of the MLP, showing that the differentiable rendering pipeline is the critical component.
Gaussian splatting.
Gaussian splatting is a newer method that can outperform NeRF in render time and fidelity. Rather than representing the scene as a volumetric function, it uses a sparse cloud of 3D gaussians. First, a point cloud is generated (through structure from motion) and converted to gaussians of initial covariance, color, and opacity. The gaussians are directly optimized through stochastic gradient descent to match the input image. This saves computation by removing empty space and foregoing the need to query a neural network for each point. Instead, simply "splat" all the gaussians onto the screen and they overlap to produce the desired image.
Photogrammetry.
Traditional photogrammetry is not neural, instead using robust geometric equations to obtain 3D measurements. NeRFs, unlike photogrammetric methods, do not inherently produce dimensionally accurate 3D geometry. While their results are often sufficient for extracting accurate geometry (e.g., via marching cubes), the process is fuzzy, as with most neural methods. This limits NeRF to cases where the output image is valued, rather than raw scene geometry. However, NeRFs excel in situations with unfavorable lighting. For example, photogrammetric methods completely break down when trying to reconstruct reflective or transparent objects in a scene, while a NeRF is able to infer the geometry.
Applications.
NeRFs have a wide range of applications, and are starting to grow in popularity as they become integrated into user-friendly applications.
Content creation.
NeRFs have huge potential in content creation, where on-demand photorealistic views are extremely valuable. The technology democratizes a space previously only accessible by teams of VFX artists with expensive assets. Neural radiance fields now allow anyone with a camera to create compelling 3D environments. NeRF has been combined with generative AI, allowing users with no modelling experience to instruct changes in photorealistic 3D scenes. NeRFs have potential uses in video production, computer graphics, and product design.
Interactive content.
The photorealism of NeRFs make them appealing for applications where immersion is important, such as virtual reality or videogames. NeRFs can be combined with classical rendering techniques to insert synthetic objects and create believable virtual experiences.
Medical imaging.
NeRFs have been used to reconstruct 3D CT scans from sparse or even single X-ray views. The model demonstrated high fidelity renderings of chest and knee data. If adopted, this method can save patients from excess doses of ionizing radiation, allowing for safer diagnosis.
Robotics and autonomy.
The unique ability of NeRFs to understand transparent and reflective objects makes them useful for robots interacting in such environments. The use of NeRF allowed a robot arm to precisely manipulate a transparent wine glass; a task where traditional computer vision would struggle.
NeRFs can also generate photorealistic human faces, making them valuable tools for human-computer interaction. Traditionally rendered faces can be uncanny, while other neural methods are too slow to run in real-time.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\gamma(\\mathrm{v}) = \\begin{bmatrix} a_1 \\cos(2{\\pi} {\\Beta}_1^T \\mathrm{v}) \\\\ a_1 \\sin(2\\pi {\\Beta}_1^T \\mathrm{v}) \\\\ \\vdots \\\\ a_m \\cos(2{\\pi} {\\Beta}_m^T \\mathrm{v}) \\\\ a_m \\sin(2{\\pi} {\\Beta}_m^T \\mathrm{v}) \\end{bmatrix}"
},
{
"math_id": 1,
"text": "\\mathrm{v}"
},
{
"math_id": 2,
"text": "\\Beta_i"
},
{
"math_id": 3,
"text": "a_i"
},
{
"math_id": 4,
"text": "O(1)"
}
]
| https://en.wikipedia.org/wiki?curid=72900340 |
7290120 | Methods of detecting exoplanets | Any planet is an extremely faint light source compared to its parent star. For example, a star like the Sun is about a billion times as bright as the reflected light from any of the planets orbiting it. In addition to the intrinsic difficulty of detecting such a faint light source, the light from the parent star causes a glare that washes it out. For those reasons, very few of the exoplanets reported as of 2024[ [update]] have been observed directly, with even fewer being resolved from their host star.
Instead, astronomers have generally had to resort to indirect methods to detect extrasolar planets. As of 2016, several different indirect methods have yielded success.
Established detection methods.
The following methods have at least once proved successful for discovering a new planet or detecting an already discovered planet:
Radial velocity.
A star with a planet will move in its own small orbit in response to the planet's gravity. This leads to variations in the speed with which the star moves toward or away from Earth, i.e. the variations are in the radial velocity of the star with respect to Earth. The radial velocity can be deduced from the displacement in the parent star's spectral lines due to the Doppler effect. The radial-velocity method measures these variations in order to confirm the presence of the planet using the binary mass function.
The speed of the star around the system's center of mass is much smaller than that of the planet, because the radius of its orbit around the center of mass is so small. (For example, the Sun moves by about 13 m/s due to Jupiter, but only about 9 cm/s due to Earth). However, velocity variations down to 3 m/s or even somewhat less can be detected with modern spectrometers, such as the HARPS (High Accuracy Radial Velocity Planet Searcher) spectrometer at the ESO 3.6 meter telescope in La Silla Observatory, Chile, the HIRES spectrometer at the Keck telescopes or EXPRES at the Lowell Discovery Telescope.
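The reflex velocities quoted above can be reproduced from the standard radial-velocity semi-amplitude relation (not stated explicitly in the text); the following sketch assumes circular, edge-on orbits with the planet mass negligible relative to the star, and is only an order-of-magnitude check:
import math
G = 6.674e-11                       # gravitational constant, m^3 kg^-1 s^-2
M_sun, M_jup, M_earth = 1.989e30, 1.898e27, 5.972e24   # kg
year = 3.156e7                      # seconds
def rv_semi_amplitude(m_planet, period_s, m_star=M_sun):
    # K = (2*pi*G / P)**(1/3) * m_p / m_star**(2/3), circular orbit, sin(i) = 1
    return (2 * math.pi * G / period_s) ** (1 / 3) * m_planet / m_star ** (2 / 3)
print(rv_semi_amplitude(M_jup, 11.86 * year))   # ≈ 12–13 m/s (Jupiter's pull on the Sun)
print(rv_semi_amplitude(M_earth, 1.0 * year))   # ≈ 0.09 m/s (Earth's pull on the Sun)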
An especially simple and inexpensive method for measuring radial velocity is "externally dispersed interferometry".
Until around 2012, the radial-velocity method (also known as Doppler spectroscopy) was by far the most productive technique used by planet hunters. (After 2012, the transit method from the Kepler space telescope overtook it in number.) The radial velocity signal is distance independent, but requires high signal-to-noise ratio spectra to achieve high precision, and so is generally used only for relatively nearby stars, out to about 160 light-years from Earth, to find lower-mass planets. It is also not possible to simultaneously observe many target stars at a time with a single telescope. Planets of Jovian mass can be detectable around stars up to a few thousand light years away. This method easily finds massive planets that are close to stars. Modern spectrographs can also easily detect Jupiter-mass planets orbiting 10 astronomical units away from the parent star, but detection of those planets requires many years of observation. Earth-mass planets are currently detectable only in very small orbits around low-mass stars, e.g. Proxima b.
It is easier to detect planets around low-mass stars, for two reasons: First, these stars are more strongly affected by the gravitational tug of their planets. Second, low-mass main-sequence stars generally rotate relatively slowly. Fast rotation makes spectral-line data less clear because half of the star quickly rotates away from the observer's viewpoint while the other half approaches. Detecting planets around more massive stars is easier if the star has left the main sequence, because leaving the main sequence slows down the star's rotation.
Sometimes Doppler spectrography produces false signals, especially in multi-planet and multi-star systems. Magnetic fields and certain types of stellar activity can also give false signals. When the host star has multiple planets, false signals can also arise from having insufficient data, so that multiple solutions can fit the data, as stars are not generally observed continuously. Some of the false signals can be eliminated by analyzing the stability of the planetary system, conducting photometry analysis on the host star and knowing its rotation period and stellar activity cycle periods.
Planets with orbits highly inclined to the line of sight from Earth produce smaller visible wobbles, and are thus more difficult to detect. One of the advantages of the radial velocity method is that eccentricity of the planet's orbit can be measured directly. One of the main disadvantages of the radial-velocity method is that it can only estimate a planet's minimum mass (formula_0). The posterior distribution of the inclination angle "i" depends on the true mass distribution of the planets. However, when there are multiple planets in the system that orbit relatively close to each other and have sufficient mass, orbital stability analysis allows one to constrain the maximum mass of these planets. The radial-velocity method can be used to confirm findings made by the transit method. When both methods are used in combination, then the planet's true mass can be estimated.
Although radial velocity of the star only gives a planet's minimum mass, if the planet's spectral lines can be distinguished from the star's spectral lines then the radial velocity of the planet itself can be found, and this gives the inclination of the planet's orbit. This enables measurement of the planet's actual mass. This also rules out false positives, and also provides data about the composition of the planet. The main issue is that such detection is possible only if the planet orbits around a relatively bright star and if the planet reflects or emits a lot of light.
Transit photometry.
Technique, advantages, and disadvantages.
While the radial velocity method provides information about a planet's mass, the photometric method can determine the planet's radius. If a planet crosses (transits) in front of its parent star's disk, then the observed visual brightness of the star drops by a small amount, depending on the relative sizes of the star and the planet. For example, in the case of HD 209458, the star dims by 1.7%. However, most transit signals are considerably smaller; for example, an Earth-size planet transiting a Sun-like star produces a dimming of only 80 parts per million (0.008 percent).
A theoretical transiting exoplanet light curve model predicts the following characteristics of an observed planetary system: transit depth (δ), transit duration (T), the ingress/egress duration (τ), and period of the exoplanet (P). However, these observed quantities are based on several assumptions. For convenience in the calculations, we assume that the planet and star are spherical, the stellar disk is uniform, and the orbit is circular. The observed physical parameters of the light curve depend on where the planet crosses the stellar disk during its transit. The transit depth (δ) of a transiting light curve describes the decrease in the normalized flux of the star during a transit. It reflects the radius of the exoplanet compared to the radius of the star: for a star of a given size, a planet with a larger radius increases the transit depth and a planet with a smaller radius decreases it. The transit duration (T) of an exoplanet is the length of time that the planet spends transiting the star; it depends on how fast or slow the planet is moving in its orbit as it crosses the star. The ingress/egress duration (τ) of a transiting light curve describes the length of time the planet takes to fully cover the star (ingress) and fully uncover the star (egress). If a planet crosses the full diameter of the star, the ingress/egress duration is shorter because it takes less time for the planet to fully cover the star; if it crosses along a chord away from the diameter, the ingress/egress duration lengthens because the planet spends a longer time only partially covering the star during its transit. From these observable parameters, a number of different physical parameters (semi-major axis, star mass, star radius, planet radius, eccentricity, and inclination) are determined through calculations. With the combination of radial velocity measurements of the star, the mass of the planet is also determined.

This method has two major disadvantages. First, planetary transits are observable only when the planet's orbit happens to be perfectly aligned from the astronomers' vantage point. The probability of a planetary orbital plane being directly on the line-of-sight to a star is the ratio of the diameter of the star to the diameter of the orbit (in small stars, the radius of the planet is also an important factor). About 10% of planets with small orbits have such an alignment, and the fraction decreases for planets with larger orbits. For a planet orbiting a Sun-sized star at 1 AU, the probability of a random alignment producing a transit is 0.47%. Therefore, the method cannot guarantee that any particular star is not a host to planets. However, by scanning large areas of the sky containing thousands or even hundreds of thousands of stars at once, transit surveys can find more extrasolar planets than the radial-velocity method. Several surveys have taken that approach, such as the ground-based MEarth Project, SuperWASP, KELT, and HATNet, as well as the space-based COROT, Kepler and TESS missions. The transit method also has the advantage of detecting planets around stars that are located a few thousand light years away. The most distant planets detected by the Sagittarius Window Eclipsing Extrasolar Planet Search are located near the galactic center.
However, reliable follow-up observations of these stars are nearly impossible with current technology.
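The transit depth and the geometric transit probability quoted above follow from two simple ratios, δ ≈ (R_p/R_star)^2 and p ≈ R_star/a. A minimal sketch (not from the source; nominal solar, Earth and Jupiter radii are assumed):

R_SUN = 6.957e8      # m
R_EARTH = 6.371e6    # m
R_JUP = 6.991e7      # m
AU = 1.496e11        # m

def transit_depth(r_planet, r_star):
    """Fractional dimming during transit, (Rp/Rs)^2."""
    return (r_planet / r_star) ** 2

def transit_probability(r_star, a):
    """Geometric probability of a transit for a randomly oriented circular orbit."""
    return r_star / a

print(transit_depth(R_EARTH, R_SUN) * 1e6)       # ~84 ppm for an Earth-size planet crossing a Sun-like star
print(transit_depth(R_JUP, R_SUN) * 100)         # ~1% for a Jupiter-size planet
print(transit_probability(R_SUN, 1 * AU) * 100)  # ~0.47% alignment probability at 1 AU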
The second disadvantage of this method is a high rate of false detections. A 2012 study found that the rate of false positives for transits observed by the Kepler mission could be as high as 40% in single-planet systems. For this reason, a star with a single transit detection requires additional confirmation, typically from the radial-velocity method or orbital brightness modulation method. The radial velocity method is especially necessary for Jupiter-sized or larger planets, as objects of that size encompass not only planets, but also brown dwarfs and even small stars. As the false positive rate is very low in stars with two or more planet candidates, such detections often can be validated without extensive follow-up observations. Some can also be confirmed through the transit timing variation method.
Many points of light in the sky have brightness variations that can mimic transiting planets in flux measurements. False positives in the transit photometry method arise in three common forms: blended eclipsing binary systems, grazing eclipsing binary systems, and transits by planet-sized stars. Eclipsing binary systems usually produce deep eclipses that distinguish them from exoplanet transits, since planets are usually smaller than about 2 RJ, but eclipses are shallower for blended or grazing eclipsing binary systems.
Blended eclipsing binary systems consist of a normal eclipsing binary blended with a third (usually brighter) star along the same line of sight, usually at a different distance. The constant light of the third star dilutes the measured eclipse depth, so the light-curve may resemble that for a transiting exoplanet. In these cases, the target most often contains a large main sequence primary with a small main sequence secondary or a giant star with a main sequence secondary.
Grazing eclipsing binary systems are systems in which one object just barely grazes the limb of the other. In these cases, the maximum transit depth of the light curve will not be proportional to the ratio of the squares of the radii of the two stars, but will instead depend solely on the small fraction of the primary that is blocked by the secondary. The small measured dip in flux can mimic that of an exoplanet transit. Some of the false positive cases of this category can be easily found if the eclipsing binary system has a circular orbit, with the two companions having different masses. Due to the cyclic nature of the orbit, there would be two eclipsing events, one of the primary occulting the secondary and vice versa. If the two stars have significantly different masses, and thus different radii and luminosities, then these two eclipses would have different depths. This repetition of a shallow and deep transit event can easily be detected and thus allow the system to be recognized as a grazing eclipsing binary system. However, if the two stellar companions are approximately the same mass, then these two eclipses would be indistinguishable, thus making it impossible to demonstrate that a grazing eclipsing binary system is being observed using only the transit photometry measurements.
Finally, there are two types of stars that are approximately the same size as gas giant planets: white dwarfs and brown dwarfs. This is because gas giant planets, white dwarfs, and brown dwarfs are all supported by degenerate electron pressure. The light curve does not discriminate between masses, as it only depends on the size of the transiting object. When possible, radial velocity measurements are used to verify that the transiting or eclipsing body is of planetary mass, meaning less than 13 MJ. Transit timing variations can also determine MP. Doppler tomography with a known radial velocity orbit can obtain the minimum MP and the projected spin-orbit alignment.
Red giant branch stars have another issue for detecting planets around them: while planets around these stars are much more likely to transit due to the larger star size, these transit signals are hard to separate from the main star's brightness light curve as red giants have frequent pulsations in brightness with a period of a few hours to days. This is especially notable with subgiants. In addition, these stars are much more luminous, and transiting planets block a much smaller percentage of light coming from these stars. In contrast, planets can completely occult a very small star such as a neutron star or white dwarf, an event which would be easily detectable from Earth. However, due to the small star sizes, the chance of a planet aligning with such a stellar remnant is extremely small.
The main advantage of the transit method is that the size of the planet can be determined from the light curve. When combined with the radial-velocity method (which determines the planet's mass), one can determine the density of the planet, and hence learn something about the planet's physical structure. The planets that have been studied by both methods are by far the best-characterized of all known exoplanets.
The transit method also makes it possible to study the atmosphere of the transiting planet. When the planet transits the star, light from the star passes through the upper atmosphere of the planet. By studying the high-resolution stellar spectrum carefully, one can detect elements present in the planet's atmosphere. A planetary atmosphere, and planet for that matter, could also be detected by measuring the polarization of the starlight as it passed through or is reflected off the planet's atmosphere.
Additionally, the secondary eclipse (when the planet is blocked by its star) allows direct measurement of the planet's radiation and helps to constrain the planet's orbital eccentricity without needing the presence of other planets. If the star's photometric intensity during the secondary eclipse is subtracted from its intensity before or after, only the signal caused by the planet remains. It is then possible to measure the planet's temperature and even to detect possible signs of cloud formations on it. In March 2005, two groups of scientists carried out measurements using this technique with the Spitzer Space Telescope. The two teams, from the Harvard-Smithsonian Center for Astrophysics, led by David Charbonneau, and the Goddard Space Flight Center, led by L. D. Deming, studied the planets TrES-1 and HD 209458b respectively. The measurements revealed the planets' temperatures: 1,060 K (790 °C) for TrES-1 and about 1,130 K (860 °C) for HD 209458b. In addition, the hot Neptune Gliese 436 b is known to enter secondary eclipse. However, some transiting planets orbit such that they do not enter secondary eclipse relative to Earth; HD 17156 b is over 90% likely to be one of the latter.
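For context, an idealized estimate (not the measurement technique described above) of a planet's temperature is the equilibrium temperature, T_eq = T_star * sqrt(R_star/(2a)) * (1 - A)^(1/4), assuming the absorbed starlight is re-radiated over the whole planet. Measured brightness temperatures such as those quoted for TrES-1 and HD 209458b can differ from this because of albedo, heat redistribution and atmospheric effects. A minimal sketch with assumed example values:

R_SUN = 6.957e8   # m
AU = 1.496e11     # m

def equilibrium_temperature(t_star, r_star, a, bond_albedo=0.0):
    """Equilibrium temperature in kelvin, assuming full heat redistribution."""
    return t_star * (r_star / (2 * a)) ** 0.5 * (1 - bond_albedo) ** 0.25

# Hot Jupiter on a 0.047 AU orbit around a Sun-like star (illustrative values only)
print(equilibrium_temperature(5780, 1.0 * R_SUN, 0.047 * AU))  # ~1,300 K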
History.
The first exoplanet for which transits were observed was HD 209458 b, which had been discovered using the radial velocity technique. These transits were observed in 1999 by two teams, led by David Charbonneau and Gregory W. Henry. The first exoplanet to be discovered with the transit method was OGLE-TR-56b, found in 2002 by the OGLE project.
A French Space Agency mission, CoRoT, began in 2006 to search for planetary transits from orbit, where the absence of atmospheric scintillation allows improved accuracy. This mission was designed to be able to detect planets "a few times to several times larger than Earth" and performed "better than expected", with two exoplanet discoveries (both of the "hot Jupiter" type) as of early 2008. In June 2013, CoRoT's exoplanet count was 32 with several still to be confirmed. The satellite unexpectedly stopped transmitting data in November 2012 (after its mission had twice been extended), and was retired in June 2013.
In March 2009, NASA mission Kepler was launched to scan a large number of stars in the constellation Cygnus with a measurement precision expected to detect and characterize Earth-sized planets. The NASA Kepler Mission uses the transit method to scan a hundred thousand stars for planets. It was hoped that by the end of its mission of 3.5 years, the satellite would have collected enough data to reveal planets even smaller than Earth. By scanning a hundred thousand stars simultaneously, it was not only able to detect Earth-sized planets, it was able to collect statistics on the numbers of such planets around Sun-like stars.
On 2 February 2011, the Kepler team released a list of 1,235 extrasolar planet candidates, including 54 that may be in the habitable zone. On 5 December 2011, the Kepler team announced that they had discovered 2,326 planetary candidates, of which 207 are similar in size to Earth, 680 are super-Earth-size, 1,181 are Neptune-size, 203 are Jupiter-size and 55 are larger than Jupiter. Compared to the February 2011 figures, the number of Earth-size and super-Earth-size planets increased by 200% and 140% respectively. Moreover, 48 planet candidates were found in the habitable zones of surveyed stars, marking a decrease from the February figure; this was due to the more stringent criteria in use in the December data. By June 2013, the number of planet candidates was increased to 3,278 and some confirmed planets were smaller than Earth, some even Mars-sized (such as Kepler-62c) and one even smaller than Mercury (Kepler-37b).
The Transiting Exoplanet Survey Satellite launched in April 2018.
Reflection and emission modulations.
Short-period planets in close orbits around their stars will undergo reflected light variations because, like the Moon, they will go through phases from full to new and back again. In addition, as these planets receive a lot of starlight, it heats them, making thermal emissions potentially detectable. Since telescopes cannot resolve the planet from the star, they see only the combined light, and the brightness of the host star seems to change over each orbit in a periodic manner. Although the effect is small (the photometric precision required is about the same as that needed to detect an Earth-sized planet in transit across a solar-type star), such Jupiter-sized planets with an orbital period of a few days are detectable by space telescopes such as the Kepler Space Observatory. Like with the transit method, it is easier to detect large planets orbiting close to their parent star than other planets, as these planets catch more light from their parent star. When a planet has a high albedo and is situated around a relatively luminous star, its light variations are easier to detect in visible light, while darker planets or planets around low-temperature stars are more easily detectable with infrared light with this method. In the long run, this method may find the most planets to be discovered by Kepler, because the reflected light variation with orbital phase is largely independent of orbital inclination and does not require the planet to pass in front of the disk of the star. It still cannot detect planets with circular face-on orbits from Earth's viewpoint, as the amount of reflected light does not change during the planet's orbit.
The phase function of the giant planet is also a function of its thermal properties and atmosphere, if any. Therefore, the phase curve may constrain other planet properties, such as the size distribution of atmospheric particles. When a planet is found transiting and its size is known, the phase variation curve helps calculate or constrain the planet's albedo. It is more difficult with very hot planets, as the glow of the planet can interfere when trying to calculate albedo. In theory, albedo can also be found for non-transiting planets when observing the light variations at multiple wavelengths. This allows scientists to find the size of the planet even if the planet is not transiting the star.
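When the planet's size is known from transits, the full-phase flux ratio, roughly A_g * (R_p/a)^2 with A_g the geometric albedo, links the measured phase-curve amplitude to the albedo. A minimal sketch (not from the source; radii, separations and albedos are assumed example values):

R_JUP = 6.991e7    # m
R_EARTH = 6.371e6  # m
AU = 1.496e11      # m

def reflected_flux_ratio(geometric_albedo, r_planet, a):
    """Planet-to-star flux ratio at full phase for reflected light."""
    return geometric_albedo * (r_planet / a) ** 2

print(reflected_flux_ratio(0.1, R_JUP, 0.05 * AU) * 1e6)   # ~9 ppm: dark hot Jupiter
print(reflected_flux_ratio(0.3, R_EARTH, 1.0 * AU) * 1e9)  # ~0.5 ppb: Earth twin at 1 AU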
The first-ever direct detection of the spectrum of visible light reflected from an exoplanet was made in 2015 by an international team of astronomers. The astronomers studied light from 51 Pegasi b – the first exoplanet discovered orbiting a main-sequence star (a Sunlike star), using the High Accuracy Radial velocity Planet Searcher (HARPS) instrument at the European Southern Observatory's La Silla Observatory in Chile.
Both CoRoT and Kepler have measured the reflected light from planets. However, these planets were already known since they transit their host star. The first planets discovered by this method are Kepler-70b and Kepler-70c, found by Kepler.
Relativistic beaming.
A separate novel method to detect exoplanets from light variations uses relativistic beaming of the observed flux from the star due to its motion. It is also known as Doppler beaming or Doppler boosting. The method was first proposed by Abraham Loeb and Scott Gaudi in 2003. As the planet tugs the star with its gravitation, the density of photons and therefore the apparent brightness of the star changes from observer's viewpoint. Like the radial velocity method, it can be used to determine the orbital eccentricity and the minimum mass of the planet. With this method, it is easier to detect massive planets close to their stars as these factors increase the star's motion. Unlike the radial velocity method, it does not require an accurate spectrum of a star, and therefore can be used more easily to find planets around fast-rotating stars and more distant stars.
One of the biggest disadvantages of this method is that the light variation effect is very small. A Jovian-mass planet orbiting 0.025 AU away from a Sun-like star is barely detectable even when the orbit is edge-on. This is not an ideal method for discovering new planets, as the amount of emitted and reflected starlight from the planet is usually much larger than light variations due to relativistic beaming. This method is still useful, however, as it allows for measurement of the planet's mass without the need for follow-up data collection from radial velocity observations.
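A rough order-of-magnitude sketch (an assumption-laden illustration, not the source's calculation): for a Sun-like star observed in broad optical light the beaming signal is approximately ΔF/F ≈ 4 v_r/c, where v_r is the star's radial-velocity amplitude, so even a hot Jupiter produces only a few parts per million:

C = 2.998e8  # speed of light, m/s

def beaming_amplitude(k_star, spectral_factor=4.0):
    """Approximate fractional flux variation from Doppler beaming.

    spectral_factor ~ 4 is a common broad-band approximation for a Sun-like star."""
    return spectral_factor * k_star / C

print(beaming_amplitude(200.0) * 1e6)  # ~2.7 ppm for a hot Jupiter with K = 200 m/s
print(beaming_amplitude(0.09) * 1e9)   # ~1.2 ppb for an Earth analogue (K ~ 9 cm/s)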
The first discovery of a planet using this method (Kepler-76b) was announced in 2013.
Ellipsoidal variations.
Massive planets can cause slight tidal distortions to their host stars. When a star has a slightly ellipsoidal shape, its apparent brightness varies, depending on whether the oblate part of the star is facing the observer's viewpoint. Like the relativistic beaming method, it helps to determine the minimum mass of the planet, and its sensitivity depends on the planet's orbital inclination. The extent of the effect on a star's apparent brightness can be much larger than with the relativistic beaming method, but the brightness-changing cycle is twice as fast. In addition, the planet distorts the shape of the star more if it has a low semi-major axis to stellar radius ratio and the density of the star is low. This makes this method suitable for finding planets around stars that have left the main sequence.
Pulsar timing.
A pulsar is a neutron star: the small, ultradense remnant of a star that has exploded as a supernova. Pulsars emit radio waves extremely regularly as they rotate. Because the intrinsic rotation of a pulsar is so regular, slight anomalies in the timing of its observed radio pulses can be used to track the pulsar's motion. Like an ordinary star, a pulsar will move in its own small orbit if it has a planet. Calculations based on pulse-timing observations can then reveal the parameters of that orbit.
This method was not originally designed for the detection of planets, but is so sensitive that it is capable of detecting planets far smaller than any other method can, down to less than a tenth the mass of Earth. It is also capable of detecting mutual gravitational perturbations between the various members of a planetary system, thereby revealing further information about those planets and their orbital parameters. In addition, it can easily detect planets which are relatively far away from the pulsar.
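A minimal sketch (not from the source; the pulsar mass and orbit are assumed example values) of why pulsar timing is so sensitive: the planet shifts the pulsar around the barycentre by a_pulsar = a * m_p/(M_pulsar + m_p), and the corresponding light-travel-time delay, a_pulsar * sin i / c, is the timing signature:

M_SUN = 1.989e30     # kg
M_EARTH = 5.972e24   # kg
AU = 1.496e11        # m
C = 2.998e8          # m/s

def timing_amplitude(m_planet, a, m_pulsar=1.4 * M_SUN, sin_i=1.0):
    """Peak light-travel-time delay (seconds) induced by a planet on its pulsar."""
    a_pulsar = a * m_planet / (m_pulsar + m_planet)
    return a_pulsar * sin_i / C

# An Earth-mass planet 0.5 AU from a 1.4-solar-mass pulsar gives ~0.5 ms residuals,
# easily seen against the microsecond-level timing precision of millisecond pulsars.
print(timing_amplitude(M_EARTH, 0.5 * AU) * 1e3)  # milliseconds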
There are two main drawbacks to the pulsar timing method: pulsars are relatively rare, and special circumstances are required for a planet to form around a pulsar. Therefore, it is unlikely that a large number of planets will be found this way. Additionally, life would likely not survive on planets orbiting pulsars due to the high intensity of ambient radiation.
In 1992, Aleksander Wolszczan and Dale Frail used this method to discover planets around the pulsar PSR 1257+12. Their discovery was confirmed by 1994, making it the first confirmation of planets outside the Solar System.
Variable star timing.
Like pulsars, some other types of pulsating variable stars are regular enough that radial velocity could be determined purely photometrically from the Doppler shift of the pulsation frequency, without needing spectroscopy. This method is not as sensitive as the pulsar timing variation method, due to the periodic activity being longer and less regular. The ease of detecting planets around a variable star depends on the pulsation period of the star, the regularity of pulsations, the mass of the planet, and its distance from the host star.
The first success with this method came in 2007, when V391 Pegasi b was discovered around a pulsating subdwarf star.
Transit timing.
The transit timing variation method considers whether transits occur with strict periodicity, or if there is a variation. When multiple transiting planets are detected, they can often be confirmed with the transit timing variation method. This is useful in planetary systems far from the Sun, where radial velocity methods cannot detect them due to the low signal-to-noise ratio. If a planet has been detected by the transit method, then variations in the timing of the transit provide an extremely sensitive method of detecting additional non-transiting planets in the system with masses comparable to Earth's. It is easier to detect transit-timing variations if planets have relatively close orbits, and when at least one of the planets is more massive, causing the orbital period of a less massive planet to be more perturbed.
The main drawback of the transit timing method is that usually not much can be learnt about the planet itself. Transit timing variation can help to determine the maximum mass of a planet. In most cases, it can confirm if an object has a planetary mass, but it does not put narrow constraints on its mass. There are exceptions though, as planets in the Kepler-36 and Kepler-88 systems orbit close enough to accurately determine their masses.
The first significant detection of a non-transiting planet using TTV was carried out with NASA's Kepler space telescope. The transiting planet Kepler-19b shows TTV with an amplitude of five minutes and a period of about 300 days, indicating the presence of a second planet, Kepler-19c, which has a period which is a near-rational multiple of the period of the transiting planet.
In circumbinary planets, variations of transit timing are mainly caused by the orbital motion of the stars, instead of gravitational perturbations by other planets. These variations make it harder to detect these planets through automated methods. However, it makes these planets easy to confirm once they are detected.
Transit duration variation.
"Duration variation" refers to changes in how long the transit takes. Duration variations may be caused by an exomoon, apsidal precession for eccentric planets due to another planet in the same system, or general relativity.
When a circumbinary planet is found through the transit method, it can be easily confirmed with the transit duration variation method. In close binary systems, the stars significantly alter the motion of the companion, meaning that any transiting planet has significant variation in transit duration. The first such confirmation came from Kepler-16b.
Eclipsing binary minima timing.
When a binary star system is aligned such that – from the Earth's point of view – the stars pass in front of each other in their orbits, the system is called an "eclipsing binary" star system. The time of minimum light, when the star with the brighter surface is at least partially obscured by the disc of the other star, is called the primary eclipse, and approximately half an orbit later, the secondary eclipse occurs when the brighter surface area star obscures some portion of the other star. These times of minimum light, or central eclipses, constitute a time stamp on the system, much like the pulses from a pulsar (except that rather than a flash, they are a dip in brightness). If there is a planet in circumbinary orbit around the binary stars, the stars will be offset around a binary-planet center of mass. As the stars in the binary are displaced back and forth by the planet, the times of the eclipse minima will vary. The periodicity of this offset may be the most reliable way to detect extrasolar planets around close binary systems. With this method, planets are more easily detectable if they are more massive, orbit relatively closely around the system, and if the stars have low masses.
The eclipsing timing method allows the detection of planets further away from the host star than the transit method. However, signals around cataclysmic variable stars hinting at planets tend to correspond to unstable orbits. In 2011, Kepler-16b became the first planet to be definitively characterized via eclipsing binary timing variations.
Gravitational microlensing.
Gravitational microlensing occurs when the gravitational field of a star acts like a lens, magnifying the light of a distant background star. This effect occurs only when the two stars are almost exactly aligned. Lensing events are brief, lasting for weeks or days, as the two stars and Earth are all moving relative to each other. More than a thousand such events have been observed over the past ten years.
If the foreground lensing star has a planet, then that planet's own gravitational field can make a detectable contribution to the lensing effect. Since that requires a highly improbable alignment, a very large number of distant stars must be continuously monitored in order to detect planetary microlensing contributions at a reasonable rate. This method is most fruitful for planets between Earth and the center of the galaxy, as the galactic center provides a large number of background stars.
In 1991, astronomers Shude Mao and Bohdan Paczyński proposed using gravitational microlensing to look for binary companions to stars, and their proposal was refined by Andy Gould and Abraham Loeb in 1992 as a method to detect exoplanets. Successes with the method date back to 2002, when a group of Polish astronomers (Andrzej Udalski, Marcin Kubiak and Michał Szymański from Warsaw, and Bohdan Paczyński) during project OGLE (the Optical Gravitational Lensing Experiment) developed a workable technique. During one month, they found several possible planets, though limitations in the observations prevented clear confirmation. Since then, several confirmed extrasolar planets have been detected using microlensing. This was the first method capable of detecting planets of Earth-like mass around ordinary main-sequence stars.
Unlike most other methods, which have detection bias towards planets with small (or for resolved imaging, large) orbits, the microlensing method is most sensitive to detecting planets around 1-10 astronomical units away from Sun-like stars.
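This sensitivity window reflects the typical size of the Einstein ring: a planet near the ring radius perturbs the light curve most strongly. A hedged sketch (not from the source; the lens mass and distances are assumed typical values) of θ_E = sqrt(4GM/c^2 * (D_s - D_l)/(D_l * D_s)) and the corresponding physical radius at the lens:

import math

G = 6.674e-11      # m^3 kg^-1 s^-2
C = 2.998e8        # m/s
M_SUN = 1.989e30   # kg
KPC = 3.086e19     # m
AU = 1.496e11      # m

def einstein_radius(m_lens, d_lens, d_source):
    """Angular Einstein radius (radians) and its physical size at the lens (m)."""
    theta_e = math.sqrt(4 * G * m_lens / C ** 2 * (d_source - d_lens) / (d_lens * d_source))
    return theta_e, theta_e * d_lens

# Typical bulge microlensing event: a 0.3 solar-mass lens halfway to an 8 kpc source
theta_e, r_e = einstein_radius(0.3 * M_SUN, 4 * KPC, 8 * KPC)
print(theta_e * 2.06265e8)  # ~0.55 milliarcseconds
print(r_e / AU)             # ~2.2 AU: planets near this separation are easiest to find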
A notable disadvantage of the method is that the lensing cannot be repeated, because the chance alignment never occurs again. Also, the detected planets will tend to be several kiloparsecs away, so follow-up observations with other methods are usually impossible. In addition, the only physical characteristic that can be determined by microlensing is the mass of the planet, within loose constraints. Orbital properties also tend to be unclear, as the only orbital characteristic that can be directly determined is its current semi-major axis from the parent star, which can be misleading if the planet follows an eccentric orbit. When the planet is far away from its star, it spends only a tiny portion of its orbit in a state where it is detectable with this method, so the orbital period of the planet cannot be easily determined. It is also easier to detect planets around low-mass stars, as the gravitational microlensing effect increases with the planet-to-star mass ratio.
The main advantages of the gravitational microlensing method are that it can detect low-mass planets (in principle down to Mars mass with future space projects such as Roman Space Telescope); it can detect planets in wide orbits comparable to Saturn and Uranus, which have orbital periods too long for the radial velocity or transit methods; and it can detect planets around very distant stars. When enough background stars can be observed with enough accuracy, then the method should eventually reveal how common Earth-like planets are in the galaxy.
Observations are usually performed using networks of robotic telescopes. In addition to the European Research Council-funded OGLE, the Microlensing Observations in Astrophysics (MOA) group is working to perfect this approach.
The PLANET (Probing Lensing Anomalies NETwork)/RoboNet project is even more ambitious. It allows nearly continuous round-the-clock coverage by a world-spanning telescope network, providing the opportunity to pick up microlensing contributions from planets with masses as low as Earth's. This strategy was successful in detecting the first low-mass planet on a wide orbit, designated OGLE-2005-BLG-390Lb.
The NASA Roman Space Telescope scheduled for launch in 2027 includes a microlensing planet survey as one of its three core projects.
Direct imaging.
Planets are extremely faint light sources compared to stars, and what little light comes from them tends to be lost in the glare from their parent star. So in general, it is very difficult to detect and resolve them directly from their host star. Planets orbiting far enough from stars to be resolved reflect very little starlight, so planets are detected through their thermal emission instead. It is easier to obtain images when the planetary system is relatively near to the Sun, and when the planet is especially large (considerably larger than Jupiter), widely separated from its parent star, and hot so that it emits intense infrared radiation; images have then been made in the infrared, where the planet is brighter than it is at visible wavelengths. Coronagraphs are used to block light from the star, while leaving the planet visible. Direct imaging of an Earth-like exoplanet requires extreme optothermal stability. During the accretion phase of planetary formation, the star-planet contrast may be even better in H alpha than it is in infrared – an H alpha survey is currently underway.
Direct imaging can give only loose constraints on the planet's mass, which is derived from the age of the star and the temperature of the planet. Mass estimates can vary considerably, as planets can form several million years after the star has formed. The cooler the planet is, the less massive it needs to be. In some cases it is possible to give reasonable constraints on the radius of a planet based on the planet's temperature, its apparent brightness, and its distance from Earth. Because the planet is spatially resolved, its spectrum does not have to be disentangled from that of the star, which eases determination of the planet's chemical composition.
Sometimes observations at multiple wavelengths are needed to rule out the planet being a brown dwarf. Direct imaging can be used to accurately measure the planet's orbit around the star. Unlike the majority of other methods, direct imaging works better with planets with face-on orbits rather than edge-on orbits, as a planet in a face-on orbit is observable during the entirety of the planet's orbit, while planets with edge-on orbits are most easily observable during their period of largest apparent separation from the parent star.
The planets detected through direct imaging currently fall into two categories. First, planets are found around stars more massive than the Sun which are young enough to have protoplanetary disks. The second category consists of possible sub-brown dwarfs found around very dim stars, or brown dwarfs which are at least 100 AU away from their parent stars.
Planetary-mass objects not gravitationally bound to a star are found through direct imaging as well.
Early discoveries.
In 2004, a group of astronomers used the European Southern Observatory's Very Large Telescope array in Chile to produce an image of 2M1207b, a companion to the brown dwarf 2M1207. In the following year, the planetary status of the companion was confirmed. The planet is estimated to be several times more massive than Jupiter, and to have an orbital radius greater than 40 AU.
On 6 November 2008, the discovery of an object first imaged in April 2008 at a separation of 330 AU from the star 1RXS J160929.1−210524 was published, having already been announced on 8 September 2008. But it was not until 2010 that it was confirmed to be a companion planet to the star and not just a chance alignment. It is not yet confirmed whether the mass of the companion is above or below the deuterium-burning limit.
The first multiplanet system, announced on 13 November 2008, was imaged in 2007, using telescopes at both the Keck Observatory and Gemini Observatory. Three planets were directly observed orbiting HR 8799, whose masses are approximately ten, ten, and seven times that of Jupiter. On the same day, 13 November 2008, it was announced that the Hubble Space Telescope directly observed an exoplanet orbiting Fomalhaut, with a mass no more than 3 MJ. Both systems are surrounded by disks not unlike the Kuiper belt.
On 21 November 2008, three days after the acceptance of a letter to the editor (published online on 11 December 2008), it was announced that analysis of images dating back to 2003 revealed a planet orbiting Beta Pictoris.
In 2012, it was announced that a "Super-Jupiter" planet with a mass about 12.8 MJ orbiting Kappa Andromedae was directly imaged using the Subaru Telescope in Hawaii. It orbits its parent star at a distance of about 55 AU, or nearly twice the distance of Neptune from the sun.
An additional system, GJ 758, was imaged in November 2009, by a team using the HiCIAO instrument of the Subaru Telescope, but it was a brown dwarf.
Other possible exoplanets to have been directly imaged include GQ Lupi b, AB Pictoris b, and SCR 1845 b. As of March 2006, none have been confirmed as planets; instead, they might themselves be small brown dwarfs.
Imaging instruments.
Several planet-imaging-capable instruments are installed on large ground-based telescopes, such as the Gemini Planet Imager, VLT-SPHERE, the Subaru Coronagraphic Extreme Adaptive Optics (SCExAO) instrument, and Palomar Project 1640. In space, there is currently no dedicated exoplanet imaging instrument. Although the JWST does have some exoplanet imaging capabilities, it has not specifically been designed and optimised for that purpose. The RST will be the first space observatory to include a dedicated exoplanet imaging instrument. This instrument is designed by JPL as a demonstrator for a future large space observatory that will have the imaging of Earth-like exoplanets as one of its primary science goals. Concepts such as LUVOIR and HabEx have been proposed in preparation for the 2020 Astronomy and Astrophysics Decadal Survey.
In 2010, a team from NASA's Jet Propulsion Laboratory demonstrated that a vortex coronagraph could enable small scopes to directly image planets. They did this by imaging the previously imaged HR 8799 planets, using just a 1.5 meter-wide portion of the Hale Telescope.
Another promising approach is nulling interferometry.
It has also been proposed that space-telescopes that focus light using zone plates instead of mirrors would provide higher-contrast imaging, and be cheaper to launch into space due to being able to fold up the lightweight foil zone plate. Another possibility would be to use a large occulter in space designed to block the light of nearby stars in order to observe their orbiting planets, such as the New Worlds Mission.
Data Reduction Techniques.
Post-processing of observational data to enhance the signal strength of off-axis bodies (i.e. exoplanets) can be accomplished in a variety of ways. All methods are based on the presence of diversity in the data between the central star and the exoplanet companions: this diversity can originate from differences in the spectrum, the angular position, the orbital motion, the polarisation, or the coherence of the light. The most popular technique is Angular Differential Imaging (ADI), where exposures are acquired at different parallactic angles and the sky is left to rotate around the observed central star. The exposures are averaged, each exposure is subtracted by the average, and the results are then (de-)rotated so that the faint planetary signal stacks in one place.
Spectral Differential Imaging (SDI) performs an analogous procedure, but for radial changes in brightness (as a function of wavelength) instead of angular changes.
Combinations of the two are possible (ASDI, SADI, or Combined Differential Imaging "CODI").
Polarimetry.
Light given off by a star is un-polarized, i.e. the direction of oscillation of the light wave is random. However, when the light is reflected off the atmosphere of a planet, the light waves interact with the molecules in the atmosphere and become polarized.
By analyzing the polarization in the combined light of the planet and star (about one part in a million), these measurements can in principle be made with very high sensitivity, as polarimetry is not limited by the stability of the Earth's atmosphere. Another main advantage is that polarimetry allows for determination of the composition of the planet's atmosphere. The main disadvantage is that it will not be able to detect planets without atmospheres. Larger planets and planets with higher albedo are easier to detect through polarimetry, as they reflect more light.
Astronomical devices used for polarimetry, called polarimeters, are capable of detecting polarized light and rejecting unpolarized beams. Groups such as ZIMPOL/CHEOPS and PlanetPol are currently using polarimeters to search for extrasolar planets. The first successful detection of an extrasolar planet using this method came in 2008, when HD 189733 b, a planet discovered three years earlier, was detected using polarimetry. However, no new planets have yet been discovered using this method.
Astrometry.
This method consists of precisely measuring a star's position in the sky, and observing how that position changes over time. Originally, this was done visually, with hand-written records. By the end of the 19th century, this method used photographic plates, greatly improving the accuracy of the measurements as well as creating a data archive. If a star has a planet, then the gravitational influence of the planet will cause the star itself to move in a tiny circular or elliptical orbit. Effectively, star and planet each orbit around their mutual centre of mass (barycenter), as explained by solutions to the two-body problem. Since the star is much more massive, its orbit will be much smaller. Frequently, the mutual centre of mass will lie within the radius of the larger body. Consequently, it is easier to find planets around low-mass stars, especially brown dwarfs.
Astrometry is the oldest search method for extrasolar planets, and was originally popular because of its success in characterizing astrometric binary star systems. It dates back at least to statements made by William Herschel in the late 18th century. He claimed that an "unseen companion" was affecting the position of the star he cataloged as "70 Ophiuchi". The first known formal astrometric calculation for an extrasolar planet was made by William Stephen Jacob in 1855 for this star. Similar calculations were repeated by others for another half-century until finally refuted in the early 20th century.
For two centuries, claims of the discovery of "unseen companions" in orbit around nearby star systems, all reportedly found using this method, circulated, culminating in the prominent 1996 announcement of multiple planets orbiting the nearby star Lalande 21185 by George Gatewood. None of these claims survived scrutiny by other astronomers, and the technique fell into disrepute. Unfortunately, changes in stellar position are so small, and atmospheric and systematic distortions so large, that even the best ground-based telescopes cannot produce precise enough measurements. All claims made before 1996 using this method of a "planetary companion" with less than 0.1 solar mass are likely spurious. In 2002, the Hubble Space Telescope did succeed in using astrometry to characterize a previously discovered planet around the star Gliese 876.
The space-based observatory "Gaia", launched in 2013, is expected to find thousands of planets via astrometry, but prior to the launch of "Gaia", no planet detected by astrometry had been confirmed. SIM PlanetQuest was a US project (cancelled in 2010) that would have had similar exoplanet finding capabilities to Gaia.
One potential advantage of the astrometric method is that it is most sensitive to planets with large orbits. This makes it complementary to other methods that are most sensitive to planets with small orbits. However, very long observation times will be required — years, and possibly decades, as planets far enough from their star to allow detection via astrometry also take a long time to complete an orbit. Planets orbiting around one of the stars in binary systems are more easily detectable, as they cause perturbations in the orbits of stars themselves. However, with this method, follow-up observations are needed to determine which star the planet orbits around.
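A minimal sketch (not from the source; the example systems are assumptions) of the astrometric signature α ≈ (m_p/M_star)(a/d), which, with a in AU and d in parsecs, gives the angular wobble in arcseconds and shows why large orbits and nearby stars are favoured:

M_SUN = 1.989e30     # kg
M_JUP = 1.898e27     # kg
M_EARTH = 5.972e24   # kg

def astrometric_signature_arcsec(m_planet, m_star, a_au, d_pc):
    """Angular semi-amplitude of the stellar wobble, in arcseconds."""
    return (m_planet / m_star) * a_au / d_pc

print(astrometric_signature_arcsec(M_JUP, M_SUN, 5.2, 10) * 1e3)    # ~0.5 mas: Jupiter analogue at 10 pc
print(astrometric_signature_arcsec(M_EARTH, M_SUN, 1.0, 10) * 1e6)  # ~0.3 microarcsec: Earth analogue at 10 pc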
In 2009, the discovery of VB 10b by astrometry was announced. This planetary object, orbiting the low mass red dwarf star VB 10, was reported to have a mass seven times that of Jupiter. If confirmed, this would be the first exoplanet discovered by astrometry, of the many that have been claimed through the years. However recent radial velocity independent studies rule out the existence of the claimed planet.
In 2010, six binary stars were astrometrically measured. One of the star systems, called HD 176051, was found with "high confidence" to have a planet.
In 2018, a study comparing observations from the "Gaia" spacecraft to Hipparcos data for the Beta Pictoris system was able to measure the mass of Beta Pictoris b, constraining it to Jupiter masses. This is in good agreement with previous mass estimations of roughly 13 Jupiter masses.
In 2019, data from the Gaia spacecraft and its predecessor Hipparcos was complemented with HARPS data enabling a better description of ε Indi Ab as the second-closest Jupiter-like exoplanet with a mass of 3 Jupiters on a slightly eccentric orbit with an orbital period of 45 years.
As of 2022, especially thanks to Gaia, the combination of radial velocity and astrometry has been used to detect and characterize numerous Jovian planets, including the nearest Jupiter analogues ε Eridani b and ε Indi Ab. In addition, radio astrometry using the VLBA has been used to discover planets in orbit around TVLM 513-46546 and EQ Pegasi A.
X-ray eclipse.
In September 2020, the detection of a candidate planet orbiting the high-mass X-ray binary M51-ULS-1 in the Whirlpool Galaxy was announced. The planet was detected by eclipses of the X-ray source, which consists of a stellar remnant (either a neutron star or a black hole) and a massive star, likely a B-type supergiant. This is the only method capable of detecting a planet in another galaxy.
Disc kinematics.
Planets can be detected by the gaps they produce in protoplanetary discs, such as the one in orbit around the young variable star HD 97048.
Other possible methods.
Flare and variability echo detection.
Non-periodic variability events, such as flares, can produce extremely faint echoes in the light curve if they reflect off an exoplanet or other scattering medium in the star system. More recently, motivated by advances in instrumentation and signal processing technologies, echoes from exoplanets are predicted to be recoverable from high-cadence photometric and spectroscopic measurements of active star systems, such as M dwarfs. These echoes are theoretically observable in all orbital inclinations.
Transit imaging.
An optical/infrared interferometer array (e.g., the 16-element array of the Big Fringe Telescope) does not collect as much light as a single telescope of equivalent size, but has the resolution of a single telescope the size of the array. For bright stars, this resolving power could be used to image a star's surface during a transit event and observe the shadow of the planet transiting. This could provide a direct measurement of the planet's angular radius and, via parallax, its actual radius. This is more accurate than radius estimates based on transit photometry, which depend on stellar radius estimates that in turn depend on models of star characteristics. Imaging also provides a more accurate determination of the inclination than photometry does.
Magnetospheric (auroral) radio emissions.
Auroral radio emissions from exoplanet magnetospheres could be detected with radio telescopes. The emission may be caused by the exoplanet's magnetic field interacting with a stellar wind, adjacent plasma sources (such as Jupiter's volcanic moon Io travelling through its magnetosphere) or the interaction of the magnetic field with the interstellar medium. Although several discoveries have been claimed, thus far, none have been verified. The most sensitive searches for direct radio emissions from exoplanet magnetic fields, or from exoplanet magnetic fields interacting with those from their host stars, have been conducted with the Arecibo radio telescope.
In addition to allowing for a study of exoplanet magnetic fields, radio emissions may be used to measure the interior rotation rate of an exoplanet.
Optical interferometry.
In March 2019, ESO astronomers, employing the GRAVITY instrument on their Very Large Telescope Interferometer (VLTI), announced the first direct detection of an exoplanet, HR 8799 e, using optical interferometry.
Modified interferometry.
By looking at the wiggles of an interferogram using a Fourier-Transform-Spectrometer, enhanced sensitivity could be obtained in order to detect faint signals from Earth-like planets.
Detection of dust trapping around Lagrangian points.
Identification of dust clumps along a protoplanetary disk demonstrates trace accumulation around Lagrangian points. From the detection of this dust, it can be inferred that a planet exists that has created those accumulations.
Detection of extrasolar asteroids and debris disks.
Circumstellar disks.
Disks of space dust (debris disks) surround many stars. The dust can be detected because it absorbs ordinary starlight and re-emits it as infrared radiation. Even if the dust particles have a total mass well less than that of Earth, they can still have a large enough total surface area that they outshine their parent star in infrared wavelengths.
The Hubble Space Telescope is capable of observing dust disks with its NICMOS (Near Infrared Camera and Multi-Object Spectrometer) instrument. Even better images have now been taken by its sister telescope, the Spitzer Space Telescope, and by the European Space Agency's Herschel Space Observatory, which can see far deeper into infrared wavelengths than the Hubble can. Dust disks have now been found around more than 15% of nearby sunlike stars.
The dust is thought to be generated by collisions among comets and asteroids. Radiation pressure from the star will push the dust particles away into interstellar space over a relatively short timescale. Therefore, the detection of dust indicates continual replenishment by new collisions, and provides strong indirect evidence of the presence of small bodies like comets and asteroids that orbit the parent star. For example, the dust disk around the star Tau Ceti indicates that that star has a population of objects analogous to our own Solar System's Kuiper Belt, but at least ten times thicker.
More speculatively, features in dust disks sometimes suggest the presence of full-sized planets. Some disks have a central cavity, meaning that they are really ring-shaped. The central cavity may be caused by a planet "clearing out" the dust inside its orbit. Other disks contain clumps that may be caused by the gravitational influence of a planet. Both these kinds of features are present in the dust disk around Epsilon Eridani, hinting at the presence of a planet with an orbital radius of around 40 AU (in addition to the inner planet detected through the radial-velocity method). These kinds of planet-disk interactions can be modeled numerically using collisional grooming techniques.
Contamination of stellar atmospheres.
Spectral analysis of white dwarfs' atmospheres often finds contamination of heavier elements like magnesium and calcium. These elements cannot originate from the stars' cores, and it is probable that the contamination comes from asteroids that got too close (within the Roche limit) to these stars through gravitational interaction with larger planets and were torn apart by the stars' tidal forces. Up to 50% of young white dwarfs may be contaminated in this manner.
Additionally, the dust responsible for the atmospheric pollution may be detected by infrared radiation if it exists in sufficient quantity, similar to the detection of debris discs around main sequence stars. Data from the Spitzer Space Telescope suggests that 1-3% of white dwarfs possess detectable circumstellar dust.
In 2015, minor planets were discovered transiting the white dwarf WD 1145+017. This material orbits with a period of around 4.5 hours, and the shapes of the transit light curves suggest that the larger bodies are disintegrating, contributing to the contamination in the white dwarf's atmosphere.
Space telescopes.
Most confirmed extrasolar planets have been found using space-based telescopes (as of January 2015). Many of the detection methods can work more effectively with space-based telescopes that avoid atmospheric haze and turbulence. COROT (2007-2012) and Kepler were space missions dedicated to searching for extrasolar planets using transits. COROT discovered about 30 new exoplanets.
Kepler (2009-2013) and K2 (2013- ) have discovered over 2000 verified exoplanets. Hubble Space Telescope and MOST have also found or confirmed a few planets. The infrared Spitzer Space Telescope has been used to detect transits of extrasolar planets, as well as occultations of the planets by their host star and phase curves.
The "Gaia" mission, launched in December 2013, will use astrometry to determine the true masses of 1000 nearby exoplanets.
TESS (launched in 2018), CHEOPS (launched in 2019), and PLATO (planned for 2026) use or will use the transit method.
References.
<templatestyles src="Reflist/styles.css" />
External links.
https://iopscience.iop.org/article/10.1209/0295-5075/ad152d | [
{
"math_id": 0,
"text": "M_\\text{true} * {\\sin i} \\, "
}
]
| https://en.wikipedia.org/wiki?curid=7290120 |
72907 | Sundial | Device that tells the time of day by the apparent position of the Sun in the sky
A sundial is a horological device that tells the time of day (referred to as civil time in modern usage) when direct sunlight shines by the apparent position of the Sun in the sky. In the narrowest sense of the word, it consists of a flat plate (the "dial") and a gnomon, which casts a shadow onto the dial. As the Sun appears to move through the sky, the shadow aligns with different hour-lines, which are marked on the dial to indicate the time of day. The "style" is the time-telling edge of the gnomon, though a single point or "nodus" may be used. The gnomon casts a broad shadow; the shadow of the style shows the time. The gnomon may be a rod, wire, or elaborately decorated metal casting. The style must be parallel to the axis of the Earth's rotation for the sundial to be accurate throughout the year. The style's angle from horizontal is equal to the sundial's geographical latitude.
The term "sundial" can refer to any device that uses the Sun's altitude or azimuth (or both) to show the time. Sundials are valued as decorative objects, metaphors, and objects of intrigue and mathematical study.
The passing of time can be observed by placing a stick in the sand or a nail in a board and placing markers at the edge of a shadow or outlining a shadow at intervals. It is common for inexpensive, mass-produced decorative sundials to have incorrectly aligned gnomons, shadow lengths, and hour-lines, which cannot be adjusted to tell correct time.
<templatestyles src="Template:TOC limit/styles.css" />
Introduction.
There are several different types of sundials. Some sundials use a shadow or the edge of a shadow while others use a line or spot of light to indicate the time.
The shadow-casting object, known as a "gnomon", may be a long thin rod or other object with a sharp tip or a straight edge. Sundials employ many types of gnomon. The gnomon may be fixed or moved according to the season. It may be oriented vertically, horizontally, aligned with the Earth's axis, or oriented in an altogether different direction determined by mathematics.
Given that sundials use light to indicate time, a line of light may be formed by allowing the Sun's rays through a thin slit or focusing them through a cylindrical lens. A spot of light may be formed by allowing the Sun's rays to pass through a small hole, window, oculus, or by reflecting them from a small circular mirror. A spot of light can be as small as a pinhole in a solargraph or as large as the oculus in the Pantheon.
Sundials also may use many types of surfaces to receive the light or shadow. Planes are the most common surface, but partial spheres, cylinders, cones and other shapes have been used for greater accuracy or beauty.
Sundials differ in their portability and their need for orientation. The installation of many dials requires knowing the local latitude, the precise vertical direction (e.g., by a level or plumb-bob), and the direction to true north. Portable dials can be self-aligning: for example, a dial may combine two sub-dials that operate on different principles, such as a horizontal and an analemmatic dial, mounted together on one plate. In these designs, their times agree only when the plate is aligned properly.
Sundials may indicate the local solar time only. To obtain the national clock time, three corrections are required: the summer (daylight saving) time correction, the time-zone (longitude) correction, and the equation of time correction, each described in the sections below.
Apparent motion of the Sun.
The principles of sundials are understood most easily from the Sun's apparent motion. The Earth rotates on its axis, and revolves in an elliptical orbit around the Sun. An excellent approximation assumes that the Sun revolves around a stationary Earth on the celestial sphere, which rotates every 24 hours about its celestial axis. The celestial axis is the line connecting the celestial poles. Since the celestial axis is aligned with the axis about which the Earth rotates, the angle of the axis with the local horizontal is the local geographical latitude.
Unlike the fixed stars, the Sun changes its position on the celestial sphere, being (in the northern hemisphere) at a positive declination in spring and summer, and at a negative declination in autumn and winter, and having exactly zero declination (i.e., being on the celestial equator) at the equinoxes. The Sun's celestial longitude also varies, changing by one complete revolution per year. The path of the Sun on the celestial sphere is called the ecliptic. The ecliptic passes through the twelve constellations of the zodiac in the course of a year.
This model of the Sun's motion helps to understand sundials. If the shadow-casting gnomon is aligned with the celestial poles, its shadow will revolve at a constant rate, and this rotation will not change with the seasons. This is the most common design. In such cases, the same hour lines may be used throughout the year. The hour-lines will be spaced uniformly if the surface receiving the shadow is either perpendicular (as in the equatorial sundial) or circular about the gnomon (as in the armillary sphere).
In other cases, the hour-lines are not spaced evenly, even though the shadow rotates uniformly. If the gnomon is "not" aligned with the celestial poles, even its shadow will not rotate uniformly, and the hour lines must be corrected accordingly. The rays of light that graze the tip of a gnomon, or which pass through a small hole, or reflect from a small mirror, trace out a cone aligned with the celestial poles. The corresponding light-spot or shadow-tip, if it falls onto a flat surface, will trace out a conic section, such as a hyperbola, ellipse or (at the North or South Poles) a circle.
This conic section is the intersection of the cone of light rays with the flat surface. This cone and its conic section change with the seasons, as the Sun's declination changes; hence, sundials that follow the motion of such light-spots or shadow-tips often have different hour-lines for different times of the year. This is seen in shepherd's dials, sundial rings, and vertical gnomons such as obelisks. Alternatively, sundials may change the angle or position (or both) of the gnomon relative to the hour lines, as in the analemmatic dial or the Lambert dial.
History.
The earliest sundials known from the archaeological record are shadow clocks (1500 BC or BCE) from ancient Egyptian astronomy and Babylonian astronomy. Presumably, humans were telling time from shadow-lengths at an even earlier date, but this is hard to verify. In roughly 700 BC, the Old Testament describes a sundial—the "dial of Ahaz" mentioned in Isaiah 38:8 and 2 Kings 20:11. By 240 BC Eratosthenes had estimated the circumference of the world using an obelisk and a water well, and a few centuries later Ptolemy had charted the latitude of cities using the angle of the sun. The people of Kush created sundials through geometry. The Roman writer Vitruvius lists dials and shadow clocks known at that time in his "De architectura". The Tower of the Winds, constructed in Athens, included a sundial and a water clock for telling time. A canonical sundial is one that indicates the canonical hours of liturgical acts. Such sundials were used from the 7th to the 14th centuries by the members of religious communities. The Italian astronomer Giovanni Padovani published a treatise on the sundial in 1570, in which he included instructions for the manufacture and laying out of mural (vertical) and horizontal sundials. Giuseppe Biancani's "Constructio instrumenti ad horologia solaria" (c. 1620) discusses how to make a perfect sundial. They have been commonly used since the 16th century.
Functioning.
In general, sundials indicate the time by casting a shadow or throwing light onto a surface known as a "dial face" or "dial plate". Although usually a flat plane, the dial face may also be the inner or outer surface of a sphere, cylinder, cone, helix, and various other shapes.
The time is indicated where a shadow or light falls on the dial face, which is usually inscribed with hour lines. Although usually straight, these hour lines may also be curved, depending on the design of the sundial (see below). In some designs, it is possible to determine the date of the year, or it may be required to know the date to find the correct time. In such cases, there may be multiple sets of hour lines for different months, or there may be mechanisms for setting/calculating the month. In addition to the hour lines, the dial face may offer other data—such as the horizon, the equator and the tropics—which are referred to collectively as the dial furniture.
The entire object that casts a shadow or light onto the dial face is known as the sundial's "gnomon". However, it is usually only an edge of the gnomon (or another linear feature) that casts the shadow used to determine the time; this linear feature is known as the sundial's "style". The style is usually aligned parallel to the axis of the celestial sphere, and therefore is aligned with the local geographical meridian. In some sundial designs, only a point-like feature, such as the tip of the style, is used to determine the time and date; this point-like feature is known as the sundial's "nodus".
Some sundials use both a style and a nodus to determine the time and date.
The gnomon is usually fixed relative to the dial face, but not always; in some designs such as the analemmatic sundial, the style is moved according to the month. If the style is fixed, the line on the dial plate perpendicularly beneath the style is called the "substyle", meaning "below the style". The angle the style makes with the plane of the dial plate is called the substyle height, an unusual use of the word "height" to mean an "angle". On many wall dials, the substyle is not the same as the noon line (see below). The angle on the dial plate between the noon line and the substyle is called the "substyle distance", an unusual use of the word "distance" to mean an "angle".
By tradition, many sundials have a motto. The motto is usually in the form of an epigram: sometimes sombre reflections on the passing of time and the brevity of life, but equally often humorous witticisms of the dial maker. One such quip is, "I am a sundial, and I make a botch, Of what is done much better by a watch."
A dial is said to be "equiangular" if its hour-lines are straight and spaced equally. Most equiangular sundials have a fixed gnomon style aligned with the Earth's rotational axis, as well as a shadow-receiving surface that is symmetrical about that axis; examples include the equatorial dial, the equatorial bow, the armillary sphere, the cylindrical dial and the conical dial. However, other designs are equiangular, such as the Lambert dial, a version of the analemmatic sundial with a moveable style.
In the Southern Hemisphere.
A sundial at a particular latitude in one hemisphere must be reversed for use at the opposite latitude in the other hemisphere. A vertical direct south sundial in the Northern Hemisphere becomes a vertical direct north sundial in the Southern Hemisphere. To position a horizontal sundial correctly, one has to find true north or south. The same process can be used to do both. The gnomon, set to the correct latitude, has to point to the true south in the Southern Hemisphere as in the Northern Hemisphere it has to point to the true north. The hour numbers also run in opposite directions, so on a horizontal dial they run anticlockwise (US: counterclockwise) rather than clockwise.
Sundials which are designed to be used with their plates horizontal in one hemisphere can be used with their plates vertical at the complementary latitude in the other hemisphere. For example, the illustrated sundial in Perth, Australia, which is at latitude 32° South, would function properly if it were mounted on a south-facing vertical wall at latitude 58° (i.e. 90° − 32°) North, which is slightly further north than Perth, Scotland. The surface of the wall in Scotland would be parallel with the horizontal ground in Australia (ignoring the difference of longitude), so the sundial would work identically on both surfaces. Correspondingly, the hour marks, which run counterclockwise on a horizontal sundial in the southern hemisphere, also do so on a vertical sundial in the northern hemisphere. (See the first two illustrations at the top of this article.) On horizontal northern-hemisphere sundials, and on vertical southern-hemisphere ones, the hour marks run clockwise.
Adjustments to calculate clock time from a sundial reading.
The most common reason for a sundial to differ greatly from clock time is that the sundial has not been oriented correctly or its hour lines have not been drawn correctly. For example, most commercial sundials are designed as "horizontal sundials" as described above. To be accurate, such a sundial must have been designed for the local geographical latitude and its style must be parallel to the Earth's rotational axis; the style must be aligned with true north and its "height" (its angle with the horizontal) must equal the local latitude. To adjust the style height, the sundial can often be tilted slightly "up" or "down" while maintaining the style's north-south alignment.
Summer (daylight saving) time correction.
Some areas of the world practice daylight saving time, which changes the official time, usually by one hour. This shift must be added to the sundial's time to make it agree with the official time.
Time-zone (longitude) correction.
A standard time zone covers roughly 15° of longitude, so any point within that zone which is not on the reference longitude (generally a multiple of 15°) will experience a difference from standard time that is equal to 4 minutes of time per degree. For illustration, sunsets and sunrises are at a much later "official" time at the western edge of a time-zone, compared to sunrise and sunset times at the eastern edge. If a sundial is located at, say, a longitude 5° west of the reference longitude, then its time will read 20 minutes slow, since the Sun appears to revolve around the Earth at 15° per hour. This is a constant correction throughout the year. For equiangular dials such as equatorial, spherical or Lambert dials, this correction can be made by rotating the dial surface by an angle equaling the difference in longitude, without changing the gnomon position or orientation. However, this method does not work for other dials, such as a horizontal dial; the correction must be applied by the viewer.
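As an illustrative sketch of this longitude correction (the longitudes below are arbitrary example values, not from any particular dial), the four-minutes-per-degree rule can be evaluated directly:
<syntaxhighlight lang="python">
def longitude_correction_minutes(local_longitude_deg, zone_reference_longitude_deg):
    """Minutes to add to a sundial reading to obtain zone (clock) time.

    Longitudes are in degrees, positive east of Greenwich.  The Sun appears
    to move 15 degrees per hour, i.e. one degree every 4 minutes of time.
    """
    return 4.0 * (zone_reference_longitude_deg - local_longitude_deg)

# A dial 5 degrees west of its zone's reference meridian reads 20 minutes slow,
# so 20 minutes must be added to its reading:
print(longitude_correction_minutes(-5.0, 0.0))  # 20.0
</syntaxhighlight>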
However, for political and practical reasons, time-zone boundaries have been skewed. At their most extreme, time zones can cause official noon, including daylight saving time, to occur up to three hours early (in which case the Sun is actually on the meridian at an official clock time of 3 PM). This occurs in the far west of Alaska, China, and Spain. For more details and examples, see time zones.
Equation of time correction.
Although the Sun appears to rotate uniformly about the Earth, in reality this motion is not perfectly uniform. This is due to the eccentricity of the Earth's orbit (the fact that the Earth's orbit about the Sun is not perfectly circular, but slightly elliptical) and the tilt (obliquity) of the Earth's rotational axis relative to the plane of its orbit. Therefore, sundial time varies from standard clock time. On four days of the year, the correction is effectively zero. However, on others, it can be as much as a quarter-hour early or late. The amount of correction is described by the equation of time. This correction is the same worldwide: it does not depend on the local latitude or longitude of the observer's position. It does, however, change over long periods of time (centuries or more)
because of slow variations in the Earth's orbital and rotational motions. Therefore, tables and graphs of the equation of time that were made centuries ago are now significantly incorrect. The reading of an old sundial should be corrected by applying the present-day equation of time, not one from the period when the dial was made.
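For a rough numerical feel of this correction, the sketch below uses a common empirical approximation of the equation of time (the formula is a standard approximation, not taken from this article, and is only accurate to about a minute):
<syntaxhighlight lang="python">
import math

def equation_of_time_minutes(day_of_year):
    """Approximate equation of time (apparent minus mean solar time), in minutes.

    Common empirical approximation, good to roughly +/- 1 minute.
    """
    b = 2.0 * math.pi * (day_of_year - 81) / 365.0
    return 9.87 * math.sin(2.0 * b) - 7.53 * math.cos(b) - 1.5 * math.sin(b)

# In early November (around day 307) a sundial runs roughly a quarter-hour fast:
print(round(equation_of_time_minutes(307), 1))  # about 16 minutes
</syntaxhighlight>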
In some sundials, the equation of time correction is provided as an informational plaque affixed to the sundial, for the observer to calculate. In more sophisticated sundials the equation can be incorporated automatically. For example, some equatorial bow sundials are supplied with a small wheel that sets the time of year; this wheel in turn rotates the equatorial bow, offsetting its time measurement. In other cases, the hour lines may be curved, or the equatorial bow may be shaped like a vase, which exploits the changing altitude of the sun over the year to effect the proper offset in time.
A "heliochronometer" is a precision sundial first devised in about 1763 by Philipp Hahn and improved by Abbé Guyoux in about 1827.
It corrects apparent solar time to mean solar time or another standard time. Heliochronometers usually indicate the minutes to within 1 minute of Universal Time.
The Sunquest sundial, designed by Richard L. Schmoyer in the 1950s, uses an analemmic-inspired gnomon to cast a shaft of light onto an equatorial time-scale crescent. Sunquest is adjustable for latitude and longitude, automatically correcting for the equation of time, rendering it "as accurate as most pocket watches".
Similarly, in place of the shadow of a gnomon the sundial at Miguel Hernández University uses the solar projection of a graph of the equation of time intersecting a time scale to display clock time directly.
An analemma may be added to many types of sundials to correct apparent solar time to mean solar time or another standard time. These usually have hour lines shaped like "figure eights" (analemmas) according to the equation of time. This compensates for the slight eccentricity in the Earth's orbit and the tilt of the Earth's axis that causes up to a 15 minute variation from mean solar time. This is a type of dial furniture seen on more complicated horizontal and vertical dials.
Prior to the invention of accurate clocks, in the mid 17th century, sundials were the only timepieces in common use, and were considered to tell the "right" time. The equation of time was not used. After the invention of good clocks, sundials were still considered to be correct, and clocks usually incorrect. The equation of time was used in the opposite direction from today, to apply a correction to the time shown by a clock to make it agree with sundial time. Some elaborate "equation clocks", such as one made by Joseph Williamson in 1720, incorporated mechanisms to do this correction automatically. (Williamson's clock may have been the first-ever device to use a differential gear.) Only after about 1800 was uncorrected clock time considered to be "right", and sundial time usually "wrong", so the equation of time became used as it is today.
With fixed axial gnomon.
The most commonly observed sundials are those in which the shadow-casting style is fixed in position and aligned with the Earth's rotational axis, being oriented with true north and south, and making an angle with the horizontal equal to the geographical latitude. This axis is aligned with the celestial poles, which are closely, but not perfectly, aligned with the pole star Polaris. For illustration, the celestial axis points vertically at the true North Pole, whereas it points horizontally on the equator. The world's largest axial gnomon sundial is the mast of the Sundial Bridge at Turtle Bay in Redding, California. A former holder of that record, the gnomon at Jaipur, is raised 26°55′ above horizontal, reflecting the local latitude.
On any given day, the Sun appears to rotate uniformly about this axis, at about 15° per hour, making a full circuit (360°) in 24 hours. A linear gnomon aligned with this axis will cast a sheet of shadow (a half-plane) that, falling opposite to the Sun, likewise rotates about the celestial axis at 15° per hour. The shadow is seen by falling on a receiving surface that is usually flat, but which may be spherical, cylindrical, conical or of other shapes. If the shadow falls on a surface that is symmetrical about the celestial axis (as in an armillary sphere, or an equatorial dial), the surface-shadow likewise moves uniformly; the hour-lines on the sundial are equally spaced. However, if the receiving surface is not symmetrical (as in most horizontal sundials), the surface shadow generally moves non-uniformly and the hour-lines are not equally spaced; one exception is the Lambert dial described below.
Some types of sundials are designed with a fixed gnomon that is not aligned with the celestial poles like a vertical obelisk. Such sundials are covered below under the section, "Nodus-based sundials".
Empirical hour-line marking.
The formulas shown in the paragraphs below allow the positions of the hour-lines to be calculated for various types of sundial. In some cases, the calculations are simple; in others they are extremely complicated. There is an alternative, simple method of finding the positions of the hour-lines which can be used for many types of sundial, and saves a lot of work in cases where the calculations are complex. This is an empirical procedure in which the position of the shadow of the gnomon of a real sundial is marked at hourly intervals. The equation of time must be taken into account to ensure that the positions of the hour-lines are independent of the time of year when they are marked. An easy way to do this is to set a clock or watch so it shows "sundial time"
which is standard time, plus the equation of time on the day in question.
The hour-lines on the sundial are marked to show the positions of the shadow of the style when this clock shows whole numbers of hours, and are labelled with these numbers of hours. For example, when the clock reads 5:00, the shadow of the style is marked, and labelled "5" (or "V" in Roman numerals). If the hour-lines are not all marked in a single day, the clock must be adjusted every day or two to take account of the variation of the equation of time.
Equatorial sundials.
The distinguishing characteristic of the "equatorial dial" (also called the "equinoctial dial") is the planar surface that receives the shadow, which is exactly perpendicular to the gnomon's style. This plane is called equatorial, because it is parallel to the equator of the Earth and of the celestial sphere. If the gnomon is fixed and aligned with the Earth's rotational axis, the sun's apparent rotation about the Earth casts a uniformly rotating sheet of shadow from the gnomon; this produces a uniformly rotating line of shadow on the equatorial plane. Since the Earth rotates 360° in 24 hours, the hour-lines on an equatorial dial are all spaced 15° apart (360/24).
formula_0
The uniformity of their spacing makes this type of sundial easy to construct. If the dial plate material is opaque, both sides of the equatorial dial must be marked, since the shadow will be cast from below in winter and from above in summer. With translucent dial plates (e.g. glass) the hour angles need only be marked on the sun-facing side, although the hour numberings (if used) need to be made on both sides of the dial, owing to the differing hour schema on the sun-facing and sun-backing sides.
Another major advantage of this dial is that equation of time (EoT) and daylight saving time (DST) corrections can be made by simply rotating the dial plate by the appropriate angle each day. This is because the hour angles are equally spaced around the dial. For this reason, an equatorial dial is often a useful choice when the dial is for public display and it is desirable to have it show the true local time to reasonable accuracy. The EoT correction is made via the relation
formula_1
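The two relations above (formula_0 and formula_1) can be combined in a short sketch; the EoT and DST values used below are arbitrary examples:
<syntaxhighlight lang="python">
def equatorial_hour_angle_deg(t_hours):
    """Angle of the hour-line t hours after local noon on an equatorial dial (formula_0)."""
    return 15.0 * t_hours

def plate_rotation_deg(eot_minutes, dst_shift_hours):
    """Rotation of the dial plate that absorbs the EoT and DST corrections (formula_1).

    Four minutes of time correspond to one degree of rotation.
    """
    return (eot_minutes + 60.0 * dst_shift_hours) / 4.0

print(equatorial_hour_angle_deg(3))    # 45.0 degrees for the 3 o'clock line
print(plate_rotation_deg(16.0, 1.0))   # 19.0 degrees when EoT = 16 min under DST
</syntaxhighlight>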
Near the equinoxes in spring and autumn, the sun moves on a circle that is nearly the same as the equatorial plane; hence, no clear shadow is produced on the equatorial dial at those times of year, a drawback of the design.
A "nodus" is sometimes added to equatorial sundials, which allows the sundial to tell the time of year. On any given day, the shadow of the nodus moves on a circle on the equatorial plane, and the radius of the circle measures the declination of the sun. The ends of the gnomon bar may be used as the nodus, or some feature along its length. An ancient variant of the equatorial sundial has only a nodus (no style) and the concentric circular hour-lines are arranged to resemble a spider-web.
Horizontal sundials.
In the "horizontal sundial" (also called a "garden sundial"), the plane that receives the shadow is aligned horizontally, rather than being perpendicular to the style as in the equatorial dial. Hence, the line of shadow does not rotate uniformly on the dial face; rather, the hour lines are spaced according to the rule.
formula_2
Or in other terms:
formula_3
where L is the sundial's geographical latitude (and the angle the gnomon makes with the dial plate), formula_4 is the angle between a given hour-line and the noon hour-line (which always points towards true north) on the plane, and t is the number of hours before or after noon. For example, the angle formula_4 of the 3 PM hour-line would equal the arctangent of sin L, since tan 45° = 1. When formula_5 (at the North Pole), the horizontal sundial becomes an equatorial sundial; the style points straight up (vertically), and the horizontal plane is aligned with the equatorial plane; the hour-line formula becomes formula_6 as for an equatorial dial. A horizontal sundial at the Earth's equator, where formula_7, would require a (raised) horizontal style and would be an example of a polar sundial (see below).
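A minimal numerical sketch of this rule (the latitude of 52° is an arbitrary example):
<syntaxhighlight lang="python">
import math

def horizontal_hour_line_deg(latitude_deg, t_hours):
    """Angle between the noon line and the hour-line t hours from noon on a
    horizontal dial: tan(H) = sin(L) * tan(15 deg * t)."""
    L = math.radians(latitude_deg)
    return math.degrees(math.atan(math.sin(L) * math.tan(math.radians(15.0 * t_hours))))

# At latitude 52 N the 3 o'clock line (t = 3, so tan 45 deg = 1) lies at
# arctan(sin 52 deg), i.e. about 38.2 degrees from the noon line:
print(round(horizontal_hour_line_deg(52.0, 3), 1))
</syntaxhighlight>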
The chief advantages of the horizontal sundial are that it is easy to read, and the sunlight lights the face throughout the year. All the hour-lines intersect at the point where the gnomon's style crosses the horizontal plane. Since the style is aligned with the Earth's rotational axis, the style points true north and its angle with the horizontal equals the sundial's geographical latitude L . A sundial designed for one latitude can be adjusted for use at another latitude by tilting its base upwards or downwards by an angle equal to the difference in latitude. For example, a sundial designed for a latitude of 40° can be used at a latitude of 45°, if the sundial plane is tilted upwards by 5°, thus aligning the style with the Earth's rotational axis.
Many ornamental sundials are designed to be used at 45 degrees north. Some mass-produced garden sundials fail to correctly calculate the "hourlines" and so can never be corrected. A local standard time zone is nominally 15 degrees wide, but may be modified to follow geographic or political boundaries. A sundial can be rotated around its style (which must remain pointed at the celestial pole) to adjust to the local time zone. In most cases, a rotation in the range of 7.5° east to 23° west suffices. This will introduce error in sundials that do not have equal hour angles. To correct for daylight saving time, a face needs two sets of numerals or a correction table. An informal standard is to have numerals in hot colors for summer, and in cool colors for winter. Since the hour angles are not evenly spaced, the equation of time corrections cannot be made via rotating the dial plate about the gnomon axis. These types of dials usually have an equation of time correction tabulation engraved on their pedestals or close by. Horizontal dials are commonly seen in gardens, churchyards and in public areas.
Vertical sundials.
In the common "vertical dial", the shadow-receiving plane is aligned vertically; as usual, the gnomon's style is aligned with the Earth's axis of rotation. As in the horizontal dial, the line of shadow does not move uniformly on the face; the sundial is not "equiangular". If the face of the vertical dial points directly south, the angle of the hour-lines is instead described by the formula
formula_8
where L is the sundial's geographical latitude, formula_9 is the angle between a given hour-line and the noon hour-line (which always points due north) on the plane, and t is the number of hours before or after noon. For example, the angle formula_9 of the 3 P.M. hour-line would equal the arctangent of cos L, since tan 45° = 1. The shadow moves "counter-clockwise" on a south-facing vertical dial, whereas it runs clockwise on horizontal and equatorial north-facing dials.
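The corresponding sketch for a direct south-facing vertical dial (again with an arbitrary example latitude of 52°):
<syntaxhighlight lang="python">
import math

def vertical_south_hour_line_deg(latitude_deg, t_hours):
    """Hour-line angle on a direct south-facing vertical dial:
    tan(H) = cos(L) * tan(15 deg * t)."""
    L = math.radians(latitude_deg)
    return math.degrees(math.atan(math.cos(L) * math.tan(math.radians(15.0 * t_hours))))

# At latitude 52 N the 3 o'clock line lies at arctan(cos 52 deg), about 31.6 degrees:
print(round(vertical_south_hour_line_deg(52.0, 3), 1))
</syntaxhighlight>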
Dials with faces perpendicular to the ground and which face directly south, north, east, or west are called "vertical direct dials". It is widely believed, and stated in respectable publications, that a vertical dial cannot receive more than twelve hours of sunlight a day, no matter how many hours of daylight there are. However, there is an exception. Vertical sundials in the tropics which face the nearer pole (e.g. north facing in the zone between the Equator and the Tropic of Cancer) can actually receive sunlight for more than 12 hours from sunrise to sunset for a short period around the time of the summer solstice. For example, at latitude 20° North, on June 21, the sun shines on a north-facing vertical wall for 13 hours, 21 minutes. Vertical sundials which do "not" face directly south (in the northern hemisphere) may receive significantly less than twelve hours of sunlight per day, depending on the direction they do face, and on the time of year. For example, a vertical dial that faces due East can tell time only in the morning hours; in the afternoon, the sun does not shine on its face. Vertical dials that face due East or West are "polar dials", which will be described below. Vertical dials that face north are uncommon, because they tell time only during the spring and summer, and do not show the midday hours except in tropical latitudes (and even there, only around midsummer). For non-direct vertical dials – those that face in non-cardinal directions – the mathematics of arranging the style and the hour-lines becomes more complicated; it may be easier to mark the hour lines by observation, but the placement of the style, at least, must be calculated first; such dials are said to be "declining dials".
Vertical dials are commonly mounted on the walls of buildings, such as town-halls, cupolas and church-towers, where they are easy to see from far away. In some cases, vertical dials are placed on all four sides of a rectangular tower, providing the time throughout the day. The face may be painted on the wall, or displayed in inlaid stone; the gnomon is often a single metal bar, or a tripod of metal bars for rigidity. If the wall of the building faces "toward" the south, but does not face due south, the gnomon will not lie along the noon line, and the hour lines must be corrected. Since the gnomon's style must be parallel to the Earth's axis, it always "points" true north and its angle with the horizontal will equal the sundial's geographical latitude; on a direct south dial, its angle with the vertical face of the dial will equal the colatitude, or 90° minus the latitude.
Polar dials.
In "polar dials", the shadow-receiving plane is aligned "parallel" to the gnomon-style.
Thus, the shadow slides sideways over the surface, moving perpendicularly to itself as the Sun rotates about the style. As with the gnomon, the hour-lines are all aligned with the Earth's rotational axis. When the Sun's rays are nearly parallel to the plane, the shadow moves very quickly and the hour lines are spaced far apart. The direct East- and West-facing dials are examples of a polar dial. However, the face of a polar dial need not be vertical; it need only be parallel to the gnomon. Thus, a plane inclined at the angle of latitude (relative to horizontal) under the similarly inclined gnomon will be a polar dial. The perpendicular spacing X of the hour-lines in the plane is described by the formula
formula_10
where H is the height of the style above the plane, and t is the time (in hours) before or after the center-time for the polar dial. The center time is the time when the style's shadow falls directly down on the plane; for an East-facing dial, the center time will be 6 A.M., for a West-facing dial, this will be 6 P.M., and for the inclined dial described above, it will be noon. When t approaches ±6 hours away from the center time, the spacing X diverges to +∞; this occurs when the Sun's rays become parallel to the plane.
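An illustrative evaluation of this spacing rule (the style height of 10 cm is an arbitrary example):
<syntaxhighlight lang="python">
import math

def polar_dial_spacing(style_height, t_hours):
    """Perpendicular distance X of an hour-line from the centre line of a polar
    dial: X = H * tan(15 deg * t); it diverges as t approaches +/- 6 hours."""
    return style_height * math.tan(math.radians(15.0 * t_hours))

# With the style 10 cm above the plate, the lines 1, 2 and 3 hours from the
# centre time lie about 2.7 cm, 5.8 cm and 10.0 cm from the centre line:
for t in (1, 2, 3):
    print(t, round(polar_dial_spacing(10.0, t), 1))
</syntaxhighlight>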
Vertical declining dials.
A "declining dial" is any non-horizontal, planar dial that does not face in a cardinal direction, such as (true) north, south, east or west. As usual, the gnomon's style is aligned with the Earth's rotational axis, but the hour-lines are not symmetrical about the noon hour-line. For a vertical dial, the angle formula_11 between the noon hour-line and another hour-line is given by the formula below. Note that formula_11 is defined positive in the clockwise sense w.r.t. the upper vertical hour angle; and that its conversion to the equivalent solar hour requires careful consideration of which quadrant of the sundial that it belongs in.
formula_12
where formula_13 is the sundial's geographical latitude; t is the time before or after noon; formula_14 is the angle of declination from true south, defined as positive when east of south; and formula_15 is a switch integer for the dial orientation. A partly south-facing dial has an formula_15 value of +1; those partly north-facing, a value of −1. When such a dial faces south (formula_16), this formula reduces to the formula given above for vertical south-facing dials, i.e.
formula_17
When a sundial is not aligned with a cardinal direction, the substyle of its gnomon is not aligned with the noon hour-line. The angle formula_18 between the substyle and the noon hour-line is given by the formula
formula_19
If a vertical sundial faces true south or north (formula_20 or formula_21 respectively), the angle formula_22 and the substyle is aligned with the noon hour-line.
The height of the gnomon, that is the angle the style makes to the plate, formula_23 is given by :
formula_24
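A sketch of the hour-angle relation above (formula_12), using atan2 so that each hour lands in the correct quadrant as the text requires; the latitude and wall declination below are arbitrary example values, and the sign conventions follow those stated above:
<syntaxhighlight lang="python">
import math

def vertical_declining_hour_angle_deg(latitude_deg, declination_deg, t_hours, s_o=1):
    """Hour angle on a vertical declining dial (formula_12):
    tan(H) = cos(L) / (cos(D) * cot(15 deg * t) - s_o * sin(L) * sin(D)),
    with s_o = +1 for a partly south-facing dial and -1 for a partly
    north-facing one.  Multiplying through by sin(15 deg * t) lets atan2
    place the angle in the correct quadrant."""
    L = math.radians(latitude_deg)
    D = math.radians(declination_deg)
    ha = math.radians(15.0 * t_hours)
    numerator = math.cos(L) * math.sin(ha)
    denominator = math.cos(D) * math.cos(ha) - s_o * math.sin(L) * math.sin(D) * math.sin(ha)
    return math.degrees(math.atan2(numerator, denominator))

# Example values: latitude 52 N, wall declining 20 degrees east of south.
for t in (-3, 0, 3):
    print(t, round(vertical_declining_hour_angle_deg(52.0, 20.0, t), 1))
</syntaxhighlight>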
Reclining dials.
The sundials described above have gnomons that are aligned with the Earth's rotational axis and cast their shadow onto a plane. If the plane is neither vertical nor horizontal nor equatorial, the sundial is said to be "reclining" or "inclining". Such a sundial might be located on a south-facing roof, for example. The hour-lines for such a sundial can be calculated by slightly correcting the horizontal formula above
formula_25
where formula_26 is the desired angle of reclining relative to the local vertical, L is the sundial's geographical latitude, formula_27 is the angle between a given hour-line and the noon hour-line (which always points due north) on the plane, and t is the number of hours before or after noon. For example, the angle formula_27 of the 3pm hour-line would equal the arctangent of the cosine of the sum of the latitude and the reclining angle, since tan 45° = 1. When the reclining angle is zero (in other words, a south-facing vertical dial), we obtain the vertical dial formula above.
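A minimal sketch of this corrected rule, assuming the reclination is measured from the local vertical as described above, so that a reclination of zero recovers the vertical dial; the latitude and reclination values are arbitrary examples:
<syntaxhighlight lang="python">
import math

def reclining_hour_line_deg(latitude_deg, reclination_deg, t_hours):
    """Hour-line angle for a south-facing dial reclined by R degrees from the
    vertical: tan(theta) = cos(L + R) * tan(15 deg * t).  R = 0 reproduces the
    vertical-dial rule above."""
    LR = math.radians(latitude_deg + reclination_deg)
    return math.degrees(math.atan(math.cos(LR) * math.tan(math.radians(15.0 * t_hours))))

# Latitude 52 N, dial reclined 15 degrees from the vertical:
print(round(reclining_hour_line_deg(52.0, 15.0, 3), 1))  # about 21.3 degrees
</syntaxhighlight>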
Some authors use a more specific nomenclature to describe the orientation of the shadow-receiving plane. If the plane's face points downwards towards the ground, it is said to be "proclining" or "inclining", whereas a dial is said to be "reclining" when the dial face is pointing away from the ground. Many authors also often refer to reclined, proclined and inclined sundials in general as inclined sundials. It is also common in the latter case to measure the angle of inclination relative to the horizontal plane on the sun side of the dial.
In such texts, since formula_28 the hour angle formula will often be seen written as :
formula_29
The angle between the gnomon style and the dial plate, B, in this type of sundial is :
formula_30
or :
formula_31
Declining-reclining dials/ Declining-inclining dials.
Some sundials both decline and recline, in that their shadow-receiving plane is not oriented with a cardinal direction (such as true north or true south) and is neither horizontal nor vertical nor equatorial. For example, such a sundial might be found on a roof that was not oriented in a cardinal direction.
The formulae describing the spacing of the hour-lines on such dials are rather more complicated than those for simpler dials.
There are various solution approaches, including some using the methods of rotation matrices, and some making a 3D model of the reclined-declined plane and its vertical declined counterpart plane, extracting the geometrical relationships between the hour angle components on both these planes and then reducing the trigonometric algebra.
One system of formulas for reclining-declining sundials, as stated by Fennewick, is the following:
The angle formula_32 between the noon hour-line and another hour-line is given by the formula below. Note that formula_32 advances counterclockwise with respect to the zero hour angle for those dials that are partly south-facing and clockwise for those that are north-facing.
formula_33
within the parameter ranges : formula_34 and formula_35
Or, if preferring to use inclination angle, formula_36 rather than the reclination, formula_37 where formula_38 :
formula_39
within the parameter ranges : formula_40 and formula_41
Here formula_13 is the sundial's geographical latitude; formula_15 is the orientation switch integer; t is the time in hours before or after noon; and formula_26 and formula_14 are the angles of reclination and declination, respectively.
Note that formula_26 is measured with reference to the vertical. It is positive when the dial leans back towards the horizon behind the dial and negative when the dial leans forward to the horizon on the Sun's side. Declination angle formula_14 is defined as positive when moving east of true south.
Dials facing fully or partly south have formula_42 while those partly or fully north-facing have an formula_43
Since the above expression gives the hour angle as an arctangent function, due consideration must be given to which quadrant of the sundial each hour belongs to before assigning the correct hour angle.
Unlike the simpler vertical declining sundial, this type of dial does not always show hour angles on its sunside face for all declinations between east and west. When a northern hemisphere partly south-facing dial reclines back (i.e. away from the Sun) from the vertical, the gnomon will become co-planar with the dial plate at declinations less than due east or due west. Likewise for southern hemisphere dials that are partly north-facing.
Were these dials reclining forward, the range of declination would actually exceed due east and due west.
In a similar way, northern hemisphere dials that are partly north-facing and southern hemisphere dials that are south-facing, and which lean forward toward their upward pointing gnomons, will have a similar restriction on the range of declination that is possible for a given reclination value.
The critical declination formula_44 is a geometrical constraint which depends on the value of both the dial's reclination and its latitude :
formula_45
As with the vertical declined dial, the gnomon's substyle is not aligned with the noon hour-line. The general formula for the angle formula_46 between the substyle and the noon-line is given by :
formula_47
The angle formula_23 between the style and the plate is given by :
formula_48
Note that for formula_49 i.e. when the gnomon is coplanar with the dial plate, we have :
formula_50
i.e. when formula_51 the critical declination value.
Empirical method.
Because of the complexity of the above calculations, using them for the practical purpose of designing a dial of this type is difficult and prone to error. It has been suggested that it is better to locate the hour lines empirically, marking the positions of the shadow of a style on a real sundial at hourly intervals as shown by a clock and adding/deducting that day's equation of time adjustment. See Empirical hour-line marking, above.
Spherical sundials.
The surface receiving the shadow need not be a plane, but can have any shape, provided that the sundial maker is willing to mark the hour-lines. If the style is aligned with the Earth's rotational axis, a spherical shape is convenient since the hour-lines are equally spaced, as they are on the equatorial dial shown here; the sundial is "equiangular". This is the principle behind the armillary sphere and the equatorial bow sundial. However, some equiangular sundials – such as the Lambert dial described below – are based on other principles.
In the "equatorial bow sundial", the gnomon is a bar, slot or stretched wire parallel to the celestial axis. The face is a semicircle, corresponding to the equator of the sphere, with markings on the inner surface. This pattern, built a couple of meters wide out of temperature-invariant steel invar, was used to keep the trains running on time in France before World War I.
Among the most precise sundials ever made are two equatorial bows constructed of marble found in Yantra mandir. This collection of sundials and other astronomical instruments was built by Maharaja Jai Singh II at his then-new capital of Jaipur, India between 1727 and 1733. The larger equatorial bow is called the "Samrat Yantra" (The Supreme Instrument); standing at 27 meters, its shadow moves visibly at 1 mm per second, or roughly a hand's breadth (6 cm) every minute.
Cylindrical, conical, and other non-planar sundials.
Other non-planar surfaces may be used to receive the shadow of the gnomon.
As an elegant alternative, the style (which could be created by a hole or slit in the circumference) may be located on the circumference of a cylinder or sphere, rather than at its central axis of symmetry.
In that case, the hour lines are again spaced equally, but at "twice" the usual angle, due to the geometrical inscribed angle theorem. This is the basis of some modern sundials, but it was also used in ancient times.
In another variation of the polar-axis-aligned cylindrical dial, the dial may be rendered as a helical ribbon-like surface, with a thin gnomon located either along its center or at its periphery.
Movable-gnomon sundials.
Sundials can be designed with a gnomon that is placed in a different position each day throughout the year. In other words, the position of the gnomon relative to the centre of the hour lines varies. The gnomon need not be aligned with the celestial poles and may even be perfectly vertical (the analemmatic dial). These dials, when combined with fixed-gnomon sundials, allow the user to determine true north with no other aid; the two sundials are correctly aligned if and only if they both show the same time.
Universal equinoctial ring dial.
A "universal equinoctial ring dial" (sometimes called a "ring dial" for brevity, although the term is ambiguous), is a portable version of an armillary sundial, or was inspired by the mariner's astrolabe. It was likely invented by William Oughtred around 1600 and became common throughout Europe.
In its simplest form, the style is a thin slit that allows the Sun's rays to fall on the hour-lines of an equatorial ring. As usual, the style is aligned with the Earth's axis; to do this, the user may orient the dial towards true north and suspend the ring dial vertically from the appropriate point on the meridian ring. Such dials may be made self-aligning with the addition of a more complicated central bar, instead of a simple slit-style. These bars are sometimes an addition to a set of Gemma's rings. This bar could pivot about its end points and held a perforated slider that was positioned to the month and day according to a scale scribed on the bar. The time was determined by rotating the bar towards the Sun so that the light shining through the hole fell on the equatorial ring. This forced the user to rotate the instrument, which had the effect of aligning the instrument's vertical ring with the meridian.
When not in use, the equatorial and meridian rings can be folded together into a small disk.
In 1610, Edward Wright created the sea ring, which mounted a universal ring dial over a magnetic compass. This permitted mariners to determine the time and magnetic variation in a single step.
Analemmatic sundials.
Analemmatic sundials are a type of horizontal sundial that has a vertical gnomon and hour markers positioned in an elliptical pattern. There are no hour lines on the dial and the time of day is read on the ellipse. The gnomon is not fixed and must change position daily to accurately indicate time of day.
Analemmatic sundials are sometimes designed with a human as the gnomon. Human gnomon analemmatic sundials are not practical at lower latitudes where a human shadow is quite short during the summer months. A 66 inch tall person casts a 4 inch shadow at 27° latitude on the summer solstice.
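The shadow length quoted above can be checked with elementary trigonometry; the sketch below assumes the standard noon-altitude relation and the usual solstice declination of about 23.44°, neither of which is specific to this article:
<syntaxhighlight lang="python">
import math

def noon_shadow_length(object_height, latitude_deg, solar_declination_deg):
    """Noon shadow of a vertical object: height / tan(solar altitude), with the
    noon altitude taken as 90 deg - |latitude - declination|."""
    altitude_deg = 90.0 - abs(latitude_deg - solar_declination_deg)
    return object_height / math.tan(math.radians(altitude_deg))

# A 66-inch-tall person at 27 N on the June solstice (declination about 23.44 deg):
print(round(noon_shadow_length(66.0, 27.0, 23.44), 1))  # about 4.1 inches
</syntaxhighlight>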
Foster-Lambert dials.
The Foster-Lambert dial is another movable-gnomon sundial. In contrast to the elliptical analemmatic dial, the Lambert dial is circular with evenly spaced hour lines, making it an "equiangular sundial", similar to the equatorial, spherical, cylindrical and conical dials described above. The gnomon of a Foster-Lambert dial is neither vertical nor aligned with the Earth's rotational axis; rather, it is tilted northwards by an angle α = 45° - (Φ/2), where Φ is the geographical latitude. Thus, a Foster-Lambert dial located at latitude 40° would have a gnomon tilted away from vertical by 25° in a northerly direction. To read the correct time, the gnomon must also be moved northwards by a distance
formula_52
where "R" is the radius of the Foster-Lambert dial and δ again indicates the Sun's declination for that time of year.
Altitude-based sundials.
Altitude dials measure the height of the Sun in the sky, rather than directly measuring its hour-angle about the Earth's axis. They are not oriented towards true north, but rather towards the Sun and generally held vertically. The Sun's elevation is indicated by the position of a nodus, either the shadow-tip of a gnomon, or a spot of light.
In altitude dials, the time is read from where the nodus falls on a set of hour-curves that vary with the time of year. The construction of many altitude dials is calculation-intensive, as is also the case with many azimuth dials. But the capuchin dials (described below) are constructed and used graphically.
Altitude dials' disadvantages:
Since the Sun's altitude is the same at times equally spaced about noon (e.g., 9am and 3pm), the user had to know whether it was morning or afternoon. At, say, 3:00 pm, that is not a problem. But when the dial indicates a time 15 minutes from noon, the user likely will not have a way of distinguishing 11:45 from 12:15.
Additionally, altitude dials are less accurate near noon, because the sun's altitude is not changing rapidly then.
Many of these dials are portable and simple to use. As is often the case with other sundials, many altitude dials are designed for only one latitude. But the capuchin dial (described below) has a version that's adjustable for latitude.
Human shadows.
The length of a human shadow (or of any vertical object) can be used to measure the sun's elevation and, thence, the time. The Venerable Bede gave a table for estimating the time from the length of one's shadow in feet, on the assumption that a monk's height is six times the length of his foot. Such shadow lengths will vary with the geographical latitude and with the time of year. For example, the shadow length at noon is short in summer months, and long in winter months.
Chaucer evokes this method a few times in his "Canterbury Tales", as in his "Parson's Tale".
An equivalent type of sundial using a vertical rod of fixed length is known as a "backstaff dial".
Shepherd's dial – timesticks.
A shepherd's dial – also known as a "shepherd's column dial", "pillar dial", "cylinder dial" or "chilindre" – is a portable cylindrical sundial with a knife-like gnomon that juts out perpendicularly. It is normally dangled from a rope or string so the cylinder is vertical. The gnomon can be twisted to be above a month or day indication on the face of the cylinder. This corrects the sundial for the equation of time. The entire sundial is then twisted on its string so that the gnomon aims toward the Sun, while the cylinder remains vertical. The tip of the shadow indicates the time on the cylinder. The hour curves inscribed on the cylinder permit one to read the time. Shepherd's dials are sometimes hollow, so that the gnomon can fold within when not in use.
The shepherd's dial is evoked in "Henry VI, Part 3",
among other works of literature.
The cylindrical shepherd's dial can be unrolled into a flat plate. In one simple version, the front and back of the plate each have three columns, corresponding to pairs of months with roughly the same solar declination (June:July, May:August, April:September, March:October, February:November, and January:December). The top of each column has a hole for inserting the shadow-casting gnomon, a peg. Often only two times are marked on the column below, one for noon and the other for mid-morning / mid-afternoon.
Timesticks, "clock spear", or "shepherds' time stick", are based on the same principles as dials. The time stick is carved with eight vertical time scales for a different period of the year, each bearing a time scale calculated according to the relative amount of daylight during the different months of the year. Any reading depends not only on the time of day but also on the latitude and time of year.
A peg gnomon is inserted at the top in the appropriate hole or face for the season of the year, and turned to the Sun so that the shadow falls directly down the scale. Its end displays the time.
Ring dials.
In a ring dial (also known as an "Aquitaine" or a "perforated ring dial"), the ring is hung vertically and oriented sideways towards the sun. A beam of light passes through a small hole in the ring and falls on hour-curves that are inscribed on the inside of the ring. To adjust for the equation of time, the hole is usually on a loose ring within the ring so that the hole can be adjusted to reflect the current month.
Card dials (Capuchin dials).
Card dials are another form of altitude dial. A card is aligned edge-on with the sun and tilted so that a ray of light passes through an aperture onto a specified spot, thus determining the sun's altitude. A weighted string hangs vertically downwards from a hole in the card, and carries a bead or knot. The position of the bead on the hour-lines of the card gives the time. In more sophisticated versions such as the Capuchin dial, there is only one set of hour-lines, i.e., the hour lines do not vary with the seasons. Instead, the position of the hole from which the weighted string hangs is varied according to the season.
The Capuchin sundials are constructed and used graphically, as opposed to the direct hour-angle measurements of horizontal or equatorial dials, or the calculated hour-angle lines of some altitude and azimuth dials.
In addition to the ordinary Capuchin dial, there is a universal Capuchin dial, adjustable for latitude.
Navicula.
A navicula de Venetiis or "little ship of Venice" was an altitude dial used to tell time and which was shaped like a little ship. The cursor (with a plumb line attached) was slid up / down the mast to the correct latitude. The user then sighted the Sun through the pair of sighting holes at either end of the "ship's deck". The plumb line then marked what hour of the day it was.
Nodus-based sundials.
Another type of sundial follows the motion of a single point of light or shadow, which may be called the "nodus". For example, the sundial may follow the sharp tip of a gnomon's shadow, e.g., the shadow-tip of a vertical obelisk (e.g., the "Solarium Augusti") or the tip of the horizontal marker in a shepherd's dial. Alternatively, sunlight may be allowed to pass through a small hole or reflected from a small (e.g., coin-sized) circular mirror, forming a small spot of light whose position may be followed. In such cases, the rays of light trace out a cone over the course of a day; when the rays fall on a surface, the path followed is the intersection of the cone with that surface. Most commonly, the receiving surface is a geometrical plane, so that the path of the shadow-tip or light-spot (called "declination line") traces out a conic section such as a hyperbola or an ellipse. The collection of hyperbolae was called a "pelekonon" (axe) by the Greeks, because it resembles a double-bladed ax, narrow in the center (near the noonline) and flaring out at the ends (early morning and late evening hours).
There is a simple verification of hyperbolic declination lines on a sundial: the distance from the origin to the equinox line should be equal to harmonic mean of distances from the origin to summer and winter solstice lines.
Nodus-based sundials may use a small hole or mirror to isolate a single ray of light; the former are sometimes called "aperture dials". The oldest example is perhaps the antiborean sundial ("antiboreum"), a spherical nodus-based sundial that faces true north; a ray of sunlight enters from the south through a small hole located at the sphere's pole and falls on the hour and date lines inscribed within the sphere, which resemble lines of longitude and latitude, respectively, on a globe.
Reflection sundials.
Isaac Newton developed a convenient and inexpensive sundial, in which a small mirror is placed on the sill of a south-facing window. The mirror acts like a nodus, casting a single spot of light on the ceiling. Depending on the geographical latitude and time of year, the light-spot follows a conic section, such as the hyperbolae of the pelikonon. If the mirror is parallel to the Earth's equator, and the ceiling is horizontal, then the resulting angles are those of a conventional horizontal sundial. Using the ceiling as a sundial surface exploits unused space, and the dial may be large enough to be very accurate.
Multiple dials.
Sundials are sometimes combined into multiple dials. If two or more dials that operate on different principles — such as an analemmatic dial and a horizontal or vertical dial — are combined, the resulting multiple dial can be self-aligning, provided both dials indicate both the time and the Sun's declination. In other words, the direction of true north need not be determined; the dials are oriented correctly when they read the same time and declination. However, the most common combinations use dials based on the same principle, and the analemmatic dial does not normally indicate the Sun's declination, so such combinations are not self-aligning.
Diptych (tablet) sundial.
The diptych consisted of two small flat faces, joined by a hinge. Diptychs usually folded into little flat boxes suitable for a pocket. The gnomon was a string between the two faces. When the string was tight, the two faces formed both a vertical and horizontal sundial. These were made of white ivory, inlaid with black lacquer markings. The gnomons were black braided silk, linen or hemp string. With a knot or bead on the string as a nodus, and the correct markings, a diptych (really any sundial large enough) can keep a calendar well-enough to plant crops. A common error describes the diptych dial as self-aligning. This is not correct for diptych dials consisting of a horizontal and vertical dial using a string gnomon between faces, no matter the orientation of the dial faces. Since the string gnomon is continuous, the shadows must meet at the hinge; hence, "any" orientation of the dial will show the same time on both dials.
Multiface dials.
A common type of multiple dial has sundials on every face of a Platonic solid (regular polyhedron), usually a cube.
Extremely ornate sundials can be composed in this way, by applying a sundial to every surface of a solid object.
In some cases, the sundials are formed as hollows in a solid object, e.g., a cylindrical hollow aligned with the Earth's rotational axis (in which the edges play the role of styles) or a spherical hollow in the ancient tradition of the "hemisphaerium" or the "antiboreum". (See the History section above.) In some cases, these multiface dials are small enough to sit on a desk, whereas in others, they are large stone monuments.
A polyhedral dial's faces can be designed to give the time for different time zones simultaneously. Examples include the Scottish sundials of the 17th and 18th centuries, which often took extremely complex shapes with polyhedral, and even convex, faces.
Prismatic dials.
Prismatic dials are a special case of polar dials, in which the sharp edges of a prism of a concave polygon serve as the styles and the sides of the prism receive the shadow. Examples include a three-dimensional cross or star of David on gravestones.
Unusual sundials.
Benoy dial.
The Benoy dial was invented by Walter Gordon Benoy of Collingham, Nottinghamshire, England. Whereas a gnomon casts a sheet of shadow, his invention creates an equivalent sheet of light by allowing the Sun's rays through a thin slit, reflecting them from a long, slim mirror (usually half-cylindrical), or focusing them through a cylindrical lens. Examples of Benoy dials can be found at several sites in the United Kingdom.
Bifilar sundial.
Invented by the German mathematician Hugo Michnik in 1922, the bifilar sundial has two non-intersecting threads parallel to the dial. Usually the second thread is orthogonal to the first.
The intersection of the two threads' shadows gives the local solar time.
Digital sundial.
A digital sundial indicates the current time with numerals formed by the sunlight striking it. Sundials of this type are installed in the Deutsches Museum in Munich and in the Sundial Park in Genk (Belgium), and a small version is available commercially. There is a patent for this type of sundial.
Globe dial.
The globe dial is a sphere aligned with the Earth's rotational axis, and equipped with a spherical vane. Similar to sundials with a fixed axial style, a globe dial determines the time from the Sun's azimuthal angle in its apparent rotation about the earth. This angle can be determined by rotating the vane to give the smallest shadow.
Noon marks.
The simplest sundials do not give the hours, but rather note the exact moment of 12:00 noon. In centuries past, such dials were used to set mechanical clocks, which were sometimes so inaccurate as to lose or gain significant time in a single day. The simplest noon-marks have a shadow that passes a mark. Then, an almanac can translate from local solar time and date to civil time. The civil time is used to set the clock. Some noon-marks include a figure-eight that embodies the equation of time, so that no almanac is needed.
In some U.S. colonial-era houses, a noon-mark might be carved into a floor or windowsill. Such marks indicate local noon, and provide a simple and accurate time reference for households to set their clocks. Some Asian countries had post offices set their clocks from a precision noon-mark. These in turn provided the times for the rest of the society. The typical noon-mark sundial was a lens set above an analemmatic plate. The plate has an engraved figure-eight shape, which corresponds to the equation of time (described above) versus the solar declination. When the edge of the Sun's image touches the part of the shape for the current month, this indicates that it is 12:00 noon.
Sundial cannon.
A sundial cannon, sometimes called a 'meridian cannon', is a specialized sundial that is designed to create an 'audible noonmark', by automatically igniting a quantity of gunpowder at noon. These were novelties rather than precision sundials, sometimes installed in parks in Europe mainly in the late 18th or early 19th centuries. They typically consist of a horizontal sundial, which has in addition to a gnomon a suitably mounted lens, set to focus the rays of the sun at exactly noon on the firing pan of a miniature cannon loaded with gunpowder (but no ball). To function properly the position and angle of the lens must be adjusted seasonally.
Meridian lines.
A horizontal line aligned on a meridian with a gnomon facing the noon-sun is termed a meridian line and does not indicate the time, but instead the day of the year. Historically they were used to accurately determine the length of the solar year. Examples are the Bianchini meridian line in Santa Maria degli Angeli e dei Martiri in Rome, and the Cassini line in San Petronio Basilica at Bologna.
Sundial mottoes.
The association of sundials with time has inspired their designers over the centuries to display mottoes as part of the design. Often these cast the device in the role of "memento mori", inviting the observer to reflect on the transience of the world and the inevitability of death. "Do not kill time, for it will surely kill thee." Other mottoes are more whimsical: "I count only the sunny hours," and "I am a sundial and I make a botch / of what is done far better by a watch." Collections of sundial mottoes have often been published through the centuries.
Use as a compass.
If a horizontal-plate sundial is made for the latitude in which it is being used, and if it is mounted with its plate horizontal and its gnomon pointing to the celestial pole that is above the horizon, then it shows the correct time in apparent solar time. Conversely, if the directions of the cardinal points are initially unknown, but the sundial is aligned so it shows the correct apparent solar time as calculated from the reading of a clock, its gnomon shows the direction of True north or south, allowing the sundial to be used as a compass. The sundial can be placed on a horizontal surface, and rotated about a vertical axis until it shows the correct time. The gnomon will then be pointing to the north, in the northern hemisphere, or to the south in the southern hemisphere. This method is much more accurate than using a watch as a compass (see Cardinal direction#Watch face) and can be used in places where the magnetic declination is large, making a magnetic compass unreliable. An alternative method uses two sundials of different designs. (See #Multiple dials, above.) The dials are attached to and aligned with each other, and are oriented so they show the same time. This allows the directions of the cardinal points and the apparent solar time to be determined simultaneously, without requiring a clock.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " H_E = 15^{\\circ}\\times t\\text{ (hours)} ~."
},
{
"math_id": 1,
"text": " \\text{Correction}^{\\circ} = \\frac{\\text{EoT (minutes)} + 60 \\times \\Delta \\text{DST (hours)}}{4} ~."
},
{
"math_id": 2,
"text": "\\ \\tan H_H = \\sin L\\ \\tan \\left(\\ 15^{\\circ} \\times t\\ \\right)\\ "
},
{
"math_id": 3,
"text": " \\ H_H = \\tan^{-1}\\left[\\ \\sin L\\ \\tan(\\ 15^{\\circ} \\times t\\ )\\ \\right] "
},
{
"math_id": 4,
"text": "\\ H_H\\ "
},
{
"math_id": 5,
"text": "\\ L = 90^\\circ\\ "
},
{
"math_id": 6,
"text": "\\ H_H = 15^\\circ \\times t\\ ,"
},
{
"math_id": 7,
"text": "\\ L = 0^\\circ\\ ,"
},
{
"math_id": 8,
"text": " \\tan H_V = \\cos L\\ \\tan(\\ 15^{\\circ} \\times t\\ )\\ "
},
{
"math_id": 9,
"text": "\\ H_V\\ "
},
{
"math_id": 10,
"text": " X = H\\ \\tan(\\ 15^{\\circ} \\times t\\ )\\ "
},
{
"math_id": 11,
"text": "\\ H_\\text{VD}\\ "
},
{
"math_id": 12,
"text": " \\tan H_\\text{VD} = \\frac{\\cos L}{\\ \\cos D\\ \\cot(\\ 15^{\\circ} \\times t\\ ) - s_o\\ \\sin L\\ \\sin D\\ } "
},
{
"math_id": 13,
"text": "\\ L\\ "
},
{
"math_id": 14,
"text": "\\ D\\ "
},
{
"math_id": 15,
"text": "\\ s_o\\ "
},
{
"math_id": 16,
"text": "\\ D = 0^{\\circ}\\ "
},
{
"math_id": 17,
"text": "\\ \\tan H_\\text{V} = \\cos L\\ \\tan(\\ 15^{\\circ} \\times t\\ )\\ "
},
{
"math_id": 18,
"text": "\\ B\\ "
},
{
"math_id": 19,
"text": " \\tan B = \\sin D\\ \\cot L "
},
{
"math_id": 20,
"text": "\\ D = 0^{\\circ}\\ "
},
{
"math_id": 21,
"text": "\\ D = 180^{\\circ}\\ ,"
},
{
"math_id": 22,
"text": "\\ B = 0^{\\circ}\\ "
},
{
"math_id": 23,
"text": "\\ G\\ ,"
},
{
"math_id": 24,
"text": "\\ \\sin G = \\cos D\\ \\cos L ~"
},
{
"math_id": 25,
"text": "\\ \\tan H_{RV} = \\cos(\\ L + R\\ )\\ \\tan(\\ 15^{\\circ} \\times t\\ )\\ "
},
{
"math_id": 26,
"text": "\\ R\\ "
},
{
"math_id": 27,
"text": "\\ H_{RV}\\ "
},
{
"math_id": 28,
"text": "\\ I = 90^\\circ + R\\ ,"
},
{
"math_id": 29,
"text": " \\tan H_{RV} = \\sin(L + I)\\ \\tan(\\ 15^{\\circ} \\times t\\ )\\ "
},
{
"math_id": 30,
"text": " B = 90^{\\circ} - (L + R) "
},
{
"math_id": 31,
"text": " B = 180^{\\circ} - (L + I) "
},
{
"math_id": 32,
"text": "\\ H_\\text{RD}\\ "
},
{
"math_id": 33,
"text": "\\ \\tan H_\\text{RD} = \\frac{\\ \\cos R\\ \\cos L - \\sin R\\ \\sin L\\ \\cos D - s_o \\sin R \\sin D \\cot(15^{\\circ} \\times t)\\ }{\\ \\cos D\\ \\cot(15^{\\circ} \\times t) - s_o \\sin D\\ \\sin L }\\ "
},
{
"math_id": 34,
"text": "\\ D < D_c\\ "
},
{
"math_id": 35,
"text": " -90^{\\circ} < R < (90^{\\circ} - L) ~."
},
{
"math_id": 36,
"text": "\\ I\\ ,"
},
{
"math_id": 37,
"text": "\\ R\\ ,"
},
{
"math_id": 38,
"text": "\\ I = (90^{\\circ} + R)\\ "
},
{
"math_id": 39,
"text": "\\ \\tan H_\\text{RD} = \\frac{\\ \\sin I\\ \\cos L + \\cos I\\ \\sin L\\ \\cos D + s_o \\cos I\\ \\sin D\\ \\cot(15^{\\circ} \\times t)\\ }{\\ \\cos D\\ \\cot(15^{\\circ} \\times t\\ ) - s_o \\sin D\\ \\sin L\\ }\\ "
},
{
"math_id": 40,
"text": "\\ D < D_c ~~"
},
{
"math_id": 41,
"text": "~~ 0^{\\circ} < I < (180^{\\circ} - L) ~."
},
{
"math_id": 42,
"text": "\\ s_o = +1\\ ,"
},
{
"math_id": 43,
"text": "\\ s_o = -1 ~."
},
{
"math_id": 44,
"text": "\\ D_c\\ "
},
{
"math_id": 45,
"text": "\\ \\cos D_c = \\tan R\\ \\tan L = - \\tan L\\ \\cot I\\ "
},
{
"math_id": 46,
"text": "\\ B\\ ,"
},
{
"math_id": 47,
"text": "\\ \\tan B = \\frac {\\sin D}{\\sin R\\ \\cos D + \\cos R\\ \\tan L} = \\frac {\\sin D}{\\ \\cos I\\ \\cos D - \\sin I\\ \\tan L\\ }\\ "
},
{
"math_id": 48,
"text": "\\ \\sin G = \\cos L\\ \\cos D\\ \\cos R - \\sin L\\ \\sin R = - \\cos L\\ \\cos D\\ \\sin I + \\sin L\\ \\cos I\\ "
},
{
"math_id": 49,
"text": "\\ G = 0^{\\circ}\\ ,"
},
{
"math_id": 50,
"text": "\\ \\cos D = \\tan L\\ \\tan R = - \\tan L\\ \\cot I\\ "
},
{
"math_id": 51,
"text": "\\ D = D_c\\ ,"
},
{
"math_id": 52,
"text": "\nY = R \\tan \\alpha \\tan \\delta \\,\n"
}
]
| https://en.wikipedia.org/wiki?curid=72907 |
7290730 | Rotation formalisms in three dimensions | Ways to represent 3D rotations
In geometry, various formalisms exist to express a rotation in three dimensions as a mathematical transformation. In physics, this concept is applied to classical mechanics where rotational (or angular) kinematics is the science of quantitative description of a purely rotational motion. The orientation of an object at a given instant is described with the same tools, as it is defined as an imaginary rotation from a reference placement in space, rather than an actually observed rotation from a previous placement in space.
According to Euler's rotation theorem, the rotation of a rigid body (or three-dimensional coordinate system with a fixed origin) is described by a single rotation about some axis. Such a rotation may be uniquely described by a minimum of three real parameters. However, for various reasons, there are several ways to represent it. Many of these representations use more than the necessary minimum of three parameters, although each of them still has only three degrees of freedom.
An example where rotation representation is used is in computer vision, where an automated observer needs to track a target. Consider a rigid body, with three orthogonal unit vectors fixed to its body (representing the three axes of the object's local coordinate system). The basic problem is to specify the orientation of these three unit vectors, and hence the rigid body, with respect to the observer's coordinate system, regarded as a reference placement in space.
Rotations and motions.
Rotation formalisms are focused on proper (orientation-preserving) motions of the Euclidean space with one fixed point, that a "rotation" refers to. Although physical motions with a fixed point are an important case (such as ones described in the center-of-mass frame, or motions of a joint), this approach creates a knowledge about all motions. Any proper motion of the Euclidean space decomposes to a rotation around the origin and a translation. Whichever the order of their composition will be, the "pure" rotation component wouldn't change, uniquely determined by the complete motion.
One can also understand "pure" rotations as linear maps in a vector space equipped with Euclidean structure, not as maps of points of a corresponding affine space. In other words, a rotation formalism captures only the rotational part of a motion, that contains three degrees of freedom, and ignores the translational part, that contains another three.
When representing a rotation as numbers in a computer, some people prefer the quaternion representation or the axis+angle representation, because they avoid the gimbal lock that can occur with Euler rotations.
Formalism alternatives.
Rotation matrix.
The above-mentioned triad of unit vectors is also called a basis. Specifying the coordinates ("components") of vectors of this basis in its current (rotated) position, in terms of the reference (non-rotated) coordinate axes, will completely describe the rotation. The three unit vectors, û, v̂ and ŵ, that form the rotated basis each consist of 3 coordinates, yielding a total of 9 parameters.
These parameters can be written as the elements of a 3 × 3 matrix A, called a rotation matrix. Typically, the coordinates of each of these vectors are arranged along a column of the matrix (however, beware that an alternative definition of rotation matrix exists and is widely used, where the vectors' coordinates defined above are arranged by rows)
formula_0
The elements of the rotation matrix are not all independent—as Euler's rotation theorem dictates, the rotation matrix has only three degrees of freedom.
The rotation matrix has the following properties: A is a real, orthogonal matrix, so each of its rows and each of its columns is a unit vector and they are mutually perpendicular; the eigenvalues of A are formula_1, where i is the standard imaginary unit; the determinant of A is +1, equal to the product of its eigenvalues; and the trace of A is 1 + 2 cos θ, equal to the sum of its eigenvalues.
The angle θ which appears in the eigenvalue expression corresponds to the angle of the Euler axis and angle representation. The eigenvector corresponding to the eigenvalue of 1 is the accompanying Euler axis, since the axis is the only (nonzero) vector which remains unchanged by left-multiplying (rotating) it with the rotation matrix.
The above properties are equivalent to
formula_2
which is another way of stating that (û, v̂, ŵ) form a 3D orthonormal basis. These statements comprise a total of 6 conditions (the cross product contains 3), leaving the rotation matrix with just 3 degrees of freedom, as required.
Two successive rotations represented by matrices A1 and A2 are easily combined as elements of a group,
formula_3
(Note the order, since the vector being rotated is multiplied from the right).
The ease by which vectors can be rotated using a rotation matrix, as well as the ease of combining successive rotations, make the rotation matrix a useful and popular way to represent rotations, even though it is less concise than other representations.
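To make this concrete, here is a minimal numpy sketch (not part of the original article; the helper names and the chosen angles are purely illustrative) that builds two elementary rotation matrices, composes them as A_total = A2 A1, applies the result to a vector, and checks the orthogonality and determinant conditions stated above.
<syntaxhighlight lang="python">
import numpy as np

def rot_z(angle):
    """Rotation matrix for a rotation by `angle` radians about the z-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def rot_x(angle):
    """Rotation matrix for a rotation by `angle` radians about the x-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0, c, -s],
                     [0.0, s,  c]])

A1 = rot_x(np.pi / 6)          # first rotation
A2 = rot_z(np.pi / 4)          # second rotation
A_total = A2 @ A1              # note the order: the vector is multiplied from the right

v = np.array([1.0, 0.0, 0.0])
v_rotated = A_total @ v

# A rotation matrix is orthogonal with determinant +1.
assert np.allclose(A_total @ A_total.T, np.eye(3))
assert np.isclose(np.linalg.det(A_total), 1.0)
</syntaxhighlight>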
Euler axis and angle (rotation vector).
From Euler's rotation theorem we know that any rotation can be expressed as a single rotation about some axis. The axis is the unit vector (unique except for sign) which remains unchanged by the rotation. The magnitude of the angle is also unique, with its sign being determined by the sign of the rotation axis.
The axis can be represented as a three-dimensional unit vector
formula_4
and the angle by a scalar θ.
Since the axis is normalized, it has only two degrees of freedom. The angle adds the third degree of freedom to this rotation representation.
One may wish to express rotation as a rotation vector, or Euler vector, an un-normalized three-dimensional vector the direction of which specifies the axis, and the length of which is θ,
formula_5
The rotation vector is useful in some contexts, as it represents a three-dimensional rotation with only three scalar values (its components), representing the three degrees of freedom. This is also true for representations based on sequences of three Euler angles (see below).
If the rotation angle θ is zero, the axis is not uniquely defined. Combining two successive rotations, each represented by an Euler axis and angle, is not straightforward, and in fact does not satisfy the law of vector addition, which shows that finite rotations are not really vectors at all. It is best to employ the rotation matrix or quaternion notation, calculate the product, and then convert back to Euler axis and angle.
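The point about finite rotations not adding like vectors can be demonstrated with a short numpy sketch (illustrative, not from the article): two rotation vectors are converted to matrices with Rodrigues' formula, and the composition of those matrices is compared with the matrix of the naive vector sum.
<syntaxhighlight lang="python">
import numpy as np

def rotvec_to_matrix(r):
    """Rotation matrix for the rotation vector r = theta * e_hat (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    e = r / theta
    K = np.array([[0, -e[2], e[1]],
                  [e[2], 0, -e[0]],
                  [-e[1], e[0], 0]])
    return np.eye(3) * np.cos(theta) + (1 - np.cos(theta)) * np.outer(e, e) + np.sin(theta) * K

r1 = np.array([0.0, 0.0, np.pi / 2])   # 90 degrees about z
r2 = np.array([np.pi / 2, 0.0, 0.0])   # 90 degrees about x

composed = rotvec_to_matrix(r2) @ rotvec_to_matrix(r1)   # apply r1, then r2
naive_sum = rotvec_to_matrix(r1 + r2)                    # "vector addition" of rotations

print(np.allclose(composed, naive_sum))   # False: finite rotations do not add like vectors
</syntaxhighlight>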
Euler rotations.
The idea behind Euler rotations is to split the complete rotation of the coordinate system into three simpler constitutive rotations, called precession, nutation, and intrinsic rotation, being each one of them an increment on one of the Euler angles. Notice that the outer matrix will represent a rotation around one of the axes of the reference frame, and the inner matrix represents a rotation around one of the moving frame axes. The middle matrix represents a rotation around an intermediate axis called line of nodes.
However, the definition of Euler angles is not unique and in the literature many different conventions are used. These conventions depend on the axes about which the rotations are carried out, and their sequence (since rotations on a sphere are non-commutative).
The convention being used is usually indicated by specifying the axes about which the consecutive rotations (before being composed) take place, referring to them by index (1, 2, 3) or letter (X, Y, Z). The engineering and robotics communities typically use 3-1-3 Euler angles. Notice that after composing the independent rotations, they do not rotate about their axis anymore. The most external matrix rotates the other two, leaving the second rotation matrix over the line of nodes, and the third one in a frame comoving with the body. There are 3 × 3 × 3 = 27 possible combinations of three basic rotations but only 3 × 2 × 2 = 12 of them can be used for representing arbitrary 3D rotations as Euler angles. These 12 combinations avoid consecutive rotations around the same axis (such as XXY) which would reduce the degrees of freedom that can be represented.
Therefore, Euler angles are never expressed in terms of the external frame, or in terms of the co-moving rotated body frame, but in a mixture. Other conventions (e.g., rotation matrix or quaternions) are used to avoid this problem.
In aviation orientation of the aircraft is usually expressed as intrinsic Tait-Bryan angles following the "z"-"y"′-"x"″ convention, which are called heading, elevation, and bank (or synonymously, yaw, pitch, and roll).
Quaternions.
Quaternions, which form a four-dimensional vector space, have proven very useful in representing rotations due to several advantages over the other representations mentioned in this article.
A quaternion representation of rotation is written as a versor (normalized quaternion):
formula_6
The above definition stores the quaternion as an array following the convention used in (Wertz 1980) and (Markley 2003). An alternative definition, used for example in (Coutsias 1999) and (Schmidt 2001), defines the "scalar" term as the first quaternion element, with the other elements shifted down one position.
In terms of the Euler axis
formula_4
and angle θ this versor's components are expressed as follows:
formula_7
Inspection shows that the quaternion parametrization obeys the following constraint:
formula_8
The last term (in our definition) is often called the scalar term, which has its origin in quaternions when understood as the mathematical extension of the complex numbers, written as
formula_9
and where {"i", "j", "k"} are the hypercomplex numbers satisfying
formula_10
Quaternion multiplication, which is used to specify a composite rotation, is performed in the same manner as multiplication of complex numbers, except that the order of the elements must be taken into account, since multiplication is not commutative. In matrix notation we can write quaternion multiplication as
formula_11
Combining two consecutive quaternion rotations is therefore just as simple as using the rotation matrix. Just as two successive rotation matrices, A1 followed by A2, are combined as
formula_12
we can represent this with quaternion parameters in a similarly concise way:
formula_13
Quaternions are a very popular parametrization due to the following properties: they are more compact than the matrix representation and less susceptible to round-off errors; the quaternion elements vary continuously as the orientation changes, avoiding the discontinuous jumps inherent to three-parameter representations; the expression of the rotation matrix in terms of quaternion parameters involves no trigonometric functions; and it is simple to combine two individual rotations represented as quaternions using a quaternion product.
Like rotation matrices, quaternions must sometimes be renormalized due to rounding errors, to make sure that they correspond to valid rotations. The computational cost of renormalizing a quaternion, however, is much less than for normalizing a 3 × 3 matrix.
Quaternions also capture the spinorial character of rotations in three dimensions. For a three-dimensional object connected to its (fixed) surroundings by slack strings or bands, the strings or bands can be untangled after "two" complete turns about some fixed axis from an initial untangled state. Algebraically, the quaternion describing such a rotation changes from a scalar +1 (initially), through (scalar + pseudovector) values to scalar −1 (at one full turn), through (scalar + pseudovector) values back to scalar +1 (at two full turns). This cycle repeats every 2 turns. After 2"n" turns (integer "n" > 0), without any intermediate untangling attempts, the strings/bands can be partially untangled back to the 2("n" − 1) turns state with each application of the same procedure used in untangling from 2 turns to 0 turns. Applying the same procedure "n" times will take a 2"n"-tangled object back to the untangled or 0 turn state. The untangling process also removes any rotation-generated twisting about the strings/bands themselves. Simple 3D mechanical models can be used to demonstrate these facts.
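A minimal numpy sketch of quaternion composition (not part of the article; it assumes the scalar-last ordering [qi, qj, qk, qr] used above, and the helper names are illustrative), showing the product q3 = q2 ⊗ q1 together with the cheap renormalization step mentioned above.
<syntaxhighlight lang="python">
import numpy as np

def quat_multiply(q2, q1):
    """Hamilton product q2 * q1 with components ordered [qi, qj, qk, qr] (scalar last),
    so that the result represents rotation q1 followed by rotation q2."""
    v2, w2 = q2[:3], q2[3]
    v1, w1 = q1[:3], q1[3]
    vec = w2 * v1 + w1 * v2 + np.cross(v2, v1)
    scal = w2 * w1 - np.dot(v2, v1)
    return np.concatenate([vec, [scal]])

def quat_from_axis_angle(axis, theta):
    """Unit quaternion for a rotation by theta about the given axis."""
    axis = np.asarray(axis, dtype=float)
    axis = axis / np.linalg.norm(axis)
    return np.concatenate([axis * np.sin(theta / 2), [np.cos(theta / 2)]])

q1 = quat_from_axis_angle([0, 0, 1], np.pi / 2)
q2 = quat_from_axis_angle([1, 0, 0], np.pi / 2)
q3 = quat_multiply(q2, q1)          # q3 = q2 (x) q1, analogous to A3 = A2 A1

# Renormalize to guard against accumulated rounding error.
q3 = q3 / np.linalg.norm(q3)
</syntaxhighlight>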
Rodrigues vector.
The Rodrigues vector (sometimes called the Gibbs vector, with coordinates called Rodrigues parameters) can be expressed in terms of the axis and angle of the rotation as follows:
formula_14
This representation is a higher-dimensional analog of the gnomonic projection, mapping unit quaternions from a 3-sphere onto the 3-dimensional pure-vector hyperplane.
It has a discontinuity at 180° (π radians): as any rotation vector r tends to an angle of π radians, its tangent tends to infinity.
A rotation g followed by a rotation f in the Rodrigues representation has the simple rotation composition form
formula_15
Today, the most straightforward way to prove this formula is in the (faithful) doublet representation, where g = n̂ tan "a", etc.
The combinatoric features of the Pauli matrix derivation just mentioned are also identical to the equivalent quaternion derivation below. Construct a quaternion associated with a spatial rotation R as,
formula_16
Then the composition of the rotation RB with RA is the rotation "RC" = "RBRA", with rotation axis and angle defined by the product of the quaternions,
formula_17
that is
formula_18
Expand this quaternion product to
formula_19
Divide both sides of this equation by the identity resulting from the previous one,
formula_20
and evaluate
formula_21
This is Rodrigues' formula for the axis of a composite rotation defined in terms of the axes of the two component rotations. He derived this formula in 1840 (see page 408). The three rotation axes A, B, and C form a spherical triangle and the dihedral angles between the planes formed by the sides of this triangle are defined by the rotation angles.
Modified Rodrigues parameters (MRPs) can be expressed in terms of Euler axis and angle by
formula_22
Its components can be expressed in terms of the components of a unit quaternion representing the same rotation as
formula_23
The modified Rodrigues vector is a stereographic projection mapping unit quaternions from a 3-sphere onto the 3-dimensional pure-vector hyperplane. The projection of the opposite quaternion −q results in a different modified Rodrigues vector p"s" than the projection of the original quaternion q. Comparing components one obtains that
formula_24
Notably, if one of these vectors lies inside the unit 3-sphere, the other will lie outside.
Cayley–Klein parameters.
See definition at Wolfram Mathworld.
Vector transformation law.
Active rotations of a 3D vector p in Euclidean space around an axis n over an angle "η" can be easily written in terms of dot and cross products as follows:
formula_25
wherein
formula_26
is the longitudinal component of p along n, given by the dot product,
formula_27
is the transverse component of p with respect to n, and
formula_28
is the cross product of p with n.
The above formula shows that the longitudinal component of p remains unchanged, whereas the transverse portion of p is rotated in the plane perpendicular to n. This plane is spanned by the transverse portion of p itself and a direction perpendicular to both p and n. The rotation is directly identifiable in the equation as a 2D rotation over an angle "η".
Passive rotations can be described by the same formula, but with an inverse sign of either "η" or n.
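A short numpy sketch of this parallel/transverse decomposition (illustrative, not from the article). Note that it writes the last term as sin η (n × p), the usual right-handed active convention, so a positive η is a counter-clockwise turn about n; the article's p ∧ n = p × n term corresponds to the opposite sign of η.
<syntaxhighlight lang="python">
import numpy as np

def rotate_about_axis(p, n, eta):
    """Rotate p about the unit axis n by angle eta, using the parallel/transverse split."""
    n = n / np.linalg.norm(n)
    p_parallel = np.dot(p, n) * n            # component of p along n (unchanged)
    p_perp = p - p_parallel                  # transverse component (rotated in-plane)
    return p_parallel + np.cos(eta) * p_perp + np.sin(eta) * np.cross(n, p)

p = np.array([1.0, 2.0, 3.0])
n = np.array([0.0, 0.0, 1.0])
print(rotate_about_axis(p, n, np.pi / 2))    # ~ [-2, 1, 3] for a right-handed quarter turn about z
</syntaxhighlight>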
Conversion formulae between formalisms.
Rotation matrix ↔ Euler angles.
The Euler angles ("φ", "θ", "ψ") can be extracted from the rotation matrix A by inspecting the rotation matrix in analytical form.
Rotation matrix → Euler angles ("z"-"x"-"z" extrinsic).
Using the x-convention, the 3-1-3 extrinsic Euler angles φ, θ and ψ (around the z-axis, x-axis and again the formula_29-axis) can be obtained as follows:
formula_30
Note that atan2("a", "b") is equivalent to arctan("a"/"b"), except that it also takes into account the quadrant in which the point ("b", "a") lies; see atan2.
When implementing the conversion, one has to take into account several degenerate situations: when "θ" = 0 or "θ" = π the first and third rotation axes coincide (or are opposed), so only the sum "φ" + "ψ" or the difference "φ" − "ψ", respectively, is determined, and one of the two angles may be chosen arbitrarily.
Euler angles ("z"-"y"′-"x"″ intrinsic) → rotation matrix.
The rotation matrix A is generated from the 3-2-1 intrinsic Euler angles by multiplying the three matrices generated by rotations about the axes.
formula_31
The axes of the rotation depend on the specific convention being used. For the x-convention the rotations are about the x-, y- and z-axes with angles ϕ, θ and ψ, the individual matrices are as follows:
formula_32
This yields
formula_33
Note: This is valid for a right-hand system, which is the convention used in almost all engineering and physics disciplines.
The interpretation of these right-handed rotation matrices is that they express coordinate transformations (passive) as opposed to point transformations (active). Because A expresses a rotation from the local frame 1 to the global frame 0 (i.e., A encodes the axes of frame 1 with respect to frame 0), the elementary rotation matrices are composed as above. Because the inverse rotation is just the rotation transposed, if we wanted the global-to-local rotation from frame 0 to frame 1, we would write
formula_34
Rotation matrix ↔ Euler axis/angle.
If the Euler angle θ is not a multiple of π, the Euler axis ê and angle θ can be computed from the elements of the rotation matrix A as follows:
formula_35
Alternatively, the following method can be used:
Eigendecomposition of the rotation matrix yields the eigenvalues 1 and cos "θ" ± "i" sin "θ". The Euler axis is the eigenvector corresponding to the eigenvalue of 1, and θ can be computed from the remaining eigenvalues.
The Euler axis can be also found using singular value decomposition since it is the normalized vector spanning the null-space of the matrix I − A.
To convert the other way the rotation matrix corresponding to an Euler axis ê and angle θ can be computed according to Rodrigues' rotation formula (with appropriate modification) as follows:
formula_36
with I3 the 3 × 3 identity matrix, and
formula_37
is the cross-product matrix.
This expands to:
formula_38
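Both conversions can be checked against each other with a short numpy round trip (an illustrative sketch, not from the article, assuming 0 < θ < π so that the extraction formulas above are valid).
<syntaxhighlight lang="python">
import numpy as np

def axis_angle_to_matrix(e, theta):
    """Rodrigues' rotation formula: A = I cos(theta) + (1 - cos(theta)) e e^T + [e]x sin(theta)."""
    e = np.asarray(e, dtype=float)
    e = e / np.linalg.norm(e)
    K = np.array([[0, -e[2], e[1]],
                  [e[2], 0, -e[0]],
                  [-e[1], e[0], 0]])
    return np.eye(3) * np.cos(theta) + (1 - np.cos(theta)) * np.outer(e, e) + np.sin(theta) * K

def matrix_to_axis_angle(A):
    """Inverse conversion from the trace and the antisymmetric part (valid for 0 < theta < pi)."""
    theta = np.arccos((np.trace(A) - 1) / 2)
    e = np.array([A[2, 1] - A[1, 2],
                  A[0, 2] - A[2, 0],
                  A[1, 0] - A[0, 1]]) / (2 * np.sin(theta))
    return e, theta

e0, theta0 = np.array([1.0, 2.0, 2.0]) / 3.0, 0.7
A = axis_angle_to_matrix(e0, theta0)
e1, theta1 = matrix_to_axis_angle(A)
assert np.allclose(e0, e1) and np.isclose(theta0, theta1)
</syntaxhighlight>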
Rotation matrix ↔ quaternion.
When computing a quaternion from the rotation matrix there is a sign ambiguity, since q and −q represent the same rotation.
One way of computing the quaternion
formula_39
from the rotation matrix A is as follows:
formula_40
There are three other mathematically equivalent ways to compute q. Numerical inaccuracy can be reduced by avoiding situations in which the denominator is close to zero. One of the other three methods looks as follows:
formula_41
The rotation matrix corresponding to the quaternion q can be computed as follows:
formula_42
where
formula_43
which gives
formula_44
or equivalently
formula_45
This is called the Euler–Rodrigues formula for the transformation matrix formula_46
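A numpy sketch of both directions (illustrative, not from the article; it uses the scalar-last ordering [qi, qj, qk, qr] of this article and the first of the four extraction variants, so it assumes qr is not close to zero).
<syntaxhighlight lang="python">
import numpy as np

def quat_to_matrix(q):
    """Rotation matrix from a unit quaternion [qi, qj, qk, qr]."""
    qi, qj, qk, qr = q
    return np.array([
        [1 - 2*qj**2 - 2*qk**2, 2*(qi*qj - qk*qr),     2*(qi*qk + qj*qr)],
        [2*(qi*qj + qk*qr),     1 - 2*qi**2 - 2*qk**2, 2*(qj*qk - qi*qr)],
        [2*(qi*qk - qj*qr),     2*(qj*qk + qi*qr),     1 - 2*qi**2 - 2*qj**2]])

def matrix_to_quat(A):
    """Quaternion from a rotation matrix, assuming qr is not close to zero."""
    qr = 0.5 * np.sqrt(1 + A[0, 0] + A[1, 1] + A[2, 2])
    qi = (A[2, 1] - A[1, 2]) / (4 * qr)
    qj = (A[0, 2] - A[2, 0]) / (4 * qr)
    qk = (A[1, 0] - A[0, 1]) / (4 * qr)
    return np.array([qi, qj, qk, qr])

q = np.array([0.1, 0.2, 0.3, 0.9])
q = q / np.linalg.norm(q)            # enforce the unit constraint
assert np.allclose(matrix_to_quat(quat_to_matrix(q)), q)
</syntaxhighlight>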
Euler angles ↔ quaternion.
Euler angles ("z"-"x"-"z" extrinsic) → quaternion.
We will consider the x-convention 3-1-3 extrinsic Euler angles for the following algorithm. The terms of the algorithm depend on the convention used.
We can compute the quaternion
formula_39
from the Euler angles ("ϕ", "θ", "ψ") as follows:
formula_47
Euler angles ("z"-"y"′-"x"″ intrinsic) → quaternion.
A quaternion equivalent to yaw (ψ), pitch (θ) and roll (ϕ) angles, or intrinsic Tait–Bryan angles following the "z"-"y"′-"x"″ convention, can be computed by
formula_48
Quaternion → Euler angles ("z"-"x"-"z" extrinsic).
Given the rotation quaternion
formula_49
the x-convention 3-1-3 extrinsic Euler Angles ("φ", "θ", "ψ") can be computed by
formula_50
Quaternion → Euler angles ("z"-"y"′-"x"″ intrinsic).
Given the rotation quaternion
formula_49
yaw, pitch and roll angles, or intrinsic Tait–Bryan angles following the "z"-"y"′-"x"″ convention, can be computed by
formula_51
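These expressions transcribe directly into code; the following numpy sketch (illustrative, scalar-last ordering, not from the article) also clamps the arcsin argument, which can drift slightly outside [−1, 1] through rounding error.
<syntaxhighlight lang="python">
import numpy as np

def quat_to_yaw_pitch_roll(q):
    """Intrinsic z-y'-x'' (yaw, pitch, roll) angles from a unit quaternion [qi, qj, qk, qr]."""
    qi, qj, qk, qr = q
    roll  = np.arctan2(2*(qr*qi + qj*qk), 1 - 2*(qi**2 + qj**2))
    pitch = np.arcsin(np.clip(2*(qr*qj - qk*qi), -1.0, 1.0))
    yaw   = np.arctan2(2*(qr*qk + qi*qj), 1 - 2*(qj**2 + qk**2))
    return yaw, pitch, roll
</syntaxhighlight>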
Euler axis–angle ↔ quaternion.
Given the Euler axis ê and angle θ, the quaternion
formula_49
can be computed by
formula_52
Given the rotation quaternion q, define
formula_53
Then the Euler axis ê and angle θ can be computed by
formula_54
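In code both directions are short; the sketch below (illustrative, scalar-last ordering, not from the article) also handles the degenerate case θ = 0, for which the axis is arbitrary.
<syntaxhighlight lang="python">
import numpy as np

def axis_angle_to_quat(e, theta):
    """Unit quaternion [qi, qj, qk, qr] from Euler axis e and angle theta."""
    e = np.asarray(e, dtype=float)
    e = e / np.linalg.norm(e)
    return np.concatenate([e * np.sin(theta / 2), [np.cos(theta / 2)]])

def quat_to_axis_angle(q):
    """Euler axis and angle from a unit quaternion; the axis is arbitrary when theta = 0."""
    vec, qr = q[:3], q[3]
    theta = 2 * np.arccos(np.clip(qr, -1.0, 1.0))
    norm = np.linalg.norm(vec)
    e = vec / norm if norm > 1e-12 else np.array([1.0, 0.0, 0.0])
    return e, theta
</syntaxhighlight>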
Rotation matrix ↔ Rodrigues vector.
Rodrigues vector → Rotation matrix.
Since the definition of the Rodrigues vector can be related to rotation quaternions:
formula_55
By making use of the following property
formula_56
the formula can be obtained by factoring the square of qr out of the final expression obtained for quaternions:
formula_57
Leading to the final formula:
formula_58
Conversion formulae for derivatives.
Rotation matrix ↔ angular velocities.
The angular velocity vector
formula_59
can be extracted from the time derivative of the rotation matrix by the following relation:
formula_60
The derivation is adapted from Ioffe as follows:
For any vector r0, consider r("t") = A("t")r0 and differentiate it:
formula_61
The derivative of a vector is the linear velocity of its tip. Since A is a rotation matrix, by definition the length of r("t") is always equal to the length of r0, and hence it does not change with time. Thus, when r("t") rotates, its tip moves along a circle, and the linear velocity of its tip is tangential to the circle; i.e., always perpendicular to r("t"). In this specific case, the relationship between the linear velocity vector and the angular velocity vector is
formula_62
(see circular motion and cross product).
By the transitivity of the abovementioned equations,
formula_63
which implies
formula_64
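This relation can be verified numerically (an illustrative sketch, not from the article): build A(t) for a constant angular velocity, finite-difference it, and read ω back out of the skew-symmetric product (dA/dt) Aᵀ.
<syntaxhighlight lang="python">
import numpy as np

def rotvec_to_matrix(r):
    """Rotation matrix for the rotation vector r (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    e = r / theta
    K = np.array([[0, -e[2], e[1]], [e[2], 0, -e[0]], [-e[1], e[0], 0]])
    return np.eye(3) * np.cos(theta) + (1 - np.cos(theta)) * np.outer(e, e) + np.sin(theta) * K

omega = np.array([0.3, -0.2, 0.5])       # constant angular velocity (rad/s)
t, dt = 1.0, 1e-6
A = rotvec_to_matrix(omega * t)
A_next = rotvec_to_matrix(omega * (t + dt))
dA_dt = (A_next - A) / dt

W = dA_dt @ A.T                           # should be the skew-symmetric matrix [omega]x
omega_recovered = np.array([W[2, 1], W[0, 2], W[1, 0]])
print(np.allclose(omega_recovered, omega, atol=1e-4))   # True
</syntaxhighlight>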
Quaternion ↔ angular velocities.
The angular velocity vector
formula_59
can be obtained from the derivative of the quaternion as follows:
formula_65
where q̃ is the conjugate (inverse) of q.
Conversely, the derivative of the quaternion is
formula_66
Rotors in a geometric algebra.
The formalism of geometric algebra (GA) provides an extension and interpretation of the quaternion method. Central to GA is the geometric product of vectors, an extension of the traditional inner and cross products, given by
formula_67
where the symbol ∧ denotes the exterior product or wedge product. This product of vectors a, and b produces two terms: a scalar part from the inner product and a bivector part from the wedge product. This bivector describes the plane perpendicular to what the cross product of the vectors would return.
Bivectors in GA have some unusual properties compared to vectors. Under the geometric product, bivectors have a negative square: the bivector x̂ŷ describes the xy-plane. Its square is (x̂ŷ)² = x̂ŷx̂ŷ. Because the unit basis vectors are orthogonal to each other, the geometric product reduces to the antisymmetric outer product, so x̂ and ŷ can be swapped freely at the cost of a factor of −1. The square reduces to −x̂x̂ŷŷ = −1 since the basis vectors themselves square to +1.
This result holds generally for all bivectors, and as a result the bivector plays a role similar to the imaginary unit. Geometric algebra uses bivectors in its analogue to the quaternion, the "rotor", given by
formula_68
where B̂ is a unit bivector that describes the plane of rotation. Because B̂ squares to −1, the power series expansion of R generates the trigonometric functions. The rotation formula that maps a vector a to a rotated vector b is then
formula_69
where
formula_70
is the "reverse" of formula_71 (reversing the order of the vectors in formula_72 is equivalent to changing its sign).
Example. A rotation about the axis
formula_73
can be accomplished by converting v̂ to its dual bivector,
formula_74
where i = x̂ŷẑ is the unit volume element, the only trivector (pseudoscalar) in three-dimensional space. The result is
formula_75
In three-dimensional space, however, it is often simpler to leave the expression for B̂ = iv̂, using the fact that i commutes with all objects in 3D and also squares to −1. A rotation of the x̂ vector in this plane by an angle θ is then
formula_76
Recognizing that
formula_77
and that −v̂x̂v̂ is the reflection of x̂ about the plane perpendicular to v̂ gives a geometric interpretation to the rotation operation: the rotation preserves the components that are parallel to v̂ and changes only those that are perpendicular. The terms are then computed:
formula_78
The result of the rotation is then
formula_79
A simple check on this result is the angle "θ" = π. Such a rotation should map x̂ to ŷ. Indeed, the rotation reduces to
formula_80
exactly as expected. This rotation formula is valid not only for vectors but for any multivector. In addition, when Euler angles are used, the complexity of the operation is much reduced. Compounded rotations come from multiplying the rotors, so the total rotor from Euler angles is
formula_81
but
formula_82
These rotors come back out of the exponentials like so:
formula_83
where R"β" refers to rotation in the original coordinates. Similarly for the γ rotation,
formula_84
Noting that R"γ" and R"α" commute (rotations in the same plane must commute), the total rotor becomes
formula_85
Thus, the compounded rotations of Euler angles become a series of equivalent rotations in the original fixed frame.
While rotors in geometric algebra work almost identically to quaternions in three dimensions, the power of this formalism is its generality: this method is appropriate and valid in spaces with any number of dimensions. In 3D, rotations have three degrees of freedom, a degree for each linearly independent plane (bivector) the rotation can take place in. It has been known that pairs of quaternions can be used to generate rotations in 4D, yielding six degrees of freedom, and the geometric algebra approach verifies this result: in 4D, there are six linearly independent bivectors that can be used as the generators of rotations.
Angle-angle-angle.
Rotations can be modeled as an axis and an angle, as illustrated by a gyroscope: the axis passes through the rotor, and the amount of spin about that axis is shown by the rotation of the rotor. Such a rotation can be expressed as angle ∗ (axis), where the axis is a unit vector specifying the direction of the rotor axis. Any point along that direction from the origin represents the same rotation axis, with the magnitude of the angle given by the distance from the origin; measured from any other point in space, the same direction vector applies the same change of orientation about the same axis, but relative to the orientation represented by the starting point rather than by the origin. Scaling the axis by the angle gives each rotation a unique coordinate in angle-angle-angle notation, and the difference between two such coordinates immediately yields the single axis and angle of rotation between the two orientations.
The natural logarithm of a quaternion represents a rotation by three angles about three axes of rotation, expressed in arc length; this is similar to Euler angles, but order-independent. The Lie product formula gives a definition of the addition of rotations as the limit of a sum of infinitesimal steps of each rotation applied in series; this implies that the rotations are all applied in the same instant, rather than as a series of rotations applied subsequently.
The axes of rotation are aligned with the standard Cartesian "x", "y", "z" axes. These rotations may be simply added and subtracted, especially when the frames being rotated are fixed to each other, as in IK chains. Differences between two objects in the same reference frame are found by simply subtracting their orientations. Rotations applied from external sources, or from sources relative to the current rotation, still require multiplications; the application of the Rodrigues rotation formula is described below.
The rotation from each axis coordinate represents rotating the plane perpendicular to the specified axis simultaneously with all other axes. Although the measures can be considered as angles, the representation is actually the arc length of the curve; an angle implies a rotation around a point, whereas a curvature is a delta applied to the current point in an inertial direction.
As an observational note: log quaternions have rings, or octaves, of rotations; that is, rotations greater than 4π have related curves. Curvatures approaching this boundary appear to jump orbits chaotically.
For 'human readable' angles the 1-norm can be used to rescale the angles to look more 'appropriate':
formula_86
Other related values are immediately derivable:
formula_87
The total angle of rotation:
formula_88
The axis of rotation:
formula_89
Quaternion representation.
formula_90
Basis matrix computation.
This was built from rotating the vectors (1,0,0), (0,1,0), (0,0,1), and reducing constants.
Given an input Q = ["X", "Y", "Z"],
formula_91
Which are used to compute the resulting matrix
formula_92
Alternate basis calculation.
Alternatively this can be used. Given A = ["X", "Y", "Z"], convert to angle-axis form with "θ" = ‖"A"‖ and the unit axis ["x", "y", "z"] = "A" / ‖"A"‖.
Compute some partial expressions:formula_93
Compute the resulting matrix: formula_94
Expanded: formula_95
Vector rotation.
Rotate the vector v = ("X", "Y", "Z") around the rotation vector Q = ("X", "Y", "Z"). The angle of rotation will be "θ" = ‖Q‖.
Calculate the cosine of the angle times the vector to rotate, plus the sine of the angle times the cross product of the normalized rotation axis with the vector, plus one minus the cosine of the angle, times the dot product of the vector and the rotation axis, times the axis of rotation.
formula_96
Some notes: the dot product includes the cosine of the angle between the vector being rotated and the axis of rotation, times the length of v; the magnitude of the cross product likewise includes the sine of the angle between the vector being rotated and the axis of rotation.
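The sentence above transcribes directly into a short numpy function (an illustrative sketch, not from the article; Q is a rotation vector whose length is the rotation angle "θ", and the zero-rotation case returns the vector unchanged).
<syntaxhighlight lang="python">
import numpy as np

def rotate_vector(v, Q):
    """Rotate v around the rotation vector Q; the rotation angle is the length of Q."""
    theta = np.linalg.norm(Q)
    if theta < 1e-12:
        return np.array(v, dtype=float)
    axis = Q / theta
    return (np.cos(theta) * v
            + np.sin(theta) * np.cross(axis, v)
            + (1 - np.cos(theta)) * np.dot(axis, v) * axis)

print(rotate_vector(np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, np.pi / 2])))  # ~ [0, 1, 0]
</syntaxhighlight>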
Rotate a rotation vector.
Using Rodrigues' composite rotation formula, for a given rotation vector Q = ("X", "Y", "Z"), and another rotation vector A = ("X"′, "Y"′, "Z"′) to rotate the frame around.
From the initial rotation vectors, extract the angles and axes:
formula_97
Normalized axis of rotation for the current frame:
formula_98
Normalized axis of rotation to rotate the frame around:
formula_99
The resulting angle of the rotation is
formula_100
or
formula_101
The resultant, unnormalized axis of rotation:
formula_102
or
formula_103
The Rodrigues rotation formula would suggest that the sine of the above resulting angle can be used to normalize the axis; however, this fails for large ranges, so normalize the result axis as any other vector.
formula_104
And the final frame rotation coordinate:
formula_105
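An equivalent way to carry out this composition is to convert both rotation vectors to unit quaternions, multiply them, and convert back, which sidesteps the normalization caveat above; the numpy sketch below is illustrative, uses scalar-last quaternions, and does not follow the article's exact sequence of partial expressions.
<syntaxhighlight lang="python">
import numpy as np

def rotvec_to_quat(r):
    """Unit quaternion [qi, qj, qk, qr] for the rotation vector r."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.array([0.0, 0.0, 0.0, 1.0])
    axis = r / theta
    return np.concatenate([axis * np.sin(theta / 2), [np.cos(theta / 2)]])

def quat_to_rotvec(q):
    """Rotation vector (axis times angle) from a unit quaternion."""
    vec, w = q[:3], q[3]
    s = np.linalg.norm(vec)
    if s < 1e-12:
        return np.zeros(3)
    return vec / s * 2 * np.arctan2(s, w)

def quat_multiply(b, a):
    """Hamilton product b * a (scalar-last): rotation a followed by rotation b."""
    vb, wb = b[:3], b[3]
    va, wa = a[:3], a[3]
    return np.concatenate([wb * va + wa * vb + np.cross(vb, va), [wb * wa - np.dot(vb, va)]])

def rotate_rotation_vector(Q, A):
    """Compose: the frame orientation Q, then rotated by A, returned as a rotation vector."""
    return quat_to_rotvec(quat_multiply(rotvec_to_quat(A), rotvec_to_quat(Q)))
</syntaxhighlight>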
Spin rotation around a fixed axis.
A rotation vector Q represents three axes; these may be used as a shorthand to rotate the rotation around, using the above method for rotating a rotation vector. These expressions are best represented as code fragments.
Set up some constants used in other expressions.
formula_106
using the above values:
formula_107
or
formula_108
or
formula_109
Conversion from basis matrix.
Compute the determinant of the matrix:
formula_110
Convert to the angle of rotation:
formula_111
Compute the normal factor:
formula_112
the resulting angle-angle-angle is n ⋅ "θ".
Conversion from normal vector ("Y").
Representation of a normal as a rotation: this assumes that the "Y" axis vector (0,1,0) is pointing up. If some other axis is considered primary, the coordinates can simply be swapped.
This assumes a normalized input vector in the direction of the normal
formula_113
The angle is simply the sum of the x- and z-coordinates (or y and x if Z is up, or y and z if X is up):
formula_114
If the angle is 0, the job is done; the result is (0,0,0).
formula_115
Some temporary values; these values are just partials referenced later:
formula_116
Use the projected normal on the Y axis as the angle to rotate:
formula_117
Align normal using basis.
The default tangent and bitangent of rotations which only have their normal set result in tangents and bitangents that are irregular. Alternatively, build a basis matrix, and convert from the basis using the above-mentioned method.
Compute the normal of the above, and the matrix to convert
formula_118
formula_119
and then use the basis-to-log-quaternion conversion described above.
Align normal directly.
Alternatively, this is the direct computation resulting in a log quaternion; compute the above result vector and then...
formula_120
This is the angle
formula_121
These partial products are used below:
formula_122
Compute the normalized rotation vector (axis of rotation):
formula_123
and finally compute the resulting log quaternion.
formula_124
Conversion from axis-angle.
This assumes the input axis a = ["X", "Y", "Z"] is normalized. If there is zero rotation, the result is (0,0,0).
formula_125
References.
<templatestyles src="Reflist/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{A} = \\begin{bmatrix}\n \\hat{\\mathbf{u}}_x & \\hat{\\mathbf{v}}_x & \\hat{\\mathbf{w}}_x \\\\\n \\hat{\\mathbf{u}}_y & \\hat{\\mathbf{v}}_y & \\hat{\\mathbf{w}}_y \\\\\n \\hat{\\mathbf{u}}_z & \\hat{\\mathbf{v}}_z & \\hat{\\mathbf{w}}_z \\\\\n\\end{bmatrix}"
},
{
"math_id": 1,
"text": "\\left\\{1, e^{\\pm i\\theta} \\right\\} = \\{1,\\ \\cos\\theta+i\\sin\\theta,\\ \\cos\\theta-i\\sin\\theta\\}"
},
{
"math_id": 2,
"text": "\\begin{align}\n |\\hat{\\mathbf u}| = |\\hat{\\mathbf v}| = |\\hat{\\mathbf w}| &= 1\\\\\n \\hat{\\mathbf u} \\cdot \\hat{\\mathbf v} &= 0\\\\\n \\hat{\\mathbf u} \\times \\hat{\\mathbf v} &= \\hat{\\mathbf w} \\,,\n\\end{align}"
},
{
"math_id": 3,
"text": "\\mathbf{A}_\\text{total} = \\mathbf{A}_2\\mathbf{A}_1"
},
{
"math_id": 4,
"text": "\\hat{\\mathbf{e}} = \\begin{bmatrix} e_x \\\\ e_y \\\\ e_z \\end{bmatrix}"
},
{
"math_id": 5,
"text": "\\mathbf{r} = \\theta \\hat{\\mathbf{e}}\\,."
},
{
"math_id": 6,
"text": "\\hat{\\mathbf{q}} =q_i\\mathbf{i}+q_j\\mathbf{j}+q_k\\mathbf{k}+q_r = \\begin{bmatrix} q_i \\\\ q_j \\\\ q_k \\\\ q_r \\end{bmatrix}"
},
{
"math_id": 7,
"text": "\\begin{align}\n q_i &= e_x\\sin\\frac{\\theta}{2}\\\\\n q_j &= e_y\\sin\\frac{\\theta}{2}\\\\\n q_k &= e_z\\sin\\frac{\\theta}{2}\\\\\n q_r &= \\cos\\frac{\\theta}{2}\n\\end{align}"
},
{
"math_id": 8,
"text": "q_i^2 + q_j^2 + q_k^2 + q_r^2 = 1"
},
{
"math_id": 9,
"text": "a + b i + c j + d k \\qquad \\text{with } a, b, c, d \\in \\R"
},
{
"math_id": 10,
"text": "\n\\begin{array}{ccccccc}\n i^2 &=& j^2 &=& k^2 &=& -1\\\\\n ij &=& -ji &=& k&&\\\\\n jk &=& -kj &=& i&&\\\\\n ki &=& -ik &=& j&&\n\\end{array}\n"
},
{
"math_id": 11,
"text": "\n\\tilde{\\mathbf{q}}\\otimes\\mathbf{q} =\n\\begin{bmatrix}\n \\;\\;\\, q_r & \\;\\;\\, q_k & -q_j & \\;\\;\\, q_i\\\\\n -q_k & \\;\\;\\, q_r & \\;\\;\\, q_i & \\;\\;\\, q_j\\\\\n \\;\\;\\, q_j & -q_i & \\;\\;\\, q_r & \\;\\;\\, q_k\\\\\n -q_i & -q_j & -q_k & \\;\\;\\, q_r\n\\end{bmatrix}\n\\begin{bmatrix}\n \\tilde{q}_i\\\\\n \\tilde{q}_j\\\\\n \\tilde{q}_k\\\\\n \\tilde{q}_r\n\\end{bmatrix} =\n\\begin{bmatrix}\n \\;\\;\\, \\tilde{q}_r & -\\tilde{q}_k & \\;\\;\\, \\tilde{q}_j & \\;\\;\\, \\tilde{q}_i\\\\\n \\;\\;\\, \\tilde{q}_k & \\;\\;\\, \\tilde{q}_r & -\\tilde{q}_i & \\;\\;\\, \\tilde{q}_j\\\\\n -\\tilde{q}_j & \\;\\;\\, \\tilde{q}_i & \\;\\;\\, \\tilde{q}_r & \\;\\;\\, \\tilde{q}_k\\\\\n -\\tilde{q}_i & -\\tilde{q}_j & -\\tilde{q}_k & \\;\\;\\, \\tilde{q}_r\n\\end{bmatrix}\n\\begin{bmatrix}\n q_i\\\\\n q_j\\\\\n q_k\\\\\n q_r\n\\end{bmatrix}\n"
},
{
"math_id": 12,
"text": "\\mathbf{A}_3 = \\mathbf{A}_2\\mathbf{A}_1,"
},
{
"math_id": 13,
"text": "\\mathbf{q}_3 = \\mathbf{q}_2 \\otimes \\mathbf{q}_1"
},
{
"math_id": 14,
"text": "\\mathbf{g} = \\hat{\\mathbf{e}}\\tan\\frac{\\theta}{2}"
},
{
"math_id": 15,
"text": "(\\mathbf{g},\\mathbf{f}) = \\frac{\\mathbf{g}+\\mathbf{f}-\\mathbf{f}\\times\\mathbf{g}}{1-\\mathbf{g}\\cdot\\mathbf{f}} \\,."
},
{
"math_id": 16,
"text": " S = \\cos\\frac{\\phi}{2} + \\sin\\frac{\\phi}{2} \\mathbf{S}. "
},
{
"math_id": 17,
"text": "A=\\cos\\frac{\\alpha}{2}+ \\sin\\frac{\\alpha}{2}\\mathbf{A}\\quad\n\\text{and} \\quad B=\\cos\\frac{\\beta}{2}+ \\sin\\frac{\\beta}{2}\\mathbf{B},"
},
{
"math_id": 18,
"text": " C = \\cos\\frac{\\gamma}{2}+\\sin\\frac{\\gamma}{2}\\mathbf{C}\n=\n\\left(\\cos\\frac{\\beta}{2}+\\sin\\frac{\\beta}{2}\\mathbf{B}\\right) \\left(\\cos\\frac{\\alpha}{2} + \\sin\\frac{\\alpha}{2}\\mathbf{A}\\right).\n"
},
{
"math_id": 19,
"text": "\n\\cos\\frac{\\gamma}{2}+\\sin\\frac{\\gamma}{2} \\mathbf{C} =\n\\left(\\cos\\frac{\\beta}{2}\\cos\\frac{\\alpha}{2} - \n\\sin\\frac{\\beta}{2}\\sin\\frac{\\alpha}{2} \\mathbf{B}\\cdot \\mathbf{A}\\right) + \\left(\\sin\\frac{\\beta}{2} \\cos\\frac{\\alpha}{2} \\mathbf{B} + \n\\sin\\frac{\\alpha}{2} \\cos\\frac{\\beta}{2} \\mathbf{A} + \n\\sin\\frac{\\beta}{2} \\sin\\frac{\\alpha}{2} \\mathbf{B}\\times \\mathbf{A}\\right).\n"
},
{
"math_id": 20,
"text": " \\cos\\frac{\\gamma}{2} = \\cos\\frac{\\beta}{2}\\cos\\frac{\\alpha}{2} - \n\\sin\\frac{\\beta}{2}\\sin\\frac{\\alpha}{2} \\mathbf{B}\\cdot \\mathbf{A},"
},
{
"math_id": 21,
"text": " \\tan\\frac{\\gamma}{2} \\mathbf{C} = \\frac{\\tan\\frac{\\beta}{2}\\mathbf{B} + \n\\tan\\frac{\\alpha}{2} \\mathbf{A} + \n\\tan\\frac{\\beta}{2}\\tan\\frac{\\alpha}{2} \\mathbf{B}\\times \\mathbf{A}}{1 - \n\\tan\\frac{\\beta}{2}\\tan\\frac{\\alpha}{2} \\mathbf{B}\\cdot \\mathbf{A}}.\n"
},
{
"math_id": 22,
"text": "\\mathbf{p} = \\hat{\\mathbf{e}}\\tan\\frac{\\theta}{4}\\,."
},
{
"math_id": 23,
"text": "p_{x,y,z} = \\frac{q_{i,j,k}}{1 + q_r}\\,."
},
{
"math_id": 24,
"text": "p^s_{x,y,z} = \\frac{-q_{i,j,k}}{1-q_r} =\\frac{-p_{x,y,z}}{\\mathbf{p}^2}\\,."
},
{
"math_id": 25,
"text": "\\mathbf{p}' = p_\\parallel \\mathbf{n} + \\cos{\\eta} \\, \\mathbf{p}_\\perp + \\sin{\\eta} \\, \\mathbf{p} \\wedge \\mathbf{n}"
},
{
"math_id": 26,
"text": "p_\\parallel = \\mathbf{p} \\cdot \\mathbf{n}"
},
{
"math_id": 27,
"text": "\\mathbf{p}_\\perp = \\mathbf{p} - (\\mathbf{p} \\cdot \\mathbf{n}) \\mathbf{n}"
},
{
"math_id": 28,
"text": "\\mathbf{p} \\wedge \\mathbf{n}"
},
{
"math_id": 29,
"text": " Z"
},
{
"math_id": 30,
"text": "\n\\begin{align} \n \\phi &= \\operatorname{atan2}\\left(A_{31}, A_{32}\\right)\\\\\n \\theta &= \\arccos\\left(A_{33}\\right)\\\\\n \\psi &= -\\operatorname{atan2}\\left(A_{13}, A_{23}\\right)\n\\end{align}\n"
},
{
"math_id": 31,
"text": "\\mathbf{A} = \\mathbf{A}_3\\mathbf{A}_2\\mathbf{A}_1 = \\mathbf{A}_Z\\mathbf{A}_Y\\mathbf{A}_X"
},
{
"math_id": 32,
"text": "\\begin{align}\n \\mathbf{A}_X &= \\begin{bmatrix} 1 & 0 & 0\\\\ 0 & \\cos\\phi & -\\sin\\phi\\\\ 0 & \\sin\\phi & \\cos\\phi \\end{bmatrix} \\\\[5px]\n \\mathbf{A}_Y &= \\begin{bmatrix} \\cos\\theta & 0 & \\sin\\theta\\\\ 0 & 1 & 0\\\\ -\\sin\\theta & 0 & \\cos\\theta \\end{bmatrix} \\\\[5px]\n \\mathbf{A}_Z &= \\begin{bmatrix} \\cos\\psi & -\\sin\\psi & 0\\\\ \\sin\\psi & \\cos\\psi & 0\\\\ 0 & 0 & 1 \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 33,
"text": "\\mathbf{A} = \\begin{bmatrix}\n \\cos\\theta \\cos\\psi & -\\cos\\phi \\sin\\psi + \\sin\\phi \\sin\\theta \\cos\\psi & \\sin\\phi \\sin\\psi + \\cos\\phi \\sin\\theta \\cos\\psi \\\\\n \\cos\\theta\\sin\\psi & \\cos\\phi \\cos\\psi + \\sin\\phi \\sin\\theta \\sin\\psi & - \\sin\\phi \\cos\\psi + \\cos\\phi \\sin\\theta \\sin\\psi \\\\\n -\\sin\\theta & \\sin\\phi \\cos\\theta & \\cos\\phi \\cos\\theta \\\\\n \\end{bmatrix}"
},
{
"math_id": 34,
"text": "\\mathbf{A}^\\mathsf{T} = (\\mathbf{A}_Z\\mathbf{A}_Y\\mathbf{A}_X)^\\mathsf{T} = \\mathbf{A}_X^\\mathsf{T}\\mathbf{A}_Y^\\mathsf{T}\\mathbf{A}_Z^\\mathsf{T}\\,."
},
{
"math_id": 35,
"text": "\\begin{align}\n \\theta &= \\arccos\\frac{A_{11}+A_{22}+A_{33}-1}{2}\\\\\n e_1 &= \\frac{A_{32}-A_{23}}{2\\sin\\theta}\\\\\n e_2 &= \\frac{A_{13}-A_{31}}{2\\sin\\theta}\\\\\n e_3 &= \\frac{A_{21}-A_{12}}{2\\sin\\theta}\n\\end{align}"
},
{
"math_id": 36,
"text": "\\mathbf{A} = \\mathbf{I}_3\\cos\\theta + (1-\\cos\\theta)\\hat{\\mathbf{e}}\\hat{\\mathbf{e}}^\\mathsf{T} + \\left[\\hat{\\mathbf{e}}\\right]_{\\times} \\sin\\theta"
},
{
"math_id": 37,
"text": "\\left[\\hat{\\mathbf{e}}\\right]_{\\times} = \\begin{bmatrix} 0 & -e_3 & e_2\\\\ e_3 & 0 & -e_1\\\\ -e_2 & e_1 & 0 \\end{bmatrix} "
},
{
"math_id": 38,
"text": "\\begin{align}\nA_{11} &= (1-\\cos\\theta) e_1^2 + \\cos\\theta \\\\\nA_{12} &= (1-\\cos\\theta) e_1 e_2 - e_3 \\sin\\theta \\\\\nA_{13} &= (1-\\cos\\theta) e_1 e_3 + e_2 \\sin\\theta \\\\\nA_{21} &= (1-\\cos\\theta) e_2 e_1 + e_3 \\sin\\theta \\\\\nA_{22} &= (1-\\cos\\theta) e_2^2 + \\cos\\theta \\\\\nA_{23} &= (1-\\cos\\theta) e_2 e_3 - e_1 \\sin\\theta \\\\\nA_{31} &= (1-\\cos\\theta) e_3 e_1 - e_2 \\sin\\theta \\\\\nA_{32} &= (1-\\cos\\theta) e_3 e_2 + e_1 \\sin\\theta \\\\\nA_{33} &= (1-\\cos\\theta) e_3^2 + \\cos\\theta\n\\end{align}"
},
{
"math_id": 39,
"text": "\\mathbf{q} = \\begin{bmatrix} q_i \\\\ q_j \\\\ q_k \\\\ q_r \\end{bmatrix} = q_i\\mathbf{i}+q_j\\mathbf{j}+q_k\\mathbf{k}+q_r"
},
{
"math_id": 40,
"text": "\\begin{align} \n q_r &= \\frac{1}{2}\\sqrt{1+A_{11}+A_{22}+A_{33}}\\\\\n q_i &= \\frac{1}{4q_r}\\left(A_{32}- A_{23}\\right)\\\\\n q_j &= \\frac{1}{4q_r}\\left(A_{13}- A_{31}\\right)\\\\\n q_k &= \\frac{1}{4q_r}\\left(A_{21}- A_{12}\\right)\n\\end{align}"
},
{
"math_id": 41,
"text": "\\begin{align} \n q_i &= \\frac{1}{2}\\sqrt{1 + A_{11} - A_{22} - A_{33}}\\\\\n q_j &= \\frac{1}{4q_i}\\left(A_{12} + A_{21}\\right)\\\\\n q_k &= \\frac{1}{4q_i}\\left(A_{13} + A_{31}\\right)\\\\\n q_r &= \\frac{1}{4q_i}\\left(A_{32} - A_{23}\\right)\n\\end{align}"
},
{
"math_id": 42,
"text": "\\mathbf{A} = \\left(q_r^2 - \\check{\\mathbf{q}}^\\mathsf{T}\\check{\\mathbf{q}}\\right)\\mathbf{I}_3 + 2\\check{\\mathbf{q}}\\check{\\mathbf{q}}^\\mathsf{T} + 2q_r\\mathbf{\\mathcal{Q}}"
},
{
"math_id": 43,
"text": "\\check{\\mathbf{q}} = \\begin{bmatrix} q_i\\\\q_j\\\\q_k\\end{bmatrix} \\,, \\quad \\mathbf{\\mathcal{Q}} = \\begin{bmatrix} 0 & -q_k & q_j\\\\ q_k & 0 & -q_i\\\\ -q_j & q_i & 0 \\end{bmatrix}"
},
{
"math_id": 44,
"text": "\\mathbf{A} = \\begin{bmatrix}\n 1 - 2q_j^2 - 2q_k^2 & 2\\left(q_iq_j - q_kq_r\\right) & 2\\left(q_iq_k + q_jq_r\\right)\\\\\n 2\\left(q_iq_j + q_kq_r\\right) & 1 - 2q_i^2- 2 q_k^2 & 2\\left(q_jq_k - q_iq_r\\right)\\\\\n 2\\left(q_iq_k - q_jq_r\\right) & 2\\left(q_jq_k + q_iq_r\\right) & 1 - 2q_i^2 - 2q_j^2\n\\end{bmatrix}"
},
{
"math_id": 45,
"text": "\\mathbf{A} = \\begin{bmatrix}\n -1 + 2q_i^2 + 2q_r^2 & 2\\left(q_iq_j - q_kq_r\\right) & 2\\left(q_iq_k + q_jq_r\\right)\\\\\n 2\\left(q_iq_j + q_kq_r\\right) & -1 + 2q_j^2 + 2q_r^2 & 2\\left(q_jq_k - q_iq_r\\right)\\\\\n 2\\left(q_iq_k - q_jq_r\\right) & 2\\left(q_jq_k + q_iq_r\\right) & -1 + 2q_k^2 + 2q_r^2\n\\end{bmatrix}"
},
{
"math_id": 46,
"text": "\\mathbf{A}"
},
{
"math_id": 47,
"text": "\\begin{align}\n q_i &= \\cos\\frac{\\phi - \\psi}{2}\\sin\\frac{\\theta}{2}\\\\\n q_j &= \\sin\\frac{\\phi - \\psi}{2}\\sin\\frac{\\theta}{2}\\\\\n q_k &= \\sin\\frac{\\phi + \\psi}{2}\\cos\\frac{\\theta}{2}\\\\\n q_r &= \\cos\\frac{\\phi + \\psi}{2}\\cos\\frac{\\theta}{2}\n\\end{align}"
},
{
"math_id": 48,
"text": "\n\\begin{align}\n q_i &= \\sin \\frac{\\phi}{2} \\cos \\frac{\\theta}{2} \\cos \\frac{\\psi}{2} - \\cos \\frac{\\phi}{2} \\sin \\frac{\\theta}{2} \\sin \\frac{\\psi}{2}\\\\\n q_j &= \\cos \\frac{\\phi}{2} \\sin \\frac{\\theta}{2} \\cos \\frac{\\psi}{2} + \\sin \\frac{\\phi}{2} \\cos \\frac{\\theta}{2} \\sin \\frac{\\psi}{2}\\\\\n q_k &= \\cos \\frac{\\phi}{2} \\cos \\frac{\\theta}{2} \\sin \\frac{\\psi}{2} - \\sin \\frac{\\phi}{2} \\sin \\frac{\\theta}{2} \\cos \\frac{\\psi}{2}\\\\\n q_r &= \\cos \\frac{\\phi}{2} \\cos \\frac{\\theta}{2} \\cos \\frac{\\psi}{2} + \\sin \\frac{\\phi}{2} \\sin \\frac{\\theta}{2} \\sin \\frac{\\psi}{2}\n\\end{align}\n"
},
{
"math_id": 49,
"text": "\\mathbf{q} = \\begin{bmatrix} q_i \\\\ q_j \\\\ q_k \\\\ q_r \\end{bmatrix} = q_i\\mathbf{i}+q_j\\mathbf{j}+q_k\\mathbf{k}+q_r \\,,"
},
{
"math_id": 50,
"text": "\n\\begin{align}\n \\phi &= \\operatorname{atan2}\\left(\\left(q_iq_k + q_jq_r\\right), -\\left(q_jq_k - q_iq_r\\right)\\right)\\\\\n \\theta &= \\arccos\\left(-q_i^2 - q_j^2 + q_k^2+q_r^2\\right)\\\\\n \\psi &= \\operatorname{atan2}\\left(\\left(q_iq_k - q_jq_r\\right), \\left(q_jq_k + q_iq_r\\right)\\right)\n\\end{align}\n"
},
{
"math_id": 51,
"text": "\\begin{align}\n\\text{roll} &= \\operatorname{atan2} \\left(2\\left(q_r q_i + q_j q_k\\right),1 - 2\\left(q_i^2 + q_j^2\\right)\\right) \\\\\n\\text{pitch} &= \\arcsin \\left(2\\left(q_r q_j - q_k q_i\\right)\\right) \\\\\n\\text{yaw} &= \\operatorname{atan2} \\left(2\\left(q_r q_k + q_i q_j\\right),1 - 2\\left(q_j^2 + q_k^2\\right)\\right)\n\\end{align} "
},
{
"math_id": 52,
"text": "\\begin{align}\n q_i &= \\hat{e}_1\\sin\\frac{\\theta}{2} \\\\\n q_j &= \\hat{e}_2\\sin\\frac{\\theta}{2} \\\\\n q_k &= \\hat{e}_3\\sin\\frac{\\theta}{2} \\\\\n q_r &= \\cos\\frac{\\theta}{2}\n\\end{align}"
},
{
"math_id": 53,
"text": "\\check{\\mathbf{q}} = \\begin{bmatrix} q_i \\\\ q_j \\\\ q_k \\end{bmatrix}\\,."
},
{
"math_id": 54,
"text": "\\begin{align}\n \\hat{\\mathbf{e}} &= \\frac{\\check{\\mathbf{q}}}{\\left\\|\\check{\\mathbf{q}}\\right\\|} \\\\\n \\theta &= 2\\arccos q_r\n\\end{align}"
},
{
"math_id": 55,
"text": "\n\\begin{cases}\n g_i = \\dfrac{q_i}{q_r} = e_x \\tan\\left(\\dfrac{\\theta}{2}\\right) \\\\\n g_j = \\dfrac{q_j}{q_r} = e_y \\tan\\left(\\dfrac{\\theta}{2}\\right)\\\\\n g_k = \\dfrac{q_k}{q_r} = e_z \\tan\\left(\\dfrac{\\theta}{2}\\right)\n\\end{cases}"
},
{
"math_id": 56,
"text": "\n1 = q_r^2 + q_i^2 + q_j^2 + q_k^2 = q_r^2 \\left(1 + \\frac{q_i^2}{q_r^2} + \\frac{q_j^2}{q_r^2} + \\frac{q_k^2}{q_r^2}\\right) = q_r^2 \\left(1 + g_i^2 + g_j^2 + g_k^2\\right)"
},
{
"math_id": 57,
"text": "\\mathbf{A} = q_r^2 \\begin{bmatrix}\n \\frac{1}{q_r^2} - 2\\frac{q_j^2}{q_r^2} - 2\\frac{q_k^2}{q_r^2} & 2\\left(\\frac{q_i}{q_r}\\frac{q_j}{q_r} - \\frac{q_k}{q_r}\\right) & 2\\left(\\frac{q_i}{q_r}\\frac{q_k}{q_r} + \\frac{q_j}{q_r}\\right)\\\\\n 2\\left(\\frac{q_i}{q_r}\\frac{q_j}{q_r} + \\frac{q_k}{q_r}\\right) & \\frac{1}{q_r^2} - 2\\frac{q_i^2}{q_r^2} - 2 \\frac{q_k^2}{q_r^2} & 2\\left(\\frac{q_j}{q_r}\\frac{q_k}{q_r} - \\frac{q_i}{q_r}\\right)\\\\\n 2\\left(\\frac{q_i}{q_r}\\frac{q_k}{q_r} - \\frac{q_j}{q_r}\\right) & 2\\left(\\frac{q_j}{q_r}\\frac{q_k}{q_r} + \\frac{q_i}{q_r}\\right) & \\frac{1}{q_r^2} - 2\\frac{q_i^2}{q_r^2} - 2\\frac{q_j^2}{q_r^2}\n\\end{bmatrix}"
},
{
"math_id": 58,
"text": "\\mathbf{A} = \\frac{1}{1+g_i^2+g_j^2+g_k^2} \\begin{bmatrix}\n 1 + g_i^2 - g_j^2 - g_k^2 & 2\\left(g_i g_j - g_k\\right) & 2\\left(g_i g_k + g_j\\right)\\\\\n 2\\left(g_i g_j + g_k\\right) &1 - g_i^2 + g_j^2 - g_k^2 & 2\\left(g_j g_k - g_i\\right)\\\\\n 2\\left(g_i g_k - g_j\\right) & 2\\left(g_j g_k + g_i\\right) &1 - g_i^2 - g_j^2 + g_k^2\n\\end{bmatrix}"
},
{
"math_id": 59,
"text": "\\boldsymbol{\\omega} = \\begin{bmatrix} \\omega_x \\\\ \\omega_y \\\\ \\omega_z \\end{bmatrix}"
},
{
"math_id": 60,
"text": "[\\boldsymbol{\\omega}]_\\times = \\begin{bmatrix} 0 & -\\omega_z & \\omega_y \\\\ \\omega_z & 0 & -\\omega_x \\\\ -\\omega_y & \\omega_x & 0 \\end{bmatrix} = \\frac{\\mathrm{d}\\mathbf{A}}{\\mathrm{d}t}\\mathbf{A}^\\mathsf{T}"
},
{
"math_id": 61,
"text": "\\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t} = \\frac{\\mathrm{d}\\mathbf{A}}{\\mathrm{d}t} \\mathbf{r}_0 = \\frac{\\mathrm{d}\\mathbf{A}}{\\mathrm{d}t} \\mathbf{A}^\\mathsf{T}(t) \\mathrm{r}(t)"
},
{
"math_id": 62,
"text": "\\frac{\\mathrm{d}\\mathbf{r}}{\\mathrm{d}t} = \\boldsymbol{\\omega}(t)\\times \\mathbf{r}(t) = [\\boldsymbol{\\omega}]_\\times \\mathbf{r}(t)"
},
{
"math_id": 63,
"text": "\\frac{\\mathrm{d}\\mathbf{A}}{\\mathrm{d}t} \\mathbf{A}^\\mathsf{T}(t) \\mathbf{r}(t) = [\\boldsymbol{\\omega}]_\\times \\mathbf{r}(t)"
},
{
"math_id": 64,
"text": "\\frac{\\mathrm{d}\\mathbf{A}}{\\mathrm{d}t} \\mathbf{A}^\\mathsf{T}(t) = [\\boldsymbol{\\omega}]_\\times"
},
{
"math_id": 65,
"text": " \\begin{bmatrix}\n 0 \\\\\n \\omega_x \\\\\n \\omega_y \\\\\n \\omega_z \n\\end{bmatrix} = 2 \\frac{\\mathrm{d}\\mathbf{q}}{\\mathrm{d}t}\\tilde{\\mathbf{q}}\n"
},
{
"math_id": 66,
"text": " \\frac{\\mathrm{d}\\mathbf{q}}{\\mathrm{d}t} = \\frac{1}{2}\\begin{bmatrix}\n 0 \\\\\n \\omega_x \\\\\n \\omega_y \\\\\n \\omega_z\n\\end{bmatrix}\\mathbf{q} \\,.\n"
},
{
"math_id": 67,
"text": "\\mathbf{ab} = \\mathbf{a} \\cdot \\mathbf{b} + \\mathbf{a} \\wedge \\mathbf{b}"
},
{
"math_id": 68,
"text": "\\mathbf{R} = \\exp\\left(\\frac{-\\hat\\mathbf{B}\\theta}{2}\\right) = \\cos \\frac{\\theta}{2} - \\hat\\mathbf{B} \\sin \\frac{\\theta}{2}\\,,"
},
{
"math_id": 69,
"text": "\\mathbf{b} = \\mathbf{R a R}^\\dagger"
},
{
"math_id": 70,
"text": "\\mathbf{R}^\\dagger = \\exp\\left(\\frac{1}{2}\\hat\\mathbf{B} \\theta\\right) = \\cos \\frac{\\theta}{2} + \\hat\\mathbf{B} \\sin \\frac{\\theta}{2}"
},
{
"math_id": 71,
"text": "\\scriptstyle R"
},
{
"math_id": 72,
"text": " B"
},
{
"math_id": 73,
"text": "\\hat \\mathbf{v} = \\frac{1}{\\sqrt 3}\\left(\\hat \\mathbf{x} + \\hat \\mathbf{y} + \\hat \\mathbf{z}\\right)"
},
{
"math_id": 74,
"text": "\\hat \\mathbf{B} = \\hat \\mathbf{x} \\hat \\mathbf{y} \\hat \\mathbf{z} \\hat \\mathbf{v} = \\mathbf{i} \\hat \\mathbf{v} \\,,"
},
{
"math_id": 75,
"text": "\\hat \\mathbf{B} = \\frac{1}{\\sqrt 3}\\left(\\hat \\mathbf{y} \\hat \\mathbf{z} + \\hat \\mathbf{z} \\hat \\mathbf{x} + \\hat \\mathbf{x} \\hat \\mathbf{y}\\right) \\,."
},
{
"math_id": 76,
"text": "\\hat \\mathbf{x}' = \\mathbf{R} \\hat \\mathbf{x} \\mathbf{R}^\\dagger = e^{-i\\hat \\mathbf{v} \\frac{\\theta}{2}} \\hat \\mathbf{x} e^{i \\hat \\mathbf{v} \\frac{\\theta}{2}} = \\hat \\mathbf{x} \\cos^2 \\frac{\\theta}{2} + \\mathbf{i} \\left(\\hat \\mathbf{x} \\hat \\mathbf{v} - \\hat \\mathbf{v} \\hat \\mathbf{x}\\right) \\cos \\frac{\\theta}{2} \\sin \\frac{\\theta}{2} + \\hat \\mathbf{v} \\hat \\mathbf{x} \\hat \\mathbf{v} \\sin^2 \\frac{\\theta}{2}"
},
{
"math_id": 77,
"text": "\\mathbf{i} (\\hat \\mathbf{x} \\hat \\mathbf{v} - \\hat \\mathbf{v} \\hat \\mathbf{x}) = 2\\mathbf{i} (\\hat \\mathbf{x} \\wedge \\hat \\mathbf{v})"
},
{
"math_id": 78,
"text": "\\begin{align}\n \\hat \\mathbf{v} \\hat \\mathbf{x} \\hat \\mathbf{v} &= \\frac{1}{3} \\left(-\\hat \\mathbf{x} + 2 \\hat \\mathbf{y} + 2 \\hat \\mathbf{z}\\right) \\\\\n 2\\mathbf{i} \\hat \\mathbf{x} \\wedge \\hat \\mathbf{v} &= 2\\mathbf{i} \\frac{1}{\\sqrt 3} \\left(\\hat \\mathbf{x} \\hat \\mathbf{y} + \\hat \\mathbf{x} \\hat \\mathbf{z}\\right) = \\frac{2}{\\sqrt 3} \\left(\\hat \\mathbf{y} - \\hat \\mathbf{z}\\right)\n\\end{align}"
},
{
"math_id": 79,
"text": "\\hat \\mathbf{x}' = \\hat \\mathbf{x} \\left(\\cos^2 \\frac{\\theta}{2} - \\frac{1}{3} \\sin^2 \\frac{\\theta}{2}\\right) + \\frac{2}{3} \\hat \\mathbf{y} \\sin \\frac{\\theta}{2} \\left(\\sin \\frac{\\theta}{2} + \\sqrt{3} \\cos \\frac{\\theta}{2}\\right) + \\frac{2}{3} \\hat \\mathbf{z} \\sin \\frac{\\theta}{2} \\left(\\sin \\frac{\\theta}{2} - \\sqrt{3} \\cos \\frac{\\theta}{2}\\right) "
},
{
"math_id": 80,
"text": "\\begin{align}\n \\hat \\mathbf{x}' &= \\hat \\mathbf{x}\\left(\\frac{1}{4} - \\frac{1}{3} \\frac{3}{4}\\right) + \\frac{2}{3} \\hat \\mathbf{y} \\frac{\\sqrt 3}{2} \\left(\\frac{\\sqrt 3}{2} + \\sqrt{3}\\frac{1}{2}\\right) + \\frac{2}{3} \\hat \\mathbf{z} \\frac{\\sqrt 3}{2} \\left(\\frac{\\sqrt 3}{2} - \\sqrt{3}\\frac{1}{2}\\right) \\\\\n &= 0 \\hat \\mathbf{x} + \\hat \\mathbf{y} + 0 \\hat \\mathbf{z} = \\hat \\mathbf{y}\n\\end{align}"
},
{
"math_id": 81,
"text": "\\mathbf{R} = \\mathbf{R}_{\\gamma'} \\mathbf{R}_{\\beta'} \\mathbf{R}_\\alpha = \\exp\\left(\\frac{-\\mathbf{i} \\hat \\mathbf{z}' \\gamma}{2}\\right) \\exp\\left(\\frac{-\\mathbf{i} \\hat \\mathbf{x}' \\beta}{2}\\right) \\exp\\left(\\frac{-\\mathbf{i} \\hat \\mathbf{z} \\alpha}{2}\\right)"
},
{
"math_id": 82,
"text": "\\begin{align}\n\\hat \\mathbf{x}' &= \\mathbf{R}_\\alpha \\hat \\mathbf{x} \\mathbf{R}_\\alpha^\\dagger \\quad \\text{and} \\\\\n\\hat \\mathbf{z}' &= \\mathbf{R}_{\\beta'} \\hat \\mathbf{z} \\mathbf{R}_{\\beta'}^\\dagger \\,.\n\\end{align}"
},
{
"math_id": 83,
"text": "\\mathbf{R}_{\\beta'} = \\cos \\frac{\\beta}{2} - \\mathbf{i} \\mathbf{R}_\\alpha \\hat \\mathbf{x} \\mathbf{R}_\\alpha^\\dagger \\sin \\frac{\\beta}{2} = \\mathbf{R}_\\alpha \\mathbf{R}_\\beta \\mathbf{R}_\\alpha^\\dagger"
},
{
"math_id": 84,
"text": "\\mathbf{R}_{\\gamma'} = \\mathbf{R}_{\\beta'} \\mathbf{R}_\\gamma \\mathbf{R}_{\\beta'}^\\dagger = \\mathbf{R}_\\alpha \\mathbf{R}_\\beta \\mathbf{R}_\\alpha^\\dagger \\mathbf{R}_\\gamma \\mathbf{R}_\\alpha \\mathbf{R}_\\beta^\\dagger \\mathbf{R}_\\alpha^\\dagger \\,."
},
{
"math_id": 85,
"text": "\\mathbf{R} = \\mathbf{R}_\\alpha \\mathbf{R}_\\beta \\mathbf{R}_\\gamma"
},
{
"math_id": 86,
"text": "\\mathbf{Q} = \\begin{bmatrix} X \\\\ Y \\\\ Z \\end{bmatrix}"
},
{
"math_id": 87,
"text": "\\begin{align}\n\\|\\mathbf{V}\\|\\text{ or }\\|\\mathbf{V}\\|_2 &= \\sqrt{XX+YY+ZZ}\\\\[6pt]\n\\|\\mathbf{V}\\|_1 &= |X|+|Y|+|Z|\n\\end{align}"
},
{
"math_id": 88,
"text": " \\theta = \\|\\mathbf{V}\\| "
},
{
"math_id": 89,
"text": "\\text{Axis}(\\ln \\mathbf{Q}) = \\begin{bmatrix}\n \\frac{X} \\theta \\\\\n \\frac{Y} \\theta \\\\\n \\frac{Z} \\theta\n\\end{bmatrix}"
},
{
"math_id": 90,
"text": "\\mathbf{q} = \\begin{bmatrix}\n\\cos \\frac \\theta {2} \\\\\n\\sin \\frac \\theta {2} {\\frac {X} {\\|\\mathbf{Q}\\|}}\\\\\n\\sin \\frac\\theta{2} {\\frac {Y} {\\|\\mathbf{Q}\\|}}\\\\\n\\sin\\frac\\theta{2} {\\frac {Z} {\\|\\mathbf{Q}\\|}}\n\\end{bmatrix}"
},
{
"math_id": 91,
"text": "\\begin{matrix}\n q_r = \\cos\\theta \\\\\n q_i = \\sin\\theta \\cdot \\frac{X}{\\|\\mathbf{Q}\\|}\\\\\n q_j = \\sin\\theta \\cdot \\frac{Y}{\\|\\mathbf{Q}\\|}\\\\\n q_k = \\sin\\theta \\cdot \\frac{Z}{\\|\\mathbf{Q}\\|}\n\\end{matrix}"
},
{
"math_id": 92,
"text": "\\begin{bmatrix}\n1 - 2 q_j^2 - 2 q_k^2 & 2(q_i q_j - q_k q_r) & 2(q_i q_k + q_j q_r)\\\\\n2(q_i q_j + q_k q_r) & 1 - 2 q_i^2 - 2q_k^2 & 2(q_j q_k - q_i q_r)\\\\\n2(q_i q_k - q_j q_r) & 2(q_j q_k + q_i q_r) & 1 - 2 q_i^2 - 2 q_j^2\n\\end{bmatrix}"
},
{
"math_id": 93,
"text": "\\begin{matrix}\n x_y = xy(1-\\cos \\theta) & w_x = x\\sin \\theta & x_x = xx(1-\\cos \\theta) \\\\\n y_z = yz(1-\\cos \\theta) & w_y = y\\sin \\theta & y_y = yy(1-\\cos \\theta) \\\\\n x_z = xz(1-\\cos \\theta) & w_z = z\\sin \\theta & z_z = zz(1-\\cos \\theta)\n\\end{matrix}"
},
{
"math_id": 94,
"text": "\\begin{bmatrix}\n\\cos \\theta+x_x & x_y + w_z & w_y + x_z \\\\\n w_z + x_y & \\cos \\theta+y_y & y_z - w_x \\\\\n x_z - w_y & w_x + y_z & \\cos \\theta+z_z\n\\end{bmatrix} "
},
{
"math_id": 95,
"text": " \\begin{bmatrix}\n\\cos \\theta+x^2 (1-\\cos \\theta) & xy(1-\\cos \\theta) - z\\sin \\theta & y\\sin \\theta + xz(1-\\cos \\theta) \\\\\n z\\sin \\theta + xy(1-\\cos \\theta) & \\cos \\theta+y^2 (1-\\cos \\theta) & yz(1-\\cos \\theta) - x\\sin \\theta \\\\\nxz(1-\\cos \\theta) - y\\sin \\theta & x\\sin \\theta + yz(1-\\cos \\theta) & \\cos \\theta+z^{2}(1-\\cos \\theta)\n\\end{bmatrix} "
},
{
"math_id": 96,
"text": " \\mathbf{v}' = \\cos(\\theta) \\mathbf{v} + \\sin(\\theta) \\left( \\frac \\mathbf{Q} {\\|\\mathbf{Q}\\|} \\times \\mathbf{v} \\right) + (1-\\cos(\\theta) ) \\left( \\frac \\mathbf{Q} {\\|\\mathbf{Q}\\|} \\cdot \\mathbf{v} \\right) \\frac \\mathbf{Q} {\\|\\mathbf{Q}\\|}\n"
},
{
"math_id": 97,
"text": "\\begin{align} \\theta &= \\frac {\\|\\mathbf{Q}\\|} {2} \\\\[6pt] \\gamma &= \\frac {\\|\\mathbf{A}\\|} {2} \\end{align}"
},
{
"math_id": 98,
"text": "\\hat\\mathbf{Q} = \\frac {\\mathbf{Q}} {\\|\\mathbf{Q}\\|}"
},
{
"math_id": 99,
"text": "\\hat\\mathbf{A} = \\frac {\\mathbf{A}} {\\|\\mathbf{A}\\|}"
},
{
"math_id": 100,
"text": " \\alpha = 2 \\arccos \\left( \\cos(\\theta) \\cos(\\gamma) + \\sin(\\theta) \\sin(\\gamma) \\hat\\mathbf{Q} \\cdot \\hat\\mathbf{A} \\right)"
},
{
"math_id": 101,
"text": " \\alpha = 2 \\arccos \\left( {\\cos( \\theta - \\gamma )} ( 1 - \\hat\\mathbf{Q} \\cdot \\hat\\mathbf{A} ) + {\\cos ( \\theta + \\gamma) } (1 + \\hat\\mathbf{Q} \\cdot \\hat\\mathbf{A}) \\right)\n"
},
{
"math_id": 102,
"text": " \\mathbf{r} = \\sin\\gamma \\cos\\theta \\hat\\mathbf{A} + \\sin\\theta \\cos\\gamma \\hat\\mathbf{Q} + \\sin\\theta \\sin\\gamma \\hat\\mathbf{A} \\times \\hat\\mathbf{Q}"
},
{
"math_id": 103,
"text": " r = \\left( \\hat\\mathbf{A} \\times \\hat\\mathbf{Q} \\right) \\bigl( {\\cos ({\\theta} - \\gamma)}-{ \\cos ({ \\theta} + \\gamma)} \\bigr) + \\hat\\mathbf{A} \\bigl({\\sin (\\theta + \\gamma)}+{\\sin ( \\theta - \\gamma)}\\bigr) + \\hat\\mathbf{Q} \\bigl({\\sin (\\theta + \\gamma)}-{\\sin ({ \\theta} - \\gamma)}\\bigr) "
},
{
"math_id": 104,
"text": " \\hat\\mathbf{R} = \\frac \\mathbf{r} {\\|\\mathbf{r}\\|} "
},
{
"math_id": 105,
"text": " \\mathbf{R} = \\alpha \\hat\\mathbf{R} "
},
{
"math_id": 106,
"text": "\\begin{align}\nn_x &= \\frac{Q_x}{\\|\\mathbf{Q}\\|} \\\\\nn_y &= \\frac{Q_y}{\\|\\mathbf{Q}\\|} \\\\\nn_z &= \\frac{Q_z}{\\|\\mathbf{Q}\\|} \\\\\n\\text{angle} &= \\|\\mathbf{Q}\\| \\\\\ns &= \\sin(\\text{angle}) \\\\\nc_1 &= \\cos(\\text{angle}) \\\\\nc &= 1 - c_1\n\\end{align}"
},
{
"math_id": 107,
"text": "\\text{x-axis} = \\left[x = c n_x^2 + c_1, \\; y = c n_x n_y + s n_z, \\; z = c n_x n_z - s n_y\\right]"
},
{
"math_id": 108,
"text": "\\text{y-axis} = \\left[x = c n_y n_x - s n_z, \\; y = c n_y^2 + c_1, \\; z = c n_y n_z + s n_x\\right]"
},
{
"math_id": 109,
"text": "\\text{z-axis} = \\left[x = c n_z n_x + s n_y, \\; y = c n_z n_y - s n_x, \\; z = c n_z^2 + c_1\\right]"
},
{
"math_id": 110,
"text": "d = \\frac{ \\left( \\text{basis}_{\\text{right}_X} + \\text{basis}_{\\text{up}_Y} + \\text{basis}_{\\text{forward}_Z} \\right) - 1 }{2}"
},
{
"math_id": 111,
"text": "\\begin{align}\n\\theta &= 2 \\arccos d \\\\[6pt]\nyz &= \\text{basis}_{\\text{up}_Z} - \\text{basis}_{\\text{forward}_Y} \\\\[6pt]\nxz &= \\text{basis}_{\\text{forward}_X} - \\text{basis}_{\\text{right}_Z} \\\\[6pt]\nxy &= \\text{basis}_{\\text{right}_Y} - \\text{basis}_{\\text{up}_X}\n\\end{align}"
},
{
"math_id": 112,
"text": "\\begin{align}\n\\text{normal} &= \\frac 1 \\sqrt{yz ^2 + xz^2 + xy^2 } \\\\[6pt]\n\\mathbf{n} &= \\begin{bmatrix}\nyz \\cdot \\text{normal}\\\\\nxz \\cdot \\text{normal}\\\\\nxy \\cdot \\text{normal}\n\\end{bmatrix} \\end{align}"
},
{
"math_id": 113,
"text": "\\mathbf{N} = \\begin{bmatrix}\n\\text{normal}_X \\\\\n\\text{normal}_Y \\\\\n\\text{normal}_Z\n\\end{bmatrix}"
},
{
"math_id": 114,
"text": " \\text{angle} = |N_x| + |N_z|"
},
{
"math_id": 115,
"text": " r = \\frac{1}{\\text{angle}}"
},
{
"math_id": 116,
"text": " \\mathbf{t} = \\begin{bmatrix}\n N_x \\cdot r\\\\\n N_y\\\\\n N_z \\cdot r\n\\end{bmatrix}"
},
{
"math_id": 117,
"text": "\\begin{align}\n\\text{target}_\\text{angle} &= \\arccos t_Y \\\\[6pt]\n\\text{result} &= \\begin{bmatrix}\n t_Z \\cdot \\text{target}_\\text{angle}\\\\\n0\\\\\n -t_X \\cdot \\text{target}_\\text{angle}\n\\end{bmatrix}\\end{align}"
},
{
"math_id": 118,
"text": "\\text{normal}_\\text{twist} = {\\sqrt { t_Z^2+t_X^2 }}"
},
{
"math_id": 119,
"text": "\\begin{bmatrix}\n\\left(N_y \\cdot \\frac {-t_X }{ \\text{normal}_\\text{twist} }\\right)&N_x&\\frac {t_Z}{\\text{normal}_\\text{twist}} \\\\\n \\left(N_z \\cdot \\frac {t_Z}{\\text{normal}_\\text{twist}}\\right)-\\left(N_x \\cdot \\frac {-t_X }{ \\text{normal}_\\text{twist} } \\right)&N_y&0\\\\\n \\left(-N_y \\cdot \\frac {t_Z}{\\text{normal}_\\text{twist}} \\right)&N_z&\\frac {-t_X }{ \\text{normal}_\\text{twist} }\n\\end{bmatrix}"
},
{
"math_id": 120,
"text": " \\begin{align}\n t_{X_n} &= t_X\\cdot \\text{normal}_\\text{twist} \\\\[4pt]\n t_{Z_n} &= t_Z\\cdot \\text{normal}_\\text{twist} \\\\[4pt]\ns &= \\sin( \\text{target}_\\text{angle} ) \\\\[4pt]\nc &= 1- \\cos( \\text{target}_\\text{angle} )\n \\end{align}"
},
{
"math_id": 121,
"text": "\\text{angle} = \\arccos\\left( \\frac{\\left( t_Y + 1 \\right) \\left( 1 - t_{X_n} \\right) }{ 2 } - 1 \\right);"
},
{
"math_id": 122,
"text": "\\begin{align}\n yz &= s \\cdot n_X \\\\[4pt]\nxz &= \\left( 2 - c \\cdot \\left(n_X^2 + n_Z^2\\right) \\right) \\cdot t_{Z_n}\\\\[4pt]\nxy &= s \\cdot n_X \\cdot t_{Z_n} + s \\cdot n_Z \\cdot \\left(1-t_{X_n}\\right)\n\\end{align}"
},
{
"math_id": 123,
"text": "n = \\begin{bmatrix}\n \\frac{yz}{\\sqrt{yz^2 + xz^2 + xy^2}}\\\\\n\\frac{xz}{\\sqrt{yz^2 + xz^2 + xy^2}}\\\\\n\\frac{xy}{\\sqrt{yz^2 + xz^2 + xy^2}}\n\\end{bmatrix}\n"
},
{
"math_id": 124,
"text": " \\text{final}_\\text{result} = \\text{angle} \\cdot {n}"
},
{
"math_id": 125,
"text": "\\theta = \\text{angle} \\quad ; \\quad \\text{result} = \\theta * \\mathbf{a}"
}
]
| https://en.wikipedia.org/wiki?curid=7290730 |
72910457 | Highway dimension | The highway dimension is a graph parameter modelling transportation networks, such as road networks or public transportation networks. It was first formally defined by Abraham et al. based on the observation by Bast et al. that any road network has a sparse set of "transit nodes", such that driving from a point A to a sufficiently far away point B along the shortest route will always pass through one of these transit nodes. It has also been proposed that the highway dimension captures the properties of public transportation networks well (at least according to definitions 1 and 2 below), given that longer routes using busses, trains, or airplanes will typically be serviced by larger transit hubs (stations and airports). This relates to the spoke–hub distribution paradigm in transport topology optimization.
Definitions.
Several definitions of the highway dimension exist. Each definition of the highway dimension uses a hitting set of a certain set of shortest paths: given a graph formula_0 with edge lengths formula_1, let formula_2 contain every vertex set formula_3 such that formula_4 induces a shortest path between some vertex pair of formula_5, according to the edge lengths formula_6. To measure the highway dimension we determine the "sparseness" of a hitting set of a subset of formula_2 in a local area of the graph, for which we define a ball of radius formula_7 around a vertex formula_8 to be the set formula_9 of vertices at distance at most formula_10 from formula_11 in formula_5 according to the edge lengths formula_6. In the context of low highway dimension graphs, the vertices of a hitting set for the shortest paths are called hubs.
Definition 1.
The original definition of the highway dimension measures the sparseness of a hub set formula_12 of shortest paths "contained" within a ball of radius formula_13: The highway dimension of formula_5 is the smallest integer formula_14 such that for any radius formula_7 and any node formula_8 there is a hitting set formula_15 of size at most formula_14 for all shortest paths formula_16 of length more than formula_10 for which formula_17. A variant of this definition uses balls of radius formula_18 for some constant formula_19. Choosing a constant greater than 4 implies additional structural properties of graphs of bounded highway dimension, which can be exploited algorithmically.
Definition 2.
A subsequent definition of the highway dimension measures the sparseness of a hub set formula_12 of shortest paths "intersecting" a ball of radius formula_20: The highway dimension of formula_5 is the smallest integer formula_21 such that for any radius formula_7 and any node formula_8 there is a hitting set formula_22 of size at most formula_21 for all shortest paths formula_16 of length more than formula_10 and at most formula_23 for which formula_24. This definition is weaker than the first, i.e., every graph of highway dimension formula_14 also has highway dimension formula_21, but not vice versa.
Definition 3.
For the third definition of the highway dimension we introduce the notion of a "witness path": for a given radius formula_10, a shortest path formula_16 has an formula_10-witness path formula_25 if formula_26 has length more than formula_10 and formula_26 can be obtained from formula_4 by adding at most one vertex to either end of formula_4 (i.e., formula_26 has at most 2 vertices more than formula_4 and these additional vertices are incident to formula_4). Note that formula_4 may be shorter than formula_10 but is contained in formula_26, which has length more than formula_10. The highway dimension of formula_5 is the smallest integer formula_27 such that for any radius formula_7 and any node formula_8 there is a hitting set formula_22 of size at most formula_27 for all shortest paths formula_16 that have an formula_10-witness path formula_26 with formula_28. This definition is stronger than the above, i.e., every graph of highway dimension formula_14 also has highway dimension formula_29, but formula_14 cannot be bounded in terms of formula_27.
Shortest path cover.
A notion closely related to the highway dimension is that of a shortest path cover, where the order of the quantifiers in the definition is reversed, i.e., instead of a hub set for each ball, there is one hub set formula_12, which is sparse in every ball: Given a radius formula_7, an formula_30-shortest path cover of formula_5 is a hitting set formula_22 for all shortest paths in formula_2 of length more than formula_10 and at most formula_23. The formula_30-shortest path cover formula_12 is locally formula_31-sparse if for any node formula_8 the ball formula_32 contains at most formula_31 vertices of formula_12, i.e., formula_33. Every graph of bounded highway dimension formula_31 (according to any of the above definitions) also has a locally formula_31-sparse formula_30-shortest path cover for every formula_7, but not vice versa. For algorithmic purposes it is often more convenient to work with one hitting set for each radius formula_30, which makes shortest path covers an important tool for algorithms on graphs of bounded highway dimension.
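The following sketch illustrates the shortest-path-cover notion on a small example; the toy graph, the radius r and the greedy hitting-set heuristic are assumptions made for the demonstration and are not taken from the literature (a minimum hitting set is itself NP-hard to compute, so the greedy step only gives an upper bound on how sparse a cover can be).

```python
# Illustrative sketch only: a brute-force look at shortest path covers on a tiny weighted
# graph. The graph, the radius r and the greedy hitting-set heuristic are assumptions made
# for this demo; greedy only gives an upper bound on how sparse a cover can be.
import itertools

def all_pairs_shortest_paths(vertices, edges):
    """Floyd-Warshall with path reconstruction; edges is {(u, v): length}, undirected."""
    dist = {(u, v): (0.0 if u == v else float("inf")) for u in vertices for v in vertices}
    nxt = {}
    for (u, v), w in edges.items():
        dist[u, v] = dist[v, u] = w
        nxt[u, v], nxt[v, u] = v, u
    for k in vertices:
        for i in vertices:
            for j in vertices:
                if dist[i, k] + dist[k, j] < dist[i, j]:
                    dist[i, j] = dist[i, k] + dist[k, j]
                    nxt[i, j] = nxt[i, k]
    def path(u, v):
        p = [u]
        while u != v:
            u = nxt[u, v]
            p.append(u)
        return p
    return dist, path

vertices = list(range(6))
edges = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0, (3, 4): 1.0, (4, 5): 1.0, (0, 5): 1.5}
dist, path = all_pairs_shortest_paths(vertices, edges)

r = 1.2
# Shortest paths of length in (r, 2r], i.e. those an r-shortest path cover must hit.
to_hit = [set(path(u, v)) for u, v in itertools.combinations(vertices, 2)
          if r < dist[u, v] <= 2 * r]

cover = set()        # greedy hitting set: pick the vertex lying on the most uncovered paths
uncovered = list(to_hit)
while uncovered:
    best = max(vertices, key=lambda x: sum(x in p for p in uncovered))
    cover.add(best)
    uncovered = [p for p in uncovered if best not in p]

# Local sparseness: the largest number of hubs inside any ball of radius 2r.
sparseness = max(sum(dist[u, h] <= 2 * r for h in cover) for u in vertices)
print("hubs:", cover, "- locally", sparseness, "sparse for r =", r)
```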
Relation to other graph parameters.
The highway dimension combines structural and metric properties of graphs, and is thus incomparable to common structural and metric parameters. In particular, for any graph it is possible to choose edge lengths such that the highway dimension is formula_34, while at the same time some graphs with very simple structure such as trees can have arbitrarily large highway dimension. This implies that the highway dimension parameter is incomparable to structural graph parameters such as treewidth, cliquewidth, or minor-freeness. On the other hand, a star with unit edge lengths has highway dimension formula_34 (according to definitions 1 and 2 above) but unbounded doubling dimension, while a formula_35 grid graph with unit edge lengths has constant doubling dimension but highway dimension formula_36. This means that the highway dimension according to definitions 1 and 2 is also incomparable to the doubling dimension. Any graph of bounded highway dimension according to definition 3 above also has bounded doubling dimension.
Computing the highway dimension.
Computing the highway dimension of a given graph is NP-hard. Assuming that all shortest paths are unique (which can be done by slightly perturbing the edge lengths), an formula_37-approximation can be computed in polynomial time, given that the highway dimension of the graph is formula_31. It is not known whether computing the highway dimension is fixed-parameter tractable (FPT); however, there are hardness results indicating that this is likely not the case. In particular, these results imply that, under standard complexity assumptions, an FPT algorithm can neither compute the highway dimension bottom-up (from the smallest value formula_30 to the largest) nor top-down (from the largest value formula_30 to the smallest).
Algorithms exploiting the highway dimension.
Shortest path algorithms.
Some heuristics to compute shortest paths, such as the Reach, Contraction Hierarchies, Transit Nodes, and Hub Labelling algorithms, can be formally proven to run faster than other shortest path algorithms (e.g. Dijkstra's algorithm) on graphs of bounded highway dimension according to definition 3 above.
Approximations for NP-hard problems.
A crucial property that can be exploited algorithmically for graphs of bounded highway dimension is that vertices that are far from the hubs of a shortest path cover are clustered into so-called towns: Given a radius formula_7, an formula_30-shortest path cover formula_12 of formula_5, and a vertex formula_8 at distance more than formula_23 from formula_12, the set formula_38 of vertices at distance at most formula_30 from formula_39 according to the edge lengths formula_40 is called a town. The set of all vertices not lying in any town is called the sprawl. It can be shown that the diameter of every town is at most formula_30, while the distance between a town and any vertex outside it is more than formula_30. Furthermore, the distance from any vertex in the sprawl to some hub of formula_12 is at most formula_20.
Based on this structure, Feldmann et al. defined the towns decomposition, which recursively decomposes the sprawl into towns of exponentially growing values formula_30. For a graph of bounded highway dimension (according to definition 1 above) this decomposition can be used to find a metric embedding into a graph of bounded treewidth that preserves distances between vertices arbitrarily well. Due to this embedding it is possible to obtain quasi-polynomial time approximation schemes (QPTASs) for various problems such as Travelling Salesman (TSP), Steiner Tree, k-Median, and Facility Location.
For clustering problems such as k-Median, k-Means, and Facility Location, faster polynomial-time approximation schemes (PTASs) are known for graphs of bounded highway dimension according to definition 1 above. For network design problems such as TSP and Steiner Tree it is not known how to obtain a PTAS.
For the k-Center problem, it is not known whether a PTAS exists for graphs of bounded highway dimension; however, it is NP-hard to compute a (formula_41)-approximation on graphs of highway dimension formula_42, which implies that any (formula_41)-approximation algorithm needs at least doubly exponential time in the highway dimension, unless P=NP. On the other hand, it was shown that a parameterized formula_43-approximation algorithm with a runtime of formula_44 exists for k-Center, where formula_31 is the highway dimension according to "any" of the above definitions. When using definition 1 above, a parameterized approximation scheme (PAS) is known to exist with formula_45 and formula_31 as parameters.
For the Capacitated k-Center problem there is no PAS parameterized by formula_45 and the highway dimension formula_31, unless FPT=W[1]. This is notable, since typically (i.e., for all the problems mentioned above), if there is an approximation scheme for metrics of low doubling dimension, then there is also one for graphs of bounded highway dimension. But for Capacitated k-Center there is a PAS parameterized by formula_45 and the doubling dimension.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "G=(V,E)"
},
{
"math_id": 1,
"text": "\\ell:E\\to\\mathbb{R}^+"
},
{
"math_id": 2,
"text": "\\mathcal{P}"
},
{
"math_id": 3,
"text": "P\\subseteq V"
},
{
"math_id": 4,
"text": "P"
},
{
"math_id": 5,
"text": "G"
},
{
"math_id": 6,
"text": "\\ell"
},
{
"math_id": 7,
"text": "r>0\n"
},
{
"math_id": 8,
"text": "u\\in V"
},
{
"math_id": 9,
"text": "B_r(u)\\subseteq V"
},
{
"math_id": 10,
"text": "r"
},
{
"math_id": 11,
"text": "u"
},
{
"math_id": 12,
"text": "H"
},
{
"math_id": 13,
"text": "4r\n"
},
{
"math_id": 14,
"text": "h_1"
},
{
"math_id": 15,
"text": "H\\subseteq B_{4r}(u)"
},
{
"math_id": 16,
"text": "P\\in\\mathcal{P}"
},
{
"math_id": 17,
"text": "P\\subseteq B_{4r}(u)"
},
{
"math_id": 18,
"text": "cr"
},
{
"math_id": 19,
"text": "c>4"
},
{
"math_id": 20,
"text": "2r\n"
},
{
"math_id": 21,
"text": "h_2"
},
{
"math_id": 22,
"text": "H\\subseteq V"
},
{
"math_id": 23,
"text": "2r"
},
{
"math_id": 24,
"text": "P\\cap B_{2r}(u)\\neq\\emptyset"
},
{
"math_id": 25,
"text": "Q\\in\\mathcal{P}"
},
{
"math_id": 26,
"text": "Q"
},
{
"math_id": 27,
"text": "h_3"
},
{
"math_id": 28,
"text": "Q\\cap B_{2r}(u)\\neq\\emptyset"
},
{
"math_id": 29,
"text": "h_3=O(h_1^2)"
},
{
"math_id": 30,
"text": "r\n"
},
{
"math_id": 31,
"text": "h"
},
{
"math_id": 32,
"text": "B_{2r}(u)"
},
{
"math_id": 33,
"text": "|B_{2r}(u)\\cap H|\\leq h"
},
{
"math_id": 34,
"text": "1\n"
},
{
"math_id": 35,
"text": "k\\times k\n"
},
{
"math_id": 36,
"text": "\\Omega(k)\n"
},
{
"math_id": 37,
"text": "O(\\log h)"
},
{
"math_id": 38,
"text": "T\\subseteq V"
},
{
"math_id": 39,
"text": "u\n"
},
{
"math_id": 40,
"text": "\\ell\n"
},
{
"math_id": 41,
"text": "2-\\varepsilon\n"
},
{
"math_id": 42,
"text": "O(\\log^2{n})\n"
},
{
"math_id": 43,
"text": "3/2"
},
{
"math_id": 44,
"text": "2^{O(kh \\log{h} )}n^{O(1)}"
},
{
"math_id": 45,
"text": "k"
}
]
| https://en.wikipedia.org/wiki?curid=72910457 |
72910789 | Absolutely maximally entangled state | The absolutely maximally entangled (AME) state is a concept in quantum information science, which has many applications in quantum error-correcting code, discrete AdS/CFT correspondence, AdS/CMT correspondence, and more. It is the multipartite generalization of the bipartite maximally entangled state.
Definition.
The bipartite maximally entangled state formula_0 is the one for which the reduced density operators are maximally mixed, i.e., formula_1. Typical examples are Bell states.
A multipartite state formula_2 of a system formula_3 is called absolutely maximally entangled if for any bipartition formula_4 of formula_3, the reduced density operator is maximally mixed formula_1, where formula_5.
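As a concrete illustration of this definition, the following sketch numerically checks that the three-qubit GHZ state is an AME state for local dimension 2; the helper function and the chosen state are demonstration code only, not taken from the cited literature. For every bipartition of three qubits the smaller side is a single qubit, and its reduced density matrix should equal I/2.

```python
# Minimal numerical check (demo code, not from the cited literature): the three-qubit GHZ
# state (|000> + |111>)/sqrt(2) is absolutely maximally entangled, i.e. an AME(3,2) state.
# For every bipartition of three qubits the smaller side is one qubit, whose reduced
# density matrix must equal I/2.
import itertools
import numpy as np

def reduced_density_matrix(psi, keep, dims):
    """Trace out every subsystem not listed in `keep` from the pure state `psi`."""
    psi = psi.reshape(dims)
    traced = [i for i in range(len(dims)) if i not in keep]
    rho = np.tensordot(psi, psi.conj(), axes=(traced, traced))
    d = int(np.prod([dims[i] for i in keep]))
    return rho.reshape(d, d)

ghz = np.zeros(8, dtype=complex)
ghz[0] = ghz[7] = 1 / np.sqrt(2)          # amplitudes of |000> and |111>

dims = (2, 2, 2)
for keep in itertools.combinations(range(3), 1):   # smaller side of each bipartition
    rho = reduced_density_matrix(ghz, keep, dims)
    d = rho.shape[0]
    assert np.allclose(rho, np.eye(d) / d), f"marginal on {keep} is not maximally mixed"
print("GHZ_3 passes the AME(3,2) test: every single-qubit marginal equals I/2")
```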
Property.
The AME state does not always exist: for some combinations of local dimension and number of parties, no AME state exists. There is a list of AME states in low dimensions created by Huber and Wyderka.
The existence of the AME state can be transformed into the existence of the solution for a specific quantum marginal problem.
AME states can also be used to build a kind of quantum error-correcting code called a holographic error-correcting code.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "|\\psi\\rangle_{AB}"
},
{
"math_id": 1,
"text": "\\rho_A=\\rho_B=I/d"
},
{
"math_id": 2,
"text": "|\\psi\\rangle "
},
{
"math_id": 3,
"text": "S"
},
{
"math_id": 4,
"text": "A|B"
},
{
"math_id": 5,
"text": "d=\\min\\{d_A,d_B\\}"
}
]
| https://en.wikipedia.org/wiki?curid=72910789 |
7291099 | Independent goods | Goods that are neither complements nor substitutes
Independent goods are goods that have a zero cross elasticity of demand. Changes in the price of one good will have no effect on the demand for an independent good. Thus independent goods are neither complements nor substitutes.
For example, a person's demand for nails is usually independent of his or her demand for bread, since they are two unrelated types of goods. Note that this concept is subjective and depends on the consumer's personal utility function.
A Cobb-Douglas utility function implies that goods are independent. For goods in quantities "X"1 and "X"2, prices "p"1 and "p"2, income "m", and utility function parameter "a", the utility function
formula_0
when optimized subject to the budget constraint that expenditure on the two goods cannot exceed income, gives rise to the demand function for good 1: formula_1, which does not depend on "p"2.
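The following rough numerical check illustrates this independence; the parameter values and the brute-force grid search are arbitrary choices for the demonstration and simply stand in for the analytic optimization.

```python
# Rough numerical illustration; the parameter values a, m, p1 and the candidate prices p2
# are arbitrary demo choices. Maximise X1^a * X2^(1-a) on the budget line by brute force
# and compare the optimal X1 with the closed-form demand a*m/p1.
import numpy as np

a, m, p1 = 0.3, 100.0, 2.0

for p2 in (1.0, 5.0, 20.0):                   # vary the price of the other good
    x1 = np.linspace(1e-6, m / p1, 200_001)   # spend the whole budget (utility is increasing)
    x2 = (m - p1 * x1) / p2
    utility = x1**a * x2**(1 - a)
    best_x1 = x1[np.argmax(utility)]
    print(f"p2 = {p2:5.1f}:  optimal X1 = {best_x1:.3f}   (a*m/p1 = {a * m / p1:.3f})")
```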
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " u(X_1, X_2) = X_1^a X_2^{(1-a)},"
},
{
"math_id": 1,
"text": "X_1= am/p_1,"
}
]
| https://en.wikipedia.org/wiki?curid=7291099 |
7291166 | Strange B meson | The strange B meson is a meson composed of a bottom antiquark and a strange quark. Its antiparticle is composed of a bottom quark and a strange antiquark.
B–B oscillations.
Strange B mesons are noted for their ability to oscillate between matter and antimatter via a box-diagram with Δ"m"s = 17.77 ± 0.10 (stat) ± 0.07 (syst) ps−1, as measured by the CDF experiment at Fermilab.
That is, a meson composed of a bottom quark and strange antiquark, the strange meson, can spontaneously change into a bottom antiquark and strange quark pair, the strange meson, and vice versa.
On 25 September 2006, Fermilab announced that they had claimed discovery of the previously only-theorized Bs meson oscillation. According to Fermilab's press release:
<templatestyles src="Template:Blockquote/styles.css" />This first major discovery of Run 2 continues the tradition of particle physics discoveries at Fermilab, where the bottom (1977) and top (1995) quarks were discovered. Surprisingly, the bizarre behavior of the B_s (pronounced "B sub s") mesons is actually predicted by the Standard Model of fundamental particles and forces. The discovery of this oscillatory behavior is thus another reinforcement of the Standard Model's durability...
CDF physicists have previously measured the rate of the matter-antimatter transitions for the B_s meson, which consists of the heavy bottom quark bound by the strong nuclear interaction to a strange antiquark. Now they have achieved the standard for a discovery in the field of particle physics, where the probability for a false observation must be proven to be less than about 5 in 10 million (5/10,000,000). For CDF's result the probability is even smaller, at 8 in 100 million (8/100,000,000).
Ronald Kotulak, writing for the Chicago Tribune, called the particle "bizarre" and stated that the meson "may open the door to a new era of physics" with its proven interactions with the "spooky realm of antimatter".
Better understanding of the meson is one of the main objectives of the LHCb experiment conducted at the Large Hadron Collider. On 24 April 2013, CERN physicists in the LHCb collaboration announced that they had observed CP violation in the decay of strange mesons for the first time. Scientists found the Bs meson decaying into two muons for the first time, with Large Hadron Collider experiments casting doubt on the scientific theory of supersymmetry.
CERN physicist Tara Shears described the CP violation observations as "verification of the validity of the Standard Model of physics".
Rare decays.
The rare decays of the Bs meson are an important test of the Standard Model. The branching fraction of the strange b-meson to a pair of muons is very precisely predicted with a value of Br(Bs→ μ+μ−)SM = (3.66 ± 0.23) × 10−9. Any variation from this rate would indicate possible physics beyond the Standard Model, such as supersymmetry. The first definitive measurement was made from a combination of LHCb and CMS experiment data:
formula_0
This result is compatible with the Standard Model and sets limits on possible extensions.
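As a rough illustration of what "compatible" means here, the sketch below compares the combined LHCb and CMS measurement with the Standard Model prediction by adding the quoted uncertainties in quadrature; a real analysis would use the experiments' full published likelihoods, so this is only a back-of-the-envelope approximation.

```python
# Back-of-the-envelope compatibility check (illustrative only; a proper comparison would
# use the experiments' published likelihoods rather than a Gaussian approximation).
sm_value, sm_err = 3.66e-9, 0.23e-9        # Standard Model prediction for Br(Bs -> mu+ mu-)
measured, err_up = 2.8e-9, 0.7e-9          # measurement lies below the prediction,
                                           # so its upper uncertainty is the relevant one
combined_err = (err_up**2 + sm_err**2) ** 0.5
pull = (sm_value - measured) / combined_err
print(f"difference = {sm_value - measured:.2e}, about {pull:.1f} standard deviations")
```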
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Br(B_s \\rightarrow \\mu^+\\mu^-) = 2.8^{+0.7}_{-0.6} \\times 10^{-9}"
}
]
| https://en.wikipedia.org/wiki?curid=7291166 |
72912536 | Nine-point stencil | Numerical analysis method
In numerical analysis, given a square grid in two dimensions, the nine-point stencil of a point in the grid is a stencil made up of the point itself together with its eight "neighbors". It is used to write finite difference approximations to derivatives at grid points. It is an example for numerical differentiation. This stencil is often used to approximate the Laplacian of a function of two variables.
Motivation.
If we discretize the 2D Laplacian by using central-difference methods, we obtain the commonly used five-point stencil, represented by the following convolution kernel:
formula_0
Even though it is simple to obtain and computationally lighter, the central difference kernel possesses an undesired intrinsic anisotropic property, since it doesn't take into account the diagonal neighbours. This intrinsic anisotropy poses a problem in certain numerical simulations or when more accuracy is required, because it propagates the Laplacian effect faster along the coordinate axes than in the other directions, thus distorting the final result.
This drawback calls for finding better methods for discretizing the Laplacian, reducing or eliminating the anisotropy.
Implementation.
The two most commonly used isotropic nine-point stencils are displayed below, in their convolution kernel forms. They can be obtained by the following formula:
formula_1
The first one is known as the Oono-Puri kernel, and it is obtained when γ=1/2.
formula_2
The second one is known as the Patra-Karttunen or Mehrstellen kernel, and it is obtained when γ=1/3.
formula_3
Both are isotropic forms of the discrete Laplacian, and in the limit of small Δx they become equivalent: Oono-Puri has been described as the optimally isotropic form of discretization, displaying reduced overall error, while Patra-Karttunen has been systematically derived by imposing conditions of rotational invariance, displaying the smallest error around the origin.
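A minimal sketch of how these kernels can be used in practice is given below; the test function, the grid spacing and the use of scipy's 2D convolution are arbitrary choices made for the demonstration.

```python
# Sketch; the test function, grid spacing and use of scipy's 2D convolution are arbitrary
# demo choices. Build the gamma-parametrised kernels and compare them with the exact
# Laplacian of f(x, y) = sin(x)*cos(y), for which the Laplacian equals -2*f.
import numpy as np
from scipy.signal import convolve2d

def nine_point_kernel(gamma):
    cross = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=float)
    diag = np.array([[0.5, 0, 0.5], [0, -2, 0], [0.5, 0, 0.5]], dtype=float)
    return (1 - gamma) * cross + gamma * diag

h = 0.05
x, y = np.meshgrid(np.arange(0, 2 * np.pi, h), np.arange(0, 2 * np.pi, h), indexing="ij")
f = np.sin(x) * np.cos(y)
exact = -2 * f

for name, gamma in [("five-point", 0.0), ("Oono-Puri", 0.5), ("Patra-Karttunen", 1 / 3)]:
    approx = convolve2d(f, nine_point_kernel(gamma), mode="same") / h**2
    err = np.max(np.abs(approx - exact)[2:-2, 2:-2])   # ignore the zero-padded boundary
    print(f"{name:16s} gamma = {gamma:.3f}   max interior error = {err:.2e}")
```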
Desired anisotropy.
On the other hand, if controlled anisotropic effects are a desired feature, when solving anisotropic diffusion problems for example, it is also possible to use the 9-point stencil combined with tensors to generate them.
Consider the Laplacian in the following form:
formula_4
Where c is just a constant coefficient. Now if we replace c by the 2nd rank tensor C:
formula_5
Where c1 is the constant coefficient for the principal direction along the x axis, and c2 is the constant coefficient for the secondary direction along the y axis. In order to generate anisotropic effects, c1 and c2 must be different.
By multiplying it by the rotation matrix Q, we obtain C', allowing anisotropic propagations in arbitrary directions other than the coordinate axes.
formula_6
formula_7
formula_8
This is very similar to the Cauchy stress tensor in 2 dimensions. The angle formula_9 can be obtained by generating a vector field formula_10 in order to orient the pattern as desired. Then:
formula_11
Or, for different anisotropic effects using the same vector field
formula_12
It is important to note that, regardless of the values of formula_9, the anisotropic propagation will occur parallel to the secondary direction c2 and perpendicular to the principal direction c1. The resulting convolution kernel is as follows:
formula_13
If, for example, c1=c2=1, the cxy component will vanish, resulting in the simple five-point stencil, rendering no controlled anisotropy.
If c2>c1 and formula_9=0, the anisotropic effects will be more pronounced in the vertical axis.
If c2>c1 and formula_9=45 degrees, the anisotropic effects will be more pronounced in the upper-right / lower-left diagonal.
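The construction above can be sketched in a few lines of code; the values of c1, c2 and the angle below are arbitrary demonstration inputs, not values from any particular simulation.

```python
# Sketch following the tensor construction above; the values of c1, c2 and the angle are
# arbitrary demo inputs. It assembles the rotated tensor components and the kernel.
import numpy as np

def anisotropic_kernel(c1, c2, theta):
    c, s = np.cos(theta), np.sin(theta)
    cxx = c1 * c**2 + c2 * s**2
    cyy = c2 * c**2 + c1 * s**2
    cxy = (c2 - c1) * c * s
    return np.array([
        [-cxy / 2, cyy,              cxy / 2],
        [cxx,      -2 * (cxx + cyy), cxx],
        [cxy / 2,  cyy,              -cxy / 2],
    ])

print(anisotropic_kernel(1.0, 1.0, 0.3))         # reduces to the five-point stencil
print(anisotropic_kernel(1.0, 4.0, np.pi / 4))   # anisotropy along a diagonal
```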
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "D_{CD}=\\begin{bmatrix} 0 & 1 & 0 \\\\ 1 & -4 & 1 \\\\ 0 & 1 & 0\\end{bmatrix}"
},
{
"math_id": 1,
"text": "D= (1 - \\gamma) \\begin{bmatrix}0 & 1 & 0\\\\1 & -4 & 1\\\\0 & 1 & 0\\end{bmatrix}\n + \\gamma \\begin{bmatrix}1/2 & 0 & 1/2\\\\0 & -2 & 0\\\\1/2 & 0 & 1/2\\end{bmatrix}"
},
{
"math_id": 2,
"text": "D_{OP}=\\begin{bmatrix} 1/4 & 2/4 & 1/4 \\\\ 2/4 & -12/4 & 2/4 \\\\ 1/4 & 2/4 & 1/4\\end{bmatrix}=\\begin{bmatrix} 0.25 & 0.5 & 0.25 \\\\ 0.5 & -3 & 0.5 \\\\ 0.25 & 0.5 & 0.25\\end{bmatrix}"
},
{
"math_id": 3,
"text": "D_{PK}=\\begin{bmatrix} 1/6 & 4/6 & 1/6 \\\\ 4/6 & -20/6 & 4/6 \\\\ 1/6 & 4/6 & 1/6\\end{bmatrix}=\\begin{bmatrix} 0.16 & 0.66 & 0.16 \\\\ 0.66 & -3.33 & 0.66 \\\\ 0.16 & 0.66 & 0.16\\end{bmatrix}"
},
{
"math_id": 4,
"text": "c\\nabla^2A=cD_{OP}*A"
},
{
"math_id": 5,
"text": "C=\\begin{bmatrix} c_{1} & 0 \\\\ 0 & c_{2} \\end{bmatrix}"
},
{
"math_id": 6,
"text": "Q = \\begin{bmatrix}\n\\cos \\theta & \\sin \\theta \\\\ \n-\\sin \\theta & \\cos \\theta \n\\end{bmatrix} "
},
{
"math_id": 7,
"text": "C'=QCQ^{\\operatorname{T}}"
},
{
"math_id": 8,
"text": "C'=\\begin{bmatrix} c_{xx} & c_{xy} \\\\ c_{xy} & c_{yy} \\end{bmatrix}=\\begin{bmatrix} c_{1}\\cos^2\\theta+c_{2}\\sin^2\\theta & (c_{2}-c_{1})\\cos\\theta\\sin\\theta \\\\ (c_{2}-c_{1})\\cos\\theta\\sin\\theta & c_{2}\\cos^2\\theta+c_{1}\\sin^2\\theta \\end{bmatrix}"
},
{
"math_id": 9,
"text": "\\theta"
},
{
"math_id": 10,
"text": "\\mathbf{V}= V_x{\\mathbf i} +V_y{\\mathbf j}"
},
{
"math_id": 11,
"text": "\\theta=\\arctan(V_y/V_x) "
},
{
"math_id": 12,
"text": "\\theta=\\arctan(V_y/-V_x) "
},
{
"math_id": 13,
"text": "D_{Aniso}=\\begin{bmatrix} \\frac{-c_{xy}}{2} & c_{yy} & \\frac{c_{xy}}{2} \\\\ c_{xx} & -2(c_{xx}+c_{yy}) & c_{xx} \\\\ \\frac{c_{xy}}{2} & c_{yy} & \\frac{-c_{xy}}{2} \\end{bmatrix}"
}
]
| https://en.wikipedia.org/wiki?curid=72912536 |
729237 | A Dynamical Theory of the Electromagnetic Field | 1865 physics paper by James Maxwell
"A Dynamical Theory of the Electromagnetic Field" is a paper by James Clerk Maxwell on electromagnetism, published in 1865. In the paper, Maxwell derives an electromagnetic wave equation with a velocity for light in close agreement with measurements made by experiment, and deduces that light is an electromagnetic wave.
Publication.
Following standard procedure for the time, the paper was first read to the Royal Society on 8 December 1864, having been sent by Maxwell to the society on 27 October. It then underwent peer review, being sent to William Thomson (later Lord Kelvin) on 24 December 1864. It was then sent to George Gabriel Stokes, the Society's physical sciences secretary, on 23 March 1865. It was approved for publication in the "Philosophical Transactions of the Royal Society" on 15 June 1865, by the Committee of Papers (essentially the society's governing council) and sent to the printer the following day (16 June). During this period, "Philosophical Transactions" was only published as a bound volume once a year, and would have been prepared for the society's anniversary day on 30 November (the exact date is not recorded). However, the printer would have prepared and delivered to Maxwell offprints, for the author to distribute as he wished, soon after 16 June.
Maxwell's original equations.
In part III of the paper, which is entitled "General Equations of the Electromagnetic Field", Maxwell formulated twenty equations which were to become known as Maxwell's equations, until this term became applied instead to a vectorized set of four equations selected in 1884, which had all appeared in his 1861 paper "On Physical Lines of Force".
Heaviside's versions of Maxwell's equations are distinct by virtue of the fact that they are written in modern vector notation. They actually only contain one of the original eight—equation "G" (Gauss's Law). Another of Heaviside's four equations is an amalgamation of Maxwell's law of total currents (equation "A") with Ampère's circuital law (equation "C"). This amalgamation, which Maxwell himself had actually originally made at equation (112) in "On Physical Lines of Force", is the one that modifies Ampère's Circuital Law to include Maxwell's displacement current.
Heaviside's equations.
Eighteen of Maxwell's twenty original equations can be vectorized into six equations, labeled (A) to (F) below, each of which represents a group of three original equations in component form. The 19th and 20th of Maxwell's component equations appear as (G) and (H) below, making a total of eight vector equations. These are listed below in Maxwell's original order, designated by the letters that Maxwell assigned to them in his 1864 paper.
(A) formula_0 formula_1 formula_2
(B) formula_3
(C) formula_4
(D) formula_5
(E) formula_6
(F) formula_7
(G) formula_8
(H) formula_9.
formula_10 is the magnetic field, which Maxwell called the "magnetic intensity".
formula_11 is the electric current density (with formula_12 being the total current density including displacement current).
formula_13 is the displacement field (called the "electric displacement" by Maxwell).
formula_14 is the free charge density (called the "quantity of free electricity" by Maxwell).
formula_15 is the magnetic potential (called the "angular impulse" by Maxwell).
formula_16 is the force per unit charge (called the "electromotive force" by Maxwell, not to be confused with the scalar quantity that is now called electromotive force; see below).
formula_17 is the electric potential (which Maxwell also called "electric potential").
formula_18 is the electrical conductivity (Maxwell called the inverse of conductivity the "specific resistance", what is now called the resistivity).
formula_19 is the vector operator "del".
Clarifications
Maxwell did not consider completely general materials; his initial formulation used linear, isotropic, nondispersive media with permittivity "ϵ" and permeability "μ", although he also discussed the possibility of anisotropic materials.
Gauss's law for magnetism (∇⋅ B = 0) is not included in the above list, but follows directly from equation (B) by taking divergences (because the divergence of the curl is zero).
Substituting (A) into (C) yields the familiar differential form of the Ampère–Maxwell law, i.e. Ampère's circuital law with the displacement-current term.
Equation (D) implicitly contains the Lorentz force law and the differential form of Faraday's law of induction. For a "static" magnetic field, formula_20 vanishes, and the electric field E becomes conservative and is given by −∇"ϕ", so that (D) reduces to
formula_21.
This is simply the Lorentz force law on a per-unit-charge basis — although Maxwell's equation (D) first appeared at equation (77) in "On Physical Lines of Force" in 1861, 34 years before Lorentz derived his force law, which is now usually presented as a supplement to the four "Maxwell's equations". The cross-product term in the Lorentz force law is the source of the so-called "motional emf" in electric generators (see also "Moving magnet and conductor problem"). Where there is no motion through the magnetic field — e.g., in transformers — we can drop the cross-product term, and the force per unit charge (called f) reduces to the electric field E, so that Maxwell's equation (D) reduces to
formula_22.
Taking curls, noting that the curl of a gradient is zero, we obtain
formula_23
which is the differential form of Faraday's law. Thus the three terms on the right side of equation (D) may be described, from left to right, as the motional term, the transformer term, and the conservative term.
In deriving the electromagnetic wave equation, Maxwell considers the situation only from the rest frame of the medium, and accordingly drops the cross-product term. But he still works from equation (D), in contrast to modern textbooks which tend to work from Faraday's law (see below).
The constitutive equations (E) and (F) are now usually written in the rest frame of the medium as D = "ϵ"E and J = "σ"E.
Maxwell's equation (G), viewed in isolation as printed in the 1864 paper, at first seems to say that "ρ" + ∇⋅ D = 0. However, if we trace the signs through the previous two triplets of equations, we see that what seem to be the components of D are in fact the components of −D. The notation used in Maxwell's later "Treatise on Electricity and Magnetism" is different, and avoids the misleading first impression.
Maxwell – electromagnetic light wave.
In part VI of "A Dynamical Theory of the Electromagnetic Field", subtitled "Electromagnetic theory of light", Maxwell uses the correction to Ampère's Circuital Law made in part III of his 1862 paper, "On Physical Lines of Force", which is defined as displacement current, to derive the electromagnetic wave equation.
He obtained a wave equation with a speed in close agreement with experimental determinations of the speed of light. He commented,
<templatestyles src="Template:Blockquote/styles.css" />The agreement of the results seems to show that light and magnetism are affections of the same substance, and that light is an electromagnetic disturbance propagated through the field according to electromagnetic laws.
Maxwell's derivation of the electromagnetic wave equation has been replaced in modern physics by a much less cumbersome method which combines the corrected version of Ampère's Circuital Law with Faraday's law of electromagnetic induction.
Modern equation methods.
To obtain the electromagnetic wave equation in a vacuum using the modern method, we begin with the modern 'Heaviside' form of Maxwell's equations written in SI units for a vacuum, i.e. with no charges and no currents: ∇ ⋅ E = 0, ∇ ⋅ B = 0, ∇ × E = −∂B/∂t, and ∇ × B = μ0ε0 ∂E/∂t.
If we take the curl of the curl equations we obtain
formula_24
formula_25
If we note the vector identity
formula_26
where formula_27 is any vector function of space, we recover the wave equations
formula_28
formula_29
where
formula_30 meters per second
is the speed of light in free space.
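As a quick numerical illustration of this relation, the short sketch below evaluates 1/√(μ0ε0) from conventional SI values of the vacuum constants; the rounded constants are assumptions of the example, so the result is approximate.

```python
# Quick numerical check of c = 1/sqrt(mu_0 * epsilon_0); the rounded SI values of the
# vacuum constants below are assumptions of this example, so the result is approximate.
mu_0 = 4e-7 * 3.141592653589793     # vacuum permeability in H/m (classical defined value)
epsilon_0 = 8.8541878128e-12        # vacuum permittivity in F/m

c = 1.0 / (mu_0 * epsilon_0) ** 0.5
print(f"c = {c:.6e} m/s")           # approximately 2.998e8 m/s
```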
Legacy and impact.
Of this paper and Maxwell's related works, fellow physicist Richard Feynman said: "From the long view of this history of mankind – seen from, say, 10,000 years from now – there can be little doubt that the most significant event of the 19th century will be judged as Maxwell's discovery of the laws of electromagnetism."
Albert Einstein used Maxwell's equations as the starting point for his special theory of relativity, presented in "The Electrodynamics of Moving Bodies", one of Einstein's 1905 "Annus Mirabilis" papers. In it is stated:
the same laws of electrodynamics and optics will be valid for all frames of reference for which the equations of mechanics hold good
and
Any ray of light moves in the "stationary" system of co-ordinates with the determined velocity c, whether the ray be emitted by a stationary or by a moving body.
Maxwell's equations can also be derived by extending general relativity into five physical dimensions.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{J}_{\\rm tot} = "
},
{
"math_id": 1,
"text": "\\,\\mathbf{J}"
},
{
"math_id": 2,
"text": " +\\,\\frac{\\partial\\mathbf{D}}{\\partial t}"
},
{
"math_id": 3,
"text": "\\mu \\mathbf{H} = \\nabla \\times \\mathbf{A}"
},
{
"math_id": 4,
"text": "\\nabla \\times \\mathbf{H} = \\mathbf{J}_{\\rm tot}"
},
{
"math_id": 5,
"text": "\\mathbf{f} = \\mu (\\mathbf{v} \\times \\mathbf{H}) - \\frac{\\partial\\mathbf{A}}{\\partial t}-\\nabla \\phi "
},
{
"math_id": 6,
"text": "\\mathbf{f} = \\frac{1}{\\varepsilon} \\mathbf{D}"
},
{
"math_id": 7,
"text": "\\mathbf{f} = \\frac{1}{\\sigma} \\mathbf{J}"
},
{
"math_id": 8,
"text": "\\nabla \\cdot \\mathbf{D} = \\rho"
},
{
"math_id": 9,
"text": "\\nabla \\cdot \\mathbf{J} = -\\frac{\\partial\\rho}{\\partial t}\\,"
},
{
"math_id": 10,
"text": "\\mathbf{H}"
},
{
"math_id": 11,
"text": "\\mathbf{J}"
},
{
"math_id": 12,
"text": "\\mathbf{J}_{\\rm tot}"
},
{
"math_id": 13,
"text": "\\mathbf{D}"
},
{
"math_id": 14,
"text": "\\rho"
},
{
"math_id": 15,
"text": "\\mathbf{A}"
},
{
"math_id": 16,
"text": "\\mathbf{f}"
},
{
"math_id": 17,
"text": "\\phi"
},
{
"math_id": 18,
"text": "\\sigma"
},
{
"math_id": 19,
"text": "\\nabla"
},
{
"math_id": 20,
"text": "\\partial\\mathbf{A}/\\partial t"
},
{
"math_id": 21,
"text": "\\mathbf{f}=\\mathbf{E}+\\mathbf{v}\\times\\mathbf{B}\\,"
},
{
"math_id": 22,
"text": "\\mathbf{E}=-\\frac{\\partial\\mathbf{A}}{\\partial t}-\\nabla\\phi\\,"
},
{
"math_id": 23,
"text": "\\nabla\\times\\mathbf{E}\\,=\\,-\\nabla\\times\\frac{\\partial\\mathbf{A}}{\\partial t}\\,=\\,-\\frac{\\partial}{\\partial t}\\big(\\nabla\\times\\mathbf{A}\\big)\\,=\\,-\\frac{\\partial\\mathbf{B}}{\\partial t}\\,,"
},
{
"math_id": 24,
"text": " \\nabla \\times \\nabla \\times \\mathbf{E} = -\\mu_o \\frac{\\partial } {\\partial t} \\nabla \\times \\mathbf{H} = -\\mu_o \\varepsilon_o \\frac{\\partial^2 \\mathbf{E} } {\\partial t^2} "
},
{
"math_id": 25,
"text": " \\nabla \\times \\nabla \\times \\mathbf{H} = \\varepsilon_o \\frac{\\partial } {\\partial t} \\nabla \\times \\mathbf{E} = -\\mu_o \\varepsilon_o \\frac{\\partial^2 \\mathbf{H} } {\\partial t^2}\n"
},
{
"math_id": 26,
"text": "\\nabla \\times \\left( \\nabla \\times \\mathbf{V} \\right) = \\nabla \\left( \\nabla \\cdot \\mathbf{V} \\right) - \\nabla^2 \\mathbf{V}"
},
{
"math_id": 27,
"text": " \\mathbf{V} "
},
{
"math_id": 28,
"text": " {\\partial^2 \\mathbf{E} \\over \\partial t^2} \\ - \\ c^2 \\cdot \\nabla^2 \\mathbf{E} \\ \\ = \\ \\ 0"
},
{
"math_id": 29,
"text": " {\\partial^2 \\mathbf{H} \\over \\partial t^2} \\ - \\ c^2 \\cdot \\nabla^2 \\mathbf{H} \\ \\ = \\ \\ 0"
},
{
"math_id": 30,
"text": "c = { 1 \\over \\sqrt{ \\mu_o \\varepsilon_o } } = 2.99792458 \\times 10^8 "
}
]
| https://en.wikipedia.org/wiki?curid=729237 |
72927189 | Multiple orthogonal polynomials | In mathematics, the multiple orthogonal polynomials (MOPs) are orthogonal polynomials in one variable that are orthogonal with respect to a finite family of measures. The polynomials are divided into two classes named "type 1" and "type 2".
In the literature, MOPs are also called "formula_0-orthogonal polynomials", "Hermite-Padé polynomials" or "polyorthogonal polynomials". MOPs should not be confused with multivariate orthogonal polynomials.
Multiple orthogonal polynomials.
Consider a multiindex formula_1 and formula_2 positive measures formula_3 over the reals. As usual formula_4.
MOP of type 1.
Polynomials formula_5 for formula_6 are of "type 1" if the formula_7-th polynomial formula_5 has at most degree formula_8 such that
formula_9
and
formula_10
Explanation.
This defines a system of formula_11 equations for the formula_11 coefficients of the polynomials formula_12.
MOP of type 2.
A monic polynomial formula_13 is of "type 2" if it has degree formula_11 such that
formula_14
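A small numerical sketch of this definition is given below (the two discrete measures and the multi-index are arbitrary choices for the demonstration): writing the monic polynomial in terms of its unknown coefficients turns the orthogonality conditions, spelled out in the explanation that follows, into a square linear system in the moments of the measures, which can then be solved directly.

```python
# Illustrative computation; the two discrete measures and the multi-index are arbitrary
# demo choices. Writing P(x) = x^N + c_{N-1} x^{N-1} + ... + c_0 with N = n1 + n2 turns the
# orthogonality conditions into a square linear system in the moments of the measures.
import numpy as np

measures = [                                            # mu_j = sum_i w_i * delta_{x_i}
    (np.array([-1.0, 0.0, 1.0, 2.0]), np.array([0.2, 0.3, 0.3, 0.2])),
    (np.array([0.5, 1.5, 2.5, 3.5]), np.array([0.25, 0.25, 0.25, 0.25])),
]
n = (2, 1)
N = sum(n)

def moment(j, k):
    x, w = measures[j]
    return np.sum(w * x**k)

rows, rhs = [], []
for j, nj in enumerate(n):
    for k in range(nj):                                 # int P(x) x^k dmu_j(x) = 0
        rows.append([moment(j, k + i) for i in range(N)])
        rhs.append(-moment(j, k + N))
coeffs = np.linalg.solve(np.array(rows), np.array(rhs))
P = np.concatenate([coeffs, [1.0]])                     # coefficients c_0, ..., c_{N-1}, 1

for j, nj in enumerate(n):                              # verify the conditions numerically
    x, w = measures[j]
    Px = sum(P[i] * x**i for i in range(N + 1))
    for k in range(nj):
        assert abs(np.sum(w * Px * x**k)) < 1e-10
print("monic type 2 MOP coefficients (lowest degree first):", P)
```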
Explanation.
If we write formula_15 out, we get the following definition
formula_16
formula_17
formula_18
formula_19 | [
{
"math_id": 0,
"text": "d"
},
{
"math_id": 1,
"text": "\\vec{n}=(n_1,\\dots,n_r)\\in \\mathbb{N}^r"
},
{
"math_id": 2,
"text": "r"
},
{
"math_id": 3,
"text": "\\mu_1,\\dots,\\mu_r"
},
{
"math_id": 4,
"text": "|\\vec{n}|:=n_1+n_2+\\cdots + n_r"
},
{
"math_id": 5,
"text": "A_{\\vec{n},j}"
},
{
"math_id": 6,
"text": "j=1,2,\\dots,r"
},
{
"math_id": 7,
"text": "j"
},
{
"math_id": 8,
"text": "n_j-1"
},
{
"math_id": 9,
"text": "\\sum\\limits_{j=1}^r\\int_{\\R}x^kA_{\\vec{n},j}d\\mu_j(x)=0,\\qquad k=0,1,2,\\dots,|\\vec{n}|-2,"
},
{
"math_id": 10,
"text": "\\sum\\limits_{j=1}^r\\int_{\\R}x^{|\\vec{n}|-1}A_{\\vec{n},j}d\\mu_j(x)=1."
},
{
"math_id": 11,
"text": "|\\vec{n}|"
},
{
"math_id": 12,
"text": "A_{\\vec{n},1},A_{\\vec{n},2},\\dots,A_{\\vec{n},r}"
},
{
"math_id": 13,
"text": "P_{\\vec{n}}(x)"
},
{
"math_id": 14,
"text": "\\int_{\\R}P_{\\vec{n}}(x)x^k d\\mu_j(x)=0,\\qquad k=0,1,2,\\dots,n_j-1,\\qquad j=1,\\dots,r."
},
{
"math_id": 15,
"text": "j=1,\\dots,r"
},
{
"math_id": 16,
"text": "\\int_{\\R}P_{\\vec{n}}(x)x^k d\\mu_1(x)=0,\\qquad k=0,1,2,\\dots,n_1-1"
},
{
"math_id": 17,
"text": "\\int_{\\R}P_{\\vec{n}}(x)x^k d\\mu_2(x)=0,\\qquad k=0,1,2,\\dots,n_2-1"
},
{
"math_id": 18,
"text": "\\vdots"
},
{
"math_id": 19,
"text": "\\int_{\\R}P_{\\vec{n}}(x)x^k d\\mu_r(x)=0,\\qquad k=0,1,2,\\dots,n_r-1"
}
]
| https://en.wikipedia.org/wiki?curid=72927189 |
7293427 | Solar gain | Solar energy effect
Solar gain (also known as solar heat gain or passive solar gain) is the increase in thermal energy of a space, object or structure as it absorbs incident solar radiation. The amount of solar gain a space experiences is a function of the total incident solar irradiance and of the ability of any intervening material to transmit or resist the radiation.
Objects struck by sunlight absorb its visible and short-wave infrared components, increase in temperature, and then re-radiate that heat at longer infrared wavelengths. Though transparent building materials such as glass allow visible light to pass through almost unimpeded, once that light is converted to long-wave infrared radiation by materials indoors, it is unable to escape back through the window since glass is opaque to those longer wavelengths. The trapped heat thus causes solar gain via a phenomenon known as the greenhouse effect. In buildings, excessive solar gain can lead to overheating within a space, but it can also be used as a passive heating strategy when heat is desired.
Window solar gain properties.
Solar gain is most frequently addressed in the design and selection of windows and doors. Because of this, the most common metrics for quantifying solar gain are used as a standard way of reporting the thermal properties of window assemblies. In the United States, The American Society of Heating, Refrigerating, and Air-Conditioning Engineers (ASHRAE), and The National Fenestration Rating Council (NFRC) maintain standards for the calculation and measurement of these values.
Shading coefficient.
The shading coefficient (SC) is a measure of the radiative thermal performance of a glass unit (panel or window) in a building. It is defined as the ratio of solar radiation at a given wavelength and angle of incidence passing through a glass unit to the radiation that would pass through a reference window of frameless Clear Float Glass. Since the quantities compared are functions of both wavelength and angle of incidence, the shading coefficient for a window assembly is typically reported for a single wavelength typical of solar radiation entering normal to the plane of glass. This quantity includes both energy that is transmitted directly through the glass as well as energy that is absorbed by the glass and frame and re-radiated into the space, and is given by the following equation:
formula_0
Here, λ is the wavelength of radiation and θ is the angle of incidence. "T" is the transmissivity of the glass, "A" is its absorptivity, and "N" is the fraction of absorbed energy that is re-emitted into the space. The overall shading coefficient is thus given by the ratio:
formula_1
The shading coefficient depends on the radiation properties of the window assembly. These properties are the transmissivity "T", absorptivity "A", emissivity (which is equal to the absorptivity for any given wavelength), and reflectivity, all of which are dimensionless quantities that together sum to 1. Factors such as color, tint, and reflective coatings affect these properties, which is what prompted the development of the shading coefficient as a correction factor to account for this. ASHRAE's table of solar heat gain factors provides the expected solar heat gain for ⅛” clear float glass at different latitudes, orientations, and times, which can be multiplied by the shading coefficient to correct for differences in radiation properties.
The value of the shading coefficient ranges from 0 to 1. The lower the rating, the less solar heat is transmitted through the glass, and the greater its shading ability.
In addition to glass properties, shading devices integrated into the window assembly are also included in the SC calculation. Such devices can reduce the shading coefficient by blocking portions of the glazing with opaque or translucent material, thus reducing the overall transmissivity.
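A minimal numerical sketch of the ratio defined above is given below; the transmissivity, absorptivity and re-emission values are invented demonstration numbers, not measured glass data.

```python
# Toy numbers only (not measured glass data): compute a shading coefficient as the ratio
# of the solar gain factor F = T + N*A of a sample glass unit to that of the reference
# clear float glass; all property values below are assumptions of the example.
def solar_gain_factor(transmissivity, absorptivity, reradiated_fraction):
    return transmissivity + reradiated_fraction * absorptivity

reference = solar_gain_factor(0.86, 0.08, 0.3)   # assumed clear float reference glass
tinted = solar_gain_factor(0.48, 0.42, 0.3)      # assumed tinted glass unit

sc = tinted / reference
print(f"F_reference = {reference:.3f}, F_tinted = {tinted:.3f}, SC = {sc:.2f}")
```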
Window design methods have moved away from the Shading Coefficient and towards the Solar Heat Gain Coefficient (SHGC), which is defined as the fraction of incident solar radiation that actually enters a building through the entire window assembly as heat gain (not just the glass portion). The standard method for calculating the SHGC also uses a more realistic wavelength-by-wavelength method, rather than just providing a coefficient for a single wavelength like the shading coefficient does. Though the shading coefficient is still mentioned in manufacturer product literature and some industry computer software, it is no longer mentioned as an option in industry-specific texts or model building codes. Aside from its inherent inaccuracies, another shortcoming of the SC is its counter-intuitive name, which suggests that high values equal high shading when in reality the opposite is true. Industry technical experts recognized the limitations of SC and pushed towards SHGC in the United States (and the analogous g-value in Europe) before the early 1990s.
A conversion from SC to SHGC is not necessarily straightforward, as they each take into account different heat transfer mechanisms and paths (window assembly vs. glass-only). To perform an approximate conversion from SC to SHGC, multiply the SC value by 0.87.
g-value.
The g-value (sometimes also called a Solar Factor or Total Solar Energy Transmittance) is the coefficient commonly used in Europe to measure the solar energy transmittance of windows. Despite having minor differences in modeling standards compared to the SHGC, the two values are effectively the same. A g-value of 1.0 represents full transmittance of all solar radiation while 0.0 represents a window with no solar energy transmittance. In practice though, most g-values will range between 0.2 and 0.7, with solar control glazing having a g-value of less than 0.5.
Solar heat gain coefficient (SHGC).
SHGC is the successor to the shading coefficient used in the United States and it is the ratio of transmitted solar radiation to incident solar radiation of an entire window assembly. It ranges from 0 to 1 and refers to the solar energy transmittance of a window or door as a whole, factoring in the glass, frame material, sash (if present), divided lite bars (if present) and screens (if present). The transmittance of each component is calculated in a similar manner to the shading coefficient. However, in contrast to the shading coefficient, the total solar gain is calculated on a wavelength-by-wavelength basis where the directly transmitted portion of the solar heat gain coefficient is given by:
formula_2
Here formula_3 is the spectral transmittance at a given wavelength in nanometers and formula_4 is the incident solar spectral irradiance. When integrated over the wavelengths of solar short-wave radiation, it yields the total fraction of transmitted solar energy across all solar wavelengths. The product formula_5 is thus the portion of absorbed and re-emitted energy across all assembly components beyond just the glass. It is important to note that the standard SHGC is calculated only for an angle of incidence normal to the window. However this tends to provide a good estimate over a wide range of angles, up to 30 degrees from normal in most cases.
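The following sketch illustrates the wavelength-by-wavelength calculation with synthetic spectra; the Gaussian stand-in for the solar irradiance and the step-shaped transmittance are assumptions of the example, and the result is normalised by the integrated irradiance so that it can be read as a fraction.

```python
# Sketch with synthetic spectra; the Gaussian stand-in for E(lambda) and the step-shaped
# T(lambda) are assumptions of the example, not measured data. The integral is normalised
# by the integrated irradiance so the result reads as a fraction between 0 and 1.
import numpy as np

wavelength = np.linspace(350.0, 3500.0, 2000)                # nm
irradiance = np.exp(-((wavelength - 800.0) / 600.0) ** 2)    # synthetic E(lambda), a.u.
transmittance = np.where(wavelength < 2500.0, 0.75, 0.10)    # synthetic T(lambda)

dlam = wavelength[1] - wavelength[0]
transmitted = np.sum(transmittance * irradiance) * dlam      # integral of T*E d(lambda)
incident = np.sum(irradiance) * dlam                         # integral of E d(lambda)
print(f"directly transmitted fraction = {transmitted / incident:.3f}")
```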
SHGC can either be estimated through simulation models or measured by recording the total heat flow through a window with a calorimeter chamber. In both cases, NFRC standards outline the test procedure and the calculation of the SHGC. For dynamic fenestration or operable shading, each possible state can be described by a different SHGC.
Though the SHGC is more realistic than the SC, both are only rough approximations when they include complex elements such as shading devices, which offer more precise control over when fenestration is shaded from solar gain than glass treatments.
Solar gain in opaque building components.
Apart from windows, walls and roofs also serve as pathways for solar gain. In these components heat transfer is entirely due to absorptance, conduction, and re-radiation since all transmittance is blocked in opaque materials. The primary metric in opaque components is the Solar Reflectance Index which accounts for both solar reflectance (albedo) and emittance of a surface. Materials with high SRI will reflect and emit a majority of heat energy, keeping them cooler than other exterior finishes. This is quite significant in the design of roofs since dark roofing materials can often be as much as 50 °C hotter than the surrounding air temperature, leading to large thermal stresses as well as heat transfer to interior space.
Solar gain and building design.
Solar gain can have either positive or negative effects depending on the climate. In the context of passive solar building design, the aim of the designer is normally to maximize solar gain within the building in the winter (to reduce space heating demand), and to control it in summer (to minimize cooling requirements). Thermal mass may be used to even out the fluctuations during the day, and to some extent between days.
Control of solar gain.
Uncontrolled solar gain is undesirable in hot climates due to its potential for overheating a space. To minimize this and reduce cooling loads, several technologies exist for solar gain reduction. SHGC is influenced by the color or tint of glass and its degree of reflectivity. Reflectivity can be modified through the application of reflective metal oxides to the surface of the glass. Low-emissivity coating is another more recently developed option that offers greater specificity in the wavelengths reflected and re-emitted. This allows glass to block mainly short-wave infrared radiation without greatly reducing visible transmittance.
In climate-responsive design for cold and mixed climates, windows are typically sized and positioned in order to provide solar heat gains during the heating season. To that end, glazing with a relatively high solar heat gain coefficient is often used so as not to block solar heat gains, especially in the sunny side of the house. SHGC also decreases with the number of glass panes used in a window. For example, in triple glazed windows, SHGC tends to be in the range of 0.33 - 0.47. For double glazed windows SHGC is more often in the range of 0.42 - 0.55.
Different types of glass can be used to increase or decrease solar heat gain through fenestration, but solar gain can also be more finely tuned by the proper orientation of windows and by the addition of shading devices such as overhangs, louvers, fins, porches, and other architectural shading elements.
Passive solar heating.
Passive solar heating is a design strategy that attempts to maximize the amount of solar gain in a building when additional heating is desired. It differs from active solar heating, which uses exterior water tanks with pumps to absorb solar energy; passive solar systems do not require energy for pumping and store heat directly in the structures and finishes of the occupied space.
In direct solar gain systems, the composition and coating of the building glazing can also be manipulated to increase the greenhouse effect by optimizing their radiation properties, while their size, position, and shading can be used to optimize solar gain. Solar gain can also be transferred to the building by indirect or isolated solar gain systems.
Passive solar designs typically employ large equator-facing windows with a high SHGC and overhangs that block sunlight in summer months and permit it to enter the window in the winter. When placed in the path of admitted sunlight, high thermal mass features such as concrete slabs or Trombe walls store large amounts of solar radiation during the day and release it slowly into the space throughout the night. When designed properly, this can modulate temperature fluctuations. Some of the current research into this subject area is addressing the tradeoff between opaque thermal mass for storage and transparent glazing for collection through the use of transparent phase change materials that both admit light and store energy without the need for excessive weight.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "F(\\lambda,\\theta)=T(\\lambda,\\theta)+N*A(\\lambda,\\theta)"
},
{
"math_id": 1,
"text": "S.C. = F(\\lambda,\\theta)_1 / F(\\lambda,\\theta)_o"
},
{
"math_id": 2,
"text": "T = \\int\\limits_{350 \\ nm}^{3500 \\ nm} T(\\lambda) E(\\lambda) d\\lambda "
},
{
"math_id": 3,
"text": "T(\\lambda)"
},
{
"math_id": 4,
"text": "E(\\lambda)"
},
{
"math_id": 5,
"text": "N*A(\\lambda,\\theta)"
}
]
| https://en.wikipedia.org/wiki?curid=7293427 |
72936319 | Sobolev orthogonal polynomials | In mathematics, Sobolev orthogonal polynomials are orthogonal polynomials with respect to a Sobolev inner product, i.e. an inner product with derivatives.
Because the inner product involves derivatives, Sobolev orthogonal polynomials in general no longer share some of the nice features that classical orthogonal polynomials have.
Sobolev orthogonal polynomials are named after Sergei Lvovich Sobolev.
Definition.
Let formula_0 be positive Borel measures on formula_1 with finite moments. Consider the inner product
formula_2
and let formula_3 be the corresponding Sobolev space. The Sobolev orthogonal polynomials formula_4 are defined as
formula_5
where formula_6 denotes the Kronecker delta. One says that these polynomials are "Sobolev orthogonal".
In general, multiplication by the variable is not a symmetric operator with respect to the Sobolev inner product, i.e. in general
formula_7
Consequently, neither Favard's theorem, the three-term recurrence relation nor the Christoffel–Darboux formula holds. There exist, however, other recursion formulas for certain types of measures.
{
"math_id": 0,
"text": "\\mu_0,\\mu_1,\\dots,\\mu_n"
},
{
"math_id": 1,
"text": "\\mathbb{R}"
},
{
"math_id": 2,
"text": "\\langle p_r,p_s \\rangle_{W^{n,2}}=\\int_{\\mathbb{R}}p_r(x) p_s(x)\\;\\mathrm{d}\\mu_0+\\sum\\limits_{k=1}^{n}\\int_{\\mathbb{R}}p_r^{(k)}(x) p_s^{(k)}(x)\\;\\mathrm{d}\\mu_k"
},
{
"math_id": 3,
"text": "W^{n,2}"
},
{
"math_id": 4,
"text": "\\{p_n\\}_{n\\geq 0}"
},
{
"math_id": 5,
"text": "\\langle p_n,p_s \\rangle_{W^{n,2}}=c_n\\delta_{n,s}"
},
{
"math_id": 6,
"text": "\\delta_{n,s}"
},
{
"math_id": 7,
"text": "\\langle xp_n,p_s\\rangle_{W^{n,2}}\\neq\\langle p_n,xp_s\\rangle_{W^{n,2}}"
},
{
"math_id": 8,
"text": "n=1"
}
]
| https://en.wikipedia.org/wiki?curid=72936319 |
72937548 | Kovasznay flow | Kovasznay flow corresponds to an exact solution of the Navier–Stokes equations and is interpreted to describe the flow behind a two-dimensional grid. The flow is named after Leslie Stephen George Kovasznay, who discovered this solution in 1948. The solution is often used to validate numerical codes solving the two-dimensional Navier–Stokes equations.
Flow description.
Let formula_0 be the free-stream velocity and let formula_1 be the spacing of a two-dimensional grid. The velocity field formula_2 of the Kovasznay flow, expressed in the Cartesian coordinate system, is given by
formula_3
where formula_4 is the root of the equation formula_5 in which formula_6 represents the Reynolds number of the flow. The root that describes the flow behind the two-dimensional grid is found to be
formula_7
The corresponding vorticity field formula_8 and the stream function formula_9 are given by
formula_10
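As an illustration (not part of the original derivation), the following short Python sketch evaluates the velocity field in the non-dimensional variables x/L and y/L and checks numerically that it satisfies the continuity equation (zero divergence):

```python
import numpy as np

Re = 40.0
lam = 0.5 * (Re - np.sqrt(Re**2 + 16.0 * np.pi**2))   # root describing the flow behind the grid

def velocity(x, y):
    """Non-dimensional Kovasznay velocity (u/U, v/U) at the point (x/L, y/L)."""
    u = 1.0 - np.exp(lam * x) * np.cos(2.0 * np.pi * y)
    v = (lam / (2.0 * np.pi)) * np.exp(lam * x) * np.sin(2.0 * np.pi * y)
    return u, v

# Central-difference check that du/dx + dv/dy = 0 at an arbitrary point.
x0, y0, h = 0.3, 0.2, 1e-6
du_dx = (velocity(x0 + h, y0)[0] - velocity(x0 - h, y0)[0]) / (2.0 * h)
dv_dy = (velocity(x0, y0 + h)[1] - velocity(x0, y0 - h)[1]) / (2.0 * h)
print(f"divergence ~ {du_dx + dv_dy:.2e}")   # close to zero
```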
Similar exact solutions, extending Kovasznay's, have been noted by Lin and Tobak and C. Y. Wang.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "U"
},
{
"math_id": 1,
"text": "L"
},
{
"math_id": 2,
"text": "(u,v,0)"
},
{
"math_id": 3,
"text": "\\frac{u}{U} = 1- e^{\\lambda x/L}\\cos\\left(\\frac{2\\pi y}{L}\\right), \\quad \\frac{v}{U} = \\frac{\\lambda}{2\\pi} e^{\\lambda x/L}\\sin\\left(\\frac{2\\pi y}{L}\\right)"
},
{
"math_id": 4,
"text": "\\lambda"
},
{
"math_id": 5,
"text": "\\lambda^2-Re\\, \\lambda -4\\pi^2=0"
},
{
"math_id": 6,
"text": "Re=UL/\\nu"
},
{
"math_id": 7,
"text": "\\lambda = \\frac{1}{2}(Re-\\sqrt{Re^2+16\\pi^2})."
},
{
"math_id": 8,
"text": "(0,0,\\omega)"
},
{
"math_id": 9,
"text": "\\psi"
},
{
"math_id": 10,
"text": "\\frac{\\omega}{U/L}=Re\\lambda e^{\\lambda x/L}\\sin\\left(\\frac{2\\pi y}{L}\\right), \\quad \\frac{\\psi}{LU} = \\frac{y}{L}- \\frac{1}{2\\pi}e^{\\lambda x/L}\\sin\\left(\\frac{2\\pi y}{L}\\right)."
}
]
| https://en.wikipedia.org/wiki?curid=72937548 |
7294357 | N (disambiguation) | N is the fourteenth letter of the English alphabet.
N or n may also refer to:
<templatestyles src="Template:TOC_right/styles.css" />
See also.
Topics referred to by the same term
<templatestyles src="Dmbox/styles.css" />
This page lists articles associated with the title N.
{
"math_id": 0,
"text": "\\mathbb{N}"
},
{
"math_id": 1,
"text": "F_n"
}
]
| https://en.wikipedia.org/wiki?curid=7294357 |
72945504 | Boué–Dupuis formula | In stochastic calculus, the Boué–Dupuis formula is a variational representation for Wiener functionals. The representation has applications in finding large deviation asymptotics.
The theorem was proven in 1998 by Michelle Boué and Paul Dupuis. In 2000 the result was generalized to infinite-dimensional Brownian motions and in 2009 extended to abstract Wiener spaces.
Boué–Dupuis formula.
Let formula_0 be the classical Wiener space and formula_1 be a formula_2-dimensional standard Brownian motion. Then for all bounded and measurable functions
formula_3 we have the following variational representation
formula_4
where the infimum is taken over all processes that are progressively measurable with respect to the natural filtration formula_5 of the Brownian motion, and formula_6 denotes the Euclidean norm. | [
{
"math_id": 0,
"text": "C([0,1],\\mathbb{R}^d)"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "d"
},
{
"math_id": 3,
"text": "f:C([0,1],\\mathbb{R}^d)\\to\\mathbb{R}"
},
{
"math_id": 4,
"text": "-\\log \\mathbb{E}\\left[e^{-f(B)}\\right]=\\inf\\limits_{V}\\mathbb{E}\\left[\\frac{1}{2}\\int_0^1\\|V_t\\|^2\\mathrm{d}t + f\\left(B+\\int_0^{\\cdot}V_t\\mathrm{d}t\\right)\\right],"
},
{
"math_id": 5,
"text": "\\mathcal{F}^B"
},
{
"math_id": 6,
"text": "\\|\\cdot \\|"
}
]
| https://en.wikipedia.org/wiki?curid=72945504 |
72947 | Marin Mersenne | French polymath (1588–1648)
Marin Mersenne, OM (also known as Marinus Mersennus or "le Père" Mersenne; ; 8 September 1588 – 1 September 1648) was a French polymath whose works touched a wide variety of fields. He is perhaps best known today among mathematicians for Mersenne prime numbers, those written in the form "Mn" =
2"n" − 1 for some integer "n". He also developed Mersenne's laws, which describe the harmonics of a vibrating string (such as may be found on guitars and pianos), and his seminal work on music theory, "Harmonie universelle", for which he is referred to as the "father of acoustics". Mersenne, an ordained Catholic priest, had many contacts in the scientific world and has been called "the center of the world of science and mathematics during the first half of the 1600s" and, because of his ability to make connections between people and ideas, "the post-box of Europe". He was also a member of the ascetical Minim religious order and wrote and lectured on theology and philosophy.
Life.
Mersenne was born of Jeanne Moulière, wife of Julien Mersenne, peasants who lived near Oizé, County of Maine (present-day Sarthe, France). He was educated at Le Mans and at the Jesuit College of La Flèche. On 17 July 1611, he joined the Minim Friars and, after studying theology and Hebrew in Paris, was ordained a priest in 1613.
Between 1614 and 1618, he taught theology and philosophy at Nevers, but he returned to Paris and settled at the convent of L'Annonciade in 1620. There he studied mathematics and music and met with other kindred spirits such as René Descartes, Étienne Pascal, Pierre Petit, Gilles de Roberval, Thomas Hobbes, and Nicolas-Claude Fabri de Peiresc. He corresponded with Giovanni Doni, Jacques Alexandre Le Tenneur, Constantijn Huygens, Galileo Galilei, and other scholars in Italy, England and the Dutch Republic. He was a staunch defender of Galileo, assisting him in translations of some of his mechanical works.
For four years, Mersenne devoted himself entirely to philosophic and theological writing, and published "Quaestiones celeberrimae in Genesim" ("Celebrated Questions on the Book of Genesis") (1623); "L'Impieté des déistes" ("The Impiety of the Deists") (1624); "La Vérité des sciences" ("Truth of the Sciences Against the Sceptics", 1624). It is sometimes incorrectly stated that he was a Jesuit. He was educated by Jesuits, but he never joined the Society of Jesus. He taught theology and philosophy at Nevers and Paris.
In 1635 he set up the informal "Académie Parisienne" (Academia Parisiensis), which had nearly 140 correspondents, including astronomers and philosophers as well as mathematicians, and was the precursor of the Académie des sciences established by Jean-Baptiste Colbert in 1666. He was not afraid to cause disputes among his learned friends in order to compare their views, notable among which were disputes between Descartes, Pierre de Fermat, and Jean de Beaugrand. Peter L. Bernstein, in his book "Against the Gods: The Remarkable Story of Risk", wrote, "The Académie des Sciences in Paris and the Royal Society in London, which were founded about twenty years after Mersenne's death, were direct descendants of Mersenne's activities."
In 1635 Mersenne met with Tommaso Campanella but concluded that he could "teach nothing in the sciences ... but still he has a good memory and a fertile imagination." Mersenne asked if Descartes wanted Campanella to come to Holland to meet him, but Descartes declined. He visited Italy three times, in 1640, 1641 and 1645. In 1643–1644 Mersenne also corresponded with the German Socinian Marcin Ruar concerning the Copernican ideas of Pierre Gassendi, finding Ruar already a supporter of Gassendi's position. Among his correspondents were Descartes, Galileo, Roberval, Pascal, Beeckman and other scientists.
He died on 1 September 1648 of complications arising from a lung abscess.
Work.
"Quaestiones celeberrimae in Genesim" was written as a commentary on the Book of Genesis and comprises uneven sections headed by verses from the first three chapters of that book. At first sight the book appears to be a collection of treatises on various miscellaneous topics. However Robert Lenoble has shown that the principle of unity in the work is a polemic against magical and divinatory arts, cabalism, and animistic and pantheistic philosophies.
Mersenne was concerned with the teachings of some Italian naturalists that all things happened naturally and were determined astrologically; for example, the nomological determinism of Lucilio Vanini ("God acts on sublunary beings (humans) using the sky as a tool"), and Gerolamo Cardano's idea that martyrs and heretics were compelled to self-harm by the stars. Historian of science William Ashworth explains: "Miracles, for example, were endangered by the naturalists, because in a world filled with sympathies and occult forces—with what Lenoble calls a "spontanéité indéfinie"—anything could happen naturally."
Mersenne mentions Martin Del Rio's "Investigations into Magic" and criticises Marsilio Ficino for claiming power for images and characters. He condemns astral magic and astrology and the "anima mundi", a concept popular amongst Renaissance neo-platonists. Whilst allowing for a mystical interpretation of the Cabala, he wholeheartedly condemned its magical application, particularly angelology. He also criticises Pico della Mirandola, Cornelius Agrippa, Francesco Giorgio and Robert Fludd, his main target.
"Harmonie universelle" is perhaps Mersenne's most influential work. It is one of the earliest comprehensive works on music theory, touching on a wide range of musical concepts, and especially the mathematical relationships involved in music. The work contains the earliest formulation of what has become known as Mersenne's laws, which describe the frequency of oscillation of a stretched string. This frequency is:
The formula for the lowest frequency is
formula_0
where "f" is the frequency [Hz], "L" is the length [m], "F" is the force [N] and μ is the mass per unit length [kg/m].
In this book, Mersenne also introduced several innovative concepts that can be considered the basis of modern reflecting telescopes.
Because of criticism that he encountered, especially from Descartes, Mersenne made no attempt to build a telescope of his own.
Mersenne is also remembered today thanks to his association with the Mersenne primes. The Mersenne Twister, named for Mersenne primes, is frequently used in computer engineering and in related fields such as cryptography.
However, Mersenne was not primarily a mathematician; he wrote about music theory and other subjects. He edited works of Euclid, Apollonius, Archimedes, and other Greek mathematicians. But perhaps his most important contribution to the advance of learning was his extensive correspondence (in Latin) with mathematicians and other scientists in many countries. At a time when the scientific journal had not yet come into being, Mersenne was the centre of a network for exchange of information.
It has been argued that Mersenne used his lack of mathematical specialty, his ties to the print world, his legal acumen, and his friendship with the French mathematician and philosopher René Descartes (1596–1650) to manifest his international network of mathematicians.
Mersenne's philosophical works are characterized by wide scholarship and the narrowest theological orthodoxy. His greatest service to philosophy was his enthusiastic defence of Descartes, whose agent he was in Paris and whom he visited in exile in the Netherlands. He submitted to various eminent Parisian thinkers a manuscript copy of the "Meditations on First Philosophy", and defended its orthodoxy against numerous clerical critics.
In later life, he gave up speculative thought and turned to scientific research, especially in mathematics, physics and astronomy. In this connection, his best known work is "Harmonie universelle" of 1636, dealing with the theory of music and musical instruments. It is regarded as a source of information on 17th-century music, especially French music and musicians, to rival even the works of Pietro Cerone.
One of his many contributions to musical tuning theory was the suggestion of
formula_1
as the ratio for an equally-tempered semitone (formula_2). It was more accurate (0.44 cents sharp) than Vincenzo Galilei's 18/17 (1.05 cents flat), and could be constructed using straightedge and compass. Mersenne's description in the 1636 "Harmonie universelle" of the first absolute determination of the frequency of an audible tone (at 84 Hz) implies that he had already demonstrated that the absolute-frequency ratio of two vibrating strings, radiating a musical tone and its octave, is 1 : 2. The perceived harmony (consonance) of two such notes would be explained if the ratio of the air oscillation frequencies is also 1 : 2, which in turn is consistent with the source-air-motion-frequency-equivalence hypothesis.
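The accuracy figures quoted above can be checked directly; the following short sketch (an illustration, not from the original source) compares Mersenne's ratio and Galilei's 18/17 with the exact equal-tempered semitone, measured in cents:

```python
import math

def cents(ratio):
    """Size of a frequency ratio in cents (1200 cents per octave)."""
    return 1200.0 * math.log2(ratio)

mersenne_ratio = (2.0 / (3.0 - math.sqrt(2.0))) ** 0.25   # Mersenne's compass-constructible ratio
galilei_ratio = 18.0 / 17.0                               # Vincenzo Galilei's approximation

print(f"Mersenne: {cents(mersenne_ratio) - 100.0:+.2f} cents from the equal-tempered semitone")
print(f"Galilei : {cents(galilei_ratio) - 100.0:+.2f} cents from the equal-tempered semitone")
# Output is approximately +0.44 and -1.05 cents, matching the figures quoted above.
```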
He also performed extensive experiments to determine the acceleration of falling objects by comparing them with the swing of pendulums, reported in his "Cogitata Physico-Mathematica" in 1644. He was the first to measure the length of the seconds pendulum, that is a pendulum whose swing takes one second, and the first to observe that a pendulum's swings are not isochronous as Galileo thought, but that large swings take longer than small swings.
Battles with occult and mystical thinkers.
Two German pamphlets that circulated around Europe in 1614–15, "Fama fraternitatis" and "Confessio Fraternitatis", claimed to be manifestos of a highly select, secret society of alchemists and sages called the Brotherhood of Rosicrucians. The books were allegories, but were obviously written by a small group who were reasonably knowledgeable about the sciences of the day, and their main theme was to promote educational reform (they were anti-Aristotelian). These pamphlets also promoted an occult view of science containing elements of Paracelsian philosophy, neo-Platonism, Christian Cabala and Hermeticism. In effect, they sought to establish a new form of scientific religion with some pre-Christian elements.
Mersenne led the fight against acceptance of these ideas, particularly those of Rosicrucian promoter Robert Fludd, who had a lifelong battle of words with Johannes Kepler. Fludd responded with "Sophia cum moria certamen" (1626), wherein he discusses his involvement with the Rosicrucians. The anonymous "Summum bonum" (1629), another critique of Mersenne, is a Rosicrucian-themed text. The cabalist Jacques Gaffarel joined Fludd's side, while Pierre Gassendi defended Mersenne.
The Rosicrucian ideas were defended by many prominent men of learning, and some members of the European scholarly community boosted their own prestige by claiming to be among the selected members of the Brotherhood. However, it is now generally agreed among historians that there is no evidence that an order of Rosicrucians existed at the time, with later Rosicrucian Orders drawing on the name, with no relation to the writers of the Rosicrucian Manifestoes.
During the mid-1630s Mersenne gave up the search for physical causes in the Aristotelian sense (rejecting the idea of "essences", which were still favoured by the scholastic philosophers) and taught that true physics could be only a descriptive science of motions ("Mécanisme"), which was the direction set by Galileo Galilei. Mersenne had been a regular correspondent with Galileo and had extended the work on vibrating strings originally developed by his father, Vincenzo Galilei.
Music.
An air attributed to Mersenne was used by Ottorino Respighi in his second suite of "Ancient Airs and Dances".
References.
Explanatory notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
General and cited sources.
<templatestyles src="Refbegin/styles.css" />
Further reading.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": " f=\\frac{1}{2L}\\sqrt{\\frac{F}{\\mu}}, "
},
{
"math_id": 1,
"text": "\\sqrt[4]{\\frac{2}{3-\\sqrt{2}}}"
},
{
"math_id": 2,
"text": "\\sqrt[12]{2}"
}
]
| https://en.wikipedia.org/wiki?curid=72947 |
72949749 | Alexandrov's soap bubble theorem | Alexandrov's soap bubble theorem is a mathematical theorem from geometric analysis that characterizes a sphere through its mean curvature. The theorem was proven in 1958 by Alexander Danilovich Alexandrov. In his proof he introduced the method of moving planes, which has since been used successfully by many mathematicians in geometric analysis.
Soap bubble theorem.
Let formula_0 be a bounded connected domain whose boundary formula_1 is of class formula_2 and has constant mean curvature. Then formula_3 is a sphere.
{
"math_id": 0,
"text": "\\Omega\\subset \\mathbb{R}^n"
},
{
"math_id": 1,
"text": "\\Gamma=\\partial\\Omega"
},
{
"math_id": 2,
"text": "C^2"
},
{
"math_id": 3,
"text": "\\Gamma"
}
]
| https://en.wikipedia.org/wiki?curid=72949749 |
729572 | Bloch sphere | Geometrical representation of the pure state space of a two-level quantum mechanical system
In quantum mechanics and computing, the Bloch sphere is a geometrical representation of the pure state space of a two-level quantum mechanical system (qubit), named after the physicist Felix Bloch.
Mathematically each quantum mechanical system is associated with a separable complex Hilbert space formula_0. A pure state of a quantum system is represented by a non-zero vector formula_1 in formula_0. As the vectors formula_1 and formula_2 (with formula_3) represent the same state, the level of the quantum system corresponds to the dimension of the Hilbert space and pure states can be represented as equivalence classes, or, rays in a projective Hilbert space formula_4. For a two-dimensional Hilbert space, the space of all such states is the complex projective line formula_5 This is the Bloch sphere, which can be mapped to the Riemann sphere.
The Bloch sphere is a unit 2-sphere, with antipodal points corresponding to a pair of mutually orthogonal state vectors. The north and south poles of the Bloch sphere are typically chosen to correspond to the standard basis vectors formula_6 and formula_7, respectively, which in turn might correspond e.g. to the spin-up and spin-down states of an electron. This choice is arbitrary, however. The points on the surface of the sphere correspond to the pure states of the system, whereas the interior points correspond to the mixed states. The Bloch sphere may be generalized to an "n"-level quantum system, but then the visualization is less useful.
The natural metric on the Bloch sphere is the Fubini–Study metric. The mapping from the unit 3-sphere in the two-dimensional state space formula_8 to the Bloch sphere is the Hopf fibration, with each ray of spinors mapping to one point on the Bloch sphere.
Definition.
Given an orthonormal basis, any pure state formula_9 of a two-level quantum system can be written as a superposition of the basis vectors formula_6 and formula_7, where the coefficient of (or contribution from) each of the two basis vectors is a complex number. This means that the state is described by four real numbers. However, only the relative phase between the coefficients of the two basis vectors has any physical meaning (the phase of the quantum system is not directly measurable), so that there is redundancy in this description. We can take the coefficient of formula_6 to be real and non-negative. This allows the state to be described by only three real numbers, giving rise to the three dimensions of the Bloch sphere.
We also know from quantum mechanics that the total probability of the system has to be one:
formula_10, or equivalently formula_11.
Given this constraint, we can write formula_9 using the following representation:
formula_12, where formula_13 and formula_14.
The representation is always unique, because even though the value of formula_15 is not unique when formula_9 is one of the states formula_6 or formula_7 (see bra-ket notation), the point represented by formula_16 and formula_15 is unique.
The parameters formula_17 and formula_18, re-interpreted in spherical coordinates as respectively the colatitude with respect to the "z"-axis and the longitude with respect to the "x"-axis, specify a point
formula_19
on the unit sphere in formula_20.
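For illustration (not part of the original text), a short Python/NumPy sketch that builds the state from formula_16 and formula_15 and confirms that the Pauli expectation values reproduce the point formula_19 on the unit sphere:

```python
import numpy as np

# Pauli matrices.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def bloch_vector(theta, phi):
    """Bloch vector of |psi> = cos(theta/2)|0> + exp(i*phi) sin(theta/2)|1>."""
    psi = np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])
    return np.array([np.real(psi.conj() @ P @ psi) for P in (X, Y, Z)])

theta, phi = 1.1, 2.3
a = bloch_vector(theta, phi)
expected = np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])
print(np.allclose(a, expected), np.isclose(np.linalg.norm(a), 1.0))   # True True
```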
For mixed states, one considers the density operator. Any two-dimensional density operator ρ can be expanded using the identity I and the Hermitian, traceless Pauli matrices formula_21,
formula_22,
where formula_23 is called the Bloch vector.
It is this vector that indicates the point within the sphere that corresponds to a given mixed state. Specifically, as a basic feature of the Pauli vector, the eigenvalues of ρ are formula_24. Density operators must be positive-semidefinite, so it follows that formula_25.
For pure states, one then has
formula_26
consistent with the above.
As a consequence, the surface of the Bloch sphere represents all the pure states of a two-dimensional quantum system, whereas the interior corresponds to all the mixed states.
"u", "v", "w" representation.
The Bloch vector formula_27 can be represented in the following basis, with reference to the density operator formula_28:
formula_29
formula_30
formula_31
where
formula_32
This basis is often used in laser theory, where formula_33 is known as the population inversion. In this basis, the numbers formula_34 are the expectation values of the three Pauli matrices formula_35, allowing one to identify the three coordinates with the x, y and z axes.
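A brief sketch (illustrative only) that computes formula_34 both directly from the density-matrix entries and as the Pauli expectation values, and checks that a mixed state lies inside the Bloch ball:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def uvw_from_entries(rho):
    """(u, v, w) read off the density-matrix entries, as in the relations above."""
    u = 2.0 * np.real(rho[0, 1])
    v = 2.0 * np.imag(rho[1, 0])
    w = np.real(rho[0, 0] - rho[1, 1])
    return np.array([u, v, w])

def uvw_from_paulis(rho):
    """(u, v, w) as the expectation values tr(rho X), tr(rho Y), tr(rho Z)."""
    return np.real(np.array([np.trace(rho @ P) for P in (X, Y, Z)]))

# Example mixed state: a convex mixture of a pure state and |0><0|.
psi = np.array([0.6, 0.8j])
rho = 0.7 * np.outer(psi, psi.conj()) + 0.3 * np.diag([1.0, 0.0])

print(np.allclose(uvw_from_entries(rho), uvw_from_paulis(rho)))   # True
print(np.linalg.norm(uvw_from_entries(rho)) <= 1.0)               # True: inside the Bloch ball
```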
Pure states.
Consider an "n"-level quantum mechanical system. This system is described by an "n"-dimensional Hilbert space "H""n". The pure state space is by definition the set of rays of "H""n".
Theorem. Let U("n") be the Lie group of unitary matrices of size "n". Then the pure state space of "H""n" can be identified with the compact coset space
formula_36
To prove this fact, note that there is a natural group action of U("n") on the set of states of "H""n". This action is continuous and transitive on the pure states. For any state formula_9, the isotropy group of formula_9, (defined as the set of elements formula_37 of U("n") such that formula_38) is isomorphic to the product group
formula_39
In linear algebra terms, this can be justified as follows. Any formula_37 of U("n") that leaves formula_9 invariant must have formula_9 as an eigenvector. Since the corresponding eigenvalue must be a complex number of modulus 1, this gives the U(1) factor of the isotropy group. The other part of the isotropy group is parametrized by the unitary matrices on the orthogonal complement of formula_9, which is isomorphic to U("n" − 1). From this the assertion of the theorem follows from basic facts about transitive group actions of compact groups.
The important fact to note above is that the "unitary group acts transitively" on pure states.
Now the (real) dimension of U("n") is "n"2. This is easy to see since the exponential map
formula_40
is a local homeomorphism from the space of self-adjoint complex matrices to U("n"). The space of self-adjoint complex matrices has real dimension "n"2.
Corollary. The real dimension of the pure state space of "H""n" is 2"n" − 2.
In fact,
formula_41
Let us apply this to consider the real dimension of an "m" qubit quantum register. The corresponding Hilbert space has dimension 2"m".
Corollary. The real dimension of the pure state space of an "m"-qubit quantum register is 2"m"+1 − 2.
Plotting pure two-spinor states through stereographic projection.
Mathematically the Bloch sphere for a two-spinor state can be mapped to a Riemann sphere formula_46, i.e., the projective Hilbert space formula_47 of the 2-dimensional complex Hilbert space formula_48, which is a representation space of SO(3).
Given a pure state
formula_49
where formula_50 and formula_51 are complex numbers which are normalized so that
formula_52
and such that formula_53 and formula_54,
i.e., such that formula_42 and formula_43 form a basis and have diametrically opposite representations on the Bloch sphere, then let
formula_55
be their ratio.
If the Bloch sphere is thought of as being embedded in formula_20 with its center at the origin and with radius one, then the plane "z" = 0 (which intersects the Bloch sphere at a great circle; the sphere's equator, as it were) can be thought of as an Argand diagram. Plot point "u" in this plane — so that in formula_20 it has coordinates formula_45.
Draw a straight line through "u" and through the point on the sphere that represents formula_43. (Let (0,0,1) represent formula_42 and (0,0,−1) represent formula_43.) This line intersects the sphere at another point besides formula_43. (The only exception is when formula_56, i.e., when formula_57 and formula_58.) Call this point "P". Point "u" on the plane "z" = 0 is the stereographic projection of point "P" on the Bloch sphere. The vector with tail at the origin and tip at "P" is the direction in 3-D space corresponding to the spinor formula_44. The coordinates of "P" are
formula_59
formula_60
formula_61
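The correspondence can be verified numerically; the sketch below (an illustration, not from the original text) checks that the point "P" obtained from the ratio formula_55 coincides with the Bloch vector computed from the Pauli expectation values of the same spinor:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

# An arbitrary normalized spinor (alpha, beta).
alpha, beta = 0.6 + 0.2j, 0.5 - 0.6j
norm = np.sqrt(abs(alpha) ** 2 + abs(beta) ** 2)
alpha, beta = alpha / norm, beta / norm

# Point P from the inverse stereographic projection of u = beta/alpha.
u = beta / alpha
ux, uy = u.real, u.imag
d = 1.0 + ux ** 2 + uy ** 2
P = np.array([2.0 * ux / d, 2.0 * uy / d, (1.0 - ux ** 2 - uy ** 2) / d])

# Bloch vector of the same state from the Pauli expectation values.
psi = np.array([alpha, beta])
a = np.real(np.array([psi.conj() @ M @ psi for M in (X, Y, Z)]))
print(np.allclose(P, a))   # True
```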
Density operators.
Formulations of quantum mechanics in terms of pure states are adequate for isolated systems; in general quantum mechanical systems need to be described in terms of density operators. The Bloch sphere parametrizes not only pure states but mixed states for 2-level systems. The density operator describing the mixed-state of a 2-level quantum system (qubit) corresponds to a point "inside" the Bloch sphere with the following coordinates:
formula_62
where formula_63 is the probability of the individual states within the ensemble and formula_64 are the coordinates of the individual states (on the "surface" of Bloch sphere). The set of all points on and inside the Bloch sphere is known as the "Bloch ball."
For states of higher dimensions there is difficulty in extending this to mixed states. The topological description is complicated by the fact that the unitary group does not act transitively on density operators. The orbits moreover are extremely diverse as follows from the following observation:
Theorem. Suppose "A" is a density operator on an "n" level quantum mechanical system whose distinct eigenvalues are μ1, ..., μ"k" with multiplicities "n"1, ..., "n""k". Then the group of unitary operators "V" such that "V A V"* = "A" is isomorphic (as a Lie group) to
formula_65
In particular the orbit of "A" is isomorphic to
formula_66
It is possible to generalize the construction of the Bloch ball to dimensions larger than 2, but the geometry of such a "Bloch body" is more complicated than that of a ball.
Rotations.
A useful advantage of the Bloch sphere representation is that the evolution of the qubit state is describable by rotations of the Bloch sphere. The most concise explanation for why this is the case is that the Lie algebra of the special unitary group formula_67 is isomorphic to the Lie algebra of the group of three-dimensional rotations formula_68.
Rotation operators about the Bloch basis.
The rotations of the Bloch sphere about the Cartesian axes in the Bloch basis are given by
formula_69
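The following short sketch (illustrative only) constructs these rotation operators from the closed form above and checks, for example, that conjugation by the z-rotation turns the operator X into cos(θ)X + sin(θ)Y, i.e. that Bloch vectors are rotated about the z axis:

```python
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def R(pauli, theta):
    """exp(-i*theta*P/2) = cos(theta/2) I - i sin(theta/2) P for a Pauli matrix P."""
    return np.cos(theta / 2) * I2 - 1j * np.sin(theta / 2) * pauli

theta = 0.9
Rz = R(Z, theta)

# Rz matches the explicit diagonal form given above.
print(np.allclose(Rz, np.diag([np.exp(-1j * theta / 2), np.exp(1j * theta / 2)])))
# Conjugation by Rz rotates X into cos(theta) X + sin(theta) Y.
print(np.allclose(Rz @ X @ Rz.conj().T, np.cos(theta) * X + np.sin(theta) * Y))
```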
Rotations about a general axis.
If formula_70 is a real unit vector in three dimensions, the rotation of the Bloch sphere about this axis is given by:
formula_71
An interesting thing to note is that this expression is identical under relabelling to the extended Euler formula for quaternions.
formula_72
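A short sketch of the general-axis rotation (illustrative; it uses SciPy's matrix exponential), checking that the exponential definition agrees with the closed form cos(θ/2)I - i sin(θ/2) n·σ that follows from (n·σ)² = I:

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def rotation(n, theta):
    """R_n(theta) = exp(-i * theta * (n . sigma) / 2) for a real unit vector n."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    n_sigma = n[0] * X + n[1] * Y + n[2] * Z
    return expm(-1j * theta * n_sigma / 2)

n, theta = np.array([1.0, 1.0, 1.0]) / np.sqrt(3.0), 0.7
Rn = rotation(n, theta)

n_sigma = n[0] * X + n[1] * Y + n[2] * Z
closed_form = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * n_sigma
print(np.allclose(Rn, closed_form))              # True, since (n.sigma)^2 = I
print(np.allclose(Rn @ Rn.conj().T, np.eye(2)))  # unitary
```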
Derivation of the Bloch rotation generator.
Ballentine presents an intuitive derivation for the infinitesimal unitary transformation. This is important for understanding why the rotations of the Bloch sphere are exponentials of linear combinations of Pauli matrices. Hence a brief treatment of it is given here. A more complete description in a quantum mechanical context can be found here.
Consider a family of unitary operators formula_73 representing a rotation about some axis. Since the rotation has one degree of freedom, the operator acts on a field of scalars formula_74 such that:
formula_75
formula_76
where formula_77
We define the infinitesimal unitary as the Taylor expansion truncated at second order.
formula_78
By the unitary condition:
formula_79
Hence
formula_80
For this equality to hold true (assuming formula_81 is negligible) we require
formula_82.
This results in a solution of the form:
formula_83
where formula_84 is any Hermitian transformation, and is called the generator of the unitary family.
Hence
formula_85
Since the Pauli matrices formula_86 are unitary Hermitian matrices and have eigenvectors corresponding to the Bloch basis, formula_87, we can naturally see how a rotation of the Bloch sphere about an arbitrary axis formula_88 is described by
formula_89
with the rotation generator given by formula_90
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H"
},
{
"math_id": 1,
"text": "\\psi "
},
{
"math_id": 2,
"text": "\\lambda \\psi "
},
{
"math_id": 3,
"text": "\\lambda \\in \\mathbb{C}"
},
{
"math_id": 4,
"text": "\\mathbf{P}(H_{n})=\\mathbb{C}\\mathbf{P}^{n-1}"
},
{
"math_id": 5,
"text": "\\mathbb{C}\\mathbf{P}^1."
},
{
"math_id": 6,
"text": "|0\\rangle"
},
{
"math_id": 7,
"text": "|1\\rangle"
},
{
"math_id": 8,
"text": "\\mathbb{C}^2"
},
{
"math_id": 9,
"text": "|\\psi\\rangle"
},
{
"math_id": 10,
"text": "\\langle\\psi | \\psi\\rangle = 1"
},
{
"math_id": 11,
"text": "\\big\\| |\\psi\\rangle \\big\\|^2 = 1"
},
{
"math_id": 12,
"text": "\n |\\psi\\rangle =\n \\cos\\left(\\theta /2\\right) |0 \\rangle \\, + \\, e^{i\\phi} \\sin\\left(\\theta /2\\right) |1\\rangle =\n \\cos\\left(\\theta /2\\right) |0 \\rangle \\, + \\, (\\cos\\phi + i\\sin\\phi) \\, \\sin\\left(\\theta /2\\right) |1\\rangle "
},
{
"math_id": 13,
"text": " 0 \\leq \\theta \\leq \\pi"
},
{
"math_id": 14,
"text": "0 \\leq \\phi < 2 \\pi"
},
{
"math_id": 15,
"text": "\\phi"
},
{
"math_id": 16,
"text": "\\theta"
},
{
"math_id": 17,
"text": "\\theta\\,"
},
{
"math_id": 18,
"text": "\\phi\\,"
},
{
"math_id": 19,
"text": "\\vec{a} = (\\sin\\theta \\cos\\phi,\\; \\sin\\theta \\sin\\phi,\\; \\cos\\theta) = (u, v, w)"
},
{
"math_id": 20,
"text": "\\mathbb{R}^3"
},
{
"math_id": 21,
"text": "\\vec{\\sigma}"
},
{
"math_id": 22,
"text": "\\begin{align}\n \\rho &= \\frac{1}{2}\\left(I + \\vec{a} \\cdot \\vec{\\sigma}\\right) \\\\\n &= \\frac{1}{2}\\begin{pmatrix}\n 1 & 0 \\\\\n 0 & 1\n \\end{pmatrix} +\n \\frac{a_x}{2}\\begin{pmatrix}\n 0 & 1 \\\\\n 1 & 0\n \\end{pmatrix} +\n \\frac{a_y}{2}\\begin{pmatrix}\n 0 & -i \\\\\n i & 0\n \\end{pmatrix} + \n \\frac{a_z}{2}\\begin{pmatrix}\n 1 & 0 \\\\\n 0 & -1\n \\end{pmatrix} \\\\\n &= \\frac{1}{2}\\begin{pmatrix}\n 1 + a_z & a_x - ia_y \\\\\n a_x + ia_y & 1 - a_z\n \\end{pmatrix}\n\\end{align}"
},
{
"math_id": 23,
"text": "\\vec{a} \\in \\mathbb{R}^3"
},
{
"math_id": 24,
"text": "\\frac{1}{2}\\left(1 \\pm |\\vec{a}|\\right)"
},
{
"math_id": 25,
"text": "\\left|\\vec{a}\\right| \\le 1"
},
{
"math_id": 26,
"text": "\\operatorname{tr}\\left(\\rho^2\\right) = \\frac{1}{2}\\left(1 + \\left|\\vec{a}\\right|^2 \\right) = 1 \\quad \\Leftrightarrow \\quad \\left|\\vec{a}\\right| = 1 ~,"
},
{
"math_id": 27,
"text": "\\vec{a} = (u,v,w)"
},
{
"math_id": 28,
"text": "\\rho"
},
{
"math_id": 29,
"text": "u = \\rho_{10} + \\rho_{01} = 2 \\operatorname{Re}(\\rho_{01})"
},
{
"math_id": 30,
"text": "v = i(\\rho_{01} - \\rho_{10}) = 2 \\operatorname{Im}(\\rho_{10})"
},
{
"math_id": 31,
"text": "w = \\rho_{00} - \\rho_{11}"
},
{
"math_id": 32,
"text": "\\rho =\n \\begin{pmatrix} \\rho_{00} & \\rho_{01} \\\\ \\rho_{10} & \\rho_{11} \\end{pmatrix} =\n \\frac{1}{2}\\begin{pmatrix} 1+w & u-iv \\\\ u+iv & 1-w \\end{pmatrix}.\n"
},
{
"math_id": 33,
"text": "w"
},
{
"math_id": 34,
"text": "u, v, w"
},
{
"math_id": 35,
"text": "X, Y, Z"
},
{
"math_id": 36,
"text": " \\operatorname{U}(n) /(\\operatorname{U}(n - 1) \\times \\operatorname{U}(1)). "
},
{
"math_id": 37,
"text": "g"
},
{
"math_id": 38,
"text": "g |\\psi\\rangle = |\\psi\\rangle"
},
{
"math_id": 39,
"text": " \\operatorname{U}(n - 1) \\times \\operatorname{U}(1). "
},
{
"math_id": 40,
"text": " A \\mapsto e^{i A} "
},
{
"math_id": 41,
"text": " n^2 - \\left((n - 1)^2 + 1\\right) = 2n - 2. \\quad "
},
{
"math_id": 42,
"text": "\\left|\\uparrow\\right\\rangle"
},
{
"math_id": 43,
"text": "\\left|\\downarrow\\right\\rangle"
},
{
"math_id": 44,
"text": "\\left|\\nearrow\\right\\rangle"
},
{
"math_id": 45,
"text": "(u_x, u_y, 0)"
},
{
"math_id": 46,
"text": "\\mathbb{C}\\mathbf{P}^1"
},
{
"math_id": 47,
"text": "\\mathbf{P}(H_2)"
},
{
"math_id": 48,
"text": "H_2"
},
{
"math_id": 49,
"text": " \\alpha \\left|\\uparrow \\right\\rangle + \\beta \\left|\\downarrow \\right\\rangle = \\left|\\nearrow \\right\\rangle "
},
{
"math_id": 50,
"text": "\\alpha"
},
{
"math_id": 51,
"text": "\\beta"
},
{
"math_id": 52,
"text": " |\\alpha|^2 + |\\beta|^2 = \\alpha^* \\alpha + \\beta^* \\beta = 1"
},
{
"math_id": 53,
"text": "\\langle\\downarrow | \\uparrow\\rangle = 0"
},
{
"math_id": 54,
"text": "\\langle\\downarrow | \\downarrow\\rangle = \\langle\\uparrow | \\uparrow\\rangle = 1"
},
{
"math_id": 55,
"text": " u = {\\beta \\over \\alpha} = {\\alpha^* \\beta \\over \\alpha^* \\alpha} = {\\alpha^* \\beta \\over |\\alpha|^2} = u_x + i u_y"
},
{
"math_id": 56,
"text": "u = \\infty"
},
{
"math_id": 57,
"text": "\\alpha = 0"
},
{
"math_id": 58,
"text": "\\beta \\ne 0"
},
{
"math_id": 59,
"text": " P_x = {2 u_x \\over 1 + u_x^2 + u_y^2},"
},
{
"math_id": 60,
"text": "P_y = {2 u_y \\over 1 + u_x^2 + u_y^2},"
},
{
"math_id": 61,
"text": "P_z = {1 - u_x^2 - u_y^2 \\over 1 + u_x^2 + u_y^2}."
},
{
"math_id": 62,
"text": " \\left( \\sum p_i x_i, \\sum p_i y_i, \\sum p_i z_i \\right),"
},
{
"math_id": 63,
"text": "p_i"
},
{
"math_id": 64,
"text": "x_i, y_i, z_i"
},
{
"math_id": 65,
"text": "\\operatorname{U}(n_1) \\times \\cdots \\times \\operatorname{U}(n_k)."
},
{
"math_id": 66,
"text": "\\operatorname{U}(n)/\\left(\\operatorname{U}(n_1) \\times \\cdots \\times \\operatorname{U}(n_k)\\right)."
},
{
"math_id": 67,
"text": "SU(2)"
},
{
"math_id": 68,
"text": "SO(3)"
},
{
"math_id": 69,
"text": "\\begin{align}\n R_x(\\theta) &= e^{(-i \\theta X/2)} = \\cos(\\theta /2)I - i\\sin(\\theta/2)X =\n \\begin{bmatrix}\n \\cos \\theta/2 & -i \\sin \\theta/2 \\\\\n -i \\sin \\theta/2 & \\cos \\theta/2\n \\end{bmatrix} \\\\\n R_y(\\theta) &= e^{(-i \\theta Y/2)} = \\cos(\\theta /2)I - i\\sin(\\theta/2)Y =\n \\begin{bmatrix}\n \\cos \\theta/2 & -\\sin \\theta/2 \\\\\n \\sin \\theta/2 & \\cos \\theta/2\n \\end{bmatrix} \\\\\n R_z(\\theta) &= e^{(-i \\theta Z/2)} = \\cos(\\theta /2)I - i\\sin(\\theta/2)Z =\n \\begin{bmatrix}\n e^{-i \\theta/2} & 0 \\\\\n 0 & e^{i \\theta/2}\n \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 70,
"text": " \\hat{n} = (n_x, n_y, n_z) "
},
{
"math_id": 71,
"text": " R_{\\hat{n}}(\\theta) = \\exp\\left(-i\\theta\\hat{n} \\cdot \\frac{1}{2}\\vec{\\sigma}\\right) "
},
{
"math_id": 72,
"text": " \\mathbf{q} =\n e^{\\frac{1}{2}\\theta(u_x\\mathbf{i} + u_y\\mathbf{j} + u_z\\mathbf{k})} =\n \\cos \\frac{\\theta}{2} + (u_x\\mathbf{i} + u_y\\mathbf{j} + u_z\\mathbf{k}) \\sin \\frac{\\theta}{2}\n"
},
{
"math_id": 73,
"text": "U"
},
{
"math_id": 74,
"text": "S"
},
{
"math_id": 75,
"text": " U(0) = I "
},
{
"math_id": 76,
"text": " U(s_1 + s_2) = U(s_1)U(s_2) "
},
{
"math_id": 77,
"text": " 0, s_1, s_2, \\in S "
},
{
"math_id": 78,
"text": " U(s) = I + \\frac{dU}{ds} \\Bigg|_{s=0} s + O\\left(s^2\\right) "
},
{
"math_id": 79,
"text": " U^{\\dagger}U = I "
},
{
"math_id": 80,
"text": " U^{\\dagger}U = I + s\\left(\\frac{dU}{ds}\\Bigg|_{s=0} + \\frac{dU^{\\dagger}}{ds}\\Bigg|_{s=0}\\right) + O\\left(s^2\\right) = I "
},
{
"math_id": 81,
"text": "O\\left(s^2\\right)"
},
{
"math_id": 82,
"text": "\\frac{dU}{ds} \\Bigg|_{s=0} + \\frac{dU^{\\dagger}}{ds} \\Bigg|_{s=0}= 0"
},
{
"math_id": 83,
"text": " \\frac{dU}{ds} \\Bigg|_{s=0} = iK "
},
{
"math_id": 84,
"text": "K"
},
{
"math_id": 85,
"text": " U(s) = e^{iKs} "
},
{
"math_id": 86,
"text": "(\\sigma_x, \\sigma_y, \\sigma_z)"
},
{
"math_id": 87,
"text": "(\\hat{x}, \\hat{y}, \\hat{z})"
},
{
"math_id": 88,
"text": "\\hat{n}"
},
{
"math_id": 89,
"text": " R_{\\hat{n}}(\\theta) = \\exp(-i \\theta \\hat{n} \\cdot \\vec{\\sigma}/2) "
},
{
"math_id": 90,
"text": "K = \\hat{n} \\cdot \\vec{\\sigma}/2. "
}
]
| https://en.wikipedia.org/wiki?curid=729572 |
72958957 | Cliquish function | Definition of cliquish function
In mathematics, the notion of a cliquish function is similar to, but weaker than, the notion of a continuous function and quasi-continuous function. All (quasi-)continuous functions are cliquish but the converse is not true in general.
Definition.
Let formula_0 be a topological space. A real-valued function formula_1 is cliquish at a point formula_2 if for any formula_3 and any open neighborhood formula_4 of formula_5 there is a non-empty open set formula_6 such that
formula_7
Note that in the above definition, it is not necessary that formula_8.
Example.
Consider the function formula_13 defined by formula_14 whenever formula_15 and formula_16 whenever formula_17. Clearly f is continuous everywhere except at x=0, and thus cliquish everywhere except (at most) at x=0. At x=0, take any open neighborhood U of x. Then there exists an open set formula_6 such that formula_18. Clearly this yields formula_19, and thus f is cliquish at x=0 as well.
In contrast, the function formula_20 defined by formula_21 whenever formula_22 is a rational number and formula_23 whenever formula_22 is an irrational number is nowhere cliquish, since every nonempty open set formula_24 contains some formula_25 with formula_26. | [
{
"math_id": 0,
"text": " X "
},
{
"math_id": 1,
"text": " f:X \\rightarrow \\mathbb{R} "
},
{
"math_id": 2,
"text": " x \\in X "
},
{
"math_id": 3,
"text": " \\epsilon > 0 "
},
{
"math_id": 4,
"text": " U "
},
{
"math_id": 5,
"text": " x "
},
{
"math_id": 6,
"text": " G \\subset U "
},
{
"math_id": 7,
"text": " |f(y) - f(z)| < \\epsilon \\;\\;\\;\\; \\forall y,z \\in G "
},
{
"math_id": 8,
"text": " x \\in G "
},
{
"math_id": 9,
"text": " f: X \\rightarrow \\mathbb{R} "
},
{
"math_id": 10,
"text": " f"
},
{
"math_id": 11,
"text": " g: X \\rightarrow \\mathbb{R} "
},
{
"math_id": 12,
"text": " f+g "
},
{
"math_id": 13,
"text": " f: \\mathbb{R} \\rightarrow \\mathbb{R} "
},
{
"math_id": 14,
"text": " f(x) = 0 "
},
{
"math_id": 15,
"text": " x \\leq 0 "
},
{
"math_id": 16,
"text": " f(x) = 1 "
},
{
"math_id": 17,
"text": " x > 0 "
},
{
"math_id": 18,
"text": " y,z < 0 \\; \\forall y,z \\in G "
},
{
"math_id": 19,
"text": " |f(y) - f(z)| = 0 \\; \\forall y \\in G"
},
{
"math_id": 20,
"text": " g: \\mathbb{R} \\rightarrow \\mathbb{R} "
},
{
"math_id": 21,
"text": " g(x) = 0 "
},
{
"math_id": 22,
"text": " x"
},
{
"math_id": 23,
"text": " g(x) = 1 "
},
{
"math_id": 24,
"text": "G"
},
{
"math_id": 25,
"text": "y_1, y_2"
},
{
"math_id": 26,
"text": "|g(y_1) - g(y_2)| = 1"
}
]
| https://en.wikipedia.org/wiki?curid=72958957 |
72959682 | Bad control | Bad control variables in statistics
In statistics, bad controls are variables that introduce an unintended discrepancy between regression coefficients and the effects that said coefficients are supposed to measure. These are contrasted with confounders, which are "good controls" and need to be included to remove omitted-variable bias. The issue arises when a bad control is an outcome variable (or similar to one) in a causal model, so that adjusting for it would eliminate part of the desired causal path. In other words, bad controls are effectively dependent variables in the model under consideration. Angrist and Pischke (2008) additionally differentiate two types of bad controls: a simple bad-control scenario, and a proxy-control scenario where the included variable partially controls for omitted factors but is partially affected by the variable of interest. Pearl (1995) provides a graphical method for determining good controls using causality diagrams and the back-door criterion and front-door criterion.
Examples.
"Simple" bad control.
A simplified example studies the effect of education formula_1 on wages formula_2. In this thought experiment, two levels of education formula_1 are possible: lower and higher, and two types of jobs formula_0 are performed: white-collar and blue-collar work. When considering the causal effect of education on the wages of an individual, it might be tempting to control for the work type formula_0; however, work type is a mediator (formula_3) in the causal relationship between education and wages (see causal diagram), and thus controlling for it precludes causal inference from the regression coefficients.
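A small simulation (a hypothetical data-generating process chosen for illustration, not taken from the cited literature) makes the point concrete: education raises wages both directly and through work type, and conditioning on work type removes the part of the effect that operates through work type, so the coefficient no longer estimates the total causal effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

educ = rng.binomial(1, 0.5, n)                                        # E: lower (0) or higher (1) education
white_collar = rng.binomial(1, 0.2 + 0.6 * educ)                      # T: work type, caused by E
wage = 10.0 + 2.0 * educ + 5.0 * white_collar + rng.normal(0, 1, n)   # W

# Regression of W on E alone recovers the total causal effect, 2 + 0.6*5 = 5.
X_good = np.column_stack([np.ones(n), educ])
print(np.linalg.lstsq(X_good, wage, rcond=None)[0][1])

# "Bad control": adding the mediator T shrinks the coefficient toward the direct effect (about 2).
X_bad = np.column_stack([np.ones(n), educ, white_collar])
print(np.linalg.lstsq(X_bad, wage, rcond=None)[0][1])
```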
Bad proxy-control.
Another example of bad control arises when attempting to control for innate ability while estimating the effect of education formula_1 on wages formula_2. In this example, innate ability formula_5 (thought of as, for example, IQ at pre-school age) is a variable influencing wages formula_2, but its value is unavailable to researchers at the time of estimation. Instead, they choose before-work IQ test scores formula_4, or late ability, as a proxy variable to estimate innate ability and perform the regression from education to wages adjusting for late ability. Unfortunately, late ability (in this thought experiment) is causally determined by education and innate ability, and by controlling for it the researchers introduce collider bias into their model by opening a back-door path formula_6 that was previously not present. On the other hand, if both links formula_7 and formula_8 are strong, one can expect a strong (non-causal) correlation between formula_5 and formula_1 and thus a large omitted-variable bias if formula_5 is not controlled for. This issue, however, is separate from the causality problem. | [
{
"math_id": 0,
"text": "T"
},
{
"math_id": 1,
"text": "E"
},
{
"math_id": 2,
"text": "W"
},
{
"math_id": 3,
"text": "E \\to T \\to W"
},
{
"math_id": 4,
"text": "L"
},
{
"math_id": 5,
"text": "I"
},
{
"math_id": 6,
"text": "E \\to L \\leftarrow I\\to W"
},
{
"math_id": 7,
"text": "E \\to L"
},
{
"math_id": 8,
"text": "I \\to L"
}
]
| https://en.wikipedia.org/wiki?curid=72959682 |
72966849 | Clarke generalized derivative | Types of generalized derivatives
In mathematics, the Clarke generalized derivatives are types of generalized derivatives that allow for the differentiation of nonsmooth functions. The Clarke derivatives were introduced by Francis Clarke in 1975.
Definitions.
For a locally Lipschitz continuous function formula_0 the "Clarke generalized directional derivative" of formula_1 at formula_2 in the direction formula_3 is defined as
formula_4
where formula_5 denotes the limit supremum.
Then, using the above definition of formula_6, the "Clarke generalized gradient" of formula_1 at formula_7 (also called the "Clarke subdifferential") is given as
formula_8
where formula_9 represents an inner product of vectors in formula_10
Note that the Clarke generalized gradient is set-valued—that is, at each formula_11 the function value formula_12 is a set.
More generally, given a Banach space formula_13 and a subset formula_14 the Clarke generalized directional derivative and generalized gradients are defined as above for a locally Lipschitz continuous function formula_15
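As an illustration (not part of the formal definition), the Clarke generalized directional derivative can be approximated numerically by sampling difference quotients with base points near formula_7 and small positive step sizes and taking their supremum. The sketch below does this for the absolute value function at 0, where the Clarke derivative in either direction equals 1 and the Clarke subdifferential is the interval [-1, 1]; the fixed sampling radius is a crude stand-in for the limit superior.

```python
import numpy as np

def clarke_directional(f, x, v, radius=1e-3, n_samples=200_000, seed=0):
    """Crude Monte Carlo estimate of the Clarke generalized directional derivative
    f°(x, v) = limsup_{y -> x, h -> 0+} (f(y + h*v) - f(y)) / h,
    obtained by sampling y near x and small h > 0 and taking the maximum quotient."""
    rng = np.random.default_rng(seed)
    y = x + radius * rng.uniform(-1.0, 1.0, n_samples)
    h = radius * rng.uniform(1e-6, 1.0, n_samples)
    quotients = (f(y + h * v) - f(y)) / h
    return quotients.max()

f = np.abs
print(clarke_directional(f, 0.0, 1.0))    # approximately 1.0
print(clarke_directional(f, 0.0, -1.0))   # also approximately 1.0
```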
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f: \\mathbb{R}^{n} \\rightarrow \\mathbb{R},"
},
{
"math_id": 1,
"text": "f"
},
{
"math_id": 2,
"text": "x \\in \\mathbb{R}^n"
},
{
"math_id": 3,
"text": "v \\in \\mathbb{R}^n"
},
{
"math_id": 4,
"text": "\nf^{\\circ} (x, v)= \\limsup_{y \\rightarrow x, h \\downarrow 0} \\frac{f(y+ hv)-f(y)}{h},\n"
},
{
"math_id": 5,
"text": "\\limsup"
},
{
"math_id": 6,
"text": "f^{\\circ}"
},
{
"math_id": 7,
"text": "x"
},
{
"math_id": 8,
"text": "\n\\partial^{\\circ}\\! f(x):=\\{\\xi \\in \\mathbb{R}^{n}: \\langle\\xi, v\\rangle \\leq f^{\\circ}(x, v), \\forall v \\in \\mathbb{R}^{n}\\},\n"
},
{
"math_id": 9,
"text": "\\langle \\cdot, \\cdot\\rangle"
},
{
"math_id": 10,
"text": "\\mathbb{R}."
},
{
"math_id": 11,
"text": "x \\in \\mathbb{R}^n,"
},
{
"math_id": 12,
"text": "\\partial^{\\circ}\\! f(x)"
},
{
"math_id": 13,
"text": "X"
},
{
"math_id": 14,
"text": "Y \\subset X,"
},
{
"math_id": 15,
"text": "f : Y \\to \\mathbb{R}."
}
]
| https://en.wikipedia.org/wiki?curid=72966849 |
72967 | Cubic centimetre | Unit of volume
<templatestyles src="Template:Infobox/styles-images.css" />
A cubic centimetre (or cubic centimeter in US English) (SI unit symbol: cm3; non-SI abbreviations: cc and ccm) is a commonly used unit of volume that corresponds to the volume of a cube that measures 1 cm × 1 cm × 1 cm. One cubic centimetre corresponds to a volume of one millilitre. The mass of one cubic centimetre of water at 3.98 °C (the temperature at which it attains its maximum density) is almost equal to one gram.
In internal combustion engines, "cc" refers to the total volume of its engine displacement in cubic centimetres. The displacement can be calculated using the formula
formula_0
where d is engine displacement, b is the bore of the cylinders, s is length of the stroke and n is the number of cylinders.
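As a rough illustration (the figures below are made up and do not refer to any specific engine), the formula can be applied directly:

```python
import math

def displacement_cc(bore_cm, stroke_cm, cylinders):
    """Engine displacement in cubic centimetres: d = (pi/4) * b^2 * s * n."""
    return (math.pi / 4.0) * bore_cm ** 2 * stroke_cm * cylinders

# A four-cylinder engine with an 8.6 cm bore and an 8.6 cm stroke:
print(f"{displacement_cc(8.6, 8.6, 4):.0f} cc")   # roughly 1998 cc, i.e. about 2.0 litres
```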
Conversions.
1 millilitre = 1 cm3
1 litre = 1000 cm3
1 cubic inch = 16.387064 cm3
Unicode character.
The "cubic centimetre" symbol is encoded by Unicode at code point .
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "d = {\\pi \\over 4} \\times b^2 \\times s \\times n"
}
]
| https://en.wikipedia.org/wiki?curid=72967 |
7297179 | Statistical potential | In protein structure prediction, statistical potentials or knowledge-based potentials are scoring functions derived from an analysis of known protein structures in the Protein Data Bank (PDB).
The original method to obtain such potentials is the "quasi-chemical approximation", due to Miyazawa and Jernigan. It was later followed by the "potential of mean force" (statistical PMF), developed by Sippl. Although the obtained scores are often considered as approximations of the free energy—thus referred to as "pseudo-energies"—this physical interpretation is incorrect. Nonetheless, they are applied with success in many cases, because they frequently correlate with actual Gibbs free energy differences.
Overview.
A pseudo-energy can be assigned to various structural features of a protein model.
The classic application is, however, based on pairwise amino acid contacts or distances, thus producing statistical interatomic potentials. For pairwise amino acid contacts, a statistical potential is formulated as an interaction matrix that assigns a weight or energy value to each possible pair of standard amino acids. The energy of a particular structural model is then the combined energy of all pairwise contacts (defined as two amino acids within a certain distance of each other) in the structure. The energies are determined using statistics on amino acid contacts in a database of known protein structures (obtained from the PDB).
History.
Initial development.
Many textbooks present the statistical PMFs as proposed by Sippl as a simple consequence of the Boltzmann distribution, as applied to pairwise distances between amino acids. This is incorrect, but a useful start to introduce the construction of the potential in practice.
The Boltzmann distribution applied to a specific pair of amino acids,
is given by:
formula_0
where formula_1 is the distance, formula_2 is the Boltzmann constant, formula_3 is
the temperature and formula_4 is the partition function, with
formula_5
The quantity formula_6 is the free energy assigned to the pairwise system.
Simple rearrangement results in the "inverse Boltzmann formula",
which expresses the free energy formula_6 as a function of formula_7:
formula_8
To construct a PMF, one then introduces a so-called "reference state" with a corresponding distribution formula_9 and partition function
formula_10, and calculates the following free energy difference:
formula_11
The reference state typically results from a hypothetical
system in which the specific interactions between the amino acids
are absent. The second term involving formula_4 and
formula_10 can be ignored, as it is a constant.
In practice, formula_7 is estimated from the database of known protein
structures, while formula_12 typically results from calculations
or simulations. For example, formula_7 could be the conditional probability
of finding the formula_13 atoms of a valine and a serine at a given
distance formula_1 from each other, giving rise to the free energy difference
formula_14. The total free energy difference of a protein,
formula_15, is then claimed to be the sum
of all the pairwise free energies:
formula_16
where the sum runs over all amino acid pairs formula_17
(with formula_18) and formula_19 is their corresponding distance. In many studies formula_9 does not depend on the amino acid sequence.
Conceptual issues.
Intuitively, it is clear that a low value for formula_15 indicates
that the set of distances in a structure is more likely in proteins than
in the reference state. However, the physical meaning of these statistical PMFs has
been widely disputed since their introduction; the main issues are discussed below.
Controversial analogy.
In response to the issue regarding the physical validity, the first justification of statistical PMFs was attempted by Sippl. It was based on an analogy with the statistical physics of liquids. For liquids, the potential of mean force is related to the radial distribution function formula_20, which is given by:
formula_21
where formula_7 and formula_12 are the respective probabilities of
finding two particles at a distance formula_1 from each other in the liquid
and in the reference state. For liquids, the reference state
is clearly defined; it corresponds to the ideal gas, consisting of
non-interacting particles. The two-particle potential of mean force
formula_22 is related to formula_20 by:
formula_23
According to the reversible work theorem, the two-particle
potential of mean force formula_22 is the reversible work required to
bring two particles in the liquid from infinite separation to a distance
formula_1 from each other.
Sippl justified the use of statistical PMFs—a few years after he introduced
them for use in protein structure prediction—by
appealing to the analogy with the reversible work theorem for liquids. For liquids, formula_20 can be experimentally measured
using small angle X-ray scattering; for proteins, formula_7 is obtained
from the set of known protein structures, as explained in the previous
section. However, as Ben-Naim wrote in a publication on the subject:
[...] the quantities, referred to as "statistical potentials," "structure
based potentials," or "pair potentials of mean force", as derived from
the protein data bank (PDB), are neither "potentials" nor "potentials of
mean force," in the ordinary sense as used in the literature on
liquids and solutions.
Moreover, this analogy does not solve the issue of how to specify a suitable "reference state" for proteins.
Machine learning.
In the mid-2000s, authors started to combine multiple statistical potentials, derived from different structural features, into "composite scores". For that purpose, they used machine learning techniques, such as support vector machines (SVMs). Probabilistic neural networks (PNNs) have also been applied for the training of a position-specific distance-dependent statistical potential. In 2016, the DeepMind artificial intelligence research laboratory started to apply deep learning techniques to the development of a torsion- and distance-dependent statistical potential. The resulting method, named AlphaFold, won the 13th Critical Assessment of Techniques for Protein Structure Prediction (CASP) by correctly predicting the most accurate structure for 25 out of 43 free modelling domains.
Explanation.
Bayesian probability.
Baker and co-workers justified statistical PMFs from a
Bayesian point of view and used these insights in the construction of
the coarse grained ROSETTA energy function. According
to Bayesian probability calculus, the conditional probability formula_24 of a structure formula_25, given the amino acid sequence formula_26, can be
written as:
formula_27
formula_28 is proportional to the product of
the likelihood formula_29 times the prior
formula_30. By assuming that the likelihood can be approximated
as a product of pairwise probabilities, and applying Bayes' theorem, the
likelihood can be written as:
formula_31
where the product runs over all amino acid pairs formula_17 (with
formula_18), and formula_19 is the distance between amino acids formula_32 and formula_33.
Obviously, the negative of the logarithm of the expression
has the same functional form as the classic
pairwise distance statistical PMFs, with the denominator playing the role of the
reference state. This explanation has two shortcomings: it relies on the unfounded assumption the likelihood can be expressed
as a product of pairwise probabilities, and it is purely "qualitative".
Probability kinematics.
Hamelryck and co-workers later gave a "quantitative" explanation for the statistical potentials, according to which they approximate a form of probabilistic reasoning due to Richard Jeffrey and named probability kinematics. This variant of Bayesian thinking (sometimes called "Jeffrey conditioning") allows updating a prior distribution based on new information on the probabilities of the elements of a partition on the support of the prior. From this point of view, (i) it is not necessary to assume that the database of protein structures—used to build the potentials—follows a Boltzmann distribution, (ii) statistical potentials generalize readily beyond pairwise differences, and (iii) the "reference ratio" is determined by the prior distribution.
Reference ratio.
Expressions that resemble statistical PMFs naturally result from the application of
probability theory to solve a fundamental problem that arises in protein
structure prediction: how to improve an imperfect probability
distribution formula_34 over a first variable formula_25 using a probability
distribution formula_35 over a second variable formula_36, with formula_37. Typically, formula_25 and formula_36 are fine and coarse grained variables, respectively. For example, formula_34 could concern
the local structure of the protein, while formula_35 could concern the pairwise distances between the amino acids. In that case, formula_25 could for example be a vector of dihedral angles that specifies all atom positions (assuming ideal bond lengths and angles).
In order to combine the two distributions, such that the local structure will be distributed according to formula_34, while
the pairwise distances will be distributed according to formula_35, the following expression is needed:
formula_38
where formula_39 is the distribution over formula_36 implied by formula_34. The ratio in the expression corresponds
to the PMF. Typically, formula_34 is brought in by sampling (for example, from a fragment library), and not explicitly evaluated; the ratio, which in contrast is explicitly evaluated, corresponds to Sippl's PMF. This explanation is quantitative, and allows the generalization of statistical PMFs from pairwise distances to arbitrary coarse grained variables. It also
provides a rigorous definition of the reference state, which is implied by formula_34. Conventional applications of pairwise distance statistical PMFs usually lack two
necessary features to make them fully rigorous: the use of a proper probability distribution over pairwise distances in proteins, and the recognition that the reference state is rigorously
defined by formula_34.
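In practice the ratio in formula_38 is often applied as a reweighting of samples drawn from formula_34. The sketch below illustrates the idea on a toy example in which the fine-grained variable is a real number, the coarse-grained variable is its absolute value, and the densities are chosen arbitrarily for illustration; it is not the procedure of any particular software package.

```python
import numpy as np

def reference_ratio_weights(samples_X, f, p_Y, q_Y):
    """Importance weights P(Y)/Q(Y) for samples drawn from Q(X), where Y = f(X).
    p_Y: target density over the coarse variable Y; q_Y: density of Y implied by Q(X)."""
    y = np.array([f(x) for x in samples_X])
    return p_Y(y) / q_Y(y)

# Toy example: Q(X) is a standard normal over a 'fine-grained' variable X,
# the coarse variable is Y = |X|, and P(Y) is a Rayleigh(1) density.
rng = np.random.default_rng(0)
xs = rng.standard_normal(100_000)
p_Y = lambda y: y * np.exp(-y**2 / 2)                       # target density of Y
q_Y = lambda y: 2 * np.exp(-y**2 / 2) / np.sqrt(2 * np.pi)  # density of |X| for X ~ N(0,1)
w = reference_ratio_weights(xs, abs, p_Y, q_Y)
mean_absX = np.average(np.abs(xs), weights=w)               # ~ E_P[Y] = sqrt(pi/2)
```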
Applications.
Statistical potentials are used as energy functions in the assessment of an ensemble of structural models produced by homology modeling or protein threading. Many differently parameterized statistical potentials have been shown to successfully identify the native state structure from an ensemble of decoy or non-native structures. Statistical potentials are not only used for protein structure prediction, but also for modelling the protein folding pathway.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\nP\\left(r\\right)=\\frac{1}{Z}e^{-\\frac{F\\left(r\\right)}{kT}}\n"
},
{
"math_id": 1,
"text": "r"
},
{
"math_id": 2,
"text": "k"
},
{
"math_id": 3,
"text": "T"
},
{
"math_id": 4,
"text": "Z"
},
{
"math_id": 5,
"text": "\nZ=\\int e^{-\\frac{F(r)}{kT}}dr\n"
},
{
"math_id": 6,
"text": "F(r)"
},
{
"math_id": 7,
"text": "P(r)"
},
{
"math_id": 8,
"text": "\nF\\left(r\\right)=-kT\\ln P\\left(r\\right)-kT\\ln Z\n"
},
{
"math_id": 9,
"text": "Q_{R}"
},
{
"math_id": 10,
"text": "Z_{R}"
},
{
"math_id": 11,
"text": "\n\\Delta F\\left(r\\right)=-kT\\ln\\frac{P\\left(r\\right)}{Q_{R}\\left(r\\right)}-kT\\ln\\frac{Z}{Z_{R}}\n"
},
{
"math_id": 12,
"text": "Q_{R}(r)"
},
{
"math_id": 13,
"text": "C\\beta"
},
{
"math_id": 14,
"text": "\\Delta F"
},
{
"math_id": 15,
"text": "\\Delta F_{\\textrm{T}}"
},
{
"math_id": 16,
"text": "\\Delta F_{\\textrm{T}}=\\sum_{i<j}\\Delta F(r_{ij}\\mid a_{i},a_{j})=-kT\\sum_{i<j}\\ln\\frac{P\\left(r_{ij}\\mid a_{i},a_{j}\\right)}{Q_{R}\\left(r_{ij}\\mid a_{i},a_{j}\\right)}"
},
{
"math_id": 17,
"text": "a_{i},a_{j}"
},
{
"math_id": 18,
"text": "i<j"
},
{
"math_id": 19,
"text": "r_{ij}"
},
{
"math_id": 20,
"text": "g(r)"
},
{
"math_id": 21,
"text": "\ng(r)=\\frac{P(r)}{Q_{R}(r)}\n"
},
{
"math_id": 22,
"text": "W(r)"
},
{
"math_id": 23,
"text": "\nW(r)=-kT\\log g(r)=-kT\\log\\frac{P(r)}{Q_{R}(r)}\n"
},
{
"math_id": 24,
"text": "P(X\\mid\nA)"
},
{
"math_id": 25,
"text": "X"
},
{
"math_id": 26,
"text": "A"
},
{
"math_id": 27,
"text": "\nP\\left(X\\mid A\\right)=\\frac{P\\left(A\\mid\nX\\right)P\\left(X\\right)}{P\\left(A\\right)}\\propto P\\left(A\\mid\nX\\right)P\\left(X\\right)\n"
},
{
"math_id": 28,
"text": "P(X\\mid A)"
},
{
"math_id": 29,
"text": "P\\left(A\\mid X\\right)"
},
{
"math_id": 30,
"text": "P\\left(X\\right)"
},
{
"math_id": 31,
"text": "P\\left(A\\mid X\\right)\\approx\\prod_{i<j}P\\left(a_{i},a_{j}\\mid r_{ij}\\right)\\propto\\prod_{i<j}\\frac{P\\left(r_{ij}\\mid a_{i},a_{j}\\right)}{P(r_{ij})}"
},
{
"math_id": 32,
"text": "i"
},
{
"math_id": 33,
"text": "j"
},
{
"math_id": 34,
"text": "Q(X)"
},
{
"math_id": 35,
"text": "P(Y)"
},
{
"math_id": 36,
"text": "Y"
},
{
"math_id": 37,
"text": "Y=f(X)"
},
{
"math_id": 38,
"text": "\nP(X,Y)=\\frac{P(Y)}{Q(Y)}Q(X)\n"
},
{
"math_id": 39,
"text": "Q(Y)"
}
]
| https://en.wikipedia.org/wiki?curid=7297179 |
72977646 | Mode conversion | Transformation of a wave at an interface
Mode conversion is the transformation of a wave at an interface into other wave types (modes).
Principle.
Mode conversion occurs when a wave encounters an interface between materials of different impedances and the angle of incidence is not normal to the interface. For example, if a longitudinal wave from a fluid (e.g., water or air) strikes a solid (e.g., a steel plate), it is refracted and reflected as a function of the angle of incidence; if some of the energy sets particles moving in the transverse direction, a second, transverse wave is generated, which can likewise be refracted and reflected. Snell's law of refraction can then be formulated as:
formula_0
This means that the incident wave is split into two different wave types at the interface. If we consider a wave incident on an interface of two different solids (e.g. aluminum and steel), the wave type of the reflected wave also splits.
Besides these simple mode conversions, an incident wave can also be converted into surface waves. For example, if a longitudinal wave strikes a boundary surface at an angle shallower than that of total reflection, it is totally reflected, but in addition a surface wave traveling along the boundary layer is generated. The incident wave is thus converted into a reflected longitudinal wave and a surface wave.
In general, mode conversion is not an all-or-nothing process: only part of the incident energy is converted into each of the other wave types. The amplitudes (transmission and reflection factors) of the converted waves depend on the angle of incidence.
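As an illustration of the relation above, the following sketch computes the refracted and mode-converted angles for a longitudinal wave crossing an aluminium–steel interface; the wave velocities are rough textbook values, used here only as an example.

```python
import math

def refraction_angles(theta1_deg, vL1, vL2, vS1, vS2):
    """Angles (degrees) of the refracted and mode-converted waves for a
    longitudinal wave incident at theta1, from sin(theta)/V = const.
    Returns None when sin would exceed 1 (beyond the critical angle)."""
    s = math.sin(math.radians(theta1_deg)) / vL1
    out = {}
    for name, v in [("L refracted", vL2), ("S reflected", vS1), ("S refracted", vS2)]:
        out[name] = math.degrees(math.asin(s * v)) if s * v <= 1 else None
    return out

# approximate velocities (m/s): aluminium vL ~ 6320, vS ~ 3130; steel vL ~ 5900, vS ~ 3200
print(refraction_angles(15, vL1=6320, vL2=5900, vS1=3130, vS2=3200))
```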
Seismic waves.
In seismology, wave conversion specifically refers to the conversion between P and S waves at discontinuities. Body waves are reflected and refracted when they hit a boundary layer within the Earth. At such interfaces, P-waves can be converted into S-waves (PS-waves), and vice versa (SP-waves). For an incident P-wave, the analogous relation applies:
formula_1
The change in amplitudes can be described with the Zoeppritz equations.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " \\frac{\\sin { \\theta }_{1} }{ {V}_{L1} } = \\frac{\\sin { \\theta }_{2} }{ {V}_{L2} } = \\frac{\\sin { \\theta }_{3} }{ {V}_{S1} } = \\frac{\\sin { \\theta }_{4} }{ {V}_{S2} } "
},
{
"math_id": 1,
"text": "\\frac{\\sin { \\theta }_{P1} }{ {P}_{1} } = \\frac{\\sin { \\theta }_{P1} }{ {PP}_{r} } = \\frac{\\sin { \\theta }_{P2} }{ {PP}_{t} } = \\frac{\\sin { \\theta }_{S1} }{ {PS}_{r} } = \\frac{\\sin { \\theta }_{S2} }{ {PS}_{t} }"
}
]
| https://en.wikipedia.org/wiki?curid=72977646 |
72988466 | Axial current | Type of conserved current
The axial current, also denoted the "pseudo-vector" or "chiral" current, is the conserved current associated to the chiral symmetry or axial symmetry of a system.
Origin.
According to Noether's theorem, each symmetry of a system is associated with a conserved quantity. For example, the rotational invariance of a system implies the conservation of its angular momentum, and invariance under spacetime translations implies the conservation of energy–momentum. In quantum field theory, internal symmetries also result in conserved quantities. For example, the U(1) gauge transformation of QED implies the conservation of the electric charge. Likewise, if a theory possesses an internal chiral or axial symmetry, there will be a conserved quantity, which is called the "axial charge". Further, just as the motion of an electrically charged particle produces an electric current, a moving axial charge constitutes an axial current.
Definition.
The axial current resulting from the motion of an axially charged moving particle is formally defined as formula_0, where formula_1 is the particle field represented by Dirac spinor (since the particle is typically a spin-1/2 fermion) and formula_2 and formula_3 are the Dirac gamma matrices.
For comparison, the electromagnetic current produced by an electrically charged moving particle is formula_4.
Meaning.
As explained above, the axial current is simply the equivalent of the electromagnetic current for the axial symmetry instead of the U(1) symmetry. Another perspective is given by recalling that the
chiral symmetry is the invariance of the theory under the field rotation formula_5 and formula_6 (or alternatively formula_7 and formula_8), where formula_9 denotes a left-handed field and formula_10 a right-handed one.
From this as well as the fact that formula_11 and the definition of formula_12 above, one sees that the axial current is the difference between the current due to left-handed fermions and that from right-handed ones, whilst the electromagnetic current is the sum.
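This decomposition can be checked numerically. The sketch below builds the gamma matrices in the Dirac representation and verifies, for an arbitrary spinor, that the axial current equals the left-handed current minus the right-handed one, with chiral projectors (1 ∓ γ5)/2; it is only an illustration of the matrix algebra, not a field-theoretic computation.

```python
import numpy as np

# Dirac gamma matrices (Dirac representation) built from Pauli matrices.
I2, Z2 = np.eye(2), np.zeros((2, 2))
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
g0 = np.block([[I2, Z2], [Z2, -I2]])
gamma = [g0] + [np.block([[Z2, s], [-s, Z2]]) for s in sig]
g5 = 1j * gamma[0] @ gamma[1] @ gamma[2] @ gamma[3]

PL, PR = (np.eye(4) - g5) / 2, (np.eye(4) + g5) / 2   # chiral projectors

rng = np.random.default_rng(0)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)    # arbitrary Dirac spinor

def current(p, extra=np.eye(4)):
    """All four components of psibar * extra * gamma^mu * psi."""
    pbar = p.conj() @ g0
    return np.array([pbar @ extra @ g @ p for g in gamma])

j5 = current(psi, g5)                        # axial current
jL, jR = current(PL @ psi), current(PR @ psi)
assert np.allclose(j5, jL - jR)              # axial = left-handed minus right-handed
```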
Chiral symmetry is exhibited by vector gauge theories with massless fermions. Since there is no known massless fermion in nature, chiral symmetry is at best an approximate symmetry in fundamental theories, and the axial current is not conserved. (Note: this explicit breaking of the chiral symmetry by non-zero masses is not to be confused with the spontaneous chiral symmetry breaking that plays a dominant role in hadronic physics.) An important consequence of such non-conservation is the neutral pion decay and the chiral anomaly, which is directly related to the pion decay width.
Applications.
The axial current is an important part of the formalism describing high-energy scattering reactions. In such reaction, two particles scatter off each other by exchanging a force boson, e.g., a photon for electromagnetic scattering (see the figure).
The cross-section for such a reaction is proportional to the square of the scattering amplitude, which in turn is given by the product of the boson propagator and the two currents associated with the motions of the two colliding particles. Therefore, currents (axial or electromagnetic) are one of the two essential ingredients needed to compute high-energy scattering, the other being the boson propagator.
In electron–nucleon scattering (or more generally, charged lepton–hadron/nucleus scattering) the axial current yields the spin-dependent part of the cross-section. (The spin-average part of the cross-section comes from the electromagnetic current.)
In neutrino–nucleon scattering, neutrinos couple only via the axial current, thus accessing different nucleon structure information than with charged leptons.
Neutral pions also couple only via the axial current because pions are pseudoscalar particles and, to produce amplitudes (scalar quantities), a pion must couple to another pseudoscalar object like the axial current. (Charged pions can also couple via the electromagnetic current.)
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "j_5^\\mu = \\overline\\psi\\gamma^5\\gamma^\\mu\\psi"
},
{
"math_id": 1,
"text": " \\psi "
},
{
"math_id": 2,
"text": " \\gamma^5 "
},
{
"math_id": 3,
"text": " \\gamma^\\mu "
},
{
"math_id": 4,
"text": "j^\\mu = \\overline\\psi\\gamma^\\mu\\psi"
},
{
"math_id": 5,
"text": "\\psi_{\\rm L}\\rightarrow e^{i\\theta_{\\rm L}}\\psi_{\\rm L}"
},
{
"math_id": 6,
"text": "\\psi_{\\rm R}\\rightarrow \\psi_{\\rm R}"
},
{
"math_id": 7,
"text": "\\psi_{\\rm L}\\rightarrow \\psi_{\\rm L}"
},
{
"math_id": 8,
"text": "\\psi_{\\rm R}\\rightarrow e^{i\\theta_{\\rm R}}\\psi_{\\rm R}"
},
{
"math_id": 9,
"text": "\\psi_{\\rm L}"
},
{
"math_id": 10,
"text": "\\psi_{\\rm R}"
},
{
"math_id": 11,
"text": "\\psi=\\psi_{\\rm L}+\\psi_{\\rm R}"
},
{
"math_id": 12,
"text": "j_5^\\mu "
}
]
| https://en.wikipedia.org/wiki?curid=72988466 |
72988820 | Garnier integrable system | Integrable classical system
In mathematical physics, the Garnier integrable system, also known as the classical Gaudin model, is a classical mechanical system
discovered by René Garnier in 1919 by taking the 'Painlevé simplification' or 'autonomous limit' of the Schlesinger equations. It is a classical analogue to the quantum Gaudin model due to Michel Gaudin (similarly, the Schlesinger equations are a classical analogue to the Knizhnik–Zamolodchikov equations). The classical Gaudin models are integrable.
They are also a specific case of Hitchin integrable systems, when the algebraic curve that the theory is defined on is the Riemann sphere and the system is tamely ramified.
As a limit of the Schlesinger equations.
The Schlesinger equations are a system of differential equations for formula_0 matrix-valued functions formula_1, given by
formula_2
formula_3
The 'autonomous limit' is given by replacing the formula_4 dependence in the denominator by constants formula_5 with formula_6:
formula_7
formula_8
This is the Garnier system in the form originally derived by Garnier.
As the classical Gaudin model.
There is a formulation of the Garnier system as a classical mechanical system, the classical Gaudin model, which quantizes to the quantum Gaudin model and whose equations of motion are equivalent to the Garnier system. This section describes this formulation.
As for any classical system, the Gaudin model is specified by a Poisson manifold formula_9 referred to as the phase space, and a smooth function on the manifold called the Hamiltonian.
Phase space.
Let formula_10 be a quadratic Lie algebra, that is, a Lie algebra with a non-degenerate invariant bilinear form formula_11. If formula_10 is complex and simple, this can be taken to be the Killing form.
The dual, denoted formula_12, can be made into a linear Poisson structure by the Kirillov–Kostant bracket.
The phase space formula_9 of the classical Gaudin model is then the Cartesian product of formula_13 copies of formula_12 for formula_13 a positive integer.
Sites.
Associated to each of these copies is a point in formula_14, denoted formula_15, and referred to as sites.
Lax matrix.
Fixing a basis of the Lie algebra formula_16 with structure constants formula_17, there are functions formula_18 with formula_19 on the phase space satisfying the Poisson bracket
formula_20
These in turn are used to define formula_10-valued functions
formula_21
with implicit summation.
Next, these are used to define the Lax matrix which is also a formula_10 valued function on the phase space which in addition depends meromorphically on a spectral parameter formula_22,
formula_23
and formula_24 is a constant element in formula_10, in the sense that it Poisson commutes (has vanishing Poisson bracket) with all functions.
(Quadratic) Hamiltonian.
The (quadratic) Hamiltonian is
formula_25
which is indeed a function on the phase space, which is additionally dependent on a spectral parameter formula_22. This can be written as
formula_26
with
formula_27
and
formula_28
From the Poisson bracket relation
formula_29
by varying formula_22 and formula_30 it must be true that the formula_31's, the formula_32's and formula_33 are all in involution. It can be shown that the formula_32's and formula_33 Poisson commute with all functions on the phase space, but the formula_31's do not in general. These are the conserved charges in involution for the purposes of Arnol'd Liouville integrability.
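The involution of the quadratic Hamiltonians can be verified symbolically in small examples. The sketch below takes formula_10 to be gl(2) with the trace form, three sites at fixed positions and formula_24 = 0, implements the Lie–Poisson bracket in matrix-entry coordinates, and checks that the bracket of two quadratic Hamiltonians vanishes; all conventions here are one possible choice made purely for illustration.

```python
import sympy as sp

n, N = 2, 3                                   # a gl(2) example with three sites
lam = [sp.Integer(v) for v in (0, 1, 3)]      # fixed, distinct site positions

# Coordinate functions X^{(r)}_{ij} on the r-th copy of gl(n)^*.
X = [sp.Matrix(n, n, lambda i, j: sp.Symbol(f'X{r}_{i}{j}')) for r in range(N)]

def pb(F, G):
    """Lie-Poisson bracket on the product of N copies of gl(n)^*:
    {X^r_ij, X^s_kl} = delta_rs (delta_jk X^r_il - delta_li X^r_kj)."""
    out = sp.Integer(0)
    for r in range(N):
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    for l in range(n):
                        br = (X[r][i, l] if j == k else 0) - (X[r][k, j] if l == i else 0)
                        if br != 0:
                            out += sp.diff(F, X[r][i, j]) * sp.diff(G, X[r][k, l]) * br
    return sp.expand(out)

def H(r):
    """Quadratic Gaudin Hamiltonian H_r with Omega = 0 and kappa = trace form."""
    return sum((X[r] * X[s]).trace() / (lam[r] - lam[s]) for s in range(N) if s != r)

print(pb(H(0), H(1)))     # 0 -- the quadratic Hamiltonians are in involution
```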
Lax equation.
One can show
formula_34
so the Lax matrix satisfies the Lax equation when time evolution is given by any of the Hamiltonians formula_31, as well as any linear combination of them.
Higher Hamiltonians.
The quadratic Casimir corresponds to a quadratic Weyl-invariant polynomial for the Lie algebra formula_10, but in fact many more commuting conserved charges can be generated using formula_10-invariant polynomials. These invariant polynomials can be found using the Harish-Chandra isomorphism in the case that formula_10 is complex, simple and finite-dimensional.
Integrable field theories as classical Gaudin models.
Certain integrable classical field theories can be formulated as classical affine Gaudin models, where formula_10 is an affine Lie algebra. Such classical field theories include the principal chiral model, coset sigma models and affine Toda field theory. As such, affine Gaudin models can be seen as a 'master theory' for integrable systems, but this description is most naturally formulated in the Hamiltonian formalism, unlike other master theories like four-dimensional Chern–Simons theory or anti-self-dual Yang–Mills.
Quantum Gaudin models.
A great deal is known about the integrable structure of quantum Gaudin models. In particular, Feigin, Frenkel and Reshetikhin studied them using the theory of vertex operator algebras, showing the relation of Gaudin models to topics in mathematics including the Knizhnik–Zamolodchikov equations and the geometric Langlands correspondence.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "n + 2"
},
{
"math_id": 1,
"text": "A_i:\\mathbb{C}^{n+2} \\rightarrow \\mathrm{Mat}(m, \\mathbb{C})"
},
{
"math_id": 2,
"text": " \\frac{\\partial A_i}{\\partial \\lambda_j} = \\frac{[A_i,A_j]}{\\lambda_i-\\lambda_j} \\qquad \\qquad j\\neq i "
},
{
"math_id": 3,
"text": "\\sum_j \\frac{\\partial A_i}{\\partial \\lambda_j} = 0."
},
{
"math_id": 4,
"text": "\\lambda_i"
},
{
"math_id": 5,
"text": "\\alpha_i"
},
{
"math_id": 6,
"text": "\\alpha_{n+1} = 0, \\alpha_{n+2} = 1"
},
{
"math_id": 7,
"text": " \\frac{\\partial A_i}{\\partial \\lambda_j} = \\frac{[A_i,A_j]}{\\alpha_i-\\alpha_j} \\qquad \\qquad j\\neq i "
},
{
"math_id": 8,
"text": " \\sum_j \\frac{\\partial A_i}{\\partial \\lambda_j} = 0."
},
{
"math_id": 9,
"text": "M"
},
{
"math_id": 10,
"text": "\\mathfrak{g}"
},
{
"math_id": 11,
"text": "\\kappa"
},
{
"math_id": 12,
"text": "\\mathfrak{g}^*"
},
{
"math_id": 13,
"text": "N"
},
{
"math_id": 14,
"text": "\\mathbb{C}"
},
{
"math_id": 15,
"text": "\\lambda_1, \\cdots, \\lambda_N"
},
{
"math_id": 16,
"text": "\\{I^a\\}"
},
{
"math_id": 17,
"text": "f^{ab}_c"
},
{
"math_id": 18,
"text": "X^a_{(r)}"
},
{
"math_id": 19,
"text": "r = 1, \\cdots, N"
},
{
"math_id": 20,
"text": "\\{X^a_{(r)}, X^b_{(s)}\\} = \\delta_{rs}f^{ab}_c X^c_{(r)}."
},
{
"math_id": 21,
"text": "X^{(r)} = \\kappa_{ab}I^a \\otimes X^b_{(r)}"
},
{
"math_id": 22,
"text": "\\lambda"
},
{
"math_id": 23,
"text": "\\mathcal{L}(\\lambda) = \\sum_{r = 1}^N \\frac{X^{(r)}}{\\lambda - \\lambda_r} + \\Omega,"
},
{
"math_id": 24,
"text": "\\Omega"
},
{
"math_id": 25,
"text": "\\mathcal{H}(\\lambda) = \\frac{1}{2}\\kappa(\\mathcal{L}(\\lambda), \\mathcal{L}(\\lambda))"
},
{
"math_id": 26,
"text": "\\mathcal{H}(\\lambda) = \\Delta_\\infty + \\sum_{r = 1}^N\\left( \\frac{\\Delta_r}{(\\lambda - \\lambda_r)^2} + \n\\frac{\\mathcal{H}_r}{\\lambda - \\lambda_r} \\right),"
},
{
"math_id": 27,
"text": " \\Delta_r = \\frac{1}{2} \\kappa(X^{(r)}, X^{(r)}), \\Delta_\\infty = \\frac{1}{2} \\kappa(\\Omega, \\Omega)"
},
{
"math_id": 28,
"text": " \\mathcal{H}_r = \\sum_{s \\neq r} \\frac{ \\kappa( X^{(r)}, X^{(s)} )}{ \n\\lambda_r - \\lambda_s} + \\kappa(X^{(r)} , \\Omega)."
},
{
"math_id": 29,
"text": " \\{ \\mathcal{H}(\\lambda), \\mathcal{H}(\\mu) \\} = 0, \\forall \\lambda, \\mu \\in \\mathbb{C},"
},
{
"math_id": 30,
"text": "\\mu"
},
{
"math_id": 31,
"text": "\\mathcal{H}_r"
},
{
"math_id": 32,
"text": "\\Delta_r"
},
{
"math_id": 33,
"text": "\\Delta_\\infty"
},
{
"math_id": 34,
"text": " \\{\\mathcal{H}_r, \\mathcal{L}(\\lambda)\\} = \\left[\\frac{X^{(r)}}{\\lambda - \\lambda_r}, \\mathcal{L}(\\lambda)\\right], "
}
]
| https://en.wikipedia.org/wiki?curid=72988820 |
72990081 | Big-line-big-clique conjecture | Unsolved problem in discrete geometry
The big-line-big-clique conjecture is an unsolved problem in discrete geometry, stating that finite sets of many points in the Euclidean plane either have many collinear points, or they have many points that are all mutually visible to each other (no third point blocks any two of them from seeing each other).
Statement and history.
More precisely, the big-line big-clique conjecture states that, for any positive integers formula_0 and formula_1 there should exist another number formula_2, such that every set of formula_2 points contains formula_1 collinear points (a "big line"), formula_0 mutually-visible points (a "big clique"), or both.
The big-line-big-clique conjecture was posed by Jan Kára, Attila Pór, and David R. Wood in a 2005 publication. It has led to much additional research on point-to-point visibility in point sets.
Partial results.
Finite point sets in general position (no three collinear) do always contain a big clique, so the conjecture is true for formula_3. Additionally, finite point sets that have no five mutually-visible points (such as the intersections of the integer lattice with convex sets) do always contain many collinear points, so the conjecture is true for formula_4.
Generalizing the integer lattice example, projecting a formula_5-dimensional system of lattice points of size formula_6
onto the plane, using a generic linear projection, produces a set of points with no formula_1 collinear points and no formula_7 mutually visible points. Therefore, when formula_2 exists, it must be greater than formula_8.
Related problems.
The visibilities among any system of points can be analyzed by using the visibility graph of the points, a graph that has the points as vertices and that connects two points by an edge whenever the line segment connecting them is disjoint from the other points. The "big cliques" of the big-line-big-clique conjecture are cliques in the visibility graph. However, although a system of points that is entirely collinear can be characterized by having a bipartite visibility graph, this characterization does not extend to subsets of points: a subset can have a bipartite induced subgraph of the visibility graph without being collinear.
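For small point sets with integer coordinates, the visibility graph can be computed exactly by testing, for each pair of points, whether some third point lies on the open segment between them, as in the following brute-force sketch (an illustration, not an optimized algorithm).

```python
from itertools import combinations

def blocked(p, q, points):
    """True if some third point lies strictly between p and q on segment pq."""
    for b in points:
        if b in (p, q):
            continue
        cross = (q[0] - p[0]) * (b[1] - p[1]) - (q[1] - p[1]) * (b[0] - p[0])
        if cross == 0 and min(p[0], q[0]) <= b[0] <= max(p[0], q[0]) and \
           min(p[1], q[1]) <= b[1] <= max(p[1], q[1]):
            return True
    return False

def visibility_graph(points):
    """Edges between pairs of mutually visible points."""
    return {frozenset((p, q)) for p, q in combinations(points, 2)
            if not blocked(p, q, points)}

pts = [(0, 0), (1, 1), (2, 2), (0, 2), (2, 0)]
vg = visibility_graph(pts)
print(frozenset(((0, 0), (2, 2))) in vg)   # False: (1, 1) blocks this pair
```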
According to the solution of the happy ending problem, every subset of points with no three in line includes a large subset forming the vertices of a convex polygon. More generally, it can be proven using the same methods that every set of sufficiently many points either includes formula_1 collinear points or formula_0 points in convex position. However, some of these pairs of convex points could be blocked from visibility by points within the convex polygon they form.
Another related question asks whether points in general position (or with no lines of more than some given number of points) contain the vertices of an "empty convex polygon" or "hole". This is a polygon whose vertices belong to the point set, but that has no other points in the intersection of the point set with its convex hull. If a hole of a given size exists, its vertices all necessarily see each other. All sufficiently large sets of points in general position contain five vertices forming an empty pentagon or hexagon, but there exist arbitrarily large sets in general position with no empty heptagons.
A strengthening of the big line big clique conjecture asks for the big clique to be a "visible island", a set of points that are mutually visible and that are formed from the given larger point set by intersecting it with a convex set. However, this strengthened version is false: if a point set in general position has no empty heptagon, then replacing each of its points by a closely-spaced triple of collinear points produces a point set with no four in a line and with no visible islands of 13 or more points.
There is no possibility of an infinitary version of the same conjecture: Pór and Wood found examples of countable sets of points with no four points on a line and with no triangle of mutually visible points.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "k"
},
{
"math_id": 1,
"text": "\\ell"
},
{
"math_id": 2,
"text": "n_{k,\\ell}"
},
{
"math_id": 3,
"text": "\\ell\\le 3"
},
{
"math_id": 4,
"text": "k\\le 5"
},
{
"math_id": 5,
"text": "d"
},
{
"math_id": 6,
"text": "(\\ell-1)\\times(\\ell-1)\\times\\cdots=(\\ell-1)^d"
},
{
"math_id": 7,
"text": "2^d+1"
},
{
"math_id": 8,
"text": "(\\ell-1)^{\\log_2(k-1)}"
}
]
| https://en.wikipedia.org/wiki?curid=72990081 |
7299372 | Hollomon–Jaffe parameter | Parameter describing the effect of a heat treatment at a temperature for a certain time
The Hollomon–Jaffe parameter (HP), also generally known as the Larson–Miller parameter, describes the effect of a heat treatment at a temperature for a certain time.
This parameter is especially used to describe the tempering of steels, so that it is also called tempering parameter.
Effect.
The effect of the heat treatment depends on its temperature and its time. The same effect can be achieved with a low temperature and a long holding time, or with a higher temperature and a short holding time.
Formula.
In the Hollomon–Jaffe parameter, this exchangeability of time and temperature can be described by the following formula:
formula_0
This formula is not dimensionally consistent; the quantities must be entered in fixed units. "T" is in degrees Celsius, and the time "t" in the argument of the logarithm is in hours. "C" is a parameter unique to the material used. The Hollomon–Jaffe parameter itself is dimensionless, with realistic values between 15 and 21.
formula_1
where "T" is in kilokelvins, "t" is in hours, and "C" is the same as above.
Hollomon and Jaffe determined the value of "C" experimentally by plotting hardness versus tempering time for a series of tempering temperatures of interest and interpolating the data to obtain the time necessary to yield a number of different hardness values. This work was based on six different heats of plain carbon steels with carbon contents varying from 0.35% to 1.15%. The value of "C" was found to vary somewhat for different steels and decrease linearly with the carbon content of a steel grade. Hollomon and Jaffe proposed that "C" = 19.5 for carbon and alloy steels with carbon contents of 0.25%–0.4%; and "C" = 15 for tool steels with carbon contents of 0.9%–1.2%.
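The exchangeability of time and temperature can be used directly in calculations, as in the following sketch, which assumes the base-10 logarithm conventionally used with this parameter and takes "C" = 19.5 as an example value.

```python
import math

def hollomon_jaffe(T_celsius, t_hours, C=19.5):
    """Hollomon-Jaffe (tempering) parameter; C = 19.5 is the value proposed
    for carbon and alloy steels with 0.25-0.4 % carbon."""
    return (273.15 + T_celsius) / 1000 * (C + math.log10(t_hours))

def equivalent_time(Hp, T_celsius, C=19.5):
    """Holding time (hours) at T_celsius that gives the same parameter Hp."""
    return 10 ** (1000 * Hp / (273.15 + T_celsius) - C)

Hp = hollomon_jaffe(600, 2)          # temper for 2 h at 600 degrees Celsius
t650 = equivalent_time(Hp, 650)      # the (shorter) equivalent time at 650 degrees Celsius
```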
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_p = \\frac {(273.15 + T)}{1000} \\cdot (C + \\log(t)) "
},
{
"math_id": 1,
"text": "H_p = T (C + \\log(t)) \\,"
}
]
| https://en.wikipedia.org/wiki?curid=7299372 |
72997008 | Formal distribution | Infinite sum of positive and negative powers of a formal variable
In mathematics, a formal distribution is an infinite sum of powers of a formal variable, usually denoted formula_0 in the theory of formal distributions. The coefficients of these infinite sums can be many different mathematical structures, such as vector spaces or rings, but in applications most often take values in an algebra over a field. These infinite sums are allowed to have infinitely many positive and negative powers, and are not required to converge, and so do not define functions of the formal variable. Rather, they are interpreted as distributions, that is, linear functionals on an appropriate space of test functions. They are closely related to formal Laurent series, but are not required to have finitely many negative powers. In particular, this means even if the coefficients are ring-valued, it is not necessarily possible to multiply two formal distributions.
They are important in the study of vertex operator algebras, since the vertex operator playing a central role in the theory takes values in a space of endomorphism-valued formal distributions.
Definition over a C-algebra.
Let formula_1 be an algebra over formula_2, as is the case for applications to vertex algebras. An formula_1-valued formal distribution in formula_3 variables formula_4 is an arbitrary series
formula_5
with each formula_6. These series form a vector space, denoted formula_7. While it can be possible to multiply some pairs of elements in the space of formal distributions, in general there is no product on the whole space.
In practice, the number of variables considered is often just one or two.
Products.
If the variables in two formal distributions are disjoint, then the product is well-defined.
The product of a formal distribution by a Laurent polynomial is also well-defined.
Formal distributions in a single variable.
For this section we consider formula_8.
Formal residue.
The formal residue is a linear map formula_9, given by
formula_10
The formal residue of formula_11 can also be written formula_12 or formula_13. It is named after residues from complex analysis, and when formula_11 is a meromorphic function on a neighborhood of zero in the complex plane, the two notions coincide.
Formal derivative.
The formal derivative is a linear map formula_14. For an element formula_15, its action is given by
formula_16
extended linearly to give a map for the whole space.
In particular, for any formal distribution formula_11,
formula_17
Interpretation as distribution.
This then motivates why they are named distributions: considering the space of 'test functions' to be the space of Laurent polynomials, any formal distribution defines a linear functional on the test functions. If formula_18 is a Laurent polynomial, the formal distribution formula_19 defines a linear functional by
formula_20
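These operations are straightforward to model on truncated coefficient data. The sketch below stores a formal distribution as a map from exponents to coefficients and implements the formal residue, the formal derivative and the pairing against a Laurent polynomial; the truncation and the test data are purely illustrative.

```python
# Formal distribution in one variable z, stored as {exponent: coefficient}.
def residue(f):
    """Formal residue: the coefficient of z^{-1}."""
    return f.get(-1, 0)

def derivative(f):
    """Formal derivative d/dz, applied term by term."""
    return {n - 1: n * c for n, c in f.items() if n != 0}

def pair(f, phi):
    """<f, phi> = Res f(z) phi(z), for a Laurent polynomial phi (finitely many terms)."""
    return sum(cf * cp for n, cf in f.items() for m, cp in phi.items() if n + m == -1)

phi = {0: 3, 2: 1}                      # the Laurent polynomial 3 + z^2
f = {n: 1 for n in range(-5, 6)}        # a truncation of a formal distribution
print(pair(f, phi))                     # 4
assert residue(derivative(f)) == 0      # the residue of a derivative vanishes
```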
Formal distributions in two variables.
For this section we consider formula_21.
Delta distribution.
One of the most important distributions is the delta function, and indeed it can be realized as a formal distribution in two variables.
It is defined
formula_22
and satisfies, for "any formal distribution" formula_11
formula_23
where now, the subscript formula_0 on formula_24 is necessary to identify for which variable one reads the residue from.
Expansions of zero.
A subtle point which enters for formal distributions in two variables is that there are expressions which naïvely vanish but in fact are non-zero in the space of distributions.
Consider the expression formula_25, viewed as a function of two complex variables. When formula_26, it has the series expansion formula_27, while for formula_28, it has the series expansion formula_29.
Then
formula_30
So the equality does not hold.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "z"
},
{
"math_id": 1,
"text": "R"
},
{
"math_id": 2,
"text": "\\mathbb{C}"
},
{
"math_id": 3,
"text": "n"
},
{
"math_id": 4,
"text": "z_1, \\cdots, z_n"
},
{
"math_id": 5,
"text": "A(z_1, \\cdots, z_n) = \\sum_{i_1 \\in \\mathbb{Z}}\\cdots \\sum_{i_n \\in \\mathbb{Z}} A_{i_1, \\cdots, i_n}z_1^{i_1}\\cdots z_n^{i_n},"
},
{
"math_id": 6,
"text": "A_{i_1, \\cdots, i_n} \\in R"
},
{
"math_id": 7,
"text": "R[[z_1, z_1^{-1}, \\cdots, z_n, z_n^{-1}]]"
},
{
"math_id": 8,
"text": "R[[z, z^{-1}]]"
},
{
"math_id": 9,
"text": "\\operatorname{Res}: R[[z, z^{-1}]] \\rightarrow R"
},
{
"math_id": 10,
"text": "\\operatorname{Res}f(z) = \\operatorname{Res}\\sum_{n \\in \\mathbb{Z}} f_n z^n = f_{-1}."
},
{
"math_id": 11,
"text": "f(z)"
},
{
"math_id": 12,
"text": "\\operatorname{Res}_z f(z), \\operatorname{Res}_{z = 0}f(z)"
},
{
"math_id": 13,
"text": "\\operatorname{Res}f(z)dz"
},
{
"math_id": 14,
"text": "\\partial_z: R[[z, z^{-1}]] \\rightarrow R[[z, z^{-1}]]"
},
{
"math_id": 15,
"text": "a z^n"
},
{
"math_id": 16,
"text": " a z^n \\mapsto \\partial_z a z^n = n a z^{n-1},"
},
{
"math_id": 17,
"text": " \\operatorname{Res} \\partial_z f(z) = 0"
},
{
"math_id": 18,
"text": "\\varphi \\in \\mathbb{C}[z, z^{-1}]"
},
{
"math_id": 19,
"text": "f \\in \\mathbb{C}[[z, z^{-1}]]"
},
{
"math_id": 20,
"text": " \\varphi \\mapsto \\langle f, \\varphi \\rangle := \\operatorname{Res}f(z)\\varphi(z)."
},
{
"math_id": 21,
"text": "R[[z, z^{-1}, w, w^{-1}]]"
},
{
"math_id": 22,
"text": " \\delta(z - w) := \\sum_{n \\in \\mathbb{Z}}z^{-n-1} w^n = \\frac{1}{z} \\sum_{n \\in \\mathbb{Z}}\\left(\\frac{w}{z}\\right)^n,"
},
{
"math_id": 23,
"text": " \\langle \\delta(z - w), f(z) \\rangle = \\operatorname{Res}_z \\delta(z-w) f(z) = f(w), "
},
{
"math_id": 24,
"text": "\\operatorname{Res}_z"
},
{
"math_id": 25,
"text": "(z - w)^{-1}"
},
{
"math_id": 26,
"text": "|z| > |w|"
},
{
"math_id": 27,
"text": "(z - w)^{-1}_+ := -\\frac{1}{z} \\sum_{n > 0}\\left(\\frac{z}{w}\\right)^n"
},
{
"math_id": 28,
"text": "|z| < |w|"
},
{
"math_id": 29,
"text": "(z - w)^{-1}_+ := \\frac{1}{z} \\sum_{n \\geq 0}\\left(\\frac{w}{z}\\right)^n"
},
{
"math_id": 30,
"text": "0 = (z - w)^{-1} - (z - w)^{-1} \\overset{?}= (z - w)^{-1}_+ - (z - w)^{-1}_- = \\frac{1}{z} \\sum_{n \\in \\mathbb{Z}}\\left(\\frac{w}{z}\\right)^n = \\delta(z - w)."
}
]
| https://en.wikipedia.org/wiki?curid=72997008 |
7300379 | Multiple sub-Nyquist sampling encoding | 1980s analog high-definition television standard
MUSE (Multiple sub-Nyquist Sampling Encoding), commercially known as Hi-Vision (a contraction of HIgh-definition teleVISION), was a Japanese analog high-definition television system, with design efforts going back to 1979.
It used dot-interlacing and digital video compression to deliver 1125 line, 60 field-per-second (1125i60) signals to the home. The system was standardized as ITU-R recommendation BO.786 and specified by SMPTE 260M, using a colorimetry matrix specified by SMPTE 240M. As with other analog systems, not all lines carry visible information. On MUSE there are 1035 active interlaced lines; therefore, this system is sometimes also referred to as "1035i". It employed 2-dimensional filtering, dot-interlacing, motion-vector compensation and line-sequential color encoding with time compression to "fold" an original 20 MHz bandwidth source signal into just 8.1 MHz.
Japan began broadcasting wideband analog HDTV signals in December 1988, initially with an aspect ratio of 2:1. The Sony HDVS high-definition video system was used to create content for the MUSE system.
By the time of its commercial launch in 1991, digital HDTV was already under development in the United States. Hi-Vision was mainly broadcast by NHK through their BShi satellite TV channel.
On May 20, 1994, Panasonic released the first MUSE LaserDisc player. There were also a number of players available from other brands like Pioneer and Sony.
Hi-Vision continued broadcasting in analog until 2007.
History.
MUSE was developed by NHK Science & Technology Research Laboratories in the 1980s as a compression system for Hi-Vision HDTV signals.
Technical specifications.
MUSE's "1125 lines" are an analog measurement, which includes non-video scan lines taking place while a CRT's electron beam returns to the top of the screen to begin scanning the next field. Only 1035 lines have picture information. Digital signals count only the lines (rows of pixels) that have actual detail, so NTSC's 525 lines become 486i (rounded to 480 to be MPEG compatible), PAL's 625 lines become 576i, and MUSE would be 1035i. To convert the bandwidth of Hi-Vision MUSE into "conventional" lines-of-horizontal resolution (as is used in the NTSC world), multiply 29.9 lines per MHz of bandwidth. (NTSC and PAL/SECAM are 79.9 lines per MHz) - this calculation of 29.9 lines works for all current HD systems including Blu-ray and HD-DVD. So, for MUSE, during a still picture, the lines of resolution would be: 598-lines of luminance resolution per-picture-height. The chroma resolution is: 209-lines. The horizontal luminance measurement approximately matches the vertical resolution of a 1080 interlaced image when the Kell factor and interlace factor are taken into account.
Key features of the MUSE system:
Colorimetry.
The MUSE luminance signal formula_0 encodes formula_3, specified as the following mix of the original RGB color channels:
formula_4
The chrominance formula_1 signal encodes formula_5 and formula_6 difference signals. By using these three signals ("formula_3", "formula_5" and "formula_6"), a MUSE receiver can retrieve the original RGB color components using the following matrix:
formula_7
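The forward encoding and its inversion can be illustrated numerically. The sketch below forms (formula_3, formula_5, formula_6) from an arbitrary RGB triple using the luminance weights quoted above, and recovers RGB by solving the resulting linear system; the test colour is arbitrary.

```python
import numpy as np

# Forward MUSE encoding: RGB -> (YM, B - YM, R - YM), from YM = 0.294 R + 0.588 G + 0.118 B.
M = np.array([
    [0.294, 0.588, 0.118],    # YM
    [-0.294, -0.588, 0.882],  # B - YM
    [0.706, -0.588, -0.118],  # R - YM
])

rgb = np.array([0.25, 0.60, 0.40])        # an arbitrary test colour (R, G, B)
ym, b_diff, r_diff = M @ rgb

# A receiver can invert the (non-singular) encoding to get RGB back.
recovered = np.linalg.solve(M, np.array([ym, b_diff, r_diff]))
assert np.allclose(recovered, rgb)
```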
The system used a colorimetry matrix specified by SMPTE 240M (with coefficients corresponding to the SMPTE RP 145 primaries, also known as SMPTE-C, in use at the time the standard was created). The chromaticity of the primary colors and white point are:
The luma (formula_8) function is specified as:
formula_9
The blue color difference (formula_10) is amplitude-scaled (formula_11), according to:
formula_12
The red color difference (formula_13) is amplitude-scaled (formula_14), according to:
formula_15
Signal and Transmission.
MUSE is a 1125 line system (1035 visible), and is not pulse and sync compatible with the digital 1080 line system used by modern HDTV. Originally, it was an 1125 line, interlaced, 60 Hz system with a 5/3 (1.66:1) aspect ratio and an optimal viewing distance of roughly 3.3H.
For terrestrial MUSE transmission a bandwidth limited FM system was devised. A satellite transmission system uses uncompressed FM.
The pre-compression bandwidth for formula_0 is 20 MHz, and the pre-compression bandwidth for chrominance is a 7.425 MHz carrier.
The Japanese initially explored the idea of frequency modulation of a conventionally constructed composite signal. This would create a signal similar in structure to the formula_2 composite video NTSC signal - with the formula_0 ("luminance") at the lower frequencies and the formula_1 ("chrominance") above. Approximately 3 kW of power would be required, in order to get 40 dB of signal to noise ratio for a composite FM signal in the 22 GHz band. This was incompatible with satellite broadcast techniques and bandwidth.
To overcome this limitation, it was decided to use a separate transmission of formula_0 and formula_1. This reduces the effective frequency range and lowers the required power. Approximately 570 W (360 for formula_0 and 210 for formula_1) would be needed in order to get a 40 dB of signal to noise ratio for a separate formula_2 FM signal in the 22 GHz satellite band. This was feasible.
There is one more power saving that appears from the character of the human eye. The lack of visual response to low frequency noise allows significant reduction in transponder power if the higher video frequencies are emphasized prior to modulation at the transmitter and then de-emphasized at the receiver. This method was adopted, with crossover frequencies for the emphasis/de-emphasis at 5.2 MHz for formula_0 and 1.6 MHz for formula_1. With this in place, the power requirements drop to 260 W of power (190 for formula_0 and 69 for formula_1).
Sampling systems and ratios.
The subsampling in a video system is usually expressed as a three part ratio. The three terms of the ratio are: the number of brightness (luma) formula_0 samples, followed by the number of samples of the two color (chroma) components formula_16 and formula_17, for each complete sample area. Traditionally the value for brightness is always 4, with the rest of the values scaled accordingly.
A sampling of 4:4:4 indicates that all three components are fully sampled. A sampling of 4:2:0, for example, indicated that the two chroma components are sampled at half the horizontal sample rate of luma - the horizontal chroma resolution is halved. This reduces the bandwidth of an uncompressed video signal by one-third.
MUSE implements a similar system as a means of reducing bandwidth, but instead of static sampling, the actual ratio varies according to the amount of motion on the screen. In practice, MUSE sampling will vary from approximately 4:2:1 to 4:0.5:0.25, depending on the amount of movement. Thus the red-green chroma component formula_17 has between one-half and one-eighth the sampling resolution of the luma component formula_0, and the blue-yellow chroma formula_16 has half the resolution of red-green.
Audio subsystem.
MUSE had a discrete 2- or 4-channel digital audio system called "DANCE", which stood for Digital Audio Near-instantaneous Compression and Expansion.
It used differential audio transmission (differential pulse-code modulation) that was not psychoacoustics-based like MPEG-1 Layer II. It used a fixed transmission rate of 1350 kbit/s. Like the PAL NICAM stereo system, it used near-instantaneous companding (as opposed to syllabic companding like the dbx system uses) and non-linear 13-bit digital encoding at a 32 kHz sample rate.
It could also operate in a 48 kHz 16-bit mode. The DANCE system was well documented in numerous NHK technical papers and in a NHK-published book issued in the USA called "Hi-Vision Technology".
The DANCE audio codec was superseded by Dolby AC-3 (a.k.a. Dolby Digital), DTS Coherent Acoustics (a.k.a. DTS Zeta 6x20 or ARTEC), MPEG-1 Layer III (a.k.a. MP3), MPEG-2 Layer I, MPEG-4 AAC and many other audio coders. The methods of this codec are described in the IEEE paper:
Real world performance issues.
MUSE had a four-field dot-interlacing cycle, meaning it took four fields to complete a single MUSE frame. Thus, only stationary images were transmitted at full resolution. However, as MUSE lowers the horizontal and vertical resolution of material that varies greatly from frame to frame, moving images were blurred. Because MUSE used motion-compensation, whole camera pans maintained full resolution, but individual moving elements could be reduced to only a quarter of the full frame resolution. Because the mix between motion and non-motion was encoded on a pixel-by-pixel basis, it wasn't as visible as most would think. Later, NHK came up with backwards compatible methods of MUSE encoding/decoding that greatly increased resolution in moving areas of the image as well as increasing the chroma resolution during motion. This so-called MUSE-III system was used for broadcasts starting in 1995 and a very few of the last Hi-Vision MUSE LaserDiscs used it ("A River Runs Through It" is one Hi-Vision LD that used it). During early demonstrations of the MUSE system, complaints were common about the decoder's large size, which led to the creation of a miniaturized decoder.
Shadows and multipath still plague this analog frequency modulated transmission mode.
Japan has since switched to a digital HDTV system based on ISDB, but the original MUSE-based BS Satellite channel 9 (NHK BS Hi-vision) was broadcast until September 30, 2007.
Cultural and geopolitical impacts.
MUSE, as the US public came to know it, was initially covered in the magazine Popular Science in the mid-1980s. The US television networks did not provide much coverage of MUSE until the late 1980s, as there were few public demonstrations of the system outside Japan.
Because Japan had its own domestic frequency allocation tables (that were more open to the deployment of MUSE) it became possible for this television system to be transmitted by Ku Band satellite technology by the end of the 1980s.
The US FCC in the late 1980s began to issue directives that would allow MUSE to be tested in the US, providing it could be fit into a 6 MHz "System-M" channel.
The Europeans (in the form of the European Broadcasting Union (EBU)) were impressed with MUSE, but could never adopt it because it is a 60 Hz TV system, not a 50 Hz system that is standard in Europe and the rest of the world (outside the Americas and Japan).
The EBU development and deployment of B-MAC, D-MAC and much later on HD-MAC were made possible by Hi-Vision's technical success. In many ways MAC transmission systems are better than MUSE because of the total separation of colour from brightness in the time domain within the MAC signal structure.
Like Hi-Vision, HD-MAC could not be transmitted in 8 MHz channels without substantial modification – and a severe loss of quality and frame rate. A 6 MHz version Hi-Vision was experimented with in the US, but it too had severe quality problems so the FCC never fully sanctioned its use as a domestic terrestrial television transmission standard.
The US ATSC working group that had led to the creation of NTSC in the 1950s was reactivated in the early 1990s because of Hi-Vision's success. Many aspects of the DVB standard are based on work done by the ATSC working group, however most of the impact is in support for 60 Hz (as well as 24 Hz for film transmission) and uniform sampling rates and interoperable screen sizes.
Device support for Hi-Vision.
Hi-Vision LaserDiscs.
On May 20, 1994, Panasonic released the first MUSE LaserDisc player. There were a number of MUSE LaserDisc players available in Japan: Pioneer HLD-XØ, HLD-X9, HLD-1000, HLD-V500, HLD-V700; Sony HIL-1000, HIL-C1 and HIL-C2EX; the last two of which have OEM versions made by Panasonic, LX-HD10 and LX-HD20. Players also supported standard NTSC LaserDiscs. Hi-Vision LaserDiscs are extremely rare and expensive.
The HDL-5800 Video Disc Recorder recorded both high definition still images and continuous video onto an optical disc and was part of the early analog wideband Sony HDVS high-definition video system which supported the MUSE system. Capable of recording HD still images and video onto either the WHD-3AL0 or the WHD-33A0 optical disc; WHD-3Al0 for CLV mode (up to 10 minute video or 18,000 still frames per side); WHD-33A0 for CAV mode (up to 3 minute video or 5400 still frames per side).
The HDL-2000 was a full band high definition video disc player.
Video cassettes.
W-VHS allowed home recording of Hi-Vision programmes.
For recording Hi-Vision video, NHK and 10 Japanese companies in 1989 released UniHi, a professional videocassette format. Recorders for the format were manufactured by Panasonic, Sony, NEC, and Toshiba. Both studio and portable versions were made. The head drum spins at 5400 RPM and uses tape that is 12.65 mm wide. It has a luminance (Y) bandwidth of 20 MHz and a chrominance (Pb, Pr) bandwidth of 7 MHz. It uses two video heads with azimuth recording and records each frame of video into 6 helical tracks. Audio is recorded digitally as a PCM signal, as a section on the helical tracks. Development began in 1987. It uses metal particle tape. It could record video for 1 hour.
See also.
The analog TV systems these systems were meant to replace:
Related standards:
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "Y"
},
{
"math_id": 1,
"text": "C"
},
{
"math_id": 2,
"text": "Y/C"
},
{
"math_id": 3,
"text": "YM"
},
{
"math_id": 4,
"text": "\\begin{align}\nYM = 0.294\\,R + 0.588\\,G + 0.118\\,B \n\\end{align}"
},
{
"math_id": 5,
"text": "B-YM"
},
{
"math_id": 6,
"text": "R-YM"
},
{
"math_id": 7,
"text": "\\begin{align}\n \\begin{bmatrix} G \\\\ B \\\\ R \\end{bmatrix}\n \\ =\\ \n \\begin{bmatrix}\n 1 & -1/5 & -1/2 \\\\\n 1 & 1 & 0 \\\\\n 1 & 0 & 1\n \\end{bmatrix}\n \n \\begin{bmatrix}\n 1 & 0 & 0 \\\\\n 0 & 5/4 & 0 \\\\\n 0 & 0 & 1\n \\end{bmatrix}\n\n\\begin{bmatrix}\n YM \\\\\n B - YM \\\\\n R- YM\n \\end{bmatrix}\n \\ =\\ \n \\begin{bmatrix}\n 1 & -1/4 & -1/2 \\\\\n 1 & 5/4 & 0\\\\\n 1 & 0 & 1\n \\end{bmatrix}\n \\begin{bmatrix} YM \\\\ B- YM \\\\ R - YM \\end{bmatrix}\n\\end{align}"
},
{
"math_id": 8,
"text": "EY"
},
{
"math_id": 9,
"text": "\\begin{align}\nEY = 0.212\\,ER + 0.701\\,EG + 0.087\\,EB \n\\end{align}"
},
{
"math_id": 10,
"text": "EPB"
},
{
"math_id": 11,
"text": "EB-EY"
},
{
"math_id": 12,
"text": "\\begin{align}\nEPB = 1.826\\,(EB - EY) \n\\end{align}"
},
{
"math_id": 13,
"text": "EPR"
},
{
"math_id": 14,
"text": "ER-EY"
},
{
"math_id": 15,
"text": "\\begin{align}\nEPR = 1.576\\,(ER - EY) \n\\end{align}"
},
{
"math_id": 16,
"text": "Cb"
},
{
"math_id": 17,
"text": "Cr"
}
]
| https://en.wikipedia.org/wiki?curid=7300379 |
7300829 | Pi Josephson junction | Quantum mechanical device
A Josephson junction (JJ) is a quantum mechanical device which is made of two superconducting electrodes separated by a barrier (thin insulating tunnel barrier, normal metal, semiconductor, ferromagnet, etc.).
A π Josephson junction is a Josephson junction in which the Josephson phase "φ" equals π in the ground state, i.e. when no external current or magnetic field is applied.
Background.
The supercurrent "I""s" through a Josephson junction is generally given by "I""s" = "I""c"sin("φ"),
where φ is the phase difference of the superconducting wave functions of the two
electrodes, i.e. the Josephson phase.
The critical current "I""c" is the maximum supercurrent that can exist through the Josephson junction.
In experiment, one usually causes some current through the Josephson junction and the junction reacts by changing the Josephson phase. From the above formula it is clear that the phase "φ" = arcsin("I"/"I""c"), where "I" is the applied (super)current.
Since the phase is 2π-periodic, i.e. "φ" and "φ" + 2π"n" are physically equivalent, without losing generality, the discussion below refers to the interval 0 ≤ "φ" < 2π.
When no current ("I" = 0) exists through the Josephson junction, e.g. when the junction is disconnected, the junction is in the ground state and the Josephson phase across it is zero ("φ" = 0). The phase can also be "φ" = π, also resulting in no current through the junction. It turns out that the state with "φ" = π is "unstable" and corresponds to the Josephson energy maximum, while the state "φ" = 0 corresponds to the Josephson energy minimum and "is" a ground state.
In certain cases, one may obtain a Josephson junction where the critical current is negative ("I""c" < 0). In this case, the first Josephson relation becomes
formula_0
The ground state of such a Josephson junction is formula_1 and corresponds to the Josephson energy minimum, while the conventional state φ = 0 is unstable and corresponds to the Josephson energy maximum. Such a Josephson junction with formula_1 in the ground state is called a π Josephson junction.
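The shift of the ground state from φ = 0 to φ = π can be seen directly from the Josephson energy U(φ) = −(ħ"I""c"/2e)cos φ, as in the following sketch; the critical-current values are arbitrary illustrative numbers.

```python
import numpy as np

hbar = 1.054_571_8e-34      # J s
e = 1.602_176_6e-19         # C

def josephson_energy(phi, Ic):
    """U(phi) = -(hbar*Ic/2e) * cos(phi); its minimum gives the ground-state phase."""
    return -(hbar * Ic / (2 * e)) * np.cos(phi)

phi = np.linspace(0, 2 * np.pi, 1001)
for Ic in (+1e-6, -1e-6):                      # conventional vs. negative-Ic junction
    ground = phi[np.argmin(josephson_energy(phi, Ic))]
    print(f"Ic = {Ic:+.0e} A  ->  ground-state phase ~ {ground:.3f} rad")
# prints ~0 for Ic > 0 and ~pi for Ic < 0
```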
π Josephson junctions have quite unusual properties. For example, if one connects (shorts) the superconducting electrodes with the inductance "L" (e.g. superconducting wire), one may expect the spontaneous supercurrent circulating in the loop, passing through the junction and through inductance clockwise or counterclockwise. This supercurrent is spontaneous and belongs to the ground state of the system. The direction of its circulation is chosen at random. This supercurrent will of course induce a magnetic field which can be detected experimentally. The magnetic flux passing through the loop will have the value from 0 to a half of magnetic flux quanta, i.e. from 0 to Φ0/2, depending on the value of inductance "L".
Historical developments.
The possibility of creating a formula_2 Josephson junction was first discussed theoretically by Bulaevskii "et al.",
who considered a Josephson junction with paramagnetic scattering in the barrier. Almost one decade later, the possibility of having a formula_2 Josephson junction was discussed in the context of heavy-fermion p-wave superconductors.
Experimentally, the first formula_2 Josephson junction was a corner junction made of yttrium barium copper oxide (d-wave) and Pb (s-wave) superconductors. The first unambiguous proof of a formula_2 Josephson junction with a ferromagnetic barrier was given only a decade later. That work used a weak ferromagnet consisting of a copper-nickel alloy (Cu"x"Ni1−"x", with "x" around 0.5) and optimized it so that the Curie temperature was close to the superconducting transition temperature of the superconducting niobium leads.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": " I_s = -|I_c|\\sin(\\varphi) = |I_c|\\sin(\\varphi+\\pi)"
},
{
"math_id": 1,
"text": "\\phi=\\pi"
},
{
"math_id": 2,
"text": "\\pi"
}
]
| https://en.wikipedia.org/wiki?curid=7300829 |
7300967 | Automatically Tuned Linear Algebra Software | Software library for linear algebra
Automatically Tuned Linear Algebra Software (ATLAS) is a software library for linear algebra. It provides a mature open source implementation of BLAS APIs for C and FORTRAN 77.
ATLAS is often recommended as a way to automatically generate an optimized BLAS library. While its performance often trails that of specialized libraries written for one specific hardware platform, it is often the first or even only optimized BLAS implementation available on new systems and is a large improvement over the generic BLAS available at Netlib. For this reason, ATLAS is sometimes used as a performance baseline for comparison with other products.
ATLAS runs on most Unix-like operating systems and on Microsoft Windows (using Cygwin). It is released under a BSD-style license without advertising clause, and many well-known mathematics applications including MATLAB, Mathematica, Scilab, SageMath, and some builds of GNU Octave may use it.
Functionality.
ATLAS provides a full implementation of the BLAS APIs as well as some additional functions from LAPACK, a higher-level library built on top of BLAS. In BLAS, functionality is divided into three groups called levels 1, 2 and 3.
formula_0
as well as scalar dot products and vector norms, among other things.
formula_1
as well as solving formula_2 for formula_3 with formula_4 being triangular, among other things.
formula_5
as well as solving formula_6 for triangular matrices formula_4, among other things.
Optimization approach.
The optimization approach is called Automated Empirical Optimization of Software (AEOS), which identifies four fundamental approaches to computer assisted optimization of which ATLAS employs three:
Every ATLAS level 1 BLAS function has its own kernel. Since it would be difficult to maintain thousands of cases in ATLAS, there is little architecture-specific optimization for Level 1 BLAS. Instead, multiple implementation is relied upon, allowing compiler optimization to produce a high-performance implementation for the system.
With formula_7 data and formula_7 operations to perform, the function is usually limited by memory bandwidth, and thus there is not much opportunity for optimization
All routines in the ATLAS level 2 BLAS are built from two Level 2 BLAS kernels:
* GEMV—matrix by vector multiply update:
formula_1
* GER—general rank 1 update from an outer product:
formula_8
Since we have formula_9 ops with only formula_7 data, there are many opportunities for optimization
Level 3 BLAS.
Most of the Level 3 BLAS is derived from GEMM, so that is the primary focus of the optimization.
formula_10 operations vs. formula_11 data
The intuition that the formula_12 operations will dominate over the formula_13 data accesses only works for roughly square matrices.
The real measure should be some kind of surface area to volume.
The difference becomes important for very non-square matrices.
Can it afford to copy?
Copying the inputs allows the data to be arranged in a way that provides optimal access for the kernel functions,
but this comes at the cost of allocating temporary space, and an extra read and write of the inputs.
So the first question GEMM faces is, can it afford to copy the inputs?
If so,
If not,
The actual decision is made through a simple heuristic which checks for "skinny cases".
Cache edge.
For 2nd Level Cache blocking a single cache edge parameter is used.
The high level chooses an order to traverse the blocks: "ijk, jik, ikj, jki, kij, kji".
These need not be the same order as the product is done within a block.
Typically chosen orders are "ijk" or "jik".
For "jik" the ideal situation would be to copy "A" and the "NB" wide panel of "B".
For "ijk" swap the role of "A" and "B".
Choosing the bigger of "M" or "N" for the outer loop reduces the footprint of the copy.
But for large "K" ATLAS does not even allocate such a large amount of memory.
Instead it defines a parameter, "Kp", to give best use of the L2 cache.
Panels are limited to "Kp" in length.
It first tries to allocate (in the "jik" case) formula_14.
If that fails it tries formula_15.
"Kp" is a function of cache edge and "NB".
LAPACK.
When integrating the ATLAS BLAS with LAPACK, an important consideration is the choice of blocking factor for LAPACK. If the ATLAS blocking factor is small enough, the blocking factor of LAPACK can be set to match that of ATLAS.
To take advantage of recursive factorization, ATLAS provides replacement routines for some LAPACK routines. These simply overwrite the corresponding LAPACK routines from Netlib.
Need for installation.
Installing ATLAS on a particular platform is a challenging process which is typically done by a system vendor or a local expert and made available to a wider audience.
For many systems, architectural default parameters are available; these are essentially saved searches plus the results of hand tuning.
If the arch defaults work, they will likely get 10–15% better performance than the install search. On such systems, the installation process is greatly simplified.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbf{y} \\leftarrow \\alpha \\mathbf{x} + \\mathbf{y} \\!"
},
{
"math_id": 1,
"text": "\\mathbf{y} \\leftarrow \\alpha A \\mathbf{x} + \\beta \\mathbf{y} \\!"
},
{
"math_id": 2,
"text": "T \\mathbf{x} = \\mathbf{y}"
},
{
"math_id": 3,
"text": "\\mathbf{x}"
},
{
"math_id": 4,
"text": "T"
},
{
"math_id": 5,
"text": "C \\leftarrow \\alpha A B + \\beta C \\!"
},
{
"math_id": 6,
"text": "B \\leftarrow \\alpha T^{-1} B"
},
{
"math_id": 7,
"text": "N^2"
},
{
"math_id": 8,
"text": "A \\leftarrow \\alpha \\mathbf{x} \\mathbf{y}^T + A \\! "
},
{
"math_id": 9,
"text": "N^3"
},
{
"math_id": 10,
"text": "O(n^3)"
},
{
"math_id": 11,
"text": "O(n^2)"
},
{
"math_id": 12,
"text": "n^3"
},
{
"math_id": 13,
"text": "n^2"
},
{
"math_id": 14,
"text": "M\\cdot p + NB\\cdot Kp + NB\\cdot NB"
},
{
"math_id": 15,
"text": "2\\cdot Kp\\cdot NB + NB\\cdot NB"
}
]
| https://en.wikipedia.org/wiki?curid=7300967 |
7302010 | Receptor–ligand kinetics | Branch of chemical kinetics
In biochemistry, receptor–ligand kinetics is a branch of chemical kinetics in which the kinetic species are defined by different non-covalent bindings and/or conformations of the molecules involved, which are denoted as "receptor(s)" and "ligand(s)". Receptor–ligand binding kinetics also involves the on- and off-rates of binding.
A main goal of receptor–ligand kinetics is to determine the concentrations of the various kinetic species (i.e., the states of the receptor and ligand) at all times, from a given set of initial concentrations and a given set of rate constants. In a few cases, an analytical solution of the rate equations may be determined, but this is relatively rare. However, most rate equations can be integrated numerically, or approximately, using the steady-state approximation. A less ambitious goal is to determine the final "equilibrium" concentrations of the kinetic species, which is adequate for the interpretation of equilibrium binding data.
A converse goal of receptor–ligand kinetics is to estimate the rate constants and/or dissociation constants of the receptors and ligands from experimental kinetic or equilibrium data. The total concentrations of receptor and ligands are sometimes varied systematically to estimate these constants.
Binding kinetics.
The binding constant is a special case of the equilibrium constant formula_0. It is associated with the binding and unbinding reaction of receptor (R) and ligand (L) molecules, which is formalized as:
<chem>{R} + {L} <=> {RL}</chem>.
The reaction is characterized by the on-rate constant formula_1 and the off-rate constant formula_2, which have units of 1/(concentration time) and 1/time, respectively. In equilibrium, the forward binding transition <chem>{R} + {L} -> {RL}</chem> should be balanced by the backward unbinding transition <chem>{RL} -> {R} + {L}</chem>. That is,
formula_3,
where <chem>[{R}]</chem>, <chem>[{L}]</chem> and <chem>[{RL}]</chem> represent the concentration of unbound free receptors, the concentration of unbound free ligand and the concentration of receptor-ligand complexes. The binding constant, or the association constant formula_4 is defined by
formula_5.
Simplest case: single receptor and single ligand bind to form a complex.
The simplest example of receptor–ligand kinetics is that of a single ligand L binding to a single receptor R to form a single complex C
<chem>{R} + {L} <=> {C}</chem>
The equilibrium concentrations are related by the dissociation constant "Kd"
formula_6
where "k1" and "k−1" are the forward and backward rate constants, respectively. The total concentrations of receptor and ligand in the system are constant
formula_7
formula_8
Thus, only one concentration of the three ([R], [L] and [C]) is independent; the other two concentrations may be determined from "Rtot", "Ltot" and the independent concentration.
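For given total concentrations and a given dissociation constant, the equilibrium concentrations can be computed directly: substituting [R] = "Rtot" − [C] and [L] = "Ltot" − [C] into the definition of "Kd" gives a quadratic equation for [C]. A small Python sketch (the numerical values are arbitrary):

```python
import numpy as np

def complex_at_equilibrium(R_tot, L_tot, K_d):
    # Solve K_d = (R_tot - C)(L_tot - C)/C for the physically meaningful root,
    # i.e. the smaller root of C^2 - (R_tot + L_tot + K_d) C + R_tot L_tot = 0.
    s = R_tot + L_tot + K_d
    return (s - np.sqrt(s * s - 4.0 * R_tot * L_tot)) / 2.0

C = complex_at_equilibrium(R_tot=1.0, L_tot=2.0, K_d=0.5)
R, L = 1.0 - C, 2.0 - C
assert abs(R * L / C - 0.5) < 1e-12   # the computed concentrations recover K_d
```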
This system is one of the few systems whose kinetics can be determined analytically. Choosing [R] as the independent concentration and representing the concentrations by italic variables for brevity (e.g., formula_9), the kinetic rate equation can be written
formula_10
Dividing both sides by "k"1 and introducing the constant "2E = Rtot - Ltot - Kd", the rate equation becomes
formula_11
where the two equilibrium concentrations formula_12 are given by the quadratic formula and "D" is defined
formula_13
However, only the formula_14 equilibrium has a positive concentration, corresponding to the equilibrium observed experimentally.
Separation of variables and a partial-fraction expansion yield the integrable ordinary differential equation
formula_15
whose solution is
formula_16
or, equivalently,
formula_17
formula_18
for association, and
formula_19
for dissociation, respectively; where the integration constant φ0 is defined
formula_20
From this solution, the corresponding solutions for the other concentrations formula_21 and formula_22 can be obtained.
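The analytical solution can be cross-checked by integrating the rate equation numerically and comparing the long-time value of "R" with the equilibrium root formula_14. A brief sketch using SciPy; the rate constants and total concentrations are arbitrary illustrative values:

```python
import numpy as np
from scipy.integrate import solve_ivp

k1, km1 = 2.0, 0.5                 # forward and backward rate constants
Kd = km1 / k1
R_tot, L_tot = 1.0, 1.5            # total receptor and ligand concentrations
E = (R_tot - L_tot - Kd) / 2.0
D = np.sqrt(E**2 + R_tot * Kd)
R_plus = E + D                     # predicted equilibrium free-receptor concentration

def dR_dt(t, R):
    L = L_tot - R_tot + R          # free ligand
    C = R_tot - R                  # complex
    return -k1 * R * L + km1 * C

sol = solve_ivp(dR_dt, [0.0, 50.0], [R_tot], rtol=1e-9, atol=1e-12)
print(sol.y[0, -1], R_plus)        # the two values agree closely
```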
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K"
},
{
"math_id": 1,
"text": "k_{\\rm on}"
},
{
"math_id": 2,
"text": "k_{\\rm off}"
},
{
"math_id": 3,
"text": "k_\\ce{on}\\,[\\ce{R}]\\,[\\ce{L}] = k_\\ce{off}\\,[\\ce{RL}]"
},
{
"math_id": 4,
"text": "K_{\\rm a}"
},
{
"math_id": 5,
"text": "K_{\\rm a} = {k_\\ce{on} \\over k_\\ce{off}} = \\ce{[{RL}] \\over {[{R}]\\,[{L}]}}"
},
{
"math_id": 6,
"text": "\nK_{d} \\ \\stackrel{\\mathrm{def}}{=}\\ \\frac{k_{-1}}{k_{1}} = \\frac{[\\ce{R}]_{eq} [\\ce{L}]_{eq}}{[\\ce{C}]_{eq}}\n"
},
{
"math_id": 7,
"text": "\nR_{tot} \\ \\stackrel{\\mathrm{def}}{=}\\ [\\ce{R}] + [\\ce{C}]\n"
},
{
"math_id": 8,
"text": "\nL_{tot} \\ \\stackrel{\\mathrm{def}}{=}\\ [\\ce{L}] + [\\ce{C}]\n"
},
{
"math_id": 9,
"text": "R \\ \\stackrel{\\mathrm{def}}{=}\\ [\\ce{R}]"
},
{
"math_id": 10,
"text": "\n\\frac{dR}{dt} = -k_{1} R L + k_{-1} C = -k_{1} R (L_{tot} - R_{tot} + R) + k_{-1} (R_{tot} - R)\n"
},
{
"math_id": 11,
"text": "\n\\frac{1}{k_{1}} \\frac{dR}{dt} = -R^{2} + 2ER + K_{d}R_{tot} =\n-\\left( R - R_{+}\\right) \\left( R - R_{-}\\right)\n"
},
{
"math_id": 12,
"text": "R_{\\pm} \\ \\stackrel{\\mathrm{def}}{=}\\ E \\pm D"
},
{
"math_id": 13,
"text": "\nD \\ \\stackrel{\\mathrm{def}}{=}\\ \\sqrt{E^{2} + R_{tot} K_{d}}\n"
},
{
"math_id": 14,
"text": "R_{+}"
},
{
"math_id": 15,
"text": "\n\\left\\{ \\frac{1}{R - R_{+}} - \\frac{1}{R - R_{-}} \\right\\} dR = -2 D k_{1} dt\n"
},
{
"math_id": 16,
"text": "\n\\log \\left| R - R_{+} \\right| - \\log \\left| R - R_{-} \\right| = -2Dk_{1}t + \\phi_{0}\n"
},
{
"math_id": 17,
"text": "\ng = exp(-2Dk_{1}t+\\phi_{0})\n"
},
{
"math_id": 18,
"text": "\nR(t) = \\frac{R_{+} - gR_{-}}{1 - g} \n"
},
{
"math_id": 19,
"text": "\nR(t) = \\frac{R_{+} + gR_{-}}{1 + g} \n"
},
{
"math_id": 20,
"text": "\n\\phi_{0} \\ \\stackrel{\\mathrm{def}}{=}\\ \\log \\left| R(t=0) - R_{+} \\right| - \\log \\left| R(t=0) - R_{-} \\right|\n"
},
{
"math_id": 21,
"text": "C(t)"
},
{
"math_id": 22,
"text": "L(t)"
}
]
| https://en.wikipedia.org/wiki?curid=7302010 |
73020212 | Copper(II) stearate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Copper(II) stearate is a metal-organic compound, a salt of copper and stearic acid with the formula Cu(C17H35COO)2. The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid.
Synthesis.
Exchange reaction of sodium stearate and copper sulfate:
formula_0
Physical properties.
Copper(II) stearate forms a blue-green amorphous substance, similar to plasticine in both appearance and feel.
It is insoluble in water, ethanol, and ether, but soluble in pyridine.
Chemical properties.
The compound is stable and non-reactive under normal conditions.
When ignited, copper(II) stearate first melts and then burns with a flame that is green at its base; it then quickly turns black due to the formation of copper(II) oxide:
formula_1
Uses.
The compound is used in the production of antifouling paints and varnishes.
It is also used as a component in casting bronze sculptures, and as a catalyst for the decomposition of hydroperoxides.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ CuSO_4 + 2C_{17}H_{35}O_2Na \\ \\xrightarrow{}\\ Cu(C_{17}H_{35}O_2)_2\\downarrow + Na_2SO_4 }"
},
{
"math_id": 1,
"text": "\\mathsf{ (C_{17}H_{35}COO)_2 Cu + 52O_2 \\ \\xrightarrow{t}\\ CuO\\downarrow + 36CO_2\\uparrow + 35H_2O\\uparrow }"
}
]
| https://en.wikipedia.org/wiki?curid=73020212 |
7302795 | Local martingale | Stochastic process with sequence of stopping times so each stopped processes is martingale
In mathematics, a local martingale is a type of stochastic process, satisfying the localized version of the martingale property. Every martingale is a local martingale; every bounded local martingale is a martingale; in particular, every local martingale that is bounded from below is a supermartingale, and every local martingale that is bounded from above is a submartingale; however, a local martingale is not in general a martingale, because its expectation can be distorted by large values of small probability. In particular, a driftless diffusion process is a local martingale, but not necessarily a martingale.
Local martingales are essential in stochastic analysis (see Itô calculus, semimartingale, and Girsanov theorem).
Definition.
Let formula_0 be a probability space; let formula_1 be a filtration of formula_2; let formula_3 be an formula_4-adapted stochastic process on the set formula_5. Then formula_6 is called an formula_4-local martingale if there exists a sequence of formula_4-stopping times formula_7 such that
* the stopping times formula_8 are almost surely increasing: formula_9;
* the stopping times formula_8 diverge almost surely: formula_10;
* the stopped process formula_11 is an formula_4-martingale for every formula_12.
Examples.
Example 1.
Let "W""t" be the Wiener process and "T" = min{ "t" : "W""t" = −1 } the time of first hit of −1. The stopped process "W"min{ "t", "T" } is a martingale. Its expectation is 0 at all times; nevertheless, its limit (as "t" → ∞) is equal to −1 almost surely (a kind of gambler's ruin). A time change leads to a process
formula_13
The process formula_14 is continuous almost surely; nevertheless, its expectation is discontinuous,
formula_15
This process is not a martingale. However, it is a local martingale. A localizing sequence may be chosen as formula_16 if there is such "t", otherwise formula_17. This sequence diverges almost surely, since formula_17 for all "k" large enough (namely, for all "k" that exceed the maximal value of the process "X"). The process stopped at τ"k" is a martingale.
Example 2.
Let "W""t" be the Wiener process and "ƒ" a measurable function such that formula_18 Then the following process is a martingale:
formula_19
where
formula_20
The Dirac delta function formula_21 (strictly speaking, not a function), being used in place of formula_22 leads to a process defined informally as formula_23 and formally as
formula_24
where
formula_25
The process formula_26 is continuous almost surely (since formula_27 almost surely), nevertheless, its expectation is discontinuous,
formula_28
This process is not a martingale. However, it is a local martingale. A localizing sequence may be chosen as formula_29
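The constant value 1/√(2π) of the expectation for "t" < 1 can be checked by Monte Carlo simulation, using the fact that "W""t" is normally distributed with variance "t". A small sketch; the sample size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)

def delta_s(x, s):
    # heat kernel: delta_s(x) = exp(-x^2/(2s)) / sqrt(2*pi*s)
    return np.exp(-x**2 / (2.0 * s)) / np.sqrt(2.0 * np.pi * s)

for t in (0.2, 0.5, 0.9):
    W_t = rng.normal(0.0, np.sqrt(t), size=1_000_000)   # samples of W_t ~ N(0, t)
    estimate = delta_s(W_t, 1.0 - t).mean()             # Monte Carlo estimate of E[Y_t]
    print(t, estimate, 1.0 / np.sqrt(2.0 * np.pi))      # approx 0.3989 for every t < 1
```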
Example 3.
Let formula_30 be the complex-valued Wiener process, and
formula_31
The process formula_14 is continuous almost surely (since formula_30 does not hit 1, almost surely), and is a local martingale, since the function formula_32 is harmonic (on the complex plane without the point 1). A localizing sequence may be chosen as formula_33 Nevertheless, the expectation of this process is non-constant; moreover,
formula_34 as formula_35
which can be deduced from the fact that the mean value of formula_36 over the circle formula_37 tends to infinity as formula_38. (In fact, it is equal to formula_39 for "r" ≥ 1 but to 0 for "r" ≤ 1).
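The stated behaviour of the mean of formula_36 over the circle formula_37 can be verified numerically; the grid size and the sample radii in the sketch below are arbitrary.

```python
import numpy as np

theta = np.linspace(0.0, 2.0 * np.pi, 200_000, endpoint=False)

def circle_average(r):
    # mean of ln|u - 1| over the circle |u| = r
    u = r * np.exp(1j * theta)
    return np.mean(np.log(np.abs(u - 1.0)))

for r in (0.5, 0.9, 2.0, 10.0):
    print(r, circle_average(r), max(np.log(r), 0.0))   # ln r for r >= 1, 0 for r <= 1
```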
Martingales via local martingales.
Let formula_40 be a local martingale. In order to prove that it is a martingale it is sufficient to prove that formula_41 in "L"1 (as formula_42) for every "t", that is, formula_43 here formula_44 is the stopped process. The given relation formula_45 implies that formula_41 almost surely. The dominated convergence theorem ensures the convergence in "L"1 provided that
formula_46 for every "t".
Thus, Condition (*) is sufficient for a local martingale formula_40 to be a martingale. A stronger condition
formula_47 for every "t"
is also sufficient.
"Caution." The weaker condition
formula_48 for every "t"
is not sufficient. Moreover, the condition
formula_49
is still not sufficient; for a counterexample see Example 3 above.
A special case:
formula_50
where formula_51 is the Wiener process, and formula_52 is twice continuously differentiable. The process formula_40 is a local martingale if and only if "f" satisfies the PDE
formula_53
However, this PDE itself does not ensure that formula_40 is a martingale. In order to apply (**) the following condition on "f" is sufficient: for every formula_54 and "t" there exists formula_55 such that
formula_56
for all formula_57 and formula_58 | [
{
"math_id": 0,
"text": "(\\Omega,F,P)"
},
{
"math_id": 1,
"text": "F_*=\\{F_t\\mid t\\geq 0\\}"
},
{
"math_id": 2,
"text": "F"
},
{
"math_id": 3,
"text": "X\\colon [0,\\infty)\\times \\Omega \\rightarrow S"
},
{
"math_id": 4,
"text": "F_*"
},
{
"math_id": 5,
"text": "S"
},
{
"math_id": 6,
"text": "X"
},
{
"math_id": 7,
"text": "\\tau_k \\colon \\Omega \\to [0,\\infty)"
},
{
"math_id": 8,
"text": "\\tau_k"
},
{
"math_id": 9,
"text": "P\\left\\{\\tau_k < \\tau_{k+1} \\right\\}=1"
},
{
"math_id": 10,
"text": "P \\left\\{\\lim_{k\\to\\infty} \\tau_k =\\infty \\right\\}=1"
},
{
"math_id": 11,
"text": " X_t^{\\tau_k} := X_{\\min \\{ t, \\tau_k \\}}"
},
{
"math_id": 12,
"text": "k"
},
{
"math_id": 13,
"text": "\\displaystyle X_t = \\begin{cases}\n W_{\\min\\left(\\tfrac{t}{1-t},T\\right)} &\\text{for } 0 \\le t < 1,\\\\\n -1 &\\text{for } 1 \\le t < \\infty.\n \\end{cases} "
},
{
"math_id": 14,
"text": " X_t "
},
{
"math_id": 15,
"text": "\\displaystyle \\operatorname{E} X_t = \\begin{cases}\n 0 &\\text{for } 0 \\le t < 1,\\\\\n -1 &\\text{for } 1 \\le t < \\infty.\n \\end{cases} "
},
{
"math_id": 16,
"text": " \\tau_k = \\min \\{ t : X_t = k \\} "
},
{
"math_id": 17,
"text": "\\tau_k = k"
},
{
"math_id": 18,
"text": " \\operatorname{E} |f(W_1)| < \\infty. "
},
{
"math_id": 19,
"text": " X_t = \\operatorname{E} ( f(W_1) \\mid F_t ) = \\begin{cases}\n f_{1-t}(W_t) &\\text{for } 0 \\le t < 1,\\\\\n f(W_1) &\\text{for } 1 \\le t < \\infty;\n \\end{cases} "
},
{
"math_id": 20,
"text": " f_s(x) = \\operatorname{E} f(x+W_s) = \\int f(x+y) \\frac1{\\sqrt{2\\pi s}} \\mathrm{e}^{-y^2/(2s)} \\, dy. "
},
{
"math_id": 21,
"text": " \\delta "
},
{
"math_id": 22,
"text": " f, "
},
{
"math_id": 23,
"text": " Y_t = \\operatorname{E} ( \\delta(W_1) \\mid F_t ) "
},
{
"math_id": 24,
"text": " Y_t = \\begin{cases}\n \\delta_{1-t}(W_t) &\\text{for } 0 \\le t < 1,\\\\\n 0 &\\text{for } 1 \\le t < \\infty,\n \\end{cases} "
},
{
"math_id": 25,
"text": " \\delta_s(x) = \\frac1{\\sqrt{2\\pi s}} \\mathrm{e}^{-x^2/(2s)} . "
},
{
"math_id": 26,
"text": " Y_t "
},
{
"math_id": 27,
"text": " W_1 \\ne 0 "
},
{
"math_id": 28,
"text": " \\operatorname{E} Y_t = \\begin{cases}\n 1/\\sqrt{2\\pi} &\\text{for } 0 \\le t < 1,\\\\\n 0 &\\text{for } 1 \\le t < \\infty.\n \\end{cases} "
},
{
"math_id": 29,
"text": " \\tau_k = \\min \\{ t : Y_t = k \\}. "
},
{
"math_id": 30,
"text": " Z_t "
},
{
"math_id": 31,
"text": " X_t = \\ln | Z_t - 1 | \\, . "
},
{
"math_id": 32,
"text": " u \\mapsto \\ln|u-1| "
},
{
"math_id": 33,
"text": " \\tau_k = \\min \\{ t : X_t = -k \\}. "
},
{
"math_id": 34,
"text": " \\operatorname{E} X_t \\to \\infty "
},
{
"math_id": 35,
"text": " t \\to \\infty, "
},
{
"math_id": 36,
"text": " \\ln|u-1| "
},
{
"math_id": 37,
"text": " |u|=r "
},
{
"math_id": 38,
"text": " r \\to \\infty "
},
{
"math_id": 39,
"text": " \\ln r "
},
{
"math_id": 40,
"text": " M_t "
},
{
"math_id": 41,
"text": " M_t^{\\tau_k} \\to M_t "
},
{
"math_id": 42,
"text": " k \\to \\infty "
},
{
"math_id": 43,
"text": " \\operatorname{E} | M_t^{\\tau_k} - M_t | \\to 0; "
},
{
"math_id": 44,
"text": " M_t^{\\tau_k} = M_{t\\wedge \\tau_k} "
},
{
"math_id": 45,
"text": " \\tau_k \\to \\infty "
},
{
"math_id": 46,
"text": "\\textstyle (*) \\quad \\operatorname{E} \\sup_k| M_t^{\\tau_k} | < \\infty "
},
{
"math_id": 47,
"text": "\\textstyle (**) \\quad \\operatorname{E} \\sup_{s\\in[0,t]} |M_s| < \\infty "
},
{
"math_id": 48,
"text": "\\textstyle \\sup_{s\\in[0,t]} \\operatorname{E} |M_s| < \\infty "
},
{
"math_id": 49,
"text": "\\textstyle \\sup_{t\\in[0,\\infty)} \\operatorname{E} \\mathrm{e}^{|M_t|} < \\infty "
},
{
"math_id": 50,
"text": "\\textstyle M_t = f(t,W_t), "
},
{
"math_id": 51,
"text": " W_t "
},
{
"math_id": 52,
"text": " f : [0,\\infty) \\times \\mathbb{R} \\to \\mathbb{R} "
},
{
"math_id": 53,
"text": " \\Big( \\frac{\\partial}{\\partial t} + \\frac12 \\frac{\\partial^2}{\\partial x^2} \\Big) f(t,x) = 0. "
},
{
"math_id": 54,
"text": " \\varepsilon>0 "
},
{
"math_id": 55,
"text": " C = C(\\varepsilon,t) "
},
{
"math_id": 56,
"text": "\\textstyle |f(s,x)| \\le C \\mathrm{e}^{\\varepsilon x^2} "
},
{
"math_id": 57,
"text": " s \\in [0,t] "
},
{
"math_id": 58,
"text": " x \\in \\mathbb{R}. "
}
]
| https://en.wikipedia.org/wiki?curid=7302795 |
73029149 | Schulz–Zimm distribution | Conventional name of the gamma distribution when applied to macromolecular polydispersity
The Schulz–Zimm distribution is a special case of the gamma distribution. It is widely used to model the polydispersity of polymers. In this context it has been introduced in 1939 by Günter Victor Schulz and in 1948 by Bruno H. Zimm.
This distribution has only a shape parameter "k", the scale being fixed at "θ"=1/"k". Accordingly, the probability density function is
formula_0
When applied to polymers, the variable "x" is the relative mass or chain length formula_1. Accordingly, the mass distribution formula_2 is just a gamma distribution with scale parameter formula_3. This explains why the name Schulz–Zimm is practically unheard of outside its conventional application domain.
The distribution has mean 1 and variance 1/"k". The polymer dispersity is formula_4.
For large "k" the Schulz–Zimm distribution approaches a Gaussian distribution. In algorithms where one needs to draw samples formula_5, the Schulz–Zimm distribution is to be preferred over a Gaussian because the latter requires an arbitrary cut-off to prevent negative "x".
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "f(x)=\\frac{k^k x^{k - 1} e^{-kx}}{\\Gamma(k)}."
},
{
"math_id": 1,
"text": "x=M/M_n"
},
{
"math_id": 2,
"text": "f(M)"
},
{
"math_id": 3,
"text": "\\theta=M_n/k"
},
{
"math_id": 4,
"text": "\\langle x^2\\rangle / \\langle x\\rangle = 1+1/k"
},
{
"math_id": 5,
"text": "x\\ge 0"
}
]
| https://en.wikipedia.org/wiki?curid=73029149 |
73030801 | Lexicographic max-min optimization | Optimization method
Lexicographic max-min optimization (also called lexmaxmin or leximin or leximax or lexicographic max-ordering optimization) is a kind of multi-objective optimization. In general, multi-objective optimization deals with optimization problems with two or more objective functions to be optimized simultaneously. Lexmaxmin optimization presumes that the decision-maker would like the smallest objective value to be as high as possible; subject to this, the second-smallest objective should be as high as possible; and so on. In other words, the decision-maker ranks the possible solutions according to a leximin order of their objective function values.
As an example, consider egalitarian social planners, who want to decide on a policy such that the utility of the poorest person will be as high as possible; subject to this, they want to maximize the utility of the second-poorest person; and so on. This planner solves a lexmaxmin problem, where the objective function number "i" is the utility of agent number "i".
Algorithms for lexmaxmin optimization (not using this name) were developed for computing the nucleolus of a cooperative game. An early application of lexmaxmin was presented by Melvin Dresher in his book on game theory, in the context of taking maximum advantage of the opponent's mistakes in a zero-sum game. Behringer cites many other examples in game theory as well as decision theory.
Notation.
A lexmaxmin problem may be written as:formula_0where formula_1 are the functions to maximize; formula_2 is the vector of decision variables; and formula_3 is the "feasible set" - the set of possible values of formula_2.
Comparison with lexicographic optimization.
Lexmaxmin optimization is closely related to lexicographic optimization. However, in lexicographic optimization, there is a fixed order on the functions, such that formula_4 is the most important, formula_5 is the next-most important, and so on. In contrast, in lexmaxmin, all the objectives are equally important. To present lexmaxmin as a special case of lexicographic optimization, denote by formula_6 the smallest objective value in "x". Similarly, denote by formula_7 the second-smallest objective value in x, and so on, so that formula_8. Then, the lexmaxmin optimization problem can be written as the following lexicographic maximization problem:formula_9
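In computational terms, the leximin order itself is easy to realize: two value vectors are compared by sorting each one and comparing the sorted vectors lexicographically, smallest entries first. A minimal Python sketch (the solution names are purely illustrative):

```python
def leximin_key(values):
    # sorting key for the leximin order: the sorted value vector,
    # compared lexicographically from the smallest entry upwards
    return sorted(values)

solutions = {"A": (1, 2, 3), "B": (2, 1, 3), "C": (1, 1, 4)}
best = max(solutions, key=lambda name: leximin_key(solutions[name]))
print(best)   # "A"; note that (1,2,3) and (2,1,3) are leximin-equivalent,
              # and both are preferred to (1,1,4)
```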
Uniqueness.
In general, a lexmaxmin optimization problem may have more than one optimal solution. If formula_10 and formula_11 are two optimal solutions, then their "ordered" value vectors must be the same, that is, formula_12 for all formula_13:Thm.2 the smallest value is the same, the second-smallest value is the same, and so on. However, the unsorted value vectors may be different. For example, (1,2,3) and (2,1,3) may both be optimal solutions to the same problem.
However, if the feasible domain is a convex set, and the objectives are concave functions, then the value vectors in all optimal solutions must be the same, since if there were two different optimal solutions, their mean would be another feasible solution in which the objective functions attain a higher value - contradicting the optimality of the original solutions.Thm.6
Algorithms for continuous variables.
Saturation Algorithm for convex problems.
The Saturation Algorithm works when the feasible set is a convex set and the objectives are concave functions. Variants of this algorithm appear in many papers. The earliest appearance is attributed to Alexander Kopelowitz by Elkind and Pasechnik; other variants appear in later papers.Alg.2
The algorithm keeps a set of objectives that are considered "saturated" (also called: "blocking"). This means that their value cannot be improved without harming lower-valued objectives. The other objectives are called "free". Initially, all objectives are free. In general, the algorithm works as follows:
* Solve the following problem, denoted (P1): formula_16
* Denote the optimal value of (P1) by formula_17. Every free objective can attain at least this value. Find the free objectives formula_15 whose value cannot exceed formula_17 - these objectives become saturated - and fix their values formula_14 to formula_17.
* If all objectives are saturated, stop and return a solution attaining the values formula_14; otherwise, repeat.
It remains to explain how we can find new saturated objectives in each iteration.
Method 1: interior optimizers. An "interior optimizer" of a linear program is an optimal solution in which the smallest possible number of constraints are tight. In other words, it is a solution in the interior of the optimal face. An interior optimizer of (P1) can be found by solving (P1) using the ellipsoid method or interior point methods.
The set of tight constraints in an interior optimizer is unique. "Proof": Suppose by contradiction that there are two interior-optimizers, x1 and x2, with different sets of tight constraints. Since the feasible set is convex, the average solution x3 = (x1+x2)/2 is also an optimizer. Every constraint that is not tight in either x1 or x2, is not tight in x3. Therefore, the number of tight constraints in x3 is smaller than in x1 and x2, contradicting the definition of an interior optimizer.
Therefore, the set of tight constraints in the interior optimizer corresponds to the set of free objectives that become saturated. Using this method, the leximin solution can be computed using at most "n" iterations.
Method 2: iterating over all objectives. It is possible to find at least one saturated objective using the following algorithm.
* For every free objective formula_18, solve the following problem, denoted (P2): formula_19
* If the optimal value of (P2) equals formula_17, then objective formula_18 is saturated, and its value is fixed to formula_17; otherwise, it remains free.
At each step, at least one free objective must become saturated. This is because, if no free objective became saturated, then the mean of the optimal solutions to the problems (P2) would be a feasible solution in which all objective values are larger than formula_17 - contradicting the optimality of the solution to (P1). For example, suppose formula_20, that objective 1 is not saturated because there is a solution with value-vector (3,1), and that objective 2 is not saturated because there is a solution with value-vector (1,3). Then the mean of these two solutions has value-vector at least (2,2), so formula_17 should have been at least 2 - a contradiction.
Therefore, after at most "n" iterations, all variables are saturated and a leximin-optimal solution is found. In each iteration "t", the algorithm solves at most "n"-"t"+1 linear programs; therefore, the run-time of the algorithm is at most formula_21 times the run-time of the LP solver.
In some cases, the run-time of the saturation algorithm can be improved. Instead of finding "all" saturated objectives, we can break out of the inner loop after finding "one" saturated objective; the algorithm still stops after at most "n" iterations, and may reduce the number of linear programs (P2) we need to solve.Alg.3
Furthermore, instead of looping over all objectives to find a saturated one, the algorithm can find a saturated objective using the dual problem of (P1). In some cases, the dual variables are given as a byproduct of solving (P1), for example, when the objectives and constraints are linear and the solver is the simplex algorithm. In this case, (P2) is not needed at all, and the run-time of the algorithm is at most formula_22 times the run-time of the solver of (P1).Alg.4
All these variants work only for convex problems. For non-convex problems, there might be no saturated objective, so the algorithm might not stop.
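When the objectives and constraints are all linear, the Saturation Algorithm can be sketched with an off-the-shelf LP solver. The following Python sketch is only illustrative and not robust; the helper names, the tolerance, and the final example are ours. It implements (P1) with the value "z" as an extra variable and uses a (P2)-style test to decide which free objectives have become saturated.

```python
import numpy as np
from scipy.optimize import linprog

def maximize(obj, A_ub, b_ub, A_eq=None, b_eq=None):
    # max obj @ v  subject to  A_ub @ v <= b_ub  and  A_eq @ v == b_eq
    res = linprog(-np.asarray(obj, dtype=float), A_ub=A_ub, b_ub=b_ub,
                  A_eq=A_eq, b_eq=b_eq, bounds=[(None, None)] * len(obj))
    return -res.fun, res.x

def leximin_saturation(C, A_ub, b_ub, tol=1e-7):
    # Leximin-maximize the linear objectives f_j(x) = C[j] @ x
    # over the polytope {x : A_ub @ x <= b_ub}.
    n, d = C.shape
    z = {}                                    # saturated objective -> fixed value
    x_best = None
    while len(z) < n:
        free = [j for j in range(n) if j not in z]
        # (P1): variables (x, t); maximize t with f_j(x) >= t for free j
        # and f_j(x) = z_j for saturated j.
        A1 = np.hstack([A_ub, np.zeros((len(b_ub), 1))])
        b1 = np.asarray(b_ub, dtype=float)
        for j in free:                        # t - f_j(x) <= 0
            A1 = np.vstack([A1, np.append(-C[j], 1.0)])
            b1 = np.append(b1, 0.0)
        A_eq = np.array([np.append(C[j], 0.0) for j in z]) if z else None
        b_eq = np.array([z[j] for j in z]) if z else None
        z_max, v = maximize(np.append(np.zeros(d), 1.0), A1, b1, A_eq, b_eq)
        x_best = v[:d]
        # (P2)-style test: a free objective is newly saturated if it cannot
        # exceed z_max while all other objectives keep their guarantees.
        A2 = np.vstack([A_ub] + [-C[j] for j in z] + [-C[j] for j in free])
        b2 = np.concatenate([b_ub, [-z[j] for j in z], [-z_max] * len(free)])
        newly = [j for j in free if maximize(C[j], A2, b2)[0] <= z_max + tol]
        if not newly:                         # numerical safeguard
            newly = free
        for j in newly:
            z[j] = z_max
    return x_best, np.array([z[j] for j in range(n)])

# Example: divide one unit of a resource among three agents with utilities
# 1*x1, 2*x2 and 4*x3; the leximin solution equalizes all utilities at 4/7.
C = np.diag([1.0, 2.0, 4.0])
A_ub = np.vstack([np.ones((1, 3)), -np.eye(3)])   # sum(x) <= 1 and x >= 0
b_ub = np.array([1.0, 0.0, 0.0, 0.0])
x, values = leximin_saturation(C, A_ub, b_ub)
print(np.round(values, 3))                        # [0.571 0.571 0.571]
```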
Ordered Outcomes Algorithm for general problems.
The Ordered Outcomes Algorithm works in arbitrary domains (not necessarily convex). It was developed by Ogryczak and Śliwiński and also presented in the context of telecommunication networks by Ogryczak, Pioro and Tomaszewski, and in the context of location problems by Ogryczak. The algorithm reduces lexmaxmin optimization to the easier problem of lexicographic optimization. Lexicographic optimization can be done with a simple sequential algorithm, which solves at most "n" linear programs. The reduction starts with the following presentation of lexmaxmin:formula_23
This problem cannot be solved as-is, because formula_24 (the "t"-th smallest value in formula_25) is not a simple function of "x". The problem (L1) is equivalent to the following problem, where formula_26 the sum of the "t" smallest values in formula_25:formula_27
This problem can be solved iteratively using lexicographic optimization, but the number of constraints in each iteration "t" is C("n","t") -- the number of subsets of size "t". This grows exponentially with "n". It is possible to reduce the problem to a different problem, in which the number of constraints is polynomial in "n".
For every "t", the sum formula_28 can be computed as the optimal value to the following problem, with "n"+1 auxiliary variables (an unbounded variable formula_29, and non-negative variables formula_30 for all "j" in 1...,"n"), and "n" additional constraints:Thm.8formula_31"Proof". Let us compute the values of the auxiliary variables in the optimal solution.
Therefore, the problem (L2) is equivalent to the following lexicographic maximization problem:formula_41
This problem (L4) has formula_42 additional variables, and formula_43 additional constraints. It can be solved by every algorithm for solving lexicographic maximization, for example: the sequential algorithm using "n" linear programs, or the lexicographic simplex algorithm (if the objectives and constraints are linear).
Approximate leximin solutions.
One advantage of the Ordered Outcomes Algorithm is that it can be used even when the single-problem solver is inaccurate, and returns only approximate solutions. Specifically, if the single-problem solver approximates the optimal single-problem solution with multiplicative factor α ∈ (0,1] and additive factor ϵ ≥ 0, then the algorithm returns a solution that approximates the leximin-optimal solution with multiplicative factor α²/(1 − α + α²) and additive factor ϵ/(1 − α + α²).
Ordered Values Algorithm for general problems.
The Ordered Values Algorithm works in any domain in which the set of possible values of the objective functions is finite. It was developed by Ogryczak and Śliwiński. Let formula_44 be the set of all values that can be returned by the functions formula_1, such that formula_45. Given a solution "x", and an integer "k" in {1,...,"r"}, define formula_46 as the number of occurrences of the value "vk" in the vector formula_47. Then, the lexmaxmin problem can be stated as the following lexicographic minimization problem:formula_48since we want to have as few as possible functions attaining the smallest value; subject to this, as few as possible functions attaining the next-smallest value; and so on. Ogryczak and Śliwiński show how to transform this non-linear program into a linear program with auxiliary variables. In their computational experiments, the Ordered Values algorithm runs much faster than the Saturation algorithm and the Ordered Outcomes algorithm.
Behringer's algorithm for quasiconcave functions.
Behringer presented a sequential algorithm for lexmaxmin optimization when the objectives are quasiconcave functions and the feasible set "X" is a convex set.
Weighted average.
Yager presented a way to represent the leximin ordering analytically using the Ordered weighted averaging aggregation operator. He assumes that all objective values are real numbers between 0 and 1, and the smallest difference between any two possible values is some constant "d" < 1 (so that values with difference smaller than "d" are considered equal). The weight formula_49 of formula_50 is set to approximately formula_51. This guarantees that maximizing the weighted sum formula_52 is equivalent to lexmaxmin.
Algorithms for discrete variables.
If the set of vectors is "discrete", and the domain is sufficiently small, then it is possible to use one of the functions representing the leximin order, and maximize it subject to the constraints, using a solver for constraint-satisfaction problems.
But if the domain is large, the above approach becomes unfeasible due to the large number of possible values that this function can have: formula_53, where "m" is the number of different values in the domain, and "n" is the number of variables.
Bouveret and Lemaître present five different algorithms for finding leximin-optimal solutions to discrete constraint-satisfaction problems:
In their experiments, the best-performing approach was 4 (ATLEAST), followed by 3 (SORT), followed by 1 (LEXIMIN).
Dall'aglio presents an algorithm for computing a leximin-optimal resource allocation.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\n\\begin{align}\n\\operatorname{lex}\n\\max \\min &&\nf_1(x), f_2(x), \\ldots, f_n(x)\n\\\\\n\\text{subject to} && x\\in X\n\\end{align}\n"
},
{
"math_id": 1,
"text": "f_1,\\ldots, f_n "
},
{
"math_id": 2,
"text": "x "
},
{
"math_id": 3,
"text": "X "
},
{
"math_id": 4,
"text": "f_1 "
},
{
"math_id": 5,
"text": "f_2 "
},
{
"math_id": 6,
"text": "f_{[1]}(x) := \\min(f_1(x),\\ldots,f_n(x)) = "
},
{
"math_id": 7,
"text": "f_{[2]}(x) := "
},
{
"math_id": 8,
"text": "f_{[1]}(x) \\leq f_{[2]}(x)\\leq \\cdots \\leq f_{[n]}(x) "
},
{
"math_id": 9,
"text": "\n\\begin{align}\n\\operatorname{lex}\n\\max &&\nf_{[1]}(x), \\ldots, f_{[n]}(x)\n\\\\\n\\text{subject to} && x\\in X\n\\end{align}\n"
},
{
"math_id": 10,
"text": "x^1 "
},
{
"math_id": 11,
"text": "x^2 "
},
{
"math_id": 12,
"text": "f_{[i]}(x^1) = f_{[i]}(x^2) "
},
{
"math_id": 13,
"text": "i\\in[n] "
},
{
"math_id": 14,
"text": "\nz_k\n"
},
{
"math_id": 15,
"text": "\nf_k\n"
},
{
"math_id": 16,
"text": "\n\\begin{align}\n\\max ~~~\nz\n\\\\\n\\text{subject to} ~~~ &x\\in X,\n\\\\ &f_k(x) = z_k \\text{ for all saturated objectives } k,\n\\\\ &f_k(x) \\geq z \\text{ for all free objectives } k\n\\end{align}\n"
},
{
"math_id": 17,
"text": "\nz_{\\max}\n"
},
{
"math_id": 18,
"text": "\nf_j\n"
},
{
"math_id": 19,
"text": "\n\\begin{align}\n\\max ~~~\nf_j(x)\n\\\\\n\\text{subject to} ~~~ &x\\in X,\n\\\\ &f_k(x) \\geq z_k \\text{ for all saturated objectives } k,\n\\\\ &f_k(x) \\geq z_{\\max} \\text{ for all free objectives } k\n\\end{align}\n"
},
{
"math_id": 20,
"text": "\nz_{\\max}=1\n"
},
{
"math_id": 21,
"text": "\n(n+2)(n+1)/2 \\in O(n^2)\n"
},
{
"math_id": 22,
"text": "\nn\n"
},
{
"math_id": 23,
"text": "\n\\begin{align}\n(L1)\n\\\\\n\\operatorname{lex}\n\\max &&\nf_{[1]}(x), \\ldots, f_{[n]}(x)\n\\\\\n\\text{subject to} && x\\in X\n\\end{align}\n"
},
{
"math_id": 24,
"text": "f_{[t]}(x)"
},
{
"math_id": 25,
"text": "\\mathbf{f}(x)"
},
{
"math_id": 26,
"text": "f_{[1..t]}(x) := \\sum_{i=1}^t f_{[i]}(x) = "
},
{
"math_id": 27,
"text": "\n\\begin{align}\n(L2)\n\\\\\n\\operatorname{lex}\n\\max &&\nf_{[1..1]}(x),f_{[1..2]}(x), \\ldots, f_{[1..n]}(x)\n\\\\\n\\text{subject to} && x\\in X\n\\end{align}\n"
},
{
"math_id": 28,
"text": "f_{[1..t]}(x) "
},
{
"math_id": 29,
"text": "r_{t} "
},
{
"math_id": 30,
"text": "d_{t,j} "
},
{
"math_id": 31,
"text": "\n\\begin{align}\n(L3)\n\\\\\n\\max (t\\cdot r_t - \\sum_{j=1}^n d_{t,j})\n\\\\\n\\text{subject to} ~~~ & x\\in X,\n\\\\\n&\nr_t - f_j(x) \\leq d_{t,j} \\text{ for all } j\\in[n],\n\\\\\n&\nd_{t,j} \\geq 0 \\text{ for all } j\\in[n].\n\\end{align}\n"
},
{
"math_id": 32,
"text": "r_t-f_j(x) "
},
{
"math_id": 33,
"text": "d_{t,j} = \\max(0, r_t-f_j(x)) "
},
{
"math_id": 34,
"text": "t\\cdot r_t - \\sum_{j=1}^n \n \\max(0, r_t-f_j(x)) "
},
{
"math_id": 35,
"text": "r_t "
},
{
"math_id": 36,
"text": "r_t \\geq f_{[k]}(x) "
},
{
"math_id": 37,
"text": "\\sum_{j=1}^k\n (r_t-f_{[j]}(x))\n =\n k\\cdot r_t \n - \n f_{[1..k]}(x) "
},
{
"math_id": 38,
"text": "(t-k)\\cdot r_t + f_{[1..k]}(x) "
},
{
"math_id": 39,
"text": "(t-k)\\cdot r_t "
},
{
"math_id": 40,
"text": "r_t "
},
{
"math_id": 41,
"text": "\n\\begin{align}\n(L4)\n\\\\\n\\operatorname{lex} \\max (t\\cdot r_t - \\sum_{j=1}^n d_{t,j})_{t=1}^n\n\\\\\n\\text{subject to} ~~~ & x\\in X,\n\\\\\n&\nr_t - f_j(x) \\leq d_{t,j} \\text{ for all } j\\in[n],\n\\\\\n&\nd_{t,j} \\geq 0 \\text{ for all } j\\in[n].\n\\end{align}\n"
},
{
"math_id": 42,
"text": "\nn^2+n\n"
},
{
"math_id": 43,
"text": "\nn^2\n"
},
{
"math_id": 44,
"text": "V = \\{v_1,\\ldots, v_r\\} "
},
{
"math_id": 45,
"text": "v_1 < \\cdots < v_r "
},
{
"math_id": 46,
"text": "h_k(x) "
},
{
"math_id": 47,
"text": "f_1(x),\\ldots, f_n(x) "
},
{
"math_id": 48,
"text": "\n\\begin{align}\n(H1)\n\\\\\n\\operatorname{lex}\n\\min &&\nh_1(x), \\ldots, h_{r-1}(x)\n\\\\\n\\text{subject to} && x\\in X\n\\end{align}\n"
},
{
"math_id": 49,
"text": "w_t "
},
{
"math_id": 50,
"text": "f_{[t]}(x) "
},
{
"math_id": 51,
"text": "d^t "
},
{
"math_id": 52,
"text": "\\sum_t w_t f_{[t]}(x) "
},
{
"math_id": 53,
"text": "{m+n-1 \\choose n}"
}
]
| https://en.wikipedia.org/wiki?curid=73030801 |
73032003 | Logic translation | Translation of a text into a logical system
Logic translation is the process of representing a text in the formal language of a logical system. If the original text is formulated in ordinary language then the term natural language formalization is often used. An example is the translation of the English sentence "some men are bald" into first-order logic as formula_0. The purpose is to reveal the logical structure of arguments. This makes it possible to use the precise rules of formal logic to assess whether these arguments are correct. It can also guide reasoning by arriving at new conclusions.
Many of the difficulties of the process are caused by vague or ambiguous expressions in natural language. For example, the English word "is" can mean that something exists, that it is identical to something else, or that it has a certain property. This contrasts with the precise nature of formal logic, which avoids such ambiguities. Natural language formalization is relevant to various fields in the sciences and humanities. It may play a key role for logic in general since it is needed to establish a link between many forms of reasoning and abstract logical systems. The use of informal logic is an alternative to formalization since it analyzes the cogency of ordinary language arguments in their original form. Natural language formalization is distinguished from logic translations that convert formulas from one logical system into another, for example, from modal logic to first-order logic. This form of logic translation is specifically relevant for logic programming and metalogic.
A major challenge in logic translation is determining the accuracy of translations and separating good from bad ones. The technical term for this is "criteria of adequate translations". An often-cited criterion states that translations should preserve the inferential relations between sentences. This implies that if an argument is valid in the original text then the translated argument should also be valid. Another criterion is that the original sentence and the translation have the same truth conditions. Further suggested conditions are that a translation does not include additional or unnecessary symbols and that its grammatical structure is similar to the original sentence. Various procedures for translating texts have been suggested. Preparatory steps include understanding the meaning of the original text and paraphrasing it to remove ambiguities and make its logical structure more explicit. As an intermediary step, a translation may happen into a hybrid language. This hybrid language implements a logical formalism but retains the vocabulary of the original expression. In the last step, this vocabulary is replaced by logical symbols. Translation procedures are usually not exact algorithms and their application depends on intuitive understanding. Logic translations are often criticized on the grounds that they are unable to accurately represent all the aspects and nuances of the original text.
Definition.
A logic translation is a translation of a text into a logical system. For example, translating the sentence "all skyscrapers are tall" as formula_3 is a logic translation that expresses an English language sentence in the logical system known as first-order logic. The aim of logic translations is usually to make the logical structure of natural language arguments explicit. This way, the rules of formal logic can be used to assess whether the arguments are valid.
Understood in a wide sense, a translation is a process that associates expressions belonging to a source language with expressions belonging to a target language. For example, in a sentence-by-sentence translation of an English text into French, English sentences are linked to their French counterparts. The hallmark of logic translations is that the target language belongs to a logical system. Logic translations differ from regular translations in that they are mainly concerned with expressing the logical structure of the original text and less with its concrete content. Regular translations, on the other hand, take various additional factors into account pertaining to the content, meaning, and style of the original expression. For this reason, some theorists, like Peregrin and Svoboda, have argued that it is not a form of translation. They tend to use other terms, such as "formalization", "symbolization", and "explication". This opinion is not shared by all logicians and some, like Mark Sainsbury, argue that successful logic translations preserve all the original meaning while making the logical structure explicit.
Discussions on logic translations usually focus on the problem of expressing the logical structure of ordinary language sentences in a formal logical system. The term also covers cases where the translation happens from one logical system into another.
Basic concepts.
Various basic concepts are employed in the study and analysis of logic translations. Logic is interested in correct reasoning, which happens in the form of inferences or arguments. An argument is a set of premises together with a conclusion. An argument is deductively valid if it is impossible for its conclusion to be false if all its premises are true. Valid arguments follow a rule of inference, which prescribes how the premises and the conclusion have to be structured. A prominent rule of inference is modus ponens, which states that arguments of the form "(1) "p"; (2) if "p" then "q"; (3) therefore "q"" are valid. An example of an argument following modus ponens is: "(1) today is Sunday; (2) if today is Sunday then I don't have to go to work today; (3) therefore I don't have to go to work today".
There are different logical systems for assessing which arguments are valid. For example, propositional logic only focuses on inferences based on logical connectives, like "and" or "if...then". First-order logic, on the other hand, also includes inferential patterns belonging to expressions like "every" or "some". Extended logics cover further inferences, for example, in relation to what is possible and necessary or regarding temporal relations.
This means that logical systems usually do not capture all inferential patterns. This is relevant for logic translation since they may miss patterns for which they were not intended. For example, propositional logic can be used to show that the following ordinary language argument is correct: "(1) John is not a pilot; (2) John is a pilot or Bill is a poet; (3) therefore Bill is a poet". However, it fails to show that the argument "(1) John is a pilot; (2) therefore John can aviate" is correct since it is unable to capture the inferential relation between the terms "Pilot" and "can aviate". If a logical system is applied to cases beyond its limited scope, it is unable to assess the validity of natural language arguments. The advantage of this limitation is that the vagueness and ambiguity of natural language arguments are avoided by making some of the inferential patterns very clear.
Formal logical systems use precise formal languages to express their formulas and inferences. In the case of propositional logic, letters like formula_4 and formula_1 are used to represent simple propositions. They can be combined into more complex propositions using propositional connectives like formula_2 to express that both propositions are true and formula_5 to express that at least one of the propositions is true. So if formula_4 stands for "Adam is athletic" and formula_1 stands for "Barbara is athletic", then the formula formula_6 represents the claim that "Adam is athletic, and also Barbara is athletic". First-order logic also includes propositional connectives but introduces additional symbols. Uppercase letters are used for predicates and lowercase letters stand for individuals. For example, if formula_4 stands for the predicate "is angry" and formula_7 represents the individual Elsa, then the formula formula_8 expresses the proposition "Elsa is angry". Another innovation of first-order logic is the use of quantifiers like formula_9 and formula_10 to represent the meanings of terms like "some" and "all".
Types.
Logic translations can be classified based on the source language of the original text. For many logic translations, the original text belongs to a natural language, like English or French. In this case, the term "natural language formalization" is often used. For example, the sentence "Dana is a logician and Dana is a nice person" can be formalized into propositional logic using the logical formula formula_11. A further type of logic translation happens between two logical systems. This means that the source text is composed of logical formulas belonging to one logical system and the goal is to associate them with logical formulas belonging to another logical system. For example, the formula formula_12 in modal logic can be translated into first-order logic using the formula formula_13.
Natural language formalization.
Natural language formalization is a form of semantic parsing that starts with a sentence in natural language and translates it into a logical formula. Its goal is to make the logical structure of natural language sentences and arguments explicit. It is mainly concerned with their logical form while their specific content is usually ignored. Logical analysis is a closely related term that refers to the process of uncovering the logical form or structure of a sentence. Natural language formalization makes it possible to use formal logic to analyze and evaluate natural language arguments. This is especially relevant for complex arguments, which are often difficult to evaluate without formal tools. Logic translation can also be used to look for new arguments and thereby guide the reasoning process. The reverse process of formalization is sometimes called "verbalization". It happens when logical formulas are translated back into natural language. This process is less nuanced and discussions concerning the relation between natural language and logic usually focus on the problem of formalization.
The success of applications of formal logic to natural language requires that the translation is correct. A formalization is correct if its explicit logical features fit the implicit logical features of the original sentence. The logical form of ordinary language sentences is often not obvious since there are many differences between natural languages and the formal languages used by logicians. This poses various difficulties for formalization. For example, ordinary expressions frequently include vague and ambiguous expressions. For this reason, the validity of an argument often depends not just on the expressions themselves but also on how they are interpreted. For example, the sentence "donkeys have ears" could mean that "all donkeys (without exception) have ears" or that "donkeys typically have ears". The second translation does not exclude the existence of some donkeys without ears. This difference matters for whether a universal quantifier can be used to translate the sentence. Such ambiguities are not found in the precise formulations of artificial logical languages and have to be solved before translation is possible.
The problem of natural language formalization has various implications for the sciences and humanities, especially for the fields of linguistics, cognitive science, and computer science. In the field of formal linguistics, for example, Richard Montague provides various suggestions for how to formalize English language expressions in his theory of universal grammar. Formalization is also discussed in the philosophy of logic in relation to its role in understanding and applying logic. If logic is understood as the theory of valid inferences in general then formalization plays a central role in it since many of these inferences are formulated in ordinary language. Logic translation is needed to link formal systems of logic to arguments expressed in ordinary language. A related claim is that all logical languages, including highly abstract ones like modal logic and many-valued logic, have to be "anchored in the structures of natural language". One difficulty in this regard is that logic is usually understood as a formal science, but a theory of its relation to empirical matters pertaining to ordinary languages goes beyond this purely formal conception. For this reason, some theorists like Georg Brun identify a pure branch of logic and contrast it with applied logic, which includes the problem of formalization.
Some theorists draw the conclusion from these considerations that informal reasoning takes precedence over formal reasoning. This would imply that formal logic can only succeed if it is based on correct formalization. For example, Michael Baumgartner and Timm Lampert hold that "there are no informal fallacies" but only "misunderstanding of informal arguments expressed by inadequate formalizations". This position is rejected by Jaroslav Peregrin and Vladimír Svoboda, who argue that informal reasoning is not always accurate and may be corrected through the application of formal logic.
An alternative to formalization is to use informal logic, which analyzes the cogency of natural language arguments in their original form. This has many advantages by avoiding the difficulties associated with logic translations but it also comes with various drawbacks. For example, informal logic lacks the precision found in formal logic for distinguishing between good arguments and fallacies.
Examples.
For propositional logic, the sentence "Tiffany sells jewelry, and Gucci sells cologne" can be translated as formula_14. In this example, formula_15 represents the claim "Tiffany sells jewelry", formula_16 stands for "Gucci sells cologne", and formula_2 is the logical conjunction corresponding to "and". Another example is the sentence "Notre Dame raises tuition if Purdue does", which can be formalized as formula_17.
For predicate logic, the sentence "Ann loves Ben" can be translated as formula_18. In this example, formula_19 stands for "loves", formula_20 stands for Ann and formula_21 stands for Ben. Other examples are "some men are bald" as formula_0, "all rivers have a head" as formula_22, "no frogs are birds" as formula_23, and "if Elizabeth is a historian, then some women are historians" as formula_24.
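Once an argument has been formalized, checking its validity can be mechanized. The following sketch uses the Python library SymPy to check the pilot-and-poet argument mentioned above; it relies on the fact that an argument is valid exactly when its premises together with the negated conclusion are unsatisfiable. The propositional letters are our own choice.

```python
from sympy import symbols
from sympy.logic.boolalg import And, Or, Not
from sympy.logic.inference import satisfiable

# Formalization: J = "John is a pilot", B = "Bill is a poet"
J, B = symbols("J B")
premises = And(Not(J), Or(J, B))   # (1) John is not a pilot; (2) John is a pilot or Bill is a poet
conclusion = B                     # (3) therefore Bill is a poet

# valid iff "premises and not conclusion" cannot be satisfied
print(satisfiable(And(premises, Not(conclusion))))   # False, so the argument is valid
```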
Problematic expressions.
For various natural language expressions, it is not clear how they should be translated and the right translation may differ from case to case. The vagueness and ambiguity of ordinary language, in contrast to the precise nature of logic, is often responsible for these problems. For this reason, it has proven difficult to find a general algorithm to cover all cases of translation. For example, the meaning of basic English expressions like "and", "or", and "if...then" can vary from context to context. The corresponding logical operators in symbolic logic (formula_2, formula_5, formula_25), on the other hand, have very precisely defined meanings. In this regard, they only capture some aspects of the original meaning.
The English word "is" poses another such difficulty since it has many meanings. It can express existence (as in "there is a Santa Claus"), identity (as in "Superman is Clark Kent"), and predication (as in "Venus is a planet"). Each one of these meanings is expressed differently in logical systems like first-order logic. Another difficulty is that quantifiers are often not explicitly expressed in ordinary language. For example, the sentence "emeralds are green" does not directly state the universal quantifier "all", i.e. "all emeralds are green". However, some sentences with a similar structure, such as the "children live next door", imply the existential quantifier "some", i.e. "some children live next door".
A closely related problem is found in some valid natural language arguments whose most obvious translations are invalid in formal logic. For example, the argument "(1) Fury is a horse; (2) therefore Fury is an animal" is valid but the corresponding argument in formal logic from formula_26 to formula_27 is invalid. One solution is to add to the argument an additional premise stating that "all horses are animals". Another is to translate the sentence "Fury is a horse" as formula_28. These solutions come with new problems of their own. Further problematic expressions are definite descriptions, conditional sentences, and attributive adjectives, as well as mass nouns and anaphora.
Translation between logics.
A further type of logic translation takes place between logical systems. A translation between two logical systems can be defined in a formal sense as a mathematical function. This function maps sentences of the first system to sentences of the second system while obeying the entailment relations between the original sentences. This means that if a sentence entails another sentence in the first logic, then the translation of the first sentence should entail the translation of the second sentence in the second logic. This way, a translation from one logic to another represents the formulas, proofs, and models of the first logic in terms of the second. This is sometimes referred to as "conservative translation". It contrasts with "rough translation", which only maps the sentences of the first logic to sentences of the second logic without regard to their entailment relations.
A preliminary of logic translations is that there is not one logic but many logics. These logics differ from each other concerning the languages they use as well as the rules of inference they see as valid. For example, intuitionistic logic differs from classical logic since it rejects certain rules of inference, such as the double negation elimination. This rule states that if a sentence is not not true, then it is true, i.e. that formula_4 follows from formula_29. One way to translate intuitionistic logic into non-intuitionistic logic is by using a modal operator. This is based on the idea that intuitionistic logic expresses not just what is true but what is knowable. For example, the formula formula_30 in intuitionistic logic can be translated as formula_31, where formula_32 is a modal operator expressing that the following formula is knowable.
Another example is the translation of modal logic to regular predicate logic. Modal logic contains additional symbols for possibility (formula_33) and necessity (formula_34) not found in regular predicate logic. One way to translate them is to introduce new predicates, such as the predicate R, which indicates that one possible world is accessible from another possible world. For example, the modal logic expression formula_35 (it is possible that "p" is true in the actual world) can be translated as formula_36 (there exists a possible world that is accessible from the actual world and "p" is true in it).
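This so-called standard translation can be written as a small recursive procedure. The sketch below produces first-order formulas as plain strings; the tuple representation of modal formulas, the accessibility predicate name "R", and the fresh-variable scheme are our own choices.

```python
counter = [0]

def fresh_world():
    counter[0] += 1
    return f"w{counter[0]}"

def standard_translation(formula, world="x"):
    # formulas: ("atom", "p"), ("not", f), ("and", f, g), ("box", f), ("diamond", f)
    kind = formula[0]
    if kind == "atom":
        return f"{formula[1]}({world})"
    if kind == "not":
        return f"~{standard_translation(formula[1], world)}"
    if kind == "and":
        return f"({standard_translation(formula[1], world)} & {standard_translation(formula[2], world)})"
    if kind == "diamond":   # possibly f: some accessible world satisfies f
        w = fresh_world()
        return f"exists {w} (R({world},{w}) & {standard_translation(formula[1], w)})"
    if kind == "box":       # necessarily f: every accessible world satisfies f
        w = fresh_world()
        return f"forall {w} (R({world},{w}) -> {standard_translation(formula[1], w)})"
    raise ValueError(kind)

print(standard_translation(("diamond", ("atom", "p"))))
# exists w1 (R(x,w1) & p(w1))
```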
Translations between logics are relevant for metalogic and logic programming. In metalogic, they can be used to study the properties of logical systems and the relations between them. In logic programming, they make it possible for programs limited to one type of logic to be applied to many additional cases. With the help of logic translations, programs like Prolog can be used to solve problems in modal logic and temporal logic even though Prolog does not natively support these logical systems. A closely related issue concerns the question of how to translate a formal language like Controlled English into a logical system. Controlled English is a controlled language that limits grammar and vocabulary with the goal of reducing ambiguity and complexity. In this regard, the advantage of Controlled English is that every sentence has a unique interpretation. This makes it possible to use algorithms to translate them into formal logic, which is generally not possible for natural languages.
Criteria of adequate translations.
Criteria of adequate translations specify how to distinguish good from bad translations. They determine whether a logical formula accurately represents the logical structure of the sentence it translates. This way, they help logicians decide between competing translations of the same sentence. Various criteria are discussed in the academic literature. According to various theorists, like Peregrin and Svoboda, the most basic criterion is that translations should preserve the inferential relations between sentences. This principle is sometimes called the "criterion of syntactic correctness" or the "criterion of reliability". It stipulates that if an argument is valid in the original text then the translated argument is also valid. One difficulty in this regard is that the same sentence may form part of several arguments, sometimes as a premise and sometimes as a conclusion. A translation of a sentence is only correct if in all or nearly all these cases, the inferential relations are preserved. According to the view of holism, this implies that one cannot evaluate sentence translations individually. This position holds that the correctness of a translation of one sentence depends on how other sentences are translated to ensure correspondence in the inferential relations. This view is rejected by atomists, who claim that the correctness of sentence translations can be assessed individually.
A closely related criterion focuses on the truth conditions of sentences. A "truth condition" of a sentence is what the world must be like for that sentence to be true. This criterion states that for adequate translations, the truth conditions of the original sentence are identical to the truth conditions of the translated sentence. The mere fact that the sentence and its translation have the same truth value is not sufficient. Instead, it implies that whenever one is true, the other is also true, i.e. they have to have the same truth value in all possible circumstances. This criterion is not universally accepted and it has been criticized based on the claim that logical formulas do not have truth conditions. According to this view, the symbols they use are meaningless by themselves and only have the purpose of expressing the logical form of a sentence without implying any concrete content. Another problem with this approach is that all tautologies have the same truth conditions: they are true independently of the circumstances. This would imply that any tautology is a correct translation of any other tautology.
Besides these core criteria, various additional criteria are often discussed in the academic literature. Their goal is usually to exclude bad translations that nonetheless comply with the other criteria. For example, according to the first two criteria, the sentence "it rains" could be formalized as formula_37 or as formula_38. The reason is that both formulas have the same truth conditions and the same inferential patterns. However, the second formula is a bad translation. One additional criterion is that translations should not include symbols that do not correspond to expressions in the original sentence. According to it, the translation of "it rains" should not include the symbol for logical negation (formula_39) since a corresponding expression is not found in the original sentence.
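A brute-force truth-table check makes this point concrete. The following minimal Python sketch (an illustration added here, not drawn from the cited literature) confirms that formula_37 and the six-fold negation formula_38 agree under every valuation, so a purely truth-conditional criterion cannot distinguish them.

```python
# Brute-force truth-table comparison of the two candidate formalizations of
# "it rains": the atom p and its six-fold negation. Both have the same truth
# value under every valuation, i.e. identical truth conditions.

def neg(f):
    return lambda valuation: not f(valuation)

p = lambda valuation: valuation["p"]      # formalization 1
six_neg_p = p
for _ in range(6):                        # formalization 2: ~~~~~~p
    six_neg_p = neg(six_neg_p)

for value in (True, False):               # every valuation of the atom p
    valuation = {"p": value}
    assert p(valuation) == six_neg_p(valuation)
print("Same truth conditions under every valuation.")
```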
Another criterion holds that the order of symbols in the translation should reflect the order of the expressions in the original sentence. For example, the sentence "Pete went up the hill and Quinn went up the hill" should be translated as formula_40 and not as formula_41. A closely related criterion is the principle of "transparency", which states that translations should aim to be similar to the original expression. This concerns, for example, that a translation reflects the grammatical structure of the original sentence as closely as possible. The principle of "parsimony" states that simple translations (i.e. logical formulas that use as few symbols as possible) are to be preferred. One way to test whether a formalization is correct is to translate it back into natural language and see if this second translation matches the original.
The problem of the criteria of adequate translations is often not discussed in detail in introductions to logic. One reason for this is that some theorists, like Herbert E. Hendry, see logic translation as an art or an intuitive practice. According to this view, it is based on a practical skill learned from experience with many examples and guided by some rough rules of thumb. This outlook implies that there are no strict rules of adequate formalization. Critics of this idea argue that without clear criteria of adequate translations it is very difficult to decide between competing formalizations of the same sentence.
Translation procedures.
Various logicians have proposed translation procedures employing several steps to arrive at correct translations. Some only constitute rough guidelines to help translators in the process while others consist of detailed and effective procedures covering all the steps needed to arrive at a translation. In either case, they are usually not exact algorithms that could be blindly followed but rather tools to simplify the process.
Preparatory steps may be taken within natural language before the actual translation starts. An initial step is often to understand the meaning of the original text, for example, by analyzing the claims made in it. This includes identifying which arguments are made and whether a claim acts as a premise or as a conclusion. At this stage, a common recommendation is to paraphrase the sentences to make the claims more explicit, remove ambiguities, and highlight their logical structure. For example, the sentence "John Paul II is infallible" could be paraphrased as "it is not the case that John Paul II is fallible". This can involve identifying truth-functional connectives, like "and", "if...then", or "not", and decomposing the text accordingly. Each of the units analyzed this way is an individual claim that is either true or false. A closely related step is to group the individual expressions into logical units and classify them according to their logical role. In the sentence above, for example, "is fallible" is a predicate and the expression "it is not the case that" corresponds to the logical connective for negation.
Once these preparations are done, some theorists, like Peregrin and Svoboda, recommend translation into a hybrid language. Such hybrid expressions already contain a logical formalism but retain regular names for predicates and proper names. For example, the sentence "All rivers have heads" could be translated as formula_42. The idea behind this step is that the regular terms still carry their original meaning and thereby make it easier to understand the formulas and to see how they relate to the original text. The natural language vocabulary is usually not precisely defined and therefore lacks the exactness demanded by formal logic. As a last step, these regular terms are then replaced by logical symbols. For the expression above, this would result in the formula formula_43. This way, the connection to the ordinary language meanings is cut. The formulas become a purely formal expression of the logical structure of the original text and any specific content is removed.
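As an illustration of this last step, the following Python sketch mechanically replaces the regular predicate names of a hybrid formula with schematic letters; the string representation and the symbol table are hypothetical, chosen only to mirror the example above.

```python
# Last step of the procedure: replace the regular predicate names of the
# hybrid formula with schematic letters. The hybrid string and the symbol
# table below are hypothetical, chosen only to mirror the example above.

hybrid = "forall x (River(x) -> HasHead(x))"
symbol_table = {"River": "R", "HasHead": "H"}   # hypothetical mapping

formal = hybrid
for name, letter in symbol_table.items():
    formal = formal.replace(name, letter)

print(formal)   # forall x (R(x) -> H(x))
```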
The formalization of a full argument consists in several steps since the argument is made up of several propositions. Once the translation is complete, the formal tools of the logical system, such as its rules of inference, can be employed to assess whether the argument is valid.
Criticism.
Criticism of logic translations is primarily focused on the limitations and the range of valid applications, as well as the way they are discussed in academic literature. Logic translation is a widely accepted and utilized process in logic and other fields, even among theorists who criticize aspects of it. In some cases, individual logic translations are criticized based on the claim that they are unable to accurately represent all the aspects and nuances of the original text. For example, logical vocabulary is usually unable to capture things like sarcasm, indirect insinuation, or emphasis. In this regard, many aspects of the meaning of the original expression that go beyond truth value, validity, and logical structure are frequently ignored. On the level of informal inferences, there are various expressions that cannot easily be represented using the precise but limited languages of formal logic. For these reasons, it is sometimes controversial whether a specific logic translation is correct. When a logic translation is used to defend the conclusion of a natural language argument, one way to undermine such a defense is to claim that the logic translation is incorrect. This implies that insights gained from the formal logical analysis do not carry any weight for the original argument.
Another type of criticism is not directed at logic translations themselves but at how they are discussed in many standard works and courses of logic. In this regard, theorists like Georg Brun, Peregrin, and Svoboda argue that such works do not provide a proper discussion of the role and limitations of logic translations. Instead, it is claimed that they merely treat this topic as a side note. They may provide a few examples but their main focus is on the formal systems themselves. This way, there is no in-depth discussion of how these systems are applied to ordinary arguments.
References.
Notes.
<templatestyles src="Reflist/styles.css" />
Citations.
<templatestyles src="Reflist/styles.css" />
Sources.
<templatestyles src="Refbegin/styles.css" /> | [
{
"math_id": 0,
"text": "\\exist x (M(x) \\land B(x))"
},
{
"math_id": 1,
"text": "B"
},
{
"math_id": 2,
"text": "\\land"
},
{
"math_id": 3,
"text": "\\forall x (S(x) \\to T(x))"
},
{
"math_id": 4,
"text": "A"
},
{
"math_id": 5,
"text": "\\lor"
},
{
"math_id": 6,
"text": "A \\land B"
},
{
"math_id": 7,
"text": "e"
},
{
"math_id": 8,
"text": "A(e)"
},
{
"math_id": 9,
"text": "\\exists"
},
{
"math_id": 10,
"text": "\\forall"
},
{
"math_id": 11,
"text": "L \\land N"
},
{
"math_id": 12,
"text": "\\Box A(x)"
},
{
"math_id": 13,
"text": "\\forall y (R(x,y) \\to A(y))"
},
{
"math_id": 14,
"text": "T \\land G"
},
{
"math_id": 15,
"text": "T"
},
{
"math_id": 16,
"text": "G"
},
{
"math_id": 17,
"text": "P \\to N"
},
{
"math_id": 18,
"text": "L(a, b)"
},
{
"math_id": 19,
"text": "L"
},
{
"math_id": 20,
"text": "a"
},
{
"math_id": 21,
"text": "b"
},
{
"math_id": 22,
"text": "\\forall x (R(x) \\to H(x))"
},
{
"math_id": 23,
"text": "\\forall x (F(x) \\to \\lnot B(x))"
},
{
"math_id": 24,
"text": "H(e) \\to \\exists x (W(x) \\land H(x))"
},
{
"math_id": 25,
"text": "\\to"
},
{
"math_id": 26,
"text": "H(f)"
},
{
"math_id": 27,
"text": "A(f)"
},
{
"math_id": 28,
"text": "H(f) \\land A(f)"
},
{
"math_id": 29,
"text": "\\lnot \\lnot A"
},
{
"math_id": 30,
"text": "\\lnot p"
},
{
"math_id": 31,
"text": "K \\lnot p"
},
{
"math_id": 32,
"text": "K"
},
{
"math_id": 33,
"text": "\\diamond"
},
{
"math_id": 34,
"text": "\\Box"
},
{
"math_id": 35,
"text": "\\lfloor \\diamond p \\rfloor _{\\iota}"
},
{
"math_id": 36,
"text": "\\exists v R(\\iota, v) \\land \\lfloor p \\rfloor _v"
},
{
"math_id": 37,
"text": "p"
},
{
"math_id": 38,
"text": "\\lnot \\lnot \\lnot \\lnot \\lnot \\lnot p"
},
{
"math_id": 39,
"text": "\\lnot"
},
{
"math_id": 40,
"text": "p \\land q"
},
{
"math_id": 41,
"text": "q \\land p"
},
{
"math_id": 42,
"text": "\\forall x ((River(x)) \\to HasHead(x))"
},
{
"math_id": 43,
"text": "\\forall x ((R(x)) \\to H(x))"
}
]
| https://en.wikipedia.org/wiki?curid=73032003 |
730378 | Goddard–Thorn theorem | Theorem in string theory
In mathematics, and in particular in the mathematical background of string theory, the Goddard–Thorn theorem (also called the no-ghost theorem) is a theorem describing properties of a functor that quantizes bosonic strings. It is named after Peter Goddard and Charles Thorn.
The name "no-ghost theorem" stems from the fact that in the original statement of the theorem, the natural inner product induced on the output vector space is positive definite. Thus, there were no so-called ghosts (Pauli–Villars ghosts), or vectors of negative norm. The name "no-ghost theorem" is also a word play on the no-go theorem of quantum mechanics.
Statement.
This statement is that of Borcherds (1992).
Suppose that formula_0 is a unitary representation of the Virasoro algebra formula_1, so formula_0 is equipped with a non-degenerate bilinear form formula_2 and there is an algebra homomorphism formula_3 so that
formula_4
where the adjoint is defined with respect to the bilinear form, and
formula_5
Suppose also that formula_0 decomposes into a direct sum of eigenspaces of formula_6 with non-negative, integer eigenvalues formula_7, denoted formula_8, and that each formula_8 is finite dimensional (giving formula_0 a formula_9-grading). Assume also that formula_0 admits an action from a group formula_10 that preserves this grading.
For the two-dimensional even unimodular Lorentzian lattice II1,1, denote the corresponding lattice vertex algebra by formula_11. This is a II1,1-graded algebra with a bilinear form and carries an action of the Virasoro algebra.
Let formula_12 be the subspace of the vertex algebra formula_13 consisting of vectors formula_14 such that formula_15 for formula_16. Let formula_17 be the subspace of formula_12 of degree formula_18. Each space inherits a formula_10-action which acts as prescribed on formula_0 and trivially on formula_11.
The quotient of formula_17 by the nullspace of its bilinear form is naturally isomorphic as a formula_10-module with an invariant bilinear form, to formula_19 if formula_20 and formula_21 if formula_22.
II1,1.
The lattice II1,1 is the rank 2 lattice with bilinear form
formula_23
This is even, unimodular and integral with signature (+,-).
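The following short numerical sketch (an illustration, not part of the theorem or its proof) evaluates this bilinear form on an example lattice vector and the corresponding weight 1 − (r, r)/2 that appears in the statement above.

```python
# Gram matrix of II_{1,1} and the weight 1 - (r, r)/2 from the theorem,
# evaluated on an example lattice vector. Purely illustrative arithmetic.
import numpy as np

B = np.array([[0, -1],
              [-1, 0]])                      # bilinear form of II_{1,1}
assert round(float(np.linalg.det(B))) == -1  # unimodular

def pairing(r, s):
    return int(r @ B @ s)

r = np.array([2, 3])                         # lattice vector (m, n)
assert pairing(r, r) == -2 * 2 * 3           # (r, r) = -2mn, so the lattice is even
weight = 1 - pairing(r, r) // 2              # graded piece V^{1-(r,r)/2}
print(weight)                                # 7
```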
Formalism.
There are two naturally isomorphic functors that are typically used to quantize bosonic strings. In both cases, one starts with positive-energy representations of the Virasoro algebra of central charge 26, equipped with Virasoro-invariant bilinear forms, and ends up with vector spaces equipped with bilinear forms. Here, "Virasoro-invariant" means "Ln" is adjoint to "L"−"n" for all integers "n".
The first functor historically is "old canonical quantization", and it is given by taking the quotient of the weight 1 primary subspace by the radical of the bilinear form. Here, "primary subspace" is the set of vectors annihilated by "Ln" for all strictly positive "n", and "weight 1" means "L"0 acts by identity. A second, naturally isomorphic functor, is given by degree 1 BRST cohomology. Older treatments of BRST cohomology often have a shift in the degree due to a change in choice of BRST charge, so one may see degree −1/2 cohomology in papers and texts from before 1995. A proof that the functors are naturally isomorphic can be found in Section 4.4 of Polchinski's "String Theory" text.
The Goddard–Thorn theorem amounts to the assertion that this quantization functor more or less cancels the addition of two free bosons, as conjectured by Lovelace in 1971. Lovelace's precise claim was that at critical dimension 26, Virasoro-type Ward identities cancel two full sets of oscillators. Mathematically, this is the following claim:
Let "V" be a unitarizable Virasoro representation of central charge 24 with Virasoro-invariant bilinear form, and let π be the irreducible module of the R1,1 Heisenberg Lie algebra attached to a nonzero vector "λ" in R1,1. Then the image of "V" ⊗ π under quantization is canonically isomorphic to the subspace of "V" on which "L"0 acts by 1-("λ","λ").
The no-ghost property follows immediately, since the positive-definite Hermitian structure of "V" is transferred to the image under quantization.
Applications.
The bosonic string quantization functors described here can be applied to any conformal vertex algebra of central charge 26, and the output naturally has a Lie algebra structure. The Goddard–Thorn theorem can then be applied to concretely describe the Lie algebra in terms of the input vertex algebra.
Perhaps the most spectacular case of this application is Richard Borcherds's proof of the monstrous moonshine conjecture, where the unitarizable Virasoro representation is the monster vertex algebra (also called "moonshine module") constructed by Frenkel, Lepowsky, and Meurman. By taking a tensor product with the vertex algebra attached to a rank-2 hyperbolic lattice, and applying quantization, one obtains the monster Lie algebra, which is a generalized Kac–Moody algebra graded by the lattice. By using the Goddard–Thorn theorem, Borcherds showed that the homogeneous pieces of the Lie algebra are naturally isomorphic to graded pieces of the moonshine module, as representations of the monster simple group.
Earlier applications include Frenkel's determination of upper bounds on the root multiplicities of the Kac–Moody Lie algebra whose Dynkin diagram is the Leech lattice, and Borcherds's construction of a generalized Kac–Moody Lie algebra that contains Frenkel's Lie algebra and saturates Frenkel's 1/∆ bound. | [
{
"math_id": 0,
"text": "V"
},
{
"math_id": 1,
"text": "\\mathrm{Vir}"
},
{
"math_id": 2,
"text": "(\\cdot, \\cdot)"
},
{
"math_id": 3,
"text": "\\rho: \\mathrm{Vir} \\rightarrow \\mathrm{End}(V)"
},
{
"math_id": 4,
"text": "\\rho(L_i)^\\dagger = \\rho(L_{-i})"
},
{
"math_id": 5,
"text": "\\rho(c) = 24\\mathrm{id}_V."
},
{
"math_id": 6,
"text": "L_0"
},
{
"math_id": 7,
"text": "i \\geq 0"
},
{
"math_id": 8,
"text": "V^i"
},
{
"math_id": 9,
"text": "\\mathbb{Z}_{\\geq 0}"
},
{
"math_id": 10,
"text": "G"
},
{
"math_id": 11,
"text": "V_{II_{1,1}}"
},
{
"math_id": 12,
"text": "P^1"
},
{
"math_id": 13,
"text": "V \\otimes V_{II_{1,1}}"
},
{
"math_id": 14,
"text": "v"
},
{
"math_id": 15,
"text": "L_0 \\cdot v = v, L_n \\cdot v = 0"
},
{
"math_id": 16,
"text": "n > 0"
},
{
"math_id": 17,
"text": "P^1_r"
},
{
"math_id": 18,
"text": "r \\in II_{1,1}"
},
{
"math_id": 19,
"text": "V^{1 - (r,r)/2}"
},
{
"math_id": 20,
"text": "r \\neq 0"
},
{
"math_id": 21,
"text": "V^1 \\oplus \\mathbb{R}^2"
},
{
"math_id": 22,
"text": "r = 0"
},
{
"math_id": 23,
"text": "\\begin{pmatrix} 0 & -1 \\\\ -1 & 0 \\end{pmatrix}."
}
]
| https://en.wikipedia.org/wiki?curid=730378 |
73038823 | Mercury(II) stearate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Mercury(II) stearate is a metal-organic compound, a salt of mercury and stearic acid with the chemical formula C36H70HgO4. The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid. The compound is highly toxic by inhalation, ingestion, and skin absorption.
Synthesis.
An exchange reaction of sodium stearate and mercury dichloride:
formula_0
The compound can also be prepared by heating mercuric oxide with stearic acid.
Physical properties.
The compound forms a yellow, waxy substance.
Uses.
It is used as a germicide and as a plasticizer in the production of ceramics.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ HgCl_2 + 2C_{17}H_{35}COONa \\ \\xrightarrow{}\\ Hg(C_{17}H_{35}COO)_2\\downarrow + 2 NaCl }"
}
]
| https://en.wikipedia.org/wiki?curid=73038823 |
7304 | Coordination complex | Molecule or ion containing ligands datively bonded to a central metallic atom
A coordination complex is a chemical compound consisting of a central atom or ion, which is usually metallic and is called the "coordination centre", and a surrounding array of bound molecules or ions, that are in turn known as "ligands" or complexing agents. Many metal-containing compounds, especially those that include transition metals (elements like titanium that belong to the periodic table's d-block), are coordination complexes.
Nomenclature and terminology.
Coordination complexes are so pervasive that their structures and reactions are described in many ways, sometimes confusingly. The atom within a ligand that is bonded to the central metal atom or ion is called the donor atom. In a typical complex, a metal ion is bonded to several donor atoms, which can be the same or different. A polydentate (multiple bonded) ligand is a molecule or ion that bonds to the central atom through several of the ligand's atoms; ligands with 2, 3, 4 or even 6 bonds to the central atom are common. These complexes are called chelate complexes; the formation of such complexes is called chelation, complexation, and coordination.
The central atom or ion, together with all ligands, comprise the coordination sphere. The central atoms or ion and the donor atoms comprise the first coordination sphere.
Coordination refers to the "coordinate covalent bonds" (dipolar bonds) between the ligands and the central atom. Originally, a complex implied a reversible association of molecules, atoms, or ions through such weak chemical bonds. As applied to coordination chemistry, this meaning has evolved. Some metal complexes are formed virtually irreversibly and many are bound together by bonds that are quite strong.
The number of donor atoms attached to the central atom or ion is called the coordination number. The most common coordination numbers are 2, 4, and especially 6. A hydrated ion is one kind of a complex ion (or simply a complex), a species formed between a central metal ion and one or more surrounding ligands, molecules or ions that contain at least one lone pair of electrons.
If all the ligands are monodentate, then the number of donor atoms equals the number of ligands. For example, the cobalt(II) hexahydrate ion or the hexaaquacobalt(II) ion [Co(H2O)6]2+ is a hydrated-complex ion that consists of six water molecules attached to a Co2+ metal ion. The oxidation state and the coordination number reflect the number of bonds formed between the metal ion and the ligands in the complex ion. However, the coordination number of [Pt(en)2]2+ is 4 (rather than 2) since it has two bidentate ligands, which contain four donor atoms in total.
Any donor atom will give a pair of electrons. There are some donor atoms or groups which can offer more than one pair of electrons. Such are called bidentate (offers two pairs of electrons) or polydentate (offers more than two pairs of electrons). In some cases an atom or a group offers a pair of electrons to two similar or different central metal atoms or acceptors—by division of the electron pair—into a three-center two-electron bond. These are called bridging ligands.
History.
Coordination complexes have been known since the beginning of modern chemistry. Early well-known coordination complexes include dyes such as Prussian blue. Their properties were first well understood in the late 1800s, following the 1869 work of Christian Wilhelm Blomstrand. Blomstrand developed what has come to be known as the "complex ion chain theory." In considering metal amine complexes, he theorized that the ammonia molecules compensated for the charge of the ion by forming chains of the type [(NH3)X]X+, where X is the coordination number of the metal ion. He compared his theoretical ammonia chains to hydrocarbons of the form (CH2)X.
Following this theory, Danish scientist Sophus Mads Jørgensen made improvements to it. In his version of the theory, Jørgensen claimed that when a molecule dissociates in a solution there were two possible outcomes: the ions would bind via the ammonia chains Blomstrand had described or the ions would bind directly to the metal.
It was not until 1893 that the most widely accepted version of the theory today was published by Alfred Werner. Werner's work included two important changes to the Blomstrand theory. The first was that Werner described the two possibilities in terms of location in the coordination sphere. He claimed that if the ions were to form a chain, this would occur outside of the coordination sphere while the ions that bound directly to the metal would do so within the coordination sphere. In one of his most important discoveries however Werner disproved the majority of the chain theory. Werner discovered the spatial arrangements of the ligands that were involved in the formation of the complex hexacoordinate cobalt. His theory allows one to understand the difference between a coordinated ligand and a charge balancing ion in a compound, for example the chloride ion in the cobaltammine chlorides and to explain many of the previously inexplicable isomers.
In 1911, Werner first resolved the coordination complex hexol into optical isomers, overthrowing the theory that only carbon compounds could possess chirality.
Structures.
The ions or molecules surrounding the central atom are called ligands. Ligands are classified as L or X (or a combination thereof), depending on how many electrons they provide for the bond between ligand and central atom. L ligands provide two electrons from a lone electron pair, resulting in a coordinate covalent bond. X ligands provide one electron, with the central atom providing the other electron, thus forming a regular covalent bond. The ligands are said to be coordinated to the atom. For alkenes, the pi bonds can coordinate to metal atoms. An example is ethylene in Zeise's salt, K[PtCl3(C2H4)].
Geometry.
In coordination chemistry, a structure is first described by its coordination number, the number of ligands attached to the metal (more specifically, the number of donor atoms). Usually one can count the ligands attached, but sometimes even the counting can become ambiguous. Coordination numbers are normally between two and nine, but large numbers of ligands are not uncommon for the lanthanides and actinides. The number of bonds depends on the size, charge, and electron configuration of the metal ion and the ligands. Metal ions may have more than one coordination number.
Typically the chemistry of transition metal complexes is dominated by interactions between s and p molecular orbitals of the donor-atoms in the ligands and the d orbitals of the metal ions. The s, p, and d orbitals of the metal can accommodate 18 electrons (see 18-Electron rule). The maximum coordination number for a certain metal is thus related to the electronic configuration of the metal ion (to be more specific, the number of empty orbitals) and to the ratio of the size of the ligands and the metal ion. Large metals and small ligands lead to high coordination numbers, while small metals with large ligands lead to low coordination numbers. Due to their large size, lanthanides, actinides, and early transition metals tend to have high coordination numbers.
Most structures follow the points-on-a-sphere pattern (or, as if the central atom were in the middle of a polyhedron where the corners of that shape are the locations of the ligands), where orbital overlap (between ligand and metal orbitals) and ligand-ligand repulsions tend to lead to certain regular geometries. The most commonly observed geometries are described below, but there are many cases that deviate from a regular geometry, e.g. due to the use of ligands of diverse types (which results in irregular bond lengths; the coordination atoms do not follow a points-on-a-sphere pattern), due to the size of ligands, or due to electronic effects (see, e.g., Jahn–Teller distortion).
The idealized descriptions of 5-, 7-, 8-, and 9- coordination are often indistinct geometrically from alternative structures with slightly differing L-M-L (ligand-metal-ligand) angles, e.g. the difference between square pyramidal and trigonal bipyramidal structures.
To distinguish between the alternative coordinations for five-coordinated complexes, the τ geometry index was invented by Addison et al. This index depends on angles by the coordination center and changes between 0 for the square pyramidal to 1 for trigonal bipyramidal structures, allowing to classify the cases in between. This system was later extended to four-coordinated complexes by Houser et al. and also Okuniewski et al.
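A minimal sketch of these geometry indices, assuming the usual definitions τ5 = (β − α)/60 (Addison et al.) and τ4 = (360° − α − β)/141° (Houser et al.), where β ≥ α are the two largest L–M–L angles in degrees; the angle sets below are idealized, not measured structures.

```python
# Hedged sketch of the five- and four-coordinate geometry indices, assuming
# the usual definitions: tau5 = (beta - alpha)/60 and
# tau4 = (360 - alpha - beta)/141, with beta >= alpha the two largest
# L-M-L angles in degrees.

def tau5(angles):
    beta, alpha = sorted(angles, reverse=True)[:2]
    return (beta - alpha) / 60.0            # 0 = square pyramidal, 1 = trigonal bipyramidal

def tau4(angles):
    beta, alpha = sorted(angles, reverse=True)[:2]
    return (360.0 - alpha - beta) / 141.0   # 0 = square planar, 1 = tetrahedral

# Idealized angle sets (illustrative, not measured structures):
print(tau5([180] + [120] * 3 + [90] * 6))   # 1.0  trigonal bipyramid
print(tau5([180] * 2 + [90] * 8))           # 0.0  square pyramid
print(tau4([109.5] * 6))                    # 1.0  tetrahedron
print(tau4([180] * 2 + [90] * 4))           # 0.0  square planar
```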
In systems with low d electron count, due to special electronic effects such as (second-order) Jahn–Teller stabilization, certain geometries (in which the coordination atoms do not follow a points-on-a-sphere pattern) are stabilized relative to the other possibilities, e.g. for some compounds the trigonal prismatic geometry is stabilized relative to octahedral structures for six-coordination.
Isomerism.
The arrangement of the ligands is fixed for a given complex, but in some cases it is mutable by a reaction that forms another stable isomer.
There exist many kinds of isomerism in coordination complexes, just as in many other compounds.
Stereoisomerism.
Stereoisomerism occurs with the same bonds in distinct orientations. Stereoisomerism can be further classified into:
Cis–trans isomerism and facial–meridional isomerism.
Cis–trans isomerism occurs in octahedral and square planar complexes (but not tetrahedral). When two ligands are adjacent they are said to be cis, when
opposite each other, trans. When three identical ligands occupy one face of an octahedron, the isomer is said to be facial, or fac. In a "fac" isomer, any two identical ligands are adjacent or "cis" to each other. If these three ligands and the metal ion are in one plane, the isomer is said to be meridional, or mer. A "mer" isomer can be considered as a combination of a "trans" and a "cis", since it contains both trans and cis pairs of identical ligands.
Optical isomerism.
Optical isomerism occurs when a complex is not superimposable with its mirror image. It is so called because the two isomers are each optically active, that is, they rotate the plane of polarized light in opposite directions. In the first molecule shown, the symbol Λ ("lambda") is used as a prefix to describe the left-handed propeller twist formed by three bidentate ligands. The second molecule is the mirror image of the first, with the symbol Δ ("delta") as a prefix for the right-handed propeller twist. The third and fourth molecules are a similar pair of Λ and Δ isomers, in this case with two bidentate ligands and two identical monodentate ligands.
Structural isomerism.
Structural isomerism occurs when the bonds are themselves different. Four types of structural isomerism are recognized: ionisation isomerism, solvate or hydrate isomerism, linkage isomerism and coordination isomerism.
Electronic properties.
Many of the properties of transition metal complexes are dictated by their electronic structures. The electronic structure can be described by a relatively ionic model that ascribes formal charges to the metals and ligands. This approach is the essence of crystal field theory (CFT). Crystal field theory, introduced by Hans Bethe in 1929, gives a quantum mechanically based attempt at understanding complexes. But crystal field theory treats all interactions in a complex as ionic and assumes that the ligands can be approximated by negative point charges.
More sophisticated models embrace covalency, and this approach is described by ligand field theory (LFT) and Molecular orbital theory (MO). Ligand field theory, introduced in 1935 and built from molecular orbital theory, can handle a broader range of complexes and can explain complexes in which the interactions are covalent. The chemical applications of group theory can aid in the understanding of crystal or ligand field theory, by allowing simple, symmetry based solutions to the formal equations.
Chemists tend to employ the simplest model required to predict the properties of interest; for this reason, CFT has been a favorite for the discussions when possible. MO and LF theories are more complicated, but provide a more realistic perspective.
The electronic configuration of the complexes gives them some important properties:
Color of transition metal complexes.
Transition metal complexes often have spectacular colors caused by electronic transitions by the absorption of light. For this reason they are often applied as pigments. Most transitions that are related to colored metal complexes are either d–d transitions or charge transfer bands. In a d–d transition, an electron in a d orbital on the metal is excited by a photon to another d orbital of higher energy, therefore d–d transitions occur only for partially-filled d-orbital complexes (d1–9). For complexes having d0 or d10 configuration, charge transfer is still possible even though d–d transitions are not. A charge transfer band entails promotion of an electron from a metal-based orbital into an empty ligand-based orbital (metal-to-ligand charge transfer or MLCT). The converse also occurs: excitation of an electron in a ligand-based orbital into an empty metal-based orbital (ligand-to-metal charge transfer or LMCT). These phenomena can be observed with the aid of electronic spectroscopy; also known as UV-Vis. For simple compounds with high symmetry, the d–d transitions can be assigned using Tanabe–Sugano diagrams. These assignments are gaining increased support with computational chemistry.
Colors of lanthanide complexes.
Superficially lanthanide complexes are similar to those of the transition metals in that some are colored. However, for the common Ln3+ ions (Ln = lanthanide) the colors are all pale, and hardly influenced by the nature of the ligand. The colors are due to 4f electron transitions. As the 4f orbitals in lanthanides are "buried" in the xenon core and shielded from the ligand by the 5s and 5p orbitals they are therefore not influenced by the ligands to any great extent leading to a much smaller crystal field splitting than in the transition metals. The absorption spectra of an Ln3+ ion approximates to that of the free ion where the electronic states are described by spin-orbit coupling. This contrasts to the transition metals where the ground state is split by the crystal field. Absorptions for Ln3+ are weak as electric dipole transitions are parity forbidden (Laporte forbidden) but can gain intensity due to the effect of a low-symmetry ligand field or mixing with higher electronic states ("e.g." d orbitals). f-f absorption bands are extremely sharp which contrasts with those observed for transition metals which generally have broad bands. This can lead to extremely unusual effects, such as significant color changes under different forms of lighting.
Magnetism.
Metal complexes that have unpaired electrons are magnetic. Considering only monometallic complexes, unpaired electrons arise because the complex has an odd number of electrons or because electron pairing is destabilized. Thus, monomeric Ti(III) species have one "d-electron" and must be (para)magnetic, regardless of the geometry or the nature of the ligands. Ti(II), with two d-electrons, forms some complexes that have two unpaired electrons and others with none. This effect is illustrated by the compounds TiX2[(CH3)2PCH2CH2P(CH3)2]2: when X = Cl, the complex is paramagnetic (high-spin configuration), whereas when X = CH3, it is diamagnetic (low-spin configuration). Ligands provide an important means of adjusting the ground state properties.
In bi- and polymetallic complexes, in which the individual centres have an odd number of electrons or that are high-spin, the situation is more complicated. If there is interaction (either direct or through ligand) between the two (or more) metal centres, the electrons may couple (antiferromagnetic coupling, resulting in a diamagnetic compound), or they may enhance each other (ferromagnetic coupling). When there is no interaction, the two (or more) individual metal centers behave as if in two separate molecules.
Reactivity.
Complexes show a variety of possible reactivities:
If the ligands around the metal are carefully chosen, the metal can aid in (stoichiometric or catalytic) transformations of molecules or be used as a sensor.
Classification.
Metal complexes, also known as coordination compounds, include virtually all metal compounds. The study of "coordination chemistry" is the study of "inorganic chemistry" of all alkali and alkaline earth metals, transition metals, lanthanides, actinides, and metalloids. Thus, coordination chemistry is the chemistry of the majority of the periodic table. Metals and metal ions exist, in the condensed phases at least, only surrounded by ligands.
The areas of coordination chemistry can be classified according to the nature of the ligands, in broad terms:
Examples: [Co(EDTA)]−, [Co(NH3)6]3+, [Fe(C2O4)3]3-
Example: (C5H5)Fe(CO)2CH3
Example: hemoglobin contains heme, a porphyrin complex of iron
Example: chlorophyll contains a porphyrin complex of magnesium
Many natural ligands are "classical" especially including water.
Example Ru3(CO)12
Example: [Fe4S4(Scysteinyl)4]2−, in which a cluster is embedded in a biologically active species.
Mineralogy, materials science, and solid state chemistry – as they apply to metal ions – are subsets of coordination chemistry in the sense that the metals are surrounded by ligands. In many cases these ligands are oxides or sulfides, but the metals are coordinated nonetheless, and the principles and guidelines discussed below apply. In hydrates, at least some of the ligands are water molecules. It is true that the focus of mineralogy, materials science, and solid state chemistry differs from the usual focus of coordination or inorganic chemistry. The former are concerned primarily with polymeric structures, properties arising from a collective effects of many highly interconnected metals. In contrast, coordination chemistry focuses on reactivity and properties of complexes containing individual metal atoms or small ensembles of metal atoms.
Nomenclature of coordination complexes.
The basic procedure for naming a complex is:
Examples:
[Cd(CN)2(en)2] → dicyanidobis(ethylenediamine)cadmium(II)
[CoCl(NH3)5]SO4 → pentaamminechloridocobalt(III) sulfate
[Cu(H2O)6]2+ → hexaaquacopper(II) ion
[CuCl5NH3]3− → amminepentachloridocuprate(II) ion
K4[Fe(CN)6] → potassium hexacyanidoferrate(II)
[NiCl4]2− → tetrachloridonickelate(II) ion (The use of chloro- was removed from IUPAC naming convention)
The coordination number of ligands attached to more than one metal (bridging ligands) is indicated by a subscript to the Greek symbol μ placed before the ligand name. Thus the dimer of aluminium trichloride is described by Al2Cl4(μ2-Cl)2.
Any anionic group can be electronically stabilized by any cation. An anionic complex can be stabilised by a hydrogen cation, becoming an acidic complex which can dissociate to release the cationic hydrogen. This kind of complex compound has a name with "ic" added after the central metal. For example, H2[Pt(CN)4] has the name tetracyanoplatinic (II) acid.
Stability constant.
The affinity of metal ions for ligands is described by a stability constant, also called the formation constant, and is represented by the symbol Kf. It is the equilibrium constant for its assembly from the constituent metal and ligands, and can be calculated accordingly, as in the following example for a simple case:
xM (aq) + yL (aq) ⇌ zZ (aq)
formula_0
where x, y, and z are the stoichiometric coefficients of each species, M stands for the metal or metal ion, L for the Lewis bases, and Z for the complex ion. Formation constants vary widely. Large values indicate that the metal has a high affinity for the ligand, provided the system is at equilibrium.
Sometimes the stability constant will be in a different form known as the constant of destability. This constant is expressed as the inverse of the constant of formation and is denoted as Kd = 1/Kf . This constant represents the reverse reaction for the decomposition of a complex ion into its individual metal and ligand components. When comparing the values for Kd, the larger the value, the more unstable the complex ion is.
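A minimal sketch of the simplest 1:1 case (x = y = z = 1), using made-up equilibrium concentrations purely for illustration:

```python
# Minimal sketch for the 1:1 case M + L <=> Z (x = y = z = 1), using made-up
# equilibrium concentrations in mol/L purely for illustration.

M, L, Z = 1.0e-4, 2.0e-3, 5.0e-2    # hypothetical equilibrium concentrations

Kf = Z / (M * L)                    # formation (stability) constant
Kd = 1.0 / Kf                       # destability constant (reverse reaction)

print(f"Kf = {Kf:.2e}")             # large Kf: high affinity of M for L
print(f"Kd = {Kd:.2e}")             # large Kd: less stable complex ion
```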
As a result of these complex ions forming in solutions they also can play a key role in solubility of other compounds. When a complex ion is formed it can alter the concentrations of its components in the solution. For example:
Ag+ + 2NH3 ⇌ [Ag(NH3)2]+
AgCl(s) + H2O(l) ⇌ Ag+(aq) + Cl−(aq)
If these reactions both occurred in the same reaction vessel, the solubility of the silver chloride would be increased by the presence of NH4OH because formation of the diamminesilver(I) complex [Ag(NH3)2]+ consumes a significant portion of the free silver ions from the solution. By Le Chatelier's principle, this causes the equilibrium reaction for the dissolving of the silver chloride, which has silver ion as a product, to shift to the right.
This new solubility can be calculated given the values of Kf and Ksp for the original reactions. The solubility is found essentially by combining the two separate equilibria into one combined equilibrium reaction and this combined reaction is the one that determines the new solubility. So Kc, the new solubility constant, is denoted by:
formula_1
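A worked sketch of this combined equilibrium for AgCl in aqueous ammonia; the values of Ksp and Kf are typical textbook figures assumed here only for illustration, not taken from this article.

```python
# Combined equilibrium AgCl(s) + 2 NH3 <=> [Ag(NH3)2]+ + Cl-, with Kc = Ksp * Kf.
# The constants are typical textbook values, assumed here only for illustration.
from math import sqrt

Ksp = 1.8e-10                 # assumed solubility product of AgCl
Kf = 1.6e7                    # assumed formation constant of [Ag(NH3)2]+
Kc = Ksp * Kf                 # combined equilibrium constant

NH3_0 = 1.0                   # initial ammonia concentration, mol/L
# At equilibrium [complex] = [Cl-] = s and [NH3] = NH3_0 - 2s, so
# sqrt(Kc) = s / (NH3_0 - 2 s), giving:
k = sqrt(Kc)
s = k * NH3_0 / (1 + 2 * k)   # molar solubility of AgCl in the ammonia solution

print(f"Kc = {Kc:.2e}, solubility = {s:.3f} mol/L")   # far above sqrt(Ksp) ~ 1.3e-5
```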
Application of coordination compounds.
As metals only exist in solution as coordination complexes, it follows then that this class of compounds is useful in a wide variety of ways.
Bioinorganic chemistry.
In bioinorganic chemistry and bioorganometallic chemistry, coordination complexes serve either structural or catalytic functions. An estimated 30% of proteins contain metal ions. Examples include the intensely colored vitamin B12, the heme group in hemoglobin, the cytochromes, the chlorin group in chlorophyll, and carboxypeptidase, a hydrolytic enzyme important in digestion. Another complex ion enzyme is catalase, which decomposes the cell's waste hydrogen peroxide. Synthetic coordination compounds are also used to bind to proteins and especially nucleic acids (e.g. anticancer drug cisplatin).
Industry.
Homogeneous catalysis is a major application of coordination compounds for the production of organic substances. Processes include hydrogenation, hydroformylation, oxidation. In one example, a combination of titanium trichloride and triethylaluminium gives rise to Ziegler–Natta catalysts, used for the polymerization of ethylene and propylene to give polymers of great commercial importance as fibers, films, and plastics.
Nickel, cobalt, and copper can be extracted using hydrometallurgical processes involving complex ions. They are extracted from their ores as ammine complexes. Metals can also be separated using the selective precipitation and solubility of complex ions. Cyanide is used chiefly for extraction of gold and silver from their ores.
Phthalocyanine complexes are an important class of pigments.
Analysis.
At one time, coordination compounds were used to identify the presence of metals in a sample. Qualitative inorganic analysis has largely been superseded by instrumental methods of analysis such as atomic absorption spectroscopy (AAS), inductively coupled plasma atomic emission spectroscopy (ICP-AES) and inductively coupled plasma mass spectrometry (ICP-MS).
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "K_f = \\frac{[\\text{Z}]^z}{[\\text{M}]^x[\\text{L}]^y}"
},
{
"math_id": 1,
"text": "K_c = K_{sp} K_f"
}
]
| https://en.wikipedia.org/wiki?curid=7304 |
73047762 | Cobalt(II) stearate | <templatestyles src="Chembox/styles.css"/>
Chemical compound
Cobalt(II) stearate is a metal-organic compound, a salt of cobalt and stearic acid with the chemical formula C36H70CoO4. The compound is classified as a metallic soap, i.e. a metal derivative of a fatty acid.
Synthesis.
An exchange reaction of sodium stearate and cobalt dichloride:
formula_0
Physical properties.
Cobalt(II) stearate forms a violet substance, occurring in several crystal structures.
It is insoluble in water.
Uses.
Cobalt(II) stearate is a high-performance bonding agent for rubber. The compound is suitable for applications in natural rubber, cisdene, styrene-butadiene rubber, and their compounds to bond easily with brass- or zinc-plated steel cord or metal plates as well as various bare steel, especially for bonding with brass plating of various thicknesses.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathsf{ CoCl_2 + 2C_{17}H_{35}COONa \\ \\xrightarrow{}\\ Co(C_{17}H_{35}COO)_2\\downarrow + 2NaCl }"
}
]
| https://en.wikipedia.org/wiki?curid=73047762 |
7304939 | Random energy model | In the statistical physics of disordered systems, the random energy model is a toy model of a system with quenched disorder, such as a spin glass, having a first-order phase transition. It concerns the statistics of a collection of formula_0 spins ("i.e." degrees of freedom formula_1 that can take one of two possible values formula_2) so that the number of possible states for the system is formula_3. The energies of such states are independent and identically distributed Gaussian random variables formula_4 with zero mean and a variance of formula_5. Many properties of this model can be computed exactly. Its simplicity makes this model suitable for pedagogical introduction of concepts like quenched disorder and replica symmetry.
Thermodynamic quantities.
Critical energy per particle: formula_6.
Critical inverse temperature: formula_7.
Partition function: formula_8. At large formula_0, it becomes formula_9 when formula_10, that is, when condensation does not occur; in this regime the partition function is said to be self-averaging.
Free entropy per particle: formula_11
Entropy per particle: formula_12
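A short numerical sketch of these quantities: it draws the 2^N Gaussian energy levels for one disorder realization and compares (1/N) ln Z with the analytic free entropy. Finite-size deviations are expected, especially in the condensed phase.

```python
# One disorder realization of the REM: 2^N Gaussian levels of variance N/2.
# Compare the sampled (1/N) ln Z with the analytic free entropy per particle.
import numpy as np

rng = np.random.default_rng(0)
N = 20
E = rng.normal(0.0, np.sqrt(N / 2), size=2**N)

beta_c = 2 * np.sqrt(np.log(2))

def f_exact(beta):
    return np.log(2) + beta**2 / 4 if beta < beta_c else beta * np.sqrt(np.log(2))

for beta in (0.5, 1.0, 2.5):
    a = -beta * E
    logZ = a.max() + np.log(np.exp(a - a.max()).sum())   # numerically stable ln Z
    # Agreement is good for beta < beta_c; above beta_c finite-size gaps remain.
    print(beta, logZ / N, f_exact(beta))
```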
Condensation.
When formula_10, the Boltzmann distribution of the system is concentrated at energy-per-particle formula_13, of which there are formula_14 states.
When formula_15, the Boltzmann distribution of the system is concentrated at formula_16, and since the entropy per particle at that point is zero, the Boltzmann distribution is concentrated on a sub-exponential number of states. This is a phase transition called condensation.
Participation.
Define the participation ratio as formula_17. The participation ratio measures the amount of condensation in the Boltzmann distribution. It can be interpreted as the probability that two randomly sampled states are exactly the same state. Indeed, it is precisely the Simpson index, a commonly used diversity index.
For each formula_18, the participation ratio is a random variable determined by the energy levels.
When formula_10, the system is not in the condensed phase, and so by asymptotic equipartition, the Boltzmann distribution is asymptotically uniformly distributed over formula_14 states. The participation ratio is then formula_19, which decays exponentially to zero.
When formula_15, the participation ratio satisfies formula_20, where the expectation is taken over all random energy levels.
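A numerical sketch of the participation ratio for a single disorder realization; the quoted limit concerns the disorder average at large N, so sizable sample-to-sample fluctuations are expected at the small size used here.

```python
# Participation ratio Y = sum_E p_E^2 for one REM disorder realization,
# compared with the large-N disorder-averaged prediction max(0, 1 - beta_c/beta).
import numpy as np

rng = np.random.default_rng(1)
N = 18
E = rng.normal(0.0, np.sqrt(N / 2), size=2**N)
beta_c = 2 * np.sqrt(np.log(2))

for beta in (1.0, 3.0, 6.0):
    a = -beta * E
    p = np.exp(a - a.max())
    p /= p.sum()                        # Boltzmann weights
    Y = np.sum(p**2)                    # participation ratio (Simpson index)
    prediction = max(0.0, 1.0 - beta_c / beta)
    print(beta, Y, prediction)
```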
Comparison with other disordered systems.
The formula_21-spin infinite-range model, in which all formula_21-spin sets interact with a random, independent, identically distributed interaction constant, becomes the random energy model in a suitably defined formula_22 limit.
More precisely, if the Hamiltonian of the model is defined by
formula_23
where the sum runs over all formula_24 distinct sets of formula_21 indices, and, for each such set, formula_25, formula_26 is an independent Gaussian variable of mean 0 and variance formula_27, the Random-Energy model is recovered in the formula_22 limit.
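A small numerical check of this normalization: for a fixed spin configuration, the energy of the r-spin model is Gaussian over the disorder with variance C(N, r) J² r!/(2 N^(r−1)), which tends to N J²/2 (the REM value) as N grows. The parameters below are illustrative.

```python
# For a fixed spin configuration, the r-spin energy is Gaussian over the
# disorder with variance C(N,r) J^2 r! / (2 N^(r-1)), approaching N J^2 / 2
# (the REM normalization) as N grows. Parameters below are illustrative.
import itertools
from math import comb, factorial
import numpy as np

rng = np.random.default_rng(2)
N, r, J = 12, 3, 1.0
subsets = list(itertools.combinations(range(N), r))
coupling_std = np.sqrt(J**2 * factorial(r) / (2 * N**(r - 1)))

sigma = rng.choice([-1, 1], size=N)                            # fixed configuration
prods = np.array([np.prod(sigma[list(s)]) for s in subsets])   # spin product per subset
G = rng.normal(0.0, coupling_std, size=(2000, len(subsets)))   # many disorder draws
energies = G @ prods                                           # H(sigma) for each draw

exact = comb(N, r) * J**2 * factorial(r) / (2 * N**(r - 1))
print(energies.var(), exact, N * J**2 / 2)
```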
Derivation of thermodynamical quantities.
As its name suggests, in the REM each microscopic state has an independent distribution of energy. For a particular realization of the disorder, formula_28 where formula_29 refers to the individual spin configurations described by the state and formula_30 is the energy associated with it. The final extensive variables like the free energy need to be averaged over all realizations of the disorder, just as in the case of the Edwards–Anderson model. Averaging formula_31 over all possible realizations, we find that the probability that a given configuration of the disordered system has an energy equal to formula_32 is given by
formula_33
where formula_34 denotes the average over all realizations of the disorder. Moreover, the joint probability distribution of the energy values of two different microscopic configurations of the spins, formula_35 and formula_36 factorizes:
formula_37
It can be seen that the probability of a given spin configuration only depends on the energy of that state and not on the individual spin configuration.
The entropy of the REM is given by
formula_38
for formula_39. However, this expression only holds if the entropy per spin, formula_40, is finite, i.e., when formula_41. Since formula_42, this corresponds to formula_43. For formula_44, the system remains "frozen" in a small number of configurations of energy formula_45 and the entropy per spin vanishes in the thermodynamic limit.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "N"
},
{
"math_id": 1,
"text": "\\boldsymbol\\sigma\\equiv \\{\\sigma_i\\}_{i=1}^N"
},
{
"math_id": 2,
"text": "\\sigma_i=\\pm 1"
},
{
"math_id": 3,
"text": "2^N"
},
{
"math_id": 4,
"text": "E_x \\sim \\mathcal{N}(0,N/2)"
},
{
"math_id": 5,
"text": "N/2"
},
{
"math_id": 6,
"text": "h_c = \\sqrt{\\ln 2}"
},
{
"math_id": 7,
"text": "\\beta_c = 2\\sqrt{\\ln 2}"
},
{
"math_id": 8,
"text": "Z(\\beta) = \\sum_s e^{-\\beta H(s)}"
},
{
"math_id": 9,
"text": "2^N \\mathbb E_E[e^{-\\beta E}]"
},
{
"math_id": 10,
"text": "\\beta < \\beta_c"
},
{
"math_id": 11,
"text": "f(\\beta) = \\lim_{N \\to \\infty} \\frac 1N \\ln Z = \\begin{cases} \\ln 2 + \\frac 14 \\beta^2 \\quad & \\beta < \\beta_c, \\\\\n\\beta \\sqrt{\\ln 2} \\quad & \\beta > \\beta_c \\end{cases}"
},
{
"math_id": 12,
"text": "s(h) = \\max_\\beta(f(\\beta) - \\beta h) = \\begin{cases} \\ln 2 - h^2 \\quad & h \\in [-h_c, +h_c ], \\\\ 0 \\quad & \\text{else }\\end{cases}"
},
{
"math_id": 13,
"text": "h = -\\beta/2"
},
{
"math_id": 14,
"text": "\\sim e^{N(\\ln 2 - \\beta^2/4)}"
},
{
"math_id": 15,
"text": "\\beta > \\beta_c"
},
{
"math_id": 16,
"text": "h = -h_c "
},
{
"math_id": 17,
"text": "Y = \\sum_E p_E^2 = \\frac{\\sum_E e^{-2\\beta E}}{(\\sum_E e^{-\\beta E})^2}"
},
{
"math_id": 18,
"text": "N, \\beta"
},
{
"math_id": 19,
"text": "\\sim e^{N(\\ln 2 - \\beta^2/4)} \\times (e^{-N(\\ln 2 - \\beta^2/4)})^2 = e^{-N(\\ln 2 - \\beta^2/4)}"
},
{
"math_id": 20,
"text": "\\lim_{N\\to\\infty} \\mathbb E [Y] = 1 - \\frac{\\beta_c}{\\beta}"
},
{
"math_id": 21,
"text": "r"
},
{
"math_id": 22,
"text": "r\\to\\infty"
},
{
"math_id": 23,
"text": "\nH(\\boldsymbol\\sigma)=\\sum_{\\{i_1,\\ldots,i_r\\}}J_{i_1,\\ldots i_r} \\sigma_{i_1} \\cdots \\sigma_{i_r},\n"
},
{
"math_id": 24,
"text": "{N\\choose r}"
},
{
"math_id": 25,
"text": "\\{i_1,\\ldots,i_r\\}"
},
{
"math_id": 26,
"text": "J_{i_1,\\ldots,i_r}"
},
{
"math_id": 27,
"text": "J^2r!/(2 N^{r-1})"
},
{
"math_id": 28,
"text": "P(E) = \\delta(E - H(\\sigma))"
},
{
"math_id": 29,
"text": "\\sigma=(\\sigma_i)"
},
{
"math_id": 30,
"text": "H(\\sigma)"
},
{
"math_id": 31,
"text": "P(E)"
},
{
"math_id": 32,
"text": "E"
},
{
"math_id": 33,
"text": "\n[P(E)] = \\sqrt{\\frac{1}{N\\pi J^2}}\\exp\\left(-\\dfrac{E^2}{J^2 N}\\right),\n"
},
{
"math_id": 34,
"text": "[\\cdots]"
},
{
"math_id": 35,
"text": "\\sigma"
},
{
"math_id": 36,
"text": "\\sigma'"
},
{
"math_id": 37,
"text": "\n[P(E,E')]=[P(E)]\\,[P(E')].\n"
},
{
"math_id": 38,
"text": "\nS(E) = N\\left[\\log 2 - \\left(\\frac E {NJ}\\right)^2\\right]\n"
},
{
"math_id": 39,
"text": "|E| < NJ\\sqrt{\\log 2}"
},
{
"math_id": 40,
"text": "\\lim_{N\\to\\infty}S(E)/N"
},
{
"math_id": 41,
"text": " |E|< -N J \\sqrt{\\log 2}."
},
{
"math_id": 42,
"text": "(1/T)=\\partial S/\\partial E"
},
{
"math_id": 43,
"text": "T>T_c=1/(2\\sqrt{\\log 2})"
},
{
"math_id": 44,
"text": "T<T_c"
},
{
"math_id": 45,
"text": "E\\simeq -N J \\sqrt{\\log 2}"
}
]
| https://en.wikipedia.org/wiki?curid=7304939 |
73050688 | Tensor (machine learning) | Concept in machine learning
In machine learning, the term tensor informally refers to two different concepts that organize and represent data. Data may be organized in a multidimensional array ("M"-way array) that is informally referred to as a "data tensor"; however, in the strict mathematical sense, a tensor is a multilinear mapping over a set of domain vector spaces to a range vector space. Observations, such as images, movies, volumes, sounds, and relationships among words and concepts, stored in an "M"-way array ("data tensor"), may be analyzed either by artificial neural networks or tensor methods.
Tensor decomposition can factorize data tensors into smaller tensors. Operations on data tensors can be expressed in terms of matrix multiplication and the Kronecker product. The computation of gradients, an important aspect of the backpropagation algorithm, can be performed using PyTorch and TensorFlow.
Computations are often performed on graphics processing units (GPUs) using CUDA and on dedicated hardware such as Google's Tensor Processing Unit or Nvidia's Tensor core. These developments have greatly accelerated neural network architectures and increased the size and complexity of models that can be trained.
History.
A tensor is by definition a multilinear map. In mathematics, this may express a multilinear relationship between sets of algebraic objects. In physics, tensor fields, considered as tensors at each point in space, are useful in expressing mechanics such as stress or elasticity. In machine learning, the exact use of tensors depends on the statistical approach being used.
In 2001, the field of signal processing and statistics were making use of tensor methods. Pierre Comon surveys the early adoption of tensor methods in the fields of telecommunications, radio surveillance, chemometrics and sensor processing. Linear tensor rank methods (such as, Parafac/CANDECOMP) analyzed M-way arrays ("data tensors") composed of higher order statistics that were employed in blind source separation problems to compute a linear model of the data. He noted several early limitations in determining the tensor rank and efficient tensor rank decomposition.
In the early 2000s, multilinear tensor methods crossed over into computer vision, computer graphics and machine learning with papers by Vasilescu or in collaboration with Terzopoulos, such as Human Motion Signatures, TensorFaces, TensorTextures, and Multilinear Projection. Multilinear algebra, the algebra of higher-order tensors, is a suitable and transparent framework for analyzing the multifactor structure of an ensemble of observations and for addressing the difficult problem of disentangling the causal factors based on second order or higher order statistics associated with each causal factor.
Tensor (multilinear) factor analysis disentangles and reduces the influence of different causal factors with multilinear subspace learning.
When treating an image or a video as a 2- or 3-way array, i.e., "data matrix/tensor", tensor methods reduce spatial or time redundancies as demonstrated by Wang and Ahuja.
Yoshua Bengio, Geoff Hinton, and their collaborators briefly discuss the relationship between deep neural networks and tensor factor analysis beyond the use of M-way arrays ("data tensors") as inputs. One of the early uses of tensors for neural networks appeared in natural language processing. A single word can be expressed as a vector via Word2vec. Thus a relationship between two words can be encoded in a matrix. However, for more complex relationships such as subject-object-verb, it is necessary to build higher-dimensional networks. In 2009, the work of Sutskever introduced Bayesian Clustered Tensor Factorization to model relational concepts while reducing the parameter space. From 2014 to 2015, tensor methods became more common in convolutional neural networks (CNNs). Tensor methods can organize neural network weights in a "data tensor" and analyze or reduce the number of weights. Lebedev et al. accelerated CNN networks for character classification (the recognition of letters and digits in images) by using 4D kernel tensors.
Definition.
Let formula_0 be a field such as the real numbers formula_1 or the complex numbers formula_2. A tensor formula_3 is an formula_4 array over formula_0:
formula_5
Here, formula_6 and formula_7 are positive integers, and formula_6 is the number of dimensions, number of "ways", or "mode" of the tensor.
One basic approach (not the only way) to using tensors in machine learning is to embed various data types directly. For example, a grayscale image, commonly represented as a discrete 2D function formula_8 with resolution formula_9, may be embedded in a mode-2 tensor as
formula_10
A color image with 3 channels for RGB might be embedded in a mode-3 tensor with three elements in an additional dimension:
formula_11
In natural language processing, a word might be expressed as a vector formula_12 via the Word2vec algorithm. Thus formula_12 becomes a mode-1 tensor
formula_13
The embedding of subject-object-verb semantics requires embedding relationships among three words. Because a word is itself a vector, subject-object-verb semantics could be expressed using mode-3 tensors
formula_14
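A minimal NumPy sketch of the embeddings described above; the resolutions and the (deliberately small) embedding dimension are illustrative assumptions, and real pipelines would fill these arrays from data.

```python
# Illustrative embeddings as NumPy arrays; shapes are assumptions, and real
# pipelines would fill them from data. The embedding dimension is kept small.
import numpy as np

gray = np.zeros((480, 640))           # mode-2 tensor: grayscale image (H, W)
color = np.zeros((480, 640, 3))       # mode-3 tensor: RGB image (H, W, channel)

d = 64                                # small illustrative word-embedding dimension
word = np.zeros(d)                    # mode-1 tensor: one word vector

# Subject-object-verb relations over the embedding space collected in a
# mode-3 tensor:
sov = np.zeros((d, d, d))
print(gray.ndim, color.ndim, word.ndim, sov.ndim)   # 2 3 1 3
```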
In practice the neural network designer is primarily concerned with the specification of embeddings, the connection of tensor layers, and the operations performed on them in a network. Modern machine learning frameworks manage the optimization, tensor factorization and backpropagation automatically.
As unit values.
Tensors may be used as the unit values of neural networks which extend the concept of scalar, vector and matrix values to multiple dimensions.
The output value of a single layer unit formula_15 is the sum-product of its input units and the connection weights filtered through the activation function formula_16:
formula_17
where
formula_18
If each output element of formula_15 is a scalar, then we have the classical definition of an artificial neural network. By replacing each unit component with a tensor, the network is able to express higher dimensional data such as images or videos:
formula_19
This use of tensors to replace unit values is common in convolutional neural networks where each unit might be an image processed through multiple layers. By embedding the data in tensors such network structures enable learning of complex data types.
In fully connected layers.
Tensors may also be used to compute the layers of a fully connected neural network, where the tensor is applied to the entire layer instead of individual unit values.
The output value of a single layer unit formula_15 is the sum-product of its input units and the connection weights filtered through the activation function formula_16:
formula_20
The vectors formula_21 and formula_22 of output values can be expressed as mode-1 tensors, while the hidden weights can be expressed as a mode-2 tensor. In this example the unit values are scalars while the tensor takes on the dimensions of the network layers:
formula_23
formula_24
formula_25
In this notation, the output values can be computed as a tensor product of the input and weight tensors:
formula_26
which computes the sum-product as a tensor multiplication (similar to matrix multiplication).
This formulation of tensors enables the entire layer of a fully connected network to be efficiently computed by mapping the units and weights to tensors.
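A minimal NumPy sketch of this formulation, writing the layer as a single tensor contraction y = f(Wx); the layer sizes and the choice of ReLU as the activation are illustrative assumptions.

```python
# Fully connected layer as a single tensor contraction y = f(W x); sizes and
# the ReLU activation are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)                  # mode-1 input tensor (4 input units)
W = rng.normal(size=(3, 4))             # mode-2 weight tensor (3 output units)

relu = lambda t: np.maximum(t, 0.0)     # activation function f
y = relu(np.einsum("ij,j->i", W, x))    # sum-product over the shared index
print(y.shape)                          # (3,)
```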
In convolutional layers.
A different reformulation of neural networks allows tensors to express the convolution layers of a neural network. A convolutional layer has multiple inputs, each of which is a spatial structure such as an image or volume. The inputs are convolved by filtering before being passed to the next layer. A typical use is to perform feature detection or isolation in image recognition.
Convolution is often computed as the multiplication of an input signal formula_27 with a filter kernel formula_16. In two dimensions the discrete, finite form is:
formula_28
where formula_29 is the width of the kernel.
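Before turning to the tensor reformulation below, a direct (and deliberately naive) evaluation of this double sum on the interior of the image might look like the following sketch; the 3×3 averaging kernel is an arbitrary choice:

import numpy as np

def conv2d_valid(g, K):
    # K has shape (2w+1, 2w+1); the sum is evaluated only where it stays in bounds.
    w = K.shape[0] // 2
    H, W = g.shape
    out = np.zeros((H - 2 * w, W - 2 * w))
    for x in range(w, H - w):
        for y in range(w, W - w):
            s = 0.0
            for j in range(-w, w + 1):
                for k in range(-w, w + 1):
                    s += K[j + w, k + w] * g[x + j, y + k]
            out[x - w, y - w] = s
    return out

g = np.random.rand(6, 6)
K = np.ones((3, 3)) / 9.0           # assumed 3x3 averaging kernel, so w = 1
print(conv2d_valid(g, K).shape)     # (4, 4)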
This definition can be rephrased as a matrix-vector product in terms of tensors that express the kernel, data and inverse transform of the kernel.
formula_30
where formula_31 and formula_32 are the inverse transform, data and kernel. The derivation is more complex when the filtering kernel also includes a non-linear activation function such as sigmoid or ReLU.
The hidden weights of the convolution layer are the parameters to the filter. These can be reduced with a pooling layer which reduces the resolution (size) of the data, and can also be expressed as a tensor operation.
Tensor factorization.
An important contribution of tensors in machine learning is the ability to factorize tensors to decompose data into constituent factors or reduce the learned parameters. Data tensor modeling techniques stem from the linear tensor decomposition (CANDECOMP/Parafac decomposition) and the multilinear tensor decompositions (Tucker).
Tucker decomposition.
Tucker decomposition, for example, takes a 3-way array formula_33
and decomposes the tensor into three matrices formula_34 and a smaller tensor formula_35. The shapes of the matrices and the new tensor are such that the total number of elements is reduced. The new tensors have shapes
formula_36
formula_37
formula_38
formula_39
Then the original tensor can be expressed as the tensor product of these four tensors:
formula_40
In the example shown in the figure, the dimensions of the tensors are
formula_41: I=8, J=6, K=3, formula_42: I=8, P=5, formula_43: J=6, Q=4, formula_32: K=3, R=2, formula_35: P=5, Q=4, R=2.
The total number of elements in the Tucker factorization is
formula_44
formula_45
The number of elements in the original formula_41 is 144, resulting in a data reduction from 144 down to 110 elements, a reduction of 23% in parameters or data size. For much larger initial tensors, and depending on the rank (redundancy) of the tensor, the gains can be more significant.
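A sketch of this bookkeeping with NumPy, using random factors of the stated sizes (a real Tucker decomposition would compute them from formula_41, for example by a higher-order SVD):

import numpy as np

I, J, K, P, Q, R = 8, 6, 3, 5, 4, 2       # dimensions from the example above
G = np.random.rand(P, Q, R)               # core tensor
A = np.random.rand(I, P)
B = np.random.rand(J, Q)
C = np.random.rand(K, R)

# Reconstruct the full tensor by contracting the core with the three factor matrices.
X = np.einsum('pqr,ip,jq,kr->ijk', G, A, B, C)

print(X.size)                             # 144 elements in the full tensor
print(A.size + B.size + C.size + G.size)  # 110 elements in the factored form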
The work of Rabanser et al. provides an introduction to tensors with more details on the extension of Tucker decomposition to N-dimensions beyond the mode-3 example given here.
Tensor trains.
Another technique for decomposing tensors rewrites the initial tensor as a sequence (train) of smaller sized tensors. A tensor-train (TT) is a sequence of tensors of reduced rank, called "canonical factors". The original tensor can be expressed as the sum-product of the sequence.
formula_46
The tensor train was developed in 2011 by Ivan Oseledets, who observes that Tucker decomposition is "suitable for small dimensions, especially for the three-dimensional case. For large "d" it is not suitable." Thus tensor-trains can be used to factorize larger tensors in higher dimensions.
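As a rough sketch with hypothetical cores (random here; in practice they would be computed, for example by the TT-SVD algorithm), a mode-3 tensor in this format with TT-ranks of 2 can be rebuilt entry by entry as a chain of matrix products:

import numpy as np

n1, n2, n3, r1, r2 = 4, 5, 6, 2, 2        # assumed mode sizes and TT-ranks
G1 = np.random.rand(n1, r1)               # boundary ranks of 1 are absorbed into the end cores
G2 = np.random.rand(r1, n2, r2)
G3 = np.random.rand(r2, n3)

# Each entry X[i, j, k] equals G1[i, :] @ G2[:, j, :] @ G3[:, k].
X = np.einsum('ia,ajb,bk->ijk', G1, G2, G3)
print(X.shape)                            # (4, 5, 6)
print(G1.size + G2.size + G3.size, "vs", X.size)   # 40 parameters vs 120 entries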
Tensor graphs.
The unified data architecture and automatic differentiation of tensors has enabled higher-level designs of machine learning in the form of tensor graphs. This leads to new architectures, such as tensor-graph convolutional networks (TGCN), which identify highly non-linear associations in data, combine multiple relations, and scale gracefully, while remaining robust and performant.
These developments are impacting all areas of machine learning, such as text mining and clustering, time varying data, and neural networks wherein the input data is a social graph and the data changes dynamically.
Hardware.
Tensors provide a unified way to train neural networks for more complex data sets. However, training is expensive to compute on classical CPU hardware.
In 2014, Nvidia developed cuDNN, CUDA Deep Neural Network, a library for a set of optimized primitives written in the parallel CUDA language. CUDA and thus cuDNN run on dedicated GPUs that implement unified massive parallelism in hardware. These GPUs were not yet dedicated chips for tensors, but rather existing hardware adapted for parallel computation in machine learning.
In the period 2015–2017 Google invented the Tensor Processing Unit (TPU). TPUs are dedicated, fixed function hardware units that specialize in the matrix multiplications needed for tensor products. Specifically, they implement an array of 65,536 multiply units that can perform a 256x256 matrix sum-product in just one global instruction cycle.
Later in 2017, Nvidia released its own Tensor Core with the Volta GPU architecture. Each Tensor Core is a microunit that can perform a 4x4 matrix sum-product. There are eight tensor cores for each streaming multiprocessor (SM). The first GV100 GPU has 84 SMs, resulting in 672 tensor cores. This device accelerated machine learning by 12x over the previous Tesla GPUs. The number of tensor cores scales with the number of SM units, which continues to grow in each new generation of cards.
The development of GPU hardware, combined with the unified architecture of tensor cores, has enabled the training of much larger neural networks. In 2022, the largest neural network was Google's PaLM with 540 billion learned parameters (network weights). By comparison, the older GPT-3 language model has over 175 billion learned parameters and produces human-like text; size is not everything, however: Stanford's much smaller 2023 Alpaca model claims to perform better, having been fine-tuned from the 7 billion parameter variant of Meta's 2023 LLaMA model. The widely popular chatbot ChatGPT is built on top of GPT-3.5 (and, after an update, GPT-4) using supervised and reinforcement learning.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathbb F"
},
{
"math_id": 1,
"text": "\\mathbb R"
},
{
"math_id": 2,
"text": "\\mathbb C"
},
{
"math_id": 3,
"text": "{\\mathcal A}"
},
{
"math_id": 4,
"text": "I_1 \\times I_2 \\times \\cdots \\times I_C"
},
{
"math_id": 5,
"text": "{\\mathcal A} \\in {\\mathbb F}^{I_1 \\times I_2 \\times \\ldots \\times I_C}."
},
{
"math_id": 6,
"text": "C"
},
{
"math_id": 7,
"text": "I_1, I_2, \\ldots, I_C"
},
{
"math_id": 8,
"text": "f(x,y)"
},
{
"math_id": 9,
"text": "N \\times M"
},
{
"math_id": 10,
"text": "f(x,y) \\mapsto \\mathcal{A}\\in \\mathbb{R}^{N \\times M}."
},
{
"math_id": 11,
"text": "f_{RGB}(x,y) \\mapsto \\mathcal{A}\\in \\mathbb{R}^{N \\times M \\times 3}."
},
{
"math_id": 12,
"text": "v"
},
{
"math_id": 13,
"text": "v \\mapsto \\mathcal{A}\\in \\mathbb{R}^N."
},
{
"math_id": 14,
"text": " v_a \\times v_b \\times v_c \\mapsto \\mathcal{A}\\in \\mathbb{R}^{N \\times N \\times N}."
},
{
"math_id": 15,
"text": "y_m"
},
{
"math_id": 16,
"text": "f"
},
{
"math_id": 17,
"text": "y_m = f\\left(\\sum_n x_n u_{m,n}\\right),"
},
{
"math_id": 18,
"text": "y_m \\in \\mathbb{R}."
},
{
"math_id": 19,
"text": " y_m \\in \\mathbb{R}^{I_0 \\times I_1 \\times .. \\times I_C}."
},
{
"math_id": 20,
"text": "y_m = f\\left(\\sum_n x_n u_{m,n}\\right)."
},
{
"math_id": 21,
"text": "x"
},
{
"math_id": 22,
"text": "y"
},
{
"math_id": 23,
"text": " x_n \\mapsto \\mathcal{X}\\in \\mathbb{R}^{1 \\times N},"
},
{
"math_id": 24,
"text": " y_n \\mapsto \\mathcal{Y}\\in \\mathbb{R}^{M \\times 1},"
},
{
"math_id": 25,
"text": " u_n \\mapsto \\mathcal{U}\\in \\mathbb{R}^{N \\times M}."
},
{
"math_id": 26,
"text": " \\mathcal{Y} = f ( \\mathcal{X} \\mathcal{U} )."
},
{
"math_id": 27,
"text": "g"
},
{
"math_id": 28,
"text": "(f*g)_{x,y} = \\sum_{j=-w}^w \\sum_{k=-w}^w f_{j,k} g_{x+j,y+k},"
},
{
"math_id": 29,
"text": "w"
},
{
"math_id": 30,
"text": "\\mathcal{Y} = \\mathcal{A}[(Cg) \\odot (Bd)],"
},
{
"math_id": 31,
"text": "\\mathcal{A}, \\mathcal{B}"
},
{
"math_id": 32,
"text": "\\mathcal{C}"
},
{
"math_id": 33,
"text": "\\mathcal{X} \\in \\mathbb{R}^{I \\times J \\times K}"
},
{
"math_id": 34,
"text": "\\mathcal{A,B,C}"
},
{
"math_id": 35,
"text": "\\mathcal{G}"
},
{
"math_id": 36,
"text": "\\mathcal{A} \\in \\mathbb{R}^{I \\times P},"
},
{
"math_id": 37,
"text": "\\mathcal{B} \\in \\mathbb{R}^{J \\times Q},"
},
{
"math_id": 38,
"text": "\\mathcal{C} \\in \\mathbb{R}^{K \\times R},"
},
{
"math_id": 39,
"text": "\\mathcal{G} \\in \\mathbb{R}^{P \\times Q \\times R}."
},
{
"math_id": 40,
"text": "\\mathcal{X} = \\mathcal{G} \\times \\mathcal{A} \\times \\mathcal{B} \\times \\mathcal{C}."
},
{
"math_id": 41,
"text": "\\mathcal{X}"
},
{
"math_id": 42,
"text": "\\mathcal{A}"
},
{
"math_id": 43,
"text": "\\mathcal{B}"
},
{
"math_id": 44,
"text": "|\\mathcal{A}|+|\\mathcal{B}|+|\\mathcal{C}|+|\\mathcal{G}| = "
},
{
"math_id": 45,
"text": "(I \\times P) + (J \\times Q) + (K \\times R) + (P \\times Q \\times R) = 8\\times5 + 6\\times4 + 3\\times2 + 5\\times4\\times2 = 110."
},
{
"math_id": 46,
"text": "\\mathcal{X} = \\mathcal{G_1} \\mathcal{G_2} \\mathcal{G_3} .. \\mathcal{G_d} "
}
]
| https://en.wikipedia.org/wiki?curid=73050688 |
73051980 | Brownian sheet | In mathematics, a Brownian sheet or multiparametric Brownian motion is a multiparametric generalization of the Brownian motion to a Gaussian random field. This means we generalize the "time" parameter formula_0 of a Brownian motion formula_1 from formula_2 to formula_3.
The exact dimension formula_4 of the space of the new time parameter varies between authors. We follow John B. Walsh and define the formula_5-Brownian sheet, while some authors define the Brownian sheet specifically only for formula_6, which we call the formula_7-Brownian sheet.
This definition is due to Nikolai Chentsov; there exists a slightly different version due to Paul Lévy.
(n,d)-Brownian sheet.
A formula_8-dimensional gaussian process formula_9 is called a formula_5-Brownian sheet if
it has zero mean, i.e. formula_10 for all formula_11, and
its covariance function is given by
formula_12
for formula_13.
Properties.
From the definition follows
formula_14
almost surely.
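Examples.
The formula_15-Brownian sheet is the Brownian motion in formula_16.
The formula_17-Brownian sheet is the Brownian motion in formula_18.
The formula_19-Brownian sheet is the multiparametric Brownian motion formula_20 with formula_21.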
Lévy's definition of the multiparametric Brownian motion.
In Lévy's definition one replaces the covariance condition above with the following condition
formula_22
where formula_23 is the Euclidean metric on formula_24.
Existence of abstract Wiener measure.
Consider the space formula_25 of continuous functions of the form formula_26 satisfying
formula_27
This space becomes a separable Banach space when equipped with the norm
formula_28
Notice this space includes densely the space of zero at infinity formula_29 equipped with the uniform norm, since one can bound the uniform norm with the norm of formula_25 from above through the Fourier inversion theorem.
Let formula_30 be the space of tempered distributions. One can then show that there exists a suitable separable Hilbert space (and Sobolev space)
formula_31
that is continuously embedded as a dense subspace in formula_29 and thus also in formula_32, and that there exists a probability measure formula_33 on formula_32 such that the triple
formula_34
is an abstract Wiener space.
A path formula_35 is formula_33-almost surely Hölder continuous for every exponent formula_36, and nowhere Hölder continuous for any exponent formula_37.
This handles the construction of a Brownian sheet in the case formula_38. For higher dimensional formula_8, the construction is similar. | [
{
"math_id": 0,
"text": "t"
},
{
"math_id": 1,
"text": "B_t"
},
{
"math_id": 2,
"text": "\\R_{+}"
},
{
"math_id": 3,
"text": "\\R_{+}^n"
},
{
"math_id": 4,
"text": "n"
},
{
"math_id": 5,
"text": "(n,d)"
},
{
"math_id": 6,
"text": "n=2"
},
{
"math_id": 7,
"text": "(2,d)"
},
{
"math_id": 8,
"text": "d"
},
{
"math_id": 9,
"text": "B=(B_t,t\\in \\mathbb{R}_+^n)"
},
{
"math_id": 10,
"text": "\\mathbb{E}[B_t]=0"
},
{
"math_id": 11,
"text": "t=(t_1,\\dots t_n)\\in \\mathbb{R}_+^n"
},
{
"math_id": 12,
"text": "\\operatorname{cov}(B_s^{(i)},B_t^{(j)})=\\begin{cases}\n \\prod\\limits_{l=1}^n \\operatorname{min} (s_l,t_l) & \\text{if }i=j,\\\\\n 0 &\\text{else}\n \\end{cases}"
},
{
"math_id": 13,
"text": "1\\leq i,j\\leq d"
},
{
"math_id": 14,
"text": "B(0,t_2,\\dots,t_n)=B(t_1,0,\\dots,t_n)=\\cdots=B(t_1,t_2,\\dots,0)=0"
},
{
"math_id": 15,
"text": "(1,1)"
},
{
"math_id": 16,
"text": "\\mathbb{R}^1"
},
{
"math_id": 17,
"text": "(1,d)"
},
{
"math_id": 18,
"text": "\\mathbb{R}^d"
},
{
"math_id": 19,
"text": "(2,1)"
},
{
"math_id": 20,
"text": "X_{t,s}"
},
{
"math_id": 21,
"text": "(t,s)\\in [0,\\infty)\\times [0,\\infty)"
},
{
"math_id": 22,
"text": "\\operatorname{cov}(B_s,B_t)=\\frac{(|t|+|s|-|t-s|)}{2}"
},
{
"math_id": 23,
"text": "|\\cdot|"
},
{
"math_id": 24,
"text": "\\R^n"
},
{
"math_id": 25,
"text": "\\Theta^{\\frac{n+1}{2}}(\\mathbb R^n;\\R)"
},
{
"math_id": 26,
"text": "f:\\mathbb R^n\\to\\mathbb R"
},
{
"math_id": 27,
"text": "\\lim\\limits_{|x|\\to \\infty}\\left(\\log(e+|x|)\\right)^{-1}|f(x)|=0."
},
{
"math_id": 28,
"text": "\\|f\\|_{\\Theta^{\\frac{n+1}{2}}(\\mathbb R^n;\\R)} := \\sup_{x\\in\\mathbb R^n}\\left(\\log(e+|x|)\\right)^{-1}|f(x)|."
},
{
"math_id": 29,
"text": "C_0(\\mathbb{R}^n;\\mathbb{R})"
},
{
"math_id": 30,
"text": "\\mathcal{S}'(\\mathbb{R}^{n};\\mathbb{R})"
},
{
"math_id": 31,
"text": "H^\\frac{n+1}{2}(\\mathbb R^n,\\mathbb R)\\subseteq \\mathcal{S}'(\\mathbb{R}^{n};\\mathbb{R})"
},
{
"math_id": 32,
"text": "\\Theta^{\\frac{n+1}{2}}(\\mathbb R^n;\\mathbb{R})"
},
{
"math_id": 33,
"text": "\\omega"
},
{
"math_id": 34,
"text": "(H^{\\frac{n+1}{2}}(\\mathbb R^n;\\mathbb{R}),\\Theta^{\\frac{n+1}{2}}(\\mathbb R^n;\\mathbb{R}),\\omega)"
},
{
"math_id": 35,
"text": "\\theta \\in \\Theta^{\\frac{n+1}{2}}(\\mathbb{R}^n;\\mathbb{R})"
},
{
"math_id": 36,
"text": "\\alpha \\in (0,1/2)"
},
{
"math_id": 37,
"text": "\\alpha> 1/2"
},
{
"math_id": 38,
"text": "d=1"
}
]
| https://en.wikipedia.org/wiki?curid=73051980 |
73052423 | Double operator integral | Type of integral
In functional analysis, double operator integrals (DOI) are integrals of the form
formula_0
where formula_1 is a bounded linear operator between two separable Hilbert spaces,
formula_2
formula_3
are two spectral measures, where formula_4 stands for the set of orthogonal projections over formula_5, and formula_6 is a scalar-valued measurable function called the "symbol" of the DOI. The integrals are to be understood in the form of Stieltjes integrals.
Double operator integrals can be used to estimate the differences of two operators and have applications in perturbation theory. The theory was mainly developed by Mikhail Shlyomovich Birman and Mikhail Zakharovich Solomyak in the late 1960s and 1970s; however, such integrals first appeared earlier in a paper by Daletskii and Krein.
Double operator integrals.
The map
formula_7
is called a "transformer". We simply write formula_8, when it's clear which spectral measures we are looking at.
Originally Birman and Solomyak considered a Hilbert–Schmidt operator formula_9 and defined a spectral measure formula_10 by
formula_11
for measurable sets formula_12, then the double operator integral formula_13 can be defined as
formula_14
for bounded and measurable functions formula_6. However, one can also consider more general operators formula_9 as long as formula_13 stays bounded.
Examples.
Perturbation theory.
Consider the case where formula_15 is a Hilbert space and let formula_16 and formula_17 be two bounded self-adjoint operators on formula_5. Let formula_18, and let formula_19 be a function on a set formula_20 such that the spectra formula_21 and formula_22 are contained in formula_20. As usual, formula_23 is the identity operator. Then, by the spectral theorem, formula_24, formula_25 and formula_26, hence
formula_27
and so
formula_28
where formula_29 and formula_30 denote the corresponding spectral measures of formula_16 and formula_17. | [
{
"math_id": 0,
"text": "\\operatorname{Q}_{\\varphi}:=\\int_{N}\\int_M \\varphi(x,y)\\mathrm{d}E(x)\\operatorname{T}\\mathrm{d}F (y),"
},
{
"math_id": 1,
"text": "\\operatorname{T}:G\\to H"
},
{
"math_id": 2,
"text": "E:(N,\\mathcal{A})\\to P(H),"
},
{
"math_id": 3,
"text": "F:(M,\\mathcal{B})\\to P(G),"
},
{
"math_id": 4,
"text": "P(H)"
},
{
"math_id": 5,
"text": "H"
},
{
"math_id": 6,
"text": "\\varphi"
},
{
"math_id": 7,
"text": "\\operatorname{J}_{\\varphi}^{E,F}:\\operatorname{T}\\mapsto \\operatorname{Q}_{\\varphi}"
},
{
"math_id": 8,
"text": "\\operatorname{J}_{\\varphi}:=\\operatorname{J}_{\\varphi}^{E,F}"
},
{
"math_id": 9,
"text": "\\operatorname{T}"
},
{
"math_id": 10,
"text": "\\mathcal{E}"
},
{
"math_id": 11,
"text": "\\mathcal{E}(\\Lambda\\times \\Delta)\\operatorname{T}:=E(\\Lambda)\\operatorname{T}F(\\Delta),\\quad \\operatorname{T}\\in \\mathcal{S}_2,"
},
{
"math_id": 12,
"text": "\\Lambda\\times \\Delta\\subset N \\times M"
},
{
"math_id": 13,
"text": "\\operatorname{Q}_{\\varphi}"
},
{
"math_id": 14,
"text": "\\operatorname{Q}_{\\varphi}:=\\left(\\int_{N\\times M} \\varphi(\\lambda, \\mu)\\;\\mathrm{d}\\mathcal{E}(\\lambda, \\mu)\\right)\\operatorname{T}"
},
{
"math_id": 15,
"text": "H=G"
},
{
"math_id": 16,
"text": "A"
},
{
"math_id": 17,
"text": "B"
},
{
"math_id": 18,
"text": "\\operatorname{T}:=B-A"
},
{
"math_id": 19,
"text": "f"
},
{
"math_id": 20,
"text": "S"
},
{
"math_id": 21,
"text": "\\sigma(A)"
},
{
"math_id": 22,
"text": "\\sigma(B)"
},
{
"math_id": 23,
"text": "\\operatorname{I}"
},
{
"math_id": 24,
"text": "\\operatorname{J}_{\\lambda}\\operatorname{I}=A"
},
{
"math_id": 25,
"text": "\\operatorname{J}_{\\mu}\\operatorname{I}=B"
},
{
"math_id": 26,
"text": "\\operatorname{J}_{\\mu-\\lambda}\\operatorname{I}=\\operatorname{T}"
},
{
"math_id": 27,
"text": "f(B)-f(A)=\\operatorname{J}_{f(\\mu)-f(\\lambda)}\\operatorname{I}=\\operatorname{J}_{\\frac{f(\\mu)-f(\\lambda)}{\\mu-\\lambda}}\\operatorname{J}_{\\mu-\\lambda}\\operatorname{I}=\\operatorname{J}_{\\frac{f(\\mu)-f(\\lambda)}{\\mu-\\lambda}}\\operatorname{T}=\\operatorname{Q}_{\\varphi}"
},
{
"math_id": 28,
"text": "f(B)-f(A)=\\int_{\\sigma(A)}\\int_{\\sigma(B)}\\frac{f(\\mu)-f(\\lambda)}{\\mu-\\lambda}(\\mu-\\lambda)\\mathrm{d}E_A(\\lambda)\\mathrm{d}F_B(\\mu)=\\int_{\\sigma(A)}\\int_{\\sigma(B)}\\frac{f(\\mu)-f(\\lambda)}{\\mu-\\lambda}\\mathrm{d}E_A(\\lambda)\\operatorname{T}\\mathrm{d}F_B(\\mu),"
},
{
"math_id": 29,
"text": "E_A(\\cdot)"
},
{
"math_id": 30,
"text": "F_B(\\cdot)"
}
]
| https://en.wikipedia.org/wiki?curid=73052423 |
73052924 | Math walk | An educational walk
A math walk, or math trail, is a type of themed walk in the US, where direct experience is translated into the language of mathematics or abstract mathematical sciences such as information science, computer science, decision science, or probability and statistics. Some sources specify how to create a math walk whereas others define a math walk at a specific location such as a junior high school or in Boston. The journal The Mathematics Teacher includes a special section titled "Mathematical Lens" in many issues with the metaphor of lens capturing seeing the world as mathematics.
Informal learning.
The idea that "math is everywhere", which is emphasized on a math walk, is captured by the philosophy of mathematicism with its early adherents, Pythagoras and Plato. The math walk also implicitly involves experiencing math via modeling since mathematics serves to model what we sense. The math walk is a form of informal learning, often in an outside environment or in a museum. This type of learning is contrasted with formal learning, which tends to be more structured and performed in a classroom. Math walks have been shown to encourage students to think more deeply about mathematics, and to connect school content to the real world.
Maps and object discovery.
There are different approaches to designing a math walk. The walk can be guided or unguided. In a guided walk, the learners are guided by a person knowledgeable in the topic of mathematics. In an unguided walk, learners are provided with a map. The map identifies walking stops and identifiers, such as QR codes or bluetooth beacons, to provide additional information on how the objects experienced during a math walk are translated into mathematical language.
Example math walk scene.
A walk can involve translation only, or translation and problem solving. For example, considering a window on a building involves first perceiving the window. After perception, there is a translation of the form of the window to mathematical language, such as the array formula_0 where formula_1 is the window's width and formula_2 is the window's length. The array formula_0 is a mathematical model of the window. This modeling is pure translation, without explicit problem solving. Questions such as "what is the area of the window?" require not only translation, but also the problem of solving for area: formula_3.
A photo of the railroad tracks in Fernandina Beach Historic District captures a stop on a math walk. The walk's information can focus on discrete items. These items reflect counting and number sense. Examples of discrete items are the cloud structures, the distant red harbor cranes, power line poles, wooden railroad ties, the diagonal lines in the road, and the cross walk across the rails.
The counting of the ties leads to the idea of iteration in computer programming and, more generally, to discrete mathematics, the core of computer science. For iteration, we can use a programming language such as Python or C to encode the syntactical form of the iteration for a computer program.
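A minimal sketch of that iteration in Python, assuming a hypothetical count of 25 visible ties:

ties = ["tie"] * 25          # assumed number of visible wooden ties at this stop
count = 0
for tie in ties:             # iterate over the ties, one at a time
    count += 1
print(count)                 # 25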
Other computer science related topics include a labeled directed graph that defines a semantic network. Such a network captures the objects in the photo as well as the relations among those objects. The semantic network is generally represented by a diagram with circles (concepts) and arrows (directed relations). There are additional indirect mathematical relations, including a differential equation that would define the motion of the train engine, with time as an independent variable.
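A small sketch of such a semantic network in Python, with the objects and relations assumed from the description of the scene above:

# (subject, relation, object) triples form the labeled directed edges.
edges = [
    ("train engine", "runs on", "railroad tracks"),
    ("railroad tracks", "made of", "wooden ties"),
    ("crosswalk", "crosses", "railroad tracks"),
    ("harbor cranes", "stand near", "railroad tracks"),
]
for subject, relation, obj in edges:
    print(subject, "--" + relation + "-->", obj)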
Connecting school subject to standards.
Exemplars of informal learning, such as a math walk, create opportunities for traditional education in school. Math walks can be a component in classroom pedagogy or in an after-school event. A key strategy is to create a mapping from what is learned on the walk to what is learned in school. This task is complicated due to geographic region, classification, and standards. A math walk can be situated as early as elementary school.
Mapping walk content to disciplinary subject areas in US math education begins with the Common Core standards, which the majority of states have adopted and which cover English Language Arts and Mathematics. Within each state's standards, one must identify the grade level. A table in Common Core, titled "Mathematics Domains at Each Grade Level", summarizes the mapping of math subject to level. Once the mapping between objects on the math walk and corresponding school subjects is known, it should be included as part of the walk information. This linkage will assist both student and teacher. "Know your audience" is key to successful educational delivery along a math walk.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "(w,l)"
},
{
"math_id": 1,
"text": "w"
},
{
"math_id": 2,
"text": "l"
},
{
"math_id": 3,
"text": "a = l \\times w"
}
]
| https://en.wikipedia.org/wiki?curid=73052924 |
730585 | Hamilton–Jacobi–Bellman equation | An optimality condition in optimal control theory
The Hamilton–Jacobi–Bellman (HJB) equation is a nonlinear partial differential equation that provides necessary and sufficient conditions for optimality of a control with respect to a loss function. Its solution is the value function of the optimal control problem which, once known, can be used to obtain the optimal control by taking the maximizer (or minimizer) of the Hamiltonian involved in the HJB equation.
The equation is a result of the theory of dynamic programming which was pioneered in the 1950s by Richard Bellman and coworkers. The connection to the Hamilton–Jacobi equation from classical physics was first drawn by Rudolf Kálmán. In discrete-time problems, the analogous difference equation is usually referred to as the Bellman equation.
While classical variational problems, such as the brachistochrone problem, can be solved using the Hamilton–Jacobi–Bellman equation, the method can be applied to a broader spectrum of problems. Further it can be generalized to stochastic systems, in which case the HJB equation is a second-order elliptic partial differential equation. A major drawback, however, is that the HJB equation admits classical solutions only for a sufficiently smooth value function, which is not guaranteed in most situations. Instead, the notion of a viscosity solution is required, in which conventional derivatives are replaced by (set-valued) subderivatives.
Optimal Control Problems.
Consider the following problem in deterministic optimal control over the time period formula_0:
formula_1
where formula_2 is the scalar cost rate function and formula_3 is a function that gives the bequest value at the final state, formula_4 is the system state vector, formula_5 is assumed given, and formula_6 for formula_7 is the control vector that we are trying to find. Thus, formula_8 is the value function.
The system must also be subject to
formula_9
where formula_10 gives the vector determining physical evolution of the state vector over time.
The Partial Differential Equation.
For this simple system, the Hamilton–Jacobi–Bellman partial differential equation is
formula_11
subject to the terminal condition
formula_12
As before, the unknown scalar function formula_8 in the above partial differential equation is the Bellman value function, which represents the cost incurred from starting in state formula_13 at time formula_14 and controlling the system optimally from then until time formula_15.
Deriving the Equation.
Intuitively, the HJB equation can be derived as follows. If formula_16 is the optimal cost-to-go function (also called the 'value function'), then by Richard Bellman's principle of optimality, going from time "t" to "t" + "dt", we have
formula_17
Note that the Taylor expansion of the first term on the right-hand side is
formula_18
where formula_19 denotes the terms in the Taylor expansion of higher order than one in little-"o" notation. Then if we subtract formula_16 from both sides, divide by "dt", and take the limit as "dt" approaches zero, we obtain the HJB equation defined above.
Solving the Equation.
The HJB equation is usually solved backwards in time, starting from formula_20 and ending at formula_21.
When solved over the whole state space, and when formula_22 is continuously differentiable, the HJB equation is a necessary and sufficient condition for an optimum when the terminal state is unconstrained. If we can solve for formula_23, then we can find from it a control formula_24 that achieves the minimum cost.
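As a rough illustration of the backward-in-time solution (a sketch, not a production solver), the following discretizes a hypothetical one-dimensional problem with dynamics F(x,u) = u, running cost C(x,u) = x^2 + u^2 and zero terminal cost D(x) = 0 on a grid; all discretization choices are assumptions:

import numpy as np

T, dt = 1.0, 0.01
xs = np.linspace(-2.0, 2.0, 201)          # state grid
us = np.linspace(-2.0, 2.0, 41)           # candidate controls

V = np.zeros_like(xs)                     # terminal condition V(x, T) = D(x) = 0
for _ in range(int(T / dt)):              # sweep from t = T back to t = 0
    # Pay the running cost over dt, then continue from the successor state
    # x + F(x, u) dt, evaluated by linear interpolation of V on the grid.
    x_next = xs[:, None] + us[None, :] * dt
    cost = (xs[:, None] ** 2 + us[None, :] ** 2) * dt + np.interp(x_next, xs, V)
    V = cost.min(axis=1)                  # Bellman minimization over the control

print(V[len(xs) // 2])                    # approximate cost-to-go from x = 0 at t = 0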
In the general case, the HJB equation does not have a classical (smooth) solution. Several notions of generalized solutions have been developed to cover such situations, including viscosity solution (Pierre-Louis Lions and Michael Crandall), minimax solution (Andrei Izmailovich Subbotin), and others.
Approximate dynamic programming has been introduced by D. P. Bertsekas and J. N. Tsitsiklis with the use of artificial neural networks (multilayer perceptrons) for approximating the Bellman function in general. This is an effective mitigation strategy for reducing the impact of dimensionality by replacing the memorization of the complete function mapping for the whole space domain with the memorization of the sole neural network parameters. In particular, for continuous-time systems, an approximate dynamic programming approach that combines both policy iterations with neural networks was introduced. In discrete-time, an approach to solve the HJB equation combining value iterations and neural networks was introduced.
Alternatively, it has been shown that sum-of-squares optimization can yield an approximate polynomial solution to the Hamilton–Jacobi–Bellman equation arbitrarily well with respect to the formula_25 norm.
Extension to Stochastic Problems.
The idea of solving a control problem by applying Bellman's principle of optimality and then working out backwards in time an optimizing strategy can be generalized to stochastic control problems. Consider similar as above
formula_26
now with formula_27 the stochastic process to optimize and formula_28 the steering. By first using Bellman and then expanding formula_29 with Itô's rule, one finds the stochastic HJB equation
formula_30
where formula_31 represents the stochastic differentiation operator, and subject to the terminal condition
formula_32
Note that the randomness has disappeared. In this case a solution formula_33 of the latter does not necessarily solve the primal problem, it is a candidate only and a further verifying argument is required. This technique is widely used in Financial Mathematics to determine optimal investment strategies in the market (see for example Merton's portfolio problem).
Application to LQG-Control.
As an example, we can look at a system with linear stochastic dynamics and quadratic cost. If the system dynamics is given by
formula_34
and the cost accumulates at rate formula_35, the HJB equation is given by
formula_36
with optimal action given by
formula_37
Assuming a quadratic form for the value function, we obtain the usual Riccati equation for the Hessian of the value function as is usual for Linear-quadratic-Gaussian control.
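A sketch of that reduction: substituting the quadratic ansatz V(x,t) = S(t)x^2/2 + c(t) into the HJB equation above and matching the x^2 terms gives the scalar Riccati equation dS/dt = -q(t) - 2aS + (b^2/r(t))S^2, which can be integrated backwards in time. The model constants, the constant cost weights and the zero terminal cost below are assumptions, and the noise intensity only enters the constant term c(t), not S(t):

a, b = -0.5, 1.0                 # assumed drift and control coefficients
q = lambda t: 1.0                # assumed constant state-cost weight
r = lambda t: 1.0                # assumed constant control-cost weight

T, dt = 5.0, 1e-3
S, t = 0.0, T                    # S(T) = 0 under the assumed zero terminal cost
while t > 0.0:
    dS = -q(t) - 2.0 * a * S + (b ** 2 / r(t)) * S ** 2
    S -= dS * dt                 # explicit Euler step backwards in time
    t -= dt

print(S)                         # S(0); the optimal feedback is u = -(b / r) S x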
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "[0,T]"
},
{
"math_id": 1,
"text": "V(x(0), 0) = \\min_u \\left\\{ \\int_0^T C[x(t),u(t)]\\,dt + D[x(T)] \\right\\}"
},
{
"math_id": 2,
"text": "C[\\cdot]"
},
{
"math_id": 3,
"text": "D[\\cdot]"
},
{
"math_id": 4,
"text": "x(t)"
},
{
"math_id": 5,
"text": "x(0)"
},
{
"math_id": 6,
"text": "u(t)"
},
{
"math_id": 7,
"text": " 0 \\leq t \\leq T"
},
{
"math_id": 8,
"text": "V(x, t)"
},
{
"math_id": 9,
"text": " \\dot{x}(t)=F[x(t),u(t)] \\, "
},
{
"math_id": 10,
"text": "F[\\cdot]"
},
{
"math_id": 11,
"text": "\n\\frac{\\partial V(x,t)}{\\partial t} + \\min_u \\left\\{ \\frac{\\partial V(x,t)}{\\partial x} \\cdot F(x, u) + C(x,u) \\right\\} = 0\n"
},
{
"math_id": 12,
"text": "\nV(x,T) = D(x),\\,\n"
},
{
"math_id": 13,
"text": "x"
},
{
"math_id": 14,
"text": "t"
},
{
"math_id": 15,
"text": "T"
},
{
"math_id": 16,
"text": "V(x(t), t)"
},
{
"math_id": 17,
"text": " V(x(t), t) = \\min_u \\left\\{V(x(t+dt), t+dt) + \\int_t^{t + dt} C(x(s), u(s)) \\, ds\\right\\}. "
},
{
"math_id": 18,
"text": " V(x(t+dt), t+dt) = V(x(t), t) + \\frac{\\partial V(x, t)}{\\partial t} \\, dt + \\frac{\\partial V(x, t)}{\\partial x} \\cdot \\dot{x}(t) \\, dt + \\mathcal{o}(dt),"
},
{
"math_id": 19,
"text": "\\mathcal{o}(dt)"
},
{
"math_id": 20,
"text": "t = T"
},
{
"math_id": 21,
"text": "t = 0"
},
{
"math_id": 22,
"text": "V(x)"
},
{
"math_id": 23,
"text": "V"
},
{
"math_id": 24,
"text": "u"
},
{
"math_id": 25,
"text": "L^1"
},
{
"math_id": 26,
"text": " \\min_u \\mathbb E \\left\\{ \\int_0^T C(t,X_t,u_t)\\,dt + D(X_T) \\right\\}"
},
{
"math_id": 27,
"text": "(X_t)_{t \\in [0,T]}\\,\\!"
},
{
"math_id": 28,
"text": "(u_t)_{t \\in [0,T]}\\,\\!"
},
{
"math_id": 29,
"text": "V(X_t,t)"
},
{
"math_id": 30,
"text": "\n\\min_u \\left\\{ \\mathcal{A} V(x,t) + C(t,x,u) \\right\\} = 0,\n"
},
{
"math_id": 31,
"text": "\\mathcal{A}"
},
{
"math_id": 32,
"text": "\nV(x,T) = D(x)\\,\\!.\n"
},
{
"math_id": 33,
"text": "V\\,\\!"
},
{
"math_id": 34,
"text": "\ndx_t = (a x_t + b u_t) dt + \\sigma dw_t,\n"
},
{
"math_id": 35,
"text": "C(x_t,u_t) = r(t) u_t^2/2 + q(t) x_t^2/2"
},
{
"math_id": 36,
"text": "\n-\\frac{\\partial V(x,t)}{\\partial t} = \\frac{1}{2}q(t) x^2 + \\frac{\\partial V(x,t)}{\\partial x} a x - \\frac{b^2}{2 r(t)} \\left(\\frac{\\partial V(x,t)}{\\partial x}\\right)^2 + \\frac{\\sigma^2}{2} \\frac{\\partial^2 V(x,t)}{\\partial x^2}.\n"
},
{
"math_id": 37,
"text": "\nu_t = -\\frac{b}{r(t)}\\frac{\\partial V(x,t)}{\\partial x}\n"
}
]
| https://en.wikipedia.org/wiki?curid=730585 |
7306242 | Epstein–Zin preferences | In economics, Epstein–Zin preferences refers to a specification of recursive utility.
A recursive utility function can be constructed from two components: a time aggregator that characterizes preferences in the absence of uncertainty, and a risk aggregator that defines the certainty equivalent function that characterizes preferences over static gambles and is used to aggregate the risk associated with future utility. With Epstein–Zin preferences, the time aggregator is a linearly homogeneous CES aggregate of current consumption and the certainty equivalent of future utility. Specifically, the date-t utility index, formula_0, for a sequence of positive scalar consumptions formula_1, that are potentially stochastic for time periods beyond date t, is defined recursively as the solution to the nonlinear stochastic difference equation
formula_2
where formula_3 is a real-valued certainty equivalent operator. The parameter formula_4 determines the marginal rate of time preference, formula_5, and the parameter formula_6 determines the elasticity of intertemporal substitution, formula_7. Epstein and Zin considered a variety of certainty equivalent operators, but a popular choice for both theoretical and empirical research has been formula_8, where formula_9 denotes the expected value of probability distribution of formula_10, conditional on information available to the planner in date t. The parameter formula_11 encodes risk aversion, with smaller values of formula_12, other things equal, implying a stronger aversion to risk. The parameter restriction formula_13 results in a time-additive von Neumann–Morgenstern expected utility index.
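A minimal sketch of one step of this recursion with the power certainty equivalent, using assumed preference parameters, equally likely states, and assumed continuation utilities (conventions for the terminal condition vary):

import numpy as np

def certainty_equivalent(U_next, probs, alpha):
    # mu_t(U_{t+1}) = (E_t[U_{t+1}^alpha])**(1/alpha)
    return (probs @ U_next ** alpha) ** (1.0 / alpha)

def ez_aggregate(c_t, mu, beta, rho):
    # U_t = [(1 - beta) c_t^rho + beta mu^rho]**(1/rho)
    return ((1.0 - beta) * c_t ** rho + beta * mu ** rho) ** (1.0 / rho)

beta, rho, alpha = 0.96, 0.5, -1.0     # assumed: EIS = 1/(1 - rho) = 2; smaller alpha means more risk aversion
probs = np.array([0.5, 0.5])           # two equally likely states next period
U_next = np.array([1.2, 0.9])          # assumed continuation utilities in those states

U_t = ez_aggregate(1.0, certainty_equivalent(U_next, probs, alpha), beta, rho)
print(U_t)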
Importantly, unlike von Neumann–Morgenstern utility functions (e.g. isoelastic utility), Epstein–Zin preferences allow the elasticity of intertemporal substitution (determined above by formula_14) to be unrelated to risk aversion (determined above by formula_12). | [
{
"math_id": 0,
"text": "U_t"
},
{
"math_id": 1,
"text": "\\{c_t, c_{t+1}, c_{t+2}, ...\\}"
},
{
"math_id": 2,
"text": "\nU_t = [ (1-\\beta) c_t^\\rho + \\beta \\mu_t(U_{t+1})^\\rho ]^{1/\\rho} , \n"
},
{
"math_id": 3,
"text": "\\mu_t( )"
},
{
"math_id": 4,
"text": "0<\\beta<1"
},
{
"math_id": 5,
"text": "1/\\beta -1"
},
{
"math_id": 6,
"text": "\\rho<1"
},
{
"math_id": 7,
"text": "1/(1-\\rho)"
},
{
"math_id": 8,
"text": "\\mu_t(U_{t+1})=[E_t U_{t+1}^\\alpha]^{1/\\alpha}"
},
{
"math_id": 9,
"text": "E_t"
},
{
"math_id": 10,
"text": "U_{t+1}"
},
{
"math_id": 11,
"text": "\\alpha < 1"
},
{
"math_id": 12,
"text": "\\alpha"
},
{
"math_id": 13,
"text": "\\alpha=\\rho"
},
{
"math_id": 14,
"text": "\\rho"
}
]
| https://en.wikipedia.org/wiki?curid=7306242 |
730684 | 44 Nysa | Main-belt asteroid
44 Nysa is a large and very bright main-belt asteroid, and the brightest member of the Nysian asteroid family. It is classified as a rare class E asteroid and is probably the largest of this type (though 55 Pandora is only slightly smaller).
Discovery.
It was discovered by Hermann Goldschmidt on May 27, 1857, and named after the mythical land of Nysa in Greek mythology.
Physical properties.
In 2002 Kaasalainen "et al." used 63 lightcurves from the Uppsala Asteroid Photometric Catalog (UAPC) to construct a shape model of 44 Nysa. The shape model is conical, which they interpreted as indicating the asteroid may actually be a contact binary.
In 2003, Tanga "et al." published results obtained from the Fine Guidance Sensor on the Hubble Space Telescope in which high-precision interferometry was performed on Nysa with the goal of a more accurate shape determination. Due to Hubble's orbit around the Earth, hours-long photometry sessions, as are normally used to resolve the asteroid's shape, were not possible. Instead, the team used interferometry on the asteroid at the time in its rotation when it would have its longest axis perpendicular to the Earth. Ellipsoidal shape models were then fit to the resulting data to determine an estimate of the asteroid's shape. Both single and double ellipsoid models were fit to the data with both providing approximately the same goodness of fit; leaving the team unable to differentiate between a single elongated object and the contact binary model put forth by Kaasalainen "et al."
An observation of an occultation by 44 Nysa of TYC 6273-01033-1 from the Dutch amateur astronomer Harrie Rutten showed a two-phase reappearance on March 20, 2012. This confirms the conical shape or the binary nature of Nysa.
In December 2006, Shepard "et al." performed three days of radar observations on Nysa with the Arecibo radio telescope. The asteroid was found to have a high radar polarization value ("μc") of 0.50 ± 0.2, a radar albedo (formula_0) of 0.19 ± 0.06, and a visual albedo ("pv") of 0.44 ± 0.10. The albedo measurements were based on a shape model worked out at Arecibo. The best fit shape model as measured by the Arecibo team has parameters a/b = 1.7 ± 0.1, a/c = 1.6–1.9, with an a-axis of 113 ± 10 km; this gives an effective diameter of 79 ± 10 km, which is in agreement with the HST study by Tanga "et al." in 2003. The data gathered also showed signs of significant concavity in Nysa's structure, but the dip in the radar curves is not pronounced enough to indicate bifurcation, calling into question whether or not Nysa really is a contact binary.
Nysa has so far been reported occulting a star three times.
Studies.
44 Nysa was in a study of asteroids using the Hubble FGS. Asteroids studied include 63 Ausonia, 15 Eunomia, 43 Ariadne, 44 Nysa, and 624 Hektor.
Notes.
<templatestyles src="Reflist/styles.css" />
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\hat{\\sigma}_{OC}"
}
]
| https://en.wikipedia.org/wiki?curid=730684 |
73068768 | Seats-to-votes ratio | Measure of equal representation
The seats-to-votes ratio, also known as the advantage ratio, is a measure of equal representation of voters. The equation for seats-to-votes ratio for a political party "i" is:
formula_0,
where formula_1 is fraction of votes and formula_2 is fraction of seats.
If both seats and votes are represented as fractions or percentages, every voter has equal representation when the seats-to-votes ratio is 1. The principle of equal representation is expressed in the slogan one man, one vote and relates to proportional representation.
A related measure is the votes-per-seat-won, which is the inverse of the seats-to-votes ratio.
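A quick sketch of the ratio in Python, with made-up shares:

# Hypothetical three-party example with vote shares and seat shares as fractions.
votes = {"A": 0.45, "B": 0.35, "C": 0.20}
seats = {"A": 0.50, "B": 0.35, "C": 0.15}

advantage = {party: seats[party] / votes[party] for party in votes}
print(advantage)   # A is over-represented (ratio > 1), B proportional, C under-represented (< 1)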
Relation to disproportionality indices.
The Sainte-Laguë Index is a disproportionality index derived by applying the Pearson's chi-squared test to the seats-to-votes ratio, the Gallagher index has a similar formula.
Seats-to-votes ratio for seat allocation.
Different apportionment methods such as Sainte-Laguë method and D'Hondt method differ in the seats-to-votes ratio for individual parties.
Seats-to-votes ratio for Sainte-Laguë method.
The Sainte-Laguë method optimizes the seats-to-votes ratio among all parties formula_3 with the least squares approach. The difference of the seats-to-votes ratio and the ideal seats-to-votes ratio for each party is squared, weighted according to the vote share of each party and summed up:
formula_4
It was shown that this error is minimized by the Sainte-Laguë method.
Seats-to-votes ratio for D'Hondt method.
The D'Hondt method approximates proportionality by minimizing the largest seats-to-votes ratio among all parties. The largest seats-to-votes ratio measures how over-represented the most over-represented party is:
formula_5
The D'Hondt method minimizes the largest seats-to-votes ratio by assigning the seats,
formula_6
where formula_7 is a seat allocation from the set of all allowed seat allocations formula_8.
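A sketch of a D'Hondt allocation for hypothetical vote counts, followed by the largest advantage ratio it produces:

votes = {"A": 45000, "B": 35000, "C": 20000}   # assumed vote counts
total_seats = 10

seats = {party: 0 for party in votes}
for _ in range(total_seats):
    # the next seat goes to the party with the highest quotient v_i / (s_i + 1)
    winner = max(votes, key=lambda party: votes[party] / (seats[party] + 1))
    seats[winner] += 1

total_votes = sum(votes.values())
ratios = {party: (seats[party] / total_seats) / (votes[party] / total_votes) for party in votes}
print(seats, max(ratios.values()))             # {'A': 5, 'B': 3, 'C': 2} 1.11...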
Notes.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "\\mathrm{a_i} = s_i/v_i"
},
{
"math_id": 1,
"text": "\\mathrm{v_i}"
},
{
"math_id": 2,
"text": "s_i"
},
{
"math_id": 3,
"text": "i"
},
{
"math_id": 4,
"text": "error = \\sum_i {v_i*\\left(\\frac{s_i}{v_i}-1\\right)^2}"
},
{
"math_id": 5,
"text": "\\delta = \\max_i a_i,"
},
{
"math_id": 6,
"text": "\\delta^* = \\min_{\\mathbf{s} \\in \\mathcal{S}} \\max_i a_i,"
},
{
"math_id": 7,
"text": "\\mathbf{s}"
},
{
"math_id": 8,
"text": "\\mathcal{S}"
}
]
| https://en.wikipedia.org/wiki?curid=73068768 |
7307216 | Hammett acidity function | Measure of acidity used for extremely acidic solutions
The Hammett acidity function ("H"0) is a measure of acidity that is used for very concentrated solutions of strong acids, including superacids. It was proposed by the physical organic chemist Louis Plack Hammett and is the best-known acidity function used to extend the measure of Brønsted–Lowry acidity beyond the dilute aqueous solutions for which the pH scale is useful.
In highly concentrated solutions, simple approximations such as the Henderson–Hasselbalch equation are no longer valid due to the variations of the activity coefficients. The Hammett acidity function is used in fields such as physical organic chemistry for the study of acid-catalyzed reactions, because some of these reactions use acids in very high concentrations, or even neat (pure).
Definition.
The Hammett acidity function, "H"0, can replace the pH in concentrated solutions. It is defined using an equation analogous to the Henderson–Hasselbalch equation:
formula_0
where log(x) is the common logarithm of x, and p"K"BH+ is −log("K") for the dissociation of BH+, which is the conjugate acid of a very weak base B, with a very negative p"K"BH+. In this way, it is rather as if the pH scale has been extended to very negative values. Hammett originally used a series of anilines with electron-withdrawing groups for the bases.
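A toy evaluation of this defining equation in Python, with assumed (not measured) numbers for a single indicator base B:

import math

pK_BH = -9.3                    # assumed pK of the protonated indicator BH+
ratio_B_to_BHplus = 0.02        # assumed measured concentration ratio [B]/[BH+]

H0 = pK_BH + math.log10(ratio_B_to_BHplus)
print(H0)                       # about -11.0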
Hammett also pointed out the equivalent form
formula_1
where a is the activity, and the "γ" are thermodynamic activity coefficients. In dilute aqueous solution (pH 0–14) the predominant acid species is H3O+ and the activity coefficients are close to unity, so "H"0 is approximately equal to the pH. However, beyond this pH range, the effective hydrogen-ion activity changes much more rapidly than the concentration. This is often due to changes in the nature of the acid species; for example in concentrated sulfuric acid, the predominant acid species ("H+") is not H3O+ but rather H3SO4+, which is a much stronger acid. The value "H"0 = -12 for pure sulfuric acid must not be interpreted as pH = −12 (which would imply an impossibly high H3O+ concentration of 10+12 mol/L in ideal solution). Instead it means that the acid species present (H3SO4+) has a protonating ability equivalent to H3O+ at a fictitious (ideal) concentration of 1012 mol/L, as measured by its ability to protonate weak bases.
Although the Hammett acidity function is the best known acidity function, other acidity functions have been developed by authors such as Arnett, Cox, Katrizky, Yates, and Stevens.
Typical values.
On this scale, pure H2SO4 (18.4 M) has an "H"0 value of −12, and pyrosulfuric acid has "H"0 ~ −15. Note that the Hammett acidity function makes no explicit reference to water in its equation. It is a generalization of the pH scale: in a dilute aqueous solution (where B is H2O), pH is very nearly equal to "H"0. By using a solvent-independent quantitative measure of acidity, the implications of the leveling effect are eliminated, and it becomes possible to directly compare the acidities of different substances (e.g. using p"K"a, HF is weaker than HCl or H2SO4 in water but stronger than HCl in glacial acetic acid).
"H"0 for some concentrated acids:
For mixtures (e.g., partly diluted acids in water), the acidity function depends on the composition of the mixture and has to be determined empirically. Graphs of "H"0 vs mole fraction can be found in the literature for many acids.
References.
<templatestyles src="Reflist/styles.css" /> | [
{
"math_id": 0,
"text": "H_{0} = \\mbox{p}K_\\ce{BH^+} + \\log \\frac\\ce{[B]}\\ce{[BH^+]}"
},
{
"math_id": 1,
"text": "H_{0} = -\\log \\left ( a_\\ce{H^+} \\frac{\\gamma_\\ce{B}}{\\gamma_\\ce{BH^+}} \\right )"
}
]
| https://en.wikipedia.org/wiki?curid=7307216 |