Ideal (ring theory)
In mathematics, and more specifically in ring theory, an ideal of a ring is a special subset of its elements. Ideals generalize certain subsets of the integers, such as the even numbers or the multiples of 3. Addition and subtraction of even numbers preserves evenness, and multiplying an even number by any integer (even or odd) results in an even number; these closure and absorption properties are the defining properties of an ideal. An ideal can be used to construct a quotient ring in a way similar to how, in group theory, a normal subgroup can be used to construct a quotient group.
Among the integers, the ideals correspond one-for-one with the non-negative integers: in this ring, every ideal is a principal ideal consisting of the multiples of a single non-negative number. However, in other rings, the ideals may not correspond directly to the ring elements, and certain properties of integers, when generalized to rings, attach more naturally to the ideals than to the elements of the ring. For instance, the prime ideals of a ring are analogous to prime numbers, and the Chinese remainder theorem can be generalized to ideals. There is a version of unique prime factorization for the ideals of a Dedekind domain (a type of ring important in number theory).
The related, but distinct, concept of an ideal in order theory is derived from the notion of ideal in ring theory. A fractional ideal is a generalization of an ideal, and the usual ideals are sometimes called integral ideals for clarity.
History
Ernst Kummer invented the concept of ideal numbers to serve as the "missing" factors in number rings in which unique factorization fails; here the word "ideal" is in the sense of existing in imagination only, in analogy with "ideal" objects in geometry such as points at infinity.[1] In 1876, Richard Dedekind replaced Kummer's undefined concept by concrete sets of numbers, sets that he called ideals, in the third edition of Dirichlet's book Vorlesungen über Zahlentheorie, to which Dedekind had added many supplements.[1][2][3] Later the notion was extended beyond number rings to the setting of polynomial rings and other commutative rings by David Hilbert and especially Emmy Noether.
Definitions and motivation
For an arbitrary ring $(R,+,\cdot )$, let $(R,+)$ be its additive group. A subset I is called a left ideal of $R$ if it is an additive subgroup of $R$ that "absorbs multiplication from the left by elements of $R$"; that is, $I$ is a left ideal if it satisfies the following two conditions:
1. $(I,+)$ is a subgroup of $(R,+),$
2. For every $r\in R$ and every $x\in I$, the product $rx$ is in $I$.
A right ideal is defined with the condition $rx\in I$ replaced by $xr\in I$. A two-sided ideal is a left ideal that is also a right ideal, and is sometimes simply called an ideal. In the language of modules, the definitions mean that a left (resp. right, two-sided) ideal of $R$ is an $R$-submodule of $R$ when $R$ is viewed as a left (resp. right, bi-) $R$-module. When $R$ is a commutative ring, the definitions of left, right, and two-sided ideal coincide, and the term ideal is used alone.
To understand the concept of an ideal, consider how ideals arise in the construction of rings of "elements modulo". For concreteness, let us look at the ring $\mathbb {Z} /n\mathbb {Z} $ of integers modulo $n$ given an integer $n\in \mathbb {Z} $ ($\mathbb {Z} $ is a commutative ring). The key observation here is that we obtain $\mathbb {Z} /n\mathbb {Z} $ by taking the integer line $\mathbb {Z} $ and wrapping it around itself so that various integers get identified. In doing so, we must satisfy two requirements:
1) $n$ must be identified with 0 since $n$ is congruent to 0 modulo $n$.
2) the resulting structure must again be a ring.
The second requirement forces us to make additional identifications (i.e., it determines the precise way in which we must wrap $\mathbb {Z} $ around itself). The notion of an ideal arises when we ask the question:
What is the exact set of integers that we are forced to identify with 0?
The answer is, unsurprisingly, the set $n\mathbb {Z} =\{nm\mid m\in \mathbb {Z} \}$ of all integers congruent to 0 modulo $n$. That is, we must wrap $\mathbb {Z} $ around itself infinitely many times so that the integers $\ldots ,-2n,-n,n,2n,3n,\ldots $ will all align with 0. If we look at what properties this set must satisfy in order to ensure that $\mathbb {Z} /n\mathbb {Z} $ is a ring, then we arrive at the definition of an ideal. Indeed, one can directly verify that $n\mathbb {Z} $ is an ideal of $\mathbb {Z} $.
Remark. Identifications with elements other than 0 also need to be made. For example, the elements in $1+n\mathbb {Z} $ must be identified with 1, the elements in $2+n\mathbb {Z} $ must be identified with 2, and so on. Those, however, are uniquely determined by $n\mathbb {Z} $ since $\mathbb {Z} $ is an additive group.
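To see these requirements concretely, here is a minimal Python sketch (the window bounds are arbitrary) checking the two defining properties of an ideal for $n\mathbb {Z} $:

```python
n = 6
ideal = {n * m for m in range(-50, 51)}   # a finite window of the ideal nZ

# additive subgroup: differences of elements of nZ stay in nZ
print(all((x - y) % n == 0 for x in ideal for y in ideal))            # True

# absorption: any integer r times an element of nZ lands back in nZ
print(all((r * x) % n == 0 for r in range(-10, 11) for x in ideal))   # True
```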
We can make a similar construction in any commutative ring $R$: start with an arbitrary $x\in R$, and then identify with 0 all elements of the ideal $xR=\{xr\mid r\in R\}$. It turns out that the ideal $xR$ is the smallest ideal that contains $x$, called the ideal generated by $x$. More generally, we can start with an arbitrary subset $S\subseteq R$, and then identify with 0 all the elements in the ideal generated by $S$: the smallest ideal $(S)$ such that $S\subseteq (S)$. The ring that we obtain after the identification depends only on the ideal $(S)$ and not on the set $S$ that we started with. That is, if $(S)=(T)$, then the resulting rings will be the same.
Therefore, an ideal $I$ of a commutative ring $R$ captures canonically the information needed to obtain the ring of elements of $R$ modulo a given subset $S\subseteq R$. The elements of $I$, by definition, are those that are congruent to zero, that is, identified with zero in the resulting ring. The resulting ring is called the quotient of $R$ by $I$ and is denoted $R/I$. Intuitively, the definition of an ideal postulates two natural conditions necessary for $I$ to contain all elements designated as "zeros" by $R/I$:
1. $I$ is an additive subgroup of $R$: the zero 0 of $R$ is a "zero" $0\in I$, and if $x_{1}\in I$ and $x_{2}\in I$ are "zeros", then $x_{1}-x_{2}\in I$ is a "zero" too.
2. Any $r\in R$ multiplied by a "zero" $x\in I$ is a "zero" $rx\in I$.
It turns out that the above conditions are also sufficient for $I$ to contain all the necessary "zeros": no other elements have to be designated as "zero" in order to form $R/I$. (In fact, no other elements should be designated as "zero" if we want to make the fewest identifications.)
Remark. The above construction still works using two-sided ideals even if $R$ is not necessarily commutative.
Examples and properties
(For the sake of brevity, some results are stated only for left ideals but are usually also true for right ideals with appropriate notation changes.)
• In a ring R, the set R itself forms a two-sided ideal of R called the unit ideal. It is often also denoted by $(1)$ since it is precisely the two-sided ideal generated (see below) by the unity $1_{R}$. Also, the set $\{0_{R}\}$ consisting of only the additive identity 0R forms a two-sided ideal called the zero ideal and is denoted by $(0)$.[note 1] Every (left, right or two-sided) ideal contains the zero ideal and is contained in the unit ideal.[4]
• A (left, right or two-sided) ideal that is not the unit ideal is called a proper ideal (as it is a proper subset).[5] Note: a left ideal ${\mathfrak {a}}$ is proper if and only if it does not contain a unit element, since if $u\in {\mathfrak {a}}$ is a unit element, then $r=(ru^{-1})u\in {\mathfrak {a}}$ for every $r\in R$. Typically there are plenty of proper ideals. In fact, if R is a skew-field, then $(0),(1)$ are its only ideals, and conversely: a nonzero ring R is a skew-field if $(0),(1)$ are the only left (or right) ideals. (Proof: if $x$ is a nonzero element, then the principal left ideal $Rx$ (see below) is nonzero and thus $Rx=(1)$; i.e., $yx=1$ for some nonzero $y$. Likewise, $zy=1$ for some nonzero $z$. Then $z=z(yx)=(zy)x=x$.)
• The even integers form an ideal in the ring $\mathbb {Z} $ of all integers, since the sum of any two even integers is even, and the product of any integer with an even integer is also even; this ideal is usually denoted by $2\mathbb {Z} $. More generally, the set of all integers divisible by a fixed integer $n$ is an ideal denoted $n\mathbb {Z} $. In fact, every non-zero ideal of the ring $\mathbb {Z} $ is generated by its smallest positive element, as a consequence of Euclidean division, so $\mathbb {Z} $ is a principal ideal domain.[4] (A numeric illustration appears in the sketch after this list.)
• The set of all polynomials with real coefficients which are divisible by the polynomial $x^{2}+1$ is an ideal in the ring of all real-coefficient polynomials $\mathbb {R} [x]$.
• Take a ring $R$ and positive integer $n$. For each $1\leq i\leq n$, the set of all $n\times n$ matrices with entries in $R$ whose $i$-th row is zero is a right ideal in the ring $M_{n}(R)$ of all $n\times n$ matrices with entries in $R$. It is not a left ideal. Similarly, for each $1\leq j\leq n$, the set of all $n\times n$ matrices whose $j$-th column is zero is a left ideal but not a right ideal.
• The ring $C(\mathbb {R} )$ of all continuous functions $f$ from $\mathbb {R} $ to $\mathbb {R} $ under pointwise multiplication contains the ideal of all continuous functions $f$ such that $f(1)=0$.[6] Another ideal in $C(\mathbb {R} )$ is given by those functions which vanish for large enough arguments, i.e. those continuous functions $f$ for which there exists a number $L>0$ such that $f(x)=0$ whenever $|x|>L$.
• A ring is called a simple ring if it is nonzero and has no two-sided ideals other than $(0),(1)$. Thus, a skew-field is simple and a simple commutative ring is a field. The matrix ring over a skew-field is a simple ring.
• If $f:R\to S$ is a ring homomorphism, then the kernel $\ker(f)=f^{-1}(0_{S})$ is a two-sided ideal of $R$.[4] By definition, $f(1_{R})=1_{S}$, and thus if $S$ is not the zero ring (so $1_{S}\neq 0_{S}$), then $\ker(f)$ is a proper ideal. More generally, for each left ideal I of S, the pre-image $f^{-1}(I)$ is a left ideal. If I is a left ideal of R, then $f(I)$ is a left ideal of the subring $f(R)$ of S: unless f is surjective, $f(I)$ need not be an ideal of S; see also #Extension and contraction of an ideal below.
• Ideal correspondence: Given a surjective ring homomorphism $f:R\to S$, there is a bijective order-preserving correspondence between the left (resp. right, two-sided) ideals of $R$ containing the kernel of $f$ and the left (resp. right, two-sided) ideals of $S$: the correspondence is given by $I\mapsto f(I)$ and the pre-image $J\mapsto f^{-1}(J)$. Moreover, for commutative rings, this bijective correspondence restricts to prime ideals, maximal ideals, and radical ideals (see the Types of ideals section for the definitions of these ideals).
• (For those who know modules) If M is a left R-module and $S\subset M$ a subset, then the annihilator $\operatorname {Ann} _{R}(S)=\{r\in R\mid rs=0{\text{ for all }}s\in S\}$ of S is a left ideal. Given ideals ${\mathfrak {a}},{\mathfrak {b}}$ of a commutative ring R, the R-annihilator of $({\mathfrak {b}}+{\mathfrak {a}})/{\mathfrak {a}}$ is an ideal of R called the ideal quotient of ${\mathfrak {a}}$ by ${\mathfrak {b}}$ and is denoted by $({\mathfrak {a}}:{\mathfrak {b}})$; it is an instance of the idealizer construction in commutative algebra.
• Let ${\mathfrak {a}}_{i},i\in S$ be an ascending chain of left ideals in a ring R; i.e., $S$ is a totally ordered set and ${\mathfrak {a}}_{i}\subset {\mathfrak {a}}_{j}$ for each $i<j$. Then the union $\textstyle \bigcup _{i\in S}{\mathfrak {a}}_{i}$ is a left ideal of R. (Note: this fact remains true even if R is without the unity 1.)
• The above fact together with Zorn's lemma proves the following: if $E\subset R$ is a possibly empty subset and ${\mathfrak {a}}_{0}\subset R$ is a left ideal that is disjoint from E, then there is an ideal that is maximal among the ideals containing ${\mathfrak {a}}_{0}$ and disjoint from E. (Again this is still valid if the ring R lacks the unity 1.) When $R\neq 0$, taking ${\mathfrak {a}}_{0}=(0)$ and $E=\{1\}$, in particular, there exists a left ideal that is maximal among proper left ideals (often simply called a maximal left ideal); see Krull's theorem for more.
• An arbitrary union of ideals need not be an ideal, but the following is still true: given a possibly empty subset X of R, there is the smallest left ideal containing X, called the left ideal generated by X and is denoted by $RX$. Such an ideal exists since it is the intersection of all left ideals containing X. Equivalently, $RX$ is the set of all the (finite) left R-linear combinations of elements of X over R:
$RX=\{r_{1}x_{1}+\dots +r_{n}x_{n}\mid n\in \mathbb {N} ,r_{i}\in R,x_{i}\in X\}.$
(since such a span is the smallest left ideal containing X.)[note 2] A right (resp. two-sided) ideal generated by X is defined in a similar way. For "two-sided", one has to use linear combinations from both sides; i.e.,
$RXR=\{r_{1}x_{1}s_{1}+\dots +r_{n}x_{n}s_{n}\mid n\in \mathbb {N} ,r_{i}\in R,s_{i}\in R,x_{i}\in X\}.\,$
• A left (resp. right, two-sided) ideal generated by a single element x is called the principal left (resp. right, two-sided) ideal generated by x and is denoted by $Rx$ (resp. $xR,RxR$). The principal two-sided ideal $RxR$ is often also denoted by $(x)$. If $X=\{x_{1},\dots ,x_{n}\}$ is a finite set, then $RXR$ is also written as $(x_{1},\dots ,x_{n})$.
• There is a bijective correspondence between ideals and congruence relations (equivalence relations that respect the ring structure) on the ring: Given an ideal $I$ of a ring $R$, let $x\sim y$ if $x-y\in I$. Then $\sim $ is a congruence relation on $R$. Conversely, given a congruence relation $\sim $ on $R$, let $I=\{x\in R:x\sim 0\}$. Then $I$ is an ideal of $R$.
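As a concrete check of the last item above (and of the earlier observation that every non-zero ideal of $\mathbb {Z} $ is generated by its smallest positive element), here is a small Python sketch; the search windows are arbitrary:

```python
from math import gcd

# the ideal aZ + bZ in Z is principal, generated by its smallest positive element
a, b = 12, 18
combos = {a * r + b * s for r in range(-20, 21) for s in range(-20, 21)}
g = min(x for x in combos if x > 0)
print(g, gcd(a, b))                        # 6 6: the generator is gcd(12, 18)
print(all(x % g == 0 for x in combos))     # True: every combination is a multiple of 6

# the congruence x ~ y iff x - y in gZ respects the ring structure
x, x2, y, y2 = 5, 11, 2, -4                # 5 ~ 11 and 2 ~ -4 modulo 6
print((x + y - (x2 + y2)) % g == 0)        # True: sums of congruent elements are congruent
print((x * y - x2 * y2) % g == 0)          # True: products of congruent elements are congruent
```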
Types of ideals
To simplify the description all rings are assumed to be commutative. The non-commutative case is discussed in detail in the respective articles.
Ideals are important because they appear as kernels of ring homomorphisms and allow one to define factor rings. Different types of ideals are studied because they can be used to construct different types of factor rings.
• Maximal ideal: A proper ideal I is called a maximal ideal if there exists no other proper ideal J with I a proper subset of J. The factor ring of a maximal ideal is a simple ring in general and is a field for commutative rings.[7]
• Minimal ideal: A nonzero ideal is called minimal if it contains no other nonzero ideal.
• Prime ideal: A proper ideal $I$ is called a prime ideal if for any $a$ and $b$ in $R$, if $ab$ is in $I$, then at least one of $a$ and $b$ is in $I$. The factor ring of a prime ideal is a prime ring in general and is an integral domain for commutative rings.[8]
• Radical ideal or semiprime ideal: A proper ideal I is called radical or semiprime if for any a in R, if $a^{n}$ is in I for some n, then a is in I. The factor ring of a radical ideal is a semiprime ring for general rings, and is a reduced ring for commutative rings.
• Primary ideal: An ideal I is called a primary ideal if for all a and b in R, if ab is in I, then at least one of a and $b^{n}$ (for some natural number n) is in I. Every prime ideal is primary, but not conversely. A semiprime primary ideal is prime. (A brute-force illustration of the prime/primary distinction in $\mathbb {Z} $ follows this list.)
• Principal ideal: An ideal generated by one element.[9]
• Finitely generated ideal: This type of ideal is finitely generated as a module.
• Primitive ideal: A left primitive ideal is the annihilator of a simple left module.
• Irreducible ideal: An ideal is said to be irreducible if it cannot be written as an intersection of ideals which properly contain it.
• Comaximal ideals: Two ideals ${\mathfrak {i}},{\mathfrak {j}}$ are said to be comaximal if $x+y=1$ for some $x\in {\mathfrak {i}}$ and $y\in {\mathfrak {j}}$.
• Regular ideal: This term has multiple uses. See the article for a list.
• Nil ideal: An ideal is a nil ideal if each of its elements is nilpotent.
• Nilpotent ideal: Some power of it is zero.
• Parameter ideal: an ideal generated by a system of parameters.
• Perfect ideal: A proper ideal I in a Noetherian ring $R$ is called a perfect ideal if its grade equals the projective dimension of the associated quotient ring,[10] ${\textrm {grade}}(I)={\textrm {proj}}\dim(R/I)$. A perfect ideal is unmixed.
• Unmixed ideal: A proper ideal I in a Noetherian ring $R$ is called an unmixed ideal (in height) if the height of I is equal to the height of every associated prime P of R/I. (This is stronger than saying that R/I is equidimensional; see also equidimensional ring.)
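The prime/primary distinction can be tested by brute force in $\mathbb {Z} $. The following Python sketch (with small, arbitrary search bounds) confirms that $(4)$ is primary but $(6)$ is not:

```python
def in_ideal(x, n):
    return x % n == 0

def is_primary(n, bound=30):
    # (n) is primary iff: ab in (n) implies a in (n) or b**k in (n) for some k
    for a in range(1, bound):
        for b in range(1, bound):
            if in_ideal(a * b, n) and not in_ideal(a, n):
                if not any(in_ideal(b**k, n) for k in range(1, 10)):
                    return False
    return True

print(is_primary(4))   # True:  (4) is primary, though not prime (2*2 in (4), 2 not in (4))
print(is_primary(6))   # False: 2*3 in (6), 2 not in (6), and no power of 3 lies in (6)
```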
Two other important terms using "ideal" are not always ideals of their ring. See their respective articles for details:
• Fractional ideal: This is usually defined when R is a commutative domain with quotient field K. Despite their names, fractional ideals are $R$-submodules of K with a special property. If the fractional ideal is contained entirely in R, then it is truly an ideal of R.
• Invertible ideal: Usually an invertible ideal A is defined as a fractional ideal for which there is another fractional ideal B such that AB = BA = R. Some authors may also apply "invertible ideal" to ordinary ring ideals A and B with AB = BA = R in rings other than domains.
Ideal operations
The sum and product of ideals are defined as follows. For ${\mathfrak {a}}$ and ${\mathfrak {b}}$, left (resp. right) ideals of a ring R, their sum is
${\mathfrak {a}}+{\mathfrak {b}}:=\{a+b\mid a\in {\mathfrak {a}}{\mbox{ and }}b\in {\mathfrak {b}}\}$,
which is a left (resp. right) ideal, and, if ${\mathfrak {a}},{\mathfrak {b}}$ are two-sided,
${\mathfrak {a}}{\mathfrak {b}}:=\{a_{1}b_{1}+\dots +a_{n}b_{n}\mid a_{i}\in {\mathfrak {a}}{\mbox{ and }}b_{i}\in {\mathfrak {b}},i=1,2,\dots ,n;{\mbox{ for }}n=1,2,\dots \},$
i.e. the product is the ideal generated by all products of the form ab with a in ${\mathfrak {a}}$ and b in ${\mathfrak {b}}$.
Note ${\mathfrak {a}}+{\mathfrak {b}}$ is the smallest left (resp. right) ideal containing both ${\mathfrak {a}}$ and ${\mathfrak {b}}$ (or the union ${\mathfrak {a}}\cup {\mathfrak {b}}$), while the product ${\mathfrak {a}}{\mathfrak {b}}$ is contained in the intersection of ${\mathfrak {a}}$ and ${\mathfrak {b}}$.
The distributive law holds for two-sided ideals ${\mathfrak {a}},{\mathfrak {b}},{\mathfrak {c}}$,
• ${\mathfrak {a}}({\mathfrak {b}}+{\mathfrak {c}})={\mathfrak {a}}{\mathfrak {b}}+{\mathfrak {a}}{\mathfrak {c}}$,
• $({\mathfrak {a}}+{\mathfrak {b}}){\mathfrak {c}}={\mathfrak {a}}{\mathfrak {c}}+{\mathfrak {b}}{\mathfrak {c}}$.
If a product is replaced by an intersection, a partial distributive law holds:
${\mathfrak {a}}\cap ({\mathfrak {b}}+{\mathfrak {c}})\supset {\mathfrak {a}}\cap {\mathfrak {b}}+{\mathfrak {a}}\cap {\mathfrak {c}}$
where the equality holds if ${\mathfrak {a}}$ contains ${\mathfrak {b}}$ or ${\mathfrak {c}}$.
Remark: The sum and the intersection of ideals is again an ideal; with these two operations as join and meet, the set of all ideals of a given ring forms a complete modular lattice. The lattice is not, in general, a distributive lattice. The three operations of intersection, sum (or join), and product make the set of ideals of a commutative ring into a quantale.
If ${\mathfrak {a}},{\mathfrak {b}}$ are ideals of a commutative ring R, then ${\mathfrak {a}}\cap {\mathfrak {b}}={\mathfrak {a}}{\mathfrak {b}}$ in the following two cases (at least)
• ${\mathfrak {a}}+{\mathfrak {b}}=(1)$
• ${\mathfrak {a}}$ is generated by elements that form a regular sequence modulo ${\mathfrak {b}}$.
(More generally, the difference between a product and an intersection of ideals is measured by the Tor functor: $\operatorname {Tor} _{1}^{R}(R/{\mathfrak {a}},R/{\mathfrak {b}})=({\mathfrak {a}}\cap {\mathfrak {b}})/{\mathfrak {a}}{\mathfrak {b}}.$[11])
An integral domain is called a Dedekind domain if for each pair of ideals ${\mathfrak {a}}\subset {\mathfrak {b}}$, there is an ideal ${\mathfrak {c}}$ such that ${\mathfrak {a}}={\mathfrak {b}}{\mathfrak {c}}$.[12] It can then be shown that every nonzero ideal of a Dedekind domain can be uniquely written as a product of maximal ideals, a generalization of the fundamental theorem of arithmetic.
Examples of ideal operations
In $\mathbb {Z} $ we have
$(n)\cap (m)=\operatorname {lcm} (n,m)\mathbb {Z} $
since $(n)\cap (m)$ is the set of integers which are divisible by both $n$ and $m$.
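These operations in $\mathbb {Z} $ can be checked numerically; here is a minimal Python sketch over a finite window (the bounds are arbitrary):

```python
from math import gcd, lcm        # math.lcm requires Python 3.9+

n, m = 12, 18
window = range(-150, 151)
nZ = {x for x in window if x % n == 0}
mZ = {x for x in window if x % m == 0}

# (n) ∩ (m) = (lcm(n, m)): the common elements are exactly the multiples of lcm(12, 18) = 36
print(nZ & mZ == {x for x in window if x % lcm(n, m) == 0})   # True

# (n) + (m) = (gcd(n, m)): gcd(12, 18) = 6 is itself a Z-combination, e.g. 18 - 12
print(gcd(n, m), 18 - 12)                                     # 6 6
```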
Let $R=\mathbb {C} [x,y,z,w]$ and let ${\mathfrak {a}}=(z,w),{\mathfrak {b}}=(x+z,y+w),{\mathfrak {c}}=(x+z,w)$. Then,
• ${\mathfrak {a}}+{\mathfrak {b}}=(z,w,x+z,y+w)=(x,y,z,w)$ and ${\mathfrak {a}}+{\mathfrak {c}}=(z,w,x+z)$
• ${\mathfrak {a}}{\mathfrak {b}}=(z(x+z),z(y+w),w(x+z),w(y+w))=(z^{2}+xz,zy+wz,wx+wz,wy+w^{2})$
• ${\mathfrak {a}}{\mathfrak {c}}=(xz+z^{2},zw,xw+zw,w^{2})$
• ${\mathfrak {a}}\cap {\mathfrak {b}}={\mathfrak {a}}{\mathfrak {b}}$ while ${\mathfrak {a}}\cap {\mathfrak {c}}=(w,xz+z^{2})\neq {\mathfrak {a}}{\mathfrak {c}}$
The first computation shows the general pattern for taking the sum of two finitely generated ideals: it is the ideal generated by the union of their generators. In the last three we observe that the product and the intersection agree for ${\mathfrak {b}}$, where the generators $z,w$ of ${\mathfrak {a}}$ form a regular sequence modulo ${\mathfrak {b}}$ (the second criterion above), but not for ${\mathfrak {c}}$, where they do not. These computations can be checked using Macaulay2.[13][14][15]
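For readers without Macaulay2 at hand, the first computation can also be verified with SymPy's Gröbner-basis routines; a minimal sketch, assuming SymPy is installed:

```python
from sympy import symbols, groebner

x, y, z, w = symbols('x y z w')

# a + b = (z, w, x + z, y + w); its reduced Groebner basis is {x, y, z, w},
# confirming that the sum equals the maximal ideal (x, y, z, w)
G = groebner([z, w, x + z, y + w], x, y, z, w, order='lex')
print(G.exprs)                 # [x, y, z, w]

# ideal membership via reduction: the remainder of x*y modulo the basis is 0
print(G.reduce(x * y)[1])      # 0
```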
Radical of a ring
Main article: Radical of a ring
Ideals appear naturally in the study of modules, especially in the form of a radical.
For simplicity, we work with commutative rings but, with some changes, the results are also true for non-commutative rings.
Let R be a commutative ring. By definition, a primitive ideal of R is the annihilator of a (nonzero) simple R-module. The Jacobson radical $J=\operatorname {Jac} (R)$ of R is the intersection of all primitive ideals. Equivalently,
$J=\bigcap _{{\mathfrak {m}}{\text{ maximal ideals}}}{\mathfrak {m}}.$
Indeed, if $M$ is a simple module and x is a nonzero element in M, then $Rx=M$ and $R/\operatorname {Ann} (M)=R/\operatorname {Ann} (x)\simeq M$, meaning $\operatorname {Ann} (M)$ is a maximal ideal. Conversely, if ${\mathfrak {m}}$ is a maximal ideal, then ${\mathfrak {m}}$ is the annihilator of the simple R-module $R/{\mathfrak {m}}$. There is also another characterization (the proof is not hard):
$J=\{x\in R\mid 1-yx\,{\text{ is a unit element for every }}y\in R\}.$
For a not-necessarily-commutative ring, it is a general fact that $1-yx$ is a unit element if and only if $1-xy$ is (see the link) and so this last characterization shows that the radical can be defined both in terms of left and right primitive ideals.
The following simple but important fact (Nakayama's lemma) is built into the definition of the Jacobson radical: if M is a module such that $JM=M$, then M does not admit a maximal submodule, since if there is a maximal submodule $L\subsetneq M$, then $J\cdot (M/L)=0$ and so $M=JM\subset L\subsetneq M$, a contradiction. Since a nonzero finitely generated module admits a maximal submodule, in particular, one has:
If $JM=M$ and M is finitely generated, then $M=0.$
A maximal ideal is a prime ideal and so one has
$\operatorname {nil} (R)=\bigcap _{{\mathfrak {p}}{\text{ prime ideals }}}{\mathfrak {p}}\subset \operatorname {Jac} (R)$
where the intersection on the left is called the nilradical of R. As it turns out, $\operatorname {nil} (R)$ is also the set of nilpotent elements of R.
If R is an Artinian ring, then $\operatorname {Jac} (R)$ is nilpotent and $\operatorname {nil} (R)=\operatorname {Jac} (R)$. (Proof: first note that the DCC implies $J^{n}=J^{n+1}$ for some n. If $J^{n}\neq 0$, then by the DCC there is an ideal ${\mathfrak {a}}\supsetneq \operatorname {Ann} (J^{n})$ minimal among the ideals properly containing $\operatorname {Ann} (J^{n})$; then ${\mathfrak {a}}/\operatorname {Ann} (J^{n})$ is simple, so $J\cdot ({\mathfrak {a}}/\operatorname {Ann} (J^{n}))=0$. That is, $J^{n+1}{\mathfrak {a}}=J^{n}{\mathfrak {a}}=0$, so ${\mathfrak {a}}\subset \operatorname {Ann} (J^{n})$, a contradiction.)
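For a finite (hence Artinian) ring such as $\mathbb {Z} /72\mathbb {Z} $, the equality $\operatorname {nil} (R)=\operatorname {Jac} (R)$ can be verified by brute force, using the unit characterization of $J$ above; a minimal Python sketch (the power bound is arbitrary but ample):

```python
from math import gcd

n = 72                       # Z/72Z = Z/(2**3 * 3**2)Z, a finite (hence Artinian) ring
ring = range(n)
is_unit = lambda u: gcd(u % n, n) == 1

nilpotents = {x for x in ring if any(pow(x, k, n) == 0 for k in range(1, 20))}
jacobson = {x for x in ring if all(is_unit(1 - y * x) for y in ring)}

print(nilpotents == jacobson)       # True: nil(R) = Jac(R) for this Artinian ring
print(sorted(nilpotents)[:5])       # [0, 6, 12, 18, 24]: the multiples of 6 = 2*3
```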
Extension and contraction of an ideal
Let A and B be two commutative rings, and let f : A → B be a ring homomorphism. If ${\mathfrak {a}}$ is an ideal in A, then $f({\mathfrak {a}})$ need not be an ideal in B (e.g. take f to be the inclusion of the ring of integers Z into the field of rationals Q). The extension ${\mathfrak {a}}^{e}$ of ${\mathfrak {a}}$ in B is defined to be the ideal in B generated by $f({\mathfrak {a}})$. Explicitly,
${\mathfrak {a}}^{e}={\Big \{}\sum y_{i}f(x_{i}):x_{i}\in {\mathfrak {a}},y_{i}\in B{\Big \}}$
If ${\mathfrak {b}}$ is an ideal of B, then $f^{-1}({\mathfrak {b}})$ is always an ideal of A, called the contraction ${\mathfrak {b}}^{c}$ of ${\mathfrak {b}}$ to A.
Assuming f : A → B is a ring homomorphism, ${\mathfrak {a}}$ is an ideal in A, ${\mathfrak {b}}$ is an ideal in B, then:
• ${\mathfrak {b}}$ is prime in B $\Rightarrow $ ${\mathfrak {b}}^{c}$ is prime in A.
• ${\mathfrak {a}}^{ec}\supseteq {\mathfrak {a}}$
• ${\mathfrak {b}}^{ce}\subseteq {\mathfrak {b}}$
It is false, in general, that ${\mathfrak {a}}$ being prime (or maximal) in A implies that ${\mathfrak {a}}^{e}$ is prime (or maximal) in B. Many classic examples of this stem from algebraic number theory. For example, consider the embedding $\mathbb {Z} \to \mathbb {Z} \left\lbrack i\right\rbrack $. In $B=\mathbb {Z} \left\lbrack i\right\rbrack $, the element 2 factors as $2=(1+i)(1-i)$, where (one can show) neither of $1+i,1-i$ is a unit in B. So $(2)^{e}$ is not prime in B (and therefore not maximal either). Indeed, $(1\pm i)^{2}=\pm 2i$ shows that $1+i=(1-i)-(1-i)^{2}$ and $1-i=(1+i)-(1+i)^{2}$, and therefore $(2)^{e}=(1+i)^{2}$.
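The element-level identities in this example are easy to check with Python's built-in complex numbers, used here to stand in for Gaussian integers:

```python
# Python's complex numbers stand in for Gaussian integers here
one_plus_i, one_minus_i = 1 + 1j, 1 - 1j

print(one_plus_i * one_minus_i)     # (2+0j): 2 = (1+i)(1-i) factors in Z[i]
print(one_plus_i ** 2)              # 2j:     (1+i)**2 = 2i, and i is a unit
print(-1j * one_plus_i ** 2)        # (2+0j): so the ideal (2) equals ((1+i))**2
```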
On the other hand, if f is surjective and ${\mathfrak {a}}\supseteq \ker f$ then:
• ${\mathfrak {a}}^{ec}={\mathfrak {a}}$ and ${\mathfrak {b}}^{ce}={\mathfrak {b}}$.
• ${\mathfrak {a}}$ is a prime ideal in A $\Leftrightarrow $ ${\mathfrak {a}}^{e}$ is a prime ideal in B.
• ${\mathfrak {a}}$ is a maximal ideal in A $\Leftrightarrow $ ${\mathfrak {a}}^{e}$ is a maximal ideal in B.
Remark: Let K be a field extension of L, and let B and A be the rings of integers of K and L, respectively. Then B is an integral extension of A, and we let f be the inclusion map from A to B. The behaviour of a prime ideal ${\mathfrak {a}}={\mathfrak {p}}$ of A under extension is one of the central problems of algebraic number theory.
The following is sometimes useful:[16] a prime ideal ${\mathfrak {p}}$ is a contraction of a prime ideal if and only if ${\mathfrak {p}}={\mathfrak {p}}^{ec}$. (Proof: Assuming the latter, note ${\mathfrak {p}}^{e}B_{\mathfrak {p}}=B_{\mathfrak {p}}\Rightarrow {\mathfrak {p}}^{e}$ intersects $A-{\mathfrak {p}}$, a contradiction. Now, the prime ideals of $B_{\mathfrak {p}}$ correspond to those in B that are disjoint from $A-{\mathfrak {p}}$. Hence, there is a prime ideal ${\mathfrak {q}}$ of B, disjoint from $A-{\mathfrak {p}}$, such that ${\mathfrak {q}}B_{\mathfrak {p}}$ is a maximal ideal containing ${\mathfrak {p}}^{e}B_{\mathfrak {p}}$. One then checks that ${\mathfrak {q}}$ lies over ${\mathfrak {p}}$. The converse is obvious.)
Generalizations
Ideals can be generalized to any monoid object $(R,\otimes )$, where $R$ is the object where the monoid structure has been forgotten. A left ideal of $R$ is a subobject $I$ that "absorbs multiplication from the left by elements of $R$"; that is, $I$ is a left ideal if it satisfies the following two conditions:
1. $I$ is a subobject of $R$
2. For every $r\in (R,\otimes )$ and every $x\in (I,\otimes )$, the product $r\otimes x$ is in $(I,\otimes )$.
A right ideal is defined with the condition "$r\otimes x\in (I,\otimes )$" replaced by "$x\otimes r\in (I,\otimes )$". A two-sided ideal is a left ideal that is also a right ideal, and is sometimes simply called an ideal. When $R$ is a commutative monoid object, the definitions of left, right, and two-sided ideal coincide, and the term ideal is used alone.
An ideal can also be thought of as a specific type of R-module. If we consider $R$ as a left $R$-module (by left multiplication), then a left ideal $I$ is really just a left sub-module of $R$. In other words, $I$ is a left (right) ideal of $R$ if and only if it is a left (right) $R$-module which is a subset of $R$. $I$ is a two-sided ideal if it is a sub-$R$-bimodule of $R$.
Example: If we let $R=\mathbb {Z} $, an ideal of $\mathbb {Z} $ is precisely an additive subgroup of $\mathbb {Z} $, i.e. $m\mathbb {Z} $ for some $m\in \mathbb {Z} $ (the absorption property is automatic here, since $rx$ is an iterated sum of $x$ or $-x$). So these give all the ideals of $\mathbb {Z} $.
See also
• Modular arithmetic
• Noether isomorphism theorem
• Boolean prime ideal theorem
• Ideal theory
• Ideal (order theory)
• Ideal norm
• Splitting of prime ideals in Galois extensions
• Ideal sheaf
Notes
1. Some authors call the zero and unit ideals of a ring R the trivial ideals of R.
2. If R does not have a unit, then the internal descriptions above must be modified slightly. In addition to the finite sums of products of things in X with things in R, we must allow the addition of n-fold sums of the form x + x + ... + x, and n-fold sums of the form (−x) + (−x) + ... + (−x) for every x in X and every n in the natural numbers. When R has a unit, this extra requirement becomes superfluous.
References
1. John Stillwell (2010). Mathematics and its history. p. 439.
2. Harold M. Edwards (1977). Fermat's last theorem. A genetic introduction to algebraic number theory. p. 76.
3. Everest G., Ward T. (2005). An introduction to number theory. p. 83.
4. Dummit & Foote (2004), p. 243.
5. Lang 2005, Section III.2
6. Dummit & Foote (2004), p. 244.
7. Because simple commutative rings are fields. See Lam (2001). A First Course in Noncommutative Rings. p. 39.
8. Dummit & Foote (2004), p. 255.
9. Dummit & Foote (2004), p. 251.
10. Matsumura, Hideyuki (1987). Commutative Ring Theory. Cambridge: Cambridge University Press. p. 132. ISBN 9781139171762.
11. Eisenbud 1995, Exercise A 3.17
12. Milnor (1971), p. 9.
13. "ideals". www.math.uiuc.edu. Archived from the original on 2017-01-16. Retrieved 2017-01-14.
14. "sums, products, and powers of ideals". www.math.uiuc.edu. Archived from the original on 2017-01-16. Retrieved 2017-01-14.
15. "intersection of ideals". www.math.uiuc.edu. Archived from the original on 2017-01-16. Retrieved 2017-01-14.
16. Atiyah & Macdonald (1969), Proposition 3.16.
• Atiyah, Michael F.; Macdonald, Ian G. (1969). Introduction to Commutative Algebra. Perseus Books. ISBN 0-201-00361-9.
• Dummit, David Steven; Foote, Richard Martin (2004). Abstract algebra (Third ed.). Hoboken, NJ: John Wiley & Sons, Inc. ISBN 9780471433347.
• Eisenbud, David (1995), Commutative Algebra with a View toward Algebraic Geometry, Graduate Texts in Mathematics, vol. 150, Berlin, New York: Springer-Verlag, doi:10.1007/978-1-4612-5350-1, ISBN 978-0-387-94268-1, MR 1322960
• Lang, Serge (2005). Undergraduate Algebra (Third ed.). Springer-Verlag. ISBN 978-0-387-22025-3.
• Hazewinkel, Michiel; Gubareni, Nadiya; Gubareni, Nadezhda Mikhaĭlovna; Kirichenko, Vladimir V. (2004). Algebras, rings and modules. Vol. 1. Springer. ISBN 1-4020-2690-0.
• Milnor, John Willard (1971). Introduction to algebraic K-theory. Annals of Mathematics Studies. Vol. 72. Princeton, NJ: Princeton University Press. ISBN 9780691081014. MR 0349811. Zbl 0237.18005.
External links
• Levinson, Jake (July 14, 2014). "The Geometric Interpretation for Extension of Ideals?". Stack Exchange.
Imaginary unit
The imaginary unit or unit imaginary number (i) is a solution to the quadratic equation $x^{2}+1=0$. Although there is no real number with this property, i can be used to extend the real numbers to what are called complex numbers, using addition and multiplication. A simple example of the use of i in a complex number is $2+3i$.
Imaginary numbers are an important mathematical concept; they extend the real number system $\mathbb {R} $ to the complex number system $\mathbb {C} $, in which at least one root for every nonconstant polynomial exists (see Algebraic closure and Fundamental theorem of algebra). Here, the term "imaginary" is used because there is no real number having a negative square.
There are two complex square roots of −1: $i$ and $-i$, just as there are two complex square roots of every real number other than zero (which has one double square root).
In contexts in which use of the letter i is ambiguous or problematic, the letter j is sometimes used instead. For example, in electrical engineering and control systems engineering, the imaginary unit is normally denoted by j instead of i, because i is commonly used to denote electric current.[1]
Definition
The powers of i are cyclic, repeating with period four in both directions:
$\ldots ,\quad i^{-3}=i,\quad i^{-2}=-1,\quad i^{-1}=-i,\quad i^{0}=1,\quad i^{1}=i,\quad i^{2}=-1,\quad i^{3}=-i,\quad i^{4}=1,\quad i^{5}=i,\quad i^{6}=-1,\quad \ldots $
The imaginary number i is defined solely by the property that its square is −1:
$i^{2}=-1.$
With i defined this way, it follows directly from algebra that i and $-i$ are both square roots of −1.
Although the construction is called "imaginary", and although the concept of an imaginary number may be intuitively more difficult to grasp than that of a real number, the construction is valid from a mathematical standpoint. Real number operations can be extended to imaginary and complex numbers, by treating i as an unknown quantity while manipulating an expression (and using the definition to replace any occurrence of $i^{2}$ with −1). Higher integral powers of i can also be replaced with $-i$, 1, i, or −1:
$i^{3}=i^{2}i=(-1)i=-i$
$i^{4}=i^{3}i=(-i)i=-(i^{2})=-(-1)=1$
or, equivalently,
$i^{4}=(i^{2})(i^{2})=(-1)(-1)=1$
$i^{5}=i^{4}i=(1)i=i$
Similarly, as with any non-zero real number:
$i^{0}=i^{1-1}=i^{1}i^{-1}=i^{1}{\frac {1}{i}}=i{\frac {1}{i}}={\frac {i}{i}}=1$
As a complex number, i can be represented in rectangular form as $0+1i$, with a zero real component and a unit imaginary component. In polar form, i can be represented as $1\times e^{i\pi /2}$ (or just $e^{i\pi /2}$), with an absolute value (or magnitude) of 1 and an argument (or angle) of ${\tfrac {\pi }{2}}$ radians. (Adding any multiple of 2π to this angle works as well.) In the complex plane (also known as the Argand plane), which is a special interpretation of a Cartesian plane, i is the point located one unit from the origin along the imaginary axis (which is orthogonal to the real axis).
i vs. −i
Being a quadratic polynomial with no multiple root, the defining equation $x^{2}=-1$ has two distinct solutions, which are equally valid and which happen to be additive and multiplicative inverses of each other. Once a solution i of the equation has been fixed, the value $-i$, which is distinct from i, is also a solution. Since the equation is the only definition of i, it appears that the definition is ambiguous (more precisely, not well-defined). However, no ambiguity will result as long as one or the other of the solutions is chosen and labelled as "i", with the other one then being labelled as $-i$.[2] After all, although $-i$ and $+i$ are not quantitatively equivalent (they are negatives of each other), there is no algebraic difference between $+i$ and $-i$, as both imaginary numbers have equal claim to being the number whose square is −1.
In fact, if all mathematical textbooks and published literature referring to imaginary or complex numbers were to be rewritten with $-i$ replacing every occurrence of $+i$ (and, therefore, every occurrence of $-i$ replaced by $-(-i)=+i$), all facts and theorems would remain valid. The distinction between the two roots x of $x^{2}+1=0$, with one of them labelled with a minus sign, is purely a notational relic; neither root can be said to be more primary or fundamental than the other, and neither of them is "positive" or "negative".[3]
The issue can be a subtle one. One way of articulating the situation is that although the complex field is unique (as an extension of the real numbers) up to isomorphism, it is not unique up to a unique isomorphism. Indeed, there are two field automorphisms of C that keep each real number fixed, namely the identity and complex conjugation. For more on this general phenomenon, see Galois group.
Matrices
Using the concepts of matrices and matrix multiplication, imaginary units can be represented in linear algebra. The value of 1 is represented by an identity matrix I and the value of i is represented by any matrix J satisfying $J^{2}=-I$. A typical choice is
$I={\begin{pmatrix}1&0\\0&1\end{pmatrix}},\qquad J={\begin{pmatrix}0&-1\\1&0\end{pmatrix}}\,.$
More generally, a real-valued 2 × 2 matrix J satisfies $J^{2}=-I$ if and only if J has a matrix trace of zero and a matrix determinant of one, so J can be chosen to be
$J={\begin{pmatrix}z&x\\y&-z\end{pmatrix}}\,,$
whenever $-z^{2}-xy=1$. The product xy is negative because $xy=-(1+z^{2})$; thus, the points (x, y) lie on hyperbolas determined by z in quadrant II or IV.
Matrices larger than 2 × 2 can be used. For example, I could be chosen to be the 4 × 4 identity matrix with J chosen to be any of the three 4 × 4 Dirac matrices for spatial dimensions, γ1, γ2, γ3.
Regardless of the choice of representation, the usual rules of complex number mathematics work with these matrices because I × I = I, I × J = J, J × I = J, and J × J = −I. For example,
${\begin{aligned}J^{-1}&=-J\,,\\\left(aI+bJ\right)+\left(cI+dJ\right)&=(a+c)I+(b+d)J\,,\\\left(aI+bJ\right)\times \left(cI+dJ\right)&=(ac-bd)I+(ad+bc)J\,.\end{aligned}}$
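A quick numeric check of this representation, assuming NumPy is available:

```python
import numpy as np

I = np.eye(2)
J = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.allclose(J @ J, -I))       # True: J**2 = -I, so J plays the role of i

def rep(a, b):
    # represent the complex number a + bi as the real matrix aI + bJ
    return a * I + b * J

# (2 + 3i)(4 + 5i) = -7 + 22i, matching (ac - bd) + (ad + bc)i
print(np.allclose(rep(2, 3) @ rep(4, 5), rep(-7, 22)))   # True
```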
Proper use
The imaginary unit is sometimes written ${\sqrt {-1}}$ in advanced mathematics contexts[2] (as well as in less advanced popular texts). However, great care needs to be taken when manipulating formulas involving radicals. The radical sign notation is reserved either for the principal square root function, which is only defined for real $x\geq 0$, or for the principal branch of the complex square root function. Attempting to apply the calculation rules of the principal (real) square root function to manipulate the principal branch of the complex square root function can produce false results:[4]
$-1=i\cdot i={\sqrt {-1}}\cdot {\sqrt {-1}}={\sqrt {(-1)\cdot (-1)}}={\sqrt {1}}=1\qquad {\text{(incorrect).}}$
Similarly:
${\frac {1}{i}}={\frac {\sqrt {1}}{\sqrt {-1}}}={\sqrt {\frac {1}{-1}}}={\sqrt {\frac {-1}{1}}}={\sqrt {-1}}=i\qquad {\text{(incorrect).}}$
Generally, the calculation rules
${\sqrt {a}}\cdot {\sqrt {b}}={\sqrt {a\cdot b}}$
and
${\frac {\sqrt {a}}{\sqrt {b}}}={\sqrt {\frac {a}{b}}}$
are guaranteed to be valid for real, positive values of a and b only.[5][6][7]
When a or b is real but negative, these problems can be avoided by writing and manipulating expressions like $i{\sqrt {7}}$, rather than ${\sqrt {-7}}$. For a more thorough discussion, see square root and branch point.
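Python's cmath module implements the principal branch, so the fallacy above is easy to reproduce; a minimal sketch:

```python
import cmath

# naively applying sqrt(a)*sqrt(b) = sqrt(a*b) with a = b = -1:
print(cmath.sqrt(-1) * cmath.sqrt(-1))     # (-1+0j): i * i on the principal branch
print(cmath.sqrt((-1) * (-1)))             # (1+0j):  sqrt(1); the naive rule fails

# writing i*sqrt(7) instead of sqrt(-7) sidesteps the pitfall for negative reals
print(1j * cmath.sqrt(7), cmath.sqrt(-7))  # both 2.6457513110645907j (up to rounding)
```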
Properties
Square roots
Just like all nonzero complex numbers, i has two square roots: they are[note 1]
$\pm \left({\frac {\sqrt {2}}{2}}+{\frac {\sqrt {2}}{2}}i\right)=\pm {\frac {\sqrt {2}}{2}}(1+i).$
Indeed, squaring both expressions yields:
${\begin{aligned}\left(\pm {\frac {\sqrt {2}}{2}}(1+i)\right)^{2}\ &=\left(\pm {\frac {\sqrt {2}}{2}}\right)^{2}(1+i)^{2}\ \\&={\frac {1}{2}}(1+2i+i^{2})\\&={\frac {1}{2}}(1+2i-1)\ \\&=i.\end{aligned}}$
Using the radical sign for the principal square root, we get:
${\sqrt {i}}={\frac {\sqrt {2}}{2}}(1+i).$
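A quick numeric confirmation with Python's cmath, which implements the principal branch:

```python
import cmath, math

root = cmath.sqrt(1j)                     # principal square root of i
expected = math.sqrt(2) / 2 * (1 + 1j)
print(cmath.isclose(root, expected))      # True
print(cmath.isclose(root * root, 1j))     # True: squaring recovers i
```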
Cube roots
The three cube roots of i are:[9]
$-i,$
${\frac {\sqrt {3}}{2}}+{\frac {i}{2}},$ and
$-{\frac {\sqrt {3}}{2}}+{\frac {i}{2}}.$
As with the roots of unity, the roots of i are the vertices of regular polygons inscribed in the unit circle of the complex plane.
Multiplication and division
Multiplying a complex number by i gives:
$i(a+bi)=ai+bi^{2}=-b+ai.$
(This is equivalent to a 90° counter-clockwise rotation of a vector about the origin in the complex plane.)
Dividing by i is equivalent to multiplying by the reciprocal of i:
${\frac {1}{i}}={\frac {1}{i}}\cdot {\frac {i}{i}}={\frac {i}{i^{2}}}={\frac {i}{-1}}=-i~.$
Using this identity to generalize division by i to all complex numbers gives:
${\frac {a+bi}{i}}=-i(a+bi)=-ai-bi^{2}=b-ai.$
(This is equivalent to a 90° clockwise rotation of a vector about the origin in the complex plane.)
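These rotations are visible directly in Python's complex arithmetic; for example:

```python
z = 3 + 4j

print(1j * z)       # (-4+3j): (a, b) -> (-b, a), a 90° counter-clockwise turn
print(z / 1j)       # (4-3j):  dividing by i ...
print(-1j * z)      # (4-3j):  ... is multiplying by -i, a 90° clockwise turn
```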
Powers
The powers of i repeat in a cycle expressible with the following pattern, where n is any integer:
$i^{4n}=1$
$i^{4n+1}=i$
$i^{4n+2}=-1$
$i^{4n+3}=-i,$
This leads to the conclusion that
$i^{n}=i^{(n{\bmod {4}})}$
where mod represents the modulo operation. Equivalently:
$i^{n}=\cos(n\pi /2)+i\sin(n\pi /2)$
Although we do not give the details here, if one chooses branch cuts and principal values appropriately, this last equation applies to all complex values of n.[10]
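A quick Python check of the period-4 pattern:

```python
import cmath

for n in range(-4, 9):
    # Python's % returns a non-negative remainder even for negative n,
    # matching the convention i**n = i**(n mod 4)
    assert cmath.isclose(1j**n, 1j**(n % 4))
    print(n, n % 4, 1j**(n % 4))
```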
i raised to the power of i
Making use of Euler's formula, $i^{i}$ has infinitely many values
$i^{i}=\left(e^{i(\pi /2+2k\pi )}\right)^{i}=e^{i^{2}(\pi /2+2k\pi )}=e^{-(\pi /2+2k\pi )}\,,$
for any integer k. The principal value corresponds to $k=0$ and gives $i^{i}=e^{-\pi /2}$, which is 0.207879576....[11][12]
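Python's complex power uses the principal branch, so it reproduces the $k=0$ value; the other branch values follow from the formula above:

```python
import math

print((1j) ** 1j)               # (0.20787957635076193+0j): the principal value
print(math.exp(-math.pi / 2))   # 0.20787957635076193: e**(-pi/2)

# the other values of i**i come from other branches: e**(-(pi/2 + 2*k*pi))
for k in (-1, 1):
    print(k, math.exp(-(math.pi / 2 + 2 * k * math.pi)))
```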
Factorial
The factorial of the imaginary unit i is most often given in terms of the gamma function evaluated at $1+i$:[13][14][15]
$i!=\Gamma (1+i)=i\Gamma (i)\approx 0.498015668-0.154949828i.$[16]
The magnitude of this number is
$|\Gamma (1+i)|={\sqrt {\frac {\pi }{\sinh \pi }}}=0.521564046\ldots ,$[17]
while its argument is
$\arg {\Gamma (1+i)}=\lim _{n\to \infty }{\biggl (}\ln {n}-\sum _{k=1}^{n}\operatorname {arccot} {k}{\biggr )}\approx -0.301640320.$[18]
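These values can be reproduced numerically with the gamma function from SciPy (assumed installed; scipy.special.gamma accepts complex arguments):

```python
from scipy.special import gamma    # assumes SciPy is installed
import math

val = gamma(1 + 1j)                # i! = Gamma(1 + i)
print(val)                         # approximately (0.49801567-0.15494983j)
print(abs(val))                    # approximately 0.52156405
print(math.sqrt(math.pi / math.sinh(math.pi)))   # the closed form for the magnitude
```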
Other operations
Many mathematical operations that can be carried out with real numbers can also be carried out with i, such as exponentiation, roots, logarithms, and trigonometric functions. The following functions are well-defined, single-valued functions when x is a positive real number.
A number raised to the ni power is:
$x^{ni}=\cos(n\ln x)+i\sin(n\ln x).$
The nith root of a number is:
${\sqrt[{ni}]{x}}=\cos \left({\frac {\ln x}{n}}\right)-i\sin \left({\frac {\ln x}{n}}\right)~.$
The cosine of ni is:
$\cos ni=\cosh n={\frac {1}{2}}\left(e^{n}+{\frac {1}{e^{n}}}\right)={\frac {e^{2n}+1}{2e^{n}}}\,,$
which is a real number when n is a real number.
The sine of ni is:
$\sin ni=i\sinh n={\frac {1}{2}}\left(e^{n}-{\frac {1}{e^{n}}}\right)i={\frac {e^{2n}-1}{2e^{n}}}i\,,$
which is a purely imaginary number when n is a real number.
In contrast, many functions involving i, including those that depend upon log i or the logarithm of another complex number, are complex multi-valued functions, with different values on different branches of the Riemann surface the function is defined on.[19] For example, if one chooses any branch where log i = πi/2 then one can write
$\log _{i}x=-{\frac {2i\ln x}{\pi }}\,,$
when x is a positive real number. When x is not a positive real number in the above formulas then one must precisely specify the branch to get a single-valued function; see complex logarithm.
History
Further information: Complex number § History
Designating square roots of negative numbers as "imaginary" is generally credited to René Descartes, and Isaac Newton used the term as early as 1670.[20][21] The i notation was introduced by Leonhard Euler.[22]
See also
• Euler's identity
• Hyperbolic unit
• Mathematical constant
• Multiplicity (mathematics)
• Root of unity
• Unit complex number
Notes
1. To find such a number, one can solve the equation $ (x+iy)^{2}=i$ where x and y are real parameters to be determined, or equivalently $x^{2}+2ixy-y^{2}=i.$ Because the real and imaginary parts are always separate, we regroup the terms, $x^{2}-y^{2}+2ixy=0+i.$ By equating coefficients, separating the real part and imaginary part, we get a system of two equations:
${\begin{aligned}x^{2}-y^{2}&=0\\[3mu]2xy&=1.\end{aligned}}$
Substituting $y={\tfrac {1}{2}}x^{-1}$ into the first equation, we get $x^{2}-{\tfrac {1}{4}}x^{-2}=0$ $\implies 4x^{4}=1.$ Because x is a real number, this equation has two real solutions for x: $x={\tfrac {1}{\sqrt {2}}}$ and $x=-{\tfrac {1}{\sqrt {2}}}$. Substituting either of these results into the equation $2xy=1$ in turn, we will get the corresponding result for y. Thus, the square roots of i are the numbers ${\tfrac {1}{\sqrt {2}}}+{\tfrac {1}{\sqrt {2}}}i$ and $-{\tfrac {1}{\sqrt {2}}}-{\tfrac {1}{\sqrt {2}}}i$.[8]
References
1. Boas, Mary L. (2006). Mathematical Methods in the Physical Sciences (3rd ed.). New York [u.a.]: Wiley. p. 49. ISBN 0-471-19826-9.
2. Weisstein, Eric W. "Imaginary Unit". mathworld.wolfram.com. Retrieved 10 August 2020.
3. Doxiadēs, Apostolos K.; Mazur, Barry (2012). Circles Disturbed: The interplay of mathematics and narrative (illustrated ed.). Princeton University Press. p. 225. ISBN 978-0-691-14904-2 – via Google Books.
4. Bunch, Bryan (2012). Mathematical Fallacies and Paradoxes (illustrated ed.). Courier Corporation. p. 31-34. ISBN 978-0-486-13793-3 – via Google Books.
5. Kramer, Arthur (2012). Math for Electricity & Electronics (4th ed.). Cengage Learning. p. 81. ISBN 978-1-133-70753-0 – via Google Books.
6. Picciotto, Henri; Wah, Anita (1994). Algebra: Themes, tools, concepts (Teachers’ ed.). Henri Picciotto. p. 424. ISBN 978-1-56107-252-1 – via Google Books.
7. Nahin, Paul J. (2010). An Imaginary Tale: The story of "i" [the square root of minus one]. Princeton University Press. p. 12. ISBN 978-1-4008-3029-9 – via Google Books.
8. "What is the square root of i ?". University of Toronto Mathematics Network. Retrieved 26 March 2007.
9. Zill, Dennis G.; Shanahan, Patrick D. (2003). A first course in complex analysis with applications. Boston: Jones and Bartlett. pp. 24–25. ISBN 0-7637-1437-2. OCLC 50495529.
10. Łukaszyk, S.; Tomski, A. (2023). "Omnidimensional Convex Polytopes". Symmetry. 15. doi:10.3390/sym15030755.
11. Wells, David (1997) [1986]. The Penguin Dictionary of Curious and Interesting Numbers (revised ed.). UK: Penguin Books. p. 26. ISBN 0-14-026149-4.
12. Sloane, N. J. A. (ed.). "Sequence A049006 (Decimal expansion of i^i = exp(-Pi/2))". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
13. Sloane, N. J. A. (ed.). "Sequence A212879 (Decimal expansion of the absolute value of i!)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
14. Ivan, M.; Thornber, N.; Kouba, O.; Constales, D. (2013). "Arggh! Eye factorial . . . Arg(i!)". American Mathematical Monthly. 120: 662–665. doi:10.4169/amer.math.monthly.120.07.660. S2CID 24405635.
15. Finch, S. (3 November 2022). "Errata and Addenda to Mathematical Constants". arXiv:2001.00578 [math.HO].
16. Sloane, N. J. A. (ed.). "Sequence A212877 (Decimal expansion of the real part of i!)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.Sloane, N. J. A. (ed.). "Sequence A212878 (Decimal expansion of the negated imaginary part of i!)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
17. Sloane, N. J. A. (ed.). "Sequence A212879 (Decimal expansion of the absolute value of i!)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
18. Sloane, N. J. A. (ed.). "Sequence A212880 (Decimal expansion of the negated argument of i!)". The On-Line Encyclopedia of Integer Sequences. OEIS Foundation.
19. Gbur, Greg (2011). Mathematical Methods for Optical Physics and Engineering. Cambridge, U.K.: Cambridge University Press. pp. 278–284. ISBN 978-0-511-91510-9. OCLC 704518582.
20. Silver, Daniel S. (November–December 2017). "The New Language of Mathematics". American Scientist. 105 (6): 364–371. doi:10.1511/2017.105.6.364.
21. "imaginary number". Oxford English Dictionary (Online ed.). Oxford University Press. (Subscription or participating institution membership required.)
22. Boyer, Carl B.; Merzbach, Uta C. (1991). A History of Mathematics. John Wiley & Sons. pp. 439–445. ISBN 978-0-471-54397-8.
Further reading
• Nahin, Paul J. (1998). An Imaginary Tale: The story of i [the square root of minus one]. Chichester: Princeton University Press. ISBN 0-691-02795-1 – via Archive.org.
External links
• Euler, Leonhard. "Imaginary Roots of Polynomials". at "Convergence". mathdl.maa.org. Mathematical Association of America. Archived from the original on 13 July 2007.
Identity matrix
In linear algebra, the identity matrix of size $n$ is the $n\times n$ square matrix with ones on the main diagonal and zeros elsewhere. It has unique properties, for example when the identity matrix represents a geometric transformation, the object remains unchanged by the transformation. In other contexts, it is analogous to multiplying by the number 1.
Not to be confused with matrix of ones, unitary matrix, or matrix unit.
Terminology and notation
The identity matrix is often denoted by $I_{n}$, or simply by $I$ if the size is immaterial or can be trivially determined by the context.[1]
$I_{1}={\begin{bmatrix}1\end{bmatrix}},\ I_{2}={\begin{bmatrix}1&0\\0&1\end{bmatrix}},\ I_{3}={\begin{bmatrix}1&0&0\\0&1&0\\0&0&1\end{bmatrix}},\ \dots ,\ I_{n}={\begin{bmatrix}1&0&0&\cdots &0\\0&1&0&\cdots &0\\0&0&1&\cdots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&0&0&\cdots &1\end{bmatrix}}.$
The term unit matrix has also been widely used,[2][3][4][5] but the term identity matrix is now standard.[6] The term unit matrix is ambiguous, because it is also used for a matrix of ones and for any unit of the ring of all $n\times n$ matrices.[7]
In some fields, such as group theory or quantum mechanics, the identity matrix is sometimes denoted by a boldface one, $\mathbf {1} $, or called "id" (short for identity). Less frequently, some mathematics books use $U$ or $E$ to represent the identity matrix, standing for "unit matrix"[2] and the German word Einheitsmatrix respectively.[8]
In terms of a notation that is sometimes used to concisely describe diagonal matrices, the identity matrix can be written as
$I_{n}=\operatorname {diag} (1,1,\dots ,1).$
The identity matrix can also be written using the Kronecker delta notation:[8]
$(I_{n})_{ij}=\delta _{ij}.$
Properties
When $A$ is an $m\times n$ matrix, it is a property of matrix multiplication that
$I_{m}A=AI_{n}=A.$
In particular, the identity matrix serves as the multiplicative identity of the matrix ring of all $n\times n$ matrices, and as the identity element of the general linear group $GL(n)$, which consists of all invertible $n\times n$ matrices under the matrix multiplication operation. In particular, the identity matrix is invertible. It is an involutory matrix, equal to its own inverse. In this group, two square matrices have the identity matrix as their product exactly when they are the inverses of each other.
When $n\times n$ matrices are used to represent linear transformations from an $n$-dimensional vector space to itself, the identity matrix $I_{n}$ represents the identity function, for whatever basis was used in this representation.
The $i$th column of an identity matrix is the unit vector $e_{i}$, a vector whose $i$th entry is 1 and 0 elsewhere. The determinant of the identity matrix is 1, and its trace is $n$.
The identity matrix is the only idempotent matrix with non-zero determinant. That is, it is the only matrix such that:
1. When multiplied by itself, the result is itself
2. All of its rows and columns are linearly independent.
The principal square root of an identity matrix is itself, and this is its only positive-definite square root. However, every identity matrix with at least two rows and columns has an infinitude of symmetric square roots.[9]
The rank of an identity matrix $I_{n}$ equals the size $n$, i.e.:
$\operatorname {rank} (I_{n})=n.$
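The properties listed above are easy to confirm numerically, assuming NumPy is available:

```python
import numpy as np

n = 3
I = np.eye(n)                           # the identity matrix I_n

A = np.arange(6).reshape(2, 3)          # an arbitrary 2x3 matrix
print(np.allclose(np.eye(2) @ A, A))    # True: I_m A = A
print(np.allclose(A @ I, A))            # True: A I_n = A

print(np.linalg.det(I))                 # 1.0: the determinant is 1
print(np.trace(I))                      # 3.0: the trace is n
print(np.linalg.matrix_rank(I))         # 3:   the rank is n
print(np.allclose(I @ I, I))            # True: idempotent with nonzero determinant
```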
See also
• Binary matrix (zero-one matrix)
• Elementary matrix
• Exchange matrix
• Matrix of ones
• Pauli matrices (the identity matrix is the zeroth Pauli matrix)
• Householder transformation (the Householder matrix is built through the identity matrix)
• Square root of a 2 by 2 identity matrix
• Unitary matrix
• Zero matrix
Notes
1. "Identity matrix: intro to identity matrices (article)". Khan Academy. Retrieved 2020-08-14.
2. Pipes, Louis Albert (1963). Matrix Methods for Engineering. Prentice-Hall International Series in Applied Mathematics. Prentice-Hall. p. 91.
3. Roger Godement, Algebra, 1968.
4. ISO 80000-2:2009.
5. Ken Stroud, Engineering Mathematics, 2013.
6. ISO 80000-2:2019.
7. Weisstein, Eric W. "Unit Matrix". mathworld.wolfram.com. Retrieved 2021-05-05.
8. Weisstein, Eric W. "Identity Matrix". mathworld.wolfram.com. Retrieved 2020-08-14.
9. Mitchell, Douglas W. (November 2003). "87.57 Using Pythagorean triples to generate square roots of $I_{2}$". The Mathematical Gazette. 87 (510): 499–500. doi:10.1017/S0025557200173723. JSTOR 3621289.
Unit measure
Unit measure is an axiom of probability theory[1] that states that the probability of the entire sample space is equal to one (unity); that is, P(S)=1 where S is the sample space. Loosely speaking, it means that S must be chosen so that when the experiment is performed, something happens. The term measure here refers to the measure-theoretic approach to probability.
Violations of unit measure have been reported in arguments about the outcomes of events [2][3] under which events acquire "probabilities" that are not the probabilities of probability theory. In situations such as these the term "probability" serves as a false premise to the associated argument.
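For illustration, raw weights on a finite sample space can be normalized so that the axiom holds exactly; a minimal Python sketch (the outcome weights are hypothetical):

from fractions import Fraction

# hypothetical unnormalized weights on the sample space S = {a, b, c}
weights = {"a": Fraction(2), "b": Fraction(3), "c": Fraction(5)}
total = sum(weights.values())
P = {outcome: w / total for outcome, w in weights.items()}
# unit measure: the probability of the entire sample space is one
assert sum(P.values()) == 1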
References
1. A. Kolmogorov, "Foundations of the theory of probability" 1933. English translation by Nathan Morrison 1956 copyright Chelsea Publishing Company.
2. R. Christensen and T. Reichert: "Unit measure violations in pattern recognition: ambiguity and irrelevancy" Pattern Recognition, 8, No. 4 1976.
3. T. Oldberg and R. Christensen "Erratic measure" NDE for the Energy Industry 1995, American Society of Mechanical Engineers, New York, NY.
Versor
In mathematics, a versor is a quaternion of norm one (a unit quaternion). Each versor has the form
$q=\exp(a\mathbf {r} )=\cos a+\mathbf {r} \sin a,\quad \mathbf {r} ^{2}=-1,\quad a\in [0,\pi ],$
where the r2 = −1 condition means that r is a unit-length vector quaternion (or that the first component of r is zero, and the last three components of r are a unit vector in 3 dimensions). The corresponding 3-dimensional rotation has the angle 2a about the axis r in axis–angle representation. In case a = π/2 (a right angle), then $q=\mathbf {r} $, and the resulting unit vector is termed a right versor.
The collection of versors with quaternion multiplication forms a group, and the set of versors is a 3-sphere in the 4-dimensional quaternion algebra.
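The defining form is easy to check numerically; a minimal sketch assuming NumPy, with a quaternion stored as a 4-vector (scalar part first):

import numpy as np

a = 0.9
r = np.array([1.0, 2.0, 2.0])
r /= np.linalg.norm(r)  # unit 3-vector, so the pure quaternion r satisfies r^2 = -1
q = np.concatenate(([np.cos(a)], np.sin(a) * r))  # exp(a r) = cos a + r sin a
assert np.isclose(np.linalg.norm(q), 1.0)  # a versor has norm one
# a = pi/2 gives a right versor: zero scalar part, q = r
q_right = np.concatenate(([np.cos(np.pi / 2)], np.sin(np.pi / 2) * r))
assert np.allclose(q_right, np.concatenate(([0.0], r)))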
Presentation on 3- and 2-spheres
Hamilton denoted the versor of a quaternion q by the symbol Uq. He was then able to display the general quaternion in polar coordinate form
q = Tq Uq,
where Tq is the norm of q. The norm of a versor is always equal to one; hence they occupy the unit 3-sphere in H. Examples of versors include the eight elements of the quaternion group. Of particular importance are the right versors, which have angle π/2. These versors have zero scalar part, and so are vectors of length one (unit vectors). The right versors form a sphere of square roots of −1 in the quaternion algebra. The generators i, j, and k are examples of right versors, as well as their additive inverses. Other versors include the twenty-four Hurwitz quaternions that have the norm 1 and form vertices of a 24-cell polychoron.
Hamilton defined a quaternion as the quotient of two vectors. A versor can be defined as the quotient of two unit vectors. For any fixed plane Π the quotient of two unit vectors lying in Π depends only on the directed angle between them, the same a as in the unit vector–angle representation of a versor explained above. That is why it may be natural to understand corresponding versors as directed arcs that connect pairs of unit vectors and lie on a great circle formed by the intersection of Π with the unit sphere, where the plane Π passes through the origin. Arcs of the same direction and length (or, equivalently, the same subtended angle in radians) are equivalent, i.e. define the same versor.
Such an arc, although lying in the three-dimensional space, does not represent a path of a point rotating as described with the sandwiched product with the versor. Indeed, it represents the left multiplication action of the versor on quaternions that preserves the plane Π and the corresponding great circle of 3-vectors. The 3-dimensional rotation defined by the versor has the angle two times the arc's subtended angle, and preserves the same plane. It is a rotation about the corresponding vector r, that is perpendicular to Π.
On three unit vectors, Hamilton writes[1]
$q=\beta :\alpha =OB:OA$ and
$q'=\gamma :\beta =OC:OB$
imply
$q'q=\gamma :\alpha =OC:OA.$
Multiplication of quaternions of norm one corresponds to the (non-commutative) "addition" of great circle arcs on the unit sphere. Any pair of great circles either is the same circle or has two intersection points. Hence, one can always move the point B and the corresponding vector to one of these points such that the beginning of the second arc will be the same as the end of the first arc.
An equation
$\exp(c\mathbf {r} )\exp(a\mathbf {s} )=\exp(b\mathbf {t} )\!$
implicitly specifies the unit vector–angle representation for the product of two versors. Its solution is an instance of the general Campbell–Baker–Hausdorff formula in Lie group theory. As the 3-sphere represented by versors in $\mathbb {H} $ is a 3-parameter Lie group, practice with versor compositions is a step into Lie theory. Evidently versors are the image of the exponential map applied to a ball of radius π in the quaternion subspace of vectors.
Versors compose as aforementioned vector arcs, and Hamilton referred to this group operation as "the sum of arcs", but as quaternions they simply multiply.
The geometry of elliptic space has been described as the space of versors.[2]
Representation of SO(3)
The orthogonal group in three dimensions, rotation group SO(3), is frequently interpreted with versors via the inner automorphism $q\mapsto u^{-1}qu$ where u is a versor. Indeed, if
$u=\exp(ar)$ and vector s is perpendicular to r,
then
$u^{-1}su=s\cos 2a+sr\sin 2a$
by calculation.[3] The plane $\{x+yr:(x,y)\in \mathbb {R} ^{2}\}\subset H$ is isomorphic to $\mathbb {C} $ and the inner automorphism, by commutativity, reduces to the identity mapping there. Since quaternions can be interpreted as an algebra of two complex dimensions, the rotation action can also be viewed through the special unitary group SU(2).
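This identity can be verified numerically; a minimal sketch assuming NumPy, with quaternions stored as (w, x, y, z) arrays and the illustrative choices r = k and s = i:

import numpy as np

def qmul(p, q):  # Hamilton product of two quaternions
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def qconj(q):  # for a versor, the conjugate equals the inverse
    return np.array([q[0], -q[1], -q[2], -q[3]])

a = 0.7
u = np.array([np.cos(a), 0.0, 0.0, np.sin(a)])  # u = exp(a k)
s = np.array([0.0, 1.0, 0.0, 0.0])              # s = i, perpendicular to k
k = np.array([0.0, 0.0, 0.0, 1.0])
lhs = qmul(qmul(qconj(u), s), u)                # u^{-1} s u
rhs = np.cos(2*a) * s + np.sin(2*a) * qmul(s, k)
assert np.allclose(lhs, rhs)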
For a fixed r, versors of the form exp(ar) where a ∈ (−π, π], form a subgroup isomorphic to the circle group. Orbits of the left multiplication action of this subgroup are fibers of a fiber bundle over the 2-sphere, known as Hopf fibration in the case r = i; other vectors give isomorphic, but not identical fibrations. In 2003 David W. Lyons[4] wrote "the fibers of the Hopf map are circles in S3" (page 95). Lyons gives an elementary introduction to quaternions to elucidate the Hopf fibration as a mapping on unit quaternions.
Versors have been used to represent rotations of the Bloch sphere with quaternion multiplication.[5]
Elliptic space
The facility of versors illustrates elliptic geometry, in particular elliptic space, a three-dimensional realm of rotations. The versors are the points of this elliptic space, though they refer to rotations in 4-dimensional Euclidean space. Given two fixed versors u and v, the mapping $q\mapsto uqv$ is an elliptic motion. If one of the fixed versors is 1, then the motion is a Clifford translation of the elliptic space, named after William Kingdon Clifford who was a proponent of the space. An elliptic line through versor u is $\{ue^{ar}:0\leq a<\pi \}.$ Parallelism in the space is expressed by Clifford parallels. One of the methods of viewing elliptic space uses the Cayley transform to map the versors to $\mathbb {R} ^{3}$.
Hyperbolic versor
A hyperbolic versor is a generalization of quaternionic versors to indefinite orthogonal groups, such as Lorentz group. It is defined as a quantity of the form
$\exp(a\mathbf {r} )=\cosh a+\mathbf {r} \sinh a$ where $\mathbf {r} ^{2}=+1.$
Such elements arise in algebras of mixed signature, for example split-complex numbers or split-quaternions. It was the algebra of tessarines discovered by James Cockle in 1848 that first provided hyperbolic versors. In fact, James Cockle wrote the above equation (with j in place of r) when he found that the tessarines included the new type of imaginary element.
This versor was used by Homersham Cox (1882/83) in relation to quaternion multiplication.[6][7] The primary exponent of hyperbolic versors was Alexander Macfarlane as he worked to shape quaternion theory to serve physical science.[8] He saw the modelling power of hyperbolic versors operating on the split-complex number plane, and in 1891 he introduced hyperbolic quaternions to extend the concept to 4-space. Problems in that algebra led to use of biquaternions after 1900. In a widely circulated review of 1899, Macfarlane said:
...the root of a quadratic equation may be versor in nature or scalar in nature. If it is versor in nature, then the part affected by the radical involves the axis perpendicular to the plane of reference, and this is so, whether the radical involves the square root of minus one or not. In the former case the versor is circular, in the latter hyperbolic.[9]
Today the concept of a one-parameter group subsumes the concepts of versor and hyperbolic versor as the terminology of Sophus Lie has replaced that of Hamilton and Macfarlane. In particular, for each r such that r r = +1 or r r = −1, the mapping $a\mapsto \exp(a\,\mathbf {r} )$ takes the real line to a group of hyperbolic or ordinary versors. In the ordinary case, when r and −r are antipodes on a sphere, the one-parameter groups have the same points but are oppositely directed. In physics, this aspect of rotational symmetry is termed a doublet.
In 1911 Alfred Robb published his Optical Geometry of Motion in which he identified the parameter rapidity which specifies a change in frame of reference. This rapidity parameter corresponds to the real variable in a one-parameter group of hyperbolic versors. With the further development of special relativity the action of a hyperbolic versor came to be called a Lorentz boost.
Lie theory
Main article: Lie theory
Sophus Lie was less than a year old when Hamilton first described quaternions, but Lie's name has become associated with all groups generated by exponentiation. The set of versors with their multiplication has been denoted Sl(1,q) by Robert Gilmore in his text on Lie theory.[10] Sl(1,q) is the special linear group of one dimension over quaternions, the "special" indicating that all elements are of norm one. The group is isomorphic to SU(2,c), a special unitary group, a frequently used designation since quaternions and versors are sometimes considered anachronistic for group theory. The special orthogonal group SO(3,r) of rotations in three dimensions is closely related: it is a 2:1 homomorphic image of SU(2,c).
The subspace $\{xi+yj+zk:x,y,z\in R\}\subset H$ is called the Lie algebra of the group of versors. The commutator product $[u,v]=uv-vu,$ which is just twice the cross product of two vector quaternions, forms the multiplication in the Lie algebra. The close relation to SU(2,c) and SO(3,r) is evident in the isomorphism of their Lie algebras.[10]
Lie groups that contain hyperbolic versors include the group on the unit hyperbola and the special unitary group SU(1,1).
Etymology
The word is derived from Latin versari = "to turn" with the suffix -or forming a noun from the verb (i.e. versor = "the turner"). It was introduced by William Rowan Hamilton in the 1840s in the context of his quaternion theory.
See also
• cis (mathematics) (cis(x) = cos(x) + i sin(x))
• Quaternions and spatial rotation
• Rotations in 4-dimensional Euclidean space
• Turn (geometry)
Notes
1. Elements of Quaternions, 2nd edition, v. 1, p. 146
2. Harold Scott MacDonald Coxeter (1950) Review of "Quaternions and Elliptic Space" (by Georges Lemaître) from Mathematical Reviews
3. Rotation representation
4. Lyons, David W. (April 2003), "An Elementary Introduction to the Hopf Fibration" (PDF), Mathematics Magazine, 76 (2): 87–98, CiteSeerX 10.1.1.583.3499, doi:10.2307/3219300, ISSN 0025-570X, JSTOR 3219300
5. K. B. Wharton, D. Koch (2015) "Unit quaternions and the Bloch Sphere", Journal of Physics A 48(23) doi:10.1088/1751-8113/48/23/235302 MR3355237
6. Cox, H. (1883) [1882]. "On the Application of Quaternions and Grassmann's Ausdehnungslehre to different kinds of Uniform Space". Transactions of the Cambridge Philosophical Society. 13: 69–143.
7. Cox, H. (1883) [1882]. "On the Application of Quaternions and Grassmann's Ausdehnungslehre to different kinds of Uniform Space". Proc. Camb. Phil. Soc. 4: 194–196.
8. Alexander Macfarlane (1894) Papers on Space Analysis, especially papers #2, 3, & 5, B. Westerman, New York, weblink from archive.org
9. Science, 9:326 (1899)
10. Robert Gilmore (1974) Lie Groups, Lie Algebras and some of their Applications, chapter 5: Some simple examples, pages 120–35, Wiley ISBN 0-471-30179-5 Gilmore denotes the real, complex, and quaternion division algebras by r, c, and q, rather than the more common R, C, and H.
References
• William Rowan Hamilton (1844 to 1850) On quaternions or a new system of imaginaries in algebra, Philosophical Magazine, link to David R. Wilkins collection at Trinity College, Dublin.
• William Rowan Hamilton (1899) Elements of Quaternions, 2nd edition, edited by Charles Jasper Joly, Longmans Green & Company. See pp. 135–147.
• Arthur Sherburne Hardy (1887) Elements of Quaternions, pp. 71,2 "Representation of Versors by spherical arcs" and pp. 112–8 "Applications to Spherical Trigonometry".
• Arthur Stafford Hathaway (1896) A Primer on Quaternions, Chapter 2: Turns, Rotations, Arc Steps, from Project Gutenberg
• Cibelle Celestino Silva, Roberto de Andrade Martins (2002) "Polar and Axial Vectors versus Quaternions", American Journal of Physics 70:958. Section IV: Versors and unitary vectors in the system of quaternions. Section V: Versor and unitary vectors in vector algebra.
• Pieter Molenbroeck (1891) Theorie der Quaternionen, Seite 48, "Darstellung der Versoren mittelst Bogen auf der Einheitskugel", Leiden: Brill.
External links
• Versor at Encyclopedia of Mathematics.
• Luis Ibáñez Quaternion tutorial Archived 2012-02-04 at the Wayback Machine from National Library of Medicine
Unit square
In mathematics, a unit square is a square whose sides have length 1. Often, the unit square refers specifically to the square in the Cartesian plane with corners at the four points (0, 0), (1, 0), (0, 1), and (1, 1).
Cartesian coordinates
In a Cartesian coordinate system with coordinates (x, y), a unit square is defined as a square consisting of the points where both x and y lie in a closed unit interval from 0 to 1.
That is, a unit square is the Cartesian product I × I, where I denotes the closed unit interval.
Complex coordinates
The unit square can also be thought of as a subset of the complex plane, the topological space formed by the complex numbers. In this view, the four corners of the unit square are at the four complex numbers 0, 1, i, and 1 + i.
Rational distance problem
Unsolved problem in mathematics:
Is there a point in the plane at a rational distance from all four corners of a unit square?
It is not known whether any point in the plane is a rational distance from all four vertices of the unit square.[1]
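A brute-force search over rational points with small denominators finds no example; a minimal Python sketch (restricted, for brevity, to points inside the square, with a hypothetical denominator bound N), using the fact that a nonnegative rational in lowest terms has a rational square root exactly when its numerator and denominator are both perfect squares:

from fractions import Fraction
from math import isqrt

def has_rational_sqrt(f):  # f a nonnegative Fraction in lowest terms
    a, b = f.numerator, f.denominator
    return isqrt(a)**2 == a and isqrt(b)**2 == b

corners = [(0, 0), (1, 0), (0, 1), (1, 1)]
N = 30  # denominator bound
for d in range(1, N + 1):
    for px in range(d + 1):
        for py in range(d + 1):
            x, y = Fraction(px, d), Fraction(py, d)
            dist2 = [(x - a)**2 + (y - b)**2 for a, b in corners]
            if all(has_rational_sqrt(f) for f in dist2):
                print(x, y)  # never printed for small denominators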
See also
• Unit circle
• Unit cube
• Unit sphere
References
1. Guy, Richard K. (1991), Unsolved Problems in Number Theory, Vol. 1 (2nd ed.), Springer-Verlag, pp. 181–185.
External links
• Weisstein, Eric W. "Unit square". MathWorld.
Unit tangent bundle
In Riemannian geometry, the unit tangent bundle of a Riemannian manifold (M, g), denoted by T1M, UT(M) or simply UTM, is the unit sphere bundle for the tangent bundle T(M). It is a fiber bundle over M whose fiber at each point is the unit sphere in the tangent bundle:
$\mathrm {UT} (M):=\coprod _{x\in M}\left\{v\in \mathrm {T} _{x}(M)\left|g_{x}(v,v)=1\right.\right\},$
where Tx(M) denotes the tangent space to M at x. Thus, elements of UT(M) are pairs (x, v), where x is some point of the manifold and v is some tangent direction (of unit length) to the manifold at x. The unit tangent bundle is equipped with a natural projection
$\pi :\mathrm {UT} (M)\to M,$
$\pi :(x,v)\mapsto x,$
which takes each point of the bundle to its base point. The fiber π−1(x) over each point x ∈ M is an (n−1)-sphere Sn−1, where n is the dimension of M. The unit tangent bundle is therefore a sphere bundle over M with fiber Sn−1.
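For concreteness, an element (x, v) of UT(M) can be produced numerically when M = S2 is the unit sphere in $\mathbb {R} ^{3}$ with the induced metric; a minimal sketch assuming NumPy:

import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=3)
x /= np.linalg.norm(x)        # a point on the sphere M = S^2
w = rng.normal(size=3)
v = w - np.dot(w, x) * x      # project onto the tangent plane T_x(M)
v /= np.linalg.norm(v)        # normalize so that g_x(v, v) = 1
# (x, v) is an element of UT(S^2); the fiber over x is a circle S^1
assert np.isclose(np.dot(x, v), 0.0) and np.isclose(np.dot(v, v), 1.0)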
The definition of unit sphere bundle can easily accommodate Finsler manifolds as well. Specifically, if M is a manifold equipped with a Finsler metric F : TM → R, then the unit sphere bundle is the subbundle of the tangent bundle whose fiber at x is the indicatrix of F:
$\mathrm {UT} _{x}(M)=\left\{v\in \mathrm {T} _{x}(M)\left|F(v)=1\right.\right\}.$
If M is an infinite-dimensional manifold (for example, a Banach, Fréchet or Hilbert manifold), then UT(M) can still be thought of as the unit sphere bundle for the tangent bundle T(M), but the fiber π−1(x) over x is then the infinite-dimensional unit sphere in the tangent space.
Structures
The unit tangent bundle carries a variety of differential geometric structures. The metric on M induces a contact structure on UTM. This is given in terms of a tautological one-form, defined at a point u of UTM (a unit tangent vector of M) by
$\theta _{u}(v)=g(u,\pi _{*}v)\,$
where $\pi _{*}$ is the pushforward along π of the vector v ∈ TuUTM.
Geometrically, this contact structure can be regarded as the distribution of (2n−2)-planes which, at the unit vector u, is the pullback of the orthogonal complement of u in the tangent space of M. This is a contact structure, for the fiber of UTM is obviously an integral manifold (the vertical bundle is everywhere in the kernel of θ), and the remaining tangent directions are filled out by moving up the fiber of UTM. Thus the maximal integral manifold of θ is (an open set of) M itself.
On a Finsler manifold, the contact form is defined by the analogous formula
$\theta _{u}(v)=g_{u}(u,\pi _{*}v)\,$
where gu is the fundamental tensor (the hessian of the Finsler metric). Geometrically, the associated distribution of hyperplanes at the point u ∈ UTxM is the inverse image under π* of the tangent hyperplane to the unit sphere in TxM at u.
The volume form θ∧dθn−1 defines a measure on UTM, known as the kinematic measure, or Liouville measure, that is invariant under the geodesic flow of M. As a Radon measure, the kinematic measure μ is defined on compactly supported continuous functions ƒ on UTM by
$\int _{UTM}f\,d\mu =\int _{M}dV(p)\int _{UT_{p}M}\left.f\right|_{UT_{p}M}\,d\mu _{p}$
where dV is the volume element on M, and μp is the standard rotationally-invariant Borel measure on the Euclidean sphere UTpM.
The Levi-Civita connection of M gives rise to a splitting of the tangent bundle
$T(UTM)=H\oplus V$
into a vertical space V = kerπ* and horizontal space H on which π* is a linear isomorphism at each point of UTM. This splitting induces a metric on UTM by declaring that this splitting be an orthogonal direct sum, and defining the metric on H by the pullback:
$g_{H}(v,w)=g(v,w),\quad v,w\in H$
and defining the metric on V as the induced metric from the embedding of the fiber UTxM into the Euclidean space TxM. Equipped with this metric and contact form, UTM becomes a Sasakian manifold.
Frenet–Serret formulas
In differential geometry, the Frenet–Serret formulas describe the kinematic properties of a particle moving along a differentiable curve in three-dimensional Euclidean space $\mathbb {R} ^{3}$, or the geometric properties of the curve itself irrespective of any motion. More specifically, the formulas describe the derivatives of the so-called tangent, normal, and binormal unit vectors in terms of each other. The formulas are named after the two French mathematicians who independently discovered them: Jean Frédéric Frenet, in his thesis of 1847, and Joseph Alfred Serret, in 1851. Vector notation and linear algebra currently used to write these formulas were not yet available at the time of their discovery.
"Binormal" redirects here. For the category-theoretic meaning of this word, see normal morphism.
The tangent, normal, and binormal unit vectors, often called T, N, and B, or collectively the Frenet–Serret frame or TNB frame, together form an orthonormal basis spanning $\mathbb {R} ^{3}$ and are defined as follows:
• T is the unit vector tangent to the curve, pointing in the direction of motion.
• N is the normal unit vector, the derivative of T with respect to the arclength parameter of the curve, divided by its length.
• B is the binormal unit vector, the cross product of T and N.
The Frenet–Serret formulas are:
${\begin{aligned}{\frac {\mathrm {d} \mathbf {T} }{\mathrm {d} s}}&=\kappa \mathbf {N} ,\\{\frac {\mathrm {d} \mathbf {N} }{\mathrm {d} s}}&=-\kappa \mathbf {T} +\tau \mathbf {B} ,\\{\frac {\mathrm {d} \mathbf {B} }{\mathrm {d} s}}&=-\tau \mathbf {N} ,\end{aligned}}$
where d/ds is the derivative with respect to arclength, κ is the curvature, and τ is the torsion of the curve. The two scalars κ and τ effectively define the curvature and torsion of a space curve. The associated collection, T, N, B, κ, and τ, is called the Frenet–Serret apparatus. Intuitively, curvature measures the failure of a curve to be a straight line, while torsion measures the failure of a curve to be planar.
Definitions
Let r(t) be a curve in Euclidean space, representing the position vector of the particle as a function of time. The Frenet–Serret formulas apply to curves which are non-degenerate, which roughly means that they have nonzero curvature. More formally, in this situation the velocity vector r′(t) and the acceleration vector r′′(t) are required not to be proportional.
Let s(t) represent the arc length which the particle has moved along the curve in time t. The quantity s is used to give the curve traced out by the trajectory of the particle a natural parametrization by arc length (i.e. arc-length parametrization), since many different particle paths may trace out the same geometrical curve by traversing it at different rates. In detail, s is given by
$s(t)=\int _{0}^{t}\left\|\mathbf {r} '(\sigma )\right\|d\sigma .$
Moreover, since we have assumed that r′ ≠ 0, it follows that s(t) is a strictly monotonically increasing function. Therefore, it is possible to solve for t as a function of s, and thus to write r(s) = r(t(s)). The curve is thus parametrized in a preferred manner by its arc length.
With a non-degenerate curve r(s), parameterized by its arc length, it is now possible to define the Frenet–Serret frame (or TNB frame):
• The tangent unit vector T is defined as
$\mathbf {T} :={\frac {\mathrm {d} \mathbf {r} }{\mathrm {d} s}}.$
• The normal unit vector N is defined as
$\mathbf {N} :={{\frac {\mathrm {d} \mathbf {T} }{\mathrm {d} s}} \over \left\|{\frac {\mathrm {d} \mathbf {T} }{\mathrm {d} s}}\right\|},$
from which it follows, since T always has unit magnitude, that N (the change of T) is always perpendicular to T, since there is no change in length of T. Note that by calling curvature $\kappa =\left\|{\frac {\mathrm {d} \mathbf {T} }{\mathrm {d} s}}\right\|$ we automatically obtain the first relation.
• The binormal unit vector B is defined as the cross product of T and N:
$\mathbf {B} :=\mathbf {T} \times \mathbf {N} ,$
from which it follows that B is always perpendicular to both T and N. Thus, the three unit vectors T, N, and B are all perpendicular to each other.
The Frenet–Serret formulas are:
${\begin{aligned}{\frac {\mathrm {d} \mathbf {T} }{\mathrm {d} s}}&=\kappa \mathbf {N} ,\\{\frac {\mathrm {d} \mathbf {N} }{\mathrm {d} s}}&=-\kappa \mathbf {T} +\tau \mathbf {B} ,\\{\frac {\mathrm {d} \mathbf {B} }{\mathrm {d} s}}&=-\tau \mathbf {N} ,\end{aligned}}$
where $\kappa $ is the curvature and $\tau $ is the torsion.
The Frenet–Serret formulas are also known as Frenet–Serret theorem, and can be stated more concisely using matrix notation:[1]
${\begin{bmatrix}\mathbf {T'} \\\mathbf {N'} \\\mathbf {B'} \end{bmatrix}}={\begin{bmatrix}0&\kappa &0\\-\kappa &0&\tau \\0&-\tau &0\end{bmatrix}}{\begin{bmatrix}\mathbf {T} \\\mathbf {N} \\\mathbf {B} \end{bmatrix}}.$
This matrix is skew-symmetric.
Formulas in n dimensions
The Frenet–Serret formulas were generalized to higher-dimensional Euclidean spaces by Camille Jordan in 1874.
Suppose that r(s) is a smooth curve in $\mathbb {R} ^{n}$, and that the first n derivatives of r are linearly independent.[2] The vectors in the Frenet–Serret frame are an orthonormal basis constructed by applying the Gram-Schmidt process to the vectors (r′(s), r′′(s), ..., r(n)(s)).
In detail, the unit tangent vector is the first Frenet vector e1(s) and is defined as
$\mathbf {e} _{1}(s)={\frac {{\overline {\mathbf {e} _{1}}}(s)}{\|{\overline {\mathbf {e} _{1}}}(s)\|}}$
where
${\overline {\mathbf {e} _{1}}}(s)=\mathbf {r} '(s)$
The normal vector, sometimes called the curvature vector, indicates the deviance of the curve from being a straight line. It is defined as
${\overline {\mathbf {e} _{2}}}(s)=\mathbf {r} ''(s)-\langle \mathbf {r} ''(s),\mathbf {e} _{1}(s)\rangle \,\mathbf {e} _{1}(s)$
Its normalized form, the unit normal vector, is the second Frenet vector e2(s) and defined as
$\mathbf {e} _{2}(s)={\frac {{\overline {\mathbf {e} _{2}}}(s)}{\|{\overline {\mathbf {e} _{2}}}(s)\|}}$
The tangent and the normal vector at point s define the osculating plane at point r(s).
The remaining vectors in the frame (the binormal, trinormal, etc.) are defined similarly by
$\mathbf {e} _{j}(s)={\frac {{\overline {\mathbf {e} _{j}}}(s)}{\|{\overline {\mathbf {e} _{j}}}(s)\|}},$
${\overline {\mathbf {e} _{j}}}(s)=\mathbf {r} ^{(j)}(s)-\sum _{i=1}^{j-1}\langle \mathbf {r} ^{(j)}(s),\mathbf {e} _{i}(s)\rangle \,\mathbf {e} _{i}(s).$
The last vector in the frame is defined by the cross-product of the first $n-1$ vectors:
${\mathbf {e} _{n}}(s)={\mathbf {e} _{1}}(s)\times {\mathbf {e} _{2}}(s)\times \dots \times {\mathbf {e} _{n-2}}(s)\times {\mathbf {e} _{n-1}}(s)$
The real valued functions used below χi(s) are called generalized curvature and are defined as
$\chi _{i}(s)={\frac {\langle \mathbf {e} _{i}'(s),\mathbf {e} _{i+1}(s)\rangle }{\|\mathbf {r} '(s)\|}}$
The Frenet–Serret formulas, stated in matrix language, are
${\begin{bmatrix}\mathbf {e} _{1}'(s)\\\vdots \\\mathbf {e} _{n}'(s)\end{bmatrix}}=\|\mathbf {r} '(s)\|\cdot {\begin{bmatrix}0&\chi _{1}(s)&&0\\-\chi _{1}(s)&\ddots &\ddots &\\&\ddots &0&\chi _{n-1}(s)\\0&&-\chi _{n-1}(s)&0\end{bmatrix}}{\begin{bmatrix}\mathbf {e} _{1}(s)\\\vdots \\\mathbf {e} _{n}(s)\end{bmatrix}}$
Notice that as defined here, the generalized curvatures and the frame may differ slightly from the convention found in other sources. The top curvature $\chi _{n-1}$ (also called the torsion, in this context) and the last vector in the frame $\mathbf {e} _{n}$, differ by a sign
$\operatorname {or} \left(\mathbf {r} ^{(1)},\dots ,\mathbf {r} ^{(n)}\right)$
(the orientation of the basis) from the usual torsion. The Frenet–Serret formulas are invariant under flipping the sign of both $\chi _{n-1}$ and $\mathbf {e} _{n}$, and this change of sign makes the frame positively oriented. As defined above, the frame inherits its orientation from the jet of $\mathbf {r} $.
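Since the frame is just the Gram-Schmidt process applied to the successive derivatives, it is direct to compute numerically; a minimal sketch assuming NumPy, evaluated on the derivatives of the twisted cubic (t, t2, t3) at t = 1 (a hypothetical example):

import numpy as np

def frenet_frame(derivatives):  # rows: r'(s), r''(s), ..., r^(n)(s)
    frame = []
    for d in derivatives:
        e = d - sum(np.dot(d, f) * f for f in frame)  # subtract projections
        frame.append(e / np.linalg.norm(e))           # normalize
    return np.array(frame)

derivs = np.array([[1.0, 2.0, 3.0],   # r'(1)
                   [0.0, 2.0, 6.0],   # r''(1)
                   [0.0, 0.0, 6.0]])  # r'''(1)
E = frenet_frame(derivs)
assert np.allclose(E @ E.T, np.eye(3))  # the frame is orthonormal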
Proof
Consider the 3 by 3 matrix
$Q={\begin{bmatrix}\mathbf {T} \\\mathbf {N} \\\mathbf {B} \end{bmatrix}}$
The rows of this matrix are mutually perpendicular unit vectors: an orthonormal basis of $\mathbb {R} ^{3}$. As a result, the transpose of Q is equal to the inverse of Q: Q is an orthogonal matrix. It suffices to show that
$\left({\frac {dQ}{ds}}\right)Q^{\top }={\begin{bmatrix}0&\kappa &0\\-\kappa &0&\tau \\0&-\tau &0\end{bmatrix}}$
Note the first row of this equation already holds, by definition of the normal N and curvature κ, as well as the last row by the definition of torsion. So it suffices to show that dQ/dsQT is a skew-symmetric matrix. Since I = QQT, taking a derivative and applying the product rule yields
${\begin{aligned}0={\frac {\mathrm {d} I}{\mathrm {d} s}}=\left({\frac {\mathrm {d} Q}{\mathrm {d} s}}\right)Q^{\top }+Q\left({\frac {\mathrm {d} Q}{\mathrm {d} s}}\right)^{\top }\\\implies \left({\frac {\mathrm {d} Q}{\mathrm {d} s}}\right)Q^{\top }=-\left(\left({\frac {\mathrm {d} Q}{\mathrm {d} s}}\right)Q^{\top }\right)^{\top }\\\end{aligned}}$
which establishes the required skew-symmetry.[3]
Applications and interpretation
Kinematics of the frame
The Frenet–Serret frame consisting of the tangent T, normal N, and binormal B collectively forms an orthonormal basis of 3-space. At each point of the curve, this attaches a frame of reference or rectilinear coordinate system (see image).
The Frenet–Serret formulas admit a kinematic interpretation. Imagine that an observer moves along the curve in time, using the attached frame at each point as their coordinate system. The Frenet–Serret formulas mean that this coordinate system is constantly rotating as an observer moves along the curve. Hence, this coordinate system is always non-inertial. The angular velocity of the observer's coordinate system is proportional to the Darboux vector of the frame.
Concretely, suppose that the observer carries an (inertial) top (or gyroscope) with them along the curve. If the axis of the top points along the tangent to the curve, then it will be observed to rotate about its axis with angular velocity -τ relative to the observer's non-inertial coordinate system. If, on the other hand, the axis of the top points in the binormal direction, then it is observed to rotate with angular velocity -κ. This is easily visualized in the case when the curvature is a positive constant and the torsion vanishes. The observer is then in uniform circular motion. If the top points in the direction of the binormal, then by conservation of angular momentum it must rotate in the opposite direction of the circular motion. In the limiting case when the curvature vanishes, the observer's normal precesses about the tangent vector, and similarly the top will rotate in the opposite direction of this precession.
The general case is illustrated below. There are further illustrations on Wikimedia.
Applications
The kinematics of the frame have many applications in the sciences.
• In the life sciences, particularly in models of microbial motion, considerations of the Frenet–Serret frame have been used to explain the mechanism by which a moving organism in a viscous medium changes its direction.[4]
• In physics, the Frenet–Serret frame is useful when it is impossible or inconvenient to assign a natural coordinate system for a trajectory. Such is often the case, for instance, in relativity theory. Within this setting, Frenet–Serret frames have been used to model the precession of a gyroscope in a gravitational well.[5]
Graphical Illustrations
1. Example of a moving Frenet basis (T in blue, N in green, B in purple) along Viviani's curve.
2. On the example of a torus knot, the tangent vector T, the normal vector N, and the binormal vector B, along with the curvature κ(s) and the torsion τ(s), are displayed. At the peaks of the torsion function the rotation of the Frenet–Serret frame (T, N, B) around the tangent vector is clearly visible.
3. The kinematic significance of the curvature is best illustrated with plane curves (having constant torsion equal to zero). See the page on curvature of plane curves.
Frenet–Serret formulas in calculus
The Frenet–Serret formulas are frequently introduced in courses on multivariable calculus as a companion to the study of space curves such as the helix. A helix can be characterized by the height 2πh and radius r of a single turn. The curvature and torsion of a helix (with constant radius) are given by the formulas
$\kappa ={\frac {r}{r^{2}+h^{2}}}$
$\tau =\pm {\frac {h}{r^{2}+h^{2}}}.$
The sign of the torsion is determined by the right-handed or left-handed sense in which the helix twists around its central axis. Explicitly, the parametrization of a single turn of a right-handed helix with height 2πh and radius r is
x = r cos t
y = r sin t
z = h t
(0 ≤ t ≤ 2 π)
and, for a left-handed helix,
x = r cos t
y = −r sin t
z = h t
(0 ≤ t ≤ 2 π).
Note that these are not the arc length parametrizations (in which case, each of x, y, and z would need to be divided by ${\sqrt {h^{2}+r^{2}}}$).
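These curvature and torsion expressions can be recovered symbolically from the definitions; a minimal sketch assuming SymPy (and assuming its simplifier closes the trigonometric identities involved):

import sympy as sp

t, r, h = sp.symbols('t r h', positive=True)
c = sp.Matrix([r*sp.cos(t), r*sp.sin(t), h*t])    # right-handed helix
cp = c.diff(t)
speed = sp.sqrt(cp.dot(cp))                       # ds/dt = sqrt(r^2 + h^2)
T = sp.simplify(cp / speed)
Tp = T.diff(t)
kappa = sp.simplify(sp.sqrt(Tp.dot(Tp)) / speed)  # kappa = |dT/ds|
N = sp.simplify(Tp / sp.sqrt(Tp.dot(Tp)))
B = T.cross(N)
tau = sp.simplify(-B.diff(t).dot(N) / speed)      # from dB/ds = -tau N
assert sp.simplify(kappa - r / (r**2 + h**2)) == 0
assert sp.simplify(tau - h / (r**2 + h**2)) == 0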
In his expository writings on the geometry of curves, Rudy Rucker[6] employs the model of a slinky to explain the meaning of the torsion and curvature. The slinky, he says, is characterized by the property that the quantity
$A^{2}=h^{2}+r^{2}$
remains constant if the slinky is vertically stretched out along its central axis. (Here 2πh is the height of a single twist of the slinky, and r the radius.) In particular, curvature and torsion are complementary in the sense that the torsion can be increased at the expense of curvature by stretching out the slinky.
Taylor expansion
Repeatedly differentiating the curve and applying the Frenet–Serret formulas gives the following Taylor approximation to the curve near s = 0:[7]
$\mathbf {r} (s)=\mathbf {r} (0)+\left(s-{\frac {s^{3}\kappa ^{2}(0)}{6}}\right)\mathbf {T} (0)+\left({\frac {s^{2}\kappa (0)}{2}}+{\frac {s^{3}\kappa '(0)}{6}}\right)\mathbf {N} (0)+\left({\frac {s^{3}\kappa (0)\tau (0)}{6}}\right)\mathbf {B} (0)+o(s^{4}).$
For a generic curve with nonvanishing torsion, the projection of the curve onto various coordinate planes in the T, N, B coordinate system at s = 0 have the following interpretations:
• The osculating plane is the plane containing T and N. The projection of the curve onto this plane has the form:
$\mathbf {r} (0)+s\mathbf {T} (0)+{\frac {s^{2}\kappa (0)}{2}}\mathbf {N} (0)+o(s^{2}).$
This is a parabola up to terms of order o(s2), whose curvature at 0 is equal to κ(0).
• The normal plane is the plane containing N and B. The projection of the curve onto this plane has the form:
$\mathbf {r} (0)+\left({\frac {s^{2}\kappa (0)}{2}}+{\frac {s^{3}\kappa '(0)}{6}}\right)\mathbf {N} (0)+\left({\frac {s^{3}\kappa (0)\tau (0)}{6}}\right)\mathbf {B} (0)+o(s^{3})$
which is a cuspidal cubic to order o(s3).
• The rectifying plane is the plane containing T and B. The projection of the curve onto this plane is:
$\mathbf {r} (0)+\left(s-{\frac {s^{3}\kappa ^{2}(0)}{6}}\right)\mathbf {T} (0)+\left({\frac {s^{3}\kappa (0)\tau (0)}{6}}\right)\mathbf {B} (0)+o(s^{3})$
which traces out the graph of a cubic polynomial to order o(s3).
Ribbons and tubes
The Frenet–Serret apparatus allows one to define certain optimal ribbons and tubes centered around a curve. These have diverse applications in materials science and elasticity theory,[8] as well as to computer graphics.[9]
The Frenet ribbon[10] along a curve C is the surface traced out by sweeping the line segment [−N,N] generated by the unit normal along the curve. This surface is sometimes confused with the tangent developable, which is the envelope E of the osculating planes of C. This is perhaps because both the Frenet ribbon and E exhibit similar properties along C. Namely, the tangent planes of both sheets of E, near the singular locus C where these sheets intersect, approach the osculating planes of C; the tangent planes of the Frenet ribbon along C are equal to these osculating planes. The Frenet ribbon is in general not developable.
Congruence of curves
In classical Euclidean geometry, one is interested in studying the properties of figures in the plane which are invariant under congruence, so that if two figures are congruent then they must have the same properties. The Frenet–Serret apparatus presents the curvature and torsion as numerical invariants of a space curve.
Roughly speaking, two curves C and C′ in space are congruent if one can be rigidly moved to the other. A rigid motion consists of a combination of a translation and a rotation. A translation moves one point of C to a point of C′. The rotation then adjusts the orientation of the curve C to line up with that of C′. Such a combination of translation and rotation is called a Euclidean motion. In terms of the parametrization r(t) defining the first curve C, a general Euclidean motion of C is a composite of the following operations:
• (Translation) r(t) → r(t) + v, where v is a constant vector.
• (Rotation) r(t) + v → M(r(t) + v), where M is the matrix of a rotation.
The Frenet–Serret frame is particularly well-behaved with regard to Euclidean motions. First, since T, N, and B can all be given as successive derivatives of the parametrization of the curve, each of them is insensitive to the addition of a constant vector to r(t). Intuitively, the TNB frame attached to r(t) is the same as the TNB frame attached to the new curve r(t) + v.
This leaves only the rotations to consider. Intuitively, if we apply a rotation M to the curve, then the TNB frame also rotates. More precisely, the matrix Q whose rows are the TNB vectors of the Frenet–Serret frame changes by the matrix of a rotation
$Q\rightarrow QM.$
A fortiori, the matrix dQ/dsQT is unaffected by a rotation:
${\frac {\mathrm {d} (QM)}{\mathrm {d} s}}(QM)^{\top }={\frac {\mathrm {d} Q}{\mathrm {d} s}}MM^{\top }Q^{\top }={\frac {\mathrm {d} Q}{\mathrm {d} s}}Q^{\top }$
since MMT = I for the matrix of a rotation.
Hence the entries κ and τ of dQ/dsQT are invariants of the curve under Euclidean motions: if a Euclidean motion is applied to a curve, then the resulting curve has the same curvature and torsion.
Moreover, using the Frenet–Serret frame, one can also prove the converse: any two curves having the same curvature and torsion functions must be congruent by a Euclidean motion. Roughly speaking, the Frenet–Serret formulas express the Darboux derivative of the TNB frame. If the Darboux derivatives of two frames are equal, then a version of the fundamental theorem of calculus asserts that the curves are congruent. In particular, the curvature and torsion are a complete set of invariants for a curve in three-dimensions.
Other expressions of the frame
The formulas given above for T, N, and B depend on the curve being given in terms of the arclength parameter. This is a natural assumption in Euclidean geometry, because the arclength is a Euclidean invariant of the curve. In the terminology of physics, the arclength parametrization is a natural choice of gauge. However, it may be awkward to work with in practice. A number of other equivalent expressions are available.
Suppose that the curve is given by r(t), where the parameter t need no longer be arclength. Then the unit tangent vector T may be written as
$\mathbf {T} (t)={\frac {\mathbf {r} '(t)}{\|\mathbf {r} '(t)\|}}$
The normal vector N takes the form
$\mathbf {N} (t)={\frac {\mathbf {T} '(t)}{\|\mathbf {T} '(t)\|}}={\frac {\mathbf {r} '(t)\times \left(\mathbf {r} ''(t)\times \mathbf {r} '(t)\right)}{\left\|\mathbf {r} '(t)\right\|\,\left\|\mathbf {r} ''(t)\times \mathbf {r} '(t)\right\|}}$
The binormal B is then
$\mathbf {B} (t)=\mathbf {T} (t)\times \mathbf {N} (t)={\frac {\mathbf {r} '(t)\times \mathbf {r} ''(t)}{\|\mathbf {r} '(t)\times \mathbf {r} ''(t)\|}}$
An alternative way to arrive at the same expressions is to take the first three derivatives of the curve r′(t), r′′(t), r′′′(t), and to apply the Gram-Schmidt process. The resulting ordered orthonormal basis is precisely the TNB frame. This procedure also generalizes to produce Frenet frames in higher dimensions.
In terms of the parameter t, the Frenet–Serret formulas pick up an additional factor of ||r′(t)|| because of the chain rule:
${\frac {\mathrm {d} }{\mathrm {d} t}}{\begin{bmatrix}\mathbf {T} \\\mathbf {N} \\\mathbf {B} \end{bmatrix}}=\|\mathbf {r} '(t)\|{\begin{bmatrix}0&\kappa &0\\-\kappa &0&\tau \\0&-\tau &0\end{bmatrix}}{\begin{bmatrix}\mathbf {T} \\\mathbf {N} \\\mathbf {B} \end{bmatrix}}$
Explicit expressions for the curvature and torsion may be computed. For example,
$\kappa ={\frac {\|\mathbf {r} '(t)\times \mathbf {r} ''(t)\|}{\|\mathbf {r} '(t)\|^{3}}}$
The torsion may be expressed using a scalar triple product as follows,
$\tau ={\frac {[\mathbf {r} '(t),\mathbf {r} ''(t),\mathbf {r} '''(t)]}{\|\mathbf {r} '(t)\times \mathbf {r} ''(t)\|^{2}}}$
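Both formulas translate directly into code; a minimal sketch assuming NumPy, evaluated on the twisted cubic r(t) = (t, t2, t3) at t = 0.5 (a hypothetical example, not from the original text):

import numpy as np

def curvature_torsion(rp, rpp, rppp):  # r'(t), r''(t), r'''(t)
    c = np.cross(rp, rpp)
    kappa = np.linalg.norm(c) / np.linalg.norm(rp)**3
    tau = np.dot(c, rppp) / np.dot(c, c)  # scalar triple product in the numerator
    return kappa, tau

t = 0.5
rp = np.array([1.0, 2*t, 3*t**2])
rpp = np.array([0.0, 2.0, 6*t])
rppp = np.array([0.0, 0.0, 6.0])
print(curvature_torsion(rp, rpp, rppp))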
Special cases
If the curvature is always zero then the curve will be a straight line. Here the vectors N, B and the torsion are not well defined.
If the torsion is always zero then the curve will lie in a plane.
A curve may have nonzero curvature and zero torsion. For example, the circle of radius R given by r(t)=(R cos t, R sin t, 0) in the z=0 plane has zero torsion and curvature equal to 1/R. The converse, however, is false. That is, a regular curve with nonzero torsion must have nonzero curvature. (This is just the contrapositive of the fact that zero curvature implies zero torsion.)
A helix has constant curvature and constant torsion.
Plane curves
Further information: Plane curve
Given a curve contained in the x–y plane, its tangent vector T is also contained in that plane. Its binormal vector B can be naturally postulated to coincide with the normal to the plane (along the z axis). Finally, the curve normal can be found completing the right-handed system, N = B × T.[11] This form is well-defined even when the curvature is zero; for example, the normal to a straight line in a plane is perpendicular to the tangent, with both lying in the plane.
See also
• Affine geometry of curves
• Differentiable curve
• Darboux frame
• Kinematics
• Moving frame
• Tangential and normal components
• Radial, transverse, normal
Notes
1. Kühnel 2002, §1.9
2. Only the first n − 1 actually need to be linearly independent, as the final remaining frame vector en can be chosen as the unit vector orthogonal to the span of the others, such that the resulting frame is positively oriented.
3. This proof is likely due to Élie Cartan. See Griffiths (1974) where he gives the same proof, but using the Maurer-Cartan form. Our explicit description of the Maurer-Cartan form using matrices is standard. See, for instance, Spivak, Volume II, p. 37. A generalization of this proof to n dimensions is not difficult, but was omitted for the sake of exposition. Again, see Griffiths (1974) for details.
4. Crenshaw (1993).
5. Iyer and Vishveshwara (1993).
6. Rucker, Rudy (1999). "Watching Flies Fly: Kappatau Space Curves". San Jose State University. Archived from the original on 15 October 2004.
7. Kühnel 2002, p. 19
8. Goriely et al. (2006).
9. Hanson.
10. For terminology, see Sternberg (1964). Lectures on Differential Geometry. Englewood Cliffs, N.J.: Prentice-Hall. pp. 252–254. ISBN 9780135271506.
11. Weisstein, Eric W. "Normal Vector". MathWorld. Wolfram.
References
• Crenshaw, H.C.; Edelstein-Keshet, L. (1993), "Orientation by Helical Motion II. Changing the direction of the axis of motion", Bulletin of Mathematical Biology, 55 (1): 213–230, doi:10.1016/s0092-8240(05)80070-9, S2CID 50734771
• Etgen, Garret; Hille, Einar; Salas, Saturnino (1995), Salas and Hille's Calculus — One and Several Variables (7th ed.), John Wiley & Sons, p. 896
• Frenet, F. (1847), Sur les courbes à double courbure (PDF), Thèse, Toulouse. Abstract in Journal de Mathématiques Pures et Appliquées 17, 1852.
• Goriely, A.; Robertson-Tessi, M.; Tabor, M.; Vandiver, R. (2006), "Elastic growth models", BIOMAT-2006 (PDF), Springer-Verlag, archived from the original (PDF) on 2006-12-29.
• Griffiths, Phillip (1974), "On Cartan's method of Lie groups and moving frames as applied to uniqueness and existence questions in differential geometry", Duke Mathematical Journal, 41 (4): 775–814, doi:10.1215/S0012-7094-74-04180-5, S2CID 12966544.
• Guggenheimer, Heinrich (1977), Differential Geometry, Dover, ISBN 0-486-63433-7
• Hanson, A.J. (2007), "Quaternion Frenet Frames: Making Optimal Tubes and Ribbons from Curves" (PDF), Indiana University Technical Report
• Iyer, B.R.; Vishveshwara, C.V. (1993), "Frenet-Serret description of gyroscopic precession", Phys. Rev., D, 48 (12): 5706–5720, arXiv:gr-qc/9310019, Bibcode:1993PhRvD..48.5706I, doi:10.1103/physrevd.48.5706, PMID 10016237, S2CID 119458843
• Jordan, Camille (1874), "Sur la théorie des courbes dans l'espace à n dimensions", C. R. Acad. Sci. Paris, 79: 795–797
• Kühnel, Wolfgang (2002), Differential geometry, Student Mathematical Library, vol. 16, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2656-0, MR 1882174
• Serret, J. A. (1851), "Sur quelques formules relatives à la théorie des courbes à double courbure" (PDF), Journal de Mathématiques Pures et Appliquées, 16.
• Spivak, Michael (1999), A Comprehensive Introduction to Differential Geometry (Volume Two), Publish or Perish, Inc..
• Sternberg, Shlomo (1964), Lectures on Differential Geometry, Prentice-Hall
• Struik, Dirk J. (1961), Lectures on Classical Differential Geometry, Reading, Mass: Addison-Wesley.
External links
• Create your own animated illustrations of moving Frenet-Serret frames, curvature and torsion functions (Maple Worksheet)
• Rudy Rucker's KappaTau Paper.
• Very nice visual representation for the trihedron
Unit type
In the area of mathematical logic and computer science known as type theory, a unit type is a type that allows only one value (and thus can hold no information). The carrier (underlying set) associated with a unit type can be any singleton set. There is an isomorphism between any two such sets, so it is customary to talk about the unit type and ignore the details of its value. One may also regard the unit type as the type of 0-tuples, i.e. the product of no types.
This article is about the notion used in computer programming and type theory. For types of measurement units, see Units of measurement. For other uses, see Unit (disambiguation).
The unit type is the terminal object in the category of types and typed functions. It should not be confused with the zero or bottom type, which allows no values and is the initial object in this category. Similarly, the Boolean is the type with two values.
The unit type is implemented in most functional programming languages. The void type that is used in some imperative programming languages serves some of its functions, but because its carrier set is empty, it has some limitations (as detailed below).
In programming languages
Several computer programming languages provide a unit type to specify the result type of a function with the sole purpose of causing a side effect, and the argument type of a function that does not require arguments.
• In Haskell, Rust, and Elm, the unit type is called () and its only value is also (), reflecting the 0-tuple interpretation.
• In ML descendants (including OCaml, Standard ML, and F#), the type is called unit but the value is written as ().
• In Scala, the unit type is called Unit and its only value is written as ().
• In Common Lisp the type named NULL is a unit type which has one value, namely the symbol NIL. This should not be confused with the NIL type, which is the bottom type.
• In Python, there is a type called NoneType which allows the single value of None.
• In Swift, the unit type is called Void or () and its only value is also (), reflecting the 0-tuple interpretation.
• In Java, the unit type is called Void and its only value is null.
• In Go, the unit type is written struct{} and its value is struct{}{}.
• In PHP, the unit type is called null, whose only value is NULL itself.
• In JavaScript, both Null (its only value is null) and Undefined (its only value is undefined) are built-in unit types.
• In Kotlin, Unit is a singleton with only one value: the Unit object.
• In Ruby, nil is the only instance of the NilClass class.
• In C++, the std::monostate unit type was added in C++17. Before that, a custom unit type could be defined using an empty struct such as struct empty{}.
Void type as unit type
In C, C++, C#, and D, void is used to designate a function that does not return anything useful, or a function that accepts no arguments. The unit type in C is conceptually similar to an empty struct, but a struct without members is not allowed in the C language specification (this is allowed in C++). Instead, 'void' is used in a manner that simulates some, but not all, of the properties of the unit type, as detailed below. Like most imperative languages, C allows functions that do not return a value; these are specified as having the void return type. Such functions are called procedures in other imperative languages like Pascal, where a syntactic distinction, instead of type-system distinction, is made between functions and procedures.
Difference in calling convention
The first notable difference between a true unit type and the void type is that the unit type may always be the type of the argument to a function, but the void type cannot be the type of an argument in C, despite the fact that it may appear as the sole argument in the list. This problem is best illustrated by the following program, which is a compile-time error in C:
void f(void) {}
void g(void) {}
int main(void)
{
f(g()); // compile-time error here
return 0;
}
This issue does not arise in most programming practice in C, because since the void type carries no information, it is useless to pass it anyway; but it may arise in generic programming, such as C++ templates, where void must be treated differently from other types. In C++ however, empty classes are allowed, so it is possible to implement a real unit type; the above example becomes compilable as:
class unit_type {};
const unit_type the_unit;
unit_type f(unit_type) { return the_unit; }
unit_type g(unit_type) { return the_unit; }
int main()
{
f(g(the_unit));
return 0;
}
(For brevity, we're not worried in the above example whether the_unit is really a singleton; see singleton pattern for details on that issue.)
Difference in storage
The second notable difference is that the void type is special and can never be stored in a record type, i.e. in a struct or a class in C/C++. In contrast, the unit type can be stored in records in functional programming languages, i.e. it can appear as the type of a field; the above implementation of the unit type in C++ can also be stored. While this may seem a useless feature, it does allow one for instance to elegantly implement a set as a map to the unit type; in the absence of a unit type, one can still implement a set this way by storing some dummy value of another type for each key.
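For instance, in Python, where None is the single value of the unit-like NoneType (see above), a set can be sketched as a map to the unit type:

fruit_set = {}               # a set of strings, as a map to the unit type
fruit_set["apple"] = None    # insert: map the key to the single unit value
fruit_set["pear"] = None
print("apple" in fruit_set)  # membership test: True
del fruit_set["pear"]        # removal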
In Generics
In Java Generics, type parameters must be reference types. The wrapper type Void is often used when a unit type parameter is needed. Although the Void type can never have any instances, it does have one value, null (like all other reference types), so it acts as a unit type. In practice, any other non-instantiable type, e.g. Math, can also be used for this purpose, since they also have exactly one value, null.
public static Void f(Void x) { return null; }
public static Void g(Void x) { return null; }
public static void main(String[] args)
{
f(g(null));
}
Null type
Statically typed languages give a type to every possible expression, so they need to associate a type with the null expression; that type is defined for null alone and has it as its only value.
For example, in D, it is possible to declare functions that may only return null:
typeof(null) returnThatSpecialThing(){
return null;
}
null is the only value that typeof(null), a unit type, can have.
See also
• Singleton pattern (where a particular class has only one instance, but narrowly-typed non-nullable references to it are usually not held by other classes)
Unital (geometry)
In geometry, a unital is a set of n3 + 1 points arranged into subsets of size n + 1 so that every pair of distinct points of the set is contained in exactly one subset.[lower-alpha 1] This is equivalent to saying that a unital is a 2-(n3 + 1, n + 1, 1) block design. Some unitals may be embedded in a projective plane of order n2 (the subsets of the design become sets of collinear points in the projective plane). In this case of embedded unitals, every line of the plane intersects the unital in either 1 or n + 1 points. In the Desarguesian planes, PG(2,q2), the classical examples of unitals are given by nondegenerate Hermitian curves. There are also many non-classical examples. The first and so far only known unital with non-prime-power parameters, n = 6, was constructed by Bhaskar Bagchi and Sunanda Bagchi.[1] It is still unknown whether this unital can be embedded in a projective plane of order 36, if such a plane exists.
Unitals
Classical
We review some terminology used in projective geometry.
A correlation of a projective geometry is a bijection on its subspaces that reverses containment. In particular, a correlation interchanges points and hyperplanes.[2]
A correlation of order two is called a polarity.
A polarity is called a unitary polarity if its associated sesquilinear form s with companion automorphism α satisfies
s(u,v) = s(v,u)α for all vectors u, v of the underlying vector space.
A point is called an absolute point of a polarity if it lies on the image of itself under the polarity.
The absolute points of a unitary polarity of the projective geometry PG(d,F), for some d ≥ 2, is a nondegenerate Hermitian variety, and if d = 2 this variety is called a nondegenerate Hermitian curve.[3]
In PG(2,q2) for some prime power q, the set of points of a nondegenerate Hermitian curve form a unital,[4] which is called a classical unital.
Let ${\mathcal {H}}={\mathcal {H}}(2,q^{2})$ be a nondegenerate Hermitian curve in $PG(2,q^{2})$ for some prime power $q$. As all nondegenerate Hermitian curves in the same plane are projectively equivalent, ${\mathcal {H}}$ can be described in terms of homogeneous coordinates as follows:[5]
${\mathcal {H}}=\{(x_{0},x_{1},x_{2})\colon x_{0}^{q+1}+x_{1}^{q+1}+x_{2}^{q+1}=0\}.$
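The parameter count |H| = q3 + 1 can be checked by brute force for the smallest case q = 2, where the ambient field is GF(4); a minimal Python sketch (representing GF(4) elements as bit patterns reduced modulo x2 + x + 1, an implementation choice of this sketch):

def gmul(a, b):  # multiplication in GF(4) = GF(2)[x]/(x^2 + x + 1)
    r = 0
    for i in range(2):
        if (b >> i) & 1:
            r ^= a << i
    for d in (3, 2):  # reduce modulo x^2 + x + 1
        if (r >> d) & 1:
            r ^= 0b111 << (d - 2)
    return r

def norm(x):  # x^(q+1) = x^3 for q = 2; lands in GF(2) = {0, 1}
    return gmul(gmul(x, x), x)

points = [(x0, x1, x2)  # projective points, first nonzero coordinate scaled to 1
          for x0 in range(4) for x1 in range(4) for x2 in range(4)
          if (x0, x1, x2) != (0, 0, 0)
          and next(c for c in (x0, x1, x2) if c) == 1]
unital = [p for p in points if norm(p[0]) ^ norm(p[1]) ^ norm(p[2]) == 0]
assert len(unital) == 2**3 + 1  # q^3 + 1 = 9 points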
Ree unitals
Another family of unitals based on Ree groups was constructed by H. Lüneburg.[6] Let Γ = R(q) be the Ree group of type 2G2 of order (q3 + 1)q3(q − 1) where q = 32m+1. Let P be the set of all q3 + 1 Sylow 3-subgroups of Γ. Γ acts doubly transitively on this set by conjugation (it will be convenient to think of these subgroups as points that Γ is acting on.) For any S and T in P, the pointwise stabilizer, ΓS,T is cyclic of order q - 1, and thus contains a unique involution, μ. Each such involution fixes exactly q + 1 points of P. Construct a block design on the points of P whose blocks are the fixed point sets of these various involutions μ. Since Γ acts doubly transitively on P, this will be a 2-design with parameters 2-(q3 + 1, q + 1, 1) called a Ree unital.[7]
Lüneburg also showed that the Ree unitals cannot be embedded in projective planes of order $q^{2}$ (Desarguesian or not) in such a way that the automorphism group Γ is induced by a collineation group of the plane.[8] For q = 3, Grüning[9] proved that a Ree unital cannot be embedded in any projective plane of order 9.[10]
Unitals with $n=3$
In the four projective planes of order 9 (the Desarguesian plane PG(2,9), the Hall plane of order 9, the dual Hall plane of order 9 and the Hughes plane of order 9[lower-alpha 2]), an exhaustive computer search by Penttila and Royle[11] found 18 unitals (up to equivalence) with n = 3 in these four planes: two in PG(2,9) (both Buekenhout), four in the Hall plane (two Buekenhout, two not), hence another four in the dual Hall plane, and eight in the Hughes plane. However, one of the Buekenhout unitals in the Hall plane is self-dual,[12] and thus gets counted again in the dual Hall plane. Thus, there are 17 distinct embeddable unitals with n = 3. On the other hand, a nonexhaustive computer search found over 900 mutually nonisomorphic designs which are unitals with n = 3.[13]
Isomorphic versus equivalent unitals
Since unitals are block designs, two unitals are said to be isomorphic if there is a design isomorphism between them, that is, a bijection between the point sets which maps blocks to blocks. This concept does not take into account the property of embeddability, so to do so we say that two unitals, embedded in the same ambient plane, are equivalent if there is a collineation of the plane which maps one unital to the other.[10]
Buekenhout's Constructions
By examining the classical unital in $PG(2,q^{2})$ in the Bruck/Bose model, Buekenhout[14] provided two constructions, which together proved the existence of an embedded unital in any finite 2-dimensional translation plane. Metz[15] subsequently showed that one of Buekenhout's constructions actually yields non-classical unitals in all finite Desarguesian planes of square order at least 9. These Buekenhout-Metz unitals have been extensively studied.[16][17]
The core idea in Buekenhout's construction is the following: when one views $PG(2,q^{2})$ in the higher-dimensional Bruck/Bose model, which lies in $PG(4,q)$, the equation of the Hermitian curve satisfied by a classical unital becomes a quadric surface in $PG(4,q)$, namely a point-cone over a 3-dimensional ovoid if the line represented by the spread of the Bruck/Bose model meets the unital in one point, and a non-singular quadric otherwise. Because these objects have known intersection patterns with respect to planes of $PG(4,q)$, the resulting point set remains a unital in any translation plane whose generating spread contains all of the same lines as the original spread within the quadric surface. In the ovoidal-cone case, this forced intersection consists of a single line, and any spread can be mapped onto a spread containing this line, showing that every translation plane of this form admits an embedded unital.
Hermitian varieties
Hermitian varieties are in a sense a generalisation of quadrics, and occur naturally in the theory of polarities.
Definition
Let K be a field with an involutive automorphism $\theta $. Let n be an integer $\geq 1$ and V be an (n+1)-dimensional vector space over K.
A Hermitian variety H in PG(V) is the set of points whose representing vectors are isotropic with respect to a non-trivial Hermitian sesquilinear form on V.
Representation
Let $e_{0},e_{1},\ldots ,e_{n}$ be a basis of V. If a point p in the projective space has homogeneous coordinates $(X_{0},\ldots ,X_{n})$ with respect to this basis, it is on the Hermitian variety if and only if:
$\sum _{i,j=0}^{n}a_{ij}X_{i}X_{j}^{\theta }=0$
where $a_{ij}=a_{ji}^{\theta }$ and not all $a_{ij}=0$.
If one constructs the Hermitian matrix A with $A_{ij}=a_{ij}$, the equation can be written in a compact way:
$X^{t}AX^{\theta }=0$
where $X={\begin{bmatrix}X_{0}\\X_{1}\\\vdots \\X_{n}\end{bmatrix}}.$
Tangent spaces and singularity
Let p be a point on the Hermitian variety H. A line L through p is by definition tangent when it contains only one point (p itself) of the variety or lies completely on the variety. One can prove that these lines form a subspace, either a hyperplane or the full space. In the latter case, the point is singular.
Notes
1. Some authors, such as Barwick & Ebert 2008, p. 28, further require that n ≥ 3 to avoid small exceptional cases.
2. PG(2,9) and the Hughes plane are both self-dual.
Citations
1. Bagchi & Bagchi 1989, pp. 51–61.
2. Barwick & Ebert 2008, p. 15.
3. Barwick & Ebert 2008, p. 18.
4. Dembowski 1968, p. 104.
5. Barwick & Ebert 2008, p. 21.
6. Lüneburg 1966, pp. 256–259.
7. Assmus & Key 1992, p. 209.
8. Dembowski 1968, p. 105.
9. Grüning 1986, pp. 473–480.
10. Barwick & Ebert 2008, p. 29.
11. Penttila & Royle 1995, pp. 229–245.
12. Grüning, Klaus (1987-06-01). "A class of unitals of order $q$ which can be embedded in two different planes of order $q^{2}$". Journal of Geometry. 29 (1): 61–77. doi:10.1007/BF01234988. ISSN 1420-8997. S2CID 117872040.
13. Betten, Betten & Tonchev 2003, pp. 23–33.
14. Buekenhout, F. (1976-07-01). "Existence of unitals in finite translation planes of order $q^{2}$ with a kernel of order $q$". Geometriae Dedicata. 5 (2): 189–194. doi:10.1007/BF00145956. ISSN 1572-9168. S2CID 123037502.
15. Metz, Rudolf (1979-03-01). "On a class of unitals". Geometriae Dedicata. 8 (1): 125–126. doi:10.1007/BF00147935. ISSN 1572-9168. S2CID 119595725.
16. Baker, R.D; Ebert, G.L (1992-05-01). "On Buekenhout-Metz unitals of odd order". Journal of Combinatorial Theory, Series A. 60 (1): 67–84. doi:10.1016/0097-3165(92)90038-V. ISSN 0097-3165.
17. Ebert, G.L. (1992-03-01). "On Buekenhout-Metz unitals of even order". European Journal of Combinatorics. 13 (2): 109–117. doi:10.1016/0195-6698(92)90042-X. ISSN 0195-6698.
Sources
• Assmus, E. F. Jr; Key, J. D. (1992), Designs and Their Codes, Cambridge Tracts in Mathematics #103, Cambridge University Press, ISBN 0-521-41361-3
• Bagchi, S.; Bagchi, B. (1989), "Designs from pairs of finite fields. A cyclic unital U(6) and other regular Steiner 2-designs", Journal of Combinatorial Theory, Series A, 52: 51–61, doi:10.1016/0097-3165(89)90061-7
• Barwick, Susan; Ebert, Gary (2008), Unitals in Projective Planes, Springer, doi:10.1007/978-0-387-76366-8, ISBN 978-0-387-76364-4
• Betten, A.; Betten, D.; Tonchev, V.D. (2003), "Unitals and codes", Discrete Mathematics, 267 (1–3): 23–33, doi:10.1016/s0012-365x(02)00600-3
• Dembowski, Peter (1968), Finite geometries, Ergebnisse der Mathematik und ihrer Grenzgebiete, Band 44, Berlin, New York: Springer-Verlag, ISBN 3-540-61786-8, MR 0233275 – via Internet Archive
• Grüning, K. (1986), "Das Kleinste Ree-Unital", Archiv der Mathematik, 46 (5): 473–480, doi:10.1007/bf01210788, S2CID 115302560
• Lüneburg, H. (1966), "Some remarks concerning the Ree group of type (G2)", Journal of Algebra, 3 (2): 256–259, doi:10.1016/0021-8693(66)90014-7
• Penttila, T.; Royle, G.F. (1995), "Sets of type (m,n) in the affine and projective planes of order nine", Designs, Codes and Cryptography, 6 (3): 229–245, doi:10.1007/bf01388477, S2CID 43638589
| Wikipedia |
Unital map
In abstract algebra, a unital map on a C*-algebra is a map $\phi $ which preserves the identity element:
$\phi (I)=I.$
This condition appears often in the context of completely positive maps, especially when they represent quantum operations.
If $\phi $ is completely positive, it can always be represented as
$\phi (\rho )=\sum _{i}E_{i}\rho E_{i}^{\dagger }.$
(The $E_{i}$ are the Kraus operators associated with $\phi $). In this case, the unital condition can be expressed as
$\sum _{i}E_{i}E_{i}^{\dagger }=I.$
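For a concrete check, the following NumPy sketch verifies the unital condition for a simple qubit channel; the bit-flip channel and its flip probability p = 0.3 are illustrative choices, not part of the definition.

```python
import numpy as np

# Bit-flip channel: E_0 = sqrt(1-p) I, E_1 = sqrt(p) X (illustrative example).
p = 0.3
I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
kraus = [np.sqrt(1 - p) * I2, np.sqrt(p) * X]

def phi(rho):
    # phi(rho) = sum_i E_i rho E_i^dagger
    return sum(E @ rho @ E.conj().T for E in kraus)

# Unitality: sum_i E_i E_i^dagger = I, equivalently phi(I) = I.
print(np.allclose(sum(E @ E.conj().T for E in kraus), I2))  # True
print(np.allclose(phi(I2), I2))                             # True
```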
References
• Paulsen, Vern I. (2002). Completely bounded maps and operator algebras. Cambridge: Cambridge University Press. ISBN 0-511-06103-X. OCLC 228110971.
| Wikipedia |
Dilation (operator theory)
In operator theory, a dilation of an operator T on a Hilbert space H is an operator on a larger Hilbert space K, whose restriction to H composed with the orthogonal projection onto H is T.
More formally, let T be a bounded operator on some Hilbert space H, and H be a subspace of a larger Hilbert space H' . A bounded operator V on H' is a dilation of T if
$P_{H}\;V|_{H}=T$
where $P_{H}$ is an orthogonal projection on H.
V is said to be a unitary dilation (respectively, normal, isometric, etc.) if V is unitary (respectively, normal, isometric, etc.). T is said to be a compression of V. If an operator T has a spectral set $X$, we say that V is a normal boundary dilation or a normal $\partial X$ dilation if V is a normal dilation of T and $\sigma (V)\subseteq \partial X$.
Some texts impose an additional condition. Namely, that a dilation satisfy the following (calculus) property:
$P_{H}\;f(V)|_{H}=f(T)$
where f(T) is some specified functional calculus (for example, the polynomial or H∞ calculus). The utility of a dilation is that it allows the "lifting" of objects associated to T to the level of V, where the lifted objects may have nicer properties. See, for example, the commutant lifting theorem.
Applications
We can show that every contraction on a Hilbert space has a unitary dilation. A possible construction of this dilation is as follows. For a contraction T, the operator
$D_{T}=(I-T^{*}T)^{\frac {1}{2}}$
is positive, where the continuous functional calculus is used to define the square root. The operator $D_{T}$ is called the defect operator of T. Let V be the operator on
$H\oplus H$
defined by the matrix
$V={\begin{bmatrix}T&D_{T^{*}}\\\ D_{T}&-T^{*}\end{bmatrix}}.$
V is clearly a dilation of T. Also, T(I - T*T) = (I - TT*)T and a limit argument[1] imply
$TD_{T}=D_{T^{*}}T.$
Using this one can show, by calculating directly, that V is unitary, therefore a unitary dilation of T. This operator V is sometimes called the Julia operator of T.
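The construction is easy to test numerically in finite dimensions. The sketch below (with an arbitrary randomly generated contraction, not an example from the cited texts) builds the Julia operator with NumPy/SciPy and checks that it is unitary and that its compression to H is T.

```python
import numpy as np
from scipy.linalg import sqrtm

# Build a contraction T (operator norm 1/2) from a random complex matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
T = A / (2 * np.linalg.norm(A, 2))

I = np.eye(3)
D_T = sqrtm(I - T.conj().T @ T)     # defect operator of T
D_Ts = sqrtm(I - T @ T.conj().T)    # defect operator of T*

# Julia operator of T, acting on H (+) H.
V = np.block([[T, D_Ts],
              [D_T, -T.conj().T]])

print(np.allclose(V.conj().T @ V, np.eye(6)))  # True: V is unitary
print(np.allclose(V[:3, :3], T))               # True: compression of V to H is T
```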
Notice that when T is a real scalar, say $T=\cos \theta $, we have
$V={\begin{bmatrix}\cos \theta &\sin \theta \\\ \sin \theta &-\cos \theta \end{bmatrix}},$
which is just the unitary matrix describing rotation by θ. For this reason, the Julia operator V(T) is sometimes called the elementary rotation of T.
We note here that in the above discussion we have not required the calculus property for a dilation. Indeed, direct calculation shows the Julia operator fails to be a "degree-2" dilation in general, i.e. it need not be true that
$T^{2}=P_{H}\;V^{2}|_{H}$.
However, it can also be shown that any contraction has a unitary dilation which does have the calculus property above. This is Sz.-Nagy's dilation theorem. More generally, if ${\mathcal {R}}(X)$ is a Dirichlet algebra, any operator T with $X$ as a spectral set will have a normal $\partial X$ dilation with this property. This generalises Sz.-Nagy's dilation theorem as all contractions have the unit disc as a spectral set.
Notes
1. Sz.-Nagy & Foiaş 1970, 3.1.
References
• Constantinescu, T. (1996), Schur Parameters, Dilation and Factorization Problems, vol. 82, Birkhauser Verlag, ISBN 3-7643-5285-X.
• Paulsen, V. (2002), Completely Bounded Maps and Operator Algebras, Cambridge University Press, ISBN 0-521-81669-6.
• Sz.-Nagy, B.; Foiaş, C. (1970), Harmonic analysis of operators on Hilbert space, North-Holland Publishing Company, ISBN 9780720420357.
| Wikipedia |
Unitary divisor
In mathematics, a natural number a is a unitary divisor (or Hall divisor) of a number b if a is a divisor of b and if a and ${\frac {b}{a}}$ are coprime, having no common factor other than 1. Thus, 5 is a unitary divisor of 60, because 5 and ${\frac {60}{5}}=12$ have only 1 as a common factor, while 6 is a divisor but not a unitary divisor of 60, as 6 and ${\frac {60}{6}}=10$ have a common factor other than 1, namely 2. 1 is a unitary divisor of every natural number.
Equivalently, a divisor a of b is a unitary divisor if and only if every prime factor of a has the same multiplicity in a as it has in b.
The sum-of-unitary-divisors function is denoted by the lowercase Greek letter sigma thus: $\sigma ^{*}(n)$. The sum of the k-th powers of the unitary divisors is denoted by $\sigma _{k}^{*}(n)$:
$\sigma _{k}^{*}(n)=\sum _{d\,\mid \,n \atop \gcd(d,\,n/d)=1}\!\!d^{k}.$
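A direct (unoptimized) Python sketch of these definitions:

```python
from math import gcd

def unitary_divisors(n):
    # d is a unitary divisor of n iff d divides n and gcd(d, n/d) = 1.
    return [d for d in range(1, n + 1) if n % d == 0 and gcd(d, n // d) == 1]

def sigma_star(n, k=1):
    # Sum of the k-th powers of the unitary divisors of n.
    return sum(d ** k for d in unitary_divisors(n))

print(unitary_divisors(60))  # [1, 3, 4, 5, 12, 15, 20, 60]
print(sigma_star(60, 0))     # 8 unitary divisors, i.e. 2^3 for the 3 prime factors
```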
If the proper unitary divisors of a given number add up to that number, then that number is called a unitary perfect number.
The concept of a unitary divisor originates from R. Vaidyanathaswamy (1931) [The theory of multiplicative arithmetic functions. Transactions of the American Mathematical Society, 33(2), 579–662] who used the term block divisor.
Properties
The number of unitary divisors of a number n is $2^{k}$, where k is the number of distinct prime factors of n.
This is because each integer N > 1 is the product of positive powers $p^{r_{p}}$ of distinct prime numbers p. Thus every unitary divisor of N is the product, over a given subset S of the prime divisors {p} of N, of the prime powers $p^{r_{p}}$ for p ∈ S. If there are k prime factors, then there are exactly $2^{k}$ subsets S, and the statement follows.
The sum of the unitary divisors of n is odd if n is a power of 2 (including 1), and even otherwise.
Both the count and the sum of the unitary divisors of n are multiplicative functions of n that are not completely multiplicative. The Dirichlet generating function is
${\frac {\zeta (s)\zeta (s-k)}{\zeta (2s-k)}}=\sum _{n\geq 1}{\frac {\sigma _{k}^{*}(n)}{n^{s}}}.$
Every divisor of n is unitary if and only if n is square-free.
Odd unitary divisors
The sum of the k-th powers of the odd unitary divisors is
$\sigma _{k}^{(o)*}(n)=\sum _{{d\,\mid \,n \atop d\equiv 1{\pmod {2}}} \atop \gcd(d,n/d)=1}\!\!d^{k}.$
It is also multiplicative, with Dirichlet generating function
${\frac {\zeta (s)\zeta (s-k)(1-2^{k-s})}{\zeta (2s-k)(1-2^{k-2s})}}=\sum _{n\geq 1}{\frac {\sigma _{k}^{(o)*}(n)}{n^{s}}}.$
Bi-unitary divisors
A divisor d of n is a bi-unitary divisor if the greatest common unitary divisor of d and n/d is 1. This concept originates from D. Suryanarayana (1972). [The number of bi-unitary divisors of an integer, in The Theory of Arithmetic Functions, Lecture Notes in Mathematics 251: 273–282, New York, Springer–Verlag].
The number of bi-unitary divisors of n is a multiplicative function of n with average order $A\log x$ where[1]
$A=\prod _{p}\left({1-{\frac {p-1}{p^{2}(p+1)}}}\right)\ .$
A bi-unitary perfect number is one equal to the sum of its bi-unitary aliquot divisors. The only such numbers are 6, 60 and 90.[2]
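The following Python sketch is a direct, brute-force reading of these definitions (suitable only for small n, and not drawn from the cited sources); it confirms that 6, 60 and 90 are bi-unitary perfect.

```python
from math import gcd

def unitary_divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0 and gcd(d, n // d) == 1]

def gcud(a, b):
    # Greatest common unitary divisor of a and b.
    return max(d for d in unitary_divisors(a)
               if b % d == 0 and gcd(d, b // d) == 1)

def bi_unitary_divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0 and gcud(d, n // d) == 1]

for n in (6, 60, 90):
    aliquot = sum(d for d in bi_unitary_divisors(n) if d != n)
    print(n, aliquot == n)  # each prints True
```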
OEIS sequences
• OEIS: A034444 is $\sigma _{0}^{*}(n)$
• OEIS: A034448 is $\sigma _{1}^{*}(n)$
• OEIS: A034676 to OEIS: A034682 are $\sigma _{2}^{*}(n)$ to $\sigma _{8}^{*}(n)$
• OEIS: A068068 is $\sigma _{0}^{(o)*}(n)$
• OEIS: A192066 is $\sigma _{1}^{(o)*}(n)$
• OEIS: A064609 is $\sum _{i=1}^{n}\sigma _{1}(i)$
References
1. Ivić (1985) p.395
2. Sandor et al (2006) p.115
• Richard K. Guy (2004). Unsolved Problems in Number Theory. Springer-Verlag. p. 84. ISBN 0-387-20860-7. Section B3.
• Paulo Ribenboim (2000). My Numbers, My Friends: Popular Lectures on Number Theory. Springer-Verlag. p. 352. ISBN 0-387-98911-0.
• Cohen, Eckford (1959). "A class of residue systems (mod r) and related arithmetical functions. I. A generalization of Möbius inversion". Pacific J. Math. 9 (1): 13–23. doi:10.2140/pjm.1959.9.13. MR 0109806.
• Cohen, Eckford (1960). "Arithmetical functions associated with the unitary divisors of an integer". Mathematische Zeitschrift. 74: 66–80. doi:10.1007/BF01180473. MR 0112861. S2CID 53004302.
• Cohen, Eckford (1960). "The number of unitary divisors of an integer". American Mathematical Monthly. 67 (9): 879–880. doi:10.2307/2309455. JSTOR 2309455. MR 0122790.
• Cohen, Graeme L. (1990). "On an integers' infinitary divisors". Math. Comp. 54 (189): 395–411. Bibcode:1990MaCom..54..395C. doi:10.1090/S0025-5718-1990-0993927-5. MR 0993927.
• Cohen, Graeme L. (1993). "Arithmetic functions associated with infinitary divisors of an integer". Int. J. Math. Math. Sci. 16 (2): 373–383. doi:10.1155/S0161171293000456.
• Finch, Steven (2004). "Unitarism and Infinitarism" (PDF).
• Ivić, Aleksandar (1985). The Riemann zeta-function. The theory of the Riemann zeta-function with applications. A Wiley-Interscience Publication. New York etc.: John Wiley & Sons. p. 395. ISBN 0-471-80634-X. Zbl 0556.10026.
• Mathar, R. J. (2011). "Survey of Dirichlet series of multiplicative arithmetic functions". arXiv:1106.4038 [math.NT]. Section 4.2
• Sándor, József; Mitrinović, Dragoslav S.; Crstici, Borislav, eds. (2006). Handbook of number theory I. Dordrecht: Springer-Verlag. ISBN 1-4020-4215-9. Zbl 1151.11300.
• Toth, L. (2009). "On the bi-unitary analogues of Euler's arithmetical function and the gcd-sum function". J. Int. Seq. 12.
External links
• Weisstein, Eric W. "Unitary Divisor". MathWorld.
• Mathoverflow | Boolean ring of unitary divisors
Divisibility-based sets of integers
Overview
• Integer factorization
• Divisor
• Unitary divisor
• Divisor function
• Prime factor
• Fundamental theorem of arithmetic
Factorization forms
• Prime
• Composite
• Semiprime
• Pronic
• Sphenic
• Square-free
• Powerful
• Perfect power
• Achilles
• Smooth
• Regular
• Rough
• Unusual
Constrained divisor sums
• Perfect
• Almost perfect
• Quasiperfect
• Multiply perfect
• Hemiperfect
• Hyperperfect
• Superperfect
• Unitary perfect
• Semiperfect
• Practical
• Erdős–Nicolas
With many divisors
• Abundant
• Primitive abundant
• Highly abundant
• Superabundant
• Colossally abundant
• Highly composite
• Superior highly composite
• Weird
Aliquot sequence-related
• Untouchable
• Amicable (Triple)
• Sociable
• Betrothed
Base-dependent
• Equidigital
• Extravagant
• Frugal
• Harshad
• Polydivisible
• Smith
Other sets
• Arithmetic
• Deficient
• Friendly
• Solitary
• Sublime
• Harmonic divisor
• Descartes
• Refactorable
• Superperfect
| Wikipedia |
Unitary element
In mathematics, an element x of a *-algebra is unitary if it satisfies $x^{*}=x^{-1}.$
In functional analysis, a linear operator A from a Hilbert space into itself is called unitary if it is invertible and its inverse is equal to its own adjoint A∗, with the domain of A the same as that of A∗. See unitary operator for a detailed discussion. If the Hilbert space is finite-dimensional and an orthonormal basis has been chosen, then the operator A is unitary if and only if the matrix describing A with respect to this basis is a unitary matrix.
See also
• Normal element
• Self-adjoint – Element of algebra where x* equals x
• Unitary matrix – Complex matrix whose conjugate transpose equals its inverse
References
• Reed, M.; Simon, B. (1972). Methods of Mathematical Physics. Vol 2. Academic Press.
• Teschl, G. (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. Providence: American Mathematical Society.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
| Wikipedia |
Unitary matrix
In linear algebra, an invertible complex square matrix U is unitary if its conjugate transpose U* is also its inverse, that is, if
$U^{*}U=UU^{*}=UU^{-1}=I,$
where I is the identity matrix.
For matrices with orthogonality over the real number field, see orthogonal matrix. For the restriction on the allowed evolution of quantum systems that ensures the sum of probabilities of all possible outcomes of any event always equals 1, see unitarity.
In physics, especially in quantum mechanics, the conjugate transpose is referred to as the Hermitian adjoint of a matrix and is denoted by a dagger (†), so the equation above is written
$U^{\dagger }U=UU^{\dagger }=I.$
For real numbers, the analogue of a unitary matrix is an orthogonal matrix. Unitary matrices have significant importance in quantum mechanics because they preserve norms, and thus, probability amplitudes.
Properties
For any unitary matrix U of finite size, the following hold:
• Given two complex vectors x and y, multiplication by U preserves their inner product; that is, ⟨Ux, Uy⟩ = ⟨x, y⟩.
• U is normal ($U^{*}U=UU^{*}$).
• U is diagonalizable; that is, U is unitarily similar to a diagonal matrix, as a consequence of the spectral theorem. Thus, U has a decomposition of the form $U=VDV^{*},$ where V is unitary, and D is diagonal and unitary.
• $\left|\det(U)\right|=1$.
• Its eigenspaces are orthogonal.
• U can be written as $U=e^{iH}$, where e indicates the matrix exponential, i is the imaginary unit, and H is a Hermitian matrix.
For any nonnegative integer n, the set of all n × n unitary matrices with matrix multiplication forms a group, called the unitary group U(n).
Any square matrix with unit Euclidean norm is the average of two unitary matrices.[1]
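These properties are easy to check numerically. The NumPy sketch below obtains an (arbitrary) unitary matrix from the QR decomposition of a random complex matrix and tests several of the properties listed above.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U, _ = np.linalg.qr(A)  # the Q factor of a complex matrix is unitary

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

print(np.allclose(U.conj().T @ U, np.eye(4)))            # U*U = I
print(np.isclose(np.vdot(U @ x, U @ y), np.vdot(x, y)))  # <Ux, Uy> = <x, y>
print(np.isclose(abs(np.linalg.det(U)), 1.0))            # |det(U)| = 1
print(np.allclose(np.abs(np.linalg.eigvals(U)), 1.0))    # eigenvalues on unit circle
```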
Equivalent conditions
If U is a square, complex matrix, then the following conditions are equivalent:[2]
1. $U$ is unitary.
2. $U^{*}$ is unitary.
3. $U$ is invertible with $U^{-1}=U^{*}$.
4. The columns of $U$ form an orthonormal basis of $\mathbb {C} ^{n}$ with respect to the usual inner product. In other words, $U^{*}U=I$.
5. The rows of $U$ form an orthonormal basis of $\mathbb {C} ^{n}$ with respect to the usual inner product. In other words, $UU^{*}=I$.
6. $U$ is an isometry with respect to the usual norm. That is, $\|Ux\|_{2}=\|x\|_{2}$ for all $x\in \mathbb {C} ^{n}$, where $ \|x\|_{2}={\sqrt {\sum _{i=1}^{n}|x_{i}|^{2}}}$.
7. $U$ is a normal matrix (equivalently, there is an orthonormal basis formed by eigenvectors of $U$) with eigenvalues lying on the unit circle.
Elementary constructions
2 × 2 unitary matrix
One general expression of a 2 × 2 unitary matrix is
$U={\begin{bmatrix}a&b\\-e^{i\varphi }b^{*}&e^{i\varphi }a^{*}\\\end{bmatrix}},\qquad \left|a\right|^{2}+\left|b\right|^{2}=1\ ,$
which depends on 4 real parameters (the phase of a, the phase of b, the relative magnitude between a and b, and the angle φ). The form is configured so the determinant of such a matrix is
$\det(U)=e^{i\varphi }~.$
The subgroup of those elements $\ U\ $ with $\ \det(U)=1\ $ is called the special unitary group SU(2).
Among several alternative forms, the matrix U can be written in this form:
$\ U=e^{i\varphi /2}{\begin{bmatrix}e^{i\alpha }\cos \theta &e^{i\beta }\sin \theta \\-e^{-i\beta }\sin \theta &e^{-i\alpha }\cos \theta \\\end{bmatrix}}\ ,$
where $\ e^{i\alpha }\cos \theta =a\ $ and $\ e^{i\beta }\sin \theta =b\ ,$ above, and the angles $\ \varphi ,\alpha ,\beta ,\theta \ $ can take any values.
By introducing $\ \alpha =\psi +\Delta \ $ and $\ \beta =\psi -\Delta \ ,$ the matrix U has the following factorization:
$U=e^{i\varphi /2}{\begin{bmatrix}e^{i\psi }&0\\0&e^{-i\psi }\end{bmatrix}}{\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \\\end{bmatrix}}{\begin{bmatrix}e^{i\Delta }&0\\0&e^{-i\Delta }\end{bmatrix}}~.$
This expression highlights the relation between 2 × 2 unitary matrices and 2 × 2 orthogonal matrices of angle θ.
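The factorization can be verified numerically; in the NumPy sketch below the four angles are arbitrary sample values, not values with any special meaning.

```python
import numpy as np

phi, psi, theta, Delta = 0.7, 1.1, 0.4, -0.3  # arbitrary sample angles

Rz = lambda a: np.diag([np.exp(1j * a), np.exp(-1j * a)])  # diagonal phase factor
Ry = np.array([[np.cos(theta), np.sin(theta)],
               [-np.sin(theta), np.cos(theta)]])           # real rotation block

U = np.exp(1j * phi / 2) * Rz(psi) @ Ry @ Rz(Delta)

print(np.allclose(U.conj().T @ U, np.eye(2)))          # True: U is unitary
print(np.isclose(np.linalg.det(U), np.exp(1j * phi)))  # True: det(U) = e^{i phi}
```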
Another factorization is[3]
$U={\begin{bmatrix}\cos \rho &-\sin \rho \\\sin \rho &\;\cos \rho \\\end{bmatrix}}{\begin{bmatrix}e^{i\xi }&0\\0&e^{i\zeta }\end{bmatrix}}{\begin{bmatrix}\;\cos \sigma &\sin \sigma \\-\sin \sigma &\cos \sigma \\\end{bmatrix}}~.$
Many other factorizations of a unitary matrix in basic matrices are possible.[4][5][6][7]
See also
• Hermitian matrix and
Skew-Hermitian matrix
• Matrix decomposition
• Orthogonal group O(n)
• Special orthogonal group SO(n)
• Orthogonal matrix
• Semi-orthogonal matrix
• Quantum logic gate
• Special Unitary group SU(n)
• Symplectic matrix
• Unitary group U(n)
• Unitary operator
References
1. Li, Chi-Kwong; Poon, Edward (2002). "Additive decomposition of real matrices". Linear and Multilinear Algebra. 50 (4): 321–326. doi:10.1080/03081080290025507. S2CID 120125694.
2. Horn, Roger A.; Johnson, Charles R. (2013). Matrix Analysis. Cambridge University Press. doi:10.1017/CBO9781139020411. ISBN 9781139020411.
3. Führ, Hartmut; Rzeszotnik, Ziemowit (2018). "A note on factoring unitary matrices". Linear Algebra and Its Applications. 547: 32–44. doi:10.1016/j.laa.2018.02.017. ISSN 0024-3795. S2CID 125455174.
4. Williams, Colin P. (2011). "Quantum gates". In Williams, Colin P. (ed.). Explorations in Quantum Computing. Texts in Computer Science. London, UK: Springer. p. 82. doi:10.1007/978-1-84628-887-6_2. ISBN 978-1-84628-887-6.
5. Nielsen, M.A.; Chuang, Isaac (2010). Quantum Computation and Quantum Information. Cambridge, UK: Cambridge University Press. p. 20. ISBN 978-1-10700-217-3. OCLC 43641333.
6. Barenco, Adriano; Bennett, Charles H.; Cleve, Richard; DiVincenzo, David P.; Margolus, Norman; Shor, Peter; et al. (1 November 1995). "Elementary gates for quantum computation". Physical Review A. American Physical Society (APS). 52 (5): 3457–3467, esp.p. 3465. arXiv:quant-ph/9503016. doi:10.1103/physreva.52.3457. ISSN 1050-2947. PMID 9912645. S2CID 8764584.
7. Marvian, Iman (10 January 2022). "Restrictions on realizable unitary operations imposed by symmetry and locality". Nature Physics. 18 (3): 283–289. arXiv:2003.05524. doi:10.1038/s41567-021-01464-0. ISSN 1745-2481. S2CID 245840243.
See also:
Alhambra, Álvaro M. (10 January 2022). "Forbidden by symmetry". News & Views. Nature Physics. 18 (3): 235–236. doi:10.1038/s41567-021-01483-x. ISSN 1745-2481. S2CID 256745894. The physics of large systems is often understood as the outcome of the local operations among its components. Now, it is shown that this picture may be incomplete in quantum systems whose interactions are constrained by symmetries.
External links
• Weisstein, Eric W. "Unitary Matrix". MathWorld. Todd Rowland.
• Ivanova, O. A. (2001) [1994], "Unitary matrix", Encyclopedia of Mathematics, EMS Press
• "Show that the eigenvalues of a unitary matrix have modulus 1". Stack Exchange. March 28, 2016.
Matrix classes
Explicitly constrained entries
• Alternant
• Anti-diagonal
• Anti-Hermitian
• Anti-symmetric
• Arrowhead
• Band
• Bidiagonal
• Bisymmetric
• Block-diagonal
• Block
• Block tridiagonal
• Boolean
• Cauchy
• Centrosymmetric
• Conference
• Complex Hadamard
• Copositive
• Diagonally dominant
• Diagonal
• Discrete Fourier Transform
• Elementary
• Equivalent
• Frobenius
• Generalized permutation
• Hadamard
• Hankel
• Hermitian
• Hessenberg
• Hollow
• Integer
• Logical
• Matrix unit
• Metzler
• Moore
• Nonnegative
• Pentadiagonal
• Permutation
• Persymmetric
• Polynomial
• Quaternionic
• Signature
• Skew-Hermitian
• Skew-symmetric
• Skyline
• Sparse
• Sylvester
• Symmetric
• Toeplitz
• Triangular
• Tridiagonal
• Vandermonde
• Walsh
• Z
Constant
• Exchange
• Hilbert
• Identity
• Lehmer
• Of ones
• Pascal
• Pauli
• Redheffer
• Shift
• Zero
Conditions on eigenvalues or eigenvectors
• Companion
• Convergent
• Defective
• Definite
• Diagonalizable
• Hurwitz
• Positive-definite
• Stieltjes
Satisfying conditions on products or inverses
• Congruent
• Idempotent or Projection
• Invertible
• Involutory
• Nilpotent
• Normal
• Orthogonal
• Unimodular
• Unipotent
• Unitary
• Totally unimodular
• Weighing
With specific applications
• Adjugate
• Alternating sign
• Augmented
• Bézout
• Carleman
• Cartan
• Circulant
• Cofactor
• Commutation
• Confusion
• Coxeter
• Distance
• Duplication and elimination
• Euclidean distance
• Fundamental (linear differential equation)
• Generator
• Gram
• Hessian
• Householder
• Jacobian
• Moment
• Payoff
• Pick
• Random
• Rotation
• Seifert
• Shear
• Similarity
• Symplectic
• Totally positive
• Transformation
Used in statistics
• Centering
• Correlation
• Covariance
• Design
• Doubly stochastic
• Fisher information
• Hat
• Precision
• Stochastic
• Transition
Used in graph theory
• Adjacency
• Biadjacency
• Degree
• Edmonds
• Incidence
• Laplacian
• Seidel adjacency
• Tutte
Used in science and engineering
• Cabibbo–Kobayashi–Maskawa
• Density
• Fundamental (computer vision)
• Fuzzy associative
• Gamma
• Gell-Mann
• Hamiltonian
• Irregular
• Overlap
• S
• State transition
• Substitution
• Z (chemistry)
Related terms
• Jordan normal form
• Linear independence
• Matrix exponential
• Matrix representation of conic sections
• Perfect matrix
• Pseudoinverse
• Row echelon form
• Wronskian
• Mathematics portal
• List of matrices
• Category:Matrices
| Wikipedia |
Unitary perfect number
A unitary perfect number is an integer which is the sum of its positive proper unitary divisors, not including the number itself (a divisor d of a number n is a unitary divisor if d and n/d share no common factor other than 1). Some perfect numbers are not unitary perfect numbers, and some unitary perfect numbers are not ordinary perfect numbers.
Unsolved problem in mathematics:
Are there infinitely many unitary perfect numbers?
(more unsolved problems in mathematics)
Known examples
The number 60 is a unitary perfect number, because 1, 3, 4, 5, 12, 15, and 20 are its proper unitary divisors, and 1 + 3 + 4 + 5 + 12 + 15 + 20 = 60. The first five, and only known, unitary perfect numbers are $6=2\times 3$, $60=2^{2}\times 3\times 5$, $90=2\times 3^{2}\times 5$, $87360=2^{6}\times 3\times 5\times 7\times 13$, and $146361946186458562560000=2^{18}\times 3\times 5^{4}\times 7\times 11\times 13\times 19\times 37\times 79\times 109\times 157\times 313$ (sequence A002827 in the OEIS). The respective sums of their proper unitary divisors are as follows:
• 6 = 1 + 2 + 3
• 60 = 1 + 3 + 4 + 5 + 12 + 15 + 20
• 90 = 1 + 2 + 5 + 9 + 10 + 18 + 45
• 87360 = 1 + 3 + 5 + 7 + 13 + 15 + 21 + 35 + 39 + 64 + 65 + 91 + 105 + 192 + 195 + 273 + 320 + 448 + 455 + 832 + 960 + 1344 + 1365 + 2240 + 2496 + 4160 + 5824 + 6720 + 12480 + 17472 + 29120
• 146361946186458562560000 = 1 + 3 + 7 + 11 + ... + 13305631471496232960000 + 20908849455208366080000 + 48787315395486187520000 (4095 divisors in the sum)
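These values can be verified by brute force. The Python sketch below sums the proper unitary divisors using divisor pairs up to √n; trial division makes it practical only for the smaller examples.

```python
from math import gcd, isqrt

def unitary_aliquot_sum(n):
    # Sum of proper unitary divisors of n (divisor pairs found up to sqrt(n)).
    s = 0
    for d in range(1, isqrt(n) + 1):
        if n % d == 0:
            e = n // d
            if gcd(d, e) == 1:
                s += d + e
    return s - n  # remove n itself (paired with d = 1)

print(all(unitary_aliquot_sum(n) == n for n in (6, 60, 90, 87360)))  # True
print([n for n in range(2, 100000) if unitary_aliquot_sum(n) == n])  # [6, 60, 90, 87360]
```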
Properties
There are no odd unitary perfect numbers. This follows since $2^{d^{*}(n)}$ divides the sum of the unitary divisors of an odd number n, where $d^{*}(n)$ is the number of distinct prime factors of n. One gets this because the sum of all the unitary divisors is a multiplicative function and the sum of the unitary divisors of a prime power $p^{a}$ is $p^{a}+1$, which is even for all odd primes p. Therefore, an odd unitary perfect number must have only one distinct prime factor, and it is not hard to show that a prime power cannot be a unitary perfect number, since there are not enough divisors.
It is not known whether or not there are infinitely many unitary perfect numbers, or indeed whether there are any further examples beyond the five already known. A sixth such number would have at least nine odd prime factors.[1]
References
1. Wall, Charles R. (1988). "New unitary perfect numbers have at least nine odd components". Fibonacci Quarterly. 26 (4): 312–317. ISSN 0015-0517. MR 0967649. Zbl 0657.10003.
• Richard K. Guy (2004). Unsolved Problems in Number Theory. Springer-Verlag. pp. 84–86. ISBN 0-387-20860-7. Section B3.
• Paulo Ribenboim (2000). My Numbers, My Friends: Popular Lectures on Number Theory. Springer-Verlag. p. 352. ISBN 0-387-98911-0.
• Sándor, József; Mitrinović, Dragoslav S.; Crstici, Borislav, eds. (2006). Handbook of number theory I. Dordrecht: Springer-Verlag. ISBN 1-4020-4215-9. Zbl 1151.11300.
• Sándor, Jozsef; Crstici, Borislav (2004). Handbook of number theory II. Dordrecht: Kluwer Academic. ISBN 1-4020-2546-7. Zbl 1079.11001.
Divisibility-based sets of integers
Overview
• Integer factorization
• Divisor
• Unitary divisor
• Divisor function
• Prime factor
• Fundamental theorem of arithmetic
Factorization forms
• Prime
• Composite
• Semiprime
• Pronic
• Sphenic
• Square-free
• Powerful
• Perfect power
• Achilles
• Smooth
• Regular
• Rough
• Unusual
Constrained divisor sums
• Perfect
• Almost perfect
• Quasiperfect
• Multiply perfect
• Hemiperfect
• Hyperperfect
• Superperfect
• Unitary perfect
• Semiperfect
• Practical
• Erdős–Nicolas
With many divisors
• Abundant
• Primitive abundant
• Highly abundant
• Superabundant
• Colossally abundant
• Highly composite
• Superior highly composite
• Weird
Aliquot sequence-related
• Untouchable
• Amicable (Triple)
• Sociable
• Betrothed
Base-dependent
• Equidigital
• Extravagant
• Frugal
• Harshad
• Polydivisible
• Smith
Other sets
• Arithmetic
• Deficient
• Friendly
• Solitary
• Sublime
• Harmonic divisor
• Descartes
• Refactorable
• Superperfect
| Wikipedia |
Representation of a Lie superalgebra
In the mathematical field of representation theory, a representation of a Lie superalgebra is an action of a Lie superalgebra L on a Z2-graded vector space V, such that if A and B are any two pure elements of L and X and Y are any two pure elements of V, then
$(c_{1}A+c_{2}B)\cdot X=c_{1}A\cdot X+c_{2}B\cdot X$
$A\cdot (c_{1}X+c_{2}Y)=c_{1}A\cdot X+c_{2}A\cdot Y$
$(-1)^{A\cdot X}=(-1)^{A}(-1)^{X}$ (that is, the action sends pure elements to pure elements, and the grading of $A\cdot X$ is the sum of the gradings of A and X mod 2)
$[A,B]\cdot X=A\cdot (B\cdot X)-(-1)^{AB}B\cdot (A\cdot X).$
Equivalently, a representation of L is a Z2-graded representation of the universal enveloping algebra of L which respects the third equation above.
Unitary representation of a star Lie superalgebra
A *-Lie superalgebra is a complex Lie superalgebra equipped with an involutive antilinear map * such that * respects the grading and
$[a,b]^{*}=[b^{*},a^{*}].$
A unitary representation of such a Lie superalgebra is a Z2-graded Hilbert space which is a representation of the Lie superalgebra as above together with the requirement that self-adjoint elements of the Lie superalgebra are represented by Hermitian transformations.
This is a major concept in the study of supersymmetry together with representation of a Lie superalgebra on an algebra. Say A is a *-algebra representation of the Lie superalgebra (together with the additional requirement that * respects the grading and $L[a]^{*}=-(-1)^{La}L^{*}[a^{*}]$) and H is the unitary representation, and also, H is a unitary representation of A.
These three representations are all compatible if for pure elements a in A, |ψ⟩ in H and L in the Lie superalgebra,
$L[a|\psi \rangle ]=(L[a])|\psi \rangle +(-1)^{La}a(L[|\psi \rangle ]).$
Sometimes, the Lie superalgebra is embedded within A in the sense that there is a homomorphism from the universal enveloping algebra of the Lie superalgebra to A. In that case, the equation above reduces to
$L[a]=La-(-1)^{La}aL.$
This approach avoids working directly with a Lie supergroup, and hence avoids the use of auxiliary Grassmann numbers.
See also
• Graded vector space
• Lie algebra representation
• Representation theory of Hopf algebras
| Wikipedia |
Unitary representation
In mathematics, a unitary representation of a group G is a linear representation π of G on a complex Hilbert space V such that π(g) is a unitary operator for every g ∈ G. The general theory is well-developed in the case that G is a locally compact (Hausdorff) topological group and the representations are strongly continuous.
The theory has been widely applied in quantum mechanics since the 1920s, particularly influenced by Hermann Weyl's 1928 book Gruppentheorie und Quantenmechanik. One of the pioneers in constructing a general theory of unitary representations, for any group G rather than just for particular groups useful in applications, was George Mackey.
Context in harmonic analysis
The theory of unitary representations of topological groups is closely connected with harmonic analysis. In the case of an abelian group G, a fairly complete picture of the representation theory of G is given by Pontryagin duality. In general, the unitary equivalence classes (see below) of irreducible unitary representations of G make up its unitary dual. This set can be identified with the spectrum of the C*-algebra associated to G by the group C*-algebra construction. This is a topological space.
The general form of the Plancherel theorem tries to describe the regular representation of G on $L^{2}(G)$ by means of a measure on the unitary dual. For G abelian this is given by the Pontryagin duality theory. For G compact, this is done by the Peter–Weyl theorem; in that case the unitary dual is a discrete space, and the measure attaches an atom to each point of mass equal to its degree.
Formal definitions
Let G be a topological group. A strongly continuous unitary representation of G on a Hilbert space H is a group homomorphism from G into the unitary group of H,
$\pi :G\rightarrow \operatorname {U} (H)$
such that g → π(g) ξ is a norm continuous function for every ξ ∈ H.
Note that if G is a Lie group, the Hilbert space also admits underlying smooth and analytic structures. A vector ξ in H is said to be smooth or analytic if the map g → π(g) ξ is smooth or analytic (in the norm or weak topologies on H).[1] Smooth vectors are dense in H by a classical argument of Lars Gårding, since convolution by smooth functions of compact support yields smooth vectors. Analytic vectors are dense by a classical argument of Edward Nelson, amplified by Roe Goodman, since vectors in the image of a heat operator $e^{-tD}$, corresponding to an elliptic differential operator D in the universal enveloping algebra of G, are analytic. Not only do smooth or analytic vectors form dense subspaces; they also form common cores for the unbounded skew-adjoint operators corresponding to the elements of the Lie algebra, in the sense of spectral theory.[2]
Two unitary representations $\pi _{1}:G\to \operatorname {U} (H_{1})$, $\pi _{2}:G\to \operatorname {U} (H_{2})$ are said to be unitarily equivalent if there is a unitary transformation $A:H_{1}\to H_{2}$ such that $\pi _{1}(g)=A^{*}\circ \pi _{2}(g)\circ A$ for all g in G. When this holds, A is said to be an intertwining operator for the representations $(\pi _{1},H_{1}),(\pi _{2},H_{2})$.[3]
If $\pi $ is a representation of a connected Lie group $G$ on a finite-dimensional Hilbert space $H$, then $\pi $ is unitary if and only if the associated Lie algebra representation $d\pi :{\mathfrak {g}}\rightarrow \mathrm {End} (H)$ maps into the space of skew-self-adjoint operators on $H$.[4]
Complete reducibility
A unitary representation is completely reducible, in the sense that for any closed invariant subspace, the orthogonal complement is again a closed invariant subspace. This is at the level of an observation, but is a fundamental property. For example, it implies that finite-dimensional unitary representations are always a direct sum of irreducible representations, in the algebraic sense.
Since unitary representations are much easier to handle than the general case, it is natural to consider unitarizable representations, those that become unitary on the introduction of a suitable complex Hilbert space structure. This works very well for finite groups, and more generally for compact groups, by an averaging argument applied to an arbitrary hermitian structure.[5] For example, a natural proof of Maschke's theorem is by this route.
Unitarizability and the unitary dual question
In general, for non-compact groups, it is a more serious question which representations are unitarizable. One of the important unsolved problems in mathematics is the description of the unitary dual, the effective classification of irreducible unitary representations of all real reductive Lie groups. All irreducible unitary representations are admissible (or rather their Harish-Chandra modules are), and the admissible representations are given by the Langlands classification, and it is easy to tell which of them have a non-trivial invariant sesquilinear form. The problem is that it is in general hard to tell when the quadratic form is positive definite. For many reductive Lie groups this has been solved; see representation theory of SL2(R) and representation theory of the Lorentz group for examples.
Notes
1. Warner (1972)
2. Reed and Simon (1975)
3. Paul Sally (2013) Fundamentals of Mathematical Analysis, American Mathematical Society pg. 234
4. Hall 2015 Proposition 4.8
5. Hall 2015 Section 4.4
References
• Hall, Brian C. (2015), Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Graduate Texts in Mathematics, vol. 222 (2nd ed.), Springer, ISBN 978-3319134666
• Reed, Michael; Simon, Barry (1975), Methods of Modern Mathematical Physics, Vol. 2: Fourier Analysis, Self-Adjointness, Academic Press, ISBN 0-12-585002-6
• Warner, Garth (1972), Harmonic Analysis on Semi-simple Lie Groups I, Springer-Verlag, ISBN 0-387-05468-5
See also
• Induced representations
• Isotypical representation
• Representation theory of SL2(R)
• Representations of the Lorentz group
• Stone–von Neumann theorem
• Unitary representation of a star Lie superalgebra
• Zonal spherical function
| Wikipedia |
Unitary transformation
In mathematics, a unitary transformation is a transformation that preserves the inner product: the inner product of two vectors before the transformation is equal to their inner product after the transformation.
Formal definition
More precisely, a unitary transformation is an isomorphism between two inner product spaces (such as Hilbert spaces). In other words, a unitary transformation is a bijective function
$U:H_{1}\to H_{2}\,$
between two inner product spaces, $H_{1}$ and $H_{2},$ such that
$\langle Ux,Uy\rangle _{H_{2}}=\langle x,y\rangle _{H_{1}}\quad {\text{ for all }}x,y\in H_{1}.$
Properties
A unitary transformation is an isometry, as one can see by setting $x=y$ in this formula.
Unitary operator
In the case when $H_{1}$ and $H_{2}$ are the same space, a unitary transformation is an automorphism of that Hilbert space, and then it is also called a unitary operator.
Antiunitary transformation
A closely related notion is that of antiunitary transformation, which is a bijective function
$U:H_{1}\to H_{2}\,$
between two complex Hilbert spaces such that
$\langle Ux,Uy\rangle ={\overline {\langle x,y\rangle }}=\langle y,x\rangle $
for all $x$ and $y$ in $H_{1}$, where the horizontal bar represents the complex conjugate.
See also
• Antiunitary
• Orthogonal transformation
• Time reversal
• Unitary group
• Unitary operator
• Unitary matrix
• Wigner's theorem
• Unitary transformations in quantum mechanics
| Wikipedia |
Unitary transformation (quantum mechanics)
In quantum mechanics, the Schrödinger equation describes how a system changes with time. It does this by relating changes in the state of the system to the energy in the system (given by an operator called the Hamiltonian). Therefore, once the Hamiltonian is known, the time dynamics are in principle known. All that remains is to plug the Hamiltonian into the Schrödinger equation and solve for the system state as a function of time.[1][2]
Often, however, the Schrödinger equation is difficult to solve (even with a computer). Therefore, physicists have developed mathematical techniques to simplify these problems and clarify what is happening physically. One such technique is to apply a unitary transformation to the Hamiltonian. Doing so can result in a simplified version of the Schrödinger equation which nonetheless has the same solution as the original.
Transformation
A unitary transformation (or frame change) can be expressed in terms of a time-dependent Hamiltonian $H(t)$ and unitary operator $U(t)$. Under this change, the Hamiltonian transforms as:
$H\to UH{U^{\dagger }}+i\hbar \,{{\dot {U}}U^{\dagger }}=:{\breve {H}}\quad \quad (0)$.
The Schrödinger equation applies to the new Hamiltonian. Solutions to the untransformed and transformed equations are also related by $U$. Specifically, if the wave function $\psi (t)$ satisfies the original equation, then $U\psi (t)$ will satisfy the new equation.[3]
Derivation
Recall that by the definition of a unitary matrix, $U^{\dagger }U=1$. Beginning with the Schrödinger equation,
${\dot {\psi }}=-{\frac {i}{\hbar }}H\psi $,
we can therefore insert $U^{\dagger }U$ at will. In particular, inserting it after $H/\hbar $ and also premultiplying both sides by $U$, we get
$U{\dot {\psi }}=-{\frac {i}{\hbar }}\left(UHU^{\dagger }\right)U\psi \quad \quad (1)$.
Next, note that by the product rule,
${\frac {\mathrm {d} }{\mathrm {d} t}}\left(U\psi \right)={\dot {U}}\psi +U{\dot {\psi }}$.
Inserting another $U^{\dagger }U$ and rearranging, we get
$U{\dot {\psi }}={\frac {\mathrm {d} }{\mathrm {d} t}}{\Big (}U\psi {\Big )}-{\dot {U}}U^{\dagger }U\psi \quad \quad (2)$.
Finally, combining (1) and (2) above results in the desired transformation:
${\frac {\mathrm {d} }{\mathrm {d} t}}{\Big (}U\psi {\Big )}=-{\frac {i}{\hbar }}{\Big (}UH{U^{\dagger }}+i\hbar \,{\dot {U}}{U^{\dagger }}{\Big )}{\Big (}U\psi {\Big )}\quad \quad \left(3\right)$.
If we adopt the notation ${\breve {\psi }}:=U\psi $ to describe the transformed wave function, the equations can be written in a clearer form. For instance, $(3)$ can be rewritten as
${\frac {\mathrm {d} }{\mathrm {d} t}}{\breve {\psi }}=-{\frac {i}{\hbar }}{\breve {H}}{\breve {\psi }}\quad \quad \left(4\right)$,
which can be rewritten in the form of the original Schrödinger equation,
${\breve {H}}{\breve {\psi }}=i\hbar {\operatorname {d} \!{\breve {\psi }} \over \operatorname {d} \!t}.$
The original wave function can be recovered as $\psi =U^{\dagger }{\breve {\psi }}$.
Relation to the interaction picture
Unitary transformations can be seen as a generalization of the interaction (Dirac) picture. In the latter approach, a Hamiltonian is broken into a time-independent part and a time-dependent part,
$H(t)=H_{0}+V(t)\quad \quad (a)$.
In this case, the Schrödinger equation becomes
${\dot {\psi _{I}}}=-{\frac {i}{\hbar }}\left(e^{iH_{0}t/\hbar }Ve^{-iH_{0}t/\hbar }\right)\psi _{I}$, with $\psi _{I}=e^{iH_{0}t/\hbar }\psi $.[4]
The correspondence to a unitary transformation can be shown by choosing $ U(t)=\exp \left[{+iH_{0}t/\hbar }\right]$. As a result, ${U^{\dagger }}(t)=\exp \left[{-iH_{0}t}/\hbar \right].$
Using the notation from $(0)$ above, our transformed Hamiltonian becomes
${\breve {H}}=U\left[H_{0}+V(t)\right]U^{\dagger }+i\hbar {\dot {U}}U^{\dagger }\quad \quad (b)$
First note that since $U$ is a function of $H_{0}$, the two must commute. Then
$UH_{0}U^{\dagger }=H_{0}$,
which takes care of the first term in the transformation in $(b)$, i.e. ${\breve {H}}=H_{0}+UV(t)U^{\dagger }+i\hbar {\dot {U}}U^{\dagger }$. Next use the chain rule to calculate
${\begin{aligned}i\hbar {\dot {U}}U^{\dagger }&=i\hbar \left({\operatorname {d} \!U \over \operatorname {d} \!t}\right)e^{-iH_{0}t/\hbar }\\&=i\hbar {\Big (}iH_{0}/\hbar {\Big )}e^{+iH_{0}t/\hbar }e^{-iH_{0}t/\hbar }\\&=i\hbar \left({iH_{0}}/\hbar \right)\\&=-H_{0},\\\end{aligned}}$
which cancels with the other $H_{0}$. Evidently we are left with ${\breve {H}}=UVU^{\dagger }$, yielding ${\dot {\psi _{I}}}=-{\frac {i}{\hbar }}UVU^{\dagger }\psi _{I}$ as shown above.
When applying a general unitary transformation, however, it is not necessary that $H(t)$ be broken into parts, or even that $U(t)$ be a function of any part of the Hamiltonian.
Examples
Rotating frame
Consider an atom with two states, ground $|g\rangle $ and excited $|e\rangle $. The atom has a Hamiltonian $H=\hbar \omega {|{e}\rangle \langle {e}|}$, where $\omega $ is the frequency of light associated with the g-e transition. Now suppose we illuminate the atom with a drive at frequency $\omega _{d}$ which couples the two states, and that the time-dependent driven Hamiltonian is
$H/\hbar =\omega |e\rangle \langle e|+\Omega \ e^{i\omega _{d}t}|g\rangle \langle e|+\Omega ^{*}\ e^{-i\omega _{d}t}|e\rangle \langle g|$
for some complex drive strength $\Omega $. Because of the competing frequency scales ($\omega $, $\omega _{d}$, and $\Omega $), it is difficult to anticipate the effect of the drive (see driven harmonic motion).
Without a drive, the phase of $|e\rangle $ would oscillate relative to $|g\rangle $. In the Bloch sphere representation of a two-state system, this corresponds to rotation around the z-axis. Conceptually, we can remove this component of the dynamics by entering a rotating frame of reference defined by the unitary transformation $U=e^{i\omega t|e\rangle \langle e|}$. Under this transformation, the Hamiltonian becomes
$H/\hbar \to \Omega \,e^{i(\omega _{d}-\omega )t}|g\rangle \langle e|+\Omega ^{*}\,e^{i(\omega -\omega _{d})t}|e\rangle \langle g|$.
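This transformation can be verified symbolically. The SymPy sketch below (an illustration, not part of the original derivation) applies equation (0) to the driven Hamiltonian in the ordered basis (|g⟩, |e⟩) and recovers the rotating-frame Hamiltonian above.

```python
import sympy as sp

t, w, wd, hbar = sp.symbols('t omega omega_d hbar', real=True)
Om = sp.Symbol('Omega')  # complex drive strength

# Driven Hamiltonian in the ordered basis (|g>, |e>).
H = hbar * sp.Matrix([[0, Om * sp.exp(sp.I * wd * t)],
                      [sp.conjugate(Om) * sp.exp(-sp.I * wd * t), w]])

# Rotating-frame unitary U = exp(i w t |e><e|).
U = sp.Matrix([[1, 0], [0, sp.exp(sp.I * w * t)]])

# Equation (0): H -> U H U^dagger + i hbar (dU/dt) U^dagger.
H_rot = sp.simplify(U * H * U.H + sp.I * hbar * U.diff(t) * U.H)
print(H_rot)  # off-diagonal terms oscillate at omega_d - omega; the diagonal vanishes
```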
If the driving frequency is equal to the g-e transition's frequency, $\omega _{d}=\omega $, resonance will occur and then the equation above reduces to
${\breve {H}}/\hbar =\Omega \ |g\rangle \langle e|+\Omega ^{*}\ |e\rangle \langle g|$.
From this it is apparent, even without getting into details, that the dynamics will involve an oscillation between the ground and excited states at frequency $\Omega $.[4]
As another limiting case, suppose the drive is far off-resonant, $|\omega _{d}-\omega |\gg |\Omega |$. We can figure out the dynamics in that case without solving the Schrödinger equation directly. Suppose the system starts in the ground state $|g\rangle $. Initially, the Hamiltonian will populate some component of $|e\rangle $. A small time later, however, it will populate roughly the same amount of $|e\rangle $ but with a completely different phase. Thus the effect of an off-resonant drive will tend to cancel itself out. This can also be expressed by saying that an off-resonant drive is rapidly rotating in the frame of the atom.
These concepts are illustrated in the table below, where the sphere represents the Bloch sphere, the arrow represents the state of the atom, and the hand represents the drive.
[Table of Bloch-sphere animations (not reproduced): resonant and off-resonant drives, each shown in the lab frame and in the rotating frame.]
Displaced frame
The example above could also have been analyzed in the interaction picture. The following example, however, is more difficult to analyze without the general formulation of unitary transformations. Consider two harmonic oscillators, between which we would like to engineer a beam splitter interaction,
$g\,ab^{\dagger }+g^{*}\,a^{\dagger }b$.
This was achieved experimentally with two microwave cavity resonators serving as $a$ and $b$.[5] Below, we sketch the analysis of a simplified version of this experiment.
In addition to the microwave cavities, the experiment also involved a transmon qubit, $c$, coupled to both modes. The qubit is driven simultaneously at two frequencies, $\omega _{1}$ and $\omega _{2}$, for which $\omega _{1}-\omega _{2}=\omega _{a}-\omega _{b}$.
$H_{\mathrm {drive} }/\hbar =\Re \left[\epsilon _{1}e^{i\omega _{1}t}+\epsilon _{2}e^{i\omega _{2}t}\right](c+c^{\dagger }).$
In addition, there are many fourth-order terms coupling the modes, but most of them can be neglected. In this experiment, two such terms which will become important are
$H_{4}/\hbar =g_{4}{\Big (}e^{i(\omega _{b}-\omega _{a})t}ab^{\dagger }+{\text{h.c.}}{\Big )}c^{\dagger }c$.
(H.c. is shorthand for the Hermitian conjugate.) We can apply a displacement transformation, $U=D(-\xi _{1}e^{-i\omega _{1}t}-\xi _{2}e^{-i\omega _{2}t})$, to mode $c$. For carefully chosen amplitudes, this transformation will cancel $H_{\textrm {drive}}$ while also displacing the ladder operator, $c\to c+\xi _{1}e^{-i\omega _{1}t}+\xi _{2}e^{-i\omega _{2}t}$. This leaves us with
$H/\hbar =g_{4}{\Big (}e^{i(\omega _{b}-\omega _{a})t}ab^{\dagger }+e^{i(\omega _{a}-\omega _{b})t}a^{\dagger }b{\big )}(c^{\dagger }+\xi _{1}^{*}e^{i\omega _{1}t}+\xi _{2}^{*}e^{i\omega _{2}t})(c+\xi _{1}e^{-i\omega _{1}t}+\xi _{2}e^{-i\omega _{2}t})$.
Expanding this expression and dropping the rapidly rotating terms, we are left with the desired Hamiltonian,
$H/\hbar =g_{4}\xi _{1}^{*}\xi _{2}e^{i(\omega _{b}-\omega _{a}+\omega _{1}-\omega _{2})t}\ ab^{\dagger }+{\text{h.c.}}=g\,ab^{\dagger }+g^{*}\,a^{\dagger }b$.
Relation to the Baker–Campbell–Hausdorff formula
It is common for the operators involved in unitary transformations to be written as exponentials of operators, $U=e^{X}$, as seen above. Further, the operators in the exponentials commonly obey the relation $X^{\dagger }=-X$, so that the transform of an operator $Y$ is $UYU^{\dagger }=e^{X}Ye^{-X}$. By now introducing the iterated commutator,
$[(X)^{n},Y]\equiv \underbrace {[X,\dotsb [X,[X} _{n{\text{ times }}},Y]]\dotsb ],\quad [(X)^{0},Y]\equiv Y,$
we can use a special result of the Baker-Campbell-Hausdorff formula to write this transformation compactly as,
$e^{X}Ye^{-X}=\sum _{n=0}^{\infty }{\frac {[(X)^{n},Y]}{n!}},$
or, in long form for completeness,
$e^{X}Ye^{-X}=Y+\left[X,Y\right]+{\frac {1}{2!}}[X,[X,Y]]+{\frac {1}{3!}}[X,[X,[X,Y]]]+\cdots .$
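A quick numerical check of this series, truncated at 30 terms; note that the identity holds for any square matrix X, not only skew-adjoint ones, so an arbitrary real X is used here.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(2)
X = 0.5 * rng.standard_normal((3, 3))
Y = rng.standard_normal((3, 3))

lhs = expm(X) @ Y @ expm(-X)

# Accumulate sum_n [(X)^n, Y] / n! via term_n = [X, term_{n-1}] / n.
rhs = np.zeros_like(Y)
term = Y.copy()
for n in range(30):
    rhs += term
    term = (X @ term - term @ X) / (n + 1)

print(np.allclose(lhs, rhs))  # True
```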
References
1. Sakurai, J. J.; Napolitano, Jim J. (2014). Modern Quantum Mechanics (Indian Subcontinent Version ed.). Pearson. pp. 67–72. ISBN 978-93-325-1900-8.
2. Griffiths, David J. (2005). Introduction to Quantum Mechanics (Second ed.). Pearson. pp. 24–29. ISBN 978-0-13-191175-8.
3. Axline, Christopher J. (2018). "Chapter 6" (PDF). Building Blocks for Modular Circuit QED Quantum Computing (Ph.D. thesis). Retrieved 4 August 2018.
4. Sakurai, pp. 346-350.
5. Yvonne Y. Gao; Brian J. Lester; et al. (21 June 2018). "Programmable Interference between Two Microwave Quantum Memories". Phys. Rev. X. 8 (2). Supplemental Material. arXiv:1802.08510. doi:10.1103/PhysRevX.8.021073. S2CID 3723797.
| Wikipedia |
Tagged union
In computer science, a tagged union, also called a variant, variant record, choice type, discriminated union, disjoint union, sum type or coproduct, is a data structure used to hold a value that could take on several different, but fixed, types. Only one of the types can be in use at any one time, and a tag field explicitly indicates which one is in use. It can be thought of as a type that has several "cases", each of which should be handled correctly when that type is manipulated. This is critical in defining recursive datatypes, in which some component of a value may have the same type as that value, for example in defining a type for representing trees, where it is necessary to distinguish multi-node subtrees and leaves. Like ordinary unions, tagged unions can save storage by overlapping storage areas for each type, since only one is in use at a time.
Description
Tagged unions are most important in functional programming languages such as ML and Haskell, where they are called datatypes (see algebraic data type) and the compiler is able to verify that all cases of a tagged union are always handled, avoiding many types of errors. They can, however, be constructed in nearly any programming language, and are much safer than untagged unions, often simply called unions, which are similar but do not explicitly track which member of a union is in use currently.
Tagged unions are often accompanied by the concept of a type constructor, which is similar but not the same as a constructor for a class. Type constructors produce a tagged union type, given the initial tag type and the corresponding type.
Mathematically, tagged unions correspond to disjoint or discriminated unions, usually written using +. Given an element of a disjoint union A + B, it is possible to determine whether it came from A or B. If an element lies in both, there will be two effectively distinct copies of the value in A + B, one from A and one from B.
In type theory, a tagged union is called a sum type. Sum types are the dual of product types. Notations vary, but usually the sum type A + B comes with two introduction forms (injections) inj1: A → A + B and inj2: B → A + B. The elimination form is case analysis, known as pattern matching in ML-style languages: if e has type A + B and e1 and e2 have type $\tau $ under the assumptions x: A and y: B respectively, then the term ${\mathsf {case}}\ e\ {\mathsf {of}}\ x\Rightarrow e_{1}\mid y\Rightarrow e_{2}$ has type $\tau $. The sum type corresponds to intuitionistic logical disjunction under the Curry–Howard correspondence.
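As a small illustration (a sketch in Python 3.10+, which has no built-in sum types), the two injections can be modeled as separate classes whose class serves as the tag, with the match statement playing the role of case analysis:

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Inj1:
    value: int   # the A side (here A = int)

@dataclass
class Inj2:
    value: str   # the B side (here B = str)

Sum = Union[Inj1, Inj2]  # the tagged union A + B

def describe(e: Sum) -> str:
    # Case analysis: the concrete class of e is the tag.
    match e:
        case Inj1(x):
            return f"inj1 carried {x}"
        case Inj2(y):
            return f"inj2 carried {y}"

print(describe(Inj1(42)))    # inj1 carried 42
print(describe(Inj2("hi")))  # inj2 carried hi
```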
An enumerated type can be seen as a degenerate case: a tagged union of unit types. It corresponds to a set of nullary constructors and may be implemented as a simple tag variable, since it holds no additional data besides the value of the tag.
Many programming techniques and data structures, including rope, lazy evaluation, class hierarchy (see below), arbitrary-precision arithmetic, CDR coding, the indirection bit and other kinds of tagged pointers, etc. are usually implemented using some sort of tagged union.
A tagged union can be seen as the simplest kind of self-describing data format. The tag of the tagged union can be seen as the simplest kind of metadata.
Advantages and disadvantages
The primary advantage of a tagged union over an untagged union is that all accesses are safe, and the compiler can even check that all cases are handled. Untagged unions depend on program logic to correctly identify the currently active field, which may result in strange behavior and hard-to-find bugs if that logic fails.
The primary advantage of a tagged union over a simple record containing a field for each type is that it saves storage by overlapping storage for all the types. Some implementations reserve enough storage for the largest type, while others dynamically adjust the size of a tagged union value as needed. When the value is immutable, it is simple to allocate just as much storage as is needed.
The main disadvantage of tagged unions is that the tag occupies space. Since there are usually a small number of alternatives, the tag can often be squeezed into 2 or 3 bits wherever space can be found, but sometimes even these bits are not available. In this case, a helpful alternative may be folded, computed or encoded tags, where the tag value is dynamically computed from the contents of the union field. Common examples of this are the use of reserved values, where, for example, a function returning a positive number may return -1 to indicate failure, and sentinel values, most often used in tagged pointers.
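As a sketch of the reserved-value idea (the find function below is hypothetical, written in Python for illustration), the "found / not found" tag is folded into the returned value itself rather than stored in a separate field:
# A search that returns the index of x in xs, or the reserved value -1
# to signal failure; the tag is encoded in the value itself.
def find(xs, x):
    for i, v in enumerate(xs):
        if v == x:
            return i
    return -1

pos = find([3, 1, 4], 1)
if pos == -1:  # decoding the folded tag
    print("not found")
else:
    print("found at index", pos)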
Sometimes, untagged unions are used to perform bit-level conversions between types, called reinterpret casts in C++. Tagged unions are not intended for this purpose; typically a new value is assigned whenever the tag is changed.
Many languages support, to some extent, a universal data type, which is a type that includes every value of every other type, and often a way is provided to test the actual type of a value of the universal type. These are sometimes referred to as variants. While universal data types are comparable to tagged unions in their formal definition, typical tagged unions include a relatively small number of cases, and these cases form different ways of expressing a single coherent concept, such as a data structure node or instruction. Also, there is an expectation that every possible case of a tagged union will be dealt with when it is used. The values of a universal data type are not related and there is no feasible way to deal with them all.
Like option types and exception handling, tagged unions are sometimes used to handle the occurrence of exceptional results. Often these tags are folded into the type as reserved values, and their occurrence is not consistently checked: this is a fairly common source of programming errors. This use of tagged unions can be formalized as a monad with the following functions:
${\text{return}}\colon A\to \left(A+E\right)=a\mapsto {\text{value}}\,a$
${\text{bind}}\colon \left(A+E\right)\to \left(A\to \left(B+E\right)\right)\to \left(B+E\right)=a\mapsto f\mapsto {\begin{cases}{\text{err}}\,e&{\text{if}}\ a={\text{err}}\,e\\f\,a'&{\text{if}}\ a={\text{value}}\,a'\end{cases}}$
where "value" and "err" are the constructors of the union type, A and B are valid result types and E is the type of error conditions. Alternately, the same monad may be described by return and two additional functions, fmap and join:
${\text{fmap}}\colon (A\to B)\to \left(\left(A+E\right)\to \left(B+E\right)\right)=f\mapsto a\mapsto {\begin{cases}{\text{err}}\,e&{\text{if}}\ a={\text{err}}\,e\\{\text{value}}\,{\text{(}}\,f\,a'\,{\text{)}}&{\text{if}}\ a={\text{value}}\,a'\end{cases}}$
${\text{join}}\colon ((A+E)+E)\to (A+E)=a\mapsto {\begin{cases}{\text{err}}\,e&{\mbox{if}}\ a={\text{err}}\,e\\{\text{err}}\,e&{\text{if}}\ a={\text{value}}\,{\text{(err}}\,e\,{\text{)}}\\{\text{value}}\,a'&{\text{if}}\ a={\text{value}}\,{\text{(value}}\,a'\,{\text{)}}\end{cases}}$
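A minimal sketch of these four functions in Python, assuming classes Value and Err as the two constructors of the union (all names are illustrative, not taken from any particular library):
from dataclasses import dataclass

@dataclass
class Value:      # the "value" constructor
    val: object

@dataclass
class Err:        # the "err" constructor
    error: object

def ret(a):       # return : A -> (A + E)
    return Value(a)

def bind(a, f):   # bind : (A + E) -> (A -> (B + E)) -> (B + E)
    return a if isinstance(a, Err) else f(a.val)

def fmap(f, a):   # fmap : (A -> B) -> ((A + E) -> (B + E))
    return a if isinstance(a, Err) else Value(f(a.val))

def join(a):      # join : ((A + E) + E) -> (A + E)
    return a if isinstance(a, Err) else a.val
Note how bind propagates the first Err unchanged; this short-circuiting is exactly the behavior exploited when tagged unions are used for exceptional results.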
Examples
Say we wanted to build a binary tree of integers. In ML, we would do this by creating a datatype like this:
datatype tree = Leaf
| Node of (int * tree * tree)
This is a tagged union with two cases: one, the leaf, is used to terminate a path of the tree, and functions much like a null value would in imperative languages. The other branch holds a node, which contains an integer and a left and right subtree. Leaf and Node are the constructors, which enable us to actually produce a particular tree, such as:
Node(5, Node(1, Leaf, Leaf), Node(3, Leaf, Node(4, Leaf, Leaf)))
which corresponds to a tree with 5 at the root, a left child 1 whose two subtrees are leaves, and a right child 3 whose left subtree is a leaf and whose right subtree is the node 4 with two leaf children.
Now we can easily write a typesafe function that, say, counts the number of nodes in the tree:
fun countNodes(Leaf) = 0
| countNodes(Node(int, left, right)) =
1 + countNodes(left) + countNodes(right)
Timeline of language support
1960s
In ALGOL 68, tagged unions are called united modes, the tag is implicit, and the case construct is used to determine which field is tagged:
mode node = union (real, int, compl, string);
Usage example for union case of node:
node n := "1234";
case n in
(real r): print(("real:", r)),
(int i): print(("int:", i)),
(compl c): print(("compl:", c)),
(string s): print(("string:", s))
out print(("?:", n))
esac
1970s & 1980s
Although primarily only functional programming languages such as ML (from the 1970s) and Haskell (from the 1990s) give a central role to tagged unions and have the power to check that all cases are handled, other languages also support tagged unions. However, in practice they can be less efficient in non-functional languages, because functional-language compilers enable optimizations that eliminate explicit tag checks and avoid explicit storage of tags.
Pascal, Ada, and Modula-2 call them variant records (formally discriminated type in Ada), and require the tag field to be manually created and the tag values specified, as in this Pascal example:
type shapeKind = (square, rectangle, circle);
shape = record
centerx : integer;
centery : integer;
case kind : shapeKind of
square : (side : integer);
rectangle : (width, height : integer);
circle : (radius : integer);
end;
and this Ada equivalent:
type Shape_Kind is (Square, Rectangle, Circle);
type Shape (Kind : Shape_Kind) is record
Center_X : Integer;
Center_Y : Integer;
case Kind is
when Square =>
Side : Integer;
when Rectangle =>
Width, Height : Integer;
when Circle =>
Radius : Integer;
end case;
end record;
-- Any attempt to access a member whose existence depends
-- on a certain value of the discriminant, while the
-- discriminant is not the expected one, raises an error.
In C and C++, a tagged union can be created from untagged unions using a strict access discipline where the tag is always checked:
enum ShapeKind { Square, Rectangle, Circle };
struct Shape {
int centerx;
int centery;
enum ShapeKind kind;
union {
struct { int side; }; /* Square */
struct { int width, height; }; /* Rectangle */
struct { int radius; }; /* Circle */
};
};
int getSquareSide(struct Shape* s) {
assert(s->kind == Square);
return s->side;
}
void setSquareSide(struct Shape* s, int side) {
s->kind = Square;
s->side = side;
}
/* and so on */
As long as the union fields are only accessed through the functions, the accesses will be safe and correct. The same approach can be used for encoded tags; we simply decode the tag and then check it on each access. If the inefficiency of these tag checks is a concern, they may be automatically removed in the final version.
C and C++ also have language support for one particular tagged union: the possibly-null pointer. This may be compared to the option type in ML or the Maybe type in Haskell, and can be seen as a tagged pointer: a tagged union (with an encoded tag) of two types:
• Valid pointers,
• A null pointer type with only one value, null, indicating an exceptional condition.
Unfortunately, C compilers do not verify that the null case is always handled, and this is a particularly prevalent source of errors in C code, since there is a tendency to ignore exceptional cases.
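A sketch of the analogous situation in Python (the head function is invented for illustration): None plays the role of the null pointer, an encoded tag whose handling the language does not enforce:
from typing import Optional

def head(xs: list) -> Optional[int]:
    # None encodes the exceptional "no value" case.
    return xs[0] if xs else None

h = head([])
if h is None:  # the case that is easy to forget to handle
    print("empty list")
else:
    print("first element is", h)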
2000s
One advanced dialect of C called Cyclone has extensive built-in support for tagged unions.[1]
The enum types in the Rust, Haxe, and Swift languages also work as tagged unions.
The variant library from the Boost C++ Libraries demonstrated it was possible to implement a safe tagged union as a library in C++, visitable using function objects.
struct display : boost::static_visitor<void>
{
void operator()(int i)
{
std::cout << "It's an int, with value " << i << std::endl;
}
void operator()(const std::string& s)
{
std::cout << "It's a string, with value " << s << std::endl;
}
};
boost::variant<int, std::string> v = 42;
boost::apply_visitor(display(), v);
v = "hello world";
boost::apply_visitor(display(), v);
Scala has case classes:
sealed abstract class Tree
case object Leaf extends Tree
case class Node(value: Int, left: Tree, right: Tree) extends Tree
val tree = Node(5, Node(1, Leaf, Leaf), Node(3, Leaf, Node(4, Leaf, Leaf)))
Because the class hierarchy is sealed, the compiler can check that all cases are handled in a pattern match:
tree match {
case Node(x, _, _) => println("top level node value: " + x)
case Leaf => println("top level node is a leaf")
}
Scala's case classes also permit reuse through subtyping:
sealed abstract class Shape(centerX: Int, centerY: Int)
case class Square(side: Int, centerX: Int, centerY: Int) extends Shape(centerX, centerY)
case class Rectangle(length: Int, height: Int, centerX: Int, centerY: Int) extends Shape(centerX, centerY)
case class Circle(radius: Int, centerX: Int, centerY: Int) extends Shape(centerX, centerY)
F# has discriminated unions:
type Tree =
| Leaf
| Node of value: int * left: Tree * right: Tree
let tree = Node(5, Node(1, Leaf, Leaf), Node(3, Leaf, Node(4, Leaf, Leaf)))
Because the defined cases are exhaustive, the compiler can check that all cases are handled in a pattern match:
match tree with
| Node (x, _, _) -> printfn "top level node value: %i" x
| Leaf -> printfn "top level node is a leaf"
Haxe's enums also work as tagged unions:[2]
enum Color {
Red;
Green;
Blue;
Rgb(r:Int, g:Int, b:Int);
}
These can be matched using a switch expression:
switch (color) {
case Red: trace("Color was red");
case Green: trace("Color was green");
case Blue: trace("Color was blue");
case Rgb(r, g, b): trace("Color had a red value of " +r);
}
Nim has object variants[3] similar in declaration to those in Pascal and Ada:
type
ShapeKind = enum
skSquare, skRectangle, skCircle
Shape = object
centerX, centerY: int
case kind: ShapeKind
of skSquare:
side: int
of skRectangle:
length, height: int
of skCircle:
radius: int
Macros can be used to emulate pattern matching or to create syntactic sugar for declaring object variants, seen here as implemented by the package patty:
import patty
proc `~`[A](a: A): ref A =
new(result)
result[] = a
variant List[A]:
Nil
Cons(x: A, xs: ref List[A])
proc listHelper[A](xs: seq[A]): List[A] =
if xs.len == 0: Nil[A]()
else: Cons(xs[0], ~listHelper(xs[1 .. xs.high]))
proc list[A](xs: varargs[A]): List[A] = listHelper(@xs)
proc sum(xs: List[int]): int = (block:
match xs:
Nil: 0
Cons(y, ys): y + sum(ys[])
)
echo sum(list(1, 2, 3, 4, 5))
2010s
Enums were added in Scala 3,[4] allowing the earlier Scala examples to be rewritten more concisely:
enum Tree[+T]:
case Leaf
case Node(x: Int, left: Tree[T], right: Tree[T])
enum Shape(centerX: Int, centerY: Int):
case Square(side: Int, centerX: Int, centerY: Int) extends Shape(centerX, centerY)
case Rectangle(length: Int, height: Int, centerX: Int, centerY: Int) extends Shape(centerX, centerY)
case Circle(radius: Int, centerX: Int, centerY: Int) extends Shape(centerX, centerY)
The Rust language has extensive support for tagged unions, called enums.[5] For example:
enum Tree {
Leaf,
Node(i64, Box<Tree>, Box<Tree>)
}
It also allows matching on unions:
let tree = Tree::Node(
2,
Box::new(Tree::Node(0, Box::new(Tree::Leaf), Box::new(Tree::Leaf))),
Box::new(Tree::Node(3, Box::new(Tree::Leaf),
Box::new(Tree::Node(4, Box::new(Tree::Leaf), Box::new(Tree::Leaf)))))
);
fn add_values(tree: Tree) -> i64 {
match tree {
Tree::Node(v, a, b) => v + add_values(*a) + add_values(*b),
Tree::Leaf => 0
}
}
assert_eq!(add_values(tree), 9);
Rust's error handling model relies extensively on these tagged unions, especially the Option<T> type, which is either None or Some(T), and the Result<T, E> type, which is either Ok(T) or Err(E).[6]
Swift also has substantial support for tagged unions via enumerations.[7] For example:
enum Tree {
case leaf
indirect case node(Int, Tree, Tree)
}
let tree = Tree.node(
2,
.node(0, .leaf, .leaf),
.node(3, .leaf, .node(4, .leaf, .leaf))
)
func add_values(_ tree: Tree) -> Int {
switch tree {
case let .node(v, a, b):
return v + add_values(a) + add_values(b)
case .leaf:
return 0
}
}
assert(add_values(tree) == 9)
Tagged unions can also be created in TypeScript. For example:
interface Leaf { kind: "leaf"; }
interface Node { kind: "node"; value: number; left: Tree; right: Tree; }
type Tree = Leaf | Node
const root: Tree = {
kind: "node",
value: 5,
left: {
kind: "node",
value: 1,
left: { kind: "leaf" },
right: { kind: "leaf" }
},
right: {
kind: "node",
value: 3,
left: { kind: "leaf" },
right: {
kind: "node",
value: 4,
left: { kind: "leaf" },
right: { kind: "leaf" }
}
}
}
function visit(tree: Tree) {
switch (tree.kind) {
case "leaf":
break
case "node":
console.log(tree.value)
visit(tree.left)
visit(tree.right)
break
}
}
Python 3.9 introduces support for typing annotations that can be used to define a tagged union type (PEP-593[8]):
Currency = Annotated[
TypedDict('Currency', {'dollars': float, 'pounds': float}, total=False),
TaggedUnion,
]
C++17 introduces std::variant and constexpr if:
using Tree = std::variant<struct Leaf, struct Node>;
struct Leaf
{
std::string value;
};
struct Node
{
Tree* left = nullptr;
Tree* right = nullptr;
};
struct Transverser
{
template<typename T>
void operator()(T&& v)
{
if constexpr (std::is_same_v<T, Leaf&>)
{
std::cout << v.value << "\n";
}
else if constexpr (std::is_same_v<T, Node&>)
{
if (v.left != nullptr)
std::visit(Transverser{}, *v.left);
if (v.right != nullptr)
std::visit(Transverser{}, *v.right);
}
else
{
// The !sizeof(T) expression is always false
static_assert(!sizeof(T), "non-exhaustive visitor!");
};
}
};
/*Tree forest = ...;
std::visit(Transverser{}, forest);*/
Class hierarchies as tagged unions
In a typical class hierarchy in object-oriented programming, each subclass can encapsulate data unique to that class. The metadata used to perform virtual method lookup (for example, the object's vtable pointer in most C++ implementations) identifies the subclass and so effectively acts as a tag identifying the particular data stored by the instance (see RTTI). An object's constructor sets this tag, and it remains constant throughout the object's lifetime.
Nevertheless, a class hierarchy involves true subtype polymorphism; it can be extended by creating further subclasses of the same base type, which could not be handled correctly under a tag/dispatch model. Hence, it is usually not possible to do case analysis or dispatch on a subobject's 'tag' as one would for tagged unions. Some languages such as Scala allow base classes to be "sealed", and unify tagged unions with sealed base classes.
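As a sketch of this correspondence (Python, with invented names): the class of an object plays the role of the tag, and isinstance checks play the role of case analysis, with nothing forcing later subclasses to be handled:
class Shape: pass

class Square(Shape):
    def __init__(self, side): self.side = side

class Circle(Shape):
    def __init__(self, radius): self.radius = radius

def area(s):
    # The class of s acts as the tag; this case analysis is not
    # checked for exhaustiveness, unlike a sealed tagged union.
    if isinstance(s, Square):
        return s.side ** 2
    if isinstance(s, Circle):
        return 3.14159 * s.radius ** 2
    raise TypeError("unhandled subclass of Shape")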
See also
• Discriminator, the type tag for discriminated unions in CORBA
• Variant type (COM)
References
1. "Cyclone: Tagged Unions".
2. "Using Enums - Haxe - The Cross-platform Toolkit". Haxe Foundation.
3. "Nim Manual". nim-lang.org. Retrieved 2020-01-23.
4. "Scala 3 Language Reference: Enumerations". The Scala Team.
5. "The Rust Programming Language". Mozilla.
6. "Rust By Example". Mozilla.
7. "Enumerations — The Swift Programming Language (Swift 5.4)". docs.swift.org. Retrieved 2021-04-28.
8. "PEP 593 -- Flexible function and variable annotations". Python.org. Retrieved 2021-06-20.
External links
• boost::variant is a C++ typesafe discriminated union
• std.variant is an implementation of variant type in D 2.0
Data types
Uninterpreted
• Bit
• Byte
• Trit
• Tryte
• Word
• Bit array
Numeric
• Arbitrary-precision or bignum
• Complex
• Decimal
• Fixed point
• Floating point
• Reduced precision
• Minifloat
• Half precision
• bfloat16
• Single precision
• Double precision
• Quadruple precision
• Octuple precision
• Extended precision
• Long double
• Integer
• signedness
• Interval
• Rational
Pointer
• Address
• physical
• virtual
• Reference
Text
• Character
• String
• null-terminated
Composite
• Algebraic data type
• generalized
• Array
• Associative array
• Class
• Dependent
• Equality
• Inductive
• Intersection
• List
• Object
• metaobject
• Option type
• Product
• Record or Struct
• Refinement
• Set
• Union
• tagged
Other
• Boolean
• Bottom type
• Collection
• Enumerated type
• Exception
• Function type
• Opaque data type
• Recursive data type
• Semaphore
• Stream
• Strongly typed identifier
• Top type
• Type class
• Empty type
• Unit type
• Void
Related topics
• Abstract data type
• Boxing
• Data structure
• Generic
• Kind
• metaclass
• Parametric polymorphism
• Primitive data type
• Interface
• Subtyping
• Type constructor
• Type conversion
• Type system
• Type theory
• Variable
Unity amplitude
A sinusoidal waveform is said to have a unity amplitude when the amplitude of the wave is equal to 1.
$x(t)=a\sin(\theta (t))$
where $a=1$. This terminology is most commonly used in digital signal processing and is usually associated with Fourier series and Fourier transform sinusoids that involve a duty cycle, $\alpha $, and a defined fundamental period, $T_{o}$.
Analytic signals with unit amplitude satisfy the Bedrosian theorem.[1]
References
1. Huang et al. "On Instantaneous Frequency". http://rcada.ncu.edu.tw/2009%20Vol.1_No.2/1.ON%20INSTANTANEOUS%20FREQUENCY.pdf
Univariate
In mathematics, a univariate object is an expression, equation, function or polynomial involving only one variable. Objects involving more than one variable are multivariate. In some cases the distinction between the univariate and multivariate cases is fundamental; for example, the fundamental theorem of algebra and Euclid's algorithm for polynomials are fundamental properties of univariate polynomials that cannot be generalized to multivariate polynomials.
In statistics, a univariate distribution characterizes one variable, although it can be applied in other ways as well. For example, univariate data are composed of a single scalar component. In time series analysis, the whole time series is the "variable": a univariate time series is the series of values over time of a single quantity. Correspondingly, a "multivariate time series" characterizes the changing values over time of several quantities. In some cases, the terminology is ambiguous, since the values within a univariate time series may be treated using certain types of multivariate statistical analyses and may be represented using multivariate distributions.
In addition to the question of scaling, a criterion (variable) in univariate statistics can be described by two important kinds of measures (also called key figures or parameters): location and variation; a short computational sketch follows the list below.[1]
• Measures of location (e.g. mode, median, arithmetic mean) describe around which value the data are centered.
• Measures of variation (e.g. span, interquartile distance, standard deviation) describe how widely the data are scattered.
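As a minimal computational sketch using Python's standard statistics module (the sample data are invented for illustration):
from statistics import mean, median, mode, quantiles, stdev

data = [2, 4, 4, 4, 5, 5, 7, 9]

# Measures of location: where the data are centered.
print(mode(data), median(data), mean(data))  # 4, 4.5, 5.0

# Measures of variation: how widely the data scatter.
span = max(data) - min(data)       # span (range)
q1, _, q3 = quantiles(data, n=4)   # quartiles
print(span, q3 - q1, stdev(data))  # span, interquartile distance, standard deviation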
See also
• Arity
• Bivariate (disambiguation)
• Multivariate (disambiguation)
• Univariate analysis
• Univariate binary model
• Univariate distribution
References
1. Grünwald, Robert. "Univariate Statistik in SPSS". novustat.com (in German). Retrieved 29 October 2019.
Arithmetica Universalis
Arithmetica Universalis ("Universal Arithmetic") is a mathematics text by Isaac Newton. Written in Latin, it was edited and published by William Whiston, Newton's successor as Lucasian Professor of Mathematics at the University of Cambridge. The Arithmetica was based on Newton's lecture notes.
Whiston's original edition was published in 1707. It was translated into English by Joseph Raphson, who published it in 1720 as the Universal Arithmetick. John Machin published a second Latin edition in 1722.
None of these editions credit Newton as author; Newton was unhappy with the publication of the Arithmetica, and so refused to have his name appear. In fact, when Whiston's edition was published, Newton was so upset he considered purchasing all of the copies so he could destroy them.
The Arithmetica touches on algebraic notation, arithmetic, the relationship between geometry and algebra, and the solution of equations. Newton also applied Descartes' rule of signs to imaginary roots. He also offered, without proof, a rule to determine the number of imaginary roots of polynomial equations. Not for another 150 years would a rigorous proof of Newton's counting formula be found, by James Joseph Sylvester, published in 1865.
References
• The Arithmetica Universalis from the Grace K. Babson Collection, including links to PDFs of English and Latin versions of the Arithmetica
• Arithmetica Universalis (1720), translated by Joseph Raphson
• Centre College Library information on Newton's works
Wikiquote has quotations related to Isaac Newton#Arithmetica Universalis (1707).
Sir Isaac Newton
Publications
• Fluxions (1671)
• De Motu (1684)
• Principia (1687)
• Opticks (1704)
• Queries (1704)
• Arithmetica (1707)
• De Analysi (1711)
Other writings
• Quaestiones (1661–1665)
• "standing on the shoulders of giants" (1675)
• Notes on the Jewish Temple (c. 1680)
• "General Scholium" (1713; "hypotheses non fingo" )
• Ancient Kingdoms Amended (1728)
• Corruptions of Scripture (1754)
Contributions
• Calculus
• fluxion
• Impact depth
• Inertia
• Newton disc
• Newton polygon
• Newton–Okounkov body
• Newton's reflector
• Newtonian telescope
• Newton scale
• Newton's metal
• Spectrum
• Structural coloration
Newtonianism
• Bucket argument
• Newton's inequalities
• Newton's law of cooling
• Newton's law of universal gravitation
• post-Newtonian expansion
• parameterized
• gravitational constant
• Newton–Cartan theory
• Schrödinger–Newton equation
• Newton's laws of motion
• Kepler's laws
• Newtonian dynamics
• Newton's method in optimization
• Apollonius's problem
• truncated Newton method
• Gauss–Newton algorithm
• Newton's rings
• Newton's theorem about ovals
• Newton–Pepys problem
• Newtonian potential
• Newtonian fluid
• Classical mechanics
• Corpuscular theory of light
• Leibniz–Newton calculus controversy
• Newton's notation
• Rotating spheres
• Newton's cannonball
• Newton–Cotes formulas
• Newton's method
• generalized Gauss–Newton method
• Newton fractal
• Newton's identities
• Newton polynomial
• Newton's theorem of revolving orbits
• Newton–Euler equations
• Newton number
• kissing number problem
• Newton's quotient
• Parallelogram of force
• Newton–Puiseux theorem
• Absolute space and time
• Luminiferous aether
• Newtonian series
• table
Personal life
• Woolsthorpe Manor (birthplace)
• Cranbury Park (home)
• Early life
• Later life
• Apple tree
• Religious views
• Occult studies
• Scientific Revolution
• Copernican Revolution
Relations
• Catherine Barton (niece)
• John Conduitt (nephew-in-law)
• Isaac Barrow (professor)
• William Clarke (mentor)
• Benjamin Pulleyn (tutor)
• John Keill (disciple)
• William Stukeley (friend)
• William Jones (friend)
• Abraham de Moivre (friend)
Depictions
• Newton by Blake (monotype)
• Newton by Paolozzi (sculpture)
• Isaac Newton Gargoyle
• Astronomers Monument
Namesake
• Newton (unit)
• Newton's cradle
• Isaac Newton Institute
• Isaac Newton Medal
• Isaac Newton Telescope
• Isaac Newton Group of Telescopes
• XMM-Newton
• Sir Isaac Newton Sixth Form
• Statal Institute of Higher Education Isaac Newton
• Newton International Fellowship
Categories
Isaac Newton
Covering space
A covering of a topological space $X$ is a continuous map $\pi :E\rightarrow X$ with special properties.
Definition
Let $X$ be a topological space. A covering of $X$ is a continuous map
$\pi :E\rightarrow X$
such that there exists a discrete space $D$ and for every $x\in X$ an open neighborhood $U\subset X$, such that $\pi ^{-1}(U)=\displaystyle \bigsqcup _{d\in D}V_{d}$ and $\pi |_{V_{d}}:V_{d}\rightarrow U$ is a homeomorphism for every $d\in D$. Often, the notion of a covering is used for the covering space $E$ as well as for the map $\pi :E\rightarrow X$. The open sets $V_{d}$ are called sheets, which are uniquely determined up to a homeomorphism if $U$ is connected.[1]: 56 For each $x\in X$ the discrete subset $\pi ^{-1}(x)$ is called the fiber of $x$. The degree of a covering is the cardinality of the space $D$. If $E$ is path-connected, then the covering $\pi :E\rightarrow X$ is called a path-connected covering.
Examples
• For every topological space $X$, there is a covering map $\pi :X\rightarrow X$ given by $\pi (x)=x$, which is called the trivial covering of $X.$
• The map $r:\mathbb {R} \to S^{1}$ with $r(t)=(\cos(2\pi t),\sin(2\pi t))$ is a covering of the unit circle $S^{1}$. The base of the covering is $S^{1}$ and the covering space is $\mathbb {R} $. For any point $x=(x_{1},x_{2})\in S^{1}$ such that $x_{1}>0$, the set $U:=\{(x_{1},x_{2})\in S^{1}\mid x_{1}>0\}$ is an open neighborhood of $x$. The preimage of $U$ under $r$ is
$r^{-1}(U)=\displaystyle \bigsqcup _{n\in \mathbb {Z} }\left(n-{\frac {1}{4}},n+{\frac {1}{4}}\right)$
and the sheets of the covering are $V_{n}=(n-1/4,n+1/4)$ for $n\in \mathbb {Z} .$ The fiber of $x$ is
$r^{-1}(x)=\{t\in \mathbb {R} \mid (\cos(2\pi t),\sin(2\pi t))=x\}.$
• Another covering of the unit circle is the map $q:S^{1}\to S^{1}$ with $q(z)=z^{n}$ for some $n\in \mathbb {N} .$ For an open neighborhood $U$ of an $x\in S^{1}$, one has:
$q^{-1}(U)=\displaystyle \bigsqcup _{i=1}^{n}U$.
• A map which is a local homeomorphism but not a covering of the unit circle is $p:\mathbb {R_{+}} \to S^{1}$ with $p(t)=(\cos(2\pi t),\sin(2\pi t))$: for an open neighborhood $U$ of $(1,0)$, one sheet of $p^{-1}(U)$ is not mapped homeomorphically onto $U$.
Properties
Local homeomorphism
Since a covering $\pi :E\rightarrow X$ maps each of the disjoint open sets of $\pi ^{-1}(U)$ homeomorphically onto $U$ it is a local homeomorphism, i.e. $\pi $ is a continuous map and for every $e\in E$ there exists an open neighborhood $V\subset E$ of $e$, such that $\pi |_{V}:V\rightarrow \pi (V)$ is a homeomorphism.
It follows that the covering space $E$ and the base space $X$ locally share the same properties.
• If $X$ is a connected and non-orientable manifold, then there is a covering $\pi :{\tilde {X}}\rightarrow X$ of degree $2$, whereby ${\tilde {X}}$ is a connected and orientable manifold.[1]: 234
• If $X$ is a connected Lie group, then there is a covering $\pi :{\tilde {X}}\rightarrow X$ which is also a Lie group homomorphism and ${\tilde {X}}:=\{\gamma :\gamma {\text{ is a path in X with }}\gamma (0)={\boldsymbol {1_{X}}}{\text{ modulo homotopy with fixed ends}}\}$ is a Lie group.[2]: 174
• If $X$ is a graph, then it follows for a covering $\pi :E\rightarrow X$ that $E$ is also a graph.[1]: 85
• If $X$ is a connected manifold, then there is a covering $\pi :{\tilde {X}}\rightarrow X$, whereby ${\tilde {X}}$ is a connected and simply connected manifold.[3]: 32
• If $X$ is a connected Riemann surface, then there is a covering $\pi :{\tilde {X}}\rightarrow X$ which is also a holomorphic map[3]: 22 and ${\tilde {X}}$ is a connected and simply connected Riemann surface.[3]: 32
Factorisation
Let $X,Y$ and $E$ be path-connected, locally path-connected spaces, and let $p:E\rightarrow X$, $q:Y\rightarrow X$ and $r:E\rightarrow Y$ be continuous maps such that the diagram commutes, i.e. such that $p=q\circ r$.
• If $p$ and $q$ are coverings, so is $r$.
• If $p$ and $r$ are coverings, so is $q$.[4]: 485
Product of coverings
Let $X$ and $X'$ be topological spaces and $p:E\rightarrow X$ and $p':E'\rightarrow X'$ be coverings, then $p\times p':E\times E'\rightarrow X\times X'$ with $(p\times p')(e,e')=(p(e),p'(e'))$ is a covering.[4]: 339
Equivalence of coverings
Let $X$ be a topological space and $p:E\rightarrow X$ and $p':E'\rightarrow X$ be coverings. Both coverings are called equivalent if there exists a homeomorphism $h:E\rightarrow E'$ such that $p=p'\circ h$. If such a homeomorphism exists, then one calls the covering spaces $E$ and $E'$ isomorphic.
Lifting property
An important property of the covering is, that it satisfies the lifting property, i.e.:
Let $I$ be the unit interval and $p:E\rightarrow X$ be a covering. Let $F:Y\times I\rightarrow X$ be a continuous map and ${\tilde {F}}_{0}:Y\times \{0\}\rightarrow E$ be a lift of $F|_{Y\times \{0\}}$, i.e. a continuous map such that $p\circ {\tilde {F}}_{0}=F|_{Y\times \{0\}}$. Then there is a uniquely determined, continuous map ${\tilde {F}}:Y\times I\rightarrow E$ for which ${\tilde {F}}|_{Y\times \{0\}}={\tilde {F}}_{0}$ and which is a lift of $F$, i.e. $p\circ {\tilde {F}}=F$.[1]: 60
If $X$ is a path-connected space, then for $Y=\{0\}$ it follows that the map ${\tilde {F}}$ is a lift of a path in $X$ and for $Y=I$ it is a lift of a homotopy of paths in $X$.
Because of that property one can show that the fundamental group $\pi _{1}(S^{1})$ of the unit circle is an infinite cyclic group, which is generated by the homotopy class of the loop $\gamma :I\rightarrow S^{1}$ with $\gamma (t)=(\cos(2\pi t),\sin(2\pi t))$.[1]: 29
Let $X$ be a path-connected space and $p:E\rightarrow X$ be a connected covering. Let $x,y\in X$ be any two points, which are connected by a path $\gamma $, i.e. $\gamma (0)=x$ and $\gamma (1)=y$. Let ${\tilde {\gamma }}$ be the unique lift of $\gamma $, then the map
$L_{\gamma }:p^{-1}(x)\rightarrow p^{-1}(y)$ with $L_{\gamma }({\tilde {\gamma }}(0))={\tilde {\gamma }}(1)$
is bijective.[1]: 69
If $X$ is a path-connected space and $p:E\rightarrow X$ a connected covering, then the induced group homomorphism
$p_{\#}:\pi _{1}(E)\rightarrow \pi _{1}(X)$ with $p_{\#}([\gamma ])=[p\circ \gamma ]$,
is injective and the subgroup $p_{\#}(\pi _{1}(E))$ of $\pi _{1}(X)$ consists of the homotopy classes of loops in $X$, whose lifts are loops in $E$.[1]: 61
Branched covering
Holomorphic maps between Riemann surfaces
Let $X$ and $Y$ be Riemann surfaces, i.e. one dimensional complex manifolds, and let $f:X\rightarrow Y$ be a continuous map. $f$ is holomorphic in a point $x\in X$, if for any charts $\phi _{x}:U_{1}\rightarrow V_{1}$ of $x$ and $\phi _{f(x)}:U_{2}\rightarrow V_{2}$ of $f(x)$, with $\phi _{x}(U_{1})\subset U_{2}$, the map $\phi _{f(x)}\circ f\circ \phi _{x}^{-1}:\mathbb {C} \rightarrow \mathbb {C} $ is holomorphic.
If $f$ is holomorphic at all $x\in X$, we say $f$ is holomorphic.
The map $F=\phi _{f(x)}\circ f\circ \phi _{x}^{-1}$ is called the local expression of $f$ in $x\in X$.
If $f:X\rightarrow Y$ is a non-constant, holomorphic map between compact Riemann surfaces, then $f$ is surjective and an open map,[3]: 11 i.e. for every open set $U\subset X$ the image $f(U)\subset Y$ is also open.
Ramification point and branch point
Let $f:X\rightarrow Y$ be a non-constant, holomorphic map between compact Riemann surfaces. For every $x\in X$ there exist charts for $x$ and $f(x)$ and there exists a uniquely determined $k_{x}\in \mathbb {N_{>0}} $, such that the local expression $F$ of $f$ in $x$ is of the form $z\mapsto z^{k_{x}}$.[3]: 10 The number $k_{x}$ is called the ramification index of $f$ in $x$ and the point $x\in X$ is called a ramification point if $k_{x}\geq 2$. If $k_{x}=1$ for an $x\in X$, then $x$ is unramified. The image point $y=f(x)\in Y$ of a ramification point is called a branch point.
Degree of a holomorphic map
Let $f:X\rightarrow Y$ be a non-constant, holomorphic map between compact Riemann surfaces. The degree $\operatorname {deg} (f)$ of $f$ is the cardinality of the fiber of an unramified point $y=f(x)\in Y$, i.e. $\operatorname {deg} (f):=|f^{-1}(y)|$.
This number is well-defined, since for every $y\in Y$ the fiber $f^{-1}(y)$ is discrete[3]: 20 and for any two unramified points $y_{1},y_{2}\in Y$, it is: $|f^{-1}(y_{1})|=|f^{-1}(y_{2})|.$
It can be calculated by:
$\sum _{x\in f^{-1}(y)}k_{x}=\operatorname {deg} (f)$ [3]: 29
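For example, the map $f:{\hat {\mathbb {C} }}\rightarrow {\hat {\mathbb {C} }}$ on the Riemann sphere with $f(z)=z^{2}$ has degree $2$: the points $z=0$ and $z=\infty $ are ramification points with $k_{0}=k_{\infty }=2$, so their fibers consist of a single point each, while the fiber of every other point consists of exactly two unramified points.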
Definition
A continuous map $f:X\rightarrow Y$ is called a branched covering, if there exists a closed set with dense complement $E\subset Y$, such that $f_{|X\smallsetminus f^{-1}(E)}:X\smallsetminus f^{-1}(E)\rightarrow Y\smallsetminus E$ is a covering.
Examples
• Let $n\in \mathbb {N} $ and $n\geq 2$, then $f:\mathbb {C} \rightarrow \mathbb {C} $ with $f(z)=z^{n}$ is a branched covering of degree $n$, whereby $z=0$ is a branch point.
• Every non-constant, holomorphic map between compact Riemann surfaces $f:X\rightarrow Y$ of degree $d$ is a branched covering of degree $d$.
Universal covering
Definition
Let $p:{\tilde {X}}\rightarrow X$ be a simply connected covering. If $\beta :E\rightarrow X$ is another simply connected covering, then there exists a uniquely determined homeomorphism $\alpha :{\tilde {X}}\rightarrow E$ such that $\beta \circ \alpha =p$.[4]: 482
This means that $p$ is, up to equivalence, uniquely determined and because of that universal property denoted as the universal covering of the space $X$.
Existence
A universal covering does not always exist, but the following properties guarantee its existence:
Let $X$ be a connected, locally simply connected topological space; then, there exists a universal covering $p:{\tilde {X}}\rightarrow X$.
${\tilde {X}}$ is defined as ${\tilde {X}}:=\{\gamma :\gamma {\text{ is a path in }}X{\text{ with }}\gamma (0)=x_{0}\}/{\text{ homotopy with fixed ends}}$ and $p:{\tilde {X}}\rightarrow X$ by $p([\gamma ]):=\gamma (1)$.[1]: 64
The topology on ${\tilde {X}}$ is constructed as follows: Let $\gamma :I\rightarrow X$ be a path with $\gamma (0)=x_{0}$. Let $U$ be a simply connected neighborhood of the endpoint $x=\gamma (1)$, then for every $y\in U$ the paths $\sigma _{y}$ inside $U$ from $x$ to $y$ are uniquely determined up to homotopy. Now consider ${\tilde {U}}:=\{\gamma .\sigma _{y}:y\in U\}/{\text{ homotopy with fixed ends}}$, then $p_{|{\tilde {U}}}:{\tilde {U}}\rightarrow U$ with $p([\gamma .\sigma _{y}])=\gamma .\sigma _{y}(1)=y$ is a bijection and ${\tilde {U}}$ can be equipped with the final topology of $p_{|{\tilde {U}}}$.
The fundamental group $\pi _{1}(X,x_{0})=\Gamma $ acts freely through $([\gamma ],[{\tilde {x}}])\mapsto [\gamma .{\tilde {x}}]$ on ${\tilde {X}}$ and $\psi :\Gamma \backslash {\tilde {X}}\rightarrow X$ with $\psi ([\Gamma {\tilde {x}}])={\tilde {x}}(1)$ is a homeomorphism, i.e. $\Gamma \backslash {\tilde {X}}\cong X$.
Examples
• $r:\mathbb {R} \to S^{1}$ with $r(t)=(\cos(2\pi t),\sin(2\pi t))$ is the universal covering of the unit circle $S^{1}$.
• $p:S^{n}\to \mathbb {R} P^{n}\cong \{+1,-1\}\backslash S^{n}$ with $p(x)=[x]$ is the universal covering of the projective space $\mathbb {R} P^{n}$ for $n>1$.
• $q:\mathrm {SU} (n)\ltimes \mathbb {R} \to U(n)$ with
$q(A,t)={\begin{bmatrix}\exp(2\pi it)&0\\0&I_{n-1}\end{bmatrix}}_{\vphantom {x}}A$
is the universal covering of the unitary group $U(n)$.[5]: 5, Theorem 1
• Since $\mathrm {SU} (2)\cong S^{3}$, it follows that the quotient map
$f:\mathrm {SU} (2)\rightarrow \mathrm {SU} (2)\backslash \mathbb {Z_{2}} \cong \mathrm {SO} (3)$
is the universal covering of $\mathrm {SO} (3)$.
• A topological space which has no universal covering is the Hawaiian earring:
$X=\bigcup _{n\in \mathbb {N} }\left\{(x_{1},x_{2})\in \mathbb {R} ^{2}:{\Bigl (}x_{1}-{\frac {1}{n}}{\Bigr )}^{2}+x_{2}^{2}={\frac {1}{n^{2}}}\right\}$
One can show that no neighborhood of the origin $(0,0)$ is simply connected.[4]: 487, Example 1
G-coverings
Let G be a discrete group acting on the topological space X. This means that each element g of G is associated to a homeomorphism $H_{g}$ of X onto itself, in such a way that $H_{gh}$ is always equal to $H_{g}\circ H_{h}$ for any two elements g and h of G. (Or in other words, a group action of the group G on the space X is just a group homomorphism of the group G into the group Homeo(X) of self-homeomorphisms of X.) It is natural to ask under what conditions the projection from X to the orbit space X/G is a covering map. This is not always true since the action may have fixed points. An example for this is the cyclic group of order 2 acting on a product X × X by the twist action where the non-identity element acts by (x, y) ↦ (y, x). Thus the study of the relation between the fundamental groups of X and X/G is not so straightforward.
However the group G does act on the fundamental groupoid of X, and so the study is best handled by considering groups acting on groupoids, and the corresponding orbit groupoids. The theory for this is set down in Chapter 11 of the book Topology and groupoids referred to below. The main result is that for discontinuous actions of a group G on a Hausdorff space X which admits a universal cover, then the fundamental groupoid of the orbit space X/G is isomorphic to the orbit groupoid of the fundamental groupoid of X, i.e. the quotient of that groupoid by the action of the group G. This leads to explicit computations, for example of the fundamental group of the symmetric square of a space.
Deck transformation
Definition
Let $p:E\rightarrow X$ be a covering. A deck transformation is a homeomorphism $d:E\rightarrow E$ such that $p\circ d=p$, i.e. the diagram of continuous maps commutes. Together with the composition of maps, the set of deck transformations forms a group $\operatorname {Deck} (p)$, which is the same as $\operatorname {Aut} (p)$.
Now suppose $p:C\to X$ is a covering map and $C$ (and therefore also $X$) is connected and locally path connected. The action of $\operatorname {Aut} (p)$ on each fiber is free. If this action is transitive on some fiber, then it is transitive on all fibers, and we call the cover regular (or normal or Galois). Every such regular cover is a principal $G$-bundle, where $G=\operatorname {Aut} (p)$ is considered as a discrete topological group.
Every universal cover $p:D\to X$ is regular, with deck transformation group being isomorphic to the fundamental group $\pi _{1}(X)$.
Examples
• Let $q:S^{1}\to S^{1}$ be the covering $q(z)=z^{n}$ for some $n\in \mathbb {N} $, then the map $d_{k}:S^{1}\rightarrow S^{1}:z\mapsto z\,e^{2\pi ik/n}$ is a deck transformation and $\operatorname {Deck} (q)\cong \mathbb {Z} /\mathbb {nZ} $.
• Let $r:\mathbb {R} \to S^{1}$ be the covering $r(t)=(\cos(2\pi t),\sin(2\pi t))$, then the map $d_{k}:\mathbb {R} \rightarrow \mathbb {R} :t\mapsto t+k$ with $k\in \mathbb {Z} $ is a deck transformation and $\operatorname {Deck} (r)\cong \mathbb {Z} $.
• As another important example, consider $\mathbb {C} $ the complex plane and $\mathbb {C} ^{\times }$ the complex plane minus the origin. Then the map $p:\mathbb {C} ^{\times }\to \mathbb {C} ^{\times }$ with $p(z)=z^{n}$ is a regular cover. The deck transformations are multiplications with $n$-th roots of unity and the deck transformation group is therefore isomorphic to the cyclic group $\mathbb {Z} /n\mathbb {Z} $. Likewise, the map $\exp :\mathbb {C} \to \mathbb {C} ^{\times }$ with $\exp(z)=e^{z}$ is the universal cover.
Properties
Let $X$ be a path-connected space and $p:E\rightarrow X$ be a connected covering. Since a deck transformation $d:E\rightarrow E$ is bijective, it permutes the elements of a fiber $p^{-1}(x)$ with $x\in X$ and is uniquely determined by where it sends a single point. In particular, only the identity map fixes a point in the fiber.[1]: 70 Because of this property the deck transformation group defines a group action on $E$, namely $\operatorname {Deck} (p)\times E\rightarrow E:(d,e)\mapsto d(e)$.
Definition
A covering $p:E\rightarrow X$ is called normal if $\operatorname {Deck} (p)\backslash E\cong X$. This means that for every $x\in X$ and any two $e_{0},e_{1}\in p^{-1}(x)$ there exists a deck transformation $d:E\rightarrow E$ such that $d(e_{0})=e_{1}$.
Properties
Let $X$ be a path-connected space and $p:E\rightarrow X$ be a connected covering. Let $H=p_{\#}(\pi _{1}(E))$ be a subgroup of $\pi _{1}(X)$, then $p$ is a normal covering iff $H$ is a normal subgroup of $\pi _{1}(X)$.
If $p:E\rightarrow X$ is a normal covering and $H=p_{\#}(\pi _{1}(E))$, then $\operatorname {Deck} (p)\cong \pi _{1}(X)/H$.
If $p:E\rightarrow X$ is a path-connected covering and $H=p_{\#}(\pi _{1}(E))$, then $\operatorname {Deck} (p)\cong N(H)/H$, whereby $N(H)$ is the normaliser of $H$.[1]: 71
Let $E$ be a topological space. A group $\Gamma $ acts discontinuously on $E$ if every $e\in E$ has an open neighborhood $V\subset E$ with $V\neq \emptyset $, such that for every $\gamma \in \Gamma $ with $\gamma V\cap V\neq \emptyset $ one has $\gamma =1$.
If a group $\Gamma $ acts discontinuously on a topological space $E$, then the quotient map $q:E\rightarrow \Gamma \backslash E$ with $q(e)=\Gamma e$ is a normal covering.[1]: 72 Here $\Gamma \backslash E=\{\Gamma e:e\in E\}$ is the quotient space and $\Gamma e=\{\gamma (e):\gamma \in \Gamma \}$ is the orbit of the group action.
Examples
• The covering $q:S^{1}\to S^{1}$ with $q(z)=z^{n}$ is a normal covering for every $n\in \mathbb {N} $.
• Every simply connected covering is a normal covering.
Calculation
Let $\Gamma $ be a group, which acts discontinuously on a topological space $E$ and let $q:E\rightarrow \Gamma \backslash E$ be the normal covering.
• If $E$ is path-connected, then $\operatorname {Deck} (q)\cong \Gamma $.[1]: 72
• If $E$ is simply connected, then $\operatorname {Deck} (q)\cong \pi _{1}(\Gamma \backslash E)$.[1]: 71
Examples
• Let $n\in \mathbb {N} $. The antipodal map $g:S^{n}\rightarrow S^{n}$ with $g(x)=-x$ generates, together with the composition of maps, a group $D(g)\cong \mathbb {Z/2Z} $ and induces a group action $D(g)\times S^{n}\rightarrow S^{n},(g,x)\mapsto g(x)$, which acts discontinuously on $S^{n}$. Because of $\mathbb {Z_{2}} \backslash S^{n}\cong \mathbb {R} P^{n}$ it follows that the quotient map $q:S^{n}\rightarrow \mathbb {Z_{2}} \backslash S^{n}\cong \mathbb {R} P^{n}$ is a normal covering and for $n>1$ a universal covering, hence $\operatorname {Deck} (q)\cong \mathbb {Z/2Z} \cong \pi _{1}({\mathbb {R} P^{n}})$ for $n>1$.
• Let $\mathrm {SO} (3)$ be the special orthogonal group, then the map $f:\mathrm {SU} (2)\rightarrow \mathrm {SO} (3)\cong \mathbb {Z_{2}} \backslash \mathrm {SU} (2)$ is a normal covering and because of $\mathrm {SU} (2)\cong S^{3}$, it is the universal covering, hence $\operatorname {Deck} (f)\cong \mathbb {Z/2Z} \cong \pi _{1}(\mathrm {SO} (3))$.
• With the group action $(z_{1},z_{2})*(x,y)=(z_{1}+(-1)^{z_{2}}x,z_{2}+y)$ of $\mathbb {Z^{2}} $ on $\mathbb {R^{2}} $, whereby $(\mathbb {Z^{2}} ,*)$ is the semidirect product $\mathbb {Z} \rtimes \mathbb {Z} $, one gets the universal covering $f:\mathbb {R^{2}} \rightarrow (\mathbb {Z} \rtimes \mathbb {Z} )\backslash \mathbb {R^{2}} \cong K$ of the Klein bottle $K$, hence $\operatorname {Deck} (f)\cong \mathbb {Z} \rtimes \mathbb {Z} \cong \pi _{1}(K)$.
• Let $T=S^{1}\times S^{1}$ be the torus embedded in $\mathbb {C^{2}} $. Then one gets a homeomorphism $\alpha :T\rightarrow T:(e^{ix},e^{iy})\mapsto (e^{i(x+\pi )},e^{-iy})$, which induces a discontinuous group action $G_{\alpha }\times T\rightarrow T$, whereby $G_{\alpha }\cong \mathbb {Z/2Z} $. It follows that the map $f:T\rightarrow G_{\alpha }\backslash T\cong K$ is a normal covering of the Klein bottle, hence $\operatorname {Deck} (f)\cong \mathbb {Z/2Z} $.
• Let $S^{3}$ be embedded in $\mathbb {C^{2}} $. Since the group action $S^{3}\times \mathbb {Z/pZ} \rightarrow S^{3}:((z_{1},z_{2}),[k])\mapsto (e^{2\pi ik/p}z_{1},e^{2\pi ikq/p}z_{2})$ is discontinuous, whereby $p,q\in \mathbb {N} $ are coprime, the map $f:S^{3}\rightarrow \mathbb {Z_{p}} \backslash S^{3}=:L_{p,q}$ is the universal covering of the lens space $L_{p,q}$, hence $\operatorname {Deck} (f)\cong \mathbb {Z/pZ} \cong \pi _{1}(L_{p,q})$.
Galois correspondence
Let $X$ be a connected and locally simply connected space, then for every subgroup $H\subseteq \pi _{1}(X)$ there exists a path-connected covering $\alpha :X_{H}\rightarrow X$ with $\alpha _{\#}(\pi _{1}(X_{H}))=H$.[1]: 66
Let $p_{1}:E\rightarrow X$ and $p_{2}:E'\rightarrow X$ be two path-connected coverings, then they are equivalent iff the subgroups $H=p_{1\#}(\pi _{1}(E))$ and $H'=p_{2\#}(\pi _{1}(E'))$ are conjugate to each other.[4]: 482
Let $X$ be a connected and locally simply connected space, then, up to equivalence between coverings, there is a bijection:
${\begin{matrix}\qquad \displaystyle \{{\text{Subgroup of }}\pi _{1}(X)\}&\longleftrightarrow &\displaystyle \{{\text{path-connected covering }}p:E\rightarrow X\}\\H&\longrightarrow &\alpha :X_{H}\rightarrow X\\p_{\#}(\pi _{1}(E))&\longleftarrow &p\\\displaystyle \{{\text{normal subgroup of }}\pi _{1}(X)\}&\longleftrightarrow &\displaystyle \{{\text{normal covering }}p:E\rightarrow X\}\\H&\longrightarrow &\alpha :X_{H}\rightarrow X\\p_{\#}(\pi _{1}(E))&\longleftarrow &p\end{matrix}}$
For a sequence of subgroups $\displaystyle \{{\text{e}}\}\subset H\subset G\subset \pi _{1}(X)$ one gets a sequence of coverings ${\tilde {X}}\longrightarrow X_{H}\cong H\backslash {\tilde {X}}\longrightarrow X_{G}\cong G\backslash {\tilde {X}}\longrightarrow X\cong \pi _{1}(X)\backslash {\tilde {X}}$. For a subgroup $H\subset \pi _{1}(X)$ with index $\displaystyle [\pi _{1}(X):H]=d$, the covering $\alpha :X_{H}\rightarrow X$ has degree $d$.
Classification
Category of coverings
Let $X$ be a topological space. The objects of the category ${\boldsymbol {Cov(X)}}$ are the coverings $p:E\rightarrow X$ of $X$ and the morphisms between two coverings $p:E\rightarrow X$ and $q:F\rightarrow X$ are continuous maps $f:E\rightarrow F$, such that $q\circ f=p$, i.e. the diagram commutes.
G-Set
Let $G$ be a topological group. The category ${\boldsymbol {G-Set}}$ is the category whose objects are the G-sets and whose morphisms are the G-maps $\phi :X\rightarrow Y$ between G-sets, i.e. maps satisfying $\phi (gx)=g\,\phi (x)$ for every $g\in G$.
Equivalence
Let $X$ be a connected and locally simply connected space, $x\in X$ and $G=\pi _{1}(X,x)$ be the fundamental group of $X$. Since $G$ defines, by lifting of paths and evaluating at the endpoint of the lift, a group action on the fiber of a covering, the functor $F:{\boldsymbol {Cov(X)}}\longrightarrow {\boldsymbol {G-Set}}:p\mapsto p^{-1}(x)$ is an equivalence of categories.[1]: 68–70
Applications
An important practical application of covering spaces occurs in charts on SO(3), the rotation group. This group occurs widely in engineering, due to 3-dimensional rotations being heavily used in navigation, nautical engineering, and aerospace engineering, among many other uses. Topologically, SO(3) is the real projective space RP3, with fundamental group Z/2, and only (non-trivial) covering space the hypersphere S3, which is the group Spin(3), and represented by the unit quaternions. Thus quaternions are a preferred method for representing spatial rotations – see quaternions and spatial rotation.
However, it is often desirable to represent rotations by a set of three numbers, known as Euler angles (in numerous variants), both because this is conceptually simpler for someone familiar with planar rotation, and because one can build a combination of three gimbals to produce rotations in three dimensions. Topologically this corresponds to a map from the 3-torus T3 of three angles to the real projective space RP3 of rotations, and the resulting map has imperfections due to this map being unable to be a covering map. Specifically, the failure of the map to be a local homeomorphism at certain points is referred to as gimbal lock: at some points (when the axes are coplanar) the rank of the map is 2, rather than 3, meaning that only 2 dimensions of rotations can be realized from that point by changing the angles. This causes problems in applications, and is formalized by the notion of a covering space.
See also
• Bethe lattice is the universal cover of a Cayley graph
• Covering graph, a covering space for an undirected graph, and its special case the bipartite double cover
• Covering group
• Galois connection
• Quotient space (topology)
Literature
• Hatcher, Allen (2002). Algebraic topology. Cambridge: Cambridge University Press. ISBN 0-521-79160-X. OCLC 45420394.
• Forster, Otto (1981). Lectures on Riemann surfaces. New York. ISBN 0-387-90617-7. OCLC 7596520.{{cite book}}: CS1 maint: location missing publisher (link)
• Munkres, James R. (2018). Topology. New York, NY. ISBN 978-0-13-468951-7. OCLC 964502066.{{cite book}}: CS1 maint: location missing publisher (link)
• Kühnel, Wolfgang (2011). Matrizen und Lie-Gruppen Eine geometrische Einführung (in German). Wiesbaden: Vieweg+Teubner Verlag. doi:10.1007/978-3-8348-9905-7. ISBN 978-3-8348-9905-7. OCLC 706962685.
References
1. Hatcher, Allen (2001). Algebraic Topology. Cambridge: Cambridge Univ. Press. ISBN 0-521-79160-X.
2. Kühnel, Wolfgang. Matrizen und Lie-Gruppen. Stuttgart: Springer Fachmedien Wiesbaden GmbH. ISBN 978-3-8348-9905-7.
3. Forster, Otto (1991). Lectures on Riemann surfaces. München: Springer Berlin. ISBN 978-3-540-90617-9.
4. Munkres, James (2000). Topology. Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-468951-7.
5. Aguilar, Marcelo Alberto; Socolovsky, Miguel (23 November 1999). "The Universal Covering Group of U(n) and Projective Representations". International Journal of Theoretical Physics. Springer US (published April 2000). 39 (4): 997–1013. arXiv:math-ph/9911028. Bibcode:1999math.ph..11028A. doi:10.1023/A:1003694206391. S2CID 18686364.
Universal differential equation
A universal differential equation (UDE) is a non-trivial differential algebraic equation with the property that its solutions can approximate any continuous function on any interval of the real line to any desired level of accuracy.
Precisely, a (possibly implicit) differential equation P(y', y'', y''', ... , y(n)) = 0 is a UDE if for any continuous real-valued function f and for any positive continuous function ε there exists a smooth solution y of P(y', y'', y''', ... , y(n)) = 0 with |y(x) − f(x)| < ε(x) for all x in R.[1]
The existence of a UDE was initially regarded as an analogue of the universal Turing machine for analog computers, because of a result of Shannon that identifies the outputs of the general purpose analog computer with the solutions of algebraic differential equations.[1] However, in contrast to universal Turing machines, UDEs do not dictate the evolution of a system, but rather set out certain conditions that any evolution must fulfill.[2]
Examples
• Rubel found the first known UDE in 1981. It is given by the following implicit differential equation of fourth order (a symbolic sanity check follows this list):[1][2] $3y^{\prime 4}y^{\prime \prime }y^{\prime \prime \prime \prime 2}-4y^{\prime 4}y^{\prime \prime \prime 2}y^{\prime \prime \prime \prime }+6y^{\prime 3}y^{\prime \prime 2}y^{\prime \prime \prime }y^{\prime \prime \prime \prime }+24y^{\prime 2}y^{\prime \prime 4}y^{\prime \prime \prime \prime }-12y^{\prime 3}y^{\prime \prime }y^{\prime \prime \prime 3}-29y^{\prime 2}y^{\prime \prime 3}y^{\prime \prime \prime 2}+12y^{\prime \prime 7}=0$
• Duffin obtained a family of UDEs given by:[3]
$n^{2}y^{\prime \prime \prime \prime }y^{\prime 2}+3n(1-n)y^{\prime \prime \prime }y^{\prime \prime }y^{\prime }+\left(2n^{2}-3n+1\right)y^{\prime \prime 3}=0$ and $ny^{\prime \prime \prime \prime }y^{\prime 2}+(2-3n)y^{\prime \prime \prime }y^{\prime \prime }y^{\prime }+2(n-1)y^{\prime \prime 3}=0$, whose solutions are of class $C^{n}$ for n > 3.
• Briggs proposed another family of UDEs whose construction is based on Jacobi elliptic functions:[4]
$y^{\prime \prime \prime \prime }y^{\prime 2}-3y^{\prime \prime \prime }y^{\prime \prime }y^{\prime }+2\left(1-n^{-2}\right)y^{\prime \prime 3}=0$, where n > 3.
• Bournez and Pouly proved the existence of a fixed polynomial vector field p such that for any f and ε there exists some initial condition of the differential equation y' = p(y) that yields a unique and analytic solution satisfying |y(x) − f(x)| < ε(x) for all x in R.[2]
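As the symbolic sanity check announced above (a sketch using the sympy library; it verifies only that one particular smooth function, here a linear one, solves Rubel's equation, and says nothing about universality):
import sympy as sp

x = sp.symbols("x")
y = 2*x + 1  # any function whose second derivative vanishes

d = lambda k: sp.diff(y, x, k)  # k-th derivative of y

# Left-hand side of Rubel's fourth-order UDE.
lhs = (3*d(1)**4*d(2)*d(4)**2 - 4*d(1)**4*d(3)**2*d(4)
       + 6*d(1)**3*d(2)**2*d(3)*d(4) + 24*d(1)**2*d(2)**4*d(4)
       - 12*d(1)**3*d(2)*d(3)**3 - 29*d(1)**2*d(2)**3*d(3)**2
       + 12*d(2)**7)
print(sp.simplify(lhs))  # 0: every term contains a derivative of order >= 2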
See also
• Zeta function universality
• Hölder's theorem
References
1. Rubel, Lee A. (1981). "A universal differential equation". Bulletin of the American Mathematical Society. 4 (3): 345–349. doi:10.1090/S0273-0979-1981-14910-7. ISSN 0273-0979.
2. Pouly, Amaury; Bournez, Olivier (2020-02-28). "A Universal Ordinary Differential Equation". Logical Methods in Computer Science. 16 (1). arXiv:1702.08328. doi:10.23638/LMCS-16(1:28)2020.
3. Duffin, R. J. (1981). "Rubel's universal differential equation". Proceedings of the National Academy of Sciences. 78 (8): 4661–4662. Bibcode:1981PNAS...78.4661D. doi:10.1073/pnas.78.8.4661. ISSN 0027-8424. PMC 320216. PMID 16593068.
4. Briggs, Keith (2002-11-08). "Another universal differential equation". arXiv:math/0211142.
External links
• Wolfram Mathworld page on UDEs
Horn clause
In mathematical logic and logic programming, a Horn clause is a logical formula of a particular rule-like form which gives it useful properties for use in logic programming, formal specification, and model theory. Horn clauses are named for the logician Alfred Horn, who first pointed out their significance in 1951.[1]
Definition
A Horn clause is a clause (a disjunction of literals) with at most one positive, i.e. unnegated, literal.
Conversely, a disjunction of literals with at most one negated literal is called a dual-Horn clause.
A Horn clause with exactly one positive literal is a definite clause or a strict Horn clause;[2] a definite clause with no negative literals is a unit clause,[3] and a unit clause without variables is a fact.[4] A Horn clause without a positive literal is a goal clause. Note that the empty clause, consisting of no literals (which is equivalent to false), is a goal clause. These three kinds of Horn clauses are illustrated in the following propositional example:
Type of Horn clause | Disjunction form | Implication form | Read intuitively as
Definite clause | ¬p ∨ ¬q ∨ ... ∨ ¬t ∨ u | u ← p ∧ q ∧ ... ∧ t | assume that, if p and q and ... and t all hold, then also u holds
Fact | u | u ← true | assume that u holds
Goal clause | ¬p ∨ ¬q ∨ ... ∨ ¬t | false ← p ∧ q ∧ ... ∧ t | show that p and q and ... and t all hold[note 1]
All variables in a clause are implicitly universally quantified with the scope being the entire clause. Thus, for example:
¬ human(X) ∨ mortal(X)
stands for:
∀X( ¬ human(X) ∨ mortal(X) )
which is logically equivalent to:
∀X ( human(X) → mortal(X) )
Significance
Horn clauses play a basic role in constructive logic and computational logic. They are important in automated theorem proving by first-order resolution, because the resolvent of two Horn clauses is itself a Horn clause, and the resolvent of a goal clause and a definite clause is a goal clause. These properties of Horn clauses can lead to greater efficiency of proving a theorem: the goal clause is the negation of this theorem; see Goal clause in the above table. Intuitively, if we wish to prove φ, we assume ¬φ (the goal) and check whether such an assumption leads to a contradiction. If so, then φ must hold. This way, a mechanical proving tool needs to maintain only one set of formulas (assumptions), rather than two sets (assumptions and (sub)goals).
Propositional Horn clauses are also of interest in computational complexity. The problem of finding truth-value assignments to make a conjunction of propositional Horn clauses true is known as HORNSAT. This problem is P-complete and solvable in linear time.[5] Note that the unrestricted Boolean satisfiability problem is an NP-complete problem.
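As an illustrative sketch of the marking (unit propagation) algorithm behind this result, written in Python for clarity (achieving the linear-time bound requires more careful bookkeeping than this version does):
def horn_sat(clauses):
    """Decide satisfiability of a set of propositional Horn clauses.
    Each clause is a pair (head, body): head is a variable name or None
    (for a goal clause) and body is the set of negated variables.
    Returns a satisfying assignment (the set of true variables) or None."""
    true_vars = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if body <= true_vars:        # all premises already true
                if head is None:
                    return None          # a goal clause is violated: unsatisfiable
                if head not in true_vars:
                    true_vars.add(head)  # the head is forced to be true
                    changed = True
    return true_vars                     # all remaining variables are false

# Facts p and q, the definite clause u <- p, q, and the goal clause false <- u:
print(horn_sat([("p", set()), ("q", set()), ("u", {"p", "q"}), (None, {"u"})]))  # None
# Without the goal clause the minimal model is computed:
print(horn_sat([("p", set()), ("q", set()), ("u", {"p", "q"})]))  # {'p', 'q', 'u'}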
Logic programming
Horn clauses are also the basis of logic programming, where it is common to write definite clauses in the form of an implication:
(p ∧ q ∧ ... ∧ t) → u
In fact, the resolution of a goal clause with a definite clause to produce a new goal clause is the basis of the SLD resolution inference rule, used in implementation of the logic programming language Prolog.
In logic programming, a definite clause behaves as a goal-reduction procedure. For example, the Horn clause written above behaves as the procedure:
to show u, show p and show q and ... and show t.
To emphasize this reverse use of the clause, it is often written in the reverse form:
u ← (p ∧ q ∧ ... ∧ t)
In Prolog this is written as:
u :- p, q, ..., t.
In logic programming, computation and query evaluation are performed by representing the negation of a problem to be solved as a goal clause. For example, the problem of solving the existentially quantified conjunction of positive literals:
∃X (p ∧ q ∧ ... ∧ t)
is represented by negating the problem (denying that it has a solution), and representing it in the logically equivalent form of a goal clause:
∀X (false ← p ∧ q ∧ ... ∧ t)
In Prolog this is written as:
:- p, q, ..., t.
Solving the problem amounts to deriving a contradiction, which is represented by the empty clause (or "false"). The solution of the problem is a substitution of terms for the variables in the goal clause, which can be extracted from the proof of contradiction. Used in this way, goal clauses are similar to conjunctive queries in relational databases, and Horn clause logic is equivalent in computational power to a universal Turing machine.
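As an illustration of this goal-reduction reading, the following minimal Python sketch performs SLD-style resolution for propositional definite clauses only (no variables, so no substitutions are computed); the program representation and the crude depth bound are assumptions made for the example.

```python
def sld(goals, program, depth=0):
    """Propositional SLD resolution: succeed iff the goal list can be
    reduced to the empty clause using definite clauses (head, body)."""
    if not goals:
        return True                     # empty clause derived: query succeeds
    if depth > 100:
        return False                    # crude guard against looping programs
    first, rest = goals[0], goals[1:]
    return any(head == first and sld(list(body) + rest, program, depth + 1)
               for head, body in program)

program = [
    ("u", ("p", "q")),   # u :- p, q.
    ("p", ()),           # p.
    ("q", ()),           # q.
]
print(sld(["u"], program))   # True: the goal clause ":- u." derives false
```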
The Prolog notation is actually ambiguous, and the term “goal clause” is sometimes also used ambiguously. The variables in a goal clause can be read as universally or existentially quantified, and deriving “false” can be interpreted either as deriving a contradiction or as deriving a successful solution of the problem to be solved.
Van Emden and Kowalski (1976) investigated the model-theoretic properties of Horn clauses in the context of logic programming, showing that every set of definite clauses D has a unique minimal model M. An atomic formula A is logically implied by D if and only if A is true in M. It follows that a problem P represented by an existentially quantified conjunction of positive literals is logically implied by D if and only if P is true in M. The minimal model semantics of Horn clauses is the basis for the stable model semantics of logic programs.[6]
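For a propositional definite program, the minimal model M can be computed by iterating the immediate-consequence operator to its least fixpoint. A minimal Python sketch, again under an assumed (head, body) clause representation:

```python
# Least-fixpoint construction of the van Emden–Kowalski minimal model
# for a propositional definite program: starting from the empty set,
# add every head whose body already lies in the model, until stable.

def minimal_model(program):
    """program: iterable of (head, body) definite clauses."""
    model = set()
    while True:
        derived = {head for head, body in program if set(body) <= model}
        if derived <= model:
            return model            # fixpoint reached: the minimal model M
        model |= derived

program = [("p", ()), ("q", ("p",)), ("r", ("q", "s"))]
print(minimal_model(program))       # {'p', 'q'}; r is not implied, since s is not
```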
Notes
1. Like in resolution theorem proving, "show φ" and "assume ¬φ" are synonymous (indirect proof); they both correspond to the same formula, viz. ¬φ.
See also
• Propositional calculus
References
1. Horn, Alfred (1951). "On sentences which are true of direct unions of algebras". Journal of Symbolic Logic. 16 (1): 14–21. doi:10.2307/2268661. JSTOR 2268661. S2CID 42534337.
2. Makowsky (1987). "Why Horn Formulas Matter in Computer Science: Initial Structures and Generic Examples" (PDF). Journal of Computer and System Sciences. 34 (2–3): 266–292. doi:10.1016/0022-0000(87)90027-4.
3. Buss, Samuel R. (1998). "An Introduction to Proof Theory". In Samuel R. Buss (ed.). Handbook of Proof Theory. Studies in Logic and the Foundations of Mathematics. Vol. 137. Elsevier B.V. pp. 1–78. doi:10.1016/S0049-237X(98)80016-5. ISBN 978-0-444-89840-1. ISSN 0049-237X.
4. Lau & Ornaghi (2004). "Specifying Compositional Units for Correct Program Development in Computational Logic". Program Development in Computational Logic. Lecture Notes in Computer Science. Vol. 3049. pp. 1–29. doi:10.1007/978-3-540-25951-0_1. ISBN 978-3-540-22152-4.
5. Dowling, William F.; Gallier, Jean H. (1984). "Linear-time algorithms for testing the satisfiability of propositional Horn formulae". Journal of Logic Programming. 1 (3): 267–284. doi:10.1016/0743-1066(84)90014-1.
6. van Emden, M. H.; Kowalski, R. A. (1976). "The semantics of predicate logic as a programming language" (PDF). Journal of the ACM. 23 (4): 733–742. CiteSeerX 10.1.1.64.9246. doi:10.1145/321978.321991. S2CID 11048276.
| Wikipedia |
Universal set
In set theory, a universal set is a set which contains all objects, including itself.[1] In set theory as usually formulated, it can be proven in multiple ways that a universal set does not exist. However, some non-standard variants of set theory include a universal set.
Reasons for nonexistence
Many set theories do not allow for the existence of a universal set. There are several different arguments for its non-existence, based on different choices of axioms for set theory.
Regularity
In Zermelo–Fraenkel set theory, the axiom of regularity and axiom of pairing prevent any set from containing itself. For any set $A$, the set $\{A\}$ (constructed using pairing) necessarily contains an element disjoint from $\{A\}$, by regularity. Because its only element is $A$, it must be the case that $A$ is disjoint from $\{A\}$, and therefore that $A$ does not contain itself. Because a universal set would necessarily contain itself, it cannot exist under these axioms.[2]
Russell's paradox
Main article: Russell's paradox
Russell's paradox prevents the existence of a universal set in set theories that include Zermelo's axiom of comprehension. This axiom states that, for any formula $\varphi (x)$ and any set $A$, there exists a set
$\{x\in A\mid \varphi (x)\}$
that contains exactly those elements $x$ of $A$ that satisfy $\varphi $.
As a consequence of this axiom, to every set $A$ there corresponds another set $B=\{x\in A\mid x\not \in x\}$ consisting of the elements of $A$ that do not contain themselves. $B$ cannot contain itself, because it consists only of sets that do not contain themselves. It cannot be a member of $A$, because if it were it would be included as a member of itself, by its definition, contradicting the fact that it cannot contain itself. Therefore, every set $A$ is non-universal: there exists a set $B$ that it does not contain. This indeed holds even with predicative comprehension and over intuitionistic logic.
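The diagonal construction can be played out on finite families of sets. In the Python sketch below (purely illustrative: Python frozensets can never contain themselves, so the test `x in x` is always false here), the set $B$ built from a family $A$ fails to be a member of $A$:

```python
# For a family A of sets, form B = {x in A : x not in x} and observe
# that B escapes A, so A cannot be "universal" over its own elements.

def diagonal(A):
    return frozenset(x for x in A if x not in x)

A = {frozenset(), frozenset({frozenset()})}    # A = {∅, {∅}}
B = diagonal(A)                                # here B = {∅, {∅}}
print(B in A)                                  # False: A does not contain B
```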
Cantor's theorem
Main article: Cantor's theorem
Another difficulty with the idea of a universal set concerns the power set of the set of all sets. Because this power set is a set of sets, it would necessarily be a subset of the set of all sets, provided that both exist. However, this conflicts with Cantor's theorem that the power set of any set (whether infinite or not) always has strictly higher cardinality than the set itself.
Theories of universality
The difficulties associated with a universal set can be avoided either by using a variant of set theory in which the axiom of comprehension is restricted in some way, or by using a universal object that is not considered to be a set.
Restricted comprehension
There are set theories known to be consistent (if the usual set theory is consistent) in which the universal set V does exist (and $V\in V$ is true). In these theories, Zermelo's axiom of comprehension does not hold in general, and the axiom of comprehension of naive set theory is restricted in a different way. A set theory containing a universal set is necessarily a non-well-founded set theory. The most widely studied set theory with a universal set is Willard Van Orman Quine's New Foundations. Alonzo Church and Arnold Oberschelp also published work on such set theories. Church speculated that his theory might be extended in a manner consistent with Quine's,[3] but this is not possible for Oberschelp's, since in it the singleton function is provably a set,[4] which leads immediately to paradox in New Foundations.[5]
Another example is positive set theory, where the axiom of comprehension is restricted to hold only for the positive formulas (formulas that do not contain negations). Such set theories are motivated by notions of closure in topology.
Universal objects that are not sets
Main article: Universe (mathematics)
The idea of a universal set seems intuitively desirable in the Zermelo–Fraenkel set theory, particularly because most versions of this theory do allow the use of quantifiers over all sets (see universal quantifier). One way of allowing an object that behaves similarly to a universal set, without creating paradoxes, is to describe V and similar large collections as proper classes rather than as sets. One difference between a universal set and a universal class is that the universal class does not contain itself, because proper classes cannot be elements of other classes. Russell's paradox does not apply in these theories because the axiom of comprehension operates on sets, not on classes.
The category of sets can also be considered to be a universal object that is, again, not itself a set. It has all sets as objects, and arrows for all functions from one set to another. Again, it does not contain itself, because it is not itself a set.
See also
• Universe (mathematics)
• Grothendieck universe
• Domain of discourse
• Von Neumann–Bernays–Gödel set theory — an extension of ZFC that admits the class of all sets
Notes
1. Forster (1995), p. 1.
2. Cenzer et al. (2020).
3. Church (1974, p. 308). See also Forster (1995, p. 136), Forster (2001, p. 17), and Sheridan (2016).
4. Oberschelp (1973), p. 40.
5. Holmes (1998), p. 110.
References
• Cenzer, Douglas; Larson, Jean; Porter, Christopher; Zapletal, Jindrich (2020). Set Theory and Foundations of Mathematics: An Introduction to Mathematical Logic. World Scientific. p. 2. doi:10.1142/11324. ISBN 978-981-12-0192-9. S2CID 208131473.
• Church, Alonzo (1974). "Set theory with a universal set". Proceedings of the Tarski Symposium: An international symposium held at the University of California, Berkeley, June 23–30, 1971, to honor Alfred Tarski on the occasion of his seventieth birthday. Proceedings of Symposia in Pure Mathematics. Vol. 25. Providence, Rhode Island: American Mathematical Society. pp. 297–308. MR 0369069.
• Forster, T. E. (1995). Set Theory with a Universal Set: Exploring an Untyped Universe. Oxford Logic Guides. Vol. 31. Oxford University Press. ISBN 0-19-851477-8.
• Forster, Thomas (2001). "Church's set theory with a universal set". In Anderson, C. Anthony; Zelëny, Michael (eds.). Logic, Meaning and Computation: Essays in Memory of Alonzo Church. Synthese Library. Vol. 305. Dordrecht: Kluwer Academic Publishers. pp. 109–138. MR 2067968.
• Oberschelp, Arnold (1973). Set theory over classes. Dissertationes Mathematicae (Rozprawy Matematyczne). Vol. 106. Instytut Matematyczny Polskiej Akademii Nauk. MR 0319758.
• Willard Van Orman Quine (1937) "New Foundations for Mathematical Logic," American Mathematical Monthly 44, pp. 70–80.
• Sheridan, Flash (2016). "A variant of Church's set theory with a universal set in which the singleton function is a set" (PDF). Logique et Analyse. 59 (233): 81–131. JSTOR 26767819. MR 3524800.
External links
• Weisstein, Eric W. "Universal Set". MathWorld.
• Bibliography: Set Theory with a Universal Set, originated by T. E. Forster and maintained by Randall Holmes.
Set theory
Overview
• Set (mathematics)
Axioms
• Adjunction
• Choice
• countable
• dependent
• global
• Constructibility (V=L)
• Determinacy
• Extensionality
• Infinity
• Limitation of size
• Pairing
• Power set
• Regularity
• Union
• Martin's axiom
• Axiom schema
• replacement
• specification
Operations
• Cartesian product
• Complement (i.e. set difference)
• De Morgan's laws
• Disjoint union
• Identities
• Intersection
• Power set
• Symmetric difference
• Union
• Concepts
• Methods
• Almost
• Cardinality
• Cardinal number (large)
• Class
• Constructible universe
• Continuum hypothesis
• Diagonal argument
• Element
• ordered pair
• tuple
• Family
• Forcing
• One-to-one correspondence
• Ordinal number
• Set-builder notation
• Transfinite induction
• Venn diagram
Set types
• Amorphous
• Countable
• Empty
• Finite (hereditarily)
• Filter
• base
• subbase
• Ultrafilter
• Fuzzy
• Infinite (Dedekind-infinite)
• Recursive
• Singleton
• Subset · Superset
• Transitive
• Uncountable
• Universal
Theories
• Alternative
• Axiomatic
• Naive
• Cantor's theorem
• Zermelo
• General
• Principia Mathematica
• New Foundations
• Zermelo–Fraenkel
• von Neumann–Bernays–Gödel
• Morse–Kelley
• Kripke–Platek
• Tarski–Grothendieck
• Paradoxes
• Problems
• Russell's paradox
• Suslin's problem
• Burali-Forti paradox
Set theorists
• Paul Bernays
• Georg Cantor
• Paul Cohen
• Richard Dedekind
• Abraham Fraenkel
• Kurt Gödel
• Thomas Jech
• John von Neumann
• Willard Quine
• Bertrand Russell
• Thoralf Skolem
• Ernst Zermelo
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
| Wikipedia |
Universal Taylor series
A universal Taylor series is a formal power series $\sum _{n=1}^{\infty }a_{n}x^{n}$, such that for every continuous function $h$ on $[-1,1]$, if $h(0)=0$, then there exists an increasing sequence $\left(\lambda _{n}\right)$ of positive integers such that
$\lim _{n\to \infty }\left\|\sum _{k=1}^{\lambda _{n}}a_{k}x^{k}-h(x)\right\|=0$
In other words, the set of partial sums of $\sum _{n=1}^{\infty }a_{n}x^{n}$ is dense (in sup-norm) in $C[-1,1]_{0}$, the set of continuous functions on $[-1,1]$ that vanish at the origin.[1]
Statements and proofs
Fekete proved that a universal Taylor series exists.[2]
Proof
Let $f_{1},f_{2},...$ be a sequence in which every polynomial with rational coefficients and zero constant coefficient appears infinitely many times (use a diagonal enumeration). By the Weierstrass approximation theorem, this sequence is dense in $C[-1,1]_{0}$, so it suffices for the partial sums to approximate every term of the sequence. We construct the power series iteratively as a sequence of polynomials $p_{1},p_{2},...$, such that $p_{n}$ and $p_{n+1}$ agree on the first $n$ coefficients, and $\|f_{n}-p_{n}\|_{\infty }\leq 1/n$.
To start, let $p_{1}=f_{1}$. To construct $p_{n+1}$, replace each $x$ in $f_{n+1}-p_{n}$ by a close enough approximation with lowest degree $\geq n+1$, using the lemma below. Now add this to $p_{n}$.
Lemma — The function $f(x)=x$ can be approximated to arbitrary precision by a polynomial whose lowest-degree term has arbitrarily high degree. That is, for every $\epsilon >0$ and $n\in \{1,2,...\}$ there exists a polynomial $p(x)=a_{n}x^{n}+\cdots +a_{N}x^{N}$ such that $\|f-p\|_{\infty }\leq \epsilon $.
Proof of lemma
The function $g(x)=x-c\tanh(x/c)$ is the uniform limit of its Taylor expansion, which starts with degree 3. Also, $\|f-g\|_{\infty }<c$. Thus, to $\epsilon $-approximate $f(x)=x$ using a polynomial with lowest degree 3, we do so for $g(x)$ with $c<\epsilon /2$ by truncating its Taylor expansion. Now iterate this construction, plugging the lowest-degree-3 approximation into the Taylor expansion of $g(x)$ to obtain approximations of lowest degree 9, 27, 81, ...
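A quick numerical check of both claims, as a hedged illustration (the constant c and the grids are arbitrary choices; the Taylor comparison is made near 0, where the expansion of tanh converges):

```python
import numpy as np

c = 0.05
x = np.linspace(-1, 1, 2001)
g = x - c * np.tanh(x / c)
print(np.max(np.abs(x - g)) < c)      # True: |f - g| = c*|tanh(x/c)| < c

# Near 0, tanh(u) = u - u**3/3 + O(u**5), so g(x) = x**3/(3*c**2) + O(x**5):
# the expansion has no term of degree below 3.
small = np.linspace(-c / 2, c / 2, 101)
lead = small**3 / (3 * c**2)
resid = (small - c * np.tanh(small / c)) - lead
print(np.max(np.abs(resid)))          # ~2e-4: dominated by the degree-5 term
```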
References
1. Mouze, A.; Nestoridis, V. (2010). "Universality and ultradifferentiable functions: Fekete's theorem". Proceedings of the American Mathematical Society. 138 (11): 3945–3955. doi:10.1090/S0002-9939-10-10380-3. ISSN 0002-9939.
2. Pál, Julius (1914). "Zwei kleine Bemerkungen". Tohoku Mathematical Journal. First Series. 6: 42–43.
| Wikipedia |
Universal Teichmüller space
In mathematical complex analysis, universal Teichmüller space T(1) is a Teichmüller space containing the Teichmüller space T(G) of every Fuchsian group G. It was introduced by Bers (1965) as the set of boundary values of quasiconformal maps of the upper half-plane that fix 0, 1, and ∞.
References
• Bers, Lipman (1965), "Automorphic forms and general Teichmüller spaces", in Aeppli, A.; Calabi, Eugenio; Röhrl, H. (eds.), Proceedings of the Conference on Complex Analysis, Minneapolis 1964, Berlin, New York: Springer-Verlag, pp. 109–113, ISBN 9783540033851
• Bers, Lipman (1970), "Universal Teichmüller space", in Gilbert, Robert P.; Newton, Roger G. (eds.), Analytic methods in mathematical physics (Sympos., Indiana Univ., Bloomington, Ind., 1968), Gordon and Breach, pp. 65–83, ISBN 9780677135601
• Bers, Lipman (1972), "Uniformization, moduli, and Kleinian groups", The Bulletin of the London Mathematical Society, 4 (3): 257–300, doi:10.1112/blms/4.3.257, ISSN 0024-6093, MR 0348097
• Gardiner, Frederick P.; Harvey, William J. (2002), "Universal Teichmüller space", Handbook of complex analysis: geometric function theory, Vol. 1, Handbook of Complex Analysis, vol. 1, Amsterdam: North-Holland, pp. 457–492, arXiv:math/0012168, doi:10.1016/S1874-5709(02)80016-6, ISBN 9780444828453, MR 1966201, S2CID 16561248
• Pekonen, Osmo (1995), "Universal Teichmüller space in geometry and physics", Journal of Geometry and Physics, 15 (3): 227–251, arXiv:hep-th/9310045, Bibcode:1995JGP....15..227P, doi:10.1016/0393-0440(94)00007-Q, ISSN 0393-0440, MR 1316332, S2CID 119598450
| Wikipedia |
Universal algebraic geometry
In algebraic geometry, universal algebraic geometry generalizes the geometry of rings to geometries of arbitrary varieties of algebras, so that every variety of algebras has its own algebraic geometry. The two terms algebraic variety and variety of algebras should not be confused.
See also
• Algebraic geometry
• Universal algebra
References
• Seven Lectures on the Universal Algebraic Geometry
| Wikipedia |
Universal approximation theorem
In the mathematical theory of artificial neural networks, universal approximation theorems are results[1][2] that delimit what neural networks can theoretically learn, i.e. that establish the density of an algorithmically generated class of functions within a given function space of interest. Typically, these results concern the approximation capabilities of the feedforward architecture on the space of continuous functions between two Euclidean spaces, and the approximation is with respect to the compact convergence topology. It must be stressed that while some functions can be arbitrarily well approximated in a region, the proofs do not apply outside of that region, i.e. the approximated functions do not extrapolate. This caveat applies to all non-periodic activation functions, i.e. to those used in practice and assumed in most proofs.
However, there are also a variety of results between non-Euclidean spaces[3] and other commonly used architectures and, more generally, algorithmically generated sets of functions, such as the convolutional neural network (CNN) architecture,[4][5] radial basis functions,[6] or neural networks with specific properties.[7][8] Most universal approximation theorems can be parsed into two classes. The first quantifies the approximation capabilities of neural networks with an arbitrary number of artificial neurons ("arbitrary width" case) and the second focuses on the case with an arbitrary number of hidden layers, each containing a limited number of artificial neurons ("arbitrary depth" case). In addition to these two classes, there are also universal approximation theorems for neural networks with bounded number of hidden layers and a limited number of neurons in each layer ("bounded depth and bounded width" case).
Universal approximation theorems imply that neural networks can represent a wide variety of interesting functions with appropriate weights. On the other hand, they typically do not provide a construction for the weights, but merely state that such a construction is possible. To construct the weights, neural networks are trained, and they may or may not converge to correct weights (i.e. they can get stuck in a local optimum). If the network is too small (for the dimensions of the input data) then the universal approximation theorems do not apply, i.e. the network will not learn. The classical result that a single hidden layer is enough applies, for bounded width, only to univariate functions; in general such a network is too shallow, so the width of a network is also an important hyperparameter. The choice of activation function is also important: some results and proofs assume, e.g., ReLU (or sigmoid) activations, while some choices, such as a linear activation function (or any polynomial), are known not to work.
The universal approximation property of width-bounded networks has been studied as a dual of classical universal approximation results on depth-bounded networks. For input dimension dx and output dimension dy, the minimum width required for the universal approximation of the Lp functions is exactly max{dx + 1, dy} (for a ReLU network). More generally, this also holds if both ReLU and a threshold activation function are used.[9]
History
One of the first versions of the arbitrary width case was proven by George Cybenko in 1989 for sigmoid activation functions.[10] Kurt Hornik, Maxwell Stinchcombe, and Halbert White showed in 1989 that multilayer feed-forward networks with as few as one hidden layer are universal approximators.[1] Hornik also showed in 1991[11] that it is not the specific choice of the activation function but rather the multilayer feed-forward architecture itself that gives neural networks the potential of being universal approximators. Moshe Leshno et al. in 1993[12] and later Allan Pinkus in 1999[13] showed that the universal approximation property is equivalent to having a nonpolynomial activation function. In 2022, Shen Zuowei, Haizhao Yang, and Shijun Zhang[14] obtained precise quantitative bounds on the depth and width required to approximate a target function by deep and wide ReLU neural networks.
The arbitrary depth case was also studied by a number of authors, such as Gustaf Gripenberg in 2003,[15] Dmitry Yarotsky,[16] Zhou Lu et al. in 2017,[17] and Boris Hanin and Mark Sellke in 2018,[18] who focused on neural networks with ReLU activation function. In 2020, Patrick Kidger and Terry Lyons[19] extended those results to neural networks with general activation functions such as tanh, GeLU, or Swish, and in 2022, their result was made quantitative by Leonie Papon and Anastasis Kratsios,[20] who derived explicit depth estimates depending on the regularity of the target function and of the activation function.
The question of minimal possible width for universality was first studied in 2021, when Park et al. obtained the minimum width required for the universal approximation of Lp functions using feed-forward neural networks with ReLU as activation functions.[9] Similar results, which can be directly applied to residual neural networks, were obtained in the same year by Paulo Tabuada and Bahman Gharesifard using control-theoretic arguments.[21][22] In 2023, Cai obtained the optimal minimum width bound for universal approximation.[23]
The bounded depth and bounded width case was first studied by Maiorov and Pinkus in 1999.[24] They showed that there exists an analytic sigmoidal activation function such that two-hidden-layer neural networks with a bounded number of units in the hidden layers are universal approximators. Using algorithmic and computer programming techniques, Guliyev and Ismailov constructed a smooth sigmoidal activation function providing the universal approximation property for two-hidden-layer feedforward neural networks with fewer units in the hidden layers.[25] It was constructively proved in a 2018 paper[26] that single-hidden-layer networks with bounded width are still universal approximators for univariate functions, but this property no longer holds for multivariable functions.
Several extensions of the theorem exist, such as to discontinuous activation functions,[12] noncompact domains,[19] certifiable networks,[27] random neural networks,[28] and alternative network architectures and topologies.[19][29]
Arbitrary-width case
A spate of papers in the 1980s and 1990s, by George Cybenko, Kurt Hornik, and others, established several universal approximation theorems for arbitrary width and bounded depth.[30][10][31][11] See [32][33][13] for reviews. The following is the most often quoted:
Universal approximation theorem — Let $C(X,\mathbb {R} ^{m})$ denote the set of continuous functions from a subset $X$ of a Euclidean space $\mathbb {R} ^{n}$ to a Euclidean space $\mathbb {R} ^{m}$. Let $\sigma \in C(\mathbb {R} ,\mathbb {R} )$. Note that $(\sigma \circ x)_{i}=\sigma (x_{i})$, so $\sigma \circ x$ denotes $\sigma $ applied to each component of $x$.
Then $\sigma $ is not polynomial if and only if for every $n\in \mathbb {N} $, $m\in \mathbb {N} $, compact $K\subseteq \mathbb {R} ^{n}$, $f\in C(K,\mathbb {R} ^{m}),\varepsilon >0$ there exist $k\in \mathbb {N} $, $A\in \mathbb {R} ^{k\times n}$, $b\in \mathbb {R} ^{k}$, $C\in \mathbb {R} ^{m\times k}$ such that
$\sup _{x\in K}\|f(x)-g(x)\|<\varepsilon $
where $g(x)=C\cdot (\sigma \circ (A\cdot x+b))$
Such an $f$ can also be approximated by a network of greater depth by using the same construction for the first layer and approximating the identity function with later layers.
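The following Python sketch illustrates the form $g(x)=C\cdot (\sigma \circ (A\cdot x+b))$ guaranteed by the theorem, in the simplest possible way: hidden weights $(A,b)$ are drawn at random and only the outer weights $C$ are fitted by least squares. The width, weight scales, and target function are illustrative choices, not part of the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)
k = 200                                   # hidden width (illustrative)
A = rng.normal(size=(k, 1)) * 5.0         # random hidden weights
b = rng.normal(size=k) * 5.0              # random hidden biases

def hidden(x):                            # sigma applied componentwise
    return np.tanh(x[:, None] * A[:, 0] + b)

x = np.linspace(-1, 1, 400)               # compact set K = [-1, 1]
f = np.sin(3 * x) + 0.5 * x**2            # a target continuous function
C, *_ = np.linalg.lstsq(hidden(x), f, rcond=None)

err = np.max(np.abs(hidden(x) @ C - f))
print(f"sup-norm error on K: {err:.4f}")  # small for moderate width k
```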
Proof sketch
It suffices to prove the case where $m=1$, since uniform convergence in $\mathbb {R} ^{m}$ is just uniform convergence in each coordinate.
Let $F_{\sigma }$ be the set of all one-hidden-layer neural networks constructed with $\sigma $. Let $C_{0}(\mathbb {R} ^{d},\mathbb {R} )$ be the set of all functions in $C(\mathbb {R} ^{d},\mathbb {R} )$ with compact support.
If $\sigma $ is a polynomial of degree $d$, then $F_{\sigma }$ is contained in the closed subspace of all polynomials of degree at most $d$, so its closure is also contained in it, which is not all of $C_{0}(\mathbb {R} ^{d},\mathbb {R} )$.
Otherwise, we show that $F_{\sigma }$'s closure is all of $C_{0}(\mathbb {R} ^{d},\mathbb {R} )$. Suppose we can construct arbitrarily good approximations of the ramp function $r(x)={\begin{cases}-1{\text{ if }}x<-1\\x{\text{ if }}|x|\leq 1\\1{\text{ if }}x>1\\\end{cases}}$ Then ramp functions can be combined to construct any compactly supported continuous function to arbitrary precision. It remains, therefore, to approximate the ramp function.
Any of the activation functions commonly used in machine learning can be used to approximate the ramp function, possibly by first approximating the ReLU and then the ramp function.
If $\sigma $ is "squashing", that is, it has limits $\sigma (-\infty )<\sigma (+\infty )$, then one can first affinely scale down its x-axis so that its graph looks like a step function with two sharp "overshoots", then take a linear sum of enough of them to make a "staircase" approximation of the ramp function. With more steps of the staircase, the overshoots smooth out and we get an arbitrarily good approximation of the ramp function.
The case where $\sigma $ is a generic non-polynomial function is harder; the reader is directed to [13].
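The "staircase" step of the sketch is easy to demonstrate numerically. In the following illustrative Python snippet, shifted and sharpened logistic sigmoids (a squashing function) are summed to approximate the ramp function $r$; the step count and sharpness are arbitrary parameters.

```python
import numpy as np

def ramp(x):
    return np.clip(x, -1.0, 1.0)

def staircase(x, n_steps=50, sharpness=200.0):
    # one sharpened sigmoid step of height 2/n_steps at each center
    centers = np.linspace(-1, 1, n_steps, endpoint=False) + 1.0 / n_steps
    z = np.clip(sharpness * (x[:, None] - centers), -50.0, 50.0)
    steps = 1.0 / (1.0 + np.exp(-z))
    return -1.0 + (2.0 / n_steps) * steps.sum(axis=1)

x = np.linspace(-2, 2, 2001)
err = np.max(np.abs(staircase(x) - ramp(x)))
print(f"sup-norm error: {err:.3f}")   # shrinks as n_steps and sharpness grow
```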
The problem with polynomials may be removed by allowing the outputs of the hidden layers to be multiplied together (the "pi-sigma networks"), yielding the generalization:[31]
Universal approximation theorem for pi-sigma networks — With any nonconstant activation function, a one-hidden-layer pi-sigma network is a universal approximator.
Arbitrary-depth case
The 'dual' versions of the theorem consider networks of bounded width and arbitrary depth. A variant of the universal approximation theorem was proved for the arbitrary depth case by Zhou Lu et al. in 2017.[17] They showed that networks of width n+4 with ReLU activation functions can approximate any Lebesgue-integrable function on n-dimensional input space with respect to $L^{1}$ distance if network depth is allowed to grow. It was also shown that if the width was less than or equal to n, this general expressive power to approximate any Lebesgue-integrable function was lost. In the same paper[17] it was shown that ReLU networks with width n+1 were sufficient to approximate any continuous function of n-dimensional input variables.[34] The following refinement specifies the optimal minimum width for which such an approximation is possible, and is due to Park et al.[35]
Universal approximation theorem (L1 distance, ReLU activation, arbitrary depth, minimal width). For any Bochner–Lebesgue p-integrable function $f:\mathbb {R} ^{n}\rightarrow \mathbb {R} ^{m}$ and any $\epsilon >0$, there exists a fully-connected ReLU network $F$ of width exactly $d_{m}=\max\{{n+1},m\}$, satisfying
$\int _{\mathbb {R} ^{n}}\left\|f(x)-F_{}(x)\right\|^{p}\mathrm {d} x<\epsilon $.
Moreover, there exists a function $f\in L^{p}(\mathbb {R} ^{n},\mathbb {R} ^{m})$ and some $\epsilon >0$, for which there is no fully-connected ReLU network of width less than $d_{m}=\max\{{n+1},m\}$ satisfying the above approximation bound.
Remark: If the activation is replaced by leaky-ReLU, and the input is restricted in a compact domain, then the exact minimum width is [23] $d_{m}=\max\{n,m,2\}$.
Quantitative Refinement: In the case where ${\mathcal {X}}=[0,1]^{d}$, $D=1$, and $\sigma $ is the ReLU activation function, the exact depth and width for a ReLU network to achieve $\varepsilon $ error are known.[36] If, moreover, the target function $f$ is smooth, then the required number of layers and their width can be exponentially smaller.[37] Even if $f$ is not smooth, the curse of dimensionality can be broken if $f$ admits additional "compositional structure".[38][39]
Together, the central result of [19] yields the following universal approximation theorem for networks with bounded width (cf. also [15] for the first result of this kind).
Universal approximation theorem (Uniform non-affine activation, arbitrary depth, constrained width). Let ${\mathcal {X}}$ be a compact subset of $\mathbb {R} ^{d}$. Let $\sigma :\mathbb {R} \to \mathbb {R} $ be any non-affine continuous function which is continuously differentiable at at least one point, with nonzero derivative at that point. Let ${\mathcal {N}}_{d,D:d+D+2}^{\sigma }$ denote the space of feed-forward neural networks with $d$ input neurons, $D$ output neurons, and an arbitrary number of hidden layers each with $d+D+2$ neurons, such that every hidden neuron has activation function $\sigma $ and every output neuron has the identity as its activation function, with input layer $\phi $, and output layer $\rho $. Then given any $\varepsilon >0$ and any $f\in C({\mathcal {X}},\mathbb {R} ^{D})$, there exists ${\hat {f}}\in {\mathcal {N}}_{d,D:d+D+2}^{\sigma }$ such that
$\sup _{x\in {\mathcal {X}}}\,\left\|{\hat {f}}(x)-f(x)\right\|<\varepsilon .$
In other words, ${\mathcal {N}}$ is dense in $C({\mathcal {X}};\mathbb {R} ^{D})$ with respect to the topology of uniform convergence.
Quantitative Refinement: The number of layers and the width of each layer required to approximate $f$ to $\varepsilon $ precision are known;[20] moreover, the result holds true when ${\mathcal {X}}$ and $\mathbb {R} ^{D}$ are replaced with any non-positively curved Riemannian manifold.
Certain necessary conditions for the bounded width, arbitrary depth case have been established, but there is still a gap between the known sufficient and necessary conditions.[17][18][40]
Bounded depth and bounded width case
The first result on the approximation capabilities of neural networks with a bounded number of layers, each containing a limited number of artificial neurons, was obtained by Maiorov and Pinkus.[24] Their remarkable result revealed that such networks can be universal approximators and that two hidden layers suffice to achieve this property.
Universal approximation theorem:[24] There exists an activation function $\sigma $ which is analytic, strictly increasing and sigmoidal and has the following property: For any $f\in C[0,1]^{d}$ and $\varepsilon >0$ there exist constants $d_{i},c_{ij},\theta _{ij},\gamma _{i}$, and vectors $\mathbf {w} ^{ij}\in \mathbb {R} ^{d}$ for which
$\left\vert f(\mathbf {x} )-\sum _{i=1}^{6d+3}d_{i}\sigma \left(\sum _{j=1}^{3d}c_{ij}\sigma (\mathbf {w} ^{ij}\cdot \mathbf {x-} \theta _{ij})-\gamma _{i}\right)\right\vert <\varepsilon $
for all $\mathbf {x} =(x_{1},...,x_{d})\in [0,1]^{d}$.
This is an existence result. It says that activation functions providing the universal approximation property for bounded depth, bounded width networks exist. Using certain algorithmic and computer programming techniques, Guliyev and Ismailov efficiently constructed such activation functions depending on a numerical parameter. The developed algorithm allows one to compute the activation functions at any point of the real axis instantly. For the algorithm and the corresponding computer code, see [25]. The theoretical result can be formulated as follows.
Universal approximation theorem:[25][26] Let $[a,b]$ be a finite segment of the real line, $s=b-a$ and $\lambda $ be any positive number. Then one can algorithmically construct a computable sigmoidal activation function $\sigma \colon \mathbb {R} \to \mathbb {R} $, which is infinitely differentiable, strictly increasing on $(-\infty ,s)$, $\lambda $ -strictly increasing on $[s,+\infty )$, and satisfies the following properties:
1) For any $f\in C[a,b]$ and $\varepsilon >0$ there exist numbers $c_{1},c_{2},\theta _{1}$ and $\theta _{2}$ such that for all $x\in [a,b]$
$|f(x)-c_{1}\sigma (x-\theta _{1})-c_{2}\sigma (x-\theta _{2})|<\varepsilon $
2) For any continuous function $F$ on the $d$-dimensional box $[a,b]^{d}$ and $\varepsilon >0$, there exist constants $e_{p}$, $c_{pq}$, $\theta _{pq}$ and $\zeta _{p}$ such that the inequality
$\left|F(\mathbf {x} )-\sum _{p=1}^{2d+2}e_{p}\sigma \left(\sum _{q=1}^{d}c_{pq}\sigma (\mathbf {w} ^{q}\cdot \mathbf {x} -\theta _{pq})-\zeta _{p}\right)\right|<\varepsilon $
holds for all $\mathbf {x} =(x_{1},\ldots ,x_{d})\in [a,b]^{d}$. Here the weights $\mathbf {w} ^{q}$, $q=1,\ldots ,d$, are fixed as follows:
$\mathbf {w} ^{1}=(1,0,\ldots ,0),\quad \mathbf {w} ^{2}=(0,1,\ldots ,0),\quad \ldots ,\quad \mathbf {w} ^{d}=(0,0,\ldots ,1).$
In addition, all the coefficients $e_{p}$, except one, are equal.
Here “$\sigma \colon \mathbb {R} \to \mathbb {R} $ is $\lambda $-strictly increasing on some set $X$” means that there exists a strictly increasing function $u\colon X\to \mathbb {R} $ such that $|\sigma (x)-u(x)|\leq \lambda $ for all $x\in X$. Clearly, a $\lambda $-increasing function behaves like a usual increasing function as $\lambda $ gets small. In the "depth-width" terminology, the above theorem says that for certain activation functions depth-$2$ width-$2$ networks are universal approximators for univariate functions and depth-$3$ width-$(2d+2)$ networks are universal approximators for $d$-variable functions ($d>1$).
Graph input
Achieving useful universal function approximation on graphs (or rather on graph isomorphism classes) has been a longstanding problem. The popular graph convolutional neural networks (GCNs or GNNs) can be made as discriminative as the Weisfeiler–Leman graph isomorphism test.[41] In 2020, a universal approximation theorem result was established by Brüel-Gabrielsson,[42] showing that graph representation with certain injective properties is sufficient for universal function approximation on bounded graphs and restricted universal function approximation on unbounded graphs, with an accompanying method of runtime $O(\#\mathrm{edges}\times \#\mathrm{nodes})$ that performed at state of the art on a collection of benchmarks.
See also
• Kolmogorov–Arnold representation theorem
• Representer theorem
• No free lunch theorem
• Stone–Weierstrass theorem
• Fourier series
References
1. Hornik, Kurt; Stinchcombe, Maxwell; White, Halbert (1989). Multilayer Feedforward Networks are Universal Approximators (PDF). Neural Networks. Vol. 2. Pergamon Press. pp. 359–366.
2. Balázs Csanád Csáji (2001) Approximation with Artificial Neural Networks; Faculty of Sciences; Eötvös Loránd University, Hungary
3. Kratsios, Anastasis; Bilokopytov, Eugene (2020). Non-Euclidean Universal Approximation (PDF). Advances in Neural Information Processing Systems. Vol. 33. Curran Associates.
4. Zhou, Ding-Xuan (2020). "Universality of deep convolutional neural networks". Applied and Computational Harmonic Analysis. 48 (2): 787–794. arXiv:1805.10769. doi:10.1016/j.acha.2019.06.004. S2CID 44113176.
5. Heinecke, Andreas; Ho, Jinn; Hwang, Wen-Liang (2020). "Refinement and Universal Approximation via Sparsely Connected ReLU Convolution Nets". IEEE Signal Processing Letters. 27: 1175–1179. Bibcode:2020ISPL...27.1175H. doi:10.1109/LSP.2020.3005051. S2CID 220669183.
6. Park, J.; Sandberg, I. W. (1991). "Universal Approximation Using Radial-Basis-Function Networks". Neural Computation. 3 (2): 246–257. doi:10.1162/neco.1991.3.2.246. PMID 31167308. S2CID 34868087.
7. Yarotsky, Dmitry (2021). "Universal Approximations of Invariant Maps by Neural Networks". Constructive Approximation. 55: 407–474. arXiv:1804.10306. doi:10.1007/s00365-021-09546-1. S2CID 13745401.
8. Zakwan, Muhammad; d’Angelo, Massimiliano; Ferrari-Trecate, Giancarlo (2023). "Universal Approximation Property of Hamiltonian Deep Neural Networks". IEEE Control Systems Letters. 7: 2689–2694. arXiv:2303.12147. doi:10.1109/LCSYS.2023.3288350. ISSN 2475-1456. S2CID 257663609.
9. Park, Sejun; Yun, Chulhee; Lee, Jaeho; Shin, Jinwoo (2021). Minimum Width for Universal Approximation. International Conference on Learning Representations. arXiv:2006.08859.
10. Cybenko, G. (1989). "Approximation by superpositions of a sigmoidal function". Mathematics of Control, Signals, and Systems. 2 (4): 303–314. CiteSeerX 10.1.1.441.7873. doi:10.1007/BF02551274. S2CID 3958369.
11. Hornik, Kurt (1991). "Approximation capabilities of multilayer feedforward networks". Neural Networks. 4 (2): 251–257. doi:10.1016/0893-6080(91)90009-T. S2CID 7343126.
12. Leshno, Moshe; Lin, Vladimir Ya.; Pinkus, Allan; Schocken, Shimon (January 1993). "Multilayer feedforward networks with a nonpolynomial activation function can approximate any function". Neural Networks. 6 (6): 861–867. doi:10.1016/S0893-6080(05)80131-5. S2CID 206089312.
13. Pinkus, Allan (January 1999). "Approximation theory of the MLP model in neural networks". Acta Numerica. 8: 143–195. Bibcode:1999AcNum...8..143P. doi:10.1017/S0962492900002919. S2CID 16800260.
14. Shen, Zuowei; Yang, Haizhao; Zhang, Shijun (January 2022). "Optimal approximation rate of ReLU networks in terms of width and depth". Journal de Mathématiques Pures et Appliquées. 157: 101–135. arXiv:2103.00502. doi:10.1016/j.matpur.2021.07.009. S2CID 232075797.
15. Gripenberg, Gustaf (June 2003). "Approximation by neural networks with a bounded number of nodes at each level". Journal of Approximation Theory. 122 (2): 260–266. doi:10.1016/S0021-9045(03)00078-9.
16. Yarotsky, Dmitry (2016-10-03). Error bounds for approximations with deep ReLU networks. OCLC 1106247665.
17. Lu, Zhou; Pu, Homgming; Wang, Feicheng; Hu, Zhiqiang; Wang, Liwei (2017). "The Expressive Power of Neural Networks: A View from the Width". Advances in Neural Information Processing Systems. Curran Associates. 30: 6231–6239. arXiv:1709.02540.
18. Hanin, Boris; Sellke, Mark (2018). "Approximating Continuous Functions by ReLU Nets of Minimal Width". arXiv:1710.11278 [stat.ML].
19. Kidger, Patrick; Lyons, Terry (July 2020). Universal Approximation with Deep Narrow Networks. Conference on Learning Theory. arXiv:1905.08539.
20. Kratsios, Anastasis; Papon, Léonie (2022). "Universal Approximation Theorems for Differentiable Geometric Deep Learning". Journal of Machine Learning Research. 23 (196): 1–73. arXiv:2101.05390. ISSN 1533-7928.
21. Tabuada, Paulo; Gharesifard, Bahman (2021). Universal approximation power of deep residual neural networks via nonlinear control theory. International Conference on Learning Representations. arXiv:2007.06007.
22. Tabuada, Paulo; Gharesifard, Bahman (2023). "Universal Approximation Power of Deep Residual Neural Networks Through the Lens of Control". IEEE Transactions on Automatic Control. 68 (5): 2715–2728. doi:10.1109/TAC.2022.3190051. ISSN 1558-2523. S2CID 250512115.
23. Cai, Yongqiang (2023-02-01). "Achieve the Minimum Width of Neural Networks for Universal Approximation". ICLR. arXiv:2209.11395.
24. Maiorov, Vitaly; Pinkus, Allan (April 1999). "Lower bounds for approximation by MLP neural networks". Neurocomputing. 25 (1–3): 81–91. doi:10.1016/S0925-2312(98)00111-8.
25. Guliyev, Namig; Ismailov, Vugar (November 2018). "Approximation capability of two hidden layer feedforward neural networks with fixed weights". Neurocomputing. 316: 262–269. arXiv:2101.09181. doi:10.1016/j.neucom.2018.07.075. S2CID 52285996.
26. Guliyev, Namig; Ismailov, Vugar (February 2018). "On the approximation by single hidden layer feedforward neural networks with fixed weights". Neural Networks. 98: 296–304. arXiv:1708.06219. doi:10.1016/j.neunet.2017.12.007. PMID 29301110. S2CID 4932839.
27. Baader, Maximilian; Mirman, Matthew; Vechev, Martin (2020). Universal Approximation with Certified Networks. ICLR.
28. Gelenbe, Erol; Mao, Zhi Hong; Li, Yan D. (1999). "Function approximation with spiked random networks". IEEE Transactions on Neural Networks. 10 (1): 3–9. doi:10.1109/72.737488. PMID 18252498.
29. Lin, Hongzhou; Jegelka, Stefanie (2018). ResNet with one-neuron hidden layers is a Universal Approximator. Advances in Neural Information Processing Systems. Vol. 30. Curran Associates. pp. 6169–6178.
30. Funahashi, Ken-Ichi (1989-01-01). "On the approximate realization of continuous mappings by neural networks". Neural Networks. 2 (3): 183–192. doi:10.1016/0893-6080(89)90003-8. ISSN 0893-6080.
31. Hornik, Kurt; Stinchcombe, Maxwell; White, Halbert (1989-01-01). "Multilayer feedforward networks are universal approximators". Neural Networks. 2 (5): 359–366. doi:10.1016/0893-6080(89)90020-8. ISSN 0893-6080. S2CID 2757547.
32. Haykin, Simon (1998). Neural Networks: A Comprehensive Foundation, Volume 2, Prentice Hall. ISBN 0-13-273350-1.
33. Hassoun, M. (1995) Fundamentals of Artificial Neural Networks MIT Press, p. 48
34. Hanin, B. (2018). Approximating Continuous Functions by ReLU Nets of Minimal Width. arXiv preprint arXiv:1710.11278.
35. Park, Sejun; Yun, Chulhee; Lee, Jaeho; Shin, Jinwoo (2020-09-28). "Minimum Width for Universal Approximation". ICLR. arXiv:2006.08859.
36. Shen, Zuowei; Yang, Haizhao; Zhang, Shijun (2022-01-01). "Optimal approximation rate of ReLU networks in terms of width and depth". Journal de Mathématiques Pures et Appliquées. 157: 101–135. arXiv:2103.00502. doi:10.1016/j.matpur.2021.07.009. ISSN 0021-7824. S2CID 232075797.
37. Lu, Jianfeng; Shen, Zuowei; Yang, Haizhao; Zhang, Shijun (2021-01-01). "Deep Network Approximation for Smooth Functions". SIAM Journal on Mathematical Analysis. 53 (5): 5465–5506. arXiv:2001.03040. doi:10.1137/20M134695X. ISSN 0036-1410. S2CID 210116459.
38. Juditsky, Anatoli B.; Lepski, Oleg V.; Tsybakov, Alexandre B. (2009-06-01). "Nonparametric estimation of composite functions". The Annals of Statistics. 37 (3). doi:10.1214/08-aos611. ISSN 0090-5364. S2CID 2471890.
39. Poggio, Tomaso; Mhaskar, Hrushikesh; Rosasco, Lorenzo; Miranda, Brando; Liao, Qianli (2017-03-14). "Why and when can deep-but not shallow-networks avoid the curse of dimensionality: A review". International Journal of Automation and Computing. 14 (5): 503–519. doi:10.1007/s11633-017-1054-2. ISSN 1476-8186. S2CID 15562587.
40. Johnson, Jesse (2019). Deep, Skinny Neural Networks are not Universal Approximators. International Conference on Learning Representations.
41. Xu, Keyulu; Hu, Weihua; Leskovec, Jure; Jegelka, Stefanie (2019). How Powerful are Graph Neural Networks?. International Conference on Learning Representations.
42. Brüel-Gabrielsson, Rickard (2020). Universal Function Approximation on Graphs. Advances in Neural Information Processing Systems. Vol. 33. Curran Associates.
Differentiable computing
General
• Differentiable programming
• Information geometry
• Statistical manifold
• Automatic differentiation
• Neuromorphic engineering
• Pattern recognition
• Tensor calculus
• Computational learning theory
• Inductive bias
Concepts
• Gradient descent
• SGD
• Clustering
• Regression
• Overfitting
• Hallucination
• Adversary
• Attention
• Convolution
• Loss functions
• Backpropagation
• Normalization (Batchnorm)
• Activation
• Softmax
• Sigmoid
• Rectifier
• Regularization
• Datasets
• Augmentation
• Diffusion
• Autoregression
Applications
• Machine learning
• In-context learning
• Artificial neural network
• Deep learning
• Scientific computing
• Artificial Intelligence
• Language model
• Large language model
Hardware
• IPU
• TPU
• VPU
• Memristor
• SpiNNaker
Software libraries
• TensorFlow
• PyTorch
• Keras
• Theano
• JAX
• Flux.jl
Implementations
Audio–visual
• AlexNet
• WaveNet
• Human image synthesis
• HWR
• OCR
• Speech synthesis
• Speech recognition
• Facial recognition
• AlphaFold
• DALL-E
• Midjourney
• Stable Diffusion
Verbal
• Word2vec
• Seq2seq
• BERT
• LaMDA
• Bard
• NMT
• Project Debater
• IBM Watson
• GPT-2
• GPT-3
• ChatGPT
• GPT-4
• GPT-J
• Chinchilla AI
• PaLM
• BLOOM
• LLaMA
Decisional
• AlphaGo
• AlphaZero
• Q-learning
• SARSA
• OpenAI Five
• Self-driving car
• MuZero
• Action selection
• Auto-GPT
• Robot control
People
• Yoshua Bengio
• Alex Graves
• Ian Goodfellow
• Stephen Grossberg
• Demis Hassabis
• Geoffrey Hinton
• Yann LeCun
• Fei-Fei Li
• Andrew Ng
• Jürgen Schmidhuber
• David Silver
Organizations
• Anthropic
• EleutherAI
• Google DeepMind
• Hugging Face
• OpenAI
• Meta AI
• Mila
• MIT CSAIL
Architectures
• Neural Turing machine
• Differentiable neural computer
• Transformer
• Recurrent neural network (RNN)
• Long short-term memory (LSTM)
• Gated recurrent unit (GRU)
• Echo state network
• Multilayer perceptron (MLP)
• Convolutional neural network
• Residual network
• Autoencoder
• Variational autoencoder (VAE)
• Generative adversarial network (GAN)
• Graph neural network
• Portals
• Computer programming
• Technology
• Categories
• Artificial neural networks
• Machine learning
| Wikipedia |
AIXI
AIXI ['ai̯k͡siː] is a theoretical mathematical formalism for artificial general intelligence. It combines Solomonoff induction with sequential decision theory. AIXI was first proposed by Marcus Hutter in 2000[1] and several results regarding AIXI are proved in Hutter's 2005 book Universal Artificial Intelligence.[2]
AIXI is a reinforcement learning (RL) agent. It maximizes the expected total rewards received from the environment. Intuitively, it simultaneously considers every computable hypothesis (or environment). In each time step, it looks at every possible program and evaluates how many rewards that program generates depending on the next action taken. The promised rewards are then weighted by the subjective belief that this program constitutes the true environment. This belief is computed from the length of the program: longer programs are considered less likely, in line with Occam's razor. AIXI then selects the action that has the highest expected total reward in the weighted sum of all these programs.
Definition
AIXI is a reinforcement learning agent that interacts with some stochastic and unknown but computable environment $\mu $. The interaction proceeds in time steps, from $t=1$ to $t=m$, where $m\in \mathbb {N} $ is the lifespan of the AIXI agent. At time step t, the agent chooses an action $a_{t}\in {\mathcal {A}}$ (e.g. a limb movement) and executes it in the environment, and the environment responds with a "percept" $e_{t}\in {\mathcal {E}}={\mathcal {O}}\times \mathbb {R} $, which consists of an "observation" $o_{t}\in {\mathcal {O}}$ (e.g., a camera image) and a reward $r_{t}\in \mathbb {R} $, distributed according to the conditional probability $\mu (o_{t}r_{t}|a_{1}o_{1}r_{1}...a_{t-1}o_{t-1}r_{t-1}a_{t})$, where $a_{1}o_{1}r_{1}...a_{t-1}o_{t-1}r_{t-1}a_{t}$ is the "history" of actions, observations and rewards. The environment $\mu $ is thus mathematically represented as a probability distribution over "percepts" (observations and rewards) which depend on the full history, so there is no Markov assumption (as opposed to other RL algorithms). Note again that this probability distribution is unknown to the AIXI agent. Furthermore, note again that $\mu $ is computable, that is, the observations and rewards received by the agent from the environment $\mu $ can be computed by some program (which runs on a Turing machine), given the past actions of the AIXI agent.[3]
The only goal of the AIXI agent is to maximise $\sum _{t=1}^{m}r_{t}$, that is, the sum of rewards from time step 1 to m.
The AIXI agent is associated with a stochastic policy $\pi :({\mathcal {A}}\times {\mathcal {E}})^{*}\rightarrow {\mathcal {A}}$, which is the function it uses to choose actions at every time step, where ${\mathcal {A}}$ is the space of all possible actions that AIXI can take and ${\mathcal {E}}$ is the space of all possible "percepts" that can be produced by the environment. The environment (or probability distribution) $\mu $ can also be thought of as a stochastic policy (which is a function): $\mu :({\mathcal {A}}\times {\mathcal {E}})^{*}\times {\mathcal {A}}\rightarrow {\mathcal {E}}$, where the $*$ is the Kleene star operation.
In general, at time step $t$ (which ranges from 1 to m), AIXI, having previously executed actions $a_{1}\dots a_{t-1}$ (which is often abbreviated in the literature as $a_{<t}$) and having observed the history of percepts $o_{1}r_{1}...o_{t-1}r_{t-1}$ (which can be abbreviated as $e_{<t}$), chooses and executes in the environment the action $a_{t}$, defined as follows:[4]
$a_{t}:=\arg \max _{a_{t}}\sum _{o_{t}r_{t}}\ldots \max _{a_{m}}\sum _{o_{m}r_{m}}[r_{t}+\ldots +r_{m}]\sum _{q:\;U(q,a_{1}\ldots a_{m})=o_{1}r_{1}\ldots o_{m}r_{m}}2^{-{\textrm {length}}(q)}$
or, using parentheses, to disambiguate the precedences
$a_{t}:=\arg \max _{a_{t}}\left(\sum _{o_{t}r_{t}}\ldots \left(\max _{a_{m}}\sum _{o_{m}r_{m}}[r_{t}+\ldots +r_{m}]\left(\sum _{q:\;U(q,a_{1}\ldots a_{m})=o_{1}r_{1}\ldots o_{m}r_{m}}2^{-{\textrm {length}}(q)}\right)\right)\right)$
Intuitively, in the definition above, AIXI considers the sum of the total reward over all possible "futures" up to $m-t$ time steps ahead (that is, from $t$ to $m$), weighs each of them by the complexity of programs $q$ (that is, by $2^{-{\textrm {length}}(q)}$) consistent with the agent's past (that is, the previously executed actions, $a_{<t}$, and received percepts, $e_{<t}$) that can generate that future, and then picks the action that maximises expected future rewards.[3]
Let us break this definition down in order to attempt to fully understand it.
$o_{t}r_{t}$ is the "percept" (which consists of the observation $o_{t}$ and reward $r_{t}$) received by the AIXI agent at time step $t$ from the environment (which is unknown and stochastic). Similarly, $o_{m}r_{m}$ is the percept received by AIXI at time step $m$ (the last time step where AIXI is active).
$r_{t}+\ldots +r_{m}$ is the sum of rewards from time step $t$ to time step $m$, so AIXI needs to look into the future to choose its action at time step $t$.
$U$ denotes a monotone universal Turing machine, and $q$ ranges over all (deterministic) programs on the universal machine $U$, which receives as input the program $q$ and the sequence of actions $a_{1}\dots a_{m}$ (that is, all actions), and produces the sequence of percepts $o_{1}r_{1}\ldots o_{m}r_{m}$. The universal Turing machine $U$ is thus used to "simulate" or compute the environment responses or percepts, given the program $q$ (which "models" the environment) and all actions of the AIXI agent: in this sense, the environment is "computable" (as stated above). Note that, in general, the program which "models" the current and actual environment (where AIXI needs to act) is unknown because the current environment is also unknown.
${\textrm {length}}(q)$ is the length of the program $q$ (which is encoded as a string of bits). Note that $2^{-{\textrm {length}}(q)}={\frac {1}{2^{{\textrm {length}}(q)}}}$. Hence, in the definition above, $\sum _{q:\;U(q,a_{1}\ldots a_{m})=o_{1}r_{1}\ldots o_{m}r_{m}}2^{-{\textrm {length}}(q)}$ should be interpreted as a mixture (in this case, a sum) over all computable environments (which are consistent with the agent's past), each weighted by its complexity $2^{-{\textrm {length}}(q)}$. Note that $a_{1}\ldots a_{m}$ can also be written as $a_{1}\ldots a_{t-1}a_{t}\ldots a_{m}$, and $a_{1}\ldots a_{t-1}=a_{<t}$ is the sequence of actions already executed in the environment by the AIXI agent. Similarly, $o_{1}r_{1}\ldots o_{m}r_{m}=o_{1}r_{1}\ldots o_{t-1}r_{t-1}o_{t}r_{t}\ldots o_{m}r_{m}$, and $o_{1}r_{1}\ldots o_{t-1}r_{t-1}$ is the sequence of percepts produced by the environment so far.
Let us now put all these components together in order to understand this equation or definition.
At time step t, AIXI chooses the action $a_{t}$ where the function $\sum _{o_{t}r_{t}}\ldots \max _{a_{m}}\sum _{o_{m}r_{m}}[r_{t}+\ldots +r_{m}]\sum _{q:\;U(q,a_{1}\ldots a_{m})=o_{1}r_{1}\ldots o_{m}r_{m}}2^{-{\textrm {length}}(q)}$ attains its maximum.
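Since AIXI is incomputable, the definition can only be illustrated at toy scale. The Python sketch below carries out the finite-horizon expectimax above over an explicit, finite hypothesis class of deterministic environment "programs", each carrying an assumed description length l and mixture weight $2^{-l}$, and dropped as soon as it contradicts the observed history. Everything here (the environment representation, the lengths, the tiny hypothesis class) is an assumption made for illustration, not part of Hutter's formalism.

```python
# Toy expectimax in the spirit of the AIXI action rule. An "environment
# program" is a deterministic function (history, action) -> (obs, reward);
# history is a list of (action, obs, reward) triples. Assumes at least
# one hypothesis always stays consistent with the history.

def consistent(env, history):
    return all(env(history[:i], a) == (o, r)
               for i, (a, o, r) in enumerate(history))

def expected_value(history, action, steps_left, hyps, actions):
    live = [(e, l) for e, l in hyps if consistent(e, history)]
    Z = sum(2.0 ** -l for _, l in live)           # normalizer of 2**-l weights
    total = 0.0
    for env, l in live:
        obs, r = env(history, action)
        future = 0.0
        if steps_left > 1:                        # max over the next action
            h2 = history + [(action, obs, r)]
            future = max(expected_value(h2, a2, steps_left - 1, live, actions)
                         for a2 in actions)
        total += (2.0 ** -l / Z) * (r + future)
    return total

def aixi_action(history, steps_left, hyps, actions):
    return max(actions, key=lambda a:
               expected_value(history, a, steps_left, hyps, actions))

# Two toy "programs": env_a rewards action 1, env_b rewards action 0.
env_a = lambda h, a: ("o", 1.0 if a == 1 else 0.0)
env_b = lambda h, a: ("o", 1.0 if a == 0 else 0.0)
hyps = [(env_a, 1), (env_b, 3)]                   # env_a is shorter, so likelier
print(aixi_action([], 2, hyps, [0, 1]))           # 1: favors the simpler model
```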
Parameters
The parameters to AIXI are the universal Turing machine U and the agent's lifetime m, which need to be chosen. The latter parameter can be removed by the use of discounting.
The meaning of the word AIXI
According to Hutter, the word "AIXI" can have several interpretations. AIXI can stand for AI based on Solomonoff's distribution, denoted by $\xi $ (which is the Greek letter xi), or e.g. it can stand for AI "crossed" (X) with induction (I). There are other interpretations.
Optimality
AIXI's performance is measured by the expected total reward it receives. AIXI has been proven to be optimal in the following ways.[2]
• Pareto optimality: there is no other agent that performs at least as well as AIXI in all environments while performing strictly better in at least one environment.
• Balanced Pareto optimality: like Pareto optimality, but considering a weighted sum of environments.
• Self-optimizing: a policy p is called self-optimizing for an environment $\mu $ if the performance of p approaches the theoretical maximum for $\mu $ when the length of the agent's lifetime (not time) goes to infinity. For environment classes where self-optimizing policies exist, AIXI is self-optimizing.
It was later shown by Hutter and Jan Leike that balanced Pareto optimality is subjective and that any policy can be considered Pareto optimal, which they describe as undermining all previous optimality claims for AIXI.[5]
However, AIXI does have limitations. It is restricted to maximizing rewards based on percepts as opposed to external states. It also assumes it interacts with the environment solely through action and percept channels, preventing it from considering the possibility of being damaged or modified. Colloquially, this means that it doesn't consider itself to be contained by the environment it interacts with. It also assumes the environment is computable.[6]
Computational aspects
Like Solomonoff induction, AIXI is incomputable. However, there are computable approximations of it. One such approximation is AIXItl, which performs at least as well as the provably best time t and space l limited agent.[2] Another approximation to AIXI with a restricted environment class is MC-AIXI (FAC-CTW) (which stands for Monte Carlo AIXI FAC-Context-Tree Weighting), which has had some success playing simple games such as partially observable Pac-Man.[3][7]
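The Monte Carlo idea behind such approximations can be illustrated in a drastically simplified form (this is not the actual MC-AIXI (FAC-CTW) algorithm, which uses context-tree weighting and UCT search): sample environment programs from the weighted mixture and average the returns of random rollouts. The sketch below reuses the hypothetical MODELS list and consistent helper from the earlier sketch.

import random

def mc_action_value(a, past_actions, past_percepts, horizon, samples=1000):
    # Crude Monte Carlo estimate: sample a consistent program in proportion
    # to its weight 2**-length, then roll out a uniformly random future.
    # (The estimate is normalised by the total weight, which is the same
    # for every action and so does not change the argmax.)
    candidates = [(env, 2.0 ** -length) for env, length in MODELS
                  if consistent(env, past_actions, past_percepts)]
    if not candidates:
        return 0.0
    envs, weights = zip(*candidates)
    total = 0.0
    for _ in range(samples):
        env = random.choices(envs, weights=weights)[0]
        actions, ret = past_actions + [a], 0.0
        for _ in range(horizon):
            ret += env(actions)[1]                  # collect the reward
            actions.append(random.choice((0, 1)))  # random rollout policy
        total += ret
    return total / samples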
See also
• Gödel machine
References
1. Marcus Hutter (2000). A Theory of Universal Artificial Intelligence based on Algorithmic Complexity. arXiv:cs.AI/0004001. Bibcode:2000cs........4001H.
2. — (2005). Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability. Texts in Theoretical Computer Science. An EATCS Series. Springer. doi:10.1007/b138233. ISBN 978-3-540-22139-5. S2CID 33352850.
3. Veness, Joel; Kee Siong Ng; Hutter, Marcus; Uther, William; Silver, David (2009). "A Monte Carlo AIXI Approximation". arXiv:0909.0801 [cs.AI].
4. Universal Artificial Intelligence
5. Leike, Jan; Hutter, Marcus (2015). Bad Universal Priors and Notions of Optimality (PDF). Proceedings of the 28th Conference on Learning Theory.
6. Soares, Nate. "Formalizing Two Problems of Realistic World-Models" (PDF). Intelligence.org. Retrieved 2015-07-19.
7. Playing Pacman using AIXI Approximation – YouTube
• "Universal Algorithmic Intelligence: A mathematical top->down approach", Marcus Hutter, arXiv:cs/0701125; also in Artificial General Intelligence, eds. B. Goertzel and C. Pennachin, Springer, 2007, ISBN 9783540237334, pp. 227–290, doi:10.1007/978-3-540-68677-4_8.
Universal bundle
In mathematics, the universal bundle in the theory of fiber bundles with structure group a given topological group G is a specific bundle over a classifying space BG, such that every bundle with the given structure group G over M is a pullback by means of a continuous map M → BG.
Existence of a universal bundle
In the CW complex category
When the definition of the classifying space takes place within the homotopy category of CW complexes, existence theorems for universal bundles arise from Brown's representability theorem.
For compact Lie groups
We will first prove:
Proposition. Let G be a compact Lie group. There exists a contractible space EG on which G acts freely. The projection EG → BG is a G-principal fibre bundle.
Proof. There exists an injection of G into a unitary group U(n) for n big enough.[1] If we find EU(n), then we can take EG to be EU(n), since the subgroup G also acts freely on it. The construction of EU(n) is given in classifying space for U(n).
The following Theorem is a corollary of the above Proposition.
Theorem. If M is a paracompact manifold and P → M is a principal G-bundle, then there exists a map f : M → BG, unique up to homotopy, such that P is isomorphic to f ∗(EG), the pull-back of the G-bundle EG → BG by f.
Proof. On one hand, the pull-back of the bundle π : EG → BG by the natural projection P ×G EG → BG is the bundle P × EG. On the other hand, the pull-back of the principal G-bundle P → M by the projection p : P ×G EG → M is also P × EG:
${\begin{array}{rcccl}P&\to &P\times EG&\to &EG\\\downarrow &&\downarrow &&\downarrow \pi \\M&\to _{\!\!\!\!\!\!\!s}&P\times _{G}EG&\to &BG\end{array}}$
Since p is a fibration with contractible fibre EG, sections of p exist.[2] To such a section s we associate the composition of s with the projection P ×G EG → BG. The map we get is the f we were looking for.
For the uniqueness up to homotopy, notice that there exists a one-to-one correspondence between maps f : M → BG such that f ∗(EG) → M is isomorphic to P → M and sections of p. We have just seen how to associate an f to a section. Inversely, assume that f is given. Let Φ : f ∗(EG) → P be an isomorphism:
$\Phi :\left\{(x,u)\in M\times EG\ :\ f(x)=\pi (u)\right\}\to P$
Now, simply define a section by
${\begin{cases}M\to P\times _{G}EG\\x\mapsto \lbrack \Phi (x,u),u\rbrack \end{cases}}$ where $u\in EG$ is any element with $\pi (u)=f(x)$; the class $\lbrack \Phi (x,u),u\rbrack $ does not depend on the choice of $u$.
Because all sections of p are homotopic, the homotopy class of f is unique.
Use in the study of group actions
The total space of a universal bundle is usually written EG. These spaces are of interest in their own right, despite typically being contractible. For example, they are used in defining the homotopy quotient or homotopy orbit space of a group action of G, in cases where the orbit space is pathological (in the sense of being a non-Hausdorff space, for example). The idea, if G acts on the space X, is to consider instead the action on Y = X × EG, and the corresponding quotient. See equivariant cohomology for more detailed discussion.
If EG is contractible then X and Y are homotopy equivalent spaces. But the diagonal action on Y, i.e. where G acts on both X and EG coordinates, may be well-behaved when the action on X is not.
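Concretely, the homotopy quotient mentioned above (also called the Borel construction) is the quotient of $X\times EG$ by the diagonal action:

$X_{hG}:=(X\times EG)/G=X\times _{G}EG,\qquad g\cdot (x,e)=(g\cdot x,g\cdot e).$

Since the projection $X\times EG\to X$ is a homotopy equivalence, $X_{hG}$ serves as a well-behaved substitute for the orbit space $X/G$ when the action is not free.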
See also: equivariant cohomology § Homotopy quotient
Examples
• Classifying space for U(n)
See also
• Chern class
• tautological bundle, a universal bundle for the general linear group.
External links
• PlanetMath page of universal bundle examples
Notes
1. J. J. Duistermaat and J. A. Kolk, Lie Groups, Universitext, Springer. Corollary 4.6.5.
2. A. Dold, "Partitions of Unity in the Theory of Fibrations", Annals of Mathematics, vol. 78, no. 2 (1963).
Turing machine
A Turing machine is a mathematical model of computation describing an abstract machine[1] that manipulates symbols on a strip of tape according to a table of rules.[2] Despite the model's simplicity, it is capable of implementing any computer algorithm.[3]
The machine operates on an infinite[4] memory tape divided into discrete cells,[5] each of which can hold a single symbol drawn from a finite set of symbols called the alphabet of the machine. It has a "head" that, at any point in the machine's operation, is positioned over one of these cells, and a "state" selected from a finite set of states. At each step of its operation, the head reads the symbol in its cell. Then, based on the symbol and the machine's own present state, the machine writes a symbol into the same cell, and moves the head one step to the left or the right,[6] or halts the computation. The choice of which replacement symbol to write, which direction to move the head, and whether to halt is based on a finite table that specifies what to do for each combination of the current state and the symbol that is read. Like a real computer program, it is possible for a Turing machine to go into an infinite loop which will never halt.
The Turing machine was invented in 1936 by Alan Turing,[7][8] who called it an "a-machine" (automatic machine).[9] It was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review.[10] With this model, Turing was able to answer two questions in the negative:
• Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)?
• Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol?[11][12]
Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem ('decision problem').[13]
Turing machines proved the existence of fundamental limitations on the power of mechanical computation.[14] While they can express arbitrary computations, their minimalist design makes them too slow for computation in practice: real-world computers are based on different designs that, unlike Turing machines, use random-access memory.
Turing completeness is the ability for a computational model or a system of instructions to simulate a Turing machine. A programming language that is Turing complete is theoretically capable of expressing all tasks accomplishable by computers; nearly all programming languages are Turing complete if the limitations of finite memory are ignored.
Overview
A Turing machine is an idealized model of a central processing unit (CPU) that controls all data manipulation done by a computer, with the canonical machine using sequential memory to store data. Typically, the sequential memory is represented as a tape of infinite length on which the machine can perform read and write operations.
In the context of formal language theory, a Turing machine (automaton) is capable of enumerating some arbitrary subset of valid strings of an alphabet. A set of strings which can be enumerated in this manner is called a recursively enumerable language. The Turing machine can equivalently be defined as a model that recognizes valid input strings, rather than enumerating output strings.
Given a Turing machine M and an arbitrary string s, it is generally not possible to decide whether M will eventually produce s. This is due to the fact that the halting problem is unsolvable, which has major implications for the theoretical limits of computing.
The Turing machine is capable of processing an unrestricted grammar, which further implies that it is capable of evaluating first-order logic in an unbounded number of ways. This is famously demonstrated through lambda calculus.
A Turing machine that is able to simulate any other Turing machine is called a universal Turing machine (UTM, or simply a universal machine). Another mathematical formalism, lambda calculus, with a similar "universal" nature was introduced by Alonzo Church. Church's work intertwined with Turing's to form the basis for the Church-Turing thesis. This thesis states that Turing machines, lambda calculus, and other similar formalisms of computation do indeed capture the informal notion of effective methods in logic and mathematics and thus provide a model through which one can reason about an algorithm or "mechanical procedure" in a mathematically precise way without being tied to any particular formalism. Studying the abstract properties of Turing machines has yielded many insights into computer science, computability theory, and complexity theory.
Physical description
In his 1948 essay, "Intelligent Machinery", Turing wrote that his machine consisted of:
...an unlimited memory capacity obtained in the form of an infinite tape marked out into squares, on each of which a symbol could be printed. At any moment there is one symbol in the machine; it is called the scanned symbol. The machine can alter the scanned symbol, and its behavior is in part determined by that symbol, but the symbols on the tape elsewhere do not affect the behavior of the machine. However, the tape can be moved back and forth through the machine, this being one of the elementary operations of the machine. Any symbol on the tape may therefore eventually have an innings.[15]
— Turing 1948, p. 3[16]
Description
For visualizations of Turing machines, see Turing machine gallery.
The Turing machine mathematically models a machine that mechanically operates on a tape. On this tape are symbols, which the machine can read and write, one at a time, using a tape head. Operation is fully determined by a finite set of elementary instructions such as "in state 42, if the symbol seen is 0, write a 1; if the symbol seen is 1, change into state 17; in state 17, if the symbol seen is 0, write a 1 and change to state 6;" etc. In the original article ("On Computable Numbers, with an Application to the Entscheidungsproblem", see also references below), Turing imagines not a mechanism, but a person whom he calls the "computer", who executes these deterministic mechanical rules slavishly (or as Turing puts it, "in a desultory manner").
More explicitly, a Turing machine consists of:
• A tape divided into cells, one next to the other. Each cell contains a symbol from some finite alphabet. The alphabet contains a special blank symbol (here written as '0') and one or more other symbols. The tape is assumed to be arbitrarily extendable to the left and to the right, so that the Turing machine is always supplied with as much tape as it needs for its computation. Cells that have not been written before are assumed to be filled with the blank symbol. In some models the tape has a left end marked with a special symbol; the tape extends or is indefinitely extensible to the right.
• A head that can read and write symbols on the tape and move the tape left and right one (and only one) cell at a time. In some models the head moves and the tape is stationary.
• A state register that stores the state of the Turing machine, one of finitely many. Among these is the special start state with which the state register is initialized. These states, writes Turing, replace the "state of mind" a person performing computations would ordinarily be in.
• A finite table[17] of instructions[18] that, given the state (qi) the machine is currently in and the symbol (aj) it is reading on the tape (the symbol currently under the head), tells the machine to do the following in sequence (for the 5-tuple models):
1. Either erase or write a symbol (replacing aj with aj1).
2. Move the head (which is described by dk and can have values: 'L' for one step left or 'R' for one step right or 'N' for staying in the same place).
3. Assume the same or a new state as prescribed (go to state qi1).
In the 4-tuple models, erasing or writing a symbol (aj1) and moving the head left or right (dk) are specified as separate instructions. The table tells the machine to (ia) erase or write a symbol or (ib) move the head left or right, and then (ii) assume the same or a new state as prescribed, but not both actions (ia) and (ib) in the same instruction. In some models, if there is no entry in the table for the current combination of symbol and state, then the machine will halt; other models require all entries to be filled.
Every part of the machine (i.e. its state, symbol-collections, and used tape at any given time) and its actions (such as printing, erasing and tape motion) is finite, discrete and distinguishable; it is the unlimited amount of tape and runtime that gives it an unbounded amount of storage space.
Formal definition
Following Hopcroft & Ullman (1979, p. 148), a (one-tape) Turing machine can be formally defined as a 7-tuple $M=\langle Q,\Gamma ,b,\Sigma ,\delta ,q_{0},F\rangle $ where
• $\Gamma $ is a finite, non-empty set of tape alphabet symbols;
• $b\in \Gamma $ is the blank symbol (the only symbol allowed to occur on the tape infinitely often at any step during the computation);
• $\Sigma \subseteq \Gamma \setminus \{b\}$ is the set of input symbols, that is, the set of symbols allowed to appear in the initial tape contents;
• $Q$ is a finite, non-empty set of states;
• $q_{0}\in Q$ is the initial state;
• $F\subseteq Q$ is the set of final states or accepting states. The initial tape contents is said to be accepted by $M$ if it eventually halts in a state from $F$.
• $\delta :(Q\setminus F)\times \Gamma \not \to Q\times \Gamma \times \{L,R\}$ is a partial function called the transition function, where L is left shift, R is right shift. If $\delta $ is not defined on the current state and the current tape symbol, then the machine halts;[19] intuitively, the transition function specifies, given the current state and the symbol under the head, the next state, the symbol to write over the scanned symbol, and the direction in which to move the head.
A relatively uncommon variant allows "no shift", say N, as a third element of the set of directions $\{L,R\}$.
The 7-tuple for the 3-state busy beaver looks like this (see more about this busy beaver at Turing machine examples):
• $Q=\{{\mbox{A}},{\mbox{B}},{\mbox{C}},{\mbox{HALT}}\}$ (states);
• $\Gamma =\{0,1\}$ (tape alphabet symbols);
• $b=0$ (blank symbol);
• $\Sigma =\{1\}$ (input symbols);
• $q_{0}={\mbox{A}}$ (initial state);
• $F=\{{\mbox{HALT}}\}$ (final states);
• $\delta =$ see state-table below (transition function).
Initially all tape cells are marked with $0$.
State table for 3-state, 2-symbol busy beaver
Tape symbol | Current state A | Current state B | Current state C
(each entry: write symbol, move tape, next state)
0 | 1, R, B | 1, L, A | 1, L, B
1 | 1, L, C | 1, R, B | 1, R, HALT
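The 7-tuple can be realized directly in code. Below is a minimal Python sketch (an illustration, not a canonical implementation): the transition table is the busy beaver table just given, the tape is a dictionary that reads as blank (0) wherever nothing has been written, and the "Move tape" column is interpreted as a head movement (R = head one cell to the right).

DELTA = {  # (state, scanned symbol) -> (write, head move, next state)
    ("A", 0): (1, +1, "B"), ("A", 1): (1, -1, "C"),
    ("B", 0): (1, -1, "A"), ("B", 1): (1, +1, "B"),
    ("C", 0): (1, -1, "B"), ("C", 1): (1, +1, "HALT"),
}

def run(delta, start="A", final=("HALT",), blank=0, max_steps=10000):
    # Q and Gamma live implicitly in delta; b = blank, q0 = start, F = final.
    tape, head, state, steps = {}, 0, start, 0
    while state not in final and steps < max_steps:
        symbol = tape.get(head, blank)
        write, move, state = delta[(state, symbol)]
        tape[head] = write
        head += move
        steps += 1
    return tape, head, state, steps

tape, head, state, steps = run(DELTA)
print(steps, sum(tape.values()))  # halts after 13 steps with six 1s written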
Additional details required to visualize or implement Turing machines
In the words of van Emde Boas (1990), p. 6: "The set-theoretical object [his formal seven-tuple description similar to the above] provides only partial information on how the machine will behave and what its computations will look like."
For instance,
• There will need to be many decisions on what the symbols actually look like, and a failproof way of reading and writing symbols indefinitely.
• The shift left and shift right operations may shift the tape head across the tape, but when actually building a Turing machine it is more practical to make the tape slide back and forth under the head instead.
• The tape can be finite, and automatically extended with blanks as needed (which is closest to the mathematical definition), but it is more common to think of it as stretching infinitely at one or both ends and being pre-filled with blanks except on the explicitly given finite fragment the tape head is on. (This is, of course, not implementable in practice.) The tape cannot be fixed in length, since that would not correspond to the given definition and would seriously limit the range of computations the machine can perform: to those of a linear bounded automaton if the tape length is proportional to the input size, or to those of a finite-state machine if the tape length is strictly fixed.
Alternative definitions
Definitions in literature sometimes differ slightly, to make arguments or proofs easier or clearer, but this is always done in such a way that the resulting machine has the same computational power. For example, the set could be changed from $\{L,R\}$ to $\{L,R,N\}$, where N ("None" or "No-operation") would allow the machine to stay on the same tape cell instead of moving left or right. This would not increase the machine's computational power.
The most common convention represents each "Turing instruction" in a "Turing table" by one of nine 5-tuples, per the convention of Turing/Davis (Turing (1936) in The Undecidable, p. 126–127 and Davis (2000) p. 152):
(definition 1): (qi, Sj, Sk/E/N, L/R/N, qm)
( current state qi , symbol scanned Sj , print symbol Sk/erase E/none N , move_tape_one_square left L/right R/none N , new state qm )
Other authors (Minsky (1967) p. 119, Hopcroft and Ullman (1979) p. 158, Stone (1972) p. 9) adopt a different convention, with new state qm listed immediately after the scanned symbol Sj:
(definition 2): (qi, Sj, qm, Sk/E/N, L/R/N)
( current state qi , symbol scanned Sj , new state qm , print symbol Sk/erase E/none N , move_tape_one_square left L/right R/none N )
For the remainder of this article "definition 1" (the Turing/Davis convention) will be used.
Example: state table for the 3-state 2-symbol busy beaver reduced to 5-tuples
Current state Scanned symbol Print symbol Move tape Final (i.e. next) state 5-tuples
A 0 1 R B (A, 0, 1, R, B)
A 1 1 L C (A, 1, 1, L, C)
B 0 1 L A (B, 0, 1, L, A)
B 1 1 R B (B, 1, 1, R, B)
C 0 1 L B (C, 0, 1, L, B)
C 1 1 N H (C, 1, 1, N, H)
In the following table, Turing's original model allowed only the first three lines that he called N1, N2, N3 (cf. Turing in The Undecidable, p. 126). He allowed for erasure of the "scanned square" by naming a 0th symbol S0 = "erase" or "blank", etc. However, he did not allow for non-printing, so every instruction-line includes "print symbol Sk" or "erase" (cf. footnote 12 in Post (1947), The Undecidable, p. 300). The abbreviations are Turing's (The Undecidable, p. 119). Subsequent to Turing's original paper in 1936–1937, machine-models have allowed all nine possible types of five-tuples:
Current m-configuration (Turing state)  Tape symbol  Print-operation  Tape-motion  Final m-configuration (Turing state)  5-tuple  5-tuple comments  4-tuple
N1 qi Sj Print(Sk) Left L qm (qi, Sj, Sk, L, qm) "blank" = S0, 1=S1, etc.
N2 qi Sj Print(Sk) Right R qm (qi, Sj, Sk, R, qm) "blank" = S0, 1=S1, etc.
N3 qi Sj Print(Sk) None N qm (qi, Sj, Sk, N, qm) "blank" = S0, 1=S1, etc. (qi, Sj, Sk, qm)
4 qi Sj None N Left L qm (qi, Sj, N, L, qm) (qi, Sj, L, qm)
5 qi Sj None N Right R qm (qi, Sj, N, R, qm) (qi, Sj, R, qm)
6 qi Sj None N None N qm (qi, Sj, N, N, qm) Direct "jump" (qi, Sj, N, qm)
7 qi Sj Erase Left L qm (qi, Sj, E, L, qm)
8 qi Sj Erase Right R qm (qi, Sj, E, R, qm)
9 qi Sj Erase None N qm (qi, Sj, E, N, qm) (qi, Sj, E, qm)
Any Turing table (list of instructions) can be constructed from the above nine 5-tuples. For technical reasons, the three non-printing or "N" instructions (4, 5, 6) can usually be dispensed with. For examples see Turing machine examples.
Less frequently the use of 4-tuples are encountered: these represent a further atomization of the Turing instructions (cf. Post (1947), Boolos & Jeffrey (1974, 1999), Davis-Sigal-Weyuker (1994)); also see more at Post–Turing machine.
The "state"
The word "state" used in context of Turing machines can be a source of confusion, as it can mean two things. Most commentators after Turing have used "state" to mean the name/designator of the current instruction to be performed—i.e. the contents of the state register. But Turing (1936) made a strong distinction between a record of what he called the machine's "m-configuration", and the machine's (or person's) "state of progress" through the computation—the current state of the total system. What Turing called "the state formula" includes both the current instruction and all the symbols on the tape:
Thus the state of progress of the computation at any stage is completely determined by the note of instructions and the symbols on the tape. That is, the state of the system may be described by a single expression (sequence of symbols) consisting of the symbols on the tape followed by Δ (which is supposed not to appear elsewhere) and then by the note of instructions. This expression is called the "state formula".
— The Undecidable, pp. 139–140, emphasis added
Earlier in his paper Turing carried this even further: he gives an example where he placed a symbol of the current "m-configuration"—the instruction's label—beneath the scanned square, together with all the symbols on the tape (The Undecidable, p. 121); this he calls "the complete configuration" (The Undecidable, p. 118). To print the "complete configuration" on one line, he places the state-label/m-configuration to the left of the scanned symbol.
A variant of this is seen in Kleene (1952) where Kleene shows how to write the Gödel number of a machine's "situation": he places the "m-configuration" symbol q4 over the scanned square in roughly the center of the 6 non-blank squares on the tape (see the Turing-tape figure in this article) and puts it to the right of the scanned square. But Kleene refers to "q4" itself as "the machine state" (Kleene, p. 374–375). Hopcroft and Ullman call this composite the "instantaneous description" and follow the Turing convention of putting the "current state" (instruction-label, m-configuration) to the left of the scanned symbol (p. 149), that is, the instantaneous description is the composite of non-blank symbols to the left, state of the machine, the current symbol scanned by the head, and the non-blank symbols to the right.
Example: total state of 3-state 2-symbol busy beaver after 3 "moves" (taken from example "run" in the figure below):
1A1
This means: after three moves the tape has ... 000110000 ... on it, the head is scanning the right-most 1, and the state is A. Blanks (in this case represented by "0"s) can be part of the total state as shown here: B01; the tape has a single 1 on it, but the head is scanning the 0 ("blank") to its left and the state is B.
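Such composite descriptions are easy to produce mechanically. The following hypothetical helper (reusing the dictionary tape representation from the sketch under the formal definition) follows the convention of writing the state immediately to the left of the scanned symbol and trimming blank cells at both ends; it reproduces both examples above.

def configuration(tape, head, state, blank=0):
    # Non-blank symbols left of the head, then the state, then the scanned
    # symbol and the non-blank symbols to its right.
    cells = set(tape) | {head}
    lo, hi = min(cells), max(cells)
    left = "".join(str(tape.get(i, blank)) for i in range(lo, head))
    right = "".join(str(tape.get(i, blank)) for i in range(head, hi + 1))
    return left.lstrip(str(blank)) + state + (right.rstrip(str(blank)) or str(blank))

print(configuration({3: 1, 4: 1}, 4, "A"))  # -> 1A1 (head on the right-most 1)
print(configuration({1: 1}, 0, "B"))        # -> B01 (head on the blank left of the 1)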
"State" in the context of Turing machines should be clarified as to which is being described: the current instruction, or the list of symbols on the tape together with the current instruction, or the list of symbols on the tape together with the current instruction placed to the left of the scanned symbol or to the right of the scanned symbol.
Turing's biographer Andrew Hodges (1983: 107) has noted and discussed this confusion.
"State" diagrams
The table for the 3-state busy beaver ("P" = print/write a "1")
Tape symbol | Current state A | Current state B | Current state C
(each entry: write symbol, move tape, next state)
0 | P, R, B | P, L, A | P, L, B
1 | P, L, C | P, R, B | P, R, HALT
To the right: the above table as expressed as a "state transition" diagram.
Usually large tables are better left as tables (Booth, p. 74). They are more readily simulated by computer in tabular form (Booth, p. 74). However, certain concepts—e.g. machines with "reset" states and machines with repeating patterns (cf. Hill and Peterson p. 244ff)—can be more readily seen when viewed as a drawing.
Whether a drawing represents an improvement on its table must be decided by the reader for the particular context.
The reader should again be cautioned that such diagrams represent a snapshot of their table frozen in time, not the course ("trajectory") of a computation through time and space. While every time the busy beaver machine "runs" it will always follow the same state-trajectory, this is not true for the "copy" machine that can be provided with variable input "parameters".
The diagram "progress of the computation" shows the three-state busy beaver's "state" (instruction) progress through its computation from start to finish. On the far right is the Turing "complete configuration" (Kleene "situation", Hopcroft–Ullman "instantaneous description") at each step. If the machine were to be stopped and cleared to blank both the "state register" and entire tape, these "configurations" could be used to rekindle a computation anywhere in its progress (cf. Turing (1936) The Undecidable, pp. 139–140).
Equivalent models
See also: Turing machine equivalents, Register machine, and Post–Turing machine
Many machines that might be thought to have more computational capability than a simple universal Turing machine can be shown to have no more power (Hopcroft and Ullman p. 159, cf. Minsky (1967)). They might compute faster, perhaps, or use less memory, or their instruction set might be smaller, but they cannot compute more powerfully (i.e. more mathematical functions). (The Church–Turing thesis hypothesizes this to be true for any kind of machine: that anything that can be "computed" can be computed by some Turing machine.)
A Turing machine is equivalent to a single-stack pushdown automaton (PDA) that has been made more flexible and concise by relaxing the last-in-first-out (LIFO) requirement of its stack. In addition, a Turing machine is also equivalent to a two-stack PDA with standard LIFO semantics, by using one stack to model the tape left of the head and the other stack for the tape to the right.
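The tape half of that two-stack encoding can be sketched as follows (an illustrative fragment, not a full PDA construction): one stack holds the cells to the left of the head with the nearest cell on top, the other holds the scanned cell and everything to its right, so every head movement is a single push and pop.

class TwoStackTape:
    # Turing-machine tape encoded as two LIFO stacks, as in the two-stack
    # PDA equivalence: left = cells left of the head (nearest on top),
    # right = scanned cell and the cells to its right.
    def __init__(self, blank=0):
        self.left, self.right, self.blank = [], [], blank

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_left(self):   # new scanned cell comes off the left stack
        self.right.append(self.left.pop() if self.left else self.blank)

    def move_right(self):  # old scanned cell goes onto the left stack
        self.left.append(self.right.pop() if self.right else self.blank)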
At the other extreme, some very simple models turn out to be Turing-equivalent, i.e. to have the same computational power as the Turing machine model.
Common equivalent models are the multi-tape Turing machine, multi-track Turing machine, machines with input and output, and the non-deterministic Turing machine (NDTM) as opposed to the deterministic Turing machine (DTM) for which the action table has at most one entry for each combination of symbol and state.
Read-only, right-moving Turing machines are equivalent to DFAs (as well as NFAs by conversion using the NDFA to DFA conversion algorithm).
For practical and didactical intentions the equivalent register machine can be used as a usual assembly programming language.
A relevant question is whether or not the computation model represented by concrete programming languages is Turing equivalent. While the computation of a real computer is based on finite states and thus not capable of simulating a Turing machine, programming languages themselves do not necessarily have this limitation. Kirner et al., 2009 have shown that among the general-purpose programming languages some are Turing complete while others are not. For example, ANSI C is not Turing-equivalent, as all instantiations of ANSI C (different instantiations are possible, since the standard deliberately leaves certain behaviour undefined for legacy reasons) imply a finite-space memory, because the size of the memory reference data types, called pointers, is accessible inside the language. However, other programming languages like Pascal do not have this feature, which allows them to be Turing complete in principle. They are Turing complete in principle only, as memory allocation in a programming language is allowed to fail, which means that a programming language can be Turing complete when failed memory allocations are ignored, even though the compiled programs executable on a real computer cannot be.
Choice c-machines, oracle o-machines
Early in his paper (1936) Turing makes a distinction between an "automatic machine"—its "motion ... completely determined by the configuration" and a "choice machine":
...whose motion is only partially determined by the configuration ... When such a machine reaches one of these ambiguous configurations, it cannot go on until some arbitrary choice has been made by an external operator. This would be the case if we were using machines to deal with axiomatic systems.
— The Undecidable, p. 118
Turing (1936) does not elaborate further except in a footnote in which he describes how to use an a-machine to "find all the provable formulae of the [Hilbert] calculus" rather than use a choice machine. He "suppose[s] that the choices are always between two possibilities 0 and 1. Each proof will then be determined by a sequence of choices i1, i2, ..., in (i1 = 0 or 1, i2 = 0 or 1, ..., in = 0 or 1), and hence the number $2^{n}+i_{1}2^{n-1}+i_{2}2^{n-2}+\ldots +i_{n}$ completely determines the proof. The automatic machine carries out successively proof 1, proof 2, proof 3, ..." (Footnote ‡, The Undecidable, p. 138)
This is indeed the technique by which a deterministic (i.e., a-) Turing machine can be used to mimic the action of a nondeterministic Turing machine; Turing solved the matter in a footnote and appears to dismiss it from further consideration.
An oracle machine or o-machine is a Turing a-machine that pauses its computation at state "o" while, to complete its calculation, it "awaits the decision" of "the oracle"—an unspecified entity "apart from saying that it cannot be a machine" (Turing (1939), The Undecidable, p. 166–168).
Universal Turing machines
Main article: Universal Turing machine
As Turing wrote in The Undecidable, p. 128 (italics added):
It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with the tape on the beginning of which is written the string of quintuples separated by semicolons of some computing machine M, then U will compute the same sequence as M.
This finding is now taken for granted, but at the time (1936) it was considered astonishing. The model of computation that Turing called his "universal machine"—"U" for short—is considered by some (cf. Davis (2000)) to have been the fundamental theoretical breakthrough that led to the notion of the stored-program computer.
Turing's paper ... contains, in essence, the invention of the modern computer and some of the programming techniques that accompanied it.
— Minsky (1967), p. 104
In terms of computational complexity, a multi-tape universal Turing machine need only be slower by logarithmic factor compared to the machines it simulates. This result was obtained in 1966 by F. C. Hennie and R. E. Stearns. (Arora and Barak, 2009, theorem 1.9)
Comparison with real machines
It is often believed that Turing machines, unlike simpler automata, are as powerful as real machines, and are able to execute any operation that a real program can. What is neglected in this statement is that, because a real machine can only have a finite number of configurations, it is nothing but a finite-state machine, whereas a Turing machine has an unlimited amount of storage space available for its computations.
There are a number of ways to explain why Turing machines are useful models of real computers:
• Anything a real computer can compute, a Turing machine can also compute. For example: "A Turing machine can simulate any type of subroutine found in programming languages, including recursive procedures and any of the known parameter-passing mechanisms" (Hopcroft and Ullman p. 157). A large enough FSA can also model any real computer, disregarding IO. Thus, a statement about the limitations of Turing machines will also apply to real computers.
• The difference lies only with the ability of a Turing machine to manipulate an unbounded amount of data. However, given a finite amount of time, a Turing machine (like a real machine) can only manipulate a finite amount of data.
• Like a Turing machine, a real machine can have its storage space enlarged as needed, by acquiring more disks or other storage media.
• Descriptions of real machine programs using simpler abstract models are often much more complex than descriptions using Turing machines. For example, a Turing machine describing an algorithm may have a few hundred states, while the equivalent deterministic finite automaton (DFA) on a given real machine has quadrillions. This makes the DFA representation infeasible to analyze.
• Turing machines describe algorithms independent of how much memory they use. There is a limit to the memory possessed by any current machine, but this limit can rise arbitrarily in time. Turing machines allow us to make statements about algorithms which will (theoretically) hold forever, regardless of advances in conventional computing machine architecture.
• Turing machines simplify the statement of algorithms. Algorithms running on Turing-equivalent abstract machines are usually more general than their counterparts running on real machines, because they have arbitrary-precision data types available and never have to deal with unexpected conditions (including, but not limited to, running out of memory).
Computational complexity theory
Further information: Computational complexity theory
A limitation of Turing machines is that they do not model the strengths of a particular arrangement well. For instance, modern stored-program computers are actually instances of a more specific form of abstract machine known as the random-access stored-program machine or RASP machine model. Like the universal Turing machine, the RASP stores its "program" in "memory" external to its finite-state machine's "instructions". Unlike the universal Turing machine, the RASP has an infinite number of distinguishable, numbered but unbounded "registers": memory "cells" that can contain any integer (cf. Elgot and Robinson (1964), Hartmanis (1971), and in particular Cook and Reckhow (1973); references at random-access machine). The RASP's finite-state machine is equipped with the capability for indirect addressing (e.g., the contents of one register can be used as an address to specify another register); thus the RASP's "program" can address any register in the register-sequence. The upshot of this distinction is that there are computational optimizations that can be performed based on the memory indices, which are not possible in a general Turing machine; thus when Turing machines are used as the basis for bounding running times, a "false lower bound" can be proven on certain algorithms' running times (due to the false simplifying assumption of a Turing machine). An example of this is binary search, an algorithm that can be shown to perform more quickly when using the RASP model of computation rather than the Turing machine model.
Interaction
In the early days of computing, computer use was typically limited to batch processing, i.e., non-interactive tasks, each producing output data from given input data. Computability theory, which studies computability of functions from inputs to outputs, and for which Turing machines were invented, reflects this practice.
Since the 1970s, interactive use of computers became much more common. In principle, it is possible to model this by having an external agent read from the tape and write to it at the same time as a Turing machine, but this rarely matches how interaction actually happens; therefore, when describing interactivity, alternatives such as I/O automata are usually preferred.
History
See also: Algorithm and Church–Turing thesis
Historical background: computational machinery
Robin Gandy (1919–1995)—a student of Alan Turing (1912–1954), and his lifelong friend—traces the lineage of the notion of "calculating machine" back to Charles Babbage (circa 1834) and actually proposes "Babbage's Thesis":
That the whole of development and operations of analysis are now capable of being executed by machinery.
— (italics in Babbage as cited by Gandy, p. 54)
Gandy's analysis of Babbage's analytical engine describes the following five operations (cf. p. 52–53):
• The arithmetic functions +, −, ×, where − indicates "proper" subtraction x − y = 0 if y ≥ x.
• Any sequence of operations is an operation.
• Iteration of an operation (repeating n times an operation P).
• Conditional iteration (repeating n times an operation P conditional on the "success" of test T).
• Conditional transfer (i.e., conditional "goto").
Gandy states that "the functions which can be calculated by (1), (2), and (4) are precisely those which are Turing computable." (p. 53). He cites other proposals for "universal calculating machines" including those of Percy Ludgate (1909), Leonardo Torres Quevedo (1914),[20][21] Maurice d'Ocagne (1922), Louis Couffignal (1933), Vannevar Bush (1936), Howard Aiken (1937). However:
… the emphasis is on programming a fixed iterable sequence of arithmetical operations. The fundamental importance of conditional iteration and conditional transfer for a general theory of calculating machines is not recognized…
— Gandy p. 55
The Entscheidungsproblem (the "decision problem"): Hilbert's tenth question of 1900
With regard to Hilbert's problems posed by the famous mathematician David Hilbert in 1900, an aspect of problem #10 had been floating about for almost 30 years before it was framed precisely. Hilbert's original expression for No. 10 is as follows:
10. Determination of the solvability of a Diophantine equation. Given a Diophantine equation with any number of unknown quantities and with rational integral coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers. The Entscheidungsproblem [decision problem for first-order logic] is solved when we know a procedure that allows for any given logical expression to decide by finitely many operations its validity or satisfiability ... The Entscheidungsproblem must be considered the main problem of mathematical logic.
— quoted, with this translation and the original German, in Dershowitz and Gurevich, 2008
By 1922, this notion of "Entscheidungsproblem" had developed a bit, and H. Behmann stated that
... most general form of the Entscheidungsproblem [is] as follows:
A quite definite generally applicable prescription is required which will allow one to decide in a finite number of steps the truth or falsity of a given purely logical assertion ...
— Gandy p. 57, quoting Behmann
Behmann remarks that ... the general problem is equivalent to the problem of deciding which mathematical propositions are true.
— ibid.
If one were able to solve the Entscheidungsproblem then one would have a "procedure for solving many (or even all) mathematical problems".
— ibid., p. 92
By the 1928 international congress of mathematicians, Hilbert "made his questions quite precise. First, was mathematics complete ... Second, was mathematics consistent ... And thirdly, was mathematics decidable?" (Hodges p. 91, Hawking p. 1121). The first two questions were answered in 1930 by Kurt Gödel at the very same meeting where Hilbert delivered his retirement speech (much to the chagrin of Hilbert); the third—the Entscheidungsproblem—had to wait until the mid-1930s.
The problem was that an answer first required a precise definition of "definite general applicable prescription", which Princeton professor Alonzo Church would come to call "effective calculability", and in 1928 no such definition existed. But over the next 6–7 years Emil Post developed his definition of a worker moving from room to room writing and erasing marks per a list of instructions (Post 1936), as did Church and his two students Stephen Kleene and J. B. Rosser by use of Church's lambda-calculus and Gödel's recursion theory (1934). Church's paper (published 15 April 1936) showed that the Entscheidungsproblem was indeed "undecidable" and beat Turing to the punch by almost a year (Turing's paper submitted 28 May 1936, published January 1937). In the meantime, Emil Post submitted a brief paper in the fall of 1936, so Turing at least had priority over Post. While Church refereed Turing's paper, Turing had time to study Church's paper and add an Appendix where he sketched a proof that Church's lambda-calculus and his machines would compute the same functions.
But what Church had done was something rather different, and in a certain sense weaker. ... the Turing construction was more direct, and provided an argument from first principles, closing the gap in Church's demonstration.
— Hodges p. 112
And Post had only proposed a definition of calculability and criticized Church's "definition", but had proved nothing.
Alan Turing's a-machine
In the spring of 1935, Turing as a young Master's student at King's College, Cambridge, took on the challenge; he had been stimulated by the lectures of the logician M. H. A. Newman "and learned from them of Gödel's work and the Entscheidungsproblem ... Newman used the word 'mechanical' ... In his obituary of Turing 1955 Newman writes:
To the question 'what is a "mechanical" process?' Turing returned the characteristic answer 'Something that can be done by a machine' and he embarked on the highly congenial task of analysing the general notion of a computing machine.
— Gandy, p. 74
Gandy states that:
I suppose, but do not know, that Turing, right from the start of his work, had as his goal a proof of the undecidability of the Entscheidungsproblem. He told me that the 'main idea' of the paper came to him when he was lying in Grantchester meadows in the summer of 1935. The 'main idea' might have either been his analysis of computation or his realization that there was a universal machine, and so a diagonal argument to prove unsolvability.
— ibid., p. 76
While Gandy believed that Newman's statement above is "misleading", this opinion is not shared by all. Turing had a lifelong interest in machines: "Alan had dreamt of inventing typewriters as a boy; [his mother] Mrs. Turing had a typewriter; and he could well have begun by asking himself what was meant by calling a typewriter 'mechanical'" (Hodges p. 96). While at Princeton pursuing his PhD, Turing built a Boolean-logic multiplier (see below). His PhD thesis, titled "Systems of Logic Based on Ordinals", contains the following definition of "a computable function":
It was stated above that 'a function is effectively calculable if its values can be found by some purely mechanical process'. We may take this statement literally, understanding by a purely mechanical process one which could be carried out by a machine. It is possible to give a mathematical description, in a certain normal form, of the structures of these machines. The development of these ideas leads to the author's definition of a computable function, and to an identification of computability with effective calculability. It is not difficult, though somewhat laborious, to prove that these three definitions [the 3rd is the λ-calculus] are equivalent.
— Turing (1939) in The Undecidable, p. 160
Alan Turing invented the "a-machine" (automatic machine) in 1936.[7] Turing submitted his paper on 31 May 1936 to the London Mathematical Society for its Proceedings (cf. Hodges 1983:112), but it was published in early 1937 and offprints were available in February 1937 (cf. Hodges 1983:129) It was Turing's doctoral advisor, Alonzo Church, who later coined the term "Turing machine" in a review.[22] With this model, Turing was able to answer two questions in the negative:
• Does a machine exist that can determine whether any arbitrary machine on its tape is "circular" (e.g., freezes, or fails to continue its computational task)?
• Does a machine exist that can determine whether any arbitrary machine on its tape ever prints a given symbol?[23][24]
Thus by providing a mathematical description of a very simple device capable of arbitrary computations, he was able to prove properties of computation in general—and in particular, the uncomputability of the Entscheidungsproblem ('decision problem').[25]
When Turing returned to the UK he ultimately became jointly responsible for breaking the German secret codes created by encryption machines called "The Enigma"; he also became involved in the design of the ACE (Automatic Computing Engine), "[Turing's] ACE proposal was effectively self-contained, and its roots lay not in the EDVAC [the USA's initiative], but in his own universal machine" (Hodges p. 318). Arguments still continue concerning the origin and nature of what has been named by Kleene (1952) Turing's Thesis. But what Turing did prove with his computational-machine model appears in his paper "On Computable Numbers, with an Application to the Entscheidungsproblem" (1937):
[that] the Hilbert Entscheidungsproblem can have no solution ... I propose, therefore to show that there can be no general process for determining whether a given formula U of the functional calculus K is provable, i.e. that there can be no machine which, supplied with any one U of these formulae, will eventually say whether U is provable.
— from Turing's paper as reprinted in The Undecidable, p. 145
Turing's example (his second proof): If one is to ask for a general procedure to tell us: "Does this machine ever print 0", the question is "undecidable".
1937–1970: The "digital computer", the birth of "computer science"
In 1937, while at Princeton working on his PhD thesis, Turing built a digital (Boolean-logic) multiplier from scratch, making his own electromechanical relays (Hodges p. 138). "Alan's task was to embody the logical design of a Turing machine in a network of relay-operated switches ..." (Hodges p. 138). While Turing might have been just initially curious and experimenting, quite-earnest work in the same direction was going on in Germany (Konrad Zuse (1938)) and in the United States (Howard Aiken and George Stibitz (1937)); the fruits of their labors were used by both the Axis and Allied militaries in World War II (cf. Hodges p. 298–299). In the early to mid-1950s Hao Wang and Marvin Minsky reduced the Turing machine to a simpler form (a precursor to the Post–Turing machine of Martin Davis); simultaneously European researchers were reducing the new-fangled electronic computer to a computer-like theoretical object equivalent to what was now being called a "Turing machine". In the late 1950s and early 1960s, the coincidentally parallel developments of Melzak and Lambek (1961), Minsky (1961), and Shepherdson and Sturgis (1961) carried the European work further and reduced the Turing machine to a more friendly, computer-like abstract model called the counter machine; Elgot and Robinson (1964), Hartmanis (1971), and Cook and Reckhow (1973) carried this work even further with the register machine and random-access machine models; but basically all are just multi-tape Turing machines with an arithmetic-like instruction set.
1970–present: as a model of computation
Today, the counter, register and random-access machines and their sire the Turing machine continue to be the models of choice for theorists investigating questions in the theory of computation. In particular, computational complexity theory makes use of the Turing machine:
Depending on the objects one likes to manipulate in the computations (numbers like nonnegative integers or alphanumeric strings), two models have obtained a dominant position in machine-based complexity theory:
the off-line multitape Turing machine..., which represents the standard model for string-oriented computation, and the random access machine (RAM) as introduced by Cook and Reckhow ..., which models the idealized Von Neumann-style computer.
— van Emde Boas 1990:4
Only in the related area of analysis of algorithms this role is taken over by the RAM model.
— van Emde Boas 1990:16
See also
• Arithmetical hierarchy
• Bekenstein bound, showing the impossibility of infinite-tape Turing machines of finite size and bounded energy
• BlooP and FlooP
• Chaitin's constant or Omega (computer science) for information relating to the halting problem
• Chinese room
• Conway's Game of Life, a Turing-complete cellular automaton
• Digital infinity
• The Emperor's New Mind
• Enumerator (in theoretical computer science)
• Genetix
• Gödel, Escher, Bach: An Eternal Golden Braid, a famous book that discusses, among other topics, the Church–Turing thesis
• Halting problem, for more references
• Harvard architecture
• Imperative programming
• Langton's ant and Turmites, simple two-dimensional analogues of the Turing machine
• List of things named after Alan Turing
• Modified Harvard architecture
• Quantum Turing machine
• Claude Shannon, another leading thinker in information theory
• Turing machine examples
• Turing switch
• Turing tarpit, any computing system or language that, despite being Turing complete, is generally considered useless for practical computing
• Unorganized machine, for Turing's very early ideas on neural networks
• Von Neumann architecture
Notes
1. Minsky 1967:107 "In his 1936 paper, A. M. Turing defined the class of abstract machines that now bear his name. A Turing machine is a finite-state machine associated with a special kind of environment -- its tape -- in which it can store (and later recover) sequences of symbols," also Stone 1972:8 where the word "machine" is in quotation marks.
2. Stone 1972:8 states "This "machine" is an abstract mathematical model", also cf. Sipser 2006:137ff that describes the "Turing machine model". Rogers 1987 (1967):13 refers to "Turing's characterization", Boolos Burgess and Jeffrey 2002:25 refers to a "specific kind of idealized machine".
3. Sipser 2006:137 "A Turing machine can do everything that a real computer can do".
4. Cf. Sipser 2002:137. Also, Rogers 1987 (1967):13 describes "a paper tape of infinite length in both directions". Minsky 1967:118 states "The tape is regarded as infinite in both directions". Boolos Burgess and Jeffrey 2002:25 include the possibility of "there is someone stationed at each end to add extra blank squares as needed".
5. Cf. Rogers 1987 (1967):13. Other authors use the word "square" e.g. Boolos Burgess Jeffrey 2002:35, Minsky 1967:117, Penrose 1989:37.
6. Boolos Burgess Jeffrey 2002:25 illustrate the machine as moving along the tape. Penrose 1989:36-37 describes himself as "uncomfortable" with an infinite tape observing that it "might be hard to shift!"; he "prefer[s] to think of the tape as representing some external environment through which our finite device can move" and, after observing that the "'movement' is a convenient way of picturing things", suggests that "the device receives all its input from this environment". Some variations of the Turing machine model also allow the head to stay in the same position instead of moving or halting.
7. Hodges, Andrew (2012). Alan Turing: The Enigma (The Centenary ed.). Princeton University Press. ISBN 978-0-691-15564-7.
8. The idea came to him in mid-1935 (perhaps, see more in the History section) after a question posed by M. H. A. Newman in his lectures: "Was there a definite method, or as Newman put it, a "mechanical process" which could be applied to a mathematical statement, and which would come up with the answer as to whether it was provable" (Hodges 1983:93). Turing submitted his paper on 31 May 1936 to the London Mathematical Society for its Proceedings (cf. Hodges 1983:112), but it was published in early 1937 and offprints were available in February 1937 (cf. Hodges 1983:129).
9. See footnote in Davis 2000:151.
10. see note in foreword to The Collected Works of Alonzo Church (Burge, Tyler; Enderton, Herbert, eds. (2019-04-23). The Collected Works of Alonzo Church. Cambridge, MA, US: MIT Press. ISBN 978-0-262-02564-5.)
11. Turing 1936 in The Undecidable 1965:132-134; Turing's definition of "circular" is found on page 119.
12. Turing, Alan Mathison (1937). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. Series 2. 42 (1): 230–265. doi:10.1112/plms/s2-42.1.230. S2CID 73712. — Reprint at: Turing, Alan. "On computable numbers, with an application to the Entscheidungsproblem". The Turing Digital Archive. Retrieved 9 July 2020.
13. Turing 1936 in The Undecidable 1965:145
14. Sipser 2006:137 observes that "A Turing machine can do everything that a real computer can do. Nevertheless, even a Turing machine cannot solve certain problems. In a very real sense, these problems are beyond the theoretical limits of computation."
15. See the definition of "innings" on Wiktionary
16. A.M. Turing (Jul 1948). Intelligent Machinery (Report). National Physical Laboratory. Here: p.3-4
17. Occasionally called an action table or transition function.
18. Usually quintuples [5-tuples]: qiaj→qi1aj1dk, but sometimes quadruples [4-tuples].
19. p.149; in particular, Hopcroft and Ullman assume that $\delta $ is undefined on all states from $F$
20. L. Torres Quevedo. Ensayos sobre Automática – Su definición. Extensión teórica de sus aplicaciones, Revista de la Academia de Ciencias Exactas, Revista 12, pp. 391–418, 1914.
21. Torres Quevedo, L. (1915). "Essais sur l'Automatique – Sa définition. Étendue théorique de ses applications", Revue Générale des Sciences Pures et Appliquées, vol. 2, pp. 601–611.
References
Primary literature, reprints, and compilations
• B. Jack Copeland ed. (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma, Clarendon Press (Oxford University Press), Oxford UK, ISBN 0-19-825079-7. Contains the Turing papers plus a draft letter to Emil Post re his criticism of "Turing's convention", and Donald W. Davies' Corrections to Turing's Universal Computing Machine
• Martin Davis (ed.) (1965), The Undecidable, Raven Press, Hewlett, NY.
• Emil Post (1936), "Finite Combinatory Processes—Formulation 1", Journal of Symbolic Logic, 1, 103–105, 1936. Reprinted in The Undecidable, pp. 289ff.
• Emil Post (1947), "Recursive Unsolvability of a Problem of Thue", Journal of Symbolic Logic, vol. 12, pp. 1–11. Reprinted in The Undecidable, pp. 293ff. In the Appendix of this paper Post comments on and gives corrections to Turing's paper of 1936–1937. In particular see the footnotes 11 with corrections to the universal computing machine coding and footnote 14 with comments on Turing's first and second proofs.
• Turing, A.M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. 2 (published 1937). 42: 230–265. doi:10.1112/plms/s2-42.1.230. S2CID 73712. (and Turing, A.M. (1938). "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction". Proceedings of the London Mathematical Society. 2 (published 1937). 43 (6): 544–6. doi:10.1112/plms/s2-43.6.544.). Reprinted in many collections, e.g. in The Undecidable, pp. 115–154; available on the web in many places.
• Alan Turing, 1948, "Intelligent Machinery." Reprinted in "Cybernetics: Key Papers." Ed. C.R. Evans and A.D.J. Robertson. Baltimore: University Park Press, 1968. p. 31. Reprinted in Turing, A. M. (1996). "Intelligent Machinery, A Heretical Theory". Philosophia Mathematica. 4 (3): 256–260. doi:10.1093/philmat/4.3.256.
• F. C. Hennie and R. E. Stearns. Two-tape simulation of multitape Turing machines. JACM, 13(4):533–546, 1966.
Computability theory
• Boolos, George; Richard Jeffrey (1999) [1989]. Computability and Logic (3rd ed.). Cambridge UK: Cambridge University Press. ISBN 0-521-20402-X.
• Boolos, George; John Burgess; Richard Jeffrey (2002). Computability and Logic (4th ed.). Cambridge UK: Cambridge University Press. ISBN 0-521-00758-5. Some parts have been significantly rewritten by Burgess. Presentation of Turing machines in context of Lambek "abacus machines" (cf. Register machine) and recursive functions, showing their equivalence.
• Taylor L. Booth (1967), Sequential Machines and Automata Theory, John Wiley and Sons, Inc., New York. Graduate level engineering text; ranges over a wide variety of topics, Chapter IX Turing Machines includes some recursion theory.
• Martin Davis (1958). Computability and Unsolvability. McGraw-Hill Book Company, Inc., New York. On pages 12–20 he gives examples of 5-tuple tables for Addition, The Successor Function, Subtraction (x ≥ y), Proper Subtraction (0 if x < y), The Identity Function and various identity functions, and Multiplication.
• Davis, Martin; Ron Sigal; Elaine J. Weyuker (1994). Computability, Complexity, and Languages and Logic: Fundamentals of Theoretical Computer Science (2nd ed.). San Diego: Academic Press, Harcourt, Brace & Company. ISBN 0-12-206382-1.
• Hennie, Fredrick (1977). Introduction to Computability. Addison–Wesley, Reading, Mass. QA248.5H4 1977. On pages 90–103 Hennie discusses the UTM with examples and flow-charts, but no actual 'code'.
• Hopcroft, John; Ullman, Jeffrey (1979). Introduction to Automata Theory, Languages, and Computation (1st ed.). Addison–Wesley, Reading Mass. ISBN 0-201-02988-X. Centered around the issues of machine-interpretation of "languages", NP-completeness, etc.
• Hopcroft, John E.; Rajeev Motwani; Jeffrey D. Ullman (2001). Introduction to Automata Theory, Languages, and Computation (2nd ed.). Reading Mass: Addison–Wesley. ISBN 0-201-44124-1.
• Stephen Kleene (1952), Introduction to Metamathematics, North–Holland Publishing Company, Amsterdam Netherlands, 10th impression (with corrections of 6th reprint 1971). Graduate level text; most of Chapter XIII Computable functions is on Turing machine proofs of computability of recursive functions, etc.
• Knuth, Donald E. (1973). Volume 1/Fundamental Algorithms: The Art of Computer Programming (2nd ed.). Reading, Mass.: Addison–Wesley Publishing Company. With reference to the role of Turing machines in the development of computation (both hardware and software) see 1.4.5 History and Bibliography pp. 225ff and 2.6 History and Bibliography pp. 456ff.
• Zohar Manna, 1974, Mathematical Theory of Computation. Reprinted, Dover, 2003. ISBN 978-0-486-43238-0
• Marvin Minsky, Computation: Finite and Infinite Machines, Prentice–Hall, Inc., N.J., 1967. See Chapter 8, Section 8.2 "Unsolvability of the Halting Problem."
• Christos Papadimitriou (1993). Computational Complexity (1st ed.). Addison Wesley. ISBN 0-201-53082-1. Chapter 2: Turing machines, pp. 19–56.
• Hartley Rogers, Jr., Theory of Recursive Functions and Effective Computability, The MIT Press, Cambridge MA, paperback edition 1987, original McGraw-Hill edition 1967, ISBN 0-262-68052-1 (pbk.)
• Michael Sipser (1997). Introduction to the Theory of Computation. PWS Publishing. ISBN 0-534-94728-X. Chapter 3: The Church–Turing Thesis, pp. 125–149.
• Stone, Harold S. (1972). Introduction to Computer Organization and Data Structures (1st ed.). New York: McGraw–Hill Book Company. ISBN 0-07-061726-0.
• Peter van Emde Boas 1990, Machine Models and Simulations, pp. 3–66, in Jan van Leeuwen, ed., Handbook of Theoretical Computer Science, Volume A: Algorithms and Complexity, The MIT Press/Elsevier, [place?], ISBN 0-444-88071-2 (Volume A). QA76.H279 1990.
Church's thesis
• Nachum Dershowitz; Yuri Gurevich (September 2008). "A natural axiomatization of computability and proof of Church's Thesis" (PDF). Bulletin of Symbolic Logic. 14 (3). Retrieved 2008-10-15.
• Roger Penrose (1990) [1989]. The Emperor's New Mind (2nd ed.). Oxford University Press, New York. ISBN 0-19-851973-7.
Small Turing machines
• Rogozhin, Yurii, 1998, "A Universal Turing Machine with 22 States and 2 Symbols", Romanian Journal of Information Science and Technology, 1(3), 259–265, 1998. (surveys known results about small universal Turing machines)
• Stephen Wolfram, 2002, A New Kind of Science, Wolfram Media, ISBN 1-57955-008-8
• Brumfiel, Geoff, Student snags maths prize, Nature, October 24, 2007.
• Jim Giles (2007), Simplest 'universal computer' wins student $25,000, New Scientist, October 24, 2007.
• Alex Smith, Universality of Wolfram’s 2, 3 Turing Machine, Submission for the Wolfram 2, 3 Turing Machine Research Prize.
• Vaughan Pratt, 2007, "Simple Turing machines, Universality, Encodings, etc.", FOM email list. October 29, 2007.
• Martin Davis, 2007, "Smallest universal machine", and Definition of universal Turing machine FOM email list. October 26–27, 2007.
• Alasdair Urquhart, 2007 "Smallest universal machine", FOM email list. October 26, 2007.
• Hector Zenil (Wolfram Research), 2007 "smallest universal machine", FOM email list. October 29, 2007.
• Todd Rowland, 2007, "Confusion on FOM", Wolfram Science message board, October 30, 2007.
• Olivier and Marc Raynaud, 2014, A programmable prototype to achieve Turing machines, Archived 2016-01-14 at the Wayback Machine, LIMOS Laboratory of Blaise Pascal University (Clermont-Ferrand, France).
Other
• Martin Davis (2000). Engines of Logic: Mathematicians and the origin of the Computer (1st ed.). W. W. Norton & Company, New York. ISBN 978-0-393-32229-3.
• Robin Gandy, "The Confluence of Ideas in 1936", pp. 51–102 in Rolf Herken, see below.
• Stephen Hawking (editor), 2005, God Created the Integers: The Mathematical Breakthroughs that Changed History, Running Press, Philadelphia, ISBN 978-0-7624-1922-7. Includes Turing's 1936–1937 paper, with brief commentary and biography of Turing as written by Hawking.
• Rolf Herken (1995). The Universal Turing Machine—A Half-Century Survey. Springer Verlag. ISBN 978-3-211-82637-9.
• Andrew Hodges, Alan Turing: The Enigma, Simon and Schuster, New York. Cf. Chapter "The Spirit of Truth" for a history leading to, and a discussion of, his proof.
• Ivars Peterson (1988). The Mathematical Tourist: Snapshots of Modern Mathematics (1st ed.). W. H. Freeman and Company, New York. ISBN 978-0-7167-2064-5.
• Roger Penrose, The Emperor's New Mind: Concerning Computers, Minds, and the Laws of Physics, Oxford University Press, Oxford and New York, 1989 (1990 corrections), ISBN 0-19-851973-7.
• Paul Strathern (1997). Turing and the Computer—The Big Idea. Anchor Books/Doubleday. ISBN 978-0-385-49243-0.
• Hao Wang, "A variant to Turing's theory of computing machines", Journal of the Association for Computing Machinery (JACM) 4, 63–92 (1957).
• Petzold, Charles, The Annotated Turing, John Wiley & Sons, Inc., ISBN 0-470-22905-5
• Arora, Sanjeev; Barak, Boaz, "Complexity Theory: A Modern Approach", Cambridge University Press, 2009, ISBN 978-0-521-42426-4, section 1.4, "Machines as strings and the universal Turing machine" and 1.7, "Proof of theorem 1.9"
• Kantorovitz, Isaiah Pinchas (December 1, 2005). "A note on turing machine computability of rule driven systems". SIGACT News. 36 (4): 109–110. doi:10.1145/1107523.1107525. S2CID 31117713.
• Kirner, Raimund; Zimmermann, Wolf; Richter, Dirk: "On Undecidability Results of Real Programming Languages", In 15. Kolloquium Programmiersprachen und Grundlagen der Programmierung (KPS'09), Maria Taferl, Austria, Oct. 2009.
External links
Wikimedia Commons has media related to Turing machines.
• "Turing machine", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Turing Machine – Stanford Encyclopedia of Philosophy
• Turing Machine Causal Networks by Enrique Zeleny as part of the Wolfram Demonstrations Project.
• Turing Machines at Curlie
Cone (category theory)
In category theory, a branch of mathematics, the cone of a functor is an abstract notion used to define the limit of that functor. Cones make other appearances in category theory as well.
Definition
Let F : J → C be a diagram in C. Formally, a diagram is nothing more than a functor from J to C. The change in terminology reflects the fact that we think of F as indexing a family of objects and morphisms in C. The category J is thought of as an "index category". One should consider this in analogy with the concept of an indexed family of objects in set theory. The primary difference is that here we have morphisms as well. Thus, for example, when J is a discrete category, it corresponds most closely to the idea of an indexed family in set theory. Another common and more interesting example takes J to be a span. J can also be taken to be the empty category, leading to the simplest cones.
Let N be an object of C. A cone from N to F is a family of morphisms
$\psi _{X}\colon N\to F(X)\,$
for each object X of J, such that for every morphism f : X → Y in J the triangle formed by $\psi _{X}$, $\psi _{Y}$ and F(f) commutes, i.e. $\psi _{Y}=F(f)\circ \psi _{X}$.
The (usually infinite) collection of all these triangles can be (partially) depicted in the shape of a cone with the apex N. The cone ψ is sometimes said to have vertex N and base F.
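For example, take J to be the discrete category with two objects 1 and 2, and let F be a diagram in Set selecting the sets A = F(1) and B = F(2). A cone from N to F is then simply a pair of functions $\psi _{1}\colon N\to A$ and $\psi _{2}\colon N\to B$ (no compatibility conditions arise, since J has only identity morphisms), and the Cartesian product $A\times B$ with its two projections is the universal such cone.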
One can also define the dual notion of a cone from F to N (also called a co-cone) by reversing all the arrows above. Explicitly, a co-cone from F to N is a family of morphisms
$\psi _{X}\colon F(X)\to N\,$
for each object X of J, such that for every morphism f : X → Y in J the triangle formed by $\psi _{X}$, $\psi _{Y}$ and F(f) commutes, i.e. $\psi _{X}=\psi _{Y}\circ F(f)$.
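Dually, in the example above with J the discrete two-object category, a co-cone from F to N in Set is a pair of functions $\psi _{1}\colon A\to N$ and $\psi _{2}\colon B\to N$, and the disjoint union $A\sqcup B$ with its two inclusions is the universal co-cone, i.e. the coproduct.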
Equivalent formulations
At first glance cones seem to be slightly abnormal constructions in category theory. They are maps from an object to a functor (or vice versa). In keeping with the spirit of category theory we would like to define them as morphisms or objects in some suitable category. In fact, we can do both.
Let J be a small category and let CJ be the category of diagrams of type J in C (this is nothing more than a functor category). Define the diagonal functor Δ : C → CJ as follows: Δ(N) : J → C is the constant functor to N for all N in C.
If F is a diagram of type J in C, the following statements are equivalent:
• ψ is a cone from N to F
• ψ is a natural transformation from Δ(N) to F
• (N, ψ) is an object in the comma category (Δ ↓ F)
The dual statements are also equivalent:
• ψ is a co-cone from F to N
• ψ is a natural transformation from F to Δ(N)
• (N, ψ) is an object in the comma category (F ↓ Δ)
These statements can all be verified by a straightforward application of the definitions. Thinking of cones as natural transformations we see that they are just morphisms in CJ with source (or target) a constant functor.
Category of cones
By the above, we can define the category of cones to F as the comma category (Δ ↓ F). Morphisms of cones are then just morphisms in this category. This equivalence is rooted in the observation that a natural map between constant functors Δ(N), Δ(M) corresponds to a morphism between N and M. In this sense, the diagonal functor acts trivially on arrows. In a similar vein, writing down the definition of a natural map from a constant functor Δ(N) to F yields the same diagram as the above. As one might expect, a morphism from a cone (N, ψ) to a cone (L, φ) is just a morphism N → L such that all the "obvious" diagrams commute (see the first diagram in the next section).
Likewise, the category of co-cones from F is the comma category (F ↓ Δ).
Universal cones
Limits and colimits are defined as universal cones. That is, cones through which all other cones factor. A cone φ from L to F is a universal cone if for any other cone ψ from N to F there is a unique morphism from ψ to φ.
Equivalently, a universal cone to F is a universal morphism from Δ to F (thought of as an object in CJ), or a terminal object in (Δ ↓ F).
Dually, a cone φ from F to L is a universal cone if for any other cone ψ from F to N there is a unique morphism from φ to ψ.
Equivalently, a universal cone from F is a universal morphism from F to Δ, or an initial object in (F ↓ Δ).
The limit of F is a universal cone to F, and the colimit is a universal cone from F. As with all universal constructions, universal cones are not guaranteed to exist for all diagrams F, but if they do exist they are unique up to a unique isomorphism (in the comma category (Δ ↓ F)).
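As a concrete illustration: in Set every diagram $F\colon J\to \mathbf {Set} $ has a limit, given by the set of compatible families
$L=\{(x_{j})_{j}\in \textstyle \prod _{j}F(j)\;:\;F(f)(x_{i})=x_{j}{\text{ for every }}f\colon i\to j{\text{ in }}J\},$
with $\varphi _{j}$ the restriction of the j-th projection; any cone $(N,\psi )$ factors uniquely through $(L,\varphi )$ via $n\mapsto (\psi _{j}(n))_{j}$, which can be checked directly from the definitions.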
See also
• Inverse limit § Cones – construction in category theory
References
• Mac Lane, Saunders (1998). Categories for the Working Mathematician (2nd ed.). New York: Springer. ISBN 0-387-98403-8.
• Borceux, Francis (1994). "Limits". Handbook of categorical algebra. Encyclopedia of mathematics and its applications 50-51, 53 [i.e. 52]. Vol. 1. Cambridge University Press. ISBN 0-521-44178-1.
External links
• Cone at the nLab
Universal property
In mathematics, more specifically in category theory, a universal property is a property that characterizes up to an isomorphism the result of some constructions. Thus, universal properties can be used for defining some objects independently from the method chosen for constructing them. For example, the definitions of the integers from the natural numbers, of the rational numbers from the integers, of the real numbers from the rational numbers, and of polynomial rings from the field of their coefficients can all be done in terms of universal properties. In particular, the concept of universal property allows a simple proof that all constructions of real numbers are equivalent: it suffices to prove that they satisfy the same universal property.
Technically, a universal property is defined in terms of categories and functors by means of a universal morphism (see § Formal definition, below). Universal morphisms can also be thought more abstractly as initial or terminal objects of a comma category (see § Connection with comma categories, below).
Universal properties occur almost everywhere in mathematics, and the use of the concept allows general properties of universal constructions to be applied easily in situations that would otherwise require tedious direct verification. For example, given a commutative ring R, the field of fractions of the quotient ring of R by a prime ideal p can be identified with the residue field of the localization of R at p; that is $R_{p}/pR_{p}\cong \operatorname {Frac} (R/p)$ (all these constructions can be defined by universal properties).
Other objects that can be defined by universal properties include: all free objects, direct products and direct sums, free groups, free lattices, Grothendieck group, completion of a metric space, completion of a ring, Dedekind–MacNeille completion, product topologies, Stone–Čech compactification, tensor products, inverse limit and direct limit, kernels and cokernels, quotient groups, quotient vector spaces, and other quotient spaces.
Motivation
Before giving a formal definition of universal properties, we offer some motivation for studying such constructions.
• The concrete details of a given construction may be messy, but if the construction satisfies a universal property, one can forget all those details: all there is to know about the construction is already contained in the universal property. Proofs often become short and elegant if the universal property is used rather than the concrete details. For example, the tensor algebra of a vector space is slightly complicated to construct, but much easier to deal with by its universal property.
• Universal properties define objects uniquely up to a unique isomorphism.[1] Therefore, one strategy to prove that two objects are isomorphic is to show that they satisfy the same universal property.
• Universal constructions are functorial in nature: if one can carry out the construction for every object in a category C then one obtains a functor on C. Furthermore, this functor is a right or left adjoint to the functor U used in the definition of the universal property.[2]
• Universal properties occur everywhere in mathematics. By understanding their abstract properties, one obtains information about all these constructions and can avoid repeating the same analysis for each individual instance.
Formal definition
To understand the definition of a universal construction, it is important to look at examples. Universal constructions were not defined out of thin air, but were rather defined after mathematicians began noticing a pattern in many mathematical constructions (see Examples below). Hence, the definition may not make sense to one at first, but will become clear when one reconciles it with concrete examples.
Let $F:{\mathcal {C}}\to {\mathcal {D}}$ be a functor between categories ${\mathcal {C}}$ and ${\mathcal {D}}$. In what follows, let $X$ be an object of ${\mathcal {D}}$, while $A$ and $A'$ are objects of ${\mathcal {C}}$, and $h:A\to A'$ is a morphism in ${\mathcal {C}}$.
Thus, the functor $F$ maps $A$, $A'$ and $h$ in ${\mathcal {C}}$ to $F(A)$, $F(A')$ and $F(h)$ in ${\mathcal {D}}$.
A universal morphism from $X$ to $F$ is a unique pair $(A,u:X\to F(A))$ in ${\mathcal {D}}$ which has the following property, commonly referred to as a universal property:
For any morphism of the form $f:X\to F(A')$ in ${\mathcal {D}}$, there exists a unique morphism $h:A\to A'$ in ${\mathcal {C}}$ such that $f=F(h)\circ u$; that is, the evident triangle commutes.
We can dualize this categorical concept. A universal morphism from $F$ to $X$ is a unique pair $(A,u:F(A)\to X)$ that satisfies the following universal property:
For any morphism of the form $f:F(A')\to X$ in ${\mathcal {D}}$, there exists a unique morphism $h:A'\to A$ in ${\mathcal {C}}$ such that $f=u\circ F(h)$; that is, the evident triangle commutes.
Note that in each definition, the arrows are reversed. Both definitions are necessary to describe universal constructions which appear in mathematics; but they also arise due to the inherent duality present in category theory. In either case, we say that the pair $(A,u)$ which behaves as above satisfies a universal property.
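A standard illustration: let $F$ be the forgetful functor from the category of groups to the category of sets, and let $X$ be a set. The pair $(A,u)$, where $A$ is the free group on $X$ and $u\colon X\to F(A)$ is the inclusion of the generators, is a universal morphism from $X$ to $F$: every function $f\colon X\to F(A')$ into the underlying set of a group $A'$ extends to a unique group homomorphism $h\colon A\to A'$ with $f=F(h)\circ u$.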
Connection with comma categories
Universal morphisms can be described more concisely as initial and terminal objects in a comma category (i.e. one where morphisms are seen as objects in their own right).
Let $F:{\mathcal {C}}\to {\mathcal {D}}$ be a functor and $X$ an object of ${\mathcal {D}}$. Then recall that the comma category $(X\downarrow F)$ is the category where
• Objects are pairs of the form $(B,f:X\to F(B))$, where $B$ is an object in ${\mathcal {C}}$
• A morphism from $(B,f:X\to F(B))$ to $(B',f':X\to F(B'))$ is given by a morphism $h:B\to B'$ in ${\mathcal {C}}$ such that $f'=F(h)\circ f$, i.e. the evident triangle commutes
Now suppose that the object $(A,u:X\to F(A))$ in $(X\downarrow F)$ is initial. Then for every object $(A',f:X\to F(A'))$, there exists a unique morphism $h:A\to A'$ such that $f=F(h)\circ u$.
This is precisely the condition in the definition of a universal morphism from $X$ to $F$. Therefore, we see that a universal morphism from $X$ to $F$ is equivalent to an initial object in the comma category $(X\downarrow F)$.
Conversely, recall that the comma category $(F\downarrow X)$ is the category where
• Objects are pairs of the form $(B,f:F(B)\to X)$ where $B$ is an object in ${\mathcal {C}}$
• A morphism from $(B,f:F(B)\to X)$ to $(B',f':F(B')\to X)$ is given by a morphism $h:B\to B'$ in ${\mathcal {C}}$ such that $f=f'\circ F(h)$, i.e. the evident triangle commutes
Suppose $(A,u:F(A)\to X)$ is a terminal object in $(F\downarrow X)$. Then for every object $(A',f:F(A')\to X)$, there exists a unique morphism $h:A'\to A$ such that $f=u\circ F(h)$.
This is precisely the condition defining a universal morphism from $F$ to $X$. Hence, a universal morphism from $F$ to $X$ corresponds to a terminal object in the comma category $(F\downarrow X)$.
Examples
Below are a few examples, to highlight the general idea. The reader can construct numerous other examples by consulting the articles mentioned in the introduction.
Tensor algebras
Let ${\mathcal {C}}$ be the category of vector spaces $K$-Vect over a field $K$ and let ${\mathcal {D}}$ be the category of algebras $K$-Alg over $K$ (assumed to be unital and associative). Let
$U$ : $K$-Alg → $K$-Vect
be the forgetful functor which assigns to each algebra its underlying vector space.
Given any vector space $V$ over $K$ we can construct the tensor algebra $T(V)$. The tensor algebra is characterized by the fact:
“Any linear map from $V$ to an algebra $A$ can be uniquely extended to an algebra homomorphism from $T(V)$ to $A$.”
This statement is an initial property of the tensor algebra since it expresses the fact that the pair $(T(V),i)$, where $i:V\to U(T(V))$ is the inclusion map, is a universal morphism from the vector space $V$ to the functor $U$.
Since this construction works for any vector space $V$, we conclude that $T$ is a functor from $K$-Vect to $K$-Alg. This means that $T$ is left adjoint to the forgetful functor $U$ (see the section below on relation to adjoint functors).
Products
A categorical product can be characterized by a universal construction. For concreteness, one may consider the Cartesian product in Set, the direct product in Grp, or the product topology in Top, where products exist.
Let $X$ and $Y$ be objects of a category ${\mathcal {C}}$ with finite products. The product of $X$ and $Y$ is an object $X$ × $Y$ together with two morphisms
$\pi _{1}$ : $X\times Y\to X$
$\pi _{2}$ : $X\times Y\to Y$
such that for any other object $Z$ of ${\mathcal {C}}$ and morphisms $f:Z\to X$ and $g:Z\to Y$ there exists a unique morphism $h:Z\to X\times Y$ such that $f=\pi _{1}\circ h$ and $g=\pi _{2}\circ h$.
To understand this characterization as a universal property, take the category ${\mathcal {D}}$ to be the product category ${\mathcal {C}}\times {\mathcal {C}}$ and define the diagonal functor
$\Delta :{\mathcal {C}}\to {\mathcal {C}}\times {\mathcal {C}}$
by $\Delta (X)=(X,X)$ and $\Delta (f:X\to Y)=(f,f)$. Then $(X\times Y,(\pi _{1},\pi _{2}))$ is a universal morphism from $\Delta $ to the object $(X,Y)$ of ${\mathcal {C}}\times {\mathcal {C}}$: if $(f,g)$ is any morphism from $(Z,Z)$ to $(X,Y)$, then it must equal a morphism $\Delta (h:Z\to X\times Y)=(h,h)$ from $\Delta (Z)=(Z,Z)$ to $\Delta (X\times Y)=(X\times Y,X\times Y)$ followed by $(\pi _{1},\pi _{2})$. As a commutative diagram:
For the example of the Cartesian product in Set, the morphism $(\pi _{1},\pi _{2})$ comprises the two projections $\pi _{1}(x,y)=x$ and $\pi _{2}(x,y)=y$. Given any set $Z$ and functions $f,g$, the unique map such that the required diagram commutes is $h=\langle f,g\rangle $, defined by $h(z)=(f(z),g(z))$.[3]
Limits and colimits
Categorical products are a particular kind of limit in category theory. One can generalize the above example to arbitrary limits and colimits.
Let ${\mathcal {J}}$ and ${\mathcal {C}}$ be categories with ${\mathcal {J}}$ a small index category and let ${\mathcal {C}}^{\mathcal {J}}$ be the corresponding functor category. The diagonal functor
$\Delta :{\mathcal {C}}\to {\mathcal {C}}^{\mathcal {J}}$
is the functor that maps each object $N$ in ${\mathcal {C}}$ to the constant functor $\Delta (N):{\mathcal {J}}\to {\mathcal {C}}$ (i.e. $\Delta (N)(X)=N$ for each $X$ in ${\mathcal {J}}$ and $\Delta (N)(f)=1_{N}$ for each $f:X\to Y$ in ${\mathcal {J}}$) and each morphism $f:N\to M$ in ${\mathcal {C}}$ to the natural transformation $\Delta (f):\Delta (N)\to \Delta (M)$ in ${\mathcal {C}}^{\mathcal {J}}$ defined as, for every object $X$ of ${\mathcal {J}}$, the component
$\Delta (f)(X):\Delta (N)(X)\to \Delta (M)(X)=f:N\to M$
at $X$. In other words, the natural transformation is the one defined by having constant component $f:N\to M$ for every object of ${\mathcal {J}}$.
Given a functor $F:{\mathcal {J}}\to {\mathcal {C}}$ (thought of as an object in ${\mathcal {C}}^{\mathcal {J}}$), the limit of $F$, if it exists, is nothing but a universal morphism from $\Delta $ to $F$. Dually, the colimit of $F$ is a universal morphism from $F$ to $\Delta $.
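For instance, when ${\mathcal {J}}$ is the discrete category with two objects, a functor $F:{\mathcal {J}}\to {\mathcal {C}}$ amounts to a pair of objects $(X,Y)$, a morphism $\Delta (Z)\to F$ amounts to a pair of morphisms out of $Z$, and a universal morphism from $\Delta $ to $F$ is exactly the product $X\times Y$ with its two projections, recovering the characterization of the previous section.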
Properties
Existence and uniqueness
Defining a quantity does not guarantee its existence. Given a functor $F:{\mathcal {C}}\to {\mathcal {D}}$ and an object $X$ of ${\mathcal {D}}$, there may or may not exist a universal morphism from $X$ to $F$. If, however, a universal morphism $(A,u)$ does exist, then it is essentially unique. Specifically, it is unique up to a unique isomorphism: if $(A',u')$ is another pair, then there exists a unique isomorphism $k:A\to A'$ such that $u'=F(k)\circ u$. This is easily seen by substituting $(A,u')$ in the definition of a universal morphism.
It is the pair $(A,u)$ which is essentially unique in this fashion. The object $A$ itself is only unique up to isomorphism. Indeed, if $(A,u)$ is a universal morphism and $k:A\to A'$ is any isomorphism then the pair $(A',u')$, where $u'=F(k)\circ u$ is also a universal morphism.
Equivalent formulations
The definition of a universal morphism can be rephrased in a variety of ways. Let $F:{\mathcal {C}}\to {\mathcal {D}}$ be a functor and let $X$ be an object of ${\mathcal {D}}$. Then the following statements are equivalent:
• $(A,u)$ is a universal morphism from $X$ to $F$
• $(A,u)$ is an initial object of the comma category $(X\downarrow F)$
• $(A,F(\bullet )\circ u)$ is a representation of ${\text{Hom}}_{\mathcal {D}}(X,F(-))$, where its components $(F(\bullet )\circ u)_{B}:{\text{Hom}}_{\mathcal {C}}(A,B)\to {\text{Hom}}_{\mathcal {D}}(X,F(B))$ are defined by
$(F(\bullet )\circ u)_{B}(f:A\to B)=F(f)\circ u:X\to F(B)$
for each object $B$ in ${\mathcal {C}}.$
The dual statements are also equivalent:
• $(A,u)$ is a universal morphism from $F$ to $X$
• $(A,u)$ is a terminal object of the comma category $(F\downarrow X)$
• $(A,u\circ F(\bullet ))$ is a representation of ${\text{Hom}}_{\mathcal {D}}(F(-),X)$, where its components $(u\circ F(\bullet ))_{B}:{\text{Hom}}_{\mathcal {C}}(B,A)\to {\text{Hom}}_{\mathcal {D}}(F(B),X)$ are defined by
$(u\circ F(\bullet ))_{B}(f:B\to A)=u\circ F(f):F(B)\to X$
for each object $B$ in ${\mathcal {C}}.$
Relation to adjoint functors
Suppose $(A_{1},u_{1})$ is a universal morphism from $X_{1}$ to $F$ and $(A_{2},u_{2})$ is a universal morphism from $X_{2}$ to $F$. By the universal property of universal morphisms, given any morphism $h:X_{1}\to X_{2}$ there exists a unique morphism $g:A_{1}\to A_{2}$ such that $F(g)\circ u_{1}=u_{2}\circ h$; that is, the evident square commutes.
If every object $X_{i}$ of ${\mathcal {D}}$ admits a universal morphism to $F$, then the assignment $X_{i}\mapsto A_{i}$ and $h\mapsto g$ defines a functor $G:{\mathcal {D}}\to {\mathcal {C}}$. The maps $u_{i}$ then define a natural transformation from $1_{\mathcal {D}}$ (the identity functor on ${\mathcal {D}}$) to $F\circ G$. The functors $(F,G)$ are then a pair of adjoint functors, with $G$ left-adjoint to $F$ and $F$ right-adjoint to $G$.
Similar statements apply to the dual situation of terminal morphisms from $F$. If such morphisms exist for every $X$ in ${\mathcal {C}}$ one obtains a functor $G:{\mathcal {C}}\to {\mathcal {D}}$ which is right-adjoint to $F$ (so $F$ is left-adjoint to $G$).
Indeed, all pairs of adjoint functors arise from universal constructions in this manner. Let $F$ and $G$ be a pair of adjoint functors with unit $\eta $ and co-unit $\epsilon $ (see the article on adjoint functors for the definitions). Then we have a universal morphism for each object in ${\mathcal {C}}$ and ${\mathcal {D}}$:
• For each object $X$ in ${\mathcal {C}}$, $(F(X),\eta _{X})$ is a universal morphism from $X$ to $G$. That is, for all $f:X\to G(Y)$ there exists a unique $g:F(X)\to Y$ for which $f=G(g)\circ \eta _{X}$.
• For each object $Y$ in ${\mathcal {D}}$, $(G(Y),\epsilon _{Y})$ is a universal morphism from $F$ to $Y$. That is, for all $g:F(X)\to Y$ there exists a unique $f:X\to G(Y)$ for which $g=\epsilon _{Y}\circ F(f)$.
Universal constructions are more general than adjoint functor pairs: a universal construction is like an optimization problem; it gives rise to an adjoint pair if and only if this problem has a solution for every object of ${\mathcal {C}}$ (equivalently, every object of ${\mathcal {D}}$).
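The tensor algebra example above fits this pattern: the universal morphisms $(T(V),i)$ assemble into the functor $T$, which is thereby left adjoint to the forgetful functor $U$, with the inclusions $i:V\to U(T(V))$ forming the unit of the adjunction. The free group construction of the earlier illustration is entirely analogous.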
History
Universal properties of various topological constructions were presented by Pierre Samuel in 1948. They were later used extensively by Bourbaki. The closely related concept of adjoint functors was introduced independently by Daniel Kan in 1958.
See also
• Free object
• Natural transformation
• Adjoint functor
• Monad (category theory)
• Variety of algebras
• Cartesian closed category
Notes
1. Jacobson (2009), Proposition 1.6, p. 44.
2. See for example Polcino & Sehgal (2002), p. 133, exercise 1, about the universal property of group rings.
3. Fong, Brendan; Spivak, David I. (2018-10-12). "Seven Sketches in Compositionality: An Invitation to Applied Category Theory". arXiv:1803.05316 [math.CT].
References
• Paul Cohn, Universal Algebra (1981), D. Reidel Publishing, Holland. ISBN 90-277-1213-1.
• Mac Lane, Saunders (1998). Categories for the Working Mathematician. Graduate Texts in Mathematics 5 (2nd ed.). Springer. ISBN 0-387-98403-8.
• Borceux, F. Handbook of Categorical Algebra: vol 1 Basic category theory (1994) Cambridge University Press, (Encyclopedia of Mathematics and its Applications) ISBN 0-521-44178-1
• N. Bourbaki, Livre II : Algèbre (1970), Hermann, ISBN 0-201-00639-1.
• Milies, César Polcino; Sehgal, Sudarshan K. An introduction to group rings. Algebras and Applications, Volume 1. Springer, 2002. ISBN 978-1-4020-0238-0
• Jacobson. Basic Algebra II. Dover. 2009. ISBN 0-486-47187-X
External links
• nLab, a wiki project on mathematics, physics and philosophy with emphasis on the n-categorical point of view
• André Joyal, CatLab, a wiki project dedicated to the exposition of categorical mathematics
• Hillman, Chris (2001). "A Categorical Primer". A formal introduction to category theory.
• J. Adamek, H. Herrlich, G. Stecker, Abstract and Concrete Categories-The Joy of Cats
• Stanford Encyclopedia of Philosophy: "Category Theory"—by Jean-Pierre Marquis. Extensive bibliography.
• List of academic conferences on category theory
• Baez, John, 1996, "The Tale of n-categories." An informal introduction to higher order categories.
• WildCats is a category theory package for Mathematica. Manipulation and visualization of objects, morphisms, categories, functors, natural transformations, universal properties.
• The catsters, a YouTube channel about category theory.
• Video archive of recorded talks relevant to categories, logic and the foundations of physics.
• Interactive Web page which generates examples of categorical constructions in the category of finite sets.
Covering group
In mathematics, a covering group of a topological group H is a covering space G of H such that G is a topological group and the covering map p : G → H is a continuous group homomorphism. The map p is called the covering homomorphism. A frequently occurring case is a double covering group, a topological double cover in which H has index 2 in G; examples include the spin groups, pin groups, and metaplectic groups.
This article is about topological covering group. For algebraic covering group, see universal perfect central extension.
Roughly explained, saying that for example the metaplectic group Mp2n is a double cover of the symplectic group Sp2n means that there are always two elements in the metaplectic group representing one element in the symplectic group.
Properties
Let G be a covering group of H. The kernel K of the covering homomorphism is just the fiber over the identity in H and is a discrete normal subgroup of G. The kernel K is closed in G if and only if G is Hausdorff (and if and only if H is Hausdorff). Going in the other direction, if G is any topological group and K is a discrete normal subgroup of G then the quotient map p : G → G/K is a covering homomorphism.
If G is connected then K, being a discrete normal subgroup, necessarily lies in the center of G and is therefore abelian. In this case, the center of H = G/K is given by
$Z(H)\cong Z(G)/K.$
As with all covering spaces, the fundamental group of G injects into the fundamental group of H. Since the fundamental group of a topological group is always abelian, every covering group is a normal covering space. In particular, if G is path-connected then the quotient group $\pi _{1}(H)/\pi _{1}(G)$ is isomorphic to K. The group K acts simply transitively on the fibers (which are just left cosets) by right multiplication. The group G is then a principal K-bundle over H.
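For instance, the exponential map $p=\exp \colon \mathbf {R} \to \mathbf {T} $ onto the circle group is a covering homomorphism with kernel $K=\mathbf {Z} $; here $\pi _{1}(\mathbf {T} )\cong \mathbf {Z} $ and $\pi _{1}(\mathbf {R} )$ is trivial, so $\pi _{1}(\mathbf {T} )/\pi _{1}(\mathbf {R} )\cong \mathbf {Z} \cong K$, as the general statement predicts.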
If G is a covering group of H then the groups G and H are locally isomorphic. Moreover, given any two connected locally isomorphic groups H1 and H2, there exists a topological group G with discrete normal subgroups K1 and K2 such that H1 is isomorphic to G/K1 and H2 is isomorphic to G/K2.
Group structure on a covering space
Let H be a topological group and let G be a covering space of H. If G and H are both path-connected and locally path-connected, then for any choice of element e* in the fiber over e ∈ H, there exists a unique topological group structure on G, with e* as the identity, for which the covering map p : G → H is a homomorphism.
The construction is as follows. Let a and b be elements of G and let f and g be paths in G starting at e* and terminating at a and b respectively. Define a path h : I → H by h(t) = p(f(t))p(g(t)). By the path-lifting property of covering spaces there is a unique lift of h to G with initial point e*. The product ab is defined as the endpoint of this path. By construction we have p(ab) = p(a)p(b). One must show that this definition is independent of the choice of paths f and g, and also that the group operations are continuous.
Alternatively, the group law on G can be constructed by lifting the group law H × H → H to G, using the lifting property of the covering map G × G → H × H.
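As a sanity check of the path construction above, take $p=\exp \colon \mathbf {R} \to \mathbf {T} $ with $e^{*}=0$: given $a,b\in \mathbf {R} $, choose paths $f,g$ in $\mathbf {R} $ from 0 to $a$ and $b$; then $h(t)=p(f(t))p(g(t))=\exp(f(t)+g(t))$ lifts to the path $f(t)+g(t)$ starting at 0, whose endpoint is $a+b$. The lifted group structure is thus ordinary addition on $\mathbf {R} $.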
The non-connected case is interesting and is studied in the papers by Taylor and by Brown-Mucuk cited below. Essentially there is an obstruction to the existence of a universal cover which is also a topological group such that the covering map is a morphism: this obstruction lies in the third cohomology group of the group of components of G with coefficients in the fundamental group of G at the identity.
Universal covering group
If H is a path-connected, locally path-connected, and semilocally simply connected group then it has a universal cover. By the previous construction the universal cover can be made into a topological group with the covering map a continuous homomorphism. This group is called the universal covering group of H. There is also a more direct construction which we give below.
Let PH be the path group of H. That is, PH is the space of paths in H based at the identity together with the compact-open topology. The product of paths is given by pointwise multiplication, i.e. (fg)(t) = f(t)g(t). This gives PH the structure of a topological group. There is a natural group homomorphism PH → H which sends each path to its endpoint. The universal cover of H is given as the quotient of PH by the normal subgroup of null-homotopic loops. The projection PH → H descends to the quotient giving the covering map. One can show that the universal cover is simply connected and the kernel is just the fundamental group of H. That is, we have a short exact sequence
$1\to \pi _{1}(H)\to {\tilde {H}}\to H\to 1$
where ${\tilde {H}}$ is the universal cover of H. Concretely, the universal covering group of H is the space of homotopy classes of paths in H with pointwise multiplication of paths. The covering map sends each path class to its endpoint.
Lattice of covering groups
As the above suggests, if a group has a universal covering group (if it is path-connected, locally path-connected, and semilocally simply connected) with discrete center, then the set of all topological groups covered by the universal covering group forms a lattice, corresponding to the lattice of subgroups of the center of the universal covering group: inclusion of subgroups corresponds to covering of quotient groups. The maximal element is the universal covering group ${\tilde {H}},$ while the minimal element is the universal covering group mod its center, ${\tilde {H}}/Z({\tilde {H}})$.
This corresponds algebraically to the universal perfect central extension (called "covering group", by analogy) as the maximal element, and a group mod its center as minimal element.
This is particularly important for Lie groups, as these groups are all the (connected) realizations of a particular Lie algebra. For many Lie groups the center is the group of scalar matrices, and thus the group mod its center is the projectivization of the Lie group. These covers are important in studying projective representations of Lie groups, and spin representations lead to the discovery of spin groups: a projective representation of a Lie group need not come from a linear representation of the group, but does come from a linear representation of some covering group, in particular the universal covering group. The finite analog led to the covering group or Schur cover, as discussed above.
A key example arises from SL2(R), which has center {±1} and fundamental group Z. It is a double cover of the centerless projective special linear group PSL2(R), which is obtained by taking the quotient by the center. By Iwasawa decomposition, both groups are circle bundles over the complex upper half-plane, and their universal cover ${\widetilde {\mathrm {SL} _{2}(\mathbf {R} )}}$ is a real line bundle over the half-plane that forms one of Thurston's eight geometries. Since the half-plane is contractible, all bundle structures are trivial. The preimage of SL2(Z) in the universal cover is isomorphic to the braid group on three strands.
Lie groups
See also: Group extension § Central extension
The above definitions and constructions all apply to the special case of Lie groups. In particular, every covering of a manifold is a manifold, and the covering homomorphism becomes a smooth map. Likewise, given any discrete normal subgroup of a Lie group the quotient group is a Lie group and the quotient map is a covering homomorphism.
Two Lie groups are locally isomorphic if and only if their Lie algebras are isomorphic. This implies that a homomorphism φ : G → H of Lie groups is a covering homomorphism if and only if the induced map on Lie algebras
$\phi _{*}:{\mathfrak {g}}\to {\mathfrak {h}}$
is an isomorphism.
Since for every Lie algebra ${\mathfrak {g}}$ there is a unique simply connected Lie group G with Lie algebra ${\mathfrak {g}}$, it follows that the universal covering group of a connected Lie group H is the (unique) simply connected Lie group G having the same Lie algebra as H.
Examples
• The universal covering group of the circle group T is the additive group of real numbers R with the covering homomorphism given by the exponential function exp: R → T. The kernel of the exponential map is isomorphic to Z.
• For any integer n we have a covering group of the circle by itself T → T which sends z to zn. The kernel of this homomorphism is the cyclic group consisting of the nth roots of unity.
• The rotation group SO(3) has as a universal cover the group SU(2), which is isomorphic to the group of versors in the quaternions. This is a double cover since the kernel has order 2 (cf. tangloids).
• The unitary group U(n) is covered by the compact group T × SU(n) with the covering homomorphism given by p(z, A) = zA. The universal cover is R × SU(n).
• The special orthogonal group SO(n) has a double cover called the spin group Spin(n). For n ≥ 3, the spin group is the universal cover of SO(n).
• For n ≥ 2, the universal cover of the special linear group SL(n, R) is not a matrix group (i.e. it has no faithful finite-dimensional representations).
References
• Pontryagin, Lev S. (1986). Topological Groups. trans. from Russian by Arlen Brown and P.S.V. Naidu (3rd ed.). Gordon & Breach Science. ISBN 2-88124-133-6.
• Taylor, R.L. (1954). "Covering groups of nonconnected topological groups". Proc. Amer. Math. Soc. 5: 753–768. doi:10.1090/S0002-9939-1954-0087028-0. JSTOR 2031861. MR 0087028.
• Brown, R.; Mucuk, O. (1994). "Covering groups of nonconnected topological groups revisited". Math. Proc. Cambridge Philos. Soc. 115 (1): 97–110. arXiv:math/0009021. Bibcode:2000math......9021B. CiteSeerX 10.1.1.236.9436. doi:10.1017/S0305004100071942.
Kähler differential
In mathematics, Kähler differentials provide an adaptation of differential forms to arbitrary commutative rings or schemes. The notion was introduced by Erich Kähler in the 1930s. It was adopted as standard in commutative algebra and algebraic geometry somewhat later, once the need was felt to adapt methods from calculus and geometry over the complex numbers to contexts where such methods are not available.
Definition
Let R and S be commutative rings and φ : R → S be a ring homomorphism. An important example is for R a field and S a unital algebra over R (such as the coordinate ring of an affine variety). Kähler differentials formalize the observation that the derivatives of polynomials are again polynomial. In this sense, differentiation is a notion which can be expressed in purely algebraic terms. This observation can be turned into a definition of the module
$\Omega _{S/R}$
of differentials in different, but equivalent ways.
Definition using derivations
An R-linear derivation on S is an R-module homomorphism $d:S\to M$ to an S-module M satisfying the Leibniz rule $d(fg)=f\,dg+g\,df$ (it automatically follows from this definition that the image of R is in the kernel of d [1]). The module of Kähler differentials is defined as the S-module $\Omega _{S/R}$ for which there is a universal derivation $d:S\to \Omega _{S/R}$. As with other universal properties, this means that d is the best possible derivation in the sense that any other derivation may be obtained from it by composition with an S-module homomorphism. In other words, the composition with d provides, for every S-module M, an S-module isomorphism
$\operatorname {Hom} _{S}(\Omega _{S/R},M){\xrightarrow {\cong }}\operatorname {Der} _{R}(S,M).$
One construction of ΩS/R and d proceeds by constructing a free S-module with one formal generator ds for each s in S, and imposing the relations
• dr = 0,
• d(s + t) = ds + dt,
• d(st) = s dt + t ds,
for all r in R and all s and t in S. The universal derivation sends s to ds. The relations imply that the universal derivation is a homomorphism of R-modules.
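For instance, for $S=R[t]$ these relations force $d(p(t))=p'(t)\,dt$ for every polynomial $p$; e.g. $d(t^{2})=t\,dt+t\,dt=2t\,dt$ by the third relation, so every Kähler differential of $R[t]$ over $R$ is an $R[t]$-multiple of $dt$ (see the computation for polynomial rings below).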
Definition using the augmentation ideal
Another construction proceeds by letting I be the ideal in the tensor product $S\otimes _{R}S$ defined as the kernel of the multiplication map
${\begin{cases}S\otimes _{R}S\to S\\\sum s_{i}\otimes t_{i}\mapsto \sum s_{i}\cdot t_{i}\end{cases}}$
Then the module of Kähler differentials of S can be equivalently defined by[2]
$\Omega _{S/R}=I/I^{2},$
and the universal derivation is the homomorphism d defined by
$ds=1\otimes s-s\otimes 1.$
This construction is equivalent to the previous one because I is the kernel of the projection
${\begin{cases}S\otimes _{R}S\to S\otimes _{R}R\\\sum s_{i}\otimes t_{i}\mapsto \sum s_{i}\cdot t_{i}\otimes 1\end{cases}}$
Thus we have:
$S\otimes _{R}S\equiv I\oplus S\otimes _{R}R.$
Then $S\otimes _{R}S/S\otimes _{R}R$ may be identified with I by the map induced by the complementary projection
$\sum s_{i}\otimes t_{i}\mapsto \sum s_{i}\otimes t_{i}-\sum s_{i}\cdot t_{i}\otimes 1.$
This identifies I with the S-module generated by the formal generators ds for s in S, subject to d being a homomorphism of R-modules which sends each element of R to zero. Taking the quotient by I2 precisely imposes the Leibniz rule.
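To see the last point concretely, a direct computation in $S\otimes _{R}S$ gives
$(1\otimes st-st\otimes 1)-(s\otimes 1)(1\otimes t-t\otimes 1)-(t\otimes 1)(1\otimes s-s\otimes 1)=(1\otimes s-s\otimes 1)(1\otimes t-t\otimes 1)\in I^{2},$
so $d(st)=s\,dt+t\,ds$ holds in $I/I^{2}$.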
Examples and basic facts
For any commutative ring R, the Kähler differentials of the polynomial ring $S=R[t_{1},\dots ,t_{n}]$ are a free S-module of rank n generated by the differentials of the variables:
$\Omega _{R[t_{1},\dots ,t_{n}]/R}^{1}=\bigoplus _{i=1}^{n}R[t_{1},\dots ,t_{n}]\,dt_{i}.$
Kähler differentials are compatible with extension of scalars, in the sense that for a second R-algebra R′ and for $S'=R'\otimes _{R}S$, there is an isomorphism
$\Omega _{S/R}\otimes _{S}S'\cong \Omega _{S'/R'}.$
As a particular case of this, Kähler differentials are compatible with localizations, meaning that if W is a multiplicative set in S, then there is an isomorphism
$W^{-1}\Omega _{S/R}\cong \Omega _{W^{-1}S/R}.$
Given two ring homomorphisms $R\to S\to T$, there is a short exact sequence of T-modules
$\Omega _{S/R}\otimes _{S}T\to \Omega _{T/R}\to \Omega _{T/S}\to 0.$
If $T=S/I$ for some ideal I, the term $\Omega _{T/S}$ vanishes and the sequence can be continued at the left as follows:
$I/I^{2}{\xrightarrow {[f]\mapsto df\otimes 1}}\Omega _{S/R}\otimes _{S}T\to \Omega _{T/R}\to 0.$
A generalization of these two short exact sequences is provided by the cotangent complex.
The latter sequence and the above computation for the polynomial ring allows the computation of the Kähler differentials of finitely generated R-algebras $T=R[t_{1},\ldots ,t_{n}]/(f_{1},\ldots ,f_{m})$. Briefly, these are generated by the differentials of the variables and have relations coming from the differentials of the equations. For example, for a single polynomial in a single variable,
$\Omega _{(R[t]/(f))/R}\cong (R[t]\,dt\otimes _{R[t]}R[t]/(f))/(df)\cong R[t]/(f,df/dt)\,dt.$
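For instance, taking $f=t^{2}-1$ over $R=\mathbb {C} $ gives
$\Omega _{(\mathbb {C} [t]/(t^{2}-1))/\mathbb {C} }\cong \mathbb {C} [t]/(t^{2}-1,2t)\,dt=0,$
since the ideal $(t^{2}-1,2t)$ contains $t\cdot t-(t^{2}-1)=1$; this reflects the fact that $\mathbb {C} [t]/(t^{2}-1)\cong \mathbb {C} \times \mathbb {C} $ is étale over $\mathbb {C} $. By contrast, the non-reduced algebra $\mathbb {C} [t]/(t^{2})$ has
$\Omega _{(\mathbb {C} [t]/(t^{2}))/\mathbb {C} }\cong \mathbb {C} [t]/(t^{2},2t)\,dt\cong \mathbb {C} \,dt\neq 0.$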
Kähler differentials for schemes
Because Kähler differentials are compatible with localization, they may be constructed on a general scheme by performing either of the two definitions above on affine open subschemes and gluing. However, the second definition has a geometric interpretation that globalizes immediately. In this interpretation, I represents the ideal defining the diagonal in the fiber product $\operatorname {Spec} (S)\times _{\operatorname {Spec} (R)}\operatorname {Spec} (S)$. This construction therefore has a more geometric flavor, in the sense that it captures the notion of the first infinitesimal neighbourhood of the diagonal, via functions vanishing modulo functions vanishing at least to second order (see cotangent space for related notions). Moreover, it extends to a general morphism of schemes $f:X\to Y$ by setting ${\mathcal {I}}$ to be the ideal of the diagonal in the fiber product $X\times _{Y}X$. The cotangent sheaf $\Omega _{X/Y}={\mathcal {I}}/{\mathcal {I}}^{2}$, together with the derivation $d:{\mathcal {O}}_{X}\to \Omega _{X/Y}$ defined analogously to before, is universal among $f^{-1}{\mathcal {O}}_{Y}$-linear derivations of ${\mathcal {O}}_{X}$-modules. If U is an open affine subscheme of X whose image in Y is contained in an open affine subscheme V, then the cotangent sheaf restricts to a sheaf on U which is similarly universal. It is therefore the sheaf associated to the module of Kähler differentials for the rings underlying U and V.
Similar to the commutative algebra case, there exist exact sequences associated to morphisms of schemes. Given morphisms $f:X\to Y$ and $g:Y\to Z$ of schemes there is an exact sequence of sheaves on $X$
$f^{*}\Omega _{Y/Z}\to \Omega _{X/Z}\to \Omega _{X/Y}\to 0$
Also, if $X\subset Y$ is a closed subscheme given by the ideal sheaf ${\mathcal {I}}$, then $\Omega _{X/Y}=0$ and there is an exact sequence of sheaves on $X$
${\mathcal {I}}/{\mathcal {I}}^{2}\to \Omega _{Y/Z}|_{X}\to \Omega _{X/Z}\to 0$
Finite separable field extensions
If $K/k$ is a finite field extension, then $\Omega _{K/k}^{1}=0$ if and only if $K/k$ is separable. Consequently, if $K/k$ is a finite separable field extension and $\pi :Y\to \operatorname {Spec} (K)$ is a smooth variety (or scheme), then the relative cotangent sequence
$\pi ^{*}\Omega _{K/k}^{1}\to \Omega _{Y/k}^{1}\to \Omega _{Y/K}^{1}\to 0$
proves $\Omega _{Y/k}^{1}\cong \Omega _{Y/K}^{1}$.
Cotangent modules of a projective variety
Given a projective scheme $X\in \operatorname {Sch} /\mathbb {k} $, its cotangent sheaf can be computed from the sheafification of the cotangent module on the underlying graded algebra. For example, consider the complex curve
$\operatorname {Proj} \left({\frac {\mathbb {C} [x,y,z]}{(x^{n}+y^{n}-z^{n})}}\right)=\operatorname {Proj} (R)$
then we can compute the cotangent module as
$\Omega _{R/\mathbb {C} }={\frac {R\cdot dx\oplus R\cdot dy\oplus R\cdot dz}{(nx^{n-1}\,dx+ny^{n-1}\,dy-nz^{n-1}\,dz)}}$
Then,
$\Omega _{X/\mathbb {C} }={\widetilde {\Omega _{R/\mathbb {C} }}}$
Morphisms of schemes
Consider the morphism
$X=\operatorname {Spec} \left({\frac {\mathbb {C} [t,x,y]}{(xy-t)}}\right)=\operatorname {Spec} (R)\to \operatorname {Spec} (\mathbb {C} [t])=Y$
in $\operatorname {Sch} /\mathbb {C} $. Then, using the first sequence we see that
${\widetilde {R\cdot dt}}\to {\widetilde {\frac {R\cdot dt\oplus R\cdot dx\oplus R\cdot dy}{(y\,dx+x\,dy-dt)}}}\to \Omega _{X/Y}\to 0$
hence
$\Omega _{X/Y}={\widetilde {\frac {R\cdot dx\oplus R\cdot dy}{(y\,dx+x\,dy)}}}$
Higher differential forms and algebraic de Rham cohomology
de Rham complex
As before, fix a map $X\to Y$. Differential forms of higher degree are defined as the exterior powers (over ${\mathcal {O}}_{X}$),
$\Omega _{X/Y}^{n}:=\bigwedge ^{n}\Omega _{X/Y}.$
The derivation ${\mathcal {O}}_{X}\to \Omega _{X/Y}$ extends in a natural way to a sequence of maps
$0\to {\mathcal {O}}_{X}{\xrightarrow {d}}\Omega _{X/Y}^{1}{\xrightarrow {d}}\Omega _{X/Y}^{2}{\xrightarrow {d}}\cdots $
satisfying $d\circ d=0.$ This is a cochain complex known as the de Rham complex.
The de Rham complex enjoys an additional multiplicative structure, the wedge product
$\Omega _{X/Y}^{n}\otimes \Omega _{X/Y}^{m}\to \Omega _{X/Y}^{n+m}.$
This turns the de Rham complex into a commutative differential graded algebra. It also has a coalgebra structure inherited from the one on the exterior algebra.[3]
de Rham cohomology
The hypercohomology of the de Rham complex of sheaves is called the algebraic de Rham cohomology of X over Y and is denoted by $H_{\text{dR}}^{n}(X/Y)$ or just $H_{\text{dR}}^{n}(X)$ if Y is clear from the context. (In many situations, Y is the spectrum of a field of characteristic zero.) Algebraic de Rham cohomology was introduced by Grothendieck (1966a). It is closely related to crystalline cohomology.
As is familiar from coherent cohomology of other quasi-coherent sheaves, the computation of de Rham cohomology is simplified when X = Spec S and Y = Spec R are affine schemes. In this case, because affine schemes have no higher cohomology, $H_{\text{dR}}^{n}(X/Y)$ can be computed as the cohomology of the complex of abelian groups
$0\to S{\xrightarrow {d}}\Omega _{S/R}^{1}{\xrightarrow {d}}\Omega _{S/R}^{2}{\xrightarrow {d}}\cdots $
which is, termwise, the global sections of the sheaves $\Omega _{X/Y}^{r}$.
To take a very particular example, suppose that $X=\operatorname {Spec} \mathbb {Q} \left[x,x^{-1}\right]$ is the multiplicative group over $\mathbb {Q} .$ Because this is an affine scheme, hypercohomology reduces to ordinary cohomology. The algebraic de Rham complex is
$\mathbb {Q} [x,x^{-1}]{\xrightarrow {d}}\mathbb {Q} [x,x^{-1}]\,dx.$
The differential d obeys the usual rules of calculus, meaning $d(x^{n})=nx^{n-1}\,dx.$ The kernel and cokernel compute algebraic de Rham cohomology, so
${\begin{aligned}H_{\text{dR}}^{0}(X)&=\mathbb {Q} \\H_{\text{dR}}^{1}(X)&=\mathbb {Q} \cdot x^{-1}dx\end{aligned}}$
and all other algebraic de Rham cohomology groups are zero. By way of comparison, the algebraic de Rham cohomology groups of $Y=\operatorname {Spec} \mathbb {F} _{p}\left[x,x^{-1}\right]$ are much larger, namely,
${\begin{aligned}H_{\text{dR}}^{0}(Y)&=\bigoplus _{k\in \mathbb {Z} }\mathbb {F} _{p}\cdot x^{kp}\\H_{\text{dR}}^{1}(Y)&=\bigoplus _{k\in \mathbb {Z} }\mathbb {F} _{p}\cdot x^{kp-1}\,dx\end{aligned}}$
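Concretely, in characteristic p the differential annihilates all p-th powers:
$d\left(x^{kp}\right)=kp\,x^{kp-1}\,dx=0\quad {\text{in }}\mathbb {F} _{p}[x,x^{-1}],$
so every $x^{kp}$ is closed, while a form $x^{n}\,dx$ has the antiderivative $x^{n+1}/(n+1)$ only when $p\nmid n+1$; the forms $x^{kp-1}\,dx$ therefore represent nonzero classes in $H_{\text{dR}}^{1}(Y)$.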
Since the dimensions of these cohomology groups do not match the Betti numbers one would expect, crystalline cohomology was developed to remedy this issue; it defines a Weil cohomology theory over finite fields.
Grothendieck's comparison theorem
If X is a smooth complex algebraic variety, there is a natural comparison map of complexes of sheaves
$\Omega _{X/\mathbb {C} }^{\bullet }(-)\to \Omega _{X^{\text{an}}}^{\bullet }((-)^{\text{an}})$
between the algebraic de Rham complex and the smooth de Rham complex defined in terms of (complex-valued) differential forms on $X^{\text{an}}$, the complex manifold associated to X. Here, $ (-)^{\text{an}}$ denotes the complex analytification functor. This map is far from being an isomorphism. Nonetheless, Grothendieck (1966a) showed that the comparison map induces an isomorphism
$H_{\text{dR}}^{\ast }(X/\mathbb {C} )\cong H_{\text{dR}}^{\ast }(X^{\text{an}})$
from algebraic to smooth de Rham cohomology (and thus to singular cohomology $ H_{\text{sing}}^{*}(X^{\text{an}};\mathbb {C} )$ by de Rham's theorem). In particular, if X is a smooth affine algebraic variety embedded in $ \mathbb {C} ^{n}$, then the inclusion of the subcomplex of algebraic differential forms into that of all smooth forms on X is a quasi-isomorphism. For example, if
$X=\{(w,z)\in \mathbb {C} ^{2}:wz=1\}$,
then as shown above, the computation of algebraic de Rham cohomology gives explicit generators $ \{1,z^{-1}dz\}$ for $H_{\text{dR}}^{0}(X/\mathbb {C} )$ and $H_{\text{dR}}^{1}(X/\mathbb {C} )$, respectively, while all other cohomology groups vanish. Since X is homotopy equivalent to a circle, this is as predicted by Grothendieck's theorem.
Counter-examples in the singular case can be found with non-Du Bois singularities such as the graded ring $k[x,y]/(y^{2}-x^{3})$ where $\deg(y)=3$ and $\deg(x)=2$.[4] Other counterexamples can be found in algebraic plane curves with isolated singularities whose Milnor and Tjurina numbers are non-equal.[5]
A proof of Grothendieck's theorem using the concept of a mixed Weil cohomology theory was given by Cisinski & Déglise (2013).
Applications
Canonical divisor
If X is a smooth variety over a field k, then $\Omega _{X/k}$ is a vector bundle (i.e., a locally free ${\mathcal {O}}_{X}$-module) of rank equal to the dimension of X. This implies, in particular, that
$\omega _{X/k}:=\bigwedge ^{\dim X}\Omega _{X/k}$
is a line bundle; the associated divisor class is referred to as the canonical divisor. The canonical sheaf is, as it turns out, a dualizing complex (up to a shift) and therefore appears in various important theorems in algebraic geometry such as Serre duality or Verdier duality.
Classification of algebraic curves
The geometric genus of a smooth algebraic variety X of dimension d over a field k is defined as the dimension
$g:=\dim H^{0}(X,\Omega _{X/k}^{d}).$
For curves, this purely algebraic definition agrees with the topological definition (for $k=\mathbb {C} $) as the "number of handles" of the Riemann surface associated to X. There is a rather sharp trichotomy of geometric and arithmetic properties depending on the genus of a curve, for g being 0 (rational curves), 1 (elliptic curves), and greater than 1 (hyperbolic Riemann surfaces, including hyperelliptic curves), respectively.
Tangent bundle and Riemann–Roch theorem
The tangent bundle of a smooth variety X is, by definition, the dual of the cotangent sheaf $\Omega _{X/k}$. The Riemann–Roch theorem and its far-reaching generalization, the Grothendieck–Riemann–Roch theorem, contain as a crucial ingredient the Todd class of the tangent bundle.
Unramified and smooth morphisms
The sheaf of differentials is related to various algebro-geometric notions. A morphism $f:X\to Y$ of schemes is unramified if and only if $\Omega _{X/Y}$ is zero.[6] A special case of this assertion is that for a field k, $K:=k[t]/(f)$ is separable over k if and only if $\Omega _{K/k}=0$, which can also be read off the above computation.
A morphism f of finite type is a smooth morphism if it is flat and if $\Omega _{X/Y}$ is a locally free ${\mathcal {O}}_{X}$-module of appropriate rank. The computation of $\Omega _{R[t_{1},\ldots ,t_{n}]/R}$ above shows that the projection from affine space $\mathbb {A} _{R}^{n}\to \operatorname {Spec} (R)$ is smooth.
Periods
Periods are, broadly speaking, integrals of certain arithmetically defined differential forms.[7] The simplest example of a period is $2\pi i$, which arises as
$\int _{S^{1}}{\frac {dz}{z}}=2\pi i.$
Algebraic de Rham cohomology is used to construct periods as follows:[8] For an algebraic variety X defined over $\mathbb {Q} ,$ the above-mentioned compatibility with base-change yields a natural isomorphism
$H_{\text{dR}}^{n}(X/\mathbb {Q} )\otimes _{\mathbb {Q} }\mathbb {C} =H_{\text{dR}}^{n}(X\otimes _{\mathbb {Q} }\mathbb {C} /\mathbb {C} ).$
On the other hand, the right hand cohomology group is isomorphic to the de Rham cohomology of the complex manifold $X^{\text{an}}$ associated to X, denoted here $H_{\text{dR}}^{n}(X^{\text{an}}).$ Yet another classical result, de Rham's theorem, asserts an isomorphism of the latter cohomology group with singular cohomology (or sheaf cohomology) with complex coefficients, $H^{n}(X^{\text{an}},\mathbb {C} )$, which by the universal coefficient theorem is in turn isomorphic to $H^{n}(X^{\text{an}},\mathbb {Q} )\otimes _{\mathbb {Q} }\mathbb {C} .$ Composing these isomorphisms yields two rational vector spaces which, after tensoring with $\mathbb {C} $, become isomorphic. Choosing bases of these rational subspaces (also called lattices), the determinant of the base-change matrix is a complex number, well defined up to multiplication by a rational number. Such numbers are periods.
Algebraic number theory
In algebraic number theory, Kähler differentials may be used to study the ramification in an extension of algebraic number fields. If L / K is a finite extension with rings of integers O and o respectively then the different ideal δL / K, which encodes the ramification data, is the annihilator of the O-module ΩO/o:[9]
$\delta _{L/K}=\{x\in O:x\,dy=0{\text{ for all }}y\in O\}.$
Related notions
Hochschild homology is a homology theory for associative rings that turns out to be closely related to Kähler differentials. This is because of the Hochschild–Kostant–Rosenberg theorem, which states that the Hochschild homology $HH_{\bullet }(R)$ of the coordinate ring $R$ of a smooth affine variety is isomorphic to the de Rham algebra $\Omega _{R/k}^{\bullet }$ for $k$ a field of characteristic $0$. A derived enhancement of this theorem states that the Hochschild homology of a differential graded algebra is isomorphic to the derived de Rham complex.
The de Rham–Witt complex is, in very rough terms, an enhancement of the de Rham complex for the ring of Witt vectors.
Notes
1. "Stacks Project". Retrieved 2022-11-21.
2. Hartshorne (1977, p. 172)
3. Laurent-Gengoux, C.; Pichereau, A.; Vanhaecke, P. (2013), Poisson structures, Springer, §3.2.3, ISBN 978-3-642-31090-4
4. "algebraic de Rham cohomology of singular varieties", mathoverflow.net
5. Arapura, Donu; Kang, Su-Jeong (2011), "Kähler-de Rham cohomology and Chern classes" (PDF), Communications in Algebra, 39 (4): 1153–1167, doi:10.1080/00927871003610320, MR 2782596, S2CID 15924437, archived from the original (PDF) on 2015-11-12
6. Milne, James, Etale cohomology, Proposition I.3.5; the map f is supposed to be locally of finite type for this statement.
7. André, Yves (2004), Une introduction aux motifs, Partie III: Société Mathématique de France
8. Periods and Nori Motives (PDF), Elementary examples
9. Neukirch (1999, p. 201)
References
• Cisinski, Denis-Charles; Déglise, Frédéric (2013), "Mixed Weil cohomologies", Advances in Mathematics, 230 (1): 55–130, arXiv:0712.3291, doi:10.1016/j.aim.2011.10.021
• Grothendieck, Alexander (1966a), "On the de Rham cohomology of algebraic varieties", Publications Mathématiques de l'IHÉS, 29 (29): 95–103, doi:10.1007/BF02684807, ISSN 0073-8301, MR 0199194, S2CID 123434721 (letter to Michael Atiyah, October 14, 1963)
• Grothendieck, Alexander (1966b), Letter to John Tate (PDF)
• Grothendieck, Alexander (1968), "Crystals and the de Rham cohomology of schemes" (PDF), in Giraud, Jean; Grothendieck, Alexander; Kleiman, Steven L.; et al. (eds.), Dix Exposés sur la Cohomologie des Schémas, Advanced studies in pure mathematics, vol. 3, Amsterdam: North-Holland, pp. 306–358, MR 0269663
• Johnson, James (1969), "Kähler differentials and differential algebra", Annals of Mathematics, 89 (1): 92–98, doi:10.2307/1970810, JSTOR 1970810, Zbl 0179.34302
• Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
• Matsumura, Hideyuki (1986), Commutative ring theory, Cambridge University Press
• Neukirch, Jürgen (1999), Algebraische Zahlentheorie, Grundlehren der mathematischen Wissenschaften, vol. 322, Berlin: Springer-Verlag, ISBN 978-3-540-65399-8, MR 1697859, Zbl 0956.11021
• Rosenlicht, M. (1976), "On Liouville's theory of elementary functions" (PDF), Pacific Journal of Mathematics, 65 (2): 485–492, doi:10.2140/pjm.1976.65.485, Zbl 0318.12107
• Fu, Guofeng; Halás, Miroslav; Li, Ziming (2011), "Some remarks on Kähler differentials and ordinary differentials in nonlinear control systems", Systems and Control Letters, 60: 699–703, doi:10.1016/j.sysconle.2011.05.006
External links
• Notes on p-adic algebraic de-Rham cohomology - gives many computations over characteristic 0 as motivation
• A thread devoted to the relation on algebraic and analytic differential forms
• Differentials (Stacks project)
Enveloping von Neumann algebra
In operator algebras, the enveloping von Neumann algebra of a C*-algebra is a von Neumann algebra that contains all the operator-algebraic information about the given C*-algebra. This may also be called the universal enveloping von Neumann algebra, since it is given by a universal property; and (as always with von Neumann algebras) the term W*-algebra may be used in place of von Neumann algebra.
Definition
Let A be a C*-algebra and πU be its universal representation, acting on Hilbert space HU. The image of πU, πU(A), is a C*-subalgebra of bounded operators on HU. The enveloping von Neumann algebra of A is the closure of πU(A) in the weak operator topology. It is sometimes denoted by A′′.
Properties
The universal representation πU and A′′ satisfy the following universal property: for any representation π, there is a unique *-homomorphism
$\Phi :\pi _{U}(A)''\rightarrow \pi (A)''$
that is continuous in the weak operator topology and the restriction of Φ to πU(A) is π.
As a particular case, one can consider the continuous functional calculus, whose unique extension gives a canonical Borel functional calculus.
By the Sherman–Takeda theorem, the double dual of a C*-algebra A, A**, can be identified with A′′, as Banach spaces.
Every representation of A uniquely determines a central projection (i.e. a projection in the center of the algebra) in A′′; it is called the central cover of that representation.
See also
• Universal enveloping algebra
Universal generalization
In predicate logic, generalization (also universal generalization or universal introduction,[1][2][3] GEN) is a valid inference rule. It states that if $\vdash \!P(x)$ has been derived, then $\vdash \!\forall x\,P(x)$ can be derived.
Universal generalization
Type: Rule of inference
Field: Predicate logic
Statement: Suppose $P$ is true of any arbitrarily selected $p$; then $P$ is true of everything.
Symbolic statement: $\vdash \!P(x)$, $\vdash \!\forall x\,P(x)$
Transformation rules
Propositional calculus
Rules of inference
• Implication introduction / elimination (modus ponens)
• Biconditional introduction / elimination
• Conjunction introduction / elimination
• Disjunction introduction / elimination
• Disjunctive / hypothetical syllogism
• Constructive / destructive dilemma
• Absorption / modus tollens / modus ponendo tollens
• Negation introduction
Rules of replacement
• Associativity
• Commutativity
• Distributivity
• Double negation
• De Morgan's laws
• Transposition
• Material implication
• Exportation
• Tautology
Predicate logic
Rules of inference
• Universal generalization / instantiation
• Existential generalization / instantiation
Generalization with hypotheses
The full generalization rule allows for hypotheses to the left of the turnstile, but with restrictions. Assume $\Gamma $ is a set of formulas, $\varphi $ a formula, and $\Gamma \vdash \varphi (y)$ has been derived. The generalization rule states that $\Gamma \vdash \forall x\,\varphi (x)$ can be derived if $y$ is not mentioned in $\Gamma $ and $x$ does not occur in $\varphi $.
These restrictions are necessary for soundness. Without the first restriction, one could conclude $\forall xP(x)$ from the hypothesis $P(y)$. Without the second restriction, one could make the following deduction:
1. $\exists z\,\exists w\,(z\not =w)$ (Hypothesis)
2. $\exists w\,(y\not =w)$ (Existential instantiation)
3. $y\not =x$ (Existential instantiation)
4. $\forall x\,(x\not =x)$ (Faulty universal generalization)
This purports to show that $\exists z\,\exists w\,(z\not =w)\vdash \forall x\,(x\not =x),$ which is an unsound deduction. Note that $\Gamma \vdash \forall y\,\varphi (y)$ is permissible if $y$ is not mentioned in $\Gamma $ (the second restriction need not apply, as the semantic structure of $\varphi (y)$ is not being changed by the substitution of any variables).
Example of a proof
Prove: $\forall x\,(P(x)\rightarrow Q(x))\rightarrow (\forall x\,P(x)\rightarrow \forall x\,Q(x))$ is derivable from $\forall x\,(P(x)\rightarrow Q(x))$ and $\forall x\,P(x)$.
Proof:
Step Formula Justification
1 $\forall x\,(P(x)\rightarrow Q(x))$ Hypothesis
2 $\forall x\,P(x)$ Hypothesis
3 $(\forall x\,(P(x)\rightarrow Q(x)))\rightarrow (P(y)\rightarrow Q(y))$ Universal instantiation
4 $P(y)\rightarrow Q(y)$ From (1) and (3) by Modus ponens
5 $(\forall x\,P(x))\rightarrow P(y)$ Universal instantiation
6 $P(y)\ $ From (2) and (5) by Modus ponens
7 $Q(y)\ $ From (6) and (4) by Modus ponens
8 $\forall x\,Q(x)$ From (7) by Generalization
9 $\forall x\,(P(x)\rightarrow Q(x)),\forall x\,P(x)\vdash \forall x\,Q(x)$ Summary of (1) through (8)
10 $\forall x\,(P(x)\rightarrow Q(x))\vdash \forall x\,P(x)\rightarrow \forall x\,Q(x)$ From (9) by Deduction theorem
11 $\vdash \forall x\,(P(x)\rightarrow Q(x))\rightarrow (\forall x\,P(x)\rightarrow \forall x\,Q(x))$ From (10) by Deduction theorem
In this proof, universal generalization was used in step 8. The deduction theorem was applicable in steps 10 and 11 because the formulas being moved have no free variables.
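For illustration, the same derivation can be written as a single term in the Lean 4 proof assistant (a sketch, not part of the classical presentation above); the function abstraction over the arbitrary y plays the role of universal generalization:
example {α : Type} (P Q : α → Prop)
    (h1 : ∀ x, P x → Q x) (h2 : ∀ x, P x) : ∀ x, Q x :=
  fun y => h1 y (h2 y)  -- abstracting over the arbitrary y is generalization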
See also
• First-order logic
• Hasty generalization
• Universal instantiation
References
1. Copi and Cohen
2. Hurley
3. Moore and Parker
Universal geometric algebra
In mathematics, a universal geometric algebra is a type of geometric algebra generated by real vector spaces endowed with an indefinite quadratic form. Some authors restrict this to the infinite-dimensional case.
The universal geometric algebra ${\mathcal {G}}(n,n)$ of order $2^{2n}$ is defined as the Clifford algebra of the 2n-dimensional pseudo-Euclidean space $\mathbb {R} ^{n,n}$.[1] This algebra is also called the "mother algebra". It has a nondegenerate signature. The vectors in this space generate the algebra through the geometric product. This product makes the manipulation of vectors more similar to the familiar algebraic rules, although it is non-commutative.
When n = ∞, i.e. there are countably many dimensions, then ${\mathcal {G}}(\infty ,\infty )$ is called simply the universal geometric algebra (UGA), which contains vector spaces such as $\mathbb {R} ^{p,q}$ and their respective geometric algebras ${\mathcal {G}}(p,q)$.
UGA contains all finite-dimensional geometric algebras (GA).
The elements of UGA are called multivectors. Every multivector can be written as the sum of several r-vectors. Some r-vectors are scalars (r = 0), vectors (r = 1) and bivectors (r = 2).
One may generate a finite-dimensional GA by choosing a unit pseudoscalar (I). The set of all vectors that satisfy
$a\wedge I=0$
is a vector space. The geometric product of the vectors in this vector space then defines the GA, of which I is a member. Since every finite-dimensional GA has a unique I (up to a sign), one can define or characterize the GA by it. A pseudoscalar can be interpreted as an n-plane segment of unit area in an n-dimensional vector space.
Vector manifolds
A vector manifold is a special set of vectors in the UGA.[2] These vectors generate a set of linear spaces tangent to the vector manifold. Vector manifolds were introduced to do calculus on manifolds so one can define (differentiable) manifolds as a set isomorphic to a vector manifold. The difference lies in that a vector manifold is algebraically rich while a manifold is not. Since this is the primary motivation for vector manifolds the following interpretation is rewarding.
Consider a vector manifold as a special set of "points". These points are members of an algebra and so can be added and multiplied. These points generate a tangent space of definite dimension "at" each point. This tangent space generates a (unit) pseudoscalar which is a function of the points of the vector manifold. A vector manifold is characterized by its pseudoscalar. The pseudoscalar can be interpreted as a tangent oriented n-plane segment of unit area. Bearing this in mind, a manifold looks locally like Rn at every point.
Although a vector manifold can be treated as a completely abstract object, a geometric algebra is created so that every element of the algebra represents a geometric object and algebraic operations such as adding and multiplying correspond to geometric transformations.
Consider a set of vectors $\{x\}=M^{n}$ in UGA. If this set of vectors generates a set of "tangent" simple (n + 1)-vectors, which is to say
$\forall x\in M^{n}:\exists I_{n}(x)=x\wedge A(x)\mid I_{n}(x)\lor M^{n}=x$
then $M^{n}$ is a vector manifold; the value of A is that of a simple n-vector. If one interprets these vectors as points, then $I_{n}(x)$ is the pseudoscalar of an algebra tangent to $M^{n}$ at x. $I_{n}(x)$ can be interpreted as a unit area at an oriented n-plane: this is why it is labeled with n. The function $I_{n}$ gives a distribution of these tangent n-planes over $M^{n}$.
A vector manifold is defined similarly to how a particular GA can be defined, by its unit pseudoscalar. The set {x} is not closed under addition and multiplication by scalars. This set is not a vector space. At every point the vectors generate a tangent space of definite dimension. The vectors in this tangent space are different from the vectors of the vector manifold. In comparison to the original set they are bivectors, but since they span a linear space—the tangent space—they are also referred to as vectors. Notice that the dimension of this space is the dimension of the manifold. This linear space generates an algebra and its unit pseudoscalar characterizes the vector manifold. This is the manner in which the set of abstract vectors {x} defines the vector manifold. Once the set of "points" generates the "tangent space" the "tangent algebra" and its "pseudoscalar" follow immediately.
The unit pseudoscalar of the vector manifold is a (pseudoscalar-valued) function of the points on the vector manifold. If this function is smooth, one says that the vector manifold is smooth.[3] A manifold can be defined as a set isomorphic to a vector manifold. The points of a manifold do not have any algebraic structure and pertain only to the set itself. This is the main difference between a vector manifold and a manifold that is isomorphic to it. A vector manifold is always a subset of the universal geometric algebra by definition, and its elements can be manipulated algebraically. In contrast, a manifold is not a subset of any set other than itself, and its elements have no algebraic relation among them.
The differential geometry of a manifold[3] can be carried out in a vector manifold. All quantities relevant to differential geometry can be calculated from In(x) if it is a differentiable function. This is the original motivation behind its definition. Vector manifolds allow an approach to the differential geometry of manifolds alternative to the "build-up" approach where structures such as metrics, connections and fiber bundles are introduced as needed.[4] The relevant structure of a vector manifold is its tangent algebra. The use of geometric calculus along with the definition of vector manifold allow the study of geometric properties of manifolds without using coordinates.
See also
• Conformal geometric algebra
References
1. Pozo, José María; Sobczyk, Garret. Geometric Algebra in Linear Algebra and Geometry
2. Chapter 1 of: [D. Hestenes & G. Sobczyk] From Clifford Algebra to Geometric Calculus
3. Chapter 4 of: [D. Hestenes & G. Sobczyk] From Clifford Algebra to Geometric Calculus
4. Chapter 5 of: [D. Hestenes & G. Sobczyk] From Clifford Algebra to Geometric Calculus
• D. Hestenes, G. Sobczyk (1987-08-31). Clifford Algebra to Geometric Calculus: a Unified Language for mathematics and Physics. Springer. ISBN 902-772-561-6.
• C. Doran, A. Lasenby (2003-05-29). "6.5 Embedded Surfaces and Vector Manifolds". Geometric Algebra for Physicists. Cambridge University Press. ISBN 0-521-715-954.
• L. Dorst, J. Lasenby (2011). "19". Guide to Geometric Algebra in Practice. Springer. ISBN 0-857-298-100.
• Hongbo Li (2008). Invariant Algebras And Geometric Reasoning. World Scientific. ISBN 981-270-808-1.
Universal graph
In mathematics, a universal graph is an infinite graph that contains every finite (or at-most-countable) graph as an induced subgraph. A universal graph of this type was first constructed by Richard Rado[1][2] and is now called the Rado graph or random graph. More recent work[3] [4] has focused on universal graphs for a graph family F: that is, an infinite graph belonging to F that contains all finite graphs in F. For instance, the Henson graphs are universal in this sense for the i-clique-free graphs.
A universal graph for a family F of graphs can also refer to a member of a sequence of finite graphs that contains all graphs in F; for instance, every finite tree is a subgraph of a sufficiently large hypercube graph,[5] so a hypercube can be said to be a universal graph for trees. However, it is not the smallest such graph: it is known that there is a universal graph for n-vertex trees with only n vertices and O(n log n) edges, and that this is optimal.[6] A construction based on the planar separator theorem can be used to show that n-vertex planar graphs have universal graphs with $O(n^{3/2})$ edges, and that bounded-degree planar graphs have universal graphs with O(n log n) edges.[7][8][9] It is also possible to construct universal graphs for planar graphs that have $n^{1+o(1)}$ vertices.[10] Sumner's conjecture states that tournaments are universal for polytrees, in the sense that every tournament with 2n − 2 vertices contains every polytree with n vertices as a subgraph.[11]
A family F of graphs has a universal graph of polynomial size, containing every n-vertex graph as an induced subgraph, if and only if it has an adjacency labelling scheme in which vertices may be labeled by O(log n)-bit bitstrings such that an algorithm can determine whether two vertices are adjacent by examining their labels. For, if a universal graph of this type exists, the vertices of any graph in F may be labeled by the identities of the corresponding vertices in the universal graph, and conversely if a labeling scheme exists then a universal graph may be constructed having a vertex for every possible label.[12]
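As a toy illustration of such a labelling scheme (a sketch under simplifying assumptions, not the construction of the cited papers): rooted forests admit O(log n)-bit adjacency labels by storing each vertex together with its parent, so by this correspondence the family of forests has a universal graph of polynomial size.
def make_labels(parent):
    # Label each vertex v of a rooted forest by the pair (v, parent[v]);
    # with vertex names below n, each label takes O(log n) bits.
    return {v: (v, parent[v]) for v in parent}

def adjacent(label_u, label_v):
    # In a forest, two vertices are adjacent iff one is the other's parent.
    (u, pu), (v, pv) = label_u, label_v
    return pu == v or pv == u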
In older mathematical terminology, the phrase "universal graph" was sometimes used to denote a complete graph.
The notion of universal graph has been adapted and used for solving mean payoff games.[13]
References
1. Rado, R. (1964). "Universal graphs and universal functions". Acta Arithmetica. 9 (4): 331–340. doi:10.4064/aa-9-4-331-340. MR 0172268.
2. Rado, R. (1967). "Universal graphs". A Seminar in Graph Theory. Holt, Rinehart, and Winston. pp. 83–85. MR 0214507.
3. Goldstern, Martin; Kojman, Menachem (1996). "Universal arrow-free graphs". Acta Mathematica Hungarica. 73 (4): 319–326. arXiv:math.LO/9409206. doi:10.1007/BF00052907. MR 1428038.
4. Cherlin, Greg; Shelah, Saharon; Shi, Niandong (1999). "Universal graphs with forbidden subgraphs and algebraic closure". Advances in Applied Mathematics. 22 (4): 454–491. arXiv:math.LO/9809202. doi:10.1006/aama.1998.0641. MR 1683298. S2CID 17529604.
5. Wu, A. Y. (1985). "Embedding of tree networks into hypercubes". Journal of Parallel and Distributed Computing. 2 (3): 238–249. doi:10.1016/0743-7315(85)90026-7.
6. Chung, F. R. K.; Graham, R. L. (1983). "On universal graphs for spanning trees" (PDF). Journal of the London Mathematical Society. 27 (2): 203–211. CiteSeerX 10.1.1.108.3415. doi:10.1112/jlms/s2-27.2.203. MR 0692525..
7. Babai, L.; Chung, F. R. K.; Erdős, P.; Graham, R. L.; Spencer, J. H. (1982). "On graphs which contain all sparse graphs". In Rosa, Alexander; Sabidussi, Gert; Turgeon, Jean (eds.). Theory and practice of combinatorics: a collection of articles honoring Anton Kotzig on the occasion of his sixtieth birthday (PDF). Annals of Discrete Mathematics. Vol. 12. pp. 21–26. MR 0806964.
8. Bhatt, Sandeep N.; Chung, Fan R. K.; Leighton, F. T.; Rosenberg, Arnold L. (1989). "Universal graphs for bounded-degree trees and planar graphs". SIAM Journal on Discrete Mathematics. 2 (2): 145–155. doi:10.1137/0402014. MR 0990447.
9. Chung, Fan R. K. (1990). "Separator theorems and their applications". In Korte, Bernhard; Lovász, László; Prömel, Hans Jürgen; et al. (eds.). Paths, Flows, and VLSI-Layout. Algorithms and Combinatorics. Vol. 9. Springer-Verlag. pp. 17–34. ISBN 978-0-387-52685-0. MR 1083375.
10. Dujmović, Vida; Esperet, Louis; Joret, Gwenaël; Gavoille, Cyril; Micek, Piotr; Morin, Pat (2021), "Adjacency Labelling for Planar Graphs (And Beyond)", Journal of the ACM, 68 (6): 1–33, arXiv:2003.04280, doi:10.1145/3477542
11. Sumner's Universal Tournament Conjecture, Douglas B. West, retrieved 2010-09-17.
12. Kannan, Sampath; Naor, Moni; Rudich, Steven (1992), "Implicit representation of graphs", SIAM Journal on Discrete Mathematics, 5 (4): 596–603, doi:10.1137/0405049, MR 1186827.
13. Czerwiński, Wojciech; Daviaud, Laure; Fijalkow, Nathanaël; Jurdziński, Marcin; Lazić, Ranko; Parys, Paweł (2018-07-27). "Universal trees grow inside separating automata: Quasi-polynomial lower bounds for parity games". Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms. pp. 2333–2349. arXiv:1807.10546. doi:10.1137/1.9781611975482.142. ISBN 978-1-61197-548-2. S2CID 51865783.
External links
• The panarborial formula, "Theorem of the Day" concerning universal graphs for trees
Universal hashing
In mathematics and computing, universal hashing (in a randomized algorithm or data structure) refers to selecting a hash function at random from a family of hash functions with a certain mathematical property (see definition below). This guarantees a low number of collisions in expectation, even if the data is chosen by an adversary. Many universal families are known (for hashing integers, vectors, strings), and their evaluation is often very efficient. Universal hashing has numerous uses in computer science, for example in implementations of hash tables, randomized algorithms, and cryptography.
Introduction
Assume we want to map keys from some universe $U$ into $m$ bins (labelled $[m]=\{0,\dots ,m-1\}$). The algorithm will have to handle some data set $S\subseteq U$ of $|S|=n$ keys, which is not known in advance. Usually, the goal of hashing is to obtain a low number of collisions (keys from $S$ that land in the same bin). A deterministic hash function cannot offer any guarantee in an adversarial setting if $|U|>m\cdot n$, since the adversary may choose $S$ to be precisely the preimage of a bin. This means that all data keys land in the same bin, making hashing useless. Furthermore, a deterministic hash function does not allow for rehashing: sometimes the input data turns out to be bad for the hash function (e.g. there are too many collisions), so one would like to change the hash function.
The solution to these problems is to pick a function randomly from a family of hash functions. A family of functions $H=\{h:U\to [m]\}$ is called a universal family if, $\forall x,y\in U,~x\neq y:~~|\{h\in H:h(x)=h(y)\}|\leq {\frac {|H|}{m}}$.
In other words, any two different keys of the universe collide with probability at most $1/m$ when the hash function $h$ is drawn uniformly at random from $H$. This is exactly the probability of collision we would expect if the hash function assigned truly random hash codes to every key.
Sometimes, the definition is relaxed by a constant factor, only requiring collision probability $O(1/m)$ rather than $\leq 1/m$. This concept was introduced by Carter and Wegman[1] in 1977, and has found numerous applications in computer science (see, for example[2]).
If we have an upper bound of $\epsilon <1$ on the collision probability, we say that we have $\epsilon $-almost universality. So for example, a universal family has $1/m$-almost universality.
Many, but not all, universal families have the following stronger uniform difference property:
$\forall x,y\in U,~x\neq y$, when $h$ is drawn randomly from the family $H$, the difference $h(x)-h(y)~{\bmod {~}}m$ is uniformly distributed in $[m]$.
Note that the definition of universality is only concerned with whether $h(x)-h(y)=0$, which counts collisions. The uniform difference property is stronger.
(Similarly, a universal family can be XOR universal if $\forall x,y\in U,~x\neq y$, the value $h(x)\oplus h(y)~{\bmod {~}}m$ is uniformly distributed in $[m]$ where $\oplus $ is the bitwise exclusive or operation. This is only possible if $m$ is a power of two.)
An even stronger condition is pairwise independence: we have this property when $\forall x,y\in U,~x\neq y$ we have the probability that $x,y$ will hash to any pair of hash values $z_{1},z_{2}$ is as if they were perfectly random: $P(h(x)=z_{1}\land h(y)=z_{2})=1/m^{2}$. Pairwise independence is sometimes called strong universality.
Another property is uniformity. We say that a family is uniform if all hash values are equally likely: $P(h(x)=z)=1/m$ for any hash value $z$. Universality does not imply uniformity. However, strong universality does imply uniformity.
Given a family with the uniform difference property, one can produce a pairwise independent or strongly universal hash family by adding a uniformly distributed random constant with values in $[m]$ to the hash functions. (Similarly, if $m$ is a power of two, we can achieve pairwise independence from an XOR universal hash family by doing an exclusive or with a uniformly distributed random constant.) Since a shift by a constant is sometimes irrelevant in applications (e.g. hash tables), a careful distinction between the uniform difference property and pairwise independence is sometimes not made.[3]
For some applications (such as hash tables), it is important for the least significant bits of the hash values to be also universal. When a family is strongly universal, this is guaranteed: if $H$ is a strongly universal family with $m=2^{L}$, then the family made of the functions $h{\bmod {2^{L'}}}$ for all $h\in H$ is also strongly universal for $L'\leq L$. Unfortunately, the same is not true of (merely) universal families. For example, the family made of the identity function $h(x)=x$ is clearly universal, but the family made of the function $h(x)=x{\bmod {2^{L'}}}$ fails to be universal.
UMAC and Poly1305-AES and several other message authentication code algorithms are based on universal hashing.[4][5] In such applications, the software chooses a new hash function for every message, based on a unique nonce for that message.
Several hash table implementations are based on universal hashing. In such applications, typically the software chooses a new hash function only after it notices that "too many" keys have collided; until then, the same hash function continues to be used over and over. (Some collision resolution schemes, such as dynamic perfect hashing, pick a new hash function every time there is a collision. Other collision resolution schemes, such as cuckoo hashing and 2-choice hashing, allow a number of collisions before picking a new hash function). A survey of fastest known universal and strongly universal hash functions for integers, vectors, and strings is found in.[6]
Mathematical guarantees
For any fixed set $S$ of $n$ keys, using a universal family guarantees the following properties.
1. For any fixed $x$ in $S$, the expected number of keys in the bin $h(x)$ is $n/m$. When implementing hash tables by chaining, this number is proportional to the expected running time of an operation involving the key $x$ (for example a query, insertion or deletion).
2. The expected number of pairs of keys $x,y$ in $S$ with $x\neq y$ that collide ($h(x)=h(y)$) is bounded above by $n(n-1)/2m$, which is of order $O(n^{2}/m)$. When the number of bins $m$ is chosen to be linear in $n$ (i.e., $m=\Omega (n)$), the expected number of collisions is $O(n)$. When hashing into $n^{2}$ bins, there are no collisions at all with probability at least a half.
3. The expected number of keys in bins with at least $t$ keys in them is bounded above by $2n/(t-2(n/m)+1)$.[7] Thus, if the capacity of each bin is capped to three times the average size ($t=3n/m$), the total number of keys in overflowing bins is at most $O(m)$. This only holds with a hash family whose collision probability is bounded above by $1/m$. If a weaker definition is used, bounding it by $O(1/m)$, this result is no longer true.[7]
As the above guarantees hold for any fixed set $S$, they hold if the data set is chosen by an adversary. However, the adversary has to make this choice before (or independent of) the algorithm's random choice of a hash function. If the adversary can observe the random choice of the algorithm, randomness serves no purpose, and the situation is the same as deterministic hashing.
The second and third guarantee are typically used in conjunction with rehashing. For instance, a randomized algorithm may be prepared to handle some $O(n)$ number of collisions. If it observes too many collisions, it chooses another random $h$ from the family and repeats. Universality guarantees that the number of repetitions is a geometric random variable.
Constructions
Since any computer data can be represented as one or more machine words, one generally needs hash functions for three types of domains: machine words ("integers"); fixed-length vectors of machine words; and variable-length vectors ("strings").
Hashing integers
This section refers to the case of hashing integers that fit in machines words; thus, operations like multiplication, addition, division, etc. are cheap machine-level instructions. Let the universe to be hashed be $\{0,\dots ,|U|-1\}$.
The original proposal of Carter and Wegman[1] was to pick a prime $p\geq |U|$ and define
$h_{a,b}(x)=((ax+b)~{\bmod {~}}p)~{\bmod {~}}m$
where $a,b$ are randomly chosen integers modulo $p$ with $a\neq 0$. (This is a single iteration of a linear congruential generator.)
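A minimal sketch of this family in Python (the function name, prime, and bin count below are illustrative choices, not prescribed by the construction):
import random

def make_carter_wegman(p, m):
    # Draw h_{a,b}(x) = ((a*x + b) mod p) mod m with random a != 0 and b.
    a = random.randrange(1, p)
    b = random.randrange(0, p)
    return lambda x: ((a * x + b) % p) % m

h = make_carter_wegman((1 << 61) - 1, 1024)  # hash into m = 1024 bins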
To see that $H=\{h_{a,b}\}$ is a universal family, note that $h(x)=h(y)$ only holds when
$ax+b\equiv ay+b+i\cdot m{\pmod {p}}$
for some integer $i$ between $0$ and $(p-1)/m$. Since $p\geq |U|$, if $x\neq y$ their difference $x-y$ is nonzero and has an inverse modulo $p$. Solving for $a$ yields
$a\equiv i\cdot m\cdot (x-y)^{-1}{\pmod {p}}$.
There are $p-1$ possible choices for $a$ (since $a=0$ is excluded) and, varying $i$ in the allowed range, $\lfloor (p-1)/m\rfloor $ possible non-zero values for the right hand side. Thus the collision probability is
$\lfloor (p-1)/m\rfloor /(p-1)\leq ((p-1)/m)/(p-1)=1/m$.
Another way to see $H$ is a universal family is via the notion of statistical distance. Write the difference $h(x)-h(y)$ as
$h(x)-h(y)\equiv (a(x-y)~{\bmod {~}}p){\pmod {m}}$.
Since $x-y$ is nonzero and $a$ is uniformly distributed in $\{1,\dots ,p-1\}$, it follows that $a(x-y)$ modulo $p$ is also uniformly distributed in $\{1,\dots ,p-1\}$. The distribution of $(h(x)-h(y))~{\bmod {~}}m$ is thus almost uniform, up to a difference in probability of $\pm 1/p$ between the samples. As a result, the statistical distance to a uniform family is $O(m/p)$, which becomes negligible when $p\gg m$.
The family of simpler hash functions
$h_{a}(x)=(ax~{\bmod {~}}p)~{\bmod {~}}m$
is only approximately universal: $\Pr\{h_{a}(x)=h_{a}(y)\}\leq 2/m$ for all $x\neq y$.[1] Moreover, this analysis is nearly tight; Carter and Wegman [1] show that $\Pr\{h_{a}(1)=h_{a}(m+1)\}\geq 2/(m-1)$ whenever $(p-1)~{\bmod {~}}m=1$.
Avoiding modular arithmetic
The state of the art for hashing integers is the multiply-shift scheme described by Dietzfelbinger et al. in 1997.[8] By avoiding modular arithmetic, this method is much easier to implement and also runs significantly faster in practice (usually by at least a factor of four[9]). The scheme assumes the number of bins is a power of two, $m=2^{M}$. Let $w$ be the number of bits in a machine word. Then the hash functions are parametrised over odd positive integers $a<2^{w}$ (that fit in a word of $w$ bits). To evaluate $h_{a}(x)$, multiply $x$ by $a$ modulo $2^{w}$ and then keep the high order $M$ bits as the hash code. In mathematical notation, this is
$h_{a}(x)=(a\cdot x\,\,{\bmod {\,}}2^{w})\,\,\mathrm {div} \,\,2^{w-M}.$
This scheme does not satisfy the uniform difference property and is only $2/m$-almost-universal; for any $x\neq y$, $\Pr\{h_{a}(x)=h_{a}(y)\}\leq 2/m$.
To understand the behavior of the hash function, notice that, if $ax{\bmod {2}}^{w}$ and $ay{\bmod {2}}^{w}$ have the same highest-order 'M' bits, then $a(x-y){\bmod {2}}^{w}$ has either all 1's or all 0's as its highest order M bits (depending on whether $ax{\bmod {2}}^{w}$ or $ay{\bmod {2}}^{w}$ is larger). Assume that the least significant set bit of $x-y$ appears on position $w-c$. Since $a$ is a random odd integer and odd integers have inverses in the ring $Z_{2^{w}}$, it follows that $a(x-y){\bmod {2}}^{w}$ will be uniformly distributed among $w$-bit integers with the least significant set bit on position $w-c$. The probability that these bits are all 0's or all 1's is therefore at most $2/2^{M}=2/m$. On the other hand, if $c<M$, then higher-order M bits of $a(x-y){\bmod {2}}^{w}$ contain both 0's and 1's, so it is certain that $h(x)\neq h(y)$. Finally, if $c=M$ then bit $w-M$ of $a(x-y){\bmod {2}}^{w}$ is 1 and $h_{a}(x)=h_{a}(y)$ if and only if bits $w-1,\ldots ,w-M+1$ are also 1, which happens with probability $1/2^{M-1}=2/m$.
This analysis is tight, as can be shown with the example $x=2^{w-M-2}$ and $y=3x$. To obtain a truly 'universal' hash function, one can use the multiply-add-shift scheme that picks higher-order bits
$h_{a,b}(x)=((ax+b){\bmod {2}}^{w+M})\,\mathrm {div} \,2^{w},$
where $a$ is a random positive integer with $a<2^{2w}$ and $b$ is a random non-negative integer with $b<2^{2w}$. This requires doing arithmetic on $2w$-bit unsigned integers. This version of multiply-shift is due to Dietzfelbinger, and was later analyzed more precisely by Woelfel.[10]
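A sketch of both schemes in Python, whose unbounded integers emulate w-bit and 2w-bit unsigned arithmetic (the parameter values below are illustrative):
import random

w, M = 64, 20  # word size and log2 of the number of bins (toy values)

def make_multiply_shift():
    # h_a(x) = (a*x mod 2^w) div 2^(w-M), with a a random odd w-bit integer.
    a = random.randrange(1, 1 << w, 2)
    return lambda x: ((a * x) & ((1 << w) - 1)) >> (w - M)

def make_multiply_add_shift():
    # h_{a,b}(x) = ((a*x + b) mod 2^(w+M)) div 2^w on 2w-bit arithmetic.
    a = random.randrange(1, 1 << (2 * w))
    b = random.randrange(0, 1 << (2 * w))
    return lambda x: ((a * x + b) & ((1 << (w + M)) - 1)) >> w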
Hashing vectors
This section is concerned with hashing a fixed-length vector of machine words. Interpret the input as a vector ${\bar {x}}=(x_{0},\dots ,x_{k-1})$ of $k$ machine words (integers of $w$ bits each). If $H$ is a universal family with the uniform difference property, the following family (dating back to Carter and Wegman[1]) also has the uniform difference property (and hence is universal):
$h({\bar {x}})=\left(\sum _{i=0}^{k-1}h_{i}(x_{i})\right)\,{\bmod {~}}m$, where each $h_{i}\in H$ is chosen independently at random.
If $m$ is a power of two, one may replace summation by exclusive or.[11]
In practice, if double-precision arithmetic is available, this is instantiated with the multiply-shift hash family of hash functions.[12] Initialize the hash function with a vector ${\bar {a}}=(a_{0},\dots ,a_{k-1})$ of random odd integers on $2w$ bits each. Then if the number of bins is $m=2^{M}$ for $M\leq w$:
$h_{\bar {a}}({\bar {x}})=\left({\big (}\sum _{i=0}^{k-1}x_{i}\cdot a_{i}{\big )}~{\bmod {~}}2^{2w}\right)\,\,\mathrm {div} \,\,2^{2w-M}$.
It is possible to halve the number of multiplications, which roughly translates to a two-fold speed-up in practice.[11] Initialize the hash function with a vector ${\bar {a}}=(a_{0},\dots ,a_{k-1})$ of random odd integers on $2w$ bits each. The following hash family is universal:[13]
$h_{\bar {a}}({\bar {x}})=\left({\Big (}\sum _{i=0}^{\lceil k/2\rceil }(x_{2i}+a_{2i})\cdot (x_{2i+1}+a_{2i+1}){\Big )}{\bmod {~}}2^{2w}\right)\,\,\mathrm {div} \,\,2^{2w-M}$.
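A sketch of this pair-multiplication family in Python (assuming, for simplicity, an even number k of input words; names are illustrative):
import random

w, M = 32, 20  # input word size and log2 of the number of bins (toy values)

def make_pair_hash(k):
    # a_i are random odd 2w-bit integers; x is a vector of k w-bit words.
    a = [random.randrange(1, 1 << (2 * w), 2) for _ in range(k)]
    mask = (1 << (2 * w)) - 1
    def h(x):
        s = sum((x[2 * i] + a[2 * i]) * (x[2 * i + 1] + a[2 * i + 1])
                for i in range(k // 2)) & mask
        return s >> (2 * w - M)
    return h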
If double-precision operations are not available, one can interpret the input as a vector of half-words ($w/2$-bit integers). The algorithm will then use $\lceil k/2\rceil $ multiplications, where $k$ was the number of half-words in the vector. Thus, the algorithm runs at a "rate" of one multiplication per word of input.
The same scheme can also be used for hashing integers, by interpreting their bits as vectors of bytes. In this variant, the vector technique is known as tabulation hashing and it provides a practical alternative to multiplication-based universal hashing schemes.[14]
Strong universality at high speed is also possible.[15] Initialize the hash function with a vector ${\bar {a}}=(a_{0},\dots ,a_{k})$ of random integers on $2w$ bits. Compute
$h_{\bar {a}}({\bar {x}})^{\mathrm {strong} }=(a_{0}+\sum _{i=0}^{k-1}a_{i+1}x_{i}{\bmod {~}}2^{2w})\,\,\mathrm {div} \,\,2^{w}$.
The result is strongly universal on $w$ bits. Experimentally, it was found to run at 0.2 CPU cycle per byte on recent Intel processors for $w=32$.
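A sketch of this strongly universal scheme for a fixed vector length (toy parameters; Python integers stand in for 2w-bit unsigned arithmetic):
import random

w, k = 32, 4  # word size and vector length (illustrative values)

def make_strong_hash():
    # h(x) = ((a_0 + sum_i a_{i+1} * x_i) mod 2^{2w}) div 2^w,
    # with a_i random 2w-bit integers; the result is a w-bit value.
    a = [random.randrange(0, 1 << (2 * w)) for _ in range(k + 1)]
    mask = (1 << (2 * w)) - 1
    return lambda x: ((a[0] + sum(a[i + 1] * x[i] for i in range(k))) & mask) >> w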
Hashing strings
This refers to hashing a variable-sized vector of machine words. If the length of the string can be bounded by a small number, it is best to use the vector solution from above (conceptually padding the vector with zeros up to the upper bound). The space required is the maximal length of the string, but the time to evaluate $h(s)$ is just the length of $s$. As long as zeroes are forbidden in the string, the zero-padding can be ignored when evaluating the hash function without affecting universality.[11] Note that if zeroes are allowed in the string, then it might be best to append a fictitious non-zero (e.g., 1) character to all strings prior to padding: this will ensure that universality is not affected.[15]
Now assume we want to hash ${\bar {x}}=(x_{0},\dots ,x_{\ell })$, where a good bound on $\ell $ is not known a priori. A universal family proposed by Dietzfelbinger et al.[12] treats the string $x$ as the coefficients of a polynomial modulo a large prime. If $x_{i}\in [u]$, let $p\geq \max\{u,m\}$ be a prime and define:
$h_{a}({\bar {x}})=h_{\mathrm {int} }\left({\big (}\sum _{i=0}^{\ell }x_{i}\cdot a^{\ell -i}{\big )}{\bmod {~}}p\right)$, where $a\in [p]$ is uniformly random and $h_{\mathrm {int} }$ is chosen randomly from a universal family mapping integer domain $[p]\mapsto [m]$.
Using properties of modular arithmetic, the above can be computed without producing large numbers for large strings, as follows:[16]
def hash(x, a, p):
    # Horner's rule: evaluate the polynomial with coefficients x[i] at the
    # point a, modulo the prime p. INITIAL_VALUE is the implementation-
    # dependent seed listed in the table below.
    h = INITIAL_VALUE
    for c in x:
        h = (h * a + ord(c)) % p
    return h
This Rabin–Karp rolling hash is based on a linear congruential generator.[17] The above algorithm is also known as a multiplicative hash function.[18] In practice, the mod operator and the parameter p can be avoided altogether by simply allowing the integer to overflow, because this is equivalent to mod (Max-Int-Value + 1) in many programming languages. The table below shows values chosen to initialize h and a for some of the popular implementations.
Implementation INITIAL_VALUE a
Bernstein's hash function djb2[19] 5381 33
STLPort 4.6.2 0 5
Kernighan and Ritchie's hash function[20] 0 31
java.lang.String.hashCode()[21] 0 31
Consider two strings ${\bar {x}},{\bar {y}}$ and let $\ell $ be length of the longer one; for the analysis, the shorter string is conceptually padded with zeros up to length $\ell $. A collision before applying $h_{\mathrm {int} }$ implies that $a$ is a root of the polynomial with coefficients ${\bar {x}}-{\bar {y}}$. This polynomial has at most $\ell $ roots modulo $p$, so the collision probability is at most $\ell /p$. The probability of collision through the random $h_{\mathrm {int} }$ brings the total collision probability to ${\frac {1}{m}}+{\frac {\ell }{p}}$. Thus, if the prime $p$ is sufficiently large compared to the length of strings hashed, the family is very close to universal (in statistical distance).
Other universal families of hash functions used to hash unknown-length strings to fixed-length hash values include the Rabin fingerprint and the Buzhash.
Avoiding modular arithmetic
To mitigate the computational penalty of modular arithmetic, three tricks are used in practice:[11]
1. One chooses the prime $p$ to be close to a power of two, such as a Mersenne prime. This allows arithmetic modulo $p$ to be implemented without division, using faster operations like addition and shifts (see the sketch after this list). For instance, on modern architectures one can work with $p=2^{61}-1$, while the $x_{i}$'s are 32-bit values.
2. One can apply vector hashing to blocks. For instance, one applies vector hashing to each 16-word block of the string, and applies string hashing to the $\lceil k/16\rceil $ results. Since the slower string hashing is applied on a substantially smaller vector, this will essentially be as fast as vector hashing.
3. One chooses a power-of-two as the divisor, allowing arithmetic modulo $2^{w}$ to be implemented without division (using faster operations of bit masking). The NH hash-function family takes this approach.
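A sketch of the first trick for $p=2^{61}-1$ in Python (the two folds use the identity $2^{61}\equiv 1{\pmod {p}}$; the function name is illustrative):
P = (1 << 61) - 1  # the Mersenne prime 2^61 - 1

def mod_mersenne61(x):
    # Reduce x modulo 2^61 - 1 without division: fold the high bits onto the
    # low bits twice, then subtract P once if needed (valid for x < 2^122,
    # e.g. a product of two values already reduced modulo P).
    x = (x & P) + (x >> 61)
    x = (x & P) + (x >> 61)
    return x - P if x >= P else x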
See also
• K-independent hashing – family of hash functions
• Rolling hashing – hash function where the input is hashed in a window that moves through the input
• Tabulation hashing – hash functions computed by exclusive or
• Min-wise independence – data mining technique
• Universal one-way hash function – a universal hash function proposed in cryptography as an alternative to collision-resistant hash functions
• Low-discrepancy sequence
• Perfect hashing – hash function without any collisions
References
1. Carter, Larry; Wegman, Mark N. (1979). "Universal Classes of Hash Functions". Journal of Computer and System Sciences. 18 (2): 143–154. doi:10.1016/0022-0000(79)90044-8. Conference version in STOC'77.
2. Miltersen, Peter Bro. "Universal Hashing" (PDF). Archived from the original (PDF) on 24 May 2011. Retrieved 24 June 2009.
3. Motwani, Rajeev; Raghavan, Prabhakar (1995). Randomized Algorithms. Cambridge University Press. p. 221. ISBN 0-521-47465-5.
4. David Wagner, ed. "Advances in Cryptology - CRYPTO 2008". p. 145.
5. Jean-Philippe Aumasson, Willi Meier, Raphael Phan, Luca Henzen. "The Hash Function BLAKE". 2014. p. 10.
6. Thorup, Mikkel (2015). "High Speed Hashing for Integers and Strings". arXiv:1504.06804 [cs.DS].
7. Baran, Ilya; Demaine, Erik D.; Pătraşcu, Mihai (2008). "Subquadratic Algorithms for 3SUM" (PDF). Algorithmica. 50 (4): 584–596. doi:10.1007/s00453-007-9036-3. S2CID 9855995.
8. Dietzfelbinger, Martin; Hagerup, Torben; Katajainen, Jyrki; Penttonen, Martti (1997). "A Reliable Randomized Algorithm for the Closest-Pair Problem" (Postscript). Journal of Algorithms. 25 (1): 19–51. doi:10.1006/jagm.1997.0873. Retrieved 10 February 2011.
9. Thorup, Mikkel (18 December 2009). "Text-book algorithms at SODA".
10. Woelfel, Philipp (1999). Efficient Strongly Universal and Optimally Universal Hashing. Mathematical Foundations of Computer Science 1999. LNCS. Vol. 1672. pp. 262–272. doi:10.1007/3-540-48340-3_24.
11. Thorup, Mikkel (2009). String hashing for linear probing. Proc. 20th ACM-SIAM Symposium on Discrete Algorithms (SODA). pp. 655–664. CiteSeerX 10.1.1.215.4253. doi:10.1137/1.9781611973068.72. ISBN 978-0-89871-680-1., section 5.3
12. Dietzfelbinger, Martin; Gil, Joseph; Matias, Yossi; Pippenger, Nicholas (1992). Polynomial Hash Functions Are Reliable (Extended Abstract). Proc. 19th International Colloquium on Automata, Languages and Programming (ICALP). pp. 235–246.
13. Black, J.; Halevi, S.; Krawczyk, H.; Krovetz, T. (1999). UMAC: Fast and Secure Message Authentication (PDF). Advances in Cryptology (CRYPTO '99)., Equation 1
14. Pătraşcu, Mihai; Thorup, Mikkel (2011). The power of simple tabulation hashing. Proceedings of the 43rd annual ACM Symposium on Theory of Computing (STOC '11). pp. 1–10. arXiv:1011.5200. doi:10.1145/1993636.1993638. ISBN 9781450306911.
15. Kaser, Owen; Lemire, Daniel (2013). "Strongly universal string hashing is fast". Computer Journal. Oxford University Press. 57 (11): 1624–1638. arXiv:1202.4961. doi:10.1093/comjnl/bxt070.
16. "Hebrew University Course Slides" (PDF).
17. Robert Uzgalis. "Library Hash Functions". 1996.
18. Kankowsk, Peter. "Hash functions: An empirical comparison".
19. Yigit, Ozan. "String hash functions".
20. Kernighan; Ritchie (1988). "6". The C Programming Language (2nd ed.). p. 118. ISBN 0-13-110362-8.
21. "String (Java Platform SE 6)". docs.oracle.com. Retrieved 2015-06-10.
Further reading
• Knuth, Donald Ervin (1998). The Art of Computer Programming, Vol. III: Sorting and Searching (3rd ed.). Reading, Mass; London: Addison-Wesley. ISBN 0-201-89685-0.
External links
• Open Data Structures - Section 5.1.1 - Multiplicative Hashing, Pat Morin
Universal homeomorphism
In algebraic geometry, a universal homeomorphism is a morphism of schemes $f:X\to Y$ such that, for each morphism $Y'\to Y$, the base change $X\times _{Y}Y'\to Y'$ is a homeomorphism of topological spaces.
A morphism of schemes is a universal homeomorphism if and only if it is integral, radicial and surjective.[1] In particular, a morphism locally of finite type is a universal homeomorphism if and only if it is finite, radicial and surjective.
For example, an absolute Frobenius morphism is a universal homeomorphism.
References
1. EGA IV4, 18.12.11.
• Grothendieck, Alexandre; Dieudonné, Jean (1967). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie". Publications Mathématiques de l'IHÉS. 32. doi:10.1007/bf02732123. MR 0238860.
External links
• Universal homeomorphisms and the étale topology
• Do pushouts along universal homeomorphisms exist?
Universal instantiation
In predicate logic, universal instantiation[1][2][3] (UI; also called universal specification or universal elimination, and sometimes confused with dictum de omni) is a valid rule of inference from a truth about each member of a class of individuals to the truth about a particular individual of that class. It is generally given as a quantification rule for the universal quantifier but it can also be encoded in an axiom schema. It is one of the basic principles used in quantification theory.
Universal instantiation
Type: Rule of inference
Field: Predicate logic
Symbolic statement: $\forall x\,A\Rightarrow A\{x\mapsto a\}$
Transformation rules
Propositional calculus
Rules of inference
• Implication introduction / elimination (modus ponens)
• Biconditional introduction / elimination
• Conjunction introduction / elimination
• Disjunction introduction / elimination
• Disjunctive / hypothetical syllogism
• Constructive / destructive dilemma
• Absorption / modus tollens / modus ponendo tollens
• Negation introduction
Rules of replacement
• Associativity
• Commutativity
• Distributivity
• Double negation
• De Morgan's laws
• Transposition
• Material implication
• Exportation
• Tautology
Predicate logic
Rules of inference
• Universal generalization / instantiation
• Existential generalization / instantiation
Example: "All dogs are mammals. Fido is a dog. Therefore Fido is a mammal."
Formally, the rule as an axiom schema is given as
$\forall x\,A\Rightarrow A\{x\mapsto a\},$
for every formula A and every term a, where $A\{x\mapsto a\}$ is the result of substituting a for each free occurrence of x in A. $\,A\{x\mapsto a\}$ is an instance of $\forall x\,A.$
And as a rule of inference it is
from $\vdash \forall xA$ infer $\vdash A\{x\mapsto a\}.$
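As an illustration, here is a minimal Python sketch of the substitution $A\{x\mapsto a\}$ on which the rule rests. The tuple representation of formulas and the function names are choices made for this example, and capture-avoiding renaming of bound variables is omitted.

# A formula is a nested tuple: ("var", name), ("const", name),
# ("forall", variable, body), or ("pred", name, term, ..., term).
def substitute(formula, x, a):
    # Replace every free occurrence of the variable x in formula by the term a.
    tag = formula[0]
    if tag == "var":
        return a if formula[1] == x else formula
    if tag == "const":
        return formula
    if tag == "forall":
        _, v, body = formula
        if v == x:                     # x is bound here, so nothing below is free
            return formula
        return ("forall", v, substitute(body, x, a))
    return (tag, formula[1]) + tuple(substitute(t, x, a) for t in formula[2:])

def instantiate(formula, a):
    # Universal instantiation: from forall x. A, infer A{x -> a}.
    _, x, body = formula
    return substitute(body, x, a)

# A simplified form of the example above (the implication is dropped):
# from forall x. Mammal(x), infer Mammal(fido).
print(instantiate(("forall", "x", ("pred", "Mammal", ("var", "x"))),
                  ("const", "fido")))   # ('pred', 'Mammal', ('const', 'fido'))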
Irving Copi noted that universal instantiation "...follows from variants of rules for 'natural deduction', which were devised independently by Gerhard Gentzen and Stanisław Jaśkowski in 1934."[4]
Quine
According to Willard Van Orman Quine, universal instantiation and existential generalization are two aspects of a single principle, for instead of saying that "∀x x = x" implies "Socrates = Socrates", we could as well say that the denial "Socrates ≠ Socrates" implies "∃x x ≠ x". The principle embodied in these two operations is the link between quantifications and the singular statements that are related to them as instances. Yet it is a principle only by courtesy. It holds only in the case where a term names and, furthermore, occurs referentially.[5]
See also
• Existential instantiation
• Existential generalization
• Existential quantification
• Inference rules
References
1. Irving M. Copi; Carl Cohen; Kenneth McMahon (Nov 2010). Introduction to Logic. Pearson Education. ISBN 978-0205820375.
2. Hurley, Patrick. A Concise Introduction to Logic. Wadsworth Pub Co, 2008.
3. Moore and Parker
4. Copi, Irving M. (1979). Symbolic Logic, 5th edition, Prentice Hall, Upper Saddle River, NJ
5. Willard Van Orman Quine; Roger F. Gibson (2008). "V.24. Reference and Modality". Quintessence. Cambridge, Mass: Belknap Press of Harvard University Press. OCLC 728954096. Here: p. 366.
Universal parabolic constant
The universal parabolic constant is a mathematical constant.
It is defined as the ratio, for any parabola, of the arc length of the parabolic segment formed by the latus rectum to the focal parameter. The focal parameter is twice the focal length. The ratio is denoted P.[1][2][3] In the diagram, the latus rectum is pictured in blue, the parabolic segment that it forms in red and the focal parameter in green. (The focus of the parabola is the point F and the directrix is the line L.)
The value of P is[4]
$P=\ln(1+{\sqrt {2}})+{\sqrt {2}}=2.29558714939\dots $
(sequence A103710 in the OEIS). The circle and parabola are unique among conic sections in that they admit such a universal constant; the analogous ratios for ellipses and hyperbolas depend on their eccentricities. This reflects the fact that all circles are similar and all parabolas are similar, whereas ellipses and hyperbolas are not.
Derivation
Take $ y={\frac {x^{2}}{4f}}$ as the equation of the parabola. The focal parameter is $p=2f$ and the semilatus rectum is $\ell =2f$.
${\begin{aligned}P&:={\frac {1}{p}}\int _{-\ell }^{\ell }{\sqrt {1+\left(y'(x)\right)^{2}}}\,dx\\&={\frac {1}{2f}}\int _{-2f}^{2f}{\sqrt {1+{\frac {x^{2}}{4f^{2}}}}}\,dx\\&=\int _{-1}^{1}{\sqrt {1+t^{2}}}\,dt&(x=2ft)\\&=\operatorname {arsinh} (1)+{\sqrt {2}}\\&=\ln(1+{\sqrt {2}})+{\sqrt {2}}.\end{aligned}}$
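The derivation can be checked numerically. The sketch below, which uses only Python's standard library, integrates $\sqrt{1+t^{2}}$ over [−1, 1] by Simpson's rule and compares the result with the closed form:

import math

def simpson(f, a, b, n=1000):
    # Composite Simpson's rule with n (even) subintervals.
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return s * h / 3

P_numeric = simpson(lambda t: math.sqrt(1 + t * t), -1.0, 1.0)
P_closed = math.log(1 + math.sqrt(2)) + math.sqrt(2)
print(P_numeric, P_closed)   # both approximately 2.29558714939...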
Properties
P is a transcendental number.
Proof. Suppose that P is algebraic. Then $P-{\sqrt {2}}=\ln(1+{\sqrt {2}})$ would also be algebraic. Since $\ln(1+{\sqrt {2}})\neq 0$, the Lindemann–Weierstrass theorem would then make $e^{\ln(1+{\sqrt {2}})}=1+{\sqrt {2}}$ transcendental, which is not the case. Hence P is transcendental.
Since P is transcendental, it is also irrational.
Applications
The average distance from a point randomly selected in the unit square to its center is[5]
$d_{\text{avg}}={P \over 6}.$
Proof.
${\begin{aligned}d_{\text{avg}}&:=8\int _{0}^{1 \over 2}\int _{0}^{x}{\sqrt {x^{2}+y^{2}}}\,dy\,dx\\&=8\int _{0}^{1 \over 2}{1 \over 2}x^{2}(\ln(1+{\sqrt {2}})+{\sqrt {2}})\,dx\\&=4P\int _{0}^{1 \over 2}x^{2}\,dx\\&={P \over 6}\end{aligned}}$
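A quick Monte Carlo estimate agrees; this is a sketch using only the standard library, and the sample size is arbitrary:

import math, random

random.seed(0)
n = 10**5
# Distance from a uniform random point of the unit square to its centre.
total = sum(math.hypot(random.random() - 0.5, random.random() - 0.5)
            for _ in range(n))
P = math.log(1 + math.sqrt(2)) + math.sqrt(2)
print(total / n, P / 6)   # both approximately 0.3826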
References and footnotes
1. Sylvester Reese and Jonathan Sondow. "Universal Parabolic Constant". MathWorld., a Wolfram Web resource.
2. Reese, Sylvester. "Pohle Colloquium Video Lecture: The universal parabolic constant". Retrieved February 2, 2005.
3. Sondow, Jonathan (2013). "The parbelos, a parabolic analog of the arbelos". American Mathematical Monthly. 120 (10): 929–935. arXiv:1210.2279. doi:10.4169/amer.math.monthly.120.10.929. S2CID 33402874.
4. See Parabola#Arc length. Use $p=2f$, the length of the semilatus rectum, so $h=f$ and $q=f{\sqrt {2}}$. Calculate $2s$ in terms of $f$, then divide by $2f$, which is the focal parameter.
5. Weisstein, Eric W. "Square Point Picking". MathWorld., a Wolfram Web resource.
Universal point set
In graph drawing, a universal point set of order n is a set S of points in the Euclidean plane with the property that every n-vertex planar graph has a straight-line drawing in which the vertices are all placed at points of S.
Unsolved problem in mathematics:
Do planar graphs have universal point sets of subquadratic size?
Bounds on the size of universal point sets
When n is ten or less, there exist universal point sets with exactly n points, but for all n ≥ 15 additional points are required.[1]
Several authors have shown that subsets of the integer lattice of size O(n) × O(n) are universal. In particular, de Fraysseix, Pach & Pollack (1988) showed that a grid of (2n − 3) × (n − 1) points is universal, and Schnyder (1990) reduced this to a triangular subset of an (n − 1) × (n − 1) grid, with n2/2 − O(n) points. By modifying the method of de Fraysseix et al., Brandenburg (2008) found an embedding of any planar graph into a triangular subset of the grid consisting of 4n2/9 points. A universal point set in the form of a rectangular grid must have size at least n/3 × n/3[2] but this does not rule out the possibility of smaller universal point sets of other types. The smallest known universal point sets are not based on grids, but are instead constructed from superpatterns (permutations that contain all permutation patterns of a given size); the universal point sets constructed in this way have size n2/4 − Θ(n).[3]
de Fraysseix, Pach & Pollack (1988) proved the first nontrivial lower bound on the size of a universal point set, with a bound of the form n + Ω(√n), and Chrobak & Karloff (1989) showed that universal point sets must contain at least 1.098n − o(n) points. Kurowski (2004) stated an even stronger bound of 1.235n − o(n),[4] which was further improved by Scheucher, Schrezenmaier & Steiner (2018) to 1.293n − o(n).
Closing the gap between the known linear lower bounds and quadratic upper bounds remains an open problem.[5]
Special classes of graphs
Subclasses of the planar graphs may, in general, have smaller universal sets (sets of points that allow straight-line drawings of all n-vertex graphs in the subclass) than the full class of planar graphs, and in many cases universal point sets of exactly n points are possible. For instance, it is not hard to see that every set of n points in convex position (forming the vertices of a convex polygon) is universal for the n-vertex outerplanar graphs, and in particular for trees. Less obviously, every set of n points in general position (no three collinear) remains universal for outerplanar graphs.[6]
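The claim for convex position can be verified directly on a small instance. In the sketch below the particular outerplanar graph and the crossing test are our choices; the vertices of a 6-cycle with two non-crossing chords are placed on a regular hexagon, and no pair of edges properly crosses:

import math

def ccw(a, b, c):
    # Twice the signed area of the triangle abc.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def crosses(p, q, r, s):
    # Proper crossing of segments pq and rs; shared endpoints do not count.
    if {p, q} & {r, s}:
        return False
    return ccw(p, q, r) * ccw(p, q, s) < 0 and ccw(r, s, p) * ccw(r, s, q) < 0

n = 6
pts = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n)) for i in range(n)]
edges = [(i, (i + 1) % n) for i in range(n)] + [(0, 2), (2, 5)]   # outerplanar
bad = [(e, f) for e in edges for f in edges if e < f
       and crosses(pts[e[0]], pts[e[1]], pts[f[0]], pts[f[1]])]
print(bad)   # [] -- the straight-line drawing on convex position is plane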
Planar graphs that can be partitioned into nested cycles, 2-outerplanar graphs, and planar graphs of bounded pathwidth have universal point sets of nearly linear size.[7] Planar 3-trees have universal point sets of size O(n3/2 log n); the same bound also applies to series–parallel graphs.[8]
Other drawing styles
As well as for straight-line graph drawing, universal point sets have been studied for other drawing styles; in many of these cases, universal point sets with exactly n points exist, based on a topological book embedding in which the vertices are placed along a line in the plane and the edges are drawn as curves that cross this line at most once. For instance, every set of n collinear points is universal for an arc diagram in which each edge is represented as either a single semicircle or a smooth curve formed from two semicircles.[9]
By using a similar layout, every strictly convex curve in the plane can be shown to contain an n-point subset that is universal for polyline drawing with at most one bend per edge.[10] This set contains only the vertices of the drawing, not the bends; larger sets are known that can be used for polyline drawing with all vertices and all bends placed within the set.[11]
Notes
1. Cardinal, Hoffmann & Kusters (2015).
2. Dolev, Leighton & Trickey (1984); Chrobak & Karloff (1989); Demaine & O'Rourke (2002–2012). A weaker quadratic lower bound on the grid size needed for planar graph drawing was given earlier by Valiant (1981).
3. Bannister et al. (2013).
4. Mondal (2012) claimed that Kurowski's proof was erroneous, but later (after discussion with Jean Cardinal) retracted this claim; see Explanation Supporting Kurowski's Proof, D. Mondal, updated August 9, 2013.
5. Demaine & O'Rourke (2002–2012); Brandenburg et al. (2003); Mohar (2007).
6. Gritzmann et al. (1991).
7. Angelini et al. (2018); Bannister et al. (2013).
8. Fulek & Tóth (2015)
9. Giordano et al. (2007).
10. Everett et al. (2010).
11. Dujmović et al. (2013).
References
• Angelini, Patrizio; Bruckdorfer, Till; Di Battista, Giuseppe; Kaufmann, Michael; Mchedlidze, Tamara; Roselli, Vincenzo; Squarcella, Claudio (2018), "Small Universal Point Sets for k-Outerplanar Graphs", Discrete & Computational Geometry, 60 (2): 430–470, doi:10.1007/s00454-018-0009-x, S2CID 51907835.
• Bannister, Michael J.; Cheng, Zhanpeng; Devanny, William E.; Eppstein, David (2013), "Superpatterns and universal point sets", Proc. 21st International Symposium on Graph Drawing (GD 2013), arXiv:1308.0403, Bibcode:2013arXiv1308.0403B, doi:10.7155/jgaa.00318, S2CID 6229641.
• Brandenburg, Franz J. (2008), "Drawing planar graphs on ${\tfrac {8}{9}}n^{2}$ area", The International Conference on Topological and Geometric Graph Theory, Electronic Notes in Discrete Mathematics, vol. 31, Elsevier, pp. 37–40, doi:10.1016/j.endm.2008.06.005, MR 2571101.
• Brandenburg, Franz-Josef; Eppstein, David; Goodrich, Michael T.; Kobourov, Stephen G.; Liotta, Giuseppe; Mutzel, Petra (2003), "Selected open problems in graph drawing", in Liotta, Giuseppe (ed.), Graph Drawing: 11th International Symposium, GD 2003, Perugia, Italy, September 21–24, 2003 Revised Papers, Lecture Notes in Computer Science, vol. 2912, Springer-Verlag, pp. 515–539, doi:10.1007/978-3-540-24595-7_55. See in particular problem 11 on p. 520.
• Cardinal, Jean; Hoffmann, Michael; Kusters, Vincent (2015), "On universal point sets for planar graphs", Journal of Graph Algorithms and Applications, 19 (1): 529–547, arXiv:1209.3594, doi:10.7155/jgaa.00374, MR 3420760, S2CID 39043733
• Chrobak, M.; Karloff, H. (1989), "A lower bound on the size of universal sets for planar graphs", SIGACT News, 20 (4): 83–86, doi:10.1145/74074.74088, S2CID 7188305.
• de Fraysseix, Hubert; Pach, János; Pollack, Richard (1988), "Small sets supporting Fary embeddings of planar graphs", Twentieth Annual ACM Symposium on Theory of Computing, pp. 426–433, doi:10.1145/62212.62254, ISBN 0-89791-264-0, S2CID 15230919.
• Demaine, E.; O'Rourke, J. (2002–2012), "Problem 45: Smallest Universal Set of Points for Planar Graphs", The Open Problems Project, retrieved 2013-03-19.
• Dolev, Danny; Leighton, Tom; Trickey, Howard (1984), "Planar embedding of planar graphs" (PDF), Advances in Computing Research, 2: 147–161.
• Dujmović, V.; Evans, W. S.; Lazard, S.; Lenhart, W.; Liotta, G.; Rappaport, D.; Wismath, S. K. (2013), "On point-sets that support planar graphs", Comput. Geom. Theory Appl., 46 (1): 29–50, doi:10.1016/j.comgeo.2012.03.003.
• Everett, Hazel; Lazard, Sylvain; Liotta, Giuseppe; Wismath, Stephen (2010), "Universal Sets of n Points for One-Bend Drawings of Planar Graphs with n Vertices", Discrete and Computational Geometry, 43 (2): 272–288, doi:10.1007/s00454-009-9149-3.
• Fulek, Radoslav; Tóth, Csaba D. (2015), "Universal point sets for planar three-trees", Journal of Discrete Algorithms, 30: 101–112, arXiv:1212.6148, doi:10.1016/j.jda.2014.12.005, MR 3305154, S2CID 1597229
• Giordano, Francesco; Liotta, Giuseppe; Mchedlidze, Tamara; Symvonis, Antonios (2007), "Computing upward topological book embeddings of upward planar digraphs", Algorithms and Computation: 18th International Symposium, ISAAC 2007, Sendai, Japan, December 17-19, 2007, Proceedings, Lecture Notes in Computer Science, vol. 4835, Springer, pp. 172–183, doi:10.1007/978-3-540-77120-3_17.
• Gritzmann, P.; Mohar, B.; Pach, János; Pollack, Richard (1991), "Embedding a planar triangulation with vertices at specified positions", American Mathematical Monthly, 98 (2): 165–166, doi:10.2307/2323956, JSTOR 2323956.
• Kurowski, Maciej (2004), "A 1.235 lower bound on the number of points needed to draw all n-vertex planar graphs", Information Processing Letters, 92 (2): 95–98, doi:10.1016/j.ipl.2004.06.009, MR 2085707.
• Mohar, Bojan (2007), "Universal point sets for planar graphs", Open Problem Garden, retrieved 2013-03-20.
• Mondal, Debajyoti (2012), Embedding a Planar Graph on a Given Point Set (PDF), Masters thesis, Department of Computer Science, University of Manitoba.
• Scheucher, Manfred; Schrezenmaier, Hendrik; Steiner, Raphael (2018), A Note On Universal Point Sets for Planar Graphs, arXiv:1811.06482, Bibcode:2018arXiv181106482S.
• Schnyder, Walter (1990), "Embedding planar graphs on the grid", Proc. 1st ACM/SIAM Symposium on Discrete Algorithms (SODA), pp. 138–148, ISBN 9780898712513.
• Valiant, L. G. (1981), "Universality considerations in VLSI circuits", IEEE Transactions on Computers, C-30 (2): 135–140, doi:10.1109/TC.1981.6312176, S2CID 1450313
Universal quadratic form
In mathematics, a universal quadratic form is a quadratic form over a ring that represents every element of the ring.[1] A non-singular form over a field which represents zero non-trivially is universal.[2]
Examples
• Over the real numbers, the form x2 in one variable is not universal, as it cannot represent negative numbers: the two-variable form x2 − y2 over R is universal.
• Lagrange's four-square theorem states that every positive integer is the sum of four squares. Hence the form x2 + y2 + z2 + t2 − u2 over Z is universal; a brute-force check appears after this list.
• Over a finite field, any non-singular quadratic form of dimension 2 or more is universal.[3]
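The sketch below checks Lagrange's four-square theorem for the integers up to 500 (the bound is arbitrary); for a negative integer m one may choose u with m + u² ≥ 0 and represent m + u² by four squares, so the five-variable form above is universal over Z:

import math

def four_square(n):
    # Return (x, y, z, t) with x^2 + y^2 + z^2 + t^2 == n, by exhaustive search.
    r = math.isqrt(n)
    for x in range(r + 1):
        for y in range(r + 1):
            for z in range(r + 1):
                t2 = n - x * x - y * y - z * z
                if t2 < 0:
                    break                 # larger z only makes t2 more negative
                t = math.isqrt(t2)
                if t * t == t2:
                    return (x, y, z, t)
    return None

assert all(four_square(n) is not None for n in range(501))
print(four_square(310))   # one representation of 310, e.g. (0, 2, 9, 15)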
Forms over the rational numbers
The Hasse–Minkowski theorem implies that a form is universal over Q if and only if it is universal over Qp for all p (where we include p = ∞, letting Q∞ denote R).[4] A form over R is universal if and only if it is not definite; a form over Qp is universal if it has dimension at least 4.[5] One can conclude that all indefinite forms of dimension at least 4 over Q are universal.[4]
See also
• The 15 and 290 theorems give conditions for a quadratic form to represent all positive integers.
References
1. Lam (2005) p.10
2. Rajwade (1993) p.146
3. Lam (2005) p.36
4. Serre (1973) p.43
5. Serre (1973) p.37
• Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023.
• Rajwade, A. R. (1993). Squares. London Mathematical Society Lecture Note Series. Vol. 171. Cambridge University Press. ISBN 0-521-42668-5. Zbl 0785.11022.
• Serre, Jean-Pierre (1973). A Course in Arithmetic. Graduate Texts in Mathematics. Vol. 7. Springer-Verlag. ISBN 0-387-90040-3. Zbl 0256.12001.
Universal science
Universal science (German: Universalwissenschaft; Latin: scientia generalis, scientia universalis) is a branch of metaphysics.[1] In the work of Gottfried Wilhelm Leibniz, the universal science is the true logic.[2][3][4] Plato's system of idealism, formulated using the teachings of Socrates, is a predecessor to the concept of universal science. It emphasizes first principles, which appear to be the reasoning behind everything, present in and underlying all that exists. This mode of reasoning influenced scientists such as Boole, Frege, Cantor, Hilbert, Gödel, and Turing.[5] All of them shared a vision of a future in which universal computing would eventually change everything.[6]
See also
• Architectonics
• Unified Science
References
1. Osminskaya, Natalia A. (2018). "Historical roots of Gottfried Wilhelm Leibniz's universal science". Epistemology & Philosophy of Science. 55 (2): 165–179. doi:10.5840/eps201855236.
2. Franz Exner, "Über Leibnitz'ens Universal-Wissenschaft", Prague, 1843
3. "Universalwissenschaft": entry in the Meyers Großes Konversations-Lexikon
4. Stanley Burris, "Leibniz's Influence on 19th Century Logic", Stanford Encyclopedia of Philosophy
5. Kossak, Roman (2019). "The Universal Computer: The Road from Leibniz to Turing by Martin Davis (third edition, Boca Raton, FL: CRC Press, 2018)". Mathematical Intelligencer. 41 (2): 78–79. doi:10.1007/s00283-018-09860-w.
6. Davis, Martin (2002). "The Universal Computer: The Road from Leibniz to Turing". The American Mathematical Monthly. 109. doi:10.2307/2695463.
External links
• Stephen Palmquist, Heading 6, Philosophy as the Theological Science
Gottfried Wilhelm Leibniz
Mathematics and
philosophy
• Alternating series test
• Best of all possible worlds
• Calculus controversy
• Calculus ratiocinator
• Characteristica universalis
• Compossibility
• Difference
• Dynamism
• Identity of indiscernibles
• Individuation
• Law of continuity
• Leibniz wheel
• Leibniz's gap
• Leibniz's notation
• Lingua generalis
• Mathesis universalis
• Pre-established harmony
• Plenitude
• Sufficient reason
• Salva veritate
• Theodicy
• Transcendental law of homogeneity
• Rationalism
• Universal science
• Vis viva
• Well-founded phenomenon
Works
• De Arte Combinatoria (1666)
• Discourse on Metaphysics (1686)
• New Essays on Human Understanding (1704)
• Théodicée (1710)
• Monadology (1714)
• Leibniz–Clarke correspondence (1715–1716)
Category
History of science
Background
• Theories and sociology
• Historiography
• Pseudoscience
• History and philosophy of science
By era
• Ancient world
• Classical Antiquity
• The Golden Age of Islam
• Renaissance
• Scientific Revolution
• Age of Enlightenment
• Romanticism
By culture
• African
• Argentine
• Brazilian
• Byzantine
• Medieval European
• French
• Chinese
• Indian
• Medieval Islamic
• Japanese
• Korean
• Mexican
• Russian
• Spanish
Natural sciences
• Astronomy
• Biology
• Chemistry
• Earth science
• Physics
Mathematics
• Algebra
• Calculus
• Combinatorics
• Geometry
• Logic
• Probability
• Statistics
• Trigonometry
Social sciences
• Anthropology
• Archaeology
• Economics
• History
• Political science
• Psychology
• Sociology
Technology
• Agricultural science
• Computer science
• Materials science
• Engineering
Medicine
• Human medicine
• Veterinary medicine
• Anatomy
• Neuroscience
• Neurology and neurosurgery
• Nutrition
• Pathology
• Pharmacy
• Timelines
• Portal
• Category
Vogel plane
In mathematics, the Vogel plane is a method of parameterizing simple Lie algebras by the eigenvalues α, β, γ of the Casimir operator on the symmetric square of the Lie algebra, which gives a point (α: β: γ) of P2/S3, the projective plane P2 divided out by the symmetric group S3 of permutations of coordinates. It was introduced by Vogel (1999) and is related to observations made by Deligne (1996). Landsberg & Manivel (2006) generalized Vogel's work to higher symmetric powers.
The point of the projective plane (modulo permutations) corresponding to a simple complex Lie algebra is given by three eigenvalues α, β, γ of the Casimir operator acting on spaces A, B, C, where the symmetric square of the Lie algebra (usually) decomposes as a sum of the complex numbers and 3 irreducible spaces A, B, C.
See also
• E7½
References
• Deligne, Pierre (1996), "La série exceptionnelle de groupes de Lie", Comptes Rendus de l'Académie des Sciences, Série I, 322 (4): 321–326, ISSN 0764-4442, MR 1378507
• Deligne, Pierre; Gross, Benedict H. (2002), "On the exceptional series, and its descendants", Comptes Rendus Mathématique, 335 (11): 877–881, doi:10.1016/S1631-073X(02)02590-6, ISSN 1631-073X, MR 1952563
• Landsberg, J. M.; Manivel, L. (2006), "A universal dimension formula for complex simple Lie algebras", Advances in Mathematics, 201 (2): 379–407, arXiv:math/0401296, doi:10.1016/j.aim.2005.02.007, ISSN 0001-8708, MR 2211533
• Vogel, Pierre (1999), The universal Lie algebra, Preprint
Universal space
In mathematics, a universal space is a certain metric space that contains all metric spaces whose dimension is bounded by some fixed constant. A similar definition exists in topological dynamics.
Definition
Given a class $\textstyle {\mathcal {C}}$ of topological spaces, $\textstyle \mathbb {U} \in {\mathcal {C}}$ is universal for $\textstyle {\mathcal {C}}$ if each member of $\textstyle {\mathcal {C}}$ embeds in $\textstyle \mathbb {U} $. Menger stated and proved the case $\textstyle d=1$ of the following theorem. The theorem in full generality was proven by Nöbeling.
Theorem:[1] The $\textstyle (2d+1)$-dimensional cube $\textstyle [0,1]^{2d+1}$ is universal for the class of compact metric spaces whose Lebesgue covering dimension is at most $\textstyle d$.
Nöbeling went further and proved:
Theorem: The subspace of $\textstyle [0,1]^{2d+1}$ consisting of the points at most $\textstyle d$ of whose coordinates are rational is universal for the class of separable metric spaces whose Lebesgue covering dimension is at most $\textstyle d$.
The last theorem was generalized by Lipscomb to the class of metric spaces of weight $\textstyle \alpha $, $\textstyle \alpha >\aleph _{0}$: there exists a one-dimensional metric space $\textstyle J_{\alpha }$ such that the subspace of $\textstyle J_{\alpha }^{2d+1}$ consisting of the points at most $\textstyle d$ of whose coordinates are "rational" (suitably defined) is universal for the class of metric spaces whose Lebesgue covering dimension is at most $\textstyle d$ and whose weight is less than $\textstyle \alpha $.[2]
Universal spaces in topological dynamics
Consider the category of topological dynamical systems $\textstyle (X,T)$ consisting of a compact metric space $\textstyle X$ and a homeomorphism $\textstyle T:X\rightarrow X$. The topological dynamical system $\textstyle (X,T)$ is called minimal if it has no proper non-empty closed $\textstyle T$-invariant subsets. It is called infinite if $\textstyle |X|=\infty $. A topological dynamical system $\textstyle (Y,S)$ is called a factor of $\textstyle (X,T)$ if there exists a continuous surjective mapping $\textstyle \varphi :X\rightarrow Y$ which is equivariant, i.e. $\textstyle \varphi (Tx)=S\varphi (x)$ for all $\textstyle x\in X$.
Similarly to the definition above, given a class $\textstyle {\mathcal {C}}$ of topological dynamical systems, $\textstyle \mathbb {U} \in {\mathcal {C}}$ is universal for $\textstyle {\mathcal {C}}$ if each member of $\textstyle {\mathcal {C}}$ embeds in $\textstyle \mathbb {U} $ through an equivariant continuous mapping. Lindenstrauss proved the following theorem:
Theorem[3]: Let $\textstyle d\in \mathbb {N} $. The compact metric topological dynamical system $\textstyle (X,T)$ where $\textstyle X=([0,1]^{d})^{\mathbb {Z} }$ and $\textstyle T:X\rightarrow X$ is the shift homeomorphism $\textstyle (\ldots ,x_{-2},x_{-1},\mathbf {x_{0}} ,x_{1},x_{2},\ldots )\rightarrow (\ldots ,x_{-1},x_{0},\mathbf {x_{1}} ,x_{2},x_{3},\ldots )$
is universal for the class of compact metric topological dynamical systems whose mean dimension is strictly less than $\textstyle {\frac {d}{36}}$ and which possess an infinite minimal factor.
In the same article Lindenstrauss asked what is the largest constant $\textstyle c$ such that a compact metric topological dynamical system whose mean dimension is strictly less than $\textstyle cd$ and which possesses an infinite minimal factor embeds into $\textstyle ([0,1]^{d})^{\mathbb {Z} }$. The results above implies $\textstyle c\geq {\frac {1}{36}}$. The question was answered by Lindenstrauss and Tsukamoto[4] who showed that $\textstyle c\leq {\frac {1}{2}}$ and Gutman and Tsukamoto[5] who showed that $\textstyle c\geq {\frac {1}{2}}$. Thus the answer is $\textstyle c={\frac {1}{2}}$.
See also
• Universal property
• Urysohn universal space
• Mean dimension
References
1. Hurewicz, Witold; Wallman, Henry (2015) [1941]. "V Covering and Imbedding Theorems §3 Imbedding of a compact n-dimensional space in I2n+1: Theorem V.2". Dimension Theory. Princeton Mathematical Series. Vol. 4. Princeton University Press. pp. 56–. ISBN 978-1400875665.
2. Lipscomb, Stephen Leon (2009). "The quest for universal spaces in dimension theory" (PDF). Notices Amer. Math. Soc. 56 (11): 1418–24.
3. Lindenstrauss, Elon (1999). "Mean dimension, small entropy factors and an embedding theorem. Theorem 5.1". Inst. Hautes Études Sci. Publ. Math. 89 (1): 227–262. doi:10.1007/BF02698858. S2CID 2413058.
4. Lindenstrauss, Elon; Tsukamoto, Masaki (March 2014). "Mean dimension and an embedding problem: An example". Israel Journal of Mathematics. 199 (2): 573–584. doi:10.1007/s11856-013-0040-9. ISSN 0021-2172. S2CID 2099527.
5. Gutman, Yonatan; Tsukamoto, Masaki (2020-07-01). "Embedding minimal dynamical systems into Hilbert cubes". Inventiones Mathematicae. 221 (1): 113–166. arXiv:1511.01802. Bibcode:2020InMat.221..113G. doi:10.1007/s00222-019-00942-w. ISSN 1432-1297. S2CID 119139371.
Universal Turing machine
In computer science, a universal Turing machine (UTM) is a Turing machine capable of computing any computable sequence,[1] as described by Alan Turing in his seminal paper "On Computable Numbers, with an Application to the Entscheidungsproblem". Common sense might say that a universal machine is impossible, but Turing proves that it is possible.[2] He suggested that we may compare a man in the process of computing a real number to a machine which is only capable of a finite number of conditions q1, q2, ..., qR, which will be called "m-configurations".[3] He then described the operation of such a machine, as described below, and argued:
"It is my contention that these operations include all those which are used in the computation of a number."[4]
Turing machines
Machine
• Turing machine equivalents
• Turing machine examples
• Turing machine gallery
Variants
• Alternating Turing machine
• Neural Turing machine
• Nondeterministic Turing machine
• Quantum Turing machine
• Post–Turing machine
• Probabilistic Turing machine
• Multitape Turing machine
• Multi-track Turing machine
• Symmetric Turing machine
• Total Turing machine
• Unambiguous Turing machine
• Universal Turing machine
• Zeno machine
Science
• Alan Turing
• Category:Turing machine
Alan Turing introduced the idea of such a machine in 1936–1937. This principle is considered to be the origin of the idea of a stored-program computer used by John von Neumann in 1946 for the "Electronic Computing Instrument" that now bears von Neumann's name: the von Neumann architecture.[5]
Introduction
Davis makes a persuasive argument that Turing's conception of what is now known as "the stored-program computer", of placing the "action table"—the instructions for the machine—in the same "memory" as the input data, strongly influenced John von Neumann's conception of the first American discrete-symbol (as opposed to analog) computer—the EDVAC. Davis quotes Time magazine to this effect, that "everyone who taps at a keyboard ... is working on an incarnation of a Turing machine", and that "John von Neumann [built] on the work of Alan Turing" (Davis 2000:193 quoting Time magazine of 29 March 1999).
Davis makes a case that Turing's Automatic Computing Engine (ACE) computer "anticipated" the notions of microprogramming (microcode) and RISC processors (Davis 2000:188). Knuth cites Turing's work on the ACE computer as designing "hardware to facilitate subroutine linkage" (Knuth 1973:225); Davis also references this work as Turing's use of a hardware "stack" (Davis 2000:237 footnote 18).
As the Turing machine was encouraging the construction of computers, the UTM was encouraging the development of the fledgling computer sciences. An early, if not the very first, assembler was proposed "by a young hot-shot programmer" for the EDVAC (Davis 2000:192). Von Neumann's "first serious program ... [was] to simply sort data efficiently" (Davis 2000:184). Knuth observes that the subroutine return embedded in the program itself rather than in special registers is attributable to von Neumann and Goldstine.[6] Knuth furthermore states that
The first interpretive routine may be said to be the "Universal Turing Machine" ... Interpretive routines in the conventional sense were mentioned by John Mauchly in his lectures at the Moore School in 1946 ... Turing took part in this development also; interpretive systems for the Pilot ACE computer were written under his direction.
— Knuth 1973:226
Davis briefly mentions operating systems and compilers as outcomes of the notion of program-as-data (Davis 2000:185).
Some, however, might raise issues with this assessment. At the time (mid-1940s to mid-1950s) a relatively small cadre of researchers were intimately involved with the architecture of the new "digital computers". Hao Wang (1954), a young researcher at this time, made the following observation:
Turing's theory of computable functions antedated but has not much influenced the extensive actual construction of digital computers. These two aspects of theory and practice have been developed almost entirely independently of each other. The main reason is undoubtedly that logicians are interested in questions radically different from those with which the applied mathematicians and electrical engineers are primarily concerned. It cannot, however, fail to strike one as rather strange that often the same concepts are expressed by very different terms in the two developments.
— Wang 1954, 1957:63
Wang hoped that his paper would "connect the two approaches". Indeed, Minsky confirms this: "that the first formulation of Turing-machine theory in computer-like models appears in Wang (1957)" (Minsky 1967:200). Minsky goes on to demonstrate Turing equivalence of a counter machine.
With respect to the reduction of computers to simple Turing equivalent models (and vice versa), Minsky's designation of Wang as having made "the first formulation" is open to debate. While both Minsky's paper of 1961 and Wang's paper of 1957 are cited by Shepherdson and Sturgis (1963), they also cite and summarize in some detail the work of European mathematicians Kaphengst (1959), Ershov (1959), and Péter (1958). The names of mathematicians Hermes (1954, 1955, 1961) and Kaphengst (1959) appear in the bibliographies of both Shepherdson–Sturgis (1963) and Elgot–Robinson (1961). Two other names of importance are Canadian researchers Melzak (1961) and Lambek (1961). For much more see Turing machine equivalents; references can be found at register machine.
Mathematical theory
With this encoding of action tables as strings, it becomes possible, in principle, for Turing machines to answer questions about the behaviour of other Turing machines. Most of these questions, however, are undecidable, meaning that the function in question cannot be calculated mechanically. For instance, the problem of determining whether an arbitrary Turing machine will halt on a particular input, or on all inputs, known as the Halting problem, was shown to be, in general, undecidable in Turing's original paper. Rice's theorem shows that any non-trivial question about the output of a Turing machine is undecidable.
A universal Turing machine can calculate any recursive function, decide any recursive language, and accept any recursively enumerable language. According to the Church–Turing thesis, the problems solvable by a universal Turing machine are exactly those problems solvable by an algorithm or an effective method of computation, for any reasonable definition of those terms. For these reasons, a universal Turing machine serves as a standard against which to compare computational systems, and a system that can simulate a universal Turing machine is called Turing complete.
An abstract version of the universal Turing machine is the universal function, a computable function which can be used to calculate any other computable function. The UTM theorem proves the existence of such a function.
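The idea of a machine that takes another machine's description as data can be made concrete in a few lines. The following minimal sketch is a Python interpreter for single-tape Turing machines; the dictionary format for the transition table is a simplification chosen for illustration, not a formal binary encoding. It runs the four-state example machine of Turing (1936) whose encoding is worked out by hand in a later section.

def run(delta, state, steps, blank="_"):
    # delta maps (state, scanned symbol) to (symbol to write, move, next state),
    # with move in {"L", "R"}; the tape is a sparse dict from position to symbol.
    tape, pos = {}, 0
    for _ in range(steps):
        write, move, state = delta[(state, tape.get(pos, blank))]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Turing's example machine: prints 0 and 1 on alternate squares forever.
delta = {("q1", "_"): ("0", "R", "q2"),
         ("q2", "_"): ("_", "R", "q3"),
         ("q3", "_"): ("1", "R", "q4"),
         ("q4", "_"): ("_", "R", "q1")}
print(run(delta, "q1", 8))   # 0_1_0_1_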
Efficiency
Without loss of generality, the input of a Turing machine can be assumed to be in the alphabet {0, 1}; any other finite alphabet can be encoded over {0, 1}. The behavior of a Turing machine M is determined by its transition function. This function can be easily encoded as a string over the alphabet {0, 1} as well. The size of the alphabet of M, the number of tapes it has, and the size of the state space can be deduced from the transition function's table. The distinguished states and symbols can be identified by their position, e.g. the first two states can by convention be the start and stop states. Consequently, every Turing machine can be encoded as a string over the alphabet {0, 1}. Additionally, we adopt the convention that every invalid encoding maps to a trivial Turing machine that immediately halts, and that every Turing machine can have an infinite number of encodings by padding the encoding with an arbitrary number of (say) 1's at the end, just like comments work in a programming language. It should be no surprise that we can achieve this encoding given the existence of a Gödel number and computational equivalence between Turing machines and μ-recursive functions. Similarly, our construction associates to every binary string α a Turing machine Mα.
Starting from the above encoding, in 1966 F. C. Hennie and R. E. Stearns showed that given a Turing machine Mα that halts on input x within N steps, then there exists a multi-tape universal Turing machine that halts on inputs α, x (given on different tapes) in CN log N, where C is a machine-specific constant that does not depend on the length of the input x, but does depend on M's alphabet size, number of tapes, and number of states. Effectively this is an ${\mathcal {O}}\left(N\log {N}\right)$ simulation, in big O notation.[7] The corresponding result for space-complexity rather than time-complexity is that we can simulate in a way that uses at most CN cells at any stage of the computation, an ${\mathcal {O}}(N)$ simulation.[8]
Smallest machines
When Alan Turing came up with the idea of a universal machine he had in mind the simplest computing model powerful enough to calculate all possible functions that can be calculated. Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols. He also showed that no universal Turing machine of one state could exist.
Marvin Minsky discovered a 7-state 4-symbol universal Turing machine in 1962 using 2-tag systems. Other small universal Turing machines have since been found by Yurii Rogozhin and others by extending this approach of tag system simulation. If we denote by (m, n) the class of UTMs with m states and n symbols the following tuples have been found: (15, 2), (9, 3), (6, 4), (5, 5), (4, 6), (3, 9), and (2, 18).[9][10][11] Rogozhin's (4, 6) machine uses only 22 instructions, and no standard UTM of lesser descriptional complexity is known.
However, generalizing the standard Turing machine model admits even smaller UTMs. One such generalization is to allow an infinitely repeated word on one or both sides of the Turing machine input, thus extending the definition of universality and known as "semi-weak" or "weak" universality, respectively. Small weakly universal Turing machines that simulate the Rule 110 cellular automaton have been given for the (6, 2), (3, 3), and (2, 4) state-symbol pairs.[12] The proof of universality for Wolfram's 2-state 3-symbol Turing machine further extends the notion of weak universality by allowing certain non-periodic initial configurations. Other variants on the standard Turing machine model that yield small UTMs include machines with multiple tapes or tapes of multiple dimension, and machines coupled with a finite automaton.
Machines with no internal states
If multiple heads are allowed on a Turing machine then no internal states are required; as "states" can be encoded in the tape. For example, consider a tape with 6 colours: 0, 1, 2, 0A, 1A, 2A. Consider a tape such as 0,0,1,2,2A,0,2,1 where a 3-headed Turing machine is situated over the triple (2,2A,0). The rules then convert any triple to another triple and move the 3-heads left or right. For example, the rules might convert (2,2A,0) to (2,1,0) and move the head left. Thus in this example, the machine acts like a 3-colour Turing machine with internal states A and B (represented by no letter). The case for a 2-headed Turing machine is very similar. Thus a 2-headed Turing machine can be Universal with 6 colours. It is not known what the smallest number of colours needed for a multi-headed Turing machine is or if a 2-colour Universal Turing machine is possible with multiple heads. It also means that rewrite rules are Turing complete since the triple rules are equivalent to rewrite rules. Extending the tape to two dimensions with a head sampling a letter and its 8 neighbours, only 2 colours are needed, as for example, a colour can be encoded in a vertical triple pattern such as 110.
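One step of such a 3-headed machine can be sketched as follows; the data layout is a choice made for this example, and the single rule shown is the one quoted in the text:

def step(tape, pos, rules):
    # Rewrite the scanned triple and move all three heads one cell together.
    new, move = rules[tuple(tape[pos:pos + 3])]
    tape[pos:pos + 3] = list(new)
    return pos + (1 if move == "R" else -1)

rules = {("2", "2A", "0"): (("2", "1", "0"), "L")}   # (2,2A,0) -> (2,1,0), move left
tape = ["0", "0", "1", "2", "2A", "0", "2", "1"]
pos = step(tape, 3, rules)                           # heads over (2, 2A, 0)
print(tape, pos)   # ['0', '0', '1', '2', '1', '0', '2', '1'] 2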
Example of universal-machine coding
For those who would undertake the challenge of designing a UTM exactly as Turing specified see the article by Davies in Copeland (2004:103ff). Davies corrects the errors in the original and shows what a sample run would look like. He claims to have successfully run a (somewhat simplified) simulation.
The following example is taken from Turing (1936). For more about this example, see Turing machine examples.
Turing used seven symbols { A, C, D, R, L, N, ; } to encode each 5-tuple; as described in the article Turing machine, his 5-tuples are only of types N1, N2, and N3. The number of each "m‑configuration" (instruction, state) is represented by "D" followed by a unary string of A's, e.g. "q3" = DAAA. In a similar manner, he encodes the symbols blank as "D", the symbol "0" as "DC", the symbol "1" as DCC, etc. The symbols "R", "L", and "N" remain as is.
After encoding each 5-tuple is then "assembled" into a string in order as shown in the following table:
Current m‑configuration | Tape symbol | Print-operation | Tape-motion | Final m‑configuration | Current m‑configuration code | Tape symbol code | Print-operation code | Tape-motion code | Final m‑configuration code | 5-tuple assembled code
q1 | blank | P0 | R | q2 | DA | D | DC | R | DAA | DADDCRDAA
q2 | blank | E | R | q3 | DAA | D | D | R | DAAA | DAADDRDAAA
q3 | blank | P1 | R | q4 | DAAA | D | DCC | R | DAAAA | DAAADDCCRDAAAA
q4 | blank | E | R | q1 | DAAAA | D | D | R | DA | DAAAADDRDA
Finally, the codes for all four 5-tuples are strung together into a code started by ";" and separated by ";" i.e.:
;DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA
This code he placed on alternate squares—the "F-squares" – leaving the "E-squares" (those liable to erasure) empty. The final assembly of the code on the tape for the U-machine consists of placing two special symbols ("e") one after the other, then the code separated out on alternate squares, and lastly the double-colon symbol "::" (blanks shown here with "." for clarity):
ee.;.D.A.D.D.C.R.D.A.A.;.D.A.A.D.D.R.D.A.A.A.;.D.A.A.A.D.D.C.C.R.D.A.A.A.A.;.D.A.A.A.A.D.D.R.D.A.::......
The U-machine's action-table (state-transition table) is responsible for decoding the symbols. Turing's action table keeps track of its place with markers "u", "v", "x", "y", "z" by placing them in "E-squares" to the right of "the marked symbol" – for example, to mark the current instruction z is placed to the right of ";" x is keeping the place with respect to the current "m‑configuration" DAA. The U-machine's action table will shuttle these symbols around (erasing them and placing them in different locations) as the computation progresses:
ee.; .D.A.D.D.C.R.D.A.A. ; zD.A.AxD.D.R.D.A.A.A.;.D.A.A.A.D.D.C.C.R.D.A.A.A.A.;.D.A.A.A.A.D.D.R.D.A.::......
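The assembled 5-tuple codes in the table above can be reproduced mechanically. In the sketch below the helper names are ours; erasing (E) is treated as printing the blank:

def m_config(n):
    # q_n is encoded as "D" followed by n copies of "A".
    return "D" + "A" * n

def symbol(k):
    # blank -> "D", "0" -> "DC", "1" -> "DCC", and so on.
    return "D" + "C" * k

# (state, scanned symbol, printed symbol, move, next state), with symbols
# given by index: 0 = blank, 1 = "0", 2 = "1".
tuples = [(1, 0, 1, "R", 2),   # q1 blank P0 R q2
          (2, 0, 0, "R", 3),   # q2 blank E  R q3
          (3, 0, 2, "R", 4),   # q3 blank P1 R q4
          (4, 0, 0, "R", 1)]   # q4 blank E  R q1

code = "".join(";" + m_config(q) + symbol(s) + symbol(p) + m + m_config(r)
               for (q, s, p, m, r) in tuples)
print(code)   # ;DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA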
Turing's action-table for his U-machine is very involved.
A number of other commentators (notably Penrose 1989) provide examples of ways to encode instructions for the Universal machine. As does Penrose, most commentators use only binary symbols i.e. only symbols { 0, 1 }, or { blank, mark | }. Penrose goes further and writes out his entire U-machine code (Penrose 1989:71–73). He asserts that it truly is a U-machine code, an enormous number that spans almost 2 full pages of 1's and 0's. For readers interested in simpler encodings for the Post–Turing machine the discussion of Davis in Steen (Steen 1980:251ff) may be useful.
Asperti and Ricciotti described a multi-tape UTM defined by composing elementary machines with very simple semantics, rather than explicitly giving its full action table. This approach was sufficiently modular to allow them to formally prove the correctness of the machine in the Matita proof assistant.
Programming Turing machines
Various higher level languages are designed to be compiled into a Turing machine. Examples include Laconic and Turing Machine Descriptor.[13][14]
See also
• Alternating Turing machine
• Von Neumann universal constructor – an attempt to build a self-replicating Turing machine
• Kleene's T predicate – a similar concept for µ-recursive functions
• Turing completeness
References
1. Turing, A.M. (1937), On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42: page 241. https://doi.org/10.1112/plms/s2-42.1.230
2. From lecture transcript attributed to John von Neumann, as quoted by Copeland in B.J.; Fan, Z. Turing and Von Neumann: From Logic to the Computer. Philosophies 2023, 8, 22. https://doi.org/10.3390/philosophies8020022
3. Turing, A.M. (1937), On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42: page 231. https://doi.org/10.1112/plms/s2-42.1.230
4. Turing, A.M. (1937), On Computable Numbers, with an Application to the Entscheidungsproblem. Proceedings of the London Mathematical Society, s2-42: page 232. https://doi.org/10.1112/plms/s2-42.1.230
5. Martin Davis, The universal computer : the road from Leibniz to Turing (2017)
6. In particular: Burks, Goldstine, von Neumann (1946), Preliminary discussion of the logical design of an electronic computing instrument, reprinted in Bell and Newell 1971
7. Arora and Barak, 2009, Theorem 1.9
8. Arora and Barak, 2009, Exercises 4.1
9. Rogozhin, 1996
10. Kudlek and Rogozhin, 2002
11. Neary and Woods, 2009
12. Neary and Woods, 2009b
13. "Shtetl-Optimized » Blog Archive » The 8000th Busy Beaver number eludes ZF set theory: new paper by Adam Yedidia and me". www.scottaaronson.com. 3 May 2016. Retrieved 29 December 2016.
14. "Laconic - Esolang". esolangs.org. Retrieved 29 December 2016.
General references
• Arora, Sanjeev; Barak, Boaz (2009). Complexity Theory: A Modern Approach. Cambridge University Press. ISBN 978-0-521-42426-4. section 1.4, "Machines as strings and the universal Turing machine" and 1.7, "Proof of theorem 1.9"
Original Paper
• Turing, A. M. (1936). "On Computable Numbers, with an Application to the Entscheidungsproblem" (PDF).
Seminal papers
• Hennie, F. C.; Stearns, R. E. (1966). "Two-Tape Simulation of Multitape Turing Machines". Journal of the ACM. 13 (4): 533. doi:10.1145/321356.321362. S2CID 2347143.
Implementation
• Kamvysselis (Kellis), Manolis (1999). "Scheme Implementation of a Universal Turing Machine". Self-published.
Formal verification
• Asperti, Andrea; Ricciotti, Wilmer (2015). "A formalization of multi-tape Turing machines" (PDF). Theoretical Computer Science. Elsevier. 603: 23–42. doi:10.1016/j.tcs.2015.07.013. ISSN 0304-3975.
Other references
• Copeland, Jack, ed. (2004), The Essential Turing: Seminal Writings in Computing, Logic, Philosophy, Artificial Intelligence, and Artificial Life plus The Secrets of Enigma, Oxford UK: Oxford University Press, ISBN 0-19-825079-7
• Davis, Martin (1980), "What is Computation?", in Steen, Lynn Arthur (ed.), Mathematics Today: Twelve Informal Essays, New York: Vintage Books (Random House), ISBN 978-0-394-74503-9.
• Davis, Martin (2000), Engines of Logic: Mathematicians and the origin of the Computer (1st ed.), New York NY: W. W. Norton & Company, ISBN 0-393-32229-7, (pb.)
• Goldstine, Herman H.; von Neumann, John, Planning and Coding of the Problems for an Electronic Computing Instrument; reprinted in Bell, C. Gordon; Newell, Allen (1971). Computer Structures: Readings and Examples. New York: McGraw-Hill Book Company. pp. 92–119. ISBN 0-07-004357-4.
• Herken, Rolf (1995), The Universal Turing Machine – A Half-Century Survey, Springer Verlag, ISBN 3-211-82637-8
• Knuth, Donald E. (1973), The Art of Computer Programming, vol. 1/Fundamental Algorithms (second ed.), Addison-Wesley Publishing Company The first of Knuth's series of three texts.
• Kudlek, Manfred; Rogozhin, Yurii (2002), "A universal Turing machine with 3 states and 9 symbols", in Werner Kuich; Grzegorz Rozenberg; Arto Salomaa (eds.), Developments in Language Theory: 5th International Conference, DLT 2001 Wien, Austria, July 16–21, 2001, Revised Papers, Lecture Notes in Computer Science, vol. 2295, Springer, pp. 311–318, doi:10.1007/3-540-46011-x_27, ISBN 978-3-540-43453-5
• Minsky, Marvin (1962), "Size and Structure of Universal Turing Machines using Tag Systems, Recursive Function Theory", Proc. Symp. Pure Mathematics, Providence RI: American Mathematical Society, 5: 229–238, doi:10.1090/pspum/005/0142452
• Neary, Turlough; Woods, Damien (2009), "Four Small Universal Turing Machines" (PDF), Fundamenta Informaticae, 91 (1): 123–144, doi:10.3233/FI-2009-0036
• Neary, Turlough; Woods, Damien (2009b), "Small Weakly Universal Turing Machines", 17th International Symposium on Fundamentals of Computation Theory, Lecture Notes in Computer Science, vol. 5699, Springer, pp. 262–273
• Penrose, Roger (1989), The Emperor's New Mind, Oxford UK: Oxford University Press, ISBN 0-19-851973-7, (hc.), (pb.)
• Rogozhin, Yurii (1996), "Small Universal Turing Machines", Theoretical Computer Science, 168 (2): 215–240, doi:10.1016/S0304-3975(96)00077-1
• Shannon, Claude (1956), "A Universal Turing Machine with Two Internal States", Automata Studies, Princeton, NJ: Princeton University Press, pp. 157–165
• Turing, A.M. (1936), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2, vol. 42, pp. 230–65, doi:10.1112/plms/s2-42.1.230, S2CID 73712
• Turing, A.M. (1938), "On Computable Numbers, with an Application to the Entscheidungsproblem: A correction", Proceedings of the London Mathematical Society, 2 (published 1937), vol. 43, no. 6, pp. 544–6, doi:10.1112/plms/s2-43.6.544
Davis, Martin, ed. (1965). The Undecidable (Reprint ed.). Hewlett, NY: Raven Press. pp. 115–154; with corrections to Turing's UTM by Emil Post (cf. footnote 11, p. 299).
External links
Smith, Alvy Ray. "A Business Card Universal Turing Machine" (PDF). Retrieved 2 January 2020.
Tautological bundle
In mathematics, the tautological bundle is a vector bundle occurring over a Grassmannian in a natural tautological way: for a Grassmannian of $k$-dimensional subspaces of $V$, given a point in the Grassmannian corresponding to a $k$-dimensional vector subspace $W\subseteq V$, the fiber over $W$ is the subspace $W$ itself. In the case of projective space the tautological bundle is known as the tautological line bundle.
The tautological bundle is also called the universal bundle since any vector bundle (over a compact space[1]) is a pullback of the tautological bundle; this is to say a Grassmannian is a classifying space for vector bundles. Because of this, the tautological bundle is important in the study of characteristic classes.
Tautological bundles are constructed both in algebraic topology and in algebraic geometry. In algebraic geometry, the tautological line bundle (as invertible sheaf) is
${\mathcal {O}}_{\mathbb {P} ^{n}}(-1),$
the dual of the hyperplane bundle or Serre's twisting sheaf ${\mathcal {O}}_{\mathbb {P} ^{n}}(1)$. The hyperplane bundle is the line bundle corresponding to the hyperplane (divisor) $\mathbb {P} ^{n-1}$ in $\mathbb {P} ^{n}$. The tautological line bundle and the hyperplane bundle are exactly the two generators of the Picard group of the projective space.[2]
In Michael Atiyah's "K-theory", the tautological line bundle over a complex projective space is called the standard line bundle. The sphere bundle of the standard bundle is usually called the Hopf bundle. (cf. Bott generator.)
More generally, there are also tautological bundles on a projective bundle of a vector bundle as well as a Grassmann bundle.
The older term canonical bundle has dropped out of favour, on the grounds that canonical is heavily overloaded as it is, in mathematical terminology, and (worse) confusion with the canonical class in algebraic geometry could scarcely be avoided.
Intuitive definition
Grassmannians by definition are the parameter spaces for linear subspaces, of a given dimension, in a given vector space $W$. If $G$ is a Grassmannian, and $V_{g}$ is the subspace of $W$ corresponding to $g$ in $G$, this is already almost the data required for a vector bundle: namely a vector space for each point $g$, varying continuously. All that can stop the definition of the tautological bundle from this indication, is the difficulty that the $V_{g}$ are going to intersect. Fixing this up is a routine application of the disjoint union device, so that the bundle projection is from a total space made up of identical copies of the $V_{g}$, that now do not intersect. With this, we have the bundle.
The projective space case is included. By convention $P(V)$ may usefully carry the tautological bundle in the dual space sense. That is, with $V^{*}$ the dual space, points of $P(V)$ carry the vector subspaces of $V^{*}$ that are their kernels, when considered as (rays of) linear functionals on $V^{*}$. If $V$ has dimension $n+1$, the tautological line bundle is one tautological bundle, and the other, just described, is of rank $n$.
Formal definition
Let $G_{n}(\mathbb {R} ^{n+k})$ be the Grassmannian of n-dimensional vector subspaces in $\mathbb {R} ^{n+k};$ as a set it is the set of all n-dimensional vector subspaces of $\mathbb {R} ^{n+k}.$ For example, if n = 1, it is the real projective k-space.
We define the tautological bundle γn, k over $G_{n}(\mathbb {R} ^{n+k})$ as follows. The total space of the bundle is the set of all pairs (V, v) consisting of a point V of the Grassmannian and a vector v in V; it is given the subspace topology of the Cartesian product $G_{n}(\mathbb {R} ^{n+k})\times \mathbb {R} ^{n+k}.$ The projection map π is given by π(V, v) = V. If F is the pre-image of V under π, it is given a structure of a vector space by a(V, v) + b(V, w) = (V, av + bw). Finally, to see local triviality, given a point X in the Grassmannian, let U be the set of all V such that the orthogonal projection p onto X maps V isomorphically onto X,[3] and then define
${\begin{cases}\phi :\pi ^{-1}(U)\to U\times X\subseteq G_{n}(\mathbb {R} ^{n+k})\times X\\\phi (V,v)=(V,p(v))\end{cases}}$
which is clearly a homeomorphism. Hence, the result is a vector bundle of rank n.
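The local trivialization can be illustrated numerically. The sketch below uses NumPy, and representing a subspace by a matrix with orthonormal columns is our choice of data structure:

import numpy as np

def orth_proj(X):
    # Orthogonal projection of R^(n+k) onto the column space of X,
    # valid because the columns of X are orthonormal.
    return X @ X.T

# G_1(R^3): lines through the origin in R^3.
X = np.array([[1.0], [0.0], [0.0]])                    # base point: the x-axis
V = np.linalg.qr(np.array([[1.0], [0.2], [0.1]]))[0]   # a nearby line
v = 3.0 * V[:, 0]                                      # a vector in the fiber over V

phi = (V, orth_proj(X) @ v)   # phi(V, v) = (V, p(v))
print(phi[1])                 # nonzero first coordinate: p maps V isomorphically onto X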
The above definition continues to make sense if we replace $\mathbb {R} $ with the complex field $\mathbb {C} .$
By definition, the infinite Grassmannian $G_{n}$ is the direct limit of $G_{n}(\mathbb {R} ^{n+k})$ as $k\to \infty .$ Taking the direct limit of the bundles γn, k gives the tautological bundle γn of $G_{n}.$ It is a universal bundle in the sense: for each compact space X, there is a natural bijection
${\begin{cases}[X,G_{n}]\to \operatorname {Vect} _{n}^{\mathbb {R} }(X)\\f\mapsto f^{*}(\gamma _{n})\end{cases}}$
where on the left the bracket means homotopy class and on the right is the set of isomorphism classes of real vector bundles of rank n. The inverse map is given as follows: since X is compact, any vector bundle E is a subbundle of a trivial bundle: $E\hookrightarrow X\times \mathbb {R} ^{n+k}$ for some k and so E determines a map
${\begin{cases}f_{E}:X\to G_{n}\\x\mapsto E_{x}\end{cases}}$
unique up to homotopy.
Remark: In turn, one can define a tautological bundle as a universal bundle; suppose there is a natural bijection
$[X,G_{n}]=\operatorname {Vect} _{n}^{\mathbb {R} }(X)$
for any paracompact space X. Since $G_{n}$ is the direct limit of compact spaces, it is paracompact and so there is a unique vector bundle over $G_{n}$ that corresponds to the identity map on $G_{n}.$ It is precisely the tautological bundle and, by restriction, one gets the tautological bundles over all $G_{n}(\mathbb {R} ^{n+k}).$
Hyperplane bundle
The hyperplane bundle H on a real projective k-space is defined as follows. The total space of H is the set of all pairs (L, f) consisting of a line L through the origin in $\mathbb {R} ^{k+1}$ and f a linear functional on L. The projection map π is given by π(L, f) = L (so that the fiber over L is the dual vector space of L.) The rest is exactly like the tautological line bundle.
In other words, H is the dual bundle of the tautological line bundle.
In algebraic geometry, the hyperplane bundle is the line bundle (as invertible sheaf) corresponding to the hyperplane divisor
$H=\mathbb {P} ^{n-1}\subset \mathbb {P} ^{n}$
given as, say, x0 = 0, when xi are the homogeneous coordinates. This can be seen as follows. If D is a (Weil) divisor on $X=\mathbb {P} ^{n},$ one defines the corresponding line bundle O(D) on X by
$\Gamma (U,O(D))=\{f\in K|(f)+D\geq 0{\text{ on }}U\}$
where K is the field of rational functions on X. Taking D to be H, we have:
${\begin{cases}O(H)\simeq O(1)\\f\mapsto fx_{0}\end{cases}}$
where x0 is, as usual, viewed as a global section of the twisting sheaf O(1). (In fact, the above isomorphism is part of the usual correspondence between Weil divisors and Cartier divisors.) Finally, the dual of the twisting sheaf corresponds to the tautological line bundle (see below).
Tautological line bundle in algebraic geometry
In algebraic geometry, this notion exists over any field k. The concrete definition is as follows. Let $A=k[y_{0},\dots ,y_{n}]$ and $\mathbb {P} ^{n}=\operatorname {Proj} A$. Note that we have:
$\mathbf {Spec} \left({\mathcal {O}}_{\mathbb {P} ^{n}}[x_{0},\ldots ,x_{n}]\right)=\mathbb {A} _{\mathbb {P} ^{n}}^{n+1}=\mathbb {A} ^{n+1}\times _{k}{\mathbb {P} ^{n}}$
where Spec is relative Spec. Now, put:
$L=\mathbf {Spec} \left({\mathcal {O}}_{\mathbb {P} ^{n}}[x_{0},\dots ,x_{n}]/I\right)$
where I is the ideal sheaf generated by global sections $x_{i}y_{j}-x_{j}y_{i}$. Then L is a closed subscheme of $\mathbb {A} _{\mathbb {P} ^{n}}^{n+1}$ over the same base scheme $\mathbb {P} ^{n}$; moreover, the closed points of L are exactly those (x, y) of $\mathbb {A} ^{n+1}\times _{k}\mathbb {P} ^{n}$ such that either x is zero or the image of x in $\mathbb {P} ^{n}$ is y. Thus, L is the tautological line bundle as defined before if k is the field of real or complex numbers.
In more concise terms, L is the blow-up of the origin of the affine space $\mathbb {A} ^{n+1}$, where the locus x = 0 in L is the exceptional divisor. (cf. Hartshorne, Ch. I, the end of § 4.)
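To see this concretely in the smallest case (a worked specialization, not additional theory), take n = 1. Then I is generated by the single section $x_{0}y_{1}-x_{1}y_{0},$ so the closed points of L form
$L=\{((x_{0},x_{1}),[y_{0}:y_{1}])\in \mathbb {A} ^{2}\times _{k}\mathbb {P} ^{1}\mid x_{0}y_{1}=x_{1}y_{0}\},$
the pairs (x, y) with x lying on the line y. This is the usual description of the blow-up of $\mathbb {A} ^{2}$ at the origin, with the exceptional divisor x = 0 lying over the whole $\mathbb {P} ^{1}.$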
In general, $\mathbf {Spec} (\operatorname {Sym} {\check {E}})$ is the algebraic vector bundle corresponding to a locally free sheaf E of finite rank.[4] Since we have the exact sequence:
$0\to I\to {\mathcal {O}}_{\mathbb {P} ^{n}}[x_{0},\ldots ,x_{n}]{\overset {x_{i}\mapsto y_{i}}{\longrightarrow }}\operatorname {Sym} {\mathcal {O}}_{\mathbb {P} ^{n}}(1)\to 0,$
the tautological line bundle L, as defined above, corresponds to the dual ${\mathcal {O}}_{\mathbb {P} ^{n}}(-1)$ of Serre's twisting sheaf. In practice, the two notions (the tautological line bundle and the dual of the twisting sheaf) are used interchangeably.
Over a field, its dual line bundle is the line bundle associated to the hyperplane divisor H, whose global sections are the linear forms. Its Chern class is −H. This is an example of an anti-ample line bundle. Over $\mathbb {C} ,$ this is equivalent to saying that it is a negative line bundle, meaning that minus its Chern class is the de Rham class of the standard Kähler form.
Facts
• The tautological line bundle γ1, k is locally trivial but not trivial, for k ≥ 1. This remains true over other fields.
In fact, it is straightforward to show that, for k = 1, the real tautological line bundle is none other than the well-known bundle whose total space is the Möbius strip. For a full proof of the above fact, see Milnor & Stasheff;[5] a sketch of the argument follows.
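The sketch (following the idea of the cited proof): a trivial line bundle admits a nowhere-zero continuous section, so it suffices to show that every section of γ1, 1 vanishes somewhere. Writing the line at angle θ as $L_{\theta }=\mathbb {R} \cdot (\cos \theta ,\sin \theta ),$ any section takes the form $s(L_{\theta })=f(\theta )(\cos \theta ,\sin \theta )$ for a continuous function f satisfying $f(\theta +\pi )=-f(\theta ),$ since $L_{\theta +\pi }=L_{\theta }$ while the chosen direction vector reverses. In particular $f(\pi )=-f(0),$ so by the intermediate value theorem f vanishes at some θ; hence every section has a zero and the bundle is not trivial.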
• The Picard group of line bundles on $\mathbb {P} (V)$ is infinite cyclic, and the tautological line bundle is a generator.
• In the case of projective space, where the tautological bundle is a line bundle, the associated invertible sheaf of sections is ${\mathcal {O}}(-1)$, the tensor inverse (i.e., the dual vector bundle) of the hyperplane bundle or Serre twist sheaf ${\mathcal {O}}(1)$; in other words, the hyperplane bundle is the generator of the Picard group having positive degree (as a divisor) and the tautological bundle is its opposite: the generator of negative degree.
See also
• Hopf bundle
• Stiefel-Whitney class
• Euler sequence
• Chern class (The Chern classes of the tautological bundles are the algebraically independent generators of the cohomology ring of the infinite Grassmannian.)
• Borel's theorem
• Thom space (The sequence of Thom spaces of the tautological bundles γn, as n → ∞, is called the Thom spectrum.)
• Grassmann bundle
References
1. Over a noncompact but paracompact base, this remains true provided one uses the infinite Grassmannian.
2. In literature and textbooks, they are both often called canonical generators.
3. U is open since $G_{n}(\mathbb {R} ^{n+k})$ is given a topology such that
${\begin{cases}G_{n}(\mathbb {R} ^{n+k})\to \operatorname {End} (\mathbb {R} ^{n+k})\\V\mapsto p_{V}\end{cases}}$
where $p_{V}$ is the orthogonal projection onto V, is a homeomorphism onto its image.
4. Editorial note: this definition differs from Hartshorne in that he does not take dual, but is consistent with the standard practice and the other parts of Wikipedia.
5. Milnor & Stasheff 1974, §2. Theorem 2.1.
Sources
• Atiyah, Michael Francis (1989), K-theory, Advanced Book Classics (2nd ed.), Addison-Wesley, ISBN 978-0-201-09394-0, MR 1043170
• Griffiths, Phillip; Harris, Joseph (1994), Principles of algebraic geometry, Wiley Classics Library, New York: John Wiley & Sons, doi:10.1002/9781118032527, ISBN 978-0-471-05059-9, MR 1288523.
• Hartshorne, Robin (1977), Algebraic Geometry, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157, OCLC 13348052.
• Milnor, John W.; Stasheff, James D. (1974), Characteristic Classes, Annals of Mathematics Studies, vol. 76, Princeton, New Jersey: Princeton University Press, MR 0440554
• Rubei, Elena (2014), Algebraic Geometry: A Concise Dictionary, Berlin/Boston: Walter De Gruyter, ISBN 978-3-11-031622-3
Universal vertex
In graph theory, a universal vertex is a vertex of an undirected graph that is adjacent to all other vertices of the graph. It may also be called a dominating vertex, as it forms a one-element dominating set in the graph. (It is not to be confused with a universally quantified vertex in the logic of graphs.)
A graph that contains a universal vertex may be called a cone. In this context, the universal vertex may also be called the apex of the cone.[1] However, this terminology conflicts with the terminology of apex graphs, in which an apex is a vertex whose removal leaves a planar subgraph.
In special families of graphs
The stars are exactly the trees that have a universal vertex, and may be constructed by adding a universal vertex to an independent set. The wheel graphs, similarly, may be formed by adding a universal vertex to a cycle graph.[2] In geometry, the three-dimensional pyramids have wheel graphs as their skeletons, and more generally the graph of any higher-dimensional pyramid has a universal vertex as the apex of the pyramid.
The trivially perfect graphs (the comparability graphs of order-theoretic trees) always contain a universal vertex, the root of the tree, and more strongly they may be characterized as the graphs in which every connected induced subgraph contains a universal vertex.[3] The connected threshold graphs form a subclass of the trivially perfect graphs, so they also contain a universal vertex; they may be defined as the graphs that can be formed by repeated addition of either a universal vertex or an isolated vertex (one with no incident edges).[4]
The friendship theorem of Paul Erdős, Alfréd Rényi, and Vera T. Sós (1966) states that, if every two vertices in a finite graph have exactly one shared neighbor, then the graph contains a universal vertex. The graphs described by this theorem are the friendship graphs, formed by systems of triangles connected together at a common shared vertex, the universal vertex.[5]
Every graph with a universal vertex is a dismantlable graph, meaning that it can be reduced to a single vertex by repeatedly removing vertices whose closed neighborhoods are subsets of other vertices' closed neighborhoods. Any removal sequence that leaves the universal vertex in place, removing all of the other vertices, fits this definition. Almost all dismantlable graphs have a universal vertex, in the sense that the fraction of $n$-vertex dismantlable graphs that have a universal vertex tends to one in the limit as $n$ goes to infinity.[6]
As a special case of the observation that the domination number increases at most multiplicatively in strong products of graphs,[7] a strong product has a universal vertex if and only if both of its factors do.
Recognition
In a graph with n vertices, a universal vertex is a vertex whose degree is exactly n − 1. Therefore, like the split graphs, graphs with a universal vertex can be recognized purely by their degree sequences, without looking at the structure of the graph.
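Because the criterion depends only on vertex degrees, it yields a simple linear-time check. The following Python sketch illustrates this (the function name and the adjacency-list input format are illustrative, not from any particular library):

def universal_vertices(adj):
    # adj maps each vertex to the set of its neighbours;
    # a vertex is universal exactly when its degree equals n - 1.
    n = len(adj)
    return [v for v, nbrs in adj.items() if len(nbrs) == n - 1]

# A star with centre 0 and three leaves: 0 is the unique universal vertex.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
assert universal_vertices(star) == [0]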
The property of having a universal vertex can be expressed by a formula in the first-order logic of graphs. Using $\sim $ to indicate the adjacency relation in a graph, a graph $G$ has a universal vertex if and only if it models the formula
$\exists u\forall v{\bigl (}(u\neq v)\Rightarrow (u\sim v){\bigr )}.$
The existence of this formula, and its small number of alternations between universal and existential quantifiers, can be used in a fixed-parameter tractable algorithm for testing whether all components of a graph can be made to have universal vertices by $k$ steps of removing a vertex from each component.[8]
References
1. Larrión, F.; de Mello, C. P.; Morgana, A.; Neumann-Lara, V.; Pizaña, M. A. (2004), "The clique operator on cographs and serial graphs", Discrete Mathematics, 282 (1–3): 183–191, doi:10.1016/j.disc.2003.10.023, MR 2059518.
2. Bonato, Anthony (2008), A course on the web graph, Graduate Studies in Mathematics, vol. 89, Atlantic Association for Research in the Mathematical Sciences (AARMS), Halifax, NS, p. 7, doi:10.1090/gsm/089, ISBN 978-0-8218-4467-0, MR 2389013.
3. Wolk, E. S. (1962), "The comparability graph of a tree", Proceedings of the American Mathematical Society, 13 (5): 789–795, doi:10.2307/2034179, JSTOR 2034179, MR 0172273.
4. Chvátal, Václav; Hammer, Peter Ladislaw (1977), "Aggregation of inequalities in integer programming", in Hammer, P. L.; Johnson, E. L.; Korte, B. H.; Nemhauser, G. L. (eds.), Studies in Integer Programming (Proc. Worksh. Bonn 1975), Annals of Discrete Mathematics, vol. 1, Amsterdam: North-Holland, pp. 145–162.
5. Erdős, Paul; Rényi, Alfréd; Sós, Vera T. (1966), "On a problem of graph theory" (PDF), Studia Sci. Math. Hungar., 1: 215–235.
6. Bonato, Anthony; Kemkes, Graeme; Prałat, Paweł (2012), "Almost all cop-win graphs contain a universal vertex", Discrete Mathematics, 312 (10): 1652–1657, doi:10.1016/j.disc.2012.02.018, MR 2901161.
7. Lakshmanan, S. Aparna; Vijayakumar, A. (2009), "A note on some domination parameters in graph products", Journal of Combinatorial Mathematics and Combinatorial Computing, 69: 31–37, MR 2517304
8. Fomin, Fedor V.; Golovach, Petr A.; Thilikos, Dimitrios M. (2021), "Parameterized complexity of elimination distance to first-order logic properties", 36th Annual ACM/IEEE Symposium on Logic in Computer Science, LICS 2021, Rome, Italy, June 29 - July 2, 2021, IEEE, pp. 1–13, arXiv:2104.02998, doi:10.1109/LICS52264.2021.9470540, S2CID 233169117
External links
• Weisstein, Eric W., "Cone Graph", MathWorld
Universally Baire set
In the mathematical field of descriptive set theory, a set of real numbers (or more generally a subset of the Baire space or Cantor space) is called universally Baire if it has a certain strong regularity property. Universally Baire sets play an important role in Ω-logic, a very strong logical system invented by W. Hugh Woodin and the centerpiece of his argument against the continuum hypothesis of Georg Cantor.
Definition
A subset A of the Baire space is universally Baire if it has the following equivalent properties:
1. For every notion of forcing, there are trees T and U such that A is the projection of the set of all branches through T, and it is forced that the projections of the set of branches through T and of the set of branches through U are complements of each other.
2. For every compact Hausdorff space Ω, and every continuous function f from Ω to the Baire space, the preimage of A under f has the property of Baire in Ω.
3. For every cardinal λ and every continuous function f from $\lambda ^{\omega }$ to the Baire space, the preimage of A under f has the property of Baire.
Catenary ring
In mathematics, a commutative ring R is catenary if for any pair of prime ideals p, q, any two strictly increasing chains
p = p0 ⊂ p1 ⊂ ... ⊂ pn = q
of prime ideals are contained in maximal strictly increasing chains from p to q of the same (finite) length. In a geometric situation, in which the dimension of an algebraic variety attached to a prime ideal will decrease as the prime ideal becomes bigger, the length of such a chain n is usually the difference in dimensions.
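For instance (a standard illustration): in the polynomial ring k[x, y], every maximal strictly increasing chain of primes from p = (0) to q = (x, y) has length 2, such as $(0)\subset (x)\subset (x,y);$ no maximal chain can skip a height, since a height-one prime can always be inserted between (0) and (x, y). This reflects the fact, recorded below, that polynomial rings over a field are universally catenary.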
A ring is called universally catenary if all finitely generated algebras over it are catenary rings.
The word 'catenary' is derived from the Latin word catena, which means "chain".
There is the following chain of inclusions.
Universally catenary rings ⊃ Cohen–Macaulay rings ⊃ Gorenstein rings ⊃ complete intersection rings ⊃ regular local rings
Dimension formula
Suppose that A is a Noetherian domain and B is a domain containing A that is finitely generated over A. If P is a prime ideal of B and p its intersection with A, then
${\text{height}}(P)\leq {\text{height}}(p)+{\text{tr.deg.}}_{A}(B)-{\text{tr.deg.}}_{\kappa (p)}(\kappa (P)).$
The dimension formula for universally catenary rings says that equality holds if A is universally catenary. Here κ(P) denotes the residue field of P and tr.deg. means the transcendence degree (of quotient fields). In fact, even when A is not universally catenary, equality holds whenever $B=A[x_{1},\dots ,x_{n}]$ is a polynomial ring over A.[1]
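As a simple worked instance of the formula (a routine check, not taken from the cited source): let $A=k$ be a field and $B=k[x],$ with $P=(x)$ and $p=P\cap A=(0).$ Then ${\text{height}}(P)=1,$ ${\text{height}}(p)=0,$ ${\text{tr.deg.}}_{A}(B)=1,$ and $\kappa (p)=\kappa (P)=k,$ so the formula reads $1\leq 0+1-0,$ with equality, as expected since a field is universally catenary.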
Examples
Almost all Noetherian rings that appear in algebraic geometry are universally catenary. In particular the following rings are universally catenary:
• Complete Noetherian local rings
• Dedekind domains (and fields)
• Cohen-Macaulay rings (and regular local rings)
• Any localization of a universally catenary ring
• Any finitely generated algebra over a universally catenary ring.
A ring that is catenary but not universally catenary
It is delicate to construct examples of Noetherian rings that are not universally catenary. The first example was found by Masayoshi Nagata (1956, 1962, page 203 example 2), who found a 2-dimensional Noetherian local domain that is catenary but not universally catenary.
Nagata's example is as follows. Choose a field k and a formal power series $z=\sum _{i>0}a_{i}x^{i}$ in the ring S of formal power series in x over k such that z and x are algebraically independent.
Define $z_{1}=z$ and $z_{i+1}=z_{i}/x-a_{i}.$
Let R be the (non-Noetherian) ring generated by x and all the elements $z_{i}.$
Let m be the ideal (x), and let n be the ideal generated by $x-1$ and all the elements $z_{i}.$ These are both maximal ideals of R, with residue fields isomorphic to k. The local ring Rm is a regular local ring of dimension 1 (the proof of this uses the fact that z and x are algebraically independent) and the local ring Rn is a regular Noetherian local ring of dimension 2.
Let B be the localization of R with respect to all elements not in either m or n. Then B is a 2-dimensional Noetherian semi-local ring with 2 maximal ideals, mB (of height 1) and nB (of height 2).
Let I be the Jacobson radical of B, and let A = k+I. The ring A is a local domain of dimension 2 with maximal ideal I, so is catenary because all 2-dimensional local domains are catenary. The ring A is Noetherian because B is Noetherian and is a finite A-module. However A is not universally catenary, because if it were then the ideal mB of B would have the same height as mB∩A by the dimension formula for universally catenary rings, but the latter ideal has height equal to dim(A)=2.
Nagata's example is also a quasi-excellent ring, so gives an example of a quasi-excellent ring that is not an excellent ring.
See also
• Formally catenary ring (which is the same as a universally catenary ring).
References
1. http://www.math.lsa.umich.edu/~hochster/615W14/615.pdf
• H. Matsumura, Commutative algebra 1980 ISBN 0-8053-7026-9.
• Nagata, Masayoshi (1956), "On the chain problem of prime ideals", Nagoya Math. J., 10: 51–64, doi:10.1017/S0027763000000076, MR 0078974, S2CID 122444738
• Nagata, Masayoshi Local rings. Interscience Tracts in Pure and Applied Mathematics, No. 13 Interscience Publishers a division of John Wiley & Sons, New York-London 1962, reprinted by R. E. Krieger Pub. Co (1975) ISBN 0-88275-228-6
Radicial morphism
In algebraic geometry, a morphism of schemes
f: X → Y
is called radicial or universally injective if, for every field K, the induced map X(K) → Y(K) is injective. (EGA I, (3.5.4)) This is a generalization of the notion of a purely inseparable extension of fields (sometimes called a radicial extension, which should not be confused with a radical extension).
It suffices to check this for K algebraically closed.
This is equivalent to the following condition: f is injective on the underlying topological spaces and, for every point x in X, the extension of the residue fields
k(f(x)) ⊂ k(x)
is radicial, i.e. purely inseparable.
It is also equivalent to every base change of f being injective on the underlying topological spaces. (Thus the term universally injective.)
Radicial morphisms are stable under composition, products and base change. If g ∘ f is radicial, so is f.
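A standard pair of examples may help (stated here as illustrations): if k has characteristic p, the inclusion $k(t^{p})\subset k(t)$ is purely inseparable of degree p, and the induced morphism $\operatorname {Spec} k(t)\to \operatorname {Spec} k(t^{p})$ is radicial: both spaces are single points, the residue field extension is purely inseparable, and for any field K an embedding of $k(t^{p})$ into K extends in at most one way to $k(t).$ By contrast, $\operatorname {Spec} \mathbb {Q} (i)\to \operatorname {Spec} \mathbb {Q} $ is not radicial: taking $K={\overline {\mathbb {Q} }},$ the two embeddings $i\mapsto \pm i$ give two distinct K-points of $\operatorname {Spec} \mathbb {Q} (i)$ over the single K-point of $\operatorname {Spec} \mathbb {Q} .$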
References
• Grothendieck, Alexandre; Dieudonné, Jean (1960), "Éléments de géométrie algébrique (rédigés avec la collaboration de Jean Dieudonné) : I. Le langage des schémas", Publications Mathématiques de l'IHÉS, 4 (1): 5–228, doi:10.1007/BF02684778, ISSN 1618-1913, section I.3.5.
• Bourbaki, Nicolas (1988), Algebra, Berlin, New York: Springer-Verlag, ISBN 978-3-540-19373-9, see section V.5.
Universally measurable set
In mathematics, a subset $A$ of a Polish space $X$ is universally measurable if it is measurable with respect to every complete probability measure on $X$ that measures all Borel subsets of $X$. In particular, a universally measurable set of reals is necessarily Lebesgue measurable (see § Finiteness condition below).
Every analytic set is universally measurable. It follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set is universally measurable.
Finiteness condition
The condition that the measure be a probability measure (that is, that the measure of $X$ itself be 1) is less restrictive than it may appear. For example, Lebesgue measure on the reals is not a probability measure, yet every universally measurable set is Lebesgue measurable. To see this, divide the real line into countably many intervals of length 1; say, N0=[0,1), N1=[1,2), N2=[-1,0), N3=[2,3), N4=[-2,-1), and so on. Now letting μ be Lebesgue measure, define a new measure ν by
$\nu (A)=\sum _{i=0}^{\infty }{\frac {1}{2^{i+1}}}\mu (A\cap N_{i})$
Then ν is easily seen to be a probability measure on the reals, and a set is ν-measurable if and only if it is Lebesgue measurable. More generally, a universally measurable set must be measurable with respect to every sigma-finite measure that measures all Borel sets.
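To verify the normalization (a one-line computation): the intervals $N_{i}$ partition the real line and each has Lebesgue measure 1, so
$\nu (\mathbb {R} )=\sum _{i=0}^{\infty }{\frac {1}{2^{i+1}}}\mu (N_{i})=\sum _{i=0}^{\infty }{\frac {1}{2^{i+1}}}=1,$
as required of a probability measure.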
Example contrasting with Lebesgue measurability
Suppose $A$ is a subset of Cantor space $2^{\omega }$; that is, $A$ is a set of infinite sequences of zeroes and ones. By putting a binary point before such a sequence, the sequence can be viewed as a real number between 0 and 1 (inclusive), with some unimportant ambiguity. Thus we can think of $A$ as a subset of the interval [0,1], and evaluate its Lebesgue measure, if that is defined. That value is sometimes called the coin-flipping measure of $A$, because it is the probability of producing a sequence of heads and tails that is an element of $A$ upon flipping a fair coin infinitely many times.
Now it follows from the axiom of choice that there are some such $A$ without a well-defined Lebesgue measure (or coin-flipping measure). That is, for such an $A$, the probability that the sequence of flips of a fair coin will wind up in $A$ is not well-defined. This is a pathological property of $A$ that says that $A$ is "very complicated" or "ill-behaved".
From such a set $A$, form a new set $A'$ by performing the following operation on each sequence in $A$: Intersperse a 0 at every even position in the sequence, moving the other bits to make room. Although $A'$ is not intuitively any "simpler" or "better-behaved" than $A$, the probability that the sequence of flips of a fair coin will be in $A'$ is well-defined. Indeed, to be in $A'$, the coin must come up tails on every even-numbered flip, which happens with probability zero.
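Explicitly (a routine estimate): the event E of tails at every even position satisfies $\mu (E)\leq 2^{-n}$ for every n, by considering only the first n even-numbered flips, so $\mu (E)=0.$ Since $A'\subseteq E$ and the coin-flipping (Lebesgue) measure is complete, $A'$ is measurable with measure 0.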
However, $A'$ is not universally measurable. To see this, we can test it against a biased coin that always comes up tails on even-numbered flips and is fair on odd-numbered flips. For a set of sequences to be universally measurable, an arbitrarily biased coin may be used (even one that can "remember" the sequence of flips that has gone before) and the probability that the sequence of its flips ends up in the set must be well-defined. However, when $A'$ is tested by the coin we mentioned (the one that always comes up tails on even-numbered flips, and is fair on odd-numbered flips), the probability of hitting $A'$ is not well-defined (for the same reason why $A$ cannot be tested by the fair coin). Thus, $A'$ is not universally measurable.
Measure theory
Basic concepts
• Absolute continuity of measures
• Lebesgue integration
• Lp spaces
• Measure
• Measure space
• Probability space
• Measurable space/function
Sets
• Almost everywhere
• Atom
• Baire set
• Borel set
• equivalence relation
• Borel space
• Carathéodory's criterion
• Cylindrical σ-algebra
• Cylinder set
• 𝜆-system
• Essential range
• infimum/supremum
• Locally measurable
• π-system
• σ-algebra
• Non-measurable set
• Vitali set
• Null set
• Support
• Transverse measure
• Universally measurable
Types of Measures
• Atomic
• Baire
• Banach
• Besov
• Borel
• Brown
• Complex
• Complete
• Content
• (Logarithmically) Convex
• Decomposable
• Discrete
• Equivalent
• Finite
• Inner
• (Quasi-) Invariant
• Locally finite
• Maximising
• Metric outer
• Outer
• Perfect
• Pre-measure
• (Sub-) Probability
• Projection-valued
• Radon
• Random
• Regular
• Borel regular
• Inner regular
• Outer regular
• Saturated
• Set function
• σ-finite
• s-finite
• Signed
• Singular
• Spectral
• Strictly positive
• Tight
• Vector
Particular measures
• Counting
• Dirac
• Euler
• Gaussian
• Haar
• Harmonic
• Hausdorff
• Intensity
• Lebesgue
• Infinite-dimensional
• Logarithmic
• Product
• Projections
• Pushforward
• Spherical measure
• Tangent
• Trivial
• Young
Maps
• Measurable function
• Bochner
• Strongly
• Weakly
• Convergence: almost everywhere
• of measures
• in measure
• of random variables
• in distribution
• in probability
• Cylinder set measure
• Random: compact set
• element
• measure
• process
• variable
• vector
• Projection-valued measure
Main results
• Carathéodory's extension theorem
• Convergence theorems
• Dominated
• Monotone
• Vitali
• Decomposition theorems
• Hahn
• Jordan
• Maharam's
• Egorov's
• Fatou's lemma
• Fubini's
• Fubini–Tonelli
• Hölder's inequality
• Minkowski inequality
• Radon–Nikodym
• Riesz–Markov–Kakutani representation theorem
Other results
• Disintegration theorem
• Lifting theory
• Lebesgue's density theorem
• Lebesgue differentiation theorem
• Sard's theorem
For Lebesgue measure
• Isoperimetric inequality
• Brunn–Minkowski theorem
• Milman's reverse
• Minkowski–Steiner formula
• Prékopa–Leindler inequality
• Vitale's random Brunn–Minkowski inequality
Applications & related
• Convex analysis
• Descriptive set theory
• Probability theory
• Real analysis
• Spectral theory
Domain of discourse
In the formal sciences, the domain of discourse, also called the universe of discourse, universal set, or simply universe, is the set of entities over which certain variables of interest in some formal treatment may range.
Overview
The domain of discourse is usually identified in the preliminaries, so that there is no need in the further treatment to specify each time the range of the relevant variables.[1] Many logicians distinguish, sometimes only tacitly, between the domain of a science and the universe of discourse of a formalization of the science.[2]
Examples
For example, in an interpretation of first-order logic, the domain of discourse is the set of individuals over which the quantifiers range. A proposition such as ∀x (x² ≠ 2) is ambiguous if no domain of discourse has been identified. In one interpretation, the domain of discourse could be the set of real numbers; in another interpretation, it could be the set of natural numbers. If the domain of discourse is the set of real numbers, the proposition is false, with x = √2 as counterexample; if the domain is the set of natural numbers, the proposition is true, since 2 is not the square of any natural number.
Universe of discourse
The term "universe of discourse" generally refers to the collection of objects being discussed in a specific discourse. In model-theoretical semantics, a universe of discourse is the set of entities that a model is based on. The concept universe of discourse is generally attributed to Augustus De Morgan (1846) but the name was used for the first time by George Boole (1854) on page 42 of his Laws of Thought. Boole's definition is quoted below. The concept, probably discovered independently by Boole in 1847, played a crucial role in his philosophy of logic especially in his principle of wholistic reference.
Boole’s 1854 definition
In every discourse, whether of the mind conversing with its own thoughts, or of the individual in his intercourse with others, there is an assumed or expressed limit within which the subjects of its operation are confined. The most unfettered discourse is that in which the words we use are understood in the widest possible application, and for them the limits of discourse are co-extensive with those of the universe itself. But more usually we confine ourselves to a less spacious field. Sometimes, in discoursing of men we imply (without expressing the limitation) that it is of men only under certain circumstances and conditions that we speak, as of civilized men, or of men in the vigour of life, or of men under some other condition or relation. Now, whatever may be the extent of the field within which all the objects of our discourse are found, that field may properly be termed the universe of discourse. Furthermore, this universe of discourse is in the strictest sense the ultimate subject of the discourse.
— George Boole, The Laws of Thought. 1854/2003. p. 42.[3]
See also
• Domain of a function
• Domain theory
• Interpretation (logic)
• Quantifier (logic)
• Term algebra
• Universe (mathematics)
References
1. Corcoran, John. Universe of discourse. Cambridge Dictionary of Philosophy, Cambridge University Press, 1995, p. 941.
2. José Miguel Sagüillo, Domains of sciences, universe of discourse, and omega arguments, History and philosophy of logic, vol. 20 (1999), pp. 267–280.
3. Facsimile of 1854 edition, with an introduction by J. Corcoran. Buffalo: Prometheus Books (2003). Reviewed by James van Evra in Philosophy in Review 24 (2004): 167–169.
UCPH Department of Mathematical Sciences
The UCPH Department of Mathematical Sciences (Danish: Institut for Matematiske Fag) is a department under the Faculty of Science at the University of Copenhagen (UCPH). The department is based at the university's North Campus in Copenhagen.
Location
The department is located in the E building of the Hans Christian Ørsted Institute, on Universitetsparken 5 in Copenhagen, Denmark.
History
From the founding of the University of Copenhagen in 1479, mathematics had been part of the Faculty of Philosophy. In 1850 it was moved to the new Faculty of Mathematics and Natural Sciences. The Institute for Mathematical Sciences was first created in 1934, next to the Niels Bohr Institute building, after the Carlsberg Foundation donated money for a building in celebration of the university's 450th anniversary in 1929. In 1963 the institute moved to its current location.
Mathematical research
Many different branches of mathematics are being covered by the fields of interest of different researchers at the institute.
Harald Bohr, the brother of physicist Niels Bohr, is one famous alumnus of the department; his research in harmonic analysis and almost periodic functions in the 1930s laid the foundation for a huge drive in analysis. Most notably, since the 1980s the department has been a globally recognized frontrunner in functional analysis, particularly the study of operator algebras and C*-algebras. Faculty from the department who have contributed to this research include the following:
• Bent Fuglede
• Søren Eilers
• George Elliott
• Gert Kjærgaard Pedersen
• Ryszard Nest
• Richard V. Kadison
Contributing to these efforts, the department houses a center for non-commutative geometry.
Other major research frontiers include homological algebra and, more recently, algebraic topology, where the groundwork has been laid for a surge of research.
External links
• Department of Mathematical Sciences
• Center for Non-commutative Geometry
• Topology
University of Copenhagen
Academics
Faculties
• Faculty of Health and Medical Sciences
• Faculty of Humanities
• Faculty of Law
• Faculty of Science
• Faculty of Social Sciences
• Faculty of Theology
Centres and
departments
• Bioinformatics Centre
• Department of Biology
• Department of Chemistry
• Department of Computer Science
• Department of Mathematical Sciences
• Department of Nutrition, Exercise and Sports
• Nano-Science Center
• Center for the Philosophy of Nature and Science Studies
• Center for Planetary Research
• Center for Protein Research
• Søren Kierkegaard Research Center
• Cosmic Dawn Center
Institutes and
laboratories
• Arnamagnæan Institute
• Hans Christian Ørsted Institute
• Niels Bohr Institute
• University of Copenhagen Arctic Station
• Urban Culture Lab
Schools
• Royal School of Library and Information Science
• School of Pharmaceutical Sciences
• School of Veterinary Medicine and Animal Science
Other
• Copenhagen–Tartu school
• TOPPS
• Master of International Health
• Museum Tusculanum Press
University
Campuses
• City Campus
• Frederiksberg Campus
• North Campus
• South Campus
Historical dormitories
• Borchs Kollegium
• Elers' Kollegium
• Hassagers Kollegium
• Regensen
• Valkendorfs Kollegium
Museums
• Natural History Museum of Denmark
• University of Copenhagen Botanical Garden
• University of Copenhagen Geological Museum
• Medical Museion
University of Liverpool Mathematics School
University of Liverpool Mathematics School (abbreviated as University of Liverpool Maths School and ULMaS) is a coeducational maths school in Central, Liverpool, in the English county of Merseyside.[1] It was opened by the University of Liverpool[2] in September 2020 as the third specialist maths school in the country[3] and the first in Northern England.[4] It is located on the university's campus, in the Sir Alastair Pilkington Building, and offers a curriculum specialising in A-Level mathematics (including further mathematics), physics and computer science.[5]
University of Liverpool Mathematics School
Location: Sir Alastair Pilkington Building, Back Bedford Street, Central, Liverpool, L69 7SH, England, United Kingdom
Coordinates: 53.4017°N 2.9669°W
Information
Type: Maths school
Motto: Education for 16–19 year olds
Established: 1 September 2020
Founders: University of Liverpool
Local authority: Liverpool City Council
Department for Education URN: 147477
Chair: Gavin Brown
Headteacher: Damian Haigh
Gender: Coeducational
Age: 16 to 19
Enrolment: 63 as of November 2022[1]
Capacity: 160[1]
Website: https://liverpoolmathsschool.org
History
The University of Liverpool Mathematics College will be a hub for the most able young mathematicians in the Liverpool city region so they can develop their knowledge and skills through the study of maths and related subjects. In today’s global economy it is essential that the UK develops the potential of our most able maths students and this initiative will help respond to that challenge.
—University of Liverpool Vice-Chancellor Janet Beer[6]
In July 2018 the Department for Education, with Lord Agnew and Liz Truss, announced plans to establish the University of Liverpool Mathematics College. It would be a maths school offering the subjects of A-Level mathematics, further mathematics, and physics, and would enrol 80 students per year.[6] Offering computer science and music was also considered.[7] The New Schools Network, established to support free schools (including maths schools), welcomed the announcement.[8] The University of Liverpool promoted the college to Year Eleven pupils in multiple schools throughout April 2019.[9]
By June 2020 the college's name had been changed to University of Liverpool Mathematics School. A headteacher, Damian Haigh, was appointed. The first teaching staff were recruited through video calls as a result of the COVID-19 pandemic. The Department for Education reached a funding agreement with the University of Liverpool Mathematics School Trust to enable it to open the school in September 2020.[10] The college opened on 1 September 2020,[1] but the official opening ceremony did not take place until 30 September 2021. Doctor Steve Garnett, a business magnate who had previously spoken at the college,[12] was the guest of honour at the official opening ceremony.[11]
Between January 2021 and the start of March 2021, due to the COVID-19 pandemic, distance education arrangements were in effect. Physical face-to-face teaching resumed on 8 March under new preventative measures, such as compulsory face masks in areas where social distancing could not be enforced. Students were also offered campus COVID-19 tests and some testing equipment for home use.[13]
External links
• Official website
References
1. "University of Liverpool Mathematics School". get-information-schools.service.gov.uk. 3 November 2022. Retrieved 22 December 2022.
2. "University of Liverpool Maths School for sixth formers". University of Liverpool. Retrieved 7 April 2022.
3. Staufenberg, Jess (13 October 2020). "Do maths schools have proof of concept yet?". Schools Week. Retrieved 7 April 2022.
4. "Specialist maths school opens in Liverpool". Liverpool Business News. 4 September 2020. Retrieved 7 April 2022.
5. "Liverpool gets first specialist maths school in north of England". Liverpool Express. 7 September 2020. Retrieved 7 April 2022.
6. This article incorporates text published under the British Open Government Licence: "Skills boost for the North as maths school to open in Liverpool". GOV.UK. Retrieved 7 April 2022.
7. "North West to get specialist Maths college". ITV News. 4 June 2018. Retrieved 7 April 2022.
8. "NSN welcomes the announcement that the University of Liverpool Mathematics College will open in 2020". New Schools Network. Retrieved 7 April 2022.
9. "Where might maths take you?". University of Liverpool. 23 April 2019. Retrieved 7 April 2022.
10. "UK's third specialist Mathematics School to open in Liverpool". Educate magazine. 5 June 2020. Retrieved 7 April 2022.
11. "University of Liverpool specialist Mathematics School officially opened". University of Liverpool. 30 September 2021. Retrieved 7 April 2022.
12. "University's specialist Maths School welcomes Liverpool tech entrepreneur". University of Liverpool. 6 July 2021. Retrieved 7 April 2022.
13. "8th March return to school and covid-19 testing information". University of Liverpool Maths School. Retrieved 7 April 2022.
Mathematics in the United Kingdom
Organizations and Projects
• International Centre for Mathematical Sciences
• Advisory Committee on Mathematics Education
• Association of Teachers of Mathematics
• British Society for Research into Learning Mathematics
• Council for the Mathematical Sciences
• Count On
• Edinburgh Mathematical Society
• HoDoMS
• Institute of Mathematics and its Applications
• Isaac Newton Institute
• United Kingdom Mathematics Trust
• Joint Mathematical Council
• Kent Mathematics Project
• London Mathematical Society
• Making Mathematics Count
• Mathematical Association
• Mathematics and Computing College
• Mathematics in Education and Industry
• Megamaths
• Millennium Mathematics Project
• More Maths Grads
• National Centre for Excellence in the Teaching of Mathematics
• National Numeracy
• National Numeracy Strategy
• El Nombre
• Numbertime
• Oxford University Invariant Society
• School Mathematics Project
• Science, Technology, Engineering and Mathematics Network
• Sentinus
Maths schools
• Exeter Mathematics School
• King's College London Mathematics School
• Lancaster University School of Mathematics
• University of Liverpool Mathematics School
Journals
• Compositio Mathematica
• Eureka
• Forum of Mathematics
• Glasgow Mathematical Journal
• The Mathematical Gazette
• Philosophy of Mathematics Education Journal
• Plus Magazine
Competitions
• British Mathematical Olympiad
• British Mathematical Olympiad Subtrust
• National Cipher Challenge
Awards
• Chartered Mathematician
• Smith's Prize
• Adams Prize
• Thomas Bond Sprague Prize
• Rollo Davidson Prize
Schools in Liverpool
Primary schools
• Arnot St Mary Church of England Primary School
• Broadgreen Primary School
• Dovedale Primary School
• Liverpool College
• Roscoe Primary School
Secondary schools
Boys
• Cardinal Heenan Catholic High School
• St Francis Xavier's College
• St Margaret's Church of England Academy
• West Derby School
Girls
• Archbishop Blanch School
• Bellerive FCJ Catholic College
• Broughton Hall High School
• Holly Lodge Girls' College
• St John Bosco Arts College
• St Julie's Catholic High School
• The Belvedere Academy
Coeducational
• Academy of St Francis of Assisi
• Academy of St Nicholas
• Alsop High School
• Archbishop Beck Catholic College
• Calderstones School
• Childwall Academy
• Dixons Broadgreen Academy
• Dixons Croxteth Academy
• Dixons Fazakerley Academy
• Gateacre School
• King David High School
• King's Leadership Academy Liverpool
• Liverpool College
• Liverpool Life Sciences UTC
• North Liverpool Academy
• Notre Dame Catholic College
• St Edward's College
• St Hilda's Church of England High School
• The Studio School Liverpool
Grammar
• The Liverpool Blue Coat School
Special schools
• Clifford Holroyde
• Royal School for the Blind
Further education
• City of Liverpool College
• University of Liverpool Mathematics School
Defunct schools
• Anfield Community Comprehensive School
• Blackburne House
• Bluecoat Chambers
• Huyton College
• Huyton Hill Preparatory School
• Liverpool Collegiate School
• Liverpool Institute High School for Boys
• Liverpool Institute High School for Girls
• New Heys Comprehensive School
• Parklands High School
• St Benedict's College
• Scotland Road Free School
University of Liverpool
People
• Chancellor: Wendy Beetlestone
• Vice-Chancellor: Professor Tim Jones
• Gladstone Professor of Greek
• King Alfred Chair of English Literature
Schools and
research
• Archaeology, Classics and Egyptology
• Architecture
• Centre for Genomic Research
• Centre for Manx Studies
• Dentistry
• Mathematics School
• Medical School
• Veterinary Science
Buildings
• Central Teaching Hub
• Greenbank House
• Harold Cohen Library
• Johnston Laboratories
• Ness Botanic Gardens
• Victoria Building
• Victoria Gallery & Museum
• Waterhouse Building
Student life
• Guild of Students'
• Liverpool Medical Students Society
• Christie Cup
Affiliates
• Broadgreen Hospital
• Liverpool Life Sciences UTC
• Liverpool School of Tropical Medicine
• Liverpool University Press
• N8 Research Partnership
• Royal Liverpool University Hospital
• UK Centre for Materials Education
• Victoria University
• Xi'an Jiaotong-Liverpool University
• Category
• Commons
Mathematical Institute, University of Oxford
The Mathematical Institute is the mathematics department at the University of Oxford in England. It is one of the nine departments of the university's Mathematical, Physical and Life Sciences Division.[2] The institute includes both pure and applied mathematics (Statistics is a separate department) and is one of the largest mathematics departments in the United Kingdom with about 200 academic staff.[1] It was ranked (in a joint submission with Statistics) as the top mathematics department in the UK in the 2021 Research Excellence Framework.[3] Research at the Mathematical Institute covers all branches of mathematical sciences ranging from, for example, algebra, number theory, and geometry to the application of mathematics to a wide range of fields including industry, finance, networks, and the brain. It has more than 850 undergraduates and 550 doctoral or masters students.[1] The institute inhabits a purpose-built building between Somerville College and Green Templeton College on Woodstock Road, next to the Faculty of Philosophy.
Mathematical Institute
The Andrew Wiles Building, home of the Mathematical Institute at the University of Oxford and featuring the Penrose tiling at its entrance, completed in 2013.
Established: 1966
Head of Department: James Sparks[1]
Students: 1,400[1]
Location: Woodstock Road, Oxford (51.7606°N 1.2629°W)
Postal code: OX2 6GG
Operating agency: University of Oxford
Website: www.maths.ox.ac.uk
History
The earliest forerunner of the Mathematical Institute was the School of Geometry and Arithmetic in the Bodleian Library's main quadrangle. This was completed in 1620.[4]
Notable mathematicians associated with the university include Christopher Wren who, before his notable career as an architect, made contributions in analytical mathematics, astronomy, and mathematical physics;[5] Edmond Halley who published a series of profound papers on astronomy while Savilian Professor of Geometry in the early 18th century;[6] John Wallis, whose innovations include using the symbol $\infty $ for infinity;[7] Charles Dodgson, who made significant contributions to geometry and logic while also achieving fame as a children's author under his pen name Lewis Carroll;[8] and Henry John Stephen Smith, another Savilian Professor of Geometry, whose work in number theory and matrices attracted international recognition to Oxford mathematics.[9] Dodgson jokingly proposed that the university should grant its mathematicians a narrow strip of level ground, reaching "ever so far", so that they could test whether or not parallel lines ever meet.[4]
The building of an institute was originally proposed by G. H. Hardy in 1930. Lectures were normally given in the individual colleges of the university and Hardy proposed a central space where mathematics lectures could be held and where mathematicians could regularly meet.[4] This proposal was too ambitious for the university, which allocated just six rooms for mathematicians in an extension to the Radcliffe Science Library built in 1934.[10] A dedicated Mathematical Institute was built in 1966 and was located at the northern end of St Giles' near the junction with Banbury Road in central north Oxford.[10] The institute soon outgrew its building, so it also occupied a neighbouring house on St Giles and two annexes: Dartington House on Little Clarendon Street, and the Gibson Building on the site of the Radcliffe Infirmary.[11][10]
In 2008 the institute was given US$25 million — the largest grant ever for a mathematics department in the UK — to establish the Oxford Centre for Collaborative Applied Mathematics (OCCAM).[12][13] Since 2013 the institute has been housed in the purpose-built Andrew Wiles Building in the Radcliffe Observatory Quarter in North Oxford, near the original Radcliffe Infirmary. Wiles, the university's Regius Professor of Mathematics, is known for proving Fermat's Last Theorem.[14] The design and construction of the building was informed by the academic staff to incorporate mathematical ideas; Sir Roger Penrose designed a non-periodic pattern (a Penrose tiling) to decorate the ground at the entrance, and two structures where natural light enters the building have "crystals" illustrating concepts from graph theory and the vibration of a two-dimensional surface.[14]
Research
The institute is home to a number of research groups and funded research centres. Groups in mathematical logic, algebra, number theory, numerical analysis, geometry, topology, and mathematical physics date back to at least the 1960s.[15] More recent groups include a combinatorics group, the Wolfson Centre for Mathematical Biology (WCMB), the Oxford Centre for Industrial Applied Mathematics (OCIAM) which includes a centre studying financial derivatives, and the Oxford Centre for Nonlinear Partial Differential Equations (OxPDE).[16][17] In the 21st century, the institute's research topics have come to include quantum computing, tumour growth, and string theory, among other physical, biological, and economic problems.[18] In 2012 the office of the President of the Clay Mathematics Institute (CMI) moved to the Mathematical Institute as Nick Woodhouse became CMI's president. The CMI offers the Millennium Prizes of one million dollars for solving famous mathematical problems that were unsolved in 2000.[19] The current CMI president, Martin Bridson, is also based at the institute.[20]
Like other university departments in the UK, the institute has been rated for the quality and impact of its research. In the 2008 Research Assessment Exercise, Oxford was joint first (with the University of Cambridge) for applied mathematics[21] and third for pure mathematics.[22] In the 2014 Research Excellence Framework, the institute submitted jointly with the Department of Statistics, getting the highest placement for mathematical sciences in the UK.[23] In the 2021 Research Excellence Framework, Oxford maintained its top place.[3]
Teaching
The institute has more than 850 undergraduate students on four degree courses: Mathematics, Mathematics and Statistics, Mathematics and Philosophy, and Mathematics and Computer Science. Students decide during their degree whether to earn a Bachelor of Arts (BA) after three years or to continue to a fourth year to earn a Master of Mathematics (MMath).[24][1] In 2017, the time allowed for exams was increased from 90 to 105 minutes for each paper for all students, with one motivation being to improve women's scores and close the gender performance gap.[25][26] The 550 postgraduate students take one of five courses to earn a Master of Science (MSc)[27] or conduct research to earn a DPhil (the Oxford name for a Doctor of Philosophy).[28][1]
The Guardian's 2021 ranking of "Best UK universities for mathematics" placed Oxford at the top.[29]
Outreach
The institute promotes understanding of mathematics outside the university by running public lectures, by hosting events for school students, and by supporting staff members who promote mathematics to the general public.[30][31] Of those staff members, the best known are Sir Roger Penrose, David Acheson, and Marcus du Sautoy. Penrose, a former Rouse Ball Professor of Mathematics who has an emeritus post at the institute, has written a series of popular books on mathematics and physics. Acheson has reached a wide audience through publishing, radio, and YouTube. Du Sautoy is the current Simonyi Professor for the Public Understanding of Science and is known as a television and radio broadcaster as well as an author of popular books on mathematics.[31]
Historical statutory professors
• Oxford's Regius Professor of Mathematics was created in 2016 as part of Queen Elizabeth's 90th birthday celebrations. Regius Professorships are awarded "to reflect an exceptionally high standard of teaching and research at an institution".[32][33] The first and current holder of this chair is Sir Andrew Wiles.[34]
• The Savilian Professor of Geometry was created in 1619 by Sir Henry Savile, at a time when the successes of astronomy and geometry prompted new interest in mathematical education.[35]
• The Wallis Professor of Mathematics was created in 1969 to celebrate John Wallis, who was Savilian Professor of Geometry for 54 years.[36]
• The Waynflete Professor of Pure Mathematics was created as a result of a Royal Commission reviewing the university in 1877. It honours the 15th-century bishop William of Waynflete, founder of Magdalen College.[37]
• The institute hosts the Rouse Ball Professor of Mathematics, a chair endowed in 1925 by Walter William Rouse Ball. A Cambridge mathematician and historian of mathematics, Rouse Ball hoped that the new professor "would not neglect [the] historical and philosophical aspects [of mathematics]." The chair was initially advertised as a Chair in Mathematical Physics.[38][39]
• The Sedleian Professor of Natural Philosophy was established in 1621 by Sir William Sedley, 4th Baronet of Aylesford. While some of the first holders of this chair had medical degrees, it has more recently become associated with applied mathematics.[40]
• The Simonyi Professor for the Public Understanding of Science was endowed in 1995 by the Hungarian software architect Charles Simonyi.[41] Its first holder was the ethologist Richard Dawkins. It has been associated with the institute since 2008, when the chair was taken up by Marcus du Sautoy.[42]
Alumni
Sir Michael Atiyah was a member between 1961 and 1990.[43] Mary Cartwright, who earned her first degree and doctorate at Oxford, was the first female mathematician to be awarded Fellowship of the Royal Society and the first female president of the London Mathematical Society.[44]
In popular culture
In 2015, the final episode, "What Lies Tangled", of the British television detective drama Lewis was set and filmed in the Mathematical Institute. Sir Andrew Wiles played a professor who appears in the background of one shot.[45]
References
1. "About Us | Mathematical Institute". www.maths.ox.ac.uk. Retrieved 13 October 2022.
2. "Departments — Mathematical, Physical and Life Sciences Division". www.mpls.ox.ac.uk. Retrieved 5 August 2022.
3. "REF 2021: Mathematical sciences". Times Higher Education (THE). 12 May 2022. Retrieved 5 August 2022.
4. Woodhouse 2014, p. 1.
5. Chapman 2013a.
6. Chapman 2013b.
7. Flood & Fauvel 2013.
8. Wilson 2013.
9. Hannabuss 2013a, p. 239.
10. Woodhouse 2014, p. 2.
11. About, Mathematical Institute, University of Oxford, UK.
12. "KAUST Global Research Partnership Center Grant | Mathematical Institute". www.maths.ox.ac.uk. 30 April 2008. Retrieved 5 August 2022.
13. Neumann 2013, p. 351.
14. Woodhouse 2014, p. 15.
15. Neumann 2013, p. 344.
16. Neumann 2013, p. 346–351.
17. "Welcome to the WCMB website | Mathematical Institute". www.maths.ox.ac.uk. Retrieved 30 September 2022.
18. Neumann 2013, p. 350–352.
19. Neumann 2013, p. 351–2.
20. Anon (2017). "Bridson, Prof. Martin Robert". Who's Who (online Oxford University Press ed.). A & C Black. doi:10.1093/ww/9780199540884.013.250830. (Subscription or UK public library membership required.)
21. "RAE 2008: applied mathematics results". the Guardian. 18 December 2008. Retrieved 5 August 2022.
22. "RAE 2008: pure mathematics". the Guardian. 18 December 2008. Retrieved 5 August 2022.
23. "Research Excellence Framework 2014 ranking" (PDF). Times Higher Education. 18 December 2014. Retrieved 27 June 2022.
24. "Prospectus | Mathematical Institute". www.maths.ox.ac.uk. Retrieved 5 August 2022.
25. Beauchamp, Sarah. "Oxford Is Giving Students Extra Time For Exams (But Female Students Especially)". Bustle. Retrieved 23 January 2018.
26. Diver, Tony (2018). "Oxford University extends exam times for women's benefit". The Telegraph. ISSN 0307-1235. Retrieved 23 January 2018.
27. "MSc Courses | Mathematical Institute". www.maths.ox.ac.uk. Retrieved 5 August 2022.
28. "Doctor of Philosophy (DPhil) | Mathematical Institute". www.maths.ox.ac.uk. Retrieved 5 August 2022.
29. "Best UK universities for mathematics – league table". the Guardian. Retrieved 5 August 2022.
30. "Oxford Mathematics Public Lectures and Events". www.maths.ox.ac.uk. Retrieved 30 September 2022.
31. Neumann 2013, p. 353.
32. Fisher, Connie (29 January 2013). "The Queen awards Regius professorships". The Royal Family. Retrieved 27 June 2022.
33. "New Regius Professorship in Mathematics for Queen's 90th birthday". University of Oxford. 6 June 2016. Retrieved 27 June 2022.
34. University of Oxford, Sir Andrew Wiles appointed first Regius Professor of Mathematics at Oxford, 31 May 2018
35. Chapman 2013a, p. 94.
36. Alexanderson, Gerald (2012). "John Wallis and Oxford" (PDF). Bulletin of the American Mathematical Society. 49 (3): 443–446. doi:10.1090/S0273-0979-2012-01377-0.
37. Hannabuss 2013b, pp. 223, 237.
38. Fauvel 2013, p. 23.
39. Rayner 2013, pp. 313–314.
40. Chapman 2013a, p. 95.
41. "The Oxford Simonyi Professor for the Public Understanding of Science". ox.ac.uk. Retrieved 29 September 2022.
42. The Simonyi Professorship, University of Oxford, UK.
43. Atiyah 2013.
44. Rayner 2013, p. 311.
45. "Ian Pearce, Location Manager for Lewis, Talks Filming in Oxford with Frederick Weisel". CrimeReads. 5 April 2021. Retrieved 14 March 2022.
Sources
| Wikipedia |
Annales de l'Institut Fourier
The Annales de l'Institut Fourier is a French mathematical journal publishing papers in all fields of mathematics. It was established in 1949. The journal publishes one volume per year, consisting of six issues. The current editor-in-chief is Hervé Pajot.[1] Articles are published either in English or in French.
Annales de l'Institut Fourier
Discipline: Mathematics
Language: English, French
Edited by: Hervé Pajot
History: 1949–present
Frequency: Bimonthly
Impact factor: 0.823 (2019)
ISO 4 abbreviation: Ann. Inst. Fourier
MathSciNet abbreviation: Ann. Inst. Fourier (Grenoble)
CODEN: AIFUA7
ISSN: 0373-0956 (print), 1777-5310 (web)
Links
• Journal homepage
The journal is indexed in Mathematical Reviews, Zentralblatt MATH and the Web of Science. According to the Journal Citation Reports, the journal had a 2008 impact factor of 0.804.[2]
References
1. Editors of the Annales de l'Institut Fourier Archived 2014-07-24 at the Wayback Machine, Annales de l'Institut Fourier. Accessed August 24, 2014
2. 2008 Journal Citation Reports, Science Edition, Thomson Scientific, 2008.
External links
• Official website
| Wikipedia |
Unknotting number
In the mathematical area of knot theory, the unknotting number of a knot is the minimum number of times the knot must be passed through itself (crossing switch) to untie it. If a knot has unknotting number $n$, then there exists a diagram of the knot which can be changed to unknot by switching $n$ crossings.[1] The unknotting number of a knot is always less than half of its crossing number.[2] This invariant was first defined by Hilmar Wendt in 1936.[3]
Any composite knot has unknotting number at least two, and therefore every knot with unknotting number one is a prime knot. The following table shows the unknotting numbers for the first few knots:
• Trefoil knot: unknotting number 1
• Figure-eight knot: unknotting number 1
• Cinquefoil knot: unknotting number 2
• Three-twist knot: unknotting number 1
• Stevedore knot: unknotting number 1
• 6₂ knot: unknotting number 1
• 6₃ knot: unknotting number 1
• 7₁ knot: unknotting number 3
In general, it is relatively difficult to determine the unknotting number of a given knot. Known cases include:
• The unknotting number of a nontrivial twist knot is always equal to one.
• The unknotting number of a $(p,q)$-torus knot is equal to $(p-1)(q-1)/2$.[4] (A short computational check of this formula appears after this list.)
• The unknotting numbers of prime knots with nine or fewer crossings have all been determined.[5] (The unknotting number of the 10₁₁ prime knot is unknown.)
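To illustrate the torus-knot formula above, here is a minimal Python sketch (the function name is ours, not standard terminology); it reproduces the values in the table for the trefoil, cinquefoil, and 7₁ knots, which are the (2,3)-, (2,5)-, and (2,7)-torus knots:

```python
from math import gcd

def torus_unknotting_number(p: int, q: int) -> int:
    """Unknotting number of the (p, q)-torus knot: (p - 1)(q - 1)/2."""
    if gcd(p, q) != 1:
        raise ValueError("p and q must be coprime for a torus knot")
    return (p - 1) * (q - 1) // 2

# Trefoil = (2,3), cinquefoil = (2,5), 7_1 = (2,7); values match the table above.
assert torus_unknotting_number(2, 3) == 1
assert torus_unknotting_number(2, 5) == 2
assert torus_unknotting_number(2, 7) == 3
```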
Other numerical knot invariants
• Crossing number
• Bridge number
• Linking number
• Stick number
See also
• Unknotting problem
References
1. Adams, Colin Conrad (2004). The knot book: an elementary introduction to the mathematical theory of knots. Providence, Rhode Island: American Mathematical Society. p. 56. ISBN 0-8218-3678-1.
2. Taniyama, Kouki (2009), "Unknotting numbers of diagrams of a given nontrivial knot are unbounded", Journal of Knot Theory and its Ramifications, 18 (8): 1049–1063, arXiv:0805.3174, doi:10.1142/S0218216509007361, MR 2554334.
3. Wendt, Hilmar (December 1937). "Die gordische Auflösung von Knoten". Mathematische Zeitschrift. 42 (1): 680–696. doi:10.1007/BF01160103.
4. "Torus Knot", Mathworld.Wolfram.com. "${\frac {1}{2}}(p-1)(q-1)$".
5. Weisstein, Eric W. "Unknotting Number". MathWorld.
External links
• "Three_Dimensional_Invariants#Unknotting_Number", The Knot Atlas.
| Wikipedia |
Unknotting problem
In mathematics, the unknotting problem is the problem of algorithmically recognizing the unknot, given some representation of a knot, e.g., a knot diagram. There are several types of unknotting algorithms. A major unresolved challenge is to determine if the problem admits a polynomial time algorithm; that is, whether the problem lies in the complexity class P.
Unsolved problem in mathematics:
Can unknots be recognized in polynomial time?
Computational complexity
First steps toward determining the computational complexity were undertaken in proving that the problem is in larger complexity classes, which contain the class P. By using normal surfaces to describe the Seifert surfaces of a given knot, Hass, Lagarias & Pippenger (1999) showed that the unknotting problem is in the complexity class NP. Hara, Tani & Yamamoto (2005) claimed the weaker result that unknotting is in AM ∩ co-AM; however, later they retracted this claim.[1] In 2011, Greg Kuperberg proved that (assuming the generalized Riemann hypothesis) the unknotting problem is in co-NP,[2] and in 2016, Marc Lackenby provided an unconditional proof of co-NP membership.[3]
The unknotting problem has the same computational complexity as testing whether an embedding of an undirected graph in Euclidean space is linkless.[4]
Unknotting algorithms
Several algorithms solving the unknotting problem are based on Haken's theory of normal surfaces:
• Haken's algorithm uses the theory of normal surfaces to find a disk whose boundary is the knot. Haken originally used this algorithm to show that unknotting is decidable, but did not analyze its complexity in more detail.
• Hass, Lagarias, and Pippenger showed that the set of all normal surfaces may be represented by the integer points in a polyhedral cone and that a surface witnessing the unknottedness of a curve (if it exists) can always be found on one of the extreme rays of this cone. Therefore, vertex enumeration methods can be used to list all of the extreme rays and test whether any of them corresponds to a bounding disk of the knot. Hass, Lagarias, and Pippenger used this method to show that unknottedness is in NP; later researchers such as Burton (2011a) refined their analysis, showing that this algorithm can be useful (though not polynomial time), with its complexity being a low-order singly-exponential function of the number of crossings.
• The algorithm of Birman & Hirsch (1998) uses braid foliations, a somewhat different type of structure than a normal surface. However, to analyze its behavior, they return to normal surface theory.
Other approaches include:
• The number of Reidemeister moves needed to change an unknot diagram to the standard unknot diagram is at most polynomial in the number of crossings.[5] Therefore, a brute force search for all sequences of Reidemeister moves can detect unknottedness in exponential time.
• Similarly, any two triangulations of the same knot complement may be connected by a sequence of Pachner moves of length at most doubly exponential in the number of crossings.[6] Therefore, it is possible to determine whether a knot is the unknot by testing all sequences of Pachner moves of this length, starting from the complement of the given knot, and determining whether any of them transforms the complement into a standard triangulation of a solid torus. The time for this method would be triply exponential; however, experimental evidence suggests that this bound is very pessimistic and that many fewer Pachner moves are needed.[7]
• Any arc-presentation of an unknot can be monotonically simplified to a minimal one using elementary moves.[8] So a brute force search among all arc-presentations of not greater complexity gives a single-exponential algorithm for the unknotting problem.
• Residual finiteness of the knot group (which follows from geometrization of Haken manifolds) gives an algorithm: check if the group has non-cyclic finite group quotient. This idea is used in Kuperberg's result that the unknotting problem is in co-NP.
• Knot Floer homology of the knot detects the genus of the knot, which is 0 if and only if the knot is an unknot. A combinatorial version of knot Floer homology allows it to be computed (Manolescu, Ozsváth & Sarkar 2009).
• Khovanov homology detects the unknot according to a result of Kronheimer and Mrowka.[9] The complexity of Khovanov homology is at least as high as the #P-hard problem of computing the Jones polynomial, but it may be calculated in practice using an algorithm and program of Bar-Natan (2007). Bar-Natan provides no rigorous analysis of his algorithm, but heuristically estimates it to be exponential in the pathwidth of a crossing diagram, which in turn is at most proportional to the square root of the number of crossings.
Understanding the complexity of these algorithms is an active field of study.
See also
• Algorithmic topology
• Unknotting number
Notes
1. Mentioned as a "personal communication" in reference [15] of Kuperberg (2014).
2. Kuperberg (2014)
3. Lackenby (2021)
4. Kawarabayashi, Kreutzer & Mohar (2010).
5. Lackenby (2015).
6. Mijatović (2005).
7. Burton (2011b).
8. Dynnikov (2006).
9. Kronheimer & Mrowka (2011)
References
• Bar-Natan, Dror (2007), "Fast Khovanov homology computations", Journal of Knot Theory and Its Ramifications, 16 (3): 243–255, arXiv:math.GT/0606318, doi:10.1142/S0218216507005294, MR 2320156, S2CID 17036344.
• Birman, Joan S.; Hirsch, Michael (1998), "A new algorithm for recognizing the unknot", Geometry and Topology, 2: 178–220, arXiv:math/9801126, doi:10.2140/gt.1998.2.175, S2CID 17776505.
• Burton, Benjamin A. (2011a), "Maximal admissible faces and asymptotic bounds for the normal surface solution space" (PDF), Journal of Combinatorial Theory, Series A, 118 (4): 1410–1435, arXiv:1004.2605, doi:10.1016/j.jcta.2010.12.011, MR 2763065, S2CID 11461722.
• Burton, Benjamin (2011b), "The Pachner graph and the simplification of 3-sphere triangulations", Proc. 27th ACM Symposium on Computational Geometry, pp. 153–162, arXiv:1011.4169, doi:10.1145/1998196.1998220, S2CID 382685.
• Dynnikov, Ivan (2006), "Arc-presentations of links: monotonic simplification", Fundamenta Mathematicae, 190: 29–76, arXiv:math/0208153, doi:10.4064/fm190-0-3, S2CID 14137437.
• Haken, Wolfgang (1961), "Theorie der Normalflächen", Acta Mathematica, 105: 245–375, doi:10.1007/BF02559591.
• Hara, Masao; Tani, Seiichi; Yamamoto, Makoto (2005), "Unknotting is in AM ∩ co-AM", Proc. 16th ACM-SIAM Symposium on Discrete algorithms (SODA '05), pp. 359–364.
• Hass, Joel; Lagarias, Jeffrey C.; Pippenger, Nicholas (1999), "The computational complexity of knot and link problems", Journal of the ACM, 46 (2): 185–211, arXiv:math/9807016, doi:10.1145/301970.301971, MR 1693203, S2CID 125854.
• Hass, Joel; Lagarias, Jeffrey C. (2001), "The number of Reidemeister moves needed for unknotting", Journal of the American Mathematical Society, 14 (2): 399–428, arXiv:math/9807012, doi:10.1090/S0894-0347-01-00358-7, MR 1815217, S2CID 15654705.
• Kawarabayashi, Ken-ichi; Kreutzer, Stephan; Mohar, Bojan (2010), "Linkless and flat embeddings in 3-space and the unknot problem" (PDF), Proc. ACM Symposium on Computational Geometry (SoCG '10), pp. 97–106, doi:10.1145/1810959.1810975, S2CID 12290801.
• Kronheimer, Peter; Mrowka, Tomasz (2011), "Khovanov homology is an unknot-detector", Publications Mathématiques de l'IHÉS, 113 (1): 97–208, arXiv:1005.4346, doi:10.1007/s10240-010-0030-y, S2CID 119586228
• Kuperberg, Greg (2014), "Knottedness is in NP, modulo GRH", Advances in Mathematics, 256: 493–506, arXiv:1112.0845, doi:10.1016/j.aim.2014.01.007, MR 3177300, S2CID 12634367.
• Lackenby, Marc (2015), "A polynomial upper bound on Reidemeister moves", Annals of Mathematics, Second Series, 182 (2): 491–564, arXiv:1302.0180, doi:10.4007/annals.2015.182.2.3, MR 3418524, S2CID 119662237.
• Lackenby, Marc (2021), "The efficient certification of Knottedness and Thurston norm", Advances in Mathematics, 387: 107796, arXiv:1604.00290, doi:10.1016/j.aim.2021.107796, S2CID 119307517.
• Manolescu, Ciprian; Ozsváth, Peter S.; Sarkar, Sucharit (2009), "A combinatorial description of knot Floer homology", Annals of Mathematics, Second Series, 169 (2): 633–660, arXiv:math/0607691, Bibcode:2006math......7691M, doi:10.4007/annals.2009.169.633, MR 2480614, S2CID 15427272.
• Mijatović, Aleksandar (2005), "Simplical structures of knot complements", Mathematical Research Letters, 12 (6): 843–856, arXiv:math/0306117, doi:10.4310/mrl.2005.v12.n6.a6, MR 2189244, S2CID 7726354
| Wikipedia |
Unobserved heterogeneity in duration models
Issues of heterogeneity in duration models can take on different forms. On the one hand, unobserved heterogeneity can play a crucial role when it comes to different sampling methods, such as stock or flow sampling.[1] On the other hand, duration models have also been extended to allow for different subpopulations, with a strong link to mixture models. Many of these models impose the assumptions that the heterogeneity is independent of the observed covariates, that it has a distribution depending on a finite number of parameters only, and that it enters the hazard function multiplicatively.[2]
One can define the conditional hazard as the hazard function conditional on the observed covariates and the unobserved heterogeneity.[3] In the general case, the cumulative distribution function of $t_{i}^{*}$ associated with the conditional hazard is given by $F(t\mid x_{i},v_{i};\theta )$. Under the first assumption above, the unobserved component can be integrated out and we obtain the cumulative distribution on the observed covariates only, i.e.
$G(t\mid x_{i};\theta ,\rho )=\int F(t\mid x_{i},\nu ;\theta )\,h(\nu ;\rho )\,d\nu $[4]
where the additional parameter ρ parameterizes the density of the unobserved component v. Now, the different estimation methods for stock or flow sampling data are available to estimate the relevant parameters.
A specific example is described by Lancaster. Assume that the conditional hazard is given by
$\lambda (t;x_{i},v_{i})=v_{i}\exp(x_{i}'\beta )\alpha t^{\alpha -1}$
where $x$ is a vector of observed characteristics, $v$ is the unobserved heterogeneity part, and a normalization (often $E[v_{i}]=1$) needs to be imposed. It then follows that the average hazard is given by $\exp(x'\beta )\alpha t^{\alpha -1}$. More generally, it can be shown that as long as the hazard function exhibits proportional properties of the form $\lambda (t;x_{i},v_{i})=v_{i}\kappa (x_{i})\lambda _{0}(t)$, one can identify both the covariate function $\kappa (\cdot )$ and the hazard function $\lambda (\cdot )$.[6]
Recent examples provide nonparametric approaches to estimating the baseline hazard and the distribution of the unobserved heterogeneity under fairly weak assumptions.[7] In grouped data, the strict exogeneity assumptions for time-varying covariates are hard to relax. Parametric forms can be imposed for the distribution of the unobserved heterogeneity,[8] even though semiparametric methods that do not specify such parametric forms are available.[9]
References
1. Salant, S. W. (1977): Search Theory and Duration Data: A Theory of Sorts. The Quarterly Journal of Economics, 91(1), pp. 39-57
2. Wooldridge, J. (2002): Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, Mass.
3. Lancaster, T. (1990): The Econometric Analysis of Transition Data. Cambridge University Press, Cambridge.
4. Wooldridge, J. (2002): Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, Mass.
6. Lancaster, T. (1990): The Econometric Analysis of Transition Data. Cambridge University Press, Cambridge.
7. Horowitz, J. L. (1999): Semiparametric and Nonparametric Estimation of Quantal Response Models. Handbook of Statistics, Vol. 11, ed. by G. S. Maddala, C. R. Rao, and H. D. Vinod. North Holland, Amsterdam.
8. McCall, B. P. (1994): Testing the Proportional Hazards Assumption in the Presence of Unmeasured Heterogeneity. Journal of Applied Econometrics, 9, pp. 321-334
9. Heckman, J. J. and B. Singer (1984): A Method for Minimizing the Impact of Distributional Assumptions in Econometric Models for Duration Data. Econometrica, 52, pp. 271-320
| Wikipedia |
Unordered pair
In mathematics, an unordered pair or pair set is a set of the form {a, b}, i.e. a set having two elements a and b with no particular relation between them, where {a, b} = {b, a}. In contrast, an ordered pair (a, b) has a as its first element and b as its second element, which means (a, b) ≠ (b, a).
While the two elements of an ordered pair (a, b) need not be distinct, modern authors only call {a, b} an unordered pair if a ≠ b.[1][2][3][4] But for a few authors a singleton is also considered an unordered pair, although today, most would say that {a, a} is a multiset. It is typical to use the term unordered pair even in the situation where the elements a and b could be equal, as long as this equality has not yet been established.
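For illustration, Python's built-in set and tuple types mirror the distinction between unordered and ordered pairs (a minimal sketch):

```python
a, b = 1, 2
assert {a, b} == {b, a}        # sets compare as unordered pairs
assert (a, b) != (b, a)        # ordered pairs distinguish first and second
assert {a, a} == {a}           # a "pair" with equal elements collapses to a singleton
pair = frozenset({a, b})       # a hashable unordered pair, usable as a dict key
print(len(pair))               # 2, the cardinality of an unordered pair with a != b
```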
A set with precisely two elements is also called a 2-set or (rarely) a binary set.
An unordered pair is a finite set; its cardinality (number of elements) is 2 or (if the two elements are not distinct) 1.
In axiomatic set theory, the existence of unordered pairs is required by an axiom, the axiom of pairing.
More generally, an unordered n-tuple is a set of the form {a1, a2, ..., an}.[5][6][7]
Notes
1. Düntsch, Ivo; Gediga, Günther (2000), Sets, Relations, Functions, Primers Series, Methodos, ISBN 978-1-903280-00-3.
2. Fraenkel, Adolf (1928), Einleitung in die Mengenlehre, Berlin, New York: Springer-Verlag
3. Roitman, Judith (1990), Introduction to modern set theory, New York: John Wiley & Sons, ISBN 978-0-471-63519-2.
4. Schimmerling, Ernest (2008), Undergraduate set theory
5. Hrbacek, Karel; Jech, Thomas (1999), Introduction to set theory (3rd ed.), New York: Dekker, ISBN 978-0-8247-7915-3.
6. Rubin, Jean E. (1967), Set theory for the mathematician, Holden-Day
7. Takeuti, Gaisi; Zaring, Wilson M. (1971), Introduction to axiomatic set theory, Graduate Texts in Mathematics, Berlin, New York: Springer-Verlag
| Wikipedia |
List of cohomology theories
This is a list of some of the ordinary and generalized (or extraordinary) homology and cohomology theories in algebraic topology that are defined on the categories of CW complexes or spectra. For other sorts of homology theories see the links at the end of this article.
Notation
• S = π = S0 is the sphere spectrum.
• Sn is the spectrum of the n-dimensional sphere
• SnY = Sn∧Y is the nth suspension of a spectrum Y.
• [X,Y] is the abelian group of morphisms from the spectrum X to the spectrum Y, given (roughly) as homotopy classes of maps.
• [X,Y]n = [SnX,Y]
• [X,Y]* is the graded abelian group given as the sum of the groups [X,Y]n.
• πn(X) = [Sn, X] = [S, X]n is the nth stable homotopy group of X.
• π*(X) is the sum of the groups πn(X), and is called the coefficient ring of X when X is a ring spectrum.
• X∧Y is the smash product of two spectra.
If X is a spectrum, then it defines generalized homology and cohomology theories on the category of spectra as follows.
• Xn(Y) = [S, X∧Y]n = [Sn, X∧Y] is the generalized homology of Y,
• Xn(Y) = [Y, X]−n = [S−nY, X] is the generalized cohomology of Y
Ordinary homology theories
These are the theories satisfying the "dimension axiom" of the Eilenberg–Steenrod axioms that the homology of a point vanishes in dimension other than 0. They are determined by an abelian coefficient group G, and denoted by H(X, G) (where G is sometimes omitted, especially if it is Z). Usually G is the integers, the rationals, the reals, the complex numbers, or the integers mod a prime p.
The cohomology functors of ordinary cohomology theories are represented by Eilenberg–MacLane spaces.
On simplicial complexes, these theories coincide with singular homology and cohomology.
Homology and cohomology with integer coefficients.
Spectrum: H (Eilenberg–MacLane spectrum of the integers.)
Coefficient ring: πn(H) = Z if n = 0, 0 otherwise.
The original homology theory.
Homology and cohomology with rational (or real or complex) coefficients.
Spectrum: HQ (Eilenberg–Mac Lane spectrum of the rationals.)
Coefficient ring: πn(HQ) = Q if n = 0, 0 otherwise.
These are the easiest of all homology theories. The homology groups HQn(X) are often denoted by Hn(X, Q). The homology groups H(X, Q), H(X, R), H(X, C) with rational, real, and complex coefficients are all similar, and are used mainly when torsion is not of interest (or too complicated to work out). The Hodge decomposition writes the complex cohomology of a complex projective variety as a sum of sheaf cohomology groups.
Homology and cohomology with mod p coefficients.
Spectrum: HZp (Eilenberg–Maclane spectrum of the integers mod p.)
Coefficient ring: πn(HZp) = Zp (Integers mod p) if n = 0, 0 otherwise.
K-theories
The simpler K-theories of a space are often related to vector bundles over the space, and different sorts of K-theories correspond to different structures that can be put on a vector bundle.
Real K-theory
Spectrum: KO
Coefficient ring: The coefficient groups πi(KO) have period 8 in i, given by the sequence Z, Z2, Z2, 0, Z, 0, 0, 0, repeated. As a ring, it is generated by a class η in degree 1, a class $x_{4}$ in degree 4, and an invertible class $v_{1}^{4}$ in degree 8, subject to the relations $2\eta =\eta ^{3}=\eta x_{4}=0$ and $x_{4}^{2}=4v_{1}^{4}$.
KO0(X) is the ring of stable equivalence classes of real vector bundles over X. Bott periodicity implies that the K-groups have period 8.
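As a small illustrative sketch, Bott periodicity lets one read off πi(KO) for the periodic spectrum from the residue of i mod 8; the table of coefficient groups below is hard-coded from the sequence above:

```python
# Bott periodicity: the homotopy groups of the (periodic) real K-theory
# spectrum KO repeat with period 8.
KO_COEFFS = ["Z", "Z/2", "Z/2", "0", "Z", "0", "0", "0"]

def pi_KO(i: int) -> str:
    """pi_i(KO) for the periodic spectrum, valid for any integer i."""
    return KO_COEFFS[i % 8]   # Python's % maps negative i into 0..7 as well

print([pi_KO(i) for i in range(12)])
# ['Z', 'Z/2', 'Z/2', '0', 'Z', '0', '0', '0', 'Z', 'Z/2', 'Z/2', '0']
```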
Complex K-theory
Spectrum: KU (even terms BU or Z × BU, odd terms U).
Coefficient ring: The coefficient ring K*(point) is the ring of Laurent polynomials in a generator of degree 2.
K0(X) is the ring of stable equivalence classes of complex vector bundles over X. Bott periodicity implies that the K-groups have period 2.
Quaternionic K-theory
Spectrum: KSp
Coefficient ring: The coefficient groups πi(KSp) have period 8 in i, given by the sequence Z, 0, 0, 0, Z, Z2, Z2, 0, repeated.
KSp0(X) is the ring of stable equivalence classes of quaternionic vector bundles over X. Bott periodicity implies that the K-groups have period 8.
K theory with coefficients
Spectrum: KG
G is some abelian group; for example the localization Z(p) at the prime p. Other K-theories can also be given coefficients.
Self conjugate K-theory
Spectrum: KSC
Coefficient ring: to be written...
The coefficient groups $\pi _{i}$(KSC) have period 4 in i, given by the sequence Z, Z2, 0, Z, repeated. Introduced by Donald W. Anderson in his unpublished 1964 University of California, Berkeley Ph.D. dissertation, "A new cohomology theory".
Connective K-theories
Spectrum: ku for connective K-theory, ko for connective real K-theory.
Coefficient ring: For ku, the coefficient ring is the ring of polynomials over Z on a single class $v_{1}$ in dimension 2. For ko, the coefficient ring is the quotient of a polynomial ring on three generators, η in dimension 1, $x_{4}$ in dimension 4, and $v_{1}^{4}$ in dimension 8, the periodicity generator, modulo the relations $2\eta =0$, $x_{4}^{2}=4v_{1}^{4}$, $\eta ^{3}=0$, and $\eta x_{4}=0$.
Roughly speaking, this is K-theory with the negative dimensional parts killed off.
KR-theory
This is a cohomology theory defined for spaces with involution, from which many of the other K-theories can be derived.
Bordism and cobordism theories
Cobordism studies manifolds, where a manifold is regarded as "trivial" if it is the boundary of another compact manifold. The cobordism classes of manifolds form a ring that is usually the coefficient ring of some generalized cohomology theory. There are many such theories, corresponding roughly to the different structures that one can put on a manifold.
The functors of cobordism theories are often represented by Thom spaces of certain groups.
Stable homotopy and cohomotopy
Spectrum: S (sphere spectrum).
Coefficient ring: The coefficient groups πn(S) are the stable homotopy groups of spheres, which are notoriously hard to compute or understand for n > 0. (For n < 0 they vanish, and for n = 0 the group is Z.)
Stable homotopy is closely related to cobordism of framed manifolds (manifolds with a trivialization of the normal bundle).
Unoriented cobordism
Spectrum: MO (Thom spectrum of orthogonal group)
Coefficient ring: π*(MO) is the ring of cobordism classes of unoriented manifolds, and is a polynomial ring over the field with 2 elements on generators of degree i for every i not of the form 2n−1. That is: $\mathbb {Z} _{2}[x_{2},x_{4},x_{5},x_{6},x_{8}\cdots ]$ where $x_{2n}$ can be represented by the classes of $\mathbb {RP} ^{2n}$ while for odd indices one can use appropriate Dold manifolds.
Unoriented bordism is 2-torsion, since 2M is the boundary of $M\times I$.
MO is a rather weak cobordism theory, as the spectrum MO is isomorphic to H(π*(MO)) ("homology with coefficients in π*(MO)") – MO is a product of Eilenberg–MacLane spectra. In other words, the corresponding homology and cohomology theories are no more powerful than homology and cohomology with coefficients in Z/2Z. This was the first cobordism theory to be described completely.
Complex cobordism
Main article: Complex cobordism
Spectrum: MU (Thom spectrum of unitary group)
Coefficient ring: π*(MU) is the polynomial ring on generators of degree 2, 4, 6, 8, ... and is naturally isomorphic to Lazard's universal ring, and is the cobordism ring of stably almost complex manifolds.
Oriented cobordism
Spectrum: MSO (Thom spectrum of special orthogonal group)
Coefficient ring: The oriented cobordism class of a manifold is completely determined by its characteristic numbers: its Stiefel–Whitney numbers and Pontryagin numbers, but the overall coefficient ring, denoted $\Omega _{*}=\Omega (*)=MSO(*)$ is quite complicated. Rationally, and at 2 (corresponding to Pontryagin and Stiefel–Whitney classes, respectively), MSO is a product of Eilenberg–MacLane spectra – $MSO_{\mathbf {Q} }=H(\pi _{*}(MSO_{\mathbf {Q} }))$ and $MSO[2]=H(\pi _{*}(MSO[2]))$ – but at odd primes it is not, and the structure is complicated to describe. The ring has been completely described integrally, due to work of John Milnor, Boris Averbuch, Vladimir Rokhlin, and C. T. C. Wall.
Special unitary cobordism
Spectrum: MSU (Thom spectrum of special unitary group)
Coefficient ring:
Spin cobordism (and variants)
Spectrum: MSpin (Thom spectrum of spin group)
Coefficient ring: See (D. W. Anderson, E. H. Brown & F. P. Peterson 1967).
Symplectic cobordism
Spectrum: MSp (Thom spectrum of symplectic group)
Coefficient ring:
PL cobordism and topological cobordism
Spectrum: MPL, MSPL, MTop, MSTop
Coefficient ring:
The definition is similar to cobordism, except that one uses piecewise linear or topological instead of smooth manifolds, either oriented or unoriented. The coefficient rings are complicated.
Brown–Peterson cohomology
Spectrum: BP
Coefficient ring: π*(BP) is a polynomial algebra over Z(p) on generators $v_{n}$ of dimension $2(p^{n}-1)$ for n ≥ 1.
Brown–Peterson cohomology BP is a summand of MU(p), which is complex cobordism MU localized at a prime p. In fact MU(p) is a sum of suspensions of BP.
Morava K-theory
Spectrum: K(n) (They also depend on a prime p.)
Coefficient ring: $\mathbb {F} _{p}[v_{n},v_{n}^{-1}]$, where $v_{n}$ has degree $2(p^{n}-1)$.
These theories have period $2(p^{n}-1)$. They are named after Jack Morava.
Johnson–Wilson theory
Spectrum E(n)
Coefficient ring Z(2)[v1, ..., vn, 1/vn] where $v_{i}$ has degree $2(2^{i}-1)$
String cobordism
Spectrum:
Coefficient ring:
Theories related to elliptic curves
Elliptic cohomology
Spectrum: Ell
Topological modular forms
Spectra: tmf, TMF (previously called eo2.)
The coefficient ring π*(tmf) is called the ring of topological modular forms. TMF is tmf with the 24th power of the modular form Δ inverted, and has period $24^{2}=576$. At the prime p = 2, the completion of tmf is the spectrum eo2, and the K(2)-localization of tmf is the Hopkins-Miller Higher Real K-theory spectrum EO2.
See also
• Alexander–Spanier cohomology
• Algebraic K-theory
• BRST cohomology
• Cellular homology
• Čech cohomology
• Crystalline cohomology
• De Rham cohomology
• Deligne cohomology
• Étale cohomology
• Floer homology
• Galois cohomology
• Group cohomology
• Hodge structure
• Intersection cohomology
• L2 cohomology
• l-adic cohomology
• Lie algebra cohomology
• Quantum cohomology
• Sheaf cohomology
• Singular homology
• Spencer cohomology
References
• Stable Homotopy and Generalised Homology (Chicago Lectures in Mathematics) by J. Frank Adams, University of Chicago Press; Reissue edition (February 27, 1995) ISBN 0-226-00524-0
• Anderson, Donald W.; Brown, Edgar H. Jr.; Peterson, Franklin P. (1967), "The Structure of the Spin Cobordism Ring", Annals of Mathematics, Second Series, 86 (2): 271–298, doi:10.2307/1970690, JSTOR 1970690
• Notes on cobordism theory, by Robert E. Stong, Princeton University Press (1968) ASIN B0006C2BN6
• Elliptic Cohomology (University Series in Mathematics) by Charles B. Thomas, Springer; 1 edition (October, 1999) ISBN 0-306-46097-1
| Wikipedia |
Independence (mathematical logic)
In mathematical logic, independence is the unprovability of a sentence from other sentences.
A sentence σ is independent of a given first-order theory T if T neither proves nor refutes σ; that is, it is impossible to prove σ from T, and it is also impossible to prove from T that σ is false. Sometimes, σ is said (synonymously) to be undecidable from T; this is not the same meaning of "decidability" as in a decision problem.
A theory T is independent if each axiom in T is not provable from the remaining axioms in T. A theory for which there is an independent set of axioms is independently axiomatizable.
Usage note
Some authors say that σ is independent of T when T simply cannot prove σ, and do not necessarily assert by this that T cannot refute σ. These authors will sometimes say "σ is independent of and consistent with T" to indicate that T can neither prove nor refute σ.
Independence results in set theory
Many interesting statements in set theory are independent of Zermelo–Fraenkel set theory (ZF). The following statements in set theory are known to be independent of ZF, under the assumption that ZF is consistent:
• The axiom of choice
• The continuum hypothesis and the generalized continuum hypothesis
• The Suslin conjecture
The following statements (none of which have been proved false) cannot be proved in ZFC (the Zermelo-Fraenkel set theory plus the axiom of choice) to be independent of ZFC, under the added hypothesis that ZFC is consistent.
• The existence of strongly inaccessible cardinals
• The existence of large cardinals
• The non-existence of Kurepa trees
The following statements are inconsistent with the axiom of choice, and therefore with ZFC. However they are probably independent of ZF, in a corresponding sense to the above: They cannot be proved in ZF, and few working set theorists expect to find a refutation in ZF. However ZF cannot prove that they are independent of ZF, even with the added hypothesis that ZF is consistent.
• The axiom of determinacy
• The axiom of real determinacy
• AD+
Applications to physical theory
Since 2000, logical independence has become understood as having crucial significance in the foundations of physics.[1][2]
See also
• List of statements independent of ZFC
• Parallel postulate for an example in geometry
Notes
1. Paterek, T.; Kofler, J.; Prevedel, R.; Klimek, P.; Aspelmeyer, M.; Zeilinger, A.; Brukner, Č. (2010), "Logical independence and quantum randomness", New Journal of Physics, 12: 013019, arXiv:0811.4542, Bibcode:2010NJPh...12a3019P, doi:10.1088/1367-2630/12/1/013019
2. Székely, Gergely (2013), "The Existence of Superluminal Particles is Consistent with the Kinematics of Einstein's Special Theory of Relativity", Reports on Mathematical Physics, 72 (2): 133–152, arXiv:1202.5790, Bibcode:2013RpMP...72..133S, doi:10.1016/S0034-4877(13)00021-9
| Wikipedia |
Unramified morphism
In algebraic geometry, an unramified morphism is a morphism $f:X\to Y$ of schemes such that (a) it is locally of finite presentation and (b) for each $x\in X$ and $y=f(x)$, we have that
1. The residue field $k(x)$ is a separable algebraic extension of $k(y)$.
2. $f^{\#}({\mathfrak {m}}_{y}){\mathcal {O}}_{x,X}={\mathfrak {m}}_{x},$ where $f^{\#}:{\mathcal {O}}_{y,Y}\to {\mathcal {O}}_{x,X}$ and ${\mathfrak {m}}_{y},{\mathfrak {m}}_{x}$ are maximal ideals of the local rings.
A flat unramified morphism is called an étale morphism. Less strongly, if $f$ satisfies the conditions when restricted to sufficiently small neighborhoods of $x$ and $y$, then $f$ is said to be unramified near $x$.
Some authors prefer to use weaker conditions, in which case they call a morphism satisfying the above a G-unramified morphism.
Simple example
Let $A$ be a ring and B the ring obtained by adjoining an integral element to A; i.e., $B=A[t]/(F)$ for some monic polynomial F. Then $\operatorname {Spec} (B)\to \operatorname {Spec} (A)$ is unramified if and only if the polynomial F is separable (i.e., it and its derivative generate the unit ideal of $A[t]$).
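A quick way to test the separability criterion in this example is to check whether F and its derivative generate the unit ideal, i.e. have unit gcd. The following sketch uses SymPy over Q; the sample polynomials are ours, chosen for illustration:

```python
from sympy import symbols, gcd, diff

t = symbols('t')

F = t**2 - 2
print(gcd(F, diff(F, t)))   # 1: F is separable over Q, so Spec(Q[t]/(F)) -> Spec(Q) is unramified

G = t**2
print(gcd(G, diff(G, t)))   # t: G is inseparable (double root at t = 0), so the morphism is ramified there
```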
Curve case
Let $f:X\to Y$ be a finite morphism between smooth connected curves over an algebraically closed field, P a closed point of X and $Q=f(P)$. We then have the local ring homomorphism $f^{\#}:{\mathcal {O}}_{Q}\to {\mathcal {O}}_{P}$ where $({\mathcal {O}}_{Q},{\mathfrak {m}}_{Q})$ and $({\mathcal {O}}_{P},{\mathfrak {m}}_{P})$ are the local rings at Q and P of Y and X. Since ${\mathcal {O}}_{P}$ is a discrete valuation ring, there is a unique integer $e_{P}>0$ such that $f^{\#}({\mathfrak {m}}_{Q}){\mathcal {O}}_{P}={{\mathfrak {m}}_{P}}^{e_{P}}$. The integer $e_{P}$ is called the ramification index of $P$ over $Q$.[1] Since $k(P)=k(Q)$ as the base field is algebraically closed, $f$ is unramified at $P$ (in fact, étale) if and only if $e_{P}=1$. Otherwise, $f$ is said to be ramified at P and Q is called a branch point.
Characterization
Given a morphism $f:X\to Y$ that is locally of finite presentation, the following are equivalent:[2]
1. f is unramified.
2. The diagonal map $\delta _{f}:X\to X\times _{Y}X$ is an open immersion.
3. The relative cotangent sheaf $\Omega _{X/Y}$ is zero.
See also
• Finite extensions of local fields
• Ramification (mathematics)
References
1. Hartshorne 1977, Ch. IV, § 2.
2. Grothendieck & Dieudonné 1967, Corollary 17.4.2.
• Grothendieck, Alexandre; Dieudonné, Jean (1967). "Éléments de géométrie algébrique: IV. Étude locale des schémas et des morphismes de schémas, Quatrième partie". Publications Mathématiques de l'IHÉS. 32. doi:10.1007/bf02732123. MR 0238860.
• Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157
| Wikipedia |
Unrestricted algorithm
An unrestricted algorithm is an algorithm for the computation of a mathematical function that puts no restrictions on the range of the argument or on the precision that may be demanded in the result.[1] The idea of such an algorithm was put forward by C. W. Clenshaw and F. W. J. Olver in a paper published in 1980.[1][2]
In developing algorithms for computing the values of a real-valued function of a real variable (e.g., g(x) in "restricted" algorithms), the error that can be tolerated in the result is specified in advance. An interval of the real line on which the function is to be evaluated is also specified in advance; different algorithms may have to be applied for evaluating the function outside that interval. An unrestricted algorithm envisages a situation in which a user may stipulate the value of x and also the precision required in g(x) quite arbitrarily. The algorithm should then produce an acceptable result without failure.[1]
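In practice, arbitrary-precision libraries approximate this contract. The following sketch uses the mpmath library; the argument/precision pairs are chosen arbitrarily for illustration, and the exponential is evaluated at whatever argument and decimal precision the caller stipulates:

```python
from mpmath import mp, exp

# The caller fixes both the argument and the precision arbitrarily;
# the routine must still deliver an acceptable result.
for dps, x in [(30, 1), (100, 1000), (50, -700)]:
    mp.dps = dps        # requested number of significant decimal digits
    print(exp(x))
```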
References
1. C.W. Clenshaw and F. W. J. Olver (April 1980). "An unrestricted algorithm for the exponential function". SIAM Journal on Numerical Analysis. 17 (2): 310–331. doi:10.1137/0717026. JSTOR 2156615.
2. Richard P Brent (1980). "Unrestricted algorithms for elementary and special functions". In S. H. Lavington (ed.). Information Processing. Vol. 80. North-Holland, Amsterdam. pp. 613–619. arXiv:1004.3621.
| Wikipedia |
Unrooted binary tree
In mathematics and computer science, an unrooted binary tree is an unrooted tree in which each vertex has either one or three neighbors.
Definitions
A free tree or unrooted tree is a connected undirected graph with no cycles. The vertices with one neighbor are the leaves of the tree, and the remaining vertices are the internal nodes of the tree. The degree of a vertex is its number of neighbors; in a tree with more than one node, the leaves are the vertices of degree one. An unrooted binary tree is a free tree in which all internal nodes have degree exactly three.
In some applications it may make sense to distinguish subtypes of unrooted binary trees: a planar embedding of the tree may be fixed by specifying a cyclic ordering for the edges at each vertex, making it into a plane tree. In computer science, binary trees are often rooted and ordered when they are used as data structures, but in the applications of unrooted binary trees in hierarchical clustering and evolutionary tree reconstruction, unordered trees are more common.[1]
Additionally, one may distinguish between trees in which all vertices have distinct labels, trees in which the leaves only are labeled, and trees in which the nodes are not labeled. In an unrooted binary tree with n leaves, there will be n − 2 internal nodes, so the labels may be taken from the set of integers from 1 to 2n − 1 when all nodes are to be labeled, or from the set of integers from 1 to n when only the leaves are to be labeled.[1]
Related structures
Rooted binary trees
Main article: Rooted binary tree
An unrooted binary tree T may be transformed into a full rooted binary tree (that is, a rooted tree in which each non-leaf node has exactly two children) by choosing a root edge e of T, placing a new root node in the middle of e, and directing every edge of the resulting subdivided tree away from the root node. Conversely, any full rooted binary tree may be transformed into an unrooted binary tree by removing the root node, replacing the path between its two children by a single undirected edge, and suppressing the orientation of the remaining edges in the graph. For this reason, there are exactly 2n − 3 times as many full rooted binary trees with n leaves as there are unrooted binary trees with n leaves.[1]
Hierarchical clustering
A hierarchical clustering of a collection of objects may be formalized as a maximal family of sets of the objects in which no two sets cross. That is, for every two sets S and T in the family, either S and T are disjoint or one is a subset of the other, and no more sets can be added to the family while preserving this property. If T is an unrooted binary tree, it defines a hierarchical clustering of its leaves: for each edge (u,v) in T there is a cluster consisting of the leaves that are closer to u than to v, and these sets together with the empty set and the set of all leaves form a maximal non-crossing family. Conversely, from any maximal non-crossing family of sets over a set of n elements, one can form a unique unrooted binary tree that has a node for each triple (A,B,C) of disjoint sets in the family that together cover all of the elements.[2]
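As a sketch of this correspondence, the following Python code reads off, for each edge of an unrooted binary tree given as an edge list, the cluster of leaves closer to one endpoint than the other; the function name and the quartet example are ours:

```python
from collections import defaultdict

def edge_clusters(edges, leaves):
    """For each edge (u, v), return the cluster of leaves closer to u than to v
    (both orientations taken); together with the empty set and the full leaf
    set, this is the maximal non-crossing family defined by the tree."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    def side(u, v):
        # Leaves reachable from u without crossing the edge (u, v).
        stack, seen, found = [u], {u, v}, set()
        while stack:
            w = stack.pop()
            if w in leaves:
                found.add(w)
            for nxt in adj[w] - seen:
                seen.add(nxt)
                stack.append(nxt)
        return frozenset(found)

    return {side(u, v) for u, v in edges} | {side(v, u) for u, v in edges}

# A quartet tree: leaves 1, 2 meet internal node a; leaves 3, 4 meet node b.
tree = [("1", "a"), ("2", "a"), ("a", "b"), ("b", "3"), ("b", "4")]
for cluster in sorted(edge_clusters(tree, {"1", "2", "3", "4"}), key=sorted):
    print(sorted(cluster))
```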
Evolutionary trees
According to simple forms of the theory of evolution, the history of life can be summarized as a phylogenetic tree in which each node describes a species, the leaves represent the species that exist today, and the edges represent ancestor-descendant relationships between species. This tree has a natural orientation from ancestors to descendants, and a root at the common ancestor of the species, so it is a rooted tree. However, some methods of reconstructing binary trees can reconstruct only the nodes and the edges of this tree, but not their orientations.
For instance, cladistic methods such as maximum parsimony use as data a set of binary attributes describing features of the species. These methods seek a tree with the given species as leaves in which the internal nodes are also labeled with features, and attempt to minimize the number of times some feature is present at only one of the two endpoints of an edge in the tree. Ideally, each feature should only have one edge for which this is the case. Changing the root of a tree does not change this number of edge differences, so methods based on parsimony are not capable of determining the location of the tree root and will produce an unrooted tree, often an unrooted binary tree.[3]
Unrooted binary trees also are produced by methods for inferring evolutionary trees based on quartet data specifying, for each four leaf species, the unrooted binary tree describing the evolution of those four species, and by methods that use quartet distance to measure the distance between trees.[4]
Branch-decomposition
Unrooted binary trees are also used to define branch-decompositions of graphs, by forming an unrooted binary tree whose leaves represent the edges of the given graph. That is, a branch-decomposition may be viewed as a hierarchical clustering of the edges of the graph. Branch-decompositions and an associated numerical quantity, branch-width, are closely related to treewidth and form the basis for efficient dynamic programming algorithms on graphs.[5]
Enumeration
Because of their applications in hierarchical clustering, the most natural graph enumeration problem on unrooted binary trees is to count the number of trees with n labeled leaves and unlabeled internal nodes. An unrooted binary tree on n labeled leaves can be formed by connecting the nth leaf to a new node in the middle of any of the edges of an unrooted binary tree on n − 1 labeled leaves. There are 2n − 5 edges at which the nth node can be attached; therefore, the number of trees on n leaves is larger than the number of trees on n − 1 leaves by a factor of 2n − 5. Thus, the number of trees on n labeled leaves is the double factorial
$(2n-5)!!={\frac {(2n-4)!}{(n-2)!2^{n-2}}}.$[6]
The numbers of trees on 2, 3, 4, 5, ... labeled leaves are
1, 1, 3, 15, 105, 945, 10395, 135135, 2027025, 34459425, ... (sequence A001147 in the OEIS).
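A minimal sketch of this recurrence (the function name is ours) multiplies the factors 2k − 5 as each new leaf is attached, reproducing the double factorials above:

```python
def count_unrooted_binary_trees(n: int) -> int:
    """Number of unrooted binary trees on n >= 2 labeled leaves: (2n - 5)!!."""
    count = 1
    for k in range(3, n + 1):   # leaf k can attach to any of 2k - 5 edges
        count *= 2 * k - 5
    return count

print([count_unrooted_binary_trees(n) for n in range(2, 8)])
# [1, 1, 3, 15, 105, 945]
```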
Fundamental Equalities
The leaf-to-leaf path-length on a fixed Unrooted Binary Tree (UBT) T encodes the number of edges belonging to the unique path in T connecting a given leaf to another leaf. For example, in the four-leaf UBT in which leaves 1 and 2 are adjacent to one internal node and leaves 3 and 4 to the other, the path-length $p_{1,2}$ between the leaves 1 and 2 is equal to 2 whereas the path-length $p_{1,3}$ between the leaves 1 and 3 is equal to 3. The path-length sequence from a given leaf on a fixed UBT T encodes the lengths of the paths from the given leaf to all the remaining ones. For example, in the same UBT, the path-length sequence from the leaf 1 is $p_{1}=(p_{1,2},p_{1,3},p_{1,4})=(2,3,3)$. The set of path-length sequences associated to the leaves of T is usually referred to as the path-length sequence collection of T.[7]
Daniele Catanzaro, Raffaele Pesenti and Laurence Wolsey showed[7] that the path-length sequence collection encoding a given UBT with n leaves must satisfy specific equalities, namely
• $p_{i,i}=0$ for all $i\in [1,n]$
• $p_{i,j}=p_{j,i}$ for all $i,j\in [1,n]:i\neq j$
• $p_{i,j}\leq p_{i,k}+p_{k,j}$ for all $i,j,k\in [1,n]:i\neq j\neq k$
• $\sum _{j=1,\,j\neq i}^{n}1/2^{p_{i,j}}=1/2$ for all $i\in [1,n]$ (which is an adaptation of the Kraft–McMillan inequality)
• $\sum _{i=1}^{n}\sum _{j=1}^{n}p_{i,j}/2^{p_{i,j}}=2n-3$, also referred to as the phylogenetic manifold.[7]
These equalities are proved to be necessary and independent for a path-length collection to encode an UBT with n leaves.[7] It is currently unknown whether they are also sufficient.
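As a sanity check, the following sketch verifies the Kraft-type equality and the phylogenetic-manifold equality on the four-leaf example used above; the path lengths are hard-coded from that example:

```python
from fractions import Fraction

# Path lengths on the 4-leaf UBT above (leaves 1, 2 at one internal node,
# leaves 3, 4 at the other), stored for unordered leaf pairs.
p = {(1, 2): 2, (1, 3): 3, (1, 4): 3, (2, 3): 3, (2, 4): 3, (3, 4): 2}
n = 4

def pl(i, j):
    return 0 if i == j else p[min(i, j), max(i, j)]

# Kraft-type equality: sum over j != i of 1/2^{p_ij} equals 1/2 for every leaf i.
for i in range(1, n + 1):
    assert sum(Fraction(1, 2 ** pl(i, j)) for j in range(1, n + 1) if j != i) == Fraction(1, 2)

# Phylogenetic manifold: the double sum of p_ij / 2^{p_ij} equals 2n - 3.
total = sum(Fraction(pl(i, j), 2 ** pl(i, j)) for i in range(1, n + 1) for j in range(1, n + 1))
assert total == 2 * n - 3
print("both equalities hold")
```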
Alternative names
Unrooted binary trees have also been called free binary trees,[8] cubic trees,[9] ternary trees[5] and unrooted ternary trees,.[10] However, the "free binary tree" name has also been applied to unrooted trees that may have degree-two nodes[11] and to rooted binary trees with unordered children,[12] and the "ternary tree" name is more frequently used to mean a rooted tree with three children per node.
Notes
1. Furnas (1984).
2. See e.g. Eppstein (2009) for the same correspondence between clusterings and trees, but using rooted binary trees instead of unrooted trees and therefore including an arbitrary choice of the root node.
3. Hendy & Penny (1989).
4. St. John et al. (2003).
5. Robertson & Seymour (1991).
6. Balding, Bishop & Cannings (2007).
7. Catanzaro D, Pesenti R, Wolsey L (2020). "On the Balanced Minimum Evolution Polytope". Discrete Optimization. 36: 100570. doi:10.1016/j.disopt.2020.100570. S2CID 213389485.
8. Czumaj & Gibbons (1996).
9. Exoo (1996).
10. Cilibrasi & Vitanyi (2006).
11. Harary, Palmer & Robinson (1992).
12. Przytycka & Larmore (1994).
References
• Balding, D. J.; Bishop, Martin J.; Cannings, Christopher (2007), Handbook of Statistical Genetics, vol. 1 (3rd ed.), Wiley-Interscience, p. 502, ISBN 978-0-470-05830-5.
• Cilibrasi, Rudi; Vitanyi, Paul M.B. (2006). "A new quartet tree heuristic for hierarchical clustering". arXiv:cs/0606048..
• Czumaj, Artur; Gibbons, Alan (1996), "Guthrie's problem: new equivalences and rapid reductions", Theoretical Computer Science, 154 (1): 3–22, doi:10.1016/0304-3975(95)00126-3.
• Eppstein, David (2009), "Squarepants in a tree: Sum of subtree clustering and hyperbolic pants decomposition", ACM Transactions on Algorithms, 5 (3): 1–24, arXiv:cs.CG/0604034, doi:10.1145/1541885.1541890, S2CID 2434.
• Exoo, Geoffrey (1996), "A simple method for constructing small cubic graphs of girths 14, 15, and 16" (PDF), Electronic Journal of Combinatorics, 3 (1): R30, doi:10.37236/1254.
• Furnas, George W. (1984), "The generation of random, binary unordered trees", Journal of Classification, 1 (1): 187–233, doi:10.1007/BF01890123, S2CID 121121529.
• Harary, Frank; Palmer, E.M.; Robinson, R.W. (1992), "Counting free binary trees admitting a given height" (PDF), Journal of Combinatorics, Information, and System Sciences, 17: 175–181.
• Hendy, Michael D.; Penny, David (1989), "A framework for the quantitative study of evolutionary trees", Systematic Biology, 38 (4): 297–309, doi:10.2307/2992396, JSTOR 2992396
• Przytycka, Teresa M.; Larmore, Lawrence L. (1994), "The optimal alphabetic tree problem revisited", Proc. 21st International Colloquium on Automata, Languages and Programming (ICALP '94), Lecture Notes in Computer Science, vol. 820, Springer-Verlag, pp. 251–262, doi:10.1007/3-540-58201-0_73.
• Robertson, Neil; Seymour, Paul D. (1991), "Graph minors. X. Obstructions to tree-decomposition", Journal of Combinatorial Theory, 52 (2): 153–190, doi:10.1016/0095-8956(91)90061-N.
• St. John, Katherine; Warnow, Tandy; Moret, Bernard M. E.; Vawterd, Lisa (2003), "Performance study of phylogenetic methods: (unweighted) quartet methods and neighbor-joining" (PDF), Journal of Algorithms, 48 (1): 173–193, doi:10.1016/S0196-6774(03)00049-X, S2CID 5550338.
| Wikipedia |
Unsatisfiable core
In mathematical logic, given an unsatisfiable Boolean propositional formula in conjunctive normal form, a subset of clauses whose conjunction is still unsatisfiable is called an unsatisfiable core of the original formula.
Many SAT solvers can produce a resolution graph which proves the unsatisfiability of the original problem. This can be analyzed to produce a smaller unsatisfiable core.
An unsatisfiable core is called a minimal unsatisfiable core, if every proper subset (allowing removal of any arbitrary clause or clauses) of it is satisfiable. Thus, such a core is a local minimum, though not necessarily a global one. There are several practical methods of computing minimal unsatisfiable cores.[1][2]
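As an illustration of a deletion-based method, the following self-contained sketch (a brute-force SAT test plus clause deletion, not a production MUS extractor) computes a minimal unsatisfiable core of a small CNF:

```python
from itertools import product

def satisfiable(clauses):
    """Brute-force SAT test for a CNF given as a list of clauses; each clause is
    a set of nonzero integers (positive = variable, negative = its negation)."""
    variables = sorted({abs(lit) for clause in clauses for lit in clause})
    for bits in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, bits))
        if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause) for clause in clauses):
            return True
    return False

def minimal_core(clauses):
    """Deletion-based extraction of a minimal (not necessarily minimum)
    unsatisfiable core: drop any clause whose removal keeps the formula UNSAT."""
    assert not satisfiable(clauses)
    core = list(clauses)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not satisfiable(trial):
            core = trial      # clause i was not needed for unsatisfiability
        else:
            i += 1            # clause i is necessary; keep it and move on
    return core

# (x) AND (NOT x) AND (x OR y): the first two clauses already form a core.
print(minimal_core([{1}, {-1}, {1, 2}]))   # -> [{1}, {-1}]
```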
A minimum unsatisfiable core contains the smallest number of the original clauses required to still be unsatisfiable. No practical algorithms for computing the minimum core are known.[3] Notice the terminology: whereas the minimal unsatisfiable core was a local problem with an easy solution, the minimum unsatisfiable core is a global problem with no known easy solution.
References
1. Dershowitz, N.; Hanna, Z.; Nadel, A. (2006). "A Scalable Algorithm for Minimal Unsatisfiable Core Extraction" (PDF). In Biere, A.; Gomes, C.P. (eds.). Theory and Applications of Satisfiability Testing — SAT 2006. Lecture Notes in Computer Science. Vol. 4121. Springer. pp. 36–41. arXiv:cs/0605085. CiteSeerX 10.1.1.101.5209. doi:10.1007/11814948_5. ISBN 978-3-540-37207-3. S2CID 2845982.
2. Szeider, Stefan (December 2004). "Minimal unsatisfiable formulas with bounded clause-variable difference are fixed-parameter tractable". Journal of Computer and System Sciences. 69 (4): 656–674. CiteSeerX 10.1.1.634.5311. doi:10.1016/j.jcss.2004.04.009.
3. Liffiton, M.H.; Sakallah, K.A. (2008). "Algorithms for Computing Minimal Unsatisfiable Subsets of Constraints" (PDF). J Autom Reason. 40: 1–33. CiteSeerX 10.1.1.79.1304. doi:10.1007/s10817-007-9084-z. S2CID 11106131.
| Wikipedia |
Unscented optimal control
In mathematics, unscented optimal control combines the notion of the unscented transform with deterministic optimal control to address a class of uncertain optimal control problems.[1][2][3] It is a specific application of Riemann–Stieltjes optimal control theory,[4][5] a concept introduced by Ross and his coworkers.
Mathematical description
Suppose that the initial state $x^{0}$ of a dynamical system,
${\dot {x}}=f(x,u,t)$
is an uncertain quantity. Let $\mathrm {X} ^{i}$ be the sigma points generated from the distribution of $x^{0}$. Then sigma-copies of the dynamical system are given by,
${\dot {\mathrm {X} }}^{i}=f(\mathrm {X} ^{i},u,t)$
Applying standard deterministic optimal control principles to this ensemble generates an unscented optimal control.[6][7][8] Unscented optimal control is a special case of tychastic optimal control theory.[1][9][10] According to Aubin[10] and Ross,[1] tychastic processes differ from stochastic processes in that a tychastic process is conditionally deterministic.
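As an illustration, the sketch below draws sigma points from a Gaussian description of $x^{0}$ and integrates one copy of the dynamics per sigma point under a shared control. The dynamics $f$, the control, the horizon, and the unscented-transform scaling $\kappa$ are placeholder choices for the demonstration, not part of any published method:

```python
# Sigma-copies of a system with an uncertain initial state, propagated
# under a common control; numpy is assumed, and f, u_of_t and kappa
# are illustrative placeholders.
import numpy as np

def sigma_points(mean, cov, kappa=0.0):
    n = len(mean)
    root = np.linalg.cholesky((n + kappa) * cov)   # matrix square root
    return np.vstack([mean, mean + root.T, mean - root.T])  # 2n + 1 points

def f(x, u, t):                  # placeholder dynamics: damped double integrator
    return np.array([x[1], u - 0.1 * x[1]])

def propagate(x0_mean, x0_cov, u_of_t, dt=0.01, horizon=1.0):
    X = sigma_points(x0_mean, x0_cov)              # one row per sigma-copy
    for k in range(int(horizon / dt)):
        t = k * dt
        X = X + dt * np.array([f(x, u_of_t(t), t) for x in X])  # Euler step
    return X

ensemble = propagate(np.zeros(2), 0.01 * np.eye(2), u_of_t=lambda t: 1.0)
print(ensemble.mean(axis=0))     # ensemble statistics of the final state
```

In an unscented optimal control formulation, a cost aggregated over these sigma-copies (for example, a weighted average) would then be minimized by standard deterministic methods.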
Applications
Unscented optimal control theory has been applied to UAV guidance,[8][11] spacecraft attitude control,[12] air-traffic control,[13] and low-thrust trajectory optimization.[2][6]
References
1. Ross, Isaac (2015). A primer on Pontryagin's principle in optimal control. San Francisco: Collegiate Publishers. pp. 75–82. ISBN 978-0-9843571-1-6.
2. Ross, I. Michael; Proulx, Ronald; Karpenko, Mark (August 2014). "Unscented Optimal Control for Orbital and Proximity Operations in an Uncertain Environment: A New Zermelo Problem". American Institute of Aeronautics and Astronautics (AIAA). doi:10.2514/6.2014-4423
3. Ross et al., Unscented Control for Uncertain Dynamical Systems, US Patent US 9,727,034 B1. Issued Aug 8, 2017. https://calhoun.nps.edu/bitstream/handle/10945/55812/USPN%209727034.pdf?sequence=1&isAllowed=y
4. Ross, I. Michael; Karpenko, Mark; Proulx, Ronald J. (2015). "Riemann-Stieltjes Optimal Control Problems for Uncertain Dynamic Systems". Journal of Guidance, Control, and Dynamics. AIAA. 38 (7): 1251–1263. Bibcode:2015JGCD...38.1251R. doi:10.2514/1.G000505. S2CID 121424228.
5. Karpenko, Mark; Proulx, Ronald J. (2016). "Experimental Implementation of Riemann–Stieltjes Optimal Control for Agile Imaging Satellites". Journal of Guidance, Control, and Dynamics. 39 (1): 144–150. Bibcode:2016JGCD...39..144K. doi:10.2514/1.g001325. ISSN 0731-5090. S2CID 116887441.
6. Naoya Ozaki and Ryu Funase. "Tube Stochastic Differential Dynamic Programming for Robust Low-Thrust Trajectory Optimization Problems", 2018 AIAA Guidance, Navigation, and Control Conference, AIAA SciTech Forum, (AIAA 2018-0861) doi:10.2514/6.2018-0861
7. "Robust Differential Dynamic Programming for Low-Thrust Trajectory Design: Approach with Robust Model Predictive Control Technique" (PDF).
8. Shaffer, R.; Karpenko, M.; Gong, Q. (July 2016). "Unscented guidance for waypoint navigation of a fixed-wing UAV". 2016 American Control Conference (ACC). pp. 473–478. doi:10.1109/acc.2016.7524959. ISBN 978-1-4673-8682-1. S2CID 11741951.
9. Ross, I. Michael; Karpenko, Mark; Proulx, Ronald J. (July 2016). "Path constraints in tychastic and unscented optimal control: Theory, application and experimental results". 2016 American Control Conference (ACC). IEEE. pp. 2918–2923. doi:10.1109/acc.2016.7525362. ISBN 978-1-4673-8682-1. S2CID 1123147.
10. Aubin, Jean-Pierre; Saint-Pierre, Patrick (2008), A Tychastic Approach to Guaranteed Pricing and Management of Portfolios under Transaction Constraints, Progress in Probability, vol. 59, Basel: Birkhäuser Basel, pp. 411–433, doi:10.1007/978-3-7643-8458-6_22, ISBN 978-3-7643-8457-9, retrieved 2020-12-23
11. Ross, I. M.; Proulx, R. J.; Karpenko, M. (July 2015). "Unscented guidance". 2015 American Control Conference (ACC). pp. 5605–5610. doi:10.1109/acc.2015.7172217. ISBN 978-1-4799-8684-2. S2CID 28136418.
12. Ross, I. M.; Karpenko, M.; Proulx, R. J. (July 2016). "Path constraints in tychastic and unscented optimal control: Theory, application and experimental results". 2016 American Control Conference (ACC). pp. 2918–2923. doi:10.1109/acc.2016.7525362. ISBN 978-1-4673-8682-1. S2CID 1123147.
13. Ng, Hok Kwan (2020-06-08), "Strategic Planning with Unscented Optimal Guidance for Urban Air Mobility", AIAA AVIATION 2020 FORUM, AIAA AVIATION Forum, American Institute of Aeronautics and Astronautics, doi:10.2514/6.2020-2904, ISBN 978-1-62410-598-2, S2CID 225658104, retrieved 2020-12-23
Point at infinity
In geometry, a point at infinity or ideal point is an idealized limiting point at the "end" of each line.
In the case of an affine plane (including the Euclidean plane), there is one ideal point for each pencil of parallel lines of the plane. Adjoining these points produces a projective plane, in which no point can be distinguished if we "forget" which points were added. This holds for a geometry over any field, and more generally over any division ring.[1]
In the real case, a point at infinity completes a line into a topologically closed curve. In higher dimensions, all the points at infinity form a projective subspace of one dimension less than that of the whole projective space to which they belong. A point at infinity can also be added to the complex line (which may be thought of as the complex plane), thereby turning it into a closed surface known as the complex projective line, CP1, also called the Riemann sphere (when complex numbers are mapped to each point).
In the case of a hyperbolic space, each line has two distinct ideal points. Here, the set of ideal points takes the form of a quadric.
Affine geometry
In an affine or Euclidean space of higher dimension, the points at infinity are the points which are added to the space to get the projective completion. The set of the points at infinity is called, depending on the dimension of the space, the line at infinity, the plane at infinity or the hyperplane at infinity, in all cases a projective space of one less dimension.[2]
As a projective space over a field is a smooth algebraic variety, the same is true for the set of points at infinity. Similarly, if the ground field is the real or the complex field, the set of points at infinity is a manifold.
Perspective
In artistic drawing and technical perspective, the projection on the picture plane of the point at infinity of a class of parallel lines is called their vanishing point.[3]
Hyperbolic geometry
Main article: Ideal point
In hyperbolic geometry, points at infinity are typically named ideal points.[4] Unlike Euclidean and elliptic geometries, each line has two points at infinity: given a line l and a point P not on l, the right- and left-limiting parallels converge asymptotically to different points at infinity.
All points at infinity together form the Cayley absolute or boundary of a hyperbolic plane.
Projective geometry
A symmetry of points and lines arises in a projective plane: just as a pair of points determine a line, so a pair of lines determine a point. The existence of parallel lines leads to establishing a point at infinity which represents the intersection of these parallels. This axiomatic symmetry grew out of a study of graphical perspective where a parallel projection arises as a central projection where the center C is a point at infinity, or figurative point.[5] The axiomatic symmetry of points and lines is called duality.
Though a point at infinity is considered on a par with any other point of a projective range, in the representation of points with projective coordinates a distinction is noted: finite points are represented with a 1 in the final coordinate, while a point at infinity has a 0 there. Representing points at infinity therefore requires one coordinate beyond those needed for the space of finite points.
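This bookkeeping is easy to exhibit in the projective plane, where a line $ax+by+c=0$ is represented by the triple $(a,b,c)$ and the intersection of two lines is given by the cross product of their triples. The particular lines in the sketch below are arbitrary examples:

```python
# Homogeneous coordinates in the projective plane: parallel lines meet
# in a point whose final coordinate is 0, i.e. a point at infinity.
import numpy as np

def line(a, b, c):           # the line a*x + b*y + c = 0 as a triple
    return np.array([a, b, c], dtype=float)

l1 = line(1, -1, 0)          # y = x
l2 = line(1, -1, 3)          # y = x + 3, parallel to l1
p = np.cross(l1, l2)         # homogeneous intersection point
print(p)                     # [-3. -3.  0.]: point at infinity of direction (1, 1)

m = line(1, 0, -1)           # x = 1, not parallel to l1
q = np.cross(l1, m)
print(q / q[2])              # [1. 1. 1.]: the finite point (1, 1)
```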
Other generalisations
Main article: Compactification (mathematics)
This construction can be generalized to topological spaces. Different compactifications may exist for a given space, but an arbitrary topological space admits an Alexandroff extension, also called the one-point compactification when the original space is not itself compact. The projective line (over an arbitrary field) is the Alexandroff extension of the corresponding field. Thus the circle is the one-point compactification of the real line, and the sphere is the one-point compactification of the plane. Projective spaces Pn for n > 1 are not one-point compactifications of the corresponding affine spaces, for the reason mentioned above under § Affine geometry, and completions of hyperbolic spaces with ideal points are also not one-point compactifications.
See also
• Division by zero
• Sphere at infinity
• Midpoint § Generalizations
• Asymptote § Algebraic curves
References
1. Weisstein, Eric W. "Point at Infinity". mathworld.wolfram.com. Wolfram Research. Retrieved 28 December 2016.
2. Coxeter, H. S. M. (1987). Projective Geometry (2nd ed.). Springer-Verlag. p. 109.
3. Faugeras, Olivier; Luong, Quang-Tuan (2001). The Geometry of Multiple Images: The Laws That Govern the Formation of Multiple Images of a Scene and Some of Their Applications. MIT Press. p. 19. ISBN 978-0262062206.
4. Kay, David C. (2011). College Geometry: A Unified Development. CRC Press. p. 548.
5. Halsted, G. B. (1906). Synthetic Projective Geometry. p. 7.
Undecidable problem
In computability theory and computational complexity theory, an undecidable problem is a decision problem for which it is proved to be impossible to construct an algorithm that always leads to a correct yes-or-no answer.[1] The halting problem is an example: it can be proven that there is no algorithm that correctly determines whether arbitrary programs eventually halt when run.[2]
Background
A decision problem is a question which, for every input in some infinite set of inputs, answers "yes" or "no".[3] Those inputs can be numbers (for example, the decision problem "is the input a prime number?") or values of some other kind, such as strings of a formal language.
The formal representation of a decision problem is a subset of the natural numbers. For decision problems on natural numbers, the set consists of those numbers that the decision problem answers "yes" to. For example, the decision problem "is the input even?" is formalized as the set of even numbers. A decision problem whose input consists of strings or more complex values is formalized as the set of numbers that, via a specific Gödel numbering, correspond to inputs that satisfy the decision problem's criteria.
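As a small illustration of this convention, the snippet below identifies the evenness problem with its set of "yes" instances through the set's characteristic function (the function name is ours, for illustration only):

```python
# The decision problem "is the input even?" formalized as the set of
# even natural numbers, accessed via its characteristic function.
def in_even_set(n: int) -> bool:
    return n % 2 == 0

print([n for n in range(10) if in_even_set(n)])  # [0, 2, 4, 6, 8]
```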
A decision problem A is called decidable or effectively solvable if the formalized set of A is a recursive set. Otherwise, A is called undecidable. A problem is called partially decidable, semi-decidable, solvable, or provable if A is a recursively enumerable set.[nb 1]
Example: the halting problem in computability theory
In computability theory, the halting problem is a decision problem which can be stated as follows:
Given the description of an arbitrary program and a finite input, decide whether the program finishes running or will run forever.
Alan Turing proved in 1936 that a general algorithm running on a Turing machine that solves the halting problem for all possible program-input pairs necessarily cannot exist. Hence, the halting problem is undecidable for Turing machines.
Relationship with Gödel's incompleteness theorem
The concepts raised by Gödel's incompleteness theorems are very similar to those raised by the halting problem, and the proofs are quite similar. In fact, a weaker form of the First Incompleteness Theorem is an easy consequence of the undecidability of the halting problem. This weaker form differs from the standard statement of the incompleteness theorem by asserting that an axiomatization of the natural numbers that is both complete and sound is impossible. The "sound" part is the weakening: it means that we require the axiomatic system in question to prove only true statements about natural numbers. Since soundness implies consistency, this weaker form can be seen as a corollary of the strong form. It is important to observe that the statement of the standard form of Gödel's First Incompleteness Theorem is completely unconcerned with the truth value of a statement, but only concerns the issue of whether it is possible to find it through a mathematical proof.
The weaker form of the theorem can be proved from the undecidability of the halting problem as follows.[4] Assume that we have a sound (and hence consistent) and complete axiomatization of all true first-order logic statements about natural numbers. Then we can build an algorithm that enumerates all these statements. This means that there is an algorithm N(n) that, given a natural number n, computes a true first-order logic statement about natural numbers, and that for all true statements, there is at least one n such that N(n) yields that statement. Now suppose we want to decide if the algorithm with representation a halts on input i. We know that this statement can be expressed with a first-order logic statement, say H(a, i). Since the axiomatization is complete it follows that either there is an n such that N(n) = H(a, i) or there is an n′ such that N(n′) = ¬ H(a, i). So if we iterate over all n until we either find H(a, i) or its negation, we will always halt, and furthermore, the answer it gives us will be true (by soundness). This means that this gives us an algorithm to decide the halting problem. Since we know that there cannot be such an algorithm, it follows that the assumption that there is a consistent and complete axiomatization of all true first-order logic statements about natural numbers must be false.
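The shape of this argument can be set down as a program. In the sketch below, enumerate_theorems, halts_statement, and negation are loudly hypothetical stand-ins for the assumed sound and complete axiomatization; the proof shows precisely that no such trio can exist, since otherwise the loop would decide the halting problem:

```python
# A sketch of the reduction; the three helpers are hypothetical stubs
# for an assumed sound, complete, effective axiomatization, which the
# argument shows cannot exist.
def halts_statement(a, i):
    return ("halts", a, i)            # stand-in for the sentence H(a, i)

def negation(statement):
    return ("not", statement)         # stand-in for the negation of H(a, i)

def enumerate_theorems():             # stand-in for N(0), N(1), N(2), ...
    raise NotImplementedError("no such enumeration can exist")
    yield                             # (marks this stub as a generator)

def decide_halting(a, i):
    target = halts_statement(a, i)
    for theorem in enumerate_theorems():
        if theorem == target:
            return True               # H(a, i) provable, hence true: a halts on i
        if theorem == negation(target):
            return False              # its negation provable: a runs forever
    # Completeness would guarantee one of the two branches fires, so this
    # would decide the halting problem, a contradiction.
```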
Examples of undecidable problems
Main article: List of undecidable problems
Undecidable problems can be related to different topics, such as logic, abstract machines or topology. Since there are uncountably many undecidable problems,[nb 2] any list, even one of infinite length, is necessarily incomplete.
Examples of undecidable statements
See also: List of statements independent of ZFC and Independence (mathematical logic)
There are two distinct senses of the word "undecidable" in contemporary use. The first of these is the sense used in relation to Gödel's theorems, that of a statement being neither provable nor refutable in a specified deductive system. The second sense is used in relation to computability theory and applies not to statements but to decision problems, which are countably infinite sets of questions each requiring a yes or no answer. Such a problem is said to be undecidable if there is no computable function that correctly answers every question in the problem set. The connection between these two is that if a decision problem is undecidable (in the recursion theoretical sense) then there is no consistent, effective formal system which proves for every question A in the problem either "the answer to A is yes" or "the answer to A is no".
Because of the two meanings of the word undecidable, the term independent is sometimes used instead of undecidable for the "neither provable nor refutable" sense. The usage of "independent" is also ambiguous, however. It can mean just "not provable", leaving open whether an independent statement might be refuted.
Undecidability of a statement in a particular deductive system does not, in and of itself, address the question of whether the truth value of the statement is well-defined, or whether it can be determined by other means. Undecidability only implies that the particular deductive system being considered does not prove the truth or falsity of the statement. Whether there exist so-called "absolutely undecidable" statements, whose truth value can never be known or is ill-specified, is a controversial point among various philosophical schools.
One of the first problems suspected to be undecidable, in the second sense of the term, was the word problem for groups, first posed by Max Dehn in 1911, which asks if there is a finitely presented group for which no algorithm exists to determine whether two words are equivalent. This was shown to be the case in 1952.
The combined work of Gödel and Paul Cohen has given two concrete examples of undecidable statements (in the first sense of the term): the continuum hypothesis can neither be proved nor refuted in ZFC (the standard axiomatization of set theory), and the axiom of choice can neither be proved nor refuted in ZF (which is all the ZFC axioms except the axiom of choice). These results do not require the incompleteness theorem. Gödel proved in 1940 that neither of these statements could be disproved in ZF or ZFC set theory. In the 1960s, Cohen proved that the continuum hypothesis cannot be proven from ZFC, and that the axiom of choice cannot be proven from ZF.
In 1970, Russian mathematician Yuri Matiyasevich showed that Hilbert's Tenth Problem, posed in 1900 as a challenge to the next century of mathematicians, cannot be solved. Hilbert's challenge sought an algorithm which finds all solutions of a Diophantine equation, that is, the integer roots of a polynomial in any number of variables with integer coefficients; the equation of Fermat's Last Theorem is a special case. Since there is only one equation but n variables, infinitely many solutions exist (and are easy to find) in the complex plane; however, the problem becomes impossible if solutions are constrained to integer values only. Matiyasevich showed this problem to be unsolvable by mapping a Diophantine equation to a recursively enumerable set and invoking Gödel's incompleteness theorem.[5]
In 1936, Alan Turing proved that the halting problem—the question of whether or not a Turing machine halts on a given program—is undecidable, in the second sense of the term. This result was later generalized by Rice's theorem.
In 1973, Saharon Shelah showed the Whitehead problem in group theory is undecidable, in the first sense of the term, in standard set theory.[6]
In 1977, Paris and Harrington proved that the Paris–Harrington principle, a version of Ramsey's theorem, is undecidable in the axiomatization of arithmetic given by the Peano axioms, but can be proven to be true in the larger system of second-order arithmetic.
Kruskal's tree theorem, which has applications in computer science, is also undecidable from the Peano axioms but provable in set theory. In fact, Kruskal's tree theorem (or its finite form) is undecidable in a much stronger system codifying the principles acceptable on the basis of a philosophy of mathematics called predicativism.
Goodstein's theorem is a statement about the Ramsey theory of the natural numbers that Kirby and Paris showed is undecidable in Peano arithmetic.
Gregory Chaitin produced undecidable statements in algorithmic information theory and proved another incompleteness theorem in that setting. Chaitin's theorem states that for any theory that can represent enough arithmetic, there is an upper bound c such that no specific number can be proven in that theory to have Kolmogorov complexity greater than c. While Gödel's theorem is related to the liar paradox, Chaitin's result is related to Berry's paradox.
In 2007, researchers Kurtz and Simon, building on earlier work by J.H. Conway in the 1970s, proved that a natural generalization of the Collatz problem is undecidable.[7]
In 2019, Ben-David and colleagues constructed an example of a learning model (named EMX), and showed a family of functions whose learnability in EMX is undecidable in standard set theory.[8][9]
See also
• Decidability (logic)
• Entscheidungsproblem
• Proof of impossibility
• Unknowability
• Wicked problem
Notes
1. This means that there exists an algorithm that halts eventually when the answer is yes but may run forever if the answer is no.
2. There are uncountably many subsets of $\{0,1\}^{*}$, only countably many of which can be decided by algorithms. However, also only countably many decision problems can be stated in any language.
References
1. "Decidable and Undecidable problems in Theory of Computation". GeeksforGeeks. 2018-01-08. Retrieved 2022-06-12.
2. "Formal Computational Models and Computability". www.cs.rochester.edu. Retrieved 2022-06-12.
3. "decision problem". Oxford Reference. Retrieved 2022-06-12.
4. Aaronson, Scott (21 July 2011). "Rosser's Theorem via Turing machines". Shtetl-Optimized. Retrieved 2 November 2022.
5. Matiyasevich, Yuri (1970). Диофантовость перечислимых множеств [Enumerable sets are Diophantine]. Doklady Akademii Nauk SSSR (in Russian). 191: 279–282.
6. Shelah, Saharon (1974). "Infinite Abelian groups, Whitehead problem and some constructions". Israel Journal of Mathematics. 18 (3): 243–256. doi:10.1007/BF02757281. MR 0357114.
7. Kurtz, Stuart A.; Simon, Janos, "The Undecidability of the Generalized Collatz Problem", in Proceedings of the 4th International Conference on Theory and Applications of Models of Computation, TAMC 2007, held in Shanghai, China in May 2007. ISBN 3-540-72503-2. doi:10.1007/978-3-540-72504-6_49
8. Ben-David, Shai; Hrubeš, Pavel; Moran, Shay; Shpilka, Amir; Yehudayoff, Amir (2019-01-07). "Learnability can be undecidable". Nature Machine Intelligence. 1 (1): 44–48. doi:10.1038/s42256-018-0002-3. ISSN 2522-5839.
9. Reyzin, Lev (2019). "Unprovability comes to machine learning". Nature. 565 (7738): 166–167. doi:10.1038/d41586-019-00012-4. ISSN 0028-0836.
Soundness
In logic or, more precisely, deductive reasoning, an argument is sound if it is both valid in form and its premises are true.[1] Soundness also has a related meaning in mathematical logic, wherein logical systems are sound if and only if every formula that can be proved in the system is logically valid with respect to the semantics of the system.
Definition
In deductive reasoning, a sound argument is an argument that is valid and all of its premises are true (and as a consequence its conclusion is true as well). An argument is valid if, assuming its premises are true, the conclusion must be true. An example of a sound argument is the following well-known syllogism:
(premises)
All men are mortal.
Socrates is a man.
(conclusion)
Therefore, Socrates is mortal.
Because of the logical necessity of the conclusion, this argument is valid; and because the argument is valid and its premises are true, the argument is sound.
However, an argument can be valid without being sound. For example:
All birds can fly.
Penguins are birds.
Therefore, penguins can fly.
This argument is valid as the conclusion must be true assuming the premises are true. However, the first premise is false. Not all birds can fly (for example, penguins). For an argument to be sound, the argument must be valid and its premises must be true.[2]
Use in mathematical logic
Logical systems
In mathematical logic, a logical system has the soundness property if every formula that can be proved in the system is logically valid with respect to the semantics of the system. In most cases, this comes down to its rules having the property of preserving truth.[3] The converse of soundness is known as completeness.
A logical system with syntactic entailment $\vdash $ and semantic entailment $\models $ is sound if for any sequence $A_{1},A_{2},...,A_{n}$ of sentences in its language, if $A_{1},A_{2},...,A_{n}\vdash C$, then $A_{1},A_{2},...,A_{n}\models C$. In other words, a system is sound when all of its theorems are tautologies.
Soundness is among the most fundamental properties of mathematical logic. The soundness property provides the initial reason for counting a logical system as desirable. The completeness property means that every validity (truth) is provable. Together they imply that all and only validities are provable.
Most proofs of soundness are trivial. For example, in an axiomatic system, proof of soundness amounts to verifying the validity of the axioms and that the rules of inference preserve validity (or the weaker property, truth). If the system allows Hilbert-style deduction, it requires only verifying the validity of the axioms and of one rule of inference, namely modus ponens (and sometimes substitution).
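For a propositional system, the check that modus ponens preserves truth is a finite truth-table computation, as the snippet below illustrates:

```python
# Verify by truth table that modus ponens preserves truth: in every
# valuation where both p and (p -> q) hold, q holds as well.
from itertools import product

def implies(a, b):
    return (not a) or b

assert all(q
           for p, q in product([False, True], repeat=2)
           if p and implies(p, q))
print("modus ponens preserves truth")
```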
Soundness properties come in two main varieties: weak and strong soundness, of which the former is a restricted form of the latter.
Soundness
Soundness of a deductive system is the property that any sentence that is provable in that deductive system is also true on all interpretations or structures of the semantic theory for the language upon which that theory is based. In symbols, where S is the deductive system, L the language together with its semantic theory, and P a sentence of L: if ⊢S P, then also ⊨L P.
Strong soundness
Strong soundness of a deductive system is the property that any sentence P of the language upon which the deductive system is based that is derivable from a set Γ of sentences of that language is also a logical consequence of that set, in the sense that any model that makes all members of Γ true will also make P true. In symbols where Γ is a set of sentences of L: if Γ ⊢S P, then also Γ ⊨L P. Notice that in the statement of strong soundness, when Γ is empty, we have the statement of weak soundness.
Arithmetic soundness
If T is a theory whose objects of discourse can be interpreted as natural numbers, we say T is arithmetically sound if all theorems of T are actually true about the standard mathematical integers. For further information, see ω-consistent theory.
Relation to completeness
The converse of the soundness property is the semantic completeness property. A deductive system with a semantic theory is strongly complete if every sentence P that is a semantic consequence of a set of sentences Γ can be derived in the deduction system from that set. In symbols: whenever Γ ⊨ P, then also Γ ⊢ P. Completeness of first-order logic was first explicitly established by Gödel, though some of the main results were contained in earlier work of Skolem.
Informally, a soundness theorem for a deductive system expresses that all provable sentences are true. Completeness states that all true sentences are provable.
Gödel's first incompleteness theorem shows that for languages sufficient for doing a certain amount of arithmetic, there can be no consistent and effective deductive system that is complete with respect to the intended interpretation of the symbolism of that language. Thus, not all sound deductive systems are complete in this special sense of completeness, in which the class of models (up to isomorphism) is restricted to the intended one. The original completeness proof applies to all classical models, not some special proper subclass of intended ones.
See also
• Soundness (interactive proof)
References
1. Smith, Peter (2010). "Types of proof system" (PDF). p. 5.
2. Gensler, Harry J. (2017). Introduction to Logic (3rd ed.). New York. ISBN 978-1-138-91058-4. OCLC 957680480.
3. Mindus, Patricia (2009-09-18). A Real Mind: The Life and Work of Axel Hägerström. Springer Science & Business Media. ISBN 978-90-481-2895-2.
Bibliography
• Hinman, P. (2005). Fundamentals of Mathematical Logic. A K Peters. ISBN 1-56881-262-0.
• Copi, Irving (1979), Symbolic Logic (5th ed.), Macmillan Publishing Co., ISBN 0-02-324880-7
• Boolos, Burgess, Jeffrey. Computability and Logic, 4th Ed, Cambridge, 2002.
External links
• Soundness at Wiktionary
• Validity and Soundness in the Internet Encyclopedia of Philosophy.
Unstructured grid
An unstructured grid or irregular grid is a tessellation of a part of the Euclidean plane or Euclidean space by simple shapes, such as triangles or tetrahedra, in an irregular pattern. Grids of this type may be used in finite element analysis when the input to be analyzed has an irregular shape.
Unlike structured grids, unstructured grids require a connectivity list specifying the way a given set of vertices makes up individual elements (see graph (data structure)).
Ruppert's algorithm is often used to convert an irregularly shaped polygon into an unstructured grid of triangles.
In addition to triangles and tetrahedra, other commonly used elements in finite element simulation include quadrilateral (4-noded) and hexahedral (8-noded) elements in 2D and 3D, respectively. One of the most commonly used algorithms for generating unstructured quadrilateral grids is "Paving". However, there is no comparably established algorithm for generating unstructured hexahedral grids on a general 3D solid model. "Plastering" is a 3D version of Paving, but it has difficulty forming hexahedral elements at the interior of a solid.
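The vertex-list-plus-connectivity representation is simple to write down; the toy mesh below (a unit square split into two triangles, chosen only for illustration) shows the idea:

```python
# A minimal unstructured triangular grid: a flat vertex array plus a
# connectivity list giving the vertex indices of each element.
vertices = [
    (0.0, 0.0),   # vertex 0
    (1.0, 0.0),   # vertex 1
    (1.0, 1.0),   # vertex 2
    (0.0, 1.0),   # vertex 3
]
triangles = [     # counter-clockwise vertex indices per element
    (0, 1, 2),
    (0, 2, 3),
]

def element_area(tri):
    (x0, y0), (x1, y1), (x2, y2) = (vertices[i] for i in tri)
    return 0.5 * abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0))

print(sum(element_area(t) for t in triangles))  # 1.0, the square's area
```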
See also
• Gridding – Interpolation on functions of more than one variable
• Types of mesh
• Regular grid – Tessellation of n-dimensional Euclidean space by congruent parallelotopes
• Mesh generation – Subdivision of space into cells
• Finite element analysis – Numerical method for solving physical or engineering problems
External links
• "Types of Grids". Archived from the original on 2013-03-25. Unstructured Grid
Multiple kernel learning
Multiple kernel learning refers to a set of machine learning methods that use a predefined set of kernels and learn an optimal linear or non-linear combination of kernels as part of the algorithm. Reasons to use multiple kernel learning include a) the ability to select for an optimal kernel and parameters from a larger set of kernels, reducing bias due to kernel selection while allowing for more automated machine learning methods, and b) combining data from different sources (e.g. sound and images from a video) that have different notions of similarity and thus require different kernels. Instead of creating a new kernel, multiple kernel algorithms can be used to combine kernels already established for each individual data source.
Multiple kernel learning approaches have been used in many applications, such as event recognition in video,[1] object recognition in images,[2] and biomedical data fusion.[3]
Algorithms
Multiple kernel learning algorithms have been developed for supervised, semi-supervised, as well as unsupervised learning. Most work has been done on the supervised learning case with linear combinations of kernels; however, many algorithms have been developed. The basic idea behind multiple kernel learning algorithms is to add an extra parameter to the minimization problem of the learning algorithm. As an example, consider the case of supervised learning of a linear combination of a set of $n$ kernels $K$. We introduce a new kernel $K'=\sum _{i=1}^{n}\beta _{i}K_{i}$, where $\beta $ is a vector of coefficients for each kernel. Because a sum of kernels is again a kernel (due to properties of reproducing kernel Hilbert spaces), this new function is still a kernel. For a set of data $X$ with labels $Y$, the minimization problem can then be written as
$\min _{\beta ,c}\mathrm {E} (Y,K'c)+R(K,c)$
where $\mathrm {E} $ is an error function and $R$ is a regularization term. $\mathrm {E} $ is typically the square loss function (Tikhonov regularization) or the hinge loss function (for SVM algorithms), and $R$ is usually an $\ell _{n}$ norm or some combination of the norms (i.e. elastic net regularization). This optimization problem can then be solved by standard optimization methods. Adaptations of existing techniques such as the Sequential Minimal Optimization have also been developed for multiple kernel SVM-based methods.[4]
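Once $\beta $ is fixed, the combined kernel can be used with any kernel machine that accepts a precomputed Gram matrix. The sketch below sets $\beta $ by hand purely to show how $K'$ enters; an MKL method would instead learn these weights as part of the optimization (scikit-learn and the toy data are assumptions of the example):

```python
# Using a fixed linear combination of kernels with an SVM; scikit-learn
# is an assumed dependency, and beta is hand-picked rather than learned.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel, rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int)          # toy labels

kernels = [linear_kernel(X, X), rbf_kernel(X, X, gamma=0.5)]
beta = [0.3, 0.7]                                # fixed, nonnegative weights
K = sum(b * Km for b, Km in zip(beta, kernels))  # K' = sum_i beta_i K_i

clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```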
Supervised learning
For supervised learning, there are many other algorithms that use different methods to learn the form of the kernel. The following categorization has been proposed by Gönen and Alpaydın (2011).[5]
Fixed rules approaches
Fixed rules approaches such as the linear combination algorithm described above use rules to set the combination of the kernels. These do not require parameterization: the kernels are combined by rules such as summation and multiplication, with no weighting to be learned. Other examples of fixed rules include pairwise kernels, which are of the form
$k((x_{1i},x_{1j}),(x_{2i},x_{2j}))=k(x_{1i},x_{2i})k(x_{1j},x_{2j})+k(x_{1i},x_{2j})k(x_{1j},x_{2i})$.
These pairwise approaches have been used in predicting protein-protein interactions.[6]
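The pairwise form above transcribes directly into code; the base kernel $k$ below (an RBF on scalars) is just an example choice:

```python
# The pairwise kernel on pairs of items, built from any base kernel k;
# the RBF base kernel here is an illustrative choice.
import math

def k(a, b):
    return math.exp(-(a - b) ** 2)

def pairwise_kernel(pair1, pair2):
    (x1i, x1j), (x2i, x2j) = pair1, pair2
    return k(x1i, x2i) * k(x1j, x2j) + k(x1i, x2j) * k(x1j, x2i)

# Symmetric under swapping the order within either pair:
print(pairwise_kernel((0.0, 1.0), (1.0, 0.0)))
print(pairwise_kernel((1.0, 0.0), (1.0, 0.0)))
```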
Heuristic approaches
These algorithms use a combination function that is parameterized. The parameters are generally defined for each individual kernel based on single-kernel performance or some computation from the kernel matrix. Examples of these include the kernel from Tanabe et al. (2008).[7] Letting $\pi _{m}$ be the accuracy obtained using only $K_{m}$, and letting $\delta $ be a threshold less than the minimum of the single-kernel accuracies, we can define
$\beta _{m}={\frac {\pi _{m}-\delta }{\sum _{h=1}^{n}(\pi _{h}-\delta )}}$
Other approaches use a definition of kernel similarity, such as
$A(K_{1},K_{2})={\frac {\langle K_{1},K_{2}\rangle }{\sqrt {\langle K_{1},K_{1}\rangle \langle K_{2},K_{2}\rangle }}}$
Using this measure, Qiu and Lane (2009)[8] used the following heuristic to define
$\beta _{m}={\frac {A(K_{m},YY^{T})}{\sum _{h=1}^{n}A(K_{h},YY^{T})}}$
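Both heuristics are short computations on the kernel matrices. The sketch below implements the alignment-based weights, assuming labels in $\{-1,+1\}$ so that $YY^{T}$ is the ideal target kernel:

```python
# Alignment-based heuristic weights: each kernel is weighted by its
# similarity A(K_m, Y Y^T) to the ideal kernel built from +/-1 labels.
import numpy as np

def alignment(K1, K2):
    return np.sum(K1 * K2) / np.sqrt(np.sum(K1 * K1) * np.sum(K2 * K2))

def alignment_weights(kernels, y):
    yy = np.outer(y, y).astype(float)       # the ideal kernel Y Y^T
    a = np.array([alignment(Km, yy) for Km in kernels])
    return a / a.sum()

y = np.array([1, 1, -1, -1])
K_good = np.outer(y, y).astype(float)       # perfectly aligned kernel
K_flat = np.ones((4, 4))                    # uninformative kernel
print(alignment_weights([K_good, K_flat], y))  # puts all weight on K_good
```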
Optimization approaches
These approaches solve an optimization problem to determine parameters for the kernel combination function. This has been done with similarity measures and structural risk minimization approaches. For similarity measures such as the one defined above, the problem can be formulated as follows:[9]
$\max _{\beta ,\operatorname {tr} (K'_{tra})=1,K'\geq 0}A(K'_{tra},YY^{T}).$
where $K'_{tra}$ is the kernel of the training set.
Structural risk minimization approaches that have been used include linear approaches, such as that used by Lanckriet et al. (2002).[10] We can define the implausibility of a kernel $\omega (K)$ to be the value of the objective function after solving a canonical SVM problem. We can then solve the following minimization problem:
$\min _{\operatorname {tr} (K'_{tra})=c}\omega (K'_{tra})$
where $c$ is a positive constant. Many other variations exist on the same idea, with different methods of refining and solving the problem, e.g. with nonnegative weights for individual kernels and using non-linear combinations of kernels.
Bayesian approaches
Bayesian approaches put priors on the kernel parameters and learn the parameter values from the priors and the base algorithm. For example, the decision function can be written as
$f(x)=\sum _{i=0}^{n}\alpha _{i}\sum _{m=1}^{p}\eta _{m}K_{m}(x_{i}^{m},x^{m})$
$\eta $ can be modeled with a Dirichlet prior and $\alpha $ can be modeled with a zero-mean Gaussian and an inverse gamma variance prior. This model is then optimized using a customized multinomial probit approach with a Gibbs sampler.[11] These methods have been used successfully in applications such as protein fold recognition and protein homology problems.[12][13]
Boosting approaches
Boosting approaches add new kernels iteratively until some stopping criterion that is a function of performance is reached. An example of this is the MARK model developed by Bennett et al. (2002).[14]
$f(x)=\sum _{i=1}^{N}\sum _{m=1}^{P}\alpha _{i}^{m}K_{m}(x_{i}^{m},x^{m})+b$
The parameters $\alpha _{i}^{m}$ and $b$ are learned by gradient descent on a coordinate basis. In this way, each iteration of the descent algorithm identifies the best kernel column to choose at each particular iteration and adds that to the combined kernel. The model is then rerun to generate the optimal weights $\alpha _{i}$ and $b$.
Semisupervised learning
Semisupervised learning approaches to multiple kernel learning are similar to other extensions of supervised learning approaches. An inductive procedure has been developed that uses a log-likelihood empirical loss and group LASSO regularization with conditional expectation consensus on unlabeled data for image categorization. We can define the problem as follows. Let $L=\{(x_{i},y_{i})\}$ be the labeled data, and let $U=\{x_{i}\}$ be the set of unlabeled data. Then, we can write the decision function as follows.
$f(x)=\alpha _{0}+\sum _{i=1}^{|L|}\alpha _{i}K_{i}(x)$
The problem can be written as
$\min _{f}L(f)+\lambda R(f)+\gamma \Theta (f)$
where $L$ is the loss function (weighted negative log-likelihood in this case), $R$ is the regularization term (group LASSO in this case), and $\Theta $ is the conditional expectation consensus (CEC) penalty on unlabeled data. The CEC penalty is defined as follows. Let the marginal kernel density for all the data be
$g_{m}^{\pi }(x)=\langle \phi _{m}^{\pi },\psi _{m}(x)\rangle $
where $\psi _{m}(x)=[K_{m}(x_{1},x),\ldots ,K_{m}(x_{L},x)]^{T}$ (the kernel distance between the labeled data and all of the labeled and unlabeled data) and $\phi _{m}^{\pi }$ is a non-negative random vector with a 2-norm of 1. The value of $\Pi $ is the number of times each kernel is projected. Expectation regularization is then performed on the MKD, resulting in a reference expectation $q_{m}^{\pi }(y|g_{m}^{\pi }(x))$ and model expectation $p_{m}^{\pi }(f(x)|g_{m}^{\pi }(x))$. Then, we define
$\Theta ={\frac {1}{\Pi }}\sum _{\pi =1}^{\Pi }\sum _{m=1}^{M}D(q_{m}^{\pi }(y|g_{m}^{\pi }(x))||p_{m}^{\pi }(f(x)|g_{m}^{\pi }(x)))$
where $D(Q||P)=\sum _{i}Q(i)\ln {\frac {Q(i)}{P(i)}}$ is the Kullback–Leibler divergence. The combined minimization problem is optimized using a modified block gradient descent algorithm. For more information, see Wang et al.[15]
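The divergence in the penalty is the ordinary discrete one; a minimal helper, with the usual $0\ln 0=0$ convention:

```python
# Discrete Kullback-Leibler divergence D(Q || P), with 0*log(0) = 0.
import numpy as np

def kl_divergence(q, p):
    q = np.asarray(q, dtype=float)
    p = np.asarray(p, dtype=float)
    mask = q > 0
    return float(np.sum(q[mask] * np.log(q[mask] / p[mask])))

print(kl_divergence([0.5, 0.5], [0.9, 0.1]))  # ~0.511 nats
```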
Unsupervised learning
Unsupervised multiple kernel learning algorithms have also been proposed by Zhuang et al. The problem is defined as follows. Let $U=\{x_{i}\}$ be a set of unlabeled data. The kernel definition is the linear combined kernel $K'=\sum _{m=1}^{M}\beta _{m}K_{m}$. In this problem, the data needs to be "clustered" into groups based on the kernel distances. Let $B_{i}$ be a group or cluster of which $x_{i}$ is a member. We define the loss function as $\sum _{i=1}^{n}\left\Vert x_{i}-\sum _{x_{j}\in B_{i}}K(x_{i},x_{j})x_{j}\right\Vert ^{2}$. Furthermore, we minimize the distortion by minimizing $\sum _{i=1}^{n}\sum _{x_{j}\in B_{i}}K(x_{i},x_{j})\left\Vert x_{i}-x_{j}\right\Vert ^{2}$. Finally, we add a regularization term to avoid overfitting. Combining these terms, we can write the minimization problem as follows.
$\min _{\beta ,B}\sum _{i=1}^{n}\left\Vert x_{i}-\sum _{x_{j}\in B_{i}}K(x_{i},x_{j})x_{j}\right\Vert ^{2}+\gamma _{1}\sum _{i=1}^{n}\sum _{x_{j}\in B_{i}}K(x_{i},x_{j})\left\Vert x_{i}-x_{j}\right\Vert ^{2}+\gamma _{2}\sum _{i}|B_{i}|$
where $\gamma _{1}$ and $\gamma _{2}$ are nonnegative trade-off parameters. One formulation of this is defined as follows. Let $D\in \{0,1\}^{n\times n}$ be a matrix such that $D_{ij}=1$ means that $x_{i}$ and $x_{j}$ are neighbors. Then, $B_{i}=\{x_{j}:D_{ij}=1\}$. Note that these groups must be learned as well. Zhuang et al. solve this problem by an alternating minimization method for $K$ and the groups $B_{i}$. For more information, see Zhuang et al.[16]
Libraries
Available MKL libraries include
• SPG-GMKL: A scalable C++ MKL SVM library that can handle a million kernels.[17]
• GMKL: Generalized Multiple Kernel Learning code in MATLAB, does $\ell _{1}$ and $\ell _{2}$ regularization for supervised learning.[18]
• (Another) GMKL: A different MATLAB MKL code that can also perform elastic net regularization[19]
• SMO-MKL: C++ source code for a Sequential Minimal Optimization MKL algorithm. Does $p$-norm regularization.[20]
• SimpleMKL: A MATLAB code based on the SimpleMKL algorithm for MKL SVM.[21]
• MKLPy: A scikit-compliant Python framework for MKL and kernel machines, with different algorithms, e.g. EasyMKL[22] and others.
References
1. Lin Chen, Lixin Duan, and Dong Xu, "Event Recognition in Videos by Learning From Heterogeneous Web Sources," in IEEE International Conference on Computer Vision and Pattern Recognition (CVPR), 2013, pp. 2666-2673
2. Serhat S. Bucak, Rong Jin, and Anil K. Jain, Multiple Kernel Learning for Visual Object Recognition: A Review. T-PAMI, 2013.
3. Yu et al. L2-norm multiple kernel learning and its application to biomedical data fusion. BMC Bioinformatics 2010, 11:309
4. Francis R. Bach, Gert R. G. Lanckriet, and Michael I. Jordan. 2004. Multiple kernel learning, conic duality, and the SMO algorithm. In Proceedings of the twenty-first international conference on Machine learning (ICML '04). ACM, New York, NY, USA
5. Mehmet Gönen, Ethem Alpaydın. Multiple Kernel Learning Algorithms Jour. Mach. Learn. Res. 12(Jul):2211−2268, 2011
6. Ben-Hur, A. and Noble W.S. Kernel methods for predicting protein-protein interactions. Bioinformatics. 2005 Jun;21 Suppl 1:i38-46.
7. Hiroaki Tanabe, Tu Bao Ho, Canh Hao Nguyen, and Saori Kawasaki. Simple but effective methods for combining kernels in computational biology. In Proceedings of IEEE International Conference on Research, Innovation and Vision for the Future, 2008.
8. Shibin Qiu and Terran Lane. A framework for multiple kernel support vector regression and its applications to siRNA efficacy prediction. IEEE/ACM Transactions on Computational Biology and Bioinformatics, 6(2):190–199, 2009
9. Gert R. G. Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004a
10. Gert R. G. Lanckriet, Nello Cristianini, Peter Bartlett, Laurent El Ghaoui, and Michael I. Jordan. Learning the kernel matrix with semidefinite programming. In Proceedings of the 19th International Conference on Machine Learning, 2002
11. Mark Girolami and Simon Rogers. Hierarchic Bayesian models for kernel learning. In Proceedings of the 22nd International Conference on Machine Learning, 2005
12. Theodoros Damoulas and Mark A. Girolami. Combining feature spaces for classification. Pattern Recognition, 42(11):2671–2683, 2009
13. Theodoros Damoulas and Mark A. Girolami. Probabilistic multi-class multi-kernel learning: On protein fold recognition and remote homology detection. Bioinformatics, 24(10):1264–1270, 2008
14. Kristin P. Bennett, Michinari Momma, and Mark J. Embrechts. MARK: A boosting algorithm for heterogeneous kernel models. In Proceedings of the 8th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, 2002
15. Wang, Shuhui et al. S3MKL: Scalable Semi-Supervised Multiple Kernel Learning for Real-World Image Applications. IEEE TRANSACTIONS ON MULTIMEDIA, VOL. 14, NO. 4, AUGUST 2012
16. J. Zhuang, J. Wang, S.C.H. Hoi & X. Lan. Unsupervised Multiple Kernel Learning. Jour. Mach. Learn. Res. 20:129–144, 2011
17. Ashesh Jain, S. V. N. Vishwanathan and Manik Varma. SPG-GMKL: Generalized multiple kernel learning with a million kernels. In Proceedings of the ACM SIGKDD Conference on Knowledge Discovery and Data Mining, Beijing, China, August 2012
18. M. Varma and B. R. Babu. More generality in efficient multiple kernel learning. In Proceedings of the International Conference on Machine Learning, Montreal, Canada, June 2009
19. Yang, H., Xu, Z., Ye, J., King, I., & Lyu, M. R. (2011). Efficient Sparse Generalized Multiple Kernel Learning. IEEE Transactions on Neural Networks, 22(3), 433-446
20. S. V. N. Vishwanathan, Z. Sun, N. Theera-Ampornpunt and M. Varma. Multiple kernel learning and the SMO algorithm. In Advances in Neural Information Processing Systems, Vancouver, B. C., Canada, December 2010.
21. Alain Rakotomamonjy, Francis Bach, Stephane Canu, Yves Grandvalet. SimpleMKL. Journal of Machine Learning Research, Microtome Publishing, 2008, 9, pp.2491-2521.
22. Fabio Aiolli, Michele Donini. EasyMKL: a scalable multiple kernel learning algorithm. Neurocomputing, 169, pp.215-224.
| Wikipedia |
Untouchable number
An untouchable number is a positive integer that cannot be expressed as the sum of all the proper divisors of any positive integer (including the untouchable number itself). That is, these numbers are not in the image of the aliquot sum function. Their study goes back at least to Abu Mansur al-Baghdadi (circa 1000 AD), who observed that both 2 and 5 are untouchable.[1]
Unsolved problem in mathematics:
Are there any odd untouchable numbers other than 5?
Examples
The number 4 is not untouchable, as it is equal to the sum of the proper divisors of 9: 1 + 3 = 4. The number 5 is untouchable, as it is not the sum of the proper divisors of any positive integer: every integer greater than 1 has 1 among its proper divisors, and 5 = 1 + 4 is the only way to write 5 as a sum of distinct positive integers that includes 1; but if 4 divides a number, 2 does also, so 1 + 4 cannot be the sum of all of any number's proper divisors (the list of divisors would have to contain both 4 and 2).
The first few untouchable numbers are
2, 5, 52, 88, 96, 120, 124, 146, 162, 188, 206, 210, 216, 238, 246, 248, 262, 268, 276, 288, 290, 292, 304, 306, 322, 324, 326, 336, 342, 372, 406, 408, 426, 430, 448, 472, 474, 498, ... (sequence A005114 in the OEIS).
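This list can be reproduced by sieving aliquot sums. The search bound in the sketch below suffices because a composite $n$ has a proper divisor of at least ${\sqrt {n}}$ in addition to 1, so its aliquot sum is at least $1+{\sqrt {n}}$, and every touched value up to 100 is therefore attained by some $n\leq 100^{2}$; the bound of 100 is an arbitrary choice for the demonstration:

```python
# Find the untouchable numbers up to LIMIT by sieving aliquot sums.
# Searching n <= LIMIT**2 suffices: a composite n has aliquot sum
# >= 1 + sqrt(n), primes have aliquot sum 1, and 1 has aliquot sum 0.
LIMIT = 100
SEARCH = LIMIT ** 2

aliquot = [0] * (SEARCH + 1)
for d in range(1, SEARCH // 2 + 1):
    for multiple in range(2 * d, SEARCH + 1, d):
        aliquot[multiple] += d        # d is a proper divisor of multiple

touched = set(aliquot[1:])
print([k for k in range(2, LIMIT + 1) if k not in touched])
# -> [2, 5, 52, 88, 96]
```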
Properties
The number 5 is believed to be the only odd untouchable number, but this has not been proven. It would follow from a slightly stronger version of the Goldbach conjecture, since the sum of the proper divisors of pq (with p, q distinct primes) is 1 + p + q. Thus, if a number n can be written as a sum of two distinct primes, then n + 1 is not an untouchable number. It is expected that every even number larger than 6 is a sum of two distinct primes, so probably no odd number larger than 7 is an untouchable number, and $1=\sigma (2)-2$, $3=\sigma (4)-4$, $7=\sigma (8)-8$, so only 5 can be an odd untouchable number.[2] Thus it appears that besides 2 and 5, all untouchable numbers are composite numbers (since except 2, all even numbers are composite). No perfect number is untouchable, since, at the very least, it can be expressed as the sum of its own proper divisors. Similarly, none of the amicable numbers or sociable numbers are untouchable. Also, none of the Mersenne numbers are untouchable, since Mn = 2n − 1 is equal to the sum of the proper divisors of 2n.
No untouchable number is one more than a prime number, since if p is prime, then the sum of the proper divisors of p2 is p + 1. Also, no untouchable number is three more than a prime number, except 5, since if p is an odd prime then the sum of the proper divisors of 2p is p + 3.
Infinitude
There are infinitely many untouchable numbers, a fact proven by Paul Erdős.[3] According to Chen & Zhao, their natural density is at least 0.06.[4]
See also
• Aliquot sequence
• Nontotient
• Noncototient
• Weird number
References
1. Sesiano, J. (1991), "Two problems of number theory in Islamic times", Archive for History of Exact Sciences, 41 (3): 235–238, doi:10.1007/BF00348408, JSTOR 41133889, MR 1107382, S2CID 115235810
2. The stronger version is obtained by adding to the Goldbach conjecture the further requirement that the two primes be distinct—see Adams-Watters, Frank & Weisstein, Eric W. "Untouchable Number". MathWorld.
3. P. Erdős, "Über die Zahlen der Form $\sigma (n)-n$ und $n-\phi (n)$", Elemente der Mathematik 28 (1973), 83–86
4. Yong-Gao Chen and Qing-Qing Zhao, Nonaliquot numbers, Publ. Math. Debrecen 78:2 (2011), pp. 439-442.
• Richard K. Guy, Unsolved Problems in Number Theory (3rd ed), Springer Verlag, 2004 ISBN 0-387-20860-7; section B10.
External links
• OEIS sequence A070015 (Least m such that sum of aliquot parts of m equals n or 0 if no such number exists)
Unusual number
In number theory, an unusual number is a natural number n whose largest prime factor is strictly greater than ${\sqrt {n}}$.
A k-smooth number has all its prime factors less than or equal to k; an unusual number, therefore, is not ${\sqrt {n}}$-smooth.
Relation to prime numbers
All prime numbers are unusual. For any prime $p$, its multiples less than $p^{2}$ are unusual, that is $p,2p,\ldots ,(p-1)p$, which have density $1/p$ in the interval $(p,p^{2})$.
Examples
The first few unusual numbers are
2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, 29, 31, 33, 34, 35, 37, 38, 39, 41, 42, 43, 44, 46, 47, 51, 52, 53, 55, 57, 58, 59, 61, 62, 65, 66, 67, ... (sequence A064052 in the OEIS)
The first few non-prime (composite) unusual numbers are
6, 10, 14, 15, 20, 21, 22, 26, 28, 33, 34, 35, 38, 39, 42, 44, 46, 51, 52, 55, 57, 58, 62, 65, 66, 68, 69, 74, 76, 77, 78, 82, 85, 86, 87, 88, 91, 92, 93, 94, 95, 99, 102, ... (sequence A063763 in the OEIS)
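Membership can be tested directly from the definition. A minimal Python sketch (function names are illustrative):

```python
def largest_prime_factor(n):
    """Largest prime factor of n >= 2, by trial division."""
    largest = 1
    d = 2
    while d * d <= n:
        while n % d == 0:
            largest = d
            n //= d
        d += 1
    return n if n > 1 else largest          # a leftover n > 1 is prime

def is_unusual(n):
    """True iff the largest prime factor of n strictly exceeds sqrt(n)."""
    return n >= 2 and largest_prime_factor(n) ** 2 > n

print([n for n in range(2, 30) if is_unusual(n)])
# [2, 3, 5, 6, 7, 10, 11, 13, 14, 15, 17, 19, 20, 21, 22, 23, 26, 28, 29]
```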
Distribution
If we denote the number of unusual numbers less than or equal to n by u(n) then u(n) behaves as follows:
n u(n) u(n) / n
10 6 0.6
100 67 0.67
1000 715 0.72
10000 7319 0.73
100000 73322 0.73
1000000 731660 0.73
10000000 7280266 0.73
100000000 72467077 0.72
1000000000 721578596 0.72
Richard Schroeppel stated in 1972 that the asymptotic probability that a randomly chosen number is unusual is ln(2). In other words:
$\lim _{n\rightarrow \infty }{\frac {u(n)}{n}}=\ln(2)=0.693147\dots \,.$
External links
• Weisstein, Eric W. "Rough Number". MathWorld.
Proof mining
In proof theory, a branch of mathematical logic, proof mining (or proof unwinding) is a research program that studies or analyzes formalized proofs, especially in analysis, to obtain explicit bounds, ranges or rates of convergence from proofs that, when expressed in natural language, appear to be nonconstructive.[1] This research has led to improved results in analysis obtained from the analysis of classical proofs.
References
1. Ulrich Kohlenbach (2008). Applied Proof Theory: Proof Interpretations and Their Use in Mathematics. Springer Verlag, Berlin. pp. 1–536.
Further reading
• Ulrich Kohlenbach and Paulo Oliva, "Proof Mining: A systematic way of analysing proofs in mathematics", Proc. Steklov Inst. Math, 242:136–164, 2003
• Paulo Oliva, "Proof Mining in Subsystems of Analysis", BRICS PhD thesis citeseer
Up-and-Down Designs
Up-and-down designs (UDDs) are a family of statistical experiment designs used in dose-finding experiments in science, engineering, and medical research. Dose-finding experiments have binary responses: each individual outcome can be described as one of two possible values, such as success vs. failure or toxic vs. non-toxic. Mathematically the binary responses are coded as 1 and 0. The goal of dose-finding experiments is to estimate the strength of treatment (i.e., the "dose") that would trigger the "1" response a pre-specified proportion of the time. This dose can be envisioned as a percentile of the distribution of response thresholds. A typical example of dose-finding is an experiment to estimate the LD50 of some toxic chemical with respect to mice.
Dose-finding designs are sequential and response-adaptive: the dose at a given point in the experiment depends upon previous outcomes, rather than being fixed a priori. Dose-finding designs are generally more efficient for this task than fixed designs, but their properties are harder to analyze, and some require specialized design software. UDDs use a discrete set of doses rather than varying the dose continuously. They are relatively simple to implement, and are also among the best understood dose-finding designs. Despite this simplicity, UDDs generate random walks with intricate properties.[1] The original UDD aimed to find the median threshold by increasing the dose one level after a "0" response, and decreasing it one level after a "1" response; hence the name "up-and-down". Other UDDs break this symmetry in order to estimate percentiles other than the median, or are able to treat groups of subjects rather than one at a time.
UDDs were developed in the 1940s by several research groups independently.[2][3][4] The 1950s and 1960s saw rapid diversification with UDDs targeting percentiles other than the median, and expanding into numerous applied fields. The 1970s to early 1990s saw little UDD methods research, even as the design continued to be used extensively. A revival of UDD research since the 1990s has provided deeper understanding of UDDs and their properties,[5] and new and better estimation methods.[6][7]
UDDs are still used extensively in the two applications for which they were originally developed: psychophysics where they are used to estimate sensory thresholds and are often known as fixed forced-choice staircase procedures,[8] and explosive sensitivity testing, where the median-targeting UDD is often known as the Bruceton test. UDDs are also very popular in toxicity and anesthesiology research.[9] They are also considered a viable choice for Phase I clinical trials.[10]
Mathematical description
Definition
Let $n$ be the sample size of a UDD experiment, and assume for now that subjects are treated one at a time. Then the doses these subjects receive, denoted as random variables $X_{1},\ldots ,X_{n}$, are chosen from a discrete, finite set of $M$ increasing dose levels ${\mathcal {X}}=\left\{d_{1},\ldots ,d_{M}:\ d_{1}<\cdots <d_{M}\right\}.$ Furthermore, if $X_{i}=d_{m}$, then $X_{i+1}\in \{d_{m-1},d_{m},d_{m+1}\},$ according to simple constant rules based on recent responses. The next subject must be treated one level up, one level down, or at the same level as the current subject. The responses themselves are denoted $Y_{1},\ldots ,Y_{n}\in \left\{0,1\right\};$ hereafter the "1" responses are positive and "0" negative. The repeated application of the same rules (known as dose-transition rules) over a finite set of dose levels turns $X_{1},\ldots ,X_{n}$ into a random walk over ${\mathcal {X}}$. Different dose-transition rules produce different UDD "flavors", several of which are described below.
Despite the experiment using only a discrete set of dose levels, the dose-magnitude variable itself, $x$, is assumed to be continuous, and the probability of positive response is assumed to increase continuously with increasing $x$. The goal of dose-finding experiments is to estimate the dose $x$ (on a continuous scale) that would trigger positive responses at a pre-specified target rate $\Gamma =P\left\{Y=1\mid X=x\right\},\ \ \Gamma \in (0,1)$, often known as the "target dose". This problem can also be expressed as estimation of the quantile $F^{-1}(\Gamma )$ of a cumulative distribution function describing the dose-toxicity curve $F(x)$. The density function $f(x)$ associated with $F(x)$ is interpretable as the distribution of response thresholds of the population under study.
Transition probability matrix
Given that a subject receives dose $d_{m}$, denote the probability that the next subject receives dose $d_{m-1},d_{m}$, or $d_{m+1}$, as $p_{m,m-1},p_{mm}$ or $p_{m,m+1}$, respectively. These transition probabilities obey the constraints $p_{m,m-1}+p_{mm}+p_{m,m+1}=1$ and the boundary conditions $p_{1,0}=p_{M,M+1}=0$.
Each specific set of UDD rules enables the symbolic calculation of these probabilities, usually as a function of $F(x)$. Assume that the transition probabilities are fixed in time, depending only upon the current allocation and its outcome, i.e., upon $\left(X_{i},Y_{i}\right)$ and through them upon $F(x)$ (and possibly on a set of fixed parameters). The probabilities are then best represented via a tri-diagonal transition probability matrix (TPM) $\mathbf {P} $:
${\bf {{P}=\left({\begin{array}{cccccc}p_{11}&p_{12}&0&\cdots &\cdots &0\\p_{21}&p_{22}&p_{23}&0&\ddots &\vdots \\0&\ddots &\ddots &\ddots &\ddots &\vdots \\\vdots &\ddots &\ddots &\ddots &\ddots &0\\\vdots &\ddots &0&p_{M-1,M-2}&p_{M-1,M-1}&p_{M-1,M}\\0&\cdots &\cdots &0&p_{M,M-1}&p_{MM}\\\end{array}}\right).}}$
Balance point
Usually, UDD dose-transition rules bring the dose down (or at least bar it from escalating) after positive responses, and vice versa. Therefore, UDD random walks have a central tendency: dose assignments tend to meander back and forth around some dose $x^{*}$ that can be calculated from the transition rules, when those are expressed as a function of $F(x)$.[1] This dose has often been confused with the experiment's formal target $F^{-1}(\Gamma )$, and the two are often identical, but they do not have to be. The target is the dose that the experiment is tasked with estimating, while $x^{*}$, known as the "balance point", is the dose around which the UDD's random walk approximately revolves.[11]
Stationary distribution of dose allocations
Since UDD random walks are regular Markov chains, they generate a stationary distribution of dose allocations, $\pi $, once the effect of the manually-chosen starting dose wears off. This means that long-term visit frequencies to the various doses will approximate a steady state described by $\pi $. According to Markov chain theory the starting-dose effect wears off rather quickly, at a geometric rate.[12] Numerical studies suggest that it would typically take between $2M$ and $4M$ subjects for the effect to wear off nearly completely.[11] $\pi $ is also the asymptotic distribution of cumulative dose allocations.
UDDs' central tendencies ensure that long-term, the most frequently visited dose (i.e., the mode of $\pi $) will be one of the two doses closest to the balance point $x^{*}$.[1] If $x^{*}$ is outside the range of allowed doses, then the mode will be on the boundary dose closest to it. Under the original median-finding UDD, the mode will be at the closest dose to $x^{*}$ in any case. Away from the mode, asymptotic visit frequencies decrease sharply, at a faster-than-geometric rate. Even though a UDD experiment is still a random walk, long excursions away from the region of interest are very unlikely.
Common UDDs
Original ("simple" or "classical") UDD
The original "simple" or "classical" UDD moves the dose up one level upon a negative response, and vice versa. Therefore, the transition probabilities are
${\begin{array}{rl}p_{m,m+1}&=P\{Y_{i}=0|X_{i}=d_{m}\}=1-F(d_{m});\\p_{m,m-1}&=P\{Y_{i}=1|X_{i}=d_{m}\}=F(d_{m}).\end{array}}$
We use the original UDD as an example for calculating the balance point $x^{*}$. The design's 'up', 'down' functions are $p(x)=1-F(x),q(x)=F(x).$ We equate them to find $F^{*}$:
$1-F^{*}=F^{*}\ \longrightarrow \ F^{*}=0.5.$
The "classical" UDD is designed to find the median threshold. This is a case where $F^{*}=\Gamma .$
The "classical" UDD can be seen as a special case of each of the more versatile designs described below.
Durham and Flournoy's biased coin design
This UDD shifts the balance point by adding the option of treating the next subject at the same dose rather than moving only up or down. Whether to stay is determined by a random toss of a metaphoric "coin" with probability $b=P\{{\textrm {heads}}\}.$ This biased-coin design (BCD) has two "flavors", one for $F^{*}>0.5$ and one for $F^{*}<0.5$; the rules for $F^{*}<0.5$ are shown below:
$X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i}=0\ \ \&\ \ {\textrm {'heads'}};\\d_{m-1}&{\textrm {if}}\ \ Y_{i}=1;\\d_{m}&{\textrm {if}}\ \ Y_{i}=0\ \ \&\ \ {\textrm {'tails'}}.\end{cases}}$
The heads probability $b$ can take any value in $[0,1]$. The balance point is
${\begin{array}{rcl}b\left(1-F^{*}\right)&=&F^{*}\\F^{*}&=&{\frac {b}{1+b}}\in [0,0.5].\end{array}}$
The BCD balance point can be made identical to a target rate $F^{-1}(\Gamma )$ by setting the heads probability to $b=\Gamma /(1-\Gamma )$. For example, for $\Gamma =0.3$ set $b=3/7$. Setting $b=1$ makes this design identical to the classical UDD, and inverting the rules by imposing the coin toss upon positive rather than negative outcomes produces above-median balance points. Versions with two coins, one for each outcome, have also been published, but they do not seem to offer an advantage over the simpler single-coin BCD.
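As a quick sanity check of this balance-point algebra, a minimal simulation sketch (assumed toxicity curve and illustrative names, not from the cited sources) with $b=3/7$ should concentrate its visits around the dose where $F\approx 0.3$:

```python
import random

def bcd_trial(F, n=10000, b=3/7, start=0, seed=1):
    """Simulate the biased-coin design (F* < 0.5 flavor); return visit counts."""
    random.seed(seed)
    M, m = len(F), start
    visits = [0] * M
    for _ in range(n):
        visits[m] += 1
        if random.random() < F[m]:          # Y = 1: always step down
            m = max(m - 1, 0)
        elif random.random() < b:           # Y = 0 and 'heads': step up
            m = min(m + 1, M - 1)
        # Y = 0 and 'tails': stay at the same dose
    return visits

F = [0.05, 0.12, 0.22, 0.35, 0.55, 0.75]    # assumed dose-toxicity curve
print(bcd_trial(F))   # visits pile up where F(d) is near b/(1+b) = 0.3
```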
Group (cohort) UDDs
Some dose-finding experiments, such as phase I trials, require a waiting period of weeks before determining each individual outcome. It may then be preferable to be able to treat several subjects at once or in rapid succession. With group UDDs, the transition rules apply to cohorts of fixed size $s$ rather than to individuals. $X_{i}$ becomes the dose given to cohort $i$, and $Y_{i}$ is the number of positive responses in the $i$-th cohort, rather than a binary outcome. Given that the $i$-th cohort is treated at $X_{i}=d_{m}$ on the interior of ${\mathcal {X}}$, the $(i+1)$-th cohort is assigned to
$X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i}\leq l;\\d_{m-1}&{\textrm {if}}\ \ Y_{i}\geq u;\\d_{m}&{\textrm {if}}\ \ l<Y_{i}<u.\end{cases}}$
Here $l$ and $u$ are fixed integer thresholds with $0\leq l<u\leq s$. $Y_{i}$ follows a binomial distribution conditional on $X_{i}$, with parameters $s$ and $F(X_{i})$. The up and down probabilities are the binomial distribution's tails, and the stay probability its center (it is zero if $u=l+1$). A specific choice of parameters can be abbreviated as GUD$_{(s,l,u)}.$
Nominally, group UDDs generate $s$-order random walks, since the $s$ most recent observations are needed to determine the next allocation. However, with cohorts viewed as single mathematical entities, these designs generate a first-order random walk having a tri-diagonal TPM as above. Some relevant group UDD subfamilies:
• Symmetric designs with $l+u=s$ (e.g., GUD$_{(2,0,2)}$) target the median.
• The family GUD$_{(s,0,1)},$ encountered in toxicity studies, allows escalation only upon zero positive responses, and de-escalates upon any positive response. The escalation probability at $x$ is $\left(1-F(x)\right)^{s},$ and since this design does not allow remaining at the same dose, at the balance point the escalation probability will be exactly $1/2$. Therefore,
$F^{*}=1-\left({\frac {1}{2}}\right)^{1/s}.$
With $s=2,3,4$, the balance points are $F^{*}\approx 0.293,0.206$ and $0.159$, respectively. The mirror-image family GUD$_{(s,s-1,s)}$ has its balance points at one minus these probabilities.
For general group UDDs, the balance point can be calculated only numerically, by finding the dose $x^{*}$ with toxicity rate $F^{*}$ such that
$\sum _{r=u}^{s}\left({\begin{array}{c}s\\r\\\end{array}}\right)\left(F^{*}\right)^{r}(1-F^{*})^{s-r}=\sum _{t=0}^{l}\left({\begin{array}{c}s\\t\\\end{array}}\right)\left(F^{*}\right)^{t}(1-F^{*})^{s-t}.$
Any numerical root-finding algorithm, e.g., Newton–Raphson, can be used to solve for $F^{*}$.[13]
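As an illustration, the balance-point equation can be solved in a few lines of Python; this sketch uses plain bisection rather than Newton–Raphson, relying on the fact that the difference between the escalation and de-escalation probabilities is decreasing in $F$ (all names are illustrative):

```python
from math import comb

def balance_point(s, l, u, tol=1e-12):
    """F* of GUD_(s,l,u): the toxicity rate where P(escalate) = P(de-escalate)."""
    def gap(F):   # P(Y <= l) - P(Y >= u) for Y ~ Binomial(s, F); decreasing in F
        up   = sum(comb(s, t) * F**t * (1 - F)**(s - t) for t in range(0, l + 1))
        down = sum(comb(s, r) * F**r * (1 - F)**(s - r) for r in range(u, s + 1))
        return up - down
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if gap(mid) > 0:
            lo = mid      # escalation still more likely: F* lies higher
        else:
            hi = mid
    return (lo + hi) / 2

print(round(balance_point(2, 0, 2), 4))   # symmetric design: 0.5
print(round(balance_point(3, 0, 1), 4))   # 1 - (1/2)**(1/3) = 0.2063
```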
$k$-in-a-row (or "transformed" or "geometric") UDD
This is the most commonly used non-median UDD. It was introduced by Wetherill in 1963,[14] and popularized shortly thereafter by him and colleagues in psychophysics,[15] where it remains one of the standard methods to find sensory thresholds.[8] Wetherill called it the "transformed" UDD; Gezmu, who was the first to analyze its random-walk properties, called it the "geometric" UDD in the 1990s;[16] and in the 2000s the more straightforward name "$k$-in-a-row" UDD was adopted.[11] The design's rules are deceptively simple:
$X_{i+1}={\begin{cases}d_{m+1}&{\textrm {if}}\ \ Y_{i-k+1}=\cdots =Y_{i}=0,\ \ {\textrm {all}}\ {\textrm {observed}}\ {\textrm {at}}\ \ d_{m};\\d_{m-1}&{\textrm {if}}\ \ Y_{i}=1;\\d_{m}&{\textrm {otherwise}},\end{cases}}$
Every dose escalation requires $k$ non-toxicities observed on consecutive data points, all at the current dose, while de-escalation only requires a single toxicity. It closely resembles GUD$_{(s,0,1)}$ described above, and indeed shares the same balance point. The difference is that $k$-in-a-row can bail out of a dose level upon the first toxicity, whereas its group UDD sibling might treat the entire cohort at once, and therefore might see more than one toxicity before descending.
The method used in sensory studies is actually the mirror-image of the one defined above, with $k$ successive responses required for a de-escalation and only one non-response for escalation, yielding $F^{*}\approx 0.707,0.794,0.841,\ldots $ for $k=2,3,4,\ldots $.[17]
$k$-in-a-row generates a $k$-th order random walk, because knowledge of the last $k$ responses might be needed. It can be represented as a first-order chain with $Mk$ states, or as a Markov chain with $M$ levels, each having $k$ internal states labeled $0$ to $k-1$. The internal state serves as a counter of the number of immediately recent consecutive non-toxicities observed at the current dose. This description is closer to the physical dose-allocation process, because subjects at different internal states of level $m$ are all assigned the same dose $d_{m}$. Either way, the TPM is $Mk\times Mk$ (or more precisely, $\left[(M-1)k+1\right]\times \left[(M-1)k+1\right]$, because the internal counter is meaningless at the highest dose), and it is not tridiagonal.
Here is the expanded $k$-in-a-row TPM with $k=2$ and $M=5$, using the abbreviation $F_{m}\equiv F\left(d_{m}\right).$ Each level's internal states are adjacent to each other.
${\begin{bmatrix}F_{1}&1-F_{1}&0&0&0&0&0&0&0\\F_{1}&0&1-F_{1}&0&0&0&0&0&0\\F_{2}&0&0&1-F_{2}&0&0&0&0&0\\F_{2}&0&0&0&1-F_{2}&0&0&0&0\\0&0&F_{3}&0&0&1-F_{3}&0&0&0\\0&0&F_{3}&0&0&0&1-F_{3}&0&0\\0&0&0&0&F_{4}&0&0&1-F_{4}&0\\0&0&0&0&F_{4}&0&0&0&1-F_{4}\\0&0&0&0&0&0&F_{5}&0&1-F_{5}\\\end{bmatrix}}.$
$k$-in-a-row is often considered for clinical trials targeting a low-toxicity dose. In this case, the balance point and the target are not identical; rather, $k$ is chosen to aim close to the target rate, e.g., $k=2$ for studies targeting the 30th percentile, and $k=3$ for studies targeting the 20th percentile.
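A minimal simulation sketch of these rules (assumed toxicity curve; all names illustrative) shows the walk settling near the $k=2$ balance point $F^{*}\approx 0.293$:

```python
import random

def k_in_a_row(F, k=2, n=10000, seed=7):
    """Simulate the k-in-a-row UDD; return visit counts per dose level."""
    random.seed(seed)
    M, m, streak = len(F), 0, 0
    visits = [0] * M
    for _ in range(n):
        visits[m] += 1
        if random.random() < F[m]:       # toxicity: step down, reset the counter
            m, streak = max(m - 1, 0), 0
        else:
            streak += 1                  # one more consecutive non-toxicity here
            if streak == k:              # k in a row at this dose: step up
                m, streak = min(m + 1, M - 1), 0
    return visits

F = [0.05, 0.12, 0.22, 0.35, 0.55, 0.75]   # assumed dose-toxicity curve
print(k_in_a_row(F, k=2))   # settles near F(d) = 1 - (1/2)**(1/2)
```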
Estimating the target dose
Unlike other design approaches, UDDs do not have a specific estimation method "bundled in" with the design as a default choice. Historically, the more common choice has been some weighted average of the doses administered, usually excluding the first few doses to mitigate the starting-point bias. This approach antedates deeper understanding of UDDs' Markov properties, but its success in numerical evaluations relies upon the eventual sampling from $\pi $, since the latter is centered roughly around $x^{*}.$[5]
The single most popular among these averaging estimators was introduced by Wetherill et al. in 1966, and only includes reversal points (points where the outcome switches from 0 to 1 or vice versa) in the average.[18] In recent years, the limitations of averaging estimators have come to light, in particular the many sources of bias that are very difficult to mitigate. Reversal estimators suffer both from multiple biases (although there is some inadvertent cancelling-out of biases) and from increased variance due to using a subsample of doses. However, the knowledge about averaging-estimator limitations has yet to disseminate outside the methodological literature and affect actual practice.[5]
By contrast, regression estimators attempt to approximate the curve $y=F(x)$ describing the dose-response relationship, in particular around the target percentile. The raw data for the regression are the doses $d_{m}$ on the horizontal axis, and the observed toxicity frequencies,
${\hat {F}}_{m}={\frac {\sum _{i=1}^{n}Y_{i}I\left[X_{i}=d_{m}\right]}{\sum _{i=1}^{n}I\left[X_{i}=d_{m}\right]}},\ m=1,\ldots ,M,$
on the vertical axis. The target estimate is the abscissa of the point where the fitted curve crosses $y=\Gamma .$
Probit regression has been used for many decades to estimate UDD targets, although far less commonly than the reversal-averaging estimator. In 2002, Stylianou and Flournoy introduced an interpolated version of isotonic regression (IR) to estimate UDD targets and other dose-response data.[6] More recently, a modification called "centered isotonic regression" (CIR) was developed by Oron and Flournoy, promising substantially better estimation performance than ordinary isotonic regression in most cases, and also offering the first viable interval estimator for isotonic regression in general.[7] Isotonic regression estimators appear to be the most compatible with UDDs, because both approaches are nonparametric and relatively robust.[5] The publicly available R package "cir" implements both CIR and IR for dose-finding and other applications.[19]
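As a rough sketch of the regression approach, the following Python fragment fits plain interpolated isotonic regression with scikit-learn (not the cited CIR method or the "cir" package, and all data are made up) and reads off where the fitted curve crosses the target rate:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

# Made-up per-dose summary of a finished UDD experiment.
doses = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
n_m   = np.array([4, 10, 18, 9, 3])          # subjects treated at each dose
t_m   = np.array([0,  2,  6, 5, 2])          # toxicities observed at each dose
F_hat = t_m / n_m                            # observed toxicity frequencies

iso = IsotonicRegression(increasing=True)
F_fit = iso.fit_transform(doses, F_hat, sample_weight=n_m)

target = 0.5                                 # e.g., a median-finding UDD
estimate = np.interp(target, F_fit, doses)   # assumes F_fit increases past 0.5
print(F_fit, round(float(estimate), 2))
```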
References
1. Durham, SD; Flournoy, N. "Up-and-down designs. I. Stationary treatment distributions.". In Flournoy, N; Rosenberger, WF (eds.). IMS Lecture Notes Monograph Series. Vol. 25: Adaptive Designs. pp. 139–157.
2. Dixon, WJ; Mood, AM (1948). "A method for obtaining and analyzing sensitivity data". Journal of the American Statistical Association. 43 (241): 109–126. doi:10.1080/01621459.1948.10483254.
3. von Békésy, G (1947). "A new audiometer". Acta Oto-Laryngologica. 35 (5–6): 411–422. doi:10.3109/00016484709123756.
4. Anderson, TW; McCarthy, PJ; Tukey, JW (1946). 'Staircase' method of sensitivity testing (Technical report). Naval Ordnance Report. 65-46.
5. Flournoy, N; Oron, AP. "Up-and-Down Designs for Dose-Finding". In Dean, A (ed.). Handbook of Design and Analysis of Experiments. CRC Press. pp. 858–894.
6. Stylianou, MP; Flournoy, N (2002). "Dose finding using the biased coin up-and-down design and isotonic regression". Biometrics. 58 (1): 171–177. doi:10.1111/j.0006-341x.2002.00171.x. PMID 11890313. S2CID 8743090.
7. Oron, AP; Flournoy, N (2017). "Centered Isotonic Regression: Point and Interval Estimation for Dose-Response Studies". Statistics in Biopharmaceutical Research. 9 (3): 258–267. arXiv:1701.05964. doi:10.1080/19466315.2017.1286256. S2CID 88521189.
8. Leek, MR (2001). "Adaptive procedures in psychophysical research". Perception and Psychophysics. 63 (8): 1279–1292. doi:10.3758/bf03194543. PMID 11800457.
9. Pace, NL; Stylianou, MP (2007). "Advances in and Limitations of Up-and-down Methodology: A Precis of Clinical Use, Study Design, and Dose Estimation in Anesthesia Research". Anesthesiology. 107 (1): 144–152. doi:10.1097/01.anes.0000267514.42592.2a. PMID 17585226.
10. Oron, AP; Hoff, PD (2013). "Small-Sample Behavior of Novel Phase I Cancer Trial Designs". Clinical Trials. 10 (1): 63–80. arXiv:1202.4962. doi:10.1177/1740774512469311. PMID 23345304. S2CID 5667047.
11. Oron, AP; Hoff, PD (2009). "The k-in-a-row up-and-down design, revisited". Statistics in Medicine. 28 (13): 1805–1820. doi:10.1002/sim.3590. PMID 19378270. S2CID 25904900.
12. Diaconis, P; Stroock, D (1991). "Geometric bounds for eigenvalues of Markov chains". The Annals of Applied Probability. 1: 36–61. doi:10.1214/aoap/1177005980.
13. Gezmu, M; Flournoy, N (2006). "Group up-and-down designs for dose-finding". Journal of Statistical Planning and Inference. 136 (6): 1749–1764. doi:10.1016/j.jspi.2005.08.002.
14. Wetherill, GB; Levitt, H (1963). "Sequential estimation of quantal response curves". Journal of the Royal Statistical Society, Series B. 25: 1–48. doi:10.1111/j.2517-6161.1963.tb00481.x.
15. Wetherill, GB (1965). "Sequential estimation of points on a Psychometric Function". British Journal of Mathematical and Statistical Psychology. 18: 1–10. doi:10.1111/j.2044-8317.1965.tb00689.x. PMID 14324842.
16. Gezmu, Misrak (1996). The Geometric Up-and-Down Design for Allocating Dosage Levels (PhD). American University.
17. Garcia-Perez, MA (1998). "Forced-choice staircases with fixed step sizes: asymptotic and small-sample properties". Vision Research. 38 (12): 1861–81. doi:10.1016/s0042-6989(97)00340-4. PMID 9797963.
18. Wetherill, GB; Chen, H; Vasudeva, RB (1966). "Sequential estimation of quantal response curves: a new method of estimation". Biometrika. 53 (3–4): 439–454. doi:10.1093/biomet/53.3-4.439.
19. Oron, Assaf. "Package 'cir'". CRAN. R Foundation for Statistical Computing. Retrieved 26 December 2020.
Knuth's up-arrow notation
In mathematics, Knuth's up-arrow notation is a method of notation for very large integers, introduced by Donald Knuth in 1976.[1]
In his 1947 paper,[2] R. L. Goodstein introduced the specific sequence of operations that are now called hyperoperations. Goodstein also suggested the Greek names tetration, pentation, etc., for the extended operations beyond exponentiation. The sequence starts with a unary operation (the successor function with n = 0), and continues with the binary operations of addition (n = 1), multiplication (n = 2), exponentiation (n = 3), tetration (n = 4), pentation (n = 5), etc. Various notations have been used to represent hyperoperations. One such notation is $H_{n}(a,b)$. Knuth's up-arrow notation $\uparrow $ is another. For example:
• the single arrow $\uparrow $ represents exponentiation (iterated multiplication)
$2\uparrow 4=H_{3}(2,4)=2\times (2\times (2\times 2))=2^{4}=16$
• the double arrow $\uparrow \uparrow $ represents tetration (iterated exponentiation)
$2\uparrow \uparrow 4=H_{4}(2,4)=2\uparrow (2\uparrow (2\uparrow 2))=2^{2^{2^{2}}}=2^{16}=65,536$
• the triple arrow $\uparrow \uparrow \uparrow $ represents pentation (iterated tetration)
${\begin{aligned}2\uparrow \uparrow \uparrow 4=H_{5}(2,4)&=2\uparrow \uparrow (2\uparrow \uparrow (2\uparrow \uparrow 2))\\&=2\uparrow \uparrow (2\uparrow \uparrow (2\uparrow 2))\\&=2\uparrow \uparrow (2\uparrow \uparrow 4)\\&=\underbrace {2\uparrow (2\uparrow (\dots \uparrow 2))} _{2\uparrow \uparrow 4{\mbox{ copies of }}2}=\underbrace {2^{2^{\cdot ^{\cdot ^{\cdot ^{2}}}}}} _{65{,}536{\mbox{ 2s}}}\end{aligned}}$
The general definition of the up-arrow notation is as follows (for $a\geq 0,n\geq 1,b\geq 0$):
$a\uparrow ^{n}b=H_{n+2}(a,b)=a[n+2]b.$
Here, $\uparrow ^{n}$ stands for n arrows, so for example
$2\uparrow \uparrow \uparrow \uparrow 3=2\uparrow ^{4}3.$
The square brackets are another notation for hyperoperations.
Introduction
The hyperoperations naturally extend the arithmetical operations of addition and multiplication as follows. Addition by a natural number is defined as iterated incrementation:
${\begin{matrix}H_{1}(a,b)=a+b=&a+\underbrace {1+1+\dots +1} \\&b{\mbox{ copies of }}1\end{matrix}}$
Multiplication by a natural number is defined as iterated addition:
${\begin{matrix}H_{2}(a,b)=a\times b=&\underbrace {a+a+\dots +a} \\&b{\mbox{ copies of }}a\end{matrix}}$
For example,
${\begin{matrix}4\times 3&=&\underbrace {4+4+4} &=&12\\&&3{\mbox{ copies of }}4\end{matrix}}$
Exponentiation for a natural power $b$ is defined as iterated multiplication, which Knuth denoted by a single up-arrow:
${\begin{matrix}a\uparrow b=H_{3}(a,b)=a^{b}=&\underbrace {a\times a\times \dots \times a} \\&b{\mbox{ copies of }}a\end{matrix}}$
For example,
${\begin{matrix}4\uparrow 3=4^{3}=&\underbrace {4\times 4\times 4} &=&64\\&3{\mbox{ copies of }}4\end{matrix}}$
Tetration is defined as iterated exponentiation, which Knuth denoted by a “double arrow”:
${\begin{matrix}a\uparrow \uparrow b=H_{4}(a,b)=&\underbrace {a^{a^{{}^{.\,^{.\,^{.\,^{a}}}}}}} &=&\underbrace {a\uparrow (a\uparrow (\dots \uparrow a))} \\&b{\mbox{ copies of }}a&&b{\mbox{ copies of }}a\end{matrix}}$
For example,
${\begin{matrix}4\uparrow \uparrow 3=&\underbrace {4^{4^{4}}} &=&\underbrace {4\uparrow (4\uparrow 4)} &=&4^{256}&\approx &1.34078079\times 10^{154}&\\&3{\mbox{ copies of }}4&&3{\mbox{ copies of }}4\end{matrix}}$
Expressions are evaluated from right to left, as the operators are defined to be right-associative.
According to this definition,
$3\uparrow \uparrow 2=3^{3}=27$
$3\uparrow \uparrow 3=3^{3^{3}}=3^{27}=7,625,597,484,987$
$3\uparrow \uparrow 4=3^{3^{3^{3}}}=3^{3^{27}}=3^{7625597484987}\approx 1.2580143\times 10^{3638334640024}$
$3\uparrow \uparrow 5=3^{3^{3^{3^{3}}}}=3^{3^{3^{27}}}=3^{3^{7625597484987}}\approx 3^{1.2580143\times 10^{3638334640024}}$
etc.
This already leads to some fairly large numbers, but the hyperoperator sequence does not stop here.
Pentation, defined as iterated tetration, is represented by the “triple arrow”:
${\begin{matrix}a\uparrow \uparrow \uparrow b=H_{5}(a,b)=&\underbrace {a_{}\uparrow \uparrow (a\uparrow \uparrow (\dots \uparrow \uparrow a))} \\&b{\mbox{ copies of }}a\end{matrix}}$
Hexation, defined as iterated pentation, is represented by the “quadruple arrow”:
${\begin{matrix}a\uparrow \uparrow \uparrow \uparrow b=H_{6}(a,b)=&\underbrace {a_{}\uparrow \uparrow \uparrow (a\uparrow \uparrow \uparrow (\dots \uparrow \uparrow \uparrow a))} \\&b{\mbox{ copies of }}a\end{matrix}}$
and so on. The general rule is that an $n$-arrow operator expands into a right-associative series of ($n-1$)-arrow operators. Symbolically,
${\begin{matrix}a\ \underbrace {\uparrow _{}\uparrow \!\!\dots \!\!\uparrow } _{n}\ b=\underbrace {a\ \underbrace {\uparrow \!\!\dots \!\!\uparrow } _{n-1}\ (a\ \underbrace {\uparrow _{}\!\!\dots \!\!\uparrow } _{n-1}\ (\dots \ \underbrace {\uparrow _{}\!\!\dots \!\!\uparrow } _{n-1}\ a))} _{b{\text{ copies of }}a}\end{matrix}}$
Examples:
$3\uparrow \uparrow \uparrow 2=3\uparrow \uparrow 3=3^{3^{3}}=3^{27}=7,625,597,484,987$
${\begin{matrix}3\uparrow \uparrow \uparrow 3=3\uparrow \uparrow (3\uparrow \uparrow 3)=3\uparrow \uparrow (3\uparrow 3\uparrow 3)=&\underbrace {3_{}\uparrow 3\uparrow \dots \uparrow 3} \\&3\uparrow 3\uparrow 3{\mbox{ copies of }}3\end{matrix}}{\begin{matrix}=&\underbrace {3_{}\uparrow 3\uparrow \dots \uparrow 3} \\&{\mbox{7,625,597,484,987 copies of 3}}\end{matrix}}{\begin{matrix}=&\underbrace {3^{3^{3^{3^{\cdot ^{\cdot ^{\cdot ^{\cdot ^{3}}}}}}}}} \\&{\mbox{7,625,597,484,987 copies of 3}}\end{matrix}}$
Notation
In expressions such as $a^{b}$, the notation for exponentiation is usually to write the exponent $b$ as a superscript to the base number $a$. But many environments — such as programming languages and plain-text e-mail — do not support superscript typesetting. People have adopted the linear notation $a\uparrow b$ for such environments; the up-arrow suggests 'raising to the power of'. If the character set does not contain an up arrow, the caret (^) is used instead.
The superscript notation $a^{b}$ doesn't lend itself well to generalization, which explains why Knuth chose to work from the inline notation $a\uparrow b$ instead.
$a\uparrow ^{n}b$ is a shorter alternative notation for $n$ up-arrows. Thus $a\uparrow ^{4}b=a\uparrow \uparrow \uparrow \uparrow b$.
Writing out up-arrow notation in terms of powers
Attempting to write $a\uparrow \uparrow b$ using the familiar superscript notation gives a power tower.
For example: $a\uparrow \uparrow 4=a\uparrow (a\uparrow (a\uparrow a))=a^{a^{a^{a}}}$
If b is a variable (or is too large), the power tower might be written using dots and a note indicating the height of the tower.
$a\uparrow \uparrow b=\underbrace {a^{a^{.^{.^{.{a}}}}}} _{b}$
Continuing with this notation, $a\uparrow \uparrow \uparrow b$ could be written with a stack of such power towers, each describing the size of the one above it.
$a\uparrow \uparrow \uparrow 4=a\uparrow \uparrow (a\uparrow \uparrow (a\uparrow \uparrow a))=\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{a}}}$
Again, if b is a variable or is too large, the stack might be written using dots and a note indicating its height.
$a\uparrow \uparrow \uparrow b=\left.\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}b$
Furthermore, $a\uparrow \uparrow \uparrow \uparrow b$ might be written using several columns of such stacks of power towers, each column describing the number of power towers in the stack to its left:
$a\uparrow \uparrow \uparrow \uparrow 4=a\uparrow \uparrow \uparrow (a\uparrow \uparrow \uparrow (a\uparrow \uparrow \uparrow a))=\left.\left.\left.\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}a$
And more generally:
$a\uparrow \uparrow \uparrow \uparrow b=\underbrace {\left.\left.\left.\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {a^{a^{.^{.^{.{a}}}}}} _{\underbrace {\vdots } _{a}}}\right\}\cdots \right\}a} _{b}$
This might be carried out indefinitely to represent $a\uparrow ^{n}b$ as iterated exponentiation of iterated exponentiation for any a, n and b (although it clearly becomes rather cumbersome).
Using tetration
The Rudy Rucker notation $^{b}a$ for tetration allows us to make these diagrams slightly simpler while still employing a geometric representation (we could call these tetration towers).
$a\uparrow \uparrow b={}^{b}a$
$a\uparrow \uparrow \uparrow b=\underbrace {^{^{^{^{^{a}.}.}.}a}a} _{b}$
$a\uparrow \uparrow \uparrow \uparrow b=\left.\underbrace {^{^{^{^{^{a}.}.}.}a}a} _{\underbrace {^{^{^{^{^{a}.}.}.}a}a} _{\underbrace {\vdots } _{a}}}\right\}b$
Finally, as an example, the fourth Ackermann number $4\uparrow ^{4}4$ could be represented as:
$\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{4}}}=\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{\underbrace {^{^{^{^{^{4}.}.}.}4}4} _{^{^{^{4}4}4}4}}$
Generalizations
Some numbers are so large that multiple arrows of Knuth's up-arrow notation become too cumbersome; then an n-arrow operator $\uparrow ^{n}$ is useful (and also for descriptions with a variable number of arrows), or equivalently, hyper operators.
Some numbers are so large that even that notation is not sufficient. The Conway chained arrow notation can then be used: a chain of three elements is equivalent with the other notations, but a chain of four or more is even more powerful.
${\begin{matrix}a\uparrow ^{n}b&=&a[n+2]b&=&a\to b\to n\\{\mbox{(Knuth)}}&&{\mbox{(hyperoperation)}}&&{\mbox{(Conway)}}\end{matrix}}$
For example, $6\uparrow \uparrow 4=\underbrace {6^{6^{6^{6}}}} _{4}=6^{6^{46{,}656}}$, a power tower of four 6s.
Similarly, $10\uparrow (3\times 10\uparrow (3\times 10\uparrow 15)+3)=10^{3\times 10^{3\times 10^{15}}+3}$, a 1 followed by $3\times 10^{3\times 10^{15}}+3$ zeros.
Even faster-growing functions can be categorized using an ordinal analysis called the fast-growing hierarchy. The fast-growing hierarchy uses successive function iteration and diagonalization to systematically create faster-growing functions from some base function $f(x)$. For the standard fast-growing hierarchy using $f_{0}(x)=x+1$, $f_{3}(x)$ already exhibits exponential growth, and $f_{4}(x)$ is comparable to tetrational growth and is upper-bounded by a function involving the first four hyperoperators. Then, $f_{\omega }(x)$ is comparable to the Ackermann function, $f_{\omega +1}(x)$ is already beyond the reach of indexed arrows but can be used to approximate Graham's number, and $f_{\omega ^{2}}(x)$ is comparable to arbitrarily-long Conway chained arrow notation.
These functions are all computable. Even faster-growing computable functions, such as the Goodstein sequence and the TREE sequence, require the use of large ordinals and may occur in certain combinatorial and proof-theoretic contexts. There exist functions that grow uncomputably fast, such as the Busy Beaver function, whose very nature is completely out of reach of any up-arrow, or even any ordinal-based analysis.
Definition
Without reference to hyperoperation the up-arrow operators can be formally defined by
$a\uparrow ^{n}b={\begin{cases}a^{b},&{\text{if }}n=1;\\1,&{\text{if }}n>1{\text{ and }}b=0;\\a\uparrow ^{n-1}(a\uparrow ^{n}(b-1)),&{\text{otherwise }}\end{cases}}$
for all integers $a,b,n$ with $a\geq 0,n\geq 1,b\geq 0$[nb 1].
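This recursion translates directly into a short program. A minimal Python sketch, exact only for arguments small enough to be computable at all:

```python
def up(a, b, n=1):
    """Knuth's a (up)^n b for integers a >= 0, n >= 1, b >= 0."""
    if n == 1:
        return a ** b                        # base case: exponentiation
    if b == 0:
        return 1
    return up(a, up(a, b - 1, n), n - 1)     # a ^n b = a ^(n-1) (a ^n (b-1))

print(up(2, 4))        # 2↑4 = 16
print(up(2, 4, 2))     # 2↑↑4 = 65536
print(up(3, 3, 2))     # 3↑↑3 = 7625597484987
print(up(3, 2, 3))     # 3↑↑↑2 = 3↑↑3 = 7625597484987
```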
This definition uses exponentiation $(a\uparrow ^{1}b=a\uparrow b=a^{b})$ as the base case, and tetration $(a\uparrow ^{2}b=a\uparrow \uparrow b)$ as repeated exponentiation. This is equivalent to the hyperoperation sequence except it omits the three more basic operations of succession, addition and multiplication.
One can alternatively choose multiplication $(a\uparrow ^{0}b=a\times b)$ as the base case and iterate from there. Then exponentiation becomes repeated multiplication. The formal definition would be
$a\uparrow ^{n}b={\begin{cases}a\times b,&{\text{if }}n=0;\\1,&{\text{if }}n>0{\text{ and }}b=0;\\a\uparrow ^{n-1}(a\uparrow ^{n}(b-1)),&{\text{otherwise }}\end{cases}}$
for all integers $a,b,n$ with $a\geq 0,n\geq 0,b\geq 0$.
Note, however, that Knuth did not define the "nil-arrow" ($\uparrow ^{0}$). One could extend the notation to negative indices (n ≥ -2) in such a way as to agree with the entire hyperoperation sequence, except for the lag in the indexing:
$H_{n}(a,b)=a[n]b=a\uparrow ^{n-2}b{\text{ for }}n\geq 0.$
The up-arrow operation is a right-associative operation, that is, $a\uparrow b\uparrow c$ is understood to be $a\uparrow (b\uparrow c)$, instead of $(a\uparrow b)\uparrow c$. If ambiguity is not an issue, parentheses are sometimes dropped.
Tables of values
Computing 0↑n b
Computing $0\uparrow ^{n}b=H_{n+2}(0,b)=0[n+2]b$ results in
0, when n = 0 [nb 2]
1, when n = 1 and b = 0 [nb 1][nb 3]
0, when n = 1 and b > 0 [nb 1][nb 3]
1, when n > 1 and b is even (including 0)
0, when n > 1 and b is odd
Computing 2↑n b
Computing $2\uparrow ^{n}b$ can be restated in terms of an infinite table. We place the numbers $2^{b}$ in the top row, and fill the left column with values 2. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Values of $2\uparrow ^{n}b$ = $H_{n+2}(2,b)$ = $2[n+2]b$ = 2 → b → n
Each row below lists $2\uparrow ^{n}b$ for b = 1, 2, 3, 4, 5, 6, followed by the general formula:
n = 1: 2, 4, 8, 16, 32, 64 (formula: $2^{b}$)
n = 2: 2, 4, 16, 65,536, $2^{65{,}536}\approx 2.0\times 10^{19{,}728}$, $2^{2^{65{,}536}}\approx 10^{6.0\times 10^{19{,}727}}$ (formula: $2\uparrow \uparrow b$)
n = 3: 2, 4, 65,536, ${\begin{matrix}\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$, ${\begin{matrix}\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$, ${\begin{matrix}\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ (formula: $2\uparrow \uparrow \uparrow b$)
n = 4: 2, 4, ${\begin{matrix}\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {^{^{^{^{^{2}.}.}.}2}2} \\\underbrace {2_{}^{2^{{}^{.\,^{.\,^{.\,^{2}}}}}}} \\65{,}536{\mbox{ copies of }}2\end{matrix}}$ (formula: $2\uparrow \uparrow \uparrow \uparrow b$)
The table is the same as that of the Ackermann function, except for a shift in $n$ and $b$, and an addition of 3 to all values.
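The look-up rule used to fill these tables ("take the number immediately to the left, then look it up in the previous row") is itself an algorithm. A minimal Python sketch that fills the computable corner of the $2\uparrow ^{n}b$ table and returns None once an entry is out of reach (the 70,000 cut-off is arbitrary):

```python
def cell(n, b, max_pos=70000):
    """2 (up)^n b by the table rule; None once an entry is too large to use."""
    if n == 1:
        return 2 ** b                        # top row: the powers of 2
    val = 2                                  # the b = 1 entry of every row
    for _ in range(b - 1):
        if val is None or val > max_pos:
            return None                      # the left neighbour is out of reach
        val = cell(n - 1, val)               # look it up in the previous row
    return val

for n in range(1, 5):
    row = [cell(n, b) for b in range(1, 6)]
    print(n, [v if v is None or v < 10**6 else f"~10^{len(str(v)) - 1}"
              for v in row])
```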
Computing 3↑n b
We place the numbers $3^{b}$ in the top row, and fill the left column with values 3. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Values of $3\uparrow ^{n}b$ = $H_{n+2}(3,b)$ = $3[n+2]b$ = 3 → b → n
Each row below lists $3\uparrow ^{n}b$ for b = 1, 2, 3, 4, 5, followed by the general formula:
n = 1: 3, 9, 27, 81, 243 (formula: $3^{b}$)
n = 2: 3, 27, 7,625,597,484,987, $3^{7{,}625{,}597{,}484{,}987}\approx 1.3\times 10^{3{,}638{,}334{,}640{,}024}$, $3^{3^{7{,}625{,}597{,}484{,}987}}$ (formula: $3\uparrow \uparrow b$)
n = 3: 3, 7,625,597,484,987, ${\begin{matrix}\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$, ${\begin{matrix}\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$, ${\begin{matrix}\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ (formula: $3\uparrow \uparrow \uparrow b$)
n = 4: 3, ${\begin{matrix}\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {^{^{^{^{^{3}.}.}.}3}3} \\\underbrace {3_{}^{3^{{}^{.\,^{.\,^{.\,^{3}}}}}}} \\7{,}625{,}597{,}484{,}987{\mbox{ copies of }}3\end{matrix}}$ (formula: $3\uparrow \uparrow \uparrow \uparrow b$)
Computing 4↑n b
We place the numbers $4^{b}$ in the top row, and fill the left column with values 4. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Values of $4\uparrow ^{n}b$ = $H_{n+2}(4,b)$ = $4[n+2]b$ = 4 → b → n
Each row below lists $4\uparrow ^{n}b$ for b = 1, 2, 3, 4, 5, followed by the general formula:
n = 1: 4, 16, 64, 256, 1,024 (formula: $4^{b}$)
n = 2: 4, 256, $4^{256}\approx 1.34\times 10^{154}$, $4^{4^{256}}\approx 10^{8.0\times 10^{153}}$, $4^{4^{4^{256}}}$ (formula: $4\uparrow \uparrow b$)
n = 3: 4, $4^{4^{256}}\approx 10^{8.0\times 10^{153}}$, ${\begin{matrix}\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$, ${\begin{matrix}\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$, ${\begin{matrix}\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ (formula: $4\uparrow \uparrow \uparrow b$)
n = 4: 4, ${\begin{matrix}\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {^{^{^{^{^{4}.}.}.}4}4} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\\underbrace {4_{}^{4^{{}^{.\,^{.\,^{.\,^{4}}}}}}} \\4^{4^{256}}{\mbox{ copies of }}4\end{matrix}}$ (formula: $4\uparrow \uparrow \uparrow \uparrow b$)
Computing 10↑n b
We place the numbers $10^{b}$ in the top row, and fill the left column with values 10. To determine a number in the table, take the number immediately to the left, then look up the required number in the previous row, at the position given by the number just taken.
Values of $10\uparrow ^{n}b$ = $H_{n+2}(10,b)$ = $10[n+2]b$ = 10 → b → n
Each row below lists $10\uparrow ^{n}b$ for b = 1, 2, 3, 4, 5, followed by the general formula:
n = 1: 10, 100, 1,000, 10,000, 100,000 (formula: $10^{b}$)
n = 2: 10, 10,000,000,000, $10^{10,000,000,000}$, $10^{10^{10,000,000,000}}$, $10^{10^{10^{10,000,000,000}}}$ (formula: $10\uparrow \uparrow b$)
n = 3: 10, ${\begin{matrix}\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\10{\mbox{ copies of }}10\end{matrix}}$, ${\begin{matrix}\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\10{\mbox{ copies of }}10\end{matrix}}$, ${\begin{matrix}\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\10{\mbox{ copies of }}10\end{matrix}}$, ${\begin{matrix}\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\\underbrace {10_{}^{10^{{}^{.\,^{.\,^{.\,^{10}}}}}}} \\10{\mbox{ copies of }}10\end{matrix}}$ (formula: $10\uparrow \uparrow \uparrow b$)
n = 4: 10, ${\begin{matrix}\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\10{\mbox{ copies of }}10\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\10{\mbox{ copies of }}10\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\10{\mbox{ copies of }}10\end{matrix}}$, ${\begin{matrix}\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\\underbrace {^{^{^{^{^{10}.}.}.}10}10} \\10{\mbox{ copies of }}10\end{matrix}}$ (formula: $10\uparrow \uparrow \uparrow \uparrow b$)
For 2 ≤ b ≤ 9 the numerical order of the numbers $10\uparrow ^{n}b$ is the lexicographical order with n as the most significant number, so for the numbers of these 8 columns the numerical order is simply line-by-line. The same applies for the numbers in the 97 columns with 3 ≤ b ≤ 99, and if we start from n = 1 even for 3 ≤ b ≤ 9,999,999,999.
See also
• Primitive recursion
• Hyperoperation
• Busy beaver
• Cutler's bar notation
• Tetration
• Pentation
• Ackermann function
• Graham's number
• Steinhaus–Moser notation
Notes
1. For more details, see Powers of zero.
2. Keep in mind that Knuth did not define the operator $\uparrow ^{0}$.
3. For more details, see Zero to the power of zero.
References
1. Knuth, Donald E. (1976). "Mathematics and Computer Science: Coping with Finiteness". Science. 194 (4271): 1235–1242. Bibcode:1976Sci...194.1235K. doi:10.1126/science.194.4271.1235. PMID 17797067. S2CID 1690489.
2. R. L. Goodstein (Dec 1947). "Transfinite Ordinals in Recursive Number Theory". Journal of Symbolic Logic. 12 (4): 123–129. doi:10.2307/2266486. JSTOR 2266486. S2CID 1318943.
External links
• Weisstein, Eric W. "Knuth Up-Arrow Notation". MathWorld.
• Robert Munafo, Large Numbers: Higher hyper operators
Combinatorial game theory
Combinatorial game theory is a branch of mathematics and theoretical computer science that typically studies sequential games with perfect information. Study has been largely confined to two-player games in which the players take turns changing a position in defined ways, or moves, to achieve a defined winning condition. Combinatorial game theory has not traditionally studied games of chance or those that use imperfect or incomplete information, favoring games that offer perfect information, in which the state of the game and the set of available moves is always known by both players.[1] However, as mathematical techniques advance, the types of game that can be mathematically analyzed expand; thus the boundaries of the field are ever changing.[2] Scholars will generally define what they mean by a "game" at the beginning of a paper, and these definitions often vary, as they are specific to the game being analyzed and are not meant to represent the entire scope of the field.
This article is about the theory of combinatorial games. For the theory that includes games of chance and games of imperfect knowledge, see Game theory.
Combinatorial games include well-known games such as chess, checkers, and Go, which are regarded as non-trivial, and tic-tac-toe, which is considered trivial, in the sense of being "easy to solve". Some combinatorial games may also have an unbounded playing area, such as infinite chess. In combinatorial game theory, the moves in these and other games are represented as a game tree.
Combinatorial games also include one-player combinatorial puzzles such as Sudoku, and no-player automata such as Conway's Game of Life (although in the strictest definition, "games" can be said to require more than one participant, hence the designations "puzzle" and "automaton").[3]
Game theory in general includes games of chance, games of imperfect knowledge, and games in which players can move simultaneously, and they tend to represent real-life decision making situations.
Combinatorial game theory has a different emphasis than "traditional" or "economic" game theory, which was initially developed to study games with simple combinatorial structure, but with elements of chance (although it also considers sequential moves, see extensive-form game). Essentially, combinatorial game theory has contributed new methods for analyzing game trees, for example using surreal numbers, which are a subclass of all two-player perfect-information games.[3] The type of games studied by combinatorial game theory is also of interest in artificial intelligence, particularly for automated planning and scheduling. In combinatorial game theory there has been less emphasis on refining practical search algorithms (such as the alpha–beta pruning heuristic included in most artificial intelligence textbooks), but more emphasis on descriptive theoretical results (such as measures of game complexity or proofs of optimal solution existence without necessarily specifying an algorithm, such as the strategy-stealing argument).
An important notion in combinatorial game theory is that of the solved game. For example, tic-tac-toe is considered a solved game, as it can be proven that any game will result in a draw if both players play optimally. Deriving similar results for games with rich combinatorial structures is difficult. For instance, in 2007 it was announced that checkers has been weakly solved—optimal play by both sides also leads to a draw—but this result was a computer-assisted proof.[4] Other real world games are mostly too complicated to allow complete analysis today, although the theory has had some recent successes in analyzing Go endgames. Applying combinatorial game theory to a position attempts to determine the optimum sequence of moves for both players until the game ends, and by doing so discover the optimum move in any position. In practice, this process is torturously difficult unless the game is very simple.
It can be helpful to distinguish between combinatorial "mathgames" of interest primarily to mathematicians and scientists to ponder and solve, and combinatorial "playgames" of interest to the general population as a form of entertainment and competition.[5] However, a number of games fall into both categories. Nim, for instance, is a playgame instrumental in the foundation of combinatorial game theory, and one of the first computerized games.[6] Tic-tac-toe is still used to teach basic principles of game AI design to computer science students.[7]
History
Combinatorial game theory arose in relation to the theory of impartial games, in which any play available to one player must be available to the other as well. One such game is Nim, which can be solved completely. Nim is an impartial game for two players, and subject to the normal play condition, which means that a player who cannot move loses. In the 1930s, the Sprague–Grundy theorem showed that all impartial games are equivalent to heaps in Nim, thus showing that major unifications are possible in games considered at a combinatorial level, in which detailed strategies matter, not just pay-offs.
In the 1960s, Elwyn R. Berlekamp, John H. Conway and Richard K. Guy jointly introduced the theory of a partisan game, in which the requirement that a play available to one player be available to both is relaxed. Their results were published in their book Winning Ways for your Mathematical Plays in 1982. However, the first work published on the subject was Conway's 1976 book On Numbers and Games, also known as ONAG, which introduced the concept of surreal numbers and the generalization to games. On Numbers and Games was also a fruit of the collaboration between Berlekamp, Conway, and Guy.
Combinatorial games are generally, by convention, put into a form where one player wins when the other has no moves remaining. It is easy to convert any finite game with only two possible results into an equivalent one where this convention applies. One of the most important concepts in the theory of combinatorial games is that of the sum of two games, which is a game where each player may choose to move either in one game or the other at any point in the game, and a player wins when his opponent has no move in either game. This way of combining games leads to a rich and powerful mathematical structure.
Conway stated in On Numbers and Games that the inspiration for the theory of partisan games was based on his observation of the play in Go endgames, which can often be decomposed into sums of simpler endgames isolated from each other in different parts of the board.
Examples
The introductory text Winning Ways introduced a large number of games, but the following were used as motivating examples for the introductory theory:
• Blue–Red Hackenbush - At the finite level, this partisan combinatorial game allows constructions of games whose values are dyadic rational numbers. At the infinite level, it allows one to construct all real values, as well as many infinite ones that fall within the class of surreal numbers.
• Blue–Red–Green Hackenbush - Allows for additional game values that are not numbers in the traditional sense, for example, star.
• Toads and Frogs - Allows various game values. Unlike most other games, a position is easily represented by a short string of characters.
• Domineering - Various interesting games, such as hot games, appear as positions in Domineering, because there is sometimes an incentive to move, and sometimes not. This allows discussion of a game's temperature.
• Nim - An impartial game. This allows for the construction of the nimbers. (It can also be seen as a green-only special case of Blue-Red-Green Hackenbush.)
The classic game Go was influential on the early combinatorial game theory, and Berlekamp and Wolfe subsequently developed an endgame and temperature theory for it (see references). Armed with this they were able to construct plausible Go endgame positions from which they could give expert Go players a choice of sides and then defeat them either way.
Another game studied in the context of combinatorial game theory is chess. In 1953 Alan Turing wrote of the game, "If one can explain quite unambiguously in English, with the aid of mathematical symbols if required, how a calculation is to be done, then it is always possible to programme any digital computer to do that calculation, provided the storage capacity is adequate."[8] In a 1950 paper, Claude Shannon estimated the lower bound of the game-tree complexity of chess to be $10^{120}$, and today this is referred to as the Shannon number.[9] Chess remains unsolved, although extensive study, including work involving the use of supercomputers, has created chess endgame tablebases, which show the result of perfect play for all endgames with seven pieces or fewer. Infinite chess has an even greater combinatorial complexity than chess (unless only limited endgames, or composed positions with a small number of pieces, are being studied).
Overview
A game, in its simplest terms, is a list of possible "moves" that two players, called left and right, can make. The game position resulting from any move can be considered to be another game. This idea of viewing games in terms of their possible moves to other games leads to a recursive mathematical definition of games that is standard in combinatorial game theory. In this definition, each game has the notation {L|R}. L is the set of game positions that the left player can move to, and R is the set of game positions that the right player can move to; each position in L and R is defined as a game using the same notation.
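As a minimal illustration of this recursive definition (our own sketch, not code from any of the works cited here), a game can be modelled as a pair of tuples, the Left options and the Right options, where each option is itself a game of the same shape:

```python
# A game {L | R} as a pair (left_options, right_options);
# every option is itself a game of the same form.
ZERO = ((), ())              # { | }  : neither player has a move
STAR = ((ZERO,), (ZERO,))    # {0 | 0}: either player can move to 0
ONE  = ((ZERO,), ())         # {0 |  }: only Left has a move
```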
Using Domineering as an example, label each of the sixteen boxes of the four-by-four board by A1 for the upper leftmost square, C2 for the third box from the left on the second row from the top, and so on. We use e.g. (D3, D4) to stand for the game position in which a vertical domino has been placed in the bottom right corner. Then, the initial position can be described in combinatorial game theory notation as
$\{(\mathrm {A} 1,\mathrm {A} 2),(\mathrm {B} 1,\mathrm {B} 2),\dots |(\mathrm {A} 1,\mathrm {B} 1),(\mathrm {A} 2,\mathrm {B} 2),\dots \}.$
In standard Cross-Cram play, the players alternate turns, but this alternation is handled implicitly by the definitions of combinatorial game theory rather than being encoded within the game states.
$\{(\mathrm {A} 1,\mathrm {A} 2)|(\mathrm {A} 1,\mathrm {B} 1)\}=\{\{|\}|\{|\}\}.$
The above game describes a scenario in which there is only one move left for either player, and if either player makes that move, that player wins. (An irrelevant open square at C3 has been omitted from the diagram.) The {|} in each player's move list (corresponding to the single leftover square after the move) is called the zero game, and can actually be abbreviated 0. In the zero game, neither player has any valid moves; thus, the player whose turn it is when the zero game comes up automatically loses.
The type of game in the diagram above also has a simple name; it is called the star game, which can also be abbreviated ∗. In the star game, the only valid move leads to the zero game, which means that whoever's turn comes up during the star game automatically wins.
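These outcome rules can be checked mechanically. Under the normal play convention (a player who cannot move loses), a short recursion over the option sets decides who wins; the sketch below, reusing the tuple representation of ZERO and STAR from above, classifies a game as positive, negative, zero, or fuzzy (the helper names are ours):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(g, mover):
    """True if `mover` ('L' or 'R'), moving first in g, can force a win."""
    left, right = g
    options = left if mover == 'L' else right
    other = 'R' if mover == 'L' else 'L'
    # A player with no options loses immediately; otherwise the player
    # needs some move to a position where the opponent cannot force a win.
    return any(not wins(h, other) for h in options)

def outcome(g):
    l, r = wins(g, 'L'), wins(g, 'R')
    if l and r:
        return 'fuzzy'      # first player wins, e.g. the star game
    if l:
        return 'positive'   # Left wins no matter who starts
    if r:
        return 'negative'   # Right wins no matter who starts
    return 'zero'           # second player wins, e.g. the zero game

print(outcome(ZERO), outcome(STAR))  # zero fuzzy
```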
An additional type of game, not found in Domineering, is a loopy game, in which a valid move of either left or right is a game that can then lead back to the first game. Checkers, for example, becomes loopy when one of the pieces promotes, as then it can cycle endlessly between two or more squares. A game that does not possess such moves is called loopfree.
Game abbreviations
Numbers
Numbers represent the number of free moves, or the move advantage of a particular player. By convention positive numbers represent an advantage for Left, while negative numbers represent an advantage for Right. They are defined recursively with 0 being the base case.
0 = {|}
1 = {0|}, 2 = {1|}, 3 = {2|}
−1 = {|0}, −2 = {|−1}, −3 = {|−2}
The zero game is a loss for the first player.
The sum of number games behaves like the integers, for example 3 + −2 = 1.
Star
Star, written as ∗ or {0|0}, is a first-player win since either player must (if first to move in the game) move to a zero game, and therefore win.
∗ + ∗ = 0, because the first player must turn one copy of ∗ to a 0, and then the other player will have to turn the other copy of ∗ to a 0 as well; at this point, the first player would lose, since 0 + 0 admits no moves.
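This claim about ∗ + ∗ can be verified with the same machinery. Below is a sketch of the disjunctive sum defined earlier, in which a move in the sum is a move in either component, continuing the tuple representation from the blocks above:

```python
def add(g, h):
    """Disjunctive sum: a move in g + h is a move in g or a move in h."""
    (gl, gr), (hl, hr) = g, h
    left  = tuple(add(x, h) for x in gl) + tuple(add(g, y) for y in hl)
    right = tuple(add(x, h) for x in gr) + tuple(add(g, y) for y in hr)
    return (left, right)

print(outcome(add(STAR, STAR)))  # zero: the second player wins in * + *
```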
The game ∗ is neither positive nor negative; it and all other games in which the first player wins (regardless of which side the player is on) are said to be fuzzy with or confused with 0; symbolically, we write ∗ || 0.
Up
Up, written as ↑, is a position in combinatorial game theory.[10] In standard notation, ↑ = {0|∗}.
−↑ = ↓ (down)
Up is strictly positive (↑ > 0), but is infinitesimal. Up is defined in Winning Ways for your Mathematical Plays.
Down
Down, written as ↓, is a position in combinatorial game theory.[10] In standard notation, ↓ = {∗|0}.
−↓ = ↑ (up)
Down is strictly negative (↓ < 0), but is infinitesimal. Down is defined in Winning Ways for your Mathematical Plays.
"Hot" games
Consider the game {1|−1}. Both moves in this game are an advantage for the player who makes them; so the game is said to be "hot;" it is greater than any number less than −1, less than any number greater than 1, and fuzzy with any number in between. It is written as ±1. It can be added to numbers, or multiplied by positive ones, in the expected fashion; for example, 4 ± 1 = {5|3}.
Nimbers
An impartial game is one where, at every position of the game, the same moves are available to both players. For instance, Nim is impartial, as any set of objects that can be removed by one player can be removed by the other. However, Domineering is not impartial, because one player places horizontal dominoes and the other places vertical ones. Likewise, checkers is not impartial, since the players own different-colored pieces. For any ordinal number, one can define an impartial game generalizing Nim in which, on each move, either player may replace the number with any smaller ordinal number; the games defined in this way are known as nimbers. The Sprague–Grundy theorem states that every impartial game is equivalent to a nimber.
The "smallest" nimbers – the simplest and least under the usual ordering of the ordinals – are 0 and ∗.
See also
• Alpha–beta pruning, an optimised algorithm for searching the game tree
• Backward induction, reasoning backwards from a final situation
• Cooling and heating (combinatorial game theory), various transformations of games making them more amenable to the theory
• Connection game, a type of game where players attempt to establish connections
• Endgame tablebase, a database saying how to play endgames
• Expectiminimax tree, an adaptation of a minimax game tree to games with an element of chance
• Extensive-form game, a game tree enriched with payoffs and information available to players
• Game classification, an article discussing ways of classifying games
• Game complexity, an article describing ways of measuring the complexity of games
• Grundy's game, a mathematical game in which heaps of objects are split
• Multi-agent system, a type of computer system for tackling complex problems
• Positional game, a type of game where players claim previously-unclaimed positions
• Solving chess
• Sylver coinage, a mathematical game of choosing positive integers that are not the sum of non-negative multiples of previously chosen integers
• Wythoff's game, a mathematical game of taking objects from one or two piles
• Topological game, a type of mathematical game played in a topological space
• Zugzwang, being obliged to play when this is disadvantageous
Notes
1. Lessons in Play, p. 3
2. Thomas S. Fergusson's analysis of poker is an example of combinatorial game theory expanding into games that include elements of chance. Research into Three Player Nim is an example of study expanding beyond two player games. Conway, Guy and Berlekamp's analysis of partisan games is perhaps the most famous expansion of the scope of combinatorial game theory, taking the field beyond the study of impartial games.
3. Demaine, Erik D.; Hearn, Robert A. (2009). "Playing games with algorithms: algorithmic combinatorial game theory". In Albert, Michael H.; Nowakowski, Richard J. (eds.). Games of No Chance 3. Mathematical Sciences Research Institute Publications. Vol. 56. Cambridge University Press. pp. 3–56. arXiv:cs.CC/0106019.
4. Schaeffer, J.; Burch, N.; Bjornsson, Y.; Kishimoto, A.; Muller, M.; Lake, R.; Lu, P.; Sutphen, S. (2007). "Checkers is solved". Science. 317 (5844): 1518–1522. Bibcode:2007Sci...317.1518S. CiteSeerX 10.1.1.95.5393. doi:10.1126/science.1144079. PMID 17641166. S2CID 10274228.
5. Fraenkel, Aviezri (2009). "Combinatorial Games: selected bibliography with a succinct gourmet introduction". Games of No Chance 3. 56: 492.
6. Grant, Eugene F.; Lardner, Rex (2 August 1952). "The Talk of the Town - It". The New Yorker.
7. Russell, Stuart; Norvig, Peter (2021). "Chapter 5: Adversarial search and games". Artificial Intelligence: A Modern Approach. Pearson series in artificial intelligence (4th ed.). Pearson Education, Inc. pp. 146–179. ISBN 978-0-13-461099-3.
8. Alan Turing. "Digital computers applied to games". University of Southampton and King's College Cambridge. p. 2.
9. Claude Shannon (1950). "Programming a Computer for Playing Chess" (PDF). Philosophical Magazine. 41 (314): 4. Archived from the original (PDF) on 2010-07-06.
10. E. Berlekamp; J. H. Conway; R. Guy (1982). Winning Ways for your Mathematical Plays. Vol. I. Academic Press. ISBN 0-12-091101-9.
E. Berlekamp; J. H. Conway; R. Guy (1982). Winning Ways for your Mathematical Plays. Vol. II. Academic Press. ISBN 0-12-091102-7.
References
• Albert, Michael H.; Nowakowski, Richard J.; Wolfe, David (2007). Lessons in Play: An Introduction to Combinatorial Game Theory. A K Peters Ltd. ISBN 978-1-56881-277-9.
• Beck, József (2008). Combinatorial games: tic-tac-toe theory. Cambridge University Press. ISBN 978-0-521-46100-9.
• Berlekamp, E.; Conway, J. H.; Guy, R. (1982). Winning Ways for your Mathematical Plays: Games in general. Academic Press. ISBN 0-12-091101-9. 2nd ed., A K Peters Ltd (2001–2004), ISBN 1-56881-130-6, ISBN 1-56881-142-X
• Berlekamp, E.; Conway, J. H.; Guy, R. (1982). Winning Ways for your Mathematical Plays: Games in particular. Academic Press. ISBN 0-12-091102-7. 2nd ed., A K Peters Ltd (2001–2004), ISBN 1-56881-143-8, ISBN 1-56881-144-6.
• Berlekamp, Elwyn; Wolfe, David (1997). Mathematical Go: Chilling Gets the Last Point. A K Peters Ltd. ISBN 1-56881-032-6.
• Bewersdorff, Jörg (2004). Luck, Logic and White Lies: The Mathematics of Games. A K Peters Ltd. ISBN 1-56881-210-8. See especially sections 21–26.
• Conway, John Horton (1976). On Numbers and Games. Academic Press. ISBN 0-12-186350-6. 2nd ed., A K Peters Ltd (2001), ISBN 1-56881-127-6.
• Robert A. Hearn; Erik D. Demaine (2009). Games, Puzzles, and Computation. A K Peters, Ltd. ISBN 978-1-56881-322-6.
External links
• List of combinatorial game theory links at the homepage of David Eppstein
• An Introduction to Conway's games and numbers by Dierk Schleicher and Michael Stoll
• Combinational Game Theory terms summary by Bill Spight
• Combinatorial Game Theory Workshop, Banff International Research Station, June 2005
Up to
Two mathematical objects a and b are called equal up to an equivalence relation R
• if a and b are related by R, that is,
• if aRb holds, that is,
• if the equivalence classes of a and b with respect to R are equal.
This figure of speech is mostly used in connection with expressions derived from equality, such as uniqueness or count. For example, x is unique up to R means that all objects x under consideration are in the same equivalence class with respect to the relation R.
Moreover, the equivalence relation R is often designated rather implicitly by a generating condition or transformation. For example, the statement "an integer's prime factorization is unique up to ordering" is a concise way to say that any two lists of prime factors of a given integer are equivalent with respect to the relation R that relates two lists if one can be obtained by reordering (permutation) from the other.[1] As another example, the statement "the solution to an indefinite integral is sin(x), up to addition of a constant" tacitly employs the equivalence relation R between functions, defined by fRg if the difference f−g is a constant function, and means that the solution and the function sin(x) are equal up to this R. In the picture, "there are 4 partitions up to rotation" means that the set P has 4 equivalence classes with respect to R defined by aRb if b can be obtained from a by rotation; one representative from each class is shown in the bottom left picture part.
Equivalence relations are often used to disregard possible differences of objects, so "up to R" can be understood informally as "ignoring the same subtleties as R ignores". In the factorization example, "up to ordering" means "ignoring the particular ordering".
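In computational terms, "equal up to R" is just equality after passing to a canonical form or equivalence class. For the factorization example, two factor lists are equal up to ordering exactly when they are equal as multisets; a minimal sketch:

```python
from collections import Counter

def equal_up_to_ordering(xs, ys):
    """Equal as multisets, i.e. one list is a permutation of the other."""
    return Counter(xs) == Counter(ys)

# Two prime factorizations of 60, listed in different orders:
print(equal_up_to_ordering([2, 2, 3, 5], [5, 3, 2, 2]))  # True
```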
Further examples include "up to isomorphism", "up to permutations", and "up to rotations", which are described in the Examples section.
In informal contexts, mathematicians often use the word modulo (or simply "mod") for similar purposes, as in "modulo isomorphism".
Examples
Tetris
A simple example is "there are seven tetrominoes, up to rotations", which makes reference to the seven possible contiguous arrangements of tetrominoes (collections of four unit squares arranged to connect on at least one side) and which are frequently thought of as the seven Tetris pieces (O, I, L, J, T, S, Z). One could also say "there are five tetrominoes, up to reflections and rotations", which would then take into account the perspective that L and J (as well as S and Z) can be thought of as the same piece when reflected. The Tetris game does not allow reflections, so the former statement is likely to seem more relevant than the latter.
To make the exhaustive count explicit: there is no formal notation for the total number of tetromino orientations, but it is common to write that "there are seven tetrominoes (= 19 total[2]) up to rotations". Here, Tetris provides an excellent example, as one might naively count 7 pieces × 4 rotations as 28, but some pieces (such as the 2×2 O) have fewer than four distinct rotation states.
Eight queens
In the eight queens puzzle, if the eight queens are considered to be distinct, then there are 3,709,440 distinct solutions. Normally, however, the queens are considered to be interchangeable, and one usually says "there are 3,709,440 / 8! = 92 unique solutions up to permutations of the queens", or "there are 92 solutions modulo the names of the queens", signifying that two different arrangements of the queens are considered equivalent if the queens have been permuted but the same squares on the chessboard are occupied.
If, in addition to treating the queens as identical, rotations and reflections of the board were allowed, we would have only 12 distinct solutions up to symmetry and the naming of the queens, signifying that two arrangements that are symmetrical to each other are considered equivalent (for more, see Eight queens puzzle § Solutions).
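Both counts can be reproduced with a short computation. The sketch below (our own code) generates all solutions as permutations, then counts orbits under the eight rotations and reflections of the board by keeping one canonical representative per orbit:

```python
from itertools import permutations

N = 8

def solutions(n=N):
    """Solutions as tuples perm, where perm[c] is the row of the queen in column c."""
    sols = []
    for perm in permutations(range(n)):
        if len({perm[c] + c for c in range(n)}) == n and \
           len({perm[c] - c for c in range(n)}) == n:
            sols.append(perm)
    return sols

def orbit(perm, n=N):
    """All images of a solution under the board's rotations and reflections."""
    cells = {(perm[c], c) for c in range(n)}
    def as_perm(cs):
        by_col = {c: r for r, c in cs}
        return tuple(by_col[c] for c in range(n))
    images = set()
    for _ in range(4):
        cells = {(c, n - 1 - r) for r, c in cells}               # rotate 90 degrees
        images.add(as_perm(cells))
        images.add(as_perm({(r, n - 1 - c) for r, c in cells}))  # mirror
    return images

sols = solutions()
print(len(sols))                             # 92
print(len({min(orbit(s)) for s in sols}))    # 12
```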
Polygons
The regular n-gon, for a fixed n, is unique up to similarity. In other words, by scaling, translation, and rotation, as necessary, any n-gon can be transformed to any other n-gon (with the same n).
Group theory
In group theory, one may have a group G acting on a set X, in which case one might say that two elements of X are equivalent "up to the group action" if they lie in the same orbit.
Another typical example is the statement that "there are two different groups of order 4 up to isomorphism", or "modulo isomorphism, there are two groups of order 4". This means that there are two equivalence classes of groups of order 4—assuming that one considers groups to be equivalent if they are isomorphic.
Nonstandard analysis
A hyperreal x and its standard part st(x) are equal up to an infinitesimal difference.
Computer science
In computer science, the term up-to techniques is a precisely defined notion that refers to certain proof techniques for (weak) bisimulation, and to relate processes that only behave similarly up to unobservable steps.[3]
See also
• Abuse of notation
• Adequality
• All other things being equal
• Essentially unique
• List of mathematical jargon
• Modulo
• Quotient group
• Quotient set
• Synecdoche
References
1. Nekovář, Jan (2011). "Mathematical English (a brief summary)" (PDF). Institut de mathématiques de Jussieu – Paris Rive Gauche. Retrieved 2019-11-21.
2. Weisstein, Eric W. "Tetromino". mathworld.wolfram.com. Retrieved 2019-11-21.
3. Damien Pous, Up-to techniques for weak bisimulation, Proc. 32nd ICALP, Lecture Notes in Computer Science, vol. 3580, Springer Verlag (2005), pp. 730–741
Further reading
• Up-to Techniques for Weak Bisimulation
Bound graph
In graph theory, a bound graph expresses which pairs of elements of some partially ordered set have an upper bound. Rigorously, any graph G is a bound graph if there exists a partial order ≤ on the vertices of G with the property that for any vertices u and v of G, uv is an edge of G if and only if u ≠ v and there is a vertex w such that u ≤ w and v ≤ w.
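As a quick illustration (our own sketch, not from the references below), the bound graph of a finite partial order can be constructed directly from this definition:

```python
def bound_graph(elements, leq):
    """Edges {u, v} with u != v such that some w satisfies u <= w and v <= w."""
    edges = set()
    for i, u in enumerate(elements):
        for v in elements[i + 1:]:
            if any(leq(u, w) and leq(v, w) for w in elements):
                edges.add((u, v))
    return edges

# Divisibility order on {1, ..., 6}: u <= w iff u divides w.
print(sorted(bound_graph(range(1, 7), lambda a, b: b % a == 0)))
# 2 and 3 are joined (both divide 6), but 4 and 5 are not:
# no element of {1, ..., 6} is an upper bound for both.
```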
Bound graphs are sometimes referred to as upper bound graphs, but the analogously defined lower bound graphs comprise exactly the same class—any lower bound for ≤ is easily seen to be an upper bound for the dual partial order ≥.
References
• McMorris, F.R.; Zaslavsky, T. (1982). "Bound graphs of a partially ordered set". Journal of Combinatorics, Information & System Sciences. 7: 134–138.
• Lundgren, J.R.; Maybee, J.S. (1983). "A characterization of upper bound graphs". Congressus Numerantium. 40: 189–193.
• Bergstrand, D.J.; Jones, K.F. (1988). "On upper bound graphs of partially ordered sets". Congressus Numerantium. 66: 185–193.
• Tanenbaum, P.J. (2000). "Bound graph polysemy" (PDF). Electronic Journal of Combinatorics. 7: #R43. doi:10.37236/1521.
Limits of integration
In calculus and mathematical analysis the limits of integration (or bounds of integration) of the integral
$\int _{a}^{b}f(x)\,dx$
of a Riemann integrable function $f$ defined on a closed and bounded interval are the real numbers $a$ and $b$, where $a$ is called the lower limit and $b$ the upper limit. The bounded region can be seen as the area between $a$ and $b$.
For example, the function $f(x)=x^{3}$ is defined on the interval $[2,4]$
$\int _{2}^{4}x^{3}\,dx$
with the limits of integration being $2$ and $4$.[1]
Integration by substitution (u-substitution)
In integration by substitution, the limits of integration change because the integral is rewritten in terms of a new variable. Under the substitution $u=g(x)$, the original limits $a$ and $b$ are converted into the corresponding values of $u$. In general,
$\int _{a}^{b}f(g(x))g'(x)\ dx$
where $u=g(x)$ and $du=g'(x)\ dx$. Thus, the limits are expressed in terms of $u$: the lower bound becomes $g(a)$ and the upper bound becomes $g(b)$.
For example,
$\int _{0}^{2}2x\cos(x^{2})dx=\int _{0}^{4}\cos(u)\,du$
where $u=x^{2}$ and $du=2x\,dx$, i.e. $g(x)=x^{2}$. Thus, $g(0)=0^{2}=0$ and $g(2)=2^{2}=4$. Hence, the new limits of integration are $0$ and $4$.[2]
The same applies for other substitutions.
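The worked example can also be checked symbolically; a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x, u = sp.symbols('x u')

original    = sp.integrate(2*x*sp.cos(x**2), (x, 0, 2))  # limits in x: 0 to 2
substituted = sp.integrate(sp.cos(u), (u, 0, 4))         # limits in u: g(0)=0 to g(2)=4

print(original, substituted)  # both equal sin(4)
```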
Improper integrals
Limits of integration can also be defined for improper integrals, with the limits of integration of both
$\lim _{z\to a^{+}}\int _{z}^{b}f(x)\,dx$
and
$\lim _{z\to b^{-}}\int _{a}^{z}f(x)\,dx$
again being a and b. For an improper integral
$\int _{a}^{\infty }f(x)\,dx$
or
$\int _{-\infty }^{b}f(x)\,dx$
the limits of integration are a and ∞, or −∞ and b, respectively.[3]
Definite integrals
If $c\in (a,b)$, then[4]
$\int _{a}^{b}f(x)\ dx=\int _{a}^{c}f(x)\ dx\ +\int _{c}^{b}f(x)\ dx.$
See also
• Integral
• Riemann integration
• Definite integral
References
1. "31.5 Setting up Correct Limits of Integration". math.mit.edu. Retrieved 2019-12-02.
2. "𝘶-substitution". Khan Academy. Retrieved 2019-12-02.
3. "Calculus II - Improper Integrals". tutorial.math.lamar.edu. Retrieved 2019-12-02.
4. Weisstein, Eric W. "Definite Integral". mathworld.wolfram.com. Retrieved 2019-12-02.
Minkowski–Bouligand dimension
In fractal geometry, the Minkowski–Bouligand dimension, also known as Minkowski dimension or box-counting dimension, is a way of determining the fractal dimension of a set $S$ in a Euclidean space $\mathbb {R} ^{n}$, or more generally in a metric space $(X,d)$. It is named after the Polish mathematician Hermann Minkowski and the French mathematician Georges Bouligand.
To calculate this dimension for a fractal $S$, imagine this fractal lying on an evenly spaced grid and count how many boxes are required to cover the set. The box-counting dimension is calculated by seeing how this number changes as we make the grid finer by applying a box-counting algorithm.
Suppose that $N(\varepsilon )$ is the number of boxes of side length $\varepsilon $ required to cover the set. Then the box-counting dimension is defined as
$\dim _{\text{box}}(S):=\lim _{\varepsilon \to 0}{\frac {\log N(\varepsilon )}{\log(1/\varepsilon )}}.$
Roughly speaking, this means that the dimension is the exponent $d$ such that $N(1/n)\approx Cn^{d}$, which is what one would expect in the trivial case where $S$ is a smooth space (a manifold) of integer dimension $d$.
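In practice the limit is estimated by counting occupied grid boxes at several scales and fitting a line to $\log N(\varepsilon )$ against $\log(1/\varepsilon )$. A sketch (our own code, assuming NumPy), applied to a finite approximation of the middle-thirds Cantor set, whose box-counting dimension is $\log 2/\log 3\approx 0.6309$:

```python
import numpy as np

def box_counting_dimension(points, epsilons):
    """Slope of log N(eps) versus log(1/eps) over the given scales."""
    points = np.asarray(points, dtype=float)
    counts = [len({tuple(np.floor(p / eps).astype(int)) for p in points})
              for eps in epsilons]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(epsilons)), np.log(counts), 1)
    return slope

def cantor_points(depth):
    """Left endpoints of the level-`depth` intervals of the Cantor set."""
    pts = [0.0]
    for _ in range(depth):
        pts = [p / 3 for p in pts] + [2 / 3 + p / 3 for p in pts]
    return [[p] for p in pts]

print(box_counting_dimension(cantor_points(10), [3.0**-k for k in range(1, 9)]))
# approximately 0.6309 = log(2)/log(3)
```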
If the above limit does not exist, one may still take the limit superior and limit inferior, which respectively define the upper box dimension and lower box dimension. The upper box dimension is sometimes called the entropy dimension, Kolmogorov dimension, Kolmogorov capacity, limit capacity or upper Minkowski dimension, while the lower box dimension is also called the lower Minkowski dimension.
The upper and lower box dimensions are strongly related to the more popular Hausdorff dimension. Only in very special applications is it important to distinguish between the three (see below). Yet another measure of fractal dimension is the correlation dimension.
Alternative definitions
It is possible to define the box dimensions using balls, with either the covering number or the packing number. The covering number $N_{\text{covering}}(\varepsilon )$ is the minimal number of open balls of radius ε required to cover the fractal, or in other words, such that their union contains the fractal. We can also consider the intrinsic covering number $N'_{\text{covering}}(\varepsilon )$, which is defined the same way but with the additional requirement that the centers of the open balls lie inside the set S. The packing number $N_{\text{packing}}(\varepsilon )$ is the maximal number of disjoint open balls of radius ε one can situate such that their centers would be inside the fractal. While $N$, $N_{\text{covering}}$, $N'_{\text{covering}}$ and $N_{\text{packing}}$ are not exactly identical, they are closely related and give rise to identical definitions of the upper and lower box dimensions. This is easy to prove once the following inequalities are proven:
$N_{\text{packing}}(\varepsilon )\leq N'_{\text{covering}}(\varepsilon )\leq N_{\text{covering}}(\varepsilon /2).$
These, in turn, follow with a little effort from the triangle inequality.
The advantage of using balls rather than squares is that this definition generalizes to any metric space. In other words, the box definition is extrinsic — one assumes the fractal space S is contained in a Euclidean space, and defines boxes according to the external geometry of the containing space. However, the dimension of S should be intrinsic, independent of the environment into which S is placed, and the ball definition can be formulated intrinsically. One defines an internal ball as all points of S within a certain distance of a chosen center, and one counts such balls to get the dimension. (More precisely, the Ncovering definition is extrinsic, but the other two are intrinsic.)
The advantage of using boxes is that in many cases N(ε) may be easily calculated explicitly, and that for boxes the covering and packing numbers (defined in an equivalent way) are equal.
The logarithm of the packing and covering numbers are sometimes referred to as entropy numbers and are somewhat analogous to the concepts of thermodynamic entropy and information-theoretic entropy, in that they measure the amount of "disorder" in the metric space or fractal at scale ε and also measure how many bits or digits one would need to specify a point of the space to accuracy ε.
Another equivalent (extrinsic) definition for the box-counting dimension is given by the formula
$\dim _{\text{box}}(S)=n-\lim _{r\to 0}{\frac {\log {\text{vol}}(S_{r})}{\log r}},$
where for each r > 0, the set $S_{r}$ is defined to be the r-neighborhood of S, i.e. the set of all points in $\mathbb {R} ^{n}$ that are at distance less than r from S (or equivalently, $S_{r}$ is the union of all the open balls of radius r centered at a point in S).
Properties
Both box dimensions are finitely additive, i.e. if {A1, ..., An} is a finite collection of sets, then
$\dim(A_{1}\cup \dotsb \cup A_{n})=\max\{\dim A_{1},\dots ,\dim A_{n}\}.$
However, they are not countably additive, i.e. this equality does not hold for an infinite sequence of sets. For example, the box dimension of a single point is 0, but the collection of rational numbers in the interval [0, 1] has box dimension 1. The Hausdorff measure, by comparison, is countably additive.
An interesting property of the upper box dimension not shared with either the lower box dimension or the Hausdorff dimension is the connection to set addition. If A and B are two sets in a Euclidean space, then A + B is formed by taking all the pairs of points a, b where a is from A and b is from B and adding a + b. One has
$\dim _{\text{upper box}}(A+B)\leq \dim _{\text{upper box}}(A)+\dim _{\text{upper box}}(B).$
Relations to the Hausdorff dimension
The box-counting dimension is one of a number of definitions for dimension that can be applied to fractals. For many well behaved fractals all these dimensions are equal; in particular, these dimensions coincide whenever the fractal satisfies the open set condition (OSC).[1] For example, the Hausdorff dimension, lower box dimension, and upper box dimension of the Cantor set are all equal to log(2)/log(3). However, the definitions are not equivalent.
The box dimensions and the Hausdorff dimension are related by the inequality
$\dim _{\text{Haus}}\leq \dim _{\text{lower box}}\leq \dim _{\text{upper box}}.$
In general, both inequalities may be strict. The upper box dimension may be bigger than the lower box dimension if the fractal has different behaviour in different scales. For example, examine the set of numbers in the interval [0, 1] satisfying the condition
for any n, all the digits between the $2^{2n}$-th digit and the $(2^{2n+1}-1)$-th digit are zero.
The digits in the "odd place-intervals", i.e. between digits $2^{2n+1}$ and $2^{2n+2}-1$, are not restricted and may take any value. This fractal has upper box dimension 2/3 and lower box dimension 1/3, a fact which may be easily verified by calculating N(ε) for $\varepsilon =10^{-2^{n}}$ and noting that their values behave differently for n even and odd.
Another example: the set of rational numbers $\mathbb {Q} $, a countable set with $\dim _{\text{Haus}}=0$, has $\dim _{\text{box}}=1$ because its closure, $\mathbb {R} $, has dimension 1. In fact,
$\dim _{\text{box}}\left\{0,1,{\frac {1}{2}},{\frac {1}{3}},{\frac {1}{4}},\ldots \right\}={\frac {1}{2}}.$
These examples show that adding a countable set can change box dimension, demonstrating a kind of instability of this dimension.
See also
• Correlation dimension
• Packing dimension
• Uncertainty exponent
• Weyl–Berry conjecture
• Lacunarity
References
1. Wagon, Stan (2010). Mathematica in Action: Problem Solving Through Visualization and Computation. Springer-Verlag. p. 214. ISBN 0-387-75477-6.
• Falconer, Kenneth (1990). Fractal geometry: mathematical foundations and applications. Chichester: John Wiley. pp. 38–47. ISBN 0-471-92287-0. Zbl 0689.28003.
• Weisstein, Eric W. "Minkowski-Bouligand Dimension". MathWorld.
External links
• FrakOut!: an OSS application for calculating the fractal dimension of a shape using the box counting method (Does not automatically place the boxes for you).
• FracLac: online user guide and software ImageJ and FracLac box counting plugin; free user-friendly open source software for digital image analysis in biology
Lower envelope
In mathematics, the lower envelope or pointwise minimum of a finite set of functions is the pointwise minimum of the functions, the function whose value at every point is the minimum of the values of the functions in the given set. The concept of a lower envelope can also be extended to partial functions by taking the minimum only among functions that have values at the point. The upper envelope or pointwise maximum is defined symmetrically. For an infinite set of functions, the same notions may be defined using the infimum in place of the minimum, and the supremum in place of the maximum.[1]
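Computationally, the pointwise definition is immediate; a minimal sketch for finitely many functions:

```python
def lower_envelope(*fs):
    """Pointwise minimum of finitely many real-valued functions."""
    return lambda x: min(f(x) for f in fs)

env = lower_envelope(lambda x: x, lambda x: 2 - x, lambda x: 1.0)
print([env(t / 2) for t in range(5)])  # [0.0, 0.5, 1.0, 0.5, 0.0]
```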
For continuous functions from a given class, the lower or upper envelope is a piecewise function whose pieces are from the same class. For functions of a single real variable whose graphs have a bounded number of intersection points, the complexity of the lower or upper envelope can be bounded using Davenport–Schinzel sequences, and these envelopes can be computed efficiently by a divide-and-conquer algorithm that computes and then merges the envelopes of subsets of the functions.[2]
For convex functions or quasiconvex functions, the upper envelope is again convex or quasiconvex. The lower envelope is not, but can be replaced by the lower convex envelope to obtain an operation analogous to the lower envelope that maintains convexity. The upper and lower envelopes of Lipschitz functions preserve the property of being Lipschitz. However, the lower and upper envelope operations do not necessarily preserve the property of being a continuous function.[3]
References
1. Choquet, Gustave (1966), "3. Upper and lower envelopes of a family of functions", Topology, Academic Press, pp. 129–131, ISBN 9780080873312
2. Boissonnat, Jean-Daniel; Yvinec, Mariette (1998), "15.3.2 Computing the lower envelope", Algorithmic Geometry, Cambridge University Press, p. 358, ISBN 9780521565295
3. Choquet (1966), p. 136.
Upper and lower probabilities
Upper and lower probabilities are representations of imprecise probability. Whereas probability theory uses a single number, the probability, to describe how likely an event is to occur, this method uses two numbers: the upper probability of the event and the lower probability of the event.
Because frequentist statistics disallows metaprobabilities, frequentists have had to propose new solutions. Cedric Smith and Arthur Dempster each developed a theory of upper and lower probabilities. Glenn Shafer developed Dempster's theory further, and it is now known as Dempster–Shafer theory; a closely related earlier framework is the theory of capacities of Choquet (1953). More precisely, in the work of these authors one considers, on a power set $P(S)\,\!$, a mass function $m:P(S)\rightarrow \mathbb {R} $ satisfying the conditions
$m(\varnothing )=0\,\,\,\,\,\,\!;\,\,\,\,\,\,m(A)\geq 0\,\,\,\,\,\,\!;\,\,\,\,\,\,\sum _{A\in P(S)}m(A)=1.\,\!$
In turn, a mass is associated with two non-additive continuous measures called belief and plausibility defined as follows:
$\operatorname {bel} (A)=\sum _{B\mid B\subseteq A}m(B)\,\,\,\,;\,\,\,\,\operatorname {pl} (A)=\sum _{B\mid B\cap A\neq \varnothing }m(B)$
In the case where $S$ is infinite, there can be belief functions $\operatorname {bel} $ with no associated mass function; see p. 36 of Halpern (2003). Probability measures are a special case of belief functions in which the mass function assigns positive mass only to singletons of the event space.
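As a concrete sketch (with a made-up mass function on $S=\{a,b,c\}$; the masses here are illustrative only), belief and plausibility can be computed directly from the definitions:

```python
# A hypothetical mass function: positive mass only on these focal sets.
m = {
    frozenset({'a'}): 0.5,
    frozenset({'b', 'c'}): 0.25,
    frozenset({'a', 'b', 'c'}): 0.25,
}

def bel(A):
    A = frozenset(A)
    return sum(v for B, v in m.items() if B <= A)   # sum over B contained in A

def pl(A):
    A = frozenset(A)
    return sum(v for B, v in m.items() if B & A)    # sum over B meeting A

print(bel({'a'}), pl({'a'}))            # 0.5 0.75
print(bel({'b', 'c'}), pl({'b', 'c'}))  # 0.25 0.5  (note bel(A) + pl(S - A) = 1)
```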
A different notion of upper and lower probabilities is obtained by the lower and upper envelopes obtained from a class C of probability distributions by setting
$\operatorname {env_{1}} (A)=\inf _{p\in C}p(A)\,\,\,\,;\,\,\,\,\operatorname {env_{2}} (A)=\sup _{p\in C}p(A)$
The upper and lower probabilities are also related with probabilistic logic: see Gerla (1994).
Observe also that a necessity measure can be seen as a lower probability and a possibility measure can be seen as an upper probability.
See also
• Possibility theory
• Fuzzy measure theory
• Interval finite element
• Probability bounds analysis
References
• Choquet, G. (1953). "Theory of Capacities". Annales de l'Institut Fourier. 5: 131–295. doi:10.5802/aif.53.
• Gerla, G. (1994). "Inferences in Probability Logic". Artificial Intelligence. 70 (1–2): 33–52. doi:10.1016/0004-3702(94)90102-3.
• Halpern, J. Y. (2003). Reasoning about Uncertainty. MIT Press. ISBN 978-0-262-08320-1.
• Halpern, J. Y.; Fagin, R. (1992). "Two views of belief: Belief as generalized probability and belief as evidence". Artificial Intelligence. 54 (3): 275–317. CiteSeerX 10.1.1.70.6130. doi:10.1016/0004-3702(92)90048-3.
• Huber, P. J. (1980). Robust Statistics. New York: Wiley. ISBN 978-0-471-41805-4.
• Saffiotti, A. (1992). "A Belief-Function Logic". Procs of the 10h AAAI Conference. San Jose, CA. pp. 642–647. ISBN 978-0-262-51063-9.{{cite book}}: CS1 maint: location missing publisher (link)
• Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton: Princeton University Press. ISBN 978-0-691-08175-5.
• Walley, P.; Fine, T. L. (1982). "Towards a frequentist theory of upper and lower probability". Annals of Statistics. 10 (3): 741–761. doi:10.1214/aos/1176345868. JSTOR 2240901.
Dini derivative
In mathematics and, specifically, real analysis, the Dini derivatives (or Dini derivates) are a class of generalizations of the derivative. They were introduced by Ulisse Dini, who studied continuous but nondifferentiable functions.
The upper Dini derivative, which is also called an upper right-hand derivative,[1] of a continuous function
$f:{\mathbb {R} }\rightarrow {\mathbb {R} },$
is denoted by f′+ and defined by
$f'_{+}(t)=\limsup _{h\to {0+}}{\frac {f(t+h)-f(t)}{h}},$
where lim sup is the supremum limit and the limit is a one-sided limit. The lower Dini derivative, f′−, is defined by
$f'_{-}(t)=\liminf _{h\to {0+}}{\frac {f(t)-f(t-h)}{h}},$
where lim inf is the infimum limit.
If f is defined on a vector space, then the upper Dini derivative at t in the direction d is defined by
$f'_{+}(t,d)=\limsup _{h\to {0+}}{\frac {f(t+hd)-f(t)}{h}}.$
If f is locally Lipschitz, then f′+ is finite. If f is differentiable at t, then the Dini derivative at t is the usual derivative at t.
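As a numerical illustration (our own sketch), take $f(x)=x\sin(1/x)$ with $f(0)=0$. The forward difference quotients at $0$ equal $\sin(1/h)$ and oscillate between $-1$ and $+1$ as $h\to 0^{+}$, so the limit superior (the upper Dini derivative) is $1$ while the corresponding limit inferior is $-1$, and $f$ is not differentiable at $0$:

```python
import math

def f(x):
    return x * math.sin(1.0 / x) if x != 0.0 else 0.0

def forward_quotient(h):
    return (f(h) - f(0.0)) / h  # equals sin(1/h) for this f

for k in (1, 10, 100):
    h_hi = 1.0 / (math.pi / 2 + 2 * math.pi * k)      # here sin(1/h) = +1
    h_lo = 1.0 / (3 * math.pi / 2 + 2 * math.pi * k)  # here sin(1/h) = -1
    print(forward_quotient(h_hi), forward_quotient(h_lo))  # ~ 1.0 and -1.0
```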
Remarks
• The functions are defined in terms of the infimum and supremum in order to make the Dini derivatives as "bullet proof" as possible, so that the Dini derivatives are well-defined for almost all functions, even for functions that are not conventionally differentiable. The upshot of Dini's analysis is that a function is differentiable at a point t on the real line (ℝ) only if all the Dini derivatives exist and have the same value.
• Sometimes the notation D+ f(t) is used instead of f′+(t) and D− f(t) is used instead of f′−(t).[1]
• Also,
$D^{+}f(t)=\limsup _{h\to {0+}}{\frac {f(t+h)-f(t)}{h}}$
and
$D_{-}f(t)=\liminf _{h\to {0+}}{\frac {f(t)-f(t-h)}{h}}$.
• So when using the D notation of the Dini derivatives, the plus or minus sign indicates the left- or right-hand limit, and the placement of the sign indicates the infimum or supremum limit.
• There are two further Dini derivatives, defined to be
$D_{+}f(t)=\liminf _{h\to {0+}}{\frac {f(t+h)-f(t)}{h}}$
and
$D^{-}f(t)=\limsup _{h\to {0+}}{\frac {f(t)-f(t-h)}{h}}$.
which are the same as the first pair, but with the supremum and the infimum reversed. For only moderately ill-behaved functions, the two extra Dini derivatives are not needed. For particularly badly behaved functions, if all four Dini derivatives have the same finite value ($D^{+}f(t)=D_{+}f(t)=D^{-}f(t)=D_{-}f(t)$), then the function f is differentiable in the usual sense at the point t.
• On the extended reals, each of the Dini derivatives always exist; however, they may take on the values +∞ or −∞ at times (i.e., the Dini derivatives always exist in the extended sense).
See also
• Denjoy–Young–Saks theorem – Mathematical theorem about Dini derivatives
• Derivative (generalizations) – Fundamental construction of differential calculus
• Semi-differentiability
References
1. Khalil, Hassan K. (2002). Nonlinear Systems (3rd ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 0-13-067389-7.
• Lukashenko, T.P. (2001) [1994], "Dini derivative", Encyclopedia of Mathematics, EMS Press.
• Royden, H. L. (1968). Real Analysis (2nd ed.). MacMillan. ISBN 978-0-02-404150-0.
• Thomson, Brian S.; Bruckner, Judith B.; Bruckner, Andrew M. (2008). Elementary Real Analysis. ClassicalRealAnalysis.com [first edition published by Prentice Hall in 2001]. pp. 301–302. ISBN 978-1-4348-4161-2.
This article incorporates material from Dini derivative on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Semi-continuity
In mathematical analysis, semicontinuity (or semi-continuity) is a property of extended real-valued functions that is weaker than continuity. An extended real-valued function $f$ is upper (respectively, lower) semicontinuous at a point $x_{0}$ if, roughly speaking, the function values for arguments near $x_{0}$ are not much higher (respectively, lower) than $f\left(x_{0}\right).$
For the notion of upper or lower semi-continuous set-valued function, see Hemicontinuity.
A function is continuous if and only if it is both upper and lower semicontinuous. If we take a continuous function and increase its value at a certain point $x_{0}$ to $f\left(x_{0}\right)+c$ for some $c>0$, then the result is upper semicontinuous; if we decrease its value to $f\left(x_{0}\right)-c$ then the result is lower semicontinuous.
The notion of upper and lower semicontinuous function was first introduced and studied by René Baire in his thesis in 1899.[1]
Definitions
Assume throughout that $X$ is a topological space and $f:X\to {\overline {\mathbb {R} }}$ is a function with values in the extended real numbers ${\overline {\mathbb {R} }}=\mathbb {R} \cup \{-\infty ,\infty \}=[-\infty ,\infty ]$.
Upper semicontinuity
A function $f:X\to {\overline {\mathbb {R} }}$ is called upper semicontinuous at a point $x_{0}\in X$ if for every real $y>f\left(x_{0}\right)$ there exists a neighborhood $U$ of $x_{0}$ such that $f(x)<y$ for all $x\in U$.[2] Equivalently, $f$ is upper semicontinuous at $x_{0}$ if and only if
$\limsup _{x\to x_{0}}f(x)\leq f(x_{0})$
where lim sup is the limit superior of the function $f$ at the point $x_{0}$.
A function $f:X\to {\overline {\mathbb {R} }}$ is called upper semicontinuous if it satisfies any of the following equivalent conditions:[2]
(1) The function is upper semicontinuous at every point of its domain.
(2) All sets $f^{-1}([-\infty ,y))=\{x\in X:f(x)<y\}$ with $y\in \mathbb {R} $ are open in $X$, where $[-\infty ,y)=\{t\in {\overline {\mathbb {R} }}:t<y\}$.
(3) All superlevel sets $\{x\in X:f(x)\geq y\}$ with $y\in \mathbb {R} $ are closed in $X$.
(4) The hypograph $\{(x,t)\in X\times \mathbb {R} :t\leq f(x)\}$ is closed in $X\times \mathbb {R} $.
(5) The function is continuous when the codomain ${\overline {\mathbb {R} }}$ is given the left order topology. This is just a restatement of condition (2) since the left order topology is generated by all the intervals $[-\infty ,y)$.
Lower semicontinuity
A function $f:X\to {\overline {\mathbb {R} }}$ is called lower semicontinuous at a point $x_{0}\in X$ if for every real $y<f\left(x_{0}\right)$ there exists a neighborhood $U$ of $x_{0}$ such that $f(x)>y$ for all $x\in U$. Equivalently, $f$ is lower semicontinuous at $x_{0}$ if and only if
$\liminf _{x\to x_{0}}f(x)\geq f(x_{0})$
where $\liminf $ is the limit inferior of the function $f$ at point $x_{0}$.
A function $f:X\to {\overline {\mathbb {R} }}$ is called lower semicontinuous if it satisfies any of the following equivalent conditions:
(1) The function is lower semicontinuous at every point of its domain.
(2) All sets $f^{-1}((y,\infty ])=\{x\in X:f(x)>y\}$ with $y\in \mathbb {R} $ are open in $X$, where $(y,\infty ]=\{t\in {\overline {\mathbb {R} }}:t>y\}$.
(3) All sublevel sets $\{x\in X:f(x)\leq y\}$ with $y\in \mathbb {R} $ are closed in $X$.
(4) The epigraph $\{(x,t)\in X\times \mathbb {R} :t\geq f(x)\}$ is closed in $X\times \mathbb {R} $.
(5) The function is continuous when the codomain ${\overline {\mathbb {R} }}$ is given the right order topology. This is just a restatement of condition (2) since the right order topology is generated by all the intervals $(y,\infty ]$.
Examples
Consider the function $f,$ piecewise defined by:
$f(x)={\begin{cases}-1&{\mbox{if }}x<0,\\1&{\mbox{if }}x\geq 0\end{cases}}$
This function is upper semicontinuous at $x_{0}=0,$ but not lower semicontinuous.
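As a rough numerical illustration of the definition (a sketch with assumed helper names, not part of the article), one can approximate the limit superior and inferior of f over a small punctured neighborhood of $x_{0}=0$ and compare them with $f(0)$:

```python
# Illustrative check of semicontinuity at x0 = 0 for the piecewise
# function above; max/min over a fine sample stand in for limsup/liminf.

def f(x):
    return -1 if x < 0 else 1

def local_limsup_liminf(f, x0, radius=1e-3, n=10001):
    xs = [x0 - radius + 2 * radius * k / (n - 1) for k in range(n)]
    vals = [f(x) for x in xs if x != x0]  # punctured neighborhood
    return max(vals), min(vals)

hi, lo = local_limsup_liminf(f, 0.0)
print(hi <= f(0.0))  # True  -> consistent with upper semicontinuity at 0
print(lo >= f(0.0))  # False -> lower semicontinuity fails at 0
```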
The floor function $f(x)=\lfloor x\rfloor ,$ which returns the greatest integer less than or equal to a given real number $x,$ is everywhere upper semicontinuous. Similarly, the ceiling function $f(x)=\lceil x\rceil $ is lower semicontinuous.
Upper and lower semicontinuity bear no relation to continuity from the left or from the right for functions of a real variable. Semicontinuity is defined in terms of an ordering in the range of the functions, not in the domain.[3] For example the function
$f(x)={\begin{cases}\sin(1/x)&{\mbox{if }}x\neq 0,\\1&{\mbox{if }}x=0,\end{cases}}$
is upper semicontinuous at $x=0$, although its one-sided limits at zero do not even exist.
If $X=\mathbb {R} ^{n}$ is a Euclidean space (or more generally, a metric space) and $\Gamma =C([0,1],X)$ is the space of curves in $X$ (with the supremum distance $d_{\Gamma }(\alpha ,\beta )=\sup\{d_{X}(\alpha (t),\beta (t)):t\in [0,1]\}$), then the length functional $L:\Gamma \to [0,+\infty ],$ which assigns to each curve $\alpha $ its length $L(\alpha ),$ is lower semicontinuous.[4] As an example, consider approximating the unit square diagonal by a staircase from below. The staircase always has length 2, while the diagonal line has only length ${\sqrt {2}}$.
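The staircase example can be checked directly. The following sketch (illustrative code; the `polyline_length` and `staircase` helpers are assumptions of this example) shows that the staircase lengths stay at 2 even as the staircases converge uniformly to the diagonal of length √2:

```python
import math

def polyline_length(points):
    # sum of distances between consecutive vertices
    return sum(math.dist(p, q) for p, q in zip(points, points[1:]))

def staircase(n):
    """Axis-aligned staircase from (0,0) to (1,1) with n steps, below the diagonal."""
    pts = [(0.0, 0.0)]
    for k in range(n):
        pts.append(((k + 1) / n, k / n))        # go right
        pts.append(((k + 1) / n, (k + 1) / n))  # go up
    return pts

for n in (1, 10, 100):
    print(n, polyline_length(staircase(n)))  # always 2.0
print(math.dist((0, 0), (1, 1)))             # sqrt(2) ≈ 1.4142
```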
Let $(X,\mu )$ be a measure space and let $L^{+}(X,\mu )$ denote the set of positive measurable functions endowed with the topology of convergence in measure with respect to $\mu .$ Then by Fatou's lemma the integral, seen as an operator from $L^{+}(X,\mu )$ to $[-\infty ,+\infty ]$ is lower semicontinuous.
Properties
Unless specified otherwise, all functions below are from a topological space $X$ to the extended real numbers ${\overline {\mathbb {R} }}=[-\infty ,\infty ].$ Several of the results hold for semicontinuity at a specific point, but for brevity they are stated only for semicontinuity over the whole domain.
• A function $f:X\to {\overline {\mathbb {R} }}$ is continuous if and only if it is both upper and lower semicontinuous.
• The indicator function of a set $A\subset X$ (defined by $\mathbf {1} _{A}(x)=1$ if $x\in A$ and $0$ if $x\notin A$) is upper semicontinuous if and only if $A$ is a closed set. It is lower semicontinuous if and only if $A$ is an open set.[note 1]
• The sum $f+g$ of two lower semicontinuous functions is lower semicontinuous[5] (provided the sum is well-defined, i.e., $f(x)+g(x)$ is not the indeterminate form $-\infty +\infty $). The same holds for upper semicontinuous functions.
• If both functions are non-negative, the product function $fg$ of two lower semicontinuous functions is lower semicontinuous. The corresponding result holds for upper semicontinuous functions.
• A function $f:X\to {\overline {\mathbb {R} }}$ is lower semicontinuous if and only if $-f$ is upper semicontinuous.
• The composition $f\circ g$ of upper semicontinuous functions is not necessarily upper semicontinuous, but if $f$ is also non-decreasing, then $f\circ g$ is upper semicontinuous.[6]
• The minimum and the maximum of two lower semicontinuous functions are lower semicontinuous. In other words, the set of all lower semicontinuous functions from $X$ to ${\overline {\mathbb {R} }}$ (or to $\mathbb {R} $) forms a lattice. The same holds for upper semicontinuous functions.
• The (pointwise) supremum of an arbitrary family $(f_{i})_{i\in I}$ of lower semicontinuous functions $f_{i}:X\to {\overline {\mathbb {R} }}$ (defined by $f(x)=\sup\{f_{i}(x):i\in I\}$) is lower semicontinuous.[7]
In particular, the limit of a monotone increasing sequence $f_{1}\leq f_{2}\leq f_{3}\leq \cdots $ of continuous functions is lower semicontinuous. (The Theorem of Baire below provides a partial converse.) In general the limit function is only lower semicontinuous, not continuous. An example is given by the functions $f_{n}(x)=1-(1-x)^{n}$ defined for $x\in [0,1]$ for $n=1,2,\ldots $ (a numerical sketch of this example follows the list).
Likewise, the infimum of an arbitrary family of upper semicontinuous functions is upper semicontinuous. And the limit of a monotone decreasing sequence of continuous functions is upper semicontinuous.
• (Theorem of Baire)[note 2] Assume $X$ is a metric space. Every lower semicontinuous function $f:X\to {\overline {\mathbb {R} }}$ is the limit of a monotone increasing sequence of extended real-valued continuous functions on $X$; if $f$ does not take the value $-\infty $, the continuous functions can be taken to be real-valued.[8][9]
And every upper semicontinuous function $f:X\to {\overline {\mathbb {R} }}$ is the limit of a monotone decreasing sequence of extended real-valued continuous functions on $X$; if $f$ does not take the value $\infty ,$ the continuous functions can be taken to be real-valued.
• If $C$ is a compact space (for instance a closed bounded interval $[a,b]$) and $f:C\to {\overline {\mathbb {R} }}$ is upper semicontinuous, then $f$ has a maximum on $C.$ If $f$ is lower semicontinuous on $C,$ it has a minimum on $C.$
(Proof for the upper semicontinuous case: By condition (5) in the definition, $f$ is continuous when ${\overline {\mathbb {R} }}$ is given the left order topology. So its image $f(C)$ is compact in that topology. And the compact sets in that topology are exactly the sets with a maximum. For an alternative proof, see the article on the extreme value theorem.)
• Any upper semicontinuous function $f:X\to \mathbb {N} $ on an arbitrary topological space $X$ is locally constant on some dense open subset of $X.$
• Tonelli's theorem in functional analysis characterizes the weak lower semicontinuity of nonlinear functionals on Lp spaces in terms of the convexity of another function.
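As referenced above, the sequence $f_{n}(x)=1-(1-x)^{n}$ can be examined pointwise. In this hedged sketch (illustrative names only, not part of the article), the pointwise limit jumps at $x=0$, which is permitted for a lower semicontinuous function:

```python
# f_n increases pointwise on [0, 1]; its limit is 0 at x = 0 and 1 elsewhere.

def f(n, x):
    return 1 - (1 - x) ** n

limit = lambda x: 0.0 if x == 0 else 1.0  # pointwise limit on [0, 1]

for x in (0.0, 1e-6, 0.5, 1.0):
    vals = [f(n, x) for n in (1, 10, 100, 10000)]
    print(x, vals, "->", limit(x))
# Near x = 0 the limit jumps from 0 up to 1: nearby values are not much
# *lower* than the value at 0, so lower semicontinuity holds there, while
# continuity (and upper semicontinuity) fails.
```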
See also
• Directional continuity – Mathematical function with no sudden changes
• Katětov–Tong insertion theorem – On existence of a continuous function between semicontinuous upper and lower bounds
• Semicontinuous set-valued function
Notes
1. In the context of convex analysis, the characteristic function of a set $A$ is defined differently, as $\chi _{A}(x)=0$ if $x\in A$ and $\chi _{A}(x)=\infty $ if $x\notin A$. With that definition, the characteristic function of any closed set is lower semicontinuous, and the characteristic function of any open set is upper semicontinuous.
2. The result was proved by René Baire in 1904 for real-valued functions defined on $\mathbb {R} $. It was extended to metric spaces by Hans Hahn in 1917, and Hing Tong showed in 1952 that the most general class of spaces where the theorem holds is the class of perfectly normal spaces. (See Engelking, Exercise 1.7.15(c), p. 62 for details and specific references.)
References
1. Verry, Matthieu. "Histoire des mathématiques - René Baire".
2. Stromberg, p. 132, Exercise 4
3. Willard, p. 49, problem 7K
4. Giaquinta, Mariano (2007). Mathematical analysis : linear and metric structures and continuity. Giuseppe Modica (1 ed.). Boston: Birkhäuser. Theorem 11.3, p.396. ISBN 978-0-8176-4514-4. OCLC 213079540.
5. Puterman, Martin L. (2005). Markov Decision Processes Discrete Stochastic Dynamic Programming. Wiley-Interscience. pp. 602. ISBN 978-0-471-72782-8.
6. Moore, James C. (1999). Mathematical methods for economic theory. Berlin: Springer. p. 143. ISBN 9783540662358.
7. "To show that the supremum of any collection of lower semicontinuous functions is lower semicontinuous".
8. Stromberg, p. 132, Exercise 4(g)
9. "Show that lower semicontinuous function is the supremum of an increasing sequence of continuous functions".
Bibliography
• Benesova, B.; Kruzik, M. (2017). "Weak Lower Semicontinuity of Integral Functionals and Applications". SIAM Review. 59 (4): 703–766. arXiv:1601.00390. doi:10.1137/16M1060947. S2CID 119668631.
• Bourbaki, Nicolas (1998). Elements of Mathematics: General Topology, 1–4. Springer. ISBN 0-201-00636-7.
• Bourbaki, Nicolas (1998). Elements of Mathematics: General Topology, 5–10. Springer. ISBN 3-540-64563-2.
• Engelking, Ryszard (1989). General Topology. Heldermann Verlag, Berlin. ISBN 3-88538-006-4.
• Gelbaum, Bernard R.; Olmsted, John M.H. (2003). Counterexamples in analysis. Dover Publications. ISBN 0-486-42875-3.
• Hyers, Donald H.; Isac, George; Rassias, Themistocles M. (1997). Topics in nonlinear analysis & applications. World Scientific. ISBN 981-02-2534-2.
• Stromberg, Karl (1981). Introduction to Classical Real Analysis. Wadsworth. ISBN 978-0-534-98012-2.
• Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
• Zălinescu, Constantin (30 July 2002). Convex Analysis in General Vector Spaces. River Edge, N.J. London: World Scientific Publishing. ISBN 978-981-4488-15-0. MR 1921556. OCLC 285163112 – via Internet Archive.
Convex analysis and variational analysis
Basic concepts
• Convex combination
• Convex function
• Convex set
Topics (list)
• Choquet theory
• Convex geometry
• Convex metric space
• Convex optimization
• Duality
• Lagrange multiplier
• Legendre transformation
• Locally convex topological vector space
• Simplex
Maps
• Convex conjugate
• Concave
• (Closed
• K-
• Logarithmically
• Proper
• Pseudo-
• Quasi-) Convex function
• Invex function
• Legendre transformation
• Semi-continuity
• Subderivative
Main results (list)
• Carathéodory's theorem
• Ekeland's variational principle
• Fenchel–Moreau theorem
• Fenchel-Young inequality
• Jensen's inequality
• Hermite–Hadamard inequality
• Krein–Milman theorem
• Mazur's lemma
• Shapley–Folkman lemma
• Robinson-Ursescu
• Simons
• Ursescu
Sets
• Convex hull
• (Orthogonally, Pseudo-) Convex set
• Effective domain
• Epigraph
• Hypograph
• John ellipsoid
• Lens
• Radial set/Algebraic interior
• Zonotope
Series
• Convex series related ((cs, lcs)-closed, (cs, bcs)-complete, (lower) ideally convex, (Hx), and (Hwx))
Duality
• Dual system
• Duality gap
• Strong duality
• Weak duality
Applications and related
• Convexity in economics
| Wikipedia |
Upper topology
In mathematics, the upper topology on a partially ordered set X is the coarsest topology in which the closure of a singleton $\{a\}$ is the order section $a]=\{x:x\leq a\}$ for each $a\in X.$ If $\leq $ is a partial order, the upper topology is the least order-consistent topology in which all open sets are up-sets; however, not every up-set need be open. The lower topology induced by the preorder is defined dually in terms of the down-sets. The preorder inducing the upper topology is its specialization preorder, but the specialization preorder of the lower topology is opposite to the inducing preorder.
The real upper topology is most naturally defined on the upper-extended real line $(-\infty ,+\infty ]=\mathbb {R} \cup \{+\infty \}$ by the system $\{(a,+\infty ]:a\in \mathbb {R} \cup \{\pm \infty \}\}$ of open sets. Similarly, the real lower topology $\{[-\infty ,a):a\in \mathbb {R} \cup \{\pm \infty \}\}$ is naturally defined on the lower real line $[-\infty ,+\infty )=\mathbb {R} \cup \{-\infty \}.$ A real function on a topological space is upper semi-continuous if and only if it is lower-continuous, i.e. is continuous with respect to the lower topology on the lower-extended line ${[-\infty ,+\infty )}.$ Similarly, a function into the upper real line is lower semi-continuous if and only if it is upper-continuous, i.e. is continuous with respect to the upper topology on ${(-\infty ,+\infty ]}.$
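For a finite poset the defining property can be enumerated directly. The following sketch (illustrative code under the stated assumptions, not part of the article) lists, for the divisibility order on {1, 2, 3, 6}, the principal down-sets — exactly the closures of singletons in the upper topology — together with their complements, which form a subbasis of open sets:

```python
# Divisibility order on {1, 2, 3, 6}: x <= y iff x divides y.
X = [1, 2, 3, 6]
leq = lambda x, y: y % x == 0

down = {a: frozenset(x for x in X if leq(x, a)) for a in X}       # closures of {a}
subbasis = {a: frozenset(set(X) - down[a]) for a in X}            # subbasic opens

for a in X:
    print(f"closure of {{{a}}} = {set(down[a])}, subbasic open = {set(subbasis[a])}")
# E.g. the closure of {6} is all of X (everything divides 6), so {6} is dense.
```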
See also
• List of topologies – List of concrete topologies and topological spaces
References
• Gerhard Gierz; K.H. Hofmann; K. Keimel; J. D. Lawson; M. Mislove; D. S. Scott (2003). Continuous Lattices and Domains. Cambridge University Press. p. 510. ISBN 0-521-80338-1.
• Kelley, John L. (1955). General Topology. Van Nostrand Reinhold. p. 101.
• Knapp, Anthony W. (2005). Basic Real Analysis. Birkhhauser. p. 481. ISBN 0-8176-3250-6.
| Wikipedia |
Upward planar drawing
In graph drawing, an upward planar drawing of a directed acyclic graph is an embedding of the graph into the Euclidean plane, in which the edges are represented as non-crossing monotonic upwards curves. That is, the curve representing each edge should have the property that every horizontal line intersects it in at most one point, and no two edges may intersect except at a shared endpoint.[1] In this sense, it is the ideal case for layered graph drawing, a style of graph drawing in which edges are monotonic curves that may cross, but in which crossings are to be minimized.
Characterizations
A directed acyclic graph must be planar in order to have an upward planar drawing, but not every planar acyclic graph has such a drawing. Among the planar directed acyclic graphs with a single source (vertex with no incoming edges) and sink (vertex with no outgoing edges), the graphs with upward planar drawings are the st-planar graphs, planar graphs in which the source and sink both belong to the same face of at least one of the planar embeddings of the graph. More generally, a graph G has an upward planar drawing if and only if it is directed and acyclic, and is a subgraph of an st-planar graph on the same vertex set.[2]
In an upward embedding, the sets of incoming and outgoing edges incident to each vertex are contiguous in the cyclic ordering of the edges at the vertex. A planar embedding of a given directed acyclic graph is said to be bimodal when it has this property. Additionally, the angle between two consecutive edges with the same orientation at a given vertex may be labeled as small if it is less than π, or large if it is greater than π. Each source or sink must have exactly one large angle, and each vertex that is neither a source nor a sink must have none. Additionally, each internal face of the drawing must have two more small angles than large ones, and the external face must have two more large angles than small ones. A consistent assignment is a labeling of the angles that satisfies these properties; every upward embedding has a consistent assignment. Conversely, every directed acyclic graph that has a bimodal planar embedding with a consistent assignment has an upward planar drawing, that can be constructed from it in linear time.[3]
Another characterization is possible for graphs with a single source. In this case an upward planar embedding must have the source on the outer face, and every undirected cycle of the graph must have at least one vertex at which both cycle edges are incoming (for instance, the vertex with the highest placement in the drawing). Conversely, if an embedding has both of these properties, then it is equivalent to an upward embedding.[4]
Computational complexity
Several special cases of upward planarity testing are known to be possible in polynomial time:
• Testing whether a graph is st-planar may be accomplished in linear time by adding an edge from s to t and testing whether the resulting graph is planar (see the sketch after this list). Along the same lines, it is possible to construct an upward planar drawing (when it exists) of a directed acyclic graph with a single source and sink, in linear time.[5]
• Testing whether a directed graph with a fixed planar embedding can be drawn upward planar, with an embedding consistent with the given one, can be accomplished by checking that the embedding is bimodal and modeling the consistent assignment problem as a network flow problem. The running time is linear in the size of the input graph, and polynomial in its number of sources and sinks.[6]
• Because oriented polyhedral graphs have a unique planar embedding, the existence of an upward planar drawing for these graphs may be tested in polynomial time.[7]
• Testing whether an outerplanar directed acyclic graph has an upward planar drawing is also polynomial.[8]
• Every series–parallel graph, oriented consistently with the series–parallel structure, is upward planar. An upward planar drawing can be constructed directly from the series–parallel decomposition of the graph.[9] More generally, arbitrary orientations of undirected series–parallel graphs may be tested for upward planarity in polynomial time.[10]
• Every oriented tree is upward planar.[9]
• Every bipartite planar graph, with its edges oriented consistently from one side of the bipartition to the other, is upward planar.[9][11]
• A more complicated polynomial time algorithm is known for testing upward planarity of graphs that have a single source, but multiple sinks, or vice versa.[12]
• Testing upward planarity can be performed in polynomial time when there are a constant number of triconnected components and cut vertices, and is fixed-parameter tractable in these two numbers.[13] It is also fixed-parameter tractable in the cyclomatic number of the input graph,[14] and in the number of sources (i.e., vertices with no in-edges).[15]
• If the y-coordinates of all vertices are fixed, then a choice of x-coordinates that makes the drawing upward planar can be found in polynomial time.[16]
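Here is a hedged sketch of the planarity part of the single-source, single-sink test from the first bullet above, using the networkx library (an assumption of this example; the function name `is_st_planar` is likewise illustrative). It adds the edge from s to t and tests planarity of the underlying undirected graph:

```python
import networkx as nx

def is_st_planar(dag, s, t):
    """Add the edge s -> t and test planarity of the underlying graph."""
    g = dag.to_undirected()
    g.add_edge(s, t)
    planar, _embedding = nx.check_planarity(g)
    return planar

# Example: the diamond s -> a -> t, s -> b -> t is st-planar.
d = nx.DiGraph([("s", "a"), ("a", "t"), ("s", "b"), ("b", "t")])
print(is_st_planar(d, "s", "t"))  # True
```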
However, it is NP-complete to determine whether a planar directed acyclic graph with multiple sources and sinks has an upward planar drawing.[17]
Straight-line drawing and area requirements
Fáry's theorem states that every planar graph has a drawing in which its edges are represented by straight line segments, and the same is true of upward planar drawing: every upward planar graph has a straight upward planar drawing.[18] A straight-line upward drawing of a transitively reduced st-planar graph may be obtained by the technique of dominance drawing, with all vertices having integer coordinates within an n × n grid.[19] However, certain other upward planar graphs may require exponential area in all of their straight-line upward planar drawings.[18] If a choice of embedding is fixed, even oriented series parallel graphs and oriented trees may require exponential area.[20]
Hasse diagrams
Upward planar drawings are particularly important for Hasse diagrams of partially ordered sets, as these diagrams are typically required to be drawn upwardly. In graph-theoretic terms, these correspond to the transitively reduced directed acyclic graphs; such a graph can be formed from the covering relation of a partial order, and the partial order itself forms the reachability relation in the graph. If a partially ordered set has one minimal element, has one maximal element, and has an upward planar drawing, then it must necessarily form a lattice, a set in which every pair of elements has a unique greatest lower bound and a unique least upper bound.[21] The Hasse diagram of a lattice is planar if and only if its order dimension is at most two.[22] However, some partial orders of dimension two and with one minimal and maximal element do not have an upward planar drawing (take the order defined by the transitive closure of $a<b,a<c,b<d,b<e,c<d,c<e,d<f,e<f$).
References
Footnotes
1. Garg & Tamassia (1995); Di Battista et al. (1998).
2. Garg & Tamassia (1995), pp. 111–112; Di Battista et al. (1998), 6.1 "Inclusion in a Planar st-Graph", pp. 172–179; Di Battista & Tamassia (1988); Kelly (1987).
3. Garg & Tamassia (1995), pp. 112–115; Di Battista et al. (1998), 6.2 "Angles in Upward Drawings", pp. 180–188; Bertolazzi & Di Battista (1991); Bertolazzi et al. (1994).
4. Garg & Tamassia (1995), p. 115; Di Battista et al. (1998), 6.7.2 "Forbidden Cycles for Single-Source Digraphs", pp. 209–210; Thomassen (1989).
5. Garg & Tamassia (1995), p. 119; Di Battista et al. (1998), p. 179.
6. Garg & Tamassia (1995), pp. 119–121; Di Battista et al. (1998), 6.3 "Upward Planarity Testing of Embedded Digraphs", pp. 188–192; Bertolazzi & Di Battista (1991); Bertolazzi et al. (1994); Abbasi, Healy & Rextin (2010).
7. Di Battista et al. (1998), pp. 191–192; Bertolazzi & Di Battista (1991); Bertolazzi et al. (1994).
8. Garg & Tamassia (1995), pp. 125–126; Di Battista et al. (1998), 6.7.1 "Outerplanar Digraph", p. 209; Papakostas (1995).
9. Di Battista et al. (1998), 6.7.4 "Some Classes of Upward Planar Digraphs", p. 212.
10. Didimo, Giordano & Liotta (2009).
11. Di Battista, Liu & Rival (1990).
12. Garg & Tamassia (1995), pp. 122–125; Di Battista et al. (1998), 6.5 "Optimal Upward Planarity Testing of Single-Source Digraphs", pp. 195–200; Hutton & Lubiw (1996); Bertolazzi et al. (1998).
13. Chan (2004); Healy & Lynch (2006).
14. Healy & Lynch (2006).
15. Chaplick et al. (2022).
16. Jünger & Leipert (1999).
17. Garg & Tamassia (1995), pp. 126–132; Di Battista et al. (1998), 6.6 "Upward Planarity Testing is NP-complete", pp. 201–209; Garg & Tamassia (2001).
18. Di Battista & Frati (2012); Di Battista, Tamassia & Tollis (1992).
19. Di Battista et al. (1998), 4.7 "Dominance Drawings", pp. 112–127; Di Battista, Tamassia & Tollis (1992).
20. Di Battista & Frati (2012); Bertolazzi et al. (1994); Frati (2008).
21. Di Battista et al. (1998), 6.7.3 "Forbidden Structures for Lattices", pp. 210–212; Platt (1976).
22. Garg & Tamassia (1995), pp. 118; Baker, Fishburn & Roberts (1972).
Surveys and textbooks
• Di Battista, Giuseppe; Eades, Peter; Tamassia, Roberto; Tollis, Ioannis G. (1998), "Flow and Upward Planarity", Graph Drawing: Algorithms for the Visualization of Graphs, Prentice Hall, pp. 171–213, ISBN 978-0-13-301615-4.
• Di Battista, Giuseppe; Frati, Fabrizio (2012), "Drawing trees, outerplanar graphs, series–parallel graphs, and planar graphs in small area", Thirty Essays on Geometric Graph Theory, Algorithms and combinatorics, vol. 29, Springer, pp. 121–165, doi:10.1007/978-1-4614-0110-0_9, ISBN 9781461401100. Section 5, "Upward Drawings", pp. 149–151.
• Garg, Ashim; Tamassia, Roberto (1995), "Upward planarity testing", Order, 12 (2): 109–133, doi:10.1007/BF01108622, MR 1354797, S2CID 14183717.
Research articles
• Abbasi, Sarmad; Healy, Patrick; Rextin, Aimal (2010), "Improving the running time of embedded upward planarity testing", Information Processing Letters, 110 (7): 274–278, doi:10.1016/j.ipl.2010.02.004, MR 2642837.
• Baker, K. A.; Fishburn, P. C.; Roberts, F. S. (1972), "Partial orders of dimension 2", Networks, 2 (1): 11–28, doi:10.1002/net.3230020103.
• Bertolazzi, Paola; Cohen, Robert F.; Di Battista, Giuseppe; Tamassia, Roberto; Tollis, Ioannis G. (1994), "How to draw a series–parallel digraph", International Journal of Computational Geometry & Applications, 4 (4): 385–402, doi:10.1142/S0218195994000215, MR 1310911.
• Bertolazzi, Paola; Di Battista, Giuseppe (1991), "On upward drawing testing of triconnected digraphs", Proceedings of the Seventh Annual Symposium on Computational Geometry (SCG '91, North Conway, New Hampshire, USA), New York, NY, USA: ACM, pp. 272–280, doi:10.1145/109648.109679, ISBN 0-89791-426-0, S2CID 18306721.
• Bertolazzi, P.; Di Battista, G.; Liotta, G.; Mannino, C. (1994), "Upward drawings of triconnected digraphs", Algorithmica, 12 (6): 476–497, doi:10.1007/BF01188716, MR 1297810, S2CID 33167313.
• Bertolazzi, Paola; Di Battista, Giuseppe; Mannino, Carlo; Tamassia, Roberto (1998), "Optimal upward planarity testing of single-source digraphs", SIAM Journal on Computing, 27 (1): 132–169, doi:10.1137/S0097539794279626, MR 1614821.
• Chan, Hubert (2004), "A parameterized algorithm for upward planarity testing", Proc. 12th European Symposium on Algorithms (ESA '04), Lecture Notes in Computer Science, vol. 3221, Springer-Verlag, pp. 157–168, doi:10.1007/978-3-540-30140-0_16.
• Di Battista, Giuseppe; Liu, Wei-Ping; Rival, Ivan (1990), "Bipartite graphs, upward drawings, and planarity", Information Processing Letters, 36 (6): 317–322, doi:10.1016/0020-0190(90)90045-Y, MR 1084490.
• Di Battista, Giuseppe; Tamassia, Roberto (1988), "Algorithms for plane representations of acyclic digraphs", Theoretical Computer Science, 61 (2–3): 175–198, doi:10.1016/0304-3975(88)90123-5, MR 0980241.
• Di Battista, Giuseppe; Tamassia, Roberto; Tollis, Ioannis G. (1992), "Area requirement and symmetry display of planar upward drawings", Discrete and Computational Geometry, 7 (4): 381–401, doi:10.1007/BF02187850, MR 1148953.
• Didimo, Walter; Giordano, Francesco; Liotta, Giuseppe (2009), "Upward spirality and upward planarity testing", SIAM Journal on Discrete Mathematics, 23 (4): 1842–1899, doi:10.1137/070696854, MR 2594962, S2CID 26154284.
• Frati, Fabrizio (2008), "On minimum area planar upward drawings of directed trees and other families of directed acyclic graphs", International Journal of Computational Geometry & Applications, 18 (3): 251–271, doi:10.1142/S021819590800260X, MR 2424444.
• Garg, Ashim; Tamassia, Roberto (2001), "On the computational complexity of upward and rectilinear planarity testing", SIAM Journal on Computing, 31 (2): 601–625, doi:10.1137/S0097539794277123, MR 1861292, S2CID 15691098.
• Healy, Patrick; Lynch, Karol (2006), "Two fixed-parameter tractable algorithms for testing upward planarity", International Journal of Foundations of Computer Science, 17 (5): 1095–1114, doi:10.1142/S0129054106004285.
• Hutton, Michael D.; Lubiw, Anna (1996), "Upward planar drawing of single-source acyclic digraphs", SIAM Journal on Computing, 25 (2): 291–311, doi:10.1137/S0097539792235906, MR 1379303. First presented at the 2nd ACM-SIAM Symposium on Discrete Algorithms, 1991.
• Jünger, Michael; Leipert, Sebastian (1999), "Level planar embedding in linear time", Graph Drawing (Proc. GD '99), Lecture Notes in Computer Science, vol. 1731, pp. 72–81, doi:10.1007/3-540-46648-7_7, ISBN 978-3-540-66904-3.
• Kelly, David (1987), "Fundamentals of planar ordered sets", Discrete Mathematics, 63 (2–3): 197–216, doi:10.1016/0012-365X(87)90008-2, MR 0885497.
• Papakostas, Achilleas (1995), "Upward planarity testing of outerplanar dags (extended abstract)", Graph Drawing: DIMACS International Workshop, GD '94, Princeton, New Jersey, USA, October 10–12, 1994, Proceedings, Lecture Notes in Computer Science, vol. 894, Berlin: Springer, pp. 298–306, doi:10.1007/3-540-58950-3_385, MR 1337518.
• Platt, C. R. (1976), "Planar lattices and planar graphs", Journal of Combinatorial Theory, Ser. B, 21 (1): 30–39, doi:10.1016/0095-8956(76)90024-1.
• Thomassen, Carsten (1989), "Planar acyclic oriented graphs", Order, 5 (4): 349–361, doi:10.1007/BF00353654, MR 1010384, S2CID 121445872.
• Chaplick, Steven; Di Giacomo, Emilio; Frati, Fabrizio; Ganian, Robert; Raftopoulou, Chrysanthi N.; Simonov, Kirill (2022), "Parameterized Algorithms for Upward Planarity", 38th International Symposium on Computational Geometry, SoCG, Leibniz International Proceedings in Informatics (LIPIcs), vol. 224, pp. 26:1–26:16, doi:10.4230/LIPIcs.SoCG.2022.26, ISBN 9783959772273
| Wikipedia |
Upwind scheme
In computational physics, the term upwind scheme (sometimes advection scheme) typically refers to a class of numerical discretization methods for solving hyperbolic partial differential equations, in which so-called upstream variables are used to calculate the derivatives in a flow field. That is, derivatives are estimated using a set of data points biased to be more "upwind" of the query point, with respect to the direction of the flow. Historically, the origin of upwind methods can be traced back to the work of Courant, Isaacson, and Rees, who proposed the CIR method.[1]
Model equation
To illustrate the method, consider the following one-dimensional linear advection equation
${\frac {\partial u}{\partial t}}+a{\frac {\partial u}{\partial x}}=0$
which describes a wave propagating along the $x$-axis with a velocity $a$. Consider a typical grid point $i$ in the domain. In a one-dimensional domain, there are only two directions associated with point $i$ – left (towards negative infinity) and right (towards positive infinity). If $a$ is positive, the traveling wave solution of the equation above propagates towards the right; the left side of $i$ is then called the upwind side and the right side the downwind side. Similarly, if $a$ is negative, the traveling wave solution propagates towards the left; the left side is then the downwind side and the right side the upwind side. If the finite difference scheme for the spatial derivative $\partial u/\partial x$ contains more points on the upwind side, the scheme is called an upwind-biased or simply an upwind scheme.
First-order upwind scheme
The simplest upwind scheme possible is the first-order upwind scheme. It is given by[2]
${\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}+a{\frac {u_{i}^{n}-u_{i-1}^{n}}{\Delta x}}=0\quad {\text{for}}\quad a>0$
(1)
${\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}+a{\frac {u_{i+1}^{n}-u_{i}^{n}}{\Delta x}}=0\quad {\text{for}}\quad a<0$
(2)
where $n$ refers to the $t$ dimension and $i$ refers to the $x$ dimension. (By comparison, a central difference scheme in this scenario would look like
${\frac {u_{i}^{n+1}-u_{i}^{n}}{\Delta t}}+a{\frac {u_{i+1}^{n}-u_{i-1}^{n}}{2\Delta x}}=0,$
regardless of the sign of $a$.)
Compact form
Defining
$a^{+}=\max(a,0)\,,\qquad a^{-}=\min(a,0)$
and
$u_{x}^{-}={\frac {u_{i}^{n}-u_{i-1}^{n}}{\Delta x}}\,,\qquad u_{x}^{+}={\frac {u_{i+1}^{n}-u_{i}^{n}}{\Delta x}}$
the two conditional equations (1) and (2) can be combined and written in a compact form as
$u_{i}^{n+1}=u_{i}^{n}-\Delta t\left[a^{+}u_{x}^{-}+a^{-}u_{x}^{+}\right]$
(3)
Equation (3) is a general way of writing any upwind-type schemes.
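A minimal NumPy sketch of the compact form (3) follows (an illustration, not a reference implementation; the periodic boundary treatment and all variable names are assumptions added here):

```python
import numpy as np

def upwind_step(u, a, dt, dx):
    """One first-order upwind update of u_t + a u_x = 0, periodic in x (eq. 3)."""
    a_plus, a_minus = max(a, 0.0), min(a, 0.0)
    ux_minus = (u - np.roll(u, 1)) / dx    # backward difference u_x^-
    ux_plus = (np.roll(u, -1) - u) / dx    # forward difference  u_x^+
    return u - dt * (a_plus * ux_minus + a_minus * ux_plus)

# Advect a square pulse to the right; time step chosen so |a|*dt/dx = 0.5
# (stable; see the CFL condition below).
n, a = 200, 1.0
dx = 1.0 / n
dt = 0.5 * dx / abs(a)
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.where((x > 0.1) & (x < 0.3), 1.0, 0.0)
for _ in range(100):
    u = upwind_step(u, a, dt, dx)
# After 100 steps the pulse has moved right by a*dt*100 = 0.25, with
# smeared edges illustrating the scheme's numerical diffusion.
```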
Stability
The upwind scheme is stable if the following Courant–Friedrichs–Lewy condition (CFL) is satisfied.[3]
$c=\left|{\frac {a\Delta t}{\Delta x}}\right|\leq 1.$
A Taylor series analysis of the upwind scheme discussed above shows that it is first-order accurate in space and time. Modified wavenumber analysis shows that the first-order upwind scheme introduces severe numerical diffusion (dissipation) in regions where the solution has large gradients, since representing sharp gradients requires high wavenumbers.
Second-order upwind scheme
The spatial accuracy of the first-order upwind scheme can be improved by including 3 data points instead of just 2, which offers a more accurate finite difference stencil for the approximation of spatial derivative. For the second-order upwind scheme, $u_{x}^{-}$ becomes the 3-point backward difference in equation (3) and is defined as
$u_{x}^{-}={\frac {3u_{i}^{n}-4u_{i-1}^{n}+u_{i-2}^{n}}{2\Delta x}}$
and $u_{x}^{+}$ is the 3-point forward difference, defined as
$u_{x}^{+}={\frac {-u_{i+2}^{n}+4u_{i+1}^{n}-3u_{i}^{n}}{2\Delta x}}$
This scheme is less diffusive compared to the first-order accurate scheme and is called linear upwind differencing (LUD) scheme.
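Under the same assumptions as the sketch above, the second-order LUD scheme only swaps in the 3-point one-sided differences:

```python
import numpy as np

def lud_step(u, a, dt, dx):
    """One second-order (LUD) upwind update, periodic in x."""
    a_plus, a_minus = max(a, 0.0), min(a, 0.0)
    # 3-point backward and forward differences from the formulas above
    ux_minus = (3 * u - 4 * np.roll(u, 1) + np.roll(u, 2)) / (2 * dx)
    ux_plus = (-np.roll(u, -2) + 4 * np.roll(u, -1) - 3 * u) / (2 * dx)
    return u - dt * (a_plus * ux_minus + a_minus * ux_plus)
```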
See also
• Finite difference method
• Upwind differencing scheme for convection
• Godunov's scheme
References
1. Courant, Richard; Isaacson, E; Rees, M. (1952). "On the Solution of Nonlinear Hyperbolic Differential Equations by Finite Differences". Comm. Pure Appl. Math. 5 (3): 243–255. doi:10.1002/cpa.3160050303.
2. Patankar, S. V. (1980). Numerical Heat Transfer and Fluid Flow. Taylor & Francis. ISBN 978-0-89116-522-4.
3. Hirsch, C. (1990). Numerical Computation of Internal and External Flows. John Wiley & Sons. ISBN 978-0-471-92452-4.
Numerical methods for partial differential equations
Finite difference
Parabolic
• Forward-time central-space (FTCS)
• Crank–Nicolson
Hyperbolic
• Lax–Friedrichs
• Lax–Wendroff
• MacCormack
• Upwind
• Method of characteristics
Others
• Alternating direction-implicit (ADI)
• Finite-difference time-domain (FDTD)
Finite volume
• Godunov
• High-resolution
• Monotonic upstream-centered (MUSCL)
• Advection upstream-splitting (AUSM)
• Riemann solver
• Essentially non-oscillatory (ENO)
• Weighted essentially non-oscillatory (WENO)
Finite element
• hp-FEM
• Extended (XFEM)
• Discontinuous Galerkin (DG)
• Spectral element (SEM)
• Mortar
• Gradient discretisation (GDM)
• Loubignac iteration
• Smoothed (S-FEM)
Meshless/Meshfree
• Smoothed-particle hydrodynamics (SPH)
• Peridynamics (PD)
• Moving particle semi-implicit method (MPS)
• Material point method (MPM)
• Particle-in-cell (PIC)
Domain decomposition
• Schur complement
• Fictitious domain
• Schwarz alternating
• additive
• abstract additive
• Neumann–Dirichlet
• Neumann–Neumann
• Poincaré–Steklov operator
• Balancing (BDD)
• Balancing by constraints (BDDC)
• Tearing and interconnect (FETI)
• FETI-DP
Others
• Spectral
• Pseudospectral (DVR)
• Method of lines
• Multigrid
• Collocation
• Level-set
• Boundary element
• Method of moments
• Immersed boundary
• Analytic element
• Isogeometric analysis
• Infinite difference method
• Infinite element method
• Galerkin method
• Petrov–Galerkin method
• Validated numerics
• Computer-assisted proof
• Integrable algorithm
• Method of fundamental solutions
| Wikipedia |
Urelement
In set theory, a branch of mathematics, an urelement or ur-element (from the German prefix ur-, 'primordial') is an object that is not a set, but that may be an element of a set. It is also referred to as an atom or individual.
Theory
There are several different but essentially equivalent ways to treat urelements in a first-order theory.
One way is to work in a first-order theory with two sorts, sets and urelements, with a ∈ b only defined when b is a set. In this case, if U is an urelement, it makes no sense to say $X\in U$, although $U\in X$ is perfectly legitimate.
Another way is to work in a one-sorted theory with a unary relation used to distinguish sets and urelements. As non-empty sets contain members while urelements do not, the unary relation is only needed to distinguish the empty set from urelements. Note that in this case, the axiom of extensionality must be formulated to apply only to objects that are not urelements.
This situation is analogous to the treatments of theories of sets and classes. Indeed, urelements are in some sense dual to proper classes: urelements cannot have members whereas proper classes cannot be members. Put differently, urelements are minimal objects while proper classes are maximal objects by the membership relation (which, of course, is not an order relation, so this analogy is not to be taken literally).
Urelements in set theory
The Zermelo set theory of 1908 included urelements, and hence is a version now called ZFA or ZFCA (i.e. ZFA with axiom of choice).[1] It was soon realized that in the context of this and closely related axiomatic set theories, the urelements were not needed because they can easily be modeled in a set theory without urelements.[2] Thus, standard expositions of the canonical axiomatic set theories ZF and ZFC do not mention urelements (for an exception, see Suppes[3]). Axiomatizations of set theory that do invoke urelements include Kripke–Platek set theory with urelements and the variant of Von Neumann–Bernays–Gödel set theory described by Mendelson.[4] In type theory, an object of type 0 can be called an urelement; hence the name "atom".
Adding urelements to the system New Foundations (NF) to produce NFU has surprising consequences. In particular, Jensen proved[5] the consistency of NFU relative to Peano arithmetic; meanwhile, the consistency of NF relative to anything remains an open problem, pending verification of Holmes's proof of its consistency relative to ZF. Moreover, NFU remains relatively consistent when augmented with an axiom of infinity and the axiom of choice. By contrast, the negation of the axiom of choice is, curiously, a theorem of NF. Holmes (1998) takes these facts as evidence that NFU is a more successful foundation for mathematics than NF. Holmes further argues that set theory is more natural with than without urelements, since we may take as urelements the objects of any theory or of the physical universe.[6] In finitist set theory, urelements are mapped to the lowest-level components of the target phenomenon, such as atomic constituents of a physical object or members of an organisation.
Quine atoms
An alternative approach to urelements is to consider them, instead of as a type of object other than sets, as a particular type of set. Quine atoms (named after Willard Van Orman Quine) are sets that only contain themselves, that is, sets that satisfy the formula x = {x}.[7]
Quine atoms cannot exist in systems of set theory that include the axiom of regularity, but they can exist in non-well-founded set theory. ZF set theory with the axiom of regularity removed cannot prove that any non-well-founded sets exist (unless it is inconsistent, in which case it will prove any arbitrary statement), but it is compatible with the existence of Quine atoms. Aczel's anti-foundation axiom implies that there is a unique Quine atom. Other non-well-founded theories may admit many distinct Quine atoms; at the opposite end of the spectrum lies Boffa's axiom of superuniversality, which implies that the distinct Quine atoms form a proper class.[8]
Quine atoms also appear in Quine's New Foundations, which allows more than one such set to exist.[9]
Quine atoms are the only sets called reflexive sets by Peter Aczel,[8] although other authors, e.g. Jon Barwise and Lawrence Moss, use the latter term to denote the larger class of sets with the property x ∈ x.[10]
References
1. Dexter Chua et al.: ZFA: Zermelo–Fraenkel set theory with atoms, on: ncatlab.org: nLab, revised on July 16, 2016.
2. Jech, Thomas J. (1973). The Axiom of Choice. Mineola, New York: Dover Publ. p. 45. ISBN 0486466248.
3. Suppes, Patrick (1972). Axiomatic Set Theory ([Éd. corr. et augm. du texte paru en 1960] ed.). New York: Dover Publ. ISBN 0486616304. Retrieved 17 September 2012.
4. Mendelson, Elliott (1997). Introduction to Mathematical Logic (4th ed.). London: Chapman & Hall. pp. 297–304. ISBN 978-0412808302. Retrieved 17 September 2012.
5. Jensen, Ronald Björn (December 1968). "On the Consistency of a Slight (?) Modification of Quine's 'New Foundations'". Synthese. Springer. 19 (1/2): 250–264. doi:10.1007/bf00568059. ISSN 0039-7857. JSTOR 20114640. S2CID 46960777.
6. Holmes, Randall, 1998. Elementary Set Theory with a Universal Set. Academia-Bruylant.
7. Thomas Forster (2003). Logic, Induction and Sets. Cambridge University Press. p. 199. ISBN 978-0-521-53361-4.
8. Aczel, Peter (1988), Non-well-founded sets, CSLI Lecture Notes, vol. 14, Stanford University, Center for the Study of Language and Information, p. 57, ISBN 0-937073-22-9, MR 0940014, retrieved 2016-10-17.
9. Barwise, Jon; Moss, Lawrence S. (1996), Vicious circles. On the mathematics of non-wellfounded phenomena, CSLI Lecture Notes, vol. 60, CSLI Publications, p. 306, ISBN 1575860090.
10. Barwise, Jon; Moss, Lawrence S. (1996), Vicious circles. On the mathematics of non-wellfounded phenomena, CSLI Lecture Notes, vol. 60, CSLI Publications, p. 57, ISBN 1575860090.
External links
• Weisstein, Eric W. "Urelement". MathWorld.
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
| Wikipedia |
Urbach tail
The Urbach tail is an exponential part in the energy spectrum of the absorption coefficient. This tail appears near the optical band edge in amorphous, disordered and crystalline materials.
History
Researchers began questioning the nature of "tail states" in disordered semiconductors in the 1950s. It was found that such tails arise from local strains large enough to push localized states past the band edges.
In 1953, the Austrian-American physicist Franz Urbach (1902–1969)[1] found that such tails decay exponentially into the gap.[2] Later, photoemission experiments delivered absorption models revealing temperature dependence of the tail.[3]
A variety of amorphous and crystalline solids exhibit exponential band edges in their optical absorption spectra. The universality of this feature suggested a common cause. Several attempts were made to explain the phenomenon, but they could not connect specific topological units to the electronic structure.[4][5]
See also
• Tauc plot
References
1. Franz Urbach. Austrian Academy of Sciences
2. Urbach, Franz (1953). "The Long-Wavelength Edge of Photographic Sensitivity and of the Electronic Absorption of Solids". Physical Review. 92 (5): 1324. Bibcode:1953PhRv...92.1324U. doi:10.1103/physrev.92.1324.
3. Aljishi, Samer; Cohen, J. David; Jin, Shu; Ley, Lothar (1990-06-04). "Band tails in hydrogenated amorphous silicon and silicon-germanium alloys". Physical Review Letters. 64 (23): 2811–2814. Bibcode:1990PhRvL..64.2811A. doi:10.1103/physrevlett.64.2811. PMID 10041817.
4. Bacalis, N.; Economou, E. N.; Cohen, M. H. (1988). "Simple derivation of exponential tails in the density of states". Physical Review B. 37 (5): 2714–2717. Bibcode:1988PhRvB..37.2714B. doi:10.1103/physrevb.37.2714. PMID 9944833.
5. Cohen, M. H.; Chou, M.-Y.; Economou, E. N.; John, S.; Soukoulis, C. M. (1988). "Band tails, path integrals, instantons, polarons, and all that". IBM Journal of Research and Development. 32 (1): 82–92. doi:10.1147/rd.321.0082.
| Wikipedia |
Uriel Feige
Uriel Feige (Hebrew: אוריאל פייגה) is an Israeli computer scientist who was a doctoral student of Adi Shamir.
Uriel Feige
Alma mater: Weizmann Institute of Science (Ph.D., 1992)[1]
Known for: Feige–Fiat–Shamir identification scheme
Scientific career
Institutions: Weizmann Institute
Doctoral advisor: Adi Shamir
Life
Uriel Feige currently holds the post of Professor at the Department of Computer Science and Applied Mathematics, the Weizmann Institute of Science, Rehovot in Israel.[2]
Work
He is notable for co-inventing the Feige–Fiat–Shamir identification scheme along with Amos Fiat and Adi Shamir.
Honors and awards
He won the Gödel Prize in 2001 "for the PCP theorem and its applications to hardness of approximation".
References
1. Uriel Feige at the Mathematics Genealogy Project.
2. Uriel Feige's profile at the Weizmann Institute
Gödel Prize laureates
1990s
• Babai / Goldwasser / Micali / Moran / Rackoff (1993)
• Håstad (1994)
• Immerman / Szelepcsényi (1995)
• Jerrum / Sinclair (1996)
• Halpern / Moses (1997)
• Toda (1998)
• Shor (1999)
2000s
• Vardi / Wolper (2000)
• Arora / Feige / Goldwasser / Lund / Lovász / Motwani / Safra / Sudan / Szegedy (2001)
• Sénizergues (2002)
• Freund / Schapire (2003)
• Herlihy / Saks / Shavit / Zaharoglou (2004)
• Alon / Matias / Szegedy (2005)
• Agrawal / Kayal / Saxena (2006)
• Razborov / Rudich (2007)
• Teng / Spielman (2008)
• Reingold / Vadhan / Wigderson (2009)
2010s
• Arora / Mitchell (2010)
• Håstad (2011)
• Koutsoupias / Papadimitriou / Roughgarden / É. Tardos / Nisan / Ronen (2012)
• Boneh / Franklin / Joux (2013)
• Fagin / Lotem / Naor (2014)
• Spielman / Teng (2015)
• Brookes / O'Hearn (2016)
• Dwork / McSherry / Nissim / Smith (2017)
• Regev (2018)
• Dinur (2019)
2020s
• Moser / G. Tardos (2020)
• Bulatov / Cai / Chen / Dyer / Richerby (2021)
• Brakerski / Gentry / Vaikuntanathan (2022)
Authority control: Academics
• Association for Computing Machinery
• DBLP
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
| Wikipedia |