Walk-regular graph
In discrete mathematics, a walk-regular graph is a simple graph where the number of closed walks of any length from a vertex to itself does not depend on the choice of vertex.
Equivalent definitions
Suppose that $G$ is a simple graph. Let $A$ denote the adjacency matrix of $G$, $V(G)$ denote the set of vertices of $G$, and $\Phi _{G-v}(x)$ denote the characteristic polynomial of the vertex-deleted subgraph $G-v$ for all $v\in V(G)$. Then the following are equivalent:
• $G$ is walk-regular.
• $A^{k}$ is a constant-diagonal matrix for all $k\geq 0.$
• $\Phi _{G-u}(x)=\Phi _{G-v}(x)$ for all $u,v\in V(G).$
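The constant-diagonal condition lends itself to a direct check for small graphs. The following Python sketch (function names are illustrative, not standard library API) tests whether $A^{k}$ has constant diagonal for all $k\leq n$; checking up to $k=n$ suffices because, by the Cayley–Hamilton theorem, every higher power of $A$ is a linear combination of $A^{0},\ldots ,A^{n-1}$:

```python
def mat_mul(A, B):
    """Multiply two square integer matrices given as lists of lists."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def is_walk_regular(A):
    """Return True if A^k has constant diagonal for k = 0..n."""
    n = len(A)
    P = [[1 if i == j else 0 for j in range(n)] for i in range(n)]  # A^0 = I
    for _ in range(n + 1):
        diag = [P[i][i] for i in range(n)]
        if any(d != diag[0] for d in diag):
            return False
        P = mat_mul(P, A)
    return True

# 5-cycle (vertex-transitive, hence walk-regular) vs. the path on 3 vertices
C5 = [[1 if abs(i - j) % 5 in (1, 4) else 0 for j in range(5)] for i in range(5)]
P3 = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
print(is_walk_regular(C5), is_walk_regular(P3))  # True False
```

The path fails already at $k=2$, since the diagonal of $A^{2}$ lists the vertex degrees, which are not constant for a non-regular graph.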
Examples
• The vertex-transitive graphs are walk-regular.
• The semi-symmetric graphs are walk-regular.[1]
• The distance-regular graphs are walk-regular. More generally, any simple graph in a homogeneous coherent algebra is walk-regular.
• A connected regular graph is walk-regular if any of the following holds:
• It has at most four distinct eigenvalues.
• It is triangle-free and has at most five distinct eigenvalues.
• It is bipartite and has at most six distinct eigenvalues.
Properties
• A walk-regular graph is necessarily a regular graph.
• Complements of walk-regular graphs are walk-regular.
• Cartesian products of walk-regular graphs are walk-regular.
• Categorical products of walk-regular graphs are walk-regular.
• Strong products of walk-regular graphs are walk-regular.
• In general, the line graph of a walk-regular graph is not walk-regular.
References
1. "Are there only finitely many distinct cubic walk-regular graphs that are neither vertex-transitive nor distance-regular?". mathoverflow.net. Retrieved 2017-07-21.
External links
• Chris Godsil and Brendan McKay, Feasibility conditions for the existence of walk-regular graphs.
| Wikipedia |
Wall's finiteness obstruction
In geometric topology, a field within mathematics, the obstruction to a finitely dominated space X being homotopy-equivalent to a finite CW-complex is its Wall finiteness obstruction w(X), which is an element of the reduced zeroth algebraic K-theory ${\widetilde {K}}_{0}(\mathbb {Z} [\pi _{1}(X)])$ of the integral group ring $\mathbb {Z} [\pi _{1}(X)]$. It is named after the mathematician C. T. C. Wall.
By work of John Milnor[1] on finitely dominated spaces, no generality is lost in letting X be a CW-complex. A finite domination of X is a finite CW-complex K together with maps $r:K\to X$ and $i\colon X\to K$ such that $r\circ i\simeq 1_{X}$. By a construction due to Milnor it is possible to extend r to a homotopy equivalence ${\bar {r}}\colon {\bar {K}}\to X$ where ${\bar {K}}$ is a CW-complex obtained from K by attaching cells to kill the relative homotopy groups $\pi _{n}(r)$.
The space ${\bar {K}}$ will be finite if all relative homotopy groups are finitely generated. Wall showed that this will be the case if and only if his finiteness obstruction vanishes. More precisely, using covering space theory and the Hurewicz theorem one can identify $\pi _{n}(r)$ with $H_{n}({\widetilde {X}},{\widetilde {K}})$. Wall then showed that the cellular chain complex $C_{*}({\widetilde {X}})$ is chain-homotopy equivalent to a chain complex $A_{*}$ of finite type of projective $\mathbb {Z} [\pi _{1}(X)]$-modules, and that $H_{n}({\widetilde {X}},{\widetilde {K}})\cong H_{n}(A_{*})$ will be finitely generated if and only if these modules are stably-free. Stably-free modules vanish in reduced K-theory. This motivates the definition
$w(X)=\sum _{i}(-1)^{i}[A_{i}]\in {\widetilde {K}}_{0}(\mathbb {Z} [\pi _{1}(X)])$.
See also
• Algebraic K-theory
• Whitehead torsion
References
1. Milnor, John (1959), "On spaces having the homotopy type of a CW-complex", Transactions of the American Mathematical Society, 90 (2): 272–280
• Varadarajan, Kalathoor (1989), The finiteness obstruction of C. T. C. Wall, Canadian Mathematical Society Series of Monographs and Advanced Texts, New York: John Wiley & Sons Inc., ISBN 978-0-471-62306-9, MR 0989589.
• Ferry, Steve; Ranicki, Andrew (2001), "A survey of Wall's finiteness obstruction", Surveys on Surgery Theory, Vol. 2, Annals of Mathematics Studies, vol. 149, Princeton, NJ: Princeton University Press, pp. 63–79, arXiv:math/0008070, Bibcode:2000math......8070F, MR 1818772.
• Rosenberg, Jonathan (2005), "K-theory and geometric topology", in Friedlander, Eric M.; Grayson, Daniel R. (eds.), Handbook of K-Theory (PDF), Berlin: Springer, pp. 577–610, doi:10.1007/978-3-540-27855-9_12, ISBN 978-3-540-23019-9, MR 2181830
Wall-crossing
In algebraic geometry and string theory, the phenomenon of wall-crossing describes the discontinuous change of a certain quantity, such as an integer geometric invariant, an index or a space of BPS states, across a codimension-one wall in a space of stability conditions, a so-called wall of marginal stability.
Wallace–Bolyai–Gerwien theorem
In geometry, the Wallace–Bolyai–Gerwien theorem,[1] named after William Wallace, Farkas Bolyai and P. Gerwien, is a theorem related to dissections of polygons. It answers the question of when one polygon can be formed from another by cutting it into a finite number of pieces and recomposing these by translations and rotations. The Wallace–Bolyai–Gerwien theorem states that this can be done if and only if the two polygons have the same area.
Wallace had already proved the same result in 1807.
According to other sources, Bolyai and Gerwien had independently proved the theorem in 1833 and 1835, respectively.
Formulation
There are several ways in which this theorem may be formulated. The most common version uses the concept of "equidecomposability" of polygons: two polygons are equidecomposable if they can be split into finitely many triangles that only differ by some isometry (in fact only by a combination of a translation and a rotation). In this case the Wallace–Bolyai–Gerwien theorem states that two polygons are equidecomposable if and only if they have the same area.
Another formulation is in terms of scissors congruence: two polygons are scissors-congruent if they can be decomposed into finitely many polygons that are pairwise congruent. Scissors-congruence is an equivalence relation. In this case the Wallace–Bolyai–Gerwien theorem states that the equivalence classes of this relation contain precisely those polygons that have the same area.
Proof sketch
The theorem can be understood in a few steps. Firstly, every polygon can be cut into triangles. There are a few methods for this. For convex polygons one can cut off each vertex in turn, while for concave polygons this requires more care. A general approach that works for non-simple polygons as well would be to choose a line not parallel to any of the sides of the polygon and draw a line parallel to this one through each of the vertices of the polygon. This will divide the polygon into triangles and trapezoids, which in turn can be converted into triangles.
Secondly, each of these triangles can be transformed into a right triangle and subsequently into a rectangle with one side of length 1. Alternatively, a triangle can be transformed into one such rectangle by first turning it into a parallelogram and then turning this into such a rectangle. By doing this for each triangle, the polygon can be decomposed into a rectangle with unit width and height equal to its area.
Since this can be done for any two polygons, a "common subdivision" of the rectangle in between proves the theorem. That is, cutting the common rectangle (of size 1 by its area) according to both polygons will be an intermediate between both polygons.
Notes about the proof
First of all, this proof requires an intermediate polygon. In the formulation of the theorem using scissors-congruence, the use of this intermediate can be reformulated by using the fact that scissors-congruence is transitive. Since both the first polygon and the second polygon are scissors-congruent to the intermediate, they are scissors-congruent to one another.
The proof of this theorem is constructive and doesn't require the axiom of choice, even though some other dissection problems (e.g. Tarski's circle-squaring problem) do need it. In this case, the decomposition and reassembly can actually be carried out "physically": the pieces can, in theory, be cut with scissors from paper and reassembled by hand.
Nonetheless, the number of pieces required to compose one polygon from another using this procedure generally far exceeds the minimum number of polygons needed.[2]
Degree of decomposability
Consider two equidecomposable polygons P and Q. The minimum number n of pieces required to compose one polygon Q from another polygon P is denoted by σ(P,Q).
Depending on the polygons, it is possible to estimate upper and lower bounds for σ(P,Q). For instance, Alfred Tarski proved that if P is convex and the diameters of P and Q are respectively given by d(P) and d(Q), then[3]
$\sigma (P,Q)\geq {\frac {d(P)}{d(Q)}}.$
If Px is a rectangle of sides a · x and a · (1/x) and Q is a square of side a, then Px and Q are equidecomposable for every x > 0. An upper bound for σ(Px,Q) is given by[3]
$\sigma (P_{x},Q)\leq 2+\left\lceil {\sqrt {x^{2}-1}}\right\rceil ,\quad {\text{for }}x\geq 1.$
Since σ(Px,Q) = σ(P(1/x),Q), we also have that
$\sigma \left(P_{\frac {1}{x}},Q\right)\leq 2+\left\lceil {\frac {\sqrt {1-x^{2}}}{x}}\right\rceil ,\quad {\text{for }}x\leq 1.$
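Both bounds are easy to evaluate numerically. The following sketch (helper names are illustrative) computes Tarski's lower bound and the upper bound for the rectangle family $P_{x}$ with $x\geq 1$:

```python
import math

def tarski_lower(d_P, d_Q):
    """Tarski's bound: sigma(P, Q) >= d(P)/d(Q), valid for convex P."""
    return d_P / d_Q

def sigma_upper(x):
    """Upper bound 2 + ceil(sqrt(x^2 - 1)) for sigma(P_x, Q), x >= 1."""
    return 2 + math.ceil(math.sqrt(x * x - 1))

# For a = 1 and x = 3: P_3 is a 3-by-1/3 rectangle, Q the unit square.
a, x = 1.0, 3.0
lower = tarski_lower(math.hypot(a * x, a / x), a * math.sqrt(2))  # diameters are the diagonals
print(lower, sigma_upper(x))  # roughly 2.13 pieces are necessary; 5 suffice
```

The diameters here are the rectangle diagonals, so the lower bound is $\sqrt{x^{2}+x^{-2}}/\sqrt{2}$, which grows linearly in $x$, matching the linear growth of the upper bound.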
Generalisations
The analogous statement about polyhedra in three dimensions, known as Hilbert's third problem, is false, as proven by Max Dehn in 1900. The problem has also been considered in some non-Euclidean geometries. In two-dimensional hyperbolic and spherical geometry, the theorem holds. However, the problem is still open for these geometries in three dimensions.
References
1. Gardner, R. J. (1985-02-01). "A problem of Sallee on equidecomposable convex bodies". Proceedings of the American Mathematical Society. 94 (2): 329–332. doi:10.1090/S0002-9939-1985-0784187-9. ISSN 0002-9939. JSTOR 2045399.
2. "Dissection".
3. McFarland, Andrew; McFarland, Joanna; Smith, James T. (2014). Alfred Tarski. Birkhäuser, New York, NY. pp. 77–91. doi:10.1007/978-1-4939-1474-6_5. ISBN 9781493914739.
External links
• Wallace–Bolyai–Gerwien Theorem
• Scissors Congruence - An interactive demonstration of the Wallace–Bolyai–Gerwien theorem.
• Video showing a sketch of the proof
• An Example of the Bolyai–Gerwien Theorem by Sándor Kabai, Ferenc Holló Szabó, and Lajos Szilassi, the Wolfram Demonstrations Project.
• A presentation about Hilbert's third problem at College of Staten Island CUNY - Abhijit Champanerkar.
• Optimal dissection of a unit square in a rectangle
Wallenius' noncentral hypergeometric distribution
In probability theory and statistics, Wallenius' noncentral hypergeometric distribution (named after Kenneth Ted Wallenius) is a generalization of the hypergeometric distribution where items are sampled with bias.
This distribution can be illustrated as an urn model with bias. Assume, for example, that an urn contains m1 red balls and m2 white balls, totalling N = m1 + m2 balls. Each red ball has the weight ω1 and each white ball has the weight ω2. We will say that the odds ratio is ω = ω1 / ω2. Now we are taking n balls, one by one, in such a way that the probability of taking a particular ball at a particular draw is equal to its proportion of the total weight of all balls that lie in the urn at that moment. The number of red balls x1 that we get in this experiment is a random variable with Wallenius' noncentral hypergeometric distribution.
The matter is complicated by the fact that there is more than one noncentral hypergeometric distribution. Wallenius' noncentral hypergeometric distribution is obtained if balls are sampled one by one in such a way that there is competition between the balls. Fisher's noncentral hypergeometric distribution is obtained if the balls are sampled simultaneously or independently of each other. Unfortunately, both distributions are known in the literature as "the" noncentral hypergeometric distribution. It is important to be specific about which distribution is meant when using this name.
The two distributions are both equal to the (central) hypergeometric distribution when the odds ratio is 1.
The difference between these two probability distributions is subtle. See the Wikipedia entry on noncentral hypergeometric distributions for a more detailed explanation.
Univariate distribution
Univariate Wallenius' Noncentral Hypergeometric Distribution
Parameters $m_{1},m_{2}\in \mathbb {N} $
$N=m_{1}+m_{2}$
$n\in [0,N)$
$\omega \in \mathbb {R} _{+}$
Support $x\in [x_{min},x_{max}]$
$x_{min}=\max(0,n-m_{2})$
$x_{max}=\min(n,m_{1})$
PMF ${\binom {m_{1}}{x}}{\binom {m_{2}}{n-x}}\int _{0}^{1}(1-t^{\omega /D})^{x}(1-t^{1/D})^{n-x}\operatorname {d} t$
where $D=\omega (m_{1}-x)+(m_{2}-(n-x))$
Mean Approximated by solution $\mu $ to
${\frac {\mu }{m_{1}}}+\left(1-{\frac {n-\mu }{m_{2}}}\right)^{\omega }=1$
Variance $\approx {\frac {Nab}{(N-1)(m_{1}b+m_{2}a)}}\,$, where
$a=\mu (m_{1}-\mu ),\;b=(n-\mu )(\mu +m_{2}-n)$
Wallenius' distribution is particularly complicated because each ball has a probability of being taken that depends not only on its weight, but also on the total weight of its competitors. And the weight of the competing balls depends on the outcomes of all preceding draws.
This recursive dependency gives rise to a difference equation with a solution that is given in open form by the integral in the expression of the probability mass function in the table above.
Closed form expressions for the probability mass function exist (Lyons, 1980), but they are not very useful for practical calculations because of extreme numerical instability, except in degenerate cases.
Several other calculation methods are used, including recursion, Taylor expansion and numerical integration (Fog, 2007, 2008).
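For modest parameters, the integral representation of the probability mass function can be evaluated by straightforward quadrature. A minimal sketch (midpoint rule; the function name is illustrative):

```python
import math

def wallenius_pmf(x, n, m1, m2, w, steps=20000):
    """PMF from the integral representation, midpoint rule on [0, 1]."""
    D = w * (m1 - x) + (m2 - (n - x))  # total weight of the balls not taken
    h = 1.0 / steps
    s = 0.0
    for i in range(steps):
        t = (i + 0.5) * h
        s += (1 - t ** (w / D)) ** x * (1 - t ** (1.0 / D)) ** (n - x)
    return math.comb(m1, x) * math.comb(m2, n - x) * s * h

# the probabilities over the whole support sum to 1
total = sum(wallenius_pmf(x, 3, 3, 4, 2.0) for x in range(0, 4))
print(round(total, 3))  # 1.0
```

With ω = 1 this reduces to the central hypergeometric probabilities, as noted above; for large parameters a dedicated method (recursion or Taylor expansion) is preferable.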
The most reliable calculation method is recursive calculation of f(x,n) from f(x,n−1) and f(x−1,n−1), using the recursion formula given below under properties. The probabilities of all (x,n) combinations on all possible trajectories leading to the desired point are calculated, starting with f(0,0) = 1. The total number of probabilities to calculate is n(x+1) − x². Other calculation methods must be used when n and x are so big that this method is too inefficient.
The probability that all balls have the same color is easier to calculate. See the formula below under multivariate distribution.
No exact formula for the mean is known (short of complete enumeration of all probabilities). The equation given above is reasonably accurate. This equation can be solved for μ by Newton-Raphson iteration. The same equation can be used for estimating the odds from an experimentally obtained value of the mean.
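A sketch of this Newton–Raphson solution for the approximate mean (the function name is illustrative; it assumes the iterate stays in the valid range 0 < n − μ < m₂):

```python
def wallenius_mean(n, m1, m2, w, tol=1e-10):
    """Solve mu/m1 + (1 - (n - mu)/m2)**w = 1 for mu by Newton-Raphson."""
    mu = n * m1 / (m1 + m2)  # start from the central hypergeometric mean
    for _ in range(100):
        base = 1 - (n - mu) / m2           # assumed to stay positive
        g = mu / m1 + base ** w - 1
        dg = 1 / m1 + (w / m2) * base ** (w - 1)
        step = g / dg
        mu -= step
        if abs(step) < tol:
            break
    return mu

print(wallenius_mean(3, 3, 4, 1.0))  # about 1.2857 = 3*3/7, the unbiased mean
```

With ω = 1 the starting value already solves the equation exactly; for ω > 1 the mean shifts toward the heavier color.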
Properties of the univariate distribution
Wallenius' distribution has fewer symmetry relations than Fisher's noncentral hypergeometric distribution has. The only symmetry relates to the swapping of colors:
$\operatorname {wnchypg} (x;n,m_{1},m_{2},\omega )=\operatorname {wnchypg} (n-x;n,m_{2},m_{1},1/\omega )\,.$
Unlike Fisher's distribution, Wallenius' distribution has no symmetry relating to the number of balls not taken.
The following recursion formula is useful for calculating probabilities:
$\operatorname {wnchypg} (x;n,m_{1},m_{2},\omega )=$
$\operatorname {wnchypg} (x-1;n-1,m_{1},m_{2},\omega ){\frac {(m_{1}-x+1)\omega }{(m_{1}-x+1)\omega +m_{2}+x-n}}+$
$\operatorname {wnchypg} (x;n-1,m_{1},m_{2},\omega ){\frac {m_{2}+x-n+1}{(m_{1}-x)\omega +m_{2}+x-n+1}}$
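A dynamic-programming sketch of this first recursion (pure Python; names are illustrative). Starting from f(0,0) = 1, each state sums the probabilities of arriving via a red or a white final draw:

```python
from functools import lru_cache

def wnchypg(x, n, m1, m2, w):
    """Wallenius PMF computed by the draw-by-draw recursion, from f(0, 0) = 1."""
    @lru_cache(maxsize=None)
    def f(xx, nn):
        # states outside the support contribute nothing
        if xx < 0 or xx > nn or xx > m1 or nn - xx > m2:
            return 0.0
        if nn == 0:
            return 1.0
        total = 0.0
        p = f(xx - 1, nn - 1)          # the nn-th draw was red
        if p:
            red = (m1 - xx + 1) * w
            total += p * red / (red + m2 + xx - nn)
        p = f(xx, nn - 1)              # the nn-th draw was white
        if p:
            white = m2 + xx - nn + 1
            total += p * white / ((m1 - xx) * w + white)
        return total
    return f(x, n)

print(abs(wnchypg(2, 3, 3, 4, 1.0) - 12 / 35) < 1e-12)  # True
```

With ω = 1 this reproduces the central hypergeometric value C(3,2)·C(4,1)/C(7,3) = 12/35, and for any ω the probabilities over the support sum to 1.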
Another recursion formula is also known:
$\operatorname {wnchypg} (x;n,m_{1},m_{2},\omega )=$
$\operatorname {wnchypg} (x-1;n-1,m_{1}-1,m_{2},\omega ){\frac {m_{1}\omega }{m_{1}\omega +m_{2}}}+$
$\operatorname {wnchypg} (x;n-1,m_{1},m_{2}-1,\omega ){\frac {m_{2}}{m_{1}\omega +m_{2}}}\,.$
The probability is limited by
$\operatorname {f} _{1}(x)\leq \operatorname {wnchypg} (x;n,m_{1},m_{2},\omega )\leq \operatorname {f} _{2}(x)\,,\,\,{\text{for}}\,\,\omega <1\,,$
$\operatorname {f} _{1}(x)\geq \operatorname {wnchypg} (x;n,m_{1},m_{2},\omega )\geq \operatorname {f} _{2}(x)\,,\,\,{\text{for}}\,\,\omega >1\,,{\text{where}}$
$\operatorname {f} _{1}(x)={\binom {m_{1}}{x}}{\binom {m_{2}}{n-x}}{\frac {n!}{(m_{1}+m_{2}/\omega )^{\underline {x}}\,(m_{2}+\omega (m_{1}-x))^{\underline {n-x}}}}$
$\operatorname {f} _{2}(x)={\binom {m_{1}}{x}}{\binom {m_{2}}{n-x}}{\frac {n!}{(m_{1}+(m_{2}-x_{2})/\omega )^{\underline {x}}\,(m_{2}+\omega m_{1})^{\underline {n-x}}}}\,,$
where the underlined superscript indicates the falling factorial $a^{\underline {b}}=a(a-1)\ldots (a-b+1)$, and $x_{2}=n-x$ denotes the number of balls of the second color that are taken.
Multivariate distribution
The distribution can be expanded to any number of colors c of balls in the urn. The multivariate distribution is used when there are more than two colors.
Multivariate Wallenius' Noncentral Hypergeometric Distribution
Parameters $c\in \mathbb {N} $
$\mathbf {m} =(m_{1},\ldots ,m_{c})\in \mathbb {N} ^{c}$
$N=\sum _{i=1}^{c}m_{i}$
$n\in [0,N)$
${\boldsymbol {\omega }}=(\omega _{1},\ldots ,\omega _{c})\in \mathbb {R} _{+}^{c}$
Support $\mathrm {S} =\left\{\mathbf {x} \in \mathbb {Z} _{0+}^{c}\,:\,\sum _{i=1}^{c}x_{i}=n\right\}$
PMF $\left(\prod _{i=1}^{c}{\binom {m_{i}}{x_{i}}}\right)\int _{0}^{1}\prod _{i=1}^{c}(1-t^{\omega _{i}/D})^{x_{i}}\operatorname {d} t\,,$
where $D={\boldsymbol {\omega }}\cdot (\mathbf {m} -\mathbf {x} )=\sum _{i=1}^{c}\omega _{i}(m_{i}-x_{i})$
Mean Approximated by solution $\mu _{1},\ldots ,\mu _{c}$ to
$\left(1-{\frac {\mu _{1}}{m_{1}}}\right)^{1/\omega _{1}}=\left(1-{\frac {\mu _{2}}{m_{2}}}\right)^{1/\omega _{2}}=\ldots =\left(1-{\frac {\mu _{c}}{m_{c}}}\right)^{1/\omega _{c}}$
$\wedge \,\sum _{i=1}^{c}\mu _{i}=n\,\wedge \,\forall \,i\in \{1,\ldots ,c\}\,:\,0\leq \mu _{i}\leq m_{i}\,.$
Variance Approximated by variance of Fisher's noncentral hypergeometric distribution with same mean.
The probability mass function can be calculated by various Taylor expansion methods or by numerical integration (Fog, 2008).
The probability that all balls have the same color, j, can be calculated as:
$\operatorname {mwnchypg} ((0,\ldots ,0,x_{j},0,\ldots );n,\mathbf {m} ,{\boldsymbol {\omega }})={\frac {m_{j}^{\,\,{\underline {n}}}}{\left({\frac {1}{\omega _{j}}}\sum _{i=1}^{c}m_{i}\omega _{i}\right)^{\underline {n}}}}$
for xj = n ≤ mj, where the underlined superscript denotes the falling factorial.
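A sketch of this same-color formula (illustrative names), using a small helper for the falling factorial:

```python
def falling(a, b):
    """Falling factorial a(a-1)...(a-b+1)."""
    r = 1.0
    for i in range(b):
        r *= a - i
    return r

def prob_all_color_j(j, n, m, w):
    """P(all n balls drawn have color j), for x_j = n <= m_j."""
    s = sum(mi * wi for mi, wi in zip(m, w)) / w[j]  # (1/w_j) * sum m_i w_i
    return falling(m[j], n) / falling(s, n)

# with equal weights this reduces to the central hypergeometric value 1/7
print(abs(prob_all_color_j(0, 2, (3, 4), (1.0, 1.0)) - 1 / 7) < 1e-12)  # True
```

Doubling the weight of color 0 raises the probability of an all-color-0 sample from 1/7 to 6/20.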
A reasonably good approximation to the mean can be calculated using the equation given above. The equation can be solved by defining θ so that
$\mu _{i}=m_{i}(1-e^{\omega _{i}\theta })$
and solving
$\sum _{i=1}^{c}\mu _{i}=n$
for θ by Newton-Raphson iteration.
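Since each μ_i is monotone in θ, θ can also be located by simple bisection rather than Newton–Raphson; the sketch below uses bisection for robustness (names are illustrative; it assumes 0 < n < N):

```python
import math

def multi_wallenius_mean(n, m, w, iters=200):
    """Approximate means mu_i = m_i*(1 - exp(w_i*theta)), bisecting on sum(mu) = n."""
    lo, hi = -50.0, 0.0  # sum(mu) is decreasing in theta: near N at lo, 0 at hi
    for _ in range(iters):
        mid = (lo + hi) / 2
        s = sum(mi * (1 - math.exp(wi * mid)) for mi, wi in zip(m, w))
        if s > n:
            lo = mid  # too many expected draws: move theta toward 0
        else:
            hi = mid
    theta = (lo + hi) / 2
    return [mi * (1 - math.exp(wi * theta)) for mi, wi in zip(m, w)]

print(multi_wallenius_mean(3, (3, 4), (1.0, 1.0)))
```

With equal weights the solution is proportional sampling, μ_i = n·m_i/N.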
The equation for the mean is also useful for estimating the odds from experimentally obtained values for the mean.
No good way of calculating the variance is known. The best known method is to approximate the multivariate Wallenius distribution by a multivariate Fisher's noncentral hypergeometric distribution with the same mean, and insert the mean as calculated above in the approximate formula for the variance of the latter distribution.
Properties of the multivariate distribution
The order of the colors is arbitrary so that any colors can be swapped.
The weights can be arbitrarily scaled:
$\operatorname {mwnchypg} (\mathbf {x} ;n,\mathbf {m} ,{\boldsymbol {\omega }})=\operatorname {mwnchypg} (\mathbf {x} ;n,\mathbf {m} ,r{\boldsymbol {\omega }})\,\,$ for all $r\in \mathbb {R} _{+}$.
Colors with zero number (mi = 0) or zero weight (ωi = 0) can be omitted from the equations.
Colors with the same weight can be joined:
$\operatorname {mwnchypg} \left(\mathbf {x} ;n,\mathbf {m} ,(\omega _{1},\ldots ,\omega _{c-1},\omega _{c-1})\right)\,=$
$\operatorname {mwnchypg} \left((x_{1},\ldots ,x_{c-1}+x_{c});n,(m_{1},\ldots ,m_{c-1}+m_{c}),(\omega _{1},\ldots ,\omega _{c-1})\right)\,\cdot $
$\operatorname {hypg} (x_{c};x_{c-1}+x_{c},m_{c},m_{c-1}+m_{c})\,,$
where $\operatorname {hypg} (x;n,m,N)$ is the (univariate, central) hypergeometric distribution probability.
Complementary Wallenius' noncentral hypergeometric distribution
The balls that are not taken in the urn experiment have a distribution that is different from Wallenius' noncentral hypergeometric distribution, due to a lack of symmetry. The distribution of the balls not taken can be called the complementary Wallenius' noncentral hypergeometric distribution.
Probabilities in the complementary distribution are calculated from Wallenius' distribution by replacing n with N-n, xi with mi - xi, and ωi with 1/ωi.
Software available
• WalleniusHypergeometricDistribution in Mathematica.
• An implementation for the R programming language is available as the package named BiasedUrn. Includes univariate and multivariate probability mass functions, distribution functions, quantiles, random variable generating functions, mean and variance.
• Implementation in C++ is available from www.agner.org.
See also
• Noncentral hypergeometric distributions
• Fisher's noncentral hypergeometric distribution
• Biased sample
• Bias
• Population genetics
• Fisher's exact test
References
• Chesson, J. (1976). "A non-central multivariate hypergeometric distribution arising from biased sampling with application to selective predation". Journal of Applied Probability. Vol. 13, no. 4. Applied Probability Trust. pp. 795–797. doi:10.2307/3212535. JSTOR 3212535.
• Fog, A. (2007). "Random number theory".
• Fog, A. (2008). "Calculation Methods for Wallenius' Noncentral Hypergeometric Distribution". Communications in Statistics - Simulation and Computation. 37 (2): 258–273. doi:10.1080/03610910701790269. S2CID 9040568.
• Johnson, N. L.; Kemp, A. W.; Kotz, S. (2005). Univariate Discrete Distributions. Hoboken, New Jersey: Wiley and Sons.
• Lyons, N. I. (1980). "Closed Expressions for Noncentral Hypergeometric Probabilities". Communications in Statistics - Simulation and Computation. Vol. 9, no. 3. pp. 313–314. doi:10.1080/03610918008812156.
• Manly, B. F. J. (1974). "A Model for Certain Types of Selection Experiments". Biometrics. Vol. 30, no. 2. International Biometric Society. pp. 281–294. doi:10.2307/2529649. JSTOR 2529649.
• Wallenius, K. T. (1963). Biased Sampling: The Non-central Hypergeometric Probability Distribution. Ph.D. Thesis (Thesis). Stanford University, Department of Statistics.
Wallis product
In mathematics, the Wallis product for π, published in 1656 by John Wallis,[1] states that
${\begin{aligned}{\frac {\pi }{2}}&=\prod _{n=1}^{\infty }{\frac {4n^{2}}{4n^{2}-1}}=\prod _{n=1}^{\infty }\left({\frac {2n}{2n-1}}\cdot {\frac {2n}{2n+1}}\right)\\[6pt]&={\Big (}{\frac {2}{1}}\cdot {\frac {2}{3}}{\Big )}\cdot {\Big (}{\frac {4}{3}}\cdot {\frac {4}{5}}{\Big )}\cdot {\Big (}{\frac {6}{5}}\cdot {\frac {6}{7}}{\Big )}\cdot {\Big (}{\frac {8}{7}}\cdot {\frac {8}{9}}{\Big )}\cdot \;\cdots \\\end{aligned}}$
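The partial products increase monotonically to π/2, with error on the order of 1/k. A quick numerical sketch (the function name is illustrative):

```python
import math

def wallis_partial(k):
    """Product of the first k paired factors of the Wallis product."""
    p = 1.0
    for n in range(1, k + 1):
        p *= (2 * n) ** 2 / ((2 * n - 1) * (2 * n + 1))
    return p

for k in (10, 100, 1000):
    print(k, wallis_partial(k), math.pi / 2 - wallis_partial(k))
```

The slow convergence makes the product unsuitable for computing π, but it is of considerable theoretical interest.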
Proof using integration
Wallis derived this infinite product using interpolation, though his method is not regarded as rigorous. A modern derivation can be found by examining $\int _{0}^{\pi }\sin ^{n}x\,dx$ for even and odd values of $n$, and noting that for large $n$, increasing $n$ by 1 results in a change that becomes ever smaller as $n$ increases. Let[2]
$I(n)=\int _{0}^{\pi }\sin ^{n}x\,dx.$
(This is a form of Wallis' integrals.) Integrate by parts:
${\begin{aligned}u&=\sin ^{n-1}x\\\Rightarrow du&=(n-1)\sin ^{n-2}x\cos x\,dx\\dv&=\sin x\,dx\\\Rightarrow v&=-\cos x\end{aligned}}$
${\begin{aligned}\Rightarrow I(n)&=\int _{0}^{\pi }\sin ^{n}x\,dx\\[6pt]{}&=-\sin ^{n-1}x\cos x{\Biggl |}_{0}^{\pi }-\int _{0}^{\pi }(-\cos x)(n-1)\sin ^{n-2}x\cos x\,dx\\[6pt]{}&=0+(n-1)\int _{0}^{\pi }\cos ^{2}x\sin ^{n-2}x\,dx,\qquad n>1\\[6pt]{}&=(n-1)\int _{0}^{\pi }(1-\sin ^{2}x)\sin ^{n-2}x\,dx\\[6pt]{}&=(n-1)\int _{0}^{\pi }\sin ^{n-2}x\,dx-(n-1)\int _{0}^{\pi }\sin ^{n}x\,dx\\[6pt]{}&=(n-1)I(n-2)-(n-1)I(n)\\[6pt]{}&={\frac {n-1}{n}}I(n-2)\\[6pt]\Rightarrow {\frac {I(n)}{I(n-2)}}&={\frac {n-1}{n}}\\[6pt]\end{aligned}}$
Now, we make two variable substitutions for convenience to obtain:
$I(2n)={\frac {2n-1}{2n}}I(2n-2)$
$I(2n+1)={\frac {2n}{2n+1}}I(2n-1)$
We obtain values for $I(0)$ and $I(1)$ for later use.
${\begin{aligned}I(0)&=\int _{0}^{\pi }dx=x{\Biggl |}_{0}^{\pi }=\pi \\[6pt]I(1)&=\int _{0}^{\pi }\sin x\,dx=-\cos x{\Biggl |}_{0}^{\pi }=(-\cos \pi )-(-\cos 0)=-(-1)-(-1)=2\\[6pt]\end{aligned}}$
Now, we calculate for even values $I(2n)$ by repeatedly applying the recurrence relation result from the integration by parts. Eventually, we get down to $I(0)$, which we have calculated.
$I(2n)=\int _{0}^{\pi }\sin ^{2n}x\,dx={\frac {2n-1}{2n}}I(2n-2)={\frac {2n-1}{2n}}\cdot {\frac {2n-3}{2n-2}}I(2n-4)$
$={\frac {2n-1}{2n}}\cdot {\frac {2n-3}{2n-2}}\cdot {\frac {2n-5}{2n-4}}\cdot \cdots \cdot {\frac {5}{6}}\cdot {\frac {3}{4}}\cdot {\frac {1}{2}}I(0)=\pi \prod _{k=1}^{n}{\frac {2k-1}{2k}}$
Repeating the process for odd values $I(2n+1)$,
$I(2n+1)=\int _{0}^{\pi }\sin ^{2n+1}x\,dx={\frac {2n}{2n+1}}I(2n-1)={\frac {2n}{2n+1}}\cdot {\frac {2n-2}{2n-1}}I(2n-3)$
$={\frac {2n}{2n+1}}\cdot {\frac {2n-2}{2n-1}}\cdot {\frac {2n-4}{2n-3}}\cdot \cdots \cdot {\frac {6}{7}}\cdot {\frac {4}{5}}\cdot {\frac {2}{3}}I(1)=2\prod _{k=1}^{n}{\frac {2k}{2k+1}}$
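Both closed forms can be checked against direct numerical integration; a sketch using the midpoint rule (illustrative names):

```python
import math

def wallis_integral(p, steps=100000):
    """Midpoint-rule approximation of the integral of sin^p over [0, pi]."""
    h = math.pi / steps
    return sum(math.sin((i + 0.5) * h) ** p for i in range(steps)) * h

n = 4
even_closed = math.pi * math.prod((2 * k - 1) / (2 * k) for k in range(1, n + 1))
odd_closed = 2 * math.prod((2 * k) / (2 * k + 1) for k in range(1, n + 1))
print(abs(wallis_integral(2 * n) - even_closed) < 1e-6)    # True
print(abs(wallis_integral(2 * n + 1) - odd_closed) < 1e-6) # True
```

For n = 4 the closed forms give I(8) = 35π/128 and I(9) = 128/315, which the quadrature reproduces to well under the stated tolerance.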
We make the following observation, based on the fact that $0\leq \sin x\leq 1$ for $0\leq x\leq \pi $:
$\sin ^{2n+1}x\leq \sin ^{2n}x\leq \sin ^{2n-1}x,0\leq x\leq \pi $
$\Rightarrow I(2n+1)\leq I(2n)\leq I(2n-1)$
Dividing by $I(2n+1)$:
$\Rightarrow 1\leq {\frac {I(2n)}{I(2n+1)}}\leq {\frac {I(2n-1)}{I(2n+1)}}={\frac {2n+1}{2n}}$, where the equality comes from our recurrence relation.
By the squeeze theorem,
$\Rightarrow \lim _{n\rightarrow \infty }{\frac {I(2n)}{I(2n+1)}}=1$
$\lim _{n\rightarrow \infty }{\frac {I(2n)}{I(2n+1)}}={\frac {\pi }{2}}\lim _{n\rightarrow \infty }\prod _{k=1}^{n}\left({\frac {2k-1}{2k}}\cdot {\frac {2k+1}{2k}}\right)=1$
$\Rightarrow {\frac {\pi }{2}}=\prod _{k=1}^{\infty }\left({\frac {2k}{2k-1}}\cdot {\frac {2k}{2k+1}}\right)={\frac {2}{1}}\cdot {\frac {2}{3}}\cdot {\frac {4}{3}}\cdot {\frac {4}{5}}\cdot {\frac {6}{5}}\cdot {\frac {6}{7}}\cdot \cdots $
Proof using Laplace's method
See the main page on Gaussian integral.
Proof using Euler's infinite product for the sine function
While the proof above is typically featured in modern calculus textbooks, the Wallis product is, in retrospect, an easy corollary of the later Euler infinite product for the sine function.
${\frac {\sin x}{x}}=\prod _{n=1}^{\infty }\left(1-{\frac {x^{2}}{n^{2}\pi ^{2}}}\right)$
Let $x={\frac {\pi }{2}}$:
${\begin{aligned}\Rightarrow {\frac {2}{\pi }}&=\prod _{n=1}^{\infty }\left(1-{\frac {1}{4n^{2}}}\right)\\[6pt]\Rightarrow {\frac {\pi }{2}}&=\prod _{n=1}^{\infty }\left({\frac {4n^{2}}{4n^{2}-1}}\right)\\[6pt]&=\prod _{n=1}^{\infty }\left({\frac {2n}{2n-1}}\cdot {\frac {2n}{2n+1}}\right)={\frac {2}{1}}\cdot {\frac {2}{3}}\cdot {\frac {4}{3}}\cdot {\frac {4}{5}}\cdot {\frac {6}{5}}\cdot {\frac {6}{7}}\cdots \end{aligned}}$ [1]
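A truncation of Euler's product makes for an easy numerical check of this route. A sketch (the function name is illustrative); note the truncated product converges only at rate $O(1/N)$ in the number of factors:

```python
import math

def sin_product(x, terms=100000):
    # Truncated Euler product for sin(x)/x.
    p = 1.0
    for n in range(1, terms + 1):
        p *= 1.0 - x * x / (n * n * math.pi * math.pi)
    return p

x = 1.234
assert abs(sin_product(x) - math.sin(x) / x) < 1e-4
# At x = pi/2 the product gives 2/pi, the reciprocal of the Wallis product.
assert abs(sin_product(math.pi / 2) - 2 / math.pi) < 1e-4
```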
Relation to Stirling's approximation
Stirling's approximation for the factorial function $n!$ asserts that
$n!={\sqrt {2\pi n}}{\left({\frac {n}{e}}\right)}^{n}\left[1+O\left({\frac {1}{n}}\right)\right].$
Consider now the finite approximations to the Wallis product, obtained by taking the first $k$ terms in the product
$p_{k}=\prod _{n=1}^{k}{\frac {2n}{2n-1}}{\frac {2n}{2n+1}},$
where $p_{k}$ can be written as
${\begin{aligned}p_{k}&={1 \over {2k+1}}\prod _{n=1}^{k}{\frac {(2n)^{4}}{[(2n)(2n-1)]^{2}}}\\[6pt]&={1 \over {2k+1}}\cdot {{2^{4k}\,(k!)^{4}} \over {[(2k)!]^{2}}}.\end{aligned}}$
Substituting Stirling's approximation in this expression (both for $k!$ and $(2k)!$) one can deduce (after a short calculation) that $p_{k}$ converges to ${\frac {\pi }{2}}$ as $k\rightarrow \infty $.
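Both the factorial closed form for $p_{k}$ and its convergence to $\pi /2$ can be confirmed numerically. Since the factorials overflow floating point for large $k$, the sketch below (helper names are illustrative) evaluates the closed form through log-gamma:

```python
import math

def p_product(k):
    # Direct partial product of the Wallis factors.
    p = 1.0
    for n in range(1, k + 1):
        p *= (2 * n) ** 2 / ((2 * n - 1) * (2 * n + 1))
    return p

def p_closed(k):
    # p_k = 2^{4k} (k!)^4 / ([(2k)!]^2 (2k+1)), via log-gamma to avoid overflow.
    logp = (4 * k) * math.log(2) + 4 * math.lgamma(k + 1) \
           - 2 * math.lgamma(2 * k + 1) - math.log(2 * k + 1)
    return math.exp(logp)

assert abs(p_closed(50) - p_product(50)) < 1e-10
assert abs(p_closed(10 ** 6) - math.pi / 2) < 1e-5
```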
Derivative of the Riemann zeta function at zero
The Riemann zeta function and the Dirichlet eta function can be defined:[1]
${\begin{aligned}\zeta (s)&=\sum _{n=1}^{\infty }{\frac {1}{n^{s}}},\Re (s)>1\\[6pt]\eta (s)&=(1-2^{1-s})\zeta (s)\\[6pt]&=\sum _{n=1}^{\infty }{\frac {(-1)^{n-1}}{n^{s}}},\Re (s)>0\end{aligned}}$
Applying an Euler transform to the latter series, the following is obtained:
${\begin{aligned}\eta (s)&={\frac {1}{2}}+{\frac {1}{2}}\sum _{n=1}^{\infty }(-1)^{n-1}\left[{\frac {1}{n^{s}}}-{\frac {1}{(n+1)^{s}}}\right],\Re (s)>-1\\[6pt]\Rightarrow \eta '(s)&=(1-2^{1-s})\zeta '(s)+2^{1-s}(\ln 2)\zeta (s)\\[6pt]&=-{\frac {1}{2}}\sum _{n=1}^{\infty }(-1)^{n-1}\left[{\frac {\ln n}{n^{s}}}-{\frac {\ln(n+1)}{(n+1)^{s}}}\right],\Re (s)>-1\end{aligned}}$
Evaluating at $s=0$, and using $\zeta (0)=-{\tfrac {1}{2}}$, gives
${\begin{aligned}\Rightarrow \eta '(0)&=-\zeta '(0)-\ln 2=-{\frac {1}{2}}\sum _{n=1}^{\infty }(-1)^{n-1}\left[\ln n-\ln(n+1)\right]\\[6pt]&=-{\frac {1}{2}}\sum _{n=1}^{\infty }(-1)^{n-1}\ln {\frac {n}{n+1}}\\[6pt]&=-{\frac {1}{2}}\left(\ln {\frac {1}{2}}-\ln {\frac {2}{3}}+\ln {\frac {3}{4}}-\ln {\frac {4}{5}}+\ln {\frac {5}{6}}-\cdots \right)\\[6pt]&={\frac {1}{2}}\left(\ln {\frac {2}{1}}+\ln {\frac {2}{3}}+\ln {\frac {4}{3}}+\ln {\frac {4}{5}}+\ln {\frac {6}{5}}+\cdots \right)\\[6pt]&={\frac {1}{2}}\ln \left({\frac {2}{1}}\cdot {\frac {2}{3}}\cdot {\frac {4}{3}}\cdot {\frac {4}{5}}\cdot \cdots \right)={\frac {1}{2}}\ln {\frac {\pi }{2}}\\\Rightarrow \zeta '(0)&=-{\frac {1}{2}}\ln \left(2\pi \right)\end{aligned}}$
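These final identities lend themselves to a numerical check. The sketch below (an illustration, not part of the derivation) sums the alternating series for $\eta '(0)$, using the fact that consecutive partial sums of an alternating series bracket its limit:

```python
import math

# Partial sums of (1/2) * sum_{n>=1} (-1)^(n-1) * ln((n+1)/n),
# which the derivation above identifies with eta'(0) = (1/2) ln(pi/2).
s = 0.0
prev = 0.0
for n in range(1, 200001):
    prev = s
    s += 0.5 * (-1) ** (n - 1) * math.log((n + 1) / n)

# Consecutive partial sums bracket the limit, so their average
# converges much faster than either sum alone.
estimate = (s + prev) / 2
assert abs(estimate - 0.5 * math.log(math.pi / 2)) < 1e-8

# eta'(0) = -zeta'(0) - ln 2  =>  zeta'(0) = -(1/2) ln(2*pi)
assert abs(-estimate - math.log(2) + 0.5 * math.log(2 * math.pi)) < 1e-8
```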
See also
• John Wallis, the English mathematician who is given partial credit for the development of infinitesimal calculus and the calculation of π.
• Viète's formula, a different infinite product formula for $\pi $.
• Leibniz formula for π, an infinite sum that can be converted into an infinite Euler product for $\pi $.
• Wallis sieve
Notes
1. "Wallis Formula".
2. "Integrating Powers and Product of Sines and Cosines: Challenging Problems".
External links
• "Wallis formula", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• "Why does this product equal π/2? A new proof of the Wallis formula for π." 3Blue1Brown. April 20, 2018. Archived from the original on 2021-12-12 – via YouTube.
John Wallis
John Wallis (/ˈwɒlɪs/;[2] Latin: Wallisius; 3 December [O.S. 23 November] 1616 – 8 November [O.S. 28 October] 1703) was an English clergyman and mathematician who is given partial credit for the development of infinitesimal calculus. Between 1643 and 1689 he served as chief cryptographer for Parliament and, later, the royal court.[3] He is credited with introducing the symbol ∞ to represent the concept of infinity.[4] He similarly used 1/∞ for an infinitesimal. John Wallis was a contemporary of Newton and one of the greatest intellectuals of the early renaissance of mathematics.[5]
John Wallis
• Born: 3 December [O.S. 23 November] 1616, Ashford, Kent, England
• Died: 8 November [O.S. 28 October] 1703 (aged 86), Oxford, Oxfordshire, England
• Nationality: English
• Education: Felsted School; Emmanuel College, Cambridge
• Known for: Wallis product; inventing the symbol ∞; extending Cavalieri's quadrature formula; coining the term "momentum"[1]
• Fields: Mathematics
• Institutions: Queens' College, Cambridge; University of Oxford
• Academic advisors: William Oughtred
• Notable students: William Brouncker
Biography
Educational background
• Cambridge, M.A., Oxford, D.D.
• Grammar School at Tenterden, Kent, 1625–31.
• School of Martin Holbeach at Felsted, Essex, 1631–2.
• Cambridge University, Emmanuel College, 1632–40; B.A., 1637; M.A., 1640.
• D.D. at Oxford in 1654
Family
On 14 March 1645 he married Susanna Glynde (c. 1600 – 16 March 1687). They had three children:
1. Anne Blencoe (4 June 1656 – 5 April 1718), married Sir John Blencowe (30 November 1642 – 6 May 1726) in 1675, with issue[6]
2. John Wallis (26 December 1650 – 14 March 1717),[7] MP for Wallingford 1690–1695, married Elizabeth Harris (d. 1693) on 1 February 1682, with issue: one son and two daughters
3. Elizabeth Wallis (1658–1703[8]), married William Benson (1649–1691) of Towcester, died with no issue
Life
John Wallis was born in Ashford, Kent. He was the third of five children of Reverend John Wallis and Joanna Chapman. He was initially educated at a school in Ashford but moved to James Movat's school in Tenterden in 1625 following an outbreak of plague. Wallis was first exposed to mathematics in 1631, at Felsted School (then known as Martin Holbeach's school in Felsted); he enjoyed maths, but his study was erratic, since "mathematics, at that time with us, were scarce looked on as academical studies, but rather mechanical" (Scriba 1970). At the school in Felsted, Wallis learned how to speak and write Latin. By this time, he also was proficient in French, Greek, and Hebrew.[9] As it was intended he should be a doctor, he was sent in 1632 to Emmanuel College, Cambridge.[10] While there, he kept an act on the doctrine of the circulation of the blood; that was said to have been the first occasion in Europe on which this theory was publicly maintained in a disputation. His interests, however, centred on mathematics. He received his Bachelor of Arts degree in 1637 and a Master's in 1640, afterwards entering the priesthood. From 1643 to 1649, he served as a nonvoting scribe at the Westminster Assembly. He was elected to a fellowship at Queens' College, Cambridge in 1644, from which he had to resign following his marriage.
Throughout this time, Wallis had been close to the Parliamentarian party, perhaps as a result of his exposure to Holbeach at Felsted School. He rendered them great practical assistance in deciphering Royalist dispatches. The quality of cryptography at that time was mixed; despite the individual successes of mathematicians such as François Viète, the principles underlying cipher design and analysis were very poorly understood. Most ciphers were ad hoc methods relying on a secret algorithm, as opposed to systems based on a variable key. Wallis realised that the latter were far more secure – even describing them as "unbreakable", though he was not confident enough in this assertion to encourage revealing cryptographic algorithms. He was also concerned about the use of ciphers by foreign powers, refusing, for example, Gottfried Leibniz's request of 1697 to teach Hanoverian students about cryptography.[11]
Returning to London – he had been made chaplain at St Gabriel Fenchurch in 1643 – Wallis joined the group of scientists that was later to evolve into the Royal Society. He was finally able to indulge his mathematical interests, mastering William Oughtred's Clavis Mathematicae in a few weeks in 1647. He soon began to write his own treatises, dealing with a wide range of topics, which he continued for the rest of his life. Wallis wrote the first survey about mathematical concepts in England where he discussed the Hindu-Arabic system.[12]
Wallis joined the moderate Presbyterians in signing the remonstrance against the execution of Charles I, by which he incurred the lasting hostility of the Independents. In spite of their opposition he was appointed in 1649 to the Savilian Chair of Geometry at Oxford University, where he lived until his death on 8 November [O.S. 28 October] 1703. In 1650, Wallis was ordained as a minister. After, he spent two years with Sir Richard Darley and Lady Vere as a private chaplain. In 1661, he was one of twelve Presbyterian representatives at the Savoy Conference.
Besides his mathematical works he wrote on theology, logic, English grammar and philosophy, and he was involved in devising a system for teaching a deaf boy to speak at Littlecote House.[13] William Holder had earlier taught a deaf man, Alexander Popham, to speak "plainly and distinctly, and with a good and graceful tone".[14] Wallis later claimed credit for this, leading Holder to accuse Wallis of "rifling his Neighbours, and adorning himself with their spoyls".[15]
Wallis' appointment as Savilian Professor of Geometry at Oxford University
The Parliamentary visitation of Oxford that began in 1647 removed many senior academics from their positions, including (in November 1648) the Savilian Professors of Geometry and Astronomy. In 1649 Wallis was appointed as Savilian Professor of Geometry. Wallis seems to have been chosen largely on political grounds (as perhaps had been his Royalist predecessor Peter Turner, who despite his appointment to two professorships never published any mathematical works); while Wallis was perhaps the nation's leading cryptographer and was part of an informal group of scientists that would later become the Royal Society, he had no particular reputation as a mathematician. Nonetheless, Wallis' appointment proved richly justified by his subsequent work during the 54 years he served as Savilian Professor.[16]
Contributions to mathematics
Wallis made significant contributions to trigonometry, calculus, geometry, and the analysis of infinite series. In his Opera Mathematica I (1695) he introduced the term "continued fraction".
Analytic geometry
In 1655, Wallis published a treatise on conic sections in which they were defined analytically. This was the earliest book in which these curves are considered and defined as curves of the second degree. It helped to remove some of the perceived difficulty and obscurity of René Descartes' work on analytic geometry. In the Treatise on the Conic Sections Wallis popularised the symbol ∞ for infinity. He wrote, "I suppose any plane (following the Geometry of Indivisibles of Cavalieri) to be made up of an infinite number of parallel lines, or as I would prefer, of an infinite number of parallelograms of the same altitude; (let the altitude of each one of these be an infinitely small part 1/∞ of the whole altitude, and let the symbol ∞ denote Infinity) and the altitude of all to make up the altitude of the figure."[17]
Integral calculus
Arithmetica Infinitorum, the most important of Wallis's works, was published in 1656. In this treatise the methods of analysis of Descartes and Cavalieri were systematised and extended, but some ideas were open to criticism. He began, after a short tract on conic sections, by developing the standard notation for powers, extending them from positive integers to rational numbers:
$x^{0}=1$
$x^{-1}={\frac {1}{x}}$
$x^{-n}={\frac {1}{x^{n}}}{\text{ etc.}}$
$x^{1/2}={\sqrt {x}}$
$x^{2/3}={\sqrt[{3}]{x^{2}}}{\text{ etc.}}$
$x^{1/n}={\sqrt[{n}]{x}}$
$x^{p/q}={\sqrt[{q}]{x^{p}}}$
Leaving the numerous algebraic applications of this discovery, he next proceeded to find, by integration, the area enclosed between the curve y = xm, x-axis, and any ordinate x = h, and he proved that the ratio of this area to that of the parallelogram on the same base and of the same height is 1/(m + 1), extending Cavalieri's quadrature formula. He apparently assumed that the same result would be true also for the curve y = axm, where a is any constant, and m any number positive or negative, but he discussed only the case of the parabola in which m = 2 and the hyperbola in which m = −1. In the latter case, his interpretation of the result is incorrect. He then showed that similar results may be written down for any curve of the form
$y=\sum _{m}^{}ax^{m}$
and hence that, if the ordinate y of a curve can be expanded in powers of x, its area can be determined: thus he says that if the equation of the curve is y = x0 + x1 + x2 + ..., its area would be x + x2/2 + x3/3 + ... . He then applied this to the quadrature of the curves y = (x − x2)0, y = (x − x2)1, y = (x − x2)2, etc., taken between the limits x = 0 and x = 1. He shows that the areas are, respectively, 1, 1/6, 1/30, 1/140, etc. He next considered curves of the form y = x1/m and established the theorem that the area bounded by this curve and the lines x = 0 and x = 1 is equal to the area of the rectangle on the same base and of the same altitude as m : m + 1. This is equivalent to computing
$\int _{0}^{1}x^{1/m}\,dx.$
He illustrated this by the parabola, in which case m = 2. He stated, but did not prove, the corresponding result for a curve of the form y = xp/q.
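Wallis's quadratures $1,{\tfrac {1}{6}},{\tfrac {1}{30}},{\tfrac {1}{140}}$ for $y=(x-x^{2})^{n}$ can be reproduced exactly by expanding the integrand and integrating term by term in rational arithmetic. A modern sketch (the helper name is illustrative):

```python
from fractions import Fraction
from math import comb

def area(n):
    # Expand (x - x^2)^n = sum_k C(n,k) (-1)^k x^(n+k) and integrate each
    # term over [0, 1] in exact rational arithmetic.
    return sum(Fraction(comb(n, k) * (-1) ** k, n + k + 1) for k in range(n + 1))

assert [area(n) for n in range(4)] == [Fraction(1), Fraction(1, 6), Fraction(1, 30), Fraction(1, 140)]
```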
Wallis showed considerable ingenuity in reducing the equations of curves to the forms given above, but, as he was unacquainted with the binomial theorem, he could not effect the quadrature of the circle, whose equation is $y={\sqrt {1-x^{2}}}$, since he was unable to expand this in powers of x. He laid down, however, the principle of interpolation. Thus, as the ordinate of the circle $y={\sqrt {1-x^{2}}}$ is the geometrical mean of the ordinates of the curves $y=(1-x^{2})^{0}$ and $y=(1-x^{2})^{1}$, it might be supposed that, as an approximation, the area of the semicircle $\int _{0}^{1}\!{\sqrt {1-x^{2}}}\,dx$ which is ${\tfrac {1}{4}}\pi $ might be taken as the geometrical mean of the values of
$\int _{0}^{1}(1-x^{2})^{0}\,dx\ {\text{ and }}\int _{0}^{1}(1-x^{2})^{1}\,dx,$
that is, $1$ and ${\tfrac {2}{3}}$; this is equivalent to taking $4{\sqrt {\tfrac {2}{3}}}$ or 3.26... as the value of π. But, Wallis argued, we have in fact a series $1,{\tfrac {1}{6}},{\tfrac {1}{30}},{\tfrac {1}{140}},$... and therefore the term interpolated between $1$ and ${\tfrac {1}{6}}$ ought to be chosen so as to obey the law of this series. This, by an elaborate method that is not described here in detail, leads to a value for the interpolated term which is equivalent to taking
${\frac {\pi }{2}}={\frac {2}{1}}\cdot {\frac {2}{3}}\cdot {\frac {4}{3}}\cdot {\frac {4}{5}}\cdot {\frac {6}{5}}\cdot {\frac {6}{7}}\cdots $
(which is now known as the Wallis product).
In this work also the formation and properties of continued fractions are discussed, the subject having been brought into prominence by Brouncker's use of these fractions.
A few years later, in 1659, Wallis published a tract containing the solution of the problems on the cycloid which had been proposed by Blaise Pascal. In this he incidentally explained how the principles laid down in his Arithmetica Infinitorum could be used for the rectification of algebraic curves and gave a solution of the problem to rectify (i.e., find the length of) the semicubical parabola x3 = ay2, which had been discovered in 1657 by his pupil William Neile. Since all attempts to rectify the ellipse and hyperbola had been (necessarily) ineffectual, it had been supposed that no curves could be rectified, as indeed Descartes had definitely asserted to be the case. The logarithmic spiral had been rectified by Evangelista Torricelli and was the first curved line (other than the circle) whose length was determined, but the extension by Neile and Wallis to an algebraic curve was novel. The cycloid was the next curve rectified; this was done by Christopher Wren in 1658.
Early in 1658 a similar discovery, independent of that of Neile, was made by van Heuraët, and this was published by van Schooten in his edition of Descartes's Geometria in 1659. Van Heuraët's method is as follows. He supposes the curve to be referred to rectangular axes; if this is so, and if (x, y) are the coordinates of any point on it, and n is the length of the normal, and if another point whose coordinates are (x, η) is taken such that η : h = n : y, where h is a constant; then, if ds is the element of the length of the required curve, we have by similar triangles ds : dx = n : y. Therefore, h ds = η dx. Hence, if the area of the locus of the point (x, η) can be found, the first curve can be rectified. In this way van Heuraët effected the rectification of the curve y3 = ax2 but added that the rectification of the parabola y2 = ax is impossible since it requires the quadrature of the hyperbola. The solutions given by Neile and Wallis are somewhat similar to that given by van Heuraët, though no general rule is enunciated, and the analysis is clumsy. A third method was suggested by Fermat in 1660, but it is inelegant and laborious.
Collision of bodies
The theory of the collision of bodies was propounded by the Royal Society in 1668 for the consideration of mathematicians. Wallis, Christopher Wren, and Christiaan Huygens sent correct and similar solutions, all depending on what is now called the conservation of momentum; but, while Wren and Huygens confined their theory to perfectly elastic bodies (elastic collision), Wallis considered also imperfectly elastic bodies (inelastic collision). This was followed in 1669 by a work on statics (centres of gravity), and in 1670 by one on dynamics: these provide a convenient synopsis of what was then known on the subject.
Algebra
In 1685 Wallis published Algebra, preceded by a historical account of the development of the subject, which contains a great deal of valuable information. The second edition, issued in 1693 and forming the second volume of his Opera, was considerably enlarged. This algebra is noteworthy as containing the first systematic use of formulae. A given magnitude is here represented by the numerical ratio which it bears to the unit of the same kind of magnitude: thus, when Wallis wants to compare two lengths he regards each as containing so many units of length. This perhaps will be made clearer by noting that the relation between the space described in any time by a particle moving with a uniform velocity is denoted by Wallis by the formula
s = vt,
where s is the number representing the ratio of the space described to the unit of length; while the previous writers would have denoted the same relation by stating what is equivalent to the proposition
s1 : s2 = v1t1 : v2t2.
Number line
Wallis has been credited as the originator of the number line "for negative quantities"[18] and "for operational purposes."[19] This is based on a passage in his 1685 treatise on algebra in which he introduced a number line to illustrate the legitimacy of negative quantities:[20]
Yet is not that Supposition (of Negative Quantities) either Unuseful or Absurd; when rightly understood. And though, as to the bare Algebraick Notation, it import a Quantity less than nothing: Yet, when it comes to a Physical Application, it denotes as Real a Quantity as if the Sign were $+$; but to be interpreted in a contrary sense... $+3$, signifies $3$ Yards Forward; and $-3$, signifies $3$ Yards Backward.
It has also been noted that, in an earlier work, Wallis came to the conclusion that the ratio of a positive number to a negative one is greater than infinity. The argument involves the quotient ${\tfrac {1}{x}}$ and considering what happens as $x$ approaches and then crosses the point $x=0$ from the positive side.[21] Wallis was not alone in this thinking: Leonhard Euler came to the same conclusion by considering the geometric series ${\tfrac {1}{1-x}}=1+x+x^{2}+\cdots $, evaluated at $x=2$, followed by reasoning similar to Wallis's (he resolved the paradox by distinguishing different kinds of negative numbers).[18]
Geometry
He is usually credited with the proof of the Pythagorean theorem using similar triangles. However, Thabit Ibn Qurra (AD 901), an Arab mathematician, had produced a generalisation of the Pythagorean theorem applicable to all triangles six centuries earlier. It is a reasonable conjecture that Wallis was aware of Thabit's work.[22]
Wallis was also inspired by the works of Islamic mathematician Sadr al-Tusi, the son of Nasir al-Din al-Tusi, particularly by al-Tusi's book written in 1298 on the parallel postulate. The book was based on his father's thoughts and presented one of the earliest arguments for a non-Euclidean hypothesis equivalent to the parallel postulate. After reading this, Wallis then wrote about his ideas as he developed his own thoughts about the postulate, trying to prove it also with similar triangles.[23]
He found that Euclid's fifth postulate is equivalent to the one currently named "Wallis postulate" after him. This postulate states that "On a given finite straight line it is always possible to construct a triangle similar to a given triangle". This result was encompassed in a trend trying to deduce Euclid's fifth from the other four postulates which today is known to be impossible. Unlike other authors, he realised that the unbounded growth of a triangle was not guaranteed by the four first postulates.[24]
Calculator
Another aspect of Wallis's mathematical skills was his ability to do mental calculations. He slept badly and often did mental calculations as he lay awake in his bed. One night he calculated in his head the square root of a number with 53 digits. In the morning he dictated the 27-digit square root of the number, still entirely from memory. It was a feat that was considered remarkable, and Henry Oldenburg, the Secretary of the Royal Society, sent a colleague to investigate how Wallis did it. It was considered important enough to merit discussion in the Philosophical Transactions of the Royal Society of 1685.[25][26]
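For scale, the feat can be restated in modern terms: a 53-digit number has a 27-digit integer square root, which today is a one-line computation. In the sketch below, the random number is of course not Wallis's actual number, only an arbitrary 53-digit example:

```python
from math import isqrt
from random import randrange, seed

seed(0)
# An arbitrary 53-digit number (the specific value is hypothetical).
n = randrange(10 ** 52, 10 ** 53)
r = isqrt(n)
assert len(str(n)) == 53
assert len(str(r)) == 27               # a 53-digit number has a 27-digit integer square root
assert r * r <= n < (r + 1) * (r + 1)  # isqrt returns the floor of the square root
```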
Musical theory
Wallis translated into Latin works of Ptolemy and Bryennius, and Porphyrius's commentary on Ptolemy. He also published three letters to Henry Oldenburg concerning tuning. He approved of equal temperament, which was being used in England's organs.[27]
Other works
His Institutio logicae, published in 1687, was very popular.[4] The Grammatica linguae Anglicanae was a work on English grammar, that remained in print well into the eighteenth century. He also published on theology.[4]
Wallis as cryptographer
While employed as Lady Vere's chaplain in 1642, Wallis was given an enciphered letter about the fall of Chichester, which he managed to decipher within two hours. This started his career as a cryptographer. He was a moderate supporter of the Parliamentarian side in the First English Civil War and therefore worked as a decipherer of intercepted correspondence for the Parliamentarian leaders. For his services he was rewarded with the livings of St. Gabriel and St. Martin's in London.[28]
Because of his Parliamentarian sympathies, Wallis was not employed as a cryptographer after the Stuart Restoration,[29] but after the Glorious Revolution he was sought out by Lord Nottingham and frequently employed to decipher encrypted intercepted correspondence, though he thought that he was not always adequately rewarded for his work.[lower-alpha 1] From 1689 King William III also employed Wallis as a cryptographer, sometimes almost on a daily basis. Couriers would bring him letters to be decrypted and waited in front of his study for the product. The king took a personal interest in Wallis' work and well-being, as witnessed by a letter he sent to the Dutch Grand Pensionary Anthonie Heinsius in 1689.[29]
In these early days of the Williamite reign, directly obtaining foreign intercepted letters was a problem for the English, who did not yet have the resources of the foreign Black Chambers, but allies such as the Elector of Brandenburg occasionally made gifts of such intercepted correspondence, like the letter of King Louis XIV of France to King John III Sobieski of Poland that King William used in 1689 to cause a crisis in French-Polish diplomatic relations. The king was quite open about it, and Wallis was rewarded for his role.[31] But Wallis became nervous that the French might take action against him.[32]
Wallis' relationship with the German mathematician Gottfried Wilhelm Leibniz was cordial, but Leibniz also had cryptographic interests and tried to get Wallis to divulge some of his trade secrets, which Wallis declined to do as a matter of patriotic principle.[33]
Smith gives an example of the painstaking work that Wallis performed, as described by himself in a letter to Richard Hampden of 3 August 1689. In it he gives a detailed account of his work on a particular letter and the parts he had encountered difficulties with.[34]
Wallis' correspondence also shows details of the way he stood up for himself when he thought he was under-appreciated, financially or otherwise. He lobbied enthusiastically, both on his own behalf and on that of his relatives, as witnessed by letters to Lord Nottingham, Richard Hampden and the MP Harbord Harbord that Smith quotes.[35] In a letter to the English envoy to Prussia, James Johnston, Wallis bitterly complains that a courtier of the Prussian Elector, by the name of Smetteau, had done him wrong in the matter of just compensation for services rendered to the Elector. In the letter he gives details of what he had done and gives advice on a simple substitution cipher for the use of Johnston himself.[36]
Wallis' contributions to the art of cryptography were not only of a "technological" character. De Leeuw points out that even Wallis' "purely scientific" contributions to linguistics, concerning the "rationality" of natural language as it developed over time, played a role in the development of cryptology as a science. Wallis' development of a model of English grammar, independent of earlier models based on Latin grammar, is in his view a case in point of the way other sciences helped develop cryptology.[37]
Wallis tried to teach his own son John, and his grandson by his daughter Anne, William Blencowe, the tricks of the trade. With William he was so successful that he could persuade the government to grant the grandson the survivorship of the annual pension of £100 that Wallis had received in compensation for his cryptographic work.[38]
William Blencowe eventually succeeded Wallis as official Cryptographer to Queen Anne after Wallis' death in 1703.[39]
See also
• 31982 Johnwallis, an asteroid that was named after him
• Invisible College
• John Wallis Academy – former ChristChurch school in Ashford renamed in 2010
• Wallis's conical edge
• Wallis' integrals
Notes
1. Smith quotes his sometimes acrimonious letters to Nottingham and others.[30]
References
1. Joseph Frederick Scott, The mathematical work of John Wallis (1616-1703), Taylor and Francis, 1938, p. 109.
2. Random House Dictionary.
3. Smith, David Eugene (1917). "John Wallis As a Cryptographer". Bulletin of the American Mathematical Society. 24 (2): 82–96. doi:10.1090/s0002-9904-1917-03015-7. MR 1560009.
4. Chisholm, Hugh, ed. (1911). "Wallis, John" . Encyclopædia Britannica. Vol. 28 (11th ed.). Cambridge University Press. p. 284–285.
5. Kearns, D. A. (1958). "John Wallis and complex numbers". The Mathematics Teacher. 51 (5): 373–374. JSTOR 27955680.
6. Joan Thirsk (2005). "Blencowe, Anne, Lady Blencowe (1656–1718)". Oxford Dictionary of National Biography. Oxford University Press. Retrieved 21 August 2023.
7. "WALLIS, John (1650-1717), of Soundness, Nettlebed, Oxon". History of Parliament Online. Retrieved 21 August 2023.
8. "Elizabeth Wallis". Early Modern Letters Online : Person. Retrieved 21 August 2023.
9. Yule, G. Udny (1939). "John Wallis, D.D., F.R.S.". Notes and Records of the Royal Society of London. 2 (1): 74–82. doi:10.1098/rsnr.1939.0012. JSTOR 3087253.
10. "Wallys, John (WLS632J)". A Cambridge Alumni Database. University of Cambridge.
11. Kahn, David (1967), The Codebreakers: The Story of Secret Writing, New York: Macmillan, p. 169, LCCN 63016109
12. 4
13. "Find could end 350-year science dispute". BBC. 26 July 2008. Retrieved 5 May 2018.
14. W. Holder, W. (1668). "Of an Experiment, Concerning Deafness". Philosophical Transactions of the Royal Society 3, pp. 665–668.
15. Holder, Philosophical Transactions of the Royal Society, supplement, 10.
16. John Wallis: Time-line via Oxford University
17. Scott, J.F. 1981. The Mathematical Work of John Wallis, D.D., F.R.S. (1616–1703). Chelsea Publishing Co. New York, NY. p. 18.
18. Heeffer, Albrecht (10 March 2011). "Historical Objections Against the Number Line". Science & Education. 20 (9): 863–880 [872–876]. Bibcode:2011Sc&Ed..20..863H. doi:10.1007/s11191-011-9349-0. hdl:1854/LU-1891046. S2CID 120058064.
19. Núñez, Rafael (2017). "How Much Mathematics Is "Hardwired," If Any at All: Biological Evolution, Development, and the Essential Role of Culture" (PDF). In Sera, Maria D.; Carlson, Stephanie M.; Maratsos, Michael (eds.). Minnesota Symposium on Child Psychology: Culture and Developmental Systems, Volume 38. John Wiley & Sons, Inc. pp. 83–124 [96]. doi:10.1002/9781119301981.ch3.
20. Wallis, John (1685). A treatise of algebra, both historical and practical. London: Richard Davis. p. 265. MPIWG:GK8U243K.
21. Martínez, Alberto A. (2006). Negative Math: How Mathematical Rules Can Be Positively Bent. Princeton University Press. p. 22. ISBN 978-0-691-12309-7. Retrieved 9 June 2013.
22. Joseph, G.G. (2000). The Crest of the Peacock: Non-European Roots of Mathematics (2 ed.). Penguin. p. 337. ISBN 978-0-14-027778-4.
23. The Mathematics of Egypt, Mesopotamia, China, India, and Islam:A Sourcebook Victor J. Katz Princeton University Press Archived 1 October 2016 at the Wayback Machine
24. Burton, David M. (2011), The History of Mathematics / An Introduction (7th ed.), McGraw-Hill, p. 566, ISBN 978-0-07-338315-6
25. Dr. Wallis (1685) "Two extracts of the Journal of the Phil. Soc. of Oxford; one containing a paper, communicated March 31, 1685, by the Reverend Dr. Wallis, president of that society, concerning the strength of memory when applied with due attention; … ", Philosophical Transactions of the Royal Society of London, 15 : 1269-1271. Available on-line at: Royal Society of London
26. Hoppen, K. Theodore (2013), The Common Scientist of the Seventeenth Century: A Study of the Dublin Philosophical Society, 1683–1708, Routledge Library Editions: History & Philosophy of Science, vol. 15, Routledge, p. 157, ISBN 9781135028541
27. David Damschoder and David Russell Williams, Music Theory from Zarlino to Schenker: A Bibliography and Guide (Stytvesant, NY: Pendragon Press, 1990), p. 374.
28. Smith, p. 83
29. De Leeuw (1999), p. 138
30. Smith, pp. 83-86
31. Smith, p. 87
32. De Leeuw (1999), p. 139
33. Smith, pp. 83-84
34. Smith, pp. 85-87
35. Smith, pp. 89-93
36. Smith, pp. 94-96
37. De Leeuw (2000), p. 9
38. Cave, E., ed. (1788). "Original Letter of dr. Wallis with Some Particulars of his Pension". The Gentleman's Magazine. 63 (June 1788): 479–480. Retrieved 20 August 2023.
39. De Leeuw (1999), p.143
Sources
• The initial text of this article was taken from the public domain resource:
• W. W. Rouse Ball (1908). "A Short Account of the History of Mathematics" (4 ed.).
• Leeuw, K. de (1999). "The Black Chamber in the Dutch Republic during the War of the Spanish Succession and its Aftermath, 1707–1715" (PDF). The Historical Journal. 42 (1): 133–156. Retrieved 3 August 2023.
• Leeuw, K. de (2000). "The use of codes and ciphers in the Dutch Republic, mainly during the 18th century". Cryptology and statecraft in the Dutch Republic (PDF). Amsterdam. pp. 6–51. Retrieved 4 August 2023.
• Smith, David Eugene (1917). "John Wallis As a Cryptographer". Bulletin of the American Mathematical Society. 24 (2): 82–96. doi:10.1090/s0002-9904-1917-03015-7. MR 1560009.
• Scriba, C J (1970). "The autobiography of John Wallis, F.R.S.". Notes and Records of the Royal Society of London. 25: 17–46. doi:10.1098/rsnr.1970.0003. S2CID 145393357.
• Stedall, Jacqueline, 2005, "Arithmetica Infinitorum" in Ivor Grattan-Guinness, ed., Landmark Writings in Western Mathematics. Elsevier: 23–32.
• Guicciardini, Niccolò (2012) "John Wallis as editor of Newton's Mathematical Work", Notes and Records of the Royal Society of London 66(1): 3–17. Jstor link
• Stedall, Jacqueline A. (2001) "Of Our Own Nation: John Wallis's Account of Mathematical Learning in Medieval England", Historia Mathematica 28: 73.
• Wallis, J. (1691). A seventh letter, concerning the sacred Trinity occasioned by a second letter from W.J. / by John Wallis ... (Early English books online). London: Printed for Tho. Parkhurst ...
External links
• Media related to John Wallis at Wikimedia Commons
Wikiquote has quotations related to John Wallis.
• The Correspondence of John Wallis in EMLO
• "Wallis, John (1616-1703)" . Dictionary of National Biography. London: Smith, Elder & Co. 1885–1900.
• O'Connor, John J.; Robertson, Edmund F., "John Wallis", MacTutor History of Mathematics Archive, University of St Andrews
• Galileo Project page
• "Archival material relating to John Wallis". UK National Archives.
• Portraits of John Wallis at the National Portrait Gallery, London
• Works by John Wallis at Post-Reformation Digital Library
• John Wallis (1685) A treatise of algebra - digital facsimile, Linda Hall Library
• Wallis, John (1685). A Treatise of Algebra, both Historical and Practical. Shewing the Original, Progress, and Advancement thereof, from time to time, and by what Steps it hath attained to the Heighth at which it now is. Oxford: Richard Davis. doi:10.3931/e-rara-8842.
• Hutchinson, John (1892). "John Wallis" . Men of Kent and Kentishmen (Subscription ed.). Canterbury: Cross & Jackman. pp. 139–140.
Savilian Professors
Chairs established by Sir Henry Savile
Savilian Professors of Astronomy
• John Bainbridge (1620)
• John Greaves (1642)
• Seth Ward (1649)
• Christopher Wren (1661)
• Edward Bernard (1673)
• David Gregory (1691)
• John Caswell (1709)
• John Keill (1712)
• James Bradley (1721)
• Thomas Hornsby (1763)
• Abraham Robertson (1810)
• Stephen Rigaud (1827)
• George Johnson (1839)
• William Donkin (1842)
• Charles Pritchard (1870)
• Herbert Turner (1893)
• Harry Plaskett (1932)
• Donald Blackwell (1960)
• George Efstathiou (1994)
• Joseph Silk (1999)
• Steven Balbus (2012)
Savilian Professors of Geometry
• Henry Briggs (1619)
• Peter Turner (1631)
• John Wallis (1649)
• Edmond Halley (1704)
• Nathaniel Bliss (1742)
• Joseph Betts (1765)
• John Smith (1766)
• Abraham Robertson (1797)
• Stephen Rigaud (1810)
• Baden Powell (1827)
• Henry John Stephen Smith (1861)
• James Joseph Sylvester (1883)
• William Esson (1897)
• Godfrey Harold Hardy (1919)
• Edward Charles Titchmarsh (1931)
• Michael Atiyah (1963)
• Ioan James (1969)
• Richard Taylor (1995)
• Nigel Hitchin (1997)
• Frances Kirwan (2017)
Keeper of the Archives of the University of Oxford
• Brian Twyne (1634)
• Gerard Langbaine (1644)
• John Wallis (1658)
• Bernard Gardiner (1703)
• Francis Wise (1726)
• John Swinton (1767)
• Benjamin Buckler (1777)
• Thomas Wenman (1781)
• Whittington Landon (1796)
• James Ingram (1815)
• George Leigh Cooke (1818)
• Philip Bliss (1826)
• John Griffiths (1857)
• Thomas Vere Bayne (1885)
• Reginald Lane Poole (1909)
• Strickland Gibson (1927)
• William Abel Pantin (1946)
• Trevor Aston (1969)
• Jeffrey Hackney (1987)
• David Vaisey (1995)
• Simon Bailey (2000)
• Faye McLeod (2020)
| Wikipedia |
Wallis Professor of Mathematics
The Wallis Professorship of Mathematics is a chair in the Mathematical Institute of the University of Oxford. It was established in 1969 in honour of John Wallis, who was Savilian Professor of Geometry at Oxford from 1649 to 1703.
List of Wallis Professors of Mathematics
• 1969 to 1985: John Kingman[1]
• 1985 to 1997: Simon Donaldson[2]
• 1999 to 2022: Terence Lyons[2]
• 2022 to date: Massimiliano Gubinelli[3]
See also
• List of professorships at the University of Oxford
References
1. "Terry Lyons to be LMS President" (PDF). Oxford Mathematical Institute Newsletter. University of Oxford (11): 3. Spring 2013. ... several Oxford mathematicians have been LMS President at other stages of their careers [including] Wallis Professor Sir John Kingman (1990) ...
2. Alexanderson, Gerald (2012). "John Wallis and Oxford" (PDF). Bulletin of the A.M.S. 49 (3): 443–446. doi:10.1090/S0273-0979-2012-01377-0.
3. "New Wallis Professor of Mathematics".
| Wikipedia |
Wallis's conical edge
In geometry, Wallis's conical edge is a ruled surface given by the parametric equations
$x=v\cos u,\quad y=v\sin u,\quad z=c{\sqrt {a^{2}-b^{2}\cos ^{2}u}}$
where a, b and c are constants.
Wallis's conical edge is also a kind of right conoid. It is named after the English mathematician John Wallis, who was one of the first to use Cartesian methods to study conic sections.[1]
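The parametrization is straightforward to evaluate numerically. Below is a minimal Python sketch (the function name and the sample constants a, b, c are illustrative, not from the source); since the z-coordinate depends only on u, each ruling u = const is a horizontal straight line:

```python
import math

def wallis_conical_edge(u, v, a=1.0, b=0.5, c=1.0):
    """Point (x, y, z) on Wallis's conical edge; requires a**2 >= b**2
    so the square root stays real for every u."""
    x = v * math.cos(u)
    y = v * math.sin(u)
    z = c * math.sqrt(a**2 - b**2 * math.cos(u)**2)
    return (x, y, z)

# z depends only on u, so two points on the same ruling share their height:
p0 = wallis_conical_edge(0.0, 0.0)
p1 = wallis_conical_edge(0.0, 2.0)
print(p0[2] == p1[2])  # True
```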
See also
• Ruled surface
• Right conoid
References
1. Abbena, Elsa; Salamon, Simon; Gray, Alfred (21 June 2006). Modern Differential Geometry of Curves and Surfaces with Mathematica, Third Edition. ISBN 9781584884484.
• A. Gray, E. Abbena, S. Salamon, Modern differential geometry of curves and surfaces with Mathematica, 3rd ed. Boca Raton, Florida:CRC Press, 2006. (ISBN 978-1-58488-448-4)
External links
• Wallis's Conical Edge from MathWorld.
| Wikipedia |
Wallman compactification
In mathematics, the Wallman compactification, generally called Wallman–Shanin compactification is a compactification of T1 topological spaces that was constructed by Wallman (1938).
Definition
The points of the Wallman compactification ωX of a space X are the maximal proper filters in the poset of closed subsets of X. Explicitly, a point of ωX is a family ${\mathcal {F}}$ of closed nonempty subsets of X such that ${\mathcal {F}}$ is closed under finite intersections, and is maximal among those families that have these properties. For every closed subset F of X, the class ΦF of points of ωX containing F is closed in ωX. The topology of ωX is generated by these closed classes.
Special cases
For normal spaces, the Wallman compactification is essentially the same as the Stone–Čech compactification.
See also
• Lattice (order)
• Pointless topology
References
• Aleksandrov, P.S. (2001) [1994], "Wallman_compactification", Encyclopedia of Mathematics, EMS Press
• Wallman, Henry (1938), "Lattices and topological spaces", Annals of Mathematics, 39 (1): 112–126, doi:10.2307/1968717, JSTOR 1968717
| Wikipedia |
Wally Smith (mathematician)
Walter Laws Smith (November 12, 1926 – March 6, 2023) was a British-born American mathematician, known for his contributions to applied probability theory.[1]
Wally Smith
Born: November 12, 1926, London, England
Died: March 6, 2023 (aged 96), Chapel Hill, North Carolina, U.S.
Alma mater: University of Cambridge
Awards: Adams Prize (1960), Guggenheim Fellowship
Institutions: University of North Carolina at Chapel Hill
Thesis: Stochastic Sequences of Events (1954)
Doctoral advisors: Henry Daniels, David Cox
Website: http://www.stat.unc.edu/faculty/wsmith.html
Biography
Smith was born in London on November 12, 1926.[2]
Smith received a B.A. in mathematics (1947) from the University of Cambridge, having gained First Class in the Mathematical Tripos Part 1 and Part 2. He then received an M.A. (1951) and a Ph.D. (1953) from Cambridge. His dissertation, entitled Stochastic Sequences of Events, was advised by Henry Daniels and D. R. Cox, with whom he published the book Queues (1961) and other work in his early years.[3] He became a professor of statistics at the University of North Carolina at Chapel Hill (1954–56 and 1958–), and he was an emeritus professor in the Department of Statistics and Operations Research.[4]
Smith was a fellow of the Institute of Mathematical Statistics, a fellow of the American Statistical Association (1966), a winner of the Adams Prize at the University of Cambridge (1960), a Sir Winston Churchill overseas fellow, and a recipient of a Guggenheim Fellowship (see List of Guggenheim Fellowships awarded in 1974).
Smith died in Chapel Hill, North Carolina, on March 6, 2023, at the age of 96.[5]
Publications
• The superimposition of several strictly periodic sequences of events, in Biometrika, 40(?), 1953. With Cox.
• A direct proof of a fundamental theorem of renewal theory, in Skandinavisk Aktuartidsskrift, 36(?), 1953
• On the superposition of renewal processes, in Biometrika, 41(1–2):91–99, 1954. With Cox.
• A note on truncation and sufficient statistics in The Annals of Mathematical Statistics, 28(1):247–252, 1957
• On the distribution of Tribolium confusum in a container, in Biometrika, 44(?), 1957. With Cox.
• Renewal theory and its ramifications, in Journal of the Royal Statistical Society, 20(2):243–302, 1958
• On the elementary renewal theorem for non-identically distributed variables, in Pacific Journal of Mathematics, 14(2):673–699, 1964
• Congestion Theory, Proceedings of the Symposium on Congestion Theory, The University of North Carolina Monograph Series in Probability and Statistics., 1965. With William E. Wilkinson (editors).
• Necessary conditions for almost sure extinction of a branching process with random environment, Annals of Mathematical Statistics, 39(?):2136–2140, 1968
• Branching processes in Markovian environments in Duke Mathematical Journal 38(4):749–763, 1971. With William E. Wilkinson
• Harold Hotelling 1895–1973 in The Annals of Statistics, 6(6):1173–1183, 1978
• On transient regenerative processes in Journal of Applied Probability, 23(?):52–70, 1986. With E. Murphree.
References
1. "Walter Smith Obituary | UNC Statistics & Operations Research". stor.unc.edu. Retrieved 2023-04-22.
2. Jaques Cattell Press (1982). "American Men and Women of Science: The physical and biological sciences". American Men & Women of Science. Bowker (v. 6, v. 15, no. 6). ISSN 0192-8570. Retrieved 2015-04-14.
3. Wally Smith at the Mathematics Genealogy Project
4. homepage at unc.edu
5. "Walter Laws Smith". Legacy. Retrieved 19 March 2023.
| Wikipedia |
Walsh function
In mathematics, more specifically in harmonic analysis, Walsh functions form a complete orthogonal set of functions that can be used to represent any discrete function—just like trigonometric functions can be used to represent any continuous function in Fourier analysis.[1] They can thus be viewed as a discrete, digital counterpart of the continuous, analog system of trigonometric functions on the unit interval. But unlike the sine and cosine functions, which are continuous, Walsh functions are piecewise constant. They take the values −1 and +1 only, on sub-intervals defined by dyadic fractions.
The system of Walsh functions is known as the Walsh system. It is an extension of the Rademacher system of orthogonal functions.[2]
Walsh functions, the Walsh system, the Walsh series,[3] and the fast Walsh–Hadamard transform are all named after the American mathematician Joseph L. Walsh. They find various applications in physics and engineering when analyzing digital signals.
Historically, various numerations of Walsh functions have been used; none of them is particularly superior to another. This article uses the Walsh–Paley numeration.
Definition
We define the sequence of Walsh functions $W_{k}:[0,1]\rightarrow \{-1,1\}$, $k\in \mathbb {N} $ as follows.
For any natural number k, and real number $x\in [0,1]$, let
$k_{j}$ be the jth bit in the binary representation of k, starting with $k_{0}$ as the least significant bit, and
$x_{j}$ be the jth bit in the fractional binary representation of $x$, starting with $x_{1}$ as the most significant fractional bit.
Then, by definition
$W_{k}(x)=(-1)^{\sum _{j=0}^{\infty }k_{j}x_{j+1}}$
In particular, $W_{0}(x)=1$ everywhere on the interval, since all bits of k are zero.
Notice that $W_{2^{m}}$ is precisely the Rademacher function $r_{m}$. Thus, the Rademacher system is a subsystem of the Walsh system. Moreover, every Walsh function is a product of Rademacher functions:
$W_{k}(x)=\prod _{j=0}^{\infty }r_{j}(x)^{k_{j}}$
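The bitwise definition translates directly into code. The following is a minimal Python sketch (the function name is an illustrative choice); it is exact for dyadic rationals, while arbitrary floats are subject to binary rounding:

```python
def walsh(k: int, x: float) -> int:
    """Walsh function W_k(x) in Walsh-Paley numeration.

    k_j is the j-th bit of k (least significant first); x_{j+1} is the
    (j+1)-th fractional binary bit of x (most significant first), so
    W_k(x) = (-1)**sum(k_j * x_{j+1}).
    """
    assert 0.0 <= x < 1.0
    exponent = 0
    j = 0
    while k >> j:                              # only finitely many bits of k are set
        if (k >> j) & 1:                       # k_j
            exponent += int(x * 2 ** (j + 1)) & 1   # x_{j+1}
        j += 1
    return (-1) ** exponent

# W_0 is identically 1; W_1 = r_0 flips sign at x = 1/2.
print([walsh(0, x / 8) for x in range(8)])  # [1, 1, 1, 1, 1, 1, 1, 1]
print([walsh(1, x / 8) for x in range(8)])  # [1, 1, 1, 1, -1, -1, -1, -1]
```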
Comparison between Walsh functions and trigonometric functions
Walsh functions and trigonometric functions are both systems that form a complete, orthonormal set of functions, an orthonormal basis in Hilbert space $L^{2}[0,1]$ of the square-integrable functions on the unit interval. Both are systems of bounded functions, unlike, say, the Haar system or the Franklin system.
Both trigonometric and Walsh systems admit natural extension by periodicity from the unit interval to the real line $\mathbb {R} $. Furthermore, both Fourier analysis on the unit interval (Fourier series) and on the real line (Fourier transform) have their digital counterparts defined via Walsh system, the Walsh series analogous to the Fourier series, and the Hadamard transform analogous to the Fourier transform.
Properties
The Walsh system $\{W_{k}\},k\in \mathbb {N} _{0}$ is a commutative multiplicative discrete group isomorphic to $\coprod _{n=0}^{\infty }\mathbb {Z} /2\mathbb {Z} $, the Pontryagin dual of Cantor group $\prod _{n=0}^{\infty }\mathbb {Z} /2\mathbb {Z} $. Its identity is $W_{0}$, and every element is of order two (that is, self-inverse).
The Walsh system is an orthonormal basis of Hilbert space $L^{2}[0,1]$. Orthonormality means
$\int _{0}^{1}W_{k}(x)W_{l}(x)dx=\delta _{kl}$,
and being a basis means that if, for every $f\in L^{2}[0,1]$, we set $f_{k}=\int _{0}^{1}f(x)W_{k}(x)dx$ then
$\int _{0}^{1}(f(x)-\sum _{k=0}^{N}f_{k}W_{k}(x))^{2}dx{\xrightarrow[{N\rightarrow \infty }]{}}0$
It turns out that for every $f\in L^{2}[0,1]$, the series $\sum _{k=0}^{\infty }f_{k}W_{k}(x)$ converges to $f(x)$ for almost every $x\in [0,1]$.
The Walsh system (in Walsh-Paley numeration) forms a Schauder basis in $L^{p}[0,1]$, $1<p<\infty $. Note that, unlike the Haar system, and like the trigonometric system, this basis is not unconditional, nor is the system a Schauder basis in $L^{1}[0,1]$.
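Orthonormality can be verified numerically: for $k,l<2^{n}$ the Walsh functions are constant on the dyadic intervals $[i/2^{n},(i+1)/2^{n})$, so the integral reduces exactly to a finite average. A self-contained Python sketch (the helper name is illustrative):

```python
def walsh_at(k: int, i: int, n: int) -> int:
    """W_k evaluated at the dyadic point x = i / 2**n, for k < 2**n.

    The j-th bit of k (least significant first) pairs with the (j+1)-th
    fractional bit of x, which is bit (n - 1 - j) of the integer i.
    """
    e = sum(((k >> j) & 1) * ((i >> (n - 1 - j)) & 1) for j in range(n))
    return (-1) ** e

n = 3
N = 2 ** n
for k in range(N):
    for l in range(N):
        # Exact value of the integral of W_k * W_l over [0, 1]:
        ip = sum(walsh_at(k, i, n) * walsh_at(l, i, n) for i in range(N)) / N
        assert ip == (1.0 if k == l else 0.0)
print("first", N, "Walsh functions are orthonormal")
```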
Generalizations
Walsh-Ferleger systems
Let $\mathbb {D} =\prod _{n=1}^{\infty }\mathbb {Z} /2\mathbb {Z} $ be the compact Cantor group endowed with Haar measure and let ${\hat {\mathbb {D} }}=\coprod _{n=1}^{\infty }\mathbb {Z} /2\mathbb {Z} $ be its discrete group of characters. Elements of ${\hat {\mathbb {D} }}$ are readily identified with Walsh functions. Of course, the characters are defined on $\mathbb {D} $ while Walsh functions are defined on the unit interval, but since there exists a modulo zero isomorphism between these measure spaces, measurable functions on them are identified via isometry.
Then basic representation theory suggests the following broad generalization of the concept of Walsh system.
For an arbitrary Banach space $(X,||\cdot ||)$ let $\{R_{t}\}_{t\in \mathbb {D} }\subset Aut(X)$ be a strongly continuous, uniformly bounded faithful action of $\mathbb {D} $ on X. For every $\gamma \in {\hat {\mathbb {D} }}$, consider its eigenspace $X_{\gamma }=\{x\in X:R_{t}x=\gamma (t)x\}$. Then X is the closed linear span of the eigenspaces: $X={\overline {\operatorname {Span} }}(X_{\gamma },\gamma \in {\hat {\mathbb {D} }})$. Assume that every eigenspace is one-dimensional and pick an element $w_{\gamma }\in X_{\gamma }$ such that $||w_{\gamma }||=1$. Then the system $\{w_{\gamma }\}_{\gamma \in {\hat {\mathbb {D} }}}$, or the same system in the Walsh-Paley numeration of the characters $\{w_{k}\}_{k\in {\mathbb {N} }_{0}}$ is called generalized Walsh system associated with action $\{R_{t}\}_{t\in \mathbb {D} }$. Classical Walsh system becomes a special case, namely, for
$R_{t}:x=\sum _{j=1}^{\infty }x_{j}2^{-j}\mapsto \sum _{j=1}^{\infty }(x_{j}\oplus t_{j})2^{-j}$
where $\oplus $ is addition modulo 2.
In the early 1990s, Serge Ferleger and Fyodor Sukochev showed that in a broad class of Banach spaces (so called UMD spaces [4]) generalized Walsh systems have many properties similar to the classical one: they form a Schauder basis [5] and a uniform finite dimensional decomposition [6] in the space, have property of random unconditional convergence.[7] One important example of generalized Walsh system is Fermion Walsh system in non-commutative Lp spaces associated with hyperfinite type II factor.
Fermion Walsh system
The Fermion Walsh system is a non-commutative, or "quantum" analog of the classical Walsh system. Unlike the latter, it consists of operators, not functions. Nevertheless, both systems share many important properties, e.g., both form an orthonormal basis in corresponding Hilbert space, or Schauder basis in corresponding symmetric spaces. Elements of the Fermion Walsh system are called Walsh operators.
The term Fermion in the name of the system is explained by the fact that the enveloping operator space, the so-called hyperfinite type II factor ${\mathcal {R}}$, may be viewed as the space of observables of the system of countably infinite number of distinct spin ${\frac {1}{2}}$ fermions. Each Rademacher operator acts on one particular fermion coordinate only, and there it is a Pauli matrix. It may be identified with the observable measuring spin component of that fermion along one of the axes $\{x,y,z\}$ in spin space. Thus, a Walsh operator measures the spin of a subset of fermions, each along its own axis.
Vilenkin system
Fix a sequence $\alpha =(\alpha _{1},\alpha _{2},...)$ of integers with $\alpha _{k}\geq 2,k=1,2,\dots $ and let $\mathbb {G} =\mathbb {G} _{\alpha }=\prod _{k=1}^{\infty }\mathbb {Z} /\alpha _{k}\mathbb {Z} $ be endowed with the product topology and the normalized Haar measure. Define $A_{0}=1$ and $A_{k}=\alpha _{1}\alpha _{2}\dots \alpha _{k}$ for $k\geq 1$. Each $x\in \mathbb {G} $ can be associated with the real number
$\left|x\right|=\sum _{k=1}^{\infty }{\frac {x_{k}}{A_{k}}}\in \left[0,1\right].$
This correspondence is a modulo zero isomorphism between $\mathbb {G} $ and the unit interval. It also defines a norm which generates the topology of $\mathbb {G} $. For $k=1,2,\dots $, let $\rho _{k}:\mathbb {G} \to \mathbb {C} $ where
$\rho _{k}(x)=\exp(i{\frac {2\pi x_{k}}{\alpha _{k}}})=\cos({\frac {2\pi x_{k}}{\alpha _{k}}})+i\sin({\frac {2\pi x_{k}}{\alpha _{k}}}).$
The set $\{\rho _{k}\}$ is called the generalized Rademacher system. The Vilenkin system is the group ${\hat {\mathbb {G} }}=\coprod _{k=1}^{\infty }\mathbb {Z} /\alpha _{k}\mathbb {Z} $ of (complex-valued) characters of $\mathbb {G} $, which are all finite products of $\{\rho _{k}\}$. For each non-negative integer $n$ there is a unique sequence $n_{0},n_{1},\dots $ such that $0\leq n_{k}<\alpha _{k+1},k=0,1,2,\dots $ and
$n=\sum _{k=0}^{\infty }n_{k}A_{k}.$
Then ${\hat {\mathbb {G} }}=\left\{\chi _{n}|n=0,1,\dots \right\}$ where
$\chi _{n}=\prod _{k=0}^{\infty }\rho _{k+1}^{n_{k}}.$
In particular, if $\alpha _{k}=2,k=1,2...$, then $\mathbb {G} $ is the Cantor group and ${\hat {\mathbb {G} }}=\left\{\chi _{n}|n=0,1,\dots \right\}$ is the (real-valued) Walsh-Paley system.
The Vilenkin system is a complete orthonormal system on $\mathbb {G} $ and forms a Schauder basis in $L^{p}(\mathbb {G} ,\mathbb {C} )$, $1<p<\infty $.[8]
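Since each character $\chi _{n}$ is a finite product of generalized Rademacher functions, it can be evaluated from the mixed-radix digits of $n$ and the coordinates of a point of $\mathbb {G} $. A minimal Python sketch (the function name and digit conventions are illustrative assumptions):

```python
import cmath

def vilenkin_char(n, x_digits, alpha):
    """chi_n at the point of G with coordinates x_digits = (x_1, x_2, ...).

    alpha[k] is the order of the (k+1)-th factor Z/alpha_{k+1}Z; n is
    expanded in the mixed radix A_k, and chi_n = prod_k rho_{k+1}**n_k
    with rho_k(x) = exp(2*pi*i*x_k / alpha_k).
    """
    value = 1 + 0j
    k = 0
    while n:
        n_k = n % alpha[k]          # digit n_k of the mixed-radix expansion of n
        n //= alpha[k]
        rho = cmath.exp(2j * cmath.pi * x_digits[k] / alpha[k])
        value *= rho ** n_k
        k += 1
    return value

# With alpha = (2, 2, ...) the characters take values +/-1: the Walsh-Paley system.
alpha = [2] * 8
x = [1, 1, 0, 0, 0, 0, 0, 0]        # corresponds to 1/2 + 1/4 = 3/4 in [0, 1]
print(round(vilenkin_char(3, x, alpha).real))  # 1, matching W_3(3/4) = (-1)**(1+1)
```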
Binary Surfaces
Romanuke showed that Walsh functions can be generalized to binary surfaces in the particular case of a function of two variables.[9] There also exist eight Walsh-like bases of orthonormal binary functions,[10] whose structure is nonregular (unlike the structure of the Walsh functions). These eight bases also generalize to surfaces for functions of two variables. It was proved that piecewise-constant functions can be represented within each of the nine bases (including the Walsh-function basis) as finite sums of binary functions weighted with proper coefficients.[11]
Nonlinear Phase Extensions
Nonlinear phase extensions of the discrete Walsh–Hadamard transform have been developed. It was shown that nonlinear phase basis functions with improved cross-correlation properties significantly outperform the traditional Walsh codes in code-division multiple access (CDMA) communications.[12]
Applications
Applications of the Walsh functions can be found wherever digit representations are used, including speech recognition, medical and biological image processing, and digital holography.
For example, the fast Walsh–Hadamard transform (FWHT) may be used in the analysis of digital quasi-Monte Carlo methods. In radio astronomy, Walsh functions can help reduce the effects of electrical crosstalk between antenna signals. They are also used in passive LCD panels as X and Y binary driving waveforms where the autocorrelation between X and Y can be made minimal for pixels that are off.
See also
• Discrete Fourier transform
• Fast Fourier transform
• Harmonic analysis
• Orthogonal functions
• Walsh matrix
• Parity function
Notes
1. Walsh 1923.
2. Fine 1949.
3. Schipp, Wade & Simon 1990.
4. Pisier 2011.
5. Sukochev & Ferleger 1995.
6. Ferleger & Sukochev 1996.
7. Ferleger 1998.
8. Young 1976
9. Romanuke 2010a.
10. Romanuke 2010b.
11. Romanuke 2010c.
12. A.N. Akansu and R. Poluri, "Walsh-Like Nonlinear Phase Orthogonal Codes for Direct Sequence CDMA Communications," IEEE Trans. Signal Process., vol. 55, no. 7, pp. 3800–3806, July 2007.
References
• Ferleger, Sergei V. (March 1998). RUC-Systems In Non-Commutative Symmetric Spaces (Technical report). MP-ARC-98-188.
• Ferleger, Sergei V.; Sukochev, Fyodor A. (March 1996). "On the contractibility to a point of the linear groups of reflexive non-commutative Lp-spaces". Mathematical Proceedings of the Cambridge Philosophical Society. 119 (3): 545–560. Bibcode:1996MPCPS.119..545F. doi:10.1017/s0305004100074405. S2CID 119786894.
• Fine, N.J. (1949). "On the Walsh functions". Trans. Amer. Math. Soc. 65 (3): 372–414. doi:10.1090/s0002-9947-1949-0032833-2.
• Pisier, Gilles (2011). Martingales in Banach Spaces (in connection with Type and Cotype). Course IHP (PDF).
• Romanuke, V. V. (2010a). "On the Point of Generalizing the Walsh Functions to Surfaces".
• Romanuke, V. V. (2010b). "Generalization of the Eight Known Orthonormal Bases of Binary Functions to Surfaces".
• Romanuke, V. V. (2010c). "Equidistantly Discrete on the Argument Axis Functions and their Representation in the Orthonormal Bases Series".
• Schipp, Ferenc; Wade, W.R.; Simon, P. (1990). Walsh series. An introduction to dyadic harmonic analysis. Akadémiai Kiadó.
• Sukochev, Fyodor A.; Ferleger, Sergei V. (December 1995). "Harmonic analysis in (UMD)-spaces: Applications to the theory of bases". Mathematical Notes. 58 (6): 1315–1326. doi:10.1007/bf02304891. S2CID 121256402.
• Walsh, J.L. (1923). "A closed set of normal orthogonal functions". Amer. J. Math. 45 (1): 5–24. doi:10.2307/2387224. JSTOR 2387224. S2CID 6131655.
• Young, W.-S. (1976). "Mean convergence of generalized Walsh-Fourier series". Trans. Amer. Math. Soc. 218: 311–320. doi:10.1090/s0002-9947-1976-0394022-8. JSTOR 1997441. S2CID 53755959.
External links
• "Walsh functions". MathWorld.
• "Walsh functions". Encyclopedia of Mathematics.
• "Walsh system". Encyclopedia of Mathematics.
• "Walsh functions". Stanford Exploration Project.
| Wikipedia |
Hadamard code
The Hadamard code is an error-correcting code named after Jacques Hadamard that is used for error detection and correction when transmitting messages over very noisy or unreliable channels. In 1971, the code was used to transmit photos of Mars back to Earth from the NASA space probe Mariner 9.[1] Because of its unique mathematical properties, the Hadamard code is not only used by engineers, but also intensely studied in coding theory, mathematics, and theoretical computer science. The Hadamard code is also known under the names Walsh code, Walsh family,[2] and Walsh–Hadamard code[3] in recognition of the American mathematician Joseph Leonard Walsh.
Hadamard code
Named after: Jacques Hadamard
Classification
Type: Linear block code
Block length: $n=2^{k}$
Message length: $k$
Rate: $k/2^{k}$
Distance: $d=2^{k-1}$
Alphabet size: $2$
Notation: $[2^{k},k,2^{k-1}]_{2}$-code
Augmented Hadamard code
Named after: Jacques Hadamard
Classification
Type: Linear block code
Block length: $n=2^{k}$
Message length: $k+1$
Rate: $(k+1)/2^{k}$
Distance: $d=2^{k-1}$
Alphabet size: $2$
Notation: $[2^{k},k+1,2^{k-1}]_{2}$-code
The Hadamard code is an example of a linear code of length $2^{m}$ over a binary alphabet. Unfortunately, this term is somewhat ambiguous as some references assume a message length $k=m$ while others assume a message length of $k=m+1$. In this article, the first case is called the Hadamard code while the second is called the augmented Hadamard code.
The Hadamard code is unique in that each non-zero codeword has a Hamming weight of exactly $2^{k-1}$, which implies that the distance of the code is also $2^{k-1}$. In standard coding theory notation for block codes, the Hadamard code is a $[2^{k},k,2^{k-1}]_{2}$-code, that is, it is a linear code over a binary alphabet, has block length $2^{k}$, message length (or dimension) $k$, and minimum distance $2^{k}/2$. The block length is very large compared to the message length, but on the other hand, errors can be corrected even in extremely noisy conditions.
The augmented Hadamard code is a slightly improved version of the Hadamard code; it is a $[2^{k},k+1,2^{k-1}]_{2}$-code and thus has a slightly better rate while maintaining the relative distance of $1/2$, and is thus preferred in practical applications. In communication theory, this is simply called the Hadamard code and it is the same as the first order Reed–Muller code over the binary alphabet.[4]
Normally, Hadamard codes are based on Sylvester's construction of Hadamard matrices, but the term “Hadamard code” is also used to refer to codes constructed from arbitrary Hadamard matrices, which are not necessarily of Sylvester type. In general, such a code is not linear. Such codes were first constructed by Raj Chandra Bose and Sharadchandra Shankar Shrikhande in 1959.[5] If n is the size of the Hadamard matrix, the code has parameters $(n,2n,n/2)_{2}$, meaning it is a not-necessarily-linear binary code with 2n codewords of block length n and minimal distance n/2. The construction and decoding scheme described below apply for general n, but the property of linearity and the identification with Reed–Muller codes require that n be a power of 2 and that the Hadamard matrix be equivalent to the matrix constructed by Sylvester's method.
The Hadamard code is a locally decodable code, which provides a way to recover parts of the original message with high probability, while only looking at a small fraction of the received word. This gives rise to applications in computational complexity theory and particularly in the design of probabilistically checkable proofs. Since the relative distance of the Hadamard code is 1/2, normally one can only hope to recover from at most a 1/4 fraction of error. Using list decoding, however, it is possible to compute a short list of possible candidate messages as long as fewer than ${\frac {1}{2}}-\epsilon $ of the bits in the received word have been corrupted.
In code-division multiple access (CDMA) communication, the Hadamard code is referred to as Walsh Code, and is used to define individual communication channels. It is usual in the CDMA literature to refer to codewords as “codes”. Each user will use a different codeword, or “code”, to modulate their signal. Because Walsh codewords are mathematically orthogonal, a Walsh-encoded signal appears as random noise to a CDMA capable mobile terminal, unless that terminal uses the same codeword as the one used to encode the incoming signal.[6]
History
Hadamard code is the name that is most commonly used for this code in the literature. However, in modern use these error correcting codes are referred to as Walsh–Hadamard codes.
There is a reason for this:
Jacques Hadamard did not invent the code himself, but he defined Hadamard matrices around 1893, long before the first error-correcting code, the Hamming code, was developed in the 1940s.
The Hadamard code is based on Hadamard matrices, and while there are many different Hadamard matrices that could be used here, normally only Sylvester's construction of Hadamard matrices is used to obtain the codewords of the Hadamard code.
James Joseph Sylvester developed his construction of Hadamard matrices in 1867, which actually predates Hadamard's work on Hadamard matrices. Hence the name Hadamard code is disputed and sometimes the code is called Walsh code, honoring the American mathematician Joseph Leonard Walsh.
An augmented Hadamard code was used during the 1971 Mariner 9 mission to correct for picture transmission errors. The data words used during this mission were 6 bits long, which represented 64 grayscale values.
Because of limitations of the quality of the alignment of the transmitter at the time (due to Doppler Tracking Loop issues) the maximum useful data length was about 30 bits. Instead of using a repetition code, a [32, 6, 16] Hadamard code was used.
Errors of up to 7 bits per word could be corrected using this scheme. Compared to a 5-repetition code, the error correcting properties of this Hadamard code are much better, yet its rate is comparable. The efficient decoding algorithm was an important factor in the decision to use this code.
The circuitry used was called the "Green Machine". It employed the fast Fourier transform which can increase the decoding speed by a factor of three. Since the 1990s use of this code by space programs has more or less ceased, and the NASA Deep Space Network does not support this error correction scheme for its dishes that are greater than 26 m.
Constructions
While all Hadamard codes are based on Hadamard matrices, the constructions differ in subtle ways for different scientific fields, authors, and uses. Engineers, who use the codes for data transmission, and coding theorists, who analyse extremal properties of codes, typically want the rate of the code to be as high as possible, even if this means that the construction becomes mathematically slightly less elegant.
On the other hand, for many applications of Hadamard codes in theoretical computer science it is not so important to achieve the optimal rate, and hence simpler constructions of Hadamard codes are preferred since they can be analyzed more elegantly.
Construction using inner products
When given a binary message $x\in \{0,1\}^{k}$ of length $k$, the Hadamard code encodes the message into a codeword ${\text{Had}}(x)$ using an encoding function ${\text{Had}}:\{0,1\}^{k}\to \{0,1\}^{2^{k}}.$ This function makes use of the inner product $\langle x,y\rangle $ of two vectors $x,y\in \{0,1\}^{k}$, which is defined as follows:
$\langle x,y\rangle =\sum _{i=1}^{k}x_{i}y_{i}\ {\bmod {\ }}2\,.$
Then the Hadamard encoding of $x$ is defined as the sequence of all inner products with $x$:
${\text{Had}}(x)={\Big (}\langle x,y\rangle {\Big )}_{y\in \{0,1\}^{k}}$
As mentioned above, the augmented Hadamard code is used in practice since the Hadamard code itself is somewhat wasteful. This is because, if the first bit of $y$ is zero, $y_{1}=0$, then the inner product contains no information whatsoever about $x_{1}$, and hence, it is impossible to fully decode $x$ from those positions of the codeword alone. On the other hand, when the codeword is restricted to the positions where $y_{1}=1$, it is still possible to fully decode $x$. Hence it makes sense to restrict the Hadamard code to these positions, which gives rise to the augmented Hadamard encoding of $x$; that is, ${\text{pHad}}(x)={\Big (}\langle x,y\rangle {\Big )}_{y\in \{1\}\times \{0,1\}^{k-1}}$.
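The inner-product encoding can be sketched in a few lines of Python (the helper names `had` and `phad` are ours, not from the article; positions of the codeword are indexed by integers $y$, with bit $i$ of $y$ read as `(y >> i) & 1`):

```python
# Illustrative sketch: the Hadamard encoding of a k-bit message x is the
# list of inner products <x, y> mod 2 over all 2^k vectors y.

def had(x):
    """Return the Hadamard codeword of the bit list x (length 2**len(x))."""
    k = len(x)
    return [sum(((y >> i) & 1) * x[i] for i in range(k)) % 2
            for y in range(2 ** k)]

def phad(x):
    """Augmented Hadamard code: keep only positions whose first bit is 1."""
    k = len(x)
    return [sum(((y >> i) & 1) * x[i] for i in range(k)) % 2
            for y in range(2 ** k) if y & 1]
```

Note how `phad` realizes the restriction described above: for the message $10^{k-1}$, all retained inner products equal 1, so its augmented codeword is the all-ones vector.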
Construction using a generator matrix
The Hadamard code is a linear code, and all linear codes can be generated by a generator matrix $G$. This is a matrix such that ${\text{Had}}(x)=x\cdot G$ holds for all $x\in \{0,1\}^{k}$, where the message $x$ is viewed as a row vector and the vector-matrix product is understood in the vector space over the finite field $\mathbb {F} _{2}$. In particular, an equivalent way to write the inner product definition for the Hadamard code arises by using the generator matrix whose columns consist of all strings $y$ of length $k$, that is,
$G={\begin{pmatrix}\uparrow &\uparrow &&\uparrow \\y_{1}&y_{2}&\dots &y_{2^{k}}\\\downarrow &\downarrow &&\downarrow \end{pmatrix}}\,.$
where $y_{i}\in \{0,1\}^{k}$ is the $i$-th binary vector in lexicographical order. For example, the generator matrix for the Hadamard code of dimension $k=3$ is:
$G={\begin{bmatrix}0&0&0&0&1&1&1&1\\0&0&1&1&0&0&1&1\\0&1&0&1&0&1&0&1\end{bmatrix}}.$
The matrix $G$ is a $(k\times 2^{k})$-matrix and gives rise to the linear operator ${\text{Had}}:\{0,1\}^{k}\to \{0,1\}^{2^{k}}$.
The generator matrix of the augmented Hadamard code is obtained by restricting the matrix $G$ to the columns whose first entry is one. For example, the generator matrix for the augmented Hadamard code of dimension $k=3$ is:
$G'={\begin{bmatrix}1&1&1&1\\0&0&1&1\\0&1&0&1\end{bmatrix}}.$
Then ${\text{pHad}}:\{0,1\}^{k}\to \{0,1\}^{2^{k-1}}$ is a linear mapping with ${\text{pHad}}(x)=x\cdot G'$.
For general $k$, the generator matrix of the augmented Hadamard code is a parity-check matrix for the extended Hamming code of length $2^{k-1}$ and dimension $2^{k-1}-k$, which makes the augmented Hadamard code the dual code of the extended Hamming code. Hence an alternative way to define the Hadamard code is in terms of its parity-check matrix: the parity-check matrix of the Hadamard code is equal to the generator matrix of the Hamming code.
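The generator-matrix view can be checked directly against the $k=3$ example above (a sketch; the helper names are ours). Column $y$ of $G$ is the length-$k$ binary expansion of $y$ with the most significant bit in row 0, so that $x\cdot G$ reproduces the inner-product encoding:

```python
# Sketch: build the k x 2^k generator matrix whose columns are all k-bit
# strings in lexicographical order, and encode by x . G over F_2.

def generator_matrix(k):
    """k x 2^k matrix whose columns are all k-bit strings, MSB in row 0."""
    return [[(y >> (k - 1 - i)) & 1 for y in range(2 ** k)]
            for i in range(k)]

def encode(x, G):
    """Vector-matrix product over F_2."""
    return [sum(xi * gij for xi, gij in zip(x, col)) % 2
            for col in zip(*G)]

G = generator_matrix(3)  # identical to the k = 3 example in the text
```

Encoding the standard basis vector $(1,0,0)$ simply reads off the first row of $G$, and sums of rows give the other codewords.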
Construction using general Hadamard matrices
Hadamard codes are obtained from an n-by-n Hadamard matrix H. In particular, the 2n codewords of the code are the rows of H and the rows of −H. To obtain a code over the alphabet {0,1}, the mapping −1 ↦ 1, 1 ↦ 0, or, equivalently, x ↦ (1 − x)/2, is applied to the matrix elements. That the minimum distance of the code is n/2 follows from the defining property of Hadamard matrices, namely that their rows are mutually orthogonal. This implies that two distinct rows of a Hadamard matrix differ in exactly n/2 positions, and, since negation of a row does not affect orthogonality, that any row of H differs from any row of −H in n/2 positions as well, except when the rows correspond, in which case they differ in n positions.
To get the augmented Hadamard code above with $n=2^{k-1}$, the chosen Hadamard matrix H has to be of Sylvester type, which gives rise to a message length of $\log _{2}(2n)=k$.
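The Sylvester construction and the resulting code can be sketched as follows (helper names are ours): each doubling step forms $\begin{pmatrix}H&H\\H&-H\end{pmatrix}$, and the $2n$ codewords are the rows of $H$ and $-H$ after the mapping $x\mapsto (1-x)/2$:

```python
# Sketch: Sylvester-type Hadamard matrices and the code whose 2n codewords
# are the rows of H and -H, with entries mapped 1 -> 0 and -1 -> 1.

def sylvester(n):
    """n-by-n Hadamard matrix of Sylvester type (n must be a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-v for v in row] for row in H])
    return H

def hadamard_codewords(n):
    """The 2n codewords over {0, 1} obtained from the rows of H and -H."""
    H = sylvester(n)
    rows = H + [[-v for v in row] for row in H]
    return [[(1 - v) // 2 for v in row] for row in rows]
```

For $n=4$ this yields 8 codewords; as the text argues, any two distinct ones differ in $n/2=2$ positions, except a row and its negation, which differ in all 4.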
Distance
The distance of a code is the minimum Hamming distance between any two distinct codewords, i.e., the minimum number of positions at which two distinct codewords differ. Since the Walsh–Hadamard code is a linear code, the distance is equal to the minimum Hamming weight among all of its non-zero codewords. All non-zero codewords of the Walsh–Hadamard code have a Hamming weight of exactly $2^{k-1}$ by the following argument.
Let $x\in \{0,1\}^{k}$ be a non-zero message. Then the following value is exactly equal to the fraction of positions in the codeword that are equal to one:
$\Pr _{y\in \{0,1\}^{k}}{\big [}({\text{Had}}(x))_{y}=1{\big ]}=\Pr _{y\in \{0,1\}^{k}}{\big [}\langle x,y\rangle =1{\big ]}\,.$
The fact that the latter value is exactly $1/2$ is called the random subsum principle. To see that it is true, assume without loss of generality that $x_{1}=1$. Then, when conditioned on the values of $y_{2},\dots ,y_{k}$, the event is equivalent to $y_{1}\cdot x_{1}=b$ for some $b\in \{0,1\}$ depending on $x_{2},\dots ,x_{k}$ and $y_{2},\dots ,y_{k}$. The probability that $y_{1}=b$ happens is exactly $1/2$. Thus, in fact, all non-zero codewords of the Hadamard code have relative Hamming weight $1/2$, and thus, its relative distance is $1/2$.
The relative distance of the augmented Hadamard code is $1/2$ as well, but it no longer has the property that every non-zero codeword has relative weight exactly $1/2$, since the all-ones vector $1^{2^{k-1}}$ is a codeword of the augmented Hadamard code. This is because the vector $x=10^{k-1}$ encodes to ${\text{pHad}}(10^{k-1})=1^{2^{k-1}}$. Furthermore, whenever $x$ is non-zero and not the vector $10^{k-1}$, the random subsum principle applies again, and the relative weight of ${\text{pHad}}(x)$ is exactly $1/2$.
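The random subsum principle can be checked exhaustively for small $k$ (a sketch with our own helper; positions are indexed by integers $y$ with bit $i$ read as `(y >> i) & 1`):

```python
# Exhaustive check for small k: every non-zero message maps to a Hadamard
# codeword of Hamming weight exactly 2^(k-1), i.e. relative weight 1/2.

def had(x):
    k = len(x)
    return [sum(((y >> i) & 1) * x[i] for i in range(k)) % 2
            for y in range(2 ** k)]

def all_nonzero_weights(k):
    """Set of Hamming weights taken by codewords of non-zero messages."""
    weights = set()
    for m in range(1, 2 ** k):
        x = [(m >> i) & 1 for i in range(k)]
        weights.add(sum(had(x)))
    return weights
```

For every $k$ tried, the only weight that occurs is $2^{k-1}$, matching the argument above.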
Local decodability
A locally decodable code is a code that allows a single bit of the original message to be recovered with high probability by only looking at a small portion of the received word.
A code is $q$-query locally decodable if a message bit, $x_{i}$, can be recovered by checking $q$ bits of the received word. More formally, a code, $C:\{0,1\}^{k}\rightarrow \{0,1\}^{n}$, is $(q,\delta \geq 0,\epsilon \geq 0)$-locally decodable, if there exists a probabilistic decoder, $D:\{0,1\}^{n}\rightarrow \{0,1\}^{k}$, such that (Note: $\Delta (x,y)$ represents the Hamming distance between vectors $x$ and $y$):
$\forall x\in \{0,1\}^{k},\forall y\in \{0,1\}^{n}$, $\Delta (y,C(x))\leq \delta n$ implies that $Pr[D(y)_{i}=x_{i}]\geq {\frac {1}{2}}+\epsilon ,\forall i\in [k]$
Theorem 1: The Walsh–Hadamard code is $(2,\delta ,{\frac {1}{2}}-2\delta )$-locally decodable for all $0\leq \delta \leq {\frac {1}{4}}$.
Lemma 1: For all codewords, $c$ in a Walsh–Hadamard code, $C$, $c_{i}+c_{j}=c_{i+j}$, where $c_{i},c_{j}$ represent the bits in $c$ in positions $i$ and $j$ respectively, and $c_{i+j}$ represents the bit at position $(i+j)$.
Proof of lemma 1
Let $C(x)=c=(c_{0},\dots ,c_{2^{n}-1})$ be the codeword in $C$ corresponding to message $x$.
Let $G={\begin{pmatrix}\uparrow &\uparrow &&\uparrow \\g_{0}&g_{1}&\dots &g_{2^{n}-1}\\\downarrow &\downarrow &&\downarrow \end{pmatrix}}$ be the generator matrix of $C$.
By definition, $c_{i}=x\cdot g_{i}$. From this, $c_{i}+c_{j}=x\cdot g_{i}+x\cdot g_{j}=x\cdot (g_{i}+g_{j})$. By the construction of $G$, $g_{i}+g_{j}=g_{i+j}$. Therefore, by substitution, $c_{i}+c_{j}=x\cdot g_{i+j}=c_{i+j}$.
Proof of theorem 1
To prove theorem 1 we will construct a decoding algorithm and prove its correctness.
Algorithm
Input: Received word $y=(y_{0},\dots ,y_{2^{n}-1})$
For each $i\in \{1,\dots ,n\}$:
1. Pick $j\in \{0,\dots ,2^{n}-1\}$ uniformly at random.
2. Pick $k\in \{0,\dots ,2^{n}-1\}$ such that $j+k=e_{i}$, where $e_{i}$ is the $i$-th standard basis vector and $j+k$ is the bitwise xor of $j$ and $k$.
3. $x_{i}\gets y_{j}+y_{k}$.
Output: Message $x=(x_{1},\dots ,x_{n})$
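The decoder above can be sketched as a toy implementation (names are ours; codeword positions are indexed by integers, so the condition $j+k=e_{i}$ becomes a bitwise XOR with `1 << i`):

```python
import random

# Toy implementation of the 2-query local decoder described above.

def had(x):
    """Hadamard encoding; position y holds <x, y> mod 2."""
    k = len(x)
    return [sum(((y >> i) & 1) * x[i] for i in range(k)) % 2
            for y in range(2 ** k)]

def local_decode_bit(word, i, k):
    """Recover message bit i by querying only two positions of word."""
    j = random.randrange(2 ** k)
    l = j ^ (1 << i)          # so that j + l is the standard basis vector e_i
    return (word[j] + word[l]) % 2
```

On an uncorrupted codeword the two queried bits always sum to $x_{i}$; with at most a $\delta$ fraction of corrupted positions, each query hits a corruption with probability at most $\delta$, so the bit is recovered with probability at least $1-2\delta$.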
Proof of correctness
For any message, $x$, and received word $y$ such that $y$ differs from $c=C(x)$ on at most $\delta $ fraction of bits, $x_{i}$ can be decoded with probability at least ${\frac {1}{2}}+({\frac {1}{2}}-2\delta )$.
By lemma 1, $c_{j}+c_{k}=c_{j+k}=x\cdot g_{j+k}=x\cdot e_{i}=x_{i}$. Since $j$ and $k$ are picked uniformly, the probability that $y_{j}\not =c_{j}$ is at most $\delta $. Similarly, the probability that $y_{k}\not =c_{k}$ is at most $\delta $. By the union bound, the probability that either $y_{j}$ or $y_{k}$ do not match the corresponding bits in $c$ is at most $2\delta $. If both $y_{j}$ and $y_{k}$ correspond to $c$, then lemma 1 will apply, and therefore, the proper value of $x_{i}$ will be computed. Therefore, the probability $x_{i}$ is decoded properly is at least $1-2\delta $. Therefore, $\epsilon ={\frac {1}{2}}-2\delta $ and for $\epsilon $ to be positive, $0\leq \delta \leq {\frac {1}{4}}$.
Therefore, the Walsh–Hadamard code is $(2,\delta ,{\frac {1}{2}}-2\delta )$ locally decodable for $0\leq \delta \leq {\frac {1}{4}}$.
Optimality
For k ≤ 7 the linear Hadamard codes have been proven optimal in the sense of minimum distance.[7]
See also
• Zadoff–Chu sequence — improves on the Walsh–Hadamard codes
References
1. Malek, Massoud (2006). "Hadamard Codes". Coding Theory (PDF). Archived from the original (PDF) on 2020-01-09.
2. Amadei, M.; Manzoli, Umberto; Merani, Maria Luisa (2002-11-17). "On the assignment of Walsh and quasi-orthogonal codes in a multicarrier DS-CDMA system with multiple classes of users". Global Telecommunications Conference, 2002. GLOBECOM'02. IEEE. Vol. 1. IEEE. pp. 841–845. doi:10.1109/GLOCOM.2002.1188196. ISBN 0-7803-7632-3.
3. Arora, Sanjeev; Barak, Boaz (2009). "Section 19.2.2". Computational Complexity: A Modern Approach. Cambridge University Press. ISBN 978-0-521-42426-4.
4. Guruswami, Venkatesan (2009). List decoding of binary codes (PDF). p. 3.
5. Bose, Raj Chandra; Shrikhande, Sharadchandra Shankar (June 1959). "A note on a result in the theory of code construction". Information and Control. 2 (2): 183–194. CiteSeerX 10.1.1.154.2879. doi:10.1016/S0019-9958(59)90376-6.
6. Langton, Charan [at Wikidata] (2002). "CDMA Tutorial: Intuitive Guide to Principles of Communications" (PDF). Complex to Real. Archived (PDF) from the original on 2011-07-20. Retrieved 2017-11-10.
7. Jaffe, David B.; Bouyukliev, Iliya. "Optimal binary linear codes of dimension at most seven". Archived from the original on 2007-08-08. Retrieved 2007-08-21.
Further reading
• Rudra, Atri. "Hamming code and Hamming bound" (PDF). Lecture notes.
• Rudolph, Dietmar; Rudolph, Matthias (2011-04-12). "46.4. Hadamard– oder Walsh–Codes". Modulationsverfahren (PDF) (in German). Cottbus, Germany: Brandenburg University of Technology (BTU). p. 214. Archived (PDF) from the original on 2021-06-16. Retrieved 2021-06-14. (xiv+225 pages)
Shewhart Medal
The Shewhart Medal, named in honour of Walter A. Shewhart, is awarded annually by the American Society for Quality for "...outstanding technical leadership in the field of modern quality control, especially through the development of its theory, principles, and techniques."[1] The first medal was awarded in 1948.[2]
Medalists
Year Medalist | Year Medalist | Year Medalist
1948 Leslie E. Simon | 1974 Benjamin Epstein | 1999 James M. Lucas
1949 Harold F. Dodge | 1975 William R. Pabst, Jr. | 2000 John A. Cornell
1950 Martin A. Brumbaugh | 1976 John W. Tukey | 2001 Søren Bisgaard
1951 George D. Edwards | 1977 Albert H. Bowker | 2002 William H. Woodall
1952 Eugene L. Grant | 1978 Lloyd S. Nelson | 2003 Wayne B. Nelson
1953 Harry G. Romig | 1979 Hugo C. Hamaker | 2004 Douglas M. Hawkins
1954 Edwin G. Olds | 1980 John Mandel | 2005 Norman Draper
1955 W. Edwards Deming | 1981 Richard A. Freund | 2006 William Q. Meeker
1956 Mason E. Wescott | 1982 Kaoru Ishikawa | 2007 C.F. Jeff Wu
1957 Cecil C. Craig | 1983 Edward G. Schilling | 2008 Roger W. Hoerl
1958 Irving W. Burr | 1984 Norman L. Johnson | 2009 David W. Bacon
1959 Paul S. Olmstead | 1985 Ronald D. Snee | 2010 G. Geoffrey Vining
1960 Ellis R. Ott | 1986 Donald W. Marquardt | 2011 Jerald Lawless
1961 Leonard H. C. Tippett | 1987 Fred Leone | 2012 Robert L. Mason
1962 Lloyd A. Knowler | 1988 Harrison M. Wadsworth | 2013 G. Geoffrey Vining
1964 Acheson J. Duncan | 1989 Dorian Shainin | 2014 Bovas Abraham
1965 Paul C. Clifford | 1990 William J. Hill | 2015 Dennis K.J. Lin
1966 Edward P. Coleman | 1991 Cuthbert Daniel | 2016 Connie M. Borror
1967 Charles A. Bicking | 1992 Gerald J. Hahn | 2017 David M. Steinberg
1968 George E.P. Box | 1993 Harry Smith, Jr. | 2018 Christine Anderson-Cook
1969 William J. Youden | 1994 Brian L. Joiner | 2019 Ronald J. M. M. Does
1970 J. Stuart Hunter | 1995 Genichi Taguchi | 2020 Necip Doganaksoy
1971 Frank E. Grubbs | 1996 Douglas C. Montgomery | 2021 Jianjun Shi
1972 Gerald J. Lieberman | 1997 John F. MacGregor | 2022 Stefan Steiner
1973 Sebastian B. Littauer | 1998 Raymond H. Myers
Shewhart Medal
Awarded for: Outstanding technical leadership in the field of modern quality control
Country: USA
Presented by: American Society for Quality
First awarded: 1948
Website: http://www.asq.org/about-asq/awards/shewhart.html
See also
• List of mathematics awards
• Wilks Memorial Award
References
1. "Shewhart Medal". American Society for Quality. Retrieved 2009-04-22.
2. "Shewhart Medalists". American Society for Quality. Retrieved 2009-04-22.
External links
• Official website
Walter Borho
Walter Borho (born 17 December 1945, in Hamburg) is a German mathematician, who works on algebra and number theory.
Borho received his PhD in 1973 from the University of Hamburg under the direction of Ernst Witt with thesis Wesentliche ganze Erweiterungen kommutativer Ringe. He is a professor at the University of Wuppertal.
Borho does research on representation theory, Lie algebras, ring theory and also on number theory (amicable numbers) and tilings.
In 1986 he was an invited speaker at the International Congress of Mathematicians in Berkeley (Nilpotent orbits, primitive ideals and characteristic classes – a survey).
Publications
• Borho, Walter (1972). "On Thabit ibn Kurrah's formula for amicable numbers". Math. Comp. 26 (118): 571–578. doi:10.2307/2005185. JSTOR 2005185. MR 0313177.
• Borho, Don Zagier et al.: Lebendige Zahlen, Birkhäuser 1981 (containing Borho's Befreundete Zahlen [Amicable Numbers])
• with Peter Gabriel, Rudolf Rentschler: Primideale in Einhüllenden auflösbarer Lie-Algebren, Springer Verlag, Lecture Notes in Mathematics, vol. 357, 1973[1]
• with Klaus Bongartz, D. Mertens, A. Steins: Farbige Parkette. Mathematische Theorie und Ausführung auf dem Computer [Colored tilings: mathematical theory and computer implementation], Birkhäuser 1988
• with Jean-Luc Brylinski, Robert MacPherson: Nilpotent orbits, primitive ideals and characteristic classes. A geometric perspective in ring theory, Birkhäuser 1989
• with Karsten Blankenagel, Axel vom Stein: Blankenagel, Karsten; Borho, Walter; Vom Stein, Axel (2003). "New amicable four-cycles". Math. Comp. 72 (244): 2071–2076. doi:10.1090/s0025-5718-03-01489-3. MR 1986823.
References
1. Hochschild, G. (1975). "Review: Primideale in Einhüllenden auflösbarer Lie-Algebren, by Walter Borho, Peter Gabriel, and Rudolf Rentschler". Bull. Amer. Math. Soc. 81 (1): 39–40. doi:10.1090/s0002-9904-1975-13628-7.
External links
• Walter Borho at the Mathematics Genealogy Project
Walter Brown (mathematician)
Walter Brown FRSE (29 April 1886, Glasgow – 14 April 1957, Marandellas, Rhodesia) was a Scottish mathematician and engineer.
Walter Brown
Born: 29 April 1886, Glasgow
Died: 14 April 1957 (aged 70), Marondera
Alma mater
• University of Glasgow
• Allan Glen's School
Employer
• Allan Glen's School (1911–1914)
• University of Glasgow (1947–1948)
• University of Hong Kong (1914–1941)
• University of Strathclyde (1946–1947)
Life
The younger son of Hugh A. Brown, a headmaster in Paisley, Walter was educated at Allan Glen's School and then studied at the University of Glasgow (BSc Hons Mathematics and Physics 1907; and BSc Pure Science 1910). He began his career as a teacher at Allan Glen's. Brown became a member of the Edinburgh Mathematical Society in March 1911.[1]
In 1914 he took up the post of Lecturer in Engineering at Hong Kong University. He was soon promoted to become Professor in Pure and Applied Mathematics, a post he held from 1918 to 1946.[2]
In 1920 he was elected an Associate Member of the Institution of Electrical Engineers. In 1923 he was elected a Fellow of the Royal Society of Edinburgh. His proposers were Andrew Gray, George Alexander Gibson, John Walter Gregory, John Gordon Gray and Dugald McQuistan.[3]
He was President of the Hong Kong Philharmonic Society, and a member of the Hong Kong English Association, the Hong Kong Sino-British Association, and the Hong Kong Institute of Engineers and Shipbuilders.
A member of the Royal Naval Volunteer Reserve, Brown was captured when Hong Kong surrendered to the Japanese, and he was held as a Prisoner of War at Stanley Camp (1941–45). He organised study groups in the internment camp, and helped attend to the medical needs of the prisoners.[4]
Returning to Scotland after the war, he taught civil and mechanical engineering at the Royal Technical College in Glasgow (1946–47), and mathematics at the University of Glasgow (1947–48).
He travelled extensively and died in Marandellas in Rhodesia (now Zimbabwe) in April 1957.[3]
References
1. "Biography". School of Mathematics and Statistics, University of St Andrews. Retrieved 20 September 2010.
2. "Former RSE Fellows 1783–2002" (PDF). Royal Society of Edinburgh. Archived from the original (PDF) on 4 October 2006. Retrieved 19 September 2010.
3. "University of Glasgow :: International Story :: Professor Walter Brown". internationalstory.gla.ac.uk. Retrieved 15 July 2021.
4. Royal Society of Edinburgh Year Book 1958.
Walter Craig (mathematician)
Walter L. Craig FRSC (1953 – January 18, 2019)[1] was a Canadian mathematician and a Canada Research Chair in Mathematical Analysis and Applications at McMaster University.[2][3][4][5]
Walter L. Craig
FRSC
Born: 1953, State College, Pennsylvania
Died: 18 January 2019 (aged 65–66)
Nationality: Canadian
Education: Ph.D., New York University – Courant Institute (1981)
Years active: 1981–2019
Known for: Mathematician
Spouse: Deirdre Haskell
Parent
• William Craig (father)
Personal life
Craig was born in State College, Pennsylvania, in 1953. His father, a professor at Pennsylvania State University, moved to the University of California, Berkeley, in 1959, and Craig and his siblings were raised there.
Craig was the son of the logician William Craig and the husband of mathematician Deirdre Haskell.[6]
Education
Craig attended the University of California at Berkeley and, after spending two years performing as a jazz musician, returned there to graduate with a bachelor's degree in mathematics in 1977.[7] Craig earned his Ph.D. from New York University - Courant Institute in 1981; his dissertation, A Bifurcation Theory for Periodic Dissipative Wave Equations, was supervised by Louis Nirenberg.[8]
Career
After stints at the California Institute of Technology, Stanford University, and Brown University, Craig moved to McMaster University in Hamilton, Ontario, Canada in 2000. His research topics included nonlinear partial differential equations, infinite-dimensional Hamiltonian systems, Schrödinger operators and spectral theory, water waves, general relativity, and cosmology.[7]
In 2007, he was made a Fellow of the Royal Society of Canada,[9] and he was awarded a Killam Fellowship in 2009.
In 2013, he became one of the inaugural Fellows of the American Mathematical Society.[10] He served as Director of the Fields Institute from 2013 to 2015.
References
1. Walter Craig 1953 - 2019
2. "Chairs". gc.ca. Retrieved January 27, 2017.
3. "Walter Craig". mcmaster.ca. Retrieved January 27, 2017.
4. "Walter Craig". Retrieved January 27, 2017.
5. "Distinguished University Professor" (PDF). mcmaster.ca. Retrieved January 27, 2017.
6. SIAM News: Celebrating the Life of Walter Craig (1953-2019)
7. "Remembrances of Walter Craig" (PDF). Notices of the American Mathematical Society. 67 (4): 520–531. April 2020.
8. Walter Craig at the Mathematics Genealogy Project
9. "Search Fellows". Royal Society of Canada. Retrieved September 7, 2017.
10. List of Fellows of the American Mathematical Society, retrieved April 12, 2017
Walter Foster (mathematician)
Walter Foster (fl. 1652) was an English mathematician.
Foster was the elder brother of Samuel Foster. He was educated at Emmanuel College, Cambridge, of which he became a fellow. He took the two degrees in arts, B.A. in 1617, M.A. in 1621, and commenced B.D. in 1628.[1] Dr. Samuel Ward, in a letter to Archbishop Ussher, dated from Sidney Sussex College, Cambridge, 25 May 1630, says that Foster had taken some pains upon the Latin copy of Ignatius's 'Epistles' in Caius College Library, and adds that as he was 'shortly to depart from the colledg by his time there allotted, finding in himself some impediment in his utterance, he could wish to be employed by your lordship in such like business. He is a good scholar, and an honest man'. Despite the impediment in his speech he was afterwards rector of Allerton in Somersetshire.
Twysden commends him for his skill in mathematics, and says that he communicated to him his brother's papers, which are published in his Miscellanies (Preface to the same). There is a tetrastich of his writing among the 'Epigrammata in Radulphi Wintertoni Metaphrasin' published at the end of Hippocratis Aphorismi soluti et metrici, 8vo, Cambridge, 1633. In 1652 he was living at Sherborne, Dorsetshire, and in the May of that year his brother bequeathed him ‘fourescore pounds and his library in Gresham Colledge.’
References
1. "Foster, Walter (FSTR614W)". A Cambridge Alumni Database. University of Cambridge.
Sources
• This article incorporates text from a publication now in the public domain: "Foster, Walter". Dictionary of National Biography. London: Smith, Elder & Co. 1885–1900.
Walter Gröbli
Walter Gröbli (1852–1903) was a Swiss mathematician.
Walter Gröbli
Born: 23 September 1852, Oberuzwil, Switzerland
Died: 26 June 1903 (aged 50), Piz Blas, Switzerland
Alma mater: ETH Zurich; University of Göttingen
Spouse: Emma Bodmer
Parents: Isaak Gröbli and Elisabetha Grob
Scientific career
Fields: Mathematics
Institutions: ETH Zurich
Thesis: Specielle Probleme über die Bewegung geradliniger paralleler Wirbelfäden (1877)
Doctoral advisor: Hermann Schwarz
Doctoral students: Ernst Amberg
Life and work
His father, Isaak Gröbli, was an industrialist who invented a shuttle embroidery machine in 1863, and his older brother is credited with introducing the invention in the United States.[1] Walter Gröbli was more interested in mathematics than in embroidery[2] and studied from 1871 to 1875 at the Polytechnicum of Zurich under Hermann Schwarz and Heinrich Martin Weber.[3] Gröbli then studied at the University of Berlin and was awarded a doctorate by the University of Göttingen (1877).
For the following six years, Gröbli was an assistant to Frobenius at the Polytechnicum of Zurich. In 1883 he was elected professor of mathematics at the Gymnasium of Zurich.[4] Despite his mathematical talent he did not pursue a research career; he was happy to be a schoolmaster.[5]
His other main passion was mountaineering. He died, together with three colleagues, in a mountaineering accident while climbing Piz Blas.[6]
The only known work by Gröbli is his doctoral dissertation. It deals with the motion of three vortices, of four vortices having an axis of symmetry, and of $2n$ vortices having $n$ axes of symmetry.[7] The work is a classic of the vortex dynamics literature.[8]
References
1. Aref, Rott & Thomann 1992, p. 2.
2. Eminger 2015, p. 106.
3. Aref, Rott & Thomann 1992, pp. 2–3.
4. Aref, Rott & Thomann 1992, p. 18.
5. Eminger 2015, p. 107.
6. Eminger 2015, p. 108.
7. Aref, Rott & Thomann 1992, pp. 9–10.
8. Meleshko & Aref 2007, p. 222.
Bibliography
• Aref, Hassan; Rott, Nicholas; Thomann, Hans (1992). "Gröbli's Solution of the Three-Vortex Problem". Annual Review of Fluid Mechanics. 24: 1–21. Bibcode:1992AnRFM..24....1A. doi:10.1146/annurev.fl.24.010192.000245. ISSN 0066-4189.
• Eminger, Stefanie Ursula (2015). Carl Friedrich Geiser and Ferdinand Rudio: The Men Behind the First International Congress of Mathematicians (PDF). St. Andrews University.
• Meleshko, Vyacheslav V.; Aref, Hassan (2007). "A Bibliography of Vortex Dynamics". In Hassan Aref; Erik van der Giessen (eds.). Advances in Applied Mechanics, Volum 41. Elsevier. pp. 197–293. ISBN 978-0-12-002057-7.
External links
• O'Connor, John J.; Robertson, Edmund F., "Walter Gröbli", MacTutor History of Mathematics Archive, University of St Andrews
• Thomann, Hans (2014). "Gröbli, Walter". Dictionnaire historique de la Suisse. Retrieved March 29, 2018.
Walter Gottschalk
Walter Helbig Gottschalk (November 3, 1918 – February 15, 2004) was an American mathematician, one of the founders of topological dynamics.
Biography
Gottschalk was born in Lynchburg, Virginia, on November 3, 1918, and moved to Salem, Virginia as a child.[1][2] His father, Carl Gottschalk, was a German immigrant who worked as a machinist and later owned several small businesses in Salem; his younger brother, Carl W. Gottschalk, became a notable medical researcher.[3]
Gottschalk did both his undergraduate studies and graduate studies at the University of Virginia, finishing with a Ph.D. in 1944 under the supervision of Gustav A. Hedlund.[1][2][4] After graduating, he joined the faculty of the University of Pennsylvania, and was chair of the Pennsylvania mathematics department from 1954 to 1958.[1][2][5] In the academic year 1947/1948 he was a visiting scholar at the Institute for Advanced Study.[6] At Pennsylvania, his doctoral students included Philip Rabinowitz, who became known for his work in numerical analysis, and Robert Ellis, who became known for his work on topological dynamics.[4] Gottschalk moved to Wesleyan University in 1963; at Wesleyan, he also served two terms as chair before retiring in 1982.[1][2] He died on February 15, 2004, in Providence, Rhode Island, where he had lived since his retirement.[1]
Contributions
Gottschalk and his advisor Gustav Hedlund wrote the 1955 monograph Topological Dynamics.[1][7][8] Other research contributions of Gottschalk include the first study of surjunctive groups[9] and a short proof of the De Bruijn–Erdős theorem on coloring infinite graphs.[10]
As well as being a research mathematician, Gottschalk also put on two exhibits of mathematical sculptures in the 1960s.[1]
Awards and honors
Gottschalk was a fellow of the American Association for the Advancement of Science.[1]
Selected publications
• Gottschalk, W. H. (1951), "Choice functions and Tychonoff's theorem", Proceedings of the American Mathematical Society, 2 (1): 172, doi:10.2307/2032641, JSTOR 2032641, MR 0040376.
• Gottschalk, Walter Helbig; Hedlund, Gustav Arnold (1955), Topological dynamics, American Mathematical Society Colloquium Publications, vol. 36, Providence, R. I.: American Mathematical Society, ISBN 9780821874691, MR 0074810.
• Gottschalk, Walter (1973), "Some general dynamical notions", Recent Advances in Topological Dynamics (Proc. Conf. Topological Dynamics, Yale Univ., New Haven, Conn., 1972; in honor of Gustav Arnold Hedlund), Lecture Notes in Math., vol. 318, Berlin, New York: Springer-Verlag, pp. 120–125, doi:10.1007/BFb0061728, ISBN 978-3-540-06187-8, MR 0407821.
References
1. About the author, Gottschalk's Gestalts, retrieved 2012-11-21.
2. Walter H. Gottschalk, Salem Educational Foundation and Alumni Association Hall of Fame, retrieved 2012-11-21.
3. "Carl W. Gottschalk", Biographical Memoirs of the National Academy of Sciences, 77: 122–141, 1999
4. Walter Helbig Gottschalk at the Mathematics Genealogy Project
5. Tenured faculty 1899– and Past department chairs, Univ. of Pennsylvania Dept. of Mathematics, retrieved 2012-11-21.
6. Gottschalk, Walter H., Institute for Advanced Study
7. Review of Topological Dynamics by Y. N. Dowker, MR0074810.
8. Halmos, Paul R. (1955), "Book Review: W. H. Gottschalk and G. A. Hedlund, Topological dynamics", Bulletin of the American Mathematical Society, 61 (6): 584–588, doi:10.1090/S0002-9904-1955-09999-3, MR 1565733.
9. Gottschalk (1973).
10. Gottschalk (1951).
Walter Lewis Baily Jr.
Walter Lewis Baily Jr. (July 5, 1930, in Waynesburg, Pennsylvania[1] – January 15, 2013, in Northbrook, Illinois[2]) was an American mathematician.
Walter Lewis Baily Jr.
Born: July 5, 1930, Waynesburg, Pennsylvania
Died: January 15, 2013 (aged 82), Northbrook, Illinois
Nationality: American
Alma mater: Massachusetts Institute of Technology; Princeton University
Known for: Baily–Borel compactification
Scientific career
Fields: Mathematics
Institutions: University of Chicago; Massachusetts Institute of Technology; Princeton University
Doctoral advisor: Kunihiko Kodaira
Doctoral students: Paul Monsky, Timothy J. Hickey, Daniel Bump
Baily's research focused on areas of algebraic groups, modular forms and number-theoretical applications of automorphic forms. One of his significant works was with Armand Borel, now known as the Baily–Borel compactification, which is a compactification of a quotient of a Hermitian symmetric space by an arithmetic group (that is, a linear algebraic group over the rational numbers).[3] Baily and Borel built on the work of Ichirō Satake and others.
Baily became a Putnam Fellow in 1952.[4] He studied at the Massachusetts Institute of Technology (MIT), receiving a Bachelor of Science in Mathematics in 1952. He then attended Princeton University, receiving a master's degree in 1953 and a Ph.D. in Mathematics in 1955 under his thesis advisor Kunihiko Kodaira (On the Quotient of a Complex Analytic Manifold by a Discontinuous Group of Complex Analytic Self-Homomorphisms).[5] Subsequently, he was an instructor at Princeton and then at MIT, and in 1957 he worked as a mathematician at Bell Laboratories. Also in 1957, he was appointed Assistant Professor at the University of Chicago, where he was promoted to Professor in 1963. He became Professor Emeritus at the University of Chicago in 2005.
He was a member of the American Mathematical Society and the Mathematical Society of Japan. He often visited the University of Tokyo as a guest of Shokichi Iyanaga and Kunihiko Kodaira, spoke fluent Japanese and in Tokyo, 1963 married Yaeko Iseki, with whom he had a son. He owned an apartment in Tokyo for many years, where he spent his summers. In addition, he often visited Moscow and Saint Petersburg and spoke fluent Russian.
He was awarded an Alfred P. Sloan Fellowship in 1958. In 1962, he was an invited speaker at the International Congress of Mathematicians held in Stockholm (On the moduli of Abelian varieties with multiplications from an order in a totally real number field).
His doctoral students include Paul Monsky, Timothy J. Hickey, and Daniel Bump.
Bibliography
• Baily, Walter Lewis; Borel, Armand (1964), "On the compactification of arithmetically defined quotients of bounded symmetric domains", Bulletin of the American Mathematical Society, 70 (4): 588–593, doi:10.1090/S0002-9904-1964-11207-6, MR 0168802
• Baily, Walter Lewis; Borel, Armand (1966), "Compactification of arithmetic quotients of bounded symmetric domains", Annals of Mathematics, 84 (3): 442–528, doi:10.2307/1970457, JSTOR 1970457, MR 0216035
• Baily, Walter Lewis (1958), "On Satake's compactification of $V_{n}$", American Journal of Mathematics, 80: 348–364, doi:10.2307/2372789, JSTOR 2372789, MR 0099451
• Baily, Walter Lewis (1959), "On the Hilbert–Siegel modular space", American Journal of Mathematics, 81 (4): 846–874, doi:10.2307/2372991, JSTOR 2372991, MR 0121506
• On the orbit spaces of arithmetic groups, in: Arithmetical Algebraic Geometry (Proc. Conf. Purdue Univ., 1963), Harper and Row (1965), 4–10
• On compactifications of orbit spaces of arithmetic discontinuous groups acting on bounded symmetric domains, in: Algebraic Groups and Discontinuous Subgroups, Proceedings of Symposia in Pure Mathematics, 9, American Mathematical Society (1966), 281–295 MR0207711
References
1. Biographical dates from American Men and Women of Science, Thomson Gale 2004
2. Obituary from the University of Chicago
3. Baily-Borel Compactification, Encyclopedia of Mathematics
4. "Putnam Competition Individual and Team Winners". Mathematical Association of America. Retrieved December 10, 2021.
5. Walter Lewis Baily Jr. at the Mathematics Genealogy Project
External links
• Autoren-Profil in the databank zbMATH
• Guide to the Walter Baily Papers 1930-2005 from the University of Chicago Special Collections Research Center
Authority control
International
• ISNI
• VIAF
National
• Germany
• Israel
• United States
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
Other
• IdRef
| Wikipedia |
Walter Neumann
Walter David Neumann (born 1 January 1946) is a British mathematician who works in topology, geometric group theory, and singularity theory. He is an emeritus professor at Barnard College, Columbia University. Neumann obtained his Ph.D. under the joint supervision of Friedrich Hirzebruch and Klaus Jänich at the University of Bonn in 1969.[1]
He is a son of the mathematicians Bernhard Neumann and Hanna Neumann.[1] His brother Peter M. Neumann was also a mathematician.
He is in the Inaugural Class of Fellows of the American Mathematical Society.[2]
References
1. Faces of Geometry, A Conference in Honor of Walter Neumann
2. List of Fellows of the American Mathematical Society
External links
• Google Scholar Profile
• Home page at Columbia
Authority control
International
• ISNI
• VIAF
National
• Germany
• Israel
• United States
• Czech Republic
• Greece
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• zbMATH
Other
• IdRef
| Wikipedia |
Walter Richard Talbot
Walter Richard Talbot (1909–1977)[1] was the fourth African American to earn a Ph.D. in mathematics, which he received from the University of Pittsburgh,[2] and was Lincoln University's youngest Doctor of Philosophy.[3] He was a member of Sigma Xi[4] and Pi Tau Phi.[5] In 1969 Talbot co-founded the National Association of Mathematicians (NAM) at Morgan State University,[6] the organization which, nine years later, honored him at a memorial luncheon and created a scholarship[7] in his name.[8] In 1990 the Cox-Talbot lecture[9] was inaugurated, recognizing his accomplishments together with those of Elbert Frank Cox, the first African American to earn a doctoral degree in mathematics.
Academic positions Talbot held include: Mathematics Department Chair and Professor[10] (Morgan State University); assistant professor,[11] professor, department chair, dean of men, registrar, acting dean of instruction (Lincoln University).[12] Talbot was most widely known for his introduction of computer technology to the school.[13]
Talbot's dissertation was entitled Fundamental Regions in $S_{6}$ for the Simple Quaternary $G_{60}$, Type I.[14]
References
1. "Walter R. Talbot - Mathematicians of the African Diaspora". www.math.buffalo.edu. Retrieved 2020-12-29.
2. Hales, Thomas (2018-05-16). "Walter Talbot's thesis". arXiv:1805.06890 [math.HO].
3. "Walter R Talbot - Biography". Maths History. Retrieved 2020-12-29.
4. Williams, Talitha. "In Honor of Black History" (PDF). www.ams.org.
5. The Crisis. The Crisis Publishing Company, Inc. August 1931.
6. Nkwanta, Asamoah. "African-American Mathematicians and the Mathematical Association of America" (PDF). www.maa.org.
7. Houston, Johnny. "Ten African American Pioneers and Mathematicians Who Inspired Me" (PDF). www.ams.org.
8. Pitcher, Everett. "Notices of the American Mathematical Society" (PDF). www.ams.org.
9. "Cox-Talbot Lecture". www.nam-math.org. Retrieved 2020-12-29.
10. "Document Resume" (PDF).
11. "Alumni Vertical Files - Lincoln University". bluetigerportal.lincolnu.edu. Retrieved 2020-12-29.
12. Foundation (U.S.), National Science (1963). Annual Report for Fiscal Year ... The Foundation.
13. "Dr. Walter R. Talbot Sr". The Baltimore Sun. December 29, 1977.
14. "Walter R. Talbot: MathSciNet". www.genealogy.math.
Authority control: Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
| Wikipedia |
Walter Rudin
Walter Rudin (May 2, 1921 – May 20, 2010[2]) was an Austrian-American mathematician and professor of Mathematics at the University of Wisconsin–Madison.[3]
Walter Rudin
Born(1921-05-02)May 2, 1921
Vienna, Austria
DiedMay 20, 2010(2010-05-20) (aged 89)
Madison, Wisconsin, U.S.
CitizenshipUnited States
Alma materDuke University (B.A. 1947, Ph.D. 1949)
Known forMathematics textbooks; contributions to harmonic analysis and complex analysis[1]
SpouseMary Ellen Rudin
AwardsAmerican Mathematical Society Leroy P. Steele Prize for Mathematical Exposition (1993)
Scientific career
FieldsMathematics
InstitutionsMassachusetts Institute of Technology
University of Wisconsin–Madison
Doctoral advisorJohn Jay Gergen
Doctoral studentsCharles Dunkl
Daniel Rider
In addition to his contributions to complex and harmonic analysis, Rudin was known for his mathematical analysis textbooks: Principles of Mathematical Analysis,[4] Real and Complex Analysis,[5] and Functional Analysis.[6] Rudin wrote Principles of Mathematical Analysis only two years after obtaining his Ph.D. from Duke University, while he was a C. L. E. Moore Instructor at MIT. Principles, acclaimed for its elegance and clarity,[7] has since become a standard textbook for introductory real analysis courses in the United States.[8]
Rudin's analysis textbooks have also been influential in mathematical education worldwide, having been translated into 13 languages, including Russian,[9] Chinese,[10] and Spanish.[11]
Biography
Rudin was born into a Jewish family in Austria in 1921. He was enrolled for a period of time at a Swiss boarding school, the Institut auf dem Rosenberg, where he was part of a small program that prepared its students for entry to British universities.[12] His family fled to France after the Anschluss in 1938.
When France surrendered to Germany in 1940, Rudin fled to England and served in the Royal Navy for the rest of World War II, after which he left for the United States. He obtained both his B.A. in 1947 and Ph.D. in 1949 from Duke University. After his Ph.D., he was a C.L.E. Moore instructor at MIT. He briefly taught at the University of Rochester before becoming a professor at the University of Wisconsin–Madison where he remained for 32 years.[2] His research interests ranged from harmonic analysis to complex analysis.
In 1970 Rudin was an Invited Speaker at the International Congress of Mathematicians in Nice.[13] He was awarded the Leroy P. Steele Prize for Mathematical Exposition in 1993 for authorship of the now classic analysis texts, Principles of Mathematical Analysis and Real and Complex Analysis. He received an honorary degree from the University of Vienna in 2006.
In 1953, he married fellow mathematician Mary Ellen Estill, known for her work in set-theoretic topology. The two resided in Madison, Wisconsin, in the eponymous Walter Rudin House, a home designed by architect Frank Lloyd Wright. They had four children.[1]
Rudin died on May 20, 2010, after suffering from Parkinson's disease.[2]
Selected publications
Ph.D. thesis
• Rudin, Walter (1950). Uniqueness Theory for Laplace Series (Thesis). Duke University.[14]
Selected research articles
• Rudin, Walter (1950). "Uniqueness theory for Laplace series". Trans. Amer. Math. Soc. 68 (2): 287–303. doi:10.1090/s0002-9947-1950-0033368-1. MR 0033368.
• Rudin, W. (1957). "Factorization in the group algebra of the real line". Proc Natl Acad Sci U S A. 43 (4): 339–340. Bibcode:1957PNAS...43..339R. doi:10.1073/pnas.43.4.339. PMC 528447. PMID 16578475.
• Rudin, Walter (1967). "Zero-sets in polydiscs". Bull. Amer. Math. Soc. 73 (4): 580–583. doi:10.1090/s0002-9904-1967-11758-0. MR 0210934.
• Rudin, Walter (1981). "Holomorphic maps that extend to automorphisms of a ball" (PDF). Proc. Amer. Math. Soc. 81 (3): 429–432. doi:10.1090/s0002-9939-1981-0597656-8. MR 0597656.
• "Totally real Klein bottles in ${\mathbb {C} }^{2}$" (PDF). Proc. Amer. Math. Soc. 82 (4): 653–654. 1981. doi:10.1090/s0002-9939-1981-0614897-1. MR 0614897.
Books
Textbooks:
• Principles of Mathematical Analysis.[7][8] (1953; 3rd ed., 1976, 342 pp.)
• Real and Complex Analysis.[15] (1966; 3rd ed., 1987, 416 pp.)
• Functional Analysis.[16] (1973; 2nd ed., 1991, 424 pp.)
Monographs:
• Fourier Analysis on Groups.[17] (1962)
• Function Theory in Polydiscs. (1969)
• Function Theory in the Unit Ball of $\mathbb {C} ^{n}$.[18] (1980)
Autobiography:
• The Way I Remember It. (1991)
Major awards
• Steele Prize for Mathematical Exposition (1993)
See also
• Helson–Kahane–Katznelson–Rudin theorem
• Rudin–Shapiro sequence
• Rudin's conjecture
References
1. "Vilas Professor Emeritus Walter Rudin died after a long illness on May 20, 2010".
2. Ziff, Deborah (May 21, 2010). "Noted UW-Madison mathematician Rudin dies at 89". Wisconsin State Journal. Retrieved May 21, 2010.
3. Nagel, Alexander; Stout, Edgar Lee; Kahane, Jean-Pierre; Rosay, Jean-Pierre; Wermer, John (2013). "Remembering Walter Rudin (1921–2010)" (PDF). Notices of the AMS. 60 (3): 295–301. doi:10.1090/noti955.
4. Rudin, Walter (1976) [1953]. Principles of Mathematical Analysis (3rd ed.). New York: McGraw-Hill. ISBN 007054235X.
5. Rudin, Walter (1987) [1966]. Real and Complex Analysis (3rd ed.). New York: McGraw-Hill. ISBN 0070542341.
6. Rudin, Walter (1991) [1973]. Functional Analysis (2nd ed.). New York: McGraw-Hill. ISBN 0-07-100944-2.
7. Munroe, M. E. (1953). "Review: Casper Goffman, Real Functions, and Walter Rudin, Principles of mathematical analysis, and Henry P. Thielman, Theory of functions of real variables". Bulletin of the American Mathematical Society. 59 (6): 572–577. doi:10.1090/s0002-9904-1953-09765-8. ISSN 0002-9904.
8. Locascio, Andrew (13 August 2007). "Book Review: Principles of Mathematical Analysis". Mathematical Association of America. Retrieved 12 October 2016.
9. Rudin, Walter; Havin, V. P. (translator) (1976). Principles of Mathematical Analysis (Russian translation of 2nd ed.). Moscow: Mir Publishers.
10. Rudin, Walter; Zhao, Cigeng (translator); Jiang, Duo (translator) (1979). Principles of Mathematical Analysis (simplified Chinese translation). Beijing: People's Education Press, China Machine Press (reprint, 2004). ISBN 7-111-13417-6.
11. Rudin, Walter; Irán Alcerreca Sanchez, Miguel (translator) (1980). Principles of Mathematical Analysis (Spanish translation). México: Libros McGraw-Hill. ISBN 968-6046-82-8.
12. Rudin, Walter (1992). The Way I Remember It. American Mathematical Society. p. 39. ISBN 9780821872550.
13. Rudin, Walter. "Harmonic analysis in polydiscs." Actes Congr. Int. Math., Nice 2 (1970): 489–493.
14. Bilyk, Dmitriy; De Carli, Laura; Petukhov, Alexander; Stokolos, Alexander M.; Wick, Brett D., eds. (2012). "remarks on Walter Rudin's PhD thesis". Recent Advances in Harmonic Analysis and Applications: In Honor of Konstantin Oskolkov. Vol. 25. Springer Science & Business Media. p. 59. ISBN 9781461445647.
15. Shapiro, Victor L. (1968). "Review: Walter Rudin, Real and complex analysis". Bull. Am. Math. Soc. 74 (1): 79–83. doi:10.1090/s0002-9904-1968-11881-6.
16. Kadison, Richard V. (1973-01-01). "Review of Functional Analysis". American Scientist. 61 (5): 604. JSTOR 27844041.
17. Kahane, J.-P. (1964). "Review: Walter Rudin, Fourier analysis on groups". Bull. Am. Math. Soc. 70 (2): 230–232. doi:10.1090/s0002-9904-1964-11092-2.
18. Krantz, Steven G. (1981-11-01). "Review: Walter Rudin, Function theory in the unit ball of $\mathbb {C} ^{n}$". Bulletin of the American Mathematical Society. New Series. 5 (3): 331–339. doi:10.1090/s0273-0979-1981-14951-x. ISSN 0273-0979.
External links
• O'Connor, John J.; Robertson, Edmund F., "Walter Rudin", MacTutor History of Mathematics Archive, University of St Andrews
• UW Mathematics Dept obituary
• MathDL obituary
• Walter Rudin at the Mathematics Genealogy Project
• Photos of Rudin Residence
• Walter B. Rudin, "Set Theory: An Offspring of Analysis" (1990 Morris Marden Lecture) – YouTube
Authority control
International
• FAST
• ISNI
• VIAF
National
• Spain
• France
• BnF data
• Germany
• Israel
• United States
• Japan
• Czech Republic
• Korea
• Netherlands
• Poland
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
| Wikipedia |
Walter Schnee
Walter Schnee (8 August 1885 in Rawitsch, now Rawicz – 10 June 1958 in Leipzig) was a German mathematician. From 1904 to 1908 he studied mathematics in Berlin, and from 1909 to 1917 he worked at the University of Breslau. He then moved to the University of Leipzig, where he remained until 1954. He worked in the field of number theory.
References
• Walter Schnee at the Mathematics Genealogy Project
• Walter Schnee
Authority control
International
• ISNI
• VIAF
National
• Germany
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
| Wikipedia |
Walter Taylor (mathematician)
Walter Taylor (c. 1700 – 23 February 1743/44) was a Trinity College, Cambridge tutor who coached 83 students between 1724 and 1743. He was later appointed Regius Professor of Greek.
Walter Taylor
Bornc. 1700
Tuxford, Nottinghamshire, England
Died23 February 1743/44
Alma materTrinity College, Cambridge
Scientific career
FieldsMathematician and classicist
InstitutionsTrinity College, Cambridge
Academic advisorsRobert Smith
Notable studentsStephen Whisson
He was the son of John Taylor, Vicar of Tuxford, Nottinghamshire. He matriculated in 1716 from Wakefield School, Yorkshire. Taylor was admitted as a pensioner at Trinity on 7 April 1716.[1]
Robert Smith was Taylor's Cambridge tutor.
Timeline
• 1717 Scholar
• 1719/20 BA
• 1723 MA
• 1736 BD
• 1722 Fellow of Trinity
• 1726–44 Regius Professor of Greek
• 1725 Ordained deacon
• 1726/7 Ordained priest
• 1743/4 buried at Tuxford
Notes
1. "Taylor, Walter (TLR716W)". A Cambridge Alumni Database. University of Cambridge.
External links
• Walter Taylor at the Mathematics Genealogy Project
Authority control: Academics
• Mathematics Genealogy Project
| Wikipedia |
Walther von Dyck
Walther Franz Anton von Dyck (6 December 1856 – 5 November 1934), born Dyck (German pronunciation: [diːk][1]) and later ennobled, was a German mathematician. He is credited with being the first to define a mathematical group in the modern sense, in (Dyck 1882). He laid the foundations of combinatorial group theory,[2] being the first to systematically study a group by generators and relations.
Walther von Dyck
8th Rector of the Technical University of Munich
In office
1919–1925
Preceded byKarl Heinrich Hager
Succeeded byJonathan Zenneck
1st Rector of the Technical University of Munich
In office
1903–1906
Preceded byPosition renamed
Succeeded byFriedrich von Thiersch
7th Director of the Technical University of Munich
In office
1900–1903
Preceded byEgbert von Hoyer
Succeeded byPosition renamed
Personal details
Born(1856-12-06)6 December 1856
Munich, Kingdom of Bavaria
Died5 November 1934(1934-11-05) (aged 77)
Munich, Nazi Germany
NationalityGerman
EducationTechnical University of Munich
Scientific career
FieldsMathematics
ThesisÜber regulär verzweigte Riemannsche Flächen und die durch sie definierten Irrationalitäten (1879)
Doctoral advisorFelix Klein
Biography
Von Dyck was a student of Felix Klein,[2] and served as chairman of the commission publishing Klein's encyclopedia. Von Dyck was also the editor of Kepler's works. He promoted technological education as rector of the Technische Hochschule of Munich.[3] He was a Plenary Speaker of the ICM in 1908 at Rome.[4]
Von Dyck is the son of the Bavarian painter Hermann Dyck.
Legacy
The Dyck language in formal language theory is named after him,[5] as are Dyck's theorem and Dyck's surface in the theory of surfaces, together with the von Dyck groups, the Dyck tessellations, Dyck paths, and the Dyck graph.
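The Dyck language consists of all balanced strings of parentheses. As an illustrative aside (a minimal sketch of the concept, not drawn from von Dyck's own writings), membership can be tested with a single left-to-right scan:

```python
def is_dyck_word(s: str) -> bool:
    """Check whether s is a balanced string over '(' and ')'.

    A Dyck word never lets the running excess of '(' over ')' go
    negative, and that excess must return to zero at the end.
    """
    depth = 0
    for ch in s:
        if ch == '(':
            depth += 1
        elif ch == ')':
            depth -= 1
            if depth < 0:       # a ')' with no matching '(' so far
                return False
        else:
            return False        # only the two bracket symbols are allowed
    return depth == 0
```

For example, `is_dyck_word("(()())")` is true, while `is_dyck_word("())(")` is false.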
Publications
• Dyck, Walther (1882), "Gruppentheoretische Studien (Group-theoretical Studies)", Mathematische Annalen (in German), 20 (1): 1–44, doi:10.1007/BF01443322, hdl:2027/njp.32101075301422, ISSN 0025-5831, S2CID 179178038.
Notes
1. Pronunciation according to information from the Board of Management of the Technical University of Munich.
2. Stillwell, John (2002), Mathematics and its history, Springer, p. 374, ISBN 978-0-387-95336-6
3. Rowe, David E. (November 2008). "Review of Walther von Dyck (1856–1934). Mathematik, Technik und Wissenschaftsorganisation an der TH München". Historia Mathematica. 35 (4): 333–334. doi:10.1016/j.hm.2008.08.002.
4. Dyck, W. von (1909). "Die Encyklopädie der mathematischen Wissenschaften". In G. Castelnuovo (ed.). Atti del IV Congresso Internazionale dei Matematici (Roma, 6–11 Aprile 1908). ICM proceedings. Vol. 1. University of Toronto Press. pp. 123–134.
5. "Udacity CS262". Archived from the original on 27 December 2016. Retrieved 8 July 2012.
References
• Ulf Hashagen: Walther von Dyck (1856–1934). Mathematik, Technik und Wissenschaftsorganisation an der TH München, Franz Steiner Verlag, Stuttgart 2003, ISBN 3-515-08359-6
External links
• O'Connor, John J.; Robertson, Edmund F., "Walther von Dyck", MacTutor History of Mathematics Archive, University of St Andrews
• Walther von Dyck at the Mathematics Genealogy Project
Presidents of the Technical University of Munich
Directors
(1868–1903)
• Karl Maximilian von Bauernfeind (1868–1874, 1880–1889)
• Wilhelm von Beetz (1874–1877)
• August von Kluckhohn (1877–1880)
• Karl Haushofer (1889–1895)
• Egbert von Hoyer (1895–1900)
• Walther von Dyck (1900–1903)
Rectors
(1903–1976)
• Walther von Dyck (1903–1906, 1919–1925)
• Friedrich von Thiersch (1906–1908)
• Moritz Schröter (1908–1911)
• Siegmund Günther (1911–1913)
• Heinrich von Schmidt (1913–1915)
• Karl Lintner (1915–1917)
• Karl Heinrich Hager (1917–1919)
• Jonathan Zenneck (1925–1927)
• Kaspar Dantscher (1927–1929)
• Johann Ossanna (1929–1931)
• Richard Schachner (1931–1933)
• Anton Schwaiger (1933–1935)
• Albert Wolfgang Schmidt (1935–1938)
• Lutz Pistor (1938–1945)
• Hans Döllgast (1945)
• Georg Faber (1945–1946)
• Robert Vorhoelzer (1946–1947)
• Ludwig Föppl (1947–1948)
• Hans Piloty (1948–1951)
• August Rucker (1951–1954)
• Robert Sauer (1954–1956)
• Ernst Schmidt (1956–1958)
• Max Kneissl (1958–1960)
• Gustav Aufhammer (1960–1962)
• Franz Patat (1962–1964)
• Heinrich Netz (1964–1965)
• Gerd Albers (1965–1968)
• Horst von Engerth (1968–1970)
• Heinz Schmidtke (1970–1972)
• Ulrich Grigull (1972–1976)
Presidents
(since 1976)
• Ulrich Grigull (1976–1980)
• Wolfgang Wild (1980–1986)
• Herbert Kupfer (1986–1987)
• Otto Meitinger (1987–1995)
• Wolfgang A. Herrmann (1995–2019)
• Thomas Hofmann (since 2019)
Authority control
International
• FAST
• ISNI
• VIAF
National
• France
• BnF data
• Germany
• Israel
• United States
• Czech Republic
• Netherlands
• Poland
• Vatican
Academics
• Leopoldina
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
• 2
Artists
• Scientific illustrators
People
• Deutsche Biographie
• Trove
Other
• SNAC
• IdRef
| Wikipedia |
Walter Warner
Walter Warner (1563–1643) was an English mathematician and scientist.
Life
He was born in Leicestershire and educated at Merton College, Oxford, graduating B.A. in 1578.[1]
At the end of the sixteenth century he belonged to the circle around Henry Percy, 9th Earl of Northumberland, the 'Wizard Earl'. The Earl's 'three magi' were Warner, Thomas Harriot and Robert Hues.[2] Percy paid Warner a retainer to help him with alchemical experiments (£20 per annum in 1595, rising to £40 in 1607).[3] He also belonged to the overlapping group around Sir Walter Ralegh. At this time he was mainly known for chemical and medical interests.[4] It has been argued by Jean Jacquot that this group of experimental researchers, sponsored by Percy and Ralegh, represents the transitional moment from the still-magical theories of Giordano Bruno to real science.[5]
He may have been associated with Christopher Marlowe's study group on religion, whose members were branded atheists, though confusion with William Warner is possible here.[6]
After Henry Percy's death, he was supported by Algernon Percy, 10th Earl of Northumberland, and then Sir Thomas Aylesbury.[1] Warner edited Harriot's Artis Analyticae Praxis in 1631.[7] He met Thomas Hobbes through Sir Charles Cavendish, who circulated Warner's works.[8]
Warner was a friend of Robert Payne, chaplain to Cavendish;[9] and this connection is frequently used to associate Warner with the Welbeck Academy.[10] In 1634 Warner and Hobbes discussed refraction.[11] This acquaintance was later brought up against Hobbes in the Hobbes-Wallis controversy.[12]
With John Pell he computed the first table of antilogarithms in the 1630s.[13] John Aubrey, relying on Pell's testimony, states that Warner had claimed to have anticipated William Harvey's discovery of the circulation of the blood, and that Harvey must have heard of it through a Mr Prothero. Pell also mentioned that Warner had been born without a left hand.[14]
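For context on what such a table contains (a modern sketch, not Warner and Pell's method of computation): an antilogarithm table tabulates the inverse of the common logarithm, so the base-10 antilogarithm of $x$ is simply $10^{x}$:

```python
import math

def antilog10(x: float) -> float:
    """Base-10 antilogarithm: the inverse of log10, i.e. 10**x."""
    return 10.0 ** x

# A table pairs x with antilog10(x); log10 and antilog10 undo each other:
assert math.isclose(antilog10(math.log10(3.14159)), 3.14159)
```

An 11-figure table of the kind attributed to Warner would list such values to eleven significant digits.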
Scientific work and legacy
Warner was unpublished in his lifetime, but well known, in particular to Marin Mersenne who published some of his optical work in Universae geometriae (1646). He was an atomist, and a believer in an infinite universe. He was both a theoretical and practical chemist, and wrote psychological works based on Bruno and Lullism. Many manuscripts of his survive, and show eclectic interests; they include works related to the circulation of the blood.[1] Some of Warner's papers ended up in the Pell manuscripts collected by Richard Busby;[15] after his death the bulk of his papers were seized in 1644 by superstitious sequestrators.[16] George John Gray, writing in the Dictionary of National Biography, states that the table of 11-figure antilogarithms later published by James Dodson was believed to have passed to Herbert Thorndike, and then to Busby; Pell's account in 1644 was that Warner had been bankrupt, and the creditors were likely to destroy the work.[17]
References
1. Andrew Pyle (editor), Dictionary of Seventeenth Century British Philosophers (2000), article Warner, pp. 858–862.See also Jan Prins: Walter Warner (ca. 1557-1643) and his notes on Animal Organisms. Dissertation, Utrecht University, 1992
2. Stephen Coote, A Play of Passion: The Life of Sir Walter Ralegh (1993), p. 325.
3. Steven Shapin, A Social History of Truth: Civility and Science in Seventeenth-century England (1994), p. 366.
4. Robert Lacey, Sir Walter Ralegh (1973), p. 320.
5. John S. Mebane, Renaissance Magic and the Return of the Golden Age: The Occult Tradition and Marlowe, Jonson, and Shakespeare (1992), p. 78.
6. Christopher Hill, Intellectual Origins of the English Revolution, Revisited (1997), p. 129, indexed under William Warner.
7. Chisholm, Hugh, ed. (1911). "Algebra" . Encyclopædia Britannica. Vol. 1 (11th ed.). Cambridge University Press. p. 619.
8. George Henry Radcliffe Parkinson, Stuart Shanker (1999), Routledge History of Philosophy (1999), p. 222.
9. Aloysius Martinich, Hobbes: A Biography (1999), p. 24.
10. Ted-Larry Pebworth (2000). Literary circles and cultural communities in Renaissance England. University of Missouri Press. p. 94. ISBN 978-0-8262-1317-4. Retrieved 3 April 2012.
11. Stuart Clark, Vanities of the Eye: Vision in Early Modern European Culture (2007), p. 334.
12. Steven Shapin and Simon Schaffer (1985), Leviathan and the Air-Pump, p. 83.
13. Lee, Sidney, ed. (1895). "Pell, John" . Dictionary of National Biography. Vol. 44. London: Smith, Elder & Co.
14. Richard Barber (editor), John Aubrey, Brief Lives (1975), p. 320.
15. "Archived copy". Archived from the original on 5 June 2011. Retrieved 7 February 2009.
16. Keith Thomas, Religion and the Decline of Magic (1973), p. 431.
17. "Dodson, James" . Dictionary of National Biography. London: Smith, Elder & Co. 1885–1900.
Authority control
International
• ISNI
• VIAF
National
• Germany
• Netherlands
Other
• IdRef
| Wikipedia |
Walter Wilson Stothers
Walter Wilson Stothers (8 November 1946 – 16 July 2009)[1] was a British mathematician who proved the Stothers-Mason Theorem (Mason-Stothers theorem) in the early 1980s.[2]
He was the third and youngest son of a Glasgow family doctor; his mother had herself graduated in mathematics in 1927. He attended Allan Glen's School, a secondary school in Glasgow that specialised in science education, where he was Dux of the School in 1964. From 1964 to 1968 he was a student in the Science Faculty of the University of Glasgow, graduating with a First Class Honours degree.
In September 1968 he married Andrea Watson before beginning further studies at Peterhouse, Cambridge from which he had received a "Jack Scholarship".
Under the supervision of Peter Swinnerton-Dyer, Stothers studied for a Ph.D. in Number theory at the University of Cambridge from 1968 to 1971. He obtained his doctorate in 1972 with a Ph.D. thesis entitled "Some Discrete Triangle Groups".
His main achievement was proving the Stothers–Mason theorem (also known as the Mason–Stothers theorem) in 1981.[3] This is an analogue for polynomials of the well-known abc conjecture for integers: indeed, it was the inspiration for the latter. Later independent proofs were given by R. C. Mason in 1983 in the proceedings of a 1982 colloquium,[4] and again in 1984,[5] and by Umberto Zannier in 1995.[6]
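In modern notation the theorem can be stated as follows (a standard formulation, not a quotation from Stothers's paper): if $a$, $b$, $c$ are coprime polynomials over $\mathbb {C} $, not all constant, with

$a(t)+b(t)=c(t),$

then

$\max\{\deg a,\,\deg b,\,\deg c\}\leq \deg \operatorname {rad} (abc)-1,$

where $\operatorname {rad} (abc)$ denotes the product of the distinct irreducible factors of $abc$, so that $\deg \operatorname {rad} (abc)$ counts the distinct roots of $abc$. Replacing degrees of polynomials by logarithms of absolute values of integers, and the radical of a polynomial by the radical of an integer, yields the shape of the abc conjecture.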
References
1. "Stothers Dr WALTER WILSON : Obituary". Herald – via legacy-ia.com.
2. Cohen, Stephen D. (2010). "Walter Wilson Stothers (1946–2009)". Glasgow Mathematical Journal. 52 (3): 711–715. doi:10.1017/S0017089510000534.
3. Stothers, W. W. (1981), "Polynomial identities and hauptmoduln", Quarterly J. Math. Oxford, 2, 32 (3): 349–370, doi:10.1093/qmath/32.3.349
4. Mason, R. C. (1983), "Equations over function fields", in Bertrand, D.; Waldschmidt, M. (eds.), Approximations Diophantiennes et Nombres Transcendants, Colloque de Luminy, 1982, Progr. Math., vol. 31, Boston: Birkhäuser, pp. 143–149
5. Mason, R. C. (1984), Diophantine Equations over Function Fields, London Mathematical Society Lecture Note Series, vol. 96, Cambridge, England: Cambridge University Press, doi:10.1017/CBO9780511752490, ISBN 978-0-521-26983-4.
6. Zannier, Umberto (1995), "On Davenport's bound for the degree of f^3-g^2 and Riemann's existence theorem", Acta Arithmetica, 71 (2): 107–137, doi:10.4064/aa-71-2-107-137, MR 1339121
• Cohen, Stephen D. (2010). "Walter Wilson Stothers (1946–2009)". Glasgow Mathematical Journal. 52 (3): 711–715. doi:10.1017/S0017089510000534. ISSN 0017-0895.
• Ramon Garcia, Stephan; J. Miller, Steven (13 June 2019). 100 Years of Math Milestones: The Pi Mu Epsilon Centennial Collection. American Mathematical Soc. p. 375. ISBN 978-1-4704-3652-0.
• Lang, Serge (1999). The abc Conjecture: Math Talks for Undergraduates. Springer, New York, NY. pp. 18–31. doi:10.1007/978-1-4612-1476-2_2.
• Formanek, Edward (30 August 2010). "Theorems of W. W. Stothers and the Jacobian Conjecture in two variables". Proceedings of the American Mathematical Society. 139 (4): 1137–1140. doi:10.1090/S0002-9939-2010-10523-3.
• Zannier, Umberto (1996). "Acknowledgment of priority. Addenda: on Davenport's bound for the degree of f^3-g^2". Acta Arithmetica. 74 (4): 387.
Further reading
• Stothers, W. W. (1981). "Polynomial Identities and Hauptmoduln". The Quarterly Journal of Mathematics. 32 (3): 349–370. doi:10.1093/qmath/32.3.349. ISSN 0033-5606.
Authority control
International
• ISNI
• VIAF
National
• Catalonia
• Israel
• United States
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
| Wikipedia |
Wan Zhexian
Wan Zhexian (Chinese: 万哲先; 7 November 1927 – 30 May 2023) was a Chinese mathematician, an academician of the Chinese Academy of Sciences.
Wan Zhexian
万哲先
Born(1927-11-07)7 November 1927
Zichuan, Shandong, China
Died30 May 2023(2023-05-30) (aged 95)
Beijing, China
Alma materTsinghua University
Scientific career
FieldsMathematics
InstitutionsAcademy of Mathematics and Systems Science, CAS
Chinese name
Simplified Chinese万哲先
Traditional Chinese萬哲先
Transcriptions
Standard Mandarin
Hanyu PinyinWàn Zhéxiān
Biography
Wan was born in Zichuan (now Zichuan District of Zibo), Shandong, on 7 November 1927, while his ancestral home is in Xiantao, Hubei.[1] He attended Zhangdian Primary School (张店小学).[2]
Wan was admitted to the Mathematics Department of National South-West Associated University in 1944.[1] He graduated from Tsinghua University in 1948 and taught there after graduation.[1] In 1950, he was transferred to the Institute of Systems Science, Chinese Academy of Sciences, where he successively worked as an assistant, assistant researcher, associate researcher, and researcher.[1] In the same year, he began studying classical groups under the guidance of Professor Hua Luogeng, with whom he co-wrote Classical Groups in 1963.[3]
Wan joined the China Association for Promoting Democracy in 1953 and the Chinese Communist Party in 1985.[1]
On 30 May 2023, he died of an illness in Beijing, aged 95.[1]
Honours and awards
• 1987 State Natural Science Award (Third Class) for the isomorphism theory of classical groups
• 1991 Member of the Chinese Academy of Sciences (CAS)
• 1995 2nd Hua Luogeng Mathematics Prize
References
1. Sun Zonghe (孙宗鹤) (6 June 2023). 万哲先院士在京逝世 享年95岁. gmw.cn (in Chinese). Retrieved 29 June 2023.
2. Cao Jing (曹竞) (2 June 2023). 沉痛悼念!我们的“校友”万哲先院士. cyol.com (in Chinese). Retrieved 29 June 2023.
3. 万哲先. amss.cas.cn (in Chinese). 30 May 2023. Retrieved 29 June 2023.
Authority control
International
• ISNI
• VIAF
National
• Norway
• 2
• France
• BnF data
• Germany
• Israel
• United States
• Sweden
• Czech Republic
• Australia
• Netherlands
Academics
• CiNii
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
| Wikipedia |
Wanda Szmielew
Wanda Szmielew née Montlak (5 April 1918 – 27 August 1976)[2] was a Polish mathematical logician who first proved the decidability of the first-order theory of abelian groups.[2]
Wanda Szmielew
Born(1918-04-05)5 April 1918
Warsaw, Congress Poland
Died27 August 1976(1976-08-27) (aged 58)
Warsaw, Poland
Alma mater
• University of Warsaw (MA)
• University of California, Berkeley (PhD)
Scientific career
FieldsMathematics, logic
ThesisElementary properties of Abelian groups (1955)
Doctoral advisorAlfred Tarski
InfluencedAbraham Robinson[1]
Life
Wanda Montlak was born on 5 April 1918 in Warsaw. She completed high school in 1935 and married, taking the name Szmielew. In the same year she entered the University of Warsaw, where she studied logic under Adolf Lindenbaum, Jan Łukasiewicz, Kazimierz Kuratowski, and Alfred Tarski. Her research at this time included work on the axiom of choice, but it was interrupted by the 1939 Invasion of Poland.[2]
Szmielew became a surveyor during World War II, during which time she continued her research on her own, developing a decision procedure based on quantifier elimination for the theory of abelian groups. She also taught for the Polish underground. After the liberation of Poland, Szmielew took a position at the University of Łódź, which was founded in May 1945. In 1947, she published her paper on the axiom of choice, earned a master's degree from the University of Warsaw, and moved to Warsaw as a senior assistant.[2][1]
In 1949 and 1950, Szmielew visited the University of California, Berkeley, where Tarski had found a permanent position after being exiled from Poland by the war. She lived in the home of Tarski and his wife as Tarski's mistress, leaving her husband behind in Poland,[3] and completed a Ph.D. at Berkeley in 1950 under Tarski's supervision, with her dissertation consisting of her work on abelian groups.[2][1][4] For the 1955 journal publication of these results, Tarski convinced Szmielew to rephrase her work in terms of his theory of arithmetical functions, a decision that caused this work to be described by Solomon Feferman as "unreadable".[5] Later work by Eklof & Fischer (1972) re-proved Szmielew's result using more standard model-theoretic techniques.[5][6]
Returning to Warsaw as an assistant professor, her interests shifted to the foundations of geometry. With Karol Borsuk, she published a text on the subject in 1955 (translated into English in 1960), and another monograph, published posthumously in 1981 and (in English translation) 1983.[2][1]
She died of cancer on 27 August 1976 in Warsaw.[2]
Selected publications
• Szmielew, Wanda (1947), "On choices from finite sets", Fundamenta Mathematicae, 34 (1): 75–80, doi:10.4064/fm-34-1-75-80, ISSN 0016-2736, MR 0022539.
• Szmielew, Wanda (1955), "Elementary properties of Abelian groups", Fundamenta Mathematicae, 41 (2): 203–271, doi:10.4064/fm-41-2-203-271, ISSN 0016-2736, MR 0072131.
• Borsuk, Karol; Szmielew, Wanda (1955), Podstawy geometrii, Warsawa: Państwowe Wydawnictwo Naukowe, MR 0071791. Translated as Borsuk, Karol; Szmielew, Wanda (1960), Foundations of geometry: Euclidean and Bolyai-Lobachevskian geometry; projective geometry, Revised English translation, New York: Interscience Publishers, Inc., MR 0143072.
• Szmielew, Wanda (1981), Od geometrii afinicznej do euklidesowej, Biblioteka Matematyczna [Mathematics Library], vol. 55, Warsaw: Państwowe Wydawnictwo Naukowe (PWN), p. 172, ISBN 83-01-01374-5, MR 0664205. Translated as Szmielew, Wanda (1983), From affine to Euclidean geometry, Warsaw: PWN—Polish Scientific Publishers, ISBN 90-277-1243-3, MR 0720548.
• Schwabhäuser, W.; Szmielew, W.; Tarski, A. (1983), Metamathematische Methoden in der Geometrie, Hochschultext [University Textbooks], Berlin: Springer-Verlag, doi:10.1007/978-3-642-69418-9, ISBN 3-540-12958-8, MR 0731370.
References
1. Kordos, Marek; Moszyńska, Maria; Szczerba, Lesław W. (December 1977), translated by Smólska, J., "Wanda Szmielew 1918–1976", Studia Logica, Kluwer Academic Publishers, 36 (4): 241–244, doi:10.1007/BF02120661, eISSN 1572-8730, ISSN 0039-3215, MR 0497794, S2CID 123110088.
2. O'Connor, John J.; Robertson, Edmund F., "Wanda Montlak Szmielew", MacTutor History of Mathematics Archive, University of St Andrews
3. Feferman, Anita Burdman; Feferman, Solomon (2004), Alfred Tarski: life and logic, Cambridge: Cambridge University Press, pp. 177–178, ISBN 0-521-80240-7, MR 2095748
4. Wanda Szmielew at the Mathematics Genealogy Project
5. Feferman, Solomon (2008), "Tarski's Conceptual Analysis of Semantical Notions", in Patterson, Douglas (ed.), New Essays on Tarski and Philosophy, Oxford Univ. Press, Oxford, pp. 72–93, doi:10.1093/acprof:oso/9780199296309.003.0004, ISBN 978-0-19-929630-9, MR 2509211. See footnote 27, p. 90.
6. Eklof, Paul C.; Fischer, Edward R. (1972), "The elementary theory of abelian groups", Annals of Pure and Applied Logic, 4 (2): 115–171, doi:10.1016/0003-4843(72)90013-7, MR 0540003.
Travelling salesman problem
The travelling salesman problem (TSP) asks the following question: "Given a list of cities and the distances between each pair of cities, what is the shortest possible route that visits each city exactly once and returns to the origin city?" It is an NP-hard problem in combinatorial optimization, important in theoretical computer science and operations research.
The travelling purchaser problem and the vehicle routing problem are both generalizations of TSP.
In the theory of computational complexity, the decision version of the TSP (where given a length L, the task is to decide whether the graph has a tour of at most L) belongs to the class of NP-complete problems. Thus, it is possible that the worst-case running time for any algorithm for the TSP increases superpolynomially (but no more than exponentially) with the number of cities.
The problem was first formulated in 1930 and is one of the most intensively studied problems in optimization. It is used as a benchmark for many optimization methods. Even though the problem is computationally difficult, many heuristics and exact algorithms are known, so that some instances with tens of thousands of cities can be solved completely and even problems with millions of cities can be approximated within a small fraction of 1%.[1]
The TSP has several applications even in its purest formulation, such as planning, logistics, and the manufacture of microchips. Slightly modified, it appears as a sub-problem in many areas, such as DNA sequencing. In these applications, the concept city represents, for example, customers, soldering points, or DNA fragments, and the concept distance represents travelling times or cost, or a similarity measure between DNA fragments. The TSP also appears in astronomy, as astronomers observing many sources will want to minimize the time spent moving the telescope between the sources; in such problems, the TSP can be embedded inside an optimal control problem. In many applications, additional constraints such as limited resources or time windows may be imposed.
History
The origins of the travelling salesman problem are unclear. A handbook for travelling salesmen from 1832 mentions the problem and includes example tours through Germany and Switzerland, but contains no mathematical treatment.[2]
The TSP was mathematically formulated in the 19th century by the Irish mathematician William Rowan Hamilton and by the British mathematician Thomas Kirkman. Hamilton's icosian game was a recreational puzzle based on finding a Hamiltonian cycle.[3] The general form of the TSP appears to have been first studied by mathematicians during the 1930s in Vienna and at Harvard, notably by Karl Menger, who defines the problem, considers the obvious brute-force algorithm, and observes the non-optimality of the nearest neighbour heuristic:
We denote by messenger problem (since in practice this question should be solved by each postman, anyway also by many travelers) the task to find, for finitely many points whose pairwise distances are known, the shortest route connecting the points. Of course, this problem is solvable by finitely many trials. Rules which would push the number of trials below the number of permutations of the given points, are not known. The rule that one first should go from the starting point to the closest point, then to the point closest to this, etc., in general does not yield the shortest route.[4]
It was first considered mathematically in the 1930s by Merrill M. Flood who was looking to solve a school bus routing problem.[5] Hassler Whitney at Princeton University generated interest in the problem, which he called the "48 states problem". The earliest publication using the phrase "travelling [or traveling] salesman problem" was the 1949 RAND Corporation report by Julia Robinson, "On the Hamiltonian game (a traveling salesman problem)."[6][7]
In the 1950s and 1960s, the problem became increasingly popular in scientific circles in Europe and the United States after the RAND Corporation in Santa Monica offered prizes for steps in solving the problem.[5] Notable contributions were made by George Dantzig, Delbert Ray Fulkerson and Selmer M. Johnson from the RAND Corporation, who expressed the problem as an integer linear program and developed the cutting plane method for its solution. They wrote what is considered the seminal paper on the subject, in which, with these new methods, they solved an instance with 49 cities to optimality by constructing a tour and proving that no other tour could be shorter. Dantzig, Fulkerson and Johnson, however, speculated that given a near-optimal solution, one may be able to find or prove optimality by adding a small number of extra inequalities (cuts). They used this idea to solve their initial 49-city problem using a string model, and found that only 26 cuts were needed. While this paper did not give an algorithmic approach to TSP problems, the ideas that lay within it were indispensable to later creating exact solution methods for the TSP, though it would take 15 years to find an algorithmic approach to creating these cuts.[5] As well as cutting-plane methods, Dantzig, Fulkerson and Johnson used branch-and-bound algorithms, perhaps for the first time.[5]
In 1959, Jillian Beardwood, J.H. Halton and John Hammersley published an article entitled "The Shortest Path Through Many Points" in the Proceedings of the Cambridge Philosophical Society.[8] The Beardwood–Halton–Hammersley theorem provides a practical solution to the travelling salesman problem. The authors derived an asymptotic formula to determine the length of the shortest route for a salesman who starts at a home or office and visits a fixed number of locations before returning to the start.
In the following decades, the problem was studied by many researchers from mathematics, computer science, chemistry, physics, and other sciences. In the 1960s, a new approach was created: instead of seeking optimal solutions, one would produce a solution whose length is provably bounded by a multiple of the optimal length, and in doing so create lower bounds for the problem; these lower bounds would then be used with branch-and-bound approaches. One method of doing this was to create a minimum spanning tree of the graph and then double all its edges, which produces the bound that the length of an optimal tour is at most twice the weight of a minimum spanning tree.[5]
In 1976, Christofides and Serdyukov, independently of each other, made a big advance in this direction:[9] the Christofides–Serdyukov algorithm yields a solution that, in the worst case, is at most 1.5 times longer than the optimal solution. As the algorithm was simple and quick, many hoped it would give way to a near-optimal solution method. However, this hope for improvement did not immediately materialize, and Christofides–Serdyukov remained the method with the best worst-case guarantee until 2011, when a (very) slightly improved approximation algorithm was developed for the subset of "graphical" TSPs.[10] In 2020 this tiny improvement was extended to the full (metric) TSP.[11][12]
Richard M. Karp showed in 1972 that the Hamiltonian cycle problem was NP-complete, which implies the NP-hardness of TSP. This supplied a mathematical explanation for the apparent computational difficulty of finding optimal tours.
Great progress was made in the late 1970s and 1980s, when Grötschel, Padberg, Rinaldi and others managed to exactly solve instances with up to 2,392 cities, using cutting planes and branch and bound.
In the 1990s, Applegate, Bixby, Chvátal, and Cook developed the program Concorde that has been used in many recent record solutions. Gerhard Reinelt published the TSPLIB in 1991, a collection of benchmark instances of varying difficulty, which has been used by many research groups for comparing results. In 2006, Cook and others computed an optimal tour through an 85,900-city instance given by a microchip layout problem, currently the largest solved TSPLIB instance. For many other instances with millions of cities, solutions can be found that are guaranteed to be within 2–3% of an optimal tour.[13]
Description
As a graph problem
TSP can be modelled as an undirected weighted graph, such that cities are the graph's vertices, paths are the graph's edges, and a path's distance is the edge's weight. It is a minimization problem starting and finishing at a specified vertex after having visited each other vertex exactly once. Often, the model is a complete graph (i.e., each pair of vertices is connected by an edge). If no path exists between two cities, adding a sufficiently long edge will complete the graph without affecting the optimal tour.
Asymmetric and symmetric
In the symmetric TSP, the distance between two cities is the same in each opposite direction, forming an undirected graph. This symmetry halves the number of possible solutions. In the asymmetric TSP, paths may not exist in both directions or the distances might be different, forming a directed graph. Traffic collisions, one-way streets, and airfares for cities with different departure and arrival fees are examples of how this symmetry could break down.
Related problems
• An equivalent formulation in terms of graph theory is: Given a complete weighted graph (where the vertices would represent the cities, the edges would represent the roads, and the weights would be the cost or distance of that road), find a Hamiltonian cycle with the least weight.
• The requirement of returning to the starting city does not change the computational complexity of the problem, see Hamiltonian path problem.
• Another related problem is the bottleneck travelling salesman problem (bottleneck TSP): Find a Hamiltonian cycle in a weighted graph with the minimal weight of the weightiest edge (for example, to avoid narrow streets with big buses).[14] The problem is of considerable practical importance, apart from evident transportation and logistics areas. A classic example is in printed circuit manufacturing: scheduling of a route of the drill machine to drill holes in a PCB. In robotic machining or drilling applications, the "cities" are parts to machine or holes (of different sizes) to drill, and the "cost of travel" includes time for retooling the robot (single machine job sequencing problem).[15]
• The generalized travelling salesman problem, also known as the "travelling politician problem", deals with "states" that have (one or more) "cities" and the salesman has to visit exactly one "city" from each "state". One application is encountered in ordering a solution to the cutting stock problem in order to minimize knife changes. Another is concerned with drilling in semiconductor manufacturing, see e.g., U.S. Patent 7,054,798. Noon and Bean demonstrated that the generalized travelling salesman problem can be transformed into a standard TSP with the same number of cities, but a modified distance matrix.
• The sequential ordering problem deals with the problem of visiting a set of cities where precedence relations between the cities exist.
• A common interview question at Google is how to route data among data processing nodes; routes vary by time to transfer the data, but nodes also differ by their computing power and storage, compounding the problem of where to send data.
• The travelling purchaser problem deals with a purchaser who is charged with purchasing a set of products. He can purchase these products in several cities, but at different prices and not all cities offer the same products. The objective is to find a route between a subset of the cities that minimizes total cost (travel cost + purchasing cost) and enables the purchase of all required products.
Integer linear programming formulations
The TSP can be formulated as an integer linear program.[16][17][18] Several formulations are known. Two notable formulations are the Miller–Tucker–Zemlin (MTZ) formulation and the Dantzig–Fulkerson–Johnson (DFJ) formulation. The DFJ formulation is stronger, though the MTZ formulation is still useful in certain settings.[19][20]
Common to both these formulations is that one labels the cities with the numbers $1,\ldots ,n$ and takes $c_{ij}>0$ to be the cost (distance) from city $i$ to city $j$. The main variables in the formulations are:
$x_{ij}={\begin{cases}1&{\text{the path goes from city }}i{\text{ to city }}j\\0&{\text{otherwise}}\end{cases}}$
It is because these are 0/1 variables that the formulations become integer programs; all other constraints are purely linear. In particular, the objective in the program is to
minimize the tour length $\sum _{i=1}^{n}\sum _{j\neq i,j=1}^{n}c_{ij}x_{ij}$.
Without further constraints, the $\{x_{ij}\}_{i,j}$ will however effectively range over all subsets of the set of edges, which is very far from the sets of edges in a tour, and allows for a trivial minimum where all $x_{ij}=0$. Therefore, both formulations also have the constraints that at each vertex there is exactly one incoming edge and one outgoing edge, which may be expressed as the $2n$ linear equations
$\sum _{i=1,i\neq j}^{n}x_{ij}=1$ for $j=1,\ldots ,n$ and $\sum _{j=1,j\neq i}^{n}x_{ij}=1$ for $i=1,\ldots ,n$.
These ensure that the chosen set of edges locally looks like that of a tour, but still allow for solutions violating the global requirement that there is one tour which visits all vertices, as the edges chosen could make up several tours each visiting only a subset of the vertices; arguably it is this global requirement that makes TSP a hard problem. The MTZ and DFJ formulations differ in how they express this final requirement as linear constraints.
Miller–Tucker–Zemlin formulation[21]
In addition to the $x_{ij}$ variables as above, there is for each $i=2,\ldots ,n$ a dummy variable $u_{i}$ that keeps track of the order in which the cities are visited, counting from city $1$; the interpretation is that $u_{i}<u_{j}$ implies city $i$ is visited before city $j$. For a given tour (as encoded into values of the $x_{ij}$ variables), one may find satisfying values for the $u_{i}$ variables by making $u_{i}$ equal to the number of edges along that tour, when going from city $1$ to city $i$.
Because linear programming favours non-strict inequalities ($\geq $) over strict ($>$), we would like to impose constraints to the effect that
$u_{j}\geq u_{i}+1$ if $x_{ij}=1$.
Merely requiring $u_{j}\geq u_{i}+x_{ij}$ would not achieve that, because this also requires $u_{j}\geq u_{i}$ when $x_{ij}=0$, which is not correct. Instead, MTZ use the $(n-1)(n-2)$ linear constraints
$u_{j}+(n-2)\geq u_{i}+(n-1)x_{ij}$ for all distinct $i,j\in \{2,\dotsc ,n\}$
where the constant term $n-2$ provides sufficient slack that $x_{ij}=0$ does not impose a relation between $u_{j}$ and $u_{i}$.
The way that the $u_{i}$ variables then enforce that a single tour visits all cities is that they increase by (at least) $1$ for each step along a tour, with a decrease only allowed where the tour passes through city $1$. That constraint would be violated by every tour which does not pass through city $1$, so the only way to satisfy it is that the tour passing city $1$ also passes through all other cities.
The MTZ formulation of TSP is thus the following integer linear programming problem:
${\begin{aligned}\min \sum _{i=1}^{n}\sum _{j\neq i,j=1}^{n}c_{ij}x_{ij}&\colon &&\\x_{ij}\in {}&\{0,1\}&&i,j=1,\ldots ,n;\\u_{i}\in {}&\mathbf {Z} &&i=2,\ldots ,n;\\\sum _{i=1,i\neq j}^{n}x_{ij}={}&1&&j=1,\ldots ,n;\\\sum _{j=1,j\neq i}^{n}x_{ij}={}&1&&i=1,\ldots ,n;\\u_{i}-u_{j}+(n-1)x_{ij}\leq {}&n-2&&2\leq i\neq j\leq n;\\2\leq u_{i}\leq {}&n&&2\leq i\leq n.\end{aligned}}$
The first set of equalities requires that each city is arrived at from exactly one other city, and the second set of equalities requires that from each city there is a departure to exactly one other city. The last constraints enforce that there is only a single tour covering all cities, and not two or more disjointed tours that only collectively cover all cities. To prove this, it is shown below (1) that every feasible solution contains only one closed sequence of cities, and (2) that for every single tour covering all cities, there are values for the dummy variables $u_{i}$ that satisfy the constraints.
To prove that every feasible solution contains only one closed sequence of cities, it suffices to show that every subtour in a feasible solution passes through city 1 (noting that the equalities ensure there can only be one such tour). For if we sum all the inequalities corresponding to $x_{ij}=1$ for any subtour of k steps not passing through city 1, we obtain:
$(n-1)k\leq (n-2)k,$
which is a contradiction.
It now must be shown that for every single tour covering all cities, there are values for the dummy variables $u_{i}$ that satisfy the constraints.
Without loss of generality, define the tour as originating (and ending) at city 1. Choose $u_{i}=t$ if city $i$ is visited in step $t$ ($i,t=2,3,\ldots ,n$). Then
$u_{i}-u_{j}\leq n-2,$
since $u_{i}$ can be no greater than $n$ and $u_{j}$ can be no less than 2; hence the constraints are satisfied whenever $x_{ij}=0.$ For $x_{ij}=1$, we have:
$u_{i}-u_{j}+(n-1)x_{ij}=(t)-(t+1)+n-1=n-2,$
satisfying the constraint.
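The argument above can also be checked mechanically. The following sketch (Python; the function name and the small 4-city instance are illustrative, not part of the original formulation) verifies that a genuine tour admits feasible values of the dummy variables, while a solution made of two 2-city subtours does not:

```python
def mtz_feasible(x, u, n):
    """Check the MTZ constraints u_i - u_j + (n-1)*x_ij <= n-2
    for all distinct i, j in {2, ..., n} (cities numbered 1..n)."""
    return all(u[i] - u[j] + (n - 1) * x.get((i, j), 0) <= n - 2
               for i in range(2, n + 1)
               for j in range(2, n + 1) if i != j)

n = 4
# A genuine tour 1 -> 2 -> 3 -> 4 -> 1, with u_i = step at which
# city i is visited, counting from city 1.
tour_x = {(1, 2): 1, (2, 3): 1, (3, 4): 1, (4, 1): 1}
tour_u = {2: 2, 3: 3, 4: 4}
assert mtz_feasible(tour_x, tour_u, n)

# Two subtours 1 <-> 2 and 3 <-> 4: no choice of u-values works, since
# u_3 - u_4 + 3 <= 2 and u_4 - u_3 + 3 <= 2 cannot both hold
# (adding them would give 6 <= 4).
subtour_x = {(1, 2): 1, (2, 1): 1, (3, 4): 1, (4, 3): 1}
assert not any(mtz_feasible(subtour_x, {2: u2, 3: u3, 4: u4}, n)
               for u2 in range(2, n + 1)
               for u3 in range(2, n + 1)
               for u4 in range(2, n + 1))
```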
Dantzig–Fulkerson–Johnson formulation
Label the cities with the numbers 1, …, n and define:
$x_{ij}={\begin{cases}1&{\text{the path goes from city }}i{\text{ to city }}j\\0&{\text{otherwise}}\end{cases}}$
Take $c_{ij}>0$ to be the distance from city i to city j. Then TSP can be written as the following integer linear programming problem:
${\begin{aligned}\min &\sum _{i=1}^{n}\sum _{j\neq i,j=1}^{n}c_{ij}x_{ij}\colon &&\\&\sum _{i=1,i\neq j}^{n}x_{ij}=1&&j=1,\ldots ,n;\\&\sum _{j=1,j\neq i}^{n}x_{ij}=1&&i=1,\ldots ,n;\\&\sum _{i\in Q}{\sum _{j\neq i,j\in Q}{x_{ij}}}\leq |Q|-1&&\forall Q\subsetneq \{1,\ldots ,n\},|Q|\geq 2\\\end{aligned}}$
The last constraint of the DFJ formulation—called a subtour elimination constraint—ensures no proper subset Q can form a sub-tour, so the solution returned is a single tour and not the union of smaller tours. Because this leads to an exponential number of possible constraints, in practice it is solved with row generation.[22]
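In a row-generation scheme, one repeatedly solves the relaxation, decomposes the resulting assignment into its cycles, and adds a subtour elimination constraint for each cycle that is not a full tour. The cycle decomposition can be sketched as follows (Python; names and the 4-city example are illustrative):

```python
def subtours(succ):
    """Decompose an assignment (each city has exactly one successor,
    i.e. the degree constraints hold) into its cycles; a valid TSP
    solution consists of exactly one cycle."""
    cycles, seen = [], set()
    for start in succ:
        if start in seen:
            continue
        cycle, city = [], start
        while city not in seen:
            seen.add(city)
            cycle.append(city)
            city = succ[city]
        cycles.append(cycle)
    return cycles

# Degree-feasible but made of two subtours; the cycle Q = [0, 1] has
# 2 internal edges, violating the DFJ constraint 2 <= |Q| - 1 = 1.
succ = {0: 1, 1: 0, 2: 3, 3: 2}
print(subtours(succ))   # prints [[0, 1], [2, 3]]
```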
Computing a solution
The traditional lines of attack for the NP-hard problems are the following:
• Devising exact algorithms, which work reasonably fast only for small problem sizes.
• Devising "suboptimal" or heuristic algorithms, i.e., algorithms that deliver approximated solutions in a reasonable time.
• Finding special cases for the problem ("subproblems") for which either better or exact heuristics are possible.
Exact algorithms
The most direct solution would be to try all permutations (ordered combinations) and see which one is cheapest (using brute-force search). The running time for this approach lies within a polynomial factor of $O(n!)$, the factorial of the number of cities, so this solution becomes impractical even for only 20 cities.
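The exhaustive search is straightforward to write down. The sketch below (Python; the function name and the small symmetric instance are illustrative) fixes city 0 as the start, since rotating a tour does not change its length:

```python
from itertools import permutations

def brute_force_tsp(dist):
    """Try every ordering of the non-start cities and keep the cheapest
    closed tour; dist[i][j] is the cost of travelling from i to j."""
    n = len(dist)
    best_cost, best_tour = float("inf"), None
    for perm in permutations(range(1, n)):   # city 0 fixed as the start
        tour = (0,) + perm
        cost = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
        if cost < best_cost:
            best_cost, best_tour = cost, tour
    return best_cost, best_tour

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(brute_force_tsp(d))   # prints (18, (0, 1, 3, 2))
```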
One of the earliest applications of dynamic programming is the Held–Karp algorithm that solves the problem in time $O(n^{2}2^{n})$.[23] This bound has also been reached by Exclusion-Inclusion in an attempt preceding the dynamic programming approach.
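A minimal sketch of the Held–Karp recurrence, assuming a distance-matrix representation (names and the instance are illustrative): the table entry for a subset $S$ and endpoint $j$ holds the cheapest cost of a path from city 0 through all of $S$ ending at $j$.

```python
from itertools import combinations

def held_karp(dist):
    """Held–Karp dynamic programme; C[(S, j)] is the cheapest cost of
    a path that starts at city 0, visits every city in S, ends at j."""
    n = len(dist)
    C = {(frozenset([j]), j): dist[0][j] for j in range(1, n)}
    for size in range(2, n):
        for subset in combinations(range(1, n), size):
            S = frozenset(subset)
            for j in S:
                C[(S, j)] = min(C[(S - {j}, k)] + dist[k][j]
                                for k in S - {j})
    full = frozenset(range(1, n))
    # Close the tour by returning from the last city to city 0.
    return min(C[(full, j)] + dist[j][0] for j in full)

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(held_karp(d))   # prints 18
```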
Improving these time bounds seems to be difficult. For example, it has not been determined whether a classical exact algorithm for TSP that runs in time $O(1.9999^{n})$ exists.[24] The currently best quantum exact algorithm for TSP, due to Ambainis et al., runs in time $O(1.728^{n})$.[25]
Other approaches include:
• Various branch-and-bound algorithms, which can be used to process TSPs containing 40–60 cities.
• Progressive improvement algorithms, which use techniques reminiscent of linear programming and work well for up to 200 cities.
• Implementations of branch-and-bound and problem-specific cut generation (branch-and-cut[26]);[27] this is the method of choice for solving large instances. This approach holds the current record, solving an instance with 85,900 cities, see Applegate et al. (2006).
An exact solution for 15,112 German towns from TSPLIB was found in 2001 using the cutting-plane method proposed by George Dantzig, Ray Fulkerson, and Selmer M. Johnson in 1954, based on linear programming. The computations were performed on a network of 110 processors located at Rice University and Princeton University. The total computation time was equivalent to 22.6 years on a single 500 MHz Alpha processor. In May 2004, the travelling salesman problem of visiting all 24,978 towns in Sweden was solved: a tour of length approximately 72,500 kilometres was found and it was proven that no shorter tour exists.[28] In March 2005, the travelling salesman problem of visiting all 33,810 points in a circuit board was solved using Concorde TSP Solver: a tour of length 66,048,945 units was found and it was proven that no shorter tour exists. The computation took approximately 15.7 CPU-years (Cook et al. 2006). In April 2006 an instance with 85,900 points was solved using Concorde TSP Solver, taking over 136 CPU-years, see Applegate et al. (2006).
Heuristic and approximation algorithms
Various heuristics and approximation algorithms, which quickly yield good solutions, have been devised. These include the Multi-fragment algorithm. Modern methods can find solutions for extremely large problems (millions of cities) within a reasonable time which are with a high probability just 2–3% away from the optimal solution.[13]
Several categories of heuristics are recognized.
Constructive heuristics
The nearest neighbour (NN) algorithm (a greedy algorithm) lets the salesman choose the nearest unvisited city as his next move. This algorithm quickly yields a reasonably short route. For N cities randomly distributed on a plane, the algorithm on average yields a path 25% longer than the shortest possible path.[29] However, there exist many specially arranged city distributions which make the NN algorithm give the worst route.[30] This is true for both asymmetric and symmetric TSPs.[31] Rosenkrantz et al.[32] showed that the NN algorithm has the approximation factor $\Theta (\log |V|)$ for instances satisfying the triangle inequality. A variation of the NN algorithm, called the nearest fragment (NF) operator, which connects a group (fragment) of nearest unvisited cities, can find shorter routes with successive iterations.[33] The NF operator can also be applied on an initial solution obtained by the NN algorithm for further improvement in an elitist model, where only better solutions are accepted.
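The NN construction can be sketched as follows (Python; the function name and instance are illustrative, and on this small instance the greedy tour happens to be optimal, which is not true in general):

```python
def nearest_neighbour(dist, start=0):
    """Greedy tour construction: repeatedly move to the closest
    unvisited city, then close the tour back to the start."""
    n = len(dist)
    unvisited = set(range(n)) - {start}
    tour, current = [start], start
    while unvisited:
        nxt = min(unvisited, key=lambda j: dist[current][j])
        tour.append(nxt)
        unvisited.remove(nxt)
        current = nxt
    length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
    return tour, length

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(nearest_neighbour(d))   # prints ([0, 1, 3, 2], 18)
```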
The bitonic tour of a set of points is the minimum-perimeter monotone polygon that has the points as its vertices; it can be computed efficiently by dynamic programming.
Another constructive heuristic, Match Twice and Stitch (MTS), performs two sequential matchings, where the second matching is executed after deleting all the edges of the first matching, to yield a set of cycles. The cycles are then stitched to produce the final tour.[34]
The Algorithm of Christofides and Serdyukov
The algorithm of Christofides and Serdyukov follows a similar outline but combines the minimum spanning tree with a solution of another problem, minimum-weight perfect matching. This gives a TSP tour which is at most 1.5 times the optimal. It was one of the first approximation algorithms, and was in part responsible for drawing attention to approximation algorithms as a practical approach to intractable problems. As a matter of fact, the term "algorithm" was not commonly extended to approximation algorithms until later; the Christofides algorithm was initially referred to as the Christofides heuristic.[9]
This algorithm uses a result from graph theory to improve on the bound obtained by doubling the cost of the minimum spanning tree. Given an Eulerian graph, we can find an Eulerian tour in $O(n)$ time.[5] So if we had an Eulerian graph with the cities from a TSP as vertices, then we could use such a method for finding an Eulerian tour to find a TSP solution. By the triangle inequality, the TSP tour obtained by shortcutting can be no longer than the Eulerian tour, and as such the cost of the Eulerian graph gives an upper bound for the tour produced. Such a method is described below.
1. Find a minimum spanning tree for the problem
2. Create duplicates for every edge to create an Eulerian graph
3. Find an Eulerian tour for this graph
4. Convert to TSP: if a city is visited twice, create a shortcut from the city before this in the tour to the one after this.
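The four steps above can be sketched as follows (Python; names and the instance are illustrative). Shortcutting the Eulerian walk of the doubled minimum spanning tree is equivalent to visiting the tree's vertices in depth-first preorder, which the sketch exploits:

```python
def double_tree_tour(dist):
    """2-approximation for metric TSP: build a minimum spanning tree
    (Prim's algorithm), then shortcut its doubled Eulerian walk, which
    amounts to a DFS preorder walk of the tree."""
    n = len(dist)
    # Prim's algorithm: parent[] describes the MST rooted at city 0.
    in_tree = [False] * n
    best = [float("inf")] * n
    parent = [0] * n
    best[0] = 0
    for _ in range(n):
        u = min((v for v in range(n) if not in_tree[v]),
                key=lambda v: best[v])
        in_tree[u] = True
        for v in range(n):
            if not in_tree[v] and dist[u][v] < best[v]:
                best[v], parent[v] = dist[u][v], u
    children = [[] for _ in range(n)]
    for v in range(1, n):
        children[parent[v]].append(v)
    # Preorder walk = Eulerian walk of the doubled tree, repeats skipped.
    tour, stack = [], [0]
    while stack:
        u = stack.pop()
        tour.append(u)
        stack.extend(reversed(children[u]))
    length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
    return tour, length

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(double_tree_tour(d))   # prints ([0, 1, 3, 2], 18)
```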
To improve this bound, a better way of creating an Eulerian graph is needed. By the triangle inequality, the best Eulerian graph must have the same cost as the best travelling salesman tour; hence finding optimal Eulerian graphs is at least as hard as TSP. One way of doing this is by minimum-weight matching, using algorithms with $O(n^{3})$ running time.[5]
Making a graph into an Eulerian graph starts with the minimum spanning tree. Then all the vertices of odd degree must be made even, so a matching for the odd-degree vertices is added, which increases the degree of every odd-degree vertex by one.[5] This leaves us with a graph in which every vertex is of even degree, and which is thus Eulerian. Adapting the above method gives the algorithm of Christofides and Serdyukov:
1. Find a minimum spanning tree for the problem
2. Create a matching for the problem with the set of cities of odd degree.
3. Find an Eulerian tour for this graph
4. Convert to TSP using shortcuts.
Pairwise exchange
The pairwise exchange or 2-opt technique involves iteratively removing two edges and replacing these with two different edges that reconnect the fragments created by edge removal into a new and shorter tour. Similarly, the 3-opt technique removes 3 edges and reconnects them to form a shorter tour. These are special cases of the k-opt method. The label Lin–Kernighan is an often heard misnomer for 2-opt. Lin–Kernighan is actually the more general k-opt method.
For Euclidean instances, 2-opt heuristics give on average solutions that are about 5% better than those of Christofides' algorithm. If we start with an initial solution made with a greedy algorithm, the average number of moves decreases greatly and is $O(n)$; for random starts, however, the average number of moves is $O(n\log(n))$. While this is only a small increase in asymptotic order, the initial number of moves for small problems is about 10 times as large for a random start as for one made from a greedy heuristic, because such 2-opt heuristics exploit 'bad' parts of a solution such as crossings. These types of heuristics are often used within vehicle routing problem heuristics to reoptimize route solutions.[29]
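A first-improvement variant of 2-opt can be sketched as follows (Python; names and the small instance are illustrative). Each accepted move replaces edges (a,b) and (c,e) by (a,c) and (b,e), reversing the segment between them:

```python
def two_opt(tour, dist):
    """First-improvement 2-opt: reverse the segment between positions
    i+1 and j whenever swapping edges (a,b),(c,e) for (a,c),(b,e)
    shortens the tour, until no improving move remains."""
    n = len(tour)
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for j in range(i + 2, n):
                if i == 0 and j == n - 1:
                    continue   # this move only reverses tour direction
                a, b = tour[i], tour[i + 1]
                c, e = tour[j], tour[(j + 1) % n]
                if dist[a][c] + dist[b][e] < dist[a][b] + dist[c][e]:
                    tour[i + 1:j + 1] = reversed(tour[i + 1:j + 1])
                    improved = True
    return tour

d = [[0, 2, 9, 10],
     [2, 0, 6, 4],
     [9, 6, 0, 3],
     [10, 4, 3, 0]]
print(two_opt([0, 2, 1, 3], d))   # prints [0, 1, 3, 2]
```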
k-opt heuristic, or Lin–Kernighan heuristics
The Lin–Kernighan heuristic is a special case of the V-opt or variable-opt technique. It involves the following steps:
1. Given a tour, delete k mutually disjoint edges.
2. Reassemble the remaining fragments into a tour, leaving no disjoint subtours (that is, don't connect a fragment's endpoints together). This in effect simplifies the TSP under consideration into a much simpler problem.
3. Each fragment endpoint can be connected to 2k − 2 other possibilities: of 2k total fragment endpoints available, the two endpoints of the fragment under consideration are disallowed. Such a constrained 2k-city TSP can then be solved with brute force methods to find the least-cost recombination of the original fragments.
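The brute-force recombination of step 3 can be sketched as follows. `best_recombination` is a hypothetical helper, not part of any published Lin–Kernighan code: it deletes the tour edges at the given positions and enumerates every order and orientation of the resulting fragments, which is only feasible for small k.

```python
import math
from itertools import permutations, product

def best_recombination(points, tour, cut_positions):
    """Delete the tour edges leaving the given positions, then brute-force
    the cheapest reassembly of the resulting fragments into one tour
    (all fragment orders and orientations; fragments stay intact, so no
    fragment's endpoints are ever joined to each other)."""
    n = len(tour)
    cuts = sorted(cut_positions)
    # Split the cyclic tour into fragments between consecutive cuts.
    frags = []
    for a, b in zip(cuts, cuts[1:] + [cuts[0] + n]):
        frags.append([tour[i % n] for i in range(a + 1, b + 1)])
    d = lambda u, v: math.dist(points[u], points[v])
    def length(t):
        return sum(d(t[i], t[(i + 1) % n]) for i in range(n))
    best = None
    first = frags[0]
    for order in permutations(frags[1:]):
        for flips in product([False, True], repeat=len(order) + 1):
            candidate = list(reversed(first)) if flips[0] else list(first)
            for f, flip in zip(order, flips[1:]):
                candidate += list(reversed(f)) if flip else list(f)
            if best is None or length(candidate) < length(best):
                best = candidate
    return best
```

With two cuts this reduces to a 2-opt move; on a self-crossing square tour it recovers the uncrossed tour of length 4.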
The most popular of the k-opt methods is 3-opt, introduced by Shen Lin of Bell Labs in 1965. A special case of 3-opt is where the edges are not disjoint (two of the edges are adjacent to one another). In practice, it is often possible to achieve substantial improvement over 2-opt without the combinatorial cost of the general 3-opt by restricting the 3-changes to this special subset where two of the removed edges are adjacent. This so-called two-and-a-half-opt typically falls roughly midway between 2-opt and 3-opt, both in terms of the quality of tours achieved and the time required to achieve those tours.
V-opt heuristic
The variable-opt method is related to, and a generalization of, the k-opt method. Whereas the k-opt methods remove a fixed number (k) of edges from the original tour, the variable-opt methods do not fix the size of the edge set to remove. Instead, they grow the set as the search process continues. The best-known method in this family is the Lin–Kernighan method (mentioned above as a misnomer for 2-opt). Shen Lin and Brian Kernighan first published their method in 1972, and it was the most reliable heuristic for solving travelling salesman problems for nearly two decades. More advanced variable-opt methods were developed at Bell Labs in the late 1980s by David Johnson and his research team. These methods (sometimes called Lin–Kernighan–Johnson) build on the Lin–Kernighan method, adding ideas from tabu search and evolutionary computing. The basic Lin–Kernighan technique gives results that are guaranteed to be at least 3-opt. The Lin–Kernighan–Johnson methods compute a Lin–Kernighan tour, and then perturb the tour by what has been described as a mutation that removes at least four edges and reconnects the tour in a different way, then V-opting the new tour. The mutation is often enough to move the tour from the local minimum identified by Lin–Kernighan. V-opt methods are widely considered the most powerful heuristics for the problem, and are able to address special cases, such as the Hamilton Cycle Problem and other non-metric TSPs that other heuristics fail on. For many years Lin–Kernighan–Johnson had identified optimal solutions for all TSPs where an optimal solution was known and had identified the best-known solutions for all other TSPs on which the method had been tried.
Randomized improvement
Optimized Markov chain algorithms which use local searching heuristic sub-algorithms can find a route extremely close to the optimal route for 700 to 800 cities.
TSP is a touchstone for many general heuristics devised for combinatorial optimization such as genetic algorithms, simulated annealing, tabu search, ant colony optimization, river formation dynamics (see swarm intelligence) and the cross entropy method.
Constricting Insertion Heuristic
Start with a sub-tour such as the convex hull, then insert the remaining vertices one at a time.[35]
Ant colony optimization
Artificial intelligence researcher Marco Dorigo described in 1993 a method of heuristically generating "good solutions" to the TSP using a simulation of an ant colony called ACS (ant colony system).[36] It models behaviour observed in real ants to find short paths between food sources and their nest, an emergent behaviour resulting from each ant's preference to follow trail pheromones deposited by other ants.
ACS sends out a large number of virtual ant agents to explore many possible routes on the map. Each ant probabilistically chooses the next city to visit based on a heuristic combining the distance to the city and the amount of virtual pheromone deposited on the edge to the city. The ants explore, depositing pheromone on each edge that they cross, until they have all completed a tour. At this point the ant which completed the shortest tour deposits virtual pheromone along its complete tour route (global trail updating). The amount of pheromone deposited is inversely proportional to the tour length: the shorter the tour, the more it deposits.
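A toy version of pheromone-based tour construction with global trail updating might look like the following. This is a sketch, not Dorigo's ACS: the parameter values (`alpha`, `beta`, `rho`) and the update rule are illustrative only.

```python
import math, random

def ant_colony_tsp(points, n_ants=20, n_iters=100, alpha=1.0, beta=3.0,
                   rho=0.5, seed=0):
    """Toy ant-colony heuristic for the TSP (parameters are illustrative)."""
    rng = random.Random(seed)
    n = len(points)
    d = [[math.dist(points[i], points[j]) or 1e-9 for j in range(n)]
         for i in range(n)]
    tau = [[1.0] * n for _ in range(n)]          # pheromone levels
    best_tour, best_len = None, float("inf")
    for _ in range(n_iters):
        for _ in range(n_ants):
            tour = [rng.randrange(n)]
            unvisited = set(range(n)) - {tour[0]}
            while unvisited:
                i = tour[-1]
                # Choice probability is proportional to
                # pheromone^alpha * (1/distance)^beta.
                weights = [(j, tau[i][j] ** alpha * (1.0 / d[i][j]) ** beta)
                           for j in unvisited]
                r = rng.random() * sum(w for _, w in weights)
                for j, w in weights:
                    r -= w
                    if r <= 0:
                        break
                tour.append(j)
                unvisited.discard(j)
            length = sum(d[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        # Evaporate, then reinforce the best tour (global trail updating);
        # the deposit is inversely proportional to tour length.
        for i in range(n):
            for j in range(n):
                tau[i][j] *= (1 - rho)
        for k in range(n):
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += 1.0 / best_len
            tau[j][i] += 1.0 / best_len
    return best_tour, best_len
```

On tiny instances such as the corners of a unit square, the colony quickly converges on the optimal perimeter tour.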
Special cases
Metric
In the metric TSP, also known as delta-TSP or Δ-TSP, the intercity distances satisfy the triangle inequality.
A very natural restriction of the TSP is to require that the distances between cities form a metric satisfying the triangle inequality; that is, the direct connection from A to B is never farther than the route via an intermediate C:
$d_{AB}\leq d_{AC}+d_{CB}$.
The edge weights then define a metric on the set of vertices. When the cities are viewed as points in the plane, many natural distance functions are metrics, and so many natural instances of TSP satisfy this constraint.
The following are some examples of metric TSPs for various metrics.
• In the Euclidean TSP (see below) the distance between two cities is the Euclidean distance between the corresponding points.
• In the rectilinear TSP the distance between two cities is the sum of the absolute values of the differences of their x- and y-coordinates. This metric is often called the Manhattan distance or city-block metric.
• In the maximum metric, the distance between two points is the maximum of the absolute values of differences of their x- and y-coordinates.
The last two metrics appear, for example, in routing a machine that drills a given set of holes in a printed circuit board. The Manhattan metric corresponds to a machine that adjusts first one co-ordinate, and then the other, so the time to move to a new point is the sum of both movements. The maximum metric corresponds to a machine that adjusts both co-ordinates simultaneously, so the time to move to a new point is the slower of the two movements.
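For illustration, the three metrics above can be written as plain functions on points in the plane; each satisfies the triangle inequality:

```python
import math

def euclidean(p, q):
    """Straight-line distance between two points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def manhattan(p, q):
    """Rectilinear / city-block metric: the two axis moves are made
    one after the other, so their lengths add."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

def maximum(p, q):
    """Maximum (Chebyshev) metric: the two axis moves are simultaneous,
    so the slower one determines the time."""
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))
```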
In its definition, the TSP does not allow cities to be visited twice, but many applications do not need this constraint. In such cases, a symmetric, non-metric instance can be reduced to a metric one. This replaces the original graph with a complete graph in which the inter-city distance $d_{AB}$ is replaced by the shortest path length between A and B in the original graph.
Euclidean
For points in the Euclidean plane, the optimal solution to the travelling salesman problem forms a simple polygon through all of the points, a polygonalization of the points.[37] Any non-optimal solution with crossings can be made into a shorter solution without crossings by local optimizations. The Euclidean distance obeys the triangle inequality, so the Euclidean TSP forms a special case of metric TSP. However, even when the input points have integer coordinates, their distances generally take the form of square roots, and the length of a tour is a sum of radicals, making it difficult to perform the symbolic computation needed to perform exact comparisons of the lengths of different tours.
Like the general TSP, the exact Euclidean TSP is NP-hard, but the issue with sums of radicals is an obstacle to proving that its decision version is in NP, and therefore NP-complete. A discretized version of the problem with distances rounded to integers is NP-complete.[38] With rational coordinates and the actual Euclidean metric, Euclidean TSP is known to be in the Counting Hierarchy,[39] a subclass of PSPACE. With arbitrary real coordinates, Euclidean TSP cannot be in such classes, since there are uncountably many possible inputs. Despite these complications, Euclidean TSP is much easier than the general metric case for approximation.[40] For example, the minimum spanning tree of the graph associated with an instance of the Euclidean TSP is a Euclidean minimum spanning tree, and so can be computed in expected O (n log n) time for n points (considerably less than the number of edges). This enables the simple 2-approximation algorithm for TSP with triangle inequality above to operate more quickly.
In general, for any c > 0, where d is the number of dimensions in the Euclidean space, there is a polynomial-time algorithm that finds a tour of length at most (1 + 1/c) times the optimal for geometric instances of TSP in
$O\left(n(\log n)^{(O(c{\sqrt {d}}))^{d-1}}\right),$
time; this is called a polynomial-time approximation scheme (PTAS).[41] Sanjeev Arora and Joseph S. B. Mitchell were awarded the Gödel Prize in 2010 for their concurrent discovery of a PTAS for the Euclidean TSP.
In practice, simpler heuristics with weaker guarantees continue to be used.
Asymmetric
In most cases, the distance between two nodes in the TSP network is the same in both directions. The case where the distance from A to B is not equal to the distance from B to A is called asymmetric TSP. A practical application of an asymmetric TSP is route optimization using street-level routing (which is made asymmetric by one-way streets, slip-roads, motorways, etc.).
Conversion to symmetric
Solving an asymmetric TSP graph can be somewhat complex. The following is a 3×3 matrix containing all possible path weights between the nodes A, B and C. One option is to turn an asymmetric matrix of size N into a symmetric matrix of size 2N.[42]
Asymmetric path weights
     A    B    C
A    –    1    2
B    6    –    3
C    5    4    –
To double the size, each of the nodes in the graph is duplicated, creating a second ghost node, linked to the original node with a "ghost" edge of very low (possibly negative) weight, here denoted −w. (Alternatively, the ghost edges have weight 0, and weight w is added to all other edges.) The original 3×3 matrix shown above is visible in the bottom left and the transpose of the original in the top-right. Both copies of the matrix have had their diagonals replaced by the low-cost hop paths, represented by −w. In the new graph, no edge directly links original nodes and no edge directly links ghost nodes.
Symmetric path weights
     A    B    C    A′   B′   C′
A    –    –    –    −w   6    5
B    –    –    –    1    −w   4
C    –    –    –    2    3    −w
A′   −w   1    2    –    –    –
B′   6    −w   3    –    –    –
C′   5    4    −w   –    –    –
The weight −w of the "ghost" edges linking the ghost nodes to the corresponding original nodes must be low enough to ensure that all ghost edges must belong to any optimal symmetric TSP solution on the new graph (w=0 is not always low enough). As a consequence, in the optimal symmetric tour, each original node appears next to its ghost node (e.g. a possible path is $\mathrm {A\to A'\to C\to C'\to B\to B'\to A} $) and by merging the original and ghost nodes again we get an (optimal) solution of the original asymmetric problem (in our example, $\mathrm {A\to C\to B\to A} $).
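The doubling construction can be sketched as follows, assuming the asymmetric distances are given as an N×N matrix with zero diagonal; `symmetrize` is an illustrative name, and `None` marks the forbidden original–original and ghost–ghost edges:

```python
def symmetrize(dist, w=0.0):
    """Build the 2N x 2N symmetric matrix from an N x N asymmetric one.
    Node i's ghost is node N+i; ghost edges get weight -w. `None` marks
    forbidden edges (no original-original or ghost-ghost links)."""
    n = len(dist)
    big = [[None] * (2 * n) for _ in range(2 * n)]
    for i in range(n):
        for j in range(n):
            if i != j:
                big[i][n + j] = dist[j][i]   # top-right: transpose of dist
                big[n + i][j] = dist[i][j]   # bottom-left: dist itself
        big[i][n + i] = big[n + i][i] = -w   # ghost edge
    return big
```

Applied to the 3×3 example above, this reproduces the 6×6 symmetric table: the original matrix appears in the bottom left, its transpose in the top right, and −w on the ghost edges.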
Analyst's problem
There is an analogous problem in geometric measure theory which asks the following: under what conditions may a subset E of Euclidean space be contained in a rectifiable curve (that is, when is there a curve with finite length that visits every point in E)? This problem is known as the analyst's travelling salesman problem.
Path length for random sets of points in a square
Suppose $X_{1},\ldots ,X_{n}$ are $n$ independent random variables with uniform distribution in the square $[0,1]^{2}$, and let $L_{n}^{\ast }$ be the shortest path length (i.e. TSP solution) for this set of points, according to the usual Euclidean distance. It is known[8] that, almost surely,
${\frac {L_{n}^{*}}{\sqrt {n}}}\rightarrow \beta \qquad {\text{when }}n\to \infty ,$
where $\beta $ is a positive constant that is not known explicitly. Since $L_{n}^{*}\leq 2{\sqrt {n}}+2$ (see below), it follows from the bounded convergence theorem that $\beta =\lim _{n\to \infty }\mathbb {E} [L_{n}^{*}]/{\sqrt {n}}$, hence lower and upper bounds on $\beta $ follow from bounds on $\mathbb {E} [L_{n}^{*}]$.
The almost sure limit ${\frac {L_{n}^{*}}{\sqrt {n}}}\rightarrow \beta $ as $n\to \infty $ may not exist if the independent locations $X_{1},\ldots ,X_{n}$ are replaced with observations from a stationary ergodic process with uniform marginals.[43]
Upper bound
• One has $L_{n}^{*}\leq 2{\sqrt {n}}+2$, and therefore $\beta \leq 2$, by using a naive path which visits the points monotonically inside each of ${\sqrt {n}}$ slices of width $1/{\sqrt {n}}$ in the square.
• Few[44] proved $L_{n}^{*}\leq {\sqrt {2n}}+1.75$, hence $\beta \leq {\sqrt {2}}$, later improved by Karloff (1987): $\beta \leq 0.984{\sqrt {2}}$.
• Fiechter[45] showed an upper bound of $\beta \leq 0.73\dots $.
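The naive strip ("boustrophedon") path behind the first bound can be sketched as follows; on a random uniform instance its length stays well under $2{\sqrt {n}}+2$. The function name is illustrative:

```python
import math, random

def strip_tour_length(points):
    """Length of the naive strip tour through points in the unit square:
    sort points into about sqrt(n) vertical strips, traverse the strips
    left to right, alternating up/down within each strip."""
    n = len(points)
    m = max(1, math.isqrt(n))  # number of strips, about sqrt(n)
    strips = [[] for _ in range(m)]
    for x, y in points:
        strips[min(int(x * m), m - 1)].append((x, y))
    order = []
    for k, s in enumerate(strips):
        # Alternate the sweep direction so consecutive strips join
        # near the same height.
        s.sort(key=lambda p: p[1], reverse=(k % 2 == 1))
        order.extend(s)
    return sum(math.dist(order[i], order[i + 1]) for i in range(n - 1))
```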
Lower bound
• By observing that $\mathbb {E} [L_{n}^{*}]$ is greater than $n$ times the distance between $X_{0}$ and the closest point $X_{i}\neq X_{0}$, one gets (after a short computation)
$\mathbb {E} [L_{n}^{*}]\geq {\tfrac {1}{2}}{\sqrt {n}}.$
• A better lower bound is obtained[8] by observing that $\mathbb {E} [L_{n}^{*}]$ is greater than ${\tfrac {1}{2}}n$ times the sum of the distances between $X_{0}$ and the closest and second closest points $X_{i},X_{j}\neq X_{0}$, which gives
$\mathbb {E} [L_{n}^{*}]\geq \left({\tfrac {1}{4}}+{\tfrac {3}{8}}\right){\sqrt {n}}={\tfrac {5}{8}}{\sqrt {n}},$
• The current best lower bound[46] is
$\mathbb {E} [L_{n}^{*}]\geq ({\tfrac {5}{8}}+{\tfrac {19}{5184}}){\sqrt {n}},$
• Held and Karp[47] gave a polynomial-time algorithm that provides numerical lower bounds for $L_{n}^{*}$, and thus for $\beta (\simeq L_{n}^{*}/{\sqrt {n}})$ which seem to be good up to more or less 1%.[48] In particular, David S. Johnson[49] obtained a lower bound by computer experiment:
$L_{n}^{*}\gtrsim 0.7080{\sqrt {n}}+0.522,$
where 0.522 comes from the points near square boundary which have fewer neighbours, and Christine L. Valenzuela and Antonia J. Jones[50] obtained the following other numerical lower bound:
$L_{n}^{*}\gtrsim 0.7078{\sqrt {n}}+0.551$.
Computational complexity
The problem has been shown to be NP-hard (more precisely, it is complete for the complexity class FPNP; see function problem), and the decision problem version ("given the costs and a number x, decide whether there is a round-trip route cheaper than x") is NP-complete. The bottleneck travelling salesman problem is also NP-hard. The problem remains NP-hard even for the case when the cities are in the plane with Euclidean distances, as well as in a number of other restrictive cases. Removing the condition of visiting each city "only once" does not remove the NP-hardness, since in the planar case there is an optimal tour that visits each city only once (otherwise, by the triangle inequality, a shortcut that skips a repeated visit would not increase the tour length).
Complexity of approximation
In the general case, finding a shortest travelling salesman tour is NPO-complete.[51] If the distance measure is a metric (and thus symmetric), the problem becomes APX-complete[52] and the algorithm of Christofides and Serdyukov approximates it within 1.5.[53][54][9]
If the distances are restricted to 1 and 2 (but still are a metric) the approximation ratio becomes 8/7.[55] In the asymmetric case with triangle inequality, up until recently only logarithmic performance guarantees were known.[56] In 2018, a constant factor approximation was developed by Svensson, Tarnawski and Végh.[57] The best current algorithm, by Traub and Vygen, achieves performance ratio of $22+\varepsilon $.[58] The best known inapproximability bound is 75/74.[59]
The corresponding maximization problem of finding the longest travelling salesman tour is approximable within 63/38.[60] If the distance function is symmetric, the longest tour can be approximated within 4/3 by a deterministic algorithm[61] and within ${\tfrac {1}{25}}(33+\varepsilon )$ by a randomized algorithm.[62]
Human and animal performance
The TSP, in particular the Euclidean variant of the problem, has attracted the attention of researchers in cognitive psychology. It has been observed that humans are able to produce near-optimal solutions quickly, in a close-to-linear fashion, with performance that ranges from 1% less efficient, for graphs with 10–20 nodes, to 11% less efficient for graphs with 120 nodes.[63][64] The apparent ease with which humans accurately generate near-optimal solutions to the problem has led researchers to hypothesize that humans use one or more heuristics, with the two most popular theories arguably being the convex-hull hypothesis and the crossing-avoidance heuristic.[65][66][67] However, additional evidence suggests that human performance is quite varied, and individual differences as well as graph geometry appear to affect performance in the task.[68][69][70] Nevertheless, results suggest that computer performance on the TSP may be improved by understanding and emulating the methods used by humans for these problems,[71] and have also led to new insights into the mechanisms of human thought.[72] The first issue of the Journal of Problem Solving was devoted to the topic of human performance on TSP,[73] and a 2011 review listed dozens of papers on the subject.[72]
A 2011 study in animal cognition titled "Let the Pigeon Drive the Bus," named after the children's book Don't Let the Pigeon Drive the Bus!, examined spatial cognition in pigeons by studying their flight patterns between multiple feeders in a laboratory in relation to the travelling salesman problem. In the first experiment, pigeons were placed in the corner of a lab room and allowed to fly to nearby feeders containing peas. The researchers found that pigeons largely used proximity to determine which feeder they would select next. In the second experiment, the feeders were arranged in such a way that flying to the nearest feeder at every opportunity would be largely inefficient if the pigeons needed to visit every feeder. The results of the second experiment indicate that pigeons, while still favoring proximity-based solutions, "can plan several steps ahead along the route when the differences in travel costs between efficient and less efficient routes based on proximity become larger."[74] These results are consistent with other experiments done with non-primates, which have proven that some non-primates were able to plan complex travel routes. This suggests non-primates may possess a relatively sophisticated spatial cognitive ability.
Natural computation
When presented with a spatial configuration of food sources, the amoeboid Physarum polycephalum adapts its morphology to create an efficient path between the food sources which can also be viewed as an approximate solution to TSP.[75]
Benchmarks
For benchmarking of TSP algorithms, TSPLIB,[76] a library of sample instances of the TSP and related problems, is maintained; see the TSPLIB external reference. Many of them are lists of actual cities and layouts of actual printed circuits.
Popular culture
• Travelling Salesman, by director Timothy Lanzone, is the story of four mathematicians hired by the U.S. government to solve the most elusive problem in computer-science history: P vs. NP.[77]
• Solutions to the problem are used by mathematician Bob Bosch in a subgenre called TSP art.[78]
See also
• Canadian traveller problem
• Exact algorithm
• Route inspection problem (also known as "Chinese postman problem")
• Set TSP problem
• Seven Bridges of Königsberg
• Steiner travelling salesman problem
• Subway Challenge
• Tube Challenge
• Vehicle routing problem
• Graph exploration
• Mixed Chinese postman problem
• Arc routing
• Snow plow routing problem
Notes
1. See the TSP world tour problem which has already been solved to within 0.05% of the optimal solution.
2. "Der Handlungsreisende – wie er sein soll und was er zu tun hat, um Aufträge zu erhalten und eines glücklichen Erfolgs in seinen Geschäften gewiß zu sein – von einem alten Commis-Voyageur" (The travelling salesman – how he must be and what he should do in order to get commissions and be sure of the happy success in his business – by an old commis-voyageur)
3. A discussion of the early work of Hamilton and Kirkman can be found in Graph Theory, 1736–1936 by Biggs, Lloyd, and Wilson (Clarendon Press, 1986).
4. Cited and English translation in Schrijver (2005). Original German: "Wir bezeichnen als Botenproblem (weil diese Frage in der Praxis von jedem Postboten, übrigens auch von vielen Reisenden zu lösen ist) die Aufgabe, für endlich viele Punkte, deren paarweise Abstände bekannt sind, den kürzesten die Punkte verbindenden Weg zu finden. Dieses Problem ist natürlich stets durch endlich viele Versuche lösbar. Regeln, welche die Anzahl der Versuche unter die Anzahl der Permutationen der gegebenen Punkte herunterdrücken würden, sind nicht bekannt. Die Regel, man solle vom Ausgangspunkt erst zum nächstgelegenen Punkt, dann zu dem diesem nächstgelegenen Punkt gehen usw., liefert im allgemeinen nicht den kürzesten Weg."
5. Lawler, E. L. (1985). The Travelling Salesman Problem: A Guided Tour of Combinatorial Optimization (Repr. with corrections. ed.). John Wiley & sons. ISBN 978-0471904137.
6. Robinson, Julia (5 December 1949). "On the Hamiltonian game (a traveling salesman problem)". Project RAND. Santa Monica, CA: The RAND Corporation (RM-303). Archived from the original on 29 June 2020. Retrieved 2 May 2020.
7. A detailed treatment of the connection between Menger and Whitney as well as the growth in the study of TSP can be found in Schrijver (2005).
8. Beardwood, Halton & Hammersley (1959).
9. van Bevern, René; Slugina, Viktoriia A. (2020). "A historical note on the 3/2-approximation algorithm for the metric traveling salesman problem". Historia Mathematica. 53: 118–127. arXiv:2004.02437. doi:10.1016/j.hm.2020.04.003. S2CID 214803097.
10. Klarreich, Erica (30 January 2013). "Computer Scientists Find New Shortcuts for Infamous Traveling Salesman Problem". WIRED. Retrieved 14 June 2015.
11. Klarreich, Erica (8 October 2020). "Computer Scientists Break Traveling Salesperson Record". Quanta Magazine. Retrieved 13 October 2020.
12. Karlin, Anna R.; Klein, Nathan; Gharan, Shayan Oveis (2021), "A (slightly) improved approximation algorithm for metric TSP", in Khuller, Samir; Williams, Virginia Vassilevska (eds.), STOC '21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pp. 32–45, arXiv:2007.01409, doi:10.1145/3406325.3451009, S2CID 220347561
13. Rego, César; Gamboa, Dorabela; Glover, Fred; Osterman, Colin (2011), "Traveling salesman problem heuristics: leading methods, implementations and latest advances", European Journal of Operational Research, 211 (3): 427–441, doi:10.1016/j.ejor.2010.09.010, MR 2774420, S2CID 2856898.
14. "How Do You Fix School Bus Routes? Call MIT in Wall street Journal" (PDF).
15. Behzad, Arash; Modarres, Mohammad (2002), "New Efficient Transformation of the Generalized Traveling Salesman Problem into Traveling Salesman Problem", Proceedings of the 15th International Conference of Systems Engineering (Las Vegas)
16. Papadimitriou, C.H.; Steiglitz, K. (1998), Combinatorial optimization: algorithms and complexity, Mineola, NY: Dover, pp. 308–309.
17. Tucker, A. W. (1960), "On Directed Graphs and Integer Programs", IBM Mathematical research Project (Princeton University)
18. Dantzig, George B. (1963), Linear Programming and Extensions, Princeton, NJ: Princeton University Press, pp. 545–7, ISBN 0-691-08000-3, sixth printing, 1974.
19. Velednitsky, Mark (2017). "Short combinatorial proof that the DFJ polytope is contained in the MTZ polytope for the Asymmetric Traveling Salesman Problem". Operations Research Letters. 45 (4): 323–324. arXiv:1805.06997. doi:10.1016/j.orl.2017.04.010. S2CID 6941484.
20. Bektaş, Tolga; Gouveia, Luis (2014). "Requiem for the Miller–Tucker–Zemlin subtour elimination constraints?". European Journal of Operational Research. 236 (3): 820–832. doi:10.1016/j.ejor.2013.07.038.
21. C. E. Miller, A. W. Tucker, and R. A. Zemlin. 1960. Integer Programming Formulation of Traveling Salesman Problems. J. ACM 7, 4 (Oct. 1960), 326–329. DOI:https://doi.org/10.1145/321043.321046
22. Dantzig, G.; Fulkerson, R.; Johnson, S. (November 1954). "Solution of a Large-Scale Traveling-Salesman Problem". Journal of the Operations Research Society of America. 2 (4): 393–410. doi:10.1287/opre.2.4.393.
23. Bellman (1960), Bellman (1962), Held & Karp (1962)
24. Woeginger (2003).
25. Ambainis, Andris; Balodis, Kaspars; Iraids, Jānis; Kokainis, Martins; Prūsis, Krišjānis; Vihrovs, Jevgēnijs (2019). "Quantum Speedups for Exponential-Time Dynamic Programming Algorithms". Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms. pp. 1783–1793. doi:10.1137/1.9781611975482.107. ISBN 978-1-61197-548-2. S2CID 49743824.
26. Padberg & Rinaldi (1991).
27. Traveling Salesman Problem - Branch and Bound on YouTube. How to cut unfruitful branches using reduced rows and columns as in Hungarian matrix algorithm
28. Applegate, David; Bixby, Robert; Chvátal, Vašek; Cook, William; Helsgaun, Keld (June 2004). "Optimal Tour of Sweden". Retrieved 11 November 2020.
29. Johnson, D. S.; McGeoch, L. A. (1997). "The Traveling Salesman Problem: A Case Study in Local Optimization" (PDF). In Aarts, E. H. L.; Lenstra, J. K. (eds.). Local Search in Combinatorial Optimisation. London: John Wiley and Sons Ltd. pp. 215–310.
30. Gutin, Gregory; Yeo, Anders; Zverovich, Alexey (15 March 2002). "Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP". Discrete Applied Mathematics. 117 (1–3): 81–86. doi:10.1016/S0166-218X(01)00195-0.
31. Zverovitch, Alexei; Zhang, Weixiong; Yeo, Anders; McGeoch, Lyle A.; Gutin, Gregory; Johnson, David S. (2007), "Experimental Analysis of Heuristics for the ATSP", The Traveling Salesman Problem and Its Variations, Combinatorial Optimization, Springer, Boston, MA, pp. 445–487, CiteSeerX 10.1.1.24.2386, doi:10.1007/0-306-48213-4_10, ISBN 978-0-387-44459-8
32. Rosenkrantz, D. J.; Stearns, R. E.; Lewis, P. M. (14–16 October 1974). Approximate algorithms for the traveling salesperson problem. 15th Annual Symposium on Switching and Automata Theory (swat 1974). doi:10.1109/SWAT.1974.4.
33. Ray, S. S.; Bandyopadhyay, S.; Pal, S. K. (2007). "Genetic Operators for Combinatorial Optimization in TSP and Microarray Gene Ordering". Applied Intelligence. 26 (3): 183–195. CiteSeerX 10.1.1.151.132. doi:10.1007/s10489-006-0018-y. S2CID 8130854.
34. Kahng, A. B.; Reda, S. (2004). "Match Twice and Stitch: A New TSP Tour Construction Heuristic". Operations Research Letters. 32 (6): 499–509. doi:10.1016/j.orl.2004.04.001.
35. Alatartsev, Sergey; Augustine, Marcus; Ortmeier, Frank (2 June 2013). "Constricting Insertion Heuristic for Traveling Salesman Problem with Neighborhoods" (PDF). Proceedings of the International Conference on Automated Planning and Scheduling. 23: 2–10. doi:10.1609/icaps.v23i1.13539. ISSN 2334-0843. S2CID 18691261.
36. Dorigo, Marco; Gambardella, Luca Maria (1997). "Ant Colonies for the Traveling Salesman Problem". Biosystems. 43 (2): 73–81. CiteSeerX 10.1.1.54.7734. doi:10.1016/S0303-2647(97)01708-5. PMID 9231906. S2CID 8243011.
37. Quintas, L. V.; Supnick, Fred (1965). "On some properties of shortest Hamiltonian circuits". The American Mathematical Monthly. 72 (9): 977–980. doi:10.2307/2313333. JSTOR 2313333. MR 0188872.
38. Papadimitriou (1977).
39. Allender et al. (2007).
40. Larson & Odoni (1981).
41. Arora (1998).
42. Jonker, Roy; Volgenant, Ton (1983). "Transforming asymmetric into symmetric traveling salesman problems". Operations Research Letters. 2 (161–163): 1983. doi:10.1016/0167-6377(83)90048-2.
43. Arlotto, Alessandro; Steele, J. Michael (2016), "Beardwood–Halton–Hammersley theorem for stationary ergodic sequences: a counterexample", The Annals of Applied Probability, 26 (4): 2141–2168, arXiv:1307.0221, doi:10.1214/15-AAP1142, S2CID 8904077
44. Few, L. (1955). "The shortest path and the shortest road through n points". Mathematika. 2 (2): 141–144. doi:10.1112/s0025579300000784.
45. Fiechter, C.-N. (1994). "A parallel tabu search algorithm for large traveling salesman problems". Disc. Applied Math. 51 (3): 243–267. doi:10.1016/0166-218X(92)00033-I.
46. Steinerberger (2015).
47. Held, M.; Karp, R.M. (1970). "The Traveling Salesman Problem and Minimum Spanning Trees". Operations Research. 18 (6): 1138–1162. doi:10.1287/opre.18.6.1138.
48. Goemans, M.; Bertsimas, D. (1991). "Probabilistic analysis of the Held and Karp lower bound for the Euclidean traveling salesman problem". Mathematics of Operations Research. 16 (1): 72–89. doi:10.1287/moor.16.1.72.
49. "error". about.att.com.
50. Christine L. Valenzuela and Antonia J. Jones Archived 25 October 2007 at the Wayback Machine
51. Orponen, P.; Mannila, H. (1987). On approximation preserving reductions: Complete problems and robust measures' (Report). Department of Computer Science, University of Helsinki. Technical Report C-1987–28.
52. Papadimitriou & Yannakakis (1993).
53. Christofides (1976).
54. Serdyukov, Anatoliy I. (1978), "О некоторых экстремальных обходах в графах" [On some extremal walks in graphs] (PDF), Upravlyaemye Sistemy (in Russian), 17: 76–79
55. Berman & Karpinski (2006).
56. Kaplan et al. (2004).
57. Svensson, Ola; Tarnawski, Jakub; Végh, László A. (2018). "A constant-factor approximation algorithm for the asymmetric traveling salesman problem". Proceedings of the 50th Annual ACM SIGACT Symposium on Theory of Computing. Stoc 2018. Los Angeles, CA, USA: ACM Press. pp. 204–213. doi:10.1145/3188745.3188824. ISBN 978-1-4503-5559-9. S2CID 12391033.
58. Traub, Vera; Vygen, Jens (8 June 2020). "An improved approximation algorithm for ATSP". Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing. Stoc 2020. Chicago, IL: ACM. pp. 1–13. arXiv:1912.00670. doi:10.1145/3357713.3384233. ISBN 978-1-4503-6979-4. S2CID 208527125.
59. Karpinski, Lampis & Schmied (2015).
60. Kosaraju, Park & Stein (1994).
61. Serdyukov (1984).
62. Hassin & Rubinstein (2000).
63. Macgregor, J. N.; Ormerod, T. (June 1996), "Human performance on the traveling salesman problem", Perception & Psychophysics, 58 (4): 527–539, doi:10.3758/BF03213088, PMID 8934685.
64. Dry, Matthew; Lee, Michael D.; Vickers, Douglas; Hughes, Peter (2006). "Human Performance on Visually Presented Traveling Salesperson Problems with Varying Numbers of Nodes". The Journal of Problem Solving. 1 (1). CiteSeerX 10.1.1.360.9763. doi:10.7771/1932-6246.1004. ISSN 1932-6246.
65. Rooij, Iris Van; Stege, Ulrike; Schactman, Alissa (1 March 2003). "Convex hull and tour crossings in the Euclidean traveling salesperson problem: Implications for human performance studies". Memory & Cognition. 31 (2): 215–220. CiteSeerX 10.1.1.12.6117. doi:10.3758/bf03194380. ISSN 0090-502X. PMID 12749463. S2CID 18989303.
66. MacGregor, James N.; Chu, Yun (2011). "Human Performance on the Traveling Salesman and Related Problems: A Review". The Journal of Problem Solving. 3 (2). doi:10.7771/1932-6246.1090. ISSN 1932-6246.
67. MacGregor, James N.; Chronicle, Edward P.; Ormerod, Thomas C. (1 March 2004). "Convex hull or crossing avoidance? Solution heuristics in the traveling salesperson problem". Memory & Cognition. 32 (2): 260–270. doi:10.3758/bf03196857. ISSN 0090-502X. PMID 15190718.
68. Vickers, Douglas; Mayo, Therese; Heitmann, Megan; Lee, Michael D; Hughes, Peter (2004). "Intelligence and individual differences in performance on three types of visually presented optimisation problems". Personality and Individual Differences. 36 (5): 1059–1071. doi:10.1016/s0191-8869(03)00200-9.
69. Kyritsis, Markos; Gulliver, Stephen R.; Feredoes, Eva (12 June 2017). "Acknowledging crossing-avoidance heuristic violations when solving the Euclidean travelling salesperson problem". Psychological Research. 82 (5): 997–1009. doi:10.1007/s00426-017-0881-7. ISSN 0340-0727. PMID 28608230. S2CID 3959429.
70. Kyritsis, Markos; Blathras, George; Gulliver, Stephen; Varela, Vasiliki-Alexia (11 January 2017). "Sense of direction and conscientiousness as predictors of performance in the Euclidean travelling salesman problem". Heliyon. 3 (11): e00461. Bibcode:2017Heliy...300461K. doi:10.1016/j.heliyon.2017.e00461. PMC 5727545. PMID 29264418.
71. Kyritsis, Markos; Gulliver, Stephen R.; Feredoes, Eva; Din, Shahab Ud (December 2018). "Human behaviour in the Euclidean Travelling Salesperson Problem: Computational modelling of heuristics and figural effects". Cognitive Systems Research. 52: 387–399. doi:10.1016/j.cogsys.2018.07.027. S2CID 53761995.
72. MacGregor, James N.; Chu, Yun (2011), "Human performance on the traveling salesman and related problems: A review", Journal of Problem Solving, 3 (2), doi:10.7771/1932-6246.1090.
73. Journal of Problem Solving 1(1), 2006, retrieved 2014-06-06.
74. Gibson, Brett; Wilkinson, Matthew; Kelly, Debbie (1 May 2012). "Let the pigeon drive the bus: pigeons can plan future routes in a room". Animal Cognition. 15 (3): 379–391. doi:10.1007/s10071-011-0463-9. ISSN 1435-9456. PMID 21965161. S2CID 14994429.
75. Jones, Jeff; Adamatzky, Andrew (2014), "Computation of the travelling salesman problem by a shrinking blob" (PDF), Natural Computing: 2, 13, arXiv:1303.4969, Bibcode:2013arXiv1303.4969J
76. "TSPLIB". comopt.ifi.uni-heidelberg.de. Retrieved 10 October 2020.
77. Geere, Duncan (26 April 2012). "'Travelling Salesman' movie considers the repercussions if P equals NP". Wired UK. Retrieved 26 April 2012.
78. When the Mona Lisa is NP-Hard, by Evelyn Lamb, Scientific American, April 2015
References
• Applegate, D. L.; Bixby, R. M.; Chvátal, V.; Cook, W. J. (2006), The Traveling Salesman Problem, Princeton University Press, ISBN 978-0-691-12993-8.
• Allender, Eric; Bürgisser, Peter; Kjeldgaard-Pedersen, Johan; Mitersen, Peter Bro (2007), "On the Complexity of Numerical Analysis" (PDF), SIAM J. Comput., 38 (5): 1987–2006, CiteSeerX 10.1.1.167.5495, doi:10.1137/070697926.
• Arora, Sanjeev (1998), "Polynomial time approximation schemes for Euclidean traveling salesman and other geometric problems" (PDF), Journal of the ACM, 45 (5): 753–782, doi:10.1145/290179.290180, MR 1668147, S2CID 3023351.
• Beardwood, J.; Halton, J.H.; Hammersley, J.M. (October 1959), "The Shortest Path Through Many Points", Proceedings of the Cambridge Philosophical Society, 55 (4): 299–327, Bibcode:1959PCPS...55..299B, doi:10.1017/s0305004100034095, ISSN 0305-0041, S2CID 122062088.
• Bellman, R. (1960), "Combinatorial Processes and Dynamic Programming", in Bellman, R.; Hall, M. Jr. (eds.), Combinatorial Analysis, Proceedings of Symposia in Applied Mathematics 10, American Mathematical Society, pp. 217–249.
• Bellman, R. (1962), "Dynamic Programming Treatment of the Travelling Salesman Problem", J. Assoc. Comput. Mach., 9: 61–63, doi:10.1145/321105.321111, S2CID 15649582.
• Berman, Piotr; Karpinski, Marek (2006), "8/7-approximation algorithm for (1,2)-TSP", Proc. 17th ACM-SIAM Symposium on Discrete Algorithms (SODA '06), pp. 641–648, CiteSeerX 10.1.1.430.2224, doi:10.1145/1109557.1109627, ISBN 978-0898716054, S2CID 9054176, ECCC TR05-069.
• Christofides, N. (1976), Worst-case analysis of a new heuristic for the travelling salesman problem, Technical Report 388, Graduate School of Industrial Administration, Carnegie-Mellon University, Pittsburgh.
• Hassin, R.; Rubinstein, S. (2000), "Better approximations for max TSP", Information Processing Letters, 75 (4): 181–186, CiteSeerX 10.1.1.35.7209, doi:10.1016/S0020-0190(00)00097-1.
• Held, M.; Karp, R. M. (1962), "A Dynamic Programming Approach to Sequencing Problems", Journal of the Society for Industrial and Applied Mathematics, 10 (1): 196–210, doi:10.1137/0110015.
• Kaplan, H.; Lewenstein, L.; Shafrir, N.; Sviridenko, M. (2004), "Approximation Algorithms for Asymmetric TSP by Decomposing Directed Regular Multigraphs", In Proc. 44th IEEE Symp. on Foundations of Comput. Sci, pp. 56–65.
• Karpinski, M.; Lampis, M.; Schmied, R. (2015), "New Inapproximability bounds for TSP", Journal of Computer and System Sciences, 81 (8): 1665–1677, arXiv:1303.6437, doi:10.1016/j.jcss.2015.06.003
• Kosaraju, S. R.; Park, J. K.; Stein, C. (1994), "Long tours and short superstrings", Proc. 35th Ann. IEEE Symp. on Foundations of Comput. Sci, IEEE Computer Society, pp. 166–177.
• Larson, Richard C.; Odoni, Amedeo R. (1981), "6.4.7: Applications of Network Models § Routing Problems §§ Euclidean TSP", Urban Operations Research, Prentice-Hall, ISBN 9780139394478, OCLC 6331426.
• Padberg, M.; Rinaldi, G. (1991), "A Branch-and-Cut Algorithm for the Resolution of Large-Scale Symmetric Traveling Salesman Problems", SIAM Review, 33: 60–100, doi:10.1137/1033004, S2CID 18516138.
• Papadimitriou, Christos H. (1977), "The Euclidean traveling salesman problem is NP-complete", Theoretical Computer Science, 4 (3): 237–244, doi:10.1016/0304-3975(77)90012-3, MR 0455550.
• Papadimitriou, C. H.; Yannakakis, M. (1993), "The traveling salesman problem with distances one and two", Math. Oper. Res., 18: 1–11, doi:10.1287/moor.18.1.1.
• Schrijver, Alexander (2005). "On the history of combinatorial optimization (till 1960)". In K. Aardal; G.L. Nemhauser; R. Weismantel (eds.). Handbook of Discrete Optimization (PDF). Amsterdam: Elsevier. pp. 1–68.
• Serdyukov, A. I. (1984), "An algorithm with an estimate for the traveling salesman problem of the maximum", Upravlyaemye Sistemy, 25: 80–86.
• Steinerberger, Stefan (2015), "New Bounds for the Traveling Salesman Constant", Advances in Applied Probability, 47 (1): 27–36, arXiv:1311.6338, Bibcode:2013arXiv1311.6338S, doi:10.1239/aap/1427814579, S2CID 119293287.
• Woeginger, G.J. (2003), "Exact Algorithms for NP-Hard Problems: A Survey", Combinatorial Optimization – Eureka, You Shrink!, Lecture Notes in Computer Science, vol. 2570, Springer, pp. 185–207.
Further reading
• Adleman, Leonard (1994), "Molecular Computation of Solutions To Combinatorial Problems" (PDF), Science, 266 (5187): 1021–4, Bibcode:1994Sci...266.1021A, CiteSeerX 10.1.1.54.2565, doi:10.1126/science.7973651, PMID 7973651, archived from the original (PDF) on 6 February 2005
• Babin, Gilbert; Deneault, Stéphanie; Laportey, Gilbert (2005), "Improvements to the Or-opt Heuristic for the Symmetric Traveling Salesman Problem", The Journal of the Operational Research Society, Cahiers du GERAD, Montreal: Group for Research in Decision Analysis, G-2005-02 (3): 402–407, CiteSeerX 10.1.1.89.9953, JSTOR 4622707
• Cook, William (2012). In Pursuit of the Traveling Salesman: Mathematics at the Limits of Computation. Princeton University Press. ISBN 9780691152707.
• Cook, William; Espinoza, Daniel; Goycoolea, Marcos (2007), "Computing with domino-parity inequalities for the TSP", INFORMS Journal on Computing, 19 (3): 356–365, doi:10.1287/ijoc.1060.0204
• Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (31 July 2009). "35.2: The traveling-salesman problem". Introduction to Algorithms (3rd ed.). MIT Press. pp. 1027–1033. ISBN 9780262033848.
• Dantzig, G. B.; Fulkerson, R.; Johnson, S. M. (1954), "Solution of a large-scale traveling salesman problem", Operations Research, 2 (4): 393–410, doi:10.1287/opre.2.4.393, JSTOR 166695, S2CID 44960960
• Garey, Michael R.; Johnson, David S. (1979). "A2.3: ND22–24". Computers and Intractability: A Guide to the Theory of NP-completeness. W. H. Freeman. pp. 211–212. ISBN 9780716710448.
• Goldberg, D. E. (1989), Genetic Algorithms in Search, Optimization & Machine Learning, Reading, MA: Addison-Wesley, Bibcode:1989gaso.book.....G, ISBN 978-0-201-15767-3
• Gutin, G.; Yeo, A.; Zverovich, A. (15 March 2002). "Traveling salesman should not be greedy: domination analysis of greedy-type heuristics for the TSP". Discrete Applied Mathematics. 117 (1–3): 81–86. doi:10.1016/S0166-218X(01)00195-0. ISSN 0166-218X.
• Gutin, G.; Punnen, A. P. (18 May 2007). The Traveling Salesman Problem and Its Variations. Springer US. ISBN 9780387444598.
• Johnson, D. S.; McGeoch, L. A. (1997), "The Traveling Salesman Problem: A Case Study in Local Optimization", in Aarts, E. H. L.; Lenstra, J. K. (eds.), Local Search in Combinatorial Optimisation (PDF), John Wiley and Sons Ltd., pp. 215–310
• Lawler, E. L.; Shmoys, D. B.; Kan, A. H. G. Rinnooy; Lenstra, J. K. (1985). The Traveling Salesman Problem. John Wiley & Sons, Incorporated. ISBN 9780471904137.
• MacGregor, J. N.; Ormerod, T. (1996), "Human performance on the traveling salesman problem", Perception & Psychophysics, 58 (4): 527–539, doi:10.3758/BF03213088, PMID 8934685, S2CID 38355042
• Medvedev, Andrei; Lee, Michael; Butavicius, Marcus; Vickers, Douglas (1 February 2001). "Human performance on visually presented Traveling Salesman problems". Psychological Research. 65 (1): 34–45. doi:10.1007/s004260000031. ISSN 1430-2772. PMID 11505612. S2CID 8986203.
• Mitchell, J. S. B. (1999), "Guillotine subdivisions approximate polygonal subdivisions: A simple polynomial-time approximation scheme for geometric TSP, k-MST, and related problems", SIAM Journal on Computing, 28 (4): 1298–1309, doi:10.1137/S0097539796309764
• Rao, S.; Smith, W. (1998). "Approximating geometrical graphs via 'spanners' and 'banyans'". STOC '98: Proceedings of the thirtieth annual ACM symposium on Theory of computing. pp. 540–550. CiteSeerX 10.1.1.51.8676.
• Rosenkrantz, Daniel J.; Stearns, Richard E.; Lewis, Philip M., II (1977). "An Analysis of Several Heuristics for the Traveling Salesman Problem". SIAM Journal on Computing. SIAM (Society for Industrial and Applied Mathematics). 6 (5): 563–581. doi:10.1137/0206041. S2CID 14764079.
• Walshaw, Chris (2000), A Multilevel Approach to the Travelling Salesman Problem, CMS Press
• Walshaw, Chris (2001), A Multilevel Lin-Kernighan-Helsgaun Algorithm for the Travelling Salesman Problem, CMS Press
External links
Wikimedia Commons has media related to Traveling salesman problem.
• Traveling Salesman Problem at the Wayback Machine (archived 17 December 2013) at University of Waterloo
• TSPLIB, Sample instances for the TSP at the University of Heidelberg
• Traveling Salesman Problem by Jon McLoone at the Wolfram Demonstrations Project
• TSP visualization tool
Joséphine Guidy Wandja
Joséphine Guidy Wandja (born 1945, also Guidy-Wandja) is an Ivorian mathematician.[1] She is the first African woman with a PhD in mathematics.
Joséphine Guidy Wandja
Born: Joséphine Wandja, 1945, Ivory Coast
Other names: Joséphine Guidy-Wandja
Citizenship: Ivory Coast
Occupation: Mathematics lecturer
Academic background
Education: Lycée Jules-Ferry
Alma mater: Pierre and Marie Curie University; University of Abidjan
Academic work
Discipline: Mathematics
Institutions: Paris Diderot University (1970–71); University of Abidjan (1971–?)
Early life
She moved to France aged 14.[2] She attended the Lycée Jules-Ferry in Paris, and later the Pierre and Marie Curie University.[2] Her master's degree thesis was entitled Sous les courbes fermées convexes du plan et le théorème des quatre sommets (Closed convex curves in the plane and the four-vertex theorem).[3] Whilst working in Paris in the late 1960s she was advised by René Thom, Henri Cartan and Paulette Liberman.[4] She studied for a PhD at the University of Abidjan, becoming the first African woman to get a PhD in mathematics.[5][6]
Career
In 1969, she worked at the Lycée Jacques Amyot in Melun, before working for a year at the Paris Diderot University.[2] In 1971, she joined the University of Abidjan, as a mathematics lecturer.[2] In doing so, she became the first African female university mathematics professor.[6] In 1983, she was appointed the president of the International Committee on Mathematics in Developing Countries (ICOMIDC). The organisation was set up during the International Mathematical Union (IMU) conference in Warsaw, Poland, but without the IMU's knowledge.[7] In 1986, she wrote a humorous 24-page mathematical comic book Yao crack en maths.[5][8] In 1985, she organised an ICOMIDC conference in Yamoussoukro, Ivory Coast.[7]
She is an officer of the Ivorian Order of Merit of National Education, and the French Ordre des Palmes Académiques.[2][6]
Publications
• Guidy Wandja, Joséphine, Yao crack en maths (in French), Nouvelles Éditions africaines, 1985. ISBN 2723607356
References
1. O'Connor, John J.; Robertson, Edmund F., "Joséphine Guidy-Wandja", MacTutor History of Mathematics Archive, University of St Andrews
2. "Interview de Joséphine Guidy Wandja". Amina. July 1986. pp. 50–53. Archived from the original on 16 April 2016. Retrieved 16 March 2019.
3. Gerdes, Paulus; Djebbar, Ahmed (2011). History of Mathematics in Africa: 2000-2011. Lulu.com. ISBN 9781105141003. Retrieved 16 March 2019.
4. "AMINA Guidy Wandja". aflit.arts.uwa.edu.au. Retrieved 8 October 2019.
5. Cassiau-Haurie, Christophe (20 February 2008). "Les femmes peinent à percer les bulles" (in French). Africultures. Retrieved 16 March 2019.
6. "Joséphine Guidy-Wandja". Committee for Women in Mathematics. Retrieved 16 March 2019.
7. Lehto, Olli (6 December 2012). Mathematics Without Borders: A History of the International Mathematical Union. Springer Science+Business Media. p. 268. ISBN 9781461206132. Retrieved 16 March 2019.
8. "Littérature pour enfant : "Yao crack en math", une bande dessinée qui démystifie les maths". Abidijan.net (in French). 10 February 2016. Retrieved 16 March 2018.
Wang tile
Wang tiles (or Wang dominoes), first proposed by mathematician, logician, and philosopher Hao Wang in 1961, are a class of formal systems. They are modelled visually by square tiles with a color on each side. A set of such tiles is selected, and copies of the tiles are arranged side by side with matching colors, without rotating or reflecting them.
The basic question about a set of Wang tiles is whether it can tile the plane or not, i.e., whether an entire infinite plane can be filled this way. The next question is whether this can be done in a periodic pattern.
Domino problem
In 1961, Wang conjectured that if a finite set of Wang tiles can tile the plane, then there also exists a periodic tiling, which, mathematically, is a tiling that is invariant under translations by vectors in a 2-dimensional lattice. This can be likened to the periodic tiling in a wallpaper pattern, where the overall pattern is a repetition of some smaller pattern. He also observed that this conjecture would imply the existence of an algorithm to decide whether a given finite set of Wang tiles can tile the plane.[1][2] The idea of constraining adjacent tiles to match each other occurs in the game of dominoes, so Wang tiles are also known as Wang dominoes.[3] The algorithmic problem of determining whether a tile set can tile the plane became known as the domino problem.[4]
According to Wang's student, Robert Berger,[4]
The Domino Problem deals with the class of all domino sets. It consists of deciding, for each domino set, whether or not it is solvable. We say that the Domino Problem is decidable or undecidable according to whether there exists or does not exist an algorithm which, given the specifications of an arbitrary domino set, will decide whether or not the set is solvable.
In other words, the domino problem asks whether there is an effective procedure that correctly settles the problem for all given domino sets.
In 1966, Berger solved the domino problem in the negative. He proved that no algorithm for the problem can exist, by showing how to translate any Turing machine into a set of Wang tiles that tiles the plane if and only if the Turing machine does not halt. The undecidability of the halting problem (the problem of testing whether a Turing machine eventually halts) then implies the undecidability of Wang's tiling problem.[4]
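Wang's observation can be made concrete: searching for a tiling of a k × k torus, for growing k, is a semi-decision procedure for periodic tileability, since a toroidal tiling repeats to cover the whole plane. The sketch below is purely illustrative (the tile sets are made-up examples and the backtracking search is exponential); Berger's theorem says no such search can be completed into a full decision procedure.

```python
# Semi-decision sketch for the domino problem: tiles are (N, E, S, W)
# colour tuples, and a valid k-by-k toroidal tiling (edges wrap around)
# repeats to tile the whole plane periodically.
def tiles_torus(tiles, k):
    grid = [[None] * k for _ in range(k)]

    def consistent():
        # Full toroidal check: every shared edge must match in colour.
        for r in range(k):
            for c in range(k):
                _, e, s, _ = grid[r][c]
                if e != grid[r][(c + 1) % k][3]:   # east vs neighbour's west
                    return False
                if s != grid[(r + 1) % k][c][0]:   # south vs neighbour's north
                    return False
        return True

    def place(i):
        if i == k * k:
            return consistent()
        r, c = divmod(i, k)
        for t in tiles:
            # Prune against the already-placed left and upper neighbours.
            if c > 0 and grid[r][c - 1][1] != t[3]:
                continue
            if r > 0 and grid[r - 1][c][2] != t[0]:
                continue
            grid[r][c] = t
            if place(i + 1):
                return True
            grid[r][c] = None
        return False

    return place(0)

print(tiles_torus([(0, 1, 0, 1)], 1))                          # True
print(any(tiles_torus([(0, 1, 0, 2)], k) for k in (1, 2, 3)))  # False
```

Running k = 1, 2, 3, … halts whenever some periodic tiling exists; an aperiodic set makes the loop run forever, which is why Wang's conjecture, had it been true, would have yielded a decision algorithm.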
Aperiodic sets of tiles
Combining Berger's undecidability result with Wang's observation shows that there must exist a finite set of Wang tiles that tiles the plane, but only aperiodically. This is similar to a Penrose tiling, or the arrangement of atoms in a quasicrystal. Although Berger's original set contained 20,426 tiles, he conjectured that smaller sets would work, including subsets of his set, and in his unpublished Ph.D. thesis, he reduced the number of tiles to 104. In later years, ever smaller sets were found.[5][6][7][8] For example, a set of 13 aperiodic tiles was published by Karel Culik II in 1996.[6]
The smallest set of aperiodic tiles was discovered by Emmanuel Jeandel and Michael Rao in 2015, with 11 tiles and 4 colors. They used an exhaustive computer search to prove that 10 tiles or 3 colors are insufficient to force aperiodicity.[8] This set, shown above in the title image, can be examined more closely at File:Wang 11 tiles.svg.
Generalizations
Wang tiles can be generalized in various ways, all of which are also undecidable in the above sense. For example, Wang cubes are equal-sized cubes with colored faces and side colors can be matched on any polygonal tessellation. Culik and Kari have demonstrated aperiodic sets of Wang cubes.[9] Winfree et al. have demonstrated the feasibility of creating molecular "tiles" made from DNA (deoxyribonucleic acid) that can act as Wang tiles.[10] Mittal et al. have shown that these tiles can also be composed of peptide nucleic acid (PNA), a stable artificial mimic of DNA.[11]
Applications
Wang tiles have been used for procedural synthesis of textures, heightfields, and other large and nonrepeating bidimensional data sets; a small set of precomputed or hand-made source tiles can be assembled very cheaply without too obvious repetitions and without periodicity. In this case, traditional aperiodic tilings would show their very regular structure; much less constrained sets that guarantee at least two tile choices for any two given side colors are common because tileability is easily ensured and each tile can be selected pseudorandomly.[12][13][14][15][16]
Wang tiles have also been used in cellular automata theory decidability proofs.[17]
In popular culture
The short story Wang's Carpets, later expanded to the novel Diaspora, by Greg Egan, postulates a universe, complete with resident organisms and intelligent beings, embodied as Wang tiles implemented by patterns of complex molecules.[18]
See also
• Edge-matching puzzle
References
1. Wang, Hao (1961), "Proving theorems by pattern recognition—II", Bell System Technical Journal, 40 (1): 1–41, doi:10.1002/j.1538-7305.1961.tb03975.x. Wang proposes his tiles, and conjectures that there are no aperiodic sets.
2. Wang, Hao (November 1965), "Games, logic and computers", Scientific American, 213 (5): 98–106, doi:10.1038/scientificamerican1165-98. Presents the domino problem for a popular audience.
3. Renz, Peter (1981), "Mathematical proof: What it is and what it ought to be", The Two-Year College Mathematics Journal, 12 (2): 83–103, doi:10.2307/3027370, JSTOR 3027370.
4. Berger, Robert (1966), "The undecidability of the domino problem", Memoirs of the American Mathematical Society, 66: 72, MR 0216954. Berger coins the term "Wang tiles", and demonstrates the first aperiodic set of them.
5. Robinson, Raphael M. (1971), "Undecidability and nonperiodicity for tilings of the plane", Inventiones Mathematicae, 12 (3): 177–209, Bibcode:1971InMat..12..177R, doi:10.1007/bf01418780, MR 0297572, S2CID 14259496.
6. Culik, Karel, II (1996), "An aperiodic set of 13 Wang tiles", Discrete Mathematics, 160 (1–3): 245–251, doi:10.1016/S0012-365X(96)00118-5, MR 1417576. (Showed an aperiodic set of 13 tiles with 5 colors.)
7. Kari, Jarkko (1996), "A small aperiodic set of Wang tiles", Discrete Mathematics, 160 (1–3): 259–264, doi:10.1016/0012-365X(95)00120-L, MR 1417578.
8. Jeandel, Emmanuel; Rao, Michaël (2021), "An aperiodic set of 11 Wang tiles", Advances in Combinatorics: 1:1–1:37, arXiv:1506.06492, doi:10.19086/aic.18614, MR 4210631, S2CID 13261182. (Showed an aperiodic set of 11 tiles with 4 colors, and proved that fewer tiles or fewer colors cannot be aperiodic.)
9. Culik, Karel, II; Kari, Jarkko (1995), "An aperiodic set of Wang cubes", Journal of Universal Computer Science, 1 (10): 675–686, doi:10.1007/978-3-642-80350-5_57, MR 1392428.
10. Winfree, E.; Liu, F.; Wenzler, L.A.; Seeman, N.C. (1998), "Design and self-assembly of two-dimensional DNA crystals", Nature, 394 (6693): 539–544, Bibcode:1998Natur.394..539W, doi:10.1038/28998, PMID 9707114, S2CID 4385579.
11. Lukeman, P.; Seeman, N.; Mittal, A. (2002), "Hybrid PNA/DNA nanosystems", 1st International Conference on Nanoscale/Molecular Mechanics (N-M2-I), Outrigger Wailea Resort, Maui, Hawaii.
12. Stam, Jos (1997), Aperiodic Texture Mapping (PDF). Introduces the idea of using Wang tiles for texture variation, with a deterministic substitution system.
13. Neyret, Fabrice; Cani, Marie-Paule (1999), "Pattern-Based Texturing Revisited", Proceedings of the 26th annual conference on Computer graphics and interactive techniques - SIGGRAPH '99 (PDF), Los Angeles, United States: ACM, pp. 235–242, doi:10.1145/311535.311561, ISBN 0201485605, S2CID 11247905. Introduces stochastic tiling.
14. Cohen, Michael F.; Shade, Jonathan; Hiller, Stefan; Deussen, Oliver (2003), "Wang tiles for image and texture generation", ACM SIGGRAPH 2003 Papers on - SIGGRAPH '03 (PDF), New York, NY, USA: ACM, pp. 287–294, doi:10.1145/1201775.882265, ISBN 1-58113-709-5, S2CID 207162102, archived from the original (PDF) on 2006-03-18.
15. Wei, Li-Yi (2004), "Tile-based texture mapping on graphics hardware", Proceedings of the ACM SIGGRAPH/EUROGRAPHICS Conference on Graphics Hardware (HWWS '04), New York, NY, USA: ACM, pp. 55–63, doi:10.1145/1058129.1058138, ISBN 3-905673-15-0, S2CID 53224612. Applies Wang Tiles for real-time texturing on a GPU.
16. Kopf, Johannes; Cohen-Or, Daniel; Deussen, Oliver; Lischinski, Dani (2006), "Recursive Wang tiles for real-time blue noise", ACM SIGGRAPH 2006 Papers on - SIGGRAPH '06, New York, NY, USA: ACM, pp. 509–518, doi:10.1145/1179352.1141916, ISBN 1-59593-364-6, S2CID 11007853. Shows advanced applications.
17. Kari, Jarkko (1990), "Reversibility of 2D cellular automata is undecidable", Cellular automata: theory and experiment (Los Alamos, NM, 1989), Physica D: Nonlinear Phenomena, vol. 45, pp. 379–385, Bibcode:1990PhyD...45..379K, doi:10.1016/0167-2789(90)90195-U, MR 1094882.
18. Burnham, Karen (2014), Greg Egan, Modern Masters of Science Fiction, University of Illinois Press, pp. 72–73, ISBN 9780252096297.
Further reading
• Grünbaum, Branko; Shephard, G. C. (1987), Tilings and Patterns, New York: W. H. Freeman, ISBN 0-7167-1193-1.
External links
• Steven Dutch's page including many pictures of aperiodic tilings
• Animated demonstration of a naïve Wang tiling method - requires Javascript and HTML5
Wang Dongming (academic)
Wang Dongming (Chinese: 王东明; born July 1961 in Anhui, China) is Research Director (Directeur de Recherche) at the French National Center for Scientific Research (Centre National de la Recherche Scientifique, CNRS).[1] He was awarded Wen-tsün Wu Chair Professor at the University of Science and Technology of China in 2001,[2] Changjiang Scholar of the Chinese Ministry of Education in 2005,[3] and Bagui Scholar of Guangxi Zhuang Autonomous Region, China in 2014.[4] He was elected Member of the Academia Europaea in 2017.[5]
Wang worked on algorithmic elimination theory, geometric reasoning and knowledge management, and applications of symbolic computation to qualitative analysis of differential equations. In 1993 he proposed an elimination method for triangular decomposition of polynomial systems,[6] which has been referred to as Wang's method and compared with three other methods.[7] Later on he introduced the concepts of regular systems and simple systems[8] and devised algorithms for regular and simple triangular decompositions.[9][10] He also developed a package, called Epsilon,[11] which implements his methods.[12][13]
Wang popularized the use of methods and tools of computer algebra for symbolic analysis of stability and bifurcation of differential and biological systems. He constructed a class of cubic differential systems with six small-amplitude limit cycles[14] and rediscovered the incompleteness of Kukles' center conditions of 1944,[15] which stimulated the study of Kukles' system in a hundred papers.[16] Since 2004 he has been involved in research projects on geometric knowledge management and discovery. With co-workers he developed an algorithmic approach for automated discovery of geometric theorems from images of diagrams.[17]
Wang served as General Chair of ISSAC 2007 and is founding Editor-in-Chief and Managing Editor of Mathematics in Computer Science[18] and Executive Associate Editor-in-Chief of SCIENCE CHINA Information Sciences.[19]
Currently he works as Professor at Beihang University and Guangxi University for Nationalities, China, on leave (détaché) from CNRS.
References
1. "POLSYS Team". Polsys.lip6.fr. Retrieved 2017-09-07.
2. "中国科学技术大学人力资源部". Hr.ustc.edu.cn. Retrieved 2017-09-07.
3. "第六批特聘教授名单_长江学者名单_数据中心_中国学位与研究生教育信息网". Cdgdc.edu.cn. 2017-03-13. Retrieved 2017-09-07.
4. "广西日报数字报刊". Gxrb.gxnews.com.cn. Retrieved 2017-09-07.
5. "Academy of Europe: Wang Dongming". Ae-info.org. Retrieved 2017-09-07.
6. Wang, Dongming (1993). "An elimination method for polynomial systems". Journal of Symbolic Computation. 16 (2): 83–114. doi:10.1006/jsco.1993.1035.
7. Aubry, Philippe; Moreno Maza, Marc (1999). "Triangular sets for solving polynomial systems: a comparative implementation of four methods". Journal of Symbolic Computation. 28 (1): 125–154. doi:10.1006/jsco.1999.0270.
8. Dellière, Stéphane. "D.M.Wang simple systems and dynamic constructible closure". LACO – Rapport n° 2000–16. Retrieved 4 September 2017.
9. Wang, Dongming (1998). "Decomposing polynomial systems into simple systems". Journal of Symbolic Computation. 25 (3): 295–314. doi:10.1006/jsco.1997.0177.
10. Wang, Dongming (2000). "Computing triangular systems and regular systems". Journal of Symbolic Computation. 30 (2): 221–236. doi:10.1006/jsco.1999.0355.
11. "Epsilon 0.618". Wang.cc4cm.org. Retrieved 2017-09-07.
12. Wang, Dongming (2001). Elimination Methods. Wien New York: Springer-Verlag.
13. Wang, Dongming (2004). Elimination Practice: Software Tools and Applications. London: Imperial College Press.
14. Wang, Dongming (1990). "A class of cubic differential systems with 6-tuple focus". Journal of Differential Equations. 87 (2): 305–315. Bibcode:1990JDE....87..305D. doi:10.1016/0022-0396(90)90004-9.
15. Jin, Xiaofan; Wang, Dongming (1990). "On the conditions of Kukles for the existence of a centre". Bulletin of the London Mathematical Society. 22 (1): 1–4. doi:10.1112/blms/22.1.1.
16. Christopher, C. J.; Lloyd, N. G. (1990). "On the paper of Jin and Wang concerning the conditions for a centre in certain cubic systems". Bulletin of the London Mathematical Society. 22 (1): 5–12. doi:10.1112/blms/22.1.5.
17. Chen, Xiaoyu; Song, Dan; Wang, Dongming (2015). "Automated generation of geometric theorems from images of diagrams". Annals of Mathematics and Artificial Intelligence. 74 (3–4): 333–358. arXiv:1406.1638. doi:10.1007/s10472-014-9433-7.
18. "Mathematics in Computer Science – incl. option to publish open access". Springer.com. Retrieved 2017-09-07.
19. "Science China Information Sciences". Springer.com. Retrieved 2017-09-07.
External links
• Wang Dongming at the Mathematics Genealogy Project
• Wang's personal website
Wang Yuan (mathematician)
Wang Yuan (Chinese: 王元; pinyin: Wáng Yuán; 29 April 1930 – 14 May 2021) was a Chinese mathematician and writer known for his contributions to the Goldbach conjecture. He was a president of the Chinese Mathematical Society and head of the Institute of Mathematics, Chinese Academy of Sciences.[1]
Wang Yuan
Born: 29 April 1930, Lanxi, Zhejiang, China
Died: 14 May 2021 (aged 91), Beijing, China
Nationality: Chinese
Alma mater: Zhejiang University
Known for: Number theory, History of mathematics, Numerical analysis, Design of experiments
Scientific career
Fields: Mathematics
Institutions: Chinese Academy of Sciences
Doctoral advisor: Hua Luogeng
Notable students: Shou-Wu Zhang
Influenced: Kai-Tai Fang
Life
Wang was born in Lanxi, Zhejiang, China. His father was a magistrate in the local government. Because of the Japanese invasion (the Second Sino-Japanese War), Wang's family had to move away from Zhejiang Province, and finally arrived at the southwestern city of Kunming in Yunnan in 1938. In 1942, Wang's father rose to the position of Chief Secretary of the Academia Sinica. In 1946, after the Japanese surrender, his family moved to the capital city, Nanjing. From 1946 to 1949, Wang's father was the Acting Director of the institute. In 1949, Wang was separated from his father, who went to Taiwan.
Wang entered Yingshi University (later merged into National Chekiang University, now Zhejiang University) in Hangzhou, and graduated from the Department of Mathematics in 1952.[2] He then earned a position in the Institute of Mathematics, Academia Sinica. Hua Luogeng was his academic adviser and one of his closest collaborators.
In 1966, Wang's career was interrupted by the Cultural Revolution. He was unable to work for more than five years, until 1972. During this time, Wang was harassed and put through interrogation.
In 1978, Wang returned to his professorship in the Institute of Mathematics at the Chinese Academy of Sciences. In 1980, he was elected a member of the Chinese Academy of Sciences. From 1988 to 1992, he was the president of the Chinese Mathematical Society. Wang also worked in the United States for a period of time. He visited the Institute for Advanced Study and taught at the University of Colorado.
Wang advised Shou-Wu Zhang when he studied at the Chinese Academy of Sciences for his master's degree from 1983 to 1986.[3][4][5]
Wang is the father of Chinese American computer scientist James Z. Wang.[2]
Research
Number theory
Wang's research focused on number theory, especially the Goldbach conjecture, which he approached through sieve theory and the Hardy–Littlewood circle method. He obtained a series of important results in number theory.[6][7]
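Wang's actual theorems are deep sieve-theoretic results about representing large even numbers as sums of primes and almost-primes. As a much simpler numerical illustration of the conjecture itself, one can check small even numbers directly:

```python
# Numerical spot-check of Goldbach's conjecture for small even numbers.
# (Only an illustration of the statement being studied, not of Wang's
# sieve methods.)
N = 1000
sieve = bytearray([1]) * (N + 1)          # Sieve of Eratosthenes
sieve[0] = sieve[1] = 0
for i in range(2, int(N ** 0.5) + 1):
    if sieve[i]:
        sieve[i * i::i] = bytearray(len(range(i * i, N + 1, i)))
primes = [i for i in range(N + 1) if sieve[i]]
prime_set = set(primes)

def goldbach_pair(n):
    # Smallest prime p with n - p also prime, if any.
    for p in primes:
        if p > n - p:
            break
        if (n - p) in prime_set:
            return (p, n - p)
    return None

# Every even number from 4 to 1000 has at least one representation.
assert all(goldbach_pair(n) for n in range(4, N + 1, 2))
print(goldbach_pair(100))  # (3, 97)
```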
Numerical integration and statistics
With Hua Luogeng (华罗庚, alternatively Hua Loo-Keng), he developed high-dimensional combinatorial designs for numerical integration on the unit cube. Their work came to the attention of the statistician Kai-Tai Fang, who realized that their results could be used in the design of experiments. In particular, their results could be used to investigate interaction, for example, in factorial experiments and response surface methodology. Collaborating with Fang led to uniform designs, which have also been used in computer simulations.[8][9][10][11]
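As a flavor of the number-theoretic approach, here is a minimal sketch of a rank-1 lattice rule in the spirit of the Hua–Wang good-lattice-point constructions. The Fibonacci parameters (n = 610, z = (1, 377)) are a standard textbook choice for two dimensions, not taken from their papers:

```python
import math

def lattice_rule(f, n, z):
    """Rank-1 lattice rule: average f over the n points ({k*z_1/n}, ..., {k*z_s/n}).

    Good lattice points (n, z) of this kind underlie the Hua-Wang
    number-theoretic approach to integration on the unit cube."""
    total = 0.0
    for k in range(n):
        x = [(k * zj % n) / n for zj in z]  # fractional parts of k*z/n
        total += f(x)
    return total / n

# Fibonacci lattice: n = F_15 = 610, z = (1, F_14 = 377) in two dimensions.
f = lambda x: math.exp(x[0] * x[1])   # exact integral over [0,1]^2 is
approx = lattice_rule(f, 610, (1, 377))  # sum 1/(n*n!) ~ 1.31790
```

With only 610 function evaluations this approximates the integral far better than a plain Monte Carlo sample of the same size would typically do.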
Books
• Wang, Yuan (1991). Diophantine equations and inequalities in algebraic number fields. Berlin: Springer-Verlag. doi:10.1007/978-3-642-58171-7. ISBN 9783642634895. OCLC 851809136.
• Wang, Yuan (2005). Wang, Yuan (ed.). Selected papers of Wang Yuan. Singapore: World Scientific. ISBN 9812561978. OCLC 717731203.
• Fang, Kai-Tai; Wang, Yuan (1993). Number-theoretic methods in statistics. Chapman and Hall Monographs on Statistics and Applied Probability. Vol. 51. CRC Press. ISBN 0412465205. OCLC 246555560.
Citations
1. "错误". Archived from the original on 14 July 2011. Retrieved 19 January 2008.
2. "Yuan Wang's Home Page, Goldbach Conjecture -- 数学家王元 -- 哥德巴赫猜想".
3. "从放鸭娃到数学大师" [From ducklings to mathematics master] (in Chinese). Academy of Mathematics and Systems Science. 11 November 2011. Retrieved 5 May 2019.
4. "專訪張壽武:在數學殿堂里,依然懷抱小學四年級的夢想" [Interview with Zhang Shou-Wu: In the mathematics department, he still has his dream from fourth grade of elementary school] (in Chinese). Beijing Sina Net. 3 May 2019. Retrieved 5 May 2019.
5. "专访数学家张寿武:要让别人解中国人出的数学题" [Interview with mathematician Zhang Shouwu: Let others solve the math problems of Chinese people] (in Chinese). Sina Education. 4 May 2019. Retrieved 5 May 2019.
6. "Wang Yuan (1930-)".
7. "王元(中国科学院)". www.cas.cn. Archived from the original on 14 November 2003.
8. Loie (2005)
9. Fang, Kai-Tai; Wang, Yuan; Bentler, Peter M. (1994). "Some applications of number-theoretic methods in statistics". Statistical Science. 9 (3): 416–428. doi:10.1214/ss/1177010392.
10. Santner, Williams & Notz (2003, Chapter 5.4 "Uniform designs", 145–148): Santner, Thomas J.; Williams, Brian J.; Notz, William I. (2003). The design and analysis of computer experiments. Springer Series in Statistics (2013 printing ed.). Springer-Verlag. ISBN 1475737998.
11. Li & Yuan (2005, pp. xi and xx–xxi "7) Number-theoretic methods in statistics"):
References
• Li, Wenlin; Yuan, Xiangdong (2005). "Wang Yuan: A brief outline of his life and works". In Wang, Yuan (ed.). Selected papers of Wang Yuan. Singapore: World Scientific. pp. xi–xxii. doi:10.1142/9789812701190_fmatter. ISBN 9812561978. OCLC 717731203.
• Loie, Agnes W. L. (2005). "A conversation with Kai-Tai Fang". In Fan, Jianqing; Li, Gang (eds.). Contemporary multivariate analysis and design of experiments: In celebration of Professor Kai-Tai Fang's 65th Birthday. Series in biostatistics. Vol. 2. New Jersey and Hong Kong: World Scientific. pp. 1–22. ISBN 981-256-120-X. OCLC 63193398.
External links
• Wang Yuan biography
• Brief Introduction of CMS (Center of Mathematical Sciences)
• Yuan Wang's Home Page, Goldbach Conjecture with photos
• Wang Yuan at the Mathematics Genealogy Project
• Wang Yuan, from the website of Chinese Academy of Science (in Chinese)
• Wang Yuan's story, from Sina.com (in Chinese)
| Wikipedia |
Wang algebra
In algebra and network theory, a Wang algebra is a commutative algebra $A$, over a field or (more generally) a commutative unital ring, in which $A$ has two additional properties:
(Rule i) For all elements x of $A$, x + x = 0 (universal additive nilpotency of degree 1).
(Rule ii) For all elements x of $A$, x⋅x = 0 (universal multiplicative nilpotency of degree 1).[1][2]
History and applications
Rules (i) and (ii) were originally published by K. T. Wang (Wang Ki-Tung, 王 季同) in 1934 as part of a method for analyzing electrical networks.[3] From 1935 to 1940, several Chinese electrical engineering researchers published papers on the method. The original Wang algebra is the Grassmann algebra over the finite field mod 2.[1] At the 57th annual meeting of the American Mathematical Society, held on December 27–29, 1950, Raoul Bott and Richard Duffin introduced the concept of a Wang algebra in their abstract (number 144t) The Wang algebra of networks. They gave an interpretation of the Wang algebra as a particular type of Grassmann algebra mod 2.[4] In 1969, Wai-Kai Chen used the Wang algebra formulation to give a unification of several different techniques for generating the trees of a graph.[5] The Wang algebra formulation has been used to systematically generate King-Altman directed graph patterns. Such patterns are useful in deriving rate equations in the theory of enzyme kinetics.[6]
According to Guo Jinhai, professor in the Institute for the History of Natural Sciences of the Chinese Academy of Sciences, Wang Ki Tung's pioneering method of analyzing electrical networks significantly promoted electrical engineering not only in China but in the rest of the world; the Wang algebra formulation is useful in electrical networks for solving problems involving topological methods, graph theory, and Hamiltonian cycles.[7]
Wang Algebra and the Spanning Trees of a Graph
The Wang Rules for Finding all Spanning Trees of a Graph G[8]
1. For each node write the sum of all the edge-labels that meet that node.
2. Leave out one node and take the product of the sums of labels for all the remaining nodes.
3. Expand the product in 2. using the Wang algebra.
4. The terms in the sum of the expansion obtained in 3. are in 1-1 correspondence with the spanning trees in the graph.
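The rules above can be mechanized in a few lines. In this illustrative sketch (the set-based encoding is ours, not from the sources cited here), a Wang-algebra polynomial is a set of monomials, each monomial a frozenset of edge labels; addition is symmetric difference (rule i), and a product monomial with a repeated edge vanishes (rule ii):

```python
def wang_multiply(p, q):
    """Multiply two Wang-algebra polynomials.

    A polynomial is a set of monomials; each monomial is a frozenset of
    edge labels.  x*x = 0 kills monomials with a repeated edge, and
    x + x = 0 makes addition a symmetric difference (mod-2 sum)."""
    result = set()
    for m1 in p:
        for m2 in q:
            if m1 & m2:           # repeated edge => monomial vanishes
                continue
            result ^= {m1 | m2}   # mod-2 addition of monomials
    return result

def spanning_trees(nodes, edges):
    """Enumerate spanning trees by the Wang rules.

    `edges` maps an edge label to its pair of endpoint nodes.
    Returns a set of frozensets of edge labels, one per spanning tree."""
    # Step 1: sum of the edge labels meeting each node (a set of 1-edge monomials).
    node_sum = {v: {frozenset([e]) for e, ends in edges.items() if v in ends}
                for v in nodes}
    # Steps 2-3: leave out one node, expand the product of the remaining sums.
    remaining = list(nodes)[1:]
    product = node_sum[remaining[0]]
    for v in remaining[1:]:
        product = wang_multiply(product, node_sum[v])
    return product  # Step 4: monomials <-> spanning trees

# Triangle with edges a = (1,2), b = (2,3), c = (1,3); it has 3 spanning trees.
trees = spanning_trees([1, 2, 3], {"a": (1, 2), "b": (2, 3), "c": (1, 3)})
print(sorted(sorted(t) for t in trees))  # [['a', 'b'], ['a', 'c'], ['b', 'c']]
```

For the triangle, leaving out node 1 gives the product (a + b)(b + c) = ab + ac + bc in the Wang algebra, matching the three spanning trees.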
References
1. Duffin, R. J. (1959). "An analysis of the Wang algebra of networks". Trans. Amer. Math. Soc. 93: 114–131. doi:10.1090/s0002-9947-1959-0109161-6. MR 0109161.
2. Chen, Wai-Kai (2 December 2012). "5.4 The Wang-algebra formulation". Applied Graph Theory. North-Holland. pp. 332–352. ISBN 9780444601933. p. 333, p. 334
3. K. T. Wang (1934). "On a new method of analysis of electrical networks". Memoir 2. National Research Institute of Engineering, Academia Sinica.
4. Whyburn, W. M. (March 1951). "The annual meeting of the society". Bulletin of the American Mathematical Society. 57 (2): 109–152. doi:10.1090/S0002-9904-1951-09479-3. MR 1565283. S2CID 120638163. (See p. 136.)
5. Chen, Wai-Kai (1969). "Unified theory on the generation of trees of a graph Part I. The Wang algebra formulation". International Journal of Electronics. 27 (2): 101–117. doi:10.1080/00207216908900016.
6. Qi, Feng; Dash, Ranjan K.; Han, Yu; Beard, Daniel A. (2009). "Generating rate equations for complex enzyme systems by a computer-assisted systematic method". BMC Bioinformatics. 10: 238. doi:10.1186/1471-2105-10-238. PMC 2729780. PMID 19653903.
7. 郭金海 (Guo Jinhai) (2003). "王季同的电网络分析新方法及其学术影响 (Wang Ki-Tung's New Method for the Analysis of Electric Network and Its Scientific Influence)". The Chinese Journal for the History of Science and Technology, No. 4. Institute for the History of Natural Sciences, Chinese Academy of Sciences: 33–40.
8. Kauffman, Louis H. "Wang Algebra and the Spanning Trees of a Graph" (PDF). Mathematics Department, University of Chicago Illinois.
| Wikipedia |
Wang B-machine
As presented by Hao Wang (1954, 1957), his basic machine B is an extremely simple computational model equivalent to the Turing machine. It is "the first formulation of a Turing-machine theory in terms of computer-like models" (Minsky, 1967: 200). With only 4 sequential instructions it is very similar to, but even simpler than, the 7 sequential instructions of the Post–Turing machine. In the same paper, Wang introduced a variety of equivalent machines, including what he called the W-machine, which is the B-machine with an "erase" instruction added to the instruction set.
Description
As defined by Wang (1954) the B-machine has at its command only 4 instructions:
• (1) → : Move tape-scanning head one tape square to the right (or move tape one square left), then continue to next instruction in numerical sequence;
• (2) ← : Move tape-scanning head one tape square to the left (or move tape one square right), then continue to next instruction in numerical sequence;
• (3) * : In scanned tape-square print mark * then go to next instruction in numerical sequence;
• (4) Cn: Conditional "transfer" (jump, branch) to instruction "n": If scanned tape-square is marked then go to instruction "n" else (if scanned square is blank) continue to next instruction in numerical sequence.
A sample of a simple B-machine program is his example (p. 65):
1. *, 2. →, 3. C2, 4. →, 5. ←
He rewrites this as a collection of ordered pairs:
{ ( 1, * ), ( 2, → ), ( 3, C2 ), ( 4, → ), ( 5, ← ) }
Wang's W-machine is simply the B-machine with the one additional instruction
• (5) E : In scanned tape-square erase the mark * (if there is one) then go to next instruction in numerical sequence.
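The five instructions can be interpreted in a few lines. In this sketch the 'R'/'L'/('C', n) encoding and the step cap are conventions of the illustration, not Wang's notation:

```python
def run_machine(program, tape=(), steps=100):
    """Interpret a Wang B/W-machine program.

    Instructions: 'R' (move right), 'L' (move left), '*' (mark),
    'E' (erase; W-machine only), and ('C', n) -- jump to the 1-based
    instruction n if the scanned square is marked.
    The tape is modeled as the set of marked square positions."""
    tape, head, pc = set(tape), 0, 0       # pc is a 0-based index
    for _ in range(steps):                 # step cap: there is no halt instruction
        if pc >= len(program):
            break                          # running off the end halts
        op = program[pc]
        if op == 'R':
            head += 1
        elif op == 'L':
            head -= 1
        elif op == '*':
            tape.add(head)
        elif op == 'E':
            tape.discard(head)
        elif op[0] == 'C' and head in tape:  # conditional transfer
            pc = op[1] - 1
            continue
        pc += 1
    return tape, head

# Wang's sample program "1. *, 2. ->, 3. C2, 4. ->, 5. <-":
tape, head = run_machine(['*', 'R', ('C', 2), 'R', 'L'])
print(tape, head)  # {0} 1
```

Starting on a blank tape, the program marks square 0, moves right to the blank square 1 (so C2 falls through), moves right and back left, then halts with the head at square 1.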
See also
• Codd's cellular automaton
• Counter-machine model
References
• Hao Wang (1957), A Variant to Turing's Theory of Computing Machines, JACM (Journal of the Association for Computing Machinery) 4; 63–92. Presented at the meeting of the Association, June 23–25, 1954.
• Z. A. Melzak (1961) received 15 May 1961 An Informal Arithmetical Approach to Computability and Computation, Canadian Mathematical Bulletin, vol. 4, no. 3. September 1961 pages 279–293. Melzak offers no references but acknowledges "the benefit of conversations with Drs. R. Hamming, D. McIlroy and V. Vyssotsky of the Bell Telephone Laboratories and with Dr. H. Wang of Oxford University."
• Joachim Lambek (1961) received 15 June 1961 How to Program an Infinite Abacus, Canadian Mathematical Bulletin, vol. 4, no. 3. September 1961 pages 295–302. In his Appendix II, Lambek proposes a "formal definition of 'program'". He references Melzak (1961) and Kleene (1952) Introduction to Metamathematics.
• Marvin Minsky (1967), Computation: Finite and Infinite Machines, Prentice-Hall, Inc. Englewood Cliffs, N.J. In particular see p. 262ff (italics in original):
"We can now demonstrate the remarkable fact, first shown by Wang [1957], that for any Turing machine T there is an equivalent Turing machine TN that never changes a once-written symbol! In fact, we will construct a two-symbol machine TN that can only change blank squares on its tape to 1's but can not change a 1 back to a blank." Minsky then offers a proof of this.
| Wikipedia |
Lamé function
In mathematics, a Lamé function, or ellipsoidal harmonic function, is a solution of Lamé's equation, a second-order ordinary differential equation. It was introduced in the paper (Gabriel Lamé 1837). Lamé's equation appears in the method of separation of variables applied to the Laplace equation in elliptic coordinates. In some special cases solutions can be expressed in terms of polynomials called Lamé polynomials.
The Lamé equation
Lamé's equation is
${\frac {d^{2}y}{dx^{2}}}+(A+B\wp (x))y=0,$
where A and B are constants, and $\wp $ is the Weierstrass elliptic function. The most important case is when $B\wp (x)=-\kappa ^{2}\operatorname {sn} ^{2}x$, where $\operatorname {sn} $ is the elliptic sine function, and $\kappa ^{2}=n(n+1)k^{2}$ for an integer n and $k$ the elliptic modulus, in which case the solutions extend to meromorphic functions defined on the whole complex plane. For other values of B the solutions have branch points.
By changing the independent variable to $t$ with $t=\operatorname {sn} x$, Lamé's equation can also be rewritten in algebraic form as
${\frac {d^{2}y}{dt^{2}}}+{\frac {1}{2}}\left({\frac {1}{t-e_{1}}}+{\frac {1}{t-e_{2}}}+{\frac {1}{t-e_{3}}}\right){\frac {dy}{dt}}-{\frac {A+Bt}{4(t-e_{1})(t-e_{2})(t-e_{3})}}y=0,$
which after a change of variable becomes a special case of Heun's equation.
A more general form of Lamé's equation is the ellipsoidal equation or ellipsoidal wave equation which can be written (observe we now write $\Lambda $, not $A$ as above)
${\frac {d^{2}y}{dx^{2}}}+(\Lambda -\kappa ^{2}\operatorname {sn} ^{2}x-\Omega ^{2}k^{4}\operatorname {sn} ^{4}x)y=0,$
where $k$ is the elliptic modulus of the Jacobian elliptic functions and $\kappa $ and $\Omega $ are constants. For $\Omega =0$ the equation becomes the Lamé equation with $\Lambda =A$. For $\Omega =0,k=0,\kappa =2h,\Lambda -2h^{2}=\lambda ,x=z\pm {\frac {\pi }{2}}$ the equation reduces to the Mathieu equation
${\frac {d^{2}y}{dz^{2}}}+(\lambda -2h^{2}\cos 2z)y=0.$
The Weierstrassian form of Lamé's equation is quite unsuitable for calculation (as Arscott also remarks, p. 191). The most suitable form of the equation is that in Jacobian form, as above. The algebraic and trigonometric forms are also cumbersome to use. Lamé equations arise in quantum mechanics as equations of small fluctuations about classical solutions—called periodic instantons, bounces or bubbles—of Schrödinger equations for various periodic and anharmonic potentials.[1][2]
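As a numerical illustration of the Jacobian form (parameters chosen for this sketch; SciPy's `ellipj` takes the parameter m = k²): for n = 1 and Λ = 1 + k², the Lamé equation has the exact solution y = sn(x, k), which direct integration reproduces:

```python
from scipy.integrate import solve_ivp
from scipy.special import ellipj

def lame_rhs(x, y, Lam, kappa2, m):
    # First-order form of  y'' + (Lam - kappa2 * sn(x|m)^2) y = 0.
    sn, cn, dn, ph = ellipj(x, m)
    return [y[1], -(Lam - kappa2 * sn ** 2) * y[0]]

# For n = 1: kappa^2 = n(n+1)k^2 = 2k^2, and with Lam = 1 + k^2 the solution
# through y(0) = 0, y'(0) = 1 is the Lame polynomial y(x) = sn(x, k).
m = 0.5                        # m = k^2, the elliptic parameter
kappa2, Lam = 2 * m, 1 + m
sol = solve_ivp(lame_rhs, (0.0, 4.0), [0.0, 1.0], args=(Lam, kappa2, m),
                rtol=1e-10, atol=1e-12, dense_output=True)
err = abs(sol.sol(3.0)[0] - ellipj(3.0, m)[0])  # deviation from sn(3, k)
```

The integrated solution agrees with sn(x, k) to roughly the solver tolerance, confirming the stated eigenvalue Λ = 1 + k² for this Lamé polynomial.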
Asymptotic expansions
Asymptotic expansions of periodic ellipsoidal wave functions, and therewith also of Lamé functions, for large values of $\kappa $ have been obtained by Müller.[3][4][5] The asymptotic expansion obtained by him for the eigenvalues $\Lambda $ is, with $q$ approximately an odd integer (and to be determined more precisely by boundary conditions – see below),
${\begin{aligned}\Lambda (q)={}&q\kappa -{\frac {1}{2^{3}}}(1+k^{2})(q^{2}+1)-{\frac {q}{2^{6}\kappa }}\{(1+k^{2})^{2}(q^{2}+3)-4k^{2}(q^{2}+5)\}\\[6pt]&{}-{\frac {1}{2^{10}\kappa ^{2}}}{\Big \{}(1+k^{2})^{3}(5q^{4}+34q^{2}+9)-4k^{2}(1+k^{2})(5q^{4}+34q^{2}+9)\\[6pt]&{}-384\Omega ^{2}k^{4}(q^{2}+1){\Big \}}-\cdots ,\end{aligned}}$
(another (fifth) term not given here has been calculated by Müller, the first three terms have also been obtained by Ince[6]). Observe terms are alternately even and odd in $q$ and $\kappa $ (as in the corresponding calculations for Mathieu functions, and oblate spheroidal wave functions and prolate spheroidal wave functions). With the following boundary conditions (in which $K(k)$ is the quarter period given by a complete elliptic integral)
$\operatorname {Ec} (2K)=\operatorname {Ec} (0)=0,\;\;\operatorname {Es} (2K)=\operatorname {Es} (0)=0,$
as well as (the prime meaning derivative)
$(\operatorname {Ec} )_{2K}^{'}=(\operatorname {Ec} )_{0}^{'}=0,\;\;(\operatorname {Es} )_{2K}^{'}=(\operatorname {Es} )_{0}^{'}=0,$
defining respectively the ellipsoidal wave functions
$\operatorname {Ec} _{n}^{q_{0}},\operatorname {Es} _{n}^{q_{0}+1},\operatorname {Ec} _{n}^{q_{0}-1},\operatorname {Es} _{n}^{q_{0}}$
of periods $4K,2K,2K,4K,$ and for $q_{0}=1,3,5,\ldots $ one obtains
$q-q_{0}=\mp 2{\sqrt {\frac {2}{\pi }}}\left({\frac {1+k}{1-k}}\right)^{-\kappa /k}\left({\frac {8\kappa }{1-k^{2}}}\right)^{q_{0}/2}{\frac {1}{[(q_{0}-1)/2]!}}\left[1-{\frac {3(q_{0}^{2}+1)(1+k^{2})}{2^{5}\kappa }}+\cdots \right].$
Here the upper sign refers to the solutions $\operatorname {Ec} $ and the lower to the solutions $\operatorname {Es} $. Finally expanding $\Lambda (q)$ about $q_{0},$ one obtains
${\begin{aligned}\Lambda _{\pm }(q)\simeq {}&\Lambda (q_{0})+(q-q_{0})\left({\frac {\partial \Lambda }{\partial q}}\right)_{q_{0}}+\cdots \\[6pt]={}&\Lambda (q_{0})+(q-q_{0})\kappa \left[1-{\frac {q_{0}(1+k^{2})}{2^{2}\kappa }}-{\frac {1}{2^{6}\kappa ^{2}}}\{3(1+k^{2})^{2}(q_{0}^{2}+1)-4k^{2}(q_{0}^{2}+2q_{0}+5)\}+\cdots \right]\\[6pt]\simeq {}&\Lambda (q_{0})\mp 2\kappa {\sqrt {\frac {2}{\pi }}}\left({\frac {1+k}{1-k}}\right)^{-\kappa /k}\left({\frac {8\kappa }{1-k^{2}}}\right)^{q_{0}/2}{\frac {1}{[(q_{0}-1)/2]!}}{\Big [}1-{\frac {1}{2^{5}\kappa }}(1+k^{2})(3q_{0}^{2}+8q_{0}+3)\\[6pt]&{}+{\frac {1}{3\cdot 2^{11}\kappa ^{2}}}\{3(1+k^{2})^{2}(9q_{0}^{4}+8q_{0}^{3}-78q_{0}^{2}-88q_{0}-87)\\[6pt]&{}+128k^{2}(2q_{0}^{3}+9q_{0}^{2}+10q_{0}+15)\}-\cdots {\Big ]}.\end{aligned}}$
In the limit of the Mathieu equation (to which the Lamé equation can be reduced) these expressions reduce to the corresponding expressions of the Mathieu case (as shown by Müller).
Notes
1. H. J. W. Müller-Kirsten, Introduction to Quantum Mechanics: Schrödinger Equation and Path Integral, 2nd ed. World Scientific, 2012, ISBN 978-981-4397-73-5
2. Liang, Jiu-Qing; Müller-Kirsten, H.J.W.; Tchrakian, D.H. (1992). "Solitons, bounces and sphalerons on a circle". Physics Letters B. Elsevier BV. 282 (1–2): 105–110. doi:10.1016/0370-2693(92)90486-n. ISSN 0370-2693.
3. W. Müller, Harald J. (1966). "Asymptotic Expansions of Ellipsoidal Wave Functions and their Characteristic Numbers". Mathematische Nachrichten (in German). Wiley. 31 (1–2): 89–101. doi:10.1002/mana.19660310108. ISSN 0025-584X.
4. Müller, Harald J. W. (1966). "Asymptotic Expansions of Ellipsoidal Wave Functions in Terms of Hermite Functions". Mathematische Nachrichten (in German). Wiley. 32 (1–2): 49–62. doi:10.1002/mana.19660320106. ISSN 0025-584X.
5. Müller, Harald J. W. (1966). "On Asymptotic Expansions of Ellipsoidal Wave Functions". Mathematische Nachrichten (in German). Wiley. 32 (3–4): 157–172. doi:10.1002/mana.19660320305. ISSN 0025-584X.
6. Ince, E. L. (1940). "VII—Further Investigations into the Periodic Lamé Functions". Proceedings of the Royal Society of Edinburgh. Cambridge University Press (CUP). 60 (1): 83–99. doi:10.1017/s0370164600020071. ISSN 0370-1646.
References
• Arscott, F. M. (1964), Periodic Differential Equations, Oxford: Pergamon Press, pp. 191–236.
• Erdélyi, Arthur; Magnus, Wilhelm; Oberhettinger, Fritz; Tricomi, Francesco G. (1955), Higher transcendental functions (PDF), Bateman Manuscript Project, vol. III, New York–Toronto–London: McGraw-Hill, pp. XVII + 292, MR 0066496, Zbl 0064.06302.
• Lamé, G. (1837), "Sur les surfaces isothermes dans les corps homogènes en équilibre de température", Journal de mathématiques pures et appliquées, 2: 147–188. Available at Gallica.
• Rozov, N. Kh. (2001) [1994], "Lamé equation", Encyclopedia of Mathematics, EMS Press
• Rozov, N. Kh. (2001) [1994], "Lamé function", Encyclopedia of Mathematics, EMS Press
• Volkmer, H. (2010), "Lamé function", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
• Müller-Kirsten, Harald J. W. (2012), Introduction to Quantum Mechanics: Schrödinger Equation and Path Integral, 2nd ed., World Scientific
| Wikipedia |
Wanxiong Shi
Wanxiong Shi (Chinese: 施皖雄; pinyin: Shī Wǎnxióng; Pe̍h-ōe-jī: Si Oán-hiông; 6 October 1963 – 30 September 2021) was a Chinese mathematician. He was known for his fundamental work in the theory of Ricci flow.
Wanxiong Shi
施皖雄
Born: October 6, 1963, China
Died: September 30, 2021 (aged 57), Washington D.C., USA
Alma mater: University of Science and Technology of China
Chinese Academy of Sciences
Harvard University (Ph.D.)
Scientific career
Fields: Mathematics
Institutions: Purdue University
Thesis: Ricci Deformation Of The Metric On Complete Noncompact Kahler Manifolds
Doctoral advisor: Shing-Tung Yau
Education
Shi was a native of Quanzhou, Fujian. In 1978, Shi graduated from Quanzhou No. 5 Middle School, and entered the University of Science and Technology of China. Shi earned his bachelor's degree in mathematics in 1982, then he went to the Institute of Mathematics of Chinese Academy of Sciences and obtained his master's degree in mathematics in 1985 under the guidance of Lu Qikeng (Chinese: 陆启铿) and Zhong Jiaqing (Chinese: 钟家庆). Then Shi was recruited by Shing-Tung Yau to study under him at the University of California, San Diego.[1] In 1987, Shi followed Yau to Harvard University and obtained his Ph.D. there in 1990.[2]
Since Shi was stronger in geometric analysis than other Chinese students, having an impressive ability to carry out highly technical arguments, he was assigned by Yau to investigate Ricci flow in the challenging case of noncompact manifolds.[3] Shi made significant breakthroughs and was highly regarded by researchers in the field. Richard Hamilton, the founder of Ricci flow theory, liked his work very much.[4]
Academic career and later life
Upon his graduation, several prominent universities were interested in offering him a faculty position. Hung-Hsi Wu (Chinese: 伍鸿熙) from the University of California, Berkeley, asked Yau if Shi could come to Berkeley. Without seeking Yau's opinion, Shi applied for and received tenure-track assistant professorship offers from the University of California, San Diego, where Richard Hamilton was working, and from Purdue University.
Shi decided to join Purdue University. He published several important papers there, and was awarded three grants from the NSF in 1991, 1994 and 1997.[5][6][7] However, Shi did not pass the tenure review in 1997, so he had to leave the university. (The principal investigator of the NSF grant of 1997 was changed because of this.) Yau believes that the failure was due to the faculty members not realising the importance of Ricci flow theory. Hamilton sent a belated reference letter to Purdue University in which he rebuked the decision, but to no avail.[4]
Shi then left academia and moved to Washington D.C., where he lived a frugal and secluded life and had less and less contact with his friends. He turned down some offers from other universities.[4] Yau and former classmates of Shi tried to persuade him to return to academia and offered to help, but he declined.[8] Yau felt sorry for Shi's leaving academia, since among the four students of Yau who worked on Ricci flow, Shi had done the best work. Shi died from a sudden heart attack on the evening of September 30, 2021.[9][10]
Work
Shi initiated the study of Ricci flow theory on noncompact complete manifolds. He proved local derivative estimates for the Ricci flow, which are fundamental to many arguments of the theory, including Perelman's proof of the Poincaré conjecture using Ricci flow in 2002.[11]
Publications
• Shi, Wan-Xiong (1989). "Complete noncompact three-manifolds with non-negative Ricci curvature". J. Differ. Geom. 29 (2): 353–360. doi:10.4310/jdg/1214442879.
• Shi, Wan-Xiong (1989). "Deforming the metric on complete Riemannian manifolds". J. Differ. Geom. 30 (1): 223–301. doi:10.4310/jdg/1214443292.
• Shi, Wan-Xiong (1989). "Ricci deformation of the metric on complete non-compact Riemannian manifolds". J. Differ. Geom. 30 (2): 303–394. doi:10.4310/jdg/1214443595.
• Shi, Wan-Xiong (1990). "Complete noncompact Kähler manifolds with positive holomorphic bisectional curvature". Bull. Am. Math. Soc. New Ser. 23 (2): 437–440. doi:10.1090/S0273-0979-1990-15954-3.
• Shi, Wanxiong and Yau, S. T. (1994). "Harmonic maps on complete noncompact Riemannian manifolds". A tribute to Ilya Bakelman. Proceedings of a conference, College Station, TX, USA, October 1993. College Station, TX: Texas A & M University. pp. 79–120. ISBN 0-9630728-2-X.{{cite conference}}: CS1 maint: multiple names: authors list (link)
• Shi, Wan-Xiong and Yau, S.-T. (1996). "A note on the total curvature of a Kähler manifold". Math. Res. Lett. 3 (1): 123–132. doi:10.4310/MRL.1996.v3.n1.a12.{{cite journal}}: CS1 maint: multiple names: authors list (link)
• Shi, Wan-Xiong (1997). "Ricci flow and the uniformization on complete noncompact Kähler manifolds". J. Differ. Geom. 45 (1): 94–220. doi:10.4310/jdg/1214459756.
• Shi, Wan-Xiong (1998). "A uniformization theorem for complete Kähler manifolds with positive holomorphic bisectional curvature". J. Geom. Anal. 8 (1): 117–142. doi:10.1007/BF02922111. S2CID 121610392.
References
1. Shing-Tung Yau; Steve Nadis (2019). The Shape of a Life: One Mathematician's Search for the Universe's Hidden Geometry. Yale University Press. p. 171. ISBN 9780300245523.
2. "才学一流 破百年猜想 贡献卓著 人生几何 解千古疑惑 淡泊超然". School of Mathematical Sciences USTC. 9 October 2021. Retrieved 7 May 2022.
3. 郑方阳 Zheng Fangyang (6 October 2021). "忆皖雄同学". Wechat official accounts platform. Retrieved 7 May 2022.
4. Shing-Tung Yau (7 October 2021). "丘成桐:悼念我的学生施皖雄". sohu.com. Retrieved 7 May 2022.
5. "Award Abstract # 9103140 Mathematical Sciences: "Studying the Topology of Riemannian Manifolds through Ricci Deformation of the Metric"". National Science Foundation. 10 May 1991. Retrieved 9 May 2022.
6. "Award Abstract # 9403405 Mathematical Sciences: Heat Flow on Riemannian Manifolds". National Science Foundation. 13 July 1994. Retrieved 9 May 2022.
7. "Award Abstract # 9703656 Global Analysis on Riemannian Manifolds". National Science Foundation. 14 February 1997. Retrieved 9 May 2022.
8. 王晓林 Wang Xiaolin (12 October 2021). "【情系科大】特立独行,不离不弃——怀念皖雄同学". Wechat official accounts platform. Retrieved 7 May 2022.
9. "58岁数学家施皖雄去世,曾为解决庞加莱猜想做出基础性贡献". thepaper.cn. 7 October 2021. Retrieved 7 May 2022.
10. "师友追忆旅美数学家施皖雄——"我们一起生活,对于将来是荣幸的"". 上海科技报 Shanghai Keji Bao. 21 October 2021. Retrieved 7 May 2022.
11. Richard S. Hamilton. "Prof. Hamilton's speech about Poincare conjecture in Beijing (8 June, 2005)". The Institute of Mathematical Sciences, CUHK. Retrieved 7 May 2022.
External links
• Wan-Xiong Shi at the Mathematics Genealogy Project
| Wikipedia |
Warburg coefficient
The Warburg coefficient (or Warburg constant; denoted AW or σ) is the diffusion coefficient of ions in solution, associated with the Warburg element, ZW. The Warburg coefficient has units of ${\Omega }/{\sqrt {\text{seconds}}}={\Omega }(s^{-1/2})$
The value of AW can be obtained from the gradient of the Warburg plot, a linear plot of the real impedance (R) against the reciprocal of the square root of the frequency (${1}/{\sqrt {\omega }}$). This relation should always yield a straight line, as it is unique to a Warburg element.
Alternatively, the value of AW can be found by:
$A_{W}={\frac {RT}{An^{2}F^{2}{\sqrt {2}}}}{\left({\frac {1}{C_{\mathrm {O} }^{b}{\sqrt {D_{\mathrm {O} }}}}}+{\frac {1}{C_{\mathrm {R} }^{b}{\sqrt {D_{\mathrm {R} }}}}}\right)}={\frac {RT}{An^{2}F^{2}\Theta C{\sqrt {2D}}}}$
where
• R is the ideal gas constant;
• T is the thermodynamic temperature;
• F is the Faraday constant;
• n is the valency;
• D is the diffusion coefficient of the species, where subscripts O and R stand for the oxidized and reduced species respectively;
• Cb is the concentration of the O and R species in the bulk;
• C is the concentration of the electrolyte;
• A denotes the surface area;
• Θ denotes the fraction of the O and R species present.
The equation for AW applies to both reversible and quasi-reversible reactions for which both halves of the couple are soluble.
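As a sketch, the formula can be evaluated directly. The concentrations, diffusion coefficients, and electrode area below are hypothetical round numbers (in mol/cm³, cm²/s, and cm²), chosen only to illustrate the calculation:

```python
import math

R = 8.314462618    # molar gas constant, J/(mol*K)
F = 96485.33212    # Faraday constant, C/mol

def warburg_coefficient(T, n, A, C_O, D_O, C_R, D_R):
    """Warburg coefficient A_W = RT/(A n^2 F^2 sqrt(2)) *
    (1/(C_O sqrt(D_O)) + 1/(C_R sqrt(D_R))), in ohm/sqrt(s)."""
    prefactor = R * T / (A * n**2 * F**2 * math.sqrt(2))
    return prefactor * (1.0 / (C_O * math.sqrt(D_O)) +
                        1.0 / (C_R * math.sqrt(D_R)))

# Hypothetical one-electron couple: 1 mM (= 1e-6 mol/cm^3) of each species,
# D = 1e-5 cm^2/s for both, 1 cm^2 electrode, 298.15 K.
sigma = warburg_coefficient(298.15, 1, 1.0, 1e-6, 1e-5, 1e-6, 1e-5)
print(round(sigma, 1))   # about 119 ohm/sqrt(s)
```

Note how the coefficient scales as 1/n² and inversely with concentration: dilute solutions and one-electron couples give large Warburg impedances.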
References
• Ottova-Leitmannova, A. (2006). Advances in Planar Lipid Bilayers and Liposomes. Academic Press.
| Wikipedia |
Ward's conjecture
In mathematics, Ward's conjecture is the conjecture made by Ward (1985, p. 451) that "many (and perhaps all?) of the ordinary and partial differential equations that are regarded as being integrable or solvable may be obtained from the self-dual gauge field equations (or its generalizations) by reduction".
Examples
Ablowitz, Chakravarty, and Halburd (2003) explain how a variety of completely integrable equations such as the Korteweg–de Vries (KdV) equation, the Kadomtsev–Petviashvili (KP) equation, the nonlinear Schrödinger equation, the sine-Gordon equation, the Ernst equation and the Painlevé equations all arise as reductions or other simplifications of the self-dual Yang-Mills equations:
$F=\star F$
where $F$ is the curvature of a connection on an oriented 4-dimensional pseudo-Riemannian manifold, and $\star $ is the Hodge star operator.
They also obtain the equations of an integrable system known as the Euler–Arnold–Manakov top, a generalization of the Euler top, and they state that the Kovalevskaya top is also a reduction of the self-dual Yang-Mills equations.
Penrose-Ward transform
Via the Penrose-Ward transform these solutions give the holomorphic vector bundles often seen in the context of algebraic integrable systems.
References
• Ablowitz, M. J.; Chakravarty, S.; R. G., Halburd (2003), "Integrable systems and reductions of the self-dual Yang–Mills equations", Journal of Mathematical Physics, 44 (8): 3147–3173, Bibcode:2003JMP....44.3147A, doi:10.1063/1.1586967 http://www.ucl.ac.uk/~ucahrha/Publications/sdym-03.pdf
• Ward, R. S. (1985), "Integrable and solvable systems, and relations among them", Philosophical Transactions of the Royal Society of London. Series A. Mathematical and Physical Sciences, 315 (1533): 451–457, Bibcode:1985RSPTA.315..451W, doi:10.1098/rsta.1985.0051, ISSN 0080-4614, MR 0836745, S2CID 123659512
• Mason, L. J.; Woodhouse, N. M. J. (1996), Integrability, Self-duality, and Twistor Theory, Clarendon
| Wikipedia |
Penrose transform
In theoretical physics, the Penrose transform, introduced by Roger Penrose (1967, 1968, 1969), is a complex analogue of the Radon transform that relates massless fields on spacetime, or more precisely the space of solutions to massless field equations to sheaf cohomology groups on complex projective space. The projective space in question is the twistor space, a geometrical space naturally associated to the original spacetime, and the twistor transform is also geometrically natural in the sense of integral geometry. The Penrose transform is a major component of classical twistor theory.
Overview
Abstractly, the Penrose transform operates on a double fibration of a space Y, over two spaces X and Z
$Z{\xleftarrow {\eta }}Y{\xrightarrow {\tau }}X.$
In the classical Penrose transform, Y is the spin bundle, X is a compactified and complexified form of Minkowski space (which as a complex manifold is $\mathbf {Gr} (2,4)$) and Z is the twistor space (which is $\mathbb {P} ^{3}$). More generally examples come from double fibrations of the form
$G/H_{1}{\xleftarrow {\eta }}G/(H_{1}\cap H_{2}){\xrightarrow {\tau }}G/H_{2}$
where G is a complex semisimple Lie group and H1 and H2 are parabolic subgroups.
The Penrose transform operates in two stages. First, one pulls back the sheaf cohomology groups Hr(Z,F) to the sheaf cohomology Hr(Y,η−1F) on Y; in many cases where the Penrose transform is of interest, this pullback turns out to be an isomorphism. One then pushes the resulting cohomology classes down to X; that is, one investigates the direct image of a cohomology class by means of the Leray spectral sequence. The resulting direct image is then interpreted in terms of differential equations. In the case of the classical Penrose transform, the resulting differential equations are precisely the massless field equations for a given spin.
Example
The classical example is given as follows
• The "twistor space" Z is complex projective 3-space CP3, which is also the Grassmannian Gr1(C4) of lines in 4-dimensional complex space.
• X = Gr2(C4), the Grassmannian of 2-planes in 4-dimensional complex space. This is a compactification of complex Minkowski space.
• Y is the flag manifold whose elements correspond to a line in a plane of C4.
• G is the group SL4(C) and H1 and H2 are the parabolic subgroups fixing a line or a plane containing this line.
The maps from Y to X and Z are the natural projections.
Using spinor index notation, the Penrose transform gives a bijection between solutions to the spin $\pm n/2$ massless field equation
$\partial _{A}\,^{A_{1}'}\phi _{A_{1}'A_{2}'\cdots A_{n}'}=0$
and the first sheaf cohomology group $H^{1}(\mathbb {P} ^{1},{\mathcal {O}}(\pm n-2))$, where $\mathbb {P} ^{1}$ is the Riemann sphere, ${\mathcal {O}}(k)$ are the usual holomorphic line bundles over projective space, and the sheaves under consideration are the sheaves of sections of ${\mathcal {O}}(k)$.[1]
Penrose–Ward transform
The Penrose–Ward transform is a nonlinear modification of the Penrose transform, introduced by Ward (1977), that (among other things) relates holomorphic vector bundles on 3-dimensional complex projective space CP3 to solutions of the self-dual Yang–Mills equations on S4. Atiyah & Ward (1977) used this to describe instantons in terms of algebraic vector bundles on complex projective 3-space and Atiyah (1979) explained how this could be used to classify instantons on a 4-sphere.
See also
• Twistor correspondence
References
• Atiyah, Michael Francis; Ward, R. S. (1977), "Instantons and algebraic geometry", Communications in Mathematical Physics, Springer Berlin / Heidelberg, 55 (2): 117–124, Bibcode:1977CMaPh..55..117A, doi:10.1007/BF01626514, ISSN 0010-3616, MR 0494098
• Atiyah, Michael Francis (1979), Geometry of Yang-Mills fields, Lezioni Fermiane, Scuola Normale Superiore Pisa, Pisa, ISBN 978-88-7642-303-1, MR 0554924
• Baston, Robert J.; Eastwood, Michael G. (1989), The Penrose transform, Oxford Mathematical Monographs, The Clarendon Press Oxford University Press, ISBN 978-0-19-853565-2, MR 1038279.
• Eastwood, Michael (1993), "Introduction to Penrose transform", in Eastwood, Michael; Wolf, Joseph; Zierau., Roger (eds.), The Penrose transform and analytic cohomology in representation theory (South Hadley, MA, 1992), Contemp. Math., vol. 154, Providence, R.I.: Amer. Math. Soc., pp. 71–75, ISBN 978-0-8218-5176-0, MR 1246377
• Eastwood, M.G. (2001) [1994], "Penrose transform", Encyclopedia of Mathematics, EMS Press
• David, Liana (2001), The Penrose transform and its applications (PDF), University of Edinburgh; Doctor of Philosophy thesis.
• Penrose, Roger (1967), "Twistor algebra", Journal of Mathematical Physics, 8 (2): 345–366, Bibcode:1967JMP.....8..345P, doi:10.1063/1.1705200, ISSN 0022-2488, MR 0216828, archived from the original on 2013-01-12
• Penrose, Roger (1968), "Twistor quantisation and curved space-time", International Journal of Theoretical Physics, Springer Netherlands, 1 (1): 61–99, Bibcode:1968IJTP....1...61P, doi:10.1007/BF00668831, ISSN 0020-7748
• Penrose, Roger (1969), "Solutions of the Zero‐Rest‐Mass Equations", Journal of Mathematical Physics, 10 (1): 38–39, Bibcode:1969JMP....10...38P, doi:10.1063/1.1664756, ISSN 0022-2488, archived from the original on 2013-01-12
• Penrose, Roger; Rindler, Wolfgang (1986), Spinors and space-time. Vol. 2, Cambridge Monographs on Mathematical Physics, Cambridge University Press, ISBN 978-0-521-25267-6, MR 0838301.
• Ward, R. S. (1977), "On self-dual gauge fields", Physics Letters A, 61 (2): 81–82, Bibcode:1977PhLA...61...81W, doi:10.1016/0375-9601(77)90842-8, ISSN 0375-9601, MR 0443823
1. Dunajski, Maciej (2010). Solitons, instantons, and twistors. Oxford: Oxford University Press. pp. 145–146. ISBN 9780198570639.
Warfield group
In algebra, a Warfield group, studied by Warfield (1972), is a summand of a simply presented abelian group.
References
• Warfield, R. B. Jr. (1972), "Classification theorems for p-groups and modules over a discrete valuation ring", Bulletin of the American Mathematical Society, 78: 88–92, doi:10.1090/S0002-9904-1972-12870-2, ISSN 0002-9904, MR 0291284
Waring's prime number conjecture
In number theory, Waring's prime number conjecture is a conjecture related to Vinogradov's theorem, named after the English mathematician Edward Waring. It states that every odd number exceeding 3 is either a prime number or the sum of three prime numbers. It follows from the generalized Riemann hypothesis,[1] and (trivially) from Goldbach's weak conjecture.
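Though the conjecture remains open in general, the statement is easy to verify by brute force for small cases. The sketch below is plain Python written for this note; `primes_up_to` and `satisfies_conjecture` are illustrative helper names, not from any cited source. It checks every odd number from 5 up to 500:

```python
def primes_up_to(n):
    """Sieve of Eratosthenes returning all primes <= n."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_p in enumerate(sieve) if is_p]

def satisfies_conjecture(n, primes, prime_set):
    """True if n is itself prime or a sum of three primes."""
    if n in prime_set:
        return True
    return any((n - p - q) in prime_set
               for p in primes if p < n
               for q in primes if q <= n - p)

primes = primes_up_to(500)
prime_set = set(primes)
# every odd number exceeding 3, up to the chosen bound
assert all(satisfies_conjecture(n, primes, prime_set) for n in range(5, 500, 2))
```

Raising the bound only strengthens the numerical evidence; the weak Goldbach conjecture, proved by Helfgott in 2013, already covers every odd number greater than 5.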
See also
• Schnirelmann's constant
References
1. Deshouillers, J.-M.; Effinger, G.; te Riele, H.; Zinoviev, D. (1997). "A complete Vinogradov 3-primes theorem under the Riemann Hypothesis". Electr. Res. Ann. of AMS. 3: 99–104.
External links
• Weisstein, Eric W. "Waring's prime number conjecture". MathWorld.
Waring–Goldbach problem
The Waring–Goldbach problem is a problem in additive number theory, concerning the representation of integers as sums of powers of prime numbers. It is named as a combination of Waring's problem on sums of powers of integers, and the Goldbach conjecture on sums of primes. It was initiated by Hua Luogeng[1] in 1938.
Problem statement
It asks whether large numbers can be expressed as a sum, with at most a constant number of terms, of like powers of primes. That is, for any given natural number k, is it true that for every sufficiently large integer N there necessarily exists a set of primes {p1, p2, ..., pt} such that $N=p_{1}^{k}+p_{2}^{k}+\cdots +p_{t}^{k}$, where t is at most some constant value?[2]
The case k = 1 is a weaker version of the Goldbach conjecture. Some progress has been made on the cases k = 2 to 7.
Heuristic justification
By the prime number theorem, the number of k-th powers of a prime below x is of the order $x^{1/k}/\log x$. From this, the number of t-term expressions with sums at most x is roughly $x^{t/k}/(\log x)^{t}$. It is reasonable to assume that for some sufficiently large number t this is at least x − c for some constant c, i.e., essentially all numbers up to x are t-fold sums of k-th powers of primes. This argument is, of course, a long way from a strict proof.
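The first estimate can be checked numerically. The sketch below is illustrative Python (the helper name `kth_prime_power_count` is invented here); it compares the actual count of squares of primes below 10^6 with $x^{1/k}/\log x$. Note the heuristic is only an order-of-magnitude claim: by the prime number theorem the count is asymptotically $kx^{1/k}/\log x$, off by the constant factor k.

```python
import math

def is_prime(n):
    """Trial division; adequate for the small ranges used here."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def kth_prime_power_count(x, k):
    """Number of k-th powers of primes that are <= x."""
    return sum(1 for p in range(2, int(x ** (1 / k)) + 2)
               if p ** k <= x and is_prime(p))

x, k = 10 ** 6, 2
actual = kth_prime_power_count(x, k)    # primes p with p^2 <= 10^6, i.e. p <= 1000
heuristic = x ** (1 / k) / math.log(x)  # the order-of-magnitude estimate from the text
print(actual, round(heuristic))         # 168 vs. 72: same order, constant factor ~k
```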
Relevant results
In his monograph,[3] using and refining the methods of Hardy, Littlewood and Vinogradov, Hua Luogeng obtains an $O(k^{2}\log k)$ upper bound for the number of terms required to exhibit all sufficiently large numbers as sums of k-th powers of primes.
Every sufficiently large odd integer is the sum of 21 fifth powers of primes.[4]
References
1. L. K. Hua: Some results in additive prime number theory, Quart. J. Math. Oxford, 9(1938), 68–80.
2. Buttcane, Jack (January 2010). "A note on the Waring–Goldbach problem". Journal of Number Theory. Elsevier. 130 (1): 116–127. doi:10.1016/j.jnt.2009.07.006.
3. Hua Lo Keng: Additive theory of prime numbers, Translations of Mathematical Monographs, 13, American Mathematical Society, Providence, R.I. 1965 xiii+190 pp
4. Kawada, Koichi; Wooley, Trevor D. (2001), "On the Waring–Goldbach problem for fourth and fifth powers" (PDF), Proceedings of the London Mathematical Society, 83 (1): 1–50, doi:10.1112/plms/83.1.1, hdl:2027.42/135164.
WarpPLS
WarpPLS is a software package with a graphical user interface for variance-based and factor-based structural equation modeling (SEM) using partial least squares and factor-based methods.[1][2] The software can be used in empirical research to analyse collected data (e.g., from questionnaire surveys) and test hypothesized relationships. Since it runs on the MATLAB Compiler Runtime, it does not require the MATLAB software development application to be installed, and it can be installed and used on operating systems other than Windows through virtual installations.
Original author(s): Ned Kock
Initial release: 2009
Stable release: WarpPLS 7.0
Operating system: Windows
Platform: MATLAB
Available in: English
Type: Statistical analysis, data collection, structural equation modeling, multivariate analysis
License: Proprietary software
Website: www.warppls.com
Main features
Among the main features of WarpPLS is its ability to identify and model non-linearity among variables in path models, whether these variables are measured as latent variables or not, yielding parameters that take the corresponding underlying heterogeneity into consideration.[3][4][5][6][7]
Other notable features are summarized:[8][9][10][11]
• Guides SEM analysis flow via a step-by-step user interface guide.[12]
• Implements classic (composite-based) as well as factor-based PLS algorithms.
• Identifies nonlinear relationships, and estimates path coefficients accordingly.
• Also models linear relationships, using classic and factor-based PLS algorithms.
• Models reflective and formative variables, as well as moderating effects.
• Calculates P values, model fit and quality indices, and full collinearity coefficients.
• Calculates effect sizes and Q-squared predictive validity coefficients.
• Calculates indirect effects for paths with 2, 3 etc. segments; as well as total effects.
• Calculates several causality assessment coefficients.
• Provides zoomed 2D graphs and 3D graphs.
See also
• Partial least squares path modeling
• Partial least squares regression
• Principal component analysis
• SmartPLS
References
1. Kock, N., & Mayfield, M. (2015). PLS-based SEM algorithms: The good neighbor assumption, collinearity, and nonlinearity. Information Management and Business Review, 7(2), 113-130.
2. Kock, N. (2015). A note on how to conduct a factor-based PLS-SEM analysis. International Journal of e-Collaboration, 11(3), 1-9.
3. Gountas, S., & Gountas, J. (2016). How the ‘warped’ relationships between nurses' emotions, attitudes, social support and perceived organizational conditions impact customer orientation. Journal of Advanced Nursing, 72(2), 283-293.
4. Guo, K.H., Yuan, Y., Archer, N.P., & Connelly, C.E. (2011). Understanding nonmalicious security violations in the workplace: A composite behavior model. Journal of Management Information Systems, 28(2), 203-236.
5. Brewer, T.D., Cinner, J.E., Fisher, R., Green, A., & Wilson, S.K. (2012). Market access, population density, and socioeconomic development explain diversity and functional group biomass of coral reef fish assemblages. Global Environmental Change, 22(2), 399-406.
6. Schmiedel, T., vom Brocke, J., & Recker, J. (2014). Development and validation of an instrument to measure organizational cultures’ support of business process management. Information & Management, 51(1), 43-56.
7. Schmitz, K. W., Teng, J. T., & Webb, K. J. (2016). Capturing the complexity of malleable IT use: Adaptive structuration theory for individuals. Management Information Systems Quarterly, 40(3), 663-686.
8. Memon, M. A., Ramayah, T., Cheah, J.-H., Ting, H., Chuah, F., & Cham, T. H. (2021). PLS-SEM statistical programs: A review. Journal of Applied Structural Equation Modeling, 5(1), i-xiii.
9. Kock, N. (2019). Factor-based structural equation modeling with WarpPLS. Australasian Marketing Journal, 27(1), 57-63.
10. Kock, N. (2019). From composites to factors: Bridging the gap between PLS and covariance‐based structural equation modeling. Information Systems Journal, 29(3), 674-706.
11. Kock, N. (2011). Using WarpPLS in e-collaboration studies: Mediating effects, control and second order variables, and algorithm choices. International Journal of e-Collaboration, 7(3), 1-13.
12. "SEM Analysis with WarpPLS" – via www.youtube.com.
Warped geometry
In mathematics and physics, in particular differential geometry and general relativity, a warped geometry is a Riemannian or Lorentzian manifold whose metric tensor can be written in form
$ds^{2}=g_{ab}(y)\,dy^{a}\,dy^{b}+f(y)g_{ij}(x)\,dx^{i}\,dx^{j}.$
The geometry almost decomposes into a Cartesian product of the y geometry and the x geometry – except that the x part is warped, i.e. it is rescaled by a scalar function of the other coordinates y. For this reason, the metric of a warped geometry is often called a warped product metric.[1][2]
Warped geometries are useful in that separation of variables can be used when solving partial differential equations over them.
Examples
Warped geometries acquire their full meaning when we substitute the variable y for t, time, and x for s, space. Then the f(y) factor of the spatial dimension becomes the effect of time that, in the words of Einstein, "curves space". How it curves space will define one or another solution to a space-time world. For that reason, different models of space-time use warped geometries. Many basic solutions of the Einstein field equations are warped geometries, for example, the Schwarzschild solution and the Friedmann–Lemaître–Robertson–Walker models.
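For instance, the spatially flat Friedmann–Lemaître–Robertson–Walker metric is a warped product in exactly this sense. Writing the scale factor as a(t), the metric is

$ds^{2}=-dt^{2}+a(t)^{2}\left(dx^{2}+dy^{2}+dz^{2}\right),$

which matches the general form above with the single coordinate y = t, $g_{ab}(y)\,dy^{a}\,dy^{b}=-dt^{2}$, warping function $f(t)=a(t)^{2}$, and the flat spatial metric on the x-block (here the spatial coordinates x, y, z play the role of the x variables).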
Also, warped geometries are the key building block of Randall–Sundrum models in string theory.
See also
• Metric tensor
• Exact solutions in general relativity
• Poincaré half-plane model
References
1. Chen, Bang-Yen (2011). Pseudo-Riemannian geometry, [delta]-invariants and applications. World Scientific. ISBN 978-981-4329-63-7.
2. O'Neill, Barrett (1983). Semi-Riemannian geometry. Academic Press. ISBN 0-12-526740-1.
Warren Goldfarb
Warren David Goldfarb (born 1949) is Walter Beverly Pearson Professor of Modern Mathematics and Mathematical Logic at Harvard University. He specializes in the history of analytic philosophy and in logic, most notably the classical decision problem.
Born: 1949
Era: Contemporary philosophy
Region: Western philosophy
School: Analytic
Institutions: Harvard University
Main interests: logic, history of analytic philosophy
Influences: Burton Dreben
Education and career
He received his A.B. and philosophy Ph.D. from Harvard University under the supervision of Burton Dreben, and has been a member of the Harvard faculty since 1975. He received tenure in 1982, the only philosopher to be promoted to tenure at Harvard between 1962 and 1999.[1]
Prof. Goldfarb is also one of the founders of the Harvard Gay & Lesbian Caucus and was one of the first openly gay Harvard faculty members.[2][3]
Philosophical work
Goldfarb was an editor of volumes III–V of Kurt Gödel's Collected Works. He has also published articles on important analytic philosophers, including Frege, Russell, Wittgenstein's early and later work, Carnap and Quine.[4]
Selected publications
Books
• The Decision Problem: Solvable Classes of Quantificational Formulas, Addison-Wesley, 1979. ISBN 0-201-02540-X (with Burton Dreben).
• Deductive Logic, Hackett, 2003.
Articles
• "Logic in the Twenties: the Nature of the Quantifier," The Journal of Symbolic Logic (1979)
• "I want you to bring me a slab: Remarks on the opening sections of the Philosophical Investigations," Synthese (1983)
• "Kripke on Wittgenstein on Rules," The Journal of Philosophy (1985)
• "Poincare Against the Logicists," in History and Philosophy of Modern Mathematics (University of Minnesota Press, 1987)
• "Wittgenstein on Understanding," Midwest Studies in Philosophy (1992)
• "Frege's Conception of Logic," in Future Pasts: The Analytic Tradition in the 20th Century, Juliet Floyd and Sanford Shieh, editors (Oxford University Press, 2001)
• “Rule-Following Revisited," in Wittgenstein and the Philosophy of Mind, Jonathan Ellis and Daniel Guevara, editors (Oxford University Press, 2012).
• "The Undecidability of the Second-Order Unification Problem" (PDF). Theoretical Computer Science. 13: 225–230. 1981. doi:10.1016/0304-3975(81)90040-2.
References
1. Harvard Tenures First Philosopher in 17 Years, The Chronicle of Higher Education, May 7, 1999.
2. Who's Who: About Us, Harvard Gay and Lesbian Caucus.
3. Gay Faculty Become Activists, Melissa Lee and Anna D. Wilde, Harvard Crimson, April 23, 1993.
4. "The Goldfarb Panel," Warren Goldfarb on Quine: https://www.youtube.com/watch?v=-_tSuKAOGSY
External links
• Goldfarb's web page at Harvard University
• Works by Warren Goldfarb at PhilPapers
Warsaw School (mathematics)
Warsaw School of Mathematics is the name given to a group of mathematicians who worked at Warsaw, Poland, in the two decades between the World Wars, especially in the fields of logic, set theory, point-set topology and real analysis. They published in the journal Fundamenta Mathematicae, founded in 1920—one of the world's first specialist pure-mathematics journals. It was in this journal, in 1933, that Alfred Tarski—whose illustrious career would a few years later take him to the University of California, Berkeley—published his celebrated theorem on the undefinability of the notion of truth.
Notable members of the Warsaw School of Mathematics have included:
• Wacław Sierpiński
• Kazimierz Kuratowski
• Edward Marczewski
• Bronisław Knaster
• Zygmunt Janiszewski
• Stefan Mazurkiewicz
• Stanisław Saks
• Karol Borsuk
• Roman Sikorski
• Nachman Aronszajn
• Samuel Eilenberg
Additionally, notable logicians of the Lwów–Warsaw School of Logic, working at Warsaw, have included:
• Stanisław Leśniewski
• Adolf Lindenbaum
• Alfred Tarski
• Jan Łukasiewicz
• Andrzej Mostowski
• Helena Rasiowa
Fourier analysis has been advanced at Warsaw by:
• Aleksander Rajchman
• Antoni Zygmund
• Józef Marcinkiewicz
• Otton M. Nikodym
• Jerzy Spława-Neyman
See also
• Polish School of Mathematics
• Kraków School of Mathematics
• Lwów School of Mathematics
Shape theory (mathematics)
Shape theory is a branch of topology that provides a more global view of topological spaces than homotopy theory. The two coincide on compacta dominated homotopically by finite polyhedra. Shape theory is associated with Čech homology, while homotopy theory is associated with singular homology.
Background
Shape theory was reinvented, further developed and promoted by the Polish mathematician Karol Borsuk in 1968; the name shape theory was coined by Borsuk himself.
Warsaw circle
Borsuk lived and worked in Warsaw, hence the name of one of the fundamental examples of the area, the Warsaw circle. It is a compact subset of the plane produced by "closing up" a topologist's sine curve with an arc. The homotopy groups of the Warsaw circle are all trivial, just like those of a point, and so any map between the Warsaw circle and a point induces a weak homotopy equivalence. However these two spaces are not homotopy equivalent. So by the Whitehead theorem, the Warsaw circle does not have the homotopy type of a CW complex.
Development
Borsuk's shape theory was generalized to arbitrary (non-metric) compact spaces, and even to general categories, by Włodzimierz Holsztyński in 1968/1969, published in Fundamenta Mathematicae 70, 157–168 (1971) (see Jean-Marc Cordier, Tim Porter, (1989) below). This was done in a continuous style, characteristic of the Čech homology rendered by Samuel Eilenberg and Norman Steenrod in their monograph Foundations of Algebraic Topology. Due to these circumstances, Holsztyński's paper was hardly noticed, and instead great popularity in the field was gained by a later paper of Sibe Mardešić and Jack Segal, Fundamenta Mathematicae 72, 61–68 (1971). Further developments are reflected by the references below, and by their contents.
For some purposes, like dynamical systems, more sophisticated invariants were developed under the name strong shape. Generalizations to noncommutative geometry, e.g. the shape theory for operator algebras have been found.
See also
• List of topologies
References
• Mardešić, Sibe (1997). "Thirty years of shape theory" (PDF). Mathematical Communications. 2: 1–12.
• shape theory at the nLab
• Jean-Marc Cordier and Tim Porter, (1989), Shape Theory: Categorical Methods of Approximation, Mathematics and its Applications, Ellis Horwood. Reprinted Dover (2008)
• Aristide Deleanu and Peter John Hilton, On the categorical shape of a functor, Fundamenta Mathematicae 97 (1977) 157 - 176.
• Aristide Deleanu and Peter John Hilton, Borsuk's shape and Grothendieck categories of pro-objects, Mathematical Proceedings of the Cambridge Philosophical Society 79 (1976) 473–482.
• Sibe Mardešić and Jack Segal, Shapes of compacta and ANR-systems, Fundamenta Mathematicae 72 (1971) 41–59
• Karol Borsuk, Concerning homotopy properties of compacta, Fundamenta Mathematicae 62 (1968) 223-254
• Karol Borsuk, Theory of Shape, Monografie Matematyczne Tom 59, Warszawa 1975.
• D. A. Edwards and H. M. Hastings, Čech Theory: its Past, Present, and Future, Rocky Mountain Journal of Mathematics, Volume 10, Number 3, Summer 1980
• D. A. Edwards and H. M. Hastings, (1976), Čech and Steenrod homotopy theories with applications to geometric topology, Lecture Notes in Mathematics 542, Springer-Verlag.
• Tim Porter, Čech homotopy I, II, Journal of the London Mathematical Society, 1, 6, 1973, pp. 429–436; 2, 6, 1973, pp. 667–675.
• J.T. Lisica and Sibe Mardešić, Coherent prohomotopy and strong shape theory, Glasnik Matematički 19(39) (1984) 335–399.
• Michael Batanin, Categorical strong shape theory, Cahiers Topologie Géom. Différentielle Catég. 38 (1997), no. 1, 3–66, numdam
• Marius Dădărlat, Shape theory and asymptotic morphisms for C*-algebras, Duke Mathematical Journal, 73(3):687–711, 1994.
• Marius Dădărlat and Terry A. Loring, Deformations of topological spaces predicted by E-theory, In Algebraic methods in operator theory, p. 316–327. Birkhäuser 1994.
Washek Pfeffer
Washek F. Pfeffer (November 14, 1936 – January 3, 2021) was a Czech-born American mathematician and Emeritus Professor at the University of California, Davis. Pfeffer was one of the world's pre-eminent authorities on real integration; he authored several books on the topic of integration and numerous papers on these and related areas of real analysis and measure theory. Pfeffer gave his name to the Pfeffer integral, which extends a Riemann-type construction of the integral of a measurable function both to higher-dimensional domains and, in the case of one dimension, to a superset of the Lebesgue integrable functions.
External links
• UC Davis memorial
Disc integration
Disc integration, also known in integral calculus as the disc method, is a method for calculating the volume of a solid of revolution when integrating along an axis "parallel" to the axis of revolution. This method models the resulting three-dimensional shape as a stack of an infinite number of discs of varying radius and infinitesimal thickness. It is also possible to use the same principles with rings instead of discs (the "washer method") to obtain hollow solids of revolution. This is in contrast to shell integration, which integrates along an axis perpendicular to the axis of revolution.
Definition
Function of x
If the function to be revolved is a function of x, the following integral represents the volume of the solid of revolution:
$\pi \int _{a}^{b}R(x)^{2}\,dx$
where R(x) is the distance between the function and the axis of rotation. This works only if the axis of rotation is horizontal (example: y = 3 or some other constant).
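As a numerical sanity check of this formula, rotating $R(x)={\sqrt {r^{2}-x^{2}}}$ about the x-axis must recover the volume of a sphere, $4\pi r^{3}/3$. The sketch below is plain Python written for this note; `disc_volume` is a hypothetical helper that approximates the integral with the trapezoidal rule, not any standard library routine:

```python
import math

def disc_volume(R, a, b, n=100_000):
    """Disc method: V = pi * integral of R(x)^2 dx over [a, b], trapezoidal rule."""
    h = (b - a) / n
    total = 0.5 * (R(a) ** 2 + R(b) ** 2)
    total += sum(R(a + i * h) ** 2 for i in range(1, n))
    return math.pi * h * total

r = 2.0
sphere = disc_volume(lambda x: math.sqrt(r * r - x * x), -r, r)
assert abs(sphere - 4 / 3 * math.pi * r ** 3) < 1e-3  # 4*pi*r^3/3
```

The same helper evaluates any disc-method volume: pass the radius function R and the interval [a, b] of integration.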
Function of y
If the function to be revolved is a function of y, the following integral will obtain the volume of the solid of revolution:
$\pi \int _{c}^{d}R(y)^{2}\,dy$
where R(y) is the distance between the function and the axis of rotation. This works only if the axis of rotation is vertical (example: x = 4 or some other constant).
Washer method
To obtain a hollow solid of revolution (the “washer method”), the procedure would be to take the volume of the inner solid of revolution and subtract it from the volume of the outer solid of revolution. This can be calculated in a single integral similar to the following:
$\pi \int _{a}^{b}\left(R_{\mathrm {O} }(x)^{2}-R_{\mathrm {I} }(x)^{2}\right)\,dx$
where RO(x) is the function that is farthest from the axis of rotation and RI(x) is the function that is closest to the axis of rotation. For example, the next figure shows the rotation along the x-axis of the red "leaf" enclosed between the square-root and quadratic curves:
The volume of this solid is:
$\pi \int _{0}^{1}\left(\left({\sqrt {x}}\right)^{2}-\left(x^{2}\right)^{2}\right)\,dx\,.$
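Since $\left({\sqrt {x}}\right)^{2}=x$ and $\left(x^{2}\right)^{2}=x^{4}$, the integral evaluates to $\pi (1/2-1/5)=3\pi /10$. A quick numerical check with the midpoint rule (plain Python; the helper name is invented here) agrees:

```python
import math

def leaf_volume(n=200_000):
    """pi * integral_0^1 of ((sqrt x)^2 - (x^2)^2) dx, via the midpoint rule."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        total += (math.sqrt(x) ** 2 - (x ** 2) ** 2) * h
    return math.pi * total

assert abs(leaf_volume() - 3 * math.pi / 10) < 1e-6  # exact value: 3*pi/10
```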
One should take caution not to evaluate the square of the difference of the two functions, but to evaluate the difference of the squares of the two functions.
$R_{\mathrm {O} }(x)^{2}-R_{\mathrm {I} }(x)^{2}\neq \left(R_{\mathrm {O} }(x)-R_{\mathrm {I} }(x)\right)^{2}$
(This formula only works for revolutions about the x-axis.)
To rotate about any horizontal axis, simply subtract each function from the value of that axis. If h is the value of a horizontal axis, then the volume equals
$\pi \int _{a}^{b}\left(\left(h-R_{\mathrm {O} }(x)\right)^{2}-\left(h-R_{\mathrm {I} }(x)\right)^{2}\right)\,dx\,.$
For example, to rotate the region between y = −2x + x2 and y = x along the axis y = 4, one would integrate as follows:
$\pi \int _{0}^{3}\left(\left(4-\left(-2x+x^{2}\right)\right)^{2}-(4-x)^{2}\right)\,dx\,.$
The bounds of integration are the zeros of the difference of the two functions. Note that when integrating along an axis other than the x-axis, the graph of the function that is farthest from the axis of rotation may not be obvious. In the previous example, even though the graph of y = x is, with respect to the x-axis, further up than the graph of y = −2x + x2, with respect to the axis of rotation the function y = x is the inner function: its graph is closer to y = 4, the axis of rotation in the example.
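Expanding the integrand gives $(4+2x-x^{2})^{2}-(4-x)^{2}=x^{4}-4x^{3}-5x^{2}+24x$, whose integral from 0 to 3 is 153/5, so the volume is $153\pi /5$. A numerical sketch (midpoint rule, plain Python, helper name invented here) confirms this:

```python
import math

def washer_volume(n=200_000):
    """pi * integral_0^3 of ((4 - (-2x + x^2))^2 - (4 - x)^2) dx, midpoint rule."""
    a, b = 0.0, 3.0
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h
        outer = 4.0 - (-2.0 * x + x * x)  # distance of y = -2x + x^2 from the axis y = 4
        inner = 4.0 - x                   # distance of y = x from the axis y = 4
        total += (outer ** 2 - inner ** 2) * h
    return math.pi * total

assert abs(washer_volume() - 153.0 * math.pi / 5.0) < 1e-3  # exact value: 153*pi/5
```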
The same idea can be applied to both the y-axis and any other vertical axis. One simply must solve each equation for x before one inserts them into the integration formula.
See also
• Solid of revolution
• Shell integration
Irving Stringham
Washington Irving Stringham (December 10, 1847 – October 5, 1909) was an American mathematician born in Yorkshire, New York. He was the first person to denote the natural logarithm as $\ln(x)$ where $x$ is its argument. The use of $\ln(x)$ in place of $\log _{e}(x)$ is commonplace in digital calculators today.
"In place of $^{e}\log $ we shall henceforth use the shorter symbol $\ln $, made up of the initial letters of logarithm and of natural or Napierian."[1]
Irving Stringham
Born(1847-12-10)December 10, 1847
Yorkshire, New York, U.S.
DiedOctober 5, 1909(1909-10-05) (aged 61)
Berkeley, California, U.S.
NationalityAmerican
Alma materHarvard College
Johns Hopkins University
Scientific career
FieldsMathematics
InstitutionsUniversity of California at Berkeley
Doctoral advisorJames Joseph Sylvester
Stringham graduated from Harvard College in 1877. He earned his PhD from Johns Hopkins University in 1880. His dissertation was titled Regular Figures in N-dimensional Space[2] under his advisor James Joseph Sylvester.
In 1881 he was in Schwartzbach, Saxony, when he submitted an article on finite groups found in the quaternion algebra.[3]
Stringham began his professorship in mathematics at Berkeley in 1882.[4] In 1893 in Chicago, his paper Formulary for an Introduction to Elliptic Functions was read at the International Mathematical Congress held in connection with the World's Columbian Exposition.[5] In 1900 he was an Invited Speaker at the ICM in Paris.[6]
Personal life
Irving married Martha Sherman Day. The couple raised a daughter, Martha Sherman Stringham (March 5, 1891 – August 7, 1967).
References
1. Charles Smith, Irving Stringham, Elementary algebra for the use of schools and colleges 2nd ed, (The Macmillan Company, New York, 1904) p 437.
2. W.I. Stringham "Regular Figures in N-dimensional Space", American Journal of Mathematics Vol 3 (1880) pp 1-15.
3. I. Stringham (1881) "Determination of the finite quaternion groups", American Journal of Mathematics 4(1–4):345–57
4. "In Memoriam, Dean Stringham" University of California Chronicle Vol XII (University Press, Berkeley, 1909) pp 1–20.
5. "Formulary for an Introduction to Elliptic Functions by Irving Stringham". Mathematical papers read at the International Mathematical Congress held in connection with the World's Columbian Exposition. NY: Macmillan as publisher for the AMS. 1896. pp. 350–366.
6. "Orthogonal transformations in elliptic, or in hyperbolic, space by Irving Stringham". Compte rendu du deuxième Congrès international des mathématiciens tenu à Paris du 6 au 12 Aout 1900. Vol. Tome 2. 1902. pp. 327–338.
Publications
• I. Stringham (1879) The Quaternion Formulae for Quantification of Curves, Surfaces, and Solids, and for Barycenters, American Journal of Mathematics 2:205–7.
• I. Stringham (1901) On the geometry of planes in a parabolic space of four dimensions, Transactions of the American Mathematical Society 2:183–214.
• I. Stringham (1905) "A geometric construction for quaternion products", Bulletin of the American Mathematical Society 11(8):437–9.
External links
• Irving Stringham at the Mathematics Genealogy Project
• Portrait of W. Irving Stringham from Mathematics Department University of California, Berkeley
• San Francisco Call 6 October 1909 re Irving Stringham death, from California Digital Newspaper Collection.
Authority control
International
• FAST
• ISNI
• VIAF
National
• Germany
• United States
Academics
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
Other
• SNAC
• IdRef
Wason selection task
The Wason selection task (or four-card problem) is a logic puzzle devised by Peter Cathcart Wason in 1966.[1][2][3] It is one of the most famous tasks in the study of deductive reasoning.[4] An example of the puzzle is:
You are shown a set of four cards placed on a table, each of which has a number on one side and a color on the other. The visible faces of the cards show 3, 8, blue and red. Which card(s) must you turn over in order to test that if a card shows an even number on one face, then its opposite face is blue?
A response that identifies a card that need not be turned over, or that fails to identify a card that needs to be turned over, is incorrect. The original task dealt with numbers (even, odd) and letters (vowels, consonants).
The test is of special interest because people have a hard time solving it in most scenarios but can usually solve it correctly in certain contexts. In particular, researchers have found that the puzzle is readily solved when the imagined context is policing a social rule.
Solution
The correct response is to turn over the 8 card and the red card.
The rule was "If the card shows an even number on one face, then its opposite face is blue." Only a card with both an even number on one face and something other than blue on the other face can invalidate this rule:
• If the 3 card is blue (or red), that doesn't violate the rule. The rule makes no claims about odd numbers. (Denying the antecedent)
• If the 8 card is not blue, it violates the rule. (Modus ponens)
• If the blue card is odd (or even), that doesn't violate the rule. The blue color is not exclusive to even numbers. (Affirming the consequent)
• If the red card is even, it violates the rule. (Modus tollens)
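The case analysis above can be sketched as a short program. This is an illustrative sketch, not part of the original task (the helper name `must_turn` is hypothetical): a card must be turned over exactly when some possible hidden face could falsify the rule.

```python
# A card must be turned over iff a possible hidden face could falsify the
# rule "if a card shows an even number on one face, its opposite face is blue".
def must_turn(visible):
    if isinstance(visible, int):   # a number is showing; the hidden face is a color
        return visible % 2 == 0    # an even card could hide a non-blue face
    return visible != "blue"       # a non-blue card could hide an even number

cards = [3, 8, "blue", "red"]
print([c for c in cards if must_turn(c)])  # [8, 'red']
```

Only the modus ponens case (the 8) and the modus tollens case (the red card) survive; the other two cards cannot falsify the rule no matter what is on their hidden faces.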
Use of logic
The interpretation of "if" here is that of the material conditional in classical logic, so this problem can be solved by choosing the cards using modus ponens (all even cards must be checked to ensure they are blue) and modus tollens (all non-blue cards must be checked to ensure they are non-even).
One experiment on the Wason four-card problem found many influences on people's selections that were not based on logic. The non-logical inferences made by the participants in this experiment demonstrate the possibility and structure of extra-logical reasoning mechanisms.[5]
Alternatively, one might solve the problem by using another reference to zeroth-order logic. In classical propositional logic, the material conditional is false if and only if its antecedent is true and its consequent is false. As an implication of this, two cases need to be inspected in the selection task to check whether we are dealing with a false conditional:
• The case in which the antecedent is true (the even card), to examine whether the consequent is false (the opposite face is not blue).
• The case in which the consequent is false (the red card), to study whether the antecedent is true (the opposite face is even).
Explanations of performance on the task
In Wason's study, fewer than 10% of subjects found the correct solution, which for this version of the problem is the 8 card and the red card.[6] This result was replicated in 1993.[7] The poor success rate on this selection task may be explained by its lack of real-world relevance. When the task is reframed in a familiar context, however, empirical evidence has shown an increase in logical responses.[8]
Some authors have argued that participants do not read "if... then..." as the material conditional, since the natural language conditional is not the material conditional.[9][10][11] (See also the paradoxes of the material conditional for more information.) However, one interesting feature of the task is how participants react when the classical logic solution is explained:
A psychologist, not very well disposed toward logic, once confessed to me that despite all problems in short-term inferences like the Wason Card Task, there was also the undeniable fact that he had never met an experimental subject who did not understand the logical solution when it was explained to him, and then agreed that it was correct.[12]
This latter comment is also controversial, since it does not explain whether the subjects regarded their previous solution as incorrect, or whether they regarded the problem as sufficiently vague to permit two interpretations.
Wason also attributed participants' errors on this selection task to confirmation bias. Confirmation bias compels people to seek out the cards that would confirm the rule, while overlooking the main purpose of the experiment, which is to deliberately choose the cards that could disconfirm the rule.[13]
Policing social rules
See also: Evolution of human intelligence § Sexual selection, Evolution of morality, Prisoner's dilemma, and Social selection
As of 1983, experimenters had identified that success on the Wason selection task was highly context-dependent, but there was no theoretical explanation for which contexts elicited mostly correct responses and which ones elicited mostly incorrect responses.[14]
Evolutionary psychologists Leda Cosmides and John Tooby (1992) identified that the selection task tends to produce the "correct" response when presented in a context of social relations.[14] For example, if the rule used is "If you are drinking alcohol, then you must be over 18", and the cards have an age on one side and beverage on the other, e.g., "16", "drinking beer", "25", "drinking soda", most people have no difficulty in selecting the correct cards ("16" and "drinking beer").[14] In a series of experiments in different contexts, subjects demonstrated consistent superior performance when asked to police a social rule involving a benefit that was only legitimately available to someone who had qualified for that benefit.[14] Cosmides and Tooby argued that experimenters have ruled out alternative explanations, such as that people learn the rules of social exchange through practice and find it easier to apply these familiar rules than less-familiar rules.[14]
According to Cosmides and Tooby, this experimental evidence supports the hypothesis that a Wason task proves to be easier if the rule to be tested is one of social exchange (in order to receive benefit X you need to fulfill condition Y) and the subject is asked to police the rule, but is more difficult otherwise. They argued that such a distinction, if empirically borne out, would support the contention of evolutionary psychologists that human reasoning is governed by context-sensitive mechanisms that have evolved, through natural selection, to solve specific problems of social interaction, rather than context-free, general-purpose mechanisms.[14] In this case, the module is described as a specialized cheater-detection module.[14]
Evaluation of social relations hypothesis
Davies et al. (1995) have argued that Cosmides and Tooby's argument in favor of context-sensitive, domain-specific reasoning mechanisms as opposed to general-purpose reasoning mechanisms is theoretically incoherent and inferentially unjustified.[15] Von Sydow (2006) has argued that we have to distinguish deontic and descriptive conditionals, but that the logic of testing deontic conditionals is more systematic (see Beller, 2001) and depend on one's goals (see Sperber & Girotto, 2002).[11][16][17] However, in response to Kanazawa (2010),[18] Kaufman et al. (2011) gave 112 subjects a 70-item computerized version of the contextualized Wason card-selection task proposed by Cosmides and Tooby (1992) and found instead that "performance on non-arbitrary, evolutionarily familiar problems is more strongly related to general intelligence than performance on arbitrary, evolutionarily novel problems",[19] and writing for Psychology Today, Kaufman concluded instead that "It seems that general intelligence is very much compatible with evolutionary psychology."[20]
See also
• Cognition
• Confirmation bias
• Evolution of human intelligence
• Logic
• Necessary and sufficient conditions
• Psychology of reasoning
References
1. Wason, P. C. (1968). "Reasoning about a rule". Quarterly Journal of Experimental Psychology. 20 (3): 273–281. doi:10.1080/14640746808400161. PMID 5683766. S2CID 1212273.
2. Wason, P. C. (1966). "Reasoning". In Foss, B. M. (ed.). New horizons in psychology. Vol. 1. Harmondsworth: Penguin. LCCN 66005291.
3. Wason, P. C.; Shapiro, Diana (1971). "Natural and contrived experience in a reasoning problem". Quarterly Journal of Experimental Psychology. 23: 63–71. doi:10.1080/00335557143000068. S2CID 7903333.
4. Manktelow, K. I. (1999). Reasoning and Thinking. Psychology Press. p. 8. ISBN 978-0-86377-708-0. The Wason selection task has often been claimed to be the single most investigated experimental paradigm in the psychology of reasoning.
5. Fiddick, Laurence; Cosmides, Leda; Tooby, John (2000-10-16). "No interpretation without representation: the role of domain-specific representations and inferences in the Wason selection task". Cognition. 77 (1): 1–79. doi:10.1016/S0010-0277(00)00085-8. ISSN 0010-0277.
6. Wason, P. C. (1977). "Self-contradictions". In Johnson-Laird, P. N.; Wason, P. C. (eds.). Thinking: Readings in cognitive science. Cambridge: Cambridge University Press. ISBN 978-0521217569.
7. Evans, Jonathan St. B. T.; Newstead, Stephen E.; Byrne, Ruth M. J. (1993). Human Reasoning: The Psychology of Deduction. Psychology Press. ISBN 978-0-86377-313-6.
8. Leighton, Jacqueline P; Dawson, Michael R. W (2001-09-01). "A parallel distributed processing model of Wason's selection task". Cognitive Systems Research. 2 (3): 207–231. doi:10.1016/S1389-0417(01)00035-3. ISSN 1389-0417.
9. Oaksford, M.; Chater, N. (1994). "A rational analysis of the selection task as optimal data selection". Psychological Review. 101 (4): 608–631. CiteSeerX 10.1.1.174.4085. doi:10.1037/0033-295X.101.4.608.
10. Stenning, K.; van Lambalgen, M. (2004). "A little logic goes a long way: basing experiment on semantic theory in the cognitive science of conditional reasoning". Cognitive Science. 28 (4): 481–530. CiteSeerX 10.1.1.13.1854. doi:10.1016/j.cogsci.2004.02.002.
11. von Sydow, M. (2006). Towards a Flexible Bayesian and Deontic Logic of Testing Descriptive and Prescriptive Rules. Göttingen: Göttingen University Press. doi:10.53846/goediss-161. S2CID 246924881.
12. van Benthem, Johan (2008). "Logic and reasoning: do the facts matter?". Studia Logica. 88 (1): 67–84. CiteSeerX 10.1.1.130.4704. doi:10.1007/s11225-008-9101-1. S2CID 11228131.
13. Dawson, Erica; Gilovich, Thomas; Regan, Dennis T. (October 2002). "Motivated Reasoning and Performance on the Wason Selection Task". Personality and Social Psychology Bulletin. 28 (10): 1379–1387. doi:10.1177/014616702236869. ISSN 0146-1672.
14. Cosmides, L.; Tooby, J. (1992). "Cognitive Adaptions for Social Exchange" (PDF). In Barkow, J.; Cosmides, L.; Tooby, J. (eds.). The adapted mind: Evolutionary psychology and the generation of culture. New York: Oxford University Press. pp. 163–228. ISBN 978-0-19-506023-2.
15. Davies, Paul Sheldon; Fetzer, James H.; Foster, Thomas R. (1995). "Logical reasoning and domain specificity". Biology and Philosophy. 10 (1): 1–37. doi:10.1007/BF00851985. S2CID 83429932.
16. Beller, S. (2001). "A model theory of deontic reasoning about social norms". In Moore, J. D.; Stenning, K. (eds.). Proceedings of the 23rd Annual Conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum. pp. 63–68.
17. Sperber, D.; Girotto, V. (2002). "Use or misuse of the selection task?". Cognition. 85 (3): 277–290. CiteSeerX 10.1.1.207.3101. doi:10.1016/s0010-0277(02)00125-7. PMID 12169412. S2CID 2086414.
18. Kanazawa, Satoshi (May–June 2010). "Evolutionary Psychology and Intelligence Research" (PDF). American Psychologist. 65 (4): 279–289. doi:10.1037/a0019378. PMID 20455621. Retrieved February 16, 2018.
19. Kaufman, Scott Barry; DeYoung, Colin G.; Reis, Deidre L.; Gray, Jeremy R. (May–June 2010). "General intelligence predicts reasoning ability even for evolutionarily familiar content" (PDF). Intelligence. 39 (5): 311–322. doi:10.1016/j.intell.2011.05.002. Retrieved February 16, 2018.
20. Kaufman, Scott Barry (July 2, 2011). "Is General Intelligence Compatible with Evolutionary Psychology?". Psychology Today. Sussex Publishers. Retrieved February 16, 2018.
Further reading
• Barkow, Jerome H.; Cosmides, Leda; Tooby, John (1995). The adapted mind: evolutionary psychology and the generation of culture (illustrated, reprint, revised ed.). Oxford University Press US. pp. 181–184. ISBN 978-0-19-510107-2.
External links
Wikimedia Commons has media related to Wason selection task.
• Here is the general structure of a Wason selection task — from the Center for Evolutionary Psychology at the University of California, Santa Barbara
• CogLab: Wason Selection — from Wadsworth CogLab 2.0 Cognitive Psychology Online Laboratory
• Elementary My Dear Wason – interactive version of Wason Selection Task at PhilosophyExperiments.Com
Evolutionary psychology
• History
• Evolutionary thought
• Theoretical foundations
• Adaptationism
• Cognitive revolution
• Cognitivism
• Gene selection theory
• Modern synthesis
• Criticism
Evolutionary processes
• Adaptations
• Altruism
• Cheating
• Hamiltonian spite
• Reciprocal
• Baldwin effect
• By-products
• Evolutionarily stable strategy
• Exaptation
• Fitness
• Inclusive
• Kin selection
• Mismatch
• Natural selection
• Parental investment
• Parent–offspring conflict
• Sexual selection
• Costly signaling
• Male/Female intrasexual competition
• Mate choice
• Sexual dimorphism
• Social selection
Areas
Cognition / Emotion
• Affect
• Display
• Display rules
• Facial expression
• Behavioral modernity
• Cognitive module/Modularity of mind
• Automatic and controlled processes
• Computational theory of mind
• Domain generality
• Domain specificity
• Dual process theory
• Evolution of the brain
• Evolution of nervous systems
• Fight-or-flight response
• Arachnophobia
• Basophobia
• Ophidiophobia
• Folk biology/taxonomy
• Folk psychology/Theory of mind
• Intelligence
• Flynn effect
• Wason selection task
• Motor control/skill
• Multitasking
• Visual perception
• Color vision
• Eye
• Naïve physics
Culture
• Aesthetics
• Literary criticism
• Musicology
• Anthropology
• Biological
• Crime
• Language
• Origin
• Psychology
• Speech
• Morality
• Moral foundations
• Religion
• Origin
• Universals
Development
• Attachment
• Bonding
• Affectional/Maternal/Paternal bond
• Caregiver deprivation
• Childhood attachment
• Cinderella effect
• Cognitive development
• Education
• Personality development
• Socialization
Human factors / Mental health
• Criticism of Facebook
• Depression
• Digital media use and mental health
• Computer addiction
• Cyberbullying
• Internet addiction disorder
• Internet sex addiction
• Online problem gambling
• Problematic smartphone use
• Problematic social media use
• Television addiction
• Video game addiction
• Effects of the car on societies
• Distracted driving
• Lead–crime hypothesis
• Mobile phones and driving safety
• Texting while driving
• Hypophobia
• Mind-blindness
• Rank theory of depression
• Schizophrenia
• Sex differences
• Screen time
• Social aspects of television
Sex
• Activity
• Adult attachment
• Age disparity
• Arousal
• Coolidge effect
• Desire
• Fantasy
• Hormonal motivation
• Jealousy
• Mate guarding
• Mating preferences
• Mating strategies
• Orientation
• Epigenetic theories of homosexuality
• Environmental factors
• Fraternal birth order
• Gender incongruence
• Neuroscience
• Prenatal hormones
• Ovulatory shift hypothesis
• Pair bond
• Physical/Sexual attraction
• Sexuality/Male/Female
• Sexy son hypothesis
• Westermarck effect
Sex differences
• Aggression
• Autism
• Cognition
• Crime
• Division of labour
• Emotional intelligence
• Empathising–systemising theory
• Gender role
• Intelligence
• Memory
• Narcissism
• Neuroscience
• Schizophrenia
• Substance abuse
• Suicide
• Variability hypothesis
People
Biologists / neuroscientists
• John Crook
• Charles Darwin
• Richard Dawkins
• Jared Diamond
• W. D. Hamilton
• Alfred Kinsey
• Peter Kropotkin
• Gordon Orians
• Jaak Panksepp
• Margie Profet
• Peter Richerson
• Giacomo Rizzolatti
• Randy Thornhill
• Robert Trivers
• Carel van Schaik
• Claus Wedekind
• Mary Jane West-Eberhard
• Wolfgang Wickler
• George C. Williams
• David Sloan Wilson
• E. O. Wilson
• Richard Wrangham
Anthropologists
• Jerome H. Barkow
• Christopher Boehm
• Robert Boyd
• Donald E. Brown
• Napoleon Chagnon
• Robin Dunbar
• Daniel Fessler
• Mark Flinn
• John D. Hawks
• Joseph Henrich
• Ruth Mace
• Daniel Nettle
• Stephen Shennan
• Donald Symons
• John Tooby
• Pierre van den Berghe
Behavioral economists / political scientists
• Samuel Bowles
• Ernst Fehr
• Herbert Gintis
• Dominic D. P. Johnson
• Gad Saad
Literary theorists / philosophers
• Edmund Burke
• Joseph Carroll
• Daniel Dennett
• Denis Dutton
• Thomas Hobbes
• David Hume
Psychologists / cognitive scientists
• Mary Ainsworth
• Simon Baron-Cohen
• Justin L. Barrett
• Jay Belsky
• Jesse Bering
• David F. Bjorklund
• Paul Bloom
• John Bowlby
• Pascal Boyer
• Joseph Bulbulia
• David Buss
• Josep Call
• Anne Campbell
• Donald T. Campbell
• Peter Carruthers
• Noam Chomsky
• Leda Cosmides
• Martin Daly
• Paul Ekman
• Bruce J. Ellis
• Anne Fernald
• Aurelio José Figueredo
• Diana Fleischman
• Uta Frith
• Gordon G. Gallup
• David C. Geary
• Gerd Gigerenzer
• Jonathan Haidt
• Harry Harlow
• Judith Rich Harris
• Martie Haselton
• Stephen Kaplan
• Douglas T. Kenrick
• Simon M. Kirby
• Robert Kurzban
• Brian MacWhinney
• Michael T. McGuire
• Geoffrey Miller
• Darcia Narvaez
• Katherine Nelson
• Randolph M. Nesse
• Steven Neuberg
• David Perrett
• Henry Plotkin
• Steven Pinker
• Paul Rozin
• Mark Schaller
• David P. Schmitt
• Nancy Segal
• Todd K. Shackelford
• Roger Shepard
• Irwin Silverman
• Peter K. Smith
• Dan Sperber
• Anthony Stevens
• Frank Sulloway
• Michael Tomasello
• Joshua Tybur
• Mark van Vugt
• Andrew Whiten
• Glenn Wilson
• Margo Wilson
Research centers / organizations
• Center for Evolutionary Psychology
• Human Behavior and Evolution Society
• Max Planck Institute for Evolutionary Anthropology
• Max Planck Institute for Human Cognitive and Brain Sciences
• New England Complex Systems Institute
Publications
• The Adapted Mind
• Evolution and Human Behavior
• The Evolution of Human Sexuality
• Evolution, Mind and Behaviour
• Evolutionary Behavioral Sciences
• Evolutionary Psychology
Related subjects
Academic disciplines
• Behavioral/Evolutionary economics
• Behavioral epigenetics/genetics
• Affective/Behavioral/Cognitive/Evolutionary neuroscience
• Biocultural anthropology
• Cognitive psychology
• Cognitive science
• Ethology
• Evolutionary biology
• Evolutionary medicine
• Functional psychology
• Philosophy of mind
• Population genetics
• Primatology
• Sociobiology
Research topics
• Cultural evolution
• Great ape language
• Missing heritability problem
• Unit of selection
• Coevolution
• Cultural group selection
• Dual inheritance theory
• Fisher's principle
• Group selection
• Hologenome theory
• Lamarckism
• Population
• Punctuated equilibrium
• Recent human evolution
• Species
• Species complex
• Species problem
• Transgenerational epigenetic inheritance
• Trivers–Willard hypothesis
Theoretical positions
• Cultural selection theory
• Determinism/Indeterminism
• Biological determinism
• Connectionism
• Cultural determinism
• Environmental determinism
• Nature versus nurture
• Psychological nativism
• Social constructionism
• Social determinism
• Standard social science model
• Functionalism
• Memetics
• Multilineal evolution
• Neo-Darwinism
• Neoevolutionism
• Sociocultural evolution
• Unilineal evolution
• Evolutionary psychology
• Psychology portal
• Evolutionary biology portal
Wassily Hoeffding
Wassily Hoeffding (June 12, 1914 – February 28, 1991) was a Finnish-born American statistician and probabilist. He was one of the founders of nonparametric statistics, contributing the idea of, and basic results on, U-statistics.[1][2]
Wassily Hoeffding
Born(1914-06-12)June 12, 1914
Mustamäki, Grand Duchy of Finland
DiedFebruary 28, 1991(1991-02-28) (aged 76)
Chapel Hill, North Carolina
NationalityAmerican
Alma materBerlin University
Known forHoeffding's inequality, Hoeffding's lemma
Scientific career
FieldsStatistician
InstitutionsUniversity of North Carolina at Chapel Hill
Doctoral advisorAlfred Klose
Doctoral students
• Donald Burkholder
• Joan R. Rosenblatt
• Nicholas Fisher
• Meyer Dwass
In probability theory, Hoeffding's inequality provides an upper bound on the probability for the sum of random variables to deviate from its expected value.[3]
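As a concrete sketch (the formula below is the standard two-sided bound for the mean of n i.i.d. variables in [0, 1], stated here for illustration rather than taken from this article): P(|mean − E[mean]| ≥ t) ≤ 2·exp(−2nt²). A quick simulation shows the empirical deviation frequency staying below the bound.

```python
import math
import random

# Two-sided Hoeffding bound for the mean of n i.i.d. variables in [0, 1]:
#   P(|mean - E[mean]| >= t) <= 2 * exp(-2 * n * t**2)
def hoeffding_bound(n, t):
    return 2.0 * math.exp(-2.0 * n * t * t)

random.seed(0)
n, p, t, trials = 100, 0.5, 0.1, 10_000
hits = 0
for _ in range(trials):
    mean = sum(random.random() < p for _ in range(n)) / n  # Bernoulli(p) sample mean
    if abs(mean - p) >= t:
        hits += 1
empirical = hits / trials
# empirical is roughly 0.06 here, well below the bound of about 0.27
```

The bound is distribution-free: it depends only on the range of the variables, not on their particular law, which is why the simulated frequency sits comfortably under it.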
Personal life
Hoeffding was born in Mustamäki, Finland (Gorkovskoye, Russia since 1940), although his place of birth is registered as St. Petersburg on his birth certificate. His father was an economist and a disciple of Peter Struve, the Russian social scientist and public figure. His paternal grandparents were Danish and his father's uncle was the Danish philosopher Harald Høffding. His mother, née Wedensky, had studied medicine. Both grandfathers had been engineers. In 1918 the family left Tsarskoye Selo for Ukraine and, after traveling through scenes of civil war, finally left Russia for Denmark in 1920, where Wassily entered school.
In 1924 the family settled in Berlin. Hoeffding obtained his PhD in 1940 at the University of Berlin. He migrated with his mother to the United States in 1946. His younger brother, Oleg, became a military historian in the United States.[4]
Hoeffding's ashes were buried in a small cemetery on land owned by George E. Nicholson, Jr.'s family in Chatham County, NC, about 11 miles south of Chapel Hill, NC.
Work
In 1948, he introduced the concept of U-statistics.
See the collected works of Wassily Hoeffding.[5]
Writings
• Masstabinvariante Korrelationstheorie, 1940
• On the distribution of the rank correlation coefficient t when the variates are not independent in Biometrika, 1947
• A class of statistics with asymptotically normal distribution, 1948
• A nonparametric test for independence, 1948
• The central limit theorem for dependent random variables (with Herbert Robbins), 1948
• "Optimum" nonparametric tests, 1951
• A combinatorial central limit theorem, 1951
• The large-sample power of test based on permutations of observations, 1952
• On the distribution of the expected values of the order statistics, 1953
• The efficiency of tests (with J. R. Rosenblatt), 1955
• On the distribution of the number of successes in independent trials, 1956
• Distinguishability of sets of distributions. (The case of independent and identically distributed random variables.), (with Jacob Wolfowitz), 1958
• Lower bounds for the expected sample size and the average risk of a sequential procedure, 1960
• Probability inequalities for sums of bounded random variables, 1963
See also
• Hoeffding's bounds
• Hoeffding's C1 statistic
• Hoeffding's decomposition
• Hoeffding's independence test
• Hoeffding's inequality
• Hoeffding's lemma
• Hoeffding–Blum–Kiefer–Rosenblatt process
• Terry–Hoeffding test
References
1. Wassily Hoeffding (1948) "A class of statistics with asymptotically normal distributions". Annals of Statistics, 19, 293–325. (Partially reprinted in: Kotz, S., Johnson, N.L. (1992) Breakthroughs in Statistics, Vol I, pp 308–334. Springer-Verlag. ISBN 0-387-94037-5)
2. Sen, P.K (1992) "Introduction to Hoeffding (1948) A Class of Statistics with Asymptotically Normal Distribution". In: Kotz, S., Johnson, N.L. Breakthroughs in Statistics, Vol I, pp 299–307. Springer-Verlag. ISBN 0-387-94037-5.
3. Wassily Hoeffding (1963) Probability inequalities for sums of bounded random variables, Journal of the American Statistical Association, 58 (301), 13–30. (JSTOR)
4. Fisher, Nickolas J; van Zwet, Willem R (2005). Biographic Memoirs Volume 86. The National Academies Press, Washington D.C. doi:10.17226/11429. ISBN 978-0-309-09304-0. Retrieved July 28, 2016.
5. The Collected Works of Wassily Hoeffding (1994), N. I. Fisher and P. K. Sen, eds., Springer-Verlag, New York.
External links
• Wassily Hoeffding at the Mathematics Genealogy Project
Authority control
International
• FAST
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Germany
• Israel
• United States
• Netherlands
• Poland
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
Watchman route problem
The watchman route problem is an optimization problem in computational geometry where the objective is to compute the shortest route a watchman should take to guard an entire area with obstacles, given only a map of the area. The challenge is to make sure the watchman peeks behind every corner and to determine the best order in which to visit the corners. The problem may be solved in polynomial time when the area to be guarded is a simple polygon.[1][2][3] The problem is NP-hard for polygons with holes,[1] but may be approximated in polynomial time by a solution whose length is within a polylogarithmic factor of optimal.[4]
See also
• Art gallery problem, which similarly involves viewing all points of a given area, but with multiple stationary watchmen
References
1. Chin, Wei-Pang; Ntafos, Simeon (1988), "Optimum watchman routes", Information Processing Letters, 28 (1): 39–44, doi:10.1016/0020-0190(88)90141-X, MR 0947253.
2. Carlsson, S.; Jonsson, H.; Nilsson, B. J. (1999), "Finding the shortest watchman route in a simple polygon", Discrete and Computational Geometry, 22 (3): 377–402, doi:10.1007/PL00009467, MR 1706598.
3. Tan, Xuehou (2001), "Fast computation of shortest watchman routes in simple polygons", Information Processing Letters, 77 (1): 27–33, doi:10.1016/S0020-0190(00)00146-0, MR 1813864.
4. Mitchell, Joseph S. B. (2013), "Approximating watchman routes", Proceedings of the Twenty-Fourth Annual ACM–SIAM Symposium on Discrete Algorithms (SODA '13), SIAM, pp. 844–855, doi:10.1137/1.9781611973105.60, ISBN 978-1-611972-51-1.
Waterfall chart
A waterfall chart is a form of data visualization that helps in understanding the cumulative effect of sequentially introduced positive or negative values. These intermediate values can be either time-based or category-based. The waterfall chart is also known as a flying bricks chart or Mario chart due to the apparent suspension of columns (bricks) in mid-air. In finance, it is often referred to as a bridge.
Waterfall charts were popularized by the strategic consulting firm McKinsey & Company in its presentations to clients.[1][2]
Complexity can be added to waterfall charts with multiple total columns and values that cross the axis. Increments and decrements that are sufficiently extreme can cause the cumulative total to fall above and below the axis at various points. Intermediate subtotals, depicted with whole columns, can be added to the graph between floating columns.
Overview
Also known as a bridge or cascade chart, the waterfall chart portrays how an initial value is affected by a series of intermediate positive or negative values. In appearance it is similar to a bar chart with floating columns.
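The arithmetic behind the floating columns can be sketched in a few lines (an illustrative sketch; the function name `waterfall_bricks` and the example figures are hypothetical): a rising column's base is the running total before its value is applied, a falling column's base is the running total after, and each column's height is the absolute value of its delta.

```python
# Compute (base, height) for each floating column of a waterfall chart.
def waterfall_bricks(start, deltas):
    bricks, total = [], start
    for delta in deltas:
        base = total if delta >= 0 else total + delta  # falls drop down to the new total
        bricks.append((base, abs(delta)))
        total += delta
    return bricks, total

# A small revenue bridge: start at 100, then +30, -20, +15.
bricks, final = waterfall_bricks(100, [30, -20, 15])
print(bricks)  # [(100, 30), (110, 20), (110, 15)]
print(final)   # 125
```

Plotting each `(base, height)` pair as a bar offset from the axis by `base` reproduces the suspended-brick appearance described above.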
Applications
A waterfall chart can be used for analytical purposes, especially for understanding or explaining how the quantitative value of an entity changes through a series of increments and decrements. Often, a waterfall or cascade chart is used to show changes in revenue or profit between two time periods.
Waterfall charts can be used for various types of quantitative analysis, ranging from inventory analysis to performance analysis.
Waterfall charts are also commonly used in financial analysis to display how a net value is arrived at through gains and losses over time or between actual and budgeted amounts. Changes in cash flows or income statement line items can also be shown via a waterfall chart. Other non-business applications include tracking demographic and legal activity changes over time.
Several software tools can generate waterfall charts automatically (PlusX, Origin, etc.).
References
1. "How to Create a McKinsey-style waterfall chart". Idea transplant. Archived from the original on 2018-08-17.
2. Ethan M. Rasiel. The McKinsey Way. McGraw–Hill, 1999.
| Wikipedia |
University of Waterloo Faculty of Mathematics
The Faculty of Mathematics is one of six faculties of the University of Waterloo in Waterloo, Ontario, offering more than 500 courses in mathematics, statistics and computer science. The faculty also houses the David R. Cheriton School of Computer Science, formerly the faculty's computer science department. There are more than 31,000 alumni.[2]
Faculty of Mathematics,
University of Waterloo
Mathematics and Computer Building
Type: Faculty
Established: January 1, 1967
Affiliation: University of Waterloo
Dean: Mark Giesbrecht
Undergraduates: 6,936[1]
Postgraduates: 420[1]
Doctoral students: 337[1]
Location: Waterloo, Ontario, Canada
Symbol: Pink tie
Website: uwaterloo.ca/math
History
The faculty was founded on January 1, 1967, a successor to the University of Waterloo's Department of Mathematics, which had grown to be the largest department in the Faculty of Arts under the chairmanship of Ralph Stanton (and included such influential professors as W. T. Tutte).[2] Initially located in the Physics building, the faculty was moved in May 1968 into the newly constructed Mathematics and Computing (MC) Building. Inspired by Stanton's famously gaudy ties, the students draped a large pink tie over the MC Building on the occasion of its opening, which later became a symbol of the faculty.[3]
At the time of its founding, the faculty included five departments: Applied Analysis and Computer Science, Applied Mathematics, Combinatorics and Optimization, Pure Mathematics, and Statistics. In 1975 the Department of Applied Analysis and Computer Science became simply the Department of Computer Science; in 2005 it became the David R. Cheriton School of Computer Science. The Statistics Department also was later renamed the Department of Statistics and Actuarial Science.[2] The Department of Combinatorics and Optimization is the only academic department in the world devoted to combinatorics.[4]
The second building occupied by the Mathematics faculty was the Davis Centre, which was completed in 1988. This building includes numerous offices, along with various lecture halls and meeting rooms. (The Davis Centre is also home to the library originally known as the Engineering, Math, and Science [EMS] Library, which was first housed on the fourth floor of the MC building.)
The Faculty of Mathematics finished construction of a third building, Mathematics 3 (M3), in 2011. This building now houses the Department of Statistics and Actuarial Science and a large lecture hall. An additional building, M4, has been proposed but has yet to be built.
Academics
Degrees
The Faculty of Mathematics grants the BMath (Bachelor of Mathematics) degree for most of its undergraduate programs. Computer Science undergraduates can generally choose between graduating with a BMath or a BCS (Bachelor of Computer Science) degree. The former requires more coursework in mathematics. Specialized degrees exist for the Software Engineering program (the BSE, or Bachelor of Software Engineering) and Computing and Financial Management (BCFM, or Bachelor of Computing and Financial Management). Postgraduate students are generally awarded an MMath (Master of Mathematics) or PhD.
Rankings
In the 2018 QS World University Rankings, the University of Waterloo was ranked 39th globally for Mathematics (and 3rd in Canada) and 31st globally for Computer Science (and 2nd in Canada).[5][6] The University was ranked third in Canada for Mathematics and second in Canada for Computer Science in 2018 by the Maclean's University Rankings.[7][8]
Student life
Students in the Faculty of Mathematics are represented by the Mathematics Society (MathSoc), which represents student interests to the university, operates the Math Coffee and Donut Shop, publishes the faculty newspaper mathNEWS, and runs student services including an exam bank and lounge space.[9]
Pi Day is celebrated by the department in each term: on 14 March (3/14), on 22 July (22/7, Pi Approximation Day), and on 10 November (the 314th day of the year). Typical activities include throwing pie at MathSoc executives and/or popular professors, viewing mathematics-related films, competing in pi recitation contests, and eating pie (on 22/7, cake is served instead, which is approximately pie).
Tie Guard
In a yearly tradition at the University of Waterloo, a group of senior math students volunteers for the position of Tie Guard, selected by the University of Waterloo Federation of Students representatives from the Faculty of Mathematics. The appointed Tie Guard volunteers are expected to be on hand 24 hours a day for the duration of the orientation week, to guard the Faculty's mascot (a 40-foot pink tie which hangs off the side of the building) and to provide first aid and information to incoming students.[3]
The Tie Guard was founded in 1994 after several previous attempts on the Pink Tie resulted in both damaged mascots and injuries to students, the most notorious of which was the Tie Liberation Organization (TLO) kidnapping in 1988. In more recent years the tie guard has expanded and now several students are appointed to the Tie Guard each year. A new pink tie was draped over the Mathematics 3 Building in 2011.
Notable members
• Mark Giesbrecht, Dean[10]
• George Alfred Barnard, Lecturer[11]
• Walter Benz, Professor[12]
• Jonathan Borwein OC FRSC, Researcher (1991–93)[13]
• Timothy Chan, Professor[14]
• C. B. Collins, Professor[15]
• Gordon Cormack[16]
• Paul Cress, Lecturer[17]
• Kenneth Davidson FRSC, Professor[18]
• Jack Edmonds, Professor[19]
• Keith Geddes, Professor[20]
• Ian Goldberg, Assistant Professor[21]
• Ian Goulden FRSC, Professor[22]
• Peter Ladislaw Hammer, Professor[23]
• Hiroshi Haruki, Professor (1966–86)[24]
• Ric Holt, Professor[25]
• David Jackson FRSC, Professor[26]
• Srinivasan Keshav, Associate Professor; Sloan Fellowship (1997–99)[27]
• Murray Klamkin, Professor[28]
• Neal Koblitz, Professor[29]
• Kenneth Mackenzie, Professor[30]
• Alfred Menezes, Professor[31]
• Crispin Nash-Williams, Professor[32]
• Josef Paldus FRSC, Professor (1968–2001)[33]
• Vladimir Platonov, Professor (1993–2001); Humboldt Prize (1993)[34]
• Ronald Read, Professor[35]
• Jeffrey Shallit, Professor[36]
• Doug Stinson, Professor[37]
• W. T. Tutte OC FRS FRSC, Professor (1962–85); CRM-Fields-PIMS Prize (2001)[38]
• Scott Vanstone FRSC, Professor[39]
References
1. "Student Headcounts". Institutional Analysis and Planning. University of Waterloo. 2 June 2015. Retrieved 3 November 2018.
2. "Our history". Faculty of Mathematics. University of Waterloo. 5 December 2014. Retrieved 3 November 2018.
3. "Legend of the Pink Tie". Faculty of Mathematics. University of Waterloo. 17 August 2011. Retrieved 3 November 2018.
4. Stanley, Richard P. (17 June 2021). "Enumerative and Algebraic Combinatorics in the 1960's and 1970's". arXiv:2105.07884 [math.HO].
5. "QS World University Rankings by Subject 2018: Mathematics". QS World University Rankings. Retrieved 3 November 2018.
6. "QS World University Rankings by Subject 2018: Computer Science & Informational Systems". QS World University Rankings. Retrieved 3 November 2018.
7. "Best mathematics universities in Canada: 2018 rankings". Maclean's. 30 November 2017.
8. "Best computer science universities in Canada: 2018 ranking". Maclean's. 30 November 2017.
9. "MathSoc Services". University of Waterloo Mathematics Society. Retrieved 26 January 2016.
10. "Mark Giesbrecht". University of Waterloo. 9 July 2020.
11. "Senate". University of Waterloo. Archived from the original on September 23, 2010. Retrieved December 5, 2008.
12. Benz, Walter (2005). Classical Geometries in Modern Contexts. Springer. ISBN 978-3-7643-7371-9.
13. Shore, Valerie (1994-01-05). "SFU Week". UW Gazette. University of Waterloo. Retrieved 5 December 2008.
14. "Timothy M. Chan". University of Waterloo. Retrieved 14 April 2008.
15. "Recent Graduates". University of Waterloo. Retrieved 5 December 2008.
16. "Board sets confidential session". University of Waterloo. 7 March 2001. Retrieved 14 April 2008.
17. "History of Computer Science at Waterloo". University of Waterloo. Archived from the original on April 20, 2008. Retrieved April 14, 2008.
18. "Convocation focuses on arts today". University of Waterloo. 14 June 2007. Retrieved 14 April 2008.
19. "Just about ready for Campus Day". University of Waterloo. 5 March 2001. Retrieved 14 April 2008.
20. "Keith O Geddes". University of Waterloo. Retrieved 14 April 2008.
21. "Faculty talk of unionization". University of Waterloo. 10 January 1996. Retrieved 14 April 2008.
22. "Ian P. Goulden". University of Waterloo. Retrieved 14 April 2008.
23. "Peter Ladislaw Hammer (1936-2006)". Rutgers University. Retrieved 5 December 2008.
24. "An outward-looking perspective". University of Waterloo. 16 September 1997. Retrieved 14 April 2008.
25. "A week filled with research, pioneers, donations and trick or eating at UW". University of Waterloo. Retrieved 14 April 2008.
26. "David M. Jackson". University of Waterloo. Retrieved 14 April 2008.
27. "Task force will look at wildlife". University of Waterloo. 27 November 2006. Retrieved 14 April 2008.
28. Alexanderson, Gerald L. (January 1988). "Award for Distinguished Service to Professor Murray Klamkin". American Mathematical Monthly. 95 (1): 1, 3–4. doi:10.1080/00029890.1988.11971959. JSTOR 2323439.
29. "CACR: 1998 Seminars". University of Waterloo. Retrieved 5 December 2008.
30. MacKenzie, Kenneth D. (January 2000). "Processes and their Frameworks". Management Science. 46 (1): 110–125. doi:10.1287/mnsc.46.1.110.15126.
31. "This year's star lecturer named". University of Waterloo. 5 October 1999. Retrieved 2008-04-14.
32. "Open letter". UW Gazette. 20 September 1995. Retrieved 5 December 2008.
33. Elve, Barbara (9 August 2005). "Grants help 'internationalize' courses". University of Waterloo.
34. "Statement on Prof. Vladimir Platonov". University of Waterloo. 2001-02-07. Retrieved 14 April 2008.
35. "Ronald Read". The Mathematics Genealogy Project. Retrieved 5 December 2008.
36. "What's in the public interest?". University of Waterloo. 13 March 1998. Retrieved 14 April 2008.
37. "Doug Stinson's Home Page". University of Waterloo. Retrieved 14 April 2008.
38. "Biography of Professor Tutte". University of Waterloo Faculty of Mathematics. Retrieved 21 September 2014.
39. "Farms and museums conference". University of Waterloo. 19 June 1998. Retrieved 14 April 2008.
External links
Wikimedia Commons has media related to Mathematics & Computer Building (University of Waterloo).
• Faculty of Mathematics website
• Mathematics Society of the University of Waterloo (MathSoc)
University of Waterloo
Faculties
• Health
• Arts
• Engineering
• Environment
• Mathematics
• Science
Schools
• Accounting and Finance
• Architecture
• Computer Science
• Entrepreneurship and Business
• Environment, Enterprise and Development
• Environment, Resources and Sustainability
• Interaction Design and Business
• International Affairs
• Optometry and Vision Science
• Pharmacy
• Planning
• Public Health and Health Systems
• Social Work
Colleges
• Conrad Grebel
• Renison
• St. Jerome's
• United College, Waterloo
Satellite campuses
• Kitchener
• Cambridge
• Stratford
Organizations
• Museum of Vision Science
• Centre for Education in Mathematics and Computing
• Centre for Applied Cryptographic Research
• Institute for Quantum Computing
• Elliott Avedon Museum and Archive of Games (former)
Student life
• Waterloo Undergraduate Student Association
• University of Waterloo Engineering Society
• CKMS-FM (former)
Athletics
• Waterloo Warriors
• Men's basketball
• Football
• Women's ice hockey
• Warrior Field
Chancellors
• Dana Porter
• Ira G. Needles
• Carl Arthur Pollock
• Josef Kates
• J. Page Wadsworth
• Sylvia Ostry
• Val O'Donovan
• Mike Lazaridis
• Prem Watsa
• Dominic Barton
Presidents
• Gerry Hagey
• Howard Petch (pro tem)
• Burt Matthews
• Douglas T. Wright
• James Downey
• David Lloyd Johnston
• Feridun Hamdullahpur
• Vivek Goel
Other
• Notable alumni and faculty
• Midnight Sun
• Transit station
43.4721°N 80.5439°W / 43.4721; -80.5439
Authority control
International
• ISNI
• VIAF
National
• United States
Mathematics in Canada
Organizations
• Banff International Research Station
• Canadian Mathematical Society
• Centre de Recherches Mathématiques
• Fields Institute for Research in Mathematical Sciences
• Pacific Institute for the Mathematical Sciences
• Tutte Institute for Mathematics and Computing
• Centre for Education in Mathematics and Computing
Mathematics departments
• McGill University
• University of Toronto
• University of Waterloo
Journals
• Ars Combinatoria
• Canadian Journal of Mathematics
• Canadian Mathematical Bulletin
• Crux Mathematicorum
Competitions
• Canadian Mathematical Olympiad
• Canadian Open Mathematics Challenge
• Canadian Mathematics Competition
Awards
• Adrien Pouliot Award
• Coxeter–James Prize
• CRM-Fields-PIMS prize
• Jeffery–Williams Prize
• Krieger–Nelson Prize
| Wikipedia |
Waterman polyhedron
In geometry, the Waterman polyhedra are a family of polyhedra discovered around 1990 by the mathematician Steve Waterman. A Waterman polyhedron is created by packing spheres according to the cubic close(st) packing (CCP), also known as the face-centered cubic (fcc) packing, sweeping away the spheres that are farther from the center than a defined radius,[1] and then taking the convex hull of the remaining sphere centers.
• Cubic Close(st) Packed spheres with radius √24
• Corresponding Waterman polyhedron W24 Origin 1
Waterman polyhedra form a vast family of polyhedra. Some of them have a number of nice properties such as multiple symmetries, or interesting and regular shapes. Others are just a collection of faces formed from irregular convex polygons.
The most popular Waterman polyhedra are those with centers at the point (0,0,0) and built out of hundreds of polygons. Such polyhedra resemble spheres. In fact, the more faces a Waterman polyhedron has, the more it resembles its circumscribed sphere in volume and total area.
With each point of 3D space we can associate a family of Waterman polyhedra with different values of the radius of the circumscribed sphere. Therefore, from a mathematical point of view, we can consider Waterman polyhedra as a four-parameter family W(x, y, z, r), where x, y, z are the coordinates of a point in 3D and r is a number greater than 1.[2]
Seven origins of cubic close(st) packing (CCP)
There can be seven origins defined in CCP,[3] where n = {1, 2, 3, …}:
• Origin 1: offset 0,0,0, radius ${\sqrt {2n}}$
• Origin 2: offset 1/2,1/2,0, radius ${\tfrac {1}{2}}{\sqrt {2+4n}}$
• Origin 3: offset 1/3,1/3,2/3, radius ${\tfrac {1}{3}}{\sqrt {6(n+1)}}$
• Origin 3*: offset 1/3,1/3,1/3, radius ${\tfrac {1}{3}}{\sqrt {3+6n}}$
• Origin 4: offset 1/2,1/2,1/2, radius ${\tfrac {1}{2}}{\sqrt {3+8(n-1)}}$
• Origin 5: offset 0,0,1/2, radius ${\tfrac {1}{2}}{\sqrt {1+4n}}$
• Origin 6: offset 1,0,0, radius ${\sqrt {1+2(n-1)}}$
Depending on the origin of the sweeping, a different shape and resulting polyhedron are obtained.
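For origin 1 this is easy to check computationally. A minimal sketch, assuming the standard realization of CCP as the integer points with even coordinate sum (so the nearest-neighbour distance is √2):

```python
# Collect the CCP/fcc sphere centers (integer points with even coordinate
# sum) that lie within squared radius r2 of the origin-1 center (0,0,0).
def fcc_points_within(r2):
    m = int(r2 ** 0.5) + 1
    return [(x, y, z)
            for x in range(-m, m + 1)
            for y in range(-m, m + 1)
            for z in range(-m, m + 1)
            if (x + y + z) % 2 == 0 and x*x + y*y + z*z <= r2]

# n = 1, radius sqrt(2): 12 points besides the center -- the vertices of
# W1 O1, identified in the comparison below as a cuboctahedron.
shell1 = [p for p in fcc_points_within(2) if p != (0, 0, 0)]

# n = 2, radius 2: the 6 points (+-2,0,0), (0,+-2,0), (0,0,+-2) form the
# hull of W2 O1, an octahedron; the 12 closer points lie on its edges.
shell2 = [p for p in fcc_points_within(4) if sum(c*c for c in p) == 4]
```

Taking the convex hull of such a point set (for example with `scipy.spatial.ConvexHull`) then yields the corresponding Waterman polyhedron.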
Relation to Platonic and Archimedean solids
Some Waterman polyhedra create Platonic solids and Archimedean solids. For this comparison of Waterman polyhedra they are normalized, e.g. W2 O1 has a different size or volume than W1 O6, but has the same form as an octahedron.
Platonic solids
• Tetrahedron: W1 O3*, W2 O3*, W1 O3, W1 O4
• Octahedron: W2 O1, W1 O6
• Cube: W2 O6
• Icosahedron and dodecahedron have no representation as Waterman polyhedra.
Archimedean solids
• Cuboctahedron: W1 O1, W4 O1
• Truncated octahedron: W10 O1
• Truncated tetrahedron: W4 O3, W2 O4
• The other Archimedean solids have no representation as Waterman polyhedra.
The W7 O1 might be mistaken for a truncated cuboctahedron, and likewise W3 O1 = W12 O1 for a rhombicuboctahedron, but those Waterman polyhedra have two edge lengths and therefore do not qualify as Archimedean solids.
Generalized Waterman polyhedra
Generalized Waterman polyhedra are defined as the convex hull derived from the point set of any spherical extraction from a regular lattice.
Included is a detailed analysis of the following 10 lattices – bcc, cuboctahedron, diamond, fcc, hcp, truncated octahedron, rhombic dodecahedron, simple cubic, truncated tet tet, truncated tet truncated octahedron cuboctahedron.
Each of the 10 lattices was examined to isolate those particular origin points that manifested a unique polyhedron and possessed some minimal symmetry. From a viable origin point within a lattice there exists an unlimited series of polyhedra: given its proper sweep interval, there is a one-to-one correspondence between each integer value and a generalized Waterman polyhedron.
Notes
1. Popko, Edward S. (2012). Divided Spheres: Geodesics and the Orderly Subdivision of the Sphere. CRC Press. pp. 174–177. ISBN 9781466504295.
2. Visualizing Waterman Polyhedra with MuPAD by M. Majewski
3. 7 Origins of CCP Waterman polyhedra by Mark Newbold
External links
• Steve Waterman's Homepage
• Waterman Polyhedra Java applet by Mark Newbold
• Maurice Starck's write-up
• hand-made models by Magnus Wenninger
• write-up by Paul Bourke
• on-line generator by Paul Bourke
• program to make Waterman polyhedron by Adrian Rossiter in Antiprism
• Waterman projection and write up by Carlos Furiti
• rotating globe by Izidor Hafner
• real time winds and temperature on Waterman projection by Cameron Beccario
• Solar Termination (Waterman) by Mike Bostock
• interactive Waterman butterfly map by Jason Davies
• write-up by Maurice Starck
• first 1000 Waterman polyhedra and sphere clusters by Nemo Thorx
• OEIS sequence A119870 (Number of vertices of the root-n Waterman polyhedron)
• Steve Waterman's Waterman polyhedron (WP)
• Generalized Waterman polyhedron by Ed Pegg jr of Wolfram
• various Waterman sphere clusters by Ed Pegg jr of Wolfram
• app to make 4d waterman polyhedron in Great Stella by Rob Webb
• Waterman polyhedron app in Matlab needs a workaround as shown on the following reference page
• Waterman polyhedron in Mupad
| Wikipedia |
Watkins snark
In the mathematical field of graph theory, the Watkins snark is a snark with 50 vertices and 75 edges.[1][2] It was discovered by John J. Watkins in 1989.[3]
Watkins snark
The Watkins snark
Named after: J. J. Watkins
Vertices: 50
Edges: 75
Radius: 7
Diameter: 7
Girth: 5
Automorphisms: 5
Chromatic number: 3
Chromatic index: 4
Book thickness: 3
Queue number: 2
Properties: Snark
Table of graphs and parameters
As a snark, the Watkins graph is a connected, bridgeless cubic graph with chromatic index equal to 4. The Watkins snark is also non-planar and non-hamiltonian. It has book thickness 3 and queue number 2.[4]
Another well known snark on 50 vertices is the Szekeres snark, the fifth known snark, discovered by George Szekeres in 1973.[5]
Gallery
• The chromatic number of the Watkins snark is 3.
• The chromatic index of the Watkins snark is 4.
Edges
[[1,2], [1,4], [1,15], [2,3], [2,8], [3,6], [3,37], [4,6], [4,7], [5,10], [5,11], [5,22], [6,9], [7,8], [7,12], [8,9], [9,14], [10,13], [10,17], [11,16], [11,18], [12,14], [12,33], [13,15], [13,16], [14,20], [15,21], [16,19], [17,18], [17,19], [18,30], [19,21], [20,24], [20,26], [21,50], [22,23], [22,27], [23,24], [23,25], [24,29], [25,26], [25,28], [26,31], [27,28], [27,48], [28,29], [29,31], [30,32], [30,36], [31,36], [32,34], [32,35], [33,34], [33,40], [34,41], [35,38], [35,40], [36,38], [37,39], [37,42], [38,41], [39,44], [39,46], [40,46], [41,46], [42,43], [42,45], [43,44], [43,49], [44,47], [45,47], [45,48], [47,50], [48,49], [49,50]]
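As a quick sanity check, the adjacency data above can be verified to describe a cubic graph on 50 vertices with 75 edges; a sketch in Python (the edge list is copied from above):

```python
# Edge list of the Watkins snark, as given above.
edges = [(1,2),(1,4),(1,15),(2,3),(2,8),(3,6),(3,37),(4,6),(4,7),(5,10),
         (5,11),(5,22),(6,9),(7,8),(7,12),(8,9),(9,14),(10,13),(10,17),
         (11,16),(11,18),(12,14),(12,33),(13,15),(13,16),(14,20),(15,21),
         (16,19),(17,18),(17,19),(18,30),(19,21),(20,24),(20,26),(21,50),
         (22,23),(22,27),(23,24),(23,25),(24,29),(25,26),(25,28),(26,31),
         (27,28),(27,48),(28,29),(29,31),(30,32),(30,36),(31,36),(32,34),
         (32,35),(33,34),(33,40),(34,41),(35,38),(35,40),(36,38),(37,39),
         (37,42),(38,41),(39,44),(39,46),(40,46),(41,46),(42,43),(42,45),
         (43,44),(43,49),(44,47),(45,47),(45,48),(47,50),(48,49),(49,50)]

# Tally vertex degrees: a cubic graph has degree 3 at every vertex.
degree = {}
for u, v in edges:
    degree[u] = degree.get(u, 0) + 1
    degree[v] = degree.get(v, 0) + 1

# 75 edges, 50 vertices, all of degree 3 (so the graph is cubic)
```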
References
1. Weisstein, Eric W. "Watkins Snark". MathWorld.
2. Watkins, J. J. and Wilson, R. J. "A Survey of Snarks." In Graph Theory, Combinatorics, and Applications (Ed. Y. Alavi, G. Chartrand, O. R. Oellermann, and A. J. Schwenk). New York: Wiley, pp. 1129-1144, 1991
3. Watkins, J. J. "Snarks." Ann. New York Acad. Sci. 576, 606-622, 1989.
4. Wolz, Jessica; Engineering Linear Layouts with SAT. Master Thesis, University of Tübingen, 2018
5. Szekeres, G. (1973). "Polyhedral decompositions of cubic graphs". Bull. Austral. Math. Soc. 8 (3): 367–387. doi:10.1017/S0004972700042660.
| Wikipedia |
Watson's lemma
In mathematics, Watson's lemma, proved by G. N. Watson (1918, p. 133), has significant application in the theory of the asymptotic behavior of integrals.
Statement of the lemma
Let $0<T\leq \infty $ be fixed. Assume $\varphi (t)=t^{\lambda }\,g(t)$, where $g(t)$ has an infinite number of derivatives in the neighborhood of $t=0$, with $g(0)\neq 0$, and $\lambda >-1$.
Suppose, in addition, either that
$|\varphi (t)|<Ke^{bt}\ \forall t>0,$
where $K,b$ are independent of $t$, or that
$\int _{0}^{T}|\varphi (t)|\,\mathrm {d} t<\infty .$
Then it is true, for all positive $x$, that
$\left|\int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t\right|<\infty $
and that the following asymptotic equivalence holds:
$\int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t\sim \ \sum _{n=0}^{\infty }{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}},\ \ (x>0,\ x\rightarrow \infty ).$
See, for instance, Watson (1918) for the original proof or Miller (2006) for a more recent development.
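As a numeric illustration (the choice of integrand is ours, not from the sources), take $\lambda =0$ and $g(t)=1/(1+t)$, so $g^{(n)}(0)=(-1)^{n}n!$ and the lemma predicts that the integral behaves like $\sum _{n}(-1)^{n}n!/x^{n+1}$ for large $x$:

```python
import math

def I(x, T=4.0, steps=100000):
    # plain trapezoidal quadrature of the integral from 0 to T of
    # exp(-x*t)/(1+t) dt; the tail beyond T = 4 is below exp(-4x)
    h = T / steps
    total = 0.5 * (1.0 + math.exp(-x * T) / (1.0 + T))
    for k in range(1, steps):
        t = k * h
        total += math.exp(-x * t) / (1.0 + t)
    return h * total

x = 20.0
# first four terms of the asymptotic series predicted by Watson's lemma
series = sum((-1)**n * math.factorial(n) / x**(n + 1) for n in range(4))
# |I(x) - series| is below the first omitted term, 4!/x^5 = 7.5e-6
```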
Proof
We will prove the version of Watson's lemma which assumes that $|\varphi (t)|$ has at most exponential growth as $t\to \infty $. The basic idea behind the proof is that we will approximate $g(t)$ by finitely many terms of its Taylor series. Since the derivatives of $g$ are only assumed to exist in a neighborhood of the origin, we will essentially proceed by removing the tail of the integral, applying Taylor's theorem with remainder in the remaining small interval, then adding the tail back on in the end. At each step we will carefully estimate how much we are throwing away or adding on. This proof is a modification of the one found in Miller (2006).
Let $0<T\leq \infty $ and suppose that $\varphi $ is a measurable function of the form $\varphi (t)=t^{\lambda }g(t)$, where $\lambda >-1$ and $g$ has an infinite number of continuous derivatives in the interval $[0,\delta ]$ for some $0<\delta <T$, and that $|\varphi (t)|\leq Ke^{bt}$ for all $\delta \leq t\leq T$, where the constants $K$ and $b$ are independent of $t$.
We can show that the integral is finite for $x$ large enough by writing
$(1)\quad \int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t=\int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t+\int _{\delta }^{T}e^{-xt}\varphi (t)\,\mathrm {d} t$
and estimating each term.
For the first term we have
$\left|\int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t\right|\leq \int _{0}^{\delta }e^{-xt}|\varphi (t)|\,\mathrm {d} t\leq \int _{0}^{\delta }|\varphi (t)|\,\mathrm {d} t$
for $x\geq 0$, where the last integral is finite by the assumptions that $g$ is continuous on the interval $[0,\delta ]$ and that $\lambda >-1$. For the second term we use the assumption that $\varphi $ is exponentially bounded to see that, for $x>b$,
${\begin{aligned}\left|\int _{\delta }^{T}e^{-xt}\varphi (t)\,\mathrm {d} t\right|&\leq \int _{\delta }^{T}e^{-xt}|\varphi (t)|\,\mathrm {d} t\\&\leq K\int _{\delta }^{T}e^{(b-x)t}\,\mathrm {d} t\\&\leq K\int _{\delta }^{\infty }e^{(b-x)t}\,\mathrm {d} t\\&=K\,{\frac {e^{(b-x)\delta }}{x-b}}.\end{aligned}}$
The finiteness of the original integral then follows from applying the triangle inequality to $(1)$.
We can deduce from the above calculation that
$(2)\quad \int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t=\int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t+O\left(x^{-1}e^{-\delta x}\right)$
as $x\to \infty $.
By appealing to Taylor's theorem with remainder we know that, for each integer $N\geq 0$,
$g(t)=\sum _{n=0}^{N}{\frac {g^{(n)}(0)}{n!}}\,t^{n}+{\frac {g^{(N+1)}(t^{*})}{(N+1)!}}\,t^{N+1}$
for $0\leq t\leq \delta $, where $0\leq t^{*}\leq t$. Plugging this in to the first term in $(2)$ we get
${\begin{aligned}(3)\quad \int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t&=\int _{0}^{\delta }e^{-xt}t^{\lambda }g(t)\,\mathrm {d} t\\&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)}{n!}}\int _{0}^{\delta }t^{\lambda +n}e^{-xt}\,\mathrm {d} t+{\frac {1}{(N+1)!}}\int _{0}^{\delta }g^{(N+1)}(t^{*})\,t^{\lambda +N+1}e^{-xt}\,\mathrm {d} t.\end{aligned}}$
To bound the term involving the remainder we use the assumption that $g^{(N+1)}$ is continuous on the interval $[0,\delta ]$, and in particular it is bounded there. As such we see that
${\begin{aligned}\left|\int _{0}^{\delta }g^{(N+1)}(t^{*})\,t^{\lambda +N+1}e^{-xt}\,\mathrm {d} t\right|&\leq \sup _{t\in [0,\delta ]}\left|g^{(N+1)}(t)\right|\int _{0}^{\delta }t^{\lambda +N+1}e^{-xt}\,\mathrm {d} t\\&<\sup _{t\in [0,\delta ]}\left|g^{(N+1)}(t)\right|\int _{0}^{\infty }t^{\lambda +N+1}e^{-xt}\,\mathrm {d} t\\&=\sup _{t\in [0,\delta ]}\left|g^{(N+1)}(t)\right|\,{\frac {\Gamma (\lambda +N+2)}{x^{\lambda +N+2}}}.\end{aligned}}$
Here we have used the fact that
$\int _{0}^{\infty }t^{a}e^{-xt}\,\mathrm {d} t={\frac {\Gamma (a+1)}{x^{a+1}}}$
if $x>0$ and $a>-1$, where $\Gamma $ is the gamma function.
From the above calculation we see from $(3)$ that
$(4)\quad \int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t=\sum _{n=0}^{N}{\frac {g^{(n)}(0)}{n!}}\int _{0}^{\delta }t^{\lambda +n}e^{-xt}\,\mathrm {d} t+O\left(x^{-\lambda -N-2}\right)$
as $x\to \infty $.
We will now add the tails on to each integral in $(4)$. For each $n$ we have
${\begin{aligned}\int _{0}^{\delta }t^{\lambda +n}e^{-xt}\,\mathrm {d} t&=\int _{0}^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t-\int _{\delta }^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t\\[5pt]&={\frac {\Gamma (\lambda +n+1)}{x^{\lambda +n+1}}}-\int _{\delta }^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t,\end{aligned}}$
and we will show that the remaining integrals are exponentially small. Indeed, if we make the change of variables $t=s+\delta $ we get
${\begin{aligned}\int _{\delta }^{\infty }t^{\lambda +n}e^{-xt}\,\mathrm {d} t&=\int _{0}^{\infty }(s+\delta )^{\lambda +n}e^{-x(s+\delta )}\,\mathrm {d} s\\[5pt]&=e^{-\delta x}\int _{0}^{\infty }(s+\delta )^{\lambda +n}e^{-xs}\,\mathrm {d} s\\[5pt]&\leq e^{-\delta x}\int _{0}^{\infty }(s+\delta )^{\lambda +n}e^{-s}\,\mathrm {d} s\end{aligned}}$
for $x\geq 1$, so that
$\int _{0}^{\delta }t^{\lambda +n}e^{-xt}\,\mathrm {d} t={\frac {\Gamma (\lambda +n+1)}{x^{\lambda +n+1}}}+O\left(e^{-\delta x}\right){\text{ as }}x\to \infty .$
If we substitute this last result into $(4)$ we find that
${\begin{aligned}\int _{0}^{\delta }e^{-xt}\varphi (t)\,\mathrm {d} t&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}+O\left(e^{-\delta x}\right)+O\left(x^{-\lambda -N-2}\right)\\&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}+O\left(x^{-\lambda -N-2}\right)\end{aligned}}$
as $x\to \infty $. Finally, substituting this into $(2)$ we conclude that
${\begin{aligned}\int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}+O\left(x^{-\lambda -N-2}\right)+O\left(x^{-1}e^{-\delta x}\right)\\&=\sum _{n=0}^{N}{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}+O\left(x^{-\lambda -N-2}\right)\end{aligned}}$
as $x\to \infty $.
Since this last expression is true for each integer $N\geq 0$ we have thus shown that
$\int _{0}^{T}e^{-xt}\varphi (t)\,\mathrm {d} t\sim \sum _{n=0}^{\infty }{\frac {g^{(n)}(0)\ \Gamma (\lambda +n+1)}{n!\ x^{\lambda +n+1}}}$
as $x\to \infty $, where the infinite series is interpreted as an asymptotic expansion of the integral in question.
Example
When $0<a<b$, the confluent hypergeometric function of the first kind has the integral representation
${}_{1}F_{1}(a,b,x)={\frac {\Gamma (b)}{\Gamma (a)\Gamma (b-a)}}\int _{0}^{1}e^{xt}t^{a-1}(1-t)^{b-a-1}\,\mathrm {d} t,$
where $\Gamma $ is the gamma function. The change of variables $t=1-s$ puts this into the form
${}_{1}F_{1}(a,b,x)={\frac {\Gamma (b)}{\Gamma (a)\Gamma (b-a)}}\,e^{x}\int _{0}^{1}e^{-xs}(1-s)^{a-1}s^{b-a-1}\,ds,$
which is now amenable to the use of Watson's lemma. Taking $\lambda =b-a-1$ and $g(s)=(1-s)^{a-1}$, Watson's lemma tells us that
$\int _{0}^{1}e^{-xs}(1-s)^{a-1}s^{b-a-1}\,ds\sim \Gamma (b-a)x^{a-b}\quad {\text{as }}x\to \infty {\text{ with }}x>0,$
which allows us to conclude that
${}_{1}F_{1}(a,b,x)\sim {\frac {\Gamma (b)}{\Gamma (a)}}\,x^{a-b}e^{x}\quad {\text{as }}x\to \infty {\text{ with }}x>0.$
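For $a=1$, $b=2$ the function has the elementary closed form ${}_{1}F_{1}(1,2,x)=(e^{x}-1)/x$, which makes the conclusion easy to verify numerically (the check below is ours):

```python
import math

def ratio(x):
    exact = (math.exp(x) - 1.0) / x      # 1F1(1, 2, x) in closed form
    predicted = math.exp(x) / x          # Gamma(2)/Gamma(1) * x^(1-2) * e^x
    return exact / predicted

# ratio(x) = 1 - exp(-x), so it tends to 1 as x grows:
# ratio(5) ~ 0.9933, ratio(10) ~ 0.99995, ratio(30) ~ 1 - 9.4e-14
```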
References
• Miller, P.D. (2006), Applied Asymptotic Analysis, Providence, RI: American Mathematical Society, p. 467, ISBN 978-0-8218-4078-8.
• Watson, G. N. (1918), "The harmonic functions associated with the parabolic cylinder", Proceedings of the London Mathematical Society, vol. 2, no. 17, pp. 116–148, doi:10.1112/plms/s2-17.1.116.
• Ablowitz, M. J., Fokas, A. S. (2003). Complex variables: introduction and applications. Cambridge University Press.
| Wikipedia |
Watt's curve
In mathematics, Watt's curve is a tricircular plane algebraic curve of degree six. It is generated by two circles of radius b with centers distance 2a apart (taken to be at (±a, 0)). A line segment of length 2c attaches to a point on each of the circles, and the midpoint of the line segment traces out the Watt curve as the circles rotate partially back and forth or completely around. It arose in connection with James Watt's pioneering work on the steam engine.
The equation of the curve can be given in polar coordinates as
$r^{2}=b^{2}-\left[a\sin \theta \pm {\sqrt {c^{2}-a^{2}\cos ^{2}\theta }}\right]^{2}.$
Derivation
Polar coordinates
The polar equation for the curve can be derived as follows:[1] Working in the complex plane, let the centers of the circles be at $a$ and $-a$, and the connecting segment have endpoints at $-a+be^{i\lambda }$ and $a+be^{i\rho }$. Let the angle of inclination of the segment be $\psi $ with its midpoint at $re^{i\theta }$. Then the endpoints are also given by $re^{i\theta }\pm ce^{i\psi }$. Setting expressions for the same points equal to each other gives
$a+be^{i\rho }=re^{i\theta }+ce^{i\psi }.\,$
$-a+be^{i\lambda }=re^{i\theta }-ce^{i\psi }\,$
Add these and divide by two to get
$re^{i\theta }={\tfrac {b}{2}}(e^{i\rho }+e^{i\lambda })=b\cos({\tfrac {\rho -\lambda }{2}})e^{i{\tfrac {\rho +\lambda }{2}}}.$
Comparing radii and arguments gives
$r=b\cos \alpha ,\ \theta ={\tfrac {\rho +\lambda }{2}}\ {\mbox{where}}\ \alpha ={\tfrac {\rho -\lambda }{2}}.$
Similarly, subtracting the first two equations and dividing by 2 gives
$ce^{i\psi }-a={\tfrac {b}{2}}(e^{i\rho }-e^{i\lambda })=ib\sin \alpha e^{i\theta }.$
Write
$a=a\cos \theta \ e^{i\theta }-ia\sin \theta \ e^{i\theta }.\,$
Then
$ce^{i\psi }=ib\sin \alpha e^{i\theta }+a\cos \theta \ e^{i\theta }-ia\sin \theta \ e^{i\theta }=(a\cos \theta \ +i(b\sin \alpha -a\sin \theta ))e^{i\theta },$
$c^{2}=a^{2}\cos ^{2}\theta +(b\sin \alpha -a\sin \theta )^{2},\,$
$b\sin \alpha =a\sin \theta \pm {\sqrt {c^{2}-a^{2}\cos ^{2}\theta }},\,$
$r^{2}=b^{2}\cos ^{2}\alpha =b^{2}-b^{2}\sin ^{2}\alpha =b^{2}-\left[a\sin \theta \pm {\sqrt {c^{2}-a^{2}\cos ^{2}\theta }}\right]^{2}.\,$
Cartesian coordinates
Expanding the polar equation gives
$r^{2}=b^{2}-(a^{2}\sin ^{2}\theta \ +c^{2}-a^{2}\cos ^{2}\theta \pm 2a\sin \theta {\sqrt {c^{2}-a^{2}\cos ^{2}\theta }}),\,$
$r^{2}-a^{2}-b^{2}+c^{2}+2a^{2}\sin ^{2}\theta =\pm 2a\sin \theta {\sqrt {c^{2}-a^{2}\cos ^{2}\theta }},\,$
$(r^{2}-a^{2}-b^{2}+c^{2})^{2}+4a^{2}(r^{2}-a^{2}-b^{2}+c^{2})\sin ^{2}\theta +4a^{4}\sin ^{4}\theta =4a^{2}\sin ^{2}\theta (c^{2}-a^{2}\cos ^{2}\theta ),\,$
$(r^{2}-a^{2}-b^{2}+c^{2})^{2}+4a^{2}(r^{2}-b^{2})\sin ^{2}\theta =0,\,$
$(x^{2}+y^{2})(x^{2}+y^{2}-a^{2}-b^{2}+c^{2})^{2}+4a^{2}y^{2}(x^{2}+y^{2}-b^{2})=0.\,$
Letting $d^{2}=a^{2}+b^{2}-c^{2}$ simplifies this to
$(x^{2}+y^{2})(x^{2}+y^{2}-d^{2})^{2}+4a^{2}y^{2}(x^{2}+y^{2}-b^{2})=0.\,$
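Since the Cartesian equation was obtained algebraically from the polar one, any point generated from the polar equation should satisfy it. The sketch below checks one such point numerically; the values of a, b, c are arbitrary choices for illustration only.

```python
import math

a, b, c = 2.0, 3.0, 2.0               # illustrative half-separation, rod length, radius

def polar_r(theta, sign=1.0):
    """r from r^2 = b^2 - (a sin(theta) +/- sqrt(c^2 - a^2 cos^2(theta)))^2."""
    s = math.sqrt(c ** 2 - (a * math.cos(theta)) ** 2)
    return math.sqrt(b ** 2 - (a * math.sin(theta) + sign * s) ** 2)

def cartesian_lhs(x, y):
    """(x^2+y^2)(x^2+y^2-a^2-b^2+c^2)^2 + 4 a^2 y^2 (x^2+y^2-b^2)."""
    q = x * x + y * y
    return q * (q - a * a - b * b + c * c) ** 2 + 4 * a * a * y * y * (q - b * b)

theta = 0.5
r = polar_r(theta)
x, y = r * math.cos(theta), r * math.sin(theta)
print(abs(cartesian_lhs(x, y)) < 1e-9)  # True: the point lies on the curve
```

The residual is zero up to floating-point rounding, confirming the two forms agree on this branch.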
Form of the curve
The construction requires a quadrilateral with sides 2a, b, 2c, b. Any side must be less than the sum of the remaining sides, so the curve is empty (at least in the real plane) unless a<b+c and c<b+a.
The curve has a crossing point at the origin if there is a triangle with sides a, b and c. Given the previous conditions, this means that the curve crosses the origin if and only if b<a+c. If b=a+c then two branches of the curve meet at the origin with a common vertical tangent, making it a quadruple point.
Given b<a+c, the shape of the curve is determined by the relative magnitude of b and d. If d is imaginary, that is if $a^{2}+b^{2}<c^{2}$, then the curve has the form of a figure eight. If d is 0 then the curve is a figure eight with two branches of the curve having a common horizontal tangent at the origin. If 0<d<b then the curve has two additional double points at ±d and the curve crosses itself at these points. The overall shape of the curve is pretzel-like in this case. If d=b then a=c and the curve decomposes into a circle of radius b and a lemniscate of Booth, a figure eight shaped curve. A special case of this is a=c, $b={\sqrt {2}}c$, which produces the lemniscate of Bernoulli. Finally, if d>b then the points ±d are still solutions to the Cartesian equation of the curve, but the curve does not cross these points and they are acnodes. The curve again has a figure eight shape, though the shape is distorted if d is close to b.
Given b>a+c, the shape of the curve is determined by the relative sizes of a and c. If a<c then the curve has the form of two loops that cross each other at ±d. If a=c then the curve decomposes into a circle of radius b and an oval of Booth. If a>c then the curve does not cross the x-axis at all and consists of two flattened ovals.[2]
Watt's linkage
When the curve crosses the origin, the origin is a point of inflection and therefore has contact of order 3 with a tangent. However, if $a^{2}=b^{2}+c^{2}$ (which is the case if the triangle with sides $a$, $b$ and $c$ is a right triangle) then the curve has contact of order 5 with the tangent; in other words, the curve is a close approximation of a straight line. This is the basis for Watt's linkage.
See also
• Four-bar linkage
• Watt's linkage
References
1. See Catalan and Rutter
2. Encyclopédie des Formes Mathématiques Remarquables page for section.
External links
• Weisstein, Eric W. "Watt's Curve". MathWorld.
• O'Connor, John J.; Robertson, Edmund F., "Watt's Curve", MacTutor History of Mathematics Archive, University of St Andrews
• Catalan, E. (1885). "Sur la Courbe de Watt". Mathesis. V: 154.
• Rutter, John W. (2000). Geometry of Curves. CRC Press. pp. 73ff. ISBN 1-58488-166-6.
| Wikipedia |
Watts–Strogatz model
The Watts–Strogatz model is a random graph generation model that produces graphs with small-world properties, including short average path lengths and high clustering. It was proposed by Duncan J. Watts and Steven Strogatz in their article published in 1998 in the Nature scientific journal.[1] The model also became known as the (Watts) beta model after Watts used $\beta $ to formulate it in his popular science book Six Degrees.
Rationale for the model
The formal study of random graphs dates back to the work of Paul Erdős and Alfréd Rényi.[2] The graphs they considered, now known as the classical or Erdős–Rényi (ER) graphs, offer a simple and powerful model with many applications.
However the ER graphs do not have two important properties observed in many real-world networks:
1. They do not generate local clustering and triadic closures. Instead, because they have a constant, random, and independent probability of two nodes being connected, ER graphs have a low clustering coefficient.
2. They do not account for the formation of hubs. Formally, the degree distribution of ER graphs converges to a Poisson distribution, rather than a power law observed in many real-world, scale-free networks.[3]
The Watts and Strogatz model was designed as the simplest possible model that addresses the first of the two limitations. It accounts for clustering while retaining the short average path lengths of the ER model. It does so by interpolating between a randomized structure close to ER graphs and a regular ring lattice. Consequently, the model is able to at least partially explain the "small-world" phenomena in a variety of networks, such as the power grid, neural network of C. elegans, networks of movie actors, or fat-metabolism communication in budding yeast.[4]
Algorithm
Given the desired number of nodes $N$, the mean degree $K$ (assumed to be an even integer), and a parameter $\beta $, all satisfying $0\leq \beta \leq 1$ and $N\gg K\gg \ln N\gg 1$, the model constructs an undirected graph with $N$ nodes and ${\frac {NK}{2}}$ edges in the following way:
1. Construct a regular ring lattice, a graph with $N$ nodes each connected to $K$ neighbors, $K/2$ on each side. That is, if the nodes are labeled $0\ldots {N-1}$, there is an edge $(i,j)$ if and only if $0<|i-j|\ \mathrm {mod} \ \left(N-1-{\frac {K}{2}}\right)\leq {\frac {K}{2}}.$
2. For every node $i=0,\dots ,{N-1}$ take every edge connecting $i$ to its $K/2$ rightmost neighbors, that is every edge $(i,j)$ such that $0<(j-i)\ \mathrm {mod} \ N\leq K/2$, and rewire it with probability $\beta $. Rewiring is done by replacing $(i,j)$ with $(i,k)$ where $k$ is chosen uniformly at random from all possible nodes while avoiding self-loops ($k\neq i$) and link duplication (there is no edge $(i,{k'})$ with $k'=k$ at this point in the algorithm).
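The two steps above can be sketched directly in Python using only the standard library. The function and parameter names here are illustrative, not part of any particular library's API.

```python
import random

def watts_strogatz(N, K, beta, seed=0):
    """Sketch of the Watts-Strogatz construction.

    Returns an adjacency-set dict for N nodes on a ring, each initially
    joined to K neighbours (K even, K/2 on each side), with every
    rightward lattice edge rewired with probability beta.
    """
    rng = random.Random(seed)
    adj = {i: set() for i in range(N)}
    for i in range(N):                        # step 1: regular ring lattice
        for d in range(1, K // 2 + 1):
            j = (i + d) % N
            adj[i].add(j)
            adj[j].add(i)
    for i in range(N):                        # step 2: rewire with probability beta
        for d in range(1, K // 2 + 1):
            j = (i + d) % N
            if rng.random() < beta:
                k = rng.randrange(N)
                while k == i or k in adj[i]:  # avoid self-loops and duplicate links
                    k = rng.randrange(N)
                adj[i].discard(j)
                adj[j].discard(i)
                adj[i].add(k)
                adj[k].add(i)
    return adj

g = watts_strogatz(N=20, K=4, beta=0.3)
print(sum(len(v) for v in g.values()) // 2)  # NK/2 = 40 edges
```

Because each rewiring removes exactly one edge and adds exactly one, the total edge count $NK/2$ is preserved for any $\beta$.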
Properties
The underlying lattice structure of the model produces a locally clustered network, while the randomly rewired links dramatically reduce the average path lengths. The algorithm introduces about $\beta {\frac {NK}{2}}$ such non-lattice edges. Varying $\beta $ makes it possible to interpolate between a regular lattice ($\beta =0$) and a structure close to an Erdős–Rényi random graph $G(N,p)$ with $p={\frac {K}{N-1}}$ at $\beta =1$. It does not approach the actual ER model, since every node will still be connected to at least $K/2$ other nodes.
The three properties of interest are the average path length, the clustering coefficient, and the degree distribution.
Average path length
For a ring lattice, the average path length[1] is $\ell (0)\approx N/2K\gg 1$ and scales linearly with the system size. In the limiting case of $\beta \rightarrow 1$, the graph approaches a random graph with $\ell (1)\approx {\frac {\ln N}{\ln K}}$, while not actually converging to it. In the intermediate region $0<\beta <1$, the average path length falls very rapidly with increasing $\beta $, quickly approaching its limiting value.
Clustering coefficient
For the ring lattice the clustering coefficient[5] $C(0)={\frac {3(K-2)}{4(K-1)}}$, and so tends to $3/4$ as $K$ grows, independently of the system size.[6] In the limiting case of $\beta \rightarrow 1$ the clustering coefficient is of the same order as the clustering coefficient for classical random graphs, $C=K/(N-1)$ and is thus inversely proportional to the system size. In the intermediate region the clustering coefficient remains quite close to its value for the regular lattice, and only falls at relatively high $\beta $. This results in a region where the average path length falls rapidly, but the clustering coefficient does not, explaining the "small-world" phenomenon.
If we use the Barrat and Weigt[6] measure for clustering $C'(\beta )$ defined as the fraction between the average number of edges between the neighbors of a node and the average number of possible edges between these neighbors, or, alternatively,
$C'(\beta )\equiv {\frac {3\times {\text{number of triangles}}}{\text{number of connected triples}}}$
then we get $C'(\beta )\sim C(0)(1-\beta )^{3}.$
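The $\beta =0$ value $C(0)={\frac {3(K-2)}{4(K-1)}}$ can be checked by counting triangles and connected triples on a ring lattice directly. This is a sketch with illustrative helper names:

```python
from itertools import combinations

def ring_lattice(N, K):
    """Ring of N nodes, each joined to K/2 neighbours on either side."""
    adj = {i: set() for i in range(N)}
    for i in range(N):
        for d in range(1, K // 2 + 1):
            adj[i].add((i + d) % N)
            adj[(i + d) % N].add(i)
    return adj

def global_clustering(adj):
    """3 x (number of triangles) / (number of connected triples)."""
    closed = triples = 0
    for i, nbrs in adj.items():
        for u, v in combinations(sorted(nbrs), 2):
            triples += 1          # each pair of neighbours is a connected triple
            if v in adj[u]:
                closed += 1       # each triangle is seen once from each vertex
    return closed / triples

N, K = 100, 6
print(global_clustering(ring_lattice(N, K)))  # 3(K-2)/(4(K-1)) = 0.6
```

For a vertex-transitive graph such as the ring lattice, this global measure coincides with the common local clustering coefficient.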
Degree distribution
The degree distribution in the case of the ring lattice is just a Dirac delta function centered at $K$. The degree distribution for a large number of nodes and $0<\beta <1$ can be written as,[6]
$P(k)\approx \sum _{n=0}^{f(k,K)}{{K/2} \choose {n}}(1-\beta )^{n}\beta ^{K/2-n}{\frac {(\beta K/2)^{k-K/2-n}}{(k-K/2-n)!}}e^{-\beta K/2},$
where $k$ is the degree of a node, that is, the number of edges attached to it. Here $k\geq K/2$, and $f(k,K)=\min(k-K/2,K/2)$. The shape of the degree distribution is similar to that of a random graph: it has a pronounced peak at $k=K$ and decays exponentially for large $|k-K|$. The topology of the network is relatively homogeneous, meaning that all nodes are of similar degree.
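The formula can be evaluated numerically to confirm that it is normalized and peaks at $k=K$. A sketch (the function name is illustrative):

```python
import math

def ws_degree_pmf(k, K, beta):
    """P(k) for the Watts-Strogatz model with mean degree K (k >= K/2)."""
    half = K // 2
    if k < half:
        return 0.0
    total = 0.0
    for n in range(min(k - half, half) + 1):
        total += (math.comb(half, n)
                  * (1 - beta) ** n * beta ** (half - n)
                  * (beta * half) ** (k - half - n)
                  / math.factorial(k - half - n)
                  * math.exp(-beta * half))
    return total

K, beta = 6, 0.3
probs = [ws_degree_pmf(k, K, beta) for k in range(60)]
print(round(sum(probs), 12))                   # 1.0: the distribution is normalized
print(max(range(60), key=lambda k: probs[k]))  # 6: pronounced peak at k = K
```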
Limitations
The major limitation of the model is that it produces an unrealistic degree distribution. In contrast, real networks are often scale-free networks inhomogeneous in degree, having hubs and a scale-free degree distribution. Such networks are better described in that respect by the preferential attachment family of models, such as the Barabási–Albert (BA) model. (On the other hand, the Barabási–Albert model fails to produce the high levels of clustering seen in real networks, a shortcoming not shared by the Watts and Strogatz model. Thus, neither the Watts and Strogatz model nor the Barabási–Albert model should be viewed as fully realistic.)
The Watts and Strogatz model also implies a fixed number of nodes and thus cannot be used to model network growth.
See also
• Small-world networks
• Erdős–Rényi (ER) model
• Barabási–Albert model
• Social networks
References
1. Watts, D. J.; Strogatz, S. H. (1998). "Collective dynamics of 'small-world' networks" (PDF). Nature. 393 (6684): 440–442. Bibcode:1998Natur.393..440W. doi:10.1038/30918. PMID 9623998. S2CID 4429113. Archived (PDF) from the original on 2020-10-26. Retrieved 2018-05-18.
2. Erdős, P.; Rényi, A. (1959). "On random graphs I". Publicationes Mathematicae. 6: 290–297. See also Erdős, P.; Rényi, A. (1960). "On the evolution of random graphs". Publ. Math. Inst. Hung. Acad. Sci. 5: 17–61.
3. Ravasz, E. (30 August 2002). "Hierarchical Organization of Modularity in Metabolic Networks". Science. 297 (5586): 1551–1555. arXiv:cond-mat/0209244. Bibcode:2002Sci...297.1551R. doi:10.1126/science.1073374. PMID 12202830. S2CID 14452443.
4. Al-Anzi, Bader; Arpp, Patrick; Gerges, Sherif; Ormerod, Christopher; Olsman, Noah; Zinn, Kai (2015). "Experimental and Computational Analysis of a Large Protein Network That Controls Fat Storage Reveals the Design Principles of a Signaling Network". PLOS Computational Biology. 11 (5): e1004264. Bibcode:2015PLSCB..11E4264A. doi:10.1371/journal.pcbi.1004264. PMC 4447291. PMID 26020510.
5. Albert, R., Barabási, A.-L. (2002). "Statistical mechanics of complex networks". Reviews of Modern Physics. 74 (1): 47–97. arXiv:cond-mat/0106096. Bibcode:2002RvMP...74...47A. doi:10.1103/RevModPhys.74.47. S2CID 60545.{{cite journal}}: CS1 maint: multiple names: authors list (link)
6. Barrat, A.; Weigt, M. (2000). "On the properties of small-world network models". European Physical Journal B. 13 (3): 547–560. arXiv:cond-mat/9903411. doi:10.1007/s100510050067. S2CID 13483229.
Wave maps equation
In mathematical physics, the wave maps equation is a geometric wave equation that solves
$D^{\alpha }\partial _{\alpha }u=0$
where $D$ is a connection.[1][2]
It can be considered a natural extension of the wave equation for Riemannian manifolds.[3]
References
1. Tataru, Daniel (1 January 2005). "Rough solutions for the wave maps equation". American Journal of Mathematics. 127 (2): 293–377. CiteSeerX 10.1.1.631.6746. doi:10.1353/ajm.2005.0014. S2CID 53521030.
2. Tataru, Daniel (2004). "The wave maps equation" (PDF). Bulletin of the American Mathematical Society. New Series. 41 (2): 185–204. doi:10.1090/S0273-0979-04-01005-5. Zbl 1065.35199.
3. Tao, Terence. "Wave maps" (PDF). https://www.math.ucla.edu/~tao/preprints/wavemaps.pdf
Wave
In physics, mathematics, engineering, and related fields, a wave is a propagating dynamic disturbance (change from equilibrium) of one or more quantities. Waves can be periodic, in which case those quantities oscillate repeatedly about an equilibrium (resting) value at some frequency. When the entire waveform moves in one direction, it is said to be a traveling wave; by contrast, a pair of superimposed periodic waves traveling in opposite directions makes a standing wave. In a standing wave, the amplitude of vibration has nulls at some positions where the wave amplitude appears smaller or even zero. Waves are often described by a wave equation (standing wave field of two opposite waves) or a one-way wave equation for single wave propagation in a defined direction.
Two types of waves are most commonly studied in classical physics. In a mechanical wave, stress and strain fields oscillate about a mechanical equilibrium. A mechanical wave is a local deformation (strain) in some physical medium that propagates from particle to particle by creating local stresses that cause strain in neighboring particles too. For example, sound waves are variations of the local pressure and particle motion that propagate through the medium. Other examples of mechanical waves are seismic waves, gravity waves, surface waves and string vibrations. In an electromagnetic wave (such as light), coupling between the electric and magnetic fields sustains propagation of waves involving these fields according to Maxwell's equations. Electromagnetic waves can travel through a vacuum and through some dielectric media (at wavelengths where they are considered transparent). Electromagnetic waves, according to their frequencies (or wavelengths) have more specific designations including radio waves, infrared radiation, terahertz waves, visible light, ultraviolet radiation, X-rays and gamma rays.
Other types of waves include gravitational waves, which are disturbances in spacetime that propagate according to general relativity; heat diffusion waves; plasma waves that combine mechanical deformations and electromagnetic fields; reaction–diffusion waves, such as in the Belousov–Zhabotinsky reaction; and many more. Mechanical and electromagnetic waves transfer energy,[1] momentum, and information, but they do not transfer particles in the medium. In mathematics and electronics waves are studied as signals.[2] On the other hand, some waves have envelopes which do not move at all such as standing waves (which are fundamental to music) and hydraulic jumps. Some, like the probability waves of quantum mechanics, may be completely static.
A physical wave field is almost always confined to some finite region of space, called its domain. For example, the seismic waves generated by earthquakes are significant only in the interior and surface of the planet, so they can be ignored outside it. However, waves with infinite domain, that extend over the whole space, are commonly studied in mathematics, and are very valuable tools for understanding physical waves in finite domains.
A plane wave is an important mathematical idealization where the disturbance is identical along any (infinite) plane normal to a specific direction of travel. Mathematically, the simplest wave is a sinusoidal plane wave in which at any point the field experiences simple harmonic motion at one frequency. In linear media, complicated waves can generally be decomposed as the sum of many sinusoidal plane waves having different directions of propagation and/or different frequencies. A plane wave is classified as a transverse wave if the field disturbance at each point is described by a vector perpendicular to the direction of propagation (also the direction of energy transfer); or longitudinal wave if those vectors are aligned with the propagation direction. Mechanical waves include both transverse and longitudinal waves; on the other hand electromagnetic plane waves are strictly transverse while sound waves in fluids (such as air) can only be longitudinal. That physical direction of an oscillating field relative to the propagation direction is also referred to as the wave's polarization, which can be an important attribute.
Mathematical description
Single waves
A wave can be described just like a field, namely as a function $F(x,t)$ where $x$ is a position and $t$ is a time.
The value of $x$ is a point of space, specifically in the region where the wave is defined. In mathematical terms, it is usually a vector in the Cartesian three-dimensional space $\mathbb {R} ^{3}$. However, in many cases one can ignore one dimension, and let $x$ be a point of the Cartesian plane $\mathbb {R} ^{2}$. This is the case, for example, when studying vibrations of a drum skin. One may even restrict $x$ to a point of the Cartesian line $\mathbb {R} $ — that is, the set of real numbers. This is the case, for example, when studying vibrations in a violin string or recorder. The time $t$, on the other hand, is always assumed to be a scalar; that is, a real number.
The value of $F(x,t)$ can be any physical quantity of interest assigned to the point $x$ that may vary with time. For example, if $F$ represents the vibrations inside an elastic solid, the value of $F(x,t)$ is usually a vector that gives the current displacement from $x$ of the material particles that would be at the point $x$ in the absence of vibration. For an electromagnetic wave, the value of $F$ can be the electric field vector $E$, or the magnetic field vector $H$, or any related quantity, such as the Poynting vector $E\times H$. In fluid dynamics, the value of $F(x,t)$ could be the velocity vector of the fluid at the point $x$, or any scalar property like pressure, temperature, or density. In a chemical reaction, $F(x,t)$ could be the concentration of some substance in the neighborhood of point $x$ of the reaction medium.
For any dimension $d$ (1, 2, or 3), the wave's domain is then a subset $D$ of $\mathbb {R} ^{d}$, such that the function value $F(x,t)$ is defined for any point $x$ in $D$. For example, when describing the motion of a drum skin, one can consider $D$ to be a disk (circle) on the plane $\mathbb {R} ^{2}$ with center at the origin $(0,0)$, and let $F(x,t)$ be the vertical displacement of the skin at the point $x$ of $D$ and at time $t$.
Superposition
Waves of the same type are often superposed and encountered simultaneously at a given point in space and time. The properties at that point are the sum of the properties of each component wave at that point. In general, the velocities are not the same, so the wave form will change over time and space.
Wave spectrum
Wave families
Sometimes one is interested in a single specific wave. More often, however, one needs to understand a large set of possible waves, such as all the ways that a drum skin can vibrate after being struck once with a drum stick, or all the possible radar echoes one could get from an airplane that may be approaching an airport.
In some of those situations, one may describe such a family of waves by a function $F(A,B,\ldots ;x,t)$ that depends on certain parameters $A,B,\ldots $, besides $x$ and $t$. Then one can obtain different waves — that is, different functions of $x$ and $t$ — by choosing different values for those parameters.
For example, the sound pressure inside a recorder that is playing a "pure" note is typically a standing wave, that can be written as
$F(A,L,n,c;x,t)=A\left(\cos 2\pi x{\frac {2n-1}{4L}}\right)\left(\cos 2\pi ct{\frac {2n-1}{4L}}\right)$
The parameter $A$ defines the amplitude of the wave (that is, the maximum sound pressure in the bore, which is related to the loudness of the note); $c$ is the speed of sound; $L$ is the length of the bore; and $n$ is a positive integer (1,2,3,...) that specifies the number of nodes in the standing wave. (The position $x$ should be measured from the mouthpiece, and the time $t$ from any moment at which the pressure at the mouthpiece is maximum. The quantity $\lambda =4L/(2n-1)$ is the wavelength of the emitted note, and $f=c/\lambda $ is its frequency.) Many general properties of these waves can be inferred from this general equation, without choosing specific values for the parameters.
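The standing-wave family above is easy to evaluate numerically. The sketch below uses arbitrary illustrative values (a 0.3 m bore, the speed of sound taken as 343 m/s) and checks the wavelength–frequency relation for the fundamental:

```python
import math

def recorder_pressure(A, L, n, c, x, t):
    """F(A, L, n, c; x, t) = A cos(kx) cos(kct) with k = 2*pi*(2n-1)/(4L)."""
    k = 2 * math.pi * (2 * n - 1) / (4 * L)
    return A * math.cos(k * x) * math.cos(k * c * t)

A, L, n, c = 1.0, 0.3, 1, 343.0        # illustrative values (L in m, c in m/s)
lam = 4 * L / (2 * n - 1)              # wavelength of the emitted note
f = c / lam                            # its frequency
print(round(f, 1))                     # 285.8 Hz for the fundamental
```

At $x=L$ the pressure is zero for all $t$ (a node at the far end), while it is maximal at the mouthpiece $x=0$, as the formula requires.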
As another example, it may be that the vibrations of a drum skin after a single strike depend only on the distance $r$ from the center of the skin to the strike point, and on the strength $s$ of the strike. Then the vibration for all possible strikes can be described by a function $F(r,s;x,t)$.
Sometimes the family of waves of interest has infinitely many parameters. For example, one may want to describe what happens to the temperature in a metal bar when it is initially heated at various temperatures at different points along its length, and then allowed to cool by itself in vacuum. In that case, instead of a scalar or vector, the parameter would have to be a function $h$ such that $h(x)$ is the initial temperature at each point $x$ of the bar. Then the temperatures at later times can be expressed by a function $F$ that depends on the function $h$ (that is, a functional operator), so that the temperature at a later time is $F(h;x,t)$.
Differential wave equations
Another way to describe and study a family of waves is to give a mathematical equation that, instead of explicitly giving the value of $F(x,t)$, only constrains how those values can change with time. Then the family of waves in question consists of all functions $F$ that satisfy those constraints — that is, all solutions of the equation.
This approach is extremely important in physics, because the constraints usually are a consequence of the physical processes that cause the wave to evolve. For example, if $F(x,t)$ is the temperature inside a block of some homogeneous and isotropic solid material, its evolution is constrained by the partial differential equation
${\frac {\partial F}{\partial t}}(x,t)=\alpha \left({\frac {\partial ^{2}F}{\partial x_{1}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{2}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{3}^{2}}}(x,t)\right)+\beta Q(x,t)$
where $Q(x,t)$ is the heat that is being generated per unit of volume and time in the neighborhood of $x$ at time $t$ (for example, by chemical reactions happening there); $x_{1},x_{2},x_{3}$ are the Cartesian coordinates of the point $x$; $\partial F/\partial t$ is the (first) derivative of $F$ with respect to $t$; and $\partial ^{2}F/\partial x_{i}^{2}$ is the second derivative of $F$ relative to $x_{i}$. (The symbol "$\partial $" is meant to signify that, in the derivative with respect to some variable, all other variables must be considered fixed.)
This equation can be derived from the laws of physics that govern the diffusion of heat in solid media. For that reason, it is called the heat equation in mathematics, even though it applies to many other physical quantities besides temperatures.
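In one dimension, solutions of the heat equation can be sketched with an explicit finite-difference scheme. This is a minimal illustration, assuming no source term and a bar whose ends are held at temperature zero; the grid and time step are arbitrary choices satisfying the usual stability bound.

```python
# Explicit finite-difference sketch of the 1-D heat equation
# dF/dt = alpha * d^2F/dx^2 (no source term Q), ends held at 0.
alpha, dx, dt = 1.0, 0.1, 0.004       # stable: alpha*dt/dx^2 = 0.4 <= 1/2
F = [0.0] * 11
F[5] = 1.0                            # initial hot spot mid-bar
for _ in range(100):
    F = [0.0] + [
        F[i] + alpha * dt / dx ** 2 * (F[i + 1] - 2 * F[i] + F[i - 1])
        for i in range(1, 10)
    ] + [0.0]
print(round(F[5], 3))                 # the central peak has decayed below 1
```

The profile stays symmetric about the center and the peak decays monotonically, as expected for diffusion.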
For another example, we can describe all possible sounds echoing within a container of gas by a function $F(x,t)$ that gives the pressure at a point $x$ and time $t$ within that container. If the gas was initially at uniform temperature and composition, the evolution of $F$ is constrained by the formula
${\frac {\partial ^{2}F}{\partial t^{2}}}(x,t)=\alpha \left({\frac {\partial ^{2}F}{\partial x_{1}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{2}^{2}}}(x,t)+{\frac {\partial ^{2}F}{\partial x_{3}^{2}}}(x,t)\right)+\beta P(x,t)$
Here $P(x,t)$ is some extra compression force that is being applied to the gas near $x$ by some external process, such as a loudspeaker or piston right next to $x$.
This same differential equation describes the behavior of mechanical vibrations and electromagnetic fields in a homogeneous isotropic non-conducting solid. Note that this equation differs from that of heat flow only in that the left-hand side is $\partial ^{2}F/\partial t^{2}$, the second derivative of $F$ with respect to time, rather than the first derivative $\partial F/\partial t$. Yet this small change makes a huge difference on the set of solutions $F$. This differential equation is called "the" wave equation in mathematics, even though it describes only one very special kind of waves.
Wave in elastic medium
Main articles: Wave equation and d'Alembert's formula
Consider a traveling transverse wave (which may be a pulse) on a string (the medium). Consider the string to have a single spatial dimension. Consider this wave as traveling
• in the $x$ direction in space. For example, let the positive $x$ direction be to the right, and the negative $x$ direction be to the left.
• with constant amplitude $u$
• with constant velocity $v$, where $v$ is
• independent of wavelength (no dispersion)
• independent of amplitude (linear media, not nonlinear).[4][5]
• with constant waveform, or shape
This wave can then be described by the two-dimensional functions
$u(x,t)=F(x-vt)$ (waveform $F$ traveling to the right)
$u(x,t)=G(x+vt)$ (waveform $G$ traveling to the left)
or, more generally, by d'Alembert's formula:[6]
$u(x,t)=F(x-vt)+G(x+vt).$
representing two component waveforms $F$ and $G$ traveling through the medium in opposite directions. A generalized representation of this wave can be obtained[7] as the partial differential equation
${\frac {1}{v^{2}}}{\frac {\partial ^{2}u}{\partial t^{2}}}={\frac {\partial ^{2}u}{\partial x^{2}}}.$
General solutions are based upon Duhamel's principle.[8]
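That any combination $F(x-vt)+G(x+vt)$ solves the wave equation can be checked numerically with central differences. The pulse shapes below are arbitrary smooth functions chosen for illustration:

```python
import math

v = 2.0
F = lambda s: math.exp(-s * s)        # arbitrary right-moving pulse shape
G = lambda s: 1.0 / (1.0 + s * s)     # arbitrary left-moving pulse shape
u = lambda x, t: F(x - v * t) + G(x + v * t)

# central-difference check that (1/v^2) u_tt = u_xx at an arbitrary point
x, t, h = 0.7, 0.3, 1e-4
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h ** 2
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h ** 2
print(abs(u_tt / v ** 2 - u_xx) < 1e-4)  # True: equal up to truncation error
```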
Beside the second order wave equations that are describing a standing wave field, the one-way wave equation describes the propagation of single wave in a defined direction.
Wave forms
The form or shape of F in d'Alembert's formula involves the argument x − vt. Constant values of this argument correspond to constant values of F, and these constant values occur if x increases at the same rate that vt increases. That is, the wave shaped like the function F will move in the positive x-direction at velocity v (and G will propagate at the same speed in the negative x-direction).[9]
In the case of a periodic function F with period λ, that is, F(x + λ − vt) = F(x − vt), the periodicity of F in space means that a snapshot of the wave at a given time t finds the wave varying periodically in space with period λ (the wavelength of the wave). In a similar fashion, this periodicity of F implies a periodicity in time as well: F(x − v(t + T)) = F(x − vt) provided vT = λ, so an observation of the wave at a fixed location x finds the wave undulating periodically in time with period T = λ/v.[10]
Amplitude and modulation
The amplitude of a wave may be constant (in which case the wave is a c.w. or continuous wave), or may be modulated so as to vary with time and/or position. The outline of the variation in amplitude is called the envelope of the wave. Mathematically, the modulated wave can be written in the form:[11][12][13]
$u(x,t)=A(x,t)\sin \left(kx-\omega t+\phi \right),$
where $A(x,\ t)$ is the amplitude envelope of the wave, $k$ is the wavenumber and $\phi $ is the phase. If the group velocity $v_{g}$ (see below) is wavelength-independent, this equation can be simplified as:[14]
$u(x,t)=A(x-v_{g}t)\sin \left(kx-\omega t+\phi \right),$
showing that the envelope moves with the group velocity and retains its shape. Otherwise, in cases where the group velocity varies with wavelength, the pulse shape changes in a manner often described using an envelope equation.[14][15]
Phase velocity and group velocity
There are two velocities that are associated with waves, the phase velocity and the group velocity.
Phase velocity is the rate at which the phase of the wave propagates in space: any given phase of the wave (for example, the crest) will appear to travel at the phase velocity. The phase velocity is given in terms of the wavelength λ (lambda) and period T as
$v_{\mathrm {p} }={\frac {\lambda }{T}}.$
Group velocity is a property of waves that have a defined envelope, measuring propagation through space (that is, phase velocity) of the overall shape of the waves' amplitudes – modulation or envelope of the wave.
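A classic illustration of the distinction is a dispersive medium such as deep water, where surface gravity waves obey the dispersion relation $\omega ={\sqrt {gk}}$ and the group velocity is half the phase velocity. The sketch below checks this with a numerical derivative (the wavelength is an arbitrary choice):

```python
import math

g = 9.81                                # gravitational acceleration, m/s^2

def omega(k):
    """Deep-water dispersion relation: omega = sqrt(g*k)."""
    return math.sqrt(g * k)

k = 2 * math.pi / 10.0                  # wavenumber for a 10 m wavelength
v_p = omega(k) / k                      # phase velocity = omega/k
h = 1e-6
v_g = (omega(k + h) - omega(k - h)) / (2 * h)   # group velocity = d omega/dk
print(round(v_p / v_g, 6))              # 2.0: deep-water v_g is half of v_p
```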
Special waves
Sine waves
This section is an excerpt from Sine wave.
A sine wave, sinusoidal wave, or sinusoid is a mathematical curve defined in terms of the sine trigonometric function, of which it is the graph.[16] It is a type of continuous wave and also a smooth periodic function.[17] It occurs often in mathematics, as well as in physics, engineering, signal processing and many other fields.
Plane waves
A plane wave is a kind of wave whose value varies only in one spatial direction. That is, its value is constant on a plane that is perpendicular to that direction. Plane waves can be specified by a vector of unit length ${\hat {n}}$ indicating the direction that the wave varies in, and a wave profile describing how the wave varies as a function of the displacement along that direction (${\hat {n}}\cdot {\vec {x}}$) and time ($t$). Since the wave profile only depends on the position ${\vec {x}}$ in the combination ${\hat {n}}\cdot {\vec {x}}$, any displacement in directions perpendicular to ${\hat {n}}$ cannot affect the value of the field.
Plane waves are often used to model electromagnetic waves far from a source. For electromagnetic plane waves, the electric and magnetic fields themselves are transverse to the direction of propagation, and also perpendicular to each other.
Standing waves
A standing wave, also known as a stationary wave, is a wave whose envelope remains in a constant position. This phenomenon arises as a result of interference between two waves traveling in opposite directions.
The sum of two counter-propagating waves (of equal amplitude and frequency) creates a standing wave. Standing waves commonly arise when a boundary blocks further propagation of the wave, thus causing wave reflection, and therefore introducing a counter-propagating wave. For example, when a violin string is displaced, transverse waves propagate out to where the string is held in place at the bridge and the nut, where the waves are reflected back. At the bridge and nut, the two opposed waves are in antiphase and cancel each other, producing a node. Halfway between two nodes there is an antinode, where the two counter-propagating waves enhance each other maximally. There is no net propagation of energy over time.
• One-dimensional standing waves; the fundamental mode and the first 5 overtones.
• A two-dimensional standing wave on a disk; this is the fundamental mode.
• A standing wave on a disk with two nodal lines crossing at the center; this is an overtone.
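The cancellation at nodes and reinforcement at antinodes follows from a trigonometric identity: two equal counter-propagating sinusoids sum to a product of a purely spatial and a purely temporal factor. A minimal numeric sketch (the wavenumber and frequency values are arbitrary choices, not from the text):

```python
import math

# Two equal counter-propagating sinusoids sum to a standing wave:
#   sin(kx - wt) + sin(kx + wt) = 2 sin(kx) cos(wt)
k, w = 2.0, 3.0   # arbitrary wavenumber and angular frequency

for x in (0.1, 0.7, 1.3):
    for t in (0.0, 0.4, 0.9):
        travelling_sum = math.sin(k*x - w*t) + math.sin(k*x + w*t)
        standing = 2.0 * math.sin(k*x) * math.cos(w*t)
        assert abs(travelling_sum - standing) < 1e-12

# Nodes sit where sin(kx) = 0 (x = n*pi/k); the displacement there vanishes
# at every time t, while antinodes halfway between them oscillate maximally.
x_node = math.pi / k
for t in (0.0, 0.4, 0.9):
    assert abs(math.sin(k*x_node - w*t) + math.sin(k*x_node + w*t)) < 1e-12
```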
Solitary waves
A soliton or solitary wave is a self-reinforcing wave packet that maintains its shape while it propagates at a constant velocity. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (Dispersive effects are a property of certain systems where the speed of a wave depends on its frequency.) Solitons are the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems.
Physical properties
Propagation
Wave propagation is any of the ways in which waves travel. Single wave propagation can be calculated by the second-order wave equation (standing wavefield) or the first-order one-way wave equation.
With respect to the direction of the oscillation relative to the propagation direction, we can distinguish between longitudinal wave and transverse waves.
Electromagnetic waves propagate in vacuum as well as in material media. Propagation of other wave types such as sound may occur only in a transmission medium.
Reflection of plane waves in a half-space
The propagation and reflection of plane waves, such as pressure waves (P-waves) or shear waves (SH- or SV-waves), are phenomena that were first characterized within the field of classical seismology, and are now considered fundamental concepts in modern seismic tomography. The analytical solution to this problem exists and is well known. The frequency domain solution can be obtained by first finding the Helmholtz decomposition of the displacement field, which is then substituted into the wave equation. From here, the plane wave eigenmodes can be calculated.
SV wave propagation
The analytical solution for an SV wave in a half-space indicates that, leaving out special cases, the plane SV wave reflects back into the domain as P and SV waves. The angle of the reflected SV wave is identical to that of the incident wave, while the angle of the reflected P wave is greater than that of the SV wave. For the same wave frequency, the SV wavelength is smaller than the P wavelength. This fact has been depicted in this animated picture.[18]
P wave propagation
Similar to the SV wave, an incident P wave, in general, reflects as P and SV waves. There are some special cases where the regime is different.
Wave velocity
Wave velocity is a general concept covering the various kinds of wave velocities: a wave's phase velocity and the velocities concerning energy (and information) propagation. The phase velocity is given as:
$v_{\rm {p}}={\frac {\omega }{k}},$
where:
• vp is the phase velocity (in meters per second, m/s),
• ω is the angular frequency (in radians per second, rad/s),
• k is the wavenumber (in radians per meter, rad/m).
The phase speed is the speed at which a point of constant phase of the wave travels for a discrete frequency. The angular frequency ω cannot be chosen independently from the wavenumber k; both are related through the dispersion relationship:
$\omega =\Omega (k).$
In the special case Ω(k) = ck, with c a constant, the waves are called non-dispersive, since all frequencies travel at the same phase speed c. For instance electromagnetic waves in vacuum are non-dispersive. In case of other forms of the dispersion relation, we have dispersive waves. The dispersion relationship depends on the medium through which the waves propagate and on the type of waves (for instance electromagnetic, sound or water waves).
The speed at which a resultant wave packet from a narrow range of frequencies will travel is called the group velocity and is determined from the gradient of the dispersion relation:
$v_{\rm {g}}={\frac {\partial \omega }{\partial k}}$
In almost all cases, a wave is mainly a movement of energy through a medium. Most often, the group velocity is the velocity at which the energy moves through this medium.
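As a concrete dispersive example, deep-water gravity waves (a standard textbook case, used here as an assumption rather than taken from the text) obey ω(k) = √(gk), for which the group velocity works out to exactly half the phase velocity. A small numeric check:

```python
import math

# Deep-water gravity waves: dispersion relation omega(k) = sqrt(g k),
# so v_g = d(omega)/dk = v_p / 2 for this medium.
g = 9.81                      # gravitational acceleration, m/s^2

def omega(k):
    return math.sqrt(g * k)   # the dispersion relationship omega = Omega(k)

k = 0.5                       # wavenumber, rad/m (arbitrary)
v_p = omega(k) / k            # phase velocity
dk = 1e-6
v_g = (omega(k + dk) - omega(k - dk)) / (2 * dk)   # numerical d(omega)/dk

assert abs(v_g - 0.5 * v_p) < 1e-6   # dispersive: group speed is half the phase speed

# A non-dispersive relation Omega(k) = c*k gives v_g == v_p == c:
c = 343.0
v_g_nondisp = (c * (k + dk) - c * (k - dk)) / (2 * dk)
assert abs(v_g_nondisp - c) < 1e-6
```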
Waves exhibit common behaviors under a number of standard situations, for example:
Transmission and media
Waves normally move in a straight line (that is, rectilinearly) through a transmission medium. Such media can be classified into one or more of the following categories:
• A bounded medium if it is finite in extent, otherwise an unbounded medium
• A linear medium if the amplitudes of different waves at any particular point in the medium can be added
• A uniform medium or homogeneous medium if its physical properties are unchanged at different locations in space
• An anisotropic medium if one or more of its physical properties differ in one or more directions
• An isotropic medium if its physical properties are the same in all directions
Absorption
Waves are usually defined in media which allow most or all of a wave's energy to propagate without loss. However materials may be characterized as "lossy" if they remove energy from a wave, usually converting it into heat. This is termed "absorption." A material which absorbs a wave's energy, either in transmission or reflection, is characterized by a refractive index which is complex. The amount of absorption will generally depend on the frequency (wavelength) of the wave, which, for instance, explains why objects may appear colored.
Reflection
When a wave strikes a reflective surface, it changes direction, such that the angle made by the incident wave and line normal to the surface equals the angle made by the reflected wave and the same normal line.
Refraction
Refraction is the phenomenon of a wave changing its speed. Mathematically, this means that the size of the phase velocity changes. Typically, refraction occurs when a wave passes from one medium into another. The amount by which a wave is refracted by a material is given by the refractive index of the material. The directions of incidence and refraction are related to the refractive indices of the two materials by Snell's law.
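Snell's law can be applied directly; the refractive indices below (air ≈ 1.00, water ≈ 1.33) are illustrative assumptions, and the helper function is our own naming:

```python
import math

def refraction_angle(theta1_deg, n1, n2):
    """Snell's law: n1 sin(theta1) = n2 sin(theta2). Returns theta2 in degrees,
    or None when there is no refracted ray (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if abs(s) > 1.0:
        return None
    return math.degrees(math.asin(s))

# Air (n ~ 1.00) into water (n ~ 1.33): the ray bends toward the normal.
assert refraction_angle(30.0, 1.00, 1.33) < 30.0

# Water into air beyond the critical angle (~48.8 deg): total internal reflection.
assert refraction_angle(60.0, 1.33, 1.00) is None
```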
Diffraction
A wave exhibits diffraction when it encounters an obstacle that bends the wave or when it spreads after emerging from an opening. Diffraction effects are more pronounced when the size of the obstacle or opening is comparable to the wavelength of the wave.
Interference
Main article: Wave interference
When waves in a linear medium (the usual case) cross each other in a region of space, they do not actually interact with each other, but continue on as if the other one weren't present. However at any point in that region the field quantities describing those waves add according to the superposition principle. If the waves are of the same frequency in a fixed phase relationship, then there will generally be positions at which the two waves are in phase and their amplitudes add, and other positions where they are out of phase and their amplitudes (partially or fully) cancel. This is called an interference pattern.
Polarization
The phenomenon of polarization arises when wave motion can occur simultaneously in two orthogonal directions. Transverse waves can be polarized, for instance. When polarization is used as a descriptor without qualification, it usually refers to the special, simple case of linear polarization. A transverse wave is linearly polarized if it oscillates in only one direction or plane. In the case of linear polarization, it is often useful to add the relative orientation of that plane, perpendicular to the direction of travel, in which the oscillation occurs, such as "horizontal" for instance, if the plane of polarization is parallel to the ground. Electromagnetic waves propagating in free space, for instance, are transverse; they can be polarized by the use of a polarizing filter.
Longitudinal waves, such as sound waves, do not exhibit polarization. For these waves there is only one direction of oscillation, that is, along the direction of travel.
Dispersion
A wave undergoes dispersion when either the phase velocity or the group velocity depends on the wave frequency. Dispersion is most easily seen by letting white light pass through a prism, the result of which is to produce the spectrum of colors of the rainbow. Isaac Newton performed experiments with light and prisms, presenting his findings in the Opticks (1704) that white light consists of several colors and that these colors cannot be decomposed any further.[19]
Doppler effect
The Doppler effect or Doppler shift is the change in frequency of a wave in relation to an observer who is moving relative to the wave source.[20] It is named after the Austrian physicist Christian Doppler, who described the phenomenon in 1842.
Mechanical waves
A mechanical wave is an oscillation of matter, and therefore transfers energy through a medium.[21] While waves can move over long distances, the movement of the medium of transmission—the material—is limited. Therefore, the oscillating material does not move far from its initial position. Mechanical waves can be produced only in media which possess elasticity and inertia. There are three types of mechanical waves: transverse waves, longitudinal waves, and surface waves.
Waves on strings
The transverse vibration of a string is a function of tension and inertia, and is constrained by the length of the string as the ends are fixed. This constraint limits the steady state modes that are possible, and thereby the frequencies. The speed of a transverse wave traveling along a vibrating string (v) is directly proportional to the square root of the tension of the string (T) over the linear mass density (μ):
$v={\sqrt {\frac {T}{\mu }}},$
where the linear density μ is the mass per unit length of the string.
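A short sketch of this relation; the tension, density, and string length are assumed illustrative values (roughly a guitar string), not figures from the text:

```python
import math

def string_wave_speed(tension, linear_density):
    """v = sqrt(T / mu) for a transverse wave on a stretched string."""
    return math.sqrt(tension / linear_density)

# Illustrative values (assumed): T = 70 N, mu = 1.2 g/m.
v = string_wave_speed(70.0, 1.2e-3)

# Quadrupling the tension only doubles the speed (square-root dependence):
assert abs(string_wave_speed(280.0, 1.2e-3) - 2.0 * v) < 1e-9

# With both ends fixed at length L, the fundamental frequency is f1 = v / (2L):
L = 0.65   # m, a typical guitar scale length (assumed)
f1 = v / (2 * L)
assert 100.0 < f1 < 300.0
```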
Acoustic waves
Acoustic or sound waves are compression waves which travel as body waves at the speed given by:
$v={\sqrt {\frac {B}{\rho _{0}}}},$
or the square root of the adiabatic bulk modulus divided by the ambient density of the medium (see speed of sound).
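Plugging in rough figures for air at room temperature (assumed here, not from the text: B ≈ γp with γ ≈ 1.4, p ≈ 101325 Pa, and ρ₀ ≈ 1.204 kg/m³) recovers the familiar speed of sound:

```python
import math

# v = sqrt(B / rho0), with B the adiabatic bulk modulus and rho0 the ambient
# density. Rough figures for air at ~20 degrees C (assumed illustrative values):
B = 1.4 * 101325.0        # Pa, adiabatic bulk modulus gamma * p
rho0 = 1.204              # kg/m^3, ambient density
v = math.sqrt(B / rho0)
assert 330.0 < v < 350.0  # close to the familiar ~343 m/s at room temperature
```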
Water waves
• Ripples on the surface of a pond are actually a combination of transverse and longitudinal waves; therefore, the points on the surface follow orbital paths.
• Sound – a mechanical wave that propagates through gases, liquids, solids and plasmas;
• Inertial waves, which occur in rotating fluids and are restored by the Coriolis effect;
• Ocean surface waves, which are perturbations that propagate through water.
Body waves
Body waves travel through the interior of the medium along paths controlled by the material properties in terms of density and modulus (stiffness). The density and modulus, in turn, vary according to temperature, composition, and material phase. This effect resembles the refraction of light waves. Two types of particle motion result in two types of body waves: Primary and Secondary waves.
Seismic waves
Seismic waves are waves of energy that travel through the Earth's layers, and are a result of earthquakes, volcanic eruptions, magma movement, large landslides and large man-made explosions that give out low-frequency acoustic energy. They include body waves – the primary (P waves) and secondary waves (S waves) – and surface waves, such as Rayleigh waves, Love waves, and Stoneley waves.
Shock waves
A shock wave is a type of propagating disturbance. When a wave moves faster than the local speed of sound in a fluid, it is a shock wave. Like an ordinary wave, a shock wave carries energy and can propagate through a medium; however, it is characterized by an abrupt, nearly discontinuous change in pressure, temperature and density of the medium.[22]
Shear waves
Shear waves are body waves due to shear rigidity and inertia. They can only be transmitted through solids and to a lesser extent through liquids with a sufficiently high viscosity.
Other
• Waves of traffic, that is, propagation of different densities of motor vehicles, and so forth, which can be modeled as kinematic waves[23]
• Metachronal wave refers to the appearance of a traveling wave produced by coordinated sequential actions.
Electromagnetic waves
An electromagnetic wave consists of two waves that are oscillations of the electric and magnetic fields. An electromagnetic wave travels in a direction that is at right angles to the oscillation direction of both fields. In the 19th century, James Clerk Maxwell showed that, in vacuum, the electric and magnetic fields satisfy the wave equation both with speed equal to that of the speed of light. From this emerged the idea that light is an electromagnetic wave. Electromagnetic waves can have different frequencies (and thus wavelengths), and are classified accordingly in wavebands, such as radio waves, microwaves, infrared, visible light, ultraviolet, X-rays, and gamma rays. The range of frequencies in each of these bands is continuous, and the limits of each band are mostly arbitrary, with the exception of visible light, which must be visible to the normal human eye.
Quantum mechanical waves
Main article: Schrödinger equation
Schrödinger equation
The Schrödinger equation describes the wave-like behavior of particles in quantum mechanics. Solutions of this equation are wave functions which can be used to describe the probability density of a particle.
Dirac equation
The Dirac equation is a relativistic wave equation detailing electromagnetic interactions. Dirac waves accounted for the fine details of the hydrogen spectrum in a completely rigorous way. The wave equation also implied the existence of a new form of matter, antimatter, previously unsuspected and unobserved and which was experimentally confirmed. In the context of quantum field theory, the Dirac equation is reinterpreted to describe quantum fields corresponding to spin-1⁄2 particles.
de Broglie waves
Louis de Broglie postulated that all particles with momentum have a wavelength
$\lambda ={\frac {h}{p}},$
where h is Planck's constant, and p is the magnitude of the momentum of the particle. This hypothesis was at the basis of quantum mechanics. Nowadays, this wavelength is called the de Broglie wavelength. For example, the electrons in a CRT display have a de Broglie wavelength of about 10⁻¹³ m.
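The formula λ = h/p can be evaluated directly; the physical constants below are standard values inserted here for illustration, and the 1 eV electron is an assumed example:

```python
import math

# Illustrative constants (assumed here, not given in the text):
h = 6.62607015e-34     # Planck's constant, J s
m_e = 9.1093837e-31    # electron rest mass, kg
eV = 1.602176634e-19   # joules per electronvolt

def de_broglie_wavelength(p):
    """lambda = h / p for a particle with momentum magnitude p."""
    return h / p

# Non-relativistic electron with 1 eV of kinetic energy: p = sqrt(2 m E).
p = math.sqrt(2 * m_e * 1.0 * eV)
lam = de_broglie_wavelength(p)
assert 1.1e-9 < lam < 1.3e-9   # about 1.2 nm, comparable to atomic spacings
```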
A wave representing such a particle traveling in the k-direction is expressed by the wave function as follows:
$\psi (\mathbf {r} ,\,t=0)=Ae^{i\mathbf {k\cdot r} },$
where the wavelength is determined by the wave vector k as:
$\lambda ={\frac {2\pi }{k}},$
and the momentum by:
$\mathbf {p} =\hbar \mathbf {k} .$
However, a wave like this with definite wavelength is not localized in space, and so cannot represent a particle localized in space. To localize a particle, de Broglie proposed a superposition of different wavelengths ranging around a central value in a wave packet,[25] a waveform often used in quantum mechanics to describe the wave function of a particle. In a wave packet, the wavelength of the particle is not precise, and the local wavelength deviates on either side of the main wavelength value.
In representing the wave function of a localized particle, the wave packet is often taken to have a Gaussian shape and is called a Gaussian wave packet.[26] Gaussian wave packets also are used to analyze water waves.[27]
For example, a Gaussian wavefunction ψ might take the form:[28]
$\psi (x,\,t=0)=A\exp \left(-{\frac {x^{2}}{2\sigma ^{2}}}+ik_{0}x\right),$
at some initial time t = 0, where the central wavelength is related to the central wave vector k0 as λ0 = 2π / k0. It is well known from the theory of Fourier analysis,[29] or from the Heisenberg uncertainty principle (in the case of quantum mechanics) that a narrow range of wavelengths is necessary to produce a localized wave packet, and the more localized the envelope, the larger the spread in required wavelengths. The Fourier transform of a Gaussian is itself a Gaussian.[30] Given the Gaussian:
$f(x)=e^{-x^{2}/\left(2\sigma ^{2}\right)},$
the Fourier transform is:
${\tilde {f}}(k)=\sigma e^{-\sigma ^{2}k^{2}/2}.$
The Gaussian in space therefore is made up of waves:
$f(x)={\frac {1}{\sqrt {2\pi }}}\int _{-\infty }^{\infty }\ {\tilde {f}}(k)e^{ikx}\ dk;$
that is, a number of waves of wavelengths λ such that kλ = 2 π.
The parameter σ decides the spatial spread of the Gaussian along the x-axis, while the Fourier transform shows a spread in wave vector k determined by 1/σ. That is, the smaller the extent in space, the larger the extent in k, and hence in λ = 2π/k.
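The transform pair above can be checked numerically under the same symmetric convention (a factor of 1/√(2π) on both the forward and inverse transforms); the width σ, integration limits, and sample points are arbitrary choices:

```python
import math
import cmath

sigma = 0.8                                   # arbitrary spatial width
f = lambda x: math.exp(-x**2 / (2 * sigma**2))

def fourier_transform(k, L=12.0, n=4000):
    """f~(k) = (1/sqrt(2 pi)) * integral of f(x) e^{-ikx} dx over [-L, L],
    approximated by the trapezoid rule."""
    dx = 2 * L / n
    total = 0j
    for i in range(n + 1):
        x = -L + i * dx
        weight = 0.5 if i in (0, n) else 1.0
        total += weight * f(x) * cmath.exp(-1j * k * x) * dx
    return total / math.sqrt(2 * math.pi)

# The transform of a Gaussian of width sigma is a Gaussian of width 1/sigma:
for k in (0.0, 0.5, 1.5, 3.0):
    expected = sigma * math.exp(-sigma**2 * k**2 / 2)
    assert abs(fourier_transform(k) - expected) < 1e-8
```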
Gravity waves
Gravity waves are waves generated in a fluid medium or at the interface between two media when the force of gravity or buoyancy works to restore equilibrium. Surface waves on water are the most familiar example.
Gravitational waves
Gravitational waves also travel through space. The first observation of gravitational waves was announced on 11 February 2016.[31] Gravitational waves are disturbances in the curvature of spacetime, predicted by Einstein's theory of general relativity.
See also
• Index of wave articles
Waves in general
• Mechanical wave, in media transmission
• One-way wave equation, for waves running in pre-defined direction
• Wave equation, general
• Wave interference, a phenomenon in which two waves superpose to form a resultant wave
• Wave Motion (journal), a scientific journal
• Wavefront, an advancing surface of wave propagation
Parameters
• Frequency
• Phase (waves), offset or angle of a sinusoidal wave function at its origin
• Standing wave ratio, in telecommunications
• Wavelength
• Wavenumber
Waveforms
• Creeping wave, a wave diffracted around a sphere
• Evanescent field
• Longitudinal wave
• Periodic travelling wave
• Sine wave
• Square wave
• Standing wave
• Transverse wave
Electromagnetic waves
• Dyakonov surface wave
• Dyakonov–Voigt wave
• Earth–ionosphere waveguide, in radio transmission
• Electromagnetic radiation
• Electromagnetic wave equation, describes electromagnetic wave propagation
• Microwave, a form of electromagnetic radiation
In fluids
• Airy wave theory, in fluid dynamics
• Capillary wave, in fluid dynamics
• Cnoidal wave, in fluid dynamics
• Edge wave, a surface gravity wave fixed by refraction against a rigid boundary
• Faraday wave, a type of wave in liquids
• Gravity wave, in fluid dynamics
• Internal wave, a wave within a fluid medium
• Shock wave, in aerodynamics
• Sound wave, a wave of sound through a medium such as air or water
• Tidal wave, a scientifically incorrect name for a tsunami
• Tollmien–Schlichting wave, in fluid dynamics
• Wind wave
In quantum mechanics
• Bloch's theorem
• Matter wave
• Pilot wave theory, in Bohmian mechanics
• Wave function
• Wave packet
• Wave–particle duality
In relativity
• Gravitational wave, in relativity theory
• Relativistic wave equations, wave equations that consider special relativity
• pp-wave spacetime, a set of exact solutions to Einstein's field equation
Other specific types of waves
• Alfvén wave, in plasma physics
• Atmospheric wave, a periodic disturbance in the fields of atmospheric variables
• Fir wave, a forest configuration
• Lamb waves, in solid materials
• Rayleigh wave, surface acoustic waves that travel on solids
• Spin wave, in magnetism
• Spin density wave, in solid materials
• Trojan wave packet, in particle science
• Waves in plasmas, in plasma physics
Related topics
• Absorption (electromagnetic radiation)
• Antenna (radio)
• Beat (acoustics)
• Branched flow
• Cymatics
• Diffraction
• Dispersion (water waves)
• Doppler effect
• Envelope detector
• Fourier transform for computing periodicity in evenly spaced data
• Group velocity
• Harmonic
• Huygens–Fresnel principle
• Index of wave articles
• Inertial wave
• Least-squares spectral analysis for computing periodicity in unevenly spaced data
• List of waves named after people
• One-Way Wave Equation
• Phase velocity
• Photon
• Polarization (physics)
• Propagation constant
• Radio propagation
• Ray (optics)
• Reaction–diffusion system
• Reflection (physics)
• Refraction
• Resonance
• Ripple tank
• Rogue wave
• Scattering
• Shallow water equations
• Shive wave machine
• Sound
• Standing wave
• Transmission medium
• Velocity factor
• Wave equation
• Wave power
• Wave turbulence
• Wind wave
• Wind wave#Formation
References
1. (Hall 1982, p. 8)
2. Pragnan Chakravorty, "What Is a Signal? [Lecture Notes]," IEEE Signal Processing Magazine, vol. 35, no. 5, pp. 175–177, Sept. 2018. doi:10.1109/MSP.2018.2832195
3. Santos, Edgar; Schöll, Michael; Sánchez-Porras, Renán; Dahlem, Markus A.; Silos, Humberto; Unterberg, Andreas; Dickhaus, Hartmut; Sakowitz, Oliver W. (2014-10-01). "Radial, spiral and reverberating waves of spreading depolarization occur in the gyrencephalic brain". NeuroImage. 99: 244–255. doi:10.1016/j.neuroimage.2014.05.021. ISSN 1095-9572. PMID 24852458. S2CID 1347927.
4. Michael A. Slawinski (2003). "Wave equations". Seismic waves and rays in elastic media. Elsevier. pp. 131 ff. ISBN 978-0-08-043930-3.
5. Lev A. Ostrovsky & Alexander I. Potapov (2001). Modulated waves: theory and application. Johns Hopkins University Press. ISBN 978-0-8018-7325-6.
6. Karl F Graaf (1991). Wave motion in elastic solids (Reprint of Oxford 1975 ed.). Dover. pp. 13–14. ISBN 978-0-486-66745-4.
7. For an example derivation, see the steps leading up to eq. (17) in Francis Redfern. "Kinematic Derivation of the Wave Equation". Physics Journal. Archived from the original on 2013-07-24. Retrieved 2012-12-11.
8. Jalal M. Ihsan Shatah; Michael Struwe (2000). "The linear wave equation". Geometric wave equations. American Mathematical Society Bookstore. pp. 37ff. ISBN 978-0-8218-2749-9.
9. Louis Lyons (1998). All you wanted to know about mathematics but were afraid to ask. Cambridge University Press. pp. 128 ff. ISBN 978-0-521-43601-4.
10. Alexander McPherson (2009). "Waves and their properties". Introduction to Macromolecular Crystallography (2 ed.). Wiley. p. 77. ISBN 978-0-470-18590-2.
11. Christian Jirauschek (2005). FEW-cycle Laser Dynamics and Carrier-envelope Phase Detection. Cuvillier Verlag. p. 9. ISBN 978-3-86537-419-6.
12. Fritz Kurt Kneubühl (1997). Oscillations and waves. Springer. p. 365. ISBN 978-3-540-62001-3.
13. Mark Lundstrom (2000). Fundamentals of carrier transport. Cambridge University Press. p. 33. ISBN 978-0-521-63134-1.
14. Chin-Lin Chen (2006). "§13.7.3 Pulse envelope in nondispersive media". Foundations for guided-wave optics. Wiley. p. 363. ISBN 978-0-471-75687-3.
15. Stefano Longhi; Davide Janner (2008). "Localization and Wannier wave packets in photonic crystals". In Hugo E. Hernández-Figueroa; Michel Zamboni-Rached; Erasmo Recami (eds.). Localized Waves. Wiley-Interscience. p. 329. ISBN 978-0-470-10885-7.
16. "Sine Wave". Mathematical Mysteries. 2021-11-17. Retrieved 2022-09-30.
17. "Sinusoidal". www.math.net. Retrieved 2022-09-30.
18. The animations are taken from Poursartip, Babak (2015). "Topographic amplification of seismic waves". UT Austin.
19. Newton, Isaac (1704). "Prop VII Theor V". Opticks: Or, A treatise of the Reflections, Refractions, Inflexions and Colours of Light. Also Two treatises of the Species and Magnitude of Curvilinear Figures. Vol. 1. London. p. 118. All the Colours in the Universe which are made by Light... are either the Colours of homogeneal Lights, or compounded of these...
20. Giordano, Nicholas (2009). College Physics: Reasoning and Relationships. Cengage Learning. pp. 421–424. ISBN 978-0534424718.
21. Giancoli, D. C. (2009) Physics for scientists & engineers with modern physics (4th ed.). Upper Saddle River, N.J.: Pearson Prentice Hall.
22. Anderson, John D. Jr. (January 2001) [1984], Fundamentals of Aerodynamics (3rd ed.), McGraw-Hill Science/Engineering/Math, ISBN 978-0-07-237335-6
23. M.J. Lighthill; G.B. Whitham (1955). "On kinematic waves. II. A theory of traffic flow on long crowded roads". Proceedings of the Royal Society of London. Series A. 229 (1178): 281–345. Bibcode:1955RSPSA.229..281L. CiteSeerX 10.1.1.205.4573. doi:10.1098/rspa.1955.0088. S2CID 18301080. And: P.I. Richards (1956). "Shockwaves on the highway". Operations Research. 4 (1): 42–51. doi:10.1287/opre.4.1.42.
24. A.T. Fromhold (1991). "Wave packet solutions". Quantum Mechanics for Applied Physics and Engineering (Reprint of Academic Press 1981 ed.). Courier Dover Publications. pp. 59 ff. ISBN 978-0-486-66741-6. (p. 61) ...the individual waves move more slowly than the packet and therefore pass back through the packet as it advances
25. Ming Chiang Li (1980). "Electron Interference". In L. Marton; Claire Marton (eds.). Advances in Electronics and Electron Physics. Vol. 53. Academic Press. p. 271. ISBN 978-0-12-014653-6.
26. See for example Walter Greiner; D. Allan Bromley (2007). Quantum Mechanics (2 ed.). Springer. p. 60. ISBN 978-3-540-67458-0. and John Joseph Gilman (2003). Electronic basis of the strength of materials. Cambridge University Press. p. 57. ISBN 978-0-521-62005-5.,Donald D. Fitts (1999). Principles of quantum mechanics. Cambridge University Press. p. 17. ISBN 978-0-521-65841-6..
27. Chiang C. Mei (1989). The applied dynamics of ocean surface waves (2nd ed.). World Scientific. p. 47. ISBN 978-9971-5-0789-3.
28. Walter Greiner; D. Allan Bromley (2007). Quantum Mechanics (2nd ed.). Springer. p. 60. ISBN 978-3-540-67458-0.
29. Siegmund Brandt; Hans Dieter Dahmen (2001). The picture book of quantum mechanics (3rd ed.). Springer. p. 23. ISBN 978-0-387-95141-6.
30. Cyrus D. Cantrell (2000). Modern mathematical methods for physicists and engineers. Cambridge University Press. p. 677. ISBN 978-0-521-59827-9.
31. "Gravitational waves detected for 1st time, 'opens a brand new window on the universe'". Canadian Broadcasting Corporation. 11 February 2016.
Sources
• Fleisch, D.; Kinnaman, L. (2015). A student's guide to waves. Cambridge: Cambridge University Press. Bibcode:2015sgw..book.....F. ISBN 978-1107643260.
• Campbell, Murray; Greated, Clive (2001). The musician's guide to acoustics (Repr. ed.). Oxford: Oxford University Press. ISBN 978-0198165057.
• French, A.P. (1971). Vibrations and Waves (M.I.T. Introductory physics series). Nelson Thornes. ISBN 978-0-393-09936-2. OCLC 163810889.
• Hall, D.E. (1980). Musical Acoustics: An Introduction. Belmont, CA: Wadsworth Publishing Company. ISBN 978-0-534-00758-4..
• Hunt, Frederick Vinton (1978). Origins in acoustics. Woodbury, NY: Published for the Acoustical Society of America through the American Institute of Physics. ISBN 978-0300022209.
• Ostrovsky, L.A.; Potapov, A.S. (1999). Modulated Waves, Theory and Applications. Baltimore: The Johns Hopkins University Press. ISBN 978-0-8018-5870-3..
• Griffiths, G.; Schiesser, W.E. (2010). Traveling Wave Analysis of Partial Differential Equations: Numerical and Analytical Methods with Matlab and Maple. Academic Press. ISBN 9780123846532.
• Crawford jr., Frank S. (1968). Waves (Berkeley Physics Course, Vol. 3), McGraw-Hill, ISBN 978-0070048607 Free online version
• A. E. H. Love (1944). A Treatise on The Mathematical Theory of Elasticity. New York: Dover.
• E.W. Weisstein. "Wave velocity". ScienceWorld. Retrieved 2009-05-30.
External links
• The Feynman Lectures on Physics: Waves
• Linear and nonlinear waves
• Science Aid: Wave properties – Concise guide aimed at teens
• "AT&T Archives: Similiarities of Wave Behavior" demonstrated by J.N. Shive of Bell Labs
d'Alembert operator
In special relativity, electromagnetism and wave theory, the d'Alembert operator (denoted by a box: $\Box $), also called the d'Alembertian, wave operator, box operator or sometimes quabla operator[1] (cf. nabla symbol) is the Laplace operator of Minkowski space. The operator is named after French mathematician and physicist Jean le Rond d'Alembert.
Not to be confused with d'Alembert's principle or d'Alembert's equation.
In Minkowski space, in standard coordinates (t, x, y, z), it has the form
${\begin{aligned}\Box &=\partial ^{\mu }\partial _{\mu }=\eta ^{\mu \nu }\partial _{\nu }\partial _{\mu }={\frac {1}{c^{2}}}{\frac {\partial ^{2}}{\partial t^{2}}}-{\frac {\partial ^{2}}{\partial x^{2}}}-{\frac {\partial ^{2}}{\partial y^{2}}}-{\frac {\partial ^{2}}{\partial z^{2}}}\\&={\frac {1}{c^{2}}}{\partial ^{2} \over \partial t^{2}}-\nabla ^{2}={\frac {1}{c^{2}}}{\partial ^{2} \over \partial t^{2}}-\Delta ~~.\end{aligned}}$
Here $\nabla ^{2}:=\Delta $ is the 3-dimensional Laplacian and ημν is the inverse Minkowski metric with
$\eta _{00}=1$, $\eta _{11}=\eta _{22}=\eta _{33}=-1$, $\eta _{\mu \nu }=0$ for $\mu \neq \nu $.
Note that the μ and ν summation indices range from 0 to 3: see Einstein notation. Units are sometimes chosen such that the speed of light c = 1, in which case the factors of 1/c² above disappear.
(Some authors alternatively use the negative metric signature of (− + + +), with $\eta _{00}=-1,\;\eta _{11}=\eta _{22}=\eta _{33}=1$.)
Lorentz transformations leave the Minkowski metric invariant, so the d'Alembertian yields a Lorentz scalar. The above coordinate expressions remain valid for the standard coordinates in every inertial frame.
The box symbol and alternate notations
There are a variety of notations for the d'Alembertian. The most common are the box symbol $\Box $ (Unicode: U+2610 ☐ BALLOT BOX) whose four sides represent the four dimensions of space-time and the box-squared symbol $\Box ^{2}$ which emphasizes the scalar property through the squared term (much like the Laplacian). In keeping with the triangular notation for the Laplacian, sometimes $\Delta _{M}$ is used.
Another way to write the d'Alembertian in flat standard coordinates is $\partial ^{2}$. This notation is used extensively in quantum field theory, where partial derivatives are usually indexed, so the lack of an index with the squared partial derivative signals the presence of the d'Alembertian.
Sometimes the box symbol is used to represent the four-dimensional Levi-Civita covariant derivative. The symbol $\nabla $ is then used to represent the space derivatives, but this is coordinate chart dependent.
Applications
The wave equation for small vibrations is of the form
$\Box _{c}u\left(x,t\right)\equiv u_{tt}-c^{2}u_{xx}=0~,$
where u(x, t) is the displacement.
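Any profile translated at speed c solves this equation. A quick finite-difference check for u(x, t) = sin(k(x − ct)), with arbitrarily chosen constants (this is our own sketch, not a method from the text):

```python
import math

c, k = 3.0, 2.0                               # arbitrary speed and wavenumber
u = lambda x, t: math.sin(k * (x - c * t))    # right-moving profile f(x - ct)

def second_partial(g, x, t, var, h=1e-4):
    """Central-difference second partial derivative of g(x, t)."""
    if var == 'x':
        return (g(x + h, t) - 2 * g(x, t) + g(x - h, t)) / h**2
    return (g(x, t + h) - 2 * g(x, t) + g(x, t - h)) / h**2

# u_tt - c^2 u_xx should vanish (up to discretization error) everywhere:
for x, t in [(0.3, 0.1), (1.0, 0.5), (-2.0, 2.0)]:
    residual = second_partial(u, x, t, 't') - c**2 * second_partial(u, x, t, 'x')
    assert abs(residual) < 1e-4
```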
The wave equation for the electromagnetic field in vacuum is
$\Box A^{\mu }=0$
where Aμ is the electromagnetic four-potential in Lorenz gauge.
The Klein–Gordon equation has the form
$\left(\Box +{\frac {m^{2}c^{2}}{\hbar ^{2}}}\right)\psi =0~.$
Green's function
The Green's function, $G\left({\tilde {x}}-{\tilde {x}}'\right)$, for the d'Alembertian is defined by the equation
$\Box G\left({\tilde {x}}-{\tilde {x}}'\right)=\delta \left({\tilde {x}}-{\tilde {x}}'\right)$
where $\delta \left({\tilde {x}}-{\tilde {x}}'\right)$ is the multidimensional Dirac delta function and ${\tilde {x}}$ and ${\tilde {x}}'$ are two points in Minkowski space.
A special solution is given by the retarded Green's function which corresponds to signal propagation only forward in time[2]
$G\left({\vec {r}},t\right)={\frac {1}{4\pi r}}\Theta (t)\delta \left(t-{\frac {r}{c}}\right)$
where $\Theta $ is the Heaviside step function.
See also
• Four-gradient
• d'Alembert's formula
• Klein–Gordon equation
• Relativistic heat conduction
• Ricci calculus
• Wave equation
• One-way wave equation
References
1. Theoretische Physik (Aufl. 2015 ed.). Berlin, Heidelberg. 2015. ISBN 978-3-642-54618-1. OCLC 899608232.{{cite book}}: CS1 maint: location missing publisher (link)
2. S. Siklos. "The causal Green's function for the wave equation" (PDF). Retrieved 2 January 2013.
External links
• "D'Alembert operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Poincaré, Henri (1906). Translation:On the Dynamics of the Electron (July) – via Wikisource., originally printed in Rendiconti del Circolo Matematico di Palermo.
• Weisstein, Eric W. "d'Alembertian". MathWorld.
Wavelet for multidimensional signals analysis
Wavelets are often used to analyse piece-wise smooth signals.[1] Wavelet coefficients can efficiently represent a signal, which has led to data compression algorithms using wavelets.[2] Wavelet analysis is extended to multidimensional signal processing as well. This article introduces a few methods for wavelet synthesis and analysis for multidimensional signals. Challenges such as directivity also arise in the multidimensional case.
Multidimensional separable discrete wavelet transform (DWT)
The discrete wavelet transform is extended to the multidimensional case using the tensor product of well-known 1-D wavelets. In 2-D, for example, the tensor product space is decomposed into four tensor product vector spaces[3] as
(φ(x) ⨁ ψ(x)) ⊗ (φ(y) ⨁ ψ(y)) = { φ(x)φ(y), φ(x)ψ(y), ψ(x)φ(y), ψ(x)ψ(y) }
This leads to the concept of multidimensional separable DWT similar in principle to the multidimensional DFT.
φ(x)φ(y) gives the approximation coefficients, while the other subbands give the detail coefficients:
φ(x)ψ(y): the low-high (LH) subband,
ψ(x)φ(y): the high-low (HL) subband,
ψ(x)ψ(y): the high-high (HH) subband.
Implementation of multidimensional separable DWT
Wavelet coefficients can be computed by passing the signal to be decomposed through a series of filters. In the 1-D case, there are two filters at every level: one low pass for the approximation and one high pass for the details. In the multidimensional case, the number of filters at each level depends on the number of tensor product vector spaces. For M-D, 2^M filters are necessary at every level. Each filter output is called a subband. The subband with all low pass (LLL...) gives the approximation coefficients and all the rest give the detail coefficients at that level. For example, for M = 3 and a signal of size N1 × N2 × N3, a separable DWT can be implemented as follows:
Applying the 1-D DWT analysis filterbank along the first dimension splits the signal into two chunks of size N1⁄2 × N2 × N3. Applying the 1-D DWT along the second dimension splits each of these chunks into two more chunks of size N1⁄2 × N2⁄2 × N3. Repeating this along the third dimension gives a total of 8 chunks of size N1⁄2 × N2⁄2 × N3⁄2.[4]
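One level of this row/column filtering-and-decimation scheme can be sketched in 2-D with orthonormal Haar filters. This is a minimal pure-Python illustration; the helper names and the test image are illustrative, not from any particular library:

```python
s = 2 ** -0.5  # 1/sqrt(2), orthonormal Haar filter coefficient

def haar_1d(seq):
    """One level of the 1-D Haar DWT: (lowpass, highpass), each half length."""
    lo = [s * (seq[i] + seq[i + 1]) for i in range(0, len(seq), 2)]
    hi = [s * (seq[i] - seq[i + 1]) for i in range(0, len(seq), 2)]
    return lo, hi

def transpose(block):
    return [list(col) for col in zip(*block)]

def dwt2_haar(image):
    """image: list of rows (even dimensions) -> (LL, LH, HL, HH) subbands."""
    L, H = zip(*(haar_1d(row) for row in image))          # filter along x
    def filter_y(block):                                  # then filter along y
        lo, hi = zip(*(haar_1d(col) for col in transpose(block)))
        return transpose(lo), transpose(hi)
    LL, LH = filter_y(L)
    HL, HH = filter_y(H)
    return LL, LH, HL, HH

img = [[1.0, 2.0, 3.0, 4.0],
       [1.0, 2.0, 3.0, 4.0],
       [5.0, 6.0, 7.0, 8.0],
       [5.0, 6.0, 7.0, 8.0]]
LL, LH, HL, HH = dwt2_haar(img)
print(LL)  # coarse approximation; LH and HH vanish here (rows repeat in pairs)
```

Because the Haar filters are orthonormal, the total energy of the four subbands equals the energy of the input image.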
Disadvantages of M-D separable DWT
The wavelets generated by the separable DWT procedure are highly shift variant. A small shift in the input signal changes the wavelet coefficients to a large extent. Also, these wavelets are almost equal in their magnitude in all directions and thus do not reflect the orientation or directivity that could be present in the multidimensional signal. For example, there could be an edge discontinuity in an image, or an object moving smoothly along a straight line in 4-D space-time. A separable DWT does not fully capture such features. In order to overcome these difficulties, a method of wavelet transform called the complex wavelet transform (CWT) was developed.
Multidimensional complex wavelet transform
Similar to the 1-D complex wavelet transform,[5] tensor products of complex wavelets are considered to produce complex wavelets for multidimensional signal analysis. With further analysis it is seen that these complex wavelets are oriented.[6] This sort of orientation helps to resolve the directional ambiguity of the signal.
Implementation of multidimensional (M-D) dual tree CWT
Dual tree CWT in 1-D uses 2 real DWTs, where the first one gives the real part of CWT and the second DWT gives the imaginary part of the CWT. M-D dual tree CWT is analyzed in terms of tensor products. However, it is possible to implement M-D CWTs efficiently using separable M-D DWTs and considering sum and difference of subbands obtained. Additionally, these wavelets tend to be oriented in specific directions.
Two types of oriented M-D CWTs can be implemented. Considering only the real part of the tensor product of wavelets yields real coefficients, with all wavelets oriented in different directions. This is 2^(m−1) times as expansive, where m is the number of dimensions.
If both real and imaginary parts of the tensor products of complex wavelets are considered, complex oriented dual tree CWT which is 2 times more expansive than real oriented dual tree CWT is obtained. So there are two wavelets oriented in each of the directions. Although implementing complex oriented dual tree structure takes more resources, it is used in order to ensure an approximate shift invariance property that a complex analytical wavelet can provide in 1-D. In the 1-D case, it is required that the real part of the wavelet and the imaginary part are Hilbert transform pairs for the wavelet to be analytical and to exhibit shift invariance. Similarly in the M-D case, the real and imaginary parts of tensor products are made to be approximate Hilbert transform pairs in order to be analytic and shift invariant.[6][7]
Consider an example for 2-D dual tree real oriented CWT:
Let ψ(x) and ψ(y) be complex wavelets:
ψ(x) = ψ(x)h + j ψ(x)g and ψ(y) = ψ(y)h + j ψ(y)g.
ψ(x,y) = [ψ(x)h + j ψ(x)g][ψ(y)h + j ψ(y)g] = ψ(x)hψ(y)h − ψ(x)gψ(y)g + j [ψ(x)hψ(y)g + ψ(x)gψ(y)h]
The support of the Fourier spectrum of the wavelet above resides in the first quadrant. When just the real part is considered, Real(ψ(x,y)) = ψ(x)hψ(y)h − ψ(x)gψ(y)g has support in two opposite quadrants (see (a) in figure). Both ψ(x)hψ(y)h and ψ(x)gψ(y)g correspond to the HH subband of two different separable 2-D DWTs. This wavelet is oriented at −45°.
Similarly, by considering ψ2(x,y) = ψ(x)ψ(y)*, a wavelet oriented at 45° is obtained. To obtain 4 more oriented real wavelets, φ(x)ψ(y), ψ(x)φ(y), φ(x)ψ(y)* and ψ(x)φ(y)* are considered.
The implementation of complex oriented dual tree structure is done as follows: Two separable 2-D DWTs are implemented in parallel using the filterbank structure as in the previous section. Then, the appropriate sum and difference of different subbands (LL, LH, HL, HH) give oriented wavelets, a total of 6 in all.
Similarly, in 3-D, 4 separable 3-D DWTs in parallel are needed and a total of 28 oriented wavelets are obtained.
Disadvantage of M-D CWT
Although the M-D CWT provides one with oriented wavelets, these orientations are only appropriate to represent the orientation along the (m-1)th dimension of a signal with m dimensions. When singularities in manifold[8] of lower dimensions are considered, such as a bee moving in a straight line in the 4-D space-time, oriented wavelets that are smooth in the direction of the manifold and change rapidly in the direction normal to it are needed. A new transform, Hypercomplex Wavelet transform was developed in order to address this issue.
Hypercomplex wavelet transform
The dual tree hypercomplex wavelet transform (HWT) developed in [9] consists of a standard DWT tensor and 2^m − 1 wavelets obtained from combining the 1-D Hilbert transform of these wavelets along the n-coordinates. In particular, a 2-D HWT consists of the standard 2-D separable DWT tensor and three additional components:
Hx {ψ(x)hψ(y)h} = ψ(x)gψ(y)h
Hy {ψ(x)hψ(y)h} = ψ(x)hψ(y)g
Hx Hy {ψ(x)hψ(y)h} = ψ(x)gψ(y)g
For the 2-D case, this is named the dual tree quaternion wavelet transform (QWT).[10] In M-D, the total redundancy is 2^m, and the transform forms a tight frame.
Directional hypercomplex wavelet transform
The hypercomplex transform described above serves as a building block to construct the directional hypercomplex wavelet transform (DHWT). A linear combination of the wavelets obtained using the hypercomplex transform gives a wavelet oriented in a particular direction. For the 2-D DHWT, it is seen that these linear combinations correspond to the exact 2-D dual tree CWT case. For 3-D, the DHWT can be considered for two values of n: one DHWT for n = 1 and another for n = 2. For n = 2, n = m − 1, so, as in the 2-D case, this corresponds to the 3-D dual tree CWT. But the case of n = 1 gives rise to a new DHWT transform. The combination of 3-D HWT wavelets is done in a manner that ensures the resultant wavelet is lowpass along 1-D and bandpass along 2-D. In [9], this was used to detect line singularities in 3-D space.
Challenges ahead
Wavelet transforms for multidimensional signals are often computationally demanding, as is the case with most multidimensional signal processing. Also, the CWT and DHWT are redundant transforms, even though they offer directivity and shift invariance.
References
1. Mallat, Stéphane (2008). A Wavelet Tour of Signal Processing. Academic Press.
2. Devore, R.A.; Jawerth, B.; Lucier, B.J. (1991). "Data compression using wavelets: Error, smoothness and quantization". [1991] Proceedings. Data Compression Conference. pp. 186–195. doi:10.1109/DCC.1991.213386. ISBN 978-0-8186-9202-4. S2CID 11964668.
3. Kugarajah, Tharmarajah; Zhang, Qinghua (November 1995). "Multidimensional wavelet frames". IEEE Transactions on Neural Networks. 6 (6): 1552–1556. doi:10.1109/72.471353. hdl:1903/5619. PMID 18263450.
4. Cheng-Wu, Po; Gee-Chen, Liang (7 August 2002). "An efficient architecture for two-dimensional discrete wavelet transform". IEEE Transactions on Circuits and Systems for Video Technology. 11 (4): 536–545. doi:10.1109/76.915359.
5. Kingsbury, Nick (2001). "Complex Wavelets for Shift Invariant Analysis and Filtering of Signals". Applied and Computational Harmonic Analysis. 10 (3): 234–253. doi:10.1006/acha.2000.0343.
6. Selesnick, Ivan; Baraniuk, Richard; Kingsbury, Nick (2005). "The Dual-Tree Complex Wavelet Transform". IEEE Signal Processing Magazine. 22 (6): 123–151. Bibcode:2005ISPM...22..123S. doi:10.1109/MSP.2005.1550194. hdl:1911/20355. S2CID 833630.
7. Selesnick, I.W. (June 2001). "Hilbert transform pairs of wavelet bases". IEEE Signal Processing Letters. 8 (6): 170–173. Bibcode:2001ISPL....8..170S. CiteSeerX 10.1.1.139.5369. doi:10.1109/97.923042. S2CID 5994808.
8. Boothby, W (2003). An Introduction to Differentiable Manifolds and Riemannian Geometry. San Diego: Academic.
9. Wai Lam Chan; Hyeokho Choi; Baraniuk, R.G. (2004). "Directional hypercomplex wavelets for multidimensional signal analysis and processing". 2004 IEEE International Conference on Acoustics, Speech, and Signal Processing. Vol. 3. pp. iii–996–9. doi:10.1109/ICASSP.2004.1326715. hdl:1911/19796. ISBN 0-7803-8484-9. S2CID 8287497.
10. Lam Chan, Wai; Choi, Hyeokho; Baraniuk, Richard (2008). "Coherent Multiscale Image Processing Using Dual-Tree Quaternion Wavelets". IEEE Transactions on Image Processing. 17 (7): 1069–1082. Bibcode:2008ITIP...17.1069C. doi:10.1109/TIP.2008.924282. PMID 18586616. S2CID 16789586.
External links
• Tensor products in wavelet settings
• Matlab implementation of wavelet transforms
• A Panorama on Multiscale Geometric Representations, Intertwining Spatial, Directional and Frequency Selectivity, a review on 2D (two-dimensional) wavelet representations
| Wikipedia |
Wavelet transform modulus maxima method
The wavelet transform modulus maxima (WTMM) is a method for detecting the fractal dimension of a signal.
More than this, the WTMM is capable of partitioning the time and scale domain of a signal into fractal dimension regions, and the method is sometimes referred to as a "mathematical microscope" due to its ability to inspect the multi-scale dimensional characteristics of a signal and possibly inform about the sources of these characteristics.
The WTMM method uses the continuous wavelet transform rather than Fourier transforms to detect singularities – that is, discontinuities: areas in the signal that are not continuous at a particular derivative.
In particular, this method is useful when analyzing multifractal signals, that is, signals having multiple fractal dimensions.
Description
Consider a signal that can be represented by the following equation:
$f(t)=a_{0}+a_{1}(t-t_{i})+a_{2}(t-t_{i})^{2}+\cdots +a_{h}(t-t_{i})^{h_{i}}\,$
where $t$ is close to $t_{i}$ and $h_{i}$ is a non-integer quantifying the local singularity. (Compare this to a Taylor series, where in practice only a limited number of low-order terms are used to approximate a continuous function.)
Generally, a continuous wavelet transform decomposes a signal as a function of time, rather than assuming the signal is stationary (as, for example, the Fourier transform does). Any continuous wavelet can be used, though the first derivative of the Gaussian distribution and the Mexican hat wavelet (2nd derivative of the Gaussian) are common. The choice of wavelet may depend on characteristics of the signal being investigated.
Below we see one possible wavelet basis given by the first derivative of the Gaussian:
$G'(t,a,b)={\frac {a}{(2\pi )^{-1/2}}}(t-b)e^{\left({\frac {-(t-b)^{2}}{2a^{2}}}\right)}\,$
Once a "mother wavelet" is chosen, the continuous wavelet transform is carried out with a continuous, square-integrable function that can be scaled and translated. Let $a>0$ be the scaling constant and $b\in \mathbb {R} $ be the translation of the wavelet along the signal:
$X_{w}(a,b)={\frac {1}{\sqrt {a}}}\int _{-\infty }^{\infty }x(t)\psi ^{\ast }\left({\frac {t-b}{a}}\right)\,dt$
where $\psi (t)$ is a continuous function in both the time domain and the frequency domain called the mother wavelet, and $^{\ast }$ denotes complex conjugation.
By calculating $X_{w}(a,b)$ for subsequent wavelets that are derivatives of the mother wavelet, singularities can be identified. Successive derivative wavelets remove the contribution of lower order terms in the signal, allowing the maximum $h_{i}$ to be detected. (Recall that when taking derivatives, lower order terms become 0.) This is the "modulus maxima".
Thus, this method identifies the singularity spectrum by convolving the signal with a wavelet at different scales and time offsets.
The WTMM is then capable of producing a "skeleton" that partitions the scale and time space by fractal dimension.
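The procedure can be sketched end to end in pure Python: compute a discretised CWT of a signal containing one isolated singularity (a step) at a few scales, then pick out the local maxima of the modulus |W(a, b)|. The wavelet, scales, threshold, and signal below are illustrative choices, not a reference implementation; for a step edge the Mexican-hat modulus maxima appear near b = 64 ± a and converge on the singularity as the scale shrinks:

```python
import math

def mexican_hat(t):
    """Second derivative of a Gaussian, up to sign and normalisation."""
    return (1 - t * t) * math.exp(-t * t / 2)

def cwt_at_scale(signal, a):
    """Discretised CWT row W(a, b) for a real wavelet (conjugation is trivial)."""
    n, half = len(signal), int(5 * a)      # effective support ~ [-5a, 5a]
    row = []
    for b in range(n):
        acc = sum(signal[k] * mexican_hat((k - b) / a)
                  for k in range(max(0, b - half), min(n, b + half + 1)))
        row.append(acc / math.sqrt(a))
    return row

x = [0.0] * 64 + [1.0] * 64                # one singularity: a step at k = 64

maxima_per_scale = {}
for a in (2.0, 4.0, 8.0):
    mod = [abs(v) for v in cwt_at_scale(x, a)]
    floor = 0.1 * max(mod)                 # ignore tiny numerical ripples
    # modulus maxima of |W(a, b)| along b (>= tolerates exact plateaus)
    maxima_per_scale[a] = [b for b in range(20, 108)
                           if mod[b] >= mod[b - 1] and mod[b] > mod[b + 1]
                           and mod[b] > floor]
    print(a, maxima_per_scale[a])          # roughly [64 - a, 64 + a]
```

The search window is kept away from the signal boundaries, since truncating the sum at the ends of the signal creates boundary artifacts of its own.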
History
The WTMM was developed out of the larger field of continuous wavelet transforms, which arose in the 1980s, and its contemporary fractal dimension methods.
At its essence, it is a combination of fractal dimension box counting methods and continuous wavelet transforms, where wavelets at various scales are used instead of boxes.
WTMM was originally developed by Mallat and Hwang in 1992 and used for image processing.
Bacry, Muzy, and Arneodo were early users of this methodology. It has subsequently been used in fields related to signal processing.
References
• Alain Arneodo et al. (2008), Scholarpedia, 3(3):4103.
• A Wavelet Tour of Signal Processing, by Stéphane Mallat; ISBN 012466606X; Academic Press, 1999
• Mallat, S.; Hwang, W.L.;, "Singularity detection and processing with wavelets," IEEE Transactions on Information Theory, volume 38, number 2, pages 617–643, Mar 1992 doi:10.1109/18.119727
• Arneodo on Wavelets
• Muzy, J. F.; Bacry, E.; Arneodo, A. (1991-12-16). "Wavelets and multifractal formalism for singular signals: Application to turbulence data". Physical Review Letters. American Physical Society (APS). 67 (25): 3515–3518. Bibcode:1991PhRvL..67.3515M. doi:10.1103/physrevlett.67.3515. ISSN 0031-9007. PMID 10044755.
• Muzy, J. F.; Bacry, E.; Arneodo, A. (1993-02-01). "Multifractal formalism for fractal signals: The structure-function approach versus the wavelet-transform modulus-maxima method" (PDF). Physical Review E. American Physical Society (APS). 47 (2): 875–884. Bibcode:1993PhRvE..47..875M. doi:10.1103/physreve.47.875. ISSN 1063-651X. PMID 9960082.
| Wikipedia |
Wavelet transform
In mathematics, a wavelet series is a representation of a square-integrable (real- or complex-valued) function by a certain orthonormal series generated by a wavelet. This article provides a formal, mathematical definition of an orthonormal wavelet and of the integral wavelet transform.[1][2][3][4][5]
For broader coverage of this topic, see Wavelet.
Definition
A function $\psi \,\in \,L^{2}(\mathbb {R} )$ is called an orthonormal wavelet if it can be used to define a Hilbert basis, that is a complete orthonormal system, for the Hilbert space $L^{2}\left(\mathbb {R} \right)$ of square integrable functions.
The Hilbert basis is constructed as the family of functions $\{\psi _{jk}:\,j,\,k\,\in \,\mathbb {Z} \}$ by means of dyadic translations and dilations of $\psi \,$,
$\psi _{jk}(x)=2^{\frac {j}{2}}\psi \left(2^{j}x-k\right)\,$
for integers $j,\,k\,\in \,\mathbb {Z} $.
If under the standard inner product on $L^{2}\left(\mathbb {R} \right)$,
$\langle f,g\rangle =\int _{-\infty }^{\infty }f(x){\overline {g(x)}}dx$
this family is orthonormal; that is,
${\begin{aligned}\langle \psi _{jk},\psi _{lm}\rangle &=\int _{-\infty }^{\infty }\psi _{jk}(x){\overline {\psi _{lm}(x)}}dx\\&=\delta _{jl}\delta _{km}\end{aligned}}$
where $\delta _{jl}\,$ is the Kronecker delta.
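For the Haar mother wavelet these relations can be checked numerically. This is a pure-Python sketch; the midpoint-rule grid is deliberately chosen so that cell edges align with the Haar breakpoints, making the quadrature essentially exact for these piecewise-constant functions:

```python
def psi(x):
    """Haar mother wavelet: +1 on [0, 1/2), -1 on [1/2, 1), 0 elsewhere."""
    if 0 <= x < 0.5:
        return 1.0
    if 0.5 <= x < 1:
        return -1.0
    return 0.0

def psi_jk(j, k):
    """Dyadic dilation/translation: psi_jk(x) = 2^{j/2} psi(2^j x - k)."""
    return lambda x: 2 ** (j / 2) * psi(2 ** j * x - k)

def inner(f, g, lo=-4.0, hi=4.0, n=2 ** 14):
    """Midpoint-rule approximation of the L^2 inner product on [lo, hi]."""
    h = (hi - lo) / n
    return sum(f(lo + (i + 0.5) * h) * g(lo + (i + 0.5) * h)
               for i in range(n)) * h

print(inner(psi_jk(0, 0), psi_jk(0, 0)))  # ~1  (normalisation)
print(inner(psi_jk(0, 0), psi_jk(0, 1)))  # ~0  (distinct translates)
print(inner(psi_jk(0, 0), psi_jk(1, 0)))  # ~0  (distinct scales)
```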
Completeness is satisfied if every function $f\,\in \,L^{2}\left(\mathbb {R} \right)$ may be expanded in the basis as
$f(x)=\sum _{j,k=-\infty }^{\infty }c_{jk}\psi _{jk}(x)$
with convergence of the series understood to be convergence in norm. Such a representation of f is known as a wavelet series. This implies that an orthonormal wavelet is self-dual.
The integral wavelet transform is the integral transform defined as
$\left[W_{\psi }f\right](a,b)={\frac {1}{\sqrt {|a|}}}\int _{-\infty }^{\infty }{\overline {\psi \left({\frac {x-b}{a}}\right)}}f(x)dx\,$
The wavelet coefficients $c_{jk}$ are then given by
$c_{jk}=\left[W_{\psi }f\right]\left(2^{-j},k2^{-j}\right)$
Here, $a=2^{-j}$ is called the binary dilation or dyadic dilation, and $b=k2^{-j}$ is the binary or dyadic position.
Principle
The fundamental idea of wavelet transforms is that the transformation should allow only changes in time extension, but not shape. This is achieved by choosing suitable basis functions that allow for this. Changes in the time extension are expected to conform to the corresponding analysis frequency of the basis function. Based on the uncertainty principle of signal processing,
$\Delta t\Delta \omega \geq {\frac {1}{2}}$
where $t$ represents time and $\omega $ angular frequency ($\omega =2\pi f$, where $f$ is ordinary frequency).
The higher the required resolution in time, the lower the resolution in frequency has to be. The larger the extension of the analysis window is chosen, the larger the value of $\Delta t$ becomes.
When $\Delta t$ is large,
1. Bad time resolution
2. Good frequency resolution
3. Low frequency, large scaling factor
When $\Delta t$ is small
1. Good time resolution
2. Bad frequency resolution
3. High frequency, small scaling factor
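For a Gaussian window the uncertainty bound is attained with equality, which can be checked numerically. This is a pure-Python sketch; σ, the grid range, and the step are arbitrary choices, and the Fourier transform of the Gaussian is written down in closed form (up to a constant factor, which the normalisation removes) rather than computed:

```python
import math

def spread(values, xs, h):
    """Standard deviation of the normalised density |v|^2 on the grid xs."""
    p = [v * v for v in values]
    norm = sum(p) * h
    mean = sum(x * q for x, q in zip(xs, p)) * h / norm
    var = sum((x - mean) ** 2 * q for x, q in zip(xs, p)) * h / norm
    return math.sqrt(var)

sigma, h = 1.7, 0.01
xs = [-10 + i * h for i in range(2001)]                # grid on [-10, 10]

g = [math.exp(-x * x / (2 * sigma ** 2)) for x in xs]  # Gaussian window
G = [math.exp(-sigma ** 2 * x * x / 2) for x in xs]    # its Fourier transform

dt = spread(g, xs, h)        # ~ sigma / sqrt(2)
dw = spread(G, xs, h)        # ~ 1 / (sigma * sqrt(2))
print(dt * dw)               # ~ 0.5: the Gaussian attains the lower bound
```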
In other words, the basis function $\psi $ can be regarded as an impulse response of a system with which the function $x(t)$ has been filtered. The transformed signal provides information about the time and the frequency. Therefore, wavelet-transformation contains information similar to the short-time-Fourier-transformation, but with additional special properties of the wavelets, which show up at the resolution in time at higher analysis frequencies of the basis function. The difference in time resolution at ascending frequencies for the Fourier transform and the wavelet transform is shown below. Note however, that the frequency resolution is decreasing for increasing frequencies while the temporal resolution increases. This consequence of the Fourier uncertainty principle is not correctly displayed in the Figure.
This shows that wavelet transformation is good in time resolution of high frequencies, while for slowly varying functions, the frequency resolution is remarkable.
Another example: The analysis of three superposed sinusoidal signals $y(t)\;=\;\sin(2\pi f_{0}t)\;+\;\sin(4\pi f_{0}t)\;+\;\sin(8\pi f_{0}t)$ with STFT and wavelet-transformation.
Wavelet compression
See also: Discrete wavelet transform
Wavelet compression is a form of data compression well suited for image compression (sometimes also video compression and audio compression). Notable implementations are JPEG 2000, DjVu and ECW for still images, JPEG XS, CineForm, and the BBC's Dirac. The goal is to store image data in as little space as possible in a file. Wavelet compression can be either lossless or lossy.[6]
Using a wavelet transform, the wavelet compression methods are adequate for representing transients, such as percussion sounds in audio, or high-frequency components in two-dimensional images, for example an image of stars on a night sky. This means that the transient elements of a data signal can be represented by a smaller amount of information than would be the case if some other transform, such as the more widespread discrete cosine transform, had been used.
The discrete wavelet transform has been successfully applied to the compression of electrocardiograph (ECG) signals.[7] In this work, the high correlation between the corresponding wavelet coefficients of signals of successive cardiac cycles is exploited using linear prediction.
Wavelet compression is not effective for all kinds of data. Wavelet compression handles transient signals well. But smooth, periodic signals are better compressed using other methods, particularly traditional harmonic analysis in the frequency domain with Fourier-related transforms. Compressing data that has both transient and periodic characteristics may be done with hybrid techniques that use wavelets along with traditional harmonic analysis. For example, the Vorbis audio codec primarily uses the modified discrete cosine transform to compress audio (which is generally smooth and periodic), however allows the addition of a hybrid wavelet filter bank for improved reproduction of transients.[8]
See Diary Of An x264 Developer: The problems with wavelets (2010) for discussion of practical issues of current methods using wavelets for video compression.
Method
First a wavelet transform is applied. This produces as many coefficients as there are pixels in the image (i.e., there is no compression yet since it is only a transform). These coefficients can then be compressed more easily because the information is statistically concentrated in just a few coefficients. This principle is called transform coding. After that, the coefficients are quantized and the quantized values are entropy encoded and/or run length encoded.
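The pipeline in this paragraph — transform, then exploit the statistical concentration of the coefficients — can be sketched with a multi-level 1-D Haar transform in pure Python. The test signal and the crude zero-dropping step are illustrative; real codecs add quantisation and entropy coding on top:

```python
s = 2 ** -0.5  # orthonormal Haar coefficient

def haar_fwd(x):
    """Full multi-level 1-D Haar DWT of a length-2^L list."""
    out, approx = [], list(x)
    while len(approx) > 1:
        lo = [s * (approx[i] + approx[i + 1]) for i in range(0, len(approx), 2)]
        hi = [s * (approx[i] - approx[i + 1]) for i in range(0, len(approx), 2)]
        out = hi + out          # details, coarsest level first overall
        approx = lo
    return approx + out         # [approximation] + details

def haar_inv(c):
    approx, pos = [c[0]], 1
    while pos < len(c):
        hi = c[pos:pos + len(approx)]
        pos += len(approx)
        nxt = []
        for a, d in zip(approx, hi):
            nxt += [s * (a + d), s * (a - d)]
        approx = nxt
    return approx

# A piecewise-smooth signal compresses well: most detail coefficients are tiny.
x = [1.0] * 8 + [5.0] * 8
c = haar_fwd(x)
kept = [v if abs(v) > 1e-9 else 0.0 for v in c]   # crude thresholding step
print(sum(1 for v in kept if v == 0.0), "of", len(c), "coefficients dropped")
y = haar_inv(kept)
print(max(abs(a - b) for a, b in zip(x, y)))      # ~0: reconstruction error
```

Here 14 of the 16 coefficients vanish, yet the signal is reconstructed essentially exactly, which is the concentration of information the paragraph describes.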
A few 1D and 2D applications of wavelet compression use a technique called "wavelet footprints".[9][10]
Requirement for image compression
For most natural images, the spectral density is higher at lower frequencies.[11] As a result, information of the low frequency signal (reference signal) is generally preserved, while the information in the detail signal is discarded. From the perspective of image compression and reconstruction, a wavelet should meet the following criteria while performing image compression:
• Being able to transform more original image into the reference signal.
• Highest fidelity reconstruction based on the reference signal.
• Should not lead to artifacts in the image reconstructed from the reference signal alone.
Requirement for shift variance and ringing behavior
Wavelet image compression system involves filters and decimation, so it can be described as a linear shift-variant system. A typical wavelet transformation diagram is displayed below:
The transformation system contains two analysis filters (a low pass filter $h_{0}(n)$ and a high pass filter $h_{1}(n)$), a decimation process, an interpolation process, and two synthesis filters ($g_{0}(n)$ and $g_{1}(n)$). The compression and reconstruction system generally involves the low frequency components, that is, the analysis filter $h_{0}(n)$ for image compression and the synthesis filter $g_{0}(n)$ for reconstruction. To evaluate such a system, we can input an impulse $\delta (n-n_{i})$ and observe its reconstruction $h(n-n_{i})$; the optimal wavelets are those that bring minimum shift variance and sidelobes to $h(n-n_{i})$. Even though a wavelet with strict shift invariance is not realistic, it is possible to select a wavelet with only slight shift variance. For example, we can compare the shift variance of two filters:[12]
Biorthogonal filters for wavelet image compression
Length Filter coefficients Regularity
Wavelet filter 1 H0 9 .852699, .377402, -.110624, -.023849, .037828 1.068
G0 7 .788486, .418092, -.040689, -.064539 1.701
Wavelet filter 2 H0 6 .788486, .047699, -.129078 0.701
G0 10 .615051, .133389, -.067237, .006989, .018914 2.068
By observing the impulse responses of the two filters, we can conclude that the second filter is less sensitive to the input location (i.e. it is less shift variant).
Another important issue for image compression and reconstruction is the system's oscillatory behavior, which might lead to severe undesired artifacts in the reconstructed image. To achieve this, the wavelet filters should have a large peak to sidelobe ratio.
So far we have discussed the one-dimensional transformation of the image compression system. This issue can be extended to two dimensions, for which a more general term – shiftable multiscale transforms – has been proposed.[13]
Derivation of impulse response
As mentioned earlier, impulse response can be used to evaluate the image compression/reconstruction system.
For the input sequence $x(n)=\delta (n-n_{i})$, the reference signal $r_{1}(n)$ after one level of decomposition is $x(n)*h_{0}(n)$ decimated by a factor of two, where $h_{0}(n)$ is a low pass filter. Similarly, the next reference signal $r_{2}(n)$ is obtained by decimating $r_{1}(n)*h_{0}(n)$ by a factor of two. After L levels of decomposition (and decimation), the analysis response is obtained by retaining one out of every $2^{L}$ samples: $h_{A}^{(L)}(n,n_{i})=f_{h0}^{(L)}(n-n_{i}/2^{L})$.
On the other hand, to reconstruct the signal x(n), we can consider a reference signal $r_{L}(n)=\delta (n-n_{j})$. If the detail signals $d_{i}(n)$ are equal to zero for $1\leq i\leq L$, then the reference signal at the previous stage ($L-1$ stage) is $r_{L-1}(n)=g_{0}(n-2n_{j})$, which is obtained by interpolating $r_{L}(n)$ and convolving with $g_{0}(n)$. Similarly, the procedure is iterated to obtain the reference signal $r(n)$ at stages $L-2,L-3,\ldots ,1$. After L iterations, the synthesis impulse response is calculated: $h_{s}^{(L)}(n,n_{i})=f_{g0}^{(L)}(n/2^{L}-n_{j})$, which relates the reference signal $r_{L}(n)$ and the reconstructed signal.
To obtain the overall L level analysis/synthesis system, the analysis and synthesis responses are combined as below:
$h_{AS}^{(L)}(n,n_{i})=\sum _{k}f_{h0}^{(L)}(k-n_{i}/2^{L})f_{g0}^{(L)}(n/2^{L}-k)$.
Finally, the peak to first sidelobe ratio and the average second sidelobe of the overall impulse response $h_{AS}^{(L)}(n,n_{i})$ can be used to evaluate the wavelet image compression performance.
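A minimal pure-Python illustration of this evaluation, using Haar filters with all detail subbands set to zero (the filter choice, signal length, and level count are arbitrary): the overall response depends on where the impulse sits relative to the decimation grid, so the system is shift variant.

```python
s = 2 ** -0.5  # orthonormal Haar coefficient

def analysis_reference(x, levels):
    """L Haar analysis stages, keeping only the reference (lowpass) signal."""
    r = list(x)
    for _ in range(levels):
        r = [s * (r[i] + r[i + 1]) for i in range(0, len(r), 2)]
    return r

def synthesis(r, levels):
    """L Haar synthesis stages with all detail inputs set to zero."""
    for _ in range(levels):
        up = []
        for a in r:
            up += [s * a, s * a]
        r = up
    return r

def h_as(n, ni, levels=2):
    """Overall analysis/synthesis impulse response for an impulse at ni."""
    delta = [0.0] * n
    delta[ni] = 1.0
    return synthesis(analysis_reference(delta, levels), levels)

h3, h4 = h_as(16, 3), h_as(16, 4)
print(h3)  # supported on samples 0..3
print(h4)  # supported on samples 4..7 -- not a 1-sample shift of h3
```

Shifting the input by one sample does not shift the response by one sample, which is exactly the shift variance the impulse-response test is designed to expose.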
Comparison with Fourier transform and time-frequency analysis
TransformRepresentationInput
Fourier transform${\hat {X}}(f)=\int _{-\infty }^{\infty }x(t)e^{-i2\pi ft}\,dt$$f$ : frequency
Time–frequency analysis$X(t,f)$$t$ time; $f$ frequency
Wavelet transform$X(a,b)={\frac {1}{\sqrt {a}}}\int _{-\infty }^{\infty }{\overline {\Psi \left({\frac {t-b}{a}}\right)}}x(t)\,dt$$a$ scaling ; $b$ time shift factor
Wavelets have some slight benefits over Fourier transforms in reducing computations when examining specific frequencies. However, they are rarely more sensitive, and indeed, the common Morlet wavelet is mathematically identical to a short-time Fourier transform using a Gaussian window function.[14] The exception is when searching for signals of a known, non-sinusoidal shape (e.g., heartbeats); in that case, using matched wavelets can outperform standard STFT/Morlet analyses.[15]
Other practical applications
The wavelet transform can provide us with the frequencies of the signals and the times associated with those frequencies, making it very convenient for application in numerous fields. For instance, signal processing of accelerations for gait analysis,[16] for fault detection,[17] for design of low power pacemakers and also in ultra-wideband (UWB) wireless communications.[18][19][20]
1. Discretization of the $c-\tau $ axes. The following discretization of frequency and time is applied:
${\begin{aligned}c_{n}&=c_{0}^{n}\\\tau _{m}&=m\cdot T\cdot c_{0}^{n}\end{aligned}}$
This leads to the discrete formula for the basis wavelet:
$\Psi (k,n,m)={\frac {1}{\sqrt {c_{0}^{n}}}}\cdot \Psi \left[{\frac {k-mc_{0}^{n}}{c_{0}^{n}}}T\right]={\frac {1}{\sqrt {c_{0}^{n}}}}\cdot \Psi \left[\left({\frac {k}{c_{0}^{n}}}-m\right)T\right]$
Such discrete wavelets can be used for the transformation:
$Y_{DW}(n,m)={\frac {1}{\sqrt {c_{0}^{n}}}}\cdot \sum _{k=0}^{K-1}y(k)\cdot \Psi \left[\left({\frac {k}{c_{0}^{n}}}-m\right)T\right]$
2. Implementation via the FFT (fast Fourier transform). As is apparent from the wavelet transform representation (shown below),
$Y_{W}(c,\tau )={\frac {1}{\sqrt {c}}}\cdot \int _{-\infty }^{\infty }y(t)\cdot \Psi \left({\frac {t-\tau }{c}}\right)\,dt$
where $c$ is the scaling factor and $\tau $ the time shift factor,
and as already mentioned in this context, the wavelet transform corresponds to a convolution of a function $y(t)$ with a wavelet function. A convolution can be implemented as a multiplication in the frequency domain. This results in the following implementation approach:
• Fourier-transformation of signal $y(k)$ with the FFT
• Selection of a discrete scaling factor $c_{n}$
• Scaling of the wavelet-basis-function by this factor $c_{n}$ and subsequent FFT of this function
• Multiplication with the transformed signal $Y_{FFT}$ of the first step
• Inverse transformation of the product into the time domain results in $Y_{W}(c,\tau )$ for different discrete values of $\tau $ and a discrete value of $c_{n}$
• Return to the second step, until all discrete scaling values for $c_{n}$ are processed
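The steps above can be sketched in pure Python. The FFT below is a textbook radix-2 Cooley-Tukey; the wavelet, scale, and signal are illustrative choices. Multiplying the spectra and inverse-transforming reproduces the (circular) convolution computed directly in the time domain:

```python
import cmath
import math

def fft(x):
    """Radix-2 Cooley-Tukey FFT; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return [complex(x[0])]
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

def ifft(X):
    n = len(X)
    return [v.conjugate() / n for v in fft([u.conjugate() for u in X])]

def circular_convolution(x, h):
    n = len(x)
    return [sum(x[m] * h[(k - m) % n] for m in range(n)) for k in range(n)]

n, c = 64, 4.0  # signal length and wavelet scale
y = [math.sin(2 * math.pi * k / n) + math.sin(8 * math.pi * k / n)
     for k in range(n)]
def wavelet(t):  # Mexican hat
    return (1 - t * t) * math.exp(-t * t / 2)
# sample the scaled wavelet on a periodic grid, centred at index 0
psi = [wavelet((((k + n // 2) % n) - n // 2) / c) / math.sqrt(c)
       for k in range(n)]

via_fft = [v.real for v in ifft([a * b for a, b in zip(fft(y), fft(psi))])]
direct = circular_convolution(y, psi)
print(max(abs(a - b) for a, b in zip(via_fft, direct)))  # roundoff-level
```

The FFT route costs O(N log N) per scale versus O(N²) for the direct sum, which is why the frequency-domain implementation is preferred in practice.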
There are many different types of wavelet transforms for specific purposes; see also a full list of wavelet-related transforms. Common wavelets include the Mexican hat wavelet, the Haar wavelet, the Daubechies wavelet, and the triangular wavelet.
Time-causal wavelets
For processing temporal signals in real time, it is essential that the wavelet filters do not access signal values from the future, and that minimal temporal latencies can be obtained. Time-causal wavelet representations have been developed by Szu et al.[21] and Lindeberg,[22] with the latter method also involving a memory-efficient time-recursive implementation.
Synchro-squeezed transform
Synchro-squeezed transform can significantly enhance temporal and frequency resolution of time-frequency representation obtained using conventional wavelet transform.[23][24]
See also
• Continuous wavelet transform
• Discrete wavelet transform
• Complex wavelet transform
• Constant-Q transform
• Stationary wavelet transform
• Dual wavelet
• Least-squares spectral analysis
• Multiresolution analysis
• MrSID, the image format developed from original wavelet compression research at Los Alamos National Laboratory (LANL)
• ECW, a wavelet-based geospatial image format designed for speed and processing efficiency
• JPEG 2000, a wavelet-based image compression standard
• DjVu format uses wavelet-based IW44 algorithm for image compression
• scaleograms, a type of spectrogram generated using wavelets instead of a short-time Fourier transform
• Wavelet
• Haar wavelet
• Daubechies wavelet
• Binomial QMF (also known as Daubechies wavelet)
• Morlet wavelet
• Gabor wavelet
• Chirplet transform
• Time–frequency representation
• S transform
• Set partitioning in hierarchical trees
• Short-time Fourier transform
• Biorthogonal nearly coiflet basis, which shows that wavelet for image compression can also be nearly coiflet (nearly orthogonal).
References
1. Meyer, Yves (1992), Wavelets and Operators, Cambridge, UK: Cambridge University Press, ISBN 0-521-42000-8
2. Chui, Charles K. (1992), An Introduction to Wavelets, San Diego, CA: Academic Press, ISBN 0-12-174584-8
3. Daubechies, Ingrid. (1992), Ten Lectures on Wavelets, SIAM, ISBN 978-0-89871-274-2
4. Akansu, Ali N.; Haddad, Richard A. (1992), Multiresolution Signal Decomposition: Transforms, Subbands, and Wavelets, Boston, MA: Academic Press, ISBN 978-0-12-047141-6
5. Ghaderpour, E.; Pagiatakis, S. D.; Hassan, Q. K. (2021). "A Survey on Change Detection and Time Series Analysis with Applications". Applied Sciences. 11 (13): 6141. doi:10.3390/app11136141.
6. JPEG 2000, for example, may use a 5/3 wavelet for lossless (reversible) transform and a 9/7 wavelet for lossy (irreversible) transform.
7. Ramakrishnan, A.G.; Saha, S. (1997). "ECG coding by wavelet-based linear prediction" (PDF). IEEE Transactions on Biomedical Engineering. 44 (12): 1253–1261. doi:10.1109/10.649997. PMID 9401225. S2CID 8834327.
8. "Vorbis I specification". Xiph.Org Foundation. 2020-07-04. Archived from the original on 2022-04-03. Retrieved 2022-04-10. Vorbis I is a forward-adaptive monolithic transform CODEC based on the Modified Discrete Cosine Transform. The codec is structured to allow addition of a hybrid wavelet filterbank in Vorbis II to offer better transient response and reproduction using a transform better suited to localized time events.
9. N. Malmurugan, A. Shanmugam, S. Jayaraman and V. V. Dinesh Chander. "A New and Novel Image Compression Algorithm Using Wavelet Footprints"
10. Ho Tatt Wei; Jeoti, V. (2004). "A wavelet footprints-based compression scheme for ECG signals". 2004 IEEE Region 10 Conference TENCON 2004. Vol. A. p. 283. doi:10.1109/TENCON.2004.1414412. ISBN 0-7803-8560-8. S2CID 43806122.
11. J. Field, David (1987). "Relations between the statistics of natural images and the response properties of cortical cells" (PDF). J. Opt. Soc. Am. A. 4 (12): 2379–2394. Bibcode:1987JOSAA...4.2379F. doi:10.1364/JOSAA.4.002379. PMID 3430225.
12. Villasenor, John D. (August 1995). "Wavelet Filter Evaluation for Image Compression". IEEE Transactions on Image Processing. 4 (8): 1053–60. Bibcode:1995ITIP....4.1053V. doi:10.1109/83.403412. PMID 18291999.
13. Simoncelli, E.P.; Freeman, W.T.; Adelson, E.H.; Heeger, D.J. (1992). "Shiftable multiscale transforms". IEEE Transactions on Information Theory. 38 (2): 587–607. doi:10.1109/18.119725. S2CID 43701174.
14. Bruns, Andreas (2004). "Fourier-, Hilbert- and wavelet-based signal analysis: are they really different approaches?". Journal of Neuroscience Methods. 137 (2): 321–332. doi:10.1016/j.jneumeth.2004.03.002. PMID 15262077. S2CID 21880274.
15. Krantz, Steven G. (1999). A Panorama of Harmonic Analysis. Mathematical Association of America. ISBN 0-88385-031-1.
16. Martin, E. (2011). "Novel method for stride length estimation with body area network accelerometers". 2011 IEEE Topical Conference on Biomedical Wireless Technologies, Networks, and Sensing Systems. pp. 79–82. doi:10.1109/BIOWIRELESS.2011.5724356. ISBN 978-1-4244-8316-7. S2CID 37689047.
17. Liu, Jie (2012). "Shannon wavelet spectrum analysis on truncated vibration signals for machine incipient fault detection". Measurement Science and Technology. 23 (5): 1–11. Bibcode:2012MeScT..23e5604L. doi:10.1088/0957-0233/23/5/055604. S2CID 121684952.
18. Akansu, A. N.; Serdijn, W. A.; Selesnick, I. W. (2010). "Emerging applications of wavelets: A review" (PDF). Physical Communication. 3: 1–18. doi:10.1016/j.phycom.2009.07.001.
19. Sheybani, E.; Javidi, G. (December 2009). "Dimensionality Reduction and Noise Removal in Wireless Sensor Network Datasets". 2009 Second International Conference on Computer and Electrical Engineering. Vol. 2. pp. 674–677. doi:10.1109/ICCEE.2009.282. ISBN 978-1-4244-5365-8. S2CID 17066179.
20. Sheybani, E. O.; Javidi, G. (May 2012). "Multi-resolution filter banks for enhanced SAR imaging". 2012 International Conference on Systems and Informatics (ICSAI2012). pp. 2702–2706. doi:10.1109/ICSAI.2012.6223611. ISBN 978-1-4673-0199-2. S2CID 16302915.
21. Szu, Harold H.; Telfer, Brian A.; Lohmann, Adolf W. (1992). "Causal analytical wavelet transform". Optical Engineering. 31 (9): 1825. Bibcode:1992OptEn..31.1825S. doi:10.1117/12.59911.
22. Lindeberg, T. (23 January 2023). "A time-causal and time-recursive scale-covariant scale-space representation of temporal signals and past time". Biological Cybernetics. 117 (1–2): 21–59. doi:10.1007/s00422-022-00953-6. PMC 10160219. PMID 36689001.
23. Daubechies, Ingrid; Lu, Jianfeng; Wu, Hau-Tieng (2009-12-12). "Synchrosqueezed Wavelet Transforms: a Tool for Empirical Mode Decomposition". arXiv:0912.2437 [math.NA].
24. Qu, Hongya; Li, Tiantian; Chen, Genda (2019-01-01). "Synchro-squeezed adaptive wavelet transform with optimum parameters for arbitrary time series". Mechanical Systems and Signal Processing. 114: 366–377. Bibcode:2019MSSP..114..366Q. doi:10.1016/j.ymssp.2018.05.020. S2CID 126007150.
External links
Wikimedia Commons has media related to Wavelets.
• Amara Graps (June 1995). "An Introduction to Wavelets". IEEE Computational Science & Engineering. 2 (2): 50–61. doi:10.1109/99.388960.
• Robi Polikar (2001-01-12). "The Wavelet Tutorial".
• Concise Introduction to Wavelets by René Puschinger
| Wikipedia |
Domain theory
Domain theory is a branch of mathematics that studies special kinds of partially ordered sets (posets) commonly called domains. Consequently, domain theory can be considered as a branch of order theory. The field has major applications in computer science, where it is used to specify denotational semantics, especially for functional programming languages. Domain theory formalizes the intuitive ideas of approximation and convergence in a very general way and is closely related to topology.
Motivation and intuition
The primary motivation for the study of domains, which was initiated by Dana Scott in the late 1960s, was the search for a denotational semantics of the lambda calculus. In this formalism, one considers "functions" specified by certain terms in the language. In a purely syntactic way, one can go from simple functions to functions that take other functions as their input arguments. Using again just the syntactic transformations available in this formalism, one can obtain so-called fixed-point combinators (the best-known of which is the Y combinator); these, by definition, have the property that f(Y(f)) = Y(f) for all functions f.
To formulate such a denotational semantics, one might first try to construct a model for the lambda calculus, in which a genuine (total) function is associated with each lambda term. Such a model would formalize a link between the lambda calculus as a purely syntactic system and the lambda calculus as a notational system for manipulating concrete mathematical functions. The combinator calculus is such a model. However, the elements of the combinator calculus are functions from functions to functions; in order for the elements of a model of the lambda calculus to be of arbitrary domain and range, they could not be true functions, only partial functions.
Scott got around this difficulty by formalizing a notion of "partial" or "incomplete" information to represent computations that have not yet returned a result. This was modeled by considering, for each domain of computation (e.g. the natural numbers), an additional element that represents an undefined output, i.e. the "result" of a computation that never ends. In addition, the domain of computation is equipped with an ordering relation, in which the "undefined result" is the least element.
The important step to finding a model for the lambda calculus is to consider only those functions (on such a partially ordered set) that are guaranteed to have least fixed points. The set of these functions, together with an appropriate ordering, is again a "domain" in the sense of the theory. But the restriction to a subset of all available functions has another great benefit: it is possible to obtain domains that contain their own function spaces, i.e. one gets functions that can be applied to themselves.
Beside these desirable properties, domain theory also allows for an appealing intuitive interpretation. As mentioned above, the domains of computation are always partially ordered. This ordering represents a hierarchy of information or knowledge. The higher an element is within the order, the more specific it is and the more information it contains. Lower elements represent incomplete knowledge or intermediate results.
Computation then is modeled by applying monotone functions repeatedly on elements of the domain in order to refine a result. Reaching a fixed point is equivalent to finishing a calculation. Domains provide a superior setting for these ideas since fixed points of monotone functions can be guaranteed to exist and, under additional restrictions, can be approximated from below.
A guide to the formal definitions
In this section, the central concepts and definitions of domain theory will be introduced. The above intuition of domains being information orderings will be emphasized to motivate the mathematical formalization of the theory. The precise formal definitions are to be found in the dedicated articles for each concept. A list of general order-theoretic definitions, which include domain theoretic notions as well can be found in the order theory glossary. The most important concepts of domain theory will nonetheless be introduced below.
Directed sets as converging specifications
As mentioned before, domain theory deals with partially ordered sets to model a domain of computation. The goal is to interpret the elements of such an order as pieces of information or (partial) results of a computation, where elements that are higher in the order extend the information of the elements below them in a consistent way. From this simple intuition it is already clear that domains often do not have a greatest element, since this would mean that there is an element that contains the information of all other elements—a rather uninteresting situation.
A concept that plays an important role in the theory is that of a directed subset of a domain; a directed subset is a non-empty subset of the order in which any two elements have an upper bound that is an element of this subset. In view of our intuition about domains, this means that any two pieces of information within the directed subset are consistently extended by some other element in the subset. Hence we can view directed subsets as consistent specifications, i.e. as sets of partial results in which no two elements are contradictory. This interpretation can be compared with the notion of a convergent sequence in analysis, where each element is more specific than the preceding one. Indeed, in the theory of metric spaces, sequences play a role that is in many aspects analogous to the role of directed sets in domain theory.
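Directedness is a purely order-theoretic condition and can be checked mechanically on small finite examples. A minimal sketch in Python, using divisibility of integers as an illustrative order relation (the helper names are hypothetical):

```python
def is_directed(subset, leq):
    """A non-empty subset D is directed if every pair of its elements
    has an upper bound that is itself an element of D."""
    if not subset:
        return False
    return all(
        any(leq(a, c) and leq(b, c) for c in subset)
        for a in subset for b in subset
    )

# divisibility as the order relation: a <= b iff a divides b
divides = lambda a, b: b % a == 0

print(is_directed({1, 2, 4, 8}, divides))  # True: a chain is always directed
print(is_directed({2, 3}, divides))        # False: 2 and 3 have no upper bound in the set
```

Note that {2, 3} does have upper bounds (such as 6) in the ambient order; directedness requires the upper bound to lie inside the subset itself.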
Now, as in the case of sequences, we are interested in the limit of a directed set. According to what was said above, this would be an element that is the most general piece of information that extends the information of all elements of the directed set, i.e. the unique element that contains exactly the information that was present in the directed set, and nothing more. In the formalization of order theory, this is just the least upper bound of the directed set. As in the case of the limit of a sequence, the least upper bound of a directed set does not always exist.
Naturally, one has a special interest in those domains of computations in which all consistent specifications converge, i.e. in orders in which all directed sets have a least upper bound. This property defines the class of directed-complete partial orders, or dcpo for short. Indeed, most considerations of domain theory only consider orders that are at least directed complete.
From the underlying idea of partially specified results as representing incomplete knowledge, one derives another desirable property: the existence of a least element. Such an element models that state of no information—the place where most computations start. It also can be regarded as the output of a computation that does not return any result at all.
Computations and domains
Now that we have some basic formal descriptions of what a domain of computation should be, we can turn to the computations themselves. Clearly, these have to be functions, taking inputs from some computational domain and returning outputs in some (possibly different) domain. However, one would also expect that the output of a function will contain more information when the information content of the input is increased. Formally, this means that we want a function to be monotonic.
When dealing with dcpos, one might also want computations to be compatible with the formation of limits of a directed set. Formally, this means that, for some function f, the image f(D) of a directed set D (i.e. the set of the images of each element of D) is again directed and has as a least upper bound the image of the least upper bound of D. One could also say that f preserves directed suprema. Also note that, by considering directed sets of two elements, such a function also has to be monotonic. These properties give rise to the notion of a Scott-continuous function. Since this is usually unambiguous, one may also speak simply of continuous functions.
Approximation and finiteness
Domain theory is a purely qualitative approach to modeling the structure of information states. One can say that something contains more information, but the amount of additional information is not specified. Yet, there are some situations in which one wants to speak about elements that are in a sense much simpler (or much more incomplete) than a given state of information. For example, in the natural subset-inclusion ordering on some powerset, any infinite element (i.e. set) is much more "informative" than any of its finite subsets.
If one wants to model such a relationship, one may first want to consider the induced strict order < of a domain with order ≤. However, while this is a useful notion in the case of total orders, it does not tell us much in the case of partially ordered sets. Considering again inclusion-orders of sets, a set is already strictly smaller than another, possibly infinite, set if it contains just one less element. One would, however, hardly agree that this captures the notion of being "much simpler".
Way-below relation
A more elaborate approach leads to the definition of the so-called order of approximation, which is more suggestively also called the way-below relation. An element x is way below an element y if, for every directed set D with a supremum such that
$y\sqsubseteq \sup D$,
there is some element d in D such that
$x\sqsubseteq d$.
Then one also says that x approximates y and writes
$x\ll y$.
This does imply that
$x\sqsubseteq y$,
since the singleton set {y} is directed. For example, in an ordering of sets, an infinite set is way above any of its finite subsets. On the other hand, consider the directed set (in fact, the chain) of finite sets
$\{0\},\{0,1\},\{0,1,2\},\ldots $
Since the supremum of this chain is the set of all natural numbers N, this shows that no infinite set is way below N.
However, being way below some element is a relative notion and does not reveal much about an element alone. For example, one would like to characterize finite sets in an order-theoretic way, but even infinite sets can be way below some other set. The special property of these finite elements x is that they are way below themselves, i.e.
$x\ll x$.
An element with this property is also called compact. Yet, such elements do not have to be "finite" or "compact" in any other mathematical usage of the terms. The notation is nonetheless motivated by certain parallels to the respective notions in set theory and topology. The compact elements of a domain have the important special property that they cannot be obtained as a limit of a directed set in which they did not already occur.
Many other important results about the way-below relation support the claim that this definition is appropriate to capture many important aspects of a domain.
Bases of domains
The previous thoughts raise another question: is it possible to guarantee that all elements of a domain can be obtained as a limit of much simpler elements? This is quite relevant in practice, since we cannot compute infinite objects but we may still hope to approximate them arbitrarily closely.
More generally, we would like to restrict to a certain subset of elements as being sufficient for getting all other elements as least upper bounds. Hence, one defines a base of a poset P as being a subset B of P, such that, for each x in P, the set of elements in B that are way below x contains a directed set with supremum x. The poset P is a continuous poset if it has some base. In particular, P itself is a base in this situation. In many applications, one restricts to continuous (d)cpos as a main object of study.
Finally, an even stronger restriction on a partially ordered set is given by requiring the existence of a base of finite elements. Such a poset is called algebraic. From the viewpoint of denotational semantics, algebraic posets are particularly well-behaved, since they allow for the approximation of all elements even when restricting to finite ones. As remarked before, not every finite element is "finite" in a classical sense and it may well be that the finite elements constitute an uncountable set.
In some cases, however, the base for a poset is countable. In this case, one speaks of an ω-continuous poset. Accordingly, if the countable base consists entirely of finite elements, we obtain an order that is ω-algebraic.
Special types of domains
A simple special case of a domain is known as an elementary or flat domain. This consists of a set of incomparable elements, such as the integers, along with a single "bottom" element considered smaller than all other elements.
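A flat domain is easy to model in code: adjoin a bottom element to an ordinary set of values and order it as described. The sketch below uses Python's None as the bottom element (an illustrative choice, not a standard convention) and also shows the strict lifting of an ordinary function, which is monotone with respect to the flat order:

```python
BOTTOM = None  # the added least element: "no result yet"

def flat_leq(x, y):
    """x <= y on the flat domain: bottom is below everything,
    and two proper elements compare only when they are equal."""
    return x is BOTTOM or x == y

def lift(f):
    """Strict lifting of an ordinary function to the flat domain;
    the lifted function is monotone by construction."""
    return lambda x: BOTTOM if x is BOTTOM else f(x)

succ = lift(lambda n: n + 1)
print(succ(3))       # 4
print(succ(BOTTOM))  # None (bottom in, bottom out)
```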
One can obtain a number of other interesting special classes of ordered structures that could be suitable as "domains". We already mentioned continuous posets and algebraic posets. More special versions of both are continuous and algebraic cpos. Adding even further completeness properties one obtains continuous lattices and algebraic lattices, which are just complete lattices with the respective properties. For the algebraic case, one finds broader classes of posets that are still worth studying: historically, the Scott domains were the first structures to be studied in domain theory. Still wider classes of domains are constituted by SFP-domains, L-domains, and bifinite domains.
All of these classes of orders can be cast into various categories of dcpos, using functions that are monotone, Scott-continuous, or even more specialized as morphisms. Finally, note that the term domain itself is not exact and thus is only used as an abbreviation when a formal definition has been given before or when the details are irrelevant.
Important results
A poset D is a dcpo if and only if each chain in D has a supremum. (The 'if' direction relies on the axiom of choice.)
If f is a continuous function on a domain D then it has a least fixed point, given as the least upper bound of all finite iterations of f on the least element ⊥:
$\operatorname {fix} (f)=\bigsqcup _{n\in \mathbb {N} }f^{n}(\bot )$.
This is the Kleene fixed-point theorem. The $\sqcup $ symbol is the directed join.
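The theorem suggests a direct computation: start at the least element and apply f repeatedly until nothing changes. A sketch in Python, taking partial functions represented as dictionaries as the domain, with the empty dictionary as bottom; the factorial functional is a standard illustration, and all names here are hypothetical:

```python
def kleene_lfp(F, bottom, max_iter=100):
    """Iterate x, F(x), F(F(x)), ... from the least element, stopping
    when an iteration adds nothing new.  On this finite domain the
    ascending chain stabilises after finitely many steps."""
    x = bottom
    for _ in range(max_iter):
        nxt = F(x)
        if nxt == x:
            return x
        x = nxt
    return x

# The factorial functional on partial functions from {0..9} to N,
# represented as dicts; the empty dict is the bottom element.
def F(f):
    g = {}
    for n in range(10):
        if n == 0:
            g[n] = 1
        elif n - 1 in f:
            g[n] = n * f[n - 1]
    return g

fact = kleene_lfp(F, {})
print(fact[5])  # 120
```

Each application of F extends the partial function by one more argument, mirroring how the finite iterations $f^{n}(\bot )$ approximate the fixed point from below.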
Generalizations
• "Synthetic domain theory". CiteSeerX 10.1.1.55.903.
• Topological domain theory
• A continuity space is a generalization of metric spaces and posets that can be used to unify the notions of metric spaces and domains.
See also
• Category theory
• Codomain
• Denotational semantics
• Scott domain
• Scott information system
• Type theory
Further reading
• G. Gierz; K. H. Hofmann; K. Keimel; J. D. Lawson; M. Mislove; D. S. Scott (2003). "Continuous Lattices and Domains". Encyclopedia of Mathematics and its Applications. Vol. 93. Cambridge University Press. ISBN 0-521-80338-1.
• Samson Abramsky, Achim Jung (1994). "Domain theory" (PDF). In S. Abramsky; D. M. Gabbay; T. S. E. Maibaum (eds.). Handbook of Logic in Computer Science. Vol. III. Oxford University Press. pp. 1–168. ISBN 0-19-853762-X. Retrieved 2007-10-13.
• Alex Simpson (2001–2002). "Part III: Topological Spaces from a Computational Perspective". Mathematical Structures for Semantics. Archived from the original on 2005-04-27. Retrieved 2007-10-13.
• D. S. Scott (1975). "Data types as lattices". Proceedings of the International Summer Institute and Logic Colloquium, Kiel, in Lecture Notes in Mathematics. Springer-Verlag. 499: 579–651.
• Carl A. Gunter (1992). Semantics of Programming Languages. MIT Press. ISBN 9780262570954.
• B. A. Davey; H. A. Priestley (2002). Introduction to Lattices and Order (2nd ed.). Cambridge University Press. ISBN 0-521-78451-4.
• Carl Hewitt; Henry Baker (August 1977). "Actors and Continuous Functionals" (PDF). Proceedings of IFIP Working Conference on Formal Description of Programming Concepts. Archived (PDF) from the original on April 12, 2019.
• V. Stoltenberg-Hansen; I. Lindstrom; E. R. Griffor (1994). Mathematical Theory of Domains. Cambridge University Press. ISBN 0-521-38344-7.
External links
• Introduction to Domain Theory by Graham Hutton, University of Nottingham
Numbers (TV series)
Numbers (stylized as NUMB3RS) is an American crime drama television series that was broadcast on CBS from January 23, 2005, to March 12, 2010, for six seasons and 118 episodes. The series was created by Nicolas Falacci and Cheryl Heuton, and follows FBI Special Agent Don Eppes (Rob Morrow) and his brother Charlie Eppes (David Krumholtz), a college mathematics professor and prodigy, who helps Don solve crimes for the FBI. Brothers Ridley and Tony Scott produced Numbers; its production companies are the Scott brothers' Scott Free Productions and CBS Television Studios (originally Paramount Network Television, and later CBS Paramount Network Television).
Numbers
Genre
• Police procedural
• Crime drama
• Thriller
Created by
• Nicolas Falacci
• Cheryl Heuton
Starring
• Rob Morrow
• David Krumholtz
• Judd Hirsch
• Alimi Ballard
• Sabrina Lloyd
• Dylan Bruno
• Diane Farr
• Navi Rawat
• Sophina Brown
• Aya Sumika
• Peter MacNicol
Music byCharlie Clouser
Country of originUnited States
Original languageEnglish
No. of seasons6
No. of episodes118 (list of episodes)
Production
Executive producers
• Nicolas Falacci
• Cheryl Heuton
• Ridley Scott
• Tony Scott
• David W. Zucker
• Andrew Dettmann
• Don McGill
• Lewis Abel
• Ken Sanzel
• Barry Schindel
• Brooke Kennedy
Producers
• John Forrest Niss
• Michael Attanasio
• Christine Larson-Nitzsche
Production locationsLos Angeles, Pasadena
Running time43 minutes
Production companies
• The Barry Schindel Company (seasons 2–3)
• Post 109 Productions (season 6)
• Scott Free Productions
• Paramount Network Television (seasons 1–2)
• CBS Paramount Network Television (seasons 3–5)
• CBS Television Studios (season 6)
Release
Original networkCBS
Original releaseJanuary 23, 2005 (2005-01-23) –
March 12, 2010 (2010-03-12)
The show focuses equally on the relationships among Don Eppes, his brother Charlie Eppes, and their father, Alan Eppes (Judd Hirsch), and on the brothers' efforts to fight crime, usually in Los Angeles. A typical episode begins with a crime, which is subsequently investigated by a team of FBI agents led by Don and mathematically modeled by Charlie, with the help of Larry Fleinhardt (Peter MacNicol) and Amita Ramanujan (Navi Rawat). The insights provided by Charlie's mathematics were always in some way crucial to solving the crime.
On May 18, 2010, CBS canceled the series after six seasons.[1]
Cast and characters
The show revolved around three intersecting groups of characters: the FBI, scientists at the fictitious California Institute of Science (CalSci), and the Eppes family.
• Don Eppes (Rob Morrow), Charlie's older brother, is the lead FBI agent at the Los Angeles Violent Crimes Squad.
• Professor Charlie Eppes (David Krumholtz) is a mathematical genius, who in addition to teaching at CalSci, consults for the FBI and NSA.
• Alan Eppes (Judd Hirsch) is a former L.A. city planner, a widower, and the father of both Charlie and Don Eppes. Alan lives in a historic two-story California bungalow furnished with period Arts and Crafts furniture.
• David Sinclair (Alimi Ballard) is an FBI field agent and was later made Don's second-in-command and promoted to supervisor.
• Terry Lake (Sabrina Lloyd) is a forensic psychologist and FBI agent. (season 1)
• Prof. Larry Fleinhardt (Peter MacNicol) is a theoretical physicist and cosmologist at CalSci. Charlie's former mentor and now best friend, he also frequently consults for the FBI.
• Prof. Amita Ramanujan (Navi Rawat) is a mathematician at CalSci and an FBI consultant. In season two, she begins dating Charlie, to whom she is engaged and married in season six. Charlie was her thesis advisor. Her name is a reference to influential autodidactic Indian mathematician Srinivasa Ramanujan. (seasons 2–6, main; 1, recurring)
• Megan Reeves (Diane Farr) is an FBI behavioral specialist. She was involved romantically with Larry Fleinhardt and left the FBI to counsel troubled young women. (seasons 2–4)
• Colby Granger (Dylan Bruno) is an FBI field agent. Once thought to have betrayed his colleagues, he is now back in their good graces and confidence. (seasons 3–6, main; 2, recurring)
• Liz Warner (Aya Sumika) is an FBI agent, formerly involved with Agent Eppes. (seasons 5–6, main; 3–4, recurring)
• Nikki Betancourt (Sophina Brown) is an FBI agent with four years' experience in the LAPD and a law degree. (seasons 5–6)
Temporary characters on the show were often named after famous mathematicians. For example, in the episode "In Plain Sight" (season two, episode eight), one of the criminals is named Rolle and Charlie's father mentions a meeting with a man named Robert Peterson.
Episodes
Opening: (Voice-over by David Krumholtz) We all use math every day. To predict weather, to tell time, to handle money. Math is more than formulas and equations. It's logic; it's rationality. It's using your mind to solve the biggest mysteries we know.
Season 1 (2005)
The first season aired between January 23, 2005, and May 13, 2005, at 10:00 pm on Fridays. It started the working relationship between Los Angeles' FBI field office and Charlie Eppes. The main FBI agents are Charlie's brother, Don Eppes, and Terry Lake, as well as David Sinclair. Don and Charlie's father, Alan Eppes, provides emotional support for the pair, while Professor Larry Fleinhardt and doctoral student Amita Ramanujan provide mathematical support and insights to Charlie. Season one was a half-season, producing only 13 episodes. Sabrina Lloyd played Terry Lake, an agent, in this season; she was later replaced by Diane Farr, who played Megan Reeves.
Season 2 (2005–06)
The second season aired between September 23, 2005, and May 19, 2006, again at 10:00 pm on Fridays. Season two has several changes to Don's FBI team: Terry Lake is reassigned to Washington, and two new members join Don and David Sinclair: Megan Reeves and Colby Granger. Charlie is challenged on one of his long-standing mathematical workpieces and starts work on a new theory, cognitive emergence theory. Larry sells his home and assumes a nomadic lifestyle as he becomes romantically involved with Megan. Amita receives an offer for an assistant professor position at Harvard University, but is plagued by doubt as her relationship with Charlie is challenged and her career is in upheaval. Alan begins working and dating again, although he struggles with the loss of his wife, as does Charlie, who dreams of her.
Season 3 (2006–07)
Numbers was renewed for a third season,[2] which began airing at 10:00 pm on Friday, September 22, 2006, and ended on May 18, 2007. Charlie and Amita intensify their relationship, as do Larry and Megan, especially after Megan's kidnapping. Amita has trouble adjusting to her new role as a CalSci professor, and Larry announces his leave of absence; he will be on the International Space Station for six months, which greatly distresses Charlie. Charlie and his colleagues are troubled by Dr. Mildred Finch, the newly appointed chair of the CalSci Physics, Mathematics, and Astronomy Division, whom they learn has begun dating Alan. Meanwhile, Don dates Agent Liz Warner, questions his ethics and self-worth, and receives counseling. Charlie sees Don's therapist, and the two understand one another better. Despite Don's concerns, Alan engages in some FBI consulting with his engineering knowledge, and Larry returns from the space station, disillusioned. The finale wraps up with the revelation that Colby was a double agent for the Chinese.
Noticeable changes from previous seasons include removing the opening-credit sequence (credits are now done during the first segment of the show), the absence of Peter MacNicol's character for much of the season, and the absence of Diane Farr's character for a few episodes. Peter MacNicol appeared in the first 11 episodes before leaving for the television show 24, but returned to Numbers for the 21st episode of season three ("The Art of Reckoning"). His character's absence was written into the show by becoming a payload specialist on the International Space Station. Diane Farr, pregnant for most of the season, left the show for maternity leave in episode 18 ("Democracy"); her character's absence is explained as a particular assignment to the Department of Justice.
Season 4 (2007–08)
The season premiere aired on September 28, 2007, in the same time slot as in previous seasons, 10:00 pm Eastern Time.[3] Because of the writer's strike, only 12 episodes were initially produced. However, once the strike ended, CBS announced the show's return April 4, 2008, with six episodes.[4] The season ended on May 16, 2008.
As this season starts, Colby Granger escapes from jail and is revealed to be a triple agent. He then rejoins the team. Don and Liz break up halfway through this season after Liz has trouble with Don's trust issues. Amita's parents come to visit, which becomes a secondary theme throughout most of the season. Due to her work at the DOJ, Megan is conflicted by her work and turns to Larry. Near the end of the season, Don's girlfriend from season two, Robin Brooks, returns. Don and Robin then continue their relationship. Charlie attends FBI training camp because he has been working with Don for several years and wants to understand better what his brother does. In the season finale, Megan leaves the team to move back to Washington, DC, and Charlie goes head-to-head with Don about a case. This causes Charlie to send information to scientists in Pakistan. He is subsequently arrested and has his security clearance revoked to no longer help Don on cases. At the end of the episode, Don drives away to another case, and Charlie admits that giving up FBI work will be more challenging than he expected.
Several characters from previous seasons did not appear in season four, most notably Mildred Finch and Ian Edgerton.
Season 5 (2008–09)
The fifth season premiered on October 3, 2008, and the season finale aired on May 15, 2009. The season premiere was moved back one week to accommodate the 2008 presidential debates.[5]
Season five opens three weeks after "When Worlds Collide" (season four's finale), with the government dropping the charges against Charlie. Charlie gets his security clearance back after he and Don stand up to FBI Security Officer Carl McGowan. Don begins to explore Judaism. The team adds new agent Nikki Betancourt, who arrives shortly after Megan Reeves' departure. Robin is offered a promotion but turns it down. Buck Winters (from the episodes "Spree" and "Two Daughters") breaks out of prison and comes after Don. Alan suddenly finds himself coaching CalSci's basketball team. David becomes Don's primary relief supervisor. DARPA tries to recruit Charlie, but he turns down the offer. Toward the end of the season, Don is stabbed, and Charlie blames himself; the aftermath leads Charlie to focus more on his FBI consulting work. Amita is kidnapped, and the team races to find her. After she is rescued, Charlie proposes to Amita. Her response is left undisclosed.
"Disturbed" marked the 100th episode of Numbers.[6]
Season 6 (2009–10)
The sixth and final season premiered Friday, September 25, 2009, at 10:00 pm ET,[7] and the season finale aired on March 12, 2010, three days before Judd Hirsch's 75th birthday.
The season starts with the engagement of Charlie and Amita. Soon after, Larry turns down an opportunity to meet with mathematicians at CERN in Geneva and drops his course load for the following semester, leading Charlie to realize that Larry is once again going away and leaving all of his work to Charlie. Don learns that his former mentor is crooked, and shooting him causes Don angst. Charlie and Don realize that Alan has lost a substantial amount of money in his 401(k). After some delay, Larry leaves Los Angeles to find a vacant piece of land for sale within driving distance of the city. Alan decides to return to work and finds a job as a software technical consultant. David asks Don for advice about career paths within the FBI. Larry returns from the desert with a new theory about the universe's fate. Charlie and Amita begin planning their wedding and decide to join the Big Brother/Big Sister program to practice parenting skills. They marry before moving to England to teach at the University of Cambridge. Don loses his gun, recovers it after it is used in some vigilante murders, and gets engaged to Robin. He also decides to leave the team, taking an administrative position within the FBI. Before the move, Charlie and Amita decide that the family garage should be converted to a guest house so Alan can continue living with them. Leaving Colby, Liz, and Nikki behind, David departs for Washington, DC, to take a position as an anti-corruption team leader.
Home media
CBS DVD (distributed by Paramount Home Entertainment) has released all six seasons of Numb3rs on DVD in Regions 1, 2, and 4.
On June 2, 2017, CBS DVD released Numb3rs: The Complete Series on DVD in Region 1.[8]
• Season one — Released May 30, 2006 in the US and Canada (R1),[9] October 2, 2006 in the UK (R2),[10] and October 5, 2006 in Australia (R4).[11] 13 episodes,[9] 544 minutes,[9] 4 discs.[9] Region 1 extras: cast and crew commentaries for five episodes; "Crunching Numb3rs: Season 1"; "Point of Origin: Inside the Unaired Pilot"; "Do The Math: The Caltech Analysis"; "Charlievision: FX Sequences 1.0"; blooper reels; and audition reels.[12]
• Season two — Released October 10, 2006 (R1),[13] July 9, 2007 (R2),[14] and June 7, 2007 (R4).[15] 24 episodes,[13] 1037 minutes,[13] 6 discs.[13] Region 1 extras: cast and crew commentaries for six episodes; "Crunching Numb3rs: Season Two"; two "behind the scenes" videos (one with Nicholas Falacci, the other with David Krumholtz); and a blooper reel.[16]
• Season three — Released September 25, 2007 (R1),[17] February 9, 2009 (R2),[18] and July 10, 2008 (R4).[19] 24 episodes,[17] 1029 minutes,[17] 6 discs.[17] Region 1 extras: cast and crew commentaries for five episodes; "Crunching Numb3rs: Season 3"; a mini-documentary of the Eppes house; a blooper reel; and a tour of the Eppes' house set.[20]
• Season four — Released September 30, 2008 (R1),[21] July 13, 2009 (R2),[22] and July 2, 2009 (R4).[23] 18 episodes,[21] 767 minutes,[21] 5 discs.[21] Region 1 extras: "Crunching NUMB3RS: Trust Metric"; featurettes for two episodes; "The Tony Touch"; pre-production; and post-production.[24]
• Season five — Released October 20, 2009 (R1),[25] June 21, 2010 (R2),[26] and August 4, 2010 (R4).[27] 23 episodes, 983 minutes,[28] 6 discs.[29] Region 1 extras: cast and crew commentaries for three episodes; deleted scenes for "Thirty-Six Hours"; "Crunching NUMB3RS: Season Five" featurette; "Celebrating 100" featurette; and a blooper reel.[29]
• Season six — Released August 10, 2010 (R1), July 18, 2011 (R2), and July 21, 2011 (R4).[30] 16 episodes, 660 minutes, 4 discs. Region 1 extras: "Coming Full Circle: Numb3rs the Final Season"; "The Women of Numb3rs"; "Pixel Perfect: The Digital Cinematography of Numb3rs"; and a production photo gallery.
• Complete series — Released June 6, 2017 (R1) and December 1, 2011 (R4);[31] not released in Region 2. 118 episodes, 31 discs.
Awards and nominations
Nicolas Falacci and Cheryl Heuton, the show's creators, have won several awards for the show, including the Carl Sagan Award for Public Understanding of Science in 2006,[32] and the National Science Board's Public Service Award in 2007.[33] Also, the show's stunt coordinator, Jim Vickers, was nominated for an Emmy Award for Outstanding Stunt Coordination in 2006 for episode 14 of Season 2, "Harvest".[34]
Representation of mathematics
"We all use math every day. To predict weather, to tell time, to handle money. Math is more than formulas and equations. It's logic; it's rationality. It's using your mind to solve the biggest mysteries we know." (opening narration)
Several mathematicians work as consultants for each episode.[35][36][37] Actual mathematics is presented in the show: the equations on the chalkboards are mathematically valid and somewhat applicable to the situations in each episode. Professional mathematicians have affirmed both the validity and the applicability of these equations.[35][38][39]
The book The Numbers Behind NUMB3RS: Solving Crime with Mathematics (ISBN 0452288576; published August 28, 2007), written by Keith Devlin and Dr. Gary Lorden, a consultant to the show (alongside physics consultant Dr. Orara), explains some of the mathematical techniques that have been used both in actual FBI cases and by other law-enforcement departments.[40][41]
Since the premiere season, the blog edited by Prof. Mark Bridger (Northeastern University) has commented on the mathematics behind each episode of the show.[42]
Wolfram Research (the developers of Mathematica) served as the chief math consultant, reviewing scripts and providing background mathematics for the show. Starting with season four, the company ran a website in collaboration with CBS entitled "The math behind NUMB3RS".[43]
Alice Silverberg, a mathematician who consulted for the show, expressed concern with its use of mathematics, asserting that the math is inserted after the initial script is written in order to provide plausible-sounding jargon, rather than consultants being involved at all stages of story development.[44] Silverberg also criticized the show's portrayal of female mathematicians and questioned the appropriateness of the relationship between Charlie Eppes and his graduate student Amita Ramanujan.[44]
Production
The idea for Numbers was generated in the late 1990s when Nick Falacci and Cheryl Heuton, the show's creators, attended a lecture given by Bill Nye, a popular science educator.[45] The premise of the show is similar to that of author Colin Bruce's reimaginings of the Sherlock Holmes character,[46] and to the "Mathnet" segment on the children's television show Square One.
Gabriel Macht was originally cast to portray the character of Don Eppes.[47] Also, the original concept for the show had the events take place at Massachusetts Institute of Technology;[48] this was later changed to the fictional California Institute of Science, commonly called CalSci. Scenes which take place at CalSci are filmed at California Institute of Technology (Caltech)[48] and the University of Southern California. One of the most frequent campus locations at Caltech is the vicinity of Millikan Library, including the bridge over Millikan Pond, the Trustees room, and the arcades of nearby buildings. At USC, locations include Doheny Library and the Town and Gown dining room. Exteriors for the FBI offices are on the distinctive bridge at Los Angeles Center Studios.[49]
Another common location is the Craftsman home of the Eppes family. The house shown in the first season is real; it is owned by David Raposa and Edward Trosper,[50] although a replica set was used from the second season onwards.[51]
Title of the show
The show's title replaces the letter "e" with the numeral 3, a substitution found in leetspeak. In interviews with Tom Jicha of the South Florida Sun-Sentinel and with Alan Pergament of The Buffalo News, Heuton said that the use of the number three in the title derives from leet, a form of computer jargon that replaces letters with numbers.[52][53] Dr. Gary Lorden, a California Institute of Technology mathematics professor who served as the show's mathematics consultant, told NPR's Ira Flatow that the title can be typed on a normal computer keyboard. Lorden also mentioned that the number three in the title helps narrow Internet searches about the series.[54]
Both entertainment reporters and psychologists noticed the title's spelling. Some reporters, such as Joanne Ostrow of the Denver Post,[55] the staff of People Magazine,[56] the editors of The Futon Critic,[57] the staff of the Scripps Howard News Service,[58] and Mike Hughes of USA Today,[59] acknowledged the presence of the number three in the title. Lynette Rice of Entertainment Weekly asked Krumholtz about the three in the title; his response was, "Isn't that annoying? I think it should be the mathematical symbol for sigma, which looks like an E. I've been fighting that for weeks."[60] (The sigma (Σ) stands for summation.[61]) Others used varying adjectives to describe the title. The TV site Zap2it.com called it "their typographical silliness, not ours".[62] Brad Aspey of The Muskegon Chronicle stated, "No, that wasn't an ugly typo you just read - "NUMB3RS" (pronounced numbers) is the idiosyncratic title of filmmakers Ridley and Tony Scott's astute and crafty psychological drama which shows that even math can make for edge-of-your-seat entertainment."[63] Ellen Gray of The Philadelphia Daily News said, "Some of you may have noticed that in promoting "Numb3rs," which premieres Sunday before moving to its regular 10 p.m. Friday slot, CBS has chosen to put a 3 in place of the "e" in the title....I won't be going along with this particular affectation, which slows down my typing and seems to be the graphic equivalent of the reversed "R" in Toys R Us. So there."[64]
Still others had a more positive view of the title. When NPR's Flatow asked both Lorden and Dr. Keith Devlin, NPR's mathematics reporter, about the title, both men denied creating it; Devlin believed that executive producer Tony Scott originated the title. Lorden stated that he initially thought the title was "kind of hokey", but later saw it as "brilliant" and a "catchy logo".[54] Jonathan Storm of The Philadelphia Inquirer stated in his review of the series, "You'd think CBS's new Numbers, which premieres at 10 tonight after the Patriots-Steelers football game, is just another one of those shows with numskull titles trying to draw attention to themselves. But the '3' substituting for the 'e' is actually based on a real thing".... He later said that the show was "written by people familiar with the Dead Cow Cult".[65] David Brooks of The Telegraph (Nashua, NH) devoted most of his review to the use of leet in the series title.[66] In addition, three psychologists, Manuel Perea, Jon Andoni Duñabeitia, and Manuel Carreiras, mentioned the television series in their 2008 article for the American Psychological Association's Journal of Experimental Psychology: Human Perception and Performance.[67]
American television ratings
Seasonal rankings (based on average total viewers per episode) of Numb3rs on CBS.[68]
Note: Each U.S. network television season starts in late September and ends in late May, which coincides with the completion of May sweeps.
Season | Timeslot | Season premiere | Season finale | Episodes | TV season | Ranking | Viewers (in millions)
1st | Friday 10:00 PM | January 23, 2005 | May 13, 2005 | 13 | 2004–2005 | #36 | 10.77[69]
2nd | Friday 10:00 PM | September 23, 2005 | May 19, 2006 | 24 | 2005–2006 | #32 | 11.62[70]
3rd | Friday 10:00 PM | September 22, 2006 | May 18, 2007 | 24 | 2006–2007 | #38 | 10.5[71]
4th[71] | Friday 10:00 PM | September 28, 2007 | May 16, 2008 | 18 | 2007–2008 | #55 | 9.14[72]
5th | Friday 10:00 PM | October 3, 2008 | May 15, 2009 | 23 | 2008–2009 | #37 | 10.29[73]
6th | Friday 10:00 PM | September 25, 2009 | March 12, 2010 | 16[74] | 2009–2010 | #46 | 8.45[75]
• Note: The pilot episode aired on Sunday before moving to its regular night on Friday.
International broadcasting
Australia: Network Ten
Austria: Paramount Network
Spain: Calle 13
Netherlands: NET 5
Denmark: 13th Street
France: Universal Channel, M6, 6ter, RTL TVL
India: AXN
Japan: Fox Crime
Poland: Hallmark Channel
South Africa: Universal TV
Italy: Rai 4
Hungary: Viasat3
Russia: Universal Channel
References
1. "Report: CBS cancels 'Ghost Whisperer,' 'Numbers' and five more shows". Archived November 11, 2010, at the Wayback Machine. Zap2It. May 18, 2010.
2. Mahan, Colin (March 6, 2006). "Voila! CBS renews 14 shows at once". TV.com. Retrieved August 25, 2007.
3. "CBS 2007 Fall Preview". CBS. Retrieved August 24, 2007.
4. "CBS Sets Series Return Dates". Zap2it. Archived from the original on February 16, 2008. Retrieved February 13, 2008.
5. "CBS Sets Fall Premiere Dates". Broadcasting & Media. Retrieved June 20, 2009.
6. Kinon, Cristina (May 1, 2009). "Hawking counts 'Numb3rs' as a fave". New York Daily News. Retrieved September 9, 2009.
7. "Fall TV: CBS Announces Premiere Dates". TVGuide.com. Retrieved June 24, 2009.
8. Add the Six Seasons Together in 'The Complete Series' this Summer! Archived March 29, 2017, at the Wayback Machine
9. "Numb3rs – The 1st Season". TVShowsOnDVD.com. May 30, 2006. Archived from the original on November 4, 2008. Retrieved February 17, 2009.
10. "Numb3rs – Season 1 DVD". Amazon. Retrieved February 17, 2009.
11. "NUMB3RS – SEASON 1 (COMPLETE)". BIGWentertainment. October 12, 2006. Archived from the original on August 15, 2007. Retrieved February 17, 2009.
12. "Numb3rs – The Complete First Season (2005)". Amazon. Retrieved August 25, 2007.
13. "Numb3rs – The 2nd Season". TVShowsOnDVD.com. Archived from the original on November 16, 2006. Retrieved February 17, 2009.
14. "Numb3rs Season 2". Amazon. July 9, 2007. Retrieved February 17, 2009.
15. "NUMB3RS – SEASON 2 (COMPLETE)". BIGWentertainment. June 7, 2007. Archived from the original on February 9, 2008. Retrieved February 17, 2009.
16. "Numb3rs – The Complete Second Season (2005)". Amazon. Retrieved August 25, 2007.
17. "Numb3rs – The 3rd Season". TVShowsOnDVD.com. September 25, 2007. Archived from the original on September 17, 2012. Retrieved February 17, 2009.
18. "Numb3rs Season 3 DVD; 2006". Amazon. February 9, 2009. Retrieved February 17, 2009.
19. "NUMB3RS – SEASON 3 (COMPLETE)". BIGWentertainment. July 10, 2008. Archived from the original on April 15, 2009. Retrieved February 17, 2009.
20. "Numb3rs – The Third Season (2005)". Amazon. Retrieved August 25, 2007.
21. "Numb3rs – The Complete 4th Season". TVShowsOnDVD.com. September 30, 2008. Archived from the original on January 5, 2013. Retrieved February 17, 2009.
22. "Numb3rs Season 3 DVD; 2006". Amazon. Retrieved May 29, 2009.
23. "Numbers – Season 4". Archived from the original on July 6, 2011. Retrieved June 14, 2009.
24. "Numb3rs – The Fourth Season (2008)". Amazon. Retrieved January 3, 2009.
25. "Numb3rs – The Complete 5th Season". TVShowsOnDVD.com. September 30, 2008. Archived from the original on August 2, 2009. Retrieved February 17, 2009.
26. "Numbers Season 5 on DVD". LoveFilm.com. April 9, 2010. Archived from the original on June 11, 2010. Retrieved April 9, 2010.
27. "Australia's largest DVD store". EzyDVD. Retrieved February 28, 2011.
28. "Numbers – Season 5". Amazon. Retrieved June 14, 2009.
29. "Numb3rs – The Fifth Season". Barnes & Noble.com. Archived from the original on June 6, 2011. Retrieved December 21, 2009.
30. "Numb3rs (Numbers) - The Final Season (4 Disc Set)". Ezydvd.com.au. July 20, 2011. Archived from the original on June 10, 2011. Retrieved August 18, 2012.
31. Australia, Dicksmith. "Numb3rs Complete Series Collection DVD Region 4 | Drama |". Dicksmith Australia. Retrieved November 30, 2019.
32. "Official Numb3rs website". CBS. Retrieved May 13, 2006.
33. "The "Numb3rs" Add Up: Popular TV Show and Its Creators Receive Public Service Award". National Science Board. April 16, 2007. Retrieved August 25, 2007.
34. "Awards for "Numb3rs"". IMDb. Retrieved August 25, 2007.
35. Weisstein, Eric. "The Math(ematica) behind Television's Crime Drama NUMB3RS". Archived from the original on November 11, 2007. Retrieved August 24, 2007.
36. "Hollywood Math and Science Film Consulting". Retrieved August 24, 2007.
37. Weise, Elizabeth (February 9, 2005). "They're Calculatingly Cool". USA Today. Retrieved October 13, 2007.
38. Devlin, Keith. "NUMB3RS gets the math right". Mathematical Association of America. Retrieved August 24, 2007.
39. Pegg, Ed Jr. (January 21, 2005). "Math Games – The NUMB3RS TV show". Mathematical Association of America. Retrieved August 24, 2007.
40. The Numbers Behind NUMB3RS: Solving Crime with Mathematics (Paperback). Amazon.com. 2007. ISBN 978-0-452-28857-7.
41. Lady Shelley. "NUMB3RS Books and DVDs". Archived from the original on August 21, 2007. Retrieved August 25, 2007.
42. "Numb3rs blog". Northeastern University. June 14, 2005. Archived from the original on February 2, 2007. Retrieved June 15, 2010.
43. "NUMB3RS Episode 412-Power-Wolfram Research Math Notes". Numb3rs.wolfram.com. Retrieved June 15, 2010.
44. Silverberg, Alice (November 2006). "Alice in NUMB3Rland" (PDF). FOCUS. 26 (8): 12–13.
45. "The Numb3rs Guy". Time Magazine. December 4, 2005. Retrieved August 25, 2007.
46. Jonas, Gerald (October 5, 1997). "The Strange Case". The New York Times.
47. Cheryl Heuton (co-creator/co-executive producer), Nicolas Falacci (co-creator-executive producer), Ridley Scott (executive producer), Tony Scott (executive producer), David W. Zucker (co-executive producer), Mark Saks (casting director), Skip Chaissom (producer) (2006). Point of Origin: Inside the Unaired Pilot (DVD (Numb3rs: Season 1)). CBS Studios, Inc.
48. ""Numb3rs" (2005) Trivia". IMDb. Retrieved August 25, 2007.
49. ""Numb3rs" (2005) - Filming locations". Us.imdb.com. May 1, 2009. Retrieved June 15, 2010.
50. Hobart, Christy (February 17, 2005). "Arts and Crafts by the 'Numb3rs'". Los Angeles Times.
51. "NUMB3RS.org forum". Retrieved May 30, 2007.
52. Jicha, Tom (January 22, 2005). "Doing the Math – CBS Hopes Its New Equation for the Crime Drama, Pairing a Detective and a Genius, Will Add up to a Hit". South Florida Sun-Sentinel. Fort Lauderdale, Florida. Retrieved February 26, 2015. – via Newsbank (subscription required)
53. Pergament, Alan (January 20, 2005). "'Numb3rs' Viewers Probably Won't Be Willing to Do the Math". The Buffalo News. Buffalo, New York: The Buffalo News. Retrieved February 27, 2015. – via Newsbank (subscription required)
54. National Public Radio (October 5, 2007). "The 'Numb3rs' Don't Lie". National Public Radio. National Public Radio. Retrieved February 27, 2015.
55. Ostrow, Joanne (December 8, 2004). "Networks are busy, it must be midseason". Denver Post. Denver, Colorado: Denver Post. Retrieved February 26, 2015.
56. Staff Report (January 31, 2005). "Picks and Pans Review: Numb3rs". People Magazine. New York City, New York: People Magazine. Archived from the original on April 2, 2015. Retrieved February 26, 2015.
57. The Futon Critic Staff (November 29, 2004). "Numb3rs' Add Up to a January Premiere". The Futon Critic. The Futon Critic. Retrieved February 26, 2015.
58. Staff Report (December 28, 2004). "'Committed' is TV's best offering for January". Scripps Howard News Service. Cincinnati, Ohio: Scripps Howard News Service. Retrieved February 26, 2015. – via Newsbank (subscription required)
59. Hughes, Mike (January 7, 2005). "'Numb3rs' adds up for new series' stars". USA Today. Tysons Corner, Virginia: USA Today. Retrieved February 26, 2015. – via Newsbank (subscription required)
60. Rice, Lynette (February 14, 2005). "David Krumholtz explains how it all adds up". Entertainment Weekly. Retrieved February 11, 2014.
61. Weiss, David J. (September 21, 2001). "Review of Mathematical Symbols". Psy302: Statistical Methods. California State University, Los Angeles. Retrieved February 11, 2014.
62. dmason (December 4, 2004). "CBS is counting on 'Numb3rs'". Ventura County Star. Camarillo, California: Ventura County Star. Retrieved February 28, 2015. – via Newsbank (subscription required)
63. Aspey, Brad (January 28, 2005). "'Regular' TV finally getting the message from cable". The Muskegon Chronicle. Muskegon, Michigan: The Muskegon Chronicle. Retrieved February 28, 2015. – via Newsbank (subscription required)
64. Gray, Ellen (January 22, 2005). "Math Adds Up to Zero for 'Numb3rs' Star". The Philadelphia Daily News. Philadelphia, Pennsylvania: Watertown Daily Times. Retrieved February 28, 2015. – via Newsbank (subscription required)
65. Storm, Jonathan (January 23, 2005). "Solving Crimes by the 'Numb3rs'". The Philadelphia Inquirer. Philadelphia, Pennsylvania: The Philadelphia Inquirer. Retrieved February 27, 2015. – via Newsbank (subscription required)
66. Brooks, David (January 19, 2005). "C4n y0u r34d 7h15? Maybe you can work for CBS". The Telegraph (Nashua, NH). Nashua, New Hampshire: The Telegraph (Nashua, NH). Retrieved February 28, 2015. – via Newsbank (subscription required)
67. Perea, Manuel; Duñabeitia, Jon Andoni; Carreiras, Manuel Carreiras (2008). "Observations: R34d1ng W0rd5 w1th Numb3r5" (PDF). Journal of Experimental Psychology: Human Perception and Performance. United States: American Psychological Association. 34 (1): 237–241. doi:10.1037/0096-1523.34.1.237. PMID 18248151. Retrieved February 11, 2014.
68. "Ratings search for Numb3rs". ABC Media Net. Archived from the original on January 7, 2009. Retrieved June 15, 2010.
69. "Viewer numbers of the official 2004–2005 U.S. television season". American Broadcasting Company. Archived from the original on March 10, 2007.
70. "Viewer numbers of the official 2005–2006 U.S. television season". American Broadcasting Company. Archived from the original on March 10, 2007.
71. "2006–07 primetime wrap – Series programming results". The Hollywood Reporter. May 25, 2007. Archived from the original on August 15, 2007. Retrieved August 25, 2007.
72. "Televisionista: TV Ratings: 2007–2008 Season Top-200". Televisionista.blogspot.com. June 1, 2008. Retrieved June 15, 2010.
73. "ABC Medianet". ABC Medianet. Retrieved June 15, 2010.
74. Ausiello, Michael (November 4, 2009). "This just in: CBS trims 'Numb3rs,' orders more 'NCIS' and 'Mother'". Entertainment Weekly. Archived from the original on November 7, 2009. Retrieved November 21, 2019.
75. "Deadline.com: Full Series Rankings for The 2009-2010 Broadcast Season". Deadline. May 27, 2010.
External links
• The Math Behind Numb3rs Website
• Numbers at IMDb
Weaire–Phelan structure
In geometry, the Weaire–Phelan structure is a three-dimensional structure representing an idealised foam of equal-sized bubbles, with two different shapes. In 1993, Denis Weaire and Robert Phelan found that this structure was a better solution of the Kelvin problem of tiling space by equal volume cells of minimum surface area than the previous best-known solution, the Kelvin structure.[1]
• Space group: Pm3n (223)
• Fibrifold notation: 2o
• Coxeter notation: [[4,3,4]+]
History and the Kelvin problem
In two dimensions, the subdivision of the plane into cells of equal area with minimum average perimeter is given by the hexagonal tiling, but although the first record of this honeycomb conjecture goes back to the ancient Roman scholar Marcus Terentius Varro, it was not proven until the work of Thomas C. Hales in 1999.[2] In 1887, Lord Kelvin asked the corresponding question for three-dimensional space: how can space be partitioned into cells of equal volume with the least area of surface between them? Or, in short, what was the most efficient soap bubble foam?[3] This problem has since been referred to as the Kelvin problem.
Kelvin proposed a foam called the Kelvin structure. His foam is based on the bitruncated cubic honeycomb, a convex uniform honeycomb formed by the truncated octahedron, a space-filling convex polyhedron with 6 square faces and 8 hexagonal faces. However, this honeycomb does not satisfy Plateau's laws, formulated by Joseph Plateau in the 19th century, according to which minimal foam surfaces meet at $120^{\circ }$ angles at their edges, with these edges meeting each other in sets of four at the tetrahedral angle of $\arccos(-{\tfrac {1}{3}})\approx 109.47^{\circ }$. The angles of the polyhedral structure are different; for instance, its edges meet at angles of $90^{\circ }$ on square faces, or $120^{\circ }$ on hexagonal faces. Therefore, Kelvin's proposed structure uses curvilinear edges and slightly warped minimal surfaces for its faces, obeying Plateau's laws and reducing the area of the structure by 0.2% compared with the corresponding polyhedral structure.[1][3]
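The angles required by Plateau's laws can be verified with a short numerical check (a minimal illustrative sketch, not part of the original article):

```python
import math

# Plateau's laws: soap films meet in threes at 120 degrees along each
# edge, and edges meet in fours at the tetrahedral angle arccos(-1/3).
tetrahedral_angle_deg = math.degrees(math.acos(-1 / 3))

print(round(tetrahedral_angle_deg, 2))  # 109.47
```

This confirms that the tetrahedral edge angle is about 109.47°, which differs from the 90° and 120° edge angles of the flat-faced truncated octahedron, hence the need for curved edges and faces in Kelvin's foam.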
Although Kelvin did not state it explicitly as a conjecture,[4] the idea that the foam of the bitruncated cubic honeycomb is the most efficient foam, and solves Kelvin's problem, became known as the Kelvin conjecture. It was widely believed, and no counter-example was known for more than 100 years. Finally, in 1993, Trinity College Dublin physicist Denis Weaire and his student Robert Phelan discovered the Weaire–Phelan structure through computer simulations of foam, and showed that it was more efficient, disproving the Kelvin conjecture.[1]
Since the discovery of the Weaire–Phelan structure, other counterexamples to the Kelvin conjecture have been found, but the Weaire–Phelan structure continues to have the smallest known surface area per cell of these counterexamples.[5][6][7] Although numerical experiments suggest that the Weaire–Phelan structure is optimal, this remains unproven.[8] In general, it has been very difficult to prove the optimality of structures involving minimal surfaces. The minimality of the sphere as a surface enclosing a single volume was not proven until the 19th century, and the next simplest such problem, the double bubble conjecture on enclosing two volumes, remained open for over 100 years until being proven in 2002.[9]
Description
The two cell types: the irregular dodecahedron and the tetrakaidecahedron.
The Weaire–Phelan structure differs from Kelvin's in that it uses two kinds of cells, although they have equal volume. Like the cells in Kelvin's structure, these cells are combinatorially equivalent to convex polyhedra. One is a pyritohedron, an irregular dodecahedron with pentagonal faces, possessing tetrahedral symmetry (Th). The second is a form of truncated hexagonal trapezohedron, a species of tetrakaidecahedron with two hexagonal and twelve pentagonal faces, in this case only possessing two mirror planes and a rotoreflection symmetry. Like the hexagons in the Kelvin structure, the pentagons in both types of cells are slightly curved. The surface area of the Weaire–Phelan structure is 0.3% less than that of the Kelvin structure.[1]
The tetrakaidecahedral cells, linked face-to-face along their hexagonal faces, form chains in three perpendicular directions. A combinatorially equivalent structure to the Weaire–Phelan structure can be made as a tiling of space by unit cubes, lined up face-to-face into infinite square prisms in the same way to form a structure of interlocking prisms called tetrastix. These prisms surround cubical voids which form one fourth of the cells of the cubical tiling; the remaining three fourths of the cells fill the prisms, offset by half a unit from the integer grid aligned with the prism walls. Similarly, in the Weaire–Phelan structure itself, which has the same symmetries as the tetrastix structure, 1/4 of the cells are dodecahedra and 3/4 are tetrakaidecahedra.[10]
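The 1 : 3 ratio of dodecahedral to tetrakaidecahedral cells can be read off from the arrangement of cell centres in one cubic unit cell. The sketch below uses the conventional Wyckoff coordinates of the A15 (Cr3Si) structure in space group Pm3n, assumed here purely for illustration:

```python
from fractions import Fraction as F

half, quarter = F(1, 2), F(1, 4)

# Centres of the dodecahedral cells: the body-centred cubic sites
# (Wyckoff position 2a of space group Pm-3n).
dodecahedra = [(0, 0, 0), (half, half, half)]

# Centres of the tetrakaidecahedral cells: pairs on the cube faces
# (Wyckoff position 6c), which line up into the three perpendicular
# chains of hexagon-sharing cells described above.
tetrakaidecahedra = [
    (quarter, 0, half), (3 * quarter, 0, half),
    (half, quarter, 0), (half, 3 * quarter, 0),
    (0, half, quarter), (0, half, 3 * quarter),
]

cells = len(dodecahedra) + len(tetrakaidecahedra)
print(len(dodecahedra) / cells, len(tetrakaidecahedra) / cells)  # 0.25 0.75
```

Each cubic unit cell thus contains two dodecahedra and six tetrakaidecahedra, reproducing the 1/4 : 3/4 split stated above.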
The polyhedral honeycomb associated with the Weaire–Phelan structure (obtained by flattening the faces and straightening the edges) is also referred to loosely as the Weaire–Phelan structure. It was known well before the Weaire–Phelan structure was discovered, but the application to the Kelvin problem was overlooked.[11]
Applications
In physical systems
Experiments have shown that, with favorable boundary conditions, equal-volume bubbles spontaneously self-assemble into the Weaire–Phelan structure.[12][13]
The associated polyhedral honeycomb is found in two related geometries of crystal structure in chemistry. Where the components of the crystal lie at the centres of the polyhedra it forms one of the Frank–Kasper phases, the A15 phase.[14]
Where the components of the crystal lie at the corners of the polyhedra, it is known as the "Type I clathrate structure". Gas hydrates formed by methane, propane, and carbon dioxide at low temperatures have a structure in which water molecules lie at the nodes of the Weaire–Phelan structure and are hydrogen-bonded together, and the larger gas molecules are trapped in the polyhedral cages.[11] Some alkali metal hydrides, silicides, and germanides also form this structure, with silicon or germanium at the nodes and alkali metals in the cages.[1][15][16]
In architecture
The Weaire–Phelan structure is the inspiration for the design by Tristram Carfrae of the Beijing National Aquatics Centre, the 'Water Cube', for the 2008 Summer Olympics.[17]
See also
• The Pursuit of Perfect Packing, a book by Weaire on this and related problems
References
1. Weaire, D.; Phelan, R. (1994), "A counter-example to Kelvin's conjecture on minimal surfaces", Phil. Mag. Lett., 69 (2): 107–110, Bibcode:1994PMagL..69..107W, doi:10.1080/09500839408241577.
2. Hales, T. C. (2001), "The honeycomb conjecture", Discrete & Computational Geometry, 25 (1): 1–22, doi:10.1007/s004540010071, MR 1797293
3. Lord Kelvin (Sir William Thomson) (1887), "On the Division of Space with Minimum Partitional Area" (PDF), Philosophical Magazine, 24 (151): 503, doi:10.1080/14786448708628135.
4. Weaire & Phelan (1994) write that it is "implicit rather than directly stated in Kelvin's original papers"
5. Sullivan, John M. (1999), "The geometry of bubbles and foams", Foams and emulsions (Cargèse, 1997), NATO Advanced Science Institutes Series E: Applied Sciences, vol. 354, Kluwer, pp. 379–402, MR 1688327
6. Gabbrielli, Ruggero (1 August 2009), "A new counter-example to Kelvin's conjecture on minimal surfaces", Philosophical Magazine Letters, 89 (8): 483–491, Bibcode:2009PMagL..89..483G, doi:10.1080/09500830903022651, ISSN 0950-0839, S2CID 137653272
7. Freiberger, Marianne (24 September 2009), "Kelvin's bubble burst again", Plus Magazine, University of Cambridge, retrieved 4 July 2017
8. Oudet, Édouard (2011), "Approximation of partitions of least perimeter by Γ-convergence: around Kelvin's conjecture", Experimental Mathematics, 20 (3): 260–270, doi:10.1080/10586458.2011.565233, MR 2836251, S2CID 2945749
9. Morgan, Frank (2009), "Chapter 14. Proof of Double Bubble Conjecture", Geometric Measure Theory: A Beginner's Guide (4th ed.), Academic Press.
10. Conway, John H.; Burgiel, Heidi; Goodman-Strauss, Chaim (2008), "Understanding the Irish Bubbles", The Symmetries of Things, Wellesley, Massachusetts: A K Peters, p. 351, ISBN 978-1-56881-220-5, MR 2410150
11. Pauling, Linus (1960), The Nature of the Chemical Bond (3rd ed.), Cornell University Press, p. 471
12. Gabbrielli, R.; Meagher, A.J.; Weaire, D.; Brakke, K.A.; Hutzler, S. (2012), "An experimental realization of the Weaire-Phelan structure in monodisperse liquid foam" (PDF), Phil. Mag. Lett., 92 (1): 1–6, Bibcode:2012PMagL..92....1G, doi:10.1080/09500839.2011.645898, S2CID 25427974.
13. Ball, Philip (2011), "Scientists make the 'perfect' foam: Theoretical low-energy foam made for real", Nature, doi:10.1038/nature.2011.9504, S2CID 136626668.
14. Frank, F. C.; Kasper, J. S. (1958), "Complex alloy structures regarded as sphere packings. I. Definitions and basic principles" (PDF), Acta Crystallogr., 11 (3): 184–190, doi:10.1107/s0365110x58000487. Frank, F. C.; Kasper, J. S. (1959), "Complex alloy structures regarded as sphere packings. II. Analysis and classification of representative structures", Acta Crystallogr., 12 (7): 483–499, doi:10.1107/s0365110x59001499.
15. Kasper, J. S.; Hagenmuller, P.; Pouchard, M.; Cros, C. (December 1965), "Clathrate structure of silicon Na8Si46 and NaxSi136 (x < 11)", Science, 150 (3704): 1713–1714, Bibcode:1965Sci...150.1713K, doi:10.1126/science.150.3704.1713, PMID 17768869, S2CID 21291705
16. Cros, Christian; Pouchard, Michel; Hagenmuller, Paul (December 1970), "Sur une nouvelle famille de clathrates minéraux isotypes des hydrates de gaz et de liquides, interprétation des résultats obtenus", Journal of Solid State Chemistry, 2 (4): 570–581, Bibcode:1970JSSCh...2..570C, doi:10.1016/0022-4596(70)90053-8
17. Fountain, Henry (August 5, 2008), "A Problem of Bubbles Frames an Olympic Design", New York Times
External links
• 3D models of the Weaire–Phelan, Kelvin and P42a structures
• Weaire–Phelan Bubbles page with illustrations and freely downloadable 'nets' for printing and making models.
• "Weaire-Phelan Smart Modular Space Settlement", Alexandru Pintea, 2017, Individual First Prize NASA Ames Space Settlement Contest
Tricategory
In mathematics, a tricategory is a kind of structure of category theory studied in higher-dimensional category theory.
Whereas a weak 2-category is said to be a bicategory,[1] a weak 3-category is said to be a tricategory (Gordon, Power & Street 1995; Baez & Dolan 1996; Leinster 1998).[2][3][4]
Tetracategories are the corresponding notion in dimension four. Dimensions beyond three are seen as increasingly significant to the relationship between knot theory and physics. John Baez, R. Gordon, A. J. Power and Ross Street have done much of the significant work with categories beyond bicategories thus far.
See also
• Weak n-category
References
1. Bénabou, Jean (1967). "Introduction to bicategories". Reports of the Midwest Category Seminar. Lecture Notes in Mathematics. Vol. 47. Springer Berlin Heidelberg. pp. 1–77. doi:10.1007/bfb0074299. ISBN 978-3-540-03918-1.
2. Gordon, R.; Power, A. J.; Street, Ross (1995). "Coherence for tricategories". Memoirs of the American Mathematical Society. 117 (558). doi:10.1090/memo/0558. ISSN 0065-9266.
3. Baez, John C.; Dolan, James (10 May 1998). "Higher-Dimensional Algebra III.n-Categories and the Algebra of Opetopes". Advances in Mathematics. 135 (2): 145–206. arXiv:q-alg/9702014. doi:10.1006/aima.1997.1695. ISSN 0001-8708.
4. Leinster, Tom (2002). "A survey of definitions of n-category". Theory and Applications of Categories. 10: 1–70. arXiv:math/0107188.
External links
• The Dimensional Ladder
• Branches of higher dimensional algebra
Category theory
Key concepts
• Category
• Adjoint functors
• CCC
• Commutative diagram
• Concrete category
• End
• Exponential
• Functor
• Kan extension
• Morphism
• Natural transformation
• Universal property
Universal constructions
Limits
• Terminal objects
• Products
• Equalizers
• Kernels
• Pullbacks
• Inverse limit
Colimits
• Initial objects
• Coproducts
• Coequalizers
• Cokernels and quotients
• Pushout
• Direct limit
Algebraic categories
• Sets
• Relations
• Magmas
• Groups
• Abelian groups
• Rings (Fields)
• Modules (Vector spaces)
Constructions on categories
• Free category
• Functor category
• Kleisli category
• Opposite category
• Quotient category
• Product category
• Comma category
• Subcategory
Higher category theory
Key concepts
• Categorification
• Enriched category
• Higher-dimensional algebra
• Homotopy hypothesis
• Model category
• Simplex category
• String diagram
• Topos
n-categories
Weak n-categories
• Bicategory (pseudofunctor)
• Tricategory
• Tetracategory
• Kan complex
• ∞-groupoid
• ∞-topos
Strict n-categories
• 2-category (2-functor)
• 3-category
Categorified concepts
• 2-group
• 2-ring
• En-ring
• (Traced)(Symmetric) monoidal category
• n-group
• n-monoid
• Category
• Outline
• Glossary
Bornological space
In mathematics, particularly in functional analysis, a bornological space is a type of space which, in some sense, possesses the minimum amount of structure needed to address questions of boundedness of sets and linear maps, in the same way that a topological space possesses the minimum amount of structure needed to address questions of continuity. Bornological spaces are distinguished by the property that a linear map from a bornological space into any locally convex space is continuous if and only if it is a bounded linear operator.
Bornological spaces were first studied by George Mackey. The name was coined by Bourbaki after borné, the French word for "bounded".
Bornologies and bounded maps
Main article: Bornology
A bornology on a set $X$ is a collection ${\mathcal {B}}$ of subsets of $X$ that satisfy all the following conditions:
1. ${\mathcal {B}}$ covers $X;$ that is, $X=\cup {\mathcal {B}}$;
2. ${\mathcal {B}}$ is stable under inclusions; that is, if $B\in {\mathcal {B}}$ and $A\subseteq B,$ then $A\in {\mathcal {B}}$;
3. ${\mathcal {B}}$ is stable under finite unions; that is, if $B_{1},\ldots ,B_{n}\in {\mathcal {B}}$ then $B_{1}\cup \cdots \cup B_{n}\in {\mathcal {B}}$;
Elements of the collection ${\mathcal {B}}$ are called ${\mathcal {B}}$-bounded or simply bounded sets if ${\mathcal {B}}$ is understood.[1] The pair $(X,{\mathcal {B}})$ is called a bounded structure or a bornological set.[1]
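The three axioms can be checked mechanically on small finite examples. The following Python sketch is purely illustrative (the function names and the finite test sets are assumptions for demonstration, not from the article); it verifies covering, stability under inclusions, and stability under finite unions:

```python
from itertools import combinations

def powerset(s):
    """All subsets of a finite set, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1)
            for c in combinations(s, r)]

def is_bornology(X, B):
    """Check the three bornology axioms for a collection B of subsets
    of a finite set X."""
    B = {frozenset(b) for b in B}
    covers = set().union(*B) == set(X)       # axiom 1: B covers X
    downward = all(frozenset(a) in B         # axiom 2: stable under inclusions
                   for b in B for a in powerset(b))
    unions = all(b1 | b2 in B                # axiom 3: stable under finite unions
                 for b1 in B for b2 in B)
    return covers and downward and unions

X = {0, 1, 2}
assert is_bornology(X, powerset(X))          # the power set is always a bornology
# Removing the two-element set {0, 1} breaks the axioms:
# {0} ∪ {1} is no longer in the collection.
assert not is_bornology(X, [s for s in powerset(X) if s != frozenset({0, 1})])
```

Since the check enumerates all subsets of every member of ${\mathcal {B}}$, it is only feasible for very small $X$; it is meant as a sanity check on the definition, not a practical tool.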
A base or fundamental system of a bornology ${\mathcal {B}}$ is a subset ${\mathcal {B}}_{0}$ of ${\mathcal {B}}$ such that each element of ${\mathcal {B}}$ is a subset of some element of ${\mathcal {B}}_{0}.$ Given a collection ${\mathcal {S}}$ of subsets of $X,$ the smallest bornology containing ${\mathcal {S}}$ is called the bornology generated by ${\mathcal {S}}.$[2]
If $(X,{\mathcal {B}})$ and $(Y,{\mathcal {C}})$ are bornological sets then their product bornology on $X\times Y$ is the bornology having as a base the collection of all sets of the form $B\times C,$ where $B\in {\mathcal {B}}$ and $C\in {\mathcal {C}}.$[2] A subset of $X\times Y$ is bounded in the product bornology if and only if its image under the canonical projections onto $X$ and $Y$ are both bounded.
Bounded maps
If $(X,{\mathcal {B}})$ and $(Y,{\mathcal {C}})$ are bornological sets then a function $f:X\to Y$ is said to be a locally bounded map or a bounded map (with respect to these bornologies) if it maps ${\mathcal {B}}$-bounded subsets of $X$ to ${\mathcal {C}}$-bounded subsets of $Y;$ that is, if $f({\mathcal {B}})\subseteq {\mathcal {C}}.$[2] If in addition $f$ is a bijection and $f^{-1}$ is also bounded then $f$ is called a bornological isomorphism.
Vector bornologies
Main article: Vector bornology
Let $X$ be a vector space over a field $\mathbb {K} $ where $\mathbb {K} $ has a bornology ${\mathcal {B}}_{\mathbb {K} }.$ A bornology ${\mathcal {B}}$ on $X$ is called a vector bornology on $X$ if it is stable under vector addition, scalar multiplication, and the formation of balanced hulls (i.e. if the sum of two bounded sets is bounded, etc.).
If $X$ is a topological vector space (TVS) and ${\mathcal {B}}$ is a bornology on $X,$ then the following are equivalent:
1. ${\mathcal {B}}$ is a vector bornology;
2. Finite sums and balanced hulls of ${\mathcal {B}}$-bounded sets are ${\mathcal {B}}$-bounded;[2]
3. The scalar multiplication map $\mathbb {K} \times X\to X$ defined by $(s,x)\mapsto sx$ and the addition map $X\times X\to X$ defined by $(x,y)\mapsto x+y,$ are both bounded when their domains carry their product bornologies (i.e. they map bounded subsets to bounded subsets).[2]
A vector bornology ${\mathcal {B}}$ is called a convex vector bornology if it is stable under the formation of convex hulls; that is, if the convex hull of every bounded set is bounded. A vector bornology ${\mathcal {B}}$ is called separated if the only bounded vector subspace of $X$ is the 0-dimensional trivial space $\{0\}.$
Usually, $\mathbb {K} $ is either the real or complex numbers, in which case a vector bornology ${\mathcal {B}}$ on $X$ will be called a convex vector bornology if ${\mathcal {B}}$ has a base consisting of convex sets.
Bornivorous subsets
A subset $A$ of $X$ is called bornivorous, or a bornivore, if it absorbs every bounded set.
In a vector bornology, $A$ is bornivorous if it absorbs every bounded balanced set and in a convex vector bornology $A$ is bornivorous if it absorbs every bounded disk.
Two TVS topologies on the same vector space have that same bounded subsets if and only if they have the same bornivores.[3]
Every bornivorous subset of a locally convex metrizable topological vector space is a neighborhood of the origin.[4]
Mackey convergence
A sequence $x_{\bullet }=(x_{i})_{i=1}^{\infty }$ in a TVS $X$ is said to be Mackey convergent to $0$ if there exists a sequence of positive real numbers $r_{\bullet }=(r_{i})_{i=1}^{\infty }$ diverging to $\infty $ such that $(r_{i}x_{i})_{i=1}^{\infty }$ converges to $0$ in $X.$[5] For example, in a normed space every sequence converging to $0$ in norm is Mackey convergent to $0$: if $\left\|x_{i}\right\|\to 0,$ one may take $r_{i}=\left\|x_{i}\right\|^{-1/2}$ (and $r_{i}=i$ when $x_{i}=0$), so that $r_{i}\to \infty $ while $\left\|r_{i}x_{i}\right\|=\left\|x_{i}\right\|^{1/2}\to 0.$
Bornology of a topological vector space
Every topological vector space $X$ (at least over a non-discrete valued field) gives a bornology on $X$ by defining a subset $B\subseteq X$ to be bounded (or von Neumann bounded) if and only if for every open set $U\subseteq X$ containing zero there exists $r>0$ with $B\subseteq rU.$ If $X$ is a locally convex topological vector space then $B\subseteq X$ is bounded if and only if every continuous seminorm on $X$ is bounded on $B.$
The set of all bounded subsets of a topological vector space $X$ is called the bornology or the von Neumann bornology of $X.$
If $X$ is a locally convex topological vector space, then an absorbing disk $D$ in $X$ is bornivorous (resp. infrabornivorous) if and only if its Minkowski functional is locally bounded (resp. infrabounded).[4]
Induced topology
If ${\mathcal {B}}$ is a convex vector bornology on a vector space $X,$ then the collection ${\mathcal {N}}_{\mathcal {B}}(0)$ of all convex balanced subsets of $X$ that are bornivorous forms a neighborhood basis at the origin for a locally convex topology on $X$ called the topology induced by ${\mathcal {B}}$.[4]
If $(X,\tau )$ is a TVS then the bornological space associated with $X$ is the vector space $X$ endowed with the locally convex topology induced by the von Neumann bornology of $(X,\tau ).$[4]
Theorem[4] — Let $X$ and $Y$ be locally convex TVS and let $X_{b}$ denote $X$ endowed with the topology induced by von Neumann bornology of $X.$ Define $Y_{b}$ similarly. Then a linear map $L:X\to Y$ is a bounded linear operator if and only if $L:X_{b}\to Y$ is continuous.
Moreover, if $X$ is bornological, $Y$ is Hausdorff, and $L:X\to Y$ is continuous linear map then so is $L:X\to Y_{b}.$ If in addition $X$ is also ultrabornological, then the continuity of $L:X\to Y$ implies the continuity of $L:X\to Y_{ub},$ where $Y_{ub}$ is the ultrabornological space associated with $Y.$
Quasi-bornological spaces
Quasi-bornological spaces were introduced by S. Iyahen in 1968.[6]
A topological vector space (TVS) $(X,\tau )$ with a continuous dual $X^{\prime }$ is called a quasi-bornological space[6] if any of the following equivalent conditions holds:
1. Every bounded linear operator from $X$ into another TVS is continuous.[6]
2. Every bounded linear operator from $X$ into a complete metrizable TVS is continuous.[6][7]
3. Every knot in a bornivorous string is a neighborhood of the origin.[6]
Every pseudometrizable TVS is quasi-bornological.[6] A TVS $(X,\tau )$ in which every bornivorous set is a neighborhood of the origin is a quasi-bornological space.[8] If $X$ is a quasi-bornological TVS then the finest locally convex topology on $X$ that is coarser than $\tau $ makes $X$ into a locally convex bornological space.
Bornological space
In functional analysis, a locally convex topological vector space is a bornological space if its topology can be recovered from its bornology in a natural way.
Every locally convex quasi-bornological space is bornological but there exist bornological spaces that are not quasi-bornological.[6]
A topological vector space (TVS) $(X,\tau )$ with a continuous dual $X^{\prime }$ is called a bornological space if it is locally convex and any of the following equivalent conditions holds:
1. Every convex, balanced, and bornivorous set in $X$ is a neighborhood of zero.[4]
2. Every bounded linear operator from $X$ into a locally convex TVS is continuous.[4]
• Recall that a linear map is bounded if and only if it maps any sequence converging to $0$ in the domain to a bounded subset of the codomain.[4] In particular, any linear map that is sequentially continuous at the origin is bounded.
3. Every bounded linear operator from $X$ into a seminormed space is continuous.[4]
4. Every bounded linear operator from $X$ into a Banach space is continuous.[4]
If $X$ is a Hausdorff locally convex space then we may add to this list:[7]
1. The locally convex topology induced by the von Neumann bornology on $X$ is the same as $\tau ,$ $X$'s given topology.
2. Every bounded seminorm on $X$ is continuous.[4]
3. Any other Hausdorff locally convex topological vector space topology on $X$ that has the same (von Neumann) bornology as $(X,\tau )$ is necessarily coarser than $\tau .$
4. $X$ is the inductive limit of normed spaces.[4]
5. $X$ is the inductive limit of the normed spaces $X_{D}$ as $D$ varies over the closed and bounded disks of $X$ (or as $D$ varies over the bounded disks of $X$).[4]
6. $X$ carries the Mackey topology $\tau (X,X^{\prime })$ and all bounded linear functionals on $X$ are continuous.[4]
7. $X$ has both of the following properties:
• $X$ is convex-sequential or C-sequential, which means that every convex sequentially open subset of $X$ is open,
• $X$ is sequentially bornological or S-bornological, which means that every convex and bornivorous subset of $X$ is sequentially open.
where a subset $A$ of $X$ is called sequentially open if every sequence converging to $0$ eventually belongs to $A.$
Every sequentially continuous linear operator from a locally convex bornological space into a locally convex TVS is continuous,[4] where recall that a linear operator is sequentially continuous if and only if it is sequentially continuous at the origin. Thus for linear maps from a bornological space into a locally convex space, continuity is equivalent to sequential continuity at the origin. More generally, we even have the following:
• Any linear map $F:X\to Y$ from a locally convex bornological space into a locally convex space $Y$ that maps null sequences in $X$ to bounded subsets of $Y$ is necessarily continuous.
Sufficient conditions
Mackey–Ulam theorem[9] — The product of a collection $X_{\bullet }=(X_{i})_{i\in I}$ of locally convex bornological spaces is bornological if and only if $I$ does not admit an Ulam measure.
As a consequence of the Mackey–Ulam theorem, "for all practical purposes, the product of bornological spaces is bornological."[9]
The following topological vector spaces are all bornological:
• Any locally convex pseudometrizable TVS is bornological.[4][10]
• Thus every normed space and Fréchet space is bornological.
• Any strict inductive limit of bornological spaces, in particular any strict LF-space, is bornological.
• This shows that there are bornological spaces that are not metrizable.
• A countable product of locally convex bornological spaces is bornological.[11][10]
• Quotients of Hausdorff locally convex bornological spaces are bornological.[10]
• The direct sum and inductive limit of Hausdorff locally convex bornological spaces is bornological.[10]
• Fréchet Montel spaces have bornological strong duals.
• The strong dual of every reflexive Fréchet space is bornological.[12]
• If the strong dual of a metrizable locally convex space is separable, then it is bornological.[12]
• A vector subspace of a Hausdorff locally convex bornological space $X$ that has finite codimension in $X$ is bornological.[4][10]
• The finest locally convex topology on a vector space is bornological.[4]
Counterexamples
There exists a bornological LB-space whose strong bidual is not bornological.[13]
A closed vector subspace of a locally convex bornological space is not necessarily bornological.[4][14] There exists a closed vector subspace of a locally convex bornological space that is complete (and so sequentially complete) but neither barrelled nor bornological.[4]
Bornological spaces need not be barrelled and barrelled spaces need not be bornological.[4] Because every locally convex ultrabornological space is barrelled,[4] it follows that a bornological space is not necessarily ultrabornological.
Properties
• The strong dual space of a locally convex bornological space is complete.[4]
• Every locally convex bornological space is infrabarrelled.[4]
• Every Hausdorff sequentially complete bornological TVS is ultrabornological.[4]
• Thus every complete Hausdorff bornological space is ultrabornological.
• In particular, every Fréchet space is ultrabornological.[4]
• The finite product of locally convex ultrabornological spaces is ultrabornological.[4]
• Every Hausdorff bornological space is quasi-barrelled.[15]
• Given a bornological space $X$ with continuous dual $X^{\prime },$ the topology of $X$ coincides with the Mackey topology $\tau (X,X^{\prime }).$
• In particular, bornological spaces are Mackey spaces.
• Every quasi-complete (i.e. all closed and bounded subsets are complete) bornological space is barrelled. There exist, however, bornological spaces that are not barrelled.
• Every bornological space is the inductive limit of normed spaces (and Banach spaces if the space is also quasi-complete).
• Let $X$ be a metrizable locally convex space with continuous dual $X^{\prime }.$ Then the following are equivalent:
1. $\beta (X^{\prime },X)$ is bornological.
2. $\beta (X^{\prime },X)$ is quasi-barrelled.
3. $\beta (X^{\prime },X)$ is barrelled.
4. $X$ is a distinguished space.
• If $L:X\to Y$ is a linear map between locally convex spaces and if $X$ is bornological, then the following are equivalent:
1. $L:X\to Y$ is continuous.
2. $L:X\to Y$ is sequentially continuous.[4]
3. For every set $B\subseteq X$ that's bounded in $X,$ $L(B)$ is bounded.
4. If $x_{\bullet }=(x_{i})_{i=1}^{\infty }$ is a null sequence in $X$ then $L\circ x_{\bullet }=(L(x_{i}))_{i=1}^{\infty }$ is a null sequence in $Y.$
5. If $x_{\bullet }=(x_{i})_{i=1}^{\infty }$ is a Mackey convergent null sequence in $X$ then $L\circ x_{\bullet }=(L(x_{i}))_{i=1}^{\infty }$ is a bounded subset of $Y.$
• Suppose that $X$ and $Y$ are locally convex TVSs and that the space of continuous linear maps $L_{b}(X;Y)$ is endowed with the topology of uniform convergence on bounded subsets of $X.$ If $X$ is a bornological space and if $Y$ is complete then $L_{b}(X;Y)$ is a complete TVS.[4]
• In particular, the strong dual of a locally convex bornological space is complete.[4] However, it need not be bornological.
Subsets
• In a locally convex bornological space, every convex bornivorous set $B$ is a neighborhood of $0$ ($B$ is not required to be a disk).[4]
• Every bornivorous subset of a locally convex metrizable topological vector space is a neighborhood of the origin.[4]
• Closed vector subspaces of bornological space need not be bornological.[4]
Ultrabornological spaces
Main article: Ultrabornological space
A disk in a topological vector space $X$ is called infrabornivorous if it absorbs all Banach disks.
If $X$ is locally convex and Hausdorff, then a disk is infrabornivorous if and only if it absorbs all compact disks.
A locally convex space is called ultrabornological if any of the following equivalent conditions hold:
1. Every infrabornivorous disk is a neighborhood of the origin.
2. $X$ is the inductive limit of the spaces $X_{D}$ as $D$ varies over all compact disks in $X.$
3. A seminorm on $X$ that is bounded on each Banach disk is necessarily continuous.
4. For every locally convex space $Y$ and every linear map $u:X\to Y,$ if $u$ is bounded on each Banach disk then $u$ is continuous.
5. For every Banach space $Y$ and every linear map $u:X\to Y,$ if $u$ is bounded on each Banach disk then $u$ is continuous.
Properties
The finite product of ultrabornological spaces is ultrabornological. Inductive limits of ultrabornological spaces are ultrabornological.
See also
• Bornology – Mathematical generalization of boundedness
• Bornivorous set – A set that can absorb any bounded subset
• Bounded set (topological vector space) – Generalization of boundedness
• Locally convex topological vector space – A vector space with a topology defined by convex open sets
• Space of linear maps
• Topological vector space – Vector space with a notion of nearness
• Vector bornology
References
1. Narici & Beckenstein 2011, p. 168.
2. Narici & Beckenstein 2011, pp. 156–175.
3. Wilansky 2013, p. 50.
4. Narici & Beckenstein 2011, pp. 441–457.
5. Swartz 1992, pp. 15–16.
6. Narici & Beckenstein 2011, pp. 453–454.
7. Adasch, Ernst & Keim 1978, pp. 60–61.
8. Wilansky 2013, p. 48.
9. Narici & Beckenstein 2011, p. 450.
10. Adasch, Ernst & Keim 1978, pp. 60–65.
11. Narici & Beckenstein 2011, p. 453.
12. Schaefer & Wolff 1999, p. 144.
13. Khaleelulla 1982, pp. 28–63.
14. Schaefer & Wolff 1999, pp. 103–110.
15. Adasch, Ernst & Keim 1978, pp. 70–73.
Bibliography
• Adasch, Norbert; Ernst, Bruno; Keim, Dieter (1978). Topological Vector Spaces: The Theory Without Convexity Conditions. Lecture Notes in Mathematics. Vol. 639. Berlin New York: Springer-Verlag. ISBN 978-3-540-08662-8. OCLC 297140003.
• Berberian, Sterling K. (1974). Lectures in Functional Analysis and Operator Theory. Graduate Texts in Mathematics. Vol. 15. New York: Springer. ISBN 978-0-387-90081-0. OCLC 878109401.
• Bourbaki, Nicolas (1987) [1981]. Topological Vector Spaces: Chapters 1–5. Éléments de mathématique. Translated by Eggleston, H.G.; Madan, S. Berlin New York: Springer-Verlag. ISBN 3-540-13627-4. OCLC 17499190.
• Conway, John B. (1990). A Course in Functional Analysis. Graduate Texts in Mathematics. Vol. 96 (2nd ed.). New York: Springer-Verlag. ISBN 978-0-387-97245-9. OCLC 21195908.
• Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138.
• Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098.
• Hogbe-Nlend, Henri (1977). Bornologies and functional analysis. Amsterdam: North-Holland Publishing Co. pp. xii+144. ISBN 0-7204-0712-5. MR 0500064.
• Hogbe-Nlend, Henri (1977). Bornologies and Functional Analysis: Introductory Course on the Theory of Duality Topology-Bornology and its use in Functional Analysis. North-Holland Mathematics Studies. Vol. 26. Amsterdam New York New York: North Holland. ISBN 978-0-08-087137-0. MR 0500064. OCLC 316549583.
• Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342.
• Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370.
• Köthe, Gottfried (1983) [1969]. Topological Vector Spaces I. Grundlehren der mathematischen Wissenschaften. Vol. 159. Translated by Garling, D.J.H. New York: Springer Science & Business Media. ISBN 978-3-642-64988-2. MR 0248498. OCLC 840293704.
• Kriegl, Andreas; Michor, Peter W. (1997). The Convenient Setting of Global Analysis. Mathematical Surveys and Monographs. American Mathematical Society. ISBN 9780821807804.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Schechter, Eric (1996). Handbook of Analysis and Its Foundations. San Diego, CA: Academic Press. ISBN 978-0-12-622760-4. OCLC 175294365.
• Swartz, Charles (1992). An introduction to Functional Analysis. New York: M. Dekker. ISBN 978-0-8247-8643-4. OCLC 24909067.
• Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Boundedness and bornology
Basic concepts
• Barrelled space
• Bounded set
• Bornological space
• (Vector) Bornology
Operators
• (Un)Bounded operator
• Uniform boundedness principle
Subsets
• Barrelled set
• Bornivorous set
• Saturated family
Related spaces
• (Countably) Barrelled space
• (Countably) Quasi-barrelled space
• Infrabarrelled space
• (Quasi-) Ultrabarrelled space
• Ultrabornological space
Topological vector spaces (TVSs)
Basic concepts
• Banach space
• Completeness
• Continuous linear operator
• Linear functional
• Fréchet space
• Linear map
• Locally convex space
• Metrizability
• Operator topologies
• Topological vector space
• Vector space
Main results
• Anderson–Kadec
• Banach–Alaoglu
• Closed graph theorem
• F. Riesz's
• Hahn–Banach (hyperplane separation
• Vector-valued Hahn–Banach)
• Open mapping (Banach–Schauder)
• Bounded inverse
• Uniform boundedness (Banach–Steinhaus)
Maps
• Bilinear operator
• form
• Linear map
• Almost open
• Bounded
• Continuous
• Closed
• Compact
• Densely defined
• Discontinuous
• Topological homomorphism
• Functional
• Linear
• Bilinear
• Sesquilinear
• Norm
• Seminorm
• Sublinear function
• Transpose
Types of sets
• Absolutely convex/disk
• Absorbing/Radial
• Affine
• Balanced/Circled
• Banach disks
• Bounding points
• Bounded
• Complemented subspace
• Convex
• Convex cone (subset)
• Linear cone (subset)
• Extreme point
• Pre-compact/Totally bounded
• Prevalent/Shy
• Radial
• Radially convex/Star-shaped
• Symmetric
Set operations
• Affine hull
• (Relative) Algebraic interior (core)
• Convex hull
• Linear span
• Minkowski addition
• Polar
• (Quasi) Relative interior
Types of TVSs
• Asplund
• B-complete/Ptak
• Banach
• (Countably) Barrelled
• BK-space
• (Ultra-) Bornological
• Brauner
• Complete
• Convenient
• (DF)-space
• Distinguished
• F-space
• FK-AK space
• FK-space
• Fréchet
• tame Fréchet
• Grothendieck
• Hilbert
• Infrabarreled
• Interpolation space
• K-space
• LB-space
• LF-space
• Locally convex space
• Mackey
• (Pseudo)Metrizable
• Montel
• Quasibarrelled
• Quasi-complete
• Quasinormed
• (Polynomially
• Semi-) Reflexive
• Riesz
• Schwartz
• Semi-complete
• Smith
• Stereotype
• (B
• Strictly
• Uniformly) convex
• (Quasi-) Ultrabarrelled
• Uniformly smooth
• Webbed
• With the approximation property
• Mathematics portal
• Category
• Commons
Markov's principle
Markov's principle, named after Andrey Markov Jr., is a conditional existence statement for which there are many equivalent formulations, as discussed below.
The principle is logically valid classically, but not in intuitionistic constructive mathematics. However, many particular instances of it are nevertheless provable in a constructive context as well.
History
The principle was first studied and adopted by the Russian school of constructivism, together with choice principles and often with a realizability perspective on the notion of mathematical function.
In computability theory
In the language of computability theory, Markov's principle is a formal expression of the claim that if it is impossible that an algorithm does not terminate, then for some input it does terminate. This is equivalent to the claim that if a set and its complement are both computably enumerable, then the set is decidable.
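This equivalence has a direct computational reading: to decide membership in a set $S$, interleave an enumeration of $S$ with an enumeration of its complement until the input shows up in one of them. The Python sketch below is a hedged illustration (the enumerator interface and the even/odd example are assumptions for demonstration, not from the article); it terminates on every input precisely because each number appears in exactly one of the two enumerations:

```python
def decide(x, enum_S, enum_coS):
    """Decide membership of x, given an enumerator of a set S and an
    enumerator of its complement. Since x occurs in exactly one of the
    two enumerations, the interleaved search always halts."""
    gen_in, gen_out = enum_S(), enum_coS()
    while True:
        if next(gen_in) == x:
            return True
        if next(gen_out) == x:
            return False

# Illustration with S = even naturals, complement = odd naturals.
def evens():
    n = 0
    while True:
        yield n
        n += 2

def odds():
    n = 1
    while True:
        yield n
        n += 2

assert decide(10, evens, odds) is True
assert decide(7, evens, odds) is False
```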
In intuitionistic logic
In predicate logic, a predicate P over some domain is called decidable if for every x in the domain, either P(x) is true, or P(x) is not true. This is not trivially true constructively.
For a decidable predicate P over the natural numbers, Markov's principle then reads:
${\Big (}\forall n{\big (}P(n)\vee \neg P(n){\big )}\wedge \neg \forall n\,\neg P(n){\Big )}\rightarrow \exists n\,P(n)$
That is, if P cannot be false for all natural numbers n, then it is true for some n.
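Computationally, the witness promised by Markov's principle is found by unbounded search: test $P(0),P(1),P(2),\dots $ until a true instance appears. The following sketch is illustrative (the divisor predicate in the example is an arbitrary decidable predicate chosen for demonstration, not from the article); the assertion that the loop terminates whenever $P$ is not everywhere false is exactly what Markov's principle adds constructively:

```python
def markov_search(P):
    """Unbounded search over the naturals for a witness of a decidable
    predicate P. Classically this terminates whenever P is not everywhere
    false; asserting that termination constructively is Markov's principle."""
    n = 0
    while not P(n):
        n += 1
    return n

# Example: the least nontrivial divisor of 91 -- a decidable predicate
# that is not everywhere false, so the search halts.
assert markov_search(lambda n: 1 < n < 91 and 91 % n == 0) == 7
```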
Markov's rule
Markov's rule is the formulation of Markov's principle as a rule. It states that $\exists n\;P(n)$ is derivable as soon as $\neg \neg \exists n\;P(n)$ is, for $P$ decidable. Formally,
$\forall n{\big (}P(n)\lor \neg P(n){\big )},\ \neg \neg \exists n\;P(n)\ \ \vdash \ \ \exists n\;P(n)$
Anne Troelstra[1] proved that it is an admissible rule in Heyting arithmetic. Later, the logician Harvey Friedman showed that Markov's rule is an admissible rule in all of intuitionistic logic, Heyting arithmetic, and various other intuitionistic theories,[2] using the Friedman translation.
In Heyting arithmetic
Markov's principle is equivalent in the language of arithmetic to:
$\neg \neg \exists n\;f(n)=0\rightarrow \exists n\;f(n)=0$
for $f$ a total recursive function on the natural numbers. In the presence of Church's thesis principle, the principle is equivalent to its form for primitive recursive functions. Using Kleene's T predicate, the latter may be expressed as
$\forall e\;\forall x\;{\big (}\neg \neg \exists w\;T_{1}(e,x,w)\rightarrow \exists w\;T_{1}(e,x,w){\big )}$
Realizability
If constructive arithmetic is translated using realizability into a classical meta-theory that proves the $\omega $-consistency of the relevant classical theory (for example, Peano Arithmetic if we are studying Heyting arithmetic), then Markov's principle is justified: a realizer is the constant function that takes a realization that $P$ is not everywhere false to the unbounded search that successively checks if $P(0),P(1),P(2),\dots $ is true. If $P$ is not everywhere false, then by $\omega $-consistency there must be a term for which $P$ holds, and each term will be checked by the search eventually. If however $P$ does not hold anywhere, then the domain of the constant function must be empty, so although the search does not halt it still holds vacuously that the function is a realizer. By the Law of the Excluded Middle (in our classical metatheory), $P$ must either hold nowhere or not hold nowhere, therefore this constant function is a realizer.
If instead the realizability interpretation is used in a constructive meta-theory, then it is not justified. Indeed, for first-order arithmetic, Markov's principle exactly captures the difference between a constructive and classical meta-theory. Specifically, a statement is provable in Heyting arithmetic with Extended Church's thesis if and only if there is a number that provably realizes it in Heyting arithmetic; and it is provable in Heyting arithmetic with Extended Church's thesis and Markov's principle if and only if there is a number that provably realizes it in Peano arithmetic.
In constructive analysis
Markov's principle is equivalent, in the language of real analysis, to the following principles:
• For each real number x, if it is contradictory that x is equal to 0, then there exists y ∈ Q such that 0 < y < |x|, often expressed by saying that x is apart from, or constructively unequal to, 0.
• For each real number x, if it is contradictory that x is equal to 0, then there exists y ∈ R such that xy = 1.
Modified realizability does not justify Markov's principle, even if classical logic is used in the meta-theory: there is no realizer in the language of simply typed lambda calculus as this language is not Turing-complete and arbitrary loops cannot be defined in it.
Weak Markov's principle
The weak Markov's principle is a weaker form of the principle. It may be stated in the language of analysis, as a conditional statement for the positivity of a real number:
$\forall (x\in \mathbb {R} )\,{\Big (}\forall (y\in \mathbb {R} ){\big (}\neg \neg (0<y)\lor \neg \neg (y<x){\big )}{\Big )}\,\to \,(0<x).$
This form can be justified by Brouwer's continuity principles, whereas the stronger form contradicts them. Thus it can be derived from intuitionistic, realizability, and classical reasoning, in each case for different reasons, but this principle is not valid in the general constructive sense of Bishop,[3] nor provable in the set theory ${\mathsf {IZF}}$.
To understand what the principle is about, it helps to inspect a stronger statement. The following expresses that any real number $x$, such that no non-positive $y$ is not below it, is positive:
$\nexists (y\leq 0)\,x\leq y\,\to \,(0<x),$
where $x\leq y$ denotes the negation of $y<x$. This implication is stronger because its antecedent is weaker. Note that here a logically positive statement is concluded from a logically negative one. It follows from the weak Markov's principle once De Morgan's law for $\neg A\lor \neg B$ is strengthened to an equivalence.
Assuming classical double-negation elimination, the weak Markov's principle becomes trivial, expressing that a number larger than all non-positive numbers is positive.
Extensionality of functions
A function $f:X\to Y$ between metric spaces is called strongly extensional if $d(f(x),f(y))>0$ implies $d(x,y)>0$, which is classically just the contraposition of the function preserving equality. Markov's principle can be shown to be equivalent to the proposition that all functions between arbitrary metric spaces are strongly extensional, while the weak Markov's principle is equivalent to the proposition that all functions from complete metric spaces to metric spaces are strongly extensional.
See also
• Constructive analysis
• Church's thesis (constructive mathematics)
• Limited principle of omniscience
References
1. Anne S. Troelstra. Metamathematical Investigation of Intuitionistic Arithmetic and Analysis, Springer Verlag (1973), Theorem 4.2.4 of the 2nd edition.
2. Harvey Friedman. Classically and Intuitionistically Provably Recursive Functions. In Scott, D. S. and Muller, G. H. Editors, Higher Set Theory, Volume 699 of Lecture Notes in Mathematics, Springer Verlag (1978), pp. 21–28.
3. Ulrich Kohlenbach, "On weak Markov's principle". Mathematical Logic Quarterly (2002), vol 48, issue S1, pp. 59–65.
External links
• Constructive Mathematics (Stanford Encyclopedia of Philosophy)
| Wikipedia |
Weak approximation
Weak approximation may refer to:
• Weak approximation theorem, an extension of the Chinese remainder theorem to algebraic groups over global fields
• Weak weak approximation, a form of weak approximation for varieties
• Weak-field approximation, a solution in general relativity
| Wikipedia |
Approximation in algebraic groups
In algebraic group theory, approximation theorems are an extension of the Chinese remainder theorem to algebraic groups G over global fields k.
History
Eichler (1938) proved strong approximation for some classical groups. Strong approximation was established in the 1960s and 1970s, for semisimple simply-connected algebraic groups over global fields. The results for number fields are due to Kneser (1966) and Platonov (1969); the function field case, over finite fields, is due to Margulis (1977) and Prasad (1977). In the number field case Platonov also proved a related result over local fields called the Kneser–Tits conjecture.
Formal definitions and properties
Let G be a linear algebraic group over a global field k, and A the adele ring of k. If S is a non-empty finite set of places of k, then we write A^S for the ring of S-adeles (the restricted product of the completions k_s over the places s not in S) and A_S for the product of the completions k_s for s in S. For any choice of S, G(k) embeds in G(A^S) and in G(A_S).
The question asked in weak approximation is whether the embedding of G(k) in G(A_S) has dense image. If the group G is connected and k-rational, then it satisfies weak approximation with respect to any set S (Platonov & Rapinchuk 1994, p.402). More generally, for any connected group G, there is a finite set T of finite places of k such that G satisfies weak approximation with respect to any set S that is disjoint from T (Platonov & Rapinchuk 1994, p.415). In particular, if k is an algebraic number field then any connected group G satisfies weak approximation with respect to the set S = S∞ of infinite places.
The question asked in strong approximation is whether the embedding of G(k) in G(A^S) has dense image, or equivalently whether the set
G(k)G(A_S)
is dense in G(A). The main theorem of strong approximation (Kneser 1966, p.188) states that a non-solvable linear algebraic group G over a global field k has strong approximation for the finite set S if and only if its radical N is unipotent, G/N is simply connected, and each almost simple component H of G/N has a non-compact component Hs for some s in S (depending on H).
The proofs of strong approximation depended on the Hasse principle for algebraic groups, which for groups of type E8 was only proved several years later.
Weak approximation holds for a broader class of groups, including adjoint groups and inner forms of Chevalley groups, showing that the strong approximation property is restrictive.
See also
• Superstrong approximation
References
• Eichler, Martin (1938), "Allgemeine Kongruenzklasseneinteilungen der Ideale einfacher Algebren über algebraischen Zahlkörpern und ihre L-Reihen.", Journal für die Reine und Angewandte Mathematik (in German), 179: 227–251, doi:10.1515/crll.1938.179.227, ISSN 0075-4102
• Kneser, Martin (1966), "Strong approximation", Algebraic Groups and Discontinuous Subgroups (Proc. Sympos. Pure Math., Boulder, Colo., 1965), Providence, R.I.: American Mathematical Society, pp. 187–196, MR 0213361
• Margulis, G. A. (1977), "Cobounded subgroups in algebraic groups over local fields", Akademija Nauk SSSR. Funkcional'nyi Analiz i ego Priloženija, 11 (2): 45–57, 95, ISSN 0374-1990, MR 0442107
• Platonov, V. P. (1969), "The problem of strong approximation and the Kneser–Tits hypothesis for algebraic groups", Izvestiya Akademii Nauk SSSR. Seriya Matematicheskaya, 33: 1211–1219, ISSN 0373-2436, MR 0258839
• Platonov, Vladimir; Rapinchuk, Andrei (1994), Algebraic groups and number theory. (Translated from the 1991 Russian original by Rachel Rowen.), Pure and Applied Mathematics, vol. 139, Boston, MA: Academic Press, Inc., ISBN 0-12-558180-7, MR 1278263
• Prasad, Gopal (1977), "Strong approximation for semi-simple groups over function fields", Annals of Mathematics, Second Series, 105 (3): 553–572, doi:10.2307/1970924, ISSN 0003-486X, JSTOR 1970924, MR 0444571
| Wikipedia |
Weak coloring
In graph theory, a weak coloring is a special case of a graph labeling. A weak k-coloring of a graph G = (V, E) assigns a color c(v) ∈ {1, 2, ..., k} to each vertex v ∈ V, such that each non-isolated vertex is adjacent to at least one vertex with different color. In notation, for each non-isolated v ∈ V, there is a vertex u ∈ V with {u, v} ∈ E and c(u) ≠ c(v).
The figure on the right shows a weak 2-coloring of a graph. Each dark vertex (color 1) is adjacent to at least one light vertex (color 2) and vice versa.
Properties
A graph vertex coloring is a weak coloring, but not necessarily vice versa.
Every graph has a weak 2-coloring. The figure on the right illustrates a simple algorithm for constructing a weak 2-coloring in an arbitrary graph. Part (a) shows the original graph. Part (b) shows a breadth-first search tree of the same graph. Part (c) shows how to color the tree: starting from the root, the layers of the tree are colored alternatingly with colors 1 (dark) and 2 (light).
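The layer-by-layer construction described above can be sketched as follows (an illustrative Python sketch; the adjacency-list representation is an assumption):

```python
from collections import deque

def weak_two_coloring(adj):
    """Weakly 2-color a graph given as an adjacency list {v: [neighbors]}.

    Each connected component is explored breadth-first, and the BFS layers
    are colored alternately with 1 and 2.  Every non-root vertex keeps its
    tree parent as a neighbor of the other color, and a non-isolated root
    has its layer-1 neighbors, so the coloring is weak."""
    color = {}
    for root in adj:
        if root in color:
            continue
        color[root] = 1
        queue = deque([root])
        while queue:
            v = queue.popleft()
            for u in adj[v]:
                if u not in color:
                    color[u] = 3 - color[v]   # alternate 1 <-> 2
                    queue.append(u)
    return color

# A 5-cycle: a proper 2-coloring is impossible, but a weak one exists.
g = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
c = weak_two_coloring(g)
```

Note that the odd cycle admits no proper 2-coloring, yet the BFS coloring above is a valid weak 2-coloring.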
If there is no isolated vertex in the graph G, then a weak 2-coloring determines a domatic partition: the set of the nodes with c(v) = 1 is a dominating set, and the set of the nodes with c(v) = 2 is another dominating set.
Applications
Historically, weak coloring served as the first non-trivial example of a graph problem that can be solved with a local algorithm (a distributed algorithm that runs in a constant number of synchronous communication rounds). More precisely, if the degree of each node is odd and bounded by a constant, then there is a constant-time distributed algorithm for weak 2-coloring.[1]
This is different from (non-weak) vertex coloring: there is no constant-time distributed algorithm for vertex coloring; the best possible algorithms (for finding a minimal but not necessarily minimum coloring) require O(log* |V|) communication rounds.[1][2][3] Here log* x is the iterated logarithm of x.
References
1. Naor, Moni; Stockmeyer, Larry (1995), "What can be computed locally?", SIAM Journal on Computing, 24 (6): 1259–1277, CiteSeerX 10.1.1.29.669, doi:10.1137/S0097539793254571, MR 1361156.
2. Linial, Nathan (1992), "Locality in distributed graph algorithms", SIAM Journal on Computing, 21 (1): 193–201, CiteSeerX 10.1.1.471.6378, doi:10.1137/0221015, MR 1148825.
3. Cole, Richard; Vishkin, Uzi (1986), "Deterministic coin tossing with applications to optimal parallel list ranking", Information and Control, 70 (1): 32–53, doi:10.1016/S0019-9958(86)80023-7, MR 0853994.
| Wikipedia |
Weak convergence (Hilbert space)
In mathematics, weak convergence in a Hilbert space is convergence of a sequence of points in the weak topology.
Definition
A sequence of points $(x_{n})$ in a Hilbert space H is said to converge weakly to a point x in H if
$\langle x_{n},y\rangle \to \langle x,y\rangle $
for all y in H. Here, $\langle \cdot ,\cdot \rangle $ is understood to be the inner product on the Hilbert space. The notation
$x_{n}\rightharpoonup x$
is sometimes used to denote this kind of convergence.
Properties
• If a sequence converges strongly (that is, if it converges in norm), then it converges weakly as well.
• Since every closed and bounded set is weakly relatively compact (its closure in the weak topology is compact), every bounded sequence $x_{n}$ in a Hilbert space H contains a weakly convergent subsequence. Note that closed and bounded sets are not in general weakly compact in Hilbert spaces: an orthonormal basis of an infinite-dimensional Hilbert space is closed and bounded, but not weakly compact, since the sequence of basis vectors converges weakly to 0, which does not belong to the set. However, bounded and weakly closed sets are weakly compact, so as a consequence every convex bounded closed set is weakly compact.
• As a consequence of the principle of uniform boundedness, every weakly convergent sequence is bounded.
• The norm is (sequentially) weakly lower-semicontinuous: if $x_{n}$ converges weakly to x, then
$\Vert x\Vert \leq \liminf _{n\to \infty }\Vert x_{n}\Vert ,$
and this inequality is strict whenever the convergence is not strong. For example, infinite orthonormal sequences converge weakly to zero, as demonstrated below.
• If $x_{n}\to x$ weakly and $\lVert x_{n}\rVert \to \lVert x\rVert $, then $x_{n}\to x$ strongly:
$\langle x-x_{n},x-x_{n}\rangle =\langle x,x\rangle +\langle x_{n},x_{n}\rangle -\langle x_{n},x\rangle -\langle x,x_{n}\rangle \rightarrow 0.$
• If the Hilbert space is finite-dimensional, i.e. a Euclidean space, then weak and strong convergence are equivalent.
Example
The Hilbert space $L^{2}[0,2\pi ]$ is the space of the square-integrable functions on the interval $[0,2\pi ]$ equipped with the inner product defined by
$\langle f,g\rangle =\int _{0}^{2\pi }f(x)\cdot g(x)\,dx,$
(see Lp space). The sequence of functions $f_{1},f_{2},\ldots $ defined by
$f_{n}(x)=\sin(nx)$
converges weakly to the zero function in $L^{2}[0,2\pi ]$, as the integral
$\int _{0}^{2\pi }\sin(nx)\cdot g(x)\,dx$
tends to zero for every square-integrable function $g$ on $[0,2\pi ]$ as $n$ goes to infinity, by the Riemann–Lebesgue lemma, i.e.
$\langle f_{n},g\rangle \to \langle 0,g\rangle =0.$
Although $f_{n}$ has an increasing number of zeros in $[0,2\pi ]$ as $n$ goes to infinity, it is of course not equal to the zero function for any $n$. Note that $f_{n}$ does not converge to 0 in the $L^{\infty }$ or $L^{2}$ norm. This dissimilarity is one of the reasons why this type of convergence is considered to be "weak."
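A numerical illustration (a sketch; the test function g(x) = x and the Riemann-sum quadrature are arbitrary choices made for this example):

```python
import math

def inner(f, g, steps=20000):
    """Approximate <f, g> = integral over [0, 2*pi] of f(x)*g(x) dx
    by a left Riemann sum."""
    h = 2 * math.pi / steps
    return sum(f(i * h) * g(i * h) for i in range(steps)) * h

g = lambda x: x  # a square-integrable test function; here <sin(n.), g> = -2*pi/n
vals = [abs(inner(lambda x, n=n: math.sin(n * x), g)) for n in (1, 4, 16, 64)]
# |<f_n, g>| decays like 2*pi/n, while the norm ||f_n||^2 stays near pi,
# so f_n converges to 0 weakly but not in norm.
norm_sq = inner(lambda x: math.sin(64 * x), lambda x: math.sin(64 * x))
```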
Weak convergence of orthonormal sequences
Consider a sequence $e_{n}$ which was constructed to be orthonormal, that is,
$\langle e_{n},e_{m}\rangle =\delta _{mn}$
where $\delta _{mn}$ equals one if m = n and zero otherwise. We claim that if the sequence is infinite, then it converges weakly to zero. A simple proof is as follows. For x ∈ H, we have
$\sum _{n}|\langle e_{n},x\rangle |^{2}\leq \|x\|^{2}$ (Bessel's inequality)
where equality holds when {en} is a Hilbert space basis. Therefore
$|\langle e_{n},x\rangle |^{2}\rightarrow 0$ (since the series above converges, its corresponding sequence must go to zero)
i.e.
$\langle e_{n},x\rangle \rightarrow 0.$
Banach–Saks theorem
The Banach–Saks theorem states that every bounded sequence $x_{n}$ contains a subsequence $x_{n_{k}}$ and a point x such that
${\frac {1}{N}}\sum _{k=1}^{N}x_{n_{k}}$
converges strongly to x as N goes to infinity.
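For an orthonormal sequence, which converges weakly but not strongly to zero, the Cesàro means do converge strongly to the weak limit; a small sketch (representing each $e_{k}$ in $\ell^{2}$ by its coordinates is an illustrative choice):

```python
import math

def norm_of_average(N):
    """Norm of (1/N)(e_1 + ... + e_N) for orthonormal e_k in l^2.

    The average has N coordinates equal to 1/N and the rest zero, so its
    norm is sqrt(N * (1/N)**2) = 1/sqrt(N), which tends to 0."""
    return math.sqrt(N * (1.0 / N) ** 2)

norms = [norm_of_average(N) for N in (1, 4, 16, 64)]  # 1, 1/2, 1/4, 1/8
```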
Generalizations
See also: Weak topology and Weak topology (polar topology)
The definition of weak convergence can be extended to Banach spaces. A sequence of points $(x_{n})$ in a Banach space B is said to converge weakly to a point x in B if
$f(x_{n})\to f(x)$
for any bounded linear functional $f$ defined on $B$, that is, for any $f$ in the dual space $B'$. If $B$ is an Lp space on $\Omega $ and $p<+\infty $, then any such $f$ has the form $f(x)=\int _{\Omega }x\,y\,d\mu $ for some $y\in L^{q}(\Omega )$, where $\mu $ is the measure on $\Omega $ and $p$ and $q$ are conjugate indices, ${\frac {1}{p}}+{\frac {1}{q}}=1$.
In the case where $B$ is a Hilbert space, then, by the Riesz representation theorem,
$f(\cdot )=\langle \cdot ,y\rangle $
for some $y$ in $B$, so one obtains the Hilbert space definition of weak convergence.
See also
• Dual topology
• Operator topologies – Topologies on the set of operators on a Hilbert space
| Wikipedia |
Direct sum of groups
In mathematics, a group G is called the direct sum[1][2] of two normal subgroups with trivial intersection if it is generated by the subgroups. In abstract algebra, this method of construction of groups can be generalized to direct sums of vector spaces, modules, and other structures; see the article direct sum of modules for more information. A group which can be expressed as a direct sum of non-trivial subgroups is called decomposable, and if a group cannot be expressed as such a direct sum then it is called indecomposable.
Definition
A group G is called the direct sum[1][2] of two subgroups H1 and H2 if
• each H1 and H2 are normal subgroups of G,
• the subgroups H1 and H2 have trivial intersection (i.e., having only the identity element $e$ of G in common),
• G = ⟨H1, H2⟩; in other words, G is generated by the subgroups H1 and H2.
More generally, G is called the direct sum of a finite set of subgroups {Hi} if
• each Hi is a normal subgroup of G,
• each Hi has trivial intersection with the subgroup ⟨{Hj : j ≠ i}⟩,
• G = ⟨{Hi}⟩; in other words, G is generated by the subgroups {Hi}.
If G is the direct sum of subgroups H and K then we write G = H + K, and if G is the direct sum of a set of subgroups {Hi} then we often write G = ΣHi. Loosely speaking, a direct sum is isomorphic to a weak direct product of subgroups.
Properties
If G = H + K, then it can be proven that:
• for all h in H, k in K, we have that h ∗ k = k ∗ h
• for all g in G, there exists unique h in H, k in K such that g = h ∗ k
• There is a cancellation of the sum in a quotient; so that (H + K)/K is isomorphic to H
The above assertions can be generalized to the case of G = ΣHi, where {Hi} is a finite set of subgroups:
• if i ≠ j, then for all hi in Hi, hj in Hj, we have that hi ∗ hj = hj ∗ hi
• for each g in G, there exists a unique set of elements hi in Hi such that
g = h1 ∗ h2 ∗ ... ∗ hi ∗ ... ∗ hn
• There is a cancellation of the sum in a quotient; so that ((ΣHi) + K)/K is isomorphic to ΣHi.
Note the similarity with the direct product, where each g can be expressed uniquely as
g = (h1,h2, ..., hi, ..., hn).
Since hi ∗ hj = hj ∗ hi for all i ≠ j, it follows that multiplication of elements in a direct sum is isomorphic to multiplication of the corresponding elements in the direct product; thus for finite sets of subgroups, ΣHi is isomorphic to the direct product ×{Hi}.
Direct summand
Given a group $G$, we say that a subgroup $H$ is a direct summand of $G$ if there exists another subgroup $K$ of $G$ such that $G=H+K$.
In abelian groups, if $H$ is a divisible subgroup of $G$, then $H$ is a direct summand of $G$.
Examples
• If we take $ G=\prod _{i\in I}H_{i}$, then $G$ is the direct sum of the subgroups $H_{i_{0}}$ and $ \prod _{i\not =i_{0}}H_{i}$; in particular, each factor $H_{i_{0}}$ is a direct summand of $G$.
• If $H$ is a divisible subgroup of an abelian group $G$ then there exists another subgroup $K$ of $G$ such that $G=K+H$.
• If $G$ also has a vector space structure, then any subspace $H$ is a direct summand: $G$ can be written as $G=H+K$ for some subspace $K$, and $K$ is isomorphic to the quotient $G/H$.
Equivalence of decompositions into direct sums
In the decomposition of a finite group into a direct sum of indecomposable subgroups the embedding of the subgroups is not unique. For example, in the Klein group $V_{4}\cong C_{2}\times C_{2}$ we have that
$V_{4}=\langle (0,1)\rangle +\langle (1,0)\rangle ,$ and
$V_{4}=\langle (1,1)\rangle +\langle (1,0)\rangle .$
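These two decompositions can be verified mechanically (a Python sketch; representing $V_{4}$ as pairs of bits with componentwise addition mod 2 is an assumption of the example):

```python
from itertools import product

# Klein four-group V4 = C2 x C2 with componentwise addition mod 2.
V4 = set(product([0, 1], repeat=2))
add = lambda g, h: ((g[0] + h[0]) % 2, (g[1] + h[1]) % 2)

def span(g):
    """Subgroup generated by a single element of order at most 2."""
    return {(0, 0), g}

def is_direct_sum(H, K):
    """V4 = H + K: V4 is abelian, so every subgroup is normal; it remains
    to check trivial intersection and that H and K together generate V4."""
    return H & K == {(0, 0)} and {add(h, k) for h in H for k in K} == V4

assert is_direct_sum(span((0, 1)), span((1, 0)))
assert is_direct_sum(span((1, 1)), span((1, 0)))  # a different decomposition
```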
However, the Remak-Krull-Schmidt theorem states that given a finite group G = ΣAi = ΣBj, where each Ai and each Bj is non-trivial and indecomposable, the two sums have equal terms up to reordering and isomorphism.
The Remak-Krull-Schmidt theorem fails for infinite groups; so in the case of infinite G = H + K = L + M, even when all subgroups are non-trivial and indecomposable, we cannot conclude that H is isomorphic to either L or M.
Generalization to sums over infinite sets
To describe the above properties in the case where G is the direct sum of an infinite (perhaps uncountable) set of subgroups, more care is needed.
If g is an element of the cartesian product Π{Hi} of a set of groups, let gi be the ith element of g in the product. The external direct sum of a set of groups {Hi} (written as ΣE{Hi}) is the subset of Π{Hi}, where, for each element g of ΣE{Hi}, gi is the identity $e_{H_{i}}$ for all but a finite number of gi (equivalently, only a finite number of gi are not the identity). The group operation in the external direct sum is pointwise multiplication, as in the usual direct product.
This subset does indeed form a group, and for a finite set of groups {Hi} the external direct sum is equal to the direct product.
If G = ΣHi, then G is isomorphic to ΣE{Hi}. Thus, in a sense, the direct sum is an "internal" external direct sum. For each element g in G, there is a unique finite set S and a unique set {hi ∈ Hi : i ∈ S} such that g = Π {hi : i in S}.
See also
• Direct sum
• Coproduct
• Free product
• Direct sum of topological groups
References
1. Homology. Saunders MacLane. Springer, Berlin; Academic Press, New York, 1963.
2. László Fuchs. Infinite Abelian Groups
| Wikipedia |
Weak duality
In applied mathematics, weak duality is a concept in optimization which states that the duality gap is always greater than or equal to 0. That means the optimal value of the dual (minimization) problem is always greater than or equal to the optimal value of the associated primal (maximization) problem. This is opposed to strong duality, which only holds in certain cases.[1]
Uses
Many primal-dual approximation algorithms are based on the principle of weak duality.[2]
Weak duality theorem
The primal problem:
Maximize cTx subject to A x ≤ b, x ≥ 0;
The dual problem,
Minimize bTy subject to ATy ≥ c, y ≥ 0.
The weak duality theorem states cTx ≤ bTy.
Namely, if $(x_{1},x_{2},....,x_{n})$ is a feasible solution for the primal maximization linear program and $(y_{1},y_{2},....,y_{m})$ is a feasible solution for the dual minimization linear program, then the weak duality theorem can be stated as $\sum _{j=1}^{n}c_{j}x_{j}\leq \sum _{i=1}^{m}b_{i}y_{i}$, where $c_{j}$ and $b_{i}$ are the coefficients of the respective objective functions.
Proof: cTx = xTc ≤ xT(ATy) = (Ax)Ty ≤ bTy. The first inequality uses x ≥ 0 together with ATy ≥ c, and the second uses y ≥ 0 together with Ax ≤ b.
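The theorem can be checked concretely on a small linear program (the data A, b, c and the feasible points below are arbitrary illustrative choices, not drawn from the article):

```python
# Primal: maximize c^T x subject to Ax <= b, x >= 0
# Dual:   minimize b^T y subject to A^T y >= c, y >= 0
A = [[1, 1],
     [2, 1]]
b = [4, 5]
c = [3, 2]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

x = [1, 2]  # primal-feasible: x >= 0 and Ax = [3, 4] <= [4, 5]
y = [1, 1]  # dual-feasible:   y >= 0 and A^T y = [3, 2] >= [3, 2]

assert all(dot(row, x) <= bi for row, bi in zip(A, b))    # Ax <= b
assert all(sum(A[i][j] * y[i] for i in range(2)) >= c[j]  # A^T y >= c
           for j in range(2))
assert dot(c, x) <= dot(b, y)  # weak duality: 7 <= 9
```

Any pair of feasible points gives such a bound; the duality gap closes (7 would meet 9) only at optimal solutions, when strong duality holds.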
Generalizations
More generally, if $x$ is a feasible solution for the primal maximization problem and $y$ is a feasible solution for the dual minimization problem, then weak duality implies $f(x)\leq g(y)$ where $f$ and $g$ are the objective functions for the primal and dual problems respectively.
See also
• Convex optimization
• Max–min inequality
References
1. Boţ, Radu Ioan; Grad, Sorin-Mihai; Wanka, Gert (2009), Duality in Vector Optimization, Berlin: Springer-Verlag, p. 1, doi:10.1007/978-3-642-02886-1, ISBN 978-3-642-02885-4, MR 2542013.
2. Gonzalez, Teofilo F. (2007), Handbook of Approximation Algorithms and Metaheuristics, CRC Press, p. 2-12, ISBN 9781420010749.
| Wikipedia |
Weak equivalence (homotopy theory)
In mathematics, a weak equivalence is a notion from homotopy theory that in some sense identifies objects that have the same "shape". This notion is formalized in the axiomatic definition of a model category.
A model category is a category with classes of morphisms called weak equivalences, fibrations, and cofibrations, satisfying several axioms. The associated homotopy category of a model category has the same objects, but the morphisms are changed in order to make the weak equivalences into isomorphisms. It is a useful observation that the associated homotopy category depends only on the weak equivalences, not on the fibrations and cofibrations.
Topological spaces
Model categories were defined by Quillen as an axiomatization of homotopy theory that applies to topological spaces, but also to many other categories in algebra and geometry. The example that started the subject is the category of topological spaces with Serre fibrations as fibrations and weak homotopy equivalences as weak equivalences (the cofibrations for this model structure can be described as the retracts of relative cell complexes X ⊆ Y[1]). By definition, a continuous mapping f: X → Y of spaces is called a weak homotopy equivalence if the induced function on sets of path components
$f_{*}\colon \pi _{0}(X)\to \pi _{0}(Y)$
is bijective, and for every point x in X and every n ≥ 1, the induced homomorphism
$f_{*}\colon \pi _{n}(X,x)\to \pi _{n}(Y,f(x))$
on homotopy groups is bijective. (For X and Y path-connected, the first condition is automatic, and it suffices to state the second condition for a single point x in X.)
For simply connected topological spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the induced homomorphism $f_{*}\colon H_{n}(X,\mathbb {Z} )\to H_{n}(Y,\mathbb {Z} )$ on singular homology groups is bijective for all n.[2] Likewise, for simply connected spaces X and Y, a map f: X → Y is a weak homotopy equivalence if and only if the pullback homomorphism $f^{*}\colon H^{n}(Y,\mathbb {Z} )\to H^{n}(X,\mathbb {Z} )$ on singular cohomology is bijective for all n.[3]
Example: Let X be the set of natural numbers {0, 1, 2, ...} and let Y be the set {0} ∪ {1, 1/2, 1/3, ...}, both with the subspace topology from the real line. Define f: X → Y by mapping 0 to 0 and n to 1/n for positive integers n. Then f is continuous, and in fact a weak homotopy equivalence, but it is not a homotopy equivalence.
The homotopy category of topological spaces (obtained by inverting the weak homotopy equivalences) greatly simplifies the category of topological spaces. Indeed, this homotopy category is equivalent to the category of CW complexes with morphisms being homotopy classes of continuous maps.
Many other model structures on the category of topological spaces have also been considered. For example, in the Strøm model structure on topological spaces, the fibrations are the Hurewicz fibrations and the weak equivalences are the homotopy equivalences.[4]
Chain complexes
Some other important model categories involve chain complexes. Let A be a Grothendieck abelian category, for example the category of modules over a ring or the category of sheaves of abelian groups on a topological space. Define a category C(A) with objects the complexes X of objects in A,
$\cdots \to X_{1}\to X_{0}\to X_{-1}\to \cdots ,$
and morphisms the chain maps. (It is equivalent to consider "cochain complexes" of objects of A, where the numbering is written as
$\cdots \to X^{-1}\to X^{0}\to X^{1}\to \cdots ,$
simply by defining $X^{i}=X_{-i}$.)
The category C(A) has a model structure in which the cofibrations are the monomorphisms and the weak equivalences are the quasi-isomorphisms.[5] By definition, a chain map f: X → Y is a quasi-isomorphism if the induced homomorphism
$f_{*}\colon H_{n}(X)\to H_{n}(Y)$
on homology is an isomorphism for all integers n. (Here $H_{n}(X)$ is the object of A defined as the kernel of $X_{n}\to X_{n-1}$ modulo the image of $X_{n+1}\to X_{n}$.) The resulting homotopy category is called the derived category D(A).
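A quasi-isomorphism can be checked concretely on a small example over the rationals. The following sketch (ours, not part of the article; it assumes numpy is available and uses matrix ranks over Q realized as floating-point ranks) compares the complex 0 → Q → Q², a ↦ (a, a), with the complex having a single copy of Q in degree 0: the homology dimensions agree, and the map f₀(a, b) = a − b is a chain map inducing the isomorphism on H₀.

```python
import numpy as np

# Complex X:  0 -> Q --d--> Q^2 -> 0, with d(a) = (a, a)
d = np.array([[1.0],
              [1.0]])
H1_X = 1 - np.linalg.matrix_rank(d)   # dim ker d   (homology in degree 1)
H0_X = 2 - np.linalg.matrix_rank(d)   # dim coker d (homology in degree 0)

# Complex Y:  0 -> 0 -> Q -> 0
H1_Y, H0_Y = 0, 1

# f_1 = 0 and f_0(a, b) = a - b form a chain map (f_0 . d = 0); f_0 descends
# to an isomorphism H_0(X) -> H_0(Y), so f is a quasi-isomorphism even though
# X and Y are not isomorphic as complexes.
f0 = np.array([[1.0, -1.0]])
assert np.allclose(f0 @ d, 0)         # chain-map condition
assert (H1_X, H0_X) == (H1_Y, H0_Y) == (0, 1)
```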
Trivial fibrations and trivial cofibrations
In any model category, a fibration that is also a weak equivalence is called a trivial (or acyclic) fibration. A cofibration that is also a weak equivalence is called a trivial (or acyclic) cofibration.
Notes
1. Hovey (1999), Definition 2.4.3.
2. Hatcher (2002), Theorem 4.32.
3. Is there the Whitehead theorem for cohomology theory?
4. Strøm (1972).
5. Beke (2000), Proposition 3.13.
References
• Beke, Tibor (2000), "Sheafifiable homotopy model categories", Mathematical Proceedings of the Cambridge Philosophical Society, 129: 447–473, arXiv:math/0102087, Bibcode:2000MPCPS.129..447B, doi:10.1017/S0305004100004722, MR 1780498
• Hatcher, Allen (2002), Algebraic Topology, Cambridge University Press, ISBN 0-521-79540-0, MR 1867354
• Hovey, Mark (1999), Model Categories (PDF), American Mathematical Society, ISBN 0-8218-1359-5, MR 1650134
• Strøm, Arne (1972), "The homotopy category is a homotopy category", Archiv der Mathematik, 23: 435–441, doi:10.1007/BF01304912, MR 0321082
| Wikipedia |
Four exponentials conjecture
In mathematics, specifically the field of transcendental number theory, the four exponentials conjecture is a conjecture which, given the right conditions on the exponents, would guarantee the transcendence of at least one of four exponentials. The conjecture, along with two related, stronger conjectures, is at the top of a hierarchy of conjectures and theorems concerning the arithmetic nature of a certain number of values of the exponential function.
Statement
If x1, x2 and y1, y2 are two pairs of complex numbers, with each pair being linearly independent over the rational numbers, then at least one of the following four numbers is transcendental:
$e^{x_{1}y_{1}},e^{x_{1}y_{2}},e^{x_{2}y_{1}},e^{x_{2}y_{2}}.$
An alternative way of stating the conjecture in terms of logarithms is the following. For 1 ≤ i, j ≤ 2 let λij be complex numbers such that exp(λij) are all algebraic. Suppose λ11 and λ12 are linearly independent over the rational numbers, and λ11 and λ21 are also linearly independent over the rational numbers, then
$\lambda _{11}\lambda _{22}\neq \lambda _{12}\lambda _{21}.\,$
An equivalent formulation in terms of linear algebra is the following. Let M be the 2×2 matrix
$M={\begin{pmatrix}\lambda _{11}&\lambda _{12}\\\lambda _{21}&\lambda _{22}\end{pmatrix}},$
where exp(λij) is algebraic for 1 ≤ i, j ≤ 2. Suppose the two rows of M are linearly independent over the rational numbers, and the two columns of M are linearly independent over the rational numbers. Then the rank of M is 2.
While a 2×2 matrix having linearly independent rows and columns usually means it has rank 2, in this case we require linear independence over a smaller field so the rank isn't forced to be 2. For example, the matrix
${\begin{pmatrix}1&\pi \\\pi &\pi ^{2}\end{pmatrix}}$
has rows and columns that are linearly independent over the rational numbers, since π is irrational. But the rank of the matrix is 1. So in this case the conjecture would imply that at least one of $e$, $e^{\pi }$, and $e^{\pi ^{2}}$ is transcendental (which in this case is already known since $e$ is transcendental).
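The rank claim in this example can be checked numerically. A small sketch (ours, not part of the article; floating-point arithmetic stands in for exact computation):

```python
import numpy as np

p = np.pi
M = np.array([[1.0, p],
              [p, p * p]])

# Over the reals the second row is exactly p times the first, so the rank is 1:
assert np.linalg.matrix_rank(M) == 1

# Over the rationals, however, the rows (1, p) and (p, p^2) are linearly
# independent: a*1 + b*p = 0 with rational a, b forces a = b = 0, since p = pi
# is irrational. The conjecture would then say that the matrix of exponentials
# cannot consist entirely of algebraic numbers.
```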
History
The conjecture was considered in the early 1940s by Atle Selberg, who never formally stated it.[1] A special case of the conjecture is mentioned in a 1944 paper of Leonidas Alaoglu and Paul Erdős, who suggest that it had been considered by Carl Ludwig Siegel.[2] An equivalent statement was first mentioned in print by Theodor Schneider, who in 1957 set it as the first of eight important open problems in transcendental number theory.[3]
The related six exponentials theorem was first explicitly mentioned in the 1960s by Serge Lang[4] and Kanakanahalli Ramachandra,[5] and both also explicitly conjectured the above result.[6] Indeed, after proving the six exponentials theorem, Lang mentioned the difficulty in dropping the number of exponents from six to four: the proof used for six exponentials "just misses" when one tries to apply it to four.
Corollaries
Using Euler's identity this conjecture implies the transcendence of many numbers involving e and π. For example, taking x1 = 1, x2 = √2, y1 = iπ, and y2 = iπ√2, the conjecture—if true—implies that one of the following four numbers is transcendental:
$e^{i\pi },e^{i\pi {\sqrt {2}}},e^{i\pi {\sqrt {2}}},e^{2i\pi }.$
The first of these is just −1, and the fourth is 1, while the middle two coincide, so the conjecture implies that $e^{i\pi {\sqrt {2}}}$ is transcendental (which is already known, as a consequence of the Gelfond–Schneider theorem).
An open problem in number theory settled by the conjecture is the question of whether there exists a non-integer real number $t$ such that both $2^{t}$ and $3^{t}$ are integers, or indeed such that $a^{t}$ and $b^{t}$ are both integers for some pair of integers $a$ and $b$ that are multiplicatively independent over the integers. Values of $t$ such that $2^{t}$ is an integer are all of the form $t=\log _{2}m$ for some integer $m$, while for $3^{t}$ to be an integer, $t$ must be of the form $t=\log _{3}n$ for some integer $n$. By setting $x_{1}=1$, $x_{2}=t$, $y_{1}=\log(2)$, and $y_{2}=\log(3)$, the four exponentials conjecture implies that if $t$ is irrational then one of the following four numbers is transcendental:
$2,3,2^{t},3^{t}.\,$
So if $2^{t}$ and $3^{t}$ are both integers then the conjecture implies that $t$ must be a rational number. Since the only rational numbers $t$ for which $2^{t}$ is also rational are the integers, this implies that there are no non-integer real numbers $t$ such that both $2^{t}$ and $3^{t}$ are integers. It is this consequence, for any two primes (not just 2 and 3), that Alaoglu and Erdős desired in their paper, as it would imply the conjecture that the quotient of two consecutive colossally abundant numbers is prime, extending Ramanujan's results on the quotients of consecutive superior highly composite numbers.[7]
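This consequence can be probed numerically. The following sketch (ours, a heuristic floating-point search and in no way a proof) takes t = log2(m) for small m, skips the trivial integer values of t, and finds no case where 3 raised to the power t lands close to an integer:

```python
import math

# Heuristic search: is there a non-integer t = log2(m), m in [2, 1000],
# with 3**t (numerically) an integer? The conjecture predicts no.
hits = []
for m in range(2, 1001):
    t = math.log2(m)
    if abs(t - round(t)) < 1e-12:   # t is an integer (m a power of 2): skip
        continue
    v = 3.0 ** t
    if abs(v - round(v)) < 1e-9:    # tolerance chosen for double precision
        hits.append(m)

assert hits == []                   # consistent with the predicted consequence
```

Floating point cannot certify that a value is not an integer, so this only illustrates the statement; it decides nothing.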
Sharp four exponentials conjecture
The four exponentials conjecture reduces the pair and triplet of complex numbers in the hypotheses of the six exponentials theorem to two pairs. It is conjectured that this is also possible with the sharp six exponentials theorem, and this is the sharp four exponentials conjecture.[8] Specifically, this conjecture claims that if x1, x2, and y1, y2 are two pairs of complex numbers with each pair being linearly independent over the rational numbers, and if βij are four algebraic numbers for 1 ≤ i, j ≤ 2 such that the following four numbers are algebraic:
$e^{x_{1}y_{1}-\beta _{11}},e^{x_{1}y_{2}-\beta _{12}},e^{x_{2}y_{1}-\beta _{21}},e^{x_{2}y_{2}-\beta _{22}},$
then xi yj = βij for 1 ≤ i, j ≤ 2. So all four exponentials are in fact 1.
This conjecture implies both the sharp six exponentials theorem, which requires a third x value, and the as yet unproven sharp five exponentials conjecture that requires a further exponential to be algebraic in its hypotheses.
Strong four exponentials conjecture
The strongest result that has been conjectured in this circle of problems is the strong four exponentials conjecture.[9] This result would imply both aforementioned conjectures concerning four exponentials as well as all the five and six exponentials conjectures and theorems, as illustrated to the right, and all the three exponentials conjectures detailed below. The statement of this conjecture deals with the vector space over the algebraic numbers generated by 1 and all logarithms of non-zero algebraic numbers, denoted here as L∗. So L∗ is the set of all complex numbers of the form
$\beta _{0}+\sum _{i=1}^{n}\beta _{i}\log \alpha _{i},$
for some n ≥ 0, where all the βi and αi are algebraic and every branch of the logarithm is considered. The statement of the strong four exponentials conjecture is then as follows. Let x1, x2, and y1, y2 be two pairs of complex numbers with each pair being linearly independent over the algebraic numbers, then at least one of the four numbers xi yj for 1 ≤ i, j ≤ 2 is not in L∗.
Three exponentials conjecture
The four exponentials conjecture rules out a special case of non-trivial, homogeneous, quadratic relations between logarithms of algebraic numbers. But a conjectural extension of Baker's theorem implies that there should be no non-trivial algebraic relations between logarithms of algebraic numbers at all, homogeneous or not. One case of non-homogeneous quadratic relations is covered by the still open three exponentials conjecture.[10] In its logarithmic form it is the following conjecture. Let λ1, λ2, and λ3 be any three logarithms of algebraic numbers and γ be a non-zero algebraic number, and suppose that λ1λ2 = γλ3. Then λ1λ2 = γλ3 = 0.
The exponential form of this conjecture is the following. Let x1, x2, and y be non-zero complex numbers and let γ be a non-zero algebraic number. Then at least one of the following three numbers is transcendental:
$e^{x_{1}y},e^{x_{2}y},e^{\gamma x_{1}/x_{2}}.$
There is also a sharp three exponentials conjecture which claims that if x1, x2, and y are non-zero complex numbers and α, β1, β2, and γ are algebraic numbers such that the following three numbers are algebraic
$e^{x_{1}y-\beta _{1}},e^{x_{2}y-\beta _{2}},e^{(\gamma x_{1}/x_{2})-\alpha },$
then either x2y = β2 or γx1 = αx2.
The strong three exponentials conjecture meanwhile states that if x1, x2, and y are non-zero complex numbers with x1y, x2y, and x1/x2 all transcendental, then at least one of the three numbers x1y, x2y, x1/x2 is not in L∗.
As with the other results in this family, the strong three exponentials conjecture implies the sharp three exponentials conjecture which implies the three exponentials conjecture. However, the strong and sharp three exponentials conjectures are implied by their four exponentials counterparts, bucking the usual trend. And the three exponentials conjecture is neither implied by nor implies the four exponentials conjecture.
The three exponentials conjecture, like the sharp five exponentials conjecture, would imply the transcendence of $e^{\pi ^{2}}$ by letting (in the logarithmic version) λ1 = iπ, λ2 = −iπ, and γ = 1.
Bertrand's conjecture
Many of the theorems and results in transcendental number theory concerning the exponential function have analogues involving the modular function j. Writing $q=e^{2\pi i\tau }$ for the nome and j(τ) = J(q), Daniel Bertrand conjectured that if q1 and q2 are non-zero algebraic numbers in the complex unit disc that are multiplicatively independent, then J(q1) and J(q2) are algebraically independent over the rational numbers.[11] Although not obviously related to the four exponentials conjecture, Bertrand's conjecture in fact implies a special case known as the weak four exponentials conjecture.[12] This conjecture states that if x1 and x2 are two positive real algebraic numbers, neither of them equal to 1, then $\pi ^{2}$ and the product $\log(x_{1})\log(x_{2})$ are linearly independent over the rational numbers. This corresponds to the special case of the four exponentials conjecture whereby y1 = iπ, y2 = −iπ, and x1 and x2 are real. Perhaps surprisingly, though, it is also a corollary of Bertrand's conjecture, suggesting there may be an approach to the full four exponentials conjecture via the modular function j.
Notes
1. Waldschmidt, (2006).
2. Alaoglu and Erdős, (1944), p.455: "It is very likely that $q^{x}$ and $p^{x}$ cannot be rational at the same time except if $x$ is an integer. ... At present we can not show this. Professor Siegel has communicated to us the result that $q^{x}$, $r^{x}$ and $s^{x}$ can not be simultaneously rational except if $x$ is an integer."
3. Schneider, (1957).
4. Lang, (1966), chapter 2 section 1.
5. Ramachandra, (1967/8).
6. Waldschmidt, (2000), p.15.
7. Ramanujan, (1915), section IV.
8. Waldschmidt, "Hopf algebras..." (2005), p.200.
9. Waldschmidt, (2000), conjecture 11.17.
10. Waldschmidt, "Variations..." (2005), consequence 1.9.
11. Bertrand, (1997), conjecture 2 in section 5.
12. Diaz, (2001), section 4.
References
• Alaoglu, Leonidas; Erdős, Paul (1944). "On highly composite and similar numbers". Trans. Amer. Math. Soc. 56 (3): 448–469. doi:10.2307/1990319. JSTOR 1990319. MR 0011087.
• Bertrand, Daniel (1997). "Theta functions and transcendence". The Ramanujan Journal. 1 (4): 339–350. doi:10.1023/A:1009749608672. MR 1608721. S2CID 118628723.
• Diaz, Guy (2001). "Mahler's conjecture and other transcendence results". In Nesterenko, Yuri V.; Philippon, Patrice (eds.). Introduction to algebraic independence theory. Lecture Notes in Math. Vol. 1752. Springer. pp. 13–26. ISBN 3-540-41496-7. MR 1837824.
• Lang, Serge (1966). Introduction to transcendental numbers. Reading, Mass.: Addison-Wesley Publishing Co. MR 0214547.
• Ramachandra, Kanakanahalli (1967–1968). "Contributions to the theory of transcendental numbers. I, II". Acta Arith. 14: 65–72, 73–88. doi:10.4064/aa-14-1-65-72. MR 0224566.
• Ramanujan, Srinivasa (1915). "Highly Composite Numbers". Proc. London Math. Soc. 14 (2): 347–407. doi:10.1112/plms/s2_14.1.347. MR 2280858.
• Schneider, Theodor (1957). Einführung in die transzendenten Zahlen (in German). Berlin-Göttingen-Heidelberg: Springer. MR 0086842.
• Waldschmidt, Michel (2000). Diophantine approximation on linear algebraic groups. Grundlehren der Mathematischen Wissenschaften. Vol. 326. Berlin: Springer. ISBN 3-540-66785-7. MR 1756786.
• Waldschmidt, Michel (2005). "Hopf algebras and transcendental numbers". In Aoki, Takashi; Kanemitsu, Shigeru; Nakahara, Mikio; et al. (eds.). Zeta functions, topology, and quantum physics: Papers from the symposium held at Kinki University, Osaka, March 3–6, 2003. Developments in mathematics. Vol. 14. Springer. pp. 197–219. CiteSeerX 10.1.1.170.5648. MR 2179279.
• Waldschmidt, Michel (2005). "Variations on the six exponentials theorem". In Tandon, Rajat (ed.). Algebra and number theory. Delhi: Hindustan Book Agency. pp. 338–355. MR 2193363.
• Waldschmidt, Michel (2006). "On Ramachandra's contributions to transcendental number theory". In Balasubramanian, B.; Srinivas, K. (eds.). The Riemann zeta function and related themes: papers in honour of Professor K. Ramachandra. Ramanujan Math. Soc. Lect. Notes Ser. Vol. 2. Mysore: Ramanujan Math. Soc. pp. 155–179. MR 2335194.
External links
• "Four exponentials conjecture". PlanetMath.
• Weisstein, Eric W. "Four Exponentials Conjecture". MathWorld.
| Wikipedia |
Weak dimension
In abstract algebra, the weak dimension of a nonzero right module M over a ring R is the largest number n such that the Tor group $\operatorname {Tor} _{n}^{R}(M,N)$ is nonzero for some left R-module N (or infinity if no largest such n exists), and the weak dimension of a left R-module is defined similarly. The weak dimension was introduced by Henri Cartan and Samuel Eilenberg (1956, p.122). The weak dimension is sometimes called the flat dimension as it is the shortest length of the resolution of the module by flat modules. The weak dimension of a module is, at most, equal to its projective dimension.
The weak global dimension of a ring is the largest number n such that $\operatorname {Tor} _{n}^{R}(M,N)$ is nonzero for some right R-module M and left R-module N. If there is no such largest number n, the weak global dimension is defined to be infinite. It is at most equal to the left or right global dimension of the ring R.
Examples
• The module $\mathbb {Q} $ of rational numbers over the ring $\mathbb {Z} $ of integers has weak dimension 0, but projective dimension 1.
• The module $\mathbb {Q} /\mathbb {Z} $ over the ring $\mathbb {Z} $ has weak dimension 1, but injective dimension 0.
• The module $\mathbb {Z} $ over the ring $\mathbb {Z} $ has weak dimension 0, but injective dimension 1.
• A Prüfer domain has weak global dimension at most 1.
• A Von Neumann regular ring has weak global dimension 0.
• A product of infinitely many fields has weak global dimension 0 but its global dimension is nonzero.
• If a ring is right Noetherian, then the right global dimension is the same as the weak global dimension, and is at most the left global dimension. In particular if a ring is right and left Noetherian then the left and right global dimensions and the weak global dimension are all the same.
• The triangular matrix ring ${\begin{bmatrix}\mathbb {Z} &\mathbb {Q} \\0&\mathbb {Q} \end{bmatrix}}$ has right global dimension 1, weak global dimension 1, but left global dimension 2. It is right Noetherian, but not left Noetherian.
References
• Cartan, Henri; Eilenberg, Samuel (1956), Homological algebra, Princeton Mathematical Series, vol. 19, Princeton University Press, ISBN 978-0-691-04991-5, MR 0077480
• Năstăsescu, Constantin; Van Oystaeyen, Freddy (1987), Dimensions of ring theory, Mathematics and its Applications, vol. 36, D. Reidel Publishing Co., doi:10.1007/978-94-009-3835-9, ISBN 9789027724618, MR 0894033
| Wikipedia |
Goldbach's weak conjecture
In number theory, Goldbach's weak conjecture, also known as the odd Goldbach conjecture, the ternary Goldbach problem, or the 3-primes problem, states that
Every odd number greater than 5 can be expressed as the sum of three primes. (A prime may be used more than once in the same sum.)
Goldbach's weak conjecture
Letter from Goldbach to Euler dated on 7 June 1742 (Latin-German)[1]
Field: Number theory
Conjectured by: Christian Goldbach
Conjectured in: 1742
First proof by: Harald Helfgott
First proof in: 2013
Implied by: Goldbach's conjecture
This conjecture is called "weak" because if Goldbach's strong conjecture (concerning sums of two primes) is proven, then this would also be true. For if every even number greater than 4 is the sum of two odd primes, adding 3 to each even number greater than 4 will produce the odd numbers greater than 7 (and 7 itself is equal to 2+2+3).
In 2013, Harald Helfgott released a proof of Goldbach's weak conjecture.[2] As of 2018, the proof is widely accepted in the mathematics community,[3] but it has not yet been published in a peer-reviewed journal. The proof was accepted for publication in the Annals of Mathematics Studies series[4] in 2015, and has been undergoing further review and revision since; fully-refereed chapters in close to final form are being made public in the process.[5]
Some state the conjecture as
Every odd number greater than 7 can be expressed as the sum of three odd primes.[6]
This version excludes 7 = 2+2+3 because this requires the even prime 2. On odd numbers larger than 7 it is slightly stronger as it also excludes sums like 17 = 2+2+13, which are allowed in the other formulation. Helfgott's proof covers both versions of the conjecture. Like the other formulation, this one also immediately follows from Goldbach's strong conjecture.
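The statement is easy to check by brute force for small odd numbers. The following sketch (ours, not from the article) sieves the primes and then searches for a three-prime representation of every odd n from 7 up to 5001:

```python
def primes_upto(n):
    """Sieve of Eratosthenes: list of all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i in range(n + 1) if sieve[i]]

def three_prime_sum(n, primes, prime_set):
    """Return primes (p, q, r) with p <= q <= r and p + q + r == n, else None."""
    for p in primes:
        if 3 * p > n:
            break
        for q in primes:
            if q < p:
                continue
            if p + 2 * q > n:           # would force r < q
                break
            r = n - p - q
            if r in prime_set:
                return (p, q, r)
    return None

N = 5001
primes = primes_upto(N)
prime_set = set(primes)
for n in range(7, N + 1, 2):            # every odd number greater than 5
    assert three_prime_sum(n, primes, prime_set) is not None, n
```

For example, three_prime_sum(7, ...) returns (2, 2, 3), the representation mentioned above; restricting the inner search to odd primes would check the stronger formulation instead.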
Origins
Main article: Goldbach's conjecture
The conjecture originated in correspondence between Christian Goldbach and Leonhard Euler. One formulation of the strong Goldbach conjecture, equivalent to the more common one in terms of sums of two primes, is
Every integer greater than 5 can be written as the sum of three primes.
The weak conjecture is simply this statement restricted to the case where the integer is odd (and possibly with the added requirement that the three primes in the sum be odd).
Timeline of results
In 1923, Hardy and Littlewood showed that, assuming the generalized Riemann hypothesis, the weak Goldbach conjecture is true for all sufficiently large odd numbers. In 1937, Ivan Matveevich Vinogradov eliminated the dependency on the generalised Riemann hypothesis and proved directly (see Vinogradov's theorem) that all sufficiently large odd numbers can be expressed as the sum of three primes. Vinogradov's original proof, as it used the ineffective Siegel–Walfisz theorem, did not give a bound for "sufficiently large"; his student K. Borozdkin (1956) derived that $e^{e^{16.038}}\approx 3^{3^{15}}$ is large enough.[7] The integer part of this number has 4,008,660 decimal digits, so checking every number under this figure would be completely infeasible.
In 1997, Deshouillers, Effinger, te Riele and Zinoviev published a result showing[8] that the generalized Riemann hypothesis implies Goldbach's weak conjecture for all numbers. This result combines a general statement valid for numbers greater than $10^{20}$ with an extensive computer search of the small cases. Saouter also conducted a computer search covering the same cases at approximately the same time.[9]
Olivier Ramaré in 1995 showed that every even number n ≥ 4 is in fact the sum of at most six primes, from which it follows that every odd number n ≥ 5 is the sum of at most seven primes. Leszek Kaniecki showed every odd integer is a sum of at most five primes, under the Riemann Hypothesis.[10] In 2012, Terence Tao proved this without the Riemann Hypothesis; this improves both results.[11]
In 2002, Liu Ming-Chit (University of Hong Kong) and Wang Tian-Ze lowered Borozdkin's threshold to approximately $n>e^{3100}\approx 2\times 10^{1346}$. The exponent is still much too large to admit checking all smaller numbers by computer. (Computer searches have only reached as far as $10^{18}$ for the strong Goldbach conjecture, and not much further than that for the weak Goldbach conjecture.)
In 2012 and 2013, Peruvian mathematician Harald Helfgott released a pair of papers improving major and minor arc estimates sufficiently to unconditionally prove the weak Goldbach conjecture.[12][13][2][14] Here, the set of major arcs ${\mathfrak {M}}$ is the union of intervals $\left(a/q-cr_{0}/qx,a/q+cr_{0}/qx\right)$ around the rationals $a/q$, $q<r_{0}$, where $c$ is a constant, and the minor arcs ${\mathfrak {m}}$ are defined to be ${\mathfrak {m}}=(\mathbb {R} /\mathbb {Z} )\setminus {\mathfrak {M}}$.
References
1. Correspondance mathématique et physique de quelques célèbres géomètres du XVIIIème siècle (Band 1), St.-Pétersbourg 1843, pp. 125–129.
2. Helfgott, Harald A. (2013). "The ternary Goldbach conjecture is true". arXiv:1312.7748 [math.NT].
3. "Harald Andrés Helfgott - Alexander von Humboldt-Foundation". www.humboldt-foundation.de. Archived from the original on 2022-08-24. Retrieved 2022-08-24.
4. "Annals of Mathematics Studies". Princeton University Press. 1996-12-14. Retrieved 2023-02-05.
5. "Harald Andrés Helfgott". webusers.imj-prg.fr. Retrieved 2021-04-06.
6. Weisstein, Eric W. "Goldbach Conjecture". MathWorld.
7. Helfgott, Harald Andrés (2015). "The ternary Goldbach problem". arXiv:1501.05438 [math.NT].
8. Deshouillers, Jean-Marc; Effinger, Gove W.; Te Riele, Herman J. J.; Zinoviev, Dmitrii (1997). "A complete Vinogradov 3-primes theorem under the Riemann hypothesis". Electronic Research Announcements of the American Mathematical Society. 3 (15): 99–104. doi:10.1090/S1079-6762-97-00031-0. MR 1469323.
9. Yannick Saouter (1998). "Checking the odd Goldbach Conjecture up to 10^20" (PDF). Math. Comp. 67 (222): 863–866. doi:10.1090/S0025-5718-98-00928-4. MR 1451327.
10. Kaniecki, Leszek (1995). "On Šnirelman's constant under the Riemann hypothesis" (PDF). Acta Arithmetica. 72 (4): 361–374. doi:10.4064/aa-72-4-361-374. MR 1348203.
11. Tao, Terence (2014). "Every odd number greater than 1 is the sum of at most five primes". Math. Comp. 83 (286): 997–1038. arXiv:1201.6656. doi:10.1090/S0025-5718-2013-02733-0. MR 3143702. S2CID 2618958.
12. Helfgott, Harald A. (2013). "Major arcs for Goldbach's theorem". arXiv:1305.2897 [math.NT].
13. Helfgott, Harald A. (2012). "Minor arcs for Goldbach's problem". arXiv:1205.5252 [math.NT].
14. Helfgott, Harald A. (2015). "The ternary Goldbach problem". arXiv:1501.05438 [math.NT].
| Wikipedia |
Lambda calculus definition
Lambda calculus is a formal mathematical system based on lambda abstraction and function application. Two definitions of the language are given here: a standard definition, and a definition using mathematical formulas.
For a general introduction, see Lambda calculus.
Standard definition
This formal definition was given by Alonzo Church.
Definition
Lambda expressions are composed of
• variables $v_{1}$, $v_{2}$, ..., $v_{n}$, ...
• the abstraction symbols lambda '$\lambda $' and dot '.'
• parentheses ( )
The set of lambda expressions, $\Lambda $, can be defined inductively:
1. If $x$ is a variable, then $x\in \Lambda $
2. If $x$ is a variable and $M\in \Lambda $, then $(\lambda x.M)\in \Lambda $
3. If $M,N\in \Lambda $, then $(M\ N)\in \Lambda $
Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications.[1]
Notation
To keep the notation of lambda expressions uncluttered, the following conventions are usually applied.
• Outermost parentheses are dropped: $M\ N$ instead of $(M\ N)$
• Applications are assumed to be left-associative: $M\ N\ P$ may be written instead of $((M\ N)\ P)$[2]
• The body of an abstraction extends as far right as possible: $\lambda x.M\ N$ means $\lambda x.(M\ N)$ and not $(\lambda x.M)\ N$
• A sequence of abstractions is contracted: $\lambda x.\lambda y.\lambda z.N$ is abbreviated as $\lambda xyz.N$[3][4]
Free and bound variables
The abstraction operator, $\lambda $, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be bound. All other variables are called free. For example, in the following expression $y$ is a bound variable and $x$ is free: $\lambda y.x\ x\ y$. Also note that a variable is bound by its "nearest" abstraction. In the following example the single occurrence of $x$ in the expression is bound by the second lambda: $\lambda x.y(\lambda x.z\ x)$
The set of free variables of a lambda expression, $M$, is denoted as $\operatorname {FV} (M)$ and is defined by recursion on the structure of the terms, as follows:
1. $\operatorname {FV} (x)=\{x\}$, where $x$ is a variable
2. $\operatorname {FV} (\lambda x.M)=\operatorname {FV} (M)\backslash \{x\}$
3. $\operatorname {FV} (M\ N)=\operatorname {FV} (M)\cup \operatorname {FV} (N)$[5]
An expression that contains no free variables is said to be closed. Closed lambda expressions are also known as combinators and are equivalent to terms in combinatory logic.
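The FV recursion above is straightforward to implement. A minimal sketch in Python, using an ad-hoc tuple encoding of terms that is our own choice, not a standard one: ('var', x) for variables, ('lam', x, body) for abstractions, ('app', f, a) for applications.

```python
def free_vars(t):
    """Set of free variables of a term, following the three FV clauses."""
    tag = t[0]
    if tag == 'var':                              # FV(x) = {x}
        return {t[1]}
    if tag == 'lam':                              # FV(λx.M) = FV(M) \ {x}
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])      # FV(M N) = FV(M) ∪ FV(N)

# λy. x x y : x is free, y is bound
term = ('lam', 'y', ('app', ('app', ('var', 'x'), ('var', 'x')), ('var', 'y')))
assert free_vars(term) == {'x'}
```

A term t is closed (a combinator) exactly when free_vars(t) is empty.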
Reduction
The meaning of lambda expressions is defined by how expressions can be reduced.[6]
There are three kinds of reduction:
• α-conversion: changing bound variables (alpha);
• β-reduction: applying functions to their arguments (beta);
• η-reduction: which captures a notion of extensionality (eta).
We also speak of the resulting equivalences: two expressions are β-equivalent, if they can be β-converted into the same expression, and α/η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, $(\lambda x.M)\ N$ is a β-redex, expressing the substitution of $N$ for $x$ in $M$; if $x$ is not free in $M$, $\lambda x.M\ x$ is an η-redex. The expression to which a redex reduces is called its reduct; using the previous example, the reducts of these expressions are respectively $M[x:=N]$ and $M$.
α-conversion
Alpha-conversion, sometimes known as alpha-renaming,[7] allows bound variable names to be changed. For example, alpha-conversion of $\lambda x.x$ might yield $\lambda y.y$. Terms that differ only by alpha-conversion are called α-equivalent. Frequently in uses of lambda calculus, α-equivalent terms are considered to be equivalent.
The precise rules for alpha-conversion are not completely trivial. First, when alpha-converting an abstraction, the only variable occurrences that are renamed are those that are bound by the same abstraction. For example, an alpha-conversion of $\lambda x.\lambda x.x$ could result in $\lambda y.\lambda x.x$, but it could not result in $\lambda y.\lambda x.y$. The latter has a different meaning from the original.
Second, alpha-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace $x$ with $y$ in $\lambda x.\lambda y.x$, we get $\lambda y.\lambda y.y$, which does not mean the same thing: the occurrence that was bound by the outer abstraction is now captured by the inner one.
In programming languages with static scope, alpha-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see alpha renaming to make name resolution trivial).
Substitution
Substitution, written $E[V:=R]$, is the process of replacing all free occurrences of the variable $V$ in the expression $E$ with expression $R$. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are arbitrary λ expressions).
${\begin{aligned}x[x:=N]&\equiv N\\y[x:=N]&\equiv y{\text{, if }}x\neq y\end{aligned}}$
${\begin{aligned}(M_{1}\ M_{2})[x:=N]&\equiv (M_{1}[x:=N])\ (M_{2}[x:=N])\\(\lambda x.M)[x:=N]&\equiv \lambda x.M\\(\lambda y.M)[x:=N]&\equiv \lambda y.(M[x:=N]){\text{, if }}x\neq y{\text{, provided }}y\notin FV(N)\end{aligned}}$
To substitute into a lambda abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for $(\lambda x.y)[y:=x]$ to result in $(\lambda x.x)$, because the substituted $x$ was supposed to be free but ended up being bound. The correct substitution in this case is $(\lambda z.x)$, up to α-equivalence. Notice that substitution is defined uniquely up to α-equivalence.
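Capture-avoiding substitution, including the α-renaming step just described, can be sketched as follows. The tuple encoding ('var', x) / ('lam', x, body) / ('app', f, a) and the fresh-name scheme are our own choices for illustration.

```python
import itertools

def free_vars(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, n):
    """Capture-avoiding substitution t[x := n]."""
    if t[0] == 'var':
        return n if t[1] == x else t
    if t[0] == 'app':
        return ('app', subst(t[1], x, n), subst(t[2], x, n))
    y, body = t[1], t[2]
    if y == x:                        # x is shadowed: nothing to substitute
        return t
    if y in free_vars(n):             # substitution would capture y: rename it
        taken = free_vars(n) | free_vars(body)
        fresh = next(v for v in (f"{y}{i}" for i in itertools.count())
                     if v not in taken)
        body = subst(body, y, ('var', fresh))
        y = fresh
    return ('lam', y, subst(body, x, n))

# (λx.y)[y := x] must not produce λx.x; the bound x is renamed first:
result = subst(('lam', 'x', ('var', 'y')), 'y', ('var', 'x'))
assert result == ('lam', 'x0', ('var', 'x'))   # α-equivalent to λz.x
```

The renaming step makes the result unique only up to the choice of fresh name, matching the fact that substitution is defined up to α-equivalence.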
β-reduction
β-reduction captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of $((\lambda V.E)\ E')$ is $E[V:=E']$.
For example, assuming some encoding of $2,7,\times $, we have the following β-reduction: $((\lambda n.\ n\times 2)\ 7)\rightarrow 7\times 2$.
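In a host language the same β-step is ordinary function application; a minimal Python analogue of the example above (the encoding of numbers and × as native Python values is an assumption):

```python
# ((λn. n×2) 7) → 7×2: applying the abstraction substitutes 7 for n.
double = lambda n: n * 2
assert double(7) == 14
```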
η-reduction
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments. η-reduction converts between $\lambda x.(fx)$ and $f$ whenever $x$ does not appear free in $f$.
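The extensional reading can be illustrated in Python (an informal analogue, not a statement about the article's formal system): an η-expanded wrapper is indistinguishable from the original function on every argument.

```python
# η: λx.(f x) and f agree on every argument when x is not free in f.
f = lambda n: n + 1
g = lambda x: f(x)      # η-expansion of f
assert all(f(k) == g(k) for k in range(100))
```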
Normalization
Main article: Beta normal form
The purpose of β-reduction is to calculate a value. A value in lambda calculus is a function. So β-reduction continues until the expression looks like a function abstraction.
A lambda expression that cannot be reduced further, by either β-reduction or η-reduction, is in normal form. Note that α-conversion may still apply to an expression in normal form. All normal forms that can be converted into each other by α-conversion are defined to be equal. See the main article on Beta normal form for details.
• Normal Form: No β- or η-reductions are possible.
• Head Normal Form: In the form of a lambda abstraction whose body is not reducible.
• Weak Head Normal Form: In the form of a lambda abstraction.
Syntax definition in BNF
Lambda Calculus has a simple syntax. A lambda calculus program has the syntax of an expression where,
• Abstraction (anonymous function definition):
<expression> ::= λ <variable-list> . <expression>
• Application term:
<expression> ::= <application-term>
• Application (a function call):
<application-term> ::= <application-term> <item>
• Item:
<application-term> ::= <item>
• Variable (e.g. x, y, fact, sum, ...):
<item> ::= <variable>
• Grouping (a bracketed expression):
<item> ::= ( <expression> )
The variable list is defined as,
<variable-list> ::= <variable> | <variable>, <variable-list>
A variable as used by computer scientists has the syntax,
<variable> ::= <alpha> <extension>
<extension> ::=
<extension> ::= <extension-char> <extension>
<extension-char> ::= <alpha> | <digit> | _
Mathematicians will sometimes restrict a variable to be a single alphabetic character. When using this convention the comma is omitted from the variable list.
A lambda abstraction has a lower precedence than an application, so;
$\lambda x.y\ z=\lambda x.(y\ z)$
Applications are left associative;
$x\ y\ z=(x\ y)\ z$
An abstraction with multiple parameters is equivalent to multiple abstractions of one parameter.
$\lambda x.y.z=\lambda x.\lambda y.z$
where,
• x is a variable
• y is a variable list
• z is an expression
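The multi-parameter convention above is currying; a Python sketch of the equivalence (the arithmetic body is an illustrative assumption):

```python
# λx y. x+y  ≡  λx.λy. x+y : one two-parameter abstraction as nested abstractions.
add = lambda x, y: x + y
add_curried = lambda x: (lambda y: x + y)
assert add(3, 4) == add_curried(3)(4) == 7
```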
Definition as mathematical formulas
The problem of how variables may be renamed is difficult. This definition avoids the problem by substituting all names with canonical names, which are constructed based on the position of the definition of the name in the expression. The approach is analogous to what a compiler does, but has been adapted to work within the constraints of mathematics.
Semantics
The execution of a lambda expression proceeds using the following reductions and transformations,
1. α-conversion - $\operatorname {alpha-conv} (a)\to \operatorname {canonym} [A,P]=\operatorname {canonym} [a[A],P]$
2. β-reduction - $\operatorname {beta-redex} [\lambda p.b\ v]=b[p:=v]$
3. η-reduction - $x\not \in \operatorname {FV} (f)\to \operatorname {eta-redex} [\lambda x.(f\ x)]=f$
where,
• canonym is a renaming of a lambda expression to give the expression standard names, based on the position of the name in the expression.
• Substitution Operator, $b[p:=v]$ is the substitution of the name $p$ by the lambda expression $v$ in lambda expression $b$.
• Free Variable Set $\operatorname {FV} (f)$ is the set of variables that do not belong to a lambda abstraction in $f$.
Execution is performing β-reductions and η-reductions on subexpressions in the canonym of a lambda expression until the result is a lambda function (abstraction) in the normal form.
All α-conversions of a λ-expression are considered to be equivalent.
Canonym - Canonical Names
Canonym is a function that takes a lambda expression and renames all names canonically, based on their positions in the expression. This might be implemented as,
${\begin{aligned}\operatorname {canonym} [L,Q]&=\operatorname {canonym} [L,O,Q]\\\operatorname {canonym} [\lambda p.b,M,Q]&=\lambda \operatorname {name} (Q).\operatorname {canonym} [b,M[p:=Q],Q+N]\\\operatorname {canonym} [X\ Y,M,Q]&=\operatorname {canonym} [X,M,Q+F]\ \operatorname {canonym} [Y,M,Q+S]\\\operatorname {canonym} [x,M,Q]&=\operatorname {name} (M[x])\end{aligned}}$
where N is the string "N", F is the string "F", S is the string "S", + is string concatenation, and "name" converts a string into a name.
Map operators
Map from one value to another if the value is in the map. O is the empty map.
1. $O[x]=x$
2. $M[x:=y][x]=y$
3. $x\neq z\to M[x:=y][z]=M[z]$
Substitution operator
If L is a lambda expression, x is a name, and y is a lambda expression; $L[x:=y]$ means substitute x by y in L. The rules are,
1. $(\lambda p.b)[x:=y]=\lambda p.b[x:=y]$
2. $(X\,Y)[x:=y]=X[x:=y]\,Y[x:=y]$
3. $z=x\to (z)[x:=y]=y$
4. $z\neq x\to (z)[x:=y]=z$
Note that rule 1 must be modified if it is to be used on non canonically renamed lambda expressions. See Changes to the substitution operator.
Free and bound variable sets
The set of free variables of a lambda expression, M, is denoted as FV(M). This is the set of variable names that have instances not bound (used) in a lambda abstraction, within the lambda expression. They are the variable names that may be bound to formal parameter variables from outside the lambda expression.
The set of bound variables of a lambda expression, M, is denoted as BV(M). This is the set of variable names that have instances bound (used) in a lambda abstraction, within the lambda expression.
The rules for the two sets are given below.[5]
$\mathrm {FV} (M)$ - Free Variable Set Comment $\mathrm {BV} (M)$ - Bound Variable Set Comment
$\mathrm {FV} (x)=\{x\}$ where x is a variable $\mathrm {BV} (x)=\emptyset $ where x is a variable
$\mathrm {FV} (\lambda x.M)=\mathrm {FV} (M)\setminus \{x\}$ Free variables of M excluding x $\mathrm {BV} (\lambda x.M)=\mathrm {BV} (M)\cup \{x\}$ Bound variables of M plus x.
$\mathrm {FV} (M\ N)=\mathrm {FV} (M)\cup \mathrm {FV} (N)$ Combine the free variables from the function and the parameter $\mathrm {BV} (M\ N)=\mathrm {BV} (M)\cup \mathrm {BV} (N)$ Combine the bound variables from the function and the parameter
Usage;
• The Free Variable Set, FV is used above in the definition of the η-reduction.
• The Bound Variable Set, BV, is used in the rule for the β-redex of non-canonically renamed lambda expressions.
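The FV/BV rules in the table translate directly into code. This sketch (tuple-encoded terms, not from the article) checks the article's own example $\lambda y.x\ x\ y$:

```python
# Terms: ('var', v) | ('lam', v, body) | ('app', fn, arg).

def FV(t):
    if t[0] == 'var':
        return {t[1]}                      # FV(x) = {x}
    if t[0] == 'lam':
        return FV(t[2]) - {t[1]}           # FV(λx.M) = FV(M) \ {x}
    return FV(t[1]) | FV(t[2])             # FV(M N) = FV(M) ∪ FV(N)

def BV(t):
    if t[0] == 'var':
        return set()                       # BV(x) = ∅
    if t[0] == 'lam':
        return BV(t[2]) | {t[1]}           # BV(λx.M) = BV(M) ∪ {x}
    return BV(t[1]) | BV(t[2])             # BV(M N) = BV(M) ∪ BV(N)

# λy.x x y : y is bound, x is free.
term = ('lam', 'y', ('app', ('app', ('var', 'x'), ('var', 'x')), ('var', 'y')))
assert FV(term) == {'x'} and BV(term) == {'y'}
```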
Evaluation strategy
This mathematical definition is structured so that it represents the result, and not the way it gets calculated. However, the result may be different between lazy and eager evaluation. This difference is described in the evaluation formulas.
The definitions given here assume that the first definition that matches the lambda expression will be used. This convention is used to make the definition more readable. Otherwise some if conditions would be required to make the definition precise.
Running or evaluating a lambda expression L is,
$\operatorname {eval} [\operatorname {canonym} [L],Q]$
where Q is a name prefix possibly an empty string and eval is defined by,
${\begin{aligned}\operatorname {eval} [x\ y]&=\operatorname {eval} [\operatorname {apply} [\operatorname {eval} [x]\ \operatorname {strategy} [y]]]\\\operatorname {apply} [(\lambda x.y)\ z]&=\operatorname {canonym} [\operatorname {beta-redex} [(\lambda x.y)\ z],x]\\\operatorname {apply} [x]&=x{\text{ if x does not match the above.}}\\\operatorname {eval} [\lambda x.(f\ x)]&=\operatorname {eval} [\operatorname {eta-redex} [\lambda x.(f\ x)]]\\\operatorname {eval} [L]&=L\\\operatorname {lazy} [X]&=X\\\operatorname {eager} [X]&=\operatorname {eval} [X]\end{aligned}}$
Then the evaluation strategy may be chosen as either,
${\begin{aligned}\operatorname {strategy} &=\operatorname {lazy} \\\operatorname {strategy} &=\operatorname {eager} \end{aligned}}$
The result may be different depending on the strategy used. Eager evaluation will apply all reductions possible, leaving the result in normal form, while lazy evaluation will omit some reductions in parameters, leaving the result in "weak head normal form".
Normal form
All reductions that can be applied have been applied. This is the result obtained from applying eager evaluation.
${\begin{aligned}\operatorname {normal} [(\lambda x.y)\ z]&=\operatorname {false} \\\operatorname {normal} [\lambda x.(f\ x)]&=\operatorname {false} \\\operatorname {normal} [x\ y]&=\operatorname {normal} [x]\land \operatorname {normal} [y]\end{aligned}}$
In all other cases,
$\operatorname {normal} [x]=\operatorname {true} $
Weak head normal form
Reductions to the function (the head) have been applied, but not all reductions to the parameter have been applied. This is the result obtained from applying lazy evaluation.
${\begin{aligned}\operatorname {whnf} [(\lambda x.y)\ z]&=\operatorname {false} \\\operatorname {whnf} [\lambda x.(f\ x)]&=\operatorname {false} \\\operatorname {whnf} [x\ y]&=\operatorname {whnf} [x]\end{aligned}}$
In all other cases,
$\operatorname {whnf} [x]=\operatorname {true} $
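The normal and whnf formulas can be transcribed literally over a tuple term encoding (an illustrative sketch, not from the article; the encoding is an assumption):

```python
# Terms: ('var', v) | ('lam', v, body) | ('app', fn, arg).

def FV(t):
    if t[0] == 'var':
        return {t[1]}
    if t[0] == 'lam':
        return FV(t[2]) - {t[1]}
    return FV(t[1]) | FV(t[2])

def is_eta_redex(t):
    """t has the shape λx.(f x) with x not free in f."""
    return (t[0] == 'lam' and t[2][0] == 'app'
            and t[2][2] == ('var', t[1]) and t[1] not in FV(t[2][1]))

def normal(t):
    if t[0] == 'app' and t[1][0] == 'lam':       # normal[(λx.y) z] = false
        return False
    if is_eta_redex(t):                          # normal[λx.(f x)] = false
        return False
    if t[0] == 'app':
        return normal(t[1]) and normal(t[2])     # normal[x y] = normal[x] ∧ normal[y]
    return True                                  # all other cases

def whnf(t):
    if t[0] == 'app' and t[1][0] == 'lam':
        return False
    if is_eta_redex(t):
        return False
    if t[0] == 'app':
        return whnf(t[1])                        # only the head is inspected
    return True

# f ((λx.x) y): the argument still holds a β-redex, so whnf but not normal.
term = ('app', ('var', 'f'), ('app', ('lam', 'x', ('var', 'x')), ('var', 'y')))
assert whnf(term) and not normal(term)
```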
Derivation of standard from the math definition
The standard definition of lambda calculus uses some definitions which may be considered as theorems, which can be proved based on the definition as mathematical formulas.
The canonical naming definition deals with the problem of variable identity by constructing a unique name for each variable based on the position of the lambda abstraction for the variable name in the expression.
This definition introduces the rules used in the standard definition and explains them in terms of the canonical renaming definition.
Free and bound variables
The lambda abstraction operator, λ, takes a formal parameter variable and a body expression. When evaluated the formal parameter variable is identified with the value of the actual parameter.
Variables in a lambda expression may either be "bound" or "free". Bound variables are variable names that are already attached to formal parameter variables in the expression.
The formal parameter variable is said to bind the variable name wherever it occurs free in the body. Variables (names) that have already been matched to a formal parameter variable are said to be bound. All other variables in the expression are called free.
For example, in the following expression y is a bound variable and x is free: $\lambda y.x\ x\ y$. Also note that a variable is bound by its "nearest" lambda abstraction. In the following example the single occurrence of x in the expression is bound by the second lambda: $\lambda x.y\ (\lambda x.z\ x)$
Changes to the substitution operator
In the definition of the Substitution Operator the rule,
• $(\lambda p.b)[x:=y]=\lambda p.b[x:=y]$
must be replaced with,
1. $(\lambda x.b)[x:=y]=\lambda x.b$
2. $z\neq x\ \to (\lambda z.b)[x:=y]=\lambda z.b[x:=y]$
This prevents substitution inside an abstraction that rebinds the variable being substituted. This situation would not have occurred in a canonically renamed lambda expression.
For example, the previous rules would have wrongly translated,
$(\lambda x.x\ z)[x:=y]=(\lambda x.y\ z)$
The new rules block this substitution so that it remains as,
$(\lambda x.x\ z)[x:=y]=(\lambda x.x\ z)$
Transformation
The meaning of lambda expressions is defined by how expressions can be transformed or reduced.[6]
There are three kinds of transformation:
• α-conversion: changing bound variables (alpha);
• β-reduction: applying functions to their arguments (beta), calling functions;
• η-reduction: which captures a notion of extensionality (eta).
We also speak of the resulting equivalences: two expressions are β-equivalent, if they can be β-converted into the same expression, and α/η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules.
α-conversion
Alpha-conversion, sometimes known as alpha-renaming,[7] allows bound variable names to be changed. For example, alpha-conversion of $\lambda x.x$ might give $\lambda y.y$. Terms that differ only by alpha-conversion are called α-equivalent.
In an α-conversion, a bound name may be replaced with a new name only if the new name is not free in the body, as a free occurrence would be captured by the renamed binder.
$y\not \in \operatorname {FV} (b)\to \operatorname {alpha-con} [\lambda x.b]=\lambda y.b[x:=y]$
Note that the substitution will not recurse into the body of lambda expressions with formal parameter $x$ because of the change to the substitution operator described above.
See example;
α-conversion λ-expression Canonically named Comment
$\lambda z.\lambda y.(z\ y)$ $\lambda \operatorname {P} .\lambda \operatorname {PN} .(\operatorname {P} \operatorname {PN} )$ Original expressions.
correctly rename y to k, (because k is not used in the body) $\lambda z.\lambda k.(z\ k)$ $\lambda \operatorname {P} .\lambda \operatorname {PN} .(\operatorname {P} \operatorname {PN} )$ No change to canonical renamed expression.
naively rename y to z, (wrong because z free in $\lambda y.(z\ y)$) $\lambda z.\lambda z.(z\ z)$ $\lambda \operatorname {P} .\lambda \operatorname {PN} .({\color {Red}\operatorname {PN} }\operatorname {PN} )$ $z$ is captured.
β-reduction (capture avoiding)
β-reduction captures the idea of function application (also called a function call), and implements the substitution of the actual parameter expression for the formal parameter variable. β-reduction is defined in terms of substitution.
If no variable names are free in the actual parameter and bound in the body, β-reduction may be performed on the lambda abstraction without canonical renaming.
$(\forall z:z\not \in FV(y)\lor z\not \in BV(b))\to \operatorname {beta-redex} [\lambda x.b\ y]=b[x:=y]$
Alpha renaming may be used on $b$ to rename names that are free in $y$ but bound in $b$, to meet the pre-condition for this transformation.
See example;
β-reduction λ-expression Canonically named Comment
$(\lambda x.\lambda y.(\lambda z.(\lambda x.z\ x)(\lambda y.z\ y))(x\ y))$ $(\lambda \operatorname {P} .\lambda \operatorname {PN} .(\lambda \operatorname {PNF} .(\lambda \operatorname {PNFNF} .\operatorname {PNF} \operatorname {PNFNF} )(\lambda \operatorname {PNFNS} .\operatorname {PNF} \operatorname {PNFNS} ))(\operatorname {P} \operatorname {PN} ))$ Original expressions.
Naive beta 1, $(\lambda x.\lambda y.((\lambda x.(x\ y)x)(\lambda y.(x\ y)y)))$
Canonical $(\lambda \operatorname {P} .\lambda \operatorname {PN} .((\lambda \operatorname {PNF} .({\color {Blue}\operatorname {P} }\operatorname {PN} )\operatorname {PNF} )(\lambda \operatorname {PNS} .(\operatorname {P} {\color {Blue}\operatorname {PN} })\operatorname {PNS} )))$
Natural $(\lambda \operatorname {P} .\lambda \operatorname {PN} .((\lambda \operatorname {PNF} .({\color {Red}\operatorname {PNF} }\operatorname {PN} )\operatorname {PNF} )(\lambda \operatorname {PNS} .(\operatorname {P} {\color {Red}\operatorname {PNS} })\operatorname {PNS} )))$
x (P) and y (PN) have been captured in the substitution.
Alpha rename inner, x → a, y → b
$(\lambda x.\lambda y.(\lambda z.(\lambda x.z\ a)(\lambda b.z\ b))(x\ y))$ $(\lambda \operatorname {P} .\lambda \operatorname {PN} .(\lambda \operatorname {PNF} .(\lambda \operatorname {PNFNF} .\operatorname {PNF} \operatorname {PNFNF} )(\lambda \operatorname {PNFNS} .\operatorname {PNF} \operatorname {PNFNS} ))(\operatorname {P} \operatorname {PN} ))$
Beta 2, $(\lambda x.\lambda y.((\lambda a.(x\ y)a)(\lambda b.(x\ y)b)))$
Canonical $(\lambda \operatorname {P} .\lambda \operatorname {PN} .((\lambda \operatorname {PNF} .(\operatorname {P} \operatorname {PN} )\operatorname {PNF} )(\lambda \operatorname {PNS} .(\operatorname {P} \operatorname {PN} )\operatorname {PNS} )))$
Natural $(\lambda \operatorname {P} .\lambda \operatorname {PN} .((\lambda \operatorname {PNF} .(\operatorname {P} \operatorname {PN} )\operatorname {PNF} )(\lambda \operatorname {PNS} .(\operatorname {P} \operatorname {PN} )\operatorname {PNS} )))$
x and y not captured.
${\begin{array}{r}((\lambda x.z\ x)(\lambda y.z\ y))[z:=(x\ y)]\\((\lambda a.z\ a)(\lambda b.z\ b))[z:=(x\ y)]\end{array}}$
In this example,
1. In the β-redex,
1. The free variables are, $\operatorname {FV} (x\ y)=\{x,y\}$
2. The bound variables are, $\operatorname {BV} ((\lambda x.z\ x)(\lambda y.z\ y))=\{x,y\}$
2. The naive β-redex changed the meaning of the expression because x and y from the actual parameter became captured when the expressions were substituted in the inner abstractions.
3. The alpha renaming removed the problem by changing the names of x and y in the inner abstraction so that they are distinct from the names of x and y in the actual parameter.
1. The free variables are, $\operatorname {FV} (x\ y)=\{x,y\}$
2. The bound variables are, $\operatorname {BV} ((\lambda a.z\ a)(\lambda b.z\ b))=\{a,b\}$
4. The β-redex then proceeded with the intended meaning.
η-reduction
η-reduction expresses the idea of extensionality, which in this context is that two functions are the same if and only if they give the same result for all arguments.
η-reduction may be used without change on lambda expressions that are not canonically renamed.
$x\not \in \mathrm {FV} (f)\to {\text{η-redex}}[\lambda x.(fx)]=f$
The problem with using an η-redex when $x$ appears free in $f$ is shown in this example,
Reduction Lambda expression β-reduction
$(\lambda x.(\lambda y.y\,x)\,x)\,a$ $\lambda a.a\,a$
Naive η-reduction $(\lambda y.y\,x)\,a$ $\lambda a.a\,x$
This improper use of η-reduction changes the meaning by leaving x in $\lambda y.y\,x$ unsubstituted.
References
1. Barendregt, Hendrik Pieter (1984), The Lambda Calculus: Its Syntax and Semantics, Studies in Logic and the Foundations of Mathematics, vol. 103 (Revised ed.), North Holland, Amsterdam., ISBN 978-0-444-87508-2, archived from the original on 2004-08-23 — Corrections
2. "Example for Rules of Associativity". Lambda-bound.com. Retrieved 2012-06-18.
3. Selinger, Peter (2008), Lecture Notes on the Lambda Calculus (PDF), vol. 0804, Department of Mathematics and Statistics, University of Ottawa, p. 9, arXiv:0804.3434, Bibcode:2008arXiv0804.3434S
4. "Example for Rule of Associativity". Lambda-bound.com. Retrieved 2012-06-18.
5. Barendregt, Henk; Barendsen, Erik (March 2000), Introduction to Lambda Calculus (PDF)
6. de Queiroz, Ruy J.G.B. "A Proof-Theoretic Account of Programming and the Role of Reduction Rules." Dialectica 42(4), pages 265-282, 1988.
7. Turbak, Franklyn; Gifford, David (2008), Design concepts in programming languages, MIT press, p. 251, ISBN 978-0-262-20175-9
| Wikipedia |
Pettis integral
In mathematics, the Pettis integral or Gelfand–Pettis integral, named after Israel M. Gelfand and Billy James Pettis, extends the definition of the Lebesgue integral to vector-valued functions on a measure space, by exploiting duality. The integral was introduced by Gelfand for the case when the measure space is an interval with Lebesgue measure. The integral is also called the weak integral in contrast to the Bochner integral, which is the strong integral.
Definition
Let $f:X\to V$ where $(X,\Sigma ,\mu )$ is a measure space and $V$ is a topological vector space (TVS) with a continuous dual space $V'$ that separates points (that is, if $x\in V$ is nonzero then there is some $l\in V'$ such that $l(x)\neq 0$), for example, $V$ is a normed space or (more generally) is a Hausdorff locally convex TVS. Evaluation of a functional may be written as a duality pairing:
$\langle \varphi ,x\rangle =\varphi [x].$
The map $f:X\to V$ is called weakly measurable if for all $\varphi \in V',$ the scalar-valued map $\varphi \circ f$ is a measurable map. A weakly measurable map $f:X\to V$ is said to be weakly integrable on $X$ if there exists some $e\in V$ such that for all $\varphi \in V',$ the scalar-valued map $\varphi \circ f$ is Lebesgue integrable (that is, $\varphi \circ f\in L^{1}\left(X,\Sigma ,\mu \right)$) and
$\varphi (e)=\int _{X}\varphi (f(x))\,\mathrm {d} \mu (x).$
The map $f:X\to V$ is said to be Pettis integrable if $\varphi \circ f\in L^{1}\left(X,\Sigma ,\mu \right)$ for all $\varphi \in V^{\prime }$ and also for every $A\in \Sigma $ there exists a vector $e_{A}\in V$ such that
$\langle \varphi ,e_{A}\rangle =\int _{A}\langle \varphi ,f(x)\rangle \,\mathrm {d} \mu (x)\quad {\text{ for all }}\varphi \in V'.$
In this case, $e_{A}$ is called the Pettis integral of $f$ on $A.$ Common notations for the Pettis integral $e_{A}$ include
$\int _{A}f\,\mathrm {d} \mu ,\qquad \int _{A}f(x)\,\mathrm {d} \mu (x),\quad {\text{and, in case that}}~A=X~{\text{is understood,}}\quad \mu [f].$
To understand the motivation behind the definition of "weakly integrable", consider the special case where $V$ is the underlying scalar field; that is, where $V=\mathbb {R} $ or $V=\mathbb {C} .$ In this case, every linear functional $\varphi $ on $V$ is of the form $\varphi (y)=sy$ for some scalar $s\in V$ (that is, $\varphi $ is just scalar multiplication by a constant), so the condition
$\varphi (e)=\int _{A}\varphi (f(x))\,\mathrm {d} \mu (x)\quad {\text{for all}}~\varphi \in V',$
simplifies to
$se=\int _{A}sf(x)\,\mathrm {d} \mu (x)\quad {\text{for all scalars}}~s.$
In particular, in this special case, $f$ is weakly integrable on $X$ if and only if $f$ is Lebesgue integrable.
Relation to Dunford integral
The map $f:X\to V$ is said to be Dunford integrable if $\varphi \circ f\in L^{1}\left(X,\Sigma ,\mu \right)$ for all $\varphi \in V^{\prime }$ and also for every $A\in \Sigma $ there exists a vector $d_{A}\in V'',$ called the Dunford integral of $f$ on $A,$ such that
$\langle d_{A},\varphi \rangle =\int _{A}\langle \varphi ,f(x)\rangle \,\mathrm {d} \mu (x)\quad {\text{ for all }}\varphi \in V'$
where $\langle d_{A},\varphi \rangle =d_{A}(\varphi ).$
Identify every vector $x\in V$ with the scalar-valued functional on $V'$ defined by $\varphi \in V'\mapsto \varphi (x).$ This assignment induces a map called the canonical evaluation map and through it, $V$ is identified as a vector subspace of the double dual $V''.$ The space $V$ is semi-reflexive if and only if this map is surjective. The map $f:X\to V$ is Pettis integrable if and only if $d_{A}\in V$ for every $A\in \Sigma .$
Properties
An immediate consequence of the definition is that Pettis integrals are compatible with continuous linear operators: If $\Phi :V_{1}\to V_{2}$ is linear and continuous and $f:X\to V_{1}$ is Pettis integrable, then $\Phi \circ f$ is Pettis integrable as well and:
$\int _{X}\Phi (f(x))\,d\mu (x)=\Phi \left(\int _{X}f(x)\,d\mu (x)\right).$
The standard estimate
$\left|\int _{X}f(x)\,d\mu (x)\right|\leq \int _{X}|f(x)|\,d\mu (x)$
for real- and complex-valued functions generalises to Pettis integrals in the following sense: For all continuous seminorms $p:V\to \mathbb {R} $ and all Pettis integrable $f:X\to V,$
$p\left(\int _{X}f(x)\,d\mu (x)\right)\leq {\underline {\int _{X}}}p(f(x))\,d\mu (x)$
holds. The right hand side is the lower Lebesgue integral of a $[0,\infty ]$-valued function, that is,
${\underline {\int _{X}}}g\,d\mu :=\sup \left\{\left.\int _{X}h\,d\mu \;\right|\;h:X\to [0,\infty ]{\text{ is measurable and }}0\leq h\leq g\right\}.$
Taking a lower Lebesgue integral is necessary because the integrand $p\circ f$ may not be measurable. This follows from the Hahn–Banach theorem because for every vector $v\in V$ there must be a continuous functional $\varphi \in V^{\ast }$ such that $\varphi (v)=p(v)$ and for all $w\in V,$ $|\varphi (w)|\leq p(w).$ Applying this to $v:=\int _{X}f\,d\mu $ gives the result.
Mean value theorem
An important property is that the Pettis integral with respect to a finite measure is contained in the closure of the convex hull of the values scaled by the measure of the integration domain:
$\mu (A)<\infty {\text{ implies }}\int _{A}f\,d\mu \in \mu (A)\cdot {\overline {co(f(A))}}$
This is a consequence of the Hahn-Banach theorem and generalizes the mean value theorem for integrals of real-valued functions: If $V=\mathbb {R} ,$ then closed convex sets are simply intervals and for $f:X\to [a,b],$ the following inequalities hold:
$\mu (A)a~\leq ~\int _{A}f\,d\mu ~\leq ~\mu (A)b.$
Existence
If $V=\mathbb {R} ^{n}$ is finite-dimensional then $f$ is Pettis integrable if and only if each of $f$'s coordinates is Lebesgue integrable.
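As a quick numerical check of the finite-dimensional case (a sketch, not from the article; the midpoint-rule discretization and the choice $f(t)=(\cos t,\sin t)$ on $[0,1]$ are illustrative assumptions):

```python
import numpy as np

# V = R^2, X = [0,1] with Lebesgue measure; the Pettis integral is coordinatewise.
N = 100_000
ts = np.linspace(0.0, 1.0, N, endpoint=False) + 0.5 / N    # midpoint rule
f = np.stack([np.cos(ts), np.sin(ts)], axis=1)             # shape (N, 2)
e = f.mean(axis=0)                                         # ≈ ∫₀¹ f(t) dt

# Defining property: φ(e) = ∫ φ(f(t)) dt for every functional φ(x) = a·x.
a = np.array([2.0, -3.0])
assert np.isclose(a @ e, (f @ a).mean())

# Analytic coordinatewise values: ∫cos = sin 1, ∫sin = 1 − cos 1.
assert np.allclose(e, [np.sin(1.0), 1.0 - np.cos(1.0)], atol=1e-6)
```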
If $f$ is Pettis integrable and $A\in \Sigma $ is a measurable subset of $X,$ then by definition $f_{|A}:A\to V$ and $f\cdot 1_{A}:X\to V$ are also Pettis integrable and
$\int _{A}f_{|A}\,d\mu =\int _{X}f\cdot 1_{A}\,d\mu .$
If $X$ is a topological space, $\Sigma ={\mathfrak {B}}_{X}$ its Borel-$\sigma $-algebra, $\mu $ a Borel measure that assigns finite values to compact subsets, $V$ is quasi-complete (that is, every bounded Cauchy net converges) and if $f$ is continuous with compact support, then $f$ is Pettis integrable. More generally: If $f$ is weakly measurable and there exists a compact, convex $C\subseteq V$ and a null set $N\subseteq X$ such that $f(X\setminus N)\subseteq C,$ then $f$ is Pettis-integrable.
Law of large numbers for Pettis-integrable random variables
Let $(\Omega ,{\mathcal {F}},\operatorname {P} )$ be a probability space, and let $V$ be a topological vector space with a dual space that separates points. Let $v_{n}:\Omega \to V$ be a sequence of Pettis-integrable random variables, and write $\operatorname {E} [v_{n}]$ for the Pettis integral of $v_{n}$ (over $\Omega $). Note that $\operatorname {E} [v_{n}]$ is a (non-random) vector in $V,$ and is not a scalar value.
Let
${\bar {v}}_{N}:={\frac {1}{N}}\sum _{n=1}^{N}v_{n}$
denote the sample average. By linearity, ${\bar {v}}_{N}$ is Pettis integrable, and
$\operatorname {E} [{\bar {v}}_{N}]={\frac {1}{N}}\sum _{n=1}^{N}\operatorname {E} [v_{n}]\in V.$
Suppose that the partial sums
${\frac {1}{N}}\sum _{n=1}^{N}\operatorname {E} [v_{n}]$
converge absolutely in the topology of $V,$ in the sense that all rearrangements of the sum converge to a single vector $\lambda \in V.$ The weak law of large numbers implies that $\langle \varphi ,\operatorname {E} [{\bar {v}}_{N}]-\lambda \rangle \to 0$ for every functional $\varphi \in V^{*}.$ Consequently, $\operatorname {E} [{\bar {v}}_{N}]\to \lambda $ in the weak topology on $V.$
Without further assumptions, it is possible that $\operatorname {E} [{\bar {v}}_{N}]$ does not converge to $\lambda .$ To get strong convergence, more assumptions are necessary.
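A quick numerical illustration (a sketch under the assumption $V=\mathbb {R} ^{2}$, where the Pettis integral is simply the coordinatewise expectation; the distributions are arbitrary choices):

```python
import numpy as np

# i.i.d. R^2-valued samples v_n with E[v_n] = (0.5, 2.0).
rng = np.random.default_rng(42)
N = 200_000
v = np.stack([rng.uniform(0.0, 1.0, N), rng.normal(2.0, 1.0, N)], axis=1)
vbar = v.mean(axis=0)                      # sample average v̄_N

# v̄_N approaches λ = (0.5, 2.0), hence φ(v̄_N) → φ(λ) for every functional φ.
assert np.allclose(vbar, [0.5, 2.0], atol=0.02)
phi = np.array([1.0, -1.0])                # φ(x) = x₀ − x₁
assert abs(phi @ vbar - (0.5 - 2.0)) < 0.04
```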
See also
• Bochner measurable function
• Bochner integral
• Bochner space – Mathematical concept
• Vector measure
• Weakly measurable function
References
• James K. Brooks, Representations of weak and strong integrals in Banach spaces, Proceedings of the National Academy of Sciences of the United States of America 63, 1969, 266–270. Fulltext MR0274697
• Israel M. Gel'fand, Sur un lemme de la théorie des espaces linéaires, Commun. Inst. Sci. Math. et Mecan., Univ. Kharkoff et Soc. Math. Kharkoff, IV. Ser. 13, 1936, 35–40 Zbl 0014.16202
• Michel Talagrand, Pettis Integral and Measure Theory, Memoirs of the AMS no. 307 (1984) MR0756174
• Sobolev, V. I. (2001) [1994], "Pettis integral", Encyclopedia of Mathematics, EMS Press
Integrals
Types of integrals
• Riemann integral
• Lebesgue integral
• Burkill integral
• Bochner integral
• Daniell integral
• Darboux integral
• Henstock–Kurzweil integral
• Haar integral
• Hellinger integral
• Khinchin integral
• Kolmogorov integral
• Lebesgue–Stieltjes integral
• Pettis integral
• Pfeffer integral
• Riemann–Stieltjes integral
• Regulated integral
Integration techniques
• Substitution
• Trigonometric
• Euler
• Weierstrass
• By parts
• Partial fractions
• Euler's formula
• Inverse functions
• Changing order
• Reduction formulas
• Parametric derivatives
• Differentiation under the integral sign
• Laplace transform
• Contour integration
• Laplace's method
• Numerical integration
• Simpson's rule
• Trapezoidal rule
• Risch algorithm
Improper integrals
• Gaussian integral
• Dirichlet integral
• Fermi–Dirac integral
• complete
• incomplete
• Bose–Einstein integral
• Frullani integral
• Common integrals in quantum field theory
Stochastic integrals
• Itô integral
• Russo–Vallois integral
• Stratonovich integral
• Skorokhod integral
Miscellaneous
• Basel problem
• Euler–Maclaurin formula
• Gabriel's horn
• Integration Bee
• Proof that 22/7 exceeds π
• Volumes
• Washers
• Shells
Analysis in topological vector spaces
Basic concepts
• Abstract Wiener space
• Classical Wiener space
• Bochner space
• Convex series
• Cylinder set measure
• Infinite-dimensional vector function
• Matrix calculus
• Vector calculus
Derivatives
• Differentiable vector–valued functions from Euclidean space
• Differentiation in Fréchet spaces
• Fréchet derivative
• Total
• Functional derivative
• Gateaux derivative
• Directional
• Generalizations of the derivative
• Hadamard derivative
• Holomorphic
• Quasi-derivative
Measurability
• Besov measure
• Cylinder set measure
• Canonical Gaussian
• Classical Wiener measure
• Measure like set functions
• infinite-dimensional Gaussian measure
• Projection-valued
• Vector
• Bochner / Weakly / Strongly measurable function
• Radonifying function
Integrals
• Bochner
• Direct integral
• Dunford
• Gelfand–Pettis/Weak
• Regulated
• Paley–Wiener
Results
• Cameron–Martin theorem
• Inverse function theorem
• Nash–Moser theorem
• Feldman–Hájek theorem
• No infinite-dimensional Lebesgue measure
• Sazonov's theorem
• Structure theorem for Gaussian measures
Related
• Crinkled arc
• Covariance operator
Functional calculus
• Borel functional calculus
• Continuous functional calculus
• Holomorphic functional calculus
Applications
• Banach manifold (bundle)
• Convenient vector space
• Choquet theory
• Fréchet manifold
• Hilbert manifold
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
| Wikipedia |
Weak interpretability
In mathematical logic, weak interpretability is a notion of translation of logical theories, introduced together with interpretability by Alfred Tarski in 1953.
Let T and S be formal theories. Slightly simplified, T is said to be weakly interpretable in S if, and only if, the language of T can be translated into the language of S in such a way that the translation of every theorem of T is consistent with S. Of course, there are some natural conditions on admissible translations here, such as the necessity for a translation to preserve the logical structure of formulas.
A generalization of weak interpretability, tolerance, was introduced by Giorgi Japaridze in 1992.
See also
• Interpretability logic
References
• Tarski, Alfred (1953), Undecidable theories, Studies in Logic and the Foundations of Mathematics, Amsterdam: North-Holland Publishing Company, MR 0058532. Written in collaboration with Andrzej Mostowski and Raphael M. Robinson.
• Dzhaparidze, Giorgie (1993), "A generalized notion of weak interpretability and the corresponding modal logic", Annals of Pure and Applied Logic, 61 (1–2): 113–160, doi:10.1016/0168-0072(93)90201-N, MR 1218658.
• Dzhaparidze, Giorgie (1992), "The logic of linear tolerance", Studia Logica, 51 (2): 249–277, doi:10.1007/BF00370116, MR 1185914
• Japaridze, Giorgi; de Jongh, Dick (1998), "The logic of provability", in Buss, Samuel R. (ed.), Handbook of Proof Theory, Stud. Logic Found. Math., vol. 137, Amsterdam: North-Holland, pp. 475–546, doi:10.1016/S0049-237X(98)80022-0, MR 1640331
Weak inverse
In mathematics, the term weak inverse is used with several meanings.
Theory of semigroups
In the theory of semigroups, a weak inverse of an element x in a semigroup (S, •) is an element y such that y • x • y = y. If every element has a weak inverse, the semigroup is called an E-inversive or E-dense semigroup. An E-inversive semigroup may equivalently be defined by requiring that for every element x ∈ S, there exists y ∈ S such that x • y and y • x are idempotents.[1]
An element x of S for which there is an element y of S such that x • y • x = x is called regular. A regular semigroup is a semigroup in which every element is regular. This is a stronger notion than weak inverse. Every regular semigroup is E-inversive, but not vice versa.[1]
If every element x in S has a unique inverse y in S in the sense that x • y • x = x and y • x • y = y then S is called an inverse semigroup.
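In a finite semigroup these notions can be checked by brute force. The sketch below is an illustrative example (the choice of multiplication mod 4 is ours, not from the article): this semigroup is E-inversive, since 0 is a weak inverse of every element, but it is not regular, since no y satisfies 2 • y • 2 = 2.

```python
# Brute-force check of weak inverses in the semigroup (Z/4Z, * mod 4).
N = 4
mul = lambda a, b: (a * b) % N

def weak_inverses(x):
    # y is a weak inverse of x when y * x * y = y
    return [y for y in range(N) if mul(mul(y, x), y) == y]

def is_regular(x):
    # x is regular when x * y * x = x for some y
    return any(mul(mul(x, y), x) == x for y in range(N))

# Every element admits 0 as a weak inverse, so the semigroup is E-inversive,
assert all(0 in weak_inverses(x) for x in range(N))
# but it is not regular: 2 * y * 2 = 4y = 0 (mod 4) can never equal 2.
assert not is_regular(2)
```

Replacing the modulus 4 by 6 makes every element regular, so the distinction between the two notions genuinely depends on the semigroup.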
Category theory
In category theory, a weak inverse of an object A in a monoidal category C with monoidal product ⊗ and unit object I is an object B such that both A ⊗ B and B ⊗ A are isomorphic to the unit object I of C. A monoidal category in which every morphism is invertible and every object has a weak inverse is called a 2-group.
See also
• Generalized inverse
• Von Neumann regular ring
References
1. John Fountain (2002). "An introduction to covers for semigroups". In Gracinda M. S. Gomes (ed.). Semigroups, Algorithms, Automata and Languages. World Scientific. pp. 167–168. ISBN 978-981-277-688-4. preprint
Limit cardinal
In mathematics, limit cardinals are certain cardinal numbers. A cardinal number λ is a weak limit cardinal if λ is neither a successor cardinal nor zero. This means that one cannot "reach" λ from another cardinal by repeated successor operations. These cardinals are sometimes called simply "limit cardinals" when the context is clear.
A cardinal λ is a strong limit cardinal if λ cannot be reached by repeated powerset operations. This means that λ is nonzero and, for all κ < λ, 2κ < λ. Every strong limit cardinal is also a weak limit cardinal, because κ+ ≤ 2κ for every cardinal κ, where κ+ denotes the successor cardinal of κ.
The first infinite cardinal, $\aleph _{0}$ (aleph-naught), is a strong limit cardinal, and hence also a weak limit cardinal.
Constructions
One way to construct limit cardinals is via the union operation: $\aleph _{\omega }$ is a weak limit cardinal, defined as the union of all the alephs before it; and in general $\aleph _{\lambda }$ for any limit ordinal λ is a weak limit cardinal.
The ב operation can be used to obtain strong limit cardinals. This operation is a map from ordinals to cardinals defined as
$\beth _{0}=\aleph _{0},$
$\beth _{\alpha +1}=2^{\beth _{\alpha }}$ (the smallest ordinal equinumerous with the powerset of $\beth _{\alpha }$),
If λ is a limit ordinal, $\beth _{\lambda }=\bigcup \{\beth _{\alpha }:\alpha <\lambda \}.$
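For instance, writing $\mathfrak{c}$ for the cardinality of the continuum, the recursion unwinds at the first few stages as:

```latex
\beth_0 = \aleph_0, \qquad
\beth_1 = 2^{\aleph_0} = \mathfrak{c}, \qquad
\beth_2 = 2^{2^{\aleph_0}} = 2^{\mathfrak{c}}.
```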
The cardinal
$\beth _{\omega }=\bigcup \{\beth _{0},\beth _{1},\beth _{2},\ldots \}=\bigcup _{n<\omega }\beth _{n}$
is a strong limit cardinal of cofinality ω. More generally, given any ordinal α, the cardinal
$\beth _{\alpha +\omega }=\bigcup _{n<\omega }\beth _{\alpha +n}$
is a strong limit cardinal. Thus there are arbitrarily large strong limit cardinals.
Relationship with ordinal subscripts
If the axiom of choice holds, every cardinal number has an initial ordinal. If that initial ordinal is $\omega _{\lambda }\,,$ then the cardinal number is of the form $\aleph _{\lambda }$ for the same ordinal subscript λ. The ordinal λ determines whether $\aleph _{\lambda }$ is a weak limit cardinal. Because $\aleph _{\alpha ^{+}}=(\aleph _{\alpha })^{+}\,,$ if λ is a successor ordinal then $\aleph _{\lambda }$ is not a weak limit. Conversely, if a cardinal κ is a successor cardinal, say $\kappa =(\aleph _{\alpha })^{+}\,,$ then $\kappa =\aleph _{\alpha ^{+}}\,.$ Thus, in general, $\aleph _{\lambda }$ is a weak limit cardinal if and only if λ is zero or a limit ordinal.
Although the ordinal subscript tells us whether a cardinal is a weak limit, it does not tell us whether a cardinal is a strong limit. For example, ZFC proves that $\aleph _{\omega }$ is a weak limit cardinal, but neither proves nor disproves that $\aleph _{\omega }$ is a strong limit cardinal (Hrbacek and Jech 1999:168). The generalized continuum hypothesis states that $\kappa ^{+}=2^{\kappa }\,$ for every infinite cardinal κ. Under this hypothesis, the notions of weak and strong limit cardinals coincide.
The notion of inaccessibility and large cardinals
The preceding defines a notion of "inaccessibility": we are dealing with cases where it is no longer enough to do finitely many iterations of the successor and powerset operations; hence the phrase "cannot be reached" in both of the intuitive definitions above. But the "union operation" always provides another way of "accessing" these cardinals (and indeed, such is the case of limit ordinals as well). Stronger notions of inaccessibility can be defined using cofinality. For a weak (respectively strong) limit cardinal κ the requirement is that cf(κ) = κ (i.e. κ be regular) so that κ cannot be expressed as a sum (union) of fewer than κ smaller cardinals. Such a cardinal is called a weakly (respectively strongly) inaccessible cardinal. The preceding examples both are singular cardinals of cofinality ω and hence they are not inaccessible.
$\aleph _{0}$ would be an inaccessible cardinal of both "strengths" except that the definition of inaccessible requires that they be uncountable. Standard Zermelo–Fraenkel set theory with the axiom of choice (ZFC) cannot even prove the consistency of the existence of an inaccessible cardinal of either kind above $\aleph _{0}$, due to Gödel's incompleteness theorem. More specifically, if $\kappa $ is weakly inaccessible then $L_{\kappa }\models ZFC$. These form the first in a hierarchy of large cardinals.
See also
• Cardinal number
References
• Hrbacek, Karel; Jech, Thomas (1999), Introduction to Set Theory (3 ed.), ISBN 0-8247-7915-0
• Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (third millennium ed.), Berlin, New York: Springer-Verlag, doi:10.1007/3-540-44761-X, ISBN 978-3-540-44085-7
• Kunen, Kenneth (1980), Set theory: An introduction to independence proofs, Elsevier, ISBN 978-0-444-86839-8
External links
• http://www.ii.com/math/cardinals/ Infinite ink on cardinals
Maximum principle
In the mathematical fields of partial differential equations and geometric analysis, the maximum principle is any of a collection of results and techniques of fundamental importance in the study of elliptic and parabolic differential equations.
This article describes the maximum principle in the theory of partial differential equations. For the maximum principle in optimal control theory, see Pontryagin's maximum principle. For the theorem in complex analysis, see Maximum modulus principle.
In the simplest case, consider a function of two variables u(x,y) such that
${\frac {\partial ^{2}u}{\partial x^{2}}}+{\frac {\partial ^{2}u}{\partial y^{2}}}=0.$
The weak maximum principle, in this setting, says that for any open precompact subset M of the domain of u, the maximum of u on the closure of M is achieved on the boundary of M. The strong maximum principle says that, unless u is a constant function, the maximum cannot also be achieved anywhere on M itself.
Such statements give a striking qualitative picture of solutions of the given differential equation, and this picture extends to many kinds of differential equations. In many situations, one can also use maximum principles to draw precise quantitative conclusions about solutions of differential equations, such as control over the size of their gradient. There is no single or most general maximum principle which applies to all situations at once.
In the field of convex optimization, there is an analogous statement which asserts that the maximum of a convex function on a compact convex set is attained on the boundary.[1]
Intuition
A partial formulation of the strong maximum principle
Here we consider the simplest case, although the same thinking can be extended to more general scenarios. Let M be an open subset of Euclidean space and let u be a C2 function on M such that
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}=0$
where for each i and j between 1 and n, aij is a function on M with aij = aji.
Fix some choice of x in M. According to the spectral theorem of linear algebra, all eigenvalues of the matrix [aij(x)] are real, and there is an orthonormal basis of ℝn consisting of eigenvectors. Denote the eigenvalues by λi and the corresponding eigenvectors by vi, for i from 1 to n. Then the differential equation, at the point x, can be rephrased as
$\sum _{i=1}^{n}\lambda _{i}\left.{\frac {d^{2}}{dt^{2}}}\right|_{t=0}{\big (}u(x+tv_{i}){\big )}=0.$
The essence of the maximum principle is the simple observation that if each eigenvalue is positive (which amounts to a certain formulation of "ellipticity" of the differential equation) then the above equation imposes a certain balancing of the directional second derivatives of the solution. In particular, if one of the directional second derivatives is negative, then another must be positive. At a hypothetical point where u is maximized, all directional second derivatives are automatically nonpositive, and the "balancing" represented by the above equation then requires all directional second derivatives to be identically zero.
This elementary reasoning could be argued to represent an infinitesimal formulation of the strong maximum principle, which states, under some extra assumptions (such as the continuity of a), that u must be constant if there is a point of M where u is maximized.
Note that the above reasoning is unaffected if one considers the more general partial differential equation
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}=0,$
since the added term is automatically zero at any hypothetical maximum point. The reasoning is also unaffected if one considers the more general condition
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}\geq 0,$
in which one can even note the extra phenomena of having an outright contradiction if there is a strict inequality (> rather than ≥) in this condition at the hypothetical maximum point. This phenomenon is important in the formal proof of the classical weak maximum principle.
Non-applicability of the strong maximum principle
However, the above reasoning no longer applies if one considers the condition
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}\leq 0,$
since now the "balancing" condition, as evaluated at a hypothetical maximum point of u, only says that a weighted average of manifestly nonpositive quantities is nonpositive. This is trivially true, and so one cannot draw any nontrivial conclusion from it. This is reflected by any number of concrete examples, such as the fact that
${\frac {\partial ^{2}}{\partial x^{2}}}\left(-x^{2}-y^{2}\right)+{\frac {\partial ^{2}}{\partial y^{2}}}\left(-x^{2}-y^{2}\right)\leq 0,$
and on any open region containing the origin, the function −x2−y2 certainly has a maximum.
The classical weak maximum principle for linear elliptic PDE
The essential idea
Let M denote an open subset of Euclidean space. If a smooth function $u:M\to \mathbb {R} $ is maximized at a point p, then one automatically has:
• $(du)(p)=0$
• $(\nabla ^{2}u)(p)\leq 0,$ as a matrix inequality.
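These two necessary conditions at an interior maximum can be checked numerically for a concrete function. The sketch below is only an illustration (the sample function u(x, y) = −(x² + y²), maximized at the origin, is an arbitrary choice); it approximates the derivatives by central differences:

```python
# Check the first- and second-derivative conditions at an interior
# maximum, for the sample function u(x, y) = -(x^2 + y^2), which is
# maximized at the origin.  Derivatives via central differences.

def u(x, y):
    return -(x * x + y * y)

eps = 1e-5  # finite-difference step

# first derivatives at the maximum point (0, 0): both must vanish
du_dx = (u(eps, 0) - u(-eps, 0)) / (2 * eps)
du_dy = (u(0, eps) - u(0, -eps)) / (2 * eps)

# pure second derivatives: the diagonal of the Hessian must be <= 0
d2u_dx2 = (u(eps, 0) - 2 * u(0, 0) + u(-eps, 0)) / eps ** 2
d2u_dy2 = (u(0, eps) - 2 * u(0, 0) + u(0, -eps)) / eps ** 2

assert abs(du_dx) < 1e-9 and abs(du_dy) < 1e-9   # (du)(p) = 0
assert d2u_dx2 <= 0 and d2u_dy2 <= 0             # Hessian diagonal <= 0
```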
One can view a partial differential equation as the imposition of an algebraic relation between the various derivatives of a function. So, if u is the solution of a partial differential equation, then it is possible that the above conditions on the first and second derivatives of u form a contradiction to this algebraic relation. This is the essence of the maximum principle. Clearly, the applicability of this idea depends strongly on the particular partial differential equation in question.
For instance, if u solves the differential equation
$\Delta u=|du|^{2}+2,$
then it is clearly impossible to have $\Delta u\leq 0$ and $du=0$ at any point of the domain. So, following the above observation, it is impossible for u to take on a maximum value. If, instead u solved the differential equation $\Delta u=|du|^{2}$ then one would not have such a contradiction, and the analysis given so far does not imply anything interesting. If u solved the differential equation $\Delta u=|du|^{2}-2,$ then the same analysis would show that u cannot take on a minimum value.
The possibility of such analysis is not even limited to partial differential equations. For instance, if $u:M\to \mathbb {R} $ is a function such that
$\Delta u-|du|^{4}=\int _{M}e^{u(x)}\,dx,$
which is a sort of "non-local" differential equation, then the automatic strict positivity of the right-hand side shows, by the same analysis as above, that u cannot attain a maximum value.
There are many methods to extend the applicability of this kind of analysis in various ways. For instance, if u is a harmonic function, then the above sort of contradiction does not directly occur, since the existence of a point p where $\Delta u(p)\leq 0$ is not in contradiction to the requirement $\Delta u=0$ everywhere. However, one could consider, for an arbitrary real number s, the function us defined by
$u_{s}(x)=u(x)+se^{x_{1}}.$
It is straightforward to see that
$\Delta u_{s}=se^{x_{1}}.$
By the above analysis, if $s>0$ then us cannot attain a maximum value. One might wish to consider the limit as s tends to 0 in order to conclude that u also cannot attain a maximum value. However, it is possible for the pointwise limit of a sequence of functions without maxima to have a maximum. Nonetheless, if M has a boundary such that M together with its boundary is compact, then supposing that u can be continuously extended to the boundary, it follows immediately that both u and us attain a maximum value on $M\cup \partial M.$ Since we have shown that us, as a function on M, does not have a maximum, it follows that the maximum point of us, for any s > 0, is on $\partial M.$ By the sequential compactness of $\partial M,$ it follows that the maximum of u is attained on $\partial M.$ This is the weak maximum principle for harmonic functions. This does not, by itself, rule out the possibility that the maximum of u is also attained somewhere on M. That is the content of the "strong maximum principle," which requires further analysis.
The use of the specific function $e^{x_{1}}$ above was very inessential. All that mattered was to have a function which extends continuously to the boundary and whose Laplacian is strictly positive. So we could have used, for instance,
$u_{s}(x)=u(x)+s|x|^{2}$
with the same effect.
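The weak maximum principle for harmonic functions can also be observed numerically. The sketch below is only an illustration (the grid size, sweep count, and choice of boundary data are arbitrary): it solves the 5-point discrete Laplace equation on the unit square by Jacobi iteration, with boundary values taken from the harmonic function x² − y², and checks that the maximum over the grid occurs on the boundary.

```python
# Discrete illustration of the weak maximum principle: solve the
# 5-point discrete Laplace equation on the unit square by Jacobi
# iteration, with boundary data from the harmonic function x^2 - y^2,
# and check that the maximum over the grid occurs on the boundary.

n = 20                              # grid points per side
h = 1.0 / (n - 1)
u = [[0.0] * n for _ in range(n)]   # u[j][i] ~ u(x_i, y_j)

# impose u = x^2 - y^2 on the four edges
for i in range(n):
    t = i * h
    u[0][i] = t * t              # edge y = 0
    u[n - 1][i] = t * t - 1.0    # edge y = 1
    u[i][0] = -t * t             # edge x = 0
    u[i][n - 1] = 1.0 - t * t    # edge x = 1

for _ in range(1000):            # Jacobi sweeps
    v = [row[:] for row in u]
    for j in range(1, n - 1):
        for i in range(1, n - 1):
            u[j][i] = 0.25 * (v[j - 1][i] + v[j + 1][i]
                              + v[j][i - 1] + v[j][i + 1])

interior_max = max(u[j][i] for j in range(1, n - 1) for i in range(1, n - 1))
boundary_max = max(max(u[0]), max(u[-1]),
                   max(u[j][0] for j in range(n)),
                   max(u[j][-1] for j in range(n)))
assert interior_max < boundary_max   # maximum attained on the boundary
```

The boundary maximum here is 1, attained at the corner (1, 0); the interior values converge to x² − y², whose largest grid value, 1 − 2h, stays strictly below it.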
The classical strong maximum principle for linear elliptic PDE
Summary of proof
Let M be an open subset of Euclidean space. Let $u:M\to \mathbb {R} $ be a twice-differentiable function which attains its maximum value C. Suppose that
$a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+b_{i}{\frac {\partial u}{\partial x^{i}}}\geq 0.$
Suppose that one can find (or prove the existence of):
• a compact subset Ω of M, with nonempty interior, such that u(x) < C for all x in the interior of Ω, and such that there exists x0 on the boundary of Ω with u(x0) = C.
• a continuous function $h:\Omega \to \mathbb {R} $ which is twice-differentiable on the interior of Ω and with
$a_{ij}{\frac {\partial ^{2}h}{\partial x^{i}\,\partial x^{j}}}+b_{i}{\frac {\partial h}{\partial x^{i}}}\geq 0,$
and such that one has u + h ≤ C on the boundary of Ω with h(x0) = 0
Then, writing L for the differential operator above, one has L(u + h − C) ≥ 0 on Ω with u + h − C ≤ 0 on the boundary of Ω; according to the weak maximum principle, one has u + h − C ≤ 0 on Ω. This can be reorganized to say
$-{\frac {u(x)-u(x_{0})}{|x-x_{0}|}}\geq {\frac {h(x)-h(x_{0})}{|x-x_{0}|}}$
for all x in Ω. If one can make the choice of h so that the right-hand side has a manifestly positive nature, then this will provide a contradiction to the fact that x0 is a maximum point of u on M, so that its gradient must vanish.
Proof
The above "program" can be carried out. Choose Ω to be a spherical annulus; one selects its center xc to be a point closer to the closed set u−1(C) than to the closed set ∂M, and the outer radius R is selected to be the distance from this center to u−1(C); let x0 be a point on this latter set which realizes the distance. The inner radius ρ is arbitrary. Define
$h(x)=\varepsilon {\Big (}e^{-\alpha |x-x_{\text{c}}|^{2}}-e^{-\alpha R^{2}}{\Big )}.$
Now the boundary of Ω consists of two spheres; on the outer sphere, one has h = 0; due to the selection of R, one has u ≤ C on this sphere, and so u + h − C ≤ 0 holds on this part of the boundary, together with the requirement h(x0) = 0. On the inner sphere, one has u < C. Due to the continuity of u and the compactness of the inner sphere, one can select δ > 0 such that u + δ < C. Since h is constant on this inner sphere, one can select ε > 0 such that u + h ≤ C on the inner sphere, and hence on the entire boundary of Ω.
Direct calculation shows
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}h}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial h}{\partial x^{i}}}=\varepsilon \alpha e^{-\alpha |x-x_{\text{c}}|^{2}}\left(4\alpha \sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}(x){\big (}x^{i}-x_{\text{c}}^{i}{\big )}{\big (}x^{j}-x_{\text{c}}^{j}{\big )}-2\sum _{i=1}^{n}a_{ii}-2\sum _{i=1}^{n}b_{i}{\big (}x^{i}-x_{\text{c}}^{i}{\big )}\right).$
There are various conditions under which the right-hand side can be guaranteed to be nonnegative; see the statement of the theorem below.
Lastly, note that the directional derivative of h at x0 along the inward-pointing radial line of the annulus is strictly positive. As described in the above summary, this will ensure that a directional derivative of u at x0 is nonzero, in contradiction to x0 being a maximum point of u on the open set M.
Statement of the theorem
The following is the statement of the theorem in the books of Morrey and Smoller, following the original statement of Hopf (1927):
Let M be an open subset of Euclidean space ℝn. For each i and j between 1 and n, let aij and bi be continuous functions on M with aij = aji. Suppose that for all x in M, the symmetric matrix [aij] is positive-definite. If u is a nonconstant C2 function on M such that
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}\geq 0$
on M, then u does not attain a maximum value on M.
The point of the continuity assumption is that continuous functions are bounded on compact sets, the relevant compact set here being the spherical annulus appearing in the proof. Furthermore, by the same principle, there is a number λ such that for all x in the annulus, the matrix [aij(x)] has all eigenvalues greater than or equal to λ. One then takes α, as appearing in the proof, to be large relative to these bounds. Evans's book has a slightly weaker formulation, in which there is assumed to be a positive number λ which is a lower bound of the eigenvalues of [aij] for all x in M.
These continuity assumptions are clearly not the most general possible in order for the proof to work. For instance, the following is Gilbarg and Trudinger's statement of the theorem, following the same proof:
Let M be an open subset of Euclidean space ℝn. For each i and j between 1 and n, let aij and bi be functions on M with aij = aji. Suppose that for all x in M, the symmetric matrix [aij] is positive-definite, and let λ(x) denote its smallest eigenvalue. Suppose that $\textstyle {\frac {a_{ii}}{\lambda }}$ and $\textstyle {\frac {|b_{i}|}{\lambda }}$ are bounded functions on M for each i between 1 and n. If u is a nonconstant C2 function on M such that
$\sum _{i=1}^{n}\sum _{j=1}^{n}a_{ij}{\frac {\partial ^{2}u}{\partial x^{i}\,\partial x^{j}}}+\sum _{i=1}^{n}b_{i}{\frac {\partial u}{\partial x^{i}}}\geq 0$
on M, then u does not attain a maximum value on M.
One cannot naively extend these statements to the general second-order linear elliptic equation, as already seen in the one-dimensional case. For instance, the ordinary differential equation y″ + 2y = 0 has sinusoidal solutions, which certainly have interior maxima. This extends to the higher-dimensional case, where one often has solutions to "eigenfunction" equations Δu + cu = 0 which have interior maxima. The sign of c is relevant, as also seen in the one-dimensional case; for instance the solutions to y″ − 2y = 0 are exponentials, and the character of the maxima of such functions is quite different from that of sinusoidal functions.
See also
• Maximum modulus principle
• Hopf maximum principle
Notes
1. Chapter 32 of Rockafellar (1970).
References
Research articles
• Calabi, E. An extension of E. Hopf's maximum principle with an application to Riemannian geometry. Duke Math. J. 25 (1958), 45–56.
• Cheng, S.Y.; Yau, S.T. Differential equations on Riemannian manifolds and their geometric applications. Comm. Pure Appl. Math. 28 (1975), no. 3, 333–354.
• Gidas, B.; Ni, Wei Ming; Nirenberg, L. Symmetry and related properties via the maximum principle. Comm. Math. Phys. 68 (1979), no. 3, 209–243.
• Gidas, B.; Ni, Wei Ming; Nirenberg, L. Symmetry of positive solutions of nonlinear elliptic equations in Rn. Mathematical analysis and applications, Part A, pp. 369–402, Adv. in Math. Suppl. Stud., 7a, Academic Press, New York-London, 1981.
• Hamilton, Richard S. Four-manifolds with positive curvature operator. J. Differential Geom. 24 (1986), no. 2, 153–179.
• E. Hopf. Elementare Bemerkungen Über die Lösungen partieller Differentialgleichungen zweiter Ordnung vom elliptischen Typus. Sitber. Preuss. Akad. Wiss. Berlin 19 (1927), 147-152.
• Hopf, Eberhard. A remark on linear elliptic differential equations of second order. Proc. Amer. Math. Soc. 3 (1952), 791–793.
• Nirenberg, Louis. A strong maximum principle for parabolic equations. Comm. Pure Appl. Math. 6 (1953), 167–177.
• Omori, Hideki. Isometric immersions of Riemannian manifolds. J. Math. Soc. Jpn. 19 (1967), 205–214.
• Yau, Shing Tung. Harmonic functions on complete Riemannian manifolds. Comm. Pure Appl. Math. 28 (1975), 201–228.
• Kreyberg, H. J. A. On the maximum principle of optimal control in economic processes, 1969 (Trondheim, NTH, Sosialøkonomisk institutt https://www.worldcat.org/title/on-the-maximum-principle-of-optimal-control-in-economic-processes/oclc/23714026)
Textbooks
• Caffarelli, Luis A.; Xavier Cabre (1995). Fully Nonlinear Elliptic Equations. Providence, Rhode Island: American Mathematical Society. pp. 31–41. ISBN 0-8218-0437-5.
• Evans, Lawrence C. Partial differential equations. Second edition. Graduate Studies in Mathematics, 19. American Mathematical Society, Providence, RI, 2010. xxii+749 pp. ISBN 978-0-8218-4974-3
• Friedman, Avner. Partial differential equations of parabolic type. Prentice-Hall, Inc., Englewood Cliffs, N.J. 1964 xiv+347 pp.
• Gilbarg, David; Trudinger, Neil S. Elliptic partial differential equations of second order. Reprint of the 1998 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2001. xiv+517 pp. ISBN 3-540-41160-7
• Ladyženskaja, O. A.; Solonnikov, V. A.; Uralʹceva, N. N. Linear and quasilinear equations of parabolic type. Translated from the Russian by S. Smith. Translations of Mathematical Monographs, Vol. 23 American Mathematical Society, Providence, R.I. 1968 xi+648 pp.
• Ladyzhenskaya, Olga A.; Ural'tseva, Nina N. Linear and quasilinear elliptic equations. Translated from the Russian by Scripta Technica, Inc. Translation editor: Leon Ehrenpreis. Academic Press, New York-London 1968 xviii+495 pp.
• Lieberman, Gary M. Second order parabolic differential equations. World Scientific Publishing Co., Inc., River Edge, NJ, 1996. xii+439 pp. ISBN 981-02-2883-X
• Morrey, Charles B., Jr. Multiple integrals in the calculus of variations. Reprint of the 1966 edition. Classics in Mathematics. Springer-Verlag, Berlin, 2008. x+506 pp. ISBN 978-3-540-69915-6
• Protter, Murray H.; Weinberger, Hans F. Maximum principles in differential equations. Corrected reprint of the 1967 original. Springer-Verlag, New York, 1984. x+261 pp. ISBN 0-387-96068-6
• Rockafellar, R. T. (1970). Convex analysis. Princeton: Princeton University Press.
• Smoller, Joel. Shock waves and reaction-diffusion equations. Second edition. Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], 258. Springer-Verlag, New York, 1994. xxiv+632 pp. ISBN 0-387-94259-9
Weak n-category
In category theory, a weak n-category is a generalization of the notion of strict n-category where composition and identities are not strictly associative and unital, but only associative and unital up to coherent equivalence. This generalisation only becomes noticeable at dimensions two and above where weak 2-, 3- and 4-categories are typically referred to as bicategories, tricategories, and tetracategories. The subject of weak n-categories is an area of ongoing research.
History
There is currently much work to determine what the coherence laws for weak n-categories should be. Weak n-categories have become the main object of study in higher category theory. There are basically two classes of theories: those in which the higher cells and higher compositions are realized algebraically (most remarkably Michael Batanin's theory of weak higher categories) and those in which more topological models are used (e.g. a higher category as a simplicial set satisfying some universality properties).
In a terminology due to John Baez and James Dolan, an (n, k)-category is a weak n-category such that all h-cells for h > k are invertible. Some of the formalisms for (n, k)-categories are much simpler than those for general n-categories. In particular, several technically accessible formalisms of (infinity, 1)-categories are now known. Currently the most popular such formalism centers on the notion of quasi-category; other approaches include a properly understood theory of simplicially enriched categories and the approach via Segal categories; a class of examples of stable (infinity, 1)-categories can be modeled (in the case of characteristic zero) also via pretriangulated A-infinity categories of Maxim Kontsevich. Quillen model categories are viewed as a presentation of an (infinity, 1)-category; however not all (infinity, 1)-categories can be presented via model categories.
See also
• Bicategory
• Tricategory
• Tetracategory
• Infinity category
• Opetope
• Stabilization hypothesis
External links
• n-Categories – Sketch of a Definition by John Baez
• Lectures on n-Categories and Cohomology by John Baez
• Tom Leinster, Higher operads, higher categories, math.CT/0305049
• Simpson, Carlos (2012). Homotopy theory of higher categories. New Mathematical Monographs. Vol. 19. Cambridge: Cambridge University Press. arXiv:1001.4071. Bibcode:2010arXiv1001.4071S. ISBN 9781139502191. MR 2883823.
• Jacob Lurie, Higher topos theory, math.CT/0608040, published version: pdf
Weak operator topology
In functional analysis, the weak operator topology, often abbreviated WOT, is the weakest topology on the set of bounded operators on a Hilbert space $H$, such that the functional sending an operator $T$ to the complex number $\langle Tx,y\rangle $ is continuous for any vectors $x$ and $y$ in the Hilbert space.
Explicitly, for an operator $T$ there is a base of neighborhoods of the following type: choose a finite number of vectors $x_{i}$, continuous functionals $y_{i}$, and positive real constants $\varepsilon _{i}$ indexed by the same finite set $I$. An operator $S$ lies in the neighborhood if and only if $|y_{i}(T(x_{i})-S(x_{i}))|<\varepsilon _{i}$ for all $i\in I$.
Equivalently, a net $\{T_{i}\}\subseteq B(H)$ of bounded operators converges to $T\in B(H)$ in WOT if for all $y\in H^{*}$ and $x\in H$, the net $y(T_{i}x)$ converges to $y(Tx)$.
Relationship with other topologies on B(H)
The WOT is the weakest among all common topologies on $B(H)$, the bounded operators on a Hilbert space $H$.
Strong operator topology
The strong operator topology, or SOT, on $B(H)$ is the topology of pointwise convergence. Because the inner product is a continuous function, the SOT is stronger than WOT. The following example shows that this inclusion is strict. Let $H=\ell ^{2}(\mathbb {N} )$, let $T$ be the unilateral shift, and consider the sequence $\{T^{n}\}$ of its powers. An application of Cauchy-Schwarz shows that $T^{n}\to 0$ in WOT. But clearly $T^{n}$ does not converge to $0$ in SOT, since $\|T^{n}x\|=\|x\|$ for every $x$.
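The shift example can be made concrete for finitely supported vectors, where the action of $T^{n}$ is exact: it just pushes the support n places to the right. The sketch below (the particular vectors are arbitrary illustrative choices) shows the pairings ⟨Tⁿx, y⟩ dying out while ‖Tⁿx‖ stays constant.

```python
# The unilateral shift T on l^2(N): (Tx)_{k+1} = x_k.  For finitely
# supported x, T^n x is x with n leading zeros prepended.
import math

def shift_pow(x, n):
    """Coordinates of T^n x, as a list padded with n leading zeros."""
    return [0.0] * n + list(x)

def pairing(u, v):
    """<u, v> for real sequences, treating missing entries as 0."""
    return sum(a * b for a, b in zip(u, v))

def norm(u):
    return math.sqrt(sum(a * a for a in u))

x = [1.0, -2.0, 0.5]
y = [0.3, 0.3, 0.3, 0.3]          # a finitely supported functional

# <T^n x, y> vanishes once the support of T^n x clears that of y ...
assert pairing(shift_pow(x, 6), y) == 0.0
# ... so <T^n x, y> -> 0 (WOT convergence to 0), even though
# ||T^n x|| = ||x|| for every n, ruling out SOT convergence to 0.
assert all(norm(shift_pow(x, n)) == norm(x) for n in range(10))
```

For a general y in ℓ², the pairing is bounded by ‖x‖ times the tail of ‖y‖ beyond index n, which is the Cauchy-Schwarz estimate used in the text.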
The linear functionals on the set of bounded operators on a Hilbert space that are continuous in the strong operator topology are precisely those that are continuous in the WOT (actually, the WOT is the weakest operator topology that leaves continuous all strongly continuous linear functionals on the set $B(H)$ of bounded operators on the Hilbert space H). Because of this fact, the closure of a convex set of operators in the WOT is the same as the closure of that set in the SOT.
It follows from the polarization identity that a net $\{T_{\alpha }\}$ converges to $0$ in SOT if and only if $T_{\alpha }^{*}T_{\alpha }\to 0$ in WOT.
Weak-star operator topology
The predual of B(H) is the trace class operators C1(H), and it generates the w*-topology on B(H), called the weak-star operator topology or σ-weak topology. The weak-operator and σ-weak topologies agree on norm-bounded sets in B(H).
A net {Tα} ⊂ B(H) converges to T in WOT if and only if Tr(TαF) converges to Tr(TF) for all finite-rank operators F. Since every finite-rank operator is trace-class, this implies that WOT is weaker than the σ-weak topology. To see why the claim is true, recall that every finite-rank operator F is a finite sum
$F=\sum _{i=1}^{n}\lambda _{i}u_{i}v_{i}^{*}.$
So {Tα} converges to T in WOT means
${\text{Tr}}\left(T_{\alpha }F\right)=\sum _{i=1}^{n}\lambda _{i}v_{i}^{*}\left(T_{\alpha }u_{i}\right)\longrightarrow \sum _{i=1}^{n}\lambda _{i}v_{i}^{*}\left(Tu_{i}\right)={\text{Tr}}(TF).$
Extending slightly, one can say that the weak-operator and σ-weak topologies agree on norm-bounded sets in B(H): Every trace-class operator is of the form
$S=\sum _{i}\lambda _{i}u_{i}v_{i}^{*},$
where the series $\sum \nolimits _{i}\lambda _{i}$ converges. Suppose $\sup \nolimits _{\alpha }\|T_{\alpha }\|=k<\infty ,$ and $T_{\alpha }\to T$ in WOT. For every trace-class S,
${\text{Tr}}\left(T_{\alpha }S\right)=\sum _{i}\lambda _{i}v_{i}^{*}\left(T_{\alpha }u_{i}\right)\longrightarrow \sum _{i}\lambda _{i}v_{i}^{*}\left(Tu_{i}\right)={\text{Tr}}(TS),$
by invoking, for instance, the dominated convergence theorem.
Therefore the closed unit ball of B(H), and hence every WOT-closed norm-bounded set, is compact in WOT, by the Banach–Alaoglu theorem.
Other properties
The adjoint operation T → T*, as an immediate consequence of its definition, is continuous in WOT.
Multiplication is not jointly continuous in WOT: again let $T$ be the unilateral shift. By the Cauchy–Schwarz inequality, both Tn and T*n converge to 0 in WOT. But T*nTn is the identity operator for all $n$. (Because WOT coincides with the σ-weak topology on bounded sets, multiplication is not jointly continuous in the σ-weak topology either.)
However, a weaker claim can be made: multiplication is separately continuous in WOT. If a net Ti → T in WOT, then STi → ST and TiS → TS in WOT.
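These claims can be probed numerically with a truncated unilateral shift. The truncation size below is an arbitrary illustration; as long as all indices stay inside the truncation, the finite computation reproduces the infinite-dimensional behaviour exactly: ⟨Tⁿx, y⟩ vanishes for fixed x and y once n is large, yet ‖Tⁿx‖ stays constant, and (T*)ⁿTⁿ acts as the identity.

```python
import numpy as np

N = 2000  # truncation of l^2(N) to its first N coordinates (illustrative)

def shift(x, n):
    """Apply T^n, where T is the unilateral shift e_k -> e_{k+1}."""
    out = np.zeros_like(x)
    out[n:] = x[:N - n]
    return out

def coshift(x, n):
    """Apply (T*)^n, the backward shift."""
    out = np.zeros_like(x)
    out[:N - n] = x[n:]
    return out

x = np.zeros(N); x[0] = 1.0   # e_0
y = np.zeros(N); y[3] = 1.0   # e_3

n = 10
Tnx = shift(x, n)
wot = np.vdot(y, Tnx)          # <T^n x, y>: 0 once n passes y's support
norm = np.linalg.norm(Tnx)     # but ||T^n x|| stays 1 (no SOT convergence)
back = coshift(Tnx, n)         # (T*)^n T^n x = x

assert wot == 0.0
assert np.isclose(norm, 1.0)
assert np.allclose(back, x)
```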
SOT and WOT on B(X,Y) when X and Y are normed spaces
We can extend the definitions of SOT and WOT to the more general setting where X and Y are normed spaces and $B(X,Y)$ is the space of bounded linear operators of the form $T:X\to Y$. In this case, each pair $x\in X$ and $y^{*}\in Y^{*}$ defines a seminorm $\|\cdot \|_{x,y^{*}}$ on $B(X,Y)$ via the rule $\|T\|_{x,y^{*}}=|y^{*}(Tx)|$. The resulting family of seminorms generates the weak operator topology on $B(X,Y)$. Equivalently, the WOT on $B(X,Y)$ is formed by taking for basic open neighborhoods those sets of the form
$N(T,F,\Lambda ,\epsilon ):=\left\{S\in B(X,Y):\left|y^{*}((S-T)x)\right|<\epsilon ,x\in F,y^{*}\in \Lambda \right\},$
where $T\in B(X,Y),F\subseteq X$ is a finite set, $\Lambda \subseteq Y^{*}$ is also a finite set, and $\epsilon >0$. The space $B(X,Y)$ is a locally convex topological vector space when endowed with the WOT.
The strong operator topology on $B(X,Y)$ is generated by the family of seminorms $\|\cdot \|_{x},x\in X,$ via the rules $\|T\|_{x}=\|Tx\|$. Thus, a topological base for the SOT is given by open neighborhoods of the form
$N(T,F,\epsilon ):=\{S\in B(X,Y):\|(S-T)x\|<\epsilon ,x\in F\},$
where as before $T\in B(X,Y),F\subseteq X$ is a finite set, and $\epsilon >0.$
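The basic neighborhoods above translate directly into code when X and Y are finite-dimensional, with each functional y* represented (a simplifying assumption for illustration) by a vector y via y*(v) = ⟨y, v⟩. The sketch also exhibits an operator lying in a WOT-basic neighborhood of T while escaping the corresponding SOT-basic neighborhood; the helper names are invented for the example.

```python
import numpy as np

def in_wot_nbhd(S, T, F, Lam, eps):
    """Is S in N(T, F, Lam, eps)?  F: finite list of vectors in X;
    Lam: finite list of vectors standing in for functionals on Y."""
    return all(abs(np.vdot(y, (S - T) @ x)) < eps
               for x in F for y in Lam)

def in_sot_nbhd(S, T, F, eps):
    """Is S in N(T, F, eps)?"""
    return all(np.linalg.norm((S - T) @ x) < eps for x in F)

T = np.eye(3)
F = [np.array([1.0, 0.0, 0.0])]
e0, e2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])

# A perturbation that moves e0 a long way, but only in the e2 direction:
S = T.copy(); S[2, 0] = 1.0

assert not in_sot_nbhd(S, T, F, eps=1e-2)        # ||(S - T)e0|| = 1
assert in_wot_nbhd(S, T, F, [e0], eps=1e-2)      # |<e0, (S - T)e0>| = 0
assert not in_wot_nbhd(S, T, F, [e2], eps=1e-2)  # |<e2, (S - T)e0>| = 1
```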
Relationships between different topologies on B(X,Y)
The different terminology for the various topologies on $B(X,Y)$ can sometimes be confusing. For instance, "strong convergence" for vectors in a normed space sometimes refers to norm-convergence, which is very often distinct from (and stronger than) SOT-convergence when the normed space in question is $B(X,Y)$. The weak topology on a normed space $X$ is the coarsest topology that makes the linear functionals in $X^{*}$ continuous; when we take $B(X,Y)$ in place of $X$, the weak topology can be very different from the weak operator topology. And while the WOT is formally weaker than the SOT, the SOT is weaker than the operator norm topology.
In general, the following inclusions hold:
$\{{\text{WOT-open sets in }}B(X,Y)\}\subseteq \{{\text{SOT-open sets in }}B(X,Y)\}\subseteq \{{\text{operator-norm-open sets in }}B(X,Y)\},$
and these inclusions may or may not be strict depending on the choices of $X$ and $Y$.
The WOT on $B(X,Y)$ is a formally weaker topology than the SOT, but they nevertheless share some important properties. For example,
$(B(X,Y),{\text{SOT}})^{*}=(B(X,Y),{\text{WOT}})^{*}.$
Consequently, if $S\subseteq B(X,Y)$ is convex then
${\overline {S}}^{\text{SOT}}={\overline {S}}^{\text{WOT}},$
in other words, SOT-closure and WOT-closure coincide for convex sets.
See also
• Topologies on the set of operators on a Hilbert space
• Weak topology – Mathematical term
• Weak-star operator topology
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Banach space topics
Types of Banach spaces
• Asplund
• Banach
• list
• Banach lattice
• Grothendieck
• Hilbert
• Inner product space
• Polarization identity
• (Polynomially) Reflexive
• Riesz
• L-semi-inner product
• (B
• Strictly
• Uniformly) convex
• Uniformly smooth
• (Injective
• Projective) Tensor product (of Hilbert spaces)
Banach spaces are:
• Barrelled
• Complete
• F-space
• Fréchet
• tame
• Locally convex
• Seminorms/Minkowski functionals
• Mackey
• Metrizable
• Normed
• norm
• Quasinormed
• Stereotype
Function space Topologies
• Banach–Mazur compactum
• Dual
• Dual space
• Dual norm
• Operator
• Ultraweak
• Weak
• polar
• operator
• Strong
• polar
• operator
• Ultrastrong
• Uniform convergence
Linear operators
• Adjoint
• Bilinear
• form
• operator
• sesquilinear
• (Un)Bounded
• Closed
• Compact
• on Hilbert spaces
• (Dis)Continuous
• Densely defined
• Fredholm
• kernel
• operator
• Hilbert–Schmidt
• Functionals
• positive
• Pseudo-monotone
• Normal
• Nuclear
• Self-adjoint
• Strictly singular
• Trace class
• Transpose
• Unitary
Operator theory
• Banach algebras
• C*-algebras
• Operator space
• Spectrum
• C*-algebra
• radius
• Spectral theory
• of ODEs
• Spectral theorem
• Polar decomposition
• Singular value decomposition
Theorems
• Anderson–Kadec
• Banach–Alaoglu
• Banach–Mazur
• Banach–Saks
• Banach–Schauder (open mapping)
• Banach–Steinhaus (Uniform boundedness)
• Bessel's inequality
• Cauchy–Schwarz inequality
• Closed graph
• Closed range
• Eberlein–Šmulian
• Freudenthal spectral
• Gelfand–Mazur
• Gelfand–Naimark
• Goldstine
• Hahn–Banach
• hyperplane separation
• Kakutani fixed-point
• Krein–Milman
• Lomonosov's invariant subspace
• Mackey–Arens
• Mazur's lemma
• M. Riesz extension
• Parseval's identity
• Riesz's lemma
• Riesz representation
• Robinson-Ursescu
• Schauder fixed-point
Analysis
• Abstract Wiener space
• Banach manifold
• bundle
• Bochner space
• Convex series
• Differentiation in Fréchet spaces
• Derivatives
• Fréchet
• Gateaux
• functional
• holomorphic
• quasi
• Integrals
• Bochner
• Dunford
• Gelfand–Pettis
• regulated
• Paley–Wiener
• weak
• Functional calculus
• Borel
• continuous
• holomorphic
• Measures
• Lebesgue
• Projection-valued
• Vector
• Weakly / Strongly measurable function
Types of sets
• Absolutely convex
• Absorbing
• Affine
• Balanced/Circled
• Bounded
• Convex
• Convex cone (subset)
• Convex series related ((cs, lcs)-closed, (cs, bcs)-complete, (lower) ideally convex, (Hx), and (Hwx))
• Linear cone (subset)
• Radial
• Radially convex/Star-shaped
• Symmetric
• Zonotope
Subsets / set operations
• Affine hull
• (Relative) Algebraic interior (core)
• Bounding points
• Convex hull
• Extreme point
• Interior
• Linear span
• Minkowski addition
• Polar
• (Quasi) Relative interior
Examples
• Absolute continuity AC
• $ba(\Sigma )$
• c space
• Banach coordinate BK
• Besov $B_{p,q}^{s}(\mathbb {R} )$
• Birnbaum–Orlicz
• Bounded variation BV
• Bs space
• Continuous C(K) with K compact Hausdorff
• Hardy Hp
• Hilbert H
• Morrey–Campanato $L^{\lambda ,p}(\Omega )$
• ℓp
• $\ell ^{\infty }$
• Lp
• $L^{\infty }$
• weighted
• Schwartz $S\left(\mathbb {R} ^{n}\right)$
• Segal–Bargmann F
• Sequence space
• Sobolev Wk,p
• Sobolev inequality
• Triebel–Lizorkin
• Wiener amalgam $W(X,L^{p})$
Applications
• Differential operator
• Finite element method
• Mathematical formulation of quantum mechanics
• Ordinary Differential Equations (ODEs)
• Validated numerics
Hilbert spaces
Basic concepts
• Adjoint
• Inner product and L-semi-inner product
• Hilbert space and Prehilbert space
• Orthogonal complement
• Orthonormal basis
Main results
• Bessel's inequality
• Cauchy–Schwarz inequality
• Riesz representation
Other results
• Hilbert projection theorem
• Parseval's identity
• Polarization identity (Parallelogram law)
Maps
• Compact operator on Hilbert space
• Densely defined
• Hermitian form
• Hilbert–Schmidt
• Normal
• Self-adjoint
• Sesquilinear form
• Trace class
• Unitary
Examples
• Cn(K) with K compact & n<∞
• Segal–Bargmann F
Duality and spaces of linear maps
Basic concepts
• Dual space
• Dual system
• Dual topology
• Duality
• Operator topologies
• Polar set
• Polar topology
• Topologies on spaces of linear maps
Topologies
• Norm topology
• Dual norm
• Ultraweak/Weak-*
• Weak
• polar
• operator
• in Hilbert spaces
• Mackey
• Strong dual
• polar topology
• operator
• Ultrastrong
Main results
• Banach–Alaoglu
• Mackey–Arens
Maps
• Transpose of a linear map
Subsets
• Saturated family
• Total set
Other concepts
• Biorthogonal system
| Wikipedia |
Weak order unit
In mathematics, specifically in order theory and functional analysis, an element $x$ of a vector lattice $X$ is called a weak order unit in $X$ if $x\geq 0$ and also for all $y\in X,$ $\inf\{x,|y|\}=0{\text{ implies }}y=0.$[1]
Examples
• If $X$ is a separable Fréchet topological vector lattice then the set of weak order units is dense in the positive cone of $X.$[2]
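In the finite-dimensional vector lattice R^n with the componentwise order, inf{x, |y|} is the entrywise minimum, and the weak order units are exactly the strictly positive vectors: min(x_i, |y_i|) = 0 for all i forces y_i = 0 only when every x_i > 0. A small sketch of this criterion (illustration only; `is_weak_order_unit` is an invented helper name):

```python
import numpy as np

def is_weak_order_unit(x):
    # finite-dimensional criterion in R^n with componentwise order
    return bool(np.all(x > 0))

x_good = np.array([1.0, 0.5, 2.0])
x_bad  = np.array([1.0, 0.0, 2.0])   # vanishes in coordinate 1

assert is_weak_order_unit(x_good)
assert not is_weak_order_unit(x_bad)

# Witness for x_bad: y = e_1 is nonzero yet inf{x_bad, |y|} = 0.
y = np.array([0.0, 1.0, 0.0])
assert np.all(np.minimum(x_bad, np.abs(y)) == 0) and np.any(y != 0)
```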
See also
• Quasi-interior point
• Vector lattice – Partially ordered vector space, ordered as a lattice
Citations
1. Schaefer & Wolff 1999, pp. 234–242.
2. Schaefer & Wolff 1999, pp. 204–214.
References
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
Ordered topological vector spaces
Basic concepts
• Ordered vector space
• Partially ordered space
• Riesz space
• Order topology
• Order unit
• Positive linear operator
• Topological vector lattice
• Vector lattice
Types of orders/spaces
• AL-space
• AM-space
• Archimedean
• Banach lattice
• Fréchet lattice
• Locally convex vector lattice
• Normed lattice
• Order bound dual
• Order dual
• Order complete
• Regularly ordered
Types of elements/subsets
• Band
• Cone-saturated
• Lattice disjoint
• Dual/Polar cone
• Normal cone
• Order complete
• Order summable
• Order unit
• Quasi-interior point
• Solid set
• Weak order unit
Topologies/Convergence
• Order convergence
• Order topology
Operators
• Positive
• State
Main results
• Freudenthal spectral
Buchsbaum ring
In mathematics, Buchsbaum rings are Noetherian local rings such that every system of parameters is a weak sequence. A sequence $(a_{1},\cdots ,a_{r})$ of elements of the maximal ideal $m$ is called a weak sequence if $m\cdot ((a_{1},\cdots ,a_{i-1})\colon a_{i})\subset (a_{1},\cdots ,a_{i-1})$ for all $i$.
They were introduced by Jürgen Stückrad and Wolfgang Vogel (1973) and are named after David Buchsbaum.
Every Cohen–Macaulay local ring is a Buchsbaum ring. Every Buchsbaum ring is a generalized Cohen–Macaulay ring.
References
• Buchsbaum, D. (1966), "Complexes in local ring theory", in Herstein, I. N. (ed.), Some aspects of ring theory, Centro Internazionale Matematico Estivo (C.I.M.E.). II Ciclo, Varenna (Como), 23-31 agosto, vol. 1965, Rome: Edizioni cremonese, pp. 223–228, ISBN 978-3-642-11035-1, MR 0223393
• Goto, Shiro (2001) [1994], "Buchsbaum ring", Encyclopedia of Mathematics, EMS Press
• Stückrad, Jürgen; Vogel, Wolfgang (1973), "Eine Verallgemeinerung der Cohen-Macaulay Ringe und Anwendungen auf ein Problem der Multiplizitätstheorie", Journal of Mathematics of Kyoto University, 13: 513–528, ISSN 0023-608X, MR 0335504
• Stückrad, Jürgen; Vogel, Wolfgang (1986), Buchsbaum rings and applications, Berlin, New York: Springer-Verlag, ISBN 978-3-540-16844-7, MR 0881220
Weak solution
In mathematics, a weak solution (also called a generalized solution) to an ordinary or partial differential equation is a function for which the derivatives may not all exist but which is nonetheless deemed to satisfy the equation in some precisely defined sense. There are many different definitions of weak solution, appropriate for different classes of equations. One of the most important is based on the notion of distributions.
Avoiding the language of distributions, one starts with a differential equation and rewrites it in such a way that no derivatives of the solution of the equation show up (the new form is called the weak formulation, and the solutions to it are called weak solutions). Somewhat surprisingly, a differential equation may have solutions which are not differentiable; and the weak formulation allows one to find such solutions.
Weak solutions are important because many differential equations encountered in modelling real-world phenomena do not admit of sufficiently smooth solutions, and the only way of solving such equations is using the weak formulation. Even in situations where an equation does have differentiable solutions, it is often convenient to first prove the existence of weak solutions and only later show that those solutions are in fact smooth enough.
A concrete example
As an illustration of the concept, consider the first-order wave equation:
${\frac {\partial u}{\partial t}}+{\frac {\partial u}{\partial x}}=0$
(1)
where u = u(t, x) is a function of two real variables. To indirectly probe the properties of a possible solution u, one integrates it against an arbitrary smooth function $\varphi \,\!$ of compact support, known as a test function, taking
$\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x)\,\varphi (t,x)\,dx\,dt$
For example, if $\varphi $ is a smooth probability distribution concentrated near a point $(t,x)=(t_{\circ },x_{\circ })$, the integral is approximately $u(t_{\circ },x_{\circ })$. Notice that while the integrals go from $-\infty $ to $\infty $, they are essentially over a finite box where $\varphi $ is non-zero.
Thus, assume a solution u is continuously differentiable on the Euclidean space R2, multiply the equation (1) by a test function $\varphi $ (smooth of compact support), and integrate:
$\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }{\frac {\partial u(t,x)}{\partial t}}\varphi (t,x)\,\mathrm {d} t\,\mathrm {d} x+\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }{\frac {\partial u(t,x)}{\partial x}}\varphi (t,x)\,\mathrm {d} t\,\mathrm {d} x=0.$
Using Fubini's theorem which allows one to interchange the order of integration, as well as integration by parts (in t for the first term and in x for the second term) this equation becomes:
$-\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x){\frac {\partial \varphi (t,x)}{\partial t}}\,\mathrm {d} t\,\mathrm {d} x-\int _{-\infty }^{\infty }\int _{-\infty }^{\infty }u(t,x){\frac {\partial \varphi (t,x)}{\partial x}}\,\mathrm {d} t\,\mathrm {d} x=0.$
(2)
(Boundary terms vanish since $\varphi $ is zero outside a finite box.) We have shown that equation (1) implies equation (2) as long as u is continuously differentiable.
The key to the concept of weak solution is that there exist functions u which satisfy equation (2) for any $\varphi $, but such u may not be differentiable and so cannot satisfy equation (1). An example is u(t, x) = |t − x|, as one may check by splitting the integrals over regions x ≥ t and x ≤ t where u is smooth, and reversing the above computation using integration by parts. A weak solution of equation (1) means any solution u of equation (2) over all test functions $\varphi $.
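The computation above can be checked numerically. The sketch below approximates the integral in (2) for u(t, x) = |t − x| with a Gaussian test function; the Gaussian is not compactly supported, but it is negligibly small at the boundary of the chosen box, so it plays the same role here. The grid and box size are arbitrary choices.

```python
import numpy as np

s = np.linspace(-5.0, 5.0, 201)
t, x = np.meshgrid(s, s, indexing="ij")
h = s[1] - s[0]

phi   = np.exp(-(t**2 + x**2))   # stand-in for a test function
phi_t = -2 * t * phi
phi_x = -2 * x * phi

def weak_residual(u):
    # the left side of (2): -\int\int u (phi_t + phi_x) dt dx,
    # approximated by a Riemann sum on the grid
    return -np.sum(u * (phi_t + phi_x)) * h * h

res_weak = weak_residual(np.abs(t - x))  # u = |t - x|: a weak solution
res_bad  = weak_residual(t)              # u = t: NOT a solution

assert abs(res_weak) < 1e-8   # residual vanishes for the weak solution
assert abs(res_bad) > 1.0     # clearly nonzero for the non-solution
```

For this particular Gaussian, the non-solution residual evaluates to π, which the quadrature reproduces to high accuracy.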
General case
The general idea which follows from this example is that, when solving a differential equation in u, one can rewrite it using a test function $\varphi $, such that whatever derivatives in u show up in the equation, they are "transferred" via integration by parts to $\varphi $, resulting in an equation without derivatives of u. This new equation generalizes the original equation to include solutions which are not necessarily differentiable.
The approach illustrated above works in great generality. Indeed, consider a linear differential operator in an open set W in Rn:
$P(x,\partial )u(x)=\sum a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}(x)\,\partial ^{\alpha _{1}}\partial ^{\alpha _{2}}\cdots \partial ^{\alpha _{n}}u(x),$
where the multi-index (α1, α2, …, αn) varies over some finite set in Nn and the coefficients $a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}$ are smooth enough functions of x in Rn.
The differential equation P(x, ∂)u(x) = 0 can, after being multiplied by a smooth test function $\varphi $ with compact support in W and integrated by parts, be written as
$\int _{W}u(x)Q(x,\partial )\varphi (x)\,\mathrm {d} x=0$
where the differential operator Q(x, ∂) is given by the formula
$Q(x,\partial )\varphi (x)=\sum (-1)^{|\alpha |}\partial ^{\alpha _{1}}\partial ^{\alpha _{2}}\cdots \partial ^{\alpha _{n}}\left[a_{\alpha _{1},\alpha _{2},\dots ,\alpha _{n}}(x)\varphi (x)\right].$
The number
$(-1)^{|\alpha |}=(-1)^{\alpha _{1}+\alpha _{2}+\cdots +\alpha _{n}}$
shows up because one needs α1 + α2 + ⋯ + αn integrations by parts to transfer all the partial derivatives from u to $\varphi $ in each term of the differential equation, and each integration by parts entails a multiplication by −1.
The differential operator Q(x, ∂) is the formal adjoint of P(x, ∂) (cf adjoint of an operator).
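The sign (−1)^{|α|} can be seen already in a finite-difference discretization: the matrix transpose of a forward-difference operator (a discrete version of P = ∂) is exactly the negative of a backward-difference operator, the discrete counterpart of Q = −∂. The grid size and spacing below are arbitrary.

```python
import numpy as np

n, h = 8, 0.1
# Forward difference: (D u)_k = (u_{k+1} - u_k) / h, a discrete d/dx.
D = (np.eye(n, k=1) - np.eye(n)) / h
# Backward difference: (B u)_k = (u_k - u_{k-1}) / h.
B = (np.eye(n) - np.eye(n, k=-1)) / h

# The formal adjoint of d/dx is -d/dx: here D^T equals -B exactly.
assert np.allclose(D.T, -B)

# Summation by parts: <D u, v> = <u, D^T v> = -<u, B v> for all u, v.
rng = np.random.default_rng(1)
u, v = rng.standard_normal(n), rng.standard_normal(n)
assert np.isclose((D @ u) @ v, -(u @ (B @ v)))
```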
In summary, if the original (strong) problem was to find a |α|-times differentiable function u defined on the open set W such that
$P(x,\partial )u(x)=0{\text{ for all }}x\in W$
(a so-called strong solution), then an integrable function u would be said to be a weak solution if
$\int _{W}u(x)\,Q(x,\partial )\varphi (x)\,\mathrm {d} x=0$
for every smooth function $\varphi $ with compact support in W.
Other kinds of weak solution
The notion of weak solution based on distributions is sometimes inadequate. In the case of hyperbolic systems, the notion of weak solution based on distributions does not guarantee uniqueness, and it is necessary to supplement it with entropy conditions or some other selection criterion. In fully nonlinear PDE such as the Hamilton–Jacobi equation, there is a very different definition of weak solution called viscosity solution.
References
• Evans, L. C. (1998). Partial Differential Equations. Providence: American Mathematical Society. ISBN 0-8218-0772-2.
Substructure (mathematics)
In mathematical logic, an (induced) substructure or (induced) subalgebra is a structure whose domain is a subset of that of a bigger structure, and whose functions and relations are restricted to the substructure's domain. Some examples of subalgebras are subgroups, submonoids, subrings, subfields, subalgebras of algebras over a field, or induced subgraphs. Shifting the point of view, the larger structure is called an extension or a superstructure of its substructure.
In model theory, the term "submodel" is often used as a synonym for substructure, especially when the context suggests a theory of which both structures are models.
In the presence of relations (i.e. for structures such as ordered groups or graphs, whose signature is not functional) it may make sense to relax the conditions on a subalgebra so that the relations on a weak substructure (or weak subalgebra) are at most those induced from the bigger structure. Subgraphs are an example where the distinction matters, and the term "subgraph" does indeed refer to weak substructures. Ordered groups, on the other hand, have the special property that every substructure of an ordered group which is itself an ordered group, is an induced substructure.
Definition
Given two structures A and B of the same signature σ, A is said to be a weak substructure of B, or a weak subalgebra of B, if
• the domain of A is a subset of the domain of B,
• f A = f B|An for every n-ary function symbol f in σ, and
• R A $\subseteq $ R B $\cap $ An for every n-ary relation symbol R in σ.
A is said to be a substructure of B, or a subalgebra of B, if A is a weak subalgebra of B and, moreover,
• R A = R B $\cap $ An for every n-ary relation symbol R in σ.
If A is a substructure of B, then B is called a superstructure of A or, especially if A is an induced substructure, an extension of A.
Example
In the language consisting of the binary functions + and ×, binary relation <, and constants 0 and 1, the structure (Q, +, ×, <, 0, 1) is a substructure of (R, +, ×, <, 0, 1). More generally, the substructures of an ordered field (or just a field) are precisely its subfields. Similarly, in the language (×, −1, 1) of groups, the substructures of a group are its subgroups. In the language (×, 1) of monoids, however, the substructures of a group are its submonoids. They need not be groups; and even if they are groups, they need not be subgroups.
In the case of graphs (in the signature consisting of one binary relation), the substructures of a graph are its induced subgraphs, and its weak substructures are precisely its subgraphs.
As subobjects
For every signature σ, induced substructures of σ-structures are the subobjects in the concrete category of σ-structures and strong homomorphisms (and also in the concrete category of σ-structures and σ-embeddings). Weak substructures of σ-structures are the subobjects in the concrete category of σ-structures and homomorphisms in the ordinary sense.
Submodel
In model theory, given a structure M which is a model of a theory T, a submodel of M in a narrower sense is a substructure of M which is also a model of T. For example, if T is the theory of abelian groups in the signature (+, 0), then the submodels of the group of integers (Z, +, 0) are the substructures which are also abelian groups. Thus the natural numbers (N, +, 0) form a substructure of (Z, +, 0) which is not a submodel, while the even numbers (2Z, +, 0) form a submodel.
Other examples:
1. The algebraic numbers form a submodel of the complex numbers in the theory of algebraically closed fields.
2. The rational numbers form a submodel of the real numbers in the theory of fields.
3. Every elementary substructure of a model of a theory T also satisfies T; hence it is a submodel.
In the category of models of a theory and embeddings between them, the submodels of a model are its subobjects.
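The distinction between substructures and submodels can be illustrated mechanically on a finite window of Z; the window is only a crude stand-in for genuine first-order quantification, which of course ranges over all of Z.

```python
WINDOW = range(-50, 51)

naturals = {k for k in WINDOW if k >= 0}
evens    = {k for k in WINDOW if k % 2 == 0}

def closed_under_plus(s):
    # closure checked only where the sum stays inside the window
    return all(a + b in s for a in s for b in s if a + b in WINDOW)

def has_inverses(s):
    # the abelian-group axiom "for all x there is y with x + y = 0"
    return all(any(a + b == 0 for b in s) for a in s)

# Both are substructures of (Z, +, 0): they contain 0 and are closed under +.
assert 0 in naturals and closed_under_plus(naturals)
assert 0 in evens and closed_under_plus(evens)

# Only 2Z is a submodel of the theory of abelian groups:
assert has_inverses(evens)
assert not has_inverses(naturals)   # e.g. 1 has no inverse in N
```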
See also
• Elementary substructure
• End extension
• Löwenheim–Skolem theorem
• Prime model
References
• Burris, Stanley N.; Sankappanavar, H. P. (1981), A Course in Universal Algebra, Berlin, New York: Springer-Verlag
• Diestel, Reinhard (2005) [1997], Graph Theory, Graduate Texts in Mathematics, vol. 173 (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-26183-4
• Hodges, Wilfrid (1997), A shorter model theory, Cambridge: Cambridge University Press, ISBN 978-0-521-58713-6
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Weak trace-class operator
In mathematics, a weak trace class operator is a compact operator on a separable Hilbert space H with singular values the same order as the harmonic sequence. When the dimension of H is infinite, the ideal of weak trace-class operators is strictly larger than the ideal of trace class operators, and has fundamentally different properties. The usual operator trace on the trace-class operators does not extend to the weak trace class. Instead the ideal of weak trace-class operators admits an infinite number of linearly independent quasi-continuous traces, and it is the smallest two-sided ideal for which all traces on it are singular traces.
Weak trace-class operators feature in the noncommutative geometry of French mathematician Alain Connes.
Definition
A compact operator A on an infinite dimensional separable Hilbert space H is weak trace class if μ(n,A) = O(n−1), where μ(A) is the sequence of singular values. In mathematical notation the two-sided ideal of all weak trace-class operators is denoted,
$L_{1,\infty }=\{A\in K(H):\mu (n,A)=O(n^{-1})\}.$
where $K(H)$ are the compact operators. The term weak trace-class, or weak-L1, is used because the operator ideal corresponds, in J. W. Calkin's correspondence between two-sided ideals of bounded linear operators and rearrangement invariant sequence spaces, to the weak-l1 sequence space.
Properties
• the weak trace-class operators admit a quasi-norm defined by
$\|A\|_{w}=\sup _{n\geq 0}(1+n)\mu (n,A),$
making L1,∞ a quasi-Banach operator ideal, that is an ideal that is also a quasi-Banach space.
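As a minimal numerical illustration: the diagonal operator with entries 1, 1/2, 1/3, … has μ(n, A) = 1/(n + 1), so it is weak trace-class with quasi-norm 1, while its partial traces grow like log N, so it is not trace class. The truncation size below is arbitrary.

```python
import numpy as np

N = 100_000
mu = 1.0 / np.arange(1, N + 1)   # mu(n, A) = 1/(n + 1), n = 0, 1, ...

# Quasi-norm sup_n (1 + n) mu(n, A): identically 1 for this operator.
quasi_norm = np.max(np.arange(1, N + 1) * mu)
assert np.isclose(quasi_norm, 1.0)

# Partial traces grow like log N, so A is not trace class:
partial = np.cumsum(mu)
assert partial[-1] > np.log(N)   # the harmonic sum exceeds ln N
```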
See also
• Lp space
• Spectral triple
• Singular trace
• Dixmier trace
References
• B. Simon (2005). Trace ideals and their applications. Providence, RI: Amer. Math. Soc. ISBN 978-0-82-183581-4.
• A. Pietsch (1987). Eigenvalues and s-numbers. Cambridge, UK: Cambridge University Press. ISBN 978-0-52-132532-5.
• A. Connes (1994). Noncommutative geometry. Boston, MA: Academic Press. ISBN 978-0-12-185860-5.
• S. Lord, F. A. Sukochev. D. Zanin (2012). Singular traces: theory and applications. Berlin: De Gruyter. ISBN 978-3-11-026255-1.
Truth-table reduction
In computability theory, a truth-table reduction is a reduction from one set of natural numbers to another. As a "tool", it is weaker than Turing reduction, since not every Turing reduction between sets can be performed by a truth-table reduction, but every truth-table reduction can be performed by a Turing reduction. For the same reason it is said to be a stronger reducibility than Turing reducibility, because it implies Turing reducibility. A weak truth-table reduction is a related type of reduction which is so named because it weakens the constraints placed on a truth-table reduction, and provides a weaker equivalence classification; as such, a "weak truth-table reduction" can actually be more powerful than a truth-table reduction as a "tool", and perform a reduction which is not performable by truth table.
A Turing reduction from a set B to a set A computes the membership of a single element in B by asking questions about the membership of various elements in A during the computation; it may adaptively determine which questions it asks based upon answers to previous questions. In contrast, a truth-table reduction or a weak truth-table reduction must present all of its (finitely many) oracle queries at the same time. In a truth-table reduction, the reduction also gives a boolean function (a truth table) which, when given the answers to the queries, will produce the final answer of the reduction. In a weak truth-table reduction, the reduction uses the oracle answers as a basis for further computation which may depend on the given answers but may not ask further questions of the oracle.
Equivalently, a weak truth-table reduction is a Turing reduction for which the use of the reduction is bounded by a computable function. For this reason, they are sometimes referred to as bounded Turing (bT) reductions rather than as weak truth-table (wtt) reductions.
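A toy illustration of the definition (the sets and the reduction here are invented for the example): to decide membership in B = {n : n ∈ A or n + 1 ∈ A}, a truth-table reduction presents both queries n and n + 1 up front, in one non-adaptive round, and combines the oracle's answers with a fixed boolean function, here OR.

```python
def tt_reduction(n):
    """The reduction's data for input n: all queries up front,
    plus the boolean function (the "truth table") combining answers."""
    queries = [n, n + 1]
    table = lambda answers: any(answers)   # OR of the oracle answers
    return queries, table

def decide_B(n, oracle):
    queries, table = tt_reduction(n)
    answers = [oracle(q) for q in queries]  # one parallel oracle round
    return table(answers)

# Try it with A = the set of perfect squares.
A = lambda k: int(k ** 0.5) ** 2 == k
B = lambda n: A(n) or A(n + 1)              # the intended target set

assert all(decide_B(n, A) == B(n) for n in range(200))
```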
Properties
As every truth-table reduction is a Turing reduction, if A is truth-table reducible to B (A ≤tt B), then A is also Turing reducible to B (A ≤T B). Considering also one-one reducibility, many-one reducibility and weak truth-table reducibility,
$A\leq _{1}B\Rightarrow A\leq _{m}B\Rightarrow A\leq _{tt}B\Rightarrow A\leq _{wtt}B\Rightarrow A\leq _{T}B$,
or in other words, one-one reducibility implies many-one reducibility, which implies truth-table reducibility, which in turn implies weak truth-table reducibility, which in turn implies Turing reducibility.
Furthermore, A is truth-table reducible to B iff A is Turing reducible to B via a total functional on $2^{\omega }$. The forward direction is trivial, and for the reverse direction suppose $\Gamma $ is a total computable functional. To build the truth-table for computing A(n), simply search for a number m such that for all binary strings $\sigma $ of length m, $\Gamma ^{\sigma }(n)$ converges. Such an m must exist by Kőnig's lemma, since $\Gamma $ must be total on all paths through $2^{<\omega }$. Given such an m, it is a simple matter to find the unique truth-table which gives $\Gamma ^{\sigma }(n)$ when applied to $\sigma $. The forward direction fails for weak truth-table reducibility.
References
• H. Rogers, Jr., 1967. The Theory of Recursive Functions and Effective Computability, second edition 1987, MIT Press. ISBN 0-262-68052-1 (paperback), ISBN 0-07-053522-1