Two-center bipolar coordinates
In mathematics, two-center bipolar coordinates form a coordinate system in which a point is specified by its distances from two fixed centers $c_{1}$ and $c_{2}$.[1] This system is very useful in some scientific applications (e.g. calculating the electric field of a dipole on a plane).[2][3]
Transformation to Cartesian coordinates
When the centers are at $(+a,0)$ and $(-a,0)$, the transformation to Cartesian coordinates $(x,y)$ from two-center bipolar coordinates $(r_{1},r_{2})$ is
$x={\frac {r_{2}^{2}-r_{1}^{2}}{4a}}$
$y=\pm {\frac {1}{4a}}{\sqrt {16a^{2}r_{2}^{2}-(r_{2}^{2}-r_{1}^{2}+4a^{2})^{2}}}$[1]
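The forward map can be sketched directly from the two formulas above; the helper name and sample point below are illustrative, not from the source.

```python
import math

def bipolar_to_cartesian(r1, r2, a):
    """Map two-center bipolar coordinates (r1, r2), with centers at (+a, 0)
    and (-a, 0), to Cartesian (x, y); the y >= 0 branch is returned."""
    x = (r2**2 - r1**2) / (4 * a)
    radicand = 16 * a**2 * r2**2 - (r2**2 - r1**2 + 4 * a**2) ** 2
    if radicand < 0:
        raise ValueError("no such point: r1, r2 fail the triangle inequality")
    y = math.sqrt(radicand) / (4 * a)
    return x, y

# Round trip: measure the distances of a known point to the two centers,
# then recover the point from them.
a, px, py = 2.0, 1.5, 3.0
r1 = math.hypot(px - a, py)   # distance to the center at (+a, 0)
r2 = math.hypot(px + a, py)   # distance to the center at (-a, 0)
x, y = bipolar_to_cartesian(r1, r2, a)
assert abs(x - px) < 1e-9 and abs(y - py) < 1e-9
```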
Transformation to polar coordinates
When x > 0, the transformation to polar coordinates from two-center bipolar coordinates is
$r={\sqrt {\frac {r_{1}^{2}+r_{2}^{2}-2a^{2}}{2}}}$
$\theta =\arctan \left({\frac {\sqrt {16a^{2}r_{2}^{2}-(r_{2}^{2}-r_{1}^{2}+4a^{2})^{2}}}{r_{2}^{2}-r_{1}^{2}}}\right)$
where $2a$ is the distance between the poles (coordinate system centers).
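The radius formula can be checked numerically against a directly measured distance from the origin (the sample point is illustrative):

```python
import math

# Sample point, with centers at (+a, 0) and (-a, 0).
a, px, py = 2.0, 1.5, 3.0
r1 = math.hypot(px - a, py)   # distance to the center at (+a, 0)
r2 = math.hypot(px + a, py)   # distance to the center at (-a, 0)
r = math.sqrt((r1**2 + r2**2 - 2 * a**2) / 2)
assert abs(r - math.hypot(px, py)) < 1e-12
```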
Applications
Polar plotters use two-center bipolar coordinates to describe the drawing paths required to draw a target image.
See also
• Bipolar coordinates
• Biangular coordinates
• Lemniscate of Bernoulli
• Oval of Cassini
• Cartesian oval
• Ellipse
References
1. Weisstein, Eric W. "Bipolar coordinates". MathWorld.
2. R. Price, The Periodic Standing Wave Approximation: Adapted coordinates and spectral methods.
3. The periodic standing-wave approximation: nonlinear scalar fields, adapted coordinates, and the eigenspectral method.
Orthogonal coordinate systems
Two dimensional
• Cartesian
• Polar (Log-polar)
• Parabolic
• Bipolar
• Elliptic
Three dimensional
• Cartesian
• Cylindrical
• Spherical
• Parabolic
• Paraboloidal
• Oblate spheroidal
• Prolate spheroidal
• Ellipsoidal
• Elliptic cylindrical
• Toroidal
• Bispherical
• Bipolar cylindrical
• Conical
• 6-sphere
| Wikipedia |
Two-dimensional Yang–Mills theory
In mathematical physics, two-dimensional Yang–Mills theory is the special case of Yang–Mills theory in which the dimension of spacetime is taken to be two. This special case allows for a rigorously defined Yang–Mills measure, meaning that the (Euclidean) path integral can be interpreted as a measure on the set of connections modulo gauge transformations. This situation contrasts with the four-dimensional case, where a rigorous construction of the theory as a measure is currently unknown.
An aspect of the subject of particular interest is the large-N limit, in which the structure group is taken to be the unitary group $U(N)$ and the limit $N\to \infty $ is then taken. The large-N limit of two-dimensional Yang–Mills theory has connections to string theory.
Background
Interest in the Yang–Mills measure comes from a statistical mechanical or constructive quantum field theoretic approach to formulating a quantum theory for the Yang–Mills field. A gauge field is described mathematically by a 1-form $A$ on a principal $G$-bundle over a manifold $M$ taking values in the Lie algebra $L(G)$ of the Lie group $G$. We assume that the structure group $G$, which describes the physical symmetries of the gauge field, is a compact Lie group with a bi-invariant metric on the Lie algebra $L(G)$, and we also assume given a Riemannian metric on the manifold $M$. The Yang–Mills action functional is given by
$S_{YM}(A)={\frac {1}{2}}\int _{M}\|F^{A}\|^{2}\,d\sigma _{M}$
where $F^{A}$ is the curvature of the connection form $A$, the norm-squared in the integrand comes from the metric on the Lie algebra and the one on the base manifold, and $\sigma _{M}$ is the Riemannian volume measure on $M$.
The measure $\mu _{T}$ is given formally by
$d\mu _{T}(A)={\frac {1}{Z_{T}}}e^{-S_{YM}(A)/T}DA,$
as a normalized probability measure on the space of all connections on the bundle, with $T>0$ a parameter and $Z_{T}$ a formal normalizing constant. More precisely, the probability measure is best understood as living on the space of orbits of connections under gauge transformations.
The Yang–Mills measure for two-dimensional manifolds
Study of Yang–Mills theory in two dimensions dates back at least to work of A. A. Migdal in 1975.[1] Some formulas appearing in Migdal's work can, in retrospect, be seen to be connected to the heat kernel on the structure group of the theory. The role of the heat kernel was made more explicit in various works in the late 1970s, culminating in the introduction of the heat kernel action in work of Menotti and Onofri in 1981.[2]
In the continuum theory, the Yang–Mills measure $\mu _{T}$ was rigorously defined for the case where $M={\mathbb {R} }^{2}$ by Bruce Driver[3] and by Leonard Gross, Christopher King, and Ambar Sengupta.[4] For compact manifolds, both oriented and non-oriented, with or without boundary, with specified bundle topology, the Yang–Mills measure was constructed by Sengupta.[5][6][7][8] In this approach the 2-dimensional Yang–Mills measure is constructed by using a Gaussian measure on an infinite-dimensional space conditioned to satisfy relations implied by the topologies of the surface and of the bundle. Wilson loop variables (certain important variables on the space) were defined using stochastic differential equations and their expected values computed explicitly and found to agree with the results of the heat kernel action.
Dana S. Fine[9][10][11] used the formal Yang–Mills functional integral to compute loop expectation values. Other approaches include that of Klimek and Kondracki[12] and Ashtekar et al.[13] Thierry Lévy[14][15] constructed the 2-dimensional Yang–Mills measure in a very general framework, starting with the loop-expectation value formulas and constructing the measure, somewhat analogously to Brownian motion measure being constructed from transition probabilities. Unlike other works that also aimed to construct the measure from loop expectation values, Lévy's construction makes it possible to consider a very wide family of loop observables.
The discrete Yang–Mills measure is a term that has been used for the lattice gauge theory version of the Yang–Mills measure, especially for compact surfaces. The lattice in this case is a triangulation of the surface. Notable facts[16][17] are: (i) the discrete Yang–Mills measure can encode the topology of the bundle over the continuum surface even if only the triangulation is used to define the measure; (ii) when two surfaces are sewn along a common boundary loop, the corresponding discrete Yang–Mills measures convolve to yield the measure for the combined surface.
Wilson loop expectation values in 2 dimensions
For a piecewise smooth loop $\gamma $ on the base manifold $M$ and a point $u$ on the fiber in the principal $G$-bundle $P\to M$ over the base point $o\in M$ of the loop, there is the holonomy $h_{\gamma }(A)$ of any connection $A$ on the bundle. For regular loops $\gamma _{1},\ldots ,\gamma _{n}$, all based at $o$ and any function $\varphi $ on $G^{n}$ the function $A\mapsto \varphi {\bigl (}h_{\gamma _{1}}(A),\ldots ,h_{\gamma _{n}}(A){\bigr )}$ is called a Wilson loop variable, of interest mostly when $\varphi $ is a product of traces of the holonomies in representations of the group $G$. With $M$ being a two-dimensional Riemannian manifold the loop expectation values
$\int \varphi {\bigl (}h_{\gamma _{1}}(A),\ldots ,h_{\gamma _{n}}(A){\bigr )}\,d\mu _{T}(A)$
were computed in the above-mentioned works.
If $M$ is the plane then
$\int \varphi {\bigl (}h_{\gamma }(A){\bigr )}\,d\mu _{T}(A)=\int _{G}\varphi (x)Q_{Ta}(x)\,dx,$
where $Q_{t}(y)$ is the heat kernel on the group $G$, $a$ is the area enclosed by the loop $\gamma $, and the integration is with respect to unit-mass Haar measure. This formula was proved by Driver[3] and by Gross et al.[4] using the Gaussian measure construction of the Yang–Mills measure on the plane and by defining parallel transport by interpreting the equation of parallel transport as a Stratonovich stochastic differential equation.
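As an illustrative special case (stated here under the common probabilists' convention that $Q_{t}$ solves $\partial _{t}Q_{t}={\tfrac {1}{2}}\Delta Q_{t}$): for the abelian group $G=U(1)$ the heat kernel is $Q_{t}(e^{i\theta })=\sum _{n\in \mathbb {Z} }e^{-n^{2}t/2}e^{in\theta }$, so taking $\varphi $ to be the defining character $\varphi (e^{i\theta })=e^{i\theta }$, orthogonality of the characters $e^{in\theta }$ in the formula above yields
$\int \varphi {\bigl (}h_{\gamma }(A){\bigr )}\,d\mu _{T}(A)=e^{-Ta/2},$
an exponential decay in the enclosed area $a$ (the abelian area law).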
If $M$ is the 2-sphere then
$\int \varphi {\bigl (}h_{\gamma }(A){\bigr )}\,d\mu _{T}(A)={\frac {1}{Q_{Tc}(e)}}\int _{G}\varphi (x)Q_{Ta}(x)Q_{Tb}(x^{-1})\,dx,$
where now $b$ is the area of the region "outside" the loop $\gamma $, and $c$ is the total area of the sphere. This formula was proved by Sengupta[5] using the conditioned Gaussian measure construction of the Yang–Mills measure and the result agrees with what one gets by using the heat kernel action of Menotti and Onofri.[2]
As an example for higher genus surfaces, if $M$ is a torus, then
$\int \varphi {\bigl (}h_{\gamma }(A){\bigr )}\,d\mu _{T}(A)={\frac {\int _{G}\varphi (x)Q_{Ta}(x)Q_{Tb}(x^{-1}wzw^{-1}z^{-1})\,dx\,dw\,dz}{\int _{G}Q_{Tc}(wzw^{-1}z^{-1})\,dw\,dz}},$
with $c$ being the total area of the torus, and $\gamma $ a contractible loop on the torus enclosing an area $a$. This, and counterparts in higher genus as well as for surfaces with boundary and for bundles with nontrivial topology, were proved by Sengupta.[6][8]
There is an extensive physics literature on loop expectation values in two-dimensional Yang–Mills theory.[18][19][20][21][22][23][24][25] Many of the above formulas were known in the physics literature from the 1970s, with the results initially expressed in terms of a sum over the characters of the gauge group rather than the heat kernel and with the function $\varphi $ being the trace in some representation of the group. Expressions involving the heat kernel then appeared explicitly in the form of the "heat kernel action" in work of Menotti and Onofri.[2] The role of the convolution property of the heat kernel was used in works of Sergio Albeverio et al.[26][27] in constructing stochastic cosurface processes inspired by Yang–Mills theory and, indirectly, by Makeenko and Migdal[22] in the physics literature.
The low-T limit
The Yang–Mills partition function is, formally,
$\int e^{-{\frac {1}{T}}S_{YM}(A)}\,DA$
In the two-dimensional case we can view this as being (proportional to) the denominator that appears in the loop expectation values. Thus, for example, the partition function for the torus would be
$\int _{G^{2}}Q_{TS}(aba^{-1}b^{-1})\,da\,db,$
where $S$ is the area of the torus. In two of the most impactful works[28][29] in the field, Edward Witten showed that as $T\downarrow 0$ the partition function yields the volume of the moduli space of flat connections with respect to a natural volume measure on the moduli space. This volume measure is associated to a natural symplectic structure on the moduli space when the surface is orientable, and is the torsion of a certain complex in the case where the surface is not orientable. Witten's discovery has been studied in different ways by several researchers.[30][31][32] Let ${\mathcal {M}}_{g}^{0}$ denote the moduli space of flat connections on a trivial bundle, with structure group being a compact connected semi-simple Lie group $G$ whose Lie algebra is equipped with an Ad-invariant metric, over a compact two-dimensional orientable manifold of genus $g\geq 2$. Witten showed[28] that the symplectic volume of this moduli space is given by
$\operatorname {vol} _{\overline {\Omega }}{\bigl (}{\mathcal {M}}_{g}^{0}{\bigr )}=|Z(G)|\operatorname {vol} (G)^{2g-2}\sum _{\alpha }{\frac {1}{(\dim \alpha )^{2g-2}}},$
where the sum is over all irreducible representations of $G$. This was proved rigorously by Sengupta[33] (see also the works by Lisa Jeffrey and by Kefeng Liu[34]). There is a large literature[35][36][37][38][39] on the symplectic structure on the moduli space of flat connections, and more generally on the moduli space itself, the major early work being that of Michael Atiyah and Raoul Bott.[40]
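As a concrete instance of Witten's volume formula: for $G=SU(2)$ the irreducible representations have dimensions $1,2,3,\ldots $ and $|Z(G)|=2$, so for genus $g=2$ the sum is $\sum _{n\geq 1}n^{-2}=\zeta (2)=\pi ^{2}/6$, giving
$\operatorname {vol} _{\overline {\Omega }}{\bigl (}{\mathcal {M}}_{2}^{0}{\bigr )}=2\operatorname {vol} (G)^{2}\cdot {\frac {\pi ^{2}}{6}}={\frac {\pi ^{2}}{3}}\operatorname {vol} (G)^{2}.$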
Returning to the Yang–Mills measure, Sengupta[33] proved that the measure itself converges in a weak sense to a suitably scaled multiple of the symplectic volume measure for orientable surfaces of genus $\geq 2$. Thierry Lévy and James R. Norris [41] established a large deviations principle for this convergence, showing that the Yang–Mills measure encodes the Yang–Mills action functional even though this functional does not explicitly appear in the rigorous formulation of the measure.
The large-N limit
The large-N limit of gauge theories refers to the behavior of the theory for gauge groups of the form $U(N)$, $SU(N)$, $O(N)$, $SO(N)$, and other such families, as $N\to \infty $. There is a large physics literature on this subject, including major early works by Gerardus 't Hooft. A key tool in this analysis is the Makeenko–Migdal equation.
In two dimensions, the Makeenko–Migdal equation takes a special form developed by Kazakov and Kostov. In the large-N limit, the 2-D form of the Makeenko–Migdal equation relates the Wilson loop functional for a complicated curve with multiple crossings to the product of Wilson loop functionals for a pair of simpler curves with at least one crossing fewer. In the case of the sphere or the plane, it was proposed that the Makeenko–Migdal equation could (in principle) reduce the computation of Wilson loop functionals for arbitrary curves to the Wilson loop functional for a simple closed curve.
In dimension 2, some of the major ideas were proposed by I. M. Singer,[42] who named this limit the master field (a general notion in some areas of physics). Xu[43] studied the large-$N$ limit of 2-dimensional Yang–Mills loop expectation values using ideas from random matrix theory. Sengupta[44] computed the large-N limit of loop expectation values in the plane and commented on the connection with free probability. Confirming one proposal of Singer,[42] Michael Anshelevich and Sengupta[45] showed that the large-N limit of the Yang–Mills measure over the plane for the groups $U(N)$ is given by a free probability theoretic counterpart of the Yang–Mills measure. An extensive study of the master field in the plane was made by Thierry Lévy.[46][47] Several major contributions have been made by Bruce K. Driver, Brian C. Hall, and Todd Kemp,[48] Franck Gabriel,[49] and Antoine Dahlqvist.[50] Dahlqvist and Norris[51] have constructed the master field on the two-dimensional sphere.
In spacetime dimension larger than 2, there is very little in terms of rigorous mathematical results. Sourav Chatterjee has proved several results in large-N gauge theory for dimension larger than 2. Chatterjee[52] established an explicit formula for the leading term of the free energy of three-dimensional $U(N)$ lattice gauge theory for any fixed N, as the lattice spacing tends to zero. Let $Z(n,\varepsilon ,g)$ be the partition function of $d$-dimensional $U(N)$ lattice gauge theory with coupling strength $g$ in a box with lattice spacing $\varepsilon $ and size $n$ spacings in each direction. Chatterjee showed that in dimensions $d=2$ and $3$, $\log Z(n,\varepsilon ,g)$ is
$n^{d}\left({\frac {1}{2}}(d-1)N^{2}\log(g^{2}\varepsilon ^{4-d})+(d-1)\log \left({\frac {\prod _{j=1}^{N-1}j!}{(2\pi )^{N/2}}}\right)+N^{2}K_{d}\right)$
up to leading order in $n$, where $K_{d}$ is a limiting free-energy term. A similar result was also obtained in dimension 4, for $n\to \infty $, $\varepsilon \to 0$, and $g\to 0$ independently.
References
1. Migdal, A. A. (1975). "Recursion equations in gauge field theories". Soviet Physics JETP. 42: 413–418.
2. Menotti, P; Onofri, E (1981). "The action of SU(N) lattice gauge theory in terms of the heat kernel on the group manifold". Nuclear Physics B. 190 (2): 288–300. Bibcode:1981NuPhB.190..288M. doi:10.1016/0550-3213(81)90560-5.
3. Driver, Bruce K. (1989). "YM2: Continuum expectations, lattice convergence, and lassos". Communications in Mathematical Physics. 123 (4): 575–616. Bibcode:1989CMaPh.123..575D. doi:10.1007/BF01218586. S2CID 44030239.
4. Gross, Leonard; King, Chris; Sengupta, Ambar (1989). "Two dimensional Yang-Mills theory via stochastic differential equations". Annals of Physics. 194 (1): 65–112. Bibcode:1989AnPhy.194...65G. doi:10.1016/0003-4916(89)90032-8.
5. Sengupta, Ambar (1992). "The Yang-Mills Measure for S2". Journal of Functional Analysis. 108 (2): 231–273. doi:10.1016/0022-1236(92)90025-E.
6. Sengupta, Ambar N. (1992). "Quantum Gauge Theory on Compact Surfaces". Annals of Physics. 220 (1): 157. doi:10.1016/0003-4916(92)90334-I.
7. Sengupta, Ambar N. (1997). "Yang-Mills on Surfaces with Boundary: Quantum Theory and Symplectic Limit". Communications in Mathematical Physics. 183 (3): 661–704. Bibcode:1997CMaPh.183..661S. doi:10.1007/s002200050047. S2CID 120492148.
8. Sengupta, Ambar N. (1997). "Gauge Theory on Compact Surfaces". Memoirs of the American Mathematical Society. 126 (600). doi:10.1090/memo/0600.
9. Fine, Dana S. (1990). "Quantum Yang-Mills on the two-sphere". Communications in Mathematical Physics. 134 (2): 273–292. Bibcode:1990CMaPh.134..273F. doi:10.1007/BF02097703. S2CID 122310649.
10. Fine, Dana S. (1991). "Quantum Yang-Mills on a Riemann surface". Communications in Mathematical Physics. 140 (2): 321–338. Bibcode:1991CMaPh.140..321F. doi:10.1007/BF02099502. S2CID 120616022.
11. Fine, Dana S. (1996). "Topological sectors and measures on moduli space in quantum Yang-Mills on a Riemann surface". Journal of Mathematical Physics. 37 (3): 1161–1170. arXiv:hep-th/9504103. Bibcode:1996JMP....37.1161F. doi:10.1063/1.531453. S2CID 18159735.
12. Klimek, Slawomir; Kondracki, Witold (1987). "A construction of two-dimensional quantum chromodynamics". Communications in Mathematical Physics. 113 (3): 389–402. Bibcode:1987CMaPh.113..389K. doi:10.1007/BF01221253. S2CID 122234042.
13. Ashtekar, Abhay; Lewandowski, Jerzy; Marolf, Donald; Mourão, José; Thiemann, Thomas (1997). "SU(N) quantum Yang-Mills theory in two dimensions: a complete solution". Journal of Mathematical Physics. 38 (11): 5453–5482. arXiv:hep-th/9605128. Bibcode:1997JMP....38.5453A. doi:10.1063/1.532146. S2CID 18153324.
14. Lévy, Thierry (2003). "Yang-Mills Measure on Compact Surfaces". Memoirs of the American Mathematical Society. 166 (790). doi:10.1090/memo/0790. S2CID 119143163.
15. Lévy, Thierry (2010). "Two-dimensional Markovian holonomy fields". Astérisque. 329.
16. Lévy, Thierry (2005). "Discrete and continuous Yang-Mills measure for non-trivial bundles over compact surfaces". Probability Theory and Related Fields. 136 (2): 171–202. arXiv:math-ph/0501014. doi:10.1007/s00440-005-0478-8. S2CID 17397076.
17. Becker, Claas; Sengupta, Ambar N. (1998). "Sewing Yang-Mills measures and moduli spaces over compact surfaces". Journal of Functional Analysis. 152 (1): 74–99. doi:10.1006/jfan.1997.3161.
18. Gross, David; Taylor IV, Washington (1993). "Two-dimensional QCD is a string theory". Nuclear Physics B. 400 (1): 181–208. arXiv:hep-th/9301068. Bibcode:1993NuPhB.400..181G. doi:10.1016/0550-3213(93)90403-C.
19. Cordes, Stefan; Moore, Gregory; Ramgoolam, Sanjaye (1997). "Large N 2D Yang-Mills theory and topological string theory". Communications in Mathematical Physics. 185 (3): 543–619. arXiv:hep-th/9402107. Bibcode:1997CMaPh.185..543C. doi:10.1007/s002200050102. S2CID 14684976.
20. Kazakov, V. A.; Kostov, I. K. (1980). "Nonlinear strings in two-dimensional $U(\infty )$ gauge theory". Nuclear Physics B. 176 (1): 199–205. doi:10.1016/0550-3213(80)90072-3.
21. Migdal, A. A. (1975). "Recursion equations in gauge field theories". Sov. Phys. JETP. 42 (3): 413–418.
22. Makeenko, Yuri M.; Migdal, A. A. (1980). "Self-consistent area law in QCD". Physics Letters B. 97 (2): 253–256. Bibcode:1980PhLB...97..253M. doi:10.1016/0370-2693(80)90595-X.
23. Makeenko, Yuri M.; Migdal, A. A. (1981). "Quantum chromodynamics as dynamics of loops". Nuclear Physics B. 188 (2): 269–316. Bibcode:1981NuPhB.188..269M. doi:10.1016/0550-3213(81)90258-3.
24. Rusakov, Boris (1995). "Lattice QCD as a theory of interacting surfaces". Physics Letters B. 344 (1–4): 293–300. arXiv:hep-th/9410004. Bibcode:1995PhLB..344..293R. doi:10.1016/0370-2693(94)01488-X. S2CID 118908012.
25. Rusakov, Boris (1997). "Exactly soluble QCD and confinement of quarks". Nuclear Physics B. 507 (3): 691–706. arXiv:hep-th/9703142. Bibcode:1997NuPhB.507..691R. doi:10.1016/S0550-3213(97)00604-4. S2CID 119498700.
26. Albeverio, Sergio; Høegh-Krohn, Raphael; Holden, Helge (1988). "Stochastic multiplicative measures, generalized Markov semigroups, and group-valued stochastic processes and fields". Journal of Functional Analysis. 78 (1): 154–184. doi:10.1016/0022-1236(88)90137-1.
27. Albeverio, Sergio; Høegh-Krohn, Raphael; Kolsrud, Torbjörn (1989). "Representation and construction of multiplicative noise". Journal of Functional Analysis. 87 (2): 250–272. doi:10.1016/0022-1236(89)90010-4.
28. Witten, Edward (1991). "On quantum gauge theories in two dimensions". Communications in Mathematical Physics. 141 (1): 153–209. Bibcode:1991CMaPh.141..153W. doi:10.1007/BF02100009. S2CID 121994550.
29. Witten, Edward (1992). "Two-dimensional gauge theories revisited". Journal of Geometry and Physics. 9 (4): 303–368. arXiv:hep-th/9204083. Bibcode:1992JGP.....9..303W. doi:10.1016/0393-0440(92)90034-X. S2CID 2071498.
30. Forman, Robin (1993). "Small volume limits of 2-d Yang-Mills". Communications in Mathematical Physics. 151 (1): 39–52. Bibcode:1993CMaPh.151...39F. doi:10.1007/BF02096747. S2CID 123050859.
31. King, Christopher; Sengupta, Ambar N. (1994). "An explicit description of the symplectic structure of moduli spaces of flat connections". Journal of Mathematical Physics. 35 (10): 5338–5353. Bibcode:1994JMP....35.5338K. doi:10.1063/1.530755.
32. King, Christopher; Sengupta, Ambar N. (1994). "The semiclassical limit of the two-dimensional quantum Yang-Mills model". Journal of Mathematical Physics. 35 (10): 5354–5361. arXiv:hep-th/9402135. Bibcode:1994JMP....35.5354K. doi:10.1063/1.530756. S2CID 119410229.
33. Sengupta, Ambar N. (2003). "The Volume Measure for Flat Connections as Limit of the Yang-Mills measure". Journal of Geometry and Physics. 47 (4): 398–426. Bibcode:2003JGP....47..398S. doi:10.1016/S0393-0440(02)00229-2.
34. Liu, Kefeng (1996). "Heat kernel and moduli space". Mathematics Research Letters. 3 (6): 743–762. doi:10.4310/MRL.1996.v3.n6.a3.
35. Jeffrey, Lisa; Weitsman, Jonathan; Ramras, Daniel A. (2017). "The prequantum line bundle on the moduli space of flat SU(N) connections on a Riemann surface and the homotopy of the large N limit". Letters in Mathematical Physics. 107 (9): 1581–1589. arXiv:1411.4360. Bibcode:2017LMaPh.107.1581J. doi:10.1007/s11005-017-0956-9. S2CID 119577774.
36. Jeffrey, Lisa; Weitsman, Jonathan (2000). "Symplectic geometry of the moduli space of flat connections on a Riemann surface: inductive decompositions and vanishing theorems". Canadian Journal of Mathematics. 52 (3): 582–612. doi:10.4153/CJM-2000-026-4. S2CID 123067470.
37. Goldman, William M. (1984). "The symplectic nature of fundamental groups of surfaces". Advances in Mathematics. 54 (2): 200–225. doi:10.1016/0001-8708(84)90040-9.
38. Huebschmann, Johannes (1996). "The singularities of Yang-Mills connections for bundles on a surface. II. The stratification". Mathematische Zeitschrift. 221 (1): 83–92. doi:10.1007/BF02622101. S2CID 16857228.
39. Huebschmann, Johannes (1996). "Poisson geometry of flat connections for SU(2)-bundles on surfaces". Mathematische Zeitschrift. 221 (2): 243–259. arXiv:hep-th/9312113. doi:10.1007/PL00004249. S2CID 186226623.
40. Atiyah, Michael; Bott, Raoul (1983). "The Yang-Mills equations over Riemann surfaces". Philosophical Transactions of the Royal Society of London. Series A. Mathematical and Physical Sciences. 308 (1505): 523–615.
41. Lévy, Thierry; Norris, James R. (2006). "Large deviations for the Yang-Mills measure on a compact surface". Communications in Mathematical Physics. 261 (2): 405–450. arXiv:math-ph/0406027. Bibcode:2006CMaPh.261..405L. doi:10.1007/s00220-005-1450-2. S2CID 2985547.
42. Singer, Isadore M. (1995). On the master field in two dimensions. Functional analysis on the eve of the 21st century. Vol. 1. pp. 263–281.
43. Xu, Feng (1997). "A random matrix model from two-dimensional Yang-Mills theory". Communications in Mathematical Physics. 190 (2): 287–307. Bibcode:1997CMaPh.190..287X. doi:10.1007/s002200050242. S2CID 120011642.
44. Sengupta, Ambar N. (2008). Traces in two-dimensional QCD: the large-N limit. Traces in number theory, geometry and quantum fields. Vol. 1. pp. 193–212.
45. Anshelevich, Michael; Sengupta, Ambar N. (2012). "Quantum free Yang-Mills on the plane". Journal of Geometry and Physics. 62 (2): 330–343. arXiv:1106.2107. Bibcode:2012JGP....62..330A. doi:10.1016/j.geomphys.2011.10.005. S2CID 54948607.
46. Lévy, Thierry (2017). "The Master Field on the Plane". Astérisque. 388.
47. Lévy, Thierry; Maida, Mylene (2010). "Central limit theorem for the heat kernel measure on the unitary group". Journal of Functional Analysis. 259 (12): 3163–3204. arXiv:0905.3282. doi:10.1016/j.jfa.2010.08.005. S2CID 15801521.
48. Driver, Bruce; Hall, Brian C.; Kemp, Todd (2017). "Three proofs of the Makeenko-Migdal equation for Yang-Mills theory on the plane". Communications in Mathematical Physics. 351 (2): 741–774. arXiv:1601.06283. Bibcode:2017CMaPh.351..741D. doi:10.1007/s00220-016-2793-6. S2CID 13920957.
49. Driver, Bruce; Gabriel, Franck; Hall, Brian C.; Kemp, Todd (2017). "The Makeenko-Migdal equation for Yang-Mills theory on compact surfaces". Communications in Mathematical Physics. 352 (3): 967–978. arXiv:1602.03905. Bibcode:2017CMaPh.352..967D. doi:10.1007/s00220-017-2857-2. S2CID 14786744.
50. Dahlqvist, Antoine (2016). "Free energies and fluctuations for the unitary Brownian motion". Communications in Mathematical Physics. 348 (2): 395–444. Bibcode:2016CMaPh.348..395D. doi:10.1007/s00220-016-2756-y. S2CID 118973747.
51. Dahlqvist, Antoine; Norris, James R. (2020). "Yang-Mills measure and the master field on the sphere". Communications in Mathematical Physics. 377 (2): 1163–1226. Bibcode:2020CMaPh.377.1163D. doi:10.1007/s00220-020-03773-6. S2CID 18485837.
52. Chatterjee, Sourav (2016). "The leading term of the Yang-Mills free energy". Journal of Functional Analysis. 271 (10): 2944–3005. arXiv:1602.01222. doi:10.1016/j.jfa.2016.04.032. S2CID 119135316.
Quantum field theories
Theories
• Algebraic QFT
• Axiomatic QFT
• Conformal field theory
• Lattice field theory
• Noncommutative QFT
• Gauge theory
• QFT in curved spacetime
• String theory
• Supergravity
• Thermal QFT
• Topological QFT
• Two-dimensional conformal field theory
Models
Regular
• Born–Infeld
• Euler–Heisenberg
• Ginzburg–Landau
• Non-linear sigma
• Proca
• Quantum electrodynamics
• Quantum chromodynamics
• Quartic interaction
• Scalar electrodynamics
• Scalar chromodynamics
• Soler
• Yang–Mills
• Yang–Mills–Higgs
• Yukawa
Low dimensional
• 2D Yang–Mills
• Bullough–Dodd
• Gross–Neveu
• Schwinger
• Sine-Gordon
• Thirring
• Thirring–Wess
• Toda
Conformal
• 2D free massless scalar
• Liouville
• Minimal
• Polyakov
• Wess–Zumino–Witten
Supersymmetric
• Wess–Zumino
• N = 1 super Yang–Mills
• Seiberg–Witten
• Super QCD
Superconformal
• 6D (2,0)
• ABJM
• N = 4 super Yang–Mills
Supergravity
• Higher dimensional
• N = 8
• Pure 4D N = 1
Topological
• BF
• Chern–Simons
Particle theory
• Chiral
• Fermi
• MSSM
• Nambu–Jona-Lasinio
• NMSSM
• Standard Model
• Stueckelberg
Related
• Casimir effect
• Cosmic string
• History
• Loop quantum gravity
• Loop quantum cosmology
• On shell and off shell
• Quantum chaos
• Quantum dynamics
• Quantum foam
• Quantum fluctuations
• Quantum gravity
• Quantum hadrodynamics
• Quantum hydrodynamics
• Quantum information
• Quantum information science
• Quantum logic
• Quantum thermodynamics
Plane curve
In mathematics, a plane curve is a curve in a plane that may be either a Euclidean plane, an affine plane or a projective plane. The most frequently studied cases are smooth plane curves (including piecewise smooth plane curves), and algebraic plane curves. Plane curves also include the Jordan curves (curves that enclose a region of the plane but need not be smooth) and the graphs of continuous functions.
Symbolic representation
A plane curve can often be represented in Cartesian coordinates by an implicit equation of the form $f(x,y)=0$ for some specific function f. If this equation can be solved explicitly for y or x – that is, rewritten as $y=g(x)$ or $x=h(y)$ for a specific function g or h – then this provides an alternative, explicit, form of the representation. A plane curve can also often be represented in Cartesian coordinates by a parametric equation of the form $(x,y)=(x(t),y(t))$ for specific functions $x(t)$ and $y(t).$
Plane curves can sometimes also be represented in alternative coordinate systems, such as polar coordinates that express the location of each point in terms of an angle and a distance from the origin.
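A minimal sketch of the three symbolic forms for a single curve (the parabola; the function names are illustrative):

```python
# The same plane curve (a parabola) in implicit, explicit, and parametric
# form; the three representations agree at sample parameter values.
def f(x, y):          # implicit form: f(x, y) = 0
    return y - x**2

def g(x):             # explicit form: y = g(x)
    return x**2

def curve(t):         # parametric form: (x(t), y(t))
    return t, t**2

for t in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    x, y = curve(t)
    assert f(x, y) == 0 and y == g(x)
```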
Smooth plane curve
A smooth plane curve is a curve in a real Euclidean plane $\mathbb {R} ^{2}$ and is a one-dimensional smooth manifold. This means that a smooth plane curve is a plane curve which "locally looks like a line", in the sense that near every point, it may be mapped to a line by a smooth function. Equivalently, a smooth plane curve can be given locally by an equation $f(x,y)=0,$ where $f:\mathbb {R} ^{2}\to \mathbb {R} $ is a smooth function, and the partial derivatives $\partial f/\partial x$ and $\partial f/\partial y$ are never both 0 at a point of the curve.
Algebraic plane curve
An algebraic plane curve is a curve in an affine or projective plane given by one polynomial equation $f(x,y)=0$ (or $F(x,y,z)=0$ in the projective case, where F is a homogeneous polynomial).
Algebraic curves have been studied extensively since the 18th century.
Every algebraic plane curve has a degree, the degree of the defining equation, which is equal, in case of an algebraically closed field, to the number of intersections of the curve with a line in general position. For example, the circle given by the equation $x^{2}+y^{2}=1$ has degree 2.
The non-singular plane algebraic curves of degree 2 are called conic sections, and their projective completions are all isomorphic to the projective completion of the circle $x^{2}+y^{2}=1$ (that is, the projective curve of equation $x^{2}+y^{2}-z^{2}=0$). The plane curves of degree 3 are called cubic plane curves and, if they are non-singular, elliptic curves. Those of degree 4 are called quartic plane curves.
Examples
Numerous examples of plane curves are shown in Gallery of curves and listed at List of curves. The algebraic curves of degree 1 or 2 are shown here (an algebraic curve of degree less than 3 is always contained in a plane):
| Name | Implicit equation | Parametric equation | As a function graph |
|---|---|---|---|
| Straight line | $ax+by=c$ | $(x,y)=(x_{0}+\alpha t,y_{0}+\beta t)$ | $y=mx+c$ |
| Circle | $x^{2}+y^{2}=r^{2}$ | $(x,y)=(r\cos t,r\sin t)$ | |
| Parabola | $y-x^{2}=0$ | $(x,y)=(t,t^{2})$ | $y=x^{2}$ |
| Ellipse | ${\frac {x^{2}}{a^{2}}}+{\frac {y^{2}}{b^{2}}}=1$ | $(x,y)=(a\cos t,b\sin t)$ | |
| Hyperbola | ${\frac {x^{2}}{a^{2}}}-{\frac {y^{2}}{b^{2}}}=1$ | $(x,y)=(a\cosh t,b\sinh t)$ | |
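As a hedged numeric check (with arbitrary sample values for $a$, $b$, $r$), each parametric form in the table satisfies its implicit equation:

```python
import math

a, b, r = 3.0, 2.0, 1.5   # sample parameters, not from the source
for t in [-1.0, 0.0, 0.7, 2.0]:
    x, y = r * math.cos(t), r * math.sin(t)        # circle
    assert abs(x**2 + y**2 - r**2) < 1e-9
    x, y = t, t**2                                 # parabola
    assert y == x**2
    x, y = a * math.cos(t), b * math.sin(t)        # ellipse
    assert abs(x**2 / a**2 + y**2 / b**2 - 1) < 1e-9
    x, y = a * math.cosh(t), b * math.sinh(t)      # hyperbola (right branch)
    assert abs(x**2 / a**2 - y**2 / b**2 - 1) < 1e-9
```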
See also
• Algebraic geometry
• Convex curve
• Differential geometry
• Osgood curve
• Plane curve fitting
• Projective varieties
• Skew curve
References
• Coolidge, J. L. (April 28, 2004), A Treatise on Algebraic Plane Curves, Dover Publications, ISBN 0-486-49576-0.
• Yates, R. C. (1952), A handbook on curves and their properties, J.W. Edwards, ASIN B0007EKXV0.
• Lawrence, J. Dennis (1972), A catalog of special plane curves, Dover, ISBN 0-486-60288-5.
External links
• Weisstein, Eric W. "Plane Curve". MathWorld.
| Wikipedia |
Two-dimensional singular-value decomposition
Two-dimensional singular-value decomposition (2DSVD) computes the low-rank approximation of a set of matrices such as 2D images or weather maps in a manner almost identical to SVD (singular-value decomposition) which computes the low-rank approximation of a single matrix (or a set of 1D vectors).
SVD
Let the matrix $X=[\mathbf {x} _{1},\ldots ,\mathbf {x} _{n}]$ contain the set of 1D vectors, which are assumed to have been centered. In PCA/SVD, we construct the covariance matrix $F$ and the Gram matrix $G$
$F=XX^{\mathsf {T}}$ , $G=X^{\mathsf {T}}X,$
and compute their eigenvectors $U=[\mathbf {u} _{1},\ldots ,\mathbf {u} _{n}]$ and $V=[\mathbf {v} _{1},\ldots ,\mathbf {v} _{n}]$. Since $VV^{\mathsf {T}}=I$ and $UU^{\mathsf {T}}=I$ we have
$X=UU^{\mathsf {T}}XVV^{\mathsf {T}}=U\left(U^{\mathsf {T}}XV\right)V^{\mathsf {T}}=U\Sigma V^{\mathsf {T}}.$
If we retain only $K$ principal eigenvectors in $U,V$, this gives low-rank approximation of $X$.
2DSVD
Here we deal with a set of 2D matrices $(X_{1},\ldots ,X_{n})$. Suppose they are centered $ \sum _{i}X_{i}=0$. We construct row–row and column–column covariance matrices
$F=\sum _{i}X_{i}X_{i}^{\mathsf {T}}$ and $G=\sum _{i}X_{i}^{\mathsf {T}}X_{i}$
in exactly the same manner as in SVD, and compute their eigenvectors $U$ and $V$. We approximate $X_{i}$ as
$X_{i}=UU^{\mathsf {T}}X_{i}VV^{\mathsf {T}}=U\left(U^{\mathsf {T}}X_{i}V\right)V^{\mathsf {T}}=UM_{i}V^{\mathsf {T}}$
in identical fashion as in SVD. This gives a near optimal low-rank approximation of $(X_{1},\ldots ,X_{n})$ with the objective function
$J=\sum _{i=1}^{n}\left|X_{i}-UM_{i}V^{\mathsf {T}}\right|^{2}$
Error bounds similar to those of the Eckart–Young theorem also exist.
2DSVD is mostly used in image compression and representation.
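A minimal sketch of the construction above (using NumPy; the input matrices are made-up data sharing an exact rank-1 structure, so the k = l = 1 approximation should reconstruct them almost exactly):

```python
import numpy as np

def two_dsvd(Xs, k, l):
    """2DSVD: return row factors U, column factors V, and cores M_i = U^T X_i V."""
    F = sum(X @ X.T for X in Xs)          # row-row covariance
    G = sum(X.T @ X for X in Xs)          # column-column covariance
    # eigh returns eigenvalues in ascending order; keep the leading eigenvectors
    _, U = np.linalg.eigh(F)
    _, V = np.linalg.eigh(G)
    U, V = U[:, -k:], V[:, -l:]
    Ms = [U.T @ X @ V for X in Xs]
    return U, V, Ms

rng = np.random.default_rng(0)
u, v = rng.normal(size=(4, 1)), rng.normal(size=(5, 1))
Xs = [s * u @ v.T for s in (1.0, -2.0, 0.5)]   # shared rank-1 structure

U, V, Ms = two_dsvd(Xs, k=1, l=1)
err = sum(np.linalg.norm(X - U @ M @ V.T)**2 for X, M in zip(Xs, Ms))
print(err)  # ≈ 0, since the data are exactly rank 1
```

For general data the reconstruction error is not zero, but the objective J above is nearly minimized by this choice of U and V.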
References
• Chris Ding and Jieping Ye. "Two-dimensional Singular Value Decomposition (2DSVD) for 2D Maps and Images". Proc. SIAM Int'l Conf. Data Mining (SDM'05), pp. 32–43, April 2005. http://ranger.uta.edu/~chqding/papers/2dsvdSDM05.pdf
• Jieping Ye. "Generalized Low Rank Approximations of Matrices". Machine Learning Journal. Vol. 61, pp. 167–191, 2005.
Two-graph
In mathematics, a two-graph is a set of (unordered) triples chosen from a finite vertex set X, such that every (unordered) quadruple from X contains an even number of triples of the two-graph. A regular two-graph has the property that every pair of vertices lies in the same number of triples of the two-graph. Two-graphs have been studied because of their connection with equiangular lines and, for regular two-graphs, strongly regular graphs, and also finite groups because many regular two-graphs have interesting automorphism groups.
A two-graph is not a graph and should not be confused with other objects called 2-graphs in graph theory, such as 2-regular graphs.
Examples
On the set of vertices {1,...,6} the following collection of unordered triples is a two-graph:
123 124 135 146 156 236 245 256 345 346
This two-graph is a regular two-graph since each pair of distinct vertices appears together in exactly two triples.
Given a simple graph G = (V,E), the set of triples of the vertex set V whose induced subgraph has an odd number of edges forms a two-graph on the set V. Every two-graph can be represented in this way.[1] This example is referred to as the standard construction of a two-graph from a simple graph.
As a more complex example, let T be a tree with edge set E. The set of all triples of E that are not contained in a path of T form a two-graph on the set E.[2]
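Both the defining parity condition and the regularity of the six-point example above can be verified by brute force. A short sketch in plain Python:

```python
from itertools import combinations

# The example two-graph on {1,...,6}.
triples = {frozenset(t) for t in
           [(1,2,3),(1,2,4),(1,3,5),(1,4,6),(1,5,6),
            (2,3,6),(2,4,5),(2,5,6),(3,4,5),(3,4,6)]}
X = range(1, 7)

# Two-graph axiom: every quadruple contains an even number of the triples.
is_two_graph = all(
    sum(frozenset(t) in triples for t in combinations(q, 3)) % 2 == 0
    for q in combinations(X, 4))

# Regularity: every pair of vertices lies in the same number of triples.
counts = {sum(set(p) <= t for t in triples) for p in combinations(X, 2)}

print(is_two_graph, counts)  # → True {2}
```

Each pair indeed lies in exactly two triples, confirming that the example is a regular two-graph.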
Switching and graphs
A two-graph is equivalent to a switching class of graphs and also to a (signed) switching class of signed complete graphs.
Switching a set of vertices in a (simple) graph means reversing the adjacencies of each pair of vertices, one in the set and the other not in the set: thus the edge set is changed so that an adjacent pair becomes nonadjacent and a nonadjacent pair becomes adjacent. The edges whose endpoints are both in the set, or both not in the set, are not changed. Graphs are switching equivalent if one can be obtained from the other by switching. An equivalence class of graphs under switching is called a switching class. Switching was introduced by van Lint & Seidel (1966) and developed by Seidel; it has been called graph switching or Seidel switching, partly to distinguish it from switching of signed graphs.
In the standard construction of a two-graph from a simple graph given above, two graphs will yield the same two-graph if and only if they are equivalent under switching, that is, they are in the same switching class.
Let Γ be a two-graph on the set X. For any element x of X, define a graph Γx with vertex set X having vertices y and z adjacent if and only if {x, y, z} is in Γ. In this graph, x will be an isolated vertex. This construction is reversible; given a simple graph G, adjoin a new element x to the set of vertices of G, retaining the same edge set, and apply the standard construction above.[3]
To a graph G there corresponds a signed complete graph Σ on the same vertex set, whose edges are signed negative if in G and positive if not in G. Conversely, G is the subgraph of Σ that consists of all vertices and all negative edges. The two-graph of G can also be defined as the set of triples of vertices that support a negative triangle (a triangle with an odd number of negative edges) in Σ. Two signed complete graphs yield the same two-graph if and only if they are equivalent under switching.
Switching of G and of Σ are related: switching the same vertices in both yields a graph H and its corresponding signed complete graph.
Adjacency matrix
The adjacency matrix of a two-graph is the adjacency matrix of the corresponding signed complete graph; thus it is symmetric, is zero on the diagonal, and has entries ±1 off the diagonal. If G is the graph corresponding to the signed complete graph Σ, this matrix is called the (0, −1, 1)-adjacency matrix or Seidel adjacency matrix of G. The Seidel matrix has zero entries on the main diagonal, -1 entries for adjacent vertices and +1 entries for non-adjacent vertices.
If graphs G and H are in the same switching class, then the multisets of eigenvalues of the Seidel adjacency matrices of G and H coincide, since the matrices are similar.[4]
A two-graph on a set V is regular if and only if its adjacency matrix has just two distinct eigenvalues ρ1 > 0 > ρ2 say, where ρ1ρ2 = 1 - |V|.[5]
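This can be illustrated with the regular six-point two-graph from the Examples section. The graph Γ1 in its switching class (edges {y, z} with {1, y, z} a triple) turns out to be a 5-cycle on {2,...,6} together with the isolated vertex 1. The sketch below (plain Python, exact integer arithmetic) builds its Seidel matrix S and checks that S² = 5I, so the only eigenvalues are ±√5 and their product is −5 = 1 − |V|:

```python
# Edges of Γ1: {y, z} adjacent iff {1, y, z} is a triple of the two-graph.
edges = {frozenset(e) for e in [(2,3),(2,4),(3,5),(4,6),(5,6)]}
n = 6

# Seidel adjacency matrix: 0 on the diagonal, -1 for adjacent, +1 for non-adjacent.
S = [[0 if i == j else (-1 if frozenset((i, j)) in edges else 1)
      for j in range(1, n + 1)] for i in range(1, n + 1)]

# S squared, computed exactly over the integers.
S2 = [[sum(S[i][k] * S[k][j] for k in range(n)) for j in range(n)]
      for i in range(n)]

is_5I = all(S2[i][j] == (5 if i == j else 0)
            for i in range(n) for j in range(n))
print(is_5I)  # → True: S^2 = 5I, so the eigenvalues are ±sqrt(5), with product -5 = 1 - 6
```

Since S is symmetric with trace zero, each eigenvalue ±√5 occurs with multiplicity 3.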
Equiangular lines
Main article: Equiangular lines
Every two-graph is equivalent to a set of lines in some Euclidean space, each pair of which meets at the same angle. The set of lines constructed from a two-graph on n vertices is obtained as follows. Let −ρ be the smallest eigenvalue of the Seidel adjacency matrix, A, of the two-graph, and suppose that it has multiplicity n − d. Then the matrix ρI + A is positive semi-definite of rank d and thus can be represented as the Gram matrix of the inner products of n vectors in Euclidean d-space. As these vectors have the same norm (namely, ${\sqrt {\rho }}$) and mutual inner products ±1, any pair of the n lines spanned by them meets at the same angle φ, where cos φ = 1/ρ. Conversely, any set of non-orthogonal equiangular lines in a Euclidean space can give rise to a two-graph (see equiangular lines for the construction).[6]
With the notation as above, the maximum cardinality n satisfies $n\leq d(\rho ^{2}-1)/(\rho ^{2}-d)$, and the bound is achieved if and only if the two-graph is regular.
Notes
1. Colbourn & Dinitz 2007, p. 876, Remark 13.2
2. Cameron, P.J. (1994), "Two-graphs and trees", Discrete Mathematics, 127: 63–74 cited in Colbourn & Dinitz 2007, p. 876, Construction 13.12
3. Cameron & van Lint 1991, pp. 58-59
4. Cameron & van Lint 1991, p. 61
5. Colbourn & Dinitz 2007, p. 878 #13.24
6. van Lint & Seidel 1966
References
• Brouwer, A.E., Cohen, A.M., and Neumaier, A. (1989), Distance-Regular Graphs. Springer-Verlag, Berlin. Sections 1.5, 3.8, 7.6C.
• Cameron, P.J.; van Lint, J.H. (1991), Designs, Graphs, Codes and their Links, London Mathematical Society Student Texts 22, Cambridge University Press, ISBN 978-0-521-42385-4
• Colbourn, Charles J.; Dinitz, Jeffrey H. (2007), Handbook of Combinatorial Designs (2nd ed.), Boca Raton: Chapman & Hall/ CRC, pp. 875–882, ISBN 1-58488-506-8
• Chris Godsil and Gordon Royle (2001), Algebraic Graph Theory. Graduate Texts in Mathematics, Vol. 207. Springer-Verlag, New York. Chapter 11.
• Seidel, J. J. (1976), A survey of two-graphs. In: Colloquio Internazionale sulle Teorie Combinatorie (Proceedings, Rome, 1973), Vol. I, pp. 481–511. Atti dei Convegni Lincei, No. 17. Accademia Nazionale dei Lincei, Rome.
• Taylor, D. E. (1977), Regular 2-graphs. Proceedings of the London Mathematical Society (3), vol. 35, pp. 257–274.
• van Lint, J. H.; Seidel, J. J. (1966), "Equilateral point sets in elliptic geometry", Indagationes Mathematicae, Proc. Koninkl. Ned. Akad. Wetenschap. Ser. A 69, 28: 335–348
Two-sample hypothesis testing
In statistical hypothesis testing, a two-sample test is a test performed on the data of two random samples, each independently obtained from a different given population. The purpose of the test is to determine whether the difference between these two populations is statistically significant.
There are a large number of statistical tests that can be used in a two-sample test. Which one(s) are appropriate depend on a variety of factors, such as:
• Which assumptions (if any) may be made a priori about the distributions from which the data have been sampled? For example, in many situations it may be assumed that the underlying distributions are normal distributions. In other cases the data are categorical, coming from a discrete distribution over a nominal scale, such as which entry was selected from a menu.
• Does the hypothesis being tested apply to the distributions as a whole, or just some population parameter, for example the mean or the variance?
• Is the hypothesis being tested merely that there is a difference in the relevant population characteristics (in which case a two-sided test may be indicated), or does it involve a specific bias ("A is better than B"), so that a one-sided test can be used?
Relevant tests
Statistical tests that may apply for two-sample testing include:
• Hotelling's two-sample T-squared statistic
• Kernel two-sample test (kernel embedding of distributions)
• Kolmogorov–Smirnov test
• Kuiper's test
• Median test
• Pearson's chi-squared test
• Student's t-test
• Tukey–Duckworth test
• Welch's t-test
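As a concrete illustration, Welch's t-test (which drops the equal-variance assumption of Student's t-test) is simple enough to compute directly. A minimal sketch in plain Python with made-up toy data:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic and Welch–Satterthwaite degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)      # sample variances (n - 1 denominator)
    se2 = va / na + vb / nb                # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

t, df = welch_t([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
print(round(t, 3), round(df, 2))  # → -1.897 5.88
```

The p-value is then obtained from a t-distribution with df degrees of freedom (for example via a statistics library).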
See also
• A/B testing
2-sided
In mathematics, specifically in topology of manifolds, a compact codimension-one submanifold $F$ of a manifold $M$ is said to be 2-sided in $M$ when there is an embedding
$h\colon F\times [-1,1]\to M$
with $h(x,0)=x$ for each $x\in F$ and
$h(F\times [-1,1])\cap \partial M=h(\partial F\times [-1,1])$.
In other words, $F$ is 2-sided in $M$ exactly when its normal bundle is trivial.[1]
This means, for example, that a curve in a surface is 2-sided if it has a tubular neighborhood which is a Cartesian product of the curve with an interval.
A submanifold which is not 2-sided is called 1-sided.
Examples
Surfaces
For curves on surfaces, a curve is 2-sided if and only if it preserves orientation, and 1-sided if and only if it reverses orientation: a tubular neighborhood is then a Möbius strip. This can be determined from the class of the curve in the fundamental group of the surface and the orientation character on the fundamental group, which identifies which curves reverse orientation.
• An embedded circle in the plane is 2-sided.
• An embedded circle generating the fundamental group of the real projective plane (such as an "equator" of the projective plane – the image of an equator for the sphere) is 1-sided, as it is orientation-reversing.
Properties
Cutting along a 2-sided manifold can separate a manifold into two pieces – such as cutting along the equator of a sphere or around the sphere on which a connected sum has been done – but need not, such as cutting along a curve on the torus.
Cutting along a (connected) 1-sided manifold does not separate a manifold, as a point that is locally on one side of the manifold can be connected to a point that is locally on the other side (i.e., just across the submanifold) by passing along an orientation-reversing path.
Cutting along a 1-sided manifold may make a non-orientable manifold orientable – such as cutting along an equator of the real projective plane – but may not, such as cutting along a 1-sided curve in a higher-genus non-orientable surface. Perhaps the simplest example of this is seen when one cuts a Möbius band along its core curve.
References
1. Hatcher, Allen (2000). Notes on basic 3-manifold topology (PDF). p. 10.
Identity element
In mathematics, an identity element or neutral element of a binary operation is an element that leaves unchanged every element when the operation is applied.[1][2] For example, 0 is an identity element of the addition of real numbers. This concept is used in algebraic structures such as groups and rings. The term identity element is often shortened to identity (as in the case of additive identity and multiplicative identity)[3] when there is no possibility of confusion, but the identity implicitly depends on the binary operation it is associated with.
Definitions
Let (S, ∗) be a set S equipped with a binary operation ∗. Then an element e of S is called a left identity if e ∗ s = s for all s in S, and a right identity if s ∗ e = s for all s in S.[4] If e is both a left identity and a right identity, then it is called a two-sided identity, or simply an identity.[5][6][7][8][9]
An identity with respect to addition is called an additive identity (often denoted as 0) and an identity with respect to multiplication is called a multiplicative identity (often denoted as 1).[3] These need not be ordinary addition and multiplication—as the underlying operation could be rather arbitrary. In the case of a group for example, the identity element is sometimes simply denoted by the symbol $e$. The distinction between additive and multiplicative identity is used most often for sets that support both binary operations, such as rings, integral domains, and fields. The multiplicative identity is often called unity in the latter context (a ring with unity).[10][11][12] This should not be confused with a unit in ring theory, which is any element having a multiplicative inverse. By its own definition, unity itself is necessarily a unit.[13][14]
Examples
Set | Operation | Identity
Real numbers | + (addition) | 0
Real numbers | · (multiplication) | 1
Complex numbers | + (addition) | 0
Complex numbers | · (multiplication) | 1
Positive integers | Least common multiple | 1
Non-negative integers | Greatest common divisor | 0 (under most definitions of GCD)
Vectors | Vector addition | Zero vector
m-by-n matrices | Matrix addition | Zero matrix
n-by-n square matrices | Matrix multiplication | In (identity matrix)
m-by-n matrices | ○ (Hadamard product) | Jm,n (matrix of ones)
All functions from a set M to itself | ∘ (function composition) | Identity function
All distributions on a group G | ∗ (convolution) | δ (Dirac delta)
Extended real numbers | Minimum/infimum | +∞
Extended real numbers | Maximum/supremum | −∞
Subsets of a set M | ∩ (intersection) | M
Subsets of a set M | ∪ (union) | ∅ (empty set)
Strings, lists | Concatenation | Empty string, empty list
A Boolean algebra | ∧ (logical and) | ⊤ (truth)
A Boolean algebra | ↔ (logical biconditional) | ⊤ (truth)
A Boolean algebra | ∨ (logical or) | ⊥ (falsity)
A Boolean algebra | ⊕ (exclusive or) | ⊥ (falsity)
Knots | Knot sum | Unknot
Compact surfaces | # (connected sum) | S2
Groups | Direct product | Trivial group
Two elements {e, f} | ∗ defined by e ∗ e = f ∗ e = e and f ∗ f = e ∗ f = f | Both e and f are left identities, but there is no right identity and no two-sided identity
Homogeneous relations on a set X | Relative product | Identity relation
Relational algebra | Natural join (⋈) | The unique relation of degree zero and cardinality one
Properties
In the example S = {e,f} with the equalities given, S is a semigroup. It demonstrates the possibility for (S, ∗) to have several left identities. In fact, every element can be a left identity. In a similar manner, there can be several right identities. But if there is both a right identity and a left identity, then they must be equal, resulting in a single two-sided identity.
To see this, note that if l is a left identity and r is a right identity, then l = l ∗ r = r. In particular, there can never be more than one two-sided identity: if there were two, say e and f, then e ∗ f would have to be equal to both e and f.
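The two-element example from the table can be checked mechanically. The following sketch (plain Python) finds all left and right identities of a finite operation from its Cayley table:

```python
# Cayley table of the example:  e*e = f*e = e  and  f*f = e*f = f.
op = {('e', 'e'): 'e', ('f', 'e'): 'e', ('f', 'f'): 'f', ('e', 'f'): 'f'}
S = ['e', 'f']

# l is a left identity if l * s = s for all s; r is a right identity if s * r = s.
left_ids  = [l for l in S if all(op[(l, s)] == s for s in S)]
right_ids = [r for r in S if all(op[(s, r)] == s for s in S)]

print(left_ids, right_ids)  # → ['e', 'f'] []
```

Both elements are left identities and neither is a right identity, so the semigroup has no two-sided identity.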
It is also quite possible for (S, ∗) to have no identity element,[15] such as the case of even integers under the multiplication operation.[3] Another common example is the cross product of vectors, where the absence of an identity element is related to the fact that the direction of any nonzero cross product is always orthogonal to any element multiplied. That is, it is not possible to obtain a non-zero vector in the same direction as the original. Yet another example of structure without identity element involves the additive semigroup of positive natural numbers.
See also
• Absorbing element
• Additive inverse
• Generalized inverse
• Identity (equation)
• Identity function
• Inverse element
• Monoid
• Pseudo-ring
• Quasigroup
• Unital (disambiguation)
Notes and references
1. Weisstein, Eric W. "Identity Element". mathworld.wolfram.com. Retrieved 2019-12-01.
2. "Definition of IDENTITY ELEMENT". www.merriam-webster.com. Retrieved 2019-12-01.
3. "Identity Element". www.encyclopedia.com. Retrieved 2019-12-01.
4. Fraleigh (1976, p. 21)
5. Beauregard & Fraleigh (1973, p. 96)
6. Fraleigh (1976, p. 18)
7. Herstein (1964, p. 26)
8. McCoy (1973, p. 17)
9. "Identity Element | Brilliant Math & Science Wiki". brilliant.org. Retrieved 2019-12-01.
10. Beauregard & Fraleigh (1973, p. 135)
11. Fraleigh (1976, p. 198)
12. McCoy (1973, p. 22)
13. Fraleigh (1976, pp. 198, 266)
14. Herstein (1964, p. 106)
15. McCoy (1973, p. 22)
Bibliography
• Beauregard, Raymond A.; Fraleigh, John B. (1973), A First Course In Linear Algebra: with Optional Introduction to Groups, Rings, and Fields, Boston: Houghton Mifflin Company, ISBN 0-395-14017-X
• Fraleigh, John B. (1976), A First Course In Abstract Algebra (2nd ed.), Reading: Addison-Wesley, ISBN 0-201-01984-1
• Herstein, I. N. (1964), Topics In Algebra, Waltham: Blaisdell Publishing Company, ISBN 978-1114541016
• McCoy, Neal H. (1973), Introduction To Modern Algebra, Revised Edition, Boston: Allyn and Bacon, LCCN 68015225
Further reading
• M. Kilp, U. Knauer, A.V. Mikhalev, Monoids, Acts and Categories with Applications to Wreath Products and Graphs, De Gruyter Expositions in Mathematics vol. 29, Walter de Gruyter, 2000, ISBN 3-11-015248-7, p. 14–15
Two-step M-estimator
Two-step M-estimators deal with M-estimation problems that require a preliminary estimation step to obtain the parameter of interest. Two-step M-estimation differs from the usual M-estimation problem because the asymptotic distribution of the second-step estimator generally depends on the first-step estimator. Accounting for this change in the asymptotic distribution is important for valid inference.
Description
The class of two-step M-estimators includes Heckman's sample selection estimator,[1] weighted non-linear least squares, and ordinary least squares with generated regressors.[2]
To fix ideas, let $\{W_{i}\}_{i=1}^{n}\subseteq R^{d}$ be an i.i.d. sample. $\Theta $ and $\Gamma $ are subsets of Euclidean spaces $R^{p}$ and $R^{q}$, respectively. Given a function $m\colon R^{d}\times \Theta \times \Gamma \rightarrow R$, the two-step M-estimator ${\hat {\theta }}$ is defined as:
${\hat {\theta }}:=\arg \max _{\theta \in \Theta }{\frac {1}{n}}\sum _{i}m{\bigl (}W_{i},\theta ,{\hat {\gamma }}{\bigr )}$
where ${\hat {\gamma }}$ is an M-estimate of a nuisance parameter that needs to be calculated in the first step.
Consistency of two-step M-estimators can be verified by checking consistency conditions for usual M-estimators, although some modification might be necessary. In practice, the important condition to check is the identification condition.[2] If ${\hat {\gamma }}\rightarrow \gamma ^{*},$ where $\gamma ^{*}$ is a non-random vector, then the identification condition is that $E[m(W_{1},\theta ,\gamma ^{*})]$ has a unique maximizer over $\Theta $.
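A toy illustration of the two-step structure (plain Python; the model W_i = θ·γ + error, with the nuisance parameter γ estimated in a first step by the sample mean of an auxiliary variable V_i, is a made-up example for exposition, not one from the references):

```python
# First step: estimate the nuisance parameter gamma.  The sample mean is itself
# an M-estimate: it maximizes -sum_i (V_i - gamma)^2.
V = [1.0, 2.0, 3.0]
W = [3.0, 4.0, 5.0]
gamma_hat = sum(V) / len(V)

# Second step: theta_hat maximizes -sum_i (W_i - theta * gamma_hat)^2, i.e.
# least squares with the *estimated* gamma plugged in.  The first-order
# condition gives the closed form theta_hat = mean(W) / gamma_hat here.
theta_hat = (sum(W) / len(W)) / gamma_hat

print(gamma_hat, theta_hat)  # → 2.0 2.0
```

Note that inference on theta_hat must account for the sampling variability of gamma_hat, as discussed below.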
Asymptotic distribution
Under regularity conditions, two-step M-estimators have asymptotic normality. An important point to note is that the asymptotic variance of a two-step M-estimator is generally not the same as that of the usual M-estimator in which the first-step estimation is not necessary.[3] This fact is intuitive because ${\hat {\gamma }}$ is a random object and its variability should influence the estimation of $\theta $. However, there exists a special case in which the asymptotic variance of the two-step M-estimator takes the form as if there were no first-step estimation procedure. Such a special case occurs if:
$E\left[{\frac {\partial ^{2}}{\partial \theta \,\partial \gamma }}m(W_{1},\theta _{0},\gamma ^{*})\right]=0$
where $\theta _{0}$ is the true value of $\theta $ and $\gamma ^{*}$ is the probability limit of ${\hat {\gamma }}$.[3] To interpret this condition, first note that under regularity conditions, $E\left[{\frac {\partial }{\partial \theta }}m(W_{1},\theta _{0},\gamma ^{*})\right]=0$ since $\theta _{0}$ is the maximizer of $E[m(W_{1},\theta ,\gamma ^{*})]$. So the condition above implies that a small perturbation in γ has no impact on the first-order condition. Thus, in large samples, the variability of ${\hat {\gamma }}$ does not affect the argmax of the objective function, which explains the invariance of the asymptotic variance. Of course, this result is valid only as the sample size tends to infinity, so the finite-sample property could be quite different.
Involving MLE
When the first step is a maximum likelihood estimator, under some assumptions, two-step M-estimator is more asymptotically efficient (i.e. has smaller asymptotic variance) than M-estimator with known first-step parameter. Consistency and asymptotic normality of the estimator follows from the general result on two-step M-estimators.[4]
Let $\{V_{i},W_{i},Z_{i}\}_{i=1}^{n}$ be a random sample and the second-step M-estimator ${\widehat {\theta }}$ is the following:
${\widehat {\theta }}:={\underset {\theta \in \Theta }{\operatorname {arg\max } }}\sum _{i=1}^{n}m(v_{i},w_{i},z_{i}:\theta ,{\widehat {\gamma }})$
where ${\widehat {\gamma }}$ is the parameter estimated by maximum likelihood in the first step. For the MLE,
${\widehat {\gamma }}:={\underset {\gamma \in \Gamma }{\operatorname {arg\max } }}\sum _{i=1}^{n}\log f(v_{i}:z_{i},\gamma )$
where f is the conditional density of V given Z. Now, suppose that given Z, V is conditionally independent of W. This is called the conditional independence assumption or selection on observables.[4][5] Intuitively, this condition means that Z is a good predictor of V so that once conditioned on Z, V has no systematic dependence on W. Under the conditional independence assumption, the asymptotic variance of the two-step estimator is:
$\mathrm {E} [\nabla _{\theta }s(\theta _{0},\gamma _{0})]^{-1}\mathrm {E} [g(\theta _{0},\gamma _{0})g(\theta _{0},\gamma _{0})^{\mathrm {T} }]\mathrm {E} [\nabla _{\theta }s(\theta _{0},\gamma _{0})]^{-1}$
where
${\begin{aligned}g(\theta ,\gamma )&:=s(\theta ,\gamma )-\mathrm {E} [s(\theta ,\gamma )\nabla _{\gamma }d(\gamma )^{\mathrm {T} }]\mathrm {E} [\nabla _{\gamma }d(\gamma )\nabla _{\gamma }d(\gamma )^{\mathrm {T} }]^{-1}d(\gamma )\\s(\theta ,\gamma )&:=\nabla _{\theta }m(V,W,Z:\theta ,\gamma )\\d(\gamma )&:=\nabla _{\gamma }\log f(V:Z,\gamma )\end{aligned}}$
and ∇ represents partial derivative with respect to a row vector. In the case where γ0 is known, the asymptotic variance is
$\mathrm {E} [\nabla _{\theta }s(\theta _{0},\gamma _{0})]^{-1}\mathrm {E} [s(\theta _{0},\gamma _{0})s(\theta _{0},\gamma _{0})^{\mathrm {T} }]\mathrm {E} [\nabla _{\theta }s(\theta _{0},\gamma _{0})]^{-1}$
and therefore, unless $\mathrm {E} [s(\theta ,\gamma )\nabla _{\gamma }d(\gamma )^{\mathrm {T} }]=0$, the two-step M-estimator is more efficient than the usual M-estimator. This fact suggests that even when γ0 is known a priori, there is an efficiency gain by estimating γ by MLE. An application of this result can be found, for example, in treatment effect estimation.[4]
Examples
• Generated regressor
• Heckman correction
• Feasible generalized least squares
• Two-step feasible generalized method of moments
See also
• Adaptive estimator
References
1. Heckman, J.J., The Common Structure of Statistical Models of Truncation, Sample Selection, and Limited Dependent Variables and a Simple Estimator for Such Models, Annals of Economic and Social Measurement, 5,475-492.
2. Wooldridge, J.M., Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, Mass.
3. Newey, K.W. and D. McFadden, Large Sample Estimation and Hypothesis Testing, in R. Engel and D. McFadden, eds., Handbook of Econometrics, Vol.4, Amsterdam: North-Holland.
4. Wooldridge, J.M., Econometric Analysis of Cross Section and Panel Data, MIT Press, Cambridge, Mass.
5. Heckman, J.J., and R. Robb, 1985, Alternative Methods for Evaluating the Impact of Interventions: An Overview, Journal of Econometrics, 30, 239-267.
Karnaugh map
The Karnaugh map (KM or K-map) is a method of simplifying Boolean algebra expressions. Maurice Karnaugh introduced it in 1953[1][2] as a refinement of Edward W. Veitch's 1952 Veitch chart,[3][4] which was a rediscovery of Allan Marquand's 1881 logical diagram[5][6] aka Marquand diagram[4] but with a focus now set on its utility for switching circuits.[4] Veitch charts are also known as Marquand–Veitch diagrams[4] or, rarely, as Svoboda charts,[7] and Karnaugh maps as Karnaugh–Veitch maps (KV maps).
The Karnaugh map reduces the need for extensive calculations by taking advantage of humans' pattern-recognition capability.[1] It also permits the rapid identification and elimination of potential race conditions.
The required Boolean results are transferred from a truth table onto a two-dimensional grid where, in Karnaugh maps, the cells are ordered in Gray code,[8][4] and each cell position represents one combination of input conditions. Cells are also known as minterms, while each cell value represents the corresponding output value of the boolean function. Optimal groups of 1s or 0s are identified, which represent the terms of a canonical form of the logic in the original truth table.[9] These terms can be used to write a minimal Boolean expression representing the required logic.
Karnaugh maps are used to simplify real-world logic requirements so that they can be implemented using a minimum number of logic gates. A sum-of-products expression (SOP) can always be implemented using AND gates feeding into an OR gate, and a product-of-sums expression (POS) leads to OR gates feeding an AND gate. The POS expression gives a complement of the function (if F is the function so its complement will be F').[10] Karnaugh maps can also be used to simplify logic expressions in software design. Boolean conditions, as used for example in conditional statements, can get very complicated, which makes the code difficult to read and to maintain. Once minimised, canonical sum-of-products and product-of-sums expressions can be implemented directly using AND and OR logic operators.[11]
Example
Karnaugh maps are used to facilitate the simplification of Boolean algebra functions. For example, consider the Boolean function described by the following truth table.
Truth table of a function
Row | A B C D | $f(A,B,C,D)$
0 | 0 0 0 0 | 0
1 | 0 0 0 1 | 0
2 | 0 0 1 0 | 0
3 | 0 0 1 1 | 0
4 | 0 1 0 0 | 0
5 | 0 1 0 1 | 0
6 | 0 1 1 0 | 1
7 | 0 1 1 1 | 0
8 | 1 0 0 0 | 1
9 | 1 0 0 1 | 1
10 | 1 0 1 0 | 1
11 | 1 0 1 1 | 1
12 | 1 1 0 0 | 1
13 | 1 1 0 1 | 1
14 | 1 1 1 0 | 1
15 | 1 1 1 1 | 0
Following are two different notations describing the same function in unsimplified Boolean algebra, using the Boolean variables A, B, C, D and their inverses.
• $f(A,B,C,D)=\sum _{}m_{i},i\in \{6,8,9,10,11,12,13,14\}$ where $m_{i}$ are the minterms to map (i.e., rows that have output 1 in the truth table).
• $f(A,B,C,D)=\prod _{}M_{i},i\in \{0,1,2,3,4,5,7,15\}$ where $M_{i}$ are the maxterms to map (i.e., rows that have output 0 in the truth table).
Construction
In the example above, the four input variables can be combined in 16 different ways, so the truth table has 16 rows, and the Karnaugh map has 16 positions. The Karnaugh map is therefore arranged in a 4 × 4 grid.
The row and column indices (shown across the top and down the left side of the Karnaugh map) are ordered in Gray code rather than binary numerical order. Gray code ensures that only one variable changes between each pair of adjacent cells. Each cell of the completed Karnaugh map contains a binary digit representing the function's output for that combination of inputs.
Grouping
After the Karnaugh map has been constructed, it is used to find one of the simplest possible forms — a canonical form — for the information in the truth table. Adjacent 1s in the Karnaugh map represent opportunities to simplify the expression. The minterms ('minimal terms') for the final expression are found by encircling groups of 1s in the map. Minterm groups must be rectangular and must have an area that is a power of two (i.e., 1, 2, 4, 8...). Minterm rectangles should be as large as possible without containing any 0s. Groups may overlap in order to make each one larger. The optimal groupings in the example below are marked by the green, red and blue lines, and the red and green groups overlap. The red group is a 2 × 2 square, the green group is a 4 × 1 rectangle, and the overlap area is indicated in brown.
The cells are often denoted by a shorthand which describes the logical value of the inputs that the cell covers. For example, AD would mean a cell which covers the 2x2 area where A and D are true, i.e. the cells numbered 13, 9, 15, 11 in the diagram above. On the other hand, A${\overline {D}}$ would mean the cells where A is true and D is false (that is, ${\overline {D}}$ is true).
The grid is toroidally connected, which means that rectangular groups can wrap across the edges (see picture). Cells on the extreme right are actually 'adjacent' to those on the far left, in the sense that the corresponding input values only differ by one bit; similarly, so are those at the very top and those at the bottom. Therefore, A${\overline {D}}$ can be a valid term—it includes cells 12 and 8 at the top, and wraps to the bottom to include cells 10 and 14—as is ${\overline {B}}\,{\overline {D}}$, which includes the four corners.
Solution
Once the Karnaugh map has been constructed and the adjacent 1s linked by rectangular and square boxes, the algebraic minterms can be found by examining which variables stay the same within each box.
For the red grouping:
• A is the same and is equal to 1 throughout the box, therefore it should be included in the algebraic representation of the red minterm.
• B does not maintain the same state (it shifts from 1 to 0), and should therefore be excluded.
• C does not change. It is always 0, so its complement, NOT-C, should be included. Thus, ${\overline {C}}$ should be included.
• D changes, so it is excluded.
Thus the first minterm in the Boolean sum-of-products expression is $A{\overline {C}}$.
For the green grouping, A and B maintain the same state, while C and D change. B is 0 and has to be negated before it can be included. The second term is therefore $A{\overline {B}}$. Note that it is acceptable that the green grouping overlaps with the red one.
In the same way, the blue grouping gives the term $BC{\overline {D}}$.
The solutions of each grouping are combined: the normal form of the circuit is $A{\overline {C}}+A{\overline {B}}+BC{\overline {D}}$.
Thus the Karnaugh map has guided a simplification of
${\begin{aligned}f(A,B,C,D)={}&{\overline {A}}BC{\overline {D}}+A{\overline {B}}\,{\overline {C}}\,{\overline {D}}+A{\overline {B}}\,{\overline {C}}D+A{\overline {B}}C{\overline {D}}+{}\\&A{\overline {B}}CD+AB{\overline {C}}\,{\overline {D}}+AB{\overline {C}}D+ABC{\overline {D}}\\={}&A{\overline {C}}+A{\overline {B}}+BC{\overline {D}}\end{aligned}}$
It would also have been possible to derive this simplification by carefully applying the axioms of Boolean algebra, but the time it takes to do that grows exponentially with the number of terms.
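The simplification above can be checked by brute force. The sketch below (our own illustration, not part of the article) enumerates all 16 input combinations, treating ABCD as a 4-bit minterm index:

```python
# Brute-force check that the map-derived minimum A·C' + A·B' + B·C·D'
# agrees with the original eight-minterm sum on every input.
MINTERMS = {6, 8, 9, 10, 11, 12, 13, 14}     # ABCD read as a 4-bit number

def f_original(a, b, c, d):
    return int(8*a + 4*b + 2*c + d in MINTERMS)

def f_minimized(a, b, c, d):
    return int((a and not c) or (a and not b) or (b and c and not d))

bits = (0, 1)
assert all(f_original(a, b, c, d) == f_minimized(a, b, c, d)
           for a in bits for b in bits for c in bits for d in bits)
print("all 16 rows agree")
```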
Inverse
The inverse of a function is solved in the same way by grouping the 0s instead.[nb 1]
The three terms to cover the inverse are all shown with grey boxes with different colored borders:
• brown: ${\overline {A}}\,{\overline {B}}$
• gold: ${\overline {A}}\,{\overline {C}}$
• blue: $BCD$
This yields the inverse:
${\overline {f(A,B,C,D)}}={\overline {A}}\,{\overline {B}}+{\overline {A}}\,{\overline {C}}+BCD$
Through the use of De Morgan's laws, the product of sums can be determined:
${\begin{aligned}f(A,B,C,D)&={\overline {\overline {f(A,B,C,D)}}}\\&={\overline {{\overline {A}}\,{\overline {B}}+{\overline {A}}\,{\overline {C}}+BCD}}\\&=\left({\overline {{\overline {A}}\,{\overline {B}}}}\right)\left({\overline {{\overline {A}}\,{\overline {C}}}}\right)\left({\overline {BCD}}\right)\\&=\left(A+B\right)\left(A+C\right)\left({\overline {B}}+{\overline {C}}+{\overline {D}}\right)\end{aligned}}$
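As a quick sanity check (our own sketch, not from the article), the product of sums can be compared against the sum of products on every input:

```python
# Check De Morgan's result: (A+B)(A+C)(B'+C'+D') equals A·C' + A·B' + B·C·D'.
def sop(a, b, c, d):
    return int((a and not c) or (a and not b) or (b and c and not d))

def pos(a, b, c, d):
    return int((a or b) and (a or c) and ((not b) or (not c) or (not d)))

bits = (0, 1)
assert all(sop(a, b, c, d) == pos(a, b, c, d)
           for a in bits for b in bits for c in bits for d in bits)
```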
Don't cares
Karnaugh maps also allow easier minimizations of functions whose truth tables include "don't care" conditions. A "don't care" condition is a combination of inputs for which the designer doesn't care what the output is. Therefore, "don't care" conditions can either be included in or excluded from any rectangular group, whichever makes it larger. They are usually indicated on the map with a dash or X.
The example on the right is the same as the example above but with the value of f(1,1,1,1) replaced by a "don't care". This allows the red term to expand all the way down and, thus, removes the green term completely.
This yields the new minimum equation:
$f(A,B,C,D)=A+BC{\overline {D}}$
Note that the first term is just A, not $A{\overline {C}}$. In this case, the don't care has dropped a term (the green rectangle); simplified another (the red one); and removed the race hazard (removing the yellow term as shown in the following section on race hazards).
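A short sketch (our own, with minterm 15 as the don't care) confirms that the reduced form agrees with the original function on every specified row:

```python
# With minterm 15 treated as a don't care, the reduced form A + B·C·D'
# must agree with the fully specified function on the other 15 rows.
SPECIFIED_ONES = {6, 8, 9, 10, 11, 12, 13, 14}
DONT_CARE = {15}

def f_reduced(a, b, c, d):
    return int(a or (b and c and not d))

bits = (0, 1)
for a in bits:
    for b in bits:
        for c in bits:
            for d in bits:
                idx = 8*a + 4*b + 2*c + d
                if idx in DONT_CARE:
                    continue                  # output unconstrained here
                assert f_reduced(a, b, c, d) == int(idx in SPECIFIED_ONES)
print("agrees on all specified rows")
```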
The inverse case is simplified as follows:
${\overline {f(A,B,C,D)}}={\overline {A}}\,{\overline {B}}+{\overline {A}}\,{\overline {C}}+{\overline {A}}D$
Through the use of De Morgan's laws, the product of sums can be determined:
${\begin{aligned}f(A,B,C,D)&={\overline {\overline {f(A,B,C,D)}}}\\&={\overline {{\overline {A}}\,{\overline {B}}+{\overline {A}}\,{\overline {C}}+{\overline {A}}\,D}}\\&=\left({\overline {{\overline {A}}\,{\overline {B}}}}\right)\left({\overline {{\overline {A}}\,{\overline {C}}}}\right)\left({\overline {{\overline {A}}\,D}}\right)\\&=\left(A+B\right)\left(A+C\right)\left(A+{\overline {D}}\right)\end{aligned}}$
Race hazards
Elimination
Karnaugh maps are useful for detecting and eliminating race conditions. Race hazards are very easy to spot using a Karnaugh map, because a race condition may exist when moving between any pair of adjacent, but disjoint, regions circumscribed on the map. However, because of the nature of Gray coding, adjacent has a special definition explained above – we're in fact moving on a torus, rather than a rectangle, wrapping around the top, bottom, and the sides.
• In the example above, a potential race condition exists when C is 1 and D is 0, A is 1, and B changes from 1 to 0 (moving from the blue state to the green state). For this case, the output is defined to remain unchanged at 1, but because this transition is not covered by a specific term in the equation, a potential for a glitch (a momentary transition of the output to 0) exists.
• There is a second potential glitch in the same example that is more difficult to spot: when D is 0 and A and B are both 1, with C changing from 1 to 0 (moving from the blue state to the red state). In this case the glitch wraps around from the top of the map to the bottom.
Whether glitches will actually occur depends on the physical nature of the implementation, and whether we need to worry about it depends on the application. In clocked logic, it is enough that the logic settles on the desired value in time to meet the timing deadline. In our example, we are not considering clocked logic.
In our case, an additional term of $A{\overline {D}}$ would eliminate the potential race hazard, bridging between the green and blue output states or blue and red output states: this is shown as the yellow region (which wraps around from the bottom to the top of the right half) in the adjacent diagram.
The term is redundant in terms of the static logic of the system, but such redundant, or consensus terms, are often needed to assure race-free dynamic performance.
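The redundancy claim can be verified directly; in this sketch (our own, reusing the earlier example), adding the consensus term $A{\overline {D}}$ leaves the static truth table unchanged:

```python
# Adding the consensus term A·D' to A·C' + A·B' + B·C·D' leaves the
# truth table unchanged: the extra term only bridges adjacent groups.
def f(a, b, c, d):
    return (a and not c) or (a and not b) or (b and c and not d)

def f_hazard_free(a, b, c, d):
    return f(a, b, c, d) or (a and not d)    # extra yellow term A·D'

bits = (0, 1)
assert all(bool(f(a, b, c, d)) == bool(f_hazard_free(a, b, c, d))
           for a in bits for b in bits for c in bits for d in bits)
```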
Similarly, an additional term of ${\overline {A}}D$ must be added to the inverse to eliminate another potential race hazard. Applying De Morgan's laws creates another product of sums expression for f, but with a new factor of $\left(A+{\overline {D}}\right)$.
2-variable map examples
The following are all the possible 2-variable, 2 × 2 Karnaugh maps. Listed with each are the minterms as a function of $\sum m()$ and the race hazard free (see previous section) minimum equation. A minterm is defined as an expression that gives a minimal form of expression of the mapped variables. All possible horizontal and vertical interconnected blocks can be formed. These blocks must have sizes that are powers of 2 (1, 2, 4, 8, 16, 32, ...). These expressions create a minimal logical mapping of the minimal logic variable expressions for the binary expressions to be mapped. Here are all the blocks with one field.
A block can be continued across the bottom, top, left, or right of the chart. That can even wrap beyond the edge of the chart for variable minimization. This is because each logic variable corresponds to each vertical column and horizontal row. A visualization of the k-map can be considered cylindrical. The fields at edges on the left and right are adjacent, and the top and bottom are adjacent. K-Maps for four variables must be depicted as a donut or torus shape. The four corners of the square drawn by the k-map are adjacent. Still more complex maps are needed for 5 variables and more.
• Σm(0); K = 0
• Σm(1); K = A′B′
• Σm(2); K = AB′
• Σm(3); K = A′B
• Σm(4); K = AB
• Σm(1,2); K = B′
• Σm(1,3); K = A′
• Σm(1,4); K = A′B′ + AB
• Σm(2,3); K = AB′ + A′B
• Σm(2,4); K = A
• Σm(3,4); K = B
• Σm(1,2,3); K = A′ + B′
• Σm(1,2,4); K = A + B′
• Σm(1,3,4); K = A′ + B
• Σm(2,3,4); K = A + B
• Σm(1,2,3,4); K = 1
Related graphical methods
Further information: Logic optimization § Graphical methods
Related graphical minimization methods include:
• Marquand diagram (1881) by Allan Marquand (1853–1924)[5][6][4]
• Veitch chart (1952) by Edward W. Veitch (1924–2013)[3][4]
• Svoboda chart (1956) by Antonín Svoboda (1907–1980)[7]
• Mahoney map (M-map, designation numbers, 1963) by Matthew V. Mahoney (a reflection-symmetrical extension of Karnaugh maps for larger numbers of inputs)
• Reduced Karnaugh map (RKM) techniques (from 1969) like infrequent variables, map-entered variables (MEV), variable-entered map (VEM) or variable-entered Karnaugh map (VEKM) by G. W. Schultz, Thomas E. Osborne, Christopher R. Clare, J. Robert Burgoon, Larry L. Dornhoff, William I. Fletcher, Ali M. Rushdi and others (several successive Karnaugh map extensions based on variable inputs for a larger numbers of inputs)
• Minterm-ring map (MRM, 1990) by Thomas R. McCalla (a three-dimensional extension of Karnaugh maps for larger numbers of inputs)
See also
• Algebraic normal form (ANF)
• Binary decision diagram (BDD), a data structure that is a compressed representation of a Boolean function
• Espresso heuristic logic minimizer
• List of Boolean algebra topics
• Logic optimization
• Punnett square (1905), a similar diagram in biology
• Quine–McCluskey algorithm
• Reed–Muller expansion
• Venn diagram (1880)
• Zhegalkin polynomial
Notes
1. This should not be confused with the negation of the result of the previously found function.
References
1. Karnaugh, Maurice (November 1953) [1953-04-23, 1953-03-17]. "The Map Method for Synthesis of Combinational Logic Circuits" (PDF). Transactions of the American Institute of Electrical Engineers, Part I: Communication and Electronics. 72 (5): 593–599. doi:10.1109/TCE.1953.6371932. Paper 53-217. Archived from the original (PDF) on 2017-04-16. Retrieved 2017-04-16. (NB. Also contains a short review by Samuel H. Caldwell.)
2. Curtis, Herbert Allen (1962). A new approach to the design of switching circuits. The Bell Laboratories Series (1 ed.). Princeton, New Jersey, USA: D. van Nostrand Company, Inc. ISBN 0-44201794-4. OCLC 1036797958. S2CID 57068910. ISBN 978-0-44201794-1. ark:/13960/t56d6st0q. (viii+635 pages) (NB. This book was reprinted by Chin Jih in 1969.)
3. Veitch, Edward Westbrook (1952-05-03) [1952-05-02]. "A Chart Method for Simplifying Truth Functions". Transactions of the 1952 ACM Annual Meeting. ACM Annual Conference/Annual Meeting: Proceedings of the 1952 ACM Annual Meeting (Pittsburgh, Pennsylvania, USA). New York, USA: Association for Computing Machinery (ACM): 127–133. doi:10.1145/609784.609801.
4. Brown, Frank Markham (2012) [2003, 1990]. Boolean Reasoning - The Logic of Boolean Equations (reissue of 2nd ed.). Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-42785-0.
5. Marquand, Allan (1881). "XXXIII: On Logical Diagrams for n terms". The London, Edinburgh, and Dublin Philosophical Magazine and Journal of Science. 5. 12 (75): 266–270. doi:10.1080/14786448108627104. Retrieved 2017-05-15. (NB. Quite many secondary sources erroneously cite this work as "A logical diagram for n terms" or "On a logical diagram for n terms".)
6. Gardner, Martin (1958). "6. Marquand's Machine and Others". Logic Machines and Diagrams (1 ed.). New York, USA: McGraw-Hill Book Company, Inc. pp. 104–116. ISBN 1-11784984-8. LCCN 58-6683. ark:/13960/t5cc1sj6b. (x+157 pages)
7. Klir, George Jiří (May 1972). "Reference Notations to Chapter 2". Introduction to the Methodology of Switching Circuits (1 ed.). Binghamton, New York, USA: Litton Educational Publishing, Inc. / D. van Nostrand Company. p. 84. ISBN 0-442-24463-0. LCCN 72-181095. C4463-000-3. (xvi+573+1 pages)
8. Wakerly, John F. (1994). Digital Design: Principles & Practices. New Jersey, USA: Prentice Hall. pp. 48–49, 222. ISBN 0-13-211459-3. (NB. The two page sections taken together say that K-maps are labeled with Gray code. The first section says that they are labeled with a code that changes only one bit between entries and the second section says that such a code is called Gray code.)
9. Belton, David (April 1998). "Karnaugh Maps – Rules of Simplification". Archived from the original on 2017-04-18. Retrieved 2009-05-30.
10. Dodge, Nathan B. (September 2015). "Simplifying Logic Circuits with Karnaugh Maps" (PDF). The University of Texas at Dallas, Erik Jonsson School of Engineering and Computer Science. Archived (PDF) from the original on 2017-04-18. Retrieved 2017-04-18.
11. Cook, Aaron. "Using Karnaugh Maps to Simplify Code". Quantum Rarity. Archived from the original on 2017-04-18. Retrieved 2012-10-07.
Further reading
• Katz, Randy Howard (1998) [1994]. Contemporary Logic Design. The Benjamin/Cummings Publishing Company. pp. 70–85. doi:10.1016/0026-2692(95)90052-7. ISBN 0-8053-2703-7.
• Vingron, Shimon Peter (2004) [2003-11-05]. "Karnaugh Maps". Switching Theory: Insight Through Predicate Logic. Berlin, Heidelberg, New York: Springer-Verlag. pp. 57–76. ISBN 3-540-40343-4.
• Wickes, William E. (1968). "3.5. Veitch Diagrams". Logic Design with Integrated Circuits. New York, USA: John Wiley & Sons. pp. 36–49. LCCN 68-21185. p. 36: […] a refinement of the Venn diagram in that circles are replaced by squares and arranged in a form of matrix. The Veitch diagram labels the squares with the minterms. Karnaugh assigned 1s and 0s to the squares and their labels and deduced the numbering scheme in common use.
• Maxfield, Clive "Max" (2006-11-29). "Reed-Muller Logic". Logic 101. EE Times. Part 3. Archived from the original on 2017-04-19. Retrieved 2017-04-19.
• Lind, Larry Frederick; Nelson, John Christopher Cunliffe (1977). "Section 2.3". Analysis and Design of Sequential Digital Systems. Macmillan Press. ISBN 0-33319266-4. (146 pages)
• Holder, Michel Elizabeth (March 2005) [2005-02-14]. "A modified Karnaugh map technique". IEEE Transactions on Education. IEEE. 48 (1): 206–207. doi:10.1109/TE.2004.832879. eISSN 1557-9638. ISSN 0018-9359. S2CID 25576523.
• Cavanagh, Joseph (2008). Computer Arithmetic and Verilog HDL Fundamentals (1 ed.). CRC Press.
• Kohavi, Zvi; Jha, Niraj K. (2009). Switching and Finite Automata Theory (3 ed.). Cambridge University Press. ISBN 978-0-521-85748-2.
• Grund, Jürgen (2011). KV-Diagramme in der Schaltalgebra - Verknüpfungen, Beweise, Normalformen, schaltalgebraische Umformungen, Anschauungsmodelle, Paradebeispiele [KV diagrams in Boolean algebra - relations, proofs, normal forms, algebraic transformations, illustrative models, typical examples] (Windows/Mac executable or Adobe Flash-capable browser on CD-ROM) (e-book) (in German) (1 ed.). Berlin, Germany: viademica Verlag. ISBN 978-3-939290-08-7. Archived (PDF) from the original on 2022-11-12. Retrieved 2022-11-26. (282 pages with 14 animations)
External links
• Detect Overlapping Rectangles, by Herbert Glarner.
• Using Karnaugh maps in practical applications, Circuit design project to control traffic lights.
• K-Map Tutorial for 2,3,4 and 5 variables
• Karnaugh Map Example
• POCKET–PC BOOLEAN FUNCTION SIMPLIFICATION, Ledion Bitincka — George E. Antoniou
• K-Map troubleshoot
Two-variable logic
In mathematical logic and computer science, two-variable logic is the fragment of first-order logic where formulae can be written using only two different variables.[1] This fragment is usually studied without function symbols.
Decidability
Some important problems about two-variable logic, such as satisfiability and finite satisfiability, are decidable.[2] This result generalizes results about the decidability of fragments of two-variable logic, such as certain description logics; however, some fragments of two-variable logic enjoy a much lower computational complexity for their satisfiability problems.
By contrast, for the three-variable fragment of first-order logic without function symbols, satisfiability is undecidable.[3]
Counting quantifiers
The two-variable fragment of first-order logic with no function symbols is known to be decidable even with the addition of counting quantifiers,[4] and thus of uniqueness quantification. This is a more powerful result, as counting quantifiers for high numerical values are not expressible in that logic.
Counting quantifiers actually improve the expressiveness of finite-variable logics, as they make it possible to say that there is a node with at least $n$ neighbors, namely $\Phi =\exists x\exists ^{\geq n}yE(x,y)$. Without counting quantifiers, $n+1$ variables are needed for the same formula.
Connection to the Weisfeiler-Leman algorithm
There is a strong connection between two-variable logic and the Weisfeiler-Leman (or color refinement) algorithm. Given two graphs, any two nodes have the same stable color in color refinement if and only if they have the same $C^{2}$ type, that is, they satisfy the same formulas in two-variable logic with counting.[5]
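As an illustration (a minimal sketch of our own, not taken from the cited sources), color refinement can be implemented directly: nodes start with a uniform color and are repeatedly split by the multiset of their neighbors' colors until the partition stabilizes.

```python
# Minimal color refinement (1-dimensional Weisfeiler-Leman).
def color_refinement(adj):
    color = {v: 0 for v in adj}
    while True:
        # signature = old color plus sorted multiset of neighbor colors
        sig = {v: (color[v], tuple(sorted(color[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == color:          # partition stable: done
            return color
        color = new

# Path on 4 nodes: endpoints get one stable color, inner nodes another.
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(color_refinement(path))     # {0: 0, 1: 1, 2: 1, 3: 0}
```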
References
1. L. Henkin. Logical systems containing only a finite number of symbols, Report, Department of Mathematics, University of Montreal, 1967
2. E. Grädel, P.G. Kolaitis and M. Vardi, On the Decision Problem for Two-Variable First-Order Logic, The Bulletin of Symbolic Logic, Vol. 3, No. 1 (Mar., 1997), pp. 53-69.
3. A. S. Kahr, Edward F. Moore and Hao Wang. Entscheidungsproblem Reduced to the ∀ ∃ ∀ Case, 1962, noting that their ∀ ∃ ∀ formulas use only three variables.
4. E. Grädel, M. Otto and E. Rosen. Two-Variable Logic with Counting is Decidable., Proceedings of Twelfth Annual IEEE Symposium on Logic in Computer Science, 1997.
5. Grohe, Martin. "Finite variable logics in descriptive complexity theory." Bulletin of Symbolic Logic 4.4 (1998): 345-398.
Two-vector
A two-vector or bivector[1] is a tensor of type $\scriptstyle {\binom {2}{0}}$ and it is the dual of a two-form, meaning that it is a linear functional which maps two-forms to the real numbers (or more generally, to scalars).
The tensor product of a pair of vectors is a two-vector. Then, any two-vector can be expressed as a linear combination of tensor products of pairs of vectors, especially a linear combination of tensor products of pairs of basis vectors. If f is a two-vector, then[2]
$\mathbf {f} =f^{\alpha \beta }\,{\vec {e}}_{\alpha }\otimes {\vec {e}}_{\beta }$
where the f α β are the components of the two-vector. Notice that both indices of the components are contravariant. This is always the case for two-vectors, by definition. A bivector may operate on a one-form, yielding a vector:
$f^{\alpha \beta }u_{\beta }=v^{\alpha }$,
although a problem might be which of the upper indices of the bivector to contract with. (This problem does not arise with mixed tensors because only one of such tensor's indices is upper.) However, if the bivector is symmetric then the choice of index to contract with does not matter.
An example of a bivector is the stress–energy tensor. Another one is the orthogonal complement[3] of the metric tensor.
Matrix notation
If one assumes that vectors may only be represented as column matrices and covectors as row matrices; then, since a square matrix operating on a column vector must yield a column vector, it follows that square matrices can only represent mixed tensors. However, there is nothing in the abstract algebraic definition of a matrix that says that such assumptions must be made. Then dropping that assumption matrices can be used to represent bivectors as well as two-forms. Example:
${\begin{pmatrix}f^{00}&&f^{01}&&f^{02}&&f^{03}\\f^{10}&&f^{11}&&f^{12}&&f^{13}\\f^{20}&&f^{21}&&f^{22}&&f^{23}\\f^{30}&&f^{31}&&f^{32}&&f^{33}\end{pmatrix}}{\begin{pmatrix}u_{0}\\u_{1}\\u_{2}\\u_{3}\end{pmatrix}}={\begin{pmatrix}f^{00}u_{0}+f^{01}u_{1}+f^{02}u_{2}+f^{03}u_{3}\\f^{10}u_{0}+f^{11}u_{1}+f^{12}u_{2}+f^{13}u_{3}\\f^{20}u_{0}+f^{21}u_{1}+f^{22}u_{2}+f^{23}u_{3}\\f^{30}u_{0}+f^{31}u_{1}+f^{32}u_{2}+f^{33}u_{3}\end{pmatrix}}={\begin{pmatrix}v^{0}\\v^{1}\\v^{2}\\v^{3}\end{pmatrix}}\iff f^{\alpha \beta }u_{\beta }=v^{\alpha }$
${\begin{pmatrix}u_{0}&&u_{1}&&u_{2}&&u_{3}\end{pmatrix}}{\begin{pmatrix}f^{00}&&f^{01}&&f^{02}&&f^{03}\\f^{10}&&f^{11}&&f^{12}&&f^{13}\\f^{20}&&f^{21}&&f^{22}&&f^{23}\\f^{30}&&f^{31}&&f^{32}&&f^{33}\end{pmatrix}}$
$={\begin{pmatrix}u_{0}f^{00}+u_{1}f^{10}+u_{2}f^{20}+u_{3}f^{30}&&u_{0}f^{01}+u_{1}f^{11}+u_{2}f^{21}+u_{3}f^{31}&&u_{0}f^{02}+u_{1}f^{12}+u_{2}f^{22}+u_{3}f^{32}&&u_{0}f^{03}+u_{1}f^{13}+u_{2}f^{23}+u_{3}f^{33}\end{pmatrix}}$
$={\begin{pmatrix}w^{0}&&w^{1}&&w^{2}&&w^{3}\end{pmatrix}}\iff u_{\alpha }f^{\alpha \beta }=f^{\alpha \beta }u_{\alpha }=w^{\beta }$ or $f^{\beta \alpha }u_{\beta }=w^{\alpha }$.
If f is symmetric, i.e., $f^{\alpha \beta }=f^{\beta \alpha }$, then $v^{\alpha }=w^{\alpha }$.
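A small numeric sketch (hypothetical values, not from the cited texts) illustrates the symmetric case: contracting a one-form on either index of a symmetric bivector gives the same components, $v^{\alpha }=w^{\alpha }$.

```python
# For a symmetric bivector f (f[i][j] == f[j][i]), contracting a
# one-form u on either index yields the same vector of components.
n = 4
f = [[(i + 1) * (j + 1) + min(i, j) for j in range(n)] for i in range(n)]
# f[i][j] == f[j][i] by construction, so f is symmetric
u = [1.0, -2.0, 0.5, 3.0]                      # components of a one-form

v = [sum(f[i][j] * u[j] for j in range(n)) for i in range(n)]  # f^{ab} u_b
w = [sum(u[i] * f[i][j] for i in range(n)) for j in range(n)]  # u_a f^{ab}
assert v == w
```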
See also
• Two-point tensor
• Bivector § Tensors and matrices (but note that the stress–energy tensor is symmetric, not skew-symmetric)
• Dyadics
References
1. Penrose, Roger (2004). The road to reality : a complete guide to the laws of the universe. New York: Random House, Inc. pp. 443–444. ISBN 978-0-679-77631-4. Note: This book mentions "bivectors" (but not "two-vectors") in the sense of $\scriptstyle {\binom {2}{0}}$ tensors.
2. Schutz, Bernard (1985). A first course in general relativity. Cambridge, UK: Cambridge University Press. p. 77. ISBN 0-521-27703-5. Note: This book does not appear to mention "two-vectors" or "bivectors", only $\scriptstyle {\binom {2}{0}}$ tensors.
3. Penrose, op. cit., §18.3
Two-body Dirac equations
In quantum field theory, and in the significant subfields of quantum electrodynamics (QED) and quantum chromodynamics (QCD), the two-body Dirac equations (TBDE) of constraint dynamics provide a three-dimensional yet manifestly covariant reformulation of the Bethe–Salpeter equation[1] for two spin-1/2 particles. Such a reformulation is necessary since without it, as shown by Nakanishi,[2] the Bethe–Salpeter equation possesses negative-norm solutions arising from the presence of an essentially relativistic degree of freedom, the relative time. These "ghost" states have spoiled the naive interpretation of the Bethe–Salpeter equation as a quantum mechanical wave equation. The two-body Dirac equations of constraint dynamics rectify this flaw. The forms of these equations can be derived not only from quantum field theory[3][4] but also purely in the context of Dirac's constraint dynamics[5][6] and relativistic mechanics and quantum mechanics.[7][8][9][10] Their structures, unlike the more familiar two-body Dirac equation of Breit,[11][12][13] which is a single equation, are that of two simultaneous quantum relativistic wave equations. A single two-body Dirac equation similar to the Breit equation can be derived from the TBDE.[14] Unlike the Breit equation, it is manifestly covariant and free from the types of singularities that prevent a strictly nonperturbative treatment of the Breit equation.[15] In applications of the TBDE to QED, the two particles interact by way of four-vector potentials derived from the field theoretic electromagnetic interactions between the two particles. In applications to QCD, the two particles interact by way of four-vector potentials and Lorentz invariant scalar interactions, derived in part from the field theoretic chromomagnetic interactions between the quarks and in part by phenomenological considerations. As with the Breit equation, a sixteen-component spinor Ψ is used.
Equations
For QED, each equation has the same structure as the ordinary one-body Dirac equation in the presence of an external electromagnetic field, given by the 4-potential $A_{\mu }$. For QCD, each equation has the same structure as the ordinary one-body Dirac equation in the presence of an external field similar to the electromagnetic field and an additional external field given in terms of a Lorentz invariant scalar $S$. In natural units,[16] those two-body equations have the form:
${\begin{aligned}\left[(\gamma _{1})_{\mu }(p_{1}-{\tilde {A}}_{1})^{\mu }+m_{1}+{\tilde {S}}_{1}\right]\Psi &=0,\\[1ex]\left[(\gamma _{2})_{\mu }(p_{2}-{\tilde {A}}_{2})^{\mu }+m_{2}+{\tilde {S}}_{2}\right]\Psi &=0.\end{aligned}}$
where, in coordinate space, pμ is the 4-momentum, related to the 4-gradient by (the metric used here is $\eta _{\mu \nu }=(-1,1,1,1)$)
$p^{\mu }=-i{\frac {\partial }{\partial x_{\mu }}}$
and γμ are the gamma matrices. The two-body Dirac equations (TBDE) have the property that if one of the masses becomes very large, say $m_{2}\rightarrow \infty $ then the 16-component Dirac equation reduces to the 4-component one-body Dirac equation for particle one in an external potential.
In SI units:
${\begin{aligned}\left[(\gamma _{1})_{\mu }(p_{1}-{\tilde {A}}_{1})^{\mu }+m_{1}c+{\tilde {S}}_{1}\right]\Psi &=0,\\[1ex]\left[(\gamma _{2})_{\mu }(p_{2}-{\tilde {A}}_{2})^{\mu }+m_{2}c+{\tilde {S}}_{2}\right]\Psi &=0.\end{aligned}}$
where c is the speed of light and
$p^{\mu }=-i\hbar {\frac {\partial }{\partial x_{\mu }}}$
Natural units will be used below. A tilde symbol is used over the two sets of potentials to indicate that they may have additional gamma matrix dependencies not present in the one-body Dirac equation. Any coupling constants such as the electron charge are embodied in the vector potentials.
Constraint dynamics and the TBDE
Constraint dynamics applied to the TBDE requires a particular form of mathematical consistency: the two Dirac operators must commute with each other. This is plausible if one views the two equations as two compatible constraints on the wave function. (See the discussion below on constraint dynamics.) If the two operators did not commute, (as, e.g., with the coordinate and momentum operators $x,p$) then the constraints would not be compatible (one could not e.g., have a wave function that satisfied both $x\Psi =0$ and $p\Psi =0$). This mathematical consistency or compatibility leads to three important properties of the TBDE. The first is a condition that eliminates the dependence on the relative time in the center of momentum (c.m.) frame defined by $P=p_{1}+p_{2}=(w,{\vec {0}})$. (The variable $w$ is the total energy in the c.m. frame.) Stated another way, the relative time is eliminated in a covariant way. In particular, for the two operators to commute, the scalar and four-vector potentials can depend on the relative coordinate $x=x_{1}-x_{2}$ only through its component $x_{\perp }$ orthogonal to $P$ in which
$x_{\perp }^{\mu }=(\eta ^{\mu \nu }-P^{\mu }P^{\nu }/P^{2})x_{\nu },\,$
$P_{\mu }x_{\perp }^{\mu }=0.\,$
This implies that in the c.m. frame $x_{\perp }=(0,{\vec {x}}={\vec {x}}_{1}-{\vec {x}}_{2})$, which has zero time component.
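A minimal numeric sketch (our own values, using the metric $\eta _{\mu \nu }=(-1,1,1,1)$ given above) illustrates the projection: applying $\eta ^{\mu \nu }-P^{\mu }P^{\nu }/P^{2}$ to $x_{\nu }$ in the c.m. frame removes the time component of the relative coordinate.

```python
# With eta = diag(-1, 1, 1, 1) and a c.m.-frame total momentum
# P = (w, 0, 0, 0), the projector eta^{mu nu} - P^mu P^nu / P^2
# applied to x_nu kills the time component of x.
eta = [-1.0, 1.0, 1.0, 1.0]                    # diagonal metric entries
w = 3.0
P = [w, 0.0, 0.0, 0.0]
P2 = sum(eta[m] * P[m] * P[m] for m in range(4))     # P^2 = -w^2
x = [0.7, 1.0, 2.0, 3.0]                       # relative coordinate x1 - x2

x_low = [eta[m] * x[m] for m in range(4)]      # lower the index: x_mu
x_perp = [sum(((eta[m] if m == n else 0.0) - P[m] * P[n] / P2) * x_low[n]
              for n in range(4)) for m in range(4)]

assert abs(sum(eta[m] * P[m] * x_perp[m] for m in range(4))) < 1e-12
assert abs(x_perp[0]) < 1e-12                  # zero time component in c.m.
```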
Secondly, the mathematical consistency condition also eliminates the relative energy in the c.m. frame. It does this by imposing on each Dirac operator a structure such that in a particular combination they lead to this interaction independent form, eliminating in a covariant way the relative energy.
$P\cdot p\Psi =(-P^{0}p^{0}+{\vec {P}}\cdot p)\Psi =0.\,$
In this expression $p$ is the relative momentum having the form $(p_{1}-p_{2})/2$ for equal masses. In the c.m. frame ($P^{0}=w,{\vec {P}}={\vec {0}}$), the time component $p^{0}$ of the relative momentum, that is the relative energy, is thus eliminated, in the sense that $p^{0}\Psi =0$.
A third consequence of the mathematical consistency is that each of the world scalar ${\tilde {S}}_{i}$ and four vector ${\tilde {A}}_{i}^{\mu }$ potentials has a term with a fixed dependence on $\gamma _{1}$ and $\gamma _{2}$ in addition to the gamma matrix independent forms of $S_{i}$ and $A_{i}^{\mu }$ which appear in the ordinary one-body Dirac equation for scalar and vector potentials. These extra terms correspond to additional recoil spin-dependence not present in the one-body Dirac equation and vanish when one of the particles becomes very heavy (the so-called static limit).
More on constraint dynamics: generalized mass shell constraints
Constraint dynamics arose from the work of Dirac [6] and Bergmann.[17] This section shows how the elimination of relative time and energy takes place in the c.m. system for the simple system of two relativistic spinless particles. Constraint dynamics was first applied to the classical relativistic two particle system by Todorov,[18][19] Kalb and Van Alstine,[20][21] Komar,[22][23] and Droz–Vincent.[24] With constraint dynamics, these authors found a consistent and covariant approach to relativistic canonical Hamiltonian mechanics that also evades the Currie–Jordan–Sudarshan "No Interaction" theorem.[25][26] That theorem states that without fields, one cannot have a relativistic Hamiltonian dynamics. Thus, the same covariant three-dimensional approach which allows the quantized version of constraint dynamics to remove quantum ghosts simultaneously circumvents at the classical level the C.J.S. theorem. Consider a constraint on the otherwise independent coordinate and momentum four vectors, written in the form $\phi _{i}(p,x)\approx 0$. The symbol $\approx 0$ is called a weak equality and implies that the constraint is to be imposed only after any needed Poisson brackets are performed. In the presence of such constraints, the total Hamiltonian ${\mathcal {H}}$ is obtained from the Lagrangian ${\mathcal {L}}$ by adding to the Legendre Hamiltonian $(p{\dot {x}}-{\mathcal {L}})$ the sum of the constraints times an appropriate set of Lagrange multipliers $(\lambda _{i})$.
${\mathcal {H}}=p{\dot {x}}-{\mathcal {L}}+\lambda _{i}\phi _{i},$
This total Hamiltonian is traditionally called the Dirac Hamiltonian. Constraints arise naturally from parameter invariant actions of the form
$I=\int d\tau {\mathcal {L}}(\tau )=\int d\tau '{\frac {d\tau }{d\tau '}}{\mathcal {L}}(\tau )=\int d\tau '{\mathcal {L}}(\tau ').$
In the case of four vector and Lorentz scalar interactions for a single particle the Lagrangian is
${\mathcal {L}}(\tau )=-(m+S(x)){\sqrt {-{\dot {x}}^{2}}}+{\dot {x}}\cdot A(x)\,$
The canonical momentum is
$p={\frac {\partial {\mathcal {L}}}{\partial {\dot {x}}}}={\frac {(m+S(x)){\dot {x}}}{\sqrt {-{\dot {x}}^{2}}}}+A(x)$
and by squaring leads to the generalized mass shell condition or generalized mass shell constraint
$(p-A)^{2}+(m+S)^{2}=0.\,$
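The squaring step can be checked numerically. Below is a minimal sketch (metric signature (−,+,+,+); the values of $m$, $S$, $A$ and the timelike ${\dot {x}}$ are arbitrary illustrative choices):

```python
import numpy as np

# Check that p = (m+S) xdot / sqrt(-xdot^2) + A satisfies
# (p - A)^2 + (m + S)^2 = 0.  Signature (-,+,+,+); all values illustrative.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
dot = lambda a, b: a @ eta @ b

xdot = np.array([1.0, 0.2, -0.1, 0.3])   # timelike: dot(xdot, xdot) < 0
m, S = 1.0, 0.25                         # mass and scalar potential value at x
A = np.array([0.4, 0.05, -0.02, 0.1])    # vector potential value at x

p = (m + S) * xdot / np.sqrt(-dot(xdot, xdot)) + A
constraint = dot(p - A, p - A) + (m + S)**2
print(constraint)   # ~0 up to rounding
```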
Since, in this case, the Legendre Hamiltonian vanishes
$p\cdot {\dot {x}}-{\mathcal {L}}=0,\,$
the Dirac Hamiltonian is simply the generalized mass constraint (with no interactions it would simply be the ordinary mass shell constraint)
${\mathcal {H}}=\lambda \left[\left(p-A\right)^{2}+(m+S)^{2}\right]\equiv \lambda (p^{2}+m^{2}+\Phi (x,p)).$
One then postulates that for two bodies the Dirac Hamiltonian is the sum of two such mass shell constraints,
${\mathcal {H}}_{i}=p_{i}^{2}+m_{i}^{2}+\Phi _{i}(x_{1},x_{2},p_{1},p_{2})\approx 0,\,$
that is
${\begin{aligned}{\mathcal {H}}&=\lambda _{1}[p_{1}^{2}+m_{1}^{2}+\Phi _{1}(x_{1},x_{2},p_{1},p_{2})]+\lambda _{2}[p_{2}^{2}+m_{2}^{2}+\Phi _{2}(x_{1},x_{2},p_{1},p_{2})]\\[1ex]&=\lambda _{1}{\mathcal {H}}_{1}+\lambda _{2}{\mathcal {H}}_{2},\end{aligned}}$
and that each constraint ${\mathcal {H}}_{i}$ be constant in the proper time associated with ${\mathcal {H}}$
${\dot {\mathcal {H}}}_{i}=\{{\mathcal {H}}_{i},{\mathcal {H}}\}\approx 0\,$
Here the weak equality means that the Poisson bracket could result in terms proportional to one of the constraints, the classical Poisson brackets for the relativistic two-body system being defined by
$\left\{O_{1},O_{2}\right\}={\frac {\partial O_{1}}{\partial x_{1}^{\mu }}}{\frac {\partial O_{2}}{\partial p_{1\mu }}}-{\frac {\partial O_{1}}{\partial p_{1}^{\mu }}}{\frac {\partial O_{2}}{\partial x_{1\mu }}}+{\frac {\partial O_{1}}{\partial x_{2}^{\mu }}}{\frac {\partial O_{2}}{\partial p_{2\mu }}}-{\frac {\partial O_{1}}{\partial p_{2}^{\mu }}}{\frac {\partial O_{2}}{\partial x_{2\mu }}}.$
To see the consequences of having each constraint be a constant of the motion, take, for example
${\dot {\mathcal {H}}}_{1}=\{{\mathcal {H}}_{1},{\mathcal {H}}\}=\lambda _{1}\{{\mathcal {H}}_{1},{\mathcal {H}}_{1}\}+\{{\mathcal {H}}_{1},\lambda _{1}\}{\mathcal {H}}_{1}+\lambda _{2}\{{\mathcal {H}}_{1},{\mathcal {H}}_{2}\}+\{{\mathcal {H}}_{1},\lambda _{2}\}{\mathcal {H}}_{2}.$
Since $\{{\mathcal {H}}_{1},{\mathcal {H}}_{1}\}=0$ and ${\mathcal {H}}_{1}\approx 0$ and ${\mathcal {H}}_{2}\approx 0$ one has
${\dot {\mathcal {H}}}_{1}\approx \lambda _{2}\{{\mathcal {H}}_{1},{\mathcal {H}}_{2}\}\approx 0.$
The simplest solution to this is
$\Phi _{1}=\Phi _{2}\equiv \Phi (x_{\perp })$
which leads to (note the equality in this case is not a weak one in that no constraint need be imposed after the Poisson bracket is worked out)
$\{{\mathcal {H}}_{2},{\mathcal {H}}_{1}\}=0\,$
(see Todorov [19] and Wong and Crater [27]) with the same $x_{\perp }$ defined above.
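This compatibility can be tested numerically with a finite-difference Poisson bracket. In the sketch below (signature (−,+,+,+); masses, potential, and the phase-space point are arbitrary illustrative choices), a potential depending only on $x_{\perp }$ gives a vanishing bracket, while the same quadratic potential built from the full relative coordinate $x$ — which retains relative-time dependence — does not:

```python
import numpy as np

# Finite-difference check that H_i = p_i^2 + m_i^2 + Phi(x_perp) Poisson-commute.
# Signature (-,+,+,+); all numerical values are arbitrary illustrative choices.
eta = np.diag([-1.0, 1.0, 1.0, 1.0])
m1, m2 = 1.0, 1.5

def dot(a, b):
    return a @ eta @ b

def split(z):
    # phase-space point z = (x1, p1, x2, p2), four components each
    return z[0:4], z[4:8], z[8:12], z[12:16]

def x_perp(z):
    x1, p1, x2, p2 = split(z)
    x, P = x1 - x2, p1 + p2
    Phat = P / np.sqrt(-dot(P, P))      # P is timelike at this point
    return x + dot(x, Phat) * Phat      # satisfies x_perp . P = 0

def make_H(i, Phi):
    def H(z):
        _, p1, _, p2 = split(z)
        p, m = (p1, m1) if i == 1 else (p2, m2)
        return dot(p, p) + m**2 + Phi(z)
    return H

def grad(f, z, h=1e-5):
    g = np.zeros(16)
    for k in range(16):
        dz = np.zeros(16); dz[k] = h
        g[k] = (f(z + dz) - f(z - dz)) / (2*h)
    return g

def poisson(f, g, z):
    gf, gg = grad(f, z), grad(g, z)
    s = 0.0
    for x0 in (0, 8):                   # particle-1 and particle-2 blocks
        x, p = slice(x0, x0 + 4), slice(x0 + 4, x0 + 8)
        s += gf[x] @ eta @ gg[p] - gf[p] @ eta @ gg[x]
    return s

z = np.array([0.3, 1.0, -0.5, 0.2,   3.0, 0.4, 0.1, -0.2,    # x1, p1
              -0.1, -0.6, 0.3, 0.8,  3.5, -0.3, 0.2, 0.5])   # x2, p2

Phi_perp = lambda z: 0.5 * dot(x_perp(z), x_perp(z))    # depends on x_perp only
Phi_full = lambda z: 0.5 * dot(split(z)[0] - split(z)[2],
                               split(z)[0] - split(z)[2])

bracket_perp = poisson(make_H(1, Phi_perp), make_H(2, Phi_perp), z)
bracket_full = poisson(make_H(1, Phi_full), make_H(2, Phi_full), z)
print(bracket_perp)   # ~0: the two constraints are compatible
print(bracket_full)   # nonzero: relative time survives in the dynamics
```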
Quantization
In addition to replacing classical dynamical variables by their quantum counterparts, quantization of the constraint mechanics takes place by replacing the constraint on the dynamical variables with a restriction on the wave function
${\mathcal {H}}_{i}\approx 0\rightarrow {\mathcal {H}}_{i}\Psi =0,$
${\mathcal {H}}\approx 0\rightarrow {\mathcal {H}}\Psi =0.$
The first set of equations for i = 1, 2 plays the role for spinless particles that the two Dirac equations play for spin-one-half particles. The classical Poisson brackets are replaced by commutators
$\{O_{1},O_{2}\}\rightarrow {\frac {1}{i}}[O_{1},O_{2}].\,$
Thus
$[{\mathcal {H}}_{2},{\mathcal {H}}_{1}]=0,\,$
and we see in this case that the constraint formalism leads to the vanishing commutator of the wave operators for the two particles. This is the analogue of the claim stated earlier that the two Dirac operators commute with one another.
Covariant elimination of the relative energy
The vanishing of the above commutator ensures that the dynamics is independent of the relative time in the c.m. frame. In order to covariantly eliminate the relative energy, introduce the relative momentum $p$ defined by
$p_{1}={\frac {p_{1}\cdot P}{P^{2}}}P+p\,,$
(1)
$p_{2}={\frac {p_{2}\cdot P}{P^{2}}}P-p\,,$
(2)
The above definition of the relative momentum forces the orthogonality of the total momentum and the relative momentum,
$P\cdot p=0,$
which follows from taking the scalar product of either equation with $P$. From Eqs.(1) and (2), this relative momentum can be written in terms of $p_{1}$ and $p_{2}$ as
$p={\frac {\varepsilon _{2}}{\sqrt {-P^{2}}}}p_{1}-{\frac {\varepsilon _{1}}{\sqrt {-P^{2}}}}p_{2}$
where
$\varepsilon _{1}=-{\frac {p_{1}\cdot P}{\sqrt {-P^{2}}}}=-{\frac {P^{2}+p_{1}^{2}-p_{2}^{2}}{2{\sqrt {-P^{2}}}}}$
$\varepsilon _{2}=-{\frac {p_{2}\cdot P}{\sqrt {-P^{2}}}}=-{\frac {P^{2}+p_{2}^{2}-p_{1}^{2}}{2{\sqrt {-P^{2}}}}}$
are the projections of the momenta $p_{1}$ and $p_{2}$ along the direction of the total momentum $P$. Subtracting the two constraints ${\mathcal {H}}_{1}\Psi =0$ and ${\mathcal {H}}_{2}\Psi =0$, gives
$(p_{1}^{2}-p_{2}^{2})\Psi =-(m_{1}^{2}-m_{2}^{2})\Psi $
(3)
Thus on these states $\Psi $
$\varepsilon _{1}\Psi ={\frac {-P^{2}+m_{1}^{2}-m_{2}^{2}}{2{\sqrt {-P^{2}}}}}\Psi $
$\varepsilon _{2}\Psi ={\frac {-P^{2}+m_{2}^{2}-m_{1}^{2}}{2{\sqrt {-P^{2}}}}}\Psi .$
The equation ${\mathcal {H}}\Psi =0$ describes both the c.m. motion and the internal relative motion. To characterize the former motion, observe that since the potential $\Phi $ depends only on the difference of the two coordinates
$[P,{\mathcal {H}}]\Psi =0.$
(This does not require that $[P,\lambda _{i}]=0$ since the ${\mathcal {H}}_{i}\Psi =0$.) Thus, the total momentum $P$ is a constant of motion and $\Psi $ is an eigenstate characterized by a total momentum $P'$. In the c.m. system $P'=(w,{\vec {0}}),$ with $w$ the invariant center of momentum (c.m.) energy. Thus
$(P^{2}+w^{2})\Psi =0\,,$
(4)
and so $\Psi $ is also an eigenstate of c.m. energy operators for each of the two particles,
$\varepsilon _{1}\Psi ={\frac {w^{2}+m_{1}^{2}-m_{2}^{2}}{2w}}\Psi $
$\varepsilon _{2}\Psi ={\frac {w^{2}+m_{2}^{2}-m_{1}^{2}}{2w}}\Psi .$
The relative momentum then satisfies
$p\Psi ={\frac {\varepsilon _{2}p_{1}-\varepsilon _{1}p_{2}}{w}}\Psi ,$
so that
$p_{1}\Psi =\left({\frac {\varepsilon _{1}}{w}}P+p\right)\Psi ,$
$p_{2}\Psi =\left({\frac {\varepsilon _{2}}{w}}P-p\right)\Psi ,$
The above set of equations follows from the constraints ${\mathcal {H}}_{i}\Psi =0$ and the definition of the relative momenta given in Eqs.(1) and (2). If instead one chooses to define (for a more general choice see Horwitz),[28]
$\varepsilon _{1}={\frac {w^{2}+m_{1}^{2}-m_{2}^{2}}{2w}},$
$\varepsilon _{2}={\frac {w^{2}+m_{2}^{2}-m_{1}^{2}}{2w}},$
$p={\frac {\varepsilon _{2}p_{1}-\varepsilon _{1}p_{2}}{w}},$
independent of the wave function, then
$p_{1}={\frac {\varepsilon _{1}}{w}}P+p,$
(5)
$p_{2}={\frac {\varepsilon _{2}}{w}}P-p,$
(6)
and it is straightforward to show that the constraint Eq.(3) leads directly to:
$P\cdot p\Psi =0,$
(7)
in place of $P\cdot p=0$. This conforms with the earlier claim on the vanishing of the relative energy in the c.m. frame made in conjunction with the TBDE. In the second choice the c.m. value of the relative energy is not defined as zero but comes from the original generalized mass shell constraints. The above equations for the relative and constituent four-momentum are the relativistic analogues of the non-relativistic equations
${\begin{aligned}{\vec {p}}&={\frac {m_{2}{\vec {p}}_{1}-m_{1}{\vec {p}}_{2}}{M}},\\[1ex]{\vec {p}}_{1}&={\frac {m_{1}}{M}}{\vec {P}}+{\vec {p}},\\[1ex]{\vec {p}}_{2}&={\frac {m_{2}}{M}}{\vec {P}}-{\vec {p}}.\end{aligned}}$
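A short numerical sketch (illustrative masses, c.m. energy just above threshold) shows that the relativistic projections $\varepsilon _{1},\varepsilon _{2}$ always sum to $w$ and that the weight $\varepsilon _{2}/w$ appearing in the relative momentum approaches its nonrelativistic counterpart $m_{2}/M$:

```python
m1, m2 = 1.0, 1.5          # illustrative constituent masses
w = 2.6                    # c.m. energy just above threshold m1 + m2 = 2.5

eps1 = (w**2 + m1**2 - m2**2) / (2*w)
eps2 = (w**2 + m2**2 - m1**2) / (2*w)

print(eps1 + eps2 - w)           # 0: the projections sum to the c.m. energy
print(eps2/w, m2/(m1 + m2))      # relativistic vs nonrelativistic weight in p
```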
Covariant eigenvalue equation for internal motion
Using Eqs.(5),(6),(7), one can write ${\mathcal {H}}$ in terms of $P$ and $p$
${\mathcal {H}}\Psi =\{\lambda _{1}[-\varepsilon _{1}^{2}+m_{1}^{2}+p^{2}+\Phi (x_{\perp })]+\lambda _{2}[-\varepsilon _{2}^{2}+m_{2}^{2}+p^{2}+\Phi (x_{\perp })]\}\Psi $
$=(\lambda _{1}+\lambda _{2})[-b^{2}(-P^{2};m_{1}^{2},m_{2}^{2})+p^{2}+\Phi (x_{\perp })]\Psi =0\,,$
(8)
where
$b^{2}(-P^{2},m_{1}^{2},m_{2}^{2})=\varepsilon _{1}^{2}-m_{1}^{2}=\varepsilon _{2}^{2}-m_{2}^{2}\ =-{\frac {1}{4P^{2}}}(P^{4}+2P^{2}(m_{1}^{2}+m_{2}^{2})+(m_{1}^{2}-m_{2}^{2})^{2})\,.$
Eq.(8) contains both the total momentum $P$ [through the $b^{2}(-P^{2},m_{1}^{2},m_{2}^{2})$] and the relative momentum $p$. Using Eq. (4), one obtains the eigenvalue equation
$(\lambda _{1}+\lambda _{2})\left\{p^{2}+\Phi (x_{\perp })-b^{2}(w^{2},m_{1}^{2},m_{2}^{2})\right\}\Psi =0\,,$
(9)
so that $b^{2}(w^{2},m_{1}^{2},m_{2}^{2})$ becomes the standard triangle function displaying exact relativistic two-body kinematics:
$b^{2}(w^{2},m_{1}^{2},m_{2}^{2})={\frac {1}{4w^{2}}}\left\{w^{4}-2w^{2}(m_{1}^{2}+m_{2}^{2})+(m_{1}^{2}-m_{2}^{2})^{2}\right\}\,.$
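That the triangle function equals $\varepsilon _{1}^{2}-m_{1}^{2}=\varepsilon _{2}^{2}-m_{2}^{2}$ can be confirmed with a few lines of arithmetic (illustrative values):

```python
m1, m2, w = 1.0, 1.5, 3.2   # illustrative masses and c.m. energy

eps1 = (w**2 + m1**2 - m2**2) / (2*w)
eps2 = (w**2 + m2**2 - m1**2) / (2*w)
b2 = (w**4 - 2*w**2*(m1**2 + m2**2) + (m1**2 - m2**2)**2) / (4*w**2)

print(b2 - (eps1**2 - m1**2))   # 0
print(b2 - (eps2**2 - m2**2))   # 0
```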
Given the above constraint Eq.(7) on $\Psi $, one has $p^{2}\Psi =p_{\perp }^{2}\Psi $, where $p_{\perp }^{\mu }=p^{\mu }-{\frac {p\cdot P}{P^{2}}}P^{\mu }$. This allows writing Eq. (9) in the form of an eigenvalue equation
$\{p_{\perp }^{2}+\Phi (x_{\perp })\}\Psi =b^{2}(w^{2},m_{1}^{2},m_{2}^{2})\Psi \,,$
having a structure very similar to that of the ordinary three-dimensional nonrelativistic Schrödinger equation. It is a manifestly covariant equation, but at the same time its three-dimensional structure is evident. The four-vectors $p_{\perp }^{\mu }$ and $x_{\perp }^{\mu }$ have only three independent components since
$P\cdot p_{\perp }=P\cdot x_{\perp }=0\,.$
The similarity to the three-dimensional structure of the nonrelativistic Schrödinger equation can be made more explicit by writing the equation in the c.m. frame in which
$P=(w,{\vec {0}}),$
$p_{\perp }=(0,{\vec {p}}),$
$x_{\perp }=(0,{\vec {x}}).$
Comparison of the resultant form
$\{{\vec {p}}^{2}+\Phi ({\vec {x}})\}\Psi =b^{2}(w^{2},m_{1}^{2},m_{2}^{2})\Psi \,,$
(10)
with the time independent Schrödinger equation
$\left({\vec {p}}^{2}+2\mu V({\vec {x}})\right)\Psi =2\mu E\Psi \,,$
(11)
makes this similarity explicit.
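The correspondence between $b^{2}$ and $2\mu E$ can be made quantitative: writing $w=m_{1}+m_{2}+E$, the ratio $b^{2}/2\mu E$ tends to one as $E\to 0$, with $\mu =m_{1}m_{2}/(m_{1}+m_{2})$ the nonrelativistic reduced mass. A sketch with illustrative masses:

```python
m1, m2 = 1.0, 1.5                  # illustrative masses
mu = m1*m2 / (m1 + m2)             # nonrelativistic reduced mass

def b2(w):
    return (w**4 - 2*w**2*(m1**2 + m2**2) + (m1**2 - m2**2)**2) / (4*w**2)

for E in (1e-2, 1e-4, 1e-6):       # shrinking nonrelativistic energy scale
    ratio = b2(m1 + m2 + E) / (2*mu*E)
    print(ratio)                   # ratio -> 1 as E -> 0
```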
The two-body relativistic Klein–Gordon equations
A plausible structure for the quasipotential $\Phi $ can be found by observing that the one-body Klein–Gordon equation $(p^{2}+m^{2})\psi =({\vec {p}}^{2}-\varepsilon ^{2}+m^{2})\psi =0$ takes the form $({\vec {p}}^{2}-\varepsilon ^{2}+m^{2}+2mS+S^{2}+2\varepsilon A-A^{2})\psi =0~$ when one introduces a scalar interaction and timelike vector interaction via $m\rightarrow m+S~$and $\varepsilon \rightarrow \varepsilon -A$. In the two-body case, separate classical [29][30] and quantum field theory [4] arguments show that when one includes world scalar and vector interactions then $\Phi $ depends on two underlying invariant functions $S(r)$ and $A(r)$ through the two-body Klein–Gordon-like potential form with the same general structure, that is
$\Phi =2m_{w}S+S^{2}+2\varepsilon _{w}A-A^{2}.$
Those field theories further yield the c.m. energy dependent forms
$m_{w}=m_{1}m_{2}/w,$
and
$\varepsilon _{w}=(w^{2}-m_{1}^{2}-m_{2}^{2})/2w,$
ones that Todorov introduced as the relativistic reduced mass and effective particle energy for a two-body system. As in the nonrelativistic two-body problem, in the relativistic case the motion of this effective particle takes place as if it were in an external field (here generated by $S$ and $A$). The two kinematical variables $m_{w}$ and $\varepsilon _{w}$ are related to one another by the Einstein condition
$\varepsilon _{w}^{2}-m_{w}^{2}=b^{2}(w).$
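The Einstein condition is an algebraic identity in $w$, $m_{1}$, $m_{2}$, as a one-line numerical check illustrates (illustrative values):

```python
m1, m2, w = 1.0, 1.5, 3.2   # illustrative masses and c.m. energy

m_w   = m1*m2 / w                          # relativistic reduced mass
eps_w = (w**2 - m1**2 - m2**2) / (2*w)     # effective particle energy
b2 = (w**4 - 2*w**2*(m1**2 + m2**2) + (m1**2 - m2**2)**2) / (4*w**2)

print(eps_w**2 - m_w**2 - b2)   # 0: the Einstein condition holds identically
```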
If one introduces the four-vectors, including a vector interaction $A^{\mu }$
${\mathfrak {p}}=\varepsilon _{w}{\hat {P}}+p,$
$A^{\mu }={\hat {P}}^{\mu }A(r)$
$r={\sqrt {x_{\perp }^{2}}}\,,$
and scalar interaction $S(r)$, then the following classical minimal constraint form
${\mathcal {H}}=\left({\mathfrak {p-}}A\right)^{2}+(m_{w}+S)^{2}\approx 0\,,$
reproduces
${\mathcal {H}}=p_{\perp }^{2}+\Phi -b^{2}\approx 0\,.$
(12)
Notice that the interaction in this "reduced particle" constraint depends on two invariant scalars, $A(r)$ and $S(r)$, one guiding the time-like vector interaction and one the scalar interaction.
Is there a set of two-body Klein–Gordon equations analogous to the two-body Dirac equations? The classical relativistic constraints analogous to the quantum two-body Dirac equations (discussed in the introduction) and that have the same structure as the above Klein–Gordon one-body form are
${\mathcal {H}}_{1}=(p_{1}-A_{1})^{2}+(m_{1}+S_{1})^{2}=p_{1}^{2}+m_{1}^{2}+\Phi _{1}\approx 0$
${\mathcal {H}}_{2}=(p_{2}-A_{2})^{2}+(m_{2}+S_{2})^{2}=p_{2}^{2}+m_{2}^{2}+\Phi _{2}\approx 0,$
$p_{1}=\varepsilon _{1}{\hat {P}}+p;~~p_{2}=\varepsilon _{2}{\hat {P}}-p~.$
Defining structures that display time-like vector and scalar interactions
$\pi _{1}=p_{1}-A_{1}=[{\hat {P}}(\varepsilon _{1}-{\mathcal {A}}_{1})+p],$
$\pi _{2}=p_{2}-A_{2}=[{\hat {P}}(\varepsilon _{2}-{\mathcal {A}}_{2})-p],$
$M_{1}=m_{1}+S_{1},$
$M_{2}=m_{2}+S_{2},$
gives
${\mathcal {H}}_{1}=\pi _{1}^{2}+M_{1}^{2},$
${\mathcal {H}}_{2}=\pi _{2}^{2}+M_{2}^{2}.$
Imposing
${\begin{aligned}\Phi _{1}&=\Phi _{2}\equiv \Phi (x_{\perp })\\&=-2p_{1}\cdot A_{1}+A_{1}^{2}+2m_{1}S_{1}+S_{1}^{2}\\&=-2p_{2}\cdot A_{2}+A_{2}^{2}+2m_{2}S_{2}+S_{2}^{2}\\&=2\varepsilon _{w}A-A^{2}+2m_{w}S+S^{2},\end{aligned}}$
and using the constraint $P\cdot p\approx 0$, reproduces Eq.(12) provided
$\pi _{1}^{2}-p^{2}=-\left(\varepsilon _{1}-{\mathcal {A}}_{1}\right)^{2}=-\varepsilon _{1}^{2}+2\varepsilon _{w}A-A^{2},$
$\pi _{2}^{2}-p^{2}=-\left(\varepsilon _{2}-{\mathcal {A}}_{2}\right)^{2}=-\varepsilon _{2}^{2}+2\varepsilon _{w}A-A^{2},$
$M_{1}{}^{2}=m_{1}^{2}+2m_{w}S+S^{2},$
$M_{2}^{2}=m_{2}^{2}+2m_{w}S+S^{2}.$
The corresponding Klein–Gordon equations are
$\left(\pi _{1}^{2}+M_{1}^{2}\right)\psi =0,$
$\left(\pi _{2}^{2}+M_{2}^{2}\right)\psi =0,$
and each, due to the constraint $P\cdot p\approx 0,$ is equivalent to
${\mathcal {H}}\psi =\left(p_{\perp }^{2}+\Phi -b^{2}\right)\psi =0.$
Hyperbolic versus external field form of the two-body Dirac equations
For the two-body system there are numerous covariant forms of interaction. The simplest way of looking at these is from the point of view of the gamma matrix structures of the corresponding interaction vertices of the single particle exchange diagrams. For scalar, pseudoscalar, vector, pseudovector, and tensor exchanges those matrix structures are respectively
$1_{1}1_{2};\gamma _{51}\gamma _{52};\gamma _{1}^{\mu }\gamma _{2\mu };\gamma _{51}\gamma _{1}^{\mu }\gamma _{52}\gamma _{2\mu };\sigma _{1\mu \nu }\sigma _{2}^{\mu \nu },$
in which
$\sigma _{i\mu \nu }={\frac {1}{2i}}[\gamma _{i\mu },\gamma _{i\nu }];i=1,2.$
The form of the Two-Body Dirac equations which most readily incorporates each or any number of these interactions in concert is the so-called hyperbolic form of the TBDE.[31] For combined scalar and vector interactions those forms ultimately reduce to the ones given in the first set of equations of this article. Those equations are called the external field-like forms because their appearances are individually the same as those for the usual one-body Dirac equation in the presence of external vector and scalar fields.
The most general hyperbolic form for compatible TBDE is
${\mathcal {S}}_{1}\psi =(\cosh(\Delta )\mathbf {S} _{1}+\sinh(\Delta )\mathbf {S} _{2})\psi =0,$
${\mathcal {S}}_{2}\psi =(\cosh(\Delta )\mathbf {S} _{2}+\sinh(\Delta )\mathbf {S} _{1})\psi =0,$
(13)
where $\Delta $ represents any invariant interaction singly or in combination. It has a matrix structure in addition to coordinate dependence. Depending on what that matrix structure is one has either scalar, pseudoscalar, vector, pseudovector, or tensor interactions. The operators $\mathbf {S} _{1}$ and $\mathbf {S} _{2}$ are auxiliary constraints satisfying
$\mathbf {S} _{1}\psi \equiv ({\mathcal {S}}_{10}\cosh(\Delta )+{\mathcal {S}}_{20}\sinh(\Delta )~)\psi =0,$
$\mathbf {S} _{2}\psi \equiv ({\mathcal {S}}_{20}\cosh(\Delta )+{\mathcal {S}}_{10}\sinh(\Delta )~)\psi =0,$
(14)
in which the ${\mathcal {S}}_{i0}$ are the free Dirac operators
${\mathcal {S}}_{i0}={\frac {i}{\sqrt {2}}}\gamma _{5i}(\gamma _{i}\cdot p_{i}+m_{i}),$
(15)
This, in turn, leads to the two compatibility conditions
$\lbrack {\mathcal {S}}_{1},{\mathcal {S}}_{2}]\psi =0,$
and
$\lbrack \mathbf {S} _{1},\mathbf {S} _{2}]\psi =0,$
provided that $\Delta =\Delta (x_{\perp }).$ These compatibility conditions do not restrict the gamma matrix structure of $\Delta $. That matrix structure is determined by the type of vertex-vertex structure incorporated in the interaction. For the two types of invariant interactions $\Delta $ emphasized in this article they are
$\Delta _{\mathcal {L}}(x_{\perp })=-1_{1}1_{2}{\frac {{\mathcal {L}}(x_{\perp })}{2}}{\mathcal {O}}_{1},{\text{scalar}},$
$\Delta _{\mathcal {G}}(x_{\perp })=\gamma _{1}\cdot \gamma _{2}{\frac {{\mathcal {G}}(x_{\perp })}{2}}{\mathcal {O}}_{1},{\text{vector}},$
${\mathcal {O}}_{1}=-\gamma _{51}\gamma _{52}.$
For general independent scalar and vector interactions
$\Delta (x_{\perp })=\Delta _{\mathcal {L}}+\Delta _{\mathcal {G}}.$
The vector interaction specified by the above matrix structure for an electromagnetic-like interaction would correspond to the Feynman gauge.
If one inserts Eq.(14) into (13), brings the free Dirac operator (15) to the right of the matrix hyperbolic functions, and uses standard gamma matrix commutators and anticommutators together with $\cosh ^{2}\Delta -\sinh ^{2}\Delta =1$, one arrives at $\left(\partial _{\mu }=\partial /\partial x^{\mu }\right),$
${\big (}G\gamma _{1}\cdot {\mathcal {P}}_{2}-E_{1}\beta _{1}+M_{1}-G{\frac {i}{2}}\Sigma _{2}\cdot \partial ({\mathcal {L}}\beta _{2}-{\mathcal {G}}\beta _{1})\gamma _{52}{\big )}\psi =0,$
${\big (}-G\gamma _{2}\cdot {\mathcal {P}}_{1}-E_{2}\beta _{2}+M_{2}+G{\frac {i}{2}}\Sigma _{1}\cdot \partial ({\mathcal {L}}\beta _{1}-{\mathcal {G}}\beta _{2})\gamma _{51}{\big )}\psi =0,$
(16)
in which
$G=\exp {\mathcal {G}},$
$\beta _{i}=-\gamma _{i}\cdot {\hat {P}},$
$\gamma _{i\perp }^{\mu }=(\eta ^{\mu \nu }+{\hat {P}}^{\mu }{\hat {P}}^{\nu })\gamma _{\nu i},$
$\Sigma _{i}=\gamma _{5i}\beta _{i}\gamma _{\perp i},$
${\mathcal {P}}_{i}\equiv p_{\perp }-{\frac {i}{2}}\Sigma _{i}\cdot \partial {\mathcal {G}}\Sigma _{i}\,,\quad i=1,2.$
The (covariant) structure of these equations is analogous to that of a Dirac equation for each of the two particles, with $M_{i}$ and $E_{i}$ playing the roles that $m+S$ and $\varepsilon -A$ do in the single particle Dirac equation
$(\mathbf {\gamma } \cdot \mathbf {p-} \beta (\varepsilon -A)+m+S)\psi =0.$
Over and above the usual kinetic part $\gamma _{1}\cdot p_{\perp }$ and time-like vector and scalar potential portions, the spin-dependent modifications involving $\Sigma _{i}\cdot \partial {\mathcal {G}}\Sigma _{i}$ and the last set of derivative terms are two-body recoil effects absent for the one-body Dirac equation but essential for the compatibility (consistency) of the two-body equations. The connections between what are designated as the vertex invariants ${\mathcal {L}},{\mathcal {G}}$ and the mass and energy potentials $M_{i},E_{i}$ are
$M_{1}=m_{1}\cosh {\mathcal {L}}+m_{2}\sinh {\mathcal {L}},$
$M_{2}=m_{2}\cosh {\mathcal {L}}+m_{1}\sinh {\mathcal {L}},$
$E_{1}=\varepsilon _{1}\cosh {\mathcal {G}}-\varepsilon _{2}\sinh {\mathcal {G}},$
$E_{2}=\varepsilon _{2}\cosh {\mathcal {G}}-\varepsilon _{1}\sinh {\mathcal {G}}.$
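For any values of the invariants ${\mathcal {L}}$ and ${\mathcal {G}}$, these hyperbolic combinations preserve the differences $M_{1}^{2}-M_{2}^{2}=m_{1}^{2}-m_{2}^{2}$ and $E_{1}^{2}-E_{2}^{2}=\varepsilon _{1}^{2}-\varepsilon _{2}^{2}$ (a consequence of $\cosh ^{2}-\sinh ^{2}=1$), which is what keeps the two constraints mutually compatible. A numerical sketch with arbitrary illustrative values:

```python
import math

m1, m2, w = 1.0, 1.5, 3.2   # illustrative masses and c.m. energy
eps1 = (w**2 + m1**2 - m2**2) / (2*w)
eps2 = (w**2 + m2**2 - m1**2) / (2*w)
L, G = 0.3, -0.2            # arbitrary values of the vertex invariants

M1 = m1*math.cosh(L) + m2*math.sinh(L)
M2 = m2*math.cosh(L) + m1*math.sinh(L)
E1 = eps1*math.cosh(G) - eps2*math.sinh(G)
E2 = eps2*math.cosh(G) - eps1*math.sinh(G)

print(M1**2 - M2**2 - (m1**2 - m2**2))       # 0
print(E1**2 - E2**2 - (eps1**2 - eps2**2))   # 0
```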
Comparing Eq.(16) with the first equation of this article one finds that the spin-dependent vector interactions are
${\tilde {A}}_{1}^{\mu }=(\varepsilon _{1}-E_{1}){\hat {P}}^{\mu }+(1-G)p_{\perp }^{\mu }-{\frac {i}{2}}\partial G\cdot \gamma _{2}\gamma _{2}^{\mu },$
${\tilde {A}}_{2}^{\mu }=(\varepsilon _{2}-E_{2}){\hat {P}}^{\mu }-(1-G)p_{\perp }^{\mu }+{\frac {i}{2}}\partial G\cdot \gamma _{1}\gamma _{1}^{\mu },$
Note that the first portion of the vector potentials is timelike (parallel to ${\hat {P}}^{\mu })$ while the next portion is spacelike (perpendicular to ${\hat {P}}^{\mu })$. The spin-dependent scalar potentials ${\tilde {S}}_{i}$ are
${\tilde {S}}_{1}=M_{1}-m_{1}-{\frac {i}{2}}G\gamma _{2}\cdot \partial {\mathcal {L}},$
${\tilde {S}}_{2}=M_{2}-m_{2}+{\frac {i}{2}}G\gamma _{1}\cdot \partial {\mathcal {L}}.$
The parametrization for ${\mathcal {L}}$ and ${\mathcal {G}}$ takes advantage of the Todorov effective external potential forms (as seen in the above section on the two-body Klein–Gordon equations) and at the same time displays the correct static limit form for the Pauli reduction to Schrödinger-like form. The choice for these parametrizations (as with the two-body Klein–Gordon equations) is closely tied to classical or quantum field theories for separate scalar and vector interactions. This amounts to working in the Feynman gauge with the simplest relation between space- and timelike parts of the vector interaction. The mass and energy potentials are respectively
$M_{i}^{2}=m_{i}^{2}+\exp(2{\mathcal {G}})(2m_{w}S+S^{2}),$
$E_{i}^{2}=\exp(2{\mathcal {G}}(A))\left(\varepsilon _{i}-A\right)^{2},$
so that
$\exp {\mathcal {L}}=\exp({\mathcal {L}}(S,A))={\frac {M_{1}+M_{2}}{m_{1}+m_{2}}},$
$G=\exp {\mathcal {G}}=\exp({\mathcal {G}}(A))={\sqrt {\frac {1}{(1-2A/w)}}}.$
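With these parametrizations, $E_{i}^{2}-M_{i}^{2}$ comes out the same for $i=1,2$ (using $\varepsilon _{1}+\varepsilon _{2}=w$), so both particles see a common quasipotential; a numerical sketch with illustrative values of $A$ and $S$:

```python
m1, m2, w = 1.0, 1.5, 3.2   # illustrative masses and c.m. energy
eps1 = (w**2 + m1**2 - m2**2) / (2*w)
eps2 = (w**2 + m2**2 - m1**2) / (2*w)
A, S = 0.12, 0.3            # illustrative values of the invariants at some r

exp2G = 1.0 / (1.0 - 2.0*A/w)     # exp(2 G(A))
E1sq = exp2G * (eps1 - A)**2
E2sq = exp2G * (eps2 - A)**2
Msq_shift = exp2G * (2*(m1*m2/w)*S + S**2)
M1sq = m1**2 + Msq_shift
M2sq = m2**2 + Msq_shift

print((E1sq - M1sq) - (E2sq - M2sq))   # 0: same quasipotential for both particles
```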
Applications and limitations
The TBDE can be readily applied to two-body systems such as positronium, muonium, hydrogen-like atoms, quarkonium, and the two-nucleon system.[32][33][34] These applications involve two particles only and do not involve creation or annihilation of particles beyond the two; they involve only elastic processes. Because of the connection between the potentials used in the TBDE and the corresponding quantum field theory, any radiative correction to the lowest order interaction can be incorporated into those potentials. To see how this comes about, consider by contrast how one computes scattering amplitudes without quantum field theory. Without quantum field theory one must arrive at potentials by classical arguments or phenomenological considerations. Once one has the potential $V$ between two particles, one can compute the scattering amplitude $T$ from the Lippmann–Schwinger equation [35]
$T+V+VGT=0,$
in which $G$ is a Green function determined from the Schrödinger equation. Because of the similarity between the Schrödinger equation (11) and the relativistic constraint equation (10), one can derive the same type of equation as the above
${\mathcal {T}}+\Phi +\Phi {\mathcal {G}}{\mathcal {T}}=0,$
called the quasipotential equation, with a ${\mathcal {G}}$ very similar to that of the Lippmann–Schwinger equation. The difference is that with the quasipotential equation, one starts with the scattering amplitudes ${\mathcal {T}}$ of quantum field theory, as determined from Feynman diagrams, and deduces the quasipotential Φ perturbatively. One can then use that Φ in (10) to compute energy levels of two-particle systems that are implied by the field theory. Constraint dynamics provides one of many (in fact an infinite number of) different types of quasipotential equations (three-dimensional truncations of the Bethe–Salpeter equation) differing from one another by the choice of ${\mathcal {G}}$.[36] The relatively simple solution to the problem of relative time and energy from the generalized mass shell constraint for two particles has no simple extension, such as presented here with the $x_{\perp }$ variable, to either two particles in an external field [37] or to three or more particles. Sazdjian has presented a recipe for this extension when the particles are confined and cannot split into clusters of a smaller number of particles with no inter-cluster interactions.[38] Lusanna has developed an approach, one that does not involve generalized mass shell constraints and is free of such restrictions, which extends to N bodies with or without fields. It is formulated on spacelike hypersurfaces and, when restricted to the family of hyperplanes orthogonal to the total timelike momentum, gives rise to a covariant intrinsic one-time formulation (with no relative time variables) called the "rest-frame instant form" of dynamics.[39][40]
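The structure of the quasipotential equation can be illustrated by a toy one-channel model in which ${\mathcal {T}}$, $\Phi $, and ${\mathcal {G}}$ are replaced by plain numbers; iterating $T=-\Phi -\Phi GT$ generates the Born-like series, which converges to the closed form $-\Phi /(1+\Phi G)$ for $|\Phi G|<1$ (values illustrative):

```python
Phi, G = 0.4, 0.5    # illustrative scalar stand-ins for the operators

T = 0.0
for _ in range(60):              # Born-like iteration of T + Phi + Phi*G*T = 0
    T = -Phi - Phi*G*T

closed_form = -Phi / (1 + Phi*G)
print(T - closed_form)           # ~0: the series has converged
```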
See also
• Breit equation
• 4-vector
• Dirac equation
• Dirac equation in the algebra of physical space
• Dirac operator
• Electromagnetism
• Kinetic momentum
• Many body problem
• Invariant mass
• Particle physics
• Positronium
• Ricci calculus
• Special relativity
• Spin
• Quantum entanglement
• Relativistic quantum mechanics
References
1. Bethe, Hans A.; Edwin E. Salpeter (2008). Quantum mechanics of one- and two-electron atoms (Dover ed.). Mineola, N.Y.: Dover Publications. ISBN 978-0486466675.
2. Nakanishi, Noboru (1969). "A General Survey of the Theory of the Bethe-Salpeter Equation". Progress of Theoretical Physics Supplement. Oxford University Press (OUP). 43: 1–81. Bibcode:1969PThPS..43....1N. doi:10.1143/ptps.43.1. ISSN 0375-9687.
3. Sazdjian, H. (1985). "The quantum mechanical transform of the Bethe-Salpeter equation". Physics Letters B. Elsevier BV. 156 (5–6): 381–384. Bibcode:1985PhLB..156..381S. doi:10.1016/0370-2693(85)91630-2. ISSN 0370-2693.
4. Jallouli, H; Sazdjian, H (1997). "The Relativistic Two-Body Potentials of Constraint Theory from Summation of Feynman Diagrams". Annals of Physics. 253 (2): 376–426. arXiv:hep-ph/9602241. Bibcode:1997AnPhy.253..376J. doi:10.1006/aphy.1996.5632. ISSN 0003-4916. S2CID 614024.
5. P.A.M. Dirac, Can. J. Math. 2, 129 (1950)
6. P.A.M. Dirac, Lectures on Quantum Mechanics (Yeshiva University, New York, 1964)
7. P. Van Alstine and H.W. Crater, Journal of Mathematical Physics 23, 1697 (1982).
8. Crater, Horace W; Van Alstine, Peter (1983). "Two-body Dirac equations". Annals of Physics. 148 (1): 57–94. Bibcode:1983AnPhy.148...57C. doi:10.1016/0003-4916(83)90330-5.
9. Sazdjian, H. (1986). "Relativistic wave equations for the dynamics of two interacting particles". Physical Review D. 33 (11): 3401–3424. Bibcode:1986PhRvD..33.3401S. doi:10.1103/PhysRevD.33.3401. PMID 9956560.
10. Sazdjian, H. (1986). "Relativistic quarkonium dynamics". Physical Review D. 33 (11): 3425–3434. Bibcode:1986PhRvD..33.3425S. doi:10.1103/PhysRevD.33.3425. PMID 9956561.
11. Breit, G. (1929-08-15). "The Effect of Retardation on the Interaction of Two Electrons". Physical Review. American Physical Society (APS). 34 (4): 553–573. Bibcode:1929PhRv...34..553B. doi:10.1103/physrev.34.553. ISSN 0031-899X.
12. Breit, G. (1930-08-01). "The Fine Structure of HE as a Test of the Spin Interactions of Two Electrons". Physical Review. American Physical Society (APS). 36 (3): 383–397. Bibcode:1930PhRv...36..383B. doi:10.1103/physrev.36.383. ISSN 0031-899X.
13. Breit, G. (1932-02-15). "Dirac's Equation and the Spin-Spin Interactions of Two Electrons". Physical Review. American Physical Society (APS). 39 (4): 616–624. Bibcode:1932PhRv...39..616B. doi:10.1103/physrev.39.616. ISSN 0031-899X.
14. Van Alstine, Peter; Crater, Horace W. (1997). "A tale of three equations: Breit, Eddington—Gaunt, and Two-Body Dirac". Foundations of Physics. 27 (1): 67–79. arXiv:hep-ph/9708482. Bibcode:1997FoPh...27...67A. doi:10.1007/bf02550156. ISSN 0015-9018. S2CID 119326477.
15. Crater, Horace W.; Wong, Chun Wa; Wong, Cheuk-Yin (1996). "Singularity-Free Breit Equation from Constraint Two-Body Dirac Equations". International Journal of Modern Physics E. 05 (4): 589–615. arXiv:hep-ph/9603402. Bibcode:1996IJMPE...5..589C. doi:10.1142/s0218301396000323. ISSN 0218-3013. S2CID 18416997.
16. Crater, Horace W.; Peter Van Alstine (1999). "Two-Body Dirac Equations for Relativistic Bound States of Quantum Field Theory". arXiv:hep-ph/9912386.
17. Bergmann, Peter G. (1949-02-15). "Non-Linear Field Theories". Physical Review. American Physical Society (APS). 75 (4): 680–685. Bibcode:1949PhRv...75..680B. doi:10.1103/physrev.75.680. ISSN 0031-899X.
18. I. T. Todorov, " Dynamics of Relativistic Point Particles as a Problem with Constraints", Dubna Joint Institute for Nuclear Research No. E2-10175, 1976
19. I. T. Todorov, Annals of the Institute of H. Poincaré' {A28},207 (1978)
20. M. Kalb and P. Van Alstine, Yale Reports, C00-3075-146 (1976),C00-3075-156 (1976),
21. P. Van Alstine, Ph.D. Dissertation Yale University, (1976)
22. Komar, Arthur (1978-09-15). "Constraint formalism of classical mechanics". Physical Review D. American Physical Society (APS). 18 (6): 1881–1886. Bibcode:1978PhRvD..18.1881K. doi:10.1103/physrevd.18.1881. ISSN 0556-2821.
23. Komar, Arthur (1978-09-15). "Interacting relativistic particles". Physical Review D. American Physical Society (APS). 18 (6): 1887–1893. Bibcode:1978PhRvD..18.1887K. doi:10.1103/physrevd.18.1887. ISSN 0556-2821.
24. Droz-Vincent, Philippe (1975). "Hamiltonian systems of relativistic particles". Reports on Mathematical Physics. Elsevier BV. 8 (1): 79–101. Bibcode:1975RpMP....8...79D. doi:10.1016/0034-4877(75)90020-8. ISSN 0034-4877.
25. Currie, D. G.; Jordan, T. F.; Sudarshan, E. C. G. (1963-04-01). "Relativistic Invariance and Hamiltonian Theories of Interacting Particles". Reviews of Modern Physics. American Physical Society (APS). 35 (2): 350–375. Bibcode:1963RvMP...35..350C. doi:10.1103/revmodphys.35.350. ISSN 0034-6861.
26. Currie, D. G.; Jordan, T. F.; Sudarshan, E. C. G. (1963-10-01). "Erratum: Relativistic Invariance and Hamiltonian Theories of Interacting Particles". Reviews of Modern Physics. American Physical Society (APS). 35 (4): 1032. Bibcode:1963RvMP...35.1032C. doi:10.1103/revmodphys.35.1032.2. ISSN 0034-6861.
Split interval
In topology, the split interval, or double arrow space, is a topological space that results from splitting each point in a closed interval into two adjacent points and giving the resulting ordered set the order topology. It satisfies various interesting properties and serves as a useful counterexample in general topology.
Definition
The split interval can be defined as the lexicographic product $[0,1]\times \{0,1\}$ equipped with the order topology.[1] Equivalently, the space can be constructed by taking the closed interval $[0,1]$ with its usual order, splitting each point $a$ into two adjacent points $a^{-}<a^{+}$, and giving the resulting linearly ordered set the order topology.[2] The space is also known as the double arrow space,[3][4] Alexandrov double arrow space or two arrows space.
The space above is a linearly ordered topological space with two isolated points, $(0,0)$ and $(1,1)$ in the lexicographic product. Some authors[5][6] take as definition the same space without the two isolated points. (In the point splitting description this corresponds to not splitting the endpoints $0$ and $1$ of the interval.) The resulting space has essentially the same properties.
The double arrow space is a subspace of the lexicographically ordered unit square. If we ignore the isolated points, a base for the double arrow space topology consists of all sets of the form $((a,b]\times \{0\})\cup ([a,b)\times \{1\})$ with $a<b$. (In the point splitting description these are the clopen intervals of the form $[a^{+},b^{-}]=(a^{-},b^{+})$, which are simultaneously closed intervals and open intervals.) The lower subspace $(0,1]\times \{0\}$ is homeomorphic to the Sorgenfrey line with half-open intervals to the left as a base for the topology, and the upper subspace $[0,1)\times \{1\}$ is homeomorphic to the Sorgenfrey line with half-open intervals to the right as a base, like two parallel arrows going in opposite directions, hence the name.
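The equivalence of the two descriptions of the basic open sets can be checked mechanically. The following Python sketch (an illustration only, not drawn from the literature on the space) encodes the lexicographic order on $[0,1]\times \{0,1\}$ and verifies on a rational grid that $((a,b]\times \{0\})\cup ([a,b)\times \{1\})$ coincides with the open order-interval from $(a,0)$ to $(b,1)$:

```python
from fractions import Fraction

# Points of the split interval are pairs (x, i) with x in [0,1], i in {0,1},
# compared lexicographically: (x, i) < (y, j) iff x < y, or x == y and i < j.
def lex_less(p, q):
    return p[0] < q[0] or (p[0] == q[0] and p[1] < q[1])

# First description of a basic open set, for fixed a < b:
# ((a, b] x {0}) union ([a, b) x {1})
def in_base_set(p, a, b):
    x, i = p
    return (i == 0 and a < x <= b) or (i == 1 and a <= x < b)

# Second description: the open order-interval { p : (a,0) < p < (b,1) }
def in_open_interval(p, a, b):
    return lex_less((a, 0), p) and lex_less(p, (b, 1))

# The two descriptions agree on a rational sample grid
a, b = Fraction(1, 4), Fraction(3, 4)
grid = [(Fraction(k, 20), i) for k in range(21) for i in (0, 1)]
assert all(in_base_set(p, a, b) == in_open_interval(p, a, b) for p in grid)
```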
Properties
The split interval $X$ is a zero-dimensional compact Hausdorff space. It is a linearly ordered topological space that is separable but not second countable, hence not metrizable; its metrizable subspaces are all countable.
It is hereditarily Lindelöf, hereditarily separable, and perfectly normal (T6). But the product $X\times X$ of the space with itself is not even hereditarily normal (T5), as it contains a copy of the Sorgenfrey plane, which is not normal.
All compact, separable ordered spaces are order-isomorphic to a subset of the split interval.[7]
See also
• List of topologies – List of concrete topologies and topological spaces
Notes
1. Todorcevic, Stevo (6 July 1999), "Compact subsets of the first Baire class", Journal of the American Mathematical Society, 12: 1179–1212, doi:10.1090/S0894-0347-99-00312-4
2. Fremlin, section 419L
3. Arhangel'skii, p. 39
4. Ma, Dan. "The Lexicographic Order and The Double Arrow Space".
5. Steen & Seebach, counterexample #95, under the name of weak parallel line topology
6. Engelking, example 3.10.C
7. Ostaszewski, A. J. (February 1974), "A Characterization of Compact, Separable, Ordered Spaces", Journal of the London Mathematical Society, s2-7 (4): 758–760, doi:10.1112/jlms/s2-7.4.758
References
• Arhangel'skii, A.V. and Sklyarenko, E.G., General Topology II, Springer-Verlag, New York (1996) ISBN 978-3-642-77032-6
• Engelking, Ryszard, General Topology, Heldermann Verlag Berlin, 1989. ISBN 3-88538-006-4
• Fremlin, D.H. (2003), Measure Theory, Volume 4, Torres Fremlin, ISBN 0-9538129-4-4
• Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978]. Counterexamples in Topology (Dover reprint of 1978 ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-486-68735-3. MR 0507446.
| Wikipedia |
Boy or Girl paradox
The Boy or Girl paradox surrounds a set of questions in probability theory, which are also known as The Two Child Problem,[1] Mr. Smith's Children[2] and the Mrs. Smith Problem. The initial formulation of the question dates back to at least 1959, when Martin Gardner featured it in his October 1959 "Mathematical Games" column in Scientific American. He titled it The Two Children Problem, and phrased the paradox as follows:
• Mr. Jones has two children. The older child is a girl. What is the probability that both children are girls?
• Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys?
Gardner initially gave the answers 1/2 and 1/3, respectively, but later acknowledged that the second question was ambiguous.[1] Its answer could be 1/2, depending on the procedure by which the information "at least one of them is a boy" was obtained. The ambiguity, depending on the exact wording and possible assumptions, was confirmed by Maya Bar-Hillel and Ruma Falk,[3] and Raymond S. Nickerson.[4]
Other variants of this question, with varying degrees of ambiguity, have been popularized by Ask Marilyn in Parade Magazine,[5] John Tierney of The New York Times,[6] and Leonard Mlodinow in The Drunkard's Walk.[7] One scientific study showed that when identical information was conveyed, but with different partially ambiguous wordings that emphasized different points, the percentage of MBA students who answered 1/2 changed from 85% to 39%.[2]
The paradox has stimulated a great deal of controversy.[4] The paradox stems from whether the problem setup is similar for the two questions.[2][7] The intuitive answer is 1/2.[2] This answer is intuitive if the question leads the reader to believe that there are two equally likely possibilities for the sex of the second child (i.e., boy and girl),[2][8] and that the probability of these outcomes is absolute, not conditional.[9]
Gender assumptions
Although Gardner envisioned the paradox being considered in a world in which gender was static and binary, and the distribution of children was uniform across that gender binary,[1] his framing of the problem does not state or require those assumptions. The difference between the two questions is equally interesting from a mathematical point of view in a world in which P(girl) and P(boy) are well-defined across a population of individuals at a given time, but are not necessarily equal or static and do not necessarily add to one.
The remainder of this article makes the assumptions listed below, which appear to have been shared by Gardner and many others who have analyzed the problem.[1][3][5][6][7][8] Readers who are troubled by the theory of gender underlying these assumptions may prefer to consider the discussion below as referring to a situation in which each of the two parents in question has flipped two fair coins (each of which has “B” on one face and “G” on the other), the reference to birth order is to the order of the coin flips, and the references to genders are to the faces of the coins that are showing. Alternatively, the mathematical analysis below can be extended in a reasonably straightforward way to a population in which P(girl), P(boy), and P(not girl $\wedge $ not boy) are any three probabilities that add to one.
Common assumptions
First, it is assumed that the space of all possible events can be easily enumerated, providing an extensional definition of outcomes: {BB, BG, GB, GG}.[10] This notation indicates that there are four possible combinations of children, labeling boys B and girls G, and using the first letter to represent the older child. Second, it is assumed that these outcomes are equally probable.[10] This implies the following model, a Bernoulli process with p = 1/2:
1. Each child is either male or female.
2. Each child has the same chance of being male as of being female.
3. The sex of each child is independent of the sex of the other.
First question
• Mr. Jones has two children. The older child is a girl. What is the probability that both children are girls?
Under the aforementioned assumptions, in this problem, a random family is selected. In this sample space, there are four equally probable events:
Older child Younger child
Girl Girl
Girl Boy
Boy Girl
Boy Boy
Only two of these possible events meet the criteria specified in the question (i.e., GG, GB). Since both of the two possibilities in the new sample space {GG, GB} are equally likely, and only one of the two, GG, includes two girls, the probability that the younger child is also a girl is 1/2.
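Under the stated assumptions this answer can be verified by direct enumeration; a minimal Python sketch:

```python
from itertools import product
from fractions import Fraction

# All four equally likely (older, younger) families
families = list(product("BG", repeat=2))

# Condition: the older child is a girl
conditioned = [f for f in families if f[0] == "G"]

# Probability that both children are girls, given the condition
p = Fraction(sum(1 for f in conditioned if f == ("G", "G")), len(conditioned))
assert p == Fraction(1, 2)
```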
Second question
• Mr. Smith has two children. At least one of them is a boy. What is the probability that both children are boys?
This question is identical to question one, except that instead of specifying that the older child is a boy, it is specified that at least one of them is a boy. In response to reader criticism of the question posed in 1959, Gardner said that no answer is possible without information that was not provided. Specifically, two different procedures for determining that "at least one is a boy" could lead to the exact same wording of the problem, yet they lead to different correct answers:
• From all families with two children, at least one of whom is a boy, a family is chosen at random. This would yield the answer of 1/3.
• From all families with two children, one child is selected at random, and the sex of that child is specified to be a boy. This would yield an answer of 1/2.[3][4]
Grinstead and Snell argue that the question is ambiguous in much the same way Gardner did.[11] They leave it to the reader to decide whether the procedure that yields 1/3 as the answer is reasonable for the problem as stated above. The formulation of the question they were considering is the following:
• Consider a family with two children. Given that one of the children is a boy, what is the probability that both children are boys?
In this formulation the ambiguity is most obviously present, because it is not clear whether we are allowed to assume that a specific child is a boy, leaving the other child uncertain, or whether it should be interpreted in the same way as "at least one boy". This ambiguity admits multiple non-equivalent interpretations and forces assumptions about how the information was obtained; as Bar-Hillel and Falk argue, different assumptions can lead to different outcomes, because the problem statement is not well enough defined to permit a single straightforward interpretation and answer.
For example, say an observer sees Mr. Smith on a walk with just one of his children. If he has two boys then that child must be a boy. But if he has a boy and a girl, that child could have been a girl. So seeing him with a boy eliminates not only the combinations where he has two girls, but also the combinations where he has a son and a daughter and chooses the daughter to walk with.
So, while it is certainly true that every possible Mr. Smith has at least one boy (i.e., the condition is necessary), it cannot be assumed that every Mr. Smith with at least one boy is intended. That is, the problem statement does not say that having a boy is a sufficient condition for Mr. Smith to be identified as having a boy this way.
Commenting on Gardner's version of the problem, Bar-Hillel and Falk[3] note that "Mr. Smith, unlike the reader, is presumably aware of the sex of both of his children when making this statement", i.e. that "I have two children and at least one of them is a boy." It must be further assumed that Mr. Smith would always report this fact if it were true, and either remain silent or say he has at least one daughter, for the correct answer to be 1/3 as Gardner apparently originally intended. But under that assumption, if he remains silent or says he has a daughter, there is a 100% probability he has two daughters.
Analysis of the ambiguity
If it is assumed that this information was obtained by looking at both children to see if there is at least one boy, the condition is both necessary and sufficient. Three of the four equally probable events for a two-child family in the sample space above meet the condition, as in this table:
Older child Younger child
Girl Girl
Girl Boy
Boy Girl
Boy Boy
Thus, if it is assumed that both children were considered while looking for a boy, the answer to question 2 is 1/3. However, if the family was first selected and then a random, true statement was made about the sex of one child in that family, whether or not both were considered, the correct way to calculate the conditional probability is not to count all of the cases that include a child with that sex. Instead, one must consider only the probabilities where the statement will be made in each case.[11] So, if ALOB represents the event where the statement is "at least one boy", and ALOG represents the event where the statement is "at least one girl", then this table describes the sample space:
Older child Younger child P(this family) P(ALOB given this family) P(ALOG given this family) P(ALOB and this family) P(ALOG and this family)
Girl Girl 1/4 0 1 0 1/4
Girl Boy 1/4 1/2 1/2 1/8 1/8
Boy Girl 1/4 1/2 1/2 1/8 1/8
Boy Boy 1/4 1 0 1/4 0
So, if at least one is a boy when the fact is chosen randomly, the probability that both are boys is
$\mathrm {\frac {P(ALOB\;and\;BB)}{P(ALOB)}} ={\frac {\frac {1}{4}}{0+{\frac {1}{8}}+{\frac {1}{8}}+{\frac {1}{4}}}}={\frac {1}{2}}\,.$
The paradox occurs when it is not known how the statement "at least one is a boy" was generated. Either answer could be correct, based on what is assumed.[12]
However, the "1/3" answer is obtained only by assuming P(ALOB|BG) = P(ALOB|GB) =1, which implies P(ALOG|BG) = P(ALOG|GB) = 0, that is, the other child's sex is never mentioned although it is present. As Marks and Smith say, "This extreme assumption is never included in the presentation of the two-child problem, however, and is surely not what people have in mind when they present it."[12]
Modelling the generative process
Another way to analyse the ambiguity (for question 2) is by making explicit the generative process (all draws are independent).
• The following process leads to answer $p(c_{1}=c_{2}=B|\mathrm {Observation} )={\frac {1}{3}}$:
• Draw $c_{1}$ equiprobably from $\{B,G\}$
• Draw $c_{2}$ equiprobably from $\{B,G\}$
• Discard cases where there is no B
• Observe $c_{1}=B\lor c_{2}=B$
• The following process leads to answer $p(c_{1}=c_{2}=B|\mathrm {Observation} )={\frac {1}{2}}$:
• Draw $c_{1}$ equiprobably from $\{B,G\}$
• Draw $c_{2}$ equiprobably from $\{B,G\}$
• Draw index $i$ equiprobably from $\{1,2\}$
• Observe $c_{i}=B$
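Both generative processes can be simulated directly. A Monte Carlo sketch in Python (the sample size and tolerance are arbitrary choices for illustration):

```python
import random
random.seed(0)

def process_discard(n):
    """Draw both children; discard families with no B. Tends to 1/3."""
    both = draws = 0
    while draws < n:
        c = (random.choice("BG"), random.choice("BG"))
        if "B" in c:
            draws += 1
            both += c == ("B", "B")
    return both / draws

def process_sample(n):
    """Draw both children; observe one chosen at random and condition
    on it being B. Tends to 1/2."""
    both = draws = 0
    while draws < n:
        c = (random.choice("BG"), random.choice("BG"))
        if c[random.randrange(2)] == "B":
            draws += 1
            both += c == ("B", "B")
    return both / draws

p1, p2 = process_discard(100_000), process_sample(100_000)
assert abs(p1 - 1/3) < 0.01 and abs(p2 - 1/2) < 0.01
```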
Bayesian analysis
Following classical probability arguments, we consider a large urn containing two children. We assume equal probability that either is a boy or a girl. The three discernible cases are thus:
1. both are girls (GG) – with probability P(GG) = 1/4,
2. both are boys (BB) – with probability of P(BB) = 1/4, and
3. one of each (G·B) – with probability of P(G·B) = 1/2.
These are the prior probabilities.
Now we add the additional assumption that "at least one is a boy" = B. Using Bayes' Theorem, we find
$\mathrm {P(BB\mid B)} =\mathrm {P(B\mid BB)\times {\frac {P(BB)}{P(B)}}} =1\times {\frac {\left({\frac {1}{4}}\right)}{\left({\frac {3}{4}}\right)}}={\frac {1}{3}}\,.$
where P(A|B) means "probability of A given B". P(B|BB) = probability of at least one boy given both are boys = 1. P(BB) = probability of both boys = 1/4 from the prior distribution. P(B) = probability of at least one being a boy, which includes cases BB and G·B = 1/4 + 1/2 = 3/4.
Note that, although the natural assumption seems to be a probability of 1/2 (making the derived value of 1/3 seem low), the prior value of P(BB) is 1/4, so conditioning on "at least one boy" has actually raised it.
The paradox arises because the second assumption is somewhat artificial, and when describing the problem in an actual setting things get a bit sticky. Just how do we know that "at least" one is a boy? One description of the problem states that we look into a window, see only one child and it is a boy. This sounds like the same assumption. However, this one is equivalent to "sampling" the distribution (i.e. removing one child from the urn, ascertaining that it is a boy, then replacing). Let's call the statement "the sample is a boy" proposition "b". Now we have:
$\mathrm {P(BB\mid b)} =\mathrm {P(b\mid BB)\times {\frac {P(BB)}{P(b)}}} =1\times {\frac {\left({\frac {1}{4}}\right)}{\left({\frac {1}{2}}\right)}}={\frac {1}{2}}\,.$
The difference here is the P(b), which is just the probability of drawing a boy from all possible cases (i.e. without the "at least"), which is clearly 1/2.
The Bayesian analysis generalizes easily to the case in which we relax the 50:50 population assumption. If we have no information about the populations then we assume a "flat prior", i.e. P(GG) = P(BB) = P(G·B) = 1/3. In this case the "at least" assumption produces the result P(BB|B) = 1/2, and the sampling assumption produces P(BB|b) = 2/3, a result also derivable from the Rule of Succession.
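All four posteriors above reduce to the same Bayes computation over the three discernible cases. An exact-arithmetic sketch in Python (the dictionary encoding is illustrative, not standard notation):

```python
from fractions import Fraction

def p_bb(prior, likelihood):
    """Posterior P(BB | evidence) from a prior over {GG, BB, GB} and
    the likelihood P(evidence | case), via Bayes' theorem."""
    total = sum(prior[c] * likelihood[c] for c in prior)
    return prior["BB"] * likelihood["BB"] / total

half, third = Fraction(1, 2), Fraction(1, 3)
usual = {"GG": Fraction(1, 4), "BB": Fraction(1, 4), "GB": half}  # 50:50 model
flat  = {"GG": third, "BB": third, "GB": third}                   # flat prior

at_least = {"GG": 0, "BB": 1, "GB": 1}      # P(at least one boy | case)
sampled  = {"GG": 0, "BB": 1, "GB": half}   # P(a randomly drawn child is a boy | case)

assert p_bb(usual, at_least) == Fraction(1, 3)
assert p_bb(usual, sampled)  == Fraction(1, 2)
assert p_bb(flat,  at_least) == Fraction(1, 2)
assert p_bb(flat,  sampled)  == Fraction(2, 3)
```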
Martingale analysis
Suppose one had wagered, at fair odds, that Mr. Smith had two boys: pay $1 now, and receive $4 if he has two boys. The wager increases in value as good news arrives. Which evidence would make the bettor happier about the investment: learning that at least one child out of two is a boy, or learning that at least one child out of one is a boy?
The latter is a priori less likely, and therefore better news. That is why the two answers cannot be the same.
Now for the numbers. If one bets on one child and wins, the value of the investment has doubled. It must double again to reach $4, so the odds are 1 in 2.

On the other hand, if one were to learn that at least one of two children is a boy, the investment increases in value as if one had wagered on this question: the $1 is now worth $4/3. To reach $4, the wealth must still increase threefold, so the answer is 1 in 3.
Variants of the question
Following the popularization of the paradox by Gardner it has been presented and discussed in various forms. The first variant presented by Bar-Hillel & Falk[3] is worded as follows:
• Mr. Smith is the father of two. We meet him walking along the street with a young boy whom he proudly introduces as his son. What is the probability that Mr. Smith's other child is also a boy?
Bar-Hillel & Falk use this variant to highlight the importance of considering the underlying assumptions. The intuitive answer is 1/2 and, when making the most natural assumptions, this is correct. However, someone may argue that "...before Mr. Smith identifies the boy as his son, we know only that he is either the father of two boys, BB, or of two girls, GG, or of one of each in either birth order, i.e., BG or GB. Assuming again independence and equiprobability, we begin with a probability of 1/4 that Smith is the father of two boys. Discovering that he has at least one boy rules out the event GG. Since the remaining three events were equiprobable, we obtain a probability of 1/3 for BB."[3]
The natural assumption is that Mr. Smith selected the child companion at random. If so, as combination BB has twice the probability of either BG or GB of having resulted in the boy walking companion (and combination GG has zero probability, ruling it out), the union of events BG and GB becomes equiprobable with event BB, and so the chance that the other child is also a boy is 1/2. Bar-Hillel & Falk, however, suggest an alternative scenario. They imagine a culture in which boys are invariably chosen over girls as walking companions. In this case, the combinations of BB, BG and GB are assumed equally likely to have resulted in the boy walking companion, and thus the probability that the other child is also a boy is 1/3.
In 1991, Marilyn vos Savant responded to a reader who asked her to answer a variant of the Boy or Girl paradox that included beagles.[5] In 1996, she published the question again in a different form. The 1991 and 1996 questions, respectively were phrased:
• A shopkeeper says she has two new baby beagles to show you, but she doesn't know whether they're male, female, or a pair. You tell her that you want only a male, and she telephones the fellow who's giving them a bath. "Is at least one a male?" she asks him. "Yes!" she informs you with a smile. What is the probability that the other one is a male?
• Say that a woman and a man (who are unrelated) each have two children. We know that at least one of the woman's children is a boy and that the man's oldest child is a boy. Can you explain why the chances that the woman has two boys do not equal the chances that the man has two boys?
With regard to the second formulation Vos Savant gave the classic answer that the chances that the woman has two boys are about 1/3 whereas the chances that the man has two boys are about 1/2. In response to reader response that questioned her analysis vos Savant conducted a survey of readers with exactly two children, at least one of which is a boy. Of 17,946 responses, 35.9% reported two boys.[10]
Vos Savant's articles were discussed by Carlton and Stansfield[10] in a 2005 article in The American Statistician. The authors do not discuss the possible ambiguity in the question and conclude that her answer is correct from a mathematical perspective, given the assumptions that the likelihood of a child being a boy or girl is equal, and that the sex of the second child is independent of the first. With regard to her survey they say it "at least validates vos Savant's correct assertion that the "chances" posed in the original question, though similar-sounding, are different, and that the first probability is certainly nearer to 1 in 3 than to 1 in 2."
Carlton and Stansfield go on to discuss the common assumptions in the Boy or Girl paradox. They demonstrate that in reality male children are actually more likely than female children, and that the sex of the second child is not independent of the sex of the first. The authors conclude that, although the assumptions of the question run counter to observations, the paradox still has pedagogical value, since it "illustrates one of the more intriguing applications of conditional probability."[10] Of course, the actual probability values do not matter; the purpose of the paradox is to demonstrate seemingly contradictory logic, not actual birth rates.
Information about the child
Suppose we were told not only that Mr. Smith has two children, and one of them is a boy, but also that the boy was born on a Tuesday: does this change the previous analyses? Again, the answer depends on how this information was presented – what kind of selection process produced this knowledge.
Following the tradition of the problem, suppose that in the population of two-child families, the sex of the two children is independent of one another, equally likely boy or girl, and that the birth date of each child is independent of the other child. The chance of being born on any given day of the week is 1/7.
From Bayes' Theorem that the probability of two boys, given that one boy was born on a Tuesday is given by:
$\mathrm {P(BB\mid B_{T})={\frac {P(B_{T}\mid BB)\times P(BB)}{P(B_{T})}}} $
Assume that the probability of being born on a Tuesday is ε = 1/7, which will be substituted after arriving at the general solution. The second factor in the numerator is simply 1/4, the probability of having two boys. The first term in the numerator is the probability of at least one boy born on Tuesday, given that the family has two boys, or 1 − (1 − ε)2 (one minus the probability that neither boy is born on Tuesday). For the denominator, let us decompose: $\mathrm {P(B_{T})=P(B_{T}\mid BB)P(BB)+P(B_{T}\mid BG)P(BG)+P(B_{T}\mid GB)P(GB)+P(B_{T}\mid GG)P(GG)} $. Each term is weighted with probability 1/4. The first term is already known by the previous remark, and the last term is 0 (there are no boys). $P(B_{T}\mid BG)$ and $P(B_{T}\mid GB)$ are each ε: there is one and only one boy, so he has probability ε of being born on Tuesday. Therefore, the full equation is:
$\mathrm {P(BB\mid B_{T})} ={\frac {\left(1-(1-\varepsilon )^{2}\right)\times {\frac {1}{4}}}{0+{\frac {1}{4}}\varepsilon +{\frac {1}{4}}\varepsilon +{\frac {1}{4}}\left(\varepsilon +\varepsilon -\varepsilon ^{2}\right)}}={\frac {1-(1-\varepsilon )^{2}}{4\varepsilon -\varepsilon ^{2}}}$
For $\varepsilon >0$, this reduces to $\mathrm {P(BB\mid B_{T})} ={\frac {2-\varepsilon }{4-\varepsilon }}$
If ε is now set to 1/7, the probability becomes 13/27, or about 0.48. In fact, as ε approaches 0, the total probability goes to 1/2, which is the answer expected when one child is sampled (e.g. the oldest child is a boy) and is thus removed from the pool of possible children. In other words, as more and more details about the boy child are given (for instance: born on January 1), the chance that the other child is a girl approaches one half.
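The 13/27 figure can be confirmed by enumerating the 14 × 14 equally likely (sex, weekday) combinations; a Python sketch (the index chosen for Tuesday is an arbitrary labeling, and any fixed weekday gives the same count):

```python
from fractions import Fraction
from itertools import product

TUESDAY = 1  # arbitrary label for one fixed weekday

# A child is a (sex, weekday) pair: 2 x 7 = 14 equally likely types.
children = list(product("BG", range(7)))
families = list(product(children, repeat=2))   # 14 x 14 = 196 equally likely families

tue_boy = [f for f in families if ("B", TUESDAY) in f]       # at least one Tuesday boy
both_boys = [f for f in tue_boy if all(sex == "B" for sex, _ in f)]

p = Fraction(len(both_boys), len(tue_boy))
assert (len(tue_boy), len(both_boys)) == (27, 13) and p == Fraction(13, 27)

# Agrees with the closed form (2 - ε)/(4 - ε) at ε = 1/7
eps = Fraction(1, 7)
assert (2 - eps) / (4 - eps) == p
```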
It seems that quite irrelevant information was introduced, yet the probability of the sex of the other child has changed dramatically from what it was before (the chance the other child was a girl was 2/3, when it was not known that the boy was born on Tuesday).
To understand why this is, imagine Marilyn vos Savant's poll of readers had asked which day of the week boys in the family were born. If Marilyn then divided the whole data set into seven groups – one for each day of the week a son was born – six out of seven families with two boys would be counted in two groups (the group for the day of the week of birth boy 1, and the group of the day of the week of birth for boy 2), doubling, in every group, the probability of a boy-boy combination.
However, is it really plausible that the family with at least one boy born on a Tuesday was produced by choosing just one such family at random? It is much easier to imagine the following scenario.
• We know Mr. Smith has two children. We knock at his door and a boy comes and answers the door. We ask the boy on what day of the week he was born.
Assume that which of the two children answers the door is determined by chance. Then the procedure was (1) pick a two-child family at random from all two-child families (2) pick one of the two children at random, (3) see if it is a boy and ask on what day he was born. The chance the other child is a girl is 1/2. This is a very different procedure from (1) picking a two-child family at random from all families with two children, at least one a boy, born on a Tuesday. The chance the family consists of a boy and a girl is 14/27, about 0.52.
This variant of the boy and girl problem is discussed on many internet blogs and is the subject of a paper by Ruma Falk.[13] The moral of the story is that these probabilities do not just depend on the known information, but on how that information was obtained.
Psychological investigation
From the position of statistical analysis the relevant question is often ambiguous, and as such there is no "correct" answer. However, this does not exhaust the boy or girl paradox, for it is not necessarily the ambiguity that explains how the intuitive probability is derived. A survey such as vos Savant's suggests that the majority of people adopt an understanding of Gardner's problem that, if they were consistent, would lead them to the 1/3 probability answer, yet people overwhelmingly arrive intuitively at the 1/2 probability answer. Ambiguity notwithstanding, this makes the problem of interest to psychological researchers who seek to understand how humans estimate probability.
Fox & Levav (2004) used the problem (called the Mr. Smith problem, credited to Gardner, but not worded exactly the same as Gardner's version) to test theories of how people estimate conditional probabilities.[2] In this study, the paradox was posed to participants in two ways:
• "Mr. Smith says: 'I have two children and at least one of them is a boy.' Given this information, what is the probability that the other child is a boy?"
• "Mr. Smith says: 'I have two children and it is not the case that they are both girls.' Given this information, what is the probability that both children are boys?"
The authors argue that the first formulation gives the reader the mistaken impression that there are two possible outcomes for the "other child",[2] whereas the second formulation gives the reader the impression that there are four possible outcomes, of which one has been rejected (resulting in 1/3 being the probability of both children being boys, as there are 3 remaining possible outcomes, only one of which is that both of the children are boys). The study found that 85% of participants answered 1/2 for the first formulation, while only 39% responded that way to the second formulation. The authors argued that the reason people respond differently to each question (along with other similar problems, such as the Monty Hall Problem and the Bertrand's box paradox) is because of the use of naive heuristics that fail to properly define the number of possible outcomes.[2]
See also
• Bertrand paradox (probability)
• Necktie paradox
• Sleeping Beauty problem
• St. Petersburg paradox
• Two envelopes problem
• List of paradoxes
References
1. Martin Gardner (1961). The Second Scientific American Book of Mathematical Puzzles and Diversions. Simon & Schuster. ISBN 978-0-226-28253-4.
2. Craig R. Fox & Jonathan Levav (2004). "Partition–Edit–Count: Naive Extensional Reasoning in Judgment of Conditional Probability" (PDF). Journal of Experimental Psychology. 133 (4): 626–642. doi:10.1037/0096-3445.133.4.626. PMID 15584810. S2CID 391620. Archived from the original (PDF) on 2020-04-10.
3. Bar-Hillel, Maya; Falk, Ruma (1982). "Some teasers concerning conditional probabilities". Cognition. 11 (2): 109–122. doi:10.1016/0010-0277(82)90021-X. PMID 7198956. S2CID 44509163.
4. Raymond S. Nickerson (May 2004). Cognition and Chance: The Psychology of Probabilistic Reasoning. Psychology Press. ISBN 0-8058-4899-1.
5. "Ask Marilyn". Parade Magazine. October 13, 1991 [January 5, 1992; May 26, 1996; December 1, 1996; March 30, 1997; July 27, 1997; October 19, 1997]. {{cite journal}}: Cite journal requires |journal= (help)
6. Tierney, John (2008-04-10). "The psychology of getting suckered". The New York Times. Retrieved 24 February 2009.
7. Leonard Mlodinow (2008). The Drunkard's Walk: How Randomness Rules our Lives. Pantheon. ISBN 978-0-375-42404-5.
8. Nikunj C. Oza (1993). "On The Confusion in Some Popular Probability Problems". CiteSeerX 10.1.1.44.2448. {{cite journal}}: Cite journal requires |journal= (help)
9. P.J. Laird; et al. (1999). "Naive Probability: A Mental Model Theory of Extensional Reasoning". Psychological Review. 106 (1): 62–88. doi:10.1037/0033-295x.106.1.62. PMID 10197363.
10. Matthew A. Carlton and William D. Stansfield (2005). "Making Babies by the Flip of a Coin?". The American Statistician. 59 (2): 180–182. doi:10.1198/000313005x42813. S2CID 43825948.
11. Charles M. Grinstead and J. Laurie Snell. "Grinstead and Snell's Introduction to Probability" (PDF). The CHANCE Project.
12. Stephen Marks and Gary Smith (Winter 2011). "The Two-Child Paradox Reborn?" (PDF). Chance (Magazine of the American Statistical Association). 24: 54–9. doi:10.1007/s00144-011-0010-0. Archived from the original (PDF) on 2016-03-04. Retrieved 2015-01-27.
13. Falk Ruma (2011). "When truisms clash: Coping with a counterintuitive problem concerning the notorious two-child family". Thinking & Reasoning. 17 (4): 353–366. doi:10.1080/13546783.2011.613690. S2CID 145428896.
External links
• At Least One Girl at MathPages
• A Problem With Two Bear Cubs
• Lewis Carroll's Pillow Problem
• When intuition and math probably look wrong
| Wikipedia |
Two-dimensional window design
Windowing is a process where an index-limited sequence has its maximum energy concentrated in a finite frequency interval. This can be extended to N dimensions, where the N-D window has limited support and maximum concentration of energy in a separable or non-separable N-D passband. The design of an N-dimensional window, particularly a 2-D window, finds applications in various fields such as spectral estimation of multidimensional signals, design of circularly symmetric and quadrantally symmetric non-recursive 2-D filters,[1] design of optimal convolution functions, image enhancement so as to reduce the effects of data-dependent processing artifacts, optical apodization and antenna array design.[2]
Two-dimensional window
Because of the many applications of multidimensional signal processing, the design methodologies for 2-D windows are of critical importance to the applications mentioned above.
Consider a two-dimensional window function (or window array) $w(n_{1},n_{2})$ with its Fourier transform denoted by $W(w_{1},w_{2})$. Let $i(n_{1},n_{2})$ and $I(w_{1},w_{2})$ denote the impulse and frequency response of an ideal filter, and $h(n_{1},n_{2})$ and $H(w_{1},w_{2})$ denote the impulse and frequency response of a filter approximating the ideal filter; then $h(n_{1},n_{2})$ approximates $i(n_{1},n_{2})$. Since $i(n_{1},n_{2})$ has an infinite extent, it can be approximated as a finite impulse response by multiplying with a window function as shown below
$h(n_{1},n_{2})=i(n_{1},n_{2})w(n_{1},n_{2})$
and in the Fourier domain
$H(w_{1},w_{2})={\frac {1}{(2\pi )^{2}}}[I(w_{1},w_{2})**W(w_{1},w_{2})]$ [2]
The problem is to choose a window function with an appropriate shape such that $H(w_{1},w_{2})$ is close to $I(w_{1},w_{2})$ and in any region surrounding a discontinuity of $I(w_{1},w_{2})$, $H(w_{1},w_{2})$ shouldn't contain excessive ripples due to the windowing.
2-D window function from 1-D function
There are four approaches for generating 2-D windows using a one-dimensional window as a prototype.[3]
Approach I
One of the methods of deriving the 2-D window is from the outer product of two 1-D windows, i.e., $w(n_{1},n_{2})=w_{1}(n_{1})w_{2}(n_{2}).$ The property of separability is exploited in this approach. The window formed has a square region of support and is separable in the two variables. In order to understand this approach,[4] consider 1-D Kaiser window whose window function is given by
$w[n]=\left\{{\begin{matrix}{\frac {I_{0}\left(\pi \alpha {\sqrt {1-\left({\frac {2n}{N-1}}-1\right)^{2}}}\right)}{I_{0}(\pi \alpha )}},&0\leq n\leq N-1\\\\0&{\text{otherwise}}\\\end{matrix}}\right.$
then the corresponding 2-D function is given by
$w(n_{1},n_{2})=\left\{{\begin{matrix}{\frac {I_{0}\left(\alpha {\sqrt {1-\left({\frac {n_{1}}{a}}\right)^{2}}}\right)I_{0}\left(\alpha {\sqrt {1-({\frac {n_{2}}{a}})^{2}}}\right)}{I_{0}^{2}(\alpha )}},&|n_{1}|\leqslant a,|n_{2}|\leqslant a\\0&{\text{otherwise}}\\\end{matrix}}\right.$
where:
• $r={\sqrt {n_{1}^{2}+n_{2}^{2}}}$
• N is the length of the 1-D sequence,
• I0 is the zeroth-order modified Bessel function of the first kind,
• α is an arbitrary, non-negative real number that determines the shape of the window. In the frequency domain, it determines the trade-off between main-lobe width and side lobe level, which is a central decision in window design.
The Fourier transform of $w(n_{1},n_{2})$ is the outer product of the Fourier transforms of $w_{1}(n_{1}){\text{ and }}w_{2}(n_{2})$. Hence $W(w_{1},w_{2})=W_{1}(w_{1})W_{2}(w_{2})$.[5]
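As an illustrative sketch (not from the cited sources), the separable 2-D Kaiser window can be built with NumPy's `kaiser` routine, whose shape parameter `beta` corresponds to $\pi \alpha $ in the 1-D formula above:

```python
import numpy as np

def kaiser_2d_separable(N, alpha):
    """2-D window as the outer product of two 1-D Kaiser windows (Approach I)."""
    w1 = np.kaiser(N, np.pi * alpha)   # NumPy's beta plays the role of pi*alpha
    return np.outer(w1, w1)            # w(n1, n2) = w1(n1) * w1(n2)

W = kaiser_2d_separable(33, 2.0)
```

Because the window is separable, the resulting array has rank one, and its 2-D Fourier transform is the outer product of the two 1-D transforms.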
Approach II
Another method of extending the 1-D window design to a 2-D design is by sampling a circularly rotated 1-D continuous window function.[2] A function is said to possess circular symmetry if it can be written as a function of its radius, independent of $\theta $ i.e. $f(r,\theta )=f(r).$
If w(n) denotes a good 1-D even symmetric window then the corresponding 2-D window function[2] is
$w(n_{1},n_{2})=w\left({\sqrt {n_{1}^{2}+n_{2}^{2}}}\right){\text{ for }}\left|{\sqrt {n_{1}^{2}+n_{2}^{2}}}\right|\leqslant a$
(where $a$ is a constant) and
$w(n_{1},n_{2})=0{\text{ for }}\left|{\sqrt {n_{1}^{2}+n_{2}^{2}}}\right|>a$
The transformation of the Fourier transform of the window function in rectangular co-ordinates to polar co-ordinates results in a Fourier–Bessel transform expression which is called as Hankel transform. Hence the Hankel transform is used to compute the Fourier transform of the 2-D window functions.
If this approach is used to find the 2-D window from the 1-D window function then their Fourier transforms have the relation
${\frac {1}{2\pi }}H(w_{1},w_{2})**W(w_{1},w_{2})=H(w)*W(w)$[2]
where:
$H(w)=\left\{{\begin{matrix}1,&w\geq 0\\0,&w<0\\\end{matrix}}\right.$ is a 1-D step function
and
$H(w_{1},w_{2})=\left\{{\begin{matrix}1,&w_{1}\geq 0{\text{ and all }}w_{2}\\0,&w_{1}<0{\text{ and all }}w_{2}\end{matrix}}\right.$ is a 2-D step function.
To calculate the sidelobe level relative to the mainlobe, the volume under the sidelobes is used, unlike in the 1-D case where the area under the sidelobes is used.
In order to understand this approach, consider 1-D Kaiser window then the corresponding 2-D function can be derived as
$w(n_{1},n_{2})=\left\{{\begin{matrix}{\frac {I_{0}\left(\alpha {\sqrt {1-{\frac {n_{1}^{2}+n_{2}^{2}}{a^{2}}}}}\right)}{I_{0}(\alpha )}},&|r|\leqslant a\\0&{\text{otherwise}}\end{matrix}}\right.$
This is the most widely used approach to design the 2-D windows.
2-D filter design by windowing using window formulations obtained from the above two approaches will result in the same filter order. This results in an advantage for the second approach since its circular region of support has fewer non-zero samples than the square region of support obtained from the first approach which in turn results in computational savings due to reduced number of coefficients of the 2-D filter. But the disadvantage of this approach is that the frequency characteristics of the 1-D window are not well preserved in 2-D cases by this rotation method.[3] It was also found that the mainlobe width and sidelobe level of the 2-D windows are not as well behaved and predictable as their 1-D prototypes.[4] While designing a 2-D window there are two features that have to be considered for the rotation. Firstly, the 1-D window is only defined for integer values of $n$ but ${\sqrt {n_{1}^{2}+n_{2}^{2}}}$ value isn't an integer in general. To overcome this, the method of interpolation can be used to define values for $w(n_{1},n_{2})$ for any arbitrary $w\left({\sqrt {n_{1}^{2}+n_{2}^{2}}}\right).$ Secondly, the 2-D FFT must be applicable to the 2-D window.
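A minimal sketch of this approach for the rotated Kaiser window (names are illustrative; no interpolation is needed here because the 1-D Kaiser profile has a closed form that can be evaluated directly at each radius $r={\sqrt {n_{1}^{2}+n_{2}^{2}}}$):

```python
import numpy as np

def kaiser_2d_circular(a, alpha):
    """2-D Kaiser window by circular rotation of the 1-D profile (Approach II)."""
    n = np.arange(-a, a + 1)
    n1, n2 = np.meshgrid(n, n, indexing="ij")
    r = np.hypot(n1, n2)                       # sqrt(n1^2 + n2^2)
    w = np.zeros_like(r, dtype=float)
    inside = r <= a                            # circular region of support
    w[inside] = np.i0(alpha * np.sqrt(1 - (r[inside] / a) ** 2)) / np.i0(alpha)
    return w

W = kaiser_2d_circular(16, 2.0)
```

The circular support leaves the corner samples zero, which is the source of the coefficient savings mentioned above.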
Approach III
Another approach is to obtain 2-D windows by rotating the frequency response of a 1-D window in Fourier space, followed by the inverse Fourier transform.[6] In approach II the window is rotated in the spatial domain, whereas in this approach the 1-D window is rotated in the frequency domain.
Thus the Fourier transform of the 2-D window function is given by
$W_{2}(w_{1},w_{2})=W_{1}\left({\sqrt {(w_{1}^{2}+w_{2}^{2})}}\right).$
The 2-D window function $w_{2}(n_{1},n_{2})$ can be obtained by computing the inverse Fourier transform of $W_{2}(w_{1},w_{2})$.
Another way to show the type-preserving rotation is when the relation $W_{1}(w_{1})=W_{2}(w_{1},w_{2})$ at $w_{2}=0$ is satisfied. This implies that a slice of the frequency response of the 2-D window is equal to that of the 1-D window, where the orientation of $(w_{1},w_{2})$ is arbitrary. In the spatial domain, this relation is given by $w_{1}(n)=\int _{-\infty }^{\infty }\!w_{2}(n_{1},n_{2})\,dn_{2}$. This implies that a slice of the frequency response $W_{2}(w_{1},w_{2})$ is the same as the Fourier transform of the one-directional integration of the 2-D window $w_{2}(n_{1},n_{2})$.
The advantage of this approach is that the individual features of 1-D window response $W_{1}(w_{1})$ are well preserved in the obtained 2-D window response $W_{2}(w_{1},w_{2})$. Also, the circular symmetry is improved considerably in a discrete system. The drawback is that it's computationally inefficient due to the requirement of 2-D inverse Fourier transform and hence less useful in practice.[3]
Approach IV
A new method was proposed to design a 2-D window by applying the McClellan transformation to a 1-D window.[7] Each coefficient of the resulting 2-D window is the linear combination of coefficients of the corresponding 1-D window with integer or power of 2 weighting.
Consider a case of even length, then the frequency response of the 1-D window of length N can be written as
$W_{1}(w)=\sum _{n=1}^{N/2}w(n)\cos[(n-0.5)w].$
Consider the McClellan transformation:
$\cos(w)=0.5\cos(w_{1})+0.5\cos(w_{2})+0.5\cos(w_{1})\cos(w_{2})-0.5$
which is equivalent to
$\cos(0.5w)=\cos(0.5w_{1})\cos(0.5w_{2}){\text{ for }}0\leq \ w\leq \pi ,0\leq \ w_{1}\leq \pi ,0\leq \ w_{2}\leq \pi .$
Substituting the above, we get the frequency response of the corresponding 2-D window
$W_{2}(w_{1},w_{2})=\sum _{n_{1}=1}^{N/2}\sum _{n_{2}=1}^{N/2}w_{2}(n_{1},n_{2})\cos[(n_{1}-0.5)w_{1}]\cos[(n_{2}-0.5)w_{2}].$
From the above equation, the coefficients of the 2-D window can be obtained.
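As a sanity check, the equivalence of the two forms of the McClellan substitution above can be verified numerically, using the half-angle identity $\cos w=2\cos ^{2}(w/2)-1$:

```python
import math

# Verify: 0.5cos(w1) + 0.5cos(w2) + 0.5cos(w1)cos(w2) - 0.5
#         == 2[cos(w1/2)cos(w2/2)]^2 - 1,
# i.e. the substitution is the same as cos(w/2) = cos(w1/2)cos(w2/2).
for w1 in (0.3, 1.0, 2.5):
    for w2 in (0.2, 1.7, 3.0):
        lhs = 0.5 * math.cos(w1) + 0.5 * math.cos(w2) \
            + 0.5 * math.cos(w1) * math.cos(w2) - 0.5
        half = math.cos(w1 / 2) * math.cos(w2 / 2)   # this is cos(w/2)
        assert abs(lhs - (2 * half ** 2 - 1)) < 1e-12
```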
To illustrate this approach, consider the Tseng window. The 1-D Tseng window of $2N$ weights can be written as
$W(w)=\exp(-jw/2)\sum _{n=1}^{N}2w_{n}\cos \left(\left(n-{\frac {1}{2}}\right)w\right).$
By implementing this approach, the frequency response of the 2-D McClellan-transformed Tseng window is given by
$W(w_{1},w_{2})=\exp(-j(w_{1}+w_{2})/2)\sum _{n_{1}=1}^{N}\sum _{n_{2}=1}^{N}4w(n_{1},n_{2})\cos \left(\left(n_{1}-{\frac {1}{2}}\right)w_{1}\right)\cos \left(\left(n_{2}-{\frac {1}{2}}\right)w_{2}\right)$
where $w(n_{1},n_{2})$ are the 2-D Tseng window coefficients.
This window finds applications in antenna array design for the detection of AM signals.[8]
The advantages include simple and efficient design, nearly circularly symmetric frequency response of the 2-D window, preserving of the 1-D window prototype features. However, when this approach is used for FIR filter design it was observed that the 2-D filters designed were not as good as those originally proposed by McClellan.
2-D window functions
Using the above approaches, the 2-D window functions for a few of the 1-D windows are as shown below. When the Hankel transform is used to find the frequency response of the window function, it is difficult to represent it in a closed form. Except for the rectangular window and the Bartlett window, the other window functions are represented in their original integral form. The two-dimensional window function is represented as $w(r)$ with a region of support given by $|r|<a$, where the window is set to unity at the origin and $w(r)=0$ for $|r|>a.$ Using the Hankel transform, the frequency response of the window function is given by
$W(f)=\int _{0}^{\infty }\!rw(r)J_{0}(fr)\,dr.$ [9]
where $J_{0}$ is the zeroth-order Bessel function of the first kind.
Rectangular window
The two-dimensional version of a circularly symmetric rectangular window is as given below[9]
$w(r)=\left\{{\begin{array}{ll}1,&|r|\leqslant a\\0,&|r|>a\\\end{array}}\right.$
The window is cylindrical, with height equal to one and base a circle of diameter 2a. The vertical cross-section of this window is a 1-D rectangular window.
The frequency response of the window after substituting the window function as defined above, using the Hankel transform, is as shown below
$W(f)=\int _{0}^{a}\!rJ_{0}(fr)\,dr$
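These Hankel-transform integrals are typically evaluated numerically. A stdlib-only sketch (function names are illustrative; the Bessel functions are computed from their standard integral representation, and the rectangular case is checked against its well-known closed form $W(f)=aJ_{1}(af)/f$):

```python
import math

def bessel_j(n, x, steps=400):
    """J_n(x) from (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (trapezoidal rule)."""
    h = math.pi / steps
    total = sum((0.5 if k in (0, steps) else 1.0) *
                math.cos(n * k * h - x * math.sin(k * h))
                for k in range(steps + 1))
    return total * h / math.pi

def hankel_rect(f, a=1.0, steps=1000):
    """W(f) = integral_0^a r * J0(f*r) dr for the circular rectangular window."""
    h = a / steps
    total = sum((0.5 if k in (0, steps) else 1.0) *
                (k * h) * bessel_j(0, f * k * h)
                for k in range(steps + 1))
    return total * h

# Check against the closed form W(f) = a * J1(a*f) / f.
f, a = 5.0, 1.0
assert abs(hankel_rect(f, a) - a * bessel_j(1, a * f) / f) < 1e-5
```

The same quadrature applies unchanged to the Bartlett and Kaiser integrands below; only the weighting of `r` inside the integral changes.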
Bartlett window
The two-dimensional mathematical representation of a Bartlett window is as shown below[9]
$w(r)=\left\{{\begin{array}{cl}1-{\frac {|r|}{a}},&|r|\leqslant a\\0,&|r|>a\end{array}}\right.$
The window is cone-shaped, with height equal to 1 and base a circle of diameter 2a. The vertical cross-section of this window is a 1-D triangle window.
The Fourier transform of the window using the Hankel transform is as shown below
$W(f)=\int _{0}^{a}\!r\left(1-{\frac {r}{a}}\right)J_{0}(fr)\,dr$
Kaiser window
The 2-D Kaiser window is represented by[9]
$w(r)=\left\{{\begin{array}{cl}{\frac {I_{0}\left(\alpha {\sqrt {1-({\frac {r}{a}})^{2}}}\right)}{I_{0}(\alpha )}},&|r|\leqslant a\\[4pt]0,&{\text{otherwise}}\end{array}}\right.$
The cross-section of the 2-D window gives the response of a 1-D Kaiser Window function.
The Fourier transform of the window using the Hankel transform is as shown below
$W(f)=\int _{0}^{a}\!r\left({\frac {I_{0}\left(\alpha {\sqrt {1-\left({\frac {r}{a}}\right)^{2}}}\right)}{I_{0}(\alpha )}}\right)J_{0}(fr)\,dr$
References
1. Antoniou, A.; Lu, W.-S. (August 1990). "Design of 2-D nonrecursive filters using the window method". IEE Proceedings G - Circuits, Devices and Systems. 137 (4): 247–250. doi:10.1049/ip-g-2.1990.0038. ISSN 0956-3768.
2. Huang, T. (March 1972). "Two-dimensional windows". IEEE Transactions on Audio and Electroacoustics. 20 (1): 88–89. doi:10.1109/TAU.1972.1162331. ISSN 0018-9278.
3. PEI, SOO-CHANG; JAW, SY-BEEN (Sep 1987). "A Novel 2-D Window for Spectral Estimation". IEEE Transactions on Circuits and Systems. 34 (9): 1112–1115. Bibcode:1987ITCS...34.1112P. doi:10.1109/TCS.1987.1086250. ISSN 0098-4094.
4. Speake, Theresa C.; Mersereau, Russell M. (Feb 1981). "A Note on the Use of Windows for Two-Dimensional FIR Filter Design". IEEE Transactions on Acoustics, Speech, and Signal Processing. 29 (1): 125–127. doi:10.1109/TASSP.1981.1163515. ISSN 0096-3518.
5. Dudgeon, D. E.; Mersereau, R. M. (1984). Multidimensional Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall.
6. Kato, Haruo; Furukawa, Tomozo (Aug 1981). "Two-Dimensional Type-Preserving Circular Windows". IEEE Transactions on Acoustics, Speech, and Signal Processing. 29 (4): 926–928. doi:10.1109/TASSP.1981.1163655. ISSN 0096-3518.
7. Yu, Tian-Hu; Mitra, Sanjit K. (Aug 1985). "A New Two-Dimensional Window". IEEE Transactions on Acoustics, Speech, and Signal Processing. 33 (4): 1058–1061. doi:10.1109/TASSP.1985.1164668. ISSN 0096-3518.
8. Choi, S.; Sarkar, T.K. (June 1989). "Design of 2-D Tseng window and its application to antenna array synthesis". Antennas and Propagation Society International Symposium, 1989. AP-S. Digest: 1638–1641. doi:10.1109/APS.1989.135042. S2CID 25608497.
9. Wulang, Widada (December 1979). TWO DIMENSIONAL WINDOW FUNCTIONS (Master's thesis). Naval Postgraduate School, Monterey, CA. hdl:10945/18901.
Two ears theorem
In geometry, the two ears theorem states that every simple polygon with more than three vertices has at least two ears, vertices that can be removed from the polygon without introducing any crossings. The two ears theorem is equivalent to the existence of polygon triangulations. It is frequently attributed to Gary H. Meisters, but was proved earlier by Max Dehn.
Statement of the theorem
A simple polygon is a simple closed curve in the Euclidean plane consisting of finitely many line segments in a cyclic sequence, with each two consecutive line segments meeting at a common endpoint, and no other intersections. By the Jordan curve theorem, it separates the plane into two regions, one of which (the interior of the polygon) is bounded. An ear of a polygon is defined as a triangle formed by three consecutive vertices $u,v,w$ of the polygon, such that its edge $uw$ lies entirely in the interior of the polygon. The two ears theorem states that every simple polygon that is not itself a triangle has at least two ears.[1]
Relation to triangulations
An ear and its two neighbors form a triangle within the polygon that is not crossed by any other part of the polygon. Removing a triangle of this type produces a polygon with fewer sides, and repeatedly removing ears allows any simple polygon to be triangulated. Conversely, if a polygon is triangulated, the weak dual of the triangulation (a graph with one vertex per triangle and one edge per pair of adjacent triangles) will be a tree and each leaf of the tree will form an ear. Since every tree with more than one vertex has at least two leaves, every triangulated polygon with more than one triangle has at least two ears. Thus, the two ears theorem is equivalent to the fact that every simple polygon has a triangulation.[2]
Triangulation algorithms based on this principle have been called ear-clipping algorithms. Although a naive implementation is slower, ear-clipping can be sped up by the observation that a triple of consecutive vertices of a polygon forms an ear if and only if its central vertex is convex and the triangle they form does not contain any reflex vertices. By maintaining a queue of triples with this property, and repeatedly removing an ear from the queue and updating the adjacent triples, it is possible to perform ear-clipping in time $O{\bigl (}n(r+1){\bigr )}$, where $n$ is the number of input vertices and $r$ is the number of reflex vertices.[3]
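A naive $O(n^{3})$ ear-clipping sketch (without the reflex-vertex queue optimization; vertices are assumed to be listed in counter-clockwise order):

```python
def cross(o, a, b):
    """2-D cross product of OA and OB; positive when o->a->b turns counter-clockwise."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def in_triangle(p, a, b, c):
    """True if p lies inside or on triangle abc (abc counter-clockwise)."""
    return cross(a, b, p) >= 0 and cross(b, c, p) >= 0 and cross(c, a, p) >= 0

def ear_clip(poly):
    """Triangulate a simple polygon (counter-clockwise vertex list) by clipping ears."""
    idx = list(range(len(poly)))
    triangles = []
    while len(idx) > 3:
        for k in range(len(idx)):
            i, j, l = idx[k - 1], idx[k], idx[(k + 1) % len(idx)]
            a, b, c = poly[i], poly[j], poly[l]
            if cross(a, b, c) <= 0:
                continue                  # reflex or degenerate vertex: not an ear
            rest = (poly[m] for m in idx if m not in (i, j, l))
            if any(in_triangle(p, a, b, c) for p in rest):
                continue                  # another vertex blocks this ear
            triangles.append((i, j, l))
            idx.pop(k)                    # clip the ear and repeat
            break
    triangles.append(tuple(idx))
    return triangles

# A simple non-convex pentagon: 5 vertices yield 5 - 2 = 3 triangles.
pent = [(0, 0), (4, 0), (4, 4), (2, 1), (0, 4)]
tris = ear_clip(pent)
```

Each pass scans for a convex vertex whose triangle contains no other vertex of the polygon; the two ears theorem guarantees such a vertex exists, so the loop always terminates.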
If a simple polygon is triangulated, then a triple of consecutive vertices $u,v,w$ forms an ear if $v$ is a convex vertex and none of its other neighbors in the triangulation lie in triangle $uvw$. By testing all neighbors of all vertices, it is possible to find all the ears of a triangulated simple polygon in linear time.[4] Alternatively, it is also possible to find a single ear of a simple polygon in linear time, without triangulating the polygon.[5]
Related types of vertex
An ear is called exposed when its central vertex belongs to the convex hull of the polygon. However, it is possible for a polygon to have no exposed ears.[6]
Ears are a special case of a principal vertex, a vertex such that the line segment connecting the vertex's neighbors does not cross the polygon or touch any other vertex of it. A principal vertex for which this line segment lies outside the polygon is called a mouth. Analogously to the two ears theorem, every non-convex simple polygon has at least one mouth. Polygons with the minimum number of principal vertices of both types, two ears and a mouth, are called anthropomorphic polygons.[7] Repeatedly finding and removing a mouth from a non-convex polygon will eventually turn it into the convex hull of the initial polygon. This principle can be applied to the surrounding polygons of a set of points; these are polygons that use some of the points as vertices, and contain the rest of them. Removing a mouth from a surrounding polygon produces another surrounding polygon, and the family of all surrounding polygons can be found by reversing this mouth-removal process, starting from the convex hull.[8]
History and proof
The two ears theorem is often attributed to a 1975 paper by Gary H. Meisters, from which the "ear" terminology originated.[1] However, the theorem was proved earlier by Max Dehn (circa 1899) as part of a proof of the Jordan curve theorem. To prove the theorem, Dehn observes that every polygon has at least three convex vertices. If one of these vertices, v, is not an ear, then it can be connected by a diagonal to another vertex x inside the triangle uvw formed by v and its two neighbors; x can be chosen to be the vertex within this triangle that is farthest from line uw. This diagonal decomposes the polygon into two smaller polygons, and repeated decomposition by ears and diagonals eventually produces a triangulation of the whole polygon, from which an ear can be found as a leaf of the dual tree.[9]
References
1. Meisters, G. H. (1975), "Polygons have ears", The American Mathematical Monthly, 82 (6): 648–651, doi:10.2307/2319703, JSTOR 2319703, MR 0367792.
2. O'Rourke, Joseph (1987), Art Gallery Theorems and Algorithms, International Series of Monographs on Computer Science, Oxford University Press, ISBN 0-19-503965-3, MR 0921437.
3. Held, M. (2001), "FIST: fast industrial-strength triangulation of polygons", Algorithmica, 30 (4): 563–596, doi:10.1007/s00453-001-0028-4, MR 1829495, S2CID 1317227
4. Highnam, P. T. (1982), "The ears of a polygon", Information Processing Letters, 15 (5): 196–198, doi:10.1016/0020-0190(82)90116-8, MR 0684250
5. ElGindy, Hossam; Everett, Hazel; Toussaint, Godfried (September 1993), "Slicing an ear using prune-and-search", Pattern Recognition Letters, 14 (9): 719–722, Bibcode:1993PaReL..14..719E, doi:10.1016/0167-8655(93)90141-y
6. Meisters, G. H. (1980), "Principal vertices, exposed points, and ears", The American Mathematical Monthly, 87 (4): 284–285, doi:10.2307/2321563, JSTOR 2321563, MR 0567710.
7. Toussaint, Godfried (1991), "Anthropomorphic polygons", The American Mathematical Monthly, 98 (1): 31–35, doi:10.2307/2324033, JSTOR 2324033, MR 1083611.
8. Yamanaka, Katsuhisa; Avis, David; Horiyama, Takashi; Okamoto, Yoshio; Uehara, Ryuhei; Yamauchi, Tanami (2021), "Algorithmic enumeration of surrounding polygons", Discrete Applied Mathematics, 303: 305–313, doi:10.1016/j.dam.2020.03.034, MR 4310502
9. Guggenheimer, H. (1977), "The Jordan curve theorem and an unpublished manuscript by Max Dehn" (PDF), Archive for History of Exact Sciences, 17 (2): 193–200, doi:10.1007/BF02464980, JSTOR 41133486, MR 0532231, S2CID 121684753.
External links
• The Two-Ears Theorem, Godfried Toussaint
Two-element Boolean algebra
In mathematics and abstract algebra, the two-element Boolean algebra is the Boolean algebra whose underlying set (or universe or carrier) B is the Boolean domain. The elements of the Boolean domain are 1 and 0 by convention, so that B = {0, 1}. Paul Halmos's name for this algebra "2" has some following in the literature, and will be employed here.
Definition
B is a partially ordered set and the elements of B are also its bounds.
An operation of arity n is a mapping from $B^{n}$ to B. Boolean algebra consists of two binary operations and unary complementation. The binary operations have been named and notated in various ways. Here they are called 'sum' and 'product', and notated by infix '+' and '∙', respectively. Sum and product commute and associate, as in the usual algebra of real numbers. As for the order of operations, brackets are decisive if present. Otherwise '∙' precedes '+'. Hence A ∙ B + C is parsed as (A ∙ B) + C and not as A ∙ (B + C). Complementation is denoted by writing an overbar over its argument. The numerical analog of the complement of X is 1 − X. In the language of universal algebra, a Boolean algebra is a $\langle B,+,\cdot ,{\overline {\,\cdot \,}},1,0\rangle $ algebra of type $\langle 2,2,1,0,0\rangle $.
Either one-to-one correspondence between {0,1} and {True,False} yields classical bivalent logic in equational form, with complementation read as NOT. If 1 is read as True, '+' is read as OR, and '∙' as AND, and vice versa if 1 is read as False. These two operations define a commutative semiring, known as the Boolean semiring.
Some basic identities
2 can be seen as grounded in the following trivial "Boolean" arithmetic:
${\begin{aligned}&1+1=1+0=0+1=1\\&0+0=0\\&0\cdot 0=0\cdot 1=1\cdot 0=0\\&1\cdot 1=1\\&{\overline {1}}=0\\&{\overline {0}}=1\end{aligned}}$
Note that:
• '+' and '∙' work exactly as in numerical arithmetic, except that 1+1=1. '+' and '∙' are derived by analogy from numerical arithmetic; simply set any nonzero number to 1.
• Swapping 0 and 1, and '+' and '∙' preserves truth; this is the essence of the duality pervading all Boolean algebras.
This Boolean arithmetic suffices to verify any equation of 2, including the axioms, by examining every possible assignment of 0s and 1s to each variable (see decision procedure).
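This exhaustive check is easy to mechanize; a sketch in Python, with `OR`, `AND`, and `NOT` standing for '+', '∙', and the overbar:

```python
from itertools import product

OR  = lambda a, b: a | b        # '+'
AND = lambda a, b: a & b        # '∙'
NOT = lambda a: 1 - a           # overbar

def holds(eq, nvars):
    """Decision procedure for 2: test the identity on every 0/1 assignment."""
    return all(eq(*v) for v in product((0, 1), repeat=nvars))

assert holds(lambda a: OR(a, a) == a, 1)                      # A + A = A
assert holds(lambda a: NOT(NOT(a)) == a, 1)                   # double complement
assert holds(lambda a, b, c:                                  # '∙' over '+'
             AND(a, OR(b, c)) == OR(AND(a, b), AND(a, c)), 3)
assert holds(lambda a, b, c:                                  # '+' over '∙', unlike
             OR(a, AND(b, c)) == AND(OR(a, b), OR(a, c)), 3)  # ordinary arithmetic
```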
The following equations may now be verified:
${\begin{aligned}&A+A=A\\&A\cdot A=A\\&A+0=A\\&A+1=1\\&A\cdot 0=0\\&{\overline {\overline {A}}}=A\end{aligned}}$
Each of '+' and '∙' distributes over the other:
• $\ A\cdot (B+C)=A\cdot B+A\cdot C;$
• $\ A+(B\cdot C)=(A+B)\cdot (A+C).$
That '∙' distributes over '+' agrees with elementary algebra, but not '+' over '∙'. For this and other reasons, a sum of products (leading to a NAND synthesis) is more commonly employed than a product of sums (leading to a NOR synthesis).
Each of '+' and '∙' can be defined in terms of the other and complementation:
• $A\cdot B={\overline {{\overline {A}}+{\overline {B}}}}$
• $A+B={\overline {{\overline {A}}\cdot {\overline {B}}}}.$
We only need one binary operation, and concatenation suffices to denote it. Hence concatenation and overbar suffice to notate 2. This notation is also that of Quine's Boolean term schemata. Letting (X) denote the complement of X and "()" denote either 0 or 1 yields the syntax of the primary algebra of G. Spencer-Brown's Laws of Form.
A basis for 2 is a set of equations, called axioms, from which all of the above equations (and more) can be derived. There are many known bases for all Boolean algebras and hence for 2. An elegant basis notated using only concatenation and overbar is:
1. $\ ABC=BCA$ (Concatenation commutes, associates)
2. ${\overline {A}}A=1$ (2 is a complemented lattice, with an upper bound of 1)
3. $\ A0=A$ (0 is the lower bound).
4. $A{\overline {AB}}=A{\overline {B}}$ (2 is a distributive lattice)
Where concatenation = OR, 1 = true, and 0 = false, or concatenation = AND, 1 = false, and 0 = true. (overbar is negation in both cases.)
If 0=1, (1)-(3) are the axioms for an abelian group.
(1) only serves to prove that concatenation commutes and associates. First assume that (1) associates from either the left or the right, then prove commutativity. Then prove association from the other direction. Associativity is simply association from the left and right combined.
This basis makes for an easy approach to proof, called "calculation" in Laws of Form, that proceeds by simplifying expressions to 0 or 1, by invoking axioms (2)–(4), and the elementary identities $AA=A,{\overline {\overline {A}}}=A,1A=1$, and the distributive law.
Metatheory
De Morgan's theorem states that if one does the following, in the given order, to any Boolean function:
• Complement every variable;
• Swap '+' and '∙' operators (taking care to add brackets to ensure the order of operations remains the same);
• Complement the result,
the result is logically equivalent to what you started with. Repeated application of De Morgan's theorem to parts of a function can be used to drive all complements down to the individual variables.
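A quick exhaustive check of the theorem for one example function (the choice of f is illustrative):

```python
NOT = lambda x: 1 - x

f      = lambda a, b: (a & b) | NOT(b)     # f(A,B) = A*B + complement(B)
f_swap = lambda a, b: (a | b) & NOT(b)     # same expression with '+' and '*' swapped

# Steps of the theorem: complement every variable, swap the operators,
# complement the result; the outcome equals the original function.
transformed = lambda a, b: NOT(f_swap(NOT(a), NOT(b)))
assert all(transformed(a, b) == f(a, b) for a in (0, 1) for b in (0, 1))
```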
A powerful and nontrivial metatheorem states that any identity of 2 holds for all Boolean algebras.[1] Conversely, an identity that holds for an arbitrary nontrivial Boolean algebra also holds in 2. Hence all identities of Boolean algebra are captured by 2. This theorem is useful because any equation in 2 can be verified by a decision procedure. Logicians refer to this fact as "2 is decidable". All known decision procedures require a number of steps that is an exponential function of the number of variables N appearing in the equation to be verified. Whether there exists a decision procedure whose steps are a polynomial function of N falls under the P = NP conjecture.
The above metatheorem does not hold if we consider the validity of more general first-order logic formulas instead of only atomic positive equalities. As an example consider the formula (x = 0) ∨ (x = 1). This formula is always true in a two-element Boolean algebra. In a four-element Boolean algebra whose domain is the powerset of $\{0,1\}$, this formula corresponds to the statement (x = ∅) ∨ (x = {0,1}) and is false when x is $\{1\}$. The decidability for the first-order theory of many classes of Boolean algebras can still be shown, using quantifier elimination or small model property (with the domain size computed as a function of the formula and generally larger than 2).
See also
• Boolean algebra
• Bounded set
• Lattice (order)
• Order theory
References
1. Halmos, Paul; Givant, Steven (2009). Introduction to Boolean Algebras. Undergraduate Texts in Mathematics. doi:10.1007/978-0-387-68436-9. ISBN 978-0-387-40293-2.
Further reading
Many elementary texts on Boolean algebra were published in the early years of the computer era. Perhaps the best of the lot, and one still in print, is:
• Mendelson, Elliot, 1970. Schaum's Outline of Boolean Algebra. McGraw–Hill.
The following items reveal how the two-element Boolean algebra is mathematically nontrivial.
• Stanford Encyclopedia of Philosophy: "The Mathematics of Boolean Algebra," by J. Donald Monk.
• Burris, Stanley N., and H.P. Sankappanavar, H. P., 1981. A Course in Universal Algebra. Springer-Verlag. ISBN 3-540-90578-2.
Line–line intersection
In Euclidean geometry, the intersection of a line and a line can be the empty set, a point, or another line. Distinguishing these cases and finding the intersection have uses, for example, in computer graphics, motion planning, and collision detection.
In three-dimensional Euclidean geometry, if two lines are not in the same plane, they have no point of intersection and are called skew lines. If they are in the same plane, however, there are three possibilities: if they coincide (are not distinct lines), they have an infinitude of points in common (namely all of the points on either of them); if they are distinct but have the same slope, they are said to be parallel and have no points in common; otherwise, they have a single point of intersection.
The distinguishing features of non-Euclidean geometry are the number and locations of possible intersections between two lines and the number of possible lines with no intersections (parallel lines) with a given line.
Formulas
See also: Skew lines § Formulas
A necessary condition for two lines to intersect is that they are in the same plane—that is, are not skew lines. Satisfaction of this condition is equivalent to the tetrahedron with vertices at two of the points on one line and two of the points on the other line being degenerate in the sense of having zero volume. For the algebraic form of this condition, see Skew lines § Testing for skewness.
Given two points on each line
First we consider the intersection of two lines L1 and L2 in two-dimensional space, with line L1 being defined by two distinct points (x1, y1) and (x2, y2), and line L2 being defined by two distinct points (x3, y3) and (x4, y4).[1]
The intersection P of line L1 and L2 can be defined using determinants.
$P_{x}={\frac {\begin{vmatrix}{\begin{vmatrix}x_{1}&y_{1}\\x_{2}&y_{2}\end{vmatrix}}&{\begin{vmatrix}x_{1}&1\\x_{2}&1\end{vmatrix}}\\\\{\begin{vmatrix}x_{3}&y_{3}\\x_{4}&y_{4}\end{vmatrix}}&{\begin{vmatrix}x_{3}&1\\x_{4}&1\end{vmatrix}}\end{vmatrix}}{\begin{vmatrix}{\begin{vmatrix}x_{1}&1\\x_{2}&1\end{vmatrix}}&{\begin{vmatrix}y_{1}&1\\y_{2}&1\end{vmatrix}}\\\\{\begin{vmatrix}x_{3}&1\\x_{4}&1\end{vmatrix}}&{\begin{vmatrix}y_{3}&1\\y_{4}&1\end{vmatrix}}\end{vmatrix}}}\,\!\qquad P_{y}={\frac {\begin{vmatrix}{\begin{vmatrix}x_{1}&y_{1}\\x_{2}&y_{2}\end{vmatrix}}&{\begin{vmatrix}y_{1}&1\\y_{2}&1\end{vmatrix}}\\\\{\begin{vmatrix}x_{3}&y_{3}\\x_{4}&y_{4}\end{vmatrix}}&{\begin{vmatrix}y_{3}&1\\y_{4}&1\end{vmatrix}}\end{vmatrix}}{\begin{vmatrix}{\begin{vmatrix}x_{1}&1\\x_{2}&1\end{vmatrix}}&{\begin{vmatrix}y_{1}&1\\y_{2}&1\end{vmatrix}}\\\\{\begin{vmatrix}x_{3}&1\\x_{4}&1\end{vmatrix}}&{\begin{vmatrix}y_{3}&1\\y_{4}&1\end{vmatrix}}\end{vmatrix}}}\,\!$
The determinants can be written out as:
${\begin{aligned}P_{x}&={\frac {(x_{1}y_{2}-y_{1}x_{2})(x_{3}-x_{4})-(x_{1}-x_{2})(x_{3}y_{4}-y_{3}x_{4})}{(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4})}}\\[4px]P_{y}&={\frac {(x_{1}y_{2}-y_{1}x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}y_{4}-y_{3}x_{4})}{(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4})}}\end{aligned}}$
When the two lines are parallel or coincident, the denominator is zero.
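The determinant formulas translate directly into code. The following sketch (Python; illustrative, with names of our choosing) returns None on a zero denominator, i.e. for parallel or coincident lines, and is not hardened against floating-point near-parallel cases:

```python
def line_intersection(p1, p2, p3, p4):
    """Intersection of the infinite lines through (p1, p2) and (p3, p4).

    A direct transcription of the determinant formulas above; returns
    None when the denominator is zero (parallel or coincident lines).
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:
        return None
    d1 = x1 * y2 - y1 * x2   # |x1 y1; x2 y2|
    d2 = x3 * y4 - y3 * x4   # |x3 y3; x4 y4|
    px = (d1 * (x3 - x4) - (x1 - x2) * d2) / denom
    py = (d1 * (y3 - y4) - (y1 - y2) * d2) / denom
    return (px, py)

# The lines y = x and y = -x + 2 meet at (1, 1).
print(line_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # → (1.0, 1.0)
```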
Given two points on each line segment
See also: Intersection_(geometry) § Two_line_segments
The intersection point above is for the infinitely long lines defined by the points, rather than the line segments between the points, and can produce an intersection point not contained in either of the two line segments. In order to find the position of the intersection in respect to the line segments, we can define lines L1 and L2 in terms of first degree Bézier parameters:
$L_{1}={\begin{bmatrix}x_{1}\\y_{1}\end{bmatrix}}+t{\begin{bmatrix}x_{2}-x_{1}\\y_{2}-y_{1}\end{bmatrix}},\qquad L_{2}={\begin{bmatrix}x_{3}\\y_{3}\end{bmatrix}}+u{\begin{bmatrix}x_{4}-x_{3}\\y_{4}-y_{3}\end{bmatrix}}$
(where t and u are real numbers). The intersection point of the lines is found with one of the following values of t or u, where
$t={\frac {\begin{vmatrix}x_{1}-x_{3}&x_{3}-x_{4}\\y_{1}-y_{3}&y_{3}-y_{4}\end{vmatrix}}{\begin{vmatrix}x_{1}-x_{2}&x_{3}-x_{4}\\y_{1}-y_{2}&y_{3}-y_{4}\end{vmatrix}}}={\frac {(x_{1}-x_{3})(y_{3}-y_{4})-(y_{1}-y_{3})(x_{3}-x_{4})}{(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4})}}$
and
$u={\frac {\begin{vmatrix}x_{1}-x_{3}&x_{1}-x_{2}\\y_{1}-y_{3}&y_{1}-y_{2}\end{vmatrix}}{\begin{vmatrix}x_{1}-x_{2}&x_{3}-x_{4}\\y_{1}-y_{2}&y_{3}-y_{4}\end{vmatrix}}}={\frac {(x_{1}-x_{3})(y_{1}-y_{2})-(y_{1}-y_{3})(x_{1}-x_{2})}{(x_{1}-x_{2})(y_{3}-y_{4})-(y_{1}-y_{2})(x_{3}-x_{4})}},$
with
$(P_{x},P_{y})={\bigl (}x_{1}+t(x_{2}-x_{1}),\;y_{1}+t(y_{2}-y_{1}){\bigr )}\quad {\text{or}}\quad (P_{x},P_{y})={\bigl (}x_{3}+u(x_{4}-x_{3}),\;y_{3}+u(y_{4}-y_{3}){\bigr )}$
The segments intersect if and only if 0 ≤ t ≤ 1 and 0 ≤ u ≤ 1: the intersection point falls within the first line segment when 0 ≤ t ≤ 1, and within the second when 0 ≤ u ≤ 1. These inequalities can be tested without the need for division, allowing rapid determination of the existence of any line segment intersection before calculating its exact point.[2]
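These formulas transcribe directly into code; the sketch below (Python, with names of our choosing) performs the range tests on t and u by comparing numerators against the denominator, without dividing:

```python
def segment_intersection(p1, p2, p3, p4):
    """Intersection point of segments p1–p2 and p3–p4, or None.

    Implements the Bézier-parameter formulas above; the sign tests on
    the numerators and the denominator reject non-intersecting segments
    before any division is performed.
    """
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    denom = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if denom == 0:               # parallel or collinear: no unique point
        return None
    t_num = (x1 - x3) * (y3 - y4) - (y1 - y3) * (x3 - x4)
    u_num = (x1 - x3) * (y1 - y2) - (y1 - y3) * (x1 - x2)
    # Test 0 <= t <= 1 and 0 <= u <= 1 without dividing:
    if denom > 0:
        if not (0 <= t_num <= denom and 0 <= u_num <= denom):
            return None
    else:
        if not (denom <= t_num <= 0 and denom <= u_num <= 0):
            return None
    t = t_num / denom
    return (x1 + t * (x2 - x1), y1 + t * (y2 - y1))

print(segment_intersection((0, 0), (2, 2), (0, 2), (2, 0)))  # → (1.0, 1.0)
```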
Given two line equations
The x and y coordinates of the point of intersection of two non-vertical lines can easily be found using the following substitutions and rearrangements.
Suppose that two lines have the equations y = ax + c and y = bx + d where a and b are the slopes (gradients) of the lines and where c and d are the y-intercepts of the lines. At the point where the two lines intersect (if they do), both y coordinates will be the same, hence the following equality:
$ax+c=bx+d.$
We can rearrange this expression in order to extract the value of x,
$ax-bx=d-c,$
and so,
$x={\frac {d-c}{a-b}}.$
To find the y coordinate, all we need to do is substitute the value of x into either one of the two line equations, for example, into the first:
$y=a{\frac {d-c}{a-b}}+c.$
Hence, the point of intersection is
$P=\left({\frac {d-c}{a-b}},a{\frac {d-c}{a-b}}+c\right).$
Note that if a = b then the two lines are parallel. If c ≠ d as well, the lines are distinct and there is no intersection; otherwise the two lines are identical and intersect at every point.
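As a quick check of the substitution above (Python; illustrative, with a name of our choosing):

```python
def intersect_slope_intercept(a, c, b, d):
    """Intersection of y = a*x + c and y = b*x + d; None when parallel."""
    if a == b:
        return None          # parallel (identical lines if also c == d)
    x = (d - c) / (a - b)
    return (x, a * x + c)

# y = 2x + 1 and y = -x + 4 meet where 2x + 1 = -x + 4, i.e. x = 1.
print(intersect_slope_intercept(2, 1, -1, 4))  # → (1.0, 3.0)
```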
Using homogeneous coordinates
By using homogeneous coordinates, the intersection point of two implicitly defined lines can be determined quite easily. In 2D, every point can be defined as a projection of a 3D point, given as the ordered triple (x, y, w). The mapping from 3D to 2D coordinates is (x′, y′) = (x/w, y/w). We can convert 2D points to homogeneous coordinates by defining them as (x, y, 1).
Assume that we want to find intersection of two infinite lines in 2-dimensional space, defined as a1x + b1y + c1 = 0 and a2x + b2y + c2 = 0. We can represent these two lines in line coordinates as U1 = (a1, b1, c1) and U2 = (a2, b2, c2). The intersection P′ of two lines is then simply given by[3]
$P'=(a_{p},b_{p},c_{p})=U_{1}\times U_{2}=(b_{1}c_{2}-b_{2}c_{1},a_{2}c_{1}-a_{1}c_{2},a_{1}b_{2}-a_{2}b_{1})$
If cp = 0, the lines do not intersect.
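A sketch of the homogeneous-coordinate computation (Python; the helper names are ours). The cross product of the two line-coordinate triples gives the intersection in homogeneous form, which is then dehomogenized:

```python
def cross(u, v):
    """Cross product of 3-vectors, applied here to line coordinates."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

def intersect_homogeneous(l1, l2):
    """Lines given as (a, b, c) with a*x + b*y + c = 0."""
    ap, bp, cp = cross(l1, l2)
    if cp == 0:
        return None               # parallel lines: point at infinity
    return (ap / cp, bp / cp)

# x - y = 0 and x + y - 2 = 0 meet at (1, 1).
print(intersect_homogeneous((1, -1, 0), (1, 1, -2)))  # → (1.0, 1.0)
```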
More than two lines
See also: Skew lines § More than two lines
The intersection of two lines can be generalized to involve additional lines. The existence of and expression for the n-line intersection problem are as follows.
In two dimensions
In two dimensions, more than two lines almost certainly do not intersect at a single point. To determine if they do and, if so, to find the intersection point, write the ith equation (i = 1, …, n) as
${\begin{bmatrix}a_{i1}&a_{i2}\end{bmatrix}}{\begin{bmatrix}x\\y\end{bmatrix}}=b_{i},$
and stack these equations into matrix form as
$\mathbf {A} \mathbf {w} =\mathbf {b} ,$
where the ith row of the n × 2 matrix A is [ai1, ai2], w is the 2 × 1 column vector [x, y]T, and the ith element of the column vector b is bi. If A has independent columns, its rank is 2. Then if and only if the rank of the augmented matrix [A | b] is also 2, there exists a solution of the matrix equation and thus an intersection point of the n lines. The intersection point, if it exists, is given by
$\mathbf {w} =\mathbf {A} ^{\mathrm {g} }\mathbf {b} =\left(\mathbf {A} ^{\mathsf {T}}\mathbf {A} \right)^{-1}\mathbf {A} ^{\mathsf {T}}\mathbf {b} ,$
where Ag is the Moore–Penrose generalized inverse of A (which has the form shown because A has full column rank). Alternatively, the solution can be found by jointly solving any two independent equations. But if the rank of A is only 1, then if the rank of the augmented matrix is 2 there is no solution but if its rank is 1 then all of the lines coincide with each other.
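The rank test and least-squares solution can be sketched as follows (Python with NumPy; illustrative, with names of our choosing — np.linalg.lstsq stands in for the explicit Moore–Penrose formula, to which it is equivalent when A has full column rank):

```python
import numpy as np

def common_point(A, b, tol=1e-9):
    """Unique intersection point of n lines a_i1*x + a_i2*y = b_i,
    if one exists; None otherwise. Follows the rank test above."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float).reshape(-1, 1)
    aug = np.hstack([A, b])
    if np.linalg.matrix_rank(A, tol) == 2 == np.linalg.matrix_rank(aug, tol):
        w, *_ = np.linalg.lstsq(A, b, rcond=None)
        return w.ravel()
    return None

# x + y = 2, x - y = 0, and y = 1 all pass through (1, 1):
print(common_point([[1, 1], [1, -1], [0, 1]], [2, 0, 1]))  # → [1. 1.] (up to rounding)
```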
In three dimensions
The above approach can be readily extended to three dimensions. In three or more dimensions, even two lines almost certainly do not intersect; pairs of non-parallel lines that do not intersect are called skew lines. But if an intersection does exist it can be found, as follows.
In three dimensions a line is represented by the intersection of two planes, each of which has an equation of the form
${\begin{bmatrix}a_{i1}&a_{i2}&a_{i3}\end{bmatrix}}{\begin{bmatrix}x\\y\\z\end{bmatrix}}=b_{i}.$
Thus a set of n lines can be represented by 2n equations in the 3-dimensional coordinate vector w:
$\mathbf {A} \mathbf {w} =\mathbf {b} $
where now A is 2n × 3 and b is 2n × 1. As before there is a unique intersection point if and only if A has full column rank and the augmented matrix [A | b] does not, and the unique intersection if it exists is given by
$\mathbf {w} =\left(\mathbf {A} ^{\mathsf {T}}\mathbf {A} \right)^{-1}\mathbf {A} ^{\mathsf {T}}\mathbf {b} .$
Nearest points to skew lines
Main article: Skew lines § Nearest points
In two or more dimensions, we can usually find a point that is mutually closest to two or more lines in a least-squares sense.
In two dimensions
In the two-dimensional case, first, represent line i as a point pi on the line and a unit normal vector n̂i, perpendicular to that line. That is, if x1 and x2 are points on line 1, then let p1 = x1 and let
$\mathbf {\hat {n}} _{1}:={\begin{bmatrix}0&-1\\1&0\end{bmatrix}}{\frac {\mathbf {x} _{2}-\mathbf {x} _{1}}{\|\mathbf {x} _{2}-\mathbf {x} _{1}\|}}$
which is the unit vector along the line, rotated by a right angle.
The distance from a point x to the line (p, n̂) is given by
$d{\bigl (}\mathbf {x} ,(\mathbf {p} ,\mathbf {\hat {n}} ){\bigr )}={\bigl |}(\mathbf {x} -\mathbf {p} )\cdot \mathbf {\hat {n}} {\bigr |}=\left|(\mathbf {x} -\mathbf {p} )^{\mathsf {T}}\mathbf {\hat {n}} \right|=\left|\mathbf {\hat {n}} ^{\mathsf {T}}(\mathbf {x} -\mathbf {p} )\right|={\sqrt {(\mathbf {x} -\mathbf {p} )^{\mathsf {T}}\mathbf {\hat {n}} \mathbf {\hat {n}} ^{\mathsf {T}}(\mathbf {x} -\mathbf {p} )}}.$
And so the squared distance from a point x to a line is
$d{\bigl (}\mathbf {x} ,(\mathbf {p} ,\mathbf {\hat {n}} ){\bigr )}^{2}=(\mathbf {x} -\mathbf {p} )^{\mathsf {T}}\left(\mathbf {\hat {n}} \mathbf {\hat {n}} ^{\mathsf {T}}\right)(\mathbf {x} -\mathbf {p} ).$
The sum of squared distances to many lines is the cost function:
$E(\mathbf {x} )=\sum _{i}(\mathbf {x} -\mathbf {p} _{i})^{\mathsf {T}}\left(\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)(\mathbf {x} -\mathbf {p} _{i}).$
This can be rearranged:
${\begin{aligned}E(\mathbf {x} )&=\sum _{i}\mathbf {x} ^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {x} -\mathbf {x} ^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}-\mathbf {p} _{i}^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {x} +\mathbf {p} _{i}^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}\\&=\mathbf {x} ^{\mathsf {T}}\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\mathbf {x} -2\mathbf {x} ^{\mathsf {T}}\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}\right)+\sum _{i}\mathbf {p} _{i}^{\mathsf {T}}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}.\end{aligned}}$
To find the minimum, we differentiate with respect to x and set the result equal to the zero vector:
${\frac {\partial E(\mathbf {x} )}{\partial \mathbf {x} }}={\boldsymbol {0}}=2\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\mathbf {x} -2\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}\right)$
so
$\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\mathbf {x} =\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}$
and so
$\mathbf {x} =\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)^{-1}\left(\sum _{i}\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\mathbf {p} _{i}\right).$
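The closed form above can be sketched as follows (Python with NumPy; the function name is ours, and the normals are assumed to already be unit vectors):

```python
import numpy as np

def nearest_point_2d(points, normals):
    """Least-squares nearest point to lines given as (p_i, n_i) pairs,
    per the closed form above; `normals` must be unit vectors."""
    S = np.zeros((2, 2))
    c = np.zeros(2)
    for p, n in zip(points, normals):
        p, n = np.asarray(p, float), np.asarray(n, float)
        nnT = np.outer(n, n)      # the rank-one matrix n n^T
        S += nnT
        c += nnT @ p
    return np.linalg.solve(S, c)

# The lines x = 1 (normal (1, 0)) and y = 2 (normal (0, 1)) meet at (1, 2):
print(nearest_point_2d([(1, 0), (0, 2)], [(1, 0), (0, 1)]))  # → [1. 2.]
```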
In more than two dimensions
While n̂i is not well-defined in more than two dimensions, this can be generalized to any number of dimensions by noting that n̂i n̂iT is simply the symmetric matrix with all eigenvalues equal to one except for a zero eigenvalue in the direction along the line; applied to the difference between pi and another point, this matrix defines a seminorm whose value is the distance from that point to the line. In any number of dimensions, if v̂i is a unit vector along the ith line, then
$\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}$ becomes $\mathbf {I} -\mathbf {\hat {v}} _{i}\mathbf {\hat {v}} _{i}^{\mathsf {T}}$
where I is the identity matrix, and so[4]
$x=\left(\sum _{i}\mathbf {I} -\mathbf {\hat {v}} _{i}\mathbf {\hat {v}} _{i}^{\mathsf {T}}\right)^{-1}\left(\sum _{i}\left(\mathbf {I} -\mathbf {\hat {v}} _{i}\mathbf {\hat {v}} _{i}^{\mathsf {T}}\right)\mathbf {p} _{i}\right).$
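The same computation with direction vectors, valid in any dimension, can be sketched as follows (Python with NumPy; illustrative, assuming unit direction vectors):

```python
import numpy as np

def nearest_point(origins, directions):
    """Least-squares nearest point to lines p_i + s * v_i in any dimension,
    using I - v v^T as in the formula above; `directions` must be unit."""
    dim = len(origins[0])
    S = np.zeros((dim, dim))
    c = np.zeros(dim)
    for p, v in zip(origins, directions):
        p, v = np.asarray(p, float), np.asarray(v, float)
        M = np.eye(dim) - np.outer(v, v)   # projector orthogonal to the line
        S += M
        c += M @ p
    return np.linalg.solve(S, c)

# Two skew lines in 3D: the x-axis, and the line (0, 1, s).
# The mutually closest point lies midway between them.
print(nearest_point([(0, 0, 0), (0, 1, 0)],
                    [(1, 0, 0), (0, 0, 1)]))  # closest point is (0, 0.5, 0)
```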
General derivation
In order to find the intersection point of a set of lines, we calculate the point with minimum distance to them. Each line is defined by an origin ai and a unit direction vector n̂i. The square of the distance from a point p to one of the lines is given from Pythagoras:
$d_{i}^{2}=\left\|\mathbf {p} -\mathbf {a} _{i}\right\|^{2}-\left(\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\mathbf {\hat {n}} _{i}\right)^{2}=\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\left(\mathbf {p} -\mathbf {a} _{i}\right)-\left(\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\mathbf {\hat {n}} _{i}\right)^{2}$
where (p − ai)T n̂i is the projection of p − ai on line i. The sum of the squared distances to all lines is
$\sum _{i}d_{i}^{2}=\sum _{i}\left({\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}}\left(\mathbf {p} -\mathbf {a} _{i}\right)-{\left(\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\mathbf {\hat {n}} _{i}\right)^{2}}\right)$
To minimize this expression, we differentiate it with respect to p.
$\sum _{i}\left(2\left(\mathbf {p} -\mathbf {a} _{i}\right)-2\left(\left(\mathbf {p} -\mathbf {a} _{i}\right)^{\mathsf {T}}\mathbf {\hat {n}} _{i}\right)\mathbf {\hat {n}} _{i}\right)={\boldsymbol {0}}$
$\sum _{i}\left(\mathbf {p} -\mathbf {a} _{i}\right)=\sum _{i}\left(\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\left(\mathbf {p} -\mathbf {a} _{i}\right)$
which results in
$\left(\sum _{i}\left(\mathbf {I} -\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\right)\mathbf {p} =\sum _{i}\left(\mathbf {I} -\mathbf {\hat {n}} _{i}\mathbf {\hat {n}} _{i}^{\mathsf {T}}\right)\mathbf {a} _{i}$
where I is the identity matrix. This is a matrix Sp = C, with solution p = S+C, where S+ is the pseudo-inverse of S.
Non-Euclidean geometry
See also: Parallel postulate
In spherical geometry, any two lines intersect.[5]
In hyperbolic geometry, given any line and any point, there are infinitely many lines through that point that do not intersect the given line.[5]
See also
• Line segment intersection
• Line intersection in projective space
• Distance between two parallel lines
• Distance from a point to a line
• Line–plane intersection
• Parallel postulate
• Triangulation (computer vision)
• Intersection (Euclidean geometry) § Two line segments
References
1. Weisstein, Eric W. "Line-Line Intersection". MathWorld. Retrieved 2008-01-10.
2. Antonio, Franklin (1992). "Chapter IV.6: Faster Line Segment Intersection". In Kirk, David (ed.). Graphics Gems III. Academic Press, Inc. pp. 199–202. ISBN 0-12-059756-X.
3. Birchfield, Stanley (1998-04-23). "Homogeneous coordinates". robotics.stanford.edu. Archived from the original on 2000-09-29. Retrieved 2015-08-18.
4. Traa, Johannes (2013). "Least-Squares Intersection of Lines" (PDF). cal.cs.illinois.edu. Archived from the original (PDF) on 2017-09-12. Retrieved 2018-08-30.
5. "Exploring Hyperbolic Space" (PDF). math.berkeley.edu. Retrieved 2022-06-03.
External links
• Distance between Lines and Segments with their Closest Point of Approach, applicable to two, three, or more dimensions.
| Wikipedia |
Twofish
In cryptography, Twofish is a symmetric key block cipher with a block size of 128 bits and key sizes up to 256 bits. It was one of the five finalists of the Advanced Encryption Standard contest, but it was not selected for standardization. Twofish is related to the earlier block cipher Blowfish.
Twofish
The Twofish algorithm
General
DesignersBruce Schneier
First published1998
Derived fromBlowfish, SAFER, Square
Related toThreefish
CertificationAES finalist
Cipher detail
Key sizes128, 192 or 256 bits
Block sizes128 bits
StructureFeistel network
Rounds16
Best public cryptanalysis
Truncated differential cryptanalysis requiring roughly 2^51 chosen plaintexts.[1] Impossible differential attack that breaks 6 rounds out of 16 of the 256-bit key version using 2^256 steps.[2]
Twofish's distinctive features are the use of pre-computed key-dependent S-boxes, and a relatively complex key schedule. One half of an n-bit key is used as the actual encryption key and the other half of the n-bit key is used to modify the encryption algorithm (key-dependent S-boxes). Twofish borrows some elements from other designs; for example, the pseudo-Hadamard transform[3] (PHT) from the SAFER family of ciphers. Twofish has a Feistel structure like DES. Twofish also employs a Maximum Distance Separable matrix.
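For illustration, the pseudo-Hadamard transform mentioned above mixes two 32-bit words a and b as (a + b, a + 2b) mod 2^32. This toy sketch (Python; not a Twofish implementation, names are ours) shows the transform and its inverse:

```python
MASK32 = 0xFFFFFFFF

def pht(a, b):
    """32-bit pseudo-Hadamard transform: (a', b') = (a + b, a + 2b) mod 2^32."""
    return (a + b) & MASK32, (a + 2 * b) & MASK32

def pht_inverse(ap, bp):
    """Inverse transform: (a, b) = (2a' - b', b' - a') mod 2^32."""
    return (2 * ap - bp) & MASK32, (bp - ap) & MASK32

a, b = 0x01234567, 0x89ABCDEF
assert pht_inverse(*pht(a, b)) == (a, b)   # the PHT is invertible mod 2^32
```

Invertibility is what lets a cipher use the PHT as a mixing step without losing information.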
When it was introduced in 1998, Twofish was slightly slower than Rijndael (the chosen algorithm for Advanced Encryption Standard) for 128-bit keys, but somewhat faster for 256-bit keys. Since 2008, virtually all AMD and Intel processors have included hardware acceleration of the Rijndael algorithm via the AES instruction set; Rijndael implementations that use the instruction set are now orders of magnitude faster than (software) Twofish implementations.[4]
Twofish was designed by Bruce Schneier, John Kelsey, Doug Whiting, David Wagner, Chris Hall, and Niels Ferguson; the "extended Twofish team", which met to perform further cryptanalysis of Twofish and of other AES contest entrants, also included Stefan Lucks, Tadayoshi Kohno, and Mike Stay.
The Twofish cipher has not been patented, and the reference implementation has been placed in the public domain. As a result, the Twofish algorithm is free for anyone to use without any restrictions whatsoever. It is one of a few ciphers included in the OpenPGP standard (RFC 4880). However, Twofish has seen less widespread usage than Blowfish, which has been available longer.
Performance
During the design of Twofish, performance was always an important factor. It was designed to allow for several layers of performance trade offs, depending on the importance of encryption speed, memory usage, hardware gate count, key setup and other parameters. This allows a highly flexible algorithm, which can be implemented in a variety of applications.
There are multiple space–time tradeoffs that can be made, in software as well as in hardware for Twofish. An example of such a tradeoff would be the precomputation of round subkeys or s-boxes, which can lead to speed increases of a factor of two or more. These come, however, at the cost of more RAM needed to store them.
The estimates in the table below are all based on existing 0.35 μm CMOS technology.
Hardware trade offs (128-bit key)[5]
Gate counts | h blocks | Clocks per block | Pipeline levels | Clock speed | Throughput (Mbit/s) | Startup clocks | Comments
14000 | 1 | 64 | 1 | 40 MHz | 80 | 4 | subkeys on the fly
19000 | 1 | 32 | 1 | 40 MHz | 160 | 40 |
23000 | 2 | 16 | 1 | 40 MHz | 320 | 20 |
26000 | 2 | 32 | 2 | 80 MHz | 640 | 20 |
28000 | 2 | 48 | 3 | 120 MHz | 960 | 20 |
30000 | 2 | 64 | 4 | 150 MHz | 1200 | 20 |
80000 | 2 | 16 | 1 | 80 MHz | 640 | 300 | S-box RAMs
Cryptanalysis
In 1999, Niels Ferguson published an impossible differential attack that breaks 6 rounds out of 16 of the 256-bit key version using 2^256 steps.[2]
As of 2000, the best published cryptanalysis of the Twofish block cipher is a truncated differential cryptanalysis of the full 16-round version. The paper claims that the probability of truncated differentials is 2^−57.3 per block and that it will take roughly 2^51 chosen plaintexts (32 petabytes worth of data) to find a good pair of truncated differentials.[6]
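The data estimate follows from Twofish's 16-byte (128-bit) block size; taking a petabyte here to mean 2^50 bytes, the arithmetic can be checked directly:

```python
blocks = 2 ** 51                      # chosen plaintexts
bytes_total = blocks * 16             # one 128-bit block = 16 bytes
petabytes = bytes_total / 2 ** 50     # 1 PB taken as 2^50 bytes
print(petabytes)                      # → 32.0
```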
Bruce Schneier responded in a 2005 blog entry that this paper did not present a full cryptanalytic attack, but only some hypothesized differential characteristics: "But even from a theoretical perspective, Twofish isn't even remotely broken. There have been no extensions to these results since they were published in 2000."[7]
See also
• Threefish
• Advanced Encryption Standard
• Data Encryption Standard
References
1. Shiho Moriai; Yiqun Lisa Yin (2000). "Cryptanalysis of Twofish (II)" (PDF). Retrieved 2013-01-14.
2. Niels Ferguson (1999-10-05). "Impossible differentials in Twofish" (PDF). Retrieved 2013-01-14.
3. "Team Men In Black Presents: TwoFish" (PDF). Archived from the original (PDF) on 26 September 2017.
4. Bruce Schneier; Doug Whiting (2000-04-07). "A Performance Comparison of the Five AES Finalists" (PDF/PostScript). Retrieved 2013-01-14.
5. Schneier, Bruce (15 June 1998). "Twofish: A 128-Bit Block Cipher" (PDF). Counterpane: 68.
6. Shiho Moriai; Yiqun Lisa Yin (2000). "Cryptanalysis of Twofish (II)" (PDF). Retrieved 2013-01-14.
7. Schneier, Bruce (2005-11-23). "Twofish Cryptanalysis Rumors". Schneier on Security blog. Retrieved 2013-01-14.
Articles
• Bruce Schneier; John Kelsey; Doug Whiting; David Wagner; Chris Hall; Niels Ferguson (1998-06-15). "The Twofish Encryption Algorithm" (PDF/PostScript). Retrieved 2013-01-14.
• Bruce Schneier; John Kelsey; Doug Whiting; David Wagner; Chris Hall; Niels Ferguson (1999-03-22). The Twofish Encryption Algorithm: A 128-Bit Block Cipher. New York City: John Wiley & Sons. ISBN 0-471-35381-7.
External links
• Twofish web page, with full specifications, free source code, and other Twofish resources by Bruce Schneier
• 256 bit ciphers – TWOFISH reference implementation and derived code
• Products that Use Twofish by Bruce Schneier
• Better algorithm: Rijndael or TwoFish? by sci.crypt
• Standard Cryptographic Algorithm Naming: Twofish
Block ciphers (security summary)
Common
algorithms
• AES
• Blowfish
• DES (internal mechanics, Triple DES)
• Serpent
• Twofish
Less common
algorithms
• ARIA
• Camellia
• CAST-128
• GOST
• IDEA
• LEA
• RC2
• RC5
• RC6
• SEED
• Skipjack
• TEA
• XTEA
Other
algorithms
• 3-Way
• Akelarre
• Anubis
• BaseKing
• BassOmatic
• BATON
• BEAR and LION
• CAST-256
• Chiasmus
• CIKS-1
• CIPHERUNICORN-A
• CIPHERUNICORN-E
• CLEFIA
• CMEA
• Cobra
• COCONUT98
• Crab
• Cryptomeria/C2
• CRYPTON
• CS-Cipher
• DEAL
• DES-X
• DFC
• E2
• FEAL
• FEA-M
• FROG
• G-DES
• Grand Cru
• Hasty Pudding cipher
• Hierocrypt
• ICE
• IDEA NXT
• Intel Cascade Cipher
• Iraqi
• Kalyna
• KASUMI
• KeeLoq
• KHAZAD
• Khufu and Khafre
• KN-Cipher
• Kuznyechik
• Ladder-DES
• LOKI (97, 89/91)
• Lucifer
• M6
• M8
• MacGuffin
• Madryga
• MAGENTA
• MARS
• Mercy
• MESH
• MISTY1
• MMB
• MULTI2
• MultiSwap
• New Data Seal
• NewDES
• Nimbus
• NOEKEON
• NUSH
• PRESENT
• Prince
• Q
• REDOC
• Red Pike
• S-1
• SAFER
• SAVILLE
• SC2000
• SHACAL
• SHARK
• Simon
• SM4
• Speck
• Spectr-H64
• Square
• SXAL/MBAL
• Threefish
• Treyfer
• UES
• xmx
• XXTEA
• Zodiac
Design
• Feistel network
• Key schedule
• Lai–Massey scheme
• Product cipher
• S-box
• P-box
• SPN
• Confusion and diffusion
• Round
• Avalanche effect
• Block size
• Key size
• Key whitening (Whitening transformation)
Attack
(cryptanalysis)
• Brute-force (EFF DES cracker)
• MITM
• Biclique attack
• 3-subset MITM attack
• Linear (Piling-up lemma)
• Differential
• Impossible
• Truncated
• Higher-order
• Differential-linear
• Distinguishing (Known-key)
• Integral/Square
• Boomerang
• Mod n
• Related-key
• Slide
• Rotational
• Side-channel
• Timing
• Power-monitoring
• Electromagnetic
• Acoustic
• Differential-fault
• XSL
• Interpolation
• Partitioning
• Rubber-hose
• Black-bag
• Davies
• Rebound
• Weak key
• Tau
• Chi-square
• Time/memory/data tradeoff
Standardization
• AES process
• CRYPTREC
• NESSIE
Utilization
• Initialization vector
• Mode of operation
• Padding
Cryptography
General
• History of cryptography
• Outline of cryptography
• Cryptographic protocol
• Authentication protocol
• Cryptographic primitive
• Cryptanalysis
• Cryptocurrency
• Cryptosystem
• Cryptographic nonce
• Cryptovirology
• Hash function
• Cryptographic hash function
• Key derivation function
• Digital signature
• Kleptography
• Key (cryptography)
• Key exchange
• Key generator
• Key schedule
• Key stretching
• Keygen
• Cryptojacking malware
• Ransomware
• Random number generation
• Cryptographically secure pseudorandom number generator (CSPRNG)
• Pseudorandom noise (PRN)
• Secure channel
• Insecure channel
• Subliminal channel
• Encryption
• Decryption
• End-to-end encryption
• Harvest now, decrypt later
• Information-theoretic security
• Plaintext
• Codetext
• Ciphertext
• Shared secret
• Trapdoor function
• Trusted timestamping
• Key-based routing
• Onion routing
• Garlic routing
• Kademlia
• Mix network
Mathematics
• Cryptographic hash function
• Block cipher
• Stream cipher
• Symmetric-key algorithm
• Authenticated encryption
• Public-key cryptography
• Quantum key distribution
• Quantum cryptography
• Post-quantum cryptography
• Message authentication code
• Random numbers
• Steganography
• Category
| Wikipedia |
Tychonoff's theorem
In mathematics, Tychonoff's theorem states that the product of any collection of compact topological spaces is compact with respect to the product topology. The theorem is named after Andrey Nikolayevich Tikhonov (whose surname sometimes is transcribed Tychonoff), who proved it first in 1930 for powers of the closed unit interval and in 1935 stated the full theorem along with the remark that its proof was the same as for the special case. The earliest known published proof is contained in a 1935 article: Tychonoff, A., "Über einen Funktionenraum", Mathematische Annalen, 111, pp. 762–766 (1935). (This reference is mentioned in "Topology" by Hocking and Young, Dover Publications, Inc.)
Tychonoff's theorem is often considered as perhaps the single most important result in general topology (along with Urysohn's lemma).[1] The theorem is also valid for topological spaces based on fuzzy sets.[2]
Topological definitions
The theorem depends crucially upon the precise definitions of compactness and of the product topology; in fact, Tychonoff's 1935 paper defines the product topology for the first time. Conversely, part of its importance is to give confidence that these particular definitions are the most useful (i.e. most well-behaved) ones.
Indeed, the Heine–Borel definition of compactness—that every covering of a space by open sets admits a finite subcovering—is relatively recent. More popular in the 19th and early 20th centuries was the Bolzano-Weierstrass criterion that every bounded infinite sequence admits a convergent subsequence, now called sequential compactness. These conditions are equivalent for metrizable spaces, but neither one implies the other in the class of all topological spaces.
It is almost trivial to prove that the product of two sequentially compact spaces is sequentially compact—one passes to a subsequence for the first component and then a subsubsequence for the second component. An only slightly more elaborate "diagonalization" argument establishes the sequential compactness of a countable product of sequentially compact spaces. However, the product of continuum many copies of the closed unit interval (with its usual topology) fails to be sequentially compact with respect to the product topology, even though it is compact by Tychonoff's theorem (e.g., see Wilansky 1970, p. 134).
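The diagonal argument alluded to above can be sketched as follows (a standard construction, not taken from Tychonoff's paper):

```latex
% Diagonal argument: a countable product of sequentially compact
% spaces is sequentially compact.
\begin{align*}
&\text{Given } (x^{(n)})_{n} \text{ in } \prod_{k \in \mathbb{N}} X_k,
 \text{ choose a subsequence } (x^{(n_{1,j})})_j \\
&\quad\text{whose first coordinates converge in } X_1; \\
&\text{refine it to } (x^{(n_{2,j})})_j
 \text{ whose second coordinates also converge; and so on.} \\
&\text{The diagonal subsequence } \bigl(x^{(n_{j,j})}\bigr)_j
 \text{ converges in every coordinate,} \\
&\quad\text{hence in the product topology on } \textstyle\prod_k X_k.
\end{align*}
```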
This is a critical failure: if X is a completely regular Hausdorff space, there is a natural embedding from X into [0,1]C(X,[0,1]), where C(X,[0,1]) is the set of continuous maps from X to [0,1]. The compactness of [0,1]C(X,[0,1]) thus shows that every completely regular Hausdorff space embeds in a compact Hausdorff space (or, can be "compactified".) This construction is the Stone–Čech compactification. Conversely, all subspaces of compact Hausdorff spaces are completely regular Hausdorff, so this characterizes the completely regular Hausdorff spaces as those that can be compactified. Such spaces are now called Tychonoff spaces.
Applications
Tychonoff's theorem has been used to prove many other mathematical theorems. These include theorems about compactness of certain spaces such as the Banach–Alaoglu theorem on the weak-* compactness of the unit ball of the dual space of a normed vector space, and the Arzelà–Ascoli theorem characterizing the sequences of functions in which every subsequence has a uniformly convergent subsequence. They also include statements less obviously related to compactness, such as the De Bruijn–Erdős theorem stating that every minimal k-chromatic graph is finite, and the Curtis–Hedlund–Lyndon theorem providing a topological characterization of cellular automata.
As a rule of thumb, any sort of construction that takes as input a fairly general object (often of an algebraic, or topological-algebraic nature) and outputs a compact space is likely to use Tychonoff: e.g., the Gelfand space of maximal ideals of a commutative C*-algebra, the Stone space of maximal ideals of a Boolean algebra, and the Berkovich spectrum of a commutative Banach ring.
Proofs of Tychonoff's theorem
1) Tychonoff's 1930 proof used the concept of a complete accumulation point.
2) The theorem is a quick corollary of the Alexander subbase theorem.
More modern proofs have been motivated by the following considerations: the approach to compactness via convergence of subsequences leads to a simple and transparent proof in the case of countable index sets. However, the approach to convergence in a topological space using sequences is sufficient when the space satisfies the first axiom of countability (as metrizable spaces do), but generally not otherwise. Moreover, the product of uncountably many metrizable spaces, each with at least two points, fails to be first countable. So it is natural to hope that a suitable notion of convergence in arbitrary spaces will lead to a compactness criterion generalizing sequential compactness in metrizable spaces that will be as easily applied to deduce the compactness of products. This has turned out to be the case.
3) The theory of convergence via filters, due to Henri Cartan and developed by Bourbaki in 1937, leads to the following criterion: assuming the ultrafilter lemma, a space is compact if and only if each ultrafilter on the space converges. With this in hand, the proof becomes easy: the (filter generated by the) image of an ultrafilter on the product space under any projection map is an ultrafilter on the factor space, which therefore converges, to at least one xi. One then shows that the original ultrafilter converges to x = (xi). In his textbook, Munkres gives a reworking of the Cartan–Bourbaki proof that does not explicitly use any filter-theoretic language or preliminaries.
4) Similarly, the Moore–Smith theory of convergence via nets, as supplemented by Kelley's notion of a universal net, leads to the criterion that a space is compact if and only if each universal net on the space converges. This criterion leads to a proof (Kelley, 1950) of Tychonoff's theorem, which is, word for word, identical to the Cartan/Bourbaki proof using filters, save for the repeated substitution of "universal net" for "ultrafilter base".
5) A proof using nets but not universal nets was given in 1992 by Paul Chernoff.
Tychonoff's theorem and the axiom of choice
All of the above proofs use the axiom of choice (AC) in some way. For instance, the third proof uses that every filter is contained in an ultrafilter (i.e., a maximal filter), and this is seen by invoking Zorn's lemma. Zorn's lemma is also used to prove Kelley's theorem, that every net has a universal subnet. In fact these uses of AC are essential: in 1950 Kelley proved that Tychonoff's theorem implies the axiom of choice in ZF. Note that one formulation of AC is that the Cartesian product of a family of nonempty sets is nonempty; but since the empty set is most certainly compact, the proof cannot proceed along such straightforward lines. Thus Tychonoff's theorem joins several other basic theorems (e.g. that every vector space has a basis) in being equivalent to AC.
On the other hand, the statement that every filter is contained in an ultrafilter does not imply AC. Indeed, it is not hard to see that it is equivalent to the Boolean prime ideal theorem (BPI), a well-known intermediate point between the axioms of Zermelo–Fraenkel set theory (ZF) and the ZF theory augmented by the axiom of choice (ZFC). A first glance at the second proof of Tychonoff may suggest that the proof uses no more than (BPI), in contradiction to the above. However, the spaces in which every convergent filter has a unique limit are precisely the Hausdorff spaces. In general we must select, for each element of the index set, an element of the nonempty set of limits of the projected ultrafilter base, and of course this uses AC. However, this argument also shows that the compactness of the product of compact Hausdorff spaces can be proved using (BPI), and in fact the converse also holds. Studying the strength of Tychonoff's theorem for various restricted classes of spaces is an active area in set-theoretic topology.
The analogue of Tychonoff's theorem in pointless topology does not require any form of the axiom of choice.
Proof of the axiom of choice from Tychonoff's theorem
To prove that Tychonoff's theorem in its general version implies the axiom of choice, we establish that every infinite cartesian product of non-empty sets is nonempty. The trickiest part of the proof is choosing the right topology, which turns out to be the cofinite topology with a small twist; every set given this topology automatically becomes a compact space. Once we have this fact, Tychonoff's theorem can be applied; we then use the finite intersection property (FIP) characterization of compactness. The proof itself (due to J. L. Kelley) follows:
Let {Ai} be an indexed family of nonempty sets, for i ranging in I (where I is an arbitrary indexing set). We wish to show that the cartesian product of these sets is nonempty. Now, for each i, take Xi to be Ai with the index i itself tacked on (renaming the indices using the disjoint union if necessary, we may assume that i is not a member of Ai, so simply take Xi = Ai ∪ {i}).
Now define the cartesian product
$X=\prod _{i\in I}X_{i}$
along with the natural projection maps πi which take a member of X to its ith term.
We give each Xi the topology whose open sets are the empty set, the singleton {i}, and the whole set Xi. This makes Xi compact, and by Tychonoff's theorem, X is also compact (in the product topology). The projection maps are continuous; all the Ai's are closed, being complements of the open singleton {i} in Xi. So the inverse images πi−1(Ai) are closed subsets of X. We note that
$\prod _{i\in I}A_{i}=\bigcap _{i\in I}\pi _{i}^{-1}(A_{i})$
and prove that these inverse images have the FIP. Let i1, ..., iN be a finite collection of indices in I. Then the finite product Ai1 × ... × AiN is non-empty (only finitely many choices are involved here, so AC is not needed); it merely consists of N-tuples. Let a = (a1, ..., aN) be such an N-tuple. We extend a to a function f on the whole index set, defined by f(j) = ak if j = ik, and f(j) = j otherwise. This step is where the addition of the extra point to each space is crucial: it lets us define f everywhere outside of the N-tuple in a precise way without making any choices, since for each remaining index j the point j itself lies in Xj by construction. Since πik(f) = ak is an element of each Aik, f lies in each inverse image; thus we have
$\bigcap _{k=1}^{N}\pi _{i_{k}}^{-1}(A_{i_{k}})\neq \varnothing .$
By the FIP definition of compactness, the entire intersection over I must be nonempty, and the proof is complete.
See also
• Alexander's sub-base theorem
• Compactness theorem
• Tube lemma
Notes
1. Stephen Willard, "General Topology", Dover Books, ISBN 978-0-486-43479-7, pp. 120.
2. Joseph Goguen, "The Fuzzy Tychonoff Theorem", Journal of Mathematical Analysis and Applications, volume 43, issue 3, September 1973, pp. 734–742.
References
• Chernoff, Paul R. (1992), "A simple proof of Tychonoff's theorem via nets", American Mathematical Monthly, 99 (10): 932–934, doi:10.2307/2324485, JSTOR 2324485.
• Johnstone, Peter T. (1982), Stone spaces, Cambridge Studies in Advanced Mathematics, vol. 3, New York: Cambridge University Press, ISBN 0-521-23893-5.
• Johnstone, Peter T. (1981), "Tychonoff's theorem without the axiom of choice", Fundamenta Mathematicae, 113: 21–35, doi:10.4064/fm-113-1-21-35.
• Kelley, John L. (1950), "Convergence in topology", Duke Mathematical Journal, 17 (3): 277–283, doi:10.1215/S0012-7094-50-01726-1.
• Kelley, John L. (1950), "The Tychonoff product theorem implies the axiom of choice", Fundamenta Mathematicae, 37: 75–76, doi:10.4064/fm-37-1-75-76.
• Munkres, James R. (2000). Topology (Second ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260.
• Tychonoff, Andrey N. (1930), "Über die topologische Erweiterung von Räumen", Mathematische Annalen (in German), 102 (1): 544–561, doi:10.1007/BF01782364.
• Wilansky, A. (1970), Topology for Analysis, Ginn and Company
• Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240.
• Wright, David G. (1994), "Tychonoff's theorem.", Proc. Amer. Math. Soc., 120 (3): 985–987, doi:10.1090/s0002-9939-1994-1170549-2.
External links
• Tychonoff's theorem at ProofWiki
• Mizar system proof: http://mizar.org/version/current/html/yellow17.html#T23
Fixed-point theorems in infinite-dimensional spaces
In mathematics, a number of fixed-point theorems in infinite-dimensional spaces generalise the Brouwer fixed-point theorem. They have applications, for example, to the proof of existence theorems for partial differential equations.
The first result in the field was the Schauder fixed-point theorem, proved in 1930 by Juliusz Schauder (a previous result in a different vein, the Banach fixed-point theorem for contraction mappings in complete metric spaces, was proved in 1922). Quite a number of further results followed. One way in which fixed-point theorems of this kind have influenced mathematics as a whole is the attempt to carry over methods of algebraic topology, first proved for finite simplicial complexes, to spaces of infinite dimension. For example, the research of Jean Leray, who founded sheaf theory, came out of efforts to extend Schauder's work.
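Unlike the topological theorems below, the Banach fixed-point theorem is constructive: iterating the contraction from any starting point converges to the unique fixed point. A minimal numerical sketch in Python (the helper name is illustrative; cos is a contraction on [0, 1] since its derivative is bounded by sin(1) < 1):

```python
import math

def banach_iterate(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x -> f(x) until successive values agree to within tol."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

# cos maps [0, 1] into itself and is a contraction there, so the iteration
# converges to the unique solution of x = cos(x) (the Dottie number, ~0.739085).
fixed_point = banach_iterate(math.cos, 0.5)
```

The geometric convergence rate is governed by the contraction constant; here each step shrinks the error by roughly a factor of 0.67.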
Schauder fixed-point theorem: Let C be a nonempty closed convex subset of a Banach space V. If f : C → C is continuous with a compact image, then f has a fixed point.
Tikhonov (Tychonoff) fixed-point theorem: Let V be a locally convex topological vector space. For any nonempty compact convex set X in V, any continuous function f : X → X has a fixed point.
Browder fixed-point theorem: Let K be a nonempty closed bounded convex set in a uniformly convex Banach space. Then any non-expansive function f : K → K has a fixed point. (A function $f$ is called non-expansive if $\|f(x)-f(y)\|\leq \|x-y\|$ for each $x$ and $y$.)
Other results include the Markov–Kakutani fixed-point theorem (1936–1938) and the Ryll-Nardzewski fixed-point theorem (1967) for continuous affine self-mappings of compact convex sets, as well as the Earle–Hamilton fixed-point theorem (1968) for holomorphic self-mappings of open domains.
Kakutani fixed-point theorem: Every correspondence that maps a compact convex subset of a locally convex space into itself with a closed graph and convex nonempty images has a fixed point.
See also
• Topological degree theory
References
• Vasile I. Istratescu, Fixed Point Theory, An Introduction, D.Reidel, Holland (1981). ISBN 90-277-1224-7.
• Andrzej Granas and James Dugundji, Fixed Point Theory (2003) Springer-Verlag, New York, ISBN 0-387-00173-5.
• William A. Kirk and Brailey Sims, Handbook of Metric Fixed Point Theory (2001), Kluwer Academic, London ISBN 0-7923-7073-2.
External links
• PlanetMath article on the Tychonoff Fixed Point Theorem
Tychonoff plank
In topology, the Tychonoff plank is a topological space defined using ordinal spaces that is a counterexample to several plausible-sounding conjectures. It is defined as the topological product of the two ordinal spaces $[0,\omega _{1}]$ and $[0,\omega ]$, where $\omega $ is the first infinite ordinal and $\omega _{1}$ the first uncountable ordinal. The deleted Tychonoff plank is obtained by deleting the point $\infty =(\omega _{1},\omega )$.
Properties
The Tychonoff plank is a compact Hausdorff space and is therefore a normal space. However, the deleted Tychonoff plank is non-normal. Therefore the Tychonoff plank is not completely normal. This shows that a subspace of a normal space need not be normal. The Tychonoff plank is not perfectly normal because it is not a Gδ space: the singleton $\{\infty \}$ is closed but not a Gδ set.
The Stone–Čech compactification of the deleted Tychonoff plank is the Tychonoff plank.[1]
Notes
1. Walker, R. C. (1974). The Stone-Čech Compactification. Springer. pp. 95–97. ISBN 978-3-642-61935-9.
See also
• List of topologies
References
• Kelley, John L. (1975), General Topology, Graduate Texts in Mathematics, vol. 27 (1 ed.), New York: Springer-Verlag, Ch. 4 Ex. F, ISBN 978-0-387-90125-1, MR 0370454
• Steen, Lynn Arthur; Seebach, J. Arthur Jr. (1995) [1978], Counterexamples in Topology (Dover reprint of 1978 ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-486-68735-3, MR 0507446
• Willard, Stephen (1970), General Topology, Addison-Wesley, 17.12, ISBN 9780201087079, MR 0264581
External links
• Barile, Margherita. "Tychonoff Plank". MathWorld.
Type-2 Gumbel distribution
In probability theory, the Type-2 Gumbel probability density function is
$f(x|a,b)=abx^{-a-1}e^{-bx^{-a}}\,$
Type-2 Gumbel
• Parameters: $a$ (real), $b$ shape (real)
• PDF: $abx^{-a-1}e^{-bx^{-a}}$
• CDF: $e^{-bx^{-a}}$
• Mean: $b^{1/a}\Gamma (1-1/a)$
• Variance: $b^{2/a}\left(\Gamma (1-2/a)-\Gamma (1-1/a)^{2}\right)$
for
$0<x<\infty $.
For $0<a\leq 1$ the mean is infinite. For $0<a\leq 2$ the variance is infinite.
The cumulative distribution function is
$F(x|a,b)=e^{-bx^{-a}}\,$
The moments $E[X^{k}]\,$ exist for $k<a\,$
The distribution is named after Emil Julius Gumbel (1891–1966).
Generating random variates
Given a random variate U drawn from the uniform distribution in the interval (0, 1), then the variate
$X=(-\ln U/b)^{-1/a},$
has a Type-2 Gumbel distribution with parameters $a$ and $b$. This is obtained by applying the inverse transform sampling method.
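The recipe above can be sketched directly in Python (function names here are illustrative, not from any library): solving $F(x)=u$ for $x$ gives the quantile function, and feeding it uniform draws yields Type-2 Gumbel variates.

```python
import math
import random

def type2_gumbel_quantile(u, a, b):
    """Inverse CDF: solve exp(-b * x**(-a)) = u for x, with u in (0, 1)."""
    return (-math.log(u) / b) ** (-1.0 / a)

def type2_gumbel_variate(a, b, rng=random):
    # rng.random() lies in [0, 1); u = 0 occurs only with negligible
    # probability, which this sketch does not guard against.
    return type2_gumbel_quantile(rng.random(), a, b)

rng = random.Random(0)
samples = [type2_gumbel_variate(a=3.0, b=2.0, rng=rng) for _ in range(10_000)]
```

With $a=3$ the mean $b^{1/a}\Gamma(1-1/a)\approx 1.706$ is finite, so the sample mean of a large batch should land near that value.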
Related distributions
• The special case b = 1 yields the Fréchet distribution.
• Substituting $b=\lambda ^{-k}$ and $a=-k$ yields the Weibull distribution. Note, however, that a positive k (as in the Weibull distribution) would yield a negative a and hence a negative probability density, which is not allowed.
Based on The GNU Scientific Library, used under GFDL.
See also
• Extreme value theory
• Gumbel distribution
Type and cotype of a Banach space
In functional analysis, the type and cotype of a Banach space are a classification of Banach spaces through probability theory that measures how far a Banach space is from being a Hilbert space.
The starting point is the Pythagorean identity for orthogonal vectors $(e_{k})_{k=1}^{n}$ in Hilbert spaces
$\left\|\sum _{k=1}^{n}e_{k}\right\|^{2}=\sum _{k=1}^{n}\left\|e_{k}\right\|^{2}.$
This identity no longer holds in general Banach spaces; however, one can introduce a notion of orthogonality probabilistically with the help of Rademacher random variables, which is why one also speaks of Rademacher type and Rademacher cotype.
The notions of type and cotype were introduced by the French mathematician Jean-Pierre Kahane.
Definition
Let
• $(X,\|\cdot \|)$ be a Banach space,
• $(\varepsilon _{i})$ be a sequence of independent Rademacher random variables, i.e. $P(\varepsilon _{i}=-1)=P(\varepsilon _{i}=1)=1/2$ and $\mathbb {E} [\varepsilon _{i}\varepsilon _{m}]=0$ for $i\neq m$ and $\operatorname {Var} [\varepsilon _{i}]=1$.
Type
$X$ is of type $p$ for $p\in [1,2]$ if there exists a finite constant $C\geq 1$ such that
$\mathbb {E} _{\varepsilon }\left[\left\|\sum \limits _{i=1}^{n}\varepsilon _{i}x_{i}\right\|^{p}\right]\leq C^{p}\left(\sum \limits _{i=1}^{n}\|x_{i}\|^{p}\right)$
for all finite sequences $(x_{i})\in X$. The sharpest constant $C$ is called type $p$ constant and denoted as $T_{p}(X)$.
Cotype
$X$ is of cotype $q$ for $q\in [2,\infty ]$ if there exists a finite constant $C\geq 1$ such that
$\mathbb {E} _{\varepsilon }\left[\left\|\sum \limits _{i=1}^{n}\varepsilon _{i}x_{i}\right\|^{q}\right]\geq {\frac {1}{C^{q}}}\left(\sum \limits _{i=1}^{n}\|x_{i}\|^{q}\right),\quad {\text{if}}\;2\leq q<\infty $
respectively
$\mathbb {E} _{\varepsilon }\left[\left\|\sum \limits _{i=1}^{n}\varepsilon _{i}x_{i}\right\|\right]\geq {\frac {1}{C}}\sup \|x_{i}\|,\quad {\text{if}}\;q=\infty $
for all finite sequences $(x_{i})\in X$. The sharpest constant $C$ is called cotype $q$ constant and denoted as $C_{q}(X)$.[1]
Remarks
Taking the $p$-th (resp. $q$-th) root turns these inequalities into statements about the Bochner $L^{p}$ norm of $\sum _{i}\varepsilon _{i}x_{i}$.
Properties
• Every Banach space is of type $1$ (follows from the triangle inequality).
• A Banach space is of type $2$ and cotype $2$ if and only if the space is also isomorphic to a Hilbert space.
If a Banach space:
• is of type $p$ then it is also type $p'\in [1,p]$.
• is of cotype $q$ then it is also of cotype $q'\in [q,\infty ]$.
• is of type $p$ for $1<p\leq 2$, then its dual space $X^{*}$ is of cotype $p^{*}$ with $p^{*}:=(1-1/p)^{-1}$ (conjugate index). Further it holds that $C_{p^{*}}(X^{*})\leq T_{p}(X)$[1]
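The Hilbert-space case above can be checked numerically: in $\mathbb {R} ^{d}$ with the Euclidean norm, averaging over all $2^{n}$ sign patterns reproduces the Pythagorean identity $\mathbb {E} \|\sum _{i}\varepsilon _{i}x_{i}\|^{2}=\sum _{i}\|x_{i}\|^{2}$ exactly, so both the type-2 and cotype-2 constants are 1. A plain-Python sketch (helper name illustrative):

```python
from itertools import product

def rademacher_mean_sq_norm(vectors):
    """Exact E || sum_i eps_i x_i ||^2, averaging over all sign choices eps_i = +/-1."""
    n, dim = len(vectors), len(vectors[0])
    total = 0.0
    for signs in product((-1.0, 1.0), repeat=n):
        summed = [sum(s * v[k] for s, v in zip(signs, vectors)) for k in range(dim)]
        total += sum(c * c for c in summed)   # squared Euclidean norm
    return total / 2 ** n

xs = [(1.0, 2.0, 0.0), (0.5, -1.0, 3.0), (2.0, 0.0, 1.0)]
lhs = rademacher_mean_sq_norm(xs)
rhs = sum(sum(c * c for c in v) for v in xs)  # sum of ||x_i||^2
```

The cross terms $\langle x_{i},x_{m}\rangle $ cancel in the average because $\mathbb {E} [\varepsilon _{i}\varepsilon _{m}]=0$ for $i\neq m$, which is exactly the probabilistic orthogonality used in the definitions.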
Examples
• The $L^{p}$ spaces for $p\in [1,2]$ are of type $p$ and cotype $2$, this means $L^{1}$ is of type $1$, $L^{2}$ is of type $2$ and so on.
• The $L^{p}$ spaces for $p\in [2,\infty )$ are of type $2$ and cotype $p$.
• The space $L^{\infty }$ is of type $1$ and cotype $\infty $.[2]
Literature
• Li, Daniel; Queffélec, Hervé (2017). Introduction to Banach Spaces: Analysis and Probability. Cambridge Studies in Advanced Mathematics. Cambridge University Press. pp. 159–209. doi:10.1017/CBO9781316675762.009.
• Joseph Diestel (1984). Sequences and Series in Banach Spaces. Springer New York.
• Laurent Schwartz (2006). Geometry and Probability in Banach Spaces. Springer Berlin Heidelberg. ISBN 978-3-540-10691-3.
• Ledoux, Michel; Talagrand, Michel (1991). Probability in Banach Spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 23. Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-20212-4_11.
References
1. Li, Daniel; Queffélec, Hervé (2017). Introduction to Banach Spaces: Analysis and Probability. Cambridge Studies in Advanced Mathematics. Cambridge University Press. pp. 159–209. doi:10.1017/CBO9781316675762.009.
2. Ledoux, Michel; Talagrand, Michel (1991). Probability in Banach Spaces. Ergebnisse der Mathematik und ihrer Grenzgebiete. Vol. 23. Berlin, Heidelberg: Springer. doi:10.1007/978-3-642-20212-4_11.
Type constructor
In the area of mathematical logic and computer science known as type theory, a type constructor is a feature of a typed formal language that builds new types from old ones. Basic types are considered to be built using nullary type constructors. Some type constructors take another type as an argument, e.g., the constructors for product types, function types, power types and list types. New types can be defined by recursively composing type constructors.
For example, simply typed lambda calculus can be seen as a language with a single non-basic type constructor—the function type constructor. Product types can generally be considered "built-in" in typed lambda calculi via currying.
Abstractly, a type constructor is an n-ary type operator taking as argument zero or more types, and returning another type. Making use of currying, n-ary type operators can be (re)written as a sequence of applications of unary type operators. Therefore, we can view the type operators as a simply typed lambda calculus, which has only one basic type, usually denoted $*$, and pronounced "type", which is the type of all types in the underlying language, which are now called proper types in order to distinguish them from the types of the type operators in their own calculus, which are called kinds.
Type operators may bind type variables. For example, giving the structure of the simply-typed λ-calculus at the type level requires binding, or higher-order, type operators. These binding type operators correspond to the 2nd axis of the λ-cube, and type theories such as the simply-typed λ-calculus with type operators, λω. Combining type operators with the polymorphic λ-calculus (System F) yields System Fω.
Some functional programming languages make explicit use of type constructors. A notable example is Haskell, in which all data type declarations are considered to declare type constructors, and basic types (or nullary type constructors) are called type constants.[1][2] Type constructors may also be considered as parametric polymorphic data types.
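The same idea can be sketched in Python's typing module, where a generic class behaves as a unary type constructor: applied to an argument type it yields a new proper type. (The class name `Box` below is purely illustrative.)

```python
from typing import Generic, TypeVar

T = TypeVar("T")

class Box(Generic[T]):
    """'Box' by itself is a type constructor of kind * -> *;
    applying it, as in Box[int] or Box[str], builds a proper type."""
    def __init__(self, value: T) -> None:
        self.value = value

IntBox = Box[int]   # the constructor applied to the type argument int
b = IntBox(42)      # an ordinary value of the constructed type
```

Here `Box` plays the role of a parametric polymorphic data type: one definition, instantiated at many argument types.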
See also
• Kind (type theory)
• Algebraic data type
• Recursive data type
References
1. Marlow, Simon (April 2010), "4.1.2 Syntax of Types", Haskell 2010 Language Report, retrieved 15 August 2023
2. "Constructor". HaskellWiki. Retrieved 15 August 2023.
• Pierce, Benjamin (2002). Types and Programming Languages. MIT Press. ISBN 0-262-16209-1., chapter 29, "Type Operators and Kinding"
• P.T. Johnstone, Sketches of an Elephant, p. 940
Lambda calculus
Lambda calculus (also written as λ-calculus) is a formal system in mathematical logic for expressing computation based on function abstraction and application using variable binding and substitution. It is a universal model of computation that can be used to simulate any Turing machine. It was introduced by the mathematician Alonzo Church in the 1930s as part of his research into the foundations of mathematics.
Lambda calculus consists of constructing lambda terms and performing reduction operations on them. In the simplest form of lambda calculus, terms are built using only the following rules:[lower-alpha 1]
1. $x$ : Some lambda term, a character or string representing a parameter, or mathematical/logical value.
2. $ (\lambda x.M)$: A lambda abstraction, a function definition with body $ M$; the variable $x$ between the λ and the punctum (the dot .) becomes bound in $ M$ and is replaced by the argument when the function is applied. (See β-reduction.)
3. $(M\ N)$: An application, applying a function $ M$ to an argument $ N$. Both $ M$ and $ N$ are lambda terms.
The reduction operations include:
• $ (\lambda x.M[x])\rightarrow (\lambda y.M[y])$ : α-conversion, renaming the bound variables in the expression. Used to avoid name collisions.
• $ ((\lambda x.M)\ E)\rightarrow (M[x:=E])$ : β-reduction,[lower-alpha 2] replacing the bound variables with the argument expression in the body of the abstraction.
If De Bruijn indexing is used, then α-conversion is no longer required as there will be no name collisions. If repeated application of the reduction steps eventually terminates, then by the Church–Rosser theorem it will produce a β-normal form.
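The grammar and reduction rules above are small enough to implement directly. A toy normal-order (leftmost-outermost) evaluator, sketched in Python with terms as tagged tuples and capture-avoiding substitution via α-renaming (a minimal illustration, not a standard library):

```python
import itertools

_fresh = (f"_v{n}" for n in itertools.count())   # supply of fresh variable names

def free_vars(t):
    tag = t[0]
    if tag == 'var':
        return {t[1]}
    if tag == 'lam':
        return free_vars(t[2]) - {t[1]}
    return free_vars(t[1]) | free_vars(t[2])

def subst(t, x, s):
    """Capture-avoiding substitution t[x := s]."""
    tag = t[0]
    if tag == 'var':
        return s if t[1] == x else t
    if tag == 'app':
        return ('app', subst(t[1], x, s), subst(t[2], x, s))
    y, body = t[1], t[2]
    if y == x:                      # x is shadowed; nothing to substitute
        return t
    if y in free_vars(s):           # alpha-convert to avoid capturing y
        z = next(_fresh)
        y, body = z, subst(body, y, ('var', z))
    return ('lam', y, subst(body, x, s))

def normalize(t):
    """Normal-order reduction to beta-normal form (may loop on terms without one)."""
    tag = t[0]
    if tag == 'var':
        return t
    if tag == 'lam':
        return ('lam', t[1], normalize(t[2]))
    f, a = t[1], t[2]
    if f[0] == 'lam':               # beta-redex: contract it first
        return normalize(subst(f[2], f[1], a))
    f = normalize(f)
    if f[0] == 'lam':
        return normalize(('app', f, a))
    return ('app', f, normalize(a))

# Church numeral 2 and the successor function, reduced to the numeral 3:
two  = ('lam', 'f', ('lam', 'x', ('app', ('var', 'f'), ('app', ('var', 'f'), ('var', 'x')))))
succ = ('lam', 'n', ('lam', 'f', ('lam', 'x',
         ('app', ('var', 'f'), ('app', ('app', ('var', 'n'), ('var', 'f')), ('var', 'x'))))))
three = normalize(('app', succ, two))
```

Because normal order always contracts the leftmost-outermost redex first, the Church–Rosser theorem guarantees it reaches the β-normal form whenever one exists.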
Variable names are not needed if using a universal lambda function, such as Iota and Jot, which can create any function behavior by calling it on itself in various combinations.
Explanation and applications
Lambda calculus is Turing complete, that is, it is a universal model of computation that can be used to simulate any Turing machine.[3] Its namesake, the Greek letter lambda (λ), is used in lambda expressions and lambda terms to denote binding a variable in a function.
Lambda calculus may be untyped or typed. In typed lambda calculus, functions can be applied only if they are capable of accepting the given input's "type" of data. Typed lambda calculi are weaker than the untyped lambda calculus, which is the primary subject of this article, in the sense that typed lambda calculi can express less than the untyped calculus can. On the other hand, typed lambda calculi allow more things to be proven. For example, in the simply typed lambda calculus it is a theorem that every evaluation strategy terminates for every simply typed lambda-term, whereas evaluation of untyped lambda-terms need not terminate. One reason there are many different typed lambda calculi has been the desire to do more (of what the untyped calculus can do) without giving up on being able to prove strong theorems about the calculus.
Lambda calculus has applications in many different areas in mathematics, philosophy,[4] linguistics,[5][6] and computer science.[7] Lambda calculus has played an important role in the development of the theory of programming languages. Functional programming languages implement lambda calculus. Lambda calculus is also a current research topic in category theory.[8]
History
The lambda calculus was introduced by mathematician Alonzo Church in the 1930s as part of an investigation into the foundations of mathematics.[9][lower-alpha 3] The original system was shown to be logically inconsistent in 1935 when Stephen Kleene and J. B. Rosser developed the Kleene–Rosser paradox.[10][11]
Subsequently, in 1936 Church isolated and published just the portion relevant to computation, what is now called the untyped lambda calculus.[12] In 1940, he also introduced a computationally weaker, but logically consistent system, known as the simply typed lambda calculus.[13]
Until the 1960s when its relation to programming languages was clarified, the lambda calculus was only a formalism. Thanks to Richard Montague and other linguists' applications in the semantics of natural language, the lambda calculus has begun to enjoy a respectable place in both linguistics[14] and computer science.[15]
Origin of the λ symbol
There is some uncertainty over the reason for Church's use of the Greek letter lambda (λ) as the notation for function-abstraction in the lambda calculus, perhaps in part due to conflicting explanations by Church himself. According to Cardone and Hindley (2006):
By the way, why did Church choose the notation “λ”? In [an unpublished 1964 letter to Harald Dickson] he stated clearly that it came from the notation “${\hat {x}}$” used for class-abstraction by Whitehead and Russell, by first modifying “${\hat {x}}$” to “$\land x$” to distinguish function-abstraction from class-abstraction, and then changing “$\land $” to “λ” for ease of printing.
This origin was also reported in [Rosser, 1984, p.338]. On the other hand, in his later years Church told two enquirers that the choice was more accidental: a symbol was needed and λ just happened to be chosen.
Dana Scott has also addressed this question in various public lectures.[16] Scott recounts that he once posed a question about the origin of the lambda symbol to Church's former student and son-in-law John W. Addison Jr., who then wrote his father-in-law a postcard:
Dear Professor Church,
Russell had the iota operator, Hilbert had the epsilon operator. Why did you choose lambda for your operator?
According to Scott, Church's entire response consisted of returning the postcard with the following annotation: "eeny, meeny, miny, moe".
Informal description
Motivation
Computable functions are a fundamental concept within computer science and mathematics. The lambda calculus provides simple semantics for computation which are useful for formally studying properties of computation. The lambda calculus incorporates two simplifications that make its semantics simple. The first simplification is that the lambda calculus treats functions "anonymously;" it does not give them explicit names. For example, the function
$\operatorname {square\_sum} (x,y)=x^{2}+y^{2}$
can be rewritten in anonymous form as
$(x,y)\mapsto x^{2}+y^{2}$
(which is read as "a tuple of x and y is mapped to $ x^{2}+y^{2}$").[lower-alpha 4] Similarly, the function
$\operatorname {id} (x)=x$
can be rewritten in anonymous form as
$x\mapsto x$
where the input is simply mapped to itself.[lower-alpha 4]
The second simplification is that the lambda calculus only uses functions of a single input. An ordinary function that requires two inputs, for instance the $ \operatorname {square\_sum} $ function, can be reworked into an equivalent function that accepts a single input, and as output returns another function, that in turn accepts a single input. For example,
$(x,y)\mapsto x^{2}+y^{2}$
can be reworked into
$x\mapsto (y\mapsto x^{2}+y^{2})$
This method, known as currying, transforms a function that takes multiple arguments into a chain of functions each with a single argument.
Function application of the $ \operatorname {square\_sum} $ function to the arguments (5, 2), yields at once
$ ((x,y)\mapsto x^{2}+y^{2})(5,2)$
$ =5^{2}+2^{2}$
$ =29$,
whereas evaluation of the curried version requires one more step
$ {\Bigl (}{\bigl (}x\mapsto (y\mapsto x^{2}+y^{2}){\bigr )}(5){\Bigr )}(2)$
$ =(y\mapsto 5^{2}+y^{2})(2)$ // the definition of $x$ has been used with $5$ in the inner expression. This is like β-reduction.
$ =5^{2}+2^{2}$ // the definition of $y$ has been used with $2$. Again, similar to β-reduction.
$ =29$
to arrive at the same result.
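The two evaluations above can be mirrored in any language with first-class functions; in Python, for instance:

```python
# Uncurried: one function of two arguments.
def square_sum(x, y):
    return x ** 2 + y ** 2

# Curried: a chain of one-argument functions, each returning the next.
curried_square_sum = lambda x: (lambda y: x ** 2 + y ** 2)

uncurried_result = square_sum(5, 2)        # one application step
curried_result = curried_square_sum(5)(2)  # two application steps, same result
```

The intermediate value `curried_square_sum(5)` is itself a function, corresponding to the term $y\mapsto 5^{2}+y^{2}$ in the step-by-step evaluation.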
The lambda calculus
The lambda calculus consists of a language of lambda terms, that are defined by a certain formal syntax, and a set of transformation rules for manipulating the lambda terms. These transformation rules can be viewed as an equational theory or as an operational definition.
As described above, having no names, all functions in the lambda calculus are anonymous functions. They only accept one input variable, so currying is used to implement functions of several variables.
Lambda terms
The syntax of the lambda calculus defines some expressions as valid lambda calculus expressions and some as invalid, just as some strings of characters are valid C programs and some are not. A valid lambda calculus expression is called a "lambda term".
The following three rules give an inductive definition that can be applied to build all syntactically valid lambda terms:[lower-alpha 5]
• variable x is itself a valid lambda term.
• if t is a lambda term, and x is a variable, then $(\lambda x.t)$ [lower-alpha 6] is a lambda term (called an abstraction);
• if t and s are lambda terms, then $(t$ $s)$ is a lambda term (called an application).
Nothing else is a lambda term. Thus a lambda term is valid if and only if it can be obtained by repeated application of these three rules. However, some parentheses can be omitted according to certain rules. For example, the outermost parentheses are usually not written. See §Notation below for when to include parentheses.
An abstraction $\lambda x.t$ denotes an anonymous function[lower-alpha 7] that takes a single input x and returns t. For example, $\lambda x.(x^{2}+2)$ is an abstraction for the function $f(x)=x^{2}+2$ using the term $x^{2}+2$ for t. The name $f(x)$ is superfluous when using abstraction. $(\lambda x.t)$ binds the variable x in the term t. The definition of a function with an abstraction merely "sets up" the function but does not invoke it. See § Notation below for usage of parentheses.
An application $t$ $s$ represents the application of a function t to an input s, that is, it represents the act of calling function t on input s to produce $t(s)$.
There is no concept in lambda calculus of variable declaration. In a definition such as $\lambda x.(x+y)$ (i.e. $f(x)=(x+y)$), in lambda calculus y is a variable that is not yet defined. The abstraction $\lambda x.(x+y)$ is syntactically valid, and represents a function that adds its input to the yet-unknown y.
Parentheses may be used and might be needed to disambiguate terms. For example,
1. $\lambda x.((\lambda x.x)x)$ which is of form $\lambda x.B$ —an abstraction, and
2. $(\lambda x.(\lambda x.x))$ $x$ which is of form $M$ $N$ —an application.
Examples 1 and 2 denote different terms; they differ only in the placement of the parentheses. But example 1 is a function definition, while example 2 is function application. Lambda variable x is a placeholder in both examples.
Here, example 1 defines a function $\lambda x.B$, where $B$ is $(\lambda x.x)x$, an anonymous function $(\lambda x.x)$, with input $x$; while example 2, $M$ $N$, is M applied to N, where $M$ is the lambda term $(\lambda x.(\lambda x.x))$ being applied to the input $N$ which is $x$. Both examples 1 and 2 would evaluate to the identity function $\lambda x.x$.
Functions that operate on functions
In lambda calculus, functions are taken to be 'first class values', so functions may be used as the inputs, or be returned as outputs from other functions.
For example, $\lambda x.x$ represents the identity function, $x\mapsto x$, and $(\lambda x.x)y$ represents the identity function applied to $y$. Further, $(\lambda x.y)$ represents the constant function $x\mapsto y$, the function that always returns $y$, no matter the input. In lambda calculus, function application is regarded as left-associative, so that $stx$ means $(st)x$.
There are several notions of "equivalence" and "reduction" that allow lambda terms to be "reduced" to "equivalent" lambda terms.
Alpha equivalence
A basic form of equivalence, definable on lambda terms, is alpha equivalence. It captures the intuition that the particular choice of a bound variable, in an abstraction, does not (usually) matter. For instance, $\lambda x.x$ and $\lambda y.y$ are alpha-equivalent lambda terms, and they both represent the same function (the identity function). The terms $x$ and $y$ are not alpha-equivalent, because they are not bound in an abstraction. In many presentations, it is usual to identify alpha-equivalent lambda terms.
The following definitions are necessary in order to be able to define β-reduction:
Free variables
The free variables [lower-alpha 8] of a term are those variables not bound by an abstraction. The set of free variables of an expression is defined inductively:
• The free variables of $x$ are just $x$
• The set of free variables of $\lambda x.t$ is the set of free variables of $t$, but with $x$ removed
• The set of free variables of $t$ $s$ is the union of the set of free variables of $t$ and the set of free variables of $s$.
For example, the lambda term representing the identity $\lambda x.x$ has no free variables, but the function $\lambda x.y$ $x$ has a single free variable, $y$.
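The inductive definition can be sketched in Python over a tiny term representation (an assumption of this example, not standard notation): a variable is a string, `('lam', x, t)` is an abstraction λx.t, and `('app', t, s)` is an application t s.

```python
def free_vars(term):
    if isinstance(term, str):     # FV(x) = {x}
        return {term}
    if term[0] == 'lam':          # FV(λx.t) = FV(t) \ {x}
        _, x, t = term
        return free_vars(t) - {x}
    _, t, s = term                # FV(t s) = FV(t) ∪ FV(s)
    return free_vars(t) | free_vars(s)

print(free_vars(('lam', 'x', 'x')))                # set(): the identity is closed
print(free_vars(('lam', 'x', ('app', 'y', 'x'))))  # {'y'}
```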
Capture-avoiding substitutions
Suppose $t$, $s$ and $r$ are lambda terms, and $x$ and $y$ are variables. The notation $t[x:=r]$ indicates substitution of $r$ for $x$ in $t$ in a capture-avoiding manner. This is defined so that:
• $x[x:=r]=r$ ; with $r$ substituted for $x$, $x$ becomes $r$
• $y[x:=r]=y$ if $x\neq y$ ; with $r$ substituted for $x$, $y$ (which is not $x$) remains $y$
• $(t$ $s)[x:=r]=(t[x:=r])(s[x:=r])$ ; substitution distributes to both sides of an application
• $(\lambda x.t)[x:=r]=\lambda x.t$ ; a variable bound by an abstraction is not subject to substitution; substituting such variable leaves the abstraction unchanged
• $(\lambda y.t)[x:=r]=\lambda y.(t[x:=r])$ if $x\neq y$ and $y$ does not appear among the free variables of $r$ ($y$ is said to be "fresh" for $r$) ; substituting a variable which is not bound by an abstraction proceeds in the abstraction's body, provided that the abstracted variable $y$ is "fresh" for the substitution term $r$.
For example, $(\lambda x.x)[y:=y]=\lambda x.(x[y:=y])=\lambda x.x$, and $((\lambda x.y)x)[x:=y]=((\lambda x.y)[x:=y])(x[x:=y])=(\lambda x.y)y$.
The freshness condition (requiring that $y$ is not in the free variables of $r$) is crucial in order to ensure that substitution does not change the meaning of functions.
For example, a substitution that ignores the freshness condition could lead to errors: $(\lambda x.y)[y:=x]=\lambda x.(y[y:=x])=\lambda x.x$. This erroneous substitution would turn the constant function $\lambda x.y$ into the identity $\lambda x.x$.
In general, failure to meet the freshness condition can be remedied by alpha-renaming first, with a suitable fresh variable. For example, switching back to our correct notion of substitution, in $(\lambda x.y)[y:=x]$ the abstraction can be renamed with a fresh variable $z$, to obtain $(\lambda z.y)[y:=x]=\lambda z.(y[y:=x])=\lambda z.x$, and the meaning of the function is preserved by substitution.
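The rules above, including the α-renaming fallback when the freshness condition fails, can be sketched on the same tiny term representation (variables are strings, `('lam', x, t)` and `('app', t, s)` are abstraction and application); the fresh-name scheme `v0, v1, …` is an assumption of this example.

```python
import itertools

def free_vars(term):
    if isinstance(term, str):
        return {term}
    if term[0] == 'lam':
        return free_vars(term[2]) - {term[1]}
    return free_vars(term[1]) | free_vars(term[2])

def fresh(avoid):
    # pick a variable name not occurring in `avoid`
    return next(f'v{i}' for i in itertools.count() if f'v{i}' not in avoid)

def subst(term, x, r):
    """Capture-avoiding substitution term[x := r]."""
    if isinstance(term, str):
        return r if term == x else term          # x[x:=r] = r ; y[x:=r] = y
    if term[0] == 'app':
        _, t, s = term
        return ('app', subst(t, x, r), subst(s, x, r))
    _, y, t = term
    if y == x:                                   # bound variable: unchanged
        return term
    if y in free_vars(r):                        # y is not fresh for r:
        z = fresh(free_vars(t) | free_vars(r))   # α-rename y to a fresh z first
        t, y = subst(t, y, z), z
    return ('lam', y, subst(t, x, r))

# (λx.y)[y := x] α-renames the binder before substituting, as in the text:
print(subst(('lam', 'x', 'y'), 'y', 'x'))   # ('lam', 'v0', 'x')
```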
In a functional programming language where functions are first class citizens (see e.g. SECD machine#Landin's contribution), this systematic change in variables to avoid capture of a free variable can introduce an error, when returning functions as results.[17]
β-reduction
The β-reduction rule[lower-alpha 2] states that an application of the form $(\lambda x.t)s$ reduces to the term $t[x:=s]$. The notation $(\lambda x.t)s\to t[x:=s]$ is used to indicate that $(\lambda x.t)s$ β-reduces to $t[x:=s]$. For example, for every $s$, $(\lambda x.x)s\to x[x:=s]=s$. This demonstrates that $\lambda x.x$ really is the identity. Similarly, $(\lambda x.y)s\to y[x:=s]=y$, which demonstrates that $\lambda x.y$ is a constant function.
The lambda calculus may be seen as an idealized version of a functional programming language, like Haskell or Standard ML. Under this view, β-reduction corresponds to a computational step. This step can be repeated by additional β-reductions until there are no more applications left to reduce. In the untyped lambda calculus, as presented here, this reduction process may not terminate. For instance, consider the term $\Omega =(\lambda x.xx)(\lambda x.xx)$. Here $(\lambda x.xx)(\lambda x.xx)\to (xx)[x:=\lambda x.xx]=(x[x:=\lambda x.xx])(x[x:=\lambda x.xx])=(\lambda x.xx)(\lambda x.xx)$. That is, the term reduces to itself in a single β-reduction, and therefore the reduction process will never terminate.
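Since Python evaluates eagerly, attempting the self-application $\Omega$ directly illustrates the non-terminating reduction; here it overflows the call stack instead of looping forever (the recursion limit is lowered only to make the failure quick).

```python
import sys
sys.setrecursionlimit(100)   # keep the failure quick

# Ω = (λx.x x)(λx.x x) β-reduces to itself, so evaluation never terminates
omega = lambda x: x(x)
try:
    omega(omega)
except RecursionError:
    print("Ω loops: it reduces to itself forever")
```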
Another aspect of the untyped lambda calculus is that it does not distinguish between different kinds of data. For instance, it may be desirable to write a function that only operates on numbers. However, in the untyped lambda calculus, there is no way to prevent a function from being applied to truth values, strings, or other non-number objects.
Formal definition
Main article: Lambda calculus definition
Definition
Lambda expressions are composed of:
• variables v1, v2, ...;
• the abstraction symbols λ (lambda) and . (dot);
• parentheses ().
The set of lambda expressions, Λ, can be defined inductively:
1. If x is a variable, then x ∈ Λ.
2. If x is a variable and M ∈ Λ, then (λx.M) ∈ Λ.
3. If M, N ∈ Λ, then (M N) ∈ Λ.
Instances of rule 2 are known as abstractions and instances of rule 3 are known as applications.[18][19]
Notation
To keep the notation of lambda expressions uncluttered, the following conventions are usually applied:
• Outermost parentheses are dropped: M N instead of (M N).
• Applications are assumed to be left associative: M N P may be written instead of ((M N) P).[20]
• When all variables are single-letter, the space in applications may be omitted: MNP instead of M N P.[21]
• The body of an abstraction extends as far right as possible: λx.M N means λx.(M N) and not (λx.M) N.
• A sequence of abstractions is contracted: λx.λy.λz.N is abbreviated as λxyz.N.[22][20]
Free and bound variables
The abstraction operator, λ, is said to bind its variable wherever it occurs in the body of the abstraction. Variables that fall within the scope of an abstraction are said to be bound. In an expression λx.M, the part λx is often called binder, as a hint that the variable x is getting bound by prepending λx to M. All other variables are called free. For example, in the expression λy.x x y, y is a bound variable and x is a free variable. Also a variable is bound by its nearest abstraction. In the following example the single occurrence of x in the expression is bound by the second lambda: λx.y (λx.z x).
The set of free variables of a lambda expression, M, is denoted as FV(M) and is defined by recursion on the structure of the terms, as follows:
1. FV(x) = {x}, where x is a variable.
2. FV(λx.M) = FV(M) \ {x}.[lower-alpha 9]
3. FV(M N) = FV(M) ∪ FV(N).[lower-alpha 10]
An expression that contains no free variables is said to be closed. Closed lambda expressions are also known as combinators and are equivalent to terms in combinatory logic.
Reduction
The meaning of lambda expressions is defined by how expressions can be reduced.[23]
There are three kinds of reduction:
• α-conversion: changing bound variables;
• β-reduction: applying functions to their arguments;
• η-reduction: which captures a notion of extensionality.
We also speak of the resulting equivalences: two expressions are α-equivalent, if they can be α-converted into the same expression. β-equivalence and η-equivalence are defined similarly.
The term redex, short for reducible expression, refers to subterms that can be reduced by one of the reduction rules. For example, (λx.M) N is a β-redex in expressing the substitution of N for x in M. The expression to which a redex reduces is called its reduct; the reduct of (λx.M) N is M[x := N].
If x is not free in M, λx.M x is also an η-redex, with a reduct of M.
α-conversion
α-conversion, sometimes known as α-renaming,[24] allows bound variable names to be changed. For example, α-conversion of λx.x might yield λy.y. Terms that differ only by α-conversion are called α-equivalent. Frequently, in uses of lambda calculus, α-equivalent terms are considered to be equivalent.
The precise rules for α-conversion are not completely trivial. First, when α-converting an abstraction, the only variable occurrences that are renamed are those that are bound to the same abstraction. For example, an α-conversion of λx.λx.x could result in λy.λx.x, but it could not result in λy.λx.y. The latter has a different meaning from the original. This is analogous to the programming notion of variable shadowing.
Second, α-conversion is not possible if it would result in a variable getting captured by a different abstraction. For example, if we replace x with y in λx.λy.x, we get λy.λy.y, which is not at all the same.
In programming languages with static scope, α-conversion can be used to make name resolution simpler by ensuring that no variable name masks a name in a containing scope (see α-renaming to make name resolution trivial).
In the De Bruijn index notation, any two α-equivalent terms are syntactically identical.
Substitution
Substitution, written M[x := N], is the process of replacing all free occurrences of the variable x in the expression M with expression N. Substitution on terms of the lambda calculus is defined by recursion on the structure of terms, as follows (note: x and y are only variables while M and N are any lambda expression):
x[x := N] = N
y[x := N] = y, if x ≠ y
(M1 M2)[x := N] = M1[x := N] M2[x := N]
(λx.M)[x := N] = λx.M
(λy.M)[x := N] = λy.(M[x := N]), if x ≠ y and y ∉ FV(N) (see the definition of FV above)
To substitute into an abstraction, it is sometimes necessary to α-convert the expression. For example, it is not correct for (λx.y)[y := x] to result in λx.x, because the substituted x was supposed to be free but ended up being bound. The correct substitution in this case is λz.x, up to α-equivalence. Substitution is defined uniquely up to α-equivalence.
β-reduction
β-reduction captures the idea of function application. β-reduction is defined in terms of substitution: the β-reduction of (λx.M) N is M[x := N].[lower-alpha 2]
For example, assuming some encoding of 2, 7, ×, we have the following β-reduction: (λn.n × 2) 7 → 7 × 2.
β-reduction can be seen to be the same as the concept of local reducibility in natural deduction, via the Curry–Howard isomorphism.
η-reduction
η-reduction (eta reduction) expresses the idea of extensionality,[25] which in this context is that two functions are the same if and only if they give the same result for all arguments. η-reduction converts between λx.f x and f whenever x does not appear free in f.
η-reduction can be seen to be the same as the concept of local completeness in natural deduction, via the Curry–Howard isomorphism.
Normal forms and confluence
For the untyped lambda calculus, β-reduction as a rewriting rule is neither strongly normalising nor weakly normalising.
However, it can be shown that β-reduction is confluent when working up to α-conversion (i.e. we consider two normal forms to be equal if it is possible to α-convert one into the other).
Therefore, both strongly normalising terms and weakly normalising terms have a unique normal form. For strongly normalising terms, any reduction strategy is guaranteed to yield the normal form, whereas for weakly normalising terms, some reduction strategies may fail to find it.
Encoding datatypes
Main articles: Church encoding and Mogensen–Scott encoding
The basic lambda calculus may be used to model booleans, arithmetic, data structures and recursion, as illustrated in the following sub-sections.
Arithmetic in lambda calculus
There are several possible ways to define the natural numbers in lambda calculus, but by far the most common are the Church numerals, which can be defined as follows:
0 := λf.λx.x
1 := λf.λx.f x
2 := λf.λx.f (f x)
3 := λf.λx.f (f (f x))
and so on. Or using the alternative syntax presented above in Notation:
0 := λfx.x
1 := λfx.f x
2 := λfx.f (f x)
3 := λfx.f (f (f x))
A Church numeral is a higher-order function—it takes a single-argument function f, and returns another single-argument function. The Church numeral n is a function that takes a function f as argument and returns the n-th composition of f, i.e. the function f composed with itself n times. This is denoted f(n) and is in fact the n-th power of f (considered as an operator); f(0) is defined to be the identity function. Such repeated compositions (of a single function f) obey the laws of exponents, which is why these numerals can be used for arithmetic. (In Church's original lambda calculus, the formal parameter of a lambda expression was required to occur at least once in the function body, which made the above definition of 0 impossible.)
One way of thinking about the Church numeral n, which is often useful when analysing programs, is as an instruction 'repeat n times'. For example, using the PAIR and NIL functions defined below, one can define a function that constructs a (linked) list of n elements all equal to x by repeating 'prepend another x element' n times, starting from an empty list. The lambda term is
λn.λx.n (PAIR x) NIL
By varying what is being repeated, and varying what argument that function being repeated is applied to, a great many different effects can be achieved.
We can define a successor function, which takes a Church numeral n and returns n + 1 by adding another application of f, where '(mf)x' means the function 'f' is applied 'm' times on 'x':
SUCC := λn.λf.λx.f (n f x)
Because the m-th composition of f composed with the n-th composition of f gives the m+n-th composition of f, addition can be defined as follows:
PLUS := λm.λn.λf.λx.m f (n f x)
PLUS can be thought of as a function taking two natural numbers as arguments and returning a natural number; it can be verified that
PLUS 2 3
and
5
are β-equivalent lambda expressions. Since adding m to a number n can be accomplished by adding 1 m times, an alternative definition is:
PLUS := λm.λn.m SUCC n [26]
Similarly, multiplication can be defined as
MULT := λm.λn.λf.m (n f)[22]
Alternatively
MULT := λm.λn.m (PLUS n) 0
since multiplying m and n is the same as repeating the add n function m times and then applying it to zero. Exponentiation has a rather simple rendering in Church numerals, namely
POW := λb.λe.e b[1]
The predecessor function defined by PRED n = n − 1 for a positive integer n and PRED 0 = 0 is considerably more difficult. The formula
PRED := λn.λf.λx.n (λg.λh.h (g f)) (λu.x) (λu.u)
can be validated by showing inductively that if T denotes (λg.λh.h (g f)), then T(n)(λu.x) = (λh.h(f(n−1)(x))) for n > 0. Two other definitions of PRED are given below, one using conditionals and the other using pairs. With the predecessor function, subtraction is straightforward. Defining
SUB := λm.λn.n PRED m,
SUB m n yields m − n when m > n and 0 otherwise.
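The arithmetic combinators above can be transcribed as Python lambdas; `church(k)` and `to_int` are encoding/decoding helpers outside the calculus.

```python
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PLUS = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))
MULT = lambda m: lambda n: lambda f: m(n(f))
POW  = lambda b: lambda e: e(b)
PRED = lambda n: lambda f: lambda x: n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)
SUB  = lambda m: lambda n: n(PRED)(m)

def church(k):
    n = lambda f: lambda x: x
    for _ in range(k):
        n = SUCC(n)
    return n

to_int = lambda n: n(lambda k: k + 1)(0)

print(to_int(PLUS(church(2))(church(3))))   # 5
print(to_int(MULT(church(2))(church(3))))   # 6
print(to_int(POW(church(2))(church(3))))    # 8
print(to_int(SUB(church(5))(church(2))))    # 3
print(to_int(SUB(church(2))(church(5))))    # 0  (truncated subtraction)
```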
Logic and predicates
By convention, the following two definitions (known as Church booleans) are used for the boolean values TRUE and FALSE:
TRUE := λx.λy.x
FALSE := λx.λy.y
Then, with these two lambda terms, we can define some logic operators (these are just possible formulations; other expressions are equally correct):
AND := λp.λq.p q p
OR := λp.λq.p p q
NOT := λp.p FALSE TRUE
IFTHENELSE := λp.λa.λb.p a b
We are now able to compute some logic functions, for example:
AND TRUE FALSE
≡ (λp.λq.p q p) TRUE FALSE →β TRUE FALSE TRUE
≡ (λx.λy.x) FALSE TRUE →β FALSE
and we see that AND TRUE FALSE is equivalent to FALSE.
A predicate is a function that returns a boolean value. The most fundamental predicate is ISZERO, which returns TRUE if its argument is the Church numeral 0, and FALSE if its argument is any other Church numeral:
ISZERO := λn.n (λx.FALSE) TRUE
The following predicate tests whether the first argument is less-than-or-equal-to the second:
LEQ := λm.λn.ISZERO (SUB m n),
and since m = n if and only if both LEQ m n and LEQ n m, it is straightforward to build a predicate for numerical equality.
The availability of predicates and the above definition of TRUE and FALSE make it convenient to write "if-then-else" expressions in lambda calculus. For example, the predecessor function can be defined as:
PRED := λn.n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) 0
which can be verified by showing inductively that n (λg.λk.ISZERO (g 1) k (PLUS (g k) 1)) (λv.0) is the add n − 1 function for n > 0.
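The Church booleans and the ISZERO predicate can be transcribed as Python lambdas; `to_bool` is a decoding helper outside the calculus. (Note that strict Python evaluates both branches of an IFTHENELSE before selecting one, unlike normal-order reduction.)

```python
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
AND = lambda p: lambda q: p(q)(p)
OR  = lambda p: lambda q: p(p)(q)
NOT = lambda p: p(FALSE)(TRUE)
ISZERO = lambda n: n(lambda x: FALSE)(TRUE)

zero = lambda f: lambda x: x      # Church numeral 0
one  = lambda f: lambda x: f(x)   # Church numeral 1

to_bool = lambda b: b(True)(False)
print(to_bool(AND(TRUE)(FALSE)))   # False, as in the reduction above
print(to_bool(ISZERO(zero)))       # True
print(to_bool(ISZERO(one)))        # False
```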
Pairs
A pair (2-tuple) can be defined in terms of TRUE and FALSE, by using the Church encoding for pairs. For example, PAIR encapsulates the pair (x,y), FIRST returns the first element of the pair, and SECOND returns the second.
PAIR := λx.λy.λf.f x y
FIRST := λp.p TRUE
SECOND := λp.p FALSE
NIL := λx.TRUE
NULL := λp.p (λx.λy.FALSE)
A linked list can be defined as either NIL for the empty list, or the PAIR of an element and a smaller list. The predicate NULL tests for the value NIL. (Alternatively, with NIL := FALSE, the construct l (λh.λt.λz.deal_with_head_h_and_tail_t) (deal_with_nil) obviates the need for an explicit NULL test).
As an example of the use of pairs, the shift-and-increment function that maps (m, n) to (n, n + 1) can be defined as
Φ := λx.PAIR (SECOND x) (SUCC (SECOND x))
which allows us to give perhaps the most transparent version of the predecessor function:
PRED := λn.FIRST (n Φ (PAIR 0 0)).
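The pair encoding and the pair-based predecessor can be transcribed as Python lambdas; `church(k)` and `to_int` are encoding/decoding helpers outside the calculus.

```python
TRUE  = lambda x: lambda y: x
FALSE = lambda x: lambda y: y
PAIR   = lambda x: lambda y: lambda f: f(x)(y)
FIRST  = lambda p: p(TRUE)
SECOND = lambda p: p(FALSE)

SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
zero = lambda f: lambda x: x
PHI  = lambda p: PAIR(SECOND(p))(SUCC(SECOND(p)))   # maps (m, n) to (n, n+1)
PRED = lambda n: FIRST(n(PHI)(PAIR(zero)(zero)))

def church(k):
    n = zero
    for _ in range(k):
        n = SUCC(n)
    return n

to_int = lambda n: n(lambda k: k + 1)(0)
print(to_int(PRED(church(4))))   # 3
print(to_int(PRED(church(0))))   # 0
```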
Additional programming techniques
There is a considerable body of programming idioms for lambda calculus. Many of these were originally developed in the context of using lambda calculus as a foundation for programming language semantics, effectively using lambda calculus as a low-level programming language. Because several programming languages include the lambda calculus (or something very similar) as a fragment, these techniques also see use in practical programming, but may then be perceived as obscure or foreign.
Named constants
In lambda calculus, a library would take the form of a collection of previously defined functions, which as lambda-terms are merely particular constants. The pure lambda calculus does not have a concept of named constants since all atomic lambda-terms are variables, but one can emulate having named constants by setting aside a variable as the name of the constant, using abstraction to bind that variable in the main body, and apply that abstraction to the intended definition. Thus to use f to mean N (some explicit lambda-term) in M (another lambda-term, the "main program"), one can say
(λf.M) N
Authors often introduce syntactic sugar, such as let,[lower-alpha 11] to permit writing the above in the more intuitive order
let f = N in M
By chaining such definitions, one can write a lambda calculus "program" as zero or more function definitions, followed by one lambda-term using those functions that constitutes the main body of the program.
A notable restriction of this let is that the name f must not occur free in N, since N lies outside the scope of the abstraction binding f; this means a recursive function definition cannot be used as the N with let. The letrec[lower-alpha 12] construction would allow writing recursive function definitions.
Recursion and fixed points
Main article: Fixed-point combinator
See also: SKI combinator calculus § Self-application and recursion
Recursion is the definition of a function using the function itself. A definition containing itself inside itself, by value, leads to the whole value being of infinite size. Other notations which support recursion natively overcome this by referring to the function definition by name. Lambda calculus cannot express this: all functions are anonymous in lambda calculus, so we can't refer by name to a value which is yet to be defined, inside the lambda term defining that same value. However, a lambda expression can receive itself as its own argument, for example in (λx.x x) E. Here E should be an abstraction, applying its parameter to a value to express recursion.
Consider the factorial function F(n) recursively defined by
F(n) = 1, if n = 0; else n × F(n − 1).
In the lambda expression which is to represent this function, a parameter (typically the first one) will be assumed to receive the lambda expression itself as its value, so that calling it – applying it to an argument – will amount to recursion. Thus to achieve recursion, the intended-as-self-referencing argument (called r here) must always be passed to itself within the function body, at a call point:
G := λr. λn.(1, if n = 0; else n × (r r (n−1)))
with r r x = F x = G r x to hold, so r = G and
F := G G = (λx.x x) G
The self-application achieves replication here, passing the function's lambda expression on to the next invocation as an argument value, making it available to be referenced and called there.
This solves it but requires re-writing each recursive call as self-application. We would like to have a generic solution, without a need for any re-writes:
G := λr. λn.(1, if n = 0; else n × (r (n−1)))
with r x = F x = G r x to hold, so r = G r =: FIX G and
F := FIX G where FIX g := (r where r = g r) = g (FIX g)
so that FIX G = G (FIX G) = (λn.(1, if n = 0; else n × ((FIX G) (n−1))))
Given a lambda term with first argument representing recursive call (e.g. G here), the fixed-point combinator FIX will return a self-replicating lambda expression representing the recursive function (here, F). The function does not need to be explicitly passed to itself at any point, for the self-replication is arranged in advance, when it is created, to be done each time it is called. Thus the original lambda expression (FIX G) is re-created inside itself, at call-point, achieving self-reference.
In fact, there are many possible definitions for this FIX operator, the simplest of them being:
Y := λg.(λx.g (x x)) (λx.g (x x))
In the lambda calculus, Y g is a fixed-point of g, as it expands to:
Y g
(λh.(λx.h (x x)) (λx.h (x x))) g
(λx.g (x x)) (λx.g (x x))
g ((λx.g (x x)) (λx.g (x x)))
g (Y g)
Now, to perform our recursive call to the factorial function, we would simply call (Y G) n, where n is the number we are calculating the factorial of. Given n = 4, for example, this gives:
(Y G) 4
G (Y G) 4
(λr.λn.(1, if n = 0; else n × (r (n−1)))) (Y G) 4
(λn.(1, if n = 0; else n × ((Y G) (n−1)))) 4
1, if 4 = 0; else 4 × ((Y G) (4−1))
4 × (G (Y G) (4−1))
4 × ((λn.(1, if n = 0; else n × ((Y G) (n−1)))) (4−1))
4 × (1, if 3 = 0; else 3 × ((Y G) (3−1)))
4 × (3 × (G (Y G) (3−1)))
4 × (3 × ((λn.(1, if n = 0; else n × ((Y G) (n−1)))) (3−1)))
4 × (3 × (1, if 2 = 0; else 2 × ((Y G) (2−1))))
4 × (3 × (2 × (G (Y G) (2−1))))
4 × (3 × (2 × ((λn.(1, if n = 0; else n × ((Y G) (n−1)))) (2−1))))
4 × (3 × (2 × (1, if 1 = 0; else 1 × ((Y G) (1−1)))))
4 × (3 × (2 × (1 × (G (Y G) (1−1)))))
4 × (3 × (2 × (1 × ((λn.(1, if n = 0; else n × ((Y G) (n−1)))) (1−1)))))
4 × (3 × (2 × (1 × (1, if 0 = 0; else 0 × ((Y G) (0−1))))))
4 × (3 × (2 × (1 × (1))))
24
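Python evaluates arguments eagerly, so the call-by-name Y above would recurse forever before g is ever applied; the call-by-value variant Z := λg.(λx.g (λv.x x v)) (λx.g (λv.x x v)), an η-expanded Y, works instead. G here mirrors the factorial-generating term in the text, written with native arithmetic for readability.

```python
Z = lambda g: (lambda x: g(lambda v: x(x)(v)))(lambda x: g(lambda v: x(x)(v)))
G = lambda r: lambda n: 1 if n == 0 else n * r(n - 1)

fact = Z(G)          # the fixed point of G is the factorial function
print(fact(4))       # 24, matching the trace above
```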
Every recursively defined function can be seen as a fixed point of some suitably defined function closing over the recursive call with an extra argument, and therefore, using Y, every recursively defined function can be expressed as a lambda expression. In particular, we can now cleanly define the subtraction, multiplication and comparison predicate of natural numbers recursively.
Standard terms
Certain terms have commonly accepted names:[28][29][30]
I := λx.x
S := λx.λy.λz.x z (y z)
K := λx.λy.x
B := λx.λy.λz.x (y z)
C := λx.λy.λz.x z y
W := λx.λy.x y y
ω or Δ or U := λx.x x
Ω := ω ω
I is the identity function. SK and BCKW form complete combinator calculus systems that can express any lambda term; see the next section. Ω is UU, the smallest term that has no normal form. YI is another such term. Y is standard and defined above, and can also be defined as Y=BU(CBU), so that Yg=g(Yg). TRUE and FALSE defined above are commonly abbreviated as T and F.
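These standard terms can be transcribed as Python lambdas; for example, S K K behaves as the identity, since S K K z = K z (K z) = z.

```python
I = lambda x: x
S = lambda x: lambda y: lambda z: x(z)(y(z))
K = lambda x: lambda y: x
B = lambda x: lambda y: lambda z: x(y(z))
C = lambda x: lambda y: lambda z: x(z)(y)
W = lambda x: lambda y: x(y)(y)

print(S(K)(K)(42))                              # 42: S K K acts as I
print(B(lambda n: n + 1)(lambda n: n * 2)(5))   # 11: B composes functions
```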
Abstraction elimination
Main article: Combinatory logic § Completeness of the S-K basis
If N is a lambda-term without abstraction, but possibly containing named constants (combinators), then there exists a lambda-term T(x,N) which is equivalent to λx.N but lacks abstraction (except as part of the named constants, if these are considered non-atomic). This can also be viewed as anonymising variables, as T(x,N) removes all occurrences of x from N, while still allowing argument values to be substituted into the positions where N contains an x. The conversion function T can be defined by:
T(x, x) := I
T(x, N) := K N if x is not free in N.
T(x, M N) := S T(x, M) T(x, N)
In either case, a term of the form T(x,N) P can reduce by having the initial combinator I, K, or S grab the argument P, just like β-reduction of (λx.N) P would do. I returns that argument. K throws the argument away, just like (λx.N) would do if x has no free occurrence in N. S passes the argument on to both subterms of the application, and then applies the result of the first to the result of the second.
The combinators B and C are similar to S, but pass the argument on to only one subterm of an application (B to the "argument" subterm and C to the "function" subterm), thus saving a subsequent K if there is no occurrence of x in one subterm. In comparison to B and C, the S combinator actually conflates two functionalities: rearranging arguments, and duplicating an argument so that it may be used in two places. The W combinator does only the latter, yielding the B, C, K, W system as an alternative to SKI combinator calculus.
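The conversion T can be sketched on the tiny tuple representation used earlier (variables are strings; `('app', t, s)` is an application; the combinator names 'S', 'K', 'I' are treated as constants rather than variables, an assumption of this example).

```python
COMBINATORS = {'S', 'K', 'I'}

def free_vars(term):
    if isinstance(term, str):
        return set() if term in COMBINATORS else {term}
    _, t, s = term
    return free_vars(t) | free_vars(s)

def T(x, term):
    if term == x:                                  # T(x, x) = I
        return 'I'
    if x not in free_vars(term):                   # T(x, N) = K N
        return ('app', 'K', term)
    _, t, s = term                                 # T(x, M N) = S T(x,M) T(x,N)
    return ('app', ('app', 'S', T(x, t)), T(x, s))

# λx.x x becomes S I I:
print(T('x', ('app', 'x', 'x')))   # ('app', ('app', 'S', 'I'), 'I')
```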
Typed lambda calculus
Main article: Typed lambda calculus
A typed lambda calculus is a typed formalism that uses the lambda-symbol ($\lambda $) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see Kinds of typed lambda calculi). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type.[31]
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here typability usually captures desirable properties of the program, e.g. the program will not cause a memory access violation.
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of classes of categories, e.g. the simply typed lambda calculus is the language of Cartesian closed categories (CCCs).
Reduction strategies
Main article: Reduction strategy § Lambda calculus
Whether a term is normalising or not, and how much work needs to be done in normalising it if it is, depends to a large extent on the reduction strategy used. Common lambda calculus reduction strategies include:[32][33][34]
Normal order
The leftmost, outermost redex is always reduced first. That is, whenever possible the arguments are substituted into the body of an abstraction before the arguments are reduced.
Applicative order
The leftmost, innermost redex is always reduced first. Intuitively this means a function's arguments are always reduced before the function itself. Applicative order always attempts to apply functions to normal forms, even when this is not possible.
Full β-reductions
Any redex can be reduced at any time. This means essentially the lack of any particular reduction strategy—with regard to reducibility, "all bets are off".
Weak reduction strategies do not reduce under lambda abstractions:
Call by value
A redex is reduced only when its right-hand side has reduced to a value (variable or abstraction). Only the outermost redexes are reduced.
Call by name
As normal order, but no reductions are performed inside abstractions. For example, λx.(λy.y)x is in normal form according to this strategy, although it contains the redex (λy.y)x.
Strategies with sharing reduce computations that are "the same" in parallel:
Optimal reduction
As normal order, but computations that have the same label are reduced simultaneously.
Call by need
As call by name (hence weak), but function applications that would duplicate terms instead name the argument, which is then reduced only "when it is needed".
Computability
There is no algorithm that takes as input any two lambda expressions and outputs TRUE or FALSE depending on whether one expression reduces to the other.[12] More precisely, no computable function can decide the question. This was historically the first problem for which undecidability could be proven. As usual for such a proof, computable means computable by any model of computation that is Turing complete. In fact computability can itself be defined via the lambda calculus: a function F: N → N of natural numbers is a computable function if and only if there exists a lambda expression f such that for every pair of x, y in N, F(x)=y if and only if f x =β y, where x and y are the Church numerals corresponding to x and y, respectively and =β meaning equivalence with β-reduction. See the Church–Turing thesis for other approaches to defining computability and their equivalence.
Church's proof of uncomputability first reduces the problem to determining whether a given lambda expression has a normal form. Then he assumes that this predicate is computable, and can hence be expressed in lambda calculus. Building on earlier work by Kleene and constructing a Gödel numbering for lambda expressions, he constructs a lambda expression e that closely follows the proof of Gödel's first incompleteness theorem. If e is applied to its own Gödel number, a contradiction results.
Complexity
The notion of computational complexity for the lambda calculus is a bit tricky, because the cost of a β-reduction may vary depending on how it is implemented.[35] To be precise, one must somehow find the location of all of the occurrences of the bound variable V in the expression E, implying a time cost, or one must keep track of the locations of free variables in some way, implying a space cost. A naïve search for the locations of V in E is O(n) in the length n of E. Director strings were an early approach that traded this time cost for a quadratic space usage.[36] More generally this has led to the study of systems that use explicit substitution.
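The linear-time traversal that locates occurrences of the bound variable can be made concrete with a naive substitution over a tiny tuple-based term representation; this sketch is invented for illustration and omits capture avoidance, so it is only safe when the substituted term has no free variables bound along the path:

```python
def subst(term, v, n):
    """Return term with every free occurrence of variable v replaced by n.

    Terms are ('var', name), ('app', t1, t2), or ('lam', name, body);
    the whole term is walked, which is O(n) in its size.
    """
    kind = term[0]
    if kind == 'var':
        return n if term[1] == v else term
    if kind == 'app':
        return ('app', subst(term[1], v, n), subst(term[2], v, n))
    # kind == 'lam'
    if term[1] == v:          # v is rebound here, so it is not free below
        return term
    return ('lam', term[1], subst(term[2], v, n))

# β-reducing (λx. x x) y substitutes y for x in the body:
body = ('app', ('var', 'x'), ('var', 'x'))
result = subst(body, 'x', ('var', 'y'))
assert result == ('app', ('var', 'y'), ('var', 'y'))
```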
In 2014 it was shown that the number of β-reduction steps taken by normal order reduction to reduce a term is a reasonable time cost model, that is, the reduction can be simulated on a Turing machine in time polynomially proportional to the number of steps.[37] This was a long-standing open problem, due to size explosion, the existence of lambda terms which grow exponentially in size for each β-reduction. The result gets around this by working with a compact shared representation. The result makes clear that the amount of space needed to evaluate a lambda term is not proportional to the size of the term during reduction. It is not currently known what a good measure of space complexity would be.[38]
An unreasonable model does not necessarily mean inefficient. Optimal reduction reduces all computations with the same label in one step, avoiding duplicated work, but the number of parallel β-reduction steps to reduce a given term to normal form is approximately linear in the size of the term. This is far too small to be a reasonable cost measure, as any Turing machine may be encoded in the lambda calculus in size linearly proportional to the size of the Turing machine. The true cost of reducing lambda terms is not due to β-reduction per se but rather the handling of the duplication of redexes during β-reduction.[39] It is not known if optimal reduction implementations are reasonable when measured with respect to a reasonable cost model such as the number of leftmost-outermost steps to normal form, but it has been shown for fragments of the lambda calculus that the optimal reduction algorithm is efficient and has at most a quadratic overhead compared to leftmost-outermost.[38] In addition the BOHM prototype implementation of optimal reduction outperformed both Caml Light and Haskell on pure lambda terms.[39]
Lambda calculus and programming languages
As pointed out by Peter Landin's 1965 paper "A Correspondence between ALGOL 60 and Church's Lambda-notation",[40] sequential procedural programming languages can be understood in terms of the lambda calculus, which provides the basic mechanisms for procedural abstraction and procedure (subprogram) application.
Anonymous functions
For example, in Python the "square" function can be expressed as a lambda expression as follows:
(lambda x: x**2)
The above example is an expression that evaluates to a first-class function. The symbol lambda creates an anonymous function, given a list of parameter names, x – just a single argument in this case, and an expression that is evaluated as the body of the function, x**2. Anonymous functions are sometimes called lambda expressions.
For example, Pascal and many other imperative languages have long supported passing subprograms as arguments to other subprograms through the mechanism of function pointers. However, function pointers alone do not make functions first-class datatypes: a function is a first-class datatype if and only if new instances of the function can be created at run-time. Such run-time creation of functions is supported in Smalltalk, JavaScript and Wolfram Language, and more recently in Scala, Eiffel ("agents"), C# ("delegates") and C++11, among others.
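The run-time creation criterion can be illustrated in Python with closures; `make_adder` is an invented example name:

```python
def make_adder(n):
    # Each call creates a fresh function value closing over n —
    # the run-time creation of functions that marks them as first-class.
    return lambda x: x + n

add3 = make_adder(3)
add7 = make_adder(7)
assert (add3(10), add7(10)) == (13, 17)
```

A language with only static function pointers could pass `add3` around, but could not manufacture a new, distinct `add7` at run-time.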
Parallelism and concurrency
The Church–Rosser property of the lambda calculus means that evaluation (β-reduction) can be carried out in any order, even in parallel. This means that various nondeterministic evaluation strategies are relevant. However, the lambda calculus does not offer any explicit constructs for parallelism. One can add constructs such as Futures to the lambda calculus. Other process calculi have been developed for describing communication and concurrency.
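As an illustration of the Futures idea (in Python rather than in an extended calculus), independent subterms can be evaluated in parallel precisely because, by the Church–Rosser property, their evaluation order does not affect the result:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    return x * x

# The applications square(0)..square(3) share no data, so they may be
# reduced in any order — here, concurrently in a thread pool.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(square, range(4)))

assert results == [0, 1, 4, 9]
```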
Semantics
The fact that lambda calculus terms act as functions on other lambda calculus terms, and even on themselves, led to questions about the semantics of the lambda calculus. Could a sensible meaning be assigned to lambda calculus terms? The natural semantics was to find a set D isomorphic to the function space D → D of functions on itself. However, no nontrivial such D can exist, by cardinality constraints: the set of all functions from D to D has greater cardinality than D, unless D is a singleton set.
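The cardinality obstruction is essentially Cantor's theorem; in symbols (a standard fact, spelled out here for clarity rather than taken from the source):

```latex
% For any set D with at least two elements, the full function space
% D -> D is strictly larger than D itself:
\left| D \to D \right| \;=\; |D|^{|D|} \;\geq\; 2^{|D|} \;>\; |D| ,
```

so no set with two or more elements can be in bijection with its full function space, which is why the resolution described next restricts attention to continuous functions.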
In the 1970s, Dana Scott showed that if only continuous functions were considered, a set or domain D with the required property could be found, thus providing a model for the lambda calculus.[41]
This work also formed the basis for the denotational semantics of programming languages.
Variations and extensions
These extensions are in the lambda cube:
• Typed lambda calculus – Lambda calculus with typed variables (and functions)
• System F – A typed lambda calculus with type-variables
• Calculus of constructions – A typed lambda calculus with types as first-class values
These formal systems are extensions of lambda calculus that are not in the lambda cube:
• Binary lambda calculus – A version of lambda calculus with binary I/O, a binary encoding of terms, and a designated universal machine.
• Lambda-mu calculus – An extension of the lambda calculus for treating classical logic
These formal systems are variations of lambda calculus:
• Kappa calculus – A first-order analogue of lambda calculus
These formal systems are related to lambda calculus:
• Combinatory logic – A notation for mathematical logic without variables
• SKI combinator calculus – A computational system based on the S, K and I combinators, equivalent to lambda calculus, but reducible without variable substitutions
See also
• Applicative computing systems – Treatment of objects in the style of the lambda calculus
• Cartesian closed category – A setting for lambda calculus in category theory
• Categorical abstract machine – A model of computation applicable to lambda calculus
• Curry–Howard isomorphism – The formal correspondence between programs and proofs
• De Bruijn index – notation disambiguating alpha conversions
• De Bruijn notation – notation using postfix modification functions
• Deductive lambda calculus – The consideration of the problems associated with considering lambda calculus as a Deductive system.
• Domain theory – Study of certain posets giving denotational semantics for lambda calculus
• Evaluation strategy – Rules for the evaluation of expressions in programming languages
• Explicit substitution – The theory of substitution, as used in β-reduction
• Functional programming
• Harrop formula – A kind of constructive logical formula such that proofs are lambda terms
• Interaction nets
• Kleene–Rosser paradox – A demonstration that some form of lambda calculus is inconsistent
• Knights of the Lambda Calculus – A semi-fictional organization of LISP and Scheme hackers
• Krivine machine – An abstract machine to interpret call-by-name in lambda calculus
• Lambda calculus definition – Formal definition of the lambda calculus.
• Let expression – An expression closely related to an abstraction.
• Minimalism (computing)
• Rewriting – Transformation of formulæ in formal systems
• SECD machine – A virtual machine designed for the lambda calculus
• Scott–Curry theorem – A theorem about sets of lambda terms
• To Mock a Mockingbird – An introduction to combinatory logic
• Universal Turing machine – A formal computing machine that is equivalent to lambda calculus
• Unlambda – An esoteric functional programming language based on combinatory logic
Further reading
• Abelson, Harold & Gerald Jay Sussman. Structure and Interpretation of Computer Programs. The MIT Press. ISBN 0-262-51087-1.
• Hendrik Pieter Barendregt Introduction to Lambda Calculus.
• Henk Barendregt, The Impact of the Lambda Calculus in Logic and Computer Science. The Bulletin of Symbolic Logic, Volume 3, Number 2, June 1997.
• Barendregt, Hendrik Pieter, The Type Free Lambda Calculus pp1091–1132 of Handbook of Mathematical Logic, North-Holland (1977) ISBN 0-7204-2285-X
• Cardone and Hindley, 2006. History of Lambda-calculus and Combinatory Logic. In Gabbay and Woods (eds.), Handbook of the History of Logic, vol. 5. Elsevier.
• Church, Alonzo, An unsolvable problem of elementary number theory, American Journal of Mathematics, 58 (1936), pp. 345–363. This paper contains the proof that the equivalence of lambda expressions is in general not decidable.
• Church, Alonzo (1941). The Calculi of Lambda-Conversion. Princeton: Princeton University Press. Retrieved 2020-04-14. (ISBN 978-0-691-08394-0)
• Frink Jr., Orrin (1944). "Review: The Calculi of Lambda-Conversion by Alonzo Church" (PDF). Bull. Amer. Math. Soc. 50 (3): 169–172. doi:10.1090/s0002-9904-1944-08090-7.
• Kleene, Stephen, A theory of positive integers in formal logic, American Journal of Mathematics, 57 (1935), pp. 153–173 and 219–244. Contains the lambda calculus definitions of several familiar functions.
• Landin, Peter, A Correspondence Between ALGOL 60 and Church's Lambda-Notation, Communications of the ACM, vol. 8, no. 2 (1965), pages 89–101. Available from the ACM site. A classic paper highlighting the importance of lambda calculus as a basis for programming languages.
• Larson, Jim, An Introduction to Lambda Calculus and Scheme. A gentle introduction for programmers.
• Michaelson, Greg (10 April 2013). An Introduction to Functional Programming Through Lambda Calculus. Courier Corporation. ISBN 978-0-486-28029-5.[42]
• Schalk, A. and Simmons, H. (2005) An introduction to λ-calculi and arithmetic with a decent selection of exercises. Notes for a course in the Mathematical Logic MSc at Manchester University.
• de Queiroz, Ruy J.G.B. (2008). "On Reduction Rules, Meaning-as-Use and Proof-Theoretic Semantics". Studia Logica. 90 (2): 211–247. doi:10.1007/s11225-008-9150-5. S2CID 11321602. A paper giving a formal underpinning to the idea of 'meaning-is-use' which, even if based on proofs, it is different from proof-theoretic semantics as in the Dummett–Prawitz tradition since it takes reduction as the rules giving meaning.
• Hankin, Chris, An Introduction to Lambda Calculi for Computer Scientists, ISBN 0954300653
Monographs/textbooks for graduate students
• Morten Heine Sørensen, Paweł Urzyczyn, Lectures on the Curry–Howard isomorphism, Elsevier, 2006, ISBN 0-444-52077-5 is a recent monograph that covers the main topics of lambda calculus from the type-free variety, to most typed lambda calculi, including more recent developments like pure type systems and the lambda cube. It does not cover subtyping extensions.
• Pierce, Benjamin (2002), Types and Programming Languages, MIT Press, ISBN 0-262-16209-1 covers lambda calculi from a practical type system perspective; some topics like dependent types are only mentioned, but subtyping is an important topic.
Documents
• Achim Jung, A Short Introduction to the Lambda Calculus-(PDF)
• Dana Scott, A timeline of lambda calculus-(PDF)
• Raúl Rojas, A Tutorial Introduction to the Lambda Calculus-(PDF)
• Peter Selinger, Lecture Notes on the Lambda Calculus-(PDF)
• Marius Buliga, Graphic lambda calculus
• Lambda Calculus as a Workflow Model by Peter Kelly, Paul Coddington, and Andrew Wendelborn; mentions graph reduction as a common means of evaluating lambda expressions and discusses the applicability of lambda calculus for distributed computing (due to the Church–Rosser property, which enables parallel graph reduction for lambda expressions).
Notes
1. These rules produce expressions such as: $(\lambda x.\lambda y.(\lambda z.(\lambda x.z\ x)\ (\lambda y.z\ y))(x\ y))$. Parentheses can be dropped if the expression is unambiguous. For some applications, terms for logical and mathematical constants and operations may be included.
2. Barendregt, Barendsen (2000) call this form
• axiom β: (λx.M[x]) N = M[N], rewritten as (λx.M) N = M[x := N], "where M[x := N] denotes the substitution of N for every occurrence of x in M".[1]: 7 Also denoted M[N/x], "the substitution of N for x in M".[2]
3. For a full history, see Cardone and Hindley's "History of Lambda-calculus and Combinatory Logic" (2006).
4. Note that $\mapsto $ is pronounced "maps to".
5. The expression e can be: variables x, lambda abstractions, or applications —in BNF, $e::=x\mid \lambda x.e\mid e\,e$ .— from Wikipedia's Simply typed lambda calculus#Syntax for untyped lambda calculus
6. $(\lambda x.t)$ is sometimes written in ASCII as $Lx.t$
7. In anonymous form, $(\lambda x.t)$ gets rewritten to $x\mapsto t$ .
8. free variables in lambda Notation and its Calculus are comparable to linear algebra and mathematical concepts of the same name
9. The set of free variables of M, but with {x} removed
10. The union of the set of free variables of $M$ and the set of free variables of $N$[1]
11. (λf.M) N can be pronounced "let f be N in M".
12. Ariola and Blom[27] employ 1) axioms for a representational calculus using well-formed cyclic lambda graphs extended with letrec, to detect possibly infinite unwinding trees; 2) the representational calculus with β-reduction of scoped lambda graphs constitute Ariola/Blom's cyclic extension of lambda calculus; 3) Ariola/Blom reason about strict languages using § call-by-value, and compare to Moggi's calculus, and to Hasegawa's calculus. Conclusions on p. 111.[27]
References
Some parts of this article are based on material from FOLDOC, used with permission.
1. Barendregt, Henk; Barendsen, Erik (March 2000), Introduction to Lambda Calculus (PDF)
2. explicit substitution at the nLab
3. Turing, Alan M. (December 1937). "Computability and λ-Definability". The Journal of Symbolic Logic. 2 (4): 153–163. doi:10.2307/2268280. JSTOR 2268280. S2CID 2317046.
4. Coquand, Thierry (8 February 2006). Zalta, Edward N. (ed.). "Type Theory". The Stanford Encyclopedia of Philosophy (Summer 2013 ed.). Retrieved November 17, 2020.
5. Moortgat, Michael (1988). Categorial Investigations: Logical and Linguistic Aspects of the Lambek Calculus. Foris Publications. ISBN 9789067653879.
6. Bunt, Harry; Muskens, Reinhard, eds. (2008). Computing Meaning. Springer. ISBN 978-1-4020-5957-5.
7. Mitchell, John C. (2003). Concepts in Programming Languages. Cambridge University Press. p. 57. ISBN 978-0-521-78098-8..
8. Pierce, Benjamin C. Basic Category Theory for Computer Scientists. p. 53.
9. Church, Alonzo (1932). "A set of postulates for the foundation of logic". Annals of Mathematics. Series 2. 33 (2): 346–366. doi:10.2307/1968337. JSTOR 1968337.
10. Kleene, Stephen C.; Rosser, J. B. (July 1935). "The Inconsistency of Certain Formal Logics". The Annals of Mathematics. 36 (3): 630. doi:10.2307/1968646. JSTOR 1968646.
11. Church, Alonzo (December 1942). "Review of Haskell B. Curry, The Inconsistency of Certain Formal Logics". The Journal of Symbolic Logic. 7 (4): 170–171. doi:10.2307/2268117. JSTOR 2268117.
12. Church, Alonzo (1936). "An unsolvable problem of elementary number theory". American Journal of Mathematics. 58 (2): 345–363. doi:10.2307/2371045. JSTOR 2371045.
13. Church, Alonzo (1940). "A Formulation of the Simple Theory of Types". Journal of Symbolic Logic. 5 (2): 56–68. doi:10.2307/2266170. JSTOR 2266170. S2CID 15889861.
14. Partee, B. B. H.; ter Meulen, A.; Wall, R. E. (1990). Mathematical Methods in Linguistics. Springer. ISBN 9789027722454. Retrieved 29 Dec 2016.
15. Alma, Jesse. Zalta, Edward N. (ed.). "The Lambda Calculus". The Stanford Encyclopedia of Philosophy (Summer 2013 ed.). Retrieved November 17, 2020.
16. Dana Scott, "Looking Backward; Looking Forward", Invited Talk at the Workshop in honour of Dana Scott’s 85th birthday and 50 years of domain theory, 7–8 July, FLoC 2018 (talk 7 July 2018). The relevant passage begins at 32:50. (See also this extract of a May 2016 talk at the University of Birmingham, UK.)
17. Turner, D. A. (12 June 2012). Some History of Functional Programming Languages (PDF). Trends in Functional Programming. Lecture Notes in Computer Science. Vol. 7829. St. Andrews University. Section 3, Algol 60. doi:10.1007/978-3-642-40447-4_1. ISBN 978-3-642-40447-4. This mechanism works well for Algol 60 but in a language in which functions can be returned as results, a free variable might be held onto after the function call in which it was created has returned, and will no longer be present on the stack. Landin (1964) solved this in his SECD machine.
18. Barendregt, Hendrik Pieter (1984). The Lambda Calculus: Its Syntax and Semantics. Studies in Logic and the Foundations of Mathematics. Vol. 103 (Revised ed.). North Holland. ISBN 0-444-87508-5.
19. Corrections.
20. "Example for Rules of Associativity". Lambda-bound.com. Retrieved 2012-06-18.
21. "The Basic Grammar of Lambda Expressions". SoftOption. Some other systems use juxtaposition to mean application, so 'ab' means 'a@b'. This is fine except that it requires that variables have length one so that we know that 'ab' is two variables juxtaposed not one variable of length 2. But we want labels like 'firstVariable' to mean a single variable, so we cannot use this juxtaposition convention.
22. Selinger, Peter (2008), Lecture Notes on the Lambda Calculus (PDF), vol. 0804, Department of Mathematics and Statistics, University of Ottawa, p. 9, arXiv:0804.3434, Bibcode:2008arXiv0804.3434S
23. de Queiroz, Ruy J. G. B. (1988). "A Proof-Theoretic Account of Programming and the Role of Reduction Rules". Dialectica. 42 (4): 265–282. doi:10.1111/j.1746-8361.1988.tb00919.x.
24. Turbak, Franklyn; Gifford, David (2008), Design concepts in programming languages, MIT press, p. 251, ISBN 978-0-262-20175-9
25. Luke Palmer (29 Dec 2010) Haskell-cafe: What's the motivation for η rules?
26. Felleisen, Matthias; Flatt, Matthew (2006), Programming Languages and Lambda Calculi (PDF), p. 26, archived from the original (PDF) on 2009-02-05; A note (accessed 2017) at the original location suggests that the authors consider the work originally referenced to have been superseded by a book.
27. Zena M. Ariola and Stefan Blom, Proc. TACS '94 Sendai, Japan 1997 (1997) Cyclic lambda calculi 114 pages.
28. Ker, Andrew D. "Lambda Calculus and Types" (PDF). p. 6. Retrieved 14 January 2022.
29. Dezani-Ciancaglini, Mariangiola; Ghilezan, Silvia (2014). "Preciseness of Subtyping on Intersection and Union Types" (PDF). Rewriting and Typed Lambda Calculi. Lecture Notes in Computer Science. 8560: 196. doi:10.1007/978-3-319-08918-8_14. hdl:2318/149874. ISBN 978-3-319-08917-1. Retrieved 14 January 2022.
30. Forster, Yannick; Smolka, Gert (August 2019). "Call-by-Value Lambda Calculus as a Model of Computation in Coq" (PDF). Journal of Automated Reasoning. 63 (2): 393–413. doi:10.1007/s10817-018-9484-2. S2CID 53087112. Retrieved 14 January 2022.
31. Types and Programming Languages, p. 273, Benjamin C. Pierce
32. Pierce, Benjamin C. (2002). Types and Programming Languages. MIT Press. p. 56. ISBN 0-262-16209-1.
33. Sestoft, Peter (2002). "Demonstrating Lambda Calculus Reduction" (PDF). The Essence of Computation. Lecture Notes in Computer Science. 2566: 420–435. doi:10.1007/3-540-36377-7_19. ISBN 978-3-540-00326-7. Retrieved 22 August 2022.
34. Biernacka, Małgorzata; Charatonik, Witold; Drab, Tomasz (2022). Andronick, June; de Moura, Leonardo (eds.). "The Zoo of Lambda-Calculus Reduction Strategies, And Coq" (PDF). 13th International Conference on Interactive Theorem Proving (ITP 2022). 237: 7:1–7:19. doi:10.4230/LIPIcs.ITP.2022.7. Retrieved 22 August 2022.
35. Frandsen, Gudmund Skovbjerg; Sturtivant, Carl (26 August 1991). "What is an Efficient Implementation of the \lambda-calculus?". Proceedings of the 5th ACM Conference on Functional Programming Languages and Computer Architecture. Lecture Notes in Computer Science. Springer-Verlag. 523: 289–312. CiteSeerX 10.1.1.139.6913. doi:10.1007/3540543961_14. ISBN 9783540543961.
36. Sinot, F.-R. (2005). "Director Strings Revisited: A Generic Approach to the Efficient Representation of Free Variables in Higher-order Rewriting" (PDF). Journal of Logic and Computation. 15 (2): 201–218. doi:10.1093/logcom/exi010.
37. Accattoli, Beniamino; Dal Lago, Ugo (14 July 2014). "Beta reduction is invariant, indeed". Proceedings of the Joint Meeting of the Twenty-Third EACSL Annual Conference on Computer Science Logic (CSL) and the Twenty-Ninth Annual ACM/IEEE Symposium on Logic in Computer Science (LICS). pp. 1–10. arXiv:1601.01233. doi:10.1145/2603088.2603105. ISBN 9781450328869. S2CID 11485010.
38. Accattoli, Beniamino (October 2018). "(In)Efficiency and Reasonable Cost Models". Electronic Notes in Theoretical Computer Science. 338: 23–43. doi:10.1016/j.entcs.2018.10.003.
39. Asperti, Andrea (16 Jan 2017). "About the efficient reduction of lambda terms". arXiv:1701.04240v1. {{cite journal}}: Cite journal requires |journal= (help)
40. Landin, P. J. (1965). "A Correspondence between ALGOL 60 and Church's Lambda-notation". Communications of the ACM. 8 (2): 89–101. doi:10.1145/363744.363749. S2CID 6505810.
41. Scott, Dana (1993). "A type-theoretical alternative to ISWIM, CUCH, OWHY" (PDF). Theoretical Computer Science. 121 (1–2): 411–440. doi:10.1016/0304-3975(93)90095-B. Retrieved 2022-12-01. Written 1969, widely circulated as an unpublished manuscript.
42. "Greg Michaelson's Homepage". Mathematical and Computer Sciences. Riccarton, Edinburgh: Heriot-Watt University. Retrieved 6 November 2022.
External links
• Graham Hutton, Lambda Calculus, a short (12 minutes) Computerphile video on the Lambda Calculus
• Helmut Brandl, Step by Step Introduction to Lambda Calculus
• "Lambda-calculus", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• David C. Keenan, To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus with Animated Reduction
• L. Allison, Some executable λ-calculus examples
• Georg P. Loczewski, The Lambda Calculus and A++
• Bret Victor, Alligator Eggs: A Puzzle Game Based on Lambda Calculus
• Lambda Calculus Archived 2012-10-14 at the Wayback Machine on Safalra's Website Archived 2021-05-02 at the Wayback Machine
• LCI Lambda Interpreter a simple yet powerful pure calculus interpreter
• Lambda Calculus links on Lambda-the-Ultimate
• Mike Thyer, Lambda Animator, a graphical Java applet demonstrating alternative reduction strategies.
• Implementing the Lambda calculus using C++ Templates
• Shane Steinert-Threlkeld, "Lambda Calculi", Internet Encyclopedia of Philosophy
• Anton Salikhmetov, Macro Lambda Calculus
Alonzo Church
Notable ideas
• Lambda calculus
• Simply typed lambda calculus
• Church–Turing thesis
• Church encoding
• Frege–Church ontology
• Church–Rosser theorem
Students
• Alan Turing
• C. Anthony Anderson
• Peter Andrews
• George Alfred Barnard
• William Boone
• Martin Davis
• William Easton
• Alfred Foster
• Leon Henkin
• John George Kemeny
• Stephen Cole Kleene
• Simon B. Kochen
• Maurice L'Abbé
• Isaac Malitz
• Gary R. Mar
• Michael O. Rabin
• Nicholas Rescher
• Hartley Rogers, Jr
• J. Barkley Rosser
• Dana Scott
• Norman Shapiro
• Raymond Smullyan
Institutions
• Princeton University
• University of California, Los Angeles
Family
• Alonzo Church (college president)
• A. C. Croom
Mathematical logic
General
• Axiom
• list
• Cardinality
• First-order logic
• Formal proof
• Formal semantics
• Foundations of mathematics
• Information theory
• Lemma
• Logical consequence
• Model
• Theorem
• Theory
• Type theory
Theorems (list)
& Paradoxes
• Gödel's completeness and incompleteness theorems
• Tarski's undefinability
• Banach–Tarski paradox
• Cantor's theorem, paradox and diagonal argument
• Compactness
• Halting problem
• Lindström's
• Löwenheim–Skolem
• Russell's paradox
Logics
Traditional
• Classical logic
• Logical truth
• Tautology
• Proposition
• Inference
• Logical equivalence
• Consistency
• Equiconsistency
• Argument
• Soundness
• Validity
• Syllogism
• Square of opposition
• Venn diagram
Propositional
• Boolean algebra
• Boolean functions
• Logical connectives
• Propositional calculus
• Propositional formula
• Truth tables
• Many-valued logic
• 3
• Finite
• ∞
Predicate
• First-order
• list
• Second-order
• Monadic
• Higher-order
• Free
• Quantifiers
• Predicate
• Monadic predicate calculus
Set theory
• Set
• Hereditary
• Class
• (Ur-)Element
• Ordinal number
• Extensionality
• Forcing
• Relation
• Equivalence
• Partition
• Set operations:
• Intersection
• Union
• Complement
• Cartesian product
• Power set
• Identities
Types of Sets
• Countable
• Uncountable
• Empty
• Inhabited
• Singleton
• Finite
• Infinite
• Transitive
• Ultrafilter
• Recursive
• Fuzzy
• Universal
• Universe
• Constructible
• Grothendieck
• Von Neumann
Maps & Cardinality
• Function/Map
• Domain
• Codomain
• Image
• In/Sur/Bi-jection
• Schröder–Bernstein theorem
• Isomorphism
• Gödel numbering
• Enumeration
• Large cardinal
• Inaccessible
• Aleph number
• Operation
• Binary
Set theories
• Zermelo–Fraenkel
• Axiom of choice
• Continuum hypothesis
• General
• Kripke–Platek
• Morse–Kelley
• Naive
• New Foundations
• Tarski–Grothendieck
• Von Neumann–Bernays–Gödel
• Ackermann
• Constructive
Formal systems (list),
Language & Syntax
• Alphabet
• Arity
• Automata
• Axiom schema
• Expression
• Ground
• Extension
• by definition
• Conservative
• Relation
• Formation rule
• Grammar
• Formula
• Atomic
• Closed
• Ground
• Open
• Free/bound variable
• Language
• Metalanguage
• Logical connective
• ¬
• ∨
• ∧
• →
• ↔
• =
• Predicate
• Functional
• Variable
• Propositional variable
• Proof
• Quantifier
• ∃
• !
• ∀
• rank
• Sentence
• Atomic
• Spectrum
• Signature
• String
• Substitution
• Symbol
• Function
• Logical/Constant
• Non-logical
• Variable
• Term
• Theory
• list
Example axiomatic
systems
(list)
• of arithmetic:
• Peano
• second-order
• elementary function
• primitive recursive
• Robinson
• Skolem
• of the real numbers
• Tarski's axiomatization
• of Boolean algebras
• canonical
• minimal axioms
• of geometry:
• Euclidean:
• Elements
• Hilbert's
• Tarski's
• non-Euclidean
• Principia Mathematica
Proof theory
• Formal proof
• Natural deduction
• Logical consequence
• Rule of inference
• Sequent calculus
• Theorem
• Systems
• Axiomatic
• Deductive
• Hilbert
• list
• Complete theory
• Independence (from ZFC)
• Proof of impossibility
• Ordinal analysis
• Reverse mathematics
• Self-verifying theories
Model theory
• Interpretation
• Function
• of models
• Model
• Equivalence
• Finite
• Saturated
• Spectrum
• Submodel
• Non-standard model
• of arithmetic
• Diagram
• Elementary
• Categorical theory
• Model complete theory
• Satisfiability
• Semantics of logic
• Strength
• Theories of truth
• Semantic
• Tarski's
• Kripke's
• T-schema
• Transfer principle
• Truth predicate
• Truth value
• Type
• Ultraproduct
• Validity
Computability theory
• Church encoding
• Church–Turing thesis
• Computably enumerable
• Computable function
• Computable set
• Decision problem
• Decidable
• Undecidable
• P
• NP
• P versus NP problem
• Kolmogorov complexity
• Lambda calculus
• Primitive recursive function
• Recursion
• Recursive set
• Turing machine
• Type theory
Related
• Abstract logic
• Category theory
• Concrete/Abstract Category
• Category of sets
• History of logic
• History of mathematical logic
• timeline
• Logicism
• Mathematical object
• Philosophy of mathematics
• Supertask
Mathematics portal
Authority control
National
• France
• BnF data
• Germany
• Israel
• United States
• Latvia
Other
• IdRef
Formal semantics (natural language)
Central concepts
• Compositionality
• Denotation
• Entailment
• Extension
• Generalized quantifier
• Intension
• Logical form
• Presupposition
• Proposition
• Reference
• Scope
• Speech act
• Syntax–semantics interface
• Truth conditions
Topics
Areas
• Anaphora
• Ambiguity
• Binding
• Conditionals
• Definiteness
• Disjunction
• Evidentiality
• Focus
• Indexicality
• Lexical semantics
• Modality
• Negation
• Propositional attitudes
• Tense–aspect–mood
• Quantification
• Vagueness
Phenomena
• Antecedent-contained deletion
• Cataphora
• Coercion
• Conservativity
• Counterfactuals
• Cumulativity
• De dicto and de re
• De se
• Deontic modality
• Discourse relations
• Donkey anaphora
• Epistemic modality
• Exhaustivity
• Faultless disagreement
• Free choice inferences
• Givenness
• Crossover effects
• Hurford disjunction
• Inalienable possession
• Intersective modification
• Logophoricity
• Mirativity
• Modal subordination
• Opaque contexts
• Performatives
• Polarity items
• Privative adjectives
• Quantificational variability effect
• Responsive predicate
• Rising declaratives
• Scalar implicature
• Sloppy identity
• Subsective modification
• Subtrigging
• Telicity
• Temperature paradox
• Veridicality
Formalism
Formal systems
• Alternative semantics
• Categorial grammar
• Combinatory categorial grammar
• Discourse representation theory (DRT)
• Dynamic semantics
• Frame semantics
• Generative grammar
• Glue semantics
• Inquisitive semantics
• Intensional logic
• Lambda calculus
• Mereology
• Montague grammar
• Segmented discourse representation theory (SDRT)
• Situation semantics
• Supervaluationism
• Type theory
• TTR
Concepts
• Autonomy of syntax
• Context set
• Continuation
• Conversational scoreboard
• Existential closure
• Function application
• Meaning postulate
• Monads
• Possible world
• Quantifier raising
• Quantization
• Question under discussion
• Semantic parsing
• Squiggle operator
• Strict conditional
• Type shifter
• Universal grinder
See also
• Cognitive semantics
• Computational semantics
• Distributional semantics
• Formal grammar
• Inferentialism
• Term logic
• Linguistics wars
• Philosophy of language
• Pragmatics
• Context
• Deixis
• Semantics of logic
Type I and type II errors
In statistical hypothesis testing, a type I error is the mistaken rejection of an actually true null hypothesis (also known as a "false positive" finding or conclusion; example: "an innocent person is convicted"), while a type II error is the failure to reject a null hypothesis that is actually false (also known as a "false negative" finding or conclusion; example: "a guilty person is not convicted").[1] Much of statistical theory revolves around the minimization of one or both of these errors, though the complete elimination of either is a statistical impossibility if the outcome is not determined by a known, observable causal process. By selecting a suitable threshold (cut-off) value and adjusting the alpha (α) level, the quality of the hypothesis test can be increased. The knowledge of type I errors and type II errors is widely used in medical science, biometrics and computer science.
Intuitively, type I errors can be thought of as errors of commission (i.e., the researcher unluckily concludes that something is a fact). For instance, consider a study where researchers compare a drug with a placebo. If the patients who are given the drug get better than the patients given the placebo by chance, it may appear that the drug is effective, but in fact the conclusion is incorrect. Conversely, type II errors are errors of omission. In the example above, if the patients who got the drug did not get better at a higher rate than the ones who got the placebo, but this was a random fluke, that would be a type II error. The consequence of a type II error depends on the size and direction of the missed determination and the circumstances. An expensive cure for one in a million patients may be inconsequential even if it truly is a cure.
Definition
Statistical background
In statistical test theory, the notion of a statistical error is an integral part of hypothesis testing. The test involves choosing between two competing propositions called the null hypothesis, denoted by H0, and the alternative hypothesis, denoted by H1. This is conceptually similar to the judgement in a court trial. The null hypothesis corresponds to the position of the defendant: just as he is presumed to be innocent until proven guilty, so is the null hypothesis presumed to be true until the data provide convincing evidence against it. The alternative hypothesis corresponds to the position against the defendant. Typically, the null hypothesis asserts the absence of a difference or the absence of an association; thus, the null hypothesis can never be that there is a difference or an association.
If the result of the test corresponds with reality, then a correct decision has been made. However, if the result of the test does not correspond with reality, then an error has occurred. There are two situations in which the decision is wrong. The null hypothesis may be true, whereas we reject H0. On the other hand, the alternative hypothesis H1 may be true, whereas we do not reject H0. Two types of error are distinguished: type I error and type II error.[2]
Type I error
The first kind of error is the mistaken rejection of a null hypothesis as the result of a test procedure. This kind of error is called a type I error (false positive) and is sometimes called an error of the first kind. In terms of the courtroom example, a type I error corresponds to convicting an innocent defendant.
Type II error
The second kind of error is the mistaken failure to reject the null hypothesis as the result of a test procedure. This sort of error is called a type II error (false negative) and is also referred to as an error of the second kind. In terms of the courtroom example, a type II error corresponds to acquitting a criminal.[3]
Crossover error rate
The crossover error rate (CER) is the point at which type I errors and type II errors are equal. A system with a lower CER value provides more accuracy than a system with a higher CER value.
False positive and false negative
In terms of false positives and false negatives, a positive result corresponds to rejecting the null hypothesis, while a negative result corresponds to failing to reject the null hypothesis; "false" means the conclusion drawn is incorrect. Thus, a type I error is equivalent to a false positive, and a type II error is equivalent to a false negative.
Table of error types
Tabularised relations between truth/falseness of the null hypothesis and outcomes of the test:[4]
• Fail to reject H0 when H0 is true: correct inference (true negative); probability = 1 − α
• Fail to reject H0 when H0 is false: type II error (false negative); probability = β
• Reject H0 when H0 is true: type I error (false positive); probability = α
• Reject H0 when H0 is false: correct inference (true positive); probability = 1 − β
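The four cells of this table can be sketched as a small helper function (a hypothetical illustration, not code from any statistics library):

```python
def classify_outcome(null_is_true: bool, reject_null: bool) -> str:
    """Map a (truth of H0, decision) pair to its cell in the table above."""
    if null_is_true:
        return ("type I error (false positive)" if reject_null
                else "correct inference (true negative)")
    return ("correct inference (true positive)" if reject_null
            else "type II error (false negative)")

print(classify_outcome(True, True))    # type I error (false positive)
print(classify_outcome(False, False))  # type II error (false negative)
```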
Error rate
See also: Sensitivity and specificity and False positive rate § Comparison with other error rates
A perfect test would have zero false positives and zero false negatives. However, statistical methods are probabilistic, and it cannot be known for certain whether statistical conclusions are correct. Whenever there is uncertainty, there is the possibility of making an error. Because of this probabilistic nature, every statistical hypothesis test has some probability of making type I and type II errors.
• The type I error rate is the probability of rejecting the null hypothesis given that it is true. The test is designed to keep the type I error rate below a prespecified bound called the significance level, usually denoted by the Greek letter α (alpha) and also called the alpha level. Usually, the significance level is set to 0.05 (5%), implying that it is acceptable to have a 5% probability of incorrectly rejecting the true null hypothesis.[5]
• The rate of the type II error is denoted by the Greek letter β (beta) and related to the power of a test, which equals 1−β.
These two types of error rates are traded off against each other: for any given sample set, the effort to reduce one type of error generally results in increasing the other type of error.
The quality of hypothesis test
The same idea can be expressed in terms of the rate of correct results, and so can be used to minimize error rates and improve the quality of a hypothesis test. Reducing the probability of committing a type I error is simple and direct: make the alpha level more stringent. Reducing the probability of committing a type II error, which is closely tied to the test's power, can be achieved by increasing the sample size or by relaxing the alpha level. A test statistic is robust if the type I error rate is controlled.
Varying the threshold (cut-off) value can also make the test either more specific or more sensitive, which in turn elevates the test quality. For example, imagine a medical test in which an experimenter measures the concentration of a certain protein in a blood sample. The experimenter can adjust the threshold, and a person is diagnosed as having the disease whenever the measured concentration exceeds it. Moving the threshold trades false positives against false negatives: shifting it in one direction decreases one error rate while increasing the other.
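As a sketch of this threshold effect, assume (hypothetically) that the protein concentration is normally distributed around 10 in healthy people and around 16 in diseased people, with standard deviation 2 in both groups; Python's standard-library NormalDist then shows the two error rates moving in opposite directions as the threshold moves:

```python
from statistics import NormalDist

# Hypothetical protein concentrations: healthy ~ N(10, 2), diseased ~ N(16, 2).
healthy = NormalDist(mu=10, sigma=2)
diseased = NormalDist(mu=16, sigma=2)

def error_rates(threshold: float) -> tuple[float, float]:
    """Diagnose 'diseased' above the threshold; return
    (false positive rate, false negative rate)."""
    false_positive = 1 - healthy.cdf(threshold)  # healthy person flagged
    false_negative = diseased.cdf(threshold)     # diseased person missed
    return false_positive, false_negative

for t in (11.0, 13.0, 15.0):
    fp, fn = error_rates(t)
    print(f"threshold={t}: false positives={fp:.3f}, false negatives={fn:.3f}")
```

Raising the threshold lowers the false positive rate and raises the false negative rate; at the midpoint (13, in this toy setup) the two rates coincide, which is exactly the crossover error rate described earlier.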
Example
Since in a real experiment it is impossible to avoid all type I and type II errors, it is important to consider the amount of risk one is willing to take to falsely reject H0 or accept H0. The solution to this question would be to report the p-value or the significance level α of the statistic. For example, if the p-value of a test statistic result is estimated at 0.0596, then there is a probability of 5.96% that we falsely reject H0. Or, if we say the statistic is performed at level α, such as 0.05, then we accept a 5% chance of falsely rejecting H0. A significance level α of 0.05 is relatively common, but there is no general rule that fits all scenarios.
Vehicle speed measuring
The speed limit of a freeway in the United States is 120 kilometers per hour (75 mph). A device is set to measure the speed of passing vehicles. Suppose that the device will conduct three measurements of the speed of a passing vehicle, recorded as a random sample X1, X2, X3. The traffic police will or will not fine the drivers depending on the average speed ${\bar {X}}$. That is to say, the test statistic is
$T={\frac {X_{1}+X_{2}+X_{3}}{3}}={\bar {X}}$
In addition, we suppose that the measurements X1, X2, X3 are modeled as a normal distribution N(μ,4). Then, T should follow N(μ,4/3), and the parameter μ represents the true speed of the passing vehicle. In this experiment, the null hypothesis H0 and the alternative hypothesis H1 should be
H0: μ=120 against H1: μ>120.
If we perform the test at significance level α=0.05, then a critical value c should be calculated to solve
$P\left(Z\geqslant {\frac {c-120}{\frac {2}{\sqrt {3}}}}\right)=0.05$
according to the change-of-units (standardization) rule for the normal distribution. Referring to a Z-table, we can get
${\frac {c-120}{\frac {2}{\sqrt {3}}}}=1.645\Rightarrow c=121.9$
Here the critical region is T ≥ 121.9. That is to say, if the recorded average speed of a vehicle is greater than the critical value 121.9, the driver will be fined. However, 5% of the drivers are still falsely fined: their recorded average speed is greater than 121.9, but their true speed does not pass 120, which is a type I error.
The type II error corresponds to the case that the true speed of a vehicle is over 120 kilometers per hour but the driver is not fined. For example, if the true speed of a vehicle μ=125, the probability that the driver is not fined can be calculated as
$P(T<121.9\mid \mu =125)=P\left({\frac {T-125}{\frac {2}{\sqrt {3}}}}<{\frac {121.9-125}{\frac {2}{\sqrt {3}}}}\right)=\Phi (-2.68)=0.0036$
which means that, if the true speed of a vehicle is 125, the driver has a 0.36% probability of avoiding the fine when the test is performed at level α=0.05, since the recorded average speed is lower than 121.9. If the true speed is closer to 121.9 than to 125, then the probability of avoiding the fine will also be higher.
The tradeoff between type I error and type II error should also be considered. That is, in this case, if the traffic police do not want to falsely fine innocent drivers, the level α can be set to a smaller value, such as 0.01. However, in that case, drivers whose true speed is over 120 kilometers per hour, such as 125, would be more likely to avoid the fine.
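The two calculations in this example can be checked numerically with Python's standard-library NormalDist (the 1.645 quantile and the Φ(−2.68) value below are the same ones used above):

```python
from statistics import NormalDist

z = NormalDist()            # standard normal distribution
alpha = 0.05
sigma_T = 2 / 3 ** 0.5      # std. dev. of the mean of three N(mu, 4) readings

# Critical value c solving P(T >= c | mu = 120) = 0.05.
c = 120 + z.inv_cdf(1 - alpha) * sigma_T
print(f"critical value c = {c:.1f}")            # 121.9

# Type II error probability at true speed mu = 125: P(T < c | mu = 125).
beta = z.cdf((c - 125) / sigma_T)
print(f"P(no fine | mu = 125) = {beta:.4f}")    # 0.0036
```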
Etymology
In 1928, Jerzy Neyman (1894–1981) and Egon Pearson (1895–1980), both eminent statisticians, discussed the problems associated with "deciding whether or not a particular sample may be judged as likely to have been randomly drawn from a certain population":[6] and, as Florence Nightingale David remarked, "it is necessary to remember the adjective 'random' [in the term 'random sample'] should apply to the method of drawing the sample and not to the sample itself".[7]
They identified "two sources of error", namely:
(a) the error of rejecting a hypothesis that should have not been rejected, and
(b) the error of failing to reject a hypothesis that should have been rejected.
In 1930, they elaborated on these two sources of error, remarking that:
...in testing hypotheses two considerations must be kept in view, we must be able to reduce the chance of rejecting a true hypothesis to as low a value as desired; the test must be so devised that it will reject the hypothesis tested when it is likely to be false.
In 1933, they observed that these "problems are rarely presented in such a form that we can discriminate with certainty between the true and false hypothesis". They also noted that, in deciding whether to fail to reject, or reject a particular hypothesis amongst a "set of alternative hypotheses", H1, H2..., it was easy to make an error:
...[and] these errors will be of two kinds:
(I) we reject H0 [i.e., the hypothesis to be tested] when it is true,[8]
(II) we fail to reject H0 when some alternative hypothesis HA or H1 is true. (There are various notations for the alternative).
In all of the papers co-written by Neyman and Pearson the expression H0 always signifies "the hypothesis to be tested".
In the same paper they call these two sources of error, errors of type I and errors of type II respectively.[9]
Related terms
Null hypothesis
Main article: Null hypothesis
It is standard practice for statisticians to conduct tests in order to determine whether or not a "speculative hypothesis" concerning the observed phenomena of the world (or its inhabitants) can be supported. The results of such testing determine whether a particular set of results agrees reasonably (or does not agree) with the speculated hypothesis.
By statistical convention, it is assumed that the speculated hypothesis is wrong, and that the so-called "null hypothesis" (that the observed phenomena simply occur by chance, and that, as a consequence, the speculated agent has no effect) holds; the test then determines whether this hypothesis is right or wrong. This is why the hypothesis under test is often called the null hypothesis (most likely coined by Fisher (1935, p. 19)): it is this hypothesis that is to be either nullified or not nullified by the test. When the null hypothesis is nullified, it is possible to conclude that the data support the "alternative hypothesis" (which is the original speculated one).
The consistent application by statisticians of Neyman and Pearson's convention of representing "the hypothesis to be tested" (or "the hypothesis to be nullified") with the expression H0 has led to circumstances where many understand the term "the null hypothesis" as meaning "the nil hypothesis" – a statement that the results in question have arisen through chance. This is not necessarily the case – the key restriction, as per Fisher (1966), is that "the null hypothesis must be exact, that is free from vagueness and ambiguity, because it must supply the basis of the 'problem of distribution,' of which the test of significance is the solution."[10] As a consequence of this, in experimental science the null hypothesis is generally a statement that a particular treatment has no effect; in observational science, it is that there is no difference between the value of a particular measured variable, and that of an experimental prediction.
Statistical significance
If the probability of obtaining a result as extreme as the one obtained, supposing that the null hypothesis were true, is lower than a pre-specified cut-off probability (for example, 5%), then the result is said to be statistically significant and the null hypothesis is rejected.
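As a minimal sketch of this decision rule, a one-sided z-test with known standard deviation (all numbers below are hypothetical) computes the p-value and compares it with the cut-off:

```python
from statistics import NormalDist

def z_test_p_value(sample_mean: float, mu0: float, sigma: float, n: int) -> float:
    """One-sided p-value for H0: mu = mu0 against H1: mu > mu0, known sigma."""
    z = (sample_mean - mu0) / (sigma / n ** 0.5)
    return 1 - NormalDist().cdf(z)

# Hypothetical numbers: sample mean 103 from n = 50 draws, sigma = 10, mu0 = 100.
p = z_test_p_value(sample_mean=103.0, mu0=100.0, sigma=10.0, n=50)
print(f"p = {p:.4f}; reject H0 at the 5% level: {p < 0.05}")
```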
British statistician Sir Ronald Aylmer Fisher (1890–1962) stressed that the "null hypothesis":
... is never proved or established, but is possibly disproved, in the course of experimentation. Every experiment may be said to exist only in order to give the facts a chance of disproving the null hypothesis.
— Fisher, 1935, p.19
Application domains
Medicine
In the practice of medicine, the differences between the applications of screening and testing are considerable.
Medical screening
Screening involves relatively cheap tests that are given to large populations, none of whom manifest any clinical indication of disease (e.g., Pap smears).
Testing involves far more expensive, often invasive, procedures that are given only to those who manifest some clinical indication of disease, and are most often applied to confirm a suspected diagnosis.
For example, most states in the USA require newborns to be screened for phenylketonuria and hypothyroidism, among other congenital disorders.
Hypothesis: "The newborns have phenylketonuria and hypothyroidism"
Null hypothesis (H0): "The newborns do not have phenylketonuria and hypothyroidism".
Type I error (false positive): the newborns do not in fact have phenylketonuria and hypothyroidism, but we conclude from the data that they have the disorders.
Type II error (false negative): the newborns do in fact have phenylketonuria and hypothyroidism, but we conclude from the data that they do not have the disorders.
Although they display a high rate of false positives, the screening tests are considered valuable because they greatly increase the likelihood of detecting these disorders at a far earlier stage.
The simple blood tests used to screen possible blood donors for HIV and hepatitis have a significant rate of false positives; however, physicians use much more expensive and far more precise tests to determine whether a person is actually infected with either of these viruses.
Perhaps the most widely discussed false positives in medical screening come from the breast cancer screening procedure mammography. The US rate of false positive mammograms is up to 15%, the highest in the world. One consequence of the high false positive rate in the US is that, in any 10-year period, half of the American women screened receive a false positive mammogram. False positive mammograms are costly, with over $100 million spent annually in the U.S. on follow-up testing and treatment. They also cause women unneeded anxiety. As a result of the high false positive rate in the US, as many as 90–95% of women who get a positive mammogram do not have the condition. The lowest rate in the world is in the Netherlands, 1%. The lowest rates are generally in Northern Europe, where mammography films are read twice and a high threshold for additional testing is set (the high threshold decreases the power of the test).
The ideal population screening test would be cheap, easy to administer, and produce zero false-negatives, if possible. Such tests usually produce more false-positives, which can subsequently be sorted out by more sophisticated (and expensive) testing.
Medical testing
False negatives and false positives are significant issues in medical testing.
Hypothesis: "The patients have the specific disease".
Null hypothesis (H0): "The patients do not have the specific disease".
Type I error (false positive): "The true fact is that the patients do not have a specific disease but the physicians judge the patients to be ill according to the test reports".
False positives can also produce serious and counter-intuitive problems when the condition being searched for is rare, as in screening. If a test has a false positive rate of one in ten thousand, but only one in a million samples (or people) is a true positive, most of the positives detected by that test will be false. The probability that an observed positive result is a false positive may be calculated using Bayes' theorem.
Type II error (false negative): "The true fact is that the disease is actually present but the test reports provide a falsely reassuring message to patients and physicians that the disease is absent".
False negatives produce serious and counter-intuitive problems, especially when the condition being searched for is common. If a test with a false negative rate of only 10% is used to test a population with a true occurrence rate of 70%, many of the negatives detected by the test will be false.
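Both Bayes'-theorem calculations sketched above (the rare condition with a 1-in-10,000 false positive rate, and the common condition with a 10% false negative rate) can be carried out directly; the zero false-negative and zero false-positive rates below are simplifying assumptions, not figures from the text:

```python
def p_condition_given_positive(prevalence: float, fpr: float, fnr: float) -> float:
    """P(condition | positive test), by Bayes' theorem."""
    sensitivity = 1 - fnr
    p_positive = sensitivity * prevalence + fpr * (1 - prevalence)
    return sensitivity * prevalence / p_positive

def p_condition_given_negative(prevalence: float, fpr: float, fnr: float) -> float:
    """P(condition | negative test), by Bayes' theorem."""
    specificity = 1 - fpr
    p_negative = fnr * prevalence + specificity * (1 - prevalence)
    return fnr * prevalence / p_negative

# Rare condition: false positive rate 1/10,000, prevalence 1/1,000,000
# (assuming, for simplicity, a zero false negative rate).
p_true_pos = p_condition_given_positive(1e-6, 1e-4, 0.0)
print(f"P(condition | positive) = {p_true_pos:.4f}")  # ~0.0099: most positives are false

# Common condition: false negative rate 10%, prevalence 70%
# (assuming, for simplicity, a zero false positive rate).
p_missed = p_condition_given_negative(0.7, 0.0, 0.1)
print(f"P(condition | negative) = {p_missed:.4f}")    # ~0.1892: many negatives are false
```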
This sometimes leads to inappropriate or inadequate treatment of both the patient and their disease. A common example is relying on cardiac stress tests to detect coronary atherosclerosis, even though cardiac stress tests are known to only detect limitations of coronary artery blood flow due to advanced stenosis.
Biometrics
Biometric matching, such as for fingerprint recognition, facial recognition or iris recognition, is susceptible to type I and type II errors.
Hypothesis: "The input does not identify someone in the searched list of people"
Null hypothesis: "The input does identify someone in the searched list of people"
Type I error (false reject rate): "The true fact is that the person is someone in the searched list but the system concludes that the person is not according to the data".
Type II error (false match rate): "The true fact is that the person is not someone in the searched list but the system concludes that the person is someone whom we are looking for according to the data".
The probability of type I errors is called the "false reject rate" (FRR) or false non-match rate (FNMR), while the probability of type II errors is called the "false accept rate" (FAR) or false match rate (FMR).
If the system is designed to rarely match suspects then the probability of type II errors can be called the "false alarm rate". On the other hand, if the system is used for validation (and acceptance is the norm) then the FAR is a measure of system security, while the FRR measures user inconvenience level.
Security screening
False positives are routinely found every day in airport security screening, which are ultimately visual inspection systems. The installed security alarms are intended to prevent weapons being brought onto aircraft; yet they are often set to such high sensitivity that they alarm many times a day for minor items, such as keys, belt buckles, loose change, mobile phones, and tacks in shoes.
Here, the null hypothesis is that the item is not a weapon, while the alternative hypothesis is that the item is a weapon.
A type I error (false positive): "The true fact is that the item is not a weapon but the system still alarms".
Type II error (false negative) "The true fact is that the item is a weapon but the system keeps silent at this time".
The ratio of false positives (identifying an innocent traveler as a terrorist) to true positives (detecting a would-be terrorist) is, therefore, very high; and because almost every alarm is a false positive, the positive predictive value of these screening tests is very low.
The relative cost of false results determines the likelihood that test creators allow these events to occur. As the cost of a false negative in this scenario is extremely high (not detecting a bomb being brought onto a plane could result in hundreds of deaths) whilst the cost of a false positive is relatively low (a reasonably simple further inspection) the most appropriate test is one with a low statistical specificity but high statistical sensitivity (one that allows a high rate of false positives in return for minimal false negatives).
Computers
The notions of false positives and false negatives have a wide currency in the realm of computers and computer applications, including computer security, spam filtering, Malware, Optical character recognition and many others.
For example, in the case of spam filtering, the hypothesis here is that the message is spam.
Thus, null hypothesis: "The message is not spam".
Type I error (false positive): "Spam filtering or spam blocking techniques wrongly classify a legitimate email message as spam and, as a result, interferes with its delivery".
While most anti-spam tactics can block or filter a high percentage of unwanted emails, doing so without creating significant false-positive results is a much more demanding task.
Type II error (false negative): "Spam email is not detected as spam, but is classified as non-spam". A low number of false negatives is an indicator of the efficiency of spam filtering.
See also
• Binary classification
• Detection theory
• Egon Pearson
• Ethics in mathematics
• False positive paradox
• False discovery rate
• Family-wise error rate
• Information retrieval performance measures
• Neyman–Pearson lemma
• Null hypothesis
• Probability of a hypothesis for Bayesian inference
• Precision and recall
• Prosecutor's fallacy
• Prozone phenomenon
• Receiver operating characteristic
• Sensitivity and specificity
• Statisticians' and engineers' cross-reference of statistical terms
• Testing hypotheses suggested by the data
• Type III error
References
1. "Type I Error and Type II Error". explorable.com. Retrieved 14 December 2019.
2. A modern introduction to probability and statistics : understanding why and how. Dekking, Michel, 1946-. London: Springer. 2005. ISBN 978-1-85233-896-1. OCLC 262680588.{{cite book}}: CS1 maint: others (link)
3. A modern introduction to probability and statistics : understanding why and how. Dekking, Michel, 1946-. London: Springer. 2005. ISBN 978-1-85233-896-1. OCLC 262680588.{{cite book}}: CS1 maint: others (link)
4. Sheskin, David (2004). Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press. p. 54. ISBN 1584884401.
5. Lindenmayer, David. (2005). Practical conservation biology. Burgman, Mark A. Collingwood, Vic.: CSIRO Pub. ISBN 0-643-09310-9. OCLC 65216357.
6. NEYMAN, J.; PEARSON, E. S. (1928). "On the Use and Interpretation of Certain Test Criteria for Purposes of Statistical Inference Part I". Biometrika. 20A (1–2): 175–240. doi:10.1093/biomet/20a.1-2.175. ISSN 0006-3444.
7. C.I.K.F. (July 1951). "Probability Theory for Statistical Methods. By F. N. David. [Pp. ix + 230. Cambridge University Press. 1949. Price 155.]". Journal of the Staple Inn Actuarial Society. 10 (3): 243–244. doi:10.1017/s0020269x00004564. ISSN 0020-269X.
8. Note that the subscript in the expression H0 is a zero (indicating null), and is not an "O" (indicating original).
9. Neyman, J.; Pearson, E. S. (30 October 1933). "The testing of statistical hypotheses in relation to probabilities a priori". Mathematical Proceedings of the Cambridge Philosophical Society. 29 (4): 492–510. Bibcode:1933PCPS...29..492N. doi:10.1017/s030500410001152x. ISSN 0305-0041. S2CID 119855116.
10. Fisher, R.A. (1966). The design of experiments. 8th edition. Hafner:Edinburgh.
Bibliography
• Betz, M.A. & Gabriel, K.R., "Type IV Errors and Analysis of Simple Effects", Journal of Educational Statistics, Vol.3, No.2, (Summer 1978), pp. 121–144.
• David, F.N., "A Power Function for Tests of Randomness in a Sequence of Alternatives", Biometrika, Vol.34, Nos.3/4, (December 1947), pp. 335–339.
• Fisher, R.A., The Design of Experiments, Oliver & Boyd (Edinburgh), 1935.
• Gambrill, W., "False Positives on Newborns' Disease Tests Worry Parents", Health Day, (5 June 2006). Archived 17 May 2018 at the Wayback Machine
• Kaiser, H.F., "Directional Statistical Decisions", Psychological Review, Vol.67, No.3, (May 1960), pp. 160–167.
• Kimball, A.W., "Errors of the Third Kind in Statistical Consulting", Journal of the American Statistical Association, Vol.52, No.278, (June 1957), pp. 133–142.
• Lubin, A., "The Interpretation of Significant Interaction", Educational and Psychological Measurement, Vol.21, No.4, (Winter 1961), pp. 807–817.
• Marascuilo, L.A. & Levin, J.R., "Appropriate Post Hoc Comparisons for Interaction and nested Hypotheses in Analysis of Variance Designs: The Elimination of Type-IV Errors", American Educational Research Journal, Vol.7., No.3, (May 1970), pp. 397–421.
• Mitroff, I.I. & Featheringham, T.R., "On Systemic Problem Solving and the Error of the Third Kind", Behavioral Science, Vol.19, No.6, (November 1974), pp. 383–393.
• Mosteller, F., "A k-Sample Slippage Test for an Extreme Population", The Annals of Mathematical Statistics, Vol.19, No.1, (March 1948), pp. 58–65.
• Moulton, R.T., "Network Security", Datamation, Vol.29, No.7, (July 1983), pp. 121–127.
• Raiffa, H., Decision Analysis: Introductory Lectures on Choices Under Uncertainty, Addison–Wesley, (Reading), 1968.
External links
• Bias and Confounding – presentation by Nigel Paneth, Graduate School of Public Health, University of Pittsburgh
Mixed tensor
In tensor analysis, a mixed tensor is a tensor which is neither strictly covariant nor strictly contravariant; at least one of the indices of a mixed tensor will be a subscript (covariant) and at least one of the indices will be a superscript (contravariant).
A mixed tensor of type or valence $ {\binom {M}{N}}$, also written "type (M, N)", with both M > 0 and N > 0, is a tensor which has M contravariant indices and N covariant indices. Such a tensor can be defined as a linear function which maps an (M + N)-tuple of M one-forms and N vectors to a scalar.
Changing the tensor type
Main article: Raising and lowering indices
Consider the following octet of related tensors:
$T_{\alpha \beta \gamma },\ T_{\alpha \beta }{}^{\gamma },\ T_{\alpha }{}^{\beta }{}_{\gamma },\ T_{\alpha }{}^{\beta \gamma },\ T^{\alpha }{}_{\beta \gamma },\ T^{\alpha }{}_{\beta }{}^{\gamma },\ T^{\alpha \beta }{}_{\gamma },\ T^{\alpha \beta \gamma }.$
The first one is covariant, the last one contravariant, and the remaining ones mixed. Notationally, these tensors differ from each other by the covariance/contravariance of their indices. A given contravariant index of a tensor can be lowered using the metric tensor gμν, and a given covariant index can be raised using the inverse metric tensor gμν. Thus, gμν could be called the index lowering operator and gμν the index raising operator.
Generally, the covariant metric tensor, contracted with a tensor of type (M, N), yields a tensor of type (M − 1, N + 1), whereas its contravariant inverse, contracted with a tensor of type (M, N), yields a tensor of type (M + 1, N − 1).
Examples
As an example, a mixed tensor of type (1, 2) can be obtained by raising an index of a covariant tensor of type (0, 3),
$T_{\alpha \beta }{}^{\lambda }=T_{\alpha \beta \gamma }\,g^{\gamma \lambda },$
where $T_{\alpha \beta }{}^{\lambda }$ is the same tensor as $T_{\alpha \beta }{}^{\gamma }$, because
$T_{\alpha \beta }{}^{\lambda }\,\delta _{\lambda }{}^{\gamma }=T_{\alpha \beta }{}^{\gamma },$
with Kronecker δ acting here like an identity matrix.
Likewise,
$T_{\alpha }{}^{\lambda }{}_{\gamma }=T_{\alpha \beta \gamma }\,g^{\beta \lambda },$
$T_{\alpha }{}^{\lambda \epsilon }=T_{\alpha \beta \gamma }\,g^{\beta \lambda }\,g^{\gamma \epsilon },$
$T^{\alpha \beta }{}_{\gamma }=g_{\gamma \lambda }\,T^{\alpha \beta \lambda },$
$T^{\alpha }{}_{\lambda \epsilon }=g_{\lambda \beta }\,g_{\epsilon \gamma }\,T^{\alpha \beta \gamma }.$
Raising an index of the metric tensor is equivalent to contracting it with its inverse, yielding the Kronecker delta,
$g^{\mu \lambda }\,g_{\lambda \nu }=g^{\mu }{}_{\nu }=\delta ^{\mu }{}_{\nu },$
so any mixed version of the metric tensor will be equal to the Kronecker delta, which will also be mixed.
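These index manipulations can be checked numerically. The sketch below uses NumPy's `einsum` with the Minkowski metric as an illustrative choice of metric (any invertible symmetric metric would do): it raises an index of a type-(0, 3) tensor, lowers it again to recover the original, and verifies that the mixed metric is the Kronecker delta.

```python
import numpy as np

# Minkowski metric g_{mu nu} = diag(-1, 1, 1, 1) and its inverse g^{mu nu}.
g = np.diag([-1.0, 1.0, 1.0, 1.0])
g_inv = np.linalg.inv(g)  # equals g itself for this particular metric

# A covariant type-(0, 3) tensor with random components T_{alpha beta gamma}.
rng = np.random.default_rng(0)
T = rng.standard_normal((4, 4, 4))

# Raise the last index: T_{alpha beta}^{lambda} = T_{alpha beta gamma} g^{gamma lambda}.
T_mixed = np.einsum('abg,gl->abl', T, g_inv)

# Lower it again with g_{lambda gamma}; this must recover the original tensor.
T_back = np.einsum('abl,lg->abg', T_mixed, g)
assert np.allclose(T_back, T)

# The mixed metric g^{mu}{}_{nu} = g^{mu lambda} g_{lambda nu} is the Kronecker delta.
delta = np.einsum('ml,ln->mn', g_inv, g)
assert np.allclose(delta, np.eye(4))
```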
See also
• Covariance and contravariance of vectors
• Einstein notation
• Ricci calculus
• Tensor (intrinsic definition)
• Two-point tensor
References
• D.C. Kay (1988). Tensor Calculus. Schaum’s Outlines, McGraw Hill (USA). ISBN 0-07-033484-6.
• Wheeler, J.A.; Misner, C.; Thorne, K.S. (1973). "§3.5 Working with Tensors". Gravitation. W.H. Freeman & Co. pp. 85–86. ISBN 0-7167-0344-0.
• R. Penrose (2007). The Road to Reality. Vintage books. ISBN 978-0-679-77631-4.
External links
• Index Gymnastics, Wolfram Alpha
Typed lambda calculus
A typed lambda calculus is a typed formalism that uses the lambda-symbol ($\lambda $) to denote anonymous function abstraction. In this context, types are usually objects of a syntactic nature that are assigned to lambda terms; the exact nature of a type depends on the calculus considered (see kinds below). From a certain point of view, typed lambda calculi can be seen as refinements of the untyped lambda calculus, but from another point of view, they can also be considered the more fundamental theory and untyped lambda calculus a special case with only one type.
Typed lambda calculi are foundational programming languages and are the base of typed functional programming languages such as ML and Haskell and, more indirectly, typed imperative programming languages. Typed lambda calculi play an important role in the design of type systems for programming languages; here, typability usually captures desirable properties of the program (e.g., the program will not cause a memory access violation).
Typed lambda calculi are closely related to mathematical logic and proof theory via the Curry–Howard isomorphism and they can be considered as the internal language of certain classes of categories. For example, the simply typed lambda calculus is the language of Cartesian closed categories (CCCs).[1]
Kinds of typed lambda calculi
Various typed lambda calculi have been studied. The simply typed lambda calculus has only one type constructor, the arrow $\to $, and its only types are basic types and function types $\sigma \to \tau $. System T extends the simply typed lambda calculus with a type of natural numbers and higher order primitive recursion; in this system all functions provably recursive in Peano arithmetic are definable. System F allows polymorphism by using universal quantification over all types; from a logical perspective it can describe all functions that are provably total in second-order logic. Lambda calculi with dependent types are the base of intuitionistic type theory, the calculus of constructions and the logical framework (LF), a pure lambda calculus with dependent types. Based on work by Berardi on pure type systems, Henk Barendregt proposed the Lambda cube to systematize the relations of pure typed lambda calculi (including simply typed lambda calculus, System F, LF and the calculus of constructions).
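To make the assignment of types concrete, here is a minimal type checker for the simply typed lambda calculus, a sketch in Python whose class and function names are purely illustrative. Abstractions carry an explicit argument type, so checking is a simple structural recursion over the term.

```python
from dataclasses import dataclass

# Types: a base type and arrow types sigma -> tau.
@dataclass(frozen=True)
class Base:
    name: str

@dataclass(frozen=True)
class Arrow:
    src: object
    dst: object

# Terms: variables, lambda-abstractions (with an annotated argument type),
# and applications.
@dataclass(frozen=True)
class Var:
    name: str

@dataclass(frozen=True)
class Lam:
    var: str
    ty: object
    body: object

@dataclass(frozen=True)
class App:
    fn: object
    arg: object

def typecheck(term, env=None):
    """Return the type of `term` in context `env`, or raise TypeError."""
    env = env or {}
    if isinstance(term, Var):
        return env[term.name]
    if isinstance(term, Lam):
        body_ty = typecheck(term.body, {**env, term.var: term.ty})
        return Arrow(term.ty, body_ty)
    if isinstance(term, App):
        fn_ty = typecheck(term.fn, env)
        arg_ty = typecheck(term.arg, env)
        if isinstance(fn_ty, Arrow) and fn_ty.src == arg_ty:
            return fn_ty.dst
        raise TypeError("ill-typed application")
    raise TypeError("unknown term")

# The identity on a base type o has type o -> o.
o = Base('o')
ident = Lam('x', o, Var('x'))
assert typecheck(ident) == Arrow(o, o)
```

Note that the untypable self-application `x x` is rejected: the checker raises `TypeError` because no simple type for `x` can be both an arrow and its own argument type.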
Some typed lambda calculi introduce a notion of subtyping, i.e. if $A$ is a subtype of $B$, then all terms of type $A$ also have type $B$. Typed lambda calculi with subtyping are the simply typed lambda calculus with conjunctive types and System F<:.
All the systems mentioned so far, with the exception of the untyped lambda calculus, are strongly normalizing: all computations terminate. Therefore, they cannot describe all Turing-computable functions.[2] As another consequence, they are consistent as a logic, i.e. there are uninhabited types. There exist, however, typed lambda calculi that are not strongly normalizing. For example, the dependently typed lambda calculus with a type of all types (Type : Type) is not normalizing due to Girard's paradox. This system is also the simplest pure type system, a formalism which generalizes the Lambda cube. Systems with explicit recursion combinators, such as Plotkin's "Programming language for Computable Functions" (PCF), are not normalizing, but they are not intended to be interpreted as a logic. Indeed, PCF is a prototypical typed functional programming language, where types are used to ensure that programs are well-behaved but not necessarily that they are terminating.
Applications to programming languages
In computer programming, the routines (functions, procedures, methods) of strongly typed programming languages closely correspond to typed lambda expressions.
See also
• Kappa calculus—an analogue of typed lambda calculus which excludes higher-order functions
Notes
1. Lambek, J.; Scott, P. J. (1986), Introduction to Higher Order Categorical Logic, Cambridge University Press, ISBN 978-0-521-35653-4, MR 0856915
2. since the halting problem for the latter class was proven to be undecidable
Further reading
• Barendregt, Henk (1992). "Lambda Calculi with Types". In Abramsky, S. (ed.). Background: Computational Structures. Handbook of Logic in Computer Science. Vol. 2. Oxford University Press. pp. 117–309. ISBN 9780198537618.
• Brandl, Helmut (2022). Calculus of Constructions / Typed Lambda Calculus
Types of mesh
A mesh is a representation of a larger geometric domain by smaller discrete cells. Meshes are commonly used to compute solutions of partial differential equations and render computer graphics, and to analyze geographical and cartographic data. A mesh partitions space into elements (or cells or zones) over which the equations can be solved, which then approximates the solution over the larger domain. Element boundaries may be constrained to lie on internal or external boundaries within a model. Higher-quality (better-shaped) elements have better numerical properties, where what constitutes a "better" element depends on the general governing equations and the particular solution to the model instance.
Common cell shapes
Two-dimensional
There are two types of two-dimensional cell shapes that are commonly used. These are the triangle and the quadrilateral.
Computationally poor elements will have sharp internal angles or short edges or both.
Triangle
This cell shape consists of 3 sides and is one of the simplest types of mesh. A triangular surface mesh is always quick and easy to create. It is most common in unstructured grids.
Quadrilateral
This cell shape is a basic 4 sided one as shown in the figure. It is most common in structured grids.
Quadrilateral elements are usually excluded from being or becoming concave.
Three-dimensional
The basic 3-dimensional elements are the tetrahedron, quadrilateral pyramid, triangular prism, and hexahedron. They all have triangular and quadrilateral faces.
Extruded 2-dimensional models may be represented entirely by the prisms and hexahedra as extruded triangles and quadrilaterals.
In general, quadrilateral faces in 3-dimensions may not be perfectly planar. A nonplanar quadrilateral face can be considered a thin tetrahedral volume that is shared by two neighboring elements.
Tetrahedron
A tetrahedron has 4 vertices, 6 edges, and is bounded by 4 triangular faces. In most cases a tetrahedral volume mesh can be generated automatically.
Pyramid
A quadrilaterally-based pyramid has 5 vertices and 8 edges, bounded by 4 triangular faces and 1 quadrilateral face. These are effectively used as transition elements between quadrilateral- and triangular-faced elements and others in hybrid meshes and grids.
Triangular prism
A triangular prism has 6 vertices and 9 edges, bounded by 2 triangular and 3 quadrilateral faces. The advantage of this element type is that it resolves the boundary layer efficiently.
Hexahedron
A hexahedron, a topological cube, has 8 vertices, 12 edges, bounded by 6 quadrilateral faces. It is also called a hex or a brick.[1] For the same cell amount, the accuracy of solutions in hexahedral meshes is the highest.
The pyramid and triangular prism zones can be considered computationally as degenerate hexahedrons, where some edges have been reduced to zero. Other degenerate forms of a hexahedron may also be represented.
Advanced Cells (Polyhedron)
A polyhedral (dual) element has any number of vertices, edges and faces. It usually requires more computing operations per cell due to the number of neighbours (typically 10),[2] though this is made up for by the accuracy of the calculation.
Classification of grids
Structured grids
Structured grids are identified by regular connectivity. The possible element choices are quadrilateral in 2D and hexahedra in 3D. This model is highly space efficient, since the neighbourhood relationships are defined by storage arrangement. Some other advantages of structured grid over unstructured are better convergence and higher resolution.[3][4][5]
Unstructured grids
An unstructured grid is identified by irregular connectivity. It cannot easily be expressed as a two-dimensional or three-dimensional array in computer memory. This allows for any possible element that a solver might be able to use. Compared to structured meshes, for which the neighborhood relationships are implicit, this model can be highly space inefficient since it calls for explicit storage of neighborhood relationships. Note, though, that the storage requirements of a structured grid and of an unstructured grid are within a constant factor. These grids typically employ triangles in 2D and tetrahedra in 3D.[6]
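The storage difference can be sketched as follows: on a structured grid, a cell's neighbours follow from index arithmetic alone and need no connectivity table, while an unstructured grid must store connectivity explicitly. The grid sizes and cell numbering below are hypothetical.

```python
# Structured grid: neighbours are implicit in the (i, j) index arithmetic.
NI, NJ = 4, 3  # a 4 x 3 grid of quadrilateral cells

def structured_neighbours(i, j):
    """Edge neighbours of cell (i, j); no connectivity table is stored."""
    cand = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return [(a, b) for a, b in cand if 0 <= a < NI and 0 <= b < NJ]

# Unstructured grid: connectivity must be stored explicitly, e.g. as a
# cell -> list-of-neighbour-cells dictionary built from shared edges.
triangles = {0: [1], 1: [0, 2], 2: [1]}  # three triangles in a strip

def unstructured_neighbours(cell):
    return triangles[cell]

assert structured_neighbours(0, 0) == [(1, 0), (0, 1)]
assert unstructured_neighbours(1) == [0, 2]
```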
Hybrid grids
A hybrid grid contains a mixture of structured portions and unstructured portions. It integrates the structured meshes and the unstructured meshes in an efficient manner. Those parts of the geometry that are regular can have structured grids and those that are complex can have unstructured grids. These grids can be non-conformal which means that grid lines don’t need to match at block boundaries.[7]
Mesh quality
A mesh is considered to have higher quality if a more accurate solution is calculated more quickly. Accuracy and speed are in tension. Decreasing the mesh size always increases the accuracy but also increases computational cost.
Accuracy depends on both discretization error and solution error. For discretization error, a given mesh is a discrete approximation of the space, and so can only provide an approximate solution, even when equations are solved exactly. (In computer graphics ray tracing, the number of rays fired is another source of discretization error.) For solution error, for PDEs many iterations over the entire mesh are required. The calculation is terminated early, before the equations are solved exactly. The choice of mesh element type affects both discretization and solution error.
Accuracy depends on both the total number of elements, and the shape of individual elements. The speed of each iteration grows (linearly) with the number of elements, and the number of iterations needed depends on the local solution value and gradient compared to the shape and size of local elements.
Solution precision
A coarse mesh may provide an accurate solution if the solution is a constant, so the precision depends on the particular problem instance. One can selectively refine the mesh in areas where the solution gradients are high, thus increasing fidelity there. Accuracy, including interpolated values within an element, depends on the element type and shape.
Rate of convergence
Each iteration reduces the error between the calculated and true solution. A faster rate of convergence means smaller error with fewer iterations.
A mesh of inferior quality may leave out important features such as the boundary layer for fluid flow. The discretization error will be large and the rate of convergence will be impaired; the solution may not converge at all.
Grid independence
A solution is considered grid-independent if the discretization and solution error are small enough given sufficient iterations. This is essential to know for comparative results. A mesh convergence study consists of refining elements and comparing the refined solutions to the coarse solutions. If further refinement (or other changes) does not significantly change the solution, the mesh is an "Independent Grid."
Deciding the type of mesh
If the accuracy is of the highest concern then hexahedral mesh is the most preferable one. The density of the mesh is required to be sufficiently high in order to capture all the flow features but on the same note, it should not be so high that it captures unnecessary details of the flow, thus burdening the CPU and wasting more time. Whenever a wall is present, the mesh adjacent to the wall is fine enough to resolve the boundary layer flow and generally quad, hex and prism cells are preferred over triangles, tetrahedrons and pyramids. Quad and Hex cells can be stretched where the flow is fully developed and one-dimensional.
Based on the skewness, smoothness, and aspect ratio, the suitability of the mesh can be decided.[8]
Skewness
The skewness of a grid is an apt indicator of the mesh quality and suitability. Large skewness compromises the accuracy of the interpolated regions. There are three methods of determining the skewness of a grid.
Based on equilateral volume
This method applies only to triangles and tetrahedra and is the default method.
${\text{ Skewness }}={\frac {\text{ optimal cell size - cell size }}{\text{optimal cell size}}}$
Based on the deviation from normalized equilateral angle
This method applies to all cell and face shapes and is almost always used for prisms and pyramids
${\text{ Skewness ( for a quad ) }}=\max {\left[{\frac {\theta _{\text{max}}-90}{90}},{\frac {90-\theta _{\text{min}}}{90}}\right]}$
Equiangular skew
Another common measure of quality is based on equiangular skew.
${\text{ Equiangle Skew }}=\max {\left[{\frac {\theta _{\text{max}}-\theta _{e}}{180-\theta _{e}}},{\frac {\theta _{e}-\theta _{\text{min}}}{\theta _{e}}}\right]}$
where:
• $\theta _{\text{max}}\,$ is the largest angle in a face or cell,
• $\theta _{\text{min}}\,$ is the smallest angle in a face or cell,
• $\theta _{e}\,$ is the angle for equi-angular face or cell i.e. 60 for a triangle and 90 for a square.
A skewness of 0 is the best possible and a skewness of 1 is almost never acceptable. For hex and quad cells, skewness should not exceed 0.85 to obtain a fairly accurate solution.
For triangular cells, skewness should not exceed 0.85 and for quadrilateral cells, skewness should not exceed 0.9.
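The equiangle-skew formula above can be computed directly from a cell's interior angles; a minimal sketch:

```python
def equiangle_skew(angles_deg, theta_e):
    """Equiangle skew from a cell's interior angles (in degrees).

    theta_e is the equiangular reference: 60 for triangles, 90 for quads.
    0 is a perfect cell; values approaching 1 indicate a degenerate cell.
    """
    t_max, t_min = max(angles_deg), min(angles_deg)
    return max((t_max - theta_e) / (180 - theta_e), (theta_e - t_min) / theta_e)

# An equilateral triangle is perfect; a sliver triangle is badly skewed.
assert equiangle_skew([60, 60, 60], 60) == 0.0
sliver = equiangle_skew([170, 5, 5], 60)
assert sliver > 0.85  # would be rejected by the rule of thumb above

# A rectangle is a perfect quadrilateral.
assert equiangle_skew([90, 90, 90, 90], 90) == 0.0
```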
Smoothness
The change in size should also be smooth. There should not be sudden jumps in the size of the cell because this may cause erroneous results at nearby nodes.
Aspect ratio
It is the ratio of longest to the shortest side in a cell. Ideally it should be equal to 1 to ensure best results. For multidimensional flow, it should be near to one. Also local variations in cell size should be minimal, i.e. adjacent cell sizes should not vary by more than 20%. Having a large aspect ratio can result in an interpolation error of unacceptable magnitude.
Mesh generation and improvement
See also mesh generation and principles of grid generation. In two dimensions, flipping and smoothing are powerful tools for adapting a poor mesh into a good mesh. Flipping involves combining two triangles to form a quadrilateral, then splitting the quadrilateral in the other direction to produce two new triangles. Flipping is used to improve quality measures of a triangle such as skewness. Mesh smoothing enhances element shapes and overall mesh quality by adjusting the location of mesh vertices. In mesh smoothing, core features such as non-zero pattern of the linear system are preserved as the topology of the mesh remains invariant. Laplacian smoothing is the most commonly used smoothing technique.
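A minimal sketch of Laplacian smoothing, assuming a hypothetical mesh given as vertex coordinates plus a neighbour table; boundary vertices are held fixed, so the boundary and the topology (connectivity) are preserved while interior vertices relax toward the average of their neighbours.

```python
def laplacian_smooth(coords, neighbours, fixed, iterations=10):
    """Move each free vertex to the mean of its neighbours' positions."""
    coords = {v: p for v, p in coords.items()}
    for _ in range(iterations):
        new = {}
        for v, p in coords.items():
            if v in fixed:
                new[v] = p  # boundary vertices stay put
            else:
                nbrs = [coords[u] for u in neighbours[v]]
                new[v] = (sum(x for x, _ in nbrs) / len(nbrs),
                          sum(y for _, y in nbrs) / len(nbrs))
        coords = new
    return coords

# A badly placed interior vertex of a unit square relaxes to the centroid,
# improving the shape of the four triangles around it.
coords = {'a': (0, 0), 'b': (1, 0), 'c': (1, 1), 'd': (0, 1), 'm': (0.9, 0.1)}
nbrs = {'m': ['a', 'b', 'c', 'd']}
out = laplacian_smooth(coords, nbrs, fixed={'a', 'b', 'c', 'd'})
assert out['m'] == (0.5, 0.5)
```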
See also
• Mesh generation – Subdivision of space into cells
• Unstructured grid – Unstructured (or irregular) grid is a tessellation of a part of the Euclidean plane
• Regular grid – Tessellation of n-dimensional Euclidean space by congruent parallelotopes
• Stretched grid method – Numerical technique
References
1. "Hexahedron elements" (PDF). Archived from the original (PDF) on 2015-02-24. Retrieved 2015-04-13.
2. "Archived copy" (PDF). Archived from the original (PDF) on 2013-12-06. Retrieved 2018-01-10.{{cite web}}: CS1 maint: archived copy as title (link)
3. "Quality and Control - Two Reasons Why Structured Grids Aren't Going Away".
4. Castillo, J.E. (1991), "Mathematical aspects of grid Generation", Society for Industrial and Applied Mathematics, Philadelphia
5. George, P.L. (1991), Automatic Mesh Generation
6. Mavriplis, D.J. (1996), "Mesh Generation and adaptivity for complex geometries and flows", Handbook of Computational Fluid Mechanics
7. Bern, Marshall; Plassmann, Paul (2000), "Mesh Generation", Handbook of Computational Geometry. Elsevier Science
8. "Meshing,Lecture 7". Andre Bakker. Retrieved 2012-11-10.
Typical set
In information theory, the typical set is a set of sequences whose probability is close to two raised to the negative power of the entropy of their source distribution. That this set has total probability close to one is a consequence of the asymptotic equipartition property (AEP) which is a kind of law of large numbers. The notion of typicality is only concerned with the probability of a sequence and not the actual sequence itself.
This has great use in compression theory as it provides a theoretical means for compressing data, allowing us to represent any sequence Xn using nH(X) bits on average, and, hence, justifying the use of entropy as a measure of information from a source.
The AEP can also be proven for a large class of stationary ergodic processes, allowing typical set to be defined in more general cases.
(Weakly) typical sequences (weak typicality, entropy typicality)
If a sequence x1, ..., xn is drawn from an i.i.d. distribution X defined over a finite alphabet ${\mathcal {X}}$, then the typical set, $A_{\varepsilon }^{(n)}\subseteq {\mathcal {X}}^{n}$, is defined as those sequences which satisfy:
$2^{-n(H(X)+\varepsilon )}\leqslant p(x_{1},x_{2},\dots ,x_{n})\leqslant 2^{-n(H(X)-\varepsilon )}$
where
$H(X)=-\sum _{x\in {\mathcal {X}}}p(x)\log _{2}p(x)$
is the information entropy of X. The probability above need only be within a factor of $2^{n\varepsilon }$. Taking the logarithm on all sides and dividing by -n, this definition can be equivalently stated as
$H(X)-\varepsilon \leq -{\frac {1}{n}}\log _{2}p(x_{1},x_{2},\ldots ,x_{n})\leq H(X)+\varepsilon .$
For an i.i.d. sequence, since
$p(x_{1},x_{2},\ldots ,x_{n})=\prod _{i=1}^{n}p(x_{i}),$
we further have
$H(X)-\varepsilon \leq -{\frac {1}{n}}\sum _{i=1}^{n}\log _{2}p(x_{i})\leq H(X)+\varepsilon .$
By the law of large numbers, for sufficiently large n
$-{\frac {1}{n}}\sum _{i=1}^{n}\log _{2}p(x_{i})\rightarrow H(X).$
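This convergence can be observed numerically. The sketch below samples an i.i.d. Bernoulli source with p(1) = 0.9 and checks that the per-symbol log-probability concentrates around H(X) ≈ 0.469; the seed and tolerance are arbitrary illustrative choices.

```python
import math
import random

p = 0.9
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))  # about 0.469 bits

def sample_log_prob(n, rng):
    """Draw x_1..x_n i.i.d. and return -1/n log2 of the sequence probability."""
    xs = [1 if rng.random() < p else 0 for _ in range(n)]
    return -sum(math.log2(p if x == 1 else 1 - p) for x in xs) / n

# By the law of large numbers, the empirical value is close to H for large n.
assert abs(sample_log_prob(100_000, random.Random(0)) - H) < 0.02
```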
Properties
An essential characteristic of the typical set is that, if one draws a large number n of independent random samples from the distribution X, the resulting sequence (x1, x2, ..., xn) is very likely to be a member of the typical set, even though the typical set comprises only a small fraction of all the possible sequences. Formally, given any $\varepsilon >0$, one can choose n such that:
1. The probability of a sequence drawn from $X$ lying in $A_{\varepsilon }^{(n)}$ is greater than 1 − ε, i.e. $\Pr[x^{(n)}\in A_{\epsilon }^{(n)}]\geq 1-\varepsilon $
2. $\left|{A_{\varepsilon }}^{(n)}\right|\leqslant 2^{n(H(X)+\varepsilon )}$
3. $\left|{A_{\varepsilon }}^{(n)}\right|\geqslant (1-\varepsilon )2^{n(H(X)-\varepsilon )}$
4. If the distribution over ${\mathcal {X}}$ is not uniform, then the fraction of sequences that are typical is
${\frac {|A_{\epsilon }^{(n)}|}{|{\mathcal {X}}^{(n)}|}}\equiv {\frac {2^{nH(X)}}{2^{n\log _{2}|{\mathcal {X}}|}}}=2^{-n(\log _{2}|{\mathcal {X}}|-H(X))}\rightarrow 0$
as n becomes very large, since $H(X)<\log _{2}|{\mathcal {X}}|,$ where $|{\mathcal {X}}|$ is the cardinality of ${\mathcal {X}}$.
For a general stochastic process {X(t)} with AEP, the (weakly) typical set can be defined similarly with p(x1, x2, ..., xn) replaced by $p(x_{0}^{\tau })$ (i.e. the probability of the sample limited to the time interval [0, τ]), n being the degree of freedom of the process in the time interval and H(X) being the entropy rate. If the process is continuous-valued, differential entropy is used instead.
Example
Counter-intuitively, the most likely sequence is often not a member of the typical set. For example, suppose that X is an i.i.d. Bernoulli random variable with p(0)=0.1 and p(1)=0.9. In n independent trials, since p(1)>p(0), the most likely sequence of outcome is the sequence of all 1's, (1,1,...,1). Here the entropy of X is H(X)=0.469, while
$-{\frac {1}{n}}\log _{2}p\left(x^{(n)}=(1,1,\ldots ,1)\right)=-{\frac {1}{n}}\log _{2}(0.9^{n})=0.152$
So this sequence is not in the typical set because its average logarithmic probability cannot come arbitrarily close to the entropy of the random variable X no matter how large we take the value of n.
For Bernoulli random variables, the typical set consists of sequences with average numbers of 0s and 1s in n independent trials. This is easily demonstrated: If p(1) = p and p(0) = 1-p, then for n trials with m 1's, we have
$-{\frac {1}{n}}\log _{2}p(x^{(n)})=-{\frac {1}{n}}\log _{2}p^{m}(1-p)^{n-m}=-{\frac {m}{n}}\log _{2}p-\left({\frac {n-m}{n}}\right)\log _{2}(1-p).$
The average number of 1's in a sequence of Bernoulli trials is m = np. Thus, we have
$-{\frac {1}{n}}\log _{2}p(x^{(n)})=-p\log _{2}p-(1-p)\log _{2}(1-p)=H(X).$
For this example, if n=10, then the typical set consists of all sequences that have a single 0 in the entire sequence. If p(0)=p(1)=0.5, then every possible binary sequence belongs to the typical set.
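The numbers in this example can be verified directly: the all-ones sequence has a per-symbol log-probability of −log2(0.9) ≈ 0.152 for every n, far from H(X) ≈ 0.469, while a sequence with the average composition (m = np ones) hits H exactly.

```python
import math

p = 0.9
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

# The single most likely sequence (all ones) is not typical: its rate never
# approaches H, no matter how large n is taken.
all_ones_rate = -math.log2(p)      # independent of n
assert abs(all_ones_rate - 0.152) < 0.001
assert abs(H - 0.469) < 0.001
assert abs(all_ones_rate - H) > 0.3

# A sequence with the average composition (here 90 ones out of n = 100) has
# per-symbol log-probability exactly H, so it is typical.
n, m = 100, 90
rate = -(m * math.log2(p) + (n - m) * math.log2(1 - p)) / n
assert abs(rate - H) < 1e-9
```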
Strongly typical sequences (strong typicality, letter typicality)
If a sequence x1, ..., xn is drawn from some specified joint distribution defined over a finite or an infinite alphabet ${\mathcal {X}}$, then the strongly typical set, $A_{\varepsilon ,{\text{strong}}}^{(n)}\subseteq {\mathcal {X}}^{n}$, is defined as the set of sequences which satisfy
$\left|{\frac {N(x_{i})}{n}}-p(x_{i})\right|<{\frac {\varepsilon }{\|{\mathcal {X}}\|}}.$
where ${N(x_{i})}$ is the number of occurrences of a specific symbol in the sequence.
It can be shown that strongly typical sequences are also weakly typical (with a different constant ε), and hence the name. The two forms, however, are not equivalent. Strong typicality is often easier to work with in proving theorems for memoryless channels. However, as is apparent from the definition, this form of typicality is only defined for random variables having finite support.
Jointly typical sequences
Two sequences $x^{n}$ and $y^{n}$ are jointly ε-typical if the pair $(x^{n},y^{n})$ is ε-typical with respect to the joint distribution $p(x^{n},y^{n})=\prod _{i=1}^{n}p(x_{i},y_{i})$ and both $x^{n}$ and $y^{n}$ are ε-typical with respect to their marginal distributions $p(x^{n})$ and $p(y^{n})$. The set of all such pairs of sequences $(x^{n},y^{n})$ is denoted by $A_{\varepsilon }^{n}(X,Y)$. Jointly ε-typical n-tuple sequences are defined similarly.
Let ${\tilde {X}}^{n}$ and ${\tilde {Y}}^{n}$ be two independent sequences of random variables with the same marginal distributions $p(x^{n})$ and $p(y^{n})$. Then for any ε>0, for sufficiently large n, jointly typical sequences satisfy the following properties:
1. $P\left[(X^{n},Y^{n})\in A_{\varepsilon }^{n}(X,Y)\right]\geqslant 1-\epsilon $
2. $\left|A_{\varepsilon }^{n}(X,Y)\right|\leqslant 2^{n(H(X,Y)+\epsilon )}$
3. $\left|A_{\varepsilon }^{n}(X,Y)\right|\geqslant (1-\epsilon )2^{n(H(X,Y)-\epsilon )}$
4. $P\left[({\tilde {X}}^{n},{\tilde {Y}}^{n})\in A_{\varepsilon }^{n}(X,Y)\right]\leqslant 2^{-n(I(X;Y)-3\epsilon )}$
5. $P\left[({\tilde {X}}^{n},{\tilde {Y}}^{n})\in A_{\varepsilon }^{n}(X,Y)\right]\geqslant (1-\epsilon )2^{-n(I(X;Y)+3\epsilon )}$
Applications of typicality
Typical set encoding
In information theory, typical set encoding encodes only the sequences in the typical set of a stochastic source with fixed length block codes. Since the size of the typical set is about 2nH(X), only nH(X) bits are required for the coding, while at the same time ensuring that the chances of encoding error is limited to ε. Asymptotically, it is, by the AEP, lossless and achieves the minimum rate equal to the entropy rate of the source.
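A rough numerical sketch of this size/probability trade-off for a Bernoulli source with p(1) = 0.9 and n = 1000 (the choice of δ, and hence the effective ε, is an arbitrary illustration): sequences with a near-average fraction of ones carry almost all the probability, yet number only about 2^{n(H + ε)}, a vanishing fraction of the 2^1000 sequences overall, so encoding them needs roughly n(H + ε) bits instead of n.

```python
import math

p, n, delta = 0.9, 1000, 0.03
H = -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

prob = 0.0
log2_counts = []
for m in range(n + 1):  # m = number of ones in the sequence
    if abs(m / n - p) <= delta:
        # log C(n, m) via lgamma to avoid huge integers
        log_c = math.lgamma(n + 1) - math.lgamma(m + 1) - math.lgamma(n - m + 1)
        log2_counts.append(log_c / math.log(2))
        prob += math.exp(log_c + m * math.log(p) + (n - m) * math.log(1 - p))

log2_count = max(log2_counts)  # log2 size of the largest composition class
assert prob > 0.99                  # a drawn sequence almost surely lands here
assert log2_count < n * (H + 0.1)   # bounded by 2^{n(H + eps)}
assert log2_count < 0.6 * n         # far fewer than all 2^1000 sequences
```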
Typical set decoding
In information theory, typical set decoding is used in conjunction with random coding to estimate the transmitted message as the one with a codeword that is jointly ε-typical with the observation. i.e.
${\hat {w}}=w\iff (\exists w)((x_{1}^{n}(w),y_{1}^{n})\in A_{\varepsilon }^{n}(X,Y))$
where ${\hat {w}},x_{1}^{n}(w),y_{1}^{n}$ are the message estimate, codeword of message $w$ and the observation respectively. $A_{\varepsilon }^{n}(X,Y)$ is defined with respect to the joint distribution $p(x_{1}^{n})p(y_{1}^{n}|x_{1}^{n})$ where $p(y_{1}^{n}|x_{1}^{n})$ is the transition probability that characterizes the channel statistics, and $p(x_{1}^{n})$ is some input distribution used to generate the codewords in the random codebook.
Universal channel code
See also: algorithmic complexity theory
See also
• Asymptotic equipartition property
• Source coding theorem
• Noisy-channel coding theorem
References
• C. E. Shannon, "A Mathematical Theory of Communication", Bell System Technical Journal, vol. 27, pp. 379–423, 623-656, July, October, 1948
• Cover, Thomas M. (2006). "Chapter 3: Asymptotic Equipartition Property, Chapter 5: Data Compression, Chapter 8: Channel Capacity". Elements of Information Theory. John Wiley & Sons. ISBN 0-471-24195-4.
• David J. C. MacKay. Information Theory, Inference, and Learning Algorithms Cambridge: Cambridge University Press, 2003. ISBN 0-521-64298-1
Typographical Number Theory
Typographical Number Theory (TNT) is a formal axiomatic system describing the natural numbers that appears in Douglas Hofstadter's book Gödel, Escher, Bach. It is an implementation of Peano arithmetic that Hofstadter uses to help explain Gödel's incompleteness theorems.
Like any system implementing the Peano axioms, TNT is capable of referring to itself (it is self-referential).
Numerals
TNT does not use a distinct symbol for each natural number. Instead it makes use of a simple, uniform way of giving a compound symbol to each natural number:
zero 0
one S0
two SS0
three SSS0
four SSSS0
five SSSSS0
The symbol S can be interpreted as "the successor of", or "the number after". Since this is, however, a number theory, such interpretations are useful, but not strict. It cannot be said that because four is the successor of three that four is SSSS0, but rather that since three is the successor of two, which is the successor of one, which is the successor of zero, which has been described as 0, four can be "proved" to be SSSS0. TNT is designed such that everything must be proven before it can be said to be true.
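A small sketch of this numeral scheme (the function names are illustrative, not from the book): a natural number n is written as n copies of S followed by 0, and a well-formed numeral is read back by counting its S's.

```python
def to_numeral(n):
    """Natural number n as a TNT numeral: 0, S0, SS0, ..."""
    return 'S' * n + '0'

def from_numeral(s):
    """Inverse: count the S's in a well-formed numeral."""
    if not (s.endswith('0') and set(s[:-1]) <= {'S'}):
        raise ValueError('not a TNT numeral')
    return len(s) - 1

assert to_numeral(4) == 'SSSS0'
assert from_numeral('SSSSS0') == 5
assert from_numeral(to_numeral(12)) == 12
```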
Variables
In order to refer to unspecified terms, TNT makes use of five variables. These are
a, b, c, d, e.
More variables can be constructed by adding the prime symbol after them; for example,
a′, b′, c′, a″, a‴ are all variables.
In the more rigid version of TNT, known as "austere" TNT, only
a′, a″, a‴ etc. are used.
Operators
Addition and multiplication of numerals
In Typographical Number Theory, the usual symbols of "+" for additions, and "·" for multiplications are used. Thus to write "b plus c" is to write
(b + c)
and "a times d" is written as
(a·d)
The parentheses are required. Any laxness would violate TNT's formation system (although it is trivially proved this formalism is unnecessary for operations which are both commutative and associative). Also only two terms can be operated on at once. Therefore, to write "a plus b plus c" is to write either
((a + b) + c)
or
(a + (b + c))
Equivalency
The "Equals" operator is used to denote equivalence. It is defined by the symbol "=", and takes roughly the same meaning as it usually does in mathematics. For instance,
(SSS0 + SSS0) = SSSSSS0
is a theorem statement in TNT, with the interpretation "3 plus 3 equals 6".
Negation
In Typographical Number Theory, negation, i.e. the turning of a statement to its opposite, is denoted by the "~" or negation operator. For instance,
~((SSS0 + SSS0) = SSSSSSS0)
is a theorem in TNT, interpreted as "3 plus 3 is not equal to 7".
By negation, this means negation in Boolean logic (logical negation), rather than simply being the opposite. For example, if I were to say "I am eating a grapefruit", the opposite is "I am not eating a grapefruit", rather than "I am eating something other than a grapefruit". Similarly "The Television is on" is negated to "The Television is not on", rather than "The Television is off", because, for example, it might be broken. This is a subtle difference, but an important one.
Compounds
If x and y are well-formed formulas, and provided that no variable which is free in one is quantified in the other, then the following are all well-formed formulas
<x∧y>, <x∨y>, <x⊃y>
Examples:
• <0=0∧~0=0>
• <b=b∨~∃c:c=b>
• <S0=0⊃∀c:~∃b:(b+b)=c>
The quantification status of a variable doesn't change here.
Quantifiers
There are two quantifiers used: ∀ and ∃.
Note that unlike most other logical systems where quantifiers over sets require a mention of the element's existence in the set, this is not required in TNT because all numbers and terms are strictly natural numbers or logical boolean statements. It is therefore equivalent to say ∀a:(a∈N):∀b:(b∈N):(a+b)=(b+a) and ∀a:∀b:(a+b)=(b+a)
• ∃ means "There exists"
• ∀ means "For every" or "For all"
• The symbol : is used to separate a quantifier from other quantifiers or from the rest of the formula. It is commonly read "such that"
For example:
∀a:∀b:(a+b)=(b+a)
("For every number a and every number b, a plus b equals b plus a", or more figuratively, "Addition is commutative.")
~∃c:Sc=0
("There does not exist a number c such that c plus one equals zero", or more figuratively, "Zero is not the successor of any (natural) number.")
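The interpretations of these two statements can be spot-checked by brute force over a finite prefix of the natural numbers (exhaustive checking is impossible; within TNT such statements are instead derived from axioms):

```python
# ∀a:∀b:(a+b)=(b+a) — commutativity holds on every pair we test.
assert all(a + b == b + a for a in range(50) for b in range(50))

# ~∃c:Sc=0 — no natural number has successor zero.
assert not any(c + 1 == 0 for c in range(50))
```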
Atoms and propositional statements
All the symbols of propositional calculus apart from the Atom symbols are used in Typographical Number Theory, and they retain their interpretations.
Atoms are here defined as strings which amount to statements of equality, such as
2 plus 3 equals five:
(SS0 + SSS0) = SSSSS0
2 plus 2 is equal to 4:
(SS0 + SS0) = SSSS0
References
• Hofstadter, Douglas R. (1999) [1979], Gödel, Escher, Bach: An Eternal Golden Braid, Basic Books, ISBN 0-465-02656-7.
| Wikipedia |
Grothendieck's Tôhoku paper
The article "Sur quelques points d'algèbre homologique" by Alexander Grothendieck,[1] now often referred to as the Tôhoku paper,[2] was published in 1957 in the Tôhoku Mathematical Journal. It revolutionized the subject of homological algebra, a purely algebraic aspect of algebraic topology.[3] It removed the need to distinguish the cases of modules over a ring and sheaves of abelian groups over a topological space.[4]
Background
Material in the paper dates from Grothendieck's year at the University of Kansas in 1955–6. Research there allowed him to put homological algebra on an axiomatic basis, by introducing the abelian category concept.[5][6]
A textbook treatment of homological algebra, "Cartan–Eilenberg" after the authors Henri Cartan and Samuel Eilenberg, appeared in 1956. Grothendieck's work was largely independent of it. His abelian category concept had at least partially been anticipated by others.[7] David Buchsbaum in his doctoral thesis written under Eilenberg had introduced a notion of "exact category" close to the abelian category concept (needing only direct sums to be identical); and had formulated the idea of "enough injectives".[8] The Tôhoku paper contains an argument to prove that a Grothendieck category (a particular type of abelian category, the name coming later) has enough injectives; the author indicated that the proof was of a standard type.[9] In showing by this means that categories of sheaves of abelian groups admitted injective resolutions, Grothendieck went beyond the theory available in Cartan–Eilenberg, to prove the existence of a cohomology theory in generality.[10]
Later developments
After the Gabriel–Popescu theorem of 1964, it was known that every Grothendieck category is a quotient category of a module category.[11]
The Tôhoku paper also introduced the Grothendieck spectral sequence associated to the composition of derived functors.[12] In further reconsideration of the foundations of homological algebra, Grothendieck introduced and developed with Jean-Louis Verdier the derived category concept.[13] The initial motivation, as announced by Grothendieck at the 1958 International Congress of Mathematicians, was to formulate results on coherent duality, now going under the name "Grothendieck duality".[14]
Notes
1. Grothendieck, A. (1957), "Sur quelques points d'algèbre homologique", Tôhoku Mathematical Journal, (2), 9 (2): 119–221, doi:10.2748/tmj/1178244839, MR 0102537. English translation.
2. Schlager, Neil; Lauer, Josh (2000), Science and Its Times: 1950-present. Volume 7 of Science and Its Times: Understanding the Social Significance of Scientific Discovery, Gale Group, p. 251, ISBN 9780787639396.
3. Sooyoung Chang (2011). Academic Genealogy of Mathematicians. World Scientific. p. 115. ISBN 978-981-4282-29-1.
4. Jean-Paul Pier (1 January 2000). Development of Mathematics 1950-2000. Springer Science & Business Media. p. 715. ISBN 978-3-7643-6280-5.
5. Pierre Cartier; Luc Illusie; Nicholas M. Katz; Gérard Laumon; Yuri I. Manin (22 December 2006). The Grothendieck Festschrift, Volume I: A Collection of Articles Written in Honor of the 60th Birthday of Alexander Grothendieck. Springer Science & Business Media. p. vii. ISBN 978-0-8176-4566-3.
6. Piotr Pragacz (6 April 2005). Topics in Cohomological Studies of Algebraic Varieties: Impanga Lecture Notes. Springer Science & Business Media. p. xiv–xv. ISBN 978-3-7643-7214-9.
7. "Tohoku in nLab". Retrieved 2 December 2014.
8. I.M. James (24 August 1999). History of Topology. Elsevier. p. 815. ISBN 978-0-08-053407-7.
9. Amnon Neeman (January 2001). Triangulated Categories. Princeton University Press. p. 19. ISBN 0-691-08686-9.
10. Giandomenico Sica (1 January 2006). What is Category Theory?. Polimetrica s.a.s. pp. 236–7. ISBN 978-88-7699-031-1.
11. "Grothendieck category - Encyclopedia of Mathematics". Retrieved 2 December 2014.
12. Charles A. Weibel (27 October 1995). An Introduction to Homological Algebra. Cambridge University Press. p. 150. ISBN 978-0-521-55987-4.
13. Ravi Vakil (2005). Snowbird Lectures in Algebraic Geometry: Proceedings of an AMS-IMS-SIAM Joint Summer Research Conference on Algebraic Geometry : Presentations by Young Researchers, July 4-8, 2004. American Mathematical Soc. pp. 44–5. ISBN 978-0-8218-5720-5.
14. Amnon Neeman, "Derived Categories and Grothendieck Duality", at p. 7
External links
• Grothendieck, A. (1957), "Sur quelques points d'algèbre homologique", Tôhoku Mathematical Journal, (2), 9: 119–221. English translation.
• Grothendieck's Tohoku Paper and Combinatorial Topology
Tübingen triangle
The Tübingen triangle is, apart from the Penrose rhomb tilings and their variations, a classical candidate to model 5-fold (respectively 10-fold) quasicrystals. The inflation factor is – as in the Penrose case – the golden mean, $\varphi ={\frac {a}{b}}={\frac {1+{\sqrt {5}}}{2}}\approx 1.618.$
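A quick numeric check of the stated inflation factor (the check is ours, purely illustrative):

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden mean, the tiling's inflation factor
# phi satisfies phi^2 = phi + 1, the self-similarity relation behind
# substitution tilings whose long/short edge ratio is a/b = phi.
assert abs(phi - 1.6180339887498949) < 1e-12
assert abs(phi ** 2 - (phi + 1)) < 1e-12
```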
The prototiles are Robinson triangles, but the relationship is different: The Penrose rhomb tilings are locally derivable from the Tübingen triangle tilings.
These tilings were discovered and studied thoroughly by a group in Tübingen, Germany, thus the name.[1]
Since the prototiles are mirror symmetric, but their substitutions are not, left-handed and right-handed tiles need to be distinguished. This is indicated by the colours in the substitution rule and the patches of the relevant figures.[2]
See also
• Mathematics and art
References
1. Baake, M.; Kramer, P.; Schlottmann, M.; Zeidler, D. (1990). "Planar patterns with fivefold symmetry as sections of periodic structures in 4-space". Internat. J. Modern Phys. B. 4 (15–16): 2217–2268. MR 92b:52041.
2. E. Harriss (drawings of 2005-12-01) and D. Frettlöh (text of 2006-02-27): Tuebingen Triangle. Archived 2015-04-02 at the Wayback Machine. Retrieved 2015-03-06.
Daniel Tătaru
Daniel Ioan Tătaru (born 6 May 1967, Piatra Neamţ, Romania) is a Romanian mathematician at University of California, Berkeley.
He earned his doctorate from the University of Virginia in 1992, under supervision of Irena Lasiecka.[1]
He won the 2002 Bôcher Memorial Prize for his research on partial differential equations. In 2012 he became a fellow of the American Mathematical Society.[2] In 2013 he was selected as a Simons Investigator[3] in mathematics.
References
1. Daniel Tătaru at the Mathematics Genealogy Project
2. List of Fellows of the American Mathematical Society, retrieved 2013-08-25.
3. Simons Investigator, www.simonsfoundation.org
External links
• Website at UC Berkeley
• Daniel Tătaru publications indexed by Google Scholar
• Daniel Tătaru's results at International Mathematical Olympiad
Authority control
International
• ISNI
• VIAF
National
• Germany
• Israel
• United States
Academics
• Google Scholar
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
Other
• IdRef
u-chart
In statistical quality control, the u-chart is a type of control chart used to monitor "count"-type data where the sample size is greater than one, typically the average number of nonconformities per unit.
u-chart
• Originally proposed by: Walter A. Shewhart
Process observations
• Rational subgroup size: n > 1
• Measurement type: Number of nonconformances per unit
• Quality characteristic type: Attributes data
• Underlying distribution: Poisson distribution
Performance
• Size of shift to detect: ≥ 1.5σ
Process variation chart
• Not applicable
Process mean chart
• Center line: ${\bar {u}}={\frac {\sum _{i=1}^{m}\sum _{j=1}^{n}{\mbox{no. of defects for }}x_{ij}}{mn}}$
• Control limits: ${\bar {u}}\pm 3{\sqrt {\frac {\bar {u}}{n_{i}}}}$
• Plotted statistic: ${\bar {u}}_{i}={\frac {\sum _{j=1}^{n}{\mbox{no. of defects for }}x_{ij}}{n}}$
The u-chart differs from the c-chart in that it accounts for the possibility that the number or size of inspection units for which nonconformities are to be counted may vary. Larger samples may be an economic necessity or may be necessary to increase the area of opportunity in order to track very low nonconformity levels.[1]
Examples of processes suitable for monitoring with a u-chart include:
• Monitoring the number of nonconformities per lot of raw material received where the lot size varies
• Monitoring the number of new infections in a hospital per day
• Monitoring the number of accidents for delivery trucks per day
As with the c-chart, the Poisson distribution is the basis for the chart and requires the same assumptions.
The control limits for this chart type are ${\bar {u}}\pm 3{\sqrt {\frac {\bar {u}}{n_{i}}}}$ where ${\bar {u}}$ is the estimate of the long-term process mean established during control-chart setup. The observations $u_{i}={\frac {x_{i}}{n_{i}}}$ are plotted against these control limits, where xi is the number of nonconformities for the ith subgroup and ni is the number of inspection units in the ith subgroup.
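A minimal sketch of these limits in code, with made-up subgroup data (the numbers are illustrative, not from any source):

```python
import math

x = [12, 15, 8, 10, 4, 7, 16, 9]   # nonconformities x_i per subgroup
n = [10, 10, 8, 8, 10, 10, 12, 8]  # inspection units n_i per subgroup

u_bar = sum(x) / sum(n)            # estimate of the long-term process mean

points = []
for xi, ni in zip(x, n):
    ui = xi / ni                       # plotted statistic u_i = x_i / n_i
    half_width = 3 * math.sqrt(u_bar / ni)
    lcl = max(0.0, u_bar - half_width)  # a count rate cannot be negative
    ucl = u_bar + half_width
    points.append((ui, lcl, ucl))
```

Because n_i varies, the limits "step" from subgroup to subgroup; a point falling outside its own (LCL, UCL) interval signals an out-of-control condition.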
See also
• c-chart
References
1. Montgomery, Douglas (2005). Introduction to Statistical Quality Control. Hoboken, New Jersey: John Wiley & Sons, Inc. p. 294. ISBN 978-0-471-65631-9. OCLC 56729567. Archived from the original on 2008-06-20.
u-invariant
In mathematics, the universal invariant or u-invariant of a field describes the structure of quadratic forms over the field.
The universal invariant u(F) of a field F is the largest dimension of an anisotropic quadratic space over F, or ∞ if this does not exist. Since formally real fields have anisotropic quadratic forms (sums of squares) in every dimension, the invariant is only of interest for other fields. An equivalent formulation is that u is the smallest number such that every form of dimension greater than u is isotropic, or that every form of dimension at least u is universal.
Examples
• For the complex numbers, u(C) = 1.
• If F is quadratically closed then u(F) = 1.
• The function field of an algebraic curve over an algebraically closed field has u ≤ 2; this follows from Tsen's theorem that such a field is quasi-algebraically closed.[1]
• If F is a non-real global or local field, or more generally a linked field, then u(F) = 1, 2, 4 or 8.[2]
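A standard worked example, stated here as an illustration (the fact is classical, not drawn from the list above): for a finite field of odd order, the u-invariant is 2.

```latex
% For F = \mathbb{F}_q with q odd:
%  - the binary form x^2 - a y^2, with a a non-square, is anisotropic,
%    so u(\mathbb{F}_q) \ge 2;
%  - by the Chevalley--Warning theorem every form in \ge 3 variables
%    over \mathbb{F}_q has a nontrivial zero, so u(\mathbb{F}_q) \le 2.
\[
  u(\mathbb{F}_q) = 2 \qquad (q \ \text{odd}).
\]
```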
Properties
• If F is not formally real and the characteristic of F is not 2 then u(F) is at most $q(F)=\left|{F^{\star }/F^{\star 2}}\right|$, the index of the squares in the multiplicative group of F.[3]
• u(F) cannot take the values 3, 5, or 7.[4] Fields exist with u = 6[5][6] and u = 9.[7]
• Merkurjev has shown that every even integer occurs as the value of u(F) for some F.[8][9]
• Alexander Vishik proved that there are fields with u-invariant $2^{r}+1$ for all $r>3$.[10]
• The u-invariant is bounded under finite-degree field extensions. If E/F is a field extension of degree n then
$u(E)\leq {\frac {n+1}{2}}u(F)\ .$
In the case of quadratic extensions, the u-invariant is bounded by
$u(F)-2\leq u(E)\leq {\frac {3}{2}}u(F)\ $
and all values in this range are achieved.[11]
The general u-invariant
Since the u-invariant is of little interest in the case of formally real fields, we define a general u-invariant to be the maximum dimension of an anisotropic form in the torsion subgroup of the Witt ring of F, or ∞ if this does not exist.[12] For non-formally-real fields, the Witt ring is torsion, so this agrees with the previous definition.[13] For a formally real field, the general u-invariant is either even or ∞.
Properties
• u(F) ≤ 1 if and only if F is a Pythagorean field.[13]
References
1. Lam (2005) p.376
2. Lam (2005) p.406
3. Lam (2005) p. 400
4. Lam (2005) p. 401
5. Lam (2005) p.484
6. Lam, T.Y. (1989). "Fields of u-invariant 6 after A. Merkurjev". Ring theory 1989. In honor of S. A. Amitsur, Proc. Symp. and Workshop, Jerusalem 1988/89. Israel Math. Conf. Proc. Vol. 1. pp. 12–30. Zbl 0683.10018.
7. Izhboldin, Oleg T. (2001). "Fields of u-Invariant 9". Annals of Mathematics. Second Series. 154 (3): 529–587. doi:10.2307/3062141. JSTOR 3062141. Zbl 0998.11015.
8. Lam (2005) p. 402
9. Elman, Karpenko, Merkurjev (2008) p. 170
10. Vishik, Alexander (2009). "Fields of u-invariant $2^{r}+1$". Algebra, Arithmetic, and Geometry. Progress in Mathematics. Birkhäuser Boston. doi:10.1007/978-0-8176-4747-6_22.
11. Mináč, Ján; Wadsworth, Adrian R. (1995). "The u-invariant for algebraic extensions". In Rosenberg, Alex (ed.). K-theory and algebraic geometry: connections with quadratic forms and division algebras. Summer Research Institute on quadratic forms and division algebras, July 6-24, 1992, University of California, Santa Barbara, CA (USA). Proc. Symp. Pure Math. Vol. 58. Providence, RI: American Mathematical Society. pp. 333–358. Zbl 0824.11018.
12. Lam (2005) p. 409
13. Lam (2005) p. 410
• Lam, Tsit-Yuen (2005). Introduction to Quadratic Forms over Fields. Graduate Studies in Mathematics. Vol. 67. American Mathematical Society. ISBN 0-8218-1095-2. MR 2104929. Zbl 1068.11023.
• Rajwade, A. R. (1993). Squares. London Mathematical Society Lecture Note Series. Vol. 171. Cambridge University Press. ISBN 0-521-42668-5. Zbl 0785.11022.
• Elman, Richard; Karpenko, Nikita; Merkurjev, Alexander (2008). The algebraic and geometric theory of quadratic forms. American Mathematical Society Colloquium Publications. Vol. 56. American Mathematical Society, Providence, RI. ISBN 978-0-8218-4329-1.
U. Narayan Bhat
U. Narayan Bhat (born 1934) is an Indian-born Mathematician, known for his contributions to queueing theory and reliability theory.
Academic career
He received a B.A. in mathematics (1953) and a B.T. in education (1954) from the University of Madras, an M.A. in statistics (1958) from Karnatak University in Dharwar, and a Ph.D. in mathematical statistics from the University of Western Australia with the dissertation Some Simple and Bulk Queueing Systems: A Study of Their Transient Behavior (1965).[1] He worked at Michigan State University (1965–66), Case Western Reserve University (1966–69), and Southern Methodist University (1969–2005). Bhat is a fellow of the American Statistical Association and the Institute for Operations Research and the Management Sciences[2] and an elected member of the International Statistical Institute.
U. Narayan Bhat served as dean of research and graduate studies at Southern Methodist University and was later named interim dean of the university's Dedman College.
Books
• A Study of the Queueing Systems M/G/1 and GI/M/1, (Springer Verlag, 1968)
• Elements of Applied Stochastic Processes (Wiley, 1972)
• Introduction to Operations Research Models (W. B. Saunders & Co., 1977). With L. Cooper and L. J. LeBlanc
• Queueing and Related Models (Oxford University Press, 1992). Editor with I. V. Basawa
• Elements of Applied Stochastic Processes (Wiley, 2002). With Gregory K. Miller
• Introduction to queueing theory (Birkhauser, 2008)
Publications
• Further Results for the Queue with Poisson Arrivals, Operations Research, Vol. 11(3), (1963), 380-386 (with Narahari Umanath Prabhu).
• Imbedded Markov Chain Analysis of Single-Server Bulk Queues, Journal of the Australian Math, Soc., Vol. 4(2), (1964), 244-263.
• On Single-Server Bulk Queueing Processes with Binomial Input, Operations Research, Vol. 12(4), (1964), 527-533.
• On a Stochastic Process Occurring in Queueing Systems, Journal of Applied Probability, Vol. 2(2), (1965), 467-469.
• Statistical Analysis of Queueing Systems in Frontiers in Queuing by Dshalalow etc. (1997). (with G.K. Miller and S. Subba Rao).
• Estimation of Renewal Processes with Unobservable Gamma or Erlang Interarrival Times, J. Stat. Plan. and Inf., 61 (1997), 355-372 (with G. K. Miller).
• Maximum Likelihood Estimation for Single Server Queues from Waiting Time Data, Queueing Systems, 24, (1997), 155-167 (with I. V. Basawa and R. Lund).
• Estimation of the Coefficient of Variation for Unobservable Service Times in the M/G/1 Queue, Journal of Mathematical Sciences, Vol. 1, 2002 (with G. K. Miller).
References
1. homepage
2. Fellows: Alphabetical List, Institute for Operations Research and the Management Sciences, retrieved 9 October 2019
• On Google scholar
External links
• Biography of U. Narayan Bhat from the Institute for Operations Research and the Management Sciences (INFORMS)
Authority control
International
• ISNI
• VIAF
National
• France
• BnF data
• Catalonia
• Germany
• Israel
• United States
• Czech Republic
• Netherlands
Academics
• MathSciNet
• zbMATH
Other
• IdRef
UCL Faculty of Mathematical and Physical Sciences
The UCL Faculty of Mathematical and Physical Sciences is one of the 11 constituent faculties of University College London (UCL).[2] The Faculty, the UCL Faculty of Engineering Sciences and the UCL Faculty of the Built Environment (The Bartlett) together form the UCL School of the Built Environment, Engineering and Mathematical and Physical Sciences.
UCL Faculty of Mathematical and Physical Sciences
DeanProfessor Ivan Parkin[1]
Administrative staff
445[1]
(Academic and research staff (as at October 2009))
Students1,833[1]
(Undergraduate (2008/09))
544[1]
(Graduate (2008/09))
Location
London, United Kingdom
WebsiteUCL Faculty of Mathematical and Physical Sciences
Departments
The Faculty currently comprises the following departments:[3][4]
• UCL Department of Chemistry
• UCL Department of Earth Sciences
• UCL Department of Mathematics
• Chalkdust is an online mathematics interest magazine published by Department of Mathematics students starting in 2015[5]
• UCL Department of Natural Sciences
• UCL Department of Physics & Astronomy
• UCL Department of Science and Technology Studies
• UCL Department of Space & Climate Physics (Mullard Space Science Laboratory)
• UCL Department of Statistical Science
• London Centre for Nanotechnology - a joint venture between UCL and Imperial College London established in 2003 following the award of a £13.65m higher education grant under the Science Research Infrastructure Fund.[6][7]
Research centres and institutes
The Faculty is closely involved with the following research centres and institutes:[4]
• UCL Centre for Materials Research
• UCL Centre for Mathematics and Physics in the Life Sciences and Experimental Biology (CoMPLEX) - an inter-disciplinary virtual centre that seeks to bring together mathematicians, physical scientists, computer scientists and engineers upon the problems posed by complexity in biology and biomedicine. The centre works with 29 departments and Institutes across UCL. It has a MRes/PhD program that requires that its students also belong to at least one of these Departments/Institutes. The centre is based in the Physics Building on the UCL main campus.
• Centre for Planetary Science at UCL/Birkbeck
• UCL Clinical Operational Research Unit (CORU) - CORU[8] sits within the Department of Mathematics and is a team of researchers dedicated to applying operational research, data analysis and mathematical modelling to problems in health care.
• UCL Institute of Origins
• UCL Institute for Healthcare Engineering (IHE) - IHE[9] is dedicated to transforming lives through digital and medical technologies and fosters collaboration across UCL areas of expertise.
• UCL Institute for Risk and Disaster Reduction
• The Thomas Young Centre
Rankings
In the 2013 Academic Ranking of World Universities, UCL is ranked joint 51st to 75th in the world (and joint 12th in Europe) for Natural Sciences and Mathematics.[10]
In the 2013 QS World University Rankings, UCL is ranked 38th in the world (and 12th in Europe) for Natural Sciences.[11] In the 2014 QS World University Rankings by Subject, UCL is ranked joint 51st-100th in the world (and joint 12th in Europe) for Chemistry,[12] joint 27th in the world (and 8th in Europe) for Earth & Marine Sciences,[13] joint 51st-100th in the world (and joint 13th in Europe) for Materials Science,[14] joint 36th in the world (and joint 10th in Europe) for Mathematics,[15] 35th in the world (and 13th in Europe) for Physics & Astronomy,[16] and 47th in the world (and 9th in Europe) for Statistics & Operational Research.[17]
In the 2013/14 Times Higher Education World University Rankings, UCL is ranked 51st in the world (and 16th in Europe) for Physical Sciences.[18]
Notable people
• Michael Abraham
• William Ramsay
• Steven T. Bramwell
• M. J. Seaton
• Sigurd Zienau
• Andrew Fisher
• Paul Davies
• Edwin Power
• Peter Higgs
• Otto Hahn
• Charles K. Kao
• Andrea Sella
• Raman Prinja
• Helen Wilson
• Hannah Fry
See also
• Birkbeck, University of London
• Imperial College London
References
1. "UCL Review 2009". University College London. Retrieved 14 September 2010.
2. "The Academic Units of UCL". University College London. Retrieved 14 September 2010.
3. "Academic Departments by Faculty". University College London. Retrieved 14 September 2010.
4. "Departments and Institutes". UCL Faculty of Mathematical and Physical Sciences. Retrieved 15 September 2010.
5. Clegg, Brian (26 March 2015). "Stretching mathematical minds". Now Appearing. Brian Clegg - Science author (blog). Retrieved 3 July 2015.
6. "London's little idea". BBC News. 27 January 2003. Retrieved 5 March 2014.
7. "Nanotech under the microscope". BBC News. 12 June 2003. Retrieved 5 March 2014.
8. UCL. "UCL - London's Global University". Clinical Operational Research Unit. Retrieved 6 October 2019.
9. UCL. "UCL - London's Global University". UCL Institute of Healthcare Engineering. Retrieved 6 October 2019.
10. "Academic Ranking of World Universities in Natural Sciences and Mathematics – 2013". Shanghai Ranking Consultancy. Retrieved 2 September 2013.
11. "QS World University Rankings by Faculty 2013 – Natural Science". QS Quacquarelli Symonds Limited. Retrieved 23 September 2013.
12. "QS World University Rankings by Subject 2014 - Chemistry". QS Quacquarelli Symonds Limited. Retrieved 1 March 2014.
13. "QS World University Rankings by Subject 2014 - Earth & Marine Sciences". QS Quacquarelli Symonds Limited. Retrieved 1 March 2014.
14. "QS World University Rankings by Subject 2014 - Materials Science". QS Quacquarelli Symonds Limited. Retrieved 1 March 2014.
15. "QS World University Rankings by Subject 2014 - Mathematics". QS Quacquarelli Symonds Limited. Retrieved 1 March 2014.
16. "QS World University Rankings by Subject 2014 - Physics & Astronomy". QS Quacquarelli Symonds Limited. Retrieved 1 March 2014.
17. "QS World University Rankings by Subject 2014 - Politics and International Studies". QS Quacquarelli Symonds Limited. Retrieved 1 March 2014.
18. "Top 100 universities for Physical Sciences 2013–14". Times Higher Education. Retrieved 6 October 2013.
External links
• UCL Faculty of Mathematical and Physical Sciences
• UCL School of the Built Environment, Engineering and Mathematical and Physical Sciences
• University College London
• CoMPLEX homepage
University College London
Academic activities
Faculties, schools
and groupings
• Faculty of Arts and Humanities (Slade School of Fine Art)
• Faculty of Brain Sciences
• Division of Psychology and Language Sciences
• Faculty of the Built Environment (The Bartlett)
• Faculty of Engineering Sciences
• School of Energy and Resources
• School of Management
• Faculty of Laws
• Faculty of Life Sciences
• School of Pharmacy
• Faculty of Mathematical and Physical Sciences
• Faculty of Medical Sciences
• Medical School
• Faculty of Population Health Sciences
• Faculty of Social and Historical Sciences
• School of Slavonic and East European Studies
• Institute of Education
• Neuroscience
Centres and
departments
• Centre for Advanced Spatial Analysis
• Centre for the Study of the Legacies of British Slave-ownership
• Centre for Digital Humanities
• Centre for the History of Medicine
• Centre for Neuroimaging
• The Constitution Unit
• Department of Information Studies
• Department of Philosophy
• Department of Science and Technology Studies
• Department of Space and Climate Physics
• EPPI-Centre
• London Centre for Nanotechnology
• London Knowledge Lab
• Slade Centre for Electronic Media in Fine Art
• Urban Laboratory
Institutes and
laboratories
• Ear Institute
• Eastman Dental Institute
• Great Ormond Street Institute of Child Health
• Institute of Archaeology
• Institute for Global Health
• Institute of Jewish Studies
• Institute of Neurology
• Institute of Ophthalmology
• Institute of Security and Crime Science
• Pedestrian Accessibility and Movement Environment Laboratory
Other
• Edwards Professor of Egyptian Archaeology and Philology
• Grote Professor of the Philosophy of Mind and Logic
• Pender Chair
• Papers from the Institute of Archaeology
• Prize Lecture in Life and Medical Sciences
• Public Archaeology
• Quain Professor
• Ramsay Memorial Professor of Chemical Engineering
• Slade Professor of Fine Art
• Transcribe Bentham
University
Campus
• Bloomsbury
• Bloomsbury Theatre
• Church of Christ the King
• Euston Road
• Gordon Square
• Gower Street
• Gray's Inn Road
• Halls of residence
• Here East
• Holmbury St Mary
• Main Building
• Petrie Museum of Egyptian Archaeology
• Queen Square
• Somers Town
• Tavistock Square
• Tottenham Court Road
• UCL Observatory
• Woburn Square
People
• List of notable people
• Michael Spence (Provost)
Student life
• UCL Boat Club
• The Cheese Grater
• University College Opera
• Pi Media
• Rare FM
• Royal Free, University College and Middlesex Medical Students RFC
• Students' Union UCL
Other
• Citrus Saturday
• UCL Business
• Filming at UCL
• History
• Rivalry with King's College London
• Third-oldest university in England
Affiliates
Medical
• Francis Crick Institute
• Great Ormond Street Hospital for Children NHS Foundation Trust
• Great Ormond Street Hospital
• Moorfields Eye Hospital NHS Foundation Trust
• Moorfields Eye Hospital
• Royal Free London NHS Foundation Trust
• The Royal Free Hospital
• Royal National Orthopaedic Hospital
• UCLH/UCL Biomedical Research Centre
• UCL Partners
• University College London Hospitals NHS Foundation Trust
• Eastman Dental Hospital
• Hospital for Tropical Diseases
• National Hospital for Neurology and Neurosurgery
• Royal London Hospital for Integrated Medicine
• Royal National Throat, Nose and Ear Hospital
• UCH Macmillan Cancer Centre
• University College Hospital
• University College Hospital at Westmoreland Street
• Whittington Hospital
Other
• Alan Turing Institute
• Anna Freud Centre
• Association of Commonwealth Universities
• European Network for Training Economic Research
• Golden triangle
• Institute of Advanced Legal Studies
• League of European Research Universities
• Russell Group
• Science and Engineering South
• Thomas Young Centre
• UCL Academy
• University College School
• University of London
• Category
• Commons
UCT Mathematics Competition
The UCT Mathematics Competition is an annual mathematics competition for schools in the Western Cape province of South Africa, held at the University of Cape Town.
UCT Mathematics Competition
StatusActive
GenreMathematics competition
FrequencyAnnually
VenueUniversity of Cape Town
Coordinates33°57′27″S 18°27′38″E
Country South Africa
Years active1977-present
Inaugurated1977 (1977)
Most recent2019
Participants7000
Websitewww.uctmathscompetition.org.za
Around 7000 participants from Grade 8 to Grade 12 take part, writing a multiple-choice paper. Individual and pair entries are accepted, but all write the same paper for their grade. The current holder of the School Trophy is Rondebosch Boys' High School, with Diocesan College achieving second place in the 2022 competition.[1] These two schools have held the top positions in the competition for a number of years.
The competition was established in 1977 by Mona Leeuwenberg and Shirley Fitton, who were teachers at Diocesan College and Westerford High School, and since 1987 has been run by Professor John Webb of the University of Cape Town.[2]
Awards
Mona Leeuwenburg Trophy
The Mona Leeuwenburg Trophy is awarded to the school with the best overall performance in the competition.[3]
Recipients of the Mona Leeuwenburg Trophy[1]
YearSchool
2022Rondebosch Boys' High School
2021Diocesan College
2020-
2019Rondebosch Boys' High School
2018Rondebosch Boys' High School
2017Rondebosch Boys' High School
2016Rondebosch Boys' High School
2015Rondebosch Boys' High School
2014Rondebosch Boys' High School
2013Diocesan College
2012Diocesan College
2011Diocesan College
2010Diocesan College
2009Diocesan College
2008Diocesan College
2007Rondebosch Boys' High School
2006Rondebosch Boys' High School
2005Rondebosch Boys' High School
2004Rondebosch Boys' High School
2003Rondebosch Boys' High School
2002Rondebosch Boys' High School
2001Diocesan College
2000Rondebosch Boys' High School
1999Rondebosch Boys' High School
1998Rondebosch Boys' High School
1997Diocesan College
1996Diocesan College
1995Diocesan College
1994Diocesan College
1993Westerford High School
1992Diocesan College
1991Diocesan College
1990Westerford High School
1989Diocesan College
1988Diocesan College
1987Diocesan College
UCT Trophy
The UCT Trophy is awarded to the school with the best performance that has not participated in the competition more than twice before.[3]
Recipients of the UCT Trophy
YearSchool
2022Generation Schools
2021The American International School of Cape Town
2020-
2019Curro Hermanus
2018Thembalethu High School
2017Glenwood House
2016Protea Heights Academy
2015Harry Gwala High School
2014Darun-Na'im Girls' High School
2013Intlanganiso Senior Secondary School
2012Claremont High School
2011Spine Road High School
2010The Oracle Academy
2009Chesterhouse
2008Darul Arqam Islamic High School
2007Al-Azhar High School
2006Elkanah House
2005Cravenby Secondary School
2004Reddam House Atlantic Seaboard
2003Ocean View High School
2002Reddam House Constantia
2001Weston Senior School
2000Worcester Gymnasium
1999Somerset College
Diane Tucker Trophy
The Diane Tucker Trophy is awarded to the girl with the best performance in the competition. This trophy was first made in year 2000.[3]
Recipients of the Diane Tucker Trophy
Year | Winner | School
2022 | Soyeon Lee | Reddam House Durbanville
2021 | Juliette Roux | Herschel Girls' School
2020 | - | -
2018 | Yewon Kim | Meridian Pinehurst High School
2017 | Danielle Kleyn | Parel Vallei High School
2016 | SangEun Lee | St George's Grammar School
2015 | SangEun Lee | St George's Grammar School
2014 | Jane Park | El Shaddai Christian School
2013 | Annemiek Meyer | Reddam House Constantia
2012 | Lauren Denny | Rustenburg Girls' High School
2011 | Khadijah Brey | Wynberg Girls' High School
2010 | Emma Belcher | Springfield Convent
2009 | Khadija Brey | Wynberg Girls' High School
2008 | Maggie Lu | Herschel Girls' School
2007 | Melissa Munnik | Hoërskool D F Malan (D F Malan High School)
2006 | Melissa Munnik | Hoërskool D F Malan (D F Malan High School)
2005 | Melissa Munnik | Hoërskool D F Malan (D F Malan High School)
2004 | Gayle Sher | Herzlia Middle School
2003 | Marietjie Venter | Hoërskool Stellenbosch (Stellenbosch High School)
2002 | Marietjie Venter | Hoërskool Stellenbosch (Stellenbosch High School)
2001 | Gayle Sher | Herzlia Middle School
2000 | Marietjie Venter | Hoërskool Stellenbosch (Stellenbosch High School)
Moolla Trophy
The Moolla Trophy was donated to the competition by the Moolla family. Saadiq, Haroon and Ashraf Moolla represented Rondebosch Boys' High School and achieved Gold Awards from 2003 to 2011.[4] The trophy is awarded to a school from a disadvantaged community that shows a notable performance in the competition.[3]
Recipients of the Moolla Trophy
Year | School
2022 | Sinenjongo High School
2021 | Cape Academy of Mathematics, Science and Technology
2020 | -
2019 | Intsebenziswano High School
2018 | Qhayiya Secondary School
2017 | Thembalethu High School
2016 | Percy Mdala High School
2015 | Spine Road High School
2014 | South Peninsula High School
2013 | COSAT
2012 | Manyano High School
Lesley Reeler Trophy
The Lesley Reeler Trophy is awarded for the best individual performance over five years (grades 8 to 12).[3]
Recipients of the Lesley Reeler Trophy
Year | Winner | School
2022 | Emmanuel Rassou | South African College Schools
2021 | Justin Botes | Elkanah House
2020 | - | -
2019 | Adri Wessels | Curro Durbanville
2018 | Timothy Schlesinger | Rondebosch Boys' High School
2017 | Abdullah Karbanee | Rondebosch Boys' High School
References
1. "Award Winners". UCT Mathematics Competition. Retrieved 10 November 2018.
2. "John Webb". UCT Department of Mathematics and Applied Mathematics. Retrieved 27 June 2017.
3. "Awards". UCT Mathematics Competition. Retrieved 4 February 2018.
4. "Moolla family sponsors new trophy for mathematics competition". bizcommunity.com. Archived from the original on 13 April 2014. Retrieved 11 April 2014.
External links
• UCT Mathematics Competition
University of Cape Town
People
• Chancellor: Precious Moloi-Motsepe
• Vice-chancellor: Mamokgethi Phakeng
• Notable academics
• Notable alumni
History
• South African College
• Demonstrations at UCT
• List of alumni
Academic units
• Animal Demography Unit
• SAFRING
• SABAP
• SABAP2
• Jewish Digital Archive Project
• African Gender Institute
• Graduate School of Business
• Michaelis School of Fine Art
• Centre for Curating the Archive
• Percy FitzPatrick Institute of African Ornithology
• South African College of Music
Libraries
• Bolus Herbarium
• Jagger Library
Student life
• Green Campus Initiative
• Ikey Tigers
• Ikey Warriors
• SHAWCO
• UCT Radio
• UNASA UCT
• Varsity
• Waaihoek
Affiliated hospitals
• Groote Schuur Hospital
• Red Cross War Memorial Children's Hospital
• Valkenberg Hospital
Miscellaneous
• 2021 Cape Town fire
• Baxter Theatre Centre
• Biko Lectures
• Egyptian Building
• Little Theatre
• Mafeje affair
• Mathematics Competition
• Montebello Design Centre
• Irma Stern Museum
• University
• People
| Wikipedia |
UPA model
In the analysis of social networks, the uniform-preferential-attachment model, or UPA model, is a variation of the Barabási–Albert model in which preferential attachment is perceived as having a double nature. New nodes joining the network may either attach themselves to high-degree nodes or to the most recently added nodes. This behaviour can be noticed in some examples of social networks, such as the citation network of scientific publications.[1]
Model description
For a UPA network with nodes $\{v_{1}...v_{t}\}$, we define for an arriving node $v_{t+1}$ a subset of nodes $\{v_{t-w+1}...v_{t}\}$ with $w\in \mathbb {N} $. This subset is called a window, and represents the last $w$ nodes inserted into the network. A new node may link itself either to a node from the window, with probability $p$, or to any other node from $\{v_{1}...v_{t}\}$, with probability $1-p$. In the former case, the node probability distribution is uniform: each window node has a probability $1/w$ of being chosen. In the latter, node selection follows a preferential attachment rule, as in the Barabási–Albert model.
The window size may be held constant as new nodes are added, expressed by $w:=w(t)=l$, where $t$ is a discrete time variable. It can also grow with time according to $w:=w(t)=\lceil \alpha t\rceil $, where $0<\alpha <1$, which means that the window size grows linearly with the size of the network. The network keeps its asymptotic power-law degree distribution in both cases.
Note that when $l=1$ and $p=0$, the UPA model reduces to the Barabási–Albert model.[1]
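The growth rule above can be sketched as a short simulation. This is a minimal illustration under stated assumptions, not the authors' code: the function name, the seed handling, and the two-node starting graph are choices made for the example. Each arriving node brings one edge, attaching uniformly inside the window with probability $p$ and preferentially otherwise.

```python
import random

def upa_network(T, p=0.3, l=3, seed=0):
    """Grow a UPA network with T nodes.

    Each arriving node attaches to one of the last l nodes chosen
    uniformly (with probability p), or to any earlier node chosen
    with probability proportional to its degree (with probability 1 - p).
    """
    rng = random.Random(seed)
    degree = [1, 1]          # start from two nodes joined by one edge
    edges = [(0, 1)]
    endpoints = [0, 1]       # multiset of edge endpoints: sampling uniformly
                             # from it implements preferential attachment
    for t in range(2, T):
        window = list(range(max(0, t - l), t))  # the last l inserted nodes
        if rng.random() < p:
            target = rng.choice(window)         # uniform attachment
        else:
            target = rng.choice(endpoints)      # preferential attachment
        edges.append((t, target))
        degree.append(1)
        degree[target] += 1
        endpoints.extend([t, target])
    return degree, edges
```

Sampling a uniformly random entry of the `endpoints` multiset selects a node with probability proportional to its degree, which is exactly the Barabási–Albert step; with `p=0` and `l=1` the simulation reduces to that model.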
Degree distribution
The degree distribution for a UPA network, in the limit $t\rightarrow \infty $ and for $l=1$, is:
$P(k)={\begin{cases}{\dfrac {2(1-p)}{3-p}}&{\mbox{ if }}k=1\\{\dfrac {(1-p)^{2}}{(2-p)(3-p)}}+{\dfrac {p}{2-p}}&{\mbox{ if }}k=2\\({\dfrac {2}{1-p}}+2)({\dfrac {2}{1-p}}+1)B(k,1+{\dfrac {2}{1-p}}){\bar {P}}(2)&{\mbox{ if }}k>2\end{cases}}$
And for $l>1$ we have:
$P(k)={\begin{cases}{\dfrac {2}{(3-p)}}(1-{\dfrac {p}{l}})^{l}&{\mbox{ if }}k=1\\{\dfrac {2}{2+k(1-p)}}\left({\dfrac {p}{l}}(H_{k-1}-H_{k})+{\dfrac {(1-p)(k-1)}{2}}P(k-1)\right)&{\mbox{ if }}2\leq k\leq l+1\\{\dfrac {B(k,l+2+{\dfrac {2}{1-p}})}{B(l+1,k+1+{\dfrac {2}{1-p}})}}P(l+1)&{\mbox{ if }}k>l+1\end{cases}}$
where $B(x,y)$ is the Beta function and $H_{k}$ is given by:
$H_{k}={\begin{cases}\left({\dfrac {p}{l}}\right)^{k-1}\sum _{m=1}^{l-(k-1)}{\binom {l-m}{l-m-(k-1)}}(1-{\dfrac {p}{l}})^{l-m-(k-1)}&{\mbox{ if }}1\leq k\leq l\\0&{\mbox{ if }}k>l\end{cases}}$
The demonstration of these formulae involves the analysis of recursive functions and the Azuma–Hoeffding inequality. Observe that for $l=1$ and $p=0$ the degree distribution follows a power law with exponent $\gamma =-3$, as expected for the equivalent Barabási–Albert model. It is also proven that, for every probability $p$ and window size $l$, the network asymptotically follows a power law and thus keeps its scale-free behavior.[1]
Occurrences in real world
Reddit
A UPA network may be used to model Reddit positive votes (upvotes). Consider each node to represent a post $\mathbb {P} $ and each link an upvote given by the author after posting $\mathbb {P} $. Whenever a user posts a comment, he or she usually looks in the same topic for another post to comment on, which characterizes uniform attachment. However, the user may also find it more interesting to search for another topic to comment on, possibly a popular one. The latter represents preferential attachment in the UPA network model.
Citation network
A citation network of scientific publications is usually represented with scientific papers as nodes and citations as links. Considering a network of papers from the same field of knowledge, whenever a new node is inserted into this network, it either attaches itself to the latest publications (uniform attachment) or to the most important papers in its field (preferential attachment). Thus, the general behavior of these networks can be described by a UPA model.
Related work
• Instead of a double nature combining uniform and preferential attachment, a network may combine preferential and anti-preferential attachment. In this model, nodes can either be inserted into or removed from the network with the passing of time $t$.[2]
References
1. Pachon, Angelica; Sacerdote, Laura; Yang, Shuyi. Scale-free behavior of networks with the copresence of preferential and uniform attachment rules. Mathematics Department “G. Peano”, University of Torino, 2017.
2. de Ambroggio, Umberto; Sacerdote, Laura; Polito, Frederico. On dynamic random graphs with degree homogenization via anti-preferential attachment probabilities. Mathematics Department “G. Peano”, University of Torino, 2019.
| Wikipedia |
UP (complexity)
In complexity theory, UP (unambiguous non-deterministic polynomial-time) is the complexity class of decision problems solvable in polynomial time on an unambiguous Turing machine with at most one accepting path for each input. UP contains P and is contained in NP.
A common reformulation of NP states that a language is in NP if and only if a given answer can be verified by a deterministic machine in polynomial time. Similarly, a language is in UP if a given answer can be verified in polynomial time, and the verifier machine accepts at most one answer for each problem instance. More formally, a language L belongs to UP if there exists a two-input polynomial-time algorithm A and a constant c such that
if x is in L, then there exists a unique certificate y with $|y|=O(|x|^{c})$ such that $A(x,y)=1$
if x is not in L, there is no certificate y with $|y|=O(|x|^{c})$ such that $A(x,y)=1$
algorithm A verifies L in polynomial time.
UP (and its complement co-UP) contain both the integer factorization problem and the parity game problem. Because determined efforts have yet to find polynomial-time solutions to these problems, it is suspected to be difficult to show P = UP, or even P = (UP ∩ co-UP).
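The unique-certificate condition can be illustrated with factoring. The sketch below is illustrative only (the function names and the certificate encoding are assumptions of the example): a certificate for an integer x is its sorted list of prime factors, which is unique by the fundamental theorem of arithmetic, so the verifier accepts at most one certificate per input. A genuinely polynomial-time verifier would use a polynomial primality test such as AKS; the trial-division check here is exponential in the bit length and kept only for brevity.

```python
def is_prime(n):
    """Trial-division primality check (for illustration only; see note above)."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def verify(x, y):
    """Accept iff y is the sorted prime factorization of x (with multiplicity).

    Requiring sorted order makes the accepted certificate unique,
    mirroring the UP-style definition above.
    """
    if x < 2 or not y:
        return False
    if list(y) != sorted(y):                 # canonical order => uniqueness
        return False
    if any(not is_prime(p) for p in y):
        return False
    product = 1
    for p in y:
        product *= p
    return product == x
```

For every x ≥ 2 exactly one certificate is accepted: `verify(12, [2, 2, 3])` returns True, while any other list for 12 is rejected.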
The Valiant–Vazirani theorem states that NP is contained in RP^Promise-UP, which means that there is a randomized reduction from any problem in NP to a problem in Promise-UP.
UP is not known to have any complete problems.[1]
References
Citations
1. "U". Complexity Zoo. UP: Unambiguous Polynomial-Time.
Sources
• Hemaspaandra, Lane A.; Rothe, Jörg (June 1997). "Unambiguous Computation: Boolean Hierarchies and Sparse Turing-Complete Sets". SIAM Journal on Computing. 26 (3): 634–653. arXiv:cs/9907033. doi:10.1137/S0097539794261970. ISSN 0097-5397.
Important complexity classes
Considered feasible
• DLOGTIME
• AC0
• ACC0
• TC0
• L
• SL
• RL
• NL
• NL-complete
• NC
• SC
• CC
• P
• P-complete
• ZPP
• RP
• BPP
• BQP
• APX
• FP
Suspected infeasible
• UP
• NP
• NP-complete
• NP-hard
• co-NP
• co-NP-complete
• AM
• QMA
• PH
• ⊕P
• PP
• #P
• #P-complete
• IP
• PSPACE
• PSPACE-complete
Considered infeasible
• EXPTIME
• NEXPTIME
• EXPSPACE
• 2-EXPTIME
• ELEMENTARY
• PR
• R
• RE
• ALL
Class hierarchies
• Polynomial hierarchy
• Exponential hierarchy
• Grzegorczyk hierarchy
• Arithmetical hierarchy
• Boolean hierarchy
Families of classes
• DTIME
• NTIME
• DSPACE
• NSPACE
• Probabilistically checkable proof
• Interactive proof system
List of complexity classes
| Wikipedia |
U-quadratic distribution
In probability theory and statistics, the U-quadratic distribution is a continuous probability distribution defined by a unique convex quadratic function with lower limit a and upper limit b.
$f(x|a,b,\alpha ,\beta )=\alpha \left(x-\beta \right)^{2},\quad {\text{for }}x\in [a,b].$
U-quadratic
Probability density function
Parameters $a:~a\in (-\infty ,\infty )$
$b:~b\in (a,\infty )$
or
$\alpha :~\alpha \in (0,\infty )$
$\beta :~\beta \in (-\infty ,\infty )$
Support $x\in [a,b]\!$
PDF $\alpha \left(x-\beta \right)^{2}$
CDF ${\alpha \over 3}\left((x-\beta )^{3}+(\beta -a)^{3}\right)$
Mean ${a+b \over 2}$
Median ${a+b \over 2}$
Mode $a{\text{ and }}b$
Variance ${3 \over 20}(b-a)^{2}$
Skewness $0$
Ex. kurtosis ${3 \over 112}(b-a)^{4}$
Entropy TBD
MGF See text
CF See text
Parameter relations
This distribution has effectively only two parameters a, b, as the other two are explicit functions of the support defined by the former two parameters:
$\beta ={b+a \over 2}$
(gravitational balance center, offset), and
$\alpha ={12 \over \left(b-a\right)^{3}}$
(vertical scale).
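These relations are easy to check numerically. The following sketch (the function names are invented for the example) derives $\alpha $ and $\beta $ from the support and draws samples by inverting the CDF $F(x)={\alpha \over 3}\left((x-\beta )^{3}+(\beta -a)^{3}\right)$:

```python
import numpy as np

def uq_params(a, b):
    """Vertical scale alpha and center beta from the support [a, b]."""
    return 12.0 / (b - a) ** 3, (a + b) / 2.0

def uq_pdf(x, a, b):
    alpha, beta = uq_params(a, b)
    return alpha * (x - beta) ** 2

def uq_sample(n, a, b, rng=None):
    """Inverse-CDF sampling: solve alpha/3 * ((x-beta)^3 + (beta-a)^3) = u."""
    rng = np.random.default_rng(0) if rng is None else rng
    alpha, beta = uq_params(a, b)
    u = rng.random(n)
    # cube roots of negative arguments are handled by np.cbrt
    return beta + np.cbrt(3.0 * u / alpha - (beta - a) ** 3)
```

The samples land in [a, b] with mean (a + b)/2 and variance 3(b − a)²/20, matching the table above.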
Related distributions
One can introduce a vertically inverted ($\cap $)-quadratic distribution in analogous fashion.
Applications
This distribution is a useful model for symmetric bimodal processes. Other continuous distributions allow more flexibility, in terms of relaxing the symmetry and the quadratic shape of the density function, which are enforced in the U-quadratic distribution – e.g., beta distribution and gamma distribution.
Moment generating function
$M_{X}(t)={3\left(e^{bt}\left((b-a)^{2}t^{2}-4(b-a)t+8\right)-e^{at}\left((b-a)^{2}t^{2}+4(b-a)t+8\right)\right) \over (b-a)^{3}t^{3}}$
Characteristic function
$\phi _{X}(t)={3i\left(e^{ibt}\left(8-(b-a)^{2}t^{2}-4i(b-a)t\right)-e^{iat}\left(8-(b-a)^{2}t^{2}+4i(b-a)t\right)\right) \over (b-a)^{3}t^{3}}$
Beamforming
The quadratic U and inverted quadratic U distribution has an application to beamforming and pattern synthesis.[1][2]
References
1. Buchanan, Kristopher; Wheeland, Sara (July 2022). "Comparison of the Quadratic U and Inverse Quadratic U Sum-Difference Beampatterns". 2022 IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting (AP-S/URSI). pp. 1828–1829. doi:10.1109/AP-S/USNC-URSI47032.2022.9887273. ISBN 978-1-6654-9658-2. S2CID 252411058.
2. Buchanan, Kristopher; Wheeland, Sara (July 2022). "Investigation of the Sum-Difference Beampatterns Using the Quadratic U Distribution". 2022 IEEE International Symposium on Antennas and Propagation and USNC-URSI Radio Science Meeting (AP-S/URSI). pp. 135–136. doi:10.1109/AP-S/USNC-URSI47032.2022.9886771. ISBN 978-1-6654-9658-2. S2CID 252410725.
Probability distributions (list)
Discrete
univariate
with finite
support
• Benford
• Bernoulli
• beta-binomial
• binomial
• categorical
• hypergeometric
• negative
• Poisson binomial
• Rademacher
• soliton
• discrete uniform
• Zipf
• Zipf–Mandelbrot
with infinite
support
• beta negative binomial
• Borel
• Conway–Maxwell–Poisson
• discrete phase-type
• Delaporte
• extended negative binomial
• Flory–Schulz
• Gauss–Kuzmin
• geometric
• logarithmic
• mixed Poisson
• negative binomial
• Panjer
• parabolic fractal
• Poisson
• Skellam
• Yule–Simon
• zeta
Continuous
univariate
supported on a
bounded interval
• arcsine
• ARGUS
• Balding–Nichols
• Bates
• beta
• beta rectangular
• continuous Bernoulli
• Irwin–Hall
• Kumaraswamy
• logit-normal
• noncentral beta
• PERT
• raised cosine
• reciprocal
• triangular
• U-quadratic
• uniform
• Wigner semicircle
supported on a
semi-infinite
interval
• Benini
• Benktander 1st kind
• Benktander 2nd kind
• beta prime
• Burr
• chi
• chi-squared
• noncentral
• inverse
• scaled
• Dagum
• Davis
• Erlang
• hyper
• exponential
• hyperexponential
• hypoexponential
• logarithmic
• F
• noncentral
• folded normal
• Fréchet
• gamma
• generalized
• inverse
• gamma/Gompertz
• Gompertz
• shifted
• half-logistic
• half-normal
• Hotelling's T-squared
• inverse Gaussian
• generalized
• Kolmogorov
• Lévy
• log-Cauchy
• log-Laplace
• log-logistic
• log-normal
• log-t
• Lomax
• matrix-exponential
• Maxwell–Boltzmann
• Maxwell–Jüttner
• Mittag-Leffler
• Nakagami
• Pareto
• phase-type
• Poly-Weibull
• Rayleigh
• relativistic Breit–Wigner
• Rice
• truncated normal
• type-2 Gumbel
• Weibull
• discrete
• Wilks's lambda
supported
on the whole
real line
• Cauchy
• exponential power
• Fisher's z
• Kaniadakis κ-Gaussian
• Gaussian q
• generalized normal
• generalized hyperbolic
• geometric stable
• Gumbel
• Holtsmark
• hyperbolic secant
• Johnson's SU
• Landau
• Laplace
• asymmetric
• logistic
• noncentral t
• normal (Gaussian)
• normal-inverse Gaussian
• skew normal
• slash
• stable
• Student's t
• Tracy–Widom
• variance-gamma
• Voigt
with support
whose type varies
• generalized chi-squared
• generalized extreme value
• generalized Pareto
• Marchenko–Pastur
• Kaniadakis κ-exponential
• Kaniadakis κ-Gamma
• Kaniadakis κ-Weibull
• Kaniadakis κ-Logistic
• Kaniadakis κ-Erlang
• q-exponential
• q-Gaussian
• q-Weibull
• shifted log-logistic
• Tukey lambda
Mixed
univariate
continuous-
discrete
• Rectified Gaussian
Multivariate
(joint)
• Discrete:
• Ewens
• multinomial
• Dirichlet
• negative
• Continuous:
• Dirichlet
• generalized
• multivariate Laplace
• multivariate normal
• multivariate stable
• multivariate t
• normal-gamma
• inverse
• Matrix-valued:
• LKJ
• matrix normal
• matrix t
• matrix gamma
• inverse
• Wishart
• normal
• inverse
• normal-inverse
• complex
Directional
Univariate (circular) directional
Circular uniform
univariate von Mises
wrapped normal
wrapped Cauchy
wrapped exponential
wrapped asymmetric Laplace
wrapped Lévy
Bivariate (spherical)
Kent
Bivariate (toroidal)
bivariate von Mises
Multivariate
von Mises–Fisher
Bingham
Degenerate
and singular
Degenerate
Dirac delta function
Singular
Cantor
Families
• Circular
• compound Poisson
• elliptical
• exponential
• natural exponential
• location–scale
• maximum entropy
• mixture
• Pearson
• Tweedie
• wrapped
• Category
• Commons
| Wikipedia |
Sznajd model
The Sznajd model or United we stand, divided we fall (USDF) model is a sociophysics model introduced in 2000[1] to gain fundamental understanding about opinion dynamics. The Sznajd model implements a phenomenon called social validation and thus extends the Ising spin model. In simple words, the model states:
• Social validation: If two people share the same opinion, their neighbors will start to agree with them.
• Discord destroys: If a block of adjacent persons disagree, their neighbors start to argue with them.
Mathematical formulation
For simplicity, one assumes that each individual $i$ has an opinion $S_{i}$ which might be Boolean ($S_{i}=-1$ for no, $S_{i}=1$ for yes) in its simplest formulation, meaning that each individual either agrees or disagrees with a given question.
In the original 1D-formulation, each individual has exactly two neighbors just like beads on a bracelet. At each time step a pair of individual $S_{i}$ and $S_{i+1}$ is chosen at random to change their nearest neighbors' opinion (or: Ising spins) $S_{i-1}$ and $S_{i+2}$ according to two dynamical rules:
1. If $S_{i}=S_{i+1}$ then $S_{i-1}=S_{i}$ and $S_{i+2}=S_{i}$. This models social validation, if two people share the same opinion, their neighbors will change their opinion.
2. If $S_{i}=-S_{i+1}$ then $S_{i-1}=S_{i+1}$ and $S_{i+2}=S_{i}$. Intuitively: If the given pair of people disagrees, both adopt the opinion of their other neighbor.
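The two rules translate directly into a small simulation on a ring of spins. The function names, the random choice of the pair, and the initial condition below are implementation choices for this sketch, not prescriptions from the original paper.

```python
import random

def sznajd_step(spins, rng):
    """One update of the original 1D Sznajd rules on a ring of +/-1 spins."""
    n = len(spins)
    i = rng.randrange(n)                 # random pair (i, i+1)
    j = (i + 1) % n
    left, right = (i - 1) % n, (i + 2) % n
    if spins[i] == spins[j]:             # rule 1: social validation
        spins[left] = spins[i]
        spins[right] = spins[i]
    else:                                # rule 2: discord destroys
        spins[left] = spins[j]           # S_{i-1} takes S_{i+1}
        spins[right] = spins[i]          # S_{i+2} takes S_i

def simulate(n=50, steps=20000, seed=1):
    rng = random.Random(seed)
    spins = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        sznajd_step(spins, rng)
    return spins
```

Complete consensus is absorbing: once all spins agree, rule 1 always applies and nothing changes, which is why the ferromagnetic state is a steady state.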
Findings for the original formulations
In a closed (1 dimensional) community, two steady states are always reached, namely complete consensus (which is called ferromagnetic state in physics) or stalemate (the antiferromagnetic state). Furthermore, Monte Carlo simulations showed that these simple rules lead to complicated dynamics, in particular to a power law in the decision time distribution with an exponent of -1.5.[2]
Modifications
The final (antiferromagnetic) state of alternating all-on and all-off is unrealistic to represent the behavior of a community. It would mean that the complete population uniformly changes their opinion from one time step to the next. For this reason an alternative dynamical rule was proposed. One possibility is that two spins $S_{i}$ and $S_{i+1}$ change their nearest neighbors according to the two following rules:[3]
1. Social validation remains unchanged: If $S_{i}=S_{i+1}$ then $S_{i-1}=S_{i}$ and $S_{i+2}=S_{i}$.
2. If $S_{i}=-S_{i+1}$ then $S_{i-1}=S_{i}$ and $S_{i+2}=S_{i+1}$
Relevance
In recent years, statistical physics has been accepted as modeling framework for phenomena outside the traditional physics. Fields as econophysics or sociophysics formed, and many quantitative analysts in finance are physicists. The Ising model in statistical physics has been a very important step in the history of studying collective (critical) phenomena. The Sznajd model is a simple but yet important variation of prototypical Ising system.[4]
In 2007, Katarzyna Sznajd-Weron has been recognized by the Young Scientist Award for Socio- and Econophysics of the Deutsche Physikalische Gesellschaft (German Physical Society) for an outstanding original contribution using physical methods to develop a better understanding of socio-economic problems.[5]
Applications
The Sznajd model belongs to the class of binary-state dynamics on a networks also referred to as Boolean networks. This class of systems includes the Ising model, the voter model and the q-voter model, the Bass diffusion model, threshold models and others.[6] The Sznajd model can be applied to various fields:
• The finance interpretation considers the spin state $S_{i}=1$ to be a bullish trader placing buy orders, whereas $S_{i}=-1$ corresponds to a bearish trader placing sell orders.
References
1. Sznajd-Weron, Katarzyna; Sznjad, Jozef (2000). "Opinion evolution in closed community". International Journal of Modern Physics C. 11 (6): 1157–1165. arXiv:cond-mat/0101130. Bibcode:2000IJMPC..11.1157S. doi:10.1142/S0129183100000936. S2CID 17307753.
2. Sznajd-Weron, Katarzyna (2005). "Sznajd model and its applications". Acta Physica Polonica B. 36 (8): 2537. arXiv:physics/0503239. Bibcode:2005AcPPB..36.2537S.
3. Sanchez, Juan R. (2004). "A modified one-dimensional Sznajd model". arXiv:cond-mat/0408518.
4. Castellano, Claudio; Fortunato, Santo; Loreto, Vittorio (2009). "Statistical physics of social dynamics". Reviews of Modern Physics. 81 (2): 591–646. arXiv:0710.3256. Bibcode:2009RvMP...81..591C. doi:10.1103/RevModPhys.81.591. S2CID 118376889.
5. "Young Scientist Award for Socio- and Econophysics". Bad Honnef, Germany: Deutsche Physikalische Gesellschaft. Retrieved 15 October 2014.
6. Gleeson, James P. (2013). "Binary-State Dynamics on Complex Networks: Pair Approximation and Beyond". Physical Review X. 3 (2): 021004. arXiv:1209.2983. Bibcode:2013PhRvX...3b1004G. doi:10.1103/PhysRevX.3.021004. S2CID 54622570.
External links
• Katarzyna Sznajd-Weron currently works at the Wrocław University of Technology performing research on interdisciplinary applications of statistical physics, complex systems, critical phenomena, sociophysics and agent-based modeling.
| Wikipedia |
U. K. Anandavardhanan
U. K. Anandavardhanan (born 25 May 1976) is an Indian mathematician specialising in automorphic forms and representation theory. He was awarded the Shanti Swarup Bhatnagar Prize for Science and Technology, the highest science award in India, for the year 2020 in the mathematical sciences category.[1] He is affiliated with the Indian Institute of Technology Bombay.[2][3]
After completing undergraduate studies at the University of Calicut, Anandavardhanan joined the University of Hyderabad for further studies, earning an M.Sc. in mathematics in 1998 and a Ph.D. in 2003. He was at the Tata Institute of Fundamental Research from February 2003 to July 2005 and spent the spring of 2004 at the University of Iowa. He has been a faculty member of the Indian Institute of Technology Bombay since July 2005.
Awards
In addition to the Shanti Swarup Bhatnagar Prize, he has been conferred the following awards also:[4]
• Young Scientist Platinum Jubilee Award, National Academy of Sciences, India, Allahabad, 2009.
• INSA Medal for Young Scientist, Indian National Science Academy, New Delhi, 2008.
References
1. "Shanti Swarup Bhatnagar Prize (SSB) for Science and Technology 2020 List of recipients" (PDF). Shanti Swarup Bhatnagar Prize for Science and Technology. CSIR Human Resource Development Group, New Delhi. Retrieved 28 September 2020.
2. "Two IIT-Bombay profs, BARC researcher bag India's coveted science award". The Times of India. 28 September 2020. Retrieved 3 October 2020.
3. Anonna Dutt (26 September 2020). "Shanti Swarup Bhatnagar Prize 2020: 12 researchers receive India's highest science award". Hindusthan times. Retrieved 3 October 2020.
4. "Curriculum Vitae". IIT Bombay. Retrieved 28 September 2020.
External links
• U. K. Anandavardhanan publications indexed by Google Scholar
Recipients of Shanti Swarup Bhatnagar Prize for Science and Technology in Mathematical Science
1950s–70s
• K. S. Chandrasekharan & C. R. Rao (1959)
• K. G. Ramanathan (1965)
• A. S. Gupta & C. S. Seshadri (1972)
• P. C. Jain & M. S. Narasimhan (1975)
• K. R. Parthasarathy & S. K. Trehan (1976)
• M. S. Raghunathan (1977)
• E. M. V. Krishnamurthy (1978)
• S. Raghavan & S. Ramanan (1979)
1980s
• R. Sridharan (1980)
• J. K. Ghosh (1981)
• B. L. S. Prakasa Rao & J. B. Shukla (1982)
• I. B. S. Passi & Phoolan Prasad (1983)
• S. K. Malik & R. Parthasarathy (1985)
• T. Parthasarathy & U. B. Tewari (1986)
• Raman Parimala & T. N. Shorey (1987)
• M. B. Banerjee & K. B. Sinha (1988)
• Gopal Prasad (1989)
1990s
• R. Balasubramanian & S. G. Dani (1990)
• V. B. Mehta & A. Ramanathan (1991)
• Maithili Sharan (1992)
• Karmeshu & Navin M. Singhi (1993)
• N. Mohan Kumar (1994)
• Rajendra Bhatia (1995)
• V. S. Sunder (1996)
• Subhashis Nag & T. R. Ramadas (1998)
• Rajeeva Laxman Karandikar (1999)
2000s
• Rahul Mukerjee (2000)
• Gadadhar Misra & T. N. Venkataramana (2001)
• Dipendra Prasad & S. Thangavelu (2002)
• Manindra Agrawal & V. Srinivas (2003)
• Arup Bose & Sujatha Ramdorai (2004)
• Probal Chaudhuri & K. H. Paranjape (2005)
• Vikraman Balaji & Indranil Biswas (2006)
• B. V. Rajarama Bhat (2007)
• Rama Govindarajan (2007)
• Jaikumar Radhakrishnan (2008)
• Suresh Venapally (2009)
2010s
• Mahan Mitra & Palash Sarkar (2011)
• Siva Athreya & Debashish Goswami (2012)
• Eknath Prabhakar Ghate (2013)
• Kaushal Kumar Verma (2014)
• K Sandeep & Ritabrata Munshi (2015)
• Amalendu Krishna (2016)
• Naveen Garg (2016)
• (Not awarded) (2017)
• Amit Kumar & Nitin Saxena (2018)
• Neena Gupta & Dishant Mayurbhai Pancholi (2019)
2020s
• Rajat Subhra Hazra (2020)
• U. K. Anandavardhanan (2020)
• Anish Ghosh (2021)
• Saket Saurabh (2021)
| Wikipedia |
Tilde
The tilde (/ˈtɪldeɪ, -di, -də, ˈtɪld/)[1] ˜ or ~, is a grapheme with several uses. The name of the character came into English from Spanish, which in turn came from the Latin titulus, meaning "title" or "superscription".[2][lower-alpha 1] Its primary use is as a diacritic (accent) in combination with a base letter; but for historical reasons, it is also used in standalone form within a variety of contexts.
~ ◌̃
Tilde (symbol), Combining tilde (diacritic)
See also
Double tilde: approximation [≈] or double negation [~~]
History
Use by medieval scribes
The tilde was originally written over an omitted letter or several letters as a scribal abbreviation, or "mark of suspension" and "mark of contraction",[3] shown as a straight line when used with capitals. Thus, the commonly used words Anno Domini were frequently abbreviated to Ao Dñi, with an elevated terminal with a suspension mark placed over the "n". Such a mark could denote the omission of one letter or several letters. This saved on the expense of the scribe's labor and the cost of vellum and ink. Medieval European charters written in Latin are largely made up of such abbreviated words with suspension marks and other abbreviations; only uncommon words were given in full.
The text of the Domesday Book of 1086, relating for example, to the manor of Molland in Devon (see adjacent picture), is highly abbreviated as indicated by numerous tildes.
The text with abbreviations expanded is as follows:
Mollande tempore regis Edwardi geldabat pro quattuor hidis et uno ferling. Terra est quadraginta carucae. In dominio sunt tres carucae et decem servi et triginta villani et viginti bordarii cum sedecim carucis. Ibi duodecim acrae prati et quindecim acrae silvae. Pastura tres leugae in longitudine et latitudine. Reddit quattuor et viginti libras ad pensam. Huic manerio est adjuncta Blachepole. Elwardus tenebat tempore regis Edwardi pro manerio et geldabat pro dimidia hida. Terra est duae carucae. Ibi sunt quinque villani cum uno servo. Valet viginti solidos ad pensam et arsuram. Eidem manerio est injuste adjuncta Nimete et valet quindecim solidos. Ipsi manerio pertinet tercius denarius de Hundredis Nortmoltone et Badentone et Brantone et tercium animal pasturae morarum.
Role of mechanical typewriters
On typewriters designed for languages that routinely use diacritics (accent marks), there are two possible solutions. Keys can be dedicated to precomposed characters or alternatively a dead key mechanism can be provided. With the latter, a mark is made when a dead key is typed, but unlike normal keys, the paper carriage does not move on and thus the next letter to be typed is printed under that accent. Typewriters for Spanish typically have a dedicated key for Ñ/ñ but, as Portuguese uses Ã/ã and Õ/õ, a single dead-key (rather than take two keys to dedicate) is the most practical solution.
The tilde symbol did not exist independently as a movable type or hot-lead printing character since the type cases for Spanish or Portuguese would include sorts for the accented forms.
The centralized ASCII tilde
Serif: —~—
Sans-serif: —~—
Monospace: —~—
A free-standing tilde between two em dashes
in three font families
The first ASCII standard (X3.4-1963) did not have a tilde.[4]: 246 Like Portuguese and Spanish, the French, German and Scandinavian languages also needed symbols in excess of the basic 26 needed for English. The ASA worked with and through the CCITT to internationalize the code-set, to meet the basic needs of at least the Western European languages.
It appears to have been at their May 13–15, 1963 meeting that the CCITT decided that the proposed ISO 7-bit code standard would be suitable for their needs if a lower case alphabet and five diacritical marks [...] were added to it.[5] At the October 29–31 meeting, then, the ISO subcommittee altered the ISO draft to meet the CCITT requirements, replacing the up-arrow and left-arrow with diacriticals, adding diacritical meanings to the apostrophe and quotation mark, and making the number sign a dual[lower-alpha 2] for the tilde.[6]
— Yucca's free information site (which cites the original sources).[7]
Thus ISO 646 was born (and the ASCII standard updated to X3.4-1967), providing the tilde and other symbols as optional characters.[4]: 247 [lower-alpha 3]
ISO 646 and ASCII incorporated many of the overprinting lower-case diacritics from typewriters, including tilde. Overprinting was intended to work by putting a backspace code between the codes for letter and diacritic.[8] However even at that time, mechanisms that could do this or any other overprinting were not widely available, did not work for capital letters, and were impossible on video displays, with the result that this concept failed to gain significant acceptance. Consequently, many of these free-standing diacritics (and the underscore) were quickly reused by software as additional syntax, basically becoming new types of syntactic symbols that a programming language could use. As this usage became predominant, type design gradually evolved so these diacritic characters became larger and more vertically centered, making them useless as overprinted diacritics but much easier to read as free-standing characters that had come to be used for entirely different and novel purposes. Most modern fonts align the plain ASCII "spacing" (free-standing) tilde at the same level as dashes, or only slightly higher.
The free-standing tilde is at code 126 in ASCII, where it was inherited into Unicode as U+007E.
A similar shaped mark (⁓) is known in typography and lexicography as a swung dash: these are used in dictionaries to indicate the omission of the entry word.[9]
Connection to Spanish
As indicated by the etymological origin of the word "tilde" in English, this symbol has been closely associated with the Spanish language. The connection stems from the use of the tilde above the letter ⟨n⟩ to form the (different) letter ⟨ñ⟩ in Spanish, a feature shared by only a few other languages, most of which are historically connected to Spanish. This peculiarity can help non-native speakers quickly identify a text as being written in Spanish with little chance of error. Particularly during the 1990s, Spanish-speaking intellectuals and news outlets demonstrated support for the language and the culture by defending this letter against globalisation and computerisation trends that threatened to remove it from keyboards and other standardised products and codes.[10][11] The Instituto Cervantes, founded by Spain's government to promote the Spanish language internationally, chose as its logo a highly stylised Ñ with a large tilde. The 24-hour news channel CNN in the US later adopted a similar strategy on its existing logo for the launch of its Spanish-language version, therefore being written as CN͠N. And similarly to the National Basketball Association (NBA), the Spain men's national basketball team is nicknamed "ÑBA".
In Spanish itself the word tilde is used more generally for diacritics, including the stress-marking acute accent.[12] The diacritic ~ is more commonly called virgulilla or la tilde de la eñe, and is not considered an accent mark in Spanish, but rather simply a part of the letter ñ (much like the dot over ı makes an i character that is familiar to readers of English).
Usage
Letters with tilde
This is a table of precomposed letters with tilde:
• Ã ã
• Ẵ ẵ
• Ẫ ẫ
• Ằ ằ
• ᵬ
• ᵭ
• Ẽ ẽ
• Ễ ễ
• Ḛ ḛ
• ᵮ
• Ĩ ĩ
• Ḭ ḭ
• ɫ
• ᵯ
• Ñ ñ
• ᵰ
• Õ õ
• Ỗ ỗ
• Ỡ ỡ
• Ṑ ṑ
• Ṍ ṍ
• Ṏ ṏ
• Ȭ ȭ
• ᵱ
• ᵳ
• ᵲ
• ꭨ
• ᵴ
• ᵵ
• Ũ ũ
• Ữ ữ
• Ṹ ṹ
• Ṵ ṵ
• Ṽ ṽ
• Ỹ ỹ
• ᵶ
A tilde diacritic can be added to almost any character by using a combining tilde.
Common use in English
The English language does not use the tilde as a diacritic, though it is used in some loanwords. The standalone form of the symbol is used more widely. Informally,[13] it means "approximately", "about", or "around", such as "~30 minutes before", meaning "approximately 30 minutes before".[14][15] It may also mean "similar to",[16] including "of the same order of magnitude as",[13] such as "x ~ y" meaning that x and y are of the same order of magnitude. Another approximation symbol is the double tilde ≈, meaning "approximately/almost equal to".[14][16][17] The tilde is also used to indicate congruence of shapes by placing it over an = symbol, thus ≅.
In more recent digital usage, tildes on either side of a word or phrase have sometimes come to convey a particular tone that "let[s] the enclosed words perform both sincerity and irony", which can pre-emptively defuse a negative reaction.[18] For example, BuzzFeed journalist Joseph Bernstein interprets the tildes in the following tweet:
"in the ~ spirit of the season ~ will now link to some of the (imho) #Bestof2014 sports reads. if you hate nice things, mute that hashtag."
as a way of making it clear that both the author and reader are aware that the enclosed phrase – "spirit of the season" – "is cliche and we know this quality is beneath our author, and we don't want you to think our author is a cliche person generally".[18][lower-alpha 4]
Among other uses, the symbol has been used in social media to indicate sarcasm.[19]
Diacritical use
In some languages, the tilde is a diacritic mark placed over a letter to indicate a change in its pronunciation:
Pitch
The tilde was first used in the polytonic orthography of Ancient Greek, as a variant of the circumflex, representing a rise in pitch followed by a return to standard pitch.
Abbreviation
Later, it was used to make abbreviations in medieval Latin documents. When an ⟨n⟩ or ⟨m⟩ followed a vowel, it was often omitted, and a tilde (physically, a small ⟨n⟩) was placed over the preceding vowel to indicate the missing letter; this is the origin of the use of the tilde to indicate nasalization (compare the development of the umlaut as an abbreviation of ⟨e⟩). The practice of using the tilde over a vowel to indicate omission of an ⟨n⟩ or ⟨m⟩ continued in printed books in French as a means of reducing text length until the 17th century. It was also used in Portuguese and Spanish.
The tilde was also used occasionally to make other abbreviations, such as over the letter ⟨q⟩, making q̃, to signify the word que ("that").
Nasalization
It is also as a small ⟨n⟩ that the tilde originated when written above other letters, marking a Latin ⟨n⟩ which had been elided in old Galician-Portuguese. In modern Portuguese it indicates nasalization of the base vowel: mão "hand", from Lat. manu-; razões "reasons", from Lat. rationes. This usage has been adopted in the orthographies of several native languages of South America, such as Guarani and Nheengatu, as well as in the International Phonetic Alphabet (IPA) and many other phonetic alphabets. For example, [ljɔ̃] is the IPA transcription of the pronunciation of the French place-name Lyon.
In Breton, the symbol ⟨ñ⟩ after a vowel means that the letter ⟨n⟩ serves only to give the vowel a nasalised pronunciation, without being itself pronounced, as it normally is. For example, ⟨an⟩ gives the pronunciation [ãn] whereas ⟨añ⟩ gives [ã].
In the DMG romanization of Tunisian Arabic, the tilde is used for nasal vowels õ and ṏ.
Palatal n
The tilded ⟨n⟩ (⟨ñ⟩, ⟨Ñ⟩) developed from the digraph ⟨nn⟩ in Spanish. In this language, ⟨ñ⟩ is considered a separate letter called eñe (IPA: [ˈeɲe]), rather than a letter-diacritic combination; it is placed in Spanish dictionaries between the letters ⟨n⟩ and ⟨o⟩. In Spanish, the word tilde actually refers to diacritics in general, e.g. the acute accent in José,[20] while the diacritic in ⟨ñ⟩ is called "virgulilla" (IPA: [birɣuˈliʝa]).[21] Current languages in which the tilded ⟨n⟩ (⟨ñ⟩) is used for the palatal nasal consonant /ɲ/ include
• Asturian
• Aymara
• Basque
• Chamorro
• Filipino
• Galician
• Guaraní
• Iñupiaq
• Mapudungun
• Papiamento
• Quechua
• Spanish
• Tetum
• Wolof
Tone
In Vietnamese, a tilde over a vowel represents a creaky rising tone (ngã). Letters with the tilde are not considered separate letters of the Vietnamese alphabet.
International Phonetic Alphabet
In phonetics, a tilde is used as a diacritic that is placed above a letter, below it or superimposed onto the middle of it:
• A tilde above a letter indicates nasalization, e.g. [ã], [ṽ].
• A tilde superimposed onto the middle of a letter indicates velarization or pharyngealization, e.g. [ɫ], [z̴]. If no precomposed Unicode character exists, the Unicode character U+0334 ◌̴ COMBINING TILDE OVERLAY can be used to generate one.
• A tilde below a letter indicates laryngealisation, e.g. [d̰]. If no precomposed Unicode character exists, the Unicode character U+0330 ◌̰ COMBINING TILDE BELOW can be used to generate one.
Letter extension
In Estonian, the symbol ⟨õ⟩ stands for the close-mid back unrounded vowel, and it is considered an independent letter.
Other uses
Some languages and alphabets use the tilde for other purposes, such as:
• Arabic script: A symbol resembling the tilde (U+0653 ـٓ ARABIC MADDAH ABOVE) is used over the letter ⟨ا⟩ (/a/) to become ⟨آ⟩, denoting a long /aː/ sound.
• Guaraní: The tilded ⟨G̃⟩ (note that ⟨G/g⟩ with tilde is not available as a precomposed glyph in Unicode) stands for the velar nasal consonant. Also, the tilded ⟨y⟩ (⟨Ỹ⟩) stands for the nasalized upper central rounded vowel [ɨ̃]. Munduruku, Parintintín, and two older spellings of Filipino words also use ⟨g̃⟩.
• Syriac script: A tilde (~) under the letter Kaph represents a [t͡ʃ] sound, transliterated as ch or č.[22]
• Estonian and Võro use the tilde above the letter o (õ) to indicate the vowel [ɤ], a rare sound among languages.
• Unicode has a combining vertical tilde character: U+033E ◌̾ COMBINING VERTICAL TILDE. It is used to indicate middle tone in linguistic transcription of certain dialects of the Lithuanian language.[23]
Punctuation
The tilde is used in various ways in punctuation, such as:
Range
In some languages (though not generally in English), a tilde or a tilde-like wave dash (Unicode: U+301C 〜 WAVE DASH) may be used as punctuation (instead of an unspaced hyphen, en dash or em dash) between two numbers, to indicate a range rather than subtraction or a hyphenated number (such as a part number or model number). For example, "12~15" means "12 to 15", "~3" means "up to three", and "100~" means "100 and greater". East Asian languages almost always use this convention, but it is often done for clarity in some other languages as well. Chinese uses the wave dash and full-width em dash interchangeably for this purpose. In English, the tilde is often used to express ranges and model numbers in electronics, but rarely in formal grammar or in type-set documents, as a wavy dash preceding a number sometimes represents an approximation (see below).
Approximation
See also: Approximation
Before a number the tilde can mean 'approximately'; '~42' means 'approximately 42'.[24] When used with currency symbols that precede the number (national conventions differ), the tilde precedes the symbol, thus for example '~$10' means 'about ten dollars'.[25]
The symbols ≈ (almost equal to) and ≅ (approximately equal to) are among the other symbols used to express approximation.
Japanese
The wave dash (波ダッシュ, nami dasshu) is used for various purposes in Japanese, including to denote ranges of numbers (e.g., 5〜10 means between 5 and 10) in place of dashes or brackets, and to indicate origin. The wave dash is also used to separate a title and a subtitle in the same line, as a colon is used in English.
When used in conversations via email or instant messenger it may be used as a sarcasm mark.
The sign is used as a replacement for the chōon, katakana character, in Japanese, extending the final syllable.
Unicode and Shift JIS encoding of wave dash
Correct JIS wave dash, current in Unicode
Previous Unicode wave dash (incorrect)
In practice the full-width tilde (全角チルダ, zenkaku chiruda) (Unicode U+FF5E ~ FULLWIDTH TILDE), is often used instead of the wave dash (波ダッシュ, nami dasshu) (Unicode U+301C 〜 WAVE DASH), because the Shift JIS code for the wave dash, 0x8160, which should be mapped to U+301C,[26][27] is instead mapped to U+FF5E[28] in Windows code page 932 (Microsoft's code page for Japanese), a widely used extension of Shift JIS.
This decision avoided a shape definition error in the original (6.2) Unicode code charts:[29] the wave dash reference glyph in JIS / Shift JIS[30][31] matches the Unicode reference glyph for U+FF5E FULLWIDTH TILDE,[32] while the original reference glyph for U+301C[29] was reflected, incorrectly,[33] when Unicode imported the JIS wave dash. In other platforms such as the classic Mac OS and macOS, 0x8160 is correctly mapped to U+301C. It is generally difficult, if not impossible, for users of Japanese Windows to type U+301C, especially in legacy, non-Unicode applications.
A similar situation exists regarding the Korean KS X 1001 character set, in which Microsoft maps the EUC-KR or UHC code for the wave dash (0xA1AD) to U+223C ∼ TILDE OPERATOR,[34][35] while IBM and Apple map it to U+301C.[36][37][38] Microsoft also uses U+FF5E to map the KS X 1001 raised tilde (0xA2A6),[35] while Apple uses U+02DC ˜ SMALL TILDE.[38]
The current Unicode reference glyph for U+301C has been corrected[33] to match the JIS standard[39] in response to a 2014 proposal, which noted that while the existing Unicode reference glyph had been matched by fonts from the discontinued Windows XP, all other major platforms including later versions of Microsoft Windows shipped with fonts matching the JIS reference glyph for U+301C.[40]
The JIS / Shift JIS wave dash is still formally mapped to U+301C as of JIS X 0213,[41] whereas the WHATWG Encoding Standard used by HTML5 follows Microsoft in mapping 0x8160 to U+FF5E.[42] These two code points have a similar or identical glyph in several fonts, reducing the confusion and incompatibility.
As a unary operator
A tilde in front of a single quantity can mean "approximately", "about"[14] or "of the same order of magnitude as."
In written mathematical logic, the tilde represents negation: "~p" means "not p", where "p" is a proposition. Modern use often replaces the tilde with the negation symbol (¬) for this purpose, to avoid confusion with equivalence relations.
As a relational operator
In mathematics, the tilde operator (which can be represented by a tilde or the dedicated character U+223C ∼ TILDE OPERATOR), sometimes called "twiddle", is often used to denote an equivalence relation between two objects. Thus "x ~ y" means "x is equivalent to y". It is a weaker statement than stating that x equals y. The expression "x ~ y" is sometimes read aloud as "x twiddles y", perhaps as an analogue to the verbal expression of "x = y".[43]
The tilde can indicate approximate equality in a variety of ways. It can be used to denote the asymptotic equality of two functions. For example, f (x) ~ g(x) means that $\lim _{x\to \infty }{\frac {f(x)}{g(x)}}=1$.[13]
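This asymptotic sense of the tilde can be checked numerically; in the sketch below the functions f and g are chosen purely for illustration:

```python
# Numerically illustrate f(x) ~ g(x): the ratio f(x)/g(x) tends to 1
# as x grows, even though the difference f(x) - g(x) diverges.
def f(x):
    return x**2 + x   # example function, chosen only for this demo

def g(x):
    return x**2

for x in (10, 1_000, 100_000):
    print(x, f(x) / g(x))   # ratios approach 1
```

Note that f(x) - g(x) = x still grows without bound, so "~" is a much weaker statement than equality.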
A tilde is also used to indicate "approximately equal to" (e.g. 1.902 ~= 2). This usage probably developed as a typed alternative to the libra symbol used for the same purpose in written mathematics, which is an equal sign with the upper bar replaced by a bar with an upward hump, bump, or loop in the middle (︍︍♎︎) or, sometimes, a tilde (≃). The symbol "≈" is also used for this purpose.
In physics and astronomy, a tilde can be used between two expressions (e.g. h ~ 10−34 J s) to state that the two are of the same order of magnitude.[13]
In statistics and probability theory, the tilde means "is distributed as";[13] see random variable (e.g. X ~ B(n,p) for a binomial distribution).
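As an illustrative sketch (the parameters n = 20 and p = 0.3 are arbitrary), a value X ~ B(n,p) can be drawn in plain Python by summing Bernoulli trials:

```python
import random

random.seed(0)  # make the sketch reproducible

def binomial_sample(n, p):
    """Draw one value X ~ B(n, p) by summing n Bernoulli(p) trials."""
    return sum(random.random() < p for _ in range(n))

samples = [binomial_sample(20, 0.3) for _ in range(10_000)]
mean = sum(samples) / len(samples)
print(mean)  # close to the theoretical mean n*p = 6
```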
A tilde can also be used to represent geometric similarity (e.g. ∆ABC ~ ∆DEF, meaning triangle ABC is similar to DEF). A triple tilde (≋) is often used to show congruence, an equivalence relation in geometry.
In graph theory, the tilde can be used to represent adjacency between vertices. The edge $(x,y)$ connects vertices $x$ and $y$ which can be said to be adjacent, and this adjacency can be denoted $x\sim y$.
As a diacritic
The symbol "${\tilde {f}}$" is pronounced as "eff tilde" or, informally, as "eff twiddle".[44][45] This can be used to denote the Fourier transform of f, or a lift of f, and can have a variety of other meanings depending on the context.
A tilde placed below a letter in mathematics can represent a vector quantity (e.g. $(x_{1},x_{2},x_{3},\ldots ,x_{n})={\underset {^{\sim }}{\mathbf {x} }}$).
In statistics and probability theory, a tilde placed on top of a variable is sometimes used to represent the median of that variable; thus ${\tilde {\mathbf {y} }}$ would indicate the median of the variable $\mathbf {y} $. A tilde over the letter n (${\tilde {n}}$) is sometimes used to indicate the harmonic mean.
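Both quantities happen to be available directly in Python's standard library; a small sketch (the data set is made up for illustration):

```python
from statistics import median, harmonic_mean

# An arbitrary sample; median(y) is the quantity often written with a
# tilde over the variable, harmonic_mean(y) the one sometimes written ñ.
y = [2, 4, 4, 4, 5, 5, 7, 9]
print(median(y))         # 4.5 (mean of the two middle values)
print(harmonic_mean(y))  # len(y) divided by the sum of reciprocals
```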
In machine learning, a tilde may represent a candidate value for a cell state in GRUs or LSTM units. (e.g. c̃)
Physics
Often in physics, one can consider an equilibrium solution to an equation, and then a perturbation to that equilibrium. For the variables in the original equation (for instance $X$) a substitution $X\to x+{\tilde {x}}$ can be made, where $x$ is the equilibrium part and ${\tilde {x}}$ is the perturbed part.
A tilde is also used in particle physics to denote the hypothetical supersymmetric partner. For example, an electron is referred to by the letter e, and its superpartner the selectron is written ẽ.
In multibody mechanics, the tilde operator maps three-dimensional vectors ${\boldsymbol {\omega }}\in \mathbb {R} ^{3}$ to skew-symmetrical matrices ${\tilde {\boldsymbol {\omega }}}={\begin{bmatrix}0&-\omega _{3}&\omega _{2}\\\omega _{3}&0&-\omega _{1}\\-\omega _{2}&\omega _{1}&0\end{bmatrix}}$ (see [46] or [47]).
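A minimal Python sketch of this map (the helper names tilde, matvec, and cross are invented here for illustration) verifies the defining property that the matrix product ω̃v equals the cross product ω × v:

```python
# The multibody-mechanics tilde operator: map a 3-vector w to the
# skew-symmetric matrix w~ such that (w~) v == w x v for every v.
def tilde(w):
    w1, w2, w3 = w
    return [[0.0, -w3,  w2],
            [w3,  0.0, -w1],
            [-w2, w1,  0.0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0]]

w = [1.0, 2.0, 3.0]
v = [4.0, 5.0, 6.0]
print(matvec(tilde(w), v))  # same as cross(w, v)
```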
Economics
For relations involving preference, economists sometimes use the tilde to represent indifference between two or more bundles of goods. For example, to say that a consumer is indifferent between bundles x and y, an economist would write x ~ y.
Electronics
It can approximate the sine wave symbol (∿, U+223F), which is used in electronics to indicate alternating current, in place of +, −, or ⎓ for direct current.
Linguistics
The tilde may indicate alternating allomorphs or morphological alternation, as in //ˈniː~ɛl+t// for kneel~knelt (the plus sign '+' indicates a morpheme boundary).[48][49]
The tilde may represent some sort of phonetic or phonemic variation between two sounds, which might be allophones or in free variation. For example, [χ ~ x] can represent "either [χ] or [x]".
In formal semantics, it is also used as a notation for the squiggle operator which plays a key role in many theories of focus.[50]
Computing
Computer programmers use the tilde in various ways and sometimes call the symbol (as opposed to the diacritic) a squiggle, squiggly, swiggle, or twiddle. According to the Jargon File, other synonyms sometimes used in programming include not, approx, wiggle, enyay (after eñe) and (humorously) sqiggle /ˈskɪɡəl/.
Directories and URLs
On Unix-like operating systems (including AIX, BSD, Linux and macOS), tilde normally indicates the current user's home directory. For example, if the current user's home directory is /home/user, then the command cd ~ is equivalent to cd /home/user, cd $HOME, or cd. This convention derives from the Lear-Siegler ADM-3A terminal in common use during the 1970s, which happened to have the tilde symbol and the word "Home" (for moving the cursor to the upper left) on the same key. When prepended to a particular username, the tilde indicates that user's home directory (e.g., ~janedoe for the home directory of user janedoe, such as /home/janedoe).[51]
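The same expansion is exposed to programs in, for example, Python's standard library as os.path.expanduser; a small sketch (the concrete output depends on the current user's environment):

```python
import os.path

# Expand a leading tilde the way a shell does.
print(os.path.expanduser("~"))           # e.g. /home/user
print(os.path.expanduser("~/notes.txt")) # e.g. /home/user/notes.txt

# Only a *leading* tilde is special; elsewhere it is left alone:
print(os.path.expanduser("/tmp/~file"))  # /tmp/~file, unchanged
```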
Used in URLs on the World Wide Web, it often denotes a personal website on a Unix-based server. For example, http://www.example.com/~johndoe/ might be the personal website of John Doe. This mimics the Unix shell usage of the tilde. However, when accessed from the web, file access is usually directed to a subdirectory in the user's home directory, such as /home/username/public_html or /home/username/www.[52]
In URLs, the characters %7E (or %7e) may substitute for a tilde if an input device lacks a tilde key.[53] Thus, http://www.example.com/~johndoe/ and http://www.example.com/%7Ejohndoe/ will behave in the same manner.
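This equivalence can be checked with Python's urllib.parse (a sketch, reusing the illustrative URL above; note that since RFC 3986 the tilde is an "unreserved" character, so quote() leaves it unescaped by default):

```python
from urllib.parse import quote, unquote

# %7E decodes to a tilde...
print(unquote("http://www.example.com/%7Ejohndoe/"))
# ...and encoding a path leaves the tilde as-is, because it is
# an unreserved character under RFC 3986.
print(quote("/~johndoe/"))
```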
Computer languages
The tilde is used in the AWK programming language as part of the pattern match operators for regular expressions:
• variable ~ /regex/ returns true if the variable is matched.
• variable !~ /regex/ returns true if the variable is not matched.
A variant of this, with the plain tilde replaced with =~, was adopted in Perl, and this semi-standardization has led to the use of these operators in other programming languages, such as Ruby or the SQL variant of the database PostgreSQL.
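Python has no infix match operator, but re.search plays the same role; a rough sketch of the AWK/Perl idiom (the variable and patterns are chosen for illustration):

```python
import re

variable = "tilde ~ example"

# Like AWK's:  variable ~ /~/
print(bool(re.search(r"~", variable)))    # True: the string contains a tilde

# Like AWK's:  variable !~ /[0-9]+/
print(re.search(r"\d+", variable) is None)  # True: no digits present
```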
In APL and MATLAB, tilde represents the monadic logical function NOT, and in APL it additionally represents the dyadic multiset function without (set difference).
In C the tilde character is used as bitwise NOT unary operator, following the notation in logic (an ! causes a logical NOT, instead). This is also used by most languages based on or influenced by C, such as C++, D and C#. The MySQL database also use tilde as bitwise invert[54] as does Microsoft's SQL Server Transact-SQL (T-SQL) language. JavaScript also uses tilde as bitwise NOT, and because JavaScript internally uses floats and the bitwise complement only works on integers, numbers are stripped of their decimal part before applying the operation. This has also given rise to using two tildes ~~x as a short syntax for a cast to integer (numbers are stripped of their decimal part and changed into their complement, and then back).
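Python behaves like C here; a short sketch showing the ~x == -x - 1 identity of two's-complement negation, and that (unlike JavaScript) Python's ~ rejects floats rather than truncating them:

```python
x = 5
print(~x)    # -6, since ~x == -x - 1 in two's complement
print(~~x)   # applying ~ twice gives back 5

try:
    ~2.7     # unlike JavaScript, ~ is undefined for floats in Python
except TypeError:
    print("floats are rejected")
```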
In C++ and C#, the tilde is also used as the first character in a class's method name (where the rest of the name must be the same name as the class) to indicate a destructor – a special method which is called at the end of the object's life.
In ASP.NET application tilde ('~') is used as a shortcut to the root of the application's virtual directory.
In the CSS stylesheet language, the tilde is used for the indirect adjacent combinator as part of a selector.
In the D programming language, the tilde is used as the array concatenation operator, as well as to indicate an object destructor and the bitwise not operator. The tilde operator can be overloaded for user-defined types, where the binary form is mostly used for merging two objects or adding an object to a set. It was introduced because the plus operator can be ambiguous: what should "120" + "14" produce? Is it "134" (addition of the two numbers) or "12014" (concatenation of the strings), or something else? D disallows the + operator for arrays (and strings) and provides a separate operator for concatenation. (The PHP programming language solved the same problem by using the dot operator for concatenation and + for numeric addition, which also works on strings containing numbers.)
In Eiffel, the tilde is used for object comparison. If a and b denote objects, the boolean expression a ~ b has value true if and only if these objects are equal, as defined by the applicable version of the library routine is_equal, which by default denotes field-by-field object equality but can be redefined in any class to support a specific notion of equality. If a and b are references, the object equality expression a ~ b is to be contrasted with a = b which denotes reference equality. Unlike the call a.is_equal (b), the expression a ~ b is type-safe even in the presence of covariance.
In the Apache Groovy programming language the tilde character is used as an operator mapped to the bitwiseNegate() method.[55] Given a String the method will produce a java.util.regex.Pattern. Given an integer it will negate the integer bitwise like in C. =~ and ==~ can in Groovy be used to match a regular expression.[56][57]
In Haskell, the tilde is used in type constraints to indicate type equality.[58] Also, in pattern-matching, the tilde is used to indicate a lazy pattern match.[59]
In the Inform programming language, the tilde is used to indicate a quotation mark inside a quoted string.
In "text mode" of the LaTeX typesetting language a tilde diacritic can be obtained using, e.g., \~{n}, yielding "ñ". A stand-alone tilde can be obtained by using \textasciitilde or \string~. In "math mode" a tilde diacritic can be written as, e.g., \tilde{x}. For a wider tilde \widetilde can be used. The \sim command produces a tilde-like binary relation symbol that is often used in mathematical expressions, and the double tilde ≈ is obtained with \approx. The url package also supports entering tildes directly, e.g., \url{http://server/~name}. In both text and math mode, a tilde on its own (~) renders a white space with no line breaking.
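For illustration, the commands named above can be collected into one small LaTeX fragment (assuming the url package is loaded for \url):

```latex
ma\~{n}ana                 % text-mode diacritic: manana with tilded n
\textasciitilde           % free-standing ASCII tilde in text mode
$\tilde{x}$               % math-mode tilde accent
$\widetilde{abc}$         % wider math-mode tilde accent
$a \sim b$                % tilde-like binary relation symbol
$x \approx y$             % double tilde, "almost equal to"
\url{http://server/~name} % literal tilde via the url package
A~B                       % non-breaking space between A and B
```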
In MediaWiki syntax, four tildes are used as a shortcut for a user's signature.
In Common Lisp, the tilde is used as the prefix for format specifiers in format strings.[60]
In Max/MSP, a tilde is used to denote objects that process at the computer's sampling rate, i.e. mainly those that deal with sound.
In Standard ML, the tilde is used as the prefix for negative numbers and as the unary negation operator.
In OCaml, the tilde is used to specify the label for a labeled parameter.
In R, the tilde operator is used to separate the left- and right-hand sides in a model formula.[61]
In Object REXX, the twiddle is used as a "message send" symbol. For example, Employee.name~lower() would cause the lower() method to act on the object Employee's name attribute, returning the result of the operation. ~~ returns the object that received the method rather than the result produced. Thus it can be used when the result need not be returned or when cascading methods are to be used. team~~insert("Jane")~~insert("Joe")~~insert("Steve") would send multiple insert messages, invoking the insert method three consecutive times on the team object.
In Raku, ~~ is used instead of =~ for a regular expression. Because the dot operator is used for member access instead of ->, concatenation is done with a single tilde.
my $concatResult = "Hello " ~ "world!";
$concatResult ~~ /<|w><[A..Z]><[a..z]>*<|w>/;
say $/; # outputs "Hello"
# the $/ variable holds the last regex match result
Keyboards
The presence (or absence) of a tilde engraved on the keyboard depends on the territory where it was sold. In either case, the computer's system settings determine the keyboard mapping, and the default setting will match the engravings on the keys. Even so, it is certainly possible to configure a keyboard for a different locale than that supplied by the retailer. On American and British keyboards, the tilde is a standard keytop and pressing it produces a free-standing "ASCII tilde". To generate a letter with a tilde diacritic requires the US international or UK extended keyboard setting.
• With US-international, the `/~ key is a dead key: pressing the ~ key and then a letter produces the tilde-accented form of that letter. (For example, ~ a produces ã.) With this setting active, an ASCII tilde can be inserted with the dead key followed by the space bar, or alternatively by striking the dead key twice in a row.
• With UK-extended, the key works normally but becomes a 'dead key' when combined with AltGr. Thus AltGr+# followed by a letter produces the accented form of that letter.
• With a Mac either of the Alt/Option keys function similarly.
• With Linux, the compose key facility is used.
Instructions for other national languages and keyboards are beyond the scope of this article.
In the US and European Windows systems, the Alt code for a single tilde is 126.
Backup filenames
The dominant Unix convention for naming backup copies of files is appending a tilde to the original file name. It originated with the Emacs text editor[62] and was adopted by many other editors and some command-line tools.
Emacs also introduced an elaborate numbered backup scheme, with files named filename.~1~, filename.~2~ and so on. It did not catch on, as the rise of version control software eliminated the need for this usage.
Microsoft filenames
The tilde was part of Microsoft's filename mangling scheme when it extended the FAT file system standard to support long filenames for Microsoft Windows. Programs written prior to this development could only access filenames in the so-called 8.3 format—the filenames consisted of a maximum of eight characters from a restricted character set (e.g. no spaces), followed by a period, followed by three more characters. In order to permit these legacy programs to access files in the FAT file system, each file had to be given two names—one long, more descriptive one, and one that conformed to the 8.3 format. This was accomplished with a name-mangling scheme in which the first six characters of the filename are followed by a tilde and a digit. For example, "Program Files" might become "PROGRA~1".
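A highly simplified sketch of this mangling shape follows; the function short_name is invented here for illustration, and the real Windows algorithm also strips additional characters, resolves collisions beyond ~9, and may append hash-based suffixes:

```python
# Simplified sketch of 8.3 short-name generation ("PROGRA~1" style):
# keep up to six alphanumeric characters, append a tilde and an index,
# and truncate the extension to three characters.
def short_name(long_name, index=1):
    name, _, ext = long_name.rpartition(".")
    if not name:                  # no dot in the name at all
        name, ext = long_name, ""
    base = "".join(c for c in name.upper() if c.isalnum())[:6]
    short = f"{base}~{index}"
    return f"{short}.{ext.upper()[:3]}" if ext else short

print(short_name("Program Files"))       # PROGRA~1
print(short_name("letter to mom.docx"))  # LETTER~1.DOC
```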
The tilde symbol is also often used to prefix hidden temporary files that are created when a document is opened in Windows. For example, when a document "Document1.doc" is opened in Word, a file called "~$cument1.doc" is created in the same directory. This file contains information about which user has the file open, to prevent multiple users from attempting to change a document at the same time.
Juggling notation
In the juggling notation system Beatmap, tilde can be added to either "hand" in a pair of fields to say "cross the arms with this hand on top". Mills Mess is thus represented as (~2x,1)(1,2x)(2x,~1)*.[63]
Unicode
Variants and similars
Unicode has code-points for many forms of non-combined tilde, for symbols incorporating tildes, and for characters visually similar to a tilde.
Character | Code point | Name | Comments
~ | U+007E | TILDE | The keyboard tilde. Center-height alignment.
˜ | U+02DC | SMALL TILDE | A spacing version of the combining tilde diacritic.
˷ | U+02F7 | MODIFIER LETTER LOW TILDE | A spacing version of the combining tilde diacritic.
◌̃ | U+0303 | COMBINING TILDE | Used in IPA to indicate a nasal vowel.
◌̰ | U+0330 | COMBINING TILDE BELOW | Used in IPA to indicate creaky voice.
◌̴ | U+0334 | COMBINING TILDE OVERLAY | Overstrikes the preceding letter. Used in IPA to indicate velarization or pharyngealization.
◌̾ | U+033E | COMBINING VERTICAL TILDE |
◌͂ | U+0342 | COMBINING GREEK PERISPOMENI | Used as an Ancient Greek accent under the name "circumflex"; it can also be written as an inverted breve.
◌͊ | U+034A | COMBINING NOT TILDE ABOVE | Used in extIPA to indicate a denasalization.
◌͋ | U+034B | COMBINING HOMOTHETIC ABOVE | Used in extIPA to indicate a nareal fricative.
◌͠◌ | U+0360 | COMBINING DOUBLE TILDE | A diacritic that applies to a pair of letters.
◌֘ | U+0598 | HEBREW ACCENT ZARQA | Hebrew cantillation mark.
◌֮ | U+05AE | HEBREW ACCENT ZINOR | Hebrew cantillation mark.
◌᷉ | U+1DC9 | COMBINING ACUTE-GRAVE-ACUTE | Used in IPA as a tone mark.
⁓ | U+2053 | SWUNG DASH | A punctuation mark.
∼ | U+223C | TILDE OPERATOR | Used in mathematics. In-line. Ends not curved as much.
∽ | U+223D | REVERSED TILDE | In some fonts it is the tilde's simple mirror image; others extend the tips to resemble a ᔕ, or an open ∞.
∿ | U+223F | SINE WAVE | Used in electronics to indicate alternating current, in place of +, −, or ⎓ for direct current.
≁ | U+2241 | NOT TILDE |
≂ | U+2242 | MINUS TILDE |
≃ | U+2243 | ASYMPTOTICALLY EQUAL TO |
≄ | U+2244 | NOT ASYMPTOTICALLY EQUAL TO |
≅ | U+2245 | APPROXIMATELY EQUAL TO |
≆ | U+2246 | APPROXIMATELY BUT NOT ACTUALLY EQUAL TO |
≇ | U+2247 | NEITHER APPROXIMATELY NOR ACTUALLY EQUAL TO |
≈ | U+2248 | ALMOST EQUAL TO |
≉ | U+2249 | NOT ALMOST EQUAL TO |
≊ | U+224A | ALMOST EQUAL OR EQUAL TO |
≋ | U+224B | TRIPLE TILDE |
≌ | U+224C | ALL EQUAL TO |
⋍ | U+22CD | REVERSED TILDE EQUALS |
⍨ | U+2368 | APL FUNCTIONAL SYMBOL TILDE DIAERESIS |
⍫ | U+236B | APL FUNCTIONAL SYMBOL DEL TILDE |
⍭ | U+236D | APL FUNCTIONAL SYMBOL STILE TILDE |
⍱ | U+2371 | APL FUNCTIONAL SYMBOL DOWN CARET TILDE |
⍲ | U+2372 | APL FUNCTIONAL SYMBOL UP CARET TILDE |
⥲ | U+2972 | TILDE OPERATOR ABOVE RIGHTWARDS ARROW |
⥳ | U+2973 | LEFTWARDS ARROW ABOVE TILDE OPERATOR |
⥴ | U+2974 | RIGHTWARDS ARROW ABOVE TILDE OPERATOR |
⧤ | U+29E4 | EQUALS SIGN AND SLANTED PARALLEL WITH TILDE ABOVE |
⨤ | U+2A24 | PLUS SIGN WITH TILDE ABOVE |
⨦ | U+2A26 | PLUS SIGN WITH TILDE BELOW |
⩪ | U+2A6A | TILDE OPERATOR WITH DOT ABOVE |
⩫ | U+2A6B | TILDE OPERATOR WITH RISING DOTS |
⩳ | U+2A73 | EQUALS SIGN ABOVE TILDE OPERATOR |
⫇ | U+2AC7 | SUBSET OF ABOVE TILDE OPERATOR |
⫈ | U+2AC8 | SUPERSET OF ABOVE TILDE OPERATOR |
⫳ | U+2AF3 | PARALLEL WITH TILDE OPERATOR |
⭁ | U+2B41 | REVERSE TILDE OPERATOR ABOVE LEFTWARDS ARROW |
⭇ | U+2B47 | REVERSE TILDE OPERATOR ABOVE RIGHTWARDS ARROW |
⭉ | U+2B49 | TILDE OPERATOR ABOVE LEFTWARDS ARROW |
⭋ | U+2B4B | LEFTWARDS ARROW ABOVE REVERSE TILDE OPERATOR |
⭌ | U+2B4C | RIGHTWARDS ARROW ABOVE REVERSE TILDE OPERATOR |
⸛ | U+2E1B | TILDE WITH RING ABOVE |
⸞ | U+2E1E | TILDE WITH DOT ABOVE |
⸟ | U+2E1F | TILDE WITH DOT BELOW |
ⸯ | U+2E2F | VERTICAL TILDE |
〜 | U+301C | WAVE DASH | Used in Japanese punctuation.
〰 | U+3030 | WAVY DASH |
◌︢ | U+FE22 | COMBINING DOUBLE TILDE LEFT HALF |
◌︣ | U+FE23 | COMBINING DOUBLE TILDE RIGHT HALF |
◌︩ | U+FE29 | COMBINING TILDE LEFT HALF BELOW |
◌︪ | U+FE2A | COMBINING TILDE RIGHT HALF BELOW |
﹋ | U+FE4B | WAVY OVERLINE |
﹏ | U+FE4F | WAVY LOW LINE |
~ | U+FF5E | FULLWIDTH TILDE | Em wide. In-line. Ends not curved much.
~ | U+E007E | TAG TILDE | Formatting tag control character.
Precomposed characters
A number of characters in Unicode are precomposed with a tilde.
Unicode precomposed characters with tilde diacritic
Letter Code point Name
Ẵ U+1EB4 LATIN CAPITAL LETTER A WITH BREVE AND TILDE
ẵ U+1EB5 LATIN SMALL LETTER A WITH BREVE AND TILDE
Ẫ U+1EAA LATIN CAPITAL LETTER A WITH CIRCUMFLEX AND TILDE
ẫ U+1EAB LATIN SMALL LETTER A WITH CIRCUMFLEX AND TILDE
Ã U+00C3 LATIN CAPITAL LETTER A WITH TILDE
ã U+00E3 LATIN SMALL LETTER A WITH TILDE
ᵬ U+1D6C LATIN SMALL LETTER B WITH MIDDLE TILDE
ᵭ U+1D6D LATIN SMALL LETTER D WITH MIDDLE TILDE
Ễ U+1EC4 LATIN CAPITAL LETTER E WITH CIRCUMFLEX AND TILDE
ễ U+1EC5 LATIN SMALL LETTER E WITH CIRCUMFLEX AND TILDE
Ḛ U+1E1A LATIN CAPITAL LETTER E WITH TILDE BELOW
ḛ U+1E1B LATIN SMALL LETTER E WITH TILDE BELOW
Ẽ U+1EBC LATIN CAPITAL LETTER E WITH TILDE
ẽ U+1EBD LATIN SMALL LETTER E WITH TILDE
ᵮ U+1D6E LATIN SMALL LETTER F WITH MIDDLE TILDE
Ḭ U+1E2C LATIN CAPITAL LETTER I WITH TILDE BELOW
ḭ U+1E2D LATIN SMALL LETTER I WITH TILDE BELOW
Ĩ U+0128 LATIN CAPITAL LETTER I WITH TILDE
ĩ U+0129 LATIN SMALL LETTER I WITH TILDE
Ɫ U+2C62 LATIN CAPITAL LETTER L WITH MIDDLE TILDE
ɫ U+026B LATIN SMALL LETTER L WITH MIDDLE TILDE
ꭞ U+AB5E MODIFIER LETTER SMALL L WITH MIDDLE TILDE
ꬸ U+AB38 LATIN SMALL LETTER L WITH DOUBLE MIDDLE TILDE
ᷬ U+1DEC COMBINING LATIN SMALL LETTER L WITH DOUBLE MIDDLE TILDE
ᵯ U+1D6F LATIN SMALL LETTER M WITH MIDDLE TILDE
ᵰ U+1D70 LATIN SMALL LETTER N WITH MIDDLE TILDE
Ñ U+00D1 LATIN CAPITAL LETTER N WITH TILDE
ñ U+00F1 LATIN SMALL LETTER N WITH TILDE
Ỗ U+1ED6 LATIN CAPITAL LETTER O WITH CIRCUMFLEX AND TILDE
ỗ U+1ED7 LATIN SMALL LETTER O WITH CIRCUMFLEX AND TILDE
Ỡ U+1EE0 LATIN CAPITAL LETTER O WITH HORN AND TILDE
ỡ U+1EE1 LATIN SMALL LETTER O WITH HORN AND TILDE
Ṍ U+1E4C LATIN CAPITAL LETTER O WITH TILDE AND ACUTE
ṍ U+1E4D LATIN SMALL LETTER O WITH TILDE AND ACUTE
Ṏ U+1E4E LATIN CAPITAL LETTER O WITH TILDE AND DIAERESIS
ṏ U+1E4F LATIN SMALL LETTER O WITH TILDE AND DIAERESIS
Ȭ U+022C LATIN CAPITAL LETTER O WITH TILDE AND MACRON
ȭ U+022D LATIN SMALL LETTER O WITH TILDE AND MACRON
Õ U+00D5 LATIN CAPITAL LETTER O WITH TILDE
õ U+00F5 LATIN SMALL LETTER O WITH TILDE
ᵱ U+1D71 LATIN SMALL LETTER P WITH MIDDLE TILDE
ᵳ U+1D73 LATIN SMALL LETTER R WITH FISHHOOK AND MIDDLE TILDE
ᵲ U+1D72 LATIN SMALL LETTER R WITH MIDDLE TILDE
ꭨ U+AB68 LATIN SMALL LETTER TURNED R WITH MIDDLE TILDE
ᵴ U+1D74 LATIN SMALL LETTER S WITH MIDDLE TILDE
ᵵ U+1D75 LATIN SMALL LETTER T WITH MIDDLE TILDE
Ữ U+1EEE LATIN CAPITAL LETTER U WITH HORN AND TILDE
ữ U+1EEF LATIN SMALL LETTER U WITH HORN AND TILDE
Ṹ U+1E78 LATIN CAPITAL LETTER U WITH TILDE AND ACUTE
ṹ U+1E79 LATIN SMALL LETTER U WITH TILDE AND ACUTE
Ṵ U+1E74 LATIN CAPITAL LETTER U WITH TILDE BELOW
ṵ U+1E75 LATIN SMALL LETTER U WITH TILDE BELOW
Ũ U+0168 LATIN CAPITAL LETTER U WITH TILDE
ũ U+0169 LATIN SMALL LETTER U WITH TILDE
Ṽ U+1E7C LATIN CAPITAL LETTER V WITH TILDE
ṽ U+1E7D LATIN SMALL LETTER V WITH TILDE
Ỹ U+1EF8 LATIN CAPITAL LETTER Y WITH TILDE
ỹ U+1EF9 LATIN SMALL LETTER Y WITH TILDE
ᵶ U+1D76 LATIN SMALL LETTER Z WITH MIDDLE TILDE
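Each precomposed form above is canonically equivalent to its base letter followed by U+0303 COMBINING TILDE. As an illustration (not part of the Unicode table itself), a short Python sketch using only the standard-library unicodedata module shows the round trip between the two representations:

```python
import unicodedata

# NFD decomposes the precomposed letter n-with-tilde (U+00F1)
# into its base letter plus U+0303 COMBINING TILDE.
decomposed = unicodedata.normalize("NFD", "\u00F1")
print([unicodedata.name(c) for c in decomposed])
# ['LATIN SMALL LETTER N', 'COMBINING TILDE']

# NFC recomposes the two-character sequence back into the
# single precomposed code point.
recomposed = unicodedata.normalize("NFC", "n\u0303")
print(f"U+{ord(recomposed):04X}")  # U+00F1
```

The same normalization calls work for any of the letters in the table that have a canonical decomposition.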
See also
• Circumflex
• Caret (computing)
• Tittle
• Double tilde (disambiguation)
• Backtick
Notes
1. Several more or less common informal names are used for the tilde that usually describe the shape, including squiggly, squiggle(s), and flourish.
2. alternative association for the same code point
3. ISO 646 (and ASCII, which it includes) is a standard for 7-bit encoding, providing just 96 printable characters (and 32 control characters). This was insufficient to meet the needs of Western European languages and so the standard specifies certain code points that are available for national variation. With the arrival of 8-bit "extended ASCII", this issue was largely mitigated, though not fully resolved until Unicode was established.
4. See also Air quotes.
References
1. "tilde". The Chambers Dictionary (9th ed.). Chambers. 2003. ISBN 0-550-10105-5.
2. "tilde". The American Heritage Dictionary of the English Language (5th ed.). HarperCollins.
3. Martin, Charles Trice (1910). The Record Interpreter: A Collection of Abbreviations, Latin Words and Names Used in English Historical Manuscripts and Records (2nd ed.). London. Preface, p. 5.
4. Mackenzie, Charles E. (1980). Coded Character Sets, History and Development (PDF). The Systems Programming Series (1 ed.). Addison-Wesley Publishing Company, Inc. ISBN 978-0-201-14460-4. LCCN 77-90165. Archived (PDF) from the original on May 26, 2016. Retrieved August 25, 2019.
5. "Meeting of CCITT Working Party on the New Telegraph Alphabet". CCITT. 15 May 1963. See Paragraph 3.
6. L. L. Griffin, Chairman, X3.2 (29 November 1963). "Memorandum to Members, Alternates, and Consultants of A.S.A. X3.2 and task groups". US Department of the Navy. p. 8.
7. "Character histories: notes on some ASCII code positions".
8. "Second ISO draft proposal – 6 and 7 bit character codes for information processing interchange". ISO. December 1963. See paragraph 2.
9. "Swung dash", WordNet (search) (3.0 ed.)
10. "26 argumentos para seguir defendiendo la Ñ". La Razón. 11 January 2011. Retrieved 31 January 2016.
11. AFP (18 November 2004). "Batalla de la Ñ: Una aventura quijotesca para defender el alma de la lengua". Periódico ABC Paraguay. Retrieved 31 January 2016.
12. Diccionario de la lengua española, Real Academia Española
13. "Tilde". Wolfram/MathWorld. 3 November 2011. Retrieved 11 November 2011.
14. "All Elementary Mathematics – Mathematical symbols dictionary". Bymath. Archived from the original on 2 May 2015. Retrieved 25 September 2014.
15. "Character design standards - Maths". Microsoft.
16. Quinn, Liam. "HTML 4.0 Entities for Symbols and Greek Letters". HTML help. Retrieved 11 November 2011.
17. "Math Symbols... Those Most Valuable and Important: Approximately Equal Symbol". Solving Math problems. 20 September 2010. Retrieved 11 November 2011.
18. Bernstein, Joseph. "The Hidden Language of The ~Tilde~". BuzzFeed News.
19. Jess Kimball Leslie (5 June 2017). "The Internet Tilde Perfectly Conveys Something We Don't Have the Words to Explain". The Cut.
20. Ortografía de la lengua española. Madrid: Real Academia Española. 2010. p. 279. ISBN 978-84-670-3426-4.
21. "Lema en la RAE". Real Academia Española. Retrieved 10 October 2015.
22. Nestle, Eberhard (1888). Syrische Grammatik mit Litteratur, Chrestomathie und Glossar. Berlin: H. Reuther's Verlagsbuchhandlung. [translated to English as Syriac grammar with bibliography, chrestomathy and glossary, by R. S. Kennedy. London: Williams & Norgate 1889. p. 5].
23. Lithuanian Standards Board (LST), proposal for a zigzag diacritic
24. "Tilde Definition". linfo.org. The Linux Information Project. 24 June 2005. Retrieved 27 January 2020.
25. "Using a tilde with currency".
26. "Appendix 1: Shift_JIS-2004 vs Unicode mapping table", JIS X 0213:2004, X 0213.
27. Shift-JIS to Unicode, Unicode.
28. "Windows 932_81". Microsoft. Retrieved 30 July 2010.
29. CJK Symbols and Punctuation (Unicode 6.2) (PDF) (chart), Unicode, archived from the original (PDF) on 27 August 2013.
30. Japanese National Committee on ISO/TC97/SC2. ISO-IR-87: Japanese Graphic Character Set for Information Interchange (PDF). ITSCJ/IPSJ.
31. Japanese Industrial Standards Committee. ISO-IR-233: Japanese Graphic Character Set for Information Interchange, Plane 1 (Update of ISO-IR 228) (PDF). ITSCJ/IPSJ.
32. Halfwidth and Fullwidth Forms (PDF) (chart), Unicode.
33. Errata Fixed in Unicode 8.0.0, Unicode
34. "windows-949-2000 (lead byte A1)". ICU Demonstration - Converter Explorer. International Components for Unicode.
35. "Lead Byte A1-A2 (Code page 949)". MSDN. Microsoft.
36. "ibm-1363_P110-1997 (lead byte A1)". ICU Demonstration - Converter Explorer. International Components for Unicode.
37. "euc-kr (lead byte A1)". ICU Demonstration - Converter Explorer. International Components for Unicode.
38. "Map (external version) from Mac OS Korean encoding to Unicode 3.2 and later". Apple.
39. CJK Symbols and Punctuation (PDF) (chart), Unicode
40. Komatsu, Hiroyuki, L2/14-198: Proposal for the modification of the sample character layout of WAVE_DASH (U+301C) (PDF)
41. Shift_JIS-2004 (JIS X 0213:2004 Appendix 1) vs Unicode mapping table, x0213.org
42. "Shift_JIS visualization", Encoding Standard, WHATWG
43. Derbyshire, J (2004), Prime Obsession: Bernhard Riemann and the Greatest Unsolved Problem in Mathematics, New York: Penguin.
44. "Tilde". Wolfram Research. Retrieved 4 June 2018.
45. Choy, Stephen TL; Jesudason, Judith Packer; Lee, Peng Yee (1988). Proceedings of the Analysis Conference, Singapore 1986. Elsevier. ISBN 9780080872612.
46. Wallrapp (1994). "Standardization of flexible body modeling in multibody system codes, Part I: Definition of Standard Input Data". Mechanics of Structures and Machines. 22 (3): 283–304. doi:10.1080/08905459408905214.
47. Valembois, R. E.; Fisette, P.; Samin, J. C. (1997). "Comparison of Various Techniques for Modelling Flexible Beams in Multibody Dynamics". Nonlinear Dynamics. 12 (4): 367–397. doi:10.1023/A:1008204330035.
48. Collinge (2002) An Encyclopedia of Language, §4.2.
49. Hayes, Bruce (2011). Introductory Phonology. John Wiley & Sons. pp. 87–88. ISBN 9781444360134.
50. Buring, Daniel (2016). Intonation and Meaning. Oxford University Press. pp. 36–41. doi:10.1093/acprof:oso/9780199226269.003.0003. ISBN 978-0-19-922627-6.
51. "Tilde expansion", C Library Manual, The GNU project, retrieved 4 July 2010.
52. "Module mod_userdir", HTTP Server Documentation (version 2.0 ed.), The Apache foundation, retrieved 4 July 2010.
53. Berners-Lee, T.; Fielding, R.; Masinter, L. (2005), RFC 3986, IETF, doi:10.17487/RFC3986.
54. "MySQL :: Reference Manual :: Bit Functions and Operators". dev.mysql.com. Retrieved 20 December 2019.
55. "The Groovy programming language - Operators".
56. Groovy Regular Expression User Guide, Code haus, archived from the original on 26 July 2010, retrieved 11 November 2010.
57. Groovy RegExp FAQ, Code haus, archived from the original on 11 July 2010, retrieved 11 November 2010.
58. "Type Families", Haskell Wiki.
59. "Lazy pattern match - HaskellWiki".
60. "CLHS: Section 22.3". Lispworks.com. 11 April 2005. Retrieved 30 July 2010.
61. The R Reference Index
62. Emacs Manual
63. "The Internet Juggling Database". Archived from the original on 28 July 2005. Retrieved 6 November 2009.
Latin script
• History
• Spread
• Romanization
• Roman numerals
• Ligatures
Alphabets (list)
• Classical Latin alphabet
• ISO basic Latin alphabet
• Phonetic alphabets
• International Phonetic Alphabet
• X-SAMPA
• Spelling alphabet
Letters (list)
Letters of the ISO basic Latin alphabet
Aa Bb Cc Dd Ee Ff Gg Hh Ii Jj Kk Ll Mm Nn Oo Pp Qq Rr Ss Tt Uu Vv Ww Xx Yy Zz
Letters using tilde sign ( ◌̃, ◌̰, ◌̴ )
Ãã ᵬ ᵭ Ẽẽ Ḛḛ ᵮ G̃g̃ Ĩ ĩ Ḭḭ J̃ L̃l̃ Ɫɫ ꬸ M̃m̃ ᵯ Ññ ᵰ Õõ P̃p̃ ᵱ R̃r̃ ᵲ ᵴ ᵵ Ũũ Ṵṵ
Ṽṽ Ỹỹ ᵶ
Multigraphs
Digraphs
• Ch
• Dz
• Dž
• Gh
• IJ
• Lj
• Ll
• Ly
• Nh
• Nj
• Ny
• Sh
• Sz
• Th
Trigraphs
• dzs
• eau
Tetragraphs
• ough
Pentagraphs
• tzsch
Keyboard layouts (list)
• QWERTY
• QWERTZ
• AZERTY
• Dvorak
• Colemak
• BÉPO
• Neo
Standards
• ISO/IEC 646
• Unicode
• Western Latin character sets
• DIN 91379: Unicode subset for Europe
Lists
• Precomposed Latin characters in Unicode
• Letters used in mathematics
• List of typographical symbols and punctuation marks
• Diacritics
• Palaeography
Common logical symbols
∧ or &
and
∨
or
¬ or ~
not
→
implies
⊃
implies,
superset
↔ or ≡
iff
|
nand
∀
universal
quantification
∃
existential
quantification
⊤
true,
tautology
⊥
false,
contradiction
⊢
entails,
proves
⊨
entails,
therefore
∴
therefore
∵
because
Philosophy portal
Mathematics portal
Common punctuation marks and other typographical marks or symbols
• space
• , comma
• : colon
• ; semicolon
• ‐ hyphen
• ’ ' apostrophe
• ′ ″ ‴ prime
• . full stop
• & ampersand
• @ at sign
• ^ caret
• / slash
• \ backslash
• … ellipsis
• * asterisk
• ⁂ asterism
• * * * dinkus
• - hyphen-minus
• ‒ – — dash
• = ⸗ double hyphen
• ? question mark
• ! exclamation mark
• ‽ interrobang
• ¡ ¿ inverted ! and ?
• ⸮ irony punctuation
• # number sign
• № numero sign
• º ª ordinal indicator
• % percent sign
• ‰ per mille
• ‱ basis point
• ° degree symbol
• ⌀ diameter sign
• + − plus and minus signs
• × multiplication sign
• ÷ division sign
• ~ tilde
• ± plus–minus sign
• ∓ minus-plus sign
• _ underscore
• ⁀ tie
• | ¦ ‖ vertical bar
• • bullet
• · interpunct
• © copyright symbol
• © copyleft
• ℗ sound recording copyright
• ® registered trademark
• SM service mark symbol
• TM trademark symbol
• ‘ ’ “ ” ' ' " " quotation mark
• ‹ › « » guillemet
• ( ) [ ] { } ⟨ ⟩ bracket
• ” 〃 ditto mark
• † ‡ dagger
• ❧ hedera/floral heart
• ☞ manicule
• ◊ lozenge
• ¶ ⸿ pilcrow (paragraph mark)
• ※ reference mark
• § section mark
• Version of this table as a sortable list
• Currency symbols
• Diacritics (accents)
• Logic symbols
• Math symbols
• Whitespace
• Chinese punctuation
• Hebrew punctuation
• Japanese punctuation
• Korean punctuation
| Wikipedia |
Ubiratan D'Ambrosio
Ubiratan D'Ambrosio (December 8, 1932 – May 12, 2021) was a Brazilian mathematics educator and historian of mathematics.
Ubiratan D'Ambrosio
Born(1932-12-08)December 8, 1932
São Paulo
DiedMay 12, 2021(2021-05-12) (aged 88)
Alma materUniversity of São Paulo
Known forEthnomathematics, Ethnomathematics Program
AwardsKenneth O. May Prize (2001), Felix Klein Medal (2005)
Life
D'Ambrosio was born in São Paulo, and earned his doctorate from the University of São Paulo in 1963. He retired as a professor of mathematics from the State University of Campinas, São Paulo, Brazil in 1993.
He was a member of many societies, including Pugwash, and served the International Commission on the History of Mathematics (ICHM) for five years.
D'Ambrosio was also the founder of the Brazilian Society for Mathematics and History and of the International Group of Ethnomathematicians.
In 2001, he and Lam Lay Yong were jointly awarded the Kenneth O. May Prize.[1][2]
Writings
Books
• 1996, Educação Matemática: da teoria à prática. ISBN 9788530804107
Book chapters
• 1997, Ethno Mathematics. Challenging Eurocentrism, in Arthur B. Powell, Marilyn Frankenstein (eds.) Mathematics Education, State University of New York Press, Albany 1997, p. 13–24.
• Historiographical Proposal for Non-Western Mathematics, in Helaine Selin (ed.), Mathematics Across Cultures. The History of Non-Western Mathematics, Kluwer Academic Publishers, Dordrecht, 2000, pp. 79–92.
Articles
• A Busca da paz como responsabilidade dos matemáticos. Cuadernos de Investigación y Formación en Educación Matemática 7 (2011)
• A Etnomatemática no processo de construção de uma escola indígena. Em aberto 63 (1994)
External links
• Selected works
References
1. "ICHM | International Mathematical Union (IMU)". www.mathunion.org.
2. "Lam Lay Yong". Archived from the original on 2011-06-08. Retrieved 2010-04-12.
Kenneth O. May Prize laureates
• Dirk Jan Struik and Adolph P. Yushkevich (1989)
• Christoph Scriba and Hans Wussing (1993)
• René Taton (1997)
• Ubiratan D'Ambrosio and Lam Lay Yong (2001)
• Henk J. M. Bos (2005)
• Ivor Grattan-Guinness and Radha Charan Gupta (2009)
• Menso Folkerts and Jens Høyrup (2013)
• Eberhard Knobloch and Roshdi Rashed (2017)
Authority control
International
• ISNI
• VIAF
National
• Germany
• Israel
• United States
• Netherlands
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Deutsche Biographie
Other
• IdRef
| Wikipedia |
On the Number of Primes Less Than a Given Magnitude
"Ueber die Anzahl der Primzahlen unter einer gegebenen Grösse" (usual English translation: "On the Number of Primes Less Than a Given Magnitude") is a seminal 9-page paper by Bernhard Riemann published in the November 1859 edition of the Monatsberichte der Königlich Preußischen Akademie der Wissenschaften zu Berlin.
Overview
This paper studies the prime-counting function using analytic methods. Although it is the only paper Riemann ever published on number theory, it contains ideas which influenced thousands of researchers during the late 19th century and up to the present day. The paper consists primarily of definitions, heuristic arguments, sketches of proofs, and the application of powerful analytic methods; all of these have become essential concepts and tools of modern analytic number theory.
Among the new definitions, ideas, and notation introduced:
• The use of the Greek letter zeta (ζ) for a function previously mentioned by Euler
• The analytic continuation of this zeta function ζ(s) to all complex s ≠ 1
• The entire function ξ(s), related to the zeta function through the gamma function (or the Π function, in Riemann's usage)
• The discrete step function J(x), defined for x ≥ 0 by J(0) = 0 together with a jump of 1/n at each prime power pⁿ. (Riemann calls this function f(x).)
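The definition of J(x) can be made concrete by direct computation. The following Python sketch (our own illustration; the helper names are not Riemann's) accumulates the jump 1/n for every prime power pⁿ ≤ x, using exact rational arithmetic:

```python
from fractions import Fraction


def primes_up_to(limit):
    """Sieve of Eratosthenes returning all primes <= limit."""
    if limit < 2:
        return []
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for p in range(2, int(limit ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [i for i, is_prime in enumerate(sieve) if is_prime]


def J(x):
    """Riemann's step function: J(0) = 0, jump of 1/n at each p**n <= x."""
    total = Fraction(0)
    for p in primes_up_to(int(x)):
        n = 1
        while p ** n <= x:
            total += Fraction(1, n)
            n += 1
    return total


# Prime powers up to 20: eight primes (+1 each), 4 and 9 (+1/2 each),
# 8 (+1/3), and 16 (+1/4), giving 8 + 1 + 1/3 + 1/4.
print(J(20))  # 115/12
```

Equivalently, J(x) = Σₙ π(x^(1/n))/n, which is the form Riemann inverts to express π(x).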
Among the proofs and sketches of proofs:
• Two proofs of the functional equation of ζ(s)
• Proof sketch of the product representation of ξ(s)
• Proof sketch of the approximation of the number of roots of ξ(s) whose imaginary parts lie between 0 and T.
Among the conjectures made:
• The Riemann hypothesis, that all (nontrivial) zeros of ζ(s) have real part 1/2. Riemann states this in terms of the roots of the related ξ function,
... es ist sehr wahrscheinlich, dass alle Wurzeln reell sind. Hiervon wäre allerdings ein strenger Beweis zu wünschen; ich habe indess die Aufsuchung desselben nach einigen flüchtigen vergeblichen Versuchen vorläufig bei Seite gelassen, da er für den nächsten Zweck meiner Untersuchung entbehrlich schien.
That is,
it is very probable that all roots are real. One would, however, wish for a strict proof of this; I have, though, after some fleeting futile attempts, provisionally put aside the search for such, as it appears unnecessary for the next objective of my investigation.
(He was discussing a version of the zeta function, modified so that its roots are real rather than on the critical line.)
New methods and techniques used in number theory:
• Functional equations arising from automorphic forms
• Analytic continuation (although not in the spirit of Weierstrass)
• Contour integration
• Fourier inversion.
Riemann also discussed the relationship between ζ(s) and the distribution of the prime numbers, using the function J(x) essentially as a measure for Stieltjes integration. He then obtained the main result of the paper, a formula for J(x), by comparing with ln(ζ(s)). Riemann then found a formula for the prime-counting function π(x) (which he calls F(x)). He notes that his equation explains the fact that π(x) grows more slowly than the logarithmic integral, as had been found by Carl Friedrich Gauss and Carl Wolfgang Benjamin Goldschmidt.
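As a quick numerical illustration of that comparison (our own computation, not taken from the paper), one can count primes by trial division and approximate the offset logarithmic integral ∫₂ˣ dt/ln t with a simple midpoint rule:

```python
import math


def prime_pi(x):
    """pi(x): number of primes <= x, by trial division (fine for small x)."""
    return sum(
        1
        for n in range(2, int(x) + 1)
        if all(n % d for d in range(2, math.isqrt(n) + 1))
    )


def li_offset(x, steps=100_000):
    """Midpoint-rule approximation of Li(x) = integral from 2 to x of dt/ln t."""
    h = (x - 2) / steps
    return h * sum(1 / math.log(2 + (k + 0.5) * h) for k in range(steps))


# pi(10000) = 1229, while Li(10000) is roughly 1245: the logarithmic
# integral overshoots the prime count, consistent with Gauss's observation.
print(prime_pi(10_000), round(li_offset(10_000)))
```

The gap Li(x) − π(x) is exactly what Riemann's explicit formula accounts for through the zeros of ζ(s).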
The paper contains some peculiarities for modern readers, such as the use of Π(s − 1) instead of Γ(s), writing tt instead of t², and using the bounds ∞ to ∞ to denote a contour integral.
References
• Edwards, H. M. (1974), Riemann's Zeta Function, New York: Academic Press, ISBN 0-12-232750-0, Zbl 0315.10035
External links
• Riemann's manuscript
• Ueber die Anzahl der Primzahlen unter einer gegebener Grösse (transcription of Riemann's article)
• On the Number of Prime Numbers less than a Given Quantity (English translation of Riemann's article)
| Wikipedia |
Uffe Haagerup
Uffe Valentin Haagerup (19 December 1949 – 5 July 2015) was a mathematician from Denmark.
Uffe Haagerup
2008
Born
Uffe Valentin Haagerup
(1949-12-19)19 December 1949
Kolding, Denmark
Died5 July 2015(2015-07-05) (aged 65)
Fåborg, Denmark
NationalityDanish
Alma materUniversity of Copenhagen
Known forHaagerup property
Christensen–Haagerup principle
Haagerup tensor norm
Haagerup subfactor
Asaeda-Haagerup subfactor
The Haagerup list
AwardsSamuel Friedman Award
Ole Rømer Medal
Humboldt Research Award
European Research Council (Advanced Grant)
European Latsis Prize
Scientific career
FieldsMathematics
InstitutionsUniversity of Southern Denmark
University of Copenhagen
Doctoral advisorGert K Pedersen
Biography
Uffe Haagerup was born in Kolding but grew up on the island of Funen, in the small town of Fåborg. Mathematics interested him from early on, encouraged and inspired by his older brother; in fourth grade Uffe was doing trigonometric and logarithmic calculations. He graduated from Svendborg Gymnasium in 1968, whereupon he relocated to Copenhagen and immediately began his studies of mathematics and physics at the University of Copenhagen, again inspired by his older brother, who also studied the same subjects at the same university. Early university studies in Einstein's general theory of relativity and quantum mechanics sparked a lasting interest in the mathematical field of operator algebras, in particular von Neumann algebras and Tomita–Takesaki theory. In 1974 he received his Candidate's degree (cand. scient.) from the University of Copenhagen, and in 1981, at the age of 31, he was appointed at the University of Odense (now the University of Southern Denmark) as the youngest professor of mathematics (dr. scient.) in the country at the time. Pregraduate summer schools at the university, and later extended professional research stays abroad, helped him discover and build a diverse and lasting international network of colleagues. Haagerup accidentally drowned on 5 July 2015, aged 65, while swimming in the Baltic Sea close to Fåborg, where his family owned a cabin.[1]
Work
Uffe Haagerup's mathematical focus was on operator algebras, group theory and geometry,[2] but his publications have a broad scope and also involve free probability theory and random matrices. He participated in many international mathematical groups and networks from early on, working as a contributor, organizer, lecturer and editor.
Following his appointment as professor at Odense, Haagerup became acquainted with Vaughan Jones while doing research in Philadelphia and later at UCLA in Los Angeles. Jones inspired him to take up work on subfactor theory. Haagerup also worked extensively with Alain Connes on von Neumann algebras. His solution to the so-called "Champagne Problem"[3] secured him the Samuel Friedman Award in April 1985, although it was first published in Acta Mathematica in 1987; Uffe considered this his best work. Early contact and collaboration were established with Swedish colleagues at the Mittag-Leffler Institute and with the Norwegian group on operator algebras, where Haagerup had a long history of collaboration with Erling Størmer, for example.
In the mathematical literature, Uffe Haagerup is known for the Haagerup property, the Haagerup subfactor, the Asaeda-Haagerup subfactor and the Haagerup list.[4]
From 2000 to 2006 Uffe served as editor-in-chief of the journal Acta Mathematica. He was a member of the Royal Danish Academy of Sciences and Letters and the Norwegian Academy of Science and Letters. He worked at the Department of Mathematics at the University of Copenhagen from 2010 to 2014,[5] where he was involved in the Centre for Symmetry and Deformation (SYM), but was appointed professor of mathematics in 2015 at the University of Southern Denmark in Odense.[6]
Prizes and honors
Uffe Haagerup received several awards and honours throughout his academic career. Amongst the most academically prestigious were the Danish Ole Rømer Medal, the international Humboldt Research Award and the European Latsis Prize.
• 1985. The Samuel Friedman Award (UCLA and Copenhagen)
• 1986. Invited speaker at ICM1986 (Berkeley)
• 1989. The Ole Rømer Medal (Copenhagen).
The Ole Rømer Medal (est. 1944) is a Danish medal awarded by the University of Copenhagen and the municipality of Copenhagen, for outstanding research. It is considered amongst the most honourable scientific awards in the country, established in commemoration of Ole Rømer on his 300th anniversary.
• 2002. Plenary speaker at ICM2002 (Beijing)
• 2007. Distinguished lecturer at the Fields Institute of Mathematical Research (Toronto)
• 2008. The Humboldt Research Award (Münster)
• 2010–2014 European Research Council Advanced Grant
• 2012. Plenary speaker at International Congress on Mathematical Physics ICMP12 (Aalborg)
• 2012. 14th European Latsis Prize from the European Science Foundation, ESF (Brussels)[7][8]
• 2013. Honorary Doctorate from East China Normal University, ECNU (Shanghai)[9]
Works (selection)
• Uffe Haagerup: Principal graphs and subfactors in the index range 4 < [M : N] < 3 + √2; pp. 1–38 in Subfactors – Proceedings of the Taniguchi Symposium, Katata (1994).
See also
• Approximately finite-dimensional C*-algebra
• Khintchine inequality
• Planar algebra
• Quasitrace
References
1. "Obituary: Professor Uffe Haagerup". Department of Mathematical Sciences (University of Copenhagen). 10 July 2015. Retrieved 17 July 2015.
2. "Uffe Haagerup". University of Copenhagen. Retrieved 14 May 2015.
3. The Champagne Problem is associated with Hilbert's 17th problem. See Victoria Powers: "Hilbert's 17th Problem and the Champagne Problem" in American Mathematical Monthly, Vol. 103, No. 10, Dec., 1996, pp. 879–887.
4. Marta Asaeda, Uffe Haagerup (17 June 1999). "Exotic subfactors of finite depth with Jones indices (5+√13)/2 and (5+√17)/2". Commun. Math. Phys. 202: 1–63. arXiv:math/9803044. doi:10.1007/s002200050574. S2CID 118665858.
5. "Curriculum Vitae (CV)". Copenhagen University.
6. "All publications". University of Southern Denmark. Retrieved 14 May 2015.
7. "Uffe Haagerup wins Latsis prize for operator algebra". University World News. 25 November 2012. Retrieved 15 May 2015.
8. See the European Science Foundation (ESF) announcement.
9. "Chinese honorary doctorate to Professor Uffe Haagerup". Department of Mathematical Sciences (University of Copenhagen). 25 June 2013. Retrieved 15 May 2015.
Sources
• European Science Foundation (ESF): ESF awards 14th European Latsis Prize to Professor Uffe Haagerup for ground-breaking and important contributions to the theory of operator algebras 26 November 2012
• Curriculum Vitae (Uffe Haagerup) University of Copenhagen
• Jacob Hjelmborg: Interview with Uffe Haagerup, Matilde (2002), DMF Aarhus University (in Danish)
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Germany
• Israel
• Belgium
• United States
Academics
• DBLP
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• Publons
• ResearcherID
• zbMATH
Other
• IdRef
| Wikipedia |
Ugo Amaldi (mathematician)
Ugo Amaldi (18 April 1875 – 11 November 1957) was an Italian mathematician.[1] He contributed to the field of analytic geometry and worked on Lie groups. His son Edoardo was a physicist.[2]
Ugo Amaldi
Born18 April 1875
Verona, Italy
Died11 November 1957(1957-11-11) (aged 82)
Rome, Italy
NationalityItalian
Alma materUniversity of Bologna
Scientific career
FieldsMathematics
Biography
He graduated in mathematics (21 November 1898) at the University of Bologna under the guidance of S. Pincherle. He taught at the University of Cagliari (1903-1905), Modena (1905-1919), Padova (1919-1924), Roma (1924-1950).
Notes
1. "Ugo Amaldi". Treccani (in Italian). Retrieved 10 December 2017.
2. Rubbia, C. (1991). "Edoardo Amaldi. 5 September 1908 – 5 December 1989". Biographical Memoirs of Fellows of the Royal Society. 37: 2–31. doi:10.1098/rsbm.1991.0001.
External links
• O'Connor, John J.; Robertson, Edmund F., "Ugo Amaldi", MacTutor History of Mathematics Archive, University of St Andrews
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Catalonia
• Germany
• Italy
• Latvia
• Czech Republic
• Netherlands
• Poland
• Portugal
Academics
• MathSciNet
• Mathematics Genealogy Project
• zbMATH
People
• Italian People
Other
• IdRef
| Wikipedia |
Ugo Broggi
Ugo Napoleone Giuseppe Broggi (December 29, 1880, Como – November 23, 1965, Milan) was an Italian actuary, mathematician, philosopher, statistician, and mathematical economist.
Ugo Broggi
Born(1880-12-29)December 29, 1880
Como, Italy
DiedNovember 23, 1965(1965-11-23) (aged 84)
Milan, Italy
Alma materUniversity of Göttingen
Scientific career
Academic advisorsDavid Hilbert
Education and career
Broggi studied in Italy and Germany, graduating in actuarial science in 1902 and in economic science in 1904.[1]
In 1906 Hoepli Editore published Broggi's book Matematica Attuariale,[2] which was translated into French as Traité des Assurances de la Vie (Hermann, 1907)[3] and into German as Versicherungsmathematik (Teubner, 1911).[4][5]
In 1907 he obtained his doctorate, with advisor David Hilbert, from the University of Göttingen with a thesis entitled "Die Axiome der Wahrscheinlichkeitsrechnung"[6][7] (The axioms of probability theory). Hilbert in his 1899 book Grundlagen der Geometrie (GdG) gave axioms for a modern treatment of Euclidean geometry. Influenced by GdG, Georg Bohlmann in 1901 gave axioms for probability theory.[8] In 1904 at the University of Zürich, Rudolf Laemmel (1879–1962) published a doctoral dissertation, Ermittlung von Wahrscheinlichkeiten,[9] dealing with the axioms of probability. In 1905 Hilbert gave lectures on axiomatized probability theory based upon Bohlmann's work.
In 1907, however, one of Hilbert's doctoral students, Ugo Broggi (1880–1965), took up once more the issue of the axiomatization of the calculus of probability, attempting to perfect—following the guidelines established in GdG—the earlier proposals of Bohlmann and Laemmel. ... Based on Lebesgue's theory of measure, Broggi not only formulated a systems of axioms for probability, but also showed that his axioms were complete (in Hilbert's sense), independent and consistent, thus demonstrating the shortcomings of Bohlmann's earlier system.[10]
In 1907 Broggi received not only a doctorate in mathematics but also a doctorate in philosophy.[1] Broggi's 1909 paper on relativity[11] "accurately discussed contemporary ideas on matter, radiation and time."[12]
In 1910 Broggi moved to Argentina to become a professor of financial mathematics.[1] At the National University of La Plata (NULP) he was appointed in 1911 professor of mathematical analysis and in 1912 professor of higher mathematics. In June 1912 the University of Buenos Aires (UBA) appointed him full professor of statistics. In November 1913 UBA's (newly created) Faculty of Economic Sciences (FES) appointed him to the FES council for a term of six years. In 1922 the FES council appointed him professor of financial mathematics. For the academic year 1925–1926 Broggi was on academic leave in Europe. In 1927 he gave some mathematical lectures in Rosario. At the end of 1927 he decided to return to Europe. In March 1928 he resigned his professorial chairs.
Ugo Broggi was one of the founders of modern mathematics and statistics in Argentina, and also produced valuable contributions in mathematical economics, such as proofs of existence of the utility function and the criticism to current proofs of existence in general equilibrium.[13]
In 1928 he was invited speaker at the International Congress of Mathematicians in Bologna.[14][15]
Broggi was a book reviewer for Rivista di Scienza.[16] For 20 years he was on the editorial board of the Giornale degli economisti e Annali di economia. He was also an editor for several other journals, including the Bollettino dell'associazione degli attuari italiani (Bulletin of the association of Italian actuaries) and the Rendiconti del Circolo Matematico di Palermo.
After the end of WW II, Broggi returned to Buenos Aires for a brief visit, during which his former students held a party in his honor.[1]
Selected publications
• Matematica attuariale - Teoria statistica della mortalità. Matematica delle assicurazioni sulla vita, Hoepli, Milano, 1906
• Die Axiome der Wahrscheinlichkeitsrechnung, Inaugural-Dissertation zur Erlangung der Doktorwürde der hohen philosophischen Fakultät der Georg-Augusts-Universität zu Göttingen vorgelegt von Ugo Broggi, Göttingen, Druck der Dieterichschen Universitäts-Buchdruckerei, 1907
• Sur le principe de la moyenne arithmetique, Paris, Gauthier Villars, 1909
• Versicherungsmathematik: deutsche Ausgab, Leipzig Druck und Verlag, 1911
• Analisis matematico: vol. I - Las nociones fundamentales, La Plata, 1919
• Analisis matematico: vol. II - Teorias generales, funciones de mas de una variable, La Plata, 1927
References
1. Fernández López, Manuel (2000). "Ugo Broggi, a neglected precursor in modern mathematical economics" (PDF). Anales de la Asociación Argentina de Economía Política.
2. "A passage from Ugo Broggi's 'Matematica Attuariale' (Milan, 1906)". Journal of the Staple Inn Actuarial Society. 7 (3): 178–179. January 1948. doi:10.1017/S0020269X00003236.
3. Ling, George Herbert (1908). "Book Review: Traité des Assurances de la Vie". Bulletin of the American Mathematical Society. 14 (6): 296–298. doi:10.1090/S0002-9904-1908-01609-4.
4. Versicherungsmathematik, Ugo Broggi. OCLC 551503953.
5. "Reviewed work: Versicherungsmathematik, Ugo Broggi". The Mathematical Gazette. 7 (105): 121–122. 1913. doi:10.2307/3603324. JSTOR 3603324.
6. Ugo Broggi at the Mathematics Genealogy Project
7. Fischer, Gerd; Hirzebruch, Friedrich; Scharlau, Winfried; Törnig, Willi, eds. (12 March 2013). Ein Jahrhundert Mathematik 1890 – 1990: Festschrift zum Jubiläum der DMV. Springer. p. 460. ISBN 9783322802651.
8. Corry, L. (29 June 2013). David Hilbert and the Axiomatization of Physics (1898–1918): From Grundlagen der Geometrie to Grundlagen der Physik. Springer. ISBN 9781402027789.
9. Rudolf Laemmel at the Mathematics Genealogy Project
10. mention of Ugo Broggi (1880–1965) in David Hilbert and the Axiomatization of Physics
11. Broggi, Ugo (1909). "Sobre el principio electrodinámico de relatividad y sobre la idea de tiempo". Revista Politécnica, Buenos Aires. 10 (86): 41–44.
12. Gangui, Alejandro; Ortiz, Eduardo L. (2016). "The scientific impact of Einstein's visit to Argentina, in 1925". arXiv:1603.03792 [physics.hist-ph]. (See p. 4.)
13. Fernández López, Manuel (2000). "Ugo Broggi, un precursor olvidado en la economía matemática moderna". Anales de la Asociación Argentina de Economía Política. Archived from the original on 2005-12-20. (quote from English language version of summary)
14. Broggi, U. (1929). "Su di una classe di sviluppi in serie di polinomi di Hermite". In: Atti del Congresso Internazionale dei Matematici: Bologna del 3 al 10 de settembre di 1928. Vol. 3. pp. 311–314.
15. Broggi, U. (1929). "Su di un problema perequazione". In: Atti del Congresso Internazionale dei Matematici: Bologna del 3 al 10 de settembre di 1928. Vol. 6. pp. 117–122.
16. The Outlook: A Weekly Review of Politics, Art, Literature, and Finance. June 15, 1907. p. 807.
External links
• "How to pronounce Broggi (Italian/Italy) - PronounceNames.com". YouTube. February 28, 2014.
United Kingdom Mathematics Trust
The United Kingdom Mathematics Trust (UKMT) is a charity founded in 1996 to help with the education of children in mathematics within the UK.
UKMT (United Kingdom Mathematics Trust)
Formation: 1996
Purpose: charity
Website: https://www.ukmt.org.uk/
History
The national mathematics competitions existed prior to the formation of the UKMT, but the foundation of the UKMT in the summer of 1996 enabled them to be run collectively. The Senior Mathematical Challenge was formerly the National Mathematics Contest. Founded in 1961, it was run by the Mathematical Association from 1975 until its adoption by the UKMT in 1996. The Junior and Intermediate Mathematical Challenges were the initiative of Dr Tony Gardiner in 1987 and were run by him under the name of the United Kingdom Mathematics Foundation until 1996. The popularity of the UK national mathematics competitions is largely due to the publicising efforts of Dr Gardiner in the years 1987–1995. In 1995, he advertised for the formation of a committee and for a host institution; this led to the establishment of the UKMT, enabling the challenges to be run effectively together under one organisation.
Mathematical Challenges
The UKMT runs a series of mathematics challenges to encourage children's interest in mathematics and develop their skills in secondary schools. The three main challenges are:
• Junior Mathematical Challenge (UK year 8/S2 and below)
• Intermediate Mathematical Challenge (UK year 11/S4 and below)
• Senior Mathematical Challenge (UK year 13/S6 and below)[1]
Certificates
In the Junior and Intermediate Challenges the top-scoring 50% of entrants receive bronze, silver or gold certificates based on their mark in the paper. In the Senior Mathematical Challenge these certificates are awarded to the top-scoring 66% of entries. In each case bronze, silver and gold certificates are awarded in the ratio 3 : 2 : 1. So in the Junior and Intermediate Challenges:
• The Gold award is achieved by the top 8-9% of the entrants.
• The Silver award is achieved by 16-17% of the entrants.
• The Bronze award is achieved by 25% of the entrants.[2]
In the past, only the top 40% of participants received a certificate in the Junior and Intermediate Challenges, and only the top 60% of participants received a certificate in the Senior Challenge. The ratio of bronze, silver and gold certificates has not changed, remaining 3 : 2 : 1.
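The percentages above follow from simple arithmetic: the awarded fraction of entrants is split in the ratio 3 : 2 : 1. A minimal sketch of that calculation (the function name is ours, not UKMT's):

```python
from fractions import Fraction

def certificate_fractions(awarded):
    """Split the top `awarded` fraction of entrants Bronze:Silver:Gold = 3:2:1."""
    return {name: awarded * Fraction(part, 6)
            for name, part in (("Bronze", 3), ("Silver", 2), ("Gold", 1))}

junior = certificate_fractions(Fraction(1, 2))  # Junior/Intermediate: top 50%
senior = certificate_fractions(Fraction(2, 3))  # Senior: top 66% (two-thirds)
# junior: Bronze 1/4 (25%), Silver 1/6 (about 16.7%), Gold 1/12 (about 8.3%)
```

For the Junior and Intermediate Challenges this reproduces the 25%, 16–17% and 8–9% figures quoted above.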
Junior Mathematical Challenge
The Junior Mathematical Challenge (JMC) is an introductory challenge for pupils aged 13 or below (Year 8 or below in England and Wales), taking place in spring each year. It takes the form of twenty-five multiple-choice questions sat under exam conditions, to be completed within one hour. The first fifteen questions are designed to be easier, and a pupil gains 5 marks for each correct answer in this section. Questions 16–20 are more difficult and are worth 6 marks each, with a penalty of 1 mark for a wrong answer, intended to discourage guessing. The last five questions are intended to be the most challenging and are also worth 6 marks each, but with a 2-mark penalty for an incorrect answer. Questions left unanswered gain (and lose) 0 marks.[3] However, in recent years there has been no negative marking, so incorrect answers also score 0 marks. The top 40% of students (50% since the 2022 JMC) receive a certificate of varying level (Gold, Silver or Bronze) based on their score.
Summary of UKMT Junior Challenges[4][5][6][7]
• Junior Mathematical Challenge (JMC). Eligibility: England, Wales and overseas, Y8 or below; Scotland, S2 or below; Northern Ireland, Y9 or below. When: late April. Duration: 60 mins; 25 questions. Marking: Qs 1–15 (multiple choice), 5 marks each; Qs 16–25 (multiple choice), 6 marks each. Maximum score: 135. Certificates: top 50% of candidates, in ratio 3:2:1, receive Bronze, Silver, Gold.
• Junior Kangaroo. Eligibility: good performance (at least a Gold certificate) in the JMC, or discretionary entry at a fee. When: weeks after the JMC. Maximum score: 135. Certificates: top 25% receive a Certificate of Merit; the remaining 75% receive a Certificate of Qualification or Participation.
• Junior Mathematical Olympiad. Eligibility: around the top 1,200 performers in the JMC are invited; discretionary entry is also available. Duration: 120 mins; 16 questions. Marking: Qs 1–10 (correct answer only), 1 mark each; Qs 11–16 (answer with solution), 0–10 marks each. Maximum score: 70. Certificates: top 25% receive a Certificate of Distinction; the next 40% a Certificate of Merit; the remaining 35% a Certificate of Qualification or Participation. The top 40 receive a gold medal, the next 60 a silver medal, the next 100 a bronze medal; all medallists receive a book prize.
Junior Kangaroo
Over 10,000 participants from the JMC are invited to participate in the Junior Kangaroo.[5] Most participants are those who performed well in the JMC; however, the Junior Kangaroo is also open to discretionary entries for a fee. Like the JMC, the Junior Kangaroo is a 60-minute challenge consisting of 25 multiple-choice problems.[6] Correct answers to Questions 1–15 earn 5 marks, and to Questions 16–25 earn 6 marks. Blank or incorrect answers are marked 0; there is no penalty for wrong answers.
The top 25% of participants in the Junior Kangaroo receive a Certificate of Merit.[6]
Junior Mathematical Olympiad
The highest 1200 scorers are also invited to take part in the Junior Mathematical Olympiad (JMO). Like the JMC, the JMO is sat in schools. This is also divided into two sections. Part A is composed of 10 questions in which the candidate gives just the answer (not multiple choice), worth 10 marks (1 mark each). Part B consists of 6 questions and encourages students to write out full solutions. Each question in section B is worth 10 marks and students are encouraged to write complete answers to 2-4 questions rather than hurry through incomplete answers to all 6. If the solution is judged to be incomplete, it is marked on a 0+ basis, maximum 3 marks. If it has an evident logical strategy, it is marked on a 10- basis. The total mark for the whole paper is 70. Everyone who participates in this challenge will gain a certificate (Participation 75%, Distinction 25%); the top 200 or so gaining medals (Gold, Silver, Bronze); with the top fifty winning a book prize.[8]
Intermediate Mathematical Challenge
The Intermediate Mathematical Challenge (IMC) is aimed at school years equivalent to English Years 9–11, taking place in winter each year. Following the same structure as the JMC, this paper presents the student with twenty-five multiple-choice questions to be done under exam conditions in one hour. The first fifteen questions are designed to be easier, and a pupil gains 5 marks for each correct answer in this section. Questions 16–20 are more difficult and are worth 6 marks each, with a penalty of 1 mark for a wrong answer, intended to discourage guessing. The last five questions are intended to be the most challenging and are also worth 6 marks each, but with a 2-mark penalty for an incorrect answer. Questions left unanswered gain (and lose) 0 marks.[9]
Again, the top 40% of students taking this challenge get a certificate. There are two follow-on rounds to this competition: The European Kangaroo and the Intermediate Mathematical Olympiad. Additionally, top performers can be selected for the National Mathematics Summer Schools.
Summary of UKMT Intermediate Challenges[4][10][11][12][13][14][15]
• Intermediate Mathematical Challenge (IMC). Eligibility: England, Wales and overseas, Y11 or below; Scotland, S4 or below; Northern Ireland, Y12 or below. When: February. Duration: 60 mins; 25 questions. Marking: Qs 1–15 (multiple choice), 5 marks each; Qs 16–20 (multiple choice), 6 marks each with a penalty of 1 if a wrong answer is chosen; Qs 21–25 (multiple choice), 6 marks each with a penalty of 2 if a wrong answer is chosen. Maximum score: 135. Certificates: top 50% of candidates, in ratio 3:2:1, receive Bronze, Silver, Gold.
• Grey Kangaroo. Eligibility: invitation from good IMC performance (around 8,000 invitations), or discretionary entry at a fee; Y9 or below. When: March. Marking: Qs 1–15 (multiple choice), 5 marks each; Qs 16–25 (multiple choice), 6 marks each. Certificates (awarded separately for each Kangaroo's cohort): top 25% of candidates receive a Certificate of Merit; the remaining 75% a Certificate of Qualification or Participation.
• Pink Kangaroo. As the Grey Kangaroo, but for Y10/11.
• Cayley Mathematical Olympiad. Eligibility: invitation from good IMC performance (usually top 1,500), or discretionary entry at a fee; Y9 or below. Duration: 120 mins; 6 questions (answer with solution), 0–10 marks each. Maximum score: 60. Certificates (awarded separately for each Olympiad's cohort): highest-scoring participants receive Certificates of Distinction or Merit; remaining participants a Certificate of Qualification or Participation. Top 20 receive a gold medal, the next 30 a silver medal, the next 50 a bronze medal; all medallists receive a book prize.
• Hamilton Mathematical Olympiad. As the Cayley, but for Y10.
• Maclaurin Mathematical Olympiad. As the Cayley, but for Y11.
Intermediate Mathematical Olympiad
To prevent confusion with the International Mathematical Olympiad, this is often abbreviated to the IMOK Olympiad (IMOK = Intermediate Mathematical Olympiad and Kangaroo).
The IMOK is sat by the top 500 scorers from each school year in the Intermediate Maths Challenge and consists of three papers, 'Cayley', 'Hamilton' and 'Maclaurin' named after famous mathematicians. The paper the student will undertake depends on the year group that student is in (Cayley for those in year 9 and below, Hamilton for year 10 and Maclaurin for year 11).[16]
Each paper contains six questions. Each solution is marked out of 10 on a 0+ and 10- scale; that is to say, if an answer is judged incomplete or unfinished, it is awarded a few marks for progress and relevant observations, whereas if it is presented as complete and correct, marks are deducted for faults, poor reasoning, or unproven assumptions. As a result, it is quite uncommon for an answer to score a middling mark (e.g. 4–6). This makes the maximum mark out of 60. For a student to get two questions fully correct is considered "very good". All people taking part in this challenge will get a certificate (participation for the bottom 50%, merit for the next 25% and distinction for the top 25%). The mark boundaries for these certificates change every year, but normally around 30 marks will gain a Distinction. Those scoring highly (the top 50) will gain a book prize; again, this changes every year, with 44 marks required in the Maclaurin paper in 2006. Also, the top 100 candidates will receive a medal; bronze for Cayley, silver for Hamilton and gold for Maclaurin.[17]
European Kangaroo
Main article: European Kangaroo
The European Kangaroo is a competition which follows the same structure as the AMC (Australian Mathematics Competition). There are twenty-five multiple-choice questions and no penalty marking. This paper is taken throughout Europe by over 3 million pupils from more than 37 countries. Two different Kangaroo papers follow on from the Intermediate Maths Challenge, and the next 5,500 highest scorers below the Olympiad threshold are invited to take part (both papers are by invitation only). The Grey Kangaroo is sat by students in Year 9 and below and the Pink Kangaroo by those in Years 10 and 11. The top 25% of scorers in each paper receive a certificate of merit and the rest a certificate of participation. All those who sit either Kangaroo also receive a keyfob containing a different mathematical puzzle each year; the puzzles are published along with their solutions.[18]
National Mathematics Summer Schools
Selected by lottery, 48 of the top 1.5% of scorers in the IMC are invited to participate in one of three week-long National Mathematics Summer Schools in July. Each from a different school across the UK, the 24 boys and 24 girls are facilitated with a range of activities, including daily lectures, designed to go beyond the GCSE syllabus and explore wider and more challenging areas of mathematics. The UKMT aims to "promote mathematical thinking" and "provide an opportunity for participants to meet other students and adults who enjoy mathematics". They were delivered virtually during the COVID-19 pandemic but had reverted to in-person events by 2022.[19]
Senior Mathematical Challenge
The Senior Mathematical Challenge (SMC) takes place in late autumn each year, and is open to students who are aged 19 or below and are not registered to attend a university. The SMC consists of twenty-five multiple-choice questions to be answered in 90 minutes. All candidates start with 25 marks; each correct answer is awarded 4 marks and 1 mark is deducted for each incorrect answer. This gives a score between 0 and 125 marks.
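As a quick illustration of this marking rule (a hypothetical helper, not official UKMT code):

```python
def smc_score(correct, incorrect):
    """SMC scoring: 25 questions, everyone starts on 25 marks,
    +4 per correct answer, -1 per incorrect answer; blanks score 0."""
    if not (correct >= 0 and incorrect >= 0 and correct + incorrect <= 25):
        raise ValueError("at most 25 questions may be answered")
    return 25 + 4 * correct - incorrect

# Extremes: answering everything correctly gives 125; everything wrong gives 0.
```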
Unlike the JMC and IMC, the top 66% get one of the three certificates.[20] Further, the top 1000 highest scorers who are eligible to represent the UK at the International Mathematical Olympiad, together with any discretionary and international candidates, are invited to compete in the British Mathematical Olympiad and the next around 6000 highest scorers are invited to sit the Senior Kangaroo. Discretionary candidates are those students who are entered by their mathematics teachers, on payment of a fee, who did not score quite well enough in the SMC, but who might cope well in the next round.[21]
Summary of UKMT Senior Challenges[4][20][22][23][24][25]
• Senior Mathematical Challenge (SMC). Eligibility: England, Wales and overseas, Y13 or below; Scotland, S6 or below; Northern Ireland, Y14 or below. When: October. Duration: 90 mins; 25 questions. Marking: Qs 1–25 (multiple choice), 4 marks each with a penalty of 1 if a wrong answer is chosen; all participants start on 25 points. Maximum score: 125. Certificates: top 66% of candidates, in ratio 3:2:1, receive Bronze, Silver, Gold.
• Senior Kangaroo. Eligibility: invitation from good SMC performance (around 6,000 invitations), or discretionary entry at a fee. When: November. Duration: 60 mins; 20 questions (each answer is a number from 1 to 999), 5 marks each. Maximum score: 100. Certificates: top 25% of candidates receive a Certificate of Merit; the remaining 75% a Certificate of Qualification or Participation.
• British Mathematical Olympiad Round 1 (BMO1). Eligibility: invitation from good SMC performance (usually top 1,000), or invitation from good MOG performance, or discretionary entry at a fee. Duration: 210 mins; 6 questions (answer with solution), 0–10 marks each. Maximum score: 60. Certificates: highest-scoring participants receive Certificates of Distinction or Merit; remaining participants a Certificate of Qualification or Participation. Top 20 receive a gold medal, the next 30 a silver medal, the next 50 a bronze medal; all medallists receive a book prize.
• British Mathematical Olympiad Round 2 (BMO2). Eligibility: roughly the top 100 scorers from BMO1, or discretionary entry at a fee. When: January. 4 questions (answer with solution), 0–10 marks each. Maximum score: 40. Certificates: highest-scoring participants receive Certificates of Distinction or Merit; remaining participants a Certificate of Qualification or Participation.
• Mathematical Olympiad for Girls (MOG). Eligibility: discretionary entry at a fee (first 4 entries free). When: September. Duration: 150 mins; 5 questions (answer with solution), 0–10 marks each. Maximum score: 50. Certificates: highest-scoring participants receive Certificates of Distinction or Merit; remaining participants a Certificate of Qualification or Participation. Top 30 receive a book prize.
British Mathematical Olympiad
Main article: British Mathematical Olympiad
Round 1 of the Olympiad is a three-and-a-half hour examination including six more difficult, long answer questions, which serve to test entrants' problem-solving skills. As of 2005, a more accessible first question was added to the paper; before this, it only consisted of 5 questions. Around 100 highest scoring candidates from BMO1 are invited to sit the BMO2, which is the follow-up round that has the same time limit as BMO1, but in which 4 harder questions are posed. The top 24 scoring students from the second round are subsequently invited to a training camp at Trinity College, Cambridge for the first stage of the International Mathematical Olympiad UK team selection.[26]
Senior Kangaroo
The Senior Kangaroo is a one-hour examination to which the next around 6,000 highest scorers below the Olympiad threshold are invited. The paper consists of twenty questions, each of which requires a three-digit answer (leading zeros are used if the answer is less than 100, since the paper is marked by machine). The top 25% of candidates receive a certificate of merit and the rest receive a certificate of participation.[27]
Team Challenge
The UKMT Team Maths Challenge is an annual event. One team from each participating school, comprising four pupils selected from Years 8 and 9 (ages 12–14), competes in a regional round. No more than two pupils on a team may be from Year 9. There are over 60 regional competitions in the UK, held between February and May. The winning team in each regional round, as well as a few high-scoring runners-up from throughout the country, are then invited to the National Final in London, usually in late June.[28]
There are 4 rounds:
• Group Questions
• Cross-Numbers
• Shuttle (NB: The previous Head-to-Head Round has been replaced with another, similar to the Mini-Relay used in the 2007 and 2008 National Finals.)
• Relay
In the National Final, however, an additional 'Poster Round' is added at the beginning. The poster round was formerly a separate competition; since 2018, however, it has been worth up to six marks towards the main event. Four schools have won the Junior Maths Team competition at least twice: Queen Mary's Grammar School in Walsall, City of London School, St Olave's Grammar School, and Westminster Under School.
• 2010. Winner: Magdalen College School, Oxford; 2nd: Queen Elizabeth's Grammar School for Boys; 3rd: Clifton College
• 2011. Winner: St Paul's Girls' School; 2nd: Magdalen College School, Oxford; 3rd: City of London School
• 2012. Winner: Harrow School & Orley Farm School (joint team); 2nd: ?; 3rd: ?
• 2013. Winner: City of London School; 2nd: King Edward's School, Birmingham; 3rd: Magdalen College School, Oxford
• 2014. Winner: City of London School; 2nd: Reading School & Colchester Royal Grammar School; 3rd: -
• 2015. Winner: Westminster Under School; 2nd: Bancroft's School; 3rd: University College School
• 2016. Winner: Westminster Under School; 2nd: -; 3rd: St Olave's Grammar School
• 2017. Winner: St Olave's Grammar School; 2nd: The Judd School; 3rd: Westminster Under School
• 2018. Winner: Westminster Under School; 2nd: -; 3rd: -
• 2019. Winner: Bancroft's School; 2nd: Eton College; 3rd: Westminster Under School
Senior Team Challenge
A pilot event for a competition similar to the Team Challenge, aimed at 16- to 18-year-olds, was launched in the Autumn of 2007 and has been running ever since. The format is much the same, with a limit of two year 13 (Upper Sixth-Form) pupils per team. Regional finals take place between October and December, with the National Final in early February the following year.
Previous winners are below:[29]
Year Winner Runners-up Third place Poster Competition winners
2007/08 Torquay Boys' Grammar School ? ? No competition
2008/09 Westminster School ? ? No competition
2009/10 Westminster School King Edward VI Grammar School, Chelmsford Abingdon School King Edward VI High School for Girls
2010/11 Harrow School Colchester Royal Grammar School Merchant Taylors' School/ Abingdon School/ Concord College (three-way tie) North London Collegiate School
2011/12 Alton College Dean Close School Headington School Royal Grammar School, Newcastle
2012/13 Westminster School City of London School/ Eton College/ Magdalen College School (three-way tie) - The Grammar School at Leeds
2013/14 Hampton School Alton College Rainham Mark Grammar School The Grammar School at Leeds
2014/15 Hampton School/Harrow School/King Edward's School (three-way tie) - - Dunblane High School
2015/16 King Edward's School/ Ruthin School/ Westminster School (three-way tie) - - Backwell School
2016/17 Ruthin School Magdalen College School Headington School Royal Grammar School, Newcastle
2017/18 Ruthin School Tapton School King Edward's School The Perse School
2018/19 Durham School/Westminster School (two-way tie) - Ruthin School/Concord College (two-way tie) The Perse School
2019/20 Westminster School Ruthin School Winchester College Bancroft’s School
British Mathematical Olympiad Subtrust
For more information see British Mathematical Olympiad Subtrust.
The British Mathematical Olympiad Subtrust is run by UKMT, which runs the British Mathematical Olympiad as well as the UK Mathematical Olympiad for Girls, several training camps throughout the year such as a winter camp in Hungary, an Easter camp at Trinity College, Cambridge, and other training and selection of the IMO team.
See also
• European Kangaroo
• British Mathematical Olympiad
• International Mathematical Olympiad
• International Mathematics Competition for University Students
References
1. United Kingdom Mathematics Trust, Thursday 19 April 2012
2. United Kingdom Mathematics Trust , Tuesday 10 May 2022
3. United Kingdom Mathematics Trust , Junior Challenge, Thursday 19 April 2012
4. "Challenges". UK Mathematics Trust. Retrieved 1 January 2023.
5. "Junior Mathematical Challenge | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
6. "Junior Kangaroo | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
7. "Junior Mathematical Olympiad | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
8. United Kingdom Mathematics Trust , Junior Mathematical Olympiad, Thursday 19 April 2012
9. United Kingdom Mathematics Trust , Intermediate Challenge, Thursday 19 April 2012
10. "Intermediate Mathematical Challenge | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
11. "Grey Kangaroo | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
12. "Pink Kangaroo | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
13. "Cayley Mathematical Olympiad | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
14. "Hamilton Mathematical Olympiad | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
15. "Maclaurin Mathematical Olympiad | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
16. United Kingdom Mathematics Trust , Intermediate Mathematical Olympiad, Saturday 26 May 2012
17. United Kingdom Mathematics Trust , Intermediate Mathematical Olympiad, Thursday 19 April 2012
18. United Kingdom Mathematics Trust , Intermediate Kangaroo, Thursday 19 April 2012
19. "Week-long residential events for young mathematicians". ukmt.org. United Kingdom Mathematics Trust. Retrieved 8 July 2022.
20. "Senior Mathematical Challenge | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
21. United Kingdom Mathematics Trust , Senior Challenge, Thursday 19 April 2012
22. "Andrew Jobbings Senior Kangaroo | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
23. "British Mathematical Olympiad Round 1 | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
24. "British Mathematical Olympiad Round 2 | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
25. "Mathematical Olympiad for Girls | UK Mathematics Trust". www.ukmt.org.uk. Retrieved 1 January 2023.
26. British Mathematical Olympiad Subtrust, Thursday 19 April 2012
27. United Kingdom Mathematics Trust , Senior Kangaroo, Thursday 19 April 2012
28. United Kingdom Mathematics Trust , Team Challenges, Thursday 19 April 2012
29. "Home | AMSP".
External links
• United Kingdom Mathematics Trust website
• British Mathematical Olympiad Committee site
• International Mathematics Competition for University Students (IMC) site
• Junior Mathematical Challenge Sample Paper
• Intermediate Mathematical Challenge Sample Paper
• Senior Mathematical Challenge Sample Paper
Ulam matrix
In mathematical set theory, an Ulam matrix is an array of subsets of a cardinal number with certain properties. Ulam matrices were introduced by Stanislaw Ulam in his 1930 work on measurable cardinals: they may be used, for example, to show that a real-valued measurable cardinal is weakly inaccessible.[1]
Definition
Suppose that κ and λ are cardinal numbers, and let F be a λ-complete filter on λ. An Ulam matrix is a collection of subsets Aαβ of λ indexed by α in κ, β in λ such that
• If β ≠ γ then Aαβ and Aαγ are disjoint.
• For each β, the union over α in κ of the sets Aαβ is in the filter F.
References
1. Jech, Thomas (2003), Set Theory, Springer Monographs in Mathematics (Third Millennium ed.), Berlin, New York: Springer-Verlag, p. 131, ISBN 978-3-540-44085-7, Zbl 1007.03002
• Ulam, Stanisław (1930), "Zur Masstheorie in der allgemeinen Mengenlehre", Fundamenta Mathematicae, 16 (1): 140–150
Banach measure
In the mathematical discipline of measure theory, a Banach measure is a certain type of content used to formalize geometric area in problems vulnerable to the axiom of choice.
For Banach-space-valued measures, see vector measure.
Traditionally, intuitive notions of area are formalized as a classical, countably additive measure. This has the unfortunate effect of leaving some sets with no well-defined area; a consequence is that some geometric transformations do not leave area invariant, the substance of the Banach–Tarski paradox. A Banach measure is a type of generalized measure that sidesteps this problem.
A Banach measure on a set Ω is a finite, finitely additive measure μ ≠ 0, defined on every subset of Ω (that is, on all of ℘(Ω)), and whose value is 0 on finite subsets.
A Banach measure on Ω which takes values in {0, 1} is called an Ulam measure on Ω.
As Vitali's paradox shows, Banach measures cannot be strengthened to countably additive ones.
Stefan Banach showed that it is possible to define a Banach measure for the Euclidean plane which is consistent with the usual Lebesgue measure: every Lebesgue-measurable subset of $\mathbb {R} ^{2}$ is also Banach-measurable, and the two measures agree on all Lebesgue-measurable sets.[1]
The existence of this measure proves the impossibility of a Banach–Tarski paradox in two dimensions: it is not possible to decompose a two-dimensional set of finite Lebesgue measure into finitely many sets that can be reassembled into a set with a different measure, because this would violate the properties of the Banach measure that extends the Lebesgue measure.[2]
References
1. Banach, Stefan (1923). "Sur le problème de la mesure" (PDF). Fundamenta Mathematicae. 4: 7–33. doi:10.4064/fm-4-1-7-33. Retrieved 6 March 2022.
2. Stewart, Ian (1996), From Here to Infinity, Oxford University Press, p. 177, ISBN 9780192832023.
External links
• Stefan Banach bio
Ulam number
In mathematics, the Ulam numbers comprise an integer sequence devised by and named after Stanislaw Ulam, who introduced it in 1964.[1] The standard Ulam sequence (the (1, 2)-Ulam sequence) starts with U1 = 1 and U2 = 2. Then for n > 2, Un is defined to be the smallest integer that is the sum of two distinct earlier terms in exactly one way and larger than all earlier terms.
Examples
As a consequence of the definition, 3 is an Ulam number (1 + 2); and 4 is an Ulam number (1 + 3). (Here 2 + 2 is not a second representation of 4, because the previous terms must be distinct.) The integer 5 is not an Ulam number, because 5 = 1 + 4 = 2 + 3. The first few terms are
1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57, 62, 69, 72, 77, 82, 87, 97, 99, 102, 106, 114, 126, 131, 138, 145, 148, 155, 175, 177, 180, 182, 189, 197, 206, 209, 219, 221, 236, 238, 241, 243, 253, 258, 260, 273, 282, ... (sequence A002858 in the OEIS).
There are infinitely many Ulam numbers. For, after the first n numbers in the sequence have already been determined, it is always possible to extend the sequence by one more element: Un−1 + Un is uniquely represented as a sum of two of the first n numbers, and there may be other smaller numbers that are also uniquely represented in this way, so the next element can be chosen as the smallest of these uniquely representable numbers.[2]
Ulam is said to have conjectured that the numbers have zero density,[3] but they seem to have a density of approximately 0.07398.[4]
Properties
Apart from 1 + 2 = 3, no later Ulam number can be the sum of the two consecutive Ulam numbers immediately preceding it.
Proof: Assume that for some n > 2, Un+1 = Un−1 + Un is such a sum, obtained in only one way. Then Un−2 + Un is also the sum of two distinct earlier terms in only one way, and it lies between Un and Un+1. This contradicts the condition that Un+1 is the next smallest Ulam number.[5]
For n > 2, any three consecutive Ulam numbers (Un−1, Un, Un+1) as integer sides will form a triangle.[6]
Proof: The previous property states that for n > 2, Un−2 + Un ≥ Un+1. Since Un−1 > Un−2, it follows that Un−1 + Un > Un+1, and because Un−1 < Un < Un+1 the triangle inequality is satisfied.
The sequence of Ulam numbers forms a complete sequence.
Proof: By definition, Un = Uj + Uk with j < k < n, where Un is the smallest integer that is the sum of two distinct smaller Ulam numbers in exactly one way. This means that for all Un with n > 3, the greatest value that Uj can have is Un−3 and the greatest value that Uk can have is Un−1.[5][7]
Hence Un ≤ Un−1 + Un−3 < 2Un−1 and U1 = 1, U2 = 2, U3 = 3. This is a sufficient condition for Ulam numbers to be a complete sequence.
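Completeness can be exercised directly: since Un < 2Un−1, greedily taking the largest Ulam number not exceeding the remainder always leaves a remainder strictly smaller than the term taken, so the chosen terms are distinct and the descent terminates. A sketch, reusing the terms listed in the Examples section (the function name is ours):

```python
# First terms of the Ulam sequence (from the listing in the Examples section).
ULAM = [1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57,
        62, 69, 72, 77, 82, 87, 97, 99, 102, 106, 114, 126]

def as_ulam_sum(n):
    """Write n as a sum of distinct Ulam numbers by greedy descent."""
    parts = []
    while n > 0:
        u = max(t for t in ULAM if t <= n)  # remainder will be < u
        parts.append(u)
        n -= u
    return parts

print(as_ulam_sum(100))  # → [99, 1]
```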
For every integer n > 1 there is always at least one Ulam number Uj such that n ≤ Uj < 2n.
Proof: It has been proved that there are infinitely many Ulam numbers and they start at 1. Therefore for every integer n > 1 it is possible to find j such that Uj−1 ≤ n ≤ Uj. From the proof above for n > 3, Uj ≤ Uj−1 + Uj−3 < 2Uj−1. Therefore n ≤ Uj < 2Uj−1 ≤ 2n. Also for n = 2 and 3 the property is true by calculation.
In any sequence of 5 consecutive positive integers {i, i + 1,..., i + 4} with i > 4, there can be at most two Ulam numbers.[7]
Proof: Assume that the first value i = Uj of the sequence {i, i + 1,..., i + 4} is an Ulam number; then it is possible that i + 1 is the next Ulam number Uj+1. Now consider i + 2: this cannot be the next Ulam number Uj+2, because it is not a unique sum of two previous terms; i + 2 = Uj+1 + U1 = Uj + U2. A similar argument applies to i + 3 and i + 4.
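The bound can be checked against the listed terms; this sketch counts Ulam numbers in every window of five consecutive integers covered by the list:

```python
# First Ulam numbers (from the listing in the Examples section).
ulam_set = {1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53,
            57, 62, 69, 72, 77, 82, 87, 97, 99, 102, 106, 114, 126}

# Largest count of Ulam numbers in any window {i, ..., i + 4} with i > 4.
densest = max(sum((i + k) in ulam_set for k in range(5))
              for i in range(5, 123))
print(densest)  # → 2 (e.g. the window starting at 44 contains 47 and 48)
```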
Inequalities
Ulam numbers are pseudo-random and too irregular to have tight bounds. Nevertheless, from the properties above (namely, that the next Ulam number satisfies Un+1 ≤ Un + Un−2, and that any five consecutive positive integers contain at most two Ulam numbers), it can be stated that
(5/2)n − 7 ≤ Un ≤ Nn+1 for n > 0,[7]
where Nn are the numbers in Narayana's cows sequence 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, ..., defined by the recurrence relation Nn = Nn−1 + Nn−3 starting at N0.
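Both bounds are easy to verify numerically for the first twenty terms (variable names are ours):

```python
ULAM = [1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57,
        62, 69]

# Narayana's cows sequence: N0 = N1 = N2 = 1, Nn = N(n-1) + N(n-3).
narayana = [1, 1, 1]
while len(narayana) < len(ULAM) + 2:
    narayana.append(narayana[-1] + narayana[-3])

# Check (5/2)n - 7 <= Un <= N(n+1) for each listed term.
for n, u in enumerate(ULAM, start=1):
    assert 2.5 * n - 7 <= u <= narayana[n + 1]
print("bounds hold for the first", len(ULAM), "terms")
```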
Hidden structure
It has been observed[8] that the first 10 million Ulam numbers satisfy $\cos {(2.5714474995\,U_{n})}<0$ except for the four elements $\left\{2,3,47,69\right\}$ (this has now been verified for the first $10^{9}$ Ulam numbers). Inequalities of this type are usually true for sequences exhibiting some form of periodicity but the Ulam sequence does not seem to be periodic and the phenomenon is not understood. It can be exploited to do a fast computation of the Ulam sequence (see External links).
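The inequality is easy to reproduce on the terms listed in the Examples section; over this range, only the four stated exceptions have a non-negative cosine:

```python
import math

ULAM = [1, 2, 3, 4, 6, 8, 11, 13, 16, 18, 26, 28, 36, 38, 47, 48, 53, 57,
        62, 69, 72, 77, 82, 87, 97, 99, 102, 106, 114, 126, 131, 138, 145,
        148, 155, 175, 177, 180, 182, 189, 197, 206, 209, 219, 221, 236,
        238, 241, 243, 253, 258, 260, 273, 282]

LAMBDA = 2.5714474995
exceptions = [u for u in ULAM if math.cos(LAMBDA * u) >= 0]
print(exceptions)  # → [2, 3, 47, 69]
```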
Generalizations
The idea can be generalized as (u, v)-Ulam numbers by selecting different starting values (u, v). A sequence of (u, v)-Ulam numbers is regular if the sequence of differences between consecutive numbers in the sequence is eventually periodic. When v is an odd number greater than three, the (2, v)-Ulam numbers are regular. When v is congruent to 1 (mod 4) and at least five, the (4, v)-Ulam numbers are again regular. However, the Ulam numbers themselves do not appear to be regular.[9]
A sequence of numbers is said to be s-additive if each number in the sequence, after the initial 2s terms of the sequence, has exactly s representations as a sum of two previous numbers. Thus, the Ulam numbers and the (u, v)-Ulam numbers are 1-additive sequences.[10]
If a sequence is formed by appending the largest number with a unique representation as a sum of two earlier numbers, instead of appending the smallest uniquely representable number, then the resulting sequence is the sequence of Fibonacci numbers.[11]
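This variant is easy to check numerically: the largest pairwise sum, that of the last two terms, is always uniquely representable, so swapping "smallest" for "largest" in a naive generator reproduces the Fibonacci numbers (the function name is ours):

```python
def largest_unique_sums(n_terms):
    """Repeatedly append the largest number that is uniquely representable
    as a sum of two distinct earlier terms; starts like the Ulam sequence."""
    seq = [1, 2]
    while len(seq) < n_terms:
        counts = {}
        for i in range(len(seq)):
            for j in range(i + 1, len(seq)):
                s = seq[i] + seq[j]
                counts[s] = counts.get(s, 0) + 1
        seq.append(max(v for v, c in counts.items() if c == 1))
    return seq

print(largest_unique_sums(10))  # → [1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
```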
Notes
1. Ulam (1964a, 1964b).
2. Recaman (1973) gives a similar argument, phrased as a proof by contradiction. He states that, if there were finitely many Ulam numbers, then the sum of the last two would also be an Ulam number – a contradiction. However, although the sum of the last two numbers would in this case have a unique representation as a sum of two Ulam numbers, it would not necessarily be the smallest number with a unique representation.
3. The statement that Ulam made this conjecture is in OEIS OEIS: A002858, but Ulam does not address the density of this sequence in Ulam (1964a), and in Ulam (1964b) he poses the question of determining its density without conjecturing a value for it. Recaman (1973) repeats the question from Ulam (1964b) of the density of this sequence, again without conjecturing a value for it.
4. OEIS OEIS: A002858
5. Recaman (1973)
6. OEIS OEIS: A330909
7. Philip Gibbs and Judson McCranie (2017). "The Ulam Numbers up to One Trillion". p. 1(Introduction).
8. Steinerberger (2015)
9. Queneau (1972) first observed the regularity of the sequences for u = 2 and v = 7 and v = 9. Finch (1992) conjectured the extension of this result to all odd v greater than three, and this conjecture was proven by Schmerl & Spiegel (1994). The regularity of the (4, v)-Ulam numbers was proven by Cassaigne & Finch (1995).
10. Queneau (1972).
11. Finch (1992).
References
• Cassaigne, Julien; Finch, Steven R. (1995), "A class of 1-additive sequences and quadratic recurrences", Experimental Mathematics, 4 (1): 49–60, doi:10.1080/10586458.1995.10504307, MR 1359417, S2CID 9985793
• Finch, Steven R. (1992), "On the regularity of certain 1-additive sequences", Journal of Combinatorial Theory, Series A, 60 (1): 123–130, doi:10.1016/0097-3165(92)90042-S, MR 1156652
• Guy, Richard (2004), Unsolved Problems in Number Theory (3rd ed.), Springer-Verlag, pp. 166–167, ISBN 0-387-20860-7
• Queneau, Raymond (1972), "Sur les suites s-additives", Journal of Combinatorial Theory, Series A (in French), 12 (1): 31–71, doi:10.1016/0097-3165(72)90083-0, MR 0302597
• Recaman, Bernardo (1973), "Questions on a sequence of Ulam", American Mathematical Monthly, 80 (8): 919–920, doi:10.2307/2319404, JSTOR 2319404, MR 1537172
• Schmerl, James; Spiegel, Eugene (1994), "The regularity of some 1-additive sequences", Journal of Combinatorial Theory, Series A, 66 (1): 172–175, doi:10.1016/0097-3165(94)90058-2, MR 1273299
• Ulam, Stanislaw (1964a), "Combinatorial analysis in infinite sets and some physical theories", SIAM Review, 6 (4): 343–355, Bibcode:1964SIAMR...6..343U, doi:10.1137/1006090, JSTOR 2027963, MR 0170832
• Ulam, Stanislaw (1964b), Problems in Modern Mathematics, New York: John Wiley & Sons, Inc, p. xi, MR 0280310
• Steinerberger, Stefan (2015), A Hidden Signal in the Ulam sequence, Experimental Mathematics, arXiv:1507.00267, Bibcode:2015arXiv150700267S
External links
• Ulam Sequence from MathWorld
• Fast computation of the Ulam sequence by Philip Gibbs
• Description of Algorithm by Donald Knuth
• The github page of Daniel Ross
Classes of natural numbers
Powers and related numbers
• Achilles
• Power of 2
• Power of 3
• Power of 10
• Square
• Cube
• Fourth power
• Fifth power
• Sixth power
• Seventh power
• Eighth power
• Perfect power
• Powerful
• Prime power
Of the form a × 2b ± 1
• Cullen
• Double Mersenne
• Fermat
• Mersenne
• Proth
• Thabit
• Woodall
Other polynomial numbers
• Hilbert
• Idoneal
• Leyland
• Loeschian
• Lucky numbers of Euler
Recursively defined numbers
• Fibonacci
• Jacobsthal
• Leonardo
• Lucas
• Padovan
• Pell
• Perrin
Possessing a specific set of other numbers
• Amenable
• Congruent
• Knödel
• Riesel
• Sierpiński
Expressible via specific sums
• Nonhypotenuse
• Polite
• Practical
• Primary pseudoperfect
• Ulam
• Wolstenholme
Figurate numbers
2-dimensional
centered
• Centered triangular
• Centered square
• Centered pentagonal
• Centered hexagonal
• Centered heptagonal
• Centered octagonal
• Centered nonagonal
• Centered decagonal
• Star
non-centered
• Triangular
• Square
• Square triangular
• Pentagonal
• Hexagonal
• Heptagonal
• Octagonal
• Nonagonal
• Decagonal
• Dodecagonal
3-dimensional
centered
• Centered tetrahedral
• Centered cube
• Centered octahedral
• Centered dodecahedral
• Centered icosahedral
non-centered
• Tetrahedral
• Cubic
• Octahedral
• Dodecahedral
• Icosahedral
• Stella octangula
pyramidal
• Square pyramidal
4-dimensional
non-centered
• Pentatope
• Squared triangular
• Tesseractic
Combinatorial numbers
• Bell
• Cake
• Catalan
• Dedekind
• Delannoy
• Euler
• Eulerian
• Fuss–Catalan
• Lah
• Lazy caterer's sequence
• Lobb
• Motzkin
• Narayana
• Ordered Bell
• Schröder
• Schröder–Hipparchus
• Stirling first
• Stirling second
• Telephone number
• Wedderburn–Etherington
Primes
• Wieferich
• Wall–Sun–Sun
• Wolstenholme prime
• Wilson
Pseudoprimes
• Carmichael number
• Catalan pseudoprime
• Elliptic pseudoprime
• Euler pseudoprime
• Euler–Jacobi pseudoprime
• Fermat pseudoprime
• Frobenius pseudoprime
• Lucas pseudoprime
• Lucas–Carmichael number
• Somer–Lucas pseudoprime
• Strong pseudoprime
Arithmetic functions and dynamics
Divisor functions
• Abundant
• Almost perfect
• Arithmetic
• Betrothed
• Colossally abundant
• Deficient
• Descartes
• Hemiperfect
• Highly abundant
• Highly composite
• Hyperperfect
• Multiply perfect
• Perfect
• Practical
• Primitive abundant
• Quasiperfect
• Refactorable
• Semiperfect
• Sublime
• Superabundant
• Superior highly composite
• Superperfect
Prime omega functions
• Almost prime
• Semiprime
Euler's totient function
• Highly cototient
• Highly totient
• Noncototient
• Nontotient
• Perfect totient
• Sparsely totient
Aliquot sequences
• Amicable
• Perfect
• Sociable
• Untouchable
Primorial
• Euclid
• Fortunate
Other prime factor or divisor related numbers
• Blum
• Cyclic
• Erdős–Nicolas
• Erdős–Woods
• Friendly
• Giuga
• Harmonic divisor
• Jordan–Pólya
• Lucas–Carmichael
• Pronic
• Regular
• Rough
• Smooth
• Sphenic
• Størmer
• Super-Poulet
• Zeisel
Numeral system-dependent numbers
Arithmetic functions
and dynamics
• Persistence
• Additive
• Multiplicative
Digit sum
• Digit sum
• Digital root
• Self
• Sum-product
Digit product
• Multiplicative digital root
• Sum-product
Coding-related
• Meertens
Other
• Dudeney
• Factorion
• Kaprekar
• Kaprekar's constant
• Keith
• Lychrel
• Narcissistic
• Perfect digit-to-digit invariant
• Perfect digital invariant
• Happy
P-adic numbers-related
• Automorphic
• Trimorphic
Digit-composition related
• Palindromic
• Pandigital
• Repdigit
• Repunit
• Self-descriptive
• Smarandache–Wellin
• Undulating
Digit-permutation related
• Cyclic
• Digit-reassembly
• Parasitic
• Primeval
• Transposable
Divisor-related
• Equidigital
• Extravagant
• Frugal
• Harshad
• Polydivisible
• Smith
• Vampire
Other
• Friedman
Binary numbers
• Evil
• Odious
• Pernicious
Generated via a sieve
• Lucky
• Prime
Sorting related
• Pancake number
• Sorting number
Natural language related
• Aronson's sequence
• Ban
Graphemics related
• Strobogrammatic
• Mathematics portal
Ulam's game
Ulam's game, or the Rényi–Ulam game, is a mathematical game similar to the popular game of twenty questions. In Ulam's game, a player attempts to guess an unnamed object or number by asking yes–no questions of another, but one of the answers given may be a lie.[1]
Alfréd Rényi (1961) introduced the game, basing it on Hungary's Bar Kokhba game, but the paper was overlooked for many years.
Stanislaw Ulam (1976, p. 281) rediscovered the game, posing the problem of finding one among a million objects when the answer to one question may be wrong, and considered the minimum number of questions required and the strategy that should be adopted.[2] Pelc (2002) gave a survey of similar games and their relation to information theory.
See also
• Knights and Knaves
References
1. "How to Play Ulam's Game" (PDF). Retrieved 13 June 2013.
2. Beluhov, Nikolai (2016). "Renyi-Ulam Games and Forbidden Substrings". arXiv:1609.07367 [math.CO].
• Pelc, Andrzej (2002), "Searching games with errors---fifty years of coping with liars", Theoretical Computer Science, 270 (1): 71–109, doi:10.1016/S0304-3975(01)00303-6, ISSN 0304-3975, MR 1871067
• Rényi, Alfréd (1961), "On a problem in information theory", Magyar Tud. Akad. Mat. Kutató Int. Közl. (in Hungarian), 6: 505–516, MR 0143666
• Ulam, S. M. (1976), Adventures of a mathematician, Charles Scribner's sons, ISBN 978-0-520-07154-4, MR 0485098
Ulam's packing conjecture
Ulam's packing conjecture, named for Stanislaw Ulam, is a conjecture about the highest possible packing density of identical convex solids in three-dimensional Euclidean space. The conjecture says that the optimal density for packing congruent spheres is smaller than that for any other convex body. That is, according to the conjecture, the ball is the convex solid which forces the largest fraction of space to remain empty in its optimal packing structure. This conjecture is therefore related to the Kepler conjecture about sphere packing. Since the solution to the Kepler conjecture establishes that identical balls must leave ≈25.95% of the space empty, Ulam's conjecture is equivalent to the statement that no other convex solid forces that much space to be left empty.
Unsolved problem in mathematics:
Is there any three-dimensional convex body with lower packing density than the sphere?
(more unsolved problems in mathematics)
Origin
This conjecture was attributed posthumously to Ulam by Martin Gardner, who remarks in a postscript added to one of his Mathematical Games columns that Ulam communicated this conjecture to him in 1972.[1] Though the original reference to the conjecture states only that Ulam "suspected" the ball to be the worst case for packing, the statement has been subsequently taken as a conjecture.
Supporting arguments
Numerical experiments with a large variety of convex solids have in each case resulted in packings that leave less empty space than is left by the close-packing of equal spheres, and so many solids have been ruled out as counterexamples to Ulam's conjecture.[2] Nevertheless, there is an infinite space of possible shapes that have not been ruled out.
Yoav Kallus has shown that at least among point-symmetric bodies, the ball constitutes a local maximum of the fraction of empty space forced.[3] That is, any point-symmetric solid that does not deviate too much from a ball can be packed with greater efficiency than can balls.
Analogs in other dimensions
The analog of Ulam's packing conjecture in two dimensions would say that no convex shape forces more than ≈9.31% of the plane to remain uncovered, since that is the fraction of empty space left uncovered in the densest packing of disks. However, the regular octagon and smoothed octagon give counterexamples. It is conjectured that regular heptagons force the largest fraction of the plane to remain uncovered.[4] In dimensions above three (excluding 8 and 24), the situation is complicated by the fact that the analogs of the Kepler conjecture remain open.
References
1. Gardner, Martin (1995), New Mathematical Diversions (Revised Edition), Washington: Mathematical Association of America, p. 251
2. de Graaf, Joost; van Roij, René; Dijkstra, Marjolein (2011), "Dense Regular Packings of Irregular Nonconvex Particles", Physical Review Letters, 107 (15): 155501, arXiv:1107.0603, Bibcode:2011PhRvL.107o5501D, doi:10.1103/PhysRevLett.107.155501, PMID 22107298.
3. Kallus, Yoav (2014), "The 3-ball is a local pessimum for packing", Advances in Mathematics, 264: 355–370, arXiv:1212.2551, doi:10.1016/j.aim.2014.07.015, MR 3250288.
4. Kallus, Yoav (2015), "Pessimal packing shapes", Geometry & Topology, 19: 343–363, arXiv:1305.0289, doi:10.2140/gt.2015.19.343, MR 3318753.
Ulcer index
The ulcer index is a stock market risk measure or technical analysis indicator devised by Peter Martin in 1987,[1] and published by him and Byron McCann in their 1989 book The Investors Guide to Fidelity Funds. It is a measure of downwards volatility, the amount of drawdown or retracement over a period.
Other volatility measures like standard deviation treat up and down movement equally, but most market traders are long and so welcome upward movement in prices; it is the downside that causes stress and the stomach ulcers that the index's name suggests. (The name pre-dates the discovery that most gastric ulcers are caused by a bacterium rather than stress.)
The term ulcer index has also been used (later) by Steve Shellans, editor and publisher of MoniResearch Newsletter for a different calculation, also based on the ulcer-causing potential of drawdowns.[2] Shellans' index is not described in this article.
Calculation
The index is based on a given past period of N days. Working from oldest to newest, the highest price (highest closing price) seen so far is maintained, and any close below that is a retracement, expressed as a percentage
$R_{i}=100\times {price_{i}-maxprice \over maxprice}$
For example, if the high so far is $5.00 then a price of $4.50 is a retracement of −10%. The first R is always 0, there being no drawdown from a single price. The quadratic mean (or root mean square) of these values is taken, similar to a standard deviation calculation.
$Ulcer={\sqrt {R_{1}^{2}+R_{2}^{2}+\cdots R_{N}^{2} \over N}}$
Because the R values are squared, it does not matter whether they are expressed as positives or negatives; both come out as a positive ulcer index.
The calculation is relatively immune to the sampling rate used. It gives similar results when calculated on weekly prices as it does on daily prices. Martin advises against sampling less often than weekly though, since for instance with quarterly prices a fall and recovery could take place entirely within such a period and thereby not appear in the index.
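A direct implementation of the calculation, applied to a toy two-price series matching the example above (the function name is ours):

```python
import math

def ulcer_index(closes):
    """Ulcer index, in percent, of a series of closing prices."""
    peak = closes[0]
    sq_sum = 0.0
    for price in closes:
        peak = max(peak, price)            # highest close seen so far
        r = 100.0 * (price - peak) / peak  # retracement, always <= 0
        sq_sum += r * r
    return math.sqrt(sq_sum / len(closes))

# High of $5.00, then a close at $4.50: retracements 0% and -10%,
# so the index is sqrt((0 + 100) / 2) = sqrt(50).
print(round(ulcer_index([5.00, 4.50]), 4))  # → 7.0711
```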
Usage
Martin recommends his index as a measure of risk in various contexts where usually the standard deviation (SD) is used for that purpose. For example, the Sharpe ratio, which rates an investment's excess return (return above a safe cash rate) against risk, is
$Sharpe\,ratio={Return-RiskFreeReturn \over standard\,deviation}$
The ulcer index can replace the SD to make an ulcer performance index (UPI) or Martin ratio,
$UPI={Return-RiskFreeReturn \over ulcer\,index}$
In both cases, annualized rates of return would be used (net of costs, inclusive of dividend reinvestment, etc.).
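The substitution is mechanical; a sketch with made-up figures (the function name is ours):

```python
def martin_ratio(annual_return, risk_free_return, ulcer_index):
    """Ulcer performance index (UPI): excess return per unit of ulcer index."""
    return (annual_return - risk_free_return) / ulcer_index

# E.g. a 12% annualized return over a 5% cash rate, with an ulcer index of 3.5:
print(martin_ratio(12.0, 5.0, 3.5))  # → 2.0
```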
The index can also be charted over time and used as a kind of technical analysis indicator, to show stocks going into ulcer-forming territory (for one's chosen time-frame), or to compare volatility in different stocks.[3] As with the Sharpe Ratio, a higher value of UPI is better than a lower value (investors prefer more return for less risk).
References
1. Peter Martin's Ulcer Index page
2. Pankin Managed Funds, client newsletter 3rd Quarter 1996, Questions and Answers
3. Discovering the Absolute-Breadth Index and the Ulcer Index at Investopedia.com
Further reading
Related topics
• Hindenburg Omen
Books
• The Investor's Guide to Fidelity Funds, Peter Martin and Byron McCann, John Wiley & Sons, 1989. Now out of print, but offered for sale in electronic form by Martin at his web site.
External links
• Peter Martin's web site - Ulcer Index: An Alternative Approach to the Measurement of Investment Risk & Risk-Adjusted Performance
Technical analysis
Concepts
• Breakout
• Dead cat bounce
• Dow theory
• Elliott wave principle
• Market trend
Charts
• Candlestick
• Renko
• Kagi
• Line
• Open-high-low-close
• Point and figure
Patterns
Chart
• Broadening top
• Cup and handle
• Double top and double bottom
• Flag and pennant
• Gap
• Head and shoulders
• Island reversal
• Price channels
• Triangle
• Triple top and triple bottom
• Wedge pattern
Candlestick
Simple
• Doji
• Hammer
• Hanging man
• Inverted hammer
• Marubozu
• Shooting star
• Spinning top
Complex
• Hikkake pattern
• Morning star
• Three black crows
• Three white soldiers
Indicators
Support &
resistance
• Bottom
• Fibonacci retracement
• Pivot point (PP)
• Top
Trend
• Average directional index (A.D.X.)
• Commodity channel index (CCI)
• Detrended price oscillator (DPO)
• Know sure thing oscillator (KST)
• Ichimoku Kinkō Hyō
• Moving average convergence/divergence (MACD)
• Mass index
• Moving average (MA)
• Parabolic SAR (SAR)
• Smart money index (SMI)
• Trend line
• Trix
• Vortex indicator (VI)
Momentum
• Money flow index (MFI)
• Relative strength index (RSI)
• Stochastic oscillator
• True strength index (TSI)
• Ultimate oscillator
• Williams %R (%R)
Volume
• Accumulation/distribution line
• Ease of movement (EMV)
• Force index (FI)
• Negative volume index (NVI)
• On-balance volume (OBV)
• Put/call ratio (PCR)
• Volume–price trend (VPT)
Volatility
• Average true range (ATR)
• Bollinger Bands (BB)
• Donchian channel
• Keltner channel
• CBOE Market Volatility Index (VIX)
• Standard deviation (σ)
Breadth
• Advance–decline line (ADL)
• Arms index (TRIN)
• McClellan oscillator
Other
• Coppock curve
• Ulcer index
Analysts
• John Bollinger
• Ned Davis
• Charles Dow
• Ralph Nelson Elliott
• Bob Farrell
• John Murphy
• Mark Hulbert
Ulf Grenander
Ulf Grenander (23 July 1923 – 12 May 2016) was a Swedish statistician and professor of applied mathematics at Brown University.
Ulf Grenander
Born: 23 July 1923, Västervik, Sweden
Died: 12 May 2016 (aged 92),[1] Providence, Rhode Island, US
Nationality: Swedish
Alma mater: Stockholm University; Uppsala University
Known for: Sieve estimation; Pattern theory; Maximum subarray problem;[2] Computational anatomy
Awards: Royal Swedish Academy of Sciences; National Academy of Sciences
Scientific career
Fields: Statistics; Mathematics; Computer science
Institutions: Stockholm University; Brown University
Doctoral advisor: Harald Cramér
Doctoral students: Sven Erlander
Other notable students: Per Martin-Löf
Influenced: David Mumford
His early research was in probability theory, stochastic processes, time series analysis, and statistical theory (particularly the order-constrained estimation of cumulative distribution functions using his sieve estimator). In later decades, Grenander contributed to computational statistics, image processing, pattern recognition, and artificial intelligence. He coined the term pattern theory to distinguish it from pattern recognition.[3]
Honors
In 1966 Grenander was elected to the Royal Academy of Sciences of Sweden, and in 1996 to the US National Academy of Sciences. In 1998 he was an Invited Speaker of the International Congress of Mathematicians in Berlin.[4] He received an honorary doctorate in 1994 from the University of Chicago, and in 2005 from the Royal Institute of Technology of Stockholm, Sweden.[5]
Education
Grenander earned his undergraduate degree at Uppsala University.[6] He earned his Ph.D. at Stockholm University in 1950 under the supervision of Harald Cramér.[7]
Appointments
He was an associate professor at Stockholm University in 1950–1951, at the University of Chicago in 1951–1952, and at the University of California, Berkeley in 1952–1953. He returned to Stockholm University for 1953–1957, was at Brown University in 1957–1958, and was again at Stockholm University from 1958 to 1966, where in 1959 he succeeded Harald Cramér as the professor of actuarial science and mathematical statistics. From 1966 until his retirement, Grenander was L. Herbert Ballou University Professor at Brown University. From 1969 to 1974 he was also professor of applied mathematics at the Royal Institute of Technology.[8]
Selected works
• Grenander, Ulf (2012). A Calculus of Ideas: A Mathematical Study of Human Thought. World Scientific Publishing. ISBN 978-9814383189.
• Grenander, Ulf; Miller, Michael (2007). Pattern Theory: From Representation to Inference. Oxford University Press. ISBN 978-0199297061.
• Grenander, Ulf (1996). Elements of Pattern Theory. Johns Hopkins University Press. ISBN 978-0801851889.
• Grenander, Ulf (1994). General Pattern Theory. Oxford Science Publications. ISBN 978-0198536710.
• Grenander, Ulf (1982). Mathematical Experiments on the Computer. Academic Press. ISBN 9780123017505.[9]
• Grenander, Ulf (1981). Abstract Inference. Wiley. ISBN 978-0471082675.
• Grenander, Ulf (1963). Probabilities on Algebraic Structures. Wiley.[10]
• Grenander, Ulf (1959). Probability and Statistics: The Harald Cramér Volume. Wiley.
• Szegő, Gábor; Grenander, Ulf (1958). Toeplitz forms and their applications. Chelsea.[11]
• Grenander, Ulf; Rosenblatt, M (1957). Statistical Analysis of Stationary Time Series. American Mathematical Society. ISBN 978-0-8284-0320-7.[12]
References
1. Ulf Grenander Obituary - Providence, RI | The Providence Journal, accessed 28 May 2016
2. Bentley, Jon (1984). "Programming pearls: algorithm design techniques". Communications of the ACM. 27 (9): 865–873. doi:10.1145/358234.381162. S2CID 207565329..
3. Mumford, David; Desolneux, Agnès (2010). Pattern Theory: The Stochastic Analysis of Real-World Signals. A K Peters/CRC Press. p. 1. ISBN 978-1568815794. The term "pattern theory" was coined by Ulf Grenander to distinguish his approach to the analysis of patterned structures in the world from "pattern recognition."
4. Grenander, Ulf (1998). "Strategies for seeing". Doc. Math. (Bielefeld) Extra Vol. ICM Berlin, 1998, vol. III. pp. 585–592.
5. KTH: Hedersdoktorer 1944–2008 Archived 2010-03-24 at the Wayback Machine, accessed 5 April 2009
6. Mukhopadhyay, Nitis (2006). "A conversation with Ulf Grenander". Statistical Science. 21 (3): 404–426. arXiv:math/0701092. Bibcode:2007math......1092M. doi:10.1214/088342305000000313. S2CID 62516244.
7. Grenander, Ulf (1950). Stochastic processes and statistical inference. Arkiv för matematik, 0004-2080; 1:17 (in Swedish). Stockholm: Almqvist & Wiksell.
8. KTH: En kort historik över professorer vid Institutionen för Matematik, accessed 1 maj 2010
9. Perlis, Alan J. (1985). "Review: Mathematical experiments on the computer by Ulf Grenander" (PDF). Bull. Amer. Math. Soc. (N.S.). 12 (1): 143–145. doi:10.1090/s0273-0979-1985-15322-4.
10. Furstenberg, Harry (1965). "Review: Probabilities on algebraic structures by Ulf Grenander" (PDF). Bull. Amer. Math. Soc. 71 (1): 132–135. doi:10.1090/s0002-9904-1965-11249-6.
11. Spitzer, F. (1959). "Review: Toeplitz Forms and Their Applications by Ulf Grenander and Gabor Szegő". Bull. Amer. Math. Soc. 65 (2): 97–101. doi:10.1090/s0002-9904-1959-10296-2.
12. Darling, Donald A. (1958). "Review: Statistical analysis of stationary time series by Ulf Grenander and Murray Rosenblatt" (PDF). Bull. Amer. Math. Soc. 64 (2): 70–71. doi:10.1090/s0002-9904-1958-10172-x.
• Mukhopadhyay, Nitis (2006). "A conversation with Ulf Grenander". Statistical Science. 21 (3): 404–426. arXiv:math/0701092. Bibcode:2007math......1092M. doi:10.1214/088342305000000313. ISSN 0883-4237. MR 2339138. S2CID 62516244.
External links
• Homepage of Ulf Grenander at Brown University
• Pattern Theory: Grenander's Ideas and Examples – a video lecture by David Mumford
• Ulf Grenander at the Mathematics Genealogy Project
Statistics
• Outline
• Index
Descriptive statistics
Continuous data
Center
• Mean
• Arithmetic
• Arithmetic-Geometric
• Cubic
• Generalized/power
• Geometric
• Harmonic
• Heronian
• Heinz
• Lehmer
• Median
• Mode
Dispersion
• Average absolute deviation
• Coefficient of variation
• Interquartile range
• Percentile
• Range
• Standard deviation
• Variance
Shape
• Central limit theorem
• Moments
• Kurtosis
• L-moments
• Skewness
Count data
• Index of dispersion
Summary tables
• Contingency table
• Frequency distribution
• Grouped data
Dependence
• Partial correlation
• Pearson product-moment correlation
• Rank correlation
• Kendall's τ
• Spearman's ρ
• Scatter plot
Graphics
• Bar chart
• Biplot
• Box plot
• Control chart
• Correlogram
• Fan chart
• Forest plot
• Histogram
• Pie chart
• Q–Q plot
• Radar chart
• Run chart
• Scatter plot
• Stem-and-leaf display
• Violin plot
Data collection
Study design
• Effect size
• Missing data
• Optimal design
• Population
• Replication
• Sample size determination
• Statistic
• Statistical power
Survey methodology
• Sampling
• Cluster
• Stratified
• Opinion poll
• Questionnaire
• Standard error
Controlled experiments
• Blocking
• Factorial experiment
• Interaction
• Random assignment
• Randomized controlled trial
• Randomized experiment
• Scientific control
Adaptive designs
• Adaptive clinical trial
• Stochastic approximation
• Up-and-down designs
Observational studies
• Cohort study
• Cross-sectional study
• Natural experiment
• Quasi-experiment
Statistical inference
Statistical theory
• Population
• Statistic
• Probability distribution
• Sampling distribution
• Order statistic
• Empirical distribution
• Density estimation
• Statistical model
• Model specification
• Lp space
• Parameter
• location
• scale
• shape
• Parametric family
• Likelihood (monotone)
• Location–scale family
• Exponential family
• Completeness
• Sufficiency
• Statistical functional
• Bootstrap
• U
• V
• Optimal decision
• loss function
• Efficiency
• Statistical distance
• divergence
• Asymptotics
• Robustness
Frequentist inference
Point estimation
• Estimating equations
• Maximum likelihood
• Method of moments
• M-estimator
• Minimum distance
• Unbiased estimators
• Mean-unbiased minimum-variance
• Rao–Blackwellization
• Lehmann–Scheffé theorem
• Median unbiased
• Plug-in
Interval estimation
• Confidence interval
• Pivot
• Likelihood interval
Ulisse Stefanelli
Ulisse Stefanelli is an Italian mathematician. He is currently professor at the Faculty of Mathematics of the University of Vienna. His research focuses on calculus of variations, partial differential equations, and materials science.[1]
Ulisse Stefanelli
Alma materUniversity of Pavia, (Ph.D., 2003)
Known forPlasticity, Rate-independent systems, Gradient flow, Doubly nonlinear equation, Crystallization
Awards
• Vinti Prize (2015)
• Richard von Mises Prize (2010)
• Friedrich Wilhelm Bessel Research Award (2009)
• ERC Starting Grant (2007)
Scientific career
FieldsCalculus of variations, Partial differential equations, Materials science
InstitutionsIstituto di Matematica Applicata e Tecnologie Informatiche E. Magenes, University of Vienna
Websitehttps://www.mat.univie.ac.at/~stefanelli/
Biography
Stefanelli obtained his PhD under the guidance of Pierluigi Colli in 2003 at the University of Pavia. He has held a researcher position at the Istituto di Matematica Applicata e Tecnologie Informatiche E. Magenes of the National Research Council (Italy) in Pavia since 2001. In 2013 he was appointed to the chair of Applied Mathematics and Modeling at the Faculty of Mathematics of the University of Vienna. He has also conducted research at the University of Texas at Austin, ETH Zurich, the University of Zurich, the Weierstrass Institute in Berlin, and the Laboratoire de Mécanique et Génie Civil in Montpellier.
Since 2017 he has been the speaker of the Spezialforschungsbereich F65 Taming Complexity in Partial Differential Systems, funded by the Austrian Science Fund.[2]
Awards
• Vinti Prize of the Unione Matematica Italiana (2015)[3]
• Richard von Mises Prize of the GAMM (2010)[4]
• Friedrich Wilhelm Bessel-Forschungspreis of the Alexander von Humboldt Foundation (2009)
• ERC Starting Grant (2007)[5]
Selected publications
• Mainini, Edoardo; Stefanelli, Ulisse (2014-06-01). "Crystallization in Carbon Nanostructures". Communications in Mathematical Physics. 328 (2): 545–571. Bibcode:2014CMaPh.328..545M. doi:10.1007/s00220-014-1981-5. ISSN 1432-0916. S2CID 253744289.
• Mielke, Alexander; Stefanelli, Ulisse (2011-01-01). "Weighted energy-dissipation functionals for gradient flows". ESAIM: Control, Optimisation and Calculus of Variations. 17 (1): 52–85. doi:10.1051/cocv/2009043. ISSN 1292-8119.
• Auricchio, F.; Reali, A.; Stefanelli, U. (2009-04-15). "A macroscopic 1D model for shape memory alloys including asymmetric behaviors and transformation-dependent elastic properties". Computer Methods in Applied Mechanics and Engineering. 198 (17): 1631–1637. Bibcode:2009CMAME.198.1631A. doi:10.1016/j.cma.2009.01.019. ISSN 0045-7825.
• Caffarelli, Luis A.; Stefanelli, Ulisse (2008-07-03). "A Counterexample to C 2,1 Regularity for Parabolic Fully Nonlinear Equations". Communications in Partial Differential Equations. 33 (7): 1216–1234. arXiv:math/0701769. doi:10.1080/03605300701518240. ISSN 0360-5302. S2CID 15798910.
• Mielke, Alexander; Roubíček, Tomáš; Stefanelli, Ulisse (2008-03-01). "Γ-limits and relaxations for rate-independent evolutionary problems". Calculus of Variations and Partial Differential Equations. 31 (3): 387–416. doi:10.1007/s00526-007-0119-4. ISSN 1432-0835. S2CID 55568258.
• Stefanelli, Ulisse (2008-01-01). "The Brezis–Ekeland Principle for Doubly Nonlinear Equations". SIAM Journal on Control and Optimization. 47 (3): 1615–1642. doi:10.1137/070684574. ISSN 0363-0129.
• Auricchio, Ferdinando; Mielke, Alexander; Stefanelli, Ulisse (2008-01-01). "A rate-independent model for the isothermal quasi-static evolution of shape-memory materials". Mathematical Models and Methods in Applied Sciences. 18 (1): 125–164. arXiv:0708.4378. doi:10.1142/S0218202508002632. ISSN 0218-2025. S2CID 17499836.
• Auricchio, F.; Reali, A.; Stefanelli, U. (2007-02-01). "A three-dimensional model describing stress-induced solid phase transformation with permanent inelasticity". International Journal of Plasticity. 23 (2): 207–226. doi:10.1016/j.ijplas.2006.02.012. ISSN 0749-6419.
References
1. "Ulisse Stefanelli".
2. "FWF announcement on SFB F65".
3. "Vinti prize page".
4. "Richard Von Mises Prize awardees".
5. "ERC StG awardees 2007" (PDF).
Height (abelian group)
In mathematics, the height of an element g of an abelian group A is an invariant that captures its divisibility properties: it is the largest natural number N such that the equation Nx = g has a solution x ∈ A, or the symbol ∞ if there is no such N. The p-height considers only divisibility properties by the powers of a fixed prime number p. The notion of height admits a refinement so that the p-height becomes an ordinal number. Height plays an important role in Prüfer theorems and also in Ulm's theorem, which describes the classification of certain infinite abelian groups in terms of their Ulm factors or Ulm invariants.
Definition of height
Let A be an abelian group and g an element of A. The p-height of g in A, denoted h_p(g), is the largest natural number n such that the equation p^n x = g has a solution in x ∈ A, or the symbol ∞ if a solution exists for all n. Thus h_p(g) = n if and only if g ∈ p^n A and g ∉ p^(n+1) A. This allows one to refine the notion of height.
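To make the definition concrete, the following sketch (an illustration added here, not part of the original article; the function names are ours) computes p-heights in a finite direct sum of cyclic p-groups A = Z/p^(e_1) ⊕ … ⊕ Z/p^(e_k), both by brute force from the definition and via the closed form h_p(g) = minimum p-adic valuation of the nonzero components of g:

```python
from itertools import product

def p_height(g, exponents, p):
    """Brute-force p-height of g in A = Z/p^e1 + ... + Z/p^ek:
    the largest n such that p^n * x = g has a solution x in A
    (infinity for g = 0, the only element of infinite height
    in a finite p-group)."""
    moduli = [p ** e for e in exponents]
    g = tuple(gi % m for gi, m in zip(g, moduli))
    if all(gi == 0 for gi in g):
        return float("inf")
    n = 0
    while True:
        # Test whether g lies in p^(n+1) A, i.e. p^(n+1) x = g is solvable.
        solvable = any(
            all((p ** (n + 1) * xi - gi) % m == 0
                for xi, gi, m in zip(x, g, moduli))
            for x in product(*(range(m) for m in moduli))
        )
        if not solvable:
            return n
        n += 1

def p_height_by_valuation(g, exponents, p):
    """Closed form: the minimum p-adic valuation of the nonzero components."""
    vals = []
    for gi, e in zip(g, exponents):
        gi %= p ** e
        if gi == 0:
            continue  # zero component: height infinity, ignored by the min
        v = 0
        while gi % p == 0:
            gi //= p
            v += 1
        vals.append(v)
    return min(vals) if vals else float("inf")

# Example: A = Z/8 + Z/2 with p = 2.
assert p_height((2, 0), (3, 1), 2) == 1   # 2 = 2^1 * 1 in Z/8
assert p_height((4, 0), (3, 1), 2) == 2   # 4 lies in 4A but not in 8A = 0
assert p_height((1, 1), (3, 1), 2) == 0
assert p_height((0, 0), (3, 1), 2) == float("inf")
# The two computations agree on every element of A.
assert all(p_height(g, (3, 1), 2) == p_height_by_valuation(g, (3, 1), 2)
           for g in product(range(8), range(2)))
```

The brute-force search is only feasible for tiny groups, but it matches the definition literally, while the valuation formula reflects the fact that p^n A splits componentwise across the direct sum.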
For any ordinal α, there is a subgroup p^α A of A which is the image of the multiplication-by-p map iterated α times, defined using transfinite induction:
• p^0 A = A;
• p^(α+1) A = p(p^α A);
• p^β A = ∩_(α<β) p^α A if β is a limit ordinal.
The subgroups p^α A form a decreasing filtration of the group A, and their intersection is the subgroup of the p-divisible elements of A, whose elements are assigned height ∞. The modified p-height h_p*(g) = α if g ∈ p^α A but g ∉ p^(α+1) A. The construction of p^α A is functorial in A; in particular, subquotients of the filtration are isomorphism invariants of A.
Ulm subgroups
Let p be a fixed prime number. The (first) Ulm subgroup of an abelian group A, denoted U(A) or A^1, is p^ω A = ∩_n p^n A, where ω is the smallest infinite ordinal. It consists of all elements of A of infinite height. The family {U^σ(A)} of Ulm subgroups indexed by ordinals σ is defined by transfinite induction:
• U^0(A) = A;
• U^(σ+1)(A) = U(U^σ(A));
• U^τ(A) = ∩_(σ<τ) U^σ(A) if τ is a limit ordinal.
Equivalently, U^σ(A) = p^(ωσ) A, where ωσ is the product of the ordinals ω and σ.
Ulm subgroups form a decreasing filtration of A whose quotients U_σ(A) = U^σ(A)/U^(σ+1)(A) are called the Ulm factors of A. This filtration stabilizes, and the smallest ordinal τ such that U^τ(A) = U^(τ+1)(A) is the Ulm length of A. The smallest Ulm subgroup U^τ(A), also denoted U^∞(A) and p^∞ A, is the largest p-divisible subgroup of A; if A is a p-group, then U^∞(A) is divisible, and as such it is a direct summand of A.
For every Ulm factor U_σ(A) the p-heights of its elements are finite, and they are unbounded for every Ulm factor except possibly the last one, namely U_(τ−1)(A) when the Ulm length τ is a successor ordinal.
Ulm's theorem
The second Prüfer theorem provides a straightforward extension of the fundamental theorem of finitely generated abelian groups to countable abelian p-groups without elements of infinite height: each such group is isomorphic to a direct sum of cyclic groups whose orders are powers of p. Moreover, the cardinality of the set of summands of order p^n is uniquely determined by the group, and each sequence of at most countable cardinalities is realized. Helmut Ulm (1933) found an extension of this classification theory to general countable p-groups: their isomorphism class is determined by the isomorphism classes of the Ulm factors and the p-divisible part.
Ulm's theorem. Let A and B be countable abelian p-groups such that for every ordinal σ their Ulm factors are isomorphic, U_σ(A) ≅ U_σ(B), and the p-divisible parts of A and B are isomorphic, U^∞(A) ≅ U^∞(B). Then A and B are isomorphic.
There is a complement to this theorem, first stated by Leo Zippin (1935) and proved in Kurosh (1960), which addresses the existence of an abelian p-group with given Ulm factors.
Let τ be an ordinal and {A_σ} be a family of countable abelian p-groups indexed by the ordinals σ < τ such that the p-heights of elements of each A_σ are finite and, except possibly for the last one, are unbounded. Then there exists a reduced abelian p-group A of Ulm length τ whose Ulm factors are isomorphic to these p-groups, U_σ(A) ≅ A_σ.
Ulm's original proof was based on an extension of the theory of elementary divisors to infinite matrices.
Alternative formulation
George Mackey and Irving Kaplansky generalized Ulm's theorem to certain modules over a complete discrete valuation ring. They introduced invariants of abelian groups that lead to a direct statement of the classification of countable periodic abelian groups: given an abelian group A, a prime p, and an ordinal α, the corresponding αth Ulm invariant is the dimension of the quotient
p^α A[p] / p^(α+1) A[p],
where B[p] denotes the p-torsion of an abelian group B, i.e. the subgroup of elements of order p, viewed as a vector space over the finite field with p elements.
A countable periodic reduced abelian group is determined uniquely up to isomorphism by its Ulm invariants for all prime numbers p and countable ordinals α.
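For a finite abelian p-group A = Z/p^(e_1) ⊕ … ⊕ Z/p^(e_k) the only nonzero Ulm invariants occur at finite ordinals n, and the nth invariant counts the cyclic summands of order p^(n+1). The following sketch (our own illustration, with hypothetical function names, not part of the article) checks this by computing dim p^n A[p] / p^(n+1) A[p] through direct enumeration:

```python
from itertools import product
from math import log

def ulm_invariant(exponents, p, n):
    """dim over F_p of p^n A[p] / p^(n+1) A[p] for the finite group
    A = Z/p^e1 + ... + Z/p^ek, computed by direct enumeration."""
    moduli = [p ** e for e in exponents]
    elements = list(product(*(range(m) for m in moduli)))

    def p_power_subgroup(k):
        # The subgroup p^k A = { p^k * x : x in A }.
        return {tuple((p ** k * xi) % m for xi, m in zip(x, moduli))
                for x in elements}

    def p_torsion(subgroup):
        # The elements of order dividing p, i.e. the subgroup S[p].
        return {s for s in subgroup
                if all((p * si) % m == 0 for si, m in zip(s, moduli))}

    a = len(p_torsion(p_power_subgroup(n)))
    b = len(p_torsion(p_power_subgroup(n + 1)))
    # Both torsion subgroups are F_p-vector spaces, so the quotient
    # dimension is log base p of the ratio of their orders.
    return round(log(a / b, p))

# A = Z/2 + Z/8 + Z/8 with p = 2: the nth invariant counts
# the summands of order 2^(n+1).
assert [ulm_invariant((1, 3, 3), 2, n) for n in range(4)] == [1, 0, 2, 0]
```

This is exactly the finite-group shadow of the classification: the multiset of exponents e_i, hence the isomorphism class, is recovered from the invariants.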
Their simplified proof of Ulm's theorem served as a model for many further generalizations to other classes of abelian groups and modules.
References
• László Fuchs (1970), Infinite abelian groups, Vol. I. Pure and Applied Mathematics, Vol. 36. New York–London: Academic Press MR0255673
• Irving Kaplansky and George Mackey, A generalization of Ulm's theorem. Summa Brasil. Math. 2, (1951), 195–202 MR0049165
• Kurosh, A. G. (1960), The theory of groups, New York: Chelsea, MR 0109842
• Ulm, H (1933). "Zur Theorie der abzählbar-unendlichen Abelschen Gruppen". Math. Ann. 107: 774–803. doi:10.1007/bf01448919. JFM 59.0143.03.
Ulrica Wilson
Ulrica Wilson is a mathematician specializing in the theory of noncommutative rings and in the combinatorics of matrices.[1] She is an associate professor at Morehouse College, associate director of diversity and outreach at the Institute for Computational and Experimental Research in Mathematics (ICERM),[2][1] and a former vice president of the National Association of Mathematicians.[3]
Education and career
Wilson is African-American,[2] and originally from Massachusetts, but grew up in Birmingham, Alabama.[2] She is a 1992 graduate of Spelman College,[4] and completed her Ph.D. at Emory University in 2004. Her dissertation, Cyclicity of Division Algebras over an Arithmetically Nice Field, was supervised by Eric Brussel.[5]
Wilson has contributed to the advancement of black women, women of color, and women in general in the mathematical sciences through EDGE (Enhancing Diversity in Graduate Education),[6] a program that supports members of underrepresented groups in achieving their academic goals and obtaining doctoral degrees.[7]
After two stints as a postdoctoral researcher,[2] she joined the Morehouse College faculty in 2007, and became associate director at ICERM in 2013.[1] She serves on the Education Advisory Board for ICERM.[8]
In collaboration with ICERM, Wilson is also co-director of the REUF program,[9] the Research Experiences for Undergraduate Faculty. The program was founded under the American Institute of Mathematics (AIM) to provide undergraduate faculty with a community of scholars that supports the exchange and expansion of research ideas and projects to engage in with undergraduate students.[9]
The EDGE Program
In 2011, Wilson became co-director of the EDGE Program, a program to mentor, train, and support the academic development and research activities of women in mathematics. The program was designed to focus on training women for careers in the mathematical sciences, especially women from underrepresented groups, and has helped increase the number of women, particularly from minority groups, taking on roles in academia, industry, and government. The EDGE program began by offering summer sessions to equip women for research, providing annual conferences, mini-research projects, and collaborations with prestigious universities. It has since expanded, and its activities now center on providing ongoing support for the academic development and research productivity of women at several critical stages of their careers. EDGE focuses on women at four career stages: entering graduate students, advanced graduate students, postdoctoral students, and early-career researchers. Since Wilson became co-director, over 50 women have participated in various EDGE program activities and 18 EDGE participants have received their PhDs. Numerous women have been granted sabbatical support, and one woman was able to use her mini-sabbatical to continue building her research with a senior mathematician at Purdue University.[7]
Recognition
In 2003, Wilson was awarded the Marshall Hall Award from Emory College of Arts and Sciences in recognition of excellent performance while teaching and outstanding research as a doctoral student.[10]
Wilson was the Morehouse College Vulcan Teaching Excellence Award winner for 2016–2017.[11] She was recognized by Mathematically Gifted & Black as a Black History Month 2017 Honoree.[12] In 2018, she won the Presidential Award for Excellence in Science, Mathematics, and Engineering Mentoring.[13] She is on the Board of directors of Enhancing Diversity in Graduate Education (EDGE), a program that helps women entering graduate studies in the mathematical sciences. She was included in the 2019 class of fellows of the Association for Women in Mathematics " for her many years of supporting the professional development of women in their pursuit of graduate degrees in mathematics, most visibly through mentoring, teaching and program administration within the EDGE Program, and also as associate director of diversity and outreach at The Institute for Computational and Experimental Research in Mathematics (ICERM)".[14] She was awarded the 2023 Award for Impact on the Teaching and Learning of Mathematics from the AMS for her "many initiatives on the teaching and learning of mathematics for many different segments of the mathematics community."[15]
References
1. Ulrica Wilson: Associate Professor of Mathematics, Morehouse College, retrieved 2018-10-07
2. "Ulrica Wilson", Mathematically Gifted and Black: Black History Month 2017 Honoree, The Network of Minorities in Mathematical Sciences, retrieved 2021-11-12
3. "The NAM Board of Directors" (PDF), NAM Newsletter, 18 (4): 12, Spring 2018
4. Mulcahy, Colm (2017), "A Century of Mathematical Excellence at Spelman College", JMM 2017, doi:10.22595/scpubs.00013
5. Ulrica Wilson at the Mathematics Genealogy Project
6. "EDGE".
7. Wilson, Ulrica. "EDGE Program". Retrieved November 15, 2021.
8. "ICERM - Trustee and Advisory Boards - Trustee & Advisory Boards". icerm.brown.edu. Retrieved 2021-07-11.
9. Hogben, Leslie; Wilson, Ulrica (2014). "AIM's Research Experiences for Undergraduate Faculty program". Involve: A Journal of Mathematics. 7 (3): 343–353.
10. "Department of MATH - Graduate Programs". www.math.emory.edu. Retrieved 2021-11-16.
11. Mathematics Professor Ulrica Wilson Named Morehouse College's 2016–2017 Vulcan Teaching Excellence Award Winner, Morehouse College, May 26, 2017, retrieved 2018-10-07
12. "Ulrica Wilson". Mathematically Gifted & Black.
13. Morehouse Professors Win White House/National Science Foundation Awards, Morehouse College, July 2, 2018, retrieved 2018-10-07
14. 2019 Class of AWM Fellows, Association for Women in Mathematics, retrieved 2019-01-08
15. "News from the AMS". American Mathematical Society. Retrieved 2023-04-08.
Ulrich Görtz
Ulrich Görtz (1973)[1] is a German mathematician specialising in arithmetic geometry.
Education and career
From 1993 to 1997, Görtz studied mathematics at the University of Münster.[1] He completed his PhD at the University of Cologne in 2000;[1] his advisor was Michael Rapoport.[2] In 2006, Görtz habilitated at the University of Bonn.[1] From 2008 to 2009 he was the recipient of a Heisenberg-Stipendium of the German Research Foundation (DFG). He received the Von-Kaven-Ehrenpreis of the DFG.[3] Since 2009, Görtz has been a professor at the University of Duisburg-Essen.[1]
Books
Together with Torsten Wedhorn, Görtz authored the textbook Algebraic Geometry (Part I: Schemes) in 2010.[4] Since August 2015,[5] Görtz has been a member of the editorial board of the journal Results in Mathematics.[6]
• Görtz, Ulrich; Wedhorn, Torsten (2020). Algebraic Geometry I: Schemes. With Examples and Exercises. Wiesbaden, Germany. ISBN 978-3-658-30733-2. OCLC 1181995650.
Personal life
Görtz is an Esperantist, and had been active in the youth organisation of the German Esperanto Association.[7]
References
1. "Pressemitteilung der Universität Duisburg-Essen". Universität Duisburg-Essen. Retrieved 25 September 2021.
2. Ulrich Görtz at the Mathematics Genealogy Project
3. "UDE: Mathematik weiter im Aufwind". idw – Informationsdienst Wissenschaft. Retrieved 25 September 2021.
4. Algebraic Geometry Part I: Schemes. With Examples and Exercises. Springer. Retrieved 25 September 2021.
5. "Prof. Dr. Ulrich Görtz". Essener Seminar. Retrieved 25 September 2021.
6. "Results in Mathematics". Springer.com. Retrieved 25 September 2021.
7. "Ulrich Goertz (personal website)". Retrieved 26 September 2021.
Ulrich Kulisch
Ulrich W. Kulisch (born 1933 in Breslau) is a German mathematician specializing in numerical analysis, including the computer implementation of interval arithmetic.
Experience
After graduation from high school in Freising, Kulisch studied mathematics at the University of Munich and the Technical University of Munich where in 1961 he completed his dissertation (Behandlung von Differentialgleichungen im Komplexen auf dem elektronischen Analogrechner) under Josef Heinhold.[1] After his postdoctoral qualification in 1963, he was acting Professor for Numerical Mathematics of the University of Munich from 1964 to 1966, and from 1966 Professor of Mathematics and Director of the Institute of Applied Mathematics at the University of Karlsruhe.
During his time in academia, Kulisch spent several sabbaticals abroad. He spent time in 1969/1970 at the Mathematics Research Center of the University of Wisconsin–Madison under Ramon Edgar Moore; in 1972/1973 and 1978/1979 at IBM's Thomas J. Watson Research Center in Yorktown Heights (where he worked alongside Willard L. Miranker (1932–2011)); and in 1998 and 1999/2000 at the Electrotechnical Laboratory at the University of Tsukuba.[2]
Kulisch was one of the pioneers of interval arithmetic in Germany in the 1960s and helped to found the discipline, along with Karl Nickel and Fritz Krückeberg. His implementations of interval arithmetic on computers started with Algol in the 1960s. Kulisch developed software with automatic result verification in collaboration with Nixdorf Computer (Pascal-XSC and others), IBM (the ACRITH and ACRITH-XSC projects), and Siemens (the ARITHMOS program package). In Karlsruhe, he developed C-XSC and associated program libraries. In 1993/1994 he was also involved in a hardware implementation, the XPA 3233 vector arithmetic coprocessor.
He was a founding member of the Computer Science Association in 1968; was chairman of the Computer Mathematics and Scientific Computing Committee of the Gesellschaft für Angewandte Mathematik und Mechanik (GAMM) and, from 1979, of the Technical Committee on Enhanced Computer Arithmetic of the International Association for Mathematics and Computers in Simulation (IMACS); and has been the German member of Working Group 2.5 (Numerical Software) of the International Federation for Information Processing (IFIP) since 1980. He is on IEEE Standard Committee P1788 for interval arithmetic.
From 1975 to 1998 he was editor of the Bibliographisches Institut's Jahrbuchs Überblicke Mathematik.
Bibliography
• "Grundlagen des Numerischen Rechnens – Mathematische Begründung der Rechnerarithmetik", Reihe Informatik 19, BI 1976
• "Grundzüge der Intervallrechnung", Jahrbuch Überblicke Mathematik, volume 2, BI, Mannheim 1969
• with Willard L. Miranker (editor): A New Approach to Scientific Computation, Academic Press, New York, 1983.
• with Willard L. Miranker: "The arithmetic of the digital computer: a new approach", SIAM Rev. 28 (1986) 1–40.
• with H. J. Stetter (editor), "Scientific Computation with Automatic Result Verification", Computing Supplementum, volume 6, Springer, Wien, 1988.
• Editor: Wissenschaftliches Rechnen mit Ergebnisverifikation, Vieweg 1989
• with Willard L. Miranker: Computer Arithmetic in Theory and Practice, Academic Press 1981
• with R. Klatte, M. Neaga, D. Ratz, Ch. Ullrich: Pascal XSC- Sprachbeschreibung mit Beispielen, Springer 1991 (English edition, Springer 1992)
• with R. Hammer, M. Hocks, D. Ratz: C++ Toolbox for Verified Computing, Springer 1995
• Computer, Arithmetik und Numerik – ein Memorandum, Überblicke Mathematik, Vieweg 1998
• Advanced Arithmetic for the Digital Computer – Design of Arithmetic Units, Springer-Verlag 2002
• Computer Arithmetic and Validity – Theory, Implementation, and Applications, de Gruyter 2008, 2nd edition, 2013
References
1. Ulrich Kulisch at the Mathematics Genealogy Project
2. "Ulrich W. Kulisch — Curriculum Vitae"
External links
• Homepage (in German)
• Biography (in German)
• Literature by and about Ulrich Kulisch in the German National Library catalogue
Ulrike Leopold-Wildburger
Ulrike Leopold-Wildburger (born 1949)[1] is an Austrian mathematical economist, applied mathematician, and operations researcher. She is a professor emeritus at the University of Graz, where she headed the department of statistics and operations research,[2] and is a former president of the Austrian Society of Operations Research.[3]
Ulrike Leopold-Wildburger
EducationUniversity of Graz
AwardsAustrian Cross of Honour for Science and Art, First Class
Education and career
Leopold-Wildburger studied mathematics, philosophy, and logic at the University of Graz from 1967 to 1972, earning a master of science in 1971 and a master of philosophy in 1972. She completed a Ph.D. at the University of Graz in 1975, and earned a habilitation in operations research and mathematical economics there in 1982.[3]
She joined the teaching staff at the University of Graz as a lecturer in mathematical economics in 1972, and became an assistant professor in 1983. She became a professor of mathematics and informatics at the University of Klagenfurt in 1986, a professor of operations research at the University of Zurich in 1988, and a professor of mathematical economics at the University of Minnesota in 1991, before returning to Graz as a professor of statistics and operations research.[2][3]
She headed the department from 1996 to 1998, and was dean of studies in the faculty of economics and social sciences from 2001 to 2004. She returned to her position as head of department in 2010.[2][3] She was president of the Austrian Society of Operations Research from 1993 to 1997.[3]
Books
With Gerald A. Heuer, Leopold-Wildburger is the coauthor of the books Balanced Silverman Games on General Discrete Sets (1991)[4] and Silverman’s Game: A Special Class of Two-Person Zero-Sum Games (1995),[5] concerning Silverman's game.
Leopold-Wildburger is a coauthor of The Knowledge Ahead Approach to Risk: Theory and Experimental Evidence (with Robin Pope and Johannes Leitner, 2007).[6] She is also a coauthor of two German-language textbooks, Einführung in die Wirtschaftsmathematik (Introduction to Mathematical Economics, with Jochen Hülsmann, Wolf Gamerith, and Werner Steindl, 1998; 5th ed., 2010)[7] and Verfassen und Vortragen: Wissenschaftliche Arbeiten und Vorträge leicht gemacht (with Jörg Schütze, 2002).
Recognition
Leopold-Wildburger was given the Austrian Cross of Honour for Science and Art, First Class in 2010.[2][3] She became a member of the Academia Europaea in 2011.[2][3]
References
1. Year of birth from German National Library catalog entry, retrieved 2020-10-02
2. "Ulrike Leopold-Wildburger", Member profile, Academia Europaea, retrieved 2020-10-02
3. Curriculum vitae, International Institute for Liberal Politics Vienna, retrieved 2020-10-02
4. Reviews of Balanced Silverman Games on General Discrete Sets:
• Baston, Victor J. (1994), MathSciNet, MR 1223543
• Gardner, R. (June 1993), Journal of Economics, 58 (2): 211–213, doi:10.1007/bf01253483, JSTOR 41794316, S2CID 186234355
• Mareš, M., zbMATH, Zbl 0743.90117
5. Reviews of Silverman’s Game:
• Potters, J., zbMATH, Zbl 0854.90142
• von Stengel, Bernhard (1996), MathSciNet, MR 1342076
6. Hausken, Kjell (January 2007), "Review of The Knowledge Ahead Approach to Risk", Theory and Decision, 62 (3): 303–309, doi:10.1007/s11238-006-9024-0, S2CID 189841335
7. Ehemann, Klaus, Review of Einführung in die Wirtschaftsmathematik (5th ed), Zbl 1202.00003
Ulrike Meier Yang
Ulrike Meier Yang (born 1959)[1] is a German-American applied mathematician and computer scientist specializing in numerical algorithms for scientific computing. She directs the Mathematical Algorithms & Computing group in the Center for Applied Scientific Computing at the Lawrence Livermore National Laboratory,[2][3] and is one of the developers of the Hypre library of parallel methods for solving linear systems.[2]
Education and career
Meier Yang did her undergraduate studies in mathematics at Ruhr University Bochum in Germany,[2] and worked in the Central Institute of Applied Mathematics of the Forschungszentrum Jülich in Germany from 1983 to 1985 and at the National Center for Supercomputing Applications at the University of Illinois Urbana-Champaign from 1985 to 1995.[3] She completed her doctorate through the University of Illinois in 1995 with the dissertation A Family of Preconditioned Iterative Solvers for Sparse Linear Systems, supervised by Kyle Gallivan.[4]
She joined the Lawrence Livermore National Laboratory research staff in 1998.[3]
On January 1, 2023, Yang took office as a member of the SIAM Board of Trustees.[5]
References
1. Birth year from WorldCat Identities, retrieved 2023-02-17
2. Women @ Energy: Dr. Ulrike Meier Yang, US Department of Energy, 8 October 2019, retrieved 2023-02-17
3. "Ulrike Meier Yang", People, Lawrence Livermore National Laboratory, retrieved 2023-02-17
4. Ulrike Meier Yang at the Mathematics Genealogy Project
5. "Welcoming the Newest Electees to the SIAM Board of Trustees and Council". SIAM News. Retrieved 2023-03-27.
External links
• Ulrike Meier Yang publications indexed by Google Scholar
Ulrike Tillmann
Ulrike Luise Tillmann FRS is a mathematician specializing in algebraic topology, who has made important contributions to the study of the moduli space of algebraic curves. She served as president of the London Mathematical Society for the period 2021–2022.
Ulrike Tillmann
Born
Ulrike Luise Tillmann
Rhede, Germany
Alma materBrandeis University
Stanford University
University of Bonn
AwardsWhitehead Prize (2004)
Scientific career
Doctoral advisorRalph Cohen
Websitepeople.maths.ox.ac.uk/~tillmann/
She is titular Professor of Mathematics at the University of Oxford and a Fellow of Merton College, Oxford.[1][2] In 2021 she was appointed Director of the Isaac Newton Institute at the University of Cambridge, and N.M. Rothschild & Sons Professor of Mathematical Sciences at Cambridge, but continued to hold a part-time position at Oxford.[3]
Education
Tillmann completed her Abitur at Gymnasium Georgianum in Vreden.[1] She received a BA from Brandeis University in 1985, followed by a MA from Stanford University in 1987. She read for a PhD under the supervision of Ralph Cohen at Stanford University, where she was awarded her doctorate in 1990.[1][4] She was awarded Habilitation in 1996 from the University of Bonn.[2]
Awards and honours
In 2004 she was awarded the Whitehead Prize of the London Mathematical Society.[5]
She was elected a Fellow of the Royal Society in 2008[6] and a Fellow of the American Mathematical Society in 2013.[7] She has served on the council of the Royal Society and in 2018 was its vice-president.[8] In 2017, she became a member of the German Academy of Sciences Leopoldina.[9]
Tillmann was awarded the Bessel Prize by the Alexander von Humboldt Foundation in 2008[10] and was the Emmy Noether Lecturer of the German Mathematical Society in 2009.[11]
She was elected as president-designate of the London Mathematical Society in June 2020 and took over the presidency from Jonathan Keating in November 2021.[12] She was elected to the European Academy of Sciences (EURASC) in 2021.[13] In October 2021 she became the director of the Isaac Newton Institute, taking a post which lasts for five years.[10]
Personal life
Tillmann's parents are Ewald and Marie-Luise Tillmann. In 1995 she married Jonathan Morris with whom she has had three daughters.[2]
Publications
• Tillmann, Ulrike (1997). "On the homotopy of the stable mapping class group". Inventiones Mathematicae. 130 (2): 257–275. Bibcode:1997InMat.130..257T. doi:10.1007/s002220050184. hdl:10338.dmlcz/127120. MR 1474157. S2CID 121645148.
• Galatius, Søren; Tillmann, Ulrike; Madsen, Ib; Weiss, Michael (2009). "The homotopy type of the cobordism category". Acta Mathematica. 202 (2): 195–239. doi:10.1007/s11511-009-0036-9. MR 2506750.
References
1. Ulrike Tillmann (2008), Curriculum Vitae (PDF), retrieved 16 December 2008.
2. "Who's Who 2009: New Names" (PDF). The Daily Telegraph. Retrieved 17 June 2009.
3. Ulrike Tillmann appointed Director of the Isaac Newton Institute for Mathematical Sciences, University of Oxford Mathematical Institute, 4 February 2021
4. Ulrike Tillmann at the Mathematics Genealogy Project
5. The London Mathematical Society (2004), Annual Report 2004 (PDF), archived from the original (PDF) on 8 January 2007, retrieved 16 December 2008.
6. "Fellows of the Royal Society" (PDF). The Royal Society. Retrieved 2 May 2013.
7. "List of Fellows of the American Mathematical Society". American Mathematical Society. Retrieved 2 May 2013.
8. "Ulrike Tillmann". royalsociety.org. Royal Society. Retrieved 14 March 2022.
9. Ulrike Tillmann, German Academy of Sciences Leopoldina, retrieved 5 January 2019
10. "Professor Ulrike Tillmann named as next Director of the Isaac Newton Institute". Isaac Newton Institute. 4 February 2021. Retrieved 14 March 2022.
11. "Preise und Auszeichnungen" (in German). German Mathematical Society. Retrieved 5 November 2018.
12. "LMS President Designate Announced | London Mathematical Society". www.lms.ac.uk. Retrieved 20 August 2020.
13. "Member Profile – Prof. Ulrike Tillmann". European Academy of Sciences. Retrieved 19 January 2022.
External links
Wikimedia Commons has media related to Ulrike Tillmann.
Fellows of the Royal Society elected in 2008
Fellows
• Girish Saran Agarwal
• Dario Alessi
• Michael Alpers
• Fraser Armstrong
• Alan Ashworth
• John Bell
• Jon Blundy
• Leszek Borysiewicz
• Alexander Bradshaw
• Stephen Cohen
• Fergus Craik
• David Deutsch
• John Duncan
• Russell G. Foster
• Brian Foster
• Derek Fray
• Peter Hudson
• Christopher Hunter
• Stephen Jackson
• Nicholas Kaiser
• Mark Kisin
• Christopher John Lamb
• Peter Simon Liss
• Jan Löwe
• Yiu-Wing Mai
• John C. Marshall
• Harvey McMahon
• Anne O'Garra
• Peter Parham
• Ian Parker
• Michael Payne
• Laurence Pearl
• Matthew Rosseinsky
• Robert Russell
• George Sawatzky
• James F. Scott
• Evgeny Sklyanin
• Philip J. Stephens
• Claudio Daniel Stern
• Michael Stratton
• Roger Summons
• Ulrike Tillmann
• Kenneth Timmis
• Chris Toumazou
Foreign
• J. Michael Bishop
• William A. Catterall
• Barbara Hohn
• Ho-Kwang Mao
• Peter Marler
• David Mumford
• Richard Schrock
• Susan Solomon
Honorary
• David Sainsbury, Baron Sainsbury of Turville
Kirkpatrick–Seidel algorithm
The Kirkpatrick–Seidel algorithm, proposed by its authors as a potential "ultimate planar convex hull algorithm", is an algorithm for computing the convex hull of a set of points in the plane, with ${\mathcal {O}}(n\log h)$ time complexity, where $n$ is the number of input points and $h$ is the number of points (called nondominated or maximal points in some texts) in the hull. Thus, the algorithm is output-sensitive: its running time depends on both the input size and the output size. Another output-sensitive algorithm, the gift wrapping algorithm, was known much earlier, but the Kirkpatrick–Seidel algorithm has an asymptotic running time that is significantly smaller and that is never worse than the ${\mathcal {O}}(n\log n)$ bound of non-output-sensitive algorithms. The Kirkpatrick–Seidel algorithm is named after its inventors, David G. Kirkpatrick and Raimund Seidel.[1]
Although the algorithm is asymptotically optimal, it is not very practical for moderate-sized problems.[2]
Algorithm
The basic idea of the algorithm is a kind of reversal of the divide-and-conquer algorithm for convex hulls of Preparata and Hong, dubbed "marriage-before-conquest" by the authors.
The traditional divide-and-conquer algorithm splits the input points into two equal parts, e.g., by a vertical line, recursively finds convex hulls for the left and right subsets of the input, and then merges the two hulls into one by finding the "bridge edges", bitangents that connect the two hulls from above and below.
The Kirkpatrick–Seidel algorithm splits the input as before, by finding the median of the x-coordinates of the input points. However, the algorithm reverses the order of the subsequent steps: its next step is to find the edges of the convex hull that intersect the vertical line defined by this median x-coordinate, which turns out to require linear time.[3] The points on the left and right sides of the splitting line that cannot contribute to the eventual hull are discarded, and the algorithm proceeds recursively on the remaining points. In more detail, the algorithm performs a separate recursion for the upper and lower parts of the convex hull; in the recursion for the upper hull, the noncontributing points to be discarded are those below the bridge edge vertically, while in the recursion for the lower hull the points above the bridge edge vertically are discarded.
At the $i$th level of the recursion, the algorithm solves at most $2^{i}$ subproblems, each of size at most ${\frac {n}{2^{i}}}$. The total number of subproblems considered is at most $h$, since each subproblem finds a new convex hull edge. The worst case occurs when no points can be discarded and the subproblems are as large as possible; that is, when there are exactly $2^{i}$ subproblems in each level of recursion up to level $\log _{2}h$ . For this worst case, there are ${\mathcal {O}}(\log h)$ levels of recursion and ${\mathcal {O}}(n)$ points considered within each level, so the total running time is ${\mathcal {O}}(n\log h)$ as stated.
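The marriage-before-conquest structure described above can be sketched in a few lines of Python. This is an illustrative toy, not the real algorithm: the bridge here is found by an O(n²) brute-force search over candidate pairs, whereas Kirkpatrick and Seidel use a linear-time prune-and-search step, and the sketch assumes all points have distinct x-coordinates. All function names are ours.

```python
def _cross(o, a, b):
    """Positive if b lies to the left of the directed line o -> a."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def _bridge(pts, xm):
    """Find the upper-hull edge (p, q) crossing the vertical line x = xm.
    Brute-force O(n^2) stand-in for the linear-time prune-and-search step."""
    for p in pts:
        if p[0] > xm:
            continue
        for q in pts:
            if q[0] <= xm:
                continue
            # (p, q) is the bridge when every point lies on or below line pq.
            if all(_cross(p, q, r) <= 0 for r in pts):
                return p, q
    raise ValueError("no bridge found")

def upper_hull(points):
    """Upper convex hull, marriage-before-conquest style."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    xm = pts[len(pts) // 2 - 1][0]          # median x-coordinate
    p, q = _bridge(pts, xm)
    # Points strictly between p and q in x lie below the bridge: discard them.
    left = [r for r in pts if r[0] <= p[0]]
    right = [r for r in pts if r[0] >= q[0]]
    return upper_hull(left) + upper_hull(right)
```

The lower hull is obtained symmetrically (for example by negating the y-coordinates), and the full convex hull is the concatenation of the two.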
See also
• Convex hull algorithms
References
1. Kirkpatrick, David G.; Seidel, Raimund (1986). "The ultimate planar convex hull algorithm?". SIAM Journal on Computing. 15 (1): 287–299. doi:10.1137/0215021. hdl:1813/6417.
2. McQueen, Mary M.; Toussaint, Godfried T. (January 1985). "On the ultimate convex hull algorithm in practice" (PDF). Pattern Recognition Letters. 3 (1): 29–34. Bibcode:1985PaReL...3...29M. doi:10.1016/0167-8655(85)90039-X. The results suggest that although the O(n Log h) algorithms may be the 'ultimate' ones in theory, they are of little practical value from the point of view of running time.
3. Original paper by Kirkpatrick / Seidel (1986), p. 10, theorem 3.1
Ultrabarrelled space
In functional analysis and related areas of mathematics, an ultrabarrelled space is a topological vector space (TVS) for which every ultrabarrel is a neighbourhood of the origin.
Definition
A subset $B_{0}$ of a TVS $X$ is called an ultrabarrel if it is a closed and balanced subset of $X$ and if there exists a sequence $\left(B_{i}\right)_{i=1}^{\infty }$ of closed balanced and absorbing subsets of $X$ such that $B_{i+1}+B_{i+1}\subseteq B_{i}$ for all $i=0,1,\ldots .$ In this case, $\left(B_{i}\right)_{i=1}^{\infty }$ is called a defining sequence for $B_{0}.$ A TVS $X$ is called ultrabarrelled if every ultrabarrel in $X$ is a neighbourhood of the origin.[1]
Properties
A locally convex ultrabarrelled space is a barrelled space.[1] Every ultrabarrelled space is a quasi-ultrabarrelled space.[1]
Examples and sufficient conditions
Complete and metrizable TVSs are ultrabarrelled.[1] If $X$ is a complete locally bounded non-locally convex TVS and if $B_{0}$ is a closed balanced and bounded neighborhood of the origin, then $B_{0}$ is an ultrabarrel that is not convex and has a defining sequence consisting of non-convex sets.[1]
Counter-examples
There exist barrelled spaces that are not ultrabarrelled.[1] There exist TVSs that are complete and metrizable (and thus ultrabarrelled) but not barrelled.[1]
See also
• Barrelled space – Type of topological vector space
• Countably barrelled space
• Countably quasi-barrelled space
• Infrabarreled space
• Uniform boundedness principle#Generalisations – A theorem stating that pointwise boundedness implies uniform boundedness
Citations
1. Khaleelulla 1982, pp. 65–76.
Bibliography
• Bourbaki, Nicolas (1950). "Sur certains espaces vectoriels topologiques". Annales de l'Institut Fourier (in French). 2: 5–16 (1951). doi:10.5802/aif.16. MR 0042609.
• Husain, Taqdir; Khaleelulla, S. M. (1978). Barrelledness in Topological and Ordered Vector Spaces. Lecture Notes in Mathematics. Vol. 692. Berlin, New York, Heidelberg: Springer-Verlag. ISBN 978-3-540-09096-0. OCLC 4493665.
• Jarchow, Hans (1981). Locally convex spaces. Stuttgart: B.G. Teubner. ISBN 978-3-519-02224-4. OCLC 8210342.
• Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Robertson, Alex P.; Robertson, Wendy J. (1964). Topological vector spaces. Cambridge Tracts in Mathematics. Vol. 53. Cambridge University Press. pp. 65–75.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Trèves, François (2006) [1967]. Topological Vector Spaces, Distributions and Kernels. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-45352-1. OCLC 853623322.
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Boundedness and bornology
Basic concepts
• Barrelled space
• Bounded set
• Bornological space
• (Vector) Bornology
Operators
• (Un)Bounded operator
• Uniform boundedness principle
Subsets
• Barrelled set
• Bornivorous set
• Saturated family
Related spaces
• (Countably) Barrelled space
• (Countably) Quasi-barrelled space
• Infrabarrelled space
• (Quasi-) Ultrabarrelled space
• Ultrabornological space
Topological vector spaces (TVSs)
Basic concepts
• Banach space
• Completeness
• Continuous linear operator
• Linear functional
• Fréchet space
• Linear map
• Locally convex space
• Metrizability
• Operator topologies
• Topological vector space
• Vector space
Main results
• Anderson–Kadec
• Banach–Alaoglu
• Closed graph theorem
• F. Riesz's
• Hahn–Banach (hyperplane separation
• Vector-valued Hahn–Banach)
• Open mapping (Banach–Schauder)
• Bounded inverse
• Uniform boundedness (Banach–Steinhaus)
Maps
• Bilinear operator
• form
• Linear map
• Almost open
• Bounded
• Continuous
• Closed
• Compact
• Densely defined
• Discontinuous
• Topological homomorphism
• Functional
• Linear
• Bilinear
• Sesquilinear
• Norm
• Seminorm
• Sublinear function
• Transpose
Types of sets
• Absolutely convex/disk
• Absorbing/Radial
• Affine
• Balanced/Circled
• Banach disks
• Bounding points
• Bounded
• Complemented subspace
• Convex
• Convex cone (subset)
• Linear cone (subset)
• Extreme point
• Pre-compact/Totally bounded
• Prevalent/Shy
• Radial
• Radially convex/Star-shaped
• Symmetric
Set operations
• Affine hull
• (Relative) Algebraic interior (core)
• Convex hull
• Linear span
• Minkowski addition
• Polar
• (Quasi) Relative interior
Types of TVSs
• Asplund
• B-complete/Ptak
• Banach
• (Countably) Barrelled
• BK-space
• (Ultra-) Bornological
• Brauner
• Complete
• Convenient
• (DF)-space
• Distinguished
• F-space
• FK-AK space
• FK-space
• Fréchet
• tame Fréchet
• Grothendieck
• Hilbert
• Infrabarreled
• Interpolation space
• K-space
• LB-space
• LF-space
• Locally convex space
• Mackey
• (Pseudo)Metrizable
• Montel
• Quasibarrelled
• Quasi-complete
• Quasinormed
• (Polynomially
• Semi-) Reflexive
• Riesz
• Schwartz
• Semi-complete
• Smith
• Stereotype
• (B
• Strictly
• Uniformly) convex
• (Quasi-) Ultrabarrelled
• Uniformly smooth
• Webbed
• With the approximation property
• Mathematics portal
• Category
• Commons
Ultrabornological space
In functional analysis, a topological vector space (TVS) $X$ is called ultrabornological if every bounded linear operator from $X$ into another TVS is necessarily continuous. A general version of the closed graph theorem holds for ultrabornological spaces. Ultrabornological spaces were introduced by Alexander Grothendieck (Grothendieck [1955, p. 17] "espace du type (β)").[1]
Definitions
Let $X$ be a topological vector space (TVS).
Preliminaries
A disk is a convex and balanced set. A disk in a TVS $X$ is called bornivorous[2] if it absorbs every bounded subset of $X.$
A linear map between two TVSs is called infrabounded[2] if it maps Banach disks to bounded disks.
A disk $D$ in a TVS $X$ is called infrabornivorous if it satisfies any of the following equivalent conditions:
1. $D$ absorbs every Banach disk in $X.$
while if $X$ locally convex then we may add to this list:
1. the gauge of $D$ is an infrabounded map;[2]
while if $X$ locally convex and Hausdorff then we may add to this list:
1. $D$ absorbs all compact disks;[2] that is, $D$ is "compactivorious".
Ultrabornological space
A TVS $X$ is ultrabornological if it satisfies any of the following equivalent conditions:
1. every infrabornivorous disk in $X$ is a neighborhood of the origin;[2]
while if $X$ is a locally convex space then we may add to this list:
1. every bounded linear operator from $X$ into a complete metrizable TVS is necessarily continuous;
2. every infrabornivorous disk is a neighborhood of 0;
3. $X$ is the inductive limit of the spaces $X_{D}$ as $D$ varies over all compact disks in $X$;
4. a seminorm on $X$ that is bounded on each Banach disk is necessarily continuous;
5. for every locally convex space $Y$ and every linear map $u:X\to Y,$ if $u$ is bounded on each Banach disk then $u$ is continuous;
6. for every Banach space $Y$ and every linear map $u:X\to Y,$ if $u$ is bounded on each Banach disk then $u$ is continuous.
while if $X$ is a Hausdorff locally convex space then we may add to this list:
1. $X$ is an inductive limit of Banach spaces;[2]
Properties
Every locally convex ultrabornological space is a barrelled space,[2] a quasi-ultrabarrelled space, and a bornological space, but there exist bornological spaces that are not ultrabornological.
• Every ultrabornological space $X$ is the inductive limit of a family of nuclear Fréchet spaces, spanning $X.$
• Every ultrabornological space $X$ is the inductive limit of a family of nuclear DF-spaces, spanning $X.$
Examples and sufficient conditions
The finite product of locally convex ultrabornological spaces is ultrabornological.[2] Inductive limits of ultrabornological spaces are ultrabornological.
Every Hausdorff sequentially complete bornological space is ultrabornological.[2] Thus every complete Hausdorff bornological space is ultrabornological. In particular, every Fréchet space is ultrabornological.[2]
The strong dual space of a complete Schwartz space is ultrabornological.
Every Hausdorff bornological space that is quasi-complete is ultrabornological.
Counter-examples
There exist ultrabarrelled spaces that are not ultrabornological. There exist ultrabornological spaces that are not ultrabarrelled.
See also
• Bounded linear operator – Linear transformation between topological vector spacesPages displaying short descriptions of redirect targets
• Bounded set (topological vector space) – Generalization of boundedness
• Bornological space – Space where bounded operators are continuous
• Bornology – Mathematical generalization of boundedness
• Locally convex topological vector space – A vector space with a topology defined by convex open sets
• Space of linear maps
• Topological vector space – Vector space with a notion of nearness
• Vector bornology
External links
• Some characterizations of ultrabornological spaces
References
1. Narici & Beckenstein 2011, p. 441.
2. Narici & Beckenstein 2011, pp. 441–457.
• Hogbe-Nlend, Henri (1977). Bornologies and functional analysis. Amsterdam: North-Holland Publishing Co. pp. xii+144. ISBN 0-7204-0712-5. MR 0500064.
• Edwards, Robert E. (1995). Functional Analysis: Theory and Applications. New York: Dover Publications. ISBN 978-0-486-68143-6. OCLC 30593138.
• Grothendieck, Alexander (1955). "Produits Tensoriels Topologiques et Espaces Nucléaires" [Topological Tensor Products and Nuclear Spaces]. Memoirs of the American Mathematical Society Series (in French). Providence: American Mathematical Society. 16. ISBN 978-0-8218-1216-7. MR 0075539. OCLC 1315788.
• Grothendieck, Alexander (1973). Topological Vector Spaces. Translated by Chaljub, Orlando. New York: Gordon and Breach Science Publishers. ISBN 978-0-677-30020-7. OCLC 886098.
• Khaleelulla, S. M. (1982). Counterexamples in Topological Vector Spaces. Lecture Notes in Mathematics. Vol. 936. Berlin, Heidelberg, New York: Springer-Verlag. ISBN 978-3-540-11565-6. OCLC 8588370.
• Kriegl, Andreas; Michor, Peter W. (1997). The Convenient Setting of Global Analysis (PDF). Mathematical Surveys and Monographs. Vol. 53. Providence, R.I: American Mathematical Society. ISBN 978-0-8218-0780-4. OCLC 37141279.
• Narici, Lawrence; Beckenstein, Edward (2011). Topological Vector Spaces. Pure and applied mathematics (Second ed.). Boca Raton, FL: CRC Press. ISBN 978-1584888666. OCLC 144216834.
• Schaefer, Helmut H.; Wolff, Manfred P. (1999). Topological Vector Spaces. GTM. Vol. 8 (Second ed.). New York, NY: Springer New York Imprint Springer. ISBN 978-1-4612-7155-0. OCLC 840278135.
• Wilansky, Albert (2013). Modern Methods in Topological Vector Spaces. Mineola, New York: Dover Publications, Inc. ISBN 978-0-486-49353-4. OCLC 849801114.
Ultraconnected space
In mathematics, a topological space is said to be ultraconnected if no two nonempty closed sets are disjoint.[1] Equivalently, a space is ultraconnected if and only if the closures of two distinct points always have nontrivial intersection. Hence, no T1 space with more than one point is ultraconnected.[2]
Properties
Every ultraconnected space $X$ is path-connected (but not necessarily arc connected). If $a$ and $b$ are two points of $X$ and $p$ is a point in the intersection $\operatorname {cl} \{a\}\cap \operatorname {cl} \{b\}$, the function $f:[0,1]\to X$ defined by $f(t)=a$ if $0\leq t<1/2$, $f(1/2)=p$ and $f(t)=b$ if $1/2<t\leq 1$, is a continuous path between $a$ and $b$.[2]
Every ultraconnected space is normal, limit point compact, and pseudocompact.[1]
Examples
The following are examples of ultraconnected topological spaces.
• A set with the indiscrete topology.
• The Sierpiński space.
• A set with the excluded point topology.
• The right order topology on the real line.[3]
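For finite spaces the defining condition can be checked directly by enumerating closed sets as complements of open sets. The sketch below (function name ours) verifies the Sierpiński space is ultraconnected while the discrete two-point space is not.

```python
from itertools import combinations

def is_ultraconnected(points, opens):
    """Finite-space check of the definition: no two nonempty closed
    sets are disjoint.  `opens` lists the open sets as frozensets."""
    closed = [frozenset(points) - U for U in opens]
    nonempty = [C for C in closed if C]
    return all(C & D for C, D in combinations(nonempty, 2))

# Sierpinski space on {0, 1}: open sets are {}, {0}, {0, 1}.
sierpinski = [frozenset(), frozenset({0}), frozenset({0, 1})]
print(is_ultraconnected({0, 1}, sierpinski))   # True

# Discrete two-point space: {0} and {1} are disjoint closed sets.
discrete = [frozenset(s) for s in [(), (0,), (1,), (0, 1)]]
print(is_ultraconnected({0, 1}, discrete))     # False
```

In the Sierpiński space every nonempty closed set contains the closed point, which is exactly why the pairwise intersections are nonempty.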
See also
• Hyperconnected space
Notes
1. PlanetMath
2. Steen & Seebach, Sect. 4, pp. 29-30
3. Steen & Seebach, example #50, p. 74
References
• This article incorporates material from Ultraconnected space on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
• Lynn Arthur Steen and J. Arthur Seebach, Jr., Counterexamples in Topology. Springer-Verlag, New York, 1978. Reprinted by Dover Publications, New York, 1995. ISBN 0-486-68735-X (Dover edition).
Ultrafinitism
In the philosophy of mathematics, ultrafinitism (also known as ultraintuitionism,[1] strict formalism,[2] strict finitism,[2] actualism,[1] predicativism,[2][3] and strong finitism)[2] is a form of finitism and intuitionism. There are various philosophies of mathematics that are called ultrafinitism. A major identifying property common among most of these philosophies is their objection to the totality of number-theoretic functions like exponentiation over the natural numbers.
Main ideas
Like other finitists, ultrafinitists deny the existence of the infinite set $\mathbb {N} $ of natural numbers, on the basis that it can never be completed (i.e., there is a largest natural number).
In addition, some ultrafinitists are concerned with acceptance of objects in mathematics that no one can construct in practice because of physical restrictions in constructing large finite mathematical objects. Thus some ultrafinitists will deny or refrain from accepting the existence of large numbers, for example, the floor of the first Skewes's number, which is a huge number defined using the exponential function as exp(exp(exp(79))), or
$e^{e^{e^{79}}}.$
The reason is that nobody has yet calculated what natural number is the floor of this real number, and it may not even be physically possible to do so. Similarly, $2\uparrow \uparrow \uparrow 6$ (in Knuth's up-arrow notation) would be considered only a formal expression that does not correspond to a natural number. The brand of ultrafinitism concerned with physical realizability of mathematics is often called actualism.
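The physical-infeasibility point can be made concrete even at the level of ordinary machine arithmetic. The short demonstration below shows that while $e^{79}$ fits in a double-precision float, the next level of the tower already overflows, so $e^{e^{e^{79}}}$ cannot even have its intermediate stages evaluated directly.

```python
import math

# e**79 is about 2.04e34 -- comfortably representable as an IEEE-754 double.
inner = math.exp(79)

# One more level of the tower, e**(e**79), already exceeds the largest
# representable double (~1.8e308), so exp(exp(exp(79))) cannot even have
# its intermediate stages evaluated directly in floating point.
try:
    math.exp(inner)
    print("representable")
except OverflowError:
    print("overflow")   # this branch is taken
```

Arbitrary-precision arithmetic pushes the wall back only slightly: the number of digits of $e^{e^{e^{79}}}$ is itself a number with roughly $10^{34}$ digits, far beyond any physical storage.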
Edward Nelson criticized the classical conception of natural numbers because of the circularity of its definition. In classical mathematics the natural numbers are defined as 0 and numbers obtained by the iterative applications of the successor function to 0. But the concept of natural number is already assumed for the iteration. In other words, to obtain a number like $2\uparrow \uparrow \uparrow 6$ one needs to perform the successor function iteratively (in fact, exactly $2\uparrow \uparrow \uparrow 6$ times) to 0.
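Nelson's circularity point can be illustrated mechanically: to write down the formal numeral for $n$, one must already iterate the successor symbol $n$ times, so the construction presupposes the very number it defines. A toy sketch (function name ours):

```python
def numeral(n):
    """Return the formal numeral S(S(...S(0)...)) for n."""
    term = "0"
    for _ in range(n):       # the loop itself already presupposes the
        term = f"S({term})"  # number n -- Nelson's circularity objection
    return term

print(numeral(3))  # S(S(S(0)))
```

For a number like $2\uparrow \uparrow \uparrow 6$ the loop body would have to run $2\uparrow \uparrow \uparrow 6$ times, which is exactly what the ultrafinitist declines to grant.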
Some versions of ultrafinitism are forms of constructivism, but most constructivists view the philosophy as unworkably extreme. The logical foundation of ultrafinitism is unclear; in his comprehensive survey Constructivism in Mathematics (1988), the constructive logician A. S. Troelstra dismissed it by saying "no satisfactory development exists at present." This was not so much a philosophical objection as it was an admission that, in a rigorous work of mathematical logic, there was simply nothing precise enough to include.
People associated with ultrafinitism
Serious work on ultrafinitism was led, from 1959 until his death in 2016, by Alexander Esenin-Volpin, who in 1961 sketched a program for proving the consistency of Zermelo–Fraenkel set theory in ultrafinite mathematics. Other mathematicians who have worked in the topic include Doron Zeilberger, Edward Nelson, Rohit Jivanlal Parikh, and Jean Paul Van Bendegem. The philosophy is also sometimes associated with the beliefs of Ludwig Wittgenstein, Robin Gandy, Petr Vopěnka, and Johannes Hjelmslev.
Shaughan Lavine has developed a form of set-theoretical ultrafinitism that is consistent with classical mathematics.[4] Lavine has shown that the basic principles of arithmetic such as "there is no largest natural number" can be upheld, as Lavine allows for the inclusion of "indefinitely large" numbers.[4]
Computational complexity theory based restrictions
Other considerations of the possibility of avoiding unwieldy large numbers can be based on computational complexity theory, as in Andras Kornai's work on explicit finitism (which does not deny the existence of large numbers)[5] and Vladimir Sazonov's notion of feasible number.
There has also been considerable formal development on versions of ultrafinitism that are based on complexity theory, like Samuel Buss's bounded arithmetic theories, which capture mathematics associated with various complexity classes like P and PSPACE. Buss's work can be considered the continuation of Edward Nelson's work on predicative arithmetic, as bounded arithmetic theories like $S_{2}^{1}$ are interpretable in Raphael Robinson's theory Q and therefore are predicative in Nelson's sense. The power of these theories for developing mathematics is studied in bounded reverse mathematics, as can be found in the works of Stephen A. Cook and Phuong The Nguyen. However, this research is not a philosophy of mathematics but rather the study of restricted forms of reasoning similar to reverse mathematics.
See also
• Transcomputational problem
Notes
1. International Workshop on Logic and Computational Complexity, Logic and Computational Complexity, Springer, 1995, p. 31.
2. St. Iwan (2000), "On the Untenability of Nelson's Predicativism", Erkenntnis 53(1–2), pp. 147–154.
3. Not to be confused with Russell's predicativism.
4. "Philosophy of Mathematics (Stanford Encyclopedia of Philosophy)". Plato.stanford.edu. Retrieved 2015-10-07.
5. "Relation to foundations"
References
• Ésénine-Volpine, A. S. (1961), "Le programme ultra-intuitionniste des fondements des mathématiques", Infinitistic Methods (Proc. Sympos. Foundations of Math., Warsaw, 1959), Oxford: Pergamon, pp. 201–223, MR 0147389 Reviewed by Kreisel, G.; Ehrenfeucht, A. (1967), "Review of Le Programme Ultra-Intuitionniste des Fondements des Mathematiques by A. S. Ésénine-Volpine", The Journal of Symbolic Logic, Association for Symbolic Logic, 32 (4): 517, doi:10.2307/2270182, JSTOR 2270182
• Lavine, S., 1994. Understanding the Infinite, Cambridge, MA: Harvard University Press.
External links
• Explicit finitism by Andras Kornai
• On feasible numbers by Vladimir Sazonov
• "Real" Analysis Is A Degenerate Case Of Discrete Analysis by Doron Zeilberger
• Discussion on formal foundations on MathOverflow
• History of constructivism in the 20th century by A. S. Troelstra
• Predicative Arithmetic by Edward Nelson
• Logical Foundations of Proof Complexity by Stephen A. Cook and Phuong The Nguyen
• Bounded Reverse Mathematics by Phuong The Nguyen
• Reading Brian Rotman’s “Ad Infinitum…” by Charles Petzold
• Computational Complexity Theory
Ultragraph C*-algebra
In mathematics, an ultragraph C*-algebra is a universal C*-algebra generated by partial isometries on a collection of Hilbert spaces constructed from an ultragraph.[1] These C*-algebras were created in order to simultaneously generalize the classes of graph C*-algebras and Exel–Laca algebras, giving a unified framework for studying these objects.[1] This is because every graph can be encoded as an ultragraph, and, similarly, every infinite matrix giving rise to an Exel–Laca algebra can also be encoded as an ultragraph.
Definitions
Ultragraphs
An ultragraph ${\mathcal {G}}=(G^{0},{\mathcal {G}}^{1},r,s)$ consists of a set of vertices $G^{0}$, a set of edges ${\mathcal {G}}^{1}$, a source map $s:{\mathcal {G}}^{1}\to G^{0}$, and a range map $r:{\mathcal {G}}^{1}\to P(G^{0})\setminus \{\emptyset \}$ taking values in the collection $P(G^{0})\setminus \{\emptyset \}$ of nonempty subsets of the vertex set. A directed graph is the special case of an ultragraph in which the range of each edge is a singleton, and ultragraphs may be thought of as generalized directed graphs in which each edge starts at a single vertex and points to a nonempty subset of vertices.
Example
An easy way to visualize an ultragraph is to consider a directed graph with a set of labelled vertices, where each label corresponds to a subset in the image of an element of the range map. For example, an ultragraph with vertices and edges
$G^{0}=\{v,w,x\}$, ${\mathcal {G}}^{1}=\{e,f,g\}$
with source and range maps
${\begin{matrix}s(e)=v&s(f)=w&s(g)=x\\r(e)=\{v,w,x\}&r(f)=\{x\}&r(g)=\{v,w\}\end{matrix}}$
can be visualized as the image on the right.
Ultragraph algebras
Given an ultragraph ${\mathcal {G}}=(G^{0},{\mathcal {G}}^{1},r,s)$, we define ${\mathcal {G}}^{0}$ to be the smallest subset of $P(G^{0})$ containing the singleton sets $\{\{v\}:v\in G^{0}\}$, containing the range sets $\{r(e):e\in {\mathcal {G}}^{1}\}$, and closed under intersections, unions, and relative complements. A Cuntz–Krieger ${\mathcal {G}}$-family is a collection of projections $\{p_{A}:A\in {\mathcal {G}}^{0}\}$ together with a collection of partial isometries $\{s_{e}:e\in {\mathcal {G}}^{1}\}$ with mutually orthogonal ranges satisfying
1. $p_{\emptyset }=0$, $p_{A}p_{B}=p_{A\cap B}$, and $p_{A}+p_{B}-p_{A\cap B}=p_{A\cup B}$ for all $A,B\in {\mathcal {G}}^{0}$,
2. $s_{e}^{*}s_{e}=p_{r(e)}$ for all $e\in {\mathcal {G}}^{1}$,
3. $p_{v}=\sum _{s(e)=v}s_{e}s_{e}^{*}$ whenever $v\in G^{0}$ is a vertex that emits a finite number of edges, and
4. $s_{e}s_{e}^{*}\leq p_{s(e)}$ for all $e\in {\mathcal {G}}^{1}$.
The ultragraph C*-algebra $C^{*}({\mathcal {G}})$ is the universal C*-algebra generated by a Cuntz–Krieger ${\mathcal {G}}$-family.
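For a finite ultragraph the family ${\mathcal {G}}^{0}$ can be computed by a brute-force closure. A minimal sketch (the function name and set encoding are ours), applied to the example ultragraph above:

```python
from itertools import combinations

def ultragraph_g0(vertices, ranges):
    """Smallest family of subsets of `vertices` containing all singletons
    and all edge ranges, closed under union, intersection and relative
    complement."""
    family = {frozenset([v]) for v in vertices} | {frozenset(r) for r in ranges}
    changed = True
    while changed:
        changed = False
        for s1, s2 in combinations(list(family), 2):
            for s in (s1 | s2, s1 & s2, s1 - s2, s2 - s1):
                if s not in family:
                    family.add(s)
                    changed = True
    return family

# The example above: r(e) = {v,w,x}, r(f) = {x}, r(g) = {v,w}
g0 = ultragraph_g0({"v", "w", "x"}, [{"v", "w", "x"}, {"x"}, {"v", "w"}])
print(len(g0))  # 8
```

Since all singletons are present and the family is closed under unions, here ${\mathcal {G}}^{0}$ is the full power set of $G^{0}$ (the empty set enters through relative complements).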
Properties
Every graph C*-algebra is seen to be an ultragraph algebra by simply considering the graph as a special case of an ultragraph, and realizing that ${\mathcal {G}}^{0}$ is the collection of all finite subsets of $G^{0}$ and $p_{A}=\sum _{v\in A}p_{v}$ for each $A\in {\mathcal {G}}^{0}$. Every Exel–Laca algebra is also an ultragraph C*-algebra: If $A$ is an infinite square matrix with index set $I$ and entries in $\{0,1\}$, one can define an ultragraph by $G^{0}:=I$, ${\mathcal {G}}^{1}:=I$, $s(i)=i$, and $r(i)=\{j\in I:A(i,j)=1\}$. It can be shown that $C^{*}({\mathcal {G}})$ is isomorphic to the Exel–Laca algebra ${\mathcal {O}}_{A}$.[1]
Ultragraph C*-algebras are useful tools for studying both graph C*-algebras and Exel–Laca algebras. Among other benefits, modeling an Exel–Laca algebra as ultragraph C*-algebra allows one to use the ultragraph as a tool to study the associated C*-algebras, thereby providing the option to use graph-theoretic techniques, rather than matrix techniques, when studying the Exel–Laca algebra. Ultragraph C*-algebras have been used to show that every simple AF-algebra is isomorphic to either a graph C*-algebra or an Exel–Laca algebra.[2] They have also been used to prove that every AF-algebra with no (nonzero) finite-dimensional quotient is isomorphic to an Exel–Laca algebra.[2]
While the classes of graph C*-algebras, Exel–Laca algebras, and ultragraph C*-algebras each contain C*-algebras not isomorphic to any C*-algebra in the other two classes, the three classes have been shown to coincide up to Morita equivalence.[3]
See also
• Leavitt path algebra
• Exel–Laca algebras
• Infinite matrix
• Infinite graph
Notes
1. A unified approach to Exel–Laca algebras and C*-algebras associated to graphs, Mark Tomforde, J. Operator Theory 50 (2003), no. 2, 345–368.
2. Realization of AF-algebras as graph algebras, Exel–Laca algebras, and ultragraph algebras, Takeshi Katsura, Aidan Sims, and Mark Tomforde, J. Funct. Anal. 257 (2009), no. 5, 1589–1620.
3. Graph algebras, Exel–Laca algebras, and ultragraph algebras coincide up to Morita equivalence, Takeshi Katsura, Paul Muhly, Aidan Sims, and Mark Tomforde, J. Reine Angew. Math. 640 (2010), 135–165.
| Wikipedia |
Ultrahyperbolic equation
In the mathematical field of differential equations, the ultrahyperbolic equation is a partial differential equation (PDE) for an unknown scalar function u of 2n variables x1, ..., xn, y1, ..., yn of the form
${\frac {\partial ^{2}u}{\partial x_{1}^{2}}}+\cdots +{\frac {\partial ^{2}u}{\partial x_{n}^{2}}}-{\frac {\partial ^{2}u}{\partial y_{1}^{2}}}-\cdots -{\frac {\partial ^{2}u}{\partial y_{n}^{2}}}=0.$
More generally, if a is any quadratic form in 2n variables with signature (n, n), then any PDE whose principal part is $a_{ij}u_{x_{i}x_{j}}$ is said to be ultrahyperbolic. Any such equation can be put in the form above by means of a change of variables.[1]
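The simplest explicit solutions are plane waves: $u=f(a\cdot x+b\cdot y)$ solves the equation whenever $|a|=|b|$. A small numerical check of this (illustrative; the coefficients and sample point are arbitrary choices of ours):

```python
import math

# Plane wave u = sin(a.x + b.y) solves the equation exactly when
# |a|^2 = |b|^2; here n = 2 with a = (1, 2) and b = (2, 1).
a, b = (1.0, 2.0), (2.0, 1.0)

def u(x1, x2, y1, y2):
    return math.sin(a[0]*x1 + a[1]*x2 + b[0]*y1 + b[1]*y2)

def second_diff(f, args, i, h=1e-4):
    """Central second difference of f in its i-th argument."""
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (f(*up) - 2.0*f(*args) + f(*dn)) / h**2

pt = (0.3, -0.7, 1.1, 0.5)
lhs = (second_diff(u, pt, 0) + second_diff(u, pt, 1)
       - second_diff(u, pt, 2) - second_diff(u, pt, 3))
print(abs(lhs))  # ~0, up to finite-difference error
```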
The ultrahyperbolic equation has been studied from a number of viewpoints. On the one hand, it resembles the classical wave equation. This has led to a number of developments concerning its characteristics, one of which is due to Fritz John: the John equation.
In 2008, Walter Craig and Steven Weinstein proved that under a nonlocal constraint, the initial value problem is well-posed for initial data given on a codimension-one hypersurface.[2] Later, in 2022, a research team at the University of Michigan extended the conditions for solving ultrahyperbolic wave equations to complex time ("kime"), demonstrated space-kime dynamics, and showed data-science applications using tensor-based linear modeling of functional magnetic resonance imaging data.[3][4]
The equation has also been studied from the point of view of symmetric spaces, and elliptic differential operators.[5] In particular, the ultrahyperbolic equation satisfies an analog of the mean value theorem for harmonic functions.
Notes
1. See Courant and Hilbert.
2. Craig, Walter; Weinstein, Steven. "On determinism and well-posedness in multiple time dimensions". Proc. R. Soc. A vol. 465 no. 2110 3023-3046 (2008). Retrieved 5 December 2013.
3. Wang, Y; Shen, Y; Deng, D; Dinov, ID (2022). "Determinism, Well-posedness, and Applications of the Ultrahyperbolic Wave Equation in Spacekime". Partial Differential Equations in Applied Mathematics. Elsevier. 5 (100280): 100280. doi:10.1016/j.padiff.2022.100280. PMC 9494226. PMID 36159725.
4. Zhang, R; Zhang, Y; Liu, Y; Guo, Y; Shen, Y; Deng, D; Qiu, Y; Dinov, ID (2022). "Kimesurface Representation and Tensor Linear Modeling of Longitudinal Data". Partial Differential Equations in Applied Mathematics. Springer. 34 (8): 6377–6396. doi:10.1007/s00521-021-06789-8. PMC 9355340. PMID 35936508.
5. Helgason, S (1959). "Differential operators on homogeneous spaces". Acta Mathematica. Institut Mittag-Leffler. 102 (3–4): 239–299. doi:10.1007/BF02564248.
References
• Richard Courant; David Hilbert (1962). Methods of Mathematical Physics, Vol. 2. Wiley-Interscience. pp. 744–752. ISBN 978-0-471-50439-9.
• Lars Hörmander (20 August 2001). "Asgeirsson's Mean Value Theorem and Related Identities". Journal of Functional Analysis. 2 (184): 377–401. doi:10.1006/jfan.2001.3743.
• Lars Hörmander (1990). The Analysis of Linear Partial Differential Operators I. Springer-Verlag. Theorem 7.3.4. ISBN 978-3-540-52343-7.
• Sigurdur Helgason (2000). Groups and Geometric Analysis. American Mathematical Society. pp. 319–323. ISBN 978-0-8218-2673-7.
• Fritz John (1938). "The Ultrahyperbolic Differential Equation with Four Independent Variables". Duke Math. J. 4 (2): 300–322. doi:10.1215/S0012-7094-38-00423-5.
| Wikipedia |
Ultralimit
In mathematics, an ultralimit is a geometric construction that assigns a limit metric space to a sequence of metric spaces $X_{n}$. The construction captures the limiting behavior of finite configurations in the spaces $X_{n}$, employing an ultrafilter to bypass the need for repeatedly passing to subsequences to ensure convergence. Ultralimits generalize the idea of Gromov–Hausdorff convergence of metric spaces.
For the direct limit of a sequence of ultrapowers, see Ultraproduct.
Ultrafilters
An ultrafilter, denoted as 'ω', on the set of natural numbers $\mathbb {N} $ is a set of nonempty subsets of $\mathbb {N} $ (whose inclusion function can be thought of as a measure) which is closed under finite intersection, upwards-closed, and also which, given any subset X of $\mathbb {N} $, contains either X or $\mathbb {N} $\ X. An ultrafilter on $\mathbb {N} $ is non-principal if it contains no finite set.
Limit of a sequence of points with respect to an ultrafilter
In the following, ω is a non-principal ultrafilter on $\mathbb {N} $.
If $(x_{n})_{n\in \mathbb {N} }$ is a sequence of points in a metric space (X,d) and x ∈ X, then the point x is called the ω-limit of xn, denoted $x=\lim _{\omega }x_{n}$, if for every $\epsilon >0$ one has
$\{n:d(x_{n},x)\leq \epsilon \}\in \omega .$
It is observed that,
• If an ω-limit of a sequence of points exists, it is unique.
• If $x=\lim _{n\to \infty }x_{n}$ in the standard sense, then $x=\lim _{\omega }x_{n}$. (For this property to hold, it is crucial that the ultrafilter should be non-principal.)
A fundamental fact[1] states that, if (X,d) is compact and ω is a non-principal ultrafilter on $\mathbb {N} $, the ω-limit of any sequence of points in X exists (and is necessarily unique).
In particular, any bounded sequence of real numbers has a well-defined ω-limit in $\mathbb {R} $, as closed intervals are compact.
Ultralimit of metric spaces with specified base-points
Let ω be a non-principal ultrafilter on $\mathbb {N} $. Let (Xn ,dn) be a sequence of metric spaces with specified base-points pn ∈ Xn.
A sequence $(x_{n})_{n\in \mathbb {N} }$, where xn ∈ Xn, is called admissible if the sequence of real numbers (dn(xn ,pn))n is bounded, that is, if there exists a positive real number C such that $d_{n}(x_{n},p_{n})\leq C$. Denote the set of all admissible sequences by ${\mathcal {A}}$.
It follows from the triangle inequality that for any two admissible sequences $\mathbf {x} =(x_{n})_{n\in \mathbb {N} }$ and $\mathbf {y} =(y_{n})_{n\in \mathbb {N} }$ the sequence (dn(xn,yn))n is bounded and hence there exists an ω-limit ${\hat {d}}_{\infty }(\mathbf {x} ,\mathbf {y} ):=\lim _{\omega }d_{n}(x_{n},y_{n})$. Define a relation $\sim $ on the set ${\mathcal {A}}$ of all admissible sequences as follows: for $\mathbf {x} ,\mathbf {y} \in {\mathcal {A}}$, set $\mathbf {x} \sim \mathbf {y} $ whenever ${\hat {d}}_{\infty }(\mathbf {x} ,\mathbf {y} )=0.$ It is easy to check that $\sim $ is an equivalence relation on ${\mathcal {A}}.$
The ultralimit with respect to ω of the sequence (Xn,dn, pn) is a metric space $(X_{\infty },d_{\infty })$ defined as follows.[2]
Written as a set, $X_{\infty }={\mathcal {A}}/{\sim }$ .
For two $\sim $-equivalence classes $[\mathbf {x} ],[\mathbf {y} ]$ of admissible sequences $\mathbf {x} =(x_{n})_{n\in \mathbb {N} }$ and $\mathbf {y} =(y_{n})_{n\in \mathbb {N} }$, define $d_{\infty }([\mathbf {x} ],[\mathbf {y} ]):={\hat {d}}_{\infty }(\mathbf {x} ,\mathbf {y} )=\lim _{\omega }d_{n}(x_{n},y_{n}).$
One checks that $d_{\infty }$ is well-defined and that it is a metric on the set $X_{\infty }$.
Denote $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n},p_{n})$ .
On basepoints in the case of uniformly bounded spaces
Suppose that (Xn ,dn) is a sequence of metric spaces of uniformly bounded diameter, that is, there exists a real number C > 0 such that diam(Xn) ≤ C for every $n\in \mathbb {N} $. Then for any choice pn of base-points in Xn every sequence $(x_{n})_{n},x_{n}\in X_{n}$ is admissible. Therefore, in this situation the choice of base-points does not have to be specified when defining an ultralimit, and the ultralimit $(X_{\infty },d_{\infty })$ depends only on (Xn,dn) and on ω but does not depend on the choice of a base-point sequence $p_{n}\in X_{n}$. In this case one writes $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n})$.
Basic properties of ultralimits
1. If (Xn,dn) are geodesic metric spaces then $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n},p_{n})$ is also a geodesic metric space.[1]
2. If (Xn,dn) are complete metric spaces then $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n},p_{n})$ is also a complete metric space.[3][4]
Actually, by construction, the limit space is always complete, even when (Xn,dn) is a repeating sequence of a space (X,d) which is not complete.[5]
3. If (Xn,dn) are compact metric spaces that converge to a compact metric space (X,d) in the Gromov–Hausdorff sense (this automatically implies that the spaces (Xn,dn) have uniformly bounded diameter), then the ultralimit $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n})$ is isometric to (X,d).
4. Suppose that (Xn,dn) are proper metric spaces and that $p_{n}\in X_{n}$ are base-points such that the pointed sequence (Xn,dn,pn) converges to a proper metric space (X,d) in the Gromov–Hausdorff sense. Then the ultralimit $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n},p_{n})$ is isometric to (X,d).[1]
5. Let κ≤0 and let (Xn,dn) be a sequence of CAT(κ)-metric spaces. Then the ultralimit $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n},p_{n})$ is also a CAT(κ)-space.[1]
6. Let (Xn,dn) be a sequence of CAT(κn)-metric spaces where $\lim _{n\to \infty }\kappa _{n}=-\infty .$ Then the ultralimit $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n},p_{n})$ is a real tree.[1]
Asymptotic cones
An important class of ultralimits are the so-called asymptotic cones of metric spaces. Let (X,d) be a metric space, let ω be a non-principal ultrafilter on $\mathbb {N} $ and let pn ∈ X be a sequence of base-points. Then the ω–ultralimit of the sequence $(X,{\frac {d}{n}},p_{n})$ is called the asymptotic cone of X with respect to ω and $(p_{n})_{n}\,$ and is denoted $Cone_{\omega }(X,d,(p_{n})_{n})\,$. One often takes the base-point sequence to be constant, pn = p for some p ∈ X; in this case the asymptotic cone does not depend on the choice of p ∈ X and is denoted by $Cone_{\omega }(X,d)\,$ or just $Cone_{\omega }(X)\,$.
The notion of an asymptotic cone plays an important role in geometric group theory since asymptotic cones (or, more precisely, their topological types and bi-Lipschitz types) provide quasi-isometry invariants of metric spaces in general and of finitely generated groups in particular.[6] Asymptotic cones also turn out to be a useful tool in the study of relatively hyperbolic groups and their generalizations.[7]
Examples
1. Let (X,d) be a compact metric space and put (Xn,dn)=(X,d) for every $n\in \mathbb {N} $. Then the ultralimit $(X_{\infty },d_{\infty })=\lim _{\omega }(X_{n},d_{n})$ is isometric to (X,d).
2. Let (X,dX) and (Y,dY) be two distinct compact metric spaces and let (Xn,dn) be a sequence of metric spaces such that for each n either (Xn,dn)=(X,dX) or (Xn,dn)=(Y,dY). Let $A_{1}=\{n|(X_{n},d_{n})=(X,d_{X})\}\,$ and $A_{2}=\{n|(X_{n},d_{n})=(Y,d_{Y})\}\,$. Thus A1, A2 are disjoint and $A_{1}\cup A_{2}=\mathbb {N} .$ Therefore, one of A1, A2 has ω-measure 1 and the other has ω-measure 0. Hence $\lim _{\omega }(X_{n},d_{n})$ is isometric to (X,dX) if ω(A1)=1 and $\lim _{\omega }(X_{n},d_{n})$ is isometric to (Y,dY) if ω(A2)=1. This shows that the ultralimit can depend on the choice of an ultrafilter ω.
3. Let (M,g) be a compact connected Riemannian manifold of dimension m, where g is a Riemannian metric on M. Let d be the metric on M corresponding to g, so that (M,d) is a geodesic metric space. Choose a basepoint p∈M. Then the ultralimit (and even the ordinary Gromov-Hausdorff limit) $\lim _{\omega }(M,nd,p)$ is isometric to the tangent space TpM of M at p with the distance function on TpM given by the inner product g(p). Therefore, the ultralimit $\lim _{\omega }(M,nd,p)$ is isometric to the Euclidean space $\mathbb {R} ^{m}$ with the standard Euclidean metric.[8]
4. Let $(\mathbb {R} ^{m},d)$ be the standard m-dimensional Euclidean space with the standard Euclidean metric. Then the asymptotic cone $Cone_{\omega }(\mathbb {R} ^{m},d)$ is isometric to $(\mathbb {R} ^{m},d)$.
5. Let $(\mathbb {Z} ^{2},d)$ be the 2-dimensional integer lattice where the distance between two lattice points is given by the length of the shortest edge-path between them in the grid. Then the asymptotic cone $Cone_{\omega }(\mathbb {Z} ^{2},d)$ is isometric to $(\mathbb {R} ^{2},d_{1})$ where $d_{1}\,$ is the Taxicab metric (or L1-metric) on $\mathbb {R} ^{2}$.
6. Let (X,d) be a δ-hyperbolic geodesic metric space for some δ≥0. Then the asymptotic cone $Cone_{\omega }(X)\,$ is a real tree.[1][9]
7. Let (X,d) be a metric space of finite diameter. Then the asymptotic cone $Cone_{\omega }(X)\,$ is a single point.
8. Let (X,d) be a CAT(0)-metric space. Then the asymptotic cone $Cone_{\omega }(X)\,$ is also a CAT(0)-space.[1]
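Example 5 can be made concrete: in $\mathbb {Z} ^{2}$ the shortest edge-path length between lattice points is exactly the taxicab distance, and dilating both points by n and rescaling the metric by 1/n leaves these distances unchanged, which is consistent with the asymptotic cone being $(\mathbb {R} ^{2},d_{1})$. A small illustrative check (BFS sketch, names ours):

```python
from collections import deque

def grid_distance(p, q):
    """Shortest edge-path length between lattice points in Z^2, by BFS
    (fine for small distances; the answer is just |dx| + |dy|)."""
    dist = {p: 0}
    frontier = deque([p])
    while frontier:
        cur = frontier.popleft()
        if cur == q:
            return dist[cur]
        x, y = cur
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in dist:
                dist[nb] = dist[cur] + 1
                frontier.append(nb)

def taxicab(p, q):
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# BFS distance agrees with the L1 metric ...
assert grid_distance((0, 0), (2, 3)) == taxicab((0, 0), (2, 3)) == 5
# ... and under the rescaled metric d/n, dilated configurations keep
# the same L1 distances: d(n*p, n*q)/n = d_1(p, q).
for n in (1, 2, 3):
    assert grid_distance((0, 0), (2 * n, 3 * n)) == n * 5
print("rescaled grid distances match the taxicab metric")
```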
Footnotes
1. M. Kapovich B. Leeb. On asymptotic cones and quasi-isometry classes of fundamental groups of 3-manifolds, Geometric and Functional Analysis, Vol. 5 (1995), no. 3, pp. 582–603
2. John Roe. Lectures on Coarse Geometry. American Mathematical Society, 2003. ISBN 978-0-8218-3332-2; Definition 7.19, p. 107.
3. L.Van den Dries, A.J.Wilkie, On Gromov's theorem concerning groups of polynomial growth and elementary logic. Journal of Algebra, Vol. 89(1984), pp. 349–374.
4. John Roe. Lectures on Coarse Geometry. American Mathematical Society, 2003. ISBN 978-0-8218-3332-2; Proposition 7.20, p. 108.
5. Bridson, Haefliger "Metric Spaces of Non-positive curvature" Lemma 5.53
6. John Roe. Lectures on Coarse Geometry. American Mathematical Society, 2003. ISBN 978-0-8218-3332-2
7. Cornelia Druţu and Mark Sapir (with an Appendix by Denis Osin and Mark Sapir), Tree-graded spaces and asymptotic cones of groups. Topology, Volume 44 (2005), no. 5, pp. 959–1058.
8. Yu. Burago, M. Gromov, and G. Perel'man. A. D. Aleksandrov spaces with curvatures bounded below (in Russian), Uspekhi Matematicheskih Nauk vol. 47 (1992), pp. 3–51; translated in: Russian Math. Surveys vol. 47, no. 2 (1992), pp. 1–58
9. John Roe. Lectures on Coarse Geometry. American Mathematical Society, 2003. ISBN 978-0-8218-3332-2; Example 7.30, p. 118.
References
• John Roe. Lectures on Coarse Geometry. American Mathematical Society, 2003. ISBN 978-0-8218-3332-2; Ch. 7.
• L.Van den Dries, A.J.Wilkie, On Gromov's theorem concerning groups of polynomial growth and elementary logic. Journal of Algebra, Vol. 89(1984), pp. 349–374.
• M. Kapovich B. Leeb. On asymptotic cones and quasi-isometry classes of fundamental groups of 3-manifolds, Geometric and Functional Analysis, Vol. 5 (1995), no. 3, pp. 582–603
• M. Kapovich. Hyperbolic Manifolds and Discrete Groups. Birkhäuser, 2000. ISBN 978-0-8176-3904-4; Ch. 9.
• Cornelia Druţu and Mark Sapir (with an Appendix by Denis Osin and Mark Sapir), Tree-graded spaces and asymptotic cones of groups. Topology, Volume 44 (2005), no. 5, pp. 959–1058.
• M. Gromov. Metric Structures for Riemannian and Non-Riemannian Spaces. Progress in Mathematics vol. 152, Birkhäuser, 1999. ISBN 0-8176-3898-9; Ch. 3.
• B. Kleiner and B. Leeb, Rigidity of quasi-isometries for symmetric spaces and Euclidean buildings. Publications Mathématiques de L'IHÉS. Volume 86, Number 1, December 1997, pp. 115–197.
See also
• Ultrafilter
• Geometric group theory
• Gromov-Hausdorff convergence
| Wikipedia |
Ultraparallel theorem
In hyperbolic geometry, two lines are said to be ultraparallel if they do not intersect and are not limiting parallel.
The ultraparallel theorem states that every pair of (distinct) ultraparallel lines has a unique common perpendicular (a hyperbolic line which is perpendicular to both lines).
Hilbert's construction
Let r and s be two ultraparallel lines.
From any two distinct points A and C on s draw AB and CB' perpendicular to r with B and B' on r.
If it happens that AB = CB', then the desired common perpendicular joins the midpoints of AC and BB' (by the symmetry of the Saccheri quadrilateral ACB'B).
If not, we may suppose AB < CB' without loss of generality. Let E be a point on the line s on the opposite side of A from C. Take A' on CB' so that A'B' = AB. Through A' draw a line s' (A'E') on the side closer to E, so that the angle B'A'E' is the same as angle BAE. Then s' meets s in an ordinary point D'. Construct a point D on ray AE so that AD = A'D'.
Then D' ≠ D. They are the same distance from r and both lie on s. So the perpendicular bisector of D'D (a segment of s) is also perpendicular to r.[1]
(If r and s were asymptotically parallel rather than ultraparallel, this construction would fail because s' would not meet s. Rather s' would be asymptotically parallel to both s and r.)
Proof in the Poincaré half-plane model
Let
$a<b<c<d$
be four distinct points on the abscissa of the Cartesian plane. Let $p$ and $q$ be semicircles above the abscissa with diameters $ab$ and $cd$ respectively. Then in the Poincaré half-plane model HP, $p$ and $q$ represent ultraparallel lines.
Compose the following two hyperbolic motions:
$x\to x-a$
${\mbox{inversion in the unit semicircle.}}$
Then $a\to \infty ,\quad b\to (b-a)^{-1},\quad c\to (c-a)^{-1},\quad d\to (d-a)^{-1}.$
Now continue with these two hyperbolic motions:
$x\to x-(b-a)^{-1}$
$x\to \left[(c-a)^{-1}-(b-a)^{-1}\right]^{-1}x$
Then $a$ stays at $\infty $, $b\to 0$, $c\to 1$, $d\to z$ (say). The unique semicircle, with center at the origin, perpendicular to the one on $[1,z]$ must have a radius tangent to the radius of the other. The right triangle formed by the abscissa and the perpendicular radii has hypotenuse of length ${\begin{matrix}{\frac {1}{2}}\end{matrix}}(z+1)$. Since ${\begin{matrix}{\frac {1}{2}}\end{matrix}}(z-1)$ is the radius of the semicircle on $[1,z]$, the common perpendicular sought has radius-square
${\frac {1}{4}}\left[(z+1)^{2}-(z-1)^{2}\right]=z.$
The four hyperbolic motions that produced $z$ above can each be inverted and applied in reverse order to the semicircle centered at the origin and of radius ${\sqrt {z}}$ to yield the unique hyperbolic line perpendicular to both ultraparallels $p$ and $q$.
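The boundary computations in this proof are easy to carry out numerically. The sketch below (with an arbitrarily chosen example $a<b<c<d$; names ours) follows the four motions on the boundary points and verifies the Euclidean orthogonality condition $|c_{1}-c_{2}|^{2}=r_{1}^{2}+r_{2}^{2}$ for the two circles:

```python
# Boundary endpoints of two ultraparallel lines p (semicircle on [a,b])
# and q (semicircle on [c,d]) in the half-plane model; example values.
a, b, c, d = 0.0, 1.0, 2.0, 5.0

def move(x):
    """Composition of the four boundary motions from the proof."""
    x = x - a                                   # translation
    x = 1.0 / x                                 # inversion in unit semicircle
    x = x - 1.0 / (b - a)                       # translation
    return x / (1.0 / (c - a) - 1.0 / (b - a))  # scaling

assert abs(move(b)) < 1e-12 and abs(move(c) - 1.0) < 1e-12
z = move(d)   # here z = 1.6

# p is now the vertical geodesic at 0, so every semicircle centered at 0
# is perpendicular to it; the one of radius sqrt(z) is also perpendicular
# to the image of q, the semicircle on [1, z]: two circles with centers
# c1, c2 and radii r1, r2 meet at right angles iff |c1-c2|^2 = r1^2 + r2^2.
center, radius = (1.0 + z) / 2.0, (z - 1.0) / 2.0
print(abs(center**2 - (z + radius**2)))  # ~0, since r1^2 = z
```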
Proof in the Beltrami-Klein model
In the Beltrami-Klein model of the hyperbolic geometry:
• two ultraparallel lines correspond to two non-intersecting chords.
• The poles of these two lines are the respective intersections of the tangent lines to the boundary circle at the endpoints of the chords.
• Lines perpendicular to line l are modeled by chords whose extension passes through the pole of l.
• Hence we draw the unique line between the poles of the two given lines, and intersect it with the boundary circle; the chord of intersection will be the desired common perpendicular of the ultraparallel lines.
If one of the chords happens to be a diameter, we do not have a pole, but in this case any chord perpendicular to the diameter in the Euclidean sense is also perpendicular to it in the Beltrami–Klein model, and so we draw a line through the pole of the other line intersecting the diameter at right angles to get the common perpendicular.
The proof is completed by showing this construction is always possible:
• If both chords are diameters, they intersect (at the center of the boundary circle).
• If only one of the chords is a diameter, the other chord projects orthogonally down to a section of the first chord contained in its interior, and a line from the pole orthogonal to the diameter intersects both the diameter and the chord.
• If both lines are not diameters, then we may extend the tangents drawn from each pole to produce a quadrilateral with the unit circle inscribed within it. The poles are opposite vertices of this quadrilateral, and the chords are lines drawn between adjacent sides of the vertex, across opposite corners. Since the quadrilateral is convex, the line between the poles intersects both of the chords drawn across the corners, and the segment of the line between the chords defines the required chord perpendicular to the two other chords.
Alternatively, we can construct the common perpendicular of the ultraparallel lines as follows: the ultraparallel lines in Beltrami-Klein model are two non-intersecting chords. But they actually intersect outside the circle. The polar of the intersecting point is the desired common perpendicular.[2]
References
1. H. S. M. Coxeter (17 September 1998). Non-euclidean Geometry. pp. 190–192. ISBN 978-0-88385-522-5.
2. W. Thurston, Three-Dimensional Geometry and Topology, page 72
• Karol Borsuk & Wanda Szmielew (1960) Foundations of Geometry, page 291.
| Wikipedia |
Ultrapolynomial
In mathematics, an ultrapolynomial is a power series in several variables whose coefficients are bounded in some specific sense.
Definition
Let $d\in \mathbb {N} $ and $K$ a field (typically $\mathbb {R} $ or $\mathbb {C} $) equipped with a norm (typically the absolute value). Then a function $P:K^{d}\rightarrow K$ of the form $P(x)=\sum _{\alpha \in \mathbb {N} ^{d}}c_{\alpha }x^{\alpha }$ is called an ultrapolynomial of class $\left\{M_{p}\right\}$ (resp. of class $\left(M_{p}\right)$) if the coefficients $c_{\alpha }$ satisfy $\left|c_{\alpha }\right|\leq CL^{\left|\alpha \right|}/M_{\alpha }$ for all $\alpha \in \mathbb {N} ^{d}$, for some $L>0$ and $C>0$ (resp. for every $L>0$ and some $C(L)>0$).
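An illustrative numerical sketch (the series and the weight sequence $M_{p}=p!$ are our choices, not from the source): the entire series $\sum _{n}x^{n}/(n!)^{2}$ has coefficients $c_{n}=1/(n!)^{2}$, so $|c_{n}|M_{n}=1/n!$ and a finite constant $C(L)$ exists for every $L>0$.

```python
import math

# For c_n = 1/(n!)^2 and M_n = n!, the bound |c_n| <= C * L**n / M_n
# holds for EVERY L > 0 with C(L) = sup_n 1/(n! * L**n), which is finite.
def C_of(L, terms=60):
    return max(1.0 / (math.factorial(n) * L**n) for n in range(terms))

for L in (0.1, 1.0, 10.0):
    C = C_of(L)
    assert all(
        1.0 / math.factorial(n)**2 <= C * L**n / math.factorial(n) * (1 + 1e-12)
        for n in range(60)
    )
print(C_of(0.1), C_of(1.0), C_of(10.0))  # all finite
```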
References
• Lozanov-Crvenković, Z.; Perišić, D. (5 Feb 2007). "Kernel theorem for the space of Beurling - Komatsu tempered ultradistibutions". arXiv:math/0702093.
• Lozanov-Crvenković, Z (October 2007). "Kernel theorems for the spaces of tempered ultradistributions". Integral Transforms and Special Functions. 18 (10): 699–713. doi:10.1080/10652460701445658. S2CID 123420666.
• Pilipović, Stevan; Prangoski, Bojan; Vindas, Jasson (2021). "Infinite order ΨDOs: Composition with entire functions, new Shubin–Sobolev spaces, and index theorem". Analysis and Mathematical Physics. 11 (3). arXiv:1711.05628. doi:10.1007/s13324-021-00545-w. S2CID 201107206.
| Wikipedia |
Gegenbauer polynomials
In mathematics, Gegenbauer polynomials or ultraspherical polynomials $C_{n}^{(\alpha )}(x)$ are orthogonal polynomials on the interval [−1,1] with respect to the weight function $(1-x^{2})^{\alpha -1/2}$. They generalize Legendre polynomials and Chebyshev polynomials, and are special cases of Jacobi polynomials. They are named after Leopold Gegenbauer.
Characterizations
• Plot of the Gegenbauer polynomial C n^(m)(x) with n=10 and m=1 in the complex plane from -2-2i to 2+2i with colors created with Mathematica 13.1 function ComplexPlot3D
• Gegenbauer polynomials with α=1
• Gegenbauer polynomials with α=2
• Gegenbauer polynomials with α=3
• An animation showing the polynomials on the xα-plane for the first 4 values of n.
A variety of characterizations of the Gegenbauer polynomials are available.
• The polynomials can be defined in terms of their generating function (Stein & Weiss 1971, §IV.2):
${\frac {1}{(1-2xt+t^{2})^{\alpha }}}=\sum _{n=0}^{\infty }C_{n}^{(\alpha )}(x)t^{n}\qquad (0\leq |x|<1,|t|\leq 1,\alpha >0)$
• The polynomials satisfy the recurrence relation (Suetin 2001):
${\begin{aligned}C_{0}^{(\alpha )}(x)&=1\\C_{1}^{(\alpha )}(x)&=2\alpha x\\(n+1)C_{n+1}^{(\alpha )}(x)&=2(n+\alpha )xC_{n}^{(\alpha )}(x)-(n+2\alpha -1)C_{n-1}^{(\alpha )}(x).\end{aligned}}$
• Gegenbauer polynomials are particular solutions of the Gegenbauer differential equation (Suetin 2001):
$(1-x^{2})y''-(2\alpha +1)xy'+n(n+2\alpha )y=0.\,$
When α = 1/2, the equation reduces to the Legendre equation, and the Gegenbauer polynomials reduce to the Legendre polynomials.
When α = 1, the equation reduces to the Chebyshev differential equation, and the Gegenbauer polynomials reduce to the Chebyshev polynomials of the second kind.[1]
• They are given as Gaussian hypergeometric series in certain cases where the series is in fact finite:
$C_{n}^{(\alpha )}(z)={\frac {(2\alpha )_{n}}{n!}}\,_{2}F_{1}\left(-n,2\alpha +n;\alpha +{\frac {1}{2}};{\frac {1-z}{2}}\right).$
(Abramowitz & Stegun p. 561). Here (2α)n is the rising factorial. Explicitly,
$C_{n}^{(\alpha )}(z)=\sum _{k=0}^{\lfloor n/2\rfloor }(-1)^{k}{\frac {\Gamma (n-k+\alpha )}{\Gamma (\alpha )k!(n-2k)!}}(2z)^{n-2k}.$
From this it is also easy to obtain the value at unit argument:
$C_{n}^{(\alpha )}(1)={\frac {\Gamma (2\alpha +n)}{\Gamma (2\alpha )n!}}.$
• They are special cases of the Jacobi polynomials (Suetin 2001):
$C_{n}^{(\alpha )}(x)={\frac {(2\alpha )_{n}}{(\alpha +{\frac {1}{2}})_{n}}}P_{n}^{(\alpha -1/2,\alpha -1/2)}(x).$
in which $(\theta )_{n}$ represents the rising factorial of $\theta $.
One therefore also has the Rodrigues formula
$C_{n}^{(\alpha )}(x)={\frac {(-1)^{n}}{2^{n}n!}}{\frac {\Gamma (\alpha +{\frac {1}{2}})\Gamma (n+2\alpha )}{\Gamma (2\alpha )\Gamma (\alpha +n+{\frac {1}{2}})}}(1-x^{2})^{-\alpha +1/2}{\frac {d^{n}}{dx^{n}}}\left[(1-x^{2})^{n+\alpha -1/2}\right].$
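The three-term recurrence above gives a simple way to evaluate the polynomials numerically. A minimal sketch (the function name is ours), checked against the Legendre and Chebyshev special cases and the value at unit argument:

```python
import math

def gegenbauer(n, alpha, x):
    """C_n^(alpha)(x) via the three-term recurrence."""
    c0, c1 = 1.0, 2.0 * alpha * x
    if n == 0:
        return c0
    for k in range(1, n):
        # (k+1) C_{k+1} = 2(k+alpha) x C_k - (k+2alpha-1) C_{k-1}
        c0, c1 = c1, (2.0*(k + alpha)*x*c1 - (k + 2.0*alpha - 1.0)*c0) / (k + 1)
    return c1

x = 0.7
# alpha = 1/2 gives Legendre:   C_2^(1/2)(x) = (3x^2 - 1)/2
print(gegenbauer(2, 0.5, x), (3*x**2 - 1)/2)
# alpha = 1 gives Chebyshev U:  C_2^(1)(x)   = 4x^2 - 1
print(gegenbauer(2, 1.0, x), 4*x**2 - 1)
# value at 1: C_n^(alpha)(1) = Gamma(2 alpha + n) / (Gamma(2 alpha) n!)
n, alpha = 5, 1.5
lhs = gegenbauer(n, alpha, 1.0)
rhs = math.gamma(2*alpha + n) / (math.gamma(2*alpha) * math.factorial(n))
print(lhs, rhs)  # both 21
```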
Orthogonality and normalization
For a fixed α > -1/2, the polynomials are orthogonal on [−1, 1] with respect to the weighting function (Abramowitz & Stegun p. 774)
$w(z)=\left(1-z^{2}\right)^{\alpha -{\frac {1}{2}}}.$
To wit, for n ≠ m,
$\int _{-1}^{1}C_{n}^{(\alpha )}(x)C_{m}^{(\alpha )}(x)(1-x^{2})^{\alpha -{\frac {1}{2}}}\,dx=0.$
They are normalized by
$\int _{-1}^{1}\left[C_{n}^{(\alpha )}(x)\right]^{2}(1-x^{2})^{\alpha -{\frac {1}{2}}}\,dx={\frac {\pi 2^{1-2\alpha }\Gamma (n+2\alpha )}{n!(n+\alpha )[\Gamma (\alpha )]^{2}}}.$
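Both the orthogonality and the normalization can be checked by direct quadrature. A rough midpoint-rule sketch (illustrative; α = 1 and n, m = 2, 3 are our choices, and the function names are ours):

```python
import math

def gegenbauer(n, alpha, x):
    """C_n^(alpha)(x) via the three-term recurrence."""
    c0, c1 = 1.0, 2.0 * alpha * x
    if n == 0:
        return c0
    for k in range(1, n):
        c0, c1 = c1, (2.0*(k + alpha)*x*c1 - (k + 2.0*alpha - 1.0)*c0) / (k + 1)
    return c1

def inner(n, m, alpha, steps=100_000):
    """Midpoint-rule approximation of the weighted inner product on [-1, 1]."""
    h = 2.0 / steps
    total = 0.0
    for i in range(steps):
        x = -1.0 + (i + 0.5) * h
        w = (1.0 - x*x) ** (alpha - 0.5)
        total += gegenbauer(n, alpha, x) * gegenbauer(m, alpha, x) * w
    return total * h

alpha = 1.0
ortho = inner(2, 3, alpha)
diag = inner(3, 3, alpha)
norm = math.pi * 2**(1 - 2*alpha) * math.gamma(3 + 2*alpha) / (
    math.factorial(3) * (3 + alpha) * math.gamma(alpha)**2)
print(ortho)       # ~0 (orthogonality)
print(diag, norm)  # both ~pi/2 (normalization)
```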
Applications
The Gegenbauer polynomials appear naturally as extensions of Legendre polynomials in the context of potential theory and harmonic analysis. The Newtonian potential in Rn has the expansion, valid with α = (n − 2)/2,
${\frac {1}{|\mathbf {x} -\mathbf {y} |^{n-2}}}=\sum _{k=0}^{\infty }{\frac {|\mathbf {x} |^{k}}{|\mathbf {y} |^{k+n-2}}}C_{k}^{(\alpha )}({\frac {\mathbf {x} \cdot \mathbf {y} }{|\mathbf {x} ||\mathbf {y} |}}).$
When n = 3, this gives the Legendre polynomial expansion of the gravitational potential. Similar expressions are available for the expansion of the Poisson kernel in a ball (Stein & Weiss 1971).
It follows that the quantities $C_{k}^{((n-2)/2)}(\mathbf {x} \cdot \mathbf {y} )$ are spherical harmonics, when regarded as a function of x only. They are, in fact, exactly the zonal spherical harmonics, up to a normalizing constant.
Gegenbauer polynomials also appear in the theory of Positive-definite functions.
The Askey–Gasper inequality reads
$\sum _{j=0}^{n}{\frac {C_{j}^{\alpha }(x)}{2\alpha +j-1 \choose j}}\geq 0\qquad (x\geq -1,\,\alpha \geq 1/4).$
In spectral methods for solving differential equations, if a function is expanded in the basis of Chebyshev polynomials and its derivative is represented in a Gegenbauer/ultraspherical basis, then the derivative operator becomes a diagonal matrix, leading to fast banded matrix methods for large problems.[2]
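A minimal illustration of this diagonalization (not the full spectral method): the identity $dT_{n}/dx=n\,C_{n-1}^{(1)}$, with $C_{n}^{(1)}=U_{n}$ the Chebyshev polynomials of the second kind, means that differentiation sends the n-th Chebyshev coefficient to n times the (n−1)-th ultraspherical coefficient, i.e. the operator is diagonal in the mixed basis. A numerical check (illustrative sketch, names ours):

```python
import math

def T(n, x):
    """Chebyshev polynomial T_n(x) = cos(n * arccos(x))."""
    return math.cos(n * math.acos(x))

def U(n, x):
    """Chebyshev U_n(x) = C_n^(1)(x) = sin((n+1)t)/sin(t), t = arccos(x)."""
    theta = math.acos(x)
    return math.sin((n + 1) * theta) / math.sin(theta)

x, h = 0.3, 1e-6
for n in range(1, 8):
    dT = (T(n, x + h) - T(n, x - h)) / (2 * h)   # numerical dT_n/dx
    # the differentiation-matrix entry in the mixed basis is just n:
    assert abs(dT - n * U(n - 1, x)) < 1e-5
print("dT_n/dx = n * C_{n-1}^(1) verified for n = 1..7")
```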
See also
• Rogers polynomials, the q-analogue of Gegenbauer polynomials
• Chebyshev polynomials
• Romanovski polynomials
References
• Abramowitz, Milton; Stegun, Irene Ann, eds. (1983) [June 1964]. "Chapter 22". Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. Applied Mathematics Series. Vol. 55 (Ninth reprint with additional corrections of tenth original printing with corrections (December 1972); first ed.). Washington D.C.; New York: United States Department of Commerce, National Bureau of Standards; Dover Publications. p. 773. ISBN 978-0-486-61272-0. LCCN 64-60036. MR 0167642. LCCN 65-12253.
• Koornwinder, Tom H.; Wong, Roderick S. C.; Koekoek, Roelof; Swarttouw, René F. (2010), "Orthogonal Polynomials", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248.
• Stein, Elias; Weiss, Guido (1971), Introduction to Fourier Analysis on Euclidean Spaces, Princeton, N.J.: Princeton University Press, ISBN 978-0-691-08078-9.
• Suetin, P.K. (2001) [1994], "Ultraspherical polynomials", Encyclopedia of Mathematics, EMS Press.
Specific
1. Arfken, Weber, and Harris (2013) "Mathematical Methods for Physicists", 7th edition; ch. 18.4
2. Olver, Sheehan; Townsend, Alex (January 2013). "A Fast and Well-Conditioned Spectral Method". SIAM Review. 55 (3): 462–489. arXiv:1202.1347. doi:10.1137/120865458. eISSN 1095-7200. ISSN 0036-1445.
| Wikipedia |
Umberto Zannier
Umberto Zannier (born 25 May 1957, in Spilimbergo, Italy) is an Italian mathematician, specializing in number theory and Diophantine geometry.
Umberto Zannier
Umberto Zannier
Born (1957-05-25) 25 May 1957
Spilimbergo, Italy
NationalityItalian
Alma materScuola Normale Superiore di Pisa
Known forManin–Mumford conjecture
Siegel's theorem on integral points
AwardsMathematics Prize of the Accademia dei XL (2005)
Scientific career
FieldsMathematics
InstitutionsScuola Normale Superiore di Pisa
Università IUAV di Venezia
University of Salerno
University of Padua
Doctoral advisorEnrico Bombieri
Education
Zannier earned a Laurea degree from University of Pisa and studied at the Scuola Normale Superiore di Pisa with Ph.D. supervised by Enrico Bombieri.[1]
Career
Zannier was from 1983 to 1987 a researcher at the University of Padua, from 1987 to 1991 an associate professor at the University of Salerno, and from 1991 to 2003 a full professor at the Università IUAV di Venezia. From 2003 to the present he has been a Professor in Geometry at the Scuola Normale Superiore di Pisa.[2]
In 2010 he gave the Hermann Weyl Lectures at the Institute for Advanced Study.[3] He was a visiting professor at several institutions, including the Institut Henri Poincaré in Paris, the ETH Zurich, and the Erwin Schrödinger Institute in Vienna.
With Jonathan Pila he developed a method (now known as the Pila-Zannier method) of applying O-minimality to number-theoretical and algebro-geometric problems. Thus they gave a new proof of the Manin–Mumford conjecture (which was first proved by Michel Raynaud and Ehud Hrushovski). Zannier and Pietro Corvaja in 2002 gave a new proof of Siegel's theorem on integral points by using a new method based upon the subspace theorem.[4]
Awards & Service
Zannier was an Invited Speaker at the 4th European Mathematical Congress in Stockholm in 2004. Zannier was elected a corresponding member of the Istituto Veneto in 2004, a member of the Accademia dei Lincei in 2006, and a member of Academia Europaea in 2012.[2] In 2014 he was an Invited Speaker of the International Congress of Mathematicians in Seoul.[5]
In 2005 Zannier received the Mathematics Prize of the Accademia dei XL and in 2011 an Advanced Grant from the European Research Council (ERC). He is chief editor of the Annali di Scuola Normale Superiore and a co-editor of Acta Arithmetica.[2]
Selected publications
• On the distribution of self-numbers. Proc. Amer. Math. Soc. vol. 85, 1982, 10-14 doi:10.1090/S0002-9939-1982-0647887-4 (See self number.)
• Some Applications of Diophantine Approximation to Diophantine Equations. Forum, Udine 2003. (69 pages)
• Lecture Notes on Diophantine Analysis. Edizioni Della Normale (Lecture Notes Scuola Normale Superiore), Appendix by Francesco Amoroso, 2009. Zannier, Umberto (2015). pbk reprint. ISBN 9788876425172.
• Some Problems of Unlikely Intersections in Arithmetic and Geometry. Annals of Math. Studies, Volume 181, Princeton University Press, 2012 (with appendix by David Masser).[6]
• With Enrico Bombieri and David Masser: Intersecting a Curve with Algebraic Subgroups of Multiplicative Groups. International Mathematics Research Notices, Vol. 20, 1999, 1119–1140. doi:10.1155/S1073792899000628
• A proof of Pisot's $d^{th}$ root conjecture. Annals of Mathematics, Vol. 151, 2000, pp. 375–383.
• with P. Corvaja: "A subspace theorem approach to integral points on curves", Compte Rendu Acad. Sci., 334, 2002, pp. 267–271 doi:10.1016/S1631-073X(02)02240-9
• with P. Corvaja: Finiteness of Integral Values for the Ratio of Two Linear Recurrences. Inventiones Mathematicae, Vol. 149, 2002, pp. 431–451. doi:10.1007/s002220200221
• with P. Corvaja: On Integral Points on Surfaces. Annals of Mathematics, Vol. 160, 2004, 705–726. arXiv preprint
• with P. Corvaja: "On the rational approximations to the powers of an algebraic number: solution of two problems of Mahler and Mendès France." Acta Mathematica vol. 193, no. 2, 2004, 175–191. doi:10.1007/BF02392563
• with P. Corvaja: "Some cases of Vojta's conjecture on integral points over function fields." Journal of Algebraic Geometry, Vol. 17, 2008, pp. 295–333. arXiv preprint
• as editor with Francesco Amoroso: Diophantine approximation. Lectures given at the C.I.M.E. summer school held in Cetraro, Italy, June 28–July 6, 2000. Springer 2003.
• with J. Pila: Rational points in periodic analytic sets and the Manin-Mumford conjecture. Atti Accad. Naz. Lincei, Cl. Sci. Fis. Mat. Nature., Rend. Lincei (9) Mat. Appl., Vol. 19, 2008, No. 2, pp. 149–162. arXiv preprint
References
1. Umberto Zannier at the Mathematics Genealogy Project
2. Zannier Umberto, Scuola Normale Superiore
3. "Weyl Lectures, Umberto Zannier". Institute for Advanced Study, Video Lectures. 4 May 2010.
4. P. Corvaja and Zannier, U. "A subspace theorem approach to integral points on curves", Compte Rendu Acad. Sci., 334, 2002, pp. 267–271 doi:10.1016/S1631-073X(02)02240-9
5. Zannier, Umberto. "Elementary integration of differentials in families and conjectures of Pink, Wednesday, August 20, 2014 Seoul ICM". ICM2014 VideoSeries IL3.11.
6. Silverman, Joseph H. (April 2013). "Review of Some Problems of Unlikely Intersections in Arithmetic and Geometry by Umberto Zannier" (PDF). Bull. Amer. Math. Soc. (N.S.). 50 (2): 353–358. doi:10.1090/s0273-0979-2012-01386-1.
External links
• Umberto Zannier at the Mathematics Genealogy Project
• Academia Europaea
• Hermann Weyl Lectures delivered at the Institute for Advanced Study, 2010
• "An Overview of Some Problems of Unlikely Intersections - Umberto Zannier". YouTube. 1 September 2016. (Tuesday, May 4th, 2010)
• "Unlikely Intersections in Multiplicative Groups and the Zilber Conjecture - Umberto Zannier". YouTube. 1 September 2016. (Wednesday, May 5th, 2010)
• "Unlikely Intersections in Elliptic Surfaces and Problems of Masser - Umberto Zannier". YouTube. 1 September 2016. (Tuesday, May 11th, 2010)
• "About the André-Oort Conjecture - Umberto Zannier". YouTube. 1 September 2016. (Wednesday, May 12th, 2010)
Authority control
International
• ISNI
• VIAF
National
• Norway
• France
• BnF data
• Catalonia
• Germany
• Italy
• Israel
• United States
• Poland
Academics
• CiNii
• DBLP
• MathSciNet
• Mathematics Genealogy Project
• ORCID
• zbMATH
People
• Deutsche Biographie
Other
• IdRef
| Wikipedia |
Umbilical point
In the differential geometry of surfaces in three dimensions, umbilics or umbilical points are points on a surface that are locally spherical. At such points the normal curvatures in all directions are equal, hence, both principal curvatures are equal, and every tangent vector is a principal direction. The name "umbilic" comes from the Latin umbilicus (navel).
Umbilic points generally occur as isolated points in the elliptical region of the surface; that is, where the Gaussian curvature is positive.
Unsolved problem in mathematics:
Does every smooth topological sphere in Euclidean space have at least two umbilics?
(more unsolved problems in mathematics)
The sphere is the only surface with non-zero curvature where every point is umbilic. A flat umbilic is an umbilic with zero Gaussian curvature. The monkey saddle is an example of a surface with a flat umbilic and on the plane every point is a flat umbilic. A torus can have no umbilics, but every closed surface of nonzero Euler characteristic, embedded smoothly into Euclidean space, has at least one umbilic. An unproven conjecture of Constantin Carathéodory states that every smooth topological sphere in Euclidean space has at least two umbilics.[1]
The three main types of umbilic points are elliptical umbilics, parabolic umbilics and hyperbolic umbilics. Elliptical umbilics have the three ridge lines passing through the umbilic and hyperbolic umbilics have just one. Parabolic umbilics are a transitional case with two ridges one of which is singular. Other configurations are possible for transitional cases. These cases correspond to the D4−, D5 and D4+ elementary catastrophes of René Thom's catastrophe theory.
Umbilics can also be characterised by the pattern of the principal direction vector field around the umbilic which typically form one of three configurations: star, lemon, and lemonstar (or monstar). The index of the vector field is either −½ (star) or ½ (lemon, monstar). Elliptical and parabolic umbilics always have the star pattern, whilst hyperbolic umbilics can be star, lemon, or monstar. This classification was first due to Darboux and the names come from Hannay.[2]
For surfaces with genus 0 with isolated umbilics, e.g. an ellipsoid, the index of the principal direction vector field must be 2 by the Poincaré–Hopf theorem. Generic genus 0 surfaces have at least four umbilics of index ½. An ellipsoid of revolution has two non-generic umbilics each of which has index 1.[3]
• configurations of lines of curvature near umbilics
• Star
• Monstar
• Lemon
Classification of umbilics
Cubic forms
The classification of umbilics is closely linked to the classification of real cubic forms $ax^{3}+3bx^{2}y+3cxy^{2}+dy^{3}$. A cubic form will have a number of root lines: lines $\lambda (x,y)$, with $\lambda $ real, along which the cubic form vanishes. There are a number of possibilities including:
• Three distinct lines: an elliptical cubic form, standard model $x^{2}y-y^{3}$.
• Three lines, two of which are coincident: a parabolic cubic form, standard model $x^{2}y$.
• A single real line: a hyperbolic cubic form, standard model $x^{2}y+y^{3}$.
• Three coincident lines, standard model $x^{3}$.[4]
The equivalence classes of such cubics under uniform scaling form a three-dimensional real projective space and the subset of parabolic forms defines a surface, called the umbilic bracelet by Christopher Zeeman.[4] Taking equivalence classes under rotation of the coordinate system removes one further parameter and a cubic form can be represented by the complex cubic form $z^{3}+3{\overline {\beta }}z^{2}{\overline {z}}+3\beta z{\overline {z}}^{2}+{\overline {z}}^{3}$ with a single complex parameter $\beta $. Parabolic forms occur when $\beta ={\tfrac {1}{3}}(2e^{i\theta }+e^{-2i\theta })$, the inner deltoid; elliptical forms are inside the deltoid and hyperbolic ones outside. If $\left|\beta \right|=1$ and $\beta $ is not a cube root of unity then the cubic form is a right-angled cubic form, which plays a special role for umbilics. If $\left|\beta \right|={\tfrac {1}{3}}$ then two of the root lines are orthogonal.[5]
A second cubic form, the Jacobian is formed by taking the Jacobian determinant of the vector valued function $F:\mathbb {R} ^{2}\rightarrow \mathbb {R} ^{2}$, $F(x,y)=(x^{2}+y^{2},ax^{3}+3bx^{2}y+3cxy^{2}+dy^{3})$. Up to a constant multiple this is the cubic form $bx^{3}+(2c-a)x^{2}y+(d-2b)xy^{2}-cy^{3}$. Using complex numbers the Jacobian is a parabolic cubic form when $\beta =-2e^{i\theta }-e^{-2i\theta }$, the outer deltoid in the classification diagram.[5]
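The stated form of the Jacobian can be checked symbolically; the following SymPy sketch verifies that the Jacobian determinant of $F$ equals the quoted cubic up to the constant multiple 6:

```python
from sympy import symbols, Matrix, expand

x, y, a, b, c, d = symbols('x y a b c d')
cubic = a*x**3 + 3*b*x**2*y + 3*c*x*y**2 + d*y**3

# F(x, y) = (x^2 + y^2, cubic), as in the text
F = Matrix([x**2 + y**2, cubic])
J = F.jacobian([x, y]).det()

# Up to a constant multiple (here 6), J is the stated Jacobian cubic form
expected = 6*(b*x**3 + (2*c - a)*x**2*y + (d - 2*b)*x*y**2 - c*y**3)
```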
Umbilic classification
Any surface with an isolated umbilic point at the origin can be expressed as a Monge form parameterisation $z={\tfrac {1}{2}}\kappa (x^{2}+y^{2})+{\tfrac {1}{3}}(ax^{3}+3bx^{2}y+3cxy^{2}+dy^{3})+\ldots $, where $\kappa $ is the unique principal curvature. The type of umbilic is classified by the cubic form from the cubic part and corresponding Jacobian cubic form. Whilst principal directions are not uniquely defined at an umbilic the limits of the principal directions when following a ridge on the surface can be found and these correspond to the root-lines of the cubic form. The pattern of lines of curvature is determined by the Jacobian.[5]
The classification of umbilic points is as follows:[5]
• Inside inner deltoid - elliptical umbilics
• On inner circle - two ridge lines tangent
• On inner deltoid - parabolic umbilics
• Outside inner deltoid - hyperbolic umbilics
• Inside outer circle - star pattern
• On outer circle - birth of umbilics
• Between outer circle and outer deltoid - monstar pattern
• Outside outer deltoid - lemon pattern
• Cusps of the inner deltoid - cubic (symbolic) umbilics
• On the diagonals and the horizontal line - symmetrical umbilics with mirror symmetry
In a generic family of surfaces umbilics can be created, or destroyed, in pairs: the birth of umbilics transition. Both umbilics will be hyperbolic, one with a star pattern and one with a monstar pattern. The outer circle in the diagram, a right angle cubic form, gives these transitional cases. Symbolic umbilics are a special case of this.[5]
Focal surface
The elliptical umbilics and hyperbolic umbilics have distinctly different focal surfaces. A ridge on the surface corresponds to a cuspidal edge, so each sheet of the elliptical focal surface will have three cuspidal edges which come together at the umbilic focus and then switch to the other sheet. For a hyperbolic umbilic there is a single cuspidal edge which switches from one sheet to the other.[5]
Definition in higher dimension in Riemannian manifolds
A point p in a Riemannian submanifold is umbilical if, at p, the (vector-valued) second fundamental form is some normal vector times the induced metric (first fundamental form). Equivalently, for all vectors U, V at p, II(U, V) = gp(U, V)$\nu $, where $\nu $ is the mean curvature vector at p.
A submanifold is said to be umbilic (or all-umbilic) if this condition holds at every point p. This is equivalent to saying that the submanifold can be made totally geodesic by an appropriate conformal change of the metric of the surrounding ("ambient") manifold. For example, a surface in Euclidean space is umbilic if and only if it is a piece of a sphere.
See also
• umbilical – an anatomical term meaning of, or relating to the navel
References
1. Berger, Marcel (2010), "The Carathéodory conjecture", Geometry revealed, Springer, Heidelberg, pp. 389–390, doi:10.1007/978-3-540-70997-8, ISBN 978-3-540-70996-1, MR 2724440.
2. Berry, M V; Hannay, J H (1977). "Umbilic points on Gaussian random surfaces". J. Phys. A. 10: 1809–21.
3. Porteous, p 208
4. Poston, Tim; Stewart, Ian (1978), Catastrophe Theory and its Applications, Pitman, ISBN 0-273-01029-8
5. Porteous, Ian R. (2001), Geometric Differentiation, Cambridge University Press, pp. 198–213, ISBN 0-521-00264-8
• Darboux, Gaston (1896) [1887], Leçons sur la théorie génerale des surfaces: Volumes I–IV, Gauthier-Villars
• Volume I
• Volume II
• Volume III
• Volume IV
• Pictures of star, lemon, monstar, and further references
| Wikipedia |
Umbral calculus
In mathematics before the 1970s, the term umbral calculus referred to the surprising similarity between seemingly unrelated polynomial equations and certain shadowy techniques used to "prove" them. These techniques were introduced by John Blissard and are sometimes called Blissard's symbolic method.[1] They are often attributed to Édouard Lucas (or James Joseph Sylvester), who used the technique extensively.[2]
Short history
In the 1930s and 1940s, Eric Temple Bell attempted to set the umbral calculus on a rigorous footing.
In the 1970s, Steven Roman, Gian-Carlo Rota, and others developed the umbral calculus by means of linear functionals on spaces of polynomials. Currently, umbral calculus refers to the study of Sheffer sequences, including polynomial sequences of binomial type and Appell sequences, but may encompass systematic correspondence techniques of the calculus of finite differences.
The 19th-century umbral calculus
The method is a notational procedure used for deriving identities involving indexed sequences of numbers by pretending that the indices are exponents. Construed literally, it is absurd, and yet it is successful: identities derived via the umbral calculus can also be properly derived by more complicated methods that can be taken literally without logical difficulty.
An example involves the Bernoulli polynomials. Consider, for example, the ordinary binomial expansion (which contains a binomial coefficient):
$(y+x)^{n}=\sum _{k=0}^{n}{n \choose k}y^{n-k}x^{k}$
and the remarkably similar-looking relation on the Bernoulli polynomials:
$B_{n}(y+x)=\sum _{k=0}^{n}{n \choose k}B_{n-k}(y)x^{k}.$
Compare also the ordinary derivative
${\frac {d}{dx}}x^{n}=nx^{n-1}$
to a very similar-looking relation on the Bernoulli polynomials:
${\frac {d}{dx}}B_{n}(x)=nB_{n-1}(x).$
These similarities allow one to construct umbral proofs, which, on the surface, cannot be correct, but seem to work anyway. Thus, for example, by pretending that the subscript n − k is an exponent:
$B_{n}(x)=\sum _{k=0}^{n}{n \choose k}b^{n-k}x^{k}=(b+x)^{n},$
and then differentiating, one gets the desired result:
$B_{n}'(x)=n(b+x)^{n-1}=nB_{n-1}(x).$
In the above, the variable b is an "umbra" (Latin for shadow).
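The identities obtained this way can be verified rigorously. The sketch below uses exact rational arithmetic: the Bernoulli numbers are generated from their standard recurrence (with the $B_{1}=-{\tfrac {1}{2}}$ convention), the umbral rule $b^{j}\to B_{j}$ is applied to $(b+x)^{n}$, and the resulting polynomials are checked against $B_{n}'(x)=nB_{n-1}(x)$:

```python
from fractions import Fraction
from math import comb

# Bernoulli numbers B_0, B_1, ... (B_1 = -1/2 convention), generated
# from the recurrence sum_{k=0}^{n-1} C(n+1, k) B_k = -(n+1) B_n.
B = [Fraction(1)]
for m in range(1, 8):
    B.append(-sum(comb(m + 1, k) * B[k] for k in range(m)) / (m + 1))

def bernoulli_poly(n):
    # Coefficients [c_0, ..., c_n] of B_n(x) = sum_k C(n, k) B_{n-k} x^k,
    # i.e. the umbral rule b**j -> B_j applied to (b + x)**n.
    return [comb(n, k) * B[n - k] for k in range(n + 1)]

def deriv(coeffs):
    # Polynomial derivative on coefficient lists.
    return [k * c for k, c in enumerate(coeffs)][1:]

n = 6
lhs = deriv(bernoulli_poly(n))                  # coefficients of B_6'(x)
rhs = [n * c for c in bernoulli_poly(n - 1)]    # coefficients of 6*B_5(x)
```

Here `lhs == rhs`, i.e. the derivative relation the umbral argument "proves" does hold for the polynomials so constructed.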
See also Faulhaber's formula.
Umbral Taylor series
In differential calculus, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. That is, a real or complex-valued function f (x) that is infinitely differentiable at $a$ can be written as:
$f(x)=\sum _{n=0}^{\infty }{\frac {f^{(n)}(a)}{n!}}(x-a)^{n}$
Similar relationships were also observed in the theory of finite differences. The umbral version of the Taylor series is given by a similar expression involving the k-th forward differences $\Delta ^{k}[f]$ of a polynomial function f,
$f(x)=\sum _{k=0}^{\infty }{\frac {\Delta ^{k}[f](a)}{k!}}(x-a)_{k}$
where
$(x-a)_{k}=(x-a)(x-a-1)(x-a-2)\cdots (x-a-k+1)$
is the Pochhammer symbol used here for the falling sequential product. A similar relationship holds for the backward differences and rising factorial.
This series is also known as the Newton series or Newton's forward difference expansion. The analogy to Taylor's expansion is utilized in the calculus of finite differences.
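A short sketch of the Newton forward-difference expansion; for a polynomial f of degree d, the series truncated after d + 1 terms reproduces f exactly:

```python
from fractions import Fraction
from math import comb, factorial

def forward_diff(f, a, k):
    # k-th forward difference: Delta^k f(a) = sum_j (-1)^(k-j) C(k, j) f(a + j)
    return sum((-1) ** (k - j) * comb(k, j) * f(a + j) for j in range(k + 1))

def falling(z, k):
    # Falling factorial (z)_k = z (z - 1) ... (z - k + 1)
    out = 1
    for i in range(k):
        out *= z - i
    return out

def newton_series(f, a, x, terms):
    # Umbral Taylor expansion: f(x) = sum_k Delta^k f(a) / k! * (x - a)_k
    return sum(Fraction(forward_diff(f, a, k) * falling(x - a, k), factorial(k))
               for k in range(terms))

f = lambda t: t**3 - 2*t + 1   # degree 3, so four terms reproduce f exactly
```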
Bell and Riordan
In the 1930s and 1940s, Eric Temple Bell tried unsuccessfully to make this kind of argument logically rigorous. The combinatorialist John Riordan in his book Combinatorial Identities published in the 1960s, used techniques of this sort extensively.
The modern umbral calculus
Another combinatorialist, Gian-Carlo Rota, pointed out that the mystery vanishes if one considers the linear functional L on polynomials in z defined by
$L(z^{n})=B_{n}(0)=B_{n}.$
Then, using the definition of the Bernoulli polynomials and the definition and linearity of L, one can write
${\begin{aligned}B_{n}(x)&=\sum _{k=0}^{n}{n \choose k}B_{n-k}x^{k}\\&=\sum _{k=0}^{n}{n \choose k}L\left(z^{n-k}\right)x^{k}\\&=L\left(\sum _{k=0}^{n}{n \choose k}z^{n-k}x^{k}\right)\\&=L\left((z+x)^{n}\right)\end{aligned}}$
This enables one to replace occurrences of $B_{n}(x)$ by $L((z+x)^{n})$, that is, move the n from a subscript to a superscript (the key operation of umbral calculus). For instance, we can now prove that:
${\begin{aligned}\sum _{k=0}^{n}{n \choose k}B_{n-k}(y)x^{k}&=\sum _{k=0}^{n}{n \choose k}L\left((z+y)^{n-k}\right)x^{k}\\&=L\left(\sum _{k=0}^{n}{n \choose k}(z+y)^{n-k}x^{k}\right)\\&=L\left((z+x+y)^{n}\right)\\&=B_{n}(x+y).\end{aligned}}$
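Rota's functional viewpoint is easy to mechanize. In the SymPy sketch below, L is defined on powers of z by $L(z^{k})=B_{k}$ and extended linearly (the Bernoulli numbers are built from their recurrence so as to fix the $B_{1}=-{\tfrac {1}{2}}$ convention, rather than relying on a library call); then $L((z+x)^{4})$ is checked against the closed form of $B_{4}(x)$:

```python
from sympy import symbols, expand, Poly, Rational, binomial

x, z = symbols('x z')

# Bernoulli numbers with the B_1 = -1/2 convention, via the recurrence
B = [Rational(1)]
for m in range(1, 8):
    B.append(-sum(binomial(m + 1, k) * B[k] for k in range(m)) / (m + 1))

def L(expr):
    # Rota's linear functional: L(z**k) = B_k, extended by linearity
    coeffs = Poly(expand(expr), z).all_coeffs()[::-1]   # degree 0 upward
    return sum(c * B[k] for k, c in enumerate(coeffs))

n = 4
Bn_of_x = expand(L((z + x)**n))   # moves n from superscript to subscript
```

The result is $x^{4}-2x^{3}+x^{2}-{\tfrac {1}{30}}$, the Bernoulli polynomial $B_{4}(x)$.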
Rota later stated that much confusion resulted from the failure to distinguish between three equivalence relations that occur frequently in this topic, all of which were denoted by "=".
In a paper published in 1964, Rota used umbral methods to establish the recursion formula satisfied by the Bell numbers, which enumerate partitions of finite sets.
In the paper of Roman and Rota cited below, the umbral calculus is characterized as the study of the umbral algebra, defined as the algebra of linear functionals on the vector space of polynomials in a variable x, with a product L1L2 of linear functionals defined by
$\left\langle L_{1}L_{2}|x^{n}\right\rangle =\sum _{k=0}^{n}{n \choose k}\left\langle L_{1}|x^{k}\right\rangle \left\langle L_{2}|x^{n-k}\right\rangle .$
When polynomial sequences replace sequences of numbers as images of yn under the linear mapping L, then the umbral method is seen to be an essential component of Rota's general theory of special polynomials, and that theory is the umbral calculus by some more modern definitions of the term.[3] A small sample of that theory can be found in the article on polynomial sequences of binomial type. Another is the article titled Sheffer sequence.
Rota later applied umbral calculus extensively in his paper with Shen to study the various combinatorial properties of the cumulants.[4]
See also
• Bernoulli umbra
• Umbral composition of polynomial sequences
• Calculus of finite differences
• Pidduck polynomials
• Symbolic method in invariant theory
• Narumi polynomials
Notes
• Blissard, John (1861). "Theory of generic equations". The Quarterly Journal of Pure and Applied Mathematics. 4: 279–305.
1. E. T. Bell, "The History of Blissard's Symbolic Method, with a Sketch of its Inventor's Life", The American Mathematical Monthly 45:7 (1938), pp. 414–421.
2. Rota, G. C.; Kahaner, D.; Odlyzko, A. (1973). "On the foundations of combinatorial theory. VIII. Finite operator calculus". Journal of Mathematical Analysis and Applications. 42 (3): 684. doi:10.1016/0022-247X(73)90172-8.
3. G.-C. Rota and J. Shen, "On the Combinatorics of Cumulants", Journal of Combinatorial Theory, Series A, 91:283–304, 2000.
References
• Bell, E. T. (1938), "The History of Blissard's Symbolic Method, with a Sketch of its Inventor's Life", The American Mathematical Monthly, Mathematical Association of America, 45 (7): 414–421, doi:10.1080/00029890.1938.11990829, ISSN 0002-9890, JSTOR 2304144
• Roman, Steven M.; Rota, Gian-Carlo (1978), "The umbral calculus", Advances in Mathematics, 27 (2): 95–188, doi:10.1016/0001-8708(78)90087-7, ISSN 0001-8708, MR 0485417
• G.-C. Rota, D. Kahaner, and A. Odlyzko, "Finite Operator Calculus," Journal of Mathematical Analysis and its Applications, vol. 42, no. 3, June 1973. Reprinted in the book with the same title, Academic Press, New York, 1975.
• Roman, Steven (1984), The umbral calculus, Pure and Applied Mathematics, vol. 111, London: Academic Press Inc. [Harcourt Brace Jovanovich Publishers], ISBN 978-0-12-594380-2, MR 0741185. Reprinted by Dover, 2005.
• Roman, S. (2001) [1994], "Umbral calculus", Encyclopedia of Mathematics, EMS Press
External links
• Weisstein, Eric W. "Umbral Calculus". MathWorld.
• A. Di Bucchianico, D. Loeb (2000). "A Selected Survey of Umbral Calculus" (PDF). Electronic Journal of Combinatorics. Dynamic Surveys. DS3. Archived from the original (PDF) on 2012-02-24.
• Roman, S. (1982), The Theory of the Umbral Calculus, I
| Wikipedia |
Umbral moonshine
In mathematics, umbral moonshine is a mysterious connection between Niemeier lattices and Ramanujan's mock theta functions. It is a generalization of the Mathieu moonshine phenomenon connecting representations of the Mathieu group M24 with K3 surfaces.
Mathieu moonshine
The prehistory of Mathieu moonshine starts with a theorem of Mukai, asserting that any group of symplectic automorphisms of a K3 surface embeds in the Mathieu group M23. The moonshine observation arose from physical considerations: any K3 sigma-model conformal field theory has an action of the N=(4,4) superconformal algebra, arising from a hyperkähler structure. When Tohru Eguchi, Hirosi Ooguri, and Yuji Tachikawa (2011) computed the first few terms of the decomposition of the elliptic genus of a K3 CFT into characters of the N=(4,4) superconformal algebra, they found that the multiplicities matched well with simple combinations of representations of M24. However, by the Mukai–Kondo classification, there is no faithful action of this group on any K3 surface by symplectic automorphisms, and by work of Gaberdiel–Hohenegger–Volpato, there is no faithful action on any K3 CFT, so the appearance of an action on the underlying Hilbert space is still a mystery.
Eguchi and Hikami showed that the N=(4,4) multiplicities are mock modular forms, and Miranda Cheng suggested that characters of elements of M24 should also be mock modular forms. This suggestion became the Mathieu Moonshine conjecture, asserting that the virtual representation of N=(4,4) given by the K3 elliptic genus is an infinite dimensional graded representation of M24 with non-negative multiplicities in the massive sector, and that the characters are mock modular forms. In 2012, Terry Gannon proved that the representation of M24 exists.
Umbral moonshine
In 2012, Cheng, Duncan & Harvey (2012) amassed numerical evidence of an extension of Mathieu moonshine, where families of mock modular forms were attached to divisors of 24. After some group-theoretic discussion with Glauberman, Cheng, Duncan & Harvey (2013) found that this earlier extension was a special case (the A-series) of a more natural encoding by Niemeier lattices. For each Niemeier root system X, with corresponding lattice LX, they defined an umbral group GX, given by the quotient of the automorphism group of LX by the subgroup of reflections; these are also known as the stabilizers of deep holes in the Leech lattice. They conjectured that for each X, there is an infinite dimensional graded representation KX of GX, such that the characters of elements are given by a list of vector-valued mock modular forms that they computed. The candidate forms satisfy minimality properties quite similar to the genus-zero condition for Monstrous moonshine. These minimality properties imply the mock modular forms are uniquely determined by their shadows, which are vector-valued theta series constructed from the root system. The special case where X is the $A_{1}^{24}$ root system yields precisely Mathieu Moonshine. The umbral moonshine conjecture has been proved in Duncan, Griffin & Ono (2015).
The name of umbral moonshine derives from the use of shadows in the theory of mock modular forms. Other moonlight-related words like 'lambency' were given technical meanings (in this case, the genus zero group attached to a shadow SX, whose level is the dual Coxeter number of the root system X) by Cheng, Duncan, and Harvey to continue the theme.
Although the umbral moonshine conjecture has been settled, there are still many questions that remain. For example, connections to geometry and physics are still not very solid, although there is work relating umbral functions to duVal singularities on K3 surfaces by Cheng and Harrison. As another example, the current proof of the umbral moonshine conjecture is ineffective, in the sense that it does not give natural constructions of the representations. This is similar to the situation with monstrous moonshine during the 1980s: Atkin, Fong, and Smith showed by computation in 1980 that a moonshine module exists, but did not give a construction. The effective proof of the Conway-Norton conjecture was given by Borcherds in 1992, using the monster representation constructed by Frenkel, Lepowsky, and Meurman. There is a vertex algebra construction for the $E_{8}^{3}$ case by Duncan and Harvey, where GX is the symmetric group S3. However, the algebraic structure is given by an asymmetric cone gluing construction, suggesting that it is not the last word.
See also
• Monstrous moonshine
References
• Cheng, Miranda C. N.; Duncan, John F. R.; Harvey, Jeffrey A. (2012), Umbral Moonshine, arXiv:1204.2779, Bibcode:2012arXiv1204.2779C
• Cheng, Miranda C. N.; Duncan, John F. R.; Harvey, Jeffrey A. (2013), Umbral Moonshine, arXiv:1307.5793, Bibcode:2013arXiv1307.5793C
• Duncan, John F. R.; Griffin, Michael J.; Ono, Ken (10 December 2015), "Proof of the umbral moonshine conjecture", Research in the Mathematical Sciences, 2 (1), arXiv:1503.01472, doi:10.1186/s40687-015-0044-7
• Eguchi, Tohru; Hikami, Kazuhiro (2009), "Superconformal algebras and mock theta functions", Journal of Physics A: Mathematical and Theoretical, 42 (30): 531–554, arXiv:0904.0911, Bibcode:2009JPhA...42D4010E, doi:10.1088/1751-8113/42/30/304010, ISSN 1751-8113, MR 2521329
• Eguchi, Tohru; Hikami, Kazuhiro (2009), "Superconformal algebras and mock theta functions. II. Rademacher expansion for K3 surface", Communications in Number Theory and Physics, 3 (3): 531–554, arXiv:0904.0911, doi:10.4310/cntp.2009.v3.n3.a4, ISSN 1931-4523, MR 2591882
• Eguchi, Tohru; Ooguri, Hirosi; Tachikawa, Yuji (2011), "Notes on the K3 surface and the Mathieu group M24", Experimental Mathematics, 20 (1): 91–96, arXiv:1004.0956, doi:10.1080/10586458.2011.544585, ISSN 1058-6458, MR 2802725
External links
• Mathematicians Chase Moonshine's Shadow
| Wikipedia |
Umbrella sampling
Umbrella sampling is a technique in computational physics and chemistry, used to improve sampling of a system (or different systems) where ergodicity is hindered by the form of the system's energy landscape. It was first suggested by Torrie and Valleau in 1977.[1] It is a particular physical application of the more general importance sampling in statistics.
Systems in which an energy barrier separates two regions of configuration space may suffer from poor sampling. In Metropolis Monte Carlo runs, the low probability of overcoming the potential barrier can leave inaccessible configurations poorly sampled—or even entirely unsampled—by the simulation. An easily visualised example occurs with a solid at its melting point: considering the state of the system with an order parameter Q, both liquid (low Q) and solid (high Q) phases are low in energy, but are separated by a free energy barrier at intermediate values of Q. This prevents the simulation from adequately sampling both phases.
Umbrella sampling is a means of "bridging the gap" in this situation. The standard Boltzmann weighting for Monte Carlo sampling is replaced by a potential chosen to cancel the influence of the energy barrier present. The Markov chain generated has a distribution given by:
$\pi (\mathbf {r} ^{N})={\frac {w({\textbf {r}}^{N})\exp {\left(-{\frac {U(\mathbf {r} ^{N})}{k_{B}T}}\right)}}{\int {w(\mathbf {r^{\prime }} ^{N})\exp {\left(-{\frac {U(\mathbf {r^{\prime }} ^{N})}{k_{B}T}}\right)}d\mathbf {r^{\prime }} ^{N}}}},$
with U the potential energy, w(rN) a function chosen to promote configurations that would otherwise be inaccessible to a Boltzmann-weighted Monte Carlo run. In the example above, w may be chosen such that w = w(Q), taking high values at intermediate Q and low values at low/high Q, facilitating barrier crossing.
Values for a thermodynamic property A deduced from a sampling run performed in this manner can be transformed into canonical-ensemble values by applying the formula:
$\langle A\rangle ={\frac {\langle A/w\rangle _{\pi }}{\langle 1/w\rangle _{\pi }}},$
with the $\pi $ subscript indicating values from the umbrella-sampled simulation.
The effect of introducing the weighting function w(rN) is equivalent to adding a biasing potential V(rN) to the potential energy of the system.
$V(\mathbf {r} ^{N})=-k_{B}T\ln w(\mathbf {r} ^{N})$
If the biasing potential is strictly a function of a reaction coordinate or order parameter $Q$, then the (unbiased) free energy profile on the reaction coordinate can be calculated by subtracting the biasing potential from the biased free energy profile.
$F_{0}(Q)=F_{\pi }(Q)-V(Q)$
where $F_{0}(Q)$ is the free energy profile of the unbiased system and $F_{\pi }(Q)$ is the free energy profile calculated for the biased, umbrella-sampled system.
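As a minimal illustration on a hypothetical one-dimensional toy model (a harmonic potential with a linear bias, chosen so the exact answers are known, and not taken from the references): a Metropolis chain samples the biased distribution, and the reweighting formula above recovers an unbiased average.

```python
import numpy as np

rng = np.random.default_rng(42)
kT = 1.0
U = lambda q: 0.5 * q**2           # toy potential: true <q> = 0
V = lambda q: 2.0 * q              # biasing potential; pushes sampling to q ~ -2
w = lambda q: np.exp(-V(q) / kT)   # umbrella weight, so pi ∝ w(q) exp(-U(q)/kT)

# Metropolis Monte Carlo chain targeting the biased distribution pi
q, chain = 0.0, []
for _ in range(200_000):
    qn = q + rng.normal(0.0, 1.0)
    if rng.random() < np.exp(-(U(qn) + V(qn) - U(q) - V(q)) / kT):
        q = qn
    chain.append(q)
chain = np.array(chain)

# Unbiasing formula <A> = <A/w>_pi / <1/w>_pi with A(q) = q
mean_q = np.mean(chain / w(chain)) / np.mean(1.0 / w(chain))
```

Although the biased chain spends its time near q = -2, the reweighted estimate `mean_q` is close to the true unbiased mean of 0.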
Series of umbrella sampling simulations can be analyzed using the weighted histogram analysis method (WHAM)[2] or its generalization.[3] WHAM can be derived using the Maximum likelihood method.
Subtleties exist in deciding the most computationally efficient way to apply the umbrella sampling method, as described in Frenkel & Smit's book Understanding Molecular Simulation.
Alternatives to umbrella sampling for computing potentials of mean force or reaction rates are free energy perturbation and transition interface sampling. A further alternative which functions in full non-equilibrium is S-PRES.
References
1. Torrie, G. M.; Valleau, J. P. (1977). "Nonphysical sampling distributions in Monte Carlo free-energy estimation: Umbrella sampling". Journal of Computational Physics. 23 (2): 187–199. Bibcode:1977JCoPh..23..187T. doi:10.1016/0021-9991(77)90121-8.
2. Kumar, Shankar; Rosenberg, John M.; Bouzida, Djamal; Swendsen, Robert H.; Kollman, Peter A. (30 September 1992). "THE weighted histogram analysis method for free-energy calculations on biomolecules. I. The method". Journal of Computational Chemistry. 13 (8): 1011–1021. doi:10.1002/jcc.540130812. S2CID 8571486.
3. Bartels, C (7 December 2000). "Analyzing biased Monte Carlo and molecular dynamics simulations". Chemical Physics Letters. 331 (5–6): 446–454. Bibcode:2000CPL...331..446B. doi:10.1016/S0009-2614(00)01215-X.
Further reading
• Daan Frenkel and Berend Smit: "Understanding Molecular Simulation: From Algorithms to Applications". Academic Press 2001, ISBN 978-0-12-267351-1
• Johannes Kästner: “Umbrella Sampling”, WIREs Computational Molecular Science 1, 932 (2011) doi:10.1002/wcms.66
| Wikipedia |
Unambiguous Turing machine
In theoretical computer science, a Turing machine is a theoretical machine that is used in thought experiments to examine the abilities and limitations of computers. An unambiguous Turing machine is a special kind of non-deterministic Turing machine, which, in some sense, is similar to a deterministic Turing machine.
Formal definition
A non-deterministic Turing machine is represented formally by a 6-tuple, $M=(Q,\Sigma ,\iota ,\sqcup ,A,\delta )$, as explained in the page non-deterministic Turing machine. An unambiguous Turing machine is a non-deterministic Turing machine such that, for every input $w=a_{1},a_{2},...,a_{n}$, there exists at most one sequence of configurations $c_{0},c_{1},...,c_{m}$ with the following conditions:
1. $c_{0}$ is the initial configuration with input $w$
2. $c_{i+1}$ is a successor of $c_{i}$ and
3. $c_{m}$ is an accepting configuration.
In other words, if $w$ is accepted by $M$, there is exactly one accepting computation.
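The uniqueness condition can be checked mechanically for small machines. As a sketch, the following uses a toy nondeterministic finite automaton as a stand-in for a nondeterministic machine (the transition table and state names are invented for illustration) and counts distinct accepting computations per input:

```python
from itertools import product

# delta maps (state, symbol) -> set of successor states (illustrative names).
delta = {
    ("q0", "a"): {"q0", "q1"},
    ("q1", "b"): {"q2"},
    ("q0", "b"): {"q0"},
}
start, accept = "q0", {"q2"}

def accepting_paths(word):
    """Count distinct accepting computations on `word`."""
    frontier = {start: 1}                 # state -> number of paths reaching it
    for sym in word:
        nxt = {}
        for state, count in frontier.items():
            for succ in delta.get((state, sym), ()):
                nxt[succ] = nxt.get(succ, 0) + count
        frontier = nxt
    return sum(c for s, c in frontier.items() if s in accept)

def is_unambiguous(max_len=6, alphabet="ab"):
    """True iff every input up to max_len has at most one accepting computation."""
    return all(
        accepting_paths("".join(word)) <= 1
        for n in range(max_len + 1)
        for word in product(alphabet, repeat=n)
    )

print(accepting_paths("aab"), is_unambiguous())   # 1 True
```

Here the automaton is unambiguous even though it is nondeterministic: on "aab" several computations exist, but only one of them is accepting.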
Expressivity
Every deterministic Turing machine is an unambiguous Turing machine, as for each input, there is exactly one computation possible. Unambiguous Turing machines have the same expressivity as Turing machines. They are a subset of non-deterministic Turing machines, which also have the same expressivity as Turing machines.
On the other hand, unambiguous non-deterministic polynomial time is suspected to be strictly less expressive than (potentially ambiguous) non-deterministic polynomial time.
References
Lane A. Hemaspaandra and Jorg Rothe, Unambiguous Computation: Boolean Hierarchies and Sparse Turing-Complete Sets, SIAM J. Comput., 26(3), 634–653
| Wikipedia |
Fair division among groups
Fair division among groups[1] (or families[2]) is a class of fair division problems, in which the resources are allocated among groups of agents, rather than among individual agents. After the division, all members in each group consume the same share, but they may have different preferences; therefore, different members in the same group might disagree on whether the allocation is fair or not. Some examples of group fair division settings are:
• Several siblings inherited some houses from their parents and have to divide them. Each sibling has a family, whose members may have different opinions regarding which house is better.
• A partnership is dissolved, and its assets should be divided among the partners. The partners are firms; each firm has several stockholders, who might disagree regarding which asset is more important.
• The university management wants to allocate some meeting-rooms among its departments. In each department there are several faculty members, with differing opinions about which rooms are better.
• Two neighboring countries want to divide a disputed region among them. The citizens in each country differ on which parts of the region are more important. This is a common obstacle to resolving international disputes.
• The "group of agents" may also represent different conflicting preferences of a single person. As observed in behavioral economics, people often change their preferences according to different frames of mind or different moods. Such people can be represented as a group of agents, each of whom has a different preference.
In all the above examples, the groups are fixed in advance. In some settings, the groups can be determined ad-hoc, that is, people can be grouped based on their preferences. An example of such a setting is:[3]
• Some 30 people want to use the local basketball court. Each game involves 10 players with different preferences regarding which time is better. It is required to partition the time of day into 3 parts and partition the players into 3 groups and assign a group to each time-slot.
Fairness criteria
Common fairness criteria, such as proportionality and envy-freeness, judge the division from the point-of-view of a single agent, with a single preference relation. There are several ways to extend these criteria to fair division among groups.
Unanimous fairness requires that the allocation be considered fair in the eyes of all agents in all groups. For example:
• A division is called unanimously-proportional if every agent in every group values his/her group's share as at least 1/k of the total value, where k is the number of groups.
• A division is called unanimously-envy-free if every agent in every group values his/her group's share at least as much as the share of any other group.
Unanimous fairness is a strong requirement, and often cannot be satisfied.
Aggregate fairness assigns to each group a certain aggregate function, such as: sum, product, arithmetic mean or geometric mean. It requires that the allocation be considered fair according to this aggregate function. For example:
• A division is called average-proportional if, for each group, the arithmetic mean of the agents' values to the group share is at least 1/k of the total value.
• A division is called product-envy-free if, for each group, the product of agents' values of the group share is at least the product of their values of the share of any other group.
Democratic fairness requires that, in each group, a certain fraction of the agents agree that the division is fair; preferably this fraction should be at least 1/2. A practical situation in which such a requirement may be useful is when two democratic countries agree to divide a certain disputed land among them, and the agreement should be approved by a referendum in both countries.
Unanimous-fairness implies both aggregate-fairness and democratic-fairness. Aggregate-fairness and democratic fairness are independent - none of them implies the other.[2]
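With additive valuations over indivisible items, the unanimous criteria above reduce to simple inequalities that can be checked directly. A minimal sketch (the group structure, valuations, and allocation below are invented for illustration):

```python
# values[g][i] = agent i of group g's additive valuation, one value per item;
# alloc[g] = set of items allocated to group g.

def share_value(agent_vals, bundle):
    return sum(agent_vals[item] for item in bundle)

def unanimously_proportional(values, alloc):
    k = len(alloc)
    return all(
        share_value(v, alloc[g]) * k >= sum(v)   # v(group share) >= total/k
        for g, group in enumerate(values)
        for v in group
    )

def unanimously_envy_free(values, alloc):
    return all(
        share_value(v, alloc[g]) >= share_value(v, alloc[h])
        for g, group in enumerate(values)
        for v in group
        for h in range(len(alloc))
    )

# Two groups of two agents each, four items (numbers are made up):
values = [
    [[8, 1, 1, 0], [5, 2, 2, 1]],   # group 0
    [[1, 6, 2, 1], [0, 5, 3, 2]],   # group 1
]
alloc = [{0, 3}, {1, 2}]            # group 0 gets items 0 and 3
print(unanimously_proportional(values, alloc),
      unanimously_envy_free(values, alloc))   # True True
```

In this toy instance every agent in every group values its own group's bundle at least as highly as the other group's, so the allocation satisfies both unanimous criteria.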
Pareto efficiency is another important criterion that is required in addition to fairness. It is defined in the usual way: no allocation is better for at least one individual agent and at least as good for all individual agents.
Results for divisible resources
In the context of fair cake-cutting, the following results are known (where k is the number of groups, and n is the number of agents in all groups together).[2]
• Unanimous fairness: Unanimous-proportional and unanimous-envy-free allocations always exist. However, they may be disconnected: at least n connected components might be required. With two groups, n components are always sufficient. With k>2 groups, O(n log k) components are always sufficient for a unanimous-proportional allocation, and O(n k) components are always sufficient for a unanimous-envy-free allocation. It is an open question whether or not n components are always sufficient.
• Aggregate fairness: Average-proportional and average-envy-free allocations always exist, and require only k connected components (that is, each group may get a connected piece). However, they cannot be found using a finite algorithm in the Robertson–Webb query model.
• Democratic fairness: 1/2-democratic proportional and 1/2-democratic envy-free allocations always exist. With two groups, there exist such allocations that are also connected, and they can be found in polynomial time. With k>2 groups, connected 1/2-democratic fair allocations might not exist, but the number of required components is smaller than for unanimous-proportional allocations.
• Fairness and efficiency: All three variants of proportionality are compatible with Pareto-efficiency for any number of groups. Unanimous-envy-freeness is compatible with Pareto-efficiency for 2 groups, but not for 3 or more groups. 1/2-democratic envy-freeness is compatible with Pareto-efficiency for 2 groups, but not for 5 or more groups. It is open whether they are compatible for 3 or 4 groups.[4]
The division problem is easier when the agents can be grouped ad-hoc based on their preferences. In this case, there exists a unanimous envy-free connected allocation for any number of groups and any number of agents in each group.[3]
Unanimous proportionality and exact division
In an exact division (also called consensus division), there are n agents, and the goal is to partition the cake into k pieces such that all agents value all pieces at exactly 1/k. It is known that an exact division with n(k-1) cuts always exists. However, even for k=2, finding an exact division with n cuts is FIXP-hard, and finding an approximate exact division with n cuts is PPA-complete (see exact division for more information). It can be proved that unanimous-proportionality is equivalent to consensus division in the following sense:[2]
• For every n and k, a solution to unanimous-proportional division among n(k-1)+1 agents grouped into k families implies a solution to consensus division among n agents with k pieces. In particular, it implies that unanimous-proportional division requires at least n-1 cuts (n components), finding a unanimous-proportional division with n-1 cuts is FIXP-hard, and finding an approximate unanimous-proportional division with n-1 cuts is PPA-hard.
• For every n and k, a solution to exact-division among n agents and k pieces implies a solution to unanimous-proportional division among n+1 agents grouped into k families. In particular, it implies that exact unanimous-proportional division can be done with (n-1)(k-1) cuts, and that finding an approximate unanimous-proportional division is in PPA. The number of cuts is tight for k=2 families but not for k>2.[5]
Results for indivisible items
In the context of fair item allocation, the following results are known.
Unanimous approximate maximin-share fairness:[6]
• When there are two groups, a positive multiplicative approximation to MMS-fairness can be guaranteed if-and-only-if the numbers of agents in the groups are (1,n-1) or (2,2) or (2,3). The positive results are attainable by polynomial-time algorithms. In all other cases, there are instances in which at least one agent with a positive MMS gets a zero value in all allocations.
• When there are three or more groups, a positive multiplicative approximation to MMS-fairness can be attained if k-1 groups contain a single agent; in contrast, if all groups contain 2 agents and one group contains at least 5 agents, then no positive approximation is possible.
Unanimous approximate envy-freeness:[7]
• When there are two groups of agents with binary additive valuations, a unanimously-EF1 allocation exists if the group sizes are (1,5) or (2,3), but might not exist if the group sizes are (1,6) or (2,4) or (3,3). In general, an EFc allocation might not exist if the group sizes are $({2c+1 \choose c+1},{2c+1 \choose c+1})$. Note that, with binary valuations, EF1 is equivalent to EFX, but weaker than EFX0. A unanimously-EFX0 allocation might not exist if the group sizes are (1,2); this is in contrast to the situation with individual agents, that is, group sizes (1,1), where an EFX0 allocation always exists even for monotone valuations.[8]
• It is NP-hard to decide if a given instance admits a unanimously-EF1 allocation.
• When there are two groups of agents with responsive valuations (a superset of additive valuations), a unanimously-EF1 balanced allocation exists if the group sizes are (1,2). If a certain conjecture on Kneser graphs is true, then a unanimously-EF1 balanced allocation exists also for group sizes (1,4), (2,3) and arbitrary monotone valuations. A unanimously-EFX allocation might not exist if the group sizes are (1,2).
• For two ad-hoc groups, with any number of agents with arbitrary monotone valuations, there exists a unanimously-EF1 allocation. There also exists a balanced partition of the agents and a unanimously-EF1 balanced allocation of the goods. The EF1 cannot be strengthened to EFX even with additive valuations.
• For k ad-hoc groups, with any number of agents with additive valuations, there exists a unanimously-PROP*1 allocation.
• For n agents partitioned arbitrarily into k groups, there always exists an allocation that is envy-free up to c items, where $O({\sqrt {n}})\geq c\geq \Omega ({\sqrt {n/k^{3}}})$. The same is true for proportionality up to c items. For consensus division the bounds are $O({\sqrt {n}})\geq c\geq \Omega ({\sqrt {n/k}})$. All bounds are asymptotically tight when the number of groups is constant. The proofs use discrepancy theory.[9]
Unanimous envy-freeness with high probability:[10]
• When all k groups contain the same number of agents, and their valuations are drawn at random, an envy-free allocation exists with high probability if the number of goods is in $\Omega (n\log n)$, and can be attained by a greedy algorithm that maximizes the sum of utilities.
• The results can be extended to two groups with different sizes.
• There is also a truthful mechanism that attains an approximately-envy-free allocation with high probability.
• If the number of goods is in less than n, then with high probability, an envy-free allocation does not exist.
Democratic fairness:[11]
• For two groups with binary additive valuations (with any number of agents), there always exists a 1/2-democratic envy-free-except-1 allocation. The constant 1/2 is tight even if we allow envy-free-except-c allocation for any constant c. The same is true also for proportionality-except-c. A different fairness notion, that can be guaranteed to more than 1/2 of the agents in each group, is the ordinal maximin-share approximation. For every integer c, there exists a $(1-1/2^{c-1})$-democratic 1-out-of-c MMS-fair allocation. These allocations can be found efficiently using a variant of round-robin item allocation, with weighted approval voting inside each group. The upper bound on the fraction of agents that can be guaranteed 1 of their best c items (a property weaker than 1-out-of-c MMS) is $(1-1/2^{c})$. For $c=2$, the lower bound for 1-out-of-best-c allocation can be improved from 1/2 to 3/5; it is an open question whether the upper bound of 3/4 can always be attained.
• It is NP-hard to decide if a given instance admits an allocation that gives each agent a positive utility.
• For two groups with general monotone valuations, there always exists a 1/2-democratic envy-free-except-1 allocation, and it can be found by an efficient algorithm.
• For three or more groups with binary additive valuations, there always exists a 1/k-democratic envy-free-except-1 allocation; with general monotone valuations, there always exists a 1/k-democratic envy-free-except-2 allocation. The factor 1/k is tight for envy-free-except-c allocation for any constant c. If envy-freeness is relaxed to proportionality or maximin-share, then similar guarantees can be attained using a polynomial-time algorithm. For groups with additive valuations, a variant of round-robin item allocation can be used to find a 1/3-democratic 1-out-of-best-k allocation.
Group fair division of items and money
In the context of rental harmony (envy-free division of rooms and rent), the following results are known.[12]
• Unanimous envy-freeness (called strong envy-freeness in the paper) may not exist when the cost-sharing policy is equal or proportional, but always exists with free cost-sharing policy. Moreover, a unanimously-envy-free allocation with free cost-sharing that maximizes the total rent can be found in polynomial time.
• With ad-hoc groups, unanimous envy-freeness exists even with equal cost-sharing policy.
• Average envy-freeness (called aggregate envy-freeness in the paper) always exists when the cost-sharing policy is equal or proportional or free.
Fair division of ticket lotteries
A practical application of fair division among groups is dividing tickets to parks or other experiences with limited capacity. Often, tickets are divided at random. When people arrive on their own, a simple uniformly-random lottery among all candidates is a fair solution. But people often come in families or groups of friends, who want to enter together. This leads to various considerations in how exactly to design the lottery. The following results are known:
• For the setting in which all group members are identified in advance, the Group Lottery mechanism orders groups uniformly at random, and processes them sequentially as long as there is available capacity. This natural mechanism might be unfair and inefficient; there are some better alternatives.[13]
• If agents may request multiple tickets without identifying members of their group, the Individual Lottery mechanism orders agents uniformly at random and awards each their request as long as there is available capacity. This common mechanism might yield arbitrarily unfair and inefficient outcomes. The Weighted Individual Lottery is an alternative mechanism in which the processing order is biased to favor agents with smaller requests. It is approximately fair and approximately efficient.[13]
• The Iterative Probability Maximization algorithm finds a lottery that maximizes the smallest utility (based on the egalitarian rule and the leximin order). It is group strategyproof, and attains a 1/2-factor approximation of the maximum utilization. It is also Pareto-efficient, envy-free and anonymous. Its properties are maximal in the sense that it is impossible to improve one property without harming another one.[14]
Related concepts
• Group envy-freeness is a fairness criterion for fair division among individual agents. It says that, after each individual agent gets his private share, no coalition of agents envies another coalition of the same size.
• Club good is a resource that is consumed simultaneously by all members in a single group ("club"), but is excluded from members of other groups. In the group fair division problem, all allocated goods are club goods in the group they are allocated to.
• Agreeable subset is a subset of items that is considered, by all people in a certain group, to be at least as good as its complement.
References
1. Suksompong, Warut (2018). Resource allocation and decision making for groups (Thesis). OCLC 1050345365.
2. Segal-Halevi, Erel; Nitzan, Shmuel (December 2019). "Fair cake-cutting among families" (PDF). Social Choice and Welfare. 53 (4): 709–740. doi:10.1007/s00355-019-01210-9. S2CID 1602396.
3. Segal-Halevi, Erel; Suksompong, Warut (2 January 2021). "How to Cut a Cake Fairly: A Generalization to Groups". The American Mathematical Monthly. 128 (1): 79–83. arXiv:2001.03327. doi:10.1080/00029890.2021.1835338. S2CID 210157034.
4. Bade, Sophie; Segal-Halevi, Erel (2020-10-20). "Fair and Efficient Division among Families". arXiv:1811.06684 [econ.TH].
5. Segal-Halevi, Erel; Nitzan, Shmuel (December 2019). "Fair cake-cutting among families" (PDF). Social Choice and Welfare. 53 (4): 709–740. doi:10.1007/s00355-019-01210-9. S2CID 1602396.
6. Suksompong, Warut (1 March 2018). "Approximate maximin shares for groups of agents". Mathematical Social Sciences. 92: 40–47. arXiv:1706.09869. doi:10.1016/j.mathsocsci.2017.09.004. S2CID 3720438.
7. Kyropoulou, Maria; Suksompong, Warut; Voudouris, Alexandros A. (12 November 2020). "Almost envy-freeness in group resource allocation" (PDF). Theoretical Computer Science. 841: 110–123. doi:10.1016/j.tcs.2020.07.008. S2CID 220546580.
8. Plaut, Benjamin; Roughgarden, Tim (January 2020). "Almost Envy-Freeness with General Valuations". SIAM Journal on Discrete Mathematics. 34 (2): 1039–1068. arXiv:1707.04769. doi:10.1137/19M124397X. S2CID 216283014.
9. Manurangsi, Pasin; Suksompong, Warut (2022). "Almost envy-freeness for groups: Improved bounds via discrepancy theory". Theoretical Computer Science. 930: 179–195. arXiv:2105.01609. doi:10.1016/j.tcs.2022.07.022. S2CID 233714947.
10. Manurangsi, Pasin; Suksompong, Warut (1 September 2017). "Asymptotic existence of fair divisions for groups". Mathematical Social Sciences. 89: 100–108. arXiv:1706.08219. doi:10.1016/j.mathsocsci.2017.05.006. S2CID 47514346.
11. Segal-Halevi, Erel; Suksompong, Warut (December 2019). "Democratic fair allocation of indivisible goods". Artificial Intelligence. 277: 103167. arXiv:1709.02564. doi:10.1016/j.artint.2019.103167. S2CID 203034477.
12. Ghodsi, Mohammad; Latifian, Mohamad; Mohammadi, Arman; Moradian, Sadra; Seddighin, Masoud (2018). "Rent Division Among Groups". Combinatorial Optimization and Applications. Lecture Notes in Computer Science. Vol. 11346. pp. 577–591. doi:10.1007/978-3-030-04651-4_39. ISBN 978-3-030-04650-7.
13. Arnosti, Nick; Bonet, Carlos (2022). "Lotteries for Shared Experiences". Proceedings of the 23rd ACM Conference on Economics and Computation. pp. 1179–1180. arXiv:2205.10942. doi:10.1145/3490486.3538312. ISBN 978-1-4503-9150-4. S2CID 248986158.
14. Arbiv, Tal; Aumann, Yonatan (28 June 2022). "Fair and Truthful Giveaway Lotteries". Proceedings of the AAAI Conference on Artificial Intelligence. 36 (5): 4785–4792. doi:10.1609/aaai.v36i5.20405. S2CID 250288879.
| Wikipedia |
Unary operation
In mathematics, a unary operation is an operation with only one operand, i.e. a single input.[1] This is in contrast to binary operations, which use two operands.[2] An example is any function f : A → A, where A is a set. The function f is a unary operation on A.
Common notations are prefix notation (e.g. ¬, −), postfix notation (e.g. factorial n!), functional notation (e.g. sin x or sin(x)), and superscripts (e.g. transpose AT). Other notations exist as well, for example, in the case of the square root, a horizontal bar extending the square root sign over the argument can indicate the extent of the argument.
Examples
Absolute value
Obtaining the absolute value of a number is a unary operation. This function is defined as $|n|={\begin{cases}n,&{\mbox{if }}n\geq 0\\-n,&{\mbox{if }}n<0\end{cases}}$[3] where $|n|$ is the absolute value of $n$.
Negation
This is used to find the negative value of a single number. It is sometimes not regarded as a true unary operation, since $-n$ can be read as shorthand for the binary operation $0-n$.[4] Here are some examples:
$-(3)=-3$
$-(-3)=3$
Unary negative and positive
As unary operations have only one operand they are evaluated before other operations containing them. Here is an example using negation:
$3--2$
Here, the first '−' represents the binary subtraction operation, while the second '−' represents the unary negation of the 2 (or '−2' could be taken to mean the integer −2). Therefore, the expression is equal to:
$3-(-2)=5$
Technically, there is also a unary + operation but it is not needed since we assume an unsigned value to be positive:
$+2=2$
The unary + operation does not change the sign of a negative operation:
$+(-2)=-2$
In this case, a unary negation is needed to change the sign:
$-(-2)=+2$
Trigonometry
In trigonometry, the trigonometric functions, such as $\sin $, $\cos $, and $\tan $, can be seen as unary operations. This is because it is possible to provide only one term as input for these functions and retrieve a result. By contrast, binary operations, such as addition, require two different terms to compute a result.
JavaScript
In JavaScript, these operators are unary:[5]
• Increment: ++x, x++
• Decrement: --x, x--
• Positive: +x
• Negative: -x
• Ones' complement: ~x
• Logical negation: !x
C family of languages
In the C family of languages, the following operators are unary:[6][7]
• Increment: ++x, x++
• Decrement: --x, x--
• Address: &x
• Indirection: *x
• Positive: +x
• Negative: -x
• Ones' complement: ~x
• Logical negation: !x
• Sizeof: sizeof x, sizeof(type-name)
• Cast: (type-name) cast-expression
Unix shell (Bash)
In the Unix/Linux shell (bash/sh), the dollar sign "$" is a unary operator when used for parameter expansion, replacing the name of a variable by its (sometimes modified) value. For example:
• Simple expansion: $x
• Complex expansion: ${#x}
PowerShell
• Increment: ++$x, $x++
• Decrement: --$x, $x--
• Positive: +$x
• Negative: -$x
• Logical negation: !$x
• Invoke in current scope: .$x
• Invoke in new scope: &$x
• Cast: [type-name] cast-expression
• Cast: +$x
• Array: ,$array
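Languages with operator overloading additionally let user-defined types supply their own unary behaviour. In Python, for instance, the unary operators -x, +x, ~x and abs(x) dispatch to special methods; the Saturating type below is invented purely for illustration:

```python
class Saturating:
    """Toy integer clamped to [-10, 10] (illustrative only)."""
    def __init__(self, n):
        self.n = max(-10, min(10, n))
    def __neg__(self):        # unary negative: -x
        return Saturating(-self.n)
    def __pos__(self):        # unary positive: +x
        return Saturating(self.n)
    def __invert__(self):     # ones' complement: ~x
        return Saturating(~self.n)
    def __abs__(self):        # absolute value: abs(x)
        return Saturating(abs(self.n))

x = Saturating(-7)
print((-x).n, (+x).n, (~x).n, abs(x).n)   # 7 -7 6 7
```

Note that ~(-7) = 6 follows from the two's-complement identity ~n = -n - 1.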
See also
• Binary operation
• Iterated binary operation
• Ternary operation
• Arity
• Operation (mathematics)
• Operator (programming)
References
1. Weisstein, Eric W. "Unary Operation". mathworld.wolfram.com. Retrieved 2020-07-29.
2. Weisstein, Eric W. "Binary Operation". mathworld.wolfram.com. Retrieved 2020-07-29.
3. "Absolute value".
4. "Negative number".
5. "Unary Operators".
6. "Chapter 5. Expressions and Operators". C/C++ Language Reference. p. 109. Archived from the original on 2012-10-16. {{cite book}}: |website= ignored (help)
7. "Unary Operators - C Tutorials - Sanfoundry". www.sanfoundry.com.
External links
• Media related to Unary operations at Wikimedia Commons
| Wikipedia |
Unary function
A unary function is a function that takes one argument. A unary operator is a unary function whose range coincides with its domain; in general, a unary function's range may or may not coincide with its domain.
Examples
The successor function, denoted $\operatorname {succ} $, is a unary operator. Its domain and codomain are the natural numbers, its definition is as follows:
${\begin{aligned}\operatorname {succ} :\quad &\mathbb {N} \rightarrow \mathbb {N} \\&n\mapsto (n+1)\end{aligned}}$
In many programming languages such as C, executing this operation is denoted by postfixing ${\mathrel {+{+}}}$ to the operand, i.e. the use of $n{\mathrel {+{+}}}$ is equivalent to executing the assignment $n:=\operatorname {succ} (n)$.
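A minimal sketch of the successor function and its iteration (the helper names are illustrative):

```python
def succ(n: int) -> int:
    """Unary operator on the natural numbers: succ(n) = n + 1."""
    return n + 1

def iterate(f, times, x):
    """Apply the unary function f to x the given number of times."""
    for _ in range(times):
        x = f(x)
    return x

# Because succ's domain and range are both the natural numbers,
# it can be composed with itself arbitrarily often:
print(succ(0), iterate(succ, 5, 0))   # 1 5
```

The ability to compose a function with itself is exactly what distinguishes a unary operator from a general unary function.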
Many of the elementary functions are unary functions, including the trigonometric functions, logarithm with a specified base, exponentiation to a particular power or base, and hyperbolic functions.
See also
• Arity
• Binary function
• Binary operator
• List of mathematical functions
• Ternary operation
• Unary operation
References
• Foundations of Genetic Programming
| Wikipedia |
Unary language
In computational complexity theory, a unary language or tally language is a formal language (a set of strings) where all strings have the form 1k, where "1" can be any fixed symbol. For example, the language {1, 111, 1111} is unary, as is the language {1k | k is prime}. The complexity class of all such languages is sometimes called TALLY.
The name "unary" comes from the fact that a unary language is the encoding of a set of natural numbers in the unary numeral system. Since the universe of strings over any finite alphabet is a countable set, every language can be mapped to a unique set A of natural numbers; thus, every language has a unary version {1k | k in A}. Conversely, every unary language has a more compact binary version, the set of binary encodings of natural numbers k such that 1k is in the language.
Since complexity is usually measured in terms of the length of the input string, the unary version of a language can be "easier" than the original language. For example, if a language can be recognized in O(2n) time, its unary version can be recognized in O(n) time, because n has become exponentially larger. More generally, if a language can be recognized in O(f(n)) time and O(g(n)) space, its unary version can be recognized in O(n + f(log n)) time and O(g(log n)) space (we require O(n) time just to read the input string). However, if membership in a language is undecidable, then membership in its unary version is also undecidable.
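The correspondence between a language and its unary version can be sketched directly. In the toy deciders below, the set A is a handful of small primes, chosen only for illustration; the point is the exponential gap between the two input lengths for the same number:

```python
# A stand-in for any decidable set A of natural numbers (illustrative values).
PRIMES = {2, 3, 5, 7, 11, 13}

def in_unary_version(s: str) -> bool:
    """Membership of a string in the unary version {1^k | k in A}."""
    return set(s) <= {"1"} and len(s) in PRIMES

def in_binary_version(s: str) -> bool:
    """Membership in the binary version: binary encodings of members of A."""
    return int(s, 2) in PRIMES

k = 13
unary, binary = "1" * k, bin(k)[2:]
print(in_unary_version(unary), in_binary_version(binary),
      len(unary), len(binary))   # True True 13 4
```

For k = 13 the unary input has length 13 while the binary input has length 4; in general the unary encoding of k is exponentially longer than its binary encoding, which is why complexity bounds stated in terms of input length look smaller for unary versions.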
Relationships to other complexity classes
TALLY is contained in P/poly—the class of languages that can be recognized in polynomial time given an advice function that depends only on the input length. In this case, the required advice function is very simple—it returns a single bit for each input length k specifying whether 1k is in the language or not.
A unary language is necessarily a sparse language, since for each n it contains at most one value of length n and at most n values of length at most n, but not all sparse languages are unary; thus TALLY is contained in SPARSE.
It is believed that there are no NP-hard unary languages: if there exists a unary language that is NP-complete, then P = NP.[1]
This result can be extended to sparse languages.[2]
If L is a unary language, then L* (the Kleene star of L) is a regular language.[3]
Tally classes
The complexity class P1 is the class of the unary languages that can be recognized by a polynomial time Turing machine (given its input written in unary); it is the analogue of the class P. The analogue of NP in the unary setting is NP1. A counting class #P1, the analogue of #P, is also known.[4]
References
Notes
1. Piotr Berman. Relationship between density and deterministic complexity of NP-complete languages. In Proceedings of the 5th Conference on Automata, Languages and Programming, pp.63–71. Springer-Verlag. Lecture Notes in Computer Science #62. 1978.
2. S. R. Mahaney. Sparse complete sets for NP: Solution of a conjecture by Berman and Hartmanis. Journal of Computer and System Sciences 25:130-143. 1982.
3. Patrick. "Kleene star of an infinite unary language always yields a regular language". Computer Science Stack Exchange. Retrieved 19 October 2014.
4. Leslie Valiant, The Complexity of Enumeration and Reliability Problems. SIAM Journal on Computing, 8(3):410–421, 1979.
General references
• Lance Fortnow. Favorite Theorems: Small Sets. April 18, 2006. http://weblog.fortnow.com/2006/04/favorite-theorems-small-sets.html
• Complexity Zoo: TALLY
| Wikipedia |
Unary numeral system
The unary numeral system is the simplest numeral system to represent natural numbers:[1] to represent a number N, a symbol representing 1 is repeated N times.[2]
In the unary system, the number 0 (zero) is represented by the empty string, that is, the absence of a symbol. Numbers 1, 2, 3, 4, 5, 6, ... are represented in unary as 1, 11, 111, 1111, 11111, 111111, ...[3]
Unary is a bijective numeral system. However, because the value of a digit does not depend on its position, it is not a form of positional notation, and it is unclear whether it would be appropriate to say that it has a base (or "radix") of 1, as it behaves differently from all other bases.
The use of tally marks in counting is an application of the unary numeral system. For example, using the tally mark | (𝍷), the number 3 is represented as |||. In East Asian cultures, the number 3 is represented as 三, a character drawn with three strokes.[4] (One and two are represented similarly.) In China and Japan, the character 正, drawn with 5 strokes, is sometimes used to represent 5 as a tally.[5][6]
Unary numbers should be distinguished from repunits, which are also written as sequences of ones but have their usual decimal numerical interpretation.
Operations
Addition and subtraction are particularly simple in the unary system, as they involve little more than string concatenation.[7] The Hamming weight or population count operation that counts the number of nonzero bits in a sequence of binary values may also be interpreted as a conversion from unary to binary numbers.[8] However, multiplication is more cumbersome and has often been used as a test case for the design of Turing machines.[9][10][11]
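As a minimal illustration (assumed code, not from the article), unary addition really is string concatenation, and converting back to an ordinary integer is a population count of the "1" symbols:

```python
def to_unary(n: int) -> str:
    """Represent a natural number n as n repetitions of '1' (0 is the empty string)."""
    return "1" * n

def unary_add(a: str, b: str) -> str:
    """Addition in unary is just string concatenation."""
    return a + b

def from_unary(u: str) -> int:
    """Converting unary back to an integer is a population count of the '1' symbols."""
    return u.count("1")

print(unary_add(to_unary(3), to_unary(4)))  # '1111111'
print(from_unary("11111"))                  # 5
```

Subtraction can be sketched the same way, as removing a prefix of the appropriate length.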
Complexity
Compared to standard positional numeral systems, the unary system is inconvenient and hence is not used in practice for large calculations. It occurs in some decision problem descriptions in theoretical computer science (e.g. some P-complete problems), where it is used to "artificially" decrease the run-time or space requirements of a problem. For instance, the problem of integer factorization is suspected to require more than a polynomial function of the length of the input as run-time if the input is given in binary, but it only needs linear runtime if the input is presented in unary.[12][13] However, this is potentially misleading. Using a unary input is slower for any given number, not faster; the distinction is that a binary (or larger base) input is proportional to the base 2 (or larger base) logarithm of the number while unary input is proportional to the number itself. Therefore, while the run-time and space requirement in unary looks better as a function of the input size, it does not represent a more efficient solution.[14]
In computational complexity theory, unary numbering is used to distinguish strongly NP-complete problems from problems that are NP-complete but not strongly NP-complete. A problem in which the input includes some numerical parameters is strongly NP-complete if it remains NP-complete even when the size of the input is made artificially larger by representing the parameters in unary. For such a problem, there exist hard instances for which all parameter values are at most polynomially large.[15]
Applications
Unary numbering is used as part of some data compression algorithms such as Golomb coding.[16] It also forms the basis for the Peano axioms for formalizing arithmetic within mathematical logic.[17] A form of unary notation called Church encoding is used to represent numbers within lambda calculus.[18]
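One common convention for a unary code (an illustrative sketch; the exact convention varies by source) encodes a nonnegative integer n as n "1" bits followed by a terminating "0", giving a prefix-free code such as the one used for the quotient part of a Golomb code:

```python
def unary_encode(n: int) -> str:
    # n ones followed by a terminating zero: a prefix-free code
    return "1" * n + "0"

def unary_decode(bits: str) -> tuple[int, str]:
    # read up to the first '0'; return the decoded value and the remaining bits
    n = bits.index("0")
    return n, bits[n + 1:]

stream = unary_encode(4) + unary_encode(0) + unary_encode(2)
values = []
while stream:
    v, stream = unary_decode(stream)
    values.append(v)
print(values)  # [4, 0, 2]
```

Because each codeword is self-terminating, a concatenated stream can be decoded without any length markers.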
See also
• Unary coding
• One-hot encoding
References
1. Hodges, Andrew (2009), One to Nine: The Inner Life of Numbers, Anchor Canada, p. 14, ISBN 9780385672665.
2. Davis, Martin; Sigal, Ron; Weyuker, Elaine J. (1994), Computability, Complexity, and Languages: Fundamentals of Theoretical Computer Science, Computer Science and Scientific Computing (2nd ed.), Academic Press, p. 117, ISBN 9780122063824.
3. Hext, Jan (1990), Programming Structures: Machines and Programs, Programming Structures, vol. 1, Prentice Hall, p. 33, ISBN 9780724809400.
4. Woodruff, Charles E. (1909), "The Evolution of Modern Numerals from Ancient Tally Marks", American Mathematical Monthly, 16 (8–9): 125–33, doi:10.2307/2970818, JSTOR 2970818.
5. Hsieh, Hui-Kuang (1981), "Chinese Tally Mark", The American Statistician, 35 (3): 174, doi:10.2307/2683999, JSTOR 2683999
6. Lunde, Ken; Miura, Daisuke (January 27, 2016), "Proposal to Encode Five Ideographic Tally Marks", Unicode Consortium (PDF), Proposal L2/16-046
7. Sazonov, Vladimir Yu. (1995), "On feasible numbers", Logic and computational complexity (Indianapolis, IN, 1994), Lecture Notes in Comput. Sci., vol. 960, Springer, Berlin, pp. 30–51, doi:10.1007/3-540-60178-3_78, ISBN 978-3-540-60178-4, MR 1449655. See in particular p. 48.
8. Blaxell, David (1978), "Record linkage by bit pattern matching", in Hogben, David; Fife, Dennis W. (eds.), Computer Science and Statistics--Tenth Annual Symposium on the Interface, NBS Special Publication, vol. 503, U.S. Department of Commerce / National Bureau of Standards, pp. 146–156.
9. Hopcroft, John E.; Ullman, Jeffrey D. (1979), Introduction to Automata Theory, Languages, and Computation, Addison Wesley, Example 7.7, pp. 158–159, ISBN 978-0-201-02988-8.
10. Dewdney, A. K. (1989), The New Turing Omnibus: Sixty-Six Excursions in Computer Science, Computer Science Press, p. 209, ISBN 9780805071665.
11. Rendell, Paul (2015), "5.3 Larger Example TM: Unary Multiplication", Turing Machine Universality of the Game of Life, Emergence, Complexity and Computation, vol. 18, Springer, pp. 83–86, ISBN 9783319198422.
12. Arora, Sanjeev; Barak, Boaz (2007), "The computational model —and why it doesn't matter" (PDF), Computational Complexity: A Modern Approach (January 2007 draft ed.), Cambridge University Press, §17, pp. 32–33, retrieved May 10, 2017.
13. Feigenbaum, Joan (Fall 2012), CPSC 468/568 HW1 Solution Set (PDF), Computer Science Department, Yale University, retrieved 2014-10-21.
14. Moore, Cristopher; Mertens, Stephan (2011), The Nature of Computation, Oxford University Press, p. 29, ISBN 9780199233212.
15. Garey, M. R.; Johnson, D. S. (1978), "'Strong' NP-completeness results: Motivation, examples, and implications", Journal of the ACM, 25 (3): 499–508, doi:10.1145/322077.322090, MR 0478747, S2CID 18371269.
16. Golomb, S.W. (1966), "Run-length encodings", IEEE Transactions on Information Theory, IT-12 (3): 399–401, doi:10.1109/TIT.1966.1053907.
17. Magaud, Nicolas; Bertot, Yves (2002), "Changing data structures in type theory: a study of natural numbers", Types for proofs and programs (Durham, 2000), Lecture Notes in Comput. Sci., vol. 2277, Springer, Berlin, pp. 181–196, doi:10.1007/3-540-45842-5_12, ISBN 978-3-540-43287-6, MR 2044538.
18. Jansen, Jan Martin (2013), "Programming in the λ-calculus: from Church to Scott and back", The Beauty of Functional Code: Essays Dedicated to Rinus Plasmeijer on the Occasion of His 61st Birthday, Lecture Notes in Computer Science, vol. 8106, Springer-Verlag, pp. 168–180, doi:10.1007/978-3-642-40355-2_12, ISBN 978-3-642-40354-5.
External links
• OEIS sequence A000042 (Unary representation of natural numbers)
| Wikipedia |
Assignment problem
The assignment problem is a fundamental combinatorial optimization problem. In its most general form, the problem is as follows:
The problem instance has a number of agents and a number of tasks. Any agent can be assigned to perform any task, incurring some cost that may vary depending on the agent-task assignment. It is required to perform as many tasks as possible by assigning at most one agent to each task and at most one task to each agent, in such a way that the total cost of the assignment is minimized.
Alternatively, describing the problem using graph theory:
The assignment problem consists of finding, in a weighted bipartite graph, a matching of a given size, in which the sum of weights of the edges is minimum.
If the numbers of agents and tasks are equal, then the problem is called balanced assignment. Otherwise, it is called unbalanced assignment.[1] If the total cost of the assignment for all tasks is equal to the sum of the costs for each agent (or the sum of the costs for each task, which is the same thing in this case), then the problem is called linear assignment. Commonly, when speaking of the assignment problem without any additional qualification, the linear balanced assignment problem is meant.
Examples
Suppose that a taxi firm has three taxis (the agents) available, and three customers (the tasks) wishing to be picked up as soon as possible. The firm prides itself on speedy pickups, so for each taxi the "cost" of picking up a particular customer will depend on the time taken for the taxi to reach the pickup point. This is a balanced assignment problem. Its solution is whichever combination of taxis and customers results in the least total cost.
Now, suppose that there are four taxis available, but still only three customers. This is an unbalanced assignment problem. One way to solve it is to invent a fourth dummy task, perhaps called "sitting still doing nothing", with a cost of 0 for the taxi assigned to it. This reduces the problem to a balanced assignment problem, which can then be solved in the usual way and still give the best solution to the problem.
Similar adjustments can be done in order to allow more tasks than agents, tasks to which multiple agents must be assigned (for instance, a group of more customers than will fit in one taxi), or maximizing profit rather than minimizing cost.
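The dummy-task trick can be made concrete (the pickup costs below are made-up numbers for illustration): pad the 4×3 taxi/customer cost matrix with a zero-cost dummy column, then solve the resulting balanced 4×4 problem by exhaustive search.

```python
from itertools import permutations

# hypothetical pickup costs: 4 taxis (rows) x 3 customers (columns)
costs = [
    [9, 6, 4],
    [4, 4, 6],
    [3, 8, 5],
    [5, 2, 7],
]

# add a zero-cost dummy customer ("sitting still doing nothing") to square the matrix
balanced = [row + [0] for row in costs]
n = len(balanced)

# brute-force search over all assignments of the balanced problem
best = min(permutations(range(n)),
           key=lambda p: sum(balanced[i][p[i]] for i in range(n)))
total = sum(balanced[i][best[i]] for i in range(n))
for taxi, customer in enumerate(best):
    label = f"customer {customer}" if customer < 3 else "dummy (idle)"
    print(f"taxi {taxi} -> {label}")
print("total cost:", total)
```

The optimal solution of the padded problem leaves exactly one taxi assigned to the dummy task, which is the taxi that stays idle in the original unbalanced problem.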
Formal definition
The formal definition of the assignment problem (or linear assignment problem) is
Given two sets, A and T, together with a weight function C : A × T → R. Find a bijection f : A → T such that the cost function:
$\sum _{a\in A}C(a,f(a))$
is minimized.
Usually the weight function is viewed as a square real-valued matrix C, so that the cost function is written down as:
$\sum _{a\in A}C_{a,f(a)}$
The problem is "linear" because the cost function to be optimized as well as all the constraints contain only linear terms.
Algorithms
A naive solution for the assignment problem is to check all the assignments and calculate the cost of each one. This may be very inefficient since, with n agents and n tasks, there are n! (factorial of n) different assignments. Fortunately, there are many algorithms for solving the problem in time polynomial in n.
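A minimal brute-force solver (an illustrative sketch, with a made-up cost matrix) makes the n! blow-up concrete:

```python
from itertools import permutations

def brute_force_assignment(cost):
    """Try every one of the n! permutations; feasible only for tiny n."""
    n = len(cost)
    best_perm = min(permutations(range(n)),
                    key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best_perm), sum(cost[i][best_perm[i]] for i in range(n))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
perm, total = brute_force_assignment(cost)
print(perm, total)  # [1, 0, 2] 5
```

Already at n = 15 there are over 10^12 permutations, which is why the polynomial-time algorithms below matter.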
The assignment problem is a special case of the transportation problem, which is a special case of the minimum cost flow problem, which in turn is a special case of a linear program. While it is possible to solve any of these problems using the simplex algorithm, each specialization has a smaller solution space and thus more efficient algorithms designed to take advantage of its special structure.
Balanced assignment
In the balanced assignment problem, both parts of the bipartite graph have the same number of vertices, denoted by n.
One of the first polynomial-time algorithms for balanced assignment was the Hungarian algorithm. It is a global algorithm – it is based on improving a matching along augmenting paths (alternating paths between unmatched vertices). Its run-time complexity, when using Fibonacci heaps, is $O(mn+n^{2}\log n)$,[2] where m is the number of edges. This is currently the fastest run-time of a strongly polynomial algorithm for this problem. If all weights are integers, then the run-time can be improved to $O(mn+n^{2}\log \log n)$, but the resulting algorithm is only weakly-polynomial.[3] If the weights are integers, and all weights are at most C (where C>1 is some integer), then the problem can be solved in $O(m{\sqrt {n}}\log(n\cdot C))$ weakly-polynomial time in a method called weight scaling.[4][5][6]
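One compact way to implement the Hungarian method is the well-known O(n³) shortest-augmenting-path formulation with dual potentials; the sketch below follows that scheme (it is not the exact presentation of the cited papers):

```python
def hungarian(cost):
    """Minimum-cost perfect matching in an n x n cost matrix, O(n^3)."""
    n = len(cost)
    INF = float("inf")
    u = [0.0] * (n + 1)   # row (agent) potentials
    v = [0.0] * (n + 1)   # column (task) potentials
    p = [0] * (n + 1)     # p[j] = row matched to column j (0 = unmatched)
    way = [0] * (n + 1)   # back-pointers along the alternating path
    for i in range(1, n + 1):
        p[0] = i
        j0 = 0
        minv = [INF] * (n + 1)
        used = [False] * (n + 1)
        while True:                       # Dijkstra-like search for an augmenting path
            used[j0] = True
            i0, delta, j1 = p[j0], INF, 0
            for j in range(1, n + 1):
                if not used[j]:
                    cur = cost[i0 - 1][j - 1] - u[i0] - v[j]  # reduced cost
                    if cur < minv[j]:
                        minv[j], way[j] = cur, j0
                    if minv[j] < delta:
                        delta, j1 = minv[j], j
            for j in range(n + 1):        # update dual potentials
                if used[j]:
                    u[p[j]] += delta
                    v[j] -= delta
                else:
                    minv[j] -= delta
            j0 = j1
            if p[j0] == 0:
                break
        while j0:                         # augment along the found path
            j1 = way[j0]
            p[j0] = p[j1]
            j0 = j1
    match = [0] * n
    for j in range(1, n + 1):
        if p[j]:
            match[p[j] - 1] = j - 1
    return match, sum(cost[i][match[i]] for i in range(n))

match, total = hungarian([[4, 1, 3],
                          [2, 0, 5],
                          [3, 2, 2]])
print(match, total)  # [1, 0, 2] 5
```

The potentials u and v maintain a feasible dual solution throughout, which is what guarantees that each augmentation preserves optimality.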
In addition to the global methods, there are local methods which are based on finding local updates (rather than full augmenting paths). These methods have worse asymptotic runtime guarantees, but they often work better in practice. These algorithms are called auction algorithms, push-relabel algorithms, or preflow-push algorithms. Some of these algorithms were shown to be equivalent.[7]
Some of the local methods assume that the graph admits a perfect matching; if this is not the case, then some of these methods might run forever.[1]: 3 A simple technical way to solve this problem is to extend the input graph to a complete bipartite graph, by adding artificial edges with very large weights. These weights should exceed the weights of all existing matchings, to prevent artificial edges from appearing in the solution.
As shown by Mulmuley, Vazirani and Vazirani,[8] the problem of minimum weight perfect matching is converted to finding minors in the adjacency matrix of a graph. Using the isolation lemma, a minimum weight perfect matching in a graph can be found with probability at least 1⁄2. For a graph with n vertices, it requires $O(\log ^{2}(n))$ time.
Unbalanced assignment
In the unbalanced assignment problem, the larger part of the bipartite graph has n vertices and the smaller part has r<n vertices. There is also a constant s which is at most the cardinality of a maximum matching in the graph. The goal is to find a minimum-cost matching of size exactly s. The most common case is the case in which the graph admits a one-sided-perfect matching (i.e., a matching of size r), and s=r.
Unbalanced assignment can be reduced to a balanced assignment. The naive reduction is to add $n-r$ new vertices to the smaller part and connect them to the larger part using edges of cost 0. However, this requires $n(n-r)$ new edges. A more efficient reduction is called the doubling technique. Here, a new graph G' is built from two copies of the original graph G: a forward copy Gf and a backward copy Gb. The backward copy is "flipped", so that, in each side of G', there are now n+r vertices. Between the copies, we need to add two kinds of linking edges:[1]: 4–6
• Large-to-large: from each vertex in the larger part of Gf, add a zero-cost edge to the corresponding vertex in Gb.
• Small-to-small: if the original graph does not have a one-sided-perfect matching, then from each vertex in the smaller part of Gf, add a very-high-cost edge to the corresponding vertex in Gb.
All in all, at most $n+r$ new edges are required. The resulting graph always has a perfect matching of size $n+r$. A minimum-cost perfect matching in this graph must consist of minimum-cost maximum-cardinality matchings in Gf and Gb. The main problem with this doubling technique is that there is no speed gain when $r\ll n$.
Instead of using reduction, the unbalanced assignment problem can be solved by directly generalizing existing algorithms for balanced assignment. The Hungarian algorithm can be generalized to solve the problem in $O(ms+s^{2}\log r)$ strongly-polynomial time. In particular, if s=r then the runtime is $O(mr+r^{2}\log r)$. If the weights are integers, then Thorup's method can be used to get a runtime of $O(ms+s^{2}\log \log r)$.[1]: 6
Solution by linear programming
The assignment problem can be solved by presenting it as a linear program. For convenience we will present the maximization problem. Each edge (i,j), where i is in A and j is in T, has a weight $ w_{ij}$. For each edge $(i,j)$ we have a variable $ x_{ij}$. The variable is 1 if the edge is contained in the matching and 0 otherwise, so we set the domain constraints:
$0\leq x_{ij}\leq 1{\text{ for }}i,j\in A,T,\,$
$x_{ij}\in \mathbb {Z} {\text{ for }}i,j\in A,T.$
The total weight of the matching is: $\sum _{(i,j)\in A\times T}w_{ij}x_{ij}$. The goal is to find a maximum-weight perfect matching.
To guarantee that the variables indeed represent a perfect matching, we add constraints saying that each vertex is adjacent to exactly one edge in the matching, i.e.,
$\sum _{j\in T}x_{ij}=1{\text{ for }}i\in A,\,~~~\sum _{i\in A}x_{ij}=1{\text{ for }}j\in T,\,$
All in all we have the following LP:
${\text{maximize}}~~\sum _{(i,j)\in A\times T}w_{ij}x_{ij}$
${\text{subject to}}~~\sum _{j\in T}x_{ij}=1{\text{ for }}i\in A,\,~~~\sum _{i\in A}x_{ij}=1{\text{ for }}j\in T$
$0\leq x_{ij}\leq 1{\text{ for }}i,j\in A,T,\,$
$x_{ij}\in \mathbb {Z} {\text{ for }}i,j\in A,T.$
This is an integer linear program. However, we can solve it without the integrality constraints (i.e., drop the last constraint), using standard methods for solving continuous linear programs. While this formulation also allows fractional variable values, in this special case, the LP always has an optimal solution where the variables take integer values. This is because the constraint matrix of the fractional LP is totally unimodular – it satisfies the four conditions of Hoffman and Gale.
Other methods and approximation algorithms
Other approaches for the assignment problem exist and are reviewed by Duan and Pettie[9] (see Table II). Their work proposes an approximation algorithm for the assignment problem (and the more general maximum weight matching problem), which runs in linear time for any fixed error bound.
Generalization
When phrased as a graph theory problem, the assignment problem can be extended from bipartite graphs to arbitrary graphs. The corresponding problem, of finding a matching in a weighted graph where the sum of weights is maximized, is called the maximum weight matching problem.
Another generalization of the assignment problem is extending the number of sets to be matched from two to many. Rather than matching agents to tasks, the problem is extended to matching agents to tasks to time intervals to locations. This results in the multidimensional assignment problem (MAP).
See also
• Auction algorithm
• Generalized assignment problem
• Linear bottleneck assignment problem
• Monge-Kantorovich transportation problem, a more general formulation
• National Resident Matching Program
• Quadratic assignment problem
• Rank-maximal matching
• Secretary problem
• Stable marriage problem
• Stable roommates problem
• Weapon target assignment problem
• House allocation problem
• Multidimensional assignment problem (MAP)
References and further reading
1. Lyle Ramshaw, Robert E. Tarjan (2012). "On minimum-cost assignments in unbalanced bipartite graphs" (PDF). HP research labs.
2. Fredman, Michael L.; Tarjan, Robert Endre (1987-07-01). "Fibonacci Heaps and Their Uses in Improved Network Optimization Algorithms". J. ACM. 34 (3): 596–615. doi:10.1145/28869.28874. ISSN 0004-5411. S2CID 7904683.
3. Thorup, Mikkel (2004-11-01). "Integer priority queues with decrease key in constant time and the single source shortest paths problem". Journal of Computer and System Sciences. Special Issue on STOC 2003. 69 (3): 330–353. doi:10.1016/j.jcss.2004.04.003. ISSN 0022-0000.
4. Gabow, H.; Tarjan, R. (1989-10-01). "Faster Scaling Algorithms for Network Problems". SIAM Journal on Computing. 18 (5): 1013–1036. doi:10.1137/0218069. ISSN 0097-5397.
5. Goldberg, A.; Kennedy, R. (1997-11-01). "Global Price Updates Help". SIAM Journal on Discrete Mathematics. 10 (4): 551–572. doi:10.1137/S0895480194281185. ISSN 0895-4801.
6. Orlin, James B.; Ahuja, Ravindra K. (1992-02-01). "New scaling algorithms for the assignment and minimum mean cycle problems". Mathematical Programming. 54 (1–3): 41–56. doi:10.1007/BF01586040. ISSN 0025-5610. S2CID 18213947.
7. Alfaro, Carlos A.; Perez, Sergio L.; Valencia, Carlos E.; Vargas, Marcos C. (2022-06-01). "The assignment problem revisited". Optimization Letters. 16 (5): 1531–1548. doi:10.1007/s11590-021-01791-4. ISSN 1862-4480.
8. Mulmuley, Ketan; Vazirani, Umesh; Vazirani, Vijay (1987). "Matching is as easy as matrix inversion". Combinatorica. 7 (1): 105–113. doi:10.1007/BF02579206. S2CID 47370049.
9. Duan, Ran; Pettie, Seth (2014-01-01). "Linear-Time Approximation for Maximum Weight Matching" (PDF). Journal of the ACM. 61: 1–23. doi:10.1145/2529989. S2CID 207208641.
• Brualdi, Richard A. (2006). Combinatorial matrix classes. Encyclopedia of Mathematics and Its Applications. Vol. 108. Cambridge: Cambridge University Press. ISBN 978-0-521-86565-4. Zbl 1106.05001.
• Burkard, Rainer; M. Dell'Amico; S. Martello (2012). Assignment Problems (Revised reprint). SIAM. ISBN 978-1-61197-222-1.
• Bertsekas, Dimitri (1998). Network Optimization: Continuous and Discrete Models. Athena Scientific. ISBN 978-1-886529-02-1.
| Wikipedia |
Unbiased estimation of standard deviation
In statistics and in particular statistical theory, unbiased estimation of a standard deviation is the calculation from a statistical sample of an estimated value of the standard deviation (a measure of statistical dispersion) of a population of values, in such a way that the expected value of the calculation equals the true value. Except in some important situations, outlined later, the task has little relevance to applications of statistics since its need is avoided by standard procedures, such as the use of significance tests and confidence intervals, or by using Bayesian analysis.
However, for statistical theory, it provides an exemplar problem in the context of estimation theory which is both simple to state and for which results cannot be obtained in closed form. It also provides an example where imposing the requirement for unbiased estimation might be seen as just adding inconvenience, with no real benefit.
Motivation
In statistics, the standard deviation of a population of numbers is often estimated from a random sample drawn from the population. This is the sample standard deviation, which is defined by
$s={\sqrt {\frac {\sum _{i=1}^{n}(x_{i}-{\overline {x}})^{2}}{n-1}}},$
where $\{x_{1},x_{2},\ldots ,x_{n}\}$ is the sample (formally, realizations from a random variable X) and ${\overline {x}}$ is the sample mean.
One way of seeing that this is a biased estimator of the standard deviation of the population is to start from the result that s2 is an unbiased estimator for the variance σ2 of the underlying population if that variance exists and the sample values are drawn independently with replacement. The square root is a nonlinear function, and only linear functions commute with taking the expectation. Since the square root is a strictly concave function, it follows from Jensen's inequality that the square root of the sample variance is an underestimate.
The use of n − 1 instead of n in the formula for the sample variance is known as Bessel's correction, which corrects the bias in the estimation of the population variance, and some, but not all of the bias in the estimation of the population standard deviation.
It is not possible to find an estimate of the standard deviation which is unbiased for all population distributions, as the bias depends on the particular distribution. Much of the following relates to estimation assuming a normal distribution.
Bias correction
Results for the normal distribution
When the random variable is normally distributed, a minor correction exists to eliminate the bias. To derive the correction, note that for normally distributed X, Cochran's theorem implies that $(n-1)s^{2}/\sigma ^{2}$ has a chi square distribution with $n-1$ degrees of freedom and thus its square root, ${\sqrt {n-1}}s/\sigma $ has a chi distribution with $n-1$ degrees of freedom. Consequently, calculating the expectation of this last expression and rearranging constants,
$\operatorname {E} [s]=c_{4}(n)\sigma $
where the correction factor $c_{4}(n)$ is the scale mean of the chi distribution with $n-1$ degrees of freedom, $\mu _{1}/{\sqrt {n-1}}$. This depends on the sample size n, and is given as follows:[1]
$c_{4}(n)={\sqrt {\frac {2}{n-1}}}{\frac {\Gamma \left({\frac {n}{2}}\right)}{\Gamma \left({\frac {n-1}{2}}\right)}}=1-{\frac {1}{4n}}-{\frac {7}{32n^{2}}}-{\frac {19}{128n^{3}}}+O(n^{-4})$
where Γ(·) is the gamma function. An unbiased estimator of σ can be obtained by dividing $s$ by $c_{4}(n)$. As $n$ grows large it approaches 1, and even for smaller values the correction is minor. The figure shows a plot of $c_{4}(n)$ versus sample size. The table below gives numerical values of $c_{4}(n)$ and algebraic expressions for some values of $n$; more complete tables may be found in most textbooks on statistical quality control.
Sample size Expression of $c_{4}$ Numerical value
2 ${\sqrt {\frac {2}{\pi }}}$ 0.7978845608
3 ${\frac {\sqrt {\pi }}{2}}$ 0.8862269255
4 $2{\sqrt {\frac {2}{3\pi }}}$ 0.9213177319
5 ${\frac {3}{4}}{\sqrt {\frac {\pi }{2}}}$ 0.9399856030
6 ${\frac {8}{3}}{\sqrt {\frac {2}{5\pi }}}$ 0.9515328619
7 ${\frac {5{\sqrt {3\pi }}}{16}}$ 0.9593687891
8 ${\frac {16}{5}}{\sqrt {\frac {2}{7\pi }}}$ 0.9650304561
9 ${\frac {35{\sqrt {\pi }}}{64}}$ 0.9693106998
10 ${\frac {128}{105}}{\sqrt {\frac {2}{\pi }}}$ 0.9726592741
100 0.9974779761
1000 0.9997497811
10000 0.9999749978
2k ${\sqrt {\frac {2}{\pi (2k-1)}}}{\frac {2^{2k-2}(k-1)!^{2}}{(2k-2)!}}$
2k+1 ${\sqrt {\frac {\pi }{k}}}{\frac {(2k-1)!}{2^{2k-1}(k-1)!^{2}}}$
It is important to keep in mind that this correction only produces an unbiased estimator for normally and independently distributed X. When this condition is satisfied, another result about s involving $c_{4}(n)$ is that the standard error of s is[2][3] $\sigma {\sqrt {1-c_{4}^{2}}}$, while the standard error of the unbiased estimator is $\sigma {\sqrt {c_{4}^{-2}-1}}.$
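The correction factor is easy to evaluate directly from its gamma-function definition; the sketch below (assumed code, using `lgamma` to avoid overflow for large n) reproduces the tabulated values:

```python
from math import lgamma, exp, sqrt, pi

def c4(n: int) -> float:
    """c4(n) = sqrt(2/(n-1)) * Gamma(n/2) / Gamma((n-1)/2).

    Computed via log-gamma so that large n does not overflow."""
    return sqrt(2.0 / (n - 1)) * exp(lgamma(n / 2) - lgamma((n - 1) / 2))

print(c4(2))    # ~0.7978845608 (= sqrt(2/pi))
print(c4(10))   # ~0.9726592741
print(c4(100))  # ~0.9974779761
```

Dividing the sample standard deviation s by `c4(n)` then gives the unbiased estimator described above.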
Rule of thumb for the normal distribution
If calculation of the function c4(n) appears too difficult, there is a simple rule of thumb[4] to take the estimator
${\hat {\sigma }}={\sqrt {{\frac {1}{n-1.5}}\sum _{i=1}^{n}(x_{i}-{\overline {x}})^{2}}}$
The formula differs from the familiar expression for s2 only by having n − 1.5 instead of n − 1 in the denominator. This expression is only approximate; in fact,
$\operatorname {E} \left[{\hat {\sigma }}\right]=\sigma \cdot \left(1+{\frac {1}{16n^{2}}}+{\frac {3}{16n^{3}}}+O(n^{-4})\right).$
The bias is relatively small: say, for $n=3$ it is equal to 2.3%, and for $n=9$ the bias is already 0.1%.
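Under normality the expected value of the rule-of-thumb estimator can be computed exactly from c4(n), since σ̂ = s·sqrt((n−1)/(n−1.5)) and E[s] = c4(n)·σ; a quick check (an illustrative sketch) reproduces the 2.3% and 0.1% figures quoted above:

```python
from math import lgamma, exp, sqrt

def c4(n: int) -> float:
    return sqrt(2.0 / (n - 1)) * exp(lgamma(n / 2) - lgamma((n - 1) / 2))

def rule_of_thumb_bias(n: int) -> float:
    # E[sigma_hat] / sigma for the n - 1.5 denominator, normal data
    return sqrt((n - 1) / (n - 1.5)) * c4(n)

print(rule_of_thumb_bias(3))  # ~1.023 (2.3% bias)
print(rule_of_thumb_bias(9))  # ~1.001 (0.1% bias)
```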
Other distributions
In cases where statistically independent data are modelled by a parametric family of distributions other than the normal distribution, the population standard deviation will, if it exists, be a function of the parameters of the model. One general approach to estimation would be maximum likelihood. Alternatively, it may be possible to use the Rao–Blackwell theorem as a route to finding a good estimate of the standard deviation. In neither case would the estimates obtained usually be unbiased. Notionally, theoretical adjustments might be obtainable to lead to unbiased estimates but, unlike those for the normal distribution, these would typically depend on the estimated parameters.
If the requirement is simply to reduce the bias of an estimated standard deviation, rather than to eliminate it entirely, then two practical approaches are available, both within the context of resampling. These are jackknifing and bootstrapping. Both can be applied either to parametrically based estimates of the standard deviation or to the sample standard deviation.
For non-normal distributions an approximate (up to O(n−1) terms) formula for the unbiased estimator of the standard deviation is
${\hat {\sigma }}={\sqrt {{\frac {1}{n-1.5-{\tfrac {1}{4}}\gamma _{2}}}\sum _{i=1}^{n}\left(x_{i}-{\overline {x}}\right)^{2}}},$
where γ2 denotes the population excess kurtosis. The excess kurtosis may be either known beforehand for certain distributions, or estimated from the data.
Effect of autocorrelation (serial correlation)
The material above, to stress the point again, applies only to independent data. However, real-world data often does not meet this requirement; it is autocorrelated (also known as serial correlation). As one example, the successive readings of a measurement instrument that incorporates some form of “smoothing” (more correctly, low-pass filtering) process will be autocorrelated, since any particular value is calculated from some combination of the earlier and later readings.
Estimates of the variance, and standard deviation, of autocorrelated data will be biased. The expected value of the sample variance is[5]
${\rm {E}}\left[s^{2}\right]=\sigma ^{2}\left[1-{\frac {2}{n-1}}\sum _{k=1}^{n-1}\left(1-{\frac {k}{n}}\right)\rho _{k}\right]$
where n is the sample size (number of measurements) and $\rho _{k}$ is the autocorrelation function (ACF) of the data. (Note that the expression in the brackets is simply one minus the average expected autocorrelation for the readings.) If the ACF consists of positive values then the estimate of the variance (and its square root, the standard deviation) will be biased low. That is, the actual variability of the data will be greater than that indicated by an uncorrected variance or standard deviation calculation. It is essential to recognize that, if this expression is to be used to correct for the bias, by dividing the estimate $s^{2}$ by the quantity in brackets above, then the ACF must be known analytically, not via estimation from the data. This is because the estimated ACF will itself be biased.[6]
Example of bias in standard deviation
To illustrate the magnitude of the bias in the standard deviation, consider a dataset that consists of sequential readings from an instrument that uses a specific digital filter whose ACF is known to be given by
$\rho _{k}=(1-\alpha )^{k}$
where α is the parameter of the filter, and it takes values from zero to unity. Thus the ACF is positive and geometrically decreasing.
The figure shows the ratio of the estimated standard deviation to its known value (which can be calculated analytically for this digital filter), for several settings of α as a function of sample size n. Changing α alters the variance reduction ratio of the filter, which is known to be
${\rm {VRR}}={\frac {\alpha }{2-\alpha }}$
so that smaller values of α result in more variance reduction, or “smoothing.” The bias is indicated by values on the vertical axis different from unity; that is, if there were no bias, the ratio of the estimated to known standard deviation would be unity. Clearly, for modest sample sizes there can be significant bias (a factor of two, or more).
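For this geometric ACF the bracketed factor in the E[s²] expression above can be evaluated directly (an illustrative sketch); its square root is roughly the ratio of estimated to true standard deviation plotted in the figure:

```python
def bias_factor(alpha: float, n: int) -> float:
    """Bracketed factor in E[s^2] = sigma^2 * [1 - (2/(n-1)) * sum (1 - k/n) rho_k]
    for the geometric ACF rho_k = (1 - alpha)**k."""
    return 1 - 2 / (n - 1) * sum(
        (1 - k / n) * (1 - alpha) ** k for k in range(1, n))

for alpha in (0.9, 0.5, 0.1):
    for n in (10, 100):
        print(f"alpha={alpha} n={n} sd ratio ~ {bias_factor(alpha, n) ** 0.5:.3f}")
```

Smaller α (heavier smoothing) and smaller n both push the factor further below one, matching the behavior described for the figure.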
Variance of the mean
It is often of interest to estimate the variance or standard deviation of an estimated mean rather than the variance of a population. When the data are autocorrelated, this has a direct effect on the theoretical variance of the sample mean, which is[7]
${\rm {Var}}\left[{\overline {x}}\right]={\frac {\sigma ^{2}}{n}}\left[1+2\sum _{k=1}^{n-1}{\left(1-{\frac {k}{n}}\right)\rho _{k}}\right].$
The variance of the sample mean can then be estimated by substituting an estimate of σ2. One such estimate can be obtained from the equation for E[s2] given above. First define the following constants, assuming, again, a known ACF:
$\gamma _{1}\equiv 1-{\frac {2}{n-1}}\sum _{k=1}^{n-1}{\left(1-{\frac {k}{n}}\right)}\rho _{k}$
$\gamma _{2}\equiv 1+2\sum _{k=1}^{n-1}{\left(1-{\frac {k}{n}}\right)}\rho _{k}$
so that
${\rm {E}}\left[s^{2}\right]=\sigma ^{2}\gamma _{1}\Rightarrow {\rm {E}}\left[{\frac {s^{2}}{\gamma _{1}}}\right]=\sigma ^{2}$
This says that the expected value of the quantity obtained by dividing the observed sample variance by the correction factor $\gamma _{1}$ gives an unbiased estimate of the variance. Similarly, re-writing the expression above for the variance of the mean,
${\rm {Var}}\left[{\overline {x}}\right]={\frac {\sigma ^{2}}{n}}\gamma _{2}$
and substituting the estimate for $\sigma ^{2}$ gives[8]
${\rm {Var}}\left[{\overline {x}}\right]={\rm {E}}\left[{\frac {s^{2}}{\gamma _{1}}}\left({\frac {\gamma _{2}}{n}}\right)\right]={\rm {E}}\left[{\frac {s^{2}}{n}}\left\{{\frac {n-1}{{\frac {n}{\gamma _{2}}}-1}}\right\}\right]$
which is an unbiased estimator of the variance of the mean in terms of the observed sample variance and known quantities. If the autocorrelations $\rho _{k}$ are identically zero, this expression reduces to the well-known result for the variance of the mean for independent data. The effect of the expectation operator in these expressions is that the equality holds in the mean (i.e., on average).
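Given a known ACF, the two correction factors and the resulting variance-of-the-mean estimate can be computed in a few lines (a sketch with made-up numbers; `s2` is a hypothetical observed sample variance):

```python
def gammas(rho, n):
    """gamma1 and gamma2 for a known ACF rho(k), per the expressions above."""
    s = sum((1 - k / n) * rho(k) for k in range(1, n))
    return 1 - 2 / (n - 1) * s, 1 + 2 * s

# example: geometric ACF with alpha = 0.5, sample of n = 20 readings
alpha, n = 0.5, 20
g1, g2 = gammas(lambda k: (1 - alpha) ** k, n)

s2 = 4.0                              # observed sample variance (made-up number)
sigma2_hat = s2 / g1                  # bias-corrected estimate of sigma^2
var_mean_hat = sigma2_hat * g2 / n    # estimated variance of the sample mean
print(g1, g2, sigma2_hat, var_mean_hat)
```

With positive autocorrelation, γ1 < 1 < γ2, so the naive s²/n underestimates the variance of the mean on both counts.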
Estimating the standard deviation of the population
Having the expressions above involving the variance of the population, and of an estimate of the mean of that population, it would seem logical to simply take the square root of these expressions to obtain unbiased estimates of the respective standard deviations. However it is the case that, since expectations are integrals,
${\rm {E}}[s]\neq {\sqrt {{\rm {E}}\left[s^{2}\right]}}\neq \sigma {\sqrt {\gamma _{1}}}$
Instead, assume a function θ exists such that an unbiased estimator of the standard deviation can be written
${\rm {E}}[s]=\sigma \theta {\sqrt {\gamma _{1}}}\Rightarrow {\hat {\sigma }}={\frac {s}{\theta {\sqrt {\gamma _{1}}}}}$
and θ depends on the sample size n and the ACF. In the case of NID (normally and independently distributed) data, the radicand is unity and θ is just the c4 function given in the first section above. As with c4, θ approaches unity as the sample size increases (as does γ1).
It can be demonstrated via simulation modeling that ignoring θ (that is, taking it to be unity) and using
${\rm {E}}[s]\approx \sigma {\sqrt {\gamma _{1}}}\Rightarrow {\hat {\sigma }}\approx {\frac {s}{\sqrt {\gamma _{1}}}}$
removes all but a few percent of the bias caused by autocorrelation, making this a reduced-bias estimator, rather than an unbiased estimator. In practical measurement situations, this reduction in bias can be significant, and useful, even if some relatively small bias remains. The figure above, showing an example of the bias in the standard deviation vs. sample size, is based on this approximation; the actual bias would be somewhat larger than indicated in those graphs since the transformation bias θ is not included there.
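The bias reduction claimed above can be checked with a small simulation. The following sketch is not from the source: it uses AR(1) noise with arbitrary illustrative choices $\varphi =0.7$ and $n=20$ (so $\rho _{k}=\varphi ^{k}$ is the assumed ACF), and compares the raw $s$ with the corrected $s/{\sqrt {\gamma _{1}}}$ against the true marginal $\sigma =1$, ignoring $\theta $ as described:

```python
import math
import random

random.seed(0)
n, phi, reps = 20, 0.7, 20000
# gamma_1 for the AR(1) ACF rho_k = phi**k
g1 = 1 - (2 / (n - 1)) * sum((1 - k / n) * phi ** k for k in range(1, n))

mean_s = mean_corrected = 0.0
for _ in range(reps):
    x, prev = [], random.gauss(0, 1)  # start in the stationary distribution
    for _ in range(n):
        prev = phi * prev + math.sqrt(1 - phi ** 2) * random.gauss(0, 1)
        x.append(prev)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
    mean_s += s / reps
    mean_corrected += s / math.sqrt(g1) / reps

# sigma = 1 here; s alone is badly biased low, s / sqrt(gamma_1) much less so
print(mean_s, mean_corrected)
```

The remaining few percent of bias in the corrected estimate corresponds to the neglected transformation factor $\theta $.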
Estimating the standard deviation of the sample mean
The unbiased variance of the mean in terms of the population variance and the ACF is given by
${\rm {Var}}\left[{\overline {x}}\right]={\frac {\sigma ^{2}}{n}}\gamma _{2}$
and since there are no expected values here, in this case the square root can be taken, so that
$\sigma _{\overline {x}}={\frac {\sigma }{\sqrt {n}}}{\sqrt {\gamma _{2}}}$
Using the unbiased estimate expression above for σ, an estimate of the standard deviation of the mean will then be
${\hat {\sigma }}_{\overline {x}}={\frac {s}{\theta {\sqrt {n}}}}{\frac {\sqrt {\gamma _{2}}}{\sqrt {\gamma _{1}}}}$
If the data are NID, so that the ACF vanishes, this reduces to
${\hat {\sigma }}_{\overline {x}}={\frac {s}{c_{4}{\sqrt {n}}}}$
In the presence of a nonzero ACF, ignoring the function θ as before leads to the reduced-bias estimator
${\hat {\sigma }}_{\overline {x}}\approx {\frac {s}{\sqrt {n}}}{\frac {\sqrt {\gamma _{2}}}{\sqrt {\gamma _{1}}}}={\frac {s}{\sqrt {n}}}{\sqrt {\frac {n-1}{{\frac {n}{\gamma _{2}}}-1}}}$
which again can be demonstrated to remove a useful majority of the bias.
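A short Python sketch of this reduced-bias standard error of the mean (the function name is mine; the ACF is again assumed known). The two forms in the last equation agree because $\gamma _{1}=(n-\gamma _{2})/(n-1)$, which follows directly from the definitions of the two correction factors:

```python
import math

def reduced_bias_sem(x, rho):
    # sigma-hat of x-bar: (s / sqrt(n)) * sqrt(gamma_2 / gamma_1), theta ignored
    n = len(x)
    m = sum(x) / n
    s = math.sqrt(sum((v - m) ** 2 for v in x) / (n - 1))
    S = sum((1 - k / n) * rho(k) for k in range(1, n))
    g1, g2 = 1 - 2 * S / (n - 1), 1 + 2 * S
    return (s / math.sqrt(n)) * math.sqrt(g2 / g1)
```

When the ACF vanishes this reduces to the usual $s/{\sqrt {n}}$.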
See also
• Bessel's correction
• Estimation of covariance matrices
• Sample mean and sample covariance
References
1. Ben W. Bolch, "More on unbiased estimation of the standard deviation", The American Statistician, 22(3), p. 27 (1968)
2. Duncan, A. J., Quality Control and Industrial Statistics 4th Ed., Irwin (1974) ISBN 0-256-01558-9, p.139
• N.L. Johnson, S. Kotz, and N. Balakrishnan, Continuous Univariate Distributions, Volume 1, 2nd edition, Wiley and sons, 1994. ISBN 0-471-58495-9. Chapter 13, Section 8.2
3. Richard M. Brugger, "A Note on Unbiased Estimation on the Standard Deviation", The American Statistician (23) 4 p. 32 (1969)
4. Law and Kelton, Simulation Modeling and Analysis, 2nd Ed. McGraw-Hill (1991), p.284, ISBN 0-07-036698-5. This expression can be derived from its original source in Anderson, The Statistical Analysis of Time Series, Wiley (1971), ISBN 0-471-04745-7, p.448, Equation 51.
5. Law and Kelton, p.286. This bias is quantified in Anderson, p.448, Equations 52–54.
6. Law and Kelton, p.285. This equation can be derived from Theorem 8.2.3 of Anderson. It also appears in Box, Jenkins, Reinsel, Time Series Analysis: Forecasting and Control, 4th Ed. Wiley (2008), ISBN 978-0-470-27284-8, p.31.
7. Law and Kelton, p.285
• Douglas C. Montgomery and George C. Runger, Applied Statistics and Probability for Engineers, 3rd edition, Wiley and sons, 2003. (see Sections 7–2.2 and 16–5)
External links
• A Java interactive graphic showing the Helmert PDF from which the bias correction factors are derived.
• Monte-Carlo simulation demo for unbiased estimation of standard deviation.
• http://www.itl.nist.gov/div898/handbook/pmc/section3/pmc32.htm What are Variables Control Charts?
This article incorporates public domain material from the National Institute of Standards and Technology.
Spectral triple
In noncommutative geometry and related branches of mathematics and mathematical physics, a spectral triple is a set of data which encodes a geometric phenomenon in an analytic way. The definition typically involves a Hilbert space, an algebra of operators on it and an unbounded self-adjoint operator, endowed with supplemental structures. It was conceived by Alain Connes who was motivated by the Atiyah-Singer index theorem and sought its extension to 'noncommutative' spaces. Some authors refer to this notion as unbounded K-cycles or as unbounded Fredholm modules.
Motivation
A motivating example of spectral triple is given by the algebra of smooth functions on a compact spin manifold, acting on the Hilbert space of L2-spinors, accompanied by the Dirac operator associated to the spin structure. From the knowledge of these objects one is able to recover the original manifold as a metric space: the manifold as a topological space is recovered as the spectrum of the algebra, while the (absolute value of) Dirac operator retains the metric.[1] On the other hand, the phase part of the Dirac operator, in conjunction with the algebra of functions, gives a K-cycle which encodes index-theoretic information. The local index formula[2] expresses the pairing of the K-group of the manifold with this K-cycle in two ways: the 'analytic/global' side involves the usual trace on the Hilbert space and commutators of functions with the phase operator (which corresponds to the 'index' part of the index theorem), while the 'geometric/local' side involves the Dixmier trace and commutators with the Dirac operator (which corresponds to the 'characteristic class integration' part of the index theorem).
Extensions of the index theorem can be considered in various settings, typically when one has an action of a group on the manifold, or when the manifold is endowed with a foliation structure, among others. In those cases the algebraic system of the 'functions' which expresses the underlying geometric object is no longer commutative, but one may be able to find the space of square integrable spinors (or, sections of a Clifford module) on which the algebra acts, and the corresponding 'Dirac' operator on it satisfying certain boundedness of commutators implied by the pseudo-differential calculus.
Definition
An odd spectral triple is a triple (A, H, D) consisting of a Hilbert space H, an algebra A of operators on H (usually closed under taking adjoints) and a densely defined self adjoint operator D satisfying ‖[a, D]‖ < ∞ for any a ∈ A. An even spectral triple is an odd spectral triple with a Z/2Z-grading on H, such that the elements in A are even while D is odd with respect to this grading. One could also say that an even spectral triple is given by a quartet (A, H, D, γ) such that γ is a self adjoint unitary on H satisfying a γ = γ a for any a in A and D γ = - γ D.
A finitely summable spectral triple is a spectral triple (A, H, D) such that a.D for any a in A has a compact resolvent which belongs to the class of Lp+-operators for a fixed p (when A contains the identity operator on H, it is enough to require D−1 in Lp+(H)). When this condition is satisfied, the triple (A, H, D) is said to be p-summable. A spectral triple is said to be θ-summable when e−tD2 is of trace class for any t > 0.[1]
Let δ(T) denote the commutator of |D| with an operator T on H. A spectral triple is said to be regular when the elements in A and the operators of the form [a, D] for a in A are in the domain of the iterates δn of δ.
When a spectral triple (A, H, D) is p-summable, one may define its zeta function ζD(s) = Tr(|D|−s); more generally there are zeta functions ζb(s) = Tr(b|D|−s) for each element b in the algebra B generated by δn(A) and δn([a, D]) for positive integers n. They are related to the heat kernel exp(-t|D|) by a Mellin transform. The collection of the poles of the analytic continuation of ζb for b in B is called the dimension spectrum of (A, H, D).
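For the motivating commutative example of the circle $S^{1}$, with $D=-i\,d/d\theta$ acting on $L^{2}(S^{1})$ (for the trivial spin structure, so that the eigenvalues of D are the integers; the zero mode is dropped), one has $\zeta _{D}(s)=\sum _{n\neq 0}|n|^{-s}=2\zeta (s)$, whose single simple pole at s = 1 recovers the dimension of the circle. A quick numerical sanity check of the truncated sum (the truncation point is an arbitrary choice):

```python
import math

def zeta_D(s, N=200_000):
    # truncated zeta function of |D| for the circle triple:
    # eigenvalues of D = -i d/d(theta) are n in Z, zero mode dropped
    return sum(2.0 / n ** s for n in range(1, N + 1))

# zeta_D(2) should approach 2*zeta(2) = pi^2 / 3
print(zeta_D(2.0), math.pi ** 2 / 3)
```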
A real spectral triple is a spectral triple (A, H, D) accompanied with an anti-linear involution J on H, satisfying [a, JbJ] = 0 for a, b in A. In the even case it is usually assumed that J is even with respect to the grading on H.
Important concepts
Given a spectral triple (A, H, D), one can apply several important operations to it. The most fundamental one is the polar decomposition D = F|D| of D into a self adjoint unitary operator F (the 'phase' of D) and a densely defined positive operator |D| (the 'metric' part).
Metric on the pure state space
The positive operator |D| defines a metric on the set of pure states on the norm closure of A.
Pairing with K-theory
The self adjoint unitary F gives a map of the K-theory of A into integers by taking Fredholm index as follows. In the even case, each projection e in A decomposes as e0 ⊕ e1 under the grading and e1Fe0 becomes a Fredholm operator from e0H to e1H. Thus e → Ind e1Fe0 defines an additive mapping of K0(A) to Z. In the odd case the eigenspace decomposition of F gives a grading on H, and each invertible element in A gives a Fredholm operator (F + 1) u (F − 1)/4 from (F − 1)H to (F + 1)H. Thus u → Ind (F + 1) u (F − 1)/4 gives an additive mapping from K1(A) to Z.
When the spectral triple is finitely summable, one may write the above indices using the (super) trace, and a product of F, e (resp. u) and the commutator of F with e (resp. u). This can be encoded as a (p + 1)-functional on A satisfying some algebraic conditions, and gives Hochschild / cyclic cohomology cocycles, which describe the above maps from K-theory to the integers.
See also
• JLO cocycle
Notes
1. A. Connes, Noncommutative Geometry, Academic Press, 1994
2. A. Connes, H. Moscovici; The Local Index Formula in Noncommutative Geometry
References
• Connes, Alain; Marcolli, Matilde. Noncommutative geometry, quantum fields and motives.
• Várilly, Joseph C. An introduction to noncommutative geometry.
• Khalkhali, Masoud; Marcolli, Matilde (2005). An invitation to noncommutative geometry. Lectures of the international workshop on noncommutative geometry, Tehran, Iran, 2005. Hackensack, NJ: World Scientific. ISBN 978-981-270-616-4. Zbl 1135.14002.
• Cuntz, Joachim. "Cyclic Theory, Bivariant K-Theory and the Bivariant Chern-Connes Character". Cyclic homology in non-commutative geometry.
• Marcolli, Matilde (2005). Arithmetic Noncommutative Geometry. University Lecture Series. Vol. 36. With a foreword by Yuri Manin. Providence, RI: American Mathematical Society. ISBN 978-0-8218-3833-4. Zbl 1081.58005.
Unbounded operator
In mathematics, more specifically functional analysis and operator theory, the notion of unbounded operator provides an abstract framework for dealing with differential operators, unbounded observables in quantum mechanics, and other cases.
The term "unbounded operator" can be misleading, since
• "unbounded" should sometimes be understood as "not necessarily bounded";
• "operator" should be understood as "linear operator" (as in the case of "bounded operator");
• the domain of the operator is a linear subspace, not necessarily the whole space;
• this linear subspace is not necessarily closed; often (but not always) it is assumed to be dense;
• in the special case of a bounded operator, still, the domain is usually assumed to be the whole space.
In contrast to bounded operators, unbounded operators on a given space do not form an algebra, nor even a linear space, because each one is defined on its own domain.
The term "operator" often means "bounded linear operator", but in the context of this article it means "unbounded operator", with the reservations made above. The given space is assumed to be a Hilbert space. Some generalizations to Banach spaces and more general topological vector spaces are possible.
Short history
The theory of unbounded operators developed in the late 1920s and early 1930s as part of developing a rigorous mathematical framework for quantum mechanics.[1] The theory's development is due to John von Neumann[2] and Marshall Stone.[3] Von Neumann introduced using graphs to analyze unbounded operators in 1932.[4]
Definitions and basic properties
Let X, Y be Banach spaces. An unbounded operator (or simply operator) T : D(T) → Y is a linear map T from a linear subspace D(T) ⊆ X—the domain of T—to the space Y.[5] Contrary to the usual convention, T may not be defined on the whole space X.
An operator T is said to be closed if its graph Γ(T) is a closed set.[6] (Here, the graph Γ(T) is a linear subspace of the direct sum X ⊕ Y, defined as the set of all pairs (x, Tx), where x runs over the domain of T .) Explicitly, this means that for every sequence {xn} of points from the domain of T such that xn → x and Txn → y, it holds that x belongs to the domain of T and Tx = y.[6] The closedness can also be formulated in terms of the graph norm: an operator T is closed if and only if its domain D(T) is a complete space with respect to the norm:[7]
$\|x\|_{T}={\sqrt {\|x\|^{2}+\|Tx\|^{2}}}.$
An operator T is said to be densely defined if its domain is dense in X.[5] This also includes operators defined on the entire space X, since the whole space is dense in itself. The denseness of the domain is necessary and sufficient for the existence of the adjoint (if X and Y are Hilbert spaces) and the transpose; see the sections below.
If T : X → Y is closed, densely defined and continuous on its domain, then its domain is all of X.[nb 1]
A densely defined operator T on a Hilbert space H is called bounded from below if T + a is a positive operator for some real number a. That is, ⟨Tx|x⟩ ≥ −a ||x||2 for all x in the domain of T (or alternatively ⟨Tx|x⟩ ≥ a ||x||2 since a is arbitrary).[8] If both T and −T are bounded from below then T is bounded.[8]
Example
Let C([0, 1]) denote the space of continuous functions on the unit interval, and let C1([0, 1]) denote the space of continuously differentiable functions. We equip $C([0,1])$ with the supremum norm, $\|\cdot \|_{\infty }$, making it a Banach space. Define the classical differentiation operator d/dx : C1([0, 1]) → C([0, 1]) by the usual formula:
$\left({\frac {d}{dx}}f\right)(x)=\lim _{h\to 0}{\frac {f(x+h)-f(x)}{h}},\qquad \forall x\in [0,1].$
Every differentiable function is continuous, so C1([0, 1]) ⊆ C([0, 1]). We claim that d/dx : C([0, 1]) → C([0, 1]) is a well-defined unbounded operator, with domain C1([0, 1]). For this, we need to show that ${\frac {d}{dx}}$ is linear and then, for example, exhibit some $\{f_{n}\}_{n}\subset C^{1}([0,1])$ such that $\|f_{n}\|_{\infty }=1$ and $\sup _{n}\|{\frac {d}{dx}}f_{n}\|_{\infty }=+\infty $.
This is a linear operator, since a linear combination a f + bg of two continuously differentiable functions f , g is also continuously differentiable, and
$\left({\tfrac {d}{dx}}\right)(af+bg)=a\left({\tfrac {d}{dx}}f\right)+b\left({\tfrac {d}{dx}}g\right).$
The operator is not bounded. For example,
${\begin{cases}f_{n}:[0,1]\to [-1,1]\\f_{n}(x)=\sin(2\pi nx)\end{cases}}$
satisfy
$\left\|f_{n}\right\|_{\infty }=1,$
but
$\left\|\left({\tfrac {d}{dx}}f_{n}\right)\right\|_{\infty }=2\pi n\to \infty $
as $n\to \infty $.
The operator is densely defined, and closed.
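The unboundedness computed above can be checked numerically. The sketch below approximates the sup norm of $f_{n}'$ by forward differences (the grid size and step are arbitrary illustrative choices):

```python
import math

def sup_norm_derivative(n, h=1e-6, samples=2000):
    # sup norm of the forward-difference derivative of f_n(x) = sin(2*pi*n*x)
    f = lambda x: math.sin(2 * math.pi * n * x)
    return max(abs(f(x + h) - f(x)) / h
               for x in (i / samples for i in range(samples)))

# ||f_n||_inf = 1 for every n, but ||f_n'||_inf = 2*pi*n grows without bound
for n in (1, 4, 16):
    print(n, sup_norm_derivative(n))  # close to 2*pi*n
```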
The same operator can be treated as an operator Z → Z for many choices of Banach space Z and not be bounded between any of them. At the same time, it can be bounded as an operator X → Y for other pairs of Banach spaces X, Y, and also as operator Z → Z for some topological vector spaces Z. As an example let I ⊂ R be an open interval and consider
${\frac {d}{dx}}:\left(C^{1}(I),\|\cdot \|_{C^{1}}\right)\to \left(C(I),\|\cdot \|_{\infty }\right),$
where:
$\|f\|_{C^{1}}=\|f\|_{\infty }+\|f'\|_{\infty }.$
Adjoint
The adjoint of an unbounded operator can be defined in two equivalent ways. Let $T:D(T)\subseteq H_{1}\to H_{2}$ be an unbounded operator between Hilbert spaces.
First, it can be defined in a way analogous to how one defines the adjoint of a bounded operator. Namely, the adjoint $T^{*}:D\left(T^{*}\right)\subseteq H_{2}\to H_{1}$ of T is defined as an operator with the property:
$\langle Tx\mid y\rangle _{2}=\left\langle x\mid T^{*}y\right\rangle _{1},\qquad x\in D(T).$
More precisely, $T^{*}y$ is defined in the following way. If $y\in H_{2}$ is such that $x\mapsto \langle Tx\mid y\rangle $ is a continuous linear functional on the domain of T, then $y$ is declared to be an element of $D\left(T^{*}\right),$ and after extending the linear functional to the whole space via the Hahn–Banach theorem, it is possible to find some $z$ in $H_{1}$ such that
$\langle Tx\mid y\rangle _{2}=\langle x\mid z\rangle _{1},\qquad x\in D(T),$
since the Riesz representation theorem allows the continuous dual of the Hilbert space $H_{1}$ to be identified with the set of linear functionals given by the inner product. This vector $z$ is uniquely determined by $y$ if and only if the linear functional $x\mapsto \langle Tx\mid y\rangle $ is densely defined; or equivalently, if T is densely defined. Finally, letting $T^{*}y=z$ completes the construction of $T^{*},$ which is necessarily a linear map. The adjoint $T^{*}$ exists if and only if T is densely defined.
By definition, the domain of $T^{*}$ consists of elements $y$ in $H_{2}$ such that $x\mapsto \langle Tx\mid y\rangle $ is continuous on the domain of T. Consequently, the domain of $T^{*}$ could be anything; it could be trivial (that is, contains only zero).[9] It may happen that the domain of $T^{*}$ is a closed hyperplane and $T^{*}$ vanishes everywhere on the domain.[10][11] Thus, boundedness of $T^{*}$ on its domain does not imply boundedness of T. On the other hand, if $T^{*}$ is defined on the whole space then T is bounded on its domain and therefore can be extended by continuity to a bounded operator on the whole space.[nb 2] If the domain of $T^{*}$ is dense, then it has its adjoint $T^{**}.$[12] A closed densely defined operator T is bounded if and only if $T^{*}$ is bounded.[nb 3]
The other equivalent definition of the adjoint can be obtained by noticing a general fact. Define a linear operator $J$ as follows:[12]
${\begin{cases}J:H_{1}\oplus H_{2}\to H_{2}\oplus H_{1}\\J(x\oplus y)=-y\oplus x\end{cases}}$
Since $J$ is an isometric surjection, it is unitary. Hence: $J(\Gamma (T))^{\bot }$ is the graph of some operator $S$ if and only if T is densely defined.[13] A simple calculation shows that this "some" $S$ satisfies:
$\langle Tx\mid y\rangle _{2}=\langle x\mid Sy\rangle _{1},$
for every x in the domain of T. Thus $S$ is the adjoint of T.
It follows immediately from the above definition that the adjoint $T^{*}$ is closed.[12] In particular, a self-adjoint operator (meaning $T=T^{*}$) is closed. An operator T is closed and densely defined if and only if $T^{**}=T.$[nb 4]
Some well-known properties for bounded operators generalize to closed densely defined operators. The kernel of a closed operator is closed. Moreover, the kernel of a closed densely defined operator $T:H_{1}\to H_{2}$ coincides with the orthogonal complement of the range of the adjoint. That is,[14]
$\operatorname {ker} (T)=\operatorname {ran} (T^{*})^{\bot }.$
von Neumann's theorem states that $T^{*}T$ and $TT^{*}$ are self-adjoint, and that $I+T^{*}T$ and $I+TT^{*}$ both have bounded inverses.[15] If $T^{*}$ has trivial kernel, T has dense range (by the above identity.) Moreover:
T is surjective if and only if there is a $K>0$ such that $\|f\|_{2}\leq K\left\|T^{*}f\right\|_{1}$ for all $f$ in $D\left(T^{*}\right).$[nb 5] (This is essentially a variant of the so-called closed range theorem.) In particular, T has closed range if and only if $T^{*}$ has closed range.
In contrast to the bounded case, it is not necessary that $(TS)^{*}=S^{*}T^{*},$ since, for example, it is even possible that $(TS)^{*}$ does not exist. This is, however, the case if, for example, T is bounded.[16]
A densely defined, closed operator T is called normal if it satisfies the following equivalent conditions:[17]
• $T^{*}T=TT^{*}$;
• the domain of T is equal to the domain of $T^{*},$ and $\|Tx\|=\left\|T^{*}x\right\|$ for every x in this domain;
• there exist self-adjoint operators $A,B$ such that $T=A+iB,$$T^{*}=A-iB,$ and $\|Tx\|^{2}=\|Ax\|^{2}+\|Bx\|^{2}$ for every x in the domain of T.
Every self-adjoint operator is normal.
Transpose
See also: Transpose of a linear map
Let $T:B_{1}\to B_{2}$ be an operator between Banach spaces. Then the transpose (or dual) ${}^{t}T:{B_{2}}^{*}\to {B_{1}}^{*}$ of $T$ is the linear operator satisfying:
$\langle Tx,y'\rangle =\langle x,\left({}^{t}T\right)y'\rangle $
for all $x\in B_{1}$ and $y'\in B_{2}^{*}.$ Here, we used the notation: $\langle x,x'\rangle =x'(x).$[18]
The necessary and sufficient condition for the transpose of $T$ to exist is that $T$ is densely defined (for essentially the same reason as for adjoints, as discussed above.)
For any Hilbert space $H,$ there is the anti-linear isomorphism:
$J:H^{*}\to H$
given by $Jf=y$ where $f(x)=\langle x\mid y\rangle _{H},(x\in H).$ Through this isomorphism, the transpose ${}^{t}T$ relates to the adjoint $T^{*}$ in the following way:[19]
$T^{*}=J_{1}\left({}^{t}T\right)J_{2}^{-1},$
where $J_{j}:H_{j}^{*}\to H_{j}$. (For the finite-dimensional case, this corresponds to the fact that the adjoint of a matrix is its conjugate transpose.) Note that this gives the definition of adjoint in terms of a transpose.
Closed linear operators
Main article: Closed linear operator
Closed linear operators are a class of linear operators on Banach spaces. They are more general than bounded operators, and therefore not necessarily continuous, but they still retain nice enough properties that one can define the spectrum and (with certain assumptions) functional calculus for such operators. Many important linear operators which fail to be bounded turn out to be closed, such as the derivative and a large class of differential operators.
Let X, Y be two Banach spaces. A linear operator A : D(A) ⊆ X → Y is closed if for every sequence {xn} in D(A) converging to x in X such that Axn → y ∈ Y as n → ∞ one has x ∈ D(A) and Ax = y. Equivalently, A is closed if its graph is closed in the direct sum X ⊕ Y.
Given a linear operator A, not necessarily closed, if the closure of its graph in X ⊕ Y happens to be the graph of some operator, that operator is called the closure of A, and we say that A is closable. Denote the closure of A by ${\overline {A}}$. It follows that A is the restriction of ${\overline {A}}$ to D(A).
A core (or essential domain) of a closable operator is a subset C of D(A) such that the closure of the restriction of A to C is ${\overline {A}}$.
Example
Consider the derivative operator A = d/dx where X = Y = C([a, b]) is the Banach space of all continuous functions on an interval [a, b]. If one takes its domain D(A) to be C1([a, b]), then A is a closed operator which is not bounded.[20] On the other hand if D(A) = C∞([a, b]), then A will no longer be closed, but it will be closable, with the closure being its extension defined on C1([a, b]).
Symmetric operators and self-adjoint operators
Main article: Self-adjoint operator
An operator T on a Hilbert space is symmetric if and only if for each x and y in the domain of T we have $\langle Tx\mid y\rangle =\langle x\mid Ty\rangle $. A densely defined operator T is symmetric if and only if it agrees with its adjoint T∗ restricted to the domain of T, in other words when T∗ is an extension of T.[21]
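The defining identity can be illustrated numerically for the genuinely unbounded operator T = −d²/dx² on [0, 1], restricted to functions vanishing at the endpoints (so the boundary terms from integrating by parts twice drop out). The test functions below are illustrative choices, not from the source:

```python
import math

# Midpoint-rule check of <Tf, g> = <f, Tg> on [0, 1] for T = -d^2/dx^2,
# with f(x) = sin(pi*x) and g(x) = x(1 - x): both vanish at the endpoints.
m = 20_000
xs = [(i + 0.5) / m for i in range(m)]

f = lambda x: math.sin(math.pi * x)
Tf = lambda x: math.pi ** 2 * math.sin(math.pi * x)  # -f'' (analytic)
g = lambda x: x * (1 - x)
Tg = lambda x: 2.0                                   # -g'' (analytic)

lhs = sum(Tf(x) * g(x) for x in xs) / m  # <Tf, g>
rhs = sum(f(x) * Tg(x) for x in xs) / m  # <f, Tg>
print(lhs, rhs)  # both approximate 4/pi
```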
In general, if T is densely defined and symmetric, the domain of the adjoint T∗ need not equal the domain of T. If T is symmetric and the domain of T and the domain of the adjoint coincide, then we say that T is self-adjoint.[22] Note that, when T is self-adjoint, the existence of the adjoint implies that T is densely defined and since T∗ is necessarily closed, T is closed.
A densely defined operator T is symmetric, if the subspace Γ(T) (defined in a previous section) is orthogonal to its image J(Γ(T)) under J (where J(x,y):=(y,-x)).[nb 6]
Equivalently, an operator T is self-adjoint if it is densely defined, closed, symmetric, and satisfies the fourth condition: both operators T – i, T + i are surjective, that is, map the domain of T onto the whole space H. In other words: for every x in H there exist y and z in the domain of T such that Ty – iy = x and Tz + iz = x.[23]
An operator T is self-adjoint, if the two subspaces Γ(T), J(Γ(T)) are orthogonal and their sum is the whole space $H\oplus H.$[12]
This approach does not cover non-densely defined closed operators. Non-densely defined symmetric operators can be defined directly or via graphs, but not via adjoint operators.
A symmetric operator is often studied via its Cayley transform.
An operator T on a complex Hilbert space is symmetric if and only if its quadratic form is real, that is, the number $\langle Tx\mid x\rangle $ is real for all x in the domain of T.[21]
A densely defined closed symmetric operator T is self-adjoint if and only if T∗ is symmetric.[24] It may happen that it is not.[25][26]
A densely defined operator T is called positive[8] (or nonnegative[27]) if its quadratic form is nonnegative, that is, $\langle Tx\mid x\rangle \geq 0$ for all x in the domain of T. Such operator is necessarily symmetric.
The operator T∗T is self-adjoint[28] and positive[8] for every densely defined, closed T.
The spectral theorem applies to self-adjoint operators [29] and moreover, to normal operators,[30][31] but not to densely defined, closed operators in general, since in this case the spectrum can be empty.[32][33]
A symmetric operator defined everywhere is closed, therefore bounded,[6] which is the Hellinger–Toeplitz theorem.[34]
Extension-related
See also: Extensions of symmetric operators
By definition, an operator T is an extension of an operator S if Γ(S) ⊆ Γ(T).[35] An equivalent direct definition: for every x in the domain of S, x belongs to the domain of T and Sx = Tx.[5][35]
Note that an everywhere defined extension exists for every operator, which is a purely algebraic fact explained at Discontinuous linear map § General existence theorem and based on the axiom of choice. If the given operator is not bounded then the extension is a discontinuous linear map. It is of little use since it cannot preserve important properties of the given operator (see below), and usually is highly non-unique.
An operator T is called closable if it satisfies the following equivalent conditions:[6][35][36]
• T has a closed extension;
• the closure of the graph of T is the graph of some operator;
• for every sequence (xn) of points from the domain of T such that xn → 0 and also Txn → y it holds that y = 0.
Not all operators are closable.[37]
A closable operator T has the least closed extension ${\overline {T}}$ called the closure of T. The closure of the graph of T is equal to the graph of ${\overline {T}}.$[6][35] Other, non-minimal closed extensions may exist.[25][26]
A densely defined operator T is closable if and only if T∗ is densely defined. In this case ${\overline {T}}=T^{**}$ and $({\overline {T}})^{*}=T^{*}.$[12][38]
If S is densely defined and T is an extension of S then S∗ is an extension of T∗.[39]
Every symmetric operator is closable.[40]
A symmetric operator is called maximal symmetric if it has no symmetric extensions, except for itself.[21] Every self-adjoint operator is maximal symmetric.[21] The converse is false.[41]
An operator is called essentially self-adjoint if its closure is self-adjoint.[40] An operator is essentially self-adjoint if and only if it has one and only one self-adjoint extension.[24]
A symmetric operator may have more than one self-adjoint extension, and even a continuum of them.[26]
A densely defined, symmetric operator T is essentially self-adjoint if and only if both operators T – i, T + i have dense range.[42]
Let T be a densely defined operator. Denoting the relation "T is an extension of S" by S ⊂ T (a conventional abbreviation for Γ(S) ⊆ Γ(T)) one has the following.[43]
• If T is symmetric then T ⊂ T∗∗ ⊂ T∗.
• If T is closed and symmetric then T = T∗∗ ⊂ T∗.
• If T is self-adjoint then T = T∗∗ = T∗.
• If T is essentially self-adjoint then T ⊂ T∗∗ = T∗.
Importance of self-adjoint operators
The class of self-adjoint operators is especially important in mathematical physics. Every self-adjoint operator is densely defined, closed and symmetric. The converse holds for bounded operators but fails in general. Self-adjointness is substantially more restricting than these three properties. The famous spectral theorem holds for self-adjoint operators. In combination with Stone's theorem on one-parameter unitary groups it shows that self-adjoint operators are precisely the infinitesimal generators of strongly continuous one-parameter unitary groups, see Self-adjoint operator § Self-adjoint extensions in quantum mechanics. Such unitary groups are especially important for describing time evolution in classical and quantum mechanics.
See also
• Hilbert space § Unbounded operators
• Stone–von Neumann theorem
• Bounded operator
Notes
1. Suppose fj is a sequence in the domain of T that converges to g ∈ X. Since T is uniformly continuous on its domain, Tfj is Cauchy in Y. Thus, ( fj , T fj ) is Cauchy and so converges to some ( f , T f ) since the graph of T is closed. Hence, f = g, and the domain of T is closed.
2. Proof: being closed, the everywhere defined $T^{*}$ is bounded, which implies boundedness of $T^{**},$ the latter being the closure of T. See also (Pedersen 1989, 2.3.11) for the case of everywhere defined T.
3. Proof: $T^{**}=T.$ So if $T^{*}$ is bounded then its adjoint T is bounded.
4. Proof: If T is closed densely defined then $T^{*}$ exists and is densely defined. Thus $T^{**}$ exists. The graph of T is dense in the graph of $T^{**};$ hence $T=T^{**}.$ Conversely, the existence of $T^{**}$ implies that of $T^{*},$ which in turn implies that T is densely defined. Since $T^{**}$ is closed, T is densely defined and closed.
5. If $T$ is surjective then $T:(\ker T)^{\bot }\to H_{2}$ has bounded inverse, denoted by $S.$ The estimate then follows since
$\|f\|_{2}^{2}=\left|\langle TSf\mid f\rangle _{2}\right|\leq \|S\|\|f\|_{2}\left\|T^{*}f\right\|_{1}$
Conversely, suppose the estimate holds. Since $T^{*}$ has closed range, it is the case that $\operatorname {ran} (T)=\operatorname {ran} \left(TT^{*}\right).$ Since $\operatorname {ran} (T)$ is dense, it suffices to show that $TT^{*}$ has closed range. If $TT^{*}f_{j}$ is convergent then $f_{j}$ is convergent by the estimate since
$\|T^{*}f_{j}\|_{1}^{2}=|\langle T^{*}f_{j}\mid T^{*}f_{j}\rangle _{1}|\leq \|TT^{*}f_{j}\|_{2}\|f_{j}\|_{2}.$
Say, $f_{j}\to g.$ Since $TT^{*}$ is self-adjoint and thus closed (von Neumann's theorem), $TT^{*}f_{j}\to TT^{*}g.$ QED
6. Follows from (Pedersen 1989, 5.1.5) and the definition via adjoint operators.
References
Citations
1. Reed & Simon 1980, Notes to Chapter VIII, page 305
2. von Neumann 1930, pp. 49–131
3. Stone 1932
4. von Neumann 1932, pp. 294–310
5. Pedersen 1989, 5.1.1
6. Pedersen 1989, 5.1.4
7. Berezansky, Sheftel & Us 1996, page 5
8. Pedersen 1989, 5.1.12
9. Berezansky, Sheftel & Us 1996, Example 3.2 on page 16
10. Reed & Simon 1980, page 252
11. Berezansky, Sheftel & Us 1996, Example 3.1 on page 15
12. Pedersen 1989, 5.1.5
13. Berezansky, Sheftel & Us 1996, page 12
14. Brezis 1983, p. 28
15. Yoshida 1980, p. 200
16. Yoshida 1980, p. 195.
17. Pedersen 1989, 5.1.11
18. Yoshida 1980, p. 193
19. Yoshida 1980, p. 196
20. Kreyszig 1978, p. 294
21. Pedersen 1989, 5.1.3
22. Kato 1995, 5.3.3
23. Pedersen 1989, 5.2.5
24. Reed & Simon 1980, page 256
25. Pedersen 1989, 5.1.16
26. Reed & Simon 1980, Example on pages 257-259
27. Berezansky, Sheftel & Us 1996, page 25
28. Pedersen 1989, 5.1.9
29. Pedersen 1989, 5.3.8
30. Berezansky, Sheftel & Us 1996, page 89
31. Pedersen 1989, 5.3.19
32. Reed & Simon 1980, Example 5 on page 254
33. Pedersen 1989, 5.2.12
34. Reed & Simon 1980, page 84
35. Reed & Simon 1980, page 250
36. Berezansky, Sheftel & Us 1996, pages 6,7
37. Berezansky, Sheftel & Us 1996, page 7
38. Reed & Simon 1980, page 253
39. Pedersen 1989, 5.1.2
40. Pedersen 1989, 5.1.6
41. Pedersen 1989, 5.2.6
42. Reed & Simon 1980, page 257
43. Reed & Simon 1980, pages 255, 256
Bibliography
• Berezansky, Y.M.; Sheftel, Z.G.; Us, G.F. (1996), Functional analysis, vol. II, Birkhäuser (see Chapter 12 "General theory of unbounded operators in Hilbert spaces").
• Brezis, Haïm (1983), Analyse fonctionnelle — Théorie et applications (in French), Paris: Mason
• "Unbounded operator", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
• Hall, B.C. (2013), "Chapter 9. Unbounded Self-adjoint Operators", Quantum Theory for Mathematicians, Graduate Texts in Mathematics, vol. 267, Springer, ISBN 978-1461471158
• Kato, Tosio (1995), "Chapter 5. Operators in Hilbert Space", Perturbation theory for linear operators, Classics in Mathematics, Springer-Verlag, ISBN 3-540-58661-X
• Kreyszig, Erwin (1978). Introductory Functional Analysis With Applications. USA: John Wiley & Sons. Inc. ISBN 0-471-50731-8.
• Pedersen, Gert K. (1989), Analysis now, Springer (see Chapter 5 "Unbounded operators").
• Reed, Michael; Simon, Barry (1980), Methods of Modern Mathematical Physics, vol. 1: Functional Analysis (revised and enlarged ed.), Academic Press (see Chapter 8 "Unbounded operators").
• Stone, Marshall Harvey (1932). Linear Transformations in Hilbert Space and Their Applications to Analysis. Reprint of the 1932 Ed. American Mathematical Society. ISBN 978-0-8218-7452-3.
• Teschl, Gerald (2009). Mathematical Methods in Quantum Mechanics; With Applications to Schrödinger Operators. Providence: American Mathematical Society. ISBN 978-0-8218-4660-5.
• von Neumann, J. (1930), "Allgemeine Eigenwerttheorie Hermitescher Functionaloperatoren (General Eigenvalue Theory of Hermitian Functional Operators)", Mathematische Annalen, 102 (1), doi:10.1007/BF01782338
• von Neumann, J. (1932), "Über Adjungierte Funktionaloperatore (On Adjoint Functional Operators)", Annals of Mathematics, Second Series, 33 (2), doi:10.2307/1968331, JSTOR 1968331
• Yoshida, Kôsaku (1980), Functional Analysis (sixth ed.), Springer
This article incorporates material from Closed operator on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Spectral theory and *-algebras
Basic concepts
• Involution/*-algebra
• Banach algebra
• B*-algebra
• C*-algebra
• Noncommutative topology
• Projection-valued measure
• Spectrum
• Spectrum of a C*-algebra
• Spectral radius
• Operator space
Main results
• Gelfand–Mazur theorem
• Gelfand–Naimark theorem
• Gelfand representation
• Polar decomposition
• Singular value decomposition
• Spectral theorem
• Spectral theory of normal C*-algebras
Special Elements/Operators
• Isospectral
• Normal operator
• Hermitian/Self-adjoint operator
• Unitary operator
• Unit
Spectrum
• Krein–Rutman theorem
• Normal eigenvalue
• Spectrum of a C*-algebra
• Spectral radius
• Spectral asymmetry
• Spectral gap
Decomposition
• Decomposition of a spectrum
• Continuous
• Point
• Residual
• Approximate point
• Compression
• Direct integral
• Discrete
• Spectral abscissa
Spectral Theorem
• Borel functional calculus
• Min-max theorem
• Positive operator-valued measure
• Projection-valued measure
• Riesz projector
• Rigged Hilbert space
• Spectral theorem
• Spectral theory of compact operators
• Spectral theory of normal C*-algebras
Special algebras
• Amenable Banach algebra
• With an Approximate identity
• Banach function algebra
• Disk algebra
• Nuclear C*-algebra
• Uniform algebra
• Von Neumann algebra
• Tomita–Takesaki theory
Finite-Dimensional
• Alon–Boppana bound
• Bauer–Fike theorem
• Numerical range
• Schur–Horn theorem
Generalizations
• Dirac spectrum
• Essential spectrum
• Pseudospectrum
• Structure space (Shilov boundary)
Miscellaneous
• Abstract index group
• Banach algebra cohomology
• Cohen–Hewitt factorization theorem
• Extensions of symmetric operators
• Fredholm theory
• Limiting absorption principle
• Schröder–Bernstein theorems for operator algebras
• Sherman–Takeda theorem
• Unbounded operator
Examples
• Wiener algebra
Applications
• Almost Mathieu operator
• Corona theorem
• Hearing the shape of a drum (Dirichlet eigenvalue)
• Heat kernel
• Kuznetsov trace formula
• Lax pair
• Proto-value function
• Ramanujan graph
• Rayleigh–Faber–Krahn inequality
• Spectral geometry
• Spectral method
• Spectral theory of ordinary differential equations
• Sturm–Liouville theory
• Superstrong approximation
• Transfer operator
• Transform theory
• Weyl law
• Wiener–Khinchin theorem
Hilbert spaces
Basic concepts
• Adjoint
• Inner product and L-semi-inner product
• Hilbert space and Prehilbert space
• Orthogonal complement
• Orthonormal basis
Main results
• Bessel's inequality
• Cauchy–Schwarz inequality
• Riesz representation
Other results
• Hilbert projection theorem
• Parseval's identity
• Polarization identity (Parallelogram law)
Maps
• Compact operator on Hilbert space
• Densely defined
• Hermitian form
• Hilbert–Schmidt
• Normal
• Self-adjoint
• Sesquilinear form
• Trace class
• Unitary
Examples
• Cn(K) with K compact & n<∞
• Segal–Bargmann F
Functional analysis (topics – glossary)
Spaces
• Banach
• Besov
• Fréchet
• Hilbert
• Hölder
• Nuclear
• Orlicz
• Schwartz
• Sobolev
• Topological vector
Properties
• Barrelled
• Complete
• Dual (Algebraic/Topological)
• Locally convex
• Reflexive
• Separable
Theorems
• Hahn–Banach
• Riesz representation
• Closed graph
• Uniform boundedness principle
• Kakutani fixed-point
• Krein–Milman
• Min–max
• Gelfand–Naimark
• Banach–Alaoglu
Operators
• Adjoint
• Bounded
• Compact
• Hilbert–Schmidt
• Normal
• Nuclear
• Trace class
• Transpose
• Unbounded
• Unitary
Algebras
• Banach algebra
• C*-algebra
• Spectrum of a C*-algebra
• Operator algebra
• Group algebra of a locally compact group
• Von Neumann algebra
Open problems
• Invariant subspace problem
• Mahler's conjecture
Applications
• Hardy space
• Spectral theory of ordinary differential equations
• Heat kernel
• Index theorem
• Calculus of variations
• Functional calculus
• Integral operator
• Jones polynomial
• Topological quantum field theory
• Noncommutative geometry
• Riemann hypothesis
• Distribution (or Generalized functions)
Advanced topics
• Approximation property
• Balanced set
• Choquet theory
• Weak topology
• Banach–Mazur distance
• Tomita–Takesaki theory
• Mathematics portal
• Category
• Commons
Boundedness and bornology
Basic concepts
• Barrelled space
• Bounded set
• Bornological space
• (Vector) Bornology
Operators
• (Un)Bounded operator
• Uniform boundedness principle
Subsets
• Barrelled set
• Bornivorous set
• Saturated family
Related spaces
• (Countably) Barrelled space
• (Countably) Quasi-barrelled space
• Infrabarrelled space
• (Quasi-) Ultrabarrelled space
• Ultrabornological space
| Wikipedia |
O*-algebra
In mathematics, an O*-algebra is an algebra of possibly unbounded operators defined on a dense subspace of a Hilbert space. The original examples were described by Borchers (1962) and Uhlmann (1962), who studied some examples of O*-algebras, called Borchers algebras, arising from the Wightman axioms of quantum field theory. Powers (1971) and Lassner (1972) began the systematic study of algebras of unbounded operators.
References
• Borchers, H.-J. (1962), "On structure of the algebra of field operators", Nuovo Cimento, 24: 214–236, doi:10.1007/BF02745645, MR 0142320
• Borchers, H. J.; Yngvason, J. (1975), "On the algebra of field operators. The weak commutant and integral decompositions of states", Communications in Mathematical Physics, 42: 231–252, doi:10.1007/bf01608975, ISSN 0010-3616, MR 0377550
• Lassner, G. (1972), "Topological algebras of operators", Reports on Mathematical Physics, 3 (4): 279–293, doi:10.1016/0034-4877(72)90012-2, ISSN 0034-4877, MR 0322527
• Powers, Robert T. (1971), "Self-adjoint algebras of unbounded operators", Communications in Mathematical Physics, 21: 85–124, doi:10.1007/bf01646746, ISSN 0010-3616, MR 0283580
• Schmüdgen, Konrad (1990), Unbounded operator algebras and representation theory, Operator Theory: Advances and Applications, vol. 37, Birkhäuser Verlag, doi:10.1007/978-3-0348-7469-4, ISBN 978-3-7643-2321-9, MR 1056697
• Uhlmann, Armin (1962), "Über die Definition der Quantenfelder nach Wightman und Haag", Wiss. Z. Karl-Marx-Univ. Leipzig Math.-Nat. Reihe, 11: 213–217, MR 0141413
| Wikipedia |
Fuzzy set
In mathematics, fuzzy sets (a.k.a. uncertain sets) are sets whose elements have degrees of membership. Fuzzy sets were introduced independently by Lotfi A. Zadeh in 1965 as an extension of the classical notion of set.[1][2] At the same time, Salii (1965) defined a more general kind of structure called an L-relation, which he studied in an abstract algebraic context. Fuzzy relations, which are now used throughout fuzzy mathematics and have applications in areas such as linguistics (De Cock, Bodenhofer & Kerre 2000), decision-making (Kuzmin 1982), and clustering (Bezdek 1978), are special cases of L-relations when L is the unit interval [0, 1].
In classical set theory, the membership of elements in a set is assessed in binary terms according to a bivalent condition—an element either belongs or does not belong to the set. By contrast, fuzzy set theory permits the gradual assessment of the membership of elements in a set; this is described with the aid of a membership function valued in the real unit interval [0, 1]. Fuzzy sets generalize classical sets, since the indicator functions (aka characteristic functions) of classical sets are special cases of the membership functions of fuzzy sets, if the latter only take values 0 or 1.[3] In fuzzy set theory, classical bivalent sets are usually called crisp sets. Fuzzy set theory can be used in a wide range of domains in which information is incomplete or imprecise, such as bioinformatics.[4]
Definition
A fuzzy set is a pair $(U,m)$ where $U$ is a set (often required to be non-empty) and $m\colon U\rightarrow [0,1]$ a membership function. The reference set $U$ (sometimes denoted by $\Omega $ or $X$) is called universe of discourse, and for each $x\in U,$ the value $m(x)$ is called the grade of membership of $x$ in $(U,m)$. The function $m=\mu _{A}$ is called the membership function of the fuzzy set $A=(U,m)$.
For a finite set $U=\{x_{1},\dots ,x_{n}\},$ the fuzzy set $(U,m)$ is often denoted by $\{m(x_{1})/x_{1},\dots ,m(x_{n})/x_{n}\}.$
Let $x\in U$. Then $x$ is called
• not included in the fuzzy set $(U,m)$ if $m(x)=0$ (no member),
• fully included if $m(x)=1$ (full member),
• partially included if $0<m(x)<1$ (fuzzy member).[5]
The (crisp) set of all fuzzy sets on a universe $U$ is denoted with $SF(U)$ (or sometimes just $F(U)$).[6]
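For a finite universe, the pair $(U,m)$ can be sketched as a dictionary from elements to grades; the universe and membership values below are hypothetical.

```python
# Membership dict for a hypothetical fuzzy set "cold" on a universe of temperatures.
cold = {"0C": 1.0, "10C": 0.7, "20C": 0.2, "30C": 0.0}

def classify(A, x):
    """Classify x per the grade of membership m(x)."""
    m = A.get(x, 0.0)
    if m == 0.0:
        return "not included"
    if m == 1.0:
        return "fully included"
    return "fuzzy member"

print(classify(cold, "0C"))   # fully included
print(classify(cold, "10C"))  # fuzzy member
print(classify(cold, "30C"))  # not included
```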
Crisp sets related to a fuzzy set
For any fuzzy set $A=(U,m)$ and $\alpha \in [0,1]$ the following crisp sets are defined:
• $A^{\geq \alpha }=A_{\alpha }=\{x\in U\mid m(x)\geq \alpha \}$ is called its α-cut (aka α-level set)
• $A^{>\alpha }=A'_{\alpha }=\{x\in U\mid m(x)>\alpha \}$ is called its strong α-cut (aka strong α-level set)
• $S(A)=\operatorname {Supp} (A)=A^{>0}=\{x\in U\mid m(x)>0\}$ is called its support
• $C(A)=\operatorname {Core} (A)=A^{=1}=\{x\in U\mid m(x)=1\}$ is called its core (or sometimes kernel $\operatorname {Kern} (A)$).
Note that some authors understand "kernel" in a different way; see below.
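The crisp sets above can be read off directly from a finite membership dict; the grades are hypothetical.

```python
A = {"a": 0.2, "b": 0.5, "c": 0.9, "d": 1.0, "e": 0.0}

def alpha_cut(A, alpha):          # A^{>= alpha}, the (weak) alpha-cut
    return {x for x, m in A.items() if m >= alpha}

def strong_alpha_cut(A, alpha):   # A^{> alpha}
    return {x for x, m in A.items() if m > alpha}

support = strong_alpha_cut(A, 0.0)   # the support is the strong 0-cut
core = alpha_cut(A, 1.0)             # the core is the 1-cut
print(sorted(support))            # ['a', 'b', 'c', 'd']
print(sorted(core))               # ['d']
print(sorted(alpha_cut(A, 0.5)))  # ['b', 'c', 'd']
```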
Other definitions
• A fuzzy set $A=(U,m)$ is empty ($A=\varnothing $) iff (if and only if)
$\forall x\in U:\mu _{A}(x)=m(x)=0$
• Two fuzzy sets $A$ and $B$ are equal ($A=B$) iff
$\forall x\in U:\mu _{A}(x)=\mu _{B}(x)$
• A fuzzy set $A$ is included in a fuzzy set $B$ ($A\subseteq B$) iff
$\forall x\in U:\mu _{A}(x)\leq \mu _{B}(x)$
• For any fuzzy set $A$, any element $x\in U$ that satisfies
$\mu _{A}(x)=0.5$
is called a crossover point.
• Given a fuzzy set $A$, any $\alpha \in [0,1]$, for which $A^{=\alpha }=\{x\in U\mid \mu _{A}(x)=\alpha \}$ is not empty, is called a level of A.
• The level set of A is the set of all levels $\alpha \in [0,1]$ representing distinct cuts. It is the image of $\mu _{A}$:
$\Lambda _{A}=\{\alpha \in [0,1]:A^{=\alpha }\neq \varnothing \}=\{\alpha \in [0,1]:\exists x\in U(\mu _{A}(x)=\alpha )\}=\mu _{A}(U)$
• For a fuzzy set $A$, its height is given by
$\operatorname {Hgt} (A)=\sup\{\mu _{A}(x)\mid x\in U\}=\sup(\mu _{A}(U))$
where $\sup $ denotes the supremum, which exists because $\mu _{A}(U)$ is non-empty and bounded above by 1. If U is finite, we can simply replace the supremum by the maximum.
• A fuzzy set $A$ is said to be normalized iff
$\operatorname {Hgt} (A)=1$
In the finite case, where the supremum is a maximum, this means that at least one element of the fuzzy set has full membership. A non-empty fuzzy set $A$ may be normalized with result ${\tilde {A}}$ by dividing the membership function of the fuzzy set by its height:
$\forall x\in U:\mu _{\tilde {A}}(x)=\mu _{A}(x)/\operatorname {Hgt} (A)$
Despite the similar name, this differs from the usual normalization in that the normalizing constant is not a sum.
• For fuzzy sets $A$ of real numbers (U ⊆ ℝ) with bounded support, the width is defined as
$\operatorname {Width} (A)=\sup(\operatorname {Supp} (A))-\inf(\operatorname {Supp} (A))$
In the case when $\operatorname {Supp} (A)$ is a finite set, or more generally a closed set, the width is just
$\operatorname {Width} (A)=\max(\operatorname {Supp} (A))-\min(\operatorname {Supp} (A))$
In the n-dimensional case (U ⊆ ℝn) the above can be replaced by the n-dimensional volume of $\operatorname {Supp} (A)$.
In general, this can be defined given any measure on U, for instance by integration (e.g. Lebesgue integration) of $\operatorname {Supp} (A)$.
• A real fuzzy set $A$ (U ⊆ ℝ) is said to be convex (in the fuzzy sense, not to be confused with a crisp convex set), iff
$\forall x,y\in U,\forall \lambda \in [0,1]:\mu _{A}(\lambda {x}+(1-\lambda )y)\geq \min(\mu _{A}(x),\mu _{A}(y))$.
Without loss of generality, we may take x ≤ y, which gives the equivalent formulation
$\forall z\in [x,y]:\mu _{A}(z)\geq \min(\mu _{A}(x),\mu _{A}(y))$.
This definition can be extended to one for a general topological space U: we say the fuzzy set $A$ is convex when, for any subset Z of U, the condition
$\forall z\in Z:\mu _{A}(z)\geq \inf(\mu _{A}(\partial Z))$
holds, where $\partial Z$ denotes the boundary of Z and $f(X)=\{f(x)\mid x\in X\}$ denotes the image of a set X (here $\partial Z$) under a function f (here $\mu _{A}$).
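The height, normalization, and fuzzy-convexity definitions above can be sketched for a finite universe and a sampled real fuzzy set; the membership values and the triangular shape are hypothetical.

```python
A = {"a": 0.2, "b": 0.8}

def height(A):
    return max(A.values())             # sup reduces to max on a finite universe

def normalized(A):
    h = height(A)                      # requires a non-empty fuzzy set, h > 0
    return {x: m / h for x, m in A.items()}

N = normalized(A)                      # {'a': 0.25, 'b': 1.0}, so Hgt(N) = 1

# Fuzzy convexity: mu(l*x + (1-l)*y) >= min(mu(x), mu(y)), checked on a grid.
def tri(x, a=0.0, b=1.0, c=2.0):
    """Triangular membership function on R (a standard convex example)."""
    if a <= x <= b: return (x - a) / (b - a)
    if b <= x <= c: return (c - x) / (c - b)
    return 0.0

def looks_convex(mu, points, lambdas):
    return all(
        mu(l * x + (1 - l) * y) >= min(mu(x), mu(y)) - 1e-12
        for x in points for y in points for l in lambdas
    )

pts = [i * 0.25 for i in range(9)]     # samples of [0, 2]
lams = [i * 0.1 for i in range(11)]
print(looks_convex(tri, pts, lams))    # True for the triangular shape
```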
Fuzzy set operations
Main article: Fuzzy set operations
Although the complement of a fuzzy set has a single most common definition, the other main operations, union and intersection, do have some ambiguity.
• For a given fuzzy set $A$, its complement $\neg {A}$ (sometimes denoted as $A^{c}$ or $cA$) is defined by the following membership function:
$\forall x\in U:\mu _{\neg {A}}(x)=1-\mu _{A}(x)$.
• Let t be a t-norm, and s the corresponding s-norm (aka t-conorm). Given a pair of fuzzy sets $A,B$, their intersection $A\cap {B}$ is defined by:
$\forall x\in U:\mu _{A\cap {B}}(x)=t(\mu _{A}(x),\mu _{B}(x))$,
and their union $A\cup {B}$ is defined by:
$\forall x\in U:\mu _{A\cup {B}}(x)=s(\mu _{A}(x),\mu _{B}(x))$.
By the definition of the t-norm, we see that the union and intersection are commutative, monotonic, associative, and have both a null and an identity element. For the intersection, these are ∅ and U, respectively, while for the union, these are reversed. However, the union of a fuzzy set and its complement may not result in the full universe U, and the intersection of them may not give the empty set ∅. Since the intersection and union are associative, it is natural to define the intersection and union of a finite family of fuzzy sets recursively.
• If the standard negator $n(\alpha )=1-\alpha ,\alpha \in [0,1]$ is replaced by another strong negator, the fuzzy set difference may be generalized by
$\forall x\in U:\mu _{\neg {A}}(x)=n(\mu _{A}(x)).$
• The triple of fuzzy intersection, union and complement form a De Morgan Triplet. That is, De Morgan's laws extend to this triple.
Examples for fuzzy intersection/union pairs with standard negator can be derived from samples provided in the article about t-norms.
The fuzzy intersection is not idempotent in general, because the standard t-norm min is the only one which has this property. Indeed, if the arithmetic multiplication is used as the t-norm, the resulting fuzzy intersection operation is not idempotent. That is, iteratively taking the intersection of a fuzzy set with itself is not trivial. It instead defines the m-th power of a fuzzy set, which can be canonically generalized for non-integer exponents in the following way:
• For any fuzzy set $A$ and $\nu \in \mathbb {R} ^{+}$ the ν-th power of $A$ is defined by the membership function:
$\forall x\in U:\mu _{A^{\nu }}(x)=\mu _{A}(x)^{\nu }.$
The case of exponent two is special enough to be given a name.
• For any fuzzy set $A$ the concentration $CON(A)=A^{2}$ is defined
$\forall x\in U:\mu _{CON(A)}(x)=\mu _{A^{2}}(x)=\mu _{A}(x)^{2}.$
Taking $0^{0}=1$, we have $A^{0}=U$ and $A^{1}=A.$
• Given fuzzy sets $A,B$, the fuzzy set difference $A\setminus B$, also denoted $A-B$, may be defined straightforwardly via the membership function:
$\forall x\in U:\mu _{A\setminus {B}}(x)=t(\mu _{A}(x),n(\mu _{B}(x))),$
which means $A\setminus B=A\cap \neg {B}$, e. g.:
$\forall x\in U:\mu _{A\setminus {B}}(x)=\min(\mu _{A}(x),1-\mu _{B}(x)).$[7]
Another proposal for a set difference could be:
$\forall x\in U:\mu _{A-{B}}(x)=\mu _{A}(x)-t(\mu _{A}(x),\mu _{B}(x)).$[7]
• Proposals for symmetric fuzzy set differences have been made by Dubois and Prade (1980), either by taking the absolute value, giving
$\forall x\in U:\mu _{A\triangle B}(x)=|\mu _{A}(x)-\mu _{B}(x)|,$
or by using a combination of just max, min, and standard negation, giving
$\forall x\in U:\mu _{A\triangle B}(x)=\max(\min(\mu _{A}(x),1-\mu _{B}(x)),\min(\mu _{B}(x),1-\mu _{A}(x))).$[7]
Axioms for definition of generalized symmetric differences analogous to those for t-norms, t-conorms, and negators have been proposed by Vemur et al. (2014) with predecessors by Alsina et al. (2005) and Bedregal et al. (2009).[7]
• In contrast to crisp sets, averaging operations can also be defined for fuzzy sets.
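The operations above can be sketched for finite fuzzy sets stored as dicts that list every element of the same universe; min/max is the default t-norm/s-norm pair, and the grades are hypothetical.

```python
def complement(A):                 # standard negator 1 - m (dict must cover all of U)
    return {x: 1 - m for x, m in A.items()}

def intersect(A, B, t=min):        # pointwise t-norm
    return {x: t(A[x], B[x]) for x in A}

def union(A, B, s=max):            # pointwise s-norm (t-conorm)
    return {x: s(A[x], B[x]) for x in A}

def difference(A, B, t=min):       # A \ B = A ∩ ¬B
    return intersect(A, complement(B), t)

A = {"a": 0.3, "b": 0.8}
B = {"a": 0.6, "b": 0.5}

assert intersect(A, B) == {"a": 0.3, "b": 0.5}
assert union(A, B) == {"a": 0.6, "b": 0.8}
assert difference(A, B) == {"a": 0.3, "b": 0.5}

# min is the only idempotent t-norm; with the product t-norm,
# intersecting a set with itself yields its concentration A^2 instead.
prod = lambda p, q: p * q
assert intersect(A, A) == A                   # min(m, m) = m
assert intersect(A, A, t=prod) != A           # m*m < m for 0 < m < 1
```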
Disjoint fuzzy sets
In contrast to the general ambiguity of intersection and union operations, there is clearness for disjoint fuzzy sets: Two fuzzy sets $A,B$ are disjoint iff
$\forall x\in U:\mu _{A}(x)=0\lor \mu _{B}(x)=0$
which is equivalent to
$\nexists x\in U:\mu _{A}(x)>0\land \mu _{B}(x)>0$
and also equivalent to
$\forall x\in U:\min(\mu _{A}(x),\mu _{B}(x))=0$
We keep in mind that min/max is a t/s-norm pair, and any other will work here as well.
Fuzzy sets are disjoint if and only if their supports are disjoint according to the standard definition for crisp sets.
For disjoint fuzzy sets $A,B$ any intersection will give ∅, and any union will give the same result, which is denoted as
$A\,{\dot {\cup }}\,B=A\cup B$
with its membership function given by
$\forall x\in U:\mu _{A{\dot {\cup }}B}(x)=\mu _{A}(x)+\mu _{B}(x)$
Note that only one of both summands is greater than zero.
For disjoint fuzzy sets $A,B$ the following holds true:
$\operatorname {Supp} (A\,{\dot {\cup }}\,B)=\operatorname {Supp} (A)\cup \operatorname {Supp} (B)$
This can be generalized to finite families of fuzzy sets as follows: Given a family $A=(A_{i})_{i\in I}$ of fuzzy sets with index set I (e.g. I = {1,2,3,...,n}). This family is (pairwise) disjoint iff
${\text{for all }}x\in U{\text{ there exists at most one }}i\in I{\text{ such that }}\mu _{A_{i}}(x)>0.$
A family of fuzzy sets $A=(A_{i})_{i\in I}$ is disjoint, iff the family of underlying supports $\operatorname {Supp} \circ A=(\operatorname {Supp} (A_{i}))_{i\in I}$ is disjoint in the standard sense for families of crisp sets.
Independent of the t/s-norm pair, intersection of a disjoint family of fuzzy sets will give ∅ again, while the union has no ambiguity:
${\dot {\bigcup \limits _{i\in I}}}\,A_{i}=\bigcup _{i\in I}A_{i}$
with its membership function given by
$\forall x\in U:\mu _{{\dot {\bigcup \limits _{i\in I}}}A_{i}}(x)=\sum _{i\in I}\mu _{A_{i}}(x)$
Again only one of the summands is greater than zero.
For disjoint families of fuzzy sets $A=(A_{i})_{i\in I}$ the following holds true:
$\operatorname {Supp} \left({\dot {\bigcup \limits _{i\in I}}}\,A_{i}\right)=\bigcup \limits _{i\in I}\operatorname {Supp} (A_{i})$
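Disjointness and the unambiguous disjoint union can be sketched via supports; since every s-norm satisfies s(a, 0) = a, all of them agree with the pointwise sum on disjoint sets. The grades are hypothetical.

```python
def support(A):
    return {x for x, m in A.items() if m > 0}

def disjoint(A, B):
    return support(A).isdisjoint(support(B))

def disjoint_union(A, B):
    assert disjoint(A, B)   # otherwise both grades are positive at some x
    # at each x at most one summand is nonzero, so + matches every s-norm
    return {x: A.get(x, 0.0) + B.get(x, 0.0) for x in A.keys() | B.keys()}

A = {"a": 0.4, "b": 0.0}
B = {"b": 0.7, "c": 0.2}
print(disjoint(A, B))   # True: supports {a} and {b, c} do not meet
D = disjoint_union(A, B)
print(support(D) == support(A) | support(B))  # True
```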
Scalar cardinality
For a fuzzy set $A$ with finite support $\operatorname {Supp} (A)$ (i.e. a "finite fuzzy set"), its cardinality (aka scalar cardinality or sigma-count) is given by
$\operatorname {Card} (A)=\operatorname {sc} (A)=|A|=\sum _{x\in U}\mu _{A}(x)$.
In the case that U itself is a finite set, the relative cardinality is given by
$\operatorname {RelCard} (A)=\|A\|=\operatorname {sc} (A)/|U|=|A|/|U|$.
This can be generalized for the divisor to be a non-empty fuzzy set: For fuzzy sets $A,G$ with G ≠ ∅, we can define the relative cardinality by:
$\operatorname {RelCard} (A,G)=\operatorname {sc} (A|G)=\operatorname {sc} (A\cap {G})/\operatorname {sc} (G)$,
which looks very similar to the expression for conditional probability. Note:
• $\operatorname {sc} (G)>0$ here.
• The result may depend on the specific intersection (t-norm) chosen.
• For $G=U$ the result is unambiguous and resembles the prior definition.
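For a finite fuzzy set, the sigma-count and relative cardinality reduce to a sum and a ratio; the grades and universe size are hypothetical.

```python
A = {"a": 0.5, "b": 1.0, "c": 0.25}

def card(A):                        # sigma-count
    return sum(A.values())

def rel_card(A, universe_size):
    return card(A) / universe_size

print(card(A))         # 1.75
print(rel_card(A, 4))  # 0.4375, for a universe of 4 elements
```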
Distance and similarity
For any fuzzy set $A$ the membership function $\mu _{A}:U\to [0,1]$ can be regarded as a family $\mu _{A}=(\mu _{A}(x))_{x\in U}\in [0,1]^{U}$. The latter is a metric space with several metrics $d$ known. A metric can be derived from a norm (vector norm) $\|\,\|$ via
$d(\alpha ,\beta )=\|\alpha -\beta \|$.
For instance, if $U$ is finite, i.e. $U=\{x_{1},x_{2},...x_{n}\}$, such a metric may be defined by:
$d(\alpha ,\beta ):=\max\{|\alpha (x_{i})-\beta (x_{i})|:i=1,...,n\}$ where $\alpha $ and $\beta $ are sequences of real numbers between 0 and 1.
For infinite $U$, the maximum can be replaced by a supremum. Because fuzzy sets are unambiguously defined by their membership function, this metric can be used to measure distances between fuzzy sets on the same universe:
$d(A,B):=d(\mu _{A},\mu _{B})$,
which becomes in the above sample:
$d(A,B)=\max\{|\mu _{A}(x_{i})-\mu _{B}(x_{i})|:i=1,...,n\}$.
Again for infinite $U$ the maximum must be replaced by a supremum. Other distances (like the canonical 2-norm) may diverge, if infinite fuzzy sets are too different, e.g., $\varnothing $ and $U$.
Similarity measures (here denoted by $S$) may then be derived from the distance, e.g. after a proposal by Koczy:
$S=1/(1+d(A,B))$ if $d(A,B)$ is finite, $0$ else,
or after Williams and Steele:
$S=\exp(-\alpha {d(A,B)})$ if $d(A,B)$ is finite, $0$ else
where $\alpha >0$ is a steepness parameter and $\exp(x)=e^{x}$.[6]
Another definition for interval valued (rather 'fuzzy') similarity measures $\zeta $ is provided by Beg and Ashraf as well.[6]
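The sup-metric and the two similarity proposals above can be sketched for finite fuzzy sets; the universe, the grades, and the steepness parameter alpha are hypothetical.

```python
import math

def chebyshev(A, B, universe):
    """Sup (here: max) distance between the membership functions."""
    return max(abs(A.get(x, 0.0) - B.get(x, 0.0)) for x in universe)

def koczy_similarity(A, B, universe):
    return 1 / (1 + chebyshev(A, B, universe))

def williams_steele(A, B, universe, alpha=1.0):
    return math.exp(-alpha * chebyshev(A, B, universe))

A = {"a": 0.25, "b": 0.9}
B = {"a": 0.75, "b": 0.9}
U = ["a", "b"]
d = chebyshev(A, B, U)              # 0.5
print(koczy_similarity(A, B, U))    # 1/(1 + 0.5), about 0.667
print(williams_steele(A, B, U))     # exp(-0.5), about 0.607
```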
L-fuzzy sets
Sometimes, more general variants of the notion of fuzzy set are used, with membership functions taking values in a (fixed or variable) algebra or structure $L$ of a given kind; usually it is required that $L$ be at least a poset or lattice. These are usually called L-fuzzy sets, to distinguish them from those valued over the unit interval. The usual membership functions with values in [0, 1] are then called [0, 1]-valued membership functions. These kinds of generalizations were first considered in 1967 by Joseph Goguen, who was a student of Zadeh.[8] A classical corollary may be indicating truth and membership values by {f, t} instead of {0, 1}.
An extension of fuzzy sets has been provided by Atanassov. An intuitionistic fuzzy set (IFS) $A$ is characterized by two functions:
1. $\mu _{A}(x)$ – degree of membership of x
2. $\nu _{A}(x)$ – degree of non-membership of x
with functions $\mu _{A},\nu _{A}:U\to [0,1]$ with $\forall x\in U:\mu _{A}(x)+\nu _{A}(x)\leq 1$.
This resembles a situation like some person denoted by $x$ voting
• for a proposal $A$: ($\mu _{A}(x)=1,\nu _{A}(x)=0$),
• against it: ($\mu _{A}(x)=0,\nu _{A}(x)=1$),
• or abstaining from voting: ($\mu _{A}(x)=\nu _{A}(x)=0$).
After all, we have a percentage of approvals, a percentage of denials, and a percentage of abstentions.
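The voting interpretation can be sketched directly: an intuitionistic fuzzy set pairs a membership with a non-membership grade under $\mu +\nu \leq 1$, and the slack $1-\mu -\nu $ is the abstention (hesitation) rate. The vote tallies below are hypothetical.

```python
# (mu, nu) pairs: fraction for, fraction against; the remainder abstained.
votes = {
    "proposal1": (0.5, 0.25),   # 50% for, 25% against, 25% abstained
    "proposal2": (0.0, 1.0),    # unanimously rejected
}

def hesitation(mu, nu):
    assert 0 <= mu + nu <= 1     # the IFS constraint
    return 1 - mu - nu

print(hesitation(*votes["proposal1"]))  # 0.25
print(hesitation(*votes["proposal2"]))  # 0.0
```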
For this situation, special "intuitive fuzzy" negators, t- and s-norms can be defined. With $D^{*}=\{(\alpha ,\beta )\in [0,1]^{2}:\alpha +\beta \leq 1\}$ and by combining both functions to $(\mu _{A},\nu _{A}):U\to D^{*}$ this situation resembles a special kind of L-fuzzy sets.
Once more, this has been expanded by defining picture fuzzy sets (PFS) as follows: A PFS A is characterized by three functions mapping U to [0, 1]: $\mu _{A},\eta _{A},\nu _{A}$, "degree of positive membership", "degree of neutral membership", and "degree of negative membership" respectively and additional condition $\forall x\in U:\mu _{A}(x)+\eta _{A}(x)+\nu _{A}(x)\leq 1$ This expands the voting sample above by an additional possibility of "refusal of voting".
With $D^{*}=\{(\alpha ,\beta ,\gamma )\in [0,1]^{3}:\alpha +\beta +\gamma \leq 1\}$ and special "picture fuzzy" negators, t- and s-norms this resembles just another type of L-fuzzy sets.[9][10]
Neutrosophic fuzzy sets
The concept of IFS has been extended into two major models. The two extensions of IFS are neutrosophic fuzzy sets and Pythagorean fuzzy sets.[11]
Neutrosophic fuzzy sets were introduced by Smarandache in 1998.[12] Like IFS, neutrosophic fuzzy sets have the previous two functions: one for membership $\mu _{A}(x)$ and another for non-membership $\nu _{A}(x)$. The major difference is that neutrosophic fuzzy sets have one more function, for indeterminacy: $i_{A}(x)$. This value indicates the degree of undecidedness as to whether the entity x belongs to the set. The concept of an indeterminacy value $i_{A}(x)$ can be particularly useful when one cannot be very confident in the membership or non-membership values for item x.[13] In summary, neutrosophic fuzzy sets are associated with the following functions:
1. $\mu _{A}(x)$—degree of membership of x
2. $\nu _{A}(x)$—degree of non-membership of x
3. $i_{A}(x)$—degree of indeterminate value of x
Pythagorean fuzzy sets
The other extension of IFS is what is known as Pythagorean fuzzy sets. Pythagorean fuzzy sets are more flexible than IFSs. IFSs are based on the constraint $\mu _{A}(x)+\nu _{A}(x)\leq 1$, which can be considered as too restrictive in some occasions. This is why Yager proposed the concept of Pythagorean fuzzy sets. Such sets satisfy the constraint $\mu _{A}(x)^{2}+\nu _{A}(x)^{2}\leq 1$, which is reminiscent of the Pythagorean theorem.[14][15][16] Pythagorean fuzzy sets can be applicable to real life applications in which the previous condition of $\mu _{A}(x)+\nu _{A}(x)\leq 1$ is not valid. However, the less restrictive condition of $\mu _{A}(x)^{2}+\nu _{A}(x)^{2}\leq 1$ may be suitable in more domains.[11][13]
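The two constraints can be contrasted directly. The following sketch (illustrative values, not from the source) tests a (membership, non-membership) pair against the intuitionistic and Pythagorean conditions:

```python
# Sketch: checking which fuzzy-set model admits a given
# (membership, non-membership) pair. Values are illustrative.

def is_intuitionistic(mu, nu):
    """IFS constraint: mu + nu <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu + nu <= 1

def is_pythagorean(mu, nu):
    """Pythagorean constraint: mu^2 + nu^2 <= 1."""
    return 0 <= mu <= 1 and 0 <= nu <= 1 and mu**2 + nu**2 <= 1

# (0.8, 0.5) violates the IFS constraint (sum 1.3) but satisfies the
# Pythagorean one (0.64 + 0.25 = 0.89 <= 1).
print(is_intuitionistic(0.8, 0.5))  # False
print(is_pythagorean(0.8, 0.5))     # True
```

Every intuitionistic pair is also Pythagorean, since $\mu +\nu \leq 1$ implies $\mu ^{2}+\nu ^{2}\leq 1$ on $[0,1]^{2}$; the converse fails, which is the sense in which Pythagorean fuzzy sets are "more flexible".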
Fuzzy logic
Main article: Fuzzy logic
As an extension of the case of multi-valued logic, valuations ($\mu :{\mathit {V}}_{o}\to {\mathit {W}}$) of propositional variables (${\mathit {V}}_{o}$) into a set of membership degrees (${\mathit {W}}$) can be thought of as membership functions mapping predicates into fuzzy sets (or more formally, into an ordered set of fuzzy pairs, called a fuzzy relation). With these valuations, many-valued logic can be extended to allow for fuzzy premises from which graded conclusions may be drawn.[17]
This extension is sometimes called "fuzzy logic in the narrow sense" as opposed to "fuzzy logic in the wider sense," which originated in the engineering fields of automated control and knowledge engineering, and which encompasses many topics involving fuzzy sets and "approximated reasoning."[18]
Industrial applications of fuzzy sets in the context of "fuzzy logic in the wider sense" can be found at fuzzy logic.
Fuzzy number and fuzzy interval
A fuzzy number[19] is a fuzzy set that satisfies all the following conditions:
• A is normalised;
• A is a convex set;
• $\exists !x^{*}\in A,\mu _{A}(x^{*})=1$;
• The membership function $\mu _{A}(x)$ is at least segmentally continuous.
If these conditions are not satisfied, then A is not a fuzzy number. The core of this fuzzy number is a singleton; its location is:
$\,C(A)=x^{*}:\mu _{A}(x^{*})=1$
When the condition about the uniqueness of ${x^{*}}$ is not fulfilled, then the fuzzy set is characterised as a fuzzy interval.[19] The core of this fuzzy interval is a crisp interval with:
$\,C(A)=\left[\min\{x\in \mathbb {R} \mid \mu _{A}(x)=1\};\max\{x\in \mathbb {R} \mid \mu _{A}(x)=1\}\right]$.
Fuzzy numbers can be likened to the funfair game "guess your weight," where someone guesses the contestant's weight, with closer guesses being more correct, and where the guesser "wins" if he or she guesses near enough to the contestant's weight, with the actual weight being completely correct (mapping to 1 by the membership function).
The kernel $K(A)=\operatorname {Kern} (A)$ of a fuzzy interval $A$ is defined as the 'inner' part, excluding the 'outbound' parts where the membership value is constant ad infinitum. In other words, the kernel is the smallest subset of $\mathbb {R} $ outside of which $\mu _{A}(x)$ is constant.
However, there are other concepts of fuzzy numbers and intervals as some authors do not insist on convexity.
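As an illustration (a hypothetical example, not taken from the source), a triangular membership function satisfies all four conditions and yields a fuzzy number whose core is a singleton:

```python
# Sketch (assumed example): the triangular fuzzy number (a, b, c) has
# support (a, c), peak b, and a singleton core {b}. A trapezoidal shape
# would instead give a fuzzy interval with a crisp-interval core.

def triangular(a, b, c):
    """Membership function of the triangular fuzzy number (a, b, c)."""
    def mu(x):
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)   # rising edge
        return (c - x) / (c - b)       # falling edge
    return mu

mu = triangular(1.0, 2.0, 4.0)
print(mu(2.0))  # 1.0 -> the core C(A) is the singleton {2.0}
print(mu(3.0))  # 0.5
```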
Fuzzy categories
The use of set membership as a key component of category theory can be generalized to fuzzy sets. This approach, which began in 1968 shortly after the introduction of fuzzy set theory,[20] led to the development of Goguen categories in the 21st century.[21][22] In these categories, rather than using two valued set membership, more general intervals are used, and may be lattices as in L-fuzzy sets.[22][23]
Fuzzy relation equation
The fuzzy relation equation is an equation of the form A · R = B, where A and B are fuzzy sets, R is a fuzzy relation, and A · R stands for the composition of A with R .
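A common choice for the composition (an assumption here, since the text does not fix one) is the max–min composition, $B(y)=\max _{x}\min(A(x),R(x,y))$. The sketch below, with illustrative values, evaluates the forward direction of A · R = B under that assumption:

```python
# Sketch: max-min composition of a fuzzy set A with a fuzzy relation R.
# A maps elements to membership degrees; R maps (x, y) pairs likewise.

def max_min_composition(A, R):
    """Return B with B(y) = max over x of min(A(x), R(x, y))."""
    ys = {y for (_, y) in R}
    return {y: max(min(A[x], R.get((x, y), 0.0)) for x in A) for y in ys}

A = {"x1": 0.7, "x2": 0.4}
R = {("x1", "y1"): 0.5, ("x1", "y2"): 1.0,
     ("x2", "y1"): 0.9, ("x2", "y2"): 0.2}
B = max_min_composition(A, R)
print(B)  # {'y1': 0.5, 'y2': 0.7} (key order may vary)
```

Solving the equation in the other direction, i.e. recovering R from given A and B, is the harder inverse problem usually meant by "fuzzy relation equation".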
Entropy
A measure d of fuzziness for fuzzy sets of universe $U$ should fulfill the following conditions for all $x\in U$:
1. $d(A)=0$ if $A$ is a crisp set: $\mu _{A}(x)\in \{0,\,1\}$
2. $d(A)$ has a unique maximum iff $\forall x\in U:\mu _{A}(x)=0.5$
3. $d(A)\geq d(B)$ if $B$ is "crisper" than $A$, i.e. if $\mu _{B}(x)\leq \mu _{A}(x)$ wherever $\mu _{A}(x)\leq 0.5$, and $\mu _{B}(x)\geq \mu _{A}(x)$ wherever $\mu _{A}(x)\geq 0.5$
4. $d(\neg {A})=d(A)$
In this case $d(A)$ is called the entropy of the fuzzy set A.
For finite $U=\{x_{1},x_{2},...x_{n}\}$ the entropy of a fuzzy set $A$ is given by
$d(A)=H(A)+H(\neg {A})$,
$H(A)=-k\sum _{i=1}^{n}\mu _{A}(x_{i})\ln \mu _{A}(x_{i})$
or just
$d(A)=k\sum _{i=1}^{n}S(\mu _{A}(x_{i}))$
where $S(x)=H_{e}(x)$ is Shannon's function (natural entropy function)
$S(\alpha )=-\alpha \ln \alpha -(1-\alpha )\ln(1-\alpha ),\ \alpha \in [0,1]$
and $k$ is a constant depending on the measure unit and the logarithm base used (here we have used the natural base e). The physical interpretation of k is the Boltzmann constant kB.
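For the finite case, the entropy $d(A)=H(A)+H(\neg {A})$ can be computed directly from Shannon's function. A minimal sketch with $k=1$; it vanishes for crisp sets and is maximal when every membership equals 0.5:

```python
import math

# Sketch (k = 1): for finite U, d(A) = H(A) + H(not A), which works out to
# k * sum_i S(mu_A(x_i)) with Shannon's function S.

def shannon(a):
    """S(a) = -a ln a - (1 - a) ln(1 - a), with S(0) = S(1) = 0."""
    if a <= 0.0 or a >= 1.0:
        return 0.0
    return -a * math.log(a) - (1.0 - a) * math.log(1.0 - a)

def fuzzy_entropy(memberships, k=1.0):
    return k * sum(shannon(m) for m in memberships)

print(fuzzy_entropy([0.0, 1.0, 1.0]))       # 0.0 for a crisp set
print(round(fuzzy_entropy([0.5, 0.5]), 4))  # 1.3863 = 2 ln 2, the maximum
```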
Let $A$ be a fuzzy set with a continuous membership function (fuzzy variable). Then
$H(A)=-k\int _{-\infty }^{\infty }\operatorname {Cr} \lbrace A\geq t\rbrace \ln \operatorname {Cr} \lbrace A\geq t\rbrace \,dt$
and its entropy is
$d(A)=k\int _{-\infty }^{\infty }S(\operatorname {Cr} \lbrace A\geq t\rbrace )\,dt.$[24][25]
Extensions
There are many mathematical constructions similar to or more general than fuzzy sets. Since fuzzy sets were introduced in 1965, many new mathematical constructions and theories treating imprecision, inexactness, ambiguity, and uncertainty have been developed. Some of these constructions and theories are extensions of fuzzy set theory, while others try to mathematically model imprecision and uncertainty in a different way.[26]
See also
• Alternative set theory
• Defuzzification
• Fuzzy concept
• Fuzzy mathematics
• Fuzzy set operations
• Fuzzy subalgebra
• Interval finite element
• Linear partial information
• Multiset
• Neuro-fuzzy
• Rough fuzzy hybridization
• Rough set
• Sørensen similarity index
• Type-2 fuzzy sets and systems
• Uncertainty
References
1. L. A. Zadeh (1965) "Fuzzy sets" Archived 2015-08-13 at the Wayback Machine. Information and Control 8 (3) 338–353.
2. Klaua, D. (1965) Über einen Ansatz zur mehrwertigen Mengenlehre. Monatsb. Deutsch. Akad. Wiss. Berlin 7, 859–876. A recent in-depth analysis of this paper has been provided by Gottwald, S. (2010). "An early approach toward graded identity and graded membership in set theory". Fuzzy Sets and Systems. 161 (18): 2369–2379. doi:10.1016/j.fss.2009.12.005.
3. D. Dubois and H. Prade (1988) Fuzzy Sets and Systems. Academic Press, New York.
4. Liang, Lily R.; Lu, Shiyong; Wang, Xuena; Lu, Yi; Mandal, Vinay; Patacsil, Dorrelyn; Kumar, Deepak (2006). "FM-test: A fuzzy-set-theory-based approach to differential gene expression data analysis". BMC Bioinformatics. 7 (Suppl 4): S7. doi:10.1186/1471-2105-7-S4-S7. PMC 1780132. PMID 17217525.
5. "AAAI". Archived from the original on August 5, 2008.
6. Ismat Beg, Samina Ashraf: Similarity measures for fuzzy sets, at: Applied and Computational Mathematics, March 2009, available on Research Gate since November 23rd, 2016
7. N.R. Vemuri, A.S. Hareesh, M.S. Srinath: Set Difference and Symmetric Difference of Fuzzy Sets, in: Fuzzy Sets Theory and Applications 2014, Liptovský Ján, Slovak Republic
8. Goguen, Joseph A., 1967, "L-fuzzy sets". Journal of Mathematical Analysis and Applications 18: 145–174
9. Bui Cong Cuong, Vladik Kreinovich, Roan Thi Ngan: A classification of representable t-norm operators for picture fuzzy sets, in: Departmental Technical Reports (CS). Paper 1047, 2016
10. Tridiv Jyoti Neog, Dusmanta Kumar Sut: Complement of an Extended Fuzzy Set, in: International Journal of Computer Applications (0975–8887), Volume 29 No.3, September 2011
11. Yanase J, Triantaphyllou E (2019). "A Systematic Survey of Computer-Aided Diagnosis in Medicine: Past and Present Developments". Expert Systems with Applications. 138: 112821. doi:10.1016/j.eswa.2019.112821. S2CID 199019309.
12. Smarandache, Florentin (1998). Neutrosophy: Neutrosophic Probability, Set, and Logic: Analytic Synthesis & Synthetic Analysis. American Research Press. ISBN 978-1879585638.
13. Yanase J, Triantaphyllou E (2019). "The Seven Key Challenges for the Future of Computer-Aided Diagnosis in Medicine". International Journal of Medical Informatics. 129: 413–422. doi:10.1016/j.ijmedinf.2019.06.017. PMID 31445285. S2CID 198287435.
14. Yager, Ronald R. (June 2013). "Pythagorean fuzzy subsets". 2013 Joint IFSA World Congress and NAFIPS Annual Meeting (IFSA/NAFIPS). pp. 57–61. doi:10.1109/IFSA-NAFIPS.2013.6608375. ISBN 978-1-4799-0348-1. S2CID 36286152. {{cite book}}: |journal= ignored (help)
15. Yager, Ronald R (2013). "Pythagorean membership grades in multicriteria decision making". IEEE Transactions on Fuzzy Systems. 22 (4): 958–965. doi:10.1109/TFUZZ.2013.2278989. S2CID 37195356.
16. Yager, Ronald R. (December 2015). Properties and applications of Pythagorean fuzzy sets. Springer, Cham. pp. 119–136. ISBN 978-3-319-26302-1.
17. Siegfried Gottwald, 2001. A Treatise on Many-Valued Logics. Baldock, Hertfordshire, England: Research Studies Press Ltd., ISBN 978-0-86380-262-1
18. Zadeh, L.A. (1975) "The concept of a linguistic variable and its application to approximate reasoning," Information Sciences 8: 199–249, 301–357; 9: 43–80.
19. Zadeh, L.A. (1978) "Fuzzy sets as a basis for a theory of possibility," Fuzzy Sets and Systems 1: 3–28.
20. J. A. Goguen "Categories of fuzzy sets: applications of non-Cantorian set theory" PhD Thesis University of California, Berkeley, 1968
21. Michael Winter "Goguen Categories:A Categorical Approach to L-fuzzy Relations" 2007 Springer ISBN 9781402061639
22. Michael Winter "Representation theory of Goguen categories" Fuzzy Sets and Systems Volume 138, Issue 1, 16 August 2003, Pages 85–126
23. Goguen, J.A., "L-fuzzy sets". Journal of Mathematical Analysis and Applications 18(1):145–174, 1967
24. Xuecheng, Liu (1992). "Entropy, distance measure and similarity measure of fuzzy sets and their relations". Fuzzy Sets and Systems. 52 (3): 305–318. doi:10.1016/0165-0114(92)90239-Z.
25. Li, Xiang (2015). "Fuzzy cross-entropy". Journal of Uncertainty Analysis and Applications. 3. doi:10.1186/s40467-015-0029-5.
26. Burgin & Chunihin 1997; Kerre 2001; Deschrijver & Kerre 2003.
Bibliography
• Alkhazaleh, S. and Salleh, A.R. Fuzzy Soft Multiset Theory, Abstract and Applied Analysis, 2012, article ID 350600, 20 p.
• Atanassov, K. T. (1983) Intuitionistic fuzzy sets, VII ITKR's Session, Sofia (deposited in Central Sci.-Technical Library of Bulg. Acad. of Sci., 1697/84) (in Bulgarian)
• Atanassov, K. (1986) Intuitionistic Fuzzy Sets, Fuzzy Sets and Systems, v. 20, No. 1, pp. 87–96
• Baruah, Hemanta K. (2011) The Theory of Fuzzy Sets: Beliefs and Realities, International Journal of Energy, Information and Communications, Vol, 2, Issue 2, 1 – 22.
• Baruah, Hemanta K. (2012) An Introduction to the Theory of Imprecise Sets: the Mathematics of Partial Presence, International Journal of Computational and Mathematical Sciences, Vol. 2, No. 2, 110 – 124.
• Bezdek, J.C. (1978). "Fuzzy partitions and relations and axiomatic basis for clustering". Fuzzy Sets and Systems. 1 (2): 111–127. doi:10.1016/0165-0114(78)90012-X.
• Blizard, W.D. (1989) Real-valued Multisets and Fuzzy Sets, Fuzzy Sets and Systems, v. 33, pp. 77–97
• Brown, J.G. (1971) A Note on Fuzzy Sets, Information and Control, v. 18, pp. 32–39
• Brutoczki Kornelia: Fuzzy Logic (Diploma) – Although this script has a lot of oddities and intricacies due to its incompleteness, it may be used as a template for exercise in removing these issues.
• Burgin, M. Theory of Named Sets as a Foundational Basis for Mathematics, in Structures in Mathematical Theories, San Sebastian, 1990, pp. 417–420
• Burgin, M.; Chunihin, A. (1997). "Named Sets in the Analysis of Uncertainty". Methodological and Theoretical Problems of Mathematics and Information Sciences. Kiev: 72–85.
• Gianpiero Cattaneo and Davide Ciucci, "Heyting Wajsberg Algebras as an Abstract Environment Linking Fuzzy and Rough Sets" in J.J. Alpigini et al. (Eds.): RSCTC 2002, LNAI 2475, pp. 77–84, 2002. doi:10.1007/3-540-45813-1_10
• Chamorro-Martínez, J. et al.: A discussion on fuzzy cardinality and quantification. Some applications in image processing, SciVerse ScienceDirect: Fuzzy Sets and Systems 257 (2014) 85–101, 30 May 2013
• Chapin, E.W. (1974) Set-valued Set Theory, I, Notre Dame J. Formal Logic, v. 15, pp. 619–634
• Chapin, E.W. (1975) Set-valued Set Theory, II, Notre Dame J. Formal Logic, v. 16, pp. 255–267
• Chris Cornelis, Martine De Cock and Etienne E. Kerre, [Intuitionistic fuzzy rough sets: at the crossroads of imperfect knowledge], Expert Systems, v. 20, issue 5, pp. 260–270, 2003
• Cornelis, C., Deschrijver, C., and Kerre, E. E. (2004) Implication in intuitionistic and interval-valued fuzzy set theory: construction, classification, application, International Journal of Approximate Reasoning, v. 35, pp. 55–95
• De Cock, Martine; Bodenhofer, Ulrich; Kerre, Etienne E. (1–4 October 2000). Modelling Linguistic Expressions Using Fuzzy Relations. Proceedings of the 6th International Conference on Soft Computing. Iizuka, Japan. pp. 353–360. CiteSeerX 10.1.1.32.8117.
• Demirci, M. (1999) Genuine Sets, Fuzzy Sets and Systems, v. 105, pp. 377–384
• Deschrijver, G.; Kerre, E.E. (2003). "On the relationship between some extensions of fuzzy set theory". Fuzzy Sets and Systems. 133 (2): 227–235. doi:10.1016/S0165-0114(02)00127-6.
• Didier Dubois, Henri M. Prade, ed. (2000). Fundamentals of fuzzy sets. The Handbooks of Fuzzy Sets Series. Vol. 7. Springer. ISBN 978-0-7923-7732-0.
• Feng F. Generalized Rough Fuzzy Sets Based on Soft Sets, Soft Computing, July 2010, Volume 14, Issue 9, pp 899–911
• Gentilhomme, Y. (1968) Les ensembles flous en linguistique, Cahiers Linguistique Theoretique Appliqee, 5, pp. 47–63
• Goguen, J.A. (1967) L-fuzzy Sets, Journal Math. Analysis Appl., v. 18, pp. 145–174
• Gottwald, S. (2006). "Universes of Fuzzy Sets and Axiomatizations of Fuzzy Set Theory. Part I: Model-Based and Axiomatic Approaches". Studia Logica. 82 (2): 211–244. doi:10.1007/s11225-006-7197-8. S2CID 11931230.. Gottwald, S. (2006). "Universes of Fuzzy Sets and Axiomatizations of Fuzzy Set Theory. Part II: Category Theoretic Approaches". Studia Logica. 84: 23–50. doi:10.1007/s11225-006-9001-1. S2CID 10453751. preprint..
• Grattan-Guinness, I. (1975) Fuzzy membership mapped onto interval and many-valued quantities. Z. Math. Logik. Grundladen Math. 22, pp. 149–160.
• Grzymala-Busse, J. Learning from examples based on rough multisets, in Proceedings of the 2nd International Symposium on Methodologies for Intelligent Systems, Charlotte, NC, USA, 1987, pp. 325–332
• Gylys, R. P. (1994) Quantal sets and sheaves over quantales, Liet. Matem. Rink., v. 34, No. 1, pp. 9–31.
• Ulrich Höhle, Stephen Ernest Rodabaugh, ed. (1999). Mathematics of fuzzy sets: logic, topology, and measure theory. The Handbooks of Fuzzy Sets Series. Vol. 3. Springer. ISBN 978-0-7923-8388-8.
• Jahn, K. U. (1975) Intervall-wertige Mengen, Math.Nach. 68, pp. 115–132
• Kaufmann, Arnold. Introduction to the theory of fuzzy subsets. Vol. 2. Academic Pr, 1975.
• Kerre, E.E. (2001). "A first view on the alternatives of fuzzy set theory". In B. Reusch; K-H. Temme (eds.). Computational Intelligence in Theory and Practice. Heidelberg: Physica-Verlag. pp. 55–72. doi:10.1007/978-3-7908-1831-4_4. ISBN 978-3-7908-1357-9.
• George J. Klir; Bo Yuan (1995). Fuzzy sets and fuzzy logic: theory and applications. Prentice Hall. ISBN 978-0-13-101171-7.
• Kuzmin, V.B. (1982). "Building Group Decisions in Spaces of Strict and Fuzzy Binary Relations" (in Russian). Nauka, Moscow.
• Lake, J. (1976) Sets, fuzzy sets, multisets and functions, J. London Math. Soc., II Ser., v. 12, pp. 323–326
• Meng, D., Zhang, X. and Qin, K. Soft rough fuzzy sets and soft fuzzy rough sets, 'Computers & Mathematics with Applications', v. 62, issue 12, 2011, pp. 4635–4645
• Miyamoto, S. Fuzzy Multisets and their Generalizations, in 'Multiset Processing', LNCS 2235, pp. 225–235, 2001
• Molodtsov, O. (1999) Soft set theory – first results, Computers & Mathematics with Applications, v. 37, No. 4/5, pp. 19–31
• Moore, R.E. Interval Analysis, New York, Prentice-Hall, 1966
• Nakamura, A. (1988) Fuzzy rough sets, 'Notes on Multiple-valued Logic in Japan', v. 9, pp. 1–8
• Narinyani, A.S. Underdetermined Sets – A new datatype for knowledge representation, Preprint 232, Project VOSTOK, issue 4, Novosibirsk, Computing Center, USSR Academy of Sciences, 1980
• Pedrycz, W. Shadowed sets: representing and processing fuzzy sets, IEEE Transactions on System, Man, and Cybernetics, Part B, 28, 103–109, 1998.
• Radecki, T. Level Fuzzy Sets, 'Journal of Cybernetics', Volume 7, Issue 3–4, 1977
• Radzikowska, A.M. and Etienne E. Kerre, E.E. On L-Fuzzy Rough Sets, Artificial Intelligence and Soft Computing – ICAISC 2004, 7th International Conference, Zakopane, Poland, June 7–11, 2004, Proceedings; 01/2004
• Salii, V.N. (1965). "Binary L-relations" (PDF). Izv. Vysh. Uchebn. Zaved. Matematika (in Russian). 44 (1): 133–145.
• Ramakrishnan, T.V., and Sabu Sebastian (2010) 'A study on multi-fuzzy sets', Int. J. Appl. Math. 23, 713–721.
• Sabu Sebastian and Ramakrishnan, T. V.(2010) Multi-fuzzy sets, Int. Math. Forum 50, 2471–2476.
• Sabu Sebastian and Ramakrishnan, T. V.(2011) Multi-fuzzy sets: an extension of fuzzy sets, Fuzzy Inf.Eng. 1, 35–43.
• Sabu Sebastian and Ramakrishnan, T. V.(2011) Multi-fuzzy extensions of functions, Advance in Adaptive Data Analysis 3, 339–350.
• Sabu Sebastian and Ramakrishnan, T. V.(2011) Multi-fuzzy extension of crisp functions using bridge functions, Ann. Fuzzy Math. Inform. 2 (1), 1–8
• Sambuc, R. Fonctions φ-floues: Application a l'aide au diagnostic en pathologie thyroidienne, Ph.D. Thesis Univ. Marseille, France, 1975.
• Seising, Rudolf: The Fuzzification of Systems. The Genesis of Fuzzy Set Theory and Its Initial Applications—Developments up to the 1970s (Studies in Fuzziness and Soft Computing, Vol. 216) Berlin, New York, [et al.]: Springer 2007.
• Smith, N.J.J. (2004) Vagueness and blurry sets, 'J. of Phil. Logic', 33, pp. 165–235
• Werro, Nicolas: Fuzzy Classification of Online Customers, University of Fribourg, Switzerland, 2008, Chapter 2
• Yager, R. R. (1986) On the Theory of Bags, International Journal of General Systems, v. 13, pp. 23–37
• Yao, Y.Y., Combination of rough and fuzzy sets based on α-level sets, in: Rough Sets and Data Mining: Analysis for Imprecise Data, Lin, T.Y. and Cercone, N. (Eds.), Kluwer Academic Publishers, Boston, pp. 301–321, 1997.
• Y. Y. Yao, A comparative study of fuzzy sets and rough sets, Information Sciences, v. 109, Issue 1–4, 1998, pp. 227 – 242
• Zadeh, L. (1975) The concept of a linguistic variable and its application to approximate reasoning–I, Inform. Sci., v. 8, pp. 199–249
• Hans-Jürgen Zimmermann (2001). Fuzzy set theory—and its applications (4th ed.). Kluwer. ISBN 978-0-7923-7435-0.
Non-classical logic
Intuitionistic
• Intuitionistic logic
• Constructive analysis
• Heyting arithmetic
• Intuitionistic type theory
• Constructive set theory
Fuzzy
• Degree of truth
• Fuzzy rule
• Fuzzy set
• Fuzzy finite element
• Fuzzy set operations
Substructural
• Structural rule
• Relevance logic
• Linear logic
Paraconsistent
• Dialetheism
Description
• Ontology (computer science)
• Ontology language
Many-valued
• Three-valued
• Four-valued
• Łukasiewicz
Digital logic
• Three-state logic
• Tri-state buffer
• Four-valued
• Verilog
• IEEE 1164
• VHDL
Others
• Dynamic semantics
• Inquisitive logic
• Intermediate logic
• Modal logic
• Nonmonotonic logic
Set theory
Overview
• Set (mathematics)
Axioms
• Adjunction
• Choice
• countable
• dependent
• global
• Constructibility (V=L)
• Determinacy
• Extensionality
• Infinity
• Limitation of size
• Pairing
• Power set
• Regularity
• Union
• Martin's axiom
• Axiom schema
• replacement
• specification
Operations
• Cartesian product
• Complement (i.e. set difference)
• De Morgan's laws
• Disjoint union
• Identities
• Intersection
• Power set
• Symmetric difference
• Union
• Concepts
• Methods
• Almost
• Cardinality
• Cardinal number (large)
• Class
• Constructible universe
• Continuum hypothesis
• Diagonal argument
• Element
• ordered pair
• tuple
• Family
• Forcing
• One-to-one correspondence
• Ordinal number
• Set-builder notation
• Transfinite induction
• Venn diagram
Set types
• Amorphous
• Countable
• Empty
• Finite (hereditarily)
• Filter
• base
• subbase
• Ultrafilter
• Fuzzy
• Infinite (Dedekind-infinite)
• Recursive
• Singleton
• Subset · Superset
• Transitive
• Uncountable
• Universal
Theories
• Alternative
• Axiomatic
• Naive
• Cantor's theorem
• Zermelo
• General
• Principia Mathematica
• New Foundations
• Zermelo–Fraenkel
• von Neumann–Bernays–Gödel
• Morse–Kelley
• Kripke–Platek
• Tarski–Grothendieck
• Paradoxes
• Problems
• Russell's paradox
• Suslin's problem
• Burali-Forti paradox
Set theorists
• Paul Bernays
• Georg Cantor
• Paul Cohen
• Richard Dedekind
• Abraham Fraenkel
• Kurt Gödel
• Thomas Jech
• John von Neumann
• Willard Quine
• Bertrand Russell
• Thoralf Skolem
• Ernst Zermelo
Uncertainty theory
Uncertainty theory is a branch of mathematics based on normality, monotonicity, self-duality, countable subadditivity, and product measure axioms.
Mathematical measures of the likelihood of an event being true include probability theory, capacity, fuzzy logic, possibility, and credibility, as well as uncertainty.
Four axioms
Axiom 1. (Normality Axiom) ${\mathcal {M}}\{\Gamma \}=1{\text{ for the universal set }}\Gamma $.
Axiom 2. (Self-Duality Axiom) ${\mathcal {M}}\{\Lambda \}+{\mathcal {M}}\{\Lambda ^{c}\}=1{\text{ for any event }}\Lambda $.
Axiom 3. (Countable Subadditivity Axiom) For every countable sequence of events $\Lambda _{1},\Lambda _{2},\ldots $, we have
${\mathcal {M}}\left\{\bigcup _{i=1}^{\infty }\Lambda _{i}\right\}\leq \sum _{i=1}^{\infty }{\mathcal {M}}\{\Lambda _{i}\}$.
Axiom 4. (Product Measure Axiom) Let $(\Gamma _{k},{\mathcal {L}}_{k},{\mathcal {M}}_{k})$ be uncertainty spaces for $k=1,2,\ldots ,n$. Then the product uncertain measure ${\mathcal {M}}$ is an uncertain measure on the product σ-algebra satisfying
${\mathcal {M}}\left\{\prod _{i=1}^{n}\Lambda _{i}\right\}={\underset {1\leq i\leq n}{\operatorname {min} }}{\mathcal {M}}_{i}\{\Lambda _{i}\}$.
Principle. (Maximum Uncertainty Principle) For any event, if there are multiple reasonable values that an uncertain measure may take, then the value as close to 0.5 as possible is assigned to the event.
Uncertain variables
An uncertain variable is a measurable function ξ from an uncertainty space $(\Gamma ,L,M)$ to the set of real numbers, i.e., for any Borel set B of real numbers, the set $\{\xi \in B\}=\{\gamma \in \Gamma \mid \xi (\gamma )\in B\}$ is an event.
Uncertainty distribution
An uncertainty distribution is introduced to describe uncertain variables.
Definition: The uncertainty distribution $\Phi (x):R\rightarrow [0,1]$ of an uncertain variable ξ is defined by $\Phi (x)=M\{\xi \leq x\}$.
Theorem (Peng and Iwamura, Sufficient and Necessary Condition for Uncertainty Distribution): A function $\Phi (x):R\rightarrow [0,1]$ is an uncertainty distribution if and only if it is a monotone increasing function other than $\Phi (x)\equiv 0$ and $\Phi (x)\equiv 1$.
Independence
Definition: The uncertain variables $\xi _{1},\xi _{2},\ldots ,\xi _{m}$ are said to be independent if
$M\{\cap _{i=1}^{m}(\xi _{i}\in B_{i})\}={\mbox{min}}_{1\leq i\leq m}M\{\xi _{i}\in B_{i}\}$
for any Borel sets $B_{1},B_{2},\ldots ,B_{m}$ of real numbers.
Theorem 1: The uncertain variables $\xi _{1},\xi _{2},\ldots ,\xi _{m}$ are independent if
$M\{\cup _{i=1}^{m}(\xi _{i}\in B_{i})\}={\mbox{max}}_{1\leq i\leq m}M\{\xi _{i}\in B_{i}\}$
for any Borel sets $B_{1},B_{2},\ldots ,B_{m}$ of real numbers.
Theorem 2: Let $\xi _{1},\xi _{2},\ldots ,\xi _{m}$ be independent uncertain variables, and $f_{1},f_{2},\ldots ,f_{m}$ measurable functions. Then $f_{1}(\xi _{1}),f_{2}(\xi _{2}),\ldots ,f_{m}(\xi _{m})$ are independent uncertain variables.
Theorem 3: Let $\Phi _{i}$ be uncertainty distributions of independent uncertain variables $\xi _{i},\quad i=1,2,\ldots ,m$ respectively, and $\Phi $ the joint uncertainty distribution of uncertain vector $(\xi _{1},\xi _{2},\ldots ,\xi _{m})$. If $\xi _{1},\xi _{2},\ldots ,\xi _{m}$ are independent, then we have
$\Phi (x_{1},x_{2},\ldots ,x_{m})={\mbox{min}}_{1\leq i\leq m}\Phi _{i}(x_{i})$
for any real numbers $x_{1},x_{2},\ldots ,x_{m}$.
Operational law
Theorem: Let $\xi _{1},\xi _{2},\ldots ,\xi _{n}$ be independent uncertain variables, and $f:R^{n}\rightarrow R$ a measurable function. Then $\xi =f(\xi _{1},\xi _{2},\ldots ,\xi _{n})$ is an uncertain variable such that
${\mathcal {M}}\{\xi \in B\}={\begin{cases}{\underset {f(B_{1},B_{2},\cdots ,B_{n})\subset B}{\sup }}\;{\underset {1\leq k\leq n}{\min }}{\mathcal {M}}_{k}\{\xi _{k}\in B_{k}\},&{\text{if }}{\underset {f(B_{1},B_{2},\cdots ,B_{n})\subset B}{\sup }}\;{\underset {1\leq k\leq n}{\min }}{\mathcal {M}}_{k}\{\xi _{k}\in B_{k}\}>0.5\\1-{\underset {f(B_{1},B_{2},\cdots ,B_{n})\subset B^{c}}{\sup }}\;{\underset {1\leq k\leq n}{\min }}{\mathcal {M}}_{k}\{\xi _{k}\in B_{k}\},&{\text{if }}{\underset {f(B_{1},B_{2},\cdots ,B_{n})\subset B^{c}}{\sup }}\;{\underset {1\leq k\leq n}{\min }}{\mathcal {M}}_{k}\{\xi _{k}\in B_{k}\}>0.5\\0.5,&{\text{otherwise}}\end{cases}}$
where $B,B_{1},B_{2},\ldots ,B_{n}$ are Borel sets, and $f(B_{1},B_{2},\ldots ,B_{n})\subset B$ means $f(x_{1},x_{2},\ldots ,x_{n})\in B$ for any $x_{1}\in B_{1},x_{2}\in B_{2},\ldots ,x_{n}\in B_{n}$.
Expected Value
Definition: Let $\xi $ be an uncertain variable. Then the expected value of $\xi $ is defined by
$E[\xi ]=\int _{0}^{+\infty }M\{\xi \geq r\}dr-\int _{-\infty }^{0}M\{\xi \leq r\}dr$
provided that at least one of the two integrals is finite.
Theorem 1: Let $\xi $ be an uncertain variable with uncertainty distribution $\Phi $. If the expected value exists, then
$E[\xi ]=\int _{0}^{+\infty }(1-\Phi (x))dx-\int _{-\infty }^{0}\Phi (x)dx.$
Theorem 2: Let $\xi $ be an uncertain variable with regular uncertainty distribution $\Phi $. If the expected value exists, then
$E[\xi ]=\int _{0}^{1}\Phi ^{-1}(\alpha )d\alpha .$
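As a numerical check of Theorem 2 (a hypothetical example: a "linear" uncertain variable with $\Phi (x)=(x-a)/(b-a)$ on $[a,b]$, so $\Phi ^{-1}(\alpha )=a+\alpha (b-a)$):

```python
# Sketch (assumed example): evaluate E = integral_0^1 Phi^{-1}(alpha) d(alpha)
# by the midpoint rule; for the linear distribution this gives (a + b)/2.

def expected_value(phi_inv, n=100_000):
    """Numerically integrate Phi^{-1}(alpha) over (0, 1)."""
    return sum(phi_inv((i + 0.5) / n) for i in range(n)) / n

a, b = 1.0, 3.0
phi_inv = lambda alpha: a + alpha * (b - a)
print(round(expected_value(phi_inv), 6))  # 2.0 = (a + b)/2
```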
Theorem 3: Let $\xi $ and $\eta $ be independent uncertain variables with finite expected values. Then for any real numbers $a$ and $b$, we have
$E[a\xi +b\eta ]=aE[\xi ]+bE[\eta ].$
Variance
Definition: Let $\xi $ be an uncertain variable with finite expected value $e$. Then the variance of $\xi $ is defined by
$V[\xi ]=E[(\xi -e)^{2}].$
Theorem: If $\xi $ is an uncertain variable with finite expected value, and $a$ and $b$ are real numbers, then
$V[a\xi +b]=a^{2}V[\xi ].$
Critical value
Definition: Let $\xi $ be an uncertain variable, and $\alpha \in (0,1]$. Then
$\xi _{sup}(\alpha )=\sup\{r\mid M\{\xi \geq r\}\geq \alpha \}$
is called the α-optimistic value to $\xi $, and
$\xi _{inf}(\alpha )=\inf\{r\mid M\{\xi \leq r\}\geq \alpha \}$
is called the α-pessimistic value to $\xi $.
Theorem 1: Let $\xi $ be an uncertain variable with regular uncertainty distribution $\Phi $. Then its α-optimistic value and α-pessimistic value are
$\xi _{sup}(\alpha )=\Phi ^{-1}(1-\alpha )$,
$\xi _{inf}(\alpha )=\Phi ^{-1}(\alpha )$.
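Theorem 1 gives the critical values in closed form once $\Phi ^{-1}$ is known. A sketch for a hypothetical linear uncertain variable with $\Phi (x)=(x-1)/2$ on $[1,3]$, i.e. $\Phi ^{-1}(\alpha )=1+2\alpha $:

```python
# Sketch (assumed example): critical values of a linear uncertain variable
# via Theorem 1, using Phi^{-1}(alpha) = 1 + 2*alpha.

phi_inv = lambda alpha: 1.0 + 2.0 * alpha

def optimistic(alpha):       # alpha-optimistic value: Phi^{-1}(1 - alpha)
    return phi_inv(1.0 - alpha)

def pessimistic(alpha):      # alpha-pessimistic value: Phi^{-1}(alpha)
    return phi_inv(alpha)

print(round(optimistic(0.9), 2), round(pessimistic(0.9), 2))  # 1.2 2.8
print(round(optimistic(0.2), 2), round(pessimistic(0.2), 2))  # 2.6 1.4
```

Note that for $\alpha =0.9>0.5$ the pessimistic value exceeds the optimistic one, while for $\alpha =0.2\leq 0.5$ the order reverses, matching Theorem 2.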
Theorem 2: Let $\xi $ be an uncertain variable, and $\alpha \in (0,1]$. Then we have
• if $\alpha >0.5$, then $\xi _{inf}(\alpha )\geq \xi _{sup}(\alpha )$;
• if $\alpha \leq 0.5$, then $\xi _{inf}(\alpha )\leq \xi _{sup}(\alpha )$.
Theorem 3: Suppose that $\xi $ and $\eta $ are independent uncertain variables, and $\alpha \in (0,1]$. Then we have
$(\xi +\eta )_{sup}(\alpha )=\xi _{sup}(\alpha )+\eta _{sup}(\alpha )$,
$(\xi +\eta )_{inf}(\alpha )=\xi _{inf}(\alpha )+\eta _{inf}(\alpha )$,
$(\xi \vee \eta )_{sup}(\alpha )=\xi _{sup}(\alpha )\vee \eta _{sup}(\alpha )$,
$(\xi \vee \eta )_{inf}(\alpha )=\xi _{inf}(\alpha )\vee \eta _{inf}(\alpha )$,
$(\xi \wedge \eta )_{sup}(\alpha )=\xi _{sup}(\alpha )\wedge \eta _{sup}(\alpha )$,
$(\xi \wedge \eta )_{inf}(\alpha )=\xi _{inf}(\alpha )\wedge \eta _{inf}(\alpha )$.
Entropy
Definition: Let $\xi $ be an uncertain variable with uncertainty distribution $\Phi $. Then its entropy is defined by
$H[\xi ]=\int _{-\infty }^{+\infty }S(\Phi (x))dx$
where $S(t)=-t\ln(t)-(1-t)\ln(1-t)$.
Theorem 1(Dai and Chen): Let $\xi $ be an uncertain variable with regular uncertainty distribution $\Phi $. Then
$H[\xi ]=\int _{0}^{1}\Phi ^{-1}(\alpha )\ln {\frac {\alpha }{1-\alpha }}d\alpha .$
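As a numerical sanity check (a hypothetical example): for a linear uncertain variable with $\Phi (x)=(x-a)/(b-a)$ on $[a,b]$, the entropy integral $H[\xi ]=\int S(\Phi (x))\,dx$ evaluates to $(b-a)/2$, since $\int _{0}^{1}S(t)\,dt=1/2$.

```python
import math

# Sketch (assumed example): midpoint-rule evaluation of H = ∫ S(Phi(x)) dx
# for a linear uncertain variable on [a, b]; the exact value is (b - a)/2.

def S(t):
    if t <= 0.0 or t >= 1.0:
        return 0.0
    return -t * math.log(t) - (1.0 - t) * math.log(1.0 - t)

def entropy_linear(a, b, n=200_000):
    h = (b - a) / n
    # at the midpoint of cell i, Phi(x) = (i + 0.5)/n
    return sum(S((i + 0.5) / n) for i in range(n)) * h

print(round(entropy_linear(1.0, 3.0), 4))  # 1.0, i.e. (3 - 1)/2
```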
Theorem 2: Let $\xi $ and $\eta $ be independent uncertain variables. Then for any real numbers $a$ and $b$, we have
$H[a\xi +b\eta ]=|a|H[\xi ]+|b|H[\eta ].$
Theorem 3: Let $\xi $ be an uncertain variable with arbitrary uncertainty distribution but finite expected value $e$ and variance $\sigma ^{2}$. Then
$H[\xi ]\leq {\frac {\pi \sigma }{\sqrt {3}}}.$
Inequalities
Theorem 1(Liu, Markov Inequality): Let $\xi $ be an uncertain variable. Then for any given numbers $t>0$ and $p>0$, we have
$M\{|\xi |\geq t\}\leq {\frac {E[|\xi |^{p}]}{t^{p}}}.$
Theorem 2 (Liu, Chebyshev Inequality) Let $\xi $ be an uncertain variable whose variance $V[\xi ]$ exists. Then for any given number $t>0$, we have
$M\{|\xi -E[\xi ]|\geq t\}\leq {\frac {V[\xi ]}{t^{2}}}.$
Theorem 3 (Liu, Holder's Inequality) Let $p$ and $q$ be positive numbers with $1/p+1/q=1$, and let $\xi $ and $\eta $ be independent uncertain variables with $E[|\xi |^{p}]<\infty $ and $E[|\eta |^{q}]<\infty $. Then we have
$E[|\xi \eta |]\leq {\sqrt[{p}]{E[|\xi |^{p}]}}{\sqrt[{q}]{E[|\eta |^{q}]}}.$
Theorem 4 (Liu, Minkowski Inequality): Let $p$ be a real number with $p\geq 1$, and let $\xi $ and $\eta $ be independent uncertain variables with $E[|\xi |^{p}]<\infty $ and $E[|\eta |^{p}]<\infty $. Then we have
${\sqrt[{p}]{E[|\xi +\eta |^{p}]}}\leq {\sqrt[{p}]{E[|\xi |^{p}]}}+{\sqrt[{p}]{E[|\eta |^{p}]}}.$
Convergence concept
Definition 1: Suppose that $\xi ,\xi _{1},\xi _{2},\ldots $ are uncertain variables defined on the uncertainty space $(\Gamma ,L,M)$. The sequence $\{\xi _{i}\}$ is said to be convergent a.s. to $\xi $ if there exists an event $\Lambda $ with $M\{\Lambda \}=1$ such that
$\lim _{i\to \infty }|\xi _{i}(\gamma )-\xi (\gamma )|=0$
for every $\gamma \in \Lambda $. In that case we write $\xi _{i}\to \xi $,a.s.
Definition 2: Suppose that $\xi ,\xi _{1},\xi _{2},\ldots $ are uncertain variables. We say that the sequence $\{\xi _{i}\}$ converges in measure to $\xi $ if
$\lim _{i\to \infty }M\{|\xi _{i}-\xi |\geq \varepsilon \}=0$
for every $\varepsilon >0$.
Definition 3: Suppose that $\xi ,\xi _{1},\xi _{2},\ldots $ are uncertain variables with finite expected values. We say that the sequence $\{\xi _{i}\}$ converges in mean to $\xi $ if
$\lim _{i\to \infty }E[|\xi _{i}-\xi |]=0$.
Definition 4: Suppose that $\Phi ,\Phi _{1},\Phi _{2},\ldots $ are uncertainty distributions of uncertain variables $\xi ,\xi _{1},\xi _{2},\ldots $, respectively. We say that the sequence $\{\xi _{i}\}$ converges in distribution to $\xi $ if $\Phi _{i}\rightarrow \Phi $ at any continuity point of $\Phi $.
Theorem 1: Convergence in Mean $\Rightarrow $ Convergence in Measure $\Rightarrow $ Convergence in Distribution. However, Convergence in Mean and Convergence Almost Surely do not imply each other, nor do Convergence Almost Surely and Convergence in Distribution.
Conditional uncertainty
Definition 1: Let $(\Gamma ,L,M)$ be an uncertainty space, and $A,B\in L$. Then the conditional uncertain measure of A given B is defined by
${\mathcal {M}}\{A\vert B\}={\begin{cases}\displaystyle {\frac {{\mathcal {M}}\{A\cap B\}}{{\mathcal {M}}\{B\}}},&\displaystyle {\text{if }}{\frac {{\mathcal {M}}\{A\cap B\}}{{\mathcal {M}}\{B\}}}<0.5\\\displaystyle 1-{\frac {{\mathcal {M}}\{A^{c}\cap B\}}{{\mathcal {M}}\{B\}}},&\displaystyle {\text{if }}{\frac {{\mathcal {M}}\{A^{c}\cap B\}}{{\mathcal {M}}\{B\}}}<0.5\\0.5,&{\text{otherwise}}\end{cases}}$
${\text{provided that }}{\mathcal {M}}\{B\}>0$
Theorem 1: Let $(\Gamma ,L,M)$ be an uncertainty space, and B an event with $M\{B\}>0$. Then M{·|B} defined by Definition 1 is an uncertain measure, and $(\Gamma ,L,M\{{\mbox{·}}|B\})$ is an uncertainty space.
Definition 2: Let $\xi $ be an uncertain variable on $(\Gamma ,L,M)$. A conditional uncertain variable of $\xi $ given B is a measurable function $\xi |_{B}$ from the conditional uncertainty space $(\Gamma ,L,M\{{\mbox{·}}|B\})$ to the set of real numbers such that
$\xi |_{B}(\gamma )=\xi (\gamma ),\forall \gamma \in \Gamma $.
Definition 3: The conditional uncertainty distribution $\Phi (\cdot \vert B):\mathbb {R} \rightarrow [0,1]$ of an uncertain variable $\xi $ given B is defined by
$\Phi (x|B)=M\{\xi \leq x|B\}$
provided that $M\{B\}>0$.
Theorem 2: Let $\xi $ be an uncertain variable with regular uncertainty distribution $\Phi (x)$, and $t$ a real number with $\Phi (t)<1$. Then the conditional uncertainty distribution of $\xi $ given $\xi >t$ is
$\Phi (x\vert (t,+\infty ))={\begin{cases}0,&{\text{if }}\Phi (x)\leq \Phi (t)\\\displaystyle {\frac {\Phi (x)}{1-\Phi (t)}}\land 0.5,&{\text{if }}\Phi (t)<\Phi (x)\leq (1+\Phi (t))/2\\\displaystyle {\frac {\Phi (x)-\Phi (t)}{1-\Phi (t)}},&{\text{if }}(1+\Phi (t))/2\leq \Phi (x)\end{cases}}$
Theorem 3: Let $\xi $ be an uncertain variable with regular uncertainty distribution $\Phi (x)$, and $t$ a real number with $\Phi (t)>0$. Then the conditional uncertainty distribution of $\xi $ given $\xi \leq t$ is
$\Phi (x\vert (-\infty ,t])={\begin{cases}\displaystyle {\frac {\Phi (x)}{\Phi (t)}},&{\text{if }}\Phi (x)\leq \Phi (t)/2\\\displaystyle {\frac {\Phi (x)+\Phi (t)-1}{\Phi (t)}}\lor 0.5,&{\text{if }}\Phi (t)/2\leq \Phi (x)<\Phi (t)\\1,&{\text{if }}\Phi (t)\leq \Phi (x)\end{cases}}$
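Theorems 2 and 3 are straightforward to evaluate for any regular uncertainty distribution $\Phi $. The sketch below uses an assumed example, a linear uncertain variable L(0, 1) with $\Phi (x)=\min(\max(x,0),1)$; the function names are my own:

```python
def cond_dist_gt(Phi, t):
    """Conditional uncertainty distribution of xi given xi > t (Theorem 2).
    Requires Phi(t) < 1."""
    Pt = Phi(t)
    def F(x):
        Px = Phi(x)
        if Px <= Pt:
            return 0.0
        if Px <= (1 + Pt) / 2:
            return min(Px / (1 - Pt), 0.5)
        return (Px - Pt) / (1 - Pt)
    return F

def cond_dist_le(Phi, t):
    """Conditional uncertainty distribution of xi given xi <= t (Theorem 3).
    Requires Phi(t) > 0."""
    Pt = Phi(t)
    def F(x):
        Px = Phi(x)
        if Px <= Pt / 2:
            return Px / Pt
        if Px < Pt:
            return max((Px + Pt - 1) / Pt, 0.5)
        return 1.0
    return F

# Assumed example: linear uncertain variable L(0, 1), Phi(x) clamped to [0, 1]
linear = lambda x: max(0.0, min(1.0, x))
F_gt = cond_dist_gt(linear, 0.5)   # distribution of xi given xi > 0.5
F_le = cond_dist_le(linear, 0.5)   # distribution of xi given xi <= 0.5
```

Note how the conditional distribution given $\xi >0.5$ is flat at 0.5 on the interval (0.5, 0.75], reflecting the $\land \,0.5$ truncation in Theorem 2.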
Definition 4: Let $\xi $ be an uncertain variable. Then the conditional expected value of $\xi $ given B is defined by
$E[\xi |B]=\int _{0}^{+\infty }M\{\xi \geq r|B\}dr-\int _{-\infty }^{0}M\{\xi \leq r|B\}dr$
provided that at least one of the two integrals is finite.
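Combining Definition 4 with the conditional distribution of Theorem 2, the conditional expected value can be computed numerically. The sketch below again uses the assumed linear uncertain variable L(0, 1) and the condition $\xi >0.5$; since the variable is nonnegative, only the first integral contributes:

```python
# Numerical sketch (assumed example): E[xi | xi > 0.5] for a linear
# uncertain variable L(0, 1), combining Definition 4 with the Theorem-2
# conditional distribution. Since the distribution is continuous,
# M{xi >= r | B} = 1 - Phi(r|B), and we integrate by the midpoint rule.
Phi = lambda x: max(0.0, min(1.0, x))
t = 0.5
Pt = Phi(t)

def cond_cdf(x):
    Px = Phi(x)
    if Px <= Pt:
        return 0.0
    if Px <= (1 + Pt) / 2:
        return min(Px / (1 - Pt), 0.5)
    return (Px - Pt) / (1 - Pt)

steps = 200_000
# E[xi|B] = int_0^1 (1 - Phi(r|B)) dr  (the integrand vanishes for r > 1)
expected = sum(1.0 - cond_cdf((i + 0.5) / steps) for i in range(steps)) / steps
```

Under these assumptions the exact value is 0.6875 = 11/16, which the midpoint rule reproduces to high accuracy.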
Schauder basis
In mathematics, a Schauder basis or countable basis is similar to the usual (Hamel) basis of a vector space; the difference is that Hamel bases use linear combinations that are finite sums, while for Schauder bases they may be infinite sums. This makes Schauder bases more suitable for the analysis of infinite-dimensional topological vector spaces including Banach spaces.
Schauder bases were described by Juliusz Schauder in 1927,[1][2] although such bases were discussed earlier. For example, the Haar basis was given in 1909, and Georg Faber discussed in 1910 a basis for continuous functions on an interval, sometimes called a Faber–Schauder system.[3]
Definitions
Let V denote a topological vector space over the field F. A Schauder basis is a sequence {bn} of elements of V such that for every element v ∈ V there exists a unique sequence {αn} of scalars in F so that
$v=\sum _{n=0}^{\infty }{\alpha _{n}b_{n}}{\text{.}}$
The convergence of the infinite sum is implicitly that of the ambient topology, i.e.,
$\lim _{n\to \infty }{\sum _{k=0}^{n}\alpha _{k}b_{k}}=v{\text{,}}$
but the requirement can be relaxed to weak convergence in a normed vector space (such as a Banach space).[4] Unlike with a Hamel basis, the elements of a Schauder basis must be ordered, since the series may not converge unconditionally.
Note that some authors define Schauder bases to be countable (as above), while others use the term to include uncountable bases. In either case, the sums themselves always are countable. An uncountable Schauder basis is a linearly ordered set rather than a sequence, and each sum inherits the order of its terms from this linear ordering. They can and do arise in practice. As an example, a separable Hilbert space can only have a countable Schauder basis but a non-separable Hilbert space may have an uncountable one.
Though the definition above technically does not require a normed space, a norm is necessary to say almost anything useful about Schauder bases. The results below assume the existence of a norm.
A Schauder basis {bn}n ≥ 0 is said to be normalized when all the basis vectors have norm 1 in the Banach space V.
A sequence {xn}n ≥ 0 in V is a basic sequence if it is a Schauder basis of its closed linear span.
Two Schauder bases, {bn} in V and {cn} in W, are said to be equivalent if there exist two constants c > 0 and C such that for every natural number N ≥ 0 and all sequences {αn} of scalars,
$c\left\|\sum _{k=0}^{N}\alpha _{k}b_{k}\right\|_{V}\leq \left\|\sum _{k=0}^{N}\alpha _{k}c_{k}\right\|_{W}\leq C\left\|\sum _{k=0}^{N}\alpha _{k}b_{k}\right\|_{V}.$
A family of vectors in V is total if its linear span (the set of finite linear combinations) is dense in V. If V is a Hilbert space, an orthogonal basis is a total subset B of V such that elements in B are nonzero and pairwise orthogonal. Further, when each element in B has norm 1, then B is an orthonormal basis of V.
Properties
Let {bn} be a Schauder basis of a Banach space V over F = R or C. It is a subtle consequence of the open mapping theorem that the linear mappings {Pn} defined by
$v=\sum _{k=0}^{\infty }\alpha _{k}b_{k}\ \ {\overset {P_{n}}{\longrightarrow }}\ \ P_{n}(v)=\sum _{k=0}^{n}\alpha _{k}b_{k}$
are uniformly bounded by some constant C.[5] When C = 1, the basis is called a monotone basis. The maps {Pn} are the basis projections.
Let {b*n} denote the coordinate functionals, where b*n assigns to every vector v in V the coordinate αn of v in the above expansion. Each b*n is a bounded linear functional on V. Indeed, for every vector v in V,
$|b_{n}^{*}(v)|\;\|b_{n}\|_{V}=|\alpha _{n}|\;\|b_{n}\|_{V}=\|\alpha _{n}b_{n}\|_{V}=\|P_{n}(v)-P_{n-1}(v)\|_{V}\leq 2C\|v\|_{V}.$
These functionals {b*n} are called biorthogonal functionals associated to the basis {bn}. When the basis {bn} is normalized, the coordinate functionals {b*n} have norm ≤ 2C in the continuous dual V ′ of V.
A Banach space with a Schauder basis is necessarily separable, but the converse is false. Since every vector v in a Banach space V with a Schauder basis is the limit of Pn(v), with Pn of finite rank and uniformly bounded, such a space V satisfies the bounded approximation property.
A theorem attributed to Mazur[6] asserts that every infinite-dimensional Banach space V contains a basic sequence, i.e., there is an infinite-dimensional subspace of V that has a Schauder basis. The basis problem is the question asked by Banach, whether every separable Banach space has a Schauder basis. This was negatively answered by Per Enflo who constructed a separable Banach space failing the approximation property, thus a space without a Schauder basis.[7]
Examples
The standard unit vector bases of c0, and of ℓp for 1 ≤ p < ∞, are monotone Schauder bases. In this unit vector basis {bn}, the vector bn in V = c0 or in V = ℓp is the scalar sequence [bn, j]j where all coordinates bn, j are 0, except the nth coordinate:
$b_{n}=\{b_{n,j}\}_{j=0}^{\infty }\in V,\ \ b_{n,j}=\delta _{n,j},$
where δn, j is the Kronecker delta. The space ℓ∞ is not separable, and therefore has no Schauder basis.
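For these unit vector bases the basis projections are coordinate truncations, so norm convergence of the expansion can be checked directly. A small sketch, using as an assumed example the vector with coordinates $2^{-j}$ in ℓ1:

```python
# Assumed toy example: v = (2^-1, 2^-2, ...) in l^1. The basis projection
# P_n v keeps the first n coordinates, and the truncation error is
# ||v - P_n v||_1 = sum_{j > n} 2^-j = 2^-n -> 0, so P_n v -> v in norm,
# as the Schauder-basis expansion requires.
def tail_l1_norm(n, cutoff=200):
    # l^1 norm of v - P_n v, summed up to a numerical cutoff
    return sum(2.0 ** -j for j in range(n + 1, cutoff))

errors = [tail_l1_norm(n) for n in (1, 5, 10)]
```

The errors decay geometrically, and since each truncation can only shrink the ℓ1 norm, this example also illustrates why the unit vector basis is monotone (C = 1).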
Every orthonormal basis in a separable Hilbert space is a Schauder basis. Every countable orthonormal basis is equivalent to the standard unit vector basis in ℓ2.
The Haar system is an example of a basis for Lp([0, 1]), when 1 ≤ p < ∞.[2] When 1 < p < ∞, another example is the trigonometric system defined below. The Banach space C([0, 1]) of continuous functions on the interval [0, 1], with the supremum norm, admits a Schauder basis. The Faber–Schauder system is the most commonly used Schauder basis for C([0, 1]).[3][8]
Several bases for classical spaces were discovered before Banach's book appeared (Banach (1932)), but some other cases remained open for a long time. For example, the question of whether the disk algebra A(D) has a Schauder basis remained open for more than forty years, until Bočkarev showed in 1974 that a basis constructed from the Franklin system exists in A(D).[9] One can also prove that the periodic Franklin system[10] is a basis for a Banach space Ar isomorphic to A(D).[11] This space Ar consists of all complex continuous functions on the unit circle T whose conjugate function is also continuous. The Franklin system is another Schauder basis for C([0, 1]),[12] and it is a Schauder basis in Lp([0, 1]) when 1 ≤ p < ∞.[13] Systems derived from the Franklin system give bases in the space C1([0, 1]2) of differentiable functions on the unit square.[14] The existence of a Schauder basis in C1([0, 1]2) was a question from Banach's book.[15]
Relation to Fourier series
Let {xn} be, in the real case, the sequence of functions
$\{1,\cos(x),\sin(x),\cos(2x),\sin(2x),\cos(3x),\sin(3x),\ldots \}$
or, in the complex case,
$\left\{1,e^{ix},e^{-ix},e^{2ix},e^{-2ix},e^{3ix},e^{-3ix},\ldots \right\}.$
The sequence {xn} is called the trigonometric system. It is a Schauder basis for the space Lp([0, 2π]) for any p such that 1 < p < ∞. For p = 2, this is the content of the Riesz–Fischer theorem, and for p ≠ 2, it is a consequence of the boundedness on the space Lp([0, 2π]) of the Hilbert transform on the circle. It follows from this boundedness that the projections PN defined by
$\left\{f:x\to \sum _{k=-\infty }^{+\infty }c_{k}e^{ikx}\right\}\ {\overset {P_{N}}{\longrightarrow }}\ \left\{P_{N}f:x\to \sum _{k=-N}^{N}c_{k}e^{ikx}\right\}$
are uniformly bounded on Lp([0, 2π]) when 1 < p < ∞. This family of maps {PN} is equicontinuous and tends to the identity on the dense subset consisting of trigonometric polynomials. It follows that PNf tends to f in Lp-norm for every f ∈ Lp([0, 2π]). In other words, {xn} is a Schauder basis of Lp([0, 2π]).[16]
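This norm convergence can be watched numerically. A pure-Python sketch, using as an assumed example f(x) = x on (0, 2π), whose Fourier series is π − 2∑ sin(kx)/k:

```python
import math

# Assumed example: f(x) = x on (0, 2*pi) has Fourier series
# pi - 2*sum_{k>=1} sin(k*x)/k; its partial sums S_N converge to f
# in the L^2 norm, so the discretized L^2 error should decrease with N.
def l2_error(N, grid=2000):
    total = 0.0
    for i in range(grid):
        x = (i + 0.5) * 2.0 * math.pi / grid          # midpoint sample
        s_N = math.pi - 2.0 * sum(math.sin(k * x) / k for k in range(1, N + 1))
        total += (x - s_N) ** 2
    return math.sqrt(total * 2.0 * math.pi / grid)    # Riemann-sum L^2 norm

errors = [l2_error(N) for N in (1, 4, 16)]
```

Pointwise, the partial sums suffer the Gibbs phenomenon near the jump at the endpoints, which is one reason convergence is claimed only in the Lp norm rather than uniformly.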
However, the set {xn} is not a Schauder basis for L1([0, 2π]). This means that there are functions in L1 whose Fourier series do not converge in the L1 norm, or equivalently, that the projections PN are not uniformly bounded in L1-norm. Also, the set {xn} is not a Schauder basis for C([0, 2π]).
Bases for spaces of operators
The space K(ℓ2) of compact operators on the Hilbert space ℓ2 has a Schauder basis. For every x, y in ℓ2, let x ⊗ y denote the rank one operator v ∈ ℓ2 ↦ ⟨v, x⟩ y. If {en}n ≥ 1 is the standard orthonormal basis of ℓ2, a basis for K(ℓ2) is given by the sequence[17]
${\begin{aligned}&e_{1}\otimes e_{1},\ \ e_{1}\otimes e_{2},\;e_{2}\otimes e_{2},\;e_{2}\otimes e_{1},\ldots ,\\&e_{1}\otimes e_{n},e_{2}\otimes e_{n},\ldots ,e_{n}\otimes e_{n},e_{n}\otimes e_{n-1},\ldots ,e_{n}\otimes e_{1},\ldots \end{aligned}}$
For every n, the sequence consisting of the n2 first vectors in this basis is a suitable ordering of the family {ej ⊗ ek}, for 1 ≤ j, k ≤ n.
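This "square shell" ordering can be generated mechanically. The sketch below (the function name is my own) enumerates the index pairs (j, k) of ej ⊗ ek in the stated order:

```python
import itertools

# Enumerate the basis ordering for K(l^2): for each n, go down the n-th
# column (e_1*e_n, ..., e_n*e_n) and then back along the n-th row
# (e_n*e_{n-1}, ..., e_n*e_1). Each shell adds 2n - 1 pairs, so after
# n shells exactly n^2 pairs have been listed.
def shell_order():
    n = 0
    while True:
        n += 1
        for j in range(1, n + 1):        # down the n-th column
            yield (j, n)
        for k in range(n - 1, 0, -1):    # back along the n-th row
            yield (n, k)

first_nine = list(itertools.islice(shell_order(), 9))
```

For every n, the first n² pairs produced are exactly {(j, k) : 1 ≤ j, k ≤ n}, matching the suitable-ordering remark above.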
The preceding result can be generalized: a Banach space X with a basis has the approximation property, so the space K(X) of compact operators on X is isometrically isomorphic[18] to the injective tensor product
$X'{\widehat {\otimes }}_{\varepsilon }X\simeq {\mathcal {K}}(X).$
If X is a Banach space with a Schauder basis {en}n ≥ 1 such that the biorthogonal functionals are a basis of the dual, that is to say, a Banach space with a shrinking basis, then the space K(X) admits a basis formed by the rank one operators e*j ⊗ ek : v → e*j(v) ek, with the same ordering as before.[17] This applies in particular to every reflexive Banach space X with a Schauder basis.
On the other hand, the space B(ℓ2) has no basis, since it is non-separable. Moreover, B(ℓ2) does not have the approximation property.[19]
Unconditionality
A Schauder basis {bn} is unconditional if whenever the series $\sum \alpha _{n}b_{n}$ converges, it converges unconditionally. For a Schauder basis {bn}, this is equivalent to the existence of a constant C such that
${\Bigl \|}\sum _{k=0}^{n}\varepsilon _{k}\alpha _{k}b_{k}{\Bigr \|}_{V}\leq C{\Bigl \|}\sum _{k=0}^{n}\alpha _{k}b_{k}{\Bigr \|}_{V}$
for all natural numbers n, all scalar coefficients {αk} and all signs εk = ±1. Unconditionality is an important property since it allows one to forget about the order of summation. A Schauder basis is symmetric if it is unconditional and uniformly equivalent to all its permutations: there exists a constant C such that for every natural number n, every permutation π of the set {0, 1, ..., n}, all scalar coefficients {αk} and all signs {εk},
${\Bigl \|}\sum _{k=0}^{n}\varepsilon _{k}\alpha _{k}b_{\pi (k)}{\Bigr \|}_{V}\leq C{\Bigl \|}\sum _{k=0}^{n}\alpha _{k}b_{k}{\Bigr \|}_{V}.$
The standard bases of the sequence spaces c0 and ℓp for 1 ≤ p < ∞, as well as every orthonormal basis in a Hilbert space, are unconditional. These bases are also symmetric.
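For an orthonormal basis of a Hilbert space, both constants are exactly 1, because sign flips and permutations of the coefficients leave the ℓ2 norm unchanged. A quick numerical sanity check (the random data is an assumed toy example):

```python
import math
import random

# Sanity check of the C = 1 case: with coefficients in l^2 (an orthonormal
# basis), flipping signs or permuting the coefficients leaves the norm of a
# finite sum unchanged, so the basis is unconditional and symmetric with
# constant 1. The random coefficients are an assumed toy example.
random.seed(1)
alphas = [random.uniform(-1.0, 1.0) for _ in range(20)]
signs = [random.choice((-1.0, 1.0)) for _ in range(20)]
perm = random.sample(range(20), 20)

l2 = lambda v: math.sqrt(sum(a * a for a in v))
base = l2(alphas)
flipped = l2([s * a for s, a in zip(signs, alphas)])
permuted = l2([alphas[i] for i in perm])
```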
The trigonometric system is not an unconditional basis in Lp, except for p = 2.
The Haar system is an unconditional basis in Lp for any 1 < p < ∞. The space L1([0, 1]) has no unconditional basis.[20]
A natural question is whether every infinite-dimensional Banach space has an infinite-dimensional subspace with an unconditional basis. This was solved negatively by Timothy Gowers and Bernard Maurey in 1992.[21]
Schauder bases and duality
A basis {en}n≥0 of a Banach space X is boundedly complete if for every sequence {an}n≥0 of scalars such that the partial sums
$V_{n}=\sum _{k=0}^{n}a_{k}e_{k}$
are bounded in X, the sequence {Vn} converges in X. The unit vector basis for ℓp, 1 ≤ p < ∞, is boundedly complete. However, the unit vector basis is not boundedly complete in c0. Indeed, if an = 1 for every n, then
$\|V_{n}\|_{c_{0}}=\max _{0\leq k\leq n}|a_{k}|=1$
for every n, but the sequence {Vn} is not convergent in c0, since ||Vn+1 − Vn|| = 1 for every n.
A space X with a boundedly complete basis {en}n≥0 is isomorphic to a dual space, namely, the space X is isomorphic to the dual of the closed linear span in the dual X ′ of the biorthogonal functionals associated to the basis {en}.[22]
A basis {en}n≥0 of X is shrinking if for every bounded linear functional f on X, the sequence of non-negative numbers
$\varphi _{n}=\sup\{|f(x)|:x\in F_{n},\;\|x\|\leq 1\}$
tends to 0 when n → ∞, where Fn is the linear span of the basis vectors em for m ≥ n. The unit vector basis for ℓp, 1 < p < ∞, or for c0, is shrinking. It is not shrinking in ℓ1: if f is the bounded linear functional on ℓ1 given by
$f:x=\{x_{n}\}\in \ell ^{1}\ \rightarrow \ \sum _{n=0}^{\infty }x_{n},$
then φn ≥ f(en) = 1 for every n.
A basis {en}n ≥ 0 of X is shrinking if and only if the biorthogonal functionals {e*n}n ≥ 0 form a basis of the dual X ′.[23]
Robert C. James characterized reflexivity in Banach spaces with basis: the space X with a Schauder basis is reflexive if and only if the basis is both shrinking and boundedly complete.[24] James also proved that a space with an unconditional basis is non-reflexive if and only if it contains a subspace isomorphic to c0 or ℓ1.[25]
Related concepts
A Hamel basis is a subset B of a vector space V such that every element v ∈ V can uniquely be written as
$v=\sum _{b\in B}\alpha _{b}b$
with αb ∈ F, with the extra condition that the set
$\{b\in B\mid \alpha _{b}\neq 0\}$
is finite. This property makes Hamel bases unwieldy for infinite-dimensional Banach spaces, since a Hamel basis of an infinite-dimensional Banach space must be uncountable. (Every finite-dimensional subspace of an infinite-dimensional Banach space X has empty interior and is nowhere dense in X. By the Baire category theorem, X cannot be the union of countably many finite-dimensional subspaces, so the finite linear combinations of a countable set cannot exhaust X.[26])
See also
• Markushevich basis
• Generalized Fourier series
• Orthogonal polynomials
• Haar wavelet
• Banach space
Notes
1. see Schauder (1927).
2. Schauder, Juliusz (1928). "Eine Eigenschaft des Haarschen Orthogonalsystems". Mathematische Zeitschrift. 28: 317–320. doi:10.1007/bf01181164.
3. Faber, Georg (1910), "Über die Orthogonalfunktionen des Herrn Haar", Jahresbericht der Deutschen Mathematiker-Vereinigung (in German) 19: 104–112. ISSN 0012-0456; http://www-gdz.sub.uni-goettingen.de/cgi-bin/digbib.cgi?PPN37721857X ; http://resolver.sub.uni-goettingen.de/purl?GDZPPN002122553
4. Karlin, S. (December 1948). "Bases in Banach spaces". Duke Mathematical Journal. 15 (4): 971–985. doi:10.1215/S0012-7094-48-01587-7. ISSN 0012-7094.
5. see Theorem 4.10 in Fabian et al. (2011).
6. for an early published proof, see p. 157, C.3 in Bessaga, C. and Pełczyński, A. (1958), "On bases and unconditional convergence of series in Banach spaces", Studia Math. 17: 151–164. In the first lines of this article, Bessaga and Pełczyński write that Mazur's result appears without proof in Banach's book —to be precise, on p. 238— but they do not provide a reference containing a proof.
7. Enflo, Per (July 1973). "A counterexample to the approximation problem in Banach spaces". Acta Mathematica. 130 (1): 309–317. doi:10.1007/BF02392270.
8. see pp. 48–49 in Schauder (1927). Schauder defines there a general model for this system, of which the Faber–Schauder system used today is a special case.
9. see Bočkarev, S. V. (1974), "Existence of a basis in the space of functions analytic in the disc, and some properties of Franklin's system", (in Russian) Mat. Sb. (N.S.) 95(137): 3–18, 159. Translated in Math. USSR-Sb. 24 (1974), 1–16. The question is in Banach's book, Banach (1932) p. 238, §3.
10. See p. 161, III.D.20 in Wojtaszczyk (1991).
11. See p. 192, III.E.17 in Wojtaszczyk (1991).
12. Franklin, Philip (1928). "A set of continuous orthogonal functions". Math. Ann. 100: 522–529. doi:10.1007/bf01448860.
13. see p. 164, III.D.26 in Wojtaszczyk (1991).
14. see Ciesielski, Z (1969). "A construction of basis in C1(I2)". Studia Math. 33: 243–247. and Schonefeld, Steven (1969). "Schauder bases in spaces of differentiable functions". Bull. Amer. Math. Soc. 75 (3): 586–590. doi:10.1090/s0002-9904-1969-12249-4.
15. see p. 238, §3 in Banach (1932).
16. see p. 40, II.B.11 in Wojtaszczyk (1991).
17. see Proposition 4.25, p. 88 in Ryan (2002).
18. see Corollary 4.13, p. 80 in Ryan (2002).
19. see Szankowski, Andrzej (1981). "B(H) does not have the approximation property". Acta Math. 147: 89–108. doi:10.1007/bf02392870.
20. see p. 24 in Lindenstrauss & Tzafriri (1977).
21. Gowers, W. Timothy; Maurey, Bernard (6 May 1992). "The unconditional basic sequence problem". arXiv:math/9205204.
22. see p. 9 in Lindenstrauss & Tzafriri (1977).
23. see p. 8 in Lindenstrauss & Tzafriri (1977).
24. See James (1950) and Lindenstrauss & Tzafriri (1977, p. 9).
25. See James (1950) and Lindenstrauss & Tzafriri (1977, p. 23).
26. Carothers, N. L. (2005), A short course on Banach space theory, Cambridge University Press ISBN 0-521-60372-2
This article incorporates material from Countable basis on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
References
• Schauder, Juliusz (1927), "Zur Theorie stetiger Abbildungen in Funktionalraumen", Mathematische Zeitschrift (in German), 26: 47–65, doi:10.1007/BF01475440, hdl:10338.dmlcz/104881.
• Banach, Stefan (1932). Théorie des Opérations Linéaires [Theory of Linear Operations] (PDF). Monografie Matematyczne (in French). Vol. 1. Warszawa: Subwencji Funduszu Kultury Narodowej. Zbl 0005.20901. Archived from the original (PDF) on 11 January 2014. Retrieved 11 July 2020.
• Fabian, Marián; Habala, Petr; Hájek, Petr; Montesinos, Vicente; Zizler, Václav (2011), Banach Space Theory: The Basis for Linear and Nonlinear Analysis, CMS Books in Mathematics, Springer, ISBN 978-1-4419-7514-0
• James, Robert C. (1950). "Bases and reflexivity of Banach spaces". The Annals of Mathematics. 52 (3): 518–527. doi:10.2307/1969430. MR39915
• Lindenstrauss, Joram; Tzafriri, Lior (1977), Classical Banach Spaces I, Sequence Spaces, Ergebnisse der Mathematik und ihrer Grenzgebiete, vol. 92, Berlin: Springer-Verlag, ISBN 3-540-08072-4
• Ryan, Raymond A. (2002), Introduction to Tensor Products of Banach Spaces, Springer Monographs in Mathematics, London: Springer-Verlag, pp. xiv+225, ISBN 1-85233-437-1
• Schaefer, Helmut H. (1971), Topological vector spaces, Graduate Texts in Mathematics, vol. 3, New York: Springer-Verlag, pp. xi+294, ISBN 0-387-98726-6.
• Wojtaszczyk, Przemysław (1991), Banach spaces for analysts, Cambridge Studies in Advanced Mathematics, vol. 25, Cambridge: Cambridge University Press, pp. xiv+382, ISBN 0-521-35618-0.
• Golubov, B.I. (2001) [1994], "Faber–Schauder system", Encyclopedia of Mathematics, EMS Press
• Heil, Christopher E. (1997). "A basis theory primer" (PDF).
• Franklin system. B.I. Golubov (originator), Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Franklin_system&oldid=16655
Further reading
• Kufner, Alois (2013), Function spaces, De Gruyter Series in Nonlinear analysis and applications, vol. 14, Prague: Academia Publishing House of the Czechoslovak Academy of Sciences, de Gruyter
Unconditional convergence
In mathematics, specifically functional analysis, a series is unconditionally convergent if all reorderings of the series converge to the same value. In contrast, a series is conditionally convergent if it converges but different orderings do not all converge to that same value. Unconditional convergence is equivalent to absolute convergence in finite-dimensional vector spaces, but is a weaker property in infinite dimensions.
Definition
Let $X$ be a topological vector space. Let $I$ be an index set and $x_{i}\in X$ for all $i\in I.$
The series $\textstyle \sum _{i\in I}x_{i}$ is called unconditionally convergent to $x\in X,$ if
• the indexing set $I_{0}:=\left\{i\in I:x_{i}\neq 0\right\}$ is countable, and
• for every permutation (bijection) $\sigma :I_{0}\to I_{0}$ of $I_{0}=\left\{i_{k}\right\}_{k=1}^{\infty }$ the following relation holds: $\sum _{k=1}^{\infty }x_{\sigma \left(i_{k}\right)}=x.$
Alternative definition
Unconditional convergence is often defined in an equivalent way: A series is unconditionally convergent if for every sequence $\left(\varepsilon _{n}\right)_{n=1}^{\infty },$ with $\varepsilon _{n}\in \{-1,+1\},$ the series
$\sum _{n=1}^{\infty }\varepsilon _{n}x_{n}$
converges.
If $X$ is a Banach space, every absolutely convergent series is unconditionally convergent, but the converse implication does not hold in general. Indeed, if $X$ is an infinite-dimensional Banach space, then by the Dvoretzky–Rogers theorem there always exists an unconditionally convergent series in this space that is not absolutely convergent. However, when $X=\mathbb {R} ^{n},$ then by the Riemann series theorem, the series $ \sum _{n}x_{n}$ is unconditionally convergent if and only if it is absolutely convergent.
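The failure of unconditional convergence for a conditionally convergent real series can be made concrete. The sketch below greedily rearranges the alternating harmonic series 1 − 1/2 + 1/3 − ⋯ (whose sum is ln 2 ≈ 0.693) so that the rearrangement approaches a different, arbitrarily chosen target:

```python
# Greedy rearrangement of the alternating harmonic series: while the
# running sum is below the target, add the next unused positive term
# 1, 1/3, 1/5, ...; while above it, add the next unused negative term
# -1/2, -1/4, ... . By the Riemann series theorem this rearrangement
# converges to the chosen target instead of ln 2.
def rearranged_sum(target, n_terms):
    pos = iter(range(1, 10**9, 2))   # odd denominators (positive terms)
    neg = iter(range(2, 10**9, 2))   # even denominators (negative terms)
    s = 0.0
    for _ in range(n_terms):
        if s <= target:
            s += 1.0 / next(pos)
        else:
            s -= 1.0 / next(neg)
    return s
```

After each crossing of the target, the running sum stays within the size of the last term used, so the partial sums converge to the target as the terms shrink.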
See also
• Absolute convergence – Mode of convergence of an infinite series
• Modes of convergence (annotated index) – Annotated index of various modes of convergence
• Dvoretzky–Rogers theorem – Existence of unconditionally but not absolutely convergent series in every infinite-dimensional Banach space
• Riemann series theorem – Rearranging a conditionally convergent series can change its sum
This article incorporates material from Unconditional convergence on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.
Observational study
In fields such as epidemiology, social sciences, psychology and statistics, an observational study draws inferences from a sample to a population where the independent variable is not under the control of the researcher because of ethical concerns or logistical constraints. A common type of observational study examines the possible effect of a treatment on subjects, where the assignment of subjects into a treated group versus a control group is outside the control of the investigator.[1][2] This is in contrast with experiments, such as randomized controlled trials, where each subject is randomly assigned to a treated group or a control group. Because they lack a random assignment mechanism, observational studies naturally present difficulties for inferential analysis.
Motivation
The independent variable may be beyond the control of the investigator for a variety of reasons:
• A randomized experiment would violate ethical standards. Suppose one wanted to investigate the abortion – breast cancer hypothesis, which postulates a causal link between induced abortion and the incidence of breast cancer. In a hypothetical controlled experiment, one would start with a large subject pool of pregnant women and divide them randomly into a treatment group (receiving induced abortions) and a control group (not receiving abortions), and then conduct regular cancer screenings for women from both groups. Needless to say, such an experiment would run counter to common ethical principles. (It would also suffer from various confounds and sources of bias, e.g. it would be impossible to conduct it as a blind experiment.) The published studies investigating the abortion–breast cancer hypothesis generally start with a group of women who already have received abortions. Membership in this "treated" group is not controlled by the investigator: the group is formed after the "treatment" has been assigned.
• The investigator may simply lack the requisite influence. Suppose a scientist wants to study the public health effects of a community-wide ban on smoking in public indoor areas. In a controlled experiment, the investigator would randomly pick a set of communities to be in the treatment group. However, it is typically up to each community and/or its legislature to enact a smoking ban. The investigator can be expected to lack the political power to cause precisely those communities in the randomly selected treatment group to pass a smoking ban. In an observational study, the investigator would typically start with a treatment group consisting of those communities where a smoking ban is already in effect.
• A randomized experiment may be impractical. Suppose a researcher wants to study the suspected link between a certain medication and a very rare group of symptoms arising as a side effect. Setting aside any ethical considerations, a randomized experiment would be impractical because of the rarity of the effect. There may not be a subject pool large enough for the symptoms to be observed in at least one treated subject. An observational study would typically start with a group of symptomatic subjects and work backwards to find those who were given the medication and later developed the symptoms. Thus a subset of the treated group was determined based on the presence of symptoms, instead of by random assignment.
• Many randomized controlled trials are not broadly representative of real-world patients, and this may limit their external validity. Patients who are eligible for inclusion in a randomized controlled trial are usually younger, more likely to be male, healthier, and more likely to be treated according to guideline recommendations.[3] If and when the intervention is later adopted in routine care, a large portion of the patients who receive it may be older, with many concomitant diseases and drug therapies, although these particular patient groups will not have been studied in the initial experimental trials. An observational study that examines real-world patients in everyday routine care can complement the results from the randomized trial, making them more generally applicable to the patient population.
Types
• Case-control study: study originally developed in epidemiology, in which two existing groups differing in outcome are identified and compared on the basis of some supposed causal attribute.
• Cross-sectional study: involves data collection from a population, or a representative subset, at one specific point in time.
• Longitudinal study: correlational research study that involves repeated observations of the same variables over long periods of time. Cohort study and Panel study are particular forms of longitudinal study.
Degree of usefulness and reliability
"Although observational studies cannot be used to make definitive statements of fact about the 'safety, efficacy, or effectiveness' of a practice, they can:[4]
1. provide information on 'real world' use and practice;
2. detect signals about the benefits and risks of...[the] use [of practices] in the general population;
3. help formulate hypotheses to be tested in subsequent experiments;
4. provide part of the community-level data needed to design more informative pragmatic clinical trials; and
5. inform clinical practice."[4]
Bias and compensating methods
In all of those cases, if a randomized experiment cannot be carried out, the alternative line of investigation suffers from the problem that the decision of which subjects receive the treatment is not entirely random and thus is a potential source of bias. A major challenge in conducting observational studies is to draw inferences that are acceptably free from influences by overt biases, as well as to assess the influence of potential hidden biases. The following are a non-exhaustive set of problems especially common in observational studies.
Matching techniques bias
In lieu of experimental control, multivariate statistical techniques allow the approximation of experimental control with statistical control by using matching methods. Matching methods account for the influences of observed factors that might influence a cause-and-effect relationship. In healthcare and the social sciences, investigators may use matching to compare units that nonrandomly received the treatment and control. One common approach is to use propensity score matching in order to reduce confounding,[5] although this has recently come under criticism for exacerbating the very problems it seeks to solve.[6]
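As a minimal sketch of the idea behind matching (using purely synthetic numbers, not any published dataset), each treated unit can be paired with the control unit closest on an observed covariate — here a single covariate stands in for a full propensity score — and the treatment effect is estimated as the average outcome difference across matched pairs:

```python
# Minimal illustration of statistical control via matching:
# each treated unit is paired with the control unit nearest on an
# observed covariate, and the effect estimate is the mean outcome
# difference over matched pairs. All data below are synthetic.

treated = [  # (covariate, outcome)
    (1.0, 5.0), (2.0, 6.0), (3.0, 7.0),
]
controls = [
    (0.9, 3.1), (2.1, 4.0), (3.2, 5.1), (8.0, 12.0),
]

def match_nearest(treated, controls):
    """Pair each treated unit with the control nearest on the covariate."""
    pairs = []
    for x_t, y_t in treated:
        x_c, y_c = min(controls, key=lambda c: abs(c[0] - x_t))
        pairs.append((y_t, y_c))
    return pairs

pairs = match_nearest(treated, controls)
# Average treated-minus-control difference over matched pairs;
# the poorly matched control at covariate 8.0 is never used.
att = sum(y_t - y_c for y_t, y_c in pairs) / len(pairs)
print(round(att, 2))
```

A real propensity-score analysis would first model the probability of treatment from many covariates (e.g. with logistic regression) and match on that estimated probability; the nearest-neighbour step above is the same in spirit.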
Multiple comparison bias
Multiple comparison bias can occur when several hypotheses are tested at the same time. As the number of recorded factors increases, the likelihood increases that at least one of the recorded factors will be highly correlated with the data output simply by chance.[7]
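The mechanism can be made concrete: for m independent tests each run at significance level α, the probability of at least one false positive is 1 − (1 − α)^m, which grows quickly with m; the Bonferroni correction counters this by testing each hypothesis at α/m. A small sketch with arbitrary illustrative numbers:

```python
# Family-wise false-positive probability for m independent tests,
# each run at significance level alpha: 1 - (1 - alpha)**m.
# The Bonferroni correction divides alpha by m to cap this probability.

def familywise_error(alpha: float, m: int) -> float:
    """P(at least one false positive) across m independent tests."""
    return 1 - (1 - alpha) ** m

alpha, m = 0.05, 20
print(round(familywise_error(alpha, m), 3))      # grows quickly with m (~0.64 here)
print(round(familywise_error(alpha / m, m), 3))  # Bonferroni keeps it near alpha
```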
Omitted variable bias
An observer of an uncontrolled experiment (or process) records potential factors and the data output: the goal is to determine the effects of the factors. Sometimes the recorded factors may not be directly causing the differences in the output; there may be more important factors which were not recorded but are, in fact, causal. Also, recorded or unrecorded factors may be correlated, which may yield incorrect conclusions.[8]
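A toy, noise-free example (all values synthetic) illustrates the classic omitted-variable bias result: if the true model is y = b0 + b1·x + g·z and the confounder z is omitted, regressing y on x alone recovers b1 + g·δ, where δ is the slope of z on x — not the true b1:

```python
# Omitted-variable bias in miniature. True model:
#   y = b0 + b1*x + g*z,  with confounder z correlated with x (z = 2*x).
# Regressing y on x while omitting z recovers b1 + g*delta,
# where delta is the slope of z on x. Data are synthetic and noise-free.

def ols_slope(xs, ys):
    """Simple-regression slope: cov(x, y) / var(x)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

b0, b1, g = 1.0, 3.0, 0.5
xs = [0.0, 1.0, 2.0, 3.0]
zs = [2 * x for x in xs]                  # omitted confounder, delta = 2
ys = [b0 + b1 * x + g * z for x, z in zip(xs, zs)]

biased = ols_slope(xs, ys)                # = b1 + g*delta = 3 + 0.5*2 = 4, not 3
print(biased)
```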
Selection bias
Another difficulty with observational studies is that researchers may themselves be biased in their observational skills. This allows researchers to (either consciously or unconsciously) seek out the information they are looking for while conducting their research. For example, researchers may exaggerate the effect of one variable, downplay the effect of another, or even select subjects that fit their conclusions. This selection bias can happen at any stage of the research process. It introduces bias into the data by causing certain variables to be systematically incorrectly measured.[9]
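A minimal sketch, using made-up numbers, of how selecting subjects based on the outcome distorts an estimated group difference: here only subjects with an outcome of at least 8 are enrolled, and the observed treated-versus-control gap shrinks relative to the full population:

```python
# Selection bias in miniature: if subjects enter the study based on
# their outcome (only y >= 8 is observed), the estimated group
# difference is distorted relative to the full population.
# All numbers are synthetic and purely illustrative.

treated_y = [6.0, 8.0, 10.0, 12.0]    # population mean 9
control_y = [4.0, 6.0, 8.0, 10.0]     # population mean 7 -> true difference 2

def mean(xs):
    return sum(xs) / len(xs)

true_diff = mean(treated_y) - mean(control_y)

cutoff = 8.0                          # only high-outcome subjects enrolled
sel_t = [y for y in treated_y if y >= cutoff]
sel_c = [y for y in control_y if y >= cutoff]
observed_diff = mean(sel_t) - mean(sel_c)

print(true_diff, observed_diff)       # the selected sample understates the effect
```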
Quality
A 2014 Cochrane review concluded that observational studies produce results similar to those conducted as randomized controlled trials.[10] The review reported little evidence for significant effect differences between observational studies and randomized controlled trials, regardless of design, heterogeneity, or inclusion of studies of interventions that assessed drug effects.[10]
See also
• Correlation does not imply causation
• Observation
• Difference-in-differences
• Scientific method
References
1. "Observational study". Archived from the original on 2016-04-27. Retrieved 2008-06-25.
2. Porta M, ed. (2008). A Dictionary of Epidemiology (5th ed.). New York: Oxford University Press. ISBN 9780195314496.
3. Kennedy-Martin T, Curtis S, Faries D, Robinson S, Johnston J (November 2015). "A literature review on the representativeness of randomized controlled trial samples and implications for the external validity of trial results". Trials. 16 (1): 495. doi:10.1186/s13063-015-1023-4. PMC 4632358. PMID 26530985.
4. Nahin, Richard (June 25, 2012). "Observational Studies and Secondary Data Analyses To Assess Outcomes in Complementary and Integrative Health Care". Archived 2019-09-29 at the Wayback Machine. Senior Advisor for Scientific Coordination and Outreach, National Center for Complementary and Integrative Health.
5. Rosenbaum, Paul R. 2009. Design of Observational Studies. New York: Springer.
6. King, Gary; Nielsen, Richard (2019-05-07). "Why Propensity Scores Should Not Be Used for Matching". Political Analysis. 27 (4): 435–454. doi:10.1017/pan.2019.11. hdl:1721.1/128459. ISSN 1047-1987. S2CID 53585283.
7. Benjamini, Yoav (2010). "Simultaneous and selective inference: Current successes and future challenges". Biometrical Journal. 52 (6): 708–721. doi:10.1002/bimj.200900299. PMID 21154895. S2CID 8806192.
8. "Introductory Econometrics Chapter 18: Omitted Variable Bias". www3.wabash.edu. Retrieved 2022-07-16.
9. Hammer, Gaël P; du Prel, Jean-Baptist; Blettner, Maria (2009-10-01). "Avoiding Bias in Observational Studies". Deutsches Ärzteblatt International. 106 (41): 664–668. doi:10.3238/arztebl.2009.0664. ISSN 1866-0452. PMC 2780010. PMID 19946431.
10. Anglemyer A, Horvath HT, Bero L (April 2014). "Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials". The Cochrane Database of Systematic Reviews. 2014 (4): MR000034. doi:10.1002/14651858.MR000034.pub2. PMC 8191367. PMID 24782322.
Further reading
• Rosenbaum PR (2002). Observational Studies (2nd ed.). New York: Springer-Verlag. ISBN 0387989676.
• "NIST/SEMATECH Handbook on Engineering Statistics" at NIST
Statistics
• Outline
• Index
Descriptive statistics
Continuous data
Center
• Mean
• Arithmetic
• Arithmetic-Geometric
• Cubic
• Generalized/power
• Geometric
• Harmonic
• Heronian
• Heinz
• Lehmer
• Median
• Mode
Dispersion
• Average absolute deviation
• Coefficient of variation
• Interquartile range
• Percentile
• Range
• Standard deviation
• Variance
Shape
• Central limit theorem
• Moments
• Kurtosis
• L-moments
• Skewness
Count data
• Index of dispersion
Summary tables
• Contingency table
• Frequency distribution
• Grouped data
Dependence
• Partial correlation
• Pearson product-moment correlation
• Rank correlation
• Kendall's τ
• Spearman's ρ
• Scatter plot
Graphics
• Bar chart
• Biplot
• Box plot
• Control chart
• Correlogram
• Fan chart
• Forest plot
• Histogram
• Pie chart
• Q–Q plot
• Radar chart
• Run chart
• Scatter plot
• Stem-and-leaf display
• Violin plot
Data collection
Study design
• Effect size
• Missing data
• Optimal design
• Population
• Replication
• Sample size determination
• Statistic
• Statistical power
Survey methodology
• Sampling
• Cluster
• Stratified
• Opinion poll
• Questionnaire
• Standard error
Controlled experiments
• Blocking
• Factorial experiment
• Interaction
• Random assignment
• Randomized controlled trial
• Randomized experiment
• Scientific control
Adaptive designs
• Adaptive clinical trial
• Stochastic approximation
• Up-and-down designs
Observational studies
• Cohort study
• Cross-sectional study
• Natural experiment
• Quasi-experiment
Statistical inference
Statistical theory
• Population
• Statistic
• Probability distribution
• Sampling distribution
• Order statistic
• Empirical distribution
• Density estimation
• Statistical model
• Model specification
• Lp space
• Parameter
• location
• scale
• shape
• Parametric family
• Likelihood (monotone)
• Location–scale family
• Exponential family
• Completeness
• Sufficiency
• Statistical functional
• Bootstrap
• U
• V
• Optimal decision
• loss function
• Efficiency
• Statistical distance
• divergence
• Asymptotics
• Robustness
Frequentist inference
Point estimation
• Estimating equations
• Maximum likelihood
• Method of moments
• M-estimator
• Minimum distance
• Unbiased estimators
• Mean-unbiased minimum-variance
• Rao–Blackwellization
• Lehmann–Scheffé theorem
• Median unbiased
• Plug-in
Interval estimation
• Confidence interval
• Pivot
• Likelihood interval
• Prediction interval
• Tolerance interval
• Resampling
• Bootstrap
• Jackknife
Testing hypotheses
• 1- & 2-tails
• Power
• Uniformly most powerful test
• Permutation test
• Randomization test
• Multiple comparisons
Parametric tests
• Likelihood-ratio
• Score/Lagrange multiplier
• Wald
Specific tests
• Z-test (normal)
• Student's t-test
• F-test
Goodness of fit
• Chi-squared
• G-test
• Kolmogorov–Smirnov
• Anderson–Darling
• Lilliefors
• Jarque–Bera
• Normality (Shapiro–Wilk)
• Likelihood-ratio test
• Model selection
• Cross validation
• AIC
• BIC
Rank statistics
• Sign
• Sample median
• Signed rank (Wilcoxon)
• Hodges–Lehmann estimator
• Rank sum (Mann–Whitney)
• Nonparametric anova
• 1-way (Kruskal–Wallis)
• 2-way (Friedman)
• Ordered alternative (Jonckheere–Terpstra)
• Van der Waerden test
Bayesian inference
• Bayesian probability
• prior
• posterior
• Credible interval
• Bayes factor
• Bayesian estimator
• Maximum posterior estimator
• Correlation
• Regression analysis
Correlation
• Pearson product-moment
• Partial correlation
• Confounding variable
• Coefficient of determination
Regression analysis
• Errors and residuals
• Regression validation
• Mixed effects models
• Simultaneous equations models
• Multivariate adaptive regression splines (MARS)
Linear regression
• Simple linear regression
• Ordinary least squares
• General linear model
• Bayesian regression
Non-standard predictors
• Nonlinear regression
• Nonparametric
• Semiparametric
• Isotonic
• Robust
• Heteroscedasticity
• Homoscedasticity
Generalized linear model
• Exponential families
• Logistic (Bernoulli) / Binomial / Poisson regressions
Partition of variance
• Analysis of variance (ANOVA, anova)
• Analysis of covariance
• Multivariate ANOVA
• Degrees of freedom
Categorical / Multivariate / Time-series / Survival analysis
Categorical
• Cohen's kappa
• Contingency table
• Graphical model
• Log-linear model
• McNemar's test
• Cochran–Mantel–Haenszel statistics
Multivariate
• Regression
• Manova
• Principal components
• Canonical correlation
• Discriminant analysis
• Cluster analysis
• Classification
• Structural equation model
• Factor analysis
• Multivariate distributions
• Elliptical distributions
• Normal
Time-series
General
• Decomposition
• Trend
• Stationarity
• Seasonal adjustment
• Exponential smoothing
• Cointegration
• Structural break
• Granger causality
Specific tests
• Dickey–Fuller
• Johansen
• Q-statistic (Ljung–Box)
• Durbin–Watson
• Breusch–Godfrey
Time domain
• Autocorrelation (ACF)
• partial (PACF)
• Cross-correlation (XCF)
• ARMA model
• ARIMA model (Box–Jenkins)
• Autoregressive conditional heteroskedasticity (ARCH)
• Vector autoregression (VAR)
Frequency domain
• Spectral density estimation
• Fourier analysis
• Least-squares spectral analysis
• Wavelet
• Whittle likelihood
Survival
Survival function
• Kaplan–Meier estimator (product limit)
• Proportional hazards models
• Accelerated failure time (AFT) model
• First hitting time
Hazard function
• Nelson–Aalen estimator
Test
• Log-rank test
Applications
Biostatistics
• Bioinformatics
• Clinical trials / studies
• Epidemiology
• Medical statistics
Engineering statistics
• Chemometrics
• Methods engineering
• Probabilistic design
• Process / quality control
• Reliability
• System identification
Social statistics
• Actuarial science
• Census
• Crime statistics
• Demography
• Econometrics
• Jurimetrics
• National accounts
• Official statistics
• Population statistics
• Psychometrics
Spatial statistics
• Cartography
• Environmental statistics
• Geographic information system
• Geostatistics
• Kriging
• Category
• Mathematics portal
• Commons
• WikiProject
Clinical research and experimental design
Overview
• Clinical trial
• Trial protocols
• Adaptive clinical trial
• Academic clinical trials
• Clinical study design
Controlled study
(EBM I to II-1)
• Randomized controlled trial
• Scientific experiment
• Blind experiment
• Open-label trial
• Adaptive clinical trial
• Platform trial
Observational study
(EBM II-2 to II-3)
• Cross-sectional study vs. Longitudinal study, Ecological study
• Cohort study
• Retrospective
• Prospective
• Case–control study (Nested case–control study)
• Case series
• Case study
• Case report
Measures
Occurrence
Incidence, Cumulative incidence, Prevalence, Point prevalence, Period prevalence
Association
Risk difference, Number needed to treat, Number needed to harm, Risk ratio, Relative risk reduction, Odds ratio, Hazard ratio
Population impact
Attributable fraction among the exposed, Attributable fraction for the population, Preventable fraction among the unexposed, Preventable fraction for the population
Other
Clinical endpoint, Virulence, Infectivity, Mortality rate, Morbidity, Case fatality rate, Specificity and sensitivity, Likelihood-ratios, Pre- and post-test probability
Trial/test types
• In vitro
• In vivo
• Animal testing
• Animal testing on non-human primates
• First-in-man study
• Multicenter trial
• Seeding trial
• Vaccine trial
Analysis of clinical trials
• Risk–benefit ratio
• Systematic review
• Replication
• Meta-analysis
• Intention-to-treat analysis
Interpretation of results
• Selection bias
• Survivorship bias
• Correlation does not imply causation
• Null result
• Sex as a biological variable
• Category
• Glossary
• List of topics