Uriel Rothblum Uriel George "Uri" Rothblum (Tel Aviv, March 16, 1947 – Haifa, March 26, 2012) was an Israeli mathematician and operations researcher. From 1984 until 2012 he held the Alexander Goldberg Chair in Management Science at the Technion – Israel Institute of Technology in Haifa, Israel.[1][2] Uriel Rothblum Born(1947-03-16)March 16, 1947 Tel-Aviv, Israel. DiedMarch 26, 2012(2012-03-26) (aged 65) Haifa, Israel. CitizenshipIsrael, United States Alma mater • Tel Aviv University B.S and M.S. • Stanford Ph.D. Scientific career Fields • Mathematics • operation research • system analysis Institutions • Technion • New York University • Yale Rothblum was born in Tel Aviv to a family of Jewish immigrants from Austria.[3] He went to Tel Aviv University, where Robert Aumann became his mentor; he earned a bachelor's degree there in 1969 and a master's in 1971. He completed his doctorate in 1974 from Stanford University, in operations research, under the supervision of Arthur F. Veinott. After postdoctoral research at New York University, he joined the Yale University faculty in 1975, and moved to the Technion in 1984.[2] Rothblum became president of the Israeli Operational Research Society (ORSIS) for 2006–2008, and editor-in-chief of Mathematics of Operations Research from 2010 until his death.[2] He was elected to the 2003 class of Fellows of the Institute for Operations Research and the Management Sciences.[4] References 1. Loewy, Raphael (2012), "Uriel G. Rothblum (1947–2012)", Linear Algebra and Its Applications, 437 (12): 2997–3009, doi:10.1016/j.laa.2012.07.010, MR 2966614. 2. Golany, Boaz (2012), "Uriel G. Rothblum, March 16, 1947 – March 26, 2012", OR/MS Today. 3. "Uriel Rothblum - Biography". 4. Fellows: Alphabetical List, Institute for Operations Research and the Management Sciences, retrieved 2019-10-09 Authority control International • ISNI • VIAF National • Germany • Israel • United States Academics • DBLP • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Urmila Mahadev Urmila Mahadev is an American mathematician and theoretical computer scientist known for her work in quantum computing and quantum cryptography. Education and career Mahadev is originally from Los Angeles, where her parents are physicians. She became interested in quantum computing through a course with Leonard Adleman at the University of Southern California,[1] where she graduated in 2010.[2] She went to the University of California, Berkeley for graduate study, supported by a National Science Foundation Graduate Research Fellowship.[2] As a student of Umesh Vazirani at Berkeley, Mahadev discovered interactive proof systems that could demonstrate with high certainty, to an observer using only classical computation, that a quantum computer has correctly performed a desired quantum-computing task.[1] She completed her Ph.D. in 2018,[3] and after continued postdoctoral research at Berkeley,[1] she became an assistant professor of computing and mathematical sciences at the California Institute of Technology.[4] Recognition For her work on quantum verification, Mahadev won the Machtey Award at the Symposium on Foundations of Computer Science in 2018, and in 2021 one of the three inaugural Maryam Mirzakhani New Frontiers Prizes for early-career achievements by women mathematicians.[5][6] References 1. Klarreich, Erica (October 8, 2018), "Graduate Student Solves Quantum Verification Problem: Urmila Mahadev spent eight years in graduate school solving one of the most basic questions in quantum computation: How do you know whether a quantum computer has done anything quantum at all?", Quanta 2. Wall of Scholars, University of Southern California, retrieved 2020-09-19 3. Urmila Mahadev at the Mathematics Genealogy Project 4. Urmila Mahadev, California Institute of Technology, retrieved 2020-09-19 5. "Prizes", FOCS 2018, retrieved 2020-09-19 6. "Winners of the 2021 Breakthrough Prizes in life sciences, fundamental physics and mathematics announced", Breakthrough Prizes, September 10, 2020, retrieved 2020-09-19 Authority control: Academics • MathSciNet • Mathematics Genealogy Project • zbMATH
Urs Schreiber Urs Schreiber (born 1974) is a mathematician specializing in the connection between mathematics and theoretical physics (especially string theory) and currently working as a researcher at New York University Abu Dhabi.[1] He was previously a researcher at the Czech Academy of Sciences, Institute of Mathematics, Department for Algebra, Geometry and Mathematical Physics.[2] Education Schreiber obtained his doctorate from the University of Duisburg-Essen in 2005 with a thesis supervised by Robert Graham and titled From Loop Space Mechanics to Nonabelian Strings.[3] Work Schreiber's research fields include the mathematical foundation of quantum field theory. Schreiber is a co-creator of the nLab, a wiki for research mathematicians and physicists working in higher category theory. Selected writings • With Hisham Sati, Mathematical Foundations of Quantum Field and Perturbative String Theory, Proceedings of Symposia in Pure Mathematics, volume 83 AMS (2011) • Schreiber, Urs (2013). "Differential cohomology in a cohesive ∞-topos". arXiv:1310.7930v1 [math-ph]. Notes 1. "Center for Quantum and Topological Systems". Retrieved 2022-07-21. 2. Researchers, Czech Academy of Sciences, retrieved 2015-07-31. 3. DuEPublico References • Interview of John Baez and Urs Schreiber External links • Home page in nLab Authority control International • ISNI • VIAF National • Norway • Catalonia • Germany • Israel • United States • Netherlands Academics • Mathematics Genealogy Project • ORCID • zbMATH Other • IdRef
Ursell function In statistical mechanics, an Ursell function or connected correlation function, is a cumulant of a random variable. It can often be obtained by summing over connected Feynman diagrams (the sum over all Feynman diagrams gives the correlation functions). The Ursell function was named after Harold Ursell, who introduced it in 1927. Definition If X is a random variable, the moments sn and cumulants (same as the Ursell functions) un are functions of X related by the exponential formula: $\operatorname {E} (\exp(zX))=\sum _{n}s_{n}{\frac {z^{n}}{n!}}=\exp \left(\sum _{n}u_{n}{\frac {z^{n}}{n!}}\right)$ (where $\operatorname {E} $ is the expectation). The Ursell functions for multivariate random variables are defined analogously to the above, and in the same way as multivariate cumulants.[1] $u_{n}\left(X_{1},\ldots ,X_{n}\right)=\left.{\frac {\partial }{\partial z_{1}}}\cdots {\frac {\partial }{\partial z_{n}}}\log \operatorname {E} \left(\exp \sum z_{i}X_{i}\right)\right|_{z_{i}=0}$ The Ursell functions of a single random variable X are obtained from these by setting X = X1 = … = Xn. The first few are given by ${\begin{aligned}u_{1}(X_{1})={}&\operatorname {E} (X_{1})\\u_{2}(X_{1},X_{2})={}&\operatorname {E} (X_{1}X_{2})-\operatorname {E} (X_{1})\operatorname {E} (X_{2})\\u_{3}(X_{1},X_{2},X_{3})={}&\operatorname {E} (X_{1}X_{2}X_{3})-\operatorname {E} (X_{1})\operatorname {E} (X_{2}X_{3})-\operatorname {E} (X_{2})\operatorname {E} (X_{3}X_{1})-\operatorname {E} (X_{3})\operatorname {E} (X_{1}X_{2})+2\operatorname {E} (X_{1})\operatorname {E} (X_{2})\operatorname {E} (X_{3})\\u_{4}\left(X_{1},X_{2},X_{3},X_{4}\right)={}&\operatorname {E} (X_{1}X_{2}X_{3}X_{4})-\operatorname {E} (X_{1})\operatorname {E} (X_{2}X_{3}X_{4})-\operatorname {E} (X_{2})\operatorname {E} (X_{1}X_{3}X_{4})-\operatorname {E} (X_{3})\operatorname {E} (X_{1}X_{2}X_{4})-\operatorname {E} (X_{4})\operatorname {E} (X_{1}X_{2}X_{3})\\&-\operatorname {E} (X_{1}X_{2})\operatorname {E} (X_{3}X_{4})-\operatorname {E} (X_{1}X_{3})\operatorname {E} (X_{2}X_{4})-\operatorname {E} (X_{1}X_{4})\operatorname {E} (X_{2}X_{3})\\&+2\operatorname {E} (X_{1}X_{2})\operatorname {E} (X_{3})\operatorname {E} (X_{4})+2\operatorname {E} (X_{1}X_{3})\operatorname {E} (X_{2})\operatorname {E} (X_{4})+2\operatorname {E} (X_{1}X_{4})\operatorname {E} (X_{2})\operatorname {E} (X_{3})+2\operatorname {E} (X_{2}X_{3})\operatorname {E} (X_{1})\operatorname {E} (X_{4})\\&+2\operatorname {E} (X_{2}X_{4})\operatorname {E} (X_{1})\operatorname {E} (X_{3})+2\operatorname {E} (X_{3}X_{4})\operatorname {E} (X_{1})\operatorname {E} (X_{2})-6\operatorname {E} (X_{1})\operatorname {E} (X_{2})\operatorname {E} (X_{3})\operatorname {E} (X_{4})\end{aligned}}$ Characterization Percus (1975) showed that the Ursell functions, considered as multilinear functions of several random variables, are uniquely determined up to a constant by the fact that they vanish whenever the variables Xi can be divided into two nonempty independent sets. See also • Cumulant References 1. Shlosman, S. B. (1986). "Signs of the Ising model Ursell functions". Communications in Mathematical Physics. 102 (4): 679–686. Bibcode:1985CMaPh.102..679S. doi:10.1007/BF01221652. S2CID 122963530. • Glimm, James; Jaffe, Arthur (1987), Quantum physics (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-96476-8, MR 0887102 • Percus, J. K. (1975), "Correlation inequalities for Ising spin lattices" (PDF), Comm. Math. 
Phys., 40 (3): 283–308, Bibcode:1975CMaPh..40..283P, doi:10.1007/bf01610004, MR 0378683, S2CID 120940116 • Ursell, H. D. (1927), "The evaluation of Gibbs phase-integral for imperfect gases", Proc. Cambridge Philos. Soc., 23 (6): 685–697, Bibcode:1927PCPS...23..685U, doi:10.1017/S0305004100011191, S2CID 123023251
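The defining formulas above are easy to check numerically. The following script is an illustrative sketch (not part of the article; the helper names are ad hoc): it estimates the second and third Ursell functions from Monte Carlo samples and also illustrates Percus's characterization, namely that the functions vanish when the variables split into two nonempty independent sets.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000

def u2(x1, x2):
    # u2 = E[X1 X2] - E[X1] E[X2]
    return np.mean(x1 * x2) - np.mean(x1) * np.mean(x2)

def u3(x1, x2, x3):
    # u3 = E[X1X2X3] - E[X1]E[X2X3] - E[X2]E[X3X1] - E[X3]E[X1X2] + 2E[X1]E[X2]E[X3]
    m = np.mean
    return (m(x1 * x2 * x3)
            - m(x1) * m(x2 * x3) - m(x2) * m(x3 * x1) - m(x3) * m(x1 * x2)
            + 2 * m(x1) * m(x2) * m(x3))

# A correlated pair plus an independent third variable.
z = rng.normal(size=N)
x1 = z + rng.normal(size=N)       # correlated with x2 through the shared z
x2 = z + rng.normal(size=N)
x3 = rng.exponential(size=N)      # independent of (x1, x2)

print("u2(x1, x2):", round(u2(x1, x2), 3))          # close to Var(z) = 1, nonzero
print("u2(x1, x3):", round(u2(x1, x3), 3))          # close to 0: {x1}, {x3} independent
print("u3(x1, x2, x3):", round(u3(x1, x2, x3), 3))  # close to 0 for the same reason
```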
Ursula Hamenstädt Ursula Hamenstädt (born 15 January 1961) is a German mathematician who works as a professor at the University of Bonn.[1] Her primary research subject is differential geometry. Education and career Hamenstädt earned her PhD from the University of Bonn in 1986, under the supervision of Wilhelm Klingenberg. Her dissertation, Zur Theorie der Carnot-Caratheodory Metriken und ihren Anwendungen [The theory of Carnot–Caratheodory metrics and their applications], concerned the theory of sub-Riemannian manifolds.[2] After completing her doctorate, she became a Miller Research Fellow at the University of California, Berkeley and then an assistant professor at the California Institute of Technology before returning to Bonn as a faculty member in 1990.[1] Honors Hamenstädt was an invited speaker at the International Congress of Mathematicians in 2010.[3] In 2012 she was elected to the German Academy of Sciences Leopoldina,[4] and in the same year she became one of the inaugural fellows of the American Mathematical Society.[5] She was the Emmy Noether Lecturer of the German Mathematical Society in 2017.[6] Selected publications • Hamenstädt, Ursula (2008). "Geometry of the mapping class groups I: Boundary amenability". Inventiones Mathematicae. 175 (3): 545–609. arXiv:math/0510116. Bibcode:2009InMat.175..545H. doi:10.1007/s00222-008-0158-2. ISSN 0020-9910. S2CID 2640202. • Hamenstädt, Ursula (1989). "A new description of the Bowen–Margulis measure". Ergodic Theory and Dynamical Systems. 9 (3): 455–464. doi:10.1017/S0143385700005095. ISSN 1469-4417. • Hamenstädt, Ursula (1990). "Some regularity theorems for Carnot–Carathéodory metrics". Journal of Differential Geometry. 32 (3): 819–850. doi:10.4310/jdg/1214445536. ISSN 0022-040X. References 1. Faculty profile, University of Bonn, retrieved 18 December 2014. 2. Ursula Hamenstädt at the Mathematics Genealogy Project 3. Hamenstädt, Ursula (2010), "Actions of the mapping class group", Proceedings of the International Congress of Mathematicians. Volume II (PDF), New Delhi: Hindustan Book Agency, pp. 1002–1021, MR 2827829. 4. List of members: Prof. Dr. Ursula Hamenstädt, Leopoldina, retrieved 18 December 2014. 5. List of Fellows of the American Mathematical Society, retrieved 18 December 2014. 6. Preise und Auszeichnungen (in German), German Mathematical Society, retrieved 5 November 2018 External links • Home page Authority control International • ISNI • VIAF National • Germany • Israel • United States Academics • DBLP • Google Scholar • Leopoldina • MathSciNet • Mathematics Genealogy Project • zbMATH Other • IdRef
Urysohn's lemma In topology, Urysohn's lemma is a lemma that states that a topological space is normal if and only if any two disjoint closed subsets can be separated by a continuous function.[1] Urysohn's lemma is commonly used to construct continuous functions with various properties on normal spaces. It is widely applicable since all metric spaces and all compact Hausdorff spaces are normal. The lemma is generalised by (and usually used in the proof of) the Tietze extension theorem. The lemma is named after the mathematician Pavel Samuilovich Urysohn. Discussion Two subsets $A$ and $B$ of a topological space $X$ are said to be separated by neighbourhoods if there are neighbourhoods $U$ of $A$ and $V$ of $B$ that are disjoint. In particular $A$ and $B$ are necessarily disjoint. Two plain subsets $A$ and $B$ are said to be separated by a continuous function if there exists a continuous function $f:X\to [0,1]$ from $X$ into the unit interval $[0,1]$ such that $f(a)=0$ for all $a\in A$ and $f(b)=1$ for all $b\in B.$ Any such function is called a Urysohn function for $A$ and $B.$ In particular $A$ and $B$ are necessarily disjoint. It follows that if two subsets $A$ and $B$ are separated by a function then so are their closures. Also it follows that if two subsets $A$ and $B$ are separated by a function then $A$ and $B$ are separated by neighbourhoods. A normal space is a topological space in which any two disjoint closed sets can be separated by neighbourhoods. Urysohn's lemma states that a topological space is normal if and only if any two disjoint closed sets can be separated by a continuous function. The sets $A$ and $B$ need not be precisely separated by $f$, i.e., it is not necessary and guaranteed that $f(x)\neq 0$ and $\neq 1$ for $x$ outside $A$ and $B.$ A topological space $X$ in which every two disjoint closed subsets $A$ and $B$ are precisely separated by a continuous function is perfectly normal. Urysohn's lemma has led to the formulation of other topological properties such as the 'Tychonoff property' and 'completely Hausdorff spaces'. For example, a corollary of the lemma is that normal T1 spaces are Tychonoff. Formal statement A topological space $X$ is normal if and only if, for any two non-empty closed disjoint subsets $A$ and $B$ of $X,$ there exists a continuous map $f:X\to [0,1]$ such that $f(A)=\{0\}$ and $f(B)=\{1\}.$ Proof sketch The proof proceeds by repeatedly applying the following alternate characterization of normality. If $X$ is a normal space, $Z$ is an open subset of $X$, and $Y\subseteq Z$ is closed, then there exists an open $U$ and a closed $V$ such that $Y\subseteq U\subseteq V\subseteq Z$. Let $A$ and $B$ be disjoint closed subsets of $X$. The main idea of the proof is to repeatedly apply this characterization of normality to $A$ and $B^{\complement }$, continuing with the new sets built on every step. The sets we build are indexed by dyadic fractions. For every dyadic fraction $r\in (0,1)$, we construct an open subset $U(r)$ and a closed subset $V(r)$ of $X$ such that: • $A\subseteq U(r)$ and $V(r)\subseteq B^{\complement }$ for all $r$, • $U(r)\subseteq V(r)$ for all $r$, • For $r<s$, $V(r)\subseteq U(s)$. 
Intuitively, the sets $U(r)$ and $V(r)$ expand outwards in layers from $A$: ${\begin{array}{ccccccccccccccc}A&&&&&&&\subseteq &&&&&&&B^{\complement }\\A&&&\subseteq &&&\ U(1/2)&\subseteq &V(1/2)&&&\subseteq &&&B^{\complement }\\A&\subseteq &U(1/4)&\subseteq &V(1/4)&\subseteq &U(1/2)&\subseteq &V(1/2)&\subseteq &U(3/4)&\subseteq &V(3/4)&\subseteq &B^{\complement }\end{array}}$ This construction proceeds by mathematical induction. For the base step, we define two extra sets $U(1)=B^{\complement }$ and $V(0)=A$. Now assume that $n\geq 0$ and that the sets $U\left(k/2^{n}\right)$ and $V\left(k/2^{n}\right)$ have already been constructed for $k\in \{1,\ldots ,2^{n}-1\}$. Note that this is vacuously satisfied for $n=0$. Since $X$ is normal, for any $a\in \left\{0,1,\ldots ,2^{n}-1\right\}$, we can find an open set and a closed set such that $V\left({\frac {a}{2^{n}}}\right)\subseteq U\left({\frac {2a+1}{2^{n+1}}}\right)\subseteq V\left({\frac {2a+1}{2^{n+1}}}\right)\subseteq U\left({\frac {a+1}{2^{n}}}\right)$ The above three conditions are then verified. Once we have these sets, we define $f(x)=1$ if $x\not \in U(r)$ for any $r$; otherwise $f(x)=\inf\{r:x\in U(r)\}$ for every $x\in X$, where $\inf $ denotes the infimum. Using the fact that the dyadic rationals are dense, it is then not too hard to show that $f$ is continuous and has the property $f(A)\subseteq \{0\}$ and $f(B)\subseteq \{1\}.$ This step requires the $V(r)$ sets in order to work. The Mizar project has completely formalised and automatically checked a proof of Urysohn's lemma in the URYSOHN3 file. See also • Mollifier Notes 1. Willard 1970 Section 15. References • Willard, Stephen (2004) [1970]. General Topology. Mineola, N.Y.: Dover Publications. ISBN 978-0-486-43479-7. OCLC 115240. • Willard, Stephen (1970). General Topology. Dover Publications. ISBN 0-486-43479-6. External links • "Urysohn lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Mizar system proof: http://mizar.org/version/current/html/urysohn3.html#T20 Topology Fields • General (point-set) • Algebraic • Combinatorial • Continuum • Differential • Geometric • low-dimensional • Homology • cohomology • Set-theoretic • Digital Key concepts • Open set / Closed set • Interior • Continuity • Space • compact • Connected • Hausdorff • metric • uniform • Homotopy • homotopy group • fundamental group • Simplicial complex • CW complex • Polyhedral complex • Manifold • Bundle (mathematics) • Second-countable space • Cobordism Metrics and properties • Euler characteristic • Betti number • Winding number • Chern number • Orientability Key results • Banach fixed-point theorem • De Rham cohomology • Invariance of domain • Poincaré conjecture • Tychonoff's theorem • Urysohn's lemma • Category •  Mathematics portal • Wikibook • Wikiversity • Topics • general • algebraic • geometric • Publications
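The dyadic construction above is what makes the lemma work in an arbitrary normal space. For metric spaces, which are always normal, a Urysohn function can instead be written down in closed form as f(x) = d(x, A) / (d(x, A) + d(x, B)). The sketch below illustrates that standard closed-form function (not the dyadic construction from the proof) for two disjoint closed intervals on the real line; the sets, grid resolution and helper names are invented for the example.

```python
import numpy as np

def dist_to_set(x, pts):
    """Distance from the point x to a finite sample of a closed set."""
    return np.min(np.abs(x - pts))

def urysohn(x, A, B):
    """Closed-form Urysohn function for disjoint closed sets in a metric space:
    f(x) = d(x, A) / (d(x, A) + d(x, B)); f = 0 on A, f = 1 on B, f continuous."""
    dA, dB = dist_to_set(x, A), dist_to_set(x, B)
    return dA / (dA + dB)

# Disjoint closed subsets of the real line, represented by fine grids.
A = np.linspace(-2.0, -1.0, 201)   # the interval [-2, -1]
B = np.linspace(1.0, 2.0, 201)     # the interval [1, 2]

for x in (-1.5, -1.0, 0.0, 1.0, 1.7):
    print(f"f({x:5.2f}) = {urysohn(x, A, B):.3f}")
# Output: 0 on A, 1 on B, and intermediate values (e.g. 0.5 at x = 0) in between.
```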
Tietze extension theorem In topology, the Tietze extension theorem (also known as the Tietze–Urysohn–Brouwer extension theorem or Urysohn-Brouwer lemma[1]) states that continuous functions on a closed subset of a normal topological space can be extended to the entire space, preserving boundedness if necessary. Formal statement If $X$ is a normal space and $f:A\to \mathbb {R} $ is a continuous map from a closed subset $A$ of $X$ into the real numbers $\mathbb {R} $ carrying the standard topology, then there exists a continuous extension of $f$ to $X;$ that is, there exists a map $F:X\to \mathbb {R} $ continuous on all of $X$ with $F(a)=f(a)$ for all $a\in A.$ Moreover, $F$ may be chosen such that $\sup\{|f(a)|:a\in A\}~=~\sup\{|F(x)|:x\in X\},$ that is, if $f$ is bounded then $F$ may be chosen to be bounded (with the same bound as $f$). History L. E. J. Brouwer and Henri Lebesgue proved a special case of the theorem, when $X$ is a finite-dimensional real vector space. Heinrich Tietze extended it to all metric spaces, and Pavel Urysohn proved the theorem as stated here, for normal topological spaces.[2][3] Equivalent statements This theorem is equivalent to Urysohn's lemma (which is also equivalent to the normality of the space) and is widely applicable, since all metric spaces and all compact Hausdorff spaces are normal. It can be generalized by replacing $\mathbb {R} $ with $\mathbb {R} ^{J}$ for some indexing set $J,$ any retract of $\mathbb {R} ^{J},$ or any normal absolute retract whatsoever. Variations If $X$ is a metric space, $A$ a non-empty subset of $X$ and $f:A\to \mathbb {R} $ is a Lipschitz continuous function with Lipschitz constant $K,$ then $f$ can be extended to a Lipschitz continuous function $F:X\to \mathbb {R} $ with same constant $K.$ This theorem is also valid for Hölder continuous functions, that is, if $f:A\to \mathbb {R} $ is Hölder continuous function with constant less than or equal to $1,$ then $f$ can be extended to a Hölder continuous function $F:X\to \mathbb {R} $ with the same constant.[4] Another variant (in fact, generalization) of Tietze's theorem is due to H.Tong and Z. Ercan:[5] Let $A$ be a closed subset of a normal topological space $X.$ If $f:X\to \mathbb {R} $ is an upper semicontinuous function, $g:X\to \mathbb {R} $ a lower semicontinuous function, and $h:A\to \mathbb {R} $ a continuous function such that $f(x)\leq g(x)$ for each $x\in X$ and $f(a)\leq h(a)\leq g(a)$ for each $a\in A$, then there is a continuous extension $H:X\to \mathbb {R} $ of $h$ such that $f(x)\leq H(x)\leq g(x)$ for each $x\in X.$ This theorem is also valid with some additional hypothesis if $\mathbb {R} $ is replaced by a general locally solid Riesz space.[5] See also • Blumberg theorem – Any real function on R admits a continuous restriction on a dense subset of R • Hahn–Banach theorem – Theorem on extension of bounded linear functionals • Whitney extension theorem – Partial converse of Taylor's theorem References 1. "Urysohn-Brouwer lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994] 2. "Urysohn-Brouwer lemma", Encyclopedia of Mathematics, EMS Press, 2001 [1994] 3. Urysohn, Paul (1925), "Über die Mächtigkeit der zusammenhängenden Mengen", Mathematische Annalen, 94 (1): 262–295, doi:10.1007/BF01208659, hdl:10338.dmlcz/101038. 4. McShane, E. J. (1 December 1934). "Extension of range of functions". Bulletin of the American Mathematical Society. 40 (12): 837–843. doi:10.1090/S0002-9904-1934-05978-0. 5. Zafer, Ercan (1997). 
"Extension and Separation of Vector Valued Functions" (PDF). Turkish Journal of Mathematics. 21 (4): 423–430. • Munkres, James R. (2000). Topology (Second ed.). Upper Saddle River, NJ: Prentice Hall, Inc. ISBN 978-0-13-181629-9. OCLC 42683260. External links • Weisstein, Eric W. "Tietze's Extension Theorem." From MathWorld • Mizar system proof: http://mizar.org/version/current/html/tietze.html#T23 • Bonan, Edmond (1971), "Relèvements-Prolongements à valeurs dans les espaces de Fréchet", Comptes Rendus de l'Académie des Sciences, Série I, 272: 714–717.
Urysohn universal space The Urysohn universal space is a certain metric space that contains all separable metric spaces in a particularly nice manner. This mathematics concept is due to Pavel Urysohn. Not to be confused with Urysohn space. Definition A metric space (U,d) is called Urysohn universal[1] if it is separable and complete and has the following property: given any finite metric space X, any point x in X, and any isometric embedding f : X\{x} → U, there exists an isometric embedding F : X → U that extends f, i.e. such that F(y) = f(y) for all y in X\{x}. Properties If U is Urysohn universal and X is any separable metric space, then there exists an isometric embedding f:X → U. (Other spaces share this property: for instance, the space l∞ of all bounded real sequences with the supremum norm admits isometric embeddings of all separable metric spaces ("Fréchet embedding"), as does the space C[0,1] of all continuous functions [0,1]→R, again with the supremum norm, a result due to Stefan Banach.) Furthermore, every isometry between finite subsets of U extends to an isometry of U onto itself. This kind of "homogeneity" actually characterizes Urysohn universal spaces: A separable complete metric space that contains an isometric image of every separable metric space is Urysohn universal if and only if it is homogeneous in this sense. Existence and uniqueness Urysohn proved that a Urysohn universal space exists, and that any two Urysohn universal spaces are isometric. This can be seen as follows. Take $(X,d),(X',d')$, two Urysohn universal spaces. These are separable, so fix in the respective spaces countable dense subsets $(x_{n})_{n},(x'_{n})_{n}$. These must be properly infinite, so by a back-and-forth argument, one can step-wise construct partial isometries $\phi _{n}:X\to X'$ whose domain (resp. range) contains $\{x_{k}:k<n\}$ (resp. $\{x'_{k}:k<n\}$). The union of these maps defines a partial isometry $\phi :X\to X'$ whose domain resp. range are dense in the respective spaces. And such maps extend (uniquely) to isometries, since a Urysohn universal space is required to be complete. References 1. Juha Heinonen (January 2003), Geometric embeddings of metric spaces, retrieved 6 January 2009
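The Fréchet-type embeddings mentioned above can be made concrete. For a finite metric space, sending each point x to the vector of its distances (d(x, x_1), ..., d(x, x_n)) is already an isometric embedding into R^n with the supremum norm; for a general separable space one uses a countable dense sequence and recentres the coordinates to land in l∞. The following sketch, using an invented four-point metric space, verifies the finite version.

```python
import numpy as np

# A finite metric space given by its distance matrix (symmetric, zero diagonal,
# triangle inequality holds): four points spaced along a line.
D = np.array([[0, 1, 2, 3],
              [1, 0, 1, 2],
              [2, 1, 0, 1],
              [3, 2, 1, 0]], dtype=float)

# Frechet/Kuratowski embedding: point i  ->  row i of D, i.e. x -> (d(x, x_k))_k.
phi = D.copy()

# With the supremum norm the embedding is isometric:
# max_k |d(x_i, x_k) - d(x_j, x_k)| = d(x_i, x_j).
for i in range(len(D)):
    for j in range(len(D)):
        sup_dist = np.max(np.abs(phi[i] - phi[j]))
        assert abs(sup_dist - D[i, j]) < 1e-12
print("The distance-vector embedding is isometric for this example.")
```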
Ushadevi Bhosle Dr. Ushadevi Narendra Bhosle is an Indian mathematician, educator and researcher. She specialises in Algebraic Geometry.[1] She worked on the moduli spaces of bundles.[1] Early life and education She got a B.Sc. degree in 1969 and an M.Sc. degree in 1971 from University of Pune, Shivaji University, respectively.[1] She commenced her post-graduate studies in 1971 from Tata Institute of Fundamental Research and got her doctorate degree of philosophy under the guidance of her mentor S.Ramanan in 1980.[1] Career She started her career with being a research assistant at the Tata Institute of Fundamental Research from 1971 to 1974. Then she became the Research Associate II in the same institute Tata Institute of Fundamental Research, from 1974 to 1977. Later on, she became a Research Fellow from 1977-1982, a Fellow from 1982–1990 and a Reader 1991-1995 at the same institute . She was the Associate Professor 1995 - 1998, Professor 1998-2011 and Senior Professor 2012-2014 at the same institute Tata Institute of Fundamental Research. She was the Raja Ramanna fellow 2014 - 2017 at Indian Institute of Science, Bangalore. She is INSA Senior Scientist at Indian Statistical Institute, Bangalore from Jan 2019. Membership She is the member of FASc, FNASc, FNASI and VBAC international committees.[1] She also was the senior associate of International Centre of Theoretical Physics, Italy.[1] She was a fellow member of the Indian National Science Academy, Delhi, Indian Academy of Sciences, Bangalore and National Academy of Sciences, Allahabad, India.[2][3][1] Works She has 66 publications. • Desale U.V. and Ramanan S. Poincare Polynomials of the variety of stable bundles, Math. Ann.vol. 216, no.3,(1975)233-244. • Desale U.V. and Ramanan S.: Classification of vector bundles of rank two on hyperelliptic curves. Invent. Math. 38, 161-185 (1976). • Bhosle (Desale) Usha N.: Moduli of orthogonal and spin bundles over hyperelliptic curves. Compositio Math. 51, 15-40 (1984). • Bhosle, U. N. (1989). "Parabolic vector bundles on curves". Arkiv för Matematik. 27 (1–2): 15–22. Bibcode:1989ArM....27...15B. doi:10.1007/BF02386356. ISSN 0004-2080. • Bhosle, Usha N. (1999). "Picard groups of the moduli spaces of vector bundles". Mathematische Annalen. 314 (2): 245–263. doi:10.1007/s002080050293. ISSN 0025-5831. • Bhosle, Usha N. (1996). "Generalized parabolic bundles and applications— II". Proceedings Mathematical Sciences. 106 (4): 403–420. doi:10.1007/BF02837696. ISSN 0253-4142. • Bhosle Usha N. (1992), Parabolic sheaves on higher dimensional varieties,Math. Ann. 293 177–192[4] • Bhosle, U.N. (1986): Nets of quadrics and vector bundles on a double plane. Math. Zeit.192, 29–43[5] • Bhosle Usha N. (1992), Generalised parabolic bundles and applications to torsion-free sheaves on nodal curves.Ark. Mat. 30 187–215[6] • Bhosle, U.N. (1989), Ramanathan, A.: Moduli of parabolicG-bundles on curves. Math. Z.202, 161–180[7] • BHOSLE, U. (1999). VECTOR BUNDLES ON CURVES WITH MANY COMPONENTS. Proceedings of the London Mathematical Society, 79(1), 81-106. • Bhosle Usha N (1995), Representations of the fundamental group and vector bundles,Math. Ann.302 601–608[8] Awards and honours She was awarded by Stree Shakti Science Samman in 2010 and Ramaswamy Aiyer Memorial Award in 2000.[1] Personal life Apart from mathematics, her other interests are drawing, painting, reading and music. Currently, she lives in Mumbai.[1] References 1. "INSA :: Indian Fellow Detail". insaindia.res.in. Retrieved 16 February 2019. 2. 
"The National Academy of Sciences, India - Founder Members". Nasi.org.in. Retrieved 14 October 2018. 3. "INSA". Archived from the original on 12 August 2016. Retrieved 13 May 2016. 4. Bhosle, Usha (1 December 1992). "Parabolic sheaves on higher dimensional varieties". Mathematische Annalen. 293 (1): 177–192. doi:10.1007/BF01444711. ISSN 1432-1807. 5. Bhosle, Usha N. (1 March 1986). "Nets of quadrics and vector bundles on a double plane". Mathematische Zeitschrift. 192 (1): 29–43. doi:10.1007/BF01162017. ISSN 1432-1823. 6. Bhosle, Usha (1 December 1992). "Generalised parabolic bundles and applications to torsionfree sheaves on nodal curves". Arkiv för Matematik. 30 (1): 187–215. Bibcode:1992ArM....30..187B. doi:10.1007/BF02384869. ISSN 1871-2487. 7. Bhosle, Usha; Ramanathan, A. (1 June 1989). "Moduli of parabolicG-bundles on curves". Mathematische Zeitschrift. 202 (2): 161–180. doi:10.1007/BF01215252. ISSN 1432-1823. 8. Bhosle, Usha N. (1 May 1995). "Representations of the fundamental group and vector bundles". Mathematische Annalen. 302 (1): 601–608. doi:10.1007/BF01444510. ISSN 1432-1807. External links • Suirauqa (1 July 2013). "Oh, the humanity of it all!: Noted Women Scientists of India - an attempt at enumeration". Ohthehumanityofitall.blogspot.com. Retrieved 14 October 2018. Authority control: Academics • MathSciNet • Mathematics Genealogy Project • zbMATH
Ensemble (mathematical physics) In physics, specifically statistical mechanics, an ensemble (also statistical ensemble) is an idealization consisting of a large number of virtual copies (sometimes infinitely many) of a system, considered all at once, each of which represents a possible state that the real system might be in. In other words, a statistical ensemble is a set of systems of particles used in statistical mechanics to describe a single system.[1] The concept of an ensemble was introduced by J. Willard Gibbs in 1902.[2] A thermodynamic ensemble is a specific variety of statistical ensemble that, among other properties, is in statistical equilibrium (defined below), and is used to derive the properties of thermodynamic systems from the laws of classical or quantum mechanics.[3][4] Physical considerations The ensemble formalises the notion that an experimenter repeating an experiment again and again under the same macroscopic conditions, but unable to control the microscopic details, may expect to observe a range of different outcomes. The notional size of ensembles in thermodynamics, statistical mechanics and quantum statistical mechanics can be very large, including every possible microscopic state the system could be in, consistent with its observed macroscopic properties. For many important physical cases, it is possible to calculate averages directly over the whole of the thermodynamic ensemble, to obtain explicit formulas for many of the thermodynamic quantities of interest, often in terms of the appropriate partition function. The concept of an equilibrium or stationary ensemble is crucial to many applications of statistical ensembles. Although a mechanical system certainly evolves over time, the ensemble does not necessarily have to evolve. In fact, the ensemble will not evolve if it contains all past and future phases of the system. Such a statistical ensemble, one that does not change over time, is called stationary and can be said to be in statistical equilibrium.[2] Terminology • The word "ensemble" is also used for a smaller set of possibilities sampled from the full set of possible states. For example, a collection of walkers in a Markov chain Monte Carlo iteration is called an ensemble in some of the literature. • The term "ensemble" is often used in physics and the physics-influenced literature. In probability theory, the term probability space is more prevalent. Main types The study of thermodynamics is concerned with systems that appear to human perception to be "static" (despite the motion of their internal parts), and which can be described simply by a set of macroscopically observable variables. These systems can be described by statistical ensembles that depend on a few observable parameters, and which are in statistical equilibrium.
Gibbs noted that different macroscopic constraints lead to different types of ensembles, with particular statistical characteristics. "We may imagine a great number of systems of the same nature, but differing in the configurations and velocities which they have at a given instant, and differing in not merely infinitesimally, but it may be so as to embrace every conceivable combination of configuration and velocities..." J. W. Gibbs (1903)[5] Three important thermodynamic ensembles were defined by Gibbs:[2] • Microcanonical ensemble (or NVE ensemble) —a statistical ensemble where the total energy of the system and the number of particles in the system are each fixed to particular values; each of the members of the ensemble are required to have the same total energy and particle number. The system must remain totally isolated (unable to exchange energy or particles with its environment) in order to stay in statistical equilibrium.[2] • Canonical ensemble (or NVT ensemble)—a statistical ensemble where the energy is not known exactly but the number of particles is fixed. In place of the energy, the temperature is specified. The canonical ensemble is appropriate for describing a closed system which is in, or has been in, weak thermal contact with a heat bath. In order to be in statistical equilibrium, the system must remain totally closed (unable to exchange particles with its environment) and may come into weak thermal contact with other systems that are described by ensembles with the same temperature.[2] • Grand canonical ensemble (or μVT ensemble)—a statistical ensemble where neither the energy nor particle number are fixed. In their place, the temperature and chemical potential are specified. The grand canonical ensemble is appropriate for describing an open system: one which is in, or has been in, weak contact with a reservoir (thermal contact, chemical contact, radiative contact, electrical contact, etc.). The ensemble remains in statistical equilibrium if the system comes into weak contact with other systems that are described by ensembles with the same temperature and chemical potential.[2] The calculations that can be made using each of these ensembles are explored further in their respective articles. Other thermodynamic ensembles can be also defined, corresponding to different physical requirements, for which analogous formulae can often similarly be derived. For example, in the reaction ensemble, particle number fluctuations are only allowed to occur according to the stoichiometry of the chemical reactions which are present in the system.[6] Representations The precise mathematical expression for a statistical ensemble has a distinct form depending on the type of mechanics under consideration (quantum or classical). In the classical case, the ensemble is a probability distribution over the microstates. In quantum mechanics, this notion, due to von Neumann, is a way of assigning a probability distribution over the results of each complete set of commuting observables. In classical mechanics, the ensemble is instead written as a probability distribution in phase space; the microstates are the result of partitioning phase space into equal-sized units, although the size of these units can be chosen somewhat arbitrarily. 
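A toy computation can make the difference between these constraints concrete. The sketch below is illustrative only (the three-unit system and all parameter values are invented, and the grand canonical case is omitted for brevity): it enumerates the microstates of three independent two-level units, builds the microcanonical distribution, which is uniform on a fixed-energy shell, and the canonical distribution, which weights every microstate by a Boltzmann factor at a fixed temperature.

```python
import itertools
import numpy as np

# Toy system: three independent two-level units with energies 0 or 1.
# A microstate is a tuple such as (0, 1, 0); its energy is the sum of the entries.
microstates = list(itertools.product([0, 1], repeat=3))
energies = np.array([sum(s) for s in microstates], dtype=float)

# Microcanonical ensemble (fixed total energy E = 1):
# uniform probability over the microstates on that energy shell, zero elsewhere.
E_fixed = 1.0
in_shell = energies == E_fixed
p_micro = in_shell / in_shell.sum()

# Canonical ensemble (fixed temperature, here beta = 1/kT = 1):
# Boltzmann weights over *all* microstates.
beta = 1.0
w = np.exp(-beta * energies)
p_canon = w / w.sum()              # w.sum() is the partition function Z

for s, pm, pc in zip(microstates, p_micro, p_canon):
    print(s, f"  microcanonical {pm:.3f}   canonical {pc:.3f}")
# Ensemble average of the energy in the canonical ensemble:
print("canonical <E> =", round(float(np.dot(p_canon, energies)), 4))
```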
Requirements for representations Putting aside for the moment the question of how statistical ensembles are generated operationally, we should be able to perform the following two operations on ensembles A, B of the same system: • Test whether A, B are statistically equivalent. • If p is a real number such that 0 < p < 1, then produce a new ensemble by probabilistic sampling from A with probability p and from B with probability 1 – p. Under certain conditions, therefore, equivalence classes of statistical ensembles have the structure of a convex set. Quantum mechanical Main article: Density matrix A statistical ensemble in quantum mechanics (also known as a mixed state) is most often represented by a density matrix, denoted by ${\hat {\rho }}$. The density matrix provides a fully general tool that can incorporate both quantum uncertainties (present even if the state of the system were completely known) and classical uncertainties (due to a lack of knowledge) in a unified manner. Any physical observable X in quantum mechanics can be written as an operator, X̂. The expectation value of this operator on the statistical ensemble $\rho $ is given by the following trace: $\langle X\rangle =\operatorname {Tr} ({\hat {X}}\rho ).$ This can be used to evaluate averages (operator X̂), variances (using operator X̂ 2), covariances (using operator X̂Ŷ), etc. The density matrix must always have a trace of 1: $\operatorname {Tr} {\hat {\rho }}=1$ (this essentially is the condition that the probabilities must add up to one). In general, the ensemble evolves over time according to the von Neumann equation. Equilibrium ensembles (those that do not evolve over time, $d{\hat {\rho }}/dt=0$) can be written solely as a function of conserved variables. For example, the microcanonical ensemble and canonical ensemble are strictly functions of the total energy, which is measured by the total energy operator Ĥ (Hamiltonian). The grand canonical ensemble is additionally a function of the particle number, measured by the total particle number operator N̂. Such equilibrium ensembles are a diagonal matrix in the orthogonal basis of states that simultaneously diagonalize each conserved variable. In bra–ket notation, the density matrix is ${\hat {\rho }}=\sum _{i}P_{i}|\psi _{i}\rangle \langle \psi _{i}|$ where the |ψi⟩, indexed by i, are the elements of a complete and orthogonal basis. (Note that in other bases, the density matrix is not necessarily diagonal.) Classical mechanical In classical mechanics, an ensemble is represented by a probability density function defined over the system's phase space.[2] While an individual system evolves according to Hamilton's equations, the density function (the ensemble) evolves over time according to Liouville's equation. In a mechanical system with a defined number of parts, the phase space has n generalized coordinates called q1, ... qn, and n associated canonical momenta called p1, ... pn. The ensemble is then represented by a joint probability density function ρ(p1, ... pn, q1, ... qn). If the number of parts in the system is allowed to vary among the systems in the ensemble (as in a grand ensemble where the number of particles is a random quantity), then it is a probability distribution over an extended phase space that includes further variables such as particle numbers N1 (first kind of particle), N2 (second kind of particle), and so on up to Ns (the last kind of particle; s is how many different kinds of particles there are). 
The ensemble is then represented by a joint probability density function ρ(N1, ... Ns, p1, ... pn, q1, ... qn). The number of coordinates n varies with the numbers of particles. Any mechanical quantity X can be written as a function of the system's phase. The expectation value of any such quantity is given by an integral over the entire phase space of this quantity weighted by ρ: $\langle X\rangle =\sum _{N_{1}=0}^{\infty }\ldots \sum _{N_{s}=0}^{\infty }\int \ldots \int \rho X\,dp_{1}\ldots dq_{n}.$ The condition of probability normalization applies, requiring $\sum _{N_{1}=0}^{\infty }\ldots \sum _{N_{s}=0}^{\infty }\int \ldots \int \rho \,dp_{1}\ldots dq_{n}=1.$ Phase space is a continuous space containing an infinite number of distinct physical states within any small region. In order to connect the probability density in phase space to a probability distribution over microstates, it is necessary to somehow partition the phase space into blocks that are distributed representing the different states of the system in a fair way. It turns out that the correct way to do this simply results in equal-sized blocks of canonical phase space, and so a microstate in classical mechanics is an extended region in the phase space of canonical coordinates that has a particular volume.[note 1] In particular, the probability density function in phase space, ρ, is related to the probability distribution over microstates, P by a factor $\rho ={\frac {1}{h^{n}C}}P,$ where • h is an arbitrary but predetermined constant with the units of energy×time, setting the extent of the microstate and providing correct dimensions to ρ.[note 2] • C is an overcounting correction factor (see below), generally dependent on the number of particles and similar concerns. Since h can be chosen arbitrarily, the notional size of a microstate is also arbitrary. Still, the value of h influences the offsets of quantities such as entropy and chemical potential, and so it is important to be consistent with the value of h when comparing different systems. Correcting overcounting in phase space Typically, the phase space contains duplicates of the same physical state in multiple distinct locations. This is a consequence of the way that a physical state is encoded into mathematical coordinates; the simplest choice of coordinate system often allows a state to be encoded in multiple ways. An example of this is a gas of identical particles whose state is written in terms of the particles' individual positions and momenta: when two particles are exchanged, the resulting point in phase space is different, and yet it corresponds to an identical physical state of the system. It is important in statistical mechanics (a theory about physical states) to recognize that the phase space is just a mathematical construction, and to not naively overcount actual physical states when integrating over phase space. Overcounting can cause serious problems: • Dependence of derived quantities (such as entropy and chemical potential) on the choice of coordinate system, since one coordinate system might show more or less overcounting than another.[note 3] • Erroneous conclusions that are inconsistent with physical experience, as in the mixing paradox.[2] • Foundational issues in defining the chemical potential and the grand canonical ensemble.[2] It is in general difficult to find a coordinate system that uniquely encodes each physical state. 
As a result, it is usually necessary to use a coordinate system with multiple copies of each state, and then to recognize and remove the overcounting. A crude way to remove the overcounting would be to manually define a subregion of phase space that includes each physical state only once and then exclude all other parts of phase space. In a gas, for example, one could include only those phases where the particles' x coordinates are sorted in ascending order. While this would solve the problem, the resulting integral over phase space would be tedious to perform due to its unusual boundary shape. (In this case, the factor C introduced above would be set to C = 1, and the integral would be restricted to the selected subregion of phase space.) A simpler way to correct the overcounting is to integrate over all of phase space but to reduce the weight of each phase in order to exactly compensate the overcounting. This is accomplished by the factor C introduced above, which is a whole number that represents how many ways a physical state can be represented in phase space. Its value does not vary with the continuous canonical coordinates,[note 4] so overcounting can be corrected simply by integrating over the full range of canonical coordinates, then dividing the result by the overcounting factor. However, C does vary strongly with discrete variables such as numbers of particles, and so it must be applied before summing over particle numbers. As mentioned above, the classic example of this overcounting is for a fluid system containing various kinds of particles, where any two particles of the same kind are indistinguishable and exchangeable. When the state is written in terms of the particles' individual positions and momenta, then the overcounting related to the exchange of identical particles is corrected by using[2] $C=N_{1}!N_{2}!\ldots N_{s}!.$ This is known as "correct Boltzmann counting". Ensembles in statistics The formulation of statistical ensembles used in physics has now been widely adopted in other fields, in part because it has been recognized that the canonical ensemble or Gibbs measure serves to maximize the entropy of a system, subject to a set of constraints: this is the principle of maximum entropy. This principle has now been widely applied to problems in linguistics, robotics, and the like. In addition, statistical ensembles in physics are often built on a principle of locality: that all interactions are only between neighboring atoms or nearby molecules. Thus, for example, lattice models, such as the Ising model, model ferromagnetic materials by means of nearest-neighbor interactions between spins. The statistical formulation of the principle of locality is now seen to be a form of the Markov property in the broad sense; nearest neighbors are now Markov blankets. Thus, the general notion of a statistical ensemble with nearest-neighbor interactions leads to Markov random fields, which again find broad applicability; for example in Hopfield networks. Ensemble average In statistical mechanics, the ensemble average is defined as the mean of a quantity that is a function of the microstate of a system, according to the distribution of the system on its micro-states in this ensemble. Since the ensemble average is dependent on the ensemble chosen, its mathematical expression varies from ensemble to ensemble. However, the mean obtained for a given physical quantity does not depend on the ensemble chosen at the thermodynamic limit. 
The grand canonical ensemble is an example of an open system.[7] Classical statistical mechanics For a classical system in thermal equilibrium with its environment, the ensemble average takes the form of an integral over the phase space of the system: ${\bar {A}}={\frac {\int {Ae^{-\beta H(q_{1},q_{2},...q_{M},p_{1},p_{2},...p_{N})}d\tau }}{\int {e^{-\beta H(q_{1},q_{2},...q_{M},p_{1},p_{2},...p_{N})}d\tau }}}$ where: ${\bar {A}}$ is the ensemble average of the system property A, $\beta $ is ${\frac {1}{kT}}$, known as thermodynamic beta, H is the Hamiltonian of the classical system in terms of the set of coordinates $q_{i}$ and their conjugate generalized momenta $p_{i}$, and $d\tau $ is the volume element of the classical phase space of interest. The denominator in this expression is known as the partition function, and is denoted by the letter Z. Quantum statistical mechanics In quantum statistical mechanics, for a quantum system in thermal equilibrium with its environment, the weighted average takes the form of a sum over quantum energy states, rather than a continuous integral: ${\bar {A}}={\frac {\sum _{i}{A_{i}e^{-\beta E_{i}}}}{\sum _{i}{e^{-\beta E_{i}}}}}$ Canonical ensemble average The generalized version of the partition function provides the complete framework for working with ensemble averages in thermodynamics, information theory, statistical mechanics and quantum mechanics. The microcanonical ensemble represents an isolated system in which energy (E), volume (V) and the number of particles (N) are all constant. The canonical ensemble represents a closed system which can exchange energy (E) with its surroundings (usually a heat bath), but the volume (V) and the number of particles (N) are all constant. The grand canonical ensemble represents an open system which can exchange energy (E) as well as particles with its surroundings but the volume (V) is kept constant. Operational interpretation In the discussion given so far, while rigorous, we have taken for granted that the notion of an ensemble is valid a priori, as is commonly done in physical context. What has not been shown is that the ensemble itself (not the consequent results) is a precisely defined object mathematically. For instance, • It is not clear where this very large set of systems exists (for example, is it a gas of particles inside a container?) • It is not clear how to physically generate an ensemble. In this section, we attempt to partially answer this question. Suppose we have a preparation procedure for a system in a physics lab: For example, the procedure might involve a physical apparatus and some protocols for manipulating the apparatus. As a result of this preparation procedure, some system is produced and maintained in isolation for some small period of time. By repeating this laboratory preparation procedure we obtain a sequence of systems X1, X2, ....,Xk, which in our mathematical idealization, we assume is an infinite sequence of systems. The systems are similar in that they were all produced in the same way. This infinite sequence is an ensemble. In a laboratory setting, each one of these prepped systems might be used as input for one subsequent testing procedure. Again, the testing procedure involves a physical apparatus and some protocols; as a result of the testing procedure we obtain a yes or no answer. Given a testing procedure E applied to each prepared system, we obtain a sequence of values Meas (E, X1), Meas (E, X2), ...., Meas (E, Xk). Each one of these values is a 0 (or no) or a 1 (yes). 
Assume the following time average exists: $\sigma (E)=\lim _{N\rightarrow \infty }{\frac {1}{N}}\sum _{k=1}^{N}\operatorname {Meas} (E,X_{k})$ For quantum mechanical systems, an important assumption made in the quantum logic approach to quantum mechanics is the identification of yes-no questions to the lattice of closed subspaces of a Hilbert space. With some additional technical assumptions one can then infer that states are given by density operators S so that: $\sigma (E)=\operatorname {Tr} (ES).$ We see this reflects the definition of quantum states in general: A quantum state is a mapping from the observables to their expectation values. See also • Density matrix • Ensemble (fluid mechanics) • Phase space • Liouville's theorem (Hamiltonian) • Maxwell–Boltzmann statistics • Replication (statistics) Notes 1. This equal-volume partitioning is a consequence of Liouville's theorem, i. e., the principle of conservation of extension in canonical phase space for Hamiltonian mechanics. This can also be demonstrated starting with the conception of an ensemble as a multitude of systems. See Gibbs' Elementary Principles, Chapter I. 2. (Historical note) Gibbs' original ensemble effectively set h = 1 [energy unit]×[time unit], leading to unit-dependence in the values of some thermodynamic quantities like entropy and chemical potential. Since the advent of quantum mechanics, h is often taken to be equal to Planck's constant in order to obtain a semiclassical correspondence with quantum mechanics. 3. In some cases the overcounting error is benign. An example is the choice of coordinate system used for representing orientations of three-dimensional objects. A simple encoding is the 3-sphere (e. g., unit quaternions) which is a double cover—each physical orientation can be encoded in two ways. If this encoding is used without correcting the overcounting, then the entropy will be higher by k log 2 per rotatable object and the chemical potential lower by kT log 2. This does not actually lead to any observable error since it only causes unobservable offsets. 4. Technically, there are some phases where the permutation of particles does not even yield a distinct specific phase: for example, two similar particles can share the exact same trajectory, internal state, etc.. However, in classical mechanics these phases only make up an infinitesimal fraction of the phase space (they have measure zero) and so they do not contribute to any volume integral in phase space. References 1. Rennie, Richard; Jonathan Law (2019). Oxford Dictionary of Physics. pp. 458 ff. ISBN 978-0198821472. 2. Gibbs, Josiah Willard (1902). Elementary Principles in Statistical Mechanics. New York: Charles Scribner's Sons. 3. Kittel, Charles; Herbert Kroemer (1980). Thermal Physics, Second Edition. San Francisco: W.H. Freeman and Company. pp. 31 ff. ISBN 0-7167-1088-9. 4. Landau, L.D.; Lifshitz, E.M. (1980). Statistical Physics. Pergamon Press. pp. 9 ff. ISBN 0-08-023038-5. 5. Gibbs, J.W. (1928). The Collected Works, Vol. 2. Green & Co, London, New York: Longmans. 6. Simulation of chemical reaction equilibria by the reaction ensemble Monte Carlo method: a review https://doi.org/10.1080/08927020801986564 7. http://physics.gmu.edu/~pnikolic/PHYS307/lectures/ensembles.pdf External links • Monte Carlo applet applied in statistical physics problems. 
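As a closing illustration of the operational picture (a toy sketch; the preparation probabilities, the state and the question are all invented for the example), the script below prepares many copies of a qubit ensemble with density matrix S, asks the yes/no question given by a projector E of each copy, and checks that the long-run frequency of "yes" answers approaches Tr(ES).

```python
import numpy as np

rng = np.random.default_rng(1)

# Preparation procedure: each run produces |0> with probability 0.7 and |1> with
# probability 0.3, so the ensemble has density matrix S = 0.7|0><0| + 0.3|1><1|.
kets = [np.array([[1], [0]], dtype=complex), np.array([[0], [1]], dtype=complex)]
probs = [0.7, 0.3]
S = sum(p * k @ k.conj().T for p, k in zip(probs, kets))

# The yes/no question E: the projector onto |+> = (|0> + |1>)/sqrt(2).
plus = np.array([[1], [1]], dtype=complex) / np.sqrt(2)
E = plus @ plus.conj().T

print("Tr(ES) =", round(np.trace(E @ S).real, 4))      # 0.5 for this S and E

# Simulated record Meas(E, X_1), Meas(E, X_2), ...: each prepared system answers
# "yes" with its Born probability <psi|E|psi>.
n_runs = 100_000
born = np.array([(k.conj().T @ E @ k).real.item() for k in kets])   # [0.5, 0.5]
which = rng.choice(2, size=n_runs, p=probs)
record = rng.random(n_runs) < born[which]
print("time average sigma(E) ~", round(record.mean(), 4))           # tends to Tr(ES)
```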
Using the Borsuk–Ulam Theorem Using the Borsuk–Ulam Theorem: Lectures on Topological Methods in Combinatorics and Geometry is a graduate-level mathematics textbook in topological combinatorics. It describes the use of results in topology, and in particular the Borsuk–Ulam theorem, to prove theorems in combinatorics and discrete geometry. It was written by Czech mathematician Jiří Matoušek, and published in 2003 by Springer-Verlag in their Universitext series (ISBN 978-3-540-00362-5).[1][2] Topics The topic of the book is part of a relatively new field of mathematics crossing between topology and combinatorics, now called topological combinatorics.[2][3] The starting point of the field,[3] and one of the central inspirations for the book, was a proof that László Lovász published in 1978 of a 1955 conjecture by Martin Kneser, according to which the Kneser graphs $KG_{2n+k,n}$ have no graph coloring with $k+1$ colors. Lovász used the Borsuk–Ulam theorem in his proof, and Matoušek gathers many related results, published subsequently, to show that this connection between topology and combinatorics is not just a proof trick but an area.[4] The book has six chapters. After two chapters reviewing the basic notions of algebraic topology, and proving the Borsuk–Ulam theorem, the applications to combinatorics and geometry begin in the third chapter, with topics including the ham sandwich theorem, the necklace splitting problem, Gale's lemma on points in hemispheres, and several results on colorings of Kneser graphs.[1][2] After another chapter on more advanced topics in equivariant topology, two more chapters of applications follow, separated according to whether the equivariance is modulo two or using a more complicated group action.[5] Topics in these chapters include the van Kampen–Flores theorem on embeddability of skeletons of simplices into lower-dimensional Euclidean spaces, and topological and multicolored variants of Radon's theorem and Tverberg's theorem on partitions into subsets with intersecting convex hulls.[1][2] Audience and reception The book is written at a graduate level, and has exercises making it suitable as a graduate textbook. Some knowledge of topology would be helpful for readers but is not necessary. Reviewer Mihaela Poplicher writes that it is not easy to read, but is "very well written, very interesting, and very informative".[2] And reviewer Imre Bárány writes that "The book is well written, and the style is lucid and pleasant, with plenty of illustrative examples." Matoušek intended this material to become part of a broader textbook on topological combinatorics, to be written jointly with him, Anders Björner, and Günter M. Ziegler.[2][5] However, this was not completed before Matoušek's untimely death in 2015.[6] References 1. Dzedzej, Zdzisław (2004), "Review of Using the Borsuk-Ulam Theorem", Mathematical Reviews, MR 1988723 2. Poplicher, Mihaela (January 2005), "Review of Using the Borsuk-Ulam Theorem", MAA Reviews, Mathematical Association of America 3. de Longueville, Mark, "25 years proof of the Kneser conjecture: The advent of topological combinatorics" (PDF), EMS Newsletter, European Mathematical Society: 16–19 4. Ziegler, Günter M., "Review of Using the Borsuk-Ulam Theorem", zbMATH, Zbl 1016.05001 5. Bárány, Imre (March 2004), "Review of Using the Borsuk-Ulam Theorem", Combinatorics, Probability and Computing, 13 (2): 281–282, doi:10.1017/s096354830400608x 6. Kratochvíl, Jan; Loebl, Martin; Nešetřil, Jarik; Valtr, Pavel, Prof. Jiří Matoušek
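The Lovász–Kneser theorem behind the book can be checked by brute force in its smallest nontrivial instance. With the indexing used above, the Kneser graph KG_{5,2} (the Petersen graph) corresponds to n = 2 and k = 1, so it should admit no coloring with k + 1 = 2 colors; its chromatic number is in fact 3. The sketch below is illustrative code (not from the book) confirming that 2 colors fail and 3 suffice.

```python
from itertools import combinations, product

# Kneser graph KG_{5,2}: vertices are the 2-element subsets of {0,...,4};
# edges join disjoint subsets.  This is the Petersen graph (10 vertices, 15 edges).
vertices = list(combinations(range(5), 2))
edges = [(u, v) for u, v in combinations(vertices, 2) if not set(u) & set(v)]

def colorable(k):
    """Brute-force test: is there an assignment of k colors with no monochromatic edge?"""
    for coloring in product(range(k), repeat=len(vertices)):
        col = dict(zip(vertices, coloring))
        if all(col[u] != col[v] for u, v in edges):
            return True
    return False

print("2-colorable:", colorable(2))   # False, as the Kneser conjecture predicts (k = 1)
print("3-colorable:", colorable(3))   # True: the chromatic number is 3 = 5 - 2*2 + 2
```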
Wikipedia
Uta Merzbach Uta Caecilia Merzbach (February 9, 1933 – June 27, 2017) was a German-American historian of mathematics who became the first curator of mathematical instruments at the Smithsonian Institution.[1] Uta Merzbach BornBerlin  DiedGeorgetown  Alma mater • University of Texas at Austin • Harvard University  OccupationCurator  Employer • National Museum of American History  Position heldcurator, Associate Curator  Early life Merzbach was born in Berlin, where her mother was a philologist and her father was an economist who worked for the Reich Association of Jews in Germany during World War II. The Nazi government closed the association in June 1943; they arrested the family, along with other leading members of the association, and sent them to the Theresienstadt concentration camp on August 4, 1943.[1][2] The Merzbachs survived the war and the camp, and after living for a year in a refugee camp in Deggendorf they moved to Georgetown, Texas in 1946, where her father found a faculty position at Southwestern University. Education After high school in Brownwood, Texas, Merzbach entered Southwestern, but transferred after two years to the University of Texas at Austin, where she graduated in 1952 with a bachelor's degree in mathematics. In 1954, she earned a master's degree there, also in mathematics.[1] Merzbach became a school teacher, but soon returned to graduate study at Harvard University.[1] She completed her Ph.D. at Harvard in 1965. Her dissertation, Quantity of Structure: Development of Modern Algebraic Concepts from Leibniz to Dedekind, combined mathematics and the history of science; it was jointly supervised by mathematician Garrett Birkhoff and historian of science I. Bernard Cohen.[1][3][4] Career Merzbach joined the Smithsonian as an associate curator in 1964 (later curator), and served there until 1988 in the National Museum of American History. As well as collecting mathematical objects at the Smithsonian, she also collected interviews with many of the pioneers of computing.[1] In 1991, she co-authored the second edition of A History of Mathematics, originally published in 1968 by Carl Benjamin Boyer.[1][5] After her retirement she returned to Georgetown, Texas, where she died in 2017.[1] References 1. "In Memoriam: Uta C. Merzbach", Smithsonian Torch, July 2017 2. Spicer, Kevin; Cucchiara, Martina, eds. (2017), The Evil That Surrounds Us: The WWII Memoir of Erna Becker-Kohen, Indiana University Press, pp. 13, 27–28, 53, 133, 140, ISBN 9780253029904 3. Uta Merzbach at the Mathematics Genealogy Project 4. "Uta C. Merzbach Papers, 1948-2017". Texas Archival Resources Online. Retrieved 2022-05-08. 5. Acker, Kathleen (July 2007), "Review of History of Mathematics (2nd ed.)", Convergence, Mathematical Association of America Authority control International • ISNI • VIAF National • Norway • France • BnF data • Germany • Italy • Israel • Belgium • United States • Latvia • Japan • Czech Republic • Greece • Croatia • Netherlands Academics • CiNii • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie Other • IdRef
Wikipedia
Utilitarian cake-cutting Utilitarian cake-cutting (also called maxsum cake-cutting) is a rule for dividing a heterogeneous resource, such as a cake or a land-estate, among several partners with different cardinal utility functions, such that the sum of the utilities of the partners is as large as possible. It is a special case of the utilitarian social choice rule. Utilitarian cake-cutting is often not "fair"; hence, utilitarianism is often in conflict with fair cake-cutting. Part of a series on Utilitarianism Predecessors • Mozi • Śāntideva • David Hume • Claude Adrien Helvétius • Cesare Beccaria • William Godwin • Francis Hutcheson • William Paley Key proponents • Jeremy Bentham • John Stuart Mill • Henry Sidgwick • R. M. Hare • Peter Singer Types of utilitarianism • Negative • Rule • Act • Two-level • Total • Average • Preference • Classical Key concepts • Pain • Suffering • Pleasure • Utility • Happiness • Eudaimonia • Consequentialism • Equal consideration • Felicific calculus • Utilitarian social choice rule Problems • Demandingness objection • Mere addition paradox • Paradox of hedonism • Replaceability argument • Utility monster Related topics • Rational choice theory • Game theory • Neoclassical economics • Population ethics • Effective altruism Philosophy portal Example Consider a cake with two parts: chocolate and vanilla, and two partners: Alice and George, with the following valuations:
Partner : Chocolate : Vanilla
Alice : 9 : 1
George : 6 : 4
The utilitarian rule gives each part to the partner with the highest utility. In this case, the utilitarian rule gives the entire chocolate part to Alice and the entire vanilla part to George. The maxsum is 13. The utilitarian division is not fair: it is not proportional since George receives less than half the total cake value, and it is not envy-free since George envies Alice. Notation The cake is called $C$. It is usually assumed to be either a finite 1-dimensional segment, a 2-dimensional polygon or a finite subset of the multidimensional Euclidean space $\mathbb {R} ^{d}$. There are $n$ partners. Each partner $i$ has a personal value function $V_{i}$ which maps subsets of $C$ ("pieces") to numbers. $C$ has to be divided into $n$ disjoint pieces, one piece per partner. The piece allocated to partner $i$ is called $X_{i}$, and $C=X_{1}\sqcup ...\sqcup X_{n}$. A division $X$ is called utilitarian or utilitarian-maximal or maxsum if it maximizes the following expression: $\sum _{i=1}^{n}{V_{i}(X_{i})}$ The concept is often generalized by assigning a different weight to each partner. A division $X$ is called weighted-utilitarian-maximal (WUM) if it maximizes the following expression: $\sum _{i=1}^{n}{\frac {V_{i}(X_{i})}{w_{i}}}$ where the $w_{i}$ are given positive constants. Maxsum and Pareto-efficiency Every WUM division with positive weights is obviously Pareto-efficient. This is because, if a division $Y$ Pareto-dominates a division $X$, then the weighted sum-of-utilities in $Y$ is strictly larger than in $X$, so $X$ cannot be a WUM division. What's more surprising is that every Pareto-efficient division is WUM for some selection of weights.[1] Characterization of the utilitarian rule Christopher P. Chambers suggests a characterization of the WUM rule.[2] The characterization is based on the following properties of a division rule R: • Pareto-efficiency (PE): the rule R returns only divisions which are Pareto-efficient.
• Division independence (DI): whenever a cake is partitioned into several sub-cakes and each sub-cake is divided according to rule R, the result is the same as if the original cake were partitioned according to R. • Independence of infeasible land (IIL): whenever a sub-cake is divided according to R, the result does not depend on the utilities of the partners in the other sub-cakes. • Positive treatment of equals (PTE): whenever all partners have the same utility function, R recommends at least one division that gives a positive utility to each partner. • Scale-invariance (SI): whenever the utility functions of the partners are multiplied by constants (a possibly different constant for each partner), the recommendations given by R do not change. • Continuity (CO): for a fixed piece of cake, the set of utility profiles which map to a specific allocation is a closed set under pointwise convergence. The following is proved for partners that assign positive utility to every piece of cake with positive size: • If R is PE, DI and IIL, then there exists a sequence of weights $w_{1},\dots ,w_{n}$ such that all divisions recommended by R are WUM with these weights (it is known that every PE division is WUM with some weights; the new part is that all divisions recommended by R are WUM with the same weights. This follows from the DI property). • If R is PE, DI, IIL and PTE, then all divisions recommended by R are utilitarian-maximal (in other words, all divisions must be WUM and all agents must have equal weights. This follows from the PTE property). • If R is PE, DI, IIL and SI, then R is a dictatorial rule - it gives the entire cake to a single partner. • If R is PE, DI, IIL and CO, then there exists a sequence of weights $w_{1},\dots ,w_{n}$ such that R is a WUM rule with these weights (i.e. R recommends all and only WUM divisions with these weights). Finding utilitarian divisions Disconnected pieces When the value functions are additive, maxsum divisions always exist. Intuitively, we can give each fraction of the cake to the partner that values it the most, as in the example above. Similarly, WUM divisions can be found by giving each fraction of the cake to the partner for whom the ratio $V_{i}/w_{i}$ is largest. This process is easy to carry out when the cake is piecewise-homogeneous, i.e., the cake can be divided into a finite number of pieces such that the value-density of each piece is constant for all partners; a short sketch is given below. When the cake is not piecewise-homogeneous, the above algorithm does not work since there is an infinite number of different "pieces" to consider. Maxsum divisions still exist. This is a corollary of the Dubins–Spanier compactness theorem and it can also be proved using the Radon–Nikodym set. However, no finite algorithm can find a maxsum division. Proof:[3][4]: Cor.2  A finite algorithm has value-data only about a finite number of pieces, i.e., there is only a finite number of subsets of the cake for which the algorithm knows the valuations of the partners. Suppose the algorithm has stopped after having value-data about $k$ subsets. Now, it may be the case that all partners answered all the queries as if they have the same value measure. In this case, the largest possible utilitarian value that the algorithm can achieve is 1 (valuations are normalized so that each partner values the entire cake at 1). However, it is possible that deep inside one of the $k$ pieces, there is a subset which two partners value differently. In this case, there exists a super-proportional division, in which each partner receives a value of more than $1/n$, so the sum of utilities is strictly more than 1. Hence, the division returned by the finite algorithm is not maxsum.
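For a piecewise-homogeneous cake, the procedure just described reduces to assigning each homogeneous region to the partner who values it most (or, for WUM, to the partner with the largest ratio $V_{i}/w_{i}$). The following minimal sketch is an illustration added here, not taken from the cited papers; the function name and data layout are assumptions. It reproduces the Alice/George example from the beginning of the article.

```python
def maxsum_division(regions, values, weights=None):
    """
    regions: list of region names.
    values[i][r]: value of region r to partner i (additive valuations).
    weights: optional positive weights for a WUM division; None means plain maxsum.
    Returns a dict mapping each region to the partner who receives it.
    """
    n = len(values)
    if weights is None:
        weights = [1.0] * n
    allocation = {}
    for r in regions:
        # Give the region to the partner maximizing V_i(r) / w_i.
        allocation[r] = max(range(n), key=lambda i: values[i][r] / weights[i])
    return allocation

# The example above: Alice (partner 0) and George (partner 1).
values = [{"chocolate": 9, "vanilla": 1},   # Alice
          {"chocolate": 6, "vanilla": 4}]   # George
alloc = maxsum_division(["chocolate", "vanilla"], values)
print(alloc)  # {'chocolate': 0, 'vanilla': 1} -> maxsum value 9 + 4 = 13
```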
Connected pieces When the cake is 1-dimensional and the pieces must be connected, the simple algorithm of assigning each piece to the agent that values it the most no longer works, even with piecewise-constant valuations. In this case, the problem of finding a UM division is NP-hard, and furthermore no FPTAS is possible unless P=NP. There is an 8-factor approximation algorithm, and a fixed-parameter tractable algorithm which is exponential in the number of players.[5] For every set of positive weights, a WUM division exists and can be found in a similar way. Maxsum and fairness A maxsum division is not always fair; see the example above. Similarly, a fair division is not always maxsum. One approach to this conflict is to bound the "price of fairness" - upper and lower bounds on the decrease in the sum of utilities that is required for fairness. For more details, see price of fairness. Another approach to combining efficiency and fairness is to find, among all possible fair divisions, a fair division with the highest sum-of-utilities: Finding utilitarian-fair allocations The following algorithms can be used to find an envy-free cake-cutting with maximum sum-of-utilities, for a cake which is a 1-dimensional interval, when each person may receive disconnected pieces and the value functions are additive:[6] 1. For $n$ partners with piecewise-constant valuations: divide the cake into m totally-constant regions. Solve a linear program with nm variables: each (agent, region) pair has a variable that determines the fraction of the region given to the agent. For each region, there is a constraint saying that the sum of all fractions from this region is 1; for each (agent, agent) pair, there is a constraint saying that the first agent does not envy the second one. Note that the allocation produced by this procedure might be highly fractioned. 2. For $2$ partners with piecewise-linear valuations: for each point in the cake, calculate the ratio between the utilities: $r=u_{1}/u_{2}$. Give partner 1 the points with $r\geq r^{*}$ and partner 2 the points with $r<r^{*}$, where $r^{*}$ is a threshold calculated so that the division is envy-free. In general $r^{*}$ cannot be calculated exactly because it might be irrational, but in practice, when the valuations are piecewise-linear, $r^{*}$ can be approximated by an "irrational search" approximation algorithm. For any $\epsilon >0$, the algorithm finds an allocation that is $\epsilon $-EF (the value of each agent is at least the value of each other agent minus $\epsilon $), and attains a sum that is at least the maximum sum of an EF allocation. Its run-time is polynomial in the input and in $\log(1/\epsilon )$. 3. For $n$ partners with general valuations: additive approximation to envy and efficiency, based on the piecewise-constant-valuations algorithm. Properties of utilitarian-fair allocations Brams, Feldman, Lai, Morgenstern and Procaccia[7] study both envy-free (EF) and equitable (EQ) cake divisions, and relate them to maxsum and Pareto-optimality (PO). As explained above, maxsum allocations are always PO. However, when maxsum is constrained by fairness, this is not necessarily true. They prove the following: • When there are two agents, maxsum-EF, maxsum-EQ and maxsum-EF-EQ allocations are always PO.
• When there are three or more agents with piecewise-uniform valuations, maxsum-EF allocations are always PO (since EF is equivalent to proportionality, which is preserved under Pareto improvements). However, there may be no maxsum-EQ and maxsum-EQ-EF allocations that are PO. • When there are three or more agents with piecewise-constant valuations, there may be even no maxsum-EF allocations that are PO. For example, consider a cake with three regions and three agents with values: Alice: 51/101, 50/101, 0 Bob: 50/101, 51/101, 0 Carl: 51/111, 10/111, 50/111 The maxsum rule gives region i to agent i, but it is not EF since Carl envies Alice. Using a linear program, it is possible to find the unique maxsum-EF allocation, and show that it must share both region 1 and region 2 between Alice and Bob. However, such allocation cannot be PO since Alice and Bob could both gain by swapping their shares in these regions. • When all agents have piecewise-linear valuations, the utility-sum of a maxsum-EF allocation is at least as large as a maxsum-EQ allocation. This result extends to general valuations up to an additive approximation (i.e., $\epsilon $-EF allocations have a utility-sum of at least EQ allocations minus $\epsilon $). Monotonicity properties of utilitarian cake-cutting When the pieces may be disconnected, the absolute-utilitarian rule (maximizing the sum of non-normalized utilities) is resource-monotonic and population-monotonic. The relative-utilitarian rule (maximizing the sum of normalized utilities) is population-monotonic but not resource-monotonic.[8] This no longer holds when the pieces are connected.[9] See also • Efficient cake-cutting • Fair cake-cutting • Weller's theorem • Pareto-efficient envy-free division • Rank-maximal allocation • Utilitarian voting - the utilitarian principle in a different context. References 1. Barbanel, Julius B.; Zwicker, William S. (1997). "Two applications of a theorem of Dvoretsky, Wald, and Wolfovitz to cake division". Theory and Decision. 43 (2): 203. doi:10.1023/a:1004966624893. S2CID 118505359.. See also Weller's theorem. For a similar result related to the problem of homogeneous resource allocation, see Varian's theorems. 2. Chambers, Christopher P. (2005). "Allocation rules for land division". Journal of Economic Theory. 121 (2): 236–258. doi:10.1016/j.jet.2004.04.008. 3. Brams, Steven J.; Taylor, Alan D. (1996). Fair Division [From cake-cutting to dispute resolution]. p. 48. ISBN 978-0521556446. 4. Ianovski, Egor (2012-03-01). "Cake Cutting Mechanisms". arXiv:1203.0100 [cs.GT]. 5. Aumann, Yonatan; Dombb, Yair; Hassidim, Avinatan (2013). Computing Socially-Efficient Cake Divisions. AAMAS. 6. Cohler, Yuga Julian; Lai, John Kwang; Parkes, David C; Procaccia, Ariel (2011). Optimal Envy-Free Cake Cutting. AAAI. 7. Steven J. Brams; Michal Feldman; John K. Lai; Jamie Morgenstern; Ariel D. Procaccia (2012). On Maxsum Fair Cake Divisions. Proceedings of the 26th AAAI Conference on Artificial Intelligence (AAAI-12). pp. 1285–1291. Retrieved 6 December 2015. 8. Segal-Halevi, Erel; Sziklai, Balázs R. (2019-09-01). "Monotonicity and competitive equilibrium in cake-cutting". Economic Theory. 68 (2): 363–401. arXiv:1510.05229. doi:10.1007/s00199-018-1128-6. ISSN 1432-0479. S2CID 179618. 9. Segal-Halevi, Erel; Sziklai, Balázs R. (2018-09-01). "Resource-monotonicity and population-monotonicity in connected cake-cutting". Mathematical Social Sciences. 95: 19–30. arXiv:1703.08928. doi:10.1016/j.mathsocsci.2018.07.001. ISSN 0165-4896. S2CID 16282641.
Wikipedia
Utility functions on indivisible goods Some branches of economics and game theory deal with indivisible goods, discrete items that can be traded only as a whole. For example, in combinatorial auctions there is a finite set of items, and every agent can buy a subset of the items, but an item cannot be divided among two or more agents. It is usually assumed that every agent assigns subjective utility to every subset of the items. This can be represented in one of two ways: • An ordinal utility preference relation, usually marked by $\succ $. The fact that an agent prefers a set $A$ to a set $B$ is written $A\succ B$. If the agent only weakly prefers $A$ (i.e. either prefers $A$ or is indifferent between $A$ and $B$) then this is written $A\succeq B$. • A cardinal utility function, usually denoted by $u$. The utility an agent gets from a set $A$ is written $u(A)$. Cardinal utility functions are often normalized such that $u(\emptyset )=0$, where $\emptyset $ is the empty set. A cardinal utility function implies a preference relation: $u(A)>u(B)$ implies $A\succ B$ and $u(A)\geq u(B)$ implies $A\succeq B$. Utility functions can have several properties.[1] Monotonicity Monotonicity means that an agent always (weakly) prefers to have extra items. Formally: • For a preference relation: $A\supseteq B$ implies $A\succeq B$. • For a utility function: $A\supseteq B$ implies $u(A)\geq u(B)$ (i.e. u is a monotone function). Monotonicity is equivalent to the free disposal assumption: if an agent may always discard unwanted items, then extra items can never decrease the utility. Additivity Additive utility ($A$ : $u(A)$):
$\emptyset $ : 0
apple : 5
hat : 7
apple and hat : 12
Additivity (also called linearity or modularity) means that "the whole is equal to the sum of its parts." That is, the utility of a set of items is the sum of the utilities of each item separately. This property is relevant only for cardinal utility functions. It says that for every set $A$ of items, $u(A)=\sum _{x\in A}u({x})$ assuming that $u(\emptyset )=0$. In other words, $u$ is an additive function. An equivalent definition is: for any sets of items $A$ and $B$, $u(A)+u(B)=u(A\cup B)+u(A\cap B).$ An additive utility function is characteristic of independent goods. For example, an apple and a hat are considered independent: the utility a person receives from having an apple is the same whether or not he has a hat, and vice versa. A typical utility function for this case is given in the table above. Submodularity and supermodularity Submodular utility ($A$ : $u(A)$):
$\emptyset $ : 0
apple : 5
bread : 7
apple and bread : 9
Submodularity means that "the whole is not more than the sum of its parts (and may be less)." Formally, for all sets $A$ and $B$, $u(A)+u(B)\geq u(A\cup B)+u(A\cap B)$ In other words, $u$ is a submodular set function. An equivalent property is diminishing marginal utility, which means that for any sets $A$ and $B$ with $A\subseteq B$, and every $x\notin B$:[2] $u(A\cup \{x\})-u(A)\geq u(B\cup \{x\})-u(B)$. A submodular utility function is characteristic of substitute goods. For example, an apple and a bread loaf can be considered substitutes: the utility a person receives from eating an apple is smaller if he has already eaten bread (and vice versa), since he is less hungry in that case. A typical utility function for this case is given in the table above.
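Submodularity (equivalently, diminishing marginal utility) can be checked mechanically on small examples. The following sketch is an illustration added here, not from the cited sources; it tests the inequality $u(A)+u(B)\geq u(A\cup B)+u(A\cap B)$ over all pairs of subsets, using the apple/bread valuation from the table above.

```python
from itertools import combinations

def is_submodular(items, u):
    """u maps a frozenset of items to a number; check u(A)+u(B) >= u(A|B)+u(A&B) for all A, B."""
    subsets = [frozenset(c) for r in range(len(items) + 1)
               for c in combinations(items, r)]
    return all(u[a] + u[b] >= u[a | b] + u[a & b]
               for a in subsets for b in subsets)

# The substitute-goods (submodular) example from the table above.
u = {frozenset(): 0,
     frozenset({"apple"}): 5,
     frozenset({"bread"}): 7,
     frozenset({"apple", "bread"}): 9}
print(is_submodular(["apple", "bread"], u))  # True: e.g. 5 + 7 >= 9 + 0
```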
Supermodular utility ($A$ : $u(A)$):
$\emptyset $ : 0
apple : 5
knife : 7
apple and knife : 15
Supermodularity is the opposite of submodularity: it means that "the whole is not less than the sum of its parts (and may be more)". Formally, for all sets $A$ and $B$, $u(A)+u(B)\leq u(A\cup B)+u(A\cap B)$ In other words, $u$ is a supermodular set function. An equivalent property is increasing marginal utility, which means that for all sets $A$ and $B$ with $A\subseteq B$, and every $x\notin B$: $u(B\cup \{x\})-u(B)\geq u(A\cup \{x\})-u(A)$. A supermodular utility function is characteristic of complementary goods. For example, an apple and a knife can be considered complementary: the utility a person receives from an apple is larger if he already has a knife (and vice versa), since it is easier to eat an apple after cutting it with a knife. A possible utility function for this case is given in the table above. A utility function is additive if and only if it is both submodular and supermodular. Subadditivity and superadditivity Subadditive but not submodular ($A$ : $u(A)$):
$\emptyset $ : 0
X or Y or Z : 2
X,Y or Y,Z or Z,X : 3
X,Y,Z : 5
Subadditivity means that for every pair of disjoint sets $A,B$ $u(A\cup B)\leq u(A)+u(B)$ In other words, $u$ is a subadditive set function. Assuming $u(\emptyset )$ is non-negative, every submodular function is subadditive. However, there are non-negative subadditive functions that are not submodular. For example, assume that there are 3 identical items, $X$, $Y$, and $Z$, and the utility depends only on their quantity. The table above describes a utility function that is subadditive but not submodular, since $u(\{X,Y\})+u(\{Y,Z\})<u(\{X,Y\}\cup \{Y,Z\})+u(\{X,Y\}\cap \{Y,Z\}).$ Superadditive but not supermodular ($A$ : $u(A)$):
$\emptyset $ : 0
X or Y or Z : 1
X,Y or Y,Z or Z,X : 3
X,Y,Z : 4
Superadditivity means that for every pair of disjoint sets $A,B$ $u(A\cup B)\geq u(A)+u(B)$ In other words, $u$ is a superadditive set function. Assuming $u(\emptyset )$ is non-positive, every supermodular function is superadditive. However, there are non-negative superadditive functions that are not supermodular. For example, assume that there are 3 identical items, $X$, $Y$, and $Z$, and the utility depends only on their quantity. The table above describes a utility function that is non-negative and superadditive but not supermodular, since $u(\{X,Y\})+u(\{Y,Z\})>u(\{X,Y\}\cup \{Y,Z\})+u(\{X,Y\}\cap \{Y,Z\}).$ A utility function with $u(\emptyset )=0$ is said to be additive if and only if it is both superadditive and subadditive. With the typical assumption that $u(\emptyset )=0$, every submodular function is subadditive and every supermodular function is superadditive. Without any assumption on the utility from the empty set, these relations do not hold. In particular, if a submodular function is not subadditive, then $u(\emptyset )$ must be negative. For example, suppose there are two items, $X,Y$, with $u(\emptyset )=-1$, $u(\{X\})=u(\{Y\})=1$ and $u(\{X,Y\})=3$. This utility function is submodular and supermodular and non-negative except on the empty set, but is not subadditive, since $u(\{X,Y\})>u(\{X\})+u(\{Y\}).$ Also, if a supermodular function is not superadditive, then $u(\emptyset )$ must be positive. Suppose instead that $u(\emptyset )=u(\{X\})=u(\{Y\})=u(\{X,Y\})=1$.
This utility function is non-negative, supermodular, and submodular, but is not superadditive, since $u(\{X,Y\})<u(\{X\})+u(\{Y\}).$ Unit demand Unit demand utility ($A$ : $u(A)$):
$\emptyset $ : 0
apple : 5
pear : 7
apple and pear : 7
Unit demand (UD) means that the agent only wants a single good. If the agent gets two or more goods, he uses the one of them that gives him the highest utility, and discards the rest. Formally: • For a preference relation: for every set $B$ there is a subset $A\subseteq B$ with cardinality $|A|=1$, such that $A\succeq B$. • For a utility function: For every set $A$:[3] $u(A)=\max _{x\in A}u({x})$ A unit-demand function is an extreme case of a submodular function. It is characteristic of goods that are pure substitutes. For example, if there are an apple and a pear, and an agent wants to eat a single fruit, then his utility function is unit-demand, as exemplified in the table above. Gross substitutes Gross substitutes (GS) means that the agent regards the items as substitute goods or independent goods but not complementary goods. There are many formal definitions of this property, all of which are equivalent. • Every UD valuation is GS, but the opposite is not true. • Every GS valuation is submodular, but the opposite is not true. See Gross substitutes (indivisible items) for more details. Hence the following relations hold between the classes: $UD\subsetneq GS\subsetneq Submodular\subsetneq Subadditive$ Aggregates of utility functions A utility function describes the happiness of an individual. Often, we need a function that describes the happiness of an entire society. Such a function is called a social welfare function, and it is usually an aggregate function of two or more utility functions. If the individual utility functions are additive, then the following is true for the aggregate functions (each entry lists the values on {a}, {b} and {a,b}):
Aggregate function : Property : f : g : h : aggregate(f,g,h)
Sum : Additive : 1,3; 4 : 3,1; 4 : - : 4,4; 8
Average : Additive : 1,3; 4 : 3,1; 4 : - : 2,2; 4
Minimum : Super-additive : 1,3; 4 : 3,1; 4 : - : 1,1; 4
Maximum : Sub-additive : 1,3; 4 : 3,1; 4 : - : 3,3; 4
Median : neither : 1,3; 4 : 3,1; 4 : 1,1; 2 : 1,1; 4
Median : neither : 1,3; 4 : 3,1; 4 : 3,3; 6 : 3,3; 4
See also • Utility functions on divisible goods • Single-minded agent References 1. Gul, F.; Stacchetti, E. (1999). "Walrasian Equilibrium with Gross Substitutes". Journal of Economic Theory. 87: 95–124. doi:10.1006/jeth.1999.2531. 2. Moulin, Hervé (1991). Axioms of cooperative decision making. Cambridge England New York: Cambridge University Press. ISBN 9780521424585. 3. Koopmans, T. C.; Beckmann, M. (1957). "Assignment Problems and the Location of Economic Activities" (PDF). Econometrica. 25 (1): 53–76. doi:10.2307/1907742. JSTOR 1907742.
Wikipedia
UTM theorem In computability theory, the UTM theorem, or universal Turing machine theorem, is a basic result about Gödel numberings of the set of computable functions. It affirms the existence of a computable universal function, which is capable of calculating any other computable function.[1] The universal function is an abstract version of the universal Turing machine, thus the name of the theorem. Rogers' equivalence theorem provides a characterization of the Gödel numbering of the computable functions in terms of the smn theorem and the UTM theorem. Theorem The theorem states that a partial computable function u of two variables exists such that, for every computable function f of one variable, an e exists such that $f(x)\simeq u(e,x)$ for all x. This means that, for each x, either f(x) and u(e,x) are both defined and are equal, or are both undefined.[2] The theorem thus shows that, defining φe(x) as u(e, x), the sequence φ1, φ2, … is an enumeration of the partial computable functions. The function $u$ in the statement of the theorem is called a universal function. References 1. Rogers 1987, p. 22. 2. Soare 1987, p. 15. • Rogers, H. (1987) [1967]. The Theory of Recursive Functions and Effective Computability. First MIT press paperback edition. ISBN 0-262-68052-1. • Soare, R. (1987). Recursively enumerable sets and degrees. Perspectives in Mathematical Logic. Springer-Verlag. ISBN 3-540-15299-7.
Wikipedia
Uwe Jannsen Uwe Jannsen (born 11 March 1954)[1] is a German mathematician, specializing in algebra, algebraic number theory, and algebraic geometry. Education and career Born in Meddewade, Jannsen studied mathematics and physics at the University of Hamburg with Diplom in mathematics in 1978 and with Promotion (PhD) in 1980 under Helmut Brückner and Jürgen Neukirch with thesis Über Galoisgruppen lokaler Körper (On Galois groups of local fields).[2] In the academic year 1983–1984 he was a postdoc at Harvard University. From 1980 to 1989 he was an assistant and then docent at the University of Regensburg, where he received in 1988 his habilitation. From 1989 to 1991 he held a research professorship at the Max-Planck-Institut für Mathematik in Bonn. In 1991 he became a full professor at the University of Cologne and since 1999 he has been a professor at the University of Regensburg. Jannsen's research deals with, among other topics, the Galois theory of algebraic number fields, the theory of motives in algebraic geometry, the Hasse principle (local–global principle), and resolution of singularities. In particular, he has done research on a cohomology theory for algebraic varieties, involving their extension in mixed motives as a development of research by Pierre Deligne, and a motivic cohomology as a development of research by Vladimir Voevodsky. In the 1980s with Kay Wingberg he completely described the absolute Galois group of p-adic number fields, i.e. in the local case.[3] In 1994 he was an Invited Speaker with talk Mixed motives, motivic cohomology and Ext-groups at the International Congress of Mathematicians in Zürich.[4] He was elected in 2009 a full member of the Bayerische Akademie der Wissenschaften and in 2011 a full member of the Academia Europaea. His doctoral students include Moritz Kerz.[5] Selected publications • Continuous étale cohomology, Mathematische Annalen vol. 280, no. 2 1988, pp. 207–245 doi:10.1007/BF01456052 • "On the ℓ-adic cohomology of varieties over number fields and its Galois cohomology." In Galois Groups over $\mathbb {Q} $, pp. 315–360. Springer, New York, NY, 1989. • Mixed motives and algebraic K-theory, Lecture Notes in Mathematics vol. 1400, Springer Verlag 1990 (with appendices by C. Schoen and Spencer Bloch). • with Steven Kleiman and Jean-Pierre Serre (eds.): Motives, Proc. Symposium Pure Mathematics vol. 55, 2 vols., American Mathematical Society 1994 (Conference University of Washington, Seattle, 1991) vol. 2 • Motives, numerical equivalence and semi-simplicity, Inventiones Mathematicae, vol. 107 1992, pp. 447–452 doi:10.1007/BF01231898 References 1. biography, pdf, University of Regensburg 2. Jannsen, Uwe (1982). "Über Galoisgruppen lokaler Körper" (PDF). Inventiones Mathematicae. 70: 53–69. doi:10.1007/BF01393198. S2CID 120934623. 3. Jannsen, U.; Wingberg, K. (1982). "'Die Struktur der absoluten Galoisgruppe p-adischer Zahlkörper" (PDF). Inventiones Mathematicae. 70: 71–98. doi:10.1007/BF01393199. S2CID 119378923. 4. Jannsen, Uwe. "Mixed motives, motivic cohomology, and Ext-groups." In Proceedings of the International Congress of Mathematicians, vol. 1, p. 2. 1994. 5. Uwe Jannsen at the Mathematics Genealogy Project External links • Homepage in Regensburg • Bericht seiner Forschungsgruppe in Regensburg Authority control International • ISNI • VIAF National • Germany • Israel • Belgium • United States • Czech Republic • Netherlands Academics • MathSciNet • Mathematics Genealogy Project • zbMATH People • Deutsche Biographie Other • IdRef
Wikipedia
Uzawa iteration In numerical mathematics, the Uzawa iteration is an algorithm for solving saddle point problems. It is named after Hirofumi Uzawa and was originally introduced in the context of concave programming.[1] Basic idea We consider a saddle point problem of the form ${\begin{pmatrix}A&B\\B^{*}&\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}={\begin{pmatrix}b_{1}\\b_{2}\end{pmatrix}},$ where $A$ is a symmetric positive-definite matrix. Multiplying the first row by $B^{*}A^{-1}$ and subtracting from the second row yields the upper-triangular system ${\begin{pmatrix}A&B\\&-S\end{pmatrix}}{\begin{pmatrix}x_{1}\\x_{2}\end{pmatrix}}={\begin{pmatrix}b_{1}\\b_{2}-B^{*}A^{-1}b_{1}\end{pmatrix}},$ where $S:=B^{*}A^{-1}B$ denotes the Schur complement. Since $S$ is symmetric positive-definite, we can apply standard iterative methods like the gradient descent method or the conjugate gradient method to $Sx_{2}=B^{*}A^{-1}b_{1}-b_{2}$ in order to compute $x_{2}$. The vector $x_{1}$ can be reconstructed by solving $Ax_{1}=b_{1}-Bx_{2}.$ It is possible to update $x_{1}$ alongside $x_{2}$ during the iteration for the Schur complement system and thus obtain an efficient algorithm. Implementation We start the conjugate gradient iteration by computing the residual $r_{2}:=B^{*}A^{-1}b_{1}-b_{2}-Sx_{2}=B^{*}A^{-1}(b_{1}-Bx_{2})-b_{2}=B^{*}x_{1}-b_{2},$ of the Schur complement system, where $x_{1}:=A^{-1}(b_{1}-Bx_{2})$ denotes the upper half of the solution vector matching the initial guess $x_{2}$ for its lower half. We complete the initialization by choosing the first search direction $p_{2}:=r_{2}.$ In each step, we compute $a_{2}:=Sp_{2}=B^{*}A^{-1}Bp_{2}=B^{*}p_{1}$ and keep the intermediate result $p_{1}:=A^{-1}Bp_{2}$ for later. The scaling factor is given by $\alpha :=p_{2}^{*}r_{2}/p_{2}^{*}a_{2}$ and leads to the updates $x_{2}:=x_{2}+\alpha p_{2},\quad r_{2}:=r_{2}-\alpha a_{2}.$ Using the intermediate result $p_{1}$ saved earlier, we can also update the upper part of the solution vector $x_{1}:=x_{1}-\alpha p_{1}.$ Now we only have to construct the new search direction by the Gram–Schmidt process, i.e., $\beta :=r_{2}^{*}a_{2}/p_{2}^{*}a_{2},\quad p_{2}:=r_{2}-\beta p_{2}.$ The iteration terminates if the residual $r_{2}$ has become sufficiently small or if the norm of $p_{2}$ is significantly smaller than that of $r_{2}$, indicating that the Krylov subspace has been almost exhausted. Modifications and extensions If solving the linear system $Ax=b$ exactly is not feasible, inexact solvers can be applied.[2][3][4] If the Schur complement system is ill-conditioned, preconditioners can be employed to improve the speed of convergence of the underlying gradient method.[2][5] Inequality constraints can be incorporated, e.g., in order to handle obstacle problems.[5]
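The implementation described above can be written down compactly with NumPy. The sketch below is an illustration added here, not code from the cited references; it uses dense solves where a practical implementation would reuse a factorization of $A$, and the test matrices are made up for the example.

```python
import numpy as np

def uzawa_cg(A, B, b1, b2, x2=None, tol=1e-10, max_iter=100):
    """Solve [[A, B], [B*, 0]] [x1; x2] = [b1; b2] with A symmetric positive definite,
    by running conjugate gradients on the Schur complement S = B* A^{-1} B."""
    solveA = lambda rhs: np.linalg.solve(A, rhs)   # stands in for a reusable factorization of A
    x2 = np.zeros(B.shape[1]) if x2 is None else np.asarray(x2, dtype=float)
    x1 = solveA(b1 - B @ x2)
    r2 = B.conj().T @ x1 - b2          # residual of S x2 = B* A^{-1} b1 - b2
    p2 = r2.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r2) < tol:
            break
        p1 = solveA(B @ p2)            # p1 = A^{-1} B p2, reused for the x1 update
        a2 = B.conj().T @ p1           # a2 = S p2
        alpha = (p2 @ r2) / (p2 @ a2)  # scaling factor alpha = p2* r2 / p2* a2
        x2 = x2 + alpha * p2
        r2 = r2 - alpha * a2
        x1 = x1 - alpha * p1
        beta = (r2 @ a2) / (p2 @ a2)   # Gram-Schmidt coefficient for the new direction
        p2 = r2 - beta * p2
    return x1, x2

# Small illustrative saddle point problem.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0], [2.0]])
x1, x2 = uzawa_cg(A, B, b1=np.array([1.0, 2.0]), b2=np.array([3.0]))
print(x1, x2)  # satisfies A x1 + B x2 = b1 and B^T x1 = b2
```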
(1998). "Analysis of iterative methods for saddle point problems. A unified approach". Math. Comp. 71 (238): 479–505. doi:10.1090/S0025-5718-01-01324-2. 5. Gräser, C.; Kornhuber, R. (2007). "On Preconditioned Uzawa-type Iterations for a Saddle Point Problem with Inequality Constraints". Domain Decomposition Methods in Science and Engineering XVI. Lec. Not. Comp. Sci. Eng. Vol. 55. pp. 91–102. CiteSeerX 10.1.1.72.9238. doi:10.1007/978-3-540-34469-8_8. ISBN 978-3-540-34468-1. Further reading • Chen, Zhangxin (2006). "Linear System Solution Techniques". Finite Element Methods and Their Applications. Berlin: Springer. pp. 145–154. ISBN 978-3-540-28078-1.
Wikipedia
Convex polytope A convex polytope is a special case of a polytope, having the additional property that it is also a convex set contained in the $n$-dimensional Euclidean space $\mathbb {R} ^{n}$. Most texts[1][2] use the term "polytope" for a bounded convex polytope, and the word "polyhedron" for the more general, possibly unbounded object. Others[3] (including this article) allow polytopes to be unbounded. The terms "bounded/unbounded convex polytope" will be used below whenever the boundedness is critical to the discussed issue. Yet other texts identify a convex polytope with its boundary. Convex polytopes play an important role both in various branches of mathematics and in applied areas, most notably in linear programming. In the influential textbooks of Grünbaum[1] and Ziegler[2] on the subject, as well as in many other texts in discrete geometry, convex polytopes are often simply called "polytopes". Grünbaum points out that this is solely to avoid the endless repetition of the word "convex", and that the discussion should throughout be understood as applying only to the convex variety (p. 51). A polytope is called full-dimensional if it is an $n$-dimensional object in $\mathbb {R} ^{n}$. Examples • Many examples of bounded convex polytopes can be found in the article "polyhedron". • In the 2-dimensional case the full-dimensional examples are a half-plane, a strip between two parallel lines, an angle shape (the intersection of two non-parallel half-planes), a shape defined by a convex polygonal chain with two rays attached to its ends, and a convex polygon. • Special cases of an unbounded convex polytope are a slab between two parallel hyperplanes, a wedge defined by two non-parallel half-spaces, a polyhedral cylinder (infinite prism), and a polyhedral cone (infinite cone) defined by three or more half-spaces passing through a common point. Definitions A convex polytope may be defined in a number of ways, depending on what is more suitable for the problem at hand. Grünbaum's definition is in terms of a convex set of points in space. Other important definitions are: as the intersection of half-spaces (half-space representation) and as the convex hull of a set of points (vertex representation). Vertex representation (convex hull) In his book Convex Polytopes, Grünbaum defines a convex polytope as a compact convex set with a finite number of extreme points: A set $K$ of $\mathbb {R} ^{n}$ is convex if, for each pair of distinct points $a$, $b$ in $K$, the closed segment with endpoints $a$ and $b$ is contained within $K$. This is equivalent to defining a bounded convex polytope as the convex hull of a finite set of points, where the finite set must contain the set of extreme points of the polytope. Such a definition is called a vertex representation (V-representation or V-description).[1] For a compact convex polytope, the minimal V-description is unique and it is given by the set of the vertices of the polytope.[1] A convex polytope is called an integral polytope if all of its vertices have integer coordinates. Intersection of half-spaces A convex polytope may be defined as an intersection of a finite number of half-spaces. Such definition is called a half-space representation (H-representation or H-description).[1] There exist infinitely many H-descriptions of a convex polytope. 
However, for a full-dimensional convex polytope, the minimal H-description is in fact unique and is given by the set of the facet-defining halfspaces.[1] A closed half-space can be written as a linear inequality:[1] $a_{1}x_{1}+a_{2}x_{2}+\cdots +a_{n}x_{n}\leq b$ where $n$ is the dimension of the space containing the polytope under consideration. Hence, a closed convex polytope may be regarded as the set of solutions to the system of linear inequalities: ${\begin{alignedat}{7}a_{11}x_{1}&&\;+\;&&a_{12}x_{2}&&\;+\cdots +\;&&a_{1n}x_{n}&&\;\leq \;&&&b_{1}\\a_{21}x_{1}&&\;+\;&&a_{22}x_{2}&&\;+\cdots +\;&&a_{2n}x_{n}&&\;\leq \;&&&b_{2}\\\vdots \;\;\;&&&&\vdots \;\;\;&&&&\vdots \;\;\;&&&&&\;\vdots \\a_{m1}x_{1}&&\;+\;&&a_{m2}x_{2}&&\;+\cdots +\;&&a_{mn}x_{n}&&\;\leq \;&&&b_{m}\\\end{alignedat}}$ where $m$ is the number of half-spaces defining the polytope. This can be concisely written as the matrix inequality: $Ax\leq b$ where $A$ is an $m\times n$ matrix, $x$ is an $n\times 1$ column vector whose coordinates are the variables $x_{1}$ to $x_{n}$, and $b$ is an $m\times 1$ column vector whose coordinates are the right-hand sides $b_{1}$ to $b_{m}$ of the scalar inequalities. An open convex polytope is defined in the same way, with strict inequalities used in the formulas instead of the non-strict ones. The coefficients of each row of $A$ and $b$ correspond with the coefficients of the linear inequality defining the respective half-space. Hence, each row in the matrix corresponds with a supporting hyperplane of the polytope, a hyperplane bounding a half-space that contains the polytope. If a supporting hyperplane also intersects the polytope, it is called a bounding hyperplane (since it is a supporting hyperplane, it can only intersect the polytope at the polytope's boundary). The foregoing definition assumes that the polytope is full-dimensional. In this case, there is a unique minimal set of defining inequalities (up to multiplication by a positive number). Inequalities belonging to this unique minimal system are called essential. The set of points of a polytope which satisfy an essential inequality with equality is called a facet. If the polytope is not full-dimensional, then the solutions of $Ax\leq b$ lie in a proper affine subspace of $\mathbb {R} ^{n}$ and the polytope can be studied as an object in this subspace. In this case, there exist linear equations which are satisfied by all points of the polytope. Adding one of these equations to any of the defining inequalities does not change the polytope. Therefore, in general there is no unique minimal set of inequalities defining the polytope. In general the intersection of arbitrary half-spaces need not be bounded. However if one wishes to have a definition equivalent to that as a convex hull, then bounding must be explicitly required. Using the different representations The two representations together provide an efficient way to decide whether a given vector is included in a given convex polytope: to show that it is in the polytope, it is sufficient to present it as a convex combination of the polytope vertices (the V-description is used); to show that it is not in the polytope, it is sufficient to present a single defining inequality that it violates.[4]: 256  A subtle point in the representation by vectors is that the number of vectors may be exponential in the dimension, so the proof that a vector is in the polytope might be exponentially long. 
Fortunately, Carathéodory's theorem guarantees that every vector in the polytope can be represented by at most d+1 defining vectors, where d is the dimension of the space. Representation of unbounded polytopes For an unbounded polytope (sometimes called: polyhedron), the H-description is still valid, but the V-description should be extended. Theodore Motzkin (1936) proved that any unbounded polytope can be represented as a sum of a bounded polytope and a convex polyhedral cone.[5] In other words, every vector in an unbounded polytope is a convex sum of its vertices (its "defining points"), plus a conical sum of the Euclidean vectors of its infinite edges (its "defining rays"). This is called the finite basis theorem.[3] Properties Every (bounded) convex polytope is the image of a simplex, as every point is a convex combination of the (finitely many) vertices. However, polytopes are not in general isomorphic to simplices. This is in contrast to the case of vector spaces and linear combinations, every finite-dimensional vector space being not only an image of, but in fact isomorphic to, Euclidean space of some dimension (or analog over other fields). The face lattice Main article: abstract polytope A face of a convex polytope is any intersection of the polytope with a halfspace such that none of the interior points of the polytope lie on the boundary of the halfspace. Equivalently, a face is the set of points giving equality in some valid inequality of the polytope.[4]: 258  If a polytope is d-dimensional, its facets are its (d − 1)-dimensional faces, its vertices are its 0-dimensional faces, its edges are its 1-dimensional faces, and its ridges are its (d − 2)-dimensional faces. Given a convex polytope P defined by the matrix inequality $Ax\leq b$, if each row in A corresponds with a bounding hyperplane and is linearly independent of the other rows, then each facet of P corresponds with exactly one row of A, and vice versa. Each point on a given facet will satisfy the linear equality of the corresponding row in the matrix. (It may or may not also satisfy equality in other rows). Similarly, each point on a ridge will satisfy equality in two of the rows of A. In general, an (n − j)-dimensional face satisfies equality in j specific rows of A. These rows form a basis of the face. Geometrically speaking, this means that the face is the set of points on the polytope that lie in the intersection of j of the polytope's bounding hyperplanes. The faces of a convex polytope thus form an Eulerian lattice called its face lattice, where the partial ordering is by set containment of faces. The definition of a face given above allows both the polytope itself and the empty set to be considered as faces, ensuring that every pair of faces has a join and a meet in the face lattice. The whole polytope is the unique maximum element of the lattice, and the empty set, considered to be a (−1)-dimensional face (a null polytope) of every polytope, is the unique minimum element of the lattice. Two polytopes are called combinatorially isomorphic if their face lattices are isomorphic. The polytope graph (polytopal graph, graph of the polytope, 1-skeleton) is the set of vertices and edges of the polytope only, ignoring higher-dimensional faces. For instance, a polyhedral graph is the polytope graph of a three-dimensional polytope. By a result of Whitney[6] the face lattice of a three-dimensional polytope is determined by its graph. 
The same is true for simple polytopes of arbitrary dimension (Blind & Mani-Levitska 1987, proving a conjecture of Micha Perles).[7] Kalai (1988)[8] gives a simple proof based on unique sink orientations. Because these polytopes' face lattices are determined by their graphs, the problem of deciding whether two three-dimensional or simple convex polytopes are combinatorially isomorphic can be formulated equivalently as a special case of the graph isomorphism problem. However, it is also possible to translate these problems in the opposite direction, showing that polytope isomorphism testing is graph-isomorphism complete.[9] Topological properties A convex polytope, like any compact convex subset of Rn, is homeomorphic to a closed ball.[10] Let m denote the dimension of the polytope. If the polytope is full-dimensional, then m = n. The convex polytope therefore is an m-dimensional manifold with boundary, its Euler characteristic is 1, and its fundamental group is trivial. The boundary of the convex polytope is homeomorphic to an (m − 1)-sphere. The boundary's Euler characteristic is 0 for even m and 2 for odd m. The boundary may also be regarded as a tessellation of (m − 1)-dimensional spherical space — i.e. as a spherical tiling. Simplicial decomposition A convex polytope can be decomposed into a simplicial complex, or union of simplices, satisfying certain properties. Given a convex r-dimensional polytope P, a subset of its vertices containing (r+1) affinely independent points defines an r-simplex. It is possible to form a collection of subsets such that the union of the corresponding simplices is equal to P, and the intersection of any two simplices is either empty or a lower-dimensional simplex. This simplicial decomposition is the basis of many methods for computing the volume of a convex polytope, since the volume of a simplex is easily given by a formula.[11] Algorithmic problems for a convex polytope Construction of representations Different representations of a convex polytope have different utility, therefore the construction of one representation given another one is an important problem. The problem of the construction of a V-representation is known as the vertex enumeration problem and the problem of the construction of a H-representation is known as the facet enumeration problem. While the vertex set of a bounded convex polytope uniquely defines it, in various applications it is important to know more about the combinatorial structure of the polytope, i.e., about its face lattice. Various convex hull algorithms deal both with the facet enumeration and face lattice construction. In the planar case, i.e., for a convex polygon, both facet and vertex enumeration problems amount to the ordering vertices (resp. edges) around the convex hull. It is a trivial task when the convex polygon is specified in a traditional way for polygons, i.e., by the ordered sequence of its vertices $v_{1},\dots ,v_{m}$. When the input list of vertices (or edges) is unordered, the time complexity of the problems becomes O(m log m).[12] A matching lower bound is known in the algebraic decision tree model of computation.[13] Volume computation The task of computing the volume of a convex polytope has been studied in the field of computational geometry. The volume can be computed approximately, for instance, using the convex volume approximation technique, when having access to a membership oracle. 
As for exact computation, one obstacle is that, when given a representation of the convex polytope as an equation system of linear inequalities, the volume of the polytope may have a bit-length which is not polynomial in this representation.[14] See also • Oriented matroid • Nef polyhedron • Steinitz's theorem for convex polyhedra References 1. Branko Grünbaum, Convex Polytopes, 2nd edition, prepared by Volker Kaibel, Victor Klee, and Günter M. Ziegler, 2003, ISBN 0-387-40409-0, ISBN 978-0-387-40409-7, 466pp. 2. Ziegler, Günter M. (1995), Lectures on Polytopes, Graduate Texts in Mathematics, vol. 152, Berlin, New York: Springer-Verlag. 3. Mathematical Programming, by Melvyn W. Jeter (1986) ISBN 0-8247-7478-7, p. 68 4. Lovász, László; Plummer, M. D. (1986), Matching Theory, Annals of Discrete Mathematics, vol. 29, North-Holland, ISBN 0-444-87916-1, MR 0859549 5. Motzkin, Theodore (1936). Beitrage zur Theorie der linearen Ungleichungen (Ph.D. dissertation). Jerusalem.{{cite book}}: CS1 maint: location missing publisher (link) 6. Whitney, Hassler (1932). "Congruent graphs and the connectivity of graphs". Amer. J. Math. 54 (1): 150–168. doi:10.2307/2371086. hdl:10338.dmlcz/101067. JSTOR 2371086. 7. Blind, Roswitha; Mani-Levitska, Peter (1987), "Puzzles and polytope isomorphisms", Aequationes Mathematicae, 34 (2–3): 287–297, doi:10.1007/BF01830678, MR 0921106. 8. Kalai, Gil (1988), "A simple way to tell a simple polytope from its graph", Journal of Combinatorial Theory, Ser. A, 49 (2): 381–383, doi:10.1016/0097-3165(88)90064-7, MR 0964396. 9. Kaibel, Volker; Schwartz, Alexander (2003). "On the Complexity of Polytope Isomorphism Problems". Graphs and Combinatorics. 19 (2): 215–230. arXiv:math/0106093. doi:10.1007/s00373-002-0503-y. Archived from the original on 2015-07-21. 10. Glen E. Bredon, Topology and Geometry, 1993, ISBN 0-387-97926-3, p. 56. 11. Büeler, B.; Enge, A.; Fukuda, K. (2000). "Exact Volume Computation for Polytopes: A Practical Study". Polytopes — Combinatorics and Computation. p. 131. doi:10.1007/978-3-0348-8438-9_6. ISBN 978-3-7643-6351-2. 12. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2001) [1990]. "33.3 Finding the convex hull". Introduction to Algorithms (2nd ed.). MIT Press and McGraw-Hill. pp. 947–957. ISBN 0-262-03293-7. 13. Yao, Andrew Chi Chih (1981), "A lower bound to finding convex hulls", Journal of the ACM, 28 (4): 780–787, doi:10.1145/322276.322289, MR 0677089; Ben-Or, Michael (1983), "Lower Bounds for Algebraic Computation Trees", Proceedings of the Fifteenth Annual ACM Symposium on Theory of Computing (STOC '83), pp. 80–86, doi:10.1145/800061.808735. 14. Lawrence, Jim (1991). "Polytope volume computation". Mathematics of Computation. 57 (195): 259–271. doi:10.1090/S0025-5718-1991-1079024-2. ISSN 0025-5718. External links Wikimedia Commons has media related to Convex polytopes. • Weisstein, Eric W. "Convex polygon". MathWorld. • Weisstein, Eric W. "Convex polyhedron". MathWorld. • Komei Fukuda, Polyhedral computation FAQ.
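As a small illustration of the H-description discussed above (the data below is made up for the example and is not from the cited sources), membership of a point in $\{x:Ax\leq b\}$ can be tested directly, and a violated inequality serves as a certificate that the point lies outside the polytope:

```python
import numpy as np

def h_membership(A, b, x, tol=1e-9):
    """Return (True, None) if A x <= b holds, otherwise (False, i) where
    row i is a violated inequality certifying that x is outside the polytope."""
    slack = b - A @ x
    violated = np.where(slack < -tol)[0]
    if violated.size == 0:
        return True, None
    return False, int(violated[0])

# The unit square in the plane: 0 <= x <= 1, 0 <= y <= 1.
A = np.array([[ 1.0,  0.0], [-1.0,  0.0], [ 0.0,  1.0], [ 0.0, -1.0]])
b = np.array([1.0, 0.0, 1.0, 0.0])
print(h_membership(A, b, np.array([0.5, 0.5])))   # (True, None)
print(h_membership(A, b, np.array([1.5, 0.5])))   # (False, 0): the inequality x <= 1 is violated
```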
Wikipedia
V-ring (ring theory) In mathematics, a V-ring is a ring R such that every simple R-module is injective. The following three conditions are equivalent:[1] 1. Every simple left (resp. right) R-module is injective 2. The radical of every left (resp. right) R-module is zero 3. Every left (resp. right) ideal of R is an intersection of maximal left (resp. right) ideals of R A commutative ring is a V-ring if and only if it is Von Neumann regular.[2] References 1. Faith, Carl (1973). Algebra: Rings, modules, and categories. Springer-Verlag. ISBN 978-0387055510. Retrieved 24 October 2015. 2. Michler, G.O.; Villamayor, O.E. (April 1973). "On rings whose simple modules are injective". Journal of Algebra. 25 (1): 185–201. doi:10.1016/0021-8693(73)90088-4.
Wikipedia
Victor-Amédée Lebesgue Victor-Amédée Lebesgue, sometimes written Le Besgue, (2 October 1791, Grandvilliers (Oise) – 10 June 1875, Bordeaux (Gironde)) was a mathematician working on number theory. He was elected a member of the Académie des sciences in 1847. For the analyst, see Henri Lebesgue. Victor-Amédée Lebesgue Born(1791-10-02)2 October 1791 Grandvilliers, France Died10 June 1875(1875-06-10) (aged 83) Bordeaux, France Scientific career FieldsMathematics See also • Catalan's conjecture • Proof of Fermat's Last Theorem for specific exponents • Lebesgue–Nagell type equations Publications • Lebesgue, Victor-Amédée (1837), Thèses de mécanique et d'astronomie • Lebesgue, Victor-Amédée (1859), Exercices d'analyse numérique • Lebesgue, Victor-Amédée (1862), Introduction à la théorie des nombres, Paris{{citation}}: CS1 maint: location missing publisher (link) • Lebesgue, Victor Amédée (1864), Tables diverses pour le décomposition des nombres en leurs facteurs premiers References • Abria, O.; Hoüel, J. (1876), "Notice sur la vie et les travaux de Victor Amédée Le Besgue", Bullettino di Bibliografia e di Storia delle Scienze Matematiche e Fisiche, IX: 554–594 • LEBESGUE , Victor Amédée Authority control International • ISNI • VIAF National • France • BnF data • Germany • United States • Netherlands Academics • zbMATH Other • IdRef
Wikipedia
V. J. Havel Václav Jaromír Havel is a Czech mathematician. He is known for characterizing the degree sequences of undirected graphs and for the Havel–Hakimi algorithm, an important contribution to the theory of graphs.[1] Selected publications • Havel, Václav (1955), "A remark on the existence of finite graphs", Časopis pro pěstování matematiky (in Czech), 80 (4): 477–480, doi:10.21136/CPM.1955.108220 References 1. Allenby, R.B.J.T.; Slomson, Alan (2011), "Theorem 9.3: the Havel–Hakimi theorem", How to Count: An Introduction to Combinatorics, Discrete Mathematics and Its Applications (2nd ed.), CRC Press, p. 159, ISBN 9781420082616, A proof of this theorem was first published by Václav Havel ... in 1963 another proof was published independently by S. L. Hakimi. Authority control International • VIAF National • Czech Republic Academics • MathSciNet • Scopus • zbMATH
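The Havel–Hakimi algorithm mentioned above admits a very short implementation. The following sketch is added here for illustration and is not part of the source; it tests whether a sequence of non-negative integers is the degree sequence of some simple graph.

```python
def is_graphical(degrees):
    """Havel-Hakimi test: repeatedly remove the largest degree d and subtract 1
    from the next d largest degrees; the sequence is graphical iff this ends at all zeros."""
    seq = sorted(degrees, reverse=True)
    while seq and seq[0] > 0:
        d = seq.pop(0)
        if d > len(seq):
            return False          # not enough remaining vertices to connect to
        for i in range(d):
            seq[i] -= 1
            if seq[i] < 0:
                return False
        seq.sort(reverse=True)
    return True

print(is_graphical([3, 3, 2, 2, 1, 1]))  # True
print(is_graphical([4, 1, 1, 1]))        # False
```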
Wikipedia
Ordinal definable set In mathematical set theory, a set S is said to be ordinal definable if, informally, it can be defined in terms of a finite number of ordinals by a first-order formula. Ordinal definable sets were introduced by Gödel (1965). A drawback to this informal definition is that it requires quantification over all first-order formulas, which cannot be formalized in the language of set theory. However there is a different way of stating the definition that can be so formalized. In this approach, a set S is formally defined to be ordinal definable if there is some collection of ordinals α1, ..., αn such that $S\in V_{\alpha _{1}}$ and $S$ can be defined as an element of $V_{\alpha _{1}}$ by a first-order formula φ taking α2, ..., αn as parameters. Here $V_{{\alpha }_{1}}$ denotes the set indexed by the ordinal α1 in the von Neumann hierarchy. In other words, S is the unique object such that φ(S, α2...αn) holds with its quantifiers ranging over $V_{\alpha _{1}}$. The class of all ordinal definable sets is denoted OD; it is not necessarily transitive, and need not be a model of ZFC because it might not satisfy the axiom of extensionality. A set is hereditarily ordinal definable if it is ordinal definable and all elements of its transitive closure are ordinal definable. The class of hereditarily ordinal definable sets is denoted by HOD, and is a transitive model of ZFC, with a definable well ordering. It is consistent with the axioms of set theory that all sets are ordinal definable, and so hereditarily ordinal definable. The assertion that this situation holds is referred to as V = OD or V = HOD. It follows from V = L, and is equivalent to the existence of a (definable) well-ordering of the universe. Note however that the formula expressing V = HOD need not hold true within HOD, as it is not absolute for models of set theory: within HOD, the interpretation of the formula for HOD may yield an even smaller inner model. HOD has been found to be useful in that it is an inner model that can accommodate essentially all known large cardinals. This is in contrast with the situation for core models, as core models have not yet been constructed that can accommodate supercompact cardinals, for example. References • Gödel, Kurt (1965) [1946], "Remarks before the Princeton Bicentennial Conference on Problems in Mathematics", in Davis, Martin (ed.), The undecidable. Basic papers on undecidable propositions, unsolvable problems and computable functions, Raven Press, Hewlett, N.Y., pp. 84–88, ISBN 978-0-486-43228-1, MR 0189996 • Kunen, Kenneth (1980), Set theory: An introduction to independence proofs, Elsevier, ISBN 978-0-444-86839-8
Wikipedia
VEGAS algorithm The VEGAS algorithm, due to G. Peter Lepage,[1][2][3] is a method for reducing error in Monte Carlo simulations by using a known or approximate probability distribution function to concentrate the search in those areas of the integrand that make the greatest contribution to the final integral. The VEGAS algorithm is based on importance sampling. It samples points from the probability distribution described by the function $|f|,$ so that the points are concentrated in the regions that make the largest contribution to the integral. The GNU Scientific Library (GSL) provides a VEGAS routine. Sampling method In general, if the Monte Carlo integral of $f$ over a volume $\Omega $ is sampled with points distributed according to a probability distribution described by the function $g,$ we obtain an estimate $\mathrm {E} _{g}(f;N),$ $\mathrm {E} _{g}(f;N)={1 \over N}\sum _{i}^{N}{f(x_{i})}/g(x_{i}).$ The variance of the new estimate is then $\mathrm {Var} _{g}(f;N)=\mathrm {Var} (f/g;N)$ where $\mathrm {Var} (f;N)$ is the variance of the original estimate, $\mathrm {Var} (f;N)=\mathrm {E} (f^{2};N)-(\mathrm {E} (f;N))^{2}.$ If the probability distribution is chosen as $g=|f|/\textstyle \int _{\Omega }|f(x)|dx$ then it can be shown that the variance $\mathrm {Var} _{g}(f;N)$ vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution. Approximation of probability distribution The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region while histogramming the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like $K^{d}$ with dimension d the probability distribution is approximated by a separable function: $g(x_{1},x_{2},\ldots )=g_{1}(x_{1})g_{2}(x_{2})\cdots $ so that the number of bins required is only Kd. This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS. See also • Las Vegas algorithm • Monte Carlo integration • Importance sampling References 1. Lepage, G.P. (May 1978). "A New Algorithm for Adaptive Multidimensional Integration". Journal of Computational Physics. 27 (2): 192–203. Bibcode:1978JCoPh..27..192L. doi:10.1016/0021-9991(78)90004-9. 2. Lepage, G.P. (March 1980). "VEGAS: An Adaptive Multi-dimensional Integration Program". Cornell Preprint. CLNS 80-447. 3. Ohl, T. (July 1999). "Vegas revisited: Adaptive Monte Carlo integration beyond factorization". Computer Physics Communications. 120 (1): 13–19. arXiv:hep-ph/9806432. Bibcode:1999CoPhC.120...13O. doi:10.1016/S0010-4655(99)00209-X. S2CID 18194240.
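The importance-sampling estimate above can be illustrated with a toy one-dimensional sketch in Python (this is only an illustration of the sampling formula, not of Lepage's adaptive grid or of the GSL routine; the function names and the choice $g=3x^{2}$ are mine).

import random

def mc_estimate(f, g_sample, g_pdf, n=100_000):
    # importance-sampling estimate E_g(f; N) = (1/N) * sum_i f(x_i)/g(x_i),
    # with the points x_i drawn from the probability density g
    total = 0.0
    for _ in range(n):
        x = g_sample()
        total += f(x) / g_pdf(x)
    return total / n

f = lambda x: x**2                     # the integral of x^2 over [0, 1] is 1/3

# crude Monte Carlo: g is the uniform density on [0, 1]
crude = mc_estimate(f, random.random, lambda x: 1.0)

# importance sampling with g(x) = 3x^2, proportional to |f|; sampled by inverting the CDF x^3
ideal = mc_estimate(f, lambda: (1.0 - random.random()) ** (1.0 / 3.0), lambda x: 3 * x**2)

print(crude, ideal)   # both are close to 1/3, but the second estimator has essentially zero variance

Because $f/g$ is constant when $g$ is proportional to $|f|$, the second estimate reproduces the zero-variance limit discussed above.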
Wikipedia
VIKOR method The VIKOR method is a multi-criteria decision making (MCDM) or multi-criteria decision analysis method. It was originally developed by Serafim Opricovic to solve decision problems with conflicting and noncommensurable (different units) criteria, assuming that compromise is acceptable for conflict resolution, the decision maker wants a solution that is the closest to the ideal, and the alternatives are evaluated according to all established criteria. VIKOR ranks alternatives and determines the solution, named the compromise solution, that is the closest to the ideal. The idea of compromise solution was introduced in MCDM by Po-Lung Yu in 1973,[1] and by Milan Zeleny.[2] S. Opricovic had developed the basic ideas of VIKOR in his Ph.D. dissertation in 1979, and an application was published in 1980.[3] The name VIKOR appeared in 1990 [4] from Serbian: VIseKriterijumska Optimizacija I Kompromisno Resenje, which means: Multicriteria Optimization and Compromise Solution, with pronunciation: vikor. The real applications were presented in 1998.[5] The paper in 2004 contributed to the international recognition of the VIKOR method.[6] (The most cited paper in the field of Economics, Science Watch, Apr.2009). The MCDM problem is stated as follows: Determine the best (compromise) solution in the multicriteria sense from the set of J feasible alternatives A1, A2, ...AJ, evaluated according to the set of n criterion functions. The input data are the elements fij of the performance (decision) matrix, where fij is the value of the i-th criterion function for the alternative Aj. VIKOR method steps The VIKOR procedure has the following steps: Step 1. Determine the best fi* and the worst fi^ values of all criterion functions, i = 1,2,...,n; fi* = max (fij,j=1,...,J), fi^ = min (fij,j=1,...,J), if the i-th function is benefit; fi* = min (fij,j=1,...,J), fi^ = max (fij,j=1,...,J), if the i-th function is cost. Step 2. Compute the values Sj and Rj, j=1,2,...,J, by the relations: Sj=sum[wi(fi* - fij)/(fi*-fi^),i=1,...,n], weighted and normalized Manhattan distance; Rj=max[wi(fi* - fij)/(fi*-fi^),i=1,...,n], weighted and normalized Chebyshev distance; where wi are the weights of criteria, expressing the DM's preference as the relative importance of the criteria. Step 3. Compute the values Qj, j=1,2,...,J, by the relation Qj = v(Sj – S*)/(S^ - S*) + (1-v)(Rj-R*)/(R^-R*) where S* = min (Sj, j=1,...,J), S^ = max (Sj, j=1,...,J), R* = min (Rj, j=1,...,J), R^ = max (Rj, j=1,...,J); and v is introduced as a weight for the strategy of maximum group utility, whereas 1-v is the weight of the individual regret. These strategies could be compromised by v = 0.5, and here v is modified as v = (n + 1)/2n (from v + 0.5(n-1)/n = 1) since the criterion (1 of n) related to R is included in S, too. Step 4. Rank the alternatives, sorting by the values S, R and Q, from the minimum value. The results are three ranking lists. Step 5. Propose as a compromise solution the alternative A(1) which is the best ranked by the measure Q (minimum) if the following two conditions are satisfied: C1. “Acceptable Advantage”: Q(A(2)) – Q(A(1)) >= DQ where: A(2) is the alternative with second position in the ranking list by Q; DQ = 1/(J-1). C2. “Acceptable Stability in decision making”: The alternative A(1) must also be the best ranked by S or/and R. This compromise solution is stable within a decision making process, which could be the strategy of maximum group utility (when v > 0.5 is needed), or “by consensus” (v about 0.5), or “with veto” (v < 0.5).
If one of the conditions is not satisfied, then a set of compromise solutions is proposed, which consists of: - Alternatives A(1) and A(2) if only the condition C2 is not satisfied, or - Alternatives A(1), A(2),..., A(M) if the condition C1 is not satisfied; A(M) is determined by the relation Q(A(M)) – Q(A(1)) < DQ for maximum M (the positions of these alternatives are “in closeness”). The obtained compromise solution could be accepted by the decision makers because it provides a maximum utility of the majority (represented by min S), and a minimum individual regret of the opponent (represented by min R). The measures S and R are integrated into Q for the compromise solution, the basis for an agreement established by mutual concessions. Comparative analysis A comparative analysis of the MCDM methods VIKOR, TOPSIS, ELECTRE and PROMETHEE is presented in a 2007 paper, through the discussion of their distinctive features and their application results.[7] Sayadi et al. extended the VIKOR method for decision making with interval data.[8] Heydari et al. extended this method for solving Multiple Objective Large-Scale Nonlinear Programming problems.[9] Fuzzy VIKOR method The Fuzzy VIKOR method has been developed to solve problems in a fuzzy environment where both criteria and weights could be fuzzy sets. Triangular fuzzy numbers are used to handle imprecise numerical quantities. Fuzzy VIKOR is based on an aggregating fuzzy merit that represents the distance of an alternative to the ideal solution. The fuzzy operations and procedures for ranking fuzzy numbers are used in developing the fuzzy VIKOR algorithm.[10] See also • Rank reversals in decision-making • Multi-criteria decision analysis • Ordinal Priority Approach • Pairwise comparison References 1. Po Lung Yu (1973) "A Class of Solutions for Group Decision Problems", Management Science, 19(8), 936–946. 2. Milan Zeleny (1973) "Compromise Programming", in Cochrane J.L. and M.Zeleny (Eds.), Multiple Criteria Decision Making, University of South Carolina Press, Columbia. 3. Lucien Duckstein and Serafim Opricovic (1980) "Multiobjective Optimization in River Basin Development", Water Resources Research, 16(1), 14–20. 4. Serafim Opricović (1990) "Programski paket VIKOR za visekriterijumsko kompromisno rangiranje", SYM-OP-IS 5. Serafim Opricovic (1998) “Multicriteria Optimization in Civil Engineering" (in Serbian), Faculty of Civil Engineering, Belgrade, 302 p. ISBN 86-80049-82-4. 6. Serafim Opricovic and Gwo-Hshiung Tzeng (2004) "The Compromise solution by MCDM methods: A comparative analysis of VIKOR and TOPSIS", European Journal of Operational Research, 156(2), 445–455. 7. Serafim Opricovic and Gwo-Hshiung Tzeng (2007) "Extended VIKOR Method in Comparison with Outranking Methods", European Journal of Operational Research, Vol. 178, No 2, pp. 514–529. 8. Sayadi, Mohammad Kazem; Heydari, Majeed; Shahanaghi, Kamran (2009). "Extension of VIKOR method for decision making problem with interval numbers". Applied Mathematical Modelling. 33 (5): 2257–2262. doi:10.1016/j.apm.2008.06.002. 9. Heydari, Majeed; Kazem Sayadi, Mohammad; Shahanaghi, Kamran (2010). "Extended VIKOR as a new method for solving Multiple Objective Large-Scale Nonlinear Programming problems". Rairo - Operations Research. 44 (2): 139–152. doi:10.1051/ro/2010011. 10. Serafim Opricovic (2011) "Fuzzy VIKOR with an application to water resources planning", Expert Systems with Applications 38, pp. 12983–12990.
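A compact Python sketch of Steps 1–4 above (illustrative only; the function and variable names are mine, the weight v defaults to 0.5, and the example data are made up):

import numpy as np

def vikor(F, w, benefit, v=0.5):
    # F[i, j]: value f_ij of criterion i for alternative A_j; w: criterion weights;
    # benefit[i]: True if criterion i is a benefit (to be maximized), False if it is a cost
    F = np.asarray(F, dtype=float)
    w = np.asarray(w, dtype=float)
    best = np.where(benefit, F.max(axis=1), F.min(axis=1))      # f_i* (Step 1)
    worst = np.where(benefit, F.min(axis=1), F.max(axis=1))     # f_i^
    d = (best[:, None] - F) / (best - worst)[:, None]           # normalized distances
    S = (w[:, None] * d).sum(axis=0)                            # weighted Manhattan distance (Step 2)
    R = (w[:, None] * d).max(axis=0)                            # weighted Chebyshev distance
    Q = v*(S - S.min())/(S.max() - S.min()) + (1 - v)*(R - R.min())/(R.max() - R.min())   # Step 3
    return S, R, Q

# two benefit criteria, three alternatives
S, R, Q = vikor([[7, 8, 9], [5, 9, 6]], w=[0.5, 0.5], benefit=[True, True])
print(Q.argsort())   # Step 4: ranking by Q, best (minimum Q) first

Checking conditions C1 and C2 of Step 5 on the resulting lists is then a direct comparison of the sorted S, R and Q values.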
Wikipedia
Vanishing scalar invariant spacetime In mathematical physics, vanishing scalar invariant (VSI) spacetimes are Lorentzian manifolds with all polynomial curvature invariants of all orders vanishing. Although the only Riemannian manifold with VSI property is flat space, the Lorentzian case admits nontrivial spacetimes with this property. Distinguishing these VSI spacetimes from Minkowski spacetime requires comparing non-polynomial invariants[1] or carrying out the full Cartan–Karlhede algorithm on non-scalar quantities.[2][3] All VSI spacetimes are Kundt spacetimes.[4] An example with this property in four dimensions is a pp-wave. VSI spacetimes however also contain some other four-dimensional Kundt spacetimes of Petrov type N and III. VSI spacetimes in higher dimensions have similar properties as in the four-dimensional case.[5][6] References 1. Page, Don N. (2009), "Nonvanishing Local Scalar Invariants even in VSI Spacetimes with all Polynomial Curvature Scalar Invariants Vanishing", Classical and Quantum Gravity, 26 (5): 055016, arXiv:0806.2144, Bibcode:2009CQGra..26e5016P, doi:10.1088/0264-9381/26/5/055016, S2CID 118331266 2. Koutras, A. (1992), "A spacetime for which the Karlhede invariant classification requires the fourth covariant derivative of the Riemann tensor", Classical and Quantum Gravity, 9 (10): L143, Bibcode:1992CQGra...9L.143K, doi:10.1088/0264-9381/9/10/003 3. Koutras, A.; McIntosh, C. (1996), "A metric with no symmetries or invariants", Classical and Quantum Gravity, 13 (5): L47, Bibcode:1996CQGra..13L..47K, doi:10.1088/0264-9381/13/5/002 4. Pravda, V.; Pravdova, A.; Coley, A.; Milson, R. (2002), "All spacetimes with vanishing curvature invariants", Classical and Quantum Gravity, 19 (23): 6213–6236, arXiv:gr-qc/0209024, Bibcode:2002CQGra..19.6213P, doi:10.1088/0264-9381/19/23/318, S2CID 11958495 5. Coley, A.; Milson, R.; Pravda, V.; Pravdova, A. (2004), "Vanishing Scalar Invariant Spacetimes in Higher Dimensions", Classical and Quantum Gravity, 21 (23): 5519–5542, arXiv:gr-qc/0410070, Bibcode:2004CQGra..21.5519C, doi:10.1088/0264-9381/21/23/014, S2CID 17036677. 6. Coley, A.; Fuster, A.; Hervik, S.; Pelavas, N. (2006), "Higher dimensional VSI spacetimes", Classical and Quantum Gravity, 23 (24): 7431–7444, arXiv:gr-qc/0611019, Bibcode:2006CQGra..23.7431C, doi:10.1088/0264-9381/23/24/014, S2CID 85442360
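For concreteness, a standard form is quoted here for illustration (it is not taken from the cited references): in Brinkmann coordinates a four-dimensional pp-wave can be written as $ds^{2}=H(u,x,y)\,du^{2}+2\,du\,dv+dx^{2}+dy^{2}$; every polynomial curvature invariant of such a metric vanishes, even though the curvature itself need not, which is exactly the VSI property described above.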
Wikipedia
Vacuous truth In mathematics and logic, a vacuous truth is a conditional or universal statement (a universal statement that can be converted to a conditional statement) that is true because the antecedent cannot be satisfied.[1] It is sometimes said that a statement is vacuously true because it does not really say anything.[2] For example, the statement "all cell phones in the room are turned off" will be true when no cell phones are in the room. In this case, the statement "all cell phones in the room are turned on" would also be vacuously true, as would the conjunction of the two: "all cell phones in the room are turned on and turned off", which would otherwise be incoherent and false. More formally, a relatively well-defined usage refers to a conditional statement (or a universal conditional statement) with a false antecedent.[1][3][2][4] One example of such a statement is "if Tokyo is in France, then the Eiffel Tower is in Bolivia". Such statements are considered vacuous truths, because the fact that the antecedent is false prevents using the statement to infer anything about the truth value of the consequent. In essence, a conditional statement that is based on the material conditional is true when the antecedent ("Tokyo is in France" in the example) is false, regardless of whether the conclusion or consequent ("the Eiffel Tower is in Bolivia" in the example) is true or false, because the material conditional is defined in that way. Examples common to everyday speech include conditional phrases used as idioms of improbability like "when hell freezes over..." and "when pigs can fly...", indicating that not before the given (impossible) condition is met will the speaker accept some respective (typically false or absurd) proposition. In pure mathematics, vacuously true statements are not generally of interest by themselves, but they frequently arise as the base case of proofs by mathematical induction.[5] This notion has relevance in pure mathematics, as well as in any other field that uses classical logic. Outside of mathematics, statements which can be characterized informally as vacuously true can be misleading. Such statements make reasonable assertions about qualified objects which do not actually exist. For example, a child might truthfully tell their parent "I ate every vegetable on my plate", when there were no vegetables on the child's plate to begin with. In this case, the parent can believe that the child has actually eaten some vegetables, even though that is not true. In addition, a vacuous truth is often used colloquially with absurd statements, either to confidently assert something (e.g. "the dog was red, or I'm a monkey's uncle" to strongly claim that the dog was red), or to express doubt, sarcasm, disbelief, incredulity or indignation (e.g. "yes, and I'm the King of England" to express disagreement with a previously made statement). Scope of the concept A statement $S$ is "vacuously true" if it resembles a material conditional statement $P\Rightarrow Q$, where the antecedent $P$ is known to be false.[1][3][2] Vacuously true statements that can be reduced (with suitable transformations) to this basic form (material conditional) include the following universally quantified statements: • $\forall x:P(x)\Rightarrow Q(x)$, where it is the case that $\forall x:\neg P(x)$.[4] • $\forall x\in A:Q(x)$, where the set $A$ is empty. • This logical form $\forall x\in A:Q(x)$ can be converted to the material conditional form in order to easily identify the antecedent. 
For the above example $S$ "all cell phones in the room are turned off", it can be formally written as $\forall x\in A:Q(x)$ where $A$ is the set of all cell phones in the room and $Q(x)$ is "$x$ is turned off". This can be written to a material conditional statement $\forall x\in B:P(x)\Rightarrow Q(x)$ where $B$ is the set of all things in the room (including cell phones if they exist in the room), the antecedent $P(x)$ is "$x$ is a cell phone", and the consequent $Q(x)$ is "$x$ is turned off". • $\forall \xi :Q(\xi )$, where the symbol $\xi $ is restricted to a type that has no representatives. Vacuous truths most commonly appear in classical logic with two truth values. However, vacuous truths can also appear in, for example, intuitionistic logic, in the same situations as given above. Indeed, if $P$ is false, then $P\Rightarrow Q$ will yield a vacuous truth in any logic that uses the material conditional; if $P$ is a necessary falsehood, then it will also yield a vacuous truth under the strict conditional. Other non-classical logics, such as relevance logic, may attempt to avoid vacuous truths by using alternative conditionals (such as the case of the counterfactual conditional). In computer programming Many programming environments have a mechanism for querying if every item in a collection of items satisfies some predicate. It is common for such a query to always evaluate as true for an empty collection. For example: • In JavaScript, the array method every executes a provided callback function once for each element present in the array, only stopping (if and when) it finds an element where the callback function returns false. Notably, calling the every method on an empty array will return true for any condition.[6] • In Python, the all function returns True if all of the elements of the given iterable are True. The function also returns True when given an iterable of zero length.[7] • In Rust, the Iterator::all function accepts an iterator and a predicate and returns true only when the predicate returns true for all items produced by the iterator, or if the iterator produces no items.[8] Examples These examples, one from mathematics and one from natural language, illustrate the concept of vacuous truths: • "For any integer x, if x > 5 then x > 3."[9] – This statement is true non-vacuously (since some integers are indeed greater than 5), but some of its implications are only vacuously true: for example, when x is the integer 2, the statement implies the vacuous truth that "if 2 > 5 then 2 > 3". • "All my children are goats" is a vacuous truth, when spoken by someone without children. Similarly, "None of my children is a goat" would also be a vacuous truth, when spoken by the same person. See also • De Morgan's laws – specifically the law that a universal statement is true just in case no counterexample exists: $\forall x\,P(x)\equiv \neg \exists x\,\neg P(x)$ • Empty sum and empty product • Empty function • Paradoxes of material implication, especially the principle of explosion • Presupposition, double question • State of affairs (philosophy) • Tautology (logic) – another type of true statement that also fails to convey any substantive information • Triviality (mathematics) and degeneracy (mathematics) References 1. "Vacuously true". web.cse.ohio-state.edu. Retrieved 2019-12-15. 2. "Vacuously true - CS2800 wiki". courses.cs.cornell.edu. Retrieved 2019-12-15. 3. "Definition:Vacuous Truth - ProofWiki". proofwiki.org. Retrieved 2019-12-15. 4. Edwards, C. H. (January 18, 1998). 
"Vacuously True" (PDF). swarthmore.edu. Retrieved 2019-12-14. 5. Baldwin, Douglas L.; Scragg, Greg W. (2011), Algorithms and Data Structures: The Science of Computing, Cengage Learning, p. 261, ISBN 978-1-285-22512-8 6. "Array.prototype.every() - JavaScript | MDN". developer.mozilla.org. 7. "Built-in Functions — Python 3.10.2 documentation". docs.python.org. 8. "Iterator in std::iter - Rust". doc.rust-lang.org. 9. "logic - What precisely is a vacuous truth?". Mathematics Stack Exchange. Bibliography • Blackburn, Simon (1994). "vacuous," The Oxford Dictionary of Philosophy. Oxford: Oxford University Press, p. 388. • David H. Sanford (1999). "implication." The Cambridge Dictionary of Philosophy, 2nd. ed., p. 420. • Beer, Ilan; Ben-David, Shoham; Eisner, Cindy; Rodeh, Yoav (1997). "Efficient Detection of Vacuity in ACTL Formulas". Computer Aided Verification: 9th International Conference, CAV'97 Haifa, Israel, June 22–25, 1997, Proceedings. Lecture Notes in Computer Science. Vol. 1254. pp. 279–290. doi:10.1007/3-540-63166-6_28. ISBN 978-3-540-63166-8. External links • Conditional Assertions: Vacuous truth
Wikipedia
Vagif Rza Ibrahimov Vagif Rza Ibrahimov (born May 9, 1947, in the village of Jahri) is an Azerbaijani mathematician and professor. He is a corresponding member of ANAS and an organizer of and participant in numerous conferences. He has published more than 102 articles abroad. He is a professor at Baku State University. Vagif Rza Ibrahimov Born(1947-05-09)May 9, 1947 Jahri, Nakhchivan Autonomous Republic, Azerbaijan EducationDoctor of Physical and Mathematical Sciences[1] Occupation(s)Professor at Baku State University,[2] member of the American Mathematical Society and the European Mathematical Society References 1. "Vagif Ibrahimov". 2. "Bakı Dövlət Universitetinin əməkdaşlarına fəxri adların verilməsi haqqında" Azərbaycan Respublikası Prezidentinin 30 oktyabr 2009-cu il tarixli, 538 nömrəli Sərəncamı. e-qanun.az (in Azerbaijani) External links • Biography at the Official website of the Baku State University • Biography at the Official website of Institute of Control Systems Authority control: Academics • Google Scholar • MathSciNet • ORCID • ResearcherID • Scopus • zbMATH
Wikipedia
Inverse semigroup In group theory, an inverse semigroup (occasionally called an inversion semigroup[1]) S is a semigroup in which every element x in S has a unique inverse y in S in the sense that x = xyx and y = yxy, i.e. a regular semigroup in which every element has a unique inverse. Inverse semigroups appear in a range of contexts; for example, they can be employed in the study of partial symmetries.[2] (The convention followed in this article will be that of writing a function on the right of its argument, e.g. x f rather than f(x), and composing functions from left to right—a convention often observed in semigroup theory.) Origins Inverse semigroups were introduced independently by Viktor Vladimirovich Wagner[3] in the Soviet Union in 1952,[4] and by Gordon Preston in the United Kingdom in 1954.[5] Both authors arrived at inverse semigroups via the study of partial bijections of a set: a partial transformation α of a set X is a function from A to B, where A and B are subsets of X. Let α and β be partial transformations of a set X; α and β can be composed (from left to right) on the largest domain upon which it "makes sense" to compose them: $\operatorname {dom} \alpha \beta =[\operatorname {im} \alpha \cap \operatorname {dom} \beta ]\alpha ^{-1}\,$ where α−1 denotes the preimage under α. Partial transformations had already been studied in the context of pseudogroups.[6] It was Wagner, however, who was the first to observe that the composition of partial transformations is a special case of the composition of binary relations.[7] He recognised also that the domain of composition of two partial transformations may be the empty set, so he introduced an empty transformation to take account of this. With the addition of this empty transformation, the composition of partial transformations of a set becomes an everywhere-defined associative binary operation. Under this composition, the collection ${\mathcal {I}}_{X}$ of all partial one-one transformations of a set X forms an inverse semigroup, called the symmetric inverse semigroup (or monoid) on X, with inverse the functional inverse defined from image to domain (equivalently, the converse relation).[8] This is the "archetypal" inverse semigroup, in the same way that a symmetric group is the archetypal group. For example, just as every group can be embedded in a symmetric group, every inverse semigroup can be embedded in a symmetric inverse semigroup (see § Homomorphisms and representations of inverse semigroups below). The basics The inverse of an element x of an inverse semigroup S is usually written x−1. Inverses in an inverse semigroup have many of the same properties as inverses in a group, for example, (ab)−1 = b−1a−1. In an inverse monoid, xx−1 and x−1x are not necessarily equal to the identity, but they are both idempotent.[9] An inverse monoid S in which xx−1 = 1 = x−1x, for all x in S (a unipotent inverse monoid), is, of course, a group. There are a number of equivalent characterisations of an inverse semigroup S:[10] • Every element of S has a unique inverse, in the above sense. • Every element of S has at least one inverse (S is a regular semigroup) and idempotents commute (that is, the idempotents of S form a semilattice). • Every ${\mathcal {L}}$-class and every ${\mathcal {R}}$-class contains precisely one idempotent, where ${\mathcal {L}}$ and ${\mathcal {R}}$ are two of Green's relations. The idempotent in the ${\mathcal {L}}$-class of s is s−1s, whilst the idempotent in the ${\mathcal {R}}$-class of s is ss−1. 
There is therefore a simple characterisation of Green's relations in an inverse semigroup:[11] $a\,{\mathcal {L}}\,b\Longleftrightarrow a^{-1}a=b^{-1}b,\quad a\,{\mathcal {R}}\,b\Longleftrightarrow aa^{-1}=bb^{-1}$ Unless stated otherwise, E(S) will denote the semilattice of idempotents of an inverse semigroup S. Examples of inverse semigroups • Partial bijections on a set X form an inverse semigroup under composition. • Every group is an inverse semigroup. • The bicyclic semigroup is inverse, with (a, b)−1 = (b, a). • Every semilattice is inverse. • The Brandt semigroup is inverse. • The Munn semigroup is inverse. Multiplication table example. It is associative and every element has its own inverse according to aba = a, bab = b. It has no identity and is not commutative. Inverse semigroup abcde aaaaaa babcaa caaabc dadeaa eaaade The natural partial order An inverse semigroup S possesses a natural partial order relation ≤ (sometimes denoted by ω), which is defined by the following:[12] $a\leq b\Longleftrightarrow a=eb,$ for some idempotent e in S. Equivalently, $a\leq b\Longleftrightarrow a=bf,$ for some (in general, different) idempotent f in S. In fact, e can be taken to be aa−1 and f to be a−1a.[13] The natural partial order is compatible with both multiplication and inversion, that is,[14] $a\leq b,c\leq d\Longrightarrow ac\leq bd$ and $a\leq b\Longrightarrow a^{-1}\leq b^{-1}.$ In a group, this partial order simply reduces to equality, since the identity is the only idempotent. In a symmetric inverse semigroup, the partial order reduces to restriction of mappings, i.e., α ≤ β if, and only if, the domain of α is contained in the domain of β and xα = xβ, for all x in the domain of α.[15] The natural partial order on an inverse semigroup interacts with Green's relations as follows: if s ≤ t and s$\,{\mathcal {L}}\,$t, then s = t. Similarly, if s$\,{\mathcal {R}}\,$t.[16] On E(S), the natural partial order becomes: $e\leq f\Longleftrightarrow e=ef,$ so, since the idempotents form a semilattice under the product operation, products on E(S) give least upper bounds with respect to ≤. If E(S) is finite and forms a chain (i.e., E(S) is totally ordered by ≤), then S is a union of groups.[17] If E(S) is an infinite chain it is possible to obtain an analogous result under additional hypotheses on S and E(S).[18] Homomorphisms and representations of inverse semigroups A homomorphism (or morphism) of inverse semigroups is defined in exactly the same way as for any other semigroup: for inverse semigroups S and T, a function θ from S to T is a morphism if (sθ)(tθ) = (st)θ, for all s,t in S. The definition of a morphism of inverse semigroups could be augmented by including the condition (sθ)−1 = s−1θ, however, there is no need to do so, since this property follows from the above definition, via the following theorem: Theorem. The homomorphic image of an inverse semigroup is an inverse semigroup; the inverse of an element is always mapped to the inverse of the image of that element.[19] One of the earliest results proved about inverse semigroups was the Wagner–Preston Theorem, which is an analogue of Cayley's theorem for groups: Wagner–Preston Theorem. If S is an inverse semigroup, then the function φ from S to ${\mathcal {I}}_{S}$, given by dom (aφ) = Sa−1 and x(aφ) = xa is a faithful representation of S.[20] Thus, any inverse semigroup can be embedded in a symmetric inverse semigroup, and with image closed under the inverse operation on partial bijections. 
Conversely, any subsemigroup of the symmetric inverse semigroup closed under the inverse operation is an inverse semigroup. Hence a semigroup S is isomorphic to a subsemigroup of the symmetric inverse semigroup closed under inverses if and only if S is an inverse semigroup. Congruences on inverse semigroups Congruences are defined on inverse semigroups in exactly the same way as for any other semigroup: a congruence ρ is an equivalence relation that is compatible with semigroup multiplication, i.e., $a\,\rho \,b,\quad c\,\rho \,d\Longrightarrow ac\,\rho \,bd.$[21] Of particular interest is the relation $\sigma $, defined on an inverse semigroup S by $a\,\sigma \,b\Longleftrightarrow $ there exists a $c\in S$ with $c\leq a,b.$[22] It can be shown that σ is a congruence and, in fact, it is a group congruence, meaning that the factor semigroup S/σ is a group. In the set of all group congruences on a semigroup S, the minimal element (for the partial order defined by inclusion of sets) need not be the smallest element. In the specific case in which S is an inverse semigroup σ is the smallest congruence on S such that S/σ is a group, that is, if τ is any other congruence on S with S/τ a group, then σ is contained in τ. The congruence σ is called the minimum group congruence on S.[23] The minimum group congruence can be used to give a characterisation of E-unitary inverse semigroups (see below). A congruence ρ on an inverse semigroup S is called idempotent pure if $a\in S,e\in E(S),a\,\rho \,e\Longrightarrow a\in E(S).$[24] E-unitary inverse semigroups One class of inverse semigroups that has been studied extensively over the years is the class of E-unitary inverse semigroups: an inverse semigroup S (with semilattice E of idempotents) is E-unitary if, for all e in E and all s in S, $es\in E\Longrightarrow s\in E.$ Equivalently, $se\in E\Rightarrow s\in E.$[25] One further characterisation of an E-unitary inverse semigroup S is the following: if e is in E and e ≤ s, for some s in S, then s is in E.[26] Theorem. Let S be an inverse semigroup with semilattice E of idempotents, and minimum group congruence σ. Then the following are equivalent:[27] • S is E-unitary; • σ is idempotent pure; • $\sim $ = σ, where $\sim $ is the compatibility relation on S, defined by $a\sim b\Longleftrightarrow ab^{-1},a^{-1}b$ are idempotent. McAlister's Covering Theorem. Every inverse semigroup S has a E-unitary cover; that is there exists an idempotent separating surjective homomorphism from some E-unitary semigroup T onto S.[28] Central to the study of E-unitary inverse semigroups is the following construction.[29] Let ${\mathcal {X}}$ be a partially ordered set, with ordering ≤, and let ${\mathcal {Y}}$ be a subset of ${\mathcal {X}}$ with the properties that • ${\mathcal {Y}}$ is a lower semilattice, that is, every pair of elements A, B in ${\mathcal {Y}}$ has a greatest lower bound A $\wedge $ B in ${\mathcal {Y}}$ (with respect to ≤); • ${\mathcal {Y}}$ is an order ideal of ${\mathcal {X}}$, that is, for A, B in ${\mathcal {X}}$, if A is in ${\mathcal {Y}}$ and B ≤ A, then B is in ${\mathcal {Y}}$. Now let G be a group that acts on ${\mathcal {X}}$ (on the left), such that • for all g in G and all A, B in ${\mathcal {X}}$, gA = gB if, and only if, A = B; • for each g in G and each B in ${\mathcal {X}}$, there exists an A in ${\mathcal {X}}$ such that gA = B; • for all A, B in ${\mathcal {X}}$, A ≤ B if, and only if, gA ≤ gB; • for all g, h in G and all A in ${\mathcal {X}}$, g(hA) = (gh)A. 
The triple $(G,{\mathcal {X}},{\mathcal {Y}})$ is also assumed to have the following properties: • for every X in ${\mathcal {X}}$, there exists a g in G and an A in ${\mathcal {Y}}$ such that gA = X; • for all g in G, g${\mathcal {Y}}$ and ${\mathcal {Y}}$ have nonempty intersection. Such a triple $(G,{\mathcal {X}},{\mathcal {Y}})$ is called a McAlister triple. A McAlister triple is used to define the following: $P(G,{\mathcal {X}},{\mathcal {Y}})=\{(A,g)\in {\mathcal {Y}}\times G:g^{-1}A\in {\mathcal {Y}}\}$ together with multiplication $(A,g)(B,h)=(A\wedge gB,gh)$. Then $P(G,{\mathcal {X}},{\mathcal {Y}})$ is an inverse semigroup under this multiplication, with (A, g)−1 = (g−1A, g−1). One of the main results in the study of E-unitary inverse semigroups is McAlister's P-Theorem: McAlister's P-Theorem. Let $(G,{\mathcal {X}},{\mathcal {Y}})$ be a McAlister triple. Then $P(G,{\mathcal {X}},{\mathcal {Y}})$ is an E-unitary inverse semigroup. Conversely, every E-unitary inverse semigroup is isomorphic to one of this type.[30] F-inverse semigroups An inverse semigroup is said to be F-inverse if every element has a unique maximal element above it in the natural partial order, i.e. every σ-class has a maximal element. Every F-inverse semigroup is an E-unitary monoid. McAlister's covering theorem has been refined by M.V. Lawson to: Theorem. Every inverse semigroup has an F-inverse cover.[31] McAlister's P-theorem has been used to characterize F-inverse semigroups as well. A McAlister triple $(G,{\mathcal {X}},{\mathcal {Y}})$ is an F-inverse semigroup if and only if ${\mathcal {Y}}$ is a principal ideal of ${\mathcal {X}}$ and ${\mathcal {X}}$ is a semilattice. Free inverse semigroups A construction similar to a free group is possible for inverse semigroups. A presentation of the free inverse semigroup on a set X may be obtained by considering the free semigroup with involution, where involution is the taking of the inverse, and then taking the quotient by the Vagner congruence $\{(xx^{-1}x,x),\;(xx^{-1}yy^{-1},yy^{-1}xx^{-1})\;|\;x,y\in (X\cup X^{-1})^{+}\}.$ The word problem for free inverse semigroups is much more intricate than that of free groups. A celebrated result in this area due to W. D. Munn who showed that elements of the free inverse semigroup can be naturally regarded as trees, known as Munn trees. Multiplication in the free inverse semigroup has a correspondent on Munn trees, which essentially consists of overlapping common portions of the trees. (see Lawson 1998 for further details) Any free inverse semigroup is F-inverse.[31] Connections with category theory The above composition of partial transformations of a set gives rise to a symmetric inverse semigroup. There is another way of composing partial transformations, which is more restrictive than that used above: two partial transformations α and β are composed if, and only if, the image of α is equal to the domain of β; otherwise, the composition αβ is undefined. Under this alternative composition, the collection of all partial one-one transformations of a set forms not an inverse semigroup but an inductive groupoid, in the sense of category theory. 
This close connection between inverse semigroups and inductive groupoids is embodied in the Ehresmann–Schein–Nambooripad Theorem, which states that an inductive groupoid can always be constructed from an inverse semigroup, and conversely.[32] More precisely, an inverse semigroup is precisely a groupoid in the category of posets that is an étale groupoid with respect to its (dual) Alexandrov topology and whose poset of objects is a meet-semilattice. Generalisations of inverse semigroups As noted above, an inverse semigroup S can be defined by the conditions (1) S is a regular semigroup, and (2) the idempotents in S commute; this has led to two distinct classes of generalisations of an inverse semigroup: semigroups in which (1) holds, but (2) does not, and vice versa. Examples of regular generalisations of an inverse semigroup are:[33] • Regular semigroups: a semigroup S is regular if every element has at least one inverse; equivalently, for each a in S, there is an x in S such that axa = a. • Locally inverse semigroups: a regular semigroup S is locally inverse if eSe is an inverse semigroup, for each idempotent e. • Orthodox semigroups: a regular semigroup S is orthodox if its subset of idempotents forms a subsemigroup. • Generalised inverse semigroups: a regular semigroup S is called a generalised inverse semigroup if its idempotents form a normal band, i.e., xyzx = xzyx, for all idempotents x, y, z. The class of generalised inverse semigroups is the intersection of the class of locally inverse semigroups and the class of orthodox semigroups.[34] Amongst the non-regular generalisations of an inverse semigroup are:[35] • (Left, right, two-sided) adequate semigroups. • (Left, right, two-sided) ample semigroups. • (Left, right, two-sided) semiadequate semigroups. • Weakly (left, right, two-sided) ample semigroups. Inverse category This notion of inverse also readily generalizes to categories. An inverse category is simply a category in which every morphism f : X → Y has a generalized inverse g : Y → X such that fgf = f and gfg = g. An inverse category is selfdual. The category of sets and partial bijections is the prime example.[36] Inverse categories have found various applications in theoretical computer science.[37] See also • Orthodox semigroup • Biordered set • Pseudogroup • Partial symmetries • Regular semigroup • Semilattice • Green's relations • Category theory • Special classes of semigroups • Weak inverse • Nambooripad order Notes 1. Weisstein, Eric W. (2002). CRC Concise Encyclopedia of Mathematics (2nd ed.). CRC Press. p. 1528. ISBN 978-1-4200-3522-3. 2. Lawson 1998 3. Since his father was German, Wagner preferred the German transliteration of his name (with a "W", rather than a "V") from Cyrillic – see Schein 1981. 4. First a short announcement in Wagner 1952, then a much more comprehensive exposition in Wagner 1953. 5. Preston 1954a,b,c. 6. See, for example, Gołab 1939. 7. Schein 2002, p. 152 8. Howie 1995, p. 149 9. Howie 1995, Proposition 5.1.2(1) 10. Howie 1995, Theorem 5.1.1 11. Howie 1995, Proposition 5.1.2(1) 12. Wagner 1952 13. Howie 1995, Proposition 5.2.1 14. Howie 1995, pp. 152–3 15. Howie 1995, p. 153 16. Lawson 1998, Proposition 3.2.3 17. Clifford & Preston 1967, Theorem 7.5 18. Gonçalves, D; Sobottka, M; Starling, C (2017). "Inverse semigroup shifts over countable alphabets". Semigroup Forum. 96 (2): 203–240. arXiv:1510.04117. doi:10.1007/s00233-017-9858-5Corollary 4.9{{cite journal}}: CS1 maint: postscript (link) 19. Clifford & Preston 1967, Theorem 7.36 20. 
Howie 1995, Theorem 5.1.7 Originally, Wagner 1952 and, independently, Preston 1954c. 21. Howie 1995, p. 22 22. Lawson 1998, p. 62 23. Lawson 1998, Theorem 2.4.1 24. Lawson 1998, p. 65 25. Howie 1995, p. 192 26. Lawson 1998, Proposition 2.4.3 27. Lawson 1998, Theorem 2.4.6 28. Grillet, P. A. (1995). Semigroups: An Introduction to the Structure Theory. CRC Press. p. 248. ISBN 978-0-8247-9662-4. 29. Howie 1995, pp. 193–4 30. Howie 1995, Theorem 5.9.2. Originally, McAlister 1974a,b. 31. Lawson 1998, p. 230 32. Lawson 1998, 4.1.8 33. Howie 1995, Section 2.4 & Chapter 6 34. Howie 1995, p. 222 35. Fountain 1979, Gould 36. Grandis, Marco (2012). Homological Algebra: The Interplay of Homology with Distributive Lattices and Orthodox Semigroups. World Scientific. p. 55. ISBN 978-981-4407-06-9. 37. Hines, Peter; Braunstein, Samuel L. (2010). "The Structure of Partial Isometries". In Gay and, Simon; Mackie, Ian (eds.). Semantic Techniques in Quantum Computation. Cambridge University Press. p. 369. ISBN 978-0-521-51374-6. References • Clifford, A. H.; Preston, G. B. (1967). The Algebraic Theory of Semigroups. Mathematical Surveys of the American Mathematical Society. Vol. 7. ISBN 978-0-8218-0272-4. • Fountain, J. B. (1979). "Adequate semigroups". Proceedings of the Edinburgh Mathematical Society. 22 (2): 113–125. doi:10.1017/S0013091500016230. • Gołab, St. (1939). "Über den Begriff der "Pseudogruppe von Transformationen"". Mathematische Annalen (in German). 116: 768–780. doi:10.1007/BF01597390. • Exel, R. (1998). "Partial actions of groups and actions of inverse semigroups". Proceedings of the American Mathematical Society. 126 (12): 3481–4. arXiv:funct-an/9511003. doi:10.1090/S0002-9939-98-04575-4. • Gould, V. "(Weakly) left E-ample semigroups". Archived from the original (Postscript) on 2005-08-26. Retrieved 2006-08-28. • Howie, J. M. (1995). Fundamentals of Semigroup Theory. Oxford: Clarendon Press. ISBN 0198511949. • Lawson, M. V. (1998). Inverse Semigroups: The Theory of Partial Symmetries. World Scientific. ISBN 9810233167. • McAlister, D. B. (1974a). "Groups, semilattices and inverse semigroups". Transactions of the American Mathematical Society. 192: 227–244. doi:10.2307/1996831. JSTOR 1996831. • McAlister, D. B. (1974b). "Groups, semilattices and inverse semigroups II". Transactions of the American Mathematical Society. 196: 351–370. doi:10.2307/1997032. JSTOR 1997032. • Petrich, M. (1984). Inverse semigroups. Wiley. ISBN 0471875457. • Preston, G. B. (1954a). "Inverse semi-groups". Journal of the London Mathematical Society. 29 (4): 396–403. doi:10.1112/jlms/s1-29.4.396. • Preston, G. B. (1954b). "Inverse semi-groups with minimal right ideals". Journal of the London Mathematical Society. 29 (4): 404–411. doi:10.1112/jlms/s1-29.4.404. • Preston, G. B. (1954c). "Representations of inverse semi-groups". Journal of the London Mathematical Society. 29 (4): 411–9. doi:10.1112/jlms/s1-29.4.411. • Schein, B. M. (1981). "Obituary: Viktor Vladimirovich Vagner (1908–1981)". Semigroup Forum. 28: 189–200. doi:10.1007/BF02676643. • Schein, B. M. (2002). "Book Review: "Inverse Semigroups: The Theory of Partial Symmetries" by Mark V. Lawson". Semigroup Forum. 65: 149–158. doi:10.1007/s002330010132. • Wagner, V. V. (1952). "Generalised groups". Proceedings of the USSR Academy of Sciences (in Russian). 84: 1119–1122. English translation(PDF) • Wagner, V. V. (1953). "The theory of generalised heaps and generalised groups". Matematicheskii Sbornik. Novaya Seriya (in Russian). 32 (74): 545–632. 
Further reading • For a brief introduction to inverse semigroups, see either Clifford & Preston 1967, Chapter 7 or Howie 1995, Chapter 5. • More comprehensive introductions can be found in Petrich 1984 and Lawson 1998. • Linckelmann, M. (2012). "On inverse categories and transfer in cohomology" (PDF). Proceedings of the Edinburgh Mathematical Society. 56: 187. doi:10.1017/S0013091512000211. Open access preprint
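To make the composition of partial bijections in the symmetric inverse semigroup concrete, here is a minimal Python sketch (illustrative only; partial bijections are represented as dictionaries and the function names are mine). Following the convention used in this article, composition is read from left to right.

def compose(alpha, beta):
    # left-to-right composition of partial bijections: defined exactly on those x
    # for which alpha(x) lies in the domain of beta
    return {x: beta[y] for x, y in alpha.items() if y in beta}

def inverse(alpha):
    # the semigroup inverse of a partial bijection: swap domain and image
    return {y: x for x, y in alpha.items()}

a = {1: 2, 2: 3}   # a partial bijection of X = {1, 2, 3}
b = {3: 1}

print(compose(a, b))                                   # {2: 1}
print(compose(a, inverse(a)))                          # {1: 1, 2: 2}, an idempotent
print(compose(compose(a, inverse(a)), a) == a)         # True: a a^-1 a = a, as in the definition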
Wikipedia
Vague topology In mathematics, particularly in the area of functional analysis and topological vector spaces, the vague topology is an example of the weak-* topology which arises in the study of measures on locally compact Hausdorff spaces. Let $X$ be a locally compact Hausdorff space. Let $M(X)$ be the space of complex Radon measures on $X,$ and $C_{0}(X)^{*}$ denote the dual of $C_{0}(X),$ the Banach space of complex continuous functions on $X$ vanishing at infinity equipped with the uniform norm. By the Riesz representation theorem $M(X)$ is isometric to $C_{0}(X)^{*}.$ The isometry maps a measure $\mu $ to a linear functional $I_{\mu }(f):=\int _{X}f\,d\mu .$ The vague topology is the weak-* topology on $C_{0}(X)^{*}.$ The corresponding topology on $M(X)$ induced by the isometry from $C_{0}(X)^{*}$ is also called the vague topology on $M(X).$ Thus in particular, a sequence of measures $\left(\mu _{n}\right)_{n\in \mathbb {N} }$ converges vaguely to a measure $\mu $ whenever for all test functions $f\in C_{0}(X),$ $\int _{X}fd\mu _{n}\to \int _{X}fd\mu .$ It is also not uncommon to define the vague topology by duality with continuous functions having compact support $C_{c}(X),$ that is, a sequence of measures $\left(\mu _{n}\right)_{n\in \mathbb {N} }$ converges vaguely to a measure $\mu $ whenever the above convergence holds for all test functions $f\in C_{c}(X).$ This construction gives rise to a different topology. In particular, the topology defined by duality with $C_{c}(X)$ can be metrizable whereas the topology defined by duality with $C_{0}(X)$ is not. One application of this is to probability theory: for example, the central limit theorem is essentially a statement that if $\mu _{n}$ are the probability measures for certain sums of independent random variables, then $\mu _{n}$ converge weakly (and then vaguely) to a normal distribution, that is, the measure $\mu _{n}$ is "approximately normal" for large $n.$ See also • List of topologies – List of concrete topologies and topological spaces References • Dieudonné, Jean (1970), "§13.4. The vague topology", Treatise on analysis, vol. II, Academic Press. • G. B. Folland, Real Analysis: Modern Techniques and Their Applications, 2nd ed, John Wiley & Sons, Inc., 1999. This article incorporates material from Weak-* topology of the space of Radon measures on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. 
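A standard illustration (added here for concreteness, not taken from the cited references): on $X=\mathbb {R} $ let $\mu _{n}=\delta _{n}$ be the Dirac measure at the point $n.$ For every $f\in C_{0}(\mathbb {R} )$ one has $\int _{X}f\,d\mu _{n}=f(n)\to 0,$ so $\mu _{n}$ converges vaguely to the zero measure even though each $\mu _{n}$ is a probability measure; the mass escapes to infinity, which is why vague limits of probability measures need not be probability measures.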
Wikipedia
Vaidya metric In general relativity, the Vaidya metric describes the non-empty external spacetime of a spherically symmetric and nonrotating star which is either emitting or absorbing null dusts. It is named after the Indian physicist Prahalad Chunnilal Vaidya and constitutes the simplest non-static generalization of the non-radiative Schwarzschild solution to Einstein's field equation, and therefore is also called the "radiating(shining) Schwarzschild metric". From Schwarzschild to Vaidya metrics The Schwarzschild metric as the static and spherically symmetric solution to Einstein's equation reads $ds^{2}=-\left(1-{\frac {2M}{r}}\right)dt^{2}+\left(1-{\frac {2M}{r}}\right)^{-1}dr^{2}+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right).$ (1) To remove the coordinate singularity of this metric at $r=2M$, one could switch to the Eddington–Finkelstein coordinates. Thus, introduce the "retarded(/outgoing)" null coordinate $u$ by $t=u+r+2M\ln \left({\frac {r}{2M}}-1\right)\qquad \Rightarrow \quad dt=du+\left(1-{\frac {2M}{r}}\right)^{-1}dr\;,$ (2) and Eq(1) could be transformed into the "retarded(/outgoing) Schwarzschild metric" $ds^{2}=-\left(1-{\frac {2M}{r}}\right)du^{2}-2dudr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right);$ (3) or, we could instead employ the "advanced(/ingoing)" null coordinate $v$ by $t=v-r-2M\ln \left({\frac {r}{2M}}-1\right)\qquad \Rightarrow \quad dt=dv-\left(1-{\frac {2M}{r}}\right)^{-1}dr\;,$ (4) so Eq(1) becomes the "advanced(/ingoing) Schwarzschild metric" $ds^{2}=-\left(1-{\frac {2M}{r}}\right)dv^{2}+2dvdr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right).$ (5) Eq(3) and Eq(5), as static and spherically symmetric solutions, are valid for both ordinary celestial objects with finite radii and singular objects such as black holes.
It turns out that it is still physically reasonable if one extends the mass parameter $M$ in Eq(3) and Eq(5) from a constant to functions of the corresponding null coordinate, $M(u)$ and $M(v)$ respectively, thus $ds^{2}=-\left(1-{\frac {2M(u)}{r}}\right)du^{2}-2dudr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right),$ (6) $ds^{2}=-\left(1-{\frac {2M(v)}{r}}\right)dv^{2}+2dvdr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right).$ (7) The extended metrics Eq(6) and Eq(7) are respectively the "retarded(/outgoing)" and "advanced(/ingoing)" Vaidya metrics.[1][2] It is also sometimes useful to recast the Vaidya metrics Eqs(6)(7) into the form $ds^{2}={\frac {2M(u)}{r}}du^{2}+ds^{2}({\text{flat}})={\frac {2M(v)}{r}}dv^{2}+ds^{2}({\text{flat}})\,,$ (8) where ${\begin{aligned}ds^{2}({\text{flat}})&=-du^{2}-2dudr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\\&=-dv^{2}+2dvdr+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\\&=-dt^{2}+dr^{2}+r^{2}\left(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2}\right)\end{aligned}}$ represents the metric of flat spacetime. Outgoing Vaidya with pure emitting field As for the "retarded(/outgoing)" Vaidya metric Eq(6),[1][2][3][4][5] the Ricci tensor has only one nonzero component $R_{uu}=-2{\frac {M(u)_{,\,u}}{r^{2}}}\,,$ (9) while the Ricci curvature scalar vanishes, $R=g^{ab}R_{ab}=0$ because $g^{uu}=0$. Thus, according to the trace-free Einstein equation $G_{ab}=R_{ab}=8\pi T_{ab}$, the stress–energy tensor $T_{ab}$ satisfies $T_{ab}=-{\frac {M(u)_{,\,u}}{4\pi r^{2}}}l_{a}l_{b}\;,\qquad l_{a}dx^{a}=-du\;,$ (10) where $l_{a}=-\partial _{a}u$ and $l^{a}=g^{ab}l_{b}$ are null (co)vectors (c.f. Box A below). Thus, $T_{ab}$ is a "pure radiation field",[1][2] which has an energy density of $ -{\frac {M(u)_{,\,u}}{4\pi r^{2}}}$. According to the null energy condition $T_{ab}k^{a}k^{b}\geq 0\;,$ (11) we have $M(u)_{,\,u}<0$ and thus the central body is emitting radiation. Following the calculations using the Newman–Penrose (NP) formalism in Box A, the outgoing Vaidya spacetime Eq(6) is of Petrov-type D, and the nonzero components of the Weyl-NP and Ricci-NP scalars are $\Psi _{2}=-{\frac {M(u)}{r^{3}}}\qquad \Phi _{22}=-{\frac {M(u)_{\,,\,u}}{r^{2}}}\;.$ (12) It is notable that the Vaidya field is a pure radiation field rather than an electromagnetic field. The emitted particles or energy-matter flows have zero rest mass and thus are generally called "null dusts", typically such as photons and neutrinos, but cannot be electromagnetic waves because the Maxwell-NP equations are not satisfied. By the way, the outgoing and ingoing null expansion rates for the line element Eq(6) are respectively $\theta _{(\ell )}=-(\rho +{\bar {\rho }})={\frac {2}{r}}\,,\quad \theta _{(n)}=\mu +{\bar {\mu }}={\frac {-r+2M(u)}{r^{2}}}\;.$ (13) Suppose $ F:=1-{\frac {2M(u)}{r}}$, then the Lagrangian for null radial geodesics $(L=0,{\dot {\theta }}=0,{\dot {\phi }}=0)$ of the "retarded(/outgoing)" Vaidya spacetime Eq(6) is $L=0=-F{\dot {u}}^{2}+2{\dot {u}}{\dot {r}}\,,$ where dot means derivative with respect to some parameter $\lambda $. This Lagrangian has two solutions, ${\dot {u}}=0\quad {\text{and}}\quad {\dot {r}}={\frac {F}{2}}{\dot {u}}\;.$ According to the definition of $u$ in Eq(2), one could find that when $t$ increases, the areal radius $r$ would increase as well for the solution ${\dot {u}}=0$, while $r$ would decrease for the solution $ {\dot {r}}={\frac {F}{2}}{\dot {u}}$.
Thus, ${\dot {u}}=0$ should be recognized as an outgoing solution while $ {\dot {r}}=-{\frac {F}{2}}{\dot {u}}$ serves as an ingoing solution. Now, we can construct a complex null tetrad which is adapted to the outgoing null radial geodesics and employ the Newman–Penrose formalism to perform a full analysis of the outgoing Vaidya spacetime. Such an outgoing adapted tetrad can be set up as $l^{a}=(0,1,0,0)\,,\quad n^{a}=\left(1,-{\frac {F}{2}},0,0\right)\,,\quad m^{a}={\frac {1}{{\sqrt {2}}\,r}}(0,0,1,i\,\csc \theta )\,,$ and the dual basis covectors are therefore $l_{a}=(-1,0,0,0)\,,\quad n_{a}=\left(-{\frac {F}{2}},-1,0,0\right)\,,\quad m_{a}={\frac {r}{\sqrt {2}}}(0,0,1,\sin \theta )\,.$ In this null tetrad, the spin coefficients are $\kappa =\sigma =\tau =0\,,\quad \nu =\lambda =\pi =0\,,\quad \varepsilon =0$ $\rho =-{\frac {1}{r}}\,,\quad \mu ={\frac {-r+2M(u)}{2r^{2}}}\,,\quad \alpha =-\beta ={\frac {-{\sqrt {2}}\cot \theta }{4r}}\,,\quad \gamma ={\frac {M(u)}{2r^{2}}}\,.$ The Weyl-NP and Ricci-NP scalars are given by $\Psi _{0}=\Psi _{1}=\Psi _{3}=\Psi _{4}=0\,,\quad \Psi _{2}=-{\frac {M(u)}{r^{3}}}\,,$ $\Phi _{00}=\Phi _{10}=\Phi _{20}=\Phi _{11}=\Phi _{12}=\Lambda =0\,,\quad \Phi _{22}=-{\frac {M(u)_{\,,\,u}}{r^{2}}}\,.$ Since the only nonvanishing Weyl-NP scalar is $\Psi _{2}$, the "retarded(/outgoing)" Vaidya spacetime is of Petrov-type D. Also, there exists a radiation field as $\Phi _{22}\neq 0$. For the "retarded(/outgoing)" Schwarzschild metric Eq(3), let $ G:=1-{\frac {2M}{r}}$, and then the Lagrangian for null radial geodesics will have an outgoing solution ${\dot {u}}=0$ and an ingoing solution $ {\dot {r}}=-{\frac {G}{2}}{\dot {u}}$. Similar to Box A, now set up the adapted outgoing tetrad by $l^{a}=(0,1,0,0)\,,\quad n^{a}=\left(1,-{\frac {G}{2}},0,0\right)\,,\quad m^{a}={\frac {1}{{\sqrt {2}}\,r}}(0,0,1,i\,\csc \theta )\,,$ $l_{a}=(-1,0,0,0)\,,\quad n_{a}=\left(-{\frac {G}{2}},-1,0,0\right)\,,\quad m_{a}={\frac {r}{\sqrt {2}}}(0,0,1,\sin \theta )\,,$ so the spin coefficients are $\kappa =\sigma =\tau =0\,,\quad \nu =\lambda =\pi =0\,,\quad \varepsilon =0$ $\rho =-{\frac {1}{r}}\,,\quad \mu ={\frac {-r+2M}{2r^{2}}}\,,\quad \alpha =-\beta ={\frac {-{\sqrt {2}}\cot \theta }{4r}}\,,\quad \gamma ={\frac {M}{2r^{2}}}\,,$ and the Weyl-NP and Ricci-NP scalars are given by $\Psi _{0}=\Psi _{1}=\Psi _{3}=\Psi _{4}=0\,,\quad \Psi _{2}=-{\frac {M}{r^{3}}}\,,$ $\Phi _{00}=\Phi _{10}=\Phi _{20}=\Phi _{11}=\Phi _{12}=\Phi _{22}=\Lambda =0\,.$ The "retarded(/outgoing)" Schwarzschild spacetime is of Petrov-type D with $\Psi _{2}$ being the only nonvanishing Weyl-NP scalar. Ingoing Vaidya with pure absorbing field As for the "advanced(/ingoing)" Vaidya metric Eq(7),[1][2][6] the Ricci tensor again has only one nonzero component $R_{vv}=2{\frac {M(v)_{,\,v}}{r^{2}}}\,,$ (14) and therefore $R=0$ and the stress–energy tensor is $T_{ab}={\frac {M(v)_{,\,v}}{4\pi r^{2}}}\,n_{a}n_{b}\;,\qquad n_{a}dx^{a}=-dv\;.$ (15) This is a pure radiation field with energy density $ {\frac {M(v)_{,\,v}}{4\pi r^{2}}}$, and once again it follows from the null energy condition Eq(11) that $M(v)_{,\,v}>0$, so the central object is absorbing null dusts.
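Eq(14) can also be verified directly from the metric components, without the NP machinery. The sketch below is an illustration added here (not part of the original article; it assumes the Python library SymPy): it builds the ingoing Vaidya metric Eq(7) in the coordinates $(v,r,\theta ,\phi )$, computes the Christoffel symbols and the Ricci tensor, and prints $2M(v)_{,\,v}/r^{2}$ for $R_{vv}$, matching Eq(14), together with $0$ for the Ricci scalar.

import sympy as sp

v, r, th, ph = sp.symbols('v r theta phi')
M = sp.Function('M')(v)          # mass function M(v) of the advanced time
x = [v, r, th, ph]

# Ingoing Vaidya metric of Eq(7): ds^2 = -(1 - 2M(v)/r) dv^2 + 2 dv dr + r^2 dOmega^2
g = sp.Matrix([[-(1 - 2*M/r), 1, 0,    0],
               [1,            0, 0,    0],
               [0,            0, r**2, 0],
               [0,            0, 0,    r**2*sp.sin(th)**2]])
ginv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (d_b g_{dc} + d_c g_{db} - d_d g_{bc})
Gamma = [[[sp.simplify(sum(ginv[a, d]*(sp.diff(g[d, c], x[b]) + sp.diff(g[d, b], x[c])
                                       - sp.diff(g[b, c], x[d])) for d in range(4))/2)
           for c in range(4)] for b in range(4)] for a in range(4)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba} + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
def ricci(b, c):
    return sp.simplify(sum(sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
                           + sum(Gamma[a][a][d]*Gamma[d][b][c] - Gamma[a][c][d]*Gamma[d][b][a]
                                 for d in range(4))
                           for a in range(4)))

print(ricci(0, 0))   # 2*Derivative(M(v), v)/r**2, i.e. Eq(14)
print(sp.simplify(sum(ginv[b, c]*ricci(b, c) for b in range(4) for c in range(4))))   # 0, the Ricci scalar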
As calculated in Box C, the nonzero Weyl-NP and Ricci-NP components of the "advanced(/ingoing)" Vaidya metric Eq(7) are $\Psi _{2}=-{\frac {M(v)}{r^{3}}}\qquad \Phi _{00}={\frac {M(v)_{\,,\,v}}{r^{2}}}\;.$ (16) Also, the outgoing and ingoing null expansion rates for the line element Eq(7) are respectively $\theta _{(\ell )}=-(\rho +{\bar {\rho }})={\frac {r-2M(v)}{r^{2}}}\,,\quad \theta _{(n)}=\mu +{\bar {\mu }}=-{\frac {2}{r}}\;.$ (17) The advanced(/ingoing) Vaidya solution Eq(7) is especially useful in black-hole physics as it is one of the few existing exact dynamical solutions. For example, it is often employed to investigate the differences between the various definitions of dynamical black-hole boundaries, such as the classical event horizon and the quasilocal trapping horizon; as shown by Eq(17), the evolving hypersurface $r=2M(v)$ is always a marginally outer trapped horizon ($\theta _{(\ell )}=0\;,\theta _{(n)}<0$). Suppose ${\tilde {F}}:=1-{\frac {2M(v)}{r}}$, then the Lagrangian for null radial geodesics of the "advanced(/ingoing)" Vaidya spacetime Eq(7) is $L=-{\tilde {F}}{\dot {v}}^{2}+2{\dot {v}}{\dot {r}}\,,$ which has an ingoing solution ${\dot {v}}=0$ and an outgoing solution $ {\dot {r}}={\frac {\tilde {F}}{2}}{\dot {v}}$, in accordance with the definition of $v$ in Eq(4). Now, we can construct a complex null tetrad which is adapted to the ingoing null radial geodesics and employ the Newman–Penrose formalism to perform a full analysis of the ingoing Vaidya spacetime. Such an ingoing adapted tetrad can be set up as $l^{a}=\left(1,{\frac {\tilde {F}}{2}},0,0\right)\,,\quad n^{a}=(0,-1,0,0)\,,\quad m^{a}={\frac {1}{{\sqrt {2}}\,r}}(0,0,1,i\,\csc \theta )\,,$ and the dual basis covectors are therefore $l_{a}=\left(-{\frac {\tilde {F}}{2}},1,0,0\right)\,,\quad n_{a}=(-1,0,0,0)\,,\quad m_{a}={\frac {r}{\sqrt {2}}}(0,0,1,\sin \theta )\,.$ In this null tetrad, the spin coefficients are $\kappa =\sigma =\tau =0\,,\quad \nu =\lambda =\pi =0\,,\quad \gamma =0$ $\rho ={\frac {-r+2M(v)}{2r^{2}}}\,,\quad \mu =-{\frac {1}{r}}\,,\quad \alpha =-\beta ={\frac {-{\sqrt {2}}\cot \theta }{4r}}\,,\quad \varepsilon ={\frac {M(v)}{2r^{2}}}\,.$ The Weyl-NP and Ricci-NP scalars are given by $\Psi _{0}=\Psi _{1}=\Psi _{3}=\Psi _{4}=0\,,\quad \Psi _{2}=-{\frac {M(v)}{r^{3}}}\,,$ $\Phi _{10}=\Phi _{20}=\Phi _{11}=\Phi _{12}=\Phi _{22}=\Lambda =0\,,\quad \Phi _{00}={\frac {M(v)_{\,,\,v}}{r^{2}}}\;.$ Since the only nonvanishing Weyl-NP scalar is $\Psi _{2}$, the "advanced(/ingoing)" Vaidya spacetime is of Petrov-type D, and there exists a radiation field encoded in $\Phi _{00}$. For the "advanced(/ingoing)" Schwarzschild metric Eq(5), still let $ G:=1-{\frac {2M}{r}}$, and then the Lagrangian for the null radial geodesics will have an ingoing solution ${\dot {v}}=0$ and an outgoing solution $ {\dot {r}}={\frac {G}{2}}{\dot {v}}$.
Similar to Box C, now set up the adapted ingoing tetrad by $l^{a}=\left(1,{\frac {G}{2}},0,0\right)\,,\quad n^{a}=(0,-1,0,0)\,,\quad m^{a}={\frac {1}{{\sqrt {2}}\,r}}(0,0,1,i\,\csc \theta )\,,$ $l_{a}=\left(-{\frac {G}{2}},1,0,0\right)\,,\quad n_{a}=(-1,0,0,0)\,,\quad m_{a}={\frac {r}{\sqrt {2}}}(0,0,1,\sin \theta )\,,$ so the spin coefficients are $\kappa =\sigma =\tau =0\,,\quad \nu =\lambda =\pi =0\,,\quad \gamma =0$ $\rho ={\frac {-r+2M}{2r^{2}}}\,,\quad \mu =-{\frac {1}{r}}\,,\quad \alpha =-\beta ={\frac {-{\sqrt {2}}\cot \theta }{4r}}\,,\quad \varepsilon ={\frac {M}{2r^{2}}}\,,$ and the Weyl-NP and Ricci-NP scalars are given by $\Psi _{0}=\Psi _{1}=\Psi _{3}=\Psi _{4}=0\,,\quad \Psi _{2}=-{\frac {M}{r^{3}}}\,,$ $\Phi _{00}=\Phi _{10}=\Phi _{20}=\Phi _{11}=\Phi _{12}=\Phi _{22}=\Lambda =0\,.$ The "advanced(/ingoing)" Schwarzschild spacetime is of Petrov-type D with $\Psi _{2}$ being the only nonvanishing Weyl-NP scalar. Comparison with the Schwarzschild metric As the simplest and most natural non-static extension of the Schwarzschild metric, the Vaidya metric still has a lot in common with it: • Both metrics are of Petrov-type D with $\Psi _{2}$ being the only nonvanishing Weyl-NP scalar (as calculated in Boxes A and B). However, there are three clear differences between the Schwarzschild and Vaidya metrics: • First of all, the mass parameter $M$ for Schwarzschild is a constant, while for Vaidya $M(u)$ is a u-dependent function. • Schwarzschild is a solution to the vacuum Einstein equation $R_{ab}=0$, while Vaidya is a solution to the trace-free Einstein equation $R_{ab}=8\pi T_{ab}$ with a nontrivial pure radiation field. As a result, all Ricci-NP scalars for Schwarzschild vanish, while for Vaidya we have $\Phi _{22}=-{\frac {M(u)_{\,,\,u}}{r^{2}}}$ in the outgoing case (and $\Phi _{00}={\frac {M(v)_{\,,\,v}}{r^{2}}}$ in the ingoing case). • Schwarzschild has 4 independent Killing vector fields, including a timelike one, and thus is a static metric, while Vaidya has only the 3 independent Killing vector fields associated with the spherical symmetry, and consequently is nonstatic. As a result, the Schwarzschild metric belongs to Weyl's class of solutions while the Vaidya metric does not. Extension of the Vaidya metric Kinnersley metric While the Vaidya metric is an extension of the Schwarzschild metric to include a pure radiation field, the Kinnersley metric[7] constitutes a further extension of the Vaidya metric; it describes a massive object that accelerates in recoil as it emits massless radiation anisotropically. The Kinnersley metric is a special case of the Kerr–Schild metric, and in Cartesian spacetime coordinates $x^{\mu }$ it takes the following form: $g_{\mu \nu }=\eta _{\mu \nu }-{\frac {2m{\bigl (}u(x){\bigr )}}{r(x)^{3}}}\sigma _{\mu }(x)\sigma _{\nu }(x)$ (18) $r(x)=\sigma _{\mu }(x)\,\,\lambda ^{\mu }(u(x))$ (19) $\sigma ^{\mu }(x)=X^{\mu }(u(x))-x^{\mu },\quad \eta _{\mu \nu }\sigma ^{\mu }(x)\sigma ^{\nu }(x)=0$ (20) where for the duration of this section all indices shall be raised and lowered using the "flat space" metric $\eta _{\mu \nu }$, the "mass" $m(u)$ is an arbitrary function of the proper-time $u$ along the mass's world line as measured using the "flat" metric, $du^{2}=\eta _{\mu \nu }\,dX^{\mu }dX^{\nu },$ and $X^{\mu }(u)$ describes the arbitrary world line of the mass, $\lambda ^{\mu }(u)=dX^{\mu }(u)/du$ is then the four-velocity of the mass, $\sigma _{\mu }(x)$ is a "flat metric" null-vector field implicitly defined by Eqn.
(20), and $u(x)$ implicitly extends the proper-time parameter to a scalar field throughout spacetime by viewing it as constant on the outgoing light cone of the "flat" metric that emerges from the event $X^{\mu }(u),$ and satisfies the identity $\lambda ^{\mu }(u(x))\,\partial _{\mu }u(x)=1.$ Grinding out the Einstein tensor for the metric $g_{\mu \nu }$ and integrating the outgoing energy–momentum flux "at infinity," one finds that the metric $g_{\mu \nu }$ describes a mass with proper-time dependent four-momentum $P^{\mu }=m(u)\,\lambda ^{\mu }(u)$ that emits net four-momentum at a proper rate of $-dP^{\mu }/du;$ as viewed from the mass's instantaneous rest-frame, the radiation flux has an angular distribution $A(u)+B(u)\,\cos(\theta (u)),$ where $A(u)$ and $B(u)$ are complicated scalar functions of $m(u),\lambda ^{\mu }(u),\sigma _{\mu }(u),$ and their derivatives, and $\theta (u)$ is the instantaneous rest-frame angle between the 3-acceleration and the outgoing null-vector. The Kinnersley metric may therefore be viewed as describing the gravitational field of an accelerating photon rocket with a very badly collimated exhaust. In the special case where $\lambda ^{\mu }$ is independent of proper-time, the Kinnersley metric reduces to the Vaidya metric. Vaidya–Bonner metric Since the radiated or absorbed matter might be electrically non-neutral, the outgoing and ingoing Vaidya metrics Eqs(6)(7) can be naturally extended to include varying electric charges, $ds^{2}=-\left(1-{\frac {2M(u)}{r}}+{\frac {Q(u)^{2}}{r^{2}}}\right)du^{2}-2dudr+r^{2}(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2})\;,$ (21) $ds^{2}=-\left(1-{\frac {2M(v)}{r}}+{\frac {Q(v)^{2}}{r^{2}}}\right)dv^{2}+2dvdr+r^{2}(d\theta ^{2}+\sin ^{2}\theta \,d\phi ^{2})\;.$ (22) Eqs(21)(22) are called the Vaidya–Bonner metrics, and evidently they can also be regarded as extensions of the Reissner–Nordström metric, analogous to the correspondence between the Vaidya and Schwarzschild metrics. See also • Schwarzschild metric • Null dust solution References 1. Eric Poisson. A Relativist's Toolkit: The Mathematics of Black-Hole Mechanics. Cambridge: Cambridge University Press, 2004. Section 4.3.5 and Section 5.1.8. 2. Jeremy Bransom Griffiths, Jiri Podolsky. Exact Space-Times in Einstein's General Relativity. Cambridge: Cambridge University Press, 2009. Section 9.5. 3. Thanu Padmanabhan. Gravitation: Foundations and Frontiers. Cambridge: Cambridge University Press, 2010. Section 7.3. 4. Pankaj S Joshi. Global Aspects in Gravitation and Cosmology. Oxford: Oxford University Press, 1996. Section 3.5. 5. Pankaj S Joshi. Gravitational Collapse and Spacetime Singularities. Cambridge: Cambridge University Press, 2007. Section 2.7.6. 6. Valeri Pavlovich Frolov, Igor Dmitrievich Novikov. Black Hole Physics: Basic Concepts and New Developments. Berlin: Springer, 1998. Section 5.7. 7. Kinnersley, W. (October 1969). "Field of an arbitrarily accelerating point mass". Phys. Rev. 186 (5): 1335. Bibcode:1969PhRv..186.1335K. doi:10.1103/PhysRev.186.1335.
Vakhitov–Kolokolov stability criterion The Vakhitov–Kolokolov stability criterion is a condition for linear stability (sometimes called spectral stability) of solitary wave solutions to a wide class of U(1)-invariant Hamiltonian systems, named after Soviet scientists Aleksandr Kolokolov (Александр Александрович Колоколов) and Nazib Vakhitov (Назиб Галиевич Вахитов). The condition for linear stability of a solitary wave $u(x,t)=\phi _{\omega }(x)e^{-i\omega t}$ with frequency $\omega $ has the form ${\frac {d}{d\omega }}Q(\omega )<0,$ where $Q(\omega )\,$ is the charge (or momentum) of the solitary wave $\phi _{\omega }(x)e^{-i\omega t}$, conserved by Noether's theorem due to U(1)-invariance of the system. Original formulation Originally, this criterion was obtained for the nonlinear Schrödinger equation, $i{\frac {\partial }{\partial t}}u(x,t)=-{\frac {\partial ^{2}}{\partial x^{2}}}u(x,t)+g(|u(x,t)|^{2})u(x,t),$ where $x\in \mathbb {R} $, $t\in \mathbb {R} $, and $g\in C^{\infty }(\mathbb {R} )$ is a smooth real-valued function. The solution $u(x,t)$ is assumed to be complex-valued. Since the equation is U(1)-invariant, by Noether's theorem, it has an integral of motion, $ Q(u)={\frac {1}{2}}\int _{\mathbb {R} }|u(x,t)|^{2}\,dx$, which is called charge or momentum, depending on the model under consideration. For a wide class of functions $g$, the nonlinear Schrödinger equation admits solitary wave solutions of the form $u(x,t)=\phi _{\omega }(x)e^{-i\omega t}$, where $\omega \in \mathbb {R} $ and $\phi _{\omega }(x)$ decays for large $x$ (one often requires that $\phi _{\omega }(x)$ belongs to the Sobolev space $H^{1}(\mathbb {R} ^{n})$). Usually such solutions exist for $\omega $ from an interval or collection of intervals of a real line. The Vakhitov–Kolokolov stability criterion,[1][2][3][4] ${\frac {d}{d\omega }}Q(\phi _{\omega })<0,$ is a condition of spectral stability of a solitary wave solution. Namely, if this condition is satisfied at a particular value of $\omega $, then the linearization at the solitary wave with this $\omega $ has no spectrum in the right half-plane. This result is based on an earlier work[5] by Vladimir Zakharov. Generalizations This result has been generalized to abstract Hamiltonian systems with U(1)-invariance.[6] It was shown that under rather general conditions the Vakhitov–Kolokolov stability criterion guarantees not only spectral stability but also orbital stability of solitary waves. The stability condition has been generalized[7] to traveling wave solutions to the generalized Korteweg–de Vries equation of the form $\partial _{t}u+\partial _{x}^{3}u+\partial _{x}f(u)=0\,$. The stability condition has also been generalized to Hamiltonian systems with a more general symmetry group.[8] See also • Derrick's theorem • Linear stability • Lyapunov stability • Nonlinear Schrödinger equation • Orbital stability References 1. Колоколов, А. А. (1973). "Устойчивость основной моды нелинейного волнового уравнения в кубичной среде". Прикладная механика и техническая физика (3): 152–155. 2. A.A. Kolokolov (1973). "Stability of the dominant mode of the nonlinear wave equation in a cubic medium". Journal of Applied Mechanics and Technical Physics. 14 (3): 426–428. Bibcode:1973JAMTP..14..426K. doi:10.1007/BF00850963. S2CID 123792737. 3. Вахитов, Н. Г. & Колоколов, А. А. (1973). "Стационарные решения волнового уравнения в среде с насыщением нелинейности". Известия высших учебных заведений. Радиофизика. 16: 1020–1028. 4. N.G. Vakhitov & A.A. Kolokolov (1973). 
"Stationary solutions of the wave equation in the medium with nonlinearity saturation". Radiophys. Quantum Electron. 16 (7): 783–789. Bibcode:1973R&QE...16..783V. doi:10.1007/BF01031343. S2CID 123386885. 5. Vladimir E. Zakharov (1967). "Instability of Self-focusing of Light" (PDF). Zh. Eksp. Teor. Fiz. 53: 1735–1743. Bibcode:1968JETP...26..994Z. 6. Manoussos Grillakis; Jalal Shatah & Walter Strauss (1987). "Stability theory of solitary waves in the presence of symmetry. I". J. Funct. Anal. 74: 160–197. doi:10.1016/0022-1236(87)90044-9. 7. Jerry Bona; Panagiotis Souganidis & Walter Strauss (1987). "Stability and instability of solitary waves of Korteweg-de Vries type". Proceedings of the Royal Society A. 411 (1841): 395–412. Bibcode:1987RSPSA.411..395B. doi:10.1098/rspa.1987.0073. S2CID 120894859. 8. Manoussos Grillakis; Jalal Shatah & Walter Strauss (1990). "Stability theory of solitary waves in the presence of symmetry". J. Funct. Anal. 94 (2): 308–348. doi:10.1016/0022-1236(90)90016-E.
Val Val may refer to: Look up val, Val, or -val in Wiktionary, the free dictionary. Military equipment • Aichi D3A, a Japanese World War II dive bomber codenamed "Val" by the Allies • AS Val, a Soviet assault rifle Music • Val, album by Val Doonican • VAL (band), Belarusian pop duo People • Val (given name), a unisex given name • Rafael Merry del Val (1865–1930), Spanish Catholic cardinal • Val (sculptor) (1967–2016), French sculptor • Val (footballer, born 1983), Lucivaldo Lázaro de Abreu, Brazilian football midfielder • Val (footballer, born 1997), Valdemir de Oliveira Soares, Brazilian football defensive midfielder Places • Val (Rychnov nad Kněžnou District), a village and municipality in the Czech Republic • Val (Tábor District), a village and municipality in the Czech Republic • Vál, a village in Hungary • Val, Iran, a village in Kurdistan Province, Iran • Val, Italy, a frazione in Cortina d'Ampezzo, Veneto, Italy • Val, Bhiwandi, a village in Maharashtra, India Other uses • Val (film), an American documentary about Val Kilmer, directed by Leo Scott and Ting Poo • Valley girl or Val, an American stereotype • Abbreviation of the amino acid valine • A weapon used in the Indian martial art of Kalarippayattu • Vieques Air Link, a Puerto Rican airline company See also • VAL (disambiguation) • Wal (disambiguation) • Vala (disambiguation) • Vale (disambiguation) • Vali (disambiguation) • Valo (disambiguation) • Vals (disambiguation) • Valy (disambiguation) • Valk (surname) • Vall (surname) • All pages with titles beginning with Val
Valentin Belousov Valentin Danilovich Belousov (Russian: Валенти́н Дани́лович Белоу́сов; 20 February 1925 – 23 July 1988) was a Soviet and Moldovan mathematician and a corresponding member of the Academy of Pedagogical Sciences of the USSR (1968).[1][2] Valentin Belousov Born Valentin Danilovich Belousov (1925-02-20)20 February 1925 Bălți, Kingdom of Romania Died23 July 1988(1988-07-23) (aged 63) Kishinev, Moldavian SSR, Soviet Union EducationDoctor of Physical and Mathematical Sciences (1966) Alma materKishinev Pedagogical Institute Scientific career FieldsMathematics He graduated from the Kishinev Pedagogical Institute (1947), became a Doctor of Physical and Mathematical Sciences (1966) and a Professor (1967), and was an honored worker of science and technology of the Moldavian SSR. From 1962 he worked at the Institute of Mathematics, Academy of Sciences of the Moldavian SSR. His major works are in algebra, especially the theory of quasigroups and its applications. He is known for his book "Fundamentals of the theory of quasigroups and loops" (1967) and for school textbooks. He was a laureate of the State Prize in Science and Technology of the Moldavian SSR, an Honored Worker of Science and Technology of the MSSR (1970), and a laureate of the State Prize for Science and Technology of the MSSR (1982). He is the founder of the school of quasigroup theory in the former USSR. Milestones in the scientific life • 1944–1947 – student at the Pedagogical Institute in Kishinev, • 1947–1948 – teacher training courses at the Kishinev Pedagogical Institute, • 1948–1950 – teacher and head teacher at the high school in the village of Sofia, Bălți district, • 1950–1954 – Lecturer, Department of Mathematics, Bălți Pedagogical Institute, • 1954–1955 – student in the postgraduate courses at Lomonosov Moscow State University, • 1955–1956 – postgraduate student at Lomonosov Moscow State University, • 1957–1960 – Lecturer, Department of Mathematics, Bălți Pedagogical Institute, • 1960–1961 – intern at the University of Wisconsin, Madison, USA (exchange between the USSR and the USA), • 1961–1962 – Head of the Department of Mathematics, Bălți Pedagogical Institute, • 1962–1987 – Head of Department at the Institute of Mathematics of the MSSR, • 1964–1966 – Associate Professor, Department of Mathematics, Technical University (part-time), • 1966–1988 – Professor and Head (until 1977) of the Department of Higher Algebra, Kishinev State University (part-time). Scientific heritage Theses of V. D. Belousov • Studies in the theory of quasigroups and loops (1958) – PhD thesis. • Systems of quasigroups with identities (1966) – doctoral thesis. (Both were defended at Lomonosov Moscow State University.) • Field of study – theory of quasigroups and related areas Research areas • The general theory of quasigroups (derivative operations; cores; regular substitutions; groups associated with quasigroups; autotopies; antiautotopies; etc.). • Classes of binary quasigroups and loops (distributive quasigroups, left distributive quasigroups, IP-quasigroups, F-quasigroups, CI-quasigroups, I-quasigroups, Bol loops, totally symmetric quasigroups, Stein quasigroups, etc.) • Quasigroups with balanced identities. Systems of quasigroups with identities (associativity, mediality, transitivity, distributivity, Stein identities, etc.) • Functional equations on quasigroups (general associativity with identical and with different variables, general distributivity, mediality, and others)
• Positional algebras (Belousov algebras), an apparatus for solving functional equations • n-ary and infinitary quasigroups (he laid the foundations of the theory of n-ary and infinitary quasigroups) • Algebraic networks and quasigroups (general theory, closure conditions for configurations) • Combinatorial questions of the theory of quasigroups (prolongations of quasigroups, orthogonal systems of binary and n-ary operations and quasigroups, parastrophically orthogonal quasigroups). Books • Fundamentals of the theory of quasigroups and loops. Moscow: Nauka, 1967. • Algebraic networks and quasigroups. Kishinev: Ştiinţa, 1971. • n-ary quasigroups. Kishinev: Ştiinţa, 1972. • Changes in algebraic networks. Kishinev: Ştiinţa, 1979. • Elements of the theory of quasigroups (textbook for a special course). Kishinev: Kishinev State University, 1981. • Latin squares and their applications. Kishinev: Ştiinţa, 1989 (in collaboration with G. B. Belyavskaya). • Mathematics in schools of Moldova (1812–1972). Kishinev: Ştiinţa, 1973 (in collaboration with I. I. Lupu and Y. I. Neagu). • Russian-Moldovan Mathematics Dictionary. Kishinev: Moldavian Soviet Encyclopedia, 1980 (in collaboration with Y. I. Neagu). • I. K. Man. Pages of life and creativity. Kishinev: Ştiinţa, 1983 (in collaboration with Y. I. Neagu). Educational activity Valentin Danilovich Belousov was not only a scientist but also an excellent teacher. He made important contributions to the education system of Moldova and to the training of Moldovan mathematicians. As a member of the Academy of Pedagogical Sciences (Mathematics section), he carried out extensive scientific and organizational work in the field of mathematics education. About 30 of his students defended their theses and work in many countries. Valentin Belousov and Y. I. Neagu wrote the Moldovan-Russian Dictionary of Mathematics, which has long been used by mathematicians in Moldova. Together with I. I. Lupu and Y. I. Neagu, V. D. Belousov published the book Mathematics in schools of Moldova (1812–1972). For many years, V. D. Belousov was the chairman of the jury of the school mathematical olympiads of Moldova. Trainees and graduates Under the direction of Valentin Danilovich, 22 mathematicians from different republics of the former Soviet Union and from abroad defended their theses; four of them also defended doctoral dissertations. Social work In parallel with his scientific and pedagogical activity, Valentin Danilovich carried out extensive public work as a deputy of the Bălți City Council (1960–1962), a member of the District Committee of the CPM (1967–1973), a member of the Supreme Council of Moldova (1975–1980), a member of the Presidium of the Society "Knowledge", a member of the editorial boards of various domestic and foreign publications, and a member of the organizing committees of many international conferences. Family His father, Daniel Afinogenovich Belousov (1897–1956), was an officer in the army of Tsarist Russia (he graduated from a military school in Tbilisi). He took part in World War I. In Moldova, he worked at the post office in the city of Bălți. Valentin Danilovich's mother, Elena K. Belousova (Garbu) (1897–1982), also worked at the post office. His wife, Elizabeth Feodorovna Belousova (Bondareva) (5 May 1925 – 26 November 1991), was a philologist; she taught at the State University of Moldova. Children: Alexander Valentinovich (3 October 1948 – 3 September 1998), PhD in physics, senior research fellow of the Academy of Sciences of Moldova; Tatiana Valentinovna Kravchenko (born 16 February 1952), a neurologist. His brother, Victor Danilovich Belousov, MD (b.
1927), was an orthopedic traumatologist and the author of the monographs "Road traffic accidents. First aid to victims" (1984) and "Conservative treatment of false joints of long bones" (1990). Awards and titles For his merits in science, education and social activities, he was awarded the Order of the Red Banner of Labour (1961) and an Honorary Diploma of the Presidium of the Supreme Soviet of the Moldavian Soviet Socialist Republic (1967), and he was elected a corresponding member of the Academy of Pedagogical Sciences of the USSR (1968). He was an Honored Worker of Science of Moldova (1970), a laureate of the State Prize of Moldova in the field of science and technology (1972), and received the Excellence in Education distinction of Moldova (1980). Foreign tours Despite the strict government controls of those years, Valentin Danilovich received permission for research trips abroad and traveled abroad many times: • 1960–1961 – United States • 1964, 1976 – Hungary • 1967, 1972, 1977 – Bulgaria • 1968, 1974, 1981 – Yugoslavia • 1967, 1969–1970 – Canada • 1968 – Sierra Leone • 1975 – East Germany School of Belousov The lifework of Valentin Danilovich Belousov is continued by his numerous disciples and followers in various countries, whose number is constantly increasing. In 1994, at the initiative of V. D. Belousov's students, the scientific journal "Quasigroups and Related Systems" (http://www.quasigroups.eu/), now known all over the world, was established at the Institute of Mathematics and Computer Science of the ASM. For twenty years this journal has published many articles by both Moldovan and foreign specialists in the theory of quasigroups and related areas. Since 1995, the Institute of Mathematics and Computer Science of the ASM has held an annual algebra seminar in memory of Valentin Danilovich Belousov on the anniversary of his birth (20 February). At this seminar, his disciples and followers from Moldova and other countries take stock of the past year and report new results in the theory of quasigroups and related areas. Thanks to the productive scientific and pedagogical activity of V. D. Belousov, Belousov's School on the Theory of Quasigroups was founded in the Republic of Moldova; it consists of his 22 disciples (http://belousov.scerb.com/), and more than 40 followers continue his work in Moldova and abroad. References 1. Persons: Valentin Danilovich Belousov 2. G. B. Belyavskaya, W. A. Dudek and V. A. Shcherbacov (2005). "Valentin Danilovich Belousov – his life and work". Quasigroups and Related Systems. pp. 1–7. ISSN 1561-2848. Authority control International • ISNI • VIAF National • Germany • Israel • United States Academics • MathSciNet • Mathematics Genealogy Project • zbMATH
Valentina Harizanov Valentina Harizanov is a Serbian-American mathematician and professor of mathematics at The George Washington University. Her main research contributions are in computable structure theory (roughly at the intersection of computability theory and model theory), where she introduced the notion of degree spectra of relations on computable structures and obtained the first significant results concerning uncountable, countable, and finite Turing degree spectra.[1] Her recent interests include algorithmic learning theory and spaces of orders on groups. Valentina Harizanov NationalitySerbian-American Alma materUniversity of Wisconsin, Madison, University of Belgrade Known forResearch in computability theory AwardsOscar and Shoshana Trachtenberg Prize for Faculty Scholarship (2016) Scientific career FieldsMathematics, Computability Theory InstitutionsThe George Washington University ThesisDegree Spectrum of a Recursive Relation on a Recursive Structure (1987) Doctoral advisorTerry Millar Education She obtained her Bachelor of Science in mathematics in 1978 at the University of Belgrade and her Ph.D. in mathematics in 1987 at the University of Wisconsin–Madison under the direction of Terry Millar.[2][3] Career At The George Washington University, Harizanov was an assistant professor of mathematics from 1987 to 1993, an associate professor of mathematics from 1994 to 2002, and a professor of mathematics from 2003 to the present. She has held two visiting professor positions, one in 1994 at the University of Maryland, College Park and one in 2014 at the Kurt Gödel Research Center at the University of Vienna.[3] Harizanov has co-directed the Center for Quantum Computing, Information, Logic, and Topology at The George Washington University since 2011.[3] Research In 2009, Harizanov received a grant from the National Science Foundation to research how algebraic, topological, and algorithmic properties of mathematical structures relate.[4] Awards and honors Harizanov won the Oscar and Shoshana Trachtenberg Prize for Faculty Scholarship from The George Washington University (GWU) in 2016.[5] This award is presented each year to a tenured GWU faculty member to recognize outstanding research accomplishments.[6] She was named MSRI Eisenbud Professor for Fall 2020.[7] Publications Harizanov has over 40 publications in peer-reviewed journals, including • V.S. Harizanov, "Some effects of Ash-Nerode and other decidability conditions on degree spectra " Annals of Pure and Applied Logic 55 (1), pp. 51–65 (1991), cited 21 times according to Web of Science In addition, she has published the following book-length survey paper and co-edited, co-authored book: • V.S. Harizanov, “Pure computable model theory,” in the volume: Handbook of Recursive Mathematics, vol. 1, Yu.L. Ershov, S.S. Goncharov, A. Nerode, and J.B. Remmel, editors (North-Holland, Amsterdam, 1998), pp. 3–114. • M. Friend, N.B. Goethe, and V.S. Harizanov, Induction, Algorithmic Learning Theory, and Philosophy, Series: Logic, Epistemology, and the Unity of Science, vol. 9, Springer, Dordrecht, 304 pp., 2007. Degree spectra of relations are introduced and first studied in Harizanov's dissertation: Degree Spectrum of a Recursive Relation on a Recursive Structure(1987).[1] References 1. Harizanov, V.S. (1987). "Degree Spectrum of a Recursive Relation on a Recursive Structure". Ph.D. Dissertation, University of Wisconsin–Madison. 2. Valentina Harizanov at the Mathematics Genealogy Project 3. "Curriculum Vitae of Valentina Harizanov" (PDF). 
The George Washington University. Retrieved 15 January 2018. 4. "Award Abstract #0904101: Topics in Computable Mathematics". National Science Foundation. Retrieved 15 January 2018. 5. "Trachtenberg Research Award Winners". The George Washington University. Retrieved 15 January 2018. 6. "Oscar and Shoshana Trachtenberg Prize for Faculty Scholarship (Research)". The George Washington University. Retrieved 15 January 2018. 7. MSRI. "Mathematical Sciences Research Institute". www.msri.org. Retrieved 2021-06-07. External links • Valentina Harizanov's home page Authority control International • ISNI • VIAF National • Norway • France • BnF data • Germany • Israel • United States • Czech Republic Academics • DBLP • MathSciNet • Mathematics Genealogy Project • ORCID • PhilPeople • zbMATH Other • IdRef
Valentina Gorbachuk Valentina Ivanivna Gorbachuk (born 1937) is a Soviet and Ukrainian mathematician, specializing in operator theory and partial differential equations. Education and career Gorbachuk was born in Mogilev on 25 June 1937; then part of the Soviet Union, it has since become part of Belarus. Her parents worked as an accountant and a telegraphist; in search of better work, they moved to Lutsk in what is now Ukraine when Gorbachuk was a child, and that is where she was schooled.[1] She applied to study mathematics and mechanics at Taras Shevchenko National University of Kyiv, but was denied because of a "stay in the occupation". Instead, she went to the Lutsk Pedagogical Institute, graduating in 1959. On the advice of one of her faculty mentors there, S.I. Zuhovitsky,[1] she entered graduate study at the NASU Institute of Mathematics, as a student of Yury Berezansky, earning a candidate degree (the Soviet equivalent of a Ph.D.) in the early 1960s.[1][2] She continued as a researcher at the Institute of Mathematics for the rest of her career, defending a Doctor of Science (equivalent of a habilitation under the former Soviet system) in 1992.[1] Books Gorbachuk is the coauthor, with M. L. Gorbachuk, of two books on operator theory, translated from Russian into English: • Boundary value problems for operator differential equations (Naukova Dumka, 1984; trans., Mathematics and its Applications 48, Kluwer, 1991)[3] • M. G. Krein’s lectures on entire operators (Operator Theory: Advances and Applications, 97, Birkhäuser, 1997)[4] Recognition In 1998, Gorbachuk won the State Prize of Ukraine in Science and Technology.[1] Personal life Gorbachuk worked closely with her husband, Miroslav L'vovich Gorbachuk (1938–2017), a mathematician with whom she shared her research interests.[1] Their son, Volodymyr Myroslavovich Gorbachuk, is an associate professor of mathematical physics at the Igor Sikorsky Kyiv Polytechnic Institute (National Technical University of Ukraine).[1][5] References 1. "Valentina Ivanivna Gorbachuk (to 80th birthday anniversary)", Methods of Functional Analysis and Topology, 23 (3): 207–208, 2017, Zbl 1399.01003 2. Valentina Gorbachuk at the Mathematics Genealogy Project Note that this source states her Ph.D. year as 1964; Methods of Functional Analysis and Topology states it as 1962. 3. Reviews of Boundary value problems for operator differential equations: J. Wloka, Zbl 0567.47041; J. W. Macki, MR0776604; J. Mawhin, Zbl 0751.47025 4. Reviews of M. G. Krein’s lectures on entire operators: A. Pankov, Zbl 0883.47008; Damir Z. Arov and Harry Dym, MR1466698 5. Gorbachuk Volodymyr Myroslavovich, Igor Sikorsky Kyiv Polytechnic Institute, retrieved 2023-03-03 Authority control International • ISNI • VIAF National • Norway • France • BnF data • Germany • Israel • United States • Netherlands Academics • CiNii • MathSciNet • Mathematics Genealogy Project Other • Encyclopedia of Modern Ukraine • IdRef
Valeria Simoncini Valeria Simoncini (born 1966)[1] is an Italian researcher in numerical analysis who works as a professor in the mathematics department at the University of Bologna.[2] Her research involves the computational solution of equations involving large matrices, and their applications in scientific computing.[3] She is the chair of the SIAM Activity Group on Linear Algebra.[4] Education and career Simoncini earned a degree from the University of Bologna in 1989, became a visiting scholar at the University of Illinois at Urbana–Champaign from 1991 to 1993, and completed her PhD at the University of Padua in 1994. After working at CNR from 1995 to 2000, she returned to Bologna as an associate professor in 2000, and was promoted to full professor in 2010.[2] Book With Antonio Navarra, she is the author of the book A Guide to Empirical Orthogonal Functions for Climate Data Analysis (Springer, 2010). Recognition Simoncini was a second-place winner of the Leslie Fox Prize for Numerical Analysis in 1997.[5] In 2014 she was elected as a fellow of the Society for Industrial and Applied Mathematics "for contributions to numerical linear algebra".[6] She was named to the 2021 class of fellows of the American Mathematical Society "for contributions to computational mathematics, in particular to numerical linear algebra".[7] In 2023, she was elected to serve on the SIAM Council.[8] References 1. Birth year from German National Library catalog entry, retrieved 2018-12-02. 2. Curriculum vitae (PDF), January 14, 2015, retrieved 2017-08-14 3. "Research Interests and Problem Solving", Valeria Simoncini, University of Bologna, retrieved 2017-08-14 4. "SIAM Activity Groups Election Results", SIAM News, 6 December 2018 5. IMA Leslie Fox Prize for Numerical Analysis, Institute of Mathematics & its Applications, retrieved 2022-02-06 6. SIAM Fellows: Class of 2014, Society for Industrial and Applied Mathematics, retrieved 2017-08-14 7. 2021 Class of Fellows of the AMS, American Mathematical Society, retrieved 2020-11-02 8. "Welcoming the Newest Electees to the SIAM Board of Trustees and Council". SIAM News. Retrieved 2023-03-15. External links • Home page • Valeria Simoncini publications indexed by Google Scholar Authority control International • ISNI • VIAF National • Germany • Israel • United States • Netherlands Academics • Association for Computing Machinery • CiNii • DBLP • Google Scholar • Mathematics Genealogy Project • ORCID • zbMATH Other • IdRef
Valerie Thomas Valerie L. Thomas (born February 8, 1943) is an American data scientist and inventor. She invented the illusion transmitter, for which she received a patent in 1980.[2] She was responsible for developing the digital media formats that image processing systems used in the early years of NASA's Landsat program.[3] Valerie Thomas NASA photograph of Thomas next to a stack of early Landsat Computer Compatible Tapes, 1979[1] Born (1943-02-08) February 8, 1943 Maryland, United States Alma mater • Morgan State University • George Washington University • University of Delaware • Simmons College Graduate School of Management Known forInventor of the illusion transmitter Scientific career Institutions • NASA Goddard • UMBC Early life and education Thomas was born in Baltimore, Maryland.[4] She graduated from high school in 1961, during the era of integration.[5] She attended Morgan State University, where she was one of two women majoring in physics.[6] Thomas excelled in her mathematics and science courses at Morgan State University, graduating with a degree in physics with highest honors in 1964.[5] Career Thomas began working for NASA as a data analyst in 1964.[7][8] She developed real-time computer data systems to support satellite operations control centers (1964–1970). She oversaw the creation of the Landsat program (1970–1981), becoming an international expert in Landsat data products. Her participation in this program expanded upon the work of other NASA scientists in the pursuit of visualizing Earth from space.[9] In 1974, Thomas headed a team of approximately 50 people for the Large Area Crop Inventory Experiment (LACIE), a joint effort with the NASA Johnson Space Center, the National Oceanic and Atmospheric Administration (NOAA), and the U.S. Department of Agriculture. An unprecedented scientific project, LACIE demonstrated the feasibility of using space technology to automate the process of predicting wheat yield on a worldwide basis.[8] She attended an exhibition in 1976 that included an illusion of a light bulb that appeared to be lit, even though it had been removed from its socket. The illusion, which involved another light bulb and concave mirrors, inspired Thomas. Curious about how light and concave mirrors could be used in her work at NASA, she began her research in 1977. This involved creating an experiment in which she observed how the position of a concave mirror would affect the real object that is reflected. Using this technology, she would invent an optical device called the illusion transmitter.[6] On October 21, 1980,[7] she obtained the patent for the illusion transmitter, a device NASA continues to use today, and it is being adapted for use in surgery, as well as for televisions and video screens.[10][11] Thomas became associate chief of the Space Science Data Operations Office at NASA.[12] Thomas's invention has been depicted in a children's fiction book, on television, and in video games.[5] In 1985, as the NSSDC Computer Facility manager, Thomas was responsible for a major consolidation and reconfiguration of two previously independent computer facilities, and infused them with new technology. She then served as the Space Physics Analysis Network (SPAN)[13] project manager from 1986 to 1990 during a period when SPAN underwent a major reconfiguration and grew from a scientific network with approximately 100 computer nodes to one directly connecting approximately 2,700 computer nodes worldwide.
Thomas' team was credited with developing a computer network that connected research stations of scientists from around the world to improve scientific collaboration.[5] In 1990, SPAN became a major part of NASA's science networking and today's Internet.[8] She also participated in projects related to Halley's Comet, ozone research, satellite technology, and the Voyager spacecraft. She mentored countless numbers of students in the Mathematics Aerospace Research and Technology Inc. program.[14] Because of her unique career and commitment to giving something back to the community, Thomas often spoke to groups of students from elementary school, secondary, college, and university ages, as well as adult groups. As a role model for potential young black engineers and scientists, she made hundreds of visits to schools and national meetings over the years. She has mentored many students working in summer programs at Goddard Space Flight Center. She also judged at science fairs, working with organizations such as the National Technical Association (NTA) and Women in Science and Engineering (WISE). These latter programs encourage students from various underrepresented groups to pursue science and technology careers.[15] At the end of August 1995, she retired from NASA and her positions of associate chief of the NASA Space Science Data Operations Office, manager of the NASA Automated Systems Incident Response Capability, and as chair of the Space Science Data Operations Office Education Committee.[8] Retirement After retiring, Thomas served as an associate at the UMBC Center for Multicore Hybrid Productivity Research.[16] She also continued to mentor youth through the Science Mathematics Aerospace Research and Technology, Inc. and the National Technical Association.[6] Notable achievements Throughout her career, Thomas held high-level positions at NASA including heading the Large Area Crop Inventory Experiment (LACIE) collaboration between NASA, NOAA, and USDA in 1974, serving as assistant program manager for Landsat/Nimbus (1975–1976), managing the NSSDC Computer Facility (1985), managing the Space Physics Analysis Network project (1986–1990), and serving as associate chief of the Space Science Data Operations Office. She authored many scientific papers and holds a patent for the illusion transmitter. For her achievements, Thomas has received numerous awards including the Goddard Space Flight Center Award of Merit and the NASA Equal Opportunity Medal.[14] See also • Timeline of women in science • Mary Jackson (engineer) • Dorothy Vaughan • Katherine Johnson • Claudia Alexander • Doris Cohen • Lynnae Quick References 1. Smith, Yvette (January 28, 2020). "Dr. Valerie L. Thomas: The Face Behind Landsat Images". NASA. 2. US patent 4229761, Valerie L. Thomas, "Illusion Transmitter", issued October 21, 1980 3. "A Face Behind Landsat Images: Meet Dr. Valerie L. Thomas « Landsat Science". February 28, 2019. Retrieved June 10, 2020. 4. "VALERIE THOMAS (1943- )". Blackpast. April 21, 2021. Retrieved February 1, 2022. 5. "Life and Work of Valerie L. Thomas". Robin Lindeen-Blakeley. Retrieved February 21, 2021. 6. "Illusion Transmitter". Inventor of the Week. MIT. 2003. Retrieved January 7, 2020. 7. "Valerie Thomas". Inventors. The Black Inventor On-Line Museum. 2011. Retrieved November 13, 2011. 8. James L. Green (September 1995). "Valerie L. Thomas Retires". Goddard Space Flight Center. Archived from the original on December 19, 1996. Retrieved March 10, 2017. 9. Smith, Yvette (January 28, 2020). "Dr. 
Valerie L. Thomas: The Face Behind Landsat Images". NASA. Retrieved February 10, 2021. 10. "Valerie Thomas - Inventions, NASA, and Facts - Biography". Biography.com. A&E Television Networks. April 12, 2021 [2 April 2014]. Retrieved February 2, 2022. This technology was subsequently adopted by NASA and has since been adapted for use in surgery as well as the production of television and video screens. 11. "Valerie Thomas | Lemelson". LEMELSON-MIT. MASSACHUSETTS INSTITUTE OF TECHNOLOGY. n.d. Retrieved February 2, 2022. NASA uses the technology today, and scientists are currently working on ways to incorporate it into tools for surgeons to look inside the human body, and possibly for television sets and video screens one day. 12. "Life and Work of Valerie L. Thomas". Robin Lindeen-Blakeley. Retrieved April 28, 2020. 13. Thomas, Koblinsky, Webster, Zlotnicki, Green (1987). "NSSDC: National Space Science Data Center" (PDF).{{cite web}}: CS1 maint: multiple names: authors list (link) 14. Connolly, Danielle (May 15, 2019). "Make them Mainstream". Make Them Mainstream. Archived from the original on February 1, 2022. Retrieved February 1, 2022. 15. "Valerie L. Thomas Retires". nssdc.gsfc.nasa.gov. Retrieved February 25, 2021. 16. "Little Known Black History Fact: Valerie Thomas". Black America Web. October 27, 2014. Retrieved March 10, 2017. Authority control International • VIAF National • Israel • United States • Korea • Poland
Valery Alexeev (mathematician) Valery Alexeev (born 1964)[1] is an American mathematician who is currently the David C. Barrow Professor at the University of Georgia and an elected Fellow of the American Mathematical Society.[2][3] He received his Ph.D. from Lomonosov Moscow State University in 1990.[4] References 1. "Alexeev, Valery, 1964-". id.loc.gov. Retrieved January 6, 2021. 2. "Fellows". ams.org. Retrieved April 25, 2017. 3. "Valery Alexeev". uga.edu. Retrieved April 25, 2017. 4. "Valery Alexeev". genealogy.math.ndsu. Retrieved January 6, 2021. Authority control National • United States Academics • MathSciNet • Mathematics Genealogy Project • zbMATH
Valery Goppa Valery Denisovich Goppa (Russian: Вале́рий Дени́сович Го́ппа; born 1939[1]) is a Soviet and Russian mathematician. He discovered a relation between algebraic geometry and codes, utilizing the Riemann-Roch theorem. Today these codes are called algebraic geometry codes.[2] In 1981 he presented his discovery at the algebra seminar of the Moscow State University. He also constructed other classes of codes in his career, and in 1972 he won the best paper award of the IEEE Information Theory Society for his paper "A new class of linear correcting codes".[3] It is this class of codes that bear the name of “Goppa code”. Selected publications • V. D. Goppa (1988). Geometry and Codes (Mathematics and its Applications). Berlin: Springer. ISBN 90-277-2776-7. • E. N. Gozodnichev; V. D. Goppa (1995). Algebraic Information Theory (Series on Soviet and East European Mathematics, Vol 11). World Scientific Pub Co Inc. ISBN 981-02-0943-6.{{cite book}}: CS1 maint: multiple names: authors list (link) • VD Goppa (1970). "A New Class of Linear Error Correcting Codes". Problemy Peredachi Informatsii. • VD Goppa (1971). "Rational Representation of Codes and (L,g)-Codes". Problemy Peredachi Informatsii. • VD Goppa (1972). "Codes Constructed on the Base of $(L,g)$-Codes". Probl. Peredachi Inf. 8 (2): 107–109. • VD Goppa (1974). "Binary Symmetric Channel Capacity Is Attained with Irreducible Codes". Probl. Peredachi Inf. 10 (1): 111–112. • VD Goppa (1974). "Correction of Arbitrary Noise by Irreducible Codes". Probl. Peredachi Inf. 10 (3): 118–119. • VD Goppa (1977). "Codes Associated with Divisors". Probl. Peredachi Inf. 13 (1): 33–39. • VD Goppa (1983). "Algebraico-Geometric Codes". Math. USSR Izv. 21 (1): 75–91. Bibcode:1983IzMat..21...75G. doi:10.1070/IM1983v021n01ABEH001641. • VD Goppa (1984). "Codes and information". Russ. Math. Surv. 39 (1): 87–141. Bibcode:1984RuMaS..39...87G. doi:10.1070/RM1984v039n01ABEH003062. S2CID 250898540. • VD Goppa (1995). "Group representations and algebraic information theory". Izv. Math. 59 (6): 1123–1147. Bibcode:1995IzMat..59.1123G. doi:10.1070/IM1995v059n06ABEH000051. S2CID 250882696. References 1. Stepan G. Korneev, Soviet scientists - honorary members of foreign scientific societies.(in Russian) Nauka, Moscow, 1981; p. 41 2. Huffman, William Cary; Pless, Vera S. (2003). Fundamentals of Error-Correcting Codes. Cambridge University Press. p. 521. ISBN 978-0-521-78280-7. 0-521-78280-5. Retrieved 2016-02-02. 3. "Information Theory Society Paper Award". IEEE Information Theory Society. Retrieved March 6, 2013. • David Joyner (23 August 2002). "A brief guide to Goppa codes". Authority control International • ISNI • VIAF National • Israel • United States • Netherlands Academics • zbMATH Other • IdRef
Valery Senderov Valery Senderov (Russian: Валерий Сендеров; 17 March 1945 – 12 November 2014) was a Soviet dissident, mathematician, teacher, and advocate of human rights known for his struggle against state-sponsored antisemitism. Valery Senderov Born(1945-03-17)17 March 1945 Moscow, Soviet Union Died12 November 2014(2014-11-12) (aged 69) Moscow, Russia NationalityRussian Alma materMoscow Institute of Physics and Technology Scientific career FieldsMathematics, Politics Biography Senderov was born on 17 March 1945 in Moscow. In 1962, he was accepted at the prestigious Moscow Institute of Physics and Technology, where he studied mathematics. In 1968, just before completing his doctoral dissertation, Senderov was expelled for the dissemination of "philosophical literature", which was a euphemism for anything that was viewed by the censors as being anti-Soviet. He was given the opportunity to complete his degree in 1970.[1] In the 1970s, Senderov taught mathematics at the Second Mathematical School in Moscow. Toward the end of the decade, he joined the National Alliance of Russian Solidarists, an anticommunist organization headed by Russian emigres, and also the International Society for Human Rights. In the 1980s, Senderov became one of the leaders of the International Society for Human Rights and one of the founders of the Free Interprofessional Association of Workers, the first labor union in the Soviet Union that sought to be free of government control.[1][2] In 1982, Senderov was arrested by the KGB for publishing anticommunist articles in Russian-language newspapers printed abroad, in particular the magazine Posev (Sowing) and the newspaper Russkaya Mysl. After his arrest, Senderov openly admitted to the KGB that he was a member of the National Alliance of Russian Solidarists, becoming one of just two openly avowed members of this anticommunist group in the Soviet Union. At his trial, Senderov stated that he was a member of anticommunist groups and expressed that he would continue to fight against the Soviet regime even after he was freed from incarceration. He was sentenced to 7 years of hard labor and a subsequent probationary exile of an additional 5 years.[1][2][3] He was sent to a prison camp for political prisoners near Perm, where he spent much of his time in solitary confinement in a cold cell on rationed food for his refusal to comply with the rules of the prison camp. He refused to comply to protest the confiscation of his Bible and the prohibition against studying mathematics. In 1987, Senderov was released and, in 1988, became the leader of the National Alliance of Russian Solidarists in the Soviet Union, holding the first official press conference in this new role in 1988. During the period of perestroika, the National Alliance took an active part in supporting opposition parties.[1][2] Over the course of his life, Senderov authored dozens of political articles in magazines, newspapers, and anthologies, as well as a number of mathematical works dealing with functional analysis. He also wrote three books.[4][5] Death On 12 November 2014, he died at the age of 69 in Moscow.[1] Struggle against Antisemitism In 1980, Senderov self-published with Boris Kanevsky a work titled "Intellectual Genocide" about the discrimination by Soviet universities against Jewish applicants. In particular, the work singled out the mechanical and mathematical departments at the prestigious Moscow State University. 
Senderov shed light on the various methods used by the university administration to dissuade and reject Jewish applicants. One method was to hand-pick the most difficult problems from the International Mathematical Olympiad and to give these problems to Jewish applicants as part of the entrance examinations - a practice that was specifically prohibited by the Soviet Ministry of Education. Another method was to select problems that could be solved given the standard high school curriculum, but whose solution required far more time than allotted for the entrance exams. In addition, admission committees would ask Jewish applicants questions that were far outside the standard high school curriculum or separate them into special groups and then find reasons to fail those groups during the more subjective oral exams. In addition to describing the various methods used to reject Jewish applicants, Senderov also provided practical advice on preparing for the types of questions often asked of such applicants and using the appeals process to fight against unfair admission decisions. In conjunction with publishing this work, Senderov became one of the founders of a set of informal courses of study under the moniker of "Jewish National University", where well-known mathematicians gave lectures to applicants who had been denied admission to Moscow State University for being Jewish. References 1. "Умер публицист Валерий Сендеров". Grani.ru. 12 November 2014. 2. "Памяти Валерия Сендерова". 3. Łabędź, Leopold (1989). The Use and Abuse of Sovietology. Transaction Publishers. p. 170. ISBN 9781412840873. 4. "Math papers by V. A. Senderov, according to MathSciNet" (PDF). 5. "Publications of Valery Senderov". mathnet.ru. External links • Profile, ucsj.org, 1 November 2012; accessed 14 November 2014. • Reference in The Day, news.google.com; accessed 14 November 2014. • Reference in A Mathematical Medley, books.google.com; accessed 14 November 2014. • Senderov, Valery (1989). How a Program Became a Pogrom. Authority control International • ISNI • VIAF National • Germany • United States • Sweden Academics • MathSciNet • Scopus • zbMATH Other • IdRef
Valiant–Vazirani theorem The Valiant–Vazirani theorem is a theorem in computational complexity theory stating that if there is a polynomial time algorithm for Unambiguous-SAT, then NP = RP. It was proven by Leslie Valiant and Vijay Vazirani in their paper titled NP is as easy as detecting unique solutions published in 1986.[1] The proof is based on the Mulmuley–Vazirani–Vazirani isolation lemma, which was subsequently used for a number of important applications in theoretical computer science. The Valiant–Vazirani theorem implies that the Boolean satisfiability problem, which is NP-complete, remains a computationally hard problem even if the input instances are promised to have at most one satisfying assignment. Proof outline Unambiguous-SAT is the promise problem of deciding whether a given Boolean formula that has at most one satisfying assignment is unsatisfiable or has exactly one satisfying assignment. In the first case, an algorithm for Unambiguous-SAT should reject, and in the second it should accept the formula. If the formula has more than one satisfying assignment, then there is no condition on the behavior of the algorithm. The promise problem Unambiguous-SAT can be decided by a nondeterministic Turing machine that has at most one accepting computation path. In this sense, this promise problem belongs to the complexity class UP (which is usually only defined for languages). The proof of the Valiant–Vazirani theorem consists of a probabilistic reduction from SAT to SAT such that, with probability at least $\Omega (1/n)$, the output formula has at most one satisfying assignment, and thus satisfies the promise of the Unambiguous-SAT problem. More precisely, the reduction is a randomized polynomial-time algorithm that maps a Boolean formula $F(x_{1},\dots ,x_{n})$ with $n$ variables $x_{1},\dots ,x_{n}$ to a Boolean formula $F'(x_{1},\dots ,x_{n})$ such that • every satisfying assignment of $F'$ also satisfies $F$, and • if $F$ is satisfiable, then, with probability at least $\Omega (1/n)$, $F'$ has a unique satisfying assignment $(a_{1},\dots ,a_{n})$. By running the reduction a polynomial number $t$ of times, each time with fresh independent random bits, we get formulas $F'_{1},\dots ,F'_{t}$. Choosing $t=O(n)$, we get that the probability that at least one formula $F'_{i}$ is uniquely satisfiable is at least $1/2$ if $F$ is satisfiable. This gives a Turing reduction from SAT to Unambiguous-SAT since an assumed algorithm for Unambiguous-SAT can be invoked on the $F'_{i}$. Then the self-reducibility of SAT can be used to compute a satisfying assignment, should it exist. Overall, this proves that NP = RP if Unambiguous-SAT can be solved in RP. The idea of the reduction is to intersect the solution space of the formula $F$ with $k$ random affine hyperplanes over ${\text{GF}}(2)^{n}$, where $k\in \{1,\dots ,n\}$ is chosen uniformly at random. An alternative proof is based on the isolation lemma by Mulmuley, Vazirani, and Vazirani. They consider a more general setting, and applied to the setting here this gives an isolation probability of only $\Omega (1/n^{8})$. References 1. Valiant, L.; Vazirani, V. (1986). "NP is as easy as detecting unique solutions" (PDF). Theoretical Computer Science. 47: 85–93. doi:10.1016/0304-3975(86)90135-0.
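The isolation step behind the reduction can be illustrated numerically. The sketch below is only an illustration added here (not from the source): it represents a toy solution set $S\subseteq \{0,1\}^{n}$ as integers, draws $k$ uniformly from $\{1,\dots ,n\}$ together with $k$ random affine constraints $\langle a,x\rangle =b$ over ${\text{GF}}(2)$, and estimates how often exactly one element of $S$ survives; the theorem guarantees this probability is $\Omega (1/n)$ whenever $S$ is nonempty. The set $S$ and the function name are arbitrary choices for the demonstration.

import itertools, random

def estimate_isolation_probability(S, n, trials=20000):
    hits = 0
    for _ in range(trials):
        k = random.randint(1, n)   # number of random affine hyperplanes
        constraints = [(random.getrandbits(n), random.getrandbits(1)) for _ in range(k)]
        # an assignment s survives if <a, s> = b over GF(2) for every constraint (a, b)
        survivors = [s for s in S
                     if all(bin(a & s).count('1') % 2 == b for a, b in constraints)]
        hits += (len(survivors) == 1)
    return hits / trials

n = 8
# toy solution set: all n-bit strings of even Hamming weight (any nonempty set works)
S = [sum(bit << i for i, bit in enumerate(bits))
     for bits in itertools.product((0, 1), repeat=n) if sum(bits) % 2 == 0]
print(estimate_isolation_probability(S, n))   # noticeably bounded away from 0

In the actual reduction, $S$ is the (unknown) set of satisfying assignments of the input formula $F$, and each parity constraint is expressed by a Boolean condition conjoined to $F$ to form the formulas $F'_{i}$.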
Validated numerics Validated numerics, also called rigorous computation, verified computation, reliable computation, or numerical verification (German: Zuverlässiges Rechnen), is numerical computation that includes a mathematically rigorous evaluation of errors (rounding error, truncation error, discretization error); it is a field of numerical analysis. Computations are carried out with interval arithmetic, and all results are represented by intervals. Validated numerics were used by Warwick Tucker to solve the 14th of Smale's problems,[1] and today the approach is recognized as a powerful tool for the study of dynamical systems.[2] See also: Numerical analysis and Interval arithmetic Importance Computation without verification may lead to unfortunate results. Below are some examples. Rump's example In the 1980s, Rump constructed an example.[3][4] He evaluated a complicated expression numerically; the single-precision, double-precision, and extended-precision results all appeared to agree, yet even their sign differed from the true value (a short numerical sketch of this example is given at the end of this article). Phantom solution Breuer–Plum–McKenna used a spectral method to solve the boundary value problem of the Emden equation and reported that an asymmetric solution was obtained.[5] This conflicted with the theoretical study by Gidas–Ni–Nirenberg, which shows that no asymmetric solution exists.[6] The solution obtained by Breuer–Plum–McKenna was a phantom solution caused by discretization error. Such cases are rare, but they show that numerical solutions of differential equations must be verified before rigorous conclusions are drawn from them. Accidents caused by numerical errors The following examples are known as accidents caused by numerical errors: • Failure of intercepting missiles in the Gulf War (1991)[7] • Failure of the Ariane 5 rocket (1996)[8] • Mistakes in election result totalization[9] Main topics The study of validated numerics is divided into the following fields: • Verification in numerical linear algebra • Validating numerical solutions of a given system of linear equations[10][11] • Validating numerically obtained eigenvalues[12][13][14] • Rigorously computing determinants[15] • Validating numerical solutions of matrix equations[16][17][18][19][20][21][22] • Verification of special functions: • Gamma function[23][24] • Elliptic functions[25] • Hypergeometric functions[26] • Hurwitz zeta function[27] • Bessel function • Matrix function[28][29][30] • Verification of numerical quadrature[31][32][33] • Verification of nonlinear equations (The Kantorovich theorem,[34] Krawczyk method, interval Newton method, and the Durand–Kerner–Aberth method are studied.) • Verification for solutions of ODEs, PDEs[35] (For PDEs, knowledge of functional analysis is used.[34]) • Verification of linear programming[36] • Verification of computational geometry • Verification at high-performance computing environments See also: numerical methods for ordinary differential equations, numerical linear algebra, numerical quadrature, and computational geometry Tools • INTLAB: a library written in MATLAB/GNU Octave • kv: a library written in C++. This library can produce multiple-precision output by using GNU MPFR. • kv on GitHub • Arb: a library written in C. It is capable of rigorously computing various special functions. • arb on GitHub • CAPD: a collection of flexible C++ modules mainly designed for computation of homology of sets and maps and for validated numerics for dynamical systems. 
• JuliaIntervals on GitHub (a library written in Julia) • Boost Safe Numerics - C++ header-only library of validated replacements for all builtin integer types. • Safe numerics on GitHub See also • Computer-assisted proof • Interval arithmetic • Affine arithmetic • INTLAB (Interval Laboratory) • Automatic differentiation • wikibooks:Numerical calculations and rigorous mathematics • Kantorovich theorem • Gershgorin circle theorem • Ulrich W. Kulisch References 1. Tucker, Warwick. (1999). "The Lorenz attractor exists." Comptes Rendus de l'Académie des Sciences-Series I-Mathematics, 328(12), 1197–1202. 2. Zin Arai, Hiroshi Kokubu, Paweł Pilarczyk. Recent Development In Rigorous Computational Methods In Dynamical Systems. 3. Rump, Siegfried M. (1988). "Algorithms for verified inclusions: Theory and practice." In Reliability in computing (pp. 109–126). Academic Press. 4. Loh, Eugene; Walster, G. William (2002). Rump's example revisited. Reliable Computing, 8(3), 245-248. 5. Breuer, B.; Plum, Michael; McKenna, Patrick J. (2001). "Inclusions and existence proofs for solutions of a nonlinear boundary value problem by spectral numerical methods." In Topics in Numerical Analysis (pp. 61–77). Springer, Vienna. 6. Gidas, B.; Ni, Wei-Ming; Nirenberg, Louis (1979). "Symmetry and related properties via the maximum principle." Communications in Mathematical Physics, 68(3), 209–243. 7. "The Patriot Missile Failure". 8. ARIANE 5 Flight 501 Failure, http://sunnyday.mit.edu/nasa-class/Ariane5-report.html 9. Rounding error changes Parliament makeup 10. Yamamoto, T. (1984). Error bounds for approximate solutions of systems of equations. Japan Journal of Applied Mathematics, 1(1), 157. 11. Oishi, S., & Rump, S. M. (2002). Fast verification of solutions of matrix equations. Numerische Mathematik, 90(4), 755-773. 12. Yamamoto, T. (1980). Error bounds for computed eigenvalues and eigenvectors. Numerische Mathematik, 34(2), 189-199. 13. Yamamoto, T. (1982). Error bounds for computed eigenvalues and eigenvectors. II. Numerische Mathematik, 40(2), 201-206. 14. Mayer, G. (1994). Result verification for eigenvectors and eigenvalues. Topics in Validated Computations, Elsevier, Amsterdam, 209-276. 15. Ogita, T. (2008). Verified Numerical Computation of Matrix Determinant. SCAN'2008 El Paso, Texas September 29–October 3, 2008, 86. 16. Shinya Miyajima, Verified computation for the Hermitian positive definite solution of the conjugate discrete-time algebraic Riccati equation, Journal of Computational and Applied Mathematics, Volume 350, Pages 80-86, April 2019. 17. Shinya Miyajima, Fast verified computation for the minimal nonnegative solution of the nonsymmetric algebraic Riccati equation, Computational and Applied Mathematics, Volume 37, Issue 4, Pages 4599-4610, September 2018. 18. Shinya Miyajima, Fast verified computation for the solution of the T-congruence Sylvester equation, Japan Journal of Industrial and Applied Mathematics, Volume 35, Issue 2, Pages 541-551, July 2018. 19. Shinya Miyajima, Fast verified computation for the solvent of the quadratic matrix equation, The Electronic Journal of Linear Algebra, Volume 34, Pages 137-151, March 2018. 20. Shinya Miyajima, Fast verified computation for solutions of algebraic Riccati equations arising in transport theory, Numerical Linear Algebra with Applications, Volume 24, Issue 5, Pages 1-12, October 2017. 21. 
Shinya Miyajima, Fast verified computation for stabilizing solutions of discrete-time algebraic Riccati equations, Journal of Computational and Applied Mathematics, Volume 319, Pages 352-364, August 2017. 22. Shinya Miyajima, Fast verified computation for solutions of continuous-time algebraic Riccati equations, Japan Journal of Industrial and Applied Mathematics, Volume 32, Issue 2, Pages 529-544, July 2015. 23. Rump, Siegfried M. (2014). Verified sharp bounds for the real gamma function over the entire floating-point range. Nonlinear Theory and Its Applications, IEICE, 5(3), 339-348. 24. Yamanaka, Naoya; Okayama, Tomoaki; Oishi, Shin’ichi (2015, November). Verified Error Bounds for the Real Gamma Function Using Double Exponential Formula over Semi-infinite Interval. In International Conference on Mathematical Aspects of Computer and Information Sciences (pp. 224-228). Springer. 25. Johansson, Fredrik (2019). Numerical Evaluation of Elliptic Functions, Elliptic Integrals and Modular Forms. In Elliptic Integrals, Elliptic Functions and Modular Forms in Quantum Field Theory (pp. 269-293). Springer, Cham. 26. Johansson, Fredrik (2019). Computing Hypergeometric Functions Rigorously. ACM Transactions on Mathematical Software (TOMS), 45(3), 30. 27. Johansson, Fredrik (2015). Rigorous high-precision computation of the Hurwitz zeta function and its derivatives. Numerical Algorithms, 69(2), 253-270. 28. Miyajima, S. (2018). Fast verified computation for the matrix principal pth root. en:Journal of Computational and Applied Mathematics, 330, 276-288. 29. Miyajima, S. (2019). Verified computation for the matrix principal logarithm. Linear Algebra and its Applications, 569, 38-61. 30. Miyajima, S. (2019). Verified computation of the matrix exponential. Advances in Computational Mathematics, 45(1), 137-152. 31. Johansson, Fredrik (2017). Arb: efficient arbitrary-precision midpoint-radius interval arithmetic. IEEE Transactions on Computers, 66(8), 1281-1292. 32. Johansson, Fredrik (2018, July). Numerical integration in arbitrary-precision ball arithmetic. In International Congress on Mathematical Software (pp. 255-263). Springer, Cham. 33. Johansson, Fredrik; Mezzarobba, Marc (2018). Fast and Rigorous Arbitrary-Precision Computation of Gauss--Legendre Quadrature Nodes and Weights. SIAM Journal on Scientific Computing, 40(6), C726-C747. 34. Eberhard Zeidler, Nonlinear Functional Analysis and Its Applications I-V. Springer Science & Business Media. 35. Mitsuhiro T. Nakao, Michael Plum, Yoshitaka Watanabe (2019) Numerical Verification Methods and Computer-Assisted Proofs for Partial Differential Equations (Springer Series in Computational Mathematics). 36. Oishi, Shin’ichi; Tanabe, Kunio (2009). Numerical Inclusion of Optimum Point for Linear Programming. JSIAM Letters, 1, 5-8. Further reading • Tucker, Warwick (2011). Validated Numerics: A Short Introduction to Rigorous Computations. Princeton University Press. • Moore, Ramon Edgar, Kearfott, R. Baker., Cloud, Michael J. (2009). Introduction to Interval Analysis. Society for Industrial and Applied Mathematics. • Rump, Siegfried M. (2010). Verification methods: Rigorous results using floating-point arithmetic. Acta Numerica, 19, 287–449. External links • Validated Numerics for Pedestrians • Reliable Computing, An open electronic journal devoted to numerical computations with guaranteed accuracy, bounding of ranges, mathematical proofs based on floating-point arithmetic, and other theory and applications of interval arithmetic and directed rounding. 
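The Rump example discussed under Importance above can be reproduced with a few lines of Python. The sketch below is an illustration, not code from the cited papers; it assumes the commonly quoted form of Rump's expression, $f(a,b)=333.75b^{6}+a^{2}(11a^{2}b^{2}-b^{6}-121b^{4}-2)+5.5b^{8}+a/(2b)$ with $a=77617$ and $b=33096$, and compares naive double-precision evaluation with exact rational arithmetic (a crude stand-in for a verified interval computation).

```python
from fractions import Fraction

def rump(a, b):
    """Rump's expression, evaluated in whatever arithmetic the arguments use."""
    return (333.75 * b**6 + a**2 * (11 * a**2 * b**2 - b**6 - 121 * b**4 - 2)
            + 5.5 * b**8 + a / (2 * b))

# Naive IEEE double precision: massive cancellation of terms of order 10^36
# leaves a meaningless result, nowhere near the true value.
print("float64:", rump(77617.0, 33096.0))

def rump_exact(a, b):
    # The same expression with the decimal constants written as exact fractions.
    a, b = Fraction(a), Fraction(b)
    return (Fraction(1335, 4) * b**6 + a**2 * (11 * a**2 * b**2 - b**6 - 121 * b**4 - 2)
            + Fraction(11, 2) * b**8 + a / (2 * b))

print("exact  :", float(rump_exact(77617, 33096)))   # approximately -0.827396...
```

A genuine validated computation would instead enclose the result in an interval whose endpoints are guaranteed bounds, as the tools listed above (INTLAB, kv, Arb, and others) do.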
Valuation (geometry) In geometry, a valuation is a finitely additive function from a collection of subsets of a set $X$ to an abelian semigroup. For example, Lebesgue measure is a valuation on finite unions of convex bodies of $\mathbb {R} ^{n}.$ Other examples of valuations on finite unions of convex bodies of $\mathbb {R} ^{n}$ are surface area, mean width, and Euler characteristic. In geometry, continuity (or smoothness) conditions are often imposed on valuations, but there are also purely discrete facets of the theory. In fact, the concept of valuation has its origin in the dissection theory of polytopes and in particular Hilbert's third problem, which has grown into a rich theory reliant on tools from abstract algebra. Definition Let $X$ be a set, and let ${\mathcal {S}}$ be a collection of subsets of $X.$ A function $\phi $ on ${\mathcal {S}}$ with values in an abelian semigroup $R$ is called a valuation if it satisfies $\phi (A\cup B)+\phi (A\cap B)=\phi (A)+\phi (B)$ whenever $A,$ $B,$ $A\cup B,$ and $A\cap B$ are elements of ${\mathcal {S}}.$ If $\emptyset \in {\mathcal {S}},$ then one always assumes $\phi (\emptyset )=0.$ Examples Some common examples of ${\mathcal {S}}$ are • the convex bodies in $\mathbb {R} ^{n}$ • compact convex polytopes in $\mathbb {R} ^{n}$ • convex cones • smooth compact polyhedra in a smooth manifold $X$ Let ${\mathcal {K}}(\mathbb {R} ^{n})$ be the set of convex bodies in $\mathbb {R} ^{n}.$ Then some valuations on ${\mathcal {K}}(\mathbb {R} ^{n})$ are • the Euler characteristic $\chi :K(\mathbb {R} ^{n})\to \mathbb {Z} $ • Lebesgue measure restricted to ${\mathcal {K}}(\mathbb {R} ^{n})$ • intrinsic volume (and, more generally, mixed volume) • the map $A\mapsto h_{A},$ where $h_{A}$ is the support function of $A$ Some other valuations are • the lattice point enumerator $P\mapsto |\mathbb {Z} ^{n}\cap P|$, where $P$ is a lattice polytope • cardinality, on the family of finite sets Valuations on convex bodies From here on, let $V=\mathbb {R} ^{n}$, let ${\mathcal {K}}(V)$ be the set of convex bodies in $V$, and let $\phi $ be a valuation on ${\mathcal {K}}(V)$. We say $\phi $ is translation invariant if, for all $K\in {\mathcal {K}}(V)$ and $x\in V$, we have $\phi (K+x)=\phi (K)$. Let $(K,L)\in {\mathcal {K}}(V)^{2}$. The Hausdorff distance $d_{H}(K,L)$ is defined as $d_{H}(K,L)=\inf\{\varepsilon >0:K\subset L_{\varepsilon }{\text{ and }}L\subset K_{\varepsilon }\},$ where $K_{\varepsilon }$ is the $\varepsilon $-neighborhood of $K$ under some Euclidean inner product. Equipped with this metric, ${\mathcal {K}}(V)$ is a locally compact space. The space of continuous, translation-invariant valuations from ${\mathcal {K}}(V)$ to $\mathbb {C} $ is denoted by $\operatorname {Val} (V).$ The topology on $\operatorname {Val} (V)$ is the topology of uniform convergence on compact subsets of ${\mathcal {K}}(V).$ Equipped with the norm $\|\phi \|=\max\{|\phi (K)|:K\subset B\},$ where $B\subset V$ is a bounded subset with nonempty interior, $\operatorname {Val} (V)$ is a Banach space. 
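As a concrete check of the defining identity $\phi (A\cup B)+\phi (A\cap B)=\phi (A)+\phi (B)$ from the beginning of this article (a worked example added here for illustration), take the lattice point enumerator on lattice polytopes in $\mathbb {R} ^{1}$ and the intervals $P=[0,2]$ and $Q=[1,3]$. Then $P\cup Q=[0,3]$ and $P\cap Q=[1,2]$, so $\phi (P\cup Q)+\phi (P\cap Q)=4+2=3+3=\phi (P)+\phi (Q),$ as the definition requires; the same bookkeeping with lengths in place of lattice point counts verifies the identity for Lebesgue measure.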
Homogeneous valuations A translation-invariant continuous valuation $\phi \in \operatorname {Val} (V)$ is said to be $i$-homogeneous if $\phi (\lambda K)=\lambda ^{i}\phi (K)$ for all $\lambda >0$ and $K\in {\mathcal {K}}(V).$ The subset $\operatorname {Val} _{i}(V)$ of $i$-homogeneous valuations is a vector subspace of $\operatorname {Val} (V).$ McMullen's decomposition theorem[1] states that $\operatorname {Val} (V)=\bigoplus _{i=0}^{n}\operatorname {Val} _{i}(V),\qquad n=\dim V.$ In particular, the degree of a homogeneous valuation is always an integer between $0$ and $n=\operatorname {dim} V.$ Valuations are not only graded by the degree of homogeneity, but also by the parity with respect to the reflection through the origin, namely $\operatorname {Val} _{i}=\operatorname {Val} _{i}^{+}\oplus \operatorname {Val} _{i}^{-},$ where $\phi \in \operatorname {Val} _{i}^{\epsilon }$ with $\epsilon \in \{+,-\}$ if and only if $\phi (-K)=\epsilon \phi (K)$ for all convex bodies $K.$ The elements of $\operatorname {Val} _{i}^{+}$ and $\operatorname {Val} _{i}^{-}$ are said to be even and odd, respectively. It is a simple fact that $\operatorname {Val} _{0}(V)$ is $1$-dimensional and spanned by the Euler characteristic $\chi ,$ that is, consists of the constant valuations on ${\mathcal {K}}(V).$ In 1957 Hadwiger[2] proved that $\operatorname {Val} _{n}(V)$ (where $n=\dim V$) coincides with the $1$-dimensional space of Lebesgue measures on $V.$ A valuation $\phi \in \operatorname {Val} (\mathbb {R} ^{n})$ is simple if $\phi (K)=0$ for all convex bodies with $\dim K<n.$ Schneider[3] in 1996 described all simple valuations on $\mathbb {R} ^{n}$: they are given by $\phi (K)=c\operatorname {vol} (K)+\int _{S^{n-1}}f(\theta )d\sigma _{K}(\theta ),$ where $c\in \mathbb {C} ,$ $f\in C(S^{n-1})$ is an arbitrary odd function on the unit sphere $S^{n-1}\subset \mathbb {R} ^{n},$ and $\sigma _{K}$ is the surface area measure of $K.$ In particular, any simple valuation is the sum of an $n$- and an $(n-1)$-homogeneous valuation. This in turn implies that an $i$-homogeneous valuation is uniquely determined by its restrictions to all $(i+1)$-dimensional subspaces. Embedding theorems The Klain embedding is a linear injection of $\operatorname {Val} _{i}^{+}(V),$ the space of even $i$-homogeneous valuations, into the space of continuous sections of a canonical complex line bundle over the Grassmannian $\operatorname {Gr} _{i}(V)$ of $i$-dimensional linear subspaces of $V.$ Its construction is based on Hadwiger's characterization[2] of $n$-homogeneous valuations. If $\phi \in \operatorname {Val} _{i}(V)$ and $E\in \operatorname {Gr} _{i}(V),$ then the restriction $\phi |_{E}$ is an element of $\operatorname {Val} _{i}(E),$ and by Hadwiger's theorem it is a Lebesgue measure. Hence $\operatorname {Kl} _{\phi }(E)=\phi |_{E}$ defines a continuous section of the line bundle $\operatorname {Dens} $ over $\operatorname {Gr} _{i}(V)$ with fiber over $E$ equal to the $1$-dimensional space $\operatorname {Dens} (E)$ of densities (Lebesgue measures) on $E.$ Theorem (Klain[4]). The linear map $\operatorname {Kl} :\operatorname {Val} _{i}^{+}(V)\to C(\operatorname {Gr} _{i}(V),\operatorname {Dens} )$ is injective. A different injection, known as the Schneider embedding, exists for odd valuations. 
It is based on Schneider's description of simple valuations.[3] It is a linear injection of $\operatorname {Val} _{i}^{-}(V),$ the space of odd $i$-homogeneous valuations, into a certain quotient of the space of continuous sections of a line bundle over the partial flag manifold of cooriented pairs $(F^{i}\subset E^{i+1}).$ Its definition is reminiscent of the Klain embedding, but more involved. Details can be found in.[5] The Goodey-Weil embedding is a linear injection of $\operatorname {Val} _{i}$ into the space of distributions on the $i$-fold product of the $(n-1)$-dimensional sphere. It is nothing but the Schwartz kernel of a natural polarization that any $\phi \in \operatorname {Val} _{k}(V)$ admits, namely as a functional on the $k$-fold product of $C^{2}(S^{n-1}),$ the latter space of functions having the geometric meaning of differences of support functions of smooth convex bodies. For details, see.[5] Irreducibility Theorem The classical theorems of Hadwiger, Schneider and McMullen give fairly explicit descriptions of valuations that are homogeneous of degree $1,$ $n-1,$ and $n=\operatorname {dim} V.$ But for degrees $1<i<n-1$ very little was known before the turn of the 21st century. McMullen's conjecture is the statement that the valuations $\phi _{A}(K)=\operatorname {vol} _{n}(K+A),\qquad A\in {\mathcal {K}}(V),$ span a dense subspace of $\operatorname {Val} (V).$ McMullen's conjecture was confirmed by Alesker in a much stronger form, which became known as the Irreducibility Theorem: Theorem (Alesker[6]). For every $0\leq i\leq n,$ the natural action of $GL(V)$ on the spaces $\operatorname {Val} _{i}^{+}(V)$ and $\operatorname {Val} _{i}^{-}(V)$ is irreducible. Here the action of the general linear group $GL(V)$ on $\operatorname {Val} (V)$ is given by $(g\cdot \phi )(K)=\phi (g^{-1}K).$ The proof of the Irreducibility Theorem is based on the embedding theorems of the previous section and Beilinson-Bernstein localization. Smooth valuations A valuation $\phi \in \operatorname {Val} (V)$ is called smooth if the map $g\mapsto g\cdot \phi $ from $GL(V)$ to $\operatorname {Val} (V)$ is smooth. In other words, $\phi $ is smooth if and only if $\phi $ is a smooth vector of the natural representation of $GL(V)$ on $\operatorname {Val} (V).$ The space of smooth valuations $\operatorname {Val} ^{\infty }(V)$ is dense in $\operatorname {Val} (V)$; it comes equipped with a natural Fréchet-space topology, which is finer than the one induced from $\operatorname {Val} (V).$ For every (complex-valued) smooth function $f$ on $\operatorname {Gr} _{i}(\mathbb {R} ^{n}),$ $\phi (K)=\int _{\operatorname {Gr} _{i}(\mathbb {R} ^{n})}\operatorname {vol} _{i}(P_{E}K)f(E)dE,$ where $P_{E}:\mathbb {R} ^{n}\to E$ denotes the orthogonal projection and $dE$ is the Haar measure, defines a smooth even valuation of degree $i.$ It follows from the Irreducibility Theorem, in combination with the Casselman-Wallach theorem, that any smooth even valuation can be represented in this way. Such a representation is sometimes called a Crofton formula. 
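The simplest instance of a Crofton formula is Cauchy's classical projection formula in the plane: for a convex body $K\subset \mathbb {R} ^{2}$, the average length of its one-dimensional projections over all directions equals the perimeter divided by $\pi $, which defines a $1$-homogeneous even valuation. The following Python sketch is an illustration written for this article (the polygon, sample size, and function names are assumptions, and the integral over the Grassmannian is replaced by Monte Carlo sampling of directions).

```python
import math
import random

def projection_length(vertices, theta):
    """Length of the projection of a convex polygon onto the line with direction theta."""
    u = (math.cos(theta), math.sin(theta))
    dots = [x * u[0] + y * u[1] for x, y in vertices]
    return max(dots) - min(dots)

def perimeter(vertices):
    return sum(math.dist(vertices[i], vertices[(i + 1) % len(vertices)])
               for i in range(len(vertices)))

square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
samples = 100000
mean_proj = sum(projection_length(square, random.uniform(0.0, math.pi))
                for _ in range(samples)) / samples
print("mean projection length:", mean_proj)           # approx 4/pi = 1.2732...
print("perimeter / pi        :", perimeter(square) / math.pi)
```

Higher-degree Crofton formulas replace the length of a one-dimensional projection by $\operatorname {vol} _{i}(P_{E}K)$ and the average over directions by an integral over $\operatorname {Gr} _{i}(\mathbb {R} ^{n})$ with a weight function $f$, exactly as in the representation above.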
For any (complex-valued) smooth differential form $\omega \in \Omega ^{n-1}(\mathbb {R} ^{n}\times S^{n-1})$ that is invariant under all the translations $(x,u)\mapsto (x+t,u)$ and every number $c\in \mathbb {C} ,$ integration over the normal cycle defines a smooth valuation: $\phi (K)=c\operatorname {vol} _{n}(K)+\int _{N(K)}\omega ,\qquad K\in {\mathcal {K}}(\mathbb {R} ^{n}).$ (1) As a set, the normal cycle $N(K)$ consists of the outward unit normals to $K.$ The Irreducibility Theorem implies that every smooth valuation is of this form. Operations on translation-invariant valuations There are several natural operations defined on the subspace of smooth valuations $\operatorname {Val} ^{\infty }(V)\subset \operatorname {Val} (V).$ The most important one is the product of two smooth valuations. Together with pullback and pushforward, this operation extends to valuations on manifolds. Exterior product Let $V,W$ be finite-dimensional real vector spaces. There exists a bilinear map, called the exterior product, $\boxtimes :\operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(W)\to \operatorname {Val} (V\times W)$ which is uniquely characterized by the following two properties: • it is continuous with respect to the usual topologies on $\operatorname {Val} $ and $\operatorname {Val} ^{\infty }.$ • if $\phi =\operatorname {vol} _{V}(\bullet +A)$ and $\psi =\operatorname {vol} _{W}(\bullet +B)$ where $A\in {\mathcal {K}}(V)$ and $B\in {\mathcal {K}}(W)$ are convex bodies with smooth boundary and strictly positive Gauss curvature, and $\operatorname {vol} _{V}$ and $\operatorname {vol} _{W}$ are densities on $V$ and $W,$ then $\phi \boxtimes \psi =(\operatorname {vol} _{V}\boxtimes \operatorname {vol} _{W})(\bullet +A\times B).$ Product The product of two smooth valuations $\phi ,\psi \in \operatorname {Val} ^{\infty }(V)$ is defined by $(\phi \cdot \psi )(K)=(\phi \boxtimes \psi )(\Delta (K)),$ where $\Delta :V\to V\times V$ is the diagonal embedding. The product is a continuous map $\operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V).$ Equipped with this product, $\operatorname {Val} ^{\infty }(V)$ becomes a commutative associative graded algebra with the Euler characteristic as the multiplicative identity. Alesker-Poincaré duality By a theorem of Alesker, the restriction of the product $\operatorname {Val} _{k}^{\infty }(V)\times \operatorname {Val} _{n-k}^{\infty }(V)\to \operatorname {Val} _{n}^{\infty }(V)=\operatorname {Dens} (V)$ is a non-degenerate pairing. This motivates the definition of the $k$-homogeneous generalized valuation, denoted $\operatorname {Val} _{k}^{-\infty }(V),$ as $\operatorname {Val} _{n-k}^{\infty }(V)^{*}\otimes \operatorname {Dens} (V),$ topologized with the weak topology. By the Alesker-Poincaré duality, there is a natural dense inclusion $\operatorname {Val} _{k}^{\infty }(V)\hookrightarrow \operatorname {Val} _{k}^{-\infty }(V).$ Convolution Convolution is a natural product on $\operatorname {Val} ^{\infty }(V)\otimes \operatorname {Dens} (V^{*}).$ For simplicity, we fix a density $\operatorname {vol} $ on $V$ to trivialize the second factor. 
Define for fixed $A,B\in {\mathcal {K}}(V)$ with smooth boundary and strictly positive Gauss curvature $\operatorname {vol} (\bullet +A)\ast \operatorname {vol} (\bullet +B)=\operatorname {vol} (\bullet +A+B).$ There is then a unique extension by continuity to a map $\operatorname {Val} ^{\infty }(V)\times \operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V),$ called the convolution. Unlike the product, convolution respects the co-grading, namely if $\phi \in \operatorname {Val} _{n-i}^{\infty }(V),$ $\psi \in \operatorname {Val} _{n-j}^{\infty }(V),$ then $\phi \ast \psi \in \operatorname {Val} _{n-i-j}^{\infty }(V).$ For instance, let $V(K_{1},\ldots ,K_{n})$ denote the mixed volume of the convex bodies $K_{1},\ldots ,K_{n}\subset \mathbb {R} ^{n}.$ If convex bodies $A_{1},\dots ,A_{n-i}$ in $\mathbb {R} ^{n}$ with a smooth boundary and strictly positive Gauss curvature are fixed, then $\phi (K)=V(K[i],A_{1},\dots ,A_{n-i})$ defines a smooth valuation of degree $i.$ The convolution of two such valuations is $V(\bullet [i],A_{1},\dots ,A_{n-i})\ast V(\bullet [j],B_{1},\dots ,B_{n-j})=c_{i,j}V(\bullet [n-j-i],A_{1},\dots ,A_{n-i},B_{1},\dots ,B_{n-j}),$ where $c_{i,j}$ is a constant depending only on $i,j,n.$ Fourier transform The Alesker-Fourier transform is a natural, $GL(V)$-equivariant isomorphism of complex-valued valuations $\mathbb {F} :\operatorname {Val} ^{\infty }(V)\to \operatorname {Val} ^{\infty }(V^{*})\otimes \operatorname {Dens} (V),$ discovered by Alesker and enjoying many properties resembling the classical Fourier transform, which explains its name. It reverses the grading, namely $\mathbb {F} :\operatorname {Val} _{k}^{\infty }(V)\to \operatorname {Val} _{n-k}^{\infty }(V^{*})\otimes \operatorname {Dens} (V),$ and intertwines the product and the convolution: $\mathbb {F} (\phi \cdot \psi )=\mathbb {F} \phi \ast \mathbb {F} \psi .$ Fixing for simplicity a Euclidean structure to identify $V=V^{*},$ $\operatorname {Dens} (V)=\mathbb {C} ,$ we have the identity $\mathbb {F} ^{2}\phi (K)=\phi (-K).$ On even valuations, there is a simple description of the Fourier transform in terms of the Klain embedding: $\operatorname {Kl} _{\mathbb {F} \phi }(E)=\operatorname {Kl} _{\phi }(E^{\perp }).$ In particular, even real-valued valuations remain real-valued after the Fourier transform. For odd valuations, the description of the Fourier transform is substantially more involved. Unlike the even case, it is no longer of purely geometric nature. For instance, the space of real-valued odd valuations is not preserved. Pullback and pushforward Given a linear map $f:U\to V,$ there are induced operations of pullback $f^{*}:\operatorname {Val} (V)\to \operatorname {Val} (U)$ and pushforward $f_{*}:\operatorname {Val} (U)\otimes \operatorname {Dens} (U)^{*}\to \operatorname {Val} (V)\otimes \operatorname {Dens} (V)^{*}.$ The pullback is the simpler of the two, given by $f^{*}\phi (K)=\phi (f(K)).$ It evidently preserves the parity and degree of homogeneity of a valuation. Note that the pullback does not preserve smoothness when $f$ is not injective. The pushforward is harder to define formally. 
For simplicity, fix Lebesgue measures on $U$ and $V.$ The pushforward can be uniquely characterized by describing its action on valuations of the form $\operatorname {vol} (\bullet +A),$ for all $A\in {\mathcal {K}}(U),$ and then extended by continuity to all valuations using the Irreducibility Theorem. For a surjective map $f,$ $f_{*}\operatorname {vol} (\bullet +A)=\operatorname {vol} (\bullet +f(A)).$ For an inclusion $f:U\hookrightarrow V,$ choose a splitting $V=U\oplus W.$ Then $f_{*}\operatorname {vol} (\bullet +A)(K)=\int _{W}\operatorname {vol} (K\cap (U+w)+A)dw.$ Informally, the pushforward is dual to the pullback with respect to the Alesker-Poincaré pairing: for $\phi \in \operatorname {Val} (V)$ and $\psi \in \operatorname {Val} (U)\otimes \operatorname {Dens} (U)^{*},$ $\langle f^{*}\phi ,\psi \rangle =\langle \phi ,f_{*}\psi \rangle .$ However, this identity has to be carefully interpreted since the pairing is only well-defined for smooth valuations. For further details, see.[7] Valuations on manifolds In a series of papers beginning in 2006, Alesker laid down the foundations for a theory of valuations on manifolds that extends the theory of valuations on convex bodies. The key observation leading to this extension is that via integration over the normal cycle (1), a smooth translation-invariant valuation may be evaluated on sets much more general than convex ones. Also, (1) suggests defining smooth valuations in general by dropping the requirement that the form $\omega $ be translation-invariant and by replacing the translation-invariant Lebesgue measure with an arbitrary smooth measure. Let $X$ be an n-dimensional smooth manifold and let $\mathbb {P} _{X}=\mathbb {P} _{+}(T^{*}X)$ be the co-sphere bundle of $X,$ that is, the oriented projectivization of the cotangent bundle. Let ${\mathcal {P}}(X)$ denote the collection of compact differentiable polyhedra in $X.$ The normal cycle $N(A)\subset \mathbb {P} _{X}$ of $A\in {\mathcal {P}}(X),$ which consists of the outward co-normals to $A,$ is naturally a Lipschitz submanifold of dimension $n-1.$ For ease of presentation we henceforth assume that $X$ is oriented, even though the concept of smooth valuations in fact does not depend on orientability. The space of smooth valuations ${\mathcal {V}}^{\infty }(X)$ on $X$ consists of functions $\phi :{\mathcal {P}}(X)\to \mathbb {C} $ of the form $\phi (A)=\int _{A}\mu +\int _{N(A)}\omega ,\qquad A\in {\mathcal {P}}(X),$ where $\mu \in \Omega ^{n}(X)$ and $\omega \in \Omega ^{n-1}(\mathbb {P} _{X})$ can be arbitrary. It was shown by Alesker that the smooth valuations on open subsets of $X$ form a soft sheaf over $X.$ Examples The following are examples of smooth valuations on a smooth manifold $X$: • Smooth measures on $X.$ • The Euler characteristic; this follows from the work of Chern[8] on the Gauss-Bonnet theorem, where such $\mu $ and $\omega $ were constructed to represent the Euler characteristic. In particular, $\mu $ is then the Chern-Gauss-Bonnet integrand, which is the Pfaffian of the Riemannian curvature tensor. • If $X$ is Riemannian, then the Lipschitz-Killing valuations or intrinsic volumes $V_{0}^{X}=\chi ,V_{1}^{X},\ldots ,V_{n}^{X}=\mathrm {vol} _{X}$ are smooth valuations. 
If $f:X\to \mathbb {R} ^{m}$ is any isometric immersion into a Euclidean space, then $V_{i}^{X}=f^{*}V_{i}^{\mathbb {R} ^{m}},$ where $V_{i}^{\mathbb {R} ^{m}}$ denotes the usual intrinsic volumes on $\mathbb {R} ^{m}$ (see below for the definition of the pullback). The existence of these valuations is the essence of Weyl's tube formula.[9] • Let $\mathbb {C} P^{n}$ be the complex projective space, and let $\mathrm {Gr} _{k}^{\mathbb {C} }$ denote the Grassmannian of all complex projective subspaces of fixed dimension $k.$ The function $\phi (A)=\int _{\mathrm {Gr} _{k}^{\mathbb {C} }}\chi (A\cap E)dE,\qquad A\in {\mathcal {P}}(\mathbb {C} P^{n}),$ where the integration is with respect to the Haar probability measure on $\mathrm {Gr} _{k}^{\mathbb {C} },$ is a smooth valuation. This follows from the work of Fu.[10] Filtration The space ${\mathcal {V}}^{\infty }(X)$ admits no natural grading in general; however, it carries a canonical filtration ${\mathcal {V}}^{\infty }(X)=W_{0}\supset W_{1}\supset \cdots \supset W_{n}.$ Here $W_{n}$ consists of the smooth measures on $X,$ and $W_{j}$ is given by forms $\omega $ in the ideal generated by $\pi ^{*}\Omega ^{j}(X),$ where $\pi :\mathbb {P} _{X}\to X$ is the canonical projection. The associated graded vector space $\bigoplus _{i=0}^{n}W_{i}/W_{i+1}$ is canonically isomorphic to the space of smooth sections $\bigoplus _{i=0}^{n}C^{\infty }(X,\operatorname {Val} _{i}^{\infty }(TX)),$ where $\operatorname {Val} _{i}^{\infty }(TX)$ denotes the vector bundle over $X$ such that the fiber over a point $x\in X$ is $\operatorname {Val} _{i}^{\infty }(T_{x}X),$ the space of $i$-homogeneous smooth translation-invariant valuations on the tangent space $T_{x}X.$ Product The space ${\mathcal {V}}^{\infty }(X)$ admits a natural product. This product is continuous, commutative, associative, compatible with the filtration: $W_{i}\cdot W_{j}\subset W_{i+j},$ and has the Euler characteristic as the identity element. It also commutes with the restriction to embedded submanifolds, and the diffeomorphism group of $X$ acts on ${\mathcal {V}}^{\infty }(X)$ by algebra automorphisms. For example, if $X$ is Riemannian, the Lipschitz-Killing valuations satisfy $V_{i}^{X}\cdot V_{j}^{X}=V_{i+j}^{X}.$ The Alesker-Poincaré duality still holds. For compact $X$ it says that the pairing ${\mathcal {V}}^{\infty }(X)\times {\mathcal {V}}^{\infty }(X)\to \mathbb {C} ,$ $(\phi ,\psi )\mapsto (\phi \cdot \psi )(X)$ is non-degenerate. As in the translation-invariant case, this duality can be used to define generalized valuations. Unlike the translation-invariant case, no good definition of continuous valuations exists for valuations on manifolds. The product of valuations closely reflects the geometric operation of intersection of subsets. Informally, consider the generalized valuation $\chi _{A}=\chi (A\cap \bullet ).$ The product is given by $\chi _{A}\cdot \chi _{B}=\chi _{A\cap B}.$ Now one can obtain smooth valuations by averaging generalized valuations of the form $\chi _{A},$ more precisely $\phi (X)=\int _{S}\chi _{s(A)}ds$ is a smooth valuation if $S$ is a sufficiently large measured family of diffeomorphisms. 
Then one has $\int _{S}\chi _{s(A)}ds\cdot \int _{S'}\chi _{s'(B)}ds'=\int _{S\times S'}\chi _{s(A)\cap s'(B)}dsds',$ see.[11] Pullback and pushforward Every smooth immersion $f:X\to Y$ of smooth manifolds induces a pullback map $f^{*}:{\mathcal {V}}^{\infty }(Y)\to {\mathcal {V}}^{\infty }(X).$ If $f$ is an embedding, then $(f^{*}\phi )(A)=\phi (f(A)),\qquad A\in {\mathcal {P}}(X).$ The pullback is a morphism of filtered algebras. Every smooth proper submersion $f:X\to Y$ defines a pushforward map $f_{*}:{\mathcal {V}}^{\infty }(X)\to {\mathcal {V}}^{\infty }(Y)$ by $(f_{*}\phi )(A)=\phi (f^{-1}(A)),\qquad A\in {\mathcal {P}}(Y).$ The pushforward is compatible with the filtration as well: $f_{*}:W_{i}(X)\to W_{i-(\dim X-\dim Y)}(Y).$ For general smooth maps, one can define pullback and pushforward for generalized valuations under some restrictions. Applications in Integral Geometry Let $M$ be a Riemannian manifold and let $G$ be a Lie group of isometries of $M$ acting transitively on the sphere bundle $SM.$ Under these assumptions the space ${\mathcal {V}}^{\infty }(M)^{G}$ of $G$-invariant smooth valuations on $M$ is finite-dimensional; let $\phi _{1},\ldots ,\phi _{m}$ be a basis. Let $A,B\in {\mathcal {P}}(M)$ be differentiable polyhedra in $M.$ Then integrals of the form $\int _{G}\phi _{i}(A\cap gB)dg$ are expressible as linear combinations of $\phi _{k}(A)\phi _{l}(B)$ with coefficients $c_{i}^{kl}$ independent of $A$ and $B$: $\int _{G}\phi _{i}(A\cap gB)dg=\sum _{k,l=1}^{m}c_{i}^{kl}\phi _{k}(A)\phi _{l}(B),\qquad A,B\in {\mathcal {P}}(M).$ (2) Formulas of this type are called kinematic formulas. Their existence in this generality was proved by Fu.[10] For the three simply connected real space forms, that is, the sphere, Euclidean space, and hyperbolic space, they go back to Blaschke, Santaló, Chern, and Federer. Describing the kinematic formulas explicitly is typically a difficult problem. In fact, already in the step from real to complex space forms, considerable difficulties arise, and these have only recently been resolved by Bernig, Fu, and Solanes.[12][13] The key insight responsible for this progress is that the kinematic formulas contain the same information as the algebra of invariant valuations ${\mathcal {V}}^{\infty }(M)^{G}.$ For a precise statement, let $k_{G}:{\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G}\otimes {\mathcal {V}}^{\infty }(M)^{G}$ be the kinematic operator, that is, the map determined by the kinematic formulas (2). Let $\operatorname {pd} :{\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G*}$ denote the Alesker-Poincaré duality, which is a linear isomorphism. Finally let $m_{G}^{*}$ be the adjoint of the product map $m_{G}:{\mathcal {V}}^{\infty }(M)^{G}\otimes {\mathcal {V}}^{\infty }(M)^{G}\to {\mathcal {V}}^{\infty }(M)^{G}.$ The fundamental theorem of algebraic integral geometry, relating operations on valuations to integral geometry, states that if the Poincaré duality is used to identify ${\mathcal {V}}^{\infty }(M)^{G}$ with ${\mathcal {V}}^{\infty }(M)^{G*},$ then $k_{G}=m_{G}^{*}.$ 
See also • Hadwiger's theorem • Integral geometry – theory of measures on a geometrical space invariant under the symmetry group of that space • Mixed volume • Modular set function – Mapping function • Set function – Function from sets to numbers • Valuation (measure theory) – map in measure or domain theory References 1. McMullen, Peter (1980), "Continuous translation-invariant valuations on the space of compact convex sets", Archiv der Mathematik, 34 (4): 377–384, doi:10.1007/BF01224974 2. Hadwiger, Hugo (1957), Vorlesungen über Inhalt, Oberfläche und Isoperimetrie, Die Grundlehren der Mathematischen Wissenschaften, vol. 93, Berlin-Göttingen-Heidelberg: Springer-Verlag, doi:10.1007/978-3-642-94702-5, ISBN 978-3-642-94703-2 3. Schneider, Rolf (1996), "Simple valuations on convex bodies", Mathematika, 43 (1): 32–39, doi:10.1112/S0025579300011578 4. Klain, Daniel A. (1995), "A short proof of Hadwiger's characterization theorem", Mathematika, 42 (2): 329–339, doi:10.1112/S0025579300014625 5. Alesker, Semyon (2018), Introduction to the theory of valuations, CBMS Regional Conference Series in Mathematics, vol. 126, Providence, RI: American Mathematical Society 6. Alesker, Semyon (2001), "Description of translation invariant valuations on convex sets with solution of P. McMullen's conjecture", Geometric and Functional Analysis, 11 (2): 244–272, doi:10.1007/PL00001675 7. Alesker, Semyon (2011), "A Fourier-type transform on translation-invariant valuations on convex sets", Israel Journal of Mathematics, 181: 189–294, doi:10.1007/s11856-011-0008-6 8. Chern, Shiing-Shen (1945), "On the curvatura integra in a Riemannian manifold", Annals of Mathematics, Second Series, 46 (4): 674–684, doi:10.2307/1969203, JSTOR 1969203 9. Weyl, Hermann (1939), "On the Volume of Tubes", American Journal of Mathematics, 61 (2): 461–472, doi:10.2307/2371513, JSTOR 2371513 10. Fu, Joseph H. G. (1990), "Kinematic formulas in integral geometry", Indiana University Mathematics Journal, 39 (4): 1115–1154, doi:10.1512/iumj.1990.39.39052 11. Fu, Joseph H. G. (2016), "Intersection theory and the Alesker product", Indiana University Mathematics Journal, 65 (4): 1347–1371, arXiv:1408.4106, doi:10.1512/iumj.2016.65.5846, S2CID 119736489 12. Bernig, Andreas; Fu, Joseph H. G.; Solanes, Gil (2014), "Integral geometry of complex space forms", Geometric and Functional Analysis, 24 (2): 403–49, arXiv:1204.0604, doi:10.1007/s00039-014-0251-12 13. Bernig, Andreas; Fu, Joseph H. G. (2011), "Hermitian integral geometry", Annals of Mathematics, Second Series, 173 (2): 907–945, doi:10.4007/annals.2011.173.2.7 Bibliography • S. Alesker (2018). Introduction to the theory of valuations. CBMS Regional Conference Series in Mathematics, 126. American Mathematical Society, Providence, RI. ISBN 978-1-4704-4359-7. • S. Alesker; J. H. G. Fu (2014). Integral geometry and valuations. Advanced Courses in Mathematics. CRM Barcelona. Birkhäuser/Springer, Basel. ISBN 978-1-4704-4359-7. • D. A. Klain; G.-C. Rota (1997). Introduction to geometric probability. Lezioni Lincee. [Lincei Lectures]. Cambridge University Press. ISBN 0-521-59362-X. • R. Schneider (2014). Convex bodies: the Brunn-Minkowski theory. Encyclopedia of Mathematics and its Applications, 151. Cambridge University Press, Cambridge, RI. ISBN 978-1-107-60101-7.
Valuation (measure theory) In measure theory, or at least in the approach to it via domain theory, a valuation is a map from the class of open sets of a topological space to the set of positive real numbers including infinity, with certain properties. It is a concept closely related to that of a measure, and as such, it finds applications in measure theory, probability theory, and theoretical computer science. Domain/Measure theory definition Let $\scriptstyle (X,{\mathcal {T}})$ be a topological space: a valuation is any set function $v:{\mathcal {T}}\to \mathbb {R} ^{+}\cup \{+\infty \}$ satisfying the following three properties ${\begin{array}{lll}v(\varnothing )=0&&\scriptstyle {\text{Strictness property}}\\v(U)\leq v(V)&{\mbox{if}}~U\subseteq V\quad U,V\in {\mathcal {T}}&\scriptstyle {\text{Monotonicity property}}\\v(U\cup V)+v(U\cap V)=v(U)+v(V)&\forall U,V\in {\mathcal {T}}&\scriptstyle {\text{Modularity property}}\,\end{array}}$ The definition immediately shows the relationship between a valuation and a measure: the properties of the two mathematical objects are often very similar if not identical, the only difference being that the domain of a measure is the Borel algebra of the given topological space, while the domain of a valuation is the class of open sets. Further details and references can be found in Alvarez-Manilla, Edalat & Saheb-Djahromi 2000 and Goubault-Larrecq 2005. Continuous valuation A valuation (as defined in domain theory/measure theory) is said to be continuous if for every directed family $\scriptstyle \{U_{i}\}_{i\in I}$ of open sets (i.e. an indexed family of open sets which is also directed in the sense that for each pair of indexes $i$ and $j$ belonging to the index set $I$, there exists an index $k$ such that $\scriptstyle U_{i}\subseteq U_{k}$ and $\scriptstyle U_{j}\subseteq U_{k}$) the following equality holds: $v\left(\bigcup _{i\in I}U_{i}\right)=\sup _{i\in I}v(U_{i}).$ This property is analogous to the τ-additivity of measures. Simple valuation A valuation (as defined in domain theory/measure theory) is said to be simple if it is a finite linear combination with non-negative coefficients of Dirac valuations, that is, $v(U)=\sum _{i=1}^{n}a_{i}\delta _{x_{i}}(U)\quad \forall U\in {\mathcal {T}}$ where $a_{i}\geq 0$ for every index $i$. Simple valuations are obviously continuous in the above sense. The supremum of a directed family of simple valuations (i.e. an indexed family of simple valuations which is also directed in the sense that for each pair of indexes $i$ and $j$ belonging to the index set $I$, there exists an index $k$ such that $\scriptstyle v_{i}(U)\leq v_{k}(U)\!$ and $\scriptstyle v_{j}(U)\leq v_{k}(U)\!$) is called a quasi-simple valuation: ${\bar {v}}(U)=\sup _{i\in I}v_{i}(U)\quad \forall U\in {\mathcal {T}}.\,$ See also • The extension problem for a given valuation (in the sense of domain theory/measure theory) consists in finding under what type of conditions it can be extended to a measure on a proper topological space, which may or may not be the same space where it is defined: the papers Alvarez-Manilla, Edalat & Saheb-Djahromi 2000 and Goubault-Larrecq 2005 in the reference section are devoted to this aim and also give several historical details. • The concepts of valuation on convex sets and valuation on manifolds are a generalization of valuation in the sense of domain/measure theory. 
A valuation on convex sets is allowed to assume complex values, and the underlying topological space is the set of non-empty convex compact subsets of a finite-dimensional vector space; a valuation on manifolds is a complex-valued finitely additive measure defined on a proper subset of the class of all compact submanifolds of the given manifolds.[lower-alpha 1] Examples Dirac valuation Let $\scriptstyle (X,{\mathcal {T}})$ be a topological space, and let $x$ be a point of $X$: the map $\delta _{x}(U)={\begin{cases}0&{\mbox{if}}~x\notin U\\1&{\mbox{if}}~x\in U\end{cases}}\quad {\text{ for all }}U\in {\mathcal {T}}$ is a valuation, in the domain theory/measure theory sense, called the Dirac valuation. This concept originates in distribution theory, as it is an obvious transposition of the Dirac distribution to valuation theory: as seen above, Dirac valuations are the "bricks" simple valuations are made of. See also • Valuation (geometry) – in geometry Notes 1. Details can be found in several arXiv papers of Prof. Semyon Alesker. Works cited • Alvarez-Manilla, Maurizio; Edalat, Abbas; Saheb-Djahromi, Nasser (2000), "An extension result for continuous valuations", Journal of the London Mathematical Society, 61 (2): 629–640, CiteSeerX 10.1.1.23.9676, doi:10.1112/S0024610700008681. • Goubault-Larrecq, Jean (2005), "Extensions of valuations", Mathematical Structures in Computer Science, 15 (2): 271–297, doi:10.1017/S096012950400461X External links • Alesker, Semyon, "various preprints on valuations", arXiv preprint server, primary site at Cornell University. Several papers dealing with valuations on convex sets, valuations on manifolds and related topics. • The nLab page on valuations
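The definitions above are easy to test mechanically on a finite topological space. The following Python sketch is an illustration added here (the four-point space, the weights, and the helper names are assumptions, not drawn from the cited papers): it builds a simple valuation as a non-negative combination of Dirac valuations and checks strictness, monotonicity, and modularity on every pair of open sets.

```python
# A small topology on X = {0, 1, 2, 3}; open sets are represented as frozensets.
opens = [frozenset(), frozenset({0}), frozenset({0, 1}), frozenset({2, 3}),
         frozenset({0, 2, 3}), frozenset({0, 1, 2, 3})]

def dirac(x):
    """Dirac valuation concentrated at the point x."""
    return lambda U: 1.0 if x in U else 0.0

def simple_valuation(weights):
    """Finite combination with non-negative coefficients: v(U) = sum_i a_i * delta_{x_i}(U)."""
    return lambda U: sum(a * dirac(x)(U) for x, a in weights.items())

v = simple_valuation({0: 0.5, 2: 1.5, 3: 2.0})

assert v(frozenset()) == 0.0                              # strictness
for U in opens:
    for V in opens:
        if U <= V:
            assert v(U) <= v(V)                           # monotonicity
        # this particular family of opens is closed under union and intersection
        assert v(U | V) + v(U & V) == v(U) + v(V)         # modularity
print("strictness, monotonicity and modularity hold for this simple valuation")
```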
Valuation ring In abstract algebra, a valuation ring is an integral domain D such that for every element x of its field of fractions F, at least one of x or x−1 belongs to D. Given a field F, if D is a subring of F such that either x or x−1 belongs to D for every nonzero x in F, then D is said to be a valuation ring for the field F or a place of F. Since F in this case is indeed the field of fractions of D, a valuation ring for a field is a valuation ring. Another way to characterize the valuation rings of a field F is that valuation rings D of F have F as their field of fractions, and their ideals are totally ordered by inclusion; or equivalently their principal ideals are totally ordered by inclusion. In particular, every valuation ring is a local ring. The valuation rings of a field are the maximal elements of the set of the local subrings in the field partially ordered by dominance or refinement,[1] where $(A,{\mathfrak {m}}_{A})$ dominates $(B,{\mathfrak {m}}_{B})$ if $A\supseteq B$ and ${\mathfrak {m}}_{A}\cap B={\mathfrak {m}}_{B}$.[2] Every local ring in a field K is dominated by some valuation ring of K. An integral domain whose localization at any prime ideal is a valuation ring is called a Prüfer domain. Definitions There are several equivalent definitions of valuation ring (see below for the characterization in terms of dominance). For an integral domain D and its field of fractions K, the following are equivalent: 1. For every nonzero x in K, either x is in D or x−1 is in D. 2. The ideals of D are totally ordered by inclusion. 3. The principal ideals of D are totally ordered by inclusion (i.e. the elements in D are, up to units, totally ordered by divisibility.) 4. There is a totally ordered abelian group Γ (called the value group) and a valuation ν: K → Γ ∪ {∞} with D = { x ∈ K | ν(x) ≥ 0 }. The equivalence of the first three definitions follows easily. A theorem of (Krull 1939) states that any ring satisfying the first three conditions satisfies the fourth: take Γ to be the quotient K×/D× of the unit group of K by the unit group of D, and take ν to be the natural projection. We can turn Γ into a totally ordered group by declaring the residue classes of elements of D as "positive".[lower-alpha 1] Even further, given any totally ordered abelian group Γ, there is a valuation ring D with value group Γ (see Hahn series). From the fact that the ideals of a valuation ring are totally ordered, one can conclude that a valuation ring is a local domain, and that every finitely generated ideal of a valuation ring is principal (i.e., a valuation ring is a Bézout domain). In fact, it is a theorem of Krull that an integral domain is a valuation ring if and only if it is a local Bézout domain.[3] It also follows from this that a valuation ring is Noetherian if and only if it is a principal ideal domain. In this case, it is either a field or it has exactly one non-zero prime ideal; in the latter case it is called a discrete valuation ring. (By convention, a field is not a discrete valuation ring.) A value group is called discrete if it is isomorphic to the additive group of the integers, and a valuation ring has a discrete valuation group if and only if it is a discrete valuation ring.[4] Very rarely, valuation ring may refer to a ring that satisfies the second or third condition but is not necessarily a domain. A more common term for this type of ring is uniserial ring. Examples • Any field $\mathbb {F} $ is a valuation ring. 
For example, the ring of rational functions $\mathbb {F} (X)$ on an algebraic variety $X$.[5][6] • A simple non-example is the integral domain $\mathbb {C} [X]$ since the inverse of a generic $f/g\in \mathbb {C} (X)$ is $g/f\not \in \mathbb {C} [X]$. • The field of power series: $\mathbb {F} ((X))=\left\{f(X)=\!\sum _{i>-\infty }^{\infty }a_{i}X^{i}\,:\ a_{i}\in \mathbb {F} \right\}$ has the valuation $v(f)=\inf \nolimits _{a_{n}\neq 0}n$. The subring $\mathbb {F} [[X]]$ is a valuation ring as well. • $\mathbb {Z} _{(p)},$ the localization of the integers $\mathbb {Z} $ at the prime ideal (p), consisting of ratios where the numerator is any integer and the denominator is not divisible by p. The field of fractions is the field of rational numbers $\mathbb {Q} .$ • The ring of meromorphic functions on the entire complex plane which have a Maclaurin series (Taylor series expansion at zero) is a valuation ring. The field of fractions are the functions meromorphic on the whole plane. If f does not have a Maclaurin series then 1/f does. • Any ring of p-adic integers $\mathbb {Z} _{p}$ for a given prime p is a local ring, with field of fractions the p-adic numbers $\mathbb {Q} _{p}$. The integral closure $\mathbb {Z} _{p}^{\text{cl}}$ of the p-adic integers is also a local ring, with field of fractions $\mathbb {Q} _{p}^{\text{cl}}$ (the algebraic closure of the p-adic numbers). Both $\mathbb {Z} _{p}$ and $\mathbb {Z} _{p}^{\text{cl}}$ are valuation rings. • Let k be an ordered field. An element of k is called finite if it lies between two integers n < x < m; otherwise it is called infinite. The set D of finite elements of k is a valuation ring. The set of elements x such that x ∈ D and x−1 ∉ D is the set of infinitesimal elements; and an element x such that x ∉ D and x−1 ∈ D is called infinite. • The ring F of finite elements of a hyperreal field *R (an ordered field containing the real numbers) is a valuation ring of *R. F consists of all hyperreal numbers differing from a standard real by an infinitesimal amount, which is equivalent to saying a hyperreal number x such that −n < x < n for some standard integer n. The residue field, finite hyperreal numbers modulo the ideal of infinitesimal hyperreal numbers, is isomorphic to the real numbers. • A common geometric example comes from algebraic plane curves. Consider the polynomial ring $\mathbb {C} [x,y]$ and an irreducible polynomial $f$ in that ring. Then the ring $\mathbb {C} [x,y]/(f)$ is the ring of polynomial functions on the curve $\{(x,y):f(x,y)=0\}$. Choose a point $P=(P_{x},P_{y})\in \mathbb {C} ^{2}$ such that $f(P)=0$ and it is a regular point on the curve; i.e., the local ring R at the point is a regular local ring of Krull dimension one or a discrete valuation ring. • For example, consider the inclusion $(\mathbb {C} [[X^{2}]],(X^{2}))\hookrightarrow (\mathbb {C} [[X]],(X))$. These are all subrings in the field of bounded-below power series $\mathbb {C} ((X))$. Dominance and integral closure The units, or invertible elements, of a valuation ring are the elements x in D such that x −1 is also a member of D. The other elements of D – called nonunits – do not have an inverse in D, and they form an ideal M. This ideal is maximal among the (totally ordered) ideals of D. Since M is a maximal ideal, the quotient ring D/M is a field, called the residue field of D. 
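The example of $\mathbb {Z} _{(p)}$ above, together with the residue field just defined, can be checked mechanically. The following Python sketch is an illustration (the function names and sample rationals are assumptions): it computes the $p$-adic valuation of a rational number, verifies that for each nonzero $x\in \mathbb {Q} $ either $x$ or $x^{-1}$ lies in $\mathbb {Z} _{(p)}$ (Definition 1), and reduces elements of $\mathbb {Z} _{(p)}$ modulo the maximal ideal, landing in the residue field $\mathbb {F} _{p}$.

```python
from fractions import Fraction

def vp(x: Fraction, p: int) -> int:
    """p-adic valuation of a nonzero rational: v_p(p^k * a/b) = k with p dividing neither a nor b."""
    assert x != 0
    k, num, den = 0, x.numerator, x.denominator
    while num % p == 0:
        num //= p
        k += 1
    while den % p == 0:
        den //= p
        k -= 1
    return k

def in_Zp_local(x: Fraction, p: int) -> bool:
    """Membership in Z_(p): the denominator in lowest terms is not divisible by p, i.e. v_p(x) >= 0."""
    return vp(x, p) >= 0

def residue(x: Fraction, p: int) -> int:
    """Image of an element of Z_(p) in the residue field F_p = Z_(p)/pZ_(p)."""
    assert in_Zp_local(x, p)
    return (x.numerator * pow(x.denominator, -1, p)) % p   # modular inverse (Python 3.8+)

p = 5
for x in [Fraction(10, 3), Fraction(3, 25), Fraction(7, 2)]:
    # Definition 1: for every nonzero x in Q, x or 1/x lies in Z_(p).
    assert in_Zp_local(x, p) or in_Zp_local(1 / x, p)
    print(x, "  v_p =", vp(x, p), "  in Z_(p):", in_Zp_local(x, p))

print("residue of 7/3 mod 5:", residue(Fraction(7, 3), 5))   # 7 * 3^{-1} = 7 * 2 = 14 = 4 (mod 5)
```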
In general, we say a local ring $(S,{\mathfrak {m}}_{S})$ dominates a local ring $(R,{\mathfrak {m}}_{R})$ if $S\supseteq R$ and ${\mathfrak {m}}_{S}\cap R={\mathfrak {m}}_{R}$; in other words, the inclusion $R\subseteq S$ is a local ring homomorphism. Every local ring $(A,{\mathfrak {p}})$ in a field K is dominated by some valuation ring of K. Indeed, the set consisting of all subrings R of K containing A and $1\not \in {\mathfrak {p}}R$ is nonempty and is inductive; thus, has a maximal element $R$ by Zorn's lemma. We claim R is a valuation ring. R is a local ring with maximal ideal containing ${\mathfrak {p}}R$ by maximality. Again by maximality it is also integrally closed. Now, if $x\not \in R$, then, by maximality, ${\mathfrak {p}}R[x]=R[x]$ and thus we can write: $1=r_{0}+r_{1}x+\cdots +r_{n}x^{n},\quad r_{i}\in {\mathfrak {p}}R$. Since $1-r_{0}$ is a unit element, this implies that $x^{-1}$ is integral over R; thus is in R. This proves R is a valuation ring. (R dominates A since its maximal ideal contains ${\mathfrak {p}}$ by construction.) A local ring R in a field K is a valuation ring if and only if it is a maximal element of the set of all local rings contained in K partially ordered by dominance. This easily follows from the above.[lower-alpha 2] Let A be a subring of a field K and $f:A\to k$ a ring homomorphism into an algebraically closed field k. Then f extends to a ring homomorphism $g:D\to k$, D some valuation ring of K containing A. (Proof: Let $g:R\to k$ be a maximal extension, which clearly exists by Zorn's lemma. By maximality, R is a local ring with maximal ideal containing the kernel of f. If S is a local ring dominating R, then S is algebraic over R; if not, $S$ contains a polynomial ring $R[x]$ to which g extends, a contradiction to maximality. It follows $S/{\mathfrak {m}}_{S}$ is an algebraic field extension of $R/{\mathfrak {m}}_{R}$. Thus, $S\to S/{\mathfrak {m}}_{S}\hookrightarrow k$ extends g; hence, S = R.) If a subring R of a field K contains a valuation ring D of K, then, by checking Definition 1, R is also a valuation ring of K. In particular, R is local and its maximal ideal contracts to some prime ideal of D, say, ${\mathfrak {p}}$. Then $R=D_{\mathfrak {p}}$ since $R$ dominates $D_{\mathfrak {p}}$, which is a valuation ring since the ideals are totally ordered. This observation is subsumed to the following:[7] there is a bijective correspondence ${\mathfrak {p}}\mapsto D_{\mathfrak {p}},\operatorname {Spec} (D)\to $ the set of all subrings of K containing D. In particular, D is integrally closed,[8][lower-alpha 3] and the Krull dimension of D is the number of proper subrings of K containing D. In fact, the integral closure of an integral domain A in the field of fractions K of A is the intersection of all valuation rings of K containing A.[9] Indeed, the integral closure is contained in the intersection since the valuation rings are integrally closed. Conversely, let x be in K but not integral over A. Since the ideal $x^{-1}A[x^{-1}]$ is not $A[x^{-1}]$,[lower-alpha 4] it is contained in a maximal ideal ${\mathfrak {p}}$. Then there is a valuation ring R that dominates the localization of $A[x^{-1}]$ at ${\mathfrak {p}}$. Since $x^{-1}\in {\mathfrak {m}}_{R}$, $x\not \in R$. The dominance is used in algebraic geometry. Let X be an algebraic variety over a field k. 
Then we say a valuation ring R in $k(X)$ has "center x on X " if $R$ dominates the local ring ${\mathcal {O}}_{x,X}$ of the structure sheaf at x.[10] Ideals in valuation rings We may describe the ideals in the valuation ring by means of its value group. Let Γ be a totally ordered abelian group. A subset Δ of Γ is called a segment if it is nonempty and, for any α in Δ, any element between −α and α is also in Δ (end points included). A subgroup of Γ is called an isolated subgroup if it is a segment and is a proper subgroup. Let D be a valuation ring with valuation v and value group Γ. For any subset A of D, we let $\Gamma _{A}$ be the complement of the union of $v(A-0)$ and $-v(A-0)$ in $\Gamma $. If I is a proper ideal, then $\Gamma _{I}$ is a segment of $\Gamma $. In fact, the mapping $I\mapsto \Gamma _{I}$ defines an inclusion-reversing bijection between the set of proper ideals of D and the set of segments of $\Gamma $.[11] Under this correspondence, the nonzero prime ideals of D correspond bijectively to the isolated subgroups of Γ. Example: The ring of p-adic integers $\mathbb {Z} _{p}$ is a valuation ring with value group $\mathbb {Z} $. The zero subgroup of $\mathbb {Z} $ corresponds to the unique maximal ideal $(p)\subseteq \mathbb {Z} _{p}$ and the whole group to the zero ideal. The maximal ideal is the only isolated subgroup of $\mathbb {Z} $. The set of isolated subgroups is totally ordered by inclusion. The height or rank r(Γ) of Γ is defined to be the cardinality of the set of isolated subgroups of Γ. Since the nonzero prime ideals are totally ordered and they correspond to isolated subgroups of Γ, the height of Γ is equal to the Krull dimension of the valuation ring D associated with Γ. The most important special case is height one, which is equivalent to Γ being a subgroup of the real numbers ℝ under addition (or equivalently, of the positive real numbers ℝ+ under multiplication.) A valuation ring with a valuation of height one has a corresponding absolute value defining an ultrametric place. A special case of this are the discrete valuation rings mentioned earlier. The rational rank rr(Γ) is defined as the rank of the value group as an abelian group, $\mathrm {dim} _{\mathbb {Q} }(\Gamma \otimes _{\mathbb {Z} }\mathbb {Q} ).$ Places This section is based on Zariski & Samuel 1975. General definition A place of a field K is a ring homomorphism p from a valuation ring D of K to some field such that, for any $x\not \in D$,  $p(1/x)=0$. The image of a place is a field called the residue field of p. For example, the canonical map $D\to D/{\mathfrak {m}}_{D}$ is a place. Example Let A be a Dedekind domain and ${\mathfrak {p}}$ a prime ideal. Then the canonical map $A_{\mathfrak {p}}\to k({\mathfrak {p}})$ is a place. Specialization of places We say a place p specializes to a place p′, denoted by $p\rightsquigarrow p'$, if the valuation ring of p contains the valuation ring of p'. In algebraic geometry, we say a prime ideal ${\mathfrak {p}}$ specializes to ${\mathfrak {p}}'$ if ${\mathfrak {p}}\subseteq {\mathfrak {p}}'$. The two notions coincide: $p\rightsquigarrow p'$ if and only if a prime ideal corresponding to p specializes to a prime ideal corresponding to p′ in some valuation ring (recall that if $D\supseteq D'$ are valuation rings of the same field, then D corresponds to a prime ideal of $D'$.) 
Example For example, in the function field $\mathbb {F} (X)$ of some algebraic variety $X$ every prime ideal ${\mathfrak {p}}\in {\text{Spec}}(R)$ contained in a maximal ideal ${\mathfrak {m}}$ gives a specialization ${\mathfrak {p}}\rightsquigarrow {\mathfrak {m}}$. Remarks It can be shown: if $p\rightsquigarrow p'$, then $p'=q\circ p|_{D'}$ for some place q of the residue field $k(p)$ of p. (Observe $p(D')$ is a valuation ring of $k(p)$ and let q be the corresponding place; the rest is mechanical.) If D is a valuation ring of p, then its Krull dimension is the cardinality of the set of places other than p that specialize to p. Thus, for any place p with valuation ring D of a field K over a field k, we have: $\operatorname {tr.deg} _{k}k(p)+\dim D\leq \operatorname {tr.deg} _{k}K$. If p is a place and A is a subring of the valuation ring of p, then $\operatorname {ker} (p)\cap A$ is called the center of p in A. Places at infinity For the function field on an affine variety $X$ there are valuations which are not associated to any of the primes of $X$. These valuations are called the places at infinity. For example, the affine line $\mathbb {A} _{k}^{1}$ has function field $k(x)$. The place associated to the localization of $k\left[{\frac {1}{x}}\right]$ at the maximal ideal ${\mathfrak {m}}=\left({\frac {1}{x}}\right)$ is a place at infinity. Notes 1. More precisely, Γ is totally ordered by defining $[x]\geq [y]$ if and only if $xy^{-1}\in D$ where [x] and [y] are equivalence classes in Γ. cf. Efrat (2006), p. 39 2. Proof: if R is a maximal element, then it is dominated by a valuation ring; thus, it itself must be a valuation ring. Conversely, let R be a valuation ring and S a local ring that dominates R but is not equal to R. There is x that is in S but not in R. Then $x^{-1}$ is in R and in fact in the maximal ideal of R. But then $x^{-1}\in {\mathfrak {m}}_{S}$, which is absurd. Hence, there cannot be such S. 3. To see more directly that valuation rings are integrally closed, suppose that $x^{n}+a_{1}x^{n-1}+\cdots +a_{0}=0$. Then dividing by $x^{n-1}$ gives us $x=-a_{1}-\cdots -a_{0}x^{-n+1}$. If x were not in D, then $x^{-1}$ would be in D and this would express x as a finite sum of elements in D, so that x would be in D, a contradiction. 4. In general, $x^{-1}$ is integral over A if and only if $xA[x]=A[x].$ Citations 1. Hartshorne 1977, Theorem I.6.1A. 2. Efrat 2006, p. 55. 3. Cohn 1968, Proposition 1.5. 4. Efrat 2006, p. 43. 5. The role of valuation rings in algebraic geometry 6. Does there exist a Riemann surface corresponding to every field extension? Any other hypothesis needed? 7. Zariski & Samuel 1975, Ch. VI, Theorem 3. 8. Efrat 2006, p. 38. 9. Matsumura 1989, Theorem 10.4. 10. Hartshorne 1977, Ch II. Exercise 4.5. 11. Zariski & Samuel 1975, Ch. VI, Theorem 15. Sources • Bourbaki, Nicolas (1972). Commutative Algebra. Elements of Mathematics (First ed.). Addison-Wesley. ISBN 978-020100644-5. • Cohn, P. M. (1968), "Bezout rings and their subrings" (PDF), Proc. Cambridge Philos. Soc., 64 (2): 251–264, Bibcode:1968PCPS...64..251C, doi:10.1017/s0305004100042791, ISSN 0008-1981, MR 0222065, S2CID 123667384, Zbl 0157.08401 • Efrat, Ido (2006), Valuations, orderings, and Milnor K-theory, Mathematical Surveys and Monographs, vol. 124, Providence, RI: American Mathematical Society, ISBN 0-8218-4041-X, Zbl 1103.12002 • Fuchs, László; Salce, Luigi (2001), Modules over non-Noetherian domains, Mathematical Surveys and Monographs, vol.
84, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-1963-0, MR 1794715, Zbl 0973.13001 • Hartshorne, Robin (1977), Algebraic Geometry, Graduate Texts in Mathematics, vol. 52, New York: Springer-Verlag, ISBN 978-0-387-90244-9, MR 0463157 • Krull, Wolfgang (1939), "Beiträge zur Arithmetik kommutativer Integritätsbereiche. VI. Der allgemeine Diskriminantensatz. Unverzweigte Ringerweiterungen", Mathematische Zeitschrift, 45 (1): 1–19, doi:10.1007/BF01580269, ISSN 0025-5874, MR 1545800, S2CID 121374449, Zbl 0020.34003 • Matsumura, Hideyuki (1989), Commutative ring theory, Cambridge Studies in Advanced Mathematics, vol. 8, Translated from the Japanese by Miles Reid (Second ed.), ISBN 0-521-36764-6, Zbl 0666.13002 • Zariski, Oscar; Samuel, Pierre (1975), Commutative algebra. Vol. II, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90171-8, MR 0389876
Wikipedia
Valuative criterion In mathematics, specifically algebraic geometry, the valuative criteria are a collection of results that make it possible to decide whether a morphism of algebraic varieties, or more generally schemes, is universally closed, separated, or proper. Statement of the valuative criteria Recall that a valuation ring A is a domain, so if K is the field of fractions of A, then Spec K is the generic point of Spec A. Let X and Y be schemes, and let f : X → Y be a morphism of schemes. Then the following are equivalent:[1][2] 1. f is separated (resp. universally closed, resp. proper) 2. f is quasi-separated (resp. quasi-compact, resp. of finite type and quasi-separated) and for every valuation ring A, if Y' = Spec A and X' denotes the generic point of Y' , then for every morphism Y' → Y and every morphism X' → X which lifts the generic point, then there exists at most one (resp. at least one, resp. exactly one) lift Y' → X. The lifting condition is equivalent to specifying that the natural morphism ${\text{Hom}}_{Y}(Y',X)\to {\text{Hom}}_{Y}({\text{Spec}}K,X)$ is injective (resp. surjective, resp. bijective). Furthermore, in the special case when Y is (locally) noetherian, it suffices to check the case that A is a discrete valuation ring. References 1. EGA II, proposition 7.2.3 and théorème 7.3.8. 2. Stacks Project, tags 01KA, 01KY, and 0BX4. • Grothendieck, Alexandre; Jean Dieudonné (1961). "Éléments de géométrie algébrique (rédigés avec la collaboration de Jean Dieudonné) : II. Étude globale élémentaire de quelques classes de morphismes". Publications Mathématiques de l'IHÉS. 8: 5–222. doi:10.1007/bf02699291.
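The lifting condition in the statement above is often pictured as a commutative square (a standard illustration added here, not a diagram reproduced from the cited sources):

$\begin{array}{ccc}\operatorname {Spec} K&\longrightarrow &X\\\downarrow &&\downarrow f\\\operatorname {Spec} A&\longrightarrow &Y\end{array}$

The criterion then asks for a diagonal arrow Spec A → X making both triangles commute: there is at most one such lift when f is separated, at least one when f is universally closed, and exactly one when f is proper (under the quasi-compactness and finite-type hypotheses listed in condition 2).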
Wikipedia
Value (mathematics) In mathematics, value may refer to several, strongly related notions. In general, a mathematical value may be any definite mathematical object. In elementary mathematics, this is most often a number – for example, a real number such as π or an integer such as 42. • The value of a variable or a constant is any number or other mathematical object assigned to it. • The value of a mathematical expression is the result of the computation described by this expression when the variables and constants in it are assigned values. • The value of a function, given the value(s) assigned to its argument(s), is the quantity assumed by the function for these argument values.[1][2] For example, if the function f is defined by f(x) = 2x2 – 3x + 1, then assigning the value 3 to its argument x yields the function value 10, since f(3) = 2·32 – 3·3 + 1 = 10. If the variable, expression or function only assumes real values, it is called real-valued. Likewise, a complex-valued variable, expression or function only assumes complex values. See also • Value function • Value (computer science) • Absolute value • Truth value References 1. "Value". 2. Meschkowski, Herbert (1968). Introduction to Modern Mathematics. George G. Harrap & Co. Ltd. p. 32. ISBN 0245591095.
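As a minimal sketch of the worked example above (the function name is just illustrative):

    def f(x):
        return 2 * x**2 - 3 * x + 1

    assert f(3) == 10   # 2*9 - 3*3 + 1 = 10, the value of f at the argument 3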
Wikipedia
Valuation (logic) In logic and model theory, a valuation can be: • In propositional logic, an assignment of truth values to propositional variables, with a corresponding assignment of truth values to all propositional formulas with those variables. • In first-order logic and higher-order logics, a structure, (the interpretation) and the corresponding assignment of a truth value to each sentence in the language for that structure (the valuation proper). The interpretation must be a homomorphism, while valuation is simply a function. Mathematical logic In mathematical logic (especially model theory), a valuation is an assignment of truth values to formal sentences that follows a truth schema. Valuations are also called truth assignments. In propositional logic, there are no quantifiers, and formulas are built from propositional variables using logical connectives. In this context, a valuation begins with an assignment of a truth value to each propositional variable. This assignment can be uniquely extended to an assignment of truth values to all propositional formulas. In first-order logic, a language consists of a collection of constant symbols, a collection of function symbols, and a collection of relation symbols. Formulas are built out of atomic formulas using logical connectives and quantifiers. A structure consists of a set (domain of discourse) that determines the range of the quantifiers, along with interpretations of the constant, function, and relation symbols in the language. Corresponding to each structure is a unique truth assignment for all sentences (formulas with no free variables) in the language. Notation If $v$ is a valuation, that is, a mapping from the atoms to the set $\{t,f\}$, then the double-bracket notation is commonly used to denote a valuation; that is, $v(\phi )=[\![\phi ]\!]_{v}$ for a proposition $\phi $.[1] See also • Algebraic semantics References 1. Dirk van Dalen, (2004) Logic and Structure, Springer Universitext, (see section 1.2) ISBN 978-3-540-20879-2 • Rasiowa, Helena; Sikorski, Roman (1970), The Mathematics of Metamathematics (3rd ed.), Warsaw: PWN, chapter 6 Algebra of formalized languages. • J. Michael Dunn; Gary M. Hardegree (2001). Algebraic methods in philosophical logic. Oxford University Press. p. 155. ISBN 978-0-19-853192-0.
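As a concrete illustration of how an assignment of truth values to propositional variables extends uniquely to all formulas, here is a minimal sketch (the encoding of formulas as nested tuples and all names are illustrative, not taken from the cited texts):

    # Formulas: ("var", name), ("not", f), ("and", f, g), ("or", f, g), ("implies", f, g).
    def evaluate(formula, v):
        """Extend the valuation v on atoms to an arbitrary propositional formula."""
        op = formula[0]
        if op == "var":
            return v[formula[1]]
        if op == "not":
            return not evaluate(formula[1], v)
        if op == "and":
            return evaluate(formula[1], v) and evaluate(formula[2], v)
        if op == "or":
            return evaluate(formula[1], v) or evaluate(formula[2], v)
        if op == "implies":
            return (not evaluate(formula[1], v)) or evaluate(formula[2], v)
        raise ValueError("unknown connective: " + str(op))

    v = {"p": True, "q": False}                       # the valuation on the atoms
    phi = ("implies", ("and", ("var", "p"), ("var", "q")), ("var", "q"))
    print(evaluate(phi, v))                           # True, i.e. [[ (p and q) -> q ]]_v = t

The recursion mirrors the inductive definition: once the atoms are assigned, the truth value of every compound formula is determined.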
Wikipedia
Value distribution theory of holomorphic functions In mathematics, the value distribution theory of holomorphic functions is a division of mathematical analysis. It tries to get quantitative measures of the number of times a function f(z) assumes a value a, as z grows in size, refining the Picard theorem on behaviour close to an essential singularity. The theory exists for analytic functions (and meromorphic functions) of one complex variable z, or of several complex variables. In the case of one variable the term Nevanlinna theory, after Rolf Nevanlinna, is also common. The now-classical theory received renewed interest, when Paul Vojta suggested some analogies with the problem of integral solutions to Diophantine equations. These turned out to involve some close parallels, and to lead to fresh points of view on the Mordell conjecture and related questions.
Wikipedia
Value function The value function of an optimization problem gives the value attained by the objective function at a solution, while only depending on the parameters of the problem.[1][2] In a controlled dynamical system, the value function represents the optimal payoff of the system over the interval [t, t1] when started at the time-t state variable x(t)=x.[3] If the objective function represents some cost that is to be minimized, the value function can be interpreted as the cost to finish the optimal program, and is thus referred to as "cost-to-go function."[4][5] In an economic context, where the objective function usually represents utility, the value function is conceptually equivalent to the indirect utility function.[6][7] In a problem of optimal control, the value function is defined as the supremum of the objective function taken over the set of admissible controls. Given $(t_{0},x_{0})\in [0,t_{1}]\times \mathbb {R} ^{d}$, a typical optimal control problem is to ${\text{maximize}}\quad J(t_{0},x_{0};u)=\int _{t_{0}}^{t_{1}}I(t,x(t),u(t))\,\mathrm {d} t+\phi (x(t_{1}))$ subject to ${\frac {\mathrm {d} x(t)}{\mathrm {d} t}}=f(t,x(t),u(t))$ with initial state variable $x(t_{0})=x_{0}$.[8] The objective function $J(t_{0},x_{0};u)$ is to be maximized over all admissible controls $u\in U[t_{0},t_{1}]$, where $u$ is a Lebesgue measurable function from $[t_{0},t_{1}]$ to some prescribed arbitrary set in $\mathbb {R} ^{m}$. The value function is then defined as $V(t,x(t))=\max _{u\in U}\int _{t}^{t_{1}}I(\tau ,x(\tau ),u(\tau ))\,\mathrm {d} \tau +\phi (x(t_{1}))$ with $V(t_{1},x(t_{1}))=\phi (x(t_{1}))$, where $\phi (x(t_{1}))$ is the "scrap value". If the optimal pair of control and state trajectories is $(x^{\ast },u^{\ast })$, then $V(t_{0},x_{0})=J(t_{0},x_{0};u^{\ast })$. The function $h$ that gives the optimal control $u^{\ast }$ based on the current state $x$ is called a feedback control policy,[4] or simply a policy function.[9] Bellman's principle of optimality roughly states that any optimal policy at time $t$, $t_{0}\leq t\leq t_{1}$ taking the current state $x(t)$ as "new" initial condition must be optimal for the remaining problem. 
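Bellman's principle is easiest to see in a discrete-time, finite-horizon analogue of the problem above, where the value function can be computed by backward induction. The following is a minimal sketch (the dynamics, rewards and grid are invented for illustration, not taken from the cited references): V_t(x) = max_u [ I(t, x, u) + V_{t+1}(f(x, u)) ], with terminal condition V_T(x) = φ(x).

    T = 3
    states = range(-3, 4)                  # a small finite state grid
    controls = (-1, 0, 1)

    def step(x, u):                        # dynamics x_{t+1} = f(x_t, u_t), clipped to the grid
        return max(-3, min(3, x + u))

    def reward(t, x, u):                   # instantaneous payoff I(t, x, u)
        return -(x * x) - abs(u)

    def scrap(x):                          # terminal ("scrap") payoff phi(x)
        return -abs(x)

    V = {(T, x): scrap(x) for x in states}
    policy = {}
    for t in range(T - 1, -1, -1):         # backward induction over time
        for x in states:
            best_u, best_val = None, float("-inf")
            for u in controls:
                val = reward(t, x, u) + V[(t + 1, step(x, u))]
                if val > best_val:
                    best_u, best_val = u, val
            V[(t, x)] = best_val
            policy[(t, x)] = best_u        # the feedback control policy u* = h(t, x)

    print(V[(0, 2)], policy[(0, 2)])       # optimal value and first action from x = 2 at t = 0

Here the dictionary policy plays the role of the feedback control policy mentioned above, and each V[(t, x)] is the payoff-to-go of the remaining problem, exactly as Bellman's principle prescribes.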
If the value function happens to be continuously differentiable,[10] this gives rise to an important partial differential equation known as Hamilton–Jacobi–Bellman equation, $-{\frac {\partial V(t,x)}{\partial t}}=\max _{u}\left\{I(t,x,u)+{\frac {\partial V(t,x)}{\partial x}}f(t,x,u)\right\}$ where the maximand on the right-hand side can also be re-written as the Hamiltonian, $H\left(t,x,u,\lambda \right)=I(t,x,u)+\lambda (t)f(t,x,u)$, as $-{\frac {\partial V(t,x)}{\partial t}}=\max _{u}H(t,x,u,\lambda )$ with $\partial V(t,x)/\partial x=\lambda (t)$ playing the role of the costate variables.[11] Given this definition, we further have $\mathrm {d} \lambda (t)/\mathrm {d} t=\partial ^{2}V(t,x)/\partial x\partial t+\partial ^{2}V(t,x)/\partial x^{2}\cdot f(x)$, and after differentiating both sides of the HJB equation with respect to $x$, $-{\frac {\partial ^{2}V(t,x)}{\partial t\partial x}}={\frac {\partial I}{\partial x}}+{\frac {\partial ^{2}V(t,x)}{\partial x^{2}}}f(x)+{\frac {\partial V(t,x)}{\partial x}}{\frac {\partial f(x)}{\partial x}}$ which after replacing the appropriate terms recovers the costate equation $-{\dot {\lambda }}(t)=\underbrace {{\frac {\partial I}{\partial x}}+\lambda (t){\frac {\partial f(x)}{\partial x}}} _{={\frac {\partial H}{\partial x}}}$ where ${\dot {\lambda }}(t)$ is Newton notation for the derivative with respect to time.[12] The value function is the unique viscosity solution to the Hamilton–Jacobi–Bellman equation.[13] In an online closed-loop approximate optimal control, the value function is also a Lyapunov function that establishes global asymptotic stability of the closed-loop system.[14] References 1. Fleming, Wendell H.; Rishel, Raymond W. (1975). Deterministic and Stochastic Optimal Control. New York: Springer. pp. 81–83. ISBN 0-387-90155-8. 2. Caputo, Michael R. (2005). Foundations of Dynamic Economic Analysis : Optimal Control Theory and Applications. New York: Cambridge University Press. p. 185. ISBN 0-521-60368-4. 3. Weber, Thomas A. (2011). Optimal Control Theory : with Applications in Economics. Cambridge: The MIT Press. p. 82. ISBN 978-0-262-01573-8. 4. Bertsekas, Dimitri P.; Tsitsiklis, John N. (1996). Neuro-Dynamic Programming. Belmont: Athena Scientific. p. 2. ISBN 1-886529-10-8. 5. "EE365: Dynamic Programming" (PDF). 6. Mas-Colell, Andreu; Whinston, Michael D.; Green, Jerry R. (1995). Microeconomic Theory. New York: Oxford University Press. p. 964. ISBN 0-19-507340-1. 7. Corbae, Dean; Stinchcombe, Maxwell B.; Zeman, Juraj (2009). An Introduction to Mathematical Analysis for Economic Theory and Econometrics. Princeton University Press. p. 145. ISBN 978-0-691-11867-3. 8. Kamien, Morton I.; Schwartz, Nancy L. (1991). Dynamic Optimization : The Calculus of Variations and Optimal Control in Economics and Management (2nd ed.). Amsterdam: North-Holland. p. 259. ISBN 0-444-01609-0. 9. Ljungqvist, Lars; Sargent, Thomas J. (2018). Recursive Macroeconomic Theory (Fourth ed.). Cambridge: MIT Press. p. 106. ISBN 978-0-262-03866-9. 10. Benveniste and Scheinkman established sufficient conditions for the differentiability of the value function, which in turn allows an application of the envelope theorem, see Benveniste, L. M.; Scheinkman, J. A. (1979). "On the Differentiability of the Value Function in Dynamic Models of Economics". Econometrica. 47 (3): 727–732. doi:10.2307/1910417. JSTOR 1910417. Also see Seierstad, Atle (1982). "Differentiability Properties of the Optimal Value Function in Control Theory". Journal of Economic Dynamics and Control. 
4: 303–310. doi:10.1016/0165-1889(82)90019-7. 11. Kirk, Donald E. (1970). Optimal Control Theory. Englewood Cliffs, NJ: Prentice-Hall. p. 88. ISBN 0-13-638098-0. 12. Zhou, X. Y. (1990). "Maximum Principle, Dynamic Programming, and their Connection in Deterministic Control". Journal of Optimization Theory and Applications. 65 (2): 363–373. doi:10.1007/BF01102352. S2CID 122333807. 13. Theorem 10.1 in Bressan, Alberto (2019). "Viscosity Solutions of Hamilton-Jacobi Equations and Optimal Control Problems" (PDF). Lecture Notes. 14. Kamalapurkar, Rushikesh; Walters, Patrick; Rosenfeld, Joel; Dixon, Warren (2018). "Optimal Control and Lyapunov Stability". Reinforcement Learning for Optimal Feedback Control: A Lyapunov-Based Approach. Berlin: Springer. pp. 26–27. ISBN 978-3-319-78383-3. Further reading • Caputo, Michael R. (2005). "Necessary and Sufficient Conditions for Isoperimetric Problems". Foundations of Dynamic Economic Analysis : Optimal Control Theory and Applications. New York: Cambridge University Press. pp. 174–210. ISBN 0-521-60368-4. • Clarke, Frank H.; Loewen, Philip D. (1986). "The Value Function in Optimal Control: Sensitivity, Controllability, and Time-Optimality". SIAM Journal on Control and Optimization. 24 (2): 243–263. doi:10.1137/0324014. • LaFrance, Jeffrey T.; Barney, L. Dwayne (1991). "The Envelope Theorem in Dynamic Optimization" (PDF). Journal of Economic Dynamics and Control. 15 (2): 355–385. doi:10.1016/0165-1889(91)90018-V. • Stengel, Robert F. (1994). "Conditions for Optimality". Optimal Control and Estimation. New York: Dover. pp. 201–222. ISBN 0-486-68200-5.
Wikipedia
Value of structural health information The value of structural health information is the expected utility gain of a built environment system by information provided by structural health monitoring (SHM). The quantification of the value of structural health information is based on decision analysis adapted to built environment engineering. The value of structural health information can be significant for the risk and integrity management of built environment systems. Background The value of structural health information is grounded in the framework of decision analysis and value of information analysis as introduced by Raiffa and Schlaifer[1] and adapted to civil engineering by Benjamin and Cornell.[2] Decision theory itself is based upon the expected utility hypothesis by Von Neumann and Morgenstern.[3] The concepts for the value of structural health information in built environment engineering were first formulated by Pozzi and Der Kiureghian[4] and Faber and Thöns.[5] Formulation The value of structural health information is quantified with a normative decision analysis. The value of structural health monitoring $V$ is calculated as the difference between the optimized expected utilities of performing and not performing structural health monitoring (SHM), $U_{1}$ and $U_{0}$, respectively: $V=U_{1}-U_{0}$ The expected utilities are calculated with a decision scenario involving (1) interrelated built environment system state, utility and consequence models, (2) structural health information type, precision and cost models and (3) structural health action type and implementation models. The value of structural health information quantification facilitates an optimization of structural health information system parameters and information dependent actions.[6][7] Application The value of structural health information provides a quantitative decision basis for (1) implementing SHM or not, (2) the identification of the optimal SHM strategy and (3) planning optimal structural health actions, such as repair and replacement. The value of structural health information presupposes relevance of SHM information for the built environment system performance. A significant value of structural health information has been found for the risk and integrity management of engineering structures.[6][8][7] References 1. Raiffa, Howard; Schlaifer, Robert (2000). Applied statistical decision theory (Wiley classics library ed.). New York: Wiley. ISBN 047138349X. OCLC 43662059. 2. Benjamin, J. R.; Cornell, C. A. (1970). Probability, Statistics, and Decision for Civil Engineers. McGraw-Hill. OCLC 473420360. 3. von Neumann, John; Morgenstern, Oskar (2007-12-31). Theory of Games and Economic Behavior (60th Anniversary Commemorative ed.). Princeton: Princeton University Press. doi:10.1515/9781400829460. ISBN 9781400829460. 4. Pozzi, Matteo; Der Kiureghian, Armen (2011-03-24). Kundu, Tribikram (ed.). "Assessing the value of information for long-term structural health monitoring". Health Monitoring of Structural and Biological Systems 2011. SPIE. 7984: 79842W. Bibcode:2011SPIE.7984E..2WP. doi:10.1117/12.881918. S2CID 3057973. 5. Faber, M; Thöns, S (2013-09-18), "On the value of structural health monitoring", Safety, Reliability and Risk Analysis, CRC Press, pp. 2535–2544, doi:10.1201/b15938-380, ISBN 9781138001237 6.
"TU1402 Guidelines - Quantifying the Value of Structural Health Monitoring - COST Action TU 1402". www.cost-tu1402.eu. Retrieved 2019-10-21. 7. Thöns, Sebastian. "Background documentation of the Joint Committee of Structural Safety (JCSS): Quantifying the value of structural health information for decision support" (PDF).{{cite web}}: CS1 maint: url-status (link) 8. Sohn, H.; Farrar, C. R.; Hemez, F. M.; Shunk, D. D.; Stinemates, D. W.; Nadler, B. R.; Czarnecki, J. J. (2001). A Review of Structural Health Monitoring Literature: 1996–2001. Los Alamos: Los Alamos National Laboratory report LA-13070-MS.{{cite book}}: CS1 maint: multiple names: authors list (link)
Wikipedia
Valérie Berthé Valérie Berthé (born 16 December 1968)[1] is a French mathematician who works as a director of research for the Centre national de la recherche scientifique (CNRS) at the Institut de Recherche en Informatique Fondamentale (IRIF), a joint project between CNRS and Paris Diderot University. Her research involves symbolic dynamics, combinatorics on words, discrete geometry, numeral systems, tessellations, and fractals.[2] Education Berthé completed her baccalauréat at age 16,[3] and studied at the École Normale Supérieure from 1988 to 1993. She earned a licentiate and master's degree in pure mathematics from Pierre and Marie Curie University in 1989, a Diplôme d'études approfondies from University of Paris-Sud in 1991, completed her agrégation in 1992, and was recruited by CNRS in 1993.[1] Continuing her graduate studies, she defended a doctoral thesis in 1994 at the University of Bordeaux 1. Her dissertation, Fonctions de Carlitz et automates: Entropies conditionnelles was supervised by Jean-Paul Allouche.[1][4] She completed a habilitation in 1999, again under the supervision of Allouche, at the University of the Mediterranean Aix-Marseille II; her habilitation thesis was Étude arithmétique et dynamique de suites algorithmiques.[1] Research Berthé's research spans the area of symbolic dynamics, combinatorics on words, numeration systems and discrete geometry. She has recently made significant progress in the study of S-adic dynamical systems, and also of continued fractions in higher dimensions.[5][6][7][8] Associations Berthé is a vice-president of the Société mathématique de France (SMF), and director of publications for the SMF.[9] She has played an active role in L'association femmes et mathématiques.[10] Berthé has also been associated with the M. Lothaire pseudonymous mathematical collaboration on combinatorics on words[11] and the Pytheas Fogg pseudonymous collaboration on substitution systems.[12] Recognition In 2013, Berthé was elevated to the Legion of Honour.[3][10] References 1. Curriculum vitae (PDF), April 2012, retrieved 2018-02-10 2. Berthé Valérie, Institut de Recherche en Informatique Fondamentale (IRIF), retrieved 2018-02-10 3. "Valérie Berthé, brillante tête chercheuse au CNRS", Ouest-France (in French), February 7, 2014 4. Valérie Berthé at the Mathematics Genealogy Project 5. Berthé, Valérie; Steiner, Wolfgang; Thuswaldner, Jörg M. (2019). "Geometry, dynamics, and arithmetic of $S$-adic shifts". Annales de l'Institut Fourier. 69 (3): 1347–1409. arXiv:1410.0331. doi:10.5802/aif.3273. 6. Berthé, Valérie; Steiner, Wolfgang; Thuswaldner, Jörg M.; Yassawi, Reem (November 2019). "Recognizability for sequences of morphisms". Ergodic Theory and Dynamical Systems. 39 (11): 2896–2931. arXiv:1705.00167. doi:10.1017/etds.2017.144. ISSN 0143-3857. S2CID 31678325. 7. Berthé, Valérie; Kim, Dong Han (2018). "Some constructions for the higher-dimensional three-distance theorem". Acta Arithmetica. 184 (4): 385–411. arXiv:1806.02721. doi:10.4064/aa171021-30-5. ISSN 0065-1036. S2CID 51808154. 8. Berthé, Valérie; Lhote, Loïck; Vallée, Brigitte (March 2018). "The Brun gcd algorithm in high dimensions is almost always subtractive". Journal of Symbolic Computation. 85: 72–107. doi:10.1016/j.jsc.2017.07.004. 9. Bureau, Société mathématique de France, retrieved 2018-02-10 10. "Valérie Berthé", Women and Men Inspiring Europe Resource-Pool, European Institute for Gender Equality, retrieved 2018-02-10 11. Lothaire, M.
(2005), Applied combinatorics on words, Encyclopedia of Mathematics and Its Applications, vol. 105, A collective work by Jean Berstel, Dominique Perrin, Maxime Crochemore, Eric Laporte, Mehryar Mohri, Nadia Pisanti, Marie-France Sagot, Gesine Reinert, Sophie Schbath, Michael Waterman, Philippe Jacquet, Wojciech Szpankowski, Dominique Poulalhon, Gilles Schaeffer, Roman Kolpakov, Gregory Koucherov, Jean-Paul Allouche and Valérie Berthé, Cambridge: Cambridge University Press, ISBN 0-521-84802-4, Zbl 1133.68067 12. Pytheas Fogg, N. (2002), Berthé, Valérie; Ferenczi, Sébastien; Mauduit, Christian; Siegel, A. (eds.), Substitutions in dynamics, arithmetics and combinatorics, Lecture Notes in Mathematics, vol. 1794, Berlin: Springer-Verlag, ISBN 3-540-44141-7, Zbl 1014.11015 External links • Valérie Berthé publications indexed by Google Scholar Authority control International • ISNI • VIAF National • Norway • France • BnF data • Catalonia • Germany • Israel • Belgium • United States • Japan • Czech Republic • Croatia • Netherlands • Poland Academics • CiNii • DBLP • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID • ResearcherID • zbMATH Other • IdRef
Wikipedia
Vampire number In number theory, a vampire number (or true vampire number) is a composite natural number with an even number of digits, that can be factored into two natural numbers each with half as many digits as the original number, where the two factors contain precisely all the digits of the original number, in any order, counting multiplicity. The two factors cannot both have trailing zeroes. The first vampire number is 1260 = 21 × 60.[1][2] Definition Let $N$ be a natural number with $2k$ digits: $N={n_{2k}}{n_{2k-1}}...{n_{1}}$ Then $N$ is a vampire number if and only if there exist two natural numbers $A$ and $B$, each with $k$ digits: $A={a_{k}}{a_{k-1}}...{a_{1}}$ $B={b_{k}}{b_{k-1}}...{b_{1}}$ such that $A\times B=N$, $a_{1}$ and $b_{1}$ are not both zero, and the $2k$ digits of the concatenation of $A$ and $B$ $({a_{k}}{a_{k-1}}...{a_{2}}{a_{1}}{b_{k}}{b_{k-1}}...{b_{2}}{b_{1}})$ are a permutation of the $2k$ digits of $N$. The two numbers $A$ and $B$ are called the fangs of $N$. Vampire numbers were first described in a 1994 post by Clifford A. Pickover to the Usenet group sci.math,[3] and the article he later wrote was published in chapter 30 of his book Keys to Infinity.[4] Examples Count of vampire numbers of length n:
n = 4: 7
n = 6: 148
n = 8: 3228
n = 10: 108454
n = 12: 4390670
n = 14: 208423682
n = 16: 11039126154
1260 is a vampire number, with 21 and 60 as fangs, since 21 × 60 = 1260 and the digits of the concatenation of the two factors (2160) are a permutation of the digits of the original number (1260). However, 126000 (which can be expressed as 21 × 6000 or 210 × 600) is not a vampire number, since although 126000 = 21 × 6000 and the digits (216000) are a permutation of the original number, the two factors 21 and 6000 do not have the correct number of digits. Furthermore, although 126000 = 210 × 600, both factors 210 and 600 have trailing zeroes. The first few vampire numbers are: 1260 = 21 × 60 1395 = 15 × 93 1435 = 35 × 41 1530 = 30 × 51 1827 = 21 × 87 2187 = 27 × 81 6880 = 80 × 86 102510 = 201 × 510 104260 = 260 × 401 105210 = 210 × 501 The sequence of vampire numbers is: 1260, 1395, 1435, 1530, 1827, 2187, 6880, 102510, 104260, 105210, 105264, 105750, 108135, 110758, 115672, 116725, 117067, 118440, 120600, 123354, 124483, 125248, 125433, 125460, 125500, ... (sequence A014575 in the OEIS) There are many known sequences of infinitely many vampire numbers following a pattern, such as: 1530 = 30 × 51, 150300 = 300 × 501, 15003000 = 3000 × 5001, ... Al Sweigart calculated all the vampire numbers that have at most 10 digits.[5] Multiple fang pairs A vampire number can have multiple distinct pairs of fangs. The first of infinitely many vampire numbers with 2 pairs of fangs: 125460 = 204 × 615 = 246 × 510 The first with 3 pairs of fangs: 13078260 = 1620 × 8073 = 1863 × 7020 = 2070 × 6318 The first with 4 pairs of fangs: 16758243290880 = 1982736 × 8452080 = 2123856 × 7890480 = 2751840 × 6089832 = 2817360 × 5948208 The first with 5 pairs of fangs: 24959017348650 = 2947050 × 8469153 = 2949705 × 8461530 = 4125870 × 6049395 = 4129587 × 6043950 = 4230765 × 5899410 Variants Pseudovampire numbers (disfigurate vampire numbers) are similar to vampire numbers, except that the fangs of an n-digit pseudovampire number need not be of length n/2 digits. Pseudovampire numbers can have an odd number of digits, for example 126 = 6 × 21. More generally, more than two fangs are allowed. In this case, vampire numbers are numbers n which can be factorized using the digits of n. For example, 1395 = 5 × 9 × 31.
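A brute-force sketch of the two-fang definition above (function and variable names are illustrative): for an even-digit number, test each divisor with the right number of digits, reject pairs in which both fangs end in zero, and compare digit multisets.

    def fangs(n):
        """Return all fang pairs (a, b) with a <= b witnessing that n is a vampire number."""
        digits = sorted(str(n))
        d = len(str(n))
        if d % 2 != 0:
            return []
        half = d // 2
        pairs = []
        for a in range(10 ** (half - 1), int(n ** 0.5) + 1):
            if n % a:
                continue
            b = n // a
            if len(str(b)) != half:
                continue
            if a % 10 == 0 and b % 10 == 0:        # the fangs may not both have trailing zeroes
                continue
            if sorted(str(a) + str(b)) == digits:  # the fangs use exactly the digits of n
                pairs.append((a, b))
        return pairs

    print([n for n in range(1000, 10000) if fangs(n)])   # [1260, 1395, 1435, 1530, 1827, 2187, 6880]
    print(fangs(125460))                                 # [(204, 615), (246, 510)], two fang pairs

The same digit-multiset test carries over to the pseudovampire and multi-fang variants just described, with the length restriction on the factors relaxed.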
The sequence of these generalized vampire numbers starts (sequence A020342 in the OEIS): 126, 153, 688, 1206, 1255, 1260, 1395, ... A vampire prime or prime vampire number, as defined by Carlos Rivera in 2002,[6] is a true vampire number whose fangs are its prime factors. The first few vampire primes are: 117067, 124483, 146137, 371893, 536539 As of 2007 the largest known is the square (94892254795 × 10^103294 + 1)^2, found by Jens K. Andersen in September, 2007.[2] A double vampire number is a vampire number which has fangs that are also vampire numbers; an example of such a number is 1047527295416280 = 25198740 × 41570622 = (2940 × 8571) × (5601 × 7422) which is the smallest double vampire number. A Roman numeral vampire number is a vampire number that uses Roman numerals instead of base-10. An example is II × IV = VIII. Other bases Vampire numbers also exist for bases other than base 10. For example, a vampire number in base 12 is 10392BA45768 = 105628 × BA3974, where A means ten and B means eleven. Another example in the same base is a vampire number with 3 fangs, 572164B9A830 = 8752 × 9346 × A0B1. One example with 4 fangs is 3715A6B89420 = 763 × 824 × 905 × B1A. In these examples, all 12 digits are used exactly once. See also • Friedman number References 1. Weisstein, Eric W. "Vampire Numbers". MathWorld. 2. Andersen, Jens K. "Vampire numbers". 3. Pickover's original post describing vampire numbers 4. Pickover, Clifford A. (1995). Keys to Infinity. Wiley. ISBN 0-471-19334-8. 5. Sweigart, Al. "Vampire Numbers Visualized". 6. Rivera, Carlos. "The Prime-Vampire numbers". External links • Sweigart, Al. Vampire Numbers Visualized • Grime, James; Copeland, Ed. "Vampire numbers". Numberphile. Brady Haran. Archived from the original on 2017-10-14. Retrieved 2013-04-08. Classes of natural numbers Powers and related numbers • Achilles • Power of 2 • Power of 3 • Power of 10 • Square • Cube • Fourth power • Fifth power • Sixth power • Seventh power • Eighth power • Perfect power • Powerful • Prime power Of the form a × 2^b ± 1 • Cullen • Double Mersenne • Fermat • Mersenne • Proth • Thabit • Woodall Other polynomial numbers • Hilbert • Idoneal • Leyland • Loeschian • Lucky numbers of Euler Recursively defined numbers • Fibonacci • Jacobsthal • Leonardo • Lucas • Padovan • Pell • Perrin Possessing a specific set of other numbers • Amenable • Congruent • Knödel • Riesel • Sierpiński Expressible via specific sums • Nonhypotenuse • Polite • Practical • Primary pseudoperfect • Ulam • Wolstenholme Figurate numbers 2-dimensional centered • Centered triangular • Centered square • Centered pentagonal • Centered hexagonal • Centered heptagonal • Centered octagonal • Centered nonagonal • Centered decagonal • Star non-centered • Triangular • Square • Square triangular • Pentagonal • Hexagonal • Heptagonal • Octagonal • Nonagonal • Decagonal • Dodecagonal 3-dimensional centered • Centered tetrahedral • Centered cube • Centered octahedral • Centered dodecahedral • Centered icosahedral non-centered • Tetrahedral • Cubic • Octahedral • Dodecahedral • Icosahedral • Stella octangula pyramidal • Square pyramidal 4-dimensional non-centered • Pentatope • Squared triangular • Tesseractic Combinatorial numbers • Bell • Cake • Catalan • Dedekind • Delannoy • Euler • Eulerian • Fuss–Catalan • Lah • Lazy caterer's sequence • Lobb • Motzkin • Narayana • Ordered Bell • Schröder • Schröder–Hipparchus • Stirling first • Stirling second • Telephone number • Wedderburn–Etherington Primes • Wieferich • Wall–Sun–Sun • Wolstenholme prime • Wilson
Pseudoprimes • Carmichael number • Catalan pseudoprime • Elliptic pseudoprime • Euler pseudoprime • Euler–Jacobi pseudoprime • Fermat pseudoprime • Frobenius pseudoprime • Lucas pseudoprime • Lucas–Carmichael number • Somer–Lucas pseudoprime • Strong pseudoprime Arithmetic functions and dynamics Divisor functions • Abundant • Almost perfect • Arithmetic • Betrothed • Colossally abundant • Deficient • Descartes • Hemiperfect • Highly abundant • Highly composite • Hyperperfect • Multiply perfect • Perfect • Practical • Primitive abundant • Quasiperfect • Refactorable • Semiperfect • Sublime • Superabundant • Superior highly composite • Superperfect Prime omega functions • Almost prime • Semiprime Euler's totient function • Highly cototient • Highly totient • Noncototient • Nontotient • Perfect totient • Sparsely totient Aliquot sequences • Amicable • Perfect • Sociable • Untouchable Primorial • Euclid • Fortunate Other prime factor or divisor related numbers • Blum • Cyclic • Erdős–Nicolas • Erdős–Woods • Friendly • Giuga • Harmonic divisor • Jordan–Pólya • Lucas–Carmichael • Pronic • Regular • Rough • Smooth • Sphenic • Størmer • Super-Poulet • Zeisel Numeral system-dependent numbers Arithmetic functions and dynamics • Persistence • Additive • Multiplicative Digit sum • Digit sum • Digital root • Self • Sum-product Digit product • Multiplicative digital root • Sum-product Coding-related • Meertens Other • Dudeney • Factorion • Kaprekar • Kaprekar's constant • Keith • Lychrel • Narcissistic • Perfect digit-to-digit invariant • Perfect digital invariant • Happy P-adic numbers-related • Automorphic • Trimorphic Digit-composition related • Palindromic • Pandigital • Repdigit • Repunit • Self-descriptive • Smarandache–Wellin • Undulating Digit-permutation related • Cyclic • Digit-reassembly • Parasitic • Primeval • Transposable Divisor-related • Equidigital • Extravagant • Frugal • Harshad • Polydivisible • Smith • Vampire Other • Friedman Binary numbers • Evil • Odious • Pernicious Generated via a sieve • Lucky • Prime Sorting related • Pancake number • Sorting number Natural language related • Aronson's sequence • Ban Graphemics related • Strobogrammatic • Mathematics portal
Wikipedia
Van Amringe Mathematical Prize The Department of Mathematics at Columbia University has presented a Professor Van Amringe Mathematical Prize each year (since 1910). The prize was established in 1910 by George G. Dewitt, Class of 1867. It was named after John Howard Van Amringe, who taught mathematics at Columbia (holding a professorship from 1865 to 1910), was the first Dean of Columbia College, and was the first president of the American Mathematical Society (between 1888 and 1890). For many years, the prize was awarded to the freshman or sophomore mathematics student at Columbia College deemed most proficient in the mathematical subjects designated during the year of the award. More recently (since 2003), the prize has been awarded to three Columbia College students majoring in math (a freshman, a sophomore, and a junior) who are deemed proficient in their class in the mathematical subjects designated during the year of the award. Recipients Year Recipients 2023 Rafay Abbas Ashary ('24), Noah Bergam ('25), Hao Cui ('26), Zheheng Xiao ('25) [1] 2022 Kevin Zhang ('25), Carter Teplica ('23), Zheheng Xiao ('25), David Chen ('23) [1] 2021 Elena Gribelyuk ('22), Jacob Weinstein ('22), David Chen ('23), Aiden Sagerman ('24) [1] 2020 Christian Serio ('21), Anda Tenie ('22), Gregory Pershing ('22), Rafay Ashary ('23) [2] 2019 Quang Dao ('20), Myeonhu Kim ('20), Anda Tenie ('22) [3] 2018 Quang Dao ('20), Myeonhu Kim ('20), Matthew Lerner-Brecher ('20) [4] 2017 Quang Dao ('20), Vu-Anh Phung ('19), Noah Miller ('18) [5] 2016 Nguyen Dung ('18), Srikar Varadaraj ('17) [6] 2015 Nguyen Dung ('18), Hardik Shah ('17), Samuel Nicoll ('16) [7] 2014 Hardik Shah ('17), Samuel Nicoll ('16), Yifei Zhao ('15) [8] 2013 Ha-Young Shin ('16), Yifei Zhao ('15), Sicong Zhang ('14) [9] 2012 Yifei Zhao ('15), Sicong Zhang ('14), Sung Chul Park ('13) [10] 2011 Sicong Zhang ('14), Sung Chul Park ('13), Shenjun Xu ('12) [11] 2010 Sung Park ('13), Shenjun Xu ('12), Samuel Beck ('11) [12] 2009 Shenjun Xu ('12), Jiayang Jiang ('11), Atanas Atanasov ('10) [13] 2008 Andra Liana Mihali ('11), Atanas Atanasov ('10), So Eun Park ('09) [14] 2007 Atanas Atanasov ('10), So Eun Park ('09), Dmytro Karabash ('08) [15] 2006 Vedant Misra ('09), Dmytro Karabash ('08) and Mikhail Shklyar ('08), Ilya Vinogradov ('07) [16] 2005 Mikhail Shklyar ('08), Ilya Vinogradov ('07), Florian Sprung ('06) [17] 2004 Ilya Vinogradov ('07) 2003 Mark Xue ('06), Kiril Datchev ('05), Jay Heumann ('05) [18] 2002 Kiril Datchev ('05) [19] 2001 Vladislav Shchogolev ('04) and Eric Patterson ('03) [20] 2000 David Anderson ('02) and Ari Stern ('01) 1990 Ali Yegulalp ('90) 1988 Ali Yegulalp ('90) 1987 Ali Yegulalp ('90) 1979 Sahotra Sarkar ('81) 1976 Chris Tong ('78) 1967 Louis Halle Rowen ('69) 1964 Sylvain Cappell ('66) [21] 1937 Jerome Kurshan ('39) [22] 1922 Melvin David Baller ('24) and Benedict Kurshan ('24) [23] 1921 Wilfred Francis Skeats ('23) [24] 1917 Israel Koral ('20) [25] External links • Columbia College Prizes • Columbia College Prizes and Fellowships • Past Prize Exams Notes 1. "Awards and Honors". 2. Columbia College Today, Congrats, Class of 2020!: Academic Prizes 3. Columbia College Today, Summer 2019: Academic Awards and Prizes 4. Columbia College Today, Graduation 2018: Academic Awards and Prizes 5. Columbia College Today, Graduation 2017: Academic Awards and Prizes Winners 6. Columbia College Today, 2016 Academic Awards and Prizes 7. Columbia College Today, 2015 Academic Awards and Prizes 8. 
Columbia College Today, 2014 Academic Awards and Prizes 9. Columbia College Today, 2013 Academic Awards and Prizes 10. Columbia College Today, 2012 Academic Awards and Prizes 11. Columbia College Today, 2011 Academic Awards and Prizes 12. Columbia College Today, 2010 Academic Awards and Prize 13. Columbia College Today, 2009 Academic Awards and Prizes 14. Columbia College Today, 2008 Academic Awards and Prizes 15. Columbia College Today, 2007 Academic Awards and Prizes 16. Columbia College Today, College Students Honored at Awards and Prizes Ceremony 17. Columbia College Today, College Honors 78 Students at Awards and Prizes Ceremony 18. Columbia College Today, College Honors 78 Students at Awards and Prizes Ceremony 19. Columbia College Today, College Honors 65 Students at Awards and Prizes Ceremony 20. Columbia College Today, Second Annual Awards & Prizes Ceremony Held in Low Rotunda 21. New York Times, Columbia Will Award Degrees to 6,278 Today 22. Obituaries & Guestbooks from The Times 23. New York Times, COLUMBIA AWARDS 1922 PRIZE HONORS 24. New York Times, SIMS'S BOOK WINS COLUMBIA PRIZE 25. New York Times, COLUMBIA ANNOUNCES LIST OF PRIZE WINNERS
Wikipedia
Glen Van Brummelen Glen Robert Van Brummelen (born May 20th, 1965) is a Canadian historian of mathematics specializing in historical applications of mathematics to astronomy. In his words, he is the “best trigonometry historian, and the worst trigonometry historian” (as he is the only one). Glen Van Brummelen Photo of Glen showing off a gift from one of his students. He is president of the Canadian Society for History and Philosophy of Mathematics,[1] and was a co-editor of Mathematics and the Historian's Craft: The Kenneth O. May Lectures (Springer, 2005). Life Van Brummelen earned his PhD degree from Simon Fraser University in 1993,[2] and served as a professor of mathematics at Bennington College from 1999 to 2006. He then transferred to Quest University Canada as a founding faculty member. In 2020, he became the dean of the Faculty of Natural and Applied Sciences at Trinity Western University in Langley, BC.[3] Glen Van Brummelen has published the first major history in English of the origins and early development of trigonometry, The Mathematics of the Heavens and the Earth: The Early History of Trigonometry.[4] His second book, Heavenly Mathematics: The Forgotten Art of Spherical Trigonometry, concerns spherical trigonometry.[5][6] He teaches courses on the history of mathematics and trigonometry at MathPath, specifically Heavenly Mathematics and Spherical Trigonometry. He is also well known for the glensheep and the "glenneagon", a variant on the enneagon (as well as to a lesser extent the glenelephant, and to even lesser extent the glenturtle), a two-dimensional animal he coined at MathPath. Works • The Mathematics of the Heavens and the Earth: The Early History of Trigonometry Princeton; Oxford: Princeton University Press, 2009. ISBN 9780691129730, OCLC 750691811 • Heavenly Mathematics: The Forgotten Art of Spherical Trigonometry Princeton; Oxford: Princeton University Press, 2013. ISBN 9780691175997, OCLC 988234342 • Trigonometry: A Very Short Introduction; Oxford: Princeton University Press, 2020 ISBN 9780198814313, OCLC 1101269106 • The Doctrine of Triangles: The History of Modern Trigonometry Princeton; Oxford: Princeton University Press, 2021 ISBN 978-0691179414, OCLC 1201300540 References 1. CSHPM Council, retrieved 2013-12-26. 2. Glen Van Brummelen at the Mathematics Genealogy Project 3. "Trinity Western University Welcomes New Dean of the Faculty of Natural and Applied Sciences". Trinity Western University. 29 May 2020. Retrieved 8 June 2020. 4. McRae, Alan S. (2009), Review of The Mathematics of the Heavens and the Earth, MR2473955. 5. Steele, John M. (July 2013), "A forgotten discipline (review of Heavenly Mathematics)", Metascience, doi:10.1007/s11016-013-9836-9, S2CID 254793113 6. Funk, Martin (2013), Review of Heavenly Mathematics, MR3012466. External links • Bio at Quest's Website • Homepage at Bennington College • Publication list • Trigonometry Book page Authority control International • ISNI • VIAF National • Norway • France • BnF data • Catalonia • Germany • Israel • Belgium • United States • Netherlands Academics • MathSciNet • Mathematics Genealogy Project • ORCID • zbMATH Other • IdRef
Wikipedia
Hendrik van Heuraet Hendrik van Heuraet (1633, Haarlem - 1660?, Leiden) was a Dutch mathematician also known as Henrici van Heuraet. He is noted as one of the founders of the integral, and author of Epistola de Transmutatione Curvarum Linearum in Rectus [On the Transformation of Curves into Straight Lines] (1659).[1] From 1653 he studied at Leiden University where he interacted with Frans van Schooten, Johannes Hudde, and Christiaan Huygens. In 1658 he and Hudde left for Saumur in France. He returned to Leiden the next year as a physician. After this his trail is lost. Bibliography • van Maanen, Jan A. (1984). "Hendrick van Heureat (1634-1660?): His Life and Mathematical Work". Centaurus. 27 (3): 218–279. Bibcode:1984Cent...27..218V. doi:10.1111/j.1600-0498.1984.tb00781.x. ISSN 0008-8994. References 1. Mathematical Treasures - Van Heuraet's Rectification of Curves, Frank J. Swetz, Victor J. Katz, Mathematical Association of America (maa.org) Accessed: 10-13-2016 External links • Geometria, à Renato Des Cartes Anno 1637 (1683) with Epistola de Transmutatione Curvarum Linearum in Rectus, p. 517, @GoogleBooks. • Hendrik van Heuraet Archived 2014-08-08 at the Wayback Machine at Turnbull WWW server • Text with slightly more info on his life (Dutch) Authority control International • ISNI • VIAF National • France • BnF data • Germany • Italy • United States • Czech Republic • Netherlands Academics • Mathematics Genealogy Project People • Netherlands Other • IdRef
Wikipedia
Van Kampen diagram In the mathematical area of geometric group theory, a Van Kampen diagram (sometimes also called a Lyndon–Van Kampen diagram[1][2][3] ) is a planar diagram used to represent the fact that a particular word in the generators of a group given by a group presentation represents the identity element in that group. History The notion of a Van Kampen diagram was introduced by Egbert van Kampen in 1933.[4] This paper appeared in the same issue of American Journal of Mathematics as another paper of Van Kampen, where he proved what is now known as the Seifert–Van Kampen theorem.[5] The main result of the paper on Van Kampen diagrams, now known as the van Kampen lemma can be deduced from the Seifert–Van Kampen theorem by applying the latter to the presentation complex of a group.[6] However, Van Kampen did not notice it at the time and this fact was only made explicit much later (see, e.g.[7]). Van Kampen diagrams remained an underutilized tool in group theory for about thirty years, until the advent of the small cancellation theory in the 1960s, where Van Kampen diagrams play a central role.[8] Currently Van Kampen diagrams are a standard tool in geometric group theory. They are used, in particular, for the study of isoperimetric functions in groups, and their various generalizations such as isodiametric functions, filling length functions, and so on. Formal definition The definitions and notations below largely follow Lyndon and Schupp.[9] Let $G=\langle A|R\,\rangle $   (†) be a group presentation where all r∈R are cyclically reduced words in the free group F(A). The alphabet A and the set of defining relations R are often assumed to be finite, which corresponds to a finite group presentation, but this assumption is not necessary for the general definition of a Van Kampen diagram. Let R∗ be the symmetrized closure of R, that is, let R∗ be obtained from R by adding all cyclic permutations of elements of R and of their inverses. A Van Kampen diagram over the presentation (†) is a planar finite cell complex ${\mathcal {D}}\,$, given with a specific embedding ${\mathcal {D}}\subseteq \mathbb {R} ^{2}\,$ with the following additional data and satisfying the following additional properties: 1. The complex ${\mathcal {D}}\,$ is connected and simply connected. 2. Each edge (one-cell) of ${\mathcal {D}}\,$ is labelled by an arrow and a letter a∈A. 3. Some vertex (zero-cell) which belongs to the topological boundary of ${\mathcal {D}}\subseteq \mathbb {R} ^{2}\,$ is specified as a base-vertex. 4. For each region (two-cell) of ${\mathcal {D}}$, for every vertex on the boundary cycle of that region, and for each of the two choices of direction (clockwise or counter-clockwise), the label of the boundary cycle of the region read from that vertex and in that direction is a freely reduced word in F(A) that belongs to R∗. Thus the 1-skeleton of ${\mathcal {D}}\,$ is a finite connected planar graph Γ embedded in $\mathbb {R} ^{2}\,$ and the two-cells of ${\mathcal {D}}\,$ are precisely the bounded complementary regions for this graph. By the choice of R∗ Condition 4 is equivalent to requiring that for each region of ${\mathcal {D}}\,$ there is some boundary vertex of that region and some choice of direction (clockwise or counter-clockwise) such that the boundary label of the region read from that vertex and in that direction is freely reduced and belongs to R. 
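Before continuing, here is a minimal computational sketch of the symmetrized closure R∗ entering Condition 4, written for the presentation of the free abelian group of rank two used in the example below (the string encoding, with "A" standing for $a^{-1}$ and "B" for $b^{-1}$, and all names are illustrative):

    def inverse(w):
        """Inverse of a word: reverse it and invert each letter (case swap in this encoding)."""
        return "".join(c.swapcase() for c in reversed(w))

    def cyclic_permutations(w):
        return {w[i:] + w[:i] for i in range(len(w))}

    def symmetrized_closure(relators):
        """R*: all cyclic permutations of the relators and of their inverses."""
        closure = set()
        for r in relators:
            closure |= cyclic_permutations(r)
            closure |= cyclic_permutations(inverse(r))
        return closure

    R_star = symmetrized_closure(["abAB"])      # the single relator a b a^{-1} b^{-1}
    print(sorted(R_star))
    # ['ABab', 'AbaB', 'BAba', 'BabA', 'aBAb', 'abAB', 'bABa', 'baBA']
    print("baBA" in R_star)                     # True: an admissible boundary label of a region

Because the relator is cyclically reduced, every word in R∗ is freely reduced, which is what Condition 4 requires of the boundary labels of the regions.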
A Van Kampen diagram ${\mathcal {D}}\,$ also has the boundary cycle, denoted $\partial {\mathcal {D}}\,$, which is an edge-path in the graph Γ corresponding to going around ${\mathcal {D}}\,$ once in the clockwise direction along the boundary of the unbounded complementary region of Γ, starting and ending at the base-vertex of ${\mathcal {D}}\,$. The label of that boundary cycle is a word w in the alphabet A ∪ A−1 (which is not necessarily freely reduced) that is called the boundary label of ${\mathcal {D}}\,$. Further terminology • A Van Kampen diagram ${\mathcal {D}}\,$ is called a disk diagram if ${\mathcal {D}}\,$ is a topological disk, that is, when every edge of ${\mathcal {D}}\,$ is a boundary edge of some region of ${\mathcal {D}}\,$ and when ${\mathcal {D}}\,$ has no cut-vertices. • A Van Kampen diagram ${\mathcal {D}}\,$ is called non-reduced if there exists a reduction pair in ${\mathcal {D}}\,$, that is a pair of distinct regions of ${\mathcal {D}}\,$ such that their boundary cycles share a common edge and such that their boundary cycles, read starting from that edge, clockwise for one of the regions and counter-clockwise for the other, are equal as words in A ∪ A−1. If no such pair of region exists, ${\mathcal {D}}\,$ is called reduced. • The number of regions (two-cells) of ${\mathcal {D}}\,$ is called the area of ${\mathcal {D}}\,$ denoted ${\rm {Area}}({\mathcal {D}})\,$. In general, a Van Kampen diagram has a "cactus-like" structure where one or more disk-components joined by (possibly degenerate) arcs, see the figure below: Example The following figure shows an example of a Van Kampen diagram for the free abelian group of rank two $G=\langle a,b|aba^{-1}b^{-1}\rangle .$ The boundary label of this diagram is the word $w=b^{-1}b^{3}a^{-1}b^{-2}ab^{-1}ba^{-1}ab^{-1}ba^{-1}a.$ The area of this diagram is equal to 8. Van Kampen lemma A key basic result in the theory is the so-called Van Kampen lemma[9] which states the following: 1. Let ${\mathcal {D}}\,$ be a Van Kampen diagram over the presentation (†) with boundary label w which is a word (not necessarily freely reduced) in the alphabet A ∪ A−1. Then w=1 in G. 2. Let w be a freely reduced word in the alphabet A ∪ A−1 such that w=1 in G. Then there exists a reduced Van Kampen diagram ${\mathcal {D}}\,$ over the presentation (†) whose boundary label is freely reduced and is equal to w. Sketch of the proof First observe that for an element w ∈ F(A) we have w = 1 in G if and only if w belongs to the normal closure of R in F(A) that is, if and only if w can be represented as $w=u_{1}s_{1}u_{1}^{-1}\cdots u_{n}s_{n}u_{n}^{-1}{\text{ in }}F(A),$   (♠) where n ≥ 0 and where si ∈ R∗ for i = 1, ..., n. Part 1 of Van Kampen's lemma is proved by induction on the area of ${\mathcal {D}}\,$. The inductive step consists in "peeling" off one of the boundary regions of ${\mathcal {D}}\,$ to get a Van Kampen diagram ${\mathcal {D}}'\,$ with boundary cycle w and observing that in F(A) we have $w=usu^{-1}w',\,$ where s∈R∗ is the boundary cycle of the region that was removed to get ${\mathcal {D}}'\,$ from ${\mathcal {D}}\,$. The proof of part two of Van Kampen's lemma is more involved. First, it is easy to see that if w is freely reduced and w = 1 in G there exists some Van Kampen diagram ${\mathcal {D}}_{0}\,$ with boundary label w0 such that w = w0 in F(A) (after possibly freely reducing w0). Namely consider a representation of w of the form (♠) above. 
Then take ${\mathcal {D}}_{0}\,$ to be a wedge of n "lollipops", with "stems" labelled by the words ui and with the "candies" (2-cells) labelled by the words si. Then the boundary label of ${\mathcal {D}}_{0}\,$ is a word w0 such that w = w0 in F(A). However, it is possible that the word w0 is not freely reduced. One then starts performing "folding" moves to get a sequence of Van Kampen diagrams ${\mathcal {D}}_{0},{\mathcal {D}}_{1},{\mathcal {D}}_{2},\dots \,$ by making their boundary labels more and more freely reduced and making sure that at each step the boundary label of each diagram in the sequence is equal to w in F(A). The sequence terminates in a finite number of steps with a Van Kampen diagram ${\mathcal {D}}_{k}\,$ whose boundary label is freely reduced and thus equal to w as a word. The diagram ${\mathcal {D}}_{k}\,$ may not be reduced. If that happens, we can remove the reduction pairs from this diagram by a simple surgery operation without affecting the boundary label. Eventually this produces a reduced Van Kampen diagram ${\mathcal {D}}\,$ whose boundary label is freely reduced and equal to w. Strengthened version of Van Kampen's lemma Moreover, the above proof shows that the conclusion of Van Kampen's lemma can be strengthened as follows.[9] Part 1 can be strengthened to say that if ${\mathcal {D}}\,$ is a Van Kampen diagram of area n with boundary label w then there exists a representation (♠) for w as a product in F(A) of exactly n conjugates of elements of R∗. Part 2 can be strengthened to say that if w is freely reduced and admits a representation (♠) as a product in F(A) of n conjugates of elements of R∗ then there exists a reduced Van Kampen diagram with boundary label w and of area at most n. Dehn functions and isoperimetric functions Main article: Dehn function Area of a word representing the identity Let w ∈ F(A) be such that w = 1 in G. Then the area of w, denoted Area(w), is defined as the minimum of the areas of all Van Kampen diagrams with boundary label w (Van Kampen's lemma says that at least one such diagram exists). One can show that the area of w can be equivalently defined as the smallest n≥0 such that there exists a representation (♠) expressing w as a product in F(A) of n conjugates of the defining relators. Isoperimetric functions and Dehn functions A nonnegative monotone nondecreasing function f(n) is said to be an isoperimetric function for the presentation (†) if for every freely reduced word w such that w = 1 in G we have ${\rm {Area}}(w)\leq f(|w|),$ where |w| is the length of the word w. Suppose now that the alphabet A in (†) is finite. Then the Dehn function of (†) is defined as ${\rm {Dehn}}(n)=\max\{{\rm {Area}}(w):w=1{\text{ in }}G,\ |w|\leq n,\ w{\text{ freely reduced}}\}.$ It is easy to see that Dehn(n) is an isoperimetric function for (†) and, moreover, if f(n) is any other isoperimetric function for (†) then Dehn(n) ≤ f(n) for every n ≥ 0. Let w ∈ F(A) be a freely reduced word such that w = 1 in G. A Van Kampen diagram ${\mathcal {D}}\,$ with boundary label w is called minimal if ${\rm {Area}}({\mathcal {D}})={\rm {Area}}(w).$ Minimal Van Kampen diagrams are discrete analogues of minimal surfaces in Riemannian geometry. Generalizations and other applications • There are several generalizations of Van Kampen diagrams where, instead of being planar, connected and simply connected (which means being homotopy equivalent to a disk), the diagram is drawn on, or is homotopy equivalent to, some other surface. 
It turns out that there is a close connection between the geometry of the surface and certain group-theoretical notions. A particularly important one of these is the notion of an annular Van Kampen diagram, which is homotopically equivalent to an annulus. Annular diagrams, also known as conjugacy diagrams, can be used to represent conjugacy in groups given by group presentations.[9] Also, spherical Van Kampen diagrams are related to several versions of group-theoretic asphericity and to Whitehead's asphericity conjecture,[10] Van Kampen diagrams on the torus are related to commuting elements, diagrams on the real projective plane are related to involutions in the group, and diagrams on the Klein bottle are related to elements that are conjugate to their own inverses. • Van Kampen diagrams are central objects in the small cancellation theory developed by Greendlinger, Lyndon and Schupp in the 1960s-1970s.[9][11] Small cancellation theory deals with group presentations where the defining relations have "small overlaps" with each other. This condition is reflected in the geometry of reduced Van Kampen diagrams over small cancellation presentations, forcing certain kinds of non-positively curved or negatively curved behavior. This behavior yields useful information about algebraic and algorithmic properties of small cancellation groups, in particular regarding the word and the conjugacy problems. Small cancellation theory was one of the key precursors of geometric group theory, which emerged as a distinct mathematical area in the late 1980s, and it remains an important part of geometric group theory. • Van Kampen diagrams play a key role in the theory of word-hyperbolic groups introduced by Gromov in 1987.[12] In particular, it turns out that a finitely presented group is word-hyperbolic if and only if it satisfies a linear isoperimetric inequality. Moreover, there is an isoperimetric gap in the possible spectrum of isoperimetric functions for finitely presented groups: for any finitely presented group either it is hyperbolic and satisfies a linear isoperimetric inequality or else the Dehn function is at least quadratic.[13][14] • The study of isoperimetric functions for finitely presented groups has become an important general theme in geometric group theory, where substantial progress has occurred. Much work has gone into constructing groups with "fractional" Dehn functions (that is, with Dehn functions being polynomials of non-integer degree).[15] The work of Rips, Ol'shanskii, Birget and Sapir[16][17] explored the connections between Dehn functions and time complexity functions of Turing machines and showed that an arbitrary "reasonable" time function can be realized (up to appropriate equivalence) as the Dehn function of some finitely presented group. • Various stratified and relativized versions of Van Kampen diagrams have been explored in the subject as well. In particular, a stratified version of small cancellation theory, developed by Ol'shanskii, resulted in constructions of various group-theoretic "monsters", such as the Tarski Monster,[18] and in geometric solutions of the Burnside problem for periodic groups of large exponent.[19][20] Relative versions of Van Kampen diagrams (with respect to a collection of subgroups) were used by Osin to develop an isoperimetric function approach to the theory of relatively hyperbolic groups.[21] See also • Geometric group theory • Presentation of a group • Seifert–Van Kampen theorem Basic references • Alexander Yu. Ol'shanskii. 
Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991. ISBN 0-7923-1394-1 • Roger C. Lyndon and Paul E. Schupp. Combinatorial Group Theory. Springer-Verlag, New York, 2001. "Classics in Mathematics" series, reprint of the 1977 edition. ISBN 978-3-540-41158-1; Ch. V. Small Cancellation Theory. pp. 235–294. Footnotes 1. B. Fine and G. Rosenberger, The Freiheitssatz and its extensions. The mathematical legacy of Wilhelm Magnus: groups, geometry and special functions (Brooklyn, NY, 1992), 213–252, Contemp. Math., 169, Amer. Math. Soc., Providence, RI, 1994 2. I.G. Lysenok, and A.G. Myasnikov, A polynomial bound for solutions of quadratic equations in free groups. Tr. Mat. Inst. Steklova 274 (2011), Algoritmicheskie Voprosy Algebry i Logiki, 148-190; translation in Proc. Steklov Inst. Math. 274 (2011), no. 1, 136–173 3. B. Fine, A. Gaglione, A. Myasnikov, G. Rosenberger, and D. Spellman, The elementary theory of groups. A guide through the proofs of the Tarski conjectures. De Gruyter Expositions in Mathematics, 60. De Gruyter, Berlin, 2014. ISBN 978-3-11-034199-7 4. E. van Kampen. On some lemmas in the theory of groups. American Journal of Mathematics. vol. 55, (1933), pp. 268–273. 5. E. R. van Kampen. On the connection between the fundamental groups of some related spaces. American Journal of Mathematics, vol. 55 (1933), pp. 261–267. 6. Invitations to Geometry and Topology. Oxford Graduate Texts in Mathematics. Oxford, New York: Oxford University Press. 2003. ISBN 9780198507727. 7. Aleksandr Yur'evich Ol'shanskii. Geometry of defining relations in groups. Translated from the 1989 Russian original by Yu. A. Bakhturin. Mathematics and its Applications (Soviet Series), 70. Kluwer Academic Publishers Group, Dordrecht, 1991. ISBN 0-7923-1394-1. 8. Bruce Chandler, and Wilhelm Magnus. The history of combinatorial group theory. A case study in the history of ideas. Studies in the History of Mathematics and Physical Sciences, 9. Springer-Verlag, New York, 1982. ISBN 0-387-90749-1. 9. Roger C. Lyndon and Paul E. Schupp. Combinatorial Group Theory. Springer-Verlag, New York, 2001. "Classics in Mathematics" series, reprint of the 1977 edition. ISBN 978-3-540-41158-1; Ch. V. Small Cancellation Theory. pp. 235–294. 10. Ian M. Chiswell, Donald J. Collins, and Johannes Huebschmann. Aspherical group presentations. Mathematische Zeitschrift, vol. 178 (1981), no. 1, pp. 1–36. 11. Martin Greendlinger. Dehn's algorithm for the word problem. Communications on Pure and Applied Mathematics, vol. 13 (1960), pp. 67–83. 12. M. Gromov. Hyperbolic Groups. Essays in Group Theory (G. M. Gersten, ed.), MSRI Publ. 8, 1987, pp. 75–263; ISBN 0-387-96618-8. 13. Michel Coornaert, Thomas Delzant, Athanase Papadopoulos, Géométrie et théorie des groupes: les groupes hyperboliques de Gromov. Lecture Notes in Mathematics, vol. 1441, Springer-Verlag, Berlin, 1990. ISBN 3-540-52977-2. 14. B. H. Bowditch. A short proof that a subquadratic isoperimetric inequality implies a linear one. Michigan Mathematical Journal, vol. 42 (1995), no. 1, pp. 103–107. 15. M. R. Bridson, Fractional isoperimetric inequalities and subgroup distortion. Journal of the American Mathematical Society, vol. 12 (1999), no. 4, pp. 1103–1118. 16. M. Sapir, J.-C. Birget, E. Rips, Isoperimetric and isodiametric functions of groups. Annals of Mathematics (2), vol. 156 (2002), no. 2, pp. 345–466. 17. J.-C. 
Birget, Aleksandr Yur'evich Ol'shanskii, E. Rips, M. Sapir, Isoperimetric functions of groups and computational complexity of the word problem. Annals of Mathematics (2), vol. 156 (2002), no. 2, pp. 467–518. 18. Ol'sanskii, A. Yu. (1979). Бесконечные группы с циклическими подгруппами [Infinite groups with cyclic subgroups]. Doklady Akademii Nauk SSSR (in Russian). 245 (4): 785–787. 19. A. Yu. Ol'shanskii. On a geometric method in the combinatorial group theory. Proceedings of the International Congress of Mathematicians, Vol. 1, 2 (Warsaw, 1983), pp. 415–424, PWN, Warsaw, 1984. 20. S. V. Ivanov. The free Burnside groups of sufficiently large exponents. International Journal of Algebra and Computation, vol. 4 (1994), no. 1-2. 21. Denis V. Osin. Relatively hyperbolic groups: intrinsic geometry, algebraic properties, and algorithmic problems. Memoirs of the American Mathematical Society 179 (2006), no. 843. External links • Van Kampen diagrams from the files of David A. Jackson
Wikipedia
Frans van Schooten Frans van Schooten Jr., also rendered as Franciscus van Schooten (15 May 1615, Leiden – 29 May 1660, Leiden), was a Dutch mathematician who is best known for popularizing the analytic geometry of René Descartes. Frans van Schooten Born1615 Leiden, Dutch Republic Died29 May 1660 Leiden, Dutch Republic Known forVan Schooten's theorem Scientific career FieldsMathematics Influences • Viète • Descartes • Beaune • Fermat • Hudde • Witt • Heuraet InfluencedChristiaan Huygens Life Van Schooten's father, Frans van Schooten Senior, was a professor of mathematics at the University of Leiden, having Christiaan Huygens, Johann van Waveren Hudde, and René de Sluze as students. Van Schooten met Descartes in 1632 and read his Géométrie (an appendix to his Discours de la méthode) while it was still unpublished. Finding it hard to understand, he went to France to study the works of other important mathematicians of his time, such as François Viète and Pierre de Fermat. When Frans van Schooten returned to his home in Leiden in 1646, he inherited his father's position and one of his most important pupils, Huygens. The pendant marriage portraits of him and his wife Margrieta Wijnants were painted by Rembrandt and are kept in the National Gallery of Art:[1] • Portrait of a Gentleman with a Tall Hat and Gloves • Portrait of a Lady with an Ostrich-Feather Fan Work Van Schooten's 1649 Latin translation of and commentary on Descartes' Géométrie was valuable in that it made the work comprehensible to the broader mathematical community, and thus was responsible for the spread of analytic geometry to the world. Over the next decade he enlisted the aid of other mathematicians of the time (de Beaune, Hudde, Heuraet, and de Witt) and expanded the commentaries to two volumes, published in 1659 and 1661. This edition and its extensive commentaries were far more influential than the 1649 edition. It was this edition that Gottfried Leibniz and Isaac Newton knew. Van Schooten was one of the first to suggest, in exercises published in 1657, that these ideas be extended to three-dimensional space. Van Schooten's efforts also made Leiden the centre of the mathematical community for a short period in the middle of the seventeenth century. In elementary geometry, Van Schooten's theorem is named after him. References 1. Discovery of portraits of Leiden professor and his wife in NRC, 6 November 2018 • Some Contemporaries of Descartes, Fermat, Pascal and Huygens: Van Schooten, based on W. W. Rouse Ball's A Short Account of the History of Mathematics (4th edition, 1908) External links • Mathematische Oeffeningen van Frans van Schooten (in Dutch) • Biografisch Woordenboek van Nederlandse Wiskundigen: Frans van Schooten (in Dutch) • Frans van Schooten, and his Ruler Constructions at Convergence • O'Connor, John J.; Robertson, Edmund F., "Frans van Schooten", MacTutor History of Mathematics Archive, University of St Andrews • An e-textbook developed from Frans van Schooten 1646 by dbook Authority control International • FAST • ISNI • VIAF National • Norway • Spain • France • BnF data • Germany • Italy • Israel • Belgium • United States • Sweden • Czech Republic • Australia • Croatia • Netherlands • Poland • Portugal • Vatican Academics • CiNii • MathSciNet • Mathematics Genealogy Project • zbMATH Artists • Scientific illustrators • RKD Artists People • Netherlands • Deutsche Biographie • Trove Other • IdRef
Wikipedia
Heine–Stieltjes polynomials For the orthogonal polynomials, see Stieltjes-Wigert polynomial. For the polynomials associated to a family of orthogonal polynomials, see Stieltjes polynomials. In mathematics, the Heine–Stieltjes polynomials or Stieltjes polynomials, introduced by T. J. Stieltjes (1885), are polynomial solutions of a second-order Fuchsian equation, a differential equation all of whose singularities are regular. The Fuchsian equation has the form ${\frac {d^{2}S}{dz^{2}}}+\left(\sum _{j=1}^{N}{\frac {\gamma _{j}}{z-a_{j}}}\right){\frac {dS}{dz}}+{\frac {V(z)}{\prod _{j=1}^{N}(z-a_{j})}}S=0$ for some polynomial V(z) of degree at most N − 2, and if this has a polynomial solution S then V is called a Van Vleck polynomial (after Edward Burr Van Vleck) and S is called a Heine–Stieltjes polynomial. Heun polynomials are the special cases of Stieltjes polynomials when the differential equation has four singular points. References • Marden, Morris (1931), "On Stieltjes Polynomials", Transactions of the American Mathematical Society, Providence, R.I.: American Mathematical Society, 33 (4): 934–944, doi:10.2307/1989516, ISSN 0002-9947, JSTOR 1989516 • Sleeman, B. D.; Kuznetzov, V. B. (2010), "Stieltjes Polynomials", in Olver, Frank W. J.; Lozier, Daniel M.; Boisvert, Ronald F.; Clark, Charles W. (eds.), NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0-521-19225-5, MR 2723248. • Stieltjes, T. J. (1885), "Sur certains polynômes qui vérifient une équation différentielle linéaire du second ordre et sur la theorie des fonctions de Lamé", Acta Mathematica, 6 (1): 321–326, doi:10.1007/BF02400421
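To illustrate the definition above, here is a small symbolic check of an example constructed for this purpose (it is not taken from the references): for N = 2 with singular points 0 and 1 and exponents γ1 = γ2 = 1/2, the constant Van Vleck polynomial V = −1 admits the degree-one Heine–Stieltjes polynomial S(z) = z − 1/2.

```python
import sympy as sp

z = sp.symbols('z')
a = [sp.Integer(0), sp.Integer(1)]                 # the N = 2 singular points (assumed example)
gamma = [sp.Rational(1, 2), sp.Rational(1, 2)]     # the residues gamma_j (assumed example)

V = sp.Integer(-1)                 # candidate Van Vleck polynomial, degree <= N - 2 = 0
S = z - sp.Rational(1, 2)          # candidate Heine-Stieltjes polynomial

lhs = (sp.diff(S, z, 2)
       + sum(g / (z - aj) for g, aj in zip(gamma, a)) * sp.diff(S, z)
       + V / ((z - a[0]) * (z - a[1])) * S)
print(sp.simplify(lhs))            # 0, so S solves the Fuchsian equation with this V
```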
Wikipedia
Adriaan van Wijngaarden Adriaan "Aad" van Wijngaarden (2 November 1916 – 7 February 1987) was a Dutch mathematician and computer scientist. Trained as a mechanical engineer, Van Wijngaarden emphasized and promoted the mathematical aspects of computing, first in numerical analysis, then in programming languages and finally in design principles of such languages. Adriaan van Wijngaarden Born(1916-11-02)2 November 1916 Rotterdam, Netherlands Died7 February 1987(1987-02-07) (aged 70) Amstelveen, Netherlands CitizenshipNetherlands Alma materDelft University of Technology (1939) Known forALGOL CWI IFIP Van Wijngaarden grammar AwardsIEEE Computer Pioneer Award (1986) Scientific career FieldsNumerical mathematics Computer science InstitutionsUniversity of Amsterdam Mathematisch Centrum in Amsterdam Doctoral advisorCornelis Benjamin Biezeno Doctoral studentsEdsger W. Dijkstra Peter van Emde Boas Jaco de Bakker Reinder van de Riet Guus Zoutendijk Maarten van Emden Signature Biography Van Wijngaarden's university education was in mechanical engineering, for which he received a degree from Delft University of Technology[1] in 1939. He then studied for a doctorate in hydrodynamics, but abandoned the field. He joined the Nationaal Luchtvaartlaboratorium in 1945 and went with a group to England the next year to learn about new technologies that had been developed there during World War II. Van Wijngaarden was intrigued by the new idea of automatic computing. On 1 January 1947, he became the head of the Computing Department of the brand-new Centrum Wiskunde & Informatica (CWI), which was at the time known as the Mathematisch Centrum (MC), in Amsterdam.[1] He then made further visits to England and the United States, gathering ideas for the construction of the first Dutch computer, the ARRA, an electromechanical device first demonstrated in 1952. In that same year, van Wijngaarden hired Edsger W. Dijkstra, and they worked on software for the ARRA. In 1958, while visiting Edinburgh, Scotland, Van Wijngaarden was seriously injured in an automobile accident in which his wife was killed. After he recovered, he focused more on programming language research. The following year, he became a member of the Royal Netherlands Academy of Arts and Sciences.[2] In 1961, he became the director of the Mathematisch Centrum in Amsterdam and remained in that post for the next twenty years. He was one of the designers of the original ALGOL language, and later ALGOL 68,[3] for which he developed a two-level type of formal grammar that became known as a Van Wijngaarden grammar. In 1962, he became involved with developing international standards in programming and informatics, as a member of the International Federation for Information Processing (IFIP) IFIP Working Group 2.1 on Algorithmic Languages and Calculi,[4] which specified, maintains, and supports the programming languages ALGOL 60 and ALGOL 68.[5] Van Wijngaarden Awards The Van Wijngaarden Awards are named in his honor and are awarded every five years, starting from the 60th anniversary of the Centrum Wiskunde & Informatica in 2006. The physical award consists of a bronze sculpture. • 2006: Computer scientist Nancy Lynch and mathematician-magician Persi Diaconis.[6] • 2011: Computer scientist Éva Tardos and numerical mathematician John C. 
Butcher.[7] • 2016: Computer scientist Xavier Leroy and statistician Sara van de Geer.[8] • 2021: Computer scientist Marta Kwiatkowska and statistician Susan Murphy.[9] See also • List of pioneers in computer science • List of computer science awards References 1. Verrijn-Stuart, Alex (1995). "IFIP 36 years Obituaries: Prof. Adriaan van WIJNGAARDEN (1916–1987)". Retrieved 2020-10-11. 2. "Adriaan van Wijngaarden (1916 - 1987)". Royal Netherlands Academy of Arts and Sciences. Retrieved 2015-07-20. 3. van Wijngaarden, Adriaan; Mailloux, Barry James; Peck, John Edward Lancelot; Koster, Cornelis Hermanus Antonius; Sintzoff, Michel [in French]; Lindsey, Charles Hodgson; Meertens, Lambert Guillaume Louis Théodore; Fisker, Richard G., eds. (1976). Revised Report on the Algorithmic Language ALGOL 68 (PDF). Springer-Verlag. ISBN 978-0-387-07592-1. OCLC 1991170. Archived (PDF) from the original on 2019-04-19. Retrieved 2019-05-11. 4. Jeuring, Johan; Meertens, Lambert; Guttmann, Walter (2016-08-17). "Profile of IFIP Working Group 2.1". Foswiki. Retrieved 2020-09-11. 5. Swierstra, Doaitse; Gibbons, Jeremy; Meertens, Lambert (2011-03-02). "ScopeEtc: IFIP21: Foswiki". Foswiki. Retrieved 2020-09-11. 6. "First Van Wijngaarden Awards for Lynch and Diaconis" (Press release). Centrum Wiskunde & Informatica. 2006-02-10. Archived from the original on 2012-07-28. Retrieved 2009-10-11. 7. "Van Wijngaarden Award 2011 for Éva Tardos and John Butcher" (Press release). Centrum Wiskunde & Informatica. 2011-02-10. Archived from the original on 2011-02-21. Retrieved 2011-02-12. 8. CWI soiree & Van Wijngaarden Award Ceremony, Centrum Wiskunde & Informatica, 2016-09-01, archived from the original on 2016-09-25, retrieved 2016-09-01 9. Marta kwiatkowska and susan murphy win van wijngaarden awards 2021 for preventing software faults and for improving decision making in health, Centrum Wiskunde & Informatica, 2021-09-20, retrieved 2021-12-07 External links • Adriaan van Wijngaarden at DBLP Bibliography Server • Rekenmeisjes en rekentuig door Gerard Alberts. Pythagoras. • Adriaan van Wijngaarden (1916-1987). Biografisch Woordenboek van Nederlandse Wiskundigen. • Aad van Wijngaarden’s 100th Birthday ALGOL programming Implementations Technical standards • ALGOL 58 • ALGOL 60 • ALGOL 68 Dialects • ABC ALGOL • ALCOR • ALGO • ALGOL 68C • ALGOL 68-R • ALGOL 68RS (ELLA) • ALGOL 68S • ALGOL N • ALGOL W • ALGOL X • Atlas Autocode (Edinburgh IMP) • Burroughs ALGOL • CORAL 66 • Dartmouth ALGOL 30 • DASK ALGOL • DG/L • Elliott ALGOL • Executive Systems Problem Oriented Language (ESPOL) → New Executive Programming Language (NEWP) • FLACC • IMP • JOVIAL • Kidsgrove Algol • MAD • Mary • NELIAC • RTL/2 • S-algol, PS-algol, Napier88 • Simula • Small Machine ALGOL Like Language (SMALL) • SMIL ALGOL Formalisms • Jensen's device • Van Wijngaarden grammar Community Organizations Professional associations • ALCOR Group • Association for Computing Machinery (ACM) • BSI Group • Euro-Asian Council for Standardization, Metrology and Certification (EASC) • International Federation for Information Processing (IFIP) IFIP Working Group 2.1 • Society of Applied Mathematics and Mechanics (GAMM) Business • Burroughs Corporation • Elliott Brothers • Regnecentralen Education • Case Institute of Technology • University of Edinburgh • University of St Andrews • Manchester University • Massachusetts Institute of Technology (MIT) Government • Royal Radar Establishment (RRE) People ALGOL 58 • John Backus • Friedrich L. 
Bauer • Hermann Bottenbruch • Charles Katz • Alan Perlis • Heinz Rutishauser • Klaus Samelson • Joseph Henry Wegstein MAD • Bruce Arden • Bernard Galler • Robert M. Graham ALGOL 60 • Backus^ • Roland Carl Backhouse • Bauer^ • Richard Bird • Stephen R. Bourne • Edsger W. Dijkstra • Andrey Ershov • Robert W. Floyd • Jeremy Gibbons • Julien Green • David Gries • Eric Hehner • Tony Hoare • Jørn Jensen • Katz^ • Peter Landin • Tom Maibaum • Conor McBride • John McCarthy • Carroll Morgan • Peter Naur • Maurice Nivat • John E. L. Peck • Perlis^ • Brian Randell • Rutishauser^ • Samelson^ • Jacob T. Schwartz • Micha Sharir • David Turner • Bernard Vauquois • Eiiti Wada • Wegstein^ • Adriaan van Wijngaarden • Mike Woodger Simula • Ole-Johan Dahl • Kristen Nygaard ALGOL 68 • Bauer^ • Susan G. Bond • Bourne^ • Robert Dewar • Dijkstra^ • Gerhard Goos • Michael Guy • Hoare^ • Cornelis H. A. Koster • Peter Landin • Charles H. Lindsey • Barry J. Mailloux • McCarthy^ • Lambert Meertens • Naur^ • Peck^ • Willem van der Poel • Randell^ • Douglas T. Ross • Samelson^ • Michel Sintzoff • van Wijngaarden^ • Niklaus Wirth • Woodger^ • Philip Woodward • Nobuo Yoneda • Hal Abelson • John Barnes • Tony Brooker • Ron Morrison • Peter O'Hearn • John C. Reynolds • ALGOL Bulletin Comparison • ALGOL 58 influence on ALGOL 60 • ALGOL 68 to other languages • ALGOL 68 to C++ • ^ = full name and link in prior ALGOL version above Category: ALGOL Category: ALGOL 60 Authority control International • ISNI • VIAF National • Germany • Israel • 2 • United States • Czech Republic • Netherlands • Poland Academics • Association for Computing Machinery • DBLP • MathSciNet • Mathematics Genealogy Project • zbMATH People • Netherlands • Deutsche Biographie • Trove Other • IdRef
Wikipedia
Van der Corput lemma (harmonic analysis) In mathematics, in the field of harmonic analysis, the van der Corput lemma is an estimate for oscillatory integrals named after the Dutch mathematician J. G. van der Corput. The following result is stated by E. Stein:[1] Suppose that a real-valued function $\phi (x)$ is smooth in an open interval $(a,b)$, and that $|\phi ^{(k)}(x)|\geq 1$ for all $x\in (a,b)$. Assume that either $k\geq 2$, or that $k=1$ and $\phi '(x)$ is monotone for $x\in (a,b)$. Then there is a constant $c_{k}$, which does not depend on $\phi $, such that ${\bigg |}\int _{a}^{b}e^{i\lambda \phi (x)}\,dx{\bigg |}\leq c_{k}|\lambda |^{-1/k}$ for any $\lambda \in \mathbb {R} $. Sublevel set estimates The van der Corput lemma is closely related to the sublevel set estimates,[2] which give the upper bound on the measure of the set where a function takes values not larger than $\epsilon $. Suppose that a real-valued function $\phi (x)$ is smooth on a finite or infinite interval $I\subset \mathbb {R} $, and that $|\phi ^{(k)}(x)|\geq 1$ for all $x\in I$. There is a constant $c_{k}$, which does not depend on $\phi $, such that for any $\epsilon \geq 0$ the measure of the sublevel set $\{x\in I:|\phi (x)|\leq \epsilon \}$ is bounded by $c_{k}\epsilon ^{1/k}$. References 1. Elias Stein, Harmonic Analysis: Real-variable Methods, Orthogonality and Oscillatory Integrals. Princeton University Press, 1993. ISBN 0-691-03216-5 2. M. Christ, Hilbert transforms along curves, Ann. of Math. 122 (1985), 575–596
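A quick numerical sanity check of the k = 2 case of the lemma above, with the phase φ(x) = x² on (0, 1) chosen here purely for illustration (so |φ''| = 2 ≥ 1): the rescaled integrals |∫ e^{iλφ(x)} dx| · λ^{1/2} should stay bounded as λ grows.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400001)
for lam in [10.0, 100.0, 1000.0, 10000.0]:
    vals = np.exp(1j * lam * x**2)                                  # e^{i lambda phi(x)}, phi(x) = x^2
    integral = np.sum((vals[1:] + vals[:-1]) * np.diff(x)) / 2.0    # trapezoid rule
    print(f"lambda = {lam:8.0f}   |I| = {abs(integral):.5f}   "
          f"|I| * lambda^(1/2) = {abs(integral) * lam**0.5:.5f}")   # last column stays bounded
```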
Wikipedia
Vandermonde matrix In linear algebra, a Vandermonde matrix, named after Alexandre-Théophile Vandermonde, is a matrix with the terms of a geometric progression in each row: an $(m+1)\times (n+1)$ matrix $V=V(x_{0},x_{1},\cdots ,x_{m})={\begin{bmatrix}1&x_{0}&x_{0}^{2}&\dots &x_{0}^{n}\\1&x_{1}&x_{1}^{2}&\dots &x_{1}^{n}\\1&x_{2}&x_{2}^{2}&\dots &x_{2}^{n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{m}&x_{m}^{2}&\dots &x_{m}^{n}\end{bmatrix}}$ with entries $V_{i,j}=x_{i}^{j}$, the jth power of the number $x_{i}$, for all zero-based indices $i$ and $j$.[1] Most authors define the Vandermonde matrix as the transpose of the above matrix.[2][3] The determinant of a square Vandermonde matrix (when $n=m$) is called a Vandermonde determinant or Vandermonde polynomial. Its value is: $\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$ This is non-zero if and only if all $x_{i}$ are distinct (no two are equal), making the Vandermonde matrix invertible. Applications The polynomial interpolation problem is to find a polynomial $p(x)=a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{n}x^{n}$ which satisfies $p(x_{0})=y_{0},\ldots ,p(x_{m})=y_{m}$ for given data points $(x_{0},y_{0}),\ldots ,(x_{m},y_{m})$. This problem can be reformulated in terms of linear algebra by means of the Vandermonde matrix, as follows. $V$ computes the values of $p(x)$ at the points $x=x_{0},\ x_{1},\dots ,\ x_{m}$ via a matrix multiplication $Va=y$, where $a=(a_{0},\ldots ,a_{n})$ is the vector of coefficients and $y=(y_{0},\ldots ,y_{m})=(p(x_{0}),\ldots ,p(x_{m}))$ is the vector of values (both written as column vectors): ${\begin{bmatrix}1&x_{0}&x_{0}^{2}&\dots &x_{0}^{n}\\1&x_{1}&x_{1}^{2}&\dots &x_{1}^{n}\\1&x_{2}&x_{2}^{2}&\dots &x_{2}^{n}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{m}&x_{m}^{2}&\dots &x_{m}^{n}\end{bmatrix}}\cdot {\begin{bmatrix}a_{0}\\a_{1}\\\vdots \\a_{n}\end{bmatrix}}={\begin{bmatrix}p(x_{0})\\p(x_{1})\\\vdots \\p(x_{m})\end{bmatrix}}.$ If $n=m$ and $x_{0},\dots ,\ x_{n}$ are distinct, then V is a square matrix with non-zero determinant, i.e. an invertible matrix. Thus, given V and y, one can find the required $p(x)$ by solving for its coefficients $a$ in the equation $Va=y$:[4] $a=V^{-1}y$. That is, the map from coefficients to values of polynomials is a bijective linear mapping with matrix V, and the interpolation problem has a unique solution. This result is called the unisolvence theorem, and is a special case of the Chinese remainder theorem for polynomials. In statistics, the equation $Va=y$ means that the Vandermonde matrix is the design matrix of polynomial regression. In numerical analysis, solving the equation $Va=y$ naïvely by Gaussian elimination results in an algorithm with time complexity O(n3). Exploiting the structure of the Vandermonde matrix, one can use Newton's divided differences method[5] (or the Lagrange interpolation formula[6][7]) to solve the equation in O(n2) time, which also gives the UL factorization of $V^{-1}$. The resulting algorithm produces extremely accurate solutions, even if $V$ is ill-conditioned.[2] (See polynomial interpolation.) The Vandermonde determinant is used in the representation theory of the symmetric group.[8] When the values $x_{i}$ belong to a finite field, the Vandermonde determinant is also called the Moore determinant, and has properties which are important in the theory of BCH codes and Reed–Solomon error correction codes. 
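The interpolation equation Va = y described above is easy to try numerically. The sketch below (the sample nodes and values are made up for illustration) builds V with numpy's increasing-power convention, solves for the coefficients, and checks the determinant against the product formula.

```python
import numpy as np
from itertools import combinations

x = np.array([0.0, 1.0, 2.0, 3.0])        # distinct interpolation nodes (assumed data)
y = np.array([1.0, 3.0, 2.0, 5.0])        # prescribed values p(x_i)      (assumed data)

V = np.vander(x, increasing=True)         # rows (1, x_i, x_i^2, x_i^3)
a = np.linalg.solve(V, y)                 # coefficients a_0, ..., a_n of p

print(np.allclose(np.polyval(a[::-1], x), y))         # True: p interpolates the data
print(np.linalg.det(V),                                # determinant of V ...
      np.prod([x[j] - x[i]                             # ... equals the product of differences
               for i, j in combinations(range(len(x)), 2)]))
```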
The discrete Fourier transform is defined by a specific Vandermonde matrix, the DFT matrix, where the $x_{i}$ are chosen to be nth roots of unity. The Fast Fourier transform computes the product of this matrix with a vector in O(n log2n) time.[9] In the physical theory of the quantum Hall effect, the Vandermonde determinant shows that the Laughlin wavefunction with filling factor 1 is equal to a Slater determinant. This is no longer true for filling factors different from 1 in the fractional quantum Hall effect. In the geometry of polyhedra, the Vandermonde matrix gives the normalized volume of arbitrary $k$-faces of cyclic polytopes. Specifically, if $F=C_{d}(t_{i_{1}},\dots ,t_{i_{k+1}})$ is a $k$-face of the cyclic polytope $C_{d}(T)\subset \mathbb {R} ^{d}$ corresponding to $T=\{t_{1}<\cdots <t_{N}\}\subset \mathbb {R} $, then $\mathrm {nvol} (F)={\frac {1}{k!}}\prod _{1\leq m<n\leq k+1}{(t_{i_{n}}-t_{i_{m}})}.$ Determinant The determinant of a square Vandermonde matrix is called a Vandermonde polynomial or Vandermonde determinant. Its value is the polynomial $\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i})$ which is non-zero if and only if all $x_{i}$ are distinct. The Vandermonde determinant was formerly sometimes called the discriminant, but in current terminology the discriminant of a polynomial $p(x)=(x-x_{0})\cdots (x-x_{n})$ is the square of the Vandermonde determinant of the roots $x_{i}$. The Vandermonde determinant is an alternating form in the $x_{i}$, meaning that exchanging two $x_{i}$ changes the sign, and $\det(V)$ thus depends on order for the $x_{i}$. By contrast, the discriminant $\det(V)^{2}$ does not depend on any order, so that Galois theory implies that the discriminant is a polynomial function of the coefficients of $p(x)$. The determinant formula is proved below in three ways. The first uses polynomial properties, especially the unique factorization property of multivariate polynomials. Although conceptually simple, it involves non-elementary concepts of abstract algebra. The second proof is based on the linear algebra concepts of change of basis in a vector space and the determinant of a linear map. In the process, it computes the LU decomposition of the Vandermonde matrix. The third proof is more elementary but more complicated, using only elementary row and column operations. First proof: polynomial properties By the Leibniz formula, $\det(V)$ is a polynomial in the $x_{i}$, with integer coefficients. All entries of the $i$th column (zero-based) have total degree $i$. Thus, again by the Leibniz formula, all terms of the determinant have total degree $0+1+2+\cdots +n={\frac {n(n+1)}{2}};$ (that is the determinant is a homogeneous polynomial of this degree). If, for $i\neq j$, one substitutes $x_{i}$ for $x_{j}$, one gets a matrix with two equal rows, which has thus a zero determinant. Thus, by the factor theorem, $x_{j}-x_{i}$ is a divisor of $\det(V)$. By the unique factorization property of multivariate polynomials, the product of all $x_{j}-x_{i}$ divides $\det(V)$, that is $\det(V)=Q\prod _{1\leq i<j\leq n}(x_{j}-x_{i}),$ where $Q$ is a polynomial. As the product of all $x_{j}-x_{i}$ and $\det(V)$ have the same degree $n(n+1)/2$, the polynomial $Q$ is, in fact, a constant. 
This constant is one, because the product of the diagonal entries of $V$ is $x_{1}x_{2}^{2}\cdots x_{n}^{n}$, which is also the monomial that is obtained by taking the first term of all factors in $\textstyle \prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$ This proves that $\det(V)=\prod _{0\leq i<j\leq n}(x_{j}-x_{i}).$ Second proof: linear maps Let F be a field containing all $x_{i},$ and $P_{n}$ the F vector space of the polynomials of degree less than or equal to n with coefficients in F. Let $\varphi :P_{n}\to F^{n+1}$ be the linear map defined by $p(x)\mapsto (p(x_{0}),p(x_{1}),\ldots ,p(x_{n}))$. The Vandermonde matrix is the matrix of $\varphi $ with respect to the canonical bases of $P_{n}$ and $F^{n+1}.$ Changing the basis of $P_{n}$ amounts to multiplying the Vandermonde matrix by a change-of-basis matrix M (from the right). This does not change the determinant, if the determinant of M is 1. The polynomials $1$, $x-x_{0}$, $(x-x_{0})(x-x_{1})$, …, $(x-x_{0})(x-x_{1})\cdots (x-x_{n-1})$ are monic of respective degrees 0, 1, …, n. Their matrix on the monomial basis is an upper-triangular matrix U (if the monomials are ordered in increasing degrees), with all diagonal entries equal to one. This matrix is thus a change-of-basis matrix of determinant one. The matrix of $\varphi $ on this new basis is ${\begin{bmatrix}1&0&0&\ldots &0\\1&x_{1}-x_{0}&0&\ldots &0\\1&x_{2}-x_{0}&(x_{2}-x_{0})(x_{2}-x_{1})&\ldots &0\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}-x_{0}&(x_{n}-x_{0})(x_{n}-x_{1})&\ldots &(x_{n}-x_{0})(x_{n}-x_{1})\cdots (x_{n}-x_{n-1})\end{bmatrix}}$. Thus Vandermonde determinant equals the determinant of this matrix, which is the product of its diagonal entries. This proves the desired equality. Moreover, one gets the LU decomposition of V as $V=LU^{-1}$. Third proof: row and column operations This third proof is based on the fact that if one adds to a column of a matrix the product by a scalar of another column then the determinant remains unchanged. So, by subtracting to each column – except the first one – the preceding column multiplied by $x_{0}$, the determinant is not changed. (These subtractions must be done by starting from last columns, for subtracting a column that has not yet been changed). This gives the matrix ${\begin{bmatrix}1&0&0&0&\cdots &0\\1&x_{1}-x_{0}&x_{1}(x_{1}-x_{0})&x_{1}^{2}(x_{1}-x_{0})&\cdots &x_{1}^{n-1}(x_{1}-x_{0})\\1&x_{2}-x_{0}&x_{2}(x_{2}-x_{0})&x_{2}^{2}(x_{2}-x_{0})&\cdots &x_{2}^{n-1}(x_{2}-x_{0})\\\vdots &\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}-x_{0}&x_{n}(x_{n}-x_{0})&x_{n}^{2}(x_{n}-x_{0})&\cdots &x_{n}^{n-1}(x_{n}-x_{0})\\\end{bmatrix}}$ Applying the Laplace expansion formula along the first row, we obtain $\det(V)=\det(B)$, with $B={\begin{bmatrix}x_{1}-x_{0}&x_{1}(x_{1}-x_{0})&x_{1}^{2}(x_{1}-x_{0})&\cdots &x_{1}^{n-1}(x_{1}-x_{0})\\x_{2}-x_{0}&x_{2}(x_{2}-x_{0})&x_{2}^{2}(x_{2}-x_{0})&\cdots &x_{2}^{n-1}(x_{2}-x_{0})\\\vdots &\vdots &\vdots &\ddots &\vdots \\x_{n}-x_{0}&x_{n}(x_{n}-x_{0})&x_{n}^{2}(x_{n}-x_{0})&\cdots &x_{n}^{n-1}(x_{n}-x_{0})\\\end{bmatrix}}$ As all the entries in the $i$-th row of $B$ have a factor of $x_{i+1}-x_{0}$, one can take these factors out and obtain $\det(V)=(x_{1}-x_{0})(x_{2}-x_{0})\cdots (x_{n}-x_{0}){\begin{vmatrix}1&x_{1}&x_{1}^{2}&\cdots &x_{1}^{n-1}\\1&x_{2}&x_{2}^{2}&\cdots &x_{2}^{n-1}\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&x_{n}&x_{n}^{2}&\cdots &x_{n}^{n-1}\\\end{vmatrix}}=\prod _{1<i\leq n}(x_{i}-x_{0})\det(V')$, where $V'$ is a Vandermonde matrix in $x_{1},\ldots ,x_{n}$. 
Iterating this process on this smaller Vandermonde matrix, one eventually gets the desired expression of $\det(V)$ as the product of all $x_{j}-x_{i}$ such that $i<j$. Rank of the Vandermonde matrix • An m × n rectangular Vandermonde matrix such that m ≤ n has rank m if and only if all xi are distinct. • An m × n rectangular Vandermonde matrix such that m ≥ n has rank n if and only if there are n of the xi that are distinct. • A square Vandermonde matrix is invertible if and only if the xi are distinct. An explicit formula for the inverse is known (see below).[10][3][11] Inverse Vandermonde matrix As explained above in Applications, the polynomial interpolation problem for $p(x)=a_{0}+a_{1}x+a_{2}x^{2}+\dots +a_{n}x^{n}$satisfying $p(x_{0})=y_{0},\ldots ,p(x_{n})=y_{n}$ is equivalent to the matrix equation $Va=y$, which has the unique solution $a=V^{-1}y$. There are other known formulas which solve the interpolation problem, which must be equivalent to the unique $a=V^{-1}y$, so they must give explicit formulas for the inverse matrix $V^{-1}$. In particular, Lagrange interpolation shows that the columns of the inverse matrix $V^{-1}={\begin{bmatrix}1&x_{0}&\dots &x_{0}^{n}\\\vdots &\vdots &&\vdots \\[.5em]1&x_{n}&\dots &x_{n}^{n}\end{bmatrix}}^{-1}=L={\begin{bmatrix}L_{00}&\!\!\!\!\cdots \!\!\!\!&L_{0n}\\\vdots &&\vdots \\L_{n0}&\!\!\!\!\cdots \!\!\!\!&L_{nn}\end{bmatrix}}$ are the coefficients of the Lagrange polynomials $L_{j}(x)=L_{0j}+L_{1j}x+\cdots +L_{nj}x^{n}=\prod _{0\leq i\leq n \atop i\neq j}{\frac {x-x_{i}}{x_{j}-x_{i}}}={\frac {f(x)}{(x-x_{j})\,f'(x_{j})}}\,,$ where $f(x)=(x-x_{0})\cdots (x-x_{n})$. This is easily demonstrated: the polynomials clearly satisfy $L_{j}(x_{i})=0$ for $i\neq j$ while $L_{j}(x_{j})=1$, so we may compute the product $VL=[L_{j}(x_{i})]_{i,j=0}^{n}=I$, the identity matrix. Confluent Vandermonde matrices As described before, a Vandermonde matrix describes the linear algebra interpolation problem of finding the coefficients of a polynomial $p(x)$ of degree $n-1$ based on the values $p(x_{1}),\,...,\,p(x_{n})$, where $x_{1},\,...,\,x_{n}$ are distinct points. If $x_{i}$ are not distinct, then this problem does not have a unique solution (and the corresponding Vandermonde matrix is singular). However, if we specify the values of the derivatives at the repeated points, then the problem can have a unique solution. For example, the problem ${\begin{cases}p(0)=y_{1}\\p'(0)=y_{2}\\p(1)=y_{3}\end{cases}}$ where $p(x)=ax^{2}+bx+c$, has a unique solution for all $y_{1},y_{2},y_{3}$ with $y_{1}\neq y_{3}$. In general, suppose that $x_{1},x_{2},...,x_{n}$ are (not necessarily distinct) numbers, and suppose for simplicity that equal values are adjacent: $x_{1}=\cdots =x_{m_{1}},\ x_{m_{1}+1}=\cdots =x_{m_{2}},\ \ldots ,\ x_{m_{k-1}+1}=\cdots =x_{m_{k}}$ where $m_{1}<m_{2}<\cdots <m_{k}=n,$ and $x_{m_{1}},\ldots ,x_{m_{k}}$ are distinct. Then the corresponding interpolation problem is ${\begin{cases}p(x_{m_{1}})=y_{1},&p'(x_{m_{1}})=y_{2},&\ldots ,&p^{(m_{1}-1)}(x_{m_{1}})=y_{m_{1}},\\p(x_{m_{2}})=y_{m_{1}+1},&p'(x_{m_{2}})=y_{m_{1}+2},&\ldots ,&p^{(m_{2}-m_{1}-1)}(x_{m_{2}})=y_{m_{2}},\\\qquad \vdots &&&\qquad \vdots \\p(x_{m_{k}})=y_{m_{k-1}+1},&p'(x_{m_{k}})=y_{m_{k-1}+2},&\ldots ,&p^{(m_{k}-m_{k-1}-1)}(x_{m_{k}})=y_{m_{k}}.\end{cases}}$ The corresponding matrix for this problem is called a confluent Vandermonde matrix, given as follows. If $1\leq i,j\leq n$, then $m_{\ell }<i\leq m_{\ell +1}$ for a unique $0\leq \ell \leq k-1$ (denoting $m_{0}=0$). 
We let $V_{i,j}={\begin{cases}0&{\text{if }}j<i-m_{\ell },\\[6pt]{\dfrac {(j-1)!}{(j-(i-m_{\ell }))!}}x_{i}^{j-(i-m_{\ell })}&{\text{if }}j\geq i-m_{\ell }.\end{cases}}$ This generalization of the Vandermonde matrix makes it non-singular, so that there exists a unique solution to the system of equations, and it possesses most of the other properties of the Vandermonde matrix. Its rows are derivatives (of some order) of the original Vandermonde rows. Another way to derive this formula is by taking a limit of the Vandermonde matrix as the $x_{i}$'s approach each other. For example, to get the case of $x_{1}=x_{2}$, subtract the first row from the second row of the original Vandermonde matrix, and let $x_{2}\to x_{1}$: this yields the corresponding row in the confluent Vandermonde matrix. This derives the generalized interpolation problem with given values and derivatives as a limit of the original case with distinct points: giving $p(x_{i}),p'(x_{i})$ is similar to giving $p(x_{i}),p(x_{i}+\varepsilon )$ for small $\varepsilon $. Geometers have studied the problem of tracking confluent points along their tangent lines, known as compactification of configuration space. See also • Companion matrix § Diagonalizability • Schur polynomial – a generalization • Alternant matrix • Lagrange polynomial • Wronskian • List of matrices • Moore determinant over a finite field • Vieta's formulas References 1. Roger A. Horn and Charles R. Johnson (1991), Topics in matrix analysis, Cambridge University Press. See Section 6.1. 2. Golub, Gene H.; Van Loan, Charles F. (2013). Matrix Computations (4th ed.). The Johns Hopkins University Press. pp. 203–207. ISBN 978-1-4214-0859-0. 3. Macon, N.; A. Spitzbart (February 1958). "Inverses of Vandermonde Matrices". The American Mathematical Monthly. 65 (2): 95–100. doi:10.2307/2308881. JSTOR 2308881. 4. François Viète (1540-1603), Vieta's formulas, https://en.wikipedia.org/wiki/Vieta%27s_formulas 5. Björck, Å.; Pereyra, V. (1970). "Solution of Vandermonde Systems of Equations". American Mathematical Society. 24 (112): 893–903. doi:10.1090/S0025-5718-1970-0290541-1. S2CID 122006253. 6. Press, WH; Teukolsky, SA; Vetterling, WT; Flannery, BP (2007). "Section 2.8.1. Vandermonde Matrices". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8. 7. Inverse of Vandermonde Matrix (2018), https://proofwiki.org/wiki/Inverse_of_Vandermonde_Matrix 8. Fulton, William; Harris, Joe (1991). Representation theory. A first course. Graduate Texts in Mathematics, Readings in Mathematics. Vol. 129. New York: Springer-Verlag. doi:10.1007/978-1-4612-0979-9. ISBN 978-0-387-97495-8. MR 1153249. OCLC 246650103. Lecture 4 reviews the representation theory of symmetric groups, including the role of the Vandermonde determinant. 9. Gauthier, J. "Fast Multipoint Evaluation On n Arbitrary Points." Simon Fraser University, Tech. Rep (2017). 10. Turner, L. Richard (August 1966). Inverse of the Vandermonde matrix with applications (PDF). 11. "Inverse of Vandermonde Matrix". 2018. Further reading • Ycart, Bernard (2013), "A case of mathematical eponymy: the Vandermonde determinant", Revue d'Histoire des Mathématiques, 13, arXiv:1204.4716, Bibcode:2012arXiv1204.4716Y. 
External links • Vandermonde matrix at ProofWiki Matrix classes Explicitly constrained entries • Alternant • Anti-diagonal • Anti-Hermitian • Anti-symmetric • Arrowhead • Band • Bidiagonal • Bisymmetric • Block-diagonal • Block • Block tridiagonal • Boolean • Cauchy • Centrosymmetric • Conference • Complex Hadamard • Copositive • Diagonally dominant • Diagonal • Discrete Fourier Transform • Elementary • Equivalent • Frobenius • Generalized permutation • Hadamard • Hankel • Hermitian • Hessenberg • Hollow • Integer • Logical • Matrix unit • Metzler • Moore • Nonnegative • Pentadiagonal • Permutation • Persymmetric • Polynomial • Quaternionic • Signature • Skew-Hermitian • Skew-symmetric • Skyline • Sparse • Sylvester • Symmetric • Toeplitz • Triangular • Tridiagonal • Vandermonde • Walsh • Z Constant • Exchange • Hilbert • Identity • Lehmer • Of ones • Pascal • Pauli • Redheffer • Shift • Zero Conditions on eigenvalues or eigenvectors • Companion • Convergent • Defective • Definite • Diagonalizable • Hurwitz • Positive-definite • Stieltjes Satisfying conditions on products or inverses • Congruent • Idempotent or Projection • Invertible • Involutory • Nilpotent • Normal • Orthogonal • Unimodular • Unipotent • Unitary • Totally unimodular • Weighing With specific applications • Adjugate • Alternating sign • Augmented • Bézout • Carleman • Cartan • Circulant • Cofactor • Commutation • Confusion • Coxeter • Distance • Duplication and elimination • Euclidean distance • Fundamental (linear differential equation) • Generator • Gram • Hessian • Householder • Jacobian • Moment • Payoff • Pick • Random • Rotation • Seifert • Shear • Similarity • Symplectic • Totally positive • Transformation Used in statistics • Centering • Correlation • Covariance • Design • Doubly stochastic • Fisher information • Hat • Precision • Stochastic • Transition Used in graph theory • Adjacency • Biadjacency • Degree • Edmonds • Incidence • Laplacian • Seidel adjacency • Tutte Used in science and engineering • Cabibbo–Kobayashi–Maskawa • Density • Fundamental (computer vision) • Fuzzy associative • Gamma • Gell-Mann • Hamiltonian • Irregular • Overlap • S • State transition • Substitution • Z (chemistry) Related terms • Jordan normal form • Linear independence • Matrix exponential • Matrix representation of conic sections • Perfect matrix • Pseudoinverse • Row echelon form • Wronskian •  Mathematics portal • List of matrices • Category:Matrices
Wikipedia
Permanent (mathematics) In linear algebra, the permanent of a square matrix is a function of the matrix similar to the determinant. The permanent, as well as the determinant, is a polynomial in the entries of the matrix.[1] Both are special cases of a more general function of a matrix called the immanant. Definition The permanent of an n×n matrix A = (ai,j) is defined as $\operatorname {perm} (A)=\sum _{\sigma \in S_{n}}\prod _{i=1}^{n}a_{i,\sigma (i)}.$ The sum here extends over all elements σ of the symmetric group Sn; i.e. over all permutations of the numbers 1, 2, ..., n. For example, $\operatorname {perm} {\begin{pmatrix}a&b\\c&d\end{pmatrix}}=ad+bc,$ and $\operatorname {perm} {\begin{pmatrix}a&b&c\\d&e&f\\g&h&i\end{pmatrix}}=aei+bfg+cdh+ceg+bdi+afh.$ The definition of the permanent of A differs from that of the determinant of A in that the signatures of the permutations are not taken into account. The permanent of a matrix A is denoted per A, perm A, or Per A, sometimes with parentheses around the argument. Minc uses Per(A) for the permanent of rectangular matrices, and per(A) when A is a square matrix.[2] Muir and Metzler use the notation ${\overset {+}{|}}\quad {\overset {+}{|}}$.[3] The word, permanent, originated with Cauchy in 1812 as “fonctions symétriques permanentes” for a related type of function,[4] and was used by Muir and Metzler[5] in the modern, more specific, sense.[6] Properties If one views the permanent as a map that takes n vectors as arguments, then it is a multilinear map and it is symmetric (meaning that any order of the vectors results in the same permanent). Furthermore, given a square matrix $A=\left(a_{ij}\right)$ of order n:[7] • perm(A) is invariant under arbitrary permutations of the rows and/or columns of A. This property may be written symbolically as perm(A) = perm(PAQ) for any appropriately sized permutation matrices P and Q, • multiplying any single row or column of A by a scalar s changes perm(A) to s⋅perm(A), • perm(A) is invariant under transposition, that is, perm(A) = perm(AT). • If $A=\left(a_{ij}\right)$ and $B=\left(b_{ij}\right)$ are square matrices of order n then,[8] $\operatorname {perm} \left(A+B\right)=\sum _{s,t}\operatorname {perm} \left(a_{ij}\right)_{i\in s,j\in t}\operatorname {perm} \left(b_{ij}\right)_{i\in {\bar {s}},j\in {\bar {t}}},$ where s and t are subsets of the same size of {1,2,...,n} and ${\bar {s}},{\bar {t}}$ are their respective complements in that set. • If $A$ is a triangular matrix, i.e. $a_{ij}=0$, whenever $i>j$ or, alternatively, whenever $i<j$, then its permanent (and determinant as well) equals the product of the diagonal entries: $\operatorname {perm} \left(A\right)=a_{11}a_{22}\cdots a_{nn}=\prod _{i=1}^{n}a_{ii}.$ Relation to determinants Laplace's expansion by minors for computing the determinant along a row, column or diagonal extends to the permanent by ignoring all signs.[9] For every $ i$, $\mathbb {perm} (B)=\sum _{j=1}^{n}B_{i,j}M_{i,j},$ where $B_{i,j}$ is the entry of the ith row and the jth column of B, and $ M_{i,j}$ is the permanent of the submatrix obtained by removing the ith row and the jth column of B. 
For example, expanding along the first column, ${\begin{aligned}\operatorname {perm} \left({\begin{matrix}1&1&1&1\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{matrix}}\right)={}&1\cdot \operatorname {perm} \left({\begin{matrix}1&0&0\\0&1&0\\0&0&1\end{matrix}}\right)+2\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\0&1&0\\0&0&1\end{matrix}}\right)\\&{}+\ 3\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&0&1\end{matrix}}\right)+4\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}}\right)\\={}&1(1)+2(1)+3(1)+4(1)=10,\end{aligned}}$ while expanding along the last row gives, ${\begin{aligned}\operatorname {perm} \left({\begin{matrix}1&1&1&1\\2&1&0&0\\3&0&1&0\\4&0&0&1\end{matrix}}\right)={}&4\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\1&0&0\\0&1&0\end{matrix}}\right)+0\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&0&0\\3&1&0\end{matrix}}\right)\\&{}+\ 0\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&1&0\\3&0&0\end{matrix}}\right)+1\cdot \operatorname {perm} \left({\begin{matrix}1&1&1\\2&1&0\\3&0&1\end{matrix}}\right)\\={}&4(1)+0+0+1(6)=10.\end{aligned}}$ On the other hand, the basic multiplicative property of determinants is not valid for permanents.[10] A simple example shows that this is so. ${\begin{aligned}4&=\operatorname {perm} \left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\operatorname {perm} \left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\\&\neq \operatorname {perm} \left(\left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\left({\begin{matrix}1&1\\1&1\end{matrix}}\right)\right)=\operatorname {perm} \left({\begin{matrix}2&2\\2&2\end{matrix}}\right)=8.\end{aligned}}$ Unlike the determinant, the permanent has no easy geometrical interpretation; it is mainly used in combinatorics, in treating boson Green's functions in quantum field theory, and in determining state probabilities of boson sampling systems.[11] However, it has two graph-theoretic interpretations: as the sum of weights of cycle covers of a directed graph, and as the sum of weights of perfect matchings in a bipartite graph. Applications Symmetric tensors The permanent arises naturally in the study of the symmetric tensor power of Hilbert spaces.[12] In particular, for a Hilbert space $H$, let $\vee ^{k}H$ denote the $k$th symmetric tensor power of $H$, which is the space of symmetric tensors. Note in particular that $\vee ^{k}H$ is spanned by the symmetric products of elements in $H$. 
For $x_{1},x_{2},\dots ,x_{k}\in H$, we define the symmetric product of these elements by $x_{1}\vee x_{2}\vee \cdots \vee x_{k}=(k!)^{-1/2}\sum _{\sigma \in S_{k}}x_{\sigma (1)}\otimes x_{\sigma (2)}\otimes \cdots \otimes x_{\sigma (k)}$ If we consider $\vee ^{k}H$ (as a subspace of $\otimes ^{k}H$, the kth tensor power of $H$) and define the inner product on $\vee ^{k}H$ accordingly, we find that for $x_{j},y_{j}\in H$ $\langle x_{1}\vee x_{2}\vee \cdots \vee x_{k},y_{1}\vee y_{2}\vee \cdots \vee y_{k}\rangle =\operatorname {perm} \left[\langle x_{i},y_{j}\rangle \right]_{i,j=1}^{k}$ Applying the Cauchy–Schwarz inequality, we find that $\operatorname {perm} \left[\langle x_{i},x_{j}\rangle \right]_{i,j=1}^{k}\geq 0$, and that $\left|\operatorname {perm} \left[\langle x_{i},y_{j}\rangle \right]_{i,j=1}^{k}\right|^{2}\leq \operatorname {perm} \left[\langle x_{i},x_{j}\rangle \right]_{i,j=1}^{k}\cdot \operatorname {perm} \left[\langle y_{i},y_{j}\rangle \right]_{i,j=1}^{k}$ Cycle covers Any square matrix $A=(a_{ij})_{i,j=1}^{n}$ can be viewed as the adjacency matrix of a weighted directed graph on vertex set $V=\{1,2,\dots ,n\}$, with $a_{ij}$ representing the weight of the arc from vertex i to vertex j. A cycle cover of a weighted directed graph is a collection of vertex-disjoint directed cycles in the digraph that covers all vertices in the graph. Thus, each vertex i in the digraph has a unique "successor" $\sigma (i)$ in the cycle cover, and so $\sigma $ represents a permutation on V. Conversely, any permutation $\sigma $ on V corresponds to a cycle cover with arcs from each vertex i to vertex $\sigma (i)$. If the weight of a cycle-cover is defined to be the product of the weights of the arcs in each cycle, then $\operatorname {weight} (\sigma )=\prod _{i=1}^{n}a_{i,\sigma (i)},$ implying that $\operatorname {perm} (A)=\sum _{\sigma }\operatorname {weight} (\sigma ).$ Thus the permanent of A is equal to the sum of the weights of all cycle-covers of the digraph. Perfect matchings A square matrix $A=(a_{ij})$ can also be viewed as the adjacency matrix of a bipartite graph which has vertices $x_{1},x_{2},\dots ,x_{n}$ on one side and $y_{1},y_{2},\dots ,y_{n}$ on the other side, with $a_{ij}$ representing the weight of the edge from vertex $x_{i}$ to vertex $y_{j}$. If the weight of a perfect matching $\sigma $ that matches $x_{i}$ to $y_{\sigma (i)}$ is defined to be the product of the weights of the edges in the matching, then $\operatorname {weight} (\sigma )=\prod _{i=1}^{n}a_{i,\sigma (i)}.$ Thus the permanent of A is equal to the sum of the weights of all perfect matchings of the graph. Permanents of (0, 1) matrices Enumeration The answers to many counting questions can be computed as permanents of matrices that only have 0 and 1 as entries. Let Ω(n,k) be the class of all (0, 1)-matrices of order n with each row and column sum equal to k. Every matrix A in this class has perm(A) > 0.[13] The incidence matrices of projective planes are in the class Ω(n2 + n + 1, n + 1) for n an integer > 1. The permanents corresponding to the smallest projective planes have been calculated. For n = 2, 3, and 4 the values are 24, 3852 and 18,534,400 respectively.[13] Let Z be the incidence matrix of the projective plane with n = 2, the Fano plane. Remarkably, perm(Z) = 24 = |det (Z)|, the absolute value of the determinant of Z. 
This is a consequence of Z being a circulant matrix and the theorem:[14] If A is a circulant matrix in the class Ω(n,k) then if k > 3, perm(A) > |det (A)| and if k = 3, perm(A) = |det (A)|. Furthermore, when k = 3, by permuting rows and columns, A can be put into the form of a direct sum of e copies of the matrix Z and consequently, n = 7e and perm(A) = 24e. Permanents can also be used to calculate the number of permutations with restricted (prohibited) positions. For the standard n-set {1, 2, ..., n}, let $A=(a_{ij})$ be the (0, 1)-matrix where aij = 1 if i → j is allowed in a permutation and aij = 0 otherwise. Then perm(A) is equal to the number of permutations of the n-set that satisfy all the restrictions.[9] Two well known special cases of this are the solution of the derangement problem and the ménage problem: the number of permutations of an n-set with no fixed points (derangements) is given by $\operatorname {perm} (J-I)=\operatorname {perm} \left({\begin{matrix}0&1&1&\dots &1\\1&0&1&\dots &1\\1&1&0&\dots &1\\\vdots &\vdots &\vdots &\ddots &\vdots \\1&1&1&\dots &0\end{matrix}}\right)=n!\sum _{i=0}^{n}{\frac {(-1)^{i}}{i!}},$ where J is the n×n all 1's matrix and I is the identity matrix, and the ménage numbers are given by ${\begin{aligned}\operatorname {perm} (J-I-I')&=\operatorname {perm} \left({\begin{matrix}0&0&1&\dots &1\\1&0&0&\dots &1\\1&1&0&\dots &1\\\vdots &\vdots &\vdots &\ddots &\vdots \\0&1&1&\dots &0\end{matrix}}\right)\\&=\sum _{k=0}^{n}(-1)^{k}{\frac {2n}{2n-k}}{2n-k \choose k}(n-k)!,\end{aligned}}$ where I' is the (0, 1)-matrix with nonzero entries in positions (i, i + 1) and (n, 1). Bounds The Bregman–Minc inequality, conjectured by H. Minc in 1963[15] and proved by L. M. Brégman in 1973,[16] gives an upper bound for the permanent of an n × n (0, 1)-matrix. If A has ri ones in row i for each 1 ≤ i ≤ n, the inequality states that $\operatorname {perm} A\leq \prod _{i=1}^{n}(r_{i})!^{1/r_{i}}.$ Van der Waerden's conjecture In 1926, Van der Waerden conjectured that the minimum permanent among all n × n doubly stochastic matrices is n!/nn, achieved by the matrix for which all entries are equal to 1/n.[17] Proofs of this conjecture were published in 1980 by B. Gyires[18] and in 1981 by G. P. Egorychev[19] and D. I. Falikman;[20] Egorychev's proof is an application of the Alexandrov–Fenchel inequality.[21] For this work, Egorychev and Falikman won the Fulkerson Prize in 1982.[22] Computation Main articles: Computing the permanent and Sharp-P-completeness of 01-permanent The naïve approach, using the definition, of computing permanents is computationally infeasible even for relatively small matrices. One of the fastest known algorithms is due to H. J. Ryser.[23] Ryser's method is based on an inclusion–exclusion formula that can be given[24] as follows: Let $A_{k}$ be obtained from A by deleting k columns, let $P(A_{k})$ be the product of the row-sums of $A_{k}$, and let $\Sigma _{k}$ be the sum of the values of $P(A_{k})$ over all possible $A_{k}$. Then $\operatorname {perm} (A)=\sum _{k=0}^{n-1}(-1)^{k}\Sigma _{k}.$ It may be rewritten in terms of the matrix entries as follows: $\operatorname {perm} (A)=(-1)^{n}\sum _{S\subseteq \{1,\dots ,n\}}(-1)^{|S|}\prod _{i=1}^{n}\sum _{j\in S}a_{ij}.$ The permanent is believed to be more difficult to compute than the determinant. While the determinant can be computed in polynomial time by Gaussian elimination, Gaussian elimination cannot be used to compute the permanent. 
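To make the two formulas above concrete, here is a small sketch (illustrative only, and exponential-time, as any exact method must be in the worst case) that computes the permanent both from the definition and by Ryser's inclusion–exclusion formula, and checks them on the 4 × 4 matrix used in the Laplace-expansion example earlier in the article.

```python
import itertools
import math

def perm_naive(A):
    """Permanent from the definition: sum over all permutations sigma."""
    n = len(A)
    return sum(math.prod(A[i][s[i]] for i in range(n))
               for s in itertools.permutations(range(n)))

def perm_ryser(A):
    """Ryser's inclusion-exclusion formula, evaluated here in O(2^n * n^2) time."""
    n = len(A)
    total = 0
    for r in range(n + 1):
        for S in itertools.combinations(range(n), r):
            prod = math.prod(sum(A[i][j] for j in S) for i in range(n))
            total += (-1) ** len(S) * prod
    return (-1) ** n * total

A = [[1, 1, 1, 1],
     [2, 1, 0, 0],
     [3, 0, 1, 0],
     [4, 0, 0, 1]]
print(perm_naive(A), perm_ryser(A))   # both give 10, matching the expansions above
```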
Moreover, computing the permanent of a (0,1)-matrix is #P-complete. Thus, if the permanent can be computed in polynomial time by any method, then FP = #P, which is an even stronger statement than P = NP. When the entries of A are nonnegative, however, the permanent can be computed approximately in probabilistic polynomial time, up to an error of $\varepsilon M$, where $M$ is the value of the permanent and $\varepsilon >0$ is arbitrary.[25] The permanent of a certain set of positive semidefinite matrices can also be approximated in probabilistic polynomial time: the best achievable error of this approximation is $\varepsilon {\sqrt {M}}$ ($M$ is again the value of the permanent).[26] MacMahon's master theorem Main article: MacMahon's master theorem Another way to view permanents is via multivariate generating functions. Let $A=(a_{ij})$ be a square matrix of order n. Consider the multivariate generating function: ${\begin{aligned}F(x_{1},x_{2},\dots ,x_{n})&=\prod _{i=1}^{n}\left(\sum _{j=1}^{n}a_{ij}x_{j}\right)\\&=\left(\sum _{j=1}^{n}a_{1j}x_{j}\right)\left(\sum _{j=1}^{n}a_{2j}x_{j}\right)\cdots \left(\sum _{j=1}^{n}a_{nj}x_{j}\right).\end{aligned}}$ The coefficient of $x_{1}x_{2}\dots x_{n}$ in $F(x_{1},x_{2},\dots ,x_{n})$ is perm(A).[27] As a generalization, for any sequence of n non-negative integers, $s_{1},s_{2},\dots ,s_{n}$ define: $\operatorname {perm} ^{(s_{1},s_{2},\dots ,s_{n})}(A)$ as the coefficient of $x_{1}^{s_{1}}x_{2}^{s_{2}}\cdots x_{n}^{s_{n}}$ in$\left(\sum _{j=1}^{n}a_{1j}x_{j}\right)^{s_{1}}\left(\sum _{j=1}^{n}a_{2j}x_{j}\right)^{s_{2}}\cdots \left(\sum _{j=1}^{n}a_{nj}x_{j}\right)^{s_{n}}.$ MacMahon's master theorem relating permanents and determinants is:[28] $\operatorname {perm} ^{(s_{1},s_{2},\dots ,s_{n})}(A)={\text{ coefficient of }}x_{1}^{s_{1}}x_{2}^{s_{2}}\cdots x_{n}^{s_{n}}{\text{ in }}{\frac {1}{\det(I-XA)}},$ where I is the order n identity matrix and X is the diagonal matrix with diagonal $[x_{1},x_{2},\dots ,x_{n}].$ Rectangular matrices The permanent function can be generalized to apply to non-square matrices. Indeed, several authors make this the definition of a permanent and consider the restriction to square matrices a special case.[29] Specifically, for an m × n matrix $A=(a_{ij})$ with m ≤ n, define $\operatorname {perm} (A)=\sum _{\sigma \in \operatorname {P} (n,m)}a_{1\sigma (1)}a_{2\sigma (2)}\ldots a_{m\sigma (m)}$ where P(n,m) is the set of all m-permutations of the n-set {1,2,...,n}.[30] Ryser's computational result for permanents also generalizes. If A is an m × n matrix with m ≤ n, let $A_{k}$ be obtained from A by deleting k columns, let $P(A_{k})$ be the product of the row-sums of $A_{k}$, and let $\sigma _{k}$ be the sum of the values of $P(A_{k})$ over all possible $A_{k}$. Then[10] $\operatorname {perm} (A)=\sum _{k=0}^{m-1}(-1)^{k}{\binom {n-m+k}{k}}\sigma _{n-m+k}.$ Systems of distinct representatives The generalization of the definition of a permanent to non-square matrices allows the concept to be used in a more natural way in some applications. For instance: Let S1, S2, ..., Sm be subsets (not necessarily distinct) of an n-set with m ≤ n. The incidence matrix of this collection of subsets is an m × n (0,1)-matrix A. The number of systems of distinct representatives (SDR's) of this collection is perm(A).[31] See also • Computing the permanent • Bapat–Beg theorem, an application of permanents in order statistics • Slater determinant, an application of permanents in quantum mechanics • Hafnian Notes 1. 
Marcus, Marvin; Minc, Henryk (1965). "Permanents". Amer. Math. Monthly. 72 (6): 577–591. doi:10.2307/2313846. JSTOR 2313846. 2. Minc (1978) 3. Muir & Metzler (1960) 4. Cauchy, A. L. (1815), "Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et de signes contraires par suite des transpositions opérées entre les variables qu'elles renferment.", Journal de l'École Polytechnique, 10: 91–169 5. Muir & Metzler (1960) 6. van Lint & Wilson 2001, p. 108 7. Ryser 1963, pp. 25 – 26 8. Percus 1971, p. 2 9. Percus 1971, p. 12 10. Ryser 1963, p. 26 11. Aaronson, Scott (14 Nov 2010). "The Computational Complexity of Linear Optics". arXiv:1011.3245 [quant-ph]. 12. Bhatia, Rajendra (1997). Matrix Analysis. New York: Springer-Verlag. pp. 16–19. ISBN 978-0-387-94846-1. 13. Ryser 1963, p. 124 14. Ryser 1963, p. 125 15. Minc, Henryk (1963), "Upper bounds for permanents of (0,1)-matrices", Bulletin of the American Mathematical Society, 69 (6): 789–791, doi:10.1090/s0002-9904-1963-11031-9 16. van Lint & Wilson 2001, p. 101 17. van der Waerden, B. L. (1926), "Aufgabe 45", Jber. Deutsch. Math.-Verein., 35: 117. 18. Gyires, B. (1980), "The common source of several inequalities concerning doubly stochastic matrices", Publicationes Mathematicae Institutum Mathematicum Universitatis Debreceniensis, 27 (3–4): 291–304, MR 0604006. 19. Egoryčev, G. P. (1980), Reshenie problemy van-der-Vardena dlya permanentov (in Russian), Krasnoyarsk: Akad. Nauk SSSR Sibirsk. Otdel. Inst. Fiz., p. 12, MR 0602332. Egorychev, G. P. (1981), "Proof of the van der Waerden conjecture for permanents", Akademiya Nauk SSSR (in Russian), 22 (6): 65–71, 225, MR 0638007. Egorychev, G. P. (1981), "The solution of van der Waerden's problem for permanents", Advances in Mathematics, 42 (3): 299–305, doi:10.1016/0001-8708(81)90044-X, MR 0642395. 20. Falikman, D. I. (1981), "Proof of the van der Waerden conjecture on the permanent of a doubly stochastic matrix", Akademiya Nauk Soyuza SSR (in Russian), 29 (6): 931–938, 957, MR 0625097. 21. Brualdi (2006) p.487 22. Fulkerson Prize, Mathematical Optimization Society, retrieved 2012-08-19. 23. Ryser (1963, p. 27) 24. van Lint & Wilson (2001) p. 99 25. Jerrum, M.; Sinclair, A.; Vigoda, E. (2004), "A polynomial-time approximation algorithm for the permanent of a matrix with nonnegative entries", Journal of the ACM, 51 (4): 671–697, CiteSeerX 10.1.1.18.9466, doi:10.1145/1008731.1008738, S2CID 47361920 26. Chakhmakhchyan, Levon; Cerf, Nicolas; Garcia-Patron, Raul (2017). "A quantum-inspired algorithm for estimating the permanent of positive semidefinite matrices". Phys. Rev. A. 96 (2): 022329. arXiv:1609.02416. Bibcode:2017PhRvA..96b2329C. doi:10.1103/PhysRevA.96.022329. S2CID 54194194. 27. Percus 1971, p. 14 28. Percus 1971, p. 17 29. In particular, Minc (1978) and Ryser (1963) do this. 30. Ryser 1963, p. 25 31. Ryser 1963, p. 54 References • Brualdi, Richard A. (2006). Combinatorial matrix classes. Encyclopedia of Mathematics and Its Applications. Vol. 108. Cambridge: Cambridge University Press. ISBN 978-0-521-86565-4. Zbl 1106.05001. • Minc, Henryk (1978). Permanents. Encyclopedia of Mathematics and its Applications. Vol. 6. With a foreword by Marvin Marcus. Reading, MA: Addison–Wesley. ISSN 0953-4806. OCLC 3980645. Zbl 0401.15005. • Muir, Thomas; Metzler, William H. (1960) [1882]. A Treatise on the Theory of Determinants. New York: Dover. OCLC 535903. • Percus, J.K. 
(1971), Combinatorial Methods, Applied Mathematical Sciences #4, New York: Springer-Verlag, ISBN 978-0-387-90027-8 • Ryser, Herbert John (1963), Combinatorial Mathematics, The Carus Mathematical Monographs #14, The Mathematical Association of America • van Lint, J.H.; Wilson, R.M. (2001), A Course in Combinatorics, Cambridge University Press, ISBN 978-0521422604 Further reading • Hall Jr., Marshall (1986), Combinatorial Theory (2nd ed.), New York: John Wiley & Sons, pp. 56–72, ISBN 978-0-471-09138-7 Contains a proof of the Van der Waerden conjecture. • Marcus, M.; Minc, H. (1965), "Permanents", The American Mathematical Monthly, 72 (6): 577–591, doi:10.2307/2313846, JSTOR 2313846 External links • Permanent at PlanetMath. • Van der Waerden's permanent conjecture at PlanetMath.
Wikipedia
Van der Waerden's theorem Van der Waerden's theorem is a theorem in the branch of mathematics called Ramsey theory. Van der Waerden's theorem states that for any given positive integers r and k, there is some number N such that if the integers {1, 2, ..., N} are colored, each with one of r different colors, then there are at least k integers in arithmetic progression whose elements are of the same color. The least such N is the Van der Waerden number W(r, k), named after the Dutch mathematician B. L. van der Waerden.[1] Example For example, when r = 2, you have two colors, say red and blue. W(2, 3) is bigger than 8, because you can color the integers from {1, ..., 8} like this (the colors of 1, ..., 8 in order): B R R B B R R B, and no three integers of the same color form an arithmetic progression. But you can't add a ninth integer to the end without creating such a progression. If you add a red 9, then the red 3, 6, and 9 are in arithmetic progression. Alternatively, if you add a blue 9, then the blue 1, 5, and 9 are in arithmetic progression. In fact, there is no way of coloring 1 through 9 without creating such a progression (this can be verified by checking every possible coloring). Therefore, W(2, 3) is 9. Open problem It is an open problem to determine the values of W(r, k) for most values of r and k. The proof of the theorem provides only an upper bound. For the case of r = 2 and k = 3, for example, the argument given below shows that it is sufficient to color the integers {1, ..., 325} with two colors to guarantee there will be a single-colored arithmetic progression of length 3. But in fact, the bound of 325 is very loose; the minimum required number of integers is only 9. Any coloring of the integers {1, ..., 9} will have three evenly spaced integers of one color. For r = 3 and k = 3, the bound given by the theorem is 7(2·3^7 + 1)(2·3^(7(2·3^7 + 1)) + 1), or approximately 4.22·10^14616. But actually, you don't need that many integers to guarantee a single-colored progression of length 3; you only need 27. (And it is possible to color {1, ..., 26} with three colors so that there is no single-colored arithmetic progression of length 3; for example, coloring 1, ..., 26 in order: R R G G R R G B G B B R B R R G R G G B R B B G B G.) An open problem is the attempt to reduce the general upper bound to any 'reasonable' function. Ronald Graham offered a prize of US$1000 for showing W(2, k) < 2^(k^2).[2] In addition, he offered a US$250 prize for a proof of his conjecture involving more general off-diagonal van der Waerden numbers, stating W(2; 3, k) ≤ k^O(1), while mentioning that numerical evidence suggests W(2; 3, k) = k^(2 + o(1)). Ben Green disproved this latter conjecture, showing that W(2; 3, k) grows super-polynomially, i.e. giving counterexamples to W(2; 3, k) < k^r for any fixed r.[3] The best upper bound currently known is due to Timothy Gowers,[4] who establishes $W(r,k)\leq 2^{2^{r^{2^{2^{k+9}}}}},$ by first establishing a similar result for Szemerédi's theorem, which is a stronger version of Van der Waerden's theorem. The previously best-known bound was due to Saharon Shelah and proceeded via first proving a result for the Hales–Jewett theorem, which is another strengthening of Van der Waerden's theorem. 
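To make the small values quoted above concrete, here is a short brute-force check in Python (an illustrative sketch added to this text, not part of the article; the function names are invented). It confirms that some 2-coloring of {1, ..., 8} avoids a monochromatic 3-term progression while every 2-coloring of {1, ..., 9} contains one:

from itertools import product

def has_mono_ap(coloring, k):
    # coloring[i - 1] is the color of the integer i
    n = len(coloring)
    for start in range(1, n + 1):
        for step in range(1, n):
            terms = [start + j * step for j in range(k)]
            if terms[-1] > n:
                break
            if len({coloring[t - 1] for t in terms}) == 1:
                return True
    return False

def every_coloring_has_mono_ap(N, r, k):
    # True if every r-coloring of {1, ..., N} contains a monochromatic k-term AP
    return all(has_mono_ap(c, k) for c in product(range(r), repeat=N))

print(every_coloring_has_mono_ap(8, 2, 3))  # False: e.g. B R R B B R R B avoids one
print(every_coloring_has_mono_ap(9, 2, 3))  # True, so W(2, 3) = 9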
The best lower bound currently known for $W(2,k)$ is that for all positive $\varepsilon $ we have $W(2,k)>2^{k}/k^{\varepsilon }$, for all sufficiently large $k$.[5] Proof of Van der Waerden's theorem (in a special case) The following proof is due to Ron Graham, B.L. Rothschild, and Joel Spencer.[6] Khinchin[7] gives a fairly simple proof of the theorem without estimating W(r, k). Proof in the case of W(2, 3) W(2, 3) table (an example coloring c(n), listed block by block b): b = 0: integers 1–5 colored R R B R B; b = 1: integers 6–10 colored B R R B R; …; b = 64: integers 321–325 colored R B R B R. We will prove the special case mentioned above, that W(2, 3) ≤ 325. Let c(n) be a coloring of the integers {1, ..., 325}. We will find three elements of {1, ..., 325} in arithmetic progression that are the same color. Divide {1, ..., 325} into the 65 blocks {1, ..., 5}, {6, ..., 10}, ... {321, ..., 325}, thus each block is of the form {5b + 1, ..., 5b + 5} for some b in {0, ..., 64}. Since each integer is colored either red or blue, each block is colored in one of 32 different ways. By the pigeonhole principle, there are two blocks among the first 33 blocks that are colored identically. That is, there are two integers b1 and b2, both in {0,...,32}, such that c(5b1 + k) = c(5b2 + k) for all k in {1, ..., 5}. Among the three integers 5b1 + 1, 5b1 + 2, 5b1 + 3, there must be at least two that are of the same color. (The pigeonhole principle again.) Call these 5b1 + a1 and 5b1 + a2, where the ai are in {1,2,3} and a1 < a2. Suppose (without loss of generality) that these two integers are both red. (If they are both blue, just exchange 'red' and 'blue' in what follows.) Let a3 = 2a2 − a1. If 5b1 + a3 is red, then we have found our arithmetic progression: 5b1 + ai are all red. Otherwise, 5b1 + a3 is blue. Since a3 ≤ 5, 5b1 + a3 is in the b1 block, and since the b2 block is colored identically, 5b2 + a3 is also blue. Now let b3 = 2b2 − b1. Then b3 ≤ 64. Consider the integer 5b3 + a3, which must be ≤ 325. What color is it? If it is red, then 5b1 + a1, 5b2 + a2, and 5b3 + a3 form a red arithmetic progression. But if it is blue, then 5b1 + a3, 5b2 + a3, and 5b3 + a3 form a blue arithmetic progression. Either way, we are done. Proof in the case of W(3, 3) W(3, 3) table, with g = 2·3^(7(2·3^7 + 1)) and m = 7(2·3^7 + 1) (an example coloring c(n), listed block by block b): b = 0: integers 1, 2, 3, …, m colored G R R … B; b = 1: integers m + 1, m + 2, m + 3, …, 2m colored B R G … R; …; b = g: integers gm + 1, gm + 2, gm + 3, …, (g + 1)m colored B R B … G. A similar argument can be advanced to show that W(3, 3) ≤ 7(2·3^7 + 1)(2·3^(7(2·3^7 + 1)) + 1). One begins by dividing the integers into 2·3^(7(2·3^7 + 1)) + 1 groups of 7(2·3^7 + 1) integers each; of the first 3^(7(2·3^7 + 1)) + 1 groups, two must be colored identically. Divide each of these two groups into 2·3^7 + 1 subgroups of 7 integers each; of the first 3^7 + 1 subgroups in each group, two of the subgroups must be colored identically. Within each of these identical subgroups, two of the first four integers must be the same color, say red; this implies either a red progression or an element of a different color, say blue, in the same subgroup. Since we have two identically-colored subgroups, there is a third subgroup, still in the same group, that contains an element which, if either red or blue, would complete a red or blue progression, by a construction analogous to the one for W(2, 3). Suppose that this element is green. 
Since there is a group that is colored identically, it must contain copies of the red, blue, and green elements we have identified; we can now find a pair of red elements, a pair of blue elements, and a pair of green elements that 'focus' on the same integer, so that whatever color it is, it must complete a progression. Proof in general case The proof for W(2, 3) depends essentially on proving that W(32, 2) ≤ 33. We divide the integers {1,...,325} into 65 'blocks', each of which can be colored in 32 different ways, and then show that two blocks of the first 33 must be the same color, and there is a block colored the opposite way. Similarly, the proof for W(3, 3) depends on proving that $W(3^{7(2\cdot 3^{7}+1)},2)\leq 3^{7(2\cdot 3^{7}+1)}+1.$ By a double induction on the number of colors and the length of the progression, the theorem is proved in general. Proof A D-dimensional arithmetic progression (AP) consists of numbers of the form: $a+i_{1}s_{1}+i_{2}s_{2}+\cdots +i_{D}s_{D}$ where a is the basepoint, the s's are positive step-sizes, and the i's range from 0 to L − 1. A D-dimensional AP is homogeneous for some coloring when it is all the same color. A D-dimensional arithmetic progression with benefits is all numbers of the form above, but where you add on some of the "boundary" of the arithmetic progression, i.e. some of the indices i's can be equal to L. The sides you tack on are ones where the first k i's are equal to L, and the remaining i's are less than L. The boundaries of a D-dimensional AP with benefits are these additional arithmetic progressions of dimension $D-1,D-2,D-3,D-4$, down to 0. The 0-dimensional arithmetic progression is the single point at index value $(L,L,L,L,\ldots ,L)$. A D-dimensional AP with benefits is homogeneous when each of the boundaries is individually homogeneous, but different boundaries do not necessarily have to have the same color. Next define the quantity MinN(L, D, N) to be the least integer so that any assignment of N colors to an interval of length MinN or more necessarily contains a homogeneous D-dimensional arithmetic progression with benefits. The goal is to bound the size of MinN. Note that MinN(L,1,N) is an upper bound for Van der Waerden's number. There are two induction steps, as follows: Lemma 1 — Assume MinN is known for a given length L for all dimensions of arithmetic progressions with benefits up to D. This formula gives a bound on MinN when you increase the dimension to D + 1: let $M=\operatorname {MinN} (L,D,n)$, then $\operatorname {MinN} (L,D+1,n)\leq M\cdot \operatorname {MinN} (L,1,n^{M})$ Proof First, if you have an n-coloring of the interval 1...I, you can define a block coloring of k-size blocks. Just consider each sequence of k colors in each k block to define a unique color. Call this k-blocking an n-coloring. k-blocking an n-coloring of length l produces an $n^{k}$ coloring of length l/k. So given an n-coloring of an interval I of size $M\cdot \operatorname {MinN} (L,1,n^{M})$ you can M-block it into an $n^{M}$ coloring of length $\operatorname {MinN} (L,1,n^{M})$. But that means, by the definition of MinN, that you can find a 1-dimensional arithmetic sequence (with benefits) of length L in the block coloring, which is a sequence of blocks equally spaced, which are all the same block-color, i.e. you have a bunch of blocks of length M in the original sequence, which are equally spaced, which have exactly the same sequence of colors inside. 
Now, by the definition of M, you can find a d-dimensional arithmetic sequence with benefits in any one of these blocks, and since all of the blocks have the same sequence of colors, the same d-dimensional AP with benefits appears in all of the blocks, just by translating it from block to block. This is the definition of a d + 1 dimensional arithmetic progression, so you have a homogeneous d + 1 dimensional AP. The new stride parameter sD + 1 is defined to be the distance between the blocks. But you need benefits. The boundaries you get now are all old boundaries, plus their translations into identically colored blocks, because iD+1 is always less than L. The only boundary which is not like this is the 0-dimensional point when $i_{1}=i_{2}=\cdots =i_{D+1}=L$. This is a single point, and is automatically homogeneous. Lemma 2 — Assume MinN is known for one value of L and all possible dimensions D. Then you can bound MinN for length L + 1. $\operatorname {MinN} (L+1,1,n)\leq 2\operatorname {MinN} (L,n,n)$ Proof Given an n-coloring of an interval of size MinN(L,n,n), by definition, you can find an arithmetic sequence with benefits of dimension n of length L. But now, the number of "benefit" boundaries is equal to the number of colors, so one of the homogeneous boundaries, say of dimension k, has to have the same color as another one of the homogeneous benefit boundaries, say the one of dimension p < k. This allows a length L + 1 arithmetic sequence (of dimension 1) to be constructed, by going along a line inside the k-dimensional boundary which ends right on the p-dimensional boundary, and including the terminal point in the p-dimensional boundary. In formulas: if $a+Ls_{1}+Ls_{2}+\cdots +Ls_{D-k}$ has the same color as $a+Ls_{1}+Ls_{2}+\cdots +Ls_{D-p}$ then $a+L\cdot (s_{1}+\cdots +s_{D-k})+u\cdot (s_{D-k+1}+\cdots +s_{p})$ have the same color $u=0,1,2,\cdots ,L-1,L$ i.e. u makes a sequence of length L+1. This constructs a sequence of dimension 1, and the "benefits" are automatic, just add on another point of whatever color. To include this boundary point, one has to make the interval longer by the maximum possible value of the stride, which is certainly less than the interval size. So doubling the interval size will definitely work, and this is the reason for the factor of two. This completes the induction on L. Base case: MinN(1,d,n) = 1, i.e. if you want a length 1 homogeneous d-dimensional arithmetic sequence, with or without benefits, you have nothing to do. So this forms the base of the induction. The Van der Waerden theorem itself is the assertion that MinN(L,1,N) is finite, and it follows from the base case and the induction steps.[8] See also • Van der Waerden numbers for all known values for W(n,r) and the best known bounds for unknown values. • Van der Waerden game – a game where the player picks integers from the set 1, 2, ..., N, and tries to collect an arithmetic progression of length n. • Hales–Jewett theorem • Rado's theorem • Szemerédi's theorem • Bartel Leendert van der Waerden Notes 1. van der Waerden, B. L. (1927). "Beweis einer Baudetschen Vermutung". Nieuw. Arch. Wisk. (in German). 15: 212–216. 2. Graham, Ron (2007). "Some of My Favorite Problems in Ramsey Theory". INTEGERS: The Electronic Journal of Combinatorial Number Theory. 7 (2): #A15. 3. Klarreich, Erica (2021). "Mathematician Hurls Structure and Disorder Into Century-Old Problem". Quanta Magazine. 4. Gowers, Timothy (2001). "A new proof of Szemerédi's theorem". Geometric and Functional Analysis. 11 (3): 465–588. 
doi:10.1007/s00039-001-0332-9. S2CID 124324198. 5. Szabó, Zoltán (1990). "An application of Lovász' local lemma-a new lower bound for the van der Waerden number". Random Structures & Algorithms. 1 (3): 343–360. doi:10.1002/rsa.3240010307. 6. Graham, Ronald; Rothschild, Bruce; Spencer, Joel (1990). Ramsey theory. Wiley. ISBN 0471500461. 7. Khinchin (1998, pp. 11–17, chapter 1) 8. Graham, R. L.; Rothschild, B. L. (1974). "A short proof of van der Waerden's theorem on arithmetic progressions". Proceedings of the American Mathematical Society. 42 (2): 385–386. doi:10.1090/S0002-9939-1974-0329917-8. References • Khinchin, A. Ya. (1998), Three Pearls of Number Theory, Mineola, NY: Dover, pp. 11–17, ISBN 978-0-486-40026-6 (second edition originally published in Russian in 1948) External links • O'Bryant, Kevin. "van der Waerden's Theorem". MathWorld. • O'Bryant, Kevin & Weisstein, Eric W. "Van der Waerden Number". MathWorld.
Wikipedia
Arithmetic progression game The arithmetic progression game is a positional game where two players alternately pick numbers, trying to occupy a complete arithmetic progression of a given size. The game is parameterized by two integers n > k. The game-board is the set {1,...,n}. The winning-sets are all the arithmetic progressions of length k. In a Maker-Breaker game variant, the first player (Maker) wins by occupying a k-length arithmetic progression, otherwise the second player (Breaker) wins. The game is also called the van der Waerden game,[1] named after Van der Waerden's theorem. It says that, for any k, there exists some integer W(2,k) such that, if the integers {1, ..., W(2,k)} are partitioned arbitrarily into two sets, then at least one set contains an arithmetic progression of length k. This means that, if $n\geq W(2,k)$, then Maker has a winning strategy. Unfortunately, this claim is not constructive - it does not show a specific strategy for Maker. Moreover, the current upper bound for W(2,k) is extremely large (the currently known bounds are: $2^{k}/k^{\varepsilon }<W(2,k)<2^{2^{2^{2^{k+9}}}}$). Let W*(2,k) be the smallest integer such that Maker has a winning strategy. Beck[1] proves that $2^{k-7k^{7/8}}<W^{*}(2,k)<k^{3}2^{k-4}$. In particular, if $k^{3}2^{k-4}<n$, then the game is Maker's win (even though it is much smaller than the number that guarantees no-draw). References 1. Beck, József (1981). "Van der Waerden and Ramsey type games". Combinatorica. 1 (2): 103–116. doi:10.1007/bf02579267. ISSN 0209-9683.
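Maker's winning condition is simply that the numbers Maker has picked contain an arithmetic progression of length k, which is easy to test directly; the following Python sketch is added here for illustration (it is not taken from Beck's paper, and the function name is invented):

def contains_k_term_ap(chosen, k):
    # True if the set of chosen integers contains a k-term arithmetic progression
    s = set(chosen)
    for a in s:
        for b in s:
            d = b - a
            if d > 0 and all(a + i * d in s for i in range(k)):
                return True
    return False

print(contains_k_term_ap({1, 3, 4, 5, 9}, 3))  # True: 3, 4, 5 (and 1, 5, 9)
print(contains_k_term_ap({1, 2, 4, 8}, 3))     # False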
Wikipedia
Vandermonde polynomial In algebra, the Vandermonde polynomial of an ordered set of n variables $X_{1},\dots ,X_{n}$, named after Alexandre-Théophile Vandermonde, is the polynomial: $V_{n}=\prod _{1\leq i<j\leq n}(X_{j}-X_{i}).$ (Some sources use the opposite order $(X_{i}-X_{j})$, which changes the sign ${\binom {n}{2}}$ times: thus in some dimensions the two formulas agree in sign, while in others they have opposite signs.) It is also called the Vandermonde determinant, as it is the determinant of the Vandermonde matrix. The value depends on the order of the terms: it is an alternating polynomial, not a symmetric polynomial. Alternating The defining property of the Vandermonde polynomial is that it is alternating in the entries, meaning that permuting the $X_{i}$ by an odd permutation changes the sign, while permuting them by an even permutation does not change the value of the polynomial – in fact, it is the basic alternating polynomial, as will be made precise below. It thus depends on the order, and is zero if two entries are equal – this also follows from the formula, but is also consequence of being alternating: if two variables are equal, then switching them both does not change the value and inverts the value, yielding $V_{n}=-V_{n},$ and thus $V_{n}=0$ (assuming the characteristic is not 2, otherwise being alternating is equivalent to being symmetric). Conversely, the Vandermonde polynomial is a factor of every alternating polynomial: as shown above, an alternating polynomial vanishes if any two variables are equal, and thus must have $(X_{i}-X_{j})$ as a factor for all $i\neq j$. Alternating polynomials Main article: Alternating polynomial Thus, the Vandermonde polynomial (together with the symmetric polynomials) generates the alternating polynomials. Discriminant Its square is widely called the discriminant, though some sources call the Vandermonde polynomial itself the discriminant. The discriminant (the square of the Vandermonde polynomial: $\Delta =V_{n}^{2}$) does not depend on the order of terms, as $(-1)^{2}=1$, and is thus an invariant of the unordered set of points. If one adjoins the Vandermonde polynomial to the ring of symmetric polynomials in n variables $\Lambda _{n}$, one obtains the quadratic extension $\Lambda _{n}[V_{n}]/\langle V_{n}^{2}-\Delta \rangle $, which is the ring of alternating polynomials. Vandermonde polynomial of a polynomial Given a polynomial, the Vandermonde polynomial of its roots is defined over the splitting field; for a non-monic polynomial, with leading coefficient a, one may define the Vandermonde polynomial as $V_{n}=a^{n-1}\prod _{1\leq i<j\leq n}(X_{j}-X_{i}),$ (multiplying with a leading term) to accord with the discriminant. Generalizations Over arbitrary rings, one instead uses a different polynomial to generate the alternating polynomials – see (Romagny, 2005). The Vandermonde determinant is a very special case of the Weyl denominator formula applied to the trivial representation of the special unitary group $\mathrm {SU} (n)$. See also • Capelli polynomial (ref) References • The fundamental theorem of alternating functions, by Matthieu Romagny, September 15, 2005
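As an illustrative sketch (added here, not part of the article; it assumes the SymPy library), one can check for three variables that the product formula equals the determinant of the Vandermonde matrix and that transposing two variables flips the sign:

from sympy import symbols, Matrix, expand

x1, x2, x3 = symbols('x1 x2 x3')
xs = [x1, x2, x3]

# Product formula V_n = prod_{1 <= i < j <= n} (X_j - X_i)
V = 1
for i in range(3):
    for j in range(i + 1, 3):
        V *= xs[j] - xs[i]

# Determinant of the Vandermonde matrix with rows (1, x_i, x_i^2)
M = Matrix([[xs[i] ** p for p in range(3)] for i in range(3)])
print(expand(V - M.det()))    # 0: the polynomial equals the Vandermonde determinant

# Alternating property: an odd permutation (here a transposition) changes the sign
V_swapped = V.subs({x1: x2, x2: x1}, simultaneous=True)
print(expand(V + V_swapped))  # 0: swapping x1 and x2 negates V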
Wikipedia
Kummer–Vandiver conjecture In mathematics, the Kummer–Vandiver conjecture, or Vandiver conjecture, states that a prime p does not divide the class number $h_{K}$ of the maximal real subfield $K=\mathbb {Q} (\zeta _{p})^{+}$ of the p-th cyclotomic field. The conjecture was first made by Ernst Kummer on 28 December 1849 and 24 April 1853 in letters to Leopold Kronecker, reprinted in (Kummer 1975, pages 84, 93, 123–124), and independently rediscovered around 1920 by Philipp Furtwängler and Harry Vandiver (1946, p. 576). Kummer–Vandiver conjecture. Field: Algebraic number theory. Conjectured by: Ernst Kummer. Conjectured in: 1849. Open problem: Yes. As of 2011, there is no particularly strong evidence either for or against the conjecture and it is unclear whether it is true or false, though it is likely that counterexamples are very rare. Background The class number h of the cyclotomic field $\mathbb {Q} (\zeta _{p})$ is a product of two integers h1 and h2, called the first and second factors of the class number, where h2 is the class number of the maximal real subfield $K=\mathbb {Q} (\zeta _{p})^{+}$ of the p-th cyclotomic field. The first factor h1 is well understood and can be computed easily in terms of Bernoulli numbers, and is usually rather large. The second factor h2 is not well understood and is hard to compute explicitly, and in the cases when it has been computed it is usually small. Kummer showed that if a prime p does not divide the class number h, then Fermat's Last Theorem holds for exponent p. The Kummer–Vandiver conjecture states that p does not divide the second factor h2. Kummer showed that if p divides the second factor, then it also divides the first factor. In particular the Kummer–Vandiver conjecture holds for regular primes (those for which p does not divide the first factor). Evidence for and against the Kummer–Vandiver conjecture Kummer verified the Kummer–Vandiver conjecture for p less than 200, and Vandiver extended this to p less than 600. Buhler, Crandall, Ernvall, Metsänkylä, and Shokrollahi (2001) verified it for p < 12 million. Buhler & Harvey (2011) extended this to primes less than 163 million, and Hart, Harvey & Ong (2017) extended this to primes less than $2^{31}$ (about 2.1 billion). Washington (1996, p. 158) describes an informal probability argument, based on rather dubious assumptions about the equidistribution of class numbers mod p, suggesting that the number of primes less than x that are exceptions to the Kummer–Vandiver conjecture might grow like (1/2)log log x. This grows extremely slowly, and suggests that the computer calculations do not provide much evidence for Vandiver's conjecture: for example, the probability argument (combined with the calculations for small primes) suggests that one should only expect about 1 counterexample in the first $10^{100}$ primes, suggesting that it is unlikely any counterexample will be found by further brute force searches even if there are an infinite number of exceptions. Schoof (2003) gave conjectural calculations of the class numbers of real cyclotomic fields for primes up to 10000, which strongly suggest that the class numbers are not randomly distributed mod p. They tend to be quite small and are often just 1. For example, assuming the generalized Riemann hypothesis, the class number of the real cyclotomic field for the prime p is 1 for p < 163, and divisible by 4 for p = 163. This suggests that Washington's informal probability argument against the conjecture may be misleading. 
Mihăilescu (2010) gave a refined version of Washington's heuristic argument, suggesting that the Kummer–Vandiver conjecture is probably true. Consequences of the Kummer–Vandiver conjecture Kurihara (1992) showed that the conjecture is equivalent to a statement in the algebraic K-theory of the integers, namely that Kn(Z) = 0 whenever n is a multiple of 4. In fact from the Kummer–Vandiver conjecture and the norm residue isomorphism theorem follow a full conjectural calculation of the K-groups for all values of n; see Quillen–Lichtenbaum conjecture for details. See also • Regular and irregular primes • Herbrand–Ribet theorem References • Buhler, Joe; Crandall, Richard; Ernvall, Reijo; Metsänkylä, Tauno; Shokrollahi, M. Amin (2001), Bosma, Wieb (ed.), "Irregular primes and cyclotomic invariants to 12 million", Computational algebra and number theory (Proceedings of the 2nd International Magma Conference held at Marquette University, Milwaukee, WI, May 12–16, 1996), Journal of Symbolic Computation, 31 (1): 89–96, doi:10.1006/jsco.1999.1011, ISSN 0747-7171, MR 1806208 • Ghate, Eknath (2000), "Vandiver's conjecture via K-theory" (PDF), in Adhikari, S. D.; Katre, S. A.; Thakur, Dinesh (eds.), Cyclotomic fields and related topics, Proceedings of the Summer School on Cyclotomic Fields held in Pune, June 7–30, 1999, Bhaskaracharya Pratishthana, Pune, pp. 285–298, MR 1802389 • Buhler, J. P.; Harvey, D. (2011), "Irregular primes to 163 million", Mathematics of Computation, 80 (276): 2435–2444, doi:10.1090/S0025-5718-2011-02461-0, MR 2813369 • Hart, William; Harvey, David; Ong, Wilson (2017), "Irregular primes to two billion", Mathematics of Computation, 86 (308): 3031–3049, arXiv:1605.02398, doi:10.1090/mcom/3211, MR 3667037, S2CID 37245286 • Kummer, Ernst Eduard (1975), Weil, André (ed.), Collected papers. Volume 1: Contributions to Number Theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-06835-0, MR 0465760 • Kurihara, Masato (1992), "Some remarks on conjectures about cyclotomic fields and K-groups of Z", Compositio Mathematica, 81 (2): 223–236, ISSN 0010-437X, MR 1145807 • Mihăilescu, Preda (2010), Turning Washington's heuristics in favor of Vandiver's conjecture, arXiv:1011.6283, Bibcode:2010arXiv1011.6283M • Schoof, René (2003), "Class numbers of real cyclotomic fields of prime conductor", Mathematics of Computation, 72 (242): 913–937, doi:10.1090/S0025-5718-02-01432-1, ISSN 0025-5718, MR 1954975 • Vandiver, H. S. (1946), "Fermat's last theorem. Its history and the nature of the known results concerning it", The American Mathematical Monthly, 53 (10): 555–578, doi:10.1080/00029890.1946.11991754, ISSN 0002-9890, JSTOR 2305236, MR 0018660 • Washington, Lawrence C. (1996), Introduction to Cyclotomic Fields, Springer, ISBN 978-0-387-94762-4
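Because of Kummer's result quoted above (if p divides the second factor h2 it also divides the first factor h1), any counterexample to the conjecture would have to be an irregular prime, that is, a prime dividing the numerator of one of the Bernoulli numbers B_2, B_4, ..., B_(p−3). The following Python sketch (added here for illustration, assuming the SymPy library; not part of the article) lists the first few irregular primes, showing how sparse the candidate primes are:

from sympy import bernoulli, primerange

def is_irregular(p):
    # p is irregular if p divides the numerator of some B_k with k = 2, 4, ..., p - 3
    return any(bernoulli(k).p % p == 0 for k in range(2, p - 2, 2))

print([p for p in primerange(3, 120) if is_irregular(p)])  # [37, 59, 67, 101, 103]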
Wikipedia
Vanessa Robins Vanessa Robins is an Australian applied mathematician whose research interests include computational topology, image processing, and the structure of granular materials. She is a fellow in the departments of applied mathematics and theoretical physics at Australian National University, where she was ARC Future Fellow from 2014 to 2019.[1] Education Robins earned a bachelor's degree in mathematics at Australian National University in 1994.[1] She completed a PhD at the University of Colorado Boulder in 2000. Her dissertation, Computational Topology at Multiple Resolutions: Foundations and Applications to Fractals and Dynamics, was jointly supervised by James D. Meiss and Elizabeth Bradley.[2] Contributions One of Robins's publications, from 1999, is one of the three works that independently introduced persistent homology in topological data analysis.[3] As well as working on mathematical research, she has collaborated with artist Julie Brooke, of the Australian National University School of Art & Design, on the mathematical visualization of topological surfaces.[4] References 1. "Dr Vanessa Robins", People, Australian National University Research School of Physics, retrieved 2020-05-04 2. Vanessa Robins at the Mathematics Genealogy Project 3. Edelsbrunner, Herbert; Morozov, Dmitriy (2013), "Persistent homology: theory and practice" (PDF), European Congress of Mathematics, Eur. Math. Soc., Zürich, pp. 31–50, MR 3469114 4. "The art of science in jewellery, metal, tape and music", Science in Public, 9 December 2014, retrieved 2020-05-04; "Julie Brooke: Minimal Surfaces", Art Almanac, 30 March 2015, retrieved 2020-05-04 External links • Vanessa Robins publications indexed by Google Scholar Authority control Academics • MathSciNet • Mathematics Genealogy Project • ORCID • ResearcherID • Scopus People • Trove Other • IdRef
Wikipedia
Missing square puzzle The missing square puzzle is an optical illusion used in mathematics classes to help students reason about geometrical figures; or rather to teach them not to reason using figures, but to use only textual descriptions and the axioms of geometry. It depicts two arrangements made of similar shapes in slightly different configurations. Each apparently forms a 13×5 right-angled triangle, but one has a 1×1 hole in it. Solution The key to the puzzle is the fact that neither of the 13×5 "triangles" is truly a triangle, nor would either truly be 13×5 if it were, because what appears to be the hypotenuse is bent. In other words, the "hypotenuse" does not maintain a consistent slope, even though it may appear that way to the human eye. A true 13×5 triangle cannot be created from the given component parts. The four figures (the yellow, red, blue and green shapes) total 32 units of area. The apparent triangles formed from the figures are 13 units wide and 5 units tall, so it appears that the area should be S = 13×5/2 = 32.5 units. However, the blue triangle has a ratio of 5:2 (= 2.5), while the red triangle has the ratio 8:3 (≈ 2.667), so the apparent combined hypotenuse in each figure is actually bent. With the bent hypotenuse, the first figure actually occupies a combined 32 units, while the second figure occupies 33, including the "missing" square. The amount of bending is approximately 1/28 unit (1.245364267°), which is difficult to see on the diagram of the puzzle itself and is illustrated in a separate graphic. Note the grid point where the red and blue triangles in the lower image meet (5 squares to the right and two units up from the lower left corner of the combined figure), and compare it to the same point on the other figure; the edge is slightly under the mark in the upper image, but goes through it in the lower. Overlaying the hypotenuses from both figures results in a very thin parallelogram (represented with the four red dots) with an area of exactly one grid square (Pick's theorem, with 0 interior lattice points[1] and 4 boundary lattice points,[2] gives an area of 0 + 4/2 − 1 = 1), which is exactly the "missing" area. Principle According to Martin Gardner,[3] this particular puzzle was invented by a New York City amateur magician, Paul Curry, in 1953. However, the principle of a dissection paradox has been known since the start of the 16th century. The integer dimensions of the parts of the puzzle (2, 3, 5, 8, 13) are successive Fibonacci numbers, which leads to the exact unit area in the thin parallelogram. Many other geometric dissection puzzles are based on a few simple properties of the Fibonacci sequence.[4] Similar puzzles Sam Loyd's chessboard paradox demonstrates two rearrangements of an 8×8 square. In the "larger" rearrangement (the 5×13 rectangle in the image to the right), the gaps between the figures have a combined area one unit square larger than the gaps in the original square arrangement, creating an illusion that the figures there take up more space than those in the original square figure.[5] In the "smaller" rearrangement (the shape below the 5×13 rectangle), each quadrilateral needs to overlap the triangle by an area of half a unit for its top/bottom edge to align with a grid line, resulting in an overall loss of one unit square of area. Mitsunobu Matsuyama's "paradox" uses four congruent quadrilaterals and a small square, which form a larger square. When the quadrilaterals are rotated about their centers they fill the space of the small square, although the total area of the figure seems unchanged. 
The apparent paradox is explained by the fact that the side of the new large square is a little smaller than the original one. If θ is the angle between two opposing sides in each quadrilateral, then the ratio of the two areas is given by $\sec ^{2}\theta $. For θ = 5°, this is approximately 1.00765, which corresponds to a difference of about 0.8%. A vanishing puzzle is a mechanical optical illusion showing different numbers of a certain object when parts of the puzzle are moved around.[6] See also • Chessboard paradox • Einstellung effect • Hooper's paradox • Missing dollar riddle References 1. number of interior lattice points 2. number of boundary lattice points 3. Gardner, Martin (1956). Mathematics, Magic and Mystery. Dover. pp. 139–150. ISBN 9780486203355. 4. Weisstein, Eric. "Cassini's Identity". Math World. 5. "A Paradoxical Dissection". mathblag. 2011-08-28. Retrieved 2018-04-19. 6. The Guardian, Vanishing Leprechaun, Disappearing Dwarf and Swinging Sixties Pin-up Girls – puzzles in pictures External links Wikimedia Commons has media related to Missing square puzzle. • A printable Missing Square variant with a video demonstration. • Curry's Paradox: How Is It Possible? at cut-the-knot • Jigsaw Paradox • The Eleven Holes Puzzle • "Infinite Chocolate Bar Trick", a demonstration of the missing square puzzle utilising a 4×6 chocolate bar
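The slope and area bookkeeping in the solution above can be verified in a few lines of Python (an illustrative sketch added to this text, not part of the article):

from fractions import Fraction

# Slopes of the two triangular pieces: 2 rise over 5 run (blue) versus 3 over 8 (red)
blue_slope = Fraction(2, 5)
red_slope = Fraction(3, 8)
print(blue_slope == red_slope, blue_slope - red_slope)  # False 1/40: the "hypotenuse" is bent

# Areas: the four pieces total 32, while a true 13 x 5 right triangle would have area 32.5
pieces_total = 32
apparent_triangle = Fraction(13 * 5, 2)
print(apparent_triangle - pieces_total)  # 1/2: the first arrangement falls half a unit short of a
                                         # true triangle, and the second overshoots by the same amount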
Wikipedia
Vanishing cycle In mathematics, vanishing cycles are studied in singularity theory and other parts of algebraic geometry. They are those homology cycles of a smooth fiber in a family which vanish in the singular fiber. For example, in a map from a connected complex surface to the complex projective line, a generic fiber is a smooth Riemann surface of some fixed genus g and, generically, there will be isolated points in the target whose preimages are nodal curves. If one considers an isolated critical value and a small loop around it, in each fiber, one can find a smooth loop such that the singular fiber can be obtained by pinching that loop to a point. The loop in the smooth fibers gives an element of the first homology group of a surface, and the monodromy of the critical value is defined to be the monodromy of the first homology of the fibers as the loop is traversed, i.e. an invertible map of the first homology of a (real) surface of genus g. A classical result is the Picard–Lefschetz formula,[1] detailing how the monodromy round the singular fiber acts on the vanishing cycles, by a shear mapping. The classical, geometric theory of Solomon Lefschetz was recast in purely algebraic terms, in SGA7. This was for the requirements of its application in the context of l-adic cohomology; and eventual application to the Weil conjectures. There the definition uses derived categories, and looks very different. It involves a functor, the nearby cycle functor, with a definition by means of the higher direct image and pullbacks. The vanishing cycle functor then sits in a distinguished triangle with the nearby cycle functor and a more elementary functor. This formulation has been of continuing influence, in particular in D-module theory. See also • Thom–Sebastiani Theorem References 1. Given in , for Morse functions. • Dimca, Alexandru; Singularities and Topology of Hypersurfaces. • Section 3 of Peters, C.A.M. and J.H.M. Steenbrink: Infinitesimal variations of Hodge structure and the generic Torelli problem for projective hypersurfaces, in : Classification of Algebraic Manifolds, K. Ueno ed., Progress inMath. 39, Birkhauser 1983. • For the étale cohomology version, see the chapter on monodromy in Freitag, E.; Kiehl, Reinhardt (1988), Etale Cohomology and the Weil Conjecture, Berlin: Springer-Verlag, ISBN 978-0-387-12175-8 • Deligne, Pierre; Katz, Nicholas, eds. (1973), Séminaire de Géométrie Algébrique du Bois Marie – 1967–69 – Groupes de monodromie en géométrie algébrique – (SGA 7) – vol. 2, Lecture Notes in Mathematics, vol. 340, Berlin, New York: Springer-Verlag, pp. x+438, see especially Pierre Deligne, Le formalisme des cycles évanescents, SGA7 XIII and XIV. • Massey, David (2010). "Notes on Perverse Sheaves and Vanishing Cycles". arXiv:math/9908107. External links • Vanishing Cycle in the Encyclopedia of Mathematics
Wikipedia
Vanja Dukic Vanja Dukic is an expert in computational statistics and mathematical epidemiology who works as a professor of applied mathematics at the University of Colorado Boulder. Her research includes work on using internet search engine access patterns to track diseases,[1][2] and on the effects of climate change on the spread of diseases. Dukic earned a bachelor's degree in finance and actuarial mathematics from Bryant University in 1995.[3] She completed her doctorate at Brown University in 2001, under the joint supervision of biostatisticians Constantine Gatsonis and Joseph Hogan.[4] She worked as a faculty member in the biostatistics program of the Department of Public Health Sciences at the University of Chicago from 2001 to 2010, before moving to Colorado.[3] In 2015 she was elected as a Fellow of the American Statistical Association "for important contributions to Bayesian modeling of complex processes and analysis of Big Data, substantive and collaborative research in infectious diseases and climate change, and service to the profession, including excellence in editorial work."[5][6] References 1. Wernau, Julie (December 11, 2009), "Flu is waning, say U. of C. professors: Trio uses Google data to track illness", Chicago Tribune. 2. Keim, Brandon (May 20, 2011), "Google search patterns could track MRSA spread", Wired. 3. Home page and brief biography, University of Colorado, retrieved 2016-07-09. 4. Vanja Dukic at the Mathematics Genealogy Project 5. "ASA name 62 new Fellows", IMS Bulletin, October 2, 2015. 6. ASA name 62 new Fellows: Selection honors each as "foremost members" of statistical science (PDF), American Statistical Association, June 4, 2015, archived from the original (PDF) on 2016-03-04, retrieved 2016-07-09. External links • Vanja Dukic publications indexed by Google Scholar Authority control: Academics • Google Scholar • MathSciNet • Mathematics Genealogy Project • ORCID • Scopus • zbMATH
Wikipedia
Vantieghem's theorem In number theory, Vantieghem's theorem is a primality criterion. It states that a natural number n ≥ 3 is prime if and only if $\prod _{1\leq k\leq n-1}\left(2^{k}-1\right)\equiv n\mod \left(2^{n}-1\right).$ Similarly, n is prime if and only if the following congruence for polynomials in X holds: $\prod _{1\leq k\leq n-1}\left(X^{k}-1\right)\equiv n-\left(X^{n}-1\right)/\left(X-1\right)\mod \left(X^{n}-1\right)$ or: $\prod _{1\leq k\leq n-1}\left(X^{k}-1\right)\equiv n\mod \left(X^{n}-1\right)/\left(X-1\right).$ Example Let n = 7. Forming the product 1·3·7·15·31·63 = 615195, we have 615195 ≡ 7 (mod 127), so 7 is prime. Let n = 9. Forming the product 1·3·7·15·31·63·127·255 = 19923090075, we have 19923090075 ≡ 301 (mod 511); since 301 ≠ 9, 9 is composite. References • Kilford, L.J.P. (2004). "A generalization of a necessary and sufficient condition for primality due to Vantieghem". Int. J. Math. Math. Sci. 2004 (69–72): 3889–3892. arXiv:math/0402128. Bibcode:2004math......2128K. doi:10.1155/S0161171204403226. Zbl 1126.11307. An article with proof and generalizations. • Vantieghem, E. (1991). "On a congruence only holding for primes". Indag. Math. New Series. 2 (2): 253–255. doi:10.1016/0019-3577(91)90013-W. Zbl 0734.11003.
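The congruence can be tested directly for small n by reducing the product modulo 2^n − 1 as it is built up; the following Python sketch is added here for illustration (the function name is invented, and the test is practical only for small n because the modulus grows exponentially):

def vantieghem_is_prime(n):
    # n >= 3 is prime iff prod_{k=1}^{n-1} (2^k - 1) is congruent to n modulo 2^n - 1
    m = 2 ** n - 1
    prod = 1
    for k in range(1, n):
        prod = prod * (2 ** k - 1) % m
    return prod == n % m

print([n for n in range(3, 30) if vantieghem_is_prime(n)])  # [3, 5, 7, 11, 13, 17, 19, 23, 29]
print(vantieghem_is_prime(9))  # False: the product is congruent to 301, not 9, modulo 511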
Wikipedia
Vanya Mirzoyan Vanya Mirzoyan (Armenian: Վանյա Միրզոյան, born 5 July 1948) Armenian scientist-mathematician. Vanya Aleksandrovich Mirzoyan Born (1948-07-05) July 5, 1948 Mountainous Jagir Village, Shamkhor, Artsakh Academic work DisciplineScience InstitutionsNational Polytechnic University of Armenia Biography V.A. Mirzoyan was born in Mountainous Jagir, an Armenian Village located in Shamkhor District of Artsakh. His father, Aleksandr Ghazar Mirzoyan, was a teacher of Geography and Astronomy at the Secondary School of Mountainous Jagir, mother - Arshaluys Sergey Harutyunyan was an employee. From 1964 to 1968 he studied at Yerevan Technical College of Electronic Computers. In 1967 graduated from Yerevan Secondary Correspondence School 3 and was admitted to Yerevan State University, Department of Mechanics and Mathematics, which he graduated in 1972. From 1972 to 1974 he served in the Soviet Army as an officer. From October 1975 to October 1978 he pursued his targeted postgraduate studies at the University of Tartu, Estonia, with a degree in “Geometry and Topology” under scientific supervision of Doctor of Physical and Mathematical sciences, member of the Estonian Academy of Sciences, professor Ülo G. Lumiste. From 1979 to 1981 he worked as a professor of the Algebra and Geometry Department at Armenian State Pedagogical University named after Khachatur Abovian. Since 1981, he has been a staff member of National Polytechnic University of Armenia (Yerevan), held the positions of Assistant, Associate Professor, Professor, Head of Department. Scientific interests Scientific interests include Riemannian geometry, which studies Riemannian manifolds and submanifolds with natural parallel and semi-parallel tensor fields. These are Riemannian symmetric, semi-symmetric, Einstein, semi-Einstein, Ricci-semisymmetric manifolds and their isometric realizations in spaces of constant curvature. Scientific results • Has given general local classification of Riemannian Ricci-semisymmetric manifolds, • Has opened semi-Einstein manifolds and singled out the class of such manifolds in the form of cones over Einstein manifolds, • Has given the local classification and geometric description of Ricci-semisymmetric hypersurfaces in Euclidean spaces, • Has studied and geometrically described various classes of Semi-Einstein submanifolds of arbitrary codimension in Euclidean spaces, • Has established fundamental interrelation between submanifolds with parallel tensor fields and submanifolds with corresponding semi-parallel tensor fields in spaces of constant curvature, • Has given general local classification of normally flat Ricci-semisymmetric submanifolds in Euclidean spaces. Awards • Candidate of Physical and Mathematical Sciences (21.01.1980, Moscow, Moscow State Pedagogical University named after V.I.Lenin) • Doctor of Physical and Mathematical Sciences (21.01.1999, Kazan, KSU) References External links • Profile at Marquis Who's Who • Profile at Russian Mathematical Portal • Profile at Hayazg.info • Main Scientific Works Authority control: Academics • MathSciNet • Scopus • zbMATH
Wikipedia
Variance In probability theory and statistics, variance is the expected value of the squared deviation from the mean of a random variable. Equivalently, the variance is the square of the standard deviation. Variance is a measure of dispersion, meaning it is a measure of how far a set of numbers is spread out from their average value. It is the second central moment of a distribution, and the covariance of the random variable with itself, and it is often represented by $\sigma ^{2}$, $s^{2}$, $\operatorname {Var} (X)$, $V(X)$, or $\mathbb {V} (X)$.[1] An advantage of variance as a measure of dispersion is that it is more amenable to algebraic manipulation than other measures of dispersion such as the expected absolute deviation; for example, the variance of a sum of uncorrelated random variables is equal to the sum of their variances. A disadvantage of the variance for practical applications is that, unlike the standard deviation, its units differ from those of the random variable, which is why the standard deviation is more commonly reported as a measure of dispersion once the calculation is finished. There are two distinct concepts that are both called "variance". One, as discussed above, is part of a theoretical probability distribution and is defined by an equation. The other variance is a characteristic of a set of observations. When variance is calculated from observations, those observations are typically measured from a real world system. If all possible observations of the system are present then the calculated variance is called the population variance. Normally, however, only a subset is available, and the variance calculated from this is called the sample variance. The variance calculated from a sample is considered an estimate of the full population variance. There are multiple ways to calculate an estimate of the population variance, as discussed in the section below. The two kinds of variance are closely related. To see how, consider that a theoretical probability distribution can be used as a generator of hypothetical observations. If an infinite number of observations are generated using a distribution, then the sample variance calculated from that infinite set will match the value calculated using the distribution's equation for variance. Variance has a central role in statistics, where some ideas that use it include descriptive statistics, statistical inference, hypothesis testing, goodness of fit, and Monte Carlo sampling. Etymology The term variance was first introduced by Ronald Fisher in his 1918 paper The Correlation Between Relatives on the Supposition of Mendelian Inheritance:[2] The great body of available statistics show us that the deviations of a human measurement from its mean follow very closely the Normal Law of Errors, and, therefore, that the variability may be uniformly measured by the standard deviation corresponding to the square root of the mean square error. When there are two independent causes of variability capable of producing in an otherwise uniform population distributions with standard deviations $\sigma _{1}$ and $\sigma _{2}$, it is found that the distribution, when both causes act together, has a standard deviation ${\sqrt {\sigma _{1}^{2}+\sigma _{2}^{2}}}$. It is therefore desirable in analysing the causes of variability to deal with the square of the standard deviation as the measure of variability. We shall term this quantity the Variance... 
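To make the distinction between population variance and sample variance concrete before the formal definitions below, here is a small numerical sketch in Python (added for illustration; the data values are invented):

data = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]
n = len(data)
mean = sum(data) / n

# Population variance: average squared deviation, treating the data as the whole population
population_var = sum((x - mean) ** 2 for x in data) / n

# Sample variance: the usual unbiased estimate of a population variance divides by n - 1
sample_var = sum((x - mean) ** 2 for x in data) / (n - 1)

print(mean, population_var, sample_var)  # 5.0 4.0 4.571428571428571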
Definition The variance of a random variable $X$ is the expected value of the squared deviation from the mean of $X$, $\mu =\operatorname {E} [X]$: $\operatorname {Var} (X)=\operatorname {E} \left[(X-\mu )^{2}\right].$ This definition encompasses random variables that are generated by processes that are discrete, continuous, neither, or mixed. The variance can also be thought of as the covariance of a random variable with itself: $\operatorname {Var} (X)=\operatorname {Cov} (X,X).$ The variance is also equivalent to the second cumulant of a probability distribution that generates $X$. The variance is typically designated as $\operatorname {Var} (X)$, or sometimes as $V(X)$ or $\mathbb {V} (X)$, or symbolically as $\sigma _{X}^{2}$ or simply $\sigma ^{2}$ (pronounced "sigma squared"). The expression for the variance can be expanded as follows: ${\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left[(X-\operatorname {E} [X])^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}-2X\operatorname {E} [X]+\operatorname {E} [X]^{2}\right]\\[4pt]&=\operatorname {E} \left[X^{2}\right]-2\operatorname {E} [X]\operatorname {E} [X]+\operatorname {E} [X]^{2}\\[4pt]&=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}\end{aligned}}$ In other words, the variance of X is equal to the mean of the square of X minus the square of the mean of X. This equation should not be used for computations using floating point arithmetic, because it suffers from catastrophic cancellation if the two components of the equation are similar in magnitude. For other numerically stable alternatives, see Algorithms for calculating variance. Discrete random variable If the generator of random variable $X$ is discrete with probability mass function $x_{1}\mapsto p_{1},x_{2}\mapsto p_{2},\ldots ,x_{n}\mapsto p_{n}$, then $\operatorname {Var} (X)=\sum _{i=1}^{n}p_{i}\cdot (x_{i}-\mu )^{2},$ where $\mu $ is the expected value. That is, $\mu =\sum _{i=1}^{n}p_{i}x_{i}.$ (When such a discrete weighted variance is specified by weights whose sum is not 1, then one divides by the sum of the weights.) The variance of a collection of $n$ equally likely values can be written as $\operatorname {Var} (X)={\frac {1}{n}}\sum _{i=1}^{n}(x_{i}-\mu )^{2}$ where $\mu $ is the average value. 
That is, $\mu ={\frac {1}{n}}\sum _{i=1}^{n}x_{i}.$ The variance of a set of $n$ equally likely values can be equivalently expressed, without directly referring to the mean, in terms of squared deviations of all pairwise squared distances of points from each other:[3] $\operatorname {Var} (X)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\sum _{j=1}^{n}{\frac {1}{2}}(x_{i}-x_{j})^{2}={\frac {1}{n^{2}}}\sum _{i}\sum _{j>i}(x_{i}-x_{j})^{2}.$ Absolutely continuous random variable If the random variable $X$ has a probability density function $f(x)$, and $F(x)$ is the corresponding cumulative distribution function, then ${\begin{aligned}\operatorname {Var} (X)=\sigma ^{2}&=\int _{\mathbb {R} }(x-\mu )^{2}f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}f(x)\,dx-2\mu \int _{\mathbb {R} }xf(x)\,dx+\mu ^{2}\int _{\mathbb {R} }f(x)\,dx\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \int _{\mathbb {R} }x\,dF(x)+\mu ^{2}\int _{\mathbb {R} }\,dF(x)\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-2\mu \cdot \mu +\mu ^{2}\cdot 1\\[4pt]&=\int _{\mathbb {R} }x^{2}\,dF(x)-\mu ^{2},\end{aligned}}$ or equivalently, $\operatorname {Var} (X)=\int _{\mathbb {R} }x^{2}f(x)\,dx-\mu ^{2},$ where $\mu $ is the expected value of $X$ given by $\mu =\int _{\mathbb {R} }xf(x)\,dx=\int _{\mathbb {R} }x\,dF(x).$ In these formulas, the integrals with respect to $dx$ and $dF(x)$ are Lebesgue and Lebesgue–Stieltjes integrals, respectively. If the function $x^{2}f(x)$ is Riemann-integrable on every finite interval $[a,b]\subset \mathbb {R} ,$ then $\operatorname {Var} (X)=\int _{-\infty }^{+\infty }x^{2}f(x)\,dx-\mu ^{2},$ where the integral is an improper Riemann integral. Examples Exponential distribution The exponential distribution with parameter λ is a continuous distribution whose probability density function is given by $f(x)=\lambda e^{-\lambda x}$ on the interval [0, ∞). Its mean can be shown to be $\operatorname {E} [X]=\int _{0}^{\infty }x\lambda e^{-\lambda x}\,dx={\frac {1}{\lambda }}.$ Using integration by parts and making use of the expected value already calculated, we have: ${\begin{aligned}\operatorname {E} \left[X^{2}\right]&=\int _{0}^{\infty }x^{2}\lambda e^{-\lambda x}\,dx\\&=\left[-x^{2}e^{-\lambda x}\right]_{0}^{\infty }+\int _{0}^{\infty }2xe^{-\lambda x}\,dx\\&=0+{\frac {2}{\lambda }}\operatorname {E} [X]\\&={\frac {2}{\lambda ^{2}}}.\end{aligned}}$ Thus, the variance of X is given by $\operatorname {Var} (X)=\operatorname {E} \left[X^{2}\right]-\operatorname {E} [X]^{2}={\frac {2}{\lambda ^{2}}}-\left({\frac {1}{\lambda }}\right)^{2}={\frac {1}{\lambda ^{2}}}.$ Fair die A fair six-sided die can be modeled as a discrete random variable, X, with outcomes 1 through 6, each with equal probability 1/6. 
The expected value of X is $(1+2+3+4+5+6)/6=7/2.$ Therefore, the variance of X is ${\begin{aligned}\operatorname {Var} (X)&=\sum _{i=1}^{6}{\frac {1}{6}}\left(i-{\frac {7}{2}}\right)^{2}\\[5pt]&={\frac {1}{6}}\left((-5/2)^{2}+(-3/2)^{2}+(-1/2)^{2}+(1/2)^{2}+(3/2)^{2}+(5/2)^{2}\right)\\[5pt]&={\frac {35}{12}}\approx 2.92.\end{aligned}}$ The general formula for the variance of the outcome, X, of an n-sided die is ${\begin{aligned}\operatorname {Var} (X)&=\operatorname {E} \left(X^{2}\right)-(\operatorname {E} (X))^{2}\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}i^{2}-\left({\frac {1}{n}}\sum _{i=1}^{n}i\right)^{2}\\[5pt]&={\frac {(n+1)(2n+1)}{6}}-\left({\frac {n+1}{2}}\right)^{2}\\[4pt]&={\frac {n^{2}-1}{12}}.\end{aligned}}$

Commonly used probability distributions
The following lists the probability function, mean, and variance for some commonly used probability distributions.
• Binomial distribution: $\Pr \,(X=k)={\binom {n}{k}}p^{k}(1-p)^{n-k}$; mean $np$; variance $np(1-p)$
• Geometric distribution: $\Pr \,(X=k)=(1-p)^{k-1}p$; mean ${\frac {1}{p}}$; variance ${\frac {(1-p)}{p^{2}}}$
• Normal distribution: $f\left(x\mid \mu ,\sigma ^{2}\right)={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{-{\frac {(x-\mu )^{2}}{2\sigma ^{2}}}}$; mean $\mu $; variance $\sigma ^{2}$
• Uniform distribution (continuous): $f(x\mid a,b)={\begin{cases}{\frac {1}{b-a}}&{\text{for }}a\leq x\leq b,\\[3pt]0&{\text{for }}x<a{\text{ or }}x>b\end{cases}}$; mean ${\frac {a+b}{2}}$; variance ${\frac {(b-a)^{2}}{12}}$
• Exponential distribution: $f(x\mid \lambda )=\lambda e^{-\lambda x}$; mean ${\frac {1}{\lambda }}$; variance ${\frac {1}{\lambda ^{2}}}$
• Poisson distribution: $f(k\mid \lambda )={\frac {e^{-\lambda }\lambda ^{k}}{k!}}$; mean $\lambda $; variance $\lambda $

Properties
Basic properties
Variance is non-negative because the squares are positive or zero: $\operatorname {Var} (X)\geq 0.$ The variance of a constant is zero. $\operatorname {Var} (a)=0.$ Conversely, if the variance of a random variable is 0, then it is almost surely a constant. That is, it always has the same value: $\operatorname {Var} (X)=0\iff \exists a:P(X=a)=1.$

Issues of finiteness
If a distribution does not have a finite expected value, as is the case for the Cauchy distribution, then the variance cannot be finite either. However, some distributions may not have a finite variance, despite their expected value being finite. An example is a Pareto distribution whose index $k$ satisfies $1<k\leq 2.$

Decomposition
The general formula for variance decomposition or the law of total variance is: If $X$ and $Y$ are two random variables, and the variance of $X$ exists, then $\operatorname {Var} [X]=\operatorname {E} (\operatorname {Var} [X\mid Y])+\operatorname {Var} (\operatorname {E} [X\mid Y]).$ The conditional expectation $\operatorname {E} (X\mid Y)$ of $X$ given $Y$, and the conditional variance $\operatorname {Var} (X\mid Y)$ may be understood as follows. Given any particular value y of the random variable Y, there is a conditional expectation $\operatorname {E} (X\mid Y=y)$ given the event Y = y. This quantity depends on the particular value y; it is a function $g(y)=\operatorname {E} (X\mid Y=y)$.
That same function evaluated at the random variable Y is the conditional expectation $\operatorname {E} (X\mid Y)=g(Y).$ In particular, if $Y$ is a discrete random variable assuming possible values $y_{1},y_{2},y_{3}\ldots $ with corresponding probabilities $p_{1},p_{2},p_{3}\ldots ,$, then in the formula for total variance, the first term on the right-hand side becomes $\operatorname {E} (\operatorname {Var} [X\mid Y])=\sum _{i}p_{i}\sigma _{i}^{2},$ where $\sigma _{i}^{2}=\operatorname {Var} [X\mid Y=y_{i}]$. Similarly, the second term on the right-hand side becomes $\operatorname {Var} (\operatorname {E} [X\mid Y])=\sum _{i}p_{i}\mu _{i}^{2}-\left(\sum _{i}p_{i}\mu _{i}\right)^{2}=\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2},$ where $\mu _{i}=\operatorname {E} [X\mid Y=y_{i}]$ and $\mu =\sum _{i}p_{i}\mu _{i}$. Thus the total variance is given by $\operatorname {Var} [X]=\sum _{i}p_{i}\sigma _{i}^{2}+\left(\sum _{i}p_{i}\mu _{i}^{2}-\mu ^{2}\right).$ A similar formula is applied in analysis of variance, where the corresponding formula is ${\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{between}}+{\mathit {MS}}_{\text{within}};$ here ${\mathit {MS}}$ refers to the Mean of the Squares. In linear regression analysis the corresponding formula is ${\mathit {MS}}_{\text{total}}={\mathit {MS}}_{\text{regression}}+{\mathit {MS}}_{\text{residual}}.$ This can also be derived from the additivity of variances, since the total (observed) score is the sum of the predicted score and the error score, where the latter two are uncorrelated. Similar decompositions are possible for the sum of squared deviations (sum of squares, ${\mathit {SS}}$): ${\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{between}}+{\mathit {SS}}_{\text{within}},$ ${\mathit {SS}}_{\text{total}}={\mathit {SS}}_{\text{regression}}+{\mathit {SS}}_{\text{residual}}.$ Calculation from the CDF The population variance for a non-negative random variable can be expressed in terms of the cumulative distribution function F using $2\int _{0}^{\infty }u(1-F(u))\,du-\left(\int _{0}^{\infty }(1-F(u))\,du\right)^{2}.$ This expression can be used to calculate the variance in situations where the CDF, but not the density, can be conveniently expressed. Characteristic property The second moment of a random variable attains the minimum value when taken around the first moment (i.e., mean) of the random variable, i.e. $\mathrm {argmin} _{m}\,\mathrm {E} \left(\left(X-m\right)^{2}\right)=\mathrm {E} (X)$. Conversely, if a continuous function $\varphi $ satisfies $\mathrm {argmin} _{m}\,\mathrm {E} (\varphi (X-m))=\mathrm {E} (X)$ for all random variables X, then it is necessarily of the form $\varphi (x)=ax^{2}+b$, where a > 0. This also holds in the multidimensional case.[4] Units of measurement Unlike the expected absolute deviation, the variance of a variable has units that are the square of the units of the variable itself. For example, a variable measured in meters will have a variance measured in meters squared. For this reason, describing data sets via their standard deviation or root mean square deviation is often preferred over using the variance. In the dice example the standard deviation is √2.9 ≈ 1.7, slightly larger than the expected absolute deviation of 1.5. The standard deviation and the expected absolute deviation can both be used as an indicator of the "spread" of a distribution. 
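Both numbers quoted for the dice example are easy to reproduce; here is a minimal Python check using the six equally likely outcomes of the fair-die example above.

```python
# Minimal check of the dice example: standard deviation versus expected absolute deviation.
outcomes = [1, 2, 3, 4, 5, 6]
mu = sum(outcomes) / 6                                  # 3.5
variance = sum((x - mu) ** 2 for x in outcomes) / 6     # 35/12 ≈ 2.92
std_dev = variance ** 0.5                               # ≈ 1.71
mean_abs_dev = sum(abs(x - mu) for x in outcomes) / 6   # 1.5
print(variance, std_dev, mean_abs_dev)
```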
The standard deviation is more amenable to algebraic manipulation than the expected absolute deviation, and, together with variance and its generalization covariance, is used frequently in theoretical statistics; however the expected absolute deviation tends to be more robust as it is less sensitive to outliers arising from measurement anomalies or an unduly heavy-tailed distribution. Propagation Addition and multiplication by a constant Variance is invariant with respect to changes in a location parameter. That is, if a constant is added to all values of the variable, the variance is unchanged: $\operatorname {Var} (X+a)=\operatorname {Var} (X).$ If all values are scaled by a constant, the variance is scaled by the square of that constant: $\operatorname {Var} (aX)=a^{2}\operatorname {Var} (X).$ The variance of a sum of two random variables is given by $\operatorname {Var} (aX+bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)+2ab\,\operatorname {Cov} (X,Y)$ $\operatorname {Var} (aX-bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)-2ab\,\operatorname {Cov} (X,Y)$ where $\operatorname {Cov} (X,Y)$ is the covariance. Linear combinations In general, for the sum of $N$ random variables $\{X_{1},\dots ,X_{N}\}$, the variance becomes: $\operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i,j=1}^{N}\operatorname {Cov} (X_{i},X_{j})=\sum _{i=1}^{N}\operatorname {Var} (X_{i})+\sum _{i\neq j}\operatorname {Cov} (X_{i},X_{j}),$ see also general Bienaymé's identity. These results lead to the variance of a linear combination as: ${\begin{aligned}\operatorname {Var} \left(\sum _{i=1}^{N}a_{i}X_{i}\right)&=\sum _{i,j=1}^{N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+\sum _{i\not =j}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j})\\&=\sum _{i=1}^{N}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq N}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j}).\end{aligned}}$ If the random variables $X_{1},\dots ,X_{N}$ are such that $\operatorname {Cov} (X_{i},X_{j})=0\ ,\ \forall \ (i\neq j),$ then they are said to be uncorrelated. It follows immediately from the expression given earlier that if the random variables $X_{1},\dots ,X_{N}$ are uncorrelated, then the variance of their sum is equal to the sum of their variances, or, expressed symbolically: $\operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\sum _{i=1}^{N}\operatorname {Var} (X_{i}).$ Since independent random variables are always uncorrelated (see Covariance § Uncorrelatedness and independence), the equation above holds in particular when the random variables $X_{1},\dots ,X_{n}$ are independent. Thus, independence is sufficient but not necessary for the variance of the sum to equal the sum of the variances. Matrix notation for the variance of a linear combination Define $X$ as a column vector of $n$ random variables $X_{1},\ldots ,X_{n}$, and $c$ as a column vector of $n$ scalars $c_{1},\ldots ,c_{n}$. Therefore, $c^{\mathsf {T}}X$ is a linear combination of these random variables, where $c^{\mathsf {T}}$ denotes the transpose of $c$. Also let $\Sigma $ be the covariance matrix of $X$. 
The variance of $c^{\mathsf {T}}X$ is then given by:[5] $\operatorname {Var} \left(c^{\mathsf {T}}X\right)=c^{\mathsf {T}}\Sigma c.$ This implies that the variance of the mean can be written as (with a column vector of ones) $\operatorname {Var} \left({\bar {x}}\right)=\operatorname {Var} \left({\frac {1}{n}}1'X\right)={\frac {1}{n^{2}}}1'\Sigma 1.$ Sum of uncorrelated variables One reason for the use of the variance in preference to other measures of dispersion is that the variance of the sum (or the difference) of uncorrelated random variables is the sum of their variances: $\operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\operatorname {Var} (X_{i}).$ This statement is called the Bienaymé formula[6] and was discovered in 1853.[7][8] It is often made with the stronger condition that the variables are independent, but being uncorrelated suffices. So if all the variables have the same variance σ2, then, since division by n is a linear transformation, this formula immediately implies that the variance of their mean is $\operatorname {Var} \left({\overline {X}}\right)=\operatorname {Var} \left({\frac {1}{n}}\sum _{i=1}^{n}X_{i}\right)={\frac {1}{n^{2}}}\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)={\frac {1}{n^{2}}}n\sigma ^{2}={\frac {\sigma ^{2}}{n}}.$ That is, the variance of the mean decreases when n increases. This formula for the variance of the mean is used in the definition of the standard error of the sample mean, which is used in the central limit theorem. To prove the initial statement, it suffices to show that $\operatorname {Var} (X+Y)=\operatorname {Var} (X)+\operatorname {Var} (Y).$ The general result then follows by induction. Starting with the definition, ${\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} \left[(X+Y)^{2}\right]-(\operatorname {E} [X+Y])^{2}\\[5pt]&=\operatorname {E} \left[X^{2}+2XY+Y^{2}\right]-(\operatorname {E} [X]+\operatorname {E} [Y])^{2}.\end{aligned}}$ Using the linearity of the expectation operator and the assumption of independence (or uncorrelatedness) of X and Y, this further simplifies as follows: ${\begin{aligned}\operatorname {Var} (X+Y)&=\operatorname {E} \left[X^{2}\right]+2\operatorname {E} [XY]+\operatorname {E} \left[Y^{2}\right]-\left(\operatorname {E} [X]^{2}+2\operatorname {E} [X]\operatorname {E} [Y]+\operatorname {E} [Y]^{2}\right)\\[5pt]&=\operatorname {E} \left[X^{2}\right]+\operatorname {E} \left[Y^{2}\right]-\operatorname {E} [X]^{2}-\operatorname {E} [Y]^{2}\\[5pt]&=\operatorname {Var} (X)+\operatorname {Var} (Y).\end{aligned}}$ Sum of correlated variables with fixed sample size In general, the variance of the sum of n variables is the sum of their covariances: $\operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)=\sum _{i=1}^{n}\sum _{j=1}^{n}\operatorname {Cov} \left(X_{i},X_{j}\right)=\sum _{i=1}^{n}\operatorname {Var} \left(X_{i}\right)+2\sum _{1\leq i<j\leq n}\operatorname {Cov} \left(X_{i},X_{j}\right).$ (Note: The second equality comes from the fact that Cov(Xi,Xi) = Var(Xi).) Here, $\operatorname {Cov} (\cdot ,\cdot )$ is the covariance, which is zero for independent random variables (if it exists). The formula states that the variance of a sum is equal to the sum of all elements in the covariance matrix of the components. 
The next expression states equivalently that the variance of the sum is the sum of the diagonal of covariance matrix plus two times the sum of its upper triangular elements (or its lower triangular elements); this emphasizes that the covariance matrix is symmetric. This formula is used in the theory of Cronbach's alpha in classical test theory. So if the variables have equal variance σ2 and the average correlation of distinct variables is ρ, then the variance of their mean is $\operatorname {Var} \left({\overline {X}}\right)={\frac {\sigma ^{2}}{n}}+{\frac {n-1}{n}}\rho \sigma ^{2}.$ This implies that the variance of the mean increases with the average of the correlations. In other words, additional correlated observations are not as effective as additional independent observations at reducing the uncertainty of the mean. Moreover, if the variables have unit variance, for example if they are standardized, then this simplifies to $\operatorname {Var} \left({\overline {X}}\right)={\frac {1}{n}}+{\frac {n-1}{n}}\rho .$ This formula is used in the Spearman–Brown prediction formula of classical test theory. This converges to ρ if n goes to infinity, provided that the average correlation remains constant or converges too. So for the variance of the mean of standardized variables with equal correlations or converging average correlation we have $\lim _{n\to \infty }\operatorname {Var} \left({\overline {X}}\right)=\rho .$ Therefore, the variance of the mean of a large number of standardized variables is approximately equal to their average correlation. This makes clear that the sample mean of correlated variables does not generally converge to the population mean, even though the law of large numbers states that the sample mean will converge for independent variables. Sum of uncorrelated variables with random sample size There are cases when a sample is taken without knowing, in advance, how many observations will be acceptable according to some criterion. In such cases, the sample size N is a random variable whose variation adds to the variation of X, such that, $\operatorname {Var} \left(\sum _{i=1}^{N}X_{i}\right)=\operatorname {E} \left[N\right]\operatorname {Var} (X)+\operatorname {Var} (N)(\operatorname {E} \left[X\right])^{2}$[9] which follows from the law of total variance. If N has a Poisson distribution, then $\operatorname {E} [N]=\operatorname {Var} (N)$ with estimator n = N. So, the estimator of $\operatorname {Var} \left(\sum _{i=1}^{n}X_{i}\right)$ becomes $n{S_{x}}^{2}+n{\bar {X}}^{2}$, giving $\operatorname {SE} ({\bar {X}})={\sqrt {\frac {{S_{x}}^{2}+{\bar {X}}^{2}}{n}}}$ (see standard error of the sample mean). Weighted sum of variables The scaling property and the Bienaymé formula, along with the property of the covariance Cov(aX, bY) = ab Cov(X, Y) jointly imply that $\operatorname {Var} (aX\pm bY)=a^{2}\operatorname {Var} (X)+b^{2}\operatorname {Var} (Y)\pm 2ab\,\operatorname {Cov} (X,Y).$ This implies that in a weighted sum of variables, the variable with the largest weight will have a disproportionally large weight in the variance of the total. For example, if X and Y are uncorrelated and the weight of X is two times the weight of Y, then the weight of the variance of X will be four times the weight of the variance of Y. 
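As a concrete check of this propagation rule (the variances, covariance, and weights below are made-up values for illustration), the same weighted-sum variance is obtained either from the formula just given or from the matrix form $c^{\mathsf {T}}\Sigma c$ introduced earlier.

```python
# Minimal check (values are made up): variance of a weighted sum of two correlated
# variables, comparing the propagation formula with the matrix form c^T Sigma c.
import numpy as np

var_x, var_y, cov_xy = 1.0, 4.0, 0.5      # hypothetical second moments
a, b = 2.0, 1.0                            # weights (X carries twice the weight of Y)

Sigma = np.array([[var_x, cov_xy],
                  [cov_xy, var_y]])        # covariance matrix of (X, Y)
c = np.array([a, b])

var_formula = a ** 2 * var_x + b ** 2 * var_y + 2 * a * b * cov_xy
var_matrix = c @ Sigma @ c
print(var_formula, var_matrix)             # both 10.0
```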
This two-variable expression can be extended to a weighted sum of multiple variables: $\operatorname {Var} \left(\sum _{i=1}^{n}a_{i}X_{i}\right)=\sum _{i=1}^{n}a_{i}^{2}\operatorname {Var} (X_{i})+2\sum _{1\leq i<j\leq n}a_{i}a_{j}\operatorname {Cov} (X_{i},X_{j}).$

Product of independent variables
If two variables X and Y are independent, the variance of their product is given by[10] $\operatorname {Var} (XY)=[\operatorname {E} (X)]^{2}\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\operatorname {Var} (X)+\operatorname {Var} (X)\operatorname {Var} (Y).$ Equivalently, using the basic properties of expectation, it is given by $\operatorname {Var} (XY)=\operatorname {E} \left(X^{2}\right)\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (X)]^{2}[\operatorname {E} (Y)]^{2}.$

Product of statistically dependent variables
In general, if two variables are statistically dependent, then the variance of their product is given by: ${\begin{aligned}\operatorname {Var} (XY)={}&\operatorname {E} \left[X^{2}Y^{2}\right]-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\operatorname {E} (X^{2})\operatorname {E} \left(Y^{2}\right)-[\operatorname {E} (XY)]^{2}\\[5pt]={}&\operatorname {Cov} \left(X^{2},Y^{2}\right)+\left(\operatorname {Var} (X)+[\operatorname {E} (X)]^{2}\right)\left(\operatorname {Var} (Y)+[\operatorname {E} (Y)]^{2}\right)\\[5pt]&-[\operatorname {Cov} (X,Y)+\operatorname {E} (X)\operatorname {E} (Y)]^{2}\end{aligned}}$

Arbitrary functions
The delta method uses second-order Taylor expansions to approximate the variance of a function of one or more random variables: see Taylor expansions for the moments of functions of random variables. For example, the approximate variance of a function of one variable is given by $\operatorname {Var} \left[f(X)\right]\approx \left(f'(\operatorname {E} \left[X\right])\right)^{2}\operatorname {Var} \left[X\right]$ provided that f is twice differentiable and that the mean and variance of X are finite.

Population variance and sample variance
See also: Unbiased estimation of standard deviation
Real-world observations such as the measurements of yesterday's rain throughout the day typically cannot be complete sets of all possible observations that could be made. As such, the variance calculated from the finite set will in general not match the variance that would have been calculated from the full population of possible observations. This means that one estimates the mean and variance from a limited set of observations by using an estimator equation. The estimator is a function of the sample of n observations drawn without observational bias from the whole population of potential observations. In this example that sample would be the set of actual measurements of yesterday's rainfall from available rain gauges within the geography of interest. The simplest estimators for population mean and population variance are simply the mean and variance of the sample, the sample mean and (uncorrected) sample variance – these are consistent estimators (they converge to the correct value as the number of samples increases), but can be improved. Estimating the population variance by taking the sample's variance is close to optimal in general, but can be improved in two ways. Most simply, the sample variance is computed as an average of squared deviations about the (sample) mean, by dividing by n. However, using values other than n improves the estimator in various ways.
Four common values for the denominator are n, n − 1, n + 1, and n − 1.5: n is the simplest (population variance of the sample), n − 1 eliminates bias, n + 1 minimizes mean squared error for the normal distribution, and n − 1.5 mostly eliminates bias in unbiased estimation of standard deviation for the normal distribution. Firstly, if the true population mean is unknown, then the sample variance (which uses the sample mean in place of the true mean) is a biased estimator: it underestimates the variance by a factor of (n − 1) / n; correcting by this factor (dividing by n − 1 instead of n) is called Bessel's correction. The resulting estimator is unbiased, and is called the (corrected) sample variance or unbiased sample variance. For example, when n = 1 the variance of a single observation about the sample mean (itself) is obviously zero regardless of the population variance. If the mean is determined in some other way than from the same samples used to estimate the variance then this bias does not arise and the variance can safely be estimated as that of the samples about the (independently known) mean. Secondly, the sample variance does not generally minimize mean squared error between sample variance and population variance. Correcting for bias often makes this worse: one can always choose a scale factor that performs better than the corrected sample variance, though the optimal scale factor depends on the excess kurtosis of the population (see mean squared error: variance), and introduces bias. This always consists of scaling down the unbiased estimator (dividing by a number larger than n − 1), and is a simple example of a shrinkage estimator: one "shrinks" the unbiased estimator towards zero. For the normal distribution, dividing by n + 1 (instead of n − 1 or n) minimizes mean squared error. The resulting estimator is biased, however, and is known as the biased sample variation. Population variance In general, the population variance of a finite population of size N with values xi is given by ${\begin{aligned}\sigma ^{2}&={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}-\mu \right)^{2}={\frac {1}{N}}\sum _{i=1}^{N}\left(x_{i}^{2}-2\mu x_{i}+\mu ^{2}\right)\\[5pt]&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-2\mu \left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)+\mu ^{2}\\[5pt]&=\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\mu ^{2}\end{aligned}}$ where the population mean is $\mu ={\frac {1}{N}}\sum _{i=1}^{N}x_{i}.$ The population variance can also be computed using $\sigma ^{2}={\frac {1}{N^{2}}}\sum _{i<j}\left(x_{i}-x_{j}\right)^{2}={\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}-x_{j}\right)^{2}.$ This is true because ${\begin{aligned}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}-x_{j}\right)^{2}\\[5pt]={}&{\frac {1}{2N^{2}}}\sum _{i,j=1}^{N}\left(x_{i}^{2}-2x_{i}x_{j}+x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2N}}\sum _{j=1}^{N}\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}^{2}\right)-\left({\frac {1}{N}}\sum _{i=1}^{N}x_{i}\right)\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}\right)+{\frac {1}{2N}}\sum _{i=1}^{N}\left({\frac {1}{N}}\sum _{j=1}^{N}x_{j}^{2}\right)\\[5pt]={}&{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)-\mu ^{2}+{\frac {1}{2}}\left(\sigma ^{2}+\mu ^{2}\right)\\[5pt]={}&\sigma ^{2}\end{aligned}}$ The population variance matches the variance of the generating probability distribution. In this sense, the concept of population can be extended to continuous random variables with infinite populations. 
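These identities, including the pairwise form that never references the mean, can be verified numerically; the following sketch uses a small made-up population.

```python
# Minimal check (population values are made up) of the population-variance identities,
# including the pairwise-difference form that avoids the mean.
import numpy as np

x = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])   # a small finite population
N = len(x)
mu = x.mean()

var_direct = ((x - mu) ** 2).mean()                # (1/N) * sum (x_i - mu)^2
var_moment = (x ** 2).mean() - mu ** 2             # (1/N) * sum x_i^2 - mu^2
diffs = x[:, None] - x[None, :]
var_pairwise = (diffs ** 2).sum() / (2 * N ** 2)   # (1/(2N^2)) * sum_{i,j} (x_i - x_j)^2
print(var_direct, var_moment, var_pairwise)        # all equal 4.0
```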
Sample variance See also: Sample standard deviation Biased sample variance In many practical situations, the true variance of a population is not known a priori and must be computed somehow. When dealing with extremely large populations, it is not possible to count every object in the population, so the computation must be performed on a sample of the population.[11] This is generally referred to as sample variance or empirical variance. Sample variance can also be applied to the estimation of the variance of a continuous distribution from a sample of that distribution. We take a sample with replacement of n values Y1, ..., Yn from the population, where n < N, and estimate the variance on the basis of this sample.[12] Directly taking the variance of the sample data gives the average of the squared deviations: ${\tilde {S}}_{Y}^{2}={\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}=\left({\frac {1}{n}}\sum _{i=1}^{n}Y_{i}^{2}\right)-{\overline {Y}}^{2}={\frac {1}{n^{2}}}\sum _{i,j\,:\,i<j}\left(Y_{i}-Y_{j}\right)^{2}.$ Here, ${\overline {Y}}$ denotes the sample mean: ${\overline {Y}}={\frac {1}{n}}\sum _{i=1}^{n}Y_{i}.$ Since the Yi are selected randomly, both ${\overline {Y}}$ and ${\tilde {S}}_{Y}^{2}$ are random variables. Their expected values can be evaluated by averaging over the ensemble of all possible samples {Yi} of size n from the population. For ${\tilde {S}}_{Y}^{2}$ this gives: ${\begin{aligned}\operatorname {E} [{\tilde {S}}_{Y}^{2}]&=\operatorname {E} \left[{\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\frac {1}{n}}\sum _{j=1}^{n}Y_{j}\right)^{2}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\operatorname {E} \left[Y_{i}^{2}-{\frac {2}{n}}Y_{i}\sum _{j=1}^{n}Y_{j}+{\frac {1}{n^{2}}}\sum _{j=1}^{n}Y_{j}\sum _{k=1}^{n}Y_{k}\right]\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left({\frac {n-2}{n}}\operatorname {E} \left[Y_{i}^{2}\right]-{\frac {2}{n}}\sum _{j\neq i}\operatorname {E} \left[Y_{i}Y_{j}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\sum _{k\neq j}^{n}\operatorname {E} \left[Y_{j}Y_{k}\right]+{\frac {1}{n^{2}}}\sum _{j=1}^{n}\operatorname {E} \left[Y_{j}^{2}\right]\right)\\[5pt]&={\frac {1}{n}}\sum _{i=1}^{n}\left[{\frac {n-2}{n}}\left(\sigma ^{2}+\mu ^{2}\right)-{\frac {2}{n}}(n-1)\mu ^{2}+{\frac {1}{n^{2}}}n(n-1)\mu ^{2}+{\frac {1}{n}}\left(\sigma ^{2}+\mu ^{2}\right)\right]\\[5pt]&={\frac {n-1}{n}}\sigma ^{2}.\end{aligned}}$ Hence ${\tilde {S}}_{Y}^{2}$ gives an estimate of the population variance that is biased by a factor of ${\frac {n-1}{n}}$. For this reason, ${\tilde {S}}_{Y}^{2}$ is referred to as the biased sample variance. Unbiased sample variance Correcting for this bias yields the unbiased sample variance, denoted $S^{2}$: $S^{2}={\frac {n}{n-1}}{\tilde {S}}_{Y}^{2}={\frac {n}{n-1}}\left[{\frac {1}{n}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}\right]={\frac {1}{n-1}}\sum _{i=1}^{n}\left(Y_{i}-{\overline {Y}}\right)^{2}$ Either estimator may be simply referred to as the sample variance when the version can be determined by context. The same proof is also applicable for samples taken from a continuous probability distribution. The use of the term n − 1 is called Bessel's correction, and it is also used in sample covariance and the sample standard deviation (the square root of variance). The square root is a concave function and thus introduces negative bias (by Jensen's inequality), which depends on the distribution, and thus the corrected sample standard deviation (using Bessel's correction) is biased. 
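A short simulation makes the (n − 1)/n bias factor visible; the sample size, population variance, and number of trials below are arbitrary choices for illustration.

```python
# Minimal simulation (parameter choices are illustrative): bias of the uncorrected
# sample variance versus the Bessel-corrected estimator, for samples of size n = 5.
import numpy as np

rng = np.random.default_rng(0)
n, sigma2, trials = 5, 4.0, 200_000
samples = rng.normal(0.0, np.sqrt(sigma2), size=(trials, n))

biased = samples.var(axis=1, ddof=0)       # divide by n (uncorrected)
unbiased = samples.var(axis=1, ddof=1)     # divide by n - 1 (Bessel's correction)

print(biased.mean(), (n - 1) / n * sigma2)   # both ≈ 3.2
print(unbiased.mean(), sigma2)               # both ≈ 4.0
```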
The unbiased estimation of standard deviation is a technically involved problem, though for the normal distribution using the term n − 1.5 yields an almost unbiased estimator. The unbiased sample variance is a U-statistic for the function ƒ(y1, y2) = (y1 − y2)²/2, meaning that it is obtained by averaging a 2-sample statistic over 2-element subsets of the population.

Distribution of the sample variance
(Figure: the distribution and cumulative distribution of S²/σ², for various values of ν = n − 1, when the yi are independent and normally distributed.)
Being a function of random variables, the sample variance is itself a random variable, and it is natural to study its distribution. In the case that Yi are independent observations from a normal distribution, Cochran's theorem shows that S² follows a scaled chi-squared distribution (see also: asymptotic properties and an elementary proof):[13] $(n-1){\frac {S^{2}}{\sigma ^{2}}}\sim \chi _{n-1}^{2}.$ As a direct consequence, it follows that $\operatorname {E} \left(S^{2}\right)=\operatorname {E} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)=\sigma ^{2},$ and[14] $\operatorname {Var} \left[S^{2}\right]=\operatorname {Var} \left({\frac {\sigma ^{2}}{n-1}}\chi _{n-1}^{2}\right)={\frac {\sigma ^{4}}{(n-1)^{2}}}\operatorname {Var} \left(\chi _{n-1}^{2}\right)={\frac {2\sigma ^{4}}{n-1}}.$ If the Yi are independent and identically distributed, but not necessarily normally distributed, then[15] $\operatorname {E} \left[S^{2}\right]=\sigma ^{2},\quad \operatorname {Var} \left[S^{2}\right]={\frac {\sigma ^{4}}{n}}\left(\kappa -1+{\frac {2}{n-1}}\right)={\frac {1}{n}}\left(\mu _{4}-{\frac {n-3}{n-1}}\sigma ^{4}\right),$ where κ is the kurtosis of the distribution and μ4 is the fourth central moment. If the conditions of the law of large numbers hold for the squared observations, S² is a consistent estimator of σ². One can see indeed that the variance of the estimator tends asymptotically to zero. An asymptotically equivalent formula was given in Kenney and Keeping (1951:164), Rose and Smith (2002:264), and Weisstein (n.d.).[16][17][18]

Samuelson's inequality
Samuelson's inequality is a result that states bounds on the values that individual observations in a sample can take, given that the sample mean and (biased) variance have been calculated.[19] Values must lie within the limits ${\bar {y}}\pm \sigma _{Y}(n-1)^{1/2}.$

Relations with the harmonic and arithmetic means
It has been shown[20] that for a sample {yi} of positive real numbers, $\sigma _{y}^{2}\leq 2y_{\max }(A-H),$ where ymax is the maximum of the sample, A is the arithmetic mean, H is the harmonic mean of the sample and $\sigma _{y}^{2}$ is the (biased) variance of the sample. This bound has been improved, and it is known that variance is bounded by $\sigma _{y}^{2}\leq {\frac {y_{\max }(A-H)(y_{\max }-A)}{y_{\max }-H}},$ $\sigma _{y}^{2}\geq {\frac {y_{\min }(A-H)(A-y_{\min })}{H-y_{\min }}},$ where ymin is the minimum of the sample.[21]

Tests of equality of variances
The F-test of equality of variances and the chi square tests are adequate when the sample is normally distributed. Non-normality makes testing for the equality of two or more variances more difficult. Several non-parametric tests have been proposed: these include the Barton–David–Ansari–Freund–Siegel–Tukey test, the Capon test, Mood test, the Klotz test and the Sukhatme test. The Sukhatme test applies to two variances and requires that both medians be known and equal to zero.
The Mood, Klotz, Capon and Barton–David–Ansari–Freund–Siegel–Tukey tests also apply to two variances. They allow the median to be unknown but do require that the two medians are equal. The Lehmann test is a parametric test of two variances. Of this test there are several variants known. Other tests of the equality of variances include the Box test, the Box–Anderson test and the Moses test. Resampling methods, which include the bootstrap and the jackknife, may be used to test the equality of variances. Moment of inertia The variance of a probability distribution is analogous to the moment of inertia in classical mechanics of a corresponding mass distribution along a line, with respect to rotation about its center of mass. It is because of this analogy that such things as the variance are called moments of probability distributions. The covariance matrix is related to the moment of inertia tensor for multivariate distributions. The moment of inertia of a cloud of n points with a covariance matrix of $\Sigma $ is given by $I=n\left(\mathbf {1} _{3\times 3}\operatorname {tr} (\Sigma )-\Sigma \right).$ This difference between moment of inertia in physics and in statistics is clear for points that are gathered along a line. Suppose many points are close to the x axis and distributed along it. The covariance matrix might look like $\Sigma ={\begin{bmatrix}10&0&0\\0&0.1&0\\0&0&0.1\end{bmatrix}}.$ That is, there is the most variance in the x direction. Physicists would consider this to have a low moment about the x axis so the moment-of-inertia tensor is $I=n{\begin{bmatrix}0.2&0&0\\0&10.1&0\\0&0&10.1\end{bmatrix}}.$ Semivariance The semivariance is calculated in the same manner as the variance but only those observations that fall below the mean are included in the calculation: ${\text{Semivariance}}={1 \over {n}}\sum _{i:x_{i}<\mu }(x_{i}-\mu )^{2}$ It is also described as a specific measure in different fields of application. For skewed distributions, the semivariance can provide additional information that a variance does not.[22] For inequalities associated with the semivariance, see Chebyshev's inequality § Semivariances. Generalizations For complex variables If $x$ is a scalar complex-valued random variable, with values in $\mathbb {C} ,$ then its variance is $\operatorname {E} \left[(x-\mu )(x-\mu )^{*}\right],$ where $x^{*}$ is the complex conjugate of $x.$ This variance is a real scalar. As a matrix If $X$ is a vector-valued random variable, with values in $\mathbb {R} ^{n},$ and thought of as a column vector, then a natural generalization of variance is $\operatorname {E} \left[(X-\mu )(X-\mu )^{\operatorname {T} }\right],$ where $\mu =\operatorname {E} (X)$ and $X^{\operatorname {T} }$ is the transpose of $X,$ and so is a row vector. The result is a positive semi-definite square matrix, commonly referred to as the variance-covariance matrix (or simply as the covariance matrix). If $X$ is a vector- and complex-valued random variable, with values in $\mathbb {C} ^{n},$ then the covariance matrix is $\operatorname {E} \left[(X-\mu )(X-\mu )^{\dagger }\right],$ where $X^{\dagger }$ is the conjugate transpose of $X.$ This matrix is also positive semi-definite and square. As a scalar Another generalization of variance for vector-valued random variables $X$, which results in a scalar value rather than in a matrix, is the generalized variance $\det(C)$, the determinant of the covariance matrix. 
The generalized variance can be shown to be related to the multidimensional scatter of points around their mean.[23] A different generalization is obtained by considering the variance of the Euclidean distance between the random variable and its mean. This results in $\operatorname {E} \left[(X-\mu )^{\operatorname {T} }(X-\mu )\right]=\operatorname {tr} (C),$ which is the trace of the covariance matrix.

See also
• Bhatia–Davis inequality
• Coefficient of variation
• Homoscedasticity
• Least-squares spectral analysis for computing a frequency spectrum with spectral magnitudes in % of variance or in dB
• Modern portfolio theory
• Popoviciu's inequality on variances
• Measures for statistical dispersion
• Variance-stabilizing transformation

Types of variance
• Correlation
• Distance variance
• Explained variance
• Pooled variance
• Pseudo-variance

References
1. Wasserman, Larry (2005). All of Statistics: a concise course in statistical inference. Springer Texts in Statistics. p. 51. ISBN 9781441923226.
2. Ronald Fisher (1918) The correlation between relatives on the supposition of Mendelian Inheritance
3. Yuli Zhang, Huaiyu Wu, Lei Cheng (June 2012). Some new deformation formulas about variance and covariance. Proceedings of the 4th International Conference on Modelling, Identification and Control (ICMIC 2012). pp. 987–992.
4. Kagan, A.; Shepp, L. A. (1998). "Why the variance?". Statistics & Probability Letters. 38 (4): 329–333. doi:10.1016/S0167-7152(98)00041-8.
5. Johnson, Richard; Wichern, Dean (2001). Applied Multivariate Statistical Analysis. Prentice Hall. p. 76. ISBN 0-13-187715-1.
6. Loève, M. (1977) "Probability Theory", Graduate Texts in Mathematics, Volume 45, 4th edition, Springer-Verlag, p. 12.
7. Bienaymé, I.-J. (1853) "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés", Comptes rendus de l'Académie des sciences Paris, 37, p. 309–317; digital copy available
8. Bienaymé, I.-J. (1867) "Considérations à l'appui de la découverte de Laplace sur la loi de probabilité dans la méthode des moindres carrés", Journal de Mathématiques Pures et Appliquées, Série 2, Tome 12, p. 158–167; digital copy available
9. Cornell, J. R., and Benjamin, C. A., Probability, Statistics, and Decisions for Civil Engineers, McGraw-Hill, NY, 1970, pp. 178–179.
10. Goodman, Leo A. (December 1960). "On the Exact Variance of Products". Journal of the American Statistical Association. 55 (292): 708–713. doi:10.2307/2281592. JSTOR 2281592.
11. Navidi, William (2006) Statistics for Engineers and Scientists, McGraw-Hill, p. 14.
12. Montgomery, D. C. and Runger, G. C. (1994) Applied statistics and probability for engineers, page 201. John Wiley & Sons, New York.
13. Knight K. (2000), Mathematical Statistics, Chapman and Hall, New York. (proposition 2.11)
14. Casella and Berger (2002) Statistical Inference, Example 7.3.3, p. 331
15. Mood, A. M., Graybill, F. A., and Boes, D. C. (1974) Introduction to the Theory of Statistics, 3rd Edition, McGraw-Hill, New York, p. 229
16. Kenney, John F.; Keeping, E.S. (1951) Mathematics of Statistics. Part Two. 2nd ed. D. Van Nostrand Company, Inc. Princeton: New Jersey. http://krishikosh.egranth.ac.in/bitstream/1/2025521/1/G2257.pdf
17. Rose, Colin; Smith, Murray D. (2002) Mathematical Statistics with Mathematica. Springer-Verlag, New York. http://www.mathstatica.com/book/Mathematical_Statistics_with_Mathematica.pdf
18. Weisstein, Eric W. (n.d.) Sample Variance Distribution. MathWorld—A Wolfram Web Resource. http://mathworld.wolfram.com/SampleVarianceDistribution.html
19. Samuelson, Paul (1968). "How Deviant Can You Be?". Journal of the American Statistical Association. 63 (324): 1522–1525. doi:10.1080/01621459.1968.10480944. JSTOR 2285901.
20. Mercer, A. McD. (2000). "Bounds for A–G, A–H, G–H, and a family of inequalities of Ky Fan's type, using a general method". J. Math. Anal. Appl. 243 (1): 163–173. doi:10.1006/jmaa.1999.6688.
21. Sharma, R. (2008). "Some more inequalities for arithmetic mean, harmonic mean and variance". Journal of Mathematical Inequalities. 2 (1): 109–114. CiteSeerX 10.1.1.551.9397. doi:10.7153/jmi-02-11.
22. Fama, Eugene F.; French, Kenneth R. (2010-04-21). "Q&A: Semi-Variance: A Better Risk Measure?". Fama/French Forum.
23. Kocherlakota, S.; Kocherlakota, K. (2004). "Generalized Variance". Encyclopedia of Statistical Sciences. Wiley Online Library. doi:10.1002/0471667196.ess0869. ISBN 0471667196.

Authority control: National • Germany • Japan
Wikipedia
Varadhan's lemma In mathematics, Varadhan's lemma is a result from the large deviations theory named after S. R. Srinivasa Varadhan. The result gives information on the asymptotic distribution of a statistic φ(Zε) of a family of random variables Zε as ε becomes small in terms of a rate function for the variables. Statement of the lemma Let X be a regular topological space; let (Zε)ε>0 be a family of random variables taking values in X; let με be the law (probability measure) of Zε. Suppose that (με)ε>0 satisfies the large deviation principle with good rate function I : X → [0, +∞]. Let ϕ  : X → R be any continuous function. Suppose that at least one of the following two conditions holds true: either the tail condition $\lim _{M\to \infty }\limsup _{\varepsilon \to 0}{\big (}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\phi (Z_{\varepsilon })/\varepsilon {\big )}\,\mathbf {1} {\big (}\phi (Z_{\varepsilon })\geq M{\big )}{\big ]}{\big )}=-\infty ,$ where 1(E) denotes the indicator function of the event E; or, for some γ > 1, the moment condition $\limsup _{\varepsilon \to 0}{\big (}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\gamma \phi (Z_{\varepsilon })/\varepsilon {\big )}{\big ]}{\big )}<\infty .$ Then $\lim _{\varepsilon \to 0}\varepsilon \log \mathbf {E} {\big [}\exp {\big (}\phi (Z_{\varepsilon })/\varepsilon {\big )}{\big ]}=\sup _{x\in X}{\big (}\phi (x)-I(x){\big )}.$ See also • Laplace principle (large deviations theory) References • Dembo, Amir; Zeitouni, Ofer (1998). Large deviations techniques and applications. Applications of Mathematics (New York) 38 (Second ed.). New York: Springer-Verlag. pp. xvi+396. ISBN 0-387-98406-2. MR 1619036. (See theorem 4.3.1)
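As a quick numerical sanity check of the limit in the lemma (the specific family and the function φ below are illustrative assumptions, not taken from the reference): take Z_ε ~ N(0, ε), whose large-deviation rate function is I(x) = x²/2, and φ(x) = 2x − x², so that sup_x (φ(x) − I(x)) = 2/3. The scaled log-expectation can then be evaluated on a grid.

```python
# Numerical sketch of the lemma's conclusion (the setup is an illustrative assumption):
# Z_eps ~ N(0, eps) satisfies an LDP with rate I(x) = x^2/2, and phi(x) = 2x - x^2
# gives sup_x (phi(x) - I(x)) = 2/3.
import numpy as np
from scipy.special import logsumexp

def scaled_log_expectation(eps, z=np.linspace(-4.0, 4.0, 400_001)):
    dz = z[1] - z[0]
    log_density = -z ** 2 / (2 * eps) - 0.5 * np.log(2 * np.pi * eps)  # N(0, eps) density, in logs
    log_integrand = (2 * z - z ** 2) / eps + log_density               # log of exp(phi/eps) * density
    return eps * (logsumexp(log_integrand) + np.log(dz))               # eps * log E[exp(phi(Z_eps)/eps)]

for eps in (1.0, 0.1, 0.01, 0.001):
    print(eps, scaled_log_expectation(eps))   # tends to 2/3 as eps decreases
```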
Wikipedia
Michela Varagnolo Michela Varagnolo is a mathematician whose research topics have included representation theory, Hecke algebra, Schur–Weyl duality, Yangians, and quantum affine algebras. She earned a doctorate in 1993 at the University of Pisa, under the supervision of Corrado de Concini,[1] and is maître de conférences in the department of mathematics at CY Cergy Paris University,[2] affiliated there with the research laboratory on analysis, geometry, and modeling.[3] Varagnolo was an invited speaker at the 2014 International Congress of Mathematicians.[4] In 2019, with Éric Vasserot, she won the Prix de l'État of the French Academy of Sciences for their work on the geometric representation theory of Hecke algebras and quantum groups.[2] References 1. Michela Varagnolo at the Mathematics Genealogy Project 2. Lauréats 2019 du prix fondé par l'État : Michela Varagnolo et Éric Vasserot (in French), French Academy of Sciences, 15 October 2019, retrieved 2021-11-11 3. "Membres laboratoire / département", AGM - Analyse, géométrie et modélisation (in French), CY Cergy Paris University, retrieved 2021-11-11 4. "Invited section lectures", program, International Congress of Mathematicians, 2014, retrieved 2021-11-11 Authority control International • VIAF National • Germany Academics • MathSciNet • Mathematics Genealogy Project Other • IdRef
Wikipedia
Varga K. Kalantarov
Varga K. Kalantarov (born 1950) is an Azerbaijani mathematician, scientist and professor of mathematics. He is a member of the Koç University Mathematics Department in İstanbul, Turkey.[1]

Education
Varga Kalantarov was born in 1950. He graduated from Baku State University in 1971. He received his PhD in Differential Equations and Mathematical Physics at the Baku Institute of Mathematics and Mechanics, Azerbaijan National Academy of Sciences in 1974. He received his Doctor of Sciences degree in 1988 under the supervision of Olga Ladyzhenskaya at the Steklov Institute of Mathematics, Saint Petersburg, Russia.[2]

Academic career
After receiving his PhD he held a research position at the Baku Institute of Mathematics and Mechanics. Between 1975 and 1981 he was also a visiting researcher at the Steklov Institute of Mathematics. From 1989 to 1993 he was the head of the Department of Partial Differential Equations at the Baku Institute of Mathematics and Mechanics. After the perestroika era he moved to Turkey with his family in 1993. Between 1993 and 2001 he was a full-time professor in the Mathematics Department of Hacettepe University, Ankara.[3] In 2001 he became a full-time professor at Koç University. He has been an active researcher, having published more than 60 scientific manuscripts with more than 700 citations. He has had 16 PhD students.

Research areas
His research interests include PDEs and dynamical systems.

Representative scientific publications
• Kalantarov, V. K.; Ladyženskaja, O. A. Formation of collapses in quasilinear equations of parabolic and hyperbolic types. (Russian) Boundary value problems of mathematical physics and related questions in the theory of functions, 10. Zap. Naučn. Sem. LOMI 69 (1977), 77–102, 274.
• Kalantarov, Varga K.; Titi, Edriss S. Global attractors and determining modes for the 3D Navier-Stokes-Voight equations. Chin. Ann. Math. Ser. B 30 (2009), no. 6, 697–714.
• Kalantarov, Varga; Zelik, Sergey Finite-dimensional attractors for the quasi-linear strongly-damped wave equation. J. Differential Equations 247 (2009), no. 4, 1120–1155.

Memberships
Varga Kalantarov is a member of the Azerbaijan Mathematical Society, Turkish Mathematical Society and the American Mathematical Society.

References
1. His web page in Koç University Mathematics Department
2. Record in the Genealogy Project
3. Hacettepe University Mathematics Department

External links
• Varga Kalantarov's professional home page
• Varga K. Kalantarov publications indexed by Google Scholar

Authority control: Academics • MathSciNet • Mathematics Genealogy Project
Wikipedia
Constant-Q transform
In mathematics and signal processing, the constant-Q transform and variable-Q transform, simply known as CQT and VQT, transform a data series to the frequency domain. They are related to the Fourier transform[1] and very closely related to the complex Morlet wavelet transform,[2] and their design is suited for musical representation. The transform can be thought of as a series of filters fk, logarithmically spaced in frequency, with the k-th filter having a spectral width δfk equal to a multiple of the previous filter's width: $\delta f_{k}=2^{1/n}\cdot \delta f_{k-1}=\left(2^{1/n}\right)^{k}\cdot \delta f_{\text{min}},$ where δfk is the bandwidth of the k-th filter, fmin is the central frequency of the lowest filter, and n is the number of filters per octave.

Calculation
The short-time Fourier transform of x[n] for a frame shifted to sample m is calculated as follows: $X[k,m]=\sum _{n=0}^{N-1}W[n-m]x[n]e^{-j2\pi kn/N}.$ Given a data series at sampling frequency fs = 1/T, T being the sampling period of our data, for each frequency bin we can define the following:
• Filter width, δfk.
• Q, the "quality factor": $Q={\frac {f_{k}}{\delta f_{k}}}.$ This is shown below to be the integer number of cycles processed at a center frequency fk. As such, this somewhat defines the time complexity of the transform.
• Window length for the k-th bin: $N[k]={\frac {f_{\text{s}}}{\delta f_{k}}}={\frac {f_{\text{s}}}{f_{k}}}Q.$ Since fs/fk is the number of samples processed per cycle at frequency fk, Q is the number of integer cycles processed at this central frequency.
The equivalent transform kernel can be found by using the following substitutions:
• The window length of each bin is now a function of the bin number: $N=N[k]=Q{\frac {f_{\text{s}}}{f_{k}}}.$
• The relative power of each bin will decrease at higher frequencies, as these sum over fewer terms. To compensate for this, we normalize by N[k].
• Any windowing function will be a function of window length, and likewise a function of window number. For example, the equivalent Hamming window would be $W[k,n]=\alpha -(1-\alpha )\cos {\frac {2\pi n}{N[k]-1}},\quad \alpha =25/46,\quad 0\leqslant n\leqslant N[k]-1.$
• Our digital frequency, ${\frac {2\pi k}{N}}$, becomes ${\frac {2\pi Q}{N[k]}}$.
After these modifications, we are left with $X[k]={\frac {1}{N[k]}}\sum _{n=0}^{N[k]-1}W[k,n]x[n]e^{\frac {-j2\pi Qn}{N[k]}}.$

Variable-Q bandwidth calculation
The variable-Q transform differs from the constant-Q transform only in that its filter Q is allowed to vary, hence the name. The variable-Q transform is useful where time resolution at low frequencies is important. There are several ways to calculate the bandwidth of the VQT, one of them being to use the equivalent rectangular bandwidth as the bandwidth of each VQT bin.[3] The simplest way to implement a variable-Q transform is to add a bandwidth offset γ, as in: $\delta f_{k}=\left({\frac {2}{f_{k}+\gamma }}\right)Q.$ This formula can be modified with extra parameters that adjust the sharpness of the transition between constant-Q and constant-bandwidth behaviour: $\delta f_{k}=\left({\frac {2}{\sqrt[{\alpha }]{f_{k}^{\alpha }+\gamma ^{\alpha }}}}\right)Q,$ where α sets the transition sharpness; α = 2 corresponds to a hyperbolic-sine frequency scale in terms of frequency resolution.
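The per-bin constant-Q formula above can also be evaluated directly. The sketch below is a minimal, deliberately slow implementation of that sum; the window type, the choice Q = 1/(2^(1/b) − 1) (a common convention, not stated explicitly above), and all parameter values are assumptions for illustration, and the input signal must be at least as long as the longest window N[0].

```python
# Minimal, deliberately slow direct evaluation of the constant-Q formula above.
# Parameter values and the choice Q = 1/(2**(1/b) - 1) are illustrative assumptions.
import numpy as np

def naive_cqt(x, fs=22050.0, f_min=55.0, bins_per_octave=12, n_bins=48):
    Q = 1.0 / (2 ** (1.0 / bins_per_octave) - 1)            # quality factor f_k / delta f_k
    alpha = 25 / 46                                          # Hamming window coefficient
    coeffs = []
    for k in range(n_bins):
        f_k = f_min * 2 ** (k / bins_per_octave)             # center frequency of bin k
        N_k = int(np.ceil(Q * fs / f_k))                     # window length for bin k
        n = np.arange(N_k)
        window = alpha - (1 - alpha) * np.cos(2 * np.pi * n / (N_k - 1))
        kernel = window * np.exp(-2j * np.pi * Q * n / N_k)
        coeffs.append(np.dot(x[:N_k], kernel) / N_k)         # analyse one frame at the signal start
    return np.array(coeffs)

# Usage: a 220 Hz tone should peak two octaves above f_min = 55 Hz, i.e. at bin 24.
fs = 22050.0
t = np.arange(int(fs)) / fs
tone = np.sin(2 * np.pi * 220.0 * t)
print(np.argmax(np.abs(naive_cqt(tone))))   # expected: 24
```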
Fast calculation
The direct calculation of the constant-Q transform (either using a naive DFT or the slightly faster Goertzel algorithm) is slow when compared against the fast Fourier transform (FFT). However, the FFT can itself be employed, in conjunction with the use of a kernel, to perform the equivalent calculation but much faster.[4] An approximate inverse to such an implementation was proposed in 2006; it works by going back to the DFT, and is only suitable for pitch instruments.[5] A development on this method with improved invertibility involves performing CQT (via FFT) octave-by-octave, using lowpass filtered and downsampled results for consecutively lower pitches.[6] Implementations of this method include the MATLAB implementation and LibROSA's Python implementation.[7] LibROSA combines the subsampled method with the direct FFT method (which it dubs "pseudo-CQT") by having the latter process higher frequencies as a whole.[7] The sliding DFT can also be used for faster calculation of the constant-Q transform, since it is not restricted to linear frequency spacing or to a single window size per bin.[8] Alternatively, the constant-Q transform can be approximated by using multiple FFTs with different window sizes and/or sampling rates over different frequency ranges and then stitching the results together. This is called a multiresolution STFT; however, its window sizes differ per octave rather than per bin.

Comparison with the Fourier transform
In general, the transform is well suited to musical data, and this can be seen in some of its advantages compared to the fast Fourier transform. As the output of the transform is effectively amplitude/phase against log frequency, fewer frequency bins are required to cover a given range effectively, and this proves useful where frequencies span several octaves. As the range of human hearing covers approximately ten octaves from 20 Hz to around 20 kHz, this reduction in output data is significant. The transform exhibits a reduction in frequency resolution with higher frequency bins, which is desirable for auditory applications. The transform mirrors the human auditory system, whereby spectral resolution is better at lower frequencies, while temporal resolution improves at higher frequencies. At the bottom of the piano scale (about 30 Hz), a difference of 1 semitone is a difference of approximately 1.5 Hz, whereas at the top of the musical scale (about 5 kHz), a difference of 1 semitone is a difference of approximately 200 Hz.[9] So for musical data the exponential frequency resolution of the constant-Q transform is ideal. In addition, the harmonics of musical notes form a pattern characteristic of the timbre of the instrument in this transform. Assuming the same relative strengths of each harmonic, as the fundamental frequency changes, the relative position of these harmonics remains constant. This can make identification of instruments much easier. The constant-Q transform can also be used for automatic recognition of musical keys based on accumulated chroma content.[10] Relative to the Fourier transform, implementation of this transform is trickier. This is due to the varying number of samples used in the calculation of each frequency bin, which also affects the length of any windowing function implemented.[11] Also note that because the frequency scale is logarithmic, there is no true zero-frequency / DC term present, which may be a drawback in applications that are interested in the DC term.
References
1. Judith C. Brown, Calculation of a constant Q spectral transform, J. Acoust. Soc. Am., 89(1):425–434, 1991.
2. Continuous Wavelet Transform: "When the mother wavelet can be interpreted as a windowed sinusoid (such as the Morlet wavelet), the wavelet transform can be interpreted as a constant-Q Fourier transform. Before the theory of wavelets, constant-Q Fourier transforms (such as obtained from a classic third-octave filter bank) were not easy to invert, because the basis signals were not orthogonal."
3. Cwitkowitz, Frank C. Jr (2019). "End-to-End Music Transcription Using Fine-Tuned Variable-Q Filterbanks" (PDF). Rochester Institute of Technology: 32–34. Retrieved 2022-08-21.
4. Judith C. Brown and Miller S. Puckette, An efficient algorithm for the calculation of a constant Q transform, J. Acoust. Soc. Am., 92(5):2698–2701, 1992.
5. FitzGerald, Derry; Cychowski, Marcin T.; Cranitch, Matt (1 May 2006). "Towards an Inverse Constant Q Transform". Audio Engineering Society Convention. Paris: Audio Engineering Society. 120.
6. Schörkhuber, Christian; Klapuri, Anssi (2010). "Constant-Q Transform Toolbox for Music Processing". 7th Sound and Music Computing Conference. Barcelona. Retrieved 12 December 2018.
7. McFee, Brian; Battenberg, Eric; Lostanlen, Vincent; Thomé, Carl (12 December 2018). "librosa: core/constantq.py at 8d26423". GitHub. librosa. Retrieved 12 December 2018.
8. Bradford, R.; ffitch, J.; Dobson, R. (2008). "Sliding with a constant-Q". Proceedings of the 11th International Conference on Digital Audio Effects (DAFx-08), Espoo, Finland, 1–4 September 2008, pp. 363–369.
9. http://newt.phys.unsw.edu.au/jw/graphics/notes.GIF
10. Hendrik Purwins, Benjamin Blankertz and Klaus Obermayer, A New Method for Tracking Modulations in Tonal Music in Audio Data Format, International Joint Conference on Neural Networks (IJCNN'00), 6:270–275, 2000.
11. Benjamin Blankertz, The Constant Q Transform, 1999.
Wikipedia
Free variables and bound variables
"Free variable" redirects here. Not to be confused with Free parameter or Dummy variable. For free variables in systems of linear equations, see Free variables (system of linear equations).
In mathematics, and in other disciplines involving formal languages, including mathematical logic and computer science, a variable may be said to be either free or bound. The terms are opposites. A free variable is a notation (symbol) that specifies places in an expression where substitution may take place and is not a parameter of this or any container expression. Some older books use the terms real variable and apparent variable for free variable and bound variable, respectively. The idea is related to a placeholder (a symbol that will later be replaced by some value), or a wildcard character that stands for an unspecified symbol. In computer programming, the term free variable refers to variables used in a function that are neither local variables nor parameters of that function; the term non-local variable is often a synonym in this context.
An instance of a variable symbol is bound, in contrast, if the value of that variable symbol has been bound to a specific value or range of values in the domain of discourse or universe. This may be achieved through the use of logical quantifiers, variable-binding operators, or an explicit statement of allowed values for the variable (such as "... where $n$ is a positive integer"). A variable symbol overall is bound if at least one occurrence of it is bound.[1]pp. 142–143 Since the same variable symbol may appear in multiple places in an expression, some occurrences of the variable symbol may be free while others are bound,[1]p. 78 hence "free" and "bound" are at first defined for occurrences and then generalized over all occurrences of said variable symbol in the expression. However it is done, the variable ceases to be an independent variable on which the value of the expression depends, whether that value be a truth value or the numerical result of a calculation, or, more generally, an element of an image set of a function.
While the domain of discourse in many contexts is understood, when an explicit range of values for the bound variable has not been given, it may be necessary to specify the domain in order to properly evaluate the expression. For example, consider the following expression in which both variables are bound by logical quantifiers: $\forall y\,\exists x\,\left(x={\sqrt {y}}\right).$ This expression evaluates to false if the domain of $x$ and $y$ is the real numbers, but true if the domain is the complex numbers.
The term "dummy variable" is also sometimes used for a bound variable (more commonly in general mathematics than in computer science), but this should not be confused with the identically named but unrelated concept of dummy variable as used in statistics, most commonly in regression analysis.
Examples
Before stating a precise definition of free variable and bound variable, the following are some examples that perhaps make these two concepts clearer than the definition would:
In the expression $\sum _{k=1}^{10}f(k,n),$ n is a free variable and k is a bound variable; consequently the value of this expression depends on the value of n, but there is nothing called k on which it could depend.
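The summation example above can also be written as code, which makes the distinction concrete; the function and variable names below are purely illustrative.

```python
# The summation example above, written as code: `n` is free (it must be
# supplied from outside), while `k` is bound by the loop and has no meaning
# outside of it. `f` is an arbitrary placeholder function.
def summation(f, n):
    return sum(f(k, n) for k in range(1, 11))   # k ranges over 1..10

print(summation(lambda k, n: k * n, 3))          # 3 * (1 + 2 + ... + 10) = 165
```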
In the expression $\int _{0}^{\infty }x^{y-1}e^{-x}\,dx,$ y is a free variable and x is a bound variable; consequently the value of this expression depends on the value of y, but there is nothing called x on which it could depend.
In the expression $\lim _{h\rightarrow 0}{\frac {f(x+h)-f(x)}{h}},$ x is a free variable and h is a bound variable; consequently the value of this expression depends on the value of x, but there is nothing called h on which it could depend.
In the expression $\forall x\ \exists y\ {\Big [}\varphi (x,y,z){\Big ]},$ z is a free variable and x and y are bound variables, associated with logical quantifiers; consequently the logical value of this expression depends on the value of z, but there is nothing called x or y on which it could depend.
More widely, bound variables are used in most proofs. For example, the following proof shows that all squares of positive even integers are divisible by $4$: Let $n$ be a positive even integer. Then there is an integer $k$ such that $n=2k$. Since $n^{2}=4k^{2}$, we have $n^{2}$ divisible by $4$. Here not only k but also n has been used as a bound variable in the proof as a whole.
Variable-binding operators
The following $\sum _{x\in S}\quad \quad \prod _{x\in S}\quad \quad \int _{0}^{\infty }\cdots \,dx\quad \quad \lim _{x\to 0}\quad \quad \forall x\quad \quad \exists x$ are some common variable-binding operators. Each of them binds the variable x for some set S. Many of these are operators which act on functions of the bound variable. In more complicated contexts, such notations can become awkward and confusing. It can be useful to switch to notations which make the binding explicit, such as $\sum _{1,\ldots ,10}\left(k\mapsto f(k,n)\right)$ for sums or $D\left(x\mapsto x^{2}+2x+1\right)$ for differentiation.
Formal explanation
Variable-binding mechanisms occur in different contexts in mathematics, logic and computer science. In all cases, however, they are purely syntactic properties of expressions and variables in them. For this section we can summarize syntax by identifying an expression with a tree whose leaf nodes are variables, constants, function constants or predicate constants and whose non-leaf nodes are logical operators. This expression can then be determined by doing an inorder traversal of the tree. Variable-binding operators are logical operators that occur in almost every formal language. A binding operator Q takes two arguments: a variable v and an expression P, and when applied to its arguments produces a new expression Q(v, P). The meaning of binding operators is supplied by the semantics of the language and does not concern us here.
Variable binding relates three things: a variable v, a location a for that variable in an expression, and a non-leaf node n of the form Q(v, P). Note: we define a location in an expression as a leaf node in the syntax tree. Variable binding occurs when that location is below the node n.
In the lambda calculus, x is a bound variable in the term M = λx. T and a free variable in the term T. We say x is bound in M and free in T. If T contains a subterm λx. U then x is rebound in this term. This nested, inner binding of x is said to "shadow" the outer binding. Occurrences of x in U are free occurrences of the new x.[2]
Variables bound at the top level of a program are technically free variables within the terms to which they are bound but are often treated specially because they can be compiled as fixed addresses.
Similarly, an identifier bound to a recursive function is also technically a free variable within its own body but is treated specially. A closed term is one containing no free variables.
Function expressions
To give an example from mathematics, consider an expression which defines a function $f=\left[(x_{1},\ldots ,x_{n})\mapsto t\right]$ where t is an expression. t may contain some, all or none of the $x_{1},\ldots ,x_{n}$ and it may contain other variables. In this case we say that the function definition binds the variables $x_{1},\ldots ,x_{n}$.
In this manner, function definition expressions of the kind shown above can be thought of as variable-binding operators, analogous to the lambda expressions of lambda calculus. Other binding operators, like the summation sign, can be thought of as higher-order functions applying to a function. So, for example, the expression $\sum _{x\in S}{x^{2}}$ could be treated as a notation for $\sum _{S}{(x\mapsto x^{2})}$ where $\sum _{S}{f}$ is an operator with two parameters: a one-parameter function, and a set to evaluate that function over. The other operators listed above can be expressed in similar ways; for example, the universal quantifier $\forall x\in S\ P(x)$ can be thought of as an operator that evaluates to the logical conjunction of the boolean-valued function P applied over the (possibly infinite) set S.
Natural language
When analyzed in formal semantics, natural languages can be seen to have free and bound variables. In English, personal pronouns like he, she, they, etc. can act as free variables.
Lisa found her book.
In the sentence above, the possessive pronoun her is a free variable. It may refer to the previously mentioned Lisa or to any other female. In other words, her book could be referring to Lisa's book (an instance of coreference) or to a book that belongs to a different female (e.g. Jane's book). Who the referent of her is can be established according to the situational (i.e. pragmatic) context. The identity of the referent can be shown using coindexing subscripts, where i indicates one referent and j indicates a second referent (different from i). Thus, the sentence Lisa found her book has the following interpretations:
Lisa_i found her_i book. (interpretation #1: her = of Lisa)
Lisa_i found her_j book. (interpretation #2: her = of a female that is not Lisa)
The distinction is not purely of academic interest, as some languages do actually have different forms for her_i and her_j: for example, Norwegian and Swedish translate coreferent her_i as sin and noncoreferent her_j as hennes. English does allow specifying coreference, but it is optional, as both interpretations of the previous example are valid (the ungrammatical interpretation is indicated with an asterisk):
Lisa_i found her_i own book. (interpretation #1: her = of Lisa)
*Lisa_i found her_j own book. (interpretation #2: her = of a female that is not Lisa)
However, reflexive pronouns, such as himself, herself, themselves, etc., and reciprocal pronouns, such as each other, act as bound variables. In a sentence like the following:
Jane hurt herself.
the reflexive herself can only refer to the previously mentioned antecedent, in this case Jane, and can never refer to a different female person. In this example, the variable herself is bound to the noun Jane that occurs in subject position. Indicating the coindexation, the first interpretation, with Jane and herself coindexed, is permissible, but the other interpretation, where they are not coindexed, is ungrammatical:
Jane_i hurt herself_i.
(interpretation #1: herself = Jane)
*Jane_i hurt herself_j. (interpretation #2: herself = a female that is not Jane)
The coreference binding can be represented using a lambda expression, as mentioned in the previous Formal explanation section. The sentence with the reflexive could be represented as (λx.x hurt x)Jane, in which Jane is the subject referent argument and λx.x hurt x is the predicate function (a lambda abstraction), with the lambda notation and x indicating both the semantic subject and the semantic object of the sentence as being bound. This returns the semantic interpretation JANE hurt JANE, with JANE being the same person.
Pronouns can also behave in a different way. In the sentence below
Ashley hit her.
the pronoun her can only refer to a female that is not Ashley. This means that it can never have a reflexive meaning equivalent to Ashley hit herself. The grammatical and ungrammatical interpretations are:
*Ashley_i hit her_i. (interpretation #1: her = Ashley)
Ashley_i hit her_j. (interpretation #2: her = a female that is not Ashley)
The first interpretation is impossible; only the second interpretation is permitted by the grammar.
Thus, it can be seen that reflexives and reciprocals are bound variables (known technically as anaphors), while true pronouns are free variables in some grammatical structures but variables that cannot be bound in other grammatical structures. The binding phenomena found in natural languages were particularly important to the syntactic government and binding theory (see also: Binding (linguistics)).
See also
• Closure (computer science)
• Combinatory logic
• Lambda lifting
• Name binding
• Scope (programming)
References
1. W. V. O. Quine, Mathematical Logic (1981). Harvard University Press, ISBN 0-674-55451-5.
2. Thompson 1991, p. 33.
• Thompson, Simon (1991). Type theory and functional programming. Wokingham, England: Addison-Wesley. ISBN 0201416670. OCLC 23287456.
Further reading
• Gowers, Timothy; Barrow-Green, June; Leader, Imre, eds. (2008). The Princeton Companion to Mathematics. Princeton, New Jersey: Princeton University Press. pp. 15–16. doi:10.1515/9781400830398. ISBN 978-0-691-11880-2. JSTOR j.ctt7sd01. LCCN 2008020450. MR 2467561. OCLC 227205932. OL 19327100M. Zbl 1242.00016.
Wikipedia
Box plot
In descriptive statistics, a box plot or boxplot is a method for graphically demonstrating the locality, spread and skewness of groups of numerical data through their quartiles.[1] In addition to the box on a box plot, there can be lines (which are called whiskers) extending from the box indicating variability outside the upper and lower quartiles; thus, the plot is also called the box-and-whisker plot or the box-and-whisker diagram. Outliers that differ significantly from the rest of the dataset[2] may be plotted as individual points beyond the whiskers on the box plot. Box plots are non-parametric: they display variation in samples of a statistical population without making any assumptions about the underlying statistical distribution[3] (though Tukey's boxplot assumes symmetry for the whiskers and normality for their length). The spacings in each subsection of the box plot indicate the degree of dispersion (spread) and skewness of the data, which are usually described using the five-number summary. In addition, the box plot allows one to visually estimate various L-estimators, notably the interquartile range, midhinge, range, mid-range, and trimean. Box plots can be drawn either horizontally or vertically.
History
The range-bar method was first introduced by Mary Eleanor Spear in her book "Charting Statistics" in 1952[4] and again in her book "Practical Charting Techniques" in 1969.[5] The box-and-whisker plot was first introduced in 1970 by John Tukey, who later published on the subject in his book "Exploratory Data Analysis" in 1977.[6]
Elements
A boxplot is a standardized way of displaying the dataset based on the five-number summary: the minimum, the maximum, the sample median, and the first and third quartiles.
• Minimum (Q0 or 0th percentile): the lowest data point in the data set excluding any outliers
• Maximum (Q4 or 100th percentile): the highest data point in the data set excluding any outliers
• Median (Q2 or 50th percentile): the middle value in the data set
• First quartile (Q1 or 25th percentile): also known as the lower quartile qn(0.25), it is the median of the lower half of the dataset.
• Third quartile (Q3 or 75th percentile): also known as the upper quartile qn(0.75), it is the median of the upper half of the dataset.[7]
In addition to the minimum and maximum values used to construct a box plot, another important element that can also be employed to obtain a box plot is the interquartile range (IQR), as denoted below:
• Interquartile range (IQR): the distance between the upper and lower quartiles ${\text{IQR}}=Q_{3}-Q_{1}=q_{n}(0.75)-q_{n}(0.25)$
Whiskers
A box plot usually includes two parts, a box and a set of whiskers, as shown in Figure 2. The box is drawn from Q1 to Q3 with a horizontal line drawn in the middle to denote the median. The whiskers must end at an observed data point, but can be defined in various ways. In the most straightforward method, the boundary of the lower whisker is the minimum value of the data set, and the boundary of the upper whisker is the maximum value of the data set. Another popular choice for the boundaries of the whiskers is based on the 1.5 IQR value. From above the upper quartile (Q3), a distance of 1.5 times the IQR is measured out and a whisker is drawn up to the largest observed data point from the dataset that falls within this distance.
Similarly, a distance of 1.5 times the IQR is measured out below the lower quartile (Q1) and a whisker is drawn down to the lowest observed data point from the dataset that falls within this distance. Because the whiskers must end at an observed data point, the whisker lengths can look unequal, even though 1.5 IQR is the same for both sides. All other observed data points outside the boundary of the whiskers are plotted as outliers.[8] The outliers can be plotted on the box plot as a dot, a small circle, a star, etc. (see example below).
There are other representations in which the whiskers can stand for several other things, such as:
• The minimum and the maximum value of the data set (as shown in Figure 2)
• One standard deviation above and below the mean of the data set
• The 9th percentile and the 91st percentile of the data set
• The 2nd percentile and the 98th percentile of the data set
Rarely, a box plot can be plotted without the whiskers. This can be appropriate for sensitive information, to avoid the whiskers (and outliers) disclosing the actual values observed.[9]
Some box plots include an additional character to represent the mean of the data.[10][11] The unusual percentiles 2%, 9%, 91%, 98% are sometimes used for whisker cross-hatches and whisker ends to depict the seven-number summary. If the data are normally distributed, the locations of the seven marks on the box plot will be equally spaced. On some box plots, a cross-hatch is placed before the end of each whisker. Because of this variability, it is appropriate to describe the convention being used for the whiskers and outliers in the caption of the box plot.
Variations
Since the mathematician John W. Tukey first popularized this type of visual data display in 1969, several variations on the classical box plot have been developed, and the two most commonly found variations are the variable-width box plots and the notched box plots shown in Figure 4. Variable-width box plots illustrate the size of each group whose data is being plotted by making the width of the box proportional to the size of the group. A popular convention is to make the box width proportional to the square root of the size of the group.[12]
Notched box plots apply a "notch" or narrowing of the box around the median. Notches are useful in offering a rough guide of the significance of the difference of medians; if the notches of two boxes do not overlap, this offers evidence of a statistically significant difference between the medians.[12] The height of the notches is proportional to the interquartile range (IQR) of the sample and inversely proportional to the square root of the size of the sample. However, there is uncertainty about the most appropriate multiplier (as this may vary depending on the similarity of the variances of the samples).[12] The width of the notch is arbitrarily chosen to be visually pleasing, and should be consistent amongst all box plots being displayed on the same page.
One convention for obtaining the boundaries of these notches is to use a distance of $\pm {\frac {1.58{\text{ IQR}}}{\sqrt {n}}}$ around the median.[13]
Adjusted box plots are intended to describe skewed distributions, and they rely on the medcouple statistic of skewness.[14] For a medcouple value of MC, the lengths of the upper and lower whiskers on the box plot are respectively defined to be:
${\begin{matrix}1.5{\text{ IQR}}\cdot e^{3{\text{MC}}},&1.5{\text{ IQR}}\cdot e^{-4{\text{MC}}}{\text{ if }}{\text{MC}}\geq 0,\\1.5{\text{ IQR}}\cdot e^{4{\text{MC}}},&1.5{\text{ IQR}}\cdot e^{-3{\text{MC}}}{\text{ if }}{\text{MC}}\leq 0.\end{matrix}}$
For a symmetrical data distribution, the medcouple will be zero, and this reduces the adjusted box plot to Tukey's box plot with equal whisker lengths of $1.5{\text{ IQR}}$ for both whiskers.
Other kinds of box plots, such as violin plots and bean plots, can show the difference between single-modal and multimodal distributions, which cannot be observed from the original classical box plot.[6]
Examples
Example without outliers
A series of hourly temperatures were measured throughout the day in degrees Fahrenheit. The recorded values are listed in order as follows (°F): 57, 57, 57, 58, 63, 66, 66, 67, 67, 68, 69, 70, 70, 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 81.
A box plot of the data set can be generated by first calculating five relevant values of this data set: minimum, maximum, median (Q2), first quartile (Q1), and third quartile (Q3).
The minimum is the smallest number of the data set. In this case, the minimum recorded day temperature is 57 °F.
The maximum is the largest number of the data set. In this case, the maximum recorded day temperature is 81 °F.
The median is the "middle" number of the ordered data set. This means that exactly 50% of the elements are less than the median and 50% of the elements are greater than the median. The median of this ordered data set is 70 °F.
The first quartile value (Q1 or 25th percentile) is the number that marks one quarter of the ordered data set. In other words, exactly 25% of the elements are less than the first quartile and exactly 75% of the elements are greater than it. The first quartile value can be easily determined by finding the "middle" number between the minimum and the median. For the hourly temperatures, the "middle" number found between 57 °F and 70 °F is 66 °F.
The third quartile value (Q3 or 75th percentile) is the number that marks three quarters of the ordered data set. In other words, exactly 75% of the elements are less than the third quartile and 25% of the elements are greater than it. The third quartile value can be easily obtained by finding the "middle" number between the median and the maximum. For the hourly temperatures, the "middle" number between 70 °F and 81 °F is 75 °F.
The interquartile range, or IQR, can be calculated by subtracting the first quartile value (Q1) from the third quartile value (Q3):
${\text{IQR}}=Q_{3}-Q_{1}=75^{\circ }F-66^{\circ }F=9^{\circ }F.$
Hence, $1.5{\text{ IQR}}=1.5\cdot 9^{\circ }F=13.5^{\circ }F.$
1.5 IQR above the third quartile is: $Q_{3}+1.5{\text{ IQR}}=75^{\circ }F+13.5^{\circ }F=88.5^{\circ }F.$
1.5 IQR below the first quartile is: $Q_{1}-1.5{\text{ IQR}}=66^{\circ }F-13.5^{\circ }F=52.5^{\circ }F.$
The upper whisker boundary of the box plot is the largest data value that is within 1.5 IQR above the third quartile.
Here, 1.5 IQR above the third quartile is 88.5 °F and the maximum is 81 °F. Therefore, the upper whisker is drawn at the value of the maximum, which is 81 °F.
Similarly, the lower whisker boundary of the box plot is the smallest data value that is within 1.5 IQR below the first quartile. Here, 1.5 IQR below the first quartile is 52.5 °F and the minimum is 57 °F. Therefore, the lower whisker is drawn at the value of the minimum, which is 57 °F.
Example with outliers
Above is an example without outliers. Here is a follow-up example for generating a box plot with outliers:
The ordered set for the recorded temperatures is (°F): 52, 57, 57, 58, 63, 66, 66, 67, 67, 68, 69, 70, 70, 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 89.
In this example, only the first and the last number are changed. The median, third quartile, and first quartile remain the same.
In this case, the maximum value in this data set is 89 °F, and 1.5 IQR above the third quartile is 88.5 °F. The maximum is greater than the third quartile plus 1.5 IQR, so the maximum is an outlier. Therefore, the upper whisker is drawn at the greatest value smaller than 1.5 IQR above the third quartile, which is 79 °F.
Similarly, the minimum value in this data set is 52 °F, and 1.5 IQR below the first quartile is 52.5 °F. The minimum is smaller than the first quartile minus 1.5 IQR, so the minimum is also an outlier. Therefore, the lower whisker is drawn at the smallest value greater than 1.5 IQR below the first quartile, which is 57 °F.
In the case of large datasets
An additional example of obtaining a box plot from a data set containing a large number of data points uses the general equation to compute empirical quantiles:
$q_{n}(p)=x_{(k)}+\alpha (x_{(k+1)}-x_{(k)})$
${\text{with }}k=[p(n+1)]{\text{ and }}\alpha =p(n+1)-k.$
Here $x_{(k)}$ denotes the ordered data points (i.e. if $i<k$, then $x_{(i)}\leq x_{(k)}$).
Using the above example that has 24 data points (n = 24), one can calculate the median, first and third quartile either mathematically or visually.
Median: $q_{n}(0.5)=x_{(12)}+(0.5\cdot 25-12)\cdot (x_{(13)}-x_{(12)})=70+(0.5\cdot 25-12)\cdot (70-70)=70^{\circ }F$
First quartile: $q_{n}(0.25)=x_{(6)}+(0.25\cdot 25-6)\cdot (x_{(7)}-x_{(6)})=66+(0.25\cdot 25-6)\cdot (66-66)=66^{\circ }F$
Third quartile: $q_{n}(0.75)=x_{(18)}+(0.75\cdot 25-18)\cdot (x_{(19)}-x_{(18)})=75+(0.75\cdot 25-18)\cdot (75-75)=75^{\circ }F$
Visualization
Although box plots may seem more primitive than histograms or kernel density estimates, they do have a number of advantages. First, the box plot enables statisticians to do a quick graphical examination of one or more data sets. Box plots also take up less space and are therefore particularly useful for comparing distributions between several groups or sets of data in parallel (see Figure 1 for an example). Lastly, the overall structure of histograms and kernel density estimates can be strongly influenced by the choice of the number and width of the bins and the choice of bandwidth, respectively.
Although looking at a statistical distribution is more common than looking at a box plot, it can be useful to compare the box plot against the probability density function (theoretical histogram) for a normal N(0,σ2) distribution and observe their characteristics directly (as shown in Figure 7).
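The whisker rule worked through above is easy to reproduce in code; the following sketch (the library choice and variable names are mine) recomputes the five-number summary, fences and whisker boundaries for the outlier example. NumPy's default linear-interpolation quartiles happen to coincide with the hand calculation for this data set.

```python
# Recompute the box-plot statistics for the temperature example with outliers.
import numpy as np

temps = np.array([52, 57, 57, 58, 63, 66, 66, 67, 67, 68, 69, 70,
                  70, 70, 70, 72, 73, 75, 75, 76, 76, 78, 79, 89])

q1, median, q3 = np.percentile(temps, [25, 50, 75])
iqr = q3 - q1
lower_fence = q1 - 1.5 * iqr
upper_fence = q3 + 1.5 * iqr

# Whiskers end at the most extreme observations still inside the fences;
# everything outside the fences is plotted individually as an outlier.
lower_whisker = temps[temps >= lower_fence].min()
upper_whisker = temps[temps <= upper_fence].max()
outliers = temps[(temps < lower_fence) | (temps > upper_fence)]

print(q1, median, q3)                 # 66.0 70.0 75.0
print(lower_whisker, upper_whisker)   # 57 79
print(outliers)                       # [52 89]
```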
See also
• Bagplot
• Contour boxplot
• Candlestick chart
• Data and information visualization
• Exploratory data analysis
• Fan chart
• Five-number summary
• Functional boxplot
• Seven-number summary
• Violin plot
References
1. Dutoit, S. H. C. (2012). Graphical exploratory data analysis. Springer. ISBN 978-1-4612-9371-2. OCLC 1019645745.
2. Grubbs, Frank E. (February 1969). "Procedures for Detecting Outlying Observations in Samples". Technometrics. 11 (1): 1–21. doi:10.1080/00401706.1969.10490657. ISSN 0040-1706.
3. Boddy, Richard (2009). Statistical Methods in Practice: for Scientists and Technologists. John Wiley & Sons. ISBN 978-0-470-74664-6. OCLC 940679163.
4. Spear, Mary Eleanor (1952). Charting Statistics. McGraw Hill. p. 166.
5. Spear, Mary Eleanor (1969). Practical charting techniques. New York: McGraw-Hill. ISBN 0070600104. OCLC 924909765.
6. Wickham, Hadley; Stryjewski, Lisa. "40 years of boxplots" (PDF). Retrieved December 24, 2020.
7. Holmes, Alexander; Illowsky, Barbara; Dean, Susan (31 March 2015). "Introductory Business Statistics". OpenStax.
8. Dekking, F.M. (2005). A Modern Introduction to Probability and Statistics. Springer. pp. 234–238. ISBN 1-85233-896-2.
9. Derrick, Ben; Green, Elizabeth; Ritchie, Felix; White, Paul (September 2022). "The Risk of Disclosure When Reporting Commonly Used Univariate Statistics". Privacy in Statistical Databases. 13463: 119–129. doi:10.1007/978-3-031-13945-1_9.
10. Frigge, Michael; Hoaglin, David C.; Iglewicz, Boris (February 1989). "Some Implementations of the Boxplot". The American Statistician. 43 (1): 50–54. doi:10.2307/2685173. JSTOR 2685173.
11. Marmolejo-Ramos, F.; Tian, S. (2010). "The shifting boxplot. A boxplot based on essential summary statistics around the mean". International Journal of Psychological Research. 3 (1): 37–46. doi:10.21500/20112084.823.
12. McGill, Robert; Tukey, John W.; Larsen, Wayne A. (February 1978). "Variations of Box Plots". The American Statistician. 32 (1): 12–16. doi:10.2307/2683468. JSTOR 2683468.
13. "R: Box Plot Statistics". R manual. Retrieved 26 June 2011.
14. Hubert, M.; Vandervieren, E. (2008). "An adjusted boxplot for skewed distribution". Computational Statistics and Data Analysis. 52 (12): 5186–5201. CiteSeerX 10.1.1.90.9812. doi:10.1016/j.csda.2007.11.008.
Further reading
• Tukey, John W. (1977). Exploratory Data Analysis. Addison-Wesley. ISBN 9780201076165.
• Benjamini, Y. (1988). "Opening the Box of a Boxplot". The American Statistician. 42 (4): 257–262. doi:10.2307/2685133. JSTOR 2685133.
• Rousseeuw, P. J.; Ruts, I.; Tukey, J. W. (1999). "The Bagplot: A Bivariate Boxplot". The American Statistician. 53 (4): 382–387. doi:10.2307/2686061. JSTOR 2686061.
External links
Wikimedia Commons has media related to Box plots.
• Beeswarm Boxplot - superimposing a frequency-jittered stripchart on top of a box plot
Wikipedia
Variable (mathematics)
In mathematics, a variable (from Latin variabilis, "changeable") is a symbol that represents a mathematical object. A variable may represent a number, a vector, a matrix, a function, the argument of a function, a set, or an element of a set.[1]
Algebraic computations with variables as if they were explicit numbers solve a range of problems in a single computation. For example, the quadratic formula solves any quadratic equation by substituting the numeric values of the coefficients of that equation for the variables that represent them in the quadratic formula. In mathematical logic, a variable is either a symbol representing an unspecified term of the theory (a meta-variable), or a basic object of the theory that is manipulated without referring to its possible intuitive interpretation.
History
In ancient works such as Euclid's Elements, single letters refer to geometric points and shapes. In the 7th century, Brahmagupta used different colours to represent the unknowns in algebraic equations in the Brāhmasphuṭasiddhānta. One section of this book is called "Equations of Several Colours".[2]
At the end of the 16th century, François Viète introduced the idea of representing known and unknown numbers by letters, nowadays called variables, and the idea of computing with them as if they were numbers, in order to obtain the result by a simple replacement. Viète's convention was to use consonants for known values, and vowels for unknowns.[3] In 1637, René Descartes "invented the convention of representing unknowns in equations by x, y, and z, and knowns by a, b, and c".[4] Unlike Viète's convention, Descartes' is still commonly in use. The history of the letter x in math was discussed in an 1887 Scientific American article.[5]
Starting in the 1660s, Isaac Newton and Gottfried Wilhelm Leibniz independently developed the infinitesimal calculus, which essentially consists of studying how an infinitesimal variation of a variable quantity induces a corresponding variation of another quantity which is a function of the first variable. Almost a century later, Leonhard Euler fixed the terminology of infinitesimal calculus, and introduced the notation y = f(x) for a function f, its variable x and its value y. Until the end of the 19th century, the word variable referred almost exclusively to the arguments and the values of functions.
In the second half of the 19th century, it became apparent that the foundation of infinitesimal calculus was not formalized enough to deal with apparent paradoxes such as a nowhere differentiable continuous function. To solve this problem, Karl Weierstrass introduced a new formalism consisting of replacing the intuitive notion of limit by a formal definition. The older notion of limit was "when the variable x varies and tends toward a, then f(x) tends toward L", without any accurate definition of "tends". Weierstrass replaced this sentence by the formula
$(\forall \epsilon >0)(\exists \eta >0)(\forall x)\;|x-a|<\eta \Rightarrow |L-f(x)|<\epsilon ,$
in which none of the five variables is considered as varying. This static formulation led to the modern notion of variable, which is simply a symbol representing a mathematical object that either is unknown, or may be replaced by any element of a given set (e.g., the set of real numbers).
Notation
Variables are generally denoted by a single letter, most often from the Latin alphabet and less often from the Greek, which may be lowercase or capitalized.
The letter may be followed by a subscript: a number (as in $x_{2}$), another variable ($x_{i}$), a word or abbreviation of a word ($x_{\text{total}}$) or a mathematical expression ($x_{2i+1}$). Under the influence of computer science, some variable names in pure mathematics consist of several letters and digits. Following René Descartes (1596–1650), letters at the beginning of the alphabet such as a, b, c are commonly used for known values and parameters, and letters at the end of the alphabet such as x, y, z are commonly used for unknowns and variables of functions.[6] In printed mathematics, the norm is to set variables and constants in an italic typeface.[7]
For example, a general quadratic function is conventionally written as $ ax^{2}+bx+c\,$, where a, b and c are parameters (also called constants, because they are constant functions), while x is the variable of the function. A more explicit way to denote this function is $ x\mapsto ax^{2}+bx+c\,$, which clarifies the function-argument status of x and the constant status of a, b and c. Since c occurs in a term that is a constant function of x, it is called the constant term.[8]
Specific branches and applications of mathematics have specific naming conventions for variables. Variables with similar roles or meanings are often assigned consecutive letters or the same letter with different subscripts. For example, the three axes in 3D coordinate space are conventionally called x, y, and z. In physics, the names of variables are largely determined by the physical quantity they describe, but various naming conventions exist. A convention often followed in probability and statistics is to use X, Y, Z for the names of random variables, keeping x, y, z for variables representing corresponding better-defined values.
Specific kinds of variables
It is common for variables to play different roles in the same mathematical formula, and names or qualifiers have been introduced to distinguish them. For example, the general cubic equation $ax^{3}+bx^{2}+cx+d=0,$ is interpreted as having five variables: four, a, b, c, d, which are taken to be given numbers, and the fifth variable, x, which is understood to be an unknown number. To distinguish them, the variable x is called an unknown, and the other variables are called parameters or coefficients, or sometimes constants, although this last terminology is incorrect for an equation, and should be reserved for the function defined by the left-hand side of this equation.
In the context of functions, the term variable refers commonly to the arguments of the functions. This is typically the case in sentences like "function of a real variable", "x is the variable of the function f: x ↦ f(x)", "f is a function of the variable x" (meaning that the argument of the function is referred to by the variable x).
In the same context, variables that are independent of x define constant functions and are therefore called constant. For example, a constant of integration is an arbitrary constant function that is added to a particular antiderivative to obtain the other antiderivatives. Because of the strong relationship between polynomials and polynomial functions, the term "constant" is often used to denote the coefficients of a polynomial, which are constant functions of the indeterminates. This use of "constant" as an abbreviation of "constant function" must be distinguished from the normal meaning of the word in mathematics.
A constant, or mathematical constant, is a well-defined and unambiguous number or other mathematical object, for example the numbers 0, 1, π and the identity element of a group. Since a variable may represent any mathematical object, a letter that represents a constant is often called a variable. This is, in particular, the case for e and π, even when they represent Euler's number and 3.14159...
Other specific names for variables are:
• An unknown is a variable in an equation which has to be solved for.
• An indeterminate is a symbol, commonly called a variable, that appears in a polynomial or a formal power series. Formally speaking, an indeterminate is not a variable, but a constant in the polynomial ring or the ring of formal power series. However, because of the strong relationship between polynomials or power series and the functions that they define, many authors consider indeterminates as a special kind of variables.
• A parameter is a quantity (usually a number) which is a part of the input of a problem, and remains constant during the whole solution of this problem. For example, in mechanics the mass and the size of a solid body are parameters for the study of its movement. In computer science, parameter has a different meaning and denotes an argument of a function.
• Free variables and bound variables
• A random variable is a kind of variable that is used in probability theory and its applications.
All these denominations of variables are of semantic nature, and the way of computing with them (syntax) is the same for all.
Dependent and independent variables
Main article: Dependent and independent variables
In calculus and its application to physics and other sciences, it is rather common to consider a variable, say y, whose possible values depend on the value of another variable, say x. In mathematical terms, the dependent variable y represents the value of a function of x. To simplify formulas, it is often useful to use the same symbol for the dependent variable y and the function mapping x onto y. For example, the state of a physical system depends on measurable quantities such as the pressure, the temperature, the spatial position, ..., and all these quantities vary when the system evolves, that is, they are functions of time. In the formulas describing the system, these quantities are represented by variables which are dependent on the time, and thus considered implicitly as functions of the time.
Therefore, in a formula, a dependent variable is a variable that is implicitly a function of another (or several other) variables. An independent variable is a variable that is not dependent.[9]
The property of a variable to be dependent or independent often depends on the point of view and is not intrinsic. For example, in the notation f(x, y, z), the three variables may be all independent and the notation represents a function of three variables. On the other hand, if y and z depend on x (are dependent variables) then the notation represents a function of the single independent variable x.[10]
Examples
If one defines a function f from the real numbers to the real numbers by $f(x)=x^{2}+\sin(x+4)$ then x is a variable standing for the argument of the function being defined, which can be any real number.
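The same point can be made in code, where a variable of a function is nothing more than the name of a formal parameter; the sketch below is illustrative only.

```python
# The function f(x) = x^2 + sin(x + 4) from the example above, written in code.
# The name `x` is only a placeholder for the argument: renaming it (here to `t`)
# yields exactly the same function.
import math

def f(x):
    return x**2 + math.sin(x + 4)

def g(t):
    return t**2 + math.sin(t + 4)

print(f(1.5) == g(1.5))   # True: the choice of variable name is immaterial
```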
In the identity $\sum _{i=1}^{n}i={\frac {n^{2}+n}{2}}$ the variable i is a summation variable which designates in turn each of the integers 1, 2, ..., n (it is also called index because its variation is over a discrete set of values) while n is a parameter (it does not vary within the formula). In the theory of polynomials, a polynomial of degree 2 is generally denoted as ax2 + bx + c, where a, b and c are called coefficients (they are assumed to be fixed, i.e., parameters of the problem considered) while x is called a variable. When studying this polynomial for its polynomial function this x stands for the function argument. When studying the polynomial as an object in itself, x is taken to be an indeterminate, and would often be written with a capital letter instead to indicate this status. Example: the ideal gas law Consider the equation describing the ideal gas law, $PV=Nk_{B}T.$ This equation would generally be interpreted to have four variables, and one constant. The constant is $k_{B}$, the Boltzmann constant. One of the variables, $N$, the number of particles, is a positive integer (and therefore a discrete variable), while the other three, $P,V$ and $T$, for pressure, volume and temperature, are continuous variables. One could rearrange this equation to obtain $P$ as a function of the other variables, $P(V,N,T)={\frac {Nk_{B}T}{V}}.$ Then $P$, as a function of the other variables, is the dependent variable, while its arguments, $V,N$ and $T$, are independent variables. One could approach this function more formally and think about its domain and range: in function notation, here $P$ is a function $P:\mathbb {R} _{>0}\times \mathbb {N} \times \mathbb {R} _{>0}\rightarrow \mathbb {R} $. However, in an experiment, in order to determine the dependence of pressure on a single one of the independent variables, it is necessary to fix all but one of the variables, say $T$. This gives a function $P(T)={\frac {Nk_{B}T}{V}},$ where now $N$ and $V$ are also regarded as constants. Mathematically, this constitutes a partial application of the earlier function $P$. This illustrates how independent variables and constants are largely dependent on the point of view taken. One could even regard $k_{B}$ as a variable to obtain a function $P(V,N,T,k_{B})={\frac {Nk_{B}T}{V}}.$ Moduli spaces See also: moduli spaces Considering constants and variables can lead to the concept of moduli spaces. For illustration, consider the equation for a parabola, $y=ax^{2}+bx+c,$ where $a,b,c,x$ and $y$ are all considered to be real. The set of points $(x,y)$ in the 2D plane satisfying this equation trace out the graph of a parabola. Here, $a,b$ and $c$ are regarded as constants, which specify the parabola, while $x$ and $y$ are variables. Then instead regarding $a,b$ and $c$ as variables, we observe that each set of 3-tuples $(a,b,c)$ corresponds to a different parabola. That is, they specify coordinates on the 'space of parabolas': this is known as a moduli space of parabolas. Conventional variable names • a, b, c, d (sometimes extended to e, f) for parameters or coefficients • a0, a1, a2, ... 
for situations where distinct letters are inconvenient • ai or ui for the i-th term of a sequence or the i-th coefficient of a series • e for Euler's number • f, g, h for functions (as in $f(x)$) • i for the imaginary unit • i, j, k (sometimes l or h) for varying integers or indices in an indexed family, or unit vectors • l and w for the length and width of a figure • l also for a line, or in number theory for a prime number not equal to p • n (with m as a second choice) for a fixed integer, such as a count of objects or the degree of an equation • p for a prime number or a probability • q for a prime power or a quotient • r for a radius, a remainder or a correlation coefficient • t for time • x, y, z for the three Cartesian coordinates of a point in Euclidean geometry or the corresponding axes • z for a complex number, or in statistics a normal random variable • α, β, γ, θ, φ for angle measures • ε (with δ as a second choice) for an arbitrarily small positive number • λ for an eigenvalue • Σ (capital sigma) for a sum, or σ (lowercase sigma) in statistics for the standard deviation[11] • μ for a mean See also • Lambda calculus • Observable variable • Physical constant • Propositional variable References 1. Stover & Weisstein. 2. Tabak 2014, p. 40. 3. Fraleigh 1989, p. 276. 4. Sorell 2000, p. 19. 5. Scientific American. Munn & Company. September 3, 1887. p. 148. 6. Edwards Art. 4 7. Hosch 2010, p. 71. 8. Foerster 2006, p. 18. 9. Edwards Art. 5 10. Edwards Art. 6 11. Weisstein, Eric W. "Sum". mathworld.wolfram.com. Retrieved February 14, 2022. Bibliography • Edwards, Joseph (1892). An Elementary Treatise on the Differential Calculus (2nd ed.). London: MacMillan and Co. • Foerster, Paul A. (2006). Algebra and Trigonometry: Functions and Applications (classics ed.). Upper Saddle River, NJ: Prentice Hall. ISBN 978-0-13-165711-3. • Fraleigh, John B. (1989). A First Course in Abstract Algebra (4th ed.). United States: Addison-Wesley. ISBN 978-0-201-52821-3. • Hosch, William L., ed. (2010). The Britannica Guide to Algebra and Trigonometry. Britannica Educational Publishing. ISBN 978-1-61530-219-2. • Menger, Karl (1954). "On Variables in Mathematics and in Natural Science". The British Journal for the Philosophy of Science. University of Chicago Press. 5 (18): 134–142. doi:10.1093/bjps/V.18.134. JSTOR 685170. • Peregrin, Jaroslav (2000). "Variables in Natural Language: Where do they come from?" (PDF). In Böttner, Michael; Thümmel, Wolf (eds.). Variable-Free Semantics. Osnabrück Secolo. pp. 46–65. ISBN 978-3-929979-53-4. • Quine, Willard V. (1960). "Variables Explained Away" (PDF). Proceedings of the American Philosophical Society. American Philosophical Society. 104 (3): 343–347. JSTOR 985250. • Sorell, Tom (2000). Descartes: A Very Short Introduction. New York: Oxford University Press. ISBN 978-0-19-285409-4. • Stover, Christopher; Weisstein, Eric W. "Variable". In Weisstein, Eric W. (ed.). Wolfram MathWorld. Wolfram Research. Retrieved November 22, 2021. • Tabak, John (2014). Algebra: Sets, Symbols, and the Language of Thought. Infobase Publishing. ISBN 978-0-8160-6875-3. 
Quasi-Newton method Quasi-Newton methods are methods used to either find zeroes or local maxima and minima of functions, as an alternative to Newton's method. They can be used if the Jacobian or Hessian is unavailable or is too expensive to compute at every iteration. The "full" Newton's method requires the Jacobian in order to search for zeros, or the Hessian for finding extrema. Some iterative methods that reduce to Newton's method, such as SLSQP, may be considered quasi-Newtonian. Search for zeros: root finding Newton's method to find zeroes of a function $g$ of multiple variables is given by $x_{n+1}=x_{n}-[J_{g}(x_{n})]^{-1}g(x_{n})$, where $[J_{g}(x_{n})]^{-1}$ is the left inverse of the Jacobian matrix $J_{g}(x_{n})$ of $g$ evaluated for $x_{n}$. Strictly speaking, any method that replaces the exact Jacobian $J_{g}(x_{n})$ with an approximation is a quasi-Newton method.[1] For instance, the chord method (where $J_{g}(x_{n})$ is replaced by $J_{g}(x_{0})$ for all iterations) is a simple example. The methods given below for optimization refer to an important subclass of quasi-Newton methods, secant methods.[2] Using methods developed to find extrema in order to find zeroes is not always a good idea, as the majority of the methods used to find extrema require that the matrix that is used is symmetrical. While this holds in the context of the search for extrema, it rarely holds when searching for zeroes. Broyden's "good" and "bad" methods are two methods commonly used to find extrema that can also be applied to find zeroes. Other methods that can be used are the column-updating method, the inverse column-updating method, the quasi-Newton least squares method and the quasi-Newton inverse least squares method. More recently quasi-Newton methods have been applied to find the solution of multiple coupled systems of equations (e.g. fluid–structure interaction problems or interaction problems in physics). They allow the solution to be found by solving each constituent system separately (which is simpler than the global system) in a cyclic, iterative fashion until the solution of the global system is found.[2][3] Search for extrema: optimization The search for a minimum or maximum of a scalar-valued function is nothing else than the search for the zeroes of the gradient of that function. Therefore, quasi-Newton methods can be readily applied to find extrema of a function. In other words, if $g$ is the gradient of $f$, then searching for the zeroes of the vector-valued function $g$ corresponds to the search for the extrema of the scalar-valued function $f$; the Jacobian of $g$ now becomes the Hessian of $f$. The main difference is that the Hessian matrix is a symmetric matrix, unlike the Jacobian when searching for zeroes. Most quasi-Newton methods used in optimization exploit this property. In optimization, quasi-Newton methods (a special case of variable-metric methods) are algorithms for finding local maxima and minima of functions. Quasi-Newton methods are based on Newton's method to find the stationary point of a function, where the gradient is 0. Newton's method assumes that the function can be locally approximated as a quadratic in the region around the optimum, and uses the first and second derivatives to find the stationary point. In higher dimensions, Newton's method uses the gradient and the Hessian matrix of second derivatives of the function to be minimized. In quasi-Newton methods the Hessian matrix does not need to be computed. 
The Hessian is updated by analyzing successive gradient vectors instead. Quasi-Newton methods are a generalization of the secant method to find the root of the first derivative for multidimensional problems. In multiple dimensions the secant equation is under-determined, and quasi-Newton methods differ in how they constrain the solution, typically by adding a simple low-rank update to the current estimate of the Hessian.
The first quasi-Newton algorithm was proposed by William C. Davidon, a physicist working at Argonne National Laboratory. He developed the first quasi-Newton algorithm in 1959: the DFP updating formula, which was later popularized by Fletcher and Powell in 1963, but is rarely used today. The most common quasi-Newton algorithms are currently the SR1 formula (for "symmetric rank-one"), the BHHH method, the widespread BFGS method (suggested independently by Broyden, Fletcher, Goldfarb, and Shanno, in 1970), and its low-memory extension L-BFGS. The Broyden class is a linear combination of the DFP and BFGS methods. The SR1 formula does not guarantee that the update matrix maintains positive-definiteness and can be used for indefinite problems. Broyden's method does not require the update matrix to be symmetric and is used to find the root of a general system of equations (rather than the gradient) by updating the Jacobian (rather than the Hessian).
One of the chief advantages of quasi-Newton methods over Newton's method is that the Hessian matrix (or, in the case of quasi-Newton methods, its approximation) $B$ does not need to be inverted. Newton's method, and its derivatives such as interior point methods, require the Hessian to be inverted, which is typically implemented by solving a system of linear equations and is often quite costly. In contrast, quasi-Newton methods usually generate an estimate of $B^{-1}$ directly.
As in Newton's method, one uses a second-order approximation to find the minimum of a function $f(x)$. The Taylor series of $f(x)$ around an iterate is $f(x_{k}+\Delta x)\approx f(x_{k})+\nabla f(x_{k})^{\mathrm {T} }\,\Delta x+{\frac {1}{2}}\Delta x^{\mathrm {T} }B\,\Delta x,$ where $\nabla f$ is the gradient and $B$ is an approximation to the Hessian matrix.[4] The gradient of this approximation (with respect to $\Delta x$) is $\nabla f(x_{k}+\Delta x)\approx \nabla f(x_{k})+B\,\Delta x,$ and setting this gradient to zero (which is the goal of optimization) provides the Newton step: $\Delta x=-B^{-1}\nabla f(x_{k}).$ The Hessian approximation $B$ is chosen to satisfy $\nabla f(x_{k}+\Delta x)=\nabla f(x_{k})+B\,\Delta x,$ which is called the secant equation (the Taylor series of the gradient itself). In more than one dimension $B$ is underdetermined. In one dimension, solving for $B$ and applying Newton's step with the updated value is equivalent to the secant method. The various quasi-Newton methods differ in their choice of the solution to the secant equation (in one dimension, all the variants are equivalent). Most methods (but with exceptions, such as Broyden's method) seek a symmetric solution ($B^{T}=B$); furthermore, the variants listed below can be motivated by finding an update $B_{k+1}$ that is as close as possible to $B_{k}$ in some norm; that is, $B_{k+1}=\operatorname {argmin} _{B}\|B-B_{k}\|_{V}$, where $V$ is some positive-definite matrix that defines the norm.
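As a concrete illustration of this update machinery, the following NumPy sketch is our own minimal example (not code from any package cited later): the quadratic test function, the Armijo backtracking rule and all variable names are assumptions made purely for the illustration. It maintains the inverse approximation $H_{k}\approx B_{k}^{-1}$ and applies the BFGS update of the inverse listed in the table below.

```python
import numpy as np

A = np.array([[1.0, 0.2], [0.2, 0.5]])   # symmetric positive-definite
b = np.array([1.0, -2.0])

def f(x):          # illustrative quadratic test function
    return 0.5 * x @ A @ x - b @ x

def grad(x):       # its gradient
    return A @ x - b

x, H = np.zeros(2), np.eye(2)             # H is the inverse-Hessian approximation
for k in range(50):
    g = grad(x)
    if np.linalg.norm(g) < 1e-10:
        break
    p = -H @ g                            # quasi-Newton search direction
    t = 1.0                               # simple backtracking (Armijo) line search
    while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
        t *= 0.5
    s = t * p                             # step actually taken, s_k
    y = grad(x + s) - g                   # gradient change, y_k
    rho = 1.0 / (y @ s)
    I = np.eye(2)
    # BFGS update of the inverse approximation H_{k+1}
    H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) + rho * np.outer(s, s)
    x = x + s

print(x)                      # BFGS iterate
print(np.linalg.solve(A, b))  # exact minimizer of the quadratic, for comparison
```

On this convex quadratic the curvature condition $y_{k}^{\mathrm {T}}s_{k}>0$ holds automatically, so the inverse approximation stays positive-definite, as stated below for BFGS and DFP.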
An approximate initial value $B_{0}=\beta I$ is often sufficient to achieve rapid convergence, although there is no general strategy to choose $\beta $.[5] Note that $B_{0}$ should be positive-definite. The unknown $x_{k}$ is updated by applying Newton's step calculated using the current approximate Hessian matrix $B_{k}$:
• $\Delta x_{k}=-\alpha _{k}B_{k}^{-1}\nabla f(x_{k})$, with $\alpha _{k}$ chosen to satisfy the Wolfe conditions;
• $x_{k+1}=x_{k}+\Delta x_{k}$;
• The gradient is computed at the new point, $\nabla f(x_{k+1})$, and $y_{k}=\nabla f(x_{k+1})-\nabla f(x_{k})$ is used to update the approximate Hessian $B_{k+1}$, or directly its inverse $H_{k+1}=B_{k+1}^{-1}$ using the Sherman–Morrison formula.
• A key property of the BFGS and DFP updates is that if $B_{k}$ is positive-definite, and $\alpha _{k}$ is chosen to satisfy the Wolfe conditions, then $B_{k+1}$ is also positive-definite.
The most popular update formulas, each listed with its update of $B_{k+1}$ and of the inverse $H_{k+1}=B_{k+1}^{-1}$, are:
• BFGS: $B_{k+1}=B_{k}+{\frac {y_{k}y_{k}^{\mathrm {T} }}{y_{k}^{\mathrm {T} }\Delta x_{k}}}-{\frac {B_{k}\Delta x_{k}(B_{k}\Delta x_{k})^{\mathrm {T} }}{\Delta x_{k}^{\mathrm {T} }B_{k}\,\Delta x_{k}}}$ and $H_{k+1}=\left(I-{\frac {\Delta x_{k}y_{k}^{\mathrm {T} }}{y_{k}^{\mathrm {T} }\Delta x_{k}}}\right)H_{k}\left(I-{\frac {y_{k}\Delta x_{k}^{\mathrm {T} }}{y_{k}^{\mathrm {T} }\Delta x_{k}}}\right)+{\frac {\Delta x_{k}\Delta x_{k}^{\mathrm {T} }}{y_{k}^{\mathrm {T} }\,\Delta x_{k}}}$
• Broyden: $B_{k+1}=B_{k}+{\frac {y_{k}-B_{k}\Delta x_{k}}{\Delta x_{k}^{\mathrm {T} }\,\Delta x_{k}}}\,\Delta x_{k}^{\mathrm {T} }$ and $H_{k+1}=H_{k}+{\frac {(\Delta x_{k}-H_{k}y_{k})\Delta x_{k}^{\mathrm {T} }H_{k}}{\Delta x_{k}^{\mathrm {T} }H_{k}\,y_{k}}}$
• Broyden family: $B_{k+1}=(1-\varphi _{k})B_{k+1}^{\text{BFGS}}+\varphi _{k}B_{k+1}^{\text{DFP}},\quad \varphi \in [0,1]$
• DFP: $B_{k+1}=\left(I-{\frac {y_{k}\,\Delta x_{k}^{\mathrm {T} }}{y_{k}^{\mathrm {T} }\,\Delta x_{k}}}\right)B_{k}\left(I-{\frac {\Delta x_{k}y_{k}^{\mathrm {T} }}{y_{k}^{\mathrm {T} }\,\Delta x_{k}}}\right)+{\frac {y_{k}y_{k}^{\mathrm {T} }}{y_{k}^{\mathrm {T} }\,\Delta x_{k}}}$ and $H_{k+1}=H_{k}+{\frac {\Delta x_{k}\Delta x_{k}^{\mathrm {T} }}{\Delta x_{k}^{\mathrm {T} }\,y_{k}}}-{\frac {H_{k}y_{k}y_{k}^{\mathrm {T} }H_{k}}{y_{k}^{\mathrm {T} }H_{k}y_{k}}}$
• SR1: $B_{k+1}=B_{k}+{\frac {(y_{k}-B_{k}\,\Delta x_{k})(y_{k}-B_{k}\,\Delta x_{k})^{\mathrm {T} }}{(y_{k}-B_{k}\,\Delta x_{k})^{\mathrm {T} }\,\Delta x_{k}}}$ and $H_{k+1}=H_{k}+{\frac {(\Delta x_{k}-H_{k}y_{k})(\Delta x_{k}-H_{k}y_{k})^{\mathrm {T} }}{(\Delta x_{k}-H_{k}y_{k})^{\mathrm {T} }y_{k}}}$
Other methods are Pearson's method, McCormick's method, the Powell symmetric Broyden (PSB) method and Greenstadt's method.[2]

Relationship to matrix inversion
When $f$ is a convex quadratic function with positive-definite Hessian $B$, one would expect the matrices $H_{k}$ generated by a quasi-Newton method to converge to the inverse Hessian $H=B^{-1}$. This is indeed the case for the class of quasi-Newton methods based on least-change updates.[6]

Notable implementations
Implementations of quasi-Newton methods are available in many programming languages. Notable open source implementations include:
• GNU Octave uses a form of BFGS in its fsolve function, with trust region extensions.
• GNU Scientific Library implements the Broyden–Fletcher–Goldfarb–Shanno (BFGS) algorithm.
• ALGLIB implements (L)BFGS in C++ and C#.
• R's optim general-purpose optimizer routine uses the BFGS method by using method="BFGS".[7]
• SciPy's scipy.optimize module provides fmin_bfgs, and the scipy.optimize.minimize function includes, among other methods, a BFGS implementation;[8] a minimal usage sketch is shown below.
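For instance, the SciPy interface mentioned above can be called as follows. This is only a usage sketch; the Rosenbrock test function, the starting point and the tolerance are our own choices, not part of the article.

```python
import numpy as np
from scipy.optimize import minimize

def rosenbrock(x):
    # classic non-quadratic test function with minimum at (1, 1)
    return (1 - x[0]) ** 2 + 100 * (x[1] - x[0] ** 2) ** 2

result = minimize(rosenbrock, x0=np.array([-1.2, 1.0]), method="BFGS", tol=1e-8)
print(result.x)    # should be close to [1., 1.]
print(result.nit)  # number of quasi-Newton iterations used
```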
Notable proprietary implementations include:
• Mathematica includes quasi-Newton solvers.[9]
• The NAG Library contains several routines[10] for minimizing or maximizing a function[11] which use quasi-Newton algorithms.
• In MATLAB's Optimization Toolbox, the fminunc function uses (among other methods) the BFGS quasi-Newton method.[12] Many of the constrained methods of the Optimization Toolbox use BFGS and the variant L-BFGS.[13]

See also
• BFGS method
• L-BFGS
• OWL-QN
• Broyden's method
• DFP updating formula
• Newton's method
• Newton's method in optimization
• SR1 formula

References
1. Broyden, C. G. (1972). "Quasi-Newton Methods". In Murray, W. (ed.). Numerical Methods for Unconstrained Optimization. London: Academic Press. pp. 87–106. ISBN 0-12-512250-0.
2. Haelterman, Rob (2009). "Analytical study of the Least Squares Quasi-Newton method for interaction problems". PhD Thesis, Ghent University. Retrieved 2014-08-14.
3. Rob Haelterman; Dirk Van Eester; Daan Verleyen (2015). "Accelerating the solution of a physics model inside a tokamak using the (Inverse) Column Updating Method". Journal of Computational and Applied Mathematics. 279: 133–144. doi:10.1016/j.cam.2014.11.005.
4. "Introduction to Taylor's theorem for multivariable functions - Math Insight". mathinsight.org. Retrieved November 11, 2021.
5. Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization. New York: Springer. p. 142. ISBN 0-387-98793-2.
6. Robert Mansel Gower; Peter Richtarik (2015). "Randomized Quasi-Newton Updates are Linearly Convergent Matrix Inversion Algorithms". arXiv:1602.01768 [math.NA].
7. "optim function - RDocumentation". www.rdocumentation.org. Retrieved 2022-02-21.
8. "Scipy.optimize.minimize — SciPy v1.7.1 Manual".
9. "Unconstrained Optimization: Methods for Local Minimization—Wolfram Language Documentation". reference.wolfram.com. Retrieved 2022-02-21.
10. The Numerical Algorithms Group. "Keyword Index: Quasi-Newton". NAG Library Manual, Mark 23. Retrieved 2012-02-09.
11. The Numerical Algorithms Group. "E04 – Minimizing or Maximizing a Function" (PDF). NAG Library Manual, Mark 23. Retrieved 2012-02-09.
12. "Find minimum of unconstrained multivariable function - MATLAB fminunc".
13. "Constrained Nonlinear Optimization Algorithms - MATLAB & Simulink". www.mathworks.com. Retrieved 2022-02-21.

Further reading
• Bonnans, J. F.; Gilbert, J. Ch.; Lemaréchal, C.; Sagastizábal, C. A. (2006). Numerical Optimization: Theoretical and Numerical Aspects (Second ed.). Springer. ISBN 3-540-35445-X.
• Fletcher, Roger (1987), Practical Methods of Optimization (2nd ed.), New York: John Wiley & Sons, ISBN 978-0-471-91547-8.
• Nocedal, Jorge; Wright, Stephen J. (1999). "Quasi-Newton Methods". Numerical Optimization. New York: Springer. pp. 192–221. ISBN 0-387-98793-2.
• Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 10.9. Quasi-Newton or Variable Metric Methods in Multidimensions". Numerical Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press. ISBN 978-0-521-88068-8.
• Scales, L. E. (1985). Introduction to Non-Linear Optimization. New York: MacMillan. pp. 84–106. ISBN 0-333-32552-4.
Differential (mathematics) In mathematics, differential refers to several related notions[1] derived from the early days of calculus, put on a rigorous footing, such as infinitesimal differences and the derivatives of functions.[2] The term is used in various branches of mathematics such as calculus, differential geometry, algebraic geometry and algebraic topology. Introduction The term differential is used nonrigorously in calculus to refer to an infinitesimal ("infinitely small") change in some varying quantity. For example, if x is a variable, then a change in the value of x is often denoted Δx (pronounced delta x). The differential dx represents an infinitely small change in the variable x. The idea of an infinitely small or infinitely slow change is, intuitively, extremely useful, and there are a number of ways to make the notion mathematically precise. Using calculus, it is possible to relate the infinitely small changes of various variables to each other mathematically using derivatives. If y is a function of x, then the differential dy of y is related to dx by the formula $dy={\frac {dy}{dx}}\,dx,$ where ${\frac {dy}{dx}}\,$denotes the derivative of y with respect to x. This formula summarizes the intuitive idea that the derivative of y with respect to x is the limit of the ratio of differences Δy/Δx as Δx becomes infinitesimal. Basic notions • In calculus, the differential represents a change in the linearization of a function. • The total differential is its generalization for functions of multiple variables. • In traditional approaches to calculus, the differentials (e.g. dx, dy, dt, etc.) are interpreted as infinitesimals. There are several methods of defining infinitesimals rigorously, but it is sufficient to say that an infinitesimal number is smaller in absolute value than any positive real number, just as an infinitely large number is larger than any real number. • The differential is another name for the Jacobian matrix of partial derivatives of a function from Rn to Rm (especially when this matrix is viewed as a linear map). • More generally, the differential or pushforward refers to the derivative of a map between smooth manifolds and the pushforward operations it defines. The differential is also used to define the dual concept of pullback. • Stochastic calculus provides a notion of stochastic differential and an associated calculus for stochastic processes. • The integrator in a Stieltjes integral is represented as the differential of a function. Formally, the differential appearing under the integral behaves exactly as a differential: thus, the integration by substitution and integration by parts formulae for Stieltjes integral correspond, respectively, to the chain rule and product rule for the differential. History and usage See also: History of calculus Infinitesimal quantities played a significant role in the development of calculus. Archimedes used them, even though he did not believe that arguments involving infinitesimals were rigorous.[3] Isaac Newton referred to them as fluxions. However, it was Gottfried Leibniz who coined the term differentials for infinitesimal quantities and introduced the notation for them which is still used today. In Leibniz's notation, if x is a variable quantity, then dx denotes an infinitesimal change in the variable x. Thus, if y is a function of x, then the derivative of y with respect to x is often denoted dy/dx, which would otherwise be denoted (in the notation of Newton or Lagrange) ẏ or y′. 
The use of differentials in this form attracted much criticism, for instance in the famous pamphlet The Analyst by Bishop Berkeley. Nevertheless, the notation has remained popular because it suggests strongly the idea that the derivative of y at x is its instantaneous rate of change (the slope of the graph's tangent line), which may be obtained by taking the limit of the ratio Δy/Δx as Δx becomes arbitrarily small. Differentials are also compatible with dimensional analysis, where a differential such as dx has the same dimensions as the variable x. Calculus evolved into a distinct branch of mathematics during the 17th century CE, although there were antecedents going back to antiquity. The presentations of, e.g., Newton, Leibniz, were marked by non-rigorous definitions of terms like differential, fluent and "infinitely small". While many of the arguments in Bishop Berkeley's 1734 The Analyst are theological in nature, modern mathematicians acknowledge the validity of his argument against "the Ghosts of departed Quantities"; however, the modern approaches do not have the same technical issues. Despite the lack of rigor, immense progress was made in the 17th and 18th centuries. In the 19th century, Cauchy and others gradually developed the Epsilon, delta approach to continuity, limits and derivatives, giving a solid conceptual foundation for calculus. In the 20th century, several new concepts in, e.g., multivariable calculus, differential geometry, seemed to encapsulate the intent of the old terms, especially differential; both differential and infinitesimal are used with new, more rigorous, meanings. Differentials are also used in the notation for integrals because an integral can be regarded as an infinite sum of infinitesimal quantities: the area under a graph is obtained by subdividing the graph into infinitely thin strips and summing their areas. In an expression such as $\int f(x)\,dx,$ the integral sign (which is a modified long s) denotes the infinite sum, f(x) denotes the "height" of a thin strip, and the differential dx denotes its infinitely thin width. 
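The "infinite sum of infinitely thin strips" picture can be made concrete by truncating it to a finite sum. The short Python sketch below is only an illustration of that idea; the particular function, interval and step size dx are our own choices. The approximation improves as dx shrinks.

```python
import math

def riemann_sum(f, a, b, dx):
    """Approximate the integral of f over [a, b] by strips of width dx."""
    n = int((b - a) / dx)
    return sum(f(a + i * dx) * dx for i in range(n))

# The exact value of the integral of cos(x) from 0 to pi/2 is 1.
for dx in (0.1, 0.01, 0.001):
    print(dx, riemann_sum(math.cos, 0.0, math.pi / 2, dx))
```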
Approaches
There are several approaches for making the notion of differentials mathematically precise.
1. Differentials as linear maps. This approach underlies the definition of the derivative and the exterior derivative in differential geometry.[4]
2. Differentials as nilpotent elements of commutative rings. This approach is popular in algebraic geometry.[5]
3. Differentials in smooth models of set theory. This approach is known as synthetic differential geometry or smooth infinitesimal analysis and is closely related to the algebraic geometric approach, except that ideas from topos theory are used to hide the mechanisms by which nilpotent infinitesimals are introduced.[6]
4. Differentials as infinitesimals in hyperreal number systems, which are extensions of the real numbers that contain invertible infinitesimals and infinitely large numbers. This is the approach of nonstandard analysis pioneered by Abraham Robinson.[7]
These approaches are very different from each other, but they have in common the idea of being quantitative, i.e., saying not just that a differential is infinitely small, but how small it is.

Differentials as linear maps
There is a simple way to make precise sense of differentials, first used on the real line, by regarding them as linear maps. It can be used on $\mathbb {R} $, $\mathbb {R} ^{n}$, a Hilbert space, a Banach space, or more generally, a topological vector space. The case of the real line is the easiest to explain. This type of differential is also known as a covariant vector or cotangent vector, depending on context.
Differentials as linear maps on R Suppose $f(x)$ is a real-valued function on $\mathbb {R} $. We can reinterpret the variable $x$ in $f(x)$ as being a function rather than a number, namely the identity map on the real line, which takes a real number $p$ to itself: $x(p)=p$. Then $f(x)$ is the composite of $f$ with $x$, whose value at $p$ is $f(x(p))=f(p)$. The differential $\operatorname {d} f$ (which of course depends on $f$) is then a function whose value at $p$ (usually denoted $df_{p}$) is not a number, but a linear map from $\mathbb {R} $ to $\mathbb {R} $. Since a linear map from $\mathbb {R} $ to $\mathbb {R} $ is given by a $1\times 1$ matrix, it is essentially the same thing as a number, but the change in the point of view allows us to think of $df_{p}$ as an infinitesimal and compare it with the standard infinitesimal $dx_{p}$, which is again just the identity map from $\mathbb {R} $ to $\mathbb {R} $ (a $1\times 1$ matrix with entry $1$). The identity map has the property that if $\varepsilon $ is very small, then $dx_{p}(\varepsilon )$ is very small, which enables us to regard it as infinitesimal. The differential $df_{p}$ has the same property, because it is just a multiple of $dx_{p}$, and this multiple is the derivative $f'(p)$ by definition. We therefore obtain that $df_{p}=f'(p)\,dx_{p}$, and hence $df=f'\,dx$. Thus we recover the idea that $f'$ is the ratio of the differentials $df$ and $dx$. This would just be a trick were it not for the fact that: 1. it captures the idea of the derivative of $f$ at $p$ as the best linear approximation to $f$ at $p$; 2. it has many generalizations. Differentials as linear maps on Rn If $f$ is a function from $\mathbb {R} ^{n}$ to $\mathbb {R} $, then we say that $f$ is differentiable[8] at $p\in \mathbb {R} ^{n}$ if there is a linear map $df_{p}$ from $\mathbb {R} ^{n}$ to $\mathbb {R} $ such that for any $\varepsilon >0$, there is a neighbourhood $N$ of $p$ such that for $x\in N$, $\left|f(x)-f(p)-df_{p}(x-p)\right|<\varepsilon \left|x-p\right|.$ We can now use the same trick as in the one-dimensional case and think of the expression $f(x_{1},x_{2},\ldots ,x_{n})$ as the composite of $f$ with the standard coordinates $x_{1},x_{2},\ldots ,x_{n}$ on $\mathbb {R} ^{n}$ (so that $x_{j}(p)$ is the $j$-th component of $p\in \mathbb {R} ^{n}$). Then the differentials $\left(dx_{1}\right)_{p},\left(dx_{2}\right)_{p},\ldots ,\left(dx_{n}\right)_{p}$ at a point $p$ form a basis for the vector space of linear maps from $\mathbb {R} ^{n}$ to $\mathbb {R} $ and therefore, if $f$ is differentiable at $p$, we can write $\operatorname {d} f_{p}$ as a linear combination of these basis elements: $df_{p}=\sum _{j=1}^{n}D_{j}f(p)\,(dx_{j})_{p}.$ The coefficients $D_{j}f(p)$ are (by definition) the partial derivatives of $f$ at $p$ with respect to $x_{1},x_{2},\ldots ,x_{n}$. Hence, if $f$ is differentiable on all of $\mathbb {R} ^{n}$, we can write, more concisely: $\operatorname {d} f={\frac {\partial f}{\partial x_{1}}}\,dx_{1}+{\frac {\partial f}{\partial x_{2}}}\,dx_{2}+\cdots +{\frac {\partial f}{\partial x_{n}}}\,dx_{n}.$ In the one-dimensional case this becomes $df={\frac {df}{dx}}dx$ as before. This idea generalizes straightforwardly to functions from $\mathbb {R} ^{n}$ to $\mathbb {R} ^{m}$. Furthermore, it has the decisive advantage over other definitions of the derivative that it is invariant under changes of coordinates. This means that the same idea can be used to define the differential of smooth maps between smooth manifolds. 
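The definition above says that $df_{p}$ is the linear map that best approximates the change $f(x)-f(p)$ near $p$. The following NumPy sketch is an informal numerical check of that statement, not part of the article: the particular function, the point $p$, the displacement and the finite-difference step are all our own choices. It builds $df_{p}(v)=\sum _{j}D_{j}f(p)\,v_{j}$ from numerically estimated partial derivatives and compares it with the actual change of $f$ for a small displacement.

```python
import numpy as np

def f(x):
    # an illustrative smooth function of two variables
    return np.sin(x[0]) * x[1] + x[1] ** 2

def differential_at(f, p, h=1e-6):
    """Return df_p as a linear map v -> grad f(p) . v, using finite differences."""
    p = np.asarray(p, dtype=float)
    grad = np.zeros_like(p)
    for j in range(p.size):
        e = np.zeros_like(p)
        e[j] = h
        grad[j] = (f(p + e) - f(p - e)) / (2 * h)   # central difference for D_j f(p)
    return lambda v: grad @ np.asarray(v, dtype=float)

p = np.array([0.5, 2.0])
df_p = differential_at(f, p)

v = np.array([1e-3, -2e-3])   # a small displacement x - p
print(f(p + v) - f(p))        # actual change of f
print(df_p(v))                # linear approximation df_p(x - p); nearly equal
```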
Aside: Note that the existence of all the partial derivatives of $f(x)$ at $x$ is a necessary condition for the existence of a differential at $x$. However it is not a sufficient condition. For counterexamples, see Gateaux derivative.

Differentials as linear maps on a vector space
The same procedure works on a vector space with enough additional structure to reasonably talk about continuity. The most concrete case is a Hilbert space, also known as a complete inner product space, where the inner product and its associated norm define a suitable concept of distance. The same procedure works for a Banach space, also known as a complete normed vector space. However, for a more general topological vector space, some of the details are more abstract because there is no concept of distance. For the important case of a finite dimension, any inner product space is a Hilbert space, any normed vector space is a Banach space and any topological vector space is complete. As a result, one can define a coordinate system from an arbitrary basis and use the same technique as for $\mathbb {R} ^{n}$.

Differentials as germs of functions
This approach works on any differentiable manifold. If
1. U and V are open sets containing p
2. $f\colon U\to \mathbb {R} $ is continuous
3. $g\colon V\to \mathbb {R} $ is continuous
then f is equivalent to g at p, denoted $f\sim _{p}g$, if and only if there is an open $W\subseteq U\cap V$ containing p such that $f(x)=g(x)$ for every x in W. The germ of f at p, denoted $[f]_{p}$, is the set of all real continuous functions equivalent to f at p; if f is smooth at p then $[f]_{p}$ is a smooth germ. If
1. $U_{1}$, $U_{2}$, $V_{1}$ and $V_{2}$ are open sets containing p
2. $f_{1}\colon U_{1}\to \mathbb {R} $, $f_{2}\colon U_{2}\to \mathbb {R} $, $g_{1}\colon V_{1}\to \mathbb {R} $ and $g_{2}\colon V_{2}\to \mathbb {R} $ are smooth functions
3. $f_{1}\sim _{p}g_{1}$
4. $f_{2}\sim _{p}g_{2}$
5. r is a real number
then
1. $r*f_{1}\sim _{p}r*g_{1}$
2. $f_{1}+f_{2}\colon U_{1}\cap U_{2}\to \mathbb {R} \sim _{p}g_{1}+g_{2}\colon V_{1}\cap V_{2}\to \mathbb {R} $
3. $f_{1}*f_{2}\colon U_{1}\cap U_{2}\to \mathbb {R} \sim _{p}g_{1}*g_{2}\colon V_{1}\cap V_{2}\to \mathbb {R} $
This shows that the germs at p form an algebra. Define ${\mathcal {I}}_{p}$ to be the set of all smooth germs vanishing at p and ${\mathcal {I}}_{p}^{2}$ to be the product of ideals ${\mathcal {I}}_{p}{\mathcal {I}}_{p}$. Then a differential at p (cotangent vector at p) is an element of ${\mathcal {I}}_{p}/{\mathcal {I}}_{p}^{2}$. The differential of a smooth function f at p, denoted $\mathrm {d} f_{p}$, is $[f-f(p)]_{p}/{\mathcal {I}}_{p}^{2}$.
A similar approach is to define differential equivalence of first order in terms of derivatives in an arbitrary coordinate patch. Then the differential of f at p is the set of all functions differentially equivalent to $f-f(p)$ at p.

Algebraic geometry
In algebraic geometry, differentials and other infinitesimal notions are handled in a very explicit way by accepting that the coordinate ring or structure sheaf of a space may contain nilpotent elements. The simplest example is the ring of dual numbers $\mathbb {R} [\varepsilon ]$, where $\varepsilon ^{2}=0$. This can be motivated by the algebro-geometric point of view on the derivative of a function f from $\mathbb {R} $ to $\mathbb {R} $ at a point p. For this, note first that $f-f(p)$ belongs to the ideal $I_{p}$ of functions on $\mathbb {R} $ which vanish at p. If the derivative of f vanishes at p, then $f-f(p)$ belongs to the square $I_{p}^{2}$ of this ideal. Hence the derivative of f at p may be captured by the equivalence class $[f-f(p)]$ in the quotient space $I_{p}/I_{p}^{2}$, and the 1-jet of f (which encodes its value and its first derivative) is the equivalence class of f in the space of all functions modulo $I_{p}^{2}$. Algebraic geometers regard this equivalence class as the restriction of f to a thickened version of the point p whose coordinate ring is not $\mathbb {R} $ (which is the quotient space of functions on $\mathbb {R} $ modulo $I_{p}$) but $\mathbb {R} [\varepsilon ]$, which is the quotient space of functions on $\mathbb {R} $ modulo $I_{p}^{2}$. Such a thickened point is a simple example of a scheme.[5]
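The ring of dual numbers just described is also the algebraic idea behind forward-mode automatic differentiation. The small Python class below is our own illustrative sketch (not taken from the article or from any particular library): it implements numbers of the form $a+b\varepsilon $ with $\varepsilon ^{2}=0$, and evaluating a polynomial at $x+\varepsilon $ then returns its value together with its derivative, exactly as the 1-jet discussion suggests.

```python
class Dual:
    """Dual numbers a + b*eps with eps**2 = 0."""

    def __init__(self, a, b=0.0):
        self.a, self.b = a, b   # value part and "infinitesimal" part

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b eps)(c + d eps) = ac + (ad + bc) eps, because eps**2 = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def f(x):
    return 3 * x * x + 2 * x + 1   # f'(x) = 6x + 2

result = f(Dual(2.0, 1.0))   # evaluate at 2 + eps
print(result.a)              # f(2)  = 17
print(result.b)              # f'(2) = 14
```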
Algebraic geometry notions
Differentials are also important in algebraic geometry, and there are several important notions.
• Abelian differentials usually mean differential one-forms on an algebraic curve or Riemann surface.
• Quadratic differentials (which behave like "squares" of abelian differentials) are also important in the theory of Riemann surfaces.
• Kähler differentials provide a general notion of differential in algebraic geometry.

Synthetic differential geometry
A fifth approach to infinitesimals is the method of synthetic differential geometry[9] or smooth infinitesimal analysis.[10] This is closely related to the algebraic-geometric approach, except that the infinitesimals are more implicit and intuitive. The main idea of this approach is to replace the category of sets with another category of smoothly varying sets which is a topos. In this category, one can define the real numbers, smooth functions, and so on, but the real numbers automatically contain nilpotent infinitesimals, so these do not need to be introduced by hand as in the algebraic geometric approach. However, the logic in this new category is not identical to the familiar logic of the category of sets: in particular, the law of the excluded middle does not hold. This means that set-theoretic mathematical arguments only extend to smooth infinitesimal analysis if they are constructive (e.g., do not use proof by contradiction). Some regard this disadvantage as a positive thing, since it forces one to find constructive arguments wherever they are available.

Nonstandard analysis
The final approach to infinitesimals again involves extending the real numbers, but in a less drastic way. In the nonstandard analysis approach there are no nilpotent infinitesimals, only invertible ones, which may be viewed as the reciprocals of infinitely large numbers.[7] Such extensions of the real numbers may be constructed explicitly using equivalence classes of sequences of real numbers, so that, for example, the sequence (1, 1/2, 1/3, ..., 1/n, ...) represents an infinitesimal. The first-order logic of this new set of hyperreal numbers is the same as the logic for the usual real numbers, but the completeness axiom (which involves second-order logic) does not hold. Nevertheless, this suffices to develop an elementary and quite intuitive approach to calculus using infinitesimals; see transfer principle.

Differential geometry
The notion of a differential motivates several concepts in differential geometry (and differential topology).
• The differential (pushforward) of a map between manifolds.
• Differential forms provide a framework which accommodates multiplication and differentiation of differentials.
• The exterior derivative is a notion of differentiation of differential forms which generalizes the differential of a function (which is a differential 1-form).
• Pullback is, in particular, a geometric name for the chain rule for composing a map between manifolds with a differential form on the target manifold.
• Covariant derivatives or differentials provide a general notion for differentiating vector fields and tensor fields on a manifold, or, more generally, sections of a vector bundle: see Connection (vector bundle). This ultimately leads to the general concept of a connection.

Other meanings
The term differential has also been adopted in homological algebra and algebraic topology, because of the role the exterior derivative plays in de Rham cohomology: in a cochain complex $(C_{\bullet },d_{\bullet }),$ the maps (or coboundary operators) $d_{i}$ are often called differentials. Dually, the boundary operators in a chain complex are sometimes called codifferentials. The properties of the differential also motivate the algebraic notions of a derivation and a differential algebra.

See also
• Differential equation
• Differential form
• Differential of a function

Notes
Citations
1. "Differential". Wolfram MathWorld. Retrieved February 24, 2022. The word differential has several related meaning in mathematics. In the most common context, it means "related to derivatives." So, for example, the portion of calculus dealing with taking derivatives (i.e., differentiation), is known as differential calculus. The word "differential" also has a more technical meaning in the theory of differential k-forms as a so-called one-form.
2. "differential - Definition of differential in US English by Oxford Dictionaries". Oxford Dictionaries - English. Archived from the original on January 3, 2014. Retrieved 13 April 2018.
3. Boyer 1991.
4. Darling 1994.
5. Eisenbud & Harris 1998.
6. See Kock 2006 and Moerdijk & Reyes 1991.
7. See Robinson 1996 and Keisler 1986.
8. See, for instance, Apostol 1967.
9. See Kock 2006 and Lawvere 1968.
10. See Moerdijk & Reyes 1991 and Bell 1998.

References
• Apostol, Tom M. (1967), Calculus (2nd ed.), Wiley, ISBN 978-0-471-00005-1.
• Bell, John L. (1998), Invitation to Smooth Infinitesimal Analysis (PDF).
• Boyer, Carl B. (1991), "Archimedes of Syracuse", A History of Mathematics (2nd ed.), John Wiley & Sons, Inc., ISBN 978-0-471-54397-8.
• Darling, R. W. R. (1994), Differential forms and connections, Cambridge, UK: Cambridge University Press, ISBN 978-0-521-46800-8.
• Eisenbud, David; Harris, Joe (1998), The Geometry of Schemes, Springer-Verlag, ISBN 978-0-387-98637-1
• Keisler, H. Jerome (1986), Elementary Calculus: An Infinitesimal Approach (2nd ed.).
• Kock, Anders (2006), Synthetic Differential Geometry (PDF) (2nd ed.), Cambridge University Press.
• Lawvere, F.W. (1968), Outline of synthetic differential geometry (PDF) (published 1998).
• Moerdijk, I.; Reyes, G.E. (1991), Models for Smooth Infinitesimal Analysis, Springer-Verlag.
• Robinson, Abraham (1996), Non-standard analysis, Princeton University Press, ISBN 978-0-691-04490-3.
• Weisstein, Eric W. "Differentials". MathWorld.
Variable-order Bayesian network Variable-order Bayesian network (VOBN) models provide an important extension of both the Bayesian network models and the variable-order Markov models. VOBN models are used in machine learning in general and have shown great potential in bioinformatics applications.[1][2] These models extend the widely used position weight matrix (PWM) models, Markov models, and Bayesian network (BN) models. In contrast to the BN models, where each random variable depends on a fixed subset of random variables, in VOBN models these subsets may vary based on the specific realization of observed variables. The observed realizations are often called the context and, hence, VOBN models are also known as context-specific Bayesian networks.[3] The flexibility in the definition of conditioning subsets of variables turns out to be a real advantage in classification and analysis applications, as the statistical dependencies between random variables in a sequence of variables (not necessarily adjacent) may be taken into account efficiently, and in a position-specific and context-specific manner. See also • Markov chain • Examples of Markov chains • Variable order Markov models • Markov process • Markov chain Monte Carlo • Semi-Markov process • Artificial intelligence References 1. Ben-Gal, I.; Shani A.; Gohr A.; Grau J.; Arviv S.; Shmilovici A.; Posch S.; Grosse I. (2005). "Identification of Transcription Factor Binding Sites with Variable-order Bayesian Networks". Bioinformatics. 21 (11): 2657–2666. doi:10.1093/bioinformatics/bti410. PMID 15797905. 2. Grau, J.; Ben-Gal I.; Posch S.; Grosse I. (2006). "VOMBAT: Prediction of Transcription Factor Binding Sites using Variable Order Bayesian Trees" (PDF). Nucleic Acids Research. 34 (Web Server issue): 529–533. doi:10.1093/nar/gkl212. PMC 1538886. PMID 16845064. 3. Boutilier, C.; Friedman, N.; Goldszmidt, M.; Koller, D. (1996). Context-specific independence in Bayesian networks. 12th Conference on Uncertainty in Artificial Intelligence (August 1–4, 1996). Reed College, Portland, Oregon, USA. pp. 115–123. External links • VOMBAT: https://www2.informatik.uni-halle.de:8443/VOMBAT/
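To illustrate the difference from a fixed-parent conditional probability table, the toy Python sketch below is entirely our own construction (the variables, contexts and probabilities are made up for illustration): it stores a context-specific distribution in which the set of conditioning variables that actually matters depends on the observed context, as in a variable-order / context-specific model.

```python
# Toy context-specific conditional distribution P(X3 | context).
# In an ordinary Bayesian network, X3 would always condition on the same
# fixed parents; here the relevant parents depend on the observed values.
context_specific_cpd = {
    # If X1 = 'a', the value of X2 is irrelevant: condition on X1 alone.
    ("X1=a",): {"0": 0.9, "1": 0.1},
    # If X1 = 'b', the distribution depends on X2 as well.
    ("X1=b", "X2=0"): {"0": 0.3, "1": 0.7},
    ("X1=b", "X2=1"): {"0": 0.6, "1": 0.4},
}

def prob_x3(x1, x2, x3):
    """Look up P(X3 = x3) under the longest matching context."""
    key = (f"X1={x1}", f"X2={x2}")
    if key in context_specific_cpd:
        return context_specific_cpd[key][x3]
    return context_specific_cpd[(f"X1={x1}",)][x3]

print(prob_x3("a", "1", "0"))  # 0.9 -- X2 ignored in this context
print(prob_x3("b", "0", "1"))  # 0.7 -- X2 needed in this context
```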
Variable splitting
In applied mathematics and computer science, variable splitting is a decomposition method that relaxes a set of constraints.[1]

Details
When the variable x appears in two sets of constraints, it is possible to substitute the new variables $x_{1}$ in the first constraints and $x_{2}$ in the second, and then join the two variables with a new "linking" constraint,[2] which requires that $x_{1}=x_{2}$. This new linking constraint can be relaxed with a Lagrange multiplier; in many applications, the Lagrange multiplier can be interpreted as the price of equality between $x_{1}$ and $x_{2}$ in the new constraint.
For many problems, when the equality of the split variables is relaxed, the system is decomposed and each subsystem can be solved independently, at a substantial reduction in computing time and memory storage. A solution to the relaxed problem (with variable splitting) provides an approximate solution to the original problem; further, the approximate solution to the relaxed problem provides a "warm start", a good initialization of an iterative method for solving the original problem (having only the x variable). A sketch of the split-and-relax construction is given below.
The technique was first introduced by Kurt O. Jörnsten, Mikael Näsberg and Per A. Smeds in 1985. At the same time, M. Guignard and S. Kim introduced the same idea under the name Lagrangean decomposition (their papers appeared in 1987). The original references are: (1) Jörnsten, Kurt O.; Näsberg, Mikael; Smeds, Per A. (1985). Variable Splitting: A New Lagrangean Relaxation Approach to Some Mathematical Programming Models. LiTH MAT R, volumes 84–85. Linköping: University of Linköping, Department of Mathematics (Matematiska Institutionen). 52 pages; and (2) Guignard, Monique; Kim, Siwhan (1987). "Lagrangean Decomposition: A Model Yielding Stronger Bounds". Mathematical Programming. 39 (2): 215–228.[2][3][4]

References
1. Pipatsrisawat, Knot; Palyan, Akop; Chavira, Mark; Choi, Arthur; Darwiche, Adnan (2008). "Solving Weighted Max-SAT Problems in a Reduced Search Space: A Performance Analysis". Journal on Satisfiability, Boolean Modeling and Computation (JSAT). UCLA. 4 (2008): 4. Retrieved 18 April 2022.
2. Vanderbei (1991)
3. Alvarado (1997)
4. Adlers & Björck (2000). Reprinted as Appendix A in Mikael Adlers (2000), Topics in Sparse Least Squares Problems, Linköping Studies in Science and Technology, Linköping University, Sweden.

Bibliography
• Adlers, Mikael; Björck, Åke (2000). "Matrix stretching for sparse least squares problems". Numerical Linear Algebra with Applications. 7 (2): 51–65. doi:10.1002/(sici)1099-1506(200003)7:2<51::aid-nla187>3.0.co;2-o. ISSN 1099-1506.
• Alvarado, Fernando (1997). "Matrix enlarging methods and their application". BIT Numerical Mathematics. 37 (3): 473–505. CiteSeerX 10.1.1.24.5976. doi:10.1007/BF02510237. S2CID 120358431.
• Grcar, Joseph (1990). Matrix stretching for linear equations (Technical report). Sandia National Laboratories. arXiv:1203.2377. Bibcode:2012arXiv1203.2377G. SAND90-8723.
• Vanderbei, Robert J. (July 1991). "Splitting dense columns in sparse linear systems". Linear Algebra and Its Applications. 152: 107–117. doi:10.1016/0024-3795(91)90269-3. ISSN 0024-3795.
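The following small Python sketch illustrates the split-and-relax mechanics referred to above. It is our own illustration, not code from the references: the objective, the closed-form subproblem solutions, the dual-ascent step size and the variable names are all chosen for simplicity.

```python
# Variable splitting on:  minimize (x - 1)**2 + (x - 3)**2   (optimum at x = 2)
# Split x into x1, x2 with the linking constraint x1 = x2, relax it with a
# Lagrange multiplier lam, and alternate independent subproblem solves with
# a dual-ascent update of lam.

lam = 0.0          # multiplier: the "price" of the constraint x1 = x2
step = 0.5         # dual-ascent step size (chosen small enough to converge)

for _ in range(100):
    # Each subproblem is solved independently (here, in closed form):
    x1 = 1.0 - lam / 2.0      # argmin over x1 of (x1 - 1)**2 + lam * x1
    x2 = 3.0 + lam / 2.0      # argmin over x2 of (x2 - 3)**2 - lam * x2
    lam += step * (x1 - x2)   # dual ascent on the relaxed linking constraint

print(x1, x2, lam)   # both copies approach 2.0 and lam approaches -2.0
```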
Variance decomposition of forecast errors In econometrics and other applications of multivariate time series analysis, a variance decomposition or forecast error variance decomposition (FEVD) is used to aid in the interpretation of a vector autoregression (VAR) model once it has been fitted.[1] The variance decomposition indicates the amount of information each variable contributes to the other variables in the autoregression. It determines how much of the forecast error variance of each of the variables can be explained by exogenous shocks to the other variables. "Variance decomposition" redirects here. Not to be confused with Variance partitioning. Calculating the forecast error variance For the VAR (p) of form $y_{t}=\nu +A_{1}y_{t-1}+\dots +A_{p}y_{t-p}+u_{t}$ . This can be changed to a VAR(1) structure by writing it in companion form (see general matrix notation of a VAR(p)) $Y_{t}=V+AY_{t-1}+U_{t}$ where $A={\begin{bmatrix}A_{1}&A_{2}&\dots &A_{p-1}&A_{p}\\\mathbf {I} _{k}&0&\dots &0&0\\0&\mathbf {I} _{k}&&0&0\\\vdots &&\ddots &\vdots &\vdots \\0&0&\dots &\mathbf {I} _{k}&0\\\end{bmatrix}}$ , $Y={\begin{bmatrix}y_{1}\\\vdots \\y_{p}\end{bmatrix}}$, $V={\begin{bmatrix}\nu \\0\\\vdots \\0\end{bmatrix}}$ and $U_{t}={\begin{bmatrix}u_{t}\\0\\\vdots \\0\end{bmatrix}}$ where $y_{t}$, $\nu $ and $u$ are $k$ dimensional column vectors, $A$ is $kp$ by $kp$ dimensional matrix and $Y$, $V$ and $U$ are $kp$ dimensional column vectors. The mean squared error of the h-step forecast of variable $j$ is $\mathbf {MSE} [y_{j,t}(h)]=\sum _{i=0}^{h-1}\sum _{l=1}^{k}(e_{j}'\Theta _{i}e_{l})^{2}={\bigg (}\sum _{i=0}^{h-1}\Theta _{i}\Theta _{i}'{\bigg )}_{jj}={\bigg (}\sum _{i=0}^{h-1}\Phi _{i}\Sigma _{u}\Phi _{i}'{\bigg )}_{jj},$ and where • $e_{j}$ is the jth column of $I_{k}$ and the subscript $jj$ refers to that element of the matrix • $\Theta _{i}=\Phi _{i}P,$ where $P$ is a lower triangular matrix obtained by a Cholesky decomposition of $\Sigma _{u}$ such that $\Sigma _{u}=PP'$, where $\Sigma _{u}$ is the covariance matrix of the errors $u_{t}$ • $\Phi _{i}=JA^{i}J',$ where $J={\begin{bmatrix}\mathbf {I} _{k}&0&\dots &0\end{bmatrix}},$ so that $J$ is a $k$ by $kp$ dimensional matrix. The amount of forecast error variance of variable $j$ accounted for by exogenous shocks to variable $l$ is given by $\omega _{jl,h},$ $\omega _{jl,h}=\sum _{i=0}^{h-1}(e_{j}'\Theta _{i}e_{l})^{2}/MSE[y_{j,t}(h)].$ See also • Analysis of variance Notes 1. Lütkepohl, H. (2007) New Introduction to Multiple Time Series Analysis, Springer. p. 63.
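As a numerical illustration of the formulas above, the following NumPy sketch (our own example; the bivariate VAR(1) coefficients and error covariance are invented values) computes the h-step forecast error variance decomposition $\omega _{jl,h}$ for a small VAR(1), for which the companion form is the model itself and $\Phi _{i}=A^{i}$.

```python
import numpy as np

A = np.array([[0.5, 0.2],              # VAR(1) coefficient matrix (illustrative)
              [0.1, 0.4]])
Sigma_u = np.array([[1.0, 0.3],
                    [0.3, 0.5]])       # error covariance matrix

P = np.linalg.cholesky(Sigma_u)        # lower-triangular P with Sigma_u = P P'
h = 4                                  # forecast horizon
k = A.shape[0]

Theta = [np.linalg.matrix_power(A, i) @ P for i in range(h)]   # Theta_i = Phi_i P

# Forecast error variance of each variable and its decomposition omega[j, l]
mse = np.array([sum(Th[j] @ Th[j] for Th in Theta) for j in range(k)])
omega = np.array([[sum(Th[j, l] ** 2 for Th in Theta) / mse[j] for l in range(k)]
                  for j in range(k)])

print(mse)                 # h-step forecast error variances MSE[y_{j,t}(h)]
print(omega)               # share of each variance attributed to each shock
print(omega.sum(axis=1))   # sanity check: each row sums to 1
```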
Law of total variance
In probability theory, the law of total variance,[1] also known as the variance decomposition formula, the conditional variance formula, the law of iterated variances, or Eve's law,[2] states that if $X$ and $Y$ are random variables on the same probability space, and the variance of $Y$ is finite, then $\operatorname {Var} (Y)=\operatorname {E} [\operatorname {Var} (Y\mid X)]+\operatorname {Var} (\operatorname {E} [Y\mid X]).$
In language perhaps better known to statisticians than to probability theorists, the two terms are the "unexplained" and the "explained" components of the variance respectively (cf. fraction of variance unexplained, explained variation). In actuarial science, specifically credibility theory, the first component is called the expected value of the process variance (EVPV) and the second is called the variance of the hypothetical means (VHM).[3] These two components are also the source of the term "Eve's law", from the initials EV VE for "expectation of variance" and "variance of expectation".

Example
Suppose X is a coin flip with the probability of heads being h. Suppose that when X = heads then Y is drawn from a normal distribution with mean $\mu _{h}$ and standard deviation $\sigma _{h}$, and that when X = tails then Y is drawn from a normal distribution with mean $\mu _{t}$ and standard deviation $\sigma _{t}$. Then the first, "unexplained" term on the right-hand side of the above formula is the weighted average of the variances, $h\sigma _{h}^{2}+(1-h)\sigma _{t}^{2}$, and the second, "explained" term is the variance of the distribution that gives $\mu _{h}$ with probability h and gives $\mu _{t}$ with probability 1 − h.

Formulation
There is a general variance decomposition formula for $c\geq 2$ components (see below).[4] For example, with two conditioning random variables: $\operatorname {Var} [Y]=\operatorname {E} \left[\operatorname {Var} \left(Y\mid X_{1},X_{2}\right)\right]+\operatorname {E} [\operatorname {Var} (\operatorname {E} \left[Y\mid X_{1},X_{2}\right]\mid X_{1})]+\operatorname {Var} (\operatorname {E} \left[Y\mid X_{1}\right]),$ which follows from the law of total conditional variance:[4] $\operatorname {Var} (Y\mid X_{1})=\operatorname {E} \left[\operatorname {Var} (Y\mid X_{1},X_{2})\mid X_{1}\right]+\operatorname {Var} \left(\operatorname {E} \left[Y\mid X_{1},X_{2}\right]\mid X_{1}\right).$
Note that the conditional expected value $\operatorname {E} (Y\mid X)$ is a random variable in its own right, whose value depends on the value of $X.$ Notice that the conditional expected value of $Y$ given the event $X=x$ is a function of $x$ (this is where adherence to the conventional and rigidly case-sensitive notation of probability theory becomes important!). If we write $\operatorname {E} (Y\mid X=x)=g(x)$ then the random variable $\operatorname {E} (Y\mid X)$ is just $g(X).$ Similar comments apply to the conditional variance.
One special case (similar to the law of total expectation) states that if $A_{1},\ldots ,A_{n}$ is a partition of the whole outcome space, that is, these events are mutually exclusive and exhaustive, then ${\begin{aligned}\operatorname {Var} (X)={}&\sum _{i=1}^{n}\operatorname {Var} (X\mid A_{i})\Pr(A_{i})+\sum _{i=1}^{n}\operatorname {E} [X\mid A_{i}]^{2}(1-\Pr(A_{i}))\Pr(A_{i})\\[4pt]&{}-2\sum _{i=2}^{n}\sum _{j=1}^{i-1}\operatorname {E} [X\mid A_{i}]\Pr(A_{i})\operatorname {E} [X\mid A_{j}]\Pr(A_{j}).\end{aligned}}$ In this formula, the first component is the expectation of the conditional variance; the other two components are the variance of the conditional expectation.
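The coin-flip example above is easy to check numerically. The following NumPy sketch is a quick Monte Carlo illustration with arbitrarily chosen values of h, the means and the standard deviations; it simulates the mixture and compares the sample variance of Y with the sum of the "unexplained" and "explained" terms.

```python
import numpy as np

rng = np.random.default_rng(0)
h, mu_h, sigma_h, mu_t, sigma_t = 0.3, 1.0, 2.0, -2.0, 0.5

n = 1_000_000
heads = rng.random(n) < h                      # X: the coin flip
y = np.where(heads,
             rng.normal(mu_h, sigma_h, n),     # Y | X = heads
             rng.normal(mu_t, sigma_t, n))     # Y | X = tails

unexplained = h * sigma_h**2 + (1 - h) * sigma_t**2   # E[Var(Y | X)]
explained = h * (1 - h) * (mu_h - mu_t)**2            # Var(E[Y | X]) for a two-point mixture

print(y.var())                   # sample variance of Y
print(unexplained + explained)   # law of total variance: should match closely
```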
Proof The law of total variance can be proved using the law of total expectation.[5] First, $\operatorname {Var} [Y]=\operatorname {E} \left[Y^{2}\right]-\operatorname {E} [Y]^{2}$ from the definition of variance. Again, from the definition of variance, and applying the law of total expectation, we have $\operatorname {E} \left[Y^{2}\right]=\operatorname {E} \left[\operatorname {E} [Y^{2}\mid X]\right]=\operatorname {E} \left[\operatorname {Var} [Y\mid X]+\operatorname {E} [Y\mid X]^{2}\right].$ Now we rewrite the conditional second moment of $Y$ in terms of its variance and first moment, and apply the law of total expectation on the right hand side: $\operatorname {E} \left[Y^{2}\right]-\operatorname {E} [Y]^{2}=\operatorname {E} \left[\operatorname {Var} [Y\mid X]+\operatorname {E} [Y\mid X]^{2}\right]-\operatorname {E} [\operatorname {E} [Y\mid X]]^{2}.$ Since the expectation of a sum is the sum of expectations, the terms can now be regrouped: $=\left(\operatorname {E} [\operatorname {Var} [Y\mid X]]\right)+\left(\operatorname {E} \left[\operatorname {E} [Y\mid X]^{2}\right]-\operatorname {E} [\operatorname {E} [Y\mid X]]^{2}\right).$ Finally, we recognize the terms in the second set of parentheses as the variance of the conditional expectation $\operatorname {E} [Y\mid X]$: $=\operatorname {E} [\operatorname {Var} [Y\mid X]]+\operatorname {Var} [\operatorname {E} [Y\mid X]].$ General variance decomposition applicable to dynamic systems The following formula shows how to apply the general, measure theoretic variance decomposition formula [4] to stochastic dynamic systems. Let $Y(t)$ be the value of a system variable at time $t.$ Suppose we have the internal histories (natural filtrations) $H_{1t},H_{2t},\ldots ,H_{c-1,t}$, each one corresponding to the history (trajectory) of a different collection of system variables. The collections need not be disjoint. The variance of $Y(t)$ can be decomposed, for all times $t,$ into $c\geq 2$ components as follows: ${\begin{aligned}\operatorname {Var} [Y(t)]={}&\operatorname {E} (\operatorname {Var} [Y(t)\mid H_{1t},H_{2t},\ldots ,H_{c-1,t}])\\[4pt]&{}+\sum _{j=2}^{c-1}\operatorname {E} (\operatorname {Var} [\operatorname {E} [Y(t)\mid H_{1t},H_{2t},\ldots ,H_{jt}]\mid H_{1t},H_{2t},\ldots ,H_{j-1,t}])\\[4pt]&{}+\operatorname {Var} (\operatorname {E} [Y(t)\mid H_{1t}]).\end{aligned}}$ The decomposition is not unique. It depends on the order of the conditioning in the sequential decomposition. The square of the correlation and explained (or informational) variation In cases where $(Y,X)$ are such that the conditional expected value is linear; that is, in cases where $\operatorname {E} (Y\mid X)=aX+b,$ it follows from the bilinearity of covariance that $a={\operatorname {Cov} (Y,X) \over \operatorname {Var} (X)}$ and $b=\operatorname {E} (Y)-{\operatorname {Cov} (Y,X) \over \operatorname {Var} (X)}\operatorname {E} (X)$ and the explained component of the variance divided by the total variance is just the square of the correlation between $Y$ and $X;$ that is, in such cases, ${\operatorname {Var} (\operatorname {E} (Y\mid X)) \over \operatorname {Var} (Y)}=\operatorname {Corr} (X,Y)^{2}.$ One example of this situation is when $(X,Y)$ have a bivariate normal (Gaussian) distribution. 
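For the linear case just described, the explained share of the variance can be checked by simulation. A small sketch (Python/NumPy, with an arbitrary correlation chosen for illustration) draws a bivariate normal pair and compares Var(E(Y|X))/Var(Y) with the squared correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.6
n = 1_000_000
x = rng.normal(size=n)
y = rho * x + np.sqrt(1 - rho**2) * rng.normal(size=n)   # Corr(X, Y) = rho

# For a bivariate normal, E(Y|X) = a X + b with a = Cov(Y, X) / Var(X),
# so Var(E(Y|X)) / Var(Y) should equal Corr(X, Y)^2.
a = np.cov(y, x)[0, 1] / x.var()
b = y.mean() - a * x.mean()
explained_share = (a * x + b).var() / y.var()

print(explained_share)                 # ~0.36
print(np.corrcoef(x, y)[0, 1] ** 2)    # squared correlation, also ~0.36
```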
More generally, when the conditional expectation $\operatorname {E} (Y\mid X)$ is a non-linear function of $X$, the explained fraction of the variance can still be defined as[4] $\iota _{Y\mid X}={\operatorname {Var} (\operatorname {E} (Y\mid X)) \over \operatorname {Var} (Y)}=\operatorname {Corr} (\operatorname {E} (Y\mid X),Y)^{2},$ which can be estimated as the $R$ squared from a non-linear regression of $Y$ on $X,$ using data drawn from the joint distribution of $(X,Y).$ When $\operatorname {E} (Y\mid X)$ has a Gaussian distribution (and is an invertible function of $X$), or $Y$ itself has a (marginal) Gaussian distribution, this explained component of variation sets a lower bound on the mutual information:[4] $\operatorname {I} (Y;X)\geq \ln \left([1-\iota _{Y\mid X}]^{-1/2}\right).$ Higher moments A similar law for the third central moment $\mu _{3}$ says $\mu _{3}(Y)=\operatorname {E} \left(\mu _{3}(Y\mid X)\right)+\mu _{3}(\operatorname {E} (Y\mid X))+3\operatorname {cov} (\operatorname {E} (Y\mid X),\operatorname {var} (Y\mid X)).$ For higher cumulants, a generalization exists. See law of total cumulance. See also • Law of total covariance – a generalization • Law of propagation of errors – Effect of variables' uncertainties on the uncertainty of a function based on them References 1. Neil A. Weiss, A Course in Probability, Addison–Wesley, 2005, pages 385–386. 2. Joseph K. Blitzstein and Jessica Hwang: "Introduction to Probability" 3. Mahler, Howard C.; Dean, Curtis Gary (2001). "Chapter 8: Credibility" (PDF). In Casualty Actuarial Society (ed.). Foundations of Casualty Actuarial Science (4th ed.). Casualty Actuarial Society. pp. 525–526. ISBN 978-0-96247-622-8. Retrieved June 25, 2015. 4. Bowsher, C.G. and P.S. Swain, Identifying sources of variation and the flow of information in biochemical networks, PNAS May 15, 2012 109 (20) E1320-E1328. 5. Neil A. Weiss, A Course in Probability, Addison–Wesley, 2005, pages 380–383. • Blitzstein, Joe. "Stat 110 Final Review (Eve's Law)" (PDF). stat110.net. Harvard University, Department of Statistics. Retrieved 9 July 2014. • Billingsley, Patrick (1995). Probability and Measure. New York, NY: John Wiley & Sons, Inc. ISBN 0-471-00710-2. (Problem 34.10(b))
Median In statistics and probability theory, the median is the value separating the higher half from the lower half of a data sample, a population, or a probability distribution. For a data set, it may be thought of as "the middle" value. The basic feature of the median in describing data compared to the mean (often simply described as the "average") is that it is not skewed by a small proportion of extremely large or small values, and therefore provides a better representation of the center. Median income, for example, may be a better way to describe the center of the income distribution because increases in the largest incomes alone have no effect on the median. For this reason, the median is of central importance in robust statistics. Finite data set of numbers The median of a finite list of numbers is the "middle" number, when those numbers are listed in order from smallest to greatest. If the data set has an odd number of observations, the middle one is selected. For example, the following list of seven numbers, 1, 3, 3, 6, 7, 8, 9 has a median of 6, which is the fourth value. If the data set has an even number of observations, there is no distinct middle value and the median is usually defined to be the arithmetic mean of the two middle values.[1][2] For example, this data set of 8 numbers 1, 2, 3, 4, 5, 6, 8, 9 has a median value of 4.5, that is $(4+5)/2$. (In more technical terms, this interprets the median as the fully trimmed mid-range). In general, with this convention, the median can be defined as follows: For a data set $x$ of $n$ elements, ordered from smallest to greatest, if $n$ is odd, $\mathrm {median} (x)=x_{(n+1)/2}$ if $n$ is even, $\mathrm {median} (x)={\frac {x_{(n/2)}+x_{((n/2)+1)}}{2}}$ Comparison of common averages of the values [ 1, 2, 2, 3, 4, 7, 9 ]: • Midrange – midway point between the minimum and the maximum of a data set: (1 + 9) / 2 = 5 • Arithmetic mean – sum of values of a data set divided by number of values, $ {\bar {x}}={\frac {1}{n}}\sum _{i=1}^{n}x_{i}$: (1 + 2 + 2 + 3 + 4 + 7 + 9) / 7 = 4 • Median – middle value separating the greater and lesser halves of a data set: 3 • Mode – most frequent value in a data set: 2 Formal definition Formally, a median of a population is any value such that at least half of the population is less than or equal to the proposed median and at least half is greater than or equal to the proposed median. As seen above, medians may not be unique. If each set contains more than half the population, then some of the population is exactly equal to the unique median. The median is well-defined for any ordered (one-dimensional) data, and is independent of any distance metric. The median can thus be applied to classes which are ranked but not numerical (e.g. working out a median grade when students are graded from A to F), although the result might be halfway between classes if there is an even number of cases. A geometric median, on the other hand, is defined in any number of dimensions. A related concept, in which the outcome is forced to correspond to a member of the sample, is the medoid. There is no widely accepted standard notation for the median, but some authors represent the median of a variable x as x͂,[3] as $\mu _{1/2}$,[1] or as M.[3][4] In any of these cases, the use of these or other symbols for the median needs to be explicitly defined when they are introduced.
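The odd/even rule above translates directly into code. A minimal sketch (Python; in practice one would simply call numpy.median or statistics.median):

```python
def median(values):
    """Median by the textbook definition: sort, take the middle element
    for an odd count, or the mean of the two middle elements for an even
    count."""
    x = sorted(values)
    n = len(x)
    mid = n // 2
    if n % 2 == 1:
        return x[mid]                      # x_{(n+1)/2} in 1-based notation
    return (x[mid - 1] + x[mid]) / 2       # mean of x_{(n/2)} and x_{(n/2+1)}

print(median([1, 3, 3, 6, 7, 8, 9]))       # 6
print(median([1, 2, 3, 4, 5, 6, 8, 9]))    # 4.5
```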
The median is a special case of other ways of summarizing the typical values associated with a statistical distribution: it is the 2nd quartile, 5th decile, and 50th percentile. Uses The median can be used as a measure of location when one attaches reduced importance to extreme values, typically because a distribution is skewed, extreme values are not known, or outliers are untrustworthy, i.e., may be measurement/transcription errors. For example, consider the multiset 1, 2, 2, 2, 3, 14. The median is 2 in this case, as is the mode, and it might be seen as a better indication of the center than the arithmetic mean of 4, which is larger than all but one of the values. However, the widely cited empirical relationship that the mean is shifted "further into the tail" of a distribution than the median is not generally true. At most, one can say that the two statistics cannot be "too far" apart; see § Inequality relating means and medians below.[5] As a median is based on the middle data in a set, it is not necessary to know the value of extreme results in order to calculate it. For example, in a psychology test investigating the time needed to solve a problem, if a small number of people failed to solve the problem at all in the given time, a median can still be calculated.[6] Because the median is simple to understand and easy to calculate, while also a robust approximation to the mean, the median is a popular summary statistic in descriptive statistics. In this context, there are several choices for a measure of variability: the range, the interquartile range, the mean absolute deviation, and the median absolute deviation. For practical purposes, different measures of location and dispersion are often compared on the basis of how well the corresponding population values can be estimated from a sample of data. The median, estimated using the sample median, has good properties in this regard. While it is not usually optimal if a given population distribution is assumed, its properties are always reasonably good. For example, a comparison of the efficiency of candidate estimators shows that the sample mean is more statistically efficient when—and only when—data is uncontaminated by data from heavy-tailed distributions or from mixtures of distributions. Even then, the median has a 64% efficiency compared to the minimum-variance mean (for large normal samples), which is to say the variance of the median will be ~57% greater than the variance of the mean.[7][8] Probability distributions For any real-valued probability distribution with cumulative distribution function F, a median is defined as any real number m that satisfies the inequalities $\int _{(-\infty ,m]}dF(x)\geq {\frac {1}{2}}{\text{ and }}\int _{[m,\infty )}dF(x)\geq {\frac {1}{2}}.$ An equivalent phrasing uses a random variable X distributed according to F: $\operatorname {P} (X\leq m)\geq {\frac {1}{2}}{\text{ and }}\operatorname {P} (X\geq m)\geq {\frac {1}{2}}$ Note that this definition does not require X to have an absolutely continuous distribution (which has a probability density function f), nor does it require a discrete one.
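The distributional definition can be applied directly to a discrete distribution by scanning its support for points satisfying both inequalities. A small sketch (Python/NumPy; the function name distribution_medians and the example distributions are illustrative), which also shows that medians need not be unique:

```python
import numpy as np

def distribution_medians(values, probs):
    """All support points m of a discrete distribution with
    P(X <= m) >= 1/2 and P(X >= m) >= 1/2 (a sketch of the definition above;
    it only reports medians that lie on the support itself)."""
    values = np.asarray(values, dtype=float)
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(values)
    values, probs = values[order], probs[order]
    le = np.cumsum(probs)                    # P(X <= v)
    ge = probs[::-1].cumsum()[::-1]          # P(X >= v)
    return values[(le >= 0.5) & (ge >= 0.5)]

# A fair four-sided die: every point between 2 and 3 is a median;
# among the support points themselves, both 2 and 3 qualify.
print(distribution_medians([1, 2, 3, 4], [0.25, 0.25, 0.25, 0.25]))   # [2. 3.]
print(distribution_medians([1, 2, 3], [0.2, 0.5, 0.3]))               # [2.]
```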
In the former case, the inequalities can be upgraded to equality: a median satisfies $\operatorname {P} (X\leq m)=\int _{-\infty }^{m}{f(x)\,dx}={\frac {1}{2}}=\int _{m}^{\infty }{f(x)\,dx}=\operatorname {P} (X\geq m).$ Any probability distribution on R has at least one median, but in pathological cases there may be more than one median: if F is constant 1/2 on an interval (so that f=0 there), then any value of that interval is a median. Medians of particular distributions The medians of certain types of distributions can be easily calculated from their parameters; furthermore, they exist even for some distributions lacking a well-defined mean, such as the Cauchy distribution: • The median of a symmetric unimodal distribution coincides with the mode. • The median of a symmetric distribution which possesses a mean μ also takes the value μ. • The median of a normal distribution with mean μ and variance $\sigma ^{2}$ is μ. In fact, for a normal distribution, mean = median = mode. • The median of a uniform distribution in the interval [a, b] is (a + b) / 2, which is also the mean. • The median of a Cauchy distribution with location parameter $x_{0}$ and scale parameter $\gamma $ is $x_{0}$, the location parameter. • The median of a power law distribution $x^{-a}$, with exponent $a>1$, is $2^{1/(a-1)}x_{\min }$, where $x_{\min }$ is the minimum value for which the power law holds.[10] • The median of an exponential distribution with rate parameter λ is the natural logarithm of 2 divided by the rate parameter: $\lambda ^{-1}\ln 2$. • The median of a Weibull distribution with shape parameter k and scale parameter λ is $\lambda (\ln 2)^{1/k}$. Properties Optimality property The mean absolute error of a real variable c with respect to the random variable X is $E(\left|X-c\right|)\,$ Provided that the probability distribution of X is such that the above expectation exists, then m is a median of X if and only if m is a minimizer of the mean absolute error with respect to X.[11] In particular, if m is a sample median, then it minimizes the arithmetic mean of the absolute deviations.[12] Note, however, that in cases where the sample contains an even number of elements, this minimizer is not unique. More generally, a median is defined as a minimum of $E(|X-c|-|X|),$ as discussed below in the section on multivariate medians (specifically, the spatial median). This optimization-based definition of the median is useful in statistical data-analysis, for example, in k-medians clustering. Inequality relating means and medians If the distribution has finite variance, then the distance between the median ${\tilde {X}}$ and the mean ${\bar {X}}$ is bounded by one standard deviation. This bound was proved by Book and Sher in 1979 for discrete samples,[13] and more generally by Page and Murty in 1982.[14] In a comment on a subsequent proof by O'Cinneide,[15] Mallows in 1991 presented a compact proof that uses Jensen's inequality twice,[16] as follows. Using |·| for the absolute value, we have ${\begin{aligned}|\mu -m|=|\operatorname {E} (X-m)|&\leq \operatorname {E} (|X-m|)\\&\leq \operatorname {E} (|X-\mu |)\\&\leq {\sqrt {\operatorname {E} \left((X-\mu )^{2}\right)}}=\sigma .\end{aligned}}$ The first and third inequalities come from Jensen's inequality applied to the absolute-value function and the square function, which are each convex. The second inequality comes from the fact that a median minimizes the absolute deviation function $a\mapsto \operatorname {E} (|X-a|)$.
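Both the exponential-median formula from the list above and the one-standard-deviation bound just proved are easy to check by simulation. A brief sketch (Python/NumPy, with an arbitrary rate parameter chosen for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 2.0
x = rng.exponential(scale=1 / lam, size=1_000_000)

mean, median, sd = x.mean(), np.median(x), x.std()
print(median, np.log(2) / lam)          # sample median vs ln(2)/lambda
print(abs(mean - median) <= sd)         # the one-standard-deviation bound holds
```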
Mallows's proof can be generalized to obtain a multivariate version of the inequality[17] simply by replacing the absolute value with a norm: $\|\mu -m\|\leq {\sqrt {\operatorname {E} \left(\|X-\mu \|^{2}\right)}}={\sqrt {\operatorname {trace} \left(\operatorname {var} (X)\right)}}$ where m is a spatial median, that is, a minimizer of the function $a\mapsto \operatorname {E} (\|X-a\|).\,$ The spatial median is unique when the data-set's dimension is two or more.[18][19] An alternative proof uses the one-sided Chebyshev inequality; it appears in an inequality on location and scale parameters. This formula also follows directly from Cantelli's inequality.[20] Unimodal distributions For the case of unimodal distributions, one can achieve a sharper bound on the distance between the median and the mean: $\left|{\tilde {X}}-{\bar {X}}\right|\leq \left({\frac {3}{5}}\right)^{\frac {1}{2}}\sigma \approx 0.7746\sigma $.[21] A similar relation holds between the median and the mode: $\left|{\tilde {X}}-\mathrm {mode} \right|\leq 3^{\frac {1}{2}}\sigma \approx 1.732\sigma .$ Jensen's inequality for medians Jensen's inequality states that for any random variable X with a finite expectation E[X] and for any convex function f, $f[E(X)]\leq E[f(X)].$ This inequality generalizes to the median as well. We say a function f: R → R is a C function if, for any t, $f^{-1}\left(\,(-\infty ,t]\,\right)=\{x\in \mathbb {R} \mid f(x)\leq t\}$ is a closed interval (allowing the degenerate cases of a single point or an empty set). Every convex function is a C function, but the reverse does not hold. If f is a C function, then $f(\operatorname {Median} [X])\leq \operatorname {Median} [f(X)]$ If the medians are not unique, the statement holds for the corresponding suprema.[22] Medians for samples This section discusses the theory of estimating a population median from a sample. To calculate the median of a sample "by hand," see § Finite data set of numbers above. Efficient computation of the sample median Even though comparison-sorting n items requires Ω(n log n) operations, selection algorithms can compute the kth-smallest of n items with only Θ(n) operations. This includes the median, which is the n/2th order statistic (or for an even number of samples, the arithmetic mean of the two middle order statistics).[23] Selection algorithms still have the downside of requiring Ω(n) memory, that is, they need to have the full sample (or a linear-sized portion of it) in memory. Because this, as well as the linear time requirement, can be prohibitive, several estimation procedures for the median have been developed. A simple one is the median of three rule, which estimates the median as the median of a three-element subsample; this is commonly used as a subroutine in the quicksort sorting algorithm, which uses an estimate of its input's median. A more robust estimator is Tukey's ninther, which is the median of three rule applied with limited recursion:[24] if A is the sample laid out as an array, and med3(A) = median(A[1], A[n/2], A[n]), then ninther(A) = med3(med3(A[1 ... n/3]), med3(A[n/3 ... 2n/3]), med3(A[2n/3 ... n]))
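A possible implementation of the median-of-three rule and Tukey's ninther is sketched below (Python/NumPy; the exact choice of the three probe positions within each third is a detail the description leaves open, so this layout is only one reasonable reading):

```python
import numpy as np

def med3(a, b, c):
    """Median of three values."""
    return sorted([a, b, c])[1]

def ninther(x):
    """Tukey's ninther: split the sample into thirds, take the median of
    the first/middle/last element in each third, then the median of those
    three results."""
    n = len(x)
    t = n // 3
    parts = [x[:t], x[t:2 * t], x[2 * t:]]
    return med3(*(med3(p[0], p[len(p) // 2], p[-1]) for p in parts))

rng = np.random.default_rng(4)
sample = rng.normal(loc=10.0, scale=2.0, size=9_999)
print(ninther(sample), np.median(sample))   # cheap estimate vs exact median
```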
The remedian is an estimator for the median that requires linear time but sub-linear memory, operating in a single pass over the sample.[25] Sampling distribution The distributions of both the sample mean and the sample median were determined by Laplace.[26] The distribution of the sample median from a population with a density function $f(x)$ is asymptotically normal with mean $m$ and variance[27] ${\frac {1}{4nf(m)^{2}}}$ where $m$ is the median of $f(x)$ and $n$ is the sample size: ${\text{Sample median}}\sim {\mathcal {N}}\left(\mu =m,\sigma ^{2}={\frac {1}{4nf(m)^{2}}}\right)$ A modern proof follows below. Laplace's result is now understood as a special case of the asymptotic distribution of arbitrary quantiles. For normal samples, the density is $f(m)=1/{\sqrt {2\pi \sigma ^{2}}}$, thus for large samples the variance of the median equals $({\pi }/{2})\cdot (\sigma ^{2}/n).$[7] (See also section #Efficiency below.) Derivation of the asymptotic distribution We take the sample size to be an odd number $N=2n+1$ and assume our variable to be continuous; the formula for the case of discrete variables is given below in § Empirical local density. The sample can be summarized as "below median", "at median", and "above median", which corresponds to a trinomial distribution with probabilities $F(v)$, $f(v)$ and $1-F(v)$. For a continuous variable, the probability of multiple sample values being exactly equal to the median is 0, so one can calculate the density of the median at the point $v$ directly from the trinomial distribution: $\Pr[\operatorname {Median} =v]\,dv={\frac {(2n+1)!}{n!n!}}F(v)^{n}(1-F(v))^{n}f(v)\,dv$. Now we introduce the beta function. For integer arguments $\alpha $ and $\beta $, this can be expressed as $\mathrm {B} (\alpha ,\beta )={\frac {(\alpha -1)!(\beta -1)!}{(\alpha +\beta -1)!}}$. Also, recall that $f(v)\,dv=dF(v)$. Using these relationships and setting both $\alpha $ and $\beta $ equal to $n+1$ allows the last expression to be written as ${\frac {F(v)^{n}(1-F(v))^{n}}{\mathrm {B} (n+1,n+1)}}\,dF(v)$ Hence the density function of the median is a symmetric beta distribution pushed forward by $F$. Its mean, as we would expect, is 0.5 and its variance is $1/(4(N+2))$. By the chain rule, the corresponding variance of the sample median is ${\frac {1}{4(N+2)f(m)^{2}}}$. The additional 2 is negligible in the limit. Empirical local density In practice, the functions $f$ and $F$ are often not known or assumed. However, they can be estimated from an observed frequency distribution. In this section, we give an example. Consider the following table, representing a sample of 3,800 (discrete-valued) observations:
v:    0      0.5    1      1.5    2      2.5    3      3.5    4      4.5    5
f(v): 0.000  0.008  0.010  0.013  0.083  0.108  0.328  0.220  0.202  0.023  0.005
F(v): 0.000  0.008  0.018  0.031  0.114  0.222  0.550  0.770  0.972  0.995  1.000
Because the observations are discrete-valued, constructing the exact distribution of the median is not an immediate translation of the above expression for $\Pr(\operatorname {Median} =v)$; one may (and typically does) have multiple instances of the median in one's sample. So we must sum over all these possibilities: $\Pr(\operatorname {Median} =v)=\sum _{i=0}^{n}\sum _{k=0}^{n}{\frac {N!}{i!(N-i-k)!k!}}F(v-1)^{i}(1-F(v))^{k}f(v)^{N-i-k}$ Here, i is the number of points strictly less than the median and k the number strictly greater.
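The asymptotic variance $1/(4nf(m)^{2})$ can be checked by simulation for normal samples, where the density at the median of a standard normal is $1/{\sqrt {2\pi }}$. A short sketch (Python/NumPy; the sample size and replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, reps = 101, 20_000                       # odd sample size, many replications
samples = rng.normal(loc=0.0, scale=1.0, size=(reps, n))
medians = np.median(samples, axis=1)

f_at_m = 1 / np.sqrt(2 * np.pi)             # standard normal density at m = 0
asymptotic_var = 1 / (4 * n * f_at_m**2)    # 1 / (4 n f(m)^2) = (pi/2) / n
print(medians.var(), asymptotic_var)        # should be close
```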
Using these preliminaries, it is possible to investigate the effect of sample size on the standard errors of the mean and median. The observed mean is 3.16, the observed raw median is 3 and the observed interpolated median is 3.174. The following table gives some comparison statistics:
Statistic (by sample size)                            n = 3   n = 9   n = 15  n = 21
Expected value of median                              3.198   3.191   3.174   3.161
Standard error of median (above formula)              0.482   0.305   0.257   0.239
Standard error of median (asymptotic approximation)   0.879   0.508   0.393   0.332
Standard error of mean                                0.421   0.243   0.188   0.159
The expected value of the median falls slightly as sample size increases while, as would be expected, the standard errors of both the median and the mean are proportionate to the inverse square root of the sample size. The asymptotic approximation errs on the side of caution by overestimating the standard error. Estimation of variance from sample data The value of $(2f(x))^{-2}$—the asymptotic value of $n^{-1/2}(\nu -m)$ where $\nu $ is the population median—has been studied by several authors. The standard "delete one" jackknife method produces inconsistent results.[28] An alternative—the "delete k" method—where $k$ grows with the sample size has been shown to be asymptotically consistent.[29] This method may be computationally expensive for large data sets. A bootstrap estimate is known to be consistent,[30] but converges very slowly (order of $n^{-{\frac {1}{4}}}$).[31] Other methods have been proposed but their behavior may differ between large and small samples.[32] Efficiency The efficiency of the sample median, measured as the ratio of the variance of the mean to the variance of the median, depends on the sample size and on the underlying population distribution. For a sample of size $N=2n+1$ from the normal distribution, the efficiency for large N is ${\frac {2}{\pi }}{\frac {N+2}{N}}$ The efficiency tends to ${\frac {2}{\pi }}$ as $N$ tends to infinity. In other words, the relative variance of the median will be $\pi /2\approx 1.57$, or 57% greater than the variance of the mean – the relative standard error of the median will be $(\pi /2)^{\frac {1}{2}}\approx 1.25$, or 25% greater than the standard error of the mean, $\sigma /{\sqrt {n}}$ (see also section #Sampling distribution above.).[33] Other estimators For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median.[34] If data is represented by a statistical model specifying a particular family of probability distributions, then estimates of the median can be obtained by fitting that family of probability distributions to the data and calculating the theoretical median of the fitted distribution. Pareto interpolation is an application of this when the population is assumed to have a Pareto distribution. Multivariate median Previously, this article discussed the univariate median, when the sample or population had one dimension. When the dimension is two or higher, there are multiple concepts that extend the definition of the univariate median; each such multivariate median agrees with the univariate median when the dimension is exactly one.[34][35][36][37] Marginal median The marginal median is defined for vectors defined with respect to a fixed set of coordinates. A marginal median is defined to be the vector whose components are univariate medians.
The marginal median is easy to compute, and its properties were studied by Puri and Sen.[34][38] Geometric median The geometric median of a discrete set of sample points $x_{1},\ldots ,x_{N}$ in a Euclidean space is the point[note 1] minimizing the sum of distances to the sample points. ${\hat {\mu }}={\underset {\mu \in \mathbb {R} ^{m}}{\operatorname {arg\,min} }}\sum _{n=1}^{N}\left\|\mu -x_{n}\right\|_{2}$ In contrast to the marginal median, the geometric median is equivariant with respect to Euclidean similarity transformations such as translations and rotations. Median in all directions If the marginal medians for all coordinate systems coincide, then their common location may be termed the "median in all directions".[40] This concept is relevant to voting theory on account of the median voter theorem. When it exists, the median in all directions coincides with the geometric median (at least for discrete distributions). Centerpoint This section is an excerpt from Centerpoint (geometry). In statistics and computational geometry, the notion of centerpoint is a generalization of the median to data in higher-dimensional Euclidean space. Given a set of points in d-dimensional space, a centerpoint of the set is a point such that any hyperplane that goes through that point divides the set of points into two roughly equal subsets: the smaller part should have at least a 1/(d + 1) fraction of the points. Like the median, a centerpoint need not be one of the data points. Every non-empty set of points (with no duplicates) has at least one centerpoint. Other median-related concepts Interpolated median When dealing with a discrete variable, it is sometimes useful to regard the observed values as being midpoints of underlying continuous intervals. An example of this is a Likert scale, on which opinions or preferences are expressed on a scale with a set number of possible responses. If the scale consists of the positive integers, an observation of 3 might be regarded as representing the interval from 2.50 to 3.50. It is possible to estimate the median of the underlying variable. If, say, 22% of the observations are of value 2 or below and 55.0% are of 3 or below (so 33% have the value 3), then the median $m$ is 3 since the median is the smallest value of $x$ for which $F(x)$ is greater than a half. But the interpolated median is somewhere between 2.50 and 3.50. First we add half of the interval width $w$ to the median to get the upper bound of the median interval. Then we subtract that proportion of the interval width which equals the proportion of the 33% which lies above the 50% mark. In other words, we split up the interval width pro rata to the numbers of observations. In this case, the 33% is split into 28% below the median and 5% above it, so we subtract 5/33 of the interval width from the upper bound of 3.50 to give an interpolated median of 3.35.
More formally, if the values $f(x)$ are known, the interpolated median can be calculated from $m_{\text{int}}=m+w\left[{\frac {1}{2}}-{\frac {F(m)-{\frac {1}{2}}}{f(m)}}\right].$ Alternatively, if in an observed sample there are $k$ scores above the median category, $j$ scores in it and $i$ scores below it then the interpolated median is given by $m_{\text{int}}=m+{\frac {w}{2}}\left[{\frac {k-i}{j}}\right].$ Pseudo-median Main article: Pseudomedian For univariate distributions that are symmetric about one median, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population median; for non-symmetric distributions, the Hodges–Lehmann estimator is a robust and highly efficient estimator of the population pseudo-median, which is the median of a symmetrized distribution and which is close to the population median.[41] The Hodges–Lehmann estimator has been generalized to multivariate distributions.[42] Variants of regression The Theil–Sen estimator is a method for robust linear regression based on finding medians of slopes.[43] Median filter The median filter is an important tool of image processing that can effectively remove salt-and-pepper noise from grayscale images. Cluster analysis In cluster analysis, the k-medians clustering algorithm provides a way of defining clusters, in which the k-means criterion of minimising the sum of squared distances from each point to its cluster mean is replaced by minimising the sum of distances from each point to its cluster median. Median–median line This is a method of robust regression. The idea dates back to Wald in 1940, who suggested dividing a set of bivariate data into two halves depending on the value of the independent parameter $x$: a left half with values less than the median and a right half with values greater than the median.[44] He suggested taking the means of the dependent $y$ and independent $x$ variables of the left and the right halves and estimating the slope of the line joining these two points. The line could then be adjusted to fit the majority of the points in the data set. Nair and Shrivastava in 1942 suggested a similar idea but instead advocated dividing the sample into three equal parts before calculating the means of the subsamples.[45] Brown and Mood in 1951 proposed the idea of using the medians of two subsamples rather than the means.[46] Tukey combined these ideas and recommended dividing the sample into three equal-size subsamples and estimating the line based on the medians of the subsamples.[47] Median-unbiased estimators Main article: Bias of an estimator § Median-unbiased estimators Any mean-unbiased estimator minimizes the risk (expected loss) with respect to the squared-error loss function, as observed by Gauss. A median-unbiased estimator minimizes the risk with respect to the absolute-deviation loss function, as observed by Laplace. Other loss functions are used in statistical theory, particularly in robust statistics. The theory of median-unbiased estimators was revived by George W. Brown in 1947:[48] An estimate of a one-dimensional parameter θ will be said to be median-unbiased if, for fixed θ, the median of the distribution of the estimate is at the value θ; i.e., the estimate underestimates just as often as it overestimates. This requirement seems for most purposes to accomplish as much as the mean-unbiased requirement and has the additional property that it is invariant under one-to-one transformation. — page 584
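Brown's notion of median-unbiasedness ("underestimates just as often as it overestimates") can be illustrated by simulation. For odd-sized samples from a continuous distribution, the sample median over- and under-shoots the population median equally often, while the sample mean, although mean-unbiased, over- and under-shoots the population mean with unequal frequency when the population is skewed. A sketch (Python/NumPy; the exponential example and sample size are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)
lam, n, reps = 1.0, 11, 200_000          # small odd samples from Exp(1)
x = rng.exponential(scale=1 / lam, size=(reps, n))

true_median = np.log(2) / lam
sample_medians = np.median(x, axis=1)
# Median-unbiasedness: over- and under-estimation equally often.
print((sample_medians > true_median).mean())   # ~0.5

true_mean = 1 / lam
sample_means = x.mean(axis=1)
# The sample mean is mean-unbiased, but for this skewed population it
# under-estimates the true mean more often than it over-estimates it.
print((sample_means > true_mean).mean())       # noticeably below 0.5
```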
Further properties of median-unbiased estimators have been reported.[49][50][51][52] Median-unbiased estimators are invariant under one-to-one transformations. There are methods of constructing median-unbiased estimators that are optimal (in a sense analogous to the minimum-variance property for mean-unbiased estimators). Such constructions exist for probability distributions having monotone likelihood-functions.[53][54] One such procedure is an analogue of the Rao–Blackwell procedure for mean-unbiased estimators: The procedure holds for a smaller class of probability distributions than does the Rao–Blackwell procedure but for a larger class of loss functions.[55] History Scientific researchers in the ancient Near East appear not to have used summary statistics at all, instead choosing values that offered maximal consistency with a broader theory that integrated a wide variety of phenomena.[56] Within the Mediterranean (and, later, European) scholarly community, statistics like the mean are fundamentally a medieval and early modern development. (The history of the median outside Europe and its predecessors remains relatively unstudied.) The idea of the median appeared in the 6th century in the Talmud, in order to fairly analyze divergent appraisals.[57][58] However, the concept did not spread to the broader scientific community. Instead, the closest ancestor of the modern median is the mid-range, invented by Al-Biruni.[59]: 31 [60] Transmission of his work to later scholars is unclear. He applied his technique to assaying currency metals, but, after he published his work, most assayers still adopted the most unfavorable value from their results, lest they appear to cheat.[59]: 35–8  [61] However, increased navigation at sea during the Age of Discovery meant that ship's navigators increasingly had to attempt to determine latitude in unfavorable weather against hostile shores, leading to renewed interest in summary statistics. Whether rediscovered or independently invented, the mid-range is recommended to nautical navigators in Harriot's "Instructions for Raleigh's Voyage to Guiana, 1595".[59]: 45–8  The idea of the median may have first appeared in Edward Wright's 1599 book Certaine Errors in Navigation in a section about compass navigation.[62] Wright was reluctant to discard measured values, and may have felt that the median — incorporating a greater proportion of the dataset than the mid-range — was more likely to be correct. However, Wright did not give examples of his technique's use, making it hard to verify that he described the modern notion of median.[56][60][note 2] The median (in the context of probability) certainly appeared in the correspondence of Christiaan Huygens, but as an example of a statistic that was inappropriate for actuarial practice.[56] The earliest recommendation of the median dates to 1757, when Roger Joseph Boscovich developed a regression method based on the L1 norm and therefore implicitly on the median.[56][63] In 1774, Laplace made this desire explicit: he suggested the median be used as the standard estimator of the value of a posterior PDF. The specific criterion was to minimize the expected magnitude of the error; $|\alpha -\alpha ^{*}|$ where $\alpha ^{*}$ is the estimate and $\alpha $ is the true value.
To this end, Laplace determined the distributions of both the sample mean and the sample median in the early 1800s.[26][64] However, a decade later, Gauss and Legendre developed the least squares method, which minimizes $(\alpha -\alpha ^{*})^{2}$ to obtain the mean. Within the context of regression, Gauss and Legendre's innovation offers vastly easier computation. Consequently, Laplace's proposal was generally rejected until the rise of computing devices 150 years later (and is still a relatively uncommon algorithm).[65] Antoine Augustin Cournot in 1843 was the first[66] to use the term median (valeur médiane) for the value that divides a probability distribution into two equal halves. Gustav Theodor Fechner used the median (Centralwerth) in sociological and psychological phenomena.[67] It had earlier been used only in astronomy and related fields. Fechner popularized the median in the formal analysis of data, although it had been used previously by Laplace,[67] and the median appeared in a textbook by F. Y. Edgeworth.[68] Francis Galton used the English term median in 1881,[69][70] having earlier used the terms middle-most value in 1869, and the medium in 1880.[71][72] Statisticians encouraged the use of the median intensely throughout the 19th century for its intuitive clarity and ease of manual computation. However, the notion of median does not lend itself to the theory of higher moments as well as the arithmetic mean does, and is much harder to compute by computer. As a result, the median was steadily supplanted as a notion of generic average by the arithmetic mean during the 20th century.[56][60] See also • Absolute deviation – Difference between a variable's observed value and a reference value • Bias of an estimator – Difference between an estimator's expected value and a parameter's true value • Central tendency – Statistical value representing the center or average of a distribution • Concentration of measure – Statistical parameter for Lipschitz functions • Median graph – Graph with a median for each three vertices • Median of medians – Algorithm to calculate an approximate median in linear time • Median search – Method for finding the kth smallest value • Median slope – Statistical method for fitting a line • Median voter theory • Medoid – Representative object of a data set or a cluster whose sum of dissimilarities to all the objects in the cluster is minimal; a generalization of the median to higher dimensions Notes 1. The geometric median is unique unless the sample is collinear.[39] 2. Subsequent scholars appear to concur with Eisenhart that Boroughs' 1580 figures, while suggestive of the median, in fact describe an arithmetic mean.[59]: 62–3  Boroughs is mentioned in no other work. References 1. Weisstein, Eric W. "Statistical Median". MathWorld. 2. Simon, Laura J.; "Descriptive statistics" Archived 2010-07-30 at the Wayback Machine, Statistical Education Resource Kit, Pennsylvania State Department of Statistics 3. Derek Bissell (1994). Statistical Methods for Spc and Tqm. CRC Press. pp. 26–. ISBN 978-0-412-39440-9. Retrieved 25 February 2013. 4. David J. Sheskin (27 August 2003).
Handbook of Parametric and Nonparametric Statistical Procedures (Third ed.). CRC Press. p. 7. ISBN 978-1-4200-3626-8. Retrieved 25 February 2013. 5. Paul T. von Hippel (2005). "Mean, Median, and Skew: Correcting a Textbook Rule". Journal of Statistics Education. 13 (2). Archived from the original on 2008-10-14. Retrieved 2015-06-18. 6. Robson, Colin (1994). Experiment, Design and Statistics in Psychology. Penguin. pp. 42–45. ISBN 0-14-017648-9. 7. Williams, D. (2001). Weighing the Odds. Cambridge University Press. p. 165. ISBN 052100618X. 8. Maindonald, John; Braun, W. John (2010-05-06). Data Analysis and Graphics Using R: An Example-Based Approach. Cambridge University Press. p. 104. ISBN 978-1-139-48667-5. 9. "AP Statistics Review - Density Curves and the Normal Distributions". Archived from the original on 8 April 2015. Retrieved 16 March 2015. 10. Newman, M. E. J. (2005). "Power laws, Pareto distributions and Zipf's law". Contemporary Physics. 46 (5): 323–351. arXiv:cond-mat/0412004. Bibcode:2005ConPh..46..323N. doi:10.1080/00107510500052444. S2CID 2871747. 11. Stroock, Daniel (2011). Probability Theory. Cambridge University Press. pp. 43. ISBN 978-0-521-13250-3. 12. DeGroot, Morris H. (1970). Optimal Statistical Decisions. McGraw-Hill Book Co., New York-London-Sydney. p. 232. ISBN 9780471680291. MR 0356303. 13. Stephen A. Book; Lawrence Sher (1979). "How close are the mean and the median?". The Two-Year College Mathematics Journal. 10 (3): 202–204. doi:10.2307/3026748. JSTOR 3026748. Retrieved 12 March 2022. 14. Warren Page; Vedula N. Murty (1982). "Nearness Relations Among Measures of Central Tendency and Dispersion: Part 1". The Two-Year College Mathematics Journal. 13 (5): 315–327. doi:10.1080/00494925.1982.11972639. Retrieved 12 March 2022. 15. O'Cinneide, Colm Art (1990). "The mean is within one standard deviation of any median". The American Statistician. 44 (4): 292–293. doi:10.1080/00031305.1990.10475743. Retrieved 12 March 2022. 16. Mallows, Colin (August 1991). "Another comment on O'Cinneide". The American Statistician. 45 (3): 257. doi:10.1080/00031305.1991.10475815. 17. Piché, Robert (2012). Random Vectors and Random Sequences. Lambert Academic Publishing. ISBN 978-3659211966. 18. Kemperman, Johannes H. B. (1987). Dodge, Yadolah (ed.). "The median of a finite measure on a Banach space: Statistical data analysis based on the L1-norm and related methods". Papers from the First International Conference Held at Neuchâtel, August 31–September 4, 1987. Amsterdam: North-Holland Publishing Co.: 217–230. MR 0949228. 19. Milasevic, Philip; Ducharme, Gilles R. (1987). "Uniqueness of the spatial median". Annals of Statistics. 15 (3): 1332–1333. doi:10.1214/aos/1176350511. MR 0902264. 20. K. Van Steen Notes on probability and statistics 21. Basu, S.; Dasgupta, A. (1997). "The Mean, Median, and Mode of Unimodal Distributions: A Characterization". Theory of Probability and Its Applications. 41 (2): 210–223. doi:10.1137/S0040585X97975447. S2CID 54593178. 22. Merkle, M. (2005). "Jensen's inequality for medians". Statistics & Probability Letters. 71 (3): 277–281. doi:10.1016/j.spl.2004.11.010. 23. Alfred V. Aho and John E. Hopcroft and Jeffrey D. Ullman (1974). The Design and Analysis of Computer Algorithms. Reading/MA: Addison-Wesley. ISBN 0-201-00029-6. Here: Section 3.6 "Order Statistics", p.97-99, in particular Algorithm 3.6 and Theorem 3.9. 24. Bentley, Jon L.; McIlroy, M. Douglas (1993).
"Engineering a sort function". Software: Practice and Experience. 23 (11): 1249–1265. doi:10.1002/spe.4380231105. S2CID 8822797. 25. Rousseeuw, Peter J.; Bassett, Gilbert W. Jr. (1990). "The remedian: a robust averaging method for large data sets" (PDF). J. Amer. Statist. Assoc. 85 (409): 97–104. doi:10.1080/01621459.1990.10475311. 26. Stigler, Stephen (December 1973). "Studies in the History of Probability and Statistics. XXXII: Laplace, Fisher and the Discovery of the Concept of Sufficiency". Biometrika. 60 (3): 439–445. doi:10.1093/biomet/60.3.439. JSTOR 2334992. MR 0326872. 27. Rider, Paul R. (1960). "Variance of the median of small samples from several special populations". J. Amer. Statist. Assoc. 55 (289): 148–150. doi:10.1080/01621459.1960.10482056. 28. Efron, B. (1982). The Jackknife, the Bootstrap and other Resampling Plans. Philadelphia: SIAM. ISBN 0898711797. 29. Shao, J.; Wu, C. F. (1989). "A General Theory for Jackknife Variance Estimation". Ann. Stat. 17 (3): 1176–1197. doi:10.1214/aos/1176347263. JSTOR 2241717. 30. Efron, B. (1979). "Bootstrap Methods: Another Look at the Jackknife". Ann. Stat. 7 (1): 1–26. doi:10.1214/aos/1176344552. JSTOR 2958830. 31. Hall, P.; Martin, M. A. (1988). "Exact Convergence Rate of Bootstrap Quantile Variance Estimator". Probab Theory Related Fields. 80 (2): 261–268. doi:10.1007/BF00356105. S2CID 119701556. 32. Jiménez-Gamero, M. D.; Munoz-García, J.; Pino-Mejías, R. (2004). "Reduced bootstrap for the median". Statistica Sinica. 14 (4): 1179–1198. 33. Maindonald, John; John Braun, W. (2010-05-06). Data Analysis and Graphics Using R: An Example-Based Approach. Cambridge University Press. ISBN 9781139486675. 34. Hettmansperger, Thomas P.; McKean, Joseph W. (1998). Robust nonparametric statistical methods. Kendall's Library of Statistics. Vol. 5. London: Edward Arnold. ISBN 0-340-54937-8. MR 1604954. 35. Small, Christopher G. "A survey of multidimensional medians." International Statistical Review/Revue Internationale de Statistique (1990): 263–277. doi:10.2307/1403809 JSTOR 1403809 36. Niinimaa, A., and H. Oja. "Multivariate median." Encyclopedia of statistical sciences (1999). 37. Mosler, Karl. Multivariate Dispersion, Central Regions, and Depth: The Lift Zonoid Approach. Vol. 165. Springer Science & Business Media, 2012. 38. Puri, Madan L.; Sen, Pranab K.; Nonparametric Methods in Multivariate Analysis, John Wiley & Sons, New York, NY, 1971. (Reprinted by Krieger Publishing) 39. Vardi, Yehuda; Zhang, Cun-Hui (2000). "The multivariate L1-median and associated data depth". Proceedings of the National Academy of Sciences of the United States of America. 97 (4): 1423–1426 (electronic). Bibcode:2000PNAS...97.1423V. doi:10.1073/pnas.97.4.1423. MR 1740461. PMC 26449. PMID 10677477. 40. Davis, Otto A.; DeGroot, Morris H.; Hinich, Melvin J. (January 1972). "Social Preference Orderings and Majority Rule" (PDF). Econometrica. 40 (1): 147–157. doi:10.2307/1909727. JSTOR 1909727. The authors, working in a topic in which uniqueness is assumed, actually use the expression "unique median in all directions". 41. Pratt, William K.; Cooper, Ted J.; Kabir, Ihtisham (1985-07-11). Corbett, Francis J (ed.). "Pseudomedian Filter". Architectures and Algorithms for Digital Image Processing II. 0534: 34. Bibcode:1985SPIE..534...34P. doi:10.1117/12.946562. S2CID 173183609. 42. Oja, Hannu (2010). Multivariate nonparametric methods with R: An approach based on spatial signs and ranks. Lecture Notes in Statistics. Vol. 199. New York, NY: Springer. pp. xiv+232. 
doi:10.1007/978-1-4419-0468-3. ISBN 978-1-4419-0467-6. MR 2598854. 43. Wilcox, Rand R. (2001), "Theil–Sen estimator", Fundamentals of Modern Statistical Methods: Substantially Improving Power and Accuracy, Springer-Verlag, pp. 207–210, ISBN 978-0-387-95157-7. 44. Wald, A. (1940). "The Fitting of Straight Lines if Both Variables are Subject to Error" (PDF). Annals of Mathematical Statistics. 11 (3): 282–300. doi:10.1214/aoms/1177731868. JSTOR 2235677. 45. Nair, K. R.; Shrivastava, M. P. (1942). "On a Simple Method of Curve Fitting". Sankhyā: The Indian Journal of Statistics. 6 (2): 121–132. JSTOR 25047749. 46. Brown, G. W.; Mood, A. M. (1951). "On Median Tests for Linear Hypotheses". Proc Second Berkeley Symposium on Mathematical Statistics and Probability. Berkeley, CA: University of California Press. pp. 159–166. Zbl 0045.08606. 47. Tukey, J. W. (1977). Exploratory Data Analysis. Reading, MA: Addison-Wesley. ISBN 0201076160. 48. Brown, George W. (1947). "On Small-Sample Estimation". Annals of Mathematical Statistics. 18 (4): 582–585. doi:10.1214/aoms/1177730349. JSTOR 2236236. 49. Lehmann, Erich L. (1951). "A General Concept of Unbiasedness". Annals of Mathematical Statistics. 22 (4): 587–592. doi:10.1214/aoms/1177729549. JSTOR 2236928. 50. Birnbaum, Allan (1961). "A Unified Theory of Estimation, I". Annals of Mathematical Statistics. 32 (1): 112–135. doi:10.1214/aoms/1177705145. JSTOR 2237612. 51. van der Vaart, H. Robert (1961). "Some Extensions of the Idea of Bias". Annals of Mathematical Statistics. 32 (2): 436–447. doi:10.1214/aoms/1177705051. JSTOR 2237754. MR 0125674. 52. Pfanzagl, Johann; with the assistance of R. Hamböker (1994). Parametric Statistical Theory. Walter de Gruyter. ISBN 3-11-013863-8. MR 1291393. 53. Pfanzagl, Johann. "On optimal median unbiased estimators in the presence of nuisance parameters." The Annals of Statistics (1979): 187–193. 54. Brown, L. D.; Cohen, Arthur; Strawderman, W. E. (1976). "A Complete Class Theorem for Strict Monotone Likelihood Ratio With Applications". Ann. Statist. 4 (4): 712–722. doi:10.1214/aos/1176343543. 55. Page; Brown, L. D.; Cohen, Arthur; Strawderman, W. E. (1976). "A Complete Class Theorem for Strict Monotone Likelihood Ratio With Applications". Ann. Statist. 4 (4): 712–722. doi:10.1214/aos/1176343543. 56. Bakker, Arthur; Gravemeijer, Koeno P. E. (2006-06-01). "An Historical Phenomenology of Mean and Median". Educational Studies in Mathematics. 62 (2): 149–168. doi:10.1007/s10649-006-7099-8. ISSN 1573-0816. S2CID 143708116. 57. Adler, Dan (31 December 2014). "Talmud and Modern Economics". Jewish American and Israeli Issues. Archived from the original on 6 December 2015. Retrieved 22 February 2020. 58. Modern Economic Theory in the Talmud by Yisrael Aumann 59. Eisenhart, Churchill (24 August 1971). The Development of the Concept of the Best Mean of a Set of Measurements from Antiquity to the Present Day (PDF) (Speech). 131st Annual Meeting of the American Statistical Association. Colorado State University. 60. "How the Average Triumphed Over the Median". Priceonomics. 5 April 2016. Retrieved 2020-02-23. 61. Sangster, Alan (March 2021). "The Life and Works of Luca Pacioli (1446/7–1517), Humanist Educator". Abacus. 57 (1): 126–152. doi:10.1111/abac.12218. hdl:2164/16100. ISSN 0001-3072. S2CID 233917744. 62. Wright, Edward; Parsons, E. J. S.; Morris, W. F. (1939). "Edward Wright and His Work". Imago Mundi. 3: 61–71. doi:10.1080/03085693908591862. ISSN 0308-5694. JSTOR 1149920. 63. Stigler, S. M. (1986). 
The History of Statistics: The Measurement of Uncertainty Before 1900. Harvard University Press. ISBN 0674403401. 64. Laplace PS de (1818) Deuxième supplément à la Théorie Analytique des Probabilités, Paris, Courcier 65. Jaynes, E.T. (2007). Probability theory : the logic of science (5. print. ed.). Cambridge [u.a.]: Cambridge Univ. Press. p. 172. ISBN 978-0-521-59271-0. 66. Howarth, Richard (2017). Dictionary of Mathematical Geosciences: With Historical Notes. Springer. p. 374. 67. Keynes, J.M. (1921) A Treatise on Probability. Pt II Ch XVII §5 (p 201) (2006 reprint, Cosimo Classics, ISBN 9781596055308 : multiple other reprints) 68. Stigler, Stephen M. (2002). Statistics on the Table: The History of Statistical Concepts and Methods. Harvard University Press. pp. 105–7. ISBN 978-0-674-00979-0. 69. Galton F (1881) "Report of the Anthropometric Committee" pp 245–260. Report of the 51st Meeting of the British Association for the Advancement of Science 70. David, H. A. (1995). "First (?) Occurrence of Common Terms in Mathematical Statistics". The American Statistician. 49 (2): 121–133. doi:10.2307/2684625. ISSN 0003-1305. JSTOR 2684625. 71. encyclopediaofmath.org 72. personal.psu.edu External links • "Median (in statistics)", Encyclopedia of Mathematics, EMS Press, 2001 [1994] • Median as a weighted arithmetic mean of all Sample Observations • On-line calculator • Calculating the median • A problem involving the mean, the median, and the mode. • Weisstein, Eric W. "Statistical Median". MathWorld. • Python script for Median computations and income inequality metrics • Fast Computation of the Median by Successive Binning • 'Mean, median, mode and skewness', A tutorial devised for first-year psychology students at Oxford University, based on a worked example. • The Complex SAT Math Problem Even the College Board Got Wrong: Andrew Daniels in Popular Mechanics This article incorporates material from Median of a distribution on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License. 
Wikipedia
Variance-stabilizing transformation In applied statistics, a variance-stabilizing transformation is a data transformation that is specifically chosen either to simplify considerations in graphical exploratory data analysis or to allow the application of simple regression-based or analysis of variance techniques.[1] Overview The aim behind the choice of a variance-stabilizing transformation is to find a simple function ƒ to apply to values x in a data set to create new values y = ƒ(x) such that the variability of the values y is not related to their mean value. For example, suppose that the values x are realizations from different Poisson distributions: i.e. the distributions each have different mean values μ. Then, because for the Poisson distribution the variance is identical to the mean, the variance varies with the mean. However, if the simple variance-stabilizing transformation $y={\sqrt {x}}\,$ is applied, the sampling variance associated with each observation will be nearly constant: see Anscombe transform for details and some alternative transformations. While variance-stabilizing transformations are well known for certain parametric families of distributions, such as the Poisson and the binomial distribution, some types of data analysis proceed more empirically: for example by searching among power transformations to find a suitable fixed transformation. Alternatively, if data analysis suggests a functional form for the relation between variance and mean, this can be used to deduce a variance-stabilizing transformation.[2] Thus if, for a mean μ, $\operatorname {var} (X)=h(\mu ),\,$ a suitable basis for a variance-stabilizing transformation would be $y\propto \int ^{x}{\frac {1}{\sqrt {h(\mu )}}}\,d\mu ,$ where the arbitrary constant of integration and an arbitrary scaling factor can be chosen for convenience. Example: relative variance If X is a positive random variable and the variance is given as $h(\mu )=s^{2}\mu ^{2},$ then the standard deviation is proportional to the mean, which is called fixed relative error. In this case, the variance-stabilizing transformation is $y=\int ^{x}{\frac {d\mu }{\sqrt {s^{2}\mu ^{2}}}}={\frac {1}{s}}\ln(x)\propto \log(x)\,.$ That is, the variance-stabilizing transformation is the logarithmic transformation. Example: absolute plus relative variance If the variance is given as $h(\mu )=\sigma ^{2}+s^{2}\mu ^{2},$ then the variance is dominated by a fixed variance $\sigma ^{2}$ when |μ| is small enough and is dominated by the relative variance $s^{2}\mu ^{2}$ when |μ| is large enough. In this case, the variance-stabilizing transformation is $y=\int ^{x}{\frac {d\mu }{\sqrt {\sigma ^{2}+s^{2}\mu ^{2}}}}={\frac {1}{s}}\operatorname {asinh} {\frac {x}{\sigma /s}}\propto \operatorname {asinh} {\frac {x}{\lambda }}\,.$ That is, the variance-stabilizing transformation is the inverse hyperbolic sine of the scaled value x / λ for λ = σ / s. Relationship to the delta method Here, the delta method is presented in a rough way, but it is enough to see the relation with the variance-stabilizing transformations. For a more formal approach, see delta method. Let $X$ be a random variable, with $E[X]=\mu $ and $\operatorname {Var} (X)=\sigma ^{2}$. Define $Y=g(X)$, where $g$ is a regular function. A first-order Taylor approximation for $Y=g(X)$ is: $Y=g(X)\approx g(\mu )+g'(\mu )(X-\mu )$ From the equation above, we obtain: $E[Y]\approx g(\mu )$ and $\operatorname {Var} [Y]\approx \sigma ^{2}g'(\mu )^{2}$ This approximation method is called the delta method.
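As a quick numerical illustration of the Poisson example above (a sketch assuming NumPy; the means and sample sizes are arbitrary choices and not part of the original article), the raw variance grows with the mean while the variance of the square-root-transformed values stays close to 1/4:

    # Square-root transform of Poisson data: var(x) grows with the mean,
    # while var(sqrt(x)) stays roughly constant at about 1/4.
    import numpy as np

    rng = np.random.default_rng(0)
    for mu in [2.0, 10.0, 50.0, 200.0]:
        x = rng.poisson(mu, size=100_000)
        print(f"mu={mu:6.1f}  var(x)={x.var():8.2f}  var(sqrt(x))={np.sqrt(x).var():.3f}")

For very small means the stabilized variance drifts somewhat away from 1/4, which is one motivation for refinements such as the Anscombe transform $2{\sqrt {x+3/8}}$.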
Consider now a random variable $X$ such that $E[X]=\mu $ and $\operatorname {Var} [X]=h(\mu )$. Notice the relation between the variance and the mean, which implies, for example, heteroscedasticity in a linear model. Therefore, the goal is to find a function $g$ such that $Y=g(X)$ has a variance independent (at least approximately) of its expectation. Imposing the condition $\operatorname {Var} [Y]\approx h(\mu )g'(\mu )^{2}={\text{constant}}$, this equality implies the differential equation: ${\frac {dg}{d\mu }}={\frac {C}{\sqrt {h(\mu )}}}$ This ordinary differential equation has, by separation of variables, the following solution: $g(\mu )=\int {\frac {C\,d\mu }{\sqrt {h(\mu )}}}$ This last expression appeared for the first time in a 1947 paper by M. S. Bartlett.[3] References 1. Everitt, B. S. (2002). The Cambridge Dictionary of Statistics (2nd ed.). CUP. ISBN 0-521-81099-X. 2. Dodge, Y. (2003). The Oxford Dictionary of Statistical Terms. OUP. ISBN 0-19-920613-9. 3. Bartlett, M. S. (1947). "The Use of Transformations". Biometrics. 3: 39–52. doi:10.2307/3001536.
Wikipedia
Index of dispersion In probability theory and statistics, the index of dispersion,[1] dispersion index, coefficient of dispersion, relative variance, or variance-to-mean ratio (VMR), like the coefficient of variation, is a normalized measure of the dispersion of a probability distribution: it is a measure used to quantify whether a set of observed occurrences is clustered or dispersed compared to a standard statistical model. It is defined as the ratio of the variance $\sigma ^{2}$ to the mean $\mu $, $D={\sigma ^{2} \over \mu }.$ It is also known as the Fano factor, though this term is sometimes reserved for windowed data (the mean and variance are computed over a subpopulation), where the index of dispersion is used in the special case where the window is infinite. Windowing data is frequently done: the VMR is computed over various intervals in time or small regions in space, which may be called "windows", and the resulting statistic called the Fano factor. It is only defined when the mean $\mu $ is non-zero, and is generally only used for positive statistics, such as count data or time between events, or where the underlying distribution is assumed to be the exponential distribution or Poisson distribution. Terminology In this context, the observed dataset may consist of the times of occurrence of predefined events, such as earthquakes in a given region over a given magnitude, or of the locations in geographical space of plants of a given species. Details of such occurrences are first converted into counts of the numbers of events or occurrences in each of a set of equal-sized time- or space-regions. The above defines a dispersion index for counts.[2] A different definition applies for a dispersion index for intervals,[3] where the quantities treated are the lengths of the time-intervals between the events. Common usage is that "index of dispersion" means the dispersion index for counts. Interpretation Some distributions, most notably the Poisson distribution, have equal variance and mean, giving them a VMR = 1. The geometric distribution and the negative binomial distribution have VMR > 1, while the binomial distribution has VMR < 1, and the constant random variable has VMR = 0. This yields the following classification: • constant random variable: VMR = 0 (not dispersed) • binomial distribution: 0 < VMR < 1 (under-dispersed) • Poisson distribution: VMR = 1 • negative binomial distribution: VMR > 1 (over-dispersed) This can be considered analogous to the classification of conic sections by eccentricity; see Cumulants of particular probability distributions for details. The relevance of the index of dispersion is that it has a value of 1 when the probability distribution of the number of occurrences in an interval is a Poisson distribution. Thus the measure can be used to assess whether observed data can be modeled using a Poisson process. When the coefficient of dispersion is less than 1, a dataset is said to be "under-dispersed": this condition can relate to patterns of occurrence that are more regular than the randomness associated with a Poisson process. For instance, regular, periodic events will be under-dispersed. If the index of dispersion is larger than 1, a dataset is said to be over-dispersed.
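The following short simulation (a sketch assuming NumPy; the distributions and parameters are arbitrary illustrative choices, not taken from the article) shows how the sample VMR separates these cases:

    # Sample variance-to-mean ratios for simulated count data: close to 1
    # for Poisson counts, above 1 for negative-binomial (clustered) counts,
    # and below 1 for binomial counts.
    import numpy as np

    rng = np.random.default_rng(1)

    def vmr(counts):
        return counts.var(ddof=1) / counts.mean()

    poisson = rng.poisson(5.0, size=50_000)
    negbin = rng.negative_binomial(n=2, p=2 / 7, size=50_000)  # mean 5, theoretical VMR 3.5
    binom = rng.binomial(n=10, p=0.5, size=50_000)             # mean 5, theoretical VMR 0.5

    print("Poisson VMR:", round(vmr(poisson), 2))
    print("Negative binomial VMR:", round(vmr(negbin), 2))
    print("Binomial VMR:", round(vmr(binom), 2))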
A sample-based estimate of the dispersion index can be used to construct a formal statistical hypothesis test for the adequacy of the model that a series of counts follow a Poisson distribution.[4][5] In terms of the interval-counts, over-dispersion corresponds to there being more intervals with low counts and more intervals with high counts, compared to a Poisson distribution: in contrast, under-dispersion is characterised by there being more intervals having counts close to the mean count, compared to a Poisson distribution. The VMR is also a good measure of the degree of randomness of a given phenomenon. For example, this technique is commonly used in currency management. Example For randomly diffusing particles (Brownian motion), the distribution of the number of particles inside a given volume is Poissonian, i.e. VMR=1. Therefore, to assess whether a given spatial pattern (assuming it can be measured) is due purely to diffusion or whether some particle-particle interaction is involved: divide the space into patches, quadrats or sample units (SU), count the number of individuals in each patch or SU, and compute the VMR. VMRs significantly higher than 1 denote a clustered distribution, where a random walk is not enough to overcome the attractive inter-particle potential. History The first to discuss the use of a test to detect deviations from a Poisson or binomial distribution appears to have been Lexis in 1877. One of the tests he developed was the Lexis ratio. This index was first used in botany by Clapham in 1936. If the variates are Poisson distributed then the index of dispersion is distributed as a χ² statistic with n − 1 degrees of freedom when n is large and μ > 3.[6] For many cases of interest this approximation is accurate and Fisher in 1950 derived an exact test for it. Hoel studied the first four moments of its distribution.[7] He found that the approximation to the χ² statistic is reasonable if μ > 5. Skewed distributions For highly skewed distributions, it may be more appropriate to use a linear loss function, as opposed to a quadratic one. The analogous coefficient of dispersion in this case is the ratio of the average absolute deviation from the median to the median of the data,[8] or, in symbols: $CD={\frac {1}{n}}{\frac {\sum _{j}{|m-x_{j}|}}{m}}$ where n is the sample size, m is the sample median and the sum is taken over the whole sample. Iowa, New York and South Dakota use this linear coefficient of dispersion to estimate taxes due.[9][10][11] For a two-sample test in which the sample sizes are large, both samples have the same median, and differ in the dispersion around it, a confidence interval for the linear coefficient of dispersion is bounded below by ${\frac {t_{a}}{t_{b}}}\exp {\left(-{\sqrt {z_{\alpha }\left(\operatorname {var} \left[\log \left({\frac {t_{a}}{t_{b}}}\right)\right]\right)}}\right)}$ where tj is the mean absolute deviation of the jth sample and zα is the confidence interval length for a normal distribution of confidence α (e.g., for α = 0.05, zα = 1.96).[8] See also • Count data • Harmonic mean Similar ratios • Coefficient of variation, $\sigma /\mu $ • Standardized moment, $\mu _{k}/\sigma ^{k}$ • Fano factor, $\sigma _{W}^{2}/\mu _{W}$ (windowed VMR) • Signal-to-noise ratio, $\mu /\sigma $ (in signal processing) Notes 1. Cox & Lewis (1966) 2. Cox & Lewis (1966), p. 72 3. Cox & Lewis (1966), p. 71 4. Cox & Lewis (1966), p. 158 5. Upton & Cook (2006), under index of dispersion 6. Frome, E. L. (1982).
"Algorithm AS 171: Fisher's Exact Variance Test for the Poisson Distribution". Journal of the Royal Statistical Society, Series C. 31 (1): 67–71. doi:10.2307/2347079. JSTOR 2347079. 7. Hoel, P. G. (1943). "On Indices of Dispersion". Annals of Mathematical Statistics. 14 (2): 155–162. doi:10.1214/aoms/1177731457. JSTOR 2235818. 8. Bonett, DG; Seier, E (2006). "Confidence interval for a coefficient of dispersion in non-normal distributions". Biometrical Journal. 48 (1): 144–148. doi:10.1002/bimj.200410148. PMID 16544819. S2CID 33665632. 9. "Statistical Calculation Definitions for Mass Appraisal" (PDF). Iowa.gov. Archived from the original (PDF) on 11 November 2010. Median Ratio: The ratio located midway between the highest ratio and the lowest ratio when individual ratios for a class of realty are ranked in ascending or descending order. The median ratio is most frequently used to determine the level of assessment for a given class of real estate. 10. "Assessment equity in New York: Results from the 2010 market value survey". Archived from the original on 6 November 2012. 11. "Summary of the Assessment Process" (PDF). state.sd.us. South Dakota Department of Revenue - Property/Special Taxes Division. Archived from the original (PDF) on 10 May 2009. References • Cox, D. R.; Lewis, P. A. W. (1966). The Statistical Analysis of Series of Events. London: Methuen. • Upton, G.; Cook, I. (2006). Oxford Dictionary of Statistics (2nd ed.). Oxford University Press. ISBN 978-0-19-954145-4. Statistics • Outline • Index Descriptive statistics Continuous data Center • Mean • Arithmetic • Arithmetic-Geometric • Cubic • Generalized/power • Geometric • Harmonic • Heronian • Heinz • Lehmer • Median • Mode Dispersion • Average absolute deviation • Coefficient of variation • Interquartile range • Percentile • Range • Standard deviation • Variance Shape • Central limit theorem • Moments • Kurtosis • L-moments • Skewness Count data • Index of dispersion Summary tables • Contingency table • Frequency distribution • Grouped data Dependence • Partial correlation • Pearson product-moment correlation • Rank correlation • Kendall's τ • Spearman's ρ • Scatter plot Graphics • Bar chart • Biplot • Box plot • Control chart • Correlogram • Fan chart • Forest plot • Histogram • Pie chart • Q–Q plot • Radar chart • Run chart • Scatter plot • Stem-and-leaf display • Violin plot Data collection Study design • Effect size • Missing data • Optimal design • Population • Replication • Sample size determination • Statistic • Statistical power Survey methodology • Sampling • Cluster • Stratified • Opinion poll • Questionnaire • Standard error Controlled experiments • Blocking • Factorial experiment • Interaction • Random assignment • Randomized controlled trial • Randomized experiment • Scientific control Adaptive designs • Adaptive clinical trial • Stochastic approximation • Up-and-down designs Observational studies • Cohort study • Cross-sectional study • Natural experiment • Quasi-experiment Statistical inference Statistical theory • Population • Statistic • Probability distribution • Sampling distribution • Order statistic • Empirical distribution • Density estimation • Statistical model • Model specification • Lp space • Parameter • location • scale • shape • Parametric family • Likelihood (monotone) • Location–scale family • Exponential family • Completeness • Sufficiency • Statistical functional • Bootstrap • U • V • Optimal decision • loss function • Efficiency • Statistical distance • divergence • Asymptotics • Robustness 
Wikipedia
Variational analysis In mathematics, the term variational analysis usually denotes the combination and extension of methods from convex optimization and the classical calculus of variations to a more general theory.[1] This includes the more general problems of optimization theory, including topics in set-valued analysis, e.g. generalized derivatives. In the Mathematics Subject Classification scheme (MSC2010), the field of "Set-valued and variational analysis" is coded by "49J53".[2] History While this area of mathematics has a long history, the first use of the term "Variational analysis" in this sense was in an eponymous book by R. Tyrrell Rockafellar and Roger J-B Wets.[1] Existence of minima A classical result is that a lower semicontinuous function on a compact set attains its minimum. Results from variational analysis, such as Ekeland's variational principle, allow this result to be extended to lower semicontinuous functions on non-compact sets, provided that the function is bounded below, at the cost of adding a small perturbation to the function. Generalized derivatives The classical Fermat's theorem says that if a differentiable function attains its minimum at a point, and that point is an interior point of its domain, then its derivative must be zero at that point. For problems where a smooth function must be minimized subject to constraints which can be expressed in the form of other smooth functions being equal to zero, the method of Lagrange multipliers, another classical result, gives necessary conditions in terms of the derivatives of the function. The ideas of these classical results can be extended to nondifferentiable convex functions by generalizing the notion of derivative to that of the subderivative. Further generalizations of the notion of derivative, such as the Clarke generalized gradient, allow the results to be extended to nonsmooth locally Lipschitz functions.[3] See also • Convex analysis – branch of mathematics devoted to the study of properties of convex functions and convex sets • Functional analysis – Area of mathematics • Oriented projective geometry Citations 1. Rockafellar & Wets 2009. 2. "49J53 Set-valued and variational analysis". 5 July 2010. 3. Frank H. Clarke, Optimization and Nonsmooth Analysis, SIAM, 1990. References • Rockafellar, R. Tyrrell; Wets, Roger J.-B. (26 June 2009). Variational Analysis. Grundlehren der mathematischen Wissenschaften. Vol. 317. Berlin New York: Springer Science & Business Media. ISBN 9783642024313. OCLC 883392544.
External links • Media related to Variational analysis at Wikimedia Commons
Wikipedia
Variational bicomplex In mathematics, the Lagrangian theory on fiber bundles is globally formulated in algebraic terms of the variational bicomplex, without appealing to the calculus of variations. For instance, this is the case of classical field theory on fiber bundles (covariant classical field theory). The variational bicomplex is a cochain complex of the differential graded algebra of exterior forms on jet manifolds of sections of a fiber bundle. Lagrangians and Euler–Lagrange operators on a fiber bundle are defined as elements of this bicomplex. Cohomology of the variational bicomplex leads to the global first variational formula and Noether's first theorem. Extended to the Lagrangian theory of even and odd fields on graded manifolds, the variational bicomplex provides a strict mathematical formulation of classical field theory in the general case of reducible degenerate Lagrangians and the Lagrangian BRST theory. See also • Calculus of variations • Lagrangian system • Jet bundle References • Takens, Floris (1979), "A global version of the inverse problem of the calculus of variations", Journal of Differential Geometry, 14 (4): 543–562, doi:10.4310/jdg/1214435235, ISSN 0022-040X, MR 0600611, S2CID 118169017 • Anderson, I., "Introduction to variational bicomplex", Contemp. Math. 132 (1992) 51. • Barnich, G., Brandt, F., Henneaux, M., "Local BRST cohomology", Phys. Rep. 338 (2000) 439. • Giachetta, G., Mangiarotti, L., Sardanashvily, G., Advanced Classical Field Theory, World Scientific, 2009, ISBN 978-981-283-895-7. External links • Dragon, N., BRS symmetry and cohomology, arXiv:hep-th/9602163 • Sardanashvily, G., Graded infinite-order jet manifolds, Int. J. Geom. Methods Mod. Phys. 4 (2007) 1335; arXiv:0708.2434
Wikipedia
Calculus of variations The calculus of variations (or variational calculus) is a field of mathematical analysis that uses variations, which are small changes in functions and functionals, to find maxima and minima of functionals: mappings from a set of functions to the real numbers.[lower-alpha 1] Functionals are often expressed as definite integrals involving functions and their derivatives. Functions that maximize or minimize functionals may be found using the Euler–Lagrange equation of the calculus of variations. A simple example of such a problem is to find the curve of shortest length connecting two points. If there are no constraints, the solution is a straight line between the points. However, if the curve is constrained to lie on a surface in space, then the solution is less obvious, and possibly many solutions may exist. Such solutions are known as geodesics. A related problem is posed by Fermat's principle: light follows the path of shortest optical length connecting two points, which depends upon the material of the medium. One corresponding concept in mechanics is the principle of least/stationary action. Many important problems involve functions of several variables. Solutions of boundary value problems for the Laplace equation satisfy Dirichlet's principle. Plateau's problem requires finding a surface of minimal area that spans a given contour in space: a solution can often be found by dipping a frame in soapy water.
Although such experiments are relatively easy to perform, their mathematical formulation is far from simple: there may be more than one locally minimizing surface, and they may have non-trivial topology. History The calculus of variations may be said to begin with Newton's minimal resistance problem in 1687, followed by the brachistochrone curve problem raised by Johann Bernoulli (1696).[2] It immediately occupied the attention of Jakob Bernoulli and the Marquis de l'Hôpital, but Leonhard Euler first elaborated the subject, beginning in 1733. Lagrange was influenced by Euler's work to contribute significantly to the theory. After Euler saw the 1755 work of the 19-year-old Lagrange, Euler dropped his own partly geometric approach in favor of Lagrange's purely analytic approach and renamed the subject the calculus of variations in his 1756 lecture Elementa Calculi Variationum.[3][4][lower-alpha 2] Legendre (1786) laid down a method, not entirely satisfactory, for the discrimination of maxima and minima. Isaac Newton and Gottfried Leibniz also gave some early attention to the subject.[5] To this discrimination Vincenzo Brunacci (1810), Carl Friedrich Gauss (1829), Siméon Poisson (1831), Mikhail Ostrogradsky (1834), and Carl Jacobi (1837) have been among the contributors. An important general work is that of Sarrus (1842) which was condensed and improved by Cauchy (1844). Other valuable treatises and memoirs have been written by Strauch (1849), Jellett (1850), Otto Hesse (1857), Alfred Clebsch (1858), and Lewis Buffett Carll (1885), but perhaps the most important work of the century is that of Weierstrass. His celebrated course on the theory is epoch-making, and it may be asserted that he was the first to place it on a firm and unquestionable foundation. The 20th and 23rd Hilbert problems, published in 1900, encouraged further development.[5] In the 20th century David Hilbert, Oskar Bolza, Gilbert Ames Bliss, Emmy Noether, Leonida Tonelli, Henri Lebesgue and Jacques Hadamard among others made significant contributions.[5] Marston Morse applied calculus of variations in what is now called Morse theory.[6] Lev Pontryagin, Ralph Rockafellar and F. H. Clarke developed new mathematical tools for the calculus of variations in optimal control theory.[6] The dynamic programming of Richard Bellman is an alternative to the calculus of variations.[7][8][9][lower-alpha 3] Extrema The calculus of variations is concerned with the maxima or minima (collectively called extrema) of functionals. A functional maps functions to scalars, so functionals have been described as "functions of functions." Functionals have extrema with respect to the elements $y$ of a given function space defined over a given domain. A functional $J[y]$ is said to have an extremum at the function $f$ if $\Delta J=J[y]-J[f]$ has the same sign for all $y$ in an arbitrarily small neighborhood of $f.$[lower-alpha 4] The function $f$ is called an extremal function or extremal.[lower-alpha 5] The extremum $J[f]$ is called a local maximum if $\Delta J\leq 0$ everywhere in an arbitrarily small neighborhood of $f,$ and a local minimum if $\Delta J\geq 0$ there.
For a function space of continuous functions, extrema of corresponding functionals are called strong extrema or weak extrema, depending on whether the first derivatives of the continuous functions are respectively all continuous or not.[11] Both strong and weak extrema of functionals are for a space of continuous functions but strong extrema have the additional requirement that the first derivatives of the functions in the space be continuous. Thus a strong extremum is also a weak extremum, but the converse may not hold. Finding strong extrema is more difficult than finding weak extrema.[12] An example of a necessary condition that is used for finding weak extrema is the Euler–Lagrange equation.[13][lower-alpha 6] Euler–Lagrange equation Main article: Euler–Lagrange equation Finding the extrema of functionals is similar to finding the maxima and minima of functions. The maxima and minima of a function may be located by finding the points where its derivative vanishes (i.e., is equal to zero). The extrema of functionals may be obtained by finding functions for which the functional derivative is equal to zero. This leads to solving the associated Euler–Lagrange equation.[lower-alpha 7] Consider the functional $J[y]=\int _{x_{1}}^{x_{2}}L\left(x,y(x),y'(x)\right)\,dx\,.$ where • $x_{1},x_{2}$ are constants, • $y(x)$ is twice continuously differentiable, • $y'(x)={\frac {dy}{dx}},$ • $L\left(x,y(x),y'(x)\right)$ is twice continuously differentiable with respect to its arguments $x,y,$ and $y'.$ If the functional $J[y]$ attains a local minimum at $f,$ and $\eta (x)$ is an arbitrary function that has at least one derivative and vanishes at the endpoints $x_{1}$ and $x_{2},$ then for any number $\varepsilon $ close to 0, $J[f]\leq J[f+\varepsilon \eta ]\,.$ The term $\varepsilon \eta $ is called the variation of the function $f$ and is denoted by $\delta f.$[1][lower-alpha 8] Substituting $f+\varepsilon \eta $ for $y$ in the functional $J[y],$ the result is a function of $\varepsilon ,$ $\Phi (\varepsilon )=J[f+\varepsilon \eta ]\,.$ Since the functional $J[y]$ has a minimum for $y=f$ the function $\Phi (\varepsilon )$ has a minimum at $\varepsilon =0$ and thus,[lower-alpha 9] $\Phi '(0)\equiv \left.{\frac {d\Phi }{d\varepsilon }}\right|_{\varepsilon =0}=\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx=0\,.$ Taking the total derivative of $L\left[x,y,y'\right],$ where $y=f+\varepsilon \eta $ and $y'=f'+\varepsilon \eta '$ are considered as functions of $\varepsilon $ rather than $x,$ yields ${\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}{\frac {dy}{d\varepsilon }}+{\frac {\partial L}{\partial y'}}{\frac {dy'}{d\varepsilon }}$ and because ${\frac {dy}{d\varepsilon }}=\eta $ and ${\frac {dy'}{d\varepsilon }}=\eta ',$ ${\frac {dL}{d\varepsilon }}={\frac {\partial L}{\partial y}}\eta +{\frac {\partial L}{\partial y'}}\eta '.$ Therefore, ${\begin{aligned}\int _{x_{1}}^{x_{2}}\left.{\frac {dL}{d\varepsilon }}\right|_{\varepsilon =0}dx&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta +{\frac {\partial L}{\partial f'}}\eta '\right)\,dx\\&=\int _{x_{1}}^{x_{2}}{\frac {\partial L}{\partial f}}\eta \,dx+\left.{\frac {\partial L}{\partial f'}}\eta \right|_{x_{1}}^{x_{2}}-\int _{x_{1}}^{x_{2}}\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\,dx\\&=\int _{x_{1}}^{x_{2}}\left({\frac {\partial L}{\partial f}}\eta -\eta {\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx\\\end{aligned}}$ where $L\left[x,y,y'\right]\to L\left[x,f,f'\right]$ 
when $\varepsilon =0$ and we have used integration by parts on the second term. The second term on the second line vanishes because $\eta =0$ at $x_{1}$ and $x_{2}$ by definition. Also, as previously mentioned the left side of the equation is zero so that $\int _{x_{1}}^{x_{2}}\eta (x)\left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\,dx=0\,.$ According to the fundamental lemma of calculus of variations, the part of the integrand in parentheses is zero, i.e. ${\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0$ which is called the Euler–Lagrange equation. The left hand side of this equation is called the functional derivative of $J[f]$ and is denoted $\delta J/\delta f(x).$ In general this gives a second-order ordinary differential equation which can be solved to obtain the extremal function $f(x).$ The Euler–Lagrange equation is a necessary, but not sufficient, condition for an extremum $J[f].$ A sufficient condition for a minimum is given in the section Variations and sufficient condition for a minimum. Example In order to illustrate this process, consider the problem of finding the extremal function $y=f(x),$ which is the shortest curve that connects two points $\left(x_{1},y_{1}\right)$ and $\left(x_{2},y_{2}\right).$ The arc length of the curve is given by $A[y]=\int _{x_{1}}^{x_{2}}{\sqrt {1+[y'(x)]^{2}}}\,dx\,,$ with $y'(x)={\frac {dy}{dx}}\,,\ \ y_{1}=f(x_{1})\,,\ \ y_{2}=f(x_{2})\,.$ Note that assuming y is a function of x loses generality; ideally both should be a function of some other parameter. This approach is good solely for instructive purposes. The Euler–Lagrange equation will now be used to find the extremal function $f(x)$ that minimizes the functional $A[y].$ ${\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0$ with $L={\sqrt {1+[f'(x)]^{2}}}\,.$ Since $f$ does not appear explicitly in $L,$ the first term in the Euler–Lagrange equation vanishes for all $f(x)$ and thus, ${\frac {d}{dx}}{\frac {\partial L}{\partial f'}}=0\,.$ Substituting for $L$ and taking the derivative, ${\frac {d}{dx}}\ {\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}\ =0\,.$ Thus ${\frac {f'(x)}{\sqrt {1+[f'(x)]^{2}}}}=c\,,$ for some constant $c.$ Then ${\frac {[f'(x)]^{2}}{1+[f'(x)]^{2}}}=c^{2}\,,$ where $0\leq c^{2}<1.$ Solving, we get $[f'(x)]^{2}={\frac {c^{2}}{1-c^{2}}}$ which implies that $f'(x)=m$ is a constant and therefore that the shortest curve that connects two points $\left(x_{1},y_{1}\right)$ and $\left(x_{2},y_{2}\right)$ is $f(x)=mx+b\qquad {\text{with}}\ \ m={\frac {y_{2}-y_{1}}{x_{2}-x_{1}}}\quad {\text{and}}\quad b={\frac {x_{2}y_{1}-x_{1}y_{2}}{x_{2}-x_{1}}}$ and we have thus found the extremal function $f(x)$ that minimizes the functional $A[y]$ so that $A[f]$ is a minimum. The equation for a straight line is $y=f(x).$ In other words, the shortest distance between two points is a straight line.[lower-alpha 10] Beltrami's identity In physics problems it may be the case that ${\frac {\partial L}{\partial x}}=0,$ meaning the integrand is a function of $f(x)$ and $f'(x)$ but $x$ does not appear separately. In that case, the Euler–Lagrange equation can be simplified to the Beltrami identity[16] $L-f'{\frac {\partial L}{\partial f'}}=C\,,$ where $C$ is a constant. 
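As a quick symbolic check (an illustrative sketch assuming SymPy, not part of the original article; the symbol fp stands in for $f'(x)$), applying Beltrami's identity to the arc-length Lagrangian of the example above reproduces the straight-line conclusion:

    # Beltrami's identity for L = sqrt(1 + f'^2): the conserved quantity
    # L - f' * dL/df' simplifies to 1/sqrt(1 + f'^2), so setting it equal
    # to a constant forces f' itself to be constant, i.e. straight lines.
    import sympy as sp

    fp = sp.symbols("fp", real=True)              # fp plays the role of f'(x)
    L = sp.sqrt(1 + fp**2)
    print(sp.simplify(L - fp * sp.diff(L, fp)))   # 1/sqrt(fp**2 + 1)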
The left-hand side of Beltrami's identity is the Legendre transformation of $L$ with respect to $f'(x).$ The intuition behind this result is that, if the variable $x$ is actually time, then the statement ${\frac {\partial L}{\partial x}}=0$ implies that the Lagrangian is time-independent. By Noether's theorem, there is an associated conserved quantity. In this case, this quantity is the Hamiltonian, the Legendre transform of the Lagrangian, which (often) coincides with the energy of the system. This is (minus) the constant in Beltrami's identity. Euler–Poisson equation If $S$ depends on higher derivatives of $y(x),$ that is, if $S=\int _{a}^{b}f(x,y(x),y'(x),\dots ,y^{(n)}(x))dx,$ then $y$ must satisfy the Euler–Poisson equation,[17] ${\frac {\partial f}{\partial y}}-{\frac {d}{dx}}\left({\frac {\partial f}{\partial y'}}\right)+\dots +(-1)^{n}{\frac {d^{n}}{dx^{n}}}\left[{\frac {\partial f}{\partial y^{(n)}}}\right]=0.$ Du Bois-Reymond's theorem The discussion thus far has assumed that extremal functions possess two continuous derivatives, although the existence of the integral $J$ requires only first derivatives of trial functions. The condition that the first variation vanishes at an extremal may be regarded as a weak form of the Euler–Lagrange equation. The theorem of Du Bois-Reymond asserts that this weak form implies the strong form. If $L$ has continuous first and second derivatives with respect to all of its arguments, and if ${\frac {\partial ^{2}L}{\partial f'^{2}}}\neq 0,$ then $f$ has two continuous derivatives, and it satisfies the Euler–Lagrange equation. Lavrentiev phenomenon Hilbert was the first to give good conditions for the Euler–Lagrange equations to give a stationary solution. Within a convex area and a positive thrice differentiable Lagrangian the solutions are composed of a countable collection of sections that either go along the boundary or satisfy the Euler–Lagrange equations in the interior. However Lavrentiev in 1926 showed that there are circumstances where there is no optimum solution but one can be approached arbitrarily closely by increasing numbers of sections. The Lavrentiev Phenomenon identifies a difference in the infimum of a minimization problem across different classes of admissible functions. For instance the following problem, presented by Manià in 1934:[18] $L[x]=\int _{0}^{1}(x^{3}-t)^{2}x'^{6}\,dt,$ ${A}=\{x\in W^{1,1}(0,1):x(0)=0,\ x(1)=1\}.$ Clearly, $x(t)=t^{\frac {1}{3}}$ minimizes the functional, but we find that any function $x\in W^{1,\infty }$ gives a value bounded away from the infimum. Examples (in one dimension) are traditionally manifested across $W^{1,1}$ and $W^{1,\infty },$ but Ball and Mizel[19] procured the first functional that displayed Lavrentiev's Phenomenon across $W^{1,p}$ and $W^{1,q}$ for $1\leq p<q<\infty .$ There are several results that give criteria under which the phenomenon does not occur - for instance 'standard growth', a Lagrangian with no dependence on the second variable, or an approximating sequence satisfying Cesari's Condition (D) - but results are often particular, and applicable to a small class of functionals.
Connected with the Lavrentiev Phenomenon is the repulsion property: any functional displaying Lavrentiev's Phenomenon will display the weak repulsion property.[20] Functions of several variables For example, if $\varphi (x,y)$ denotes the displacement of a membrane above the domain $D$ in the $x,y$ plane, then its potential energy is proportional to its surface area: $U[\varphi ]=\iint _{D}{\sqrt {1+\nabla \varphi \cdot \nabla \varphi }}\,dx\,dy.$ Plateau's problem consists of finding a function that minimizes the surface area while assuming prescribed values on the boundary of $D$; the solutions are called minimal surfaces. The Euler–Lagrange equation for this problem is nonlinear: $\varphi _{xx}(1+\varphi _{y}^{2})+\varphi _{yy}(1+\varphi _{x}^{2})-2\varphi _{x}\varphi _{y}\varphi _{xy}=0.$ See Courant (1950) for details. Dirichlet's principle It is often sufficient to consider only small displacements of the membrane, whose energy difference from no displacement is approximated by $V[\varphi ]={\frac {1}{2}}\iint _{D}\nabla \varphi \cdot \nabla \varphi \,dx\,dy.$ The functional $V$ is to be minimized among all trial functions $\varphi $ that assume prescribed values on the boundary of $D.$ If $u$ is the minimizing function and $v$ is an arbitrary smooth function that vanishes on the boundary of $D,$ then the first variation of $V[u+\varepsilon v]$ must vanish: $\left.{\frac {d}{d\varepsilon }}V[u+\varepsilon v]\right|_{\varepsilon =0}=\iint _{D}\nabla u\cdot \nabla v\,dx\,dy=0.$ Provided that u has two derivatives, we may apply the divergence theorem to obtain $\iint _{D}\nabla \cdot (v\nabla u)\,dx\,dy=\iint _{D}\nabla u\cdot \nabla v+v\nabla \cdot \nabla u\,dx\,dy=\int _{C}v{\frac {\partial u}{\partial n}}\,ds,$ where $C$ is the boundary of $D,$ $s$ is arclength along $C$ and $\partial u/\partial n$ is the normal derivative of $u$ on $C.$ Since $v$ vanishes on $C$ and the first variation vanishes, the result is $\iint _{D}v\nabla \cdot \nabla u\,dx\,dy=0$ for all smooth functions v that vanish on the boundary of $D.$ The proof for the case of one dimensional integrals may be adapted to this case to show that $\nabla \cdot \nabla u=0$ in $D.$ The difficulty with this reasoning is the assumption that the minimizing function u must have two derivatives. Riemann argued that the existence of a smooth minimizing function was assured by the connection with the physical problem: membranes do indeed assume configurations with minimal potential energy. Riemann named this idea the Dirichlet principle in honor of his teacher Peter Gustav Lejeune Dirichlet. However Weierstrass gave an example of a variational problem with no solution: minimize $W[\varphi ]=\int _{-1}^{1}(x\varphi ')^{2}\,dx$ among all functions $\varphi $ that satisfy $\varphi (-1)=-1$ and $\varphi (1)=1.$ $W$ can be made arbitrarily small by choosing piecewise linear functions that make a transition between −1 and 1 in a small neighborhood of the origin. However, there is no function that makes $W=0.$[lower-alpha 11] Eventually it was shown that Dirichlet's principle is valid, but it requires a sophisticated application of the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998). 
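Returning to Weierstrass's counterexample above, a short symbolic computation (a sketch assuming SymPy, not part of the original article) makes the failure concrete: for the piecewise-linear trial function that equals −1 for $x\leq -\delta ,$ equals $x/\delta $ on $[-\delta ,\delta ],$ and equals 1 for $x\geq \delta ,$ the integrand vanishes outside $[-\delta ,\delta ]$ and the functional evaluates to $2\delta /3,$ which can be made as small as desired:

    # W for the piecewise-linear transition of half-width delta: the only
    # contribution comes from (-delta, delta), where phi(x) = x/delta and
    # the integrand (x*phi')**2 equals (x/delta)**2.
    import sympy as sp

    x = sp.symbols("x", real=True)
    delta = sp.symbols("delta", positive=True)
    W = sp.integrate((x / delta) ** 2, (x, -delta, delta))
    print(sp.simplify(W))        # 2*delta/3

Since W is strictly positive for every admissible φ while the infimum over such trial functions is 0, the minimum is simply not attained, which is exactly the gap in the naive form of Dirichlet's principle.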
Generalization to other boundary value problems A more general expression for the potential energy of a membrane is $V[\varphi ]=\iint _{D}\left[{\frac {1}{2}}\nabla \varphi \cdot \nabla \varphi +f(x,y)\varphi \right]\,dx\,dy\,+\int _{C}\left[{\frac {1}{2}}\sigma (s)\varphi ^{2}+g(s)\varphi \right]\,ds.$ This corresponds to an external force density $f(x,y)$ in $D,$ an external force $g(s)$ on the boundary $C,$ and elastic forces with modulus $\sigma (s)$acting on $C.$ The function that minimizes the potential energy with no restriction on its boundary values will be denoted by $u.$ Provided that $f$ and $g$ are continuous, regularity theory implies that the minimizing function $u$ will have two derivatives. In taking the first variation, no boundary condition need be imposed on the increment $v.$ The first variation of $V[u+\varepsilon v]$ is given by $\iint _{D}\left[\nabla u\cdot \nabla v+fv\right]\,dx\,dy+\int _{C}\left[\sigma uv+gv\right]\,ds=0.$ If we apply the divergence theorem, the result is $\iint _{D}\left[-v\nabla \cdot \nabla u+vf\right]\,dx\,dy+\int _{C}v\left[{\frac {\partial u}{\partial n}}+\sigma u+g\right]\,ds=0.$ If we first set $v=0$ on $C,$ the boundary integral vanishes, and we conclude as before that $-\nabla \cdot \nabla u+f=0$ in $D.$ Then if we allow $v$ to assume arbitrary boundary values, this implies that $u$ must satisfy the boundary condition ${\frac {\partial u}{\partial n}}+\sigma u+g=0,$ on $C.$ This boundary condition is a consequence of the minimizing property of $u$: it is not imposed beforehand. Such conditions are called natural boundary conditions. The preceding reasoning is not valid if $\sigma $ vanishes identically on $C.$ In such a case, we could allow a trial function $\varphi \equiv c,$ where $c$ is a constant. For such a trial function, $V[c]=c\left[\iint _{D}f\,dx\,dy+\int _{C}g\,ds\right].$ By appropriate choice of $c,$ $V$ can assume any value unless the quantity inside the brackets vanishes. Therefore, the variational problem is meaningless unless $\iint _{D}f\,dx\,dy+\int _{C}g\,ds=0.$ This condition implies that net external forces on the system are in equilibrium. If these forces are in equilibrium, then the variational problem has a solution, but it is not unique, since an arbitrary constant may be added. Further details and examples are in Courant and Hilbert (1953). Eigenvalue problems Both one-dimensional and multi-dimensional eigenvalue problems can be formulated as variational problems. Sturm–Liouville problems See also: Sturm–Liouville theory The Sturm–Liouville eigenvalue problem involves a general quadratic form $Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx,$ where $y$is restricted to functions that satisfy the boundary conditions $y(x_{1})=0,\quad y(x_{2})=0.$ Let $R$ be a normalization integral $R[y]=\int _{x_{1}}^{x_{2}}r(x)y(x)^{2}\,dx.$ The functions $p(x)$ and $r(x)$ are required to be everywhere positive and bounded away from zero. The primary variational problem is to minimize the ratio $Q/R$ among all $y$ satisfying the endpoint conditions, which is equivalent to minimizing $Q[y]$ under the constraint that $R[y]$ is constant. It is shown below that the Euler–Lagrange equation for the minimizing $u$ is $-(pu')'+qu-\lambda ru=0,$ where $\lambda $ is the quotient $\lambda ={\frac {Q[u]}{R[u]}}.$ It can be shown (see Gelfand and Fomin 1963) that the minimizing $u$ has two derivatives and satisfies the Euler–Lagrange equation. 
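As a small numerical illustration (a sketch assuming SymPy, not part of the original article, with the simplest choices $p=r=1$ and $q=0$ on $(0,\pi )$, so that the exact lowest eigenvalue is 1), evaluating the quotient $Q/R$ for a single polynomial trial function already gives a close upper bound:

    # Rayleigh quotient Q[y]/R[y] for -y'' = lambda*y on (0, pi) with
    # y(0) = y(pi) = 0.  The trial function y = x*(pi - x) satisfies the
    # boundary conditions and gives Q/R = 10/pi**2, about 1.013, an upper
    # bound on the exact lowest eigenvalue 1.
    import sympy as sp

    x = sp.symbols("x", real=True)
    y = x * (sp.pi - x)
    Q = sp.integrate(sp.diff(y, x) ** 2, (x, 0, sp.pi))
    R = sp.integrate(y ** 2, (x, 0, sp.pi))
    print(sp.simplify(Q / R), float(Q / R))    # 10/pi**2  1.0132...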
The associated $\lambda $ will be denoted by $\lambda _{1}$; it is the lowest eigenvalue for this equation and boundary conditions. The associated minimizing function will be denoted by $u_{1}(x).$ This variational characterization of eigenvalues leads to the Rayleigh–Ritz method: choose an approximating $u$ as a linear combination of basis functions (for example trigonometric functions) and carry out a finite-dimensional minimization among such linear combinations. This method is often surprisingly accurate. The next smallest eigenvalue and eigenfunction can be obtained by minimizing $Q$ under the additional constraint $\int _{x_{1}}^{x_{2}}r(x)u_{1}(x)y(x)\,dx=0.$ This procedure can be extended to obtain the complete sequence of eigenvalues and eigenfunctions for the problem. The variational problem also applies to more general boundary conditions. Instead of requiring that $y$ vanish at the endpoints, we may not impose any condition at the endpoints, and set $Q[y]=\int _{x_{1}}^{x_{2}}\left[p(x)y'(x)^{2}+q(x)y(x)^{2}\right]\,dx+a_{1}y(x_{1})^{2}+a_{2}y(x_{2})^{2},$ where $a_{1}$ and $a_{2}$ are arbitrary. If we set $y=u+\varepsilon v$the first variation for the ratio $Q/R$ is $V_{1}={\frac {2}{R[u]}}\left(\int _{x_{1}}^{x_{2}}\left[p(x)u'(x)v'(x)+q(x)u(x)v(x)-\lambda r(x)u(x)v(x)\right]\,dx+a_{1}u(x_{1})v(x_{1})+a_{2}u(x_{2})v(x_{2})\right),$ where λ is given by the ratio $Q[u]/R[u]$ as previously. After integration by parts, ${\frac {R[u]}{2}}V_{1}=\int _{x_{1}}^{x_{2}}v(x)\left[-(pu')'+qu-\lambda ru\right]\,dx+v(x_{1})[-p(x_{1})u'(x_{1})+a_{1}u(x_{1})]+v(x_{2})[p(x_{2})u'(x_{2})+a_{2}u(x_{2})].$ If we first require that $v$ vanish at the endpoints, the first variation will vanish for all such $v$ only if $-(pu')'+qu-\lambda ru=0\quad {\hbox{for}}\quad x_{1}<x<x_{2}.$ If $u$ satisfies this condition, then the first variation will vanish for arbitrary $v$ only if $-p(x_{1})u'(x_{1})+a_{1}u(x_{1})=0,\quad {\hbox{and}}\quad p(x_{2})u'(x_{2})+a_{2}u(x_{2})=0.$ These latter conditions are the natural boundary conditions for this problem, since they are not imposed on trial functions for the minimization, but are instead a consequence of the minimization. Eigenvalue problems in several dimensions Eigenvalue problems in higher dimensions are defined in analogy with the one-dimensional case. For example, given a domain $D$ with boundary $B$ in three dimensions we may define $Q[\varphi ]=\iiint _{D}p(X)\nabla \varphi \cdot \nabla \varphi +q(X)\varphi ^{2}\,dx\,dy\,dz+\iint _{B}\sigma (S)\varphi ^{2}\,dS,$ and $R[\varphi ]=\iiint _{D}r(X)\varphi (X)^{2}\,dx\,dy\,dz.$ Let $u$ be the function that minimizes the quotient $Q[\varphi ]/R[\varphi ],$ with no condition prescribed on the boundary $B.$ The Euler–Lagrange equation satisfied by $u$ is $-\nabla \cdot (p(X)\nabla u)+q(x)u-\lambda r(x)u=0,$ where $\lambda ={\frac {Q[u]}{R[u]}}.$ The minimizing $u$ must also satisfy the natural boundary condition $p(S){\frac {\partial u}{\partial n}}+\sigma (S)u=0,$ on the boundary $B.$ This result depends upon the regularity theory for elliptic partial differential equations; see Jost and Li–Jost (1998) for details. Many extensions, including completeness results, asymptotic properties of the eigenvalues and results concerning the nodes of the eigenfunctions are in Courant and Hilbert (1953). Applications Optics Fermat's principle states that light takes a path that (locally) minimizes the optical length between its endpoints. 
If the $x$-coordinate is chosen as the parameter along the path, and $y=f(x)$ along the path, then the optical length is given by $A[f]=\int _{x_{0}}^{x_{1}}n(x,f(x)){\sqrt {1+f'(x)^{2}}}dx,$ where the refractive index $n(x,y)$ depends upon the material. If we try $f(x)=f_{0}(x)+\varepsilon f_{1}(x)$ then the first variation of $A$ (the derivative of $A$ with respect to ε) is $\delta A[f_{0},f_{1}]=\int _{x_{0}}^{x_{1}}\left[{\frac {n(x,f_{0})f_{0}'(x)f_{1}'(x)}{\sqrt {1+f_{0}'(x)^{2}}}}+n_{y}(x,f_{0})f_{1}{\sqrt {1+f_{0}'(x)^{2}}}\right]dx.$ After integration by parts of the first term within brackets, we obtain the Euler–Lagrange equation $-{\frac {d}{dx}}\left[{\frac {n(x,f_{0})f_{0}'}{\sqrt {1+f_{0}'^{2}}}}\right]+n_{y}(x,f_{0}){\sqrt {1+f_{0}'(x)^{2}}}=0.$ The light rays may be determined by integrating this equation. This formalism is used in the context of Lagrangian optics and Hamiltonian optics. Snell's law There is a discontinuity of the refractive index when light enters or leaves a lens. Let $n(x,y)={\begin{cases}n_{(-)}&{\text{if}}\quad x<0,\\n_{(+)}&{\text{if}}\quad x>0,\end{cases}}$ where $n_{(-)}$ and $n_{(+)}$ are constants. Then the Euler–Lagrange equation holds as before in the region where $x<0$ or $x>0,$ and in fact the path is a straight line there, since the refractive index is constant. At $x=0,$ $f$ must be continuous, but $f'$ may be discontinuous. After integration by parts in the separate regions and using the Euler–Lagrange equations, the first variation takes the form $\delta A[f_{0},f_{1}]=f_{1}(0)\left[n_{(-)}{\frac {f_{0}'(0^{-})}{\sqrt {1+f_{0}'(0^{-})^{2}}}}-n_{(+)}{\frac {f_{0}'(0^{+})}{\sqrt {1+f_{0}'(0^{+})^{2}}}}\right].$ The factor multiplying $n_{(-)}$ is the sine of the angle of the incident ray with the $x$ axis, and the factor multiplying $n_{(+)}$ is the sine of the angle of the refracted ray with the $x$ axis. Snell's law for refraction requires that these terms be equal. As this calculation demonstrates, Snell's law is equivalent to the vanishing of the first variation of the optical path length. Fermat's principle in three dimensions It is expedient to use vector notation: let $X=(x_{1},x_{2},x_{3}),$ let $t$ be a parameter, let $X(t)$ be the parametric representation of a curve $C,$ and let ${\dot {X}}(t)$ be its tangent vector. The optical length of the curve is given by $A[C]=\int _{t_{0}}^{t_{1}}n(X){\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,dt.$ Note that this integral is invariant with respect to changes in the parametric representation of $C.$ The Euler–Lagrange equations for a minimizing curve have the symmetric form ${\frac {d}{dt}}P={\sqrt {{\dot {X}}\cdot {\dot {X}}}}\,\nabla n,$ where $P={\frac {n(X){\dot {X}}}{\sqrt {{\dot {X}}\cdot {\dot {X}}}}}.$ It follows from the definition that $P$ satisfies $P\cdot P=n(X)^{2}.$ Therefore, the integral may also be written as $A[C]=\int _{t_{0}}^{t_{1}}P\cdot {\dot {X}}\,dt.$ This form suggests that if we can find a function $\psi $ whose gradient is given by $P,$ then the integral $A$ is given by the difference of $\psi $ at the endpoints of the interval of integration. Thus the problem of studying the curves that make the integral stationary can be related to the study of the level surfaces of $\psi .$ In order to find such a function, we turn to the wave equation, which governs the propagation of light. This formalism is used in the context of Lagrangian optics and Hamiltonian optics.
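The Snell's law calculation above can also be checked numerically (a sketch assuming NumPy and SciPy; the refractive indices and the endpoint geometry are arbitrary illustrative values): minimizing the optical path length of a two-segment ray over the crossing point on the interface reproduces $n_{(-)}\sin \theta _{(-)}=n_{(+)}\sin \theta _{(+)}$:

    # Minimize the optical length of a ray from (-a, 0) to (b, h) that
    # crosses the interface x = 0 at height y, then compare n*sin(theta)
    # on the two sides of the interface.
    import numpy as np
    from scipy.optimize import minimize_scalar

    n_minus, n_plus = 1.0, 1.5      # refractive indices for x < 0 and x > 0
    a, b, h = 1.0, 1.0, 2.0         # endpoint geometry

    def optical_length(y):
        return n_minus * np.hypot(a, y) + n_plus * np.hypot(b, h - y)

    y_star = minimize_scalar(optical_length, bounds=(0.0, h), method="bounded").x
    sin_in = y_star / np.hypot(a, y_star)             # sine of the incident angle
    sin_out = (h - y_star) / np.hypot(b, h - y_star)  # sine of the refracted angle
    print(n_minus * sin_in, n_plus * sin_out)         # equal up to solver tolerance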
Connection with the wave equation The wave equation for an inhomogeneous medium is $u_{tt}=c^{2}\nabla \cdot \nabla u,$ where $c$ is the velocity, which generally depends upon $X.$ Wave fronts for light are characteristic surfaces for this partial differential equation: they satisfy $\varphi _{t}^{2}=c(X)^{2}\,\nabla \varphi \cdot \nabla \varphi .$ We may look for solutions in the form $\varphi (t,X)=t-\psi (X).$ In that case, $\psi $ satisfies $\nabla \psi \cdot \nabla \psi =n^{2},$ where $n=1/c.$ According to the theory of first-order partial differential equations, if $P=\nabla \psi ,$ then $P$ satisfies ${\frac {dP}{ds}}=n\,\nabla n,$ along a system of curves (the light rays) that are given by ${\frac {dX}{ds}}=P.$ These equations for the solution of a first-order partial differential equation are identical to the Euler–Lagrange equations if we make the identification ${\frac {ds}{dt}}={\frac {\sqrt {{\dot {X}}\cdot {\dot {X}}}}{n}}.$ We conclude that the function $\psi $ is the value of the minimizing integral $A$ as a function of the upper end point. That is, when a family of minimizing curves is constructed, the values of the optical length satisfy the characteristic equation corresponding to the wave equation. Hence, solving the associated partial differential equation of first order is equivalent to finding families of solutions of the variational problem. This is the essential content of the Hamilton–Jacobi theory, which applies to more general variational problems. Mechanics Main article: Action (physics) In classical mechanics, the action, $S,$ is defined as the time integral of the Lagrangian, $L.$ The Lagrangian is the difference of energies, $L=T-U,$ where $T$ is the kinetic energy of a mechanical system and $U$ its potential energy. Hamilton's principle (or the action principle) states that the motion of a conservative holonomic (integrable constraints) mechanical system is such that the action integral $S=\int _{t_{0}}^{t_{1}}L(x,{\dot {x}},t)\,dt$ is stationary with respect to variations in the path $x(t).$ The Euler–Lagrange equations for this system are known as Lagrange's equations: ${\frac {d}{dt}}{\frac {\partial L}{\partial {\dot {x}}}}={\frac {\partial L}{\partial x}},$ and they are equivalent to Newton's equations of motion (for such systems).
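This equivalence can be checked symbolically (a sketch assuming SymPy and its euler_equations helper; the harmonic-oscillator Lagrangian is just an illustrative choice): for $L=T-U$ with $T={\frac {1}{2}}m{\dot {x}}^{2}$ and $U={\frac {1}{2}}kx^{2},$ the Euler–Lagrange equation is Newton's equation for a mass on a spring:

    # Lagrange's equation for a harmonic oscillator, derived symbolically.
    import sympy as sp
    from sympy.calculus.euler import euler_equations

    t, m, k = sp.symbols("t m k", positive=True)
    x = sp.Function("x")
    L = m * sp.diff(x(t), t) ** 2 / 2 - k * x(t) ** 2 / 2
    print(euler_equations(L, x(t), t))
    # [Eq(-k*x(t) - m*Derivative(x(t), (t, 2)), 0)], i.e. m*x'' = -k*x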
The conjugate momentum $p$ is defined by $p={\frac {\partial L}{\partial {\dot {x}}}}.$ For example, if $T={\frac {1}{2}}m{\dot {x}}^{2},$ then $p=m{\dot {x}}.$ Hamiltonian mechanics results if the conjugate momenta are introduced in place of ${\dot {x}}$ by a Legendre transformation of the Lagrangian $L$ into the Hamiltonian $H$ defined by $H(x,p,t)=p\,{\dot {x}}-L(x,{\dot {x}},t).$ The Hamiltonian is the total energy of the system: $H=T+U.$ Analogy with Fermat's principle suggests that solutions of Lagrange's equations (the particle trajectories) may be described in terms of level surfaces of some function of $X.$ This function is a solution of the Hamilton–Jacobi equation: ${\frac {\partial \psi }{\partial t}}+H\left(x,{\frac {\partial \psi }{\partial x}},t\right)=0.$ Further applications Further applications of the calculus of variations include the following: • The derivation of the catenary shape • Solution to Newton's minimal resistance problem • Solution to the brachistochrone problem • Solution to the tautochrone problem • Solution to isoperimetric problems • Calculating geodesics • Finding minimal surfaces and solving Plateau's problem • Optimal control • Analytical mechanics, or reformulations of Newton's laws of motion, most notably Lagrangian and Hamiltonian mechanics; • Geometric optics, especially Lagrangian and Hamiltonian optics; • Variational method (quantum mechanics), one way of finding approximations to the lowest energy eigenstate or ground state, and some excited states; • Variational Bayesian methods, a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning; • Variational methods in general relativity, a family of techniques using calculus of variations to solve problems in Einstein's general theory of relativity; • Finite element method is a variational method for finding numerical solutions to boundary-value problems in differential equations; • Total variation denoising, an image processing method for filtering high variance or noisy signals. Variations and sufficient condition for a minimum Calculus of variations is concerned with variations of functionals, which are small changes in the functional's value due to small changes in the function that is its argument. 
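As a concrete numerical illustration of such a change (an added sketch, not part of the original text), take $J[y]=\int _{0}^{1}y'(x)^{2}\,dx$: the change $J[y+h]-J[y]$ splits exactly into a part linear in $h$ and a part quadratic in $h$, which is what the definitions below formalize. The particular functions and the grid are assumptions.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 4001)
dx = x[1] - x[0]

def integrate(f):
    # trapezoidal rule on the uniform grid
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

def J(y):
    # J[y] = integral of y'(x)^2 over [0, 1]  (an illustrative choice of functional)
    return integrate(np.gradient(y, dx)**2)

y = np.sin(np.pi * x)        # the argument function
h = 0.05 * x * (1.0 - x)     # a small change of the argument, vanishing at the endpoints

# For this functional, J[y+h] - J[y] = 2*integral(y'*h') + integral(h'^2) exactly:
linear_part = integrate(2.0 * np.gradient(y, dx) * np.gradient(h, dx))   # part linear in h
quadratic_part = integrate(np.gradient(h, dx)**2)                        # part quadratic in h

print(J(y + h) - J(y), linear_part + quadratic_part)   # the two numbers agree up to grid error
```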
The first variation[lower-alpha 12] is defined as the linear part of the change in the functional, and the second variation[lower-alpha 13] is defined as the quadratic part.[22] For example, if $J[y]$ is a functional with the function $y=y(x)$ as its argument, and there is a small change in its argument from $y$ to $y+h,$ where $h=h(x)$ is a function in the same function space as $y,$ then the corresponding change in the functional is[lower-alpha 14] $\Delta J[h]=J[y+h]-J[y].$ The functional $J[y]$ is said to be differentiable if $\Delta J[h]=\varphi [h]+\varepsilon \|h\|,$ where $\varphi [h]$ is a linear functional,[lower-alpha 15] $\|h\|$ is the norm of $h,$[lower-alpha 16] and $\varepsilon \to 0$ as $\|h\|\to 0.$ The linear functional $\varphi [h]$ is the first variation of $J[y]$ and is denoted by,[26] $\delta J[h]=\varphi [h].$ The functional $J[y]$ is said to be twice differentiable if $\Delta J[h]=\varphi _{1}[h]+\varphi _{2}[h]+\varepsilon \|h\|^{2},$ where $\varphi _{1}[h]$ is a linear functional (the first variation), $\varphi _{2}[h]$ is a quadratic functional,[lower-alpha 17] and $\varepsilon \to 0$ as $\|h\|\to 0.$ The quadratic functional $\varphi _{2}[h]$ is the second variation of $J[y]$ and is denoted by,[28] $\delta ^{2}J[h]=\varphi _{2}[h].$ The second variation $\delta ^{2}J[h]$ is said to be strongly positive if $\delta ^{2}J[h]\geq k\|h\|^{2},$ for all $h$ and for some constant $k>0$.[29] Using the above definitions, especially the definitions of first variation, second variation, and strongly positive, the following sufficient condition for a minimum of a functional can be stated. Sufficient condition for a minimum: The functional $J[y]$ has a minimum at $y={\hat {y}}$ if its first variation $\delta J[h]=0$ at $y={\hat {y}}$ and its second variation $\delta ^{2}J[h]$ is strongly positive at $y={\hat {y}}.$[30] [lower-alpha 18][lower-alpha 19] See also • First variation • Isoperimetric inequality • Variational principle • Variational bicomplex • Fermat's principle • Principle of least action • Infinite-dimensional optimization • Finite element method • Functional analysis • Ekeland's variational principle • Inverse problem for Lagrangian mechanics • Obstacle problem • Perturbation methods • Young measure • Optimal control • Direct method in calculus of variations • Noether's theorem • De Donder–Weyl theory • Variational Bayesian methods • Chaplygin problem • Nehari manifold • Hu–Washizu principle • Luke's variational principle • Mountain pass theorem • Category:Variational analysts • Measures of central tendency as solutions to variational problems • Stampacchia Medal • Fermat Prize • Convenient vector space Notes 1. Whereas elementary calculus is about infinitesimally small changes in the values of functions without changes in the function itself, calculus of variations is about infinitesimally small changes in the function itself, which are called variations.[1] 2. "Euler waited until Lagrange had published on the subject in 1762 ... before he committed his lecture ... to print, so as not to rob Lagrange of his glory. Indeed, it was only Lagrange's method that Euler called Calculus of Variations."[3] 3. See Harold J. Kushner (2004): regarding Dynamic Programming, "The calculus of variations had related ideas (e.g., the work of Caratheodory, the Hamilton-Jacobi equation). This led to conflicts with the calculus of variations community." 4. 
The neighborhood of $f$ is the part of the given function space where $|y-f|<h$ over the whole domain of the functions, with $h$ a positive number that specifies the size of the neighborhood.[10] 5. Note the difference between the terms extremal and extremum. An extremal is a function that makes a functional an extremum. 6. For a sufficient condition, see section Variations and sufficient condition for a minimum. 7. The following derivation of the Euler–Lagrange equation corresponds to the derivation on pp. 184–185 of Courant & Hilbert (1953).[14] 8. Note that $\eta (x)$ and $f(x)$ are evaluated at the same values of $x,$ which is not valid more generally in variational calculus with non-holonomic constraints. 9. The product $\varepsilon \Phi '(0)$ is called the first variation of the functional $J$ and is denoted by $\delta J.$ Some references define the first variation differently by leaving out the $\varepsilon $ factor. 10. As a historical note, this is an axiom of Archimedes. See e.g. Kelland (1843).[15] 11. The resulting controversy over the validity of Dirichlet's principle is explained by Turnbull.[21] 12. The first variation is also called the variation, differential, or first differential. 13. The second variation is also called the second differential. 14. Note that $\Delta J[h]$ and the variations below, depend on both $y$ and $h.$ The argument $y$ has been left out to simplify the notation. For example, $\Delta J[h]$ could have been written $\Delta J[y;h].$[23] 15. A functional $\varphi [h]$ is said to be linear if $\varphi [\alpha h]=\alpha \varphi [h]$   and   $\varphi \left[h+h_{2}\right]=\varphi [h]+\varphi \left[h_{2}\right],$ where $h,h_{2}$ are functions and $\alpha $ is a real number.[24] 16. For a function $h=h(x)$ that is defined for $a\leq x\leq b,$ where $a$ and $b$ are real numbers, the norm of $h$ is its maximum absolute value, i.e. $\|h\|=\displaystyle \max _{a\leq x\leq b}|h(x)|.$[25] 17. A functional is said to be quadratic if it is a bilinear functional with two argument functions that are equal. A bilinear functional is a functional that depends on two argument functions and is linear when each argument function in turn is fixed while the other argument function is variable.[27] 18. For other sufficient conditions, see in Gelfand & Fomin 2000, • Chapter 5: "The Second Variation. Sufficient Conditions for a Weak Extremum" – Sufficient conditions for a weak minimum are given by the theorem on p. 116. • Chapter 6: "Fields. Sufficient Conditions for a Strong Extremum" – Sufficient conditions for a strong minimum are given by the theorem on p. 148. 19. One may note the similarity to the sufficient condition for a minimum of a function, where the first derivative is zero and the second derivative is positive. References 1. Courant & Hilbert 1953, p. 184 2. Gelfand, I. M.; Fomin, S. V. (2000). Silverman, Richard A. (ed.). Calculus of variations (Unabridged repr. ed.). Mineola, New York: Dover Publications. p. 3. ISBN 978-0486414485. 3. Thiele, Rüdiger (2007). "Euler and the Calculus of Variations". In Bradley, Robert E.; Sandifer, C. Edward (eds.). Leonhard Euler: Life, Work and Legacy. Elsevier. p. 249. ISBN 9780080471297. 4. Goldstine, Herman H. (2012). A History of the Calculus of Variations from the 17th through the 19th Century. Springer Science & Business Media. p. 110. ISBN 9781461381068. 5. van Brunt, Bruce (2004). The Calculus of Variations. Springer. ISBN 978-0-387-40247-5. 6. Ferguson, James (2004). 
"Brief Survey of the History of the Calculus of Variations and its Applications". arXiv:math/0402357. 7. Dimitri Bertsekas. Dynamic programming and optimal control. Athena Scientific, 2005. 8. Bellman, Richard E. (1954). "Dynamic Programming and a new formalism in the calculus of variations". Proc. Natl. Acad. Sci. 40 (4): 231–235. Bibcode:1954PNAS...40..231B. doi:10.1073/pnas.40.4.231. PMC 527981. PMID 16589462. 9. "Richard E. Bellman Control Heritage Award". American Automatic Control Council. 2004. Retrieved 2013-07-28. 10. Courant, R; Hilbert, D (1953). Methods of Mathematical Physics. Vol. I (First English ed.). New York: Interscience Publishers, Inc. p. 169. ISBN 978-0471504474. 11. Gelfand & Fomin 2000, pp. 12–13 12. Gelfand & Fomin 2000, p. 13 13. Gelfand & Fomin 2000, pp. 14–15 14. Courant, R.; Hilbert, D. (1953). Methods of Mathematical Physics. Vol. I (First English ed.). New York: Interscience Publishers, Inc. ISBN 978-0471504474. 15. Kelland, Philip (1843). Lectures on the principles of demonstrative mathematics. p. 58 – via Google Books. 16. Weisstein, Eric W. "Euler–Lagrange Differential Equation". mathworld.wolfram.com. Wolfram. Eq. (5). 17. Kot, Mark (2014). "Chapter 4: Basic Generalizations". A First Course in the Calculus of Variations. American Mathematical Society. ISBN 978-1-4704-1495-5. 18. Manià, Bernard (1934). "Sopra un esempio di Lavrentieff". Bollenttino dell'Unione Matematica Italiana. 13: 147–153. 19. Ball & Mizel (1985). "One-dimensional Variational problems whose Minimizers do not satisfy the Euler-Lagrange equation". Archive for Rational Mechanics and Analysis. 90 (4): 325–388. Bibcode:1985ArRMA..90..325B. doi:10.1007/BF00276295. S2CID 55005550. 20. Ferriero, Alessandro (2007). "The Weak Repulsion property". Journal de Mathématiques Pures et Appliquées. 88 (4): 378–388. doi:10.1016/j.matpur.2007.06.002. 21. Turnbull. "Riemann biography". UK: U. St. Andrew. 22. Gelfand & Fomin 2000, pp. 11–12, 99 23. Gelfand & Fomin 2000, p. 12, footnote 6 24. Gelfand & Fomin 2000, p. 8 25. Gelfand & Fomin 2000, p. 6 26. Gelfand & Fomin 2000, pp. 11–12 27. Gelfand & Fomin 2000, pp. 97–98 28. Gelfand & Fomin 2000, p. 99 29. Gelfand & Fomin 2000, p. 100 30. Gelfand & Fomin 2000, p. 100, Theorem 2 Further reading • Benesova, B. and Kruzik, M.: "Weak Lower Semicontinuity of Integral Functionals and Applications". SIAM Review 59(4) (2017), 703–766. • Bolza, O.: Lectures on the Calculus of Variations. Chelsea Publishing Company, 1904, available on Digital Mathematics library. 2nd edition republished in 1961, paperback in 2005, ISBN 978-1-4181-8201-4. • Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013. • Clegg, J.C.: Calculus of Variations, Interscience Publishers Inc., 1968. • Courant, R.: Dirichlet's principle, conformal mapping and minimal surfaces. Interscience, 1950. • Dacorogna, Bernard: "Introduction" Introduction to the Calculus of Variations, 3rd edition. 2014, World Scientific Publishing, ISBN 978-1-78326-551-0. • Elsgolc, L.E.: Calculus of Variations, Pergamon Press Ltd., 1962. • Forsyth, A.R.: Calculus of Variations, Dover, 1960. • Fox, Charles: An Introduction to the Calculus of Variations, Dover Publ., 1987. • Giaquinta, Mariano; Hildebrandt, Stefan: Calculus of Variations I and II, Springer-Verlag, ISBN 978-3-662-03278-7 and ISBN 978-3-662-06201-2 • Jost, J. and X. Li-Jost: Calculus of Variations. Cambridge University Press, 1998. • Lebedev, L.P. 
and Cloud, M.J.: The Calculus of Variations and Functional Analysis with Optimal Control and Applications in Mechanics, World Scientific, 2003, pages 1–98. • Logan, J. David: Applied Mathematics, 3rd edition. Wiley-Interscience, 2006 • Pike, Ralph W. "Chapter 8: Calculus of Variations". Optimization for Engineering Systems. Louisiana State University. Archived from the original on 2007-07-05. • Roubicek, T.: "Calculus of variations". Chap.17 in: Mathematical Tools for Physicists. (Ed. M. Grinfeld) J. Wiley, Weinheim, 2014, ISBN 978-3-527-41188-7, pp. 551–588. • Sagan, Hans: Introduction to the Calculus of Variations, Dover, 1992. • Weinstock, Robert: Calculus of Variations with Applications to Physics and Engineering, Dover, 1974 (reprint of 1952 ed.). External links • Variational calculus. Encyclopedia of Mathematics. • calculus of variations. PlanetMath. • Calculus of Variations. MathWorld. • Calculus of variations. Example problems. • Mathematics - Calculus of Variations and Integral Equations. Lectures on YouTube. • Selected papers on Geodesic Fields. Part I, Part II.
Functional derivative In the calculus of variations, a field of mathematical analysis, the functional derivative (or variational derivative)[1] relates a change in a functional (a functional in this sense is a function that acts on functions) to a change in a function on which the functional depends. In the calculus of variations, functionals are usually expressed in terms of an integral of functions, their arguments, and their derivatives. In an integrand L of a functional, if a function f is varied by adding to it another function δf that is arbitrarily small, and the resulting integrand is expanded in powers of δf, the coefficient of δf in the first order term is called the functional derivative. For example, consider the functional $J[f]=\int _{a}^{b}L(\,x,f(x),f\,'(x)\,)\,dx\ ,$ where f ′(x) ≡ df/dx. If f is varied by adding to it a function δf, and the resulting integrand L(x, f +δf, f '+δf ′) is expanded in powers of δf, then the change in the value of J to first order in δf can be expressed as follows:[1][Note 1] $\delta J=\int _{a}^{b}\left({\frac {\partial L}{\partial f}}\delta f(x)+{\frac {\partial L}{\partial f'}}{\frac {d}{dx}}\delta f(x)\right)\,dx\,=\int _{a}^{b}\left({\frac {\partial L}{\partial f}}-{\frac {d}{dx}}{\frac {\partial L}{\partial f'}}\right)\delta f(x)\,dx\,+\,{\frac {\partial L}{\partial f'}}(b)\delta f(b)\,-\,{\frac {\partial L}{\partial f'}}(a)\delta f(a)\,$ where the variation in the derivative, δf ′, was rewritten as the derivative of the variation (δf) ′, and integration by parts was used in these derivatives. Definition In this section, the functional differential (or variation or first variation)[Note 2] is defined. Then the functional derivative is defined in terms of the functional differential. Functional differential Suppose $B$ is a Banach space and $F$ is a functional defined on $B$. The differential of $F$ at a point $\rho \in B$ is the linear functional $\delta F[\rho ,\cdot ]$ on $B$ defined[2] by the condition that, for all $\phi \in B$, $F[\rho +\phi ]-F[\rho ]=\delta F[\rho ;\phi ]+\epsilon \cdot \|\phi \|$ where $\epsilon $ is a real number that depends on $\|\phi \|$ in such a way that $\epsilon \to 0$ as $\|\phi \|\to 0$. This means that $\delta F[\rho ,\cdot ]$ is the Fréchet derivative of $F$ at $\rho $. However, this notion of functional differential is so strong that it may not exist,[3] and in those cases a weaker notion, such as the Gateaux derivative, is preferred. In many practical cases, the functional differential is defined[4] as the directional derivative ${\begin{aligned}\delta F[\rho ,\phi ]&=\lim _{\varepsilon \to 0}{\frac {F[\rho +\varepsilon \phi ]-F[\rho ]}{\varepsilon }}\\&=\left[{\frac {d}{d\varepsilon }}F[\rho +\varepsilon \phi ]\right]_{\varepsilon =0}.\end{aligned}}$ Note that this notion of the functional differential can even be defined without a norm. Functional derivative In many applications, the domain of the functional $F$ is a space of differentiable functions $\rho $ defined on some space $\Omega $ and $F$ is of the form $F[\rho ]=\int _{\Omega }L(x,\rho (x),D\rho (x))\,dx$ for some function $L(x,\rho (x),D\rho (x))$ that may depend on $x$, the value $\rho (x)$ and the derivative $D\rho (x)$. 
If this is the case and, moreover, $\delta F[\rho ,\phi ]$ can be written as the integral of $\phi $ times another function (denoted δF/δρ) $\delta F[\rho ;\phi ]=\int _{\Omega }{\frac {\delta F}{\delta \rho }}(x)\ \phi (x)\ dx$ then this function δF/δρ is called the functional derivative of F at ρ.[5][6] If $F$ is restricted to only certain functions $\rho $ (for example, if there are some boundary conditions imposed) then $\phi $ is restricted to functions such that $\rho +\epsilon \phi $ continues to satisfy these conditions. Heuristically, $\phi $ is the change in $\rho $, so we 'formally' have $\phi =\delta \rho $, and then this is similar in form to the total differential of a function $F(\rho _{1},\rho _{2},\dots ,\rho _{n})$, $dF=\sum _{i=1}^{n}{\frac {\partial F}{\partial \rho _{i}}}\ d\rho _{i},$ where $\rho _{1},\rho _{2},\dots ,\rho _{n}$ are independent variables. Comparing the last two equations, the functional derivative $\delta F/\delta \rho (x)$ has a role similar to that of the partial derivative $\partial F/\partial \rho _{i}$, where the variable of integration $x$ is like a continuous version of the summation index $i$.[7] One thinks of δF/δρ as the gradient of F at the point ρ, so the value δF/δρ(x) measures how much the functional F will change if the function ρ is changed at the point x. Hence the formula $\int {\frac {\delta F}{\delta \rho }}(x)\phi (x)\;dx$ is regarded as the directional derivative at point ρ in the direction of ϕ. This is analogous to vector calculus, where the inner product of a vector $v$ with the gradient gives the directional derivative in the direction of $v$. Properties Like the derivative of a function, the functional derivative satisfies the following properties, where F[ρ] and G[ρ] are functionals:[Note 3] • Linearity:[8] ${\frac {\delta (\lambda F+\mu G)[\rho ]}{\delta \rho (x)}}=\lambda {\frac {\delta F[\rho ]}{\delta \rho (x)}}+\mu {\frac {\delta G[\rho ]}{\delta \rho (x)}},$ where λ, μ are constants. • Product rule:[9] ${\frac {\delta (FG)[\rho ]}{\delta \rho (x)}}={\frac {\delta F[\rho ]}{\delta \rho (x)}}G[\rho ]+F[\rho ]{\frac {\delta G[\rho ]}{\delta \rho (x)}}\,,$ • Chain rules: • If F is a functional and G another functional, then[10] ${\frac {\delta F[G[\rho ]]}{\delta \rho (y)}}=\int dx{\frac {\delta F[G]}{\delta G(x)}}_{G=G[\rho ]}\cdot {\frac {\delta G[\rho ](x)}{\delta \rho (y)}}\ .$ • If G is an ordinary differentiable function (local functional) g, then this reduces to[11] ${\frac {\delta F[g(\rho )]}{\delta \rho (y)}}={\frac {\delta F[g(\rho )]}{\delta g[\rho (y)]}}\ {\frac {dg(\rho )}{d\rho (y)}}\ .$ Determining functional derivatives A formula to determine functional derivatives for a common class of functionals can be written as the integral of a function and its derivatives. This is a generalization of the Euler–Lagrange equation: indeed, the functional derivative was introduced in physics within the derivation of the Lagrange equation of the second kind from the principle of least action in Lagrangian mechanics (18th century). The first three examples below are taken from density functional theory (20th century), the fourth from statistical mechanics (19th century). 
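Before the general formula, the defining relation above can be checked numerically for a simple functional (an added sketch, not from the article): for $F[\rho ]=\int \rho (x)^{3}\,dx$ the functional derivative is $3\rho (x)^{2}$, and the directional derivative along a test function $\phi $ agrees with the pairing $\int 3\rho ^{2}\phi \,dx$. The particular $\rho $, $\phi $, grid and step size are assumptions.

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 4001)
dx = x[1] - x[0]

def integrate(f):
    # trapezoidal rule on the uniform grid
    return dx * (f.sum() - 0.5 * (f[0] + f[-1]))

def F(rho):
    # F[rho] = integral of rho(x)^3; its functional derivative is 3*rho(x)**2
    return integrate(rho**3)

rho = np.cos(x)              # the function the functional acts on
phi = np.exp(-x**2)          # an arbitrary test direction
eps = 1e-6

directional = (F(rho + eps * phi) - F(rho)) / eps     # d/d(eps) F[rho + eps*phi] near eps = 0
pairing = integrate(3.0 * rho**2 * phi)               # integral of (delta F / delta rho)(x) * phi(x)

print(directional, pairing)   # the two values agree to roughly the size of eps
```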
Formula Given a functional $F[\rho ]=\int f({\boldsymbol {r}},\rho ({\boldsymbol {r}}),\nabla \rho ({\boldsymbol {r}}))\,d{\boldsymbol {r}},$ and a function ϕ(r) that vanishes on the boundary of the region of integration, from a previous section Definition, ${\begin{aligned}\int {\frac {\delta F}{\delta \rho ({\boldsymbol {r}})}}\,\phi ({\boldsymbol {r}})\,d{\boldsymbol {r}}&=\left[{\frac {d}{d\varepsilon }}\int f({\boldsymbol {r}},\rho +\varepsilon \phi ,\nabla \rho +\varepsilon \nabla \phi )\,d{\boldsymbol {r}}\right]_{\varepsilon =0}\\&=\int \left({\frac {\partial f}{\partial \rho }}\,\phi +{\frac {\partial f}{\partial \nabla \rho }}\cdot \nabla \phi \right)d{\boldsymbol {r}}\\&=\int \left[{\frac {\partial f}{\partial \rho }}\,\phi +\nabla \cdot \left({\frac {\partial f}{\partial \nabla \rho }}\,\phi \right)-\left(\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi \right]d{\boldsymbol {r}}\\&=\int \left[{\frac {\partial f}{\partial \rho }}\,\phi -\left(\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi \right]d{\boldsymbol {r}}\\&=\int \left({\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}\right)\phi ({\boldsymbol {r}})\ d{\boldsymbol {r}}\,.\end{aligned}}$ The second line is obtained using the total derivative, where ∂f /∂∇'ρ is a derivative of a scalar with respect to a vector.[Note 4] The third line was obtained by use of a product rule for divergence. The fourth line was obtained using the divergence theorem and the condition that ϕ = 0 on the boundary of the region of integration. Since ϕ is also an arbitrary function, applying the fundamental lemma of calculus of variations to the last line, the functional derivative is ${\frac {\delta F}{\delta \rho ({\boldsymbol {r}})}}={\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial \nabla \rho }}$ where ρ = ρ(r) and f = f (r, ρ, ∇ρ). This formula is for the case of the functional form given by F[ρ] at the beginning of this section. For other functional forms, the definition of the functional derivative can be used as the starting point for its determination. (See the example Coulomb potential energy functional.) The above equation for the functional derivative can be generalized to the case that includes higher dimensions and higher order derivatives. 
The functional would be, $F[\rho ({\boldsymbol {r}})]=\int f({\boldsymbol {r}},\rho ({\boldsymbol {r}}),\nabla \rho ({\boldsymbol {r}}),\nabla ^{(2)}\rho ({\boldsymbol {r}}),\dots ,\nabla ^{(N)}\rho ({\boldsymbol {r}}))\,d{\boldsymbol {r}},$ where the vector r ∈ Rn, and ∇(i) is a tensor whose ni components are partial derivative operators of order i, $\left[\nabla ^{(i)}\right]_{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}={\frac {\partial ^{\,i}}{\partial r_{\alpha _{1}}\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}\qquad \qquad {\text{where}}\quad \alpha _{1},\alpha _{2},\cdots ,\alpha _{i}=1,2,\cdots ,n\ .$ [Note 5] An analogous application of the definition of the functional derivative yields ${\begin{aligned}{\frac {\delta F[\rho ]}{\delta \rho }}&{}={\frac {\partial f}{\partial \rho }}-\nabla \cdot {\frac {\partial f}{\partial (\nabla \rho )}}+\nabla ^{(2)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(2)}\rho \right)}}+\dots +(-1)^{N}\nabla ^{(N)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(N)}\rho \right)}}\\&{}={\frac {\partial f}{\partial \rho }}+\sum _{i=1}^{N}(-1)^{i}\nabla ^{(i)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}\ .\end{aligned}}$ In the last two equations, the ni components of the tensor ${\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}$ are partial derivatives of f with respect to partial derivatives of ρ, $\left[{\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}\right]_{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}={\frac {\partial f}{\partial \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}}}\qquad \qquad {\text{where}}\quad \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}\equiv {\frac {\partial ^{\,i}\rho }{\partial r_{\alpha _{1}}\,\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}\ ,$ and the tensor scalar product is, $\nabla ^{(i)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(i)}\rho \right)}}=\sum _{\alpha _{1},\alpha _{2},\cdots ,\alpha _{i}=1}^{n}\ {\frac {\partial ^{\,i}}{\partial r_{\alpha _{1}}\,\partial r_{\alpha _{2}}\cdots \partial r_{\alpha _{i}}}}\ {\frac {\partial f}{\partial \rho _{\alpha _{1}\alpha _{2}\cdots \alpha _{i}}}}\ .$ [Note 6] Thomas–Fermi kinetic energy functional The Thomas–Fermi model of 1927 used a kinetic energy functional for a noninteracting uniform electron gas in a first attempt of density-functional theory of electronic structure: $T_{\mathrm {TF} }[\rho ]=C_{\mathrm {F} }\int \rho ^{5/3}(\mathbf {r} )\,d\mathbf {r} \,.$ Since the integrand of TTF[ρ] does not involve derivatives of ρ(r), the functional derivative of TTF[ρ] is,[12] ${\begin{aligned}{\frac {\delta T_{\mathrm {TF} }}{\delta \rho ({\boldsymbol {r}})}}&=C_{\mathrm {F} }{\frac {\partial \rho ^{5/3}(\mathbf {r} )}{\partial \rho (\mathbf {r} )}}\\&={\frac {5}{3}}C_{\mathrm {F} }\rho ^{2/3}(\mathbf {r} )\,.\end{aligned}}$ Coulomb potential energy functional For the electron-nucleus potential, Thomas and Fermi employed the Coulomb potential energy functional $V[\rho ]=\int {\frac {\rho ({\boldsymbol {r}})}{|{\boldsymbol {r}}|}}\ d{\boldsymbol {r}}.$ Applying the definition of functional derivative, ${\begin{aligned}\int {\frac {\delta V}{\delta \rho ({\boldsymbol {r}})}}\ \phi ({\boldsymbol {r}})\ d{\boldsymbol {r}}&{}=\left[{\frac {d}{d\varepsilon }}\int {\frac {\rho ({\boldsymbol {r}})+\varepsilon \phi ({\boldsymbol {r}})}{|{\boldsymbol {r}}|}}\ d{\boldsymbol {r}}\right]_{\varepsilon =0}\\&{}=\int {\frac {1}{|{\boldsymbol {r}}|}}\,\phi ({\boldsymbol {r}})\ d{\boldsymbol {r}}\,.\end{aligned}}$ 
So, ${\frac {\delta V}{\delta \rho ({\boldsymbol {r}})}}={\frac {1}{|{\boldsymbol {r}}|}}\ .$ For the classical part of the electron-electron interaction, Thomas and Fermi employed the Coulomb potential energy functional $J[\rho ]={\frac {1}{2}}\iint {\frac {\rho (\mathbf {r} )\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\,d\mathbf {r} d\mathbf {r} '\,.$ From the definition of the functional derivative, ${\begin{aligned}\int {\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}\phi ({\boldsymbol {r}})d{\boldsymbol {r}}&{}=\left[{\frac {d\ }{d\epsilon }}\,J[\rho +\epsilon \phi ]\right]_{\epsilon =0}\\&{}=\left[{\frac {d\ }{d\epsilon }}\,\left({\frac {1}{2}}\iint {\frac {[\rho ({\boldsymbol {r}})+\epsilon \phi ({\boldsymbol {r}})]\,[\rho ({\boldsymbol {r}}')+\epsilon \phi ({\boldsymbol {r}}')]}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'\right)\right]_{\epsilon =0}\\&{}={\frac {1}{2}}\iint {\frac {\rho ({\boldsymbol {r}}')\phi ({\boldsymbol {r}})}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'+{\frac {1}{2}}\iint {\frac {\rho ({\boldsymbol {r}})\phi ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}\,d{\boldsymbol {r}}d{\boldsymbol {r}}'\\\end{aligned}}$ The first and second terms on the right hand side of the last equation are equal, since r and r′ in the second term can be interchanged without changing the value of the integral. Therefore, $\int {\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}\phi ({\boldsymbol {r}})d{\boldsymbol {r}}=\int \left(\int {\frac {\rho ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}d{\boldsymbol {r}}'\right)\phi ({\boldsymbol {r}})d{\boldsymbol {r}}$ and the functional derivative of the electron-electron coulomb potential energy functional J[ρ] is,[13] ${\frac {\delta J}{\delta \rho ({\boldsymbol {r}})}}=\int {\frac {\rho ({\boldsymbol {r}}')}{|{\boldsymbol {r}}-{\boldsymbol {r}}'|}}d{\boldsymbol {r}}'\,.$ The second functional derivative is ${\frac {\delta ^{2}J[\rho ]}{\delta \rho (\mathbf {r} ')\delta \rho (\mathbf {r} )}}={\frac {\partial }{\partial \rho (\mathbf {r} ')}}\left({\frac {\rho (\mathbf {r} ')}{|\mathbf {r} -\mathbf {r} '|}}\right)={\frac {1}{|\mathbf {r} -\mathbf {r} '|}}.$ Weizsäcker kinetic energy functional In 1935 von Weizsäcker proposed to add a gradient correction to the Thomas-Fermi kinetic energy functional to make it better suit a molecular electron cloud: $T_{\mathrm {W} }[\rho ]={\frac {1}{8}}\int {\frac {\nabla \rho (\mathbf {r} )\cdot \nabla \rho (\mathbf {r} )}{\rho (\mathbf {r} )}}d\mathbf {r} =\int t_{\mathrm {W} }\ d\mathbf {r} \,,$ where $t_{\mathrm {W} }\equiv {\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho }}\qquad {\text{and}}\ \ \rho =\rho ({\boldsymbol {r}})\ .$ Using a previously derived formula for the functional derivative, ${\begin{aligned}{\frac {\delta T_{\mathrm {W} }}{\delta \rho ({\boldsymbol {r}})}}&={\frac {\partial t_{\mathrm {W} }}{\partial \rho }}-\nabla \cdot {\frac {\partial t_{\mathrm {W} }}{\partial \nabla \rho }}\\&=-{\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}-\left({\frac {1}{4}}{\frac {\nabla ^{2}\rho }{\rho }}-{\frac {1}{4}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}\right)\qquad {\text{where}}\ \ \nabla ^{2}=\nabla \cdot \nabla \ ,\end{aligned}}$ and the result is,[14] ${\frac {\delta T_{\mathrm {W} }}{\delta \rho ({\boldsymbol {r}})}}=\ \ \,{\frac {1}{8}}{\frac {\nabla \rho \cdot \nabla \rho }{\rho ^{2}}}-{\frac {1}{4}}{\frac {\nabla ^{2}\rho }{\rho }}\ .$ 
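The Weizsäcker result just quoted can be reproduced symbolically in one dimension (an added sketch, not part of the article). SymPy's euler_equations builds the combination $\partial t/\partial \rho -d/dx\,(\partial t/\partial \rho ')$ for an integrand depending on $\rho $ and $\rho '$, which is exactly the functional derivative given by the formula above.

```python
import sympy as sp
from sympy.calculus.euler import euler_equations

x = sp.symbols('x')
rho = sp.Function('rho', positive=True)

# One-dimensional Weizsaecker integrand  t_W = (1/8) * rho'(x)**2 / rho(x)
t_W = sp.Rational(1, 8) * sp.diff(rho(x), x)**2 / rho(x)

# The left-hand side of the returned equation is the functional derivative of T_W.
(eq,) = euler_equations(t_W, [rho(x)], x)
print(sp.simplify(eq.lhs))
# simplifies to rho'(x)**2/(8*rho(x)**2) - rho''(x)/(4*rho(x)), matching the expression quoted above
```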
Entropy The entropy of a discrete random variable is a functional of the probability mass function. $H[p(x)]=-\sum _{x}p(x)\log p(x)$ Thus, ${\begin{aligned}\sum _{x}{\frac {\delta H}{\delta p(x)}}\,\phi (x)&{}=\left[{\frac {d}{d\epsilon }}H[p(x)+\epsilon \phi (x)]\right]_{\epsilon =0}\\&{}=\left[-\,{\frac {d}{d\varepsilon }}\sum _{x}\,[p(x)+\varepsilon \phi (x)]\ \log[p(x)+\varepsilon \phi (x)]\right]_{\varepsilon =0}\\&{}=-\sum _{x}\,[1+\log p(x)]\ \phi (x)\,.\end{aligned}}$ Thus, ${\frac {\delta H}{\delta p(x)}}=-1-\log p(x).$ Exponential Let $F[\varphi (x)]=e^{\int \varphi (x)g(x)dx}.$ Using the delta function as a test function, ${\begin{aligned}{\frac {\delta F[\varphi (x)]}{\delta \varphi (y)}}&{}=\lim _{\varepsilon \to 0}{\frac {F[\varphi (x)+\varepsilon \delta (x-y)]-F[\varphi (x)]}{\varepsilon }}\\&{}=\lim _{\varepsilon \to 0}{\frac {e^{\int (\varphi (x)+\varepsilon \delta (x-y))g(x)dx}-e^{\int \varphi (x)g(x)dx}}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}\lim _{\varepsilon \to 0}{\frac {e^{\varepsilon \int \delta (x-y)g(x)dx}-1}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}\lim _{\varepsilon \to 0}{\frac {e^{\varepsilon g(y)}-1}{\varepsilon }}\\&{}=e^{\int \varphi (x)g(x)dx}g(y).\end{aligned}}$ Thus, ${\frac {\delta F[\varphi (x)]}{\delta \varphi (y)}}=g(y)F[\varphi (x)].$ This is particularly useful in calculating the correlation functions from the partition function in quantum field theory. Functional derivative of a function A function can be written in the form of an integral like a functional. For example, $\rho ({\boldsymbol {r}})=F[\rho ]=\int \rho ({\boldsymbol {r}}')\delta ({\boldsymbol {r}}-{\boldsymbol {r}}')\,d{\boldsymbol {r}}'.$ Since the integrand does not depend on derivatives of ρ, the functional derivative of ρ(r) is, ${\begin{aligned}{\frac {\delta \rho ({\boldsymbol {r}})}{\delta \rho ({\boldsymbol {r}}')}}\equiv {\frac {\delta F}{\delta \rho ({\boldsymbol {r}}')}}&={\frac {\partial \ \ }{\partial \rho ({\boldsymbol {r}}')}}\,[\rho ({\boldsymbol {r}}')\delta ({\boldsymbol {r}}-{\boldsymbol {r}}')]\\&=\delta ({\boldsymbol {r}}-{\boldsymbol {r}}').\end{aligned}}$ Functional derivative of iterated function The functional derivative of the iterated function $f(f(x))$ is given by: ${\frac {\delta f(f(x))}{\delta f(y)}}=f'(f(x))\delta (x-y)+\delta (f(x)-y)$ and ${\frac {\delta f(f(f(x)))}{\delta f(y)}}=f'(f(f(x))(f'(f(x))\delta (x-y)+\delta (f(x)-y))+\delta (f(f(x))-y)$ In general: ${\frac {\delta f^{N}(x)}{\delta f(y)}}=f'(f^{N-1}(x)){\frac {\delta f^{N-1}(x)}{\delta f(y)}}+\delta (f^{N-1}(x)-y)$ Putting in N = 0 gives: ${\frac {\delta f^{-1}(x)}{\delta f(y)}}=-{\frac {\delta (f^{-1}(x)-y)}{f'(f^{-1}(x))}}$ Using the delta function as a test function In physics, it is common to use the Dirac delta function $\delta (x-y)$ in place of a generic test function $\phi (x)$, for yielding the functional derivative at the point $y$ (this is a point of the whole functional derivative as a partial derivative is a component of the gradient):[15] ${\frac {\delta F[\rho (x)]}{\delta \rho (y)}}=\lim _{\varepsilon \to 0}{\frac {F[\rho (x)+\varepsilon \delta (x-y)]-F[\rho (x)]}{\varepsilon }}.$ This works in cases when $F[\rho (x)+\varepsilon f(x)]$ formally can be expanded as a series (or at least up to first order) in $\varepsilon $. The formula is however not mathematically rigorous, since $F[\rho (x)+\varepsilon \delta (x-y)]$ is usually not even defined. 
The definition given in a previous section is based on a relationship that holds for all test functions $\phi (x)$, so one might think that it should hold also when $\phi (x)$ is chosen to be a specific function such as the delta function. However, the latter is not a valid test function (it is not even a proper function). In the definition, the functional derivative describes how the functional $F[\rho (x)]$ changes as a result of a small change in the entire function $\rho (x)$. The particular form of the change in $\rho (x)$ is not specified, but it should stretch over the whole interval on which $x$ is defined. Employing the particular form of the perturbation given by the delta function has the meaning that $\rho (x)$ is varied only in the point $y$. Except for this point, there is no variation in $\rho (x)$. Notes 1. According to Giaquinta & Hildebrandt (1996), p. 18, this notation is customary in physical literature. 2. Called first variation in (Giaquinta & Hildebrandt 1996, p. 3), variation or first variation in (Courant & Hilbert 1953, p. 186), variation or differential in (Gelfand & Fomin 2000, p. 11, § 3.2) and differential in (Parr & Yang 1989, p. 246). 3. Here the notation ${\frac {\delta {F}}{\delta \rho }}(x)\equiv {\frac {\delta {F}}{\delta \rho (x)}}$ is introduced. 4. For a three-dimensional Cartesian coordinate system, ${\frac {\partial f}{\partial \nabla \rho }}={\frac {\partial f}{\partial \rho _{x}}}\mathbf {\hat {i}} +{\frac {\partial f}{\partial \rho _{y}}}\mathbf {\hat {j}} +{\frac {\partial f}{\partial \rho _{z}}}\mathbf {\hat {k}} \,,$ where $\rho _{x}={\frac {\partial \rho }{\partial x}}\,,\ \rho _{y}={\frac {\partial \rho }{\partial y}}\,,\ \rho _{z}={\frac {\partial \rho }{\partial z}}$ and $\mathbf {\hat {i}} $, $\mathbf {\hat {j}} $, $\mathbf {\hat {k}} $ are unit vectors along the x, y, z axes. 5. For example, for the case of three dimensions (n = 3) and second order derivatives (i = 2), the tensor ∇(2) has components, $\left[\nabla ^{(2)}\right]_{\alpha \beta }={\frac {\partial ^{\,2}}{\partial r_{\alpha }\,\partial r_{\beta }}}\qquad \qquad {\text{where}}\quad \alpha ,\beta =1,2,3\,.$ 6. For example, for the case n = 3 and i = 2, the tensor scalar product is, $\nabla ^{(2)}\cdot {\frac {\partial f}{\partial \left(\nabla ^{(2)}\rho \right)}}=\sum _{\alpha ,\beta =1}^{3}\ {\frac {\partial ^{\,2}}{\partial r_{\alpha }\,\partial r_{\beta }}}\ {\frac {\partial f}{\partial \rho _{\alpha \beta }}}\qquad {\text{where}}\ \ \rho _{\alpha \beta }\equiv {\frac {\partial ^{\,2}\rho }{\partial r_{\alpha }\,\partial r_{\beta }}}\ .$ Footnotes 1. (Giaquinta & Hildebrandt 1996, p. 18) 2. (Gelfand & Fomin 2000, p. 11). 3. (Giaquinta & Hildebrandt 1996, p. 10). 4. (Giaquinta & Hildebrandt 1996, p. 10). 5. (Parr & Yang 1989, p. 246, Eq. A.2). 6. (Greiner & Reinhardt 1996, p. 36,37). 7. (Parr & Yang 1989, p. 246). 8. (Parr & Yang 1989, p. 247, Eq. A.3). 9. (Parr & Yang 1989, p. 247, Eq. A.4). 10. (Greiner & Reinhardt 1996, p. 38, Eq. 6). 11. (Greiner & Reinhardt 1996, p. 38, Eq. 7). 12. (Parr & Yang 1989, p. 247, Eq. A.6). 13. (Parr & Yang 1989, p. 248, Eq. A.11). 14. (Parr & Yang 1989, p. 247, Eq. A.9). 15. Greiner & Reinhardt 1996, p. 37 References • Courant, Richard; Hilbert, David (1953). "Chapter IV. The Calculus of Variations". Methods of Mathematical Physics. Vol. I (First English ed.). New York, New York: Interscience Publishers, Inc. pp. 164–274. ISBN 978-0471504474. MR 0065391. Zbl 0001.00501.. • Frigyik, Béla A.; Srivastava, Santosh; Gupta, Maya R. 
(January 2008), Introduction to Functional Derivatives (PDF), UWEE Tech Report, vol. UWEETR-2008-0001, Seattle, WA: Department of Electrical Engineering at the University of Washington, p. 7, archived from the original (PDF) on 2017-02-17, retrieved 2013-10-23. • Gelfand, I. M.; Fomin, S. V. (2000) [1963], Calculus of variations, translated and edited by Richard A. Silverman (Revised English ed.), Mineola, N.Y.: Dover Publications, ISBN 978-0486414485, MR 0160139, Zbl 0127.05402. • Giaquinta, Mariano; Hildebrandt, Stefan (1996), Calculus of Variations 1. The Lagrangian Formalism, Grundlehren der Mathematischen Wissenschaften, vol. 310 (1st ed.), Berlin: Springer-Verlag, ISBN 3-540-50625-X, MR 1368401, Zbl 0853.49001. • Greiner, Walter; Reinhardt, Joachim (1996), "Section 2.3 – Functional derivatives", Field quantization, With a foreword by D. A. Bromley, Berlin–Heidelberg–New York: Springer-Verlag, pp. 36–38, ISBN 3-540-59179-6, MR 1383589, Zbl 0844.00006. • Parr, R. G.; Yang, W. (1989). "Appendix A, Functionals". Density-Functional Theory of Atoms and Molecules. New York: Oxford University Press. pp. 246–254. ISBN 978-0195042795. External links • "Functional derivative", Encyclopedia of Mathematics, EMS Press, 2001 [1994]
Variational inequality In mathematics, a variational inequality is an inequality involving a functional, which has to be solved for all possible values of a given variable, belonging usually to a convex set. The mathematical theory of variational inequalities was initially developed to deal with equilibrium problems, specifically the Signorini problem: in that model problem, the functional involved was obtained as the first variation of the involved potential energy. Therefore, it has a variational origin, recalled by the name of the general abstract problem. The applicability of the theory has since been expanded to include problems from economics, finance, optimization and game theory. History The first problem involving a variational inequality was the Signorini problem, posed by Antonio Signorini in 1959 and solved by Gaetano Fichera in 1963, according to the references (Antman 1983, pp. 282–284) and (Fichera 1995): the first papers of the theory were (Fichera 1963) and (Fichera 1964a), (Fichera 1964b). Later on, Guido Stampacchia proved his generalization to the Lax–Milgram theorem in (Stampacchia 1964) in order to study the regularity problem for partial differential equations and coined the name "variational inequality" for all the problems involving inequalities of this kind. Georges Duvaut encouraged his graduate students to study and expand on Fichera's work, after attending a conference in Brixen in 1965 where Fichera presented his study of the Signorini problem, as Antman 1983, p. 283 reports: thus the theory became widely known throughout France. Also in 1965, Stampacchia and Jacques-Louis Lions extended earlier results of (Stampacchia 1964), announcing them in the paper (Lions & Stampacchia 1965): full proofs of their results appeared later in the paper (Lions & Stampacchia 1967). Definition Following Antman (1983, p. 283), the definition of a variational inequality is the following one. Definition 1. Given a Banach space ${\boldsymbol {E}}$, a subset ${\boldsymbol {K}}$ of ${\boldsymbol {E}}$, and a functional $F\colon {\boldsymbol {K}}\to {\boldsymbol {E}}^{\ast }$ from ${\boldsymbol {K}}$ to the dual space ${\boldsymbol {E}}^{\ast }$ of the space ${\boldsymbol {E}}$, the variational inequality problem is the problem of solving for the variable $x$ belonging to ${\boldsymbol {K}}$ the following inequality: $\langle F(x),y-x\rangle \geq 0\qquad \forall y\in {\boldsymbol {K}}$ where $\langle \cdot ,\cdot \rangle \colon {\boldsymbol {E}}^{\ast }\times {\boldsymbol {E}}\to \mathbb {R} $ is the duality pairing. In general, the variational inequality problem can be formulated on any finite- or infinite-dimensional Banach space. The three obvious steps in the study of the problem are the following ones: 1. Prove the existence of a solution: this step implies the mathematical correctness of the problem, showing that there is at least one solution. 2. Prove the uniqueness of the given solution: this step implies the physical correctness of the problem, showing that the solution can be used to represent a physical phenomenon. It is a particularly important step since most of the problems modeled by variational inequalities are of physical origin. 3. Find the solution or prove its regularity. Examples The problem of finding the minimal value of a real-valued function of a real variable This is a standard example problem, reported by Antman (1983, p. 283): consider the problem of finding the minimal value of a differentiable function $f$ over a closed interval $I=[a,b]$. 
Let $x^{\ast }$ be a point in $I$ where the minimum occurs. Three cases can occur: 1. if $a<x^{\ast }<b,$ then $f^{\prime }(x^{\ast })=0;$ 2. if $x^{\ast }=a,$ then $f^{\prime }(x^{\ast })\geq 0;$ 3. if $x^{\ast }=b,$ then $f^{\prime }(x^{\ast })\leq 0.$ These necessary conditions can be summarized as the problem of finding $x^{\ast }\in I$ such that $f^{\prime }(x^{\ast })(y-x^{\ast })\geq 0\qquad \forall y\in I.$ The absolute minimum must be sought among the solutions (if there is more than one) of the preceding inequality: note that the solution is a real number, therefore this is a finite-dimensional variational inequality. The general finite-dimensional variational inequality A formulation of the general problem in $\mathbb {R} ^{n}$ is the following: given a subset $K$ of $\mathbb {R} ^{n}$ and a mapping $F\colon K\to \mathbb {R} ^{n}$, the finite-dimensional variational inequality problem associated with $K$ consists of finding an $n$-dimensional vector $x$ belonging to $K$ such that $\langle F(x),y-x\rangle \geq 0\qquad \forall y\in K$ where $\langle \cdot ,\cdot \rangle \colon \mathbb {R} ^{n}\times \mathbb {R} ^{n}\to \mathbb {R} $ is the standard inner product on the vector space $\mathbb {R} ^{n}$. The variational inequality for the Signorini problem In the historical survey (Fichera 1995), Gaetano Fichera describes the genesis of his solution to the Signorini problem: the problem consists in finding the elastic equilibrium configuration ${\boldsymbol {u}}({\boldsymbol {x}})=\left(u_{1}({\boldsymbol {x}}),u_{2}({\boldsymbol {x}}),u_{3}({\boldsymbol {x}})\right)$ of an anisotropic non-homogeneous elastic body that lies in a subset $A$ of the three-dimensional Euclidean space whose boundary is $\partial A$, resting on a rigid frictionless surface and subject only to its mass forces. The solution $u$ of the problem exists and is unique (under precise assumptions) in the set of admissible displacements ${\mathcal {U}}_{\Sigma }$ i.e. 
the set of displacement vectors satisfying the system of ambiguous boundary conditions if and only if $B({\boldsymbol {u}},{\boldsymbol {v}}-{\boldsymbol {u}})-F({\boldsymbol {v}}-{\boldsymbol {u}})\geq 0\qquad \forall {\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$ where $B({\boldsymbol {u}},{\boldsymbol {v}})$ and $F({\boldsymbol {v}})$ are the following functionals, written using the Einstein notation $B({\boldsymbol {u}},{\boldsymbol {v}})=-\int _{A}\sigma _{ik}({\boldsymbol {u}})\varepsilon _{ik}({\boldsymbol {v}})\,\mathrm {d} x$,    $F({\boldsymbol {v}})=\int _{A}v_{i}f_{i}\,\mathrm {d} x+\int _{\partial A\setminus \Sigma }\!\!\!\!\!v_{i}g_{i}\,\mathrm {d} \sigma $,    ${\boldsymbol {u}},{\boldsymbol {v}}\in {\mathcal {U}}_{\Sigma }$ where, for all ${\boldsymbol {x}}\in A$, • $\Sigma $ is the contact surface (or more generally a contact set), • ${\boldsymbol {f}}({\boldsymbol {x}})=\left(f_{1}({\boldsymbol {x}}),f_{2}({\boldsymbol {x}}),f_{3}({\boldsymbol {x}})\right)$ is the body force applied to the body, • ${\boldsymbol {g}}({\boldsymbol {x}})=\left(g_{1}({\boldsymbol {x}}),g_{2}({\boldsymbol {x}}),g_{3}({\boldsymbol {x}})\right)$ is the surface force applied to $\partial A\!\setminus \!\Sigma $, • ${\boldsymbol {\varepsilon }}={\boldsymbol {\varepsilon }}({\boldsymbol {u}})=\left(\varepsilon _{ik}({\boldsymbol {u}})\right)=\left({\frac {1}{2}}\left({\frac {\partial u_{i}}{\partial x_{k}}}+{\frac {\partial u_{k}}{\partial x_{i}}}\right)\right)$ is the infinitesimal strain tensor, • ${\boldsymbol {\sigma }}=\left(\sigma _{ik}\right)$ is the Cauchy stress tensor, defined as $\sigma _{ik}=-{\frac {\partial W}{\partial \varepsilon _{ik}}}\qquad \forall i,k=1,2,3$ where $W({\boldsymbol {\varepsilon }})=a_{ikjh}({\boldsymbol {x}})\varepsilon _{ik}\varepsilon _{jh}$ is the elastic potential energy and ${\boldsymbol {a}}({\boldsymbol {x}})=\left(a_{ikjh}({\boldsymbol {x}})\right)$ is the elasticity tensor. See also • Complementarity theory • Differential variational inequality • Extended Mathematical Programming for Equilibrium Problems • Mathematical programming with equilibrium constraints • Obstacle problem • Projected dynamical system • Signorini problem • Unilateral contact References Historical references • Antman, Stuart (1983), "The influence of elasticity in analysis: modern developments", Bulletin of the American Mathematical Society, 9 (3): 267–291, doi:10.1090/S0273-0979-1983-15185-6, MR 0714990, Zbl 0533.73001. An historical paper about the fruitful interaction of elasticity theory and mathematical analysis: the creation of the theory of variational inequalities by Gaetano Fichera is described in §5, pages 282–284. • Duvaut, Georges (1971), "Problèmes unilatéraux en mécanique des milieux continus", Actes du Congrès international des mathématiciens, 1970, ICM Proceedings, vol. Mathématiques appliquées (E), Histoire et Enseignement (F) – Volume 3, Paris: Gauthier-Villars, pp. 71–78, archived from the original (PDF) on 2015-07-25, retrieved 2015-07-25. A brief research survey describing the field of variational inequalities, precisely the sub-field of continuum mechanics problems with unilateral constraints. • Fichera, Gaetano (1995), "La nascita della teoria delle disequazioni variazionali ricordata dopo trent'anni", Incontro scientifico italo-spagnolo. Roma, 21 ottobre 1993, Atti dei Convegni Lincei (in Italian), vol. 114, Roma: Accademia Nazionale dei Lincei, pp. 47–53. 
The birth of the theory of variational inequalities remembered thirty years later (English translation of the title) is an historical paper describing the beginning of the theory of variational inequalities from the point of view of its founder. Scientific works • Facchinei, Francisco; Pang, Jong-Shi (2003), Finite Dimensional Variational Inequalities and Complementarity Problems, Vol. 1, Springer Series in Operations Research, Berlin–Heidelberg–New York: Springer-Verlag, ISBN 0-387-95580-1, Zbl 1062.90001 • Facchinei, Francisco; Pang, Jong-Shi (2003), Finite Dimensional Variational Inequalities and Complementarity Problems, Vol. 2, Springer Series in Operations Research, Berlin–Heidelberg–New York: Springer-Verlag, ISBN 0-387-95581-X, Zbl 1062.90001 • Fichera, Gaetano (1963), "Sul problema elastostatico di Signorini con ambigue condizioni al contorno" [On the elastostatic problem of Signorini with ambiguous boundary conditions], Rendiconti della Accademia Nazionale dei Lincei, Classe di Scienze Fisiche, Matematiche e Naturali, 8 (in Italian), 34 (2): 138–142, MR 0176661, Zbl 0128.18305. A short research note announcing and describing (without proofs) the solution of the Signorini problem. • Fichera, Gaetano (1964a), "Problemi elastostatici con vincoli unilaterali: il problema di Signorini con ambigue condizioni al contorno" [Elastostatic problems with unilateral constraints: the Signorini problem with ambiguous boundary conditions], Memorie della Accademia Nazionale dei Lincei, Classe di Scienze Fisiche, Matematiche e Naturali, 8 (in Italian), 7 (2): 91–140, Zbl 0146.21204. The first paper where an existence and uniqueness theorem for the Signorini problem is proved. • Fichera, Gaetano (1964b), "Elastostatic problems with unilateral constraints: the Signorini problem with ambiguous boundary conditions", Seminari dell'istituto Nazionale di Alta Matematica 1962–1963, Rome: Edizioni Cremonese, pp. 613–679. An English translation of (Fichera 1964a). • Glowinski, Roland; Lions, Jacques-Louis; Trémolières, Raymond (1981), Numerical analysis of variational inequalities. Translated from the French, Studies in Mathematics and its Applications, vol. 8, Amsterdam–New York–Oxford: North-Holland, pp. xxix+776, ISBN 0-444-86199-8, MR 0635927, Zbl 0463.65046 • Kinderlehrer, David; Stampacchia, Guido (1980), An Introduction to Variational Inequalities and Their Applications, Pure and Applied Mathematics, vol. 88, Boston–London–New York–San Diego–Sydney–Tokyo–Toronto: Academic Press, ISBN 0-89871-466-4, Zbl 0457.35001. • Lions, Jacques-Louis; Stampacchia, Guido (1965), "Inéquations variationnelles non coercives", Comptes rendus hebdomadaires des séances de l'Académie des sciences, 261: 25–27, Zbl 0136.11906, available at Gallica. Announcements of the results of paper (Lions & Stampacchia 1967). • Lions, Jacques-Louis; Stampacchia, Guido (1967), "Variational inequalities", Communications on Pure and Applied Mathematics, 20 (3): 493–519, doi:10.1002/cpa.3160200302, Zbl 0152.34601, archived from the original on 2013-01-05. An important paper, describing the abstract approach of the authors to the theory of variational inequalities. • Roubíček, Tomáš (2013), Nonlinear Partial Differential Equations with Applications, ISNM. International Series of Numerical Mathematics, vol. 153 (2nd ed.), Basel–Boston–Berlin: Birkhäuser Verlag, pp. xx+476, doi:10.1007/978-3-0348-0513-1, ISBN 978-3-0348-0512-4, MR 3014456, Zbl 1270.35005. 
• Stampacchia, Guido (1964), "Formes bilineaires coercitives sur les ensembles convexes", Comptes rendus hebdomadaires des séances de l'Académie des sciences, 258: 4413–4416, Zbl 0124.06401, available at Gallica. The paper containing Stampacchia's generalization of the Lax–Milgram theorem. External links • Panagiotopoulos, P.D. (2001) [1994], "Variational inequalities", Encyclopedia of Mathematics, EMS Press • Alessio Figalli, On global homogeneous solutions to the Signorini problem.
Variational Bayesian methods
Variational Bayesian methods are a family of techniques for approximating intractable integrals arising in Bayesian inference and machine learning. They are typically used in complex statistical models consisting of observed variables (usually termed "data") as well as unknown parameters and latent variables, with various sorts of relationships among the three types of random variables, as might be described by a graphical model. As is typical in Bayesian inference, the parameters and latent variables are grouped together as "unobserved variables". Variational Bayesian methods are primarily used for two purposes:
1. To provide an analytical approximation to the posterior probability of the unobserved variables, in order to do statistical inference over these variables.
2. To derive a lower bound for the marginal likelihood (sometimes called the evidence) of the observed data (i.e. the marginal probability of the data given the model, with marginalization performed over unobserved variables). This is typically used for performing model selection, the general idea being that a higher marginal likelihood for a given model indicates a better fit of the data by that model and hence a greater probability that the model in question was the one that generated the data. (See also the Bayes factor article.)
In the former purpose (that of approximating a posterior probability), variational Bayes is an alternative to Monte Carlo sampling methods (particularly Markov chain Monte Carlo methods such as Gibbs sampling) for taking a fully Bayesian approach to statistical inference over complex distributions that are difficult to evaluate directly or sample from. In particular, whereas Monte Carlo techniques provide a numerical approximation to the exact posterior using a set of samples, variational Bayes provides a locally optimal, exact analytical solution to an approximation of the posterior. Variational Bayes can be seen as an extension of the expectation-maximization (EM) algorithm from maximum a posteriori estimation (MAP estimation) of the single most probable value of each parameter to fully Bayesian estimation which computes (an approximation to) the entire posterior distribution of the parameters and latent variables. As in EM, it finds a set of optimal parameter values, and it has the same alternating structure as EM, based on a set of interlocked (mutually dependent) equations that cannot be solved analytically. For many applications, variational Bayes produces solutions of comparable accuracy to Gibbs sampling at greater speed. 
However, deriving the set of equations used to update the parameters iteratively often requires a large amount of work compared with deriving the comparable Gibbs sampling equations. This is the case even for many models that are conceptually quite simple, as is demonstrated below in the case of a basic non-hierarchical model with only two parameters and no latent variables. Mathematical derivation Problem In variational inference, the posterior distribution over a set of unobserved variables $\mathbf {Z} =\{Z_{1}\dots Z_{n}\}$ given some data $\mathbf {X} $ is approximated by a so-called variational distribution, $Q(\mathbf {Z} ):$ $P(\mathbf {Z} \mid \mathbf {X} )\approx Q(\mathbf {Z} ).$ The distribution $Q(\mathbf {Z} )$ is restricted to belong to a family of distributions of simpler form than $P(\mathbf {Z} \mid \mathbf {X} )$ (e.g. a family of Gaussian distributions), selected with the intention of making $Q(\mathbf {Z} )$ similar to the true posterior, $P(\mathbf {Z} \mid \mathbf {X} )$. The similarity (or dissimilarity) is measured in terms of a dissimilarity function $d(Q;P)$ and hence inference is performed by selecting the distribution $Q(\mathbf {Z} )$ that minimizes $d(Q;P)$. KL divergence The most common type of variational Bayes uses the Kullback–Leibler divergence (KL-divergence) of Q from P as the choice of dissimilarity function. This choice makes this minimization tractable. The KL-divergence is defined as $D_{\mathrm {KL} }(Q\parallel P)\triangleq \sum _{\mathbf {Z} }Q(\mathbf {Z} )\log {\frac {Q(\mathbf {Z} )}{P(\mathbf {Z} \mid \mathbf {X} )}}.$ Note that Q and P are reversed from what one might expect. This use of reversed KL-divergence is conceptually similar to the expectation-maximization algorithm. (Using the KL-divergence in the other way produces the expectation propagation algorithm.) Intractability Variational techniques are typically used to form an approximation for: $P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} \mid \mathbf {Z} )P(\mathbf {Z} )}{P(\mathbf {X} )}}={\frac {P(\mathbf {X} \mid \mathbf {Z} )P(\mathbf {Z} )}{\int _{\mathbf {Z} }P(\mathbf {X} ,\mathbf {Z} ')\,d\mathbf {Z} '}}$ The marginalization over $\mathbf {Z} $ to calculate $P(\mathbf {X} )$ in the denominator is typically intractable, because, for example, the search space of $\mathbf {Z} $ is combinatorially large. Therefore, we seek an approximation, using $Q(\mathbf {Z} )\approx P(\mathbf {Z} \mid \mathbf {X} )$. 
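As a concrete illustration of the KL-divergence defined above (a toy sketch, not part of the article's derivation; the distributions and their values are arbitrary), the following snippet computes the divergence between two small discrete distributions and shows that it is not symmetric, which is why the "reversed" direction matters:

```python
import numpy as np

# Toy illustration of the KL divergence used above, for discrete distributions.
# q plays the role of Q(Z) and p the role of P(Z | X); the numbers are arbitrary.
q = np.array([0.6, 0.3, 0.1])
p = np.array([0.4, 0.4, 0.2])

def kl(a, b):
    """D_KL(a || b) = sum_z a(z) * log(a(z) / b(z)) for discrete distributions."""
    return float(np.sum(a * np.log(a / b)))

print("KL(Q||P) =", kl(q, p))  # the 'reverse' KL minimized by variational Bayes
print("KL(P||Q) =", kl(p, q))  # generally different: the KL divergence is not symmetric
```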
Evidence lower bound Main article: Evidence lower bound Given that $P(\mathbf {Z} \mid \mathbf {X} )={\frac {P(\mathbf {X} ,\mathbf {Z} )}{P(\mathbf {X} )}}$, the KL-divergence above can also be written as $D_{\mathrm {KL} }(Q\parallel P)=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log {\frac {Q(\mathbf {Z} )}{P(\mathbf {Z} ,\mathbf {X} )}}+\log P(\mathbf {X} )\right]=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log P(\mathbf {X} )\right]$ Because $P(\mathbf {X} )$ is a constant with respect to $\mathbf {Z} $ and $\sum _{\mathbf {Z} }Q(\mathbf {Z} )=1$ because $Q(\mathbf {Z} )$ is a distribution, we have $D_{\mathrm {KL} }(Q\parallel P)=\sum _{\mathbf {Z} }Q(\mathbf {Z} )\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\log P(\mathbf {X} )$ which, according to the definition of expected value (for a discrete random variable), can be written as follows $D_{\mathrm {KL} }(Q\parallel P)=\mathbb {E} _{\mathbf {Q} }\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]+\log P(\mathbf {X} )$ which can be rearranged to become $\log P(\mathbf {X} )=D_{\mathrm {KL} }(Q\parallel P)-\mathbb {E} _{\mathbf {Q} }\left[\log Q(\mathbf {Z} )-\log P(\mathbf {Z} ,\mathbf {X} )\right]=D_{\mathrm {KL} }(Q\parallel P)+{\mathcal {L}}(Q)$ As the log-evidence $\log P(\mathbf {X} )$ is fixed with respect to $Q$, maximizing the final term ${\mathcal {L}}(Q)$ minimizes the KL divergence of $Q$ from $P$. By appropriate choice of $Q$, ${\mathcal {L}}(Q)$ becomes tractable to compute and to maximize. Hence we have both an analytical approximation $Q$ for the posterior $P(\mathbf {Z} \mid \mathbf {X} )$, and a lower bound ${\mathcal {L}}(Q)$ for the log-evidence $\log P(\mathbf {X} )$ (since the KL-divergence is non-negative). The lower bound ${\mathcal {L}}(Q)$ is known as the (negative) variational free energy in analogy with thermodynamic free energy because it can also be expressed as a negative energy $\operatorname {E} _{Q}[\log P(\mathbf {Z} ,\mathbf {X} )]$ plus the entropy of $Q$. The term ${\mathcal {L}}(Q)$ is also known as Evidence Lower Bound, abbreviated as ELBO, to emphasize that it is a lower bound on the log-evidence of the data. 
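The identity $\log P(\mathbf {X} )=D_{\mathrm {KL} }(Q\parallel P)+{\mathcal {L}}(Q)$ can be checked numerically on a toy discrete example. The following sketch (arbitrary toy numbers, not taken from the text) builds a small joint distribution $P(\mathbf {Z} ,\mathbf {X} =x)$, picks an arbitrary $Q(\mathbf {Z} )$, and verifies that the ELBO and the KL-divergence sum to the log-evidence:

```python
import numpy as np

# Toy check of the identity  log P(X) = D_KL(Q || P(.|X)) + L(Q)
# for a discrete latent variable Z with 3 states and a fixed observation X = x.
# p_joint[z] stands for P(Z = z, X = x); the numbers are arbitrary.
p_joint = np.array([0.10, 0.05, 0.15])      # P(Z, X = x), need not sum to 1
q = np.array([0.5, 0.2, 0.3])               # an arbitrary variational Q(Z)

log_evidence = np.log(p_joint.sum())        # log P(X = x)
posterior = p_joint / p_joint.sum()         # P(Z | X = x)

elbo = np.sum(q * (np.log(p_joint) - np.log(q)))   # L(Q) = E_Q[log P(Z,X) - log Q(Z)]
kl = np.sum(q * (np.log(q) - np.log(posterior)))   # D_KL(Q || P(Z|X))

print(log_evidence, elbo + kl)   # identical up to floating-point error
print(elbo <= log_evidence)      # the ELBO is a lower bound: True
```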
Proofs By the generalized Pythagorean theorem of Bregman divergence, of which KL-divergence is a special case, it can be shown that:[1][2] $D_{\mathrm {KL} }(Q\parallel P)\geq D_{\mathrm {KL} }(Q\parallel Q^{*})+D_{\mathrm {KL} }(Q^{*}\parallel P),\forall Q^{*}\in {\mathcal {C}}$ where ${\mathcal {C}}$ is a convex set and the equality holds if: $Q=Q^{*}\triangleq \arg \min _{Q\in {\mathcal {C}}}D_{\mathrm {KL} }(Q\parallel P).$ In this case, the global minimizer $Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})q^{*}(\mathbf {Z} _{2})=q^{*}(\mathbf {Z} _{2}\mid \mathbf {Z} _{1})q^{*}(\mathbf {Z} _{1}),$ with $\mathbf {Z} =\{\mathbf {Z_{1}} ,\mathbf {Z_{2}} \},$ can be found as follows:[1] $q^{*}(\mathbf {Z} _{2})={\frac {P(\mathbf {X} )}{\zeta (\mathbf {X} )}}{\frac {P(\mathbf {Z} _{2}\mid \mathbf {X} )}{\exp(D_{\mathrm {KL} }(q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})\parallel P(\mathbf {Z} _{1}\mid \mathbf {Z} _{2},\mathbf {X} )))}}={\frac {1}{\zeta (\mathbf {X} )}}\exp \mathbb {E} _{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}\left(\log {\frac {P(\mathbf {Z} ,\mathbf {X} )}{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}}\right),$ in which the normalizing constant is: $\zeta (\mathbf {X} )=P(\mathbf {X} )\int _{\mathbf {Z} _{2}}{\frac {P(\mathbf {Z} _{2}\mid \mathbf {X} )}{\exp(D_{\mathrm {KL} }(q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})\parallel P(\mathbf {Z} _{1}\mid \mathbf {Z} _{2},\mathbf {X} )))}}=\int _{\mathbf {Z} _{2}}\exp \mathbb {E} _{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}\left(\log {\frac {P(\mathbf {Z} ,\mathbf {X} )}{q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})}}\right).$ The term $\zeta (\mathbf {X} )$ is often called the evidence lower bound (ELBO) in practice, since $P(\mathbf {X} )\geq \zeta (\mathbf {X} )=\exp({\mathcal {L}}(Q^{*}))$,[1] as shown above. By interchanging the roles of $\mathbf {Z} _{1}$ and $\mathbf {Z} _{2},$ we can iteratively compute the approximated $q^{*}(\mathbf {Z} _{1})$ and $q^{*}(\mathbf {Z} _{2})$ of the true model's marginals $P(\mathbf {Z} _{1}\mid \mathbf {X} )$ and $P(\mathbf {Z} _{2}\mid \mathbf {X} ),$ respectively. Although this iterative scheme is guaranteed to converge monotonically,[1] the converged $Q^{*}$ is only a local minimizer of $D_{\mathrm {KL} }(Q\parallel P)$. If the constrained space ${\mathcal {C}}$ is confined within independent space, i.e. $q^{*}(\mathbf {Z} _{1}\mid \mathbf {Z} _{2})=q^{*}(\mathbf {Z_{1}} ),$the above iterative scheme will become the so-called mean field approximation $Q^{*}(\mathbf {Z} )=q^{*}(\mathbf {Z} _{1})q^{*}(\mathbf {Z} _{2}),$as shown below. Mean field approximation The variational distribution $Q(\mathbf {Z} )$ is usually assumed to factorize over some partition of the latent variables, i.e. 
for some partition of the latent variables $\mathbf {Z} $ into $\mathbf {Z} _{1}\dots \mathbf {Z} _{M}$, $Q(\mathbf {Z} )=\prod _{i=1}^{M}q_{i}(\mathbf {Z} _{i}\mid \mathbf {X} )$ It can be shown using the calculus of variations (hence the name "variational Bayes") that the "best" distribution $q_{j}^{*}$ for each of the factors $q_{j}$ (in terms of the distribution minimizing the KL divergence, as described above) satisfies:[3] $q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )={\frac {e^{\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}}{\int e^{\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]}\,d\mathbf {Z} _{j}}}$ where $\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]$ is the expectation of the logarithm of the joint probability of the data and latent variables, taken with respect to $q^{*}$ over all variables not in the partition: refer to Lemma 4.1 of[4] for a derivation of the distribution $q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )$. In practice, we usually work in terms of logarithms, i.e.: $\ln q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )=\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]+{\text{constant}}$ The constant in the above expression is related to the normalizing constant (the denominator in the expression above for $q_{j}^{*}$) and is usually reinstated by inspection, as the rest of the expression can usually be recognized as being a known type of distribution (e.g. Gaussian, gamma, etc.). Using the properties of expectations, the expression $\operatorname {E} _{q_{-j}^{*}}[\ln p(\mathbf {Z} ,\mathbf {X} )]$ can usually be simplified into a function of the fixed hyperparameters of the prior distributions over the latent variables and of expectations (and sometimes higher moments such as the variance) of latent variables not in the current partition (i.e. latent variables not included in $\mathbf {Z} _{j}$). This creates circular dependencies between the parameters of the distributions over variables in one partition and the expectations of variables in the other partitions. This naturally suggests an iterative algorithm, much like EM (the expectation-maximization algorithm), in which the expectations (and possibly higher moments) of the latent variables are initialized in some fashion (perhaps randomly), and then the parameters of each distribution are computed in turn using the current values of the expectations, after which the expectation of the newly computed distribution is set appropriately according to the computed parameters. An algorithm of this sort is guaranteed to converge.[5] In other words, for each of the partitions of variables, by simplifying the expression for the distribution over the partition's variables and examining the distribution's functional dependency on the variables in question, the family of the distribution can usually be determined (which in turn determines the value of the constant). The formula for the distribution's parameters will be expressed in terms of the prior distributions' hyperparameters (which are known constants), but also in terms of expectations of functions of variables in other partitions. Usually these expectations can be simplified into functions of expectations of the variables themselves (i.e. the means); sometimes expectations of squared variables (which can be related to the variance of the variables), or expectations of higher powers (i.e. higher moments) also appear. 
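To make the update formula and the resulting iterative algorithm concrete, the following toy sketch (a constructed example, not taken from the references) applies the mean-field updates to two binary latent variables with a fixed observation, alternating the two factor updates until they stop changing:

```python
import numpy as np

# Mean-field coordinate ascent for two binary latent variables Z1, Z2.
# p[z1, z2] stands for the (unnormalized) joint p(Z1, Z2, X = x); values are arbitrary.
p = np.array([[0.30, 0.05],
              [0.10, 0.25]])
log_p = np.log(p)

q1 = np.array([0.5, 0.5])   # initial factor q1(Z1)
q2 = np.array([0.5, 0.5])   # initial factor q2(Z2)

for _ in range(100):
    # ln q1*(z1) = E_{q2}[ln p(z1, Z2, x)] + const
    log_q1 = log_p @ q2
    q1 = np.exp(log_q1 - log_q1.max())
    q1 /= q1.sum()
    # ln q2*(z2) = E_{q1}[ln p(Z1, z2, x)] + const
    log_q2 = q1 @ log_p
    q2 = np.exp(log_q2 - log_q2.max())
    q2 /= q2.sum()

posterior = p / p.sum()            # exact p(Z1, Z2 | x)
print(np.outer(q1, q2))            # factorized approximation q1(Z1) q2(Z2)
print(posterior)                   # generally close, but not identical
```

Because the true posterior generally does not factorize, the converged product $q_{1}q_{2}$ only approximates it.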
In most cases, the other variables' distributions will be from known families, and the formulas for the relevant expectations can be looked up. However, those formulas depend on those distributions' parameters, which depend in turn on the expectations about other variables. The result is that the formulas for the parameters of each variable's distributions can be expressed as a series of equations with mutual, nonlinear dependencies among the variables. Usually, it is not possible to solve this system of equations directly. However, as described above, the dependencies suggest a simple iterative algorithm, which in most cases is guaranteed to converge. An example will make this process clearer. A duality formula for variational inference The following theorem is referred to as a duality formula for variational inference.[4] It explains some important properties of the variational distributions used in variational Bayes methods. Theorem Consider two probability spaces $(\Theta ,{\mathcal {F}},P)$ and $(\Theta ,{\mathcal {F}},Q)$ with $Q\ll P$. Assume that there is a common dominating probability measure $\lambda $ such that $P\ll \lambda $ and $Q\ll \lambda $. Let $h$ denote any real-valued random variable on $(\Theta ,{\mathcal {F}},P)$ that satisfies $h\in L_{1}(P)$. Then the following equality holds $\log E_{P}[\exp h]={\text{sup}}_{Q\ll P}\{E_{Q}[h]-D_{\text{KL}}(Q\parallel P)\}.$ Further, the supremum on the right-hand side is attained if and only if it holds ${\frac {q(\theta )}{p(\theta )}}={\frac {\exp h(\theta )}{E_{P}[\exp h]}},$ almost surely with respect to probability measure $Q$, where $p(\theta )=dP/d\lambda $ and $q(\theta )=dQ/d\lambda $ denote the Radon-Nikodym derivatives of the probability measures $P$ and $Q$ with respect to $\lambda $, respectively. A basic example Consider a simple non-hierarchical Bayesian model consisting of a set of i.i.d. observations from a Gaussian distribution, with unknown mean and variance.[6] In the following, we work through this model in great detail to illustrate the workings of the variational Bayes method. For mathematical convenience, in the following example we work in terms of the precision — i.e. the reciprocal of the variance (or in a multivariate Gaussian, the inverse of the covariance matrix) — rather than the variance itself. (From a theoretical standpoint, precision and variance are equivalent since there is a one-to-one correspondence between the two.) The mathematical model We place conjugate prior distributions on the unknown mean $\mu $ and precision $\tau $, i.e. the mean also follows a Gaussian distribution while the precision follows a gamma distribution. In other words: ${\begin{aligned}\tau &\sim \operatorname {Gamma} (a_{0},b_{0})\\\mu |\tau &\sim {\mathcal {N}}(\mu _{0},(\lambda _{0}\tau )^{-1})\\\{x_{1},\dots ,x_{N}\}&\sim {\mathcal {N}}(\mu ,\tau ^{-1})\\N&={\text{number of data points}}\end{aligned}}$ The hyperparameters $\mu _{0},\lambda _{0},a_{0}$ and $b_{0}$ in the prior distributions are fixed, given values. They can be set to small positive numbers to give broad prior distributions indicating ignorance about the prior distributions of $\mu $ and $\tau $. 
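Before working through the derivation, it may help to see where it leads. The following sketch (with illustrative hyperparameter values and variable names) draws synthetic data from this model and iterates the closed-form coordinate-ascent updates that are derived in the subsections below:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from the generative model above (illustrative parameter values).
N = 100
true_mu, true_tau = 2.0, 4.0                       # precision tau = 1 / variance
x = rng.normal(true_mu, 1.0 / np.sqrt(true_tau), size=N)

# Broad priors: mu | tau ~ N(mu0, (lambda0 * tau)^-1),  tau ~ Gamma(a0, b0)
mu0, lambda0, a0, b0 = 0.0, 1e-3, 1e-3, 1e-3

# Coordinate-ascent updates (the closed-form results derived below; only
# lambda_N and b_N need to be iterated).
x_bar, sum_x, sum_x2 = x.mean(), x.sum(), (x ** 2).sum()
mu_N = (lambda0 * mu0 + N * x_bar) / (lambda0 + N)
a_N = a0 + (N + 1) / 2
lambda_N = 1.0                                     # arbitrary initialization
for _ in range(100):
    b_N = b0 + 0.5 * ((lambda0 + N) * (1.0 / lambda_N + mu_N ** 2)
                      - 2.0 * (lambda0 * mu0 + sum_x) * mu_N
                      + sum_x2 + lambda0 * mu0 ** 2)
    lambda_N = (lambda0 + N) * a_N / b_N

print("posterior mean of mu :", mu_N)              # close to true_mu
print("posterior mean of tau:", a_N / b_N)         # close to true_tau
```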
We are given $N$ data points $\mathbf {X} =\{x_{1},\ldots ,x_{N}\}$ and our goal is to infer the posterior distribution $q(\mu ,\tau )=p(\mu ,\tau \mid x_{1},\ldots ,x_{N})$ of the parameters $\mu $ and $\tau .$ The joint probability The joint probability of all variables can be rewritten as $p(\mathbf {X} ,\mu ,\tau )=p(\mathbf {X} \mid \mu ,\tau )p(\mu \mid \tau )p(\tau )$ where the individual factors are ${\begin{aligned}p(\mathbf {X} \mid \mu ,\tau )&=\prod _{n=1}^{N}{\mathcal {N}}(x_{n}\mid \mu ,\tau ^{-1})\\p(\mu \mid \tau )&={\mathcal {N}}\left(\mu \mid \mu _{0},(\lambda _{0}\tau )^{-1}\right)\\p(\tau )&=\operatorname {Gamma} (\tau \mid a_{0},b_{0})\end{aligned}}$ where ${\begin{aligned}{\mathcal {N}}(x\mid \mu ,\sigma ^{2})&={\frac {1}{\sqrt {2\pi \sigma ^{2}}}}e^{\frac {-(x-\mu )^{2}}{2\sigma ^{2}}}\\\operatorname {Gamma} (\tau \mid a,b)&={\frac {1}{\Gamma (a)}}b^{a}\tau ^{a-1}e^{-b\tau }\end{aligned}}$ Factorized approximation Assume that $q(\mu ,\tau )=q(\mu )q(\tau )$, i.e. that the posterior distribution factorizes into independent factors for $\mu $ and $\tau $. This type of assumption underlies the variational Bayesian method. The true posterior distribution does not in fact factor this way (in fact, in this simple case, it is known to be a Gaussian-gamma distribution), and hence the result we obtain will be an approximation. Derivation of q(μ) Then ${\begin{aligned}\ln q_{\mu }^{*}(\mu )&=\operatorname {E} _{\tau }\left[\ln p(\mathbf {X} \mid \mu ,\tau )+\ln p(\mu \mid \tau )+\ln p(\tau )\right]+C\\&=\operatorname {E} _{\tau }\left[\ln p(\mathbf {X} \mid \mu ,\tau )\right]+\operatorname {E} _{\tau }\left[\ln p(\mu \mid \tau )\right]+\operatorname {E} _{\tau }\left[\ln p(\tau )\right]+C\\&=\operatorname {E} _{\tau }\left[\ln \prod _{n=1}^{N}{\mathcal {N}}\left(x_{n}\mid \mu ,\tau ^{-1}\right)\right]+\operatorname {E} _{\tau }\left[\ln {\mathcal {N}}\left(\mu \mid \mu _{0},(\lambda _{0}\tau )^{-1}\right)\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\ln \prod _{n=1}^{N}{\sqrt {\frac {\tau }{2\pi }}}e^{-{\frac {(x_{n}-\mu )^{2}\tau }{2}}}\right]+\operatorname {E} _{\tau }\left[\ln {\sqrt {\frac {\lambda _{0}\tau }{2\pi }}}e^{-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}}\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}\left({\frac {1}{2}}(\ln \tau -\ln 2\pi )-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right)\right]+\operatorname {E} _{\tau }\left[{\frac {1}{2}}(\ln \lambda _{0}+\ln \tau -\ln 2\pi )-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}{\frac {1}{2}}(\ln \tau -\ln 2\pi )\right]+\operatorname {E} _{\tau }\left[{\frac {1}{2}}(\ln \lambda _{0}+\ln \tau -\ln 2\pi )\right]+C_{2}\\&=\operatorname {E} _{\tau }\left[\sum _{n=1}^{N}-{\frac {(x_{n}-\mu )^{2}\tau }{2}}\right]+\operatorname {E} _{\tau }\left[-{\frac {(\mu -\mu _{0})^{2}\lambda _{0}\tau }{2}}\right]+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right\}+C_{3}\end{aligned}}$ In the above derivation, $C$, $C_{2}$ and $C_{3}$ refer to values that are constant with respect to $\mu $. Note that the term $\operatorname {E} _{\tau }[\ln p(\tau )]$ is not a function of $\mu $ and will have the same value regardless of the value of $\mu $. 
Hence in line 3 we can absorb it into the constant term at the end. We do the same thing in line 7. The last line is simply a quadratic polynomial in $\mu $. Since this is the logarithm of $q_{\mu }^{*}(\mu )$, we can see that $q_{\mu }^{*}(\mu )$ itself is a Gaussian distribution. With a certain amount of tedious math (expanding the squares inside of the braces, separating out and grouping the terms involving $\mu $ and $\mu ^{2}$ and completing the square over $\mu $), we can derive the parameters of the Gaussian distribution: ${\begin{aligned}\ln q_{\mu }^{*}(\mu )&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\sum _{n=1}^{N}(x_{n}^{2}-2x_{n}\mu +\mu ^{2})+\lambda _{0}(\mu ^{2}-2\mu _{0}\mu +\mu _{0}^{2})\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{\left(\sum _{n=1}^{N}x_{n}^{2}\right)-2\left(\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}\mu ^{2}\right)+\lambda _{0}\mu ^{2}-2\lambda _{0}\mu _{0}\mu +\lambda _{0}\mu _{0}^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right\}+C_{3}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu \right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)(\lambda _{0}+N)\mu \right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu \right)\right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu +\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}-\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right)\right\}+C_{4}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu ^{2}-2\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)\mu +\left({\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right)\right\}+C_{5}\\&=-{\frac {\operatorname {E} _{\tau }[\tau ]}{2}}\left\{(\lambda _{0}+N)\left(\mu -{\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}\right\}+C_{5}\\&=-{\frac {1}{2}}(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\left(\mu -{\frac {\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}}{\lambda _{0}+N}}\right)^{2}+C_{5}\end{aligned}}$ Note that all of the above steps can be shortened by using the formula for the sum of two quadratics. In other words: ${\begin{aligned}q_{\mu }^{*}(\mu )&\sim {\mathcal {N}}(\mu \mid \mu _{N},\lambda _{N}^{-1})\\\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\end{aligned}}$ Derivation of q(τ) The derivation of $q_{\tau }^{*}(\tau )$ is similar to above, although we omit some of the details for the sake of brevity. 
${\begin{aligned}\ln q_{\tau }^{*}(\tau )&=\operatorname {E} _{\mu }[\ln p(\mathbf {X} \mid \mu ,\tau )+\ln p(\mu \mid \tau )]+\ln p(\tau )+{\text{constant}}\\&=(a_{0}-1)\ln \tau -b_{0}\tau +{\frac {1}{2}}\ln \tau +{\frac {N}{2}}\ln \tau -{\frac {\tau }{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]+{\text{constant}}\end{aligned}}$ Exponentiating both sides, we can see that $q_{\tau }^{*}(\tau )$ is a gamma distribution. Specifically: ${\begin{aligned}q_{\tau }^{*}(\tau )&\sim \operatorname {Gamma} (\tau \mid a_{N},b_{N})\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\end{aligned}}$ Algorithm for computing the parameters Let us recap the conclusions from the previous sections: ${\begin{aligned}q_{\mu }^{*}(\mu )&\sim {\mathcal {N}}(\mu \mid \mu _{N},\lambda _{N}^{-1})\\\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N)\operatorname {E} _{\tau }[\tau ]\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\end{aligned}}$ and ${\begin{aligned}q_{\tau }^{*}(\tau )&\sim \operatorname {Gamma} (\tau \mid a_{N},b_{N})\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\end{aligned}}$ In each case, the parameters for the distribution over one of the variables depend on expectations taken with respect to the other variable. We can expand the expectations, using the standard formulas for the expectations of moments of the Gaussian and gamma distributions: ${\begin{aligned}\operatorname {E} [\tau \mid a_{N},b_{N}]&={\frac {a_{N}}{b_{N}}}\\\operatorname {E} \left[\mu \mid \mu _{N},\lambda _{N}^{-1}\right]&=\mu _{N}\\\operatorname {E} \left[X^{2}\right]&=\operatorname {Var} (X)+(\operatorname {E} [X])^{2}\\\operatorname {E} \left[\mu ^{2}\mid \mu _{N},\lambda _{N}^{-1}\right]&=\lambda _{N}^{-1}+\mu _{N}^{2}\end{aligned}}$ Applying these formulas to the above equations is trivial in most cases, but the equation for $b_{N}$ takes more work: ${\begin{aligned}b_{N}&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[\sum _{n=1}^{N}(x_{n}-\mu )^{2}+\lambda _{0}(\mu -\mu _{0})^{2}\right]\\&=b_{0}+{\frac {1}{2}}\operatorname {E} _{\mu }\left[(\lambda _{0}+N)\mu ^{2}-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu +\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\operatorname {E} _{\mu }[\mu ^{2}]-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\operatorname {E} _{\mu }[\mu ]+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\left(\lambda _{N}^{-1}+\mu _{N}^{2}\right)-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu _{N}+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu _{0}^{2}\right]\\\end{aligned}}$ We can then write the parameter equations as follows, without any expectations: ${\begin{aligned}\mu _{N}&={\frac {\lambda _{0}\mu _{0}+N{\bar {x}}}{\lambda _{0}+N}}\\\lambda _{N}&=(\lambda _{0}+N){\frac {a_{N}}{b_{N}}}\\{\bar {x}}&={\frac {1}{N}}\sum _{n=1}^{N}x_{n}\\a_{N}&=a_{0}+{\frac {N+1}{2}}\\b_{N}&=b_{0}+{\frac {1}{2}}\left[(\lambda _{0}+N)\left(\lambda _{N}^{-1}+\mu _{N}^{2}\right)-2\left(\lambda _{0}\mu _{0}+\sum _{n=1}^{N}x_{n}\right)\mu _{N}+\left(\sum _{n=1}^{N}x_{n}^{2}\right)+\lambda _{0}\mu 
_{0}^{2}\right]\end{aligned}}$ Note that there are circular dependencies among the formulas for $\lambda _{N}$and $b_{N}$. This naturally suggests an EM-like algorithm: 1. Compute $\sum _{n=1}^{N}x_{n}$ and $\sum _{n=1}^{N}x_{n}^{2}.$ Use these values to compute $\mu _{N}$ and $a_{N}.$ 2. Initialize $\lambda _{N}$ to some arbitrary value. 3. Use the current value of $\lambda _{N},$ along with the known values of the other parameters, to compute $b_{N}$. 4. Use the current value of $b_{N},$ along with the known values of the other parameters, to compute $\lambda _{N}$. 5. Repeat the last two steps until convergence (i.e. until neither value has changed more than some small amount). We then have values for the hyperparameters of the approximating distributions of the posterior parameters, which we can use to compute any properties we want of the posterior — e.g. its mean and variance, a 95% highest-density region (the smallest interval that includes 95% of the total probability), etc. It can be shown that this algorithm is guaranteed to converge to a local maximum. Note also that the posterior distributions have the same form as the corresponding prior distributions. We did not assume this; the only assumption we made was that the distributions factorize, and the form of the distributions followed naturally. It turns out (see below) that the fact that the posterior distributions have the same form as the prior distributions is not a coincidence, but a general result whenever the prior distributions are members of the exponential family, which is the case for most of the standard distributions. Further discussion Step-by-step recipe The above example shows the method by which the variational-Bayesian approximation to a posterior probability density in a given Bayesian network is derived: 1. Describe the network with a graphical model, identifying the observed variables (data) $\mathbf {X} $ and unobserved variables (parameters ${\boldsymbol {\Theta }}$ and latent variables $\mathbf {Z} $) and their conditional probability distributions. Variational Bayes will then construct an approximation to the posterior probability $p(\mathbf {Z} ,{\boldsymbol {\Theta }}\mid \mathbf {X} )$. The approximation has the basic property that it is a factorized distribution, i.e. a product of two or more independent distributions over disjoint subsets of the unobserved variables. 2. Partition the unobserved variables into two or more subsets, over which the independent factors will be derived. There is no universal procedure for doing this; creating too many subsets yields a poor approximation, while creating too few makes the entire variational Bayes procedure intractable. Typically, the first split is to separate the parameters and latent variables; often, this is enough by itself to produce a tractable result. Assume that the partitions are called $\mathbf {Z} _{1},\ldots ,\mathbf {Z} _{M}$. 3. For a given partition $\mathbf {Z} _{j}$, write down the formula for the best approximating distribution $q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )$ using the basic equation $\ln q_{j}^{*}(\mathbf {Z} _{j}\mid \mathbf {X} )=\operatorname {E} _{i\neq j}[\ln p(\mathbf {Z} ,\mathbf {X} )]+{\text{constant}}$ . 4. Fill in the formula for the joint probability distribution using the graphical model. Any component conditional distributions that don't involve any of the variables in $\mathbf {Z} _{j}$ can be ignored; they will be folded into the constant term. 5. 
Simplify the formula and apply the expectation operator, following the above example. Ideally, this should simplify into expectations of basic functions of variables not in $\mathbf {Z} _{j}$ (e.g. first or second raw moments, expectation of a logarithm, etc.). In order for the variational Bayes procedure to work well, these expectations should generally be expressible analytically as functions of the parameters and/or hyperparameters of the distributions of these variables. In all cases, these expectation terms are constants with respect to the variables in the current partition.
6. The functional form of the formula with respect to the variables in the current partition indicates the type of distribution. In particular, exponentiating the formula generates the probability density function (PDF) of the distribution (or at least, something proportional to it, with unknown normalization constant). In order for the overall method to be tractable, it should be possible to recognize the functional form as belonging to a known distribution. Significant mathematical manipulation may be required to convert the formula into a form that matches the PDF of a known distribution. When this can be done, the normalization constant can be reinstated by definition, and equations for the parameters of the known distribution can be derived by extracting the appropriate parts of the formula.
7. When all expectations can be replaced analytically with functions of variables not in the current partition, and the PDF put into a form that allows identification with a known distribution, the result is a set of equations expressing the values of the optimum parameters as functions of the parameters of variables in other partitions.
8. When this procedure can be applied to all partitions, the result is a set of mutually linked equations specifying the optimum values of all parameters.
9. An expectation maximization (EM) type procedure is then applied, picking an initial value for each parameter and then iterating through a series of steps, where at each step we cycle through the equations, updating each parameter in turn. This is guaranteed to converge.
Most important points
Due to all of the mathematical manipulations involved, it is easy to lose track of the big picture. The important things are:
1. The idea of variational Bayes is to construct an analytical approximation to the posterior probability of the set of unobserved variables (parameters and latent variables), given the data. This means that the form of the solution is similar to other Bayesian inference methods, such as Gibbs sampling, i.e. a distribution that seeks to describe everything that is known about the variables. As in other Bayesian methods, but unlike e.g. in expectation maximization (EM) or other maximum likelihood methods, both types of unobserved variables (i.e. parameters and latent variables) are treated the same, i.e. as random variables. Estimates for the variables can then be derived in the standard Bayesian ways, e.g. calculating the mean of the distribution to get a single point estimate or deriving a credible interval, highest density region, etc.
2. "Analytical approximation" means that a formula can be written down for the posterior distribution. The formula generally consists of a product of well-known probability distributions, each of which factorizes over a set of unobserved variables (i.e. it is conditionally independent of the other variables, given the observed data). 
This formula is not the true posterior distribution, but an approximation to it; in particular, it will generally agree fairly closely in the lowest moments of the unobserved variables, e.g. the mean and variance. 3. The result of all of the mathematical manipulations is (1) the identity of the probability distributions making up the factors, and (2) mutually dependent formulas for the parameters of these distributions. The actual values of these parameters are computed numerically, through an alternating iterative procedure much like EM. Compared with expectation maximization (EM) Variational Bayes (VB) is often compared with expectation maximization (EM). The actual numerical procedure is quite similar, in that both are alternating iterative procedures that successively converge on optimum parameter values. The initial steps to derive the respective procedures are also vaguely similar, both starting out with formulas for probability densities and both involving significant amounts of mathematical manipulations. However, there are a number of differences. Most important is what is being computed. • EM computes point estimates of posterior distribution of those random variables that can be categorized as "parameters", but only estimates of the actual posterior distributions of the latent variables (at least in "soft EM", and often only when the latent variables are discrete). The point estimates computed are the modes of these parameters; no other information is available. • VB, on the other hand, computes estimates of the actual posterior distribution of all variables, both parameters and latent variables. When point estimates need to be derived, generally the mean is used rather than the mode, as is normal in Bayesian inference. Concomitant with this, the parameters computed in VB do not have the same significance as those in EM. EM computes optimum values of the parameters of the Bayes network itself. VB computes optimum values of the parameters of the distributions used to approximate the parameters and latent variables of the Bayes network. For example, a typical Gaussian mixture model will have parameters for the mean and variance of each of the mixture components. EM would directly estimate optimum values for these parameters. VB, however, would first fit a distribution to these parameters — typically in the form of a prior distribution, e.g. a normal-scaled inverse gamma distribution — and would then compute values for the parameters of this prior distribution, i.e. essentially hyperparameters. In this case, VB would compute optimum estimates of the four parameters of the normal-scaled inverse gamma distribution that describes the joint distribution of the mean and variance of the component. A more complex example Imagine a Bayesian Gaussian mixture model described as follows:[7] ${\begin{aligned}\mathbf {\pi } &\sim \operatorname {SymDir} (K,\alpha _{0})\\\mathbf {\Lambda } _{i=1\dots K}&\sim {\mathcal {W}}(\mathbf {W} _{0},\nu _{0})\\\mathbf {\mu } _{i=1\dots K}&\sim {\mathcal {N}}(\mathbf {\mu } _{0},(\beta _{0}\mathbf {\Lambda } _{i})^{-1})\\\mathbf {z} [i=1\dots N]&\sim \operatorname {Mult} (1,\mathbf {\pi } )\\\mathbf {x} _{i=1\dots N}&\sim {\mathcal {N}}(\mathbf {\mu } _{z_{i}},{\mathbf {\Lambda } _{z_{i}}}^{-1})\\K&={\text{number of mixing components}}\\N&={\text{number of data points}}\end{aligned}}$ Note: • SymDir() is the symmetric Dirichlet distribution of dimension $K$, with the hyperparameter for each component set to $\alpha _{0}$. 
The Dirichlet distribution is the conjugate prior of the categorical distribution or multinomial distribution. • ${\mathcal {W}}()$ is the Wishart distribution, which is the conjugate prior of the precision matrix (inverse covariance matrix) for a multivariate Gaussian distribution. • Mult() is a multinomial distribution over a single observation (equivalent to a categorical distribution). The state space is a "one-of-K" representation, i.e., a $K$-dimensional vector in which one of the elements is 1 (specifying the identity of the observation) and all other elements are 0. • ${\mathcal {N}}()$ is the Gaussian distribution, in this case specifically the multivariate Gaussian distribution. The interpretation of the above variables is as follows: • $\mathbf {X} =\{\mathbf {x} _{1},\dots ,\mathbf {x} _{N}\}$ is the set of $N$ data points, each of which is a $D$-dimensional vector distributed according to a multivariate Gaussian distribution. • $\mathbf {Z} =\{\mathbf {z} _{1},\dots ,\mathbf {z} _{N}\}$ is a set of latent variables, one per data point, specifying which mixture component the corresponding data point belongs to, using a "one-of-K" vector representation with components $z_{nk}$ for $k=1\dots K$, as described above. • $\mathbf {\pi } $ is the mixing proportions for the $K$ mixture components. • $\mathbf {\mu } _{i=1\dots K}$ and $\mathbf {\Lambda } _{i=1\dots K}$ specify the parameters (mean and precision) associated with each mixture component. The joint probability of all variables can be rewritten as $p(\mathbf {X} ,\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )=p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )p(\mathbf {Z} \mid \mathbf {\pi } )p(\mathbf {\pi } )p(\mathbf {\mu } \mid \mathbf {\Lambda } )p(\mathbf {\Lambda } )$ where the individual factors are ${\begin{aligned}p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )&=\prod _{n=1}^{N}\prod _{k=1}^{K}{\mathcal {N}}(\mathbf {x} _{n}\mid \mathbf {\mu } _{k},\mathbf {\Lambda } _{k}^{-1})^{z_{nk}}\\p(\mathbf {Z} \mid \mathbf {\pi } )&=\prod _{n=1}^{N}\prod _{k=1}^{K}\pi _{k}^{z_{nk}}\\p(\mathbf {\pi } )&={\frac {\Gamma (K\alpha _{0})}{\Gamma (\alpha _{0})^{K}}}\prod _{k=1}^{K}\pi _{k}^{\alpha _{0}-1}\\p(\mathbf {\mu } \mid \mathbf {\Lambda } )&=\prod _{k=1}^{K}{\mathcal {N}}(\mathbf {\mu } _{k}\mid \mathbf {\mu } _{0},(\beta _{0}\mathbf {\Lambda } _{k})^{-1})\\p(\mathbf {\Lambda } )&=\prod _{k=1}^{K}{\mathcal {W}}(\mathbf {\Lambda } _{k}\mid \mathbf {W} _{0},\nu _{0})\end{aligned}}$ where ${\begin{aligned}{\mathcal {N}}(\mathbf {x} \mid \mathbf {\mu } ,\mathbf {\Sigma } )&={\frac {1}{(2\pi )^{D/2}}}{\frac {1}{|\mathbf {\Sigma } |^{1/2}}}\exp \left\{-{\frac {1}{2}}(\mathbf {x} -\mathbf {\mu } )^{\rm {T}}\mathbf {\Sigma } ^{-1}(\mathbf {x} -\mathbf {\mu } )\right\}\\{\mathcal {W}}(\mathbf {\Lambda } \mid \mathbf {W} ,\nu )&=B(\mathbf {W} ,\nu )|\mathbf {\Lambda } |^{(\nu -D-1)/2}\exp \left(-{\frac {1}{2}}\operatorname {Tr} (\mathbf {W} ^{-1}\mathbf {\Lambda } )\right)\\B(\mathbf {W} ,\nu )&=|\mathbf {W} |^{-\nu /2}\left\{2^{\nu D/2}\pi ^{D(D-1)/4}\prod _{i=1}^{D}\Gamma \left({\frac {\nu +1-i}{2}}\right)\right\}^{-1}\\D&={\text{dimensionality of each data point}}\end{aligned}}$ Assume that $q(\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )=q(\mathbf {Z} )q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )$. 
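Variational inference for essentially this model (a Dirichlet prior on the mixing proportions and Gaussian–Wishart priors on the component parameters) is implemented in standard libraries. As a practical aside, before working out the factor updates, the following usage sketch assumes scikit-learn's BayesianGaussianMixture estimator and illustrative synthetic data; it is not part of the derivation that follows:

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
# Synthetic 2-D data drawn from three Gaussian clusters (illustrative values).
X = np.vstack([
    rng.normal([0, 0], 0.5, size=(200, 2)),
    rng.normal([4, 4], 0.8, size=(200, 2)),
    rng.normal([0, 5], 0.6, size=(200, 2)),
])

# Deliberately overspecify the number of components; with a small Dirichlet
# concentration prior, unused components receive negligible weight.
model = BayesianGaussianMixture(
    n_components=8,
    covariance_type="full",
    weight_concentration_prior_type="dirichlet_distribution",
    weight_concentration_prior=1e-2,
    max_iter=500,
    random_state=0,
)
model.fit(X)

print(np.round(model.weights_, 3))            # mixing proportions; most near zero
print(model.means_[model.weights_ > 0.05])    # means of the "active" components
labels = model.predict(X)                     # hard assignments from the responsibilities
print(np.bincount(labels))
```

The ability to overspecify the number of components and let the variational treatment suppress the unneeded ones is one practical advantage of this Bayesian mixture formulation over plain EM fitting.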
Then[8] ${\begin{aligned}\ln q^{*}(\mathbf {Z} )&=\operatorname {E} _{\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } }[\ln p(\mathbf {X} ,\mathbf {Z} ,\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )]+{\text{constant}}\\&=\operatorname {E} _{\mathbf {\pi } }[\ln p(\mathbf {Z} \mid \mathbf {\pi } )]+\operatorname {E} _{\mathbf {\mu } ,\mathbf {\Lambda } }[\ln p(\mathbf {X} \mid \mathbf {Z} ,\mathbf {\mu } ,\mathbf {\Lambda } )]+{\text{constant}}\\&=\sum _{n=1}^{N}\sum _{k=1}^{K}z_{nk}\ln \rho _{nk}+{\text{constant}}\end{aligned}}$ where we have defined $\ln \rho _{nk}=\operatorname {E} [\ln \pi _{k}]+{\frac {1}{2}}\operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]-{\frac {D}{2}}\ln(2\pi )-{\frac {1}{2}}\operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]$ Exponentiating both sides of the formula for $\ln q^{*}(\mathbf {Z} )$ yields $q^{*}(\mathbf {Z} )\propto \prod _{n=1}^{N}\prod _{k=1}^{K}\rho _{nk}^{z_{nk}}$ Requiring that this be normalized ends up requiring that the $\rho _{nk}$ sum to 1 over all values of $k$, yielding $q^{*}(\mathbf {Z} )=\prod _{n=1}^{N}\prod _{k=1}^{K}r_{nk}^{z_{nk}}$ where $r_{nk}={\frac {\rho _{nk}}{\sum _{j=1}^{K}\rho _{nj}}}$ In other words, $q^{*}(\mathbf {Z} )$ is a product of single-observation multinomial distributions, and factors over each individual $\mathbf {z} _{n}$, which is distributed as a single-observation multinomial distribution with parameters $r_{nk}$ for $k=1\dots K$. Furthermore, we note that $\operatorname {E} [z_{nk}]=r_{nk}\,$ which is a standard result for categorical distributions. Now, considering the factor $q(\mathbf {\pi } ,\mathbf {\mu } ,\mathbf {\Lambda } )$, note that it automatically factors into $q(\mathbf {\pi } )\prod _{k=1}^{K}q(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})$ due to the structure of the graphical model defining our Gaussian mixture model, which is specified above. 
Then, ${\begin{aligned}\ln q^{*}(\mathbf {\pi } )&=\ln p(\mathbf {\pi } )+\operatorname {E} _{\mathbf {Z} }[\ln p(\mathbf {Z} \mid \mathbf {\pi } )]+{\text{constant}}\\&=(\alpha _{0}-1)\sum _{k=1}^{K}\ln \pi _{k}+\sum _{n=1}^{N}\sum _{k=1}^{K}r_{nk}\ln \pi _{k}+{\text{constant}}\end{aligned}}$ Taking the exponential of both sides, we recognize $q^{*}(\mathbf {\pi } )$ as a Dirichlet distribution $q^{*}(\mathbf {\pi } )\sim \operatorname {Dir} (\mathbf {\alpha } )\,$ where $\alpha _{k}=\alpha _{0}+N_{k}\,$ where $N_{k}=\sum _{n=1}^{N}r_{nk}\,$ Finally $\ln q^{*}(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})=\ln p(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})+\sum _{n=1}^{N}\operatorname {E} [z_{nk}]\ln {\mathcal {N}}(\mathbf {x} _{n}\mid \mathbf {\mu } _{k},\mathbf {\Lambda } _{k}^{-1})+{\text{constant}}$ Grouping and reading off terms involving $\mathbf {\mu } _{k}$ and $\mathbf {\Lambda } _{k}$, the result is a Gaussian-Wishart distribution given by $q^{*}(\mathbf {\mu } _{k},\mathbf {\Lambda } _{k})={\mathcal {N}}(\mathbf {\mu } _{k}\mid \mathbf {m} _{k},(\beta _{k}\mathbf {\Lambda } _{k})^{-1}){\mathcal {W}}(\mathbf {\Lambda } _{k}\mid \mathbf {W} _{k},\nu _{k})$ given the definitions ${\begin{aligned}\beta _{k}&=\beta _{0}+N_{k}\\\mathbf {m} _{k}&={\frac {1}{\beta _{k}}}(\beta _{0}\mathbf {\mu } _{0}+N_{k}{\bar {\mathbf {x} }}_{k})\\\mathbf {W} _{k}^{-1}&=\mathbf {W} _{0}^{-1}+N_{k}\mathbf {S} _{k}+{\frac {\beta _{0}N_{k}}{\beta _{0}+N_{k}}}({\bar {\mathbf {x} }}_{k}-\mathbf {\mu } _{0})({\bar {\mathbf {x} }}_{k}-\mathbf {\mu } _{0})^{\rm {T}}\\\nu _{k}&=\nu _{0}+N_{k}\\N_{k}&=\sum _{n=1}^{N}r_{nk}\\{\bar {\mathbf {x} }}_{k}&={\frac {1}{N_{k}}}\sum _{n=1}^{N}r_{nk}\mathbf {x} _{n}\\\mathbf {S} _{k}&={\frac {1}{N_{k}}}\sum _{n=1}^{N}r_{nk}(\mathbf {x} _{n}-{\bar {\mathbf {x} }}_{k})(\mathbf {x} _{n}-{\bar {\mathbf {x} }}_{k})^{\rm {T}}\end{aligned}}$ Finally, notice that these functions require the values of $r_{nk}$, which make use of $\rho _{nk}$, which is defined in turn based on $\operatorname {E} [\ln \pi _{k}]$, $\operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]$, and $\operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]$. Now that we have determined the distributions over which these expectations are taken, we can derive formulas for them: ${\begin{aligned}\operatorname {E} _{\mathbf {\mu } _{k},\mathbf {\Lambda } _{k}}[(\mathbf {x} _{n}-\mathbf {\mu } _{k})^{\rm {T}}\mathbf {\Lambda } _{k}(\mathbf {x} _{n}-\mathbf {\mu } _{k})]&=D\beta _{k}^{-1}+\nu _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})^{\rm {T}}\mathbf {W} _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})\\\ln {\widetilde {\Lambda }}_{k}&\equiv \operatorname {E} [\ln |\mathbf {\Lambda } _{k}|]=\sum _{i=1}^{D}\psi \left({\frac {\nu _{k}+1-i}{2}}\right)+D\ln 2+\ln |\mathbf {W} _{k}|\\\ln {\widetilde {\pi }}_{k}&\equiv \operatorname {E} \left[\ln |\pi _{k}|\right]=\psi (\alpha _{k})-\psi \left(\sum _{i=1}^{K}\alpha _{i}\right)\end{aligned}}$ These results lead to $r_{nk}\propto {\widetilde {\pi }}_{k}{\widetilde {\Lambda }}_{k}^{1/2}\exp \left\{-{\frac {D}{2\beta _{k}}}-{\frac {\nu _{k}}{2}}(\mathbf {x} _{n}-\mathbf {m} _{k})^{\rm {T}}\mathbf {W} _{k}(\mathbf {x} _{n}-\mathbf {m} _{k})\right\}$ These can be converted from proportional to absolute values by normalizing over $k$ so that the corresponding values sum to 1. Note that: 1. 
The update equations for the parameters $\beta _{k}$, $\mathbf {m} _{k}$, $\mathbf {W} _{k}$ and $\nu _{k}$ of the variables $\mathbf {\mu } _{k}$ and $\mathbf {\Lambda } _{k}$ depend on the statistics $N_{k}$, ${\bar {\mathbf {x} }}_{k}$, and $\mathbf {S} _{k}$, and these statistics in turn depend on $r_{nk}$. 2. The update equations for the parameters $\alpha _{1\dots K}$ of the variable $\mathbf {\pi } $ depend on the statistic $N_{k}$, which depends in turn on $r_{nk}$. 3. The update equation for $r_{nk}$ has a direct circular dependence on $\beta _{k}$, $\mathbf {m} _{k}$, $\mathbf {W} _{k}$ and $\nu _{k}$ as well as an indirect circular dependence on $\mathbf {W} _{k}$, $\nu _{k}$ and $\alpha _{1\dots K}$ through ${\widetilde {\pi }}_{k}$ and ${\widetilde {\Lambda }}_{k}$. This suggests an iterative procedure that alternates between two steps: 1. An E-step that computes the value of $r_{nk}$ using the current values of all the other parameters. 2. An M-step that uses the new value of $r_{nk}$ to compute new values of all the other parameters. Note that these steps correspond closely with the standard EM algorithm to derive a maximum likelihood or maximum a posteriori (MAP) solution for the parameters of a Gaussian mixture model. The responsibilities $r_{nk}$ in the E step correspond closely to the posterior probabilities of the latent variables given the data, i.e. $p(\mathbf {Z} \mid \mathbf {X} )$; the computation of the statistics $N_{k}$, ${\bar {\mathbf {x} }}_{k}$, and $\mathbf {S} _{k}$ corresponds closely to the computation of corresponding "soft-count" statistics over the data; and the use of those statistics to compute new values of the parameters corresponds closely to the use of soft counts to compute new parameter values in normal EM over a Gaussian mixture model. Exponential-family distributions Note that in the previous example, once the distribution over unobserved variables was assumed to factorize into distributions over the "parameters" and distributions over the "latent data", the derived "best" distribution for each variable was in the same family as the corresponding prior distribution over the variable. This is a general result that holds true for all prior distributions derived from the exponential family. See also • Variational message passing: a modular algorithm for variational Bayesian inference. • Variational autoencoder: an artificial neural network belonging to the families of probabilistic graphical models and Variational Bayesian methods. • Expectation-maximization algorithm: a related approach which corresponds to a special case of variational Bayesian inference. • Generalized filtering: a variational filtering scheme for nonlinear state space models. • Calculus of variations: the field of mathematical analysis that deals with maximizing or minimizing functionals. • Maximum entropy discrimination: This is a variational inference framework that allows for introducing and accounting for additional large-margin constraints[9] References 1. Tran, Viet Hung (2018). "Copula Variational Bayes inference via information geometry". arXiv:1803.10998 [cs.IT]. 2. Adamčík, Martin (2014). "The Information Geometry of Bregman Divergences and Some Applications in Multi-Expert Reasoning". Entropy. 16 (12): 6338–6381. Bibcode:2014Entrp..16.6338A. doi:10.3390/e16126338. 3. Nguyen, Duy (15 August 2023). "AN IN DEPTH INTRODUCTION TO VARIATIONAL BAYES NOTE". SSRN 4541076. Retrieved 15 August 2023. 4. Lee, Se Yoon (2021). 
"Gibbs sampler and coordinate ascent variational inference: A set-theoretical review". Communications in Statistics - Theory and Methods. 51 (6): 1–21. arXiv:2008.01006. doi:10.1080/03610926.2021.1921214. S2CID 220935477. 5. Boyd, Stephen P.; Vandenberghe, Lieven (2004). Convex Optimization (PDF). Cambridge University Press. ISBN 978-0-521-83378-3. Retrieved October 15, 2011. 6. Bishop, Christopher M. (2006). "Chapter 10". Pattern Recognition and Machine Learning. Springer. ISBN 978-0-387-31073-2. 7. Nguyen, Duy (15 August 2023). "AN IN DEPTH INTRODUCTION TO VARIATIONAL BAYES NOTE". SSRN 4541076. Retrieved 15 August 2023. 8. Nguyen, Duy (15 August 2023). "AN IN DEPTH INTRODUCTION TO VARIATIONAL BAYES NOTE". SSRN 4541076. Retrieved 15 August 2023. 9. Sotirios P. Chatzis, “Infinite Markov-Switching Maximum Entropy Discrimination Machines,” Proc. 30th International Conference on Machine Learning (ICML). Journal of Machine Learning Research: Workshop and Conference Proceedings, vol. 28, no. 3, pp. 729–737, June 2013. External links • The on-line textbook: Information Theory, Inference, and Learning Algorithms, by David J.C. MacKay provides an introduction to variational methods (p. 422). • An in depth introduction to Variational Bayes note. Nguyen, D. 2023 • A Tutorial on Variational Bayes. Fox, C. and Roberts, S. 2012. Artificial Intelligence Review, doi:10.1007/s10462-011-9236-8. • Variational-Bayes Repository A repository of research papers, software, and links related to the use of variational methods for approximate Bayesian learning up to 2003. • Variational Algorithms for Approximate Bayesian Inference, by M. J. Beal includes comparisons of EM to Variational Bayesian EM and derivations of several models including Variational Bayesian HMMs. • High-Level Explanation of Variational Inference by Jason Eisner may be worth reading before a more mathematically detailed treatment. • Copula Variational Bayes inference via information geometry (pdf) by Tran, V.H. 2018. This paper is primarily written for students. Via Bregman divergence, the paper shows that Variational Bayes is simply a generalized Pythagorean projection of true model onto an arbitrarily correlated (copula) distributional space, of which the independent space is merely a special case.
Variational integrator Variational integrators are numerical integrators for Hamiltonian systems derived from the Euler–Lagrange equations of a discretized Hamilton's principle. Variational integrators are momentum-preserving and symplectic. Derivation of a simple variational integrator Consider a mechanical system with a single particle degree of freedom described by the Lagrangian $L(t,q,v)={\frac {1}{2}}mv^{2}-V(q),$ where $m$ is the mass of the particle, and $V$ is a potential. To construct a variational integrator for this system, we begin by forming the discrete Lagrangian. The discrete Lagrangian approximates the action for the system over a short time interval: ${\begin{aligned}L_{d}(t_{0},t_{1},q_{0},q_{1})&={\frac {t_{1}-t_{0}}{2}}\left[L\left(t_{0},q_{0},{\frac {q_{1}-q_{0}}{t_{1}-t_{0}}}\right)+L\left(t_{1},q_{1},{\frac {q_{1}-q_{0}}{t_{1}-t_{0}}}\right)\right]\\&\approx \int _{t_{0}}^{t_{1}}\,dt\,L(t,q(t),v(t)).\end{aligned}}$ Here we have chosen to approximate the time integral using the trapezoid method, and we use a linear approximation to the trajectory, $q(t)\approx {\frac {q_{1}-q_{0}}{t_{1}-t_{0}}}(t-t_{0})+q_{0}$ between $t_{0}$ and $t_{1}$, resulting in a constant velocity $v\approx \left(q_{1}-q_{0}\right)/\left(t_{1}-t_{0}\right)$. Different choices for the approximation to the trajectory and the time integral give different variational integrators. The order of accuracy of the integrator is controlled by the accuracy of our approximation to the action; since $L_{d}(t_{0},t_{1},q_{0},q_{1})=\int _{t_{0}}^{t_{1}}\,dt\,L(t,q(t),v(t))+{\mathcal {O}}(t_{1}-t_{0})^{2},$ our integrator will be second-order accurate. Evolution equations for the discrete system can be derived from a stationary-action principle. The discrete action over an extended time interval is a sum of discrete Lagrangians over many sub-intervals: $S_{d}=L_{d}(t_{0},t_{1},q_{0},q_{1})+L_{d}(t_{1},t_{2},q_{1},q_{2})+\cdots .$ The principle of stationary action states that the action is stationary with respect to variations of coordinates that leave the endpoints of the trajectory fixed. So, varying the coordinate $q_{1}$, we have ${\frac {\partial S_{d}}{\partial q_{1}}}=0={\frac {\partial }{\partial q_{1}}}L_{d}\left(t_{0},t_{1},q_{0},q_{1}\right)+{\frac {\partial }{\partial q_{1}}}L_{d}\left(t_{1},t_{2},q_{1},q_{2}\right).$ Given an initial condition $(q_{0},q_{1})$, and a sequence of times $(t_{0},t_{1},t_{2})$ this provides a relation that can be solved for $q_{2}$. The solution is $q_{2}=q_{1}+{\frac {t_{2}-t_{1}}{t_{1}-t_{0}}}(q_{1}-q_{0})-{\frac {(t_{2}-t_{0})(t_{2}-t_{1})}{2m}}{\frac {d}{dq_{1}}}V(q_{1}).$ We can write this in a simpler form if we define the discrete momenta, $p_{0}\equiv -{\frac {\partial }{\partial q_{0}}}L_{d}(t_{0},t_{1},q_{0},q_{1})$ and $p_{1}\equiv {\frac {\partial }{\partial q_{1}}}L_{d}(t_{0},t_{1},q_{0},q_{1}).$ Given an initial condition $(q_{0},p_{0})$, the stationary action condition is equivalent to solving the first of these equations for $q_{1}$, and then determining $p_{1}$ using the second equation. This evolution scheme gives $q_{1}=q_{0}+{\frac {t_{1}-t_{0}}{m}}p_{0}-{\frac {(t_{1}-t_{0})^{2}}{2m}}{\frac {d}{dq_{0}}}V(q_{0})$ and $p_{1}=m{\frac {q_{1}-q_{0}}{t_{1}-t_{0}}}-{\frac {t_{1}-t_{0}}{2}}{\frac {d}{dq_{1}}}V(q_{1}).$ This is a leapfrog integration scheme for the system; two steps of this evolution are equivalent to the formula above for $q_{2}$ See also • Lie group integrator References • E. Hairer, C. Lubich, and G. Wanner. 
Geometric Numerical Integration. Springer, 2002. • J. Marsden and M. West. Discrete mechanics and variational integrators. Acta Numerica, 2001, pp. 357–514.
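As a concrete illustration, the following minimal sketch (the harmonic potential, step size, and parameter values are illustrative choices, not taken from the references) implements the update equations for $q_{1}$ and $p_{1}$ derived above on a uniform time grid and tracks the energy, which stays bounded rather than drifting, as expected for a symplectic, momentum-preserving method:

```python
import numpy as np

# Variational (leapfrog-type) integrator derived above, on a uniform grid with
# step h = t_{n+1} - t_n, for a single degree of freedom with V(q) = 0.5*k*q**2.
m, k, h = 1.0, 1.0, 0.05
dV = lambda q: k * q                      # dV/dq for the chosen potential

def step(q0, p0):
    """One step of the update equations (q0, p0) -> (q1, p1) given above."""
    q1 = q0 + (h / m) * p0 - (h ** 2) / (2 * m) * dV(q0)
    p1 = m * (q1 - q0) / h - (h / 2) * dV(q1)
    return q1, p1

q, p = 1.0, 0.0                           # initial condition
energies = []
for _ in range(2000):
    q, p = step(q, p)
    energies.append(p ** 2 / (2 * m) + 0.5 * k * q ** 2)

# The energy error oscillates within a small band instead of accumulating.
print(max(energies) - min(energies))
```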
Variational multiscale method The variational multiscale method (VMS) is a technique used for deriving models and numerical methods for multiscale phenomena.[1] The VMS framework has mainly been applied to design stabilized finite element methods for problems in which the stability of the standard Galerkin method is not ensured, either because of singular perturbations or because of compatibility conditions on the finite element spaces.[2] Stabilized methods are receiving increasing attention in computational fluid dynamics because they are designed to overcome typical drawbacks of the standard Galerkin method: advection-dominated flow problems, and problems in which an arbitrary combination of interpolation functions may yield an unstable discretized formulation.[3][4] A milestone among stabilized methods for this class of problems is the Streamline Upwind Petrov–Galerkin (SUPG) method, designed during the 1980s for convection-dominated flows, with particular emphasis on the incompressible Navier–Stokes equations, by Brooks and Hughes.[5][6] The variational multiscale method (VMS) was introduced by Hughes in 1995.[7] Broadly speaking, VMS is a technique used to derive mathematical models and numerical methods that are able to capture multiscale phenomena;[1] in fact, it is usually adopted for problems with a wide range of scales, which are separated into a number of scale groups.[8] The main idea of the method is to design a sum decomposition of the solution as $u={\bar {u}}+u'$, where ${\bar {u}}$ denotes the coarse-scale solution, which is solved for numerically, whereas $u'$ represents the fine-scale solution, which is determined analytically and eliminated from the coarse-scale equation.[1] The abstract framework Abstract Dirichlet problem with variational formulation Consider an open bounded domain $\Omega \subset \mathbb {R} ^{d}$ with smooth boundary $\Gamma \subset \mathbb {R} ^{d-1}$, where $d\geq 1$ is the number of space dimensions. Denoting by ${\mathcal {L}}$ a generic, second-order, nonsymmetric differential operator, consider the following boundary value problem:[4] ${\text{find }}u:\Omega \to \mathbb {R} {\text{ such that}}:$ ${\begin{cases}{\mathcal {L}}u=f&{\text{ in }}\Omega \\u=g&{\text{ on }}\Gamma \\\end{cases}}$ where $f:\Omega \to \mathbb {R} $ and $g:\Gamma \to \mathbb {R} $ are given functions. 
Let $H^{1}(\Omega )$ be the Hilbert space of square-integrable functions with square-integrable derivatives:[4] $H^{1}(\Omega )=\{f\in L^{2}(\Omega ):\nabla f\in L^{2}(\Omega )\}.$ Consider the trial solution space ${\mathcal {V}}_{g}$ and the weighting function space ${\mathcal {V}}$ defined as follows:[4] ${\mathcal {V}}_{g}=\{u\in H^{1}(\Omega ):\,u=g{\text{ on }}\Gamma \},$ ${\mathcal {V}}=H_{0}^{1}(\Omega )=\{v\in H^{1}(\Omega ):\,v=0{\text{ on }}\Gamma \}.$ The variational formulation of the boundary value problem defined above reads:[4] ${\text{find }}u\in {\mathcal {V}}_{g}{\text{ such that: }}a(v,u)=f(v)\,\,\,\,\forall v\in {\mathcal {V}}$, where $a(v,u)$ is the bilinear form satisfying $a(v,u)=(v,{\mathcal {L}}u)$, $f(v)=(v,f)$ is a bounded linear functional on ${\mathcal {V}}$, and $(\cdot ,\cdot )$ denotes the $L^{2}(\Omega )$ inner product.[2] Furthermore, the dual operator ${\mathcal {L}}^{*}$ of ${\mathcal {L}}$ is defined as the differential operator such that $(v,{\mathcal {L}}u)=({\mathcal {L}}^{*}v,u)\,\,\,\forall u,\,v\in {\mathcal {V}}$.[7] Variational multiscale method In the VMS approach, the function spaces are decomposed through a multiscale direct sum decomposition of both ${\mathcal {V}}_{g}$ and ${\mathcal {V}}$ into coarse-scale and fine-scale subspaces:[1] ${\mathcal {V}}={\bar {\mathcal {V}}}\oplus {\mathcal {V}}'$ and ${\mathcal {V}}_{g}={\bar {{\mathcal {V}}_{g}}}\oplus {\mathcal {V}}_{g}'.$ Hence, an overlapping sum decomposition is assumed for both $u$ and $v$ as: $u={\bar {u}}+u'{\text{ and }}v={\bar {v}}+v'$, where ${\bar {u}}$ represents the coarse (resolvable) scales and $u'$ the fine (subgrid) scales, with ${\bar {u}}\in {\bar {{\mathcal {V}}_{g}}}$, ${u'}\in {{\mathcal {V}}_{g}}'$, ${\bar {v}}\in {\bar {\mathcal {V}}}$ and $v'\in {\mathcal {V}}'$. 
In particular, the following assumptions are made on these functions:[1] ${\begin{aligned}{\bar {u}}=g&&{\text{ on }}\Gamma &&\forall &{\bar {u}}\in {\bar {\mathcal {V_{g}}}},\\u'=0&&{\text{ on }}\Gamma &&\forall &u'\in {\mathcal {V_{g}}}',\\{\bar {v}}=0&&{\text{ on }}\Gamma &&\forall &{\bar {v}}\in {\bar {\mathcal {V}}},\\v'=0&&{\text{ on }}\Gamma &&\forall &v'\in {\mathcal {V}}'.\end{aligned}}$ With this in mind, the variational form can be rewritten as $a({\bar {v}}+v',{\bar {u}}+u')=f({\bar {v}}+v')$ and, by using bilinearity of $a(\cdot ,\cdot )$ and linearity of $f(\cdot )$, $a({\bar {v}},{\bar {u}})+a({\bar {v}},u')+a(v',{\bar {u}})+a(v',u')=f({\bar {v}})+f(v').$ The last equation yields a coarse-scale and a fine-scale problem: ${\text{find }}{\bar {u}}\in {\bar {\mathcal {V}}}_{g}{\text{ and }}u'\in {\mathcal {V}}'{\text{ such that: }}$ ${\begin{aligned}&&a({\bar {v}},{\bar {u}})+a({\bar {v}},u')&=f({\bar {v}})&&\forall {\bar {v}}\in {\bar {\mathcal {V}}}&\,\,\,\,{\text{coarse-scale problem}}\\&&a(v',{\bar {u}})+a(v',u')&=f(v')&&\forall v'\in {\mathcal {V}}'&\,\,\,\,{\text{fine-scale problem}}\\\end{aligned}}$ or, equivalently, considering that $a(v,u)=(v,{\mathcal {L}}u)$ and $f(v)=(v,f)$: ${\text{find }}{\bar {u}}\in {\bar {\mathcal {V}}}_{g}{\text{ and }}u'\in {\mathcal {V}}'{\text{ such that: }}$ ${\begin{aligned}&&({\bar {v}},{\mathcal {L}}{\bar {u}})+({\bar {v}},{\mathcal {L}}u')&=({\bar {v}},f)&&\forall {\bar {v}}\in {\bar {\mathcal {V}}},\\&&(v',{\mathcal {L}}{\bar {u}})+(v',{\mathcal {L}}u')&=(v',f)&&\forall v'\in {\mathcal {V}}'.\\\end{aligned}}$ By rearranging the second problem as $(v',{\mathcal {L}}u')=-(v',{\mathcal {L}}{\bar {u}}-f)$, the corresponding Euler–Lagrange equation reads:[7] ${\begin{cases}{\mathcal {L}}u'=-({\mathcal {L}}{\bar {u}}-f)&{\text{ in }}\Omega \\u'=0&{\text{ on }}\Gamma \end{cases}}$ which shows that the fine-scale solution $u'$ depends on the strong residual of the coarse-scale equation, ${\mathcal {L}}{\bar {u}}-f$.[7] The fine-scale solution can be expressed in terms of ${\mathcal {L}}{\bar {u}}-f$ through the Green's function $G:\Omega \times \Omega \to \mathbb {R} {\text{ with }}G=0{\text{ on }}\Gamma \times \Gamma $: $u'(y)=-\int _{\Omega }G(x,y)({\mathcal {L}}{\bar {u}}-f)(x)\,d\Omega _{x}\,\,\,\forall y\in \Omega .$ Let $\delta $ be the Dirac delta function; by definition, the Green's function is found by solving, for every $y\in \Omega $, ${\begin{cases}{\mathcal {L}}^{*}G(x,y)=\delta (x-y)&{\text{ in }}\Omega \\G(x,y)=0&{\text{ on }}\Gamma \end{cases}}$ Moreover, it is possible to express $u'$ in terms of a new differential operator ${\mathcal {M}}$ that approximates the differential operator $-{\mathcal {L}}^{-1}$ as:[1] $u'={\mathcal {M}}({\mathcal {L}}{\bar {u}}-f),$ with ${\mathcal {M}}\approx -{\mathcal {L}}^{-1}$. In order to eliminate the explicit dependence of the coarse-scale equation on the sub-grid scale terms, the last expression can be substituted into the second term of the coarse-scale equation, using the definition of the dual operator:[1] $({\bar {v}},{\mathcal {L}}u')=({\mathcal {L}}^{*}{\bar {v}},u')=({\mathcal {L}}^{*}{\bar {v}},{\mathcal {M}}({\mathcal {L}}{\bar {u}}-f)).$ Since ${\mathcal {M}}$ is an approximation of $-{\mathcal {L}}^{-1}$, the variational multiscale formulation consists of finding an approximate solution ${\tilde {\bar {u}}}\approx {\bar {u}}$ instead of ${\bar {u}}$. 
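A purely algebraic analogue of this elimination may help fix ideas. In a discretized problem, splitting the unknowns into resolved ("coarse") and unresolved ("fine") sets and eliminating the latter by static condensation plays the same role as substituting $u'={\mathcal {M}}({\mathcal {L}}{\bar {u}}-f)$, with $-A_{ff}^{-1}$ standing in for ${\mathcal {M}}$. The sketch below is only an illustration of that analogy; the matrix, the grid and the even/odd index split are arbitrary choices, not part of the VMS framework.

```python
import numpy as np

# 1D Poisson matrix on a fine grid, with unknowns split into "coarse" (even)
# and "fine" (odd) index sets: an algebraic caricature of V = Vbar (+) V'.
n = 9                                    # interior unknowns
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.ones(n)

coarse = np.arange(0, n, 2)              # resolved unknowns
fine = np.arange(1, n, 2)                # unresolved (sub-grid) unknowns
Acc, Acf = A[np.ix_(coarse, coarse)], A[np.ix_(coarse, fine)]
Afc, Aff = A[np.ix_(fine, coarse)], A[np.ix_(fine, fine)]

# Eliminate the fine unknowns: u_f = Aff^{-1} (f_f - Afc u_c).  This is the
# discrete counterpart of u' = M(L ubar - f), with -Aff^{-1} playing the
# role of M ~ -L^{-1} restricted to the fine scales.
S = Acc - Acf @ np.linalg.solve(Aff, Afc)        # Schur complement
g = f[coarse] - Acf @ np.linalg.solve(Aff, f[fine])
u_coarse = np.linalg.solve(S, g)

u_full = np.linalg.solve(A, f)
print(np.allclose(u_coarse, u_full[coarse]))     # True: coarse values exact
```

In the VMS setting the same elimination is carried out at the continuous level, with ${\mathcal {M}}$ subsequently approximated, which leads to the rewritten coarse-scale problem below.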
The coarse problem is therefore rewritten as:[1] ${\text{find }}{\tilde {\bar {u}}}\in {\mathcal {\bar {V}}}_{g}:\;\;\;a({\bar {v}},{\tilde {\bar {u}}})+({\mathcal {L}}^{*}{\bar {v}},{\mathcal {M}}({\mathcal {L}}{\tilde {\bar {u}}}-f))=({\bar {v}},f)\;\;\;\forall {\bar {v}}\in {\mathcal {\bar {V}}},$ where $({\mathcal {L}}^{*}{\bar {v}},{\mathcal {M}}({\mathcal {L}}{\tilde {\bar {u}}}-f))=-\int _{\Omega }\int _{\Omega }({\mathcal {L}}^{*}{\bar {v}})(y)G(x,y)({\mathcal {L}}{\tilde {\bar {u}}}-f)(x)\,d\Omega _{x}\,d\Omega _{y}.$ Introducing the form[7] $B({\bar {v}},{\tilde {\bar {u}}},G)=a({\bar {v}},{\tilde {\bar {u}}})+({\mathcal {L}}^{*}{\bar {v}},{\mathcal {M}}({\mathcal {L}}{\tilde {\bar {u}}}))$ and the functional $L({\bar {v}},G)=({\bar {v}},f)+({\mathcal {L}}^{*}{\bar {v}},{\mathcal {M}}f)$, the VMS formulation of the coarse-scale equation is rearranged as:[7] ${\text{find }}{\tilde {\bar {u}}}\in {\mathcal {\bar {V}}}_{g}:\,B({\bar {v}},{\tilde {\bar {u}}},G)=L({\bar {v}},G)\,\,\,\forall {\bar {v}}\in {\mathcal {\bar {V}}}.$ Since it is generally not possible to determine both ${\mathcal {M}}$ and $G$ exactly, one usually adopts an approximation. In this sense, the coarse-scale spaces ${\bar {\mathcal {V}}}_{g}$ and ${\bar {\mathcal {V}}}$ are chosen as finite-dimensional spaces of functions:[1] ${\bar {\mathcal {V}}}_{g}\equiv {\mathcal {V}}_{g_{h}}:={\mathcal {V}}_{g}\cap X_{r}^{h}(\Omega )$ and ${\bar {\mathcal {V}}}\equiv {\mathcal {V}}_{h}:={\mathcal {V}}\cap X_{r}^{h}(\Omega ),$ where $X_{r}^{h}(\Omega )$ is the finite element space of Lagrangian polynomials of degree $r\geq 1$ over the mesh built on $\Omega $.[4] Note that ${\mathcal {V}}_{g}'$ and ${\mathcal {V}}'$ are infinite-dimensional spaces, while ${\mathcal {V}}_{g_{h}}$ and ${\mathcal {V}}_{h}$ are finite-dimensional spaces. Let $u_{h}\in {\mathcal {V}}_{g_{h}}$ and $v_{h}\in {\mathcal {V}}_{h}$ be respectively approximations of ${\tilde {\bar {u}}}$ and ${\bar {v}}$, and let ${\tilde {G}}$ and ${\tilde {\mathcal {M}}}$ be respectively approximations of $G$ and ${\mathcal {M}}$. The VMS problem with finite element approximation reads:[7] ${\text{find }}u_{h}\in {\mathcal {V}}_{g_{h}}:B(v_{h},u_{h},{\tilde {G}})=L(v_{h},{\tilde {G}})\,\,\,\forall {v}_{h}\in {\mathcal {V}}_{h}$ or, equivalently: ${\text{find }}u_{h}\in {\mathcal {V}}_{g_{h}}:a(v_{h},u_{h})+({\mathcal {L}}^{*}v_{h},{\mathcal {\tilde {M}}}({\mathcal {L}}{u_{h}}-f))=(v_{h},f)\,\,\,\forall {v}_{h}\in {\mathcal {V}}_{h}.$ VMS and stabilized methods Consider an advection–diffusion problem:[4] ${\begin{cases}-\mu \Delta u+{\boldsymbol {b}}\cdot \nabla u=f&{\text{ in }}\Omega \\u=0&{\text{ on }}\partial \Omega \end{cases}}$ where $\mu \in \mathbb {R} $ is the diffusion coefficient with $\mu >0$ and ${\boldsymbol {b}}\in \mathbb {R} ^{d}$ is a given advection field. 
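In one space dimension this problem, and the effect of the stabilization terms developed in the remainder of this section, can be illustrated with a short script. The sketch below is an illustration only: the parameter values, the mesh, and the classical "optimal" choice of the stabilization parameter $\tau $ are assumptions, not taken from the cited references. It assembles the standard Galerkin matrix for piecewise-linear finite elements and adds the SUPG contributions; dropping the SUPG terms at this element Péclet number produces the spurious node-to-node oscillations that stabilized methods are designed to cure.

```python
import numpy as np

# Minimal 1D SUPG discretization of  -mu*u'' + b*u' = f  on (0,1),
# u(0) = u(1) = 0, with piecewise-linear elements (illustrative values).
mu, b, f_const = 0.005, 1.0, 1.0
n_el = 20                        # number of elements
h = 1.0 / n_el
n = n_el + 1                     # number of nodes
A = np.zeros((n, n)); F = np.zeros(n)

Pe = abs(b) * h / (2.0 * mu)                                 # element Peclet number
tau = h / (2.0 * abs(b)) * (1.0 / np.tanh(Pe) - 1.0 / Pe)    # a common choice of tau

for e in range(n_el):
    idx = [e, e + 1]
    # element matrices for linear shape functions; their second derivatives
    # vanish, so the SUPG residual term reduces to its advective part
    K_diff = mu / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    K_adv  = b / 2.0 * np.array([[-1.0, 1.0], [-1.0, 1.0]])
    K_supg = tau * b * b / h * np.array([[1.0, -1.0], [-1.0, 1.0]])
    F_gal  = f_const * h / 2.0 * np.array([1.0, 1.0])
    F_supg = tau * b * f_const * np.array([-1.0, 1.0])
    A[np.ix_(idx, idx)] += K_diff + K_adv + K_supg
    F[idx] += F_gal + F_supg

# homogeneous Dirichlet boundary conditions
A[0, :] = 0.0;  A[0, 0] = 1.0;   F[0] = 0.0
A[-1, :] = 0.0; A[-1, -1] = 1.0; F[-1] = 0.0
u = np.linalg.solve(A, F)        # nodal values of the stabilized solution
```

The remainder of this section develops the abstract form of such stabilized methods and their interpretation within the VMS framework.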
Let ${\mathcal {V}}=H_{0}^{1}(\Omega )$ and $u\in {\mathcal {V}}$, ${\boldsymbol {b}}\in [L^{2}(\Omega )]^{d}$, $f\in L^{2}(\Omega )$.[4] Let ${\mathcal {L}}={\mathcal {L}}_{diff}+{\mathcal {L}}_{adv}$, where ${\mathcal {L}}_{diff}=-\mu \Delta $ and ${\mathcal {L}}_{adv}={\boldsymbol {b}}\cdot \nabla $.[1] The variational form of the problem above reads:[4] ${\text{find}}\,u\in {\mathcal {V}}:\;\;\;a(v,u)=(f,v)\;\;\;\forall v\in {\mathcal {V}},$ where $a(v,u)=(\nabla v,\mu \nabla u)+(v,{\boldsymbol {b}}\cdot \nabla u).$ Consider a finite element approximation in space of the problem above, introducing the space ${\mathcal {V}}_{h}={\mathcal {V}}\cap X_{r}^{h}$ over a grid $\Omega _{h}=\bigcup _{k=1}^{N}\Omega _{k}$ made of $N$ elements, with $u_{h}\in {\mathcal {V}}_{h}$. The standard Galerkin formulation of this problem reads:[4] ${\text{find }}u_{h}\in {\mathcal {V}}_{h}:\;\;\;a(v_{h},u_{h})=(f,v_{h})\;\;\;\forall v_{h}\in {\mathcal {V}}_{h}.$ Consider a strongly consistent stabilization method of the problem above in a finite element framework: ${\text{ find }}u_{h}\in {\mathcal {V}}_{h}:\,\,\,a(v_{h},u_{h})+{\mathcal {L}}_{h}(u_{h},f;v_{h})=(f,v_{h})\,\,\,\forall v_{h}\in {\mathcal {V}}_{h}$ for a suitable form ${\mathcal {L}}_{h}$ that satisfies:[4] ${\mathcal {L}}_{h}(u,f;v_{h})=0\,\,\,\forall v_{h}\in {\mathcal {V}}_{h}.$ The form ${\mathcal {L}}_{h}$ can be expressed as $(\mathbb {L} v_{h},\tau ({\mathcal {L}}u_{h}-f))_{\Omega _{h}}$, where $\mathbb {L} $ is a differential operator that can be chosen as:[1] $\mathbb {L} ={\begin{cases}+{\mathcal {L}}&\,\,\,&{\text{ Galerkin/least squares (GLS)}}\\+{\mathcal {L}}_{adv}&\,\,\,&{\text{ Streamline Upwind Petrov-Galerkin (SUPG)}}\\-{\mathcal {L}}^{*}&\,\,\,&{\text{ Multiscale}}\\\end{cases}}$ and $\tau $ is the stabilization parameter. A stabilized method with $\mathbb {L} =-{\mathcal {L}}^{*}$ is typically referred to as a multiscale stabilized method. In 1995, Thomas J.R. Hughes showed that a stabilized method of multiscale type can be viewed as a sub-grid scale model where the stabilization parameter is equal to $\tau =-{\tilde {\mathcal {M}}}\approx -{\mathcal {M}}$ or, in terms of the Green's function, as $\tau \delta (x-y)={\tilde {G}}(x,y)\approx G(x,y),$ which yields the following definition of $\tau $: $\tau ={\frac {1}{|\Omega _{k}|}}\int _{\Omega _{k}}\int _{\Omega _{k}}G(x,y)\,d\Omega _{x}\,d\Omega _{y}.$[7] VMS turbulence modeling for large-eddy simulations of incompressible flows The idea of VMS turbulence modeling for large eddy simulation (LES) of the incompressible Navier–Stokes equations was introduced by Hughes et al. 
in 2000; the main idea was to use variational projections in place of the classical filters.[9][10] Incompressible Navier–Stokes equations Consider the incompressible Navier–Stokes equations for a Newtonian fluid of constant density $\rho $ in a domain $\Omega \subset \mathbb {R} ^{d}$ with boundary $\partial \Omega =\Gamma _{D}\cup \Gamma _{N}$, where $\Gamma _{D}$ and $\Gamma _{N}$ are the portions of the boundary on which a Dirichlet and a Neumann boundary condition are applied, respectively ($\Gamma _{D}\cap \Gamma _{N}=\emptyset $):[4] ${\begin{cases}\rho {\dfrac {\partial {\boldsymbol {u}}}{\partial t}}+\rho ({\boldsymbol {u}}\cdot \nabla ){\boldsymbol {u}}-\nabla \cdot {\boldsymbol {\sigma }}({\boldsymbol {u}},p)={\boldsymbol {f}}&{\text{ in }}\Omega \times (0,T)\\\nabla \cdot {\boldsymbol {u}}=0&{\text{ in }}\Omega \times (0,T)\\{\boldsymbol {u}}={\boldsymbol {g}}&{\text{ on }}\Gamma _{D}\times (0,T)\\\sigma ({\boldsymbol {u}},p){\boldsymbol {\hat {n}}}={\boldsymbol {h}}&{\text{ on }}\Gamma _{N}\times (0,T)\\{\boldsymbol {u}}(0)={\boldsymbol {u}}_{0}&{\text{ in }}\Omega \times \{0\}\end{cases}}$ where ${\boldsymbol {u}}$ is the fluid velocity, $p$ the fluid pressure, ${\boldsymbol {f}}$ a given forcing term, ${\boldsymbol {\hat {n}}}$ the outward-directed unit normal vector to $\Gamma _{N}$, and ${\boldsymbol {\sigma }}({\boldsymbol {u}},p)$ the stress tensor, defined as: ${\boldsymbol {\sigma }}({\boldsymbol {u}},p)=-p{\boldsymbol {I}}+2\mu {\boldsymbol {\epsilon }}({\boldsymbol {u}}).$ Let $\mu $ be the dynamic viscosity of the fluid, ${\boldsymbol {I}}$ the second-order identity tensor and ${\boldsymbol {\epsilon }}({\boldsymbol {u}})$ the strain-rate tensor defined as: ${\boldsymbol {\epsilon }}({\boldsymbol {u}})={\frac {1}{2}}((\nabla {\boldsymbol {u}})+(\nabla {\boldsymbol {u}})^{T}).$ The functions ${\boldsymbol {g}}$ and ${\boldsymbol {h}}$ are given Dirichlet and Neumann boundary data, while ${\boldsymbol {u}}_{0}$ is the initial condition.[4] Global space-time variational formulation In order to find a variational formulation of the Navier–Stokes equations, consider the following infinite-dimensional spaces:[4] ${\mathcal {V}}_{g}=\{{\boldsymbol {u}}\in [H^{1}(\Omega )]^{d}:{\boldsymbol {u}}={\boldsymbol {g}}{\text{ on }}\Gamma _{D}\},$ ${\mathcal {V}}_{0}=[H_{0}^{1}(\Omega )]^{d}=\{{\boldsymbol {u}}\in [H^{1}(\Omega )]^{d}:{\boldsymbol {u}}={\boldsymbol {0}}{\text{ on }}\Gamma _{D}\},$ ${\mathcal {Q}}=L^{2}(\Omega ).$ Furthermore, let ${\boldsymbol {\mathcal {V}}}_{g}={\mathcal {V}}_{g}\times {\mathcal {Q}}$ and ${\boldsymbol {\mathcal {V}}}_{0}={\mathcal {V}}_{0}\times {\mathcal {Q}}$. The weak form of the unsteady incompressible Navier–Stokes equations reads:[4] given ${\boldsymbol {u}}_{0}$, $\forall t\in (0,T),\;{\text{find }}({\boldsymbol {u}},p)\in {\boldsymbol {\mathcal {V}}}_{g}{\text{ such that }}$ ${\begin{aligned}{\bigg (}{\boldsymbol {v}},\rho {\dfrac {\partial {\boldsymbol {u}}}{\partial t}}{\bigg )}+a({\boldsymbol {v}},{\boldsymbol {u}})+c({\boldsymbol {v}},{\boldsymbol {u}},{\boldsymbol {u}})-b({\boldsymbol {v}},p)+b({\boldsymbol {u}},q)=({\boldsymbol {v}},{\boldsymbol {f}})+({\boldsymbol {v}},{\boldsymbol {h}})_{\Gamma _{N}}\;\;\forall ({\boldsymbol {v}},q)\in {\boldsymbol {\mathcal {V}}}_{0}\end{aligned}}$ where $(\cdot ,\cdot )$ represents the $L^{2}(\Omega )$ inner product and $(\cdot ,\cdot )_{\Gamma _{N}}$ the $L^{2}(\Gamma _{N})$ inner product. 
Moreover, the bilinear forms $a(\cdot ,\cdot )$, $b(\cdot ,\cdot )$ and the trilinear form $c(\cdot ,\cdot ,\cdot )$ are defined as follows:[4] ${\begin{aligned}a({\boldsymbol {v}},{\boldsymbol {u}})=&(\nabla {\boldsymbol {v}},\mu ((\nabla {\boldsymbol {u}})+(\nabla {\boldsymbol {u}})^{T})),\\b({\boldsymbol {v}},q)=&(\nabla \cdot {\boldsymbol {v}},q),\\c({\boldsymbol {v}},{\boldsymbol {u}},{\boldsymbol {u}})=&({\boldsymbol {v}},\rho ({\boldsymbol {u}}\cdot \nabla ){\boldsymbol {u}}).\end{aligned}}$ Finite element method for space discretization and VMS-LES modeling In order to discretize the Navier–Stokes equations in space, consider the finite element function space $X_{r}^{h}=\{u^{h}\in C^{0}({\overline {\Omega }}):u^{h}|_{k}\in \mathbb {P} _{r},\;\forall k\in \mathrm {T} _{h}\}$ of piecewise Lagrangian polynomials of degree $r\geq 1$ over the domain $\Omega $, triangulated with a mesh $\mathrm {T} _{h}$ made of tetrahedra with diameters $h_{k}$, $\forall k\in \mathrm {T} _{h}$. Following the approach shown above, introduce a multiscale direct-sum decomposition of the space ${\boldsymbol {\mathcal {V}}}$, which represents either ${\boldsymbol {\mathcal {V}}}_{g}$ or ${\boldsymbol {\mathcal {V}}}_{0}$:[11] ${\boldsymbol {\mathcal {V}}}={\boldsymbol {\mathcal {V}}}_{h}\oplus {\boldsymbol {\mathcal {V}}}',$ where ${\boldsymbol {\mathcal {V}}}_{h}={\mathcal {V}}_{g_{h}}\times {\mathcal {Q}}_{h}{\text{ or }}{\boldsymbol {\mathcal {V}}}_{h}={\mathcal {V}}_{0_{h}}\times {\mathcal {Q}}_{h}$ is the finite-dimensional function space associated with the coarse scale, and ${\boldsymbol {\mathcal {V}}}'={\mathcal {V}}_{g}'\times {\mathcal {Q}}'{\text{ or }}{\boldsymbol {\mathcal {V}}}'={\mathcal {V}}_{0}'\times {\mathcal {Q}}'$ is the infinite-dimensional fine-scale function space, with ${\mathcal {V}}_{g_{h}}={\mathcal {V}}_{g}\cap X_{r}^{h}$, ${\mathcal {V}}_{0_{h}}={\mathcal {V}}_{0}\cap X_{r}^{h}$ and ${\mathcal {Q}}_{h}={\mathcal {Q}}\cap X_{r}^{h}$. 
An overlapping sum decomposition is then defined as:[10][11] ${\begin{aligned}&{\boldsymbol {u}}={\boldsymbol {u}}^{h}+{\boldsymbol {u}}'{\text{ and }}p=p^{h}+p'\\&{\boldsymbol {v}}={\boldsymbol {v}}^{h}+{\boldsymbol {v}}'\;{\text{ and }}q=q^{h}+q'\end{aligned}}$ By using the decomposition above in the variational form of the Navier–Stokes equations, one gets a coarse-scale and a fine-scale equation; the fine-scale terms appearing in the coarse-scale equation are integrated by parts and the fine-scale variables are modeled as:[10] ${\begin{aligned}{\boldsymbol {u}}'\approx &-\tau _{M}({\boldsymbol {u}}^{h}){\boldsymbol {r}}_{M}({\boldsymbol {u}}^{h},p^{h}),\\p'\approx &-\tau _{C}({\boldsymbol {u}}^{h}){\boldsymbol {r}}_{C}({\boldsymbol {u}}^{h}).\end{aligned}}$ In the expressions above, ${\boldsymbol {r}}_{M}({\boldsymbol {u}}^{h},p^{h})$ and ${\boldsymbol {r}}_{C}({\boldsymbol {u}}^{h})$ are the residuals of the momentum equation and continuity equation in strong form, defined as: ${\begin{aligned}{\boldsymbol {r}}_{M}({\boldsymbol {u}}^{h},p^{h})=&\rho {\dfrac {\partial {\boldsymbol {u}}^{h}}{\partial t}}+\rho ({\boldsymbol {u}}^{h}\cdot \nabla ){\boldsymbol {u}}^{h}-\nabla \cdot {\boldsymbol {\sigma }}({\boldsymbol {u}}^{h},p^{h})-{\boldsymbol {f}},\\{\boldsymbol {r}}_{C}({\boldsymbol {u}}^{h})=&\nabla \cdot {\boldsymbol {u}}^{h},\end{aligned}}$ while the stabilization parameters are set equal to:[11] ${\begin{aligned}\tau _{M}({\boldsymbol {u}}^{h})=&{\bigg (}{\frac {\sigma ^{2}\rho ^{2}}{\Delta t^{2}}}+{\frac {\rho ^{2}}{h_{k}^{2}}}|{\boldsymbol {u}}^{h}|^{2}+{\frac {\mu ^{2}}{h_{k}^{4}}}C_{r}{\bigg )}^{-1/2},\\\tau _{C}({\boldsymbol {u}}^{h})=&{\frac {h_{k}^{2}}{\tau _{M}({\boldsymbol {u}}^{h})}},\end{aligned}}$ where $C_{r}=60\cdot 2^{r-2}$ is a constant depending on the polynomial degree $r$, $\sigma $ is a constant equal to the order of the backward differentiation formula (BDF) adopted as the temporal integration scheme, and $\Delta t$ is the time step.[11] The semi-discrete variational multiscale LES (VMS-LES) formulation of the incompressible Navier–Stokes equations reads:[11] given ${\boldsymbol {u}}_{0}$, $\forall t\in (0,T),\;{\text{find }}{\boldsymbol {U}}^{h}=\{{\boldsymbol {u}}^{h},p^{h}\}\in {\boldsymbol {\mathcal {V}}}_{g_{h}}{\text{ such that }}A({\boldsymbol {V}}^{h},{\boldsymbol {U}}^{h})=F({\boldsymbol {V}}^{h})\;\;\forall {\boldsymbol {V}}^{h}=\{{\boldsymbol {v}}^{h},q^{h}\}\in {\boldsymbol {\mathcal {V}}}_{0_{h}},$ where $A({\boldsymbol {V}}^{h},{\boldsymbol {U}}^{h})=A^{NS}({\boldsymbol {V}}^{h},{\boldsymbol {U}}^{h})+A^{VMS}({\boldsymbol {V}}^{h},{\boldsymbol {U}}^{h}),$ and $F({\boldsymbol {V}}^{h})=({\boldsymbol {v}}^{h},{\boldsymbol {f}})+({\boldsymbol {v}}^{h},{\boldsymbol {h}})_{\Gamma _{N}}.$ The forms $A^{NS}(\cdot ,\cdot )$ and $A^{VMS}(\cdot ,\cdot )$ are defined as:[11] ${\begin{aligned}A^{NS}({\boldsymbol {V}}^{h},{\boldsymbol {U}}^{h})=&{\bigg (}{\boldsymbol {v}}^{h},\rho {\dfrac {\partial {\boldsymbol {u}}^{h}}{\partial t}}{\bigg )}+a({\boldsymbol {v}}^{h},{\boldsymbol {u}}^{h})+c({\boldsymbol {v}}^{h},{\boldsymbol {u}}^{h},{\boldsymbol {u}}^{h})-b({\boldsymbol {v}}^{h},p^{h})+b({\boldsymbol {u}}^{h},q^{h}),\\A^{VMS}({\boldsymbol {V}}^{h},{\boldsymbol {U}}^{h})=&\underbrace {{\big (}\rho {\boldsymbol {u}}^{h}\cdot \nabla {\boldsymbol {v}}^{h}+\nabla q^{h},\tau _{M}({\boldsymbol {u}}^{h}){\boldsymbol {r}}_{M}({\boldsymbol {u}}^{h},p^{h}){\big )}} _{\text{SUPG}}-\underbrace {(\nabla \cdot {\boldsymbol {v}}^{h},\tau _{C}({\boldsymbol {u}}^{h}){\boldsymbol {r}}_{C}({\boldsymbol {u}}^{h}))+{\big (}\rho {\boldsymbol {u}}^{h}\cdot (\nabla {\boldsymbol {u}}^{h})^{T},\tau _{M}({\boldsymbol {u}}^{h}){\boldsymbol {r}}_{M}({\boldsymbol {u}}^{h},p^{h}){\big )}} _{\text{VMS}}-\underbrace {(\nabla {\boldsymbol {v}}^{h},\tau _{M}({\boldsymbol {u}}^{h}){\boldsymbol {r}}_{M}({\boldsymbol {u}}^{h},p^{h})\otimes \tau _{M}({\boldsymbol {u}}^{h}){\boldsymbol {r}}_{M}({\boldsymbol {u}}^{h},p^{h}))} _{\text{LES}}.\end{aligned}}$ 
From the expressions above, one can see that:[11] • the form $A^{NS}(\cdot ,\cdot )$ contains the standard terms of the Navier–Stokes equations in variational formulation; • the form $A^{VMS}(\cdot ,\cdot )$ contains four terms: 1. the first term is the classical SUPG stabilization term; 2. the second term represents a stabilization term additional to the SUPG one; 3. the third term is a stabilization term typical of the VMS modeling; 4. the fourth term is peculiar to the LES modeling, describing the Reynolds stress of the unresolved scales. See also • Navier–Stokes equations • Large eddy simulation • Finite element method • Backward differentiation formula • Computational fluid dynamics • Streamline upwind Petrov–Galerkin pressure-stabilizing Petrov–Galerkin formulation for incompressible Navier–Stokes equations References 1. Hughes, T.J.R.; Scovazzi, G.; Franca, L.P. (2004). "Chapter 2: Multiscale and Stabilized Methods". In Stein, Erwin; de Borst, René; Hughes, Thomas J.R. (eds.). Encyclopedia of Computational Mechanics. John Wiley & Sons. pp. 5–59. ISBN 0-470-84699-2. 2. Codina, R.; Badia, S.; Baiges, J.; Principe, J. (2017). "Chapter 2: Variational Multiscale Methods in Computational Fluid Dynamics". In Stein, Erwin; de Borst, René; Hughes, Thomas J.R. (eds.). Encyclopedia of Computational Mechanics Second Edition. John Wiley & Sons. pp. 1–28. ISBN 9781119003793. 3. Masud, Arif (April 2004). "Preface". Computer Methods in Applied Mechanics and Engineering. 193 (15–16): iii–iv. doi:10.1016/j.cma.2004.01.003. 4. Quarteroni, Alfio (2017-10-10). Numerical models for differential problems (Third ed.). Springer. ISBN 978-3-319-49316-9. 5. Brooks, Alexander N.; Hughes, Thomas J.R. (September 1982). "Streamline upwind/Petrov-Galerkin formulations for convection dominated flows with particular emphasis on the incompressible Navier–Stokes equations". Computer Methods in Applied Mechanics and Engineering. 32 (1–3): 199–259. Bibcode:1982CMAME..32..199B. doi:10.1016/0045-7825(82)90071-8. 6. Masud, Arif; Calderer, Ramon (3 February 2009). "A variational multiscale stabilized formulation for the incompressible Navier–Stokes equations". Computational Mechanics. 44 (2): 145–160. Bibcode:2009CompM..44..145M. doi:10.1007/s00466-008-0362-3. S2CID 7036642. 7. Hughes, Thomas J.R. (November 1995). "Multiscale phenomena: Green's functions, the Dirichlet-to-Neumann formulation, subgrid scale models, bubbles and the origins of stabilized methods". Computer Methods in Applied Mechanics and Engineering. 127 (1–4): 387–401. Bibcode:1995CMAME.127..387H. doi:10.1016/0045-7825(95)00844-9. 8. Rasthofer, Ursula; Gravemeier, Volker (27 February 2017). "Recent Developments in Variational Multiscale Methods for Large-Eddy Simulation of Turbulent Flow". Archives of Computational Methods in Engineering. 25 (3): 647–690. doi:10.1007/s11831-017-9209-4. S2CID 29169067. 9. Hughes, Thomas J.R.; Mazzei, Luca; Jansen, Kenneth E. (May 2000). "Large Eddy Simulation and the variational multiscale method". Computing and Visualization in Science. 3 (1–2): 47–59. doi:10.1007/s007910050051. 
S2CID 120207183. 10. Bazilevs, Y.; Calo, V.M.; Cottrell, J.A.; Hughes, T.J.R.; Reali, A.; Scovazzi, G. (December 2007). "Variational multiscale residual-based turbulence modeling for large eddy simulation of incompressible flows". Computer Methods in Applied Mechanics and Engineering. 197 (1–4): 173–201. Bibcode:2007CMAME.197..173B. doi:10.1016/j.cma.2007.07.016. 11. Forti, Davide; Dedè, Luca (August 2015). "Semi-implicit BDF time discretization of the Navier–Stokes equations with VMS-LES modeling in a High Performance Computing framework". Computers & Fluids. 117: 168–182. doi:10.1016/j.compfluid.2015.05.011.
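As a final, concrete illustration of the VMS-LES formulation above, the stabilization parameters $\tau _{M}$ and $\tau _{C}$ are purely algebraic elementwise expressions and are straightforward to evaluate. The following sketch simply transcribes the formulas given in the article; the numerical values passed in are placeholders chosen only for illustration and do not come from the cited references.

```python
def tau_M(u_norm, h_k, dt, rho, mu, r=1, sigma=1):
    """Momentum stabilization parameter tau_M as defined above."""
    C_r = 60.0 * 2.0 ** (r - 2)
    return (sigma**2 * rho**2 / dt**2
            + rho**2 * u_norm**2 / h_k**2
            + mu**2 * C_r / h_k**4) ** (-0.5)

def tau_C(u_norm, h_k, dt, rho, mu, r=1, sigma=1):
    """Continuity stabilization parameter tau_C = h_k^2 / tau_M."""
    return h_k**2 / tau_M(u_norm, h_k, dt, rho, mu, r, sigma)

# placeholder values (illustrative only): a water-like fluid on a coarse mesh
print(tau_M(u_norm=1.0, h_k=0.01, dt=1e-3, rho=1000.0, mu=1e-3),
      tau_C(u_norm=1.0, h_k=0.01, dt=1e-3, rho=1000.0, mu=1e-3))
```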
Variational principle In science and especially in mathematical studies, a variational principle is one that enables a problem to be solved using calculus of variations, which concerns finding functions that optimize the values of quantities that depend on those functions. For example, the problem of determining the shape of a hanging chain suspended at both ends—a catenary—can be solved using variational calculus, and in this case, the variational principle is the following: The solution is a function that minimizes the gravitational potential energy of the chain. Overview Any physical law which can be expressed as a variational principle describes a self-adjoint operator.[1] These expressions are also called Hermitian. Such an expression describes an invariant under a Hermitian transformation. History Felix Klein's Erlangen program attempted to identify such invariants under a group of transformations. In what is referred to in physics as Noether's theorem, the Poincaré group of transformations (what is now called a gauge group) for general relativity defines symmetries under a group of transformations which depend on a variational principle, or action principle. Examples In mathematics • The Rayleigh–Ritz method for solving boundary-value problems approximately • Ekeland's variational principle in mathematical optimization • The finite element method • The variational principle relating topological entropy and Kolmogorov–Sinai entropy. 
In physics • Fermat's principle in geometrical optics • Maupertuis' principle in classical mechanics • The principle of least action in mechanics, electromagnetic theory, and quantum mechanics • The variational method in quantum mechanics • Gauss's principle of least constraint and Hertz's principle of least curvature • Hilbert's action principle in general relativity, leading to the Einstein field equations. • Palatini variation • Gibbons–Hawking–York boundary term References 1. Lanczos, Cornelius (1974) [1st published 1970, University of Toronto Press]. The Variational Principles of Mechanics (4th, paperback ed.). Dover. p. 351. ISBN 0-8020-1743-6. External links • The Feynman Lectures on Physics Vol. II Ch. 19: The Principle of Least Action • Ekeland, Ivar (1979). "Nonconvex minimization problems". Bulletin of the American Mathematical Society. New Series. 1 (3): 443–474. doi:10.1090/S0273-0979-1979-14595-6. MR 0526967. • S T Epstein 1974 "The Variation Method in Quantum Chemistry". (New York: Academic) • C Lanczos, The Variational Principles of Mechanics (Dover Publications) • R K Nesbet 2003 "Variational Principles and Methods In Theoretical Physics and Chemistry". (New York: Cambridge U.P.) • S K Adhikari 1998 "Variational Principles for the Numerical Solution of Scattering Problems". (New York: Wiley) • C G Gray, G Karl and V A Novikov 1996, Ann. Phys. 251 1. • C.G. Gray, G. Karl, and V. A. Novikov, "Progress in Classical and Quantum Variational Principles". 11 December 2003. physics/0312071 Classical Physics. • Griffiths, David J. (2004). Introduction to Quantum Mechanics (2nd ed.). Prentice Hall. ISBN 0-13-805326-X. • John Venables, "The Variational Principle and some applications". Dept of Physics and Astronomy, Arizona State University, Tempe, Arizona (Graduate Course: Quantum Physics) • Andrew James Williamson, "The Variational Principle -- Quantum Monte Carlo calculations of electronic excitations". Robinson College, Cambridge, Theory of Condensed Matter Group, Cavendish Laboratory. September 1996. (Doctor of Philosophy dissertation) • Kiyohisa Tokunaga, "Variational Principle for Electromagnetic Field". Total Integral for Electromagnetic Canonical Action, Part Two, Relativistic Canonical Theory of Electromagnetics, Chapter VI • Komkov, Vadim (1986) Variational principles of continuum mechanics with engineering applications. Vol. 1. Critical points theory. Mathematics and its Applications, 24. D. Reidel Publishing Co., Dordrecht. • Cassel, Kevin W.: Variational Methods with Applications in Science and Engineering, Cambridge University Press, 2013.
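The catenary example from the lead of this article can also be made concrete numerically: discretizing the chain into rigid links and minimizing the discrete gravitational potential energy under the endpoint constraints reproduces the catenary shape. The sketch below is only an illustration of that principle; the number of links, the span, the initial guess and the solver settings are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Hanging chain of unit length, modeled as N rigid links of equal length;
# minimize the total potential energy subject to fixed endpoints.
N, span = 40, 0.8                  # number of links, horizontal span (< 1)
ell = 1.0 / N                      # link length

def positions(theta):
    """Node coordinates of the chain for given link angles."""
    x = np.concatenate(([0.0], np.cumsum(ell * np.cos(theta))))
    y = np.concatenate(([0.0], np.cumsum(ell * np.sin(theta))))
    return x, y

def potential_energy(theta):
    """Sum over links of (link mass) * g * (midpoint height), with m*g = 1."""
    _, y = positions(theta)
    return np.sum(0.5 * (y[:-1] + y[1:])) * ell

constraints = [
    {"type": "eq", "fun": lambda th: positions(th)[0][-1] - span},  # x endpoint
    {"type": "eq", "fun": lambda th: positions(th)[1][-1]},         # y endpoint
]
theta0 = np.linspace(-0.5, 0.5, N)          # a shallow, symmetric initial sag
res = minimize(potential_energy, theta0, constraints=constraints,
               method="SLSQP", options={"maxiter": 500})
x, y = positions(res.x)   # y now traces (approximately) a catenary
```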
Variational vector field In the mathematical fields of the calculus of variations and differential geometry, the variational vector field is a certain type of vector field defined on the tangent bundle of a differentiable manifold which gives rise to variations along a vector field in the manifold itself. Specifically, let X be a vector field on a differentiable manifold M. Then X generates a one-parameter group of local diffeomorphisms $\mathrm {Fl} _{X}^{t}$, the flow along X. The differential of $\mathrm {Fl} _{X}^{t}$ gives, for each t, a mapping $d\mathrm {Fl} _{X}^{t}:TM\to TM$ where TM denotes the tangent bundle of M. This is a one-parameter group of local diffeomorphisms of the tangent bundle. The variational vector field of X, denoted by T(X), is the tangent to the flow $d\mathrm {Fl} _{X}^{t}$. References • Shlomo Sternberg (1964). Lectures on differential geometry. Prentice-Hall. p. 96.
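In coordinates, the flow of the variational vector field can be computed by integrating the flow of X together with its linearization: the pair $({\dot {x}},{\dot {v}})=(X(x),DX(x)\,v)$ is the coordinate expression of T(X) on the tangent bundle, and integrating it realizes $d\mathrm {Fl} _{X}^{t}$ acting on a tangent vector. The sketch below works on $\mathbb {R} ^{2}$ in a single chart; the sample vector field, the explicit Euler integrator and the step count are illustrative assumptions, not part of the definition above.

```python
import numpy as np

def X(x):                          # a sample vector field on R^2 (pendulum-like)
    return np.array([x[1], -np.sin(x[0])])

def DX(x):                         # its Jacobian, used in the linearized flow
    return np.array([[0.0, 1.0], [-np.cos(x[0]), 0.0]])

def flow_and_differential(x0, v0, t, n_steps=1000):
    """Return (Fl_X^t(x0), d Fl_X^t(x0) v0) by integrating the coordinate
    expression (X(x), DX(x) v) of the variational vector field T(X)."""
    dt = t / n_steps
    x, v = np.array(x0, float), np.array(v0, float)
    for _ in range(n_steps):
        x, v = x + dt * X(x), v + dt * DX(x) @ v   # simultaneous Euler update
    return x, v

x_t, v_t = flow_and_differential(x0=[1.0, 0.0], v0=[1.0, 0.0], t=2.0)
# (x_t, v_t) is a point of TM ~ R^2 x R^2: the image of the tangent vector
# (x0, v0) under the one-parameter group d Fl_X^t on the tangent bundle.
```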